2303.15208
Revisiting $f(R,T)$ cosmologies
We review the status of $f(R,T)$ cosmological models, where $T$ is the trace of the energy-momentum tensor $T^{\mu\nu}$. We start focusing on the modified Friedmann equations for the minimally coupled gravitational Lagrangian of the type $f(R,T)=R +\alpha e^{\beta T} + \gamma_{n} T^{n}$. We show that in such a minimally coupled case there exists a useful constraining relation between the effective fractional total matter density, with an arbitrary equation-of-state parameter, and the modified gravity parameters. With this association the modified gravity sector can be independently constrained using estimations of the gas mass fraction in galaxy clusters. Using cosmological background cosmic chronometer data and demanding that the universe is old enough to accommodate the existence of Galactic globular clusters with ages of at least $\sim 14$ Gyrs, we find a narrow range of the modified gravity free parameter space in which this class of theories remains viable for the late-time cosmological evolution. This preferred parameter space region accommodates the $\Lambda$CDM limit of $f(R,T)$ models. We also work out the non-minimally coupled case in the metric-affine formalism and find that there are no viable cosmologies in the latter situation. However, when analysing the cosmological dynamics including a radiation component, we find that this energy density interacts with the matter field and does not scale according to the typical behavior. We conclude by stating that $f(R,T)$ gravity is not able to provide a full cosmological scenario and should be ruled out as a modified gravity alternative to the dark energy phenomenon.
Ana Paula Jeakel, Jonas Pinheiro da Silva, Hermano Velten
2023-03-27T13:45:55Z
http://arxiv.org/abs/2303.15208v3
# Revisiting \(f(R,T)\) cosmologies ###### Abstract We review the status of \(f(R,T)\) theories, where \(T\) is the trace of the energy-momentum tensor \(T^{\mu\nu}\), concerning the evolution of the cosmological flat Friedmann-Lemaître-Robertson-Walker (FLRW) background expansion. We start focusing on the modified Friedmann equations for the case of a minimally coupled gravitational Lagrangian of the type \(f(R,T)=R+\alpha e^{\beta T}+\gamma_{n}T^{n}\). With this choice one is allowed to cover all existing proposals in the literature via four free parameters, and all relevant \(f(R,T)\) models as well as the \(\Lambda\)CDM model can be achieved in the appropriate limit. We show that in such a minimally coupled case there exists a useful constraining relation between the effective fractional total matter density, with an arbitrary equation-of-state parameter, and the modified gravity parameters. Then, with this association the modified gravity sector can be independently constrained using estimations of the gas mass fraction in galaxy clusters. Using cosmological background data and demanding that the universe is old enough to accommodate the existence of Galactic globular clusters with estimated ages of at least \(\sim 13\) Gyrs, we find a narrow range of the modified gravity free parameter space in which this class of theories remains cosmologically viable. As expected, this preferred parameter space region accommodates the \(\Lambda\)CDM limit of \(f(R,T)\) models. We also work out the non-minimally coupled case in the metric-affine formalism and find that there are no viable cosmologies in the latter situation. ## I Introduction Dark matter (DM) and dark energy (DE) compose the so-called dark sector of the universe and represent intriguing elements of modern cosmology. Whereas the former is responsible for many unexpected astrophysical observations, e.g., the flatness of galaxy rotation curves, and also plays a crucial role in cosmological large-scale structure formation, the latter is evoked to deal with the current accelerated phase of the background expansion rate, first revealed by type Ia supernovae observations. Alternatively, the need to include both components in the standard cosmological model can be understood as a sign of the inability of General Relativity to properly describe the gravitational interaction at scales beyond the Galactic one. This has motivated the rise of a new research route in which one searches for extensions/modifications of the Einstein-Hilbert Lagrangian. There are many distinct ways to go beyond General Relativity; see Ref. [1] for a review. Apart from adding new fields, departing from Riemannian geometries or adopting quantum arguments, perhaps the most natural way to modify gravity is to add invariants to the Einstein-Hilbert Lagrangian, giving rise to higher-order theories. The widely known prototype within this category is the set of \(f(R)\) theories [2]. In the latter, the Einstein-Hilbert Lagrangian term \(f_{\text{EH}}(R)=R\), where \(R=g_{\mu\nu}R^{\mu\nu}\) is the Ricci scalar, \(g_{\mu\nu}\) is the metric and \(R^{\mu\nu}\) is the Ricci tensor, is replaced by a more general algebraic combination of \(R\). Going beyond \(f(R)\) theories, one can keep adding geometric invariants to the gravitational Lagrangian or, for instance, implement a non-minimal coupling between geometry and matter fields. 
Within the latter strategy, two classes of theories have appeared recently: the \(f(R,L_{m})\) gravity [3], where \(L_{m}\) is the matter Lagrangian, and the \(f(R,T)\) gravity [4], where \(T=g_{\mu\nu}T^{\mu\nu}\) is the trace of the energy-momentum tensor. In this work we will study \(f(R,T)\) theories as an alternative to the dark energy phenomenon, with focus on their cosmological background expansion. Several \(f(R,T)\) solutions for the cosmological expanding background have been found in the literature and some confrontation with data has been performed [5; 6; 7; 8]. Most of the \(f(R,T)\) models are capable of inducing a late-time accelerated expansion rate, providing negative values for today's deceleration parameter \(q_{0}\). However, in light of available modern cosmological data, a truly viable model should satisfy several other requirements. Ref. [5], by one of the authors, has challenged some of the available \(f(R,T)\) models by arguing that, though the low-z evolution of \(f(R,T)\) models can be reasonably supported by available data, there is a considerable discrepancy in the high-z (\(z>1\)) dynamics in comparison with the standard \(\Lambda\)CDM cosmology. That reference then concludes that the viability of \(f(R,T)\) cosmological models is severely challenged. Now, in this work, by considering a broad class of \(f(R,T)\) cosmologies and using additional information about the age of the universe and the existing bounds on the gas mass fraction in galaxy clusters, we will revisit this issue. We shall consider that viable modified gravity based cosmologies \(i)\) are able to reproduce quantitatively low-redshift data as, for example, \(H(z)\) data, \(ii)\) yield a minimum value for the universe's age consistent with the oldest astrophysical objects found so far, and \(iii)\) have their modified gravity free parameters constrained such that the effective fractional matter density parameter is consistent with available gas mass fraction data in galaxy clusters. Requirements \(i\) and \(ii\) are the new aspects considered in this work in comparison with the analysis done in Ref. [5]. The age argument is motivated by available age estimations of globular clusters in our Galaxy. Such estimations set a conservative lower bound of \(t_{U}\gtrsim 14.16\) Gyrs for the age of the universe [9]. Also, estimations of the gas mass fraction within galaxy clusters obtained in [10] place bounds on the cosmological fractional total matter density parameter \(\Omega_{0}\). In the next section we review the cosmological background dynamics of \(f(R,T)\) theories. The observational analysis is performed in Section III. The non-minimally coupled case is studied in Section IV. We conclude in the final section. ## II Cosmological background expansion in \(f(R,T)\) theories The total action \(S\) for the \(f(R,T)\) theories reads \[S=\frac{1}{2\kappa^{2}}\int\sqrt{-g}\,d^{4}x\,f(R,T)+\int\sqrt{-g}\,d^{4}x\,L_ {m}(g_{\mu\nu},\psi_{m}), \tag{1}\] where \(\kappa^{2}=8\pi G\) is the coupling constant, \(g\) is the metric determinant and \(L_{m}\) is the Lagrangian for the matter sector, gathering the contribution of all matter fields \(\psi_{m}\). By applying the variational principle to the above action one finds \[f_{R}R_{\mu\nu}-\tfrac{f(R,T)}{2}g_{\mu\nu}-\Delta_{\mu\nu}f_{R}=\kappa^{2}T_{ \mu\nu}\left(1-\frac{f_{T}}{\kappa^{2}}\right)-f_{T}\Theta_{\mu\nu}. \tag{2}\]
In the equation above, we have used the notation \[f_{R}\equiv\frac{\partial f(R,T)}{\partial R}\quad\text{and}\quad f_{T}\equiv \frac{\partial f(R,T)}{\partial T}. \tag{3}\] It is worth noting that the variation of the Ricci tensor has an explicit dependence on the metric, \[g^{\mu\nu}\delta R_{\mu\nu}=-\Delta_{\mu\nu}\delta g^{\mu\nu}, \tag{4}\] where the d'Alembertian is \(\Box=g^{\beta\alpha}\nabla_{\beta}\nabla_{\alpha}\) and \(\Delta_{\beta\alpha}=\nabla_{\beta}\nabla_{\alpha}-g_{\beta\alpha}\Box\). In order to characterise the matter sector, the energy-momentum tensor is defined as usual by \[T_{\mu\nu}=-\frac{2}{\sqrt{-g}}\frac{\delta\left(\sqrt{-g}L_{m}\right)}{ \delta g^{\mu\nu}}. \tag{5}\] Then, the variation of its trace can be written as \[\delta T=\left(\Theta_{\mu\nu}+T_{\mu\nu}\right)\delta g^{\mu\nu}, \tag{6}\] where the auxiliary quantity \(\Theta_{\mu\nu}\) appearing in (2) has been defined as \[\Theta_{\mu\nu}\equiv g^{\alpha\beta}\frac{\delta T_{\alpha\beta}}{\delta g^{ \mu\nu}}=-2T_{\mu\nu}+g_{\mu\nu}L_{m}-2g^{\alpha\beta}\frac{\partial^{2}L_{m} }{\partial g^{\alpha\beta}\partial g^{\mu\nu}}. \tag{7}\] By adopting the perfect fluid structure for the energy-momentum tensor one has \[T_{\mu\nu}=\left(\rho+p\right)u_{\mu}u_{\nu}-pg_{\mu\nu}, \tag{8}\] with the four-velocity in comoving coordinates \(u_{\nu}=(1,0,0,0)\) and \(\rho\) and \(p\) being the energy density and pressure, respectively. Let us then apply this set of equations to a flat, homogeneous, isotropic and expanding spacetime given by the Friedmann-Lemaître-Robertson-Walker (FLRW) metric \[ds^{2}=c^{2}dt^{2}-a(t)^{2}\left(dr^{2}+r^{2}d\theta^{2}+r^{2}\sin^{2}\theta d \phi^{2}\right), \tag{9}\] where \(a(t)\) is the cosmological scale factor. The \(0-0\) component of the \(f(R,T)\) field equations (2) provides an expression for the expansion rate \(H=\dot{a}/a\), where the dot means a derivative with respect to the cosmic time. Since the Ricci scalar reads \(R=-6(\ddot{a}/a+H^{2})\), the modified Friedmann equation in \(f(R,T)\) cosmology becomes \[H^{2} =\frac{\kappa^{2}}{3}\left\{\rho-\frac{\rho(1+f_{R})}{f_{R}}- \frac{\left[f_{T}\rho(1+\omega)+f(R,T)/2-3H\dot{f}_{R}+3\dot{H}f_{R}\right]}{ \kappa^{2}f_{R}}\right\}. \tag{10}\] The above equation has the appropriate GR limit for \(f(R,T)=R,f_{R}=1,f_{T}=0\) and also the \(f(R)\) gravity limit for \(f_{T}=0\). Also, \(\omega\) is the equation-of-state parameter relating the energy density \(\rho\) to the pressure by \(\omega=p/\rho\). The expansion rate can then be rewritten in a compact form as \[3H^{2}=\kappa^{2}\left(\rho+\bar{\rho}\right), \tag{11}\] where \[\bar{\rho}=-\frac{1}{\kappa^{2}f_{R}}\left[\kappa^{2}\rho(1+f_{R})+f_{T}\rho( 1+\omega)+\frac{f(R,T)}{2}-3H\dot{f}_{R}+3\dot{H}f_{R}\right]. \tag{12}\] The quantity \(\rho\) should be interpreted as the sum of all matter fields composing the total energy-momentum tensor of the theory. In standard cosmology it can be approximated by the sum of radiation, matter (dark + baryonic) and a dark energy component. The modified gravity contribution to the expansion rate can be collected in the geometrical effective energy density \(\bar{\rho}\). This can be associated with the dark energy sector, but here it is written in terms of geometrical quantities and \(\rho\) as well. If the modified gravity sector is responsible for the late-time accelerated phase, then \(\rho\) can be approximated, at late times, by the total matter. 
This is the interpretation we adopt in this work. The complete description of the cosmological background expansion demands the second Friedmann equation, obtained from the spatial components of (2). It reads \[(\dot{H}+3H^{2})f_{R}+\frac{f(R,T)}{2}-2H\dot{f}_{R}-\ddot{f}_{R}=\kappa^{2}p. \tag{13}\] It is worth noting that \(f(R,T)\) theories are non-conservative, since they present a non-vanishing covariant derivative of the energy-momentum tensor, as given by the expression \[\dot{\rho}+3H\rho(1+\omega)=-\frac{1}{\kappa^{2}+f_{T}}\left[\dot{f}_{T}\rho( 1+\omega)+f_{T}\dot{\rho}\omega+\frac{\dot{f}(T)}{2}\right]. \tag{14}\] The above equations apply to any \(f(R,T)\) model. Only in a few cases does the chosen \(f(R,T)\) function lead to a vanishing right-hand side of (14). Apart from such specific cases, \(f(R,T)\) cosmological models are non-conservative and the effective matter density will no longer scale as \(\rho\sim a^{-3}\). For a complete discussion on the issue of conservation of the energy-momentum tensor in \(f(R,T)\) theories see [12]. In order to go further one has to specify the functional form of \(f(R,T)\). The simplest assumption is the minimally coupled case, in which the contributions from \(R\) and the trace \(T\) are written separately as \[f(R,T)=f_{1}(R)+f_{2}(T). \tag{15}\] Keeping this format, the most general function covering the main proposals in the literature can be written as \[f(R,T)=R+\alpha e^{\beta T}+\gamma_{n}T^{n}. \tag{16}\] This model has four free parameters: \(\alpha,\beta,\gamma_{n}\) and \(n\). All power-law models proposed in the literature are reached with \(\alpha=0\). Also, the recently proposed exponential model (see Ref. [11]) is equivalent to \(\gamma_{n}=0\). General Relativity with no cosmological constant (the Einstein-de Sitter universe) corresponds to \(\alpha=\gamma_{n}=0\). For \(\beta=0\) and \(n=0\) the \(\Lambda\)CDM model is recovered. It is convenient to rewrite the background equations replacing \(\rho\) by the fractional density \(\Omega=\rho/\rho_{0}\), where \(\rho_{0}=3H_{0}^{2}/\kappa^{2}\) is today's critical density. Then, according to (16), the FLRW expansion rate in \(f(R,T)\) theories reads \[\frac{H^{2}}{H_{0}^{2}}=\Omega+\bar{\alpha}e^{\bar{\beta}\Omega(1-3\omega)} \left[\bar{\beta}\Omega(1+\omega)+\frac{1}{2}\right]+\bar{\gamma}_{n}\left[n(1+ \omega)(1-3\omega)^{n-1}+\frac{1}{2}(1-3\omega)^{n}\right]\Omega^{n}. \tag{17}\] In the above expression we have rewritten the modified gravity free parameters in a dimensionless form according to \[\bar{\alpha}=\frac{\alpha}{\kappa^{2}\rho_{0}};\ \ \ \ \bar{\beta}=\beta\rho_{0};\ \ \ \ \bar{\gamma}_{n}=\frac{\gamma_{n}\rho_{0}^{n-1}}{\kappa^{2}}. \tag{18}\] The cosmological dynamics will be obtained as a function of the fractional density parameter \(\Omega\). This quantity is obtained by rewriting (14) in terms of the dimensionless quantities defined above, such that \[\dot{\Omega}+3H\Omega(1+\omega) = -\frac{\dot{\Omega}}{1+\bar{\alpha}e^{\bar{\beta}\Omega(1-3\omega)} \bar{\beta}+\bar{\gamma}_{n}n\Omega^{n-1}(1-3\omega)^{n-1}}\times\left\{\bar{ \alpha}e^{\bar{\beta}\Omega(1-3\omega)}\bar{\beta}\left[\bar{\beta}\Omega(1+ \omega)+\omega+\frac{1}{2}\right]\right. \tag{19}\] \[+ \left.\bar{\gamma}_{n}n\Omega^{n-1}(1-3\omega)^{n-1}\left[\frac{2 n(1+\omega)-(1+3\omega)}{2}\right]\right\}.\] The numerical solution of equation (19) will allow us to analyze the background expansion in \(f(R,T)\) theories. 
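Before solving (19), it is instructive to check the \(\Lambda\)CDM limit of (17) explicitly (a short worked limit we add here for clarity, using only the definitions above). Setting \(\bar{\beta}=0\) and \(n=0\), the exponential and power-law contributions in (17) collapse to constants for any \(\omega\), \[\frac{H^{2}}{H_{0}^{2}}=\Omega+\frac{\bar{\alpha}+\bar{\gamma}_{0}}{2},\] so the combination \((\bar{\alpha}+\bar{\gamma}_{0})/2\) plays the role of an effective \(\Omega_{\Lambda}\). This is why, as noted in Section III, the observationally preferred values cluster around \(\bar{\alpha}\sim\bar{\gamma}_{n}\sim 2\Omega_{\Lambda}\sim 1.5\).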
The first step in solving (19) is to set today's value \(\Omega(z=0)\equiv\Omega_{0}\) as the initial condition for this differential equation. Once more, this quantity is interpreted as the total (baryonic + dark) matter fraction. It is not a free parameter, since it is subject to the constraining relation \[1=\Omega_{0}+\bar{\alpha}e^{\bar{\beta}\Omega_{0}(1-3\omega)}\left[\bar{\beta }\Omega_{0}(1+\omega)+\frac{1}{2}\right]+\bar{\gamma}_{n}\left[n(1+\omega)(1- 3\omega)^{n-1}+\frac{1}{2}(1-3\omega)^{n}\right]\Omega_{0}^{n}. \tag{20}\] This relation follows from (17) by setting \(H(z=0)=H_{0}\). Therefore, today's effective fractional matter parameter \(\Omega_{0}\) cannot be arbitrarily chosen. This is a very important aspect we want to highlight since, with the exception of Ref. [5], it is not usually considered in previous analyses of the background expansion in \(f(R,T)\) theories. This is possible because the adopted \(f(R,T)\) function is minimally coupled, which means that \(H^{2}\) does not depend on \(\dot{H}\). As discussed further below, such a constraining relation cannot be obtained in the non-minimally coupled case, nor in \(f(R)\) theories: in both cases, using the metric formalism, \(H^{2}\) depends on \(\dot{H}\) and a constraining relation like (20) does not exist. By switching off the modified gravity contributions with \(\bar{\alpha}=\bar{\gamma}_{n}=0\) one recovers the Einstein-de Sitter model \(\Omega_{0}=1\). For non-vanishing \(\bar{\alpha}\) and \(\bar{\gamma}_{n}\) values, and demanding \(0<\Omega_{0}<1\), it is possible to set bounds on the possible values of the modified gravity parameters. ## III Observational constraints on the expansion rate Let us now confront the background expansion (17), sourced by the numerical solution of equation (19) subject to the constraint (20), against available observational data. Our analysis will be similar to Ref. [5], but now adding Galactic globular cluster age constraints and galaxy cluster gas mass fraction bounds on the model free parameters. Anticipating one of our results, such new information will be very important to revisit the main conclusion of Ref. [5]. We will consider two different \(f(R,T)\) models: * \(f(R,T)=R+\gamma_{n}T^{n}\); * \(f(R,T)=R+\alpha e^{\beta T}\). Each model has three free parameters, one more than the flat \(\Lambda\)CDM model. The background expansion of the latter is described in terms of \(H_{0}\) and \(\Omega_{0}\), with cosmological constant fractional density \(\Omega_{\Lambda}=1-\Omega_{0}\). In the modified gravity scenarios studied here, the quantity \(\Omega_{0}\) is replaced by a combination of \(\alpha\left(\gamma_{n}\right)\) and \(\beta\left(n\right)\) according to (20). Our goal is to find a concordance region in the free parameter space of each model. For this task we shall use three different sets of observational information. _Age of the universe:_ For a given expansion rate \(H\), the age of the universe \(t_{U}\) is calculated via the integral \[t_{U}=\int_{0}^{1}\frac{d\tilde{a}}{\tilde{a}H(\tilde{a})}, \tag{21}\] where today's scale factor has been set to \(a_{0}=1\). Age constraints can be used as a simple tool to discriminate between viable and non-viable cosmologies. In this work we will adopt the minimum and obvious requirement that \(t_{U}\) cannot be smaller than the estimated age of astrophysical objects. Of course, the universe cannot be younger than the structures it contains. 
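To make the procedure concrete, the following is a minimal numerical sketch (our own illustration, not the authors' code; all function and variable names are ours) for the power-law model \(f(R,T)=R+\gamma_{n}T^{n}\) with pressureless matter (\(\omega=0\)): it solves the constraint (20) for \(\Omega_{0}\), integrates (19) in e-folds \(N=\ln a\), and evaluates the age integral (21).

```python
import numpy as np
from scipy.integrate import solve_ivp, quad
from scipy.optimize import brentq

# Sketch for f(R,T) = R + gamma_n * T^n with omega = 0 (alpha = 0).
H0 = 67.4 / 977.8          # Hubble constant in 1/Gyr (67.4 km/s/Mpc)

def E2(Omega, gbar, n):
    """H^2/H0^2 from Eq. (17) with omega = 0 and alpha = 0."""
    return Omega + gbar * (n + 0.5) * Omega**n

def Omega0(gbar, n):
    """Constraint Eq. (20): 1 = Omega0 + gbar*(n + 1/2)*Omega0^n."""
    return brentq(lambda x: E2(x, gbar, n) - 1.0, 1e-6, 1.0)

def dOmega_dN(N, Omega, gbar, n):
    """Eq. (19) with omega = 0, rewritten in e-folds N = ln a."""
    A = gbar * n * Omega**(n - 1.0)
    return -3.0 * Omega / (1.0 + A * (2.0 * n - 1.0) / (2.0 * (1.0 + A)))

def age(gbar, n, z_ini=1e4):
    """Age of the universe, Eq. (21), integrating back to z_ini."""
    N_ini = -np.log(1.0 + z_ini)
    sol = solve_ivp(dOmega_dN, [0.0, N_ini], [Omega0(gbar, n)],
                    args=(gbar, n), dense_output=True, rtol=1e-8)
    E = lambda a: np.sqrt(E2(sol.sol(np.log(a))[0], gbar, n))
    t, _ = quad(lambda a: 1.0 / (a * H0 * E(a)), np.exp(N_ini), 1.0)
    return t  # in Gyr

# The LCDM corner of parameter space (n = 0, gbar ~ 2*Omega_Lambda):
print(Omega0(1.4, 0.0))    # ~0.30, inside the bounds (23)
print(age(1.4, 0.0))       # ~13.9 Gyr for this LCDM-like point
```

Radiation is neglected in this sketch, consistent with the matter-only, late-time treatment of this section; for \(n=0\) the code reproduces the flat \(\Lambda\)CDM background, as expected from the limit shown above.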
Recent age estimations of Galactic globular clusters have placed the bounds [9; 13] \[t_{glob}=13.5^{+0.16}_{-0.14}\,(stat.)\pm 0.5\,(sys.)\ \text{Gyrs}. \tag{22}\] We will use the above bounds to exclude modified gravity parameters yielding universes that are too young. _Gas fraction in galaxy clusters:_ _Chandra_ X-ray measurements of galaxy clusters are a powerful tool to constrain the temperature, gas density and mass profiles of galaxy clusters. Such quantities are sensitive to the gas mass fraction in these systems and can be linked to the cosmological baryon to total matter ratio \(\Omega_{b0}/\Omega_{0}\) [10]. By relying on these bounds and associating the total matter with the parameter \(\Omega_{0}\) appearing in (20), we can indirectly constrain the modified gravity parameters. Then, from the results presented in Ref. [10] we shall demand \[0.23<\Omega_{0}<0.31. \tag{23}\] _Cosmic Chronometers:_ A widely used technique to measure the non-local expansion rate of the universe is to obtain the differential age of certain galaxies via the age of their stellar population. This method has allowed measurements of \(H(z)\) reaching up to redshifts around \(z\sim 2\). The method, proposed in [14], can be understood via the relation \[H(z)=-\frac{1}{1+z}\frac{dz}{dt}, \tag{24}\] providing \(H(z)\) at some redshift \(z\) from the differential cosmic age \(dt\) of objects within a certain differential redshift range \(dz\). We shall use in our analysis the data set available in Table 1 of Ref. [15] in order to calculate statistical confidence contours for the modified gravity background dynamics studied in the last section. We show in the left panel of Fig. 1 the free parameter space for the power-law model. In our analysis we fix \(H_{0}=67.4\) km s\({}^{-1}\) Mpc\({}^{-1}\) [16]. We have checked that varying \(H_{0}\) around this value has a mild impact on our final conclusions. In this left panel, dashed lines show age contours of 12.86 Gyrs and 14.16 Gyrs, red lines show \(\Omega_{0}\) contours obtained from (20) and fixed at the limits in (23), and blue regions display the \(1\sigma\), \(2\sigma\) and \(3\sigma\) statistical confidence level contours resulting from the likelihood function obtained from the \(H(z)\) data. To stay on the conservative side, let us consider that the universe should be older than 14.16 Gyrs. This excludes a large region of the parameter space. The crossing of all such information, i.e., a universe older than 14.16 Gyrs, the parameter \(\Omega_{0}\) within the bounds given by (23), and parameters inside at least the \(3\sigma\) region, provides a narrow accepted region of parameter space given by the darker blue region in this figure. For the concordance range of parameter values found in the darker blue region, one can also verify how the expansion rate transits from the decelerated phase to the accelerated one via the definition of the deceleration parameter \[q(z)=-1-\frac{\dot{H}}{H^{2}}. \tag{25}\] We then plot in the right panel of Fig. 1 the deceleration parameter \(q(z)\) as a function of the redshift \(z\). This figure shows a collection of tiny blue curves computed using allowed parameter values found in the concordance darker blue region of the left panel. Fig. 2 shows results for the exponential model using the same structure as described in Fig. 1. In the case where either \(n\) or \(\beta\) vanishes, \(\gamma_{n}\) and \(\alpha\) play the role of a cosmological constant in the gravitational action, respectively. 
One can then associate \(\bar{\gamma}_{n}\) and \(\bar{\alpha}\) values with twice the cosmological constant fractional parameter, i.e., one can expect preferred values around \(\bar{\gamma}_{n}\sim\bar{\alpha}\sim 2\Omega_{\Lambda}\sim 1.5\). In this limiting case the observationally allowed region in both figures agrees with this estimation. Then, as one can see in both figures, \(f(R,T)\) cosmologies have a viable parameter space. ## IV The non-minimally coupled case: \(f(R,T)=f_{1}(R)+f_{1}(R)f_{2}(T)\) The cosmological background evolution in non-minimally coupled cases of the form \[f(R,T)=f_{1}(R)+f_{1}(R)f_{2}(T), \tag{26}\] has been investigated in Refs. [17; 18]. As shown in these references, the resulting modified Friedmann equations are such that the squared expansion rate depends on its derivative. Therefore, a constraining relation like (20) cannot be imposed to find the \(\Omega_{0}\) value. In references [17; 18], rather than solving the background dynamics numerically, analytical solutions are found by imposing that the expansion rate has a power-law dependence on the cosmic time. Let us then try another approach for solving the background dynamics in non-minimally coupled \(f(R,T)\) models. We explore the dynamical equations in the metric-affine formalism, as first studied in Ref. [19]. In the metric-affine (or Palatini) formalism, the variation of the Ricci tensor is performed in terms of the connection, which means that the operator \(\Delta_{\mu\nu}\) in equation (2) does not appear [19]. With respect to the modified Friedmann equations, this implies that the first and second time derivatives, \(\dot{f}_{R}\) and \(\ddot{f}_{R}\), disappear. In order to provide an explicit example, let us now show the background dynamics of the non-minimally coupled model by considering \[f(R,T)=f_{1}(R)+f_{1}(R)f_{2}(T)=\epsilon R+\lambda_{m}RT^{m}. \tag{27}\] The parameter \(\epsilon\) allows one either to keep the Einstein-Hilbert term intact or to switch it off by setting \(\epsilon=0\). From the above one finds the following cosmological dynamical equations in the metric-affine formalism: \[3H^{2}\left[\epsilon+\lambda_{m}\rho^{m}(1-3\omega)^{m}+4\lambda_{m}m\rho^{m} (1-3\omega)^{m-1}(1+\omega)\right]=\kappa^{2}\rho-6\lambda_{m}m\rho^{m}(1-3 \omega)^{m-1}(1+\omega)\dot{H}, \tag{28}\] and \[-2\dot{H}\left[\epsilon+\lambda_{m}\rho^{m}(1-3\omega)^{m}\right]=\kappa^{2} \rho\omega+3H^{2}\left[\epsilon+\lambda_{m}\rho^{m}(1-3\omega)^{m}\right]. \tag{29}\] Both expressions (28) and (29) are different from previous results presented in the literature based on the metric formalism [17; 18]. Figure 1: Constraints on the free parameters of the power-law model \(f(R,T)=R+\gamma_{n}T^{n}\). In the left panel, the blue contours show \(1\sigma\), \(2\sigma\) and \(3\sigma\) regions of statistical confidence level. Dashed lines represent the parameter values for which the universe is \(12.86\) and \(14.16\) Gyrs old. Red lines are maximum and minimum bounds on \(\Omega_{0}\). The darker blue region represents the parameter space concordance region. In the right panel we plot the deceleration parameter \(q(z)\) as a function of the redshift for sets of \(\{\bar{\gamma},n\}\) values within the darker blue region of the left panel. By combining (28) and (29) we find the background expansion rate in non-minimally coupled 
\(f(R,T)\) models based on the metric-affine formalism, which obeys \[\frac{H^{2}}{H_{0}^{2}}=\left[\epsilon+\tilde{\lambda}_{m}\Omega^{m}(1-3\omega)^{ m}+\tilde{\lambda}_{m}m\Omega^{m}(1+\omega)(1-3\omega)^{m-1}\right]^{-1}\left\{ \Omega+\frac{6\tilde{\lambda}_{m}m\Omega(1+\omega)(1-3\omega)^{m-1}\omega \Omega^{m}}{2[\epsilon+\tilde{\lambda}_{m}\Omega^{m}(1-3\omega)^{m}]}\right\}. \tag{30}\] The dimensionless parameter appearing above has been defined as \[\tilde{\lambda}_{m}=\lambda_{m}\rho_{0}^{m}. \tag{31}\] It is worth noting that the limiting case \(m=0\) is not equivalent to the \(\Lambda\)CDM cosmology. Instead, this case amounts to a simple redefinition of the gravitational coupling \(\kappa^{2}\) by the constant \((\epsilon+\tilde{\lambda}_{0})\). Contrary to the metric formalism, the above equation for \(H\), obtained in the metric-affine approach, allows one to set a constraining relation between \(\Omega_{0}\) and the modified gravity parameters as in (20), i.e., \[1=\left[\epsilon+\tilde{\lambda}_{m}\Omega_{0}^{m}(1-3\omega)^{m}+\tilde{ \lambda}_{m}m\Omega_{0}^{m}(1+\omega)(1-3\omega)^{m-1}\right]^{-1}\left\{ \Omega_{0}+\frac{6\tilde{\lambda}_{m}m\Omega_{0}(1+\omega)(1-3\omega)^{m-1} \omega\Omega_{0}^{m}}{2[\epsilon+\tilde{\lambda}_{m}\Omega_{0}^{m}(1-3\omega) ^{m}]}\right\}. \tag{32}\] In order to obtain the dynamical evolution of the matter density parameter \(\Omega\), one solves its conservation law expressed by the equation \[\dot{\Omega}+3H\Omega(1+\omega)=\left\{1+\frac{\chi\omega}{\zeta\left[\epsilon +\tilde{\lambda}_{m}\Omega^{m}(1-3\omega)^{m}\right]}\right\}^{-1}\left\{(1+ \omega)\left(\frac{\dot{\chi}}{\chi}-\frac{\dot{\zeta}}{\zeta}\right)\Omega- \dot{\Omega}\omega+\frac{\dot{\Omega}\chi\omega}{\zeta\left[\epsilon+\tilde{ \lambda}_{m}\Omega^{m}(1-3\omega)^{m}\right]}\right\}, \tag{33}\] where we have defined \[\chi=\left[\epsilon+\tilde{\lambda}_{m}\Omega^{m}(1-3\omega)^{m}+\tilde{ \lambda}_{m}m\Omega^{m}(1-3\omega)^{m-1}(1+\omega)\right], \tag{34}\] and \[\zeta=1+\frac{3\tilde{\lambda}_{m}(1+\omega)(1-3\omega)^{m-1}\omega\Omega^{m} }{\epsilon+\tilde{\lambda}_{m}\Omega^{m}(1-3\omega)^{m}}. \tag{35}\] Figure 2: Constraints on the free parameters of the exponential model \(f(R,T)=R+\alpha e^{\beta T}\). In the left panel, the blue contours show \(1\sigma\), \(2\sigma\) and \(3\sigma\) regions of statistical confidence level. Dashed lines represent the parameter values for which the universe is 12.86 and 14.16 Gyrs old. Red lines are maximum and minimum bounds on \(\Omega_{0}\). The darker blue region represents the parameter space concordance region. In the right panel we plot the deceleration parameter \(q(z)\) as a function of the redshift for sets of \(\{\bar{\alpha},\bar{\beta}\}\) values within the darker blue region of the left panel. Now, by assuming a pressureless matter component, \(\omega=0\), the temporal derivatives of (34) and (35) reduce, respectively, to \(\dot{\chi}=\tilde{\lambda}_{m}m(m+1)\Omega^{m-1}\dot{\Omega}\) and \(\dot{\zeta}=0\). Thus, using these results, equation (33) becomes \[\dot{\Omega}+3H\Omega=\frac{\tilde{\lambda}_{m}m(m+1)\dot{\Omega}\Omega^{m}}{ \epsilon+\tilde{\lambda}_{m}\Omega^{m}(m+1)}. \tag{36}\] The limiting cases leading to conservative models are easily identified, i.e., either \(\tilde{\lambda}_{m}=0\) or \(m=0,-1\). Once again, using the same strategy as in the previous section, we can apply the bounds provided in (23) to the constraining relation (32) to find the allowed modified gravity parameter values. 
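As an illustration of this strategy (again a sketch of our own, not the authors' code, restricted to \(\omega=0\)), one can pick \(\Omega_{0}\) inside the bounds (23), invert (32) for \(\tilde{\lambda}_{m}\), integrate (36), and monitor the deceleration parameter \(q=-1-\dot{H}/H^{2}\) of (25):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch for the metric-affine model f(R,T) = eps*R + lam*R*T^m, omega = 0.
def E2(Omega, eps, lam, m):
    """H^2/H0^2 from Eq. (30) with omega = 0."""
    return Omega / (eps + lam * (m + 1.0) * Omega**m)

def lam_from_Om0(Om0, eps, m):
    """Invert the constraint Eq. (32): E2(Om0) = 1 fixes lam given Om0."""
    return (Om0 - eps) / ((m + 1.0) * Om0**m)

def dOmega_dN(N, Omega, eps, lam, m):
    """Eq. (36) rewritten in e-folds N = ln a."""
    S = eps + lam * (m + 1.0) * Omega**m
    return -3.0 * Omega * S / (S - lam * m * (m + 1.0) * Omega**m)

def q_of_z(Om0, eps, m, zmax=3.0, num=400):
    """Deceleration parameter q = -1 - dlnH/dN along the solution."""
    lam = lam_from_Om0(Om0, eps, m)
    Ns = np.linspace(0.0, -np.log(1.0 + zmax), num)
    sol = solve_ivp(dOmega_dN, [Ns[0], Ns[-1]], [Om0], t_eval=Ns,
                    args=(eps, lam, m), rtol=1e-10)
    lnE = 0.5 * np.log(E2(sol.y[0], eps, lam, m))
    return np.exp(-Ns) - 1.0, -1.0 - np.gradient(lnE, Ns)

# Omega0 = 0.27 sits inside the gas-fraction bounds (23); the outcome
# is q(z) ~ +1/2 at every redshift: Einstein-de Sitter-like expansion
# with no transition to an accelerated phase.
z, q = q_of_z(0.27, 1.0, 1.0)
print(q.min(), q.max())    # both ~ 0.5
```

In fact, for \(\omega=0\) a short calculation (ours) shows that combining (30) and (36) gives \(d\ln H^{2}/d\ln a=-3\) identically, independent of \(m\) and \(\tilde{\lambda}_{m}\), so the Einstein-de Sitter behavior reported below is exact rather than approximate.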
We then solve equation (36) numerically for the allowed modified gravity parameter values and find that all resulting cosmological dynamics are inconsistent with data. In particular, all cases show a pure Einstein-de Sitter-like evolution at all redshifts, with no transition to an accelerated epoch. ## V Conclusions Our goal in this work is to revisit the cosmological background expansion in \(f(R,T)\) theories of gravity, focusing on the minimally coupled model \(f(R,T)=f_{1}(R)+f_{2}(T)\). This case represents a particular situation within modified gravity theories in which the expansion rate \(H\) can be written explicitly in terms of the total matter density parameter \(\Omega\) as well as the modified gravity model parameters. This allows one to find the constraining relation (20), which is the most important relation in this work. The validity of (20) therefore allows one to place stringent bounds on the model's free parameters. By interpreting \(\Omega_{0}\) as the effective fractional matter density parameter, and demanding that it be bounded by the gas mass fraction estimations in galaxy clusters given by (23), one can directly constrain the modified gravity parameters appearing in (16). An additional requirement concerns the age of the universe: the modified gravity parameters should provide an age for the universe larger than the estimated age of Galactic globular clusters. Figs. 1 and 2 summarize our main findings. There is indeed a tiny region of modified gravity parameter space allowed by background cosmological data. This alleviates the conclusions of Ref. [5], which ruled out the power-law class of \(f(R,T)\) theories. As expected, the \(\Lambda\)CDM model limit (\(n=0\) or \(\beta=0\)) is within this allowed region. The constraining relation (20) does not exist in non-minimally coupled models using the metric formalism, since \(H^{2}\) depends on the derivative \(\dot{H}\). This is also the situation in \(f(R)\) theories. However, in the metric-affine approach the modified Friedmann equations lead to the constraining relation (32). The background dynamical evolution of the non-minimally coupled case is, on the other hand, not viable, since all modified gravity parameter values allowed by the gas fraction bounds are consistent with Einstein-de Sitter cosmologies at all redshifts, i.e., the universe does not transit to the accelerated phase supported by current cosmological observables. ###### Acknowledgements. The authors thank FAPEMIG/FAPES/CNPq and CAPES for financial support. We thank Rodrigo von Martens and Jailson Alcaniz for useful correspondence.
2301.03213
EgoTracks: A Long-term Egocentric Visual Object Tracking Dataset
Visual object tracking is a key component to many egocentric vision problems. However, the full spectrum of challenges of egocentric tracking faced by an embodied AI is underrepresented in many existing datasets; these tend to focus on relatively short, third-person videos. Egocentric video has several distinguishing characteristics from those commonly found in past datasets: frequent large camera motions and hand interactions with objects commonly lead to occlusions or objects exiting the frame, and object appearance can change rapidly due to widely different points of view, scale, or object states. Embodied tracking is also naturally long-term, and being able to consistently (re-)associate objects to their appearances and disappearances over as long as a lifetime is critical. Previous datasets under-emphasize this re-detection problem, and their "framed" nature has led to adoption of various spatiotemporal priors that we find do not necessarily generalize to egocentric video. We thus introduce EgoTracks, a new dataset for long-term egocentric visual object tracking. Sourced from the Ego4D dataset, this new dataset presents a significant challenge to recent state-of-the-art single-object tracking models, which we find score poorly on traditional tracking metrics for our new dataset, compared to popular benchmarks. We further show improvements that can be made to a STARK tracker to significantly increase its performance on egocentric data, resulting in a baseline model we call EgoSTARK. We publicly release our annotations and benchmark, hoping our dataset leads to further advancements in tracking.
Hao Tang, Kevin Liang, Matt Feiszli, Weiyao Wang
2023-01-09T09:10:35Z
http://arxiv.org/abs/2301.03213v5
# EgoTracks: A Long-term Egocentric Visual Object Tracking Dataset ###### Abstract Visual object tracking is a key component to many egocentric vision problems. However, the full spectrum of challenges of egocentric tracking faced by an embodied AI is underrepresented in many existing datasets; these tend to focus on relatively short, third-person videos. Egocentric video has several distinguishing characteristics from those commonly found in past datasets: frequent large camera motions and hand interactions with objects commonly lead to occlusions or objects exiting the frame, and object appearance can change rapidly due to widely different points of view, scale, or object states. Embodied tracking is also naturally long-term, and being able to consistently (re-)associate objects to their appearances and disappearances over as long as a lifetime is critical. Previous datasets under-emphasize this re-detection problem, and their "framed" nature has led to adoption of various spatiotemporal priors that we find do not necessarily generalize to egocentric video. We thus introduce EgoTracks, a new dataset for long-term egocentric visual object tracking. Sourced from the Ego4D dataset, this new dataset presents a significant challenge to recent state-of-the-art single-object tracking models, which we find score poorly on traditional tracking metrics for our new dataset, compared to popular benchmarks. We further show improvements that can be made to a STARK tracker to significantly increase its performance on egocentric data, resulting in a baseline model we call EgoSTARK. We publicly release our annotations and benchmark, hoping our dataset leads to further advancements in tracking. ## 1 Introduction First-person or "egocentric" computer vision aims to capture the real-world perceptual problems faced by an embodied AI; it has drawn strong recent interest as an underserved but highly relevant domain of vision, with important applications ranging from robotics [64, 17] to augmented and mixed reality [2, 66, 27]. Visual object tracking (VOT), long a fundamental problem in vision, is a core component of many egocentric tasks, including tracking the progress of an action or activity, (re-)association of objects in one's surroundings, and predicting future states of the environment. Yet, while the VOT field has made many significant advancements over the past decade, tracking in egocentric video remains underexplored. This lack of attention is in large part due to the absence of a large-scale egocentric tracking dataset for training and evaluation. While the community has proposed a number of popular tracking datasets in recent years, including OTB [76], TrackingNet [57], GOT-10k [31], and LaSOT [20], we find that the strong performance that state-of-the-art trackers achieve on these benchmarks does not translate well to egocentric video, thus establishing a strong need for such a tracking dataset. We attribute this performance gap to the many unique aspects of egocentric views compared to the more traditional third-person views of previous datasets. In contrast to intentionally "framed" video, egocentric videos are often uncurated, meaning they tend to capture many attention shifts between activities, objects, or locations. 
Due to the first-person perspective, large head motions from the camera wearer often result in objects repeatedly leaving and re-entering the field of view; similarly, hand manipulations of objects [65] lead to frequent occlusions, rapid variations in scale and pose, and potential changes in state or appearance. Furthermore, egocentric video tends to be long (sometimes representing the entire life of an agent or individual), meaning the volume of the aforementioned occlusions and transformations scales similarly. These characteristics all make tracking objects in egocentric views dramatically more difficult than scenarios commonly considered in prior datasets, and their absence represents an evaluation blindspot. Head motions, locomotion, hand occlusions, and temporal length lead to several challenges. First, frequent object disappearances and reappearances cause the problem of _redetection_ within egocentric tracking to become especially critical. Many previous tracking datasets primarily focus on short-term tracking in third-person videos, providing limited ability to evaluate many of the challenges of long-term egocentric tracking due to the low number and short duration of target-object disappearances. As a result, competent re-detection is not required for strong performance, leading many recent short-term trackers to neglect it, instead predicting a bounding box for every frame, which may lead to rampant false positives or tracking the wrong object. Additionally, the characteristics of short-term third-person video have also induced designs relying on gradual changes in motion and appearance. As we later show (Section 5.2), many of the motion, context, and scale priors made by previous short-term tracking algorithms fail to transfer to egocentric video. Notably, re-detection, occlusions, and longer-term tracking have long been recognized as difficult for VOT as a field, leading to recent benchmark construction efforts [51, 10, 55, 69, 32, 71] emphasizing these aspects. We argue that egocentric video provides a natural source for these challenges at scale while also representing a highly impactful application for tracking, therefore constituting a significant opportunity. We thus present **EgoTracks**: a large-scale long-term egocentric visual object tracking dataset for training and evaluating long-term trackers. Seeking a realistic challenge, we source videos from Ego4D [27], a large-scale dataset consisting of unscripted, in-the-wild egocentric videos of daily-life activities. The result is a large-scale dataset to evaluate the tracking and re-detection ability of SOT models, with more than 20,000 tracks from around 6000 6-minute videos. This constitutes the first large-scale dataset for visual object tracking in egocentric videos in diverse settings, providing a new, significant challenge compared with previous datasets. We perform a thorough analysis of our new dataset and its new characteristics relative to prior benchmarks, demonstrating its difficulty and the need for further research to develop trackers capable of handling long-term egocentric vision. Our experiments reveal remaining open problems and insights towards promising future directions in egocentric tracking. Leveraging these intuitions, we propose multiple simple yet effective changes, such as adjusting spatiotemporal priors, finetuning on egocentric data, and combining multiple templates. 
We apply these proposed strategies on the state-of-the-art (SOTA) STARK tracker [79], training a strong tracker dedicated towards long-term egocentric tracking: **EgoSTARK**. We hope EgoSTARK can serve as a strong baseline and facilitate future research. Figure 2: EgoTracks is an order of magnitude larger than past long-term VOT datasets, with significantly more tracks and object disappearances/appearances in longer videos. Circle area indicates total number of tracks. To summarize, we make the following contributions: 1. We present EgoTracks, the first large-scale long-term object tracking dataset with diverse egocentric scenarios. We analyze its uniqueness in terms of evaluating the re-detection performance of trackers. 2. We conduct comprehensive experiments to understand the performance of many state-of-the-art trackers on the EgoTracks validation set and observe that, due to the biases and evaluation blindspots of existing third-person datasets, they tend to struggle. 3. We perform an analysis of what makes a good tracker for long-form egocentric video. Applying these learnings to the STARK tracker [79], we produce a strong baseline we call EgoSTARK, which achieves significant improvements (+15% F-score) on EgoTracks. ## 2 Related work ### Visual object tracking datasets Visual object tracking studies the joint spatial-temporal localization of objects in videos. Starting from a video and a predefined taxonomy, multiple object tracking (MOT) models simultaneously detect, recognize, and track multiple objects. For example, MOT [54] tracks humans, KITTI [24, 50] tracks pedestrians and cars, and TAO [14] tracks a large taxonomy of 833 categories. In contrast to MOT, single object tracking (SOT) follows a single object via a provided initial template of the object, without any detection or recognition involved. Thus, SOT is often taxonomy-free and operates on generic objects. The community has constructed multiple popular benchmarks to study this important problem, including OTB [76], UAV [56], NfS [35], TC-128 [45], NUS-PRO [40], GOT-10k [31], VOT [37], and TrackingNet [57]. These SOT datasets mainly consist of short videos (e.g. a few seconds). Recently, there has been increasing interest in long-term tracking. Tracking objects in longer videos (several minutes or more) poses unique challenges, e.g. significant transformations, displacements, disappearances, and reappearances. On top of localizing the object when visible, the model also needs to produce no box when the object is absent, and then re-localize the same object when it reappears. OxUvA [69] is one of the first to benchmark longer videos (average 2 minutes), with 366 evaluation-only videos. LaSOT [20] scales this to a benchmark of 1400 videos with more frequent object reappearances. Concurrently, VOT-LT [36] includes frequent object disappearances and reappearances in 50 purposefully selected videos. Our EgoTracks focuses on long-term SOT and presents multiple critical and unique attributes: 1) significantly larger scale, with **17k** tracks in videos of an average **6 minutes** (Figure 2); 2) more frequent disappearances & reappearances (avg. **17.7** times) happening in natural, real-world scenarios; 3) data sourced from egocentric videos shot in-the-wild, involving unique challenging situations, such as large camera motions, diverse perspective changes, hand-object interactions, and frequent occlusions. 
### Single object tracking methodologies Many modern approaches use convolutional neural networks (CNNs), either with Siamese network [42, 72, 41] or correlation-filter based [12, 3, 8, 53, 4] architectures. With recent successes in vision tasks like classification [16] and detection [5], Transformer architectures [70] for tracking have also become popular. For example, TransT [6] uses attention-based feature fusion to combine features of the object template and search image. More recently, several works utilize Transformers as direct predictors to achieve a new state of the art, such as STARK [79], ToMP [52] and SBT [77]. These models tokenize frame features from a ResNet [29] encoder, and use a Transformer to predict the bounding box and object presence score from the feature tokens. These methods are often developed on short-term SOT datasets and assume that the target object stays in the field of view with minimal occlusions. On the other hand, long-term trackers [71, 32, 10] are designed to cope with the problem of re-detecting objects upon their reappearances. Designed to be aware of potential object disappearances, these approaches search the whole image for the object's reappearance. ### Tracking in egocentric videos Multiple egocentric video datasets have been introduced in the past decades [11, 27, 38, 67, 60, 22]. They offer a host of interesting challenges, many of which require associating objects across frames: activity recognition [34, 43, 80, 75, 25], anticipation [21, 23, 26], video summarization [15, 38, 39, 49], human-object interaction [13, 47], episodic memory [27], visual query [27], and camera-wearer pose inference [33]. Figure 3: EgoTracks is a large-scale egocentric dataset of diverse scenarios (left) and objects (right). To tackle these challenges, tracking is leveraged in many methodologies [27, 13, 48, 39, 47], yet few works have been dedicated to this fundamental problem on its own. Those that do have started to recognize the challenges of egocentric object tracking [18, 19], though at smaller scales. EgoTracks provides a unique, large-scale testbed for developing tracking methods dedicated to egocentric videos; our improved baseline EgoSTARK also serves as a potential plug-and-play module for other tasks where object association is desired. In egocentric video understanding, Ego4D [27] and EPIC-KITCHENS VISOR [13] are closely related. Ego4D contains the largest collection of egocentric videos in-the-wild; EgoTracks is annotated on a subset of Ego4D. In addition, Ego4D proposes many novel tasks, such as Episodic Memory, with tracking identified as a core component. VISOR was introduced concurrently, annotating short-term (12 sec on average) videos from EPIC-KITCHENS [11] with instance segmentation masks. We believe EgoTracks offers multiple values complementary to EPIC-VISOR: long-term tracking (6 min vs. 12 sec), significantly larger scale (6.9k video clips vs. 158), and more diversified video sources (80+ scenes vs. kitchen-only; see Fig. 3). ## 3 The EgoTracks dataset We present EgoTracks: a large-scale long-term egocentric single object tracking dataset, consisting of a total of 22.42k tracks from 5.9k videos. We follow the same data split as the Ego4D Visual Queries (VQ) 2D benchmark: 3.6k/1.2k/1.1k for train/val/test (Table 1). ### Ego4D visual queries (VQ) benchmark Ego4D [27] is a massive-scale egocentric video dataset, consisting of 3670 hours of diverse daily-life activities in an in-the-wild format. 
The dataset is accompanied by multiple benchmarks, such as episodic memory, hands and objects, social interaction, and forecasting. The most relevant task for our purposes is episodic memory's 2D VQ task: given an egocentric video and a cropped image of an object, the goal is to localize when and where the object was last seen in the video, as a series of 2D bounding boxes in consecutive frames. This task is closely related to long-term tracking: finding an object in a video given a visual template is identical to the re-detection problem in the long-term tracking literature. Moreover, Ego4D's baseline approach relies heavily on tracking methods: Siam-RCNN [71] and KYS [4] for global and local tracking, respectively. **Shortcomings.** While highly related, the VQ dataset is not immediately suitable for long-term tracking. In particular, the VQ annotation guidelines were roughly the following: 1) identify three different objects that appear multiple times in the video; 2) annotate a query template for each object, which should contain the entire object without any motion blur; 3) annotate an occurrence of the object that is temporally distant from the template. Thus, these annotations are not exhaustive over time (they are quite sparse), limiting their applicability to tracking. On the other hand, the selection criteria result in a strong set of candidate objects, which we leverage to build EgoTracks. ### Annotating VQ for long-term tracking We thus start with the VQ visual crop and response track, asking annotators to first identify the object represented by the visual crop, the response track, and the object name. From the video's start, we instruct the annotators to draw a bounding box around the object each time it appears. Because annotators must go through each video in its entirety, and each video contains an average of \(\sim\)1800 frames at 5 frames per second (FPS), this annotation task is labor-intensive, taking roughly 1 to 2 hours per track. An important aspect of this annotation is its exhaustiveness: the entire video is densely annotated for the target object, and any frame without a bounding box is considered a negative. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline **Dataset** & **Video Hours** & **Avg. Length (s)** & **Ann. FPS** & **Ann. Type** & **Egocentric** & \begin{tabular}{c} **SOTA** \\ **(P/AO)\({}^{*}\)** \\ \end{tabular} \\ \hline ImageNet-Vid [63] & 15.6 & 10.6 & 25 & mask & No & \\ YT-VOS [78] & 5.8 & 4.6 & 5 & mask & No & -/83.6 [30] \\ DAVIS 17 [61] & 0.125 & 3 & 24 & mask & No & -/86.3 [7] \\ TAO [14] & 29.7 & 36.8 & 1 & mask & No & \\ UVO [74] & 2.8 & 10 & 30 & mask & No & -/73.7 [58] \\ EPIC-KITCHENS VISOR [13] & 36 & 12\({}^{**}\) & 0.9 & mask & **Yes** & -/74.2 [58] \\ GOT-10k [31] & 32.8 & 12.2 & 10 & bbox & No & -/75.6 [9] \\ OxUvA [69] & 14.4 & 141.2 & 1 & bbox & No & \\ LaSOT [20] & 31.92 & 82.1 & 30 & bbox & No & 80.3/- [9] \\ TrackingNet [57] & 125.1 & 14.7 & 28 & bbox & No & 86/- [9] \\ **EgoTracks (Ours)** & **602.9** & **367.9** & 5 & bbox & **Yes** & 45/54.1 \\ \hline \hline \end{tabular} *: P: Precision, AO: average overlap; we report J-Score instead of AO for mask-based datasets. **: Original videos are 720 s. \end{table} Table 1: **Object tracking datasets comparison. In addition to larger scale than previous datasets, the scenarios captured by EgoTracks represent a significantly harder challenge for SOTA trackers, suggesting room for improved tracking methodology.**
Being able to reject negative examples is an important component of re-detection in real-world settings, as false positives can impact certain applications as much as false negatives. **Quality Assurance.** All tracks are quality checked by expert annotators after the initial annotations. To measure the annotation quality, we employ multi-review on a subset of the validation set: three independent reviewers are asked to annotate the same video. We find the overlaps between these independent annotations are high (\(>0.88\) IoU). Further, since EgoTracks has a focus on re-detection, we check the temporal overlap of object presence and find it to be very consistent across annotators. In total, the annotation effort represented roughly 86k worker-hours. ### Tracklet attributes In addition to the bounding box annotations, we also label certain relevant attributes to allow for different training strategies or deeper analysis of validation set performance. We annotate the following three attributes per occurrence (see Figure 4 for examples and Table 2 for statistics): * is_active: In Ego4D, the camera wearer often interacts with relevant objects with their hands. Objects in the state of being handled pose a challenge for tracking algorithms due to frequent occlusion and rapid changes in pose. * is_transformed: Objects in Ego4D may undergo transformations, such as deformations and state changes. Such instances require being able to quickly adapt to the tracked object having a new appearance. * is_recognizable: Due to occlusions, motion blur, scale, or other conditions, some objects in Ego4D can be extremely difficult to recognize without additional context. We thus annotate whether the object is recognizable solely based on its appearance, without using additional context information (e.g., other frames). \begin{table} \begin{tabular}{l c c} \hline \hline & **Total number** & **Percentage** \\ \hline All Tracks & 17593 & 100\% \\ is\_active & 3963 & 22.52\% \\ is\_transformed & 1080 & 6.13\% \\ is\_recognizable & 17557 & 99.79\% \\ \hline \hline \end{tabular} \end{table} Table 2: **Track attributes** in training and validation sets. ## 4 Analysis of state-of-the-art SOT trackers We compare the performance of several off-the-shelf tracking models on EgoTracks's validation set. Identifying STARK [79] as the one with the best performance, we conduct further ablation studies under different settings using STARK to further understand its behavior. ### Evaluation protocols and metrics **Evaluation Protocols.** We introduce several evaluation protocols for EgoTracks, consisting of different combinations of the initial template, evaluated frames, and the temporal direction in which the tracker is run. For the initial template, we consider two choices: * **Visual Crop Template (VCT)**: The visual crop images were specifically chosen to be high-quality views of the target and served as our annotators' references for identifying the object throughout the videos. Thus, they make ideal candidates for initializing a tracker. * **Occurrence First Frame Template (OFFT)**: The tracker is initialized with the first frame of each occurrence (see \(\overrightarrow{\text{OO}}\) below). While this may result in a lower quality view of the object, temporal proximity to subsequent frames means it may be closer in appearance. Note that we exclude the template frame from the calculation of any evaluation metrics. We also consider several choices for the evaluated frames and temporal direction: 
* **Video Start Forward (\(\overrightarrow{\textbf{VS}}\))**: The tracker is evaluated on every frame of the video in causal order, starting from the first frame. This represents a tracker's ability to follow an object through a long video. * **Visual Crop Forward/Backward (\(\overleftrightarrow{\textbf{VC}}\))**: The tracker is run on the video twice, once starting at the visual crop frame and running forward in time, and a second time running backwards. This represents an alternative way of covering every frame in the video, but with closer visual similarity between the **VCT** initialization and the first frames encountered by the tracker. * **Occurrences Only Forward (\(\overrightarrow{\textbf{OO}}\))**: The tracker is only evaluated on the object occurrences, when the object is visible. This simplifies the tracking task and allows us to disentangle the challenge of re-detection from that of simply tracking in an egocentric clip. We specify protocols by concatenating the appropriate descriptors. We primarily consider **VCT-\(\overrightarrow{\textbf{VS}}\)**, **VCT-\(\overleftrightarrow{\textbf{VC}}\)**, **VCT-\(\overrightarrow{\textbf{OO}}\)**, and **OFFT-\(\overrightarrow{\textbf{OO}}\)** (Fig. 5) in our experiments. Figure 4: **EgoTracks examples of tracklet attributes.**_Left_: A micropipette on a bench (top) versus actively used (bottom). _Middle_: A paint can (top) is opened (bottom). _Right_: A hard to recognize blowtorch (bottom) due to distance and motion blur; annotators must rely on context from other frames to identify the object. Figure 5: Evaluation protocols visualization. **Metrics.** We adopt common metrics in object tracking. The most important are the tracking F-score, precision, and recall; details on these metrics can be found in [51]. Trackers are ranked mainly by the F-score. We additionally consider average overlap (AO), success, precision, and normalized precision as short-term tracking metrics [68]. ### SOT trackers struggle on EgoTracks We compare the performance of several representative tracking algorithms on EgoTracks with the **VCT-\(\overrightarrow{\textbf{VS}}\)** evaluation protocol. Given the large number of existing tracking algorithms, we do not aim to be exhaustive but select high-performing examples representative of different tracking principles, which we briefly describe here. KYS [4] and DiMP [3] are two typical short-term tracking algorithms that maintain an online target representation. ToMP [52] and STARK [79] are two examples of SOTA short-term trackers based on Transformers. GlobalTrack [32] is a global tracker that searches the entire search image for re-detection. LTMU [10] is a high-performance long-term tracker that combines a global tracker (GlobalTrack) with a local tracker (DiMP). The performance of these trackers on EgoTracks is summarized in Table 3. Note that AO in this table is equivalent to the recall at a probability threshold of 0. Qualitative results are shown in Figure 6. We highlight several observations. First, the object presence scores from most short-term trackers are not very useful, as can be seen from the low precision of KYS (12.5), DiMP (13.91), and ToMP (22.46), while long-term trackers like GlobalTrack and DiMP_LTMU achieve higher precisions at 31.28 and 37.28. This is expected, as long-term trackers are designed to place more emphasis on high re-detection accuracy, though there clearly is still room for improvement. 
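As a concrete reference for how these long-term tracking numbers are computed, the following is a simplified, single-threshold sketch of the precision/recall/F-score protocol of [51] (our own illustration; the benchmark reports the F-score maximized over all confidence thresholds \(\tau\), and it assumes the tracker outputs a box and a presence score on every frame):

```python
import numpy as np

def iou(box_a, box_b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union > 0 else 0.0

def lt_precision_recall_f(preds, scores, gts, tau):
    """Long-term tracking metrics at one confidence threshold tau.
    preds[i]: predicted box; scores[i]: presence confidence;
    gts[i]: ground-truth box, or None when the object is absent."""
    overlaps, reported, visible = [], [], []
    for pred, s, gt in zip(preds, scores, gts):
        rep = s >= tau              # tracker claims the object is present
        reported.append(rep)
        visible.append(gt is not None)
        overlaps.append(iou(pred, gt) if (rep and gt is not None) else 0.0)
    overlaps = np.asarray(overlaps)
    reported, visible = np.asarray(reported), np.asarray(visible)
    # Precision averages overlap over frames where a box was reported
    # (reporting on absent frames counts as zero overlap); recall
    # averages overlap over frames where the object is actually visible.
    p = overlaps[reported].mean() if reported.any() else 0.0
    r = overlaps[visible].mean() if visible.any() else 0.0
    return p, r, 2 * p * r / (p + r + 1e-12)
```

Under this definition, a tracker that reports a box on every frame is directly penalized on precision whenever the target is absent, which is exactly the failure mode of the short-term trackers discussed here.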
### SOT trackers struggle on EgoTracks

We compare the performance of several CNN-based tracking algorithms on EgoTracks with the **VCT-\(\overrightarrow{\textbf{VS}}\)** evaluation protocol. Given the large number of existing tracking algorithms, we do not aim to be exhaustive but select high-performing examples representative of different tracking principles, which we briefly describe here. KYS [4] and DiMP [3] are two typical short-term tracking algorithms that maintain an online target representation. ToMP [52] and STARK [79] are two examples of SOTA short-term trackers based on Transformers. GlobalTrack [32] is a global tracker that searches the entire search image for re-detection. LTMU [10] is a high-performance long-term tracker that combines a global tracker (GlobalTrack) with a local tracker (DiMP).

The performance of these trackers on EgoTracks is summarized in Table 3. Note that AO in this table is equivalent to the recall at a probability threshold of 0. Qualitative results are shown in Figure 6. We highlight several observations. First, the object presence scores from most short-term trackers are not very useful, as can be seen from the low precision of KYS (12.50), DiMP (10.31), and ToMP (19.63), while long-term trackers like GlobalTrack and DiMP_LTMU achieve higher precisions at 31.28 and 37.28. This is expected, as long-term trackers are designed to place more emphasis on high re-detection accuracy, though there clearly is still room for improvement. STARK achieves the second highest precision at 34.70, which is an exception as it has a second training stage that teaches the model to classify whether the object is present. Second, more recent works such as ToMP and STARK achieve better F-scores than previous short-term trackers. This could be partially due to advances in training strategies, more data, and Transformer-based architectures.

We also include results using the principle of tracking by detection [59, 1]: a detector proposes 100 bounding boxes, and we select the best using cosine similarity of box features. We observe that an open-world detector, GGN [73], trained on COCO [46] generalizes reasonably well with oracle matching, achieving 75.92 AO. However, the association problem is very challenging, bringing the AO down to 15.19. Implementation details are in the supplementary.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline **Method** & **AO** & **F-score** & **Precision** & **Recall** & **FPS** \\ \hline KYS [4] & 16.09 & 13.09 & 12.50 & 13.74 & 20 \\ DiMP [3] & 16.45 & 11.84 & 10.31 & 13.91 & 43 \\ GlobalTrack [32] & 23.63 & 20.40 & 31.28 & 15.14 & 6 \\ LTMU [10] & 29.33 & 27.46 & 37.28 & 21.74 & 13 \\ ToMP [52] & 30.93 & 20.95 & 19.63 & 22.46 & 24.8 \\ Siam-RCNN [71] & 37.48 & 35.38 & 52.80 & 26.67 & 4.7 \\ STARK [79] - Res50 & 35.99 & 30.48 & 34.70 & 27.17 & 41.8 \\ STARK [79] - Res101 & 35.03 & 30.18 & 35.30 & 26.35 & 31.7 \\ \hline \multicolumn{6}{l}{**Tracking by Detection**} \\ Mask R-CNN [28]+Oracle & 60.00 & - & - & - & - \\ GGN [73]+Oracle & 75.92 & - & - & - & - \\ GGN [73]+feature matching & 15.19 & 9.92 & 11.75 & 8.58 & - \\ \hline \hline \end{tabular} \end{table} Table 3: **EgoTracks performance comparison.** Off-the-shelf, all trackers perform poorly, demonstrating the new challenges of EgoTracks. Higher performance from tracking-by-detection methods + Oracle implies that instance association, not detection, is one of the primary challenges.

Figure 6: Qualitative results of different trackers.

### Re-detection and diverse views are challenging

We perform additional EgoTracks evaluations according to alternative evaluation protocols, to gain further insight into tracker performance (Table 4). To decouple the re-detection problem from the other egocentric aspects of EgoTracks, we run experiments with the **OFFT-\(\overrightarrow{\textbf{OO}}\)** protocol, which ignores the negative frames of the video and thus obviates the need for re-detection. Unsurprisingly, all trackers do significantly better, though there remains much room for improvement, emphasizing the challenging nature of EgoTracks. We also run experiments in the **VCT-\(\overleftrightarrow{\textbf{VC}}\)** setting, in which case the initial template is temporally adjacent to the first tracked frames. Here we see a 3-4% improvement in AO, F-score, precision, and recall compared to the **VCT-\(\overrightarrow{\textbf{VS}}\)** protocol, illustrating that trackers like STARK are designed to expect gradual transitions in appearance. Both experiments illustrate that re-detection is a significant challenge for tracking and highlight the need for better long-term benchmarks that require more re-detection.

### Attributes capture hard scenarios for tracking

We use the validation set tracklet attribute annotations described in Section 3.3 to further understand performance on our evaluation set. For each attribute, we split the tracklets into two groups, corresponding to the attribute being true and false. We then use a standard STARK tracker [79] and report AO for each group of tracklets using the **OFFT-\(\overrightarrow{\textbf{OO}}\)** evaluation protocol in Table 5.
As might be expected, we find that when objects are being actively used by the user or are in the midst of a transformation, AO tends to be lower, by roughly 6%, likely due to occlusions or changes in appearance. Additionally, STARK tends to have a harder time when the object is hard to recognize in the image, whether due to occlusions, blur, scale, or other conditions.

## 5 Egocentric tracking design considerations

Observing that existing trackers do not perform well on EgoTracks, we perform a systematic exploration of priors and other design choices for egocentric tracking. Though not specifically designed for long-term tracking, STARK [79] emerged in Section 4 as the most competitive tracker on EgoTracks. We focus on this tracker for additional analysis, suggesting improvements to egocentric performance.

### Egocentric finetuning is essential

We first demonstrate how various trackers trained on third-person videos can significantly benefit from finetuning on EgoTracks. As shown in Table 6, all methods gain between 6% and 10% in F-score. In addition, as shown in Table 7, finetuning on the VQ response track subset improves the F-score from 30.48% to 33.53%, while using the full EgoTracks annotations further improves the F-score by 4.67% to 38.2%. This demonstrates that: 1) finetuning with egocentric data helps close the exocentric-egocentric domain gap; 2) training on the full EgoTracks provides further gains, showing the value of our training set.

### Third-person spatiotemporal priors fail

Modern trackers often embrace spatiotemporal priors on object motion, appearance, and surroundings, which helped them on past datasets. However, some of these design decisions translate poorly to long-term egocentric videos.

**Search window size.** An example is the local search assumption. Many trackers assume the tracked object appears within a certain range of its previous location. Thus, for efficiency, these methods often search within a local window of the next frame. This is reasonable in high-FPS, smooth videos with relatively small motion, as is common in previous short-term tracking datasets; but in egocentric videos, the object's pixel coordinates can change rapidly with frequent large head motions, and re-detection becomes a key problem. Therefore, we experiment with expanded search regions beyond what is common in past methods.

\begin{table} \begin{tabular}{l c c c c} \hline \hline **Method** & **AO** & **F-score** & **Precision** & **Recall** \\ \hline ToMP & 36.13 & 28.11 & 29.01 & 27.26 \\ Siam-RCNN & 45.67 & 41.41 & 56.11 & 32.81 \\ STARK & 44.25 & 38.20 & 42.06 & 34.99 \\ \hline \hline \end{tabular} \end{table} Table 6: Performance of trackers finetuned on EgoTracks.

\begin{table} \begin{tabular}{l c c c c} \hline \hline **Method** & **AO** & **F-score** & **Precision** & **Recall** \\ \hline STARK - **VCT-\(\overrightarrow{\textbf{VS}}\)** & 35.99 & 30.48 & 34.70 & 27.17 \\ STARK - **VCT-\(\overleftrightarrow{\textbf{VC}}\)** & 40.01 & 34.02 & 38.31 & 30.60 \\ \hline \hline \end{tabular} \end{table} Table 4: STARK performance under the **VCT-\(\overrightarrow{\textbf{VS}}\)** and **VCT-\(\overleftrightarrow{\textbf{VC}}\)** protocols.

\begin{table} \begin{tabular}{l c c} \hline \hline **Attribute** & **True** & **False** \\ \hline is\_active & 49.65 & 55.73 \\ is\_transformed & 49.19 & 55.31 \\ is\_recognizable & 55.52 & 46.65 \\ \hline \hline \end{tabular} \end{table} Table 5: **OFFT-\(\overrightarrow{\textbf{OO}}\)** AO of the standard STARK model [79] for each attribute, averaged across tracklets.
As we expand the search size from 320 up to 800, we see dramatic improvements in AO, F-score, precision, and recall (Table 7), as STARK is able to correctly locate objects that were previously outside its search window due to the rapid motion of egocentric video.

**Multiscale augmentations.** The characteristics of egocentric video also affect common SOT assumptions about object scale. Many trackers are trained with the assumption that an object's scale is consistent with the template image and between adjacent frames. However, large egocentric camera motions, locomotion, and hand interactions with objects (e.g., bringing an object to one's face, as in eating) can translate to objects rapidly undergoing large changes in scale. We thus propose adding scale augmentations during training, randomly resizing the search image by a factor of \(s\in[0.5,1.5]\). While simple, we find this dramatically improves performance on EgoTracks, improving STARK's AO by nearly \(10\%\) and F-score by more than \(8\%\) (Table 7).
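A minimal sketch of such a scale augmentation is given below, assuming a (C, H, W) tensor search image; the crop/pad-back step and helper names are our own choices to keep batch shapes fixed, and the ground-truth box coordinates must of course be transformed consistently:

```python
import random
import torch
import torch.nn.functional as F

def random_scale_search_image(search: torch.Tensor,
                              lo: float = 0.5, hi: float = 1.5) -> torch.Tensor:
    """Randomly rescale a (C, H, W) search image by s in [lo, hi], then
    center-crop or zero-pad back to the original resolution."""
    c, h, w = search.shape
    s = random.uniform(lo, hi)
    nh, nw = max(1, int(h * s)), max(1, int(w * s))
    out = F.interpolate(search[None], size=(nh, nw), mode="bilinear",
                        align_corners=False)[0]
    if s >= 1.0:  # crop the center back to (h, w)
        top, left = (nh - h) // 2, (nw - w) // 2
        return out[:, top:top + h, left:left + w]
    pad_h, pad_w = h - nh, w - nw  # pad symmetrically back to (h, w)
    return F.pad(out, (pad_w // 2, pad_w - pad_w // 2,
                       pad_h // 2, pad_h - pad_h // 2))
```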
**Context ratio.** Past SOT works have found that including some background can be helpful for template image feature extraction, with twice the size of the object being common. We experiment with different context ratios to see if this rule of thumb transfers to egocentric videos. Because of the local window assumption, the sizes of the template and search images are related: \(\frac{\text{Search Image Size (SIS)}}{\text{Search Region Ratio (SRR)}}=\frac{\text{Template Image Size}}{\text{Context Ratio (CR)}}=\text{Object Scale}\). The template image size is set to a fixed \(128\times 128\). When changing the context ratio, we carefully control the other parameters for a fair comparison. The results are shown in Table 8. Among the three parameters - **CR**, **SRR**, and **SIS** - the search region size (determined by **SRR** and **SIS**) has the highest impact on the F-score. This is expected because there are frequent re-detections, which require the tracker to search for the object in a larger area, rather than just within the commonly used local window. Varying the **CR** has mixed results, so we adhere to the common practice of using a **CR** of 2.

### Multiple templates can improve tracking

Transformer-based architectures can encode arbitrary-length inputs, making it straightforward to consume features from an arbitrary number of templates. The original STARK design encodes two templates: the initialization and a single dynamically updated template. A natural extension is to include more templates of the target, which may expose the transformer to different views of the object (particularly relevant in egocentric video), though low-quality views may compromise performance [44]. What is the right trade-off? We experiment with different numbers of templates for a basic STARK model. Motivated by potential applications where a user can take a short video of an object from different angles [62], we extend the single visual crop to a visual clip of templates by incorporating additional template frames from the occurrence in which the visual crop appears.

We adopt a simple template sampling method: uniformly sampling 3, 5, 7, or 9 templates from the visual crop's occurrence. Uniformly sampling the videos temporally is a simple yet effective heuristic to gather diverse views from an occurrence. We summarize the results in Table 9. While we observe improvements across all metrics using up to 5 templates, performance declines with more. We hypothesize that increasing the number of templates does increase the knowledge available to STARK for tracking, but after a certain point it may dilute the information in the templates and make it difficult for the transformer to synthesize. This highlights the importance of template selection and multi-view fusion mechanisms, suggesting promising future directions.

## 6 Conclusion

We present EgoTracks, the first large-scale dataset for long-term egocentric visual object tracking in diverse scenes. We conduct extensive experiments to understand the performance of state-of-the-art trackers on this new dataset, and find that they struggle considerably, possibly in part due to overfitting to some of the simpler characteristics of existing benchmarks. We thus propose several adaptations for the egocentric domain, leading to a strong baseline that we call Ego-STARK, which has vastly improved performance on EgoTracks. Lastly, we plan to organize a public benchmark challenge using a held-out test set with a test server as a testbed for new tracking algorithms. By publicly releasing this dataset and organizing the challenge, we hope to encourage advancements in the field of long-term tracking and draw more attention to the challenges of long-term and egocentric videos.

\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline **Setting** & **CR** & **SRR** & **SIS** & **AO** & **F-score** & **Precision** & **Recall** \\ \hline **Same SIS** & 1x & 2.5x & 320 & 28.22 & 26.81 & 28.68 & 25.16 \\ & **2x** & **5x** & **320** & 38.94 & 33.53 & 39.13 & 29.33 \\ & 3x & 7.5x & 320 & 44.70 & 36.03 & 40.28 & 32.59 \\ & 4x & 10x & 320 & 43.19 & 34.32 & 37.98 & 31.31 \\ \hline **Same SRR** & 1x & 5x & 640 & 41.50 & 31.09 & 30.31 & 31.91 \\ & 3x & 5x & 208 & 39.87 & 35.36 & 41.54 & 30.79 \\ \hline **Same CR** & 2x & 7.5x & 480 & 48.21 & 39.69 & 43.95 & 36.19 \\ & 2x & 10x & 640 & 52.09 & 42.39 & 46.23 & 39.15 \\ & 2x & 12.5x & 800 & 54.08 & 43.74 & 47.60 & 40.45 \\ \hline \hline \end{tabular} \end{table} Table 8: STARK with different context ratios. The bold row is the default setting. **CR**: context ratio, **SRR**: search region ratio, **SIS**: search image size (in image resolution).

\begin{table} \begin{tabular}{l c c c c} \hline \hline **Method** & **AO** & **F-score** & **Precision** & **Recall** \\ \hline STARK - 1 template & 32.97 & 25.42 & 25.80 & 25.04 \\ STARK - 3 templates & 34.76 & 26.84 & 28.84 & 25.57 \\ STARK - 5 templates & 35.47 & 28.03 & 29.82 & 26.45 \\ STARK - 7 templates & 34.81 & 27.83 & 30.77 & 25.40 \\ STARK - 9 templates & 33.92 & 26.89 & 30.36 & 24.12 \\ \hline \hline \end{tabular} \end{table} Table 9: STARK with different numbers of templates.

## 7 Acknowledgement

We would like to express our sincere gratitude to Kristen Grauman for her invaluable insight, feedback, guidance and support.
2308.00211
High-fidelity achromatic metalens imaging via deep neural network
Meta-optics are attracting intensive interest as alternatives to traditional optical systems comprising multiple lenses and diffractive elements. Among applications, single metalens imaging is highly attractive due to the potential for achieving significant size reduction and simplified design. However, single metalenses exhibit severe chromatic aberration arising from material dispersion and the nature of singlet optics, making them unsuitable for full-color imaging requiring achromatic performance. In this work, we propose and validate a deep learning-based single metalens imaging system to overcome chromatic aberration in varied scenarios. The developed deep learning networks computationally reconstruct raw imaging captures through reliably refocusing red, green and blue channels to eliminate chromatic aberration and enhance resolution without altering the metalens hardware. The networks demonstrate consistent enhancement across different aperture sizes and focusing distances. Images outside the training set and real-world photos were also successfully reconstructed. Our approach provides a new means to achieve achromatic metalenses without complex engineering, enabling practical and simplified implementation to overcome inherent limitations of meta-optics.
Yunxi Dong, Bowen Zheng, Hang Li, Hong Tang, Yi Huang, Sensong An, Hualiang Zhang
2023-08-01T01:04:46Z
http://arxiv.org/abs/2308.00211v1
# High-fidelity achromatic metalens imaging via deep neural network

###### Abstract

Meta-optics are attracting intensive interest as alternatives to traditional optical systems comprising multiple lenses and diffractive elements. Among applications, single metalens imaging is highly attractive due to the potential for achieving significant size reduction and simplified design. However, single metalenses exhibit severe chromatic aberration arising from material dispersion and the nature of singlet optics, making them unsuitable for full-color imaging requiring achromatic performance. In this work, we propose and validate a deep learning-based single metalens imaging system to overcome chromatic aberration in varied scenarios. The developed deep learning networks computationally reconstruct raw imaging captures through reliably refocusing red, green and blue channels to eliminate chromatic aberration and enhance resolution without altering the metalens hardware. The networks demonstrate consistent enhancement across different aperture sizes and focusing distances. Images outside the training set and real-world photos were also successfully reconstructed. Our approach provides a new means to achieve achromatic metalenses without complex engineering, enabling practical and simplified implementation to overcome inherent limitations of meta-optics.

Keywords: Meta-optics, Metalens, Deep Learning, Neural Networks, Imaging, 3D Printing.

## 1 Introduction

The advancement of modern camera systems has led to multi-element lens configurations that minimize optical aberrations and achieve high-resolution imaging. However, these systems sacrifice compactness. Metasurfaces, the two-dimensional metamaterial analog of optical components, provide transformative opportunities to realize high-performance optics within substantially reduced volumes. Here, we utilize metalenses - metasurface lenses with carefully engineered nanoscale scattering elements that impart precise phase profiles - to demonstrate imaging capabilities analogous to conventional refractive optics. Notably, metalenses overcome the challenge of spherical aberration that has persisted in refractive optics. By imparting precise phase delays with subwavelength spatial resolution, they facilitate diffraction-limited focusing, which traditional refractive optical systems cannot achieve due to the spherical shape of their lens surfaces. Additionally, the capability to readily adapt the phase profile through computational nanophotonic design of the meta-atoms grants flexibility and customizability surpassing conventional optics. However, a pivotal roadblock for the wide deployment of meta-optics is chromatic aberration. Due to significant material dispersion and the dispersive responses of metasurfaces, different spectral components passing through metalenses focus on disparate spatial planes, negatively impacting image quality. Existing strategies to mitigate chromatic aberration include cascaded multi-layer metalenses[1, 2, 3, 4], interleaving meta-atoms for different wavelengths[5, 6, 7], metalens arrays[8], dispersion-correcting phase masks[9, 10, 11, 12], increased focusing depth[13], and computational optimization and correction of phase profiles[14, 15]. However, these approaches increase system complexity while sacrificing other performance metrics such as scalable high-yield fabrication, imaging quality, and freedom of material choice.
Consequently, a single-metalens solution capable of full-color aberration-free imaging under diverse operating conditions remains elusive. In this paper, we successfully demonstrate the correction of chromatic aberration, achieving an achromatic metalens camera through the integration of a custom-designed metalens with a commercial imaging sensor, coupled with deep learning algorithms. Our deep learning-based computational imaging approach refocuses and restores missing information in the raw captured images for all RGB channels, effectively converting a single chromatic metalens camera into an achromatic imaging system. With this strategy, light (i.e., broadband optical signals) can be manipulated within substantially thinner flat optical components compared to the state of the art while still maintaining full-color and aberration-free operation. To collect multi-spectral training data, we employ a 3D-printed adapter for integrating the metalens onto a commercial camera. As for the computational imaging backend, a universal deep neural network architecture built on U-Net is used to achieve direct chromatic aberration correction. By training with raw images captured under varying conditions, the model reliably enhances image quality, removes chromatic aberration, and effectively reconstructs photos both from and outside of the training dataset. The trained model can also be used to enhance real-world captures, which further demonstrates its capability to replace complex lens assemblies for high-quality full-color imaging.

## 2 Results

### Imaging system workflow, DL model and experimental setups

An achromatic single-metalens imaging system presents considerable difficulty due to the requisite restoration of all color channels lacking ideal imaging responses. To address this issue, we integrate deep learning networks as the computational backend to directly enhance the chromatic responses of the raw image captures. A highly automated workflow for collecting and pre-processing raw images was developed to enable the proposed deep learning approach, as depicted in Fig. 1. Specifically, Fig. 1a shows the optical path with the metalens directly mounted on a camera, where \(d\) denotes the object distance and \(A\) denotes the aperture diameter. The aperture is placed in front of the metalens L, and its diameter is equal to or smaller than that of the metalens to block light outside the metalens area. Fig. 1b illustrates the assembled metalens with a 3D-printed mount on a commercial camera (Sony Alpha a7R IV).

Figure 1: **Overview of metalens imaging and reconstruction.** (a) Optical path in the metalens camera system, with light passing through the aperture, then the metalens, which directly focuses light onto the CMOS image sensor. (b) Photograph of the fabricated metalens mounted on a commercial camera. (c) Example source image displayed on a monitor. (d) Schematic of the image reconstruction workflow, including preprocessing of raw images and the deep learning model for reconstruction. (e) Sample images from the training and testing datasets used for the deep learning model. (f) Additional validation image samples showing different objects and color representations. (g) Photograph of the fabricated metasurface lens. (h) Scanning electron microscope (SEM) images showing the nanostructured meta-atoms comprising the metalens. (i) Cropped regions of raw red, green, and blue color channel subimages directly captured by the metalens camera. (j) Reconstructed red, green, and blue channel subimages after processing through the proposed deep learning network.
Changing the object distance \(d\) and aperture diameter \(A\) alters the working conditions of the metalens, making it suitable for a variety of applications. One of our goals is to devise universal deep learning models, as illustrated in Fig. 1d, to accommodate different combinations of \(d\) and \(A\). In this work, we utilized monitors of various sizes to display images (as depicted in Fig. 1c), as well as accommodating different object distances (\(d\)). To conform to the image circle of the metalens, the images' aspect ratios were intentionally set to 1, with all remaining monitor pixels set to black. The resulting captured image, positioned at the sensor's center as shown in the left corner of Fig. 1d, was cropped to eliminate black pixels and used as input for the developed deep learning network.

Inspired by the successful application of image super-resolution networks, we developed a U-Net-structured deep learning model. This state-of-the-art architecture, widely applied in image processing tasks[16, 17], features skip connections that bridge contracting and expanding paths, enabling the capture of both global and local contexts. Our model enhances the original U-Net architecture by incorporating multiple skip and residual connections between layers, capturing multi-scale contexts and providing nuanced features, as shown in Fig. 1d. Inter-skip connections link the encoder and decoder blocks within the U-Net model, while intra-skip connections, exclusive to the decoder blocks, link different layers within them; conventional skip connections denote the original connections within the U-Net model[18]. The structure of the encoder and decoder blocks, comprising several convolutional and upsampling layers, is detailed in the supplementary material.
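For illustration, a minimal PyTorch sketch of a U-Net-style encoder-decoder with skip connections is given below. It is a simplified stand-in, not the exact network (whose inter-/intra-skip layout and block structure are given in the supplementary material):

```python
import torch
import torch.nn as nn

def block(cin, cout):
    """Two 3x3 conv layers with ReLU, the basic U-Net building block."""
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class MiniUNet(nn.Module):
    """Simplified 3-level U-Net with conventional skip connections,
    predicting an RGB correction on top of the raw metalens capture."""
    def __init__(self, ch=(32, 64, 128)):
        super().__init__()
        self.enc = nn.ModuleList([block(3, ch[0]), block(ch[0], ch[1]),
                                  block(ch[1], ch[2])])
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ModuleList([nn.ConvTranspose2d(ch[2], ch[1], 2, stride=2),
                                 nn.ConvTranspose2d(ch[1], ch[0], 2, stride=2)])
        self.dec = nn.ModuleList([block(ch[1] * 2, ch[1]), block(ch[0] * 2, ch[0])])
        self.head = nn.Conv2d(ch[0], 3, 1)

    def forward(self, x):
        s0 = self.enc[0](x)                  # full resolution features
        s1 = self.enc[1](self.pool(s0))      # 1/2 resolution
        h = self.enc[2](self.pool(s1))       # 1/4 resolution bottleneck
        h = self.dec[0](torch.cat([self.up[0](h), s1], dim=1))  # skip from s1
        h = self.dec[1](torch.cat([self.up[1](h), s0], dim=1))  # skip from s0
        return x + self.head(h)              # residual RGB reconstruction
```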
To train the deep learning models, we utilized the Taskonomy indoor scene dataset[19], examples of which are shown in Fig. 1e. This dataset contains 1024 \(\times\) 1024-pixel images from various buildings, providing diversity in environments and objects under consistent lighting. The resolution matched our 1920 \(\times\) 1200-pixel monitors used for data collection, as illustrated in Fig. 1c. For each combination of object distance \(d\) and aperture diameter \(A\), we selected 1000 images from the dataset to display on the monitors and capture with our metalens camera system. Of the 1000 raw images, 800 were used for training and 200 for validation for each setting (i.e., each \(d\) and \(A\) combination). After convergence of the network training process, we applied an additional validation set with completely different objects, colors, and lighting conditions to assess the performance of the trained network, as shown in Fig. 1f. The results were consistent across both the training/testing sets and the additional validation set. One example is shown in Fig. 1i and Fig. 1j, featuring raw captures of a scene at a 50 cm distance using a 4 mm aperture diameter, together with the reconstructed counterpart produced by the trained network. The raw images predominantly contain sharp image components in the green channel, with the red and blue channels significantly out of focus, in line with our expectations for the employed metalens. Remarkably, as shown in Fig. 1j, processing the raw captures through the deep learning network yields a reconstructed image that is clear across all RGB channels, indicating the network's ability to eliminate chromatic aberration by refocusing the image in all three channels. Each channel benefits from an improvement in sharpness, and achromatic full-color imaging is achieved by combining all three channels. For further performance analysis details, please refer to the supplementary material.

Our proposed deep learning engine for computational achromatic metalens imaging presents several notable advantages. Firstly, the incorporation of deep learning renders further metalens design for chromatic aberration correction unnecessary, which leads to simplified metalens implementation and reduced cost. Secondly, it eliminates the requirement for supplementary devices or steps from the initial photo capture to the final image reconstruction. Lastly, this method can be readily implemented on any commercial or scientific optical system. To the best of our knowledge, the proposed image reconstruction network represents the first successful application of a deep learning tool for addressing aberrations in chromatic metalens imaging captured directly with a commercial camera.

### Designed metalens and integration

In this work, a hyperbolic phase profile[20] was employed for the metalens, which offers several advantages. Firstly, the hyperbolic phase profile works well with the external aperture, as the entire lens area is designed to focus to the geometric center point. This allows for the inclusion of an external aperture without altering the focal length or compromising the imaging uniformity. Additionally, misalignment between the aperture and the metalens does not impact the imaging performance, as long as the transparent part of the substrate is fully blocked. Notably, the aperture size plays a crucial role, as it impacts both the imaging resolution and the chromatic aberration. Smaller apertures improve resolution and reduce chromatic aberration image-wide, yet larger apertures enable greater light transmission, beneficial for low-light conditions. Our approach provides the flexibility to incorporate external apertures of various sizes using different 3D-printed holders, eliminating the need to fabricate metalenses of different sizes. Furthermore, it opens up the possibility of integrating a mechanical leaf aperture, similar to those found in traditional lenses. Our metalens was designed and fabricated on a 10 mm by 10 mm Silicon-on-Sapphire wafer with a 230 nm silicon thickness. It has a 5 mm diameter with a 7 mm focal length, and the meta-atoms were optimized for operation at a wavelength of 526 nm. More information about the metalens can be found in the supplementary material. Meanwhile, the hyperbolic phase profile has notable drawbacks, including compromised peripheral image quality stemming from unoptimized edges. This manifests as reduced sharpness and increased chromatic aberration towards the image boundaries. For example, it is observed that edge trapezoids in Fig. 2a appear less defined than those at the center. Rainbow effects under white light further underscore the greater chromatic aberration at the periphery. Additional limitations of hyperbolic lenses arise from variable depth-of-field and lateral chromatic aberration across different focal planes and object distances.
This leads to captured images exhibiting differently sized in-focus areas and distinct chromatic aberration patterns depending on distance, as evidenced by the Modulation Transfer Function (MTF) results in Fig. 2(b-c)[21]. Fig. 2(b) and 2(c) display four combinations of 10 cm (representing close focusing) and 50 cm focusing distances (simulating focusing to infinity) with 1 mm and 4 mm aperture diameters (f-numbers of 7 and 1.75). Fig. 2(b) shows center and edge MTF curves derived from the trapezoids denoted in Fig. 2(a) under white backlight on the monitors. By contrast, Fig. 2(c) shows the green channel of the photos captured under green backlight. Regardless of conditions, a notable MTF difference exists between center and edges, with higher center values.

Figure 2: **Metalens performance characterization.** (a) Test charts imaged under white and green illumination. (b-c) Modulation transfer functions (MTFs) for the center and edge regions of images captured under white and green light. Four combinations of object distance and aperture diameter were tested for each condition.

Overall, it is clear that increasing the aperture diameter decreases MTF values significantly, especially at the edges; thus a universal deep learning network should be trained on various aperture sizes to handle these dramatic differences. Chromatic aberration reduces MTF values, as is evident from comparing the center of smaller apertures to the edge of larger ones between Fig. 2(b) and 2(c). The marked MTF difference between the green-channel and white-light data suggests that chromatic aberration is the primary cause of decreased image quality in these scenarios. Conversely, for instances involving the center of larger apertures and the edge of smaller ones, the difference becomes less pronounced, with both white and green MTF values displaying similar trends. This suggests that specific image reconstruction algorithms should be applied for enhancing small- and large-aperture cases, given the unique sources of image blurriness in each. Although the MTF curves do not exhibit significant differences across varying object distances, the patterns of color fringing do display noticeable variations. Fig. 2(d) presents two images, cropped from the center of photos captured at 10 cm and 50 cm distances. No apparent differences in sharpness exist between these two images, yet the color fringing patterns around white edges differ significantly. The 10 cm photo exhibits blue-to-cyan and yellow-to-orange color fringing transitions at the far and near ends of the faucet, respectively. However, a reversed fringing pattern is observed when images were captured at 50 cm (orange-to-yellow and cyan-to-blue transitions instead). These distinct color fringing patterns underscore the influence of object distance on the effects of chromatic aberration in captured images. The MTF curves and color fringing analyses reveal key insights into the factors impacting image quality in meta-optics systems. Specifically, aperture size and object distance significantly influence aberrations and resolution. Therefore, to comprehensively improve photo quality, the effects of varying aperture diameter and shooting distance must be considered in tandem. In our work, we focus on these two factors and their interactions to determine optimal deep learning model architectures, a proper experimental setup, and training strategies that enhance image quality across diverse operating conditions.
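For reference, MTF curves of this kind can be estimated from a 1-D intensity scan across an edge in a test chart. The following is a rough sketch of the standard edge-spread-function approach (our own helper, not the analysis code used here):

```python
import numpy as np

def mtf_from_edge(edge_profile: np.ndarray) -> np.ndarray:
    """Estimate the MTF from a 1-D edge-spread function (ESF):
    differentiate to obtain the line-spread function (LSF),
    then take the magnitude of its Fourier transform."""
    esf = np.asarray(edge_profile, dtype=float)
    lsf = np.gradient(esf)                 # ESF -> LSF
    lsf = lsf * np.hanning(lsf.size)       # window to suppress FFT leakage
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / (mtf[0] + 1e-12)          # normalize so that MTF(0) = 1
```

In practice, one would extract such a profile perpendicular to a trapezoid edge in Fig. 2(a) and plot the result against spatial frequency.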
In general, we could train universal deep learning models applicable to a wide range of potential use cases rather than being constrained to a narrow set of parameters.

### Imaging results

Figure 3: **Experimental setup and image reconstruction examples.** (a) Top view of the photo capturing setup. (b) 3D-printed metalens holder with adjustable aperture. (c) Reconstructed images from testing data (first column) and validation set (second column). (d) Real-world photos captured through the metalens with different aperture sizes and reconstructed by the proposed network.

The setup for raw photo capturing is depicted in Fig. 3(a). It consists of two monitors of varying sizes fixed on an optical table, facing a commercial camera integrated with a single metalens. The larger monitor (24" HP LP2475W) and the smaller one (5.2" Atomos Ninja V) are located 50 cm and 10 cm away from the camera, respectively. The monitors were employed individually during the experiment. The camera, set on a post, was adjusted to be parallel with the monitor and captured images using the center of the CMOS sensor, with the raw image displayed in Fig. 1d. A 3D-printed holder, depicted in Fig. 3(b), was designed to attach the metalens to the camera. This holder is composed of two parts: the upper component is threaded into a C-Mount adapter attached to the camera, while the lower section holds the 10 mm by 10 mm metalens sample. These two parts are threaded together, with the aperture located on the lower component to facilitate the interchange of varying aperture sizes. Upon assembly, the metalens is pushed into place by the upper component, reducing any undesired gap between the lens and the holder. Based on the previously described setup, raw images were captured and used to train the proposed deep learning models. The model's performance, as demonstrated in Fig. 3(c), was assessed with test images from both the training and validation sets. The ground truth image, randomly chosen from the Taskonomy dataset, the raw image directly sourced from the camera, and the reconstructed image from the output of the deep learning model are all presented in Fig. 3(c). To quantitatively measure the enhancement in image quality from raw to reconstructed images, we utilized two primary metrics: the peak signal-to-noise ratio (PSNR)[22] and the structural similarity index measure (SSIM)[23]. Higher PSNR values typically indicate reduced noise and improved image detail fidelity, while SSIM measures the structural similarity between two images. By comparing the PSNR and SSIM values of both the raw and reconstructed images to the ground truth image, we were able to quantify the image quality improvements enabled by the proposed technique.
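Both metrics follow their standard definitions; a minimal sketch, assuming 8-bit RGB arrays and the scikit-image SSIM implementation:

```python
import numpy as np
from skimage.metrics import structural_similarity

def psnr(gt: np.ndarray, img: np.ndarray, max_val: float = 255.0) -> float:
    """PSNR = 10 log10(MAX^2 / MSE), reported in dB."""
    mse = np.mean((gt.astype(float) - img.astype(float)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / (mse + 1e-12))

def ssim(gt: np.ndarray, img: np.ndarray) -> float:
    """Mean SSIM over an (H, W, 3) RGB image."""
    return structural_similarity(gt, img, channel_axis=-1, data_range=255)
```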
The PSNR and SSIM values, calculated for the respective raw and reconstructed images, are displayed in Fig. 3(c) (bottom right). It is obvious that our deep learning models effectively mitigated chromatic aberrations and increased image sharpness by refocusing all color channels. The model's reconstruction process successfully restored accurate color representations and considerably enhanced overall image contrast, yielding a gain of over 10 dB in PSNR and a 35% increase in SSIM for the training set images. These computations revealed notable enhancements in image quality through our reconstruction method compared to the raw images. To fully validate our model's versatility, we conducted extensive testing on entirely new types of images beyond the indoor training data. As shown in Fig. 1f, we utilized an additional validation set of diverse scenes with various objects, lighting conditions, and tones. Without any further training or parameter tuning, our pre-trained model successfully reconstructed these never-before-seen images. As evidenced in Fig. 3(c) (right column), our network reliably restored color and focus for these general validation images. Quantitatively, the developed deep learning network enhanced image quality by over 9 dB in peak signal-to-noise ratio and approximately 36% in structural similarity index. These gains align with those observed on the indoor training images, demonstrating the model's robustness and applicability to real-world scenes. Lastly, we applied the model to reconstruct real-world scenes taken both indoors and outdoors. Unlike controlled scenes using monitors, real-world objects feature a significantly larger depth of field and varied lighting conditions, making reconstruction considerably more challenging. Despite these complexities, the network consistently performed well. The raw and reconstructed images are shown in Fig. 3(d), featuring one indoor and two outdoor photos. In the raw images, reduced dynamic range (evidenced by hazing) and chromatic aberration are noticeable, which diminish image quality and make distinguishing objects and characters difficult, especially in ample light and near high-contrast areas. After the deep learning model reconstructed the images, the image quality improved significantly under all conditions, regardless of ambient lighting or shooting distance. These results further validate our model's universality and adaptability beyond its training set.

## 3 Discussion

As demonstrated in the previous sections, the proposed deep learning approach successfully restored full-color images from raw captures of the single-metalens camera. Both quantitative metrics and visual interpretation confirmed significant enhancement of image quality compared to the unprocessed raw images exhibiting chromatic aberration. This represents a major advancement for single-metalens imaging systems, which have faced persistent challenges in achieving achromatic performance. Inherent material dispersion limits metalens bandwidth when relying solely on optical and metasurface design innovations. Despite efforts exploring multi-layer systems, new materials, and hybrid meta-refractive concepts, realizing wide-band achromatic responses from a single nanostructured meta-optics device has remained elusive. Our proposed deep learning-based computational imaging engine provides a transformative solution for overcoming these physical constraints. By applying specialized deep learning models directly to raw captured images, we accomplish full-color aberration-free imaging without requiring complex metalens/metasurface engineering, reducing design and fabrication difficulties, improving tolerance, and enabling faster turnaround. To further validate performance and gain additional insights, we conducted detailed studies on the reconstructed images. Fig. 4(a) shows further analysis and comparisons across setups. A high-dynamic-range image from the training set was chosen, given the challenge of preserving both dark and bright details using metalenses; this is evident in the raw photos, where even a small aperture leads to hazing and chromatic aberration, and is also observable in the zoomed-in views of details at the center and edge of the image, placed at the bottom of each photo.
Enlarging the aperture rapidly worsens image quality (e.g., making the shoes at the edge of the photos barely distinguishable). The central image quality exceeds that of the edges but still lacks detail in dark regions. Increasing the object distance also degrades edge image quality: at constant angular resolution and chromatic aberration ratio, greater distances lead to reduced resolution and increased chromatic aberration per pixel. Notably, our deep learning models can handle the varying challenges across different setups and consistently produce promising results. As shown in Fig. 4(a), the reconstructed images on the right of each setup remove strong color fringing and accurately restore dark-region details without losing the bright details. The strong similarity of the reconstructed images across different setups indicates that the developed deep learning networks are universal and insensitive to aperture size, lighting conditions, and object distance.

Figure 4: **Image reconstruction for different scenarios and algorithms.** (a) Raw and reconstructed images for all experimental combinations, with enlarged details from image centers and edges below. (b) Ground truth, raw single metalens image, and reconstructions from the proposed network and other existing networks. Only the proposed network successfully reconstructs the single metalens image.

To demonstrate the uniqueness of our reconstruction approach, it is necessary to show that existing general-purpose computational imaging networks fail to effectively reconstruct images from our metalens. As shown in Fig. 4b, we benchmarked leading super-resolution and enhancement models by upsampling our raw images and then downsampling the outputs to 512 \(\times\) 512 pixels for comparison[24, 25, 26]. To validate generalization ability, the test image is chosen from the validation set, which the network was never trained on, and the nearest-neighbor image scaling method (labeled as Nearest in Table 1) is used as a control to validate the up- and down-sampling process. It is clear that none of the existing networks can remove the chromatic aberration of metalenses. While some enhanced local details, they failed to improve global color and focus. Quantitative PSNR and SSIM analyses were conducted, and the results are listed in Table 1. Our specialized deep learning network significantly outperformed these existing methods designed for generic imagery. This confirms the necessity of tailoring the model to the unique artifacts and distortions in raw metalens images. The customized network architecture and training process are essential to learn the intricacies of correcting meta-optics aberrations computationally.
2307.01952
SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis
We present SDXL, a latent diffusion model for text-to-image synthesis. Compared to previous versions of Stable Diffusion, SDXL leverages a three times larger UNet backbone: The increase of model parameters is mainly due to more attention blocks and a larger cross-attention context as SDXL uses a second text encoder. We design multiple novel conditioning schemes and train SDXL on multiple aspect ratios. We also introduce a refinement model which is used to improve the visual fidelity of samples generated by SDXL using a post-hoc image-to-image technique. We demonstrate that SDXL shows drastically improved performance compared to the previous versions of Stable Diffusion and achieves results competitive with those of black-box state-of-the-art image generators. In the spirit of promoting open research and fostering transparency in large model training and evaluation, we provide access to code and model weights at https://github.com/Stability-AI/generative-models
Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, Robin Rombach
2023-07-04T23:04:57Z
http://arxiv.org/abs/2307.01952v1
# _SDXL_: Improving Latent Diffusion Models for High-Resolution Image Synthesis

###### Abstract

We present _SDXL_, a latent diffusion model for text-to-image synthesis. Compared to previous versions of _Stable Diffusion_, _SDXL_ leverages a three times larger UNet backbone: The increase of model parameters is mainly due to more attention blocks and a larger cross-attention context as _SDXL_ uses a second text encoder. We design multiple novel conditioning schemes and train _SDXL_ on multiple aspect ratios. We also introduce a _refinement model_ which is used to improve the visual fidelity of samples generated by _SDXL_ using a post-hoc _image-to-image_ technique. We demonstrate that _SDXL_ shows drastically improved performance compared to previous versions of _Stable Diffusion_ and achieves results competitive with those of black-box state-of-the-art image generators. In the spirit of promoting open research and fostering transparency in large model training and evaluation, we provide access to code and model weights.

## 1 Introduction

The last year has brought enormous leaps in deep generative modeling across various data domains, such as natural language [50], audio [17], and visual media [38; 37; 40; 44; 15; 3; 7]. In this report, we focus on the latter and unveil _SDXL_, a drastically improved version of _Stable Diffusion_. _Stable Diffusion_ is a latent text-to-image diffusion model (DM) which serves as the foundation for an array of recent advancements in, e.g., 3D classification [43], controllable image editing [54], image personalization [10], synthetic data augmentation [48], graphical user interface prototyping [51], etc. Remarkably, the scope of applications has been extraordinarily extensive, encompassing fields as diverse as music generation [9] and reconstructing images from fMRI brain scans [49]. User studies demonstrate that _SDXL_ consistently surpasses all previous versions of _Stable Diffusion_ by a significant margin (see Fig. 1). In this report, we present the design choices which lead to this boost in performance, encompassing _i)_ a \(3\times\) larger UNet backbone compared to previous _Stable Diffusion_ models (Sec. 2.1), _ii)_ two simple yet effective additional conditioning techniques (Sec. 2.2) which do not require any form of additional supervision, and _iii)_ a separate diffusion-based refinement model which applies a noising-denoising process [28] to the latents produced by _SDXL_ to improve the visual quality of its samples (Sec. 2.5).

A major concern in the field of visual media creation is that while black-box models are often recognized as state-of-the-art, the opacity of their architecture prevents faithfully assessing and validating their performance. This lack of transparency hampers reproducibility, stifles innovation, and prevents the community from building upon these models to further the progress of science and art. Moreover, these closed-source strategies make it challenging to assess the biases and limitations of these models in an impartial and objective way, which is crucial for their responsible and ethical deployment. With _SDXL_ we are releasing an _open_ model that achieves competitive performance with black-box image generation models (see Fig. 10 & Fig. 11).

## 2 Improving _Stable Diffusion_

In this section we present our improvements for the _Stable Diffusion_ architecture. These are modular, and can be used individually or together to extend any model.
Although the following strategies are implemented as extensions to latent diffusion models (LDMs) [38], most of them are also applicable to their pixel-space counterparts.

### Architecture & Scale

Starting with the seminal works of Ho et al. [14] and Song et al. [47], which demonstrated that DMs are powerful generative models for image synthesis, the convolutional UNet [39] architecture has been the dominant architecture for diffusion-based image synthesis. However, with the development of foundational DMs [40; 37; 38], the underlying architecture has constantly evolved: from adding self-attention and improved upscaling layers [5], over cross-attention for text-to-image synthesis [38], to pure transformer-based architectures [33].

Figure 1: _Left:_ Comparing user preferences between _SDXL_ and _Stable Diffusion_ 1.5 & 2.1. While _SDXL_ already clearly outperforms _Stable Diffusion_ 1.5 & 2.1, adding the additional refinement stage boosts performance. _Right:_ Visualization of the two-stage pipeline: We generate initial latents of size \(128\times 128\) using _SDXL_. Afterwards, we utilize a specialized high-resolution _refinement model_ and apply SDEdit [28] on the latents generated in the first step, using the same prompt. _SDXL_ and the refinement model use the same autoencoder.

We follow this trend and, following Hoogeboom et al. [16], shift the bulk of the transformer computation to lower-level features in the UNet. In particular, and in contrast to the original _Stable Diffusion_ architecture, we use a heterogeneous distribution of transformer blocks within the UNet: For efficiency reasons, we omit the transformer block at the highest feature level, use 2 and 10 blocks at the lower levels, and remove the lowest level (\(8\times\) downsampling) in the UNet altogether -- see Tab. 1 for a comparison between the architectures of _Stable Diffusion_ 1.x & 2.x and _SDXL_. We opt for a more powerful pre-trained text encoder that we use for text conditioning. Specifically, we use OpenCLIP ViT-bigG [19] in combination with CLIP ViT-L [34], where we concatenate the penultimate text encoder outputs along the channel-axis [1]. Besides using cross-attention layers to condition the model on the text-input, we follow [30] and additionally condition the model on the pooled text embedding from the OpenCLIP model. These changes result in a model size of 2.6B parameters in the UNet, see Tab. 1. The text encoders have a total size of 817M parameters.

### Micro-Conditioning

**Conditioning the Model on Image Size.** A notorious shortcoming of the LDM paradigm [38] is the fact that training a model requires a _minimal image size_, due to its two-stage architecture. The two main approaches to tackle this problem are either to discard all training images below a certain minimal resolution (for example, _Stable Diffusion_ 1.4/1.5 discarded all images with any size below 512 pixels), or, alternatively, to upscale images that are too small. However, depending on the desired image resolution, the former method can lead to significant portions of the training data being discarded, which will likely lead to a loss in performance and hurt generalization. We visualize such effects in Fig. 2 for the dataset on which _SDXL_ was pretrained. For this particular choice of data, discarding all samples below our pretraining resolution of \(256^{2}\) pixels would discard a significant 39% of the data.
The second method, on the other hand, usually introduces upscaling artifacts which may leak into the final model outputs, causing, for example, blurry samples. Instead, we propose to condition the UNet model on the original image resolution, which is trivially available during training. In particular, we provide the original (i.e., before any rescaling) height and width of the images as an additional conditioning to the model, \(\mathbf{c_{size}}=(h_{\text{original}},w_{\text{original}})\). Each component is independently embedded using a Fourier feature encoding, and these encodings are concatenated into a single vector that we feed into the model by adding it to the timestep embedding [5]. At inference time, a user can then set the desired _apparent resolution_ of the image via this _size-conditioning_. Evidently (see Fig. 3), the model has learned to associate the conditioning \(\mathbf{c_{size}}\) with resolution-dependent image features, which can be leveraged to modify the appearance of an output corresponding to a given prompt.

\begin{table} \begin{tabular}{l c c c} \hline \hline Model & _SDXL_ & SD 1.4/1.5 & SD 2.0/2.1 \\ \hline \# of UNet params & 2.6B & 860M & 865M \\ Transformer blocks & [0, 2, 10] & [1, 1, 1, 1] & [1, 1, 1, 1] \\ Channel mult. & [1, 2, 4] & [1, 2, 4, 4] & [1, 2, 4, 4] \\ Text encoder & CLIP ViT-L \& OpenCLIP ViT-bigG & CLIP ViT-L & OpenCLIP ViT-H \\ Context dim. & 2048 & 768 & 1024 \\ Pooled text emb. & OpenCLIP ViT-bigG & N/A & N/A \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of _SDXL_ and older _Stable Diffusion_ models.

Figure 2: Height-vs-Width distribution of our pre-training dataset. Without the proposed size-conditioning, 39% of the data would be discarded due to edge lengths smaller than 256 pixels, as visualized by the dashed black lines. Color intensity in each visualized cell is proportional to the number of samples.

Note that for the visualization shown in Fig. 3, we visualize samples generated by the \(512\times 512\) model (see Sec. 2.5 for details), since the effects of the size conditioning are less clearly visible after the subsequent multi-aspect (ratio) finetuning which we use for our final _SDXL_ model. We quantitatively assess the effects of this simple but effective conditioning technique by training and evaluating three LDMs on class-conditional ImageNet [4] at spatial size \(512^{2}\): For the first model (_CIN-512-only_) we discard all training examples with at least one edge smaller than \(512\) pixels, which results in a training dataset of only 70k images. For _CIN-nocond_ we use all training examples but without size conditioning. This additional conditioning is only used for _CIN-size-cond_. After training we generate 5k samples with 50 DDIM steps [46] and a (classifier-free) guidance scale of 5 [13] for every model and compute IS [42] and FID [12] (against the full validation set). For _CIN-size-cond_ we generate samples always conditioned on \(\mathbf{c_{size}}=(512,512)\). Tab. 2 summarizes the results and verifies that _CIN-size-cond_ improves upon the baseline models in both metrics. We attribute the degraded performance of _CIN-512-only_ to bad generalization due to overfitting on the small training dataset, while the effects of a mode of blurry samples in the sample distribution of _CIN-nocond_ result in a reduced FID score. Note that, although we find these classical quantitative scores not to be suitable for evaluating the performance of foundational (text-to-image) DMs [40; 37; 38] (see App. F), they remain reasonable metrics on ImageNet, as the neural backbones of FID and IS have been trained on ImageNet itself.
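A minimal sketch of this size-conditioning mechanism is given below; the embedding dimensions and module names are our own illustrative choices and do not reflect the released implementation:

```python
import math
import torch

def fourier_embed(x: torch.Tensor, dim: int = 256,
                  max_period: float = 10000.0) -> torch.Tensor:
    """Sinusoidal (Fourier feature) embedding of a scalar batch x -> (B, dim),
    analogous to the standard timestep embedding."""
    half = dim // 2
    freqs = torch.exp(-math.log(max_period) *
                      torch.arange(half, dtype=torch.float32) / half)
    args = x.float()[:, None] * freqs[None]
    return torch.cat([torch.cos(args), torch.sin(args)], dim=-1)

class SizeConditioning(torch.nn.Module):
    """Embed (h_original, w_original) per sample, concatenate the two
    encodings, project, and add the result to the timestep embedding."""
    def __init__(self, emb_dim: int = 256, t_dim: int = 1280):
        super().__init__()
        self.emb_dim = emb_dim
        self.proj = torch.nn.Linear(2 * emb_dim, t_dim)

    def forward(self, h_orig, w_orig, t_emb):
        c = torch.cat([fourier_embed(h_orig, self.emb_dim),
                       fourier_embed(w_orig, self.emb_dim)], dim=-1)
        return t_emb + self.proj(c)

# usage: condition a (B, 1280) timestep embedding on original sizes
cond = SizeConditioning(emb_dim=256, t_dim=1280)
t_emb = cond(torch.tensor([1024.0]), torch.tensor([768.0]), torch.zeros(1, 1280))
```

Crop conditioning (below) reuses the same mechanism with \((c_{\text{top}},c_{\text{left}})\) in place of \((h_{\text{original}},w_{\text{original}})\), with the embeddings concatenated along the channel dimension.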
**Conditioning the Model on Cropping Parameters.** The first two rows of Fig. 4 illustrate a typical failure mode of previous _SD_ models: Synthesized objects can be cropped, such as the cut-off head of the cat in the left examples for _SD_ 1.5 and _SD_ 2.1. An intuitive explanation for this behavior is the use of _random cropping_ during training of the model: As collating a batch in PyTorch [32] requires tensors of the same size, a typical processing pipeline is to (i) resize an image such that the shortest side matches the desired target size, followed by (ii) randomly cropping the image along the longer axis. While random cropping is a natural form of data augmentation, it can leak into the generated samples, causing the detrimental effects shown above.

Table 2: Conditioning on the original spatial size of the training examples improves performance on class-conditional ImageNet [4] at \(512^{2}\) resolution.

Figure 3: The effects of varying the size-conditioning: We draw 4 samples with the same random seed from _SDXL_ and vary the size-conditioning, \(\mathbf{c_{size}}=(64,64)\), \((128,128)\), \((256,256)\), and \((512,512)\), as depicted above each column. The image quality clearly increases when conditioning on larger image sizes. Samples from the \(512^{2}\) model, see Sec. 2.5. Note: For this visualization, we use the \(512\times 512\) pixel base model (see Sec. 2.5), since the effect of size conditioning is more clearly visible before \(1024\times 1024\) finetuning. Best viewed zoomed in.

To fix this problem, we propose another simple yet effective conditioning method: During dataloading, we uniformly sample crop coordinates \(c_{\text{top}}\) and \(c_{\text{left}}\) (integers specifying the amount of pixels cropped from the top-left corner along the height and width axes, respectively) and feed them into the model as conditioning parameters via Fourier feature embeddings, similar to the size conditioning described above. The concatenated embedding \(\mathbf{c_{\text{crop}}}\) is then used as an additional conditioning parameter. We emphasize that this technique is not limited to LDMs and could be used for any DM. Note that crop- and size-conditioning can be readily combined. In such a case, we concatenate the feature embeddings along the channel dimension before adding them to the timestep embedding in the UNet. Alg. 1 illustrates how we sample \(\mathbf{c_{\text{crop}}}\) and \(\mathbf{c_{size}}\) during training if such a combination is applied. Given that in our experience large-scale datasets are, on average, object-centric, we set \((c_{\text{top}},c_{\text{left}})=(0,0)\) during inference and thereby obtain object-centered samples from the trained model. See Fig. 5 for an illustration: By tuning \((c_{\text{top}},c_{\text{left}})\), we can successfully _simulate_ the amount of cropping during inference. This is a form of _conditioning-augmentation_, and has been used in various forms with autoregressive [20] models, and more recently with diffusion models [21].
While other methods like data bucketing [31] successfully tackle the same task, we still benefit from cropping-induced data augmentation while making sure that it does not leak into the generation process - we actually use it to our advantage to gain more control over the image synthesis process. Furthermore, it is easy to implement and can be applied in an online fashion during training, without additional data preprocessing.

### Multi-Aspect Training

Real-world datasets include images of widely varying sizes and aspect-ratios (c.f. Fig. 2). While the common output resolutions for text-to-image models are square images of \(512\times 512\) or \(1024\times 1024\) pixels, we argue that this is a rather unnatural choice, given the widespread distribution and use of landscape (e.g., 16:9) or portrait format screens. Motivated by this, we finetune our model to handle multiple aspect-ratios simultaneously: We follow common practice [31] and partition the data into buckets of different aspect ratios, where we keep the pixel count as close to \(1024^{2}\) pixels as possible, varying height and width accordingly in multiples of 64. A full list of all aspect ratios used for training is provided in App. I. During optimization, a training batch is composed of images from the same bucket, and we alternate between bucket sizes for each training step.

Figure 4: Comparison of the output of _SDXL_ with previous versions of _Stable Diffusion_. For each prompt, we show 3 random samples of the respective model for 50 steps of the DDIM sampler [46] and cfg-scale \(8.0\) [13]. Additional samples in Fig. 14.

```
0: Training dataset of images \(\mathbf{\mathcal{D}}\), target image size for training \(\mathbf{s}=(h_{\text{tgt}},w_{\text{tgt}})\)
0: Resizing function \(\mathbf{R}\), cropping function \(\mathbf{C}\)
0: Model train step \(\mathbf{T}\)
converged \(\leftarrow\) False
while not converged do
  \(x\sim\mathbf{\mathcal{D}}\)
  \(w_{\text{original}}\leftarrow\operatorname{width}(x)\)
  \(h_{\text{original}}\leftarrow\operatorname{height}(x)\)
  \(\mathbf{c_{size}}\leftarrow(h_{\text{original}},w_{\text{original}})\)
  \(x\leftarrow\mathbf{R}(x,\mathbf{s})\)  \(\triangleright\) resize so that the smaller image side matches the target size \(\mathbf{s}\)
  if \(h_{\text{original}}\leq w_{\text{original}}\) then
    \(c_{\text{left}}\sim\mathbf{\mathcal{U}}(0,\operatorname{width}(x)-s_{w})\)  \(\triangleright\) sample \(c_{\text{left}}\) from discrete uniform distribution
    \(c_{\text{top}}=0\)
  else if \(h_{\text{original}}>w_{\text{original}}\) then
    \(c_{\text{top}}\sim\mathbf{\mathcal{U}}(0,\operatorname{height}(x)-s_{h})\)  \(\triangleright\) sample \(c_{\text{top}}\) from discrete uniform distribution
    \(c_{\text{left}}=0\)
  end if
  \(\mathbf{c_{\text{crop}}}\leftarrow(c_{\text{top}},c_{\text{left}})\)
  \(x\leftarrow\mathbf{C}(x,\mathbf{s},\mathbf{c_{\text{crop}}})\)  \(\triangleright\) crop image to size \(\mathbf{s}\) with top-left coordinate \((c_{\text{top}},c_{\text{left}})\)
  converged \(\leftarrow\mathbf{T}(x,\mathbf{c_{size}},\mathbf{c_{\text{crop}}})\)  \(\triangleright\) train model conditioned on \(\mathbf{c_{size}}\) and \(\mathbf{c_{\text{crop}}}\)
end while
```
**Algorithm 1** Conditioning pipeline for size- and crop-conditioning

Figure 5: Varying the crop conditioning as discussed in Sec. 2.2. See Fig. 4 and Fig. 14 for samples from _SD_ 1.5 and _SD_ 2.1, which provide no explicit control of this parameter and thus introduce cropping artifacts. Samples from the \(512^{2}\) model, see Sec. 2.5.
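As a concrete illustration of the bucketing described above, the sketch below enumerates \((h, w)\) buckets in multiples of 64 around a \(1024^{2}\) pixel budget and assigns an image to the bucket with the closest aspect ratio. The area tolerance and side limits are illustrative assumptions, not the exact list of App. I.

```python
import math

def make_buckets(target_area: int = 1024 * 1024, step: int = 64,
                 min_side: int = 512, max_side: int = 2048):
    """Enumerate (h, w) pairs in multiples of `step` whose pixel count
    stays close to `target_area` (within ~10% here)."""
    buckets = []
    for h in range(min_side, max_side + 1, step):
        w = step * round(target_area / h / step)
        if min_side <= w <= max_side and abs(h * w - target_area) / target_area < 0.1:
            buckets.append((h, w))
    return buckets

def assign_bucket(h_img: int, w_img: int, buckets):
    """Pick the bucket whose aspect ratio is closest (in log space)."""
    ar = math.log(h_img / w_img)
    return min(buckets, key=lambda b: abs(math.log(b[0] / b[1]) - ar))

buckets = make_buckets()
print(assign_bucket(720, 1280, buckets))  # a landscape bucket, here (768, 1344)
```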
Additionally, the model receives the bucket size (or, _target size_) as a conditioning, represented as a tuple of integers \(\mathbf{c}_{\text{ar}}=(h_{\text{tgt}},w_{\text{tgt}})\), which is embedded into a Fourier space in analogy to the size- and crop-conditionings described above. In practice, we apply multi-aspect training as a finetuning stage after pretraining the model at a fixed aspect-ratio and resolution, and combine it with the conditioning techniques introduced in Sec. 2.2 via concatenation along the channel axis. Fig. 16 in App. J provides Python code for this operation. Note that crop-conditioning and multi-aspect training are complementary operations, and crop-conditioning then only works within the bucket boundaries (usually 64 pixels). For ease of implementation, however, we opt to keep this control parameter for multi-aspect models.

### Improved Autoencoder

_Stable Diffusion_ is a _LDM_, operating in a pretrained, learned (and fixed) latent space of an autoencoder. While the bulk of the semantic composition is done by the LDM [38], we can improve _local_, high-frequency details in generated images by improving the autoencoder. To this end, we train the same autoencoder architecture used for the original _Stable Diffusion_ at a larger batch-size (256 vs 9) and additionally track the weights with an exponential moving average. The resulting autoencoder outperforms the original model in all evaluated reconstruction metrics, see Tab. 3. We use this autoencoder for all of our experiments.

### Putting Everything Together

We train the final model, _SDXL_, in a multi-stage procedure. _SDXL_ uses the autoencoder from Sec. 2.4 and a discrete-time diffusion schedule [14; 45] with \(1000\) steps. First, we pretrain a base model (see Tab. 1) on an internal dataset, whose height- and width-distribution is visualized in Fig. 2, for \(600\,000\) optimization steps at a resolution of \(256\times 256\) pixels and a batch-size of \(2048\), using size- and crop-conditioning as described in Sec. 2.2. We continue training on \(512\times 512\) pixel images for another \(200\,000\) optimization steps, and finally utilize multi-aspect training (Sec. 2.3) in combination with an offset-noise [11; 25] level of \(0.05\) to train the model on different aspect ratios (Sec. 2.3, App. I) of \(\sim 1024\times 1024\) pixel area.

**Refinement Stage.** Empirically, we find that the resulting model sometimes yields samples of low local quality, see Fig. 6. To improve sample quality, we train a separate LDM in the same latent space, which is specialized on high-quality, high-resolution data, and employ a noising-denoising process as introduced by _SDEdit_ [28] on the samples from the base model. We follow [1] and specialize this refinement model on the first 200 (discrete) noise scales. During inference, we render latents from the base _SDXL_ and directly diffuse and denoise them in latent space with the refinement model (see Fig. 1), using the same text input. We note that this step is optional, but improves sample quality for detailed backgrounds and human faces, as demonstrated in Fig. 6 and Fig. 13. To assess the performance of our model (with and without refinement stage), we conduct a user study, and let users pick their favorite generation from the following four models: _SDXL_, _SDXL_ (with refiner), _Stable Diffusion_ 1.5 and _Stable Diffusion_ 2.1.
The results demonstrate that _SDXL_ with the refinement stage is the highest-rated choice, and outperforms _Stable Diffusion_ 1.5 & 2.1 by a significant margin (win rates: _SDXL_ w/ refinement: \(48.44\%\), _SDXL_ base: \(36.93\%\), _Stable Diffusion_ 1.5: \(7.91\%\), _Stable Diffusion_ 2.1: \(6.71\%\)). See Fig. 1, which also provides an overview of the full pipeline. However, when using classical performance metrics such as FID and CLIP scores, the improvements of _SDXL_ over previous methods are not reflected, as shown in Fig. 12 and discussed in App. F. This aligns with and further backs the findings of Kirstain et al. [23].

\begin{table} \begin{tabular}{l c c c c} \hline model & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & rFID \(\downarrow\) \\ \hline _SDXL_-VAE & \(\mathbf{24.7}\) & \(\mathbf{0.73}\) & \(\mathbf{0.88}\) & \(\mathbf{4.4}\) \\ _SD_-VAE 1.x & 23.4 & 0.69 & 0.96 & 5.0 \\ _SD_-VAE 2.x & 24.5 & 0.71 & 0.92 & 4.7 \\ \hline \end{tabular} \end{table} Table 3: Autoencoder reconstruction performance on the COCO2017 [26] validation split, images of size \(256\times 256\) pixels. Note: _Stable Diffusion_ 2.x uses an improved version of _Stable Diffusion_ 1.x’s autoencoder, where the decoder was finetuned with a reduced weight on the perceptual loss [55], and used more compute. Note that our new autoencoder is trained from scratch.

## 3 Future Work

This report presents a preliminary analysis of improvements to the foundation model _Stable Diffusion_ for text-to-image synthesis. While we achieve significant improvements in synthesized image quality, prompt adherence and composition, in the following we discuss a few aspects for which we believe the model may be improved further:

* Single stage: Currently, we generate the best samples from _SDXL_ using a two-stage approach with an additional refinement model. This results in having to load two large models into memory, hampering accessibility and sampling speed. Future work should investigate ways to provide a single stage of equal or better quality.
* Text synthesis: While the scale and the larger text encoder (OpenCLIP ViT-bigG [19]) help to improve the text rendering capabilities over previous versions of _Stable Diffusion_, incorporating byte-level tokenizers [52; 27] or simply scaling the model to larger sizes [53; 40] may further improve text synthesis.
* Architecture: During the exploration stage of this work, we briefly experimented with transformer-based architectures such as UViT [16] and DiT [33], but found no immediate benefit. We remain, however, optimistic that a careful hyperparameter study will eventually enable scaling to much larger transformer-dominated architectures.
* Distillation: While our improvements over the original _Stable Diffusion_ model are significant, they come at the price of increased inference cost (both in VRAM and sampling speed). Future work will thus focus on decreasing the compute needed for inference and on increasing sampling speed, for example through guidance- [29], knowledge- [6; 22; 24] and progressive distillation [41; 2; 29].
* Our model is trained in the discrete-time formulation of [14], and requires _offset-noise_ [11; 25] for aesthetically pleasing results. The EDM-framework of Karras et al. [21] is a promising candidate for future model training, as its formulation in continuous time allows for increased sampling flexibility and does not require noise-schedule corrections.
Figure 6: \(1024^{2}\) samples (with zoom-ins) from _SDXL_ without (left) and with (right) the refinement model discussed. Prompt: “Epic long distance cityscape photo of New York City flooded by the ocean and overgrown buildings and jungle ruins in rainforest, at sunset, cinematic shot, highly detailed, 8k, golden light”. See Fig. 13 for additional samples.

## Appendix A Acknowledgements

We thank all the folks at StabilityAI who worked on comparisons, code, etc., in particular: Alex Goodwin, Benjamin Aubin, Bill Cusick, Dennis Nitrosocke Niedworok, Dominik Lorenz, Harry Saini, Ian Johnson, Ju Huo, Katie May, Mohamad Diab, Peter Baylies, Rahim Entezari, Yam Levi, Yannik Marek, Yizhou Zheng. We also thank ChatGPT for providing writing assistance.

## Appendix B Limitations

While our model has demonstrated impressive capabilities in generating realistic images and synthesizing complex scenes, it is important to acknowledge its inherent limitations. Understanding these limitations is crucial for further improvements and ensuring responsible use of the technology.

Firstly, the model may encounter challenges when synthesizing intricate structures, such as human hands (see Fig. 7, top left). Although it has been trained on a diverse range of data, the complexity of human anatomy poses a difficulty in achieving accurate representations consistently. This limitation suggests the need for further scaling and training techniques specifically targeting the synthesis of fine-grained details. A reason for this occurring might be that hands and similar objects appear with very high variance in photographs, and it is hard for the model to extract the knowledge of the real 3D shape and physical limitations in that case.

Secondly, while the model achieves a remarkable level of realism in its generated images, it is important to note that it does not attain perfect photorealism. Certain nuances, such as subtle lighting effects or minute texture variations, may still be absent or less faithfully represented in the generated images. This limitation implies that caution should be exercised when relying solely on model-generated visuals for applications that require a high degree of visual fidelity.

Furthermore, the model's training process heavily relies on large-scale datasets, which can inadvertently introduce social and racial biases. As a result, the model may inadvertently exacerbate these biases when generating images or inferring visual attributes.

In certain cases where samples contain multiple objects or subjects, the model may exhibit a phenomenon known as "concept bleeding". This issue manifests as the unintended merging or overlap of distinct visual elements. For instance, in Fig. 14, orange sunglasses are observed, which indicates an instance of concept bleeding from the orange sweater. Another case of this can be seen in Fig. 8: the penguin is supposed to have a "blue hat" and "red gloves", but is instead generated with blue gloves and a red hat.

Figure 7: Failure cases of _SDXL_: despite large improvements compared to previous versions of _Stable Diffusion_, the model sometimes still struggles with very complex prompts involving detailed spatial arrangements and detailed descriptions (e.g. top left example). Moreover, hands are not yet always correctly generated (e.g. top left) and the model sometimes suffers from two concepts bleeding into one another (e.g. bottom right example). All examples are random samples generated with 50 steps of the DDIM sampler [46] and cfg-scale \(8.0\) [13].
Recognizing and addressing such occurrences is essential for refining the model's ability to accurately separate and represent individual objects within complex scenes. The root cause of this may lie in the pretrained text-encoders used: firstly, they are trained to compress all information into a single token, so they may fail at binding only the right attributes to the right objects. Feng et al. [8] mitigate this issue by explicitly encoding word relationships into the encoding. Secondly, the contrastive loss may also contribute to this, since negative examples with a different binding are needed within the same batch [35]. Additionally, while our model represents a significant advancement over previous iterations of _SD_, it still encounters difficulties when rendering long, legible text. Occasionally, the generated text may contain random characters or exhibit inconsistencies, as illustrated in Fig. 8. Overcoming this limitation requires further investigation and development of techniques that enhance the model's text generation capabilities, particularly for extended textual content -- see for example the work of Liu et al. [27], who propose to enhance text rendering capabilities via character-level text tokenizers. Alternatively, scaling the model does further improve text synthesis [53, 40].

In conclusion, our model exhibits notable strengths in image synthesis, but it is not exempt from certain limitations. The challenges associated with synthesizing intricate structures, achieving perfect photorealism, further addressing biases, mitigating concept bleeding, and improving text rendering highlight avenues for future research and optimization.

## Appendix C Diffusion Models

In this section, we give a concise summary of DMs. We consider the continuous-time DM framework [47] and follow the presentation of Karras et al. [21]. Let \(p_{\mathrm{data}}(\mathbf{x}_{0})\) denote the data distribution and let \(p(\mathbf{x};\sigma)\) be the distribution obtained by adding i.i.d. \(\sigma^{2}\)-variance Gaussian noise to the data. For sufficiently large \(\sigma_{\max}\), \(p(\mathbf{x};\sigma_{\max})\) is almost indistinguishable from \(\sigma_{\max}^{2}\)-variance Gaussian noise. Capitalizing on this observation, DMs sample high-variance Gaussian noise \(\mathbf{x}_{M}\sim\mathcal{N}\left(\mathbf{0},\sigma_{\max}^{2}\mathbf{I}\right)\) and sequentially denoise \(\mathbf{x}_{M}\) into \(\mathbf{x}_{i}\sim p(\mathbf{x}_{i};\sigma_{i})\), \(i\in\{0,\ldots,M\}\), with \(\sigma_{i}<\sigma_{i+1}\) and \(\sigma_{M}=\sigma_{\max}\). For a well-trained DM and \(\sigma_{0}=0\), the resulting \(\mathbf{x}_{0}\) is distributed according to the data.

**Sampling.** In practice, this iterative denoising process can be implemented through the numerical simulation of the _Probability Flow_ ordinary differential equation (ODE) [47] \[d\mathbf{x}=-\dot{\sigma}(t)\sigma(t)\nabla_{\mathbf{x}}\log p(\mathbf{x};\sigma(t))\:dt, \tag{1}\] where \(\nabla_{\mathbf{x}}\log p(\mathbf{x};\sigma)\) is the _score function_ [18]. The schedule \(\sigma(t)\colon[0,1]\to\mathbb{R}_{+}\) is user-specified and \(\dot{\sigma}(t)\) denotes the time derivative of \(\sigma(t)\).
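As a minimal illustration of how Eq. (1) is simulated, the sketch below performs plain Euler integration under the common choice \(\sigma(t)=t\), using the denoiser-based score parametrization \(\nabla_{\mathbf{x}}\log p(\mathbf{x};\sigma)\approx(D(\mathbf{x};\sigma)-\mathbf{x})/\sigma^{2}\) introduced in the Training paragraph below; the oracle denoiser and the schedule are illustrative assumptions.

```python
import torch

def euler_probability_flow(denoise, sigmas, x):
    """Euler integration of Eq. (1) with sigma(t) = t, for which
    dx/dsigma = (x - D(x; sigma)) / sigma.
    `denoise` is a (hypothetical) learned denoiser D(x; sigma);
    `sigmas` is a decreasing schedule ending at 0."""
    for sigma, sigma_next in zip(sigmas[:-1], sigmas[1:]):
        d = (x - denoise(x, sigma)) / sigma     # = -sigma * score
        x = x + (sigma_next - sigma) * d        # Euler step
    return x

# toy check with an "oracle" denoiser for data concentrated at 0:
# D(x; sigma) = 0 pulls every trajectory towards the single data point
sigmas = torch.linspace(80.0, 0.0, 50)
x = 80.0 * torch.randn(4, 2)
x0 = euler_probability_flow(lambda x, s: torch.zeros_like(x), sigmas, x)
print(x0.abs().max())  # ~0: all trajectories reach the data point
```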
Alternatively, we may also numerically simulate a stochastic differential equation (SDE) [47; 21]: \[d\mathbf{x}=\underbrace{-\dot{\sigma}(t)\sigma(t)\nabla_{\mathbf{x}}\log p(\mathbf{x};\sigma(t))\:dt}_{\text{Probability Flow ODE; see Eq.~{}(1)}}\underbrace{-\,\beta(t)\sigma^{2}(t)\nabla_{\mathbf{x}}\log p(\mathbf{x};\sigma(t))\:dt+\sqrt{2\beta(t)}\sigma(t)\:d\omega_{t}}_{\text{Langevin diffusion component}}, \tag{2}\] where \(d\omega_{t}\) is the standard Wiener process. In principle, simulating either the Probability Flow ODE or the SDE above results in samples from the same distribution.

**Training.** DM training reduces to learning a model \(\mathbf{s_{\theta}}(\mathbf{x};\sigma)\) for the score function \(\nabla_{\mathbf{x}}\log p(\mathbf{x};\sigma)\). The model can, for example, be parameterized as \(\nabla_{\mathbf{x}}\log p(\mathbf{x};\sigma)\approx s_{\mathbf{\theta}}(\mathbf{x};\sigma)=(D_{\mathbf{\theta}}(\mathbf{x};\sigma)-\mathbf{x})/\sigma^{2}\) [21], where \(D_{\mathbf{\theta}}\) is a learnable _denoiser_ that, given a noisy data point \(\mathbf{x}_{0}+\mathbf{n}\), \(\mathbf{x}_{0}\sim p_{\mathrm{data}}(\mathbf{x}_{0})\), \(\mathbf{n}\sim\mathcal{N}\left(\mathbf{0},\sigma^{2}\mathbf{I}_{d}\right)\), and conditioned on the noise level \(\sigma\), tries to predict the clean \(\mathbf{x}_{0}\). The denoiser \(D_{\mathbf{\theta}}\) (or equivalently the score model) can be trained via _denoising score matching_ (DSM) \[\mathbb{E}_{(\mathbf{x}_{0},\mathbf{c})\sim p_{\mathrm{data}}(\mathbf{x}_{0},\mathbf{c}),(\sigma,\mathbf{n})\sim p(\sigma,\mathbf{n})}\left[\lambda_{\sigma}\|D_{\mathbf{\theta}}(\mathbf{x}_{0}+\mathbf{n};\sigma,\mathbf{c})-\mathbf{x}_{0}\|_{2}^{2}\right], \tag{3}\] where \(p(\sigma,\mathbf{n})=p(\sigma)\:\mathcal{N}\left(\mathbf{n};\mathbf{0},\sigma^{2}\mathbf{I}_{d}\right)\), \(p(\sigma)\) is a distribution over noise levels \(\sigma\), \(\lambda_{\sigma}\colon\mathbb{R}_{+}\to\mathbb{R}_{+}\) is a weighting function, and \(\mathbf{c}\) is an arbitrary conditioning signal, e.g., a class label, a text prompt, or a combination thereof. In this work, we choose \(p(\sigma)\) to be a discrete distribution over 1000 noise levels and set \(\lambda_{\sigma}=\sigma^{-2}\), similar to prior works [14; 38; 45].

**Classifier-free guidance.** Classifier-free guidance [13] is a technique to guide the iterative sampling process of a DM towards a conditioning signal \(\mathbf{c}\) by mixing the predictions of a conditional and an unconditional model \[D^{w}(\mathbf{x};\sigma,\mathbf{c})=(1+w)D(\mathbf{x};\sigma,\mathbf{c})-wD(\mathbf{x};\sigma), \tag{4}\] where \(w\geq 0\) is the _guidance strength_. In practice, the unconditional model can be trained jointly alongside the conditional model in a single network by randomly replacing the conditioning signal \(\mathbf{c}\) with a null embedding in Eq. (3), e.g., 10% of the time [13]. Classifier-free guidance is widely used to improve the sampling quality of text-to-image DMs, at the cost of reduced sample diversity [30; 38].

## Appendix D Comparison to the State of the Art

Figure 8: Qualitative comparison of _SDXL_ with DeepFloyd IF, DALLE-2, Bing Image Creator, and Midjourney v5.2. To mitigate any bias arising from cherry-picking, Parti (P2) prompts were randomly selected. Seed 3 was uniformly applied across all models in which such a parameter could be designated. For models without a seed-setting feature, the first generated image is included.
## Appendix E Comparison to Midjourney v5.1

### Overall Votes

To assess the generation quality of _SDXL_, we perform a user study against the state-of-the-art text-to-image generation platform Midjourney1. As the source for image captions we use the PartiPrompts (P2) benchmark [53], which was introduced to compare large text-to-image models on various challenging prompts.

Footnote 1: We compare against v5.1 since that was the best version available at that time.

For our study, we choose five random prompts from each category, and generate four \(1024\times 1024\) images by both Midjourney (v5.1, with a set seed of 2) and _SDXL_ for each prompt. These images were then presented to the AWS GroundTruth taskforce, who voted based on adherence to the prompt. The results of these votes are illustrated in Fig. 9. Overall, there is a slight preference for _SDXL_ over Midjourney in terms of prompt adherence.

### Category & challenge comparisons on PartiPrompts (P2)

Each prompt from the P2 benchmark is organized into a category and a challenge, each focusing on different difficult aspects of the generation process. We show the comparisons for each category (Fig. 10) and challenge (Fig. 11) of P2 below. In four out of six categories _SDXL_ outperforms Midjourney, and in seven out of ten challenges there is no significant difference between both models or _SDXL_ outperforms Midjourney.

Figure 10: User preference comparison of _SDXL_ (without refinement model) and Midjourney V5.1 across particular text categories. _SDXL_ outperforms Midjourney V5.1 in all but two categories.

Figure 9: Results from 17,153 user preference comparisons between _SDXL_ v0.9 and Midjourney v5.1, which was the latest version available at the time. The comparisons span all “categories” and “challenges” in the PartiPrompts (P2) benchmark. Notably, _SDXL_ was favored 54.9% of the time over Midjourney V5.1. Preliminary testing indicates that the recently-released Midjourney V5.2 has lower prompt comprehension than its predecessor, but the laborious process of generating multiple prompts hampers the speed of conducting broader tests.

## Appendix F On FID Assessment of Generative Text-Image Foundation Models

Throughout the last years it has been common practice for generative text-to-image models to assess FID- [12] and CLIP-scores [34; 36] in a zero-shot setting on complex, small-scale text-image datasets of natural images such as COCO [26]. However, with the advent of foundational text-to-image models [40; 37; 38; 1], which not only target visual compositionality, but also aim at other difficult tasks such as deep text understanding, fine-grained distinction between unique artistic styles and especially a pronounced sense of visual aesthetics, this particular form of model evaluation has become more and more questionable. Kirstain et al. [23] demonstrate that COCO zero-shot FID is _negatively correlated_ with visual aesthetics, and that the generative performance of such models should rather be judged by human evaluators. We investigate this for _SDXL_ and visualize FID-vs-CLIP curves in Fig. 12 for 10k text-image pairs from COCO [26]. Despite its drastically improved performance, as measured quantitatively by asking human assessors (see Fig. 1) as well as qualitatively (see Fig. 4 and Fig. 14), _SDXL_ does _not_ achieve better FID scores than the previous _SD_ versions. On the contrary, the FID for _SDXL_ is the worst of all three compared models, while it only shows slightly improved CLIP-scores (measured with OpenClip ViT g-14).
Thus, our results back the findings of Kirstain et al. [23] and further emphasize the need for additional quantitative performance scores, specifically for text-to-image foundation models. All scores have been evaluated based on 10k generated examples.

Figure 11: Preference comparisons of _SDXL_ (with refinement model) to Midjourney V5.1 on complex prompts. _SDXL_ either outperforms or is statistically equal to Midjourney V5.1 in 7 out of 10 categories.

Figure 12: Plotting FID vs. CLIP score for different cfg scales. _SDXL_ shows only slightly improved text-alignment, as measured by CLIP-score, compared to previous versions; these metrics do not align with the judgement of human evaluators. Furthermore, similar to [23], the FID scores are worse than for both _SD-1.5_ and _SD-2.1_, while human evaluators clearly prefer the generations of _SDXL_ over those of these previous models.

## Appendix G Additional Comparison between Single- and Two-Stage _SDXL_ Pipeline

Figure 13: _SDXL_ samples (with zoom-ins) without (left) and with (right) the refinement model discussed. Prompt: (_top_) “close up headshot, futuristic young woman, wild hair sly smile in front of gigantic UFO, dslr, sharp focus, dynamic composition” (_bottom_) “Three people having dinner at a table at new years eve, cinematic shot, 8k”. Zoom-in for details.

## Appendix H Comparison between _SD 1.5_ vs. _SD 2.1_ vs. _SDXL_

## Appendix I Multi-Aspect Training Hyperparameters

We use the following image resolutions for mixed-aspect ratio finetuning as described in Sec. 2.3.

Figure 15: Additional results for the comparison of the output of _SDXL_ with previous versions of _Stable Diffusion_. For each prompt, we show 3 random samples of the respective model for 50 steps of the DDIM sampler [46] and cfg-scale \(8.0\) [13].

## Appendix J Pseudo-code for Conditioning Concatenation along the Channel Axis

```
from einops import rearrange
import torch

batch_size = 16
# channel dimension of pooled output of text encoder(s)
pooled_dim = 512


def fourier_embedding(inputs, outdim=256, max_period=10000):
    """
    Classical sinusoidal timestep embedding
    as commonly used in diffusion models
    :param inputs: batch of integer scalars, shape [b,]
    :param outdim: embedding dimension
    :param max_period: max freq added
    :return: batch of embeddings of shape [b, outdim]
    """
    ...

def cat_along_channel_dim(
        x: torch.Tensor, ) -> torch.Tensor:
    if x.ndim == 1:
        x = x[..., None]
    assert x.ndim == 2
    b, d_in = x.shape
    x = rearrange(x, "b dim -> (b dim)")
    # fourier fn adds an additional dimension
    emb = fourier_embedding(x)
    d_f = emb.shape[-1]
    emb = rearrange(emb, "(b dim) df -> b (dim df)",
                    b=b, dim=d_in, df=d_f)
    return emb


def concat_embeddings(
        # batch of size and crop conditionings, cf. Sec. 2.2
        c_size: torch.Tensor,
        c_crop: torch.Tensor,
        # batch of aspect-ratio conditioning, cf. Sec. 2.3
        c_ar: torch.Tensor,
        # final output of text encoders after pooling, cf. Sec. 2.1
        c_pooled_txt: torch.Tensor, ) -> torch.Tensor:
    # fourier feature for size conditioning
    c_size_emb = cat_along_channel_dim(c_size)
    # fourier feature for crop conditioning
    c_crop_emb = cat_along_channel_dim(c_crop)
    # fourier feature for aspect-ratio conditioning
    c_ar_emb = cat_along_channel_dim(c_ar)
    # the concatenated output is mapped to the same
    # channel dimension as the noise-level conditioning
    # and added to that conditioning before being fed to the unet
    return torch.cat([c_pooled_txt,
                      c_size_emb,
                      c_crop_emb,
                      c_ar_emb], dim=1)


# simulating c_size and c_crop as in Sec. 2.2
c_size = torch.zeros((batch_size, 2)).long()
c_crop = torch.zeros((batch_size, 2)).long()
# simulating c_ar and pooled text encoder output as in Sec. 2.3
c_ar = torch.zeros((batch_size, 2)).long()
c_pooled = torch.zeros((batch_size, pooled_dim)).long()

# get concatenated embedding
c_concat = concat_embeddings(c_size, c_crop, c_ar, c_pooled)
```

Figure 16: Python code for concatenating the additional conditionings introduced in Secs. 2.1 to 2.3 along the channel dimension.
2303.02774
Fourier-over-Spheroid shape parametrization applied to nuclear fission dynamics
We propose a new, rapidly convergent, the so-called Fourier over Spheroid (FoS), shape parametrization to model fission of heavy nuclei. Four collective coordinates are used to characterize the shape of the fissioning system, being its elongation, left-right asymmetry, neck size, and non-axiality. The potential energy landscape is computed within the macroscopic-microscopic approach, on the top of which the multi-dimensional Langevin equation is solved to describe the dynamics. Charge equilibration at scission and de-excitation of the primary fragments after scission are further considered. The model gives access to a wide variety of observables, including fission fragments mass, charge, and kinetic energy yields, fragment mean N/Z and post-scission neutron multiplicities, and importantly, their correlations. The latter are crucial to unravel the complexity of the fission process. The parameters of the model were tuned to reproduce experimental observation from thermal neutron-induced fission of 235U, and next used to discuss the transition from the asymmetric to symmetric fission along the Fm isotopic chain.
K. Pomorski, B. Nerlo-Pomorska, C. Schmitt, Z. G. Xiao, Y. J. Chen, L. L. Liu
2023-03-05T21:26:08Z
http://arxiv.org/abs/2303.02774v1
# Fourier-over-Spheroid shape parametrization applied to nuclear fission dynamics

###### Abstract

We propose a new, rapidly convergent, so-called Fourier-over-Spheroid (FoS) shape parametrization to model fission of heavy nuclei. Four collective coordinates are used to characterize the shape of the fissioning system, being its elongation, left-right asymmetry, neck size, and non-axiality. The potential energy landscape is computed within the macroscopic-microscopic approach, on top of which the multi-dimensional Langevin equation is solved to describe the dynamics. Charge equilibration at scission and de-excitation of the primary fragments after scission are further considered. The model gives access to a wide variety of observables, including fission fragment mass, charge, and kinetic energy yields, fragment mean N/Z and post-scission neutron multiplicities, and importantly, their correlations. The latter are crucial to unravel the complexity of the fission process. The parameters of the model were tuned to reproduce experimental observations from thermal neutron-induced fission of \({}^{235}\)U, and next used to discuss the transition from asymmetric to symmetric fission along the Fm isotopic chain.

KEYWORDS: nuclear fission, macro-micro model, fission fragment mass and TKE yields, post-scission neutron multiplicity and neutron excess

pacs: 24.75.+i, 25.85.-w, 28.41.A

## I Introduction

Fission is a dynamical process along which a nucleus progressively deforms (either spontaneously or triggered by an external perturbation) from an initial compact configuration up to a point where it splits into two fragments. This evolution is an intricate puzzle, involving a complex re-arrangement of the many-body neutron and proton quantum systems. Intense effort has been invested in fission studies since its discovery, both on the experimental and theoretical fronts, due to its impact on fundamental nuclear physics and astrophysics, as well as on a wide variety of societal applications. Modeling fission, in general, implies four stages: (i) the definition of the initial conditions of the system, (ii) its dynamical evolution, and rearrangement into specific configurations of fragment pairs with corresponding probabilities, (iii) the (fast) prompt de-excitation of the excited fragments, and (iv) the (slow) decay towards \(\beta\)-stability of those fragments which are radioactive. The recent review by Schunck and Regnier [1] gives an excellent panorama of contemporary fission theories, and further details about the foundations can be found in the textbook by Krappe and Pomorski [2]. In spontaneous and low-energy (mostly neutron-induced) fission, the initial conditions are well defined. The radioactive decay of the fission products is well known also. To understand fission, the challenge thus mainly resides in the description of stages (ii) and (iii). These are not independent of one another: stage (iii) critically depends on the properties (\(N\), \(Z\), excitation energy and angular momentum) of the (primary) fragments produced at scission at the end of stage (ii). While experimental information was restricted to fission-fragment mass distributions with limited resolution for several decades [3], recent developments give access to a wide variety of observables and their correlations, and this with unprecedented resolution [4; 5; 6; 7; 8]. Such information is essential to unravel in an unambiguous way the intricacies of the fission process.
It is obviously of primary importance for constraining theory, but it also poses a tremendous challenge, which is the requirement of modeling all aspects of the mechanism and their mutual interdependences. According to the complexity of the fission process, its description remains a challenge for theory, and various models have been proposed over the years. The last decade has seen the tremendous development of microscopic, self-consistent models. Unfortunately, their quantitative description remains limited so far, and computing time makes systematic calculations impossible even on supercomputers [1]. Transport models within the macroscopic-microscopic approach have been established as a very good alternative. In this framework, the process is given by the solution of a classical equation of motion picturing the real-time evolution of the system on its potential energy landscape (PEL) under the influence of inertia, dissipation and fluctuations [9]. Systematic studies covering different regions of the nuclear chart are nowadays computationally tractable. Such widespread investigations are indispensable to converge towards a universal understanding of the process [10]. Sophisticated macroscopic-microscopic models based on the solution of the multi-dimensional Langevin equation, or some variant of it, were developed during the last two decades [11; 1]. In these models, three main ingredients are required: a parametrization of the nuclear shape involving as few deformation coordinates as possible, a prescription for the potential energy of the nucleus, and a modelization of the inertia and friction forces. Aritomo et al. [12] and Usang et al. [13] developed, respectively, a 3D and a 4D dynamical model for explaining fragment mass and total kinetic energy (TKE) distributions in spontaneous and low-energy fission. Unfortunately, these models do not compute the post-scission de-excitation of the fragments. Furthermore, the hypothesis of unchanged charge density (UCD), _i.e._ that the fragments have the same \(N/Z\) ratio as the fissioning system, is assumed in the model of Aritomo et al. Finally, evaporation prior to scission (so-called multi-chance fission) is not considered, making these codes unsuited for initial excitation energies of the fissioning system above 10 MeV or so [14]. The Brownian shape motion model by Randrup and Moller [15] is based on today's highest-quality 5D potential energy landscapes. While its enhanced version by Albertsson et al. [16] adds the post-scission stage, similarly to the earlier code, the UCD assumption is made. Moller and Ichikawa [17] went beyond this hypothesis, treating neutrons and protons independently, which renders the model "6D". Unfortunately, this version is still to be combined with the post-scission stage of Ref. [16]. Furthermore, like for Refs. [12; 13], the possibility of multi-chance fission is not implemented. In our previous works [18; 19], we have developed an innovative nuclear shape parametrization, the Fourier parametrization, which was demonstrated to gather within 4 collective coordinates the main features of the shapes relevant to fission. The new shape parametrization was successfully used within the Born-Oppenheimer approximation [20] to describe fission fragment mass yields [21; 22]. We further implemented this parametrization (restricted to 3D), with a suited PEL prescription and inertia and friction forces borrowed from classical mechanics, into a Langevin code.
The latter proved able to reasonably describe fragment mass and TKE distributions from low-energy fission of typical actinides [23]. It was also used for predictions in the super-heavy element region [24]. The present work is a two-fold extension of these papers. First, we present an enhanced version of our shape parametrization, called the Fourier over Spheroid (FoS) [25; 26]. Second, we develop the previous Langevin code by proposing a method to compute (i) the fragment (\(N\), \(Z\)) composition, _i.e._ lifting the UCD assumption, and (ii) the fragment properties in terms of excitation energy and deformation at the instant of scission 1. This information is finally used as input in the extension of the code to the calculation of the post-scission stage. Altogether, this is demonstrated to offer a particularly fast and flexible way to compute a wide variety of observables. Comparison with experiment is made wherever possible for spontaneous and low-energy fission. Although not treated in this manuscript, work to account for multi-chance fission is in progress.

Footnote 1: At present, the angular momentum of the fragments is not treated in the model.

## II Model

In this section the various ingredients entering the here-developed model are presented. Thermal neutron-induced fission of \({}^{235}\)U is taken as an example to illustrate the main features of the theory and the variety of observables computed by the code. In Section III the model is applied to spontaneous fission of fermium.

### Nuclear shape parametrization

The surface of the fissioning nucleus is described in the cylindrical coordinates \((\rho,\varphi,z)\) by the following formula [26]: \[\rho^{2}(z,\varphi)=\frac{R_{0}^{2}}{c}\,f\left(\frac{z-z_{\rm sh}}{z_{0}}\right)\frac{1-\eta^{2}}{1+\eta^{2}+2\eta\cos(2\varphi)}\, \tag{1}\] where \(\rho(z,\varphi)\) is the distance from the \(z\)-axis to the surface. The function \(f(u)\) defines the shape of a nucleus having half-length \(c=1\): \[f(u)=1-u^{2}-\sum_{k=1}^{n}\left\{a_{2k}\cos\!\left[\left(k-\tfrac{1}{2}\right)\pi u\right]+a_{2k+1}\sin(k\pi u)\right\}\, \tag{2}\] where \(-1\leq u\leq 1\) and the expansion coefficients \(a_{i}\) are treated as the deformation parameters. The first two terms in \(f(u)\) describe a sphere. The volume conservation condition implies \(a_{2}=a_{4}/3-a_{6}/5+\dots\). The parameter \(c\) determines the elongation of the nucleus keeping its volume fixed, while \(a_{3}\) and \(a_{4}\) describe the reflectional asymmetry and the neck size, respectively. The half-length is \(z_{0}=cR_{0}\), where \(R_{0}\) is the radius of a sphere with the same volume. The \(z\)-coordinate varies in the range \(-z_{0}+z_{\rm sh}\leq z\leq z_{0}+z_{\rm sh}\). The shift \(z_{\rm sh}=-3/(4\pi)\,z_{0}\,(a_{3}-a_{5}/2+\dots)\) places the center of mass of the nucleus at the origin of the coordinate system. The parameter \(\eta\) describes a possible elliptical, non-axial deformation of the nucleus. Formula (1) is entirely equivalent to the one based on the Fourier expansion described in Ref. [19]. Here, the deviation from a sphere with radius \(\rho=1\) is first expanded in a Fourier series, and subsequently this deformed object of length \(2R_{0}\) is scaled to an elongation equal to \(2cR_{0}\). Formula (1) is better adapted to the calculation of the PEL of nuclei on a mesh in the multi-dimensional deformation parameter \((c,a_{3},a_{4},...,a_{n})\) space, since the range of variability of the \(a_{i}\) coefficients does not depend on the elongation \(c\).
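As a quick numerical illustration of the parametrization (not the actual production code of the model), the following Python sketch evaluates the profile \(f(u)\) of Eq. (2) truncated at \(n=3\) and checks the volume-conservation condition, as well as the symmetric-scission condition quoted just below; the chosen deformation values are arbitrary.

```python
import numpy as np

def f_profile(u, a3=0.0, a4=0.0, a5=0.0, a6=0.0):
    """FoS profile f(u) of Eq. (2) truncated at n = 3, with
    a2 fixed by volume conservation: a2 = a4/3 - a6/5."""
    a2 = a4 / 3.0 - a6 / 5.0
    even = (a2 * np.cos(0.5 * np.pi * u)
            + a4 * np.cos(1.5 * np.pi * u)
            + a6 * np.cos(2.5 * np.pi * u))
    odd = a3 * np.sin(np.pi * u) + a5 * np.sin(2.0 * np.pi * u)
    return 1.0 - u**2 - even - odd

u = np.linspace(-1.0, 1.0, 2001)
# volume conservation: the integral of f over [-1, 1] equals 4/3
# (the spherical value) independently of the deformations a_i
print(np.trapz(f_profile(u, a3=0.1, a4=0.4), u))   # ~1.3333
# symmetric scission: the neck f(0) closes at a4 = 3/4 - (3/5) a6
a6 = 0.1
print(f_profile(np.array([0.0]), a4=0.75 - 0.6 * a6, a6=a6))  # ~0
```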
In addition, the mass ratio of the fragments, their relative distance, and the radius of the neck between them, measured in \(z_{0}\) units, do not depend on the elongation of the nucleus. It is also worth noticing that for reflection-symmetric shapes the geometrical scission point appears when \(a_{4}=a_{4}^{\rm sc}=\frac{3}{4}-\frac{3}{5}a_{6}+\dots\), independently of the elongation \(c\). Such properties of the present FoS shape parametrization make it very useful for all kinds of calculations related to nuclear fission.

The PELs of fissioning nuclei are obtained in the 4D space of deformation parameters \((c,a_{3},a_{4},\eta)\) using the macro-micro model [27]. The macroscopic part of the energy is evaluated according to the Lublin-Strasbourg-Drop (LSD) formula [28], while the microscopic energy corrections are calculated using the Yukawa-folded single-particle potential [29] and the Strutinsky shell correction method [27; 30]. The pairing correlations are described within the BCS formalism [31], with an approximate projection on a good particle number [32; 33]. All parameters of the macro-micro model used in the present paper are the same as in Ref. [34]. A typical PEL, that of the fissioning nucleus \({}^{236}\)U, is shown in Fig. 1. It is a projection of the 4D PEL onto the \((c,a_{4})\) plane, _i.e._, each energy point in the \((c,a_{4})\) map is minimized with respect to the non-axial \(\eta\) and reflectional \(a_{3}\) deformation parameters. The ground state (g.s.), first (A), and second (B) saddle points are marked in the plot. Beyond the second saddle B, two separate paths develop, an asymmetric one and a symmetric one. The exit points from the fission barrier leading to the asymmetric (C) and symmetric (D) fission valleys are also marked. The upper value of the neck parameter, \(a_{4}=0.72\), corresponds to a neck radius approximately equal to the nucleon radius, \(r_{\rm neck}=r_{0}\), which we assume in the following as the scission criterion. The non-axial degree of freedom is important at smaller elongations of the nucleus, up to the neighborhood of the second saddle. At larger deformation, its effect is negligible, allowing us to restrict the Langevin calculations to 3D when discussing fission dynamics. Moreover, the role of the higher-order deformation parameters \(a_{5}\) and \(a_{6}\) is rather small even in the region of well-separated fission fragments, as was shown in Ref. [24]. The \((c,\,\rm A_{h})\) cross-section of the PEL of \({}^{236}\)U at \(a_{4}=0.72\) is presented in Fig. 2. This cross-section corresponds roughly to scission (\(r_{\rm neck}\simeq r_{0}\)), as noted above. Here \(\rm A_{h}\) is the heavy-fragment mass number. The close-to-scission configuration of the asymmetric valley evidenced in Fig. 1 corresponds to the minimum at \(\rm A_{h}=140\) and \(c=2.2\), while the end of the symmetric valley of Fig. 1 occurs at \(c=2.83\). As expected, asymmetric fission of uranium leads to a more compact scission configuration as compared to symmetric splitting.

### Dynamical evolution

The Langevin equation governs the dissipative fission dynamics.
In the generalized coordinates \((\{q_{i}\},\;i=1,2,...,n)\) it has the following form [2]: \[\begin{array}{rl}\frac{dq_{i}}{dt}=&\sum\limits_{j}\,[{\cal M}^{-1}(\vec{q})]_{i\,j}\;p_{j}\\ \frac{dp_{i}}{dt}=&-\frac{1}{2}\sum\limits_{j,k}\,\frac{\partial[{\cal M}^{-1}]_{jk}}{\partial q_{i}}\;p_{j}\;p_{k}-\frac{\partial V(\vec{q})}{\partial q_{i}}\\ &-\sum\limits_{j,k}\gamma_{ij}(\vec{q})\;[{\cal M}^{-1}]_{jk}\;p_{k}+F_{i}(t)\.\end{array} \tag{3}\] Here \(V(\vec{q})=E_{\rm pot}(\vec{q})-a(\vec{q})T^{2}\) is the free energy of the fissioning nucleus having temperature \(T\) and single-particle level-density parameter \(a(\vec{q})\). The potential energy \(E_{\rm pot}(\vec{q})\) at a given deformation point \((\vec{q})\) is given by the macroscopic-microscopic prescription quoted in the previous section, and the level-density parameter \(a(\vec{q})\) at the corresponding deformation is taken from Ref. [35]. The inertia \(\cal M\) and friction \(\gamma_{ij}\) tensors are evaluated in the irrotational-flow and the wall approximation, respectively, as described in Refs. [24; 36].

Figure 1: Potential energy surface of \({}^{236}\)U on the \((c,\,a_{4})\) plane. Each point is minimized with respect to the non-axial \((\eta)\) and the reflectional \((a_{3})\) deformations.

Figure 2: Potential energy surface of \({}^{236}\)U around the scission configuration \((a_{4}=0.72)\) on the \((c,\,\rm A_{h})\) plane. Each point is minimized with respect to the non-axial \((\eta)\) deformations.

The vector \(\vec{F}(t)\) stands for the random Langevin force, which couples the collective dynamics to the intrinsic degrees of freedom and is defined as: \[F_{i}(t)=\sum_{j}\,g_{ij}(\vec{q}\,)\,G_{j}(t)\, \tag{4}\] where \(\vec{G}(t)\) is a stochastic function whose strength \(g(\vec{q}\,)\) is given by the diffusion tensor \({\cal D}(\vec{q}\,)\) defined by the generalized Einstein relation: \[{\cal D}_{ij}=T^{*}\gamma_{ij}=\sum_{k}\,g_{ik}\;g_{jk}\, \tag{5}\] where \[T^{*}=E_{0}/{\rm tanh}\left(\frac{E_{0}}{T}\right). \tag{6}\] Here \(E_{0}=3\times 0.5\) MeV is the zero-point collective energy. The temperature \(T\) is obtained from the thermal excitation energy \(E^{*}\), defined as the difference between the initial energy (\(E_{\rm init}\)) and the total collective energy, being the sum of the kinetic (\(E_{\rm kin}\)) and potential (\(V\)) energies of the fissioning nucleus at a given deformation point (\(\vec{q}\)): \[a(\vec{q}\,)T^{2}=E^{*}(\vec{q}\,)=E_{\rm init}-(E_{\rm kin}+V). \tag{7}\] For a given fissioning system, several thousands of Langevin trajectories leading to scission are run. From such samples, the properties of the primary fragments are evaluated, in the first place the mass and kinetic energy distributions presented below.
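For illustration, a minimal Euler discretization of Eq. (3), with the fluctuation strength fixed by Eq. (5), may look as follows; constant inertia and friction tensors and a toy quadratic potential are assumed here, whereas in the actual model \({\cal M}\), \(\gamma\) and \(V\) depend on the deformation \(\vec{q}\) (and the \(\partial{\cal M}^{-1}/\partial q\) term then contributes).

```python
import numpy as np

rng = np.random.default_rng(0)

def langevin_step(q, p, dt, grad_V, M_inv, gamma, T_eff):
    """One Euler step of Eq. (3) for fixed (deformation-independent)
    inertia and friction tensors -- a simplification of the full model."""
    # random force: F = g G with g g^T = T* gamma (Eq. 5)
    g = np.linalg.cholesky(T_eff * gamma)
    F = g @ rng.standard_normal(len(q)) / np.sqrt(dt)
    dq = M_inv @ p
    dp = -grad_V(q) - gamma @ (M_inv @ p) + F
    return q + dq * dt, p + dp * dt

# toy 2D run in a quadratic potential V = q.q/2
q, p = np.array([1.0, 0.5]), np.zeros(2)
M_inv = np.eye(2); gamma = 0.5 * np.eye(2); T_eff = 0.1
for _ in range(10000):
    q, p = langevin_step(q, p, 1e-3, lambda q: q, M_inv, gamma, T_eff)
print(q, p)  # trajectory fluctuates around the potential minimum
```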
#### Mass yields

The primary, or so-called pre-neutron, fission fragment mass yield as obtained for thermal neutron-induced fission of \({}^{235}\)U is shown in Fig. 3. Note that it was assumed here that each Langevin trajectory begins randomly in the region of the 2nd saddle (B), with the half-width of the initial distribution equal to the distance between the mesh points (\(\delta q_{i}{=}0.03\)). It was observed that this leads to a predicted mass yield of \({}^{236}\)U which is almost independent of the starting point: similar mass distributions are obtained when starting from the ground-state deformation or from the first saddle (A). Our result describes pretty well the maxima and the tails of the experimental mass yield at large asymmetry [37]. However, the yield at symmetry is slightly overestimated.

Figure 3: Fission fragment mass yield of n\({}_{\rm th}\) + \({}^{235}\)U as a function of the mass of the fragment. The experimental data are taken from Ref. [37].

#### Total kinetic energy

For each Langevin trajectory, the total kinetic energy (TKE) of the fragments \(E_{\rm kin}^{\rm frag}\) is given by the sum of the Coulomb repulsion energy (\(V_{\rm Coul}\)), the nuclear interaction energy of the fragments (\(V_{\rm nuc}\)), and the pre-fission kinetic energy of the relative motion (\(E_{\rm kin}^{\rm coll}\)), all evaluated at the scission point (\(q_{\rm sc}\)): \[E_{\rm kin}^{\rm frag}=V_{\rm Coul}(q_{\rm sc})+E_{\rm kin}^{\rm coll}(q_{\rm sc})+V_{\rm nuc}(q_{\rm sc}). \tag{8}\] The Coulomb repulsion energy is equal to the difference between the total Coulomb energy of the nucleus at the scission configuration and the Coulomb energies of both deformed fragments: \[V_{\rm Coul}=\frac{3e^{2}}{5r_{0}}\left[\frac{Z^{2}}{A^{1/3}}B_{\rm C}(\vec{q}_{\rm sc})-\frac{Z_{\rm h}^{2}}{A_{\rm h}^{1/3}}\,B_{\rm C}(\vec{q}_{\rm h})-\frac{Z_{\rm l}^{2}}{A_{\rm l}^{1/3}}\,B_{\rm C}(\vec{q}_{\rm l})\right]\, \tag{9}\] where \(r_{0}=1.217\,\)fm is the same charge radius constant as in the LSD mass formula [28] and \(B_{\rm C}\) is the ratio of the Coulomb energies of the deformed and the spherical nucleus. The nuclear interaction between the fragments at the scission point is approximately equal to the change of the nuclear surface energy when the neck breaks: \[\begin{array}{rl}V_{\rm nuc}(q_{\rm sc})&=-2\times E_{\rm surf}(0)\frac{\pi r_{\rm neck}^{2}({\rm sc})}{4\pi R_{0}^{2}}\\ &=-\frac{1}{2}E_{\rm surf}(0)\left(\frac{r_{\rm neck}}{R_{0}}\right)^{2}\.\end{array} \tag{10}\] Here \(E_{\rm surf}=b_{\rm surf}A^{2/3}\), where \(b_{\rm surf}\) is the surface-tension LD coefficient. For \(r_{\rm neck}=r_{0}\) and the nucleus radius \(R_{0}=r_{0}A^{1/3}\) one obtains \(V_{\rm nuc}(q_{\rm sc})=-\frac{1}{2}b_{\rm surf}\), i.e., \(V_{\rm nuc}(q_{\rm sc})\approx-9\) MeV for a neck radius equal to the nucleon radius. We note that this prescription for \(E_{\rm kin}^{\rm frag}\) is undoubtedly a more accurate estimate of the fission-fragment kinetic energy than the frequently used point-charge approximation \(E_{\rm kin}=e^{2}Z_{\rm h}Z_{\rm l}/R_{12}\), where \(R_{12}\) is the distance between the fragment mass centers.

The mean TKE as a function of fragment mass as obtained from the model is compared in Fig. 4 with the experimental data [5]. These are reproduced well on the average. Some discrepancy is though visible: First, the predicted TKE around \(A_{\rm h}=140\) is too large. The yield in this mass region is nevertheless well described, see Fig. 3. Thus, we ascribe the discrepancy in TKE to the limitation of the 4D parametrization in describing the scission shapes characteristic of the so-called Standard II mode, corresponding to a deformed heavy fragment and a slightly deformed, or even close to spherical, light partner [3]. Second, the maximum of the calculated TKE, expected to occur for the Standard I mode with a heavy fragment in the vicinity of \({}^{132}\)Sn, is seen to be shifted to larger masses, around A\({}_{\rm h}=136\). The reason for this discrepancy is twofold: (i) the difficulty to describe in a 4D deformation space the compact shapes characteristic of the Standard I mode, and (ii) the too large contribution of the symmetric mode, noted already in Fig.
3, in the \(A_{\rm h}\approx 130\) region, which corresponds to very elongated scission shapes and thus lowers the average TKE in this region.

### Charge equilibration at scission

At the end of the Langevin trajectory, once the system has reached the scission point, the mass of the two fragments is determined by integrating the volume of the shape at the left and right of the point of rupture, respectively. In the wide majority of macroscopic-microscopic models available in the literature, the isotopic composition, equivalently the N/Z ratio, of the fragments is next assumed to be identical to the one of the fissioning nucleus (see _e.g._ [12; 15; 16; 23; 24]). The UCD assumption was recently lifted by Moller and Ichikawa [17] in a "6D" model by computing the probability of proton transfer between the two fragments along the dynamical evolution. In the fully microscopic approach, neutron and proton sharing at scission can in principle be obtained from the corresponding density distributions, see _e.g._ Ref. [38] for a recent discussion. In the present work, we go beyond the UCD assumption, which we employed in our previous model [23; 24], as follows. Starting from the fragment deformation at scission, we determine for each fragment mass the most probable charge based on the LSD energy and the pairing correlation energy. Such charge equilibration can be determined by looking at the change of the total energy of the fissioning system with the charge number of the heavy fragment \(Z_{\rm h}\): \[\begin{array}{rl}E(Z,A,Z_{\rm h};A_{\rm h},\vec{q}_{\rm h},\vec{q}_{\rm l})&=E_{\rm LSD}(Z_{\rm h},A_{\rm h};\vec{q}_{\rm h})\\ &+E_{\rm LSD}(Z-Z_{\rm h},A-A_{\rm h};\vec{q}_{\rm l})\\ &+e^{2}Z_{\rm h}(Z-Z_{\rm h})/R_{12}-E_{\rm LD}(Z,A;0)\,\end{array} \tag{11}\] where \(Z,A\) and \(Z_{\rm h},A_{\rm h}\) are the charge and mass numbers of the mother nucleus and of the heavy fragment, respectively. The mass as well as the deformation parameters of the heavy (\(A_{\rm h}\), \(\vec{q}_{\rm h}\)) and the light (\(A_{\rm l}\), \(\vec{q}_{\rm l}\)) fragments are given by the division of the volume according to the shape of the nucleus at scission at the end of the Langevin trajectory, as described in our previous works [23; 24]. The total energy as a function of the fragment charge number is shown in the upper panel of Fig. 4. The distribution of the heavy-fragment charge number can be estimated using a Wigner function corresponding to the energy \(E\) given by Eq. 11 for different values of \(Z_{\rm h}\): \[W(Z_{\rm h})=\exp\{-[E(Z_{\rm h})-E_{\rm min}]^{2}/E_{\rm W}^{2}\}\, \tag{12}\] which gives the distribution probability of the fragment charge shown in the bottom panel of Fig. 4. \(E_{\rm min}\) in Eq. 12 is the lowest discrete energy as a function of \(Z_{\rm h}\). A random number drawn according to this distribution then decides on the charge number \(Z_{\rm h}\) of the heavy fragment, with \(Z_{\rm l}=Z-Z_{\rm h}\). The energy \(E_{\rm W}\) should be comparable with the energy distance \(\hbar\omega_{0}\) between harmonic-oscillator shells, since we deal here with a single-particle (proton) transfer between the touching fragments. The above-outlined prescription permits going beyond the UCD hypothesis by accounting for charge equilibration for a given mass split. The resulting fission fragment charge yield is compared with the data [39] in Fig. 5. As one can see, the odd-even effect for the most probable fission fragment elements, i.e. for the largest yields, is well reproduced by our simple model based solely on the LSD macroscopic energy, while deviations remain in the staggering for the most asymmetric splits and at symmetry.
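A schematic implementation of the charge selection of Eqs. (11)-(12) could read as follows; the parabolic energy curve is a toy stand-in for the LSD-based evaluation of Eq. (11), and the value of \(E_{\rm W}\) is purely illustrative of the \(\hbar\omega_{0}\) scale discussed above.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_heavy_charge(energy_of_zh, Z, E_W=2.0):
    """Draw the heavy-fragment charge from the Wigner weights of Eq. (12).
    `energy_of_zh` maps a candidate Z_h to the scission energy of Eq. (11);
    E_W (in MeV) plays the role of the hbar*omega_0 scale."""
    z_h = np.arange(Z // 2, Z - 20)          # candidate heavy charges
    E = np.array([energy_of_zh(z) for z in z_h])
    W = np.exp(-((E - E.min()) / E_W) ** 2)  # Eq. (12)
    W /= W.sum()
    return rng.choice(z_h, p=W)

# toy energy curve: parabola around a most probable split at Z_h = 54
Z = 92
zh = sample_heavy_charge(lambda z: 0.5 * (z - 54) ** 2, Z)
print(zh, Z - zh)   # heavy and light fragment charge numbers
```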
Figure 4: Energy of \({}^{240}\)Pu at scission as a function of the heavy-fragment charge number in the LSD mass formula [28] (top) and the Wigner distribution probability of the fragment charge number (bottom).

This dependence of the magnitude of the staggering on the fragment charge is under vivid debate [40], due to its connection with the influence of shell effects and dissipation in fission [41]. Within the present modeling, it will be the subject of future development. We note that a similar procedure could be introduced to account for neutron pairing. However, since evaporation after scission largely washes it out, it is hardly seen in experiment and not much exploitable.

### Post-scission evaporation

The primary fragments produced right at scission are in general excited. They return to their respective ground states by emitting neutrons and \(\gamma\)-rays. Our previous model [23; 24] was extended to account for post-scission evaporation of neutrons. Competition with \(\gamma\)-ray emission has a negligible impact on neutron evaporation, as it occurs mostly below the fragment neutron separation energy. Inclusion of \(\gamma\)-ray emission is thus left for future development. The excitation energy of the fissioning nucleus available at scission, to be shared between the primary fragments, is evaluated as specified above with Eq. 7. It is then assumed that the thermal energy of a given fragment \(E_{i}^{*}\) at the scission point is proportional to its single-particle level density: \[\frac{E_{\rm l}^{*}}{E_{\rm h}^{*}}=\frac{a(Z_{\rm l},A_{\rm l};{\rm def}_{\rm l})}{a(Z_{\rm h},A_{\rm h};{\rm def}_{\rm h})}\, \tag{13}\] where \(E^{*}=a(\vec{q})\,T^{2}=E_{\rm l}^{*}+E_{\rm h}^{*}\) is given by Eq. 7. Since the fragments usually have a deformation at scission which differs from their equilibrium configuration, they relax very quickly to the ground-state shape. The deformation energy released by this relaxation is transformed into excitation energy. The deformation energy of each fragment can be evaluated in the LD model [28]: \[E_{\rm def}^{(i)}\approx E_{\rm LD}(Z_{i},A_{i},{\rm def}_{i})-E_{\rm exp}(Z_{i},A_{i},{\rm g.s.}). \tag{14}\] The total excitation energy (\(E_{\rm exc}^{(i)}\)) of fragment \(i\) is then the sum of its thermal and deformation energies: \[E_{\rm exc}^{(i)}=E_{\rm def}^{(i)}+E_{i}^{*}=a(i)\,T_{i}^{2}. \tag{15}\] For each fragment, this excitation energy is available for neutron emission. The maximal energy of a neutron emitted from a fragment (mother) can be obtained from the energy conservation law: \[\epsilon_{\rm n}^{\rm max}=M_{\rm M}+E_{\rm M}^{*}-M_{\rm D}-M_{\rm n}\, \tag{16}\] where \(M_{\rm M},\,M_{\rm D},\,M_{\rm n}\) are the mass excesses of the mother and daughter nuclei and of the neutron, respectively. These data can be taken from a mass table [42]. The thermal excitation energy of the daughter nucleus is: \[E_{\rm D}^{*}=\epsilon_{\rm n}^{\rm max}-\epsilon_{\rm n}\, \tag{17}\] where \(\epsilon_{\rm n}\) is the kinetic energy of the emitted neutron. The neutron emission probability for a (mother) fragment with excitation energy \(E_{\rm M}^{*}\) is given by the Weisskopf formula [43]:
\[\Gamma_{\rm n}(\epsilon_{\rm n})=\frac{2\mu}{\pi^{2}\hbar^{2}\rho_{\rm M}(E_{\rm M}^{*})}\int\limits_{0}^{\epsilon_{\rm n}}\sigma_{\rm inv}(\epsilon)\,\epsilon\,\rho_{\rm D}(E_{\rm D}^{*})\,d\epsilon. \tag{18}\] Here \(\mu\) is the reduced mass of the neutron and \(\sigma_{\rm inv}\) is the neutron inverse cross-section [44]: \[\begin{array}{rl}\sigma_{\rm inv}(\epsilon)&=[0.76+1.93/A^{1/3}\\ &+(1.66/A^{2/3}-0.050)/\epsilon]\,\pi\,(1.7A^{1/3})^{2}\,\end{array} \tag{19}\] while \(\rho_{\rm M}\) and \(\rho_{\rm D}\) are, respectively, the level densities of the mother and daughter nuclei: \[\rho(E)=\frac{\sqrt{\pi}}{12a^{1/4}E^{5/4}}\exp(2\sqrt{aE})\. \tag{20}\] Like in other parts of the model, the single-particle level-density parameters \(a\) of the mother and the daughter nuclei are taken from Ref. [35].

Figure 5: Fission fragment charge yield of \({\rm n_{th}}+{}^{235}{\rm U}\). The experimental data (red points) are taken from Ref. [39].

Figure 6: Post-scission neutron multiplicity as a function of fragment mass for \({\rm n_{th}}+{}^{235}{\rm U}\). The experimental data (red points) are taken from Ref. [4].

Neutron evaporation is assumed to take place until the fragment reaches an excitation energy comparable to the neutron separation energy, for which we take an average value of 6 MeV (this energy is further exhausted by \(\gamma\)-rays, as also observed in experiment, see _e.g._ Ref. [45]). The number of neutrons emitted by the fragments as a function of their mass is displayed in Fig. 6 and compared with the measurements [4]. The _sawtooth_ shape observed in the experimental data is only roughly reproduced by the theoretical results. The too large multiplicity predicted in the range between A \(\approx\) 116 and 130 is partly due to the too large amount of very elongated scission shapes originating from the LD fission mode in this region, as already discussed in the context of Fig. 3. The fragments of this mode experience a substantial shape relaxation after scission, which increases the excitation energy available for evaporation (see also Figs. 8 and 9). Furthermore, the too large amount of evaporation in the vicinity of \({}^{132}\)Sn is also due to the limitation of the model in describing the specific shapes of the Standard I mode. The slight under-prediction at A \(\approx\) 155 may similarly point to a limitation of the shape parametrization for these elongated heavy fragments. When the influence of structural effects in the heavy fragment dominates, energy minimization will naturally favor those pre-scission configurations which best reproduce the shape of the heavy "side" of the mono-nucleus approaching scission. The limited number of collective coordinates will then necessarily bias the shape of the light counterpart, and thus its excitation energy and post-scission evaporation. That partly explains the discrepancy between theory and experiment in the region A \(\approx\) (90-110). It is expected that the inclusion of higher-order deformation parameters, namely \(a_{5}\) and \(a_{6}\), which allow better control of the fragment deformations, will substantially improve the description of post-scission neutron multiplicities.
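The evaporation cascade described above can be sketched as follows; the spectrum sampling keeps only the \(\epsilon\,\rho_{\rm D}\) factors of Eq. (18) with the level density of Eq. (20), treats \(\sigma_{\rm inv}\) of Eq. (19) as constant, and uses a fixed 6 MeV separation energy, so all numbers are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_neutron_energy(e_max, a_daughter, n_grid=400):
    """Sample the neutron kinetic energy from a simplified Weisskopf
    spectrum ~ eps * rho_D(e_max - eps), with rho from Eq. (20);
    all energies in MeV."""
    eps = np.linspace(1e-3, e_max - 1e-3, n_grid)
    E_D = e_max - eps
    rho = np.exp(2.0 * np.sqrt(a_daughter * E_D)) / (a_daughter**0.25 * E_D**1.25)
    w = eps * rho
    w /= w.sum()
    return rng.choice(eps, p=w)

# cascade: evaporate until E* drops below ~6 MeV (cf. the text)
E_star, a = 25.0, 12.0      # illustrative fragment excitation and a value
multiplicity = 0
while E_star > 6.0:
    eps_n = sample_neutron_energy(E_star - 6.0, a)
    E_star -= eps_n + 6.0    # kinetic energy + ~6 MeV separation energy
    multiplicity += 1
print(multiplicity)
```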
The average fragment neutron to proton \(<\) N \(>\) /Z ratio after post-scission evaporation for fission of \({}^{236}\)U at thermal energies is shown in Fig. 7 as a function of the fragment charge number. The N/Z ratio of the initial system is given by the dashed line for reference. The change of the fragment \(<\) N \(>\) /Z with respect to the mother nucleus is due to charge equilibration at scission and post-scission neutron evaporation. The general behavior observed in experiment, see _e.g._ Ref. [7], with the heavy fragment being relatively neutron-rich and the lighter one neutron-poor for fission of typical actinides, is reproduced. However, the influence of shell effects in the vicinity of \({}^{132}\)Sn is weaker in theory than in the measurement. As discussed above, we mostly attribute this to the limitation in the description of the particularly compact scission shapes characteristic of those fragmentations. The excitation of the heavy partner is then slightly overestimated, the neutron multiplicity becomes too large, which lowers the \(<\) N \(>\) /Z ratio. To the best of our knowledge, apart from the present work, there are only two dynamical models which addressed the experimentally observed evolution of \(<\) N \(>\) /Z with fragment charge (or mass): while the enhanced "6D" macroscopic-microscopic model of Möller and Ichikawa [17] achieved a very good quantitative description [46], the description by the self-consistent model of Verriere et al. [38] remained qualitative only. The model developed in the present work calculates all fragment properties (except the angular momentum) in a consistent manner, and properly takes care of the correlations between the various quantities. For instance, the primary fragment N and Z distributions and associated shapes predicted by the calculation of the dynamical evolution up to the scission point determine the TKE. The primary (N, Z) population together with the TKE gives the total excitation energy (TXE). The fragment deformation at scission together with the TXE enters the calculation of the intrinsic excitation energy of the fragments, which finally determines the neutron multiplicity and N/Z neutron excess. Correlations are essential to get further insight into the process, as well as to understand possible deviations between experiment and theory. The primary fragment yield and TKE of \({}^{236}\)U are shown in Fig. 8 on the (N\({}_{\rm f}\), Z\({}_{\rm f}\)) plane. In our model, the most probable primary fragments are \({}^{140}\)Xe and \({}^{96}\)Sr, consistent with what is suggested by combining the experimental observations of Refs. [32; 37; 47]. The largest TKE \(\gtrsim\) 190 MeV corresponds to neutron-rich fragments with mass A between \(\approx\) 130 and 140, correlated with light fragments around A=100 having a smaller neutron excess. Rather small values of the TKE of the fragments, equal to approximately 140 MeV, are calculated for symmetric fission. The larger TKE of the Standard I and II modes as compared to the LD symmetric mode, well established from experiment [47], is thus reproduced. However, the measured difference between Standard I and Standard II is not evident in the calculation, presumably due to the limited number of collective coordinates. That translates into a fragment excitation in the vicinity of \({}^{132}\)Sn which is somewhat too large, and consequently an overestimation of the number of neutrons emitted, as noted above.

Figure 7: Average post-scission neutron to proton \(<\) N \(>\) /Z ratio for n\({}_{th}\)+\({}^{236}\)U. The dashed line represents the ratio of the compound nucleus.

This is indeed seen in Fig. 9, which displays the fragment excitation energy and neutron multiplicity on the (N\({}_{\rm f}\), Z\({}_{\rm f}\)) plane. A further stringent test of the model is presented in Fig.
10, where the experimental data on the average neutron multiplicity as a function of the fission fragment TKE for n\({}_{\rm th}\) + \({}^{235}\)U [4] are displayed for various mass gates and compared to the predictions of our model. The description is pretty good, except for those pairs of fragments with a substantial contribution from the Standard I mode. For the latter, the theoretical neutron multiplicity is too large for the heavy fragment, which is in line with the interpretation of the discrepancies observed above. However, it is to be noted that, at the same time, the neutron multiplicity of the light partner is underestimated. That is mostly attributed to the impact of the aforementioned bias introduced by the restriction to 4 dimensions. Within the 5D Brownian shape motion model, Albertsson et al. [16] obtained a better description for these fragment pairs. That supports our conjecture that an increase in the dimensionality of our model, with the inclusion of independent deformation variables for the light and heavy fragments (\(a_{5}\) and \(a_{6}\)), will cure most of the deviations of the current theory. This conjecture is supported also by the analysis of the N/Z ratio reported above, where the "6D" model of Ref. [17], based on the same 5D deformation landscape as Ref. [16], achieves a better description than the present 4D model. Nevertheless, it is not excluded that part of the discrepancy observed here may be due to the prescription of excitation energy sharing and charge equilibration at scission. For both aspects we consider only the macroscopic energy of the fragments (i.e., shell effects are omitted). Furthermore, unlike Ref. [16], we use an approximate formula [7] for the density of states of the deformed fragments, rather than the actual s.p. level densities with the shell effects. The present investigation demonstrates that high-fold correlation data, which are nowadays becoming available in experiment, are crucial, when properly propagated in the calculation along the real-time evolution of the fissioning system, to evidence in an unambiguous manner the origin of possible weak points of a model. That is important to guide further development of the theory.

## III Application to the Fm chain

Experiments have well established that the Fm isotopic chain exhibits a very peculiar trend in fragment properties with the size of the fissioning system: the fragment mass distribution changes abruptly from asymmetric for \({}^{256}\)Fm to narrow and symmetric in \({}^{258}\)Fm [48; 49]. At the same time, the TKE has a double-humped shape for the heavier isotope. This is certainly the best example of bimodal fission.

Figure 8: Fission fragment yield (top) and TKE (bottom) for n\({}_{th}\)+\({}^{235}\)U on the (N\({}_{\rm f}\), Z\({}_{\rm f}\)) plane.

Figure 9: Fission fragment excitation energy (top) and neutron multiplicity (bottom) for n\({}_{th}\)+\({}^{235}\)U on the (N\({}_{\rm f}\), Z\({}_{\rm f}\)) plane.

The first theoretical papers providing an explanation for the origin of this observation appeared at the end of the 80s; they were all based on a static analysis of the PEL (see _e.g._ [50]). Thanks to the development of theory and the increase in computing resources since then, advanced dynamical calculations are now possible within both the macroscopic-microscopic approach (see Refs. [13; 16; 51] for 3D, 4D and 5D models, respectively) and the microscopic self-consistent framework [52].
There is a wide consensus that the sudden transition observed along the isotopic chain of Fm (and of a few more trans-fermium elements) is caused by the proximity of strong shell effects at symmetry, as the fragments approach \({}^{132}\)Sn with increasing fissioning isotope mass. As is obvious from the quoted theoretical papers, a proper description of the mass and TKE yields along the Fm isotopic chain is a good test for any theoretical model. The model described in the present work was used to calculate the fission fragment properties (mass, charge, TKE, post-scission neutron multiplicity) along the Fm chain. All parameters were set identical to those employed in the previous section for thermal neutron-induced fission of \({}^{236}\)U. The 4D PEL's of the even-even \({}^{252-262}\)Fm isotopes projected onto the \((c,\,a_{4})\) plane are shown in Fig. 11. Each point of the maps is minimized with respect to the non-axial (\(\eta\)) and the pear-like (\(a_{3}\)) deformation, as for \({}^{236}\)U. For the lightest isotopes \({}^{252,254}\)Fm, the outer saddle point is rather well defined and located at \(c\approx 1.5\) and \(a_{4}\approx 0.18\). Its exit point, denoted \(\mathbf{a}\) in all maps, marks the beginning of a valley which corresponds to asymmetric fission (as seen from the corresponding minimized \(a_{3}\); not shown here). Between \({}^{256}\)Fm and \({}^{258}\)Fm the pattern in the outer saddle region clearly changes, and still another outer saddle (at \(c\approx 1.45\) and \(a_{4}\approx 0.27\)) appears. A new fission valley develops beyond this additional outer barrier in \({}^{258}\)Fm. It corresponds to compact symmetric fission configurations and is denoted \(\mathbf{s}\). The PEL's of Fig. 11 suggest that the symmetric valley might attract most of the flux for the heaviest Fm isotopes. The mass yields calculated for spontaneous fission of \({}^{258}\)Fm and corresponding to the starting points \(\mathbf{a}\) and \(\mathbf{s}\) are shown separately in Fig. 12, being respectively asymmetric and symmetric as noted above. The final mass yield is, of course, a weighted sum of these two distributions. The weights of \(\mathbf{a}\) and \(\mathbf{s}\) depend on the penetration probability (\(P_{i}\)) of the fission barrier evaluated along the path \(\mathcal{L}_{i}\), which ends at the \(i\)-th turning point (\(\mathbf{a}\) or \(\mathbf{s}\)). As deduced from Fig. 11, there are two distinct outer saddle points for the \({}^{256-260}\)Fm isotopes, and tentatively also for \({}^{254}\)Fm. The heights of the corresponding outer barriers are plotted in Fig. 13. They are almost identical for \({}^{256}\)Fm, while in the heavier isotopes, the symmetric barrier is lower than the asymmetric one. This difference in the saddle-point heights indicates that compact symmetric fission should prevail for isotopes heavier than \({}^{256}\)Fm. In order to calculate the final mass yield expected for spontaneous fission and compare quantitatively with experiment (wherever available), we proceed as follows. The final fission fragment yield (\(Y_{\rm th}\)) is taken as the weighted sum of the yields \(Y_{a}\) and \(Y_{s}\) obtained using the points \(\mathbf{a}\) and \(\mathbf{s}\) as initial points of the Langevin trajectories: \[Y_{\rm th}(A_{f})=P_{a}\cdot Y_{a}(A_{f})+P_{s}\cdot Y_{s}(A_{f})\, \tag{21}\] where \(P_{a}\) and \(P_{s}\) are the relative probabilities of reaching points \(\mathbf{a}\) and \(\mathbf{s}\) by tunneling through the fission barrier.
We follow here the approximation described in Ref. [34] to evaluate \(P_{i}\).

Figure 10: Average neutron multiplicity as a function of TKE for selected mass pairs, as indicated in the top right corner of each panel. Experimental data of Ref. [4] for the light and the heavy fragment separately, and their sum, are compared to the calculation.

In the WKB approximation, the barrier penetration probability is given by \[W_{i}=\frac{1}{1+\exp[2S(\mathcal{L}_{i})]}\, \tag{22}\] where \(S(\mathcal{L}_{i})\) is the action integral taken along the \(\mathcal{L}_{i}\) path \[S(\mathcal{L})=\int\limits_{s_{l}}^{s_{r}}\sqrt{\frac{2}{\hbar^{2}}B_{ss}(s)[V( s)-E_{0}]}\,ds. \tag{23}\] Here \(s_{l}\) and \(s_{r}\) are the left and right turning points of the path \(\mathcal{L}\), \(B_{ss}\) and \(V(s)\) are, respectively, the collective inertia and the potential along the path \(\mathcal{L}\), and \(E_{0}\) is the ground state energy. The total penetration probability of the barrier is the sum of the probabilities along the asymmetric and symmetric paths. So, the relative populations of the asymmetric and the compact-symmetric valley are \[P_{a}=\frac{W_{a}}{W_{a}+W_{s}}\quad\text{and}\quad P_{s}=\frac{W_{s}}{W_{a}+W_ {s}}. \tag{24}\] Following the above recipe, the final fission fragment mass yields (thick black line) predicted for spontaneous fission of the even-even \({}^{246-262}\)Fm isotopes are shown in Fig. 14. The yield distributions due to path \(\mathbf{a}\) (thin purple line) and to path \(\mathbf{s}\) (dotted blue line) are also displayed for reference.

Figure 11: Potential energy surface of the even-even \({}^{252-262}\)Fm isotopes on the \((c,\,a_{4})\) plane. Each point is minimized with respect to the non-axial (\(\eta\)) and the reflectional (\(a_{3}\)) deformation. The asymmetric \(\mathbf{a}\) and symmetric \(\mathbf{s}\) exit points from the fission barrier are marked.

For the lighter \({}^{254-256}\)Fm isotopes, the mass yields do not depend on the choice of the starting point, while for the heavier ones, they differ significantly. One obtains the asymmetric mass yield (solid line) when starting from the point \(\mathbf{a}\), and the symmetric distributions correspond to the initial point \(\mathbf{s}\). The maximum of the asymmetric component in the final distribution is located between \(A\approx 146\) and \(150\) for the heavy fragment, depending on the fissioning mass, while for the symmetric component there are two close-lying maxima, with the heaviest one sitting at A\({}_{h}=132\); the light partner is given by the fissioning mass. The comparison between the final calculation and experiment is seen to be pretty good, bearing in mind the simple recipe outlined above. Further improvement requires considering the full dynamics of the process starting from, _e.g._, the second minimum as in Ref. [13] (rather than assuming simple tunneling through each barrier separately). Work in this direction is the scope of future enhancement of the model. The calculated final fission fragment TKE distribution for spontaneous fission of \({}^{258}\)Fm is shown in the top panel of Fig. 15. This weighted TKE yield (thick black full line) can be compared with the experimental data (red histogram) [48]. The TKE yields corresponding to the starting points \(\mathbf{a}\) (thin purple) and \(\mathbf{s}\) (dotted blue) are also drawn. The theoretical result is seen to reproduce the measurement very reasonably. In particular, it exhibits the two-humped pattern mentioned in the introduction.
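The WKB weighting of Eqs. (21)-(24) used above translates directly into a few lines of code. A minimal Python sketch (with \(\hbar=1\) and toy inputs for the inertia \(B_{ss}\) and the potential \(V\); the actual model evaluates them along the least-action paths on the 4D PEL):

```python
import numpy as np

def action(s, B, V, E0):
    """WKB action of Eq. (23) along a discretized path (hbar = 1 assumed).
    Regions with V < E0 (outside the turning points) contribute zero."""
    integrand = np.sqrt(np.clip(2.0 * B * (V - E0), 0.0, None))
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(s))

def penetrability(S):
    """Barrier penetration probability of Eq. (22), written overflow-safe."""
    return np.exp(-np.logaddexp(0.0, 2.0 * S))

def final_yield(Y_a, Y_s, S_a, S_s):
    """Weighted mass yield of Eq. (21) with the relative weights of Eq. (24)."""
    W_a, W_s = penetrability(S_a), penetrability(S_s)
    return (W_a * Y_a + W_s * Y_s) / (W_a + W_s)

# toy example: two schematic barriers seen from the same ground-state energy
s = np.linspace(0.0, 1.0, 200)
S_a = action(s, B=np.full_like(s, 50.0), V=6.0 * np.sin(np.pi * s)**2, E0=1.0)
S_s = action(s, B=np.full_like(s, 50.0), V=5.0 * np.sin(np.pi * s)**2, E0=1.0)
print(penetrability(S_a), penetrability(S_s))
```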
According to the discussion above, for the \({}^{258}\)Fm isotope, the contribution from path \(\mathbf{s}\) dominates. In this respect, it is important to note that Fig. 15 suggests that the low-energy component of the TKE distribution originates almost exclusively from path \(\mathbf{s}\) rather than from path \(\mathbf{a}\). In other words, path \(\mathbf{s}\) itself has a two-humped distribution, _i.e._, it has contributions from two different modes. This could already be seen in Figs. 12 and 14, where some asymmetric wings appear next to the symmetric peak in the mass distribution of path \(\mathbf{s}\). The mean TKE as a function of the fission fragment mass is plotted in the bottom part of Fig. 15. It is seen there that for \(A_{f}=258/2\), the TKE corresponding to the path \(\mathbf{s}\) is about \(50\%\) larger than the one related to the path \(\mathbf{a}\). This shows that in case \(\mathbf{s}\), for the most symmetric events, one deals with a compact symmetric path. The TKE spectra confirm the conclusions drawn from the mass yield comparison: in \({}^{258}\)Fm, compact-symmetric fission predominates. To get a deeper insight into the above observations and discussion, we consider higher-fold correlations.

Figure 12: Fragment mass yield calculated for spontaneous fission of \({}^{258}\)Fm corresponding to Langevin trajectories starting from either the asymmetric (\(\mathbf{a}\)) or the compact symmetric (\(\mathbf{s}\)) turning point.

Figure 13: Second barrier heights along the Fm chain corresponding to the asymmetric \(\mathbf{a}\) and symmetric \(\mathbf{s}\) fission paths as a function of Fm isotope mass.

Figure 14: Fission fragment mass yields along the Fm isotopic chain. The calculation (solid black line) is compared with experimental data for pre-neutron yields (red +) [53; 54] or post-neutron yields (red x) [11; 48] depending on availability (the little shift between pre- and post-neutron mass distributions is of no importance for the present comparison). The theoretical curves corresponding to the asymmetric (thin purple line) and symmetric (dotted blue line) fission paths are shown separately for reference, see the text.

Figure 16 displays the fragment yield (top) and the elongation of the system just before scission, at the end of the trajectory (bottom), as a function of fragment mass and TKE for spontaneous fission of \({}^{258}\)Fm. The upper panel exhibits the dominant symmetric component with TKE \(\approx\) 234 MeV (dark blue and red blob) and the small contribution from the asymmetric Standard II mode at (A\({}_{l}\), A\({}_{h}\)) \(\approx\) (113, 145) and TKE \(\approx\) 180 MeV (light blue bands), see also the bottom of Fig. 15. In addition, some slightly less asymmetric component drags from the symmetric high-TKE region down to TKE's as low as \(\approx\) 129 MeV. These events correspond to the asymmetric wings of path \(\mathbf{s}\) mentioned above. Further insight can be obtained from the bottom panel of Fig. 16, which informs about the elongation close to scission. The dominant symmetric component originating from path \(\mathbf{s}\) is seen to be characterized by the smallest elongation \(c\approx 2\) at scission, confirming that it corresponds to a compact symmetric mode. The Standard II asymmetric mode has a somewhat larger mean \(c\approx\) 2.2-2.4 at scission, as expected.
Perhaps most interesting is to notice that the slightly asymmetric wings dragged from symmetry to very low TKE, and which end as distinct blue blobs clearly separated from Standard II, have a mean elongation at scission above \(c\approx\) 2.7, _viz._ larger than Standard II. That corroborates that the asymmetric wings from path \(\mathbf{s}\) discussed above do not originate from the Standard II mode due to path \(\mathbf{a}\), but should rather be considered as the asymmetric tails of a symmetric elongated mode stemming from path \(\mathbf{s}\). It is to be noted that these tails dominate the mass distribution of path \(\mathbf{s}\) in \({}^{256}\)Fm, while the symmetric compact mode prevails in \({}^{258}\)Fm (see purple curves in the corresponding panels of Fig. 14). This can be best understood from a detailed look at Fig. 11, which shows that these two "sub-paths" separate at \(c\approx\) 1.7 and \(a_{4}\approx\) 0.54. The small differences between the PEL's of \({}^{256}\)Fm and \({}^{258}\)Fm drive the system into one or the other sub-path (in addition to the influence of the dynamics). Finally, the predicted post-scission neutron multiplicities for spontaneous fission of \({}^{258}\)Fm are shown in Fig. 17 as a function of the fragment (N\({}_{\rm f}\), Z\({}_{\rm f}\)) isotopic composition. The diagonal purple lines correspond to constant masses. Obviously, the number of emitted neutrons at a given mass grows with the distance from the \(\beta\)-stability line. For the Standard II mode, the heavy and light fragments emit on the average a comparable number of neutrons, consistent with experimental observation in the region (see _e.g._ [55]), although evaporation may be slightly overestimated for the heavy partner (see also Fig. 10). The compact symmetric mode exhibits the lowest post-scission multiplicity, only 0.5 neutron on the average, in line with the above discussion: the fragments of this mode are close to magic nuclei, poorly excited at scission, and experience very little shape relaxation after scission.

Figure 16: Fission fragment yield (top) and elongation just before scission (bottom) as a function of fragment mass and TKE for spontaneous fission of \({}^{258}\)Fm.

The asymmetric wings of the mass distribution of path \(\mathbf{s}\), which we identified above as stemming from an elongated symmetric mode, show post-scission multiplicity values which are somewhat intermediate between those of Standard II and of the compact symmetric mode, while it would be expected that these events exhibit the largest post-scission multiplicities. Overall, the model thus describes the main trends observed in experiment in the region. The deficiency regarding a more detailed quantitative description is mostly due to the limitation of the theory in terms of the full variety of shapes, in particular at scission, and to the energy-sharing prescription already mentioned above.

## IV Conclusions

We have proposed the innovative Fourier over Spheroid (FoS) prescription as a fast and flexible nuclear shape parametrization to model fission by means of four collective coordinates, _i.e._, elongation, left-right asymmetry, neck size, and non-axiality. Neglecting non-axiality from the outer saddle region to scission, we have developed a new 3D Langevin code, based on the FoS, the LSD + Yukawa folded macroscopic-microscopic potential energy landscape, a procedure to account for charge equilibration at scission, and a method to compute the excitation energy available in the primary fragments. Finally, the de-excitation of the latter after scission was computed.
Altogether, this gives access to a wide palette of observables, treated in a consistent way, which permits the analysis of high-fold correlations. Such information is crucial to evaluate in an unambiguous way the reliability of specific theoretical prescriptions, which are often entangled in the intricate fission process. The model was first tested and tuned to best reproduce experimental observations from thermal neutron-induced fission of \({}^{235}\)U. In a second step, it was applied to fission along the Fm isotopic chain, and seen to explain the famous abrupt transition observed in the fragment properties between \({}^{256}\)Fm and \({}^{258}\)Fm. The achievement of the present model is estimated to be impressive considering its relative simplicity. Remaining discrepancies are ascribed to limitations mainly in terms of the dimensionality of the shape parametrization, the restriction to the outer-saddle to scission dynamics, the charge equilibration and energy sharing recipes at scission, and possibly the neglect of angular momentum. Work to improve along these lines is foreseen. Also, the extension of the model to account for multi-chance fission is underway. This enhancement is very important for calculations of interest in nuclear energy applications. Further calculations for wider mass and excitation energy ranges of the fissioning nucleus, and comparison with experiment wherever available, are in progress in parallel. These are important to further constrain the model ingredients and refine them. Independent of these developments, the model already constitutes a useful tool for various domains where systematic and fast predictions are required. Additionally, the conclusions drawn from its comparison with experiment can provide useful guidance for more fundamental theory.

**Acknowledgments**

We acknowledge discussions with F. A. Ivanyuk. The authors would like to thank A. Göök and A. Al-Adili for supplying us with some experimental data. This work has been supported by the Polish National Science Center (Grant No. 2018/30/Q/ST2/00185) and by the Natural Science Foundation of China (Grant No. 11961131010 and 11790325).

**Appendix: Deformation of fission fragments**

At the end of each of the thousands of Langevin trajectories, i.e., at the scission configuration, one has to determine the deformation parameters of both fission fragments. This procedure has to be repeated several thousand times, so it should be rapid. Knowledge of the fragment deformations is necessary to estimate their deformation energy, which contributes significantly to the fragment excitation energy, Eq. (15). Let us assume the fission fragments have the masses \(A_{1}\) and \(A_{2}\), where \(A=A_{1}+A_{2}\) is the mass of the mother nucleus described by the profile \(\rho^{2}(z)\) of Eq. (1). The following data on the mother nucleus around scission are needed to determine the deformation of the fragments: \[z_{\rm min}=-z_{0}+z_{\rm sh},\quad z_{\rm neck},\quad z_{\rm max}=z_{0}+z_{\rm sh},\] \[z_{\rm cm}(1),\quad z_{\rm cm}(2),\quad\rho^{2}\Big(\frac{z_{\rm min}+z_{\rm neck}}{2}\Big),\quad\rho^{2}\Big(\frac{z_{\rm neck}+z_{\rm max}}{2}\Big).\] The corresponding spherical radii are \(R_{01}\), \(R_{02}\), and \(R_{0}\), where \(R_{0i}=R_{0}(A_{i}/A)^{1/3}\).

Figure 17: Post-scission neutron multiplicity for spontaneous fission of \({}^{258}\)Fm as a function of fragment neutron N\({}_{\rm f}\) and proton Z\({}_{\rm f}\) number.
The fragment elongations are: \[\begin{split} c_{1}=&\ \frac{z_{\rm neck}-z_{\rm min}}{2R_{01}}\,\\ c_{2}=&\ \frac{z_{\rm max}-z_{\rm neck}}{2R_{02}}. \end{split} \tag{A1}\] One evaluates the reflection asymmetry parameter \(a_{3i}\) from the shift of the fragment mass center with respect to its geometrical center: \[\begin{split} a_{31}=&\ -\frac{2\pi}{3c_{1}R_{01}} \left[z_{\rm cm}(1)-\frac{z_{\rm neck}-z_{0}+z_{\rm sh}}{2}\right]\,\\ a_{32}=&\ -\frac{2\pi}{3c_{2}R_{02}}\left[z_{\rm cm}(2 )-\frac{z_{\rm neck}+z_{0}+z_{\rm sh}}{2}\right]\.\end{split} \tag{A2}\] To determine the \(a_{4i}\) deformation of fragment \(i\) one uses the FoS relation: \[\rho_{i}^{2}(0)=\frac{R_{0i}^{2}}{c_{i}}f(0)=\frac{R_{0i}^{2}}{c_{i}}\left(1- \frac{4}{3}a_{4i}-\frac{4}{5}a_{6i}-\dots\right)\,, \tag{A3}\] where \[\begin{split}\rho_{1}^{2}(0)=&\ \rho^{2}[(z_{\rm min }+z_{\rm neck})/2]\,\\ \rho_{2}^{2}(0)=&\ \rho^{2}[(z_{\rm neck}+z_{\rm max})/2]. \end{split} \tag{A4}\] Assuming that \(a_{6}=-a_{4}/10\) (the LD energy minimum) one obtains: \[\rho_{i}^{2}(0)=\frac{R_{0i}^{2}}{c_{i}}\left(1-\frac{94}{75}a_{4i}\right), \tag{A5}\] which implies \[a_{4i}=\frac{75}{94}\left(1-\frac{c_{i}}{R_{0i}^{2}}\,\rho_{i}^{2}(0)\right). \tag{A6}\] The quality of the shape description above is shown in Fig. 18. It is seen that taking into account the pear-like deformation significantly improves the quality of the fit, while the \(a_{4}\) deformation has only a tiny effect.
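As a worked illustration of the above recipe, a minimal Python sketch (assuming the scission-profile quantities listed above have already been extracted; \(z_{\rm lo}\) and \(z_{\rm hi}\) are hypothetical names for the end points of one fragment, with \(z_{\rm hi}=z_{\rm neck}\) for the left fragment):

```python
import numpy as np

def fragment_deformations(z_lo, z_hi, z_cm, rho2_mid, R0):
    """Recover the FoS deformations (c, a3, a4) of one fragment from the
    scission profile, following Eqs. (A1)-(A6); z_lo and z_hi delimit the
    fragment, z_cm is its mass center, rho2_mid is rho^2 at its midpoint,
    and R0 is its spherical radius."""
    c = (z_hi - z_lo) / (2.0 * R0)                                     # Eq. (A1)
    a3 = -2.0 * np.pi / (3.0 * c * R0) * (z_cm - 0.5 * (z_lo + z_hi))  # Eq. (A2)
    a4 = 75.0 / 94.0 * (1.0 - c / R0**2 * rho2_mid)                    # Eq. (A6)
    return c, a3, a4

# illustrative numbers only (fm), not taken from an actual trajectory
print(fragment_deformations(z_lo=-9.0, z_hi=1.5, z_cm=-4.0,
                            rho2_mid=28.0, R0=5.8))
```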
2307.11599
A more efficient reformulation of complex SDP as real SDP
This note proposes a new reformulation of complex semidefinite programs (SDPs) as real SDPs. As an application, we present an economical reformulation of complex SDP relaxations of complex polynomial optimization problems as real SDPs and derive some further reductions by exploiting inner structure of the complex SDP relaxations. Various numerical examples demonstrate that our new reformulation runs significantly faster than the usual popular reformulation.
Jie Wang
2023-07-21T14:09:50Z
http://arxiv.org/abs/2307.11599v3
# A More Efficient Reformulation of Complex SDP as Real SDP

###### Abstract

This note proposes a novel reformulation of complex semidefinite programs (SDPs) as real SDPs. As an application, we present an economical reformulation of complex SDP relaxations of complex polynomial optimization problems as real SDPs and derive some further reductions by exploiting the structure of the complex SDP relaxations. Various numerical examples demonstrate that our new reformulation runs several times (one order of magnitude in some cases) faster than the usual popular reformulation.

Keywords: complex semidefinite programming, complex polynomial optimization, semidefinite programming, the complex moment-HSOS hierarchy, quantum information. MSC codes: 90C22, 90C23.

## 1 Introduction

Complex semidefinite programs (SDPs) arise from a diverse set of areas, such as combinatorial optimization [7], optimal power flow [8, 10], quantum information theory [2, 4, 14], and signal processing [9, 12]. In particular, they appear as convex relaxations of complex polynomial optimization problems (CPOPs), giving rise to the complex moment-Hermitian-sum-of-squares (moment-HSOS) hierarchy [8, 13]. However, most modern SDP solvers deal with only real SDPs1. In order to handle complex SDPs, it is then mandatory to reformulate them as equivalent real SDPs. A popular way to do so is to use the equivalence Footnote 1: As far as the author knows, SeDuMi [11] and Hypatia [6] are the only solvers that can handle complex SDPs directly. \[H\succeq 0\quad\iff Y=\begin{bmatrix}H_{R}&-H_{I}\\ H_{I}&H_{R}\end{bmatrix}\succeq 0 \tag{1}\] for a Hermitian matrix \(H=H_{R}+H_{I}\mathbf{i}\in\mathbb{C}^{n\times n}\), with \(H_{R}\) and \(H_{I}\) being its real and imaginary parts, respectively. Note that the right-hand-side constraint in (1) entails a certain structure, and to feed it to an SDP solver, we need to impose extra affine constraints on the positive semidefinite (PSD) constraint \(Y\succeq 0\): \[Y_{i,j}=Y_{i+n,j+n},\quad Y_{i,j+n}+Y_{j,i+n}=0,\quad i=1,\ldots,n,\ j=i,\ldots,n. \tag{2}\] This conversion is quite simple but could be inefficient when \(n\) is large. In this note, we take a dual point of view and propose a novel reformulation of complex SDPs as real SDPs. The benefit of the new reformulation is that there is no need to add extra affine constraints, and hence it has a lower complexity. In the same manner, we can obtain a new reformulation of complex SDP relaxations of CPOPs as real SDPs. Furthermore, by exploiting the structure of the complex SDP relaxations, we are able to remove a bunch of redundant affine constraints, which leads to an even more economical real reformulation of the complex SDP relaxations. Various numerical experiments (on randomly generated CPOPs and the AC-OPF problem) confirm our theoretical finding and demonstrate that the new reformulation is indeed more efficient than the usual popular one.
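As a quick numerical illustration of (1), a minimal numpy check (the structure required by (2) holds by construction in this embedding, and each eigenvalue of \(H\) appears twice in \(Y\), which is precisely why the real formulation is larger):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# random Hermitian PSD matrix H = H_R + H_I*i
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = G @ G.conj().T
H_R, H_I = H.real, H.imag

# the real embedding of (1); the affine structure (2) holds by construction
Y = np.block([[H_R, -H_I], [H_I, H_R]])

print(np.round(np.linalg.eigvalsh(H), 6))
print(np.round(np.linalg.eigvalsh(Y), 6))  # same eigenvalues, each doubled
```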
## 2 A new reformulation of complex SDPs as real SDPs

Throughout this note, \(\mathbf{H}^{n}\) denotes the set of \(n\times n\) Hermitian matrices, \(\mathbb{S}^{n}\) the set of \(n\times n\) real symmetric matrices, and \(\langle A,B\rangle\coloneqq\mathrm{tr}(AB)\). For \(s,d\in\mathbb{N}\), we let \(\mathbb{N}_{d}^{s}\coloneqq\{\boldsymbol{\beta}\in\mathbb{N}^{s}:\beta_{1}+\cdots+\beta_{s}\leq d\}\) and \(\omega_{s,d}\coloneqq|\mathbb{N}_{d}^{s}|=\binom{s+d}{d}\). Consider the complex SDP in primal form \[\left\{\begin{aligned} \sup_{H\in\mathbf{H}^{n}}&\ \langle C,H\rangle\\ \text{s.t.}&\ \mathscr{A}(H)=b,\\ &\ H\succeq 0,\end{aligned}\right.\] (PSDP-\(\mathbb{C}\)) where \(C\in\mathbf{H}^{n}\), \(b\in\mathbb{C}^{m}\), and \(\mathscr{A}\) is a linear operator from \(\mathbf{H}^{n}\) to \(\mathbb{C}^{m}\) with adjoint \(\mathscr{A}^{*}\). Writing \(C=C_{R}+C_{I}\mathbf{i}\), \(b=b_{R}+b_{I}\mathbf{i}\), \(\mathscr{A}=\mathscr{A}_{R}+\mathscr{A}_{I}\mathbf{i}\) and \(H=H_{R}+H_{I}\mathbf{i}\), the objective becomes \(\langle C,H\rangle=\langle C_{R},H_{R}\rangle-\langle C_{I},H_{I}\rangle\), and the usual conversion based on (1)-(2) turns (PSDP-\(\mathbb{C}\)) into the real SDP \[\left\{\begin{aligned} \sup_{Y\in\mathbb{S}^{2n}}&\ \langle C_{R},H_{R}\rangle-\langle C_{I},H_{I}\rangle\\ \text{s.t.}&\ \mathscr{A}_{R}(H_{R})-\mathscr{A}_{I}(H_{I})=b_{R},\\ &\ \mathscr{A}_{R}(H_{I})+\mathscr{A}_{I}(H_{R})=b_{I},\\ &\ Y=\begin{bmatrix}H_{R}&-H_{I}\\ H_{I}&H_{R}\end{bmatrix}\succeq 0,\end{aligned}\right.\] (PSDP-\(\mathbb{R}\)) where the block structure of \(Y\) is enforced via the affine constraints (2). Instead of converting the primal directly, we start from the dual of (PSDP-\(\mathbb{C}\)). Applying (1) to the Hermitian slack matrix \(\mathscr{A}^{*}(y)-C\) associated with the dual variable \(y=y_{R}+y_{I}\mathbf{i}\in\mathbb{C}^{m}\) yields the real SDP \[\left\{\begin{aligned} \inf_{y_{R},y_{I}\in\mathbb{R}^{m}}&\ b_{R}^{\intercal}y_{R}-b_{I}^{\intercal}y_{I}\\ \text{s.t.}&\ \begin{bmatrix}\mathscr{A}_{R}^{*}(y_{R})-\mathscr{A}_{I}^{*}(y_{I})-C_{R}&-\mathscr{A}_{R}^{*}(y_{I})-\mathscr{A}_{I}^{*}(y_{R})+C_{I}\\ \mathscr{A}_{R}^{*}(y_{I})+\mathscr{A}_{I}^{*}(y_{R})-C_{I}&\mathscr{A}_{R}^{*}(y_{R})-\mathscr{A}_{I}^{*}(y_{I})-C_{R}\end{bmatrix}\succeq 0.\end{aligned}\right.\] (DSDP-\(\mathbb{R}\)) Let \(X=\left[\begin{smallmatrix}X_{1}&X_{3}\\ X_{3}^{\intercal}&X_{2}\end{smallmatrix}\right]\in\mathbb{S}^{2n}\), with \(X_{1},X_{2}\in\mathbb{S}^{n}\) and \(X_{3}\in\mathbb{R}^{n\times n}\), be the multiplier associated with the PSD constraint of (DSDP-\(\mathbb{R}\)).
Then the Lagrangian associated with (DSDP-\(\mathbb{R}\)) is \[L(X,y_{R},y_{I})\] \[= -\left\langle\begin{bmatrix}X_{1}&X_{3}\\ X_{3}^{\intercal}&X_{2}\end{bmatrix},\begin{bmatrix}\mathscr{A}_{R}^{*}(y_{R}) -\mathscr{A}_{I}^{*}(y_{I})-C_{R}&-\mathscr{A}_{R}^{*}(y_{I})-\mathscr{A}_{I}^ {*}(y_{R})+C_{I}\\ \mathscr{A}_{R}^{*}(y_{I})+\mathscr{A}_{I}^{*}(y_{R})-C_{I}&\mathscr{A}_{R}^{* }(y_{R})-\mathscr{A}_{I}^{*}(y_{I})-C_{R}\end{bmatrix}\right\rangle\] \[+b_{R}^{\intercal}y_{R}-b_{I}^{\intercal}y_{I}\] \[= \langle C_{R},X_{1}+X_{2}\rangle-\langle C_{I},X_{3}-X_{3}^{ \intercal}\rangle+\langle b_{R}-\mathscr{A}_{R}(X_{1}+X_{2})+\mathscr{A}_{I}( X_{3}-X_{3}^{\intercal}),y_{R}\rangle\] \[-\langle b_{I}-\mathscr{A}_{R}(X_{3}-X_{3}^{\intercal})- \mathscr{A}_{I}(X_{1}+X_{2}),y_{I}\rangle.\] Thus the dual problem of (DSDP-\(\mathbb{R}\)) can be written as \[\left\{\begin{aligned} \sup_{X\in\mathbb{S}^{2n}}& \langle C_{R},X_{1}+X_{2}\rangle-\langle C_{I},X_{3}-X_{3}^{ \intercal}\rangle\\ \text{s.t.}&\mathscr{A}_{R}(X_{1}+X_{2})-\mathscr{A}_{I}(X_{3}-X_{3}^{ \intercal})=b_{R},\\ &\mathscr{A}_{R}(X_{3}-X_{3}^{\intercal})+\mathscr{A}_{I}(X_{1}+X_ {2})=b_{I},\\ & X=\begin{bmatrix}X_{1}&X_{3}\\ X_{3}^{\intercal}&X_{2}\end{bmatrix}\succeq 0.\end{aligned}\right.\] (PSDP- \[\mathbb{R}\] ') The above reasoning leads to the main theorem of this note.

**Theorem 2.1**. _(PSDP-\(\mathbb{R}\)') is equivalent to (PSDP-\(\mathbb{R}\)) (in the sense that they share the same optimum). As a result, (PSDP-\(\mathbb{R}\)') is equivalent to (PSDP-\(\mathbb{C}\)). In addition, if \(X^{\star}=\left[\begin{smallmatrix}X_{1}^{\star}&X_{3}^{\star}\\ (X_{3}^{\star})^{\intercal}&X_{2}^{\star}\end{smallmatrix}\right]\) is an optimal solution to (PSDP-\(\mathbb{R}\)'), then \(H^{\star}=(X_{1}^{\star}+X_{2}^{\star})+(X_{3}^{\star}-(X_{3}^{\star})^{ \intercal})\mathbf{i}\) is an optimal solution to (PSDP-\(\mathbb{C}\))._

_Proof._ Let us denote the optima of (PSDP-\(\mathbb{R}\)) and (PSDP-\(\mathbb{R}\)') by \(v\) and \(v^{\prime}\) respectively. Suppose \(Y=\left[\begin{smallmatrix}H_{R}&-H_{I}\\ H_{I}&H_{R}\end{smallmatrix}\right]\) is a feasible solution to (PSDP-\(\mathbb{R}\)). Then one can easily check that \(X\coloneqq\left[\begin{smallmatrix}\frac{1}{2}H_{R}&\frac{1}{2}H_{I}\\ -\frac{1}{2}H_{I}&\frac{1}{2}H_{R}\end{smallmatrix}\right]\) is a feasible solution to (PSDP-\(\mathbb{R}\)'). Moreover, we have \(\langle C_{R},X_{1}+X_{2}\rangle-\langle C_{I},X_{3}-X_{3}^{\intercal}\rangle =\langle C_{R},H_{R}\rangle-\langle C_{I},H_{I}\rangle\) and it follows that \(v\leq v^{\prime}\). On the other hand, suppose \(X=\left[\begin{smallmatrix}X_{1}&X_{3}\\ X_{3}^{\intercal}&X_{2}\end{smallmatrix}\right]\) is a feasible solution to (PSDP-\(\mathbb{R}\)'). We then have \[\begin{bmatrix}0&-I_{n}\\ I_{n}&0\end{bmatrix}^{-1}X\begin{bmatrix}0&-I_{n}\\ I_{n}&0\end{bmatrix}=\begin{bmatrix}X_{2}&-X_{3}^{\intercal}\\ -X_{3}&X_{1}\end{bmatrix}\succeq 0,\] and thus \[\begin{bmatrix}X_{1}+X_{2}&X_{3}-X_{3}^{\intercal}\\ X_{3}^{\intercal}-X_{3}&X_{1}+X_{2}\end{bmatrix}\succeq 0.\] Consequently, we obtain \[Y=\begin{bmatrix}H_{R}&-H_{I}\\ H_{I}&H_{R}\end{bmatrix}\coloneqq\begin{bmatrix}X_{1}+X_{2}&X_{3}^{\intercal}-X _{3}\\ X_{3}-X_{3}^{\intercal}&X_{1}+X_{2}\end{bmatrix}\succeq 0.\] One can easily see that \(Y\) is a feasible solution to (PSDP-\(\mathbb{R}\)) and, in addition, it holds that \(\langle C_{R},H_{R}\rangle-\langle C_{I},H_{I}\rangle=\langle C_{R},X_{1}+X_{2} \rangle-\langle C_{I},X_{3}-X_{3}^{\intercal}\rangle\). Thus \(v\geq v^{\prime}\), which proves the equivalence. The second statement of the theorem is clear from the above arguments. \(\blacksquare\)
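For concreteness, here is a minimal numpy sanity check of the recovery map in Theorem 2.1: for any PSD matrix \(X\) of the above block form, \(H=(X_{1}+X_{2})+(X_{3}-X_{3}^{\intercal})\mathbf{i}\) is Hermitian and PSD, exactly as shown in the proof.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# random real PSD matrix X of size 2n, partitioned as in (PSDP-R')
M = rng.standard_normal((2 * n, 2 * n))
X = M @ M.T
X1, X3, X2 = X[:n, :n], X[:n, n:], X[n:, n:]

# recovery map of Theorem 2.1
H = (X1 + X2) + 1j * (X3 - X3.T)

print(np.allclose(H, H.conj().T))              # H is Hermitian
print(np.linalg.eigvalsh(H).min() >= -1e-9)    # H is positive semidefinite
```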
In contrast to (PSDP-\(\mathbb{R}\)), the PSD constraint in (PSDP-\(\mathbb{R}\)') is straightforward, and thus no extra affine constraint is required. This is why the conversion (PSDP-\(\mathbb{R}\)') is more appealing than (PSDP-\(\mathbb{R}\)) from the computational perspective.

_Remark 2.2_. A similar reformulation to (PSDP-\(\mathbb{R}\)'), but for a restricted class of complex SDP relaxations of multiple-input multiple-output detection, has appeared in [9].

## 3 Application to complex SDP relaxations for CPOPs

In this section, we apply the reformulation (PSDP-\(\mathbb{R}\)') to complex SDP relaxations arising from the complex moment-HSOS hierarchy for CPOPs. A CPOP is given by \[\left\{\begin{array}{ll}\inf_{\mathbf{z}\in\mathbb{C}^{s}}&f( \mathbf{z},\overline{\mathbf{z}})=\sum_{\boldsymbol{\beta},\boldsymbol{\gamma }}b_{\boldsymbol{\beta},\boldsymbol{\gamma}}\mathbf{z}^{\boldsymbol{\beta}} \overline{\mathbf{z}}^{\boldsymbol{\gamma}}\\ \text{s.t.}&g_{i}(\mathbf{z},\overline{\mathbf{z}})=\sum_{\boldsymbol{\beta},\boldsymbol{\gamma}}g^{i}_{\boldsymbol{\beta},\boldsymbol{\gamma}}\mathbf{z}^ {\boldsymbol{\beta}}\overline{\mathbf{z}}^{\boldsymbol{\gamma}}\geq 0,\quad i\in[t], \end{array}\right.\] (CPOP) where \(\overline{\mathbf{z}}\coloneqq(\overline{z}_{1},\ldots,\overline{z}_{s})\) stands for the conjugate of the complex variables \(\mathbf{z}\coloneqq(z_{1},\ldots,z_{s})\). The functions \(f,g_{1},\ldots,g_{t}\) are real-valued polynomials and their coefficients satisfy \(b_{\boldsymbol{\beta},\boldsymbol{\gamma}}=\overline{b_{\boldsymbol{\gamma}, \boldsymbol{\beta}}}\), \(g^{i}_{\boldsymbol{\beta},\boldsymbol{\gamma}}=\overline{g^{i}_{\boldsymbol{ \gamma},\boldsymbol{\beta}}}\). The _support_ of \(f\) is defined by \(\text{supp}(f)\coloneqq\{(\boldsymbol{\beta},\boldsymbol{\gamma})\mid b_{ \boldsymbol{\beta},\boldsymbol{\gamma}}\neq 0\}\). For \(i\in[t]\), \(\text{supp}(g^{i})\) is defined in the same way. Fix \(d\in\mathbb{N}\). Let \(y=(y_{\boldsymbol{\beta},\boldsymbol{\gamma}})_{(\boldsymbol{\beta}, \boldsymbol{\gamma})\in\mathbb{N}_{d}^{s}\times\mathbb{N}_{d}^{s}}\subseteq \mathbb{C}\) be a sequence indexed by \((\boldsymbol{\beta},\boldsymbol{\gamma})\in\mathbb{N}_{d}^{s}\times\mathbb{N} _{d}^{s}\) and satisfying \(y_{\boldsymbol{\beta},\boldsymbol{\gamma}}=\overline{y_{\boldsymbol{\gamma}, \boldsymbol{\beta}}}\). Let \(L_{y}\) be the linear functional defined by \[f=\sum_{(\boldsymbol{\beta},\boldsymbol{\gamma})}b_{\boldsymbol{\beta}, \boldsymbol{\gamma}}\mathbf{z}^{\boldsymbol{\beta}}\overline{\mathbf{z}}^{ \boldsymbol{\gamma}}\mapsto L_{y}(f)=\sum_{(\boldsymbol{\beta},\boldsymbol{ \gamma})}b_{\boldsymbol{\beta},\boldsymbol{\gamma}}y_{\boldsymbol{\beta}, \boldsymbol{\gamma}}.\] The _complex moment_ matrix \(\mathbf{M}_{d}(y)\) associated with \(y\) is the matrix indexed by \(\mathbb{N}_{d}^{s}\) such that \[[\mathbf{M}_{d}(y)]_{\boldsymbol{\beta},\boldsymbol{\gamma}} \coloneqq L_{y}(\mathbf{z}^{\boldsymbol{\beta}}\overline{\mathbf{z}}^{ \boldsymbol{\gamma}})=y_{\boldsymbol{\beta},\boldsymbol{\gamma}},\quad\forall \boldsymbol{\beta},\boldsymbol{\gamma}\in\mathbb{N}_{d}^{s}.\] Suppose that \(g=\sum_{(\boldsymbol{\beta}^{\prime},\boldsymbol{\gamma}^{\prime})}g_{ \boldsymbol{\beta}^{\prime},\boldsymbol{\gamma}^{\prime}}\mathbf{z}^{\boldsymbol{ \beta}^{\prime}}\overline{\mathbf{z}}^{\boldsymbol{\gamma}^{\prime}}\) is a complex polynomial.
The _complex localizing_ matrix \(\mathbf{M}_{d}(gy)\) associated with \(g\) and \(y\) is the matrix indexed by \(\mathbb{N}_{d}^{s}\) such that \[[\mathbf{M}_{d}(gy)]_{\boldsymbol{\beta},\boldsymbol{\gamma}} \coloneqq L_{y}(g\mathbf{z}^{\boldsymbol{\beta}}\overline{\mathbf{z}}^{ \boldsymbol{\gamma}})=\sum_{(\boldsymbol{\beta}^{\prime},\boldsymbol{\gamma}^ {\prime})}g_{\boldsymbol{\beta}^{\prime},\boldsymbol{\gamma}^{\prime}}y_{ \boldsymbol{\beta}+\boldsymbol{\beta}^{\prime},\boldsymbol{\gamma}+\boldsymbol{ \gamma}^{\prime}},\quad\forall\boldsymbol{\beta},\boldsymbol{\gamma}\in \mathbb{N}_{d}^{s}.\] For convenience let us set \(g_{0}\coloneqq 1\). Let \(d_{i}\coloneqq\lceil\deg(g_{i})/2\rceil\) for \(i=0,1,\ldots,t\) and let \(d_{\min}\coloneqq\max\left\{\lceil\deg(f)/2\rceil,d_{1},\ldots,d_{t}\right\}\). For any \(d\geq d_{\min}\), the \(d\)-th (\(d\) is called the _relaxation order_) complex moment relaxation for (CPOP) is given by \[\left\{\begin{array}{ll}\inf_{y}&b^{\intercal}y=L_{y}(f)\\ \text{s.t.}&\mathbf{M}_{d}(y)\succeq 0,\\ &\mathbf{M}_{d-d_{i}}(g_{i}y)\succeq 0,\quad i\in[t],\\ &y_{\mathbf{0},\mathbf{0}}=1.\end{array}\right.\] (Mom-\mathbb{C}) and its dual form the complex moment-HSOS hierarchy of (CPOP). For more details on this hierarchy, we refer the reader to [8, 13]. For any \((\boldsymbol{\beta},\boldsymbol{\gamma})\in\mathbb{N}_{d}^{s}\times\mathbb{N}_{d }^{s}\), we associate it with a matrix \(A^{0}_{\boldsymbol{\beta},\boldsymbol{\gamma}}\in\mathbb{R}^{\omega_{s,d} \times\omega_{s,d}}\) defined by \[[A^{0}_{\boldsymbol{\beta},\boldsymbol{\gamma}}]_{\boldsymbol{\beta}^{\prime}, \boldsymbol{\gamma}^{\prime}}=\left\{\begin{array}{ll}1,&\text{if }(\boldsymbol{\beta}^{\prime},\boldsymbol{\gamma}^{\prime})=(\boldsymbol{ \beta},\boldsymbol{\gamma}),\\ 0,&\text{otherwise.}\end{array}\right. \tag{3.1}\] Moreover, for each \(i\in[t]\), we associate any \((\boldsymbol{\beta},\boldsymbol{\gamma})\in\mathbb{N}_{d-d_{i}}^{s}\times\mathbb{N }_{d-d_{i}}^{s}\) with a matrix \(A^{i}_{\boldsymbol{\beta},\boldsymbol{\gamma}}\in\mathbb{C}^{\omega_{s,d-d_{i}} \times\omega_{s,d-d_{i}}}\) defined by \[[A^{i}_{\boldsymbol{\beta},\boldsymbol{\gamma}}]_{\boldsymbol{\beta}^{\prime},\boldsymbol{\gamma}^{\prime}}=\left\{\begin{array}{ll}g^{i}_{\boldsymbol{ \beta}^{\prime\prime},\boldsymbol{\gamma}^{\prime\prime}},&\mbox{if}\;( \boldsymbol{\beta}^{\prime}+\boldsymbol{\beta}^{\prime\prime},\boldsymbol{ \gamma}^{\prime}+\boldsymbol{\gamma}^{\prime\prime})=(\boldsymbol{\beta}, \boldsymbol{\gamma}),\\ 0,&\mbox{otherwise}.\end{array}\right. 
\tag{3.2}\] Now for each \(i=0,1,\ldots,t\), we define the linear operator \(\mathscr{A}^{i}\) by \[\mathscr{A}^{i}(H)\coloneqq\left(\langle A^{i}_{\boldsymbol{\beta}, \boldsymbol{\gamma}},H\rangle\right)_{(\boldsymbol{\beta},\boldsymbol{\gamma} )\in\mathbb{N}_{d-d_{i}}^{s}\times\mathbb{N}_{d-d_{i}}^{s}},\quad H\in \mathbf{H}^{\omega_{s,d-d_{i}}}.\] By construction, it holds that \[\mathbf{M}_{d-d_{i}}(g_{i}y)=\sum_{(\boldsymbol{\beta},\boldsymbol{\gamma}) \in\mathbb{N}_{d-d_{i}}^{s}\times\mathbb{N}_{d-d_{i}}^{s}}A^{i}_{\boldsymbol{ \beta},\boldsymbol{\gamma}}y_{\boldsymbol{\beta},\boldsymbol{\gamma}}=( \mathscr{A}^{i})^{*}(y),\quad i=0,1,\ldots,t.\] Therefore, we can rewrite (Mom-\(\mathbb{C}\)) as follows: \[\left\{\begin{array}{ll}\inf\limits_{y}&b^{\intercal}y\\ \mbox{s.t.}&(\mathscr{A}^{i})^{*}(y)\succeq 0,\quad i=0,1,\ldots,t,\\ &y_{\boldsymbol{0},\boldsymbol{0}}=1,\end{array}\right.\] whose dual reads as \[\mbox{(HSOS-$\mathbb{C}$)}\qquad\left\{\begin{array}{ll}\sup\limits_{ \lambda,H^{i}}&\lambda\\ \mbox{s.t.}&\sum_{i=0}^{t}[\mathscr{A}^{i}(H^{i})]_{\boldsymbol{\beta}, \boldsymbol{\gamma}}+\delta_{(\boldsymbol{\beta},\boldsymbol{\gamma}),( \boldsymbol{0},\boldsymbol{0})}\lambda=b_{\boldsymbol{\beta},\boldsymbol{ \gamma}},\quad(\boldsymbol{\beta},\boldsymbol{\gamma})\in\mathbb{N}_{d}^{s} \times\mathbb{N}_{d}^{s},\\ &H^{i}\succeq 0,\quad i=0,1,\ldots,t.\end{array}\right.\] Note that we have used the Kronecker delta \(\delta_{(\boldsymbol{\beta},\boldsymbol{\gamma}),(\boldsymbol{0},\boldsymbol {0})}\) in (HSOS-\(\mathbb{C}\)). Let us fix any order "\(<\)" on \(\mathbb{N}^{s}\). (HSOS-\(\mathbb{C}\)) is equivalent to the following complex SDP: \[\mbox{(HSOS-$\mathbb{C}$')}\qquad\left\{\begin{array}{ll}\sup\limits_{\lambda,H^{i}}&\lambda\\ \mbox{s.t.}&\sum_{i=0}^{t}[\mathscr{A}^{i}(H^{i})]_{\boldsymbol{\beta}, \boldsymbol{\gamma}}+\delta_{(\boldsymbol{\beta},\boldsymbol{\gamma}),( \boldsymbol{0},\boldsymbol{0})}\lambda=b_{\boldsymbol{\beta},\boldsymbol{ \gamma}},\\ &\boldsymbol{\beta}\leq\boldsymbol{\gamma},\quad(\boldsymbol{\beta}, \boldsymbol{\gamma})\in\mathbb{N}_{d}^{s}\times\mathbb{N}_{d}^{s},\\ &H^{i}\succeq 0,\quad i=0,1,\ldots,t.\end{array}\right.\] It suffices to show that for \(\boldsymbol{\beta}<\boldsymbol{\gamma}\), \(\sum_{i=0}^{t}[\mathscr{A}^{i}(H^{i})]_{\boldsymbol{\gamma},\boldsymbol{\beta }}=b_{\boldsymbol{\gamma},\boldsymbol{\beta}}\) is equivalent to \(\sum_{i=0}^{t}[\mathscr{A}^{i}(H^{i})]_{\boldsymbol{\beta},\boldsymbol{ \gamma}}=b_{\boldsymbol{\beta},\boldsymbol{\gamma}}\). 
Indeed, this equivalence follows from \(b_{\boldsymbol{\gamma},\boldsymbol{\beta}}=\overline{b_{\boldsymbol{\beta}, \boldsymbol{\gamma}}}\) and \[\sum_{i=0}^{t}[\mathscr{A}^{i}(H^{i})]_{\boldsymbol{\gamma}, \boldsymbol{\beta}} =\sum_{i=0}^{t}\langle A^{i}_{\boldsymbol{\gamma},\boldsymbol{ \beta}},H^{i}\rangle\] \[=[H^{0}]_{\boldsymbol{\gamma},\boldsymbol{\beta}}+\sum_{i=1}^{t} \sum_{\begin{subarray}{c}(\boldsymbol{\gamma}^{\prime},\boldsymbol{\beta}^{ \prime})\in\mathbb{N}_{d-d_{i}}^{s}\times\mathbb{N}_{d-d_{i}}^{s}\\ (\boldsymbol{\gamma}^{\prime\prime},\boldsymbol{\beta}^{\prime\prime})\in \mbox{supp}(g^{i})\\ (\boldsymbol{\gamma}^{\prime}+\boldsymbol{\gamma}^{\prime\prime},\boldsymbol{ \beta}^{\prime}+\boldsymbol{\beta}^{\prime\prime})=(\boldsymbol{\gamma}, \boldsymbol{\beta})\end{subarray}}g^{i}_{\boldsymbol{\gamma}^{\prime\prime}, \boldsymbol{\beta}^{\prime\prime}}[H^{i}]_{\boldsymbol{\gamma}^{\prime}, \boldsymbol{\beta}^{\prime}}\] \[=\sum_{i=0}^{t}\langle\overline{A^{i}_{\boldsymbol{\beta}, \boldsymbol{\gamma}}},H^{i}\rangle=\sum_{i=0}^{t}\overline{[\mathscr{A}^{i}(H^{i} )]_{\boldsymbol{\beta},\boldsymbol{\gamma}}}.\] With \(\mathscr{A}^{i}=\mathscr{A}^{i}_{R}+\mathscr{A}^{i}_{I}\mathbf{i}\), \(H^{i}=H^{i}_{R}+H^{i}_{I}\mathbf{i}\) and \(b=b_{R}+b_{I}\mathbf{i}\), (HSOS-\(\mathbb{C}\)') is equivalent to the following real SDP: (HSOS-\(\mathbb{R}\)) \[\left\{\begin{array}{ll}\sup\limits_{\lambda,Y^{i}}&\lambda\\ \text{s.t.}&\sum_{i=0}^{t}\left([\mathscr{A}^{i}_{R}(H^{i}_{R})]_{\mathbf{\beta}, \mathbf{\gamma}}-[\mathscr{A}^{i}_{I}(H^{i}_{I})]_{\mathbf{\beta},\mathbf{\gamma}}\right)+ \delta_{(\mathbf{\beta},\mathbf{\gamma}),(\mathbf{0},\mathbf{0})}\lambda=[b_{R}]_{\mathbf{ \beta},\mathbf{\gamma}},\\ &\sum_{i=0}^{t}\left([\mathscr{A}^{i}_{R}(H^{i}_{I})]_{\mathbf{\beta},\mathbf{\gamma}}+ [\mathscr{A}^{i}_{I}(H^{i}_{R})]_{\mathbf{\beta},\mathbf{\gamma}}\right)=[b_{I}]_{\bm {\beta},\mathbf{\gamma}},\\ &\mathbf{\beta}\leq\mathbf{\gamma},\quad(\mathbf{\beta},\mathbf{\gamma})\in\mathbb{N}^{s}_{d} \times\mathbb{N}^{s}_{d},\\ &Y^{i}=\begin{bmatrix}H^{i}_{R}&-H^{i}_{I}\\ H^{i}_{I}&H^{i}_{R}\end{bmatrix}\succeq 0,\quad i=0,1,\ldots,t.\end{array}\right.\] On the other hand, by applying Theorem 2.1 to (HSOS-\(\mathbb{C}\)'), we obtain another real SDP equivalent to (HSOS-\(\mathbb{C}\)'): \[\left\{\begin{array}{ll}\sup\limits_{\lambda,X^{i}}&\lambda\\ \text{s.t.}&\sum_{i=0}^{t}\left([\mathscr{A}^{i}_{R}(X^{i}_{1}+X^{i}_{2})]_{ \mathbf{\beta},\mathbf{\gamma}}-[\mathscr{A}^{i}_{I}(X^{i}_{3}-(X^{i}_{3})^{\intercal} )]_{\mathbf{\beta},\mathbf{\gamma}}\right)+\delta_{(\mathbf{\beta},\mathbf{\gamma}),(\mathbf{ 0},\mathbf{0})}\lambda=[b_{R}]_{\mathbf{\beta},\mathbf{\gamma}},\\ &\sum_{i=0}^{t}\left([\mathscr{A}^{i}_{R}(X^{i}_{3}-(X^{i}_{3})^{\intercal})]_{ \mathbf{\beta},\mathbf{\gamma}}+[\mathscr{A}^{i}_{I}(X^{i}_{1}+X^{i}_{2})]_{\mathbf{\beta},\mathbf{\gamma}}\right)=[b_{I}]_{\mathbf{\beta},\mathbf{\gamma}},\\ &\mathbf{\beta}\leq\mathbf{\gamma},\quad(\mathbf{\beta},\mathbf{\gamma})\in\mathbb{N}^{s}_{d} \times\mathbb{N}^{s}_{d},\\ &X^{i}=\begin{bmatrix}X^{i}_{1}&X^{i}_{3}\\ (X^{i}_{3})^{\intercal}&X^{i}_{2}\end{bmatrix}\succeq 0,\quad i=0,1,\ldots,t.\end{array}\right. 
\tag{3.3}\] **Proposition 3.2**.: _(3.3) is equivalent to the following real SDP:_ \[\left\{\begin{array}{ll}\sup\limits_{\lambda,X^{i}}&\lambda\\ \text{s.t.}&\sum_{i=0}^{t}\left([\mathscr{A}^{i}_{R}(X^{i}_{1}+X^{i}_{2})]_{\mathbf{ \beta},\mathbf{\gamma}}-[\mathscr{A}^{i}_{I}(X^{i}_{3}-(X^{i}_{3})^{\intercal})]_{ \mathbf{\beta},\mathbf{\gamma}}\right)+\delta_{(\mathbf{\beta},\mathbf{\gamma}),(\mathbf{0}, \mathbf{0})}\lambda=[b_{R}]_{\mathbf{\beta},\mathbf{\gamma}},\\ &\sum_{i=0}^{t}\left([\mathscr{A}^{i}_{R}(X^{i}_{3}-(X^{i}_{3})^{\intercal})]_{ \mathbf{\beta},\mathbf{\gamma}}+[\mathscr{A}^{i}_{I}(X^{i}_{1}+X^{i}_{2})]_{\mathbf{\beta},\mathbf{\gamma}}\right)=[b_{I}]_{\mathbf{\beta},\mathbf{\gamma}},\quad\mathbf{\beta}\neq\mathbf{ \gamma},\\ &\mathbf{\beta}\leq\mathbf{\gamma},\quad(\mathbf{\beta},\mathbf{\gamma})\in\mathbb{N}^{s}_{d} \times\mathbb{N}^{s}_{d},\\ &X^{i}=\begin{bmatrix}X^{i}_{1}&X^{i}_{3}\\ (X^{i}_{3})^{\intercal}&X^{i}_{2}\end{bmatrix}\succeq 0,\quad i=0,1,\ldots,t.\end{array}\right.\] Proof.: We need to show that the following constraints \[\sum_{i=0}^{t}\left([\mathscr{A}^{i}_{R}(X^{i}_{3}-(X^{i}_{3})^{\intercal})]_{ \mathbf{\beta},\mathbf{\beta}}+[\mathscr{A}^{i}_{I}(X^{i}_{1}+X^{i}_{2})]_{\mathbf{\beta}, \mathbf{\beta}}\right)=[b_{I}]_{\mathbf{\beta},\mathbf{\beta}}=0,\quad\mathbf{\beta}\in\mathbb{ N}^{s}_{d} \tag{3.4}\] in (3.3) are redundant. For each \(i=0,1,\ldots,t\) and \(\mathbf{\beta}\in\mathbb{N}^{s}_{d}\), we have \[\left\langle(A^{i}_{\mathbf{\beta},\mathbf{\beta}})_{R},X^{i}_{3}-(X^{i}_{ 3})^{\intercal}\right\rangle\] \[= \sum_{\begin{subarray}{c}(\mathbf{\beta}^{\prime},\mathbf{\gamma}^{\prime} )\in\mathbb{N}^{s}_{d-d_{i}}\times\mathbb{N}^{s}_{d-d_{i}}\\ (\mathbf{\beta}^{\prime\prime},\mathbf{\gamma}^{\prime\prime})\in\mbox{supp}(g^{i})\\ (\mathbf{\beta}^{\prime}+\mathbf{\beta}^{\prime\prime},\mathbf{\gamma}^{\prime}+\mathbf{ \gamma}^{\prime\prime})=(\mathbf{\beta},\mathbf{\beta})\end{subarray}}\mathcal{R}(g^{i}_ {\mathbf{\beta}^{\prime\prime},\mathbf{\gamma}^{\prime\prime}})\left([X^{i}_{3}]_{\mathbf{ \beta}^{\prime},\mathbf{\gamma}^{\prime}}+[X^{i}_{3}]_{\mathbf{\gamma}^{\prime},\mathbf{ \beta}^{\prime}}-[(X^{i}_{3})^{\intercal}]_{\mathbf{\beta}^{\prime},\mathbf{\gamma}^{ \prime}}-[(X^{i}_{3})^{\intercal}]_{\mathbf{\gamma}^{\prime},\mathbf{\beta}^{\prime}}\right)\] \[= 0,\] where we have used the fact that \([(X_{3}^{i})^{\intercal}]_{\mathbf{\beta}^{\prime},\mathbf{\gamma}^{\prime}}=[X_{3}^{i}]_{ \mathbf{\gamma}^{\prime},\mathbf{\beta}^{\prime}}\) and \([(X_{3}^{i})^{\intercal}]_{\mathbf{\gamma}^{\prime},\mathbf{\beta}^{\prime}}=[X_{3}^{i}]_ {\mathbf{\beta}^{\prime},\mathbf{\gamma}^{\prime}}\). It follows that \([\mathscr{A}_{R}^{i}(X_{3}^{i}-(X_{3}^{i})^{\intercal})]_{\mathbf{\beta},\mathbf{ \beta}}=\langle(A_{\mathbf{\beta},\mathbf{\beta}}^{i})_{R},X_{3}^{i}-(X_{3}^{i})^{ \intercal}\rangle=0\). 
In addition, for each \(i=0,1,\ldots,t\) and \(\mathbf{\beta}\in\mathbb{N}_{d}^{s}\), we have \[\left\langle(A^{i}_{\mathbf{\beta},\mathbf{\beta}})_{I},X^{i}_{1}+X^{i}_{2}\right\rangle =\sum_{\begin{subarray}{c}(\mathbf{\beta}^{\prime},\mathbf{\gamma}^{\prime})\in \mathbb{N}^{s}_{d-d_{i}}\times\mathbb{N}^{s}_{d-d_{i}}\\ (\mathbf{\beta}^{\prime\prime},\mathbf{\gamma}^{\prime\prime})\in\mbox{supp}(g^{i})\\ (\mathbf{\beta}^{\prime}+\mathbf{\beta}^{\prime\prime},\mathbf{\gamma}^{\prime}+\mathbf{ \gamma}^{\prime\prime})=(\mathbf{\beta},\mathbf{\beta})\end{subarray}}\mathcal{I}(g^{i}_ {\mathbf{\beta}^{\prime\prime},\mathbf{\gamma}^{\prime\prime}})\left([X^{i}_{1}+X^{i}_{2 }]_{\mathbf{\beta}^{\prime},\mathbf{\gamma}^{\prime}}-[X^{i}_{1}+X^{i}_{2}]_{\mathbf{ \gamma}^{\prime},\mathbf{\beta}^{\prime}}\right)=0,\] since \(X^{i}_{1}\) and \(X^{i}_{2}\) are symmetric. It follows that \([\mathscr{A}_{I}^{i}(X_{1}^{i}+X_{2}^{i})]_{\mathbf{\beta},\mathbf{\beta}}=\langle(A_{ \mathbf{\beta},\mathbf{\beta}}^{i})_{I},X_{1}^{i}+X_{2}^{i}\rangle=0\), and hence the constraints (3.4) are automatically satisfied, which completes the proof. ## 4 Numerical experiments In this section, we compare the performance of the two reformulations (HSOS-\(\mathbb{R}\)) and (HSOS-\(\mathbb{R}\)') on complex SDP relaxations arising from several classes of CPOPs. In the tables below, 'opt' records the optimum, 'time' records the running time in seconds, and '-' indicates that the instance was not solved. ### Minimizing a random complex quartic polynomial over the unit sphere The first example is to minimize a random complex quartic polynomial over the unit sphere: \[\left\{\begin{array}{rl}\inf_{\mathbf{z}\in\mathbb{C}^{s}}&[\mathbf{z}]_{2}^ {*}Q[\mathbf{z}]_{2}\\ \text{s.t.}&\sum_{i=1}^{s}|z_{i}|^{2}=1,\end{array}\right. \tag{4.1}\] where \([\mathbf{z}]_{2}\) is the vector of monomials in \(\mathbf{z}\) up to degree two and \(Q\in\mathbf{H}^{|[\mathbf{z}]_{2}|}\) is a random Hermitian matrix whose entries are selected with respect to the standard normal distribution. We approach (4.1) for \(s=5,7,\ldots,15\) with the second and third HSOS relaxations. The related results are shown in Table 2. From the table, we see that the reformulation (HSOS-\(\mathbb{R}\)') is several (\(2\sim 7\)) times as fast as the reformulation (HSOS-\(\mathbb{R}\)), and the speedup is more significant as the SDP size grows. ### Minimizing a random complex quartic polynomial with unit-norm variables The second example is to minimize a random complex quartic polynomial with unit-norm variables: \[\left\{\begin{array}{rl}\inf_{\mathbf{z}\in\mathbb{C}^{s}}&[\mathbf{z}]_{2}^ {*}Q[\mathbf{z}]_{2}\\ \text{s.t.}&|z_{i}|^{2}=1,\quad i=1,\ldots,s,\end{array}\right. \tag{4.2}\] where \(Q\in\mathbf{H}^{|[\mathbf{z}]_{2}|}\) is a random Hermitian matrix whose entries are selected with respect to the uniform probability distribution on \([0,1]\). We approach (4.2) for \(s=5,7,\ldots,15\) with the second and third HSOS relaxations. The related results are shown in Table 3. From the table, we see that the reformulation (HSOS-\(\mathbb{R}\)') is roughly an order of magnitude faster than the reformulation (HSOS-\(\mathbb{R}\)), and the speedup is more significant as the SDP size grows. 
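To make the random instances above concrete, here is a small Python sketch of how the matrix \(Q\) can be generated and the objective of (4.1) evaluated. It is our own illustration (the helper names and monomial ordering are ours), not the code used for the experiments; for (4.2) one would instead draw the entries of \(Q\) uniformly from \([0,1]\) and constrain each variable to unit modulus.

```python
import numpy as np

def random_hermitian(n, rng):
    """Random Hermitian matrix with standard normal real/imaginary parts."""
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (a + a.conj().T) / 2

def monomials_deg2(z):
    """[z]_2: the monomials in z up to degree two, in an arbitrary fixed order."""
    s = len(z)
    mons = [1.0] + list(z) + [z[i] * z[j] for i in range(s) for j in range(i, s)]
    return np.array(mons)

rng = np.random.default_rng(0)
s = 5
m = 1 + s + s * (s + 1) // 2            # length of [z]_2 (21 for s = 5)
Q = random_hermitian(m, rng)

# Evaluate [z]_2^* Q [z]_2 at a random feasible point of (4.1).
z = rng.standard_normal(s) + 1j * rng.standard_normal(s)
z /= np.linalg.norm(z)                  # enforce sum_i |z_i|^2 = 1
v = monomials_deg2(z)
print(np.real(v.conj() @ Q @ v))        # real up to rounding, since Q is Hermitian
```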
### Minimizing a randomly generated sparse complex quartic polynomial over multi-spheres Given \(l\in\mathbb{N}\backslash\{0\}\), we randomly generate a sparse complex quartic polynomial as follows: Let \(f=\sum_{i=1}^{l}f_{i}\in\mathbb{C}[z_{1},\ldots,z_{5(l+1)},\overline{z}_{1}, \ldots,\overline{z}_{5(l+1)}]\), where for all \(i\in[l]\), \(f_{i}=\overline{f}_{i}\in\mathbb{C}[z_{5(i-1)+1},\ldots,z_{5(i-1)+10},\overline {z}_{5(i-1)+1},\ldots,\overline{z}_{5(i-1)+10}]\) is a sparse complex quartic polynomial whose coefficients (real/imaginary parts) are selected with respect to the uniform probability distribution on \([-1,1]\). Then we consider the following CPOP: \[\left\{\begin{array}{rl}\inf_{\mathbf{z}\in\mathbb{C}^{5(l+1)}}&f(\mathbf{z}, \overline{\mathbf{z}})\\ \text{s.t.}&\sum_{j=1}^{10}|z_{5(i-1)+j}|^{2}=1,\quad i=1,\ldots,l.\end{array}\right. \tag{4.3}\] The sparsity in (4.3) can be exploited to derive a sparsity-adapted complex moment-HSOS hierarchy [13]. We solve the second sparse HSOS relaxation of (4.3) for \(l=40,80,\ldots,400\). The results are displayed in Table 4. From the table we see that the reformulation (HSOS-\(\mathbb{R}\)') is \(1.5\sim 2\) times as fast as the reformulation (HSOS-\(\mathbb{R}\)). \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c} \multirow{2}{*}{\(s\)} & \multirow{2}{*}{\(d\)} & \multirow{2}{*}{\(n_{\mathrm{sdp}}\)} & \multicolumn{4}{c|}{(HSOS-\(\mathbb{R}\))} & \multicolumn{4}{c}{(HSOS-\(\mathbb{R}\)’)} \\ \cline{3-10} & & & \(m_{\mathrm{sdp}}\) & \(\mathrm{opt}\) & time & \(m_{\mathrm{sdp}}\) & \(\mathrm{opt}\) & time \\ \hline \multirow{2}{*}{5} & 2 & 42 & 966 & -11.2409 & 0.11 & 441 & -11.2409 & 0.05 \\ \cline{2-10} & 3 & 112 & 6846 & -9.47725 & 8.13 & 3136 & -9.47725 & 2.00 \\ \hline \multirow{2}{*}{7} & 2 & 72 & 2736 & -14.2314 & 0.97 & 1296 & -14.2314 & 0.28 \\ \cline{2-10} & 3 & 240 & 30372 & -11.0407 & 389 & 14400 & -11.0407 & 57.0 \\ \hline \multirow{2}{*}{9} & 2 & 110 & 6270 & -19.0019 & 5.73 & 3025 & -19.0019 & 1.62 \\ \cline{2-10} & 3 & 440 & 100320 & - & - & 48400 & -15.5614 & 1944 \\ \hline \multirow{2}{*}{11} & 2 & 156 & 12480 & -22.8630 & 31.7 & 6084 & -22.8630 & 6.67 \\ \cline{2-10} & 3 & 728 & 271882 & - & - & 132496 & - & - \\ \hline \multirow{2}{*}{13} & 2 & 210 & 22470 & -25.6352 & 145 & 11025 & -25.6352 & 23.5 \\ \cline{2-10} & 3 & 1120 & 639450 & - & - & 313600 & - & - \\ \hline \multirow{2}{*}{15} & 2 & 272 & 37536 & -29.1672 & 585 & 18496 & -29.1672 & 86.1 \\ \cline{2-10} & 3 & 1632 & 1351976 & - & - & 665856 & - & - \\ \end{tabular} \end{table} Table 2: Minimizing a random complex quartic polynomial over the unit sphere. 
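The overlap structure of (4.3) is what makes the sparse hierarchy applicable. The following Python sketch (our own illustration, with our variable names) lays out the variable blocks and constructs a point satisfying all \(l\) sphere constraints simultaneously:

```python
import numpy as np

rng = np.random.default_rng(1)
l = 3                                    # number of blocks / sphere constraints
nvars = 5 * (l + 1)                      # complex variables z_1, ..., z_{5(l+1)}

# Block i (1-indexed) involves z_{5(i-1)+1}, ..., z_{5(i-1)+10}: ten consecutive
# variables, so neighbouring blocks overlap in five.  This chain-like overlap is
# the correlative sparsity pattern that the sparse moment-HSOS hierarchy exploits.
blocks = [np.arange(5 * k, 5 * k + 10) for k in range(l)]   # 0-based indices

def on_multisphere(z, blocks, tol=1e-9):
    """Check the constraints sum_{j in block} |z_j|^2 = 1 for every block."""
    return all(abs(np.sum(np.abs(z[b]) ** 2) - 1.0) < tol for b in blocks)

# Build a feasible point block by block: the first five entries of block k are
# shared with block k-1 and already carry part of the unit "mass", so the five
# fresh entries are scaled to use up exactly the remaining budget.
z = np.zeros(nvars, dtype=complex)
for k, b in enumerate(blocks):
    fresh = b if k == 0 else b[5:]
    w = rng.standard_normal(len(fresh)) + 1j * rng.standard_normal(len(fresh))
    budget = 1.0 if k == 0 else 1.0 - np.sum(np.abs(z[b[:5]]) ** 2)
    z[fresh] = w * np.sqrt(budget) / np.linalg.norm(w)

assert on_multisphere(z, blocks)
```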
\begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c} \multirow{2}{*}{\(l\)} & \multirow{2}{*}{\(n_{\text{sdp}}\)} & \multicolumn{4}{c|}{(HSOS-\(\mathbb{R}\))} & \multicolumn{4}{c}{(HSOS-\(\mathbb{R}\)’)} \\ \cline{3-10} & & \(m_{\text{sdp}}\) & opt & time & \(m_{\text{sdp}}\) & opt & time \\ \hline 40 & 8 & 23090 & -98.9240 & 3.12 & 12529 & -98.9240 & 2.06 \\ \hline 80 & 8 & 46768 & -197.577 & 12.6 & 25549 & -197.577 & 8.07 \\ \hline 120 & 8 & 70958 & -292.024 & 30.1 & 38871 & -292.024 & 19.0 \\ \hline 160 & 8 & 94278 & -389.652 & 45.9 & 51563 & -389.652 & 30.7 \\ \hline 200 & 8 & 117526 & -482.684 & 84.5 & 64185 & -482.684 & 37.7 \\ \hline 240 & 8 & 140298 & -578.896 & 130 & 76389 & -578.896 & 59.5 \\ \hline 280 & 8 & 162504 & -671.047 & 173 & 89241 & -671.047 & 65.4 \\ \hline 320 & 8 & 187528 & -766.403 & 206 & 102171 & -766.403 & 88.5 \\ \hline 360 & 8 & 210370 & -866.771 & 291 & 114589 & -866.771 & 147 \\ \hline 400 & 8 & 233396 & -963.137 & 297 & 127173 & -963.137 & 138 \\ \hline \end{tabular} \end{table} Table 4: Minimizing a randomly generated sparse complex quartic polynomial over multi-spheres. \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c} \multirow{2}{*}{\(s\)} & \multirow{2}{*}{\(d\)} & \multirow{2}{*}{\(n_{\text{sdp}}\)} & \multicolumn{4}{c|}{(HSOS-\(\mathbb{R}\))} & \multicolumn{4}{c}{(HSOS-\(\mathbb{R}\)’)} \\ \cline{3-10} & & & \(m_{\text{sdp}}\) & opt & time & \(m_{\text{sdp}}\) & opt & time \\ \hline 5 & 2 & 42 & 734 & -24.4919 & 0.10 & 271 & -24.4919 & 0.03 \\ \cline{2-10} & 3 & 112 & 4474 & -24.4919 & 2.34 & 1281 & -24.4919 & 0.26 \\ \hline 7 & 2 & 72 & 2202 & -56.5289 & 0.65 & 869 & -56.5289 & 0.16 \\ \cline{2-10} & 3 & 240 & 21158 & -46.7128 & 132 & 6637 & -46.7128 & 7.44 \\ \hline 9 & 2 & 110 & 5242 & -114.342 & 4.62 & 2161 & -114.342 & 0.73 \\ \cline{2-10} & 3 & 440 & 73312 & - & - & 24691 & -81.2676 & 184 \\ \hline 11 & 2 & 156 & 10718 & -202.436 & 32.1 & 4555 & -202.436 & 3.86 \\ \cline{2-10} & 3 & 728 & 206188 & - & - & 73327 & - & - \\ \hline 13 & 2 & 210 & 19686 & -338.041 & 126 & 8555 & -338.041 & 12.7 \\ \cline{2-10} & 3 & 1120 & 499438 & - & - & 185277 & - & - \\ \hline 15 & 2 & 272 & 33394 & -514.226 & 678 & 14761 & -514.226 & 55.1 \\ \cline{2-10} & 3 & 1632 & 1081514 & - & - & 414841 & - & - \\ \end{tabular} \end{table} Table 3: Minimizing a random complex quartic polynomial with unit-norm variables. ### The AC-OPF problem The alternating current optimal power flow (AC-OPF) is a central problem in power systems, which aims to minimize the generation cost of an alternating current transmission network under physical and operational constraints. 
Mathematically, it can be formulated as the following CPOP: \[\left\{\begin{array}{rl}\inf_{V_{i},S_{k}^{g}}&\sum_{k\in G}\left(\mathbf{c}_{2 k}(\mathcal{R}(S_{k}^{g}))^{2}+\mathbf{c}_{1k}\mathcal{R}(S_{k}^{g})+\mathbf{c}_{0k} \right)\\ \text{s.t.}&\angle V_{r}=0,\\ &\mathbf{S}_{k}^{gl}\leq S_{k}^{g}\leq\mathbf{S}_{k}^{gu},\quad\forall k\in G, \\ &\boldsymbol{\upsilon}_{i}^{l}\leq|V_{i}|\leq\boldsymbol{\upsilon}_{i}^{u}, \quad\forall i\in N,\\ &\sum_{k\in G_{i}}S_{k}^{g}-\mathbf{S}_{i}^{d}-\mathbf{Y}_{i}^{sh}|V_{i}|^{2} =\sum_{(i,j)\in E_{i}\cup E_{i}^{R}}S_{ij},\quad\forall i\in N,\\ &S_{ij}=\left(\overline{\mathbf{Y}}_{ij}-\mathbf{i}\frac{\mathbf{b}_{ij}^{c}}{2}\right)\frac{|V_{i}|^{2}}{|\mathbf{T}_{ij}|^{2}}-\overline{\mathbf{Y}}_{ij}\frac{V_{i}\overline{V}_{j}}{\mathbf{T}_{ij}},\quad\forall(i,j)\in E,\\ &S_{ji}=\left(\overline{\mathbf{Y}}_{ij}-\mathbf{i}\frac{\mathbf{b}_{ij}^{c}}{2}\right)|V_{j}|^{2}-\overline{\mathbf{Y}}_{ij}\frac{\overline{V}_{i}V_{j}}{\overline{\mathbf{T}}_{ij}},\quad\forall(i,j)\in E,\\ &|S_{ij}|\leq\mathbf{s}_{ij}^{u},\quad\forall(i,j)\in E\cup E^{R},\\ &\boldsymbol{\theta}_{ij}^{\Delta l}\leq\angle(V_{i}\overline{V}_{j})\leq\boldsymbol{\theta}_{ij}^{\Delta u},\quad\forall(i,j)\in E,\end{array}\right. \tag{4.4}\] where \(V_{i}\) is the voltage, \(S_{k}^{g}\) is the power generation, \(S_{ij}\) is the power flow (all are complex variables; \(\angle\) stands for the angle of a complex number) and all symbols in boldface are constants; in particular, \(\mathbf{Y}_{ij}\) denotes the branch admittance, \(\mathbf{b}_{ij}^{c}\) the charge susceptance and \(\mathbf{T}_{ij}\) the branch transformation ratio. Notice that \(G\) is the collection of generators and \(N\) is the collection of buses. For a full description of the AC-OPF problem, we refer the reader to [3] as well as [5]. We select test cases from the AC-OPF library PGLiB-OPF [3]. For each case, we solve the sparse HSOS relaxation [13] at the minimum relaxation order. The results are displayed in Table 5. From the table, we see that the reformulation (HSOS-\(\mathbb{R}\)') is several (\(1.4\sim 5\)) times as fast as the reformulation (HSOS-\(\mathbb{R}\)). ## Acknowledgments The authors would like to thank Jurij Volcic for helpful comments on an earlier preprint of this note.
2305.07582
A multi-messenger model for neutron star - black hole mergers
We present a semi-analytic model for predicting kilonova light curves from the mergers of neutron stars with black holes (NSBH). The model is integrated into the MOSFiT platform, and can generate light curves from input binary properties and nuclear equation-of-state considerations, or incorporate measurements from gravitational wave (GW) detectors to perform multi-messenger parameter estimation. The rapid framework enables the generation of NSBH kilonova distributions from binary populations, light curve predictions from GW data, and statistically meaningful comparisons with an equivalent BNS model in MOSFiT. We investigate a sample of kilonova candidates associated with cosmological short gamma-ray bursts, and demonstrate that they are broadly consistent with being driven by NSBH systems, though most have limited data. We also perform fits to the very well sampled GW170817, and show that the inability of an NSBH merger to produce lanthanide-poor ejecta results in a significant underestimate of the early (< 2 days) optical emission. Our model indicates that NSBH-driven kilonovae may peak up to a week after merger at optical wavelengths for some observer angles. This demonstrates the need for early coverage of emergent kilonovae in cases where the GW signal is either ambiguous or absent; they likely cannot be distinguished from BNS mergers by the light curves alone from ~2 days after the merger. We also discuss the detectability of our model kilonovae with the Vera C. Rubin Observatory's Legacy Survey of Space and Time (LSST).
B. P. Gompertz, M. Nicholl, J. C. Smith, S. Harisankar, G. Pratten, P. Schmidt, G. P. Smith
2023-05-12T16:19:18Z
http://arxiv.org/abs/2305.07582v2
# A multi-messenger model for neutron star - black hole mergers ###### Abstract We present a semi-analytic model for predicting kilonova light curves from the mergers of neutron stars with black holes (NSBH). The model is integrated into the MOSFiT platform, and can generate light curves from input binary properties and nuclear equation-of-state considerations, or incorporate measurements from gravitational wave (GW) detectors to perform multi-messenger parameter estimation. The rapid framework enables the generation of NSBH kilonova distributions from binary populations, light curve predictions from GW data, and statistically meaningful comparisons with an equivalent BNS model in MOSFiT. We investigate a sample of kilonova candidates associated with cosmological short gamma-ray bursts, and demonstrate that they are broadly consistent with being driven by NSBH systems, though most have limited data. We also perform fits to the very well sampled GW170817, and show that the inability of an NSBH merger to produce lanthanide-poor ejecta results in a significant underestimate of the early (\(\lesssim 2\) days) optical emission. Our model indicates that NSBH-driven kilonovae may peak up to a week after merger at optical wavelengths for some observer angles. This demonstrates the need for early coverage of emergent kilonovae in cases where the GW signal is either ambiguous or absent; they likely cannot be distinguished from BNS mergers by the light curves alone from \(\sim 2\) days after the merger. We also discuss the detectability of our model kilonovae with the Vera C. Rubin Observatory's Legacy Survey of Space and Time (LSST). ## 1 Introduction Our understanding of compact object mergers has made significant advances following the advent of gravitational-wave (GW) astronomy, including the first ever detection in GW of a binary black hole (BBH) merger (Abbott et al., 2016), binary neutron star (BNS) merger (Abbott et al., 2017) and most recently the merger of a neutron star - black hole (NSBH) system (Abbott et al., 2021). Where neutron stars (NS) are involved, accompanying electromagnetic (EM) signals like short gamma-ray bursts (SGRBs; e.g. Paczynski, 1986; Kouveliotou et al., 1993; Berger, 2014) and kilonovae (Li and Paczynski, 1998; Rosswog, 2005; Metzger et al., 2010; Barnes and Kasen, 2013; Metzger, 2019) are expected. Both became confirmed counterparts of BNS mergers with the coincident detections of GW170817 (Abbott et al., 2017), GRB 170817A (Goldstein et al., 2017; Hallinan et al., 2017; Margutti et al., 2017; Savchenko et al., 2017; Troja et al., 2017; D'Avanzo et al., 2018; Lyman et al., 2018; Margutti et al., 2018; Mooley et al., 2018; Troja et al., 2018; Lamb et al., 2019) and the kilonova AT2017gfo (Andreoni et al., 2017; Arcavi et al., 2017; Chornock et al., 2017; Coulter et al., 2017; Cowperthwaite et al., 2017; Drout et al., 2017; Evans et al., 2017; Kasliwal et al., 2017; Lipunov et al., 2017; McCully et al., 2017; Nicholl et al., 2017; Pian et al., 2017; Shappee et al., 2017; Smartt et al., 2017; Soares-Santos et al., 2017; Tanvir et al., 2017; Utsumi et al., 2017; Valenti et al., 2017; Villar et al., 2017). The association of kilonovae with BNS mergers has important implications for the production of heavy elements in the Universe. 
These thermal transients are powered by the radioactive decay of unstable heavy elements assembled by rapid neutron capture (\(r\)-process) nucleosynthesis following the merger (Lattimer and Schramm, 1974; Eichler et al., 1989; Freiburghaus et al., 1999). Modelling of the GW170817 kilonova indicates that BNS mergers may be the dominant source of \(r\)-process elements in the Universe (Rosswog et al., 2018). However, comparisons with kilonova candidates associated with cosmological SGRBs (Berger et al., 2013; Tanvir et al., 2013; Yang et al., 2015; Jin et al., 2015, 2016; Kasliwal et al., 2017; Jin et al., 2018; Troja et al., 2018; Eyles et al., 2019; Lamb et al., 2019; Troja et al., 2019; Jin et al., 2020; Fong et al., 2021; O'Connor et al., 2021; Rastinejad et al., 2022; Troja et al., 2022) imply that the yield of \(r\)-process elements is highly variable between events (Gompertz, 2018; Ascenzi et al., 2019; Rastinejad et al., 2021). In addition, significant uncertainties remain in the measured BNS merger rate. Estimates from GW events (\(320^{+490}_{-240}\)\(\rm{Gpc^{-3}~{}yr^{-1}}\); Abbott et al., 2021) are hampered by the low number of detections to date, while inferences from the rate of short GRB detections (\(270^{+1580}_{-180}\)\(\rm{Gpc^{-3}~{}yr^{-1}}\); Fong et al., 2015) must account for the jet opening angle distribution, which is poorly constrained. The exact contribution BNS mergers make to the \(r\)-process census is therefore highly uncertain. A growing number of studies seek to minimise this uncertainty through simultaneous modelling of both the EM and GW observations, where available (Margalit and Metzger, 2017, 2019; Barbieri et al., 2019; Coughlin et al., 2019; Dietrich et al., 2020; Breschi et al., 2021; Nicholl et al., 2021; Raaijmakers et al., 2021). Measurements of the binary and post-merger remnant from GW interferometers like Advanced LIGO (LIGO Scientific Collaboration et al., 2015), Advanced Virgo (Acernese et al., 2015) and KAGRA (Kagra Collaboration et al., 2019) can be combined with observations of the subsequent transient from EM observatories and synthesised into tighter posterior distributions for parameters that impact the nucleosynthesis yield (e.g. Abbott et al., 2017). They can also provide more stringent constraints on the NS equation-of-state. A significant additional uncertainty in the Universal \(r\)-process census is the contribution made by NSBH mergers (see e.g. Chen et al., 2021). Such events are theoretically capable of driving SGRBs and kilonovae (e.g. Rosswog, 2005; Tanaka et al., 2014; Paschalidis et al., 2015; Desai et al., 2019) if the NS is disrupted before plunging into the BH, and some candidate NSBH-driven events have been proposed in the literature (e.g. Troja et al., 2008; Yang et al., 2015; Jin et al., 2016; Kawaguchi et al., 2016; Gompertz et al., 2020; Zhu et al., 2022). However, the mass of disrupted material that remains outside of the remnant BH event horizon is expected to be low if the mass ratio of the binary is high and/or the magnitude of the orbit-aligned component of the pre-merger BH spin is low or negative (Foucart et al., 2014; Pannarale and Ohme, 2014; Kawaguchi et al., 2016; Foucart et al., 2018). The early GW-detected NSBH merger events (Abbott et al., 2021) and candidates (Abbott et al., 2021; The LIGO Scientific Collaboration et al., 2021, 2021) exhibit total masses and mass ratios that are suitable for NS disruption. 
However, the measured BH spins are consistent with zero, and the mergers are not expected to be EM bright (Dichiara et al., 2021; Mandel and Smith, 2021; Zhu et al., 2021; Gompertz et al., 2022). They appear to derive from the isolated binary evolution channel (Broekgaarden et al., 2021; Broekgaarden and Berger, 2021), though potentially via a non-standard pathway (Gompertz et al., 2022). The exception is GW191219_163120 (The LIGO Scientific Collaboration et al., 2021), whose large mass ratio implies that the binary may have formed through dynamical capture (Gompertz et al., 2022). While zero BH spin at the point of merger is a common prediction from population synthesis modelling (e.g. for BBH systems; Qin et al., 2018; Fuller and Ma, 2019), pathways to higher spin systems are possible through weak core-envelope coupling in BH progenitor stars, or tidal interactions following BH formation (Steinle and Kesden, 2021; Steinle et al., 2023). Should such systems be realised in nature, they are expected to be accompanied by bright kilonovae with nucleosynthesis yields up to ten times greater per event than that expected from BNS mergers (Tanaka et al., 2014). Their still-uncertain merger rate density may be comparable to that of BNS mergers, but could also be significantly lower (Mapelli and Giacobbo, 2018; Eldridge et al., 2019; Belczynski et al., 2020; Abbott et al., 2021). The potential contribution of NSBH mergers to Universal \(r\)-process production therefore ranges from none at all to being the dominant production sites of lanthanides and actinides through cosmic time. Calibrating their influence will require further detections of events in GW during LIGO-Virgo-KAGRA (LVK) observing runs to constrain merger rates, as well as EM detections or stringent limits on emission that translates to meaningful measurements or constraints on \(r\)-process yields. This is best achieved through GW-EM multi-messenger modelling. In this paper, we present a semi-analytic forward model for NSBH-driven kilonovae that predicts light curves from the binary configuration and NS equation-of-state. The relative simplicity of our model compared to more simulation-based alternatives means that it is optimised for quickly generating light curves for arbitrary parameters, fitting to data, predicting populations, or marginalising over unconstrained parameters. By providing the model within the MOSFiT framework (Guillochon et al., 2018), it is publicly available for easy use and adaptation, and trivial to perform model comparison against an equivalent BNS model (Nicholl et al., 2021), e.g. for modelling mass-gap systems, or when no GW data are available. In the absence of GW observations, fitting to kilonova light curves affords constraints on the properties of the progenitor binary. Any available GW information can be included in the priors to enable multi-messenger inference of the merger, and tight constraints on the nucleosynthesis yield and equation-of-state. Our paper is structured as follows. The model is described in Section 2 and compared to a well-sampled subset of SGRB kilonovae to see if any are compatible with being NSBHs in Section 3. We perform fits to the GW-EM multi-messenger dataset of GW170817 to search for a self-consistent NSBH solution in Section 4. We discuss the implications our model has for the detectability of NSBH kilonovae with the Vera C. Rubin Observatory in Section 5. Finally, we present our conclusions in Section 6. Magnitudes are in the AB system unless otherwise stated. 
## 2 Model description A schematic overview of our model is shown in Figure 1. For an electromagnetic transient to be produced, the NS must be disrupted by the tidal forces exerted upon it by the BH in the final stages of inspiral, with some mass remaining outside of the BH event horizon. Tidal disruption occurs if the NS overflows its Roche lobe at distances greater than the innermost stable circular orbit (ISCO) of the BH. This radius can be expressed as (cf. Bardeen et al., 1972): \[\hat{R}_{\rm ISCO}=3+Z_{2}-{\rm sgn}(\chi_{\rm BH})\sqrt{(3-Z_{1})(3+Z_{1}+2Z_{2 })}, \tag{1}\] where \(\hat{R}_{\rm ISCO}=R_{\rm ISCO}/M_{\rm BH}\) is the normalised ISCO radius, \(M_{\rm BH}\) is the BH mass, \(\chi_{\rm BH}\) is the orbit-aligned component of the BH's dimensionless spin parameter, \(Z_{1}=1+(1-\chi_{\rm BH}^{2})^{1/3}\left[(1+\chi_{\rm BH})^{1/3}+(1-\chi_{\rm BH})^{1/ 3}\right]\) and \(Z_{2}=\sqrt{3\chi_{\rm BH}^{2}+Z_{1}^{2}}\). An analytical fitting function for the mass of the material that remains outside of the BH event horizon was derived by Foucart et al. (2018). The fitting function was calibrated to 75 numerical relativity simulations (compiled from Etienne et al., 2009; Foucart et al., 2011; Kyutoku et al., 2011; Foucart et al., 2012, 2013; Lovelace et al., 2013; Foucart et al., 2014; Kyutoku et al., 2015; Brege et al., 2018), and gives an ejected mass of \[M_{\rm ej}=M_{\rm NS}^{b}\left[\max\left(\alpha\frac{1-2C_{\rm NS}}{\eta^{1/ 3}}-\beta\hat{R}_{\rm ISCO}\frac{C_{\rm NS}}{\eta}+\gamma,0\right)\right]^{ \delta}, \tag{2}\] where the four fitting parameters were found to be \(\alpha=0.406\), \(\beta=0.139\), \(\gamma=0.255\) and \(\delta=1.761\). Equation 2 parameterises the ejected mass in terms of \(\hat{R}_{\rm ISCO}\), \(\eta=(1+1/q)^{-2}q^{-1}\) (where \(q=M_{\rm NS}/M_{\rm BH}\) is the binary mass ratio), the compactness of the NS \[C_{\rm NS}=G\,M_{\rm NS}/(R_{\rm NS}c^{2}), \tag{3}\] and its baryonic mass (cf. Lattimer and Prakash, 2001): \[M_{\rm NS}^{b}=M_{\rm NS}\left(1+\frac{0.6C_{\rm NS}}{1-0.5C_{\rm NS}} \right). \tag{4}\] The ejected mass of the merger is therefore primarily a function of the orbit-aligned component of the BH's spin, the binary mass ratio, and the NS equation-of-state. ### Dynamical ejecta Krüger & Foucart (2020) developed an analytical fitting function for the mass of material ejected dynamically from an NSBH merger: \[\frac{M_{\rm dyn}}{M_{\rm NS}^{b}}=a_{1}q^{-n_{1}}\frac{1-2C_{\rm NS}}{C_{\rm NS }}-a_{2}q^{-n_{2}}\frac{R_{\rm ISCO}}{M_{\rm BH}}+a_{4}. \tag{5}\] The best-fitting parameters, validated against the 45 numerical relativity simulations in Kawaguchi et al. (2016), were found to be \(a_{1}=0.007116\), \(a_{2}=0.001436\), \(a_{4}=-0.02762\), \(n_{1}=0.8636\), and \(n_{2}=1.6840\). The average velocity of this ejecta was found to be an inverse function of \(q\) (Kawaguchi et al., 2016): \[v_{\rm dyn}=(0.01533q^{-1}+0.1907)c. \tag{6}\] The dynamical ejection of matter is primarily driven by tidal torque, and is therefore typically distributed within \(10^{\circ}\) - \(20^{\circ}\) of the orbital plane (e.g. Kawaguchi et al., 2015; Kyutoku et al., 2015). For simplicity we assume an axisymmetric distribution (see however Kyutoku et al., 2015; Kawaguchi et al., 2016). The tidal dynamical ejecta experience only weak neutrino irradiation (Kyutoku et al., 2018), meaning that the electron fraction is expected to be low (\(Y_{e}\lesssim 0.1\); Foucart et al., 2014; Metzger & Fernandez, 2014; Kyutoku et al., 2018). 
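The disruption criteria above are straightforward to evaluate numerically. The following Python sketch is our own illustrative implementation of Equations 1-6 (not the released model code): masses are in \(\rm M_{\odot}\), velocities in units of \(c\), and the bound \(0\leq M_{\rm dyn}\leq M_{\rm ej}\) is a sanity restriction we impose rather than part of the published fitting formulae.

```python
import numpy as np

def r_isco_hat(chi_bh):
    """Normalised ISCO radius R_ISCO / M_BH for aligned spin chi_BH (Equation 1)."""
    z1 = 1 + (1 - chi_bh**2)**(1/3) * ((1 + chi_bh)**(1/3) + (1 - chi_bh)**(1/3))
    z2 = np.sqrt(3 * chi_bh**2 + z1**2)
    return 3 + z2 - np.sign(chi_bh) * np.sqrt((3 - z1) * (3 + z1 + 2 * z2))

def nsbh_ejecta(m_bh, m_ns, chi_bh, c_ns):
    """Mass outside the BH (Eq. 2), dynamical ejecta (Eq. 5) and its velocity (Eq. 6)."""
    q = m_ns / m_bh
    eta = q / (1 + q)**2                       # equivalent to (1 + 1/q)^-2 q^-1
    mb_ns = m_ns * (1 + 0.6 * c_ns / (1 - 0.5 * c_ns))   # baryonic mass, Eq. 4
    rhat = r_isco_hat(chi_bh)

    alpha, beta, gamma, delta = 0.406, 0.139, 0.255, 1.761   # Foucart et al. (2018)
    m_ej = mb_ns * max(alpha * (1 - 2 * c_ns) / eta**(1/3)
                       - beta * rhat * c_ns / eta + gamma, 0.0)**delta

    a1, a2, a4 = 0.007116, 0.001436, -0.02762                # Kruger & Foucart (2020)
    n1, n2 = 0.8636, 1.6840
    m_dyn = mb_ns * (a1 * q**(-n1) * (1 - 2 * c_ns) / c_ns - a2 * q**(-n2) * rhat + a4)
    m_dyn = float(np.clip(m_dyn, 0.0, m_ej))   # our sanity bound: 0 <= M_dyn <= M_ej

    v_dyn = 0.01533 / q + 0.1907               # Eq. 6, in units of c
    return m_ej, m_dyn, v_dyn

# Illustrative high-spin binary: M_BH = 5 Msun, M_NS = 1.4 Msun (q ~ 0.28), chi_BH = 0.8
print(nsbh_ejecta(5.0, 1.4, 0.8, c_ns=0.17))
```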
We model the dynamical ejecta with a gray opacity of \(\kappa_{\rm dyn}=10\,{\rm cm}^{2}{\rm g}^{-1}\) (Tanaka & Hotokezaka, 2013; Kawaguchi et al., 2016; Kasen et al., 2017; Tanaka et al., 2020). ### Disc winds #### 2.2.1 Thermally driven wind Combining the work of Foucart et al. (2018) and Krüger & Foucart (2020), we obtain the disc mass: \[M_{\rm disc}=M_{\rm ej}-M_{\rm dyn}. \tag{7}\] Hydrodynamic simulations show that some of the post-merger disc surrounding the remnant BH is driven away in neutron-rich winds by viscous heating and nuclear recombination (e.g. Fernandez & Metzger, 2013; Fernandez et al., 2015; Just et al., 2015; Fernandez et al., 2020; Fujibayashi et al., 2020). The fraction of the disc that is ejected this way was shown to be a linear function of the disc compactness by Fernandez et al. (2020), and was parameterised as a function of the binary mass ratio by Raaijmakers et al. (2021) as \[\xi=\frac{M_{\rm th}}{M_{\rm disc}}=\xi_{1}+\frac{\xi_{2}-\xi_{1}}{1+e^{1.5(1/q -3)}}. \tag{8}\] We assume \(\xi_{1}=0.18\) and \(\xi_{2}=0.29\), the median values given in Raaijmakers et al. (2021). We combine Equations 7 and 8 to obtain the mass of material driven from the disc by thermal pressure (\(M_{\rm th}\)). This material is assumed to have an average velocity of \(v_{\rm therm}=0.034c\) (Fernandez et al., 2020). However, the outflow velocity is sensitive to the assumed viscosity parameter in the simulation, with higher viscous coefficients associated with more efficient acceleration of matter in the outer accretion disc (e.g. Fujibayashi et al., 2020). The electron fraction of the thermal wind is typically found to be in the range \(0.25\leq Y_{e}\leq 0.35\) (e.g. Foucart et al., 2015; Fernandez et al., 2020; Fujibayashi et al., 2020). This corresponds to a gray opacity of \(\kappa\lesssim 5\,{\rm cm}^{2}{\rm g}^{-1}\) in Tanaka et al. (2020), with the true value tending towards the lower end of this range when temperatures are below 5000 K (e.g. \(\kappa=1\,{\rm cm}^{2}{\rm g}^{-1}\) in Kasen et al., 2017). Fernandez et al. (2020) find that a significant portion of this wind has a lanthanide and actinide mass fraction \(X_{\rm(Lan+Ac)}<10^{-4}\). Motivated by this, we model the thermal wind as a two-component mixture model featuring a leading blue edge with \(\kappa=1\,{\rm cm}^{2}{\rm g}^{-1}\) enveloping a redder core with \(\kappa=5\,{\rm cm}^{2}{\rm g}^{-1}\). The blue mass fraction (\(f_{\rm blue}\)) was found to monotonically increase with disc mass by Fernandez et al. (2020), so we calculate \(f_{\rm blue}\) from \(M_{\rm disc}\) (Equation 7) using a first-order polynomial fit to the data in their Table 2, noting the large scatter induced by varying the BH mass: \[f_{\rm blue}=0.20199\log_{10}(M_{\rm disc})+1.12692. 
\tag{9}\] Figure 1: Schematic of the model. The five measured GW parameters are shown in green. The total ejecta mass (\(M_{\rm ej}\), Equation 2; Foucart et al., 2018) and dynamical ejecta mass (\(M_{\rm dyn}\), Equation 5; Krüger & Foucart, 2020) are functions of the binary properties, and influence the kilonova light curve evolution. The masses and velocities of individual emission components are shown in their respective colours, along with their dependencies. The grey opacities for each component are \(\kappa_{\rm blue}=1\,{\rm cm}^{2}{\rm g}^{-1}\), \(\kappa_{\rm th}=5\,{\rm cm}^{2}{\rm g}^{-1}\) and \(\kappa_{\rm mag}=\kappa_{\rm dyn}=10\,{\rm cm}^{2}{\rm g}^{-1}\). #### 2.2.2 Magnetically driven wind The inclusion of magnetic fields in three-dimensional general-relativistic magnetohydrodynamic (GRMHD) models (Siegel & Metzger, 2017, 2018; Christie et al., 2019; Fernandez et al., 2019) has revealed a second outflow in the form of an MHD-mediated wind. This results in twice as much ejecta mass, a higher average ejecta velocity and a lower average electron fraction (\(Y_{e}\)) when compared to equivalent hydrodynamic simulations (Fernandez et al., 2019). The mass ejected by magnetic processes depends on the geometry of the post-merger magnetic field (Christie et al., 2019). More poloidal configurations eject more mass, and with higher velocities (cf. Fernandez et al., 2019), while preferentially toroidal fields generate very little magnetically driven ejecta (cf. Siegel & Metzger, 2018). Fernandez et al. (2019) find that the magnetically-driven outflow has a velocity \(v>0.1c\), in excess of the maximum velocity seen in hydrodynamic simulations. We include this second wind component in our fiducial model with the ignorance parameter \(f_{\rm mag}\), which accounts for the unknown magnetic field configuration. A fully poloidal field has \(f_{\rm mag}=1\), while lower values represent more toroidal field geometries. It is applied as a fraction of the thermal wind ejecta mass derived in Equation 8: \(M_{\rm mag}=f_{\rm mag}M_{\rm th}\), and as \(v_{\rm mag}=0.22f_{\rm mag}c\), where \(0.22c\) is the average velocity of the faster bimodal component in the fully poloidal field geometry of Fernandez et al. (2019). The velocity floor is set equal to the thermal wind velocity (\(v_{\rm mag}\geq 0.034c\)). The magnetic wind component has \(Y_{e}\sim 0.1\), corresponding to \(\kappa=10\,{\rm cm}^{2}{\rm g}^{-1}\). This low electron fraction is maintained because the magnetic wind is driven from the disc towards the poles before it is significantly impacted by neutrino irradiation. ### Geometry The outflow geometry is structured in a similar fashion to Nicholl et al. (2021). We assume an axially symmetric kilonova and model each emission component as a conical section defined in terms of its half-opening angle \(\theta_{\rm open}\). Emitting regions are constructed following the formalism of Darbha & Kasen (2020), where the luminosity of each region is scaled to the area of the caps projected to an observer at a viewing angle \(\theta_{\rm obs}\). For fiducial parameters we assume that the magnetic wind is restricted to polar regions with \(\theta_{\rm mag}=45^{\circ}\), the thermal wind occupies moderate latitudes (\(\theta_{\rm wind}=80^{\circ}\)) and the dynamical ejecta sits \(\pm 10^{\circ}\) from the equator. A schematic of the model is shown in Figure 1. For simplicity, our model assumes that the emitting regions do not interact. This is a reasonable assumption for the tidally ejected dynamical component, but interactions between the thermal and magnetic winds are likely to produce turbulence along their contact interface. However, our assumption is a reasonable approximation for the majority of viewing angles, and the 50:50 contribution of the two emitting regions when viewed along the boundary between them is also likely a reasonable proxy for a mixed emission component. We do not account for the possibility of polar cavities carved out by a relativistic jet launched by the merger, which may expose hot, low opacity material (Nativi et al., 2021; Klion et al., 2021). 
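Collecting the disc-wind prescriptions of Section 2.2 (Equations 7-9 and the magnetic-wind scalings), the component masses and velocities can be sketched as below. This is again our own illustration, with our variable names; the clip of Equation 9 to \([0,1]\) is our assumption, since the linear fit is formally unbounded.

```python
import numpy as np

def disc_wind_masses(m_ej, m_dyn, q, f_mag):
    """Partition the post-merger disc into thermal and magnetic winds.

    Sketch of Equations 7-9 plus the magnetic-wind scalings of Section 2.2.2;
    masses in solar masses, speeds in units of c.
    """
    m_disc = m_ej - m_dyn                                     # Eq. 7
    xi1, xi2 = 0.18, 0.29                                     # Raaijmakers et al. (2021)
    xi = xi1 + (xi2 - xi1) / (1 + np.exp(1.5 * (1 / q - 3)))  # Eq. 8
    m_th = xi * m_disc
    v_th = 0.034
    # Blue (kappa = 1 cm^2/g) fraction of the thermal wind, Eq. 9,
    # clipped to [0, 1] since the linear fit is unbounded (our assumption).
    f_blue = float(np.clip(0.20199 * np.log10(m_disc) + 1.12692, 0.0, 1.0))
    # Magnetically driven wind: mass and speed scale with field geometry f_mag.
    m_mag = f_mag * m_th
    v_mag = max(0.22 * f_mag, v_th)                           # floor at thermal speed
    return dict(m_disc=m_disc, m_th=m_th, v_th=v_th,
                f_blue=f_blue, m_mag=m_mag, v_mag=v_mag)

# Continuing the earlier example output (m_ej ~ 0.23, m_dyn ~ 0.03 Msun):
print(disc_wind_masses(m_ej=0.23, m_dyn=0.03, q=0.28, f_mag=1.0))
```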
One caveat to our model is that it is based on simulations where the BH spin axis and binary orbital axis are aligned. It has been shown that only considering the aligned spin cases still results in accurate estimates of the mass that remains outside of the BH (Foucart et al., 2013; Kawaguchi et al., 2015). However, misalignment may induce spin precession, which breaks symmetry and is likely to result in asymmetric structure in the ejecta (e.g. Kawaguchi et al., 2015). This is not captured in our model. ### Conversion to light curves Our model calculates \(r\)-process ejecta masses and velocities from the input binary configuration. In order to convert them to kilonova light curves, we incorporate the NSBH ejecta model as a package in MOSFiT (Guillochon et al., 2018). \(r\)-process masses and velocities are converted to light curves through pre-existing MOSFiT modules, including semi-analytical models for heating rates and deposition (Korobkin et al., 2012; Barnes et al., 2016; Cowperthwaite et al., 2017; Villar et al., 2017; Metzger, 2019), an approximation of photon diffusion through the ejecta (Arnett, 1982), and self-consistent evolution of the photospheric radius (Nicholl et al., 2017). ### Connecting to compact binary coalescences The shape of the kilonova light curve is underpinned by the properties of the merging binary, which can be measured from GW observations. The most accurately measured GW parameter is the 'chirp' mass (\(\mathcal{M}\)), which is related to the binary component masses by \(\mathcal{M}=(M_{\rm BH}M_{\rm NS})^{3/5}(M_{\rm BH}+M_{\rm NS})^{-1/5}\). GW measurements can also provide constraints on the viewing angle \(\theta\), the mass ratio \(q\) and the orbit-aligned BH dimensionless spin \(\chi_{\rm BH}\). One parameter of particular importance when estimating the mass of material that remains outside of the event horizon (Equation 2) is the NS compactness, \(C_{\rm NS}\). When fitting combined GW-EM multi-messenger data, \(C_{\rm NS}\) can be measured rather than assumed, leading to constraints on the NS equation-of-state. From the EM side, \(C_{\rm NS}\) can be constrained via the best-fit ejecta mass from the kilonova light curve. The signal detected by GW detectors is a mass-weighted combination of the tidal deformability of the two binary components, known as the effective tidal deformability (\(\tilde{\Lambda}\); Flanagan & Hinderer 2008; Wade et al., 2014; Raithel et al., 2018). \begin{table} \begin{tabular}{c c c c} \hline \hline Parameter & Fiducial value & Astrophysical prior & GW170817 prior \\ \hline \(\mathcal{M}^{a}\) (\(M_{\odot}\)) & 2.22* & [1.0, 6.0] & 1.188\({}^{+0.004}_{-0.002}\) \\ \(q^{b}\) & 0.28* & [0.1, 1.0] & [0.4, 1.0] \\ \(\tilde{\Lambda}^{c}\) & 11.0* & [0.0, 100.0] & [0.0, 700.0] \\ \(\chi^{d}_{\rm BH}\) & 0.8 & [-1.0, 1.0] & [-0.01, 0.17] \\ \(\cos\theta^{e}\) & 0.707 & [0.0, 1.0] & [0.883, 1.0] \\ \(\cos\theta^{f}_{\rm mag}\) & 0.707 & [0.5, 1.0] & [0.5, 1.0] \\ \(\cos\theta^{g}_{\rm wind}\) & 0.174 & [0.0, 0.342] & [0.0, 0.342] \\ \(f_{\rm mag}^{h}\) & 1.0 & [0.1, 1.0] & [0.1, 1.0] \\ \(\log N_{H}^{i}\) & 19.0 & [19.0, 23.0] & [19.0, 23.0] \\ \hline \hline \end{tabular} \end{table} Table 1: The free parameters of the NSBH kilonova model, their assumed fiducial values, and the prior ranges used when fitting. Bracketed values indicate a flat prior distribution, while Gaussian priors are given as median values with one sigma confidence intervals. The GW170817 prior set uses the high spin priors from Abbott et al. (2017).
Tidal deformability is a measure of the responsiveness of a body to an external tidal field, and is zero for a BH (Binnington and Poisson, 2009; Damour and Nagar, 2009). In the NSBH case, the tidal deformability of the NS can therefore be calculated from the component masses of the binary and the effective tidal deformability: \[\Lambda_{\rm NS}=\frac{13}{16}\frac{\tilde{\Lambda}(M_{\rm BH}+M_{\rm NS})^{5}}{( M_{\rm NS}+12M_{\rm BH})M_{\rm NS}^{4}}. \tag{10}\] We then relate this quantity to \(C_{\rm NS}\) using the quasi-universal relation derived in Yagi and Yunes (2017): \[C_{\rm NS}=0.360-0.0355\,{\rm ln}\ (\Lambda_{\rm NS})+0.000705\,{\rm ln}\ ( \Lambda_{\rm NS})^{2}. \tag{11}\] Our final model consists of 9 free parameters. These are listed in Table 1 with their fiducial values and assumed priors. The GW and EM branches of the model and the relationship between the measured and derived parameters are shown in Figure 1. ### Parameter Sensitivity Figure 2 shows how the kilonova light curves are affected by varying \(\chi_{\rm BH}\), the binary mass ratio (by changing \(M_{\rm BH}\)), the observer angle, and the assumed magnetic field geometry through \(f_{\rm mag}\). As expected, higher BH spins and more symmetric binary mass ratios produce brighter kilonovae in all observing filters because they lead to a greater ejected mass outside of the remnant event horizon. We find that the \(K\)-band brightness is largely insensitive to viewing angle, likely due to the highly similar colour, mass, and velocity of the dynamical ejecta at the equator and the magnetically driven wind at the poles in our fiducial model. The bluer bands are more sensitive to the viewing angle, with the \(g\)-band light curves appearing \(\sim 1.5\) magnitudes brighter at peak for an equatorial observer than a polar one at \(3-5\) days after merger. This is because an equatorial viewing angle provides the widest range of sight lines to the thermal wind of the three viewing angles presented, and hence the largest relative contribution from the lowest opacity material to the received flux. This finding suggests that NSBH-driven kilonovae may peak quite strongly in the optical up to a week after merger for oblique viewing angles, in stark contrast to BNS events. Finally, we find that varying the magnetic field geometry has a moderate (\(\sim 1\) magnitude) effect on the peak brightness in the \(K\)-band, due to the larger mass ejection associated with more polar field geometries (i.e. increasing \(f_{\rm mag}\)). A similar effect is seen in the early (\(\lesssim 1\) day) optical evolution, where higher velocity magnetic winds lower the density of the ejecta more rapidly, allowing photons to escape to the observer sooner. Figure 2: Example light curves in the \(K\) (black), \(i\) (blue), and \(g\) (green) bands for our fiducial model, with variations in a single parameter per panel. _Top left:_ varying \(\chi_{\rm BH}\), the orbit-aligned BH spin. _Top right:_ varying \(q\), the binary mass ratio (via \(M_{\rm BH}\)). _Bottom left:_ varying \(\theta\), the observer inclination from the pole. _Bottom right:_ varying \(f_{\rm mag}\), the magnetic field geometry. ## 3 Comparison to GRB-Kilonovae In this section we compare a selection of kilonova candidates associated with cosmological SGRBs to our fiducial model (see Table 1). Figure 3 shows the light curves of five afterglow + kilonova candidates. 
These include the first reported GRB-kilonova candidate (GRB 130603B; Tanvir et al., 2013; Berger et al., 2013), the two best-sampled GRB-kilonovae outside of GW170817 (GRB 160821B and GRB 211211A; Lamb et al., 2019; Troja et al., 2019; Rastinejad et al., 2022; Troja et al., 2022; Gompertz et al., 2023) and two examples of kilonova candidates alongside 'extended emission' (EE; Norris and Bonnell, 2006; Norris et al., 2010; Gompertz et al., 2013) SGRBs (GRB 050709 and GRB 060614; Yang et al., 2015; Jin et al., 2015, 2016). EE SGRBs have been suggested as candidates for NSBH-driven events (Troja et al., 2008; Gompertz et al., 2020), and exhibit \(\sim 100\) s of rapidly-evolving high energy emission (Gompertz et al., 2023) in addition to the \(\lesssim 2\) s prompt spike. In each case, our fiducial model is combined with power-law or broken power-law profiles that approximate the GRB afterglow. The parameters used are shown in Table 2. The comparisons are deliberately approximate; in many cases the available data is not sufficient to constrain the large number of parameters needed to model both the GRB afterglow and the kilonova. Nevertheless, we demonstrate that even without fine tuning, our fiducial NSBH kilonova model provides rough agreement with the candidate kilonova excesses seen in SGRBs. ### GRB 050709 GRB 050709 was detected by the High Energy Transient Explorer (HETE-2; Lamb et al., 2000). It featured a short, hard prompt spike with \(t_{90}=70\pm 10\) ms in the 30-400 keV energy band, followed by a long-soft tail with \(t_{90}=130\pm 7\) s in the 2-25 keV energy band (Villasenor et al., 2005), where \(t_{90}\) is the time in which the middle 90 per cent of event photons are collected. GRB 050709 is therefore an EE SGRB. It was the first SGRB for which an optical counterpart was identified (Hjorth et al., 2005) and was associated with a galaxy at \(z=0.16\) (Fox et al., 2005). A kilonova was first claimed in GRB 050709 by Jin et al. (2016). Photometry was taken from Fox et al. (2005); Covino et al. (2006) and Jin et al. (2016). We find that the Jin et al. (2016) \(I_{\rm Vega}=24.1\pm 0.2\) detection at \(t\sim 2.5\) days is incompatible with the contemporaneous \(g\)- and \(r\)-band detections and preceding \(r\)-band detection under an afterglow interpretation, and use the \(I_{\rm Vega}>23.25\) upper limit from Covino et al. (2006) for this epoch. Our fiducial model provides a good qualitative match to the data. This is in agreement with Jin et al. (2016), who found a best fit with an ejecta mass of \(0.05\,\rm M_{\odot}\) and a velocity of 0.2c from an NSBH merger, consistent with the fiducial model. However, we note that all of the data can be adequately described by a GRB afterglow model if the jet break occurs at \(t\sim 10\) days, and hence the veracity of the GRB 050709 kilonova candidate remains uncertain. ### GRB 060614 GRB 060614 was detected by the Burst Alert Telescope (BAT; Barthelmy et al., 2005) on board the _Neil Gehrels Swift Observatory_ (Gehrels et al., 2004). The burst duration of \(t_{90}=102\) s (15 - 350 keV; Gehrels et al., 2006) is significantly above the canonical \(t_{90}=2\) s divide between short and long GRBs (Kouveliotou et al., 1993). However, at a redshift of \(z=0.125\) (Della Valle et al., 2006; Gal-Yam et al., 2006), deep optical observations exclude an associated supernova to limits hundreds of times fainter than the archetypal GRB supernova SN1998bw (Fynbo et al., 2006; Gal-Yam et al., 2006; Della Valle et al., 2006). 
GRB 060614 is therefore most likely a merger-driven EE SGRB (see however Cobb et al., 2006), further supported by its negligible spectral lag (Gehrels et al., 2006) and strong spectral evolution (Mangano et al., 2007). Based on its light curve, Yang et al. (2015) and Jin et al. (2015) claimed evidence for a kilonova counterpart. Photometry was taken from Yang et al. (2015); Della Valle et al. (2006) and Gal-Yam et al. (2006). The emission of GRB 060614 is likely dominated by the bright afterglow at almost all epochs; a deviation from a power-law is only detected in two points (Yang et al., 2015). Our fiducial model provides a reasonable approximation of the \(i\)-band excess at \(\approx 8\) days, but under-predicts the flux in the \(\approx 13\) day epoch. Fine tuning to produce a slightly fainter and longer-lived kilonova signature may resolve the discrepancy, which can be achieved with e.g. a lower velocity wind. Yang et al. (2015) suggest an NSBH merger with kilonova ejecta mass of \(\approx 0.1\,\rm M_{\odot}\) and velocity \(\approx 0.2c\), with an effective temperature of \(\approx 2000\) K. This is broadly consistent with the fiducial model, which produces \(\approx 0.07\,\rm M_{\odot}\) of ejecta combined between the magnetic and thermal winds, at a temperature of \(2500\) K and \(v_{\rm mag}=0.22c\). ### GRB 130603B GRB 130603B was detected by _Swift_-BAT with a duration of \(t_{90}=0.18\pm 0.02\) s (\(15-350\) keV; Lien et al., 2016) and is therefore an unequivocal member of the SGRB class. With a redshift of \(z=0.356\) (Cucchiara et al., 2013; Thone et al., 2013), it is also the most distant GRB in our comparison sample. GRB 130603B was the first ever identified kilonova candidate (Tanvir et al., 2013; Berger et al., 2013) thanks to a significant excess in HST F160W over the expected afterglow, constrained by a simultaneous HST F606W non-detection. Photometry was taken from Tanvir et al. (2013). The fiducial model provides a good match to the data, although the kilonova is only detected in a single epoch and hence the observations are not particularly constraining. Kawaguchi et al. (2016) showed that GRB 130603B can be described with an NSBH-driven kilonova model for reasonably high spins (\(\chi_{\rm BH}>0.3\)) and larger NS radii. Berger et al. (2013) find that the light curve can be described by a kilonova driven by either a BNS or NSBH merger with an ejecta mass of \(0.03\) - \(0.08\,\rm M_{\odot}\) and a velocity in the range of \(0.1\) - \(0.3\)c, consistent with our fiducial model. Tanvir et al. (2013) find a similar mass range: \(10^{-3}\,\rm M_{\odot}<M_{\rm ej}<10^{-2}\,\rm M_{\odot}\). ### GRB 160821B GRB 160821B was detected by _Swift_-BAT with \(t_{90}=0.48\pm 0.07\) s (15 - 350 keV; Lien et al., 2016). The kilonova was reported independently by Lamb et al. (2019) and Troja et al. (2019), with the redshift found to be \(z=0.16\). 
Multi-wavelength observations, particularly those at X-ray and radio frequencies, suggested that the GRB 160821B afterglow may have experienced late energy injection from a second blast wave arriving at the afterglow emission site at late times (Lamb et al., 2019b). Such a phenomenon is not captured in our simple power-law representation of the afterglow. \begin{table} \begin{tabular}{c c c c c c} \hline \hline GRB & \(d_{L}\) & \(\beta\) & \(\alpha_{1}\) & \(t_{\rm b}\) & \(\alpha_{2}\) \\ & (Mpc) & & & (days) & \\ \hline 050709 & 795 & 1.0 & 1.4 & 2.2 & 2.5 \\ 060614 & 608 & 0.8 & 2.3 & – & – \\ 130603B & 1960 & 1.8 & 1.2 & 0.5 & 2.5 \\ 160821B & 806 & 0.5 & 0.5 & 1.3 & 2.5 \\ 211211A & 350 & 0.5 & 0.8 & 0.5 & 2.0 \\ \hline \hline \end{tabular} \end{table} Table 2: The luminosity distance (\(d_{L}\)) of the five GRBs in Figure 3 with the spectral (\(\beta\)) and temporal (\(\alpha\)) indices and break times (\(t_{b}\)) used to approximate their afterglows. Figure 3: Model comparison to five kilonova candidates associated with cosmological GRBs. Our fiducial NSBH kilonova model (dashed lines) is scaled to the distance of each GRB and assumes a polar viewing angle (cos \(\theta=1\)). The GRB afterglow is approximated by power-law or broken power-law profiles with flux \(F\propto t^{-\alpha}\nu^{-\beta}\) (dotted lines). Solid lines show the sum of the two components. Note that the kilonova models are not fit to the data in any way. We use the photometry from Kasliwal et al. (2017) and Lamb et al. (2019). Despite higher sampling than most of the other GRBs presented in this work, the fiducial model does remarkably well in matching the evolution of GRB 160821B with no fine tuning of the kilonova. We note that the \(J\)- and \(H\)-bands are over-predicted, particularly at late times, implying that the mass of the reddest ejecta needs to be reduced or its emission must evolve faster. This can be achieved with a lower binary mass ratio or BH spin. We also under-predict the emission in the \(g\)-band, which may indicate a lower grey opacity or higher blue ejecta fraction (from the thermal wind) is needed. Lamb et al. (2019) find a good fit to the data with a refreshed afterglow and a two-component kilonova model with a wind ejecta mass of \(0.01\,\mathrm{M}_{\odot}\) travelling at \(v<0.15\)c and a dynamical ejecta mass of \(0.001\,\mathrm{M}_{\odot}\) with \(v>0.1\)c. Troja et al. (2019) find a low, lanthanide-rich (\(\kappa=10\,\mathrm{cm}^{2}\,\mathrm{g}^{-1}\)) ejecta mass of \(\lesssim 0.006\,\mathrm{M}_{\odot}\) and \(v\gtrsim 0.05\)c. The low ejecta masses inferred by both studies are as much as an order of magnitude less than is produced in our fiducial model, and may explain why it over-predicts the late near-infrared evolution. ### GRB 211211A GRB 211211A was detected by the _Fermi_ Gamma-ray Burst Monitor (GBM; Meegan et al., 2009) and _Swift_-BAT, with the latter measuring \(t_{90}=51.4\pm 0.8\) s (15 - 350 keV; Stamatikos et al., 2021). The burst is therefore an EE SGRB (for a full analysis of the high energy emission see Gompertz et al., 2023). At a redshift of \(z=0.076\) (Rastinejad et al., 2022), GRB 211211A is the second-closest compact binary merger to Earth ever discovered, with only the GW-localised GW170817 more proximal. The kilonova was identified through a strong infrared excess by Rastinejad et al. (2022) and was independently modelled by Mei et al. (2022); Troja et al. (2022); Xiao et al. (2022); Yang et al. (2022) and Zhu et al. (2022). We use the photometry from Rastinejad et al. 
(2022). Similar to GRB 160821B, our fiducial model struggles to evolve fast enough to reproduce the late near-infrared observations. It over-predicts the flux at essentially all wavelengths beyond \(\sim 2\) days, particularly in the \(i\)-band (though we note that these suffer from significant systematic errors in their magnitude measurements; Rastinejad et al. 2022). Rastinejad et al. (2022) find a best-fitting kilonova model with a total ejecta mass of \(0.047^{+0.026}_{-0.011}\,\mathrm{M}_{\odot}\), half of which is partitioned into a lanthanide-rich 'red' component with \(v\approx 0.3\)c. The other half is divided equally between an intermediate-opacity 'purple' component with \(v\approx 0.1\)c and a lanthanide-free 'blue' component with \(v\approx 0.3\)c. A BNS merger was preferred over an NSBH. The total ejecta mass is lower than is produced by the fiducial model. The relative abundance of lanthanide-rich, high-velocity ejecta could be achieved with a strong magnetic wind in our model, implying a high magnetic field with poloidal geometry. A strong dipole field is inferred for GRB 211211A by Gao et al. (2022). Mei et al. (2022) fit the observations with an isotropic, one-component kilonova model. They find an ejecta mass of \(0.020^{+0.009}_{-0.006}\,\mathrm{M}_{\odot}\) with an average velocity of \(0.10^{+0.07}_{-0.04}\,\mathrm{c}\) and a grey opacity of \(0.6^{+0.8}_{-0.3}\,\mathrm{cm}^{2}\,\mathrm{g}^{-1}\). Troja et al. (2022) find that the observations can be matched with \(0.01-0.1\,\mathrm{M}_{\odot}\) of wind ejecta and \(0.01-0.03\,\mathrm{M}_{\odot}\) of dynamical ejecta from a BNS merger. Zhu et al. (2022) employ an NSBH binary-driven model, and find the observations are best described by the merger of an \(8.21^{+0.77}_{-0.75}\,\mathrm{M}_{\odot}\) BH with dimensionless spin \(0.62^{+0.06}_{-0.07}\) and a \(1.23^{+0.06}_{-0.07}\,\mathrm{M}_{\odot}\) NS, producing \(0.005-0.03\,\mathrm{M}_{\odot}\) of lanthanide-poor wind ejecta and \(0.015-0.025\,\mathrm{M}_{\odot}\) of lanthanide-rich dynamical ejecta. Finally, Yang et al. (2022) find a lanthanide-poor kilonova (\(\kappa=0.8^{+0.1}_{-0.2}\,\mathrm{cm}^{2}\,\mathrm{g}^{-1}\)) with a total ejecta mass of \(0.027^{+0.011}_{-0.001}\,\mathrm{M}_{\odot}\) at an average velocity of \(0.25^{+0.06}_{-0.02}\)c. The disagreement between these models highlights the sensitivity of the results to the construction of the model, and the urgent need for further well-sampled datasets. ## 4 Fitting to GW 170817 There is only one GW-EM multi-messenger dataset available for fitting: that of GW 170817 (Abbott et al., 2017). Observational and modelling evidence strongly supports this event being a BNS merger, but some parameter space is available for NSBH models with BH masses below typical expectations. Coughlin & Dietrich (2019) showed that an NSBH merger could potentially reproduce the GW and EM constraints, but is disfavoured relative to a BNS merger. In this section we investigate the ability of our model to reproduce the GW signal and kilonova associated with GW170817. We approach this in two different ways. In the first approach, we use an 'astrophysical' prior that does not include the posteriors derived from GW170817, and instead allows the model to explore the full parameter space for NSBH mergers that are expected to be EM bright while penalising realisations that lie outside of theoretical expectations.
Specifically, we penalise solutions with NSs more massive than the maximum stable NS mass (the Tolman-Oppenheimer-Volkoff mass, \(M_{\mathrm{TOV}}\), e.g. Shapiro & Teukolsky, 1986), which we set as \(M_{\mathrm{TOV}}=2.17\,\mathrm{M}_{\odot}\) (Margalit & Metzger, 2017; Nicholl et al., 2021), as well as BHs with masses below this threshold. We also penalise solutions with tidal deformabilities outside of the expected range (e.g. Hinderer, 2008; Hinderer et al., 2010; Postnikov et al., 2010). The second approach takes the posterior solutions from Abbott et al. (2017) as the model priors, but relaxes the penalties for unconventional solutions. The first formalism therefore allows the model to search for a more 'canonical' NSBH binary system that best reproduces the light curve, and the second challenges it to find a solution that satisfies the GW signal even where it defies expectations. Fitting is performed with emcee (Foreman-Mackey et al., 2013). Our best-fitting solutions for the two prior sets are shown in Figure 4. While the model provides a reasonably good match to the late emission and redder bands, it significantly under-produces the early optical emission. This result is expected; even BNS models, which are capable of providing more 'blue' emission than the NSBH case, require additional means for producing optical light when modelling GW170817 and other well-sampled kilonovae (Nicholl et al., 2021; Rastinejad et al., 2022). Whether additional emissive mechanisms such as the shock heating of ejecta by a GRB jet (e.g. Kasliwal et al., 2017; Arcavi, 2018; Piro & Kollmeier, 2018) can be included in NSBH models depends on whether sufficient polar material is present prior to the launching of the jet (if one is launched at all by NSBH mergers), and will require GW-EM observations to confirm. It is notable that beyond \(\sim 2\) days, it becomes very difficult to distinguish the light curves of kilonovae driven by BNSs and NSBHs, and hence early observations are essential where the GW signal is either absent or ambiguous. The posteriors for the astrophysical prior show a loose preference for a chirp mass of \(\mathcal{M}=3.01^{+0.70}_{-0.56}\,\mathrm{M}_{\odot}\), a mass ratio of \(q=0.11^{+0.03}_{-0.00}\), and an effective tidal deformability of \(\tilde{\Lambda}=0.91^{+0.88}_{-0.23}\). This translates into \(M_{\mathrm{BH}}\approx 11.6\,\mathrm{M}_{\odot}\), \(M_{\mathrm{NS}}\approx 1.3\,\mathrm{M}_{\odot}\), and \(R_{\mathrm{NS}}\approx 12.1\,\mathrm{km}\). The BH spin is preferentially high, at \(\chi_{\mathrm{BH}}=0.82^{+0.04}_{-0.05}\). The viewing angle is equatorial, \(\cos\theta=0.07^{+0.09}_{-0.06}\), and the magnetic field geometry is strongly dipolar at \(f_{\rm mag}=0.94^{+0.05}_{-0.21}\). Broadly, these parameters maximise the ejected mass while retaining sight lines to the bluer material. The event-based priors limit the posterior solutions to within \(28^{\circ}\) of the poles (Abbott et al., 2017), leading to a polar solution with \(\cos\theta=1.00^{+0.00}_{-0.01}\), in contrast to the results from the less restrictive astrophysical prior set. The preferred chirp mass is \(\mathcal{M}=1.19^{+0.00}_{-0.00}\,\mathrm{M}_{\odot}\), strongly constrained by the tight Gaussian priors from the GW detection. The binary mass ratio is found to be \(q=0.41^{+0.01}_{-0.01}\), with an effective tidal deformability of \(\tilde{\Lambda}=122.1^{+1.6}_{-9.1}\). These properties define a binary with \(M_{\rm BH}\approx 2.2\,\rm M_{\odot}\), \(M_{\rm NS}\approx 0.9\,\rm M_{\odot}\), and \(R_{\rm NS}\approx 9.7\,\rm km\).
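For reference, the translation from chirp mass and mass ratio to the component masses quoted above follows from the standard definition \(\mathcal{M}=(m_{1}m_{2})^{3/5}(m_{1}+m_{2})^{-1/5}\) with \(q=m_{2}/m_{1}\). A minimal Python sketch (ours, for illustration only; not part of the fitting code) reproduces the quoted numbers:

```python
def component_masses(chirp_mass, q):
    """Invert M_c = (m1*m2)**(3/5) / (m1 + m2)**(1/5) with q = m2/m1 <= 1."""
    m1 = chirp_mass * (1.0 + q) ** 0.2 / q ** 0.6  # primary (BH) mass
    m2 = q * m1                                    # secondary (NS) mass
    return m1, m2

# Posterior medians quoted in the text (masses in solar masses):
print(component_masses(3.01, 0.11))  # astrophysical priors -> (~11.6, ~1.3)
print(component_masses(1.19, 0.41))  # event-based priors   -> (~2.2, ~0.9)
```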
The low component masses for the event-based priors are dictated by the tight constraints on the chirp mass from GW170817. This has a knock-on effect of requiring a small NS radius to avoid over-producing the emission; the NS tidal deformability is already \(\sim 1500\) with \(R_{\rm NS}\approx 9.7\,\rm km\). However, this combination of low NS mass and radius would point to very stiff equations of state. The BH is found to have a relatively low orbit-aligned spin magnitude, with \(\chi_{\rm BH}=0.15^{+0.02}_{-0.04}\), again mandated by the GW priors. As with the astrophysical prior set, the magnetic field geometry is preferentially dipolar (the configuration that produces the most ejecta mass, and hence luminosity), with \(f_{\rm mag}=0.99^{+0.01}_{-0.04}\). While the binary solutions are notably different between the two prior sets, the resultant kilonovae are strikingly similar (Figure 4). The biggest discriminant is in the bluer bands, emphasising the need for early observations. ## 5 Detectability with Rubin With its wide field of view and large aperture, the Vera C. Rubin Observatory (Rubin) will be well suited to discovering EM counterparts to GW triggers (e.g. Andreoni et al., 2022) and serendipitous transients during its Legacy Survey of Space and Time (LSST; Ivezić et al., 2019). We investigate the detectability of the population of NSBH kilonovae predicted by our model by drawing 2000 light curve realisations from our model in generative mode. This mode enables the user to define their priors in terms of component mass and NS radius rather than chirp mass and deformability, and hence is suitable for simulating populations or fitting when no GW data are available. In particular, it helps to avoid realisations with unrealistic NS masses, which are hard to mitigate against when defining populations with chirp mass and mass ratio. BH and NS mass prior distributions were constructed following model C in Broekgaarden et al. (2021), and the NS radius was set to \(11\,\rm km\), following Nicholl et al. (2021). We generate our population out to \(600\,\rm Mpc\) from Earth, which covers the full NSBH detection range predicted for advanced LIGO in O5 (Abbott et al., 2020). Priors for the other parameters were taken from the astrophysical set (Table 1), but negative \(\chi_{\rm BH}\) was excluded. Of the 2000 realisations, 727 produce detectable emission, 1273 are EM dark, and 8 were discarded for numerical artefacts in their light curves. Histograms of the peak magnitudes of our realisations in different bands are shown in Figure 5. Assuming a detection threshold of \(g=24.81\) and \(i=23.92\) for LSST's Wide-Fast-Deep (WFD) survey (Ivezić et al., 2019), we obtain 279 g-band detections and 237 i-band detections. Assuming a GW follow-up strategy that reaches a depth of \(g=26\) and \(i=25\) (Andreoni et al., 2022), this becomes 655 g-band detections and 599 i-band detections. However, we define 'detections' as realisations with peak magnitudes brighter than the detection threshold. In reality, it is unlikely these transients would be recovered from faint detections in single epochs. We also do not account for line-of-sight extinction and the cadence of follow-up observations. These estimates of the fraction of realisations that are detectable should therefore be considered upper limits.
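The detection bookkeeping above amounts to comparing peak magnitudes against survey depths. A schematic Python sketch (the peak-magnitude arrays below are placeholders standing in for the 2000 model realisations, which we do not reproduce here):

```python
import numpy as np

# Placeholder peak magnitudes; in practice these come from the model draws.
rng = np.random.default_rng(0)
peak_g = rng.normal(25.5, 1.5, size=2000)
peak_i = rng.normal(24.8, 1.5, size=2000)

# Single-visit WFD depths and GW follow-up depths quoted in the text.
depths = {"WFD": {"g": 24.81, "i": 23.92},
          "follow-up": {"g": 26.0, "i": 25.0}}

for survey, lim in depths.items():
    # Brighter means numerically smaller magnitude, hence '<='.
    n_g = int(np.sum(peak_g <= lim["g"]))
    n_i = int(np.sum(peak_i <= lim["i"]))
    print(f"{survey}: {n_g} g-band and {n_i} i-band peak 'detections'")
```

As stressed in the text, counts obtained this way neglect cadence and extinction and are therefore upper limits.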
Our model therefore predicts that less than one third of NSBH GW triggers (assuming a maximum distance of 600 Mpc from Earth) will yield EM detections with LSST even if all are well sampled by follow-up observations. Sampling the whole localisation region on each of the first four nights following a GW trigger, as per the _preferred_ strategy of Andreoni et al. (2022), would achieve sufficient coverage.

Figure 4: Light curves from the posterior distributions of the best fits to the GW170817 data (Villar et al., 2017) using the NSBH kilonova model and the general astrophysical priors (left) or event-based GW170817 priors (right). The model provides a reasonable match to the data at times later than two days after trigger, but struggles to produce the early emission in both cases, particularly in optical and UV bands.

By contrast, the reduced depth of the WFD survey compared to GW follow-up observations makes it significantly less likely that our model kilonovae would be serendipitously discovered. Only \(\sim 10\) per cent of realisations would be detectable if observed by chance at peak magnitude. By way of comparison, we also investigate the detectability of a population of 2000 BNS mergers, using the model of Nicholl et al. (2021). NS mass and mass ratio priors are chosen to be Gaussian and are taken from the sample of Galactic BNS systems presented in Farrow et al. (2019). We limit our population distance to \(300\,\mathrm{Mpc}\), in accordance with the advanced LIGO range for BNS systems. We find that essentially every realisation peaks at least one magnitude brighter than even the shallower detection threshold (Figure 5), and hence we conclude that Rubin will be capable of finding all EM counterparts to BNS GW triggers if it responds to them within a few days. However, as in the NSBH case, we neglect line-of-sight extinction and the observing cadence, defining detections only by the brightness of the realisation relative to the detectability threshold. We note that the brightness of EM counterparts (and hence their detectability) may be enhanced by gravitational lensing. Evidence of this may manifest in the GW signals, in particular in candidate 'mass gap' mergers where one (or both) binary constituents are placed in the range \(3-5\,M_{\odot}\) in low latency (Smith et al., 2023). In Figure 6, we show the expected colour evolution of the NSBH and BNS kilonovae. Notably, the NSBH kilonovae show little colour evolution, with a consistent g-z colour distribution centred around 1. Conversely, the BNS kilonovae are seen to evolve rapidly in colour over the first two days, becoming comparable to the NSBH kilonovae after \(\sim 24\) hours. This is likely a product of the lack of 'blue' emission from NSBH mergers, and reinforces the need for early observations to distinguish between the two in cases where the GW signal cannot.

Figure 5: Histograms of the peak kilonova magnitude of 2000 realisations from our NSBH model (left) and the BNS model of Nicholl et al. (2021) (right) in different photometric filters. The dotted lines show the single-visit LSST WFD survey limits (Ivezić et al., 2019) for the g and i bands, and the GW follow-up limits (Andreoni et al., 2022) are shown by the dashed lines.

## 6 Conclusions We present a new semi-analytic framework capable of predicting NSBH kilonova light curves from input binary properties. The model is integrated into the mosfit platform, and can be used for fast generation of libraries of light curves from an input binary population, predicting EM signals accompanying NSBH GW mergers, or performing multi-messenger parameter inference from GW-EM datasets. We demonstrate that a fiducial NSBH binary with \(M_{\mathrm{BH}}=5\,\mathrm{M}_{\odot}\) and \(M_{\mathrm{NS}}=1.4\,\mathrm{M}_{\odot}\) is broadly consistent with existing candidate kilonova counterparts to cosmological SGRBs with only minor tuning of parameters.
However, we also demonstrate that NSBH systems are not capable of producing 'blue' emission (likely from lanthanide-poor ejecta) in quantities sufficient to match the light curve of GW170817 unless other processes like shock heating from a GRB jet are included. Simulations (e.g. Fernandez et al., 2019) suggest that material may be present in polar regions at the time of jet launch, but it is unclear whether there is sufficient mass to result in a signal similar to the one proposed by Piro & Kollmeier (2018). Our model indicates that for our assumed prior distributions, less than a third of NSBH mergers within the LIGO range of \(\sim 600\,\mathrm{Mpc}\) will have EM counterparts detectable with Rubin/LSST, even before accounting for survey cadence and line-of-sight extinction. Our modelling suggests that early (\(\lesssim 2\) days) observations of emergent kilonovae will be essential to distinguish BNS and NSBH mergers in cases where GW signals are absent or ambiguous. We also show that NSBH kilonovae may not peak at optical frequencies until up to a week after merger for certain viewing angles. The first discovery of an EM signal from an NSBH merger remains a key objective of GW-EM and transient astronomy. Its identification will serve to validate (or iterate) merger models, as was done for the BNS case following GW170817. Our model provides an early framework for interpreting the emission from such a system, and a platform for further development following observational ratification. ## Acknowledgements We are extremely grateful to Rodrigo Fernandez for fruitful discussions that helped shape the model. MN and BG acknowledge funding by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 948381). GPS acknowledges support from The Royal Society, the Leverhulme Trust, and the Science and Technology Facilities Council (grant numbers ST/N021702/1 and ST/S006141/1). ## Data Availability The model is available as part of mosfit v1.1.9, and can be accessed at [https://github.com/guillochon/MOSFIT](https://github.com/guillochon/MOSFIT). Installation and general usage instructions are available at [http://mosfit.readthedocs.io/](http://mosfit.readthedocs.io/).
2302.13550
Dynamic Programming in Probability Spaces via Optimal Transport
We study discrete-time finite-horizon optimal control problems in probability spaces, whereby the state of the system is a probability measure. We show that, in many instances, the solution of dynamic programming in probability spaces results from two ingredients: (i) the solution of dynamic programming in the "ground space" (i.e., the space on which the probability measures live) and (ii) the solution of an optimal transport problem. From a multi-agent control perspective, a separation principle holds: The "low-level control of the agents of the fleet" (how does one reach the destination?) and "fleet-level control" (who goes where?) are decoupled.
Antonio Terpin, Nicolas Lanzetti, Florian Dörfler
2023-02-27T07:04:21Z
http://arxiv.org/abs/2302.13550v2
# Dynamic Programming in Probability Spaces via Optimal Transport ###### Abstract We study discrete-time finite-horizon optimal control problems in probability spaces, whereby the state of the system is a probability measure. We show that, in many instances, the solution of dynamic programming in probability spaces results from two ingredients: (i) the solution of dynamic programming in the "ground space" (i.e., the space on which the probability measures live) and (ii) the solution of an optimal transport problem. From a multi-agent control perspective, a separation principle holds: The "low-level control of the agents of the fleet" (how to reach the destination?) and "fleet-level control" (who goes where?) are decoupled. ## 1 Introduction Many optimal control problems of stochastic or large-scale dynamical systems can be framed in the probability space, whereby the state is a _probability measure_. We provide three examples, starting with a pedagogical case: **Example 1.1** (Deterministic optimal control).: Consider a discrete-time dynamical system with state space \(X_{k}\), input space \(U_{k}\), and dynamics \(f_{k}:X_{k}\times U_{k}\to X_{k+1}\). The problem of steering the system from an initial state \(x_{0}\in X_{0}\) towards a target state \(r_{N}\in X_{N}\) in \(N\in\mathbb{N}\) time-steps while minimizing the terminal cost \(g_{N}:X_{N}\times X_{N}\to\bar{\mathbb{R}}_{\geq 0}\) and the stage costs \(g_{k}:X_{k}\times U_{k}\times X_{N}\to\bar{\mathbb{R}}_{\geq 0}\) reads \[\inf_{u_{k}:X_{k}\to U_{k}}g_{N}(x_{N},r_{N})+\sum_{k=0}^{N-1}g_{k}(x_{k},u_{k} (x_{k}),r_{N}), \tag{1}\] subject to the dynamics. The costs \(g_{k}\) and \(g_{N}\) measure the "closeness" between the state \(x_{k}\) and the reference \(r_{N}\), as well as the input effort. For instance, when all the spaces are \(\mathbb{R}^{n}\), they may be defined as \(g_{k}(x_{k},u_{k},r_{N})=\left\|x_{k}-r_{N}\right\|^{2}+\left\|u_{k}\right\|^{2}\) and \(g_{N}(x_{N},r_{N})=\left\|x_{N}-r_{N}\right\|^{2}\). It is instructive to capture this setting via probability measures. At each time-step \(k\), consider the Dirac's delta probability measure \(\mu_{k}=\delta_{x_{k}}\), and let \(\rho_{N}=\delta_{r_{N}}\). The relation between \(\mu_{k+1}\) and \(\mu_{k}\) is the "pushforward" operation \(\mu_{k+1}=\delta_{x_{k+1}}=\delta_{f_{k}(x_{k},u_{k}(x_{k}))}=f_{k}(\cdot,u_{k}( \cdot))_{\#}\mu_{k}\), a dynamics in the probability space. Then, an equivalent formulation to (1) is \[\begin{split}\inf_{u_{k}:X_{k}\to U_{k}}\underbrace{\int_{X_{N}} \int_{X_{N}}g_{N}(x_{N},r_{N})\,\mathrm{d}\mu_{N}(x_{N})\,\mathrm{d}\rho_{N}(r _{N})}_{G_{N}(\mu_{N},\rho_{N})}\\ +\sum_{k=0}^{N-1}\underbrace{\int_{X_{N}}\int_{X_{k}}g_{k}(x_{k}, u_{k}(x_{k}),r_{N})\,\mathrm{d}\mu_{k}(x_{k})\,\mathrm{d}\rho_{N}(r_{N})}_{G_{k}( \mu_{k},u_{k},\rho_{N})},\end{split} \tag{2}\] where \(G_{N}\) and \(G_{k}\) have the same meaning as the lower-case counterparts in (1) but are defined in the probability space. \(\triangle\) **Example 1.2** (Distribution steering).: Assume now that the initial condition \(x_{0}\) in Example 1.1 is unknown, but its realization is distributed according to \(\mu_{0}\in\mathcal{P}(X_{0})\), with \(\mathcal{P}(X_{0})\) being the space of probability measures over \(X_{0}\). The input to apply to each "particle" having state \(x_{k}\in X_{k}\) is given by the (deterministic) feedback map \(u_{k}:X_{k}\to U_{k}\), and the dynamical evolution is \(x_{k+1}=f_{k}(x_{k},u_{k}(x_{k}))\). 
Similarly to Example 1.1, the dynamics in the probability space are then \(\mu_{k+1}=f_{k}(\cdot,u_{k}(\cdot))_{\#}\mu_{k}\). The same formalism of Example 1.1 can then be used to ensure that the terminal state \(x_{N}\) is distributed closely to a desired probability measure \(\rho_{N}\in\mathcal{P}(X_{N})\): \[\inf_{u_{k}:X_{k}\to U_{k}}\,\sum_{k=0}^{N}\mathcal{C}_{k}(\mu_{k},\rho_{N})+\int_{X_{k}}\left\|u_{k}(x_{k})\right\|^{2}\,\mathrm{d}\mu_{k}(x_{k}), \tag{3}\] where \(\mathcal{C}_{k}\) measures closeness between \(\mu_{k}\) and \(\rho_{N}\), akin to \(G_{k}\) and \(G_{N}\) in (2). \(\triangle\) **Example 1.3** (Large-scale multi-agent systems).: The optimal steering of a fleet of \(M\) identical agents, with dynamics \(x_{k+1}^{(i)}=f_{k}(x_{k}^{(i)},u_{k}(x_{k}^{(i)}))\), from an initial configuration \(\{x_{0}^{(i)}\}_{i=1}^{M}\) to a desired one \(\rho_{N}\) can be cast as \[\inf_{u_{k}:X_{k}\to U_{k}}\,\sum_{i=1}^{M}g_{N}(x_{N}^{(i)})+\sum_{k=0}^{N-1}\mathcal{C}_{k}(\{x_{k}^{(j)}\}_{j=1}^{M},\rho_{N})+\sum_{i=1}^{M}g_{k}(x_{k}^{(i)},u_{k}(x_{k}^{(i)})), \tag{4}\] where \(\mathcal{C}_{k}\) is a fleet-specific cost (e.g., a cohesion or formation cost), and \(g_{N}\) and \(g_{k}\) are agent-specific costs (e.g., input effort). Often, the interest lies in the _macroscopic behavior_ of the fleet. Hence, it is customary to capture the state of the fleet by a probability measure \(\mu_{k}\in\mathcal{P}(X_{k})\) and the input by a map \(u_{k}:X_{k}\to U_{k}\) [1, 2]. The optimization problem in (4) can then be written as an optimal control problem, with state \(\mu_{k}\), input \(u_{k}\), and dynamics \(\mu_{k+1}=f_{k}(\cdot,u_{k}(\cdot))_{\#}\mu_{k}\). Overall, \[\inf_{u_{k}:X_{k}\to U_{k}}\,\int_{X_{N}}g_{N}(x_{N})\,\mathrm{d}\mu_{N}(x_{N})+\sum_{k=0}^{N-1}\mathcal{C}_{k}(\mu_{k},\rho_{N})+\int_{X_{k}}g_{k}(x_{k},u_{k}(x_{k}))\,\mathrm{d}\mu_{k}(x_{k}). \tag{5}\] Such a modeling approach suits robotics [3], mobility [4], and social networks [5, 6]. \(\triangle\) Formally, (2), (3), and (5) are instances of discrete-time finite-horizon optimal control problems in probability spaces: \[\inf_{u_{k}:X_{k}\to U_{k}}\,G_{N}(\mu_{N},\rho_{N})+\sum_{k=0}^{N-1}G_{k}(\mu_{k},u_{k},\rho_{k}), \tag{6}\] subject to the measure dynamics \(\mu_{k+1}=f_{k}(\cdot,u_{k}(\cdot))_{\#}\mu_{k}\), where \(\rho_{k}\) are (possibly time-dependent) reference probability measures. In this paper, we consider \(G_{k},G_{N}\) as _optimal transport discrepancies_: An optimal transport discrepancy measures the effort to transport one probability measure onto another when moving a unit of mass from \(x\) to \(y\) costs \(c(x,y)\); see Section 2. To solve (6), one possibility is the Dynamic Programming Algorithm (DPA) [7]. However, its deployment poses several analytical and computational challenges. For example, it is unclear which easy-to-verify assumptions ensure the existence of solutions. Moreover, even if a minimizer exists, its computation suffers from the infinite dimensionality of the probability space, and the burden of repeated computations of optimal transport discrepancies; see Section 3. This inevitable complexity prompts us to adopt a different perspective: At least formally, (6) resembles a _single_ optimal transport problem [8, 9], whereby one seeks to transport one probability measure \(\mu_{0}\) to a final one \(\rho_{N}\) while minimizing some transportation cost.
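To fix ideas, the common ingredient of (2), (3), and (5) is the measure dynamics \(\mu_{k+1}=f_{k}(\cdot,u_{k}(\cdot))_{\#}\mu_{k}\), which, for an empirical measure, simply pushes every particle through the closed loop. A minimal Python sketch (the dynamics and feedback law are illustrative choices of ours, not part of the development above):

```python
import numpy as np

def pushforward(particles, f, u):
    """One step of mu_{k+1} = f(., u(.))_# mu_k for an empirical measure
    mu_k = (1/M) sum_i delta_{x_i}, represented by its particle array."""
    return np.array([f(x, u(x)) for x in particles])

f = lambda x, u: x + u        # integrator particle dynamics f_k(x, u) = x + u
u = lambda x: -0.5 * x        # a deterministic feedback map u_k : X_k -> U_k

mu_k = np.array([-1.0, 0.0, 2.0])    # M = 3 equally weighted particles
print(pushforward(mu_k, f, u))       # [-0.5  0.   1. ]
```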
If this formal resemblance of (6) to an optimal transport problem is made rigorous, we can tackle (6) with the tools of optimal transport theory, which has reached significant maturity in recent years, both theoretically [8, 9, 10] and numerically [11, 12, 13]. ### Contributions We study optimal control and dynamic programming in probability spaces through the lens of optimal transport theory. Specifically, we show that many optimal control problems in probability spaces can be reformulated and studied as optimal transport problems. Our results reveal a separation principle: The "low-level control of the agents of the fleet" (how to reach the destination?) and "fleet-level control" (who goes where?) are decoupled. We complement our theoretical analysis with various examples and counterexamples, which demonstrate that our conditions cannot be relaxed and expose the pitfalls of heuristic approaches. The proofs of our results rely on novel stability results for the (multi-marginal) optimal transport problem, which are of independent interest. ### Previous work Most of the literature focuses on continuous time, and it is founded on [14], which relates the optimal transport problem and fluid mechanics. Through the optimal control lens, this formulation corresponds to an optimal control problem with integrator dynamics: The resulting flow is a time-dependent feedback law [15]. An attempt to introduce generic dynamical constraints can be found in [16], where the set of possible flows is constrained to a set of admissible ones, induced by the dynamics. Constructive results can be found in the specific setting of linear systems and Gaussian probability measures. In this case and when the control laws are affine, the space of probability measures is implicitly constrained to the space of Gaussian distributions, and closed-form solutions exist [17, 18, 19, 20]. All of these works build on traditional optimal control tools. In [21, 22, 23], instead, the authors develop a Pontryagin Maximum Principle for optimal control problems in the Wasserstein space (i.e., probability space endowed with the Wasserstein distance). Their analysis combines classical tools in optimal control theory with the "differential structure" of the Wasserstein space [8, 24]. In [25], the authors study optimal transport when the transportation cost results from the cost-to-go of a Linear Quadratic Regulator (LQR). This methodology implicitly assumes that, to steer a fleet of identical particles, one can compute the cost-to-go for a single particle and then "lift" the solution to the probability space via an optimal transport problem. While attractive, this approach generally yields sub-optimal solutions; see Section 5. The discrete-time setting has, instead, received less attention. Towards this direction, [26, 27, 28, 29, 30] explore the covariance control problem for discrete-time linear systems, possibly subject to constraints. In [2], the authors study the optimal steering of multiple agents from an initial configuration to a final one in a distributed fashion. In [1], the authors follow an approach similar to [25], albeit in discrete time. Finally, in [31], the authors study the problem of mass transportation over a graph, embedding constraints such as the maximum flow on the edges. To do so, they exploit the structure of the ground space; in this case, the transportation graph.
In all these approaches, the distribution/fleet steering problem is a-priori formalized as an optimal transport problem and not as an optimal control problem in the probability space. As we shall see in Section 4, our results bridge these two perspectives and allow us to back up and recover many of the approaches in the literature. ### Organization The paper unfolds as follows. In Sections 2 and 3, we review the space of probability measures and introduce our problem setting. We present and discuss our main result, Theorem 4.2, in Section 4. In Section 5, we provide examples to ignite an intuition on our results and expose potential pitfalls. All proofs are in Section 6. Finally, Section 7 summarizes our findings and the future directions. ### Notation We denote by \(C_{b}(X)\) the space of continuous and bounded functions \(X\to\mathbb{R}\), by \(\operatorname{lsc}(X,Y)\) the space of lower semi-continuous functions \(X\to Y,\) and by \(\bar{\mathbb{R}}_{\geq 0}=[0,+\infty]\) the set of non-negative extended real numbers. The identity map on \(X\) is denoted by \(\operatorname{id}_{X}\), and the projection maps from \(X\times Y\) onto \(X\) are denoted by \(\operatorname{proj}_{X}^{X\times Y}\). Given the set of maps \(\{h_{k}:X\to X_{k}\}_{k=i}^{j}\) we denote by \((h_{i},\ldots,h_{j}):X\to X_{i}\times\ldots\times X_{j}\) the map \(x\mapsto(h_{i},\ldots,h_{j})(x)\coloneqq(h_{i}(x),\ldots,h_{j}(x)).\) ## 2 The Space of Probability Measures We start with notation and preliminaries in Section 2.1. Then, in Section 2.2, we review optimal transport. ### Preliminaries We assume all spaces to be Polish spaces, and all probability measures and maps to be Borel. We denote by \(\mathcal{P}(X)\) the space of Borel probability measures on \(X\), and we denote by \(\delta_{x}\) the Dirac's delta at \(x\in X;\) i.e., the probability measure defined for all Borel sets \(B\subseteq X\) as \(\delta_{x}(B)=1\) if \(x\in B\) and \(\delta_{x}(B)=0\) otherwise. We denote by \(\operatorname{supp}(\mu)\) the support of a probability measure \(\mu\in\mathcal{P}(X)\). 
The _pushforward_ of a probability measure \(\mu\in\mathcal{P}(X)\) through \(T:X\to Y,\) denoted by \(T_{\#}\mu\in\mathcal{P}(Y),\) is defined by \((T_{\#}\mu)(A)=\mu(T^{-1}(A))\) for all Borel sets \(A\subseteq Y.\) For any \(T_{\#}\mu\)-integrable \(\phi:Y\to\mathbb{R}\) it holds \(\int_{Y}\phi\operatorname{d}(T_{\#}\mu)=\int_{X}\phi\circ T\operatorname{d}\mu.\) Given \(\nu\in\mathcal{P}(Y),\) \(T\) is a _transport map_ from \(\mu\) to \(\nu\) if \(T_{\#}\mu=\nu;\) to this end, it suffices that \(\int_{Y}\phi\operatorname{d}\nu=\int_{Y}\phi\operatorname{d}(T_{\#}\mu)\) for all \(\phi\in C_{b}(Y).\) We say that \(T:X\to Y\times Z\) is a transport map from \(\mu\in\mathcal{P}(X)\) to \((\nu_{1},\nu_{2})\in\mathcal{P}(Y)\times\mathcal{P}(Z)\) if \(\left(\operatorname{proj}_{Y}^{Y\times Z}\right)_{\#}(T_{\#}\mu)=\nu_{1}\) and \(\left(\operatorname{proj}_{Z}^{Y\times Z}\right)_{\#}(T_{\#}\mu)=\nu_{2}.\) ### Optimal transport Given a non-negative _transportation cost_ \(c:X\times Y\to\bar{\mathbb{R}}_{\geq 0}\), the _optimal transport discrepancy_ \(\mathcal{K}[c]:\mathcal{P}(X)\times\mathcal{P}(Y)\to\bar{\mathbb{R}}_{\geq 0}\) between two probability measures \(\mu\in\mathcal{P}(X)\) and \(\nu\in\mathcal{P}(Y)\) is \[\mathcal{K}[c](\mu,\nu)\coloneqq\inf_{\gamma\in\Gamma(\mu,\nu)}\int_{X\times Y}c(x,y)\,\mathrm{d}\gamma(x,y), \tag{7}\] where \(\Gamma(\mu,\nu)\coloneqq\left\{\mathbf{\gamma}\in\mathcal{P}(X\times Y)\,|\,(\mathrm{proj}_{X}^{X\times Y})_{\#}\mathbf{\gamma}=\mu,(\mathrm{proj}_{Y}^{X\times Y})_{\#}\mathbf{\gamma}=\nu\right\}\) is the set of couplings. A prominent example of optimal transport discrepancy is, for some \(p\geq 1\), the \(p^{\mathrm{th}}\) power of the \(p\)-Wasserstein distance, obtained when \(X=Y\) and the transportation cost \(c\) is the \(p^{\mathrm{th}}\) power of a metric that induces the topology on \(X\) [8, §7]. _Remark 2.1_.: When the transportation cost does not depend on one of the two variables (e.g., there exists \(\tilde{c}:X\to\bar{\mathbb{R}}_{\geq 0}\) such that \(c(x,y)=\tilde{c}(x)\)), the optimal transport discrepancy reduces to an expected value; i.e., \(\mathcal{K}[c](\mu,\nu)=\mathbb{E}^{\mu}\left[\tilde{c}\right]\). \(\triangle\) We will repeatedly work with a generalization of the optimal transport problem to \(k\) marginals. Let \(X\coloneqq X_{1}\times\ldots\times X_{k}\), and \(c:X\to\bar{\mathbb{R}}_{\geq 0}\). The _multi-marginal_ optimal transport problem between \(k\) probability measures \(\{\mu_{i}\in\mathcal{P}(X_{i})\}_{i=1}^{k}\) reads \[\mathcal{K}[c](\mu_{1},\ldots,\mu_{k})\coloneqq\inf_{\mathbf{\gamma}\in\Gamma(\mu_{1},\ldots,\mu_{k})}\int_{X}c(x_{1},\ldots,x_{k})\,\mathrm{d}\mathbf{\gamma}(x_{1},\ldots,x_{k}), \tag{8}\] where \(\Gamma(\mu_{1},\ldots,\mu_{k})\coloneqq\left\{\mathbf{\gamma}\in\mathcal{P}(X)\,|\,(\mathrm{proj}_{X_{i}}^{X})_{\#}\mathbf{\gamma}=\mu_{i},\,i\in\{1,\ldots,k\}\right\}.\) In general, the infima in (7) and (8) are not attained, unless mild conditions on the transportation cost hold true. For instance, \(c\in\mathrm{lsc}(X\times Y,\bar{\mathbb{R}}_{\geq 0})\) in (7); see [9, §4]. A transport plan \(\mathbf{\gamma}^{\varepsilon}\in\Gamma(\mu_{1},\ldots,\mu_{k})\) is \(\varepsilon\)-optimal when \[\int_{X}c(x_{1},\ldots,x_{k})\,\mathrm{d}\mathbf{\gamma}^{\varepsilon}(x_{1},\ldots,x_{k})\leq\mathcal{K}[c](\mu_{1},\ldots,\mu_{k})+\varepsilon. \tag{9}\] The formulation in (7) and (8) is the _Kantorovich formulation_ of the optimal transport problem, whereby one optimizes over _transport plans_ \(\mathbf{\gamma}\).
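For empirical marginals, the Kantorovich problem (7) is a finite-dimensional linear program over couplings. A minimal Python sketch (using scipy; the cost and marginals are illustrative choices of ours):

```python
import numpy as np
from scipy.optimize import linprog

def kantorovich(mu, nu, C):
    """Solve (7) for discrete marginals: minimize <C, gamma> over couplings
    gamma with row sums mu (length m) and column sums nu (length n)."""
    m, n = C.shape
    A_rows = np.kron(np.eye(m), np.ones((1, n)))   # sum_j gamma[i, j] = mu[i]
    A_cols = np.kron(np.ones((1, m)), np.eye(n))   # sum_i gamma[i, j] = nu[j]
    res = linprog(C.ravel(), A_eq=np.vstack([A_rows, A_cols]),
                  b_eq=np.concatenate([mu, nu]), bounds=(0, None))
    return res.fun, res.x.reshape(m, n)

# Squared-distance cost between two point clouds on the line (i.e., W_2^2).
x, y = np.array([0.0, 1.0]), np.array([-1.0, 2.0])
C = (x[:, None] - y[None, :]) ** 2
cost, plan = kantorovich(np.array([0.5, 0.5]), np.array([0.5, 0.5]), C)
print(cost)   # 1.0, attained here by the diagonal coupling
print(plan)
```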
The (stricter) _Monge formulation_1 considers only transport plans \(\mathbf{\gamma}=(\mathrm{id}_{X_{1}},T)_{\#}\mu_{1}\in\Gamma(\mu_{1},\ldots,\mu_{k})\) induced by a transport map \(T:X_{1}\to X_{2}\times\ldots\times X_{k}\), \(T_{\#}\mu_{1}=(\mu_{2},\ldots,\mu_{k})\). Footnote 1: Historically, the Monge formulation comes first. For a thorough review of the history of optimal transport and its founding fathers, see [9, §1]. ## 3 Problem Statement Let \(X_{k}\), \(U_{k}\), and \(R_{k}\) be Polish spaces, representing the state space, the input space, and the space of references in the ground space, respectively (often, \(R_{k}=X_{k}\)). We consider dynamical systems whose state is a probability measure over \(X_{k}\). This approach encompasses continuous approximations of multi-agent systems and systems with uncertain initial conditions (usually captured by absolutely continuous probability measures), as well as finite settings (captured by empirical probability measures). For instance, **Example 3.1** (Robots in a grid).: Consider \(M\) robots in a grid of three cells; i.e., \(X_{k}=\{\pm 1,0\}\). Suppose that the \(i^{\mathrm{th}}\) robot is located at \(x_{k}^{(i)}\in X_{k}\) (i.e., has state \(x_{k}^{(i)}\)). Then, the state of the system is \(\mu_{k}=\frac{1}{M}\sum_{i=1}^{M}\delta_{x_{k}^{(i)}}\). The same modeling approach applies to \(M\) robots in the two-dimensional plane, simply setting \(X_{k}=\mathbb{R}^{2}\). \(\triangle\) In this setting, we focus on the following optimal control problem: **Problem 3.1** (Discrete-time optimal control in probability spaces).: Let \(N\in\mathbb{N}_{\geq 1}\). For dynamics \(f_{k}:X_{k}\times U_{k}\to X_{k+1}\), costs \(g_{k}:X_{k}\times U_{k}\times R_{k}\to\bar{\mathbb{R}}_{\geq 0}\) and \(g_{N}:X_{N}\times R_{N}\to\bar{\mathbb{R}}_{\geq 0}\), initial condition \(\mu\in\mathcal{P}(X_{0})\), and reference trajectory \(\rho=(\rho_{0},\ldots,\rho_{N})\in\mathcal{P}(R_{0})\times\ldots\times\mathcal{P}(R_{N})\), find the joint state-input distributions \(\lambda_{k}\in\mathcal{P}(X_{k}\times U_{k})\) which solve \[J(\mu,\rho)=\inf_{\begin{subarray}{c}\mu_{k}\in\mathcal{P}(X_{k})\\ \lambda_{k}\in\mathcal{P}(X_{k}\times U_{k})\end{subarray}} \mathcal{K}[g_{N}](\mu_{N},\rho_{N})+\sum_{k=0}^{N-1}\mathcal{K}[g_{k}](\lambda_{k},\rho_{k})\] \[\mathrm{s.t.} \mu_{k+1}=f_{k\#}\lambda_{k},\quad\mu_{0}=\mu,\] \[(\mathrm{proj}_{X_{k}}^{X_{k}\times U_{k}})_{\#}\lambda_{k}=\mu_{k}.\] Before presenting our results, we detail our setting. The notation in the ground space is juxtaposed with the one in the measure space in Table 1. ### State-input distribution The state-input distribution \(\lambda_{k}\in\mathcal{P}(X_{k}\times U_{k})\) is a probability measure on \(X_{k}\times U_{k}\) whose first marginal is \(\mu_{k}\). The semantics is as follows: The probability mass assigned by \(\lambda_{k}\) to the pair \((x_{k},u_{k})\) indicates the probability that one particle has state \(x_{k}\in X_{k}\) and applies the input \(u_{k}\in U_{k}\) or, equivalently, the share of agents which have state \(x_{k}\in X_{k}\) and apply the input \(u_{k}\in U_{k}\). When \(\lambda_{k}=(\mathrm{id}_{X},u_{k})_{\#}\mu_{k}\) for some \(u_{k}:X_{k}\to U_{k}\), the input is "deterministic": All particles that have state \(x_{k}\in X\) apply the input \(u_{k}(x_{k})\in U_{k}\).
\begin{table} \begin{tabular}{c|c|c} & ground space & measure space \\ \hline state & \(x_{k}\in X_{k}\) & \(\mu_{k}\in\mathcal{P}(X_{k})\) \\ reference & \(r_{k}\in R_{k}\) & \(\rho_{k}\in\mathcal{P}(R_{k})\) \\ state-input distribution & \(x_{k}\mapsto u_{k}(x_{k})\in U_{k}\) & \(\lambda_{k}\in\mathcal{P}(X_{k}\times U_{k})\) \\ & & s.t. \((\mathrm{proj}_{X_{k}}^{X_{k}\times U_{k}})_{\#}\lambda_{k}=\mu_{k}\) \\ dynamics & \(x_{k+1}=f_{k}(x_{k},u_{k}(x_{k}))\) & \(\mu_{k+1}=f_{k\#}\lambda_{k}\) \\ cost-to-go & \(j_{k}\) & \(J_{k}\) \\ stage and terminal costs & \(g_{k}\) and \(g_{N}\) & \(\mathcal{K}[g_{k}]\) and \(\mathcal{K}[g_{N}]\) \\ \end{tabular} \end{table} Table 1: Parallelism between objects in the ground space and in the measure space.

**Example 3.2** (Robots in a grid, continued).: Consider again \(M\) identical robots on \(X_{k}=\{\pm 1,0\}\), where at each time step each robot can either move to the origin (\(u_{k}=0\)) and stay there forever, or change position (\(u_{k}=-1\)), so that \(U_{k}=\{0,-1\}\) and \(f_{k}(x_{k},u_{k})=x_{k}u_{k}\). Consider the following input-state distributions \(\lambda_{k}^{(1)}\) and \(\lambda_{k}^{(2)}\). In the first case (i.e., \(\lambda_{k}^{(1)}\)), \(20\%\) of the robots are located at \(x_{k}=-1\) and go to the origin (\(u_{k}=0\)), \(30\%\) of the robots are located at \(x_{k}=-1\) and switch position (\(u_{k}=-1\)), and \(50\%\) of the robots are located at \(x_{k}=0\) and remain there (\(u_{k}=0\), though irrelevant for the dynamics). The input is not deterministic, since not all robots located at \(x_{k}=-1\) apply the same input. From \(\lambda_{k}^{(1)}\) we can also infer the distribution of the robots: \(50\%\) of them are located at \(x_{k}=-1\), and the other \(50\%\) at \(x_{k}=0\). In the second case (i.e., \(\lambda_{k}^{(2)}\)), the input is deterministic: All robots located at \(x_{k}=-1\) switch position, and all the robots located at \(x_{k}=0\) stay there. \(\triangle\) _Remark 3.1_.: Two comments on our modeling choice. First, since the first marginal of \(\lambda_{k}\) is \(\mu_{k}\), the costs \(\mathcal{K}[g_{k}](\lambda_{k},\rho_{k})\) are implicitly a function of the state, the input, and the reference trajectory. Second, in many instances (including multi-agent settings with finitely many agents), optimal inputs turn out to be deterministic (in the sense outlined above). Yet, the more general joint state-input distributions considerably simplify the analysis, the same way the Kantorovich formulation is more tractable than the Monge formulation in optimal transport theory. \(\triangle\) ### Dynamics We consider measure dynamics resulting from the pushforward via a function \(f_{k}:X_{k}\times U_{k}\to X_{k+1}\) (typically, the dynamics of the single particles); i.e., \(\mu_{k+1}=f_{k\#}\lambda_{k}\). In the special case of deterministic inputs (i.e., \(\lambda_{k}=(\mathrm{id}_{X},u_{k})_{\#}\mu_{k}\) for some function \(u_{k}:X_{k}\to U_{k}\)), the dynamics simplify to \(\mu_{k+1}=f_{k}(\cdot,u_{k}(\cdot))_{\#}\mu_{k}\). **Example 3.3** (Robots in a grid, continued).: Consider the setting of Example 3.2, where \(f_{k}(x_{k},u_{k})=x_{k}u_{k}\). The measure dynamics are \(\mu_{k+1}=f_{k\#}\lambda_{k}\), and the two inputs of Example 3.2 yield \(\mu_{k+1}^{(1)}=0.7\delta_{0}+0.3\delta_{1}\) and \(\mu_{k+1}^{(2)}=0.5\delta_{0}+0.5\delta_{1}\). \(\triangle\)
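The computations in Example 3.3 can be verified mechanically by encoding \(\lambda_{k}\) as a dictionary of (state, input) masses; a minimal Python sketch:

```python
from collections import defaultdict

f = lambda x, u: x * u  # ground dynamics of Example 3.2

def measure_dynamics(lam):
    """mu_{k+1} = f_# lambda_k for a discrete state-input distribution
    lambda_k, given as a dict {(x_k, u_k): probability mass}."""
    mu_next = defaultdict(float)
    for (x, u), p in lam.items():
        mu_next[f(x, u)] += p
    return dict(mu_next)

lam1 = {(-1, 0): 0.2, (-1, -1): 0.3, (0, 0): 0.5}  # non-deterministic input
lam2 = {(-1, -1): 0.5, (0, 0): 0.5}                # deterministic input

print(measure_dynamics(lam1))  # {0: 0.7, 1: 0.3}
print(measure_dynamics(lam2))  # {1: 0.5, 0: 0.5}
```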
### Cost We consider optimal transport discrepancies with, as transportation costs, \(g_{k}:X_{k}\times U_{k}\times R_{k}\to\bar{\mathbb{R}}_{\geq 0}\) (stage cost) and \(g_{N}:X_{N}\times R_{N}\to\bar{\mathbb{R}}_{\geq 0}\) (terminal cost). By Remark 2.1, this modeling assumption includes expected values but not functionals such as the variance of the probability measure or the Kullback-Leibler divergence from the references \(\rho_{k}\), \(\rho_{N}\). Our formulation encompasses the terminal constraint \(\mu_{N}=\rho_{N}\): It suffices to set \(g_{N}(x_{N},r_{N})=+\infty\) if \(x_{N}\neq r_{N}\). Similarly, state-dependent input constraints \(U_{k}(x_{k})\) can be encoded by setting \(g_{k}(x_{k},u_{k},r_{k})=+\infty\) when \(u_{k}\not\in U_{k}(x_{k})\). In view of Example 1.1, the transportation costs \(g_{k}\) and \(g_{N}\) may be interpreted as the cost incurred by a single agent. **Example 3.4** (Robots in a grid, continued).: Suppose that the goal is to steer \(\frac{M}{2}\) robots to \(x_{k}=-1\) and \(\frac{M}{2}\) to \(x_{k}=+1\), while minimizing the input. Then, \(\rho_{N}=\frac{1}{2}\delta_{-1}+\frac{1}{2}\delta_{+1}\) and, for some weight \(\alpha>0\), possible costs are \(g_{N}(x_{N},r_{N})=|x_{N}-r_{N}|\) and \(g_{k}(x_{k},v_{k},r_{k})=\alpha|v_{k}|\). This way, the aim is to minimize the (type 1) Wasserstein distance from the reference \(\rho_{N}\) at the end of the horizon (i.e., \(\mathcal{K}[g_{N}](\mu_{N},\rho_{N})\)) and the (weighted) input effort throughout the horizon (i.e., \(\mathcal{K}[g_{k}](\lambda_{k},\rho_{k})=\alpha\mathbb{E}^{\lambda_{k}}\left[|v_{k}|\right]\)). The weight \(\alpha>0\) arbitrates between these two objectives. The references \(\rho_{k}\) for \(k\in\{0,\ldots,N-1\}\) do not enter the cost and are therefore irrelevant. \(\triangle\) ### DPA for Problem 3.1 Problem 3.1 is an instance of the classic discrete-time finite-horizon optimal control problem in abstract spaces [7, 32]. It is therefore natural to tackle it via the DPA: **Definition 3.1** (DPA).: Initialization: Let \(J_{N}(\mu_{N},\rho_{N})\coloneqq\mathcal{K}[g_{N}](\mu_{N},\rho_{N})\). Recursion: For all \(k\in\{N-1,N-2,\ldots,1,0\}\), compute the cost-to-go \(J_{k}\): \[J_{k}(\mu_{k},\rho_{k},\ldots,\rho_{N})\coloneqq\inf_{\begin{subarray}{c}\lambda_{k}\in\mathcal{P}(X_{k}\times U_{k})\\ (\mathrm{proj}_{X_{k}}^{X_{k}\times U_{k}})_{\#}\lambda_{k}=\mu_{k}\end{subarray}}\mathcal{K}[g_{k}](\lambda_{k},\rho_{k})+J_{k+1}(f_{k\#}\lambda_{k},\rho_{k+1},\ldots,\rho_{N}). \tag{10}\] Unfortunately, the DPA in probability spaces poses several analytic and computational challenges; we mention two. First, it is unclear under which easy-to-verify assumptions minimizers exist. Second, even if they do, their computation remains challenging, if not prohibitive. Already when all sets are finite, and the (generally infinite-dimensional) probability space reduces to the finite-dimensional probability simplex, (10) is excruciating. For instance, when \(G_{N}\) is an optimal transport discrepancy, the mere evaluation of the cost-to-go \(J_{N}\) involves solving an optimal transport problem, with all the related computational difficulties [11, 12, 33, 34]. Thus, the optimization of \(J_{N}\), needed to compute \(J_{N-1}\), will inevitably be very demanding. In the following, we show that the solution of Problem 3.1 can be constructed from the solution of the DPA in the ground space (i.e., \(X_{0},X_{1},\ldots\)) and a _single_ (possibly multi-marginal) optimal transport problem.
In other words, a separation principle holds: The optimal control law results from the combination of optimal low-level control laws (found via DPA in the ground space) and a fleet-level control law (found via an optimal transport problem). This way, we bypass the cumbersome application of DPA in probability spaces as well as the repeated evaluation of optimal transport discrepancies. At least formally, our result generalizes two well-known extreme cases. On the one hand, when considering Dirac's delta probability measures, the DPA in the probability space trivially reduces to the DPA in the ground space (see Example 1.1); on the other hand, when considering trivial dynamics (i.e., \(N=1\) and \(f_{0}(x_{0},u_{0})=x_{0}\)) and an optimal transport discrepancy as a terminal cost, Problem 3.1 reduces to an optimal transport problem. Thus, DPA in probability spaces should be at least "as difficult as" solving both the DPA in the ground space and an optimal transport problem. As we shall see below, it is not "more difficult" than that. ### Auxiliary problem: DPA in the ground space Before presenting our main results, we introduce an auxiliary optimal control problem in the ground space: \[j(x,r_{0},\ldots,r_{N})= \inf_{\begin{subarray}{c}x_{k}\in X_{k}\\ u_{k}\in U_{k}\end{subarray}}g_{N}(x_{N},r_{N})+\sum_{k=0}^{N-1}g_{k}(x_{k},u_{k},r_{k}) \tag{11}\] \[\text{s.t. }x_{k+1}=f_{k}(x_{k},u_{k}),\quad x_{0}=x.\] Similarly to (10), the DPA provides the _cost-to-go_ \(j_{k}:X_{k}\times R_{k}\times\ldots\times R_{N}\to\bar{\mathbb{R}}_{\geq 0}\): \[j_{N}(x_{N},r_{N})\coloneqq g_{N}(x_{N},r_{N});\qquad j_{k}(x_{k},r_{k},\ldots,r_{N})\coloneqq\inf_{u_{k}\in U_{k}}g_{k}(x_{k},u_{k},r_{k})+j_{k+1}(f_{k}(x_{k},u_{k}),r_{k+1},\ldots,r_{N}).\] Specifically, we use lower-case \(j_{k}\) for the cost-to-go in the ground space and upper-case \(J_{k}\) for the probability space twin. By (11), (\(\varepsilon\)-)optimal inputs will be feedback laws \(u_{k}:X_{k}\times R_{k}\times\ldots\times R_{N}\to U_{k}\). In particular, an input \(u_{k}\in U_{k}\) (or, with slight abuse of notation, a feedback law \(u_{k}:X_{k}\times R_{k}\times\ldots\times R_{N}\to U_{k}\)) is \(\varepsilon\)-optimal in (11) if \[g_{k}(x_{k},u_{k},r_{k})+j_{k+1}(f_{k}(x_{k},u_{k}),r_{k+1},\ldots,r_{N})\leq j_{k}(x_{k},r_{k},\ldots,r_{N})+\varepsilon. \tag{12}\] ## 4 Main Result In this section, we present our main result. We first provide an informal statement in Section 4.1. The rigorous version is in Section 4.2. ### A separation principle in the probability space Our main result predicates a separation principle: **Theorem 4.1** (DPA in probability spaces via optimal transport, informal).: _Consider the setting of Problem 3.1. At every stage \(k\):_ 1. _The cost-to-go_ \(J_{k}\) _is a multi-marginal optimal transport problem between the current state_ \(\mu_{k}\) _and the future references_ \(\rho_{k},\ldots,\rho_{N}\)_, with transportation cost being the cost-to-go in the ground space_ \(j_{k}\)_._ 2. _The optimal state-input distribution_ \(\lambda_{k}^{*}\) _results from the following strategy:_ 1. _Find the optimal input_ \(u_{k}^{*}\) _in the ground space;_ 2. _Find the optimal transport plan_ \(\mathbf{\gamma}_{k}^{*}\) _for the cost-to-go_ \(J_{k}\)_;_ 3.
_Dispatch the particles as prescribed by_ \(\mathbf{\gamma}_{k}^{*}\) _and apply_ \(u_{k}^{*}\) _to steer them to their allocated trajectory._ In words, to solve DPA in probability spaces, we first solve for the cost-to-go \(j_{k}\) in the ground space and then construct a multi-marginal optimal transport problem with transportation cost \(j_{k}\). Moreover, the optimal input for a fleet of identical agents results from the composition of the optimal control strategy for _each_ individual agent (how to optimally follow the trajectory \(r_{k},\ldots,r_{N}\) for an agent with state \(x_{k}\)?) and the solution of a multi-marginal optimal transport problem (who has state \(x_{k}\) and follows the trajectory \(r_{k},\ldots,r_{N}\)?). Importantly, our result reveals a separation principle: It is optimal to first devise low-level controllers for individual agents (i.e., \(u_{k}^{*}\)) and then solve an assignment problem to allocate agents to their destinations (i.e., \(\mathbf{\gamma}_{k}^{*}\)). ### A rigorous statement Next, we rigorously formalize Theorem 4.1: **Theorem 4.2** (DPA in probability spaces via optimal transport).: _Consider the setting of Problem 3.1. At every stage \(k\):_ 1. _The cost-to-go equals the multi-marginal optimal transport discrepancy_ \[\begin{split}& J_{k}(\mu_{k},\rho_{k},\ldots,\rho_{N})=\mathcal{K}[j_{k}](\mu_{k},\rho_{k},\ldots,\rho_{N})\\ &=\inf_{\mathbf{\gamma}\in\Gamma(\mu_{k},\rho_{k},\ldots,\rho_{N})}\int_{X_{k}\times R_{k}\times\ldots\times R_{N}}j_{k}(x_{k},r_{k},\ldots,r_{N})\,\mathrm{d}\mathbf{\gamma}(x_{k},r_{k},\ldots,r_{N}),\end{split}\] (13) _where_ \(j_{k}\) _is the cost-to-go in the ground space, as in (11). Moreover, the DPA yields the optimal solution_ \(J=J_{0}\)_._ 2. _For_ \(\varepsilon\geq 0\)_, suppose_ \(u_{k}^{\varepsilon/2}:X_{k}\times R_{k}\times\ldots\times R_{N}\to U_{k}\) _and_ \(\boldsymbol{\gamma}_{k}^{\varepsilon/2}\in\Gamma(\mu_{k},\rho_{k},\ldots,\rho_{N})\) _are_ \(\frac{\varepsilon}{2}\)_-optimal in (11) and (13), respectively. Then,_ \[\lambda_{k}^{\varepsilon}=\left(\operatorname{proj}_{X_{k}}^{X_{k}\times R_{k}\times\ldots\times R_{N}},u_{k}^{\varepsilon/2}\right)_{\#}\boldsymbol{\gamma}_{k}^{\varepsilon/2}\] (14) _is an_ \(\varepsilon\)_-optimal state-input distribution. If_ \(\varepsilon=0\)_, then_ \(\lambda_{k}^{*}\coloneqq\lambda_{k}^{\varepsilon}\) _is optimal._ 3. _If_ \(\boldsymbol{\gamma}_{k}^{\varepsilon/2}\) _in (ii) is induced by a transport map_ \(\mathcal{T}_{k}^{\varepsilon/2}:X_{k}\to R_{k}\times\ldots\times R_{N}\)_, the_ \(\varepsilon\)_-optimal control input reads_ \(\lambda_{k}^{\varepsilon}=(\operatorname{id}_{X_{k}},u_{k}^{\varepsilon/2}\circ(\operatorname{id}_{X_{k}},\mathcal{T}_{k}^{\varepsilon/2}))_{\#}\mu_{k}\)_._ Before discussing Theorem 4.2 and its implications, we consider the special case when the stage costs \(g_{k}\) do not depend on the reference; i.e., \(g_{k}:X_{k}\times U_{k}\to\bar{\mathbb{R}}_{\geq 0}\). For instance, any shortest path problem on a graph can be converted into a finite-horizon optimal control problem (e.g., see [32]), where the weights of the edges determine the stage costs \(g_{k}\); these depend only on the pair \((x_{k},u_{k})\).
In these cases, the DPA reads \[\begin{split} j_{N}(x_{N},r_{N})&\coloneqq g_{N}(x_{N},r_{N});\\ j_{k}(x_{k},r_{N})&\coloneqq\inf_{u_{k}\in U_{k}}g_{k}(x_{k},u_{k})+j_{k+1}(f_{k}(x_{k},u_{k}),r_{N}).\end{split} \tag{15}\] Accordingly, the ground-space \(\frac{\varepsilon}{2}\)-optimal inputs are of the form \(u_{k}^{\varepsilon/2}:X_{k}\times R_{N}\to U_{k}\) and the cost-to-go \(J_{k}\) simplifies to a two-marginal optimal transport discrepancy: **Corollary 4.3** (When two marginals are all you need).: _Consider the setting of Theorem 4.2, with \(g_{k}:X_{k}\times U_{k}\to\bar{\mathbb{R}}_{\geq 0}\). At every stage \(k\):_ 1. _The cost-to-go equals the optimal transport discrepancy_ \[J_{k}(\mu_{k},\rho_{N})=\mathcal{K}[j_{k}](\mu_{k},\rho_{N})=\inf_{\boldsymbol{\gamma}\in\Gamma(\mu_{k},\rho_{N})}\int_{X_{k}\times R_{N}}j_{k}(x_{k},r_{N})\,\mathrm{d}\boldsymbol{\gamma}(x_{k},r_{N}),\] (16) _where_ \(j_{k}\) _is the cost-to-go in the ground space, as in (15). Moreover, the DPA yields the optimal solution_ \(J=J_{0}\)_._ 2. _For_ \(\varepsilon\geq 0\)_, suppose_ \(u_{k}^{\varepsilon/2}:X_{k}\times R_{N}\to U_{k}\) _and_ \(\boldsymbol{\gamma}_{k}^{\varepsilon/2}\in\Gamma(\mu_{k},\rho_{N})\) _are_ \(\frac{\varepsilon}{2}\)_-optimal in (15) and (16), respectively. Then,_ \[\lambda_{k}^{\varepsilon}=\left(\operatorname{proj}_{X_{k}}^{X_{k}\times R_{N}},u_{k}^{\varepsilon/2}\right)_{\#}\boldsymbol{\gamma}_{k}^{\varepsilon/2}\] (17) _is an_ \(\varepsilon\)_-optimal state-input distribution. If_ \(\varepsilon=0\)_, then_ \(\lambda_{k}^{*}\coloneqq\lambda_{k}^{\varepsilon}\) _is optimal._ 3. _If_ \(\boldsymbol{\gamma}_{k}^{\varepsilon/2}\) _in (ii) is induced by a transport map_ \(\mathcal{T}_{k}^{\varepsilon/2}:X_{k}\to R_{N}\)_, the_ \(\varepsilon\)_-optimal control input reads_ \(\lambda_{k}^{\varepsilon}=(\operatorname{id}_{X_{k}},u_{k}^{\varepsilon/2}\circ(\operatorname{id}_{X_{k}},\mathcal{T}_{k}^{\varepsilon/2}))_{\#}\mu_{k}\)_._ We defer the proofs of these results to Section 6. #### Discussion A few comments on our results are in order. **How to construct optimal state-input distributions?** We start with more details on (14) and (17). For simplicity, assume that an optimal input map \(u_{k}^{*}\) and an optimal transport plan \(\mathbf{\gamma}_{k}^{*}\) exist (else, resort to an \(\varepsilon\) argument). Then, (ii) in Theorem 4.2 and Corollary 4.3 predicate that an optimal state-input distribution \(\lambda_{k}^{*}\) for Problem 3.1 results from the DPA in the ground space (i.e., \(u_{k}^{*}\)) and the solution of an optimal transport problem (i.e., \(\mathbf{\gamma}_{k}^{*}\)): 1. _Optimal particles allocation:_ The transport plan \(\mathbf{\gamma}_{k}^{*}\in\Gamma(\mu_{k},\rho_{k},\ldots,\rho_{N})\) describes the optimal allocation of the particles throughout the horizon. In discrete instances, \(\mathbf{\gamma}_{k}^{*}(x_{k},r_{k},\ldots,r_{N})\) quantifies the share of agents with state \(x_{k}\) that will follow the reference trajectory \(r_{k},\ldots,r_{N}\). 2. _Optimal input coupling:_ Accordingly, we can interpret \(\lambda_{k}^{*}\) as the number of particles at \(x_{k}\) that apply the optimal input \(u_{k}^{*}(x_{k},r_{k},\ldots,r_{N})\). Intuitively, \(\lambda_{k}^{*}\) assigns probability mass to \((x_{k},u_{k})\) if there is a trajectory \(r_{k},r_{k+1},\ldots,r_{N}\) to which \(x_{k}\) has been allocated by \(\mathbf{\gamma}_{k}^{*}\), such that \(u_{k}^{*}\) is the optimal input to minimize the cost along that trajectory.
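This two-step recipe — DPA in the ground space, then a single optimal transport (here, assignment) problem — can be sketched end-to-end on a finite ground space. In the following minimal Python sketch (the toy dynamics, costs, and horizon are our illustrative choices), both marginals are empirical with the same number of particles, so the optimal plan is induced by a map:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

X = [-1, 0, 1]                                  # ground states
U = [-1, 0, 1]                                  # inputs
N = 3
idx = {x: i for i, x in enumerate(X)}
f = lambda x, u: max(-1, min(1, x + u))         # saturated integrator dynamics
g = lambda x, u: u ** 2                         # reference-independent stage cost
g_N = lambda x, r: 0.0 if x == r else np.inf    # terminal constraint x_N = r_N

# Step 1: DPA in the ground space, computing j_k(x, r) as in (15).
j = np.array([[g_N(x, r) for r in X] for x in X])        # j_N
for k in range(N - 1, -1, -1):                           # backward recursion
    j = np.array([[min(g(x, u) + j[idx[f(x, u)], idx[r]] for u in U)
                   for r in X] for x in X])              # overwrite with j_k

# Step 2: one assignment problem with cost j_0 ("who goes where?").
particles = [0, 0, 1]      # empirical mu_0 (M = 3 equally weighted particles)
targets = [-1, 0, 1]       # empirical rho_N
C = np.array([[j[idx[x], idx[r]] for r in targets] for x in particles])
rows, cols = linear_sum_assignment(C)
print([(particles[i], targets[c]) for i, c in zip(rows, cols)])  # allocation
print("optimal cost:", C[rows, cols].sum())
# Step 3: each particle applies the ground-space-optimal inputs along its
# assigned trajectory, as prescribed by (17).
```

Because both measures are empirical with the same number of particles, the returned allocation is a transport map, consistently with the discussion of transport maps below.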
**Existence of optimal solutions.** In turn, our results provide sufficient conditions for the existence of an optimal solution for Problem 3.1: existence of a solution for both the DPA in the ground space and the associated optimal transport problem. **Existence of optimal input maps.** An optimal solution to Equation (11) always exists when all spaces are finite, or when for any \(K\subseteq X_{k}\times R_{k}\times\ldots\times R_{N}\) compact and \(L>0\), the sets \(\{u_{k}\in U_{k}\,|\,g_{k}(x_{k},u_{k},r_{k})+j_{k+1}(f_{k}(x_{k},u_{k}),r_{k+1},\ldots,r_{N})\leq L,\forall(x_{k},y)\in K\}\) are compact, the maps \(g_{k},g_{N}\) are lower semicontinuous and \(f_{k}(x_{k},\cdot)\) are continuous for all \(x_{k}\in X_{k}\); see [7, Proposition 4.2.2] and [35, Theorem 18.19]. In general, however, optimal inputs may not exist. For this reason, we state our results using \(\varepsilon\)-optimality. **Existence of optimal transport maps.** If the solution of the optimal transport problem is a transport map, then (iii) in Theorem 4.2 suggests that the optimal input is deterministic. Without aiming at completeness, this is the case when: 1. The marginals are empirical with the same number of particles (by virtue of Birkhoff's theorem [8, Theorem 6.0.1]); or 2. The cost-to-go \(j_{k}\) is continuous and semi-concave, and for each \(x_{k}\in X_{k}\) the map \((r_{k},\ldots,r_{N})\mapsto\frac{\partial j_{k}}{\partial x_{k}}(x_{k},r_{k},\ldots,r_{N})\) is injective in its domain of definition intersected with splitting sets [36, Definition 2.4], and \(\mu_{k}\) is absolutely continuous [36, 37] (see [38, Theorem 1.2] for the case with two marginals). **Connections to previous work.** The approach in the literature for distribution/fleet steering is fundamentally different from ours: It is a-priori stipulated that the steering problem is an optimal transport problem from an initial distribution to a target one, without formulating an optimal control problem in probability spaces. This way, the complexity of DPA in probability spaces is bypassed, at the price, however, of potentially suboptimal solutions: There is no reason for this approach to be optimal for a corresponding control problem in the probability space. With Theorem 4.2 and Corollary 4.3, we show that, provided the transportation cost is judiciously chosen, this approach is optimal, and yields the same solution as DPA in probability spaces. For instance, the results in [1, §A] correspond to the optimal strategy when \(g_{k}(x_{k},u_{k},r_{k})=\left\|u_{k}\right\|^{2}\), and a terminal constraint on the final distribution (see Section 3.3). The results in [1] can thus be extended to more general terminal costs (e.g., \(g_{N}(x_{N},r_{N})=\left\|x_{N}-r_{N}\right\|^{2}\)). Instead, the results in [1, §B] are suboptimal in the sense of DPA in the probability space. By Theorem 4.2, when the stage costs are reference-dependent (e.g., \(g_{k}(x_{k},u_{k},r_{k})=\left\|u_{k}\right\|^{2}+\left\|x_{k}-r_{k}\right\|^{2}\)), the cost-to-go results from a multi-marginal optimal transport problem. As such, the strategy proposed in [1] does not minimize, at every time-step \(k\), the weighted sum of the squared Wasserstein distance from the target configuration and the input effort. Similarly, the problem formulation in [2] can be recovered with integrator dynamics \(f_{k}(x_{k},u_{k})=x_{k}+u_{k}\), cost \(g_{k}(x_{k},u_{k},r_{k})=\left\|u_{k}\right\|^{2}\), and a terminal constraint on the final distribution (see Section 3.3).
With a state augmentation (tracking the input used along the trajectory via an independent integrator dynamics) and input constraints as suggested in Section 3.3, [30, Problem 2] is a special case of our setting, with linear dynamics (see Section 3.2), stage cost \(g_{k}\equiv 0\), and terminal cost the squared Wasserstein distance; i.e., \(g_{N}(x_{N},r_{N})=\left\|x_{N}-r_{N}\right\|^{2}\). Simple calculations reveal that the hard-constrained covariance formulation in [30, Problem 1] can be reformulated via a hard terminal constraint on the final probability measure (a Gaussian probability measure with appropriate covariance). In both cases, such specializations are possible because the authors restrict themselves to the Gaussian and linear setting. In general, covariance constraints or penalties require further study; see Section 3.3. Similarly, noisy settings do not immediately benefit from our reformulation; see Section 5.3. Analogous considerations hold for [26, 27, 28, 29]. **Computational attractiveness.** A rough time complexity analysis in the setting of Corollary 4.3 highlights why our result is computationally attractive. Consider the finite setting: Let \(|X|\) be the number of states in the ground space, \(N\) the horizon length, \(|U|\) the number of available actions at each state, and restrict the attention to empirical probability measures consisting of \(M\) particles, which can be written as \(\mu_{k}=\frac{1}{M}\sum_{i=1}^{M}\delta_{x_{k}^{(i)}}\), for \(x_{k}^{(i)}\in X,i\in\{1,\ldots,M\}\). The number of possible states in the probability space amounts to \(\binom{M+|X|-1}{M}\). Therefore, the time complexity of naively applying DPA in the probability space (i.e., to compute the input at the current state) is \(\mathcal{O}(\binom{M+|X|-1}{M}N|X||U|)\). On the other hand, the DPA in the ground space (for all initial and terminal states) costs \(\mathcal{O}(N|X||U|)\). An optimal transport problem boils down to solving a linear program with \(M^{2}\) decision variables, which has complexity \(\mathcal{O}(M^{5})\) (see [39] for a sharper analysis). The total complexity of the recipe provided in Corollary 4.3 is thus \(\mathcal{O}(N|X||U|+M^{5})\), which improves over the DPA in the probability space. **Design of transportation costs.** In many disciplines, the design of transportation costs is challenging; e.g., [40, 41]. For instance, in [40], the underlying Riemannian metric characterizing the trajectory of single-cell RNA is retrieved in a data-driven fashion. Theorem 4.2 and Corollary 4.3 suggest an alternative approach: First, "learn" the cost-to-go for single particles; then, use it as the transportation cost. **Measurability issues.** In general, the cost-to-go \(j_{k}\) is not Borel. Nonetheless, with our assumptions, it is lower semi-analytic [42, Corollary 8.2.1] and, thus, the integral in (13) is well-defined [42, §7.7]. Similarly, for any \(\varepsilon>0\), the inputs \(u_{k}^{\varepsilon/2}\) may fail to be Borel measurable but are only universally measurable [42, Proposition 7.50]. However, for any Borel measure \(\boldsymbol{\gamma}_{k}^{\varepsilon/2}\) there exists a Borel map \(\tilde{u}:X_{k}\times R_{k}\times\ldots\times R_{N}\to U_{k}\) so that \(u_{k}^{\varepsilon/2}(x_{k},r_{k},\ldots,r_{N})=\tilde{u}(x_{k},r_{k},\ldots,r_{N})\) \(\boldsymbol{\gamma}_{k}^{\varepsilon/2}\)-a.e. [42, Lemma 7.27].
Thus, we can without loss of generality assume that \(u_{k}^{\varepsilon/2}\) is Borel and, thus, the pushforward map in (17) and the resulting probability measures \(\lambda_{k}^{\varepsilon}\) are well-defined. ## 5 Examples and Pitfalls In Section 5.1, we present two examples where two marginals are enough, in line with the existing literature [1, 2]. Then, in Section 5.2, we showcase that, in general, the multi-marginal formulation is necessary. Finally, Section 5.3 shows that our results do not readily extend to noisy dynamics. ### Examples when two marginals are all you need We start with an example to which Corollary 4.3 applies: **Example 5.1** (Integrator particle dynamics, input effort).: Suppose we aim at steering a probability measure \(\mu_{0}\in\mathcal{P}\left(\mathbb{R}^{n}\right)\) to a target \(\rho_{N}\in\mathcal{P}\left(\mathbb{R}^{n}\right)\) in \(N\) steps; i.e., \(X_{k}=R_{k}=\mathbb{R}^{n}\). The input space is \(U_{k}=\mathbb{R}^{n}\), and the dynamics are \(f_{k}(x_{k},u_{k})=x_{k}+u_{k}\). The costs are \(g_{k}(x_{k},u_{k})=\left\|u_{k}\right\|^{2}\), and \(g_{N}(x_{N},r_{N})=0\) if \(x_{N}=r_{N}\) and \(+\infty\) otherwise, so that the stage cost in the probability space is \(\mathcal{K}[g_{k}](\lambda_{k},\rho_{k})=\mathbb{E}^{\lambda_{k}}\left[\left\|u_{k}\right\|^{2}\right]\), and the terminal cost is \(\mathcal{K}[g_{N}](\mu_{N},\rho_{N})=0\) if \(\mu_{N}=\rho_{N}\) and \(+\infty\) otherwise. The optimal control problem in the ground space admits the solution \(u_{k}(x_{k},r_{N})=\frac{r_{N}-x_{k}}{N-k}\), with the associated cost-to-go \(j_{k}(x_{k},r_{N})=\frac{\left\|r_{N}-x_{k}\right\|^{2}}{N-k}\). By Corollary 4.3, the cost-to-go in the space of probability measures \(J_{k}\) is \[J_{k}(\mu_{k},\rho_{N})=\min_{\boldsymbol{\gamma}\in\Gamma(\mu_{k},\rho_{N})}\int_{\mathbb{R}^{n}\times\mathbb{R}^{n}}\frac{\left\|r_{N}-x_{k}\right\|^{2}}{N-k}\,\mathrm{d}\boldsymbol{\gamma}(x_{k},r_{N})=\frac{W_{2}(\mu_{k},\rho_{N})^{2}}{N-k},\] and the optimal input reads \(\lambda_{k}=(\mathrm{proj}_{X_{k}}^{X_{k}\times R_{N}},u_{k})_{\#}\boldsymbol{\gamma}_{k}\), where \(\boldsymbol{\gamma}_{k}\) is the optimal transport plan for \(J_{k}(\mu_{k},\rho_{N})\). In the particular case where an optimal transport map \(T_{k}:X_{k}\to R_{N}\) exists, the optimal input simplifies to \(\lambda_{k}=(\mathrm{id}_{X_{k}},u_{k}(\cdot,T_{k}(\cdot)))_{\#}\mu_{k}\). That is, all particles having state \(x_{k}\) apply the input \(u_{k}(x_{k},T_{k}(x_{k}))=\frac{T_{k}(x_{k})-x_{k}}{N-k}\). \(\triangle\) Sometimes, the optimal input is probabilistic: **Example 5.2** (Sometimes it is necessary to split the mass).: Let \(N=1\), and consider \(X_{k}=U_{k}=R_{k}=\mathbb{R}\), \(f_{0}(x_{0},u_{0})=u_{0}\), \(g_{0}(x_{0},u_{0})=0\), \(g_{N}(x_{1},r_{1})=\left\|x_{1}-r_{1}\right\|^{2}\). Let \(\mu_{0}=\delta_{0}\) and \(\rho_{1}=\frac{1}{2}\delta_{-1}+\frac{1}{2}\delta_{+1}\). For every pair \((x_{0},r_{1})\) the solution in the ground space is \(u_{0}(x_{0},r_{1})=r_{1}\), which yields the cost-to-go \(j_{0}(x_{0},r_{1})=0\). That is, any allocation \(\boldsymbol{\gamma}\in\Gamma(\delta_{0},\rho_{1})\) is optimal; in particular, the only feasible plan \(\boldsymbol{\gamma}^{\star}\in\Gamma(\delta_{0},\rho_{1})\) displaces \(50\%\) of the mass to \(r_{1}=-1\) and the other \(50\%\) of the mass to \(r_{1}=+1\); see Figure 1(a).
Then, the optimal input reads \(\lambda_{0}=(\mathrm{proj}_{X_{0}}^{X_{0}\times R_{1}},u_{0})_{\#}\boldsymbol{\gamma}^{\star}\): \(50\%\) of the particles apply the input \(u_{0}=+1\), and the others \(u_{0}=-1\). \(\triangle\) ### Why all these marginals? Hereby, we explore the differences between Theorem 4.2 and Corollary 4.3. Specifically, we clarify why a _multi-marginal_ optimal transport formulation arises, even when the target probability measure remains constant throughout the horizon (i.e., \(\rho_{0}=\ldots=\rho_{N}\eqqcolon\rho\)). **Counterexample 5.1** (Two marginals are not enough).: Consider, as in Example 3.3, \(X_{k}=R_{k}=\{\pm 1,0\},U_{k}=\{-1,0\}\), dynamics \(f_{k}(x_{k},u_{k})=x_{k}u_{k}\), with horizon \(N=2\), and costs \(g_{k}(x_{k},u_{k},r_{k})=\left\|x_{k}-r_{k}\right\|^{2}\) and \(g_{N}(x_{N},r_{N})=\left\|x_{N}-r_{N}\right\|^{2}\), so that the stage and terminal cost in the probability space are the squared (type 2) Wasserstein distance from the fixed reference measure \(\rho=\frac{1}{2}(\delta_{-1}+\delta_{+1})\). First, we utilize Corollary 4.3 with initial configuration \(\mu_{0}=\rho\), keeping the reference constant throughout the horizon. The cost-to-go is \(\tilde{j}_{0}(\pm 1,\pm 1)=2\) (here and below, this notation means \(\tilde{j}_{0}(+1,+1)=\tilde{j}_{0}(-1,-1)=2\)), obtained applying \(u_{0}=0\) at the first stage (and subsequently any input), and \(\tilde{j}_{0}(\pm 1,\mp 1)=5\), obtained applying \(u_{0}=-1\) and then \(u_{1}=0\). The cost-to-go for the fleet is \(\tilde{J}_{0}(\mu_{0},\rho)=\mathcal{K}[\tilde{j}_{0}](\mu_{0},\rho)=2\), with the particle having state \(x_{0}=\pm 1\) allocated to \(r_{2}=\pm 1\). However, from a fleet perspective, the input \(u_{k}=-1\) leads to \(\mu_{0}=\mu_{1}=\mu_{2}=\rho\). By changing allocations throughout the horizon, we obtain a total cost \(J_{0}(\mu_{0},\rho,\rho,\rho)=0\). This behavior emerges naturally with Theorem 4.2. The cost-to-go in the ground space satisfies \(j_{0}(x_{0}=\pm 1,r_{0}=\pm 1,r_{1}=\mp 1,r_{2}=\pm 1)=0\), with the input \(u_{k}=-1\) at all times. Then, the transport plan \(\boldsymbol{\gamma}=(\mathrm{id}_{\mathbb{R}},\mathrm{id}_{\mathbb{R}},-\mathrm{id}_{\mathbb{R}},\mathrm{id}_{\mathbb{R}})_{\#}\mu_{0}\in\Gamma(\mu_{0},\rho,\rho,\rho)\) yields \[J_{0}(\mu_{0},\rho,\rho,\rho)=\mathcal{K}[j_{0}](\mu_{0},\rho,\rho,\rho)\leq 2\cdot\frac{1}{2}\,j_{0}(x_{0}=\pm 1,r_{0}=\pm 1,r_{1}=\mp 1,r_{2}=\pm 1)=0,\] necessarily optimal; see Figure 1(b). In particular, \(J_{0}(\mu_{0},\rho,\rho,\rho)<\tilde{J}_{0}(\mu_{0},\rho)=2\). That is, Corollary 4.3 does not apply and the optimal solution results from Theorem 4.2. \(\triangle\) ### The effect of local noise When the particle dynamics are noisy, it is common to minimize the expected particle cost via the _stochastic DPA_: \[\begin{split} j_{N}(x_{N},r_{N})&=g_{N}(x_{N},r_{N});\\ j_{k}(x_{k},r_{N})&=\inf_{u_{k}\in U_{k}}\mathbb{E}^{w_{k}\sim\xi_{k}}\left[g_{k}(x_{k},u_{k},w_{k})+j_{k+1}(f_{k}(x_{k},u_{k},w_{k}),r_{N})\right],\end{split} \tag{18}\] where \(\xi_{k}\in\mathcal{P}(W_{k})\) is the probability measure of the noise, and \(W_{k}\) is the space of possible realizations. Since \(j_{k}\) is of the form required for Corollary 4.3, it is tempting to extend our results. Unfortunately, the noisy drift may favor a different allocation of the particles, and the expectation annihilates such effect. Figure 1: Depiction of Example 5.2, Counterexample 5.1, and Counterexample 5.2. **Counterexample 5.2** (Corollary 4.3 does not readily extend).: Consider a horizon \(N=2\) and the setting depicted in Figure 1(c).
Let \(X_{k}=R_{k}=U_{k}=\{\heartsuit,\diamondsuit,\spadesuit,\clubsuit\}\), and consider uniformly distributed noise over \(W_{k}=\{\spadesuit,\clubsuit\}\). The particle dynamics are \(f_{k}(\heartsuit,u_{k},w_{k})=f_{k}(\diamondsuit,u_{k},w_{k})=w_{k}\) and \(f_{k}(\spadesuit,u_{k},w_{k})=f_{k}(\clubsuit,u_{k},w_{k})=u_{k}\). The stage cost is \(\mathcal{K}[g_{k}]\), where \(g_{k}(\spadesuit,\heartsuit,w_{k})=g_{k}(\clubsuit,\diamondsuit,w_{k})=2\) and \(0\) otherwise. The terminal cost enforces the configuration \(\rho_{N}=\frac{1}{2}\delta_{\heartsuit}+\frac{1}{2}\delta_{\diamondsuit}\), namely \(\mathcal{K}[g_{N}]\) with \(g_{N}(x_{N},r_{N})=0\) if \(x_{N}=r_{N}\) and \(+\infty\) otherwise. The recursion Equation (18) yields \(j_{0}(\heartsuit,\heartsuit)=1\) (any input at the first stage and \(u=\heartsuit\) at the second stage), \(j_{0}(\heartsuit,\diamondsuit)=1\), and \(j_{0}(\diamondsuit,\heartsuit)=j_{0}(\diamondsuit,\diamondsuit)=1\) (all with analogous transitions). Therefore, with the initial configuration and target configuration \(\mu_{0}=\rho_{N}=\frac{1}{2}\left(\delta_{\heartsuit}+\delta_{\diamondsuit}\right)\), Corollary 4.3 yields \(\tilde{J}_{0}(\mu_{0},\rho_{N})=\mathcal{K}[j_{0}](\mu_{0},\rho_{N})=1\). Instead, the DPA in the probability space gives \(\mu_{1}=\frac{1}{2}\left(\delta_{\spadesuit}+\delta_{\clubsuit}\right)\) with zero cost (regardless of the input). Then, the evolution is deterministic and the cost-to-go amounts to \(j_{1}(\spadesuit,\diamondsuit)=j_{1}(\clubsuit,\heartsuit)=0\) and \(j_{1}(\spadesuit,\heartsuit)=j_{1}(\clubsuit,\diamondsuit)=2\). Thus, Corollary 4.3 applies and yields \(J_{1}(\mu_{1},\rho_{N})=\mathcal{K}[j_{1}](\mu_{1},\rho_{N})=\frac{1}{2}j_{1}(\spadesuit,\diamondsuit)+\frac{1}{2}j_{1}(\clubsuit,\heartsuit)=0\). Overall, \(J_{0}(\mu_{0},\rho_{N})\leq 0+J_{1}(\mu_{1},\rho_{N})=0<1=\tilde{J}_{0}(\mu_{0},\rho_{N})\). Thus, the naive application of Corollary 4.3 is suboptimal. ## 6 Proof of Theorem 4.2 and Corollary 4.3 For the proofs of Theorem 4.2 and Corollary 4.3, we need a few preliminary lemmata. For ease of notation, let \(X\coloneqq X_{1}\times\ldots\times X_{k}\), \(Y\coloneqq Y_{1}\times\ldots\times Y_{h}\), and \(Z\coloneqq Z_{1}\times\ldots\times Z_{k}\). To start, we introduce a variation of (8), in which only the first \(k\) marginals \(\mu_{i}\in\mathcal{P}(X_{i})\) are fixed. Namely, \[\mathcal{J}[c](\mu_{1},\ldots,\mu_{k})\coloneqq\inf_{(\operatorname{proj}_{X}^{X\times Y})_{\#}\boldsymbol{\gamma}\in\Gamma(\mu_{1},\ldots,\mu_{k})}\int_{X\times Y}c\,\mathrm{d}\boldsymbol{\gamma},\] where \(c:X\times Y\to\bar{\mathbb{R}}_{\geq 0}\) is the transportation cost. When \(c:X\to\bar{\mathbb{R}}_{\geq 0}\) (i.e., there are no free marginals), we conveniently write \(\mathcal{J}[c](\mu_{1},\ldots,\mu_{k})=\mathcal{K}[c](\mu_{1},\ldots,\mu_{k})\). Further, given a collection of maps \(\{l_{k}:X_{k}\to Y_{k}\}_{k=i}^{j}\), we denote by \(l\coloneqq l_{i}\times\ldots\times l_{j}\) the map \(X_{i}\times\ldots\times X_{j}\to Y_{i}\times\ldots\times Y_{j}\) defined point-wise as \((x_{i},\ldots,x_{j})\mapsto(l_{i}(x_{i}),\ldots,l_{j}(x_{j}))\). Given the probability measures \(\{\mu_{i}\in\mathcal{P}(X_{i})\}_{i=1}^{k}\), \(\mu\coloneqq(\mu_{1},\ldots,\mu_{k})\), we conveniently write \(l_{\#}\mu\coloneqq(l_{1\#}\mu_{1},\ldots,l_{k\#}\mu_{k})\). A measure-valued map \(X\ni x\mapsto\mu^{x}\in\mathcal{P}(X)\) is Borel if and only if, for any Borel set \(B\subseteq X\), the map \(x\mapsto\mu^{x}(B)\) is Borel.
In our setting, the cost-to-go will be an optimal transport discrepancy, and the dynamics are a push-forward. To relate the cost-to-go at the \(k^{\text{th}}\) stage to the one at the previous time step, we rigorously formalize their interplay. A similar but less general result (i.e., only with two fixed marginals) was derived in the context of uncertainty propagation via optimal transport [43]. **Lemma 6.1** (Pushforward and optimal transport).: _Given a transportation cost \(c:Z\times Y\to\bar{\mathbb{R}}_{\geq 0}\), \(k\in\mathbb{N}_{\geq 1},h\in\mathbb{N}\), maps \(\{l_{i}:X_{i}\to Z_{i}\}_{i=1}^{k}\), \(l\coloneqq l_{1}\times\ldots\times l_{k}\), and probability measures \(\{\mu_{i}\in\mathcal{P}(X_{i})\}_{i=1}^{k}\), \(\mu\coloneqq(\mu_{1},\ldots,\mu_{k})\), it holds:_ \[\mathcal{J}[c\circ(l\times\mathrm{id}_{Y})](\mu)=\inf_{(\operatorname{proj}_{X}^{X\times Y})_{\#}\boldsymbol{\mu}\in\Gamma(\mu)}\int_{X\times Y}c\circ(l\times\mathrm{id}_{Y})\,\mathrm{d}\boldsymbol{\mu}=\inf_{(\operatorname{proj}_{Z}^{Z\times Y})_{\#}\boldsymbol{\mu}^{\prime}\in\Gamma(l_{\#}\mu)}\int_{Z\times Y}c\,\mathrm{d}\boldsymbol{\mu}^{\prime}=\mathcal{J}[c](l_{\#}\mu).\] Proof.: We prove "\(\leq\)" and "\(\geq\)" separately. We start with "\(\geq\)". For any \(\boldsymbol{\mu}\in\mathcal{P}(X\times Y)\) such that \((\mathrm{proj}_{X}^{X\times Y})_{\#}\boldsymbol{\mu}\in\Gamma(\mu)\), let \(\boldsymbol{\mu}^{\prime}=(l\times\mathrm{id}_{Y})_{\#}\boldsymbol{\mu}\). For \(i\in\{1,\ldots,k\}\) consider \(\phi\in C_{b}(Z_{i})\). It holds: \[\int_{Z\times Y}\phi(z_{i})\,\mathrm{d}\boldsymbol{\mu}^{\prime}(z,y)=\int_{Z\times Y}\phi(z_{i})\,\mathrm{d}((l\times\mathrm{id}_{Y})_{\#}\boldsymbol{\mu})(z_{1},\ldots,z_{k},y)=\int_{X\times Y}\phi(l_{i}(x_{i}))\,\mathrm{d}\boldsymbol{\mu}(x_{1},\ldots,x_{k},y)=\int_{X_{i}}\phi(l_{i}(x_{i}))\,\mathrm{d}((\mathrm{proj}_{X_{i}}^{X\times Y})_{\#}\boldsymbol{\mu})(x_{i})=\int_{X_{i}}\phi\circ l_{i}\,\mathrm{d}\mu_{i}=\int_{Z_{i}}\phi\,\mathrm{d}(l_{i\#}\mu_{i}).\] That is, \((\mathrm{proj}_{Z_{i}}^{Z\times Y})_{\#}\boldsymbol{\mu}^{\prime}=l_{i\#}\mu_{i}\) and, thus, \((\mathrm{proj}_{Z}^{Z\times Y})_{\#}\boldsymbol{\mu}^{\prime}\in\Gamma(l_{\#}\mu)\). Similarly, for all \(j\in\{1,\ldots,h\}\) we have \((\mathrm{proj}_{Y_{j}}^{Z\times Y})_{\#}\boldsymbol{\mu}^{\prime}=(\mathrm{proj}_{Y_{j}}^{X\times Y})_{\#}\boldsymbol{\mu}\in\mathcal{P}(Y_{j})\). Therefore, \(\boldsymbol{\mu}^{\prime}\) provides the upper bound \(\mathcal{J}[c](l_{\#}\mu)\leq\int_{X\times Y}c\circ(l\times\mathrm{id}_{Y})\,\mathrm{d}\boldsymbol{\mu}\). Since \(\boldsymbol{\mu}\) is arbitrary, we obtain \(\mathcal{J}[c](l_{\#}\mu)\leq\mathcal{J}[c\circ(l\times\mathrm{id}_{Y})](\mu)\). To prove "\(\leq\)", fix \(\boldsymbol{\mu}^{\prime}\in\mathcal{P}(Z\times Y)\) with \((\mathrm{proj}_{Z}^{Z\times Y})_{\#}\boldsymbol{\mu}^{\prime}\in\Gamma(l_{\#}\mu)\). By definition, \(\boldsymbol{\mu}^{\prime}\in\Gamma(l_{\#}\mu,(\mathrm{proj}_{Y}^{Z\times Y})_{\#}\boldsymbol{\mu}^{\prime})\). Then, for all \(i\in\{1,\ldots,k\}\), let \(\boldsymbol{\mu_{i}}=(\mathrm{id}_{X_{i}},l_{i})_{\#}\mu_{i}\in\mathcal{P}(X_{i}\times Z_{i})\). Analogously to the previous step, we have \(\boldsymbol{\mu_{i}}\in\Gamma(\mu_{i},l_{i\#}\mu_{i})\).
We can "glue" \(\{\mathbf{\mu_{i}}\}_{i=1}^{k}\) and \(\mathbf{\mu}^{\prime}\) to obtain \(\mathbf{\mu}^{\bullet}\in\mathcal{P}(X\times Z\times Y)\) such that \((\mathrm{proj}_{X\times Z}^{\times Z\times Y})_{\#}\mathbf{\mu}^{\bullet}\in \Gamma(\mu,l_{\#}\mu)\). Specifically, we apply \(k\) times [9, Gluing lemma] as follows. First, we glue \(\mathbf{\mu}^{\prime}\) and \(\mathbf{\mu_{1}}\), since they share a marginal: \((\mathrm{proj}_{Z_{1}}^{Z\times Y})_{\#}\mathbf{\mu}^{\prime}=l_{1\#}\mu_{1}=( \mathrm{proj}_{Z_{1}}^{\mathrm{X_{1}}\times Z_{1}})_{\#}\mathbf{\mu_{1}}\). Call the resulting plan \(\mathbf{\mu_{1}^{\bullet}}\in\Gamma(\mu_{1},l_{\#}\mu,(\mathrm{proj}_{Y}^{Z\times Y })_{\#}\mathbf{\mu}^{\prime})\). Next, define inductively \(\mathbf{\mu_{i}^{\bullet}}\in\Gamma(\mu_{1},\ldots,\mu_{i},l_{\#}\mu,(\mathrm{ proj}_{Y}^{Z\times Y})_{\#}\mathbf{\mu}^{\prime})\) as the plan obtained from gluing \(\mathbf{\mu_{i-1}^{\bullet}}\) and \(\mathbf{\mu_{i}}\), for \(i\in\{2,\ldots,k\}\). The definition is well-posed in view of [9, Gluing lemma], since \((\mathrm{proj}_{Z_{i}}^{\mathrm{X_{1}}\times\ldots\times X_{i}\times Z\times Y })_{\#}\mathbf{\mu_{i-1}}=l_{i\#}\mu_{i}=(\mathrm{proj}_{Z_{i}}^{\mathrm{X_{i}} \times Z_{i}})_{\#}\mathbf{\mu_{i}}\). Finally, we take \(\mathbf{\mu}^{\bullet}=\mathbf{\mu_{k}^{\bullet}}\), so that \(\mathbf{\mu}=(\mathrm{proj}_{X\times Y}^{\times Y\times Y})_{\#}\mathbf{\mu}^{\bullet} \in\Gamma\left(\mu_{k}^{k},(\mathrm{proj}_{Y}^{Z\times Y})_{\#}\mathbf{\mu}^{ \prime}\right).\) Let \(\bar{X}\coloneqq X_{1}\times\ldots\times X_{k-1}\), \(\bar{Z}\coloneqq Z_{1}\times\ldots\times Z_{k-1}\), and \(\bar{l}\coloneqq l_{1}\times\ldots\times l_{k-1}\). Then, for the \(k^{\text{th}}\) argument of \(c\), \[\int_{X\times Y}(c\circ(l\times\mathrm{id}_{Y}))(x,y)\,\mathrm{d} \mathbf{\mu}(x,y)\] \[=\int_{X\times Z\times Y}(c\circ(l\times\mathrm{id}_{Y}))(x,y)\, \mathrm{d}\mathbf{\mu}^{\bullet}(x,z,y)\] \[\stackrel{{\clubsuit}}{{=}}\int_{X_{k}\times Z_{k}} \int_{\bar{X}\times Z\times Y}(c\circ(l\times\mathrm{id}_{Y}))(\bar{x},x_{k},y)\, \mathrm{d}\mathbf{\tilde{\mu}^{x_{k}z_{k}}}(\bar{x},\bar{z},y)\,\mathrm{d}\mathbf{\mu_{k} }(x_{k},z_{k})\] \[=\int_{X_{k}\times Z_{k}}\int_{\bar{X}\times\bar{Z}\times Y}c(\bar {l}(\bar{x}),l_{k}(x_{k}),y)\,\mathrm{d}\mathbf{\tilde{\mu}^{x_{k}z_{k}}}(\bar{x}, \bar{z},y)\,\mathrm{d}\mathbf{\mu_{k}}(x_{k},z_{k})\] \[\stackrel{{\clubsuit}}{{=}}\int_{X_{k}\times Z_{k}} \int_{\bar{X}\times\bar{Z}\times Y}c(\bar{l}(\bar{x}),z_{k},y)\,\mathrm{d}\mathbf{ \tilde{\mu}^{x_{k}z_{k}}}(\bar{x},\bar{z},y)\,\mathrm{d}\mathbf{\mu_{k}}(x_{k},z_{k})\] \[\stackrel{{\clubsuit}}{{=}}\int_{X_{k}\times Z\times Y }c(\bar{l}(\bar{x}),z_{k},y)\,\mathrm{d}\mathbf{\mu^{\bullet}}(x,z,y),\] where in \(\clubsuit\) we used the disintegration theorem ([8, Theorem 5.3.1]), which provides us a collection \(\{\mathbf{\tilde{\mu}^{x_{k}z_{k}}}\}_{(x_{k},z_{k})\in X_{k}\times Z_{k}}\) to complement \(\mathbf{\mu_{k}}\). Then, in \(\clubsuit\), we used the definition of \(\mathbf{\mu_{k}}\): \(z_{k}=l_{k}(x_{k})\)\(\mu_{k}\)- a.e. Repeating the same steps for the other arguments of \(c\) we obtain \(\mathcal{J}[c\circ(l\times\mathrm{id}_{Y})](\mu)\leq\int_{X\times Y}c(l(x),y)\, \mathrm{d}\mathbf{\mu}(x,y)=\int_{X\times Z\times Y}c(z,y)\,\mathrm{d}\mathbf{\mu}^{ \bullet}(x,z,y)=\int_{Z\times Y}c\,\mathrm{d}\mathbf{\mu}^{\prime}\). Since \(\mathbf{\mu}^{\prime}\) is arbitrary, it follows \(\mathcal{J}[c\circ(l\times\mathrm{id}_{Y})](\mu)\leq\mathcal{J}[c](l_{\#}\mu)\). 
The next result expresses the sum of two optimal transport discrepancies, possibly with free marginals, as a single optimal transport discrepancy, with the same free marginals. Similar results provide multi-marginal reformulations for Wasserstein barycenters [44, 45], whose computation has recently received much interest [46, 47]. **Lemma 6.2** (Sum of optimal transport discrepancies).: _Given transportation costs \(c_{1}:X_{1}\times Z\to\bar{\mathbb{R}}_{\geq 0}\), \(c_{2}:X\times Y\to\bar{\mathbb{R}}_{\geq 0}\), and probability measures \(\{\mu_{i}\in\mathcal{P}(X_{i})\}_{i=1}^{k}\), \(\mu\coloneqq(\mu_{1},\ldots,\mu_{k})\), \(\nu\in\mathcal{P}(Z)\), it holds \(\mathcal{K}[c_{1}](\mu_{1},\nu)+\mathcal{J}[c_{2}](\mu)=\mathcal{J}[c](\mu,\nu)\), with \(c:X\times Y\times Z\to\bar{\mathbb{R}}_{\geq 0}\) defined as \(c(x_{1},\ldots,x_{k},y,z)=c_{1}(x_{1},z)+c_{2}(x_{1},\ldots,x_{k},y)\)._ Proof.: We prove separately "\(\leq\)" and "\(\geq\)". With the short-hand notation \(x\coloneqq(x_{1},\ldots,x_{k})\), "\(\leq\)" follows minimizing separately over the shared marginal: \[\mathcal{J}[c](\mu,\nu)=\inf_{(\operatorname{proj}_{X\times Z}^{X\times Y\times Z})_{\#}\boldsymbol{\gamma}\in\Gamma(\mu,\nu)}\int_{X\times Y\times Z}c_{1}(x_{1},z)+c_{2}(x,y)\,\mathrm{d}\boldsymbol{\gamma}(x,y,z)\geq\inf_{(\operatorname{proj}_{X\times Z}^{X\times Y\times Z})_{\#}\boldsymbol{\gamma}\in\Gamma(\mu,\nu)}\int_{X\times Y\times Z}c_{1}(x_{1},z)\,\mathrm{d}\boldsymbol{\gamma}(x,y,z)+\inf_{(\operatorname{proj}_{X\times Z}^{X\times Y\times Z})_{\#}\boldsymbol{\gamma}\in\Gamma(\mu,\nu)}\int_{X\times Y\times Z}c_{2}(x,y)\,\mathrm{d}\boldsymbol{\gamma}(x,y,z)=\inf_{(\operatorname{proj}_{X\times Z}^{X\times Y\times Z})_{\#}\boldsymbol{\gamma}\in\Gamma(\mu,\nu)}\int_{X_{1}\times Z}c_{1}(x_{1},z)\,\mathrm{d}((\operatorname{proj}_{X_{1}\times Z}^{X\times Y\times Z})_{\#}\boldsymbol{\gamma})(x_{1},z)+\inf_{(\operatorname{proj}_{X\times Z}^{X\times Y\times Z})_{\#}\boldsymbol{\gamma}\in\Gamma(\mu,\nu)}\int_{X\times Y}c_{2}(x,y)\,\mathrm{d}((\operatorname{proj}_{X\times Y}^{X\times Y\times Z})_{\#}\boldsymbol{\gamma})(x,y)\stackrel{\heartsuit}{=}\inf_{\boldsymbol{\gamma_{1}}\in\Gamma(\mu_{1},\nu)}\int_{X_{1}\times Z}c_{1}\,\mathrm{d}\boldsymbol{\gamma_{1}}+\inf_{(\operatorname{proj}_{X}^{X\times Y})_{\#}\boldsymbol{\gamma_{2}}\in\Gamma(\mu)}\int_{X\times Y}c_{2}\,\mathrm{d}\boldsymbol{\gamma_{2}}=\mathcal{K}[c_{1}](\mu_{1},\nu)+\mathcal{J}[c_{2}](\mu),\] where in \(\heartsuit\) (i) we noticed that the first infimum is only over \((\operatorname{proj}_{X_{1}\times Z}^{X\times Y\times Z})_{\#}\boldsymbol{\gamma}=\boldsymbol{\gamma}^{\prime}\in\Gamma(\mu_{1},\nu)\), and (ii) in the second infimum we used Lemma 6.1 with the pushforward map being \(\operatorname{proj}_{X\times Y}^{X\times Y\times Z}\). We now prove "\(\geq\)". For all \(\varepsilon>0\), consider \(\varepsilon\)-optimal \(\boldsymbol{\gamma_{1}^{\varepsilon}}\in\Gamma(\mu_{1},\nu)\) and \(\boldsymbol{\gamma_{2}^{\varepsilon}}\in\mathcal{P}(X\times Y)\) so that \((\operatorname{proj}_{X}^{X\times Y})_{\#}\boldsymbol{\gamma_{2}^{\varepsilon}}\in\Gamma(\mu)\); i.e., \(\int_{X_{1}\times Z}c_{1}\,\mathrm{d}\boldsymbol{\gamma_{1}^{\varepsilon}}\leq\mathcal{K}[c_{1}](\mu_{1},\nu)+\varepsilon\) and \(\int_{X\times Y}c_{2}\,\mathrm{d}\boldsymbol{\gamma_{2}^{\varepsilon}}\leq\mathcal{J}[c_{2}](\mu)+\varepsilon\).
Since \((\mathrm{proj}_{X_{1}}^{X_{1}\times Z})_{\#}\boldsymbol{\gamma_{1}^{\varepsilon}}=\mu_{1}=(\mathrm{proj}_{X_{1}}^{X\times Y})_{\#}\boldsymbol{\gamma_{2}^{\varepsilon}}\), we can glue them [9, Gluing lemma] to obtain \(\boldsymbol{\gamma}^{\varepsilon}\in\Gamma(\mu,\nu,(\mathrm{proj}_{Y}^{X\times Y})_{\#}\boldsymbol{\gamma_{2}^{\varepsilon}})\). Then, it holds \[\int_{X\times Y\times Z}c\,\mathrm{d}\boldsymbol{\gamma}^{\varepsilon}=\int_{X_{1}\times Z}c_{1}\,\mathrm{d}\underbrace{((\mathrm{proj}_{X_{1}\times Z}^{X\times Y\times Z})_{\#}\boldsymbol{\gamma}^{\varepsilon})}_{\boldsymbol{\gamma_{1}^{\varepsilon}}}+\int_{X\times Y}c_{2}\,\mathrm{d}\underbrace{((\mathrm{proj}_{X\times Y}^{X\times Y\times Z})_{\#}\boldsymbol{\gamma}^{\varepsilon})}_{\boldsymbol{\gamma_{2}^{\varepsilon}}}\] and, thus, \(\mathcal{J}[c](\mu,\nu)\leq\mathcal{K}[c_{1}](\mu_{1},\nu)+\mathcal{J}[c_{2}](\mu)+2\varepsilon\). Let \(\varepsilon\to 0\) to conclude. In particular, when \(\mathcal{K}[c_{1}]\) is an expected value, the composition simplifies: **Lemma 6.3** (Compositionality of optimal transport).: _Given a cost \(v:X\to\bar{\mathbb{R}}_{\geq 0}\), a transportation cost \(c:Y\times Z\to\bar{\mathbb{R}}_{\geq 0}\), a map \(l:X\to Y\), and probability measures \(\mu\in\mathcal{P}(X),\nu\in\mathcal{P}(Z)\), it holds \(\mathbb{E}^{\mu}\left[v\right]+\mathcal{K}[c](l_{\#}\mu,\nu)=\mathcal{K}[v+c\circ(l\times\mathrm{id}_{Z})](\mu,\nu)\)._ Proof.: The statement is a special case of Lemma 6.1. Finally, we give a useful disintegration property of the cost term \(\mathcal{J}[c]\): **Lemma 6.4** (Disintegration of the optimizer).: _Given a transportation cost \(c\in\mathrm{lsc}(X\times Y\times Z,\bar{\mathbb{R}}_{\geq 0})\) and probability measures \(\mu\in\mathcal{P}(X)\), \(\nu\in\mathcal{P}(Y)\), it holds:_ \[\inf_{(\operatorname{proj}_{X\times Y}^{X\times Y\times Z})_{\#}\boldsymbol{\gamma}\in\Gamma(\mu,\nu)}\int_{X\times Y\times Z}c(x,y,z)\,\mathrm{d}\boldsymbol{\gamma}(x,y,z)=\inf_{\boldsymbol{\gamma}^{\prime}\in\Gamma(\mu,\nu)}\inf_{\{\xi^{xy}\}\in\Lambda(Z)}\int_{X\times Y}\!\int_{Z}c(x,y,z)\,\mathrm{d}\xi^{xy}(z)\,\mathrm{d}\boldsymbol{\gamma}^{\prime}(x,y),\] _where \(\Lambda(Z)\coloneqq\{\{\xi^{xy}\}_{(x,y)\in X\times Y}\subseteq\mathcal{P}(Z)\,|\,X\times Y\ni(x,y)\mapsto\xi^{xy}\in\mathcal{P}(Z)\text{ Borel}\}\)._ Proof.: We prove "\(\geq\)" and "\(\leq\)" separately. To prove "\(\geq\)", consider any \(\boldsymbol{\gamma}\in\mathcal{P}(X\times Y\times Z)\) such that \((\mathrm{proj}_{X\times Y}^{X\times Y\times Z})_{\#}\boldsymbol{\gamma}\in\Gamma(\mu,\nu)\). By [8, Theorem 5.3.1], there exists \(\{\boldsymbol{\gamma}^{xy}\}\in\Lambda(Z)\) such that \[\int_{X\times Y\times Z}c\,\mathrm{d}\boldsymbol{\gamma}=\int_{X\times Y}\int_{Z}c(x,y,z)\,\mathrm{d}\boldsymbol{\gamma}^{xy}(z)\,\mathrm{d}((\mathrm{proj}_{X\times Y}^{X\times Y\times Z})_{\#}\boldsymbol{\gamma})(x,y)\geq\inf_{\boldsymbol{\gamma}^{\prime}\in\Gamma(\mu,\nu)}\inf_{\{\xi^{xy}\}\in\Lambda(Z)}\int_{X\times Y}\!\int_{Z}c(x,y,z)\,\mathrm{d}\xi^{xy}(z)\,\mathrm{d}\boldsymbol{\gamma}^{\prime}(x,y).\] Then, take the infimum over \(\boldsymbol{\gamma}\). To prove "\(\leq\)", we follow [8, §5.3] to construct the reverse of the disintegration. Fix any \(\boldsymbol{\gamma}^{\prime}\in\Gamma(\mu,\nu)\) and any \(\{\xi^{xy}\}\in\Lambda(Z)\).
Then, we can construct a Borel probability measure \(\boldsymbol{\gamma}\in\mathcal{P}(X\times Y\times Z)\) defined for every Borel set \(B\subseteq X\times Y\times Z\) as \(\boldsymbol{\gamma}(B)=\int_{X\times Y}\!\int_{Z}1_{B}(x,y,z)\,\mathrm{d}\xi^{xy}(z)\,\mathrm{d}\boldsymbol{\gamma}^{\prime}(x,y).\) For \(\phi\in C_{b}(X\times Y)\), we have \[\int_{X\times Y\times Z}\phi\,\mathrm{d}\boldsymbol{\gamma}=\int_{X\times Y}\int_{Z}\phi(x,y)\,\mathrm{d}\xi^{xy}(z)\,\mathrm{d}\boldsymbol{\gamma}^{\prime}(x,y)=\int_{X\times Y}\phi(x,y)\,\mathrm{d}\boldsymbol{\gamma}^{\prime}(x,y).\] Thus, \((\mathrm{proj}_{X\times Y}^{X\times Y\times Z})_{\#}\boldsymbol{\gamma}=\boldsymbol{\gamma}^{\prime}\in\Gamma(\mu,\nu)\). Therefore, \[\int_{X\times Y}\!\int_{Z}c(x,y,z)\,\mathrm{d}\xi^{xy}(z)\,\mathrm{d}\boldsymbol{\gamma}^{\prime}(x,y)=\int_{X\times Y\times Z}c\,\mathrm{d}\boldsymbol{\gamma}\geq\inf_{(\mathrm{proj}_{X\times Y}^{X\times Y\times Z})_{\#}\boldsymbol{\gamma}\in\Gamma(\mu,\nu)}\int_{X\times Y\times Z}c\,\mathrm{d}\boldsymbol{\gamma},\] and the claim follows taking the infimum over \(\boldsymbol{\gamma}^{\prime}\) and \(\{\xi^{xy}\}\). We are now ready to prove Theorem 4.2 and Corollary 4.3: Proof of Theorem 4.2.: We prove the statements separately. To ease the notation, we recall \(R\coloneqq R_{k}\times R_{k+1}\times\ldots\times R_{N}\), and we introduce \[c_{k}\coloneqq g_{k}+j_{k+1}\circ(f_{k}\times\mathrm{id}_{R_{k+1}\times\ldots\times R_{N}}):X_{k}\times U_{k}\times R\to\bar{\mathbb{R}}_{\geq 0}. \tag{19}\] 1. We proceed by induction. The base case is \(J_{N}=\mathcal{K}[g_{N}]\) and \(j_{N}=g_{N}\). For \(k<N\), suppose \(J_{k+1}=\mathcal{K}[j_{k+1}]\). Then, the backward recursion gives \[J_{k}(\mu_{k},\rho)=\inf_{(\mathrm{proj}_{X_{k}}^{X_{k}\times U_{k}})_{\#}\lambda_{k}=\mu_{k}}\mathcal{K}[g_{k}](\lambda_{k},\rho_{k})+J_{k+1}(f_{k\#}\lambda_{k},\rho_{k+1},\ldots,\rho_{N})\] \[=\inf_{(\mathrm{proj}_{X_{k}}^{X_{k}\times U_{k}})_{\#}\lambda_{k}=\mu_{k}}\inf_{\boldsymbol{\gamma_{1}}\in\Gamma(\lambda_{k},\rho_{k})}\int_{X_{k}\times U_{k}\times R_{k}}g_{k}\,\mathrm{d}\boldsymbol{\gamma_{1}}+\inf_{\boldsymbol{\gamma_{2}}\in\Gamma(f_{k\#}\lambda_{k},\rho_{k+1},\ldots,\rho_{N})}\int_{X_{k+1}\times R_{k+1}\times\ldots\times R_{N}}j_{k+1}\,\mathrm{d}\boldsymbol{\gamma_{2}}\] \[\stackrel{\blacktriangle}{=}\inf_{\substack{(\mathrm{proj}_{X_{k}}^{X_{k}\times U_{k}})_{\#}\lambda_{k}=\mu_{k}\\ \boldsymbol{\gamma}^{\prime}\in\Gamma(\lambda_{k},\rho)}}\int_{X_{k}\times U_{k}\times R}c_{k}\,\mathrm{d}\boldsymbol{\gamma}^{\prime}=\inf_{(\operatorname{proj}_{X_{k}\times R}^{X_{k}\times U_{k}\times R})_{\#}\boldsymbol{\gamma}^{\prime}\in\Gamma(\mu_{k},\rho)}\int_{X_{k}\times U_{k}\times R}c_{k}\,\mathrm{d}\boldsymbol{\gamma}^{\prime}\] \[\stackrel{\diamondsuit}{=}\inf_{\substack{\boldsymbol{\gamma}^{\prime}\in\Gamma(\mu_{k},\rho)\\ \{\xi^{x_{k}r}\}\in\Lambda(U_{k})}}\int_{X_{k}\times R}\int_{U_{k}}c_{k}(x_{k},u,r)\,\mathrm{d}\xi^{x_{k}r}(u)\,\mathrm{d}\boldsymbol{\gamma}^{\prime}(x_{k},r)\] \[\stackrel{\clubsuit}{=}\inf_{\boldsymbol{\gamma}^{\prime}\in\Gamma(\mu_{k},\rho)}\int_{X_{k}\times R}\inf_{\xi\in\mathcal{P}(U_{k})}\int_{U_{k}}c_{k}(x_{k},u,r)\,\mathrm{d}\xi(u)\,\mathrm{d}\boldsymbol{\gamma}^{\prime}(x_{k},r),\] where first, in \(\blacktriangle\), we used the definition of
\(c_{k}\) (see (19)), Lemma 6.1, and Lemma 6.2. Second, in \(\diamondsuit\), we used Lemma 6.4. Third, \(\clubsuit\) requires proving separately "\(\geq\)" and "\(\leq\)". Let \(\psi(x_{k},\xi,r)\coloneqq\int_{U_{k}}c_{k}(x_{k},u,r)\,\mathrm{d}\xi(u)\). Then, every \(\{\xi^{x_{k}r}\}\in\Lambda(U_{k})\) is in particular a collection of probability measures in \(\mathcal{P}(U_{k})\), and \(\psi(x_{k},\xi,r)\geq\inf_{\xi\in\mathcal{P}(U_{k})}\psi(x_{k},\xi,r)\) reveals "\(\geq\)". To prove "\(\leq\)", let \(\Omega\coloneqq\operatorname{supp}(\mu_{k})\times\operatorname{supp}(\rho)\subseteq X_{k}\times R\) and \(\boldsymbol{\gamma}^{\prime}\in\Gamma(\mu_{k},\rho)\). By definition, we can restrict the integration domain to the support of \(\boldsymbol{\gamma}^{\prime}\), for which it holds \(\operatorname{supp}(\boldsymbol{\gamma}^{\prime})\subseteq\operatorname{supp}(\mu_{k})\times\operatorname{supp}(\rho)\). We thus consider \(\Omega\) in place of \(X_{k}\times R\) as the integration domain. For all \(\varepsilon>0\), consider the collection \(\{u_{k}^{\varepsilon/2}(x_{k},r)\}_{(x_{k},r)\in\Omega}\subseteq U_{k}\). Without loss of generality, we assume that \(u_{k}^{\varepsilon/2}\) is Borel; see the discussion in Section 4. As a consequence, also the measure-valued map \(h:\Omega\to\mathcal{P}(U_{k}),h(x_{k},r)\coloneqq\delta_{u_{k}^{\varepsilon/2}(x_{k},r)}\) is Borel. To show this, we can equivalently show that, for every \(B\subseteq U_{k}\) Borel, the pre-image of the intervals \([a,+\infty)\), for \(a\in\mathbb{R}\), of \((x_{k},r)\mapsto h(x_{k},r)(B)\) is Borel. Let \(h_{B}:U_{k}\to\mathbb{R}_{\geq 0},u\mapsto h_{B}(u)\coloneqq\delta_{u}(B)\). Then, \(h_{B}^{-1}([a,+\infty))=\emptyset\) if \(a>1\), \(h_{B}^{-1}([a,+\infty))=B\) if \(a\in(0,1]\), and \(h_{B}^{-1}([a,+\infty))=U_{k}\) otherwise. In all cases, \(h_{B}^{-1}([a,+\infty))\) is a Borel set and, thus, the map \(h_{B}\) is Borel. Since the composition of Borel maps is a Borel map, \(h_{B}\circ u_{k}^{\varepsilon/2}\) is Borel. Therefore, the measure-valued map \(h\) is Borel. Then, \(\{\delta_{u_{k}^{\varepsilon/2}(x_{k},r)}\}_{(x_{k},r)\in\Omega}\in\Lambda(U_{k})\), with \(\Lambda(U_{k})\) as in Lemma 6.4. Thus, \[\int_{\Omega}\inf_{\xi\in\mathcal{P}(U_{k})}\int_{U_{k}}c_{k}(x_{k},u,r)\,\mathrm{d}\xi(u)\,\mathrm{d}\boldsymbol{\gamma}^{\prime}(x_{k},r)\geq\int_{\Omega}\inf_{u\in U_{k}}c_{k}(x_{k},u,r)\,\mathrm{d}\boldsymbol{\gamma}^{\prime}(x_{k},r)\geq\int_{\Omega}c_{k}(x_{k},u_{k}^{\varepsilon/2}(x_{k},r),r)\,\mathrm{d}\boldsymbol{\gamma}^{\prime}(x_{k},r)-\frac{\varepsilon}{2}\geq\int_{\Omega}\int_{U_{k}}c_{k}(x_{k},u,r)\,\mathrm{d}\delta_{u_{k}^{\varepsilon/2}(x_{k},r)}(u)\,\mathrm{d}\boldsymbol{\gamma}^{\prime}(x_{k},r)-\frac{\varepsilon}{2}\geq\inf_{\{\xi^{x_{k}r}\}\in\Lambda(U_{k})}\int_{\Omega}\int_{U_{k}}c_{k}(x_{k},u,r)\,\mathrm{d}\xi^{x_{k}r}(u)\,\mathrm{d}\boldsymbol{\gamma}^{\prime}(x_{k},r)-\frac{\varepsilon}{2}.\] Take the infimum over \(\boldsymbol{\gamma}^{\prime}\) on both sides and let \(\varepsilon\to 0\) to prove "\(\leq\)". Next, it holds \(\inf_{\xi\in\mathcal{P}(U_{k})}\int_{U_{k}}c_{k}(x_{k},u,r)\,\mathrm{d}\xi(u)\geq\inf_{u\in U_{k}}c_{k}(x_{k},u,r)=j_{k}(x_{k},r).\) For "\(\leq\)", let \(\{u_{n}\}_{n\in\mathbb{N}}\subseteq U_{k}\) yield \(j_{k}(x_{k},r)=\lim_{n\to\infty}c_{k}(x_{k},u_{n},r)\), and consider \(\{\delta_{u_{n}}\}_{n\in\mathbb{N}}\subseteq\mathcal{P}(U_{k})\).
For all \(n\in\mathbb{N}\) we have \[\inf_{\xi\in\mathcal{P}(U_{k})}\int_{U_{k}}c_{k}(x_{k},u,r)\,\mathrm{d}\xi(u)\leq\int_{U_{k}}c_{k}(x_{k},u,r)\,\mathrm{d}\delta_{u_{n}}(u)=c_{k}(x_{k},u_{n},r).\] The limit \(n\to\infty\) reveals "\(\leq\)" and, thus, the equality. Thus, for every \(x_{k}\in X_{k},r\in R\), we have \(\inf_{\xi\in\mathcal{P}(U_{k})}\int_{U_{k}}c_{k}(x_{k},u,r)\,\mathrm{d}\xi(u)=j_{k}(x_{k},r)\), and so \(J_{k}(\mu_{k},\rho)=\inf_{\boldsymbol{\gamma}^{\prime}\in\Gamma(\mu_{k},\rho)}\int_{X_{k}\times R}j_{k}\,\mathrm{d}\boldsymbol{\gamma}^{\prime}\). This proves (13). Finally, analogously to the traditional DPA [7, 32], the additivity of the cost structure yields \(J=J_{0}\). 2. Let \(\varepsilon\geq 0\) and define \(u_{k}^{\varepsilon/2}\) and \(\boldsymbol{\gamma}_{k}^{\varepsilon/2}\) as in the theorem statement. Consider the (possibly sub-optimal) plan \[\boldsymbol{\tilde{\gamma}}_{k}^{\varepsilon}\coloneqq\left(\operatorname{proj}_{X_{k}}^{X_{k}\times R},u_{k}^{\varepsilon/2},\operatorname{proj}_{R}^{X_{k}\times R}\right)_{\#}\boldsymbol{\gamma}_{k}^{\varepsilon/2}. \tag{20}\] By definition, \((\operatorname{proj}_{X_{k}\times U_{k}}^{X_{k}\times U_{k}\times R})_{\#}\boldsymbol{\tilde{\gamma}}_{k}^{\varepsilon}=\lambda_{k}^{\varepsilon}\) and \((\operatorname{proj}_{R}^{X_{k}\times U_{k}\times R})_{\#}\boldsymbol{\tilde{\gamma}}_{k}^{\varepsilon}=\rho\). Therefore, \(\boldsymbol{\tilde{\gamma}}_{k}^{\varepsilon}\) is a valid choice for the infimum, and it holds: \[\mathcal{K}[g_{k}](\lambda_{k}^{\varepsilon},\rho_{k})+J_{k+1}(f_{k\#}\lambda_{k}^{\varepsilon},\rho_{k+1},\ldots,\rho_{N})\stackrel{\heartsuit}{=}\inf_{\boldsymbol{\gamma}^{\prime}\in\Gamma(\lambda_{k}^{\varepsilon},\rho)}\int_{X_{k}\times U_{k}\times R}c_{k}(x_{k},u_{k},r)\,\mathrm{d}\boldsymbol{\gamma}^{\prime}(x_{k},u_{k},r)\leq\int_{X_{k}\times U_{k}\times R}c_{k}(x_{k},u_{k},r)\,\mathrm{d}\boldsymbol{\tilde{\gamma}}_{k}^{\varepsilon}(x_{k},u_{k},r)\stackrel{(20)}{=}\int_{X_{k}\times R}c_{k}(x_{k},u_{k}^{\varepsilon/2}(x_{k},r),r)\,\mathrm{d}\boldsymbol{\gamma}_{k}^{\varepsilon/2}(x_{k},r),\] where \(\heartsuit\) follows from the same manipulations as in step \(\blacktriangle\) of the proof of (i) (Lemma 6.1 and Lemma 6.2). Since \(u_{k}^{\varepsilon/2}\) is \(\frac{\varepsilon}{2}\)-optimal in (11) and \(\boldsymbol{\gamma}_{k}^{\varepsilon/2}\) is \(\frac{\varepsilon}{2}\)-optimal in (13), we conclude \[\mathcal{K}[g_{k}](\lambda_{k}^{\varepsilon},\rho_{k})+J_{k+1}(f_{k\#}\lambda_{k}^{\varepsilon},\rho_{k+1},\ldots,\rho_{N})\leq\int_{X_{k}\times R}j_{k}(x_{k},r)\,\mathrm{d}\boldsymbol{\gamma}_{k}^{\varepsilon/2}(x_{k},r)+\frac{\varepsilon}{2}\leq\mathcal{K}[j_{k}](\mu_{k},\rho)+\frac{\varepsilon}{2}+\frac{\varepsilon}{2}=J_{k}(\mu_{k},\rho)+\varepsilon,\] i.e., \(\lambda_{k}^{\varepsilon}\) is an \(\varepsilon\)-optimal state-input distribution; for \(\varepsilon=0\), it is optimal. 3. If \(\boldsymbol{\gamma}_{k}^{\varepsilon/2}\) is induced by a transport map \(\mathcal{T}_{k}^{\varepsilon/2}:X_{k}\to R\), i.e., \(\boldsymbol{\gamma}_{k}^{\varepsilon/2}=(\operatorname{id}_{X_{k}},\mathcal{T}_{k}^{\varepsilon/2})_{\#}\mu_{k}\), substituting in (17) directly yields \(\lambda_{k}^{\varepsilon}=(\operatorname{id}_{X_{k}},u_{k}^{\varepsilon/2}\circ(\operatorname{id}_{X_{k}},\mathcal{T}_{k}^{\varepsilon/2}))_{\#}\mu_{k}\). Proof of Corollary 4.3.: Since the stage costs \(g_{k}\) do not depend on the references, the ground-space cost-to-go depends on \((x_{k},r_{N})\) only and coincides with (15). The claims then follow from Theorem 4.2: the intermediate references do not enter the transportation cost, so the multi-marginal problem (13) reduces to the two-marginal problem (16), since every multi-marginal plan projects onto a plan in \(\Gamma(\mu_{k},\rho_{N})\) with the same cost and, conversely, every plan in \(\Gamma(\mu_{k},\rho_{N})\) extends to a multi-marginal plan with the prescribed intermediate marginals.
2303.04305
POEM: Proof of Entropy Minima
Nakamoto consensus has been incredibly influential in enabling robust blockchain systems, and one of its components is the so-called heaviest chain rule (HCR). Within this rule, the calculation of the weight of the chain tip is performed by adding the difficulty threshold value to the previous total difficulty. Current difficulty based weighting systems do not take the intrinsic block weight into account. This paper studies a new mechanism based on entropy differences, named proof of entropy minima (POEM), which incorporates the intrinsic block weight in a manner that significantly reduces the orphan rate of the blockchain while simultaneously accelerating finalization. Finally, POEM helps to understand blockchain as a static time-independent sequence of committed events.
Karl Kreder, Shreekara Shastry
2023-03-08T00:55:24Z
http://arxiv.org/abs/2303.04305v3
# POEM: Proof of Entropy Minima ###### Abstract Nakamoto consensus has been incredibly influential in enabling robust blockchain systems, and one of its components is the so-called heaviest chain rule (HCR). Within this rule, the calculation of the weight of the chain tip is performed by adding the difficulty threshold value to the previous total difficulty. Current difficulty-based weighting systems do not take the intrinsic block weight into account. This paper proposes a new mechanism based on entropy differences, named proof of entropy minima (POEM), which incorporates the intrinsic block weight in a manner that significantly reduces the orphan rate of the blockchain while simultaneously accelerating finalization. Finally, POEM helps to understand blockchain as a static time-independent sequence of committed events. ## 1 Introduction In their seminal work, Nakamoto [18] introduces a novel form of consensus which is often referred to as "Nakamoto Consensus". Among its core attributes, the choice of the canonical tip of a blockchain is made by selecting the "heaviest" head based on a particular implementation of the so-called heaviest chain rule (HCR). This arguably led to the first economic Byzantine fault tolerant [1] mechanism for coordinating an open group of node operators in a distributed system. However, Nakamoto consensus using this implementation of HCR suffers from certain disadvantages, one of which is the production of orphaned blocks resulting from propagation delays (and thus partial information) within the network. Orphaned blocks are valid blocks that share a common parent block but have two different commitments with two different but equally valid proofs. However, only one of the blocks can ultimately be accepted by the network as canonical. As the network participants do not know which block(s) were produced first, they assume an ordering based on the order in which each block was received. Since both blocks are assigned the threshold difficulty weight and share a common parent, there is no alternative preferred objective mechanism for picking the head. Therefore, the network cannot converge on the choice of canonical head until one or more additional blocks are found. This can lead to delays in consensus each time such an event occurs. This problem becomes especially pernicious within blockchains with consistently high orphan block rates, as orphans lead to the production of more orphans. Moreover, this results in discarded hashes generated by the associated Proof-of-Work (PoW) algorithm used within Nakamoto Consensus. An attempt at addressing wasted work in blockchain production with many orphans was proposed by [11] with the greedy heaviest observed sub-tree (GHOST). GHOST includes a discounted addition of weight for orphaned blocks, referred to as uncles within GHOST, that are referenced by the head. Inclusion of uncles in the weight calculation helps to better measure the total work referenced by the various head choices in a noisy environment. PoW-based blockchains achieve economic finalization at the point when the cost of an attack exceeds the benefit to the attacker. The practical exploitation of finalization latency is manifested by attackers mining private chains which they eventually reveal to revert one or more transactions. This is known as a 51% attack, but [11] shows that this can be reduced to 33% with coordination of mining pools.
In addition to 51% attacks, nefarious miners can cause the reversion of large amounts of work through block-withholding attacks [1]. As throughput is one of the primary limitations faced by current blockchain systems, many proposals have been made on scaling blockchains via sharding. A subset of these proposals is applicable to work-based consensus mechanisms, including BlockReduce [10], treechains, fruitchains [12], and bitcoinNG [23]. Each of these solutions proposes the use of sub-chains or shares as a mechanism to asynchronously produce datasets prior to inclusion in the highest-level block data structure. However, subchains or shares make these proposals more vulnerable to withholding attacks due to block weight stratification for the various block types in the hierarchy. The Hierarchical Longest Chain Rule (HLCR) [10], an enhancement on HCR described above, proposes that a tip is chosen hierarchically such that subchain choices are restricted by the longest tip in a dominant chain. Although promising, HLCR is potentially vulnerable to withholding attacks which would cause all of the subchain blocks to be discarded up to the average block time of the most dominant chain. Fruitchains proposes mitigation of withholding attacks by weighting shares based on age at inclusion. This can potentially partially mitigate withholding attacks. Here, we present a potentially more powerful manner of addressing such attacks using an entropic mechanism for consensus, and minimization of difference entropy as a mechanism for head choice. ## 2 Entropic Consensus Any blockchain's evolution is that of a random process. In the classical proof-of-work (PoW) setting, an associated PoW function (often a hash function) has \(2^{l}\) states and is (approximately, if well-designed) uniformly distributed across these states. Thus, the maximum entropy associated with hashed outputs is \(S_{max}=l\) bits, corresponding to no "work" performed on the system [13]. The PoW algorithm restricts acceptable hashed outputs to all values below a threshold difficulty \(2^{d}\), implying that the first \(l-d\) bits of the output hash must be zero. This sets the maximum output target entropy to be \(d\) bits. In practice, the mining process achieves a hashed output that is less than or equal to the difficulty threshold, i.e., it may possess _greater_ than or equal to \(l-d\) leading zeros. We call this the intrinsic difficulty \(d_{int}\), resulting in a sequence with \(c\leq d\) non-zero elements. Thus, in practice, the realized entropy of the output is \(c\) bits and the reduction in the entropy is \(l-c\) bits, which corresponds to the number of leading zeros, which we will call \(n\). Additionally, \(n\) can include fractional zero bits which are after the first non-zero bit. More precisely, this makes \(n=l-\log_{2}(d_{int})\). The intrinsic difficulty can be used to calculate the _difference entropy_: \[\boxed{\Delta S=\frac{1}{2^{n}}} \tag{1}\] where \(\Delta S\) represents the number of possible states removed from the macrostate in the achievement of a single block. This can be extended to compute the change in entropy in an arbitrary sequence of \(k\) blocks. \[\Delta S_{k}=\Delta S_{k-1}\times\frac{1}{2^{n_{k}}}\] \[\log_{2}\Delta S_{k}=\log_{2}\Delta S_{k-1}+\log_{2}\left(\frac{1}{2^{n_{k}}}\right)=\log_{2}\Delta S_{k-1}-n_{k}\] This allows the computation of \(\Delta S_{k}\) to simply be carried out by the summation of all prior zero bits found in a chain.
\[\boxed{\log_{2}\Delta S_{k}=-\sum_{i=1}^{k}n_{i}} \tag{2}\] ### Impact on Finalization To understand the impact of using \(\Delta S_{k}\) to determine a canonical blockchain tip, it is most illustrative to consider a system with at least one subchain. For example, consider a dominant blockchain which merge-mines a subchain.1 Footnote 1: The definitions and a deeper understanding of dominant chain, subchain and merged mining can be found in [Geo+22] Figure 1: Dominant and Subordinate Chains. Let the subordinate threshold difficulty (in bits) be \(m_{t}\) and the dominant block difficulty be \(m_{t}+m_{d}\). In a traditional calculation of difficulty based on target difficulty, the dominant blocks that meet (or exceed) the target will be assigned the weight: \[\propto 2^{m_{t}+m_{d}}\] Each of the subordinate blocks would have a difficulty \(\propto 2^{m_{t}}\). Therefore, the subordinate chain would be chosen when: \[k\times 2^{m_{t}}>2^{m_{t}+m_{d}}\] \[\boxed{k>2^{m_{d}}} \tag{3}\] Alternatively, using the \(\Delta S_{k}\) formulation, the dominant block's entropy is: \[\Delta S_{dom}=\frac{1}{2^{(m_{t}+m_{d})}}\] The combined entropy of \(k\) subordinate blocks is: \[\Delta S_{k}=\frac{1}{2^{k\,m_{t}}}\] Therefore, the subordinate chain would be chosen when: \[k\times m_{t}>(m_{t}+m_{d})\] \[\boxed{k>\frac{(m_{t}+m_{d})}{m_{t}}} \tag{4}\] The entropy measurement allows the subordinate to overtake the dominant with a linear number of blocks whereas the difficulty measurement would require the subordinate to have an exponential number of blocks. Practically speaking, if we take \(m_{t}=20\) and \(m_{d}=5\), the difficulty measurement would require \(2^{m_{d}}=32\) subordinate blocks whereas the entropy measurement would only require 2 blocks. Using the difficulty measurement, a dominant block could be withheld for 32 subordinate blocks or the average dominant block time. Alternatively, using an entropy measurement, a dominant block could only be withheld for approximately 2 subordinate blocks or 3% of the dominant block time. This represents a dramatic improvement in the tolerance of merge-mined or sub-share blockchain constructions to withholding attacks. Additionally, this practically allows the sub blocks to accumulate meaningful finalization guarantees prior to inclusion in the dominant chain. Additionally, equation (4) shows an interesting impact on the preferred choice of hash algorithm as well as field size. Although it may be intuitive in the context of POEM, equation (4) shows that a hash algorithm that is most efficient at reducing entropy while maintaining a collision-resistant one-way function with a uniform field distribution is most desirable. Specifically, given a fixed field size, a higher \(m_{t}\) threshold yields a lower \(k\) value and faster finalization guarantees. ### Actual Difficulty Impact on Orphaned Blockchain POEM removes the need to equate the intrinsic block difficulty to the threshold value. Using the intrinsic difficulty will effectively eliminate competing blocks, and therefore increase the chain's hash efficiency. Let \(p(d_{int})\) be the probability of getting a block with the value \(d_{int}\). The probability of getting a competing block with the same value of \(d_{int}\) is \[p_{o}(d_{int})=p(d_{int})\times\frac{1}{2^{l}} \tag{5}\] Therefore, if two blocks are originated in close temporal proximity, the intrinsic difficulty causes one of the blocks to be preferred over the other a fraction \(\frac{2^{l}-1}{2^{l}}\) of the time.
Given a sufficiently large choice of \(l\), the intrinsic difficulty allows the instantaneous resolution of effectively all latency-driven forks. ### Finite Finalization Guarantee In the difficulty measurement paradigm, the actual difficulty cannot be used because it carries an exponential relative weight in the head calculation. When \(c-d\) additional bits are found past the threshold, equation (3) becomes: \[k>2^{m_{d}+(c-d)}\] This means k, although finite, is effectively unbounded and there is no longer any practical finalization guarantee. However, when the entropy calculation is used, the intrinsic difficulty can also be used, as both carry linear weight in a logarithmic field. When \(c-d\) additional bits are found past the threshold, equation (4) becomes: \[k>\frac{m_{t}+m_{d}+(c-d)}{m_{t}}\] Moreover, when coupled with the entropy calculation, the finalization duration remains finite and is bounded by the bits in the hash function field. For a 256-bit field this would guarantee \(1<k<256\). However, practical choices of \(m_{t}\) would keep \(k\) in the low single digits. ## 3 Discussion Independently, the concept of optimizing (difference) entropy as a means of progressing a blockchain is very coherent and almost obvious once presented. However, it is hard to develop an intuitive understanding of the proposal when bringing in certain precepts that are fundamentally incompatible with it. Specifically, a blockchain is primarily a sequencer and is completely independent of time. Therefore, any assumption that reincorporates time will create a conflicting intuition about the entropy-based concept. Specifically, any considerations of averages, distributions, hash-rate, or time will cause a misunderstanding of this proposal. Ultimately, understanding the nuance of why time-based considerations seemingly conflict with the entropy-based system is the key to understanding both. The entropy calculation provides a new intuition on the relative value of shares, dominant blocks, and interlinks in a blockchain. Previous intuition, using a work-based calculation, would cause the weight of a given block to be proportional to the expected number of hashes needed to find that single block. However, this is just a single sample of a large number of independent events. With one sample, the weight assigned to any one block effectively has infinite variance. This imprecision in measurement and over-assignment of single-block weight is what causes almost all consensus inefficiencies and attack vectors that exist in PoW blockchains. Within the entropy context, a dominant block which has 5 additional bits of entropy and is 32 times harder to find than a sub block with 20 bits would only be counted as having an entropy that is 25% greater. This makes sense when you consider the chain to be a sequence of references to each other and the tip being the choice that statistically has the greatest amount of measured effort. Only when the dominant block is included in the chain, and also has the expected number of sub shares which it references, does the dominant block get the full bit consideration. Effectively, the entropy method takes in all information and creates many sample calculations on the effort applied to any given tip. Thus, POEM makes a choice that most closely reflects reality with the least amount of variance.
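The comparison above reduces to a few lines of arithmetic. The following minimal sketch (our illustration, not code from the paper) reproduces the thresholds of equations (3) and (4) and the 25% relative weight for the running example \(m_{t}=20\), \(m_{d}=5\).

```python
# Illustrative arithmetic only: head-selection thresholds of equations (3)
# and (4), and the relative block weight discussed above.
m_t, m_d = 20, 5   # subordinate threshold bits, extra dominant bits

# Equation (3): with threshold-difficulty weighting, the subordinate chain
# wins only when k > 2**m_d.
k_difficulty = 2 ** m_d              # 32

# Equation (4): with entropy weighting, it wins when k > (m_t + m_d) / m_t;
# the smallest integer k with k > 1.25 is 2.
k_entropy = (m_t + m_d) // m_t + 1   # 2

# A dominant block carries m_d extra bits: 2**m_d times harder to find,
# yet only (m_t + m_d)/m_t - 1 = 25% more entropy reduction under POEM.
relative_weight = (m_t + m_d) / m_t - 1

print(k_difficulty, k_entropy, relative_weight)   # 32 2 0.25
```

For \(m_{t}=20\) and \(m_{d}=5\) this prints 32, 2, and 0.25, matching the figures quoted above.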
Therefore, it is likely POEM can solve a number of outstanding problems with current blockchains including selfish mining, withholding attacks, and Sybil resistance \(<51\%\) while also allowing sub-share based systems to operate robustly and efficiently. Another interesting consequence of using the entropy-based measurement is that, unlike difficulty-based systems with subshares, the tip of the chain is dictated by the most recent subshare rather than the most dominant block. This means that instead of the tip being coerced from the top down [Geo+22], the dominant chains are pulled along by the subshares and eventually come into agreement. Therefore, blocks don't really have high independent \(\Delta S\) on their own but rather become canonical as the amount of entropy reduction which references them continues to increase. This concept of eventually finalizing a block with the increase in depth of a blockchain now holds true even for blockchains with subshares and interlinks. It appears to the authors that all prior work-based blockchain proposals misrepresented and misunderstood the mechanism being used to choose and extend the tip of a chain. Although this work is closely related with PoW consensus, POEM is a subset within work mechanisms. Not all work functions are explicitly compatible with POEM, and certain functions and field sizes will potentially have preference over others. Additionally, the lack of precision in the measurement of work as a proxy for entropy has led to many engineering approximations to compensate for the deficiency. These include limiting reorgs to a certain depth, coordinating heads by time, and truncating bits at threshold values. With the new precision of using entropy measurements, we believe that all further "choices" in blockchain design may become rationally emergent. ## 4 Conclusion In this work, a novel consensus mechanism, proof of entropy minima (POEM), is proposed. It was shown that POEM is able to decrease the order of the time to finalization in a blockchain with sub-shares from exponential to linear. Additionally, it was shown that POEM can immediately resolve practically all contentious blocks at the blockchain tip. POEM gives closed-form equations that can be used to analyze differences between blockchain systems, selfish mining attacks, etc. It is posited that POEM will also be able to address deficiencies in PoW designs which allow for selfish mining and reduction of Sybil resistance below 51%; however, the specific analysis is left for future work.
2310.13741
On quantum melting of superfluid vortex crystals: from Lifshitz scalar to dual gravity
Despite a long history of studies of vortex crystals in rotating superfluids, their melting due to quantum fluctuations is poorly understood. Here we develop a fracton-elasticity duality to investigate a two-dimensional vortex lattice within the fast rotation regime, where the Lifshitz model of the collective Tkachenko mode serves as the leading-order low-energy effective theory. We incorporate topological defects and discuss several quantum melting scenarios triggered by their proliferation. Furthermore, we lay the groundwork for a dual non-linear emergent gravity description of the superfluid vortex crystals.
Dung Xuan Nguyen, Sergej Moroz
2023-10-20T18:00:38Z
http://arxiv.org/abs/2310.13741v2
**On quantum melting of superfluid vortex crystals: from Lifshitz scalar to dual gravity** ## Abstract Despite a long history of studies of vortex crystals in rotating superfluids, their melting due to quantum fluctuations is poorly understood. Here we develop a fracton-elasticity duality to investigate a two-dimensional vortex lattice within the fast rotation regime, where the Lifshitz model of the collective Tkachenko mode serves as the leading-order low-energy effective theory. We incorporate topological defects and discuss several quantum melting scenarios triggered by their proliferation. Furthermore, we lay the groundwork for a dual non-linear emergent gravity description of the superfluid vortex crystals. ###### Contents * 1 Introduction * 2 Lifshitz effective theory of vortex crystal * 2.1 Tkachenko modes * 2.2 Condensation of vacancies and interstitials * 3 Dual tensor gauge theory * 3.1 Vortex crystal * 3.2 Vortex fluid * 3.3 Ginzburg-Landau theory and the Higgs transition * 4 Towards a dual gravitational theory * 5 Conclusions and outlook * A Vertex operators in quantum Lifshitz theory * B Tkachenko dispersion * C From old to new dual tensor gauge theory ## 1 Introduction Quantum vortices are topological defects in quantum superfluids which reveal quantum mechanics in these phases on macroscopic scales. Quantum vortex matter is an intriguing and multi-disciplinary research field [1] which attracts both theorists and experimentalists. While being energetically costly excitations deep in the superfluid regime, condensation of vortices provides a natural framework for understanding of neighbouring non-superfluid phases and associated phase transitions [2, 3, 4]. In the superfluid regime, vortices emerge in abundance at low temperatures provided the whole system is rotated [5, 6, 7, 8]. As discovered first by Abrikosov [9] in a closely related context of type-II superconductors in an external magnetic field, in the thermodynamic limit a regular vortex crystal ground state can emerge. It spontaneously breaks (magnetic) translation and rotation symmetries. In the two-dimensional limit, the study of low-energy collective excitations, known as Tkachenko waves [10], has been a subject of extensive theoretical investigation, as evidenced by works such as [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]. Additionally, an experimental observation of the Tkachenko waves has been successfully conducted in cold atoms [23]. Notably, it was also suggested that the Tkachenko modes might explain the dynamics of pulsars [24]. Given that the two transverse Cartesian coordinates of a vortex constitute a canonical pair of variables [6, 25, 26, 27], it follows that vortices represent inherently fuzzy entities with an uncertainty area inversely proportional to the density of elementary bosons within the superfluid phase. Consequently, as the vortex density within the crystal approaches the magnitude of the boson density, quantum mechanical fluctuations in vortex positions become comparable to the distances between vortices. Rough estimates relying on the Lindemann criterion and small-scale exact diagonalization numerical simulations suggest that the vortex crystal experiences quantum melting at zero temperature when the filling fraction is roughly between \(1\) and \(10\) [6]. Here, the filling fraction, to be called \(\nu\) in the following, is defined as the ratio between the boson density, \(n_{b}\), and the vortex density, \(n_{v}\). 
The precise nature of this quantum melting phenomenon remains poorly understood, representing a longstanding challenge in the field. Fracton-elasticity duality [28, 29, 30, 31, 32, 33, 34, 35] and its predecessors [36] provide an excellent framework to study possible melting mechanisms because they naturally incorporate disclinations and dislocations, which are topological defects in solids [37]. One can also easily incorporate vacancy and interstitial defects [29, 32]. In this formalism, quantum melting can be realized by a series of phase transitions, where dynamical defect fields play the role of the Higgs fields. This approach found practical application in the study of vortex crystals, as pioneered in [38]. In addition to computations of static interactions among various types of defects, this investigation uncovered several continuous quantum Higgs transitions triggered by condensation of the defects. Notably, it was found that the quantum melting of the vortex crystal might be preceded by the condensation of vacancies or interstitials, leading to the emergence of an intermediate vortex supersolid phase, investigated originally in the classical finite-temperature problem [39, 40]. In this paper, we provide new insights into quantum melting of two-dimensional superfluid vortex crystals. Our starting point is the effective theory of the Tkachenko mode, which in the quadratic approximation reduces to a Lifshitz theory of a compact scalar field [19, 22, 40, 41]. This is a good coarse-grained description of the superfluid vortex crystal in a fast rotation limit, where the condensate occupies only the lowest Landau level. Within this field theory we discuss the fate of symmetry-allowed magnetic vertex operators that create vortex defects of the Lifshitz scalar that under special conditions correspond to vacancy and interstitial defects in the vortex crystal. Taking inspiration from the previous work [3, 42], we determine at which filling \(\nu\) such magnetic vertex operators are relevant in the renormalization group (RG) sense and thus destabilize the Lifshitz description of the vortex crystal. This sheds some new light on the vortex supersolid regime discussed in [38]. A recent surge of interest in fracton models inspired the authors of [43, 44] to develop fractonic gauge duals of various realizations of the compact Lifshitz theory in two spatial dimensions. Here, we develop a simple and elegant quadratic traceless symmetric tensor gauge theory that is dual to the Lifshitz scalar description of the superfluid vortex crystal. Within this framework we investigate the crystalline and fluid phases and study corresponding topological defects and excitation modes. Using [45], we also consider and speculate about an exotic direct quantum phase transition between the vortex solid and fluid. While capturing the quadratic dispersion of the low-lying Tkachenko mode, the quadratic Lifshitz theory has an important shortcoming in describing the superfluid vortex crystal as it does not realize a non-commutative algebra of magnetic translation symmetries. Recently, a non-linear non-commutative field theory, which reduces to the Lifshitz model in the quadratic approximation, was proposed that incorporates all physical symmetries [22]. This theory was used to determine the decay rate of the Tkachenko quanta at low energies. Here, starting from the linearized fractonic duality of the Lifshitz theory of the vortex crystal, we take the first steps towards a non-linear dual of the non-linear theory [22]. 
We argue that this dual must be a dynamical theory of bimetric gravity and identify some of its gauge-invariant building blocks. ## 2 Lifshitz effective theory of vortex crystal ### Tkachenko modes Low-energy excitations of a two-dimensional vortex crystal in a rotating superfluid emerge from intertwined superfluid and elastic coarse-grained fluctuations. Employing the boson-vortex duality [46, 47], the leading-order quadratic effective theory [20] in the lowest Landau level approximation1 is given by the following Lagrangian [21] Footnote 1: To go beyond the lowest Landau level approximation, one should add a kinetic superfluid contribution that in the dual description is represented by a subleading electric term \(\sim m\,e^{2}/(2n_{0})\), where \(m\) denotes the mass of the elementary boson particles. This addition gives rise to the celebrated gapped Kohn mode, but does not modify the quadratic Tkachenko dispersion (2.3) at low momenta [20]. \[\mathcal{L}^{(2)}=-\frac{Bn_{0}}{2}\epsilon_{ij}u^{i}\dot{u}^{j}+Be_{i}u^{i}- \frac{\lambda}{2}b^{2}-\mathcal{E}_{\text{el}}\,\left(u_{ij}\right). \tag{2.1}\] Here the building blocks are the coarse-grained crystal displacement field \(u^{i}\) and a dual \(u(1)\) gauge field \(a_{\mu}\). In the spirit of the boson-vortex duality, the superfluid density fluctuations \(\delta n=n-n_{0}\) and superfluid current \(j^{i}\) are fixed by the dual magnetic field \(b=\epsilon^{ij}\partial_{i}a_{j}\) and the electric field \(e_{i}=\partial_{t}a_{i}-\partial_{i}a_{0}\), respectively. The first term in the Lagrangian (2.1) encodes the Berry phase of vortices moving in the superfluid. Here \(B\) denotes an effective magnetic field experienced by elementary bosons due to external rotation and \(n_{0}\) is the average superfluid density. The second term in Eq. (2.1) represents an effective dipole energy acquired by vortices away from their elastic equilibrium. The superfluid internal energy is a function of the superfluid density and in Eq. (2.1) it was expanded around the ground state value \(n_{0}\) to quadratic order in the density fluctuations \(b=\delta n\). Finally, for a two-dimensional triangular vortex crystal, the elastic energy density \(\mathcal{E}_{\text{el}}\,\left(u_{ij}\right)=2C_{1}u_{kk}^{2}+2C_{2}\tilde{u}_ {ij}^{2}\) with \(\tilde{u}_{ij}=u_{ij}-\left(u_{kk}\delta_{ij}\right)/2\) being the traceless part of the symmetric strain tensor \(u_{ij}=\left(\partial_{i}u_{j}+\partial_{j}u_{i}\right)/2\). The coefficients \(C_{1}\) and \(C_{2}\) are the compression and shear elastic moduli, respectively. The \(u(1)\) Gauss law \(\partial_{i}u^{i}=0\) implies immediately that the two components of the displacement field are not independent. To hardwire the transverse nature of displacements, we can introduce a dimensionless scalar field \(\phi\) such that \(u^{i}=\tilde{\partial}^{i}\phi/B\), where we introduced a skew derivative \(\tilde{\partial}^{i}=\epsilon^{ij}\partial_{j}\). In addition to parametrizing allowed transverse displacements, the field \(\phi\) also represents a coarse-grained \(2\pi\)-periodic superfluid phase [22]. 
Now integrating out the dual \(u(1)\) gauge field, one arrives at the quantum Lifshitz model representation of the vortex crystal [40, 41, 19, 22] \[\begin{split}\mathcal{L}_{\phi}&=\frac{1}{2\lambda} \dot{\phi}^{2}-2C_{2}\tilde{u}_{ij}^{2}\\ &=\frac{1}{2\lambda}\dot{\phi}^{2}-\frac{C_{2}}{2B^{2}}(\partial_ {i}\tilde{\partial}_{j}\phi+\partial_{j}\tilde{\partial}_{i}\phi)^{2}\end{split} \tag{2.2}\] which encodes the low-energy transverse Tkachenko excitations with a quadratic dispersion relation \[\omega^{2}=\frac{2C_{2}\lambda}{B^{2}}q^{4}. \tag{2.3}\] This agrees with the known low-momentum limit of the collective Tkachenko excitation of the vortex crystal in the lowest Landau level approximation [48, 16, 49], where the shear elastic modulus \(C_{2}\) is known to be [48, 16] \[C_{2}=0.119\lambda n_{0}^{2}. \tag{2.4}\] Although the low-momentum limit of the Tkachenko dispersion is encoded properly in the quadratic Lifshitz model, this theory does not capture a non-commutative algebra of magnetic translation symmetries. The theory (2.2) is only a quadratic truncation of a non-linear non-commutative Goldstone theory [22] that respects all physical symmetries of the problem. ### Condensation of vacancies and interstitials Here, within the quantum Lifshitz theory, we investigate proliferation of vacancies and interstitials in a two-dimensional superfluid vortex crystal at vanishing temperature.2 Microscopically, we have in mind the supersolid scenario by Andreev and Lifshitz [50]: In the crystal, isolated vacancies and interstitials cost finite energy to create, but due to quantum tunneling, the bottom of their energy band touches zero and they become gapless. Provided this happens (which should be verified in a microscopic calculation), here we want to clarify if such condensation of vacancies/interstitials is an RG-relevant perturbation that destabilizes the vortex crystal phase captured by the quantum Lifshitz model (2.2). Footnote 2: At finite temperature this problem was discussed in detail for an Abrikosov crystal in superconductors in [39]. Up to surface terms, the effective theory (2.2) of the compact scalar \(\phi\in(0,2\pi)\) is equivalent to the \(z=2\) Lifshitz theory \[\mathcal{L}=\frac{1}{2\lambda}\dot{\phi}^{2}-\frac{C_{2}}{B^{2}}(\Delta\phi)^ {2}. \tag{2.5}\] We now rescale the Tkachenko field \(\varphi=\sqrt{\lambda}\phi\), so the Lagrangian takes the canonical form \[\mathcal{L}=\frac{1}{2}\dot{\varphi}^{2}-\frac{\eta^{2}}{2}(\Delta\varphi)^{2} \tag{2.6}\] with \(\eta^{2}=2C_{2}\lambda/B^{2}\). Using now Eq. (2.4), one finds \(\eta\approx 0.488\lambda n_{0}/B\). Notice that the field rescaling implies that \(\varphi\in(0,2\pi/\sqrt{\lambda})\). We are interested in operators that create vacancies and interstitials in the vortex crystals. In the Lifshitz effective theory they correspond to magnetic vertex operators [42] that create vortex defects of the field \(\varphi\). To create such a vortex centered around a position \(\mathbf{x}\), one should act on the vortex crystal vacuum with \[\tilde{O}_{\tilde{m}}(\mathbf{x})=\exp(i\int d^{2}\mathbf{z}\alpha(\mathbf{z} )\Pi(\mathbf{z})), \tag{2.7}\] where \(\alpha(\mathbf{z})=\tilde{m}\arg(\mathbf{z}-\mathbf{x})\) and \(\Pi(\mathbf{z})\) denotes a canonical momentum density that is conjugate to the Lifshitz field \(\varphi\). To account for the rescaled radius of the redefined field \(\varphi\), here \(\tilde{m}=m/\sqrt{\lambda}\) with \(m\) an integer. Elementary vacancies and interstitials correspond to \(m=\pm 1\). 
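The numerical coefficients quoted above, together with the critical filling derived in the next paragraph, follow from a few lines of arithmetic. A minimal Python check, assuming only Eq. (2.4), the relation \(\eta^{2}=2C_{2}\lambda/B^{2}\) and the vertex-operator scaling dimension quoted in Appendix A:

```python
# Arithmetic behind the coefficients quoted above. Only Eq. (2.4),
# the relation eta**2 = 2*C2*lam/B**2 and the vertex-operator dimension
# Delta = 2*pi*eta*m_tilde**2 (Appendix A) are used; the symbols
# lam, n0, B cancel out of the final numbers.
import math

c2 = 0.119                           # C2 = 0.119 * lam * n0**2, Eq. (2.4)
eta_coeff = math.sqrt(2 * c2)        # eta = eta_coeff * lam * n0 / B
print(round(eta_coeff, 3))           # 0.488, as stated below Eq. (2.6)

# Elementary vacancy/interstitial: m = +-1 and m_tilde = m/sqrt(lam), so
# Delta_v = 2*pi*eta/lam = eta_coeff * n0/(B/(2*pi)) = eta_coeff * nu.
# The operator turns marginal at Delta_v = 2 + z = 4 (z = 2), giving the
# critical filling used in the next paragraph:
nu_c = 4 / eta_coeff
print(round(nu_c, 1))                # 8.2
```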
The magnetic vertex operators can be added to the Lagrangian of the vortex lattice since they do not break any global symmetry. Notably, in the quantum Lifshitz theory the static correlation functions of the vertex operators are known exactly [3, 42] and are fixed by the parameter \(\eta\), see Appendix A. This allows one to extract scaling dimensions of the vertex operators. For an elementary vortex (\(m=\pm 1\)) of \(\phi\) that corresponds to an elementary vacancy/interstitial we find \[\Delta_{v}=0.488\frac{n_{0}}{B/(2\pi)}=0.488\nu, \tag{2.8}\] where the filling fraction \(\nu=n_{b}/n_{v}\) with the vortex density \(n_{v}=B/(2\pi)\). We observe that at large filling fraction (\(\nu\to\infty\)), where the system is deep in the Gross-Pitaevskii vortex lattice regime, \(\Delta_{v}\gg 1\) and vacancies/interstitials are irrelevant. However, as \(\nu\) decreases (and the shear modulus \(C_{2}\) softens), the vacancy turns marginal, \(\Delta_{v}=2+z=4\), at the critical filling \(\nu_{c}\approx 4/0.488\approx 8.2\). Now in a given microscopic model, if a vortex vacancy/interstitial becomes gapless at a filling \(\nu>\nu_{c}\), the perturbation is RG-irrelevant and one expects that the Lifshitz fixed point is stable. On the other hand, if the defect becomes gapless at \(\nu<\nu_{c}\), it will destabilize the Lifshitz fixed point because it is an RG-relevant perturbation. The detailed investigation of such instability is left to a future work, but we anticipate that it might shed new light on the mysterious vortex supersolid regime [38, 39, 40, 32]. ## 3 Dual tensor gauge theory ### Vortex crystal Given that the low-momentum strain of the Tkachenko excitations is symmetric and traceless, we will dualize the Lifshitz theory (2.2) to a symmetric traceless gauge theory coupled to a scalar charge [51]. To this end, we first introduce the Hubbard-Stratonovich fields \(b\) and \(e^{ij}\) \[\mathcal{L}=\frac{\kappa}{8}e_{ij}e^{ij}-\frac{\lambda}{2}b^{2}-\frac{1}{2B}e ^{ij}(\partial_{i}\tilde{\partial}_{j}\phi+\partial_{j}\tilde{\partial}_{i} \phi)+b\partial_{t}\phi \tag{3.1}\] with \(\kappa=C_{2}^{-1}\) and \(e^{ij}\) being a symmetric and traceless tensor. Solving the equations of motion, one finds \(b=\partial_{t}\phi/\lambda\) and \(e_{ij}=2(\partial_{i}\tilde{\partial}_{j}\phi+\partial_{j}\tilde{\partial}_{i} \phi)/(\kappa B)\). The field \(b\) represents fluctuations of the coarse-grained superfluid density \(n\) in the vortex crystal [22]. On the other hand, \(e^{ij}\), as the variation of the action with respect to the shear strain, is the traceless part of the stress tensor \(T^{ij}\). Now we split the Lifshitz field \(\phi=\phi^{(r)}+\phi^{(s)}\), where the regular part \(\phi^{(r)}\) is smooth, while the singular part \(\phi^{(s)}\) contains contributions from topological defects (disclinations and dislocations) of the vortex crystal. For now, we will assume that the phase field \(\phi\) has no vortex singularities, i.e. \(\epsilon^{ij}\partial_{i}\partial_{j}\phi=0\), but only higher derivative singularities that correspond to disclinations and dislocations. In other words, there are no vacancies and interstitials at low energies, which justifies why the compression part of the elastic energy is dropped in our departure point (2.2). Integrating out the regular part \(\phi^{(r)}\), we find the conservation law \[\partial_{t}b+\frac{1}{2B}(\partial_{i}\tilde{\partial}_{j}+\partial_{j}\tilde {\partial}_{i})e^{ij}=0. 
\tag{3.2}\] This equation has a simple physical interpretation as a consequence of the lowest Landau level limit of momentum and particle number conservation [52]. In this limit, due to the absence of inertia, the particle number current is fixed by equating the Lorentz force and the exerted stress, which gives \(j_{i}=-\epsilon_{ij}\partial_{k}T^{jk}/B\). As a result, Eq. (3.2) is just the particle number conservation equation when restricted to the lowest Landau level [52, 53] \[\partial_{t}n+\frac{1}{B}\tilde{\partial}_{i}\partial_{j}T^{ij}=0. \tag{3.3}\] By introducing a traceless symmetric gauge potential \(a_{ij}\) and representing the magnetic field \(b\) and the traceless symmetric electric field \(e_{ij}\) as \[b=-\frac{1}{2B}(\partial_{i}\tilde{\partial}_{j}+\partial_{j}\tilde{\partial }_{i})a_{ij}, \tag{3.4}\] \[e_{ij}=\partial_{t}a_{ij}-(\partial_{i}\partial_{j}-\frac{1}{2}\delta_{ij} \Delta)a_{0}, \tag{3.5}\] the conservation law (3.2) becomes the Bianchi identity of a symmetric traceless tensor gauge theory. Both \(b\) and \(e_{ij}\) are invariant under the following \(u(1)\) gauge transformation \[a_{0}\to a_{0}+\partial_{t}\beta,\qquad a_{ij}\to a_{ij}+(\partial_{i} \partial_{j}-\frac{1}{2}\delta_{ij}\Delta)\beta \tag{3.6}\] which preserves the traceless form of the gauge potential \(a_{ij}\). The gauge theory encodes only one physical degree of freedom because the two components of the traceless symmetric tensor \(a_{ij}\) are reduced to one due to the \(u(1)\) gauge redundancy. The excitation mode of the dual tensor gauge theory \[\mathcal{L}=\frac{\kappa}{8}e_{ij}e^{ij}-\frac{\lambda}{2}b^{2} \tag{3.7}\] has the quadratic dispersion (2.3), see Appendix B. As we demonstrate in Appendix C, the simple gauge theory (3.7) can be obtained from a more complicated dual theory with intertwined tensor (dual to elasticity) and vector (dual to superfluidity) gauge fields that was derived and analysed in Ref. [38]. The field equation for \(a_{0}\) is the Gauss law of the gauge theory that constrains the stress tensor as \(\partial_{i}\partial_{j}e_{ij}=0\). The topological defects of the vortex lattice encoded in the singular part \(\phi^{(s)}\) of the Tkachenko field couple naturally to the tensor gauge theory. Using integration by parts, we end up with the following result \[\mathcal{L}=\frac{\kappa}{8}e_{ij}e^{ij}-\frac{\lambda}{2}b^{2}-\rho a_{0}+j^ {ij}a_{ij} \tag{3.8}\] with the isolated disclination charge density [28, 37] \[\rho=\frac{1}{2B}(\tilde{\partial}_{j}\tilde{\partial}_{i}-\frac{1}{2} \delta_{ij}\Delta)[\partial_{i}\tilde{\partial}_{j}+\partial_{j}\tilde{ \partial}_{i}]\phi^{(s)}=\frac{1}{2B}\tilde{\partial}_{j}\tilde{\partial}_{i}[ \partial_{i}\tilde{\partial}_{j}+\partial_{j}\tilde{\partial}_{i}]\phi^{(s)}= \tilde{\partial}_{j}\tilde{\partial}_{i}u_{ij}^{(s)} \tag{3.9}\] and the symmetric tensor current \[j^{ij}=-\frac{1}{2B}\left(\partial_{t}[\tilde{\partial}_{i}\partial_{j}+ \tilde{\partial}_{j}\partial_{i}]+[\tilde{\partial}_{i}\partial_{j}+\tilde{ \partial}_{j}\partial_{i}]\partial_{t}\right)\phi^{(s)} \tag{3.10}\] that satisfy \[\partial_{t}\rho+(\partial_{j}\partial_{i}-\frac{1}{2}\delta_{ij}\Delta)j^{ij}=0. \tag{3.11}\] This equation implies conservation of charges, dipoles and the trace of the quadrupoles3 Footnote 3: These three conservation laws also follow from the Gauss law. \[Q=\int d^{2}x\rho,\qquad Q^{i}=\int d^{2}x\epsilon^{ij}x^{j}\rho,\qquad Q^{tr }=\int d^{2}x\mathbf{x}^{2}\rho. 
\tag{3.12}\] As a result, isolated gauge charges are immobile, gauge dipoles are conserved and can only move perpendicular to their dipole moment, while gauge quadrupoles are free to move. We thus identify charges with lattice disclinations and dipoles with lattice dislocations, which can glide along their Burgers vector, but cannot climb. Mathematically, they satisfy the glide constraint \[\delta_{ij}j^{ij}=0. \tag{3.13}\] It is now straightforward to compute a static interaction potential between disclinations. Integrating out the gauge field \(a_{0}\), one finds \[\mathcal{L}=-\frac{1}{2}\rho(-q)\frac{8C_{2}}{q^{4}}\,\rho(q). \tag{3.14}\] In real space it gives rise to a harmonic attractive potential. As a result, in the vortex crystal disclinations are very costly in energy. They usually do not appear in isolation, but are bound together into dislocations. To determine the interaction potential between dislocations, we consider the charge density \(\rho\) induced by dipoles, i.e. \(\rho=\epsilon_{ij}\partial_{i}\chi_{j}\), where we introduced the Burgers vector density \(\chi_{i}=\epsilon^{ab}\partial_{a}\partial_{b}u_{i}^{(s)}\) [37]. In momentum space, we can rewrite Eq. (3.14) as \[\mathcal{L}=-\frac{1}{2}\chi_{i}^{T}(-q)\frac{8C_{2}}{q^{2}}\chi_{i}^{T}(q), \tag{3.15}\] where we introduced the transverse projection \(\chi_{i}^{T}(q)=\left(\delta^{ij}-q^{i}q^{j}/q^{2}\right)\chi_{j}(q)\). This agrees with the lowest Landau level limit of the previous result [38, 54]. We observe that dislocations interact via an anisotropic long-range logarithmic potential. In the presence of vacancies and interstitials, the dual gauge theory must be modified because the stress tensor is not traceless anymore. Its trace part couples to the dislocation current (via the term \(j^{ij}a_{ij}\)), the vacancy/interstitial density \(j_{v}^{t}\) (via a term \(\sim e_{ij}\delta^{ij}j_{v}^{t}\)) and the current \(j_{v}^{i}\) (via a term \(\sim\partial_{k}a_{ij}\delta^{ij}j_{v}^{k}\)). As a result, the glide constraint is modified to [38] \[\delta_{ij}j^{ij}+B\partial_{\mu}j_{v}^{\mu}=0. \tag{3.16}\] Now dislocations can climb at the expense of creating or destroying vortex vacancies, resulting in the modification of the conserved charge \(Q^{tr}\) in Eq. (3.12) to \[\tilde{Q}^{tr}=\int d^{2}x\,\,\left(\mathbf{x}^{2}\rho-\frac{2}{B}j_{v}^{t} \right), \tag{3.17}\] see Appendix C. In the vortex crystal, vacancies interact via a short-range potential whose nature depends on the interplay of the compression and shear elastic moduli [38]. Note that in order to fix the interaction constant, one needs to go beyond our theory (2.2), which, for example, misses elastic compression contributions to the interaction potential between vacancies. ### Vortex fluid In this section we propose a simple field theory of a fully gapped vortex fluid phase where all global symmetries of the ground state are restored. To this end, consider a Lifshitz theory of a compact scalar \(\chi\) minimally coupled to the \(u(1)\) traceless symmetric tensor gauge theory derived in the previous section. Physically, we can think of the scalar \(\chi\) as representing the phase of a (complex) disclination field that serves as the Higgs field in the vortex fluid phase. Its Lagrangian is \[\mathcal{L}_{\chi}=\frac{\tau}{2}\left(\underbrace{\partial_{t}\chi-a_{0}}_{ \mathcal{D}_{t}\chi}\right)^{2}-\frac{\sigma}{2}\left(\underbrace{[\partial_{i}\partial_{j}-\frac{ 1}{2}\delta_{ij}\Delta]\chi-a_{ij}}_{\mathcal{D}_{ij}\chi}\right)^{2}. 
\tag{3.18}\] Under \(u(1)\) gauge transformations, \(\chi\rightarrow\chi+\beta\), so that the covariant derivatives \(\mathcal{D}_{t}\chi\) and \(\mathcal{D}_{ij}\chi\) are gauge invariant. The corresponding field equation for \(\chi\) is exactly the conservation law (3.11) \[\partial_{t}\underbrace{(\tau\mathcal{D}_{t}\chi)}_{\rho}+(\partial_{i} \partial_{j}-\frac{1}{2}\delta_{ij}\Delta)\underbrace{(\sigma\mathcal{D}_{ij} \chi)}_{j^{ij}}=0, \tag{3.19}\] where \(\rho\) and \(j^{ij}\) are the disclination density and current. As a result, the charges \(Q\), \(Q^{i}\) and \(Q^{tr}\) introduced in Eq. (3.12) are all automatically conserved. In the unitary gauge, \(\mathcal{D}_{t}\chi\rightarrow-a_{0}\) and \(\mathcal{D}_{ij}\chi\rightarrow-a_{ij}\), and in that particular gauge the conservation law is \[\tau\partial_{t}a_{0}+\sigma\partial_{i}\partial_{j}a_{ij}=0, \tag{3.20}\] where we used that \(a_{ij}\) is traceless. Now we compute the dispersion relations of the excitation modes of the gauge theory coupled to the Lifshitz matter. We start from the complete Lagrangian \[\mathcal{L}=\frac{\kappa}{8}e_{ij}e^{ij}-\frac{\lambda}{2}b^{2}+\frac{\tau}{2 }(\mathcal{D}_{t}\chi)^{2}-\frac{\sigma}{2}(\mathcal{D}_{ij}\chi)^{2}. \tag{3.21}\] The corresponding Gauss law is \[\frac{\kappa}{4}\partial_{i}\partial_{j}e_{ij}+\rho=0, \tag{3.22}\] while the Ampere law \(\frac{\delta S}{\delta a_{ij}}=0\) is \[\frac{\kappa}{4}\partial_{t}e_{ij}-\frac{\lambda}{2B}(\partial_{i}\tilde{ \partial}_{j}+\partial_{j}\tilde{\partial}_{i})b=j^{ij}. \tag{3.23}\] To solve it, we first rewrite this equation in terms of the gauge potentials \(a_{0}\) and \(a_{ij}\). Working in the unitary gauge and using Eq. (3.20) allows us to completely eliminate the scalar potential \(a_{0}\). So one ends up with the equation for \(a_{ij}\) \[\frac{\kappa}{4}\left(\partial_{t}^{2}a_{ij}+\frac{\sigma}{\tau}(\partial_{i} \partial_{j}-\frac{1}{2}\delta_{ij}\Delta)\partial_{k}\partial_{l}a_{kl}\right)+\frac{\lambda}{4B^{2}}( \partial_{i}\tilde{\partial}_{j}+\partial_{j}\tilde{\partial}_{i})(\partial_ {k}\tilde{\partial}_{l}+\partial_{l}\tilde{\partial}_{k})a_{kl}+\sigma a_{ij}=0. \tag{3.24}\] To simplify the calculation of the dispersion relation, we will use isotropy and consider a mode propagating in the \(x\)-direction. For the diagonal components \(a_{11}=-a_{22}=f\), Eq. (3.24) simplifies to \[\frac{\kappa}{4}[\partial_{t}^{2}+\frac{\sigma}{2\tau}\partial_{x}^{4}]f+ \sigma f=0 \tag{3.25}\] which leads to the gapped dispersion of the \(f\)-mode \[\omega^{2}=\frac{2\sigma}{\kappa\tau}q^{4}+\frac{4\sigma}{\kappa}. \tag{3.26}\] For the off-diagonal components \(a_{12}=a_{21}=g\), we find \[\frac{\kappa}{4}[\partial_{t}^{2}+\frac{\lambda}{2B^{2}}\partial_{x}^{4}]g+ \sigma g=0. \tag{3.27}\] We observe that the \(g\)-mode (which corresponds to the gapless Tkachenko mode in the absence of the Lifshitz matter, see Appendix B) acquires a gap due to the coupling to the Lifshitz matter sector \[\omega^{2}=\frac{2\lambda}{\kappa B^{2}}q^{4}+\frac{4\sigma}{\kappa}. \tag{3.28}\] We thus end up with two physical modes that have the same gap \(\Delta^{2}=4\sigma/\kappa\) at \(q=0\). In spirit, these results resemble physical excitations in a superconductor, where the longitudinal and transverse excitation modes have the same energy gap [55]. Similarly to superconductivity, the vortex fluid exhibits a (dual) Meissner effect. 
Specifically, in the static limit \(\omega=0\), from the dispersion (3.28) we find \(q=e^{i\pi/2}/\lambda_{L}\), where the dual London penetration length is \(\lambda_{L}=\sqrt[4]{2\sigma B^{2}/\lambda}\). As a result, the dual magnetic field \(b\sim\partial_{i}\tilde{\partial}_{j}a_{ij}\), which represents fluctuations of the superfluid density, decays exponentially into the bulk near the boundary of the system. In summary, the \(u(1)\) tensor gauge theory coupled to the Lifshitz matter represents a fully gapped vortex fluid phase, where global symmetries (magnetic translations and rotations) are respected by the ground state. Being produced by the dual Higgs mechanism, this phase has many properties similar to \(u(1)\) superconductors. ### Ginzburg-Landau theory and the Higgs transition One might wonder if the vortex crystal from Sec. 3.1 can undergo a direct quantum melting transition to the isotropic vortex fluid from Sec. 3.2. Here we write down a simple Ginzburg-Landau theory that achieves that. We consider a complex scalar \(\Psi\) that represents disclination annihilation and plays the role of the Higgs field. Under \(u(1)\) gauge transformations it is postulated to transform as \[\Psi\to e^{ir\beta}\Psi, \tag{3.29}\] where \(r\) is the \(u(1)\) gauge charge of the Higgs field \(\Psi\). We define the covariant derivative \[D_{ij}\Psi^{2}=\Psi\partial_{i}\partial_{j}\Psi-\partial_{i}\Psi\partial_{j} \Psi-\frac{1}{2}\delta_{ij}\left(\Psi\Delta\Psi-\partial_{k}\Psi\partial^{k} \Psi\right)-ira_{ij}\Psi\Psi, \tag{3.30}\] that transforms covariantly \[D_{ij}\Psi^{2}\to e^{i2r\beta}D_{ij}\Psi^{2} \tag{3.31}\] under the gauge transformations (3.6) and (3.29). This covariant derivative is the traceless symmetric version of the one considered in Ref. [45]. Moreover, we can define the temporal covariant derivative \[D_{t}\Psi=\partial_{t}\Psi-ira_{0}\Psi \tag{3.32}\] that also transforms covariantly \[D_{t}\Psi\to e^{ir\beta}D_{t}\Psi. \tag{3.33}\] From these building blocks, we can now write down the following Ginzburg-Landau Lagrangian \[\mathcal{L}_{\Psi}=\frac{i}{2}\Psi^{\dagger}D_{t}\Psi-\frac{m}{2}|D_{ij}\Psi^ {2}|^{2}-v_{2}\Psi^{\dagger}\Psi-\frac{v_{4}}{2}|\Psi^{\dagger}\Psi|^{2}. \tag{3.34}\] Notice that in contrast to the ordinary Ginzburg-Landau theory, the term with spatial derivatives is quartic in \(\Psi\) and thus represents interactions. The gauged dipole symmetry \(Q^{i}\) from Eq. (3.12) prohibits quadratic terms in \(\Psi\) with spatial derivatives, while the quadrupole conservation \(Q^{tr}\) imposes the traceless condition for the covariant derivative (3.30). We can now use the parameter \(v_{2}\) to tune between the two phases. When \(v_{2}<0\), the theory is in the Higgs phase. We write \(\Psi=\sqrt{\psi}e^{i\chi}\) with \(\psi=|v_{2}|/v_{4}+\gamma\), where \(\gamma\) is a radial massive fluctuation. After integrating out \(\gamma\) and keeping the leading order terms, we obtain \[\mathcal{L}_{\chi}=\frac{r^{2}}{2v_{4}}\left(\partial_{t}\chi-a_{0}\right)^{2} -\frac{mr^{2}v_{2}^{2}}{2v_{4}^{2}}\left([\partial_{i}\partial_{j}-\frac{1}{2} \delta_{ij}\Delta]\chi-a_{ij}\right)^{2}. 
\tag{3.35}\] After renaming \[r^{2}/v_{4}\rightarrow\tau,\quad\frac{mr^{2}v_{2}^{2}}{v_{4}^{2}}\to\sigma, \tag{3.36}\] we recover the effective theory of the vortex fluid, i.e., the Lifshitz scalar coupled to the dual tensor gauge theory (3.18).4 Footnote 4: One can consider a different Ginzburg-Landau Lagrangian \[\mathcal{L}^{\prime}_{\Psi}=\frac{1}{2}|D_{t}\Psi|^{2}-\frac{m}{2}|D_{ij}\Psi^ {2}|^{2}-v_{2}\Psi^{\dagger}\Psi-\frac{v_{4}}{2}|\Psi^{\dagger}\Psi|^{2}. \tag{3.37}\] After integrating out the massive fluctuation \(\gamma\), one arrives at a similar quadratic form of the Goldstone boson action in the Higgs phase but with different coefficients. On the other hand, for \(v_{2}>0\), the Higgs field \(\Psi\) is gapped and at low energies it decouples from the gauge theory (3.7). We are thus in the vortex crystal phase that supports the quadratically dispersing Tkachenko mode. In the mean-field approximation, the quantum transition at \(v_{2}=0\) between the vortex crystal and vortex fluid is direct and continuous. Of course, this simple picture might not survive quantum fluctuations near \(v_{2}=0\) that could lead to split transitions, as discussed for example in [38, 43]. A careful treatment of this problem is left to a future work. ## 4 Towards a dual gravitational theory The symmetric tensor gauge field \(a_{ij}\) is reminiscent of a metric in a (linearized) theory of gravity. Indeed, already Kleinert noticed the analogy between the tensor gauge formulation of elasticity and Einstein's theory of gravity, and suggested that gravity could emerge from the defects of a crystal with lattice spacing of order the Planck length [56, 57]. Later, the symmetric tensor description of a boson liquid phase was also proposed as a gravity theory in works by Xu and collaborators [58, 59]. Pretko also formulated the fractonic symmetric tensor gauge theory in terms of a gravity model with both positive and negative mass matter fields [60]. In this paper we follow closely [53]. To construct a dynamical gravity theory from the dual tensor gauge theory, we first introduce a symmetric and traceless dimensionless field \[\mathfrak{h}_{ij}=-l^{2}(\varepsilon_{ik}a_{jk}+\varepsilon_{jk}a_{ik}) \tag{4.1}\] that contains equivalent information to \(a_{ij}\). Here \(l=1/\sqrt{B}\) is the magnetic length. Under \(u(1)\) gauge transformations (3.6), \(\mathfrak{h}_{ij}\) transforms as \[\mathfrak{h}_{ij}\rightarrow\mathfrak{h}_{ij}-l^{2}(\partial_{j}\tilde{ \partial}_{i}+\partial_{i}\tilde{\partial}_{j})\beta \tag{4.2}\] which can be rewritten as \[\mathfrak{h}_{ij}\rightarrow\mathfrak{h}_{ij}-\partial_{i}\xi_{j}-\partial_{j }\xi_{i}, \tag{4.3}\] where we introduced \(\xi^{i}=l^{2}\tilde{\partial}^{i}\beta\) that by construction satisfies \(\partial_{i}\xi^{i}=0\). We recognize the linearized transformation of a metric tensor fluctuation under volume-preserving diffeomorphisms. Now we are ready to extend to a non-linear realization of the \(u(1)\) gauge redundancy. To that end, we introduce a dynamical unimodular metric \(\mathfrak{g}_{ij}\) that under volume-preserving infinitesimal diffeomorphisms \(x^{i}\to x^{i}+\xi^{i}=x^{i}+\ell^{2}\varepsilon^{ij}\partial_{j}\beta\) transforms as \[\delta_{\beta}\mathfrak{g}_{ij}=-\xi^{k}\partial_{k}\mathfrak{g}_{ij}- \mathfrak{g}_{kj}\partial_{i}\xi^{k}-\mathfrak{g}_{ik}\partial_{j}\xi^{k}=- \ell^{2}\varepsilon^{kl}\left(\partial_{k}\mathfrak{g}_{ij}+\mathfrak{g}_{kj} \partial_{i}+\mathfrak{g}_{ik}\partial_{j}\right)\partial_{l}\beta. 
\tag{4.4}\] In the linear regime, where \(\mathfrak{g}_{ij}=\delta_{ij}+\mathfrak{h}_{ij}+O\left(\mathfrak{h}^{2}\right)\), we recover the transformation (4.3). Notice that we are dealing with a bimetric theory5 because in addition to the dual dynamical metric \(\mathfrak{g}_{ij}\), we have at our disposal also a background symmetric metric tensor \(g_{ij}\) that measures distances between elementary bosons on the surface, where the vortex crystal is formed. To have a periodic crystal structure, this background metric is assumed to be flat. Footnote 5: A gapped bimetric theory of fractional quantum Hall fluids was proposed and analyzed in [61]. Following [53], the non-linear generalization of the \(u(1)\) gauge transformations of the field \(a_{0}\) that satisfies the non-commutative algebra of volume-preserving diffeomorphisms \([\delta_{\alpha},\delta_{\beta}]=\delta_{[\alpha,\beta]}\) is \[\delta_{\beta}a_{0}=\partial_{t}\beta-\xi^{k}\partial_{k}a_{0}=\partial_{t} \beta-\ell^{2}\varepsilon^{kl}\partial_{k}a_{0}\partial_{l}\beta. \tag{4.5}\] Given these objects and their transformation properties, we will now look for the basic building blocks of the non-linear dual gravity theory that correspond to the magnetic and electric fields of the traceless tensor gauge theory (3.7). From the dynamical metric \(\mathfrak{g}_{ij}\) (and its inverse \(\mathfrak{g}^{ij}\)), we can construct the gauge-invariant Ricci scalar \(\mathfrak{R}\) [62] that in two spatial dimensions completely fixes the Riemann tensor.6 In the linearized regime Footnote 6: Here we use the dynamical metric \(\mathfrak{g}_{ij}\) and its inverse \(\mathfrak{g}^{ij}\) to lower and raise indices. \[\mathfrak{R}=\partial_{i}\partial_{j}\mathfrak{h}_{ij}=-2b. \tag{4.6}\] We thus find that in the non-linear realization, the Ricci scalar \(\mathfrak{R}\) corresponds to the magnetic field \(b\), which represents fluctuations of the coarse-grained superfluid density in the vortex crystal. Although the time-derivative of the dynamical metric does not transform nicely under time-dependent gauge transformations, we can introduce a traceless "shear strain rate" tensor [63, 64] \[\mathfrak{s}_{ij}=\partial_{t}\mathfrak{g}_{ij}+\nabla_{i}v_{j}+\nabla_{j}v_{ i}-\mathfrak{g}_{ij}\nabla_{k}v^{k}, \tag{4.7}\] where the covariant derivatives were defined using the dynamical metric \(\mathfrak{g}_{ij}\). Here we also introduced the velocity vector field \(v^{i}=l^{2}\varepsilon^{ij}\partial_{j}a_{0}\) that under volume-preserving diffeomorphisms transforms as \[\delta_{\beta}v^{i}=-\xi^{k}\partial_{k}v^{i}+v^{k}\partial_{k}\xi^{i}+\dot{ \xi}^{i}. \tag{4.8}\] Physically, up to epsilon contractions, the tensor \(\mathfrak{s}_{ij}\) represents the traceless part of the physical stress tensor. In the linearized regime it thus essentially reduces to the electric field \(e_{ij}\). Since the Einstein-Hilbert integral \(\int d^{2}x\sqrt{\mathfrak{g}}\mathfrak{R}\) is fixed by the topological Euler characteristic \(\chi_{E}\) of a two-dimensional manifold, the first non-trivial term including the Ricci scalar is \(\sim\mathfrak{R}^{2}\). We can also raise the indices \(\mathfrak{s}^{ij}=\mathfrak{g}^{ik}\mathfrak{g}^{jl}\mathfrak{s}_{kl}\) and construct a dynamical scalar contribution \(\sim\mathfrak{s}_{ij}\mathfrak{s}^{ij}\) to the non-linear theory. 
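Before combining these ingredients into an action, the linearized identity (4.6) can be checked symbolically. The sympy sketch below verifies, for a generic traceless symmetric \(a_{ij}\), that \(\partial_{i}\partial_{j}\mathfrak{h}_{ij}=-2b\) with \(\mathfrak{h}_{ij}\) defined in (4.1), \(b\) defined in (3.4) and \(l^{2}=1/B\); no further assumptions enter.

```python
# Symbolic check of Eq. (4.6): for a generic traceless symmetric a_ij,
# the linearized Ricci scalar R = d_i d_j h_ij built from Eq. (4.1)
# equals -2b, with b from Eq. (3.4) and l**2 = 1/B.
import sympy as sp

x, y, B = sp.symbols('x y B', positive=True)
axx = sp.Function('axx')(x, y)          # a_xx = -a_yy (traceless)
axy = sp.Function('axy')(x, y)          # a_xy = a_yx (symmetric)
a = sp.Matrix([[axx, axy], [axy, -axx]])
eps = sp.Matrix([[0, 1], [-1, 0]])      # epsilon_ij with eps_xy = +1
X = (x, y)
d = lambda i, f: sp.diff(f, X[i])
dt = lambda i, f: sum(eps[i, j] * d(j, f) for j in range(2))  # skew derivative

l2 = 1 / B
h = sp.zeros(2, 2)                      # h_ij of Eq. (4.1)
for i in range(2):
    for j in range(2):
        h[i, j] = -l2 * sum(eps[i, k] * a[j, k] + eps[j, k] * a[i, k]
                            for k in range(2))

ricci = sum(d(i, d(j, h[i, j])) for i in range(2) for j in range(2))
b = -sp.Rational(1, 2) / B * sum(d(i, dt(j, a[i, j])) + d(j, dt(i, a[i, j]))
                                 for i in range(2) for j in range(2))
print(sp.simplify(ricci + 2 * b))       # 0, confirming R = -2b
```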
The combination of the two terms \[\mathcal{L}=\frac{\kappa}{32\ell^{4}}\mathfrak{s}_{ij}\mathfrak{s}^{ij}-\frac {\lambda}{8}\mathfrak{R}^{2} \tag{4.9}\] reduces to the linearized effective theory (3.7). Using the dynamical metric tensor \(\mathfrak{g}_{ij}\) and the "shear strain rate" tensor \(\mathfrak{s}_{ij}\), we can construct a more general non-linear gravity action that is invariant under the non-commutative volume-preserving diffeomorphism transformations (4.4), (4.5). A systematic construction of a dual non-linear bimetric theory of a vortex crystal that respects all global symmetries is an interesting future challenge. ## 5 Conclusions and outlook In summary, we have explored several mechanisms relevant to the quantum melting of two-dimensional vortex crystals, employing the quadratic low-energy effective Lifshitz theory and its symmetric tensor fractonic dual as our theoretical framework. Using the dual description, we studied the physics of topological defects, and speculated about a direct quantum phase transition from the vortex solid to the vortex liquid phase. We also took initial steps towards a non-linear dual description which we anticipate to be a dynamical theory of gravity. Finally, we would like to highlight several intriguing research avenues that have emerged from this study of vortex matter in superfluids and warrant further investigation in the future: * _Vacancy/interstitial RG instability_: An essential task is to gain deeper insights into the nature of the RG instability discussed in Section 2.2 that arises for filling fractions \(\nu<\nu_{c}\approx 8.2\) due to the Andreev-Lifshitz condensation of vacancies and interstitials. * _Disclination Higgs transition_: It is important to explore whether the direct and exotic continuous Higgs transition, as discussed in Section 3.3, remains intact in the presence of quantum fluctuations or if it gives way to several more conventional transitions, as suggested in [38, 43]. * _Non-linear gravity theory_: Using the ingredients introduced in Section 4, one should be able to construct a non-linear dynamical gravity theory that is consistent with all global symmetries inherent to the problem. This theory could then be employed to compute the decay rate of the Tkachenko mode which should be compared with the result obtained in Ref. [22]. The coupling of the dynamical metric \(\mathfrak{g}_{ij}\) and the background metric \(g_{ij}\) needs to be understood, and the complete theory of the bimetric model for the rotating superfluid remains to be discovered. Beyond rotating superfluids, it is well-appreciated that the Lifshitz theory emerges at low energies at the critical Rokhsar-Kivelson point in quantum dimer models and quantum spin ice [3, 65, 42]. Despite originating from a model with different global symmetries, the dual theory that we considered in this paper might inspire a study of the constrained dynamics of the low-energy excitations at the Rokhsar-Kivelson critical point. _Note added_: We would like to draw the reader's attention to the paper [66] by Yi-Hsien Du, Ho Tat Lam, and Leo Radzihovsky, titled "Quantum vortex lattice via Lifshitz duality", that has some overlap with our work. ## Acknowledgements The authors thank Eddy Ardonne, Eduardo Fradkin, Carlos Hoyos, Leo Radzihovsky, Dam Thanh Son and Wilhelm Zwerger for discussions and comments. S.M. is supported by Vetenskapsrådet (grant number 2021-03685) and Nordita. The work of D.X.N. is supported, in part, by Grant IBS-R024-D1. 
## Appendix A Vertex operators in quantum Lifshitz theory Here, following closely Refs. [3, 42], we review the known facts about vertex operators in the quantum Lifshitz model in two spatial dimensions. Consider a quantum Lifshitz theory of a compact scalar \(\varphi\in(0,2\pi)\). The theory is defined by the (Euclidean) Lagrangian \[\mathcal{L}=\frac{1}{2}(\partial_{\tau}\varphi)^{2}+\frac{\eta^{2}}{2}(\nabla^{2} \varphi)^{2}.\] (A.1) Consider perturbations that preserve the \(U(1)\) shift symmetry that acts as \(\varphi\to\varphi+\delta\).7 In particular, we are interested in operators that create vortex defects of \(\varphi\) with integer vorticity \(m\) Footnote 7: Due to the dynamical exponent \(z=2\), the \(U(1)\) symmetry is not broken spontaneously, but only exhibits algebraically-decaying correlators of \(U(1)\)-charged operators. \[\tilde{O}_{m}(\mathbf{x})=\exp(i\int d^{2}\mathbf{z}\,\alpha(\mathbf{z})\Pi( \mathbf{z})),\] (A.2) where \(\alpha(\mathbf{z})=m\arg(\mathbf{z}-\mathbf{x})\) and \(\Pi(\mathbf{z})\) denotes the canonical momentum density that is conjugate to the Lifshitz field \(\varphi\). When acting on the ground state, this operator shifts the phase to create a singular vortex profile. The (spatial) scaling dimension of \(\tilde{O}_{m}\) is [3, 42] \[\tilde{\Delta}_{m}=2\pi\eta m^{2}.\] (A.3) In other words, the equal-time correlator is given by the power-law \[\langle\tilde{O}_{m}(\mathbf{z})^{\dagger}\tilde{O}_{m}(\mathbf{z}^{\prime}) \rangle\sim|\mathbf{z}-\mathbf{z}^{\prime}|^{-4\pi\eta m^{2}}.\] (A.4) Due to the dynamical critical exponent \(z=2\), for \(\tilde{\Delta}_{m}<2+z=4\), the vortex operator becomes RG-relevant and destabilizes the Lifshitz scale-invariant fixed point. In particular, if we start with a large \(\eta\) and decrease it to the value \(\eta_{c}=2/(\pi m^{2})\), \(\tilde{\Delta}_{m}\) becomes marginal. One can also consider electric vertex operators of the form \[O_{n}=\exp(in\varphi)\] (A.5) which have the (spatial) scaling dimension [3, 42] \[\Delta_{n}=\frac{n^{2}}{8\pi\eta}\] (A.6) and thus \[\langle O_{n}(\mathbf{z})^{\dagger}O_{n}(\mathbf{z}^{\prime})\rangle\sim| \mathbf{z}-\mathbf{z}^{\prime}|^{-\frac{n^{2}}{4\pi\eta}}.\] (A.7) These electric operators violate the \(U(1)\) shift symmetry and cannot be added to the Lagrangian of the model. ## Appendix B Tkachenko dispersion We start from the Euler-Lagrange equations of the dual theory (3.7) \[\frac{\kappa}{4}\partial_{t}\underbrace{(\partial_{t}a_{ij}-(\partial_{i} \partial_{j}-\frac{1}{2}\delta_{ij}\Delta)a_{0})}_{e_{ij}}+\frac{\lambda}{2B} (\partial_{i}\tilde{\partial}_{j}+\partial_{j}\tilde{\partial}_{i})\underbrace {\frac{1}{2B}(\partial_{k}\tilde{\partial}_{l}+\partial_{l}\tilde{\partial}_{ k})a_{kl}}_{-b}=0.\] (B.1) Now without loss of generality we work in the temporal gauge \(a_{0}=0\) and consider wave propagation along the \(x\)-direction such that \(\partial_{y}=0\). Under these conditions, \(e_{ij}=\partial_{t}a_{ij}\) and \(b=\partial_{x}^{2}a_{xy}/B\). The above field equations simplify to \(\partial_{t}^{2}a_{xx}=\partial_{t}^{2}a_{yy}=0\) and \(\frac{\kappa}{4}\partial_{t}^{2}a_{xy}+\frac{\lambda}{2B^{2}}\partial_{x}^{4} a_{xy}=0\). The last equation gives us the quadratic dispersion \(\omega^{2}=\frac{2C_{2}\lambda}{B^{2}}q^{4}\) of oscillations of the off-diagonal component \(a_{xy}\). ## Appendix C From old to new dual tensor gauge theory The fracton-elasticity duality of vortex crystals was investigated in Ref. 
[38], where a gauge theory involving symmetric tensor gauge fields (dual to elasticity) intertwined with a \(u(1)\) vector gauge theory (dual to superfluidity) was derived. Here we demonstrate how one can derive from that construction the dual gauge theory (3.7) investigated in this paper. We start from the intertwined dual gauge theory considered in Ref. [38] \[\mathcal{L}=\mathcal{L}_{g}\left(a_{\mu}\right)+\frac{1}{2Bn_{0} }\epsilon_{ij}\left(B^{i}+B\epsilon^{ik}a_{k}\right)\partial_{t}\left(B^{j}+ B\epsilon^{jl}a_{l}\right)\] \[\qquad\qquad\qquad\qquad+\frac{1}{2}C_{ij;kl}^{-1}\left(E^{ij}+B \delta^{ij}a_{t}\right)\left(E^{kl}+B\delta^{kl}a_{t}\right)-A_{ij}J^{ij}+A_{ 0}\rho+a_{\mu}j_{v}^{\mu},\] (C.1) where the elasticity tensor \(C_{ij;kl}\) is given by \[C_{ij;kl}=8C_{1}P_{ij;kl}^{(0)}+4C_{2}P_{ij;kl}^{(2)}\] (C.2) with the compression and shear projection operators \[\begin{split} P_{ij;kl}^{(0)}&=\frac{1}{2}\delta_{ ij}\delta_{kl},\\ P_{ij;kl}^{(2)}&=\frac{1}{2}\left(\delta_{ik} \delta_{jl}+\delta_{il}\delta_{jk}-\delta_{ij}\delta_{kl}\right).\end{split}\] (C.3) The definitions of the electric field and magnetic field in terms of the symmetric tensor gauge field are \(E_{ij}=\partial_{t}A_{ij}-\partial_{i}\partial_{j}A_{0}\) and \(B^{i}=-\epsilon_{jk}\partial^{j}A^{ki}=\tilde{\partial}_{k}A^{ki}\). The superfluid current in the dual picture is \(j^{\mu}=\epsilon^{\mu\nu\rho}\partial_{\nu}a_{\rho}\), while \(n_{0}\) is the superfluid background density. The electric field \(E_{ij}\) and magnetic field \(B^{i}\) are invariant under the gauge transformations \[A_{ij}\to A_{ij}+\partial_{i}\partial_{j}\beta,\quad A_{0}\to A_{0}+ \partial_{t}\beta.\] (C.4) The gauge theory is also invariant under the additional gauge transformation \[A_{ij}\to A_{ij}+\frac{1}{B}\delta_{ij}\xi,\quad a_{\mu}\to a_{\mu}- \partial_{\mu}\xi.\] (C.5) Notice that the gauge transformation (C.4) differs from the gauge transformation (3.6) since \(A_{ij}\) is not traceless. The gauge transformations imply the conservation laws [38] for vortex crystal topological defects (represented by the disclination density \(\rho\) and the dislocation current \(J^{ij}\)) and vacancies/interstitials (represented by their current \(j_{v}^{\mu}\)) \[\partial_{t}\rho+\partial_{i}\partial_{j}J^{ij}=0,\] (C.6) \[\frac{1}{B}J^{ij}\delta_{ij}+\partial_{\mu}j_{v}^{\mu}=0.\] (C.7) Combining the above equations gives us \[\partial_{t}\rho+(\partial_{i}\partial_{j}-\frac{1}{2}\delta_{ij}\triangle)J^ {ij}-\frac{1}{2B}\triangle\left(\partial_{\mu}j_{v}^{\mu}\right)=0.\] (C.8) We observe that in the presence of vacancies, the conservation of the trace of the quadrupole in the main text needs to be modified to \(d\tilde{Q}^{tr}/dt=0\) with \[\tilde{Q}^{tr}=\int d^{2}x\ \left(\mathbf{x}^{2}\rho-\frac{2}{B}j_{v}^{t}\right)\] (C.9) which implies the modified glide constraint derived in [38]. Note that a similar conclusion for an ordinary crystal was derived in [28]. From now on, we ignore the vacancy current and lattice topological defects and derive from Eq. (C.1) the traceless tensor gauge theory in the main text. At leading order in the derivative expansion, the first term in (C.1) does not depend on \(a_{t}\), so the resulting Gauss law is \[B^{2}\delta^{ij}C_{ij;kl}^{-1}\delta^{kl}a_{t}+BE^{ij}C_{ij;kl}^{-1}\delta^{kl} =0.\] (C.10) Thus \(a_{t}\) is fixed by the trace of the symmetric tensor electric field \[a_{t}=-\frac{1}{2B}E^{ij}\delta_{ij},\] (C.11) where we used that \(P_{ij;kl}^{(2)}\delta^{kl}=0\). 
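The projector algebra used in this step is easy to verify numerically. The following numpy sketch checks that \(P^{(0)}\) and \(P^{(2)}\) of Eq. (C.3) are orthogonal projectors, that \(C^{-1}=P^{(0)}/(8C_{1})+P^{(2)}/(4C_{2})\) inverts (C.2) on symmetric tensors, and that the Gauss law (C.10) then yields (C.11); the numerical values of \(C_{1}\), \(C_{2}\), \(B\) and the sample \(E_{ij}\) are arbitrary.

```python
# Numerical check of the projector algebra behind Eqs. (C.10)-(C.11).
import numpy as np

delta = np.eye(2)
P0 = 0.5 * np.einsum('ij,kl->ijkl', delta, delta)          # Eq. (C.3)
P2 = 0.5 * (np.einsum('ik,jl->ijkl', delta, delta)
            + np.einsum('il,jk->ijkl', delta, delta)) - P0

def dot(A, B):  # contraction (A B)_{ij;kl} = A_{ij;mn} B_{mn;kl}
    return np.einsum('ijmn,mnkl->ijkl', A, B)

assert np.allclose(dot(P0, P0), P0) and np.allclose(dot(P2, P2), P2)
assert np.allclose(dot(P0, P2), 0)          # orthogonal projectors

C1, C2, B = 1.3, 0.7, 2.0                   # arbitrary positive values
C = 8 * C1 * P0 + 4 * C2 * P2               # Eq. (C.2)
Cinv = P0 / (8 * C1) + P2 / (4 * C2)
assert np.allclose(dot(C, Cinv), P0 + P2)   # identity on symmetric tensors

E = np.array([[0.4, -1.1], [-1.1, 2.5]])    # sample symmetric E_ij
# Gauss law (C.10): B^2 (d C^{-1} d) a_t + B (E C^{-1} d) = 0
dCd = np.einsum('ij,ijkl,kl->', delta, Cinv, delta)
ECd = np.einsum('ij,ijkl,kl->', E, Cinv, delta)
a_t = -ECd / (B * dCd)
print(np.isclose(a_t, -np.trace(E) / (2 * B)))   # True, i.e. Eq. (C.11)
```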
Substituting it now back into the Lagrangian (C.1), we use the explicit form of the projectors and find \[\mathcal{L}=\mathcal{L}_{g}\left(a_{\mu}\right)+\frac{1}{2Bn_{0}}\epsilon_{ij }\left(B^{i}+B\epsilon^{ik}a_{k}\right)\partial_{t}\left(B^{j}+B\epsilon^{jl}a_{l} \right)+\frac{\kappa}{8}e_{ij}e^{ij},\] (C.12) where we defined the symmetric traceless tensor \(e^{ij}=E^{ij}-\delta^{ij}\frac{1}{2}E^{ab}\delta_{ab}\) and \(\kappa=C_{2}^{-1}\). Restricting to the leading-order superfluid Lagrangian \(\mathcal{L}_{g}\left(a_{\mu}\right)=-\lambda(\epsilon^{ij}\partial_{i}a_{j})^ {2}/2\), the equation of motion for \(a_{i}\) is \[\lambda\tilde{\partial}_{i}\tilde{\partial}_{j}a_{j}+\frac{1}{n_{0}}\partial _{t}(B^{i}+B\epsilon^{ij}a_{j})=0\] (C.13) which can approximately be solved by \[a_{i}=\frac{1}{B}\epsilon_{ij}B_{j}+\ldots\] (C.14) Substituting this solution into Eq. (C.12), we finally get \[\mathcal{L}=-\frac{\lambda}{2}b^{2}+\frac{\kappa}{8}e_{ij}e^{ij},\] (C.15) where we introduced \(b=-\frac{1}{B}\partial_{k}B_{k}=-\frac{1}{B}\partial_{k}\tilde{\partial}_{l} A^{lk}\). This is exactly our new symmetric traceless gauge theory (3.7). Finally, if we define the traceless symmetric tensor gauge field \(a_{ij}\) and rename \(A_{0}\) \[a_{ij}=A_{ij}-\frac{\delta_{ij}}{2}A_{kk},\quad A_{0}\to a_{0}\] (C.16) we can write \(e_{ij}\) and \(b\) in terms of the new gauge fields \[b=-\frac{1}{2B}(\partial_{i}\tilde{\partial}_{j}+\partial_{j}\tilde{\partial }_{i})a_{ij},\] (C.17) \[e_{ij}=\partial_{t}a_{ij}-(\partial_{i}\partial_{j}-\frac{1}{2}\delta_{ij} \Delta)a_{0}.\] (C.18) After the redefinitions, the gauge transformation of the traceless symmetric tensor gauge field inherited from (C.4) reads \[a_{0}\to a_{0}+\partial_{t}\beta,\qquad a_{ij}\to a_{ij}+(\partial_{i} \partial_{j}-\frac{1}{2}\delta_{ij}\Delta)\beta\] (C.19) which reproduces the gauge transformations (3.6) from the main text.
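As a final consistency check, the gauge invariance of \(b\) and \(e_{ij}\) under (C.19) can be confirmed symbolically. In the sympy sketch below the gauge parameter \(\beta(t,x,y)\) is an arbitrary smooth function and nothing else is assumed:

```python
# Symbolic verification that b and e_ij of Eqs. (C.17)-(C.18) are
# invariant under the gauge transformation (C.19)/(3.6).
import sympy as sp

t, x, y, B = sp.symbols('t x y B', positive=True)
beta = sp.Function('beta')(t, x, y)
X = (x, y)
d = lambda i, f: sp.diff(f, X[i])
dt_skew = lambda i, f: [d(1, f), -d(0, f)][i]   # skew derivative eps_ij d_j
lap = lambda f: d(0, d(0, f)) + d(1, d(1, f))
delta = sp.eye(2)

# Pure-gauge shift of a_ij: da_ij = (d_i d_j - delta_ij Lap/2) beta
da = sp.Matrix(2, 2, lambda i, j: d(i, d(j, beta))
               - sp.Rational(1, 2) * delta[i, j] * lap(beta))

# Induced shift of b, Eq. (C.17): vanishes identically
db = -sp.Rational(1, 2) / B * sum(d(i, dt_skew(j, da[i, j]))
                                  + d(j, dt_skew(i, da[i, j]))
                                  for i in range(2) for j in range(2))
print(sp.simplify(db))                          # 0

# Induced shift of e_ij, Eq. (C.18), with a_0 -> a_0 + d_t beta:
# vanishes component by component since partial derivatives commute
de = sp.Matrix(2, 2, lambda i, j: sp.diff(da[i, j], t)
               - (d(i, d(j, sp.diff(beta, t)))
                  - sp.Rational(1, 2) * delta[i, j] * lap(sp.diff(beta, t))))
print(sp.simplify(de))                          # zero matrix
```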
2303.13679
Primer: Fast Private Transformer Inference on Encrypted Data
It is increasingly important to enable privacy-preserving inference for cloud services based on Transformers. Post-quantum cryptographic techniques, e.g., fully homomorphic encryption (FHE), and multi-party computation (MPC), are popular methods to support private Transformer inference. However, existing works still suffer from prohibitively computational and communicational overhead. In this work, we present, Primer, to enable a fast and accurate Transformer over encrypted data for natural language processing tasks. In particular, Primer is constructed by a hybrid cryptographic protocol optimized for attention-based Transformer models, as well as techniques including computation merge and tokens-first ciphertext packing. Comprehensive experiments on encrypted language modeling show that Primer achieves state-of-the-art accuracy and reduces the inference latency by 90.6% ~ 97.5% over previous methods.
Mengxin Zheng, Qian Lou, Lei Jiang
2023-03-23T21:23:37Z
http://arxiv.org/abs/2303.13679v1
# Primer: Fast Private Transformer Inference on Encrypted Data ###### Abstract It is increasingly important to enable privacy-preserving inference for cloud services based on Transformers. Post-quantum cryptographic techniques, e.g., fully homomorphic encryption (FHE) and multi-party computation (MPC), are popular methods to support private Transformer inference. However, existing works still suffer from prohibitive computational and communication overhead. In this work, we present Primer to enable a fast and accurate Transformer over encrypted data for natural language processing tasks. In particular, Primer is constructed by a hybrid cryptographic protocol optimized for attention-based Transformer models, as well as techniques including computation merge and tokens-first ciphertext packing. Comprehensive experiments on encrypted language modeling show that Primer achieves state-of-the-art accuracy and reduces the inference latency by \(90.6\%\sim 97.5\%\) over previous methods. Fully Homomorphic Encryption, Multi-party Computation, Transformer, Cryptographic Protocol, Private Inference ## I Introduction Transformer-based, or more broadly attention-based, models show superior performance over previous methods, becoming increasingly popular in natural language processing (NLP) applications [6]. For example, BERT obtains new state-of-the-art results on eleven NLP tasks, including pushing the GLUE score to \(80.5\%\) (\(7.7\%\) absolute improvement), and even proves superior to human performance on the challenging sentence classification tasks. Server-based Transformer service is an effective way for clients to run their computationally expensive and memory-intensive NLP tasks on powerful cloud servers. During a server-based Transformer service, cloud servers require access to clients' language data, thus introducing potential privacy risks. Therefore, to be able to utilize this technology, there is an urgent need to safeguard the confidentiality of users' biomedical, financial, and other sensitive data that are submitted to servers. Post-quantum cryptographic protocols, e.g., FHE [8, 13] and MPC [2], are popular methods to enable provably confidential computation on encrypted data. We use Figure 1 to show the overview of private Transformer inference, where the client receives cloud services based on Transformer models by only uploading encrypted data generated by cryptographic protocols such as FHE or MPC. This Transformer inference is provably privacy-preserving since data is not revealed to other parties [4, 12]. However, existing works for private Transformer inference based on FHE, e.g., THE-X [4], suffer from enormous latency. For example, THE-X takes more than 3 orders of magnitude longer than regular Transformer inference. Moreover, the polynomial approximation of activations in THE-X significantly reduces accuracy, e.g., \(<77\%\) GLUE score (\(\sim 8\%\) absolute accuracy decrease). We identify several challenges in designing private Transformer inference, such as the large one-hot word embeddings, complex attention, frequent \(SoftMax\), and very deep blocks. Specifically, BERT [6] uses WordPiece embeddings [23] with a \(30522\)-token vocabulary and \(768\) embedding dimensions, so that \(n\) tokens require \(n\) instances of a \(30522\times 768\) matrix-vector multiplication. Directly applying existing techniques to design privacy-preserving embeddings suffers from enormous latency overhead. 
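To see why the embedding step is singled out as a challenge, note that with an encrypted token index the server cannot simply look up a row of the embedding table; the lookup must be expressed as a one-hot matrix-vector product. A small numpy sketch makes the cost explicit (the random table is illustrative; the sizes are the BERT numbers quoted above):

```python
# Why private word embedding is costly: with the token index hidden,
# each token becomes a one-hot vector of length 30522 multiplied by the
# 30522 x 768 embedding matrix, instead of a direct table lookup.
import numpy as np

d_oh, d_emb = 30522, 768
E = np.random.randn(d_oh, d_emb).astype(np.float32)   # embedding table

token_id = 4242
onehot = np.zeros(d_oh, dtype=np.float32)
onehot[token_id] = 1.0

# Plaintext inference: a single O(d_emb) table lookup.
lookup = E[token_id]
# Encrypted inference: the same result via a full matrix-vector product,
# O(d_oh * d_emb) multiplications per token (about 23.4 million here).
matvec = onehot @ E
print(np.allclose(lookup, matvec), d_oh * d_emb)
```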
In addition, we identify that the attention scheme in Transformer models requires massive ciphertext-ciphertext multiplications that cannot be directly implemented by previous methods optimized for ciphertext-plaintext multiplications. Moreover, a deeper Transformer architecture adds expensive FHE rotations and communicational interactions. In this work, we present a fast and accurate Transformer inference method, denoted by Primer, over encrypted data. We propose several techniques to construct Primer. In particular, a hybrid cryptographic protocol is proposed to construct a private Transformer, where FHE is used for polynomial operations and MPC is for non-polynomial operations. We call our Primer with this protocol Primer-base. Primer-base is accurate since it removes the polynomial approximation in previous works based on FHE. To reduce the online time of Primer-base, we propose a new hybrid protocol, denoted by HGS, to pre-process most FHE operations. Offloading computations into the offline phase from the online phase is important since offline computations can be computed in advance before inference. We further propose FHGS, denoted by Primer-F, to improve the compatibility of HGS on attention computations in Transformer models. Other techniques including computation merge (combined FHGS) and tokens-first packing are presented to further reduce the inference latency. Comprehensive experiments on encrypted language modeling show that Primer achieves state-of-the-art accuracy and reduces the inference latency by \(90.6\%\sim 97.5\%\) over previous methods. ## II Background and Motivation **Threat Model.** We use Figure 1 to show the overview of our threat model, where servers and clients are semi-honest, e.g., a semi-honest cloud server that attempts to infer clients' input information but follows the cryptographic protocol. Our threat model follows previous work THE-X [4] and Gazelle [10]. The security level of our method is 128 bits for a fair comparison. Fig. 1: Overview of private Transformer inference based on cryptographic protocols, e.g., FHE and MPC. The lock represents that data is encrypted. **Transformer-based Models for NLP Tasks.** Transformer-based models [21] achieve state-of-the-art performance in many NLP tasks. A Transformer architecture mainly includes embeddings, stacked encoders, and decoders using Multi-Head Self-Attention (MHSA) and point-wise, fully connected (FC) layers. A model with only encoders, e.g. BERT [6], can be used in discriminative NLP tasks including classification and regression, etc. Meanwhile, a model with decoders, e.g. GPT-2 [17], works for generative NLP tasks including Language Modeling (LM) and machine translation. In particular, embeddings include word embedding and positional embedding. Word embedding converts the input tokens and output tokens (each token is a one-hot vector with a length of \(d_{oh}\)) to vectors of dimension \(d_{emb}\) by a linear projection. Positional embedding ensures the Transformer model has the sequence order information by adding "positional encodings" \(\lambda\) to the previous word embedding. The embedded representations are fed into MHSA. In MHSA, embedded representations are first converted into three categories, key \(X_{K}\), query \(X_{Q}\), and value \(X_{V}\), by linear projections with key weight \(W_{K}\), query weight \(W_{Q}\), and value weight \(W_{V}\), respectively. 
Then, the output of MHSA is calculated as a weighted sum of the values \(X_{V}\) by \(Attention(X_{Q},X_{K},X_{V})=SoftMax(\frac{X_{Q}X_{K}^{T}}{\sqrt{d_{k}}})X_{V}\), where \(SoftMax(\frac{X_{Q}X_{K}^{T}}{\sqrt{d_{k}}})\) is the weight assigned to each value and \(d_{k}\) is the key dimension. Instead of computing the attention only once, the multi-head mechanism computes attention multiple times in parallel, and these multiple attentions are simply concatenated and linearly transformed into the expected dimensions as \(MultiHead(X_{Q},X_{K},X_{V})=[head_{1},...,head_{H}]W_{O}\), where \(head_{i}=Attention(X_{Q}W_{Q}^{i},X_{K}W_{K}^{i},X_{V}W_{V}^{i})\) and \(W_{O}\) is a linear projection weight matrix. **Interactive hybrid cryptographic protocol.** FHE [8] is an encryption method that enables one to perform computations on encrypted data without decryption. Garbled Circuits (GC) [2, 7] and Secret Sharing (SS) [9] are two paramount methods of secure multi-party computation. An interactive hybrid cryptographic protocol [10] combines the advantages of FHE, GC, and SS. In particular, FHE has superior performance over GC on linear operations, e.g., matrix-vector multiplication, because FHE with a ciphertext packing technique supports efficient operations in a SIMD (single instruction multiple data) manner. Therefore, FHE is used to support private linear operations: a client encrypts its input and sends it to the server, and the server returns the encrypted output to the client, who decrypts it. In the state-of-the-art mixed protocols [10, 14, 15], GC shows superior performance over HE on non-linear operations such as activation functions, and SS is used to combine GC and HE in the mixed protocol. Inspired by Beaver's Triple [1], FHE can be used to efficiently perform multiplications on two additive secret shares. In this work, we use this interactive hybrid method to construct Primer-base, which is the starting point for our optimization techniques. **Motivation.** As Figure 2 shows, prior works like THE-X [4] using only FHE for private inference suffer from low accuracy and enormous online latency due to polynomial approximation and expensive FHE operations. We use the prior GC-based work [19] to implement a GCFormer (we convert the Transformer model into a circuit of binary gates so that GC [2] can implement it). GCFormer achieves accurate performance, i.e., 85.1% accuracy, but incurs an even larger latency than THE-X. Thus, neither the FHE-based nor the GC-based method achieves low-latency, accurate private Transformer inference. Instead, we follow the interactive hybrid cryptographic protocol [10] and construct our Primer-base by using GC for non-polynomial operations, FHE for polynomial operations, and SS for secure communication between the parties. Primer-base significantly improves the accuracy of THE-X, e.g., a 7.3% accuracy increase, and reduces the latency of GCFormer. However, Primer-base still suffers from enormous online latency. This motivates us to propose techniques like the FHGS protocol, denoted by Primer-F, to offload the online computation to the offline phase, where computations can be performed before inference. Since Primer-F still has a large total latency, we further propose computation merge, i.e., combined FHGS, and tokens-first ciphertext packing. More details about Primer and the related techniques are introduced in Section III. 
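Before turning to the protocol constructions, the following minimal plaintext sketch in Python of single-head attention marks which operations the hybrid protocol assigns to FHE (polynomial) and which to GC (non-polynomial); shapes and names are illustrative assumptions, not Primer's code.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

n, d_emb, d_k = 30, 768, 64                   # illustrative sizes
X = np.random.randn(n, d_emb)                 # embedded tokens
W_Q, W_K, W_V = (np.random.randn(d_emb, d_k) for _ in range(3))

# Polynomial, ciphertext-plaintext: encrypted input times cleartext weights (FHE).
X_Q, X_K, X_V = X @ W_Q, X @ W_K, X @ W_V

# Polynomial but ciphertext-ciphertext: both operands derive from the encrypted
# input, so a plain additive-HE pipeline cannot offload this product; this is
# the case the FHGS protocol of Section III-B addresses.
scores = X_Q @ X_K.T / np.sqrt(d_k)

# Non-polynomial: evaluated with GC in the hybrid protocol.
out = softmax(scores) @ X_V                   # a second ciphertext-ciphertext product
print(out.shape)                              # (30, 64)
```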
## III Primer ### _Primer-base Construction_ As Figure 3(a) shows, a Transformer-based model involves computations of the word and positional embeddings, the query/key/value linear projections, the attention operations, \(SoftMax\), and the point-wise FC layers: the model multiplies the query \(X_{Q}\) with the transposed key \(X_{K}^{T}\), applies \(SoftMax\) to obtain the weights on the values, and multiplies the attention weights with the value \(X_{V}\). Figure 3(b) illustrates how to construct a basic private Transformer, i.e., Primer-base, using the prior interactive hybrid cryptographic protocol: FHE (denoted as HE in Figure 3(b)) is used for the polynomial operations in all steps, while GC is used for the non-polynomial operations, e.g., \(SoftMax\). ### _Primer-F Construction_ Primer-base significantly improves the accuracy of FHE-based methods like THE-X [4] and reduces the latency of MPC-based methods like GCFormer [19]. However, Primer-base still suffers from enormous online latency. This motivates the HGS and FHGS protocols, denoted by Primer-F, which offload the online computation to the offline phase, where computations can be performed before inference. **The HGS protocol.** Figure 4 shows the mixed HGS protocol. The offline phase in the HGS protocol prepares data for the subsequent online phase. For the \(i\)-th layer of a Transformer model, a client first samples a random matrix \(Rc[i]\) of the same size as the private input \(X[i]\), and then submits the ciphertext \(Enc(Rc[i])\) to the server for the subsequent multiplication between \(Enc(Rc[i])\) and the \(i\)-th layer weights \(W[i]\). A random matrix \(Rs[i]\) is generated by the server, and \(Enc(Rs[i]+Rc[i]\times W[i])\) is sent back to the client. The client performs decryption to get \(Rs[i]+Rc[i]\times W[i]\). The value \(Rs[i]+Rc[i]\times W[i]\) held by the client and \(Rs[i]\) held by the server are secret shares of \(Rc[i]\times W[i]\). Meanwhile, the offline phase of GC, e.g., garbling, is performed. During the online phase, the difference \(X[i]-Rc[i]\), instead of \(X[i]\), is sent to the server. The server's computation of \((X[i]-Rc[i])\times W[i]-Rs[i]\), combined with the previous offline computation, leaves the client and server with additive secret shares of \(X[i]\times W[i]\). In this way, the heavy encrypted HE operations for the privacy-preserving matrix multiplication \(X[i]\times W[i]\) are calculated offline, and the online overhead is almost removed since only unencrypted computations remain. Then GC is used to perform the subsequent mapping function \(F\), e.g., the \(ReLU\) activation. 
Specifically, the garbled Boolean representation of \(X[i]\times W[i]\) is derived by the modular sum of the secret shares of \(X[i]\times W[i]\); then the Boolean circuits of the mapping function \(F\) are evaluated. Finally, a modular subtraction between function \(F\)'s result and a new random matrix \(Rc[i+1]\) is performed to generate secret shares of function \(F\)'s result. A modular operation circuit is implemented by an adder and a multiplexer [10, 16]. We encapsulate the HGS protocol shown in Figure 4 into a module that takes the random matrices \(Rc[i]\) and \(Rc[i+1]\), the \(i\)-th layer input \(X[i]\), and the weight matrix \(W[i]\) as inputs, and generates \(X[i+1]-Rc[i+1]=F(X[i]\times W[i])-Rc[i+1]\). Here \(F()\) can be the identity function or an activation function. The \(i\)-th layer input \(X[i]\) can be omitted if the server already holds \(X[i]-Rc[i]\). \(W[0]=W_{E}\), \(W[2]=\sigma=1\), and \(\lambda\) is added to \(X[1]\) instead of being multiplied. As Figure 3(c) shows, the embedding, linear projection, and other FC computations in the Transformer can be performed by the HGS protocol; however, the attention steps cannot be directly constructed by the additive HGS protocol, since HGS only supports additive computations, i.e., ciphertext additions and ciphertext-plaintext multiplications. This is because the HGS protocol, which depends on an additive HE scheme, is only sufficient for modules whose weights are never encrypted. Therefore, HGS cannot move the ciphertext-ciphertext multiplications of the attention steps on secret shares into the offline phase. **The Fully HGS (FHGS) protocol for ciphertext operations.** The attention step in the Transformer differs from the preceding steps, where weights are not encrypted and only inputs are encrypted, so that private inference is a type of ciphertext-plaintext operation.
Fig. 3: Private transformer block inference under various Primer protocols.
Fig. 4: Our HGS protocol for Transformer’s attention operations.
Fig. 5: Our Fully HGS (FHGS) protocol for attention operations.
In the attention step, however, all query, key, and value matrices are encrypted, and the attention operations require ciphertext-ciphertext multiplications. Ciphertext-ciphertext operations are not only more expensive than ciphertext-plaintext operations but also cannot directly use the HGS method; thus we propose the FHGS protocol to solve this problem. Inspired by Beaver's Triple method [1], we propose a Fully HGS (FHGS) protocol to empower the prior additive HGS to efficiently support ciphertext-ciphertext operations such as \(X_{Q}\times X_{K}^{T}\) in Transformer models. Figure 5 shows our FHGS protocol for the attention product \(X_{Q}[i]\times X_{K}[i]^{T}\). Since \(X_{Q}[i]\) and \(X_{K}[i]^{T}\) are both ciphertexts, additive HGS cannot offload the \(X_{Q}[i]\times X_{K}[i]^{T}\) operations. FHGS pre-computes encrypted triples including \(Enc(Rc[i])\), \(Enc(Rc[i]^{T})\), and \(Enc(Rc[i]^{T}\times Rc[i])\) for the subsequent online process. During the online phase in FHGS, the server has access to \(X_{Q}[i]-Rc[i]\) and \((X_{K}[i]-Rc[i])^{T}\), although \(X_{Q}[i]\) and \(X_{K}[i]^{T}\) are not seen by the server. So an important intermediate result \(tmp1=(X_{Q}[i]-Rc[i])\times(X_{K}[i]-Rc[i])^{T}\) can be derived. The key idea to obtain our target \(tmp4\), a ciphertext of \(X_{Q}[i]\times X_{K}[i]^{T}\), is that the masking cross terms in \(tmp1\) can be compensated homomorphically via \(tmp4=tmp1+tmp2+tmp3+Enc(Rc[i]^{T}\times Rc[i])\), where \(tmp2\) and \(tmp3\) (computed from the encrypted triples and the masked operands, see Figure 5) restore the cross terms involving \(Rc[i]\). 
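To make this bookkeeping concrete, the following Python sketch checks the masking identity on plaintext matrices standing in for the homomorphic operations; the concrete forms of \(tmp2\) and \(tmp3\) here are our reading of Figure 5, stated as assumptions rather than Primer's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 3                              # illustrative token count and head dim

X_Q = rng.standard_normal((n, d))        # secret query matrix
X_K = rng.standard_normal((n, d))        # secret key matrix
Rc  = rng.standard_normal((n, d))        # client's random mask, sampled offline

# HGS warm-up (ciphertext-plaintext): additive shares of X_Q @ W.
W  = rng.standard_normal((d, d))         # unencrypted layer weights
Rs = rng.standard_normal((n, d))         # server's random mask
client_share = Rc @ W + Rs               # client decrypts Enc(Rs + Rc @ W) offline
server_share = (X_Q - Rc) @ W - Rs       # server computes this in the clear online
assert np.allclose(client_share + server_share, X_Q @ W)

# FHGS (ciphertext-ciphertext): the server only ever sees the masked operands.
tmp1 = (X_Q - Rc) @ (X_K - Rc).T
# Assumed forms of the correction terms; in the real protocol they are computed
# homomorphically from Enc(Rc), Enc(Rc^T), and the masked operands, so the
# server never learns X_Q or X_K.
tmp2 = Rc @ (X_K - Rc).T
tmp3 = (X_Q - Rc) @ Rc.T
# The text writes Enc(Rc[i]^T x Rc[i]); under our rows-are-tokens shape
# convention the term that cancels the masks is Rc @ Rc.T, precomputed offline.
tmp4 = tmp1 + tmp2 + tmp3 + Rc @ Rc.T
assert np.allclose(tmp4, X_Q @ X_K.T)    # the attention product is recovered
```

Expanding \(tmp1\) shows each correction term cancels one masking cross term, which is why all expensive products involving only \(Rc[i]\) can be precomputed offline.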
In order to preserve the privacy of \(X_{Q}[i]\times X_{K}[i]^{T}\), the server transmits its additive secret-sharing ciphertext \(Enc(X_{Q}[i]\times X_{K}[i]^{T}-Rs[i+1])\), instead of \(Enc(X_{Q}[i]\times X_{K}[i]^{T})\), to the client, who can decrypt it and obtain \(X_{Q}[i]\times X_{K}[i]^{T}-Rs[i+1]\). In this way, the client and the server acquire additive secret shares of \(X_{Q}[i]\times X_{K}[i]^{T}\). Optionally, the client can further share \(X_{Q}[i]\times X_{K}[i]^{T}-Rs[i+1]-Rc[i+1]\) with the server to obtain new secret shares of \(X_{Q}[i]\times X_{K}[i]^{T}\). Finally, the FHGS protocol is encapsulated into a module that takes the random matrices \(Rc[i]\), \(Rc[i+1]\), and \(Rs[i+1]\), together with \(X_{Q}[i]-Rc[i]\) and \((X_{K}[i]-Rc[i])^{T}\), as inputs, and outputs the secret shares of \(X_{Q}[i]\times X_{K}[i]^{T}\). **Privacy analysis.** \(X_{Q}[i]\), \(X_{K}[i]\), and \(X_{Q}[i]\times X_{K}[i]^{T}\) are confidential to both the client and the server, which ensures that our FHGS protocol is privacy-preserving. The server, which has no access to the HE private key, cannot decrypt ciphertexts including \(Enc(Rc[i]^{T}\times Rc[i])\), \(tmp1\), \(tmp2\), \(tmp3\), \(tmp4\), and \(tmp4-Rs[i+1]\). Only secret shares of \(X_{Q}[i]\), \(X_{K}[i]\), and \(X_{Q}[i]\times X_{K}[i]^{T}\) can be accessed by the client and the server. Also, FHGS completely offloads the complex and expensive ciphertext-ciphertext operations from the online phase into the offline phase, since \(Rc[i]\) and \(Rc[i]^{T}\) are pre-sampled and their product can be calculated in advance, which enables an additive HE scheme to efficiently perform privacy-preserving ciphertext-ciphertext Transformer operations. ### _Primer-FC by Combined FHGS (CHGS)_ We further reduce the computational and communication overhead of the previous techniques by a combined FHGS (CHGS) method that combines adjacent HGS layers. CHGS processes multiple stacked operations in a single calculation, and most HE-based operations are moved to the offline phase. As Figure 3(d) shows, our CHGS module removes the preceding HGS operations by incorporating three HGS modules into the adjacent FHGS module. The key idea is that the combined target \((X[i]\times W_{E}+\lambda)\times W_{Q}\times[(X[i]\times W_{E}+\lambda)\times W_{K}]^{T}=X_{Q}[i]\times X_{K}[i]^{T}\) can be derived from \(tmp1=((X[i]-Rc[i])\times W_{E}+\lambda)\times W_{Q}\times[((X[i]-Rc[i])\times W_{E}+\lambda)\times W_{K}]^{T}\) by combining correction terms as \(result=tmp1-tmp2+tmp3+tmp4-tmp5+tmp6-tmp7\). The combined weight \(W_{M}\), \(tmp6\), and \(tmp7\) can be calculated in the offline phase. The server sends \(result-Rs[i+1]\) to the client, so that the client obtains the decryption of \(result-Rs[i+1]\) and the server holds \(Rs[i+1]\). The client can also subtract \(Rc[i+1]\) from the decryption of \(result-Rs[i+1]\) to construct a new secret sharing. Using CHGS, the four interactions in Figure 3(d) are reduced to a single interaction. The improvement details of CHGS are discussed in the results section. ### _Tokens-first Packing, i.e., Primer-FPC Construction_ Embedding is used to compress a large and sparse one-hot vector into a small and dense vector. How to efficiently support such high-dimension (\(>30K\)) matrix multiplication under encryption has not been studied. Each word (token) is a vector of size \(30522\), which is larger than the number of ciphertext slots. For multiple words in a sentence, how to pack these words into ciphertexts is a new challenge. 
We propose tokens-first packing to tackle this challenge, instead of the prior features-based ciphertext packing used in [3, 5, 10, 16]. In features-based ciphertext batching, multiple features (e.g., pixels of an input image) are batched into the same ciphertext. However, we found that directly applying the features-based packing method to Transformer-based NLP models introduces massive FHE rotations, so we use tokens-first packing to reduce the homomorphic rotations in Primer-FC. Figure 6(a) depicts the pseudo-code of encrypted matrix multiplication based on features-based packing between \(X[i]\), i.e., \(X[i][0:n][0:d_{oh}]\), and \(W[i]\), i.e., \(W[i][0:d_{oh}][0:d_{emb}]\), where \(n\), \(d_{oh}\), and \(d_{emb}\) are the number of input tokens, the one-hot dimension of a token, and the embedding dimension, respectively. Here \(X[i][0:n][0:d_{oh}]\) represents the shape of matrix \(X[i]\).
Fig. 6: The comparison of features-based packing and our tokens-first packing.
The result of the matrix multiplication is \(X[i+1]\) with a size of \(X[i+1][n][d_{emb}]\). Lines 2 through 8 pack the input matrix \(X[i][0:n][0:d_{oh}]\) into \(c\) plaintexts \(p[0:c]\) and encrypt them into \(c\) ciphertexts via the encryption function \(Enc()\). Each plaintext \(p\) has \(M\) slots, so it can hold \(M\) entries. In features-based packing, the one-hot features \(X[i][h][0:j]\) of a token \(h\) are first placed into plaintexts, then the features of the next token \(h+1\) are packed, until all tokens' features are packed and encrypted. Features-based packing requires \(c\times M\) \(Rotate\) operations, since each ciphertext with \(M\) features needs \(M\) rotations, as shown in lines 9 through 14. One key observation is that features from different tokens are independent and do not need to be accumulated with one another in the matrix multiplication, which motivates us to propose tokens-first packing: batch as many tokens as possible into the same ciphertext. As Figure 6(b) shows, lines 2, 3, 6, and 10 of features-based packing are replaced so that the \(j\)-th feature \(X[i][0:n][j]\) of all \(n\) tokens is packed into a ciphertext, and then the \((j+1)\)-th feature is packed. Using tokens-first packing, one ciphertext only holds \(\sim\frac{M}{n}\) features, so one ciphertext only requires \(\sim\frac{M}{n}\) \(Rotate\) operations. Considering that features-based packing and our tokens-first packing use a similar number of ciphertexts \(c\), tokens-first packing saves \(c\times(M-\frac{M}{n})\) rotations. ## IV Experimental Methodology **System setup and security analysis.** We run the privacy-preserving Transformer experiments on two instances equipped with an Intel Xeon E7-4850 CPU and 128 GB DRAM; each instance is provided with 4 threads. In our current system setup, the average network delay between the two instances is 2.3 ms and the bandwidth is about 100 MB/s. The layer-wise PAHE (packed additive homomorphic encryption) used in Primer is implemented with the SEAL [20] library, where only additive HE operations and rotations are used and ciphertext-ciphertext multiplications are not required. We adopt an extended version of the JustGarble tool [2], as used in [10], to implement GC-based operations, including additions of secret shares and activation functions. The HE parameters and GC settings are selected to provide a 128-bit security level. The inputs and weights use a 15-bit fixed-point representation, and intermediate results are truncated to 15 bits to avoid overflow. 
The training, fine-tuning, and testing of the Transformer on plaintext were implemented in Python on 4 NVIDIA Tesla V100 GPUs. **Transformer architecture and NLP datasets.** We evaluated Primer on five discriminative NLP models shown in Table III: BERT-tiny, BERT-small, BERT-base, BERT-medium, and BERT-large. The hyper-parameters of these models are listed in Table III. For example, the BERT-tiny model has \(N=3\) blocks, \(d_{emb}=768\) embedding dimensions, \(H=12\) attention heads, and \(n=30\) input tokens. The datasets for the five BERT tasks are SQuAD1 [18], SQuAD2 [11], and MNLI-m, MRPC, and SST-2 from the GLUE benchmark [22]. ## V Results and Analysis **Comparison with Prior Works.** We compare Primer with prior works on private BERT-base inference on the MNLI-m dataset in Table I. Prior work such as THE-X [4], which only uses FHE for private inference, achieves only \(77.3\%\) accuracy with \(4.7k\) seconds of latency due to polynomial approximation and expensive FHE operations. We use the prior GC-based work [19] to implement a GCFormer. It achieves accurate performance, i.e., \(85.1\%\) accuracy, but incurs an even larger latency than THE-X. Our Primer-F significantly improves the accuracy of THE-X, e.g., a \(7.3\%\) accuracy increase, and reduces the latency of GCFormer. To reduce the large offline latency of Primer-F, we further propose Primer-FPC, i.e., Primer, with tokens-first packing and combined FHGS. Our Primer only takes \(\sim 0.4k\) seconds of latency, achieving a \(\sim 16\times\) latency reduction. **Ablation Study.** Table II describes the performance breakdown and the ablation effects of the proposed techniques using the BERT-base model with \(n=30\) on the MNLI-m dataset. Primer-base, implemented by the FHE and MPC protocols, requires \(\sim 6553\) seconds of latency to perform one inference on a sentence in the MNLI-m dataset and achieves \(84.6\%\) accuracy. We further propose FHGS, denoted by Primer-F, to offload computations from the online phase into the offline phase, which significantly shrinks the online latency from \(\sim 6553\) seconds to \(\sim 41\) seconds, an almost \(160\times\) latency reduction. Primer-FP is proposed to reduce the latency of the embedding layers and the following layers that include HE operations. It further decreases the online latency by \(5.3\%\) over Primer-F and achieves a \(16\times\) offline latency reduction. Primer-FPC has a similar offline latency and accuracy to Primer-FP, but further reduces the online latency by \(9.2\%\). Table II shows that Primer (Primer-FPC) achieves competitive NLP accuracy and reduces the online and offline inference latency by \(90.6\%\sim 97.5\%\) over Primer-base. **Results on Different Models.** Table III studies the effects of different BERT models and datasets on Primer. Privacy-preserving BERT-tiny with only 3 Transformer blocks achieves an average \(81.7\%\) accuracy on three GLUE datasets, including MNLI-m, MRPC, and SST-2, and an average \(81.4\%\) F1 test accuracy on SQuAD1 and SQuAD2. Primer with BERT-tiny requires 10.6 seconds to perform an inference or classification on a sequence with \(30\) tokens, thereby attaining a throughput of 2.83 tokens per second. Also, the communication message size between the client and the server is \(\sim 0.9\) GB. Primer with BERT-small or BERT-base achieves \(2.4\%\sim 7.5\%\) higher accuracy by adding more Transformer blocks, but increases the latency by \(78.3\%\sim 230\%\). Also, Primer with BERT-small and BERT-base requires more than \(2\times\) and \(3\times\) the message size of BERT-tiny-based Primer, respectively. 
BERT-medium with a larger embedding dimension and BERT-large with 24 blocks are also supported by Primer; they take 45.1 seconds and 91.6 seconds, respectively, to perform an inference on a sentence with \(30\) tokens. BERT-large achieves state-of-the-art accuracy on the GLUE and SQuAD benchmarks. ## VI Conclusion In this paper, we present Primer to enable fast and accurate privacy-preserving Transformer inference for NLP tasks. First, a naive version of Primer, called Primer-base, is constructed by a hybrid interactive cryptographic protocol. Second, we propose tokens-first packing instead of the prior features-based packing to reduce the offline and online overhead brought by HE. Finally, we demonstrate that multiple secret-sharing layers in the Transformer can be combined to reduce the latency. Primer establishes a solid baseline and sheds light on private Transformer inference over encrypted data. ## Acknowledgment This work was supported in part by NSF awards CCF-1908992, CCF-1909509, and CCF-2105972.
2310.05137
Effects of Annihilation with Low-Energy Neutrinos on High-Energy Neutrinos from Binary Neutron Star Mergers and Rare Core-Collapse Supernovae
We explore the possibility that high-energy (HE) neutrinos produced from choked jets can be annihilated with low-energy (LE) neutrinos emitted from the accretion disk around a black hole in binary neutron star mergers and rare core-collapse supernovae. For HE neutrinos produced close to the stellar center ($\lesssim 10^{9}-10^{12}$ cm), we find that the emerging all-flavor spectrum for neutrinos of $E\gtrsim 0.1-1$ PeV could be modified by a factor $E^{-n}$ with $n\gtrsim 0.4-0.5$ under realistic conditions. Flavor evolution of LE neutrinos does not affect this result but can change the emerging flavor composition of HE neutrinos. As a consequence, the above annihilation effect may need to be considered for HE neutrinos produced from choked jets at small radii. We briefly discuss the annihilation effects for different HE neutrino production models and point out that such effects could be tested through precise measurements of the diffuse neutrino spectrum and flavor composition.
Gang Guo, Yong-Zhong Qian, Meng-Ru Wu
2023-10-08T12:12:28Z
http://arxiv.org/abs/2310.05137v2
Effects of Annihilation with Low-Energy Neutrinos on High-Energy Neutrinos from Binary Neutron Star Mergers and Rare Core-Collapse Supernovae ###### Abstract We explore the possibility that high-energy (HE) neutrinos produced at mildly relativistic shocks can be annihilated with low-energy (LE) neutrinos emitted from the accretion disk around a black hole in binary neutron star mergers and rare core-collapse supernovae. For HE neutrinos produced close to the stellar center (\(\lesssim 10^{9}\)-\(10^{11}\) cm), we find that the emerging all-flavor spectrum for neutrinos of \(E\gtrsim 0.1\) PeV could be modified by a factor \(E^{-n}\) with \(n\approx 0.4\)-\(0.5\). Flavor evolution of LE neutrinos does not affect this result but can change the emerging flavor composition of HE neutrinos. As a consequence, the above annihilation effect needs to be considered for HE neutrinos produced at nonrelativistic or mildly relativistic shocks at small radii. In particular, we point out that models of core-collapse supernovae with slow jets and charmed meson decay can better fit the diffuse HE neutrino flux observed by IceCube if annihilation of HE and LE neutrinos is taken into account. Moreover, the relevance of charmed mesons and the annihilation effect can be tested by precise measurements of the diffuse neutrino spectrum and flavor composition. ## I Introduction The detection of cosmic high-energy (HE) neutrinos by IceCube [1; 2; 3; 4] ushered in a new era of neutrino astronomy. While some IceCube events are correlated with the blazar TXS 0506+056 [5; 6], three tidal disruption events [7; 8; 9], and an active galaxy [10], the sources of TeV-PeV neutrinos remain mostly unidentified. Nevertheless, there are many constraints on the contributions from various sources and the associated models. For instance, the lack of directional and temporal correlation with gamma-ray bursts (GRBs) [11; 12; 13; 14; 15] limits their contribution to less than 1%1 and hence has led to a reconsideration of HE neutrino production in GRBs [21; 22; 23]. Because both HE neutrinos and \(\gamma\)-rays are produced from meson decay, sources are likely opaque to \(\gamma\)-rays in order to avoid overproducing the observed diffuse \(\gamma\)-ray background [24; 25; 26; 27; 28]. While current data allow a wide range of flavor composition, sources with the standard pion or neutron decay scenarios are constrained [29; 30; 31; 32; 33]. Footnote 1: The nondetection of HE neutrinos from the brightest GRB, GRB 221009A, sets a limit comparable to that from the stacked searches [16; 17; 18; 19; 20]. Many other issues regarding HE neutrino production and propagation are worth exploring. For example, in specific astrophysical environments, microscopic processes beyond those included in the current models may be important. New physics beyond the standard model, such as non-standard interactions of neutrinos, can alter the spectrum and flavor content of the HE neutrinos reaching the Earth [34; 35; 36; 37; 38; 39; 40]. By accumulating statistics on the diffuse flux, IceCube, and especially IceCube-Gen2, can provide even better probes of the sources and the related neutrino physics [41; 42; 43; 44]. Detection of many events from a single nearby source would also give powerful constraints. 
With the above considerations in mind, here we discuss low-energy (LE) neutrino emission associated with the central engines of GRBs and rare core-collapse supernovae (CCSNe) that produce HE neutrinos, and explore how annihilation with these LE neutrinos may affect the spectrum and flavor composition of the HE neutrinos emerging from these sources. Short GRBs lasting \(\sim 0.1\)-\(1\) s are associated with binary neutron star mergers (BNSMs), while long GRBs lasting a few seconds or longer are mostly associated with rare CCSNe, the so-called collapsars or hypernovae. In both cases, an accreting black hole is widely considered one of the primary candidates for the central engine of GRBs [45]. The accretion disk associated with the black hole can emit profuse fluxes of nearly thermal LE neutrinos of \(\mathcal{O}(10)\) MeV, mainly \(\nu_{e}\) and \(\bar{\nu}_{e}\). Relativistic jets may be powered either by annihilation of these LE \(\nu_{e}\) and \(\bar{\nu}_{e}\) or by extracting the rotational energy of the black hole through the Blandford-Znajek mechanism [46]. Shocks can occur at different stages of jet propagation, leading to different scenarios for HE neutrino production2[65]. As the jet propagates through the ejecta from a BNSM or the envelope of a CCSN, HE neutrinos can be produced at internal shocks before jet collimation, at collimation shocks, and at forward and reverse shocks driven by the mildly relativistic or non-relativistic jet head [66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80]. In the case of rare CCSNe, it may be more common that the jet is not energetic enough to penetrate the whole stellar envelope; the resulting choked jet is dark in electromagnetic radiation but bright in HE neutrinos. In addition, both successful and failed jets may transfer energy to stellar matter, driving a mildly relativistic cocoon that produces low-luminosity GRBs [74; 91] accompanied by HE neutrinos during shock breakout [92; 93; 94]. Because we are interested in HE neutrinos and their annihilation with LE neutrinos, we focus on those BNSMs and rare CCSNe that can produce both types of neutrinos, regardless of any associated GRBs. The impact of annihilation with LE neutrinos on HE neutrinos has received little attention in previous studies. The only exception is our recent study [95], where we considered collapsars as sources of both HE neutrinos and \(r\)-process nuclei. The \(r\)-process nuclei are synthesized in the nonrelativistic winds from the accretion disk [96; 97; 98]. The \(\beta\)-decay of these radioactive nuclei produces LE \(\bar{\nu}_{e}\), which first oscillate into \(\bar{\nu}_{\mu}\) and then can annihilate with HE \(\nu_{\mu}\) produced by shocks associated with jet propagation. We demonstrated that such annihilation could leave imprints on the spectrum and flavor composition of the emerging HE neutrino flux [95]. In this work, we conduct a similar study, but as stated above, our focus is on the LE neutrinos emitted by the accretion disk. Given the complexity of HE neutrino production in different sites and the extensive studies already available in the literature, we choose to present a largely parametric study of how annihilation with LE neutrinos affects the emerging HE neutrino flux. Our results can be used to assess the significance of such annihilation for specific scenarios of HE neutrino production. As an example, we also discuss the effects of such annihilation on HE neutrinos from CCSNe with slow jets [72; 77; 99]. This paper is organized as follows. In Sec. 
II we discuss the emission of LE neutrinos by an accretion disk in BNSMs or rare CCSNe and the flavor evolution of both LE and HE neutrinos. Without addressing the detailed mechanism of HE neutrino production, we present in Sec. III a parametric study of how annihilation with LE neutrinos may affect HE neutrinos. We then discuss such effects on HE neutrinos from CCSNe with slow jets in Sec. IV. We discuss our results and give conclusions in Sec. V. ## II Emission of LE neutrinos and flavor evolution of LE and HE neutrinos ### Luminosity and duration Abundant fluxes of MeV-scale \(\nu_{e}\) and \(\bar{\nu}_{e}\) can be emitted from accretion disks, mainly via \(e^{\pm}\) captures on nucleons. For hyperaccretion rates of \(\dot{M}\sim 0.001\)-\(10~{}M_{\odot}\)/s, the fraction of energy converted to neutrinos, \(\epsilon_{\nu}\equiv L_{\nu}/\dot{M}c^{2}\), can vary widely from \(\sim 0.01\)-\(0.3\), depending on the initial mass and spin of the black hole, the accretion rate, and the viscosity of the accretion disk [100; 101; 102; 103; 46]. Generally, lower viscosity and higher black hole spin give rise to higher values of \(\epsilon_{\nu}\). For \(\dot{M}\gtrsim 0.1M_{\odot}\)/s, \(\epsilon_{\nu}\) can reach 0.05 for both a non-rotating (Schwarzschild) and a fast-rotating (Kerr) black hole [104; 101]. The corresponding neutrino luminosity can be as high as \(10^{53}\) erg/s for \(\dot{M}=1M_{\odot}\)/s, and even higher for accretion rates of 10 \(M_{\odot}\)/s, which could occur in the case of compact object mergers [105; 101]. The jet luminosity is related to \(L_{\nu}\) if the jet is powered by pair annihilation of accretion disk neutrinos. Previous studies indicated that the pair annihilation luminosity is \(L_{\bar{\nu}\nu}\sim(10^{-3}\)-\(10^{-1})L_{\nu}\)[102; 46], which is consistent with the observed luminosity for classical GRBs with \(L_{\rm ob}\sim 10^{51}\) erg/s. The mean energies of accretion disk neutrinos are typically \(\sim 10\)-\(20\) MeV [104]. The duration of the accretion process and neutrino emission could be similar to that of the associated GRBs, i.e., \(\sim 0.1\)-\(1\) s for SGRBs [104] and \(\sim\) a few seconds for LGRBs [96]. Instead of using any specific numerical models, we simply assume that the neutrino luminosities from the accretion disk are constant over time, and choose a standard Fermi-Dirac distribution with the same effective \(T_{\nu}\) and zero chemical potential for all flavors to describe the spectra. For thermal neutrinos emitted from a surface area of \(S\sim 2\pi R_{\nu}^{2}\) with an effective emission radius \(R_{\nu}=10^{7}\) cm, we have \(L_{\nu}\sim\sigma T_{\nu}^{4}S\), with \(\sigma\) being the Stefan-Boltzmann constant. For \(T_{\nu}=5\) MeV, the corresponding luminosity is \(L_{\nu}\sim 10^{53}\) erg/s. Without referring to specific models, we vary \(T_{\nu}\) in the range from 3 to 8 MeV for the parametric study. Correspondingly, the total neutrino luminosity varies from \(10^{52}\) to \(10^{54}\) erg/s, which is broadly consistent with numerical simulations [106; 107; 108; 109]. It is crucial to consider the contemporaneity of LE neutrino emission by the accretion disk and HE neutrino production by the jets. Jets are launched from the accretion disk shortly after the onset of thermal neutrino emission. 
With the jet velocity \(\beta_{j}c\) and Lorentz factor \(\Gamma_{j}=(1-\beta_{j}^{2})^{-1/2}\), the time lag \(\Delta t\) for the jets to reach the shock formation site at radius \(R_{\rm sh}\) relative to the thermal neutrinos is \(\sim R_{\rm sh}/(2\Gamma_{j}^{2}c)\) for mildly relativistic or relativistic jets, and \(\sim R_{\rm sh}/(\beta_{j}c)\) for non-relativistic jets. To ensure that the HE neutrinos meet the LE neutrinos, the time lag needs to be smaller than the duration of thermal neutrino emission \(\Delta T\). In the case of mildly relativistic or relativistic jets, this requires \(R_{\rm sh}\lesssim 6\times 10^{11}(\Gamma_{j}/3)^{2}(\Delta T/{\rm s})\) cm. For nonrelativistic jets, \(R_{\rm sh}\) should be smaller than \(\sim 3\times 10^{10}\beta_{j}(\Delta T/{\rm s})\) cm. Our study focuses on cases where HE neutrinos are produced close to the center at \(R_{\rm sh}\sim 10^{9}\)-\(10^{12}\) cm so that a significant impact of the LE neutrinos could be expected [see, e.g., Eq. (4) below]. Given the above considerations, the LE and HE neutrinos can meet for annihilation except for cases with \(\beta_{j}\ll 1\) and \(\Delta T\ll 1\) s. ### Flavor evolution of LE and HE neutrinos For studying their effects on jet-produced HE neutrinos, the flavor evolution of the thermal neutrinos due to different mechanisms, including collective oscillations [110, 111, 112, 113] and the Mikheyev-Smirnov-Wolfenstein (MSW) effect [114, 115], is a crucial input. For simplicity, we assume that only \(\nu_{e}\) and \(\bar{\nu}_{e}\)[110, 46] are emitted from the accretion disk, and the probability for an initial \(\nu_{e}\) (\(\bar{\nu}_{e}\)) to become a \(\nu_{\beta}\) (\(\bar{\nu}_{\beta}\); \(\beta=e,\ \mu,\ \tau\)) at radius \(r\gg R_{\nu}\) outside the accretion disk is parametrized as \(f_{\beta}(r)\) [\(\bar{f}_{\beta}(r)\)]. Instead of solving the detailed flavor evolution of thermal neutrinos, we consider the following five flavor evolution scenarios for accretion disk neutrinos (see Tab. 1): (1) no evolution (NE), for which \(f_{\beta}(r)=\bar{f}_{\beta}(r)=\delta_{\beta e}\) in the Kronecker \(\delta\) notation, (2) adiabatic evolution with normal mass ordering (NO), for which \(f_{\beta}(r)=|U_{\beta 3}|^{2}\) and \(\bar{f}_{\beta}(r)=|\bar{U}_{\beta 1}|^{2}\) in terms of the vacuum mixing matrix elements \(U_{\beta i}\) and \(\bar{U}_{\beta i}\) (\(i=1\), \(2\), \(3\)), (3) adiabatic evolution with inverted mass ordering (IO), for which \(f_{\beta}(r)=|U_{\beta 2}|^{2}\) and \(\bar{f}_{\beta}(r)=|\bar{U}_{\beta 3}|^{2}\), (4) exotic evolution (EE), for which \(f_{\beta}(r)=\bar{f}_{\beta}(r)=\delta_{\beta\mu}\), and (5) flavor equipartition (FE), with \(f_{\beta}=\bar{f}_{\beta}=1/3\). We use the appropriate best-fit values of the mixing parameters from [116] to evaluate \(U_{\beta i}\) and \(\bar{U}_{\beta i}\). The above five scenarios may be realized in CCSNe and BNSMs under different physical conditions. For a neutrino of energy \(E_{\nu}\), the MSW effect takes place at a resonance density \(\rho_{\rm res,7}\approx 1.3E_{\nu,\rm MeV}^{-1}(\delta m^{2}/{\rm eV}^{2})\cos 2\theta_{v}\), where \(\delta m^{2}\) is the vacuum mass-squared difference and \(\theta_{v}\) is the vacuum mixing angle. Here and below, we use subscripts to indicate eV-based units and powers of \(10\) in cgs units, i.e., \(a_{x}\equiv a/10^{x}\). 
For accretion disk neutrinos of \(E_{\nu}\sim 10\) MeV, two resonances occur at high and low densities of \(\rho_{H,3}\sim 3\) and \(\rho_{L,1}\sim 4\) for \((\delta m^{2}/{\rm eV}^{2},\theta_{v})=(2.4\times 10^{-3},8.5^{\circ})\) and \((7.5\times 10^{-5},33.5^{\circ})\), respectively [116]. As \(\rho\gg\rho_{H}\) at the accretion disk, both resonances are relevant if \(\rho<\rho_{L}\) at \(r\sim R_{\rm sh}\). Such a condition can be fulfilled for CCSNe with \(R_{\rm sh,10}\gtrsim 3\)[78] and for BNSMs [117]. Assuming that collective oscillations can be neglected and that the flavor evolution through both resonances is adiabatic, this corresponds to the scenario NO or IO, depending on the yet-unknown neutrino mass ordering. The scenario NE may occur in CCSNe with \(R_{\rm sh,9}\sim 3\), where the accretion disk neutrinos do not go through MSW resonances before reaching \(r\sim R_{\rm sh}\) with \(\rho>\rho_{H}\)[78], when collective oscillations are ignored. As for the collective oscillations expected to occur near the accretion disk where the neutrino densities are high [110, 111, 112, 113], the effect can be complicated and is under intense investigation. If they happen, they will lead to a flavor evolution history different from the above three scenarios, and we use the scenarios EE and FE to represent the range of possible outcomes. In short, the exact flavor evolution of accretion disk neutrinos for specific models requires a more detailed treatment, but we expect the outcome to lie within the range covered by the above five representative scenarios. For the HE neutrinos, the MSW effect is relevant when they are produced at \(R_{\rm sh,9}\sim 3\)-\(100\). HE neutrinos with \(E_{\nu}\gtrsim 10\) TeV experience no MSW resonances inside stars, and hence little flavor evolution before interacting with accretion disk neutrinos [118, 119, 120, 121, 122]. For BNSMs [117], there might be resonances at \(r\sim R_{\rm sh}\), but flavor conversion is suppressed due to nonadiabatic evolution for extremely large \(E_{\nu}\). Even for cases where vacuum oscillations can occur, the effects are unimportant as \(R_{\rm sh}\delta m^{2}/(2E_{\nu})\lesssim 1\). Therefore, we can neglect the flavor evolution of HE neutrinos above \(\sim 10\) TeV in discussing their annihilation with accretion disk neutrinos (see Ref. [123] for a discussion of how LE neutrinos might induce flavor evolution of HE neutrinos). Although TeV neutrinos could be affected by the MSW effect when propagating inside stars, they are not energetic enough to be affected by pair annihilation. ## III \(\nu\bar{\nu}\) annihilation of LE and HE neutrinos In this section we study how the flux and flavor composition of HE neutrinos are modified by annihilation with the LE neutrinos. Without referring to specific models, we simply assume that the HE neutrinos are produced at a radius \(R_{\rm sh}\) representative of shocks accelerating protons3. Consider a HE \(\nu_{\alpha}\) of energy \(E\) emitted at an angle \(\theta_{0}\) relative to the jet axis (Fig. 1), and assume that the HE neutrino can always meet the thermal neutrinos along its trajectory. When it interacts with an LE \(\bar{\nu}_{\beta}\) of energy \(E^{\prime}\) at radius \(r\), the main processes are Footnote 3: In addition to shock acceleration, protons can also be accelerated through the neutron-proton-converter mechanism [124, 125] or magnetic reconnection. We focus on the shock case, but our parametric study applies equally to the other cases. 
\[\nu_{\alpha}\bar{\nu}_{\beta}\rightarrow\left\{\begin{array}{ll}f\bar{f},& \alpha=\beta,\\ l_{\alpha}\bar{l}_{\beta},&\alpha\neq\beta,\end{array}\right. \tag{1}\] where \(f\) stands for the relevant quarks and leptons and \(l\) for the charged leptons. The corresponding cross sections \(\sigma_{\nu_{\alpha}\bar{\nu}_{\beta}}(s)\)[126] are functions of \(s=2EE^{\prime}(1-\cos\theta)\), where \(\theta\) is the intersection angle between \(\nu_{\alpha}\) and \(\bar{\nu}_{\beta}\). The probability for the HE \(\nu_{\alpha}\) to survive annihilation, \(P_{\nu_{\alpha}}(E,\theta_{0})=\exp[-\tau_{\nu_{\alpha}}(E,\theta_{0})]\), is determined by the "optical" depth \[\tau_{\nu_{\alpha}}(E,\theta_{0})=\sum_{\beta}\int(1-\cos\theta)\sigma_{\nu_{ \alpha}\bar{\nu}_{\beta}}(s)dn_{\bar{\nu}_{\beta}}(E^{\prime},r)d\ell, \tag{2}\] where \(\ell\) is the path length of \(\nu_{\alpha}\), \[dn_{\bar{\nu}_{\beta}}(E^{\prime},r)=\frac{E^{\prime 2}dE^{\prime}}{\exp(E^{ \prime}/T_{\nu})+1}\frac{R_{\nu}^{2}\cos\theta^{\prime}}{8\pi^{2}r^{2}}\bar{f}_{ \beta}(r), \tag{3}\] is the energy-differential number density of \(\bar{\nu}_{\beta}\) at radius \(r\), and \(\theta^{\prime}=\theta_{0}-\theta\). Note that \(\theta\) and \(r\) can be solved from \(R_{\rm sh}\), \(\theta_{0}\), and \(\ell\) (Fig. 1). Taking \(\theta\sim\theta_{0}\ll 1\), \(\ell\sim r\sim R_{\rm sh}\), and \(\sigma_{\nu_{\alpha}\bar{\nu}_{\beta}}\sim G_{F}^{2}s\), where \(G_{F}\) is the Fermi coupling constant, we can estimate \[\tau_{\nu_{\alpha}}(E,\theta_{0}) \sim \frac{7\pi^{2}}{1920}G_{F}^{2}E\frac{R_{\nu}^{2}T_{\nu}^{4}\theta _{0}^{4}}{R_{\rm sh}} \tag{4}\] \[\sim 25E_{\rm PeV}R_{\nu,7}^{2}T_{\nu,\rm MeV}^{4}\theta_{0}^{4}R_{ \rm sh,9}^{-1}.\] The above estimate shows that \(P_{\nu_{\alpha}}(E,\theta_{0})\) is sensitive to the emission angle \(\theta_{0}\) of \(\nu_{\alpha}\). This angle is mainly limited by the shock Lorentz factor \(\Gamma_{\rm sh}\). We assume that HE neutrinos are emitted isotropically in the shock rest frame, and the typical value of \(\theta_{0}\) is of order \(\Gamma_{\rm sh}^{-1}\). To expect a significant annihilation effect, we consider only the case of mildly relativistic shocks. The corresponding normalized angular distribution can be estimated as \[g(\theta_{0})=\frac{1-v^{2}}{2(1-v\cos\theta_{0})^{2}}, \tag{5}\] where \(v\) is the velocity for shocks with \(\Gamma_{\rm sh}\equiv(1-v^{2})^{-1/2}\). As we aim to estimate the effect of annihilation on the diffuse HE neutrino flux from similar sources, we average \(P_{\nu_{\alpha}}(E,\theta_{0})\) over \(\theta_{0}\) to obtain \[\langle P_{\nu_{\alpha}}(E)\rangle=\int_{-1}^{1}\exp[-\tau_{\nu_{ \alpha}}(E,\theta_{0})]g(\theta_{0})d\cos\theta_{0}. \tag{6}\] For quantitative estimates, we take \(R_{\nu,7}=1\), \(T_{\nu,\rm MeV}=3\), 4, 5, 6, 7, 8, \(R_{\rm sh,9}=3\), 10, 30, 100, and \(\Gamma_{\rm sh}=3\), 5, 10. For all these conditions and all five LE neutrino flavor evolution scenarios, we find that \[\langle P_{\nu_{\alpha}}(E)\rangle=[1+(E/E_{0})^{n}]^{-1} \tag{7}\] is an excellent fit over \(E=10\) TeV to 3 PeV, as demonstrated by Fig. 2 for \(\langle P_{\nu_{\mu}}(E)\rangle\). The same form of fit with slightly different \(E_{0}\) and \(n\) also applies to \(\langle P_{\bar{\nu}_{\alpha}}(E)\rangle\). The parameter \(E_{0}\) is a characteristic energy for which annihilation with thermal neutrinos is significant. For \(\theta_{0}\sim\Gamma_{\rm sh}^{-1}\), we expect from Eq. 
(4) that \(E_{0}\) should scale inversely with \[\eta\equiv R_{\nu,7}^{2}T_{\nu,\rm MeV}^{4}R_{\rm sh,9}^{-1}\Gamma_{\rm sh}^{-4}. \tag{8}\] For illustration, Fig. 3a shows \(E_{0}\) as a function of \(\eta\) for \(\langle P_{\nu_{\mu}}(E)\rangle\) in the NO scenario of LE neutrino flavor evolution. It can be seen that \(E_{0,\rm PeV}\sim 0.1/\eta\) for all combinations of \(T_{\nu}\), \(R_{\rm sh}\), and \(\Gamma_{\rm sh}\) considered, in agreement with Eq. (4). [The effect of \(R_{\nu}\) is as in Eq. (4) but not shown.] The behavior of \(n\) is more complex. It clearly scales with \(\eta\) for fixed \(\Gamma_{\rm sh}\), but the trend varies with \(\Gamma_{\rm sh}\). However, when annihilation of 0.1 PeV neutrinos becomes significant for \(\eta\gtrsim 1\) (see the dashed horizontal line of Fig. 3), \(n\) lies in a narrow range of \(\approx 0.4\)-0.5 (Fig. 3b).
Figure 3: Fitting parameters \(E_{0}\) and \(n\) as functions of \(\eta\) for \(\langle P_{\nu_{\mu}}(E)\rangle\) in the NO scenario of LE neutrino flavor evolution. The three trends for \(E_{0}\) and \(n\) from left to right are for \(\Gamma_{\rm sh}=10\) (blue), 5 (green), and 3 (red), respectively. The dashed line corresponds to \(E_{0}=0.1\) PeV.
The explicit dependence of \(n\) on \(\Gamma_{\rm sh}\) can be traced to the contribution from \(\nu_{\alpha}\bar{\nu}_{\alpha}\) annihilation.
\begin{table} \begin{tabular}{c c c c c} no evolution & adiabatic evolution & adiabatic evolution & exotic evolution & flavor equipartition \\ & assuming NO & assuming IO & & \\ (NE) & (NO) & (IO) & (EE) & (FE) \\ \hline \(f_{\beta}(r)=\bar{f}_{\beta}(r)=\delta_{\beta e}\) & \(f_{\beta}(r)=|U_{\beta 3}|^{2}\), & \(f_{\beta}(r)=|U_{\beta 2}|^{2}\), & \(f_{\beta}(r)=\bar{f}_{\beta}(r)=\delta_{\beta\mu}\) & \(f_{\beta}=\bar{f}_{\beta}=1/3\) \\ & \(\bar{f}_{\beta}(r)=|\bar{U}_{\beta 1}|^{2}\) & \(\bar{f}_{\beta}(r)=|\bar{U}_{\beta 3}|^{2}\) & & \\ \end{tabular} \end{table} Table 1: Flavor evolution scenarios for LE neutrinos.
In contrast to the approximate linear scaling with \(s\) of the cross section for \(\nu_{\alpha}\bar{\nu}_{\beta}\) (\(\beta\neq\alpha\)) annihilation, the cross section for \(\nu_{\alpha}\bar{\nu}_{\alpha}\) annihilation has a resonant form \[\sigma_{\nu_{\alpha}\bar{\nu}_{\alpha}}\sim\frac{G_{F}^{2}M_{Z}^{4}s}{(s-M_{Z}^{2})^{2}+\Gamma_{Z}^{2}M_{Z}^{2}}, \tag{9}\] where \(M_{Z}\) is the mass of the \(Z\) boson and \(\Gamma_{Z}\) is its decay width. Taking \(E^{\prime}\sim 3T_{\nu}\) and \(s\sim E^{\prime}E\theta^{2}\sim 3T_{\nu}E/\Gamma_{\rm sh}^{2}\), one would naively estimate that the \(Z\) resonance occurs for \[E_{\rm PeV}\sim 3T_{\nu,{\rm MeV}}^{-1}\Gamma_{\rm sh}^{2}. \tag{10}\] The above estimate indicates that the resonance has little effect on HE neutrinos of PeV and below for the case of \(\Gamma_{\rm sh}=10\), but may affect \(\langle P_{\nu_{\alpha}}(E)\rangle\) for \(\Gamma_{\rm sh}=3\) and \(5\). In fact, for the latter cases, the resonance starts to play a role for HE neutrinos with energies lower than that given by Eq. (10) because the intersection angle \(\theta_{0}\) follows a broader distribution [Eq. (5)] for smaller \(\Gamma_{\rm sh}\). The resonance significantly affects \(\langle P_{\nu_{\alpha}}(E)\rangle\) at \(E\gtrsim 0.1\) PeV and \(\gtrsim 1\) PeV for \(\Gamma_{\rm sh}=3\) and \(5\), respectively. Consequently, \(n\) increases with decreasing \(\eta\) at \(\eta\lesssim 1\) (corresponding to \(E_{0}\gtrsim 0.1\) PeV) for \(\Gamma_{\rm sh}=3\) and stays approximately constant at \(\eta\lesssim 0.1\) (corresponding to \(E_{0}\gtrsim 1\) PeV) for \(\Gamma_{\rm sh}=5\). The above results can be used along with models of HE neutrino production to estimate signals from a nearby source or contributions to the diffuse flux at IceCube. A proper calculation should include flavor evolution from the source to IceCube and the detailed detector response. 
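The scalings above are easy to reproduce numerically. The short Python sketch below evaluates \(\langle P(E)\rangle\) from Eqs. (4)-(6), keeping only the leading \(G_{F}^{2}s\) behavior of the cross sections (no \(Z\) resonance, no flavor structure), so it illustrates the qualitative trend of Fig. 2 rather than the full curves; the parameter values are the fiducial ones of the text.

```python
import numpy as np

def tau(E_PeV, theta0, R_nu7=1.0, T_MeV=5.0, R_sh9=10.0):
    # Order-of-magnitude optical depth of Eq. (4); the small-angle form is
    # applied at all angles here, which overestimates tau at large theta0,
    # but those angles carry negligible weight for relativistic beaming.
    return 25.0 * E_PeV * R_nu7**2 * T_MeV**4 * theta0**4 / R_sh9

def mean_survival(E_PeV, Gamma_sh, n_mu=200001, **kw):
    # <P(E)> of Eq. (6): exp(-tau) averaged over the beamed angular
    # distribution g(theta0) of Eq. (5), with mu = cos(theta0).
    v = np.sqrt(1.0 - 1.0 / Gamma_sh**2)
    mu = np.linspace(-1.0, 1.0, n_mu)
    g = (1.0 - v**2) / (2.0 * (1.0 - v * mu)**2)   # normalized in d(mu)
    P = np.exp(-tau(E_PeV, np.arccos(mu), **kw))
    return np.trapz(P * g, mu)

# Case B of the text: (T_nu, R_sh9, Gamma_sh) = (6 MeV, 10, 5), i.e. eta ~ 0.21.
for E in [0.01, 0.1, 1.0, 3.0]:                     # energies in PeV
    print(E, mean_survival(E, Gamma_sh=5, T_MeV=6.0, R_sh9=10.0))
```

For case B the suppression becomes appreciable around a few \(\times 0.1\) PeV, consistent with \(E_{0,\rm PeV}\sim 0.1/\eta\approx 0.5\).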
To estimate the effects of annihilation with LE neutrinos, we focus on the all-flavor spectrum and flavor composition emerging from a source. Simply for illustration, we consider a case frequently discussed in the literature, where HE \(\nu_{\mu}\), \(\bar{\nu}_{\mu}\), \(\nu_{e}\), and \(\bar{\nu}_{e}\) are produced initially in ratios of \(2:2:1:1\) with an all-flavor flux spectrum \(\phi^{(0)}(E)\). In this case, the emerging all-flavor flux spectrum \(\phi(E)\) can be estimated by \[\frac{\phi}{\phi^{(0)}}\simeq\frac{\langle P_{\nu_{\mu}}(E)\rangle+\langle P_{\bar{\nu}_{\mu}}(E)\rangle}{3}+\frac{\langle P_{\nu_{e}}(E)\rangle+\langle P_{\bar{\nu}_{e}}(E)\rangle}{6}, \tag{11}\] and the corresponding flavor ratio is4 Footnote 4: It should be pointed out that Eqs. (11) and (12) neglect the secondary HE neutrinos produced from \(\nu\bar{\nu}\) annihilation via the \(Z\) resonance. This is well justified as the branching ratio of the \(Z\) boson decay into each neutrino flavor is only \(\sim 6\%\). \[R_{\mu/e}\simeq\frac{\phi_{\nu_{\mu}}+\phi_{\bar{\nu}_{\mu}}}{\phi_{\nu_{e}}+\phi_{\bar{\nu}_{e}}}=\frac{2[\langle P_{\nu_{\mu}}(E)\rangle+\langle P_{\bar{\nu}_{\mu}}(E)\rangle]}{\langle P_{\nu_{e}}(E)\rangle+\langle P_{\bar{\nu}_{e}}(E)\rangle}. \tag{12}\] It is apparent that Eqs. (11) and (12) can be straightforwardly extended to accommodate other scenarios as well. The parameters \(T_{\nu}\), \(R_{\rm sh}\), and \(\Gamma_{\rm sh}\) could span relatively wide ranges. To illustrate a significant annihilation effect, we focus on cases with mildly relativistic jets [see Eq. (8)], and take the following three representative parameter sets with \((T_{\nu,{\rm MeV}},R_{{\rm sh},9},\Gamma_{\rm sh})=(3,10,5)\), \((6,10,5)\), and \((8,3,3)\) for cases A, B, and C, respectively. The corresponding values of \(\eta\) are 0.013, 0.21, and 16.9, respectively. We show \(\phi/\phi^{(0)}\) as functions of \(E\) for the different flavor evolution scenarios of LE neutrinos in Fig. 4a. For each parameter set, \(\phi/\phi^{(0)}\) is not sensitive to the flavor evolution of LE neutrinos and follows the form of \(\langle P_{\nu_{\alpha}}(E)\rangle\) in Eq. (7). As \(E_{0}\) is \(\sim 1\) PeV for \(\eta=0.1\) and decreases for larger \(\eta\) (Fig. 3a), annihilation of \(\sim 10\) TeV to \(1\) PeV neutrinos is marginal, significant, and severe for cases A, B, and C, respectively. For case C, \(\langle P_{\nu_{\alpha}}(E)\rangle\sim\langle P_{\bar{\nu}_{\alpha}}(E)\rangle\propto E^{-n}\) with \(n\approx 0.4\)-0.5 at \(E\gtrsim 0.1\) PeV [Eq. (7) and Fig. 3]. Effects of the flavor evolution of LE neutrinos are more evident for \(R_{\mu/e}\), as shown in Figs. 4b and 4c for cases B and C, respectively. In particular, \(R_{\mu/e}\) for scenarios NE and EE differs markedly from 2, the value for the case without annihilation of HE neutrinos. Because annihilation is more efficient for \(\nu\) and \(\bar{\nu}\) of the same flavor, more HE \(\nu_{e}\) and \(\bar{\nu}_{e}\) are annihilated than \(\nu_{\mu}\) and \(\bar{\nu}_{\mu}\) in scenario NE, where the thermal \(\bar{\nu}_{e}\) and \(\nu_{e}\) remain unchanged. This preferential destruction of HE \(\nu_{e}\) and \(\bar{\nu}_{e}\) is enhanced as the annihilation probability increases with \(E\) [Eq. (4)]. Therefore, \(R_{\mu/e}\) increases from 2.2 to 3.2 for case B (Fig. 4b) and from 2.6 to 4 for case C (Fig. 4c) as \(E\) increases from 20 TeV to 2 PeV in scenario NE. 
In contrast, all thermal \(\bar{\nu}_{e}\) (\(\nu_{e}\)) are converted into \(\bar{\nu}_{\mu}\) (\(\nu_{\mu}\)) in scenario EE, for which the preferential destruction of HE \(\nu_{\mu}\) (\(\bar{\nu}_{\mu}\)) results in a decreasing \(R_{\mu/e}\) from 1.8 to 1.2 for case B and from 1.5 to 1 for case C as \(E\) increases. In scenarios NO and IO, thermal neutrinos evolve into combinations of three flavors, for which annihilation of HE \(\nu_{\mu}\) and \(\bar{\nu}_{\mu}\) is comparable to Figure 3: Fitting parameters \(E_{0}\) and \(n\) as functions of \(\eta\) for \(\langle P_{\nu_{\mu}}(E)\rangle\) in the NO scenario of LE neutrino flavor evolution. The three trends for \(E_{0}\) and \(n\) from left to right are for \(\Gamma_{\rm sh}=10\) (blue), 5 (green), and 3 (red), respectively. The dashed line corresponds to \(E_{0}=0.1\) PeV. that of \(\nu_{e}\) and \(\bar{\nu}_{e}\). This reduces \(R_{\mu/e}\) from 2 by significantly smaller amounts than scenario EE (Figs. 4b and 4c). For scenario FE (not shown), the flavor composition of HE neutrinos is not affected and \(R_{\mu/e}\) is always equal to 2. ## IV Effects on the neutrinos in a slow jet supernova model We have conducted a parametric and largely model-independent study to investigate the impacts of annihilation with LE neutrinos on HE neutrinos. For mildly-relativistic or even non-relativistic jets propagating inside the stellar envelope or the ejecta from a BNSM, we may expect a significant annihilation effect on the HE neutrinos produced, as long as the associated \(R_{\rm sh}\) does not exceed \(\sim 10^{11}\) cm so the LE and HE neutrinos can meet. As an illustrative case, below we focus on HE neutrinos produced at internal shocks in a slow jet supernova model, which can possibly contribute significantly to the diffuse HE neutrino flux. The jets, especially those (partly) powered by the annihilation of accretion disk neutrinos, will propagate within the LE neutrino bath. The effects of these LE neutrinos on the jet dynamics and particle acceleration in jet-induced shocks need to be estimated. Taking a typical cross section of \(10^{-42}\) cm\({}^{2}\) for LE neutrinos of 10 MeV, the associated optical depth for nucleons within the jet at radius \(r\) is \(\sim 10^{-6}L_{\nu,53}r_{10}^{-1}\). However, the interaction between LE neutrinos and HE protons may become significant during acceleration, as the cross section of the \(p\nu\) process increases with the proton energy. It can potentially compete with the \(p\gamma\) process and thus have a notable impact on proton acceleration and HE neutrino production. For a proton of 1 PeV, a neutrino of 10 MeV, and a substantial intersection angle between their momenta, the \(p\nu\) cross section is about \(10^{-34}\) cm\({}^{2}\), which is about 6 orders of magnitude lower than that of the \(p\gamma\) process. Assuming that a fraction, \(\epsilon_{e}\), of the jet energy is converted into thermal radiation, the number density of thermal photons in the comoving frame of shocks can be estimated as \(n^{\prime}_{\gamma}\sim 10^{26}\)\([\epsilon_{e,0.1}L_{\rm iso,52}/(R_{\rm sh,10}^{2}\Gamma_{\rm sh,0.5}^{2})]^{3/4}\) cm\({}^{-3}\)[68; 70], where \(L_{\rm iso}\) is the jet isotropic luminosity, and \(\Gamma_{\rm sh}\) and \(R_{\rm sh}\) are the shock Lorentz factor and radius, respectively. For comparison, the LE neutrino density is \(n^{\prime}_{\nu}\sim 6\times 10^{26}\)\(L_{\nu,53}\Gamma_{\rm sh,0.5}R_{\rm sh,10}^{-2}\) cm\({}^{-3}\), which is similar to the photon number density. 
Consequently, the \(p\nu\) process is always unimportant relative to the \(p\gamma\) process and can be safely ignored. It was emphasized that shocks generated deep inside a star are likely radiation-mediated and particle acceleration at such shocks is inefficient [74; 127]. To form a collisionless shock that supports efficient particle acceleration, Ref. [74] considered fast (\(\Gamma\sim 100\)) and low-luminosity (\(L_{\rm iso}\lesssim 10^{48}\) erg) jets as potential HE neutrino sources, but the HE neutrinos so produced will not be affected by \(\nu\bar{\nu}\) annihilation due to a large Lorentz factor. The slow jet model, on the other hand, is subject to the radiation constraint and could be ineffective in producing HE neutrinos [74]. However, if the jets are mildly magnetized, a collisionless subshock could form in the radiation-dominated region [128], which opens up the possibility for slow jets to act as potential HE neutrino emitters [86]. Production of HE neutrinos at internal shocks caused by slow jets in CCSNe has been extensively studied [70; 73; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89]. In a strong magnetic field \(B\propto\Gamma_{\rm sh}^{-1}R_{\rm sh}^{-1}\), the produced \(\pi^{\pm}\) and \(K^{\pm}\) experience severe synchrotron cooling, and therefore, the resulting HE neutrino flux from their decay at TeV-PeV is highly suppressed. It was shown that the decays of charmed D-mesons from \(pp\) collisions, though with lower yields compared to \(\pi^{\pm}\) and \(K^{\pm}\), contribute predominantly to the HE neutrino flux above \(\sim 0.1\) PeV in the slow jet model due to a much shorter decay lifetime, thereby giving rise to a significantly harder neutrino flux extending up to a few PeV [72]. For typical slow jets and proper parameters of quantum chromodynamics, neutrinos from the slow jet model including charm decays could dominate the observed diffuse flux at IceCube [72; 77; 99; 129]. It should be noted that the associated HE neutrino spectrum from charm decay follows the primary HE proton spectrum with a spectral index of \(\sim 2\)-2.3 typically expected for (non-)relativistic diffusive shock acceleration (see, e.g., [130; 131; 132]). This is harder than the observed neutrino spectrum at IceCube with a spectral index of \(\sim 2.4\) to \(\sim 2.9\) for \(E\) ranging from a few 10 TeV to a few PeV based on different data sets [133; 134; 135]. Interestingly, the annihilation with accretion disk neutrinos can steepen the HE neutrino flux above 0.1 PeV for \(\eta\gtrsim 1\), which leads to a better agreement between the emerging HE neutrino spectrum and the IceCube observations. The production of TeV-PeV HE neutrinos from charm decay in slow jets is very robust.

Figure 4: (a) Effects of \(\nu\bar{\nu}\) annihilation on \(\phi/\phi^{(0)}\) for \((T_{\nu,{\rm MeV}},R_{\rm sh,9},\Gamma_{\rm sh})=(3,10,5)\), \((6,10,5)\), and \((8,3,3)\) corresponding to cases A, B, and C, respectively. (b) Effects of \(\nu\bar{\nu}\) annihilation on \(R_{\mu/e}\) for case B. (c) Same as (b) but for case C. In each case, the curves from top to bottom are for LE neutrino flavor evolution scenarios NE, NO, IO, and EE, respectively.
Once protons are accelerated to multi-PeV and the \(pp\) reaction rate is not orders of magnitude lower than the \(p\gamma\) reaction rate, the dominance of the charmed contribution to the HE neutrino flux above \(\sim 0.1\) PeV is guaranteed as the contributions from \(\pi^{\pm}\) and \(K^{\pm}\) decays are highly suppressed due to strong synchrotron cooling [66; 67; 68; 69; 70]. The maximal energy of protons is determined by the balance between shock acceleration and the cooling processes that are dominated by either the \(p\gamma\) reaction or synchrotron radiation. Based on the formulae given in [70; 72], \(E_{p,\rm max}\) in the stellar rest frame can be estimated as \[E_{p,\rm max} \sim\min\left[2.5\times\kappa_{1}^{-1/4}\Gamma_{\rm sh,0.5}R_{\rm sh,10}^{1/2}\epsilon_{B,-1}^{-1/4}L_{j,50.5}^{-1/4},\right.\] \[\left.0.65\kappa_{1}^{-1}\Gamma_{\rm sh,0.5}R_{\rm sh,10}^{1/2}\epsilon_{B,-1}^{1/2}\epsilon_{e,-1}^{-3/4}L_{j,50.5}^{-1/4}\right]\,{\rm PeV}, \tag{13}\] where \(\kappa\sim 1\)-10 is the shock acceleration parameter related to the diffusion coefficient, \(\epsilon_{e}\) and \(\epsilon_{B}\) are the fractions of energy deposited into thermal radiation and magnetic field, respectively, and \(L_{j}\) is the total jet luminosity. The ratio of the \(pp\) rate to the \(p\gamma\) rate is \[t_{pp}^{-1}/t_{p\gamma}^{-1}\sim 0.25R_{\rm sh,10}^{-1/2}\epsilon_{e}^{-3/4}L_{j,50.5}^{1/4}. \tag{14}\] As can be seen from Eqs. (13) and (14), for mildly relativistic shocks occurring inside stars with typical parameters, charm decay within the slow jet model could be highly relevant for making TeV-PeV neutrinos. We note that for mildly magnetized jets with large \(\epsilon_{B}\sim 0.5\), the above conditions can also be fulfilled, but the contributions from \(\pi^{\pm}\) and \(K^{\pm}\) are suppressed more severely and the charmed mesons start to contribute to HE neutrinos at lower energies. An accurate measurement of the flavor composition of HE neutrinos can provide a test of the slow jet model and the annihilation effect. In this model, charm decay dominates the HE neutrino flux above 0.1 PeV, while \(\pi^{\pm}\) and \(K^{\pm}\) decay can contribute significantly below \(\sim 0.1\) PeV [72; 77; 99; 129]. Different from the standard \(\pi^{\pm}\) and \(K^{\pm}\) decay that gives rise to \(F_{\nu_{e}}:F_{\bar{\nu}_{e}}:F_{\nu_{\mu}}:F_{\bar{\nu}_{\mu}}\approx 1:1:2:2\), the charmed mesons produce almost equal numbers of \(\nu_{e}\), \(\bar{\nu}_{e}\), \(\nu_{\mu}\), and \(\bar{\nu}_{\mu}\) from semileptonic decays. Therefore, for the slow jet model with charm decays, one expects \(F_{\nu_{\mu}+\bar{\nu}_{\mu}}/F_{\nu_{e}+\bar{\nu}_{e}}\approx 2\) at low energies5, and this ratio gradually decreases to 1 at \(E\gtrsim 0.1\) PeV. Such an energy-dependent flavor composition can be tested when more statistics are accumulated [129]. Footnote 5: Note that \(\pi^{\pm}\) and \(K^{\pm}\) as well as \(\mu^{\pm}\) from their decay could be subject to severe synchrotron cooling in the strong magnetic field, and consequently, the \(\nu_{e}\) and \(\bar{\nu}_{e}\) fluxes at low energies mainly from secondary \(\mu^{\pm}\) decay would be further suppressed. In what follows, we show that the flavor composition of HE neutrinos in the slow jet model can be further affected by \(\nu\bar{\nu}\) annihilation. Without computing the energy-dependent flavor composition, we focus on HE neutrinos above 0.1 PeV for which the annihilation effect becomes significant, and simply assume that they are all from charm decay.
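As an aside before returning to the flavor composition, Eqs. (13) and (14) above are straightforward to evaluate for fiducial parameters; a minimal sketch (function names are ours, and inputs follow the \(Q_{x}=Q/10^{x}\) convention):

```python
# Sketch of Eqs. (13) and (14). Inputs follow Q_x = Q / 10^x, e.g.
# L_j_505 = L_j / 10^50.5, eps_B_m1 = eps_B / 0.1, kappa_1 = kappa / 10.

def E_p_max_PeV(kappa_1=1.0, Gamma_sh_05=1.0, R_sh_10=1.0,
                eps_B_m1=1.0, eps_e_m1=1.0, L_j_505=1.0):
    """Eq. (13): the two entries of the min correspond to the two
    cooling-limited regimes discussed in the text."""
    first = (2.5 * kappa_1**-0.25 * Gamma_sh_05 * R_sh_10**0.5
             * eps_B_m1**-0.25 * L_j_505**-0.25)
    second = (0.65 / kappa_1 * Gamma_sh_05 * R_sh_10**0.5
              * eps_B_m1**0.5 * eps_e_m1**-0.75 * L_j_505**-0.25)
    return min(first, second)

def pp_over_pgamma(R_sh_10=1.0, eps_e=0.1, L_j_505=1.0):
    """Eq. (14): ratio of the pp rate to the p-gamma rate."""
    return 0.25 * R_sh_10**-0.5 * eps_e**-0.75 * L_j_505**0.25

print(E_p_max_PeV())     # 0.65 PeV when all normalized parameters are 1
print(pp_over_pgamma())  # ~1.4 for eps_e = 0.1
```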
With \(\nu\bar{\nu}\) annihilation, the flavor ratio \(R_{\mu/e}\) introduced in Eq. (12) is approximately \([\langle P_{\nu_{\mu}}(E)\rangle+\langle P_{\bar{\nu}_{\mu}}(E)\rangle]/[\langle P_{\nu_{e}}(E)\rangle+\langle P_{\bar{\nu}_{e}}(E)\rangle]\) for HE neutrinos from charm decay. For illustration, we take \(\Gamma_{\rm sh}=2\), \(R_{\rm sh}=3\times 10^{10}\) cm, and \(T_{\nu}=5\) MeV. The corresponding \(\eta\) is about 1.3, indicating that the HE neutrino flux above 0.1 PeV can be softened with a spectral index change of 0.4-0.5. These values of \(\Gamma_{\rm sh}\) and \(R_{\rm sh}\) are slightly different from those adopted in Ref. [72], but can lead to similar HE neutrino fluxes at \(E\gtrsim 0.1\) PeV as charm decay dominates [see also Eqs. (13) and (14) and the related discussion]. If we adopt the parameters \(\Gamma_{\rm sh}=3\) and \(R_{\rm sh}=5\times 10^{10}\) cm as in [72], we can choose \(T_{\nu}\sim 8\) MeV (indicating a higher accretion disk neutrino luminosity) to obtain \(\eta\sim 1\) for the annihilation effect to be important. Figure 5 shows the resulting \(R_{\mu/e}(E)\) at \(E>0.1\) PeV from charm decay for different LE neutrino flavor evolution scenarios after \(\nu\bar{\nu}\) annihilation is included. For evolution scenarios NO, IO, and EE, \(R_{\mu/e}\) remains consistently below 1, a value that cannot be attained for the standard \(\pi^{\pm}\) and \(K^{\pm}\) decay scenario with annihilation included [see, e.g., Fig. 4(c)]. For evolution scenario NE, \(R_{\mu/e}\) stays at \(\sim 1.4\)-1.5. The behavior of \(R_{\mu/e}\) becomes more interesting if neutrinos with energies below 0.1 PeV are also considered. As mentioned above, \(R_{\mu/e}\approx 2\) at \(E\sim 1\) TeV as expected from \(\pi^{\pm}\) and \(K^{\pm}\) decay. This ratio gradually decreases to \(\sim 1\) at \(E\sim 0.1\) PeV due to the growing relevance of charm decay. For \(E\gtrsim 0.1\) PeV, the annihilation effect becomes important, and leads to a further reduction of \(R_{\mu/e}\) to below 1 in the LE neutrino flavor evolution scenarios NO, IO, and EE, but to an increase of \(R_{\mu/e}\) to \(\sim 1.4\)-1.5 for the NE scenario (Fig. 5). These distinctive flavor compositions, along with their variations with neutrino energy, if precisely measured at the forthcoming neutrino telescopes [41; 42; 43; 129], could provide a test of the slow jet model with charm decay and annihilation of LE and HE neutrinos inside the sources.

## V Discussion and Summary

We have studied the annihilation of HE and LE neutrinos in BNSMs and CCSNe and how this process can affect the spectra and flavor ratio of HE neutrinos emerging from these sources. We show that the potential effect can be characterized by a single parameter \(\eta\) [Eq. (8)], which captures the key dependence on astrophysical conditions relevant to HE neutrino production. Annihilation probability increases with \(\eta\) and starts to be significant for neutrinos of \(\gtrsim 0.1\) PeV at \(\eta\sim 1\). For a specific \(\eta\), annihilation probability increases with energy, which modifies the emerging spectra. For \(\eta\gtrsim 1\), the all-flavor spectrum at \(E\gtrsim 0.1\) PeV is modified by a factor \(E^{-n}\) with \(n\approx 0.4\)-\(0.5\). Moreover, we have found that although the flavor evolution of the accretion disk neutrinos does not affect the above modification of the spectral index, it can change the emerging flavor composition of HE neutrinos.
Among the five flavor evolution scenarios that we have considered (see Table 1), scenarios NE and EE are particularly interesting as they can lead to large increases and decreases, respectively, of the emerging \(\mu\)-to-\(e\) flavor ratio \(R_{\mu/e}\) from the canonical values and do so in an energy-dependent manner. As a specific example, we have considered the HE neutrino production within a slow jet supernova model that could potentially explain the diffuse neutrino flux observed at IceCube, particularly when charm decay is included. Interestingly, the spectral softening due to \(\nu\bar{\nu}\) annihilation can improve the agreement with the observed data. Additionally, the variations and energy dependence of \(R_{\mu/e}\) could be tested at the next-generation large neutrino observatories such as IceCube-Gen2 [41; 42; 43; 129]. We plan to present a comprehensive and detailed treatment of these aspects in a separate future study. Note that the HE neutrinos produced inside stars may also experience absorption by stellar material. This would result in a cutoff in the HE neutrino spectrum, while retaining the same flavor composition. This effect can be ignored for BNSMs and for HE neutrinos produced near the edge of the stellar envelope or outside the stars. In particular, for Wolf-Rayet stars with the hydrogen envelope stripped off, the stellar radius can be as small as \(3\times 10^{10}\) cm [136]. However, the absorption effect can be significant for CCSNe if \(R_{\rm sh}\lesssim 10^{10}\) cm. It is also possible that the stellar matter is pushed away by the earlier jets [137] or the winds from the proto-neutron star prior to the formation of the black-hole accretion disk, and therefore, the matter absorption effect is suppressed. These and other relevant details should be addressed by specific models or simulations. Finally, the central engine driving relativistic jets could be a magnetar [138; 139], which also emits substantial fluxes of LE neutrinos. We expect that \(\nu\bar{\nu}\) annihilation would similarly impact the HE neutrino flux in the magnetar case. ###### Acknowledgements. This work was supported in part by the National Natural Science Foundation of China (12205258) and the Natural Science Foundation of Shandong Province, China [ZR2022JQ04 (G.G.)], the US Department of Energy [DE-FG02-87ER40328 (Y.Z.Q.)], and the National Science and Technology Council (No. 111-2628-M-001-003-MY4), the Academia Sinica (No. AS-CDA-109-M11), and the Physics Division of the National Center for Theoretical Sciences, Taiwan (M.R.W.).
2302.05326
Scalable Real-Time Recurrent Learning Using Columnar-Constructive Networks
Constructing states from sequences of observations is an important component of reinforcement learning agents. One solution for state construction is to use recurrent neural networks. Back-propagation through time (BPTT), and real-time recurrent learning (RTRL) are two popular gradient-based methods for recurrent learning. BPTT requires complete trajectories of observations before it can compute the gradients and is unsuitable for online updates. RTRL can do online updates but scales poorly to large networks. In this paper, we propose two constraints that make RTRL scalable. We show that by either decomposing the network into independent modules or learning the network in stages, we can make RTRL scale linearly with the number of parameters. Unlike prior scalable gradient estimation algorithms, such as UORO and Truncated-BPTT, our algorithms do not add noise or bias to the gradient estimate. Instead, they trade off the functional capacity of the network for computationally efficient learning. We demonstrate the effectiveness of our approach over Truncated-BPTT on a prediction benchmark inspired by animal learning and by doing policy evaluation of pre-trained policies for Atari 2600 games.
Khurram Javed, Haseeb Shah, Rich Sutton, Martha White
2023-01-20T23:17:48Z
http://arxiv.org/abs/2302.05326v3
# Scalable Real-Time Recurrent Learning Using Columnar-Constructive Networks

###### Abstract

State construction from sensory observations is an important component of a reinforcement learning agent. One solution for state construction is to use recurrent neural networks. Back-propagation through time (BPTT) and real-time recurrent learning (RTRL) are two popular gradient-based methods for recurrent learning. BPTT requires the complete sequence of observations before computing gradients and is unsuitable for online real-time updates. RTRL can do online updates but scales poorly to large networks. In this paper, we propose two constraints that make RTRL scalable. We show that by either decomposing the network into independent modules, or learning the network incrementally, we can make RTRL scale linearly with the number of parameters. Unlike prior scalable gradient estimation algorithms, such as UORO and Truncated-BPTT, our algorithms do not add noise or bias to the gradient estimate. Instead, they trade off the functional capacity of the network to achieve scalable learning. We demonstrate the effectiveness of our approach over Truncated-BPTT on a benchmark inspired by animal learning and by doing policy evaluation for pre-trained Rainbow-DQN agents in the Arcade Learning Environment (ALE). Scalable recurrent learning, Online learning, RTRL, Cascade correlation networks, Agent-state construction

## 1 Introduction

Learning by interacting with the world is a powerful framework for building systems that can autonomously achieve goals in complex worlds. A key ingredient for building autonomous systems is agent-state construction--learning a compact representation of the history of interactions that helps in predicting and controlling the future. One solution for state construction is to use differentiable recurrent neural networks (RNNs) learned to minimize prediction error (Kapturowski _et al_., 2018 and Vinyals _et al_., 2019). State construction by minimizing prediction error using RNNs involves structural credit assignment--identifying how to change network parameters to improve predictions. In RNNs, parameters can influence predictions made in the future and credit assignment requires tracking the influence of the parameters on these future predictions. Two popular algorithms for gradient-based structural credit assignment are back-propagation through time (BPTT) (Werbos, 1988; Robinson and Fallside, 1987) and real-time recurrent learning (RTRL) (Williams and Zipser, 1989). We define scalable real-time state construction as the ability of the agent to update the agent-state in real-time while interacting with the world. The agent does not postpone learning to the future by storing data, nor does it have access to specialized hardware for learning that is removed at the time of deployment. Instead, learning is continual and never-ending with no distinction between learning and deployment. Neither BPTT nor RTRL is suitable for scalable real-time state construction. BPTT stores all past activations and uses sequential computation proportional to the length of the data-stream seen so far for estimating the gradient. As a result, it neither scales well nor learns in real time. RTRL can estimate the gradient on the go, and does not require more computation per-step for longer sequences. However, RTRL scales poorly with an increase in the number of parameters of the RNN. Both BPTT and RTRL can be approximated to make them more suitable for online learning.
A promising direction to scale gradient-based credit assignment to large networks is to approximate the gradient. Elman (1990) proposed to ignore the influence of parameters on future predictions entirely for training RNNs. This resulted in a computationally cheap but biased algorithm. Williams and Peng (1990) proposed Truncated-BPTT (T-BPTT), an algorithm that tracks the influence of a parameter on predictions made up to k steps in the future, where k is a hyperparameter. T-BPTT works well for a range of problems (Mikolov _et al._, 2009, 2010; Sutskever, 2013 and Kapturowski _et al._, 2018). A limitation of T-BPTT is that the resultant gradient is blind to long-range dependencies. Mujika _et al._ (2018) showed that on a simple copy task, T-BPTT failed to learn dependencies beyond the truncation window. Tallec _et al._ (2018) demonstrated T-BPTT can even diverge when a parameter has a negative long-term effect on a target and a positive short-term effect. Hochreiter and Schmidhuber (1997) used a diagonal approximation to RTRL (Diagonal-RTRL) that scales linearly with the number of parameters. Menick _et al._ (2021) generalized the diagonal approximation with an algorithm called SnAp-\(k\). Diagonal-RTRL and SnAp-1 are not blind to all long-term dependencies, but introduce significant bias in the gradient estimate for dense recurrent networks. They assume that changing a recurrent feature will not change the values of other features, an assumption that does not hold in densely connected recurrent networks. SnAp-\(k\) for \(k>1\) is less biased, but scales poorly. Tallec _et al._ (2017) proposed UORO, a computationally efficient algorithm for getting unbiased samples of the gradient. However, the resulting samples are noisy and only effective for learning with small step-sizes. Menick _et al._ (2021) showed that UORO performs poorly even on simple benchmarks. Existing methods for scaling gradient-based recurrent learning approximate the gradient but do not make assumptions about the function class of the recurrent network. In this work, we propose a different strategy: instead of introducing bias or noise in the gradient estimate, we limit the function class of the RNNs to enable scalable, unbiased, and noise-free gradient estimation. Towards this goal, we first introduce Columnar networks. Columnar networks restrict the network structure to be composed of independent, potentially deep, columns. Each column has a scalar recurrent state, and features in one column are not connected to features in other columns. As a result, the gradient of each recurrent feature is non-zero w.r.t parameters of exactly one column. The RTRL update for this class of restricted RNNs is computationally efficient: linear instead of quadratic in the number of parameters. The ability to use RTRL is significant, as gradient estimates are readily available at every step for online and real-time updates. We find, however, that the Columnar network structure lacks hierarchical recurrent features--recurrent features that take as input other recurrent features--and can perform poorly. To introduce hierarchy in features, we investigate a second approach called Constructive networks. Constructive networks learn the recurrent network iteratively, one feature at a time. Constructive networks can learn deep recurrent features efficiently, but are incapable of learning multiple features in parallel. Finally, we propose Constructive-Columnar networks (CCNs) that combine the main ideas of both the Columnar and the Constructive networks.
The idea is to learn multiple columns using RTRL, similar to Columnar networks. Then freeze these columns, and learn multiple new columns that take as input the features of all the existing frozen columns. This approach iteratively constructs multiple columns of features in parallel, overcoming the primary limitation of both Constructive and Columnar networks. We compare CCNs to RNNs trained using T-BPTT and find that CCNs learn more effectively under restricted computation. As we increase the truncation in T-BPTT, performance steadily improves as the gradient estimates become less biased; however, computation and memory also grow. We find that under the same per-step computation budget, CCNs can perform much better than T-BPTT, especially when relatively small agents have to interface with large and complex environments.

Figure 1: Two structures of recurrent neural networks for which gradients can be estimated in a scalable way without significant bias or noise. Dotted lines represent parameters that are updated at every step, whereas solid lines are weights that are fixed. Once the weight is fixed, the feature is fixed and gradient estimates do not need to propagate past the feature. This allows us to compute unbiased gradients, without having to backpropagate far back in time. Recurrent networks with a columnar structure can be trained end-to-end using gradients without any truncation, only requiring \(O(n)\) operations and memory per step. However, columnar networks do not have hierarchical recurrent features—recurrent features made out of other recurrent features. Constructive networks have hierarchical recurrent features; however, they must be trained incrementally to avoid bias in the gradient estimate. Incremental learning is achieved by initializing all \(w_{i}\) to zero, and learning \(h_{1}\), \(h_{2}\), and \(h_{3}\) in three stages. In the second and third stages, parameters represented by solid lines are fixed.

We evaluate our algorithms on two partially observable benchmarks to estimate values (prediction) for reinforcement learning agents. First, we use an existing animal-learning benchmark (Rafiee _et al_., 2022), which has low-dimensional inputs and a focus on the need for memory--the only way to make accurate predictions is to remember information from many steps in the past. Second, to test the algorithms in a more complex image-based environment, we make a new benchmark based on ALE (Arcade Learning Environment) (Bellemare _et al_., 2013). We used policies of pre-trained Rainbow-DQN agents (Hessel _et al_., 2018 and Fujita _et al_., 2021). Removing frame-stacking (Mnih _et al_., 2015) in ALE makes the environments partially observable. We further down-scaled the observations to make partial observability more pronounced. Our prediction benchmark based on ALE is available here.

## 2 Problem Formulation

We formulate the goal of a learner as predicting the discounted sum of a cumulant from an online stream of experience. The agent sees a feature vector \(\mathbf{x}_{t}\in\mathbb{R}^{n}\) at time step \(t\) and predicts the discounted sum of the future value of a cumulant \(c_{t}\), where \(c_{t}\) is a fixed index of the vector \(\mathbf{x}_{t}\).
The goal of the agent is to minimize the sum of squared errors between the prediction and the empirical return incurred over time, _i.e._, the agent aims to minimize: \[\mathcal{L}(k,T)=\frac{1}{T}\sum_{t=k}^{T+k}(y_{t}-\sum_{j=t+1}^{\infty}\gamma^{j-t-1}c_{j})^{2} \tag{1}\] where \(k,T\) control the horizon over which the prediction error is accumulated, and \(y_{t}\) is the prediction made at time step \(t\). Note that the error is measured w.r.t the predictions made over time and not using a final set of weights. Our problem formulation can capture various online temporal-prediction and supervised-learning problems. For example, setting the cumulant to be the reward turns our problem formulation into policy evaluation (Sutton & Barto, 2018). Similarly, by setting \(\gamma=0\), the problem formulation can represent online supervised recurrent learning benchmarks.

### Learning under Resource Constraints

We focus on the under-parameterized setting where the environments are more complex than the learners. The learners have a fixed compute and memory budget per-step that they can allocate however they choose. For instance, a learner can pick an expensive learning algorithm, such as RTRL, and satisfy the compute constraint by using a smaller recurrent network. Alternatively, it can choose a larger recurrent network, and learn the network using a computationally efficient learning algorithm, such as T-BPTT with a small truncation window. The focus on the under-parameterized setting, limited resources, and per-step learning emphasizes developing learning algorithms that are computationally efficient and can be applied continually. Moreover, since the real world is significantly more complex compared to even the largest recurrent networks, the under-parameterized setting is arguably a better proxy for understanding how different algorithms will behave on real-world problems.

### Learning with Recurrent Architectures

In our temporal prediction setting it is natural to assume that the learner will face a partially observable environment. The learner will want to leverage histories to improve prediction accuracy. Throughout this work, we assume that learners attempt to summarize this history by learning recurrent neural networks (RNNs). The dynamics of an RNN can be written as \[\mathbf{h}_{t}=f\left(\mathbf{h}_{t-1},\mathbf{x}_{t},\theta\right) \tag{2}\] where \(\mathbf{h}_{t}\in\mathbb{R}^{d}\) is the hidden state of the network at time \(t\), \(\mathbf{x}_{t}\in\mathbb{R}^{n}\) is the feature vector seen by the learner, \(f\) is the dynamics function of the recurrent network, and \(\theta\) are the parameters of the RNN. The hidden state \(\mathbf{h}_{t}\) of the recurrent network is linearly weighted with weights \(\mathbf{w}_{t}\in\mathbb{R}^{d}\) to make a prediction \(y_{t}\) as: \[y_{t}=\sum_{k=0}^{d-1}h_{t,k}w_{t,k} \tag{3}\] where \(h_{t,k}\) and \(w_{t,k}\) are the kth elements of vectors \(\mathbf{h}_{t}\) and \(\mathbf{w}_{t}\), respectively. To update the parameters \(\theta_{t}\) at time \(t\), we need to be able to compute the gradient of this prediction with respect to \(\theta\). Using the chain rule, we can write the gradient as \[\frac{\partial y_{t}}{\partial\theta}=\frac{\partial y_{t}}{\partial\mathbf{h}_{t}}\frac{\partial\mathbf{h}_{t}}{\partial\theta}. \tag{4}\] The key question is how to compute \(\frac{\partial\mathbf{h}_{t}}{\partial\theta}\). We can obtain a recursive formula for this expression, which is used by RTRL and by the algorithms we introduce in this work.
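Before deriving that recursion, a small sketch may help fix the objective of equations 1-3. The backward computation of the return and the truncation of the infinite sum at the end of the stream are our simplifications for illustration; the experiments in this paper instead learn online with TD(\(\lambda\)).

```python
import numpy as np

# Sketch of the objective in equations 1-3. The return sum is computed by
# a backward recursion and truncated at the end of the stream -- a
# simplification for illustration, not the paper's learning algorithm.

def returns(c, gamma):
    """G_t = sum_{j > t} gamma^(j-t-1) c_j via G_t = c_{t+1} + gamma G_{t+1}."""
    G = np.zeros(len(c))
    running = 0.0
    for t in reversed(range(len(c) - 1)):
        running = c[t + 1] + gamma * running
        G[t] = running
    return G

def loss(y, c, gamma, k, T):
    """Equation 1: squared error of the online predictions against returns."""
    G = returns(c, gamma)
    return np.mean((y[k:k + T] - G[k:k + T]) ** 2)

rng = np.random.default_rng(0)
c = rng.random(1000)          # cumulant stream (one index of x_t)
h = rng.random((1000, 8))     # hidden states from some RNN (equation 2)
w = rng.random(8)
y = h @ w                     # equation 3: linear readout of the hidden state
print(loss(y, c, gamma=0.9, k=0, T=900))
```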
To make it clear how we can use the multivariable chain rule, let us explicitly write \(\mathbf{h}_{t}(\theta)=f\left(\mathbf{h}_{t-1}(\theta),\mathbf{x}_{t},\mathbf{g}_{t}(\theta)\right)\) where \(\mathbf{g}_{t}(\theta)\doteq\theta\). Then the multivariable chain rule gives us \[\frac{\partial\mathbf{h}_{t}}{\partial\theta}=\frac{\partial\mathbf{h}_{t}}{\partial\mathbf{g}_{t}}\frac{\partial\mathbf{g}_{t}}{\partial\theta}+\frac{\partial\mathbf{h}_{t}}{\partial\mathbf{h}_{t-1}}\frac{\partial\mathbf{h}_{t-1}}{\partial\theta}, \tag{5}\] where the first term corresponds to the gradient of the RNN given \(\mathbf{h}_{t-1}\) and inputs \(\mathbf{x}_{t}\) and the second term corresponds to the indirect impact of \(\theta\) on \(\mathbf{h}_{t}\) due to its impact on \(\mathbf{h}_{t-1}\). This recursive relationship is exploited by two algorithms: BPTT and RTRL. BPTT stores the network activations and inputs from prior steps and expands equation 4 as: \[\frac{\partial y_{t}}{\partial\theta}=\frac{\partial y_{t}}{\partial\mathbf{h}_{t}}\frac{\partial\mathbf{h}_{t}}{\partial\mathbf{g}_{t}}\frac{\partial\mathbf{g}_{t}}{\partial\theta}+\frac{\partial y_{t}}{\partial\mathbf{h}_{t}}\frac{\partial\mathbf{h}_{t}}{\partial\mathbf{h}_{t-1}}\frac{\partial\mathbf{h}_{t-1}}{\partial\theta} \tag{6}\] to compute the gradient. It unrolls the formula back in time, computing and accumulating gradient until the start of the recursion at \(t=0\). RTRL, on the other hand, updates the Jacobian \(\frac{\partial\mathbf{h}_{t}}{\partial\theta}\) using equation 5 at every step. To get the gradient w.r.t the prediction, it uses the computed Jacobian in equation 4. These two algorithms both compute the same gradient, but make different compromises in terms of computation and memory. RTRL does not require storing past activations and inputs, as it can update the Jacobian using only the most recent input. However, computing the Jacobian using equation 5 requires \(O(|\mathbf{h}|^{2}|\theta|)\) operations and \(O(|\mathbf{h}||\theta|)\) memory. The size of the parameters \(|\theta|\) in a fully connected RNN is \(|\mathbf{h}|^{2}\). RTRL is therefore often said to have quartic complexity in terms of the size of the hidden state, and scales poorly to large networks. BPTT requires \(O(|\theta|t)\) memory and compute, where \(t\) is the length of the sequence. It avoids the bigger memory cost by computing the product \(\frac{\partial y_{t}}{\partial\mathbf{h}_{t}}\frac{\partial\mathbf{h}_{t}}{\partial\mathbf{g}_{t}}\frac{\partial\mathbf{g}_{t}}{\partial\theta}\) directly, rather than separately computing the Jacobian and then multiplying by \(\frac{\partial y_{t}}{\partial\mathbf{h}_{t}}\). For sequences shorter than \(|\mathbf{h}|^{2}\), BPTT is cheaper than RTRL for fully connected RNNs.

## 3 Constructive Columnar Networks

In this section we develop a new approach to recurrent learning, called Constructive Columnar networks (CCNs). CCNs leverage two key ideas: First, RTRL is computationally efficient for modular recurrent networks where each module has a scalar hidden state; we call these modular networks Columnar networks. Second, RTRL is computationally efficient if the recurrent units of a recurrent network are learned incrementally, as opposed to learning them all simultaneously. We call the incremental learning approach Constructive networks.
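For reference before the two ideas are developed, the sketch below implements the full RTRL recursion of equation 5 for a plain tanh RNN (our simplification; the networks in this paper are LSTMs). The per-step update of the \(|\mathbf{h}|\times|\theta|\) Jacobian is exactly the cost that the columnar and constructive constraints remove.

```python
import numpy as np

# Dense RTRL for h_t = tanh(Wx x_t + Wh h_{t-1}), a plain tanh RNN.
# J holds dh_t/dtheta with theta = (Wx, Wh) flattened row by row;
# updating it via equation 5 costs O(|h|^2 |theta|) per step.

def rtrl_step(x, h_prev, Wx, Wh, J):
    h = np.tanh(Wx @ x + Wh @ h_prev)
    D = np.diag(1.0 - h**2)                      # dh/dpre, pre = Wx x + Wh h_prev
    n, m = Wh.shape[0], x.shape[0]
    # dh_t/dg_t: immediate dependence of h on the parameters
    direct = np.zeros((n, n * m + n * n))
    for i in range(n):
        direct[i, i * m:(i + 1) * m] = x                       # w.r.t. row i of Wx
        direct[i, n * m + i * n:n * m + (i + 1) * n] = h_prev  # w.r.t. row i of Wh
    # Equation 5: J_t = (dh/dg)(dg/dtheta) + (dh/dh_{t-1}) J_{t-1}
    J_new = D @ direct + D @ Wh @ J
    return h, J_new

n, m = 4, 3
rng = np.random.default_rng(0)
Wx, Wh = 0.5 * rng.normal(size=(n, m)), 0.5 * rng.normal(size=(n, n))
h, J = np.zeros(n), np.zeros((n, n * m + n * n))
for _ in range(10):
    h, J = rtrl_step(rng.normal(size=m), h, Wx, Wh, J)
w = rng.normal(size=n)
grad_theta = w @ J        # equation 4: gradient of y = w.h w.r.t. theta
```

Even for this toy network with 4 hidden units and 28 parameters, J has 112 entries; for a fully connected RNN it grows as \(|\mathbf{h}|^{3}\), which is the scaling problem the next two subsections address.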
Figure 1 visualizes the central ideas behind Columnar and Constructive networks. Both Columnar and Constructive networks, on their own, show promising results but also have limitations. Columnar networks perform poorly on difficult tasks, and Constructive networks can only learn one feature at a time. We show that their weaknesses can be largely overcome by combining the two ideas to create a third learning system that we call Constructive Columnar networks (CCNs).

### Columnar Networks

Columnar networks organize the recurrent network such that each scalar recurrent feature is independent of other recurrent features. Let \(h_{t,k}\) be the kth index of the state vector \(\mathbf{h}_{t}\). Then, in columnar networks, \[h_{t,k}=f_{k}(h_{t-1,k},\mathbf{x}_{t},\theta_{t,k}). \tag{7}\] Each \(f_{k}\) outputs a scalar recurrent feature and is called a column.1 \(\theta_{t,k}\) is the set of parameters of the kth column. For any \(i\neq j\), the sets \(\theta_{t,i}\) and \(\theta_{t,j}\) are disjoint. A columnar network consists of \(d\) columns. The outputs of all columns are concatenated to get the \(d\)-dimensional hidden-state vector \(\mathbf{h}_{t}\). Figure 1 (left) shows a graphical representation of a Columnar network. Note that changing \(h_{1}\) has no influence on the value of \(h_{2}\) or \(h_{3}\). Footnote 1: This terminology comes from the connection to structure observed in brains (Mountcastle 1957). Because recurrent features in a columnar network are independent of each other, we can apply RTRL to each of them individually. To better understand why, let us rederive our recursive formula for the gradient. For \(\theta_{k}\) the parameters for the \(k\)th column, we have \[\frac{\partial y_{t}}{\partial\theta_{k}}=\frac{\partial y_{t}}{\partial\mathbf{h}_{t}}\frac{\partial\mathbf{h}_{t}}{\partial\theta_{k}}=\sum_{j=1}^{d}\frac{\partial y_{t}}{\partial h_{t,j}}\frac{\partial h_{t,j}}{\partial\theta_{k}}=\frac{\partial y_{t}}{\partial h_{t,k}}\frac{\partial h_{t,k}}{\partial\theta_{k}}\] where most of the terms in the sum are zero because \(\theta_{k}\) does not influence them. Therefore, we only have to compute \(\frac{\partial h_{t,k}}{\partial\theta_{k}}\) with RTRL. Like before, we can write this recursively using \(h_{t,k}(\theta_{k})=f\left(h_{t-1,k}(\theta_{k}),\mathbf{x}_{t},\mathbf{g}_{t}(\theta_{k})\right)\) where \(\mathbf{g}_{t}(\theta_{k})\doteq\theta_{k}\), giving \[\frac{\partial h_{t,k}}{\partial\theta_{k}}=\frac{\partial h_{t,k}}{\partial\mathbf{g}_{t}}\frac{\partial\mathbf{g}_{t}}{\partial\theta_{k}}+\frac{\partial h_{t,k}}{\partial h_{t-1,k}}\frac{\partial h_{t-1,k}}{\partial\theta_{k}} \tag{8}\] Computing and storing this Jacobian only costs \(O(|\theta_{t,k}|)\) for each column \(k\) because \(|h_{t,i}|=1\) for a single column. The cost for all the columns is \[O(|\theta_{t,1}|)+O(|\theta_{t,2}|)+\cdots+O(|\theta_{t,n}|)=O(|\theta_{t}|). \tag{9}\] Therefore, RTRL for Columnar Networks scales linearly in the size of the parameters. In this work, we implement each column as a single LSTM cell with a hidden size of one. We provide the explicit gradients in Appendix 6.

### Constructive networks

In constructive networks, we learn the recurrent network one feature at a time. A feature that is learned later can take as input features that have already been learned. However, the opposite is not allowed--a feature learned earlier cannot take as input a feature that would be learned later.
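Returning to the columnar case for a moment, the per-column recursion of equation 8 can be sketched with scalar tanh columns standing in for the single-cell LSTM columns used in this work; each column maintains a trace of size \(|\theta_{t,k}|\), so the total cost is linear in the parameters.

```python
import numpy as np

# Columnar RTRL with scalar tanh columns (a simplification of the paper's
# single-cell LSTM columns). Column k: h_k = tanh(w_k . x + u_k * h_k_prev).
# Each column keeps only its own trace dh_k/d(w_k, u_k), so equation 8 over
# all columns costs O(|theta|) per step.

class Column:
    def __init__(self, n_in, rng):
        self.w = 0.5 * rng.normal(size=n_in)   # input weights
        self.u = 0.5 * rng.normal()            # recurrent weight
        self.h = 0.0
        self.trace = np.zeros(n_in + 1)        # dh/d(w, u)

    def step(self, x):
        h_prev = self.h
        self.h = np.tanh(self.w @ x + self.u * h_prev)
        d = 1.0 - self.h**2
        direct = d * np.concatenate([x, [h_prev]])     # (dh/dg)(dg/dtheta)
        self.trace = direct + d * self.u * self.trace  # equation 8
        return self.h

rng = np.random.default_rng(0)
cols = [Column(n_in=6, rng=rng) for _ in range(5)]
x = rng.normal(size=6)
h = np.array([c.step(x) for c in cols])
# Gradient of y = sum_k v_k h_k w.r.t. column k's parameters: v_k * cols[k].trace
```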
We elucidate the multi-stage learning process in a three-feature constructive network using Figure 1 (right). Dotted lines represent parameters that are being updated at every step, whereas solid lines represent parameters that are fixed. In the first stage, the learner learns the incoming weights of \(h_{1}\), which is connected to the input features \(x\), but not to \(h_{2}\) or \(h_{3}\). Note that we are omitting the time index for brevity, and \(h_{1}\) is the same as \(h_{t,1}\). Once the incoming and the recurrent weights of \(h_{1}\) are learned, the learner freezes those weights and goes to stage 2. In stage 2, it learns the incoming weights of \(h_{2}\). \(h_{2}\) can use both \(x\) and \(h_{1}\) as the inputs. The outgoing weight of \(h_{1}\)--\(w_{1}\)--is not fixed and continues to be updated. Similarly in the 3rd stage, both \(h_{1}\) and \(h_{2}\) are fixed and fed to \(h_{3}\) as input features. In each stage, the newly introduced feature can be connected to all prior features. In this staged approach, the learner never learns more than one feature at a time. As a result, the effective size of the hidden state of the learning system is just one, and RTRL can be applied cheaply. In fact, since only a small subset of the network is being learned at any given time, constructive networks use even less per-step computation than columnar networks. Constructive networks introduce one additional hyperparameter--steps-per-stage--that controls the number of steps after which the learner moves from one stage to the next. This constructive approach is similar to and inspired by related work on cascade correlation for recurrent neural networks (Fahlman, 1990). The biggest differences are that (1) in the cascade correlation work, new recurrent units are trained to maximize correlation with the error whereas we use the gradient w.r.t the prediction error to learn the incoming weights of the new units and (2) the cascade correlation work learns on a batch of data, whereas our network learns from an online stream of data. The two differences are arguably minor. Rather, the bigger departure from this older idea is when we combine it with columnar networks in the next section. This lets us move beyond adding only one recurrent feature in each stage, to learning multiple recurrent features (columns) in each stage. ### Constructive Columnar Networks Constructive columnar networks (CCNs), as the name suggests, are a combination of columnar and constructive networks. In CCNs, we keep the multi-stage approach of the constructive network; however, instead of learning a single feature in every stage, the learner can learn multiple features that are independent of each other. It leverages the fact that RTRL is efficient for columnar structures to grow the network more quickly. A two-stage CCN is shown in Figure 2. In stage one, the learner learns the incoming weights of \(h_{1}\) and \(h_{2}\). Note that since \(h_{1}\) and \(h_{2}\) are independent of each other, they are equivalent to a columnar network with two features, and can be learned efficiently together. In the second stage, the learner freezes the incoming and recurrent weights of \(h_{1}\) and \(h_{2}\), and learns the incoming weights of \(h_{3}\) and \(h_{4}\), which take as input both \(h_{1}\) and \(h_{2}\). Once again, \(h_{3}\) and \(h_{4}\) are independent of each other. CCNs inherit the hyperparameters from columnar networks and from constructive networks. 
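The staged procedure can be written as a short training skeleton. The sketch below reuses the scalar `Column` class from the previous sketch and uses a plain supervised target with squared error; it is our illustration of the bookkeeping only, not the actual implementation (which is in C++ and learns with TD(\(\lambda\))).

```python
import numpy as np

# Skeleton of CCN staged training, reusing the Column class defined above.
# Frozen columns keep producing features, and their outgoing weights v keep
# adapting, but their incoming weights and traces are no longer updated.

def ccn_train(stream, n_inputs, n_stages, cols_per_stage, steps, alpha, rng):
    columns, v, n_frozen = [], np.zeros(0), 0
    for stage in range(n_stages):
        n_in = n_inputs + n_frozen          # new columns see all frozen features
        columns += [Column(n_in, rng) for _ in range(cols_per_stage)]
        v = np.concatenate([v, np.zeros(cols_per_stage)])
        for _ in range(steps):
            x, target = next(stream)
            h = np.zeros(len(columns))
            for k, c in enumerate(columns):
                # each column sees x plus the features frozen at its creation
                z = np.concatenate([x, h[:len(c.w) - n_inputs]])
                h[k] = c.step(z)            # frozen columns' traces go unused
            err = target - v @ h
            for k in range(n_frozen, len(columns)):   # active columns only
                g = err * v[k] * columns[k].trace
                columns[k].w += alpha * g[:-1]
                columns[k].u += alpha * g[-1]
            v += alpha * err * h            # all outgoing weights keep learning
        n_frozen = len(columns)             # freeze this stage's columns
    return columns, v
```

Within a stage the active columns never see each other, so each trace stays exact; freezing then turns the just-learned features into ordinary fixed inputs for the next stage.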
In addition to the step-size for RTRL and the steps-per-stage for constructive networks, CCNs have a features-per-stage hyperparameter that controls the number of recurrent features to be learned in parallel in each stage.

### Feature Normalization

A key to making our system work is online feature normalization. Unlike dense recurrent networks, features in our constructive and CCN networks can have a varying number of incoming weights. This discrepancy can change the scale of each feature, making learning difficult using a uniform step-size. Our feature normalization is similar to an online version of batch normalization (Ioffe and Szegedy, 2015).

Figure 2: Constructive-Columnar networks (CCNs) combine the ideas from both the columnar and the constructive approaches. In each stage, the learner learns multiple features that are independent of each other, just like a columnar network. Across stages, the learner can learn hierarchical features, similar to the constructive approach.

Prior work has shown feature normalization to be helpful for recurrent networks in the batch setting (Cooijmans _et al_., 2017). To normalize the features, we first maintain an online running estimate of the mean and variance of each feature. We then use the running estimates to normalize the feature to have zero mean and unit variance. Additionally, if the standard deviation of a feature falls below a threshold \(\epsilon\), we use \(\epsilon\) in its place to prevent the normalized feature from getting too large. Here \(\epsilon\) is a hyperparameter. Capping the maximum value of the feature is important to prevent unstable behavior. The formula for the normalized feature \(\hat{f}_{i}\), given the unnormalized feature \(f_{i}\), is \[\hat{f}_{t,i} =\frac{f_{t,i}-\mu_{t,i}}{\max(\epsilon,\sigma_{t,i})} \tag{10}\] \[\text{where }\mu_{t,i} =\mu_{t-1,i}\beta+(1-\beta)f_{t,i}\] \[\sigma_{t,i}^{2} =\sigma_{t-1,i}^{2}\beta+(1-\beta)(\mu_{t,i}-f_{t,i})(\mu_{t-1,i}-f_{t,i})\] where \(\beta=0.99999\) for all our experiments. \(\mu_{0,i}\) and \(\sigma_{0,i}^{2}\) are initialized to be 0 and 1 respectively. \(\epsilon\) is tuned; the values used in this work are shown in Table 1 in Appendix 6.

## 4 Experiments on an Animal Learning Benchmark

We start by evaluating the methods on a recently proposed benchmark inspired by animal learning (Rafiee _et al_., 2022). The trace patterning task is an online prediction task that requires the learner to discriminate between patterns--conditional stimuli (CS)--that are followed by a scalar--unconditional stimuli (US)--after a time delay. The goal is to predict the discounted sum of the US. Correct predictions require the ability to discriminate patterns that lead to the US from those that do not. The time delay between the CS and US requires remembering information from the past.

Figure 3: Visualization of the stream of experience for the trace patterning task. At each step, the learner sees a vector with seven values. The first six are the CS, and the last is the US. The CS is either a vector of zeros, or has three entries set to one. Twenty possible patterns can be represented by the CS. Ten of these patterns activate the US after ISI number of steps, whereas others do not. The learner has to predict the discounted sum of the value of US in the future. The CS is present every ISI + ITI number of steps. The bottom part of the figure shows the ground-truth prediction for the task.
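As a brief implementation aside before describing the benchmark further, the running statistics of Eq. (10) above take only a few lines; a sketch (the class name is ours):

```python
import numpy as np

# Online feature normalization of Eq. (10): exponential running mean and
# variance with decay beta, and a tuned floor epsilon on the standard
# deviation of each feature.

class OnlineNormalizer:
    def __init__(self, n_features, beta=0.99999, eps=0.01):
        self.mu = np.zeros(n_features)
        self.var = np.ones(n_features)
        self.beta, self.eps = beta, eps

    def __call__(self, f):
        mu_prev = self.mu.copy()
        self.mu = self.beta * self.mu + (1 - self.beta) * f
        self.var = (self.beta * self.var
                    + (1 - self.beta) * (self.mu - f) * (mu_prev - f))
        return (f - self.mu) / np.maximum(self.eps, np.sqrt(self.var))
```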
The delay between the CS and US is uniformly randomly sampled to be between 14 and 26 steps after every CS, and is called the inter-stimulus interval (ISI). The delay between the US and next CS is uniformly randomly sampled to be between 80 and 120 steps after every US, and is called the inter-trial interval (ITI). The CS consists of 6 features. When CS is present, three of the six features in the CS vector are one. Since \({6\choose 3}\) is twenty, the CS vector can represent twenty different patterns. Ten randomly chosen patterns are followed by US=1 after ISI steps, whereas the remaining ten do not activate the US. The learner has to learn to discriminate between the patterns that lead to the US from those that do not. A visual representation of experience from the trace patterning benchmark with ISI of 3 and ITI of 7 is shown in Figure 3. The vertical dimension represents the features, and the horizontal the time. At time-step 4, 3 of the 6 features are one. After 3 more (ISI = 3) steps, the US becomes active. Then no features are active for ITI number of steps. After ITI steps, the CS again becomes active. The second pattern of the CS is not followed by US. At the bottom of Figure 3, we show the ground truth return that the learner has to predict to minimize the prediction error. ### Experimental Setup We compare CCNs to T-BPTT, Columnar networks and Constructive Networks. All networks use the LSTM cell architecture (Hochreiter and Schmidhuber 1997) for recurrence. For T-BPTT, we use a fully connected LSTM network. T-BPTT introduces another hyperparameter--the truncation window k. To keep the per-step computation constant, a learner using a larger truncation window has to use fewer features. We set the per-step compute budget to \(\approx\) 4,000 floating point operations and use TD(\(\lambda\)) (Sutton 1984, 1988) for learning for all methods. We use TD(\(\lambda\)) in non-linear networks similar to prior work (Tesauro, 1995). We use \(\lambda=0.99\), and \(\gamma=0.90\) and report the learning curves for 50 million steps. For each method, we individually tune the step-size, \(\epsilon\), steps-per-stage, features-per-stage, and the truncation window; we report the results for the best performing configuration. Details of hyperparameter tuning are in Appendix 6. The columnar networks, constructive networks, and CCNs have 5, 10, and 20 features respectively. T-BPTT uses a truncation window of 30, and has two features. Note that even though T-BPTT only has two features, it uses the same amount of per-step computation as CCNs because it uses a more expensive learning algorithm. ### Results We start by looking at the learning curves for all four methods in Figure 4. All three approaches learn to reduce the prediction error over time. Columnar networks perform the worst, demonstrating the need for hierarchical recurrent features. Both CCNs and constructive networks reliably converge to a good solution. There is a clear structure in their learning curves, where there are plateaus followed by sudden steep declines in error when new features are added, which occurs every 5 million steps. T-BPTT achieves error in-between columnar networks and CCNs. We further investigate the sensitivity of T-BPTT to the value of truncation. We first consider the impact of reallocating resources, allowing T-BPTT to have bigger networks with shorter truncation length \(k\). The results above used T-BPTT with two features and \(k=30\). 
To maintain the same level of computation and increase the number of features, \(k\) has to be correspondingly decreased. We can see from the learning curves in Figure 5 that when the truncation length is much smaller than the longest dependency in the learning problem--26--the performance drops significantly. T-BPTT performs the best when it selects a smaller network (two features) and longer truncation (\(k=30\)). We conducted an additional experiment where we allowed T-BPTT to use more computation. We ignore our per-step resource constraint and fix the number of features to 10. We then use T-BPTT with different truncation windows and report the results in Figure 6. We see that the large network with a large truncation window--red line--performs almost as well as CCNs. However, this configuration uses around six times more computation per-step.

## 5 Experiments in the Arcade Learning Environment

To evaluate the proposed algorithms on higher-dimensional image-based inputs, we introduce a new prediction learning benchmark based on the Arcade Learning Environment (Bellemare _et al_., 2013). We first describe this benchmark, and then conduct experiments similar to those above on the 50 Atari games.

Figure 4: Results of the proposed algorithms, and the best performing T-BPTT on the trace patterning task for 50 million learning steps. All methods can learn to make accurate predictions. Columnar learns quickly, but converges to a worse solution because it is unable to build hierarchical representations. Both CCN and constructive converge to an almost optimal solution. The best T-BPTT performs worse than both the constructive and CCN. All plots are averaged over 30 seeds, and the shaded area is ± standard error.

### An Atari Prediction Benchmark for Recurrent Learning

Since our goal is to study state construction in the prediction setting, we do policy evaluation on expert Atari agents, as opposed to solving the control task. We use pre-trained Rainbow-DQN (Hessel _et al._, 2018) agents from the model zoo of Chainer-RL (Fujita _et al._, 2021), and collect at least 200k samples following the greedy policy for each Atari environment. After 200k samples are collected for an environment, we keep collecting samples until the episode terminates. We clip the rewards to be in the range \((-1,+1)\). The input \(\mathbf{x}\) given to the learner is composed of the observation, action and reward from the previous step. The observation at each step of a Rainbow-DQN agent is \(84\times 84\times 4\), where each observation stacks the previous four frames to reduce partial observability. However, since our goal is to study how well our algorithms can construct agent-state, we only pass the single most-recent frame to the prediction learner. Additionally, we downscale the frames to be \(16\times 16\), resulting in 256 features. Downscaling and removing frame-stacking makes the problem much more challenging, and the learner has to look at the trajectory of observations to predict well. We visualize these downscaled observations in various games in Figure 7. We see that due to downscaling, it is not enough to look at a single frame to make an accurate prediction. For instance in the Pong environment, the ball or paddles are not visible in many frames. The only way to make accurate predictions on all frames is to remember information from the past.
Figure 5: Different combinations of T-BPTT on the trace patterning task. Each curve is denoted by two numbers a:b. The first number indicates the number of features in the learner, and the second number indicates the truncation length of T-BPTT. For example, 2:30 means an LSTM with two features trained with T-BPTT with a truncation window of 30. All lines use about the same amount of computation for learning and prediction. By choosing a small value of truncation, the learner can afford to have more features. We see that different values of truncation result in very different performance. Large networks trained with small truncation lengths--13:2 and 10:3--perform the worst, showing the impact of the bias introduced by T-BPTT. All lines are averaged over 30 random seeds.

The expert Atari agent can take one of 20 actions. We one-hot encode the action and append it to the observation, giving 276 features at every step. The Atari Prediction benchmark can generally be used to evaluate recurrent learning algorithms. We use it to simulate an online learning setting. If there are fewer than 200k learning steps, the learner can simply iterate over the dataset in order, as if it were seeing the data streaming in real-time. Otherwise, when learning for more than 200k steps, the dataset can be used like a simulator. After looping through all the episodes in the dataset, the order of episodes can be shuffled and the agent can do another epoch over the training set. We learn for 50 million steps, meaning we approximately loop over the entire dataset 500 times. Traditionally, learning performance on a dataset is evaluated on a held-out test set to measure the generalization performance of the system. While the train and test distinction is important in offline learning, especially when the learning network is over-parameterized, it is unnecessary when the learner is an order of magnitude smaller than the dataset, as is the case in our experiments. A small learner that generalizes better can outperform a small learner that tries to memorize trajectories on a dataset.

Figure 6: We train LSTMs with 10 hidden units using truncation windows of 2, 3, 5, 10, and 20. For each truncation window, we independently tuned the step-size parameter. We see that as the truncation window increases, the performance improves significantly at the expense of more computation: using a truncation window of 20 is ten times more computationally expensive as compared to a truncation window of two. The sensitivity of performance w.r.t truncation window highlights the degradation in performance due to the bias introduced by truncation. All lines are averaged over 30 random seeds with shaded regions corresponding to two standard errors.

### Experimental Setup

We compare our methods with T-BPTT, using LSTMs and TD(\(\lambda\)) for all algorithms. We set the per-step compute budget to \(\approx\) 50k operations and learn the value function for 50 million steps. We treat multiplication, addition, division, and subtraction each as one operation. We fix the discount factor \(\gamma\) to be 0.98, and \(\lambda\) to be 0.99 in all experiments. The remaining parameters--\(\epsilon\), steps-per-stage, truncation window, and step-size--are tuned for each method independently. The details of the hyperparameter tuning are in Table 1. We pick hyperparameters that give the best results averaged over all the environments. For each environment, we report the average return error in the last 200k steps.
Since the scale of the returns is very different for different environments, it is important to normalize the errors for easy visualization. For each environment, we normalize the error of all methods by dividing by the error achieved by the T-BPTT baseline in that environment. This means that after normalization, the T-BPTT baseline has an error of one in all environments, whereas the error of other methods is relative to that achieved by T-BPTT. For instance, if a CCN network achieves an error of 0.5 in an environment, that means the error is half of what was achieved by T-BPTT. Similarly, an error of 2 for CCN means the error is twice as much as T-BPTT.

### Overall Performance

We report error across all environments for the CCN and T-BPTT in Figure 8. The CCN performs better than T-BPTT in all but two environments. In many environments, it achieves 5x lower error, whereas even in the worst case of CrazyClimber, the error is only twice that of T-BPTT. We also look at errors achieved by constructive and columnar networks and report them in Figure 9. For brevity, we only report the error averaged over all environments. We see that all three of the proposed methods improve over T-BPTT. CCN performs the best, demonstrating that combining columnar and constructive approaches is useful.

Figure 7: Environments down-scaled to 16 x 16. Looking at a single frame, it’s hard to figure out information about the environment. For instance in Pong, the ball is often not visible in a single frame. However, looking at the sequence of frames, we can tell the position and the direction of the ball. This partial observability due to down-scaling makes 16 x 16 Atari an interesting benchmark for studying state construction.

### Visualizing the Predictions

We visualize the predictions made by the CCN and T-BPTT at the end of learning in Figure 10. We can see that both methods can learn to make accurate predictions. Predictions made by the CCN are closer, on average, to the ground truth returns than the predictions made by the LSTM trained using T-BPTT.

Figure 8: Comparing the CCN network with the best T-BPTT on the Atari Prediction Benchmark. For all but two games, the CCN network achieves lower prediction error than T-BPTT. In many games, the CCN reduces the prediction error many-fold. All errors are averaged over 15 random seeds, and the error margins represent one standard error.

Figure 9: Average relative error of all methods on the Atari Prediction Benchmark averaged over 15 random seeds. Both the constructive and the columnar approach improve over T-BPTT on average. Combining both gives the best results. The average relative error achieved by CCN is less than half of what the best T-BPTT achieves at the same compute budget.

### Sensitivity to truncation for T-BPTT

As before, we more fully investigate the impact of the number of features and truncation length on the performance of T-BPTT, to give a more complete picture of our main comparator algorithm. We perform two experiments. In the first experiment, we fix the truncation window to 8 and vary the number of features from 2 to 15. In the second experiment, we fix the number of features to 8, and vary the truncation window from 2 to 15. We report both results in Figure 11. We see that increasing both the number of features and the truncation window improves the performance of T-BPTT.
The number of features has a bigger impact: going from 2 features to 15 halves the error, whereas going from a truncation window of 2 to 15 reduces the error by around 23%.

Figure 10: Visualizing predictions made by T-BPTT and CCN networks on five Atari environments. We plot predictions made at the end of the 50-million-step learning process. The green lines are predictions made by CCN, the orange are predictions made by the best T-BPTT, and the dotted blue line represents the ground truth return observed by the agent. We see that CCN makes qualitatively better predictions on most of the environments. The difference is most pronounced in Pong, in which CCN makes near-perfect predictions. T-BPTT also learns the general trend of predictions correctly, but does not follow the ground truth return as closely.

## 6 Conclusions and Future Directions

In this paper we showed that by either restricting connections between recurrent neurons--the columnar approach--or learning a recurrent network incrementally--the constructive approach--we can compute gradients of parameters of a recurrent network cheaply and without truncation. Moreover, unlike T-BPTT, our algorithms do not rely on sequential operations and can be fully parallelized. We show that in the under-parameterized setting, our methods out-perform T-BPTT when using a fixed compute budget. Moreover, the algorithms can be scaled to learn networks with billions of parameters using roughly the same amount of resources needed for inference of similarly sized models. Because the learning algorithms do not use a lot of resources, there is no need to disable learning at the time of deployment; we can build systems that continually learn and adapt to their data-stream. One major limitation of our approach is that in both constructive and CCN approaches, most of the features are frozen as time goes by. As a result, the network loses its plasticity and the ability to adapt to changes. There are two possible routes to address this limitation. One route is to allow the frozen features to instead change very slowly. The gradient should remain mostly accurate, since these slowly changing features are effectively frozen on the timescale of the fast-changing features. Another direction is to combine our approach with online weight and feature pruning. Instead of only adding features to grow the size of the network, we can continually replace the least useful features with new features and learn them, as proposed by Dohare _et al_. (2021). The continual pruning and generation ensures that a frozen part of the network stays only as long as it is useful for prediction.

Figure 11: Impact of capacity and truncation window on the performance of T-BPTT on the Atari Prediction Benchmark. We report normalized error averaged over all Atari environments. The error is normalized such that the average error is one when the number of features or the truncation window is 15. For the graph on the left, we fix the truncation window to 8, and vary the number of features. We see that as the network gets larger, the performance improves. The error of an LSTM using two features is twice that of an LSTM using 15 features. For the plot on the right, we fix the number of features to 8, and vary the truncation window of T-BPTT. Once again, we see that as the truncation window gets smaller, error increases.

Another open question in this work is the types of networks representable by Constructive networks and CCNs.
A Constructive network is an RNN with certain connections omitted. A theoretical understanding of the subclass of functions learnable by Constructive networks would help identify when, or even if, these networks might be sufficient for a given problem. CCNs, on the other hand, are less constrained in terms of the architecture, because they can iteratively grow an RNN with many connections. However, the training procedure itself is likely to limit the class of RNNs that can be learned. There has been some prior work (Giles _et al_. 1995 and Kremer 1995) analyzing Recurrent Cascade-Correlation (RCC), which is similar to Constructive networks. Kremer (1995) showed that there are certain Finite State Automata that RCC cannot learn with linear threshold and sigmoid activations, whereas more general RNN architectures can. It is not yet clear whether CCNs suffer from similar problems, or whether the argument used by Kremer (1995) applies to the much more complex LSTM architecture used in our experiments. Intuitively, the combination of gradient-based learning (RTRL) with a growing network should overcome the limitations of each approach in isolation. A natural next step is to revisit these counterexamples, and see if they extend to CCNs.

## Appendix A Hyperparameter Settings

We tune the solution-specific hyperparameters for all the methods independently. For each configuration, we use five random seeds and look at the performance over all five seeds to pick the best hyperparameters. We then run the best hyperparameter configuration for 30 seeds for reporting the trace patterning results and 15 seeds for reporting the Atari results. A list of all the hyperparameters and their values is given in Table 1.

### Implementation Details

We implement all methods in C++. For the columnar, constructive, and CCN approaches, we use the update equations derived in Appendix B. We verify the correctness of the gradients computed by our derived equations, and of our implementation of T-BPTT, by comparing them to the gradients computed by PyTorch for networks initialized to have the same parameters. The gradients given by our implementation and those by PyTorch match exactly. Our C++ implementation avoids the overhead of Python and PyTorch, and is around 50x faster than PyTorch for small recurrent LSTM networks that are trained using one sample at a time. Having a fast and efficient implementation was crucial for performing large hyperparameter sweeps and reporting statistically significant results by averaging over multiple seeds. For batch learning, the GPU implementation of PyTorch would be faster. That said, our algorithms are fully decentralized, and when applied to recurrent networks with millions of features, the constructive, columnar, and CCN approaches can benefit from the parallel compute units of GPUs. LSTMs trained with T-BPTT, on the other hand, have to do sequential computation to compute the gradient, and are fundamentally limited in terms of benefiting from parallel compute units.

#### Compute infrastructure

We run all experiments on large CPU clusters. A single run of the trace patterning task for 50 million steps takes around 5 minutes on a single CPU, whereas a single run on Atari for 50 million steps takes around 50 minutes. Both experiments take less than 2 GB of RAM per run. We used GNU parallel (Tange, 2018) to distribute the experiments over 1,000 CPUs.

#### Equations for Estimating Compute Used by Each Method

Every method uses roughly the same amount of computation per step.
We estimate the amount of compute used by each method by looking at its architecture and the learning algorithm. These estimates are not exact, and there may be some minor differences depending on how these methods are implemented in practice. However, the principle largely remains the same, and we have verified empirically that these estimates are close to what we observe.

\begin{table} \begin{tabular}{l l l l} \hline Symbol & Hyperparameter & Environment & Hyperparameter values \\ \hline \(\alpha\) & Step-size (T-BPTT) & Both & \(10^{-2},3\times 10^{-3},10^{-3}\), \\ & & & \(3\times 10^{-4},10^{-4}\) \\ \(\alpha\) & Step-size (CCN and Constructive) & Both & \(10^{-2},10^{-3},10^{-4}\) \\ \(\gamma\) & Discount factor & Trace & 0.90 \\ \(\gamma\) & Discount factor & Atari & 0.98 \\ \(\lambda\) & Eligibility trace decay rate & Both & 0.99 \\ \(k:d\) & Truncation:Hidden features (T-BPTT) & Trace & 2:13, 3:10, 5:8, 8:6, \\ & & & 10:5, 15:4, 20:3, 30:2 \\ \(k:d\) & Truncation:Hidden features (T-BPTT) & Atari & 15:2, 8:5, 5:8, \\ & & & 4:10, 2:25 \\ & Hidden features (Columnar) & Trace & 5 \\ & Hidden features (Columnar) & Atari & 7 \\ & Features per stage (CCN) & Trace & 4 \\ & Features per stage (CCN) & Atari & 5 \\ & Steps per stage (CCN) & Trace & 10 million \\ & Steps per stage (CCN) & Atari & 16 million \\ & Steps per stage (Constructive) & Both & 5 million \\ & Total steps & Both & 50 million \\ & Seeds for parameter sweep & Both & \(\{0,1,2,3,4\}\) \\ & Seeds for best parameter configuration & Trace & \(\{0,1,\cdots,29\}\) \\ & Seeds for best parameter configuration & Atari & \(\{0,1,\cdots,14\}\) \\ \(\epsilon\) & Min division term (CCN and Constructive) & Both & \(\{0.1,0.01,0.001\}\) \\ \hline \end{tabular} \end{table} Table 1: Hyperparameter sweeps

Let \(|h|\) be the number of hidden features, \(|x|\) be the number of input features, \(k\) be the truncation window, and \(u\) be the features-per-stage parameter. Then the total amount of computation used by an LSTM cell for a single forward pass can be estimated using the following equation: \[4|h|+4|x|+4\] where the factor of four is due to the four gates used by an LSTM cell. In T-BPTT, we used a fully connected LSTM, so the total number of features is \(|h|\). A forward pass of a fully connected LSTM then uses \[|h|(4|h|+4|x|+4)=4|h|^{2}+4|h||x|+4|h|\] operations. Finally, T-BPTT requires \(k\) times more computation for computing the gradient, bringing the total cost to: \[4|h|^{2}+4|h||x|+4|h|+k(4|h|^{2}+4|h||x|+4|h|)\] \[= (k+1)(4|h|^{2}+4|h||x|+4|h|)\] For the columnar, constructive and CCN approaches, we first see from Appendix B that recursively computing the gradient is roughly six times more expensive than the forward pass of the LSTM, which, according to our empirical observations, is an overestimation. The total compute used by a single columnar cell for the forward pass is therefore \[4+4|x|+4\] since the hidden state has size 1 for a single column. The compute used by \(|h|\) cells is \[|h|(4|x|+8).\] Adding the compute used by the learning algorithm, we get \[|h|(4|x|+8)+6|h|(4|x|+8).\] In the CCN approach, on average, an LSTM cell takes as input \(\frac{|h|}{2}\) hidden states.
As a result, the compute used for a single forward pass by a single recurrent feature is given by \[4\frac{|h|}{2}+4|x|+4,\] and for \(|h|\) features it is \[|h|(2|h|+4|x|+4).\] Since we learn \(u\) features at a time, the total estimated compute per step for CCN networks is given by \[|h|(2|h|+4|x|+4)+6u(2|h|+4|x|+4).\] For constructive networks, we can substitute \(u=1\) in the equation above.

## Appendix B Forward-mode gradient computation of an LSTM cell

Here we derive the update equations for recursively computing the gradients of a single LSTM based recurrent column. Each column has a single hidden unit. Because all columns are identical, the same update equations can be used for learning in the columnar, constructive, and CCN approaches. We compared the gradients estimated using the derived equations with the gradients computed using BPTT in PyTorch without truncation on random trajectories, and found them to match exactly. The state of an LSTM column is updated using the following equations: \[i(t) =\sigma(W_{i}^{T}x(t)+u_{i}h(t-1)+b_{i}) \tag{11}\] \[f(t) =\sigma(W_{f}^{T}x(t)+u_{f}h(t-1)+b_{f}) \tag{12}\] \[o(t) =\sigma(W_{o}^{T}x(t)+u_{o}h(t-1)+b_{o}) \tag{13}\] \[g(t) =\phi(W_{g}^{T}x(t)+u_{g}h(t-1)+b_{g}) \tag{14}\] \[c(t) =f(t)c(t-1)+i(t)g(t) \tag{15}\] \[h(t) =o(t)\phi(c(t)) \tag{16}\] where \(\sigma\) and \(\phi\) are the sigmoid and tanh activation functions, \(h(t)\) is the state of the column at time \(t\), and \(W_{i}^{T}x(t)=\sum_{k=1}^{m}W_{i_{k}}x_{k}(t)\). The derivatives of \(\sigma(x)\) and \(\phi(x)\) w.r.t. \(x\) are \(\sigma(x)(1-\sigma(x))\) and \(1-\phi^{2}(x)\) respectively. Let the length of the input vector \(x\) be \(m\). Then \(W_{i},W_{f},W_{o}\) and \(W_{g}\) are vectors of length \(m\), whereas \(u_{i},b_{i},u_{f},b_{f},u_{o},b_{o},u_{g}\) and \(b_{g}\) are scalars. We want to compute the gradient of \(h(t)\) with respect to all the parameters. We derive the update equations for \(\frac{\partial h(t)}{\partial W_{i}},\frac{\partial h(t)}{\partial u_{i}}\) and \(\frac{\partial h(t)}{\partial b_{i}}\) step by step in the following sections, and then summarize the analogous final equations for the remaining parameters \(W_{f},u_{f},b_{f},W_{o},u_{o},b_{o},W_{g},u_{g}\) and \(b_{g}\).

\(\frac{\partial h(t)}{\partial W_{i}}\)

\(W_{i}=(W_{i_{1}},W_{i_{2}},\cdots,W_{i_{m}})\) is a vector of length \(m\). Since all elements of \(W_{i}\) enter symmetrically, we show the gradient derivation for \(W_{i_{j}}\) without loss of generality.
Let \[TH_{W_{i_{j}}}(t):=\frac{\partial h(t)}{\partial W_{i_{j}}} \tag{17}\] \[TH_{W_{i_{j}}}(0):=0 \tag{18}\] \[TC_{W_{i_{j}}}(t):=\frac{\partial c(t)}{\partial W_{i_{j}}} \tag{19}\] \[TC_{W_{i_{j}}}(0):=0 \tag{20}\] Then, from equation (16), definition (17), the product rule, and the derivative of \(\phi\), \[TH_{W_{i_{j}}}(t)=\frac{\partial}{\partial W_{i_{j}}}\big(o(t)\phi(c(t))\big)=o(t)\big(1-\phi^{2}(c(t))\big)TC_{W_{i_{j}}}(t)+\phi(c(t))\frac{\partial o(t)}{\partial W_{i_{j}}},\] where, from equation (13), \[\frac{\partial o(t)}{\partial W_{i_{j}}}=\sigma(y_{o})(1-\sigma(y_{o}))\,u_{o}\,TH_{W_{i_{j}}}(t-1),\qquad y_{o}=W_{o}^{T}x(t)+u_{o}h(t-1)+b_{o}.\] From equation (15), definition (19) and the product rule, \[TC_{W_{i_{j}}}(t)=f(t)\,TC_{W_{i_{j}}}(t-1)+c(t-1)\frac{\partial f(t)}{\partial W_{i_{j}}}+i(t)\frac{\partial g(t)}{\partial W_{i_{j}}}+g(t)\frac{\partial i(t)}{\partial W_{i_{j}}},\] where, from equations (14), (12) and (11), \[\frac{\partial g(t)}{\partial W_{i_{j}}}=(1-\phi^{2}(y_{g}))\,u_{g}\,TH_{W_{i_{j}}}(t-1),\qquad y_{g}=W_{g}^{T}x(t)+u_{g}h(t-1)+b_{g},\] \[\frac{\partial f(t)}{\partial W_{i_{j}}}=\sigma(y_{f})(1-\sigma(y_{f}))\,u_{f}\,TH_{W_{i_{j}}}(t-1),\qquad y_{f}=W_{f}^{T}x(t)+u_{f}h(t-1)+b_{f},\] \[\frac{\partial i(t)}{\partial W_{i_{j}}}=\sigma(y_{i})(1-\sigma(y_{i}))\big(x_{j}(t)+u_{i}\,TH_{W_{i_{j}}}(t-1)\big),\qquad y_{i}=W_{i}^{T}x(t)+u_{i}h(t-1)+b_{i}.\] The derivation shows that, using two traces per parameter of \(W_{i}\), it is possible to compute the gradient of \(h(t)\) w.r.t. \(W_{i}\) recursively. We provide the derivations for the parameters \(u_{i}\) and \(b_{i}\) below; the step-by-step derivations for the remaining parameters are similar, and we only summarize their final equations.

\(\frac{\partial h(t)}{\partial u_{i}}\)

\[TH_{u_{i}}(t):=\frac{\partial h(t)}{\partial u_{i}} \tag{21}\] \[TH_{u_{i}}(0):=0 \tag{22}\] \[TC_{u_{i}}(t):=\frac{\partial c(t)}{\partial u_{i}} \tag{23}\] \[TC_{u_{i}}(0):=0 \tag{24}\] Proceeding exactly as above, \[TH_{u_{i}}(t)=o(t)\big(1-\phi^{2}(c(t))\big)TC_{u_{i}}(t)+\phi(c(t))\frac{\partial o(t)}{\partial u_{i}},\qquad\frac{\partial o(t)}{\partial u_{i}}=\sigma(y_{o})(1-\sigma(y_{o}))\,u_{o}\,TH_{u_{i}}(t-1),\] \[TC_{u_{i}}(t)=f(t)\,TC_{u_{i}}(t-1)+c(t-1)\frac{\partial f(t)}{\partial u_{i}}+i(t)\frac{\partial g(t)}{\partial u_{i}}+g(t)\frac{\partial i(t)}{\partial u_{i}},\] with \[\frac{\partial g(t)}{\partial u_{i}}=(1-\phi^{2}(y_{g}))\,u_{g}\,TH_{u_{i}}(t-1),\qquad\frac{\partial f(t)}{\partial u_{i}}=\sigma(y_{f})(1-\sigma(y_{f}))\,u_{f}\,TH_{u_{i}}(t-1),\] \[\frac{\partial i(t)}{\partial u_{i}}=\sigma(y_{i})(1-\sigma(y_{i}))\big(h(t-1)+u_{i}\,TH_{u_{i}}(t-1)\big).\]

\(\frac{\partial h(t)}{\partial b_{i}}\)

\[TH_{b_{i}}(t):=\frac{\partial h(t)}{\partial b_{i}} \tag{25}\] \[TH_{b_{i}}(0):=0 \tag{26}\] \[TC_{b_{i}}(t):=\frac{\partial c(t)}{\partial b_{i}} \tag{27}\] \[TC_{b_{i}}(0):=0 \tag{28}\] \[TH_{b_{i}}(t)=o(t)\big(1-\phi^{2}(c(t))\big)TC_{b_{i}}(t)+\phi(c(t))\frac{\partial o(t)}{\partial b_{i}},\qquad\frac{\partial o(t)}{\partial b_{i}}=\sigma(y_{o})(1-\sigma(y_{o}))\,u_{o}\,TH_{b_{i}}(t-1),\] \[TC_{b_{i}}(t)=f(t)\,TC_{b_{i}}(t-1)+c(t-1)\frac{\partial f(t)}{\partial b_{i}}+i(t)\frac{\partial g(t)}{\partial b_{i}}+g(t)\frac{\partial i(t)}{\partial b_{i}},\] where the gradient of \(g(t)\) w.r.t. \(b_{i}\) is \[\frac{\partial g(t)}{\partial b_{i}}=(1-\phi^{2}(y_{g}))\,u_{g}\,TH_{b_{i}}(t-1),\] the gradient of \(f(t)\) w.r.t. \(b_{i}\) is \[\frac{\partial f(t)}{\partial b_{i}}=\sigma(y_{f})(1-\sigma(y_{f}))\,u_{f}\,TH_{b_{i}}(t-1),\] and the gradient of \(i(t)\) w.r.t. \(b_{i}\) is \[\frac{\partial i(t)}{\partial b_{i}}=\sigma(y_{i})(1-\sigma(y_{i}))\big(1+u_{i}\,TH_{b_{i}}(t-1)\big).\]

The remaining parameters

The derivations for the remaining parameters are analogous; only the location of the direct (non-recursive) term changes. For a generic scalar parameter \(p\in\{W_{f_{j}},W_{o_{j}},W_{g_{j}},u_{f},u_{o},u_{g},b_{f},b_{o},b_{g}\}\), let \(d(p)\) denote this direct term: \(d(W_{f_{j}})=d(W_{o_{j}})=d(W_{g_{j}})=x_{j}(t)\), \(d(u_{f})=d(u_{o})=d(u_{g})=h(t-1)\), and \(d(b_{f})=d(b_{o})=d(b_{g})=1\). The term \(d(p)\) enters only the gate that contains \(p\) (the \(f\)-, \(o\)- or \(g\)-gate respectively), and the final equations are \[\frac{\partial i(t)}{\partial p}=\sigma(y_{i})(1-\sigma(y_{i}))\,u_{i}\,TH_{p}(t-1),\] \[\frac{\partial a(t)}{\partial p}=\sigma(y_{a})(1-\sigma(y_{a}))\big(\mathbb{1}[p\in a]\,d(p)+u_{a}\,TH_{p}(t-1)\big),\qquad a\in\{f,o\},\] \[\frac{\partial g(t)}{\partial p}=(1-\phi^{2}(y_{g}))\big(\mathbb{1}[p\in g]\,d(p)+u_{g}\,TH_{p}(t-1)\big),\] \[TC_{p}(t)=f(t)\,TC_{p}(t-1)+c(t-1)\frac{\partial f(t)}{\partial p}+i(t)\frac{\partial g(t)}{\partial p}+g(t)\frac{\partial i(t)}{\partial p},\] \[TH_{p}(t)=o(t)\big(1-\phi^{2}(c(t))\big)TC_{p}(t)+\phi(c(t))\frac{\partial o(t)}{\partial p},\] with all traces initialized to zero.
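To make the recursions above concrete, below is a minimal NumPy sketch of a single LSTM column that maintains the traces for the input-gate bias \(b_{i}\). It is an illustration only: the actual implementation is in C++, the class and variable names here are hypothetical, and only the \(b_{i}\) traces are shown; the other parameters follow the same pattern with their direct terms moved to the corresponding gate.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMColumnWithTrace:
    """Single-unit LSTM column tracking dh(t)/db_i online (forward mode).

    Sketch following equations (11)-(16); hypothetical names, m = input size.
    """
    def __init__(self, m, rng):
        self.W = {k: rng.normal(0, 0.1, m) for k in 'ifog'}  # input weights
        self.u = {k: rng.normal(0, 0.1) for k in 'ifog'}     # recurrent weights
        self.b = {k: 0.0 for k in 'ifog'}                    # biases
        self.h = self.c = 0.0
        self.TH = self.TC = 0.0  # traces dh/db_i and dc/db_i, initialized to 0

    def step(self, x):
        y = {k: self.W[k] @ x + self.u[k] * self.h + self.b[k] for k in 'ifog'}
        i, f, o = (sigmoid(y[k]) for k in 'ifo')
        g = np.tanh(y['g'])
        c_new = f * self.c + i * g

        # gate derivatives w.r.t. b_i; only the i-gate has a direct term (+1)
        di = i * (1 - i) * (1.0 + self.u['i'] * self.TH)
        df = f * (1 - f) * (self.u['f'] * self.TH)
        do = o * (1 - o) * (self.u['o'] * self.TH)
        dg = (1 - g ** 2) * (self.u['g'] * self.TH)

        # trace recursions, cf. the TC and TH updates derived above
        TC_new = f * self.TC + self.c * df + i * dg + g * di
        h_new = o * np.tanh(c_new)
        TH_new = o * (1 - np.tanh(c_new) ** 2) * TC_new + np.tanh(c_new) * do

        self.c, self.h, self.TC, self.TH = c_new, h_new, TC_new, TH_new
        return h_new, TH_new  # state and dh(t)/db_i
```

For example, `col = LSTMColumnWithTrace(4, np.random.default_rng(0))` followed by repeated `col.step(x)` calls yields \(h(t)\) together with \(\partial h(t)/\partial b_{i}\) at every step, without storing any past states.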
2301.04568
Nonlinear Boundary Conditions for Energy and Entropy Stable Initial Boundary Value Problems in Computational Fluid Dynamics
We derive new boundary conditions and implementation procedures for nonlinear initial boundary value problems that lead to energy and entropy bounded solutions. A step-by-step procedure for general nonlinear hyperbolic problems on skew-symmetric form is presented. That procedure is subsequently applied to the three most important equations in computational fluid dynamics: the shallow water equations and the incompressible and compressible Euler equations. Both strong and weak imposition of the nonlinear boundary conditions are discussed. Based on the continuous analysis, we show that the new nonlinear boundary procedure leads to energy and entropy stable discrete approximations if the scheme is formulated on summation-by-parts form in combination with a weak implementation of the boundary conditions.
Jan Nordström
2023-01-11T16:52:22Z
http://arxiv.org/abs/2301.04568v1
Nonlinear Boundary Conditions for Energy and Entropy Stable Initial Boundary Value Problems in Computational Fluid Dynamics ###### Abstract We derive new boundary conditions and implementation procedures for nonlinear initial boundary value problems that lead to energy and entropy bounded solutions. A step-by-step procedure for general nonlinear hyperbolic problems on skew-symmetric form is presented. That procedure is subsequently applied to the three most important equations in computational fluid dynamics: the shallow water equations and the incompressible and compressible Euler equations. Both strong and weak imposition of the nonlinear boundary conditions are discussed. Based on the continuous analysis, we show that the new nonlinear boundary procedure leads to energy and entropy stable discrete approximations if the scheme is formulated on summation-by-parts form in combination with a weak implementation of the boundary conditions. keywords: Nonlinear boundary conditions, computational fluid dynamics, Euler equations, shallow water equations, energy and entropy stability, summation-by-parts ## 1 Introduction In this paper we will complete the general stability theory for nonlinear hyperbolic initial boundary value problems (IBVPs) partly developed in [1; 2]. This theory is valid for both linear and nonlinear primal and dual problems. It is direct, easy to understand and leads to \(L_{2}\) estimates. The requirements for an energy and entropy bound are that _i)_ a skew-symmetric form of the governing equations exists and _ii)_ energy bounding boundary conditions (BCs) are available. In [1; 2] we focused on the skew-symmetric property, _assuming_ that boundary conditions leading to an energy bound were available. In this article we derive these BCs explicitly, and show how to implement them in a provably stable way. We exemplify the procedure for the most important equations in computational fluid dynamics (CFD): the shallow water equations (SWEs), the incompressible Euler equations (IEEs) and the compressible Euler equations (CEEs). It was shown in [1] that the velocity-divergence form of the IEEs already had the required skew-symmetric form and that such a form could be derived for the SWEs. In [2] we showed that also the CEEs could be transformed to skew-symmetric form. It was also shown that the new skew-symmetric formulation allows for a mathematical (or generalised) entropy conservation and bound. Once the skew-symmetric formulation is obtained, an energy and entropy bound follows by applying integration-by-parts (IBP) and imposing proper boundary conditions. The continuous procedure was reused by discretising the equations in space using summation-by-parts (SBP) operators [3; 4] which discretely mimic the IBP procedure. Deriving the stable boundary procedures that were _assumed_ to exist in [1; 2] is the topic of this paper. As in the previous papers, it is shown that the key to stability is found in the continuous formulation. Skew-symmetric formulations for parts or the whole set of governing flow equations have drawn interest previously [5; 6; 7; 8], where fragments of the general theory in [1; 2] were included. Nonlinear boundary conditions were not discussed. With few exceptions, only boundary conditions for solid walls (or glancing boundaries) have been considered previously, as for example in [9; 10; 11; 12; 13; 14]. Solid wall boundary conditions are notoriously simple and straightforward to implement due to their homogeneous nature, i.e.
no external non-zero data has to be considered. In contrast to the previous investigations, we will for the first time (to the best of our knowledge) treat the general case with non-homogeneous nonlinear boundary conditions and derive estimates of the solution in terms of given non-zero boundary data. The remaining part of the paper is organised as follows: In Section 2 we reiterate and complement the main theoretical findings in [1; 2] and outline the general procedure for obtaining energy and entropy bounds. The remaining key ingredient, how to formulate and impose general nonlinear boundary conditions, is presented in Section 3. In Section 4, we show that the most important IBVPs in CFD (the SWEs, the IEEs and the CEEs) can be described by the new general theoretical framework. Explicit examples of boundary conditions and implementation procedures are given for all three cases. In Section 5 we return to the general formulation and show that the energy and entropy bounded continuous formulation leads to nonlinear stability of the SBP based semi-discrete scheme, including non-zero boundary data. A summary is provided in Section 6. ## 2 Nonlinear energy and entropy boundedness: the governing equations Following [1; 2], we consider the general hyperbolic IBVP \[PU_{t}+(A_{i}(V)U)_{x_{i}}+B_{i}(V)U_{x_{i}}+C(V)U=0,\quad t\geq 0,\quad\vec{x}=(x_{1},x_{2},..,x_{k})\in\Omega \tag{2.1}\] augmented with the initial condition \(U(\vec{x},0)=F(\vec{x})\) in \(\Omega\) and the non-homogeneous boundary condition \[L(V)U=g(\vec{x},t),\quad t\geq 0,\quad\vec{x}=(x_{1},x_{2},..,x_{k})\in \partial\Omega. \tag{2.2}\] In (2.2), \(L\) is the boundary operator and \(g\) the boundary data. In (2.1), Einstein's summation convention is used and \(P\) is a symmetric positive definite (or semi-definite) time-independent matrix that defines an energy norm (or semi-norm) \(\|U\|_{P}^{2}=\int_{\Omega}U^{T}PUd\Omega\). We assume that \(U\) and \(V\) are smooth. The \(n\times n\) matrices \(A_{i},B_{i},C\) are smooth functions of the \(n\) component vector \(V\), but otherwise arbitrary. Note that (2.1) and (2.2) encapsulate both linear (\(V\neq U\)) and nonlinear (\(V=U\)) problems. **Definition 2.1**.: _Firstly, the problem (2.1) is energy conserving if \(\|U\|_{P}^{2}=\int_{\Omega}U^{T}PUd\Omega\) only changes due to boundary effects. Secondly, it is energy bounded if \(\|U\|_{P}^{2}\leq\|F\|_{P}^{2}\) for a minimal number of homogeneous (\(g=0\)) boundary conditions (2.2). Thirdly, it is strongly energy bounded if \(\|U\|_{P}^{2}\leq\|F\|_{P}^{2}+\int_{0}^{t}(\oint G^{T}G\;ds)dt\) for a minimal number of non-homogeneous (\(g\neq 0\)) boundary conditions (2.2), where \(G=G(g,\vec{x},t)\)._ **Proposition 2.1**.: _The IBVP (2.1), in both the linear (\(V\neq U\)) and nonlinear (\(V=U\)) case, is energy conserving if_ \[B_{i}=A_{i}^{T},\quad i=1,2,..,k\quad\text{and}\quad\ C+C^{T}=0 \tag{2.3}\] _holds. It is energy bounded if it is energy conserving and the boundary conditions (2.2) for \(g=0\) lead to_ \[\oint\limits_{\partial\Omega}U^{T}(n_{i}A_{i})\ U\ ds=\oint\limits_{\partial \Omega}\frac{1}{2}U^{T}((n_{i}A_{i})+(n_{i}A_{i})^{T})U\ ds\geq 0. \tag{2.4}\] _It is strongly energy bounded if it is energy conserving and the boundary conditions (2.2) for \(g\neq 0\) lead to_ \[\oint\limits_{\partial\Omega}U^{T}(n_{i}A_{i})\ U\ ds=\oint\limits_{\partial \Omega}\frac{1}{2}U^{T}((n_{i}A_{i})+(n_{i}A_{i})^{T})U\ ds\geq-\oint\limits_{ \partial\Omega}G^{T}G\ ds, \tag{2.5}\] _where \(G=G(g,\vec{x},t)\) is independent of the solution \(U\)._ Proof.: The energy method applied to (2.1) yields \[\frac{1}{2}\frac{d}{dt}\|U\|_{P}^{2}+\oint\limits_{\partial\Omega}U^{T}(n_{i}A_{i })\ U\ ds=\int\limits_{\Omega}(U_{x_{i}}^{T}A_{i}U-U^{T}B_{i}U_{x_{i}})\ d\Omega- \int\limits_{\Omega}U^{T}CU\ d\Omega, \tag{2.6}\] where \((n_{1},..,n_{k})^{T}\) is the outward pointing unit normal. The terms on the right-hand side of (2.6) are cancelled by (2.3), leading to energy conservation. If in addition (2.4) or (2.5) holds, an energy bound or a strong energy bound, respectively, follows after integration in time. _Remark 2.2_.: For linear problems, a minimal number of boundary conditions that lead to a bound is a necessary and sufficient condition for well-posedness. For nonlinear problems this is not the case. A minimal number of boundary conditions that lead to a bound is a necessary but not a sufficient condition [15; 16; 17]. For non-smooth solutions \(U\), (2.1) interpreted in a weak sense allows for an entropy conservation law. **Proposition 2.3**.: _The IBVP (2.1) together with conditions (2.3) leads to the entropy conservation law_ \[S_{t}+(F_{i})_{x_{i}}=0, \tag{2.7}\] _where \(S=U^{T}PU/2\) is the mathematical (or generalised) entropy and \(F_{i}=U^{T}A_{i}U\) are the entropy fluxes._ Proof.: Multiplication of (2.1) from the left with \(U^{T}\) yields \[(U^{T}PU/2)_{t}+(U^{T}A_{i}U)_{x_{i}}=(U_{x_{i}}^{T}A_{i}U-U^{T}B_{i}U_{x_{i}} )-U^{T}CU. \tag{2.8}\] The right-hand side of (2.8) is cancelled by (2.3), leading to the entropy conservation relation (2.7). _Remark 2.4_.: The entropy conservation law (2.7) holds for smooth solutions. For discontinuous solutions it holds in a distributional sense. The non-standard compatibility conditions in this case read \[\partial S/\partial U=S_{U}=U^{T}P,\quad S_{U}P^{-1}((A_{i}(V)U)_{x_{i}}+A_{i} ^{T}(V)U_{x_{i}}+C(V)U)=(U^{T}A_{i}U)_{x_{i}}. \tag{2.9}\] The entropy \(S\) is convex (\(S_{UU}=P\)) and identical to the energy [17]. In the following we will use energy to denote both quantities, but sometimes remind the reader by writing out both notations explicitly. ## 3 Nonlinear energy and entropy boundedness: the boundary conditions We start with a couple of convenient transformations. Consider the boundary term \[\oint\limits_{\partial\Omega}U^{T}(n_{i}A_{i})\ U\ ds=\oint\limits_{\partial \Omega}\frac{1}{2}U^{T}((n_{i}A_{i})+(n_{i}A_{i})^{T})U\ ds=\oint\limits_{ \partial\Omega}U^{T}\tilde{A}(V)\ U\ ds, \tag{3.1}\] where \(\tilde{A}(V)\) is symmetric. Recall that if \(V=U\) we are dealing with a nonlinear problem, otherwise a variable coefficient problem. In the CFD problems we consider, the Cartesian velocity field is transformed to the normal and tangential ones, leading to the new vectors \(U_{n}=NU\).
Next we rotate the matrix \(\tilde{A}\) to diagonal form as \(\tilde{T}^{T}\tilde{A}\tilde{T}=\Lambda=diag(\lambda_{i})\), which gives us new rotated variables \(W=(N\tilde{T})^{-1}U=T^{-1}U\) and \[\oint\limits_{\partial\Omega}U^{T}\tilde{A}(V)\ U\ ds=\oint\limits_{ \partial\Omega}W^{T}\Lambda\ W\ ds=\oint\limits_{\partial\Omega}(W^{+})^{T} \Lambda^{+}\ W^{+}+(W^{-})^{T}\Lambda^{-}\ W^{-}\ ds=\oint\limits_{\partial \Omega}\lambda_{i}W_{i}^{2}\ ds, \tag{3.2}\] where we again use Einstein's summation convention. In (3.2), \(\Lambda^{+}\) and \(\Lambda^{-}\) denote the positive and negative parts of \(\Lambda\) respectively, while \(W^{+}\) and \(W^{-}\) denote the corresponding variables. The new rotated variables \(W=W(U)\) are functions of the solution in both the linear and nonlinear case. In the nonlinear case, the diagonal matrix \(\Lambda(U)\) is solution dependent and not a priori bounded, while in the linear case, \(\Lambda(V)\) is bounded by external data. This difference leads to significant differences in the boundary condition procedure. _Remark 3.1_.: For linear problems, the number of boundary conditions is equal to the number of eigenvalues of \(\tilde{A}(V)\) with the wrong (in this case negative) sign [16]. Sylvester's Criterion [18] shows that the number of boundary conditions is equal to the number of \(\lambda_{i}(V)\) with the wrong sign if the rotation matrix \(T\) is nonsingular. In the nonlinear case where \(\lambda_{i}=\lambda_{i}(U)\) it is more complicated, since multiple forms of the boundary term \(W^{T}\Lambda W\) may exist, see Section 4.1 below and [1; 2; 17] for examples. With a slight abuse of notation we will sometimes refer to \(\Lambda(U)\) as "eigenvalues" and to the rotated variables \(W(U)\) as "characteristic" variables, although strictly speaking they are not, even though they play a similar role. We will impose the boundary conditions both strongly and weakly. For the weak imposition we introduce a lifting operator \(L_{C}\) that enforces the boundary conditions in our governing equation (2.1) as follows \[PU_{t}+(A_{i}(V)U)_{x_{i}}+A_{i}^{T}(V)U_{x_{i}}+C(V)U+L_{C}(L(V)U-g)=0,\quad t \geq 0,\quad\vec{x}=(x_{1},x_{2},..,x_{k})\in\Omega. \tag{3.3}\] The lifting operator for two smooth vector functions \(\Phi,\Psi\) satisfies \(\int\Phi^{T}L_{C}(\Psi)d\Omega=\oint\Phi^{T}\Psi ds\), which enables development of the essential parts of the numerical boundary procedure in the continuous setting [19; 20]. ### The general form of nonlinear boundary conditions in rotated variables The starting point for the derivation of stable general nonlinear (and linear) boundary conditions (2.2) is the form (3.2) of the boundary term. First we need to find the formulation (3.2) with a _minimal_ number of entries in \(\Lambda^{-}\)[1; 2; 17] (there might be more than one formulation of the cubic boundary terms). Next, we need to specify the characteristic variables \(W^{-}\) in terms of \(W^{+}\) and external data. The general form is \[S(W^{-}-RW^{+})=G\quad\text{or equivalently}\quad W^{-}=RW^{+}+S^{-1}G. \tag{3.4}\] In (3.4), \(S\) is a non-singular matrix combining values of \(W^{-}\), the matrix \(SR\) combines values of \(W^{+}\), while \(G\) is given external data. The boundary condition (3.4) implemented weakly using a lifting operator is \[L_{C}=L_{C}(2(J^{-}T^{-1})^{T}\Sigma(W^{-}-RW^{+}-S^{-1}G)), \tag{3.5}\] where \(W=T^{-1}U\), \(W^{-}=J^{-}W\), \(W^{+}=J^{+}W\) and \(\Sigma\) is a penalty matrix.
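As a simple illustration of (3.4) and (3.5) (added here, not part of the original presentation), consider the scalar case with one outgoing variable \(w^{+}\) (with \(\lambda^{+}>0\)) and one ingoing variable \(w^{-}\) (with \(\lambda^{-}<0\)). The boundary condition (3.4) reduces to \[s(w^{-}-rw^{+})=g\quad\text{or equivalently}\quad w^{-}=rw^{+}+g/s\] with scalars \(r,s\), and the boundary term in (3.2) becomes \[\lambda^{+}(w^{+})^{2}-|\lambda^{-}|(rw^{+}+g/s)^{2}.\] In this scalar setting, the conditions derived below read: \(\lambda^{+}-r^{2}|\lambda^{-}|\geq 0\) for the strong homogeneous case, \(s=\tilde{s}^{-1}\sqrt{|\lambda^{-}|}\) with \(\tilde{s}\) sufficiently small for the inhomogeneous case, and the penalty \(\sigma=|\lambda^{-}|\) for the weak implementation.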
After the derivation of the stability conditions we will return to the boundary condition formulation (2.2) in the original variables. ### Boundary conditions and implementation techniques for stability of nonlinear problems Before attacking the nonlinear problem, we digress momentarily to the linear case to introduce one aspect of our subsequent nonlinear analysis. In the simplest possible version of (3.4) one can specify \(W^{-}=g\) corresponding to the negative \(\lambda_{i}(V)\), indicated by \(\lambda_{i}^{-}=-|\lambda_{i}(V)|\). Since \(|\lambda_{i}(V)|\) are bounded, we obtain \[\oint\limits_{\partial\Omega}U^{T}\tilde{A}(V)\ U\ ds\ =\oint\limits_{\partial \Omega}W^{T}\Lambda\ W\ ds=\oint\limits_{\partial\Omega}(W^{+})^{T}\Lambda^{+ }\ W^{+}+g^{T}\Lambda^{-}\ g\ ds\geq-\oint\limits_{\partial\Omega}G^{T}G\ ds, \tag{3.6}\] where \(G_{i}=\sqrt{|\lambda_{i}^{-}(V)|}g_{i}\). Hence we get a strong energy bound in terms of external data. However, in the nonlinear case, no estimate is obtained since \(\lambda_{i}^{-}(U)\) is not a priori bounded, see [21; 22] for IEE examples. The procedure to arrive at a general stable nonlinear inhomogeneous boundary condition and implementation consists of the following steps for the unknowns \(R,S,\Sigma\) in (3.4) and (3.5). 1. Derive strong homogeneous (\(G=0\)) boundary conditions. This leads to conditions on the matrix \(R\). 2. Derive strong inhomogeneous (\(G\neq 0\)) boundary conditions. This leads to conditions on the matrix \(S\). 3. Derive weak homogeneous (\(G=0\)) boundary conditions. This leads to conditions on the matrix \(\Sigma\). 4. Show that the weak inhomogeneous (\(G\neq 0\)) case of the boundary conditions follows from 1-3 above. The following Lemma (structured as the step-by-step procedure above) is the main result of this paper. **Lemma 3.2**.: _Consider the boundary term described in (3.1),(3.2) and the boundary conditions (3.4) implemented strongly or weakly using (3.5). Furthermore, let \(|\Lambda^{-}|=diag(|\lambda_{i}^{-}|)\) and \(|\Lambda^{-}|^{1/2}=diag(\sqrt{|\lambda_{i}^{-}|})\)._ _The boundary term augmented with \(\mathbf{1.}\)_strong nonlinear homogeneous boundary conditions_ _is positive semi-definite if the matrix \(R\) is such that_ \[\Lambda^{+}-R^{T}|\Lambda^{-}|R\geq 0. \tag{3.7}\] _The boundary term augmented with \(\mathbf{2.}\)_strong nonlinear inhomogeneous boundary conditions_ _is bounded by external given data if the matrix \(R\) satisfies (3.7) with strict inequality and the matrix \(S\) satisfies_ \[S=\tilde{S}^{-1}|\Lambda^{-}|^{1/2}\text{ with }\tilde{S}\text{ sufficiently small.} \tag{3.8}\] _The boundary term augmented with \(\mathbf{3.}\)_weak nonlinear homogeneous boundary conditions_ _is positive semi-definite if the matrix \(R\) satisfies (3.7) and the matrix \(\Sigma\) satisfies_ \[\Sigma=|\Lambda^{-}|. \tag{3.9}\] _The boundary term augmented with \(\mathbf{4.}\)_weak nonlinear inhomogeneous boundary conditions_ _is bounded by external given data if the matrix \(R\) satisfies (3.7) with strict inequality, the matrix \(S\) satisfies (3.8) and the matrix \(\Sigma\) satisfies (3.9)._ Proof.: We proceed in the step-by-step manner described above. _1. The homogeneous boundary condition (3.4) implemented strongly_ (with \(G=0\)) leads to \(W^{T}\Lambda W=(W^{+})^{T}(\Lambda^{+}-R^{T}|\Lambda^{-}|R)W^{+}\), and (3.7) leads to a positive semi-definite boundary term. _2.
The inhomogeneous boundary condition (3.4) implemented strongly_ (with \(G\neq 0\)) leads to \[W^{T}\Lambda W=(W^{+})^{T}\Lambda^{+}W^{+}-(RW^{+}+S^{-1}G)^{T}|\Lambda^{-}|(RW ^{+}+S^{-1}G). \tag{3.10}\] Expanding (3.10), adding and subtracting \(G^{T}G\), and using \(S\) as in (3.8) leads to the result \[W^{T}\Lambda W=\begin{bmatrix}W^{+}\\ G\end{bmatrix}^{T}\begin{bmatrix}\Lambda^{+}-R^{T}|\Lambda^{-}|R&-R^{T}| \Lambda^{-}|^{1/2}\tilde{S}\\ -\tilde{S}^{T}|\Lambda^{-}|^{1/2}R&I-\tilde{S}^{T}\tilde{S}\end{bmatrix} \begin{bmatrix}W^{+}\\ G\end{bmatrix}-G^{T}G, \tag{3.11}\] which is bounded from below by external data if \(\tilde{S}\) is sufficiently small and (3.7) holds strictly. _3. The homogeneous boundary condition (3.4) implemented weakly_ (with \(G=0\)) using the lifting operator in (3.5) leads to the boundary term \[W^{T}\Lambda W+2U^{T}(J^{-}T^{-1})^{T}\Sigma(W^{-}-RW^{+})=W^{T}\Lambda W+2(W^{-})^{T}\Sigma(W^{-}-RW^{+}). \tag{3.12}\] Collecting similar terms transforms the right-hand side into \[(W^{+})^{T}\Lambda^{+}W^{+}+(W^{-})^{T}(-|\Lambda^{-}|+2\Sigma)W^{-}-2(W^{-})^ {T}\Sigma RW^{+}. \tag{3.13}\] The choice (3.9) of \(\Sigma\), followed by adding and subtracting \((RW^{+})^{T}|\Lambda^{-}|RW^{+}\), transforms (3.13) into \[(W^{+})^{T}(\Lambda^{+}-R^{T}|\Lambda^{-}|R)W^{+}+(W^{-}-RW^{+})^{T}|\Lambda^{ -}|(W^{-}-RW^{+}), \tag{3.14}\] which leads to a positive semi-definite boundary term by using condition (3.7). _4. The inhomogeneous boundary condition (3.4) implemented weakly_ (with \(G\neq 0\)) using the lifting operator in (3.5) and the choice of \(\Sigma\) in (3.9) leads to the boundary terms \[W^{T}\Lambda W+2U^{T}(J^{-}T^{-1})^{T}\Sigma(W^{-}-RW^{+}-S^{-1}G)=W^{T} \Lambda W+2(W^{-})^{T}|\Lambda^{-}|(W^{-}-RW^{+}-S^{-1}G).\] By adding and subtracting \((W^{-})^{T}|\Lambda^{-}|(W^{-})\) and rearranging, the boundary terms above can be written as \[(W^{+})^{T}(\Lambda^{+}-R^{T}|\Lambda^{-}|R)W^{+}+(W^{-}-RW^{+})^{T}|\Lambda^{ -}|(W^{-}-RW^{+})-2(W^{-})^{T}|\Lambda^{-}|S^{-1}G. \tag{3.15}\] By rearranging (3.15) we find that it is equivalent to \[(W^{-}-RW^{+}-S^{-1}G)^{T}|\Lambda^{-}|(W^{-}-RW^{+}-S^{-1}G)+\begin{bmatrix}W^{+} \\ G\end{bmatrix}^{T}\begin{bmatrix}\Lambda^{+}-R^{T}|\Lambda^{-}|R&-R^{T}|\Lambda^ {-}|S^{-1}\\ -(S^{-1})^{T}|\Lambda^{-}|R&-(S^{-1})^{T}|\Lambda^{-}|S^{-1}\end{bmatrix} \begin{bmatrix}W^{+}\\ G\end{bmatrix}.\] The first term is obviously positive semi-definite. By adding and subtracting the boundary data \(G^{T}G\) and inserting the matrix \(S\) as in (3.8), the second term becomes \[\begin{bmatrix}W^{+}\\ G\end{bmatrix}^{T}\begin{bmatrix}\Lambda^{+}-R^{T}|\Lambda^{-}|R&-R^{T}| \Lambda^{-}|^{1/2}\widetilde{S}\\ -\widetilde{S}^{T}|\Lambda^{-}|^{1/2}R&I-\tilde{S}^{T}\tilde{S}\end{bmatrix} \begin{bmatrix}W^{+}\\ G\end{bmatrix}-G^{T}G, \tag{3.16}\] which is bounded from below by external data if \(\tilde{S}\) is sufficiently small and condition (3.7) holds strictly. Lemma 3.2 can be used to prove that the estimates (2.4) and (2.5) in Proposition 2.1 hold. ### The general form of nonlinear boundary conditions in original variables We are now ready to connect the characteristic boundary condition formulation (3.4) with (2.2) in the original variables. By using the definitions \(W=T^{-1}U\), \(W^{-}=J^{-}W\), \(W^{+}=J^{+}W\) and relation (3.8), we find that (3.4) transforms to \[\tilde{S}^{-1}|\Lambda^{-}|^{1/2}(J^{-}-RJ^{+})T^{-1}U=G.
\tag{3.17}\] By comparing (2.2) and (3.17), the original boundary operator and boundary data can be identified as \[L=|\Lambda^{-}|^{1/2}(J^{-}-RJ^{+})T^{-1}\text{ and }g=\tilde{S}G \tag{3.18}\] respectively. This concludes the analysis of the general formulation of nonlinear boundary conditions. ## 4 Application of the general theory to initial boundary value problems in CFD We will specifically consider the IEEs, the SWEs and the CEEs, and focus on the boundary conditions. ### The 2D incompressible Euler equations The incompressible 2D Euler equations in split form are \[PU_{t}+\frac{1}{2}\left[(AU)_{x}+AU_{x}+(BU)_{y}+BU_{y}\right]=0, \tag{4.1}\] where \(U=(u,v,p)^{T}\) and \[P=\begin{bmatrix}1&0&0\\ 0&1&0\\ 0&0&0\end{bmatrix},\quad A=\begin{bmatrix}u&0&1\\ 0&u&0\\ 1&0&0\end{bmatrix},\quad B=\begin{bmatrix}v&0&0\\ 0&v&1\\ 0&1&0\end{bmatrix}. \tag{4.2}\] Since the matrices \(A,B\) are symmetric, the formulation (4.1) is in the required skew-symmetric form (3.3). We obtain an estimate in the semi-norm \(\|U\|_{P}^{2}=\int_{\Omega}U^{T}PUd\Omega\) involving only the velocities. Note that the pressure \(p\) includes a division by the constant density, and hence has the dimension velocity squared. By applying the transformation \(W=T^{-1}U\) described above, the boundary term gets the form \[U^{T}(n_{1}A+n_{2}B)U=W^{T}\Lambda W=(W^{+})^{T}\Lambda^{+}W^{+}+(W^{-})^{T} \Lambda^{-}W^{-}, \tag{4.3}\] where \(W=(u_{n}+p/u_{n},u_{\tau},p/u_{n})^{T}\), \(\Lambda=diag(u_{n},u_{n},-u_{n})\), \(u_{n}=n_{1}u+n_{2}v\) and \(u_{\tau}=-n_{2}u+n_{1}v\). At inflow, \[W^{-}=\begin{bmatrix}u_{n}+p/u_{n}\\ u_{\tau}\end{bmatrix},\quad\Lambda^{-}=\begin{bmatrix}u_{n}&0\\ 0&u_{n}\end{bmatrix},\quad W^{+}=p/u_{n},\quad\Lambda^{+}=-u_{n}, \tag{4.4}\] where \(u_{n}<0\), while at outflow with \(u_{n}>0\) we get the reversed situation with \[W^{+}=\begin{bmatrix}u_{n}+p/u_{n}\\ u_{\tau}\end{bmatrix},\quad\Lambda^{+}=\begin{bmatrix}u_{n}&0\\ 0&u_{n}\end{bmatrix},\quad W^{-}=p/u_{n},\quad\Lambda^{-}=-u_{n}. \tag{4.5}\] By using the definitions in (4.4) and (4.5), it is straightforward to check for boundedness using Lemma 3.2. **Example 4.1**.: _Consider the general form of boundary condition in (3.4)._ _With Dirichlet inflow conditions on the normal and tangential velocities \(u_{n},u_{\tau}\) we find that_ \[W^{-}-RW^{+}=\begin{bmatrix}u_{n}+p/u_{n}\\ u_{\tau}\end{bmatrix}-\begin{bmatrix}R_{1}\\ R_{2}\end{bmatrix}p/u_{n}=\begin{bmatrix}u_{n}\\ u_{\tau}\end{bmatrix}\quad\Rightarrow\quad\begin{bmatrix}R_{1}\\ R_{2}\end{bmatrix}=\begin{bmatrix}1\\ 0\end{bmatrix}, \tag{4.6}\] _which leads to \(\Lambda^{+}-R^{T}|\Lambda^{-}|R=0\). Hence condition (3.7) is satisfied, but not strictly, which makes the choice of \(S\) in (3.8) irrelevant. This leads to boundedness, but not strong boundedness as defined in Proposition 2.1. A weak implementation requires \(\Sigma=|\Lambda^{-}|=diag(|u_{n}|,|u_{n}|)\), as specified in (3.9)._ _For an outflow condition on the characteristic variable \(p/u_{n}\), the boundary condition (3.4) holds with \(R=(0,0)\) and hence condition (3.7) holds strictly. This leads to a strongly energy bounded solution using \(S=\tilde{S}^{-1}\sqrt{|u_{n}|}\) with \(|\tilde{S}|\leq 1\), as can be seen in (3.11) and (3.16) and required in (3.8).
A weak implementation requires \(\Sigma=|\Lambda^{-}|=|u_{n}|\), as specified in (3.9)._ ### The 2D shallow water equations The 2D SWEs on skew-symmetric form, as required in Proposition 2.1 and derived in [1], are \[U_{t}+(AU)_{x}+A^{T}U_{x}+(BU)_{y}+B^{T}U_{y}+CU=0, \tag{4.7}\] where \(U=(U_{1},U_{2},U_{3})^{T}=(\phi,\sqrt{\phi}u,\sqrt{\phi}v)^{T}\), \(\phi=gh\) is the geopotential [23], \(h\) is the water height, \(g\) is the gravitational constant and \((u,v)\) is the fluid velocity in the \((x,y)\) directions respectively. The Coriolis forces are included in the matrix \(C\) through the function \(f\), which is typically a function of latitude [24; 25]. Note that \(h>0\) and \(\phi>0\) from physical considerations. The matrices in (4.7) constitute a two-parameter family \[A=\begin{bmatrix}\alpha\frac{U_{2}}{\sqrt{U_{1}}}&(1-3\alpha)\sqrt{U_{1}}&0 \\ 2\alpha\sqrt{U_{1}}&\frac{1}{2}\frac{U_{2}}{\sqrt{U_{1}}}&0\\ 0&0&\frac{1}{2}\frac{U_{2}}{\sqrt{U_{1}}}\end{bmatrix},B=\begin{bmatrix}\beta \frac{U_{3}}{\sqrt{U_{1}}}&0&(1-3\beta)\sqrt{U_{1}}\\ 0&\frac{1}{2}\frac{U_{3}}{\sqrt{U_{1}}}&0\\ 2\beta\sqrt{U_{1}}&0&\frac{1}{2}\frac{U_{3}}{\sqrt{U_{1}}}\end{bmatrix},C= \begin{bmatrix}0&0&0\\ 0&0&-f\\ 0&+f&0\end{bmatrix} \tag{4.8}\] where the parameters \(\alpha,\beta\) are arbitrary. (Symmetric matrices are e.g. obtained with \(\alpha=\beta=1/5\).) The energy rate cannot depend on the free parameters \(\alpha\) and \(\beta\) in the matrices \(A\) and \(B\), since they are not present in the original SWEs from which (4.7) is derived [1]. By computing the boundary term, we find \[U^{T}(n_{1}A+n_{2}B)U=U^{T}\begin{bmatrix}\frac{\alpha+\beta}{2}u_{n}&\frac{1 -\alpha}{2}n_{x}\sqrt{U_{1}}&\frac{1-\beta}{2}n_{y}\sqrt{U_{1}}\\ \frac{1-\alpha}{2}n_{x}\sqrt{U_{1}}&\frac{1}{2}u_{n}&0\\ \frac{1-\beta}{2}n_{y}\sqrt{U_{1}}&0&\frac{1}{2}u_{n}\end{bmatrix}U=U^{T} \begin{bmatrix}u_{n}&0&0\\ 0&\frac{1}{2}u_{n}&0\\ 0&0&\frac{1}{2}u_{n}\end{bmatrix}U \tag{4.9}\] and the (somewhat mysterious) dependency on the free parameters \(\alpha\) and \(\beta\) vanishes. The relation (4.9) seemingly indicates that we need three boundary conditions at inflow (\(u_{n}<0\)), and zero at outflow (\(u_{n}>0\)). However, this is a nonlinear problem and, as shown in [17], it can be rewritten by changing variables and observing that \(u_{n}=(n_{1}U_{2}+n_{2}U_{3})/\sqrt{U_{1}}\). Reformulating (4.9) in the new variables we find \[U^{T}(n_{1}A+n_{2}B)U=U^{T}\begin{bmatrix}u_{n}&0&0\\ 0&\frac{1}{2}u_{n}&0\\ 0&0&\frac{1}{2}u_{n}\end{bmatrix}U=W^{T}\begin{bmatrix}-\frac{1}{2U_{n}\sqrt{ U_{1}}}&0&0\\ 0&\frac{1}{2U_{n}\sqrt{U_{1}}}&0\\ 0&0&\frac{1}{2U_{n}\sqrt{U_{1}}}\end{bmatrix}W, \tag{4.10}\] where \(W^{T}=(W_{1},W_{2},W_{3})=(U_{1}^{2},U_{1}^{2}+U_{n}^{2},U_{n}U_{\tau})\). The variables \((U_{1},U_{n},U_{\tau})=(\phi,\sqrt{\phi}u_{n},\sqrt{\phi}u_{\tau})\) are directed in the normal (\(U_{n}\)) and tangential (\(U_{\tau}\)) directions respectively. The relation (4.10) indicates that only two boundary conditions are needed at inflow when \(U_{n}<0\). Since we search for a minimal number of boundary conditions, we consider the formulation (4.9) for outflow, where no boundary conditions are required. To be specific, at inflow where \(U_{n}<0\) we find \[W^{-}=\begin{bmatrix}U_{1}^{2}+U_{n}^{2}\\ U_{n}U_{\tau}\end{bmatrix},\quad\Lambda^{-}=\begin{bmatrix}\frac{1}{2U_{n} \sqrt{U_{1}}}&0\\ 0&\frac{1}{2U_{n}\sqrt{U_{1}}}\end{bmatrix},\quad W^{+}=U_{1}^{2},\quad\Lambda^ {+}=-\frac{1}{2U_{n}\sqrt{U_{1}}}.
\tag{4.11}\] The definitions in (4.11) can be used to check any inflow condition for boundedness using Lemma 3.2. **Example 4.2**.: _Consider the general form of boundary condition in (3.4)._ _With Dirichlet inflow conditions on \(U_{n},U_{\tau}\) we find_ \[W^{-}-RW^{+}=\begin{bmatrix}U_{1}^{2}+U_{n}^{2}\\ U_{n}U_{\tau}\end{bmatrix}-\begin{bmatrix}R_{1}\\ R_{2}\end{bmatrix}U_{1}^{2}=\begin{bmatrix}U_{n}^{2}\\ U_{n}U_{\tau}\end{bmatrix}\quad\Rightarrow\quad\begin{bmatrix}R_{1}\\ R_{2}\end{bmatrix}=\begin{bmatrix}1\\ 0\end{bmatrix}, \tag{4.12}\] _which leads to \(\Lambda^{+}-R^{T}|\Lambda^{-}|R=0\). Hence condition (3.7) is satisfied, but not strictly, which makes the choice of \(S\) in (3.8) irrelevant (similar to the inflow case in Example 4.1). This leads to boundedness, but not strong boundedness as defined in Proposition 2.1. A weak implementation requires \(\Sigma=|\Lambda^{-}|\) in (4.11)._ _By instead specifying the characteristic variables \(W^{-}\) directly (similar to the outflow case in Example 4.1), we have \(R=(0,0)^{T}\) and (3.7) holds strictly. This leads to a strongly bounded solution if \(S=\tilde{S}^{-1}\sqrt{|\Lambda^{-}|}\) with \(\tilde{S}=diag(\tilde{s}_{1},\tilde{s}_{2})\) sufficiently small, as required in (3.8). A weak implementation requires \(\Sigma=|\Lambda^{-}|\) in (4.11)._ ### The 2D compressible Euler equations The 2D CEEs on skew-symmetric form, as required in Proposition 2.1 and derived in [2], are \[P\Phi_{t}+(A\Phi)_{x}+A^{T}\Phi_{x}+(B\Phi)_{y}+B^{T}\Phi_{y}=0, \tag{4.13}\] where \(\Phi=(\sqrt{\rho},\sqrt{\rho}u,\sqrt{\rho}v,\sqrt{p})^{T}\), \(P=diag(1,(\gamma-1)/2,(\gamma-1)/2,1)\) and \[A=\frac{1}{2}\begin{bmatrix}u&0&0&0\\ 0&\frac{(\gamma-1)}{2}u&0&0\\ 0&0&\frac{(\gamma-1)}{2}u&0\\ 0&2(\gamma-1)\frac{\phi_{4}}{\phi_{1}}&0&(2-\gamma)u\end{bmatrix},\quad B= \frac{1}{2}\begin{bmatrix}v&0&0&0\\ 0&\frac{(\gamma-1)}{2}v&0&0\\ 0&0&\frac{(\gamma-1)}{2}v&0\\ 0&0&2(\gamma-1)\frac{\phi_{4}}{\phi_{1}}&(2-\gamma)v\end{bmatrix}. \tag{4.14}\] By rotating the Cartesian velocities to normal and tangential velocities at the boundary, we obtain \[\Phi^{T}(n_{1}\tilde{A}+n_{2}\tilde{B})\Phi=\Phi_{r}^{T}\begin{bmatrix}u_{n}&0&0&0\\ 0&\frac{(\gamma-1)}{2}u_{n}&0&(\gamma-1)\frac{\phi_{4}}{\phi_{1}}\\ 0&0&\frac{(\gamma-1)}{2}u_{n}&0\\ 0&(\gamma-1)\frac{\phi_{4}}{\phi_{1}}&0&(2-\gamma)u_{n}\end{bmatrix}\Phi_{r}, \tag{4.15}\] where \(\Phi_{r}=(\phi_{1},\phi_{2},\phi_{3},\phi_{4})^{T}=(\sqrt{\rho},\sqrt{\rho}u_{n}, \sqrt{\rho}u_{\tau},\sqrt{p})^{T}\). The boundary term (4.15) can be rotated to diagonal form, which yields the boundary term \(W^{T}\Lambda W\) where \[W=\begin{bmatrix}\phi_{1}\\ \phi_{2}+2\phi_{4}^{2}/\phi_{2}\\ \phi_{3}\\ \phi_{4}\end{bmatrix},\quad\Lambda=\begin{bmatrix}u_{n}&0&0&0\\ 0&\frac{(\gamma-1)}{2}u_{n}&0&0\\ 0&0&\frac{(\gamma-1)}{2}u_{n}&0\\ 0&0&0&(2-\gamma)u_{n}\Psi(M_{n})\end{bmatrix}. \tag{4.16}\] By comparing with (4.15) we see that the last diagonal entry is modified by the multiplication with \(\Psi(M_{n})\), which is a function of the normal Mach number \(M_{n}=u_{n}/c\). Explicitly we have \[\Psi(M_{n})=1-\frac{2(\gamma-1)}{\gamma(2-\gamma)}\frac{1}{M_{n}^{2}}, \tag{4.17}\] which switches sign at \(M_{n}^{2}=2(\gamma-1)/(\gamma(2-\gamma))\). _Remark 4.3_.: This yields \(|M_{n}|=1\) for \(\gamma=\sqrt{2}\), while for \(\gamma=1.4\) we get \(|M_{n}|\approx 0.98\), see also [2]. Due to the sign shift in \(\Psi\) at \(M_{n}^{2}=2(\gamma-1)/(\gamma(2-\gamma))\) we get different cases for inflow where \(u_{n}<0\).
We find that for subsonic inflow, where \(u_{n}<0,\Psi<0\), the relation (4.16) leads to \[W^{-}=\begin{bmatrix}\phi_{1}\\ \phi_{2}+2\phi_{4}^{2}/\phi_{2}\\ \phi_{3}\end{bmatrix},\quad\Lambda^{-}=\begin{bmatrix}u_{n}&0&0\\ 0&\frac{(\gamma-1)}{2}u_{n}&0\\ 0&0&\frac{(\gamma-1)}{2}u_{n}\end{bmatrix},\quad W^{+}=\phi_{4},\quad\Lambda^{ +}=(2-\gamma)u_{n}\Psi(M_{n}). \tag{4.18}\] For supersonic inflow, \(u_{n}<0,\Psi>0\), we get \(W^{-}=W\) and \(\Lambda^{-}=\Lambda\) from relation (4.16), i.e. all eigenvalues are negative. In the outflow case, the shift in speed can be ignored since an alternative form of (4.15), different from (4.16), exists. By contracting (4.15) we find that \[\Phi^{T}(n_{1}\tilde{A}+n_{2}\tilde{B})\Phi=u_{n}(\phi_{1}^{2}+\frac{(\gamma-1)}{2}(\phi_{2}^{2 }+\phi_{3}^{2})+\gamma\phi_{4}^{2})=\Phi_{r}^{T}\begin{bmatrix}u_{n}&0&0&0\\ 0&\frac{(\gamma-1)}{2}u_{n}&0&0\\ 0&0&\frac{(\gamma-1)}{2}u_{n}&0\\ 0&0&0&\gamma u_{n}\end{bmatrix}\Phi_{r}, \tag{4.19}\] which proves that no boundary conditions are necessary in the outflow case. **Example 4.4**.: _Consider the general form of boundary condition in (3.4)._ _With Dirichlet inflow conditions on \(\phi_{1},\phi_{2},\phi_{3}\) for \(\Psi(M_{n})<0\), we find using (4.18)_ \[W^{-}-RW^{+}=\begin{bmatrix}\phi_{1}\\ \phi_{2}+2\phi_{4}^{2}/\phi_{2}\\ \phi_{3}\end{bmatrix}-\begin{bmatrix}R_{1}\\ R_{2}\\ R_{3}\end{bmatrix}\phi_{4}=\begin{bmatrix}\phi_{1}\\ \phi_{2}\\ \phi_{3}\end{bmatrix}\quad\Rightarrow\quad\begin{bmatrix}R_{1}\\ R_{2}\\ R_{3}\end{bmatrix}=\begin{bmatrix}0\\ 2\phi_{4}/\phi_{2}\\ 0\end{bmatrix}, \tag{4.20}\] _which leads to \(\Lambda^{+}-R^{T}|\Lambda^{-}|R=-(2-\gamma)|u_{n}|<0\). Hence condition (3.7) is violated, and no bound can be found._ _By instead specifying the characteristic variables \(W^{-}\) directly (as for the outflow case in Example 4.1 and the inflow case in Example 4.2), we have \(R=(0,0,0)^{T}\) and (3.7) holds strictly. This leads to a strongly bounded solution if \(S=\tilde{S}^{-1}\sqrt{|\Lambda^{-}|}\) with \(\tilde{S}=diag(\tilde{s}_{1},\tilde{s}_{2},\tilde{s}_{3})\) sufficiently small, see (3.8). A weak implementation requires \(\Sigma=|\Lambda^{-}|\) in (4.18). In the outflow case, no boundary conditions are required due to (4.19)._ ### Open questions for nonlinear boundary conditions We will end this section by discussing some open questions stemming from the nonlinear analysis above. #### 4.4.1 The number of boundary conditions in nonlinear IBVPs required for boundedness The boundary conditions for the SWEs and CEEs are similar in the sense that at least two _different_ formulations of the boundary terms can be found. The minimal number of required conditions differs in both the inflow and outflow cases. One common feature is that no outflow conditions seem to be necessary. Another similar feature is that the number of outflow conditions is independent of the speed of sound in the CEE case and of the celerity in the SWE case. Both of these effects differ from what one finds in a linear analysis. _Remark 4.5_.: By substituting the IEE variables \(W=(u_{n}+p/u_{n},u_{\tau},p/u_{n})^{T}\) in (4.3) with \(W=(u_{n},u_{\tau},\sqrt{p})^{T}\) (similar to the ones used in the CEE and SWE cases) one obtains a similar situation also for the IEEs. The eigenvalues for the IEEs transform from \(\Lambda=diag(u_{n},u_{n},-u_{n})\) to \(\Lambda=diag(u_{n},u_{n},2u_{n})\), which leads to a different number of boundary conditions.
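The conclusions in Example 4.4 are easy to confirm numerically. The following NumPy sketch (an illustration added here, not part of the paper; the state values are arbitrary) forms \(\Lambda^{+}-R^{T}|\Lambda^{-}|R\) from (4.18) and (4.20) for a subsonic inflow state, and verifies that the Dirichlet choice violates (3.7) while \(R=0\) satisfies it strictly.

```python
import numpy as np

# Arbitrary subsonic inflow state (assumed for illustration): rho, p, u_n < 0
gamma, rho, p, u_n = 1.4, 1.0, 1.0, -0.3
c = np.sqrt(gamma * p / rho)                      # speed of sound
Mn2 = (u_n / c) ** 2                              # squared normal Mach number
Psi = 1.0 - 2.0 * (gamma - 1.0) / (gamma * (2.0 - gamma) * Mn2)
assert u_n < 0 and Psi < 0                        # subsonic inflow case of (4.18)

phi1, phi2, phi4 = np.sqrt(rho), np.sqrt(rho) * u_n, np.sqrt(p)
abs_Lm = abs(u_n) * np.diag([1.0, (gamma - 1) / 2, (gamma - 1) / 2])  # |Lambda^-|
lam_plus = (2.0 - gamma) * u_n * Psi              # the single positive entry Lambda^+

# Dirichlet conditions on phi1, phi2, phi3: R from (4.20) violates (3.7)
R = np.array([0.0, 2.0 * phi4 / phi2, 0.0])
print(lam_plus - R @ abs_Lm @ R)                  # -0.18 < 0: no bound

# Specifying W^- directly instead: R = 0, so (3.7) holds strictly
print(lam_plus)                                   # > 0: strong bound possible
```

The first printed value equals \(-(2-\gamma)|u_{n}|\), in agreement with the expression in Example 4.4.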
#### 4.4.2 The effect of nonlinear boundary conditions on uniqueness and existence

Roughly speaking, a minimal number of dissipative boundary conditions in the linear case leads to uniqueness by the fact that it determines the normal modes of the solution [26; 27]. The minimal number of boundary conditions can also be obtained using the energy method, see [16]. If uniqueness and boundedness for a minimal number of boundary conditions are given, existence can be shown (e.g. using Laplace transforms or difference approximations [28; 29]). For linear IBVPs, the number of boundary conditions is independent of the solution and depends only on known external data. For nonlinear IBVPs, that is no longer the case, and the number may change in an unpredictable way as the solution develops in time. In addition, as we have seen above, it also varies depending on the particular formulation chosen. This is confusing and raises a number of questions that we will _speculate_ on below.

Let us consider the SWEs as an example. The two forms of the boundary terms given in (4.10) were \[U^{T}\begin{bmatrix}u_{n}&0&0\\ 0&\frac{1}{2}u_{n}&0\\ 0&0&\frac{1}{2}u_{n}\end{bmatrix}U=W^{T}\begin{bmatrix}-\frac{1}{2U_{n}\sqrt{U_{1}}}&0&0\\ 0&\frac{1}{2U_{n}\sqrt{U_{1}}}&0\\ 0&0&\frac{1}{2U_{n}\sqrt{U_{1}}}\end{bmatrix}W, \tag{4.21}\] where \(W^{T}=(W_{1},W_{2},W_{3})=(U_{1}^{2},U_{1}^{2}+U_{n}^{2},U_{n}U_{\tau})\) and \((U_{1},U_{n},U_{\tau})=(\phi,\sqrt{\phi}u_{n},\sqrt{\phi}u_{\tau})\). Based on the two formulations in (4.21), one may build the boundary procedure on one of the following four scenarios.

1. The left formulation with variable \(U\) at both inflow and outflow boundaries.
2. The right formulation with variable \(W\) at both inflow and outflow boundaries.
3. The left formulation with variable \(U\) at inflow and the right formulation with \(W\) at outflow boundaries.
4. The right formulation with variable \(W\) at inflow and the left formulation with \(U\) at outflow boundaries.

Scenario 1 would in a one-dimensional setting lead to three boundary conditions, all applied on the inflow boundary. Scenario 2 would also give three boundary conditions, but now two would be applied on the inflow boundary and one on the outflow boundary. Scenario 3 would lead to four boundary conditions, three on the inflow and one on the outflow boundary. Scenario 4 would only give two boundary conditions, both applied on the inflow boundary.

If the above scenarios were interpreted in the linear sense, both Scenarios 1 and 2 would determine the solution uniquely. (One of them would be a better choice than the other depending on the growth or decay of the solution away from the boundary [26; 27].) In Scenario 3, the solution would be overspecified, leading to loss of existence. In Scenario 4, the solution would be underspecified, leading to loss of uniqueness. In summary: Scenarios 1 and 2 may lead to acceptable solutions, Scenario 3 gives no solution at all, while Scenario 4 yields a bounded solution with limited (or no) accuracy. However, since these results are nonlinear, the above summary is merely _speculative_. We do not know exactly how to interpret them, since the present nonlinear theory is incomplete. We only know that boundedness is required. It also seems likely though that Scenarios 1 and 2 should be preferred over Scenarios 3 and 4. The speculations in this section are of course equally valid (or not valid) for the CEEs and IEEs.
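The equality of the two forms in (4.21), and hence the bookkeeping behind the four scenarios, can be checked directly. A minimal sketch of ours, with an arbitrary state:

```python
import numpy as np

# Sanity check (ours) that the two SWE boundary-term formulations in (4.21)
# produce the same value.
phi, un, ut = 2.0, 0.7, -1.3                   # arbitrary state with un != 0
U1, Un, Ut = phi, np.sqrt(phi)*un, np.sqrt(phi)*ut

U    = np.array([U1, Un, Ut])
left = U @ np.diag([un, un/2, un/2]) @ U

W     = np.array([U1**2, U1**2 + Un**2, Un*Ut])
right = W @ (np.diag([-1.0, 1.0, 1.0])/(2*Un*np.sqrt(U1))) @ W
print(np.isclose(left, right))                 # True
```

Note that the sign pattern of the two diagonal matrices immediately reproduces the condition counts quoted above: for inflow (\(u_{n}<0\)) the left form has three negative entries and the right form two, while for outflow the left form has none and the right form one.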
## 5 Nonlinear energy and entropy stability

Consider the extended version (3.3) of (2.1) rewritten (using Einstein's summation convention) for clarity \[PU_{t}+(A_{i}U)_{x_{i}}+A_{i}^{T}U_{x_{i}}+CU+L_{C}=0,\quad t\geq 0,\quad\vec{x}=(x_{1},x_{2},..,x_{k})\in\Omega. \tag{5.1}\] Equation (5.1) is augmented with the initial condition \(U(\vec{x},0)=F(\vec{x})\) in \(\Omega\) and boundary conditions of the form (3.4) on \(\partial\Omega\). Furthermore \(A_{i}=A_{i}(U)\), \(C=C(U)\) and \(P\) are \(n\times n\) matrices while \(U\) and \(L_{C}\) are \(n\)-vectors. \(L_{C}\) is the continuous lifting operator of the form (3.5) implementing the boundary conditions weakly.

A straightforward approximation of (5.1) in summation-by-parts (SBP) form in \(M\) nodes is \[(P\otimes I_{M})\vec{U}_{t}+\mathbf{D}_{\mathbf{x_{i}}}\mathbf{A_{i}}\vec{U}+\mathbf{A_{i}^{T}}\mathbf{D}_{\mathbf{x_{i}}}\vec{U}+\mathbf{C}\vec{U}+\vec{L}_{D}=0,\quad\vec{U}(0)=\vec{F} \tag{5.2}\] where \(\vec{U}=(\vec{U}_{1}^{T},\vec{U}_{2}^{T},...,\vec{U}_{n}^{T})^{T}\) includes approximations of \(U=(U_{1},U_{2},...,U_{n})^{T}\) in each node. The discrete lifting operator \(\vec{L}_{D}(\vec{U})\) implements the boundary conditions in a similar way to \(L_{C}(U)\) and \(\vec{F}\) denotes the discrete initial data with the continuous initial data injected in the nodes. The matrix elements of \(\mathbf{A_{i}},\mathbf{C}\) are matrices with node values of the matrix elements in \(A_{i},C\) injected on the diagonals as exemplified below \[A_{i}=\begin{pmatrix}a_{11}&\ldots&a_{1n}\\ \vdots&\ddots&\vdots\\ a_{n1}&\ldots&a_{nn}\end{pmatrix},\quad\mathbf{A_{i}}=\begin{pmatrix}\mathbf{a_{11}}&\ldots&\mathbf{a_{1n}}\\ \vdots&\ddots&\vdots\\ \mathbf{a_{n1}}&\ldots&\mathbf{a_{nn}}\end{pmatrix},\quad\mathbf{a_{ij}}=diag(a_{ij}(x_{1},y_{1}),\ldots,a_{ij}(x_{M},y_{M})). \tag{5.3}\] Moreover \(\mathbf{D_{x_{i}}}=I_{n}\otimes D_{x_{i}}\) where \(\otimes\) denotes the Kronecker product, \(I_{n}\) is the \(n\times n\) identity matrix, \(D_{x_{i}}=P_{\Omega}^{-1}Q_{x_{i}}\) are SBP difference operators, \(P_{\Omega}\) is a positive definite diagonal volume quadrature matrix that defines a scalar product and norm such that \[(\vec{U},\vec{V})_{\Omega}=\vec{U}^{T}P_{\Omega}\vec{V}\approx\int\limits_{\Omega}U^{T}Vd\Omega,\quad\text{and}\quad(\vec{U},\vec{U})_{\Omega}=\|\vec{U}\|_{\Omega}^{2}=\vec{U}^{T}P_{\Omega}\vec{U}\approx\int\limits_{\Omega}U^{T}Ud\Omega=\|U\|_{\Omega}^{2}. \tag{5.4}\] Following [30] we introduce the discrete normal \(\mathbf{N}=(N_{1},N_{2},...,N_{k})\) approximating the continuous normal \(\mathbf{n}=(n_{1},n_{2},...,n_{k})\) in the \(N\) boundary nodes and a restriction operator \(E\) that extracts the boundary values \(E\vec{U}\) from the total values. We also need a positive definite diagonal boundary quadrature \(P_{\partial\Omega}=diag(ds_{1},ds_{2},...,ds_{N})\) such that \(\oint_{\partial\Omega}U^{T}Uds\approx(E\vec{U})^{T}P_{\partial\Omega}(E\vec{U})=(E\vec{U})_{i}^{2}ds_{i}\). With this notation in place (again using Einstein's summation convention), the SBP constraint for a scalar variable becomes \[Q_{x_{i}}+Q_{x_{i}}^{T}=E^{T}P_{\partial\Omega}N_{i}E, \tag{5.5}\] which leads to the scalar summation-by-parts formula mimicking integration-by-parts \[(\vec{U},D_{x_{i}}\vec{V})=\vec{U}^{T}P_{\Omega}(D_{x_{i}}\vec{V})=-(D_{x_{i}}\vec{U},\vec{V})+(E\vec{U})^{T}P_{\partial\Omega}N_{i}(E\vec{V}). \tag{5.6}\]
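Relations (5.5) and (5.6) are easy to verify for a concrete operator. The following sketch (our own construction, not taken from the paper) uses the classical second-order diagonal-norm SBP operator in one dimension, where the boundary consists of the two end nodes with normals \(-1\) and \(+1\) and \(P_{\partial\Omega}\) is the identity.

```python
import numpy as np

# A minimal 1D sketch (ours) of a diagonal-norm SBP operator D = P^{-1} Q
# satisfying the scalar constraint (5.5).
m, h = 21, 1.0/20
P = h*np.diag([0.5] + [1.0]*(m-2) + [0.5])     # volume quadrature P_Omega
Q = 0.5*(np.diag(np.ones(m-1), 1) - np.diag(np.ones(m-1), -1))
Q[0, 0], Q[-1, -1] = -0.5, 0.5                 # standard second-order SBP
D = np.linalg.solve(P, Q)                      # D = P^{-1} Q

E = np.zeros((2, m)); E[0, 0] = E[1, -1] = 1.0 # restriction to the boundary
N = np.diag([-1.0, 1.0])                       # discrete outward normals

print(np.allclose(Q + Q.T, E.T @ N @ E))       # the SBP constraint (5.5)

x = np.linspace(0.0, 1.0, m)
u, v = np.sin(x), np.cos(2*x)                  # (5.6): discrete integration by parts
print(np.isclose(u @ P @ (D @ v), -(D @ u) @ P @ v + (E @ u) @ N @ (E @ v)))
```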
The scalar SBP relations in (5.5),(5.6) correspond to the SBP formulas for a vector of \(n\) variables as \[(\vec{U},\mathbf{D_{x_{i}}}\vec{V})=\vec{U}^{T}(I_{n}\otimes P_{\Omega})(\mathbf{D_{x_{i}}}\vec{V})=-(\mathbf{D_{x_{i}}}\vec{U},\vec{V})+(E\vec{U})^{T}(I_{n}\otimes P_{\partial\Omega})N_{i}(E\vec{V}). \tag{5.7}\] It remains to construct the discrete lifting operator \(\vec{L}_{D}\) (often called the SAT term [3, 4]) such that we can reuse the continuous analysis. We consider an operator of the form \(\vec{L}_{D}=(I_{n}\otimes P_{\Omega})(DC)\vec{L}_{C}\). The transformation matrix \(DC\) first extracts the boundary nodes from the volume nodes, secondly permutes the dependent variables from being organised as \((E\vec{U}_{1},E\vec{U}_{2},..,E\vec{U}_{n})^{T}\) to \(((E\vec{U})_{1},(E\vec{U})_{2},..,(E\vec{U})_{N})^{T}\) using the permutation matrix \(P_{erm}\) and thirdly numerically integrates the resulting vector against the continuous lifting operator \(\vec{L}_{C}\) (now applied to the discrete solution). More specifically we have \[\vec{L}_{D}=(I_{n}\otimes P_{\Omega})(DC)\vec{L}_{C},\quad DC=(I_{n}\otimes E^{T})(P_{erm})^{T}(P_{\partial\Omega}\otimes I_{n}), \tag{5.8}\] \[\vec{L}_{C}=diag((L_{C})_{1},(L_{C})_{2},...,(L_{C})_{N}),\quad(L_{C})_{j}=(2(J^{-}T^{-1})^{T}\Sigma(\vec{W}^{-}-R\vec{W}^{+}-S^{-1}\vec{G}))_{j}. \tag{5.9}\]
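Before stating the discrete stability result, the Kronecker-product structure of the vector operators and the vector SBP formula (5.7) can be illustrated in the same one-dimensional setting (again our own sketch, with \(P_{\partial\Omega}=I\)).

```python
import numpy as np

# Sketch (ours) of the vector SBP formula (5.7) for n = 2 dependent
# variables, built from the 1D operator above with Kronecker products.
m, h, n = 11, 0.1, 2
P = h*np.diag([0.5] + [1.0]*(m-2) + [0.5])
Q = 0.5*(np.diag(np.ones(m-1), 1) - np.diag(np.ones(m-1), -1))
Q[0, 0], Q[-1, -1] = -0.5, 0.5
D = np.linalg.solve(P, Q)
E = np.zeros((2, m)); E[0, 0] = E[1, -1] = 1.0
N = np.diag([-1.0, 1.0])

Dx = np.kron(np.eye(n), D)                     # D_x = I_n (x) D
Pn = np.kron(np.eye(n), P)                     # I_n (x) P_Omega
En = np.kron(np.eye(n), E)                     # blockwise boundary restriction
Nn = np.kron(np.eye(n), N)                     # (I_n (x) P_dOmega) N_i, P_dOmega = I

rng = np.random.default_rng(0)
U, V = rng.standard_normal(n*m), rng.standard_normal(n*m)
lhs = U @ Pn @ (Dx @ V)
rhs = -(Dx @ U) @ Pn @ V + (En @ U) @ Nn @ (En @ V)
print(np.isclose(lhs, rhs))                    # (5.7) holds
```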
We can now prove the semi-discrete correspondence to Proposition 2.1.

**Proposition 5.1**.: _Consider the nonlinear scheme (5.2) with \(\vec{L}_{D}\) defined in (5.8) and (5.9)._

_It is nonlinearly stable for \(\vec{G}=0\) if the relations (3.7) and (3.9) in Lemma 3.2 hold and the solution satisfies the estimate_ \[\|\vec{U}\|_{P\otimes P_{\Omega}}^{2}\leq\|\vec{F}\|_{P\otimes P_{\Omega}}^{2}. \tag{5.10}\]

_It is strongly nonlinearly stable for \(\vec{G}\neq 0\) if the relations (3.7),(3.8) and (3.9) in Lemma 3.2 hold and the solution satisfies the estimate_ \[\|\vec{U}\|_{P\otimes P_{\Omega}}^{2}\leq\|\vec{F}\|_{P\otimes P_{\Omega}}^{2}+2\int_{0}^{t}\sum_{j=1,N}[\vec{G}^{T}\vec{G}]_{j}ds_{j}\ dt. \tag{5.11}\]

_In (5.10) and (5.11), \(\vec{F}\) and \(\vec{G}\) are external data from \(F\) and \(G\) injected in the nodes._

Proof.: The discrete energy method (multiplying (5.2) from the left by \(\vec{U}^{T}(I_{n}\otimes P_{\Omega})\)) yields \[\vec{U}^{T}(P\otimes P_{\Omega})\vec{U}_{t}+(\vec{U},\mathbf{D_{x_{i}}}\mathbf{A_{i}}\vec{U})+(\mathbf{A_{i}}\vec{U},\mathbf{D_{x_{i}}}\vec{U})+(\vec{U},\vec{L}_{D})=0, \tag{5.12}\] where we have used that \((I_{n}\otimes P_{\Omega})\) commutes with \(\mathbf{A_{i}}\) (since the matrices have diagonal blocks) and that the symmetric part of \(C\) is zero. The SBP constraints (5.7) and the notation \(\vec{U}^{T}(P\otimes P_{\Omega})\vec{U}=\|\vec{U}\|_{P\otimes P_{\Omega}}^{2}\) simplify (5.12) to \[\frac{1}{2}\frac{d}{dt}\|\vec{U}\|_{P\otimes P_{\Omega}}^{2}+\vec{U}^{T}(I_{n}\otimes E^{T}P_{\partial\Omega}N_{i}E)\mathbf{A_{i}}\vec{U}+(\vec{U},\vec{L}_{D})=0. \tag{5.13}\] The semi-discrete energy rate in (5.13) mimics the continuous energy rate in the sense that only boundary terms remain. To make use of the already performed continuous energy analysis, we expand the boundary terms and exploit the diagonal form of \(P_{\partial\Omega}\). The result is \[\vec{U}^{T}(I_{n}\otimes E^{T}P_{\partial\Omega}N_{i}E)\mathbf{A_{i}}\vec{U}=\sum_{j=1,N}[(E\vec{U})^{T}(N_{i}\mathbf{A_{i}})(E\vec{U})]_{j}ds_{j}. \tag{5.14}\] The relation (5.14) mimics the continuous result (3.1) in each of the \(N\) boundary nodes. Next, the continuous transformation formula applied to the discrete solution yields \(W_{i}=(T^{-1}(E\vec{U}))_{i}\) and hence \[\sum_{j=1,N}[(E\vec{U})^{T}(N_{i}\mathbf{A_{i}})(E\vec{U})]_{j}ds_{j}=\sum_{j=1,N}[\vec{W}^{T}\Lambda\vec{W}]_{j}ds_{j}. \tag{5.15}\] The discrete boundary terms (5.15) now have the same form as the continuous ones in (3.2). By using (5.8) and (5.9) we find that \[(\vec{U},\vec{L}_{D})=\sum_{j=1,N}[2(\vec{W}^{-})^{T}\Sigma(\vec{W}^{-}-R\vec{W}^{+}-S^{-1}\vec{G})]_{j}ds_{j}. \tag{5.16}\] The combination of (5.13)-(5.16) leads to the final form of the energy rate \[\frac{1}{2}\frac{d}{dt}\|\vec{U}\|_{P\otimes P_{\Omega}}^{2}+\sum_{j=1,N}[\vec{W}^{T}\Lambda\vec{W}+2(\vec{W}^{-})^{T}\Sigma(\vec{W}^{-}-R\vec{W}^{+}-S^{-1}\vec{G})]_{j}ds_{j}=0. \tag{5.17}\] By using (3.7)-(3.9) in Lemma 3.2, the estimates (5.10) and (5.11) follow by the same technique as for the continuous estimates in the proof of Lemma 3.2.

## 6 Summary

In this paper we have completed the general stability theory for nonlinear skew-symmetric hyperbolic problems partly developed in [1; 2], by adding the analysis of nonlinear boundary conditions. In [1; 2] we focused on the skew-symmetric property _assuming_ that boundary conditions leading to an energy bound were available. In this article we derive these boundary conditions explicitly, and show how to implement them in a provably stable way using summation-by-parts formulations and weak boundary procedures. We exemplify the general procedure for the most important equations in computational fluid dynamics: the shallow water equations, the incompressible Euler equations and the compressible Euler equations.

Jan Nordstrom was supported by Vetenskapsradet, Sweden [award no. 2018-05084 VR and 2021-05484 VR] and the Swedish e-Science Research Center (SeRC).
2308.00205
On eigenvalues problems for the $p(x)$-Laplacian
This paper studies nonlinear eigenvalue problems with a double non-homogeneity governed by the $p(x)$-Laplacian operator, under the Dirichlet boundary condition on a bounded domain of $\mathbb{R}^N(N\geq2)$. According to the type of the nonlinear part (sublinear, superlinear) we use the Lagrange multipliers method, Ekeland's variational principle and the Mountain-Pass theorem to show that the spectrum includes a continuous set of eigenvalues, which can in some contexts be the whole set $\mathbb{R}_{+}^{*}$. Moreover, we show that the smallest eigenvalue obtained from the Lagrange multipliers is exactly the first eigenvalue in the Ljusternik-Schnirelman sequence of eigenvalues. Key words: Nonlinear eigenvalue problems, $p(x)$-Laplacian, Lagrange multipliers, Ekeland variational principle, Ljusternik-Schnirelman principle, Mountain-Pass theorem.
Aboubacar Marcos, Janvier Soninhekpon
2023-08-01T00:04:30Z
http://arxiv.org/abs/2308.00205v2
# On eigenvalue problems for the \(p(x)\)-Laplacian

###### Abstract.

This paper studies nonlinear eigenvalue problems with a double non-homogeneity governed by the \(p(x)\)-Laplacian operator, under the Dirichlet boundary condition on a bounded domain of \(\mathbb{R}^{N}(N\geq 2)\). According to the features of the nonlinearity (sublinear, superlinear) we use the Lagrange multipliers method, Ekeland's variational principle or the Mountain-Pass theorem to show that the spectrum includes a continuous set of eigenvalues, which is in some contexts the whole set \(\mathbb{R}^{*}_{+}\). Moreover, we show that the smallest eigenvalue obtained from the Lagrange multipliers is exactly the first eigenvalue in the Ljusternik-Schnirelman sequence of eigenvalues, and we also provide sufficient conditions for multiplicity results.

**Key words**: Nonlinear eigenvalue problems, \(p(x)\)-Laplacian, Lagrange multipliers, Ekeland variational principle, Ljusternik-Schnirelman principle, Mountain-Pass theorem.

**Mathematics Subject Classification**: 35D30, 35J60, 35J70, 35P30.

## 1. Introduction

The search for eigenvalues of non-homogeneous problems such as \[D_{1}(\Omega)\left\{\begin{array}{ll}-\Delta_{p}u=\lambda|u|^{q-2}u&\mbox{in }\Omega,\mbox{ with }q\neq p\\ u=0&\mbox{on }\partial\Omega\end{array}\right. \tag{1.1}\] can be reduced to the study of the existence of solutions for the problem \[D_{1}^{\prime}(\Omega)\left\{\begin{array}{ll}-\Delta_{p}u=|u|^{q-2}u&\mbox{in }\Omega\\ u=0&\mbox{on }\partial\Omega\end{array}\right. \tag{1.2}\] since the parameter \(\lambda\) can be scaled out by multiplying \(u\) with a suitable real number. Of course, when \(u_{1}\) is a solution of problem (1.2), then any \(\lambda\in(0,+\infty)\) is an eigenvalue of (1.1) with eigenfunction \(\lambda^{\frac{1}{p-q}}u_{1}\). Accordingly, it is meaningless to seek eigenvalues for problem (1.1) since then the parameter \(\lambda\) plays no role. When \(p=2\), (1.2) becomes \[D_{1}^{\prime}(\Omega)\left\{\begin{array}{ll}-\Delta u=|u|^{q-2}u&\mbox{in }\Omega\\ u=0&\mbox{on }\partial\Omega\end{array}\right. \tag{1.3}\] and belongs to the celebrated family of Emden-Fowler equations, which has been extensively studied in the literature. The question is quite different and complex when dealing with eigenvalue problems involving a double non-homogeneity such as \[D_{3}(\Omega)\left\{\begin{array}{ll}-\Delta_{p(x)}u=\lambda V(x)|u|^{q(x)-2}u&\mbox{in }\Omega\\ u=0&\mbox{on }\partial\Omega.\end{array}\right. \tag{1.4}\] The non-homogeneous operator \(\Delta_{p(x)}u:=\mbox{div}\left(|\nabla u|^{p(x)-2}\nabla u\right)\) is the so-called \(p(x)\)-Laplacian, \(\Omega\) is a bounded domain in \(\mathbb{R}^{N}(N\geq 2)\) with smooth boundary, \(\lambda\) is a real parameter and \(p(x)\neq q(x)\). We assume throughout the paper that \(p\) and \(q\) are continuous functions on \(\overline{\Omega}\) and \(V\) is a positive function in a generalized Lebesgue space \(L^{s(x)}(\Omega)\). The \(p(x)\)-Laplacian operator appears in many contexts in physics, namely in nonlinear electrorheological fluids and other phenomena related to image processing, elasticity and the flow in porous media; for a survey see ([2, 23, 26, 27, 32, 33]) and references therein. When the variable exponents \(p(.)=q(.)\), problem (1.4) has been considered in several aspects and existence of eigenvalues and some of their qualitative properties have been established (cf [10, 11, 19, 20, 23, 25]).
In the particular case where \(p(x)=q(x)=\) constant and \(V\equiv 1\), An Le proved in [29] the existence of a non-decreasing sequence of nonnegative Ljusternik-Schnirelman eigenvalues, and derived the properties of simplicity and isolation of the principal eigenvalue. For the case \(p(x)=q(x)\neq\) constant and \(V\equiv 1\), Fan, Zhang and Zhao proved the existence of an infinite sequence of eigenvalues and that \(\sup\Lambda=+\infty\), where \(\Lambda\) is the set of all nonnegative eigenvalues. They have also given some sufficient conditions under which \(\inf\Lambda=0\) or \(\inf\Lambda>0\) (see [25]). In the same framework, an extension of the study to the whole space \(\mathbb{R}^{N}\) has been carried out in [6] by N. Benouhiba. When \(q(x)\neq p(x)\), to the best of our knowledge, the pioneering work on the eigenvalue problem (1.4) when \(V=1\) is that in [30], where the authors prove the existence of a continuous family of eigenvalues. They mainly suppose that \[1<\min_{x\in\overline{\Omega}}q(x)<\min_{x\in\overline{\Omega}}p(x)<\max_{x\in\overline{\Omega}}q(x) \tag{1.5}\] and show that there exists \(\lambda^{*}>0\) such that any \(\lambda\in(0,\lambda^{*})\) is an eigenvalue for the problem (1.4). One can clearly notice that under their assumption the ranges of \(p(.)\) and \(q(.)\) can interfere as well.

The present paper considers problem (1.4) in the case where the weight \(V\) is positive, in both the sublinear and the superlinear contexts. The purpose of the paper is to investigate the structure of the spectrum of problem (1.4) in the light of various methods of nonlinear analysis. Roughly speaking, by means of Ekeland's variational principle, we show in a first part of the paper devoted to the sublinear case that, under assumption (1.5), problem (1.4) admits a continuous family of eigenvalues in a neighborhood of \(0\); in particular, when the ranges of \(p(.)\) and \(q(.)\) do not interfere, we point out that the family of eigenvalues is exactly the whole of \(\mathbb{R}^{*}_{+}\). Moreover, we derive sufficient conditions under which each eigenvalue of the continuous spectrum admits an infinite countable (possibly unbounded) family of eigenfunctions. Still in the context where the ranges of \(p(.)\) and \(q(.)\) do not interfere, we focus our investigation on the eigenvalue problem (1.4) constrained to the sphere, using first the Lagrange multipliers method and next the Ljusternik-Schnirelman principle. We derive from this part that the smallest eigenvalue on the sphere provided by the Lagrange multipliers method corresponds exactly to the first eigenvalue of the Ljusternik-Schnirelman sequence. In the last part, the superlinear (non-coercive) problem is considered using the Mountain-Pass theorem. Globally, our work deals with sublinear and superlinear eigenvalue problems under many aspects of the exponent functions \(p(.)\) and \(q(.)\), and our approach involves new techniques contrasting with other treatments of (1.4).

This paper is organized as follows: In section 2, we state some classical properties of the spaces \(L^{p(x)}(\Omega)\) and \(W^{1,p(x)}_{0}(\Omega)\). In section 3 we study the sublinear case and we devote section 4 to the case where problem (1.4) is superlinear.

## 2. General setting

Let \(C(\overline{\Omega})\) be the set of all continuous functions on \(\overline{\Omega}\).
Put \[C_{+}(\overline{\Omega})=\left\{h\in C(\overline{\Omega})\text{ such that }h(x)>1\ \forall x\in\overline{\Omega}\right\}\] For \(p(.)\in C_{+}(\overline{\Omega})\), we put \[p^{-}=\min_{x\in\overline{\Omega}}p(x)\ \text{ and }\ p^{+}=\max_{x\in\overline{\Omega}}p(x)\] The variable exponent Lebesgue space \(L^{p(x)}(\Omega)\) is defined by: \[L^{p(x)}(\Omega)=\left\{u:\Omega\longrightarrow\mathbb{R}\text{ measurable such that }\int_{\Omega}|u|^{p(x)}dx<\infty\right\}\] with the norm \[\|u\|_{L^{p(x)}(\Omega)}=\|u\|_{p(x)}=\inf\left\{\lambda>0/\int_{\Omega}\left| \frac{u}{\lambda}\right|^{p(x)}dx\leq 1\right\}\] The variable exponent Sobolev space \(W^{1,p(x)}(\Omega)\) is defined by: \[W^{1,p(x)}(\Omega)=\left\{u\in L^{p(x)}(\Omega)/\ |\nabla u|\in L^{p(x)}( \Omega)\right\}\] with the norm \[\|u\|_{W^{1,p(x)}(\Omega)}=\|u\|_{1,p(x)}=\inf\left\{\lambda>0/\int_{\Omega} \left(\left|\frac{u}{\lambda}\right|^{p(x)}+\left|\frac{\nabla u}{\lambda} \right|^{p(x)}\right)dx\leq 1\right\}=\|u\|_{p(x)}+\|\nabla u\|_{p(x)}.\] Its conjugate space is \(L^{p^{\prime}(x)}(\Omega)\) with \(p^{\prime}(x)=\frac{p(x)}{p(x)-1}\). Define \(W^{1,p(x)}_{0}(\Omega)\) as the closure of \(C^{\infty}_{0}(\Omega)\) in \(W^{1,p(x)}(\Omega)\). \(W^{1,p(x)}_{0}(\Omega)\) is endowed with the norm \[\|u\|=\|\nabla u\|_{p(x)}\] which is an equivalent norm on \(W^{1,p(x)}(\Omega)\). For more information on the Lebesgue and Sobolev spaces with variables exponents, we refer to ([2, 12, 16, 22, 26, 28, 33]). Define the critical Sobolev exponent of \(p\) by: \(p^{*}(x)=\left\{\begin{array}{ll}\frac{Np(x)}{N-p(x)}&\mbox{if }p(x)<N\\ +\infty&\mbox{if }p(x)\geq N\end{array}\right.\) for all \(x\in\Omega\). We have the following. **Proposition 2.1**.: 1. _There exists_ \(C_{H}>0\) _such that for any_ \(u\in L^{p(x)}(\Omega)\) _and_ \(v\in L^{p^{\prime}(x)}(\Omega)\)_, we have the Holder's inequality:_ \[\left|\int_{\Omega}uvdx\right|\leq C_{H}\|u\|_{p(x)}\|v\|_{p^{\prime}(x)}\leq 2 \|u\|_{p(x)}\|v\|_{p^{\prime}(x)}\] _where_ \(C_{H}=\frac{1}{p^{*}}+\frac{1}{(p^{\prime})^{-}}\)_._ 2. _There is a constant_ \(C>0\) _such that for all_ \(u\in W^{1,p(x)}_{0}(\Omega)\)_,_ \[\|u\|_{p(x)}\leq C\|\nabla u\|_{p(x)}:=C\|u\|\] **Proposition 2.2**.: _(cf [12, 22, 24, 28]) If \(p,q\in C(\overline{\Omega})\) and \(1\leq q(x)<p^{*}(x)\) for all \(x\in\overline{\Omega}\), then the embedding \(W^{1,p(x)}(\Omega)\hookrightarrow L^{q(x)}(\Omega)\) is continuous and compact._ As consequences, we have: **Proposition 2.3**.: _(cf [12, 22, 24, 28]) If \(p:\overline{\Omega}\longrightarrow(1,\infty)\) is continuous and \(q:\Omega\longrightarrow(1,\infty)\) is a measurable function such that \(p(x)\leq q(x)\leq p^{*}(x)\) a.e on \(\overline{\Omega}\), then there is a continuous embedding \(W^{1,p(x)}_{0}(\Omega)\hookrightarrow L^{q(x)}(\Omega)\)._ **Proposition 2.4**.: _If \(p^{+}<\infty\), both spaces \(\left(L^{p(x)},\|.\|_{p(x)}\right)\) and \(\left(W^{1,p(x)}_{0}(\Omega),\|.\|\right)\) are separable, reflexive and uniformly convex Banach spaces._ Define the modular: \[\varphi_{p(x)}(u)=\int_{\Omega}|u|^{p(x)}dx.\] Then we have the following properties: **Proposition 2.5**.: _(cf [1, 24, 28]) For all \(u,v\in L^{p(x)}(\Omega)\), we have :_ 1. \(\|u\|_{p(x)}<1(\mbox{resp. }=1,>1)\Leftrightarrow\varphi_{p(x)}(u)<1(\mbox{resp. }=1,>1)\)_._ 2. \(\min\left(\|u\|_{p(x)}^{p^{-}};\|u\|_{p(x)}^{p^{+}}\right)\leq\varphi_{p(x)}( u)\leq\max\left(\|u\|_{p(x)}^{p^{-}};\|u\|_{p(x)}^{p^{+}}\right)\)_. 
Consequently, we have:_ \[\left\{\begin{array}{ll}\|u\|_{p(x)}^{p^{-}}\leq\varphi_{p(x)}(u)\leq\|u\|_{p(x)}^{p^{+}}&\mbox{if }\|u\|_{p(x)}>1\\ \|u\|_{p(x)}^{p^{+}}\leq\varphi_{p(x)}(u)\leq\|u\|_{p(x)}^{p^{-}}&\mbox{if }\|u\|_{p(x)}\leq 1\end{array}\right..\] 3. _For \(u_{n},u\in L^{p(x)}(\Omega)\), we have: \(u_{n}\to u\) if and only if \(\varphi_{p(x)}(u_{n}-u)\to 0\)_ 4. _For all \(u,v\in L^{p(x)}(\Omega)\), \(\varphi_{p(x)}(u+v)\leq 2^{p^{+}-1}\left(\varphi_{p(x)}(u)+\varphi_{p(x)}(v)\right)\)._

**Proposition 2.6**.: _(cf [1, 24, 28]) Let \(p\) and \(q\) be measurable functions such that \(p\in L^{\infty}(\Omega)\) and \(1\leq p(x)q(x)\leq\infty\) for a.e. \(x\in\Omega\). Let \(u\in L^{q(x)}(\Omega),u\neq 0\). Then we have:_ \[\min\left(\|u\|_{p(x)q(x)}^{p^{-}};\|u\|_{p(x)q(x)}^{p^{+}}\right)\leq\left\||u|^{p(x)}\right\|_{q(x)}\leq\max\left(\|u\|_{p(x)q(x)}^{p^{-}};\|u\|_{p(x)q(x)}^{p^{+}}\right)\] _As a consequence, we have:_ \[\left\{\begin{array}{ll}\|u\|_{p(x)q(x)}^{p^{-}}\leq\left\||u|^{p(x)}\right\|_{q(x)}\leq\|u\|_{p(x)q(x)}^{p^{+}}&\mbox{if }\|u\|_{p(x)q(x)}>1\\ \|u\|_{p(x)q(x)}^{p^{+}}\leq\left\||u|^{p(x)}\right\|_{q(x)}\leq\|u\|_{p(x)q(x)}^{p^{-}}&\mbox{if }\|u\|_{p(x)q(x)}\leq 1\end{array}\right..\] In the following, we put \(X=W^{1,p(x)}_{0}(\Omega)\) with the norm \(\|u\|=\|\nabla u\|_{p(x)}\). Define on \(X\) the following functionals: \[F(u)=\int_{\Omega}\frac{V(x)}{q(x)}|u|^{q(x)}dx,\ \ G(u)=\int_{\Omega}\frac{1}{p(x)}|\nabla u|^{p(x)}dx,\ \ \phi(u)=\int_{\Omega}V(x)|u|^{q(x)}dx,\ \ \psi(u)=\int_{\Omega}|\nabla u|^{p(x)}dx \tag{2.1}\] with \(V\in L^{s(x)}(\Omega)\), and \[I_{\lambda}(u)=\int_{\Omega}\frac{1}{p(x)}|\nabla u|^{p(x)}dx-\lambda\int_{\Omega}\frac{V(x)}{q(x)}|u|^{q(x)}dx \tag{2.2}\] It is well known that \(F\) and \(G\) belong to \(C^{1}(X,\mathbb{R})\) (see [4, 5, 9]) and one has for all \(v\in X\): \[<F^{\prime}(u),v>=\int_{\Omega}V(x)|u|^{q(x)-2}uvdx\ \ \ \text{and}\ \ \ <G^{\prime}(u),v>=\int_{\Omega}|\nabla u|^{p(x)-2}\nabla u\nabla vdx\] To go further in the setting of the functional framework we require the following \[1<p(x),q(x)<N<s(x)\text{ and }s^{\prime}(x)q(x)=\frac{s(x)q(x)}{s(x)-1}<p^{*}(x)\text{ for any }x\in\overline{\Omega} \tag{2.3}\] From these assumptions, the embeddings \(W^{1,p(x)}(\Omega)\hookrightarrow L^{s^{\prime}(x)q(x)}(\Omega)\) and \(W^{1,p(x)}(\Omega)\hookrightarrow L^{q(x)}(\Omega)\) are continuous and compact.

**Definition 2.1**.: _A pair \((\lambda,u)\in\mathbb{R}\times X\) is a weak solution of (1.4) if:_ \[\int_{\Omega}|\nabla u|^{p(x)-2}\nabla u\nabla vdx=\lambda\int_{\Omega}V(x)|u|^{q(x)-2}uvdx\ \ \forall v\in X \tag{2.4}\] _Such a pair \((u,\lambda)\in X\times\mathbb{R}\) with \(u\) nontrivial is called an eigenpair, \(\lambda\) is an eigenvalue and \(u\) is called an associated eigenfunction._

It is well known that \(u\) is a weak solution of problem (1.4) on \(X\) if and only if \(u\) is a critical point of the energy functional \(I_{\lambda}(u)=G(u)-\lambda F(u)\), that is: \[G^{\prime}(u)-\lambda F^{\prime}(u)=0 \tag{2.5}\] To solve the eigenvalue problem (1.4) the constrained variational method is usually employed (see [3, 7, 10, 14, 15, 29, 35, 36]). We take here \(G\) as a constrained functional and \(F\) as an objective functional.
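Although the analysis below is purely variational, the modular \(\varphi_{p(x)}\) and the Luxemburg norm are easy to approximate numerically. The sketch below is ours, with a sample exponent \(p(x)=2+\sin(\pi x)\) on \(\Omega=(0,1)\) and a sample function \(u\), both hypothetical; it illustrates the inequalities of Proposition 2.5.

```python
import numpy as np

# Numerical illustration (ours) of Proposition 2.5(2): the modular
# varphi_{p(x)}(u) is trapped between powers of the Luxemburg norm.
x = np.linspace(0.0, 1.0, 20001)
p = 2 + np.sin(np.pi*x)                        # sample exponent: p^- = 2, p^+ = 3
u = 5*np.cos(3*x)                              # sample function

def modular(w):                                # varphi_{p(x)}(w) on Omega = (0,1)
    return np.trapz(np.abs(w)**p, x)

def luxemburg(w):                              # ||w||_{p(x)} = inf{lam : modular(w/lam) <= 1}
    lo, hi = 1e-8, 1e8
    for _ in range(200):                       # bisection on lam
        lam = 0.5*(lo + hi)
        if modular(w/lam) > 1.0:
            lo = lam
        else:
            hi = lam
    return hi

nrm, mod = luxemburg(u), modular(u)
pm, pp = p.min(), p.max()
print(min(nrm**pm, nrm**pp) <= mod <= max(nrm**pm, nrm**pp))   # True
```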
Let \(\alpha>0\) and define: \[\widetilde{M_{\alpha}}=\{u\in X;\ G(u)\leq\alpha\} \tag{2.6}\] and its boundary \[\partial\widetilde{M_{\alpha}}=M_{\alpha}=\{u\in X;\ G(u)=\alpha\} \tag{2.7}\] It is well known that: * \(M_{\alpha}\) is a \(C^{1}\)-submanifold of \(X\) with codimension one. * \((u,\lambda)\in X\times\mathbb{R}\) satisfies relation (2.5) if and only if \(u\) is a critical point of \(F\) with respect to \(M_{\alpha}\) (see [36]).

**Remark 2.1**.: _For all \(u\in M_{\alpha}\), we have_ \[\frac{\alpha p^{-}}{q^{+}F(u)}\leq\frac{\psi(u)}{\phi(u)}\leq\frac{p^{+}\alpha}{q^{-}F(u)}\] The following propositions and definition are useful throughout this work. Readers interested in the proofs are invited to consult the references therein.

**Proposition 2.7**.: _(cf [6, 11, 18, 21, 23]) The mapping \(G\) is coercive, convex and sequentially weakly lower semi-continuous; that is, \(u_{n}\rightharpoonup u_{0}\) in \(X\) implies \(G(u_{0})\leq\liminf_{n\to\infty}G(u_{n})\)_

**Proposition 2.8**.: _(cf [23])_ 1. \(G^{\prime}:X\longrightarrow X^{*}\) _is a continuous, bounded and strictly monotone operator._ 2. \(G^{\prime}\) _is a mapping of type_ \((S_{+})\)_; that is, if_ \(u_{n}\rightharpoonup u_{0}\) _in_ \(X\) _and_ \(\limsup_{n\to\infty}<G^{\prime}(u_{n})-G^{\prime}(u_{0});u_{n}-u_{0}>\leq 0\) _then_ \(u_{n}\to u_{0}\) _in_ \(X\)_._ 3. \(G^{\prime}:X\longrightarrow X^{*}\) _is a homeomorphism._

**Proposition 2.9**.: _(cf [17]) Let \((X,d)\) be a complete metric space. Let \(\Phi:X\longrightarrow\mathbb{R}\cup\{+\infty\}\) be lower semicontinuous and bounded below. Then given any \(\varepsilon>0\) there exists \(u_{\varepsilon}\in X\) such that:_ \[\Phi(u_{\varepsilon})\leq\inf_{u\in X}\Phi+\varepsilon \tag{2.8}\] _and_ \[\Phi(u_{\varepsilon})<\Phi(u)+\epsilon d(u,u_{\epsilon})\ \forall u\in X\text{ with }u\neq u_{\varepsilon} \tag{2.9}\] When the functional \(\Phi\) is of class \(C^{1}\) on a Banach space, Ekeland's variational principle takes the following form:

**Corollary 2.1**.: _Let \(J\) be a functional of class \(C^{1}\) on a Banach space \(X\), bounded below, and let \(c=\inf_{u\in X}J(u)\). Then, given \(\epsilon>0\), there is \(u_{\epsilon}\in X\) such that:_ \[\left\{\begin{array}{l}c\leq J(u_{\epsilon})\leq c+\epsilon\\ \|DJ(u_{\epsilon})\|_{X^{*}}\leq\epsilon\end{array}\right.. \tag{2.10}\]

**Definition 2.2**.: _Let \(X\) be a Banach space and \(I:X\longrightarrow\mathbb{R}\) a continuously Fréchet differentiable functional._ _We say that the functional \(I\) satisfies the Palais-Smale condition if any sequence \((u_{n})_{n}\subset X\) such that \(I(u_{n})\) is bounded and \(I^{\prime}(u_{n})\to 0\) in \(X^{*}\) has a convergent subsequence in \(X\)._

**Proposition 2.10**.: _(cf [34]) Suppose that \(X\) is a reflexive Banach space and let \(K\subset X\) be a weakly closed subset of \(X\). Suppose \(J:X\longrightarrow\mathbb{R}\cup\{+\infty\}\) is coercive and sequentially weakly lower semi-continuous on \(K\) with respect to \(X\). Then \(J\) is bounded below and attains its minimum:_ \[\exists u\in K;\;J(u)=\inf_{v\in K}J(v)=\min_{v\in K}J(v).\] Below we state the following proposition, whose proof is adapted to the assumptions of our problem.

**Proposition 2.11**.: _The functional \(F\) is weakly-strongly continuous, that is, \(u_{n}\rightharpoonup u\) in \(X\) implies that \(F(u_{n})\longrightarrow F(u)\)._

**Proof.** Let \((u_{n})_{n}\subset X\) be a sequence and \(u\in X\) such that \(u_{n}\rightharpoonup u\).
We have: \[\left|F\left(u_{n}\right)-F(u)\right|\leq\frac{1}{q^{-}}\int_{\Omega}V(x) \left|\left|u_{n}\right|^{q(x)}-\left|u\right|^{q(x)}\right|dx\] By using the well known inequality \[\left|\left|a\right|^{p}-\left|b\right|^{p}\right|\leq\gamma|a-b|\left(\left|a \right|+\left|b\right|\right)^{p-1},\;\;\text{ for all }p>1\text{ and }(a,b)\in\mathbb{R}^{2}\] where \(\gamma\) is a positive constant, we obtain: \[\left|F\left(u_{n}\right)-F(u)\right|\leq\frac{\gamma}{q^{-}}\int_{\Omega}V(x )|u_{n}-u|\left(\left|u_{n}\right|+\left|u\right|\right)^{q(x)-1}dx \tag{2.11}\] and by applying two times the Young inequality to the right-hand side term of the expression above, we get \[\int_{\Omega}V(x)|u_{n}-u|\left(\left|u_{n}\right|+\left|u\right|\right)^{q(x )-1}dx\leq\] \[\frac{1}{s^{-}}\int_{\Omega}V^{s(x)}(x)dx+\frac{1}{q^{-}s^{\prime-}}\int_{ \Omega}|u_{n}-u|^{s^{\prime}(x)q(x)}dx+\frac{1}{q^{\prime-}s^{\prime-}}\int_{ \Omega}\left(|u_{n}|+\left|u\right|\right)^{q(x)s^{\prime}(x)}dx\] Let's denote \(a=\|V\|_{s(x)},\quad b=\|u_{n}-u\|_{s^{\prime}(x)q(x)},\quad c=\||u_{n}|+|u|\|_ {s^{\prime}(x)q(x)}\) We have \[\int_{\Omega}\frac{V(x)}{a}\frac{|u_{n}-u|}{b}\left(\frac{\left|u_{n}\right|+ \left|u\right|}{c}\right)^{q(x)-1}dx\leq\frac{1}{s^{-}}+\frac{1}{q^{-}s^{ \prime-}}+\frac{1}{q^{\prime-}s^{\prime-}}.\] Consequently, we have \[\int_{\Omega}V(x)|u_{n}-u|\left(\left|u_{n}\right|+\left|u\right|\right)^{q(x) -1}dx\leq\] \[\frac{\gamma}{q^{-}}\left(\frac{1}{s^{-}}+\frac{1}{q^{-}s^{\prime-}}+\frac{1} {q^{\prime-}s^{\prime-}}\right)\|V\|_{s(x)}\|u_{n}-u\|_{s^{\prime}(x)q(x)}\max ((\||u_{n}|+|u|\|)_{s^{\prime}(x)q(x)}^{q^{+}-1},(\||u_{n}|+|u|\|)_{s^{\prime}( x)q(x)}^{q^{-}-1})\] and then \[\left|F\left(u_{n}\right)-F(u)\right|\leq\] \[C\|V\|_{s(x)}\|u_{n}-u\|_{s^{\prime}(x)q(x)}\max((\||u_{n}|+|u|\|)_{s^{\prime} (x)q(x)}^{q^{+}-1},(\||u_{n}|+|u|\|)_{s^{\prime}(x)q(x)}^{q^{-}-1})\] where \(C\) is a positive constant. Since the embeddings \(X\hookrightarrow L^{s^{\prime}(x)q(x)}(\Omega)\) is compact, we have \(\lim_{n\rightarrow\infty}\|u_{n}-u\|_{s^{\prime}(x)q(x)}=0\) and \(\lim_{n\rightarrow\infty}\left(\left(\||u_{n}|\|+\||u|\right)_{s^{\prime}(x)q( x)}\right)^{q^{i}-1}=2^{q^{i}-1}\|u\|_{s^{\prime}(x)q(x)}^{q^{i}-1}\) for \(i\in\{-,+\}\). Hence \(F(u_{n})\to F(u)\) as \(n\rightarrow\infty\). ## 3. Sublinear problem ### Existence of a continuous family of eigenvalues Let us consider for \(V>0\), the following Rayleigh quotients \[\mu_{*}=\inf_{u\in X\smallsetminus\{0\}}\frac{\int_{\Omega}|\nabla u|^{p(x)}dx}{ \int_{\Omega}V(x)|u|^{p(x)}dx}=\inf_{u\in X\smallsetminus\{0\}}\frac{\psi(u)}{ \phi(u)};\quad\mu^{*}=\inf_{u\in X\smallsetminus\{0\}}\frac{\int_{\Omega}\frac{1} {p(x)}|\nabla u|^{p(x)}dx}{\int_{\Omega}\frac{1}{p(x)}V(x)|u|^{p(x)}dx}=\inf_{u \in X\smallsetminus\{0\}}\frac{G(u)}{F(u)}\] \[\lambda_{*}=\inf_{u\in\widetilde{M_{\alpha}}}\frac{\int_{\Omega}|\nabla u|^{p( x)}dx}{\int_{\Omega}V(x)|u|^{q(x)}dx}=\inf_{u\in\widetilde{M_{\alpha}}}\frac{ \psi(u)}{\phi(u)}\quad\mbox{and}\ \ \lambda^{*}=\inf_{u\in\widetilde{M_{\alpha}}}\frac{ \int_{\Omega}\frac{1}{p(x)}|\nabla u|^{p(x)}dx}{\int_{\Omega}\frac{V(x)}{q(x )}|u|^{q(x)}dx}=\inf_{u\in\widetilde{M_{\alpha}}}\frac{G(u)}{F(u)}\] We start this section by establishing the existence of a continuous family of eigenvalue in the case the ranges of \(p(.)\) and \(q(.)\) intercept. 
We recall that assumptions (2.3) are fulfilled, that is : \[1<p(x),q(x)<N<s(x)\mbox{ and }s^{\prime}(x)q(x)=\frac{s(x)q(x)}{s(x)-1}<p^{*}( x)\mbox{ for any }x\in\overline{\Omega}.\] For this aim, we consider the following eigenvalues sets: \[\Lambda=\{\lambda\in\mathbb{R}/\exists u\in X\smallsetminus\{0\}\mbox{ such that }(\lambda,u)\mbox{ is an eigenpair of }(\ref{eq:1.4})\}\] and for any \(\alpha>0\), \[\widetilde{\Lambda_{\alpha}}=\{\lambda\in\mathbb{R}/\exists u\in\widetilde{M_ {\alpha}}\mbox{ such that }(\lambda,u)\mbox{ is an eigenpair of }(\ref{eq:1.4})\}.\] Obviously, we have \(\widetilde{\Lambda_{\alpha}}\subset\Lambda\). For \(V>0\), we first point out the fact that if \((\lambda,u)\) is a solution of problem (1.4), then \(\lambda>0\). Indeed: \[(\lambda,u)\mbox{ is a weak solution of }(\ref{eq:1.4}) \Longleftrightarrow \int_{\Omega}|\nabla u|^{p(x)-2}\nabla u\nabla vdx=\lambda\int_{ \Omega}V(x)|u|^{q(x)-2}uvdx\ \ \forall v\in X\] \[\Rightarrow \lambda=\frac{\int_{\Omega}|\nabla u|^{p(x)}dx}{\int_{\Omega}V( x)|u|^{q(x)}dx}\geq 0\mbox{ for }v=u.\] Suppose that \(\lambda=0\). Thus \(u\) is constant on \(\overline{\Omega}\). This together with the fact that \(u=0\) on \(\partial\Omega\) gives \(u\equiv 0\) on \(\overline{\Omega}\), which is a contradiction. Our goal in this first part of the paper is to investigate the existence of eigenvalues and corresponding eigenfunctions in \(\widetilde{M_{\alpha}}\) and on the whole \(X.\) **Theorem 3.1**.: _Assume that assumptions (2.3) are fulfilled_ 1. _If_ \(\lambda\) _is such that_ \(0<\lambda<\lambda_{*}\) _then_ \(\lambda\notin\widetilde{\Lambda_{\alpha}}.\)__ 2. _If_ \(q^{-}<p^{-}\) _then for any_ \(\alpha>0\)_, there exists_ \(\lambda_{\alpha}>0\) _such that any_ \(\lambda\in(0,\lambda_{\alpha})\) _is an eigenvalue of problem (_1.4_) with eigenfunction in_ \(\widetilde{M_{\alpha}}.\) _Moreover_ \(\lambda_{*}=0.\)__ The proof will be carried out after the following lemma. **Lemma 3.1**.: _Let \(\alpha>0\), there exist \(\lambda_{\alpha}>0\) such that for any \(\lambda\in(0;\lambda_{\alpha})\) we have:_ \[I_{\lambda}(u)\geq\frac{\alpha}{2}>0\mbox{ for any }u\in M_{\alpha}. \tag{3.1}\] **Proof.** Let \(\alpha>0\) and \(u\in M_{\alpha}\). Trivially we have \(I_{\lambda}(u)=\alpha-\lambda\int_{\Omega}\frac{V(x)}{q(x)}|u|^{q(x)}dx.\) Thus we have \[I_{\lambda}(u) = \alpha-\lambda\int_{\Omega}\frac{V(x)}{q(x)}|u|^{q(x)}dx\] \[\geq \alpha-\frac{\lambda}{q^{-}}\int_{\Omega}V(x)|u|^{q(x)}dx\] and by Proposition 2.5, we have: \[I_{\lambda}(u)\geq\alpha-\frac{\lambda}{q^{-}}C_{H}C^{q^{\pm}}\|V\|_{s(x)} \max(\|u\|^{q^{-}},\|u\|^{q^{+}}).\] On the other hand, since \(u\in M_{\alpha}\), we have \[\|u\| \leq \max\left(\left(\alpha p^{+}\right)^{\frac{1}{p^{-}}},\left(\alpha p ^{+}\right)^{\frac{1}{p^{+}}}\right)\text{ and next} \tag{3.2}\] \[\max\left(\|u\|^{q^{-}},\|u\|^{q^{+}}\right) \leq \max\left(\left(\alpha p^{+}\right)^{\frac{q^{-}}{p^{-}}},\left( \alpha p^{+}\right)^{\frac{q^{-}}{p^{+}}},\left(\alpha p^{+}\right)^{\frac{q^ {+}}{p^{-}}},\left(\alpha p^{+}\right)^{\frac{q^{+}}{p^{+}}}\right). 
\tag{3.3}\] Consequently we derive \[I_{\lambda}(u)\geq\alpha-\frac{\lambda}{q^{-}}C_{H}C^{q^{\pm}}\|V\|_{s(x)}\max (\left(\alpha p^{+}\right)^{\frac{q^{-}}{p^{-}}},\left(\alpha p^{+}\right)^{ \frac{q^{-}}{p^{+}}},\left(\alpha p^{+}\right)^{\frac{q^{+}}{p^{-}}},\left( \alpha p^{+}\right)^{\frac{q^{+}}{p^{+}}}).\] By the above inequality, we remark that if we define \[\lambda_{\alpha}:=\frac{\alpha q^{-}}{2C_{H}C^{q^{\pm}}\|V\|_{s(x)}\max\left( \left(\alpha p^{+}\right)^{\frac{q^{-}}{p^{-}}},\left(\alpha p^{+}\right)^{ \frac{q^{-}}{p^{+}}},\left(\alpha p^{+}\right)^{\frac{q^{+}}{p^{-}}},\left( \alpha p^{+}\right)^{\frac{q^{+}}{p^{+}}}\right)}, \tag{3.4}\] then for any \(\lambda\in(0;\lambda_{\alpha})\) and any \(u\in M_{\alpha}\), we have \[I_{\lambda}(u)\geq\frac{\alpha}{2}>0.\] **Proof of Theorem 3.1** 1. Suppose that there exists \(\lambda\in(0,\lambda_{*})\) such that \(\lambda\in\widetilde{\Lambda_{\alpha}}\). Thus there exists \(u\in\widetilde{M_{\alpha}}\) such that \[\int_{\Omega}|\nabla u|^{p(x)-2}\nabla u\nabla vdx=\lambda\int_{\Omega}V(x)|u |^{q(x)-2}uvdx\] for any \(v\in X\). By taking \(v=u\), we have \(\int_{\Omega}|\nabla u|^{p(x)}dx=\lambda\int_{\Omega}V(x)|u|^{q(x)}dx\). Besides \[\lambda<\lambda_{*}=\inf_{u\in\widetilde{M_{\alpha}}}\frac{\int_{\Omega}| \nabla u|^{p(x)}dx}{\int_{\Omega}V(x)|u|^{q(x)}dx}\leq\frac{\int_{\Omega}| \nabla u|^{p(x)}dx}{\int_{\Omega}V(x)|u|^{q(x)}dx}.\] Thus \[\int_{\Omega}|\nabla u|^{p(x)}dx>\lambda\int_{\Omega}V(x)|u|^{q(x)}dx=\int_{ \Omega}|\nabla u|^{p(x)}dx.\] This is a contradiction. 2. Let \(\alpha>0\), \(\lambda_{\alpha}\) be defined as in relation (3.4) and \(\lambda\in(0,\lambda_{\alpha})\). Let consider the closed set \(\widetilde{M_{\alpha}}\) as in (2.6) and denote \(U_{\alpha}\) the interior set of the \(\widetilde{M_{\alpha}}\). \(\widetilde{M_{\alpha}}\) endowed with the norm of \(X\) is Banach subspace of \(X\). Since \(I_{\lambda}(0)=0\), we deduce from Lemma 3.1 that \(I_{\lambda}\) achieved its infimum in the interior set of \(\widetilde{M_{\alpha}}\). Moreover \[I_{\lambda}(u) \geq -\frac{\lambda_{\alpha}}{q^{-}}C_{H}C^{q^{\pm}}\|V\|_{s(x)}\max(\|u \|^{q^{-}},\|u\|^{q^{+}})\] \[\geq\] that is \(I_{\lambda}\) is bounded below and then \[-\infty<\inf_{M_{\alpha}}I_{\lambda}=\inf_{U_{\alpha}}I_{\lambda},\] and consequently for any \(\epsilon>0\), one can find \(u_{\epsilon}\in U_{\alpha}\) such that \[I_{\lambda}(u_{\epsilon})<\inf_{U_{\alpha}}I_{\lambda}+\epsilon=\inf_{ \widetilde{M_{\alpha}}}I_{\lambda}+\epsilon.\] And next, from the Ekeland's variational principle applied to the functional \(I_{\lambda}:\widetilde{M_{\alpha}}\longrightarrow\mathbb{R}\), for any \(u\in\widetilde{M}_{\alpha}\) with \(u\neq u_{\epsilon}\), one has \[I_{\lambda}(u_{\epsilon})<I_{\lambda}(u)+\epsilon\|u-u_{\epsilon}\|\] (3.6) Choose \(u=u_{\epsilon}+tv\) for any \(v\in\widetilde{M_{\alpha}}\) and \(t>0\) small enough in (3.6), it then follows that \[\frac{I_{\lambda}(u_{\epsilon}+tv)-I_{\lambda}(u_{\epsilon})}{t}\geq-\epsilon\|v\|;\] replacing \(t\) by \(-t\), \(t\) chosen to be negative in the inequality above and letting \(t\to 0\), it follows that \[\|I^{\prime}_{\lambda}(u_{\varepsilon})\|_{X^{*}}\leq\epsilon.\] We then deduce the existence of a sequence \((u_{n})\subset U_{\alpha}\) such that \[-\infty<I_{\lambda}(u_{n})\rightarrow\inf_{U_{\alpha}}I_{\lambda}=\underline {c}\text{ and }I^{\prime}_{\lambda}(u_{n})\to 0\text{ in }X^{*}. 
\tag{3.7}\] Obviously, \((u_{n})_{n}\) is bounded and there exists \(u_{0}\in X\) such that \(u_{n}\rightharpoonup u_{0}\). It follows from sequential and weak semicontinuity of \(G\) ( cf Proposition 2.7) that \(G(u_{0})\leq\alpha\); that is \(u_{0}\in\widetilde{M}_{\alpha}\). On the other hand, we have: \[\langle G^{\prime}(u_{n}),u_{n}-u_{0}\rangle=\langle I^{\prime}_{\lambda}(u_{ n}),u_{n}-u_{0}\rangle+\lambda\langle F^{\prime}(u_{n}),u_{n}-u_{0}\rangle.\] Then using (3.7) and the fact that \(\langle F^{\prime}(u_{n}),u_{n}-u_{0}\rangle\to 0\) in Lemma 3.3, we deduce that \[\langle G^{\prime}(u_{n}),u_{n}-u_{0}\rangle\to 0, \tag{3.8}\] hence \((u_{n})_{n}\) converges strongly to \(u_{0}\) in X since the functional \(G\) is of type \(S_{+}\). Since \(I_{\lambda}\in C^{1}(X,\mathbb{R})\), we conclude that \[I^{\prime}_{\lambda}(u_{n})\to I^{\prime}_{\lambda}(u_{0})\text{ as }n \rightarrow\infty. \tag{3.9}\] Relations (3.7) and (3.9) show that \[I_{\lambda}(u_{0})=\underline{c}\text{ and }I^{\prime}_{\lambda}(u_{0})=0.\] It remains to prove that \(u_{0}\neq 0.\) For this aim, it suffices to show that \(c<0.\) Indeed, since \(q^{-}<p^{-}\), we can choose \(\varepsilon>0\) such that \(q^{-}+\varepsilon<p^{-}\). By the continuity of \(q(.)\), we deduce the existence of an open set \(\Omega_{0}\subset\Omega\) such that \(q(x)\leq q^{-}+\varepsilon<p^{-}\) for all \(x\in\Omega_{0}\). Let \(u_{1}\in M_{\alpha}\). It is obvious that \(tu_{1}\in U_{\alpha}\) for any \(t\in(0,1)\). So \[I_{\lambda}(tu_{1}) = \int_{\Omega}\frac{t^{p(x)}}{p(x)}|\nabla u_{1}|^{p(x)}dx-\lambda \int_{\Omega}\frac{t^{q(x)}}{q(x)}V|v_{0}|^{q(x)}dx\] \[\leq t^{p^{-}}\alpha-\frac{\lambda}{q^{+}}\int_{\Omega_{0}}t^{q(x)}V(x )|u_{1}|^{q(x)}dx\] \[\leq t^{p^{-}}\alpha-\frac{\lambda t^{q^{-}+\varepsilon}}{q^{+}} \int_{\Omega_{0}}V(x)|u_{1}|^{q(x)}dx.\] Therefore \[I_{\lambda}(tu_{1})<0 \tag{3.10}\] for \(0<t<\delta^{\frac{1}{p^{-}-q^{-}-\varepsilon}}\) with \(0<\delta<\min\left\{1;\frac{\lambda\int_{\Omega}V(x)|u_{1}|^{q(x)}dx}{\alpha q ^{+}}\right\}.\) Consequently, we get \[\inf_{U_{\alpha}}I_{\lambda}=c<0.\] So \(u_{0}\) is a nontrivial weak solution for problem (1.4) and thus any \(\lambda\in(0;\lambda_{\alpha})\) is an eigenvalue of problem (1.4) with corresponding eigenfunction in \(\widetilde{M}_{\alpha}\). From what precede, no eigenvalue lies in \((0,\lambda_{*})\) and hence \(\lambda_{*}=0\). **Remark 3.1**.: _Expression of \(\lambda_{\alpha}\) in (3.4) can be decoded to extract more information on the size of the eigenvalues set with respect the exponent \(p\) and \(q\). Indeed, suppose \(\alpha p^{+}\geq 1\) then_ \[\lambda_{\alpha}:=\alpha^{1-\frac{q^{+}}{p^{-}}}\frac{q^{-}\left(p^{+}\right)^ {-\frac{q^{+}}{p^{-}}}}{2C_{H}C^{q^{2}}\|V\|_{s(x)}} \tag{3.11}\] _and hence \(\lim_{\alpha\rightarrow+\infty}\lambda_{\alpha}=+\infty\) when \(q^{+}<p^{-}.\) On the other hand when \(q^{+}=p^{-}\), we observe that_ \[\lambda_{\alpha}=\lambda_{p^{-},q^{+}}:=\frac{q^{-}}{2p^{+}C_{H}C^{q^{\pm}}\|V \|_{s(x)}}\] ceases to depend on \(\alpha.\) This fact will enable us to provide farther, a multiplicity result on the eigenfunctions. In the case that \(\alpha p^{+}<1\)_ \[\lambda_{\alpha}:=\alpha^{1-\frac{a^{-}}{p^{+}}}\frac{q^{-}\left(p^{+}\right)^{- \frac{a^{-}}{p^{+}}}}{2C_{H}C^{q^{\pm}}\|V\|_{s(x)}}. 
\tag{3.12}\] _Hence, when \(\alpha\) goes toward \(0\), \(\lim\limits_{\alpha\to 0}\lambda_{\alpha}=0.\)_

In the light of Remark 3.1, we describe in the following the eigenvalues set \(\widetilde{\Lambda_{\alpha}}\) when problem (1.4) is a sublinear problem. Roughly speaking, we suppose that \(1<q(x)<p(x)\).

**Corollary 3.1**.: _Assume that assumptions (2.3) are fulfilled with moreover \(q^{+}<p^{-}\). Then \(\Lambda=(0,+\infty)\), that is, any \(\lambda>0\) is an eigenvalue of problem (1.4) on \(X\), and hence \(\lambda_{*}=\mu_{*}=0.\)_

**Proof.** We know that \(\Lambda\subset(0,+\infty).\) So let \(\lambda>0.\) Assuming that \(q^{+}<p^{-}\), we have \(\lim\limits_{\alpha\rightarrow+\infty}\lambda_{\alpha}=+\infty\), and then there exist \(\alpha\) big enough and \(\lambda_{\alpha}\) such that \(\lambda\in(0,\lambda_{\alpha})\subset\Lambda\); from Theorem 3.1 there is then an eigenfunction \(u_{\lambda}\in\widetilde{M_{\alpha}}\) associated to \(\lambda\). Thus \(\Lambda=(0,+\infty)\) with eigenfunctions in \(X.\) To conclude that \(\lambda_{*}=\mu_{*}=0,\) we just have to notice that \(\inf\widetilde{\Lambda_{\alpha}}=0\) and \(\lambda_{*}\geq\mu_{*}\geq 0\).

**Corollary 3.2**.: _Assume that assumptions (2.3) are fulfilled with \(q^{-}<p^{-}\) and \(q^{+}=p^{-}\). Then each \(\mu\in(0,\lambda_{p^{-},q^{+}})\) admits at least an infinite countable family of eigenfunctions in \(X\)._

**Proof.** Let \(\mu\in(0,\lambda_{p^{-},q^{+}})\) and consider an increasing sequence of positive real numbers \((\alpha_{n})_{n>0}\) such that \(\alpha_{n}p^{+}\geq 1\ \ \forall n.\) From Lemma 3.1, there exists a positive real number \(\lambda_{\alpha_{n}}\) such that for any \(\lambda\in(0,\lambda_{\alpha_{n}})\), inequality (3.1) holds. But since \(q^{+}=p^{-}\), \(\lambda_{\alpha_{n}}=\lambda_{p^{-},q^{+}}\) for all \(n\), and then \(\mu\in(0,\lambda_{\alpha_{n}})\ \ \forall n.\) Using inequality (3.5) in the proof of Theorem 3.1 along with the fact that \(\alpha_{n}p^{+}\geq 1\ \ \forall n\) and \(q^{+}=p^{-},\) we get \[I_{\mu}(u)\geq-\frac{\lambda_{p^{-},q^{+}}}{q^{-}}C_{H}C^{q^{\pm}}\|V\|_{s(x)}\max\left((\alpha_{n}p^{+})^{\frac{q^{-}}{p^{-}}},(\alpha_{n}p^{+})^{\frac{q^{-}}{p^{+}}},(\alpha_{n}p^{+})^{\frac{q^{+}}{p^{-}}},(\alpha_{n}p^{+})^{\frac{q^{+}}{p^{+}}}\right) \tag{3.13}\] \[\geq-\frac{\lambda_{p^{-},q^{+}}}{q^{-}}C_{H}C^{q^{\pm}}\|V\|_{s(x)}(\alpha_{n}p^{+})\geq-\frac{\alpha_{n}}{2}.\] So \(I_{\mu}(u)\) is bounded below on \(\widetilde{M}_{\alpha_{n}}\) and, since \(I_{\mu}(0)=0\) and \(I_{\mu}(u)\geq\alpha_{n}/2\) on \(M_{\alpha_{n}}\), it achieves its infimum in the interior of \(\widetilde{M}_{\alpha_{n}}\). Thus, proceeding closely to the idea developed in the proof of the theorem, we derive on each \(\widetilde{M}_{\alpha_{n}}\) some eigenfunctions associated to \(\mu\), and consequently, when \(n\) tends toward \(+\infty,\) we can extract a sequence of eigenfunctions \((u_{n})_{n>0}\) belonging to \(X\) and having the same eigenvalue \(\mu.\) Thus the proof is complete.

### The eigenvalue problem constrained to a sphere

The use of the Lagrange multipliers to solve a constrained eigenvalue problem like (1.4) on the sphere \(M_{\alpha}\) reduces to finding a real number \(\mu\in\mathbb{R}\) and \(u\in M_{\alpha}\) such that \[F^{\prime}(u)=\mu G^{\prime}(u),\ \ \ \mu\in\mathbb{R}.
\tag{3.14}\] Accordingly \(\lambda=\frac{1}{\mu}\) will be an eigenvalue for problem (1.4) corresponding to the eigenfunction \(u\). Thus, we will point out in what follows that problem (1.4) admits an eigenvalue by means of the Lagrange multipliers method. First of all let's denote \[\nu_{*}=\inf_{u\in M_{\alpha}}\frac{\int_{\Omega}|\nabla u|^{p(x)}dx}{\int_{ \Omega}V(x)|u|^{q(x)}dx}:=\inf_{u\in M_{\alpha}}\frac{\psi(u)}{\phi(u)}\ \ \ \mbox{and}\ \ \nu^{*}=\inf_{u\in M_{\alpha}}\frac{\int_{ \Omega}\frac{1}{p(x)}|\nabla u|^{p(x)}dx}{\int_{\Omega}\frac{V(x)}{q(x)}|u|^{q( x)}dx}:=\inf_{u\in M_{\alpha}}\frac{G(u)}{F(u)}.\] \[\Lambda_{\alpha}=\{\lambda\in\mathbb{R}/\exists u\in M_{\alpha}\ \mbox{such that}\ ( \lambda,u)\ \mbox{is an eigenpair of}\ (\ref{eq:1})\}\] #### 3.2.1. Eigenvalue and Lagrange multiplier The first result of this section is expressed as follows. **Theorem 3.2**.: _Consider that assumption \(\left(\ref{eq:1}\right)\) is fulfilled. Then_ 1. \(\frac{q^{-}}{p^{+}}\nu_{*}\leq\nu^{*}\leq\frac{q^{+}}{p^{-}}\nu_{*}.\) _and_ \(\nu^{*}=0\) _if only if_ \(\nu_{*}=0\)__ 2. \(\nu_{*}\neq 0\) _and if_ \(\lambda\) _is such that_ \(0<\lambda<\nu_{*}\) _then_ \(\lambda\notin\Lambda_{\alpha}.\)__ 3. _If moreover_ \(q^{+}<p^{-}\)_, then_ \(\nu^{*}\notin\Lambda_{\alpha}\) _and there exists some_ \(\lambda>\nu_{*}\) _such that_ \(\lambda\in\Lambda_{\alpha}\)_. Moreover_ \(\nu_{*}\notin\Lambda_{\alpha}\)__ The following lemmas are relevant for the proof of the theorem. **Lemma 3.2**.: _Let \(\alpha>0\) be given. For any \(u\in X\smallsetminus\{0\}\), there exists a unique \(t>0\) such that \(tu\in M_{\alpha}\)._ **Proof.** Let \(\alpha>0\) and \(u\in X\smallsetminus\{0\}\) be given. Consider the function \[h:(0;+\infty) \longrightarrow (0;+\infty)\] \[t \longmapsto h(t)=G(tu)=\int_{\Omega}\frac{t^{p(x)}}{p(x)}|\nabla u|^{p(x)}dx\] Obviously the function \(h\) is continuous and for any \(t_{1},t_{2}>0\) such that \(t_{1}<t_{2}\), we have \(h(t_{1})<h(t_{2})\); that is \(h\) is strictly increasing. For \(t\in(0,1)\), we have \(h(t)\leq\frac{t^{p^{-}}}{p^{-}}\int_{\Omega}|\nabla u|^{p(x)}dx\) and thus \(h(t)\longrightarrow 0\) as \(t\longrightarrow 0\). For \(t\in(1,\infty)\), \(h(t)\geq\frac{t^{p^{-}}}{p^{+}}\int_{\Omega}|\nabla u|^{p(x)}dx\). Hence \(h(t)\longrightarrow+\infty\) as \(t\longrightarrow+\infty\). It follows that \(h((0,\infty))=(0,\infty)\) and then the function \(h\) is bijective. We deduce that for any \(\alpha>0\), there exists a unique \(t>0\) such that \(h(t)=G(tu)=\alpha\); that is \(tu\in M_{\alpha}\). **Lemma 3.3**.: _Let \(\left(u_{n}\right)_{n}\subset X\) such that \(u_{n}\rightharpoonup u\). Then_ \[\lim_{n\rightarrow\infty}\int_{\Omega}V(x)|u_{n}|^{q(x)-2}u_{n}(u_{n}-u)dx=0\] **Proof.** Proceeding similarly as in Proposition 2.11, we have \(\left|\int_{\Omega}V(x)|u_{n}|^{q(x)-2}u_{n}(u_{n}-u)dx\right|\leq\) \(\left(\frac{1}{s^{-}}+\frac{1}{q^{-}s^{\prime-}}+\frac{1}{q^{\prime-}s^{\prime -}}\right)\|V\|_{s(x)}\max(\|u_{n}\|_{s^{\prime}(x)q(x)}^{q^{+}-1},\|u_{n}\|_ {s^{\prime}(x)q(x)}^{q^{-}-1})\|u_{n}-u\|_{s^{\prime}(x)q(x)}\). From \(u_{n}\rightharpoonup u_{0}\) and the compact embeddings \(X\hookrightarrow L^{s^{\prime}(x)q(x)}(\Omega)\), we have that \((|u_{n})_{n}\) is bounded in \(L^{s^{\prime}(x)q(x)}(\Omega)\) and \(\|u_{n}-u\|_{s^{\prime}(x)q(x)}\to 0\). The proof is complete. Next, we move on to the proof of Theorem 3.2 **Proof of Theorem 3.2** 1. 
First of all we observe that \(\nu^{*}=0\) if only if \(\nu_{*}=0\) results easily from the inequality \(\frac{q^{-}}{p^{+}}\nu_{*}\leq\nu^{*}\leq\frac{q^{+}}{p^{-}}\nu_{*}\). Next, from Remark (2.1) we have for all \(u\in M_{\alpha}\), \[\frac{\alpha p^{-}}{q^{+}F(u)}\leq\frac{\psi(u)}{\phi(u)}\leq\frac{p^{+}\alpha }{q^{-}F(u)}\] And recalling the definitions of \(\nu^{*}\) and \(\nu_{*}\), we get \(\frac{q^{-}}{p^{+}}\nu_{*}\leq\nu^{*}\leq\frac{q^{+}}{p^{-}}\nu_{*}\). 2. Suppose that \(\nu^{*}=\inf\limits_{u\in M_{\alpha}\smallsetminus\{0\}}\frac{G(u)}{F(u)}=0\). Thus there exists a sequence \((u_{n})_{n}\) in \(M_{\alpha}\) such that \(\lim\limits_{n\rightarrow+\infty}\frac{G(u_{n})}{F(u_{n})}=\lim\limits_{n \rightarrow+\infty}\frac{\alpha}{F(u_{n})}=0\); that is \[\lim\limits_{n\rightarrow+\infty}F(u_{n})=+\infty\] (3.15) But since the sequence \((u_{n})_{n}\) belongs to \(M_{\alpha}\), it is bounded in \(X\) and then converges weakly towards a function \(u\) and because of the strong continuity of \(F\) (cf Proposition 2.11), \(\lim\limits_{n\rightarrow+\infty}F(u_{n})=F(u)\). A contradiction with (3.15) and then \(\nu^{*}>0\). Suppose that there exists \(\lambda\in(0,\nu_{*})\) such that \(\lambda\in\Lambda_{\alpha}\). Arguing in a similar way as in the first assertion of Theorem 3.1, we reach a contradiction and hence there is no eigenvalue of problem (1.4) in \((0,\nu_{*})\). 3. Since \(q^{+}<p^{-}\) and \(\nu^{*}\leq\frac{q^{+}}{p^{-}}\nu_{*}\), we get \(\nu^{*}<\nu_{*}\) and since no eigenvalue belongs to \((0,\nu_{*})\), \(\nu^{*}\notin\Lambda_{\alpha}\). Let's prove that there exists an eigenvalue \(\lambda>\nu^{*}\) by proving that a minimizer of \(I_{\lambda}\) on \(M_{\alpha}\) is also a critical point. Since \[\inf\limits_{M_{\alpha}}I_{\lambda}(u)=\alpha-\lambda\sup\limits_{M_{\alpha}}F( u),\] (3.16) any minimizing sequence \((u_{n})_{n}\) of \(I_{\lambda}\) is a maximizing sequence of \(F\). On the other hand \((u_{n})_{n}\) being in \(M_{\alpha}\) is bounded in \(X\) and then, there exists \(u_{0}\in X\) such that \((u_{n})_{n}\) converges weakly to \(u_{0}\). Because \(G\) is sequentially weakly continuous and \(F\) is strongly continuous, we have \[G(u_{0})\leq\liminf_{n\rightarrow+\infty}G(u_{n})\leq\alpha, \tag{3.17}\] and \[F(u_{0})=\lim_{n\rightarrow+\infty}F(u_{n})=\sup_{M_{\alpha}}F(u). \tag{3.18}\] Clearly, from (3.17), the maximum in (3.18) is achieved in the closed, convex and bounded set \[\widetilde{M}_{\alpha}=\{u\in X;G(u)\leq\alpha\}. \tag{3.19}\] But \(F\) is a convex function and its maximum value occurs on the boundary of \(\widetilde{M}_{\alpha}\), that is on \(M_{\alpha}\). Accordingly, the limit \(u_{0}\in M_{\alpha}\) and then \[I_{\lambda}(u_{0})=\inf_{u\in M_{\alpha}}I_{\lambda}(u). \tag{3.20}\] Now, let's show that \(u_{0}\) is a critical point of \(I_{\lambda}\) on \(M_{\alpha}\). Since \(M_{\alpha}\) is not a vector space, we will consider some small variations around \(u_{0}\) that lies on \(M_{\alpha}\) so as in (see [4]). So, let \(u\in X=W^{1,p(x)}_{0}(\Omega)\) be fixed and \(\varepsilon>0\) small enough such that for any \(s\in(-\varepsilon,\varepsilon)\), the function \(s\mapsto u_{0}+su\) is not identically zero. From Lemma 3.2, there exists a function \(t:(-\varepsilon,\varepsilon)\longrightarrow(0,+\infty)\) such that \[G(t(s)(u_{0}+su))=\int_{\Omega}\frac{[t(s)]^{p(x)}}{p(x)}|\nabla(u_{0}+su)|^{ p(x)}dx=\alpha. \tag{3.21}\] As \(u_{0}\in M_{\alpha}\), we deduce from relation (3.21) that \(t(0)=1\). 
On the other hand, we have \[\lim_{s\to 0}G\left(t(s)(u_{0}+su)\right)=G(u_{0})=\alpha. \tag{3.22}\] Consequently, there is \(\varepsilon>0\) small enough such that for any \(s\in(-\varepsilon,\varepsilon)\) we have \(t(s)(u_{0}+su)\in M_{\alpha}\). It follows that the map \(s\longmapsto t(s)(u_{0}+su)\) defines a curve on \(M_{\alpha}\) which passes through \(u_{0}\) when \(s=0\); that is \(t(s)(u_{0}+su)\in M_{\alpha}\) for any \(u\in X\) and \(s\in(-\epsilon;\epsilon)\). One can easily see that the function \(t:s\in(-\epsilon,\epsilon)\longmapsto t(s)\) is derivable. Indeed, let's consider the map: \[\tilde{g}:(-\epsilon,\epsilon)\times(0;+\infty) \longrightarrow \mathbb{R}\] \[(s,t) \longmapsto \tilde{g}(s,t)=\int_{\Omega}\frac{t^{p(x)}}{p(x)}|\nabla u_{0}+s \nabla u|^{p(x)}dx-\alpha.\] Obviously, \(\tilde{g}(0,1)=0\), \(\tilde{g}\) is differentiable and for any \((s,t)\in(-\epsilon,\epsilon)\times(0;+\infty)\), we have: \[\frac{\partial\tilde{g}}{\partial t}(s,t)=\int_{\Omega}t^{p(x)-1}|\nabla u_{0 }+s\nabla u|^{p(x)}dx.\] We then deduce that \[\frac{\partial\tilde{g}}{\partial t}(0,1)=\int_{\Omega}|\nabla u_{0}|^{p(x)} dx\geq p^{-}\alpha\neq 0\] and by means of (3.1) the implicit functions theorem, for \(\epsilon\) small enough, there exists a derivable function \(\varphi:(-\epsilon,\epsilon)\longrightarrow(0;+\infty)\) such that \(\forall(s,t)\in(-\epsilon,\epsilon)\times(0,+\infty),\tilde{g}(s,t)=0\Leftrightarrow t =\varphi(s)\) such that \(1=\varphi(0)\). Writing \(t(s)=\varphi(s)\), yields a function \(t:s\in(-\epsilon,\epsilon)\longmapsto t(s)\) derivable with \(t(0)=1\). Moreover for any \(s\in(-\epsilon,\epsilon)\) and \(u\in X\) we have: \[\frac{\partial\tilde{g}}{\partial s}(s,t)=\int_{\Omega}t^{p(x)}|\nabla u_{0}+s \nabla u|^{p(x)-2}(\nabla u_{0}+s\nabla u)\nabla udx\] and accordingly \[t^{\prime}(s)=-\frac{\frac{\partial\tilde{g}}{\partial s}(s,t)}{\frac{\partial \tilde{g}}{\partial t}(s,t)}=-\frac{\int_{\Omega}[t(s)]^{p(x)}|\nabla u_{0}+s \nabla u|^{p(x)-2}(\nabla u_{0}+s\nabla u)\nabla udx}{\int_{\Omega}[t(s)]^{p( x)-1}|\nabla u_{0}+s\nabla u|^{p(x)}dx}. \tag{3.23}\] Put \[\gamma(s)=I_{\lambda}(t(s)(u_{0}+su)), \tag{3.24}\] of course \(\gamma\) is derivable on \((-\epsilon,\epsilon)\) and since \(u_{0}\) is a minimal point for \(I_{\lambda},\quad s=0\) is a critical point for \(\gamma\). 
So, for any \(s\in(-\epsilon;\epsilon)\), we have \[\gamma^{\prime}(s)=\langle I^{\prime}_{\lambda}(t(s)(u_{0}+su));t^{\prime}(s)(u_{0}+su)+t(s)u\rangle,\forall u\in X,\] and hence: \[0=\gamma^{\prime}(0)\Leftrightarrow\langle I^{\prime}_{\lambda}(t(0)u_{0});t^{\prime}(0)u_{0}+t(0)u\rangle=\langle I^{\prime}_{\lambda}(u_{0});t^{\prime}(0)u_{0}+u\rangle=0,\forall u\in X.\] Recalling the expression of \(t^{\prime}\), we get \(t^{\prime}(0)=-\frac{\int_{\Omega}|\nabla u_{0}|^{p(x)-2}\nabla u_{0}\nabla u\,dx}{\int_{\Omega}|\nabla u_{0}|^{p(x)}dx}\) and then \[\langle I^{\prime}_{\lambda}(u_{0}),u\rangle=-t^{\prime}(0)\langle I^{\prime}_{\lambda}(u_{0}),u_{0}\rangle,\forall u\in X\] that is \[\langle G^{\prime}(u_{0}),u\rangle-\lambda\langle F^{\prime}(u_{0}),u\rangle=\frac{\langle G^{\prime}(u_{0}),u\rangle}{\int_{\Omega}|\nabla u_{0}|^{p(x)}dx}\left(\int_{\Omega}|\nabla u_{0}|^{p(x)}dx-\lambda\int_{\Omega}V(x)|u_{0}|^{q(x)}dx\right),\forall u\in X.\] Hence \[\langle G^{\prime}(u_{0}),u\rangle-\frac{\int_{\Omega}|\nabla u_{0}|^{p(x)}dx}{\int_{\Omega}V(x)|u_{0}|^{q(x)}dx}\langle F^{\prime}(u_{0}),u\rangle=0,\forall u\in X\] and then \[\langle I^{\prime}_{\lambda}(u_{0}),u\rangle=0,\quad\forall u\in X\mbox{ with }\lambda=\frac{\int_{\Omega}|\nabla u_{0}|^{p(x)}dx}{\int_{\Omega}V(x)|u_{0}|^{q(x)}dx}.\] Hence \(\lambda>\nu_{*}\) is an eigenvalue of problem (1.4) with corresponding eigenfunction \(u_{0}\) in \(M_{\alpha}\), and it is the smallest one constrained to the sphere \(M_{\alpha}\).

We next show that \(\nu_{*}\notin\Lambda_{\alpha}\). Suppose by contradiction that there exists \(u_{*}\in M_{\alpha}\) such that \[\langle I^{\prime}_{\nu_{*}}(u_{*}),v\rangle=0\mbox{ for any }v\in X. \tag{3.25}\] By taking \(v=u_{*}\) in equation (3.25), we obtain: \[\nu_{*}=\frac{\psi(u_{*})}{\phi(u_{*})}. \tag{3.26}\] Obviously, we have \((1+s)t(s)u_{*}\in M_{\alpha}\) for any \(s\in(-\epsilon;\epsilon)\) and thus \[\frac{\psi(u_{*})}{\phi(u_{*})}=\nu_{*}\leq\frac{\psi((1+s)t(s)u_{*})}{\phi((1+s)t(s)u_{*})}\mbox{ for any }s\in(-\epsilon;\epsilon). \tag{3.27}\] We will show that there is some \(s_{0}\in(-\epsilon,\epsilon)\) such that \(0<[(1+s_{0})t(s_{0})]^{p^{-}-q^{+}}<1\). For this purpose, we define the function \[g(s)=G(t(s)(u_{*}+su))=\alpha\ \ \forall(u,s)\in X\times(-\epsilon,\epsilon). \tag{3.28}\] The function \(g\) is derivable and we have for all \(u\in X\) and \(s\in(-\epsilon,\epsilon)\): \[0=g^{\prime}(s)=\langle G^{\prime}(t(s)(u_{*}+su));t^{\prime}(s)(u_{*}+su)+t(s)u\rangle \tag{3.29}\] Let \(\theta>1\).
For \(u=\theta u_{*}\) in relation (3.29), we have: \[0=g^{\prime}(s) = \langle G^{\prime}((1+\theta s)t(s)u_{*});((1+\theta s)t^{\prime}(s)+\theta t(s))\,u_{*}\rangle\] \[= ((1+\theta s)t^{\prime}(s)+\theta t(s))\,\langle G^{\prime}((1+\theta s)t(s)u_{*});u_{*}\rangle\] \[= ((1+\theta s)t^{\prime}(s)+\theta t(s))\int_{\Omega}((1+\theta s)t(s))^{p(x)-1}|\nabla u_{*}|^{p(x)}dx.\] So for any \(s\in(0,\epsilon)\) with \(\epsilon>0\) small enough, we have: \[\int_{\Omega}((1+\theta s)t(s))^{p(x)-1}|\nabla u_{*}|^{p(x)}dx \geq \min\{(1+\theta s)t(s),1\}^{p^{+}-1}\int_{\Omega}|\nabla u_{*}|^{p(x)}dx\] \[\geq \min\{(1+\theta s)t(s),1\}^{p^{+}-1}\,p^{-}\alpha\] \[> 0.\] We then deduce that, for any \(s\in(0,\epsilon)\): \[g^{\prime}(s)=0 \Leftrightarrow (1+\theta s)t^{\prime}(s)+\theta t(s)=0\] \[\Leftrightarrow t^{\prime}(s)+\frac{\theta}{1+\theta s}t(s)=0\] \[\Leftrightarrow t(s)=\tau e^{-\ln(1+\theta s)}\mbox{ with }\tau\in\mathbb{R}\] \[\Leftrightarrow t(s)=\frac{1}{1+\theta s}\mbox{ since }t(0)=1\] \[\Leftrightarrow (1+s)t(s)=\frac{1+s}{1+\theta s}.\] For some \(s_{0}\in(0,\epsilon)\) and any \(\theta>1\), we have \(0<(1+s_{0})t(s_{0})=\frac{1+s_{0}}{1+\theta s_{0}}<1\). Since \(p^{-}>q^{+}\), we deduce that \(0<[(1+s_{0})t(s_{0})]^{p^{-}-q^{+}}<1\). Moreover, writing \(\sigma=(1+s_{0})t(s_{0})\in(0,1)\), we have \(\psi(\sigma u_{*})=\int_{\Omega}\sigma^{p(x)}|\nabla u_{*}|^{p(x)}dx\leq\sigma^{p^{-}}\psi(u_{*})\) and \(\phi(\sigma u_{*})\geq\sigma^{q^{+}}\phi(u_{*})\), so we deduce from relation (3.27) that \[\frac{\psi(u_{*})}{\phi(u_{*})}\leq\frac{\psi((1+s_{0})t(s_{0})u_{*})}{\phi((1+s_{0})t(s_{0})u_{*})}\leq[(1+s_{0})t(s_{0})]^{p^{-}-q^{+}}\frac{\psi(u_{*})}{\phi(u_{*})}<\frac{\psi(u_{*})}{\phi(u_{*})}. \tag{3.30}\] By combining relations (3.27) and (3.30) we get \(\frac{\psi(u_{*})}{\phi(u_{*})}<\frac{\psi(u_{*})}{\phi(u_{*})}\), which is a contradiction. In short, we have proved that if \(\nu_{*}\) is an eigenvalue of problem (1.4) with corresponding eigenfunction \(u_{*}\in M_{\alpha}\), then \(\langle I^{\prime}_{\nu_{*}}(u_{*});\theta u_{*}\rangle\neq 0\) for any \(\theta>1\). Hence \(\nu_{*}\notin\Lambda_{\alpha}\) if \(p^{-}>q^{+}\). #### 3.2.2. **Existence of a Ljusternik-Schnirelman eigenvalue sequence.** The most popular characterization of eigenvalues for nonlinear operators is certainly due to the Ljusternik-Schnirelman principle. Thus, many results on Ljusternik-Schnirelman characterizations exist in the literature when \(p=q\) is a constant or when \(p(.)=q(.)\), under various assumptions and boundary conditions ([10, 11, 13]). Here we are interested in the case \(p(.)\neq q(.)\), and particularly when assumptions (2.3) are satisfied and \(q^{+}<p^{-}\), that is, when the ranges of \(p\) and \(q\) do not interfere. By means of a version of the Ljusternik-Schnirelman (L-S) principle (see [7]) we derive the existence of a sequence of eigenvalues for problem (1.4) in \(M_{\alpha}\subset W^{1,p(x)}(\Omega)\), and moreover we establish a relationship between the smallest eigenvalue yielded by the Lagrange multiplier method in Theorem 3.2 and the first eigenvalue in the (L-S) sequence. Let \(X\) be a real reflexive Banach space and \(F\), \(G\) some functionals on \(X\) as above. We assume that: (H1): \(F,G:X\longrightarrow\mathbb{R}\) are even functionals, \(F,G\in C^{1}(X,\mathbb{R})\), and \(F(0)=G(0)=0\). (H2): \(F^{\prime}\) is strongly continuous (i.e. \(u_{n}\rightharpoonup u\) in \(X\) implies \(F^{\prime}(u_{n})\longrightarrow F^{\prime}(u)\)) and \(\langle F^{\prime}(u),u\rangle=0,u\in\overline{coM_{\alpha}}\) implies \(F(u)=0\), where \(\overline{coM_{\alpha}}\) is the closed convex hull of \(M_{\alpha}\) and \(M_{\alpha}\) is as in the previous sections. 
(H3): \(G^{\prime}\) is continuous, bounded and satisfies condition \(S_{0}\); i.e. as \(n\rightarrow\infty\), \(u_{n}\rightharpoonup u\), \(G^{\prime}(u_{n})\rightharpoonup v\) and \(\langle G^{\prime}(u_{n}),u_{n}\rangle\rightarrow\langle v,u\rangle\) imply \(u_{n}\to u\). (H4): The level set \(M_{\alpha}\) is bounded and \(u\neq 0\) implies \(\langle G^{\prime}(u),u\rangle>0\), \(\lim_{t\rightarrow\infty}G(tu)=+\infty\) and \(\inf_{u\in M_{\alpha}}\langle G^{\prime}(u),u\rangle>0\). Let \[\Sigma_{(n,\alpha)}=\{H\subset M_{\alpha};H\mbox{ is compact},-H=H\mbox{ and }\gamma(H)\geq n\}\] where \(\gamma(H)\) denotes the genus of \(H\), i.e. \(\gamma(H):=\inf\left\{k\in\mathbb{N}:\exists\,h:H\longrightarrow\mathbb{R}^{k}\setminus\{0\}\mbox{ such that }h\mbox{ is continuous and odd}\right\}\) (a classical example calibrating the genus is recalled below). Let us define the following (L-S) sequence \[a_{(n,\alpha)}=\left\{\begin{array}{ll}\sup_{H\in\Sigma_{(n,\alpha)}}\inf_{u\in H}F(u)&\mbox{if }\Sigma_{(n,\alpha)}\neq\emptyset\\ 0&\mbox{if }\Sigma_{(n,\alpha)}=\emptyset\end{array}\right. \tag{3.31}\] \[\chi_{\alpha}=\left\{\begin{array}{ll}\sup\{n\in\mathbb{N};a_{(n,\alpha)}>0\}&\mbox{if }a_{(1,\alpha)}>0\\ 0&\mbox{if }a_{(1,\alpha)}=0\end{array}\right. \tag{3.32}\] We suppose that \(q^{+}<p^{-}\) so that the functional \(F\) is bounded below and the quantities above are meaningful. The well-known Ljusternik-Schnirelman principle asserts conditions under which the \(a_{(n,\alpha)}\) provide a sequence of eigenpairs \((u_{n,\alpha},\mu_{n,\alpha})\) satisfying (3.14), that is: \[F^{\prime}(u_{n,\alpha})=\mu_{n,\alpha}G^{\prime}(u_{n,\alpha}).\] We state below the version corresponding to our situation. **Proposition 3.1**.: _(Ljusternik-Schnirelman principle) (see [29], [11]) Assume that \(V>0\) and \((H1)-(H4)\) are fulfilled. Then the following assertions hold._ 1. _If_ \(a_{n,\alpha}>0\)_, then_ (3.14) _possesses a pair_ \(\pm u_{n,\alpha}\) _of eigenfunctions and an eigenvalue_ \(\mu_{n,\alpha}\neq 0\)_; furthermore_ \(F(u_{n,\alpha})=a_{n,\alpha}\)_._ 2. _If_ \(\chi_{\alpha}=\infty\)_,_ (3.14) _has infinitely many pairs_ \(\pm u\) _of eigenfunctions corresponding to non-zero eigenvalues._ 3. \(\infty>a_{1,\alpha}\geq a_{2,\alpha}\geq...\geq 0\) _and_ \(a_{n,\alpha}\to 0\) _as_ \(n\to\infty\)_._ 4. _If_ \(\chi_{\alpha}=\infty\) _and_ \(F(u)=0,u\in\overline{coM_{\alpha}}\) _implies_ \(\langle F^{\prime}(u),u\rangle=0\)_, then there exists an infinite sequence_ \((\mu_{n,\alpha})_{n}\) _of distinct eigenvalues of_ (3.14) _such that_ \(\mu_{n,\alpha}\to 0\) _as_ \(n\to\infty\)_._ 5. _Assume that_ \(F(u)=0,u\in\overline{coM_{\alpha}}\) _implies_ \(u=0\)_. Then_ \(\chi_{\alpha}=\infty\) _and there exists a sequence of eigenpairs_ \((u_{n,\alpha},\mu_{n,\alpha})\) _of_ (3.14) _such that_ \(u_{n,\alpha}\rightharpoonup 0,\ \mu_{n,\alpha}\to 0\) _as_ \(n\to\infty\) _and_ \(\mu_{n,\alpha}\neq 0\ \forall n\)_._ To apply the Ljusternik-Schnirelman principle to our problem, we have to prove that the assumptions \((H1)-(H4)\) are satisfied. Let \(F\) and \(G\) be defined as above. Clearly the functionals \(F\) and \(G\) satisfy condition \((H1)\). As \(V>0\), we obviously have \(a_{n,\alpha}>0\) for any \((n,\alpha)\in\mathbb{N}^{*}\times\mathbb{R}_{+}^{*}\). We next prove that \(F\) and \(G\) satisfy conditions \((H2)-(H4)\). For this aim, we proceed as follows. 
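For orientation, we first record the classical example that calibrates the genus; this is a standard fact (via the Borsuk-Ulam theorem) added here for the reader's convenience, and it is not taken from the references above:

```latex
% For the unit sphere S^{k-1} = { x \in \mathbb{R}^k : \|x\| = 1 }, an odd
% continuous map h : S^{k-1} \to \mathbb{R}^{k-1} \setminus \{0\} cannot exist
% (Borsuk--Ulam), so \gamma(S^{k-1}) \geq k; the inclusion of S^{k-1} into
% \mathbb{R}^{k} \setminus \{0\} gives \gamma(S^{k-1}) \leq k. Hence
\gamma\big(S^{k-1}\big)=k.
```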
**Proposition 3.2**.: _(cf. [28, 12]) Let \(\Omega\) be a domain in \(\mathbb{R}^{N}\) and let \(\phi:\Omega\times\mathbb{R}_{+}\longrightarrow\mathbb{R}_{+}\) be a generalized N-function which is uniformly convex and satisfies the \(\Delta_{2}\)-condition, that is, there exists \(c>0\) such that \(\phi(x,2t)\leq c\phi(x,t)\) for all \(x\in\Omega\) and all \(t\geq 0\). Then, if \((u_{n})_{n}\) is a sequence of integrable functions in \(\Omega\) such that_ \[u(x)=\lim_{n\to+\infty}u_{n}(x)\quad\text{for a.e. }x\in\Omega\quad\text{and}\quad\int_{\Omega}\phi(x,|u|)dx=\lim_{n\to+\infty}\int_{\Omega}\phi\left(x,|u_{n}|\right)dx,\] _we have_ \[\lim_{n\to+\infty}\int_{\Omega}\phi\left(x,|u_{n}-u|\right)dx=0.\] **Lemma 3.4**.: _The functional \(F\) satisfies condition \((H2)\)._ **Proof.** Let \(\left(u_{n}\right)_{n}\subset X\) and \(u\in X\) be such that \(u_{n}\rightharpoonup u\). We have to show that \(F^{\prime}\left(u_{n}\right)\to F^{\prime}\left(u\right)\) in \(X^{*}\); that is, \(\left\langle F^{\prime}(u_{n})-F^{\prime}(u),v\right\rangle\to 0\) uniformly for \(\|v\|\leq 1\). Let \(v\in X\). Using the generalized Hölder inequality as previously, we have \[|\langle F^{\prime}(u_{n})-F^{\prime}(u),v\rangle| = \left|\int_{\Omega}V(x)\left(|u_{n}|^{q(x)-2}u_{n}-|u|^{q(x)-2}u\right)vdx\right|\] \[\leq \int_{\Omega}V(x)\left|\left(|u_{n}|^{q(x)-2}u_{n}-|u|^{q(x)-2}u\right)v\right|dx\] \[\leq \left(\frac{1}{s^{-}}+\frac{1}{q^{-}s^{\prime-}}+\frac{1}{q^{\prime-}s^{\prime-}}\right)\|V\|_{s(x)}\left\||u_{n}|^{q(x)-2}u_{n}-|u|^{q(x)-2}u\right\|_{s^{\prime}(x)q^{\prime}(x)}\|v\|_{s^{\prime}(x)q(x)}\] \[\leq C\left(\frac{1}{s^{-}}+\frac{1}{q^{-}s^{\prime-}}+\frac{1}{q^{\prime-}s^{\prime-}}\right)\|V\|_{s(x)}\left\||u_{n}|^{q(x)-2}u_{n}-|u|^{q(x)-2}u\right\|_{s^{\prime}(x)q^{\prime}(x)}\|v\|\] where \(C\) is a positive constant due to the Sobolev embedding. Next we have to show that \(\omega_{n}=|u_{n}|^{q(x)-2}u_{n}\to\omega=|u|^{q(x)-2}u\) in \(L^{s^{\prime}(x)q^{\prime}(x)}(\Omega)\). Since the embedding \(W^{1,p(x)}(\Omega)\hookrightarrow L^{s^{\prime}(x)q(x)}(\Omega)\) is compact, \(u_{n}\rightharpoonup u\) in \(X\) implies \(u_{n}\to u\) in \(L^{s^{\prime}(x)q(x)}(\Omega)\). Then \(u_{n}(x)\to u(x)\) a.e. \(x\in\Omega\) and \(\omega_{n}(x)\to\omega(x)\) a.e. in \(\Omega\). Recalling again the fact that \(u_{n}\to u\) in \(L^{s^{\prime}(x)q(x)}(\Omega)\), we obtain that \(\int_{\Omega}|\omega_{n}|^{s^{\prime}(x)q^{\prime}(x)}dx\longrightarrow\int_{\Omega}|\omega|^{s^{\prime}(x)q^{\prime}(x)}dx\). Set \(\phi:\Omega\times\mathbb{R}_{+}\longrightarrow\mathbb{R}_{+}\) such that \(\phi(x,t)=\varphi_{s^{\prime}(x)q^{\prime}(x)}(t)=t^{s^{\prime}(x)q^{\prime}(x)}\). It is well known that \(\phi\) is a generalized N-function which is uniformly convex and satisfies the \(\Delta_{2}\)-condition. From what precedes we have \[u_{n}(x)\to u(x)\text{ a.e. }x\in\Omega\text{ and }\lim_{n\to\infty}\int_{\Omega}\phi(x,|\omega_{n}|)dx=\int_{\Omega}\phi(x,|\omega|)dx.\] 
Accordingly, by applying Proposition 3.2, we conclude that \[0=\lim_{n\to\infty}\int_{\Omega}\phi(x,|\omega_{n}-\omega|)dx=\lim_{n\to\infty}\int_{\Omega}|\omega_{n}-\omega|^{s^{\prime}(x)q^{\prime}(x)}dx=\lim_{n\to\infty}\int_{\Omega}\varphi_{s^{\prime}(x)q^{\prime}(x)}(|\omega_{n}-\omega|)dx\] and then, by means of Proposition 2.5, we have the convergence in the sense of the norm, that is, \[\lim_{n\to\infty}\left\|\left|u_{n}\right|^{q(x)-2}u_{n}-\left|u\right|^{q(x)-2}u\right\|_{s^{\prime}(x)q^{\prime}(x)}=0.\] Therefore \(\langle F^{\prime}(u_{n})-F^{\prime}(u),v\rangle\longrightarrow 0\) for all \(v\in X\). Hence \(F^{\prime}(u_{n})\longrightarrow F^{\prime}(u)\) in \(X^{*}\) and thus \(F\) satisfies \((H2)\). **Lemma 3.5**.: _The functional \(G^{\prime}\) satisfies \((H3)\)._ **Proof.** \(G^{\prime}\) is continuous and bounded (Proposition 2.8). We next show that \(G^{\prime}\) satisfies condition \((S_{0})\). Let \((u_{n})_{n}\) be a sequence in \(X\) such that \(u_{n}\rightharpoonup u_{0}\), \(G^{\prime}(u_{n})\rightharpoonup v_{0}\) and \(\langle G^{\prime}(u_{n}),u_{n}\rangle\longrightarrow\langle v_{0},u_{0}\rangle\) for some \(v_{0}\in X^{*}\) and \(u_{0}\in X\). Then \[\langle G^{\prime}(u_{n})-G^{\prime}(u_{0}),u_{n}-u_{0}\rangle = \langle G^{\prime}(u_{n}),u_{n}-u_{0}\rangle-\langle G^{\prime}(u_{0}),u_{n}-u_{0}\rangle\] \[= \langle G^{\prime}(u_{n}),u_{n}\rangle-\langle G^{\prime}(u_{n}),u_{0}\rangle-\langle G^{\prime}(u_{0}),u_{n}-u_{0}\rangle.\] Accordingly, we have \[\lim_{n\to\infty}\langle G^{\prime}(u_{n})-G^{\prime}(u_{0}),u_{n}-u_{0}\rangle=0\] and since \(G^{\prime}\) is of type \((S_{+})\), we have that \(u_{n}\longrightarrow u_{0}\) as \(n\longrightarrow\infty\); thus \(G^{\prime}\) satisfies \((H3)\). **Lemma 3.6**.: _The functional \(G\) satisfies \((H4)\)._ **Proof.** Obviously, the level set \(M_{\alpha}\) is bounded, \(\langle G^{\prime}(u),u\rangle=\int_{\Omega}|\nabla u|^{p(x)}dx=\psi(u)\geq\alpha p^{-}>0\) and \(G(tu)\to\infty\) as \(t\to\infty\) for any \(u\in M_{\alpha}\). Suppose that \(\inf\limits_{u\in M_{\alpha}}\langle G^{\prime}(u),u\rangle=\inf\limits_{u\in M_{\alpha}}\psi(u)=0\). Then there is \((u_{n})_{n}\subset M_{\alpha}\) such that \(\lim\limits_{n\to\infty}\psi(u_{n})=0\). By Remark 2.1, we have: \[p^{-}\alpha\leq\lim_{n\to\infty}\psi(u_{n})\leq p^{+}\alpha,\] that is, \[p^{-}\alpha\leq 0\leq p^{+}\alpha,\] which is a contradiction. Thus \(\inf\limits_{u\in M_{\alpha}}\langle G^{\prime}(u),u\rangle>0\) and then \(G\) satisfies \((H4)\). Thus \(F\) and \(G\) satisfy conditions \((H1)-(H4)\). By the Ljusternik-Schnirelman principle (Proposition 3.1) we conclude that: **Proposition 3.3**.: _[Existence of a Ljusternik-Schnirelman sequence] (see [11]) For each given \(\alpha>0\), there exists an eigenpair sequence \((u_{n,\alpha},\mu_{n,\alpha})_{n}\) obtained from the Ljusternik-Schnirelman principle such that:_ 1. \(G(\pm u_{n,\alpha})=\alpha\) _and_ \(F(\pm u_{n,\alpha})=a_{n,\alpha}\)__ 2. \(\mu_{n,\alpha}=\frac{\langle F^{\prime}(u_{n,\alpha}),u_{n,\alpha}\rangle}{\langle G^{\prime}(u_{n,\alpha}),u_{n,\alpha}\rangle}\) _where each_ \(\mu_{n,\alpha}\) _is an eigenvalue of_ \(F^{\prime}(u)=\mu G^{\prime}(u)\) _on_ \(M_{\alpha}\)__ 3. \(u_{n,\alpha}\rightharpoonup 0\) _and_ \(\mu_{n,\alpha}\longrightarrow 0\) _as_ \(n\longrightarrow\infty\)__ 4. \(\infty>a_{1,\alpha}\geq a_{2,\alpha}\geq...\geq 0\) _and_ \(a_{n,\alpha}\to 0\) _as_ \(n\to\infty\)_._ **Remark 3.2**.: _Let \((u_{n,\alpha},\mu_{n,\alpha})\) be a sequence of eigenpairs satisfying (3.14)._ 
_Thus:_ \[\mu_{n,\alpha}=\frac{\langle F^{\prime}(u_{n,\alpha}),u_{n,\alpha}\rangle}{\langle G^{\prime}(u_{n,\alpha}),u_{n,\alpha}\rangle}=\frac{\phi(u_{n,\alpha})}{\psi(u_{n,\alpha})}\ \ \mbox{and therefore}\ \ \frac{q^{-}a_{n,\alpha}}{p^{+}\alpha}\leq\mu_{n,\alpha}\leq\frac{q^{+}a_{n,\alpha}}{p^{-}\alpha}.\] We next apply the Ljusternik-Schnirelman principle to the eigenvalue problem (1.4) and we obtain the following results: **Proposition 3.4**.: 1. _Problem_ (1.4) _possesses a sequence_ \((\lambda_{n,\alpha})\) _of eigenvalues obtained by using the (L-S) principle and such that_ \(\lambda_{n,\alpha}=\frac{1}{\mu_{n,\alpha}}\) _for all_ \(n\) _and_ \(\alpha>0\)_. Furthermore,_ \(\lambda_{n,\alpha}\to+\infty\) _as_ \(n\to+\infty\)_._ 2. _For any_ \(n\in\mathbb{N}^{*}\) _and_ \(\alpha>0\)_, we have:_ \[\frac{p^{-}}{q^{+}}\frac{\alpha}{a_{n,\alpha}}\leq\lambda_{n,\alpha}\leq\frac{p^{+}}{q^{-}}\frac{\alpha}{a_{n,\alpha}}. \tag{3.33}\] **Proof.** 1. Let \(u\in X=W^{1,p(x)}(\Omega)\) be an eigenfunction satisfying relation (3.14). Then \[F^{\prime}(u)=\mu G^{\prime}(u) \Leftrightarrow \int_{\Omega}V(x)|u|^{q(x)-2}uvdx=\mu\int_{\Omega}|\nabla u|^{p(x)-2}\nabla u\nabla vdx\] \[\Leftrightarrow \int_{\Omega}|\nabla u|^{p(x)-2}\nabla u\nabla vdx=\frac{1}{\mu}\int_{\Omega}V(x)|u|^{q(x)-2}uvdx.\] By using the weak formulation (2.4) we get: \[\lambda=\frac{1}{\mu}.\] Hence \(\lambda_{n,\alpha}=\frac{1}{\mu_{n,\alpha}}\) for any \(n\in\mathbb{N}^{*}\) and \(\alpha>0\). From the (L-S) principle we have \(a_{n,\alpha}\to 0\) as \(n\rightarrow+\infty\); thus \(\lambda_{n,\alpha}\rightarrow+\infty\) as \(n\rightarrow+\infty\). 2. Let \((\mu_{n,\alpha},u_{n,\alpha})\) be an eigenpair satisfying (3.14). It follows that \(F(u_{n,\alpha})=a_{n,\alpha}\) and \(G(u_{n,\alpha})=\alpha\). We then deduce from Remark 3.2 that \[\frac{p^{-}}{q^{+}}\frac{\alpha}{a_{n,\alpha}}\leq\lambda_{n,\alpha}\leq\frac{p^{+}}{q^{-}}\frac{\alpha}{a_{n,\alpha}}.\] #### 3.2.3. **The smallest Lagrange multiplier and the (L-S) sequence.** Let \(\lambda>0\) be the smallest eigenvalue of problem (1.4) obtained in Theorem 3.2 by the Lagrange multipliers. Then there is \(u_{\lambda}\in M_{\alpha}\) such that \(u_{\lambda}\) is a (weak) solution of problem (1.4) and: \[I_{\lambda}(u_{\lambda})=\inf_{u\in M_{\alpha}}I_{\lambda}(u)\mbox{ while }F(u_{\lambda})=\sup_{u\in M_{\alpha}}F(u).\] By combining the different results in this section, we obtain the following result on the coincidence of the smallest eigenvalue \(\lambda\) on the sphere with the first term of the (L-S) eigenvalue sequence: **Theorem 3.3**.: _Suppose that assumption (2.3) and \(q^{+}<p^{-}\) hold and let \(\alpha>0\). The eigenpair \((\lambda,u_{\lambda})\) of problem (1.4), where \(\lambda\) is the smallest eigenvalue on the sphere obtained in Theorem 3.2, is such that_ \[I_{\lambda}(u_{1,\alpha})=\inf_{u\in M_{\alpha}}I_{\lambda}(u)=I_{\lambda}(u_{\lambda}),\] _that is, \(u_{1,\alpha}\) is an eigenfunction solution of problem (1.4) associated to the eigenvalue \(\lambda\), and hence \(\lambda=\lambda_{1,\alpha}=\frac{1}{\mu_{1,\alpha}}\) where \(\mu_{1,\alpha}\) is the first term in the (L-S) eigenvalue sequence._ **Proof of Theorem 3.3.** Let \((\lambda,u_{\lambda})\) be an eigenpair as in Theorem 3.2. Then: \[F(u_{\lambda})=\sup_{u\in M_{\alpha}}F(u)\geq F(u_{1,\alpha})=a_{1,\alpha}. \tag{3.34}\] Put \(H_{0}=\{\pm u_{\lambda}\}\). 
Of course, \(H_{0}\subset M_{\alpha}\) is compact, symmetric, and \(\gamma(H_{0})=1\), where \(\gamma(H_{0})\) is the genus of \(H_{0}\) (we refer to [7, 36] for more details on the genus). Consequently, we have \(H_{0}\in\Sigma_{(1,\alpha)}\) and: \[a_{1,\alpha} = \sup_{H\in\Sigma_{(1,\alpha)}}\inf_{u\in H}F(u)\] \[\geq \inf_{u\in H_{0}}F(u)=F(\pm u_{\lambda})=F(u_{\lambda}).\] This together with relation (3.34) gives \[a_{1,\alpha}=\sup_{u\in M_{\alpha}}F(u)=F(u_{1,\alpha})=F(u_{\lambda}). \tag{3.35}\] Hence \[\inf_{u\in M_{\alpha}}I_{\lambda}(u)=\alpha-\lambda\sup_{u\in M_{\alpha}}F(u)=\alpha-\lambda F(u_{1,\alpha})=I_{\lambda}(u_{1,\alpha}). \tag{3.36}\] Accordingly \(u_{1,\alpha}\) is a critical point of \(I_{\lambda}\) constrained to \(M_{\alpha}\) and therefore satisfies \[\frac{1}{\lambda}F^{\prime}(u_{1,\alpha})=G^{\prime}(u_{1,\alpha}). \tag{3.37}\] On the other hand, \(u_{1,\alpha}\) is an eigenfunction associated to \(\lambda_{1,\alpha}\) and consequently we get \[\frac{1}{\lambda_{1,\alpha}}F^{\prime}(u_{1,\alpha})=G^{\prime}(u_{1,\alpha}). \tag{3.38}\] From (3.37) and (3.38), we derive that \(\lambda=\lambda_{1,\alpha}\) and the proof is complete. ## 4. Superlinear problem Throughout this section, we will suppose that \[1<p(x)<q(x)<N<s(x)\ \ \text{and}\ \ s^{\prime}(x)q(x)<p^{*}(x)\ \ \ \ \text{for all}\ x\in\overline{\Omega}. \tag{4.1}\] First of all we notice that Lemma 3.1 in Section 3.1 is stated for arbitrary exponents \(p(.)\) and \(q(.)\) and accordingly is still valid. In particular the assertions in Remark 3.1 still hold, in the following variant. **Remark 4.1**.: _Suppose \(\alpha p^{+}\geq 1\); then_ \[\lambda_{\alpha}:=\alpha^{1-\frac{q^{+}}{p^{-}}}\frac{q^{-}\left(p^{+}\right)^{-\frac{q^{+}}{p^{-}}}}{2C_{H}C^{q^{\pm}}\|V\|_{s(x)}} \tag{4.2}\] _and hence \(\lim\limits_{\alpha\rightarrow+\infty}\lambda_{\alpha}=0\) when \(q^{+}>p^{-}\). In the case that \(\alpha p^{+}<1\),_ \[\lambda_{\alpha}:=\alpha^{1-\frac{q^{-}}{p^{+}}}\frac{q^{-}\left(p^{+}\right)^{-\frac{q^{-}}{p^{+}}}}{2C_{H}C^{q^{\pm}}\|V\|_{s(x)}}. \tag{4.3}\] _Hence, \(\lim\limits_{\alpha\to 0}\lambda_{\alpha}=+\infty\) when \(q^{-}>p^{+}\), and if \(q^{-}=p^{+}\), then_ \[\forall\ \alpha>0,\ \ \ \lambda_{\alpha}=\lambda_{p^{+},q^{-}}:=\frac{q^{-}}{2p^{+}C_{H}C^{q^{\pm}}\|V\|_{s(x)}}. \tag{4.4}\] Below is the main result of this section. **Theorem 4.1**.: _Suppose that assumptions (4.1) hold and \(q^{-}>p^{+}\). Then \(\Lambda=(0,+\infty)\)._ Under our assumptions the functional \(I_{\lambda}\) is obviously non-coercive, but it satisfies the Palais-Smale \((PS)\) condition. **Lemma 4.1**.: _Assume (4.1) with \(q^{-}\geq p^{+}\). Then \(I_{\lambda}\) satisfies the Palais-Smale \((PS)\) condition._ **Proof.** Let \((u_{n})_{n\in\mathbb{N}}\subseteq X\) be a \((PS)\) sequence for \(I_{\lambda}\), i.e., \((I_{\lambda}(u_{n}))_{n\in\mathbb{N}}\subset\mathbb{R}\) is bounded and \(I^{\prime}_{\lambda}(u_{n})\to 0\) as \(n\rightarrow\infty\); that is, there exist a positive constant \(k\in\mathbb{R}\) and a sequence \(\epsilon_{n}\to 0^{+}\) such that \[|I_{\lambda}(u_{n})|\leq k,\ \text{for every}\ n\in\mathbb{N}, \tag{4.5}\] and \[|\langle I^{\prime}_{\lambda}(u_{n}),v\rangle|\leq\epsilon_{n}\|v\|\ \ \ \forall v\in X. 
\tag{4.6}\] From (4.5) and (4.6), we have respectively \[k\geq\frac{1}{p^{+}}\int_{\Omega}|\nabla u_{n}|^{p(x)}dx-\frac{1}{q^{-}}\lambda\int_{\Omega}V|u_{n}|^{q(x)}dx \tag{4.7}\] and \[\frac{1}{q^{-}}\langle I^{\prime}_{\lambda}(u_{n}),u_{n}\rangle=\frac{1}{q^{-}}\int_{\Omega}|\nabla u_{n}|^{p(x)}dx-\frac{1}{q^{-}}\lambda\int_{\Omega}V|u_{n}|^{q(x)}dx\leq\frac{1}{q^{-}}\epsilon_{n}\|u_{n}\|. \tag{4.8}\] Subtracting (4.8) from (4.7), and noting that (4.6) also gives \(\langle I^{\prime}_{\lambda}(u_{n}),u_{n}\rangle\geq-\epsilon_{n}\|u_{n}\|\), we get \[k+\frac{1}{q^{-}}\epsilon_{n}\|u_{n}\|\geq\left(\frac{1}{p^{+}}-\frac{1}{q^{-}}\right)\int_{\Omega}|\nabla u_{n}|^{p(x)}dx. \tag{4.9}\] Since \(q^{-}\geq p^{+}\), the sequence \((u_{n})_{n}\) is bounded in \(X\) and then \(I_{\lambda}\) satisfies the \((PS)\) condition. The proof of Theorem 4.1 will be concluded using a standard version of the Mountain-Pass theorem (cf. [17]). **Theorem 4.2**.: _(Mountain-Pass theorem) Let \(X\) be a Banach space and \(\Theta:X\rightarrow\mathbb{R}\) a \(C^{1}\) functional which satisfies the \((PS)\) condition. Let \(S\) be a closed subset of \(X\) which disconnects \(X\). Let \(e_{0}\) and \(e_{1}\) be points of \(X\) which are in distinct connected components of \(X\setminus S\). Suppose that \(\Theta\) is bounded below on \(S\) and that the following condition is verified:_ \[\inf_{S}\Theta\geq b\text{ and }\max(\Theta(e_{0}),\Theta(e_{1}))<b. \tag{4.10}\] _Let_ \[\Gamma=\{f\in C([0,1];X):\ f(0)=e_{0},\ f(1)=e_{1}\}.\] _Then_ \[c=\inf_{f\in\Gamma}\max_{t\in[0,1]}\Theta(f(t))\geq b \tag{4.11}\] _and \(c\) is a critical value; that is, there exists \(u_{0}\in X\) such that \(\Theta(u_{0})=c\) and \(\Theta^{\prime}(u_{0})=0\)._ **Proof of Theorem 4.1.** The proof consists in showing that the geometry of the Mountain-Pass theorem is realized with \(\Theta=I_{\lambda}\). Since assumptions (4.1) and \(q^{-}>p^{+}\) are fulfilled, the functional \(I_{\lambda}\) satisfies the \((PS)\) condition for any \(\lambda>0\). Next, choose \(S\) to be \(M_{\alpha}\); given any \(\lambda>0\), Lemma 3.1 and the second property in Remark 4.1 provide \(\alpha\) (small enough) and \(\lambda_{\alpha}>0\) such that \(\lambda\in(0,\lambda_{\alpha})\) and \(I_{\lambda}(u)\geq\frac{\alpha}{2}\) on \(M_{\alpha}\). On the other hand, recalling again assumptions (4.1), we have \(p^{+}<q^{+}\). Let \(\varepsilon_{0}>0\) be such that \(p^{+}<q^{+}-\varepsilon_{0}\). By the continuity of \(q(.)\), we deduce the existence of an open set \(\Omega_{0}\subset\Omega\) such that \(p^{+}\leq q^{+}-\varepsilon_{0}\leq q(x)\) for all \(x\in\Omega_{0}\). Let \(v_{0}\in M_{\alpha}\). It is obvious that \(tv_{0}\in X\setminus M_{\alpha}\) for any \(t>1\). Then, we have: \[I_{\lambda}(tv_{0}) = \int_{\Omega}\frac{t^{p(x)}}{p(x)}|\nabla v_{0}|^{p(x)}dx-\lambda\int_{\Omega}\frac{t^{q(x)}}{q(x)}V(x)|v_{0}|^{q(x)}dx\] \[\leq t^{p^{+}}\alpha-\frac{\lambda}{q^{+}}\int_{\Omega}t^{q(x)}V(x)|v_{0}|^{q(x)}dx\] \[\leq t^{p^{+}}\alpha-\frac{\lambda t^{q^{+}-\varepsilon_{0}}}{q^{+}}\int_{\Omega_{0}}V(x)|v_{0}|^{q(x)}dx.\] Therefore, for any \(\lambda\in(0,\lambda_{\alpha})\) there is some \(t\) large enough, say \(t\geq\eta>1\), such that \(I_{\lambda}(tv_{0})<0\). Choosing \(e_{0}=0\) and \(e_{1}=tv_{0}\) for such a \(t\geq\eta>1\) (note that \(0\) and \(tv_{0}\) lie in distinct components of \(X\setminus M_{\alpha}\), since \(G(0)=0<\alpha<G(tv_{0})\)), we get \[\max(I_{\lambda}(0),I_{\lambda}(tv_{0}))=0<\frac{\alpha}{2}\leq\inf_{M_{\alpha}}I_{\lambda}\] and then we are done. 
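For orientation, in the constant-exponent case the mountain-pass geometry exploited above becomes fully explicit; the following is our illustration and is not part of the original argument:

```latex
% Constant exponents p(x) \equiv p and q(x) \equiv q with q > p: along a ray,
I_{\lambda}(tu)=\frac{t^{p}}{p}\int_{\Omega}|\nabla u|^{p}\,dx
-\lambda\,\frac{t^{q}}{q}\int_{\Omega}V(x)\,|u|^{q}\,dx,
% which is positive for small t > 0 and tends to -\infty as t \to +\infty
% whenever \int_{\Omega}V|u|^{q}\,dx > 0: exactly the configuration (a level
% set on which I_\lambda stays above \alpha/2, with both endpoints of the
% path below that level) required by Theorem 4.2.
```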
**Corollary 4.1**.: _Assume that assumptions (4.1) are fulfilled with \(q^{-}=p^{+}\). Then each \(\mu\in(0,\lambda_{p^{+},q^{-}})\) admits a countably infinite family of eigenfunctions, unbounded in \(X\)._ **Proof.** Choose \(\mu\in(0,\lambda_{p^{+},q^{-}})\) and consider an increasing sequence of positive real numbers \((\alpha_{n})_{n>0}\) such that \(\alpha_{n}p^{+}<1\quad\forall n\). Under the assumptions (4.1) and \(q^{-}=p^{+}\), \(I_{\mu}\) satisfies the \((PS)\) condition and \[I_{\mu}(u)\geq\frac{\alpha_{n}}{2}\quad\text{ for any }u\in M_{\alpha_{n}},\quad\forall n. \tag{4.12}\] Set \(\Gamma_{\alpha_{n}}=\{f\in C([0,1];X):\ f(0)=0,\ f(1)=t_{n}v_{n}^{0}\}\), where \(v_{n}^{0}\in M_{\alpha_{n}}\) and \(t_{n}>1\) is a fixed real number such that \(I_{\mu}(t_{n}v_{n}^{0})<0\); note that \(0\) and \(t_{n}v_{n}^{0}\) lie in distinct connected components of \(X\setminus M_{\alpha_{n}}\). Moreover \[\max(I_{\mu}(0),I_{\mu}(t_{n}v_{n}^{0}))=0<\frac{\alpha_{n}}{2}\leq\inf_{M_{\alpha_{n}}}I_{\mu},\quad\forall n.\] Next, applying the Mountain-Pass theorem, there exists at least a sequence \((u_{n})_{n}\subset X\) of eigenfunctions and a sequence of real numbers \((c_{n})_{n}\) with \(c_{n}=\inf_{f\in\Gamma_{\alpha_{n}}}\max\limits_{t\in[0,1]}I_{\mu}(f(t))\), such that \(I_{\mu}(u_{n})=c_{n}\geq\frac{\alpha_{n}}{2}\) and \(I_{\mu}^{\prime}(u_{n})=0\). Clearly \(I_{\mu}(u_{n})\) tends to \(+\infty\) when \(n\) tends to \(+\infty\), and consequently \((u_{n})_{n}\) is unbounded in \(X\). **Conclusion** We observe that * When the exponents \(p(x)\geq q(x)\), there is a neighborhood of \(0\) such that any positive element is an eigenvalue, and if moreover the ranges of \(p(.)\) and \(q(.)\) do not interfere, every real number in \((0,+\infty)\) is an eigenvalue. When we constrain the eigenvalue problem to a sphere, the smallest Lagrange multiplier coincides with the first term in the (L-S) sequence. * When the exponents \(p(x)\leq q(x)\) and the ranges of \(p(.)\) and \(q(.)\) do not interfere, we also have that every real number in \((0,+\infty)\) is an eigenvalue. * When \(p(x)\geq q(x)\) and \(p^{-}=q^{+}\), or \(p(x)\leq q(x)\) and \(p^{+}=q^{-}\), there are some eigenvalues having an infinite sequence of eigenfunctions. Many challenges are still arising from the nonhomogeneous eigenvalue problem. For instance: * When \(p(.)=q(.)\) is a constant, the (L-S) eigenvalue sequence \((\lambda_{n,\alpha})_{n}\) given by Proposition 3.4 is nondecreasing. Does the same hold when \(p(.)\) and \(q(.)\) are functions? * We still do not know whether the first eigenvalue in the (L-S) sequence is simple or not, neither when \(p(.)=q(.)\) nor when \(p(.)\neq q(.)\).
2306.00971
ViCo: Plug-and-play Visual Condition for Personalized Text-to-image Generation
Personalized text-to-image generation using diffusion models has recently emerged and garnered significant interest. This task learns a novel concept (e.g., a unique toy), illustrated in a handful of images, into a generative model that captures fine visual details and generates photorealistic images based on textual embeddings. In this paper, we present ViCo, a novel lightweight plug-and-play method that seamlessly integrates visual condition into personalized text-to-image generation. ViCo stands out for its unique feature of not requiring any fine-tuning of the original diffusion model parameters, thereby facilitating more flexible and scalable model deployment. This key advantage distinguishes ViCo from most existing models that necessitate partial or full diffusion fine-tuning. ViCo incorporates an image attention module that conditions the diffusion process on patch-wise visual semantics, and an attention-based object mask that comes at no extra cost from the attention module. Despite only requiring light parameter training (~6% compared to the diffusion U-Net), ViCo delivers performance that is on par with, or even surpasses, all state-of-the-art models, both qualitatively and quantitatively. This underscores the efficacy of ViCo, making it a highly promising solution for personalized text-to-image generation without the need for diffusion model fine-tuning. Code: https://github.com/haoosz/ViCo
Shaozhe Hao, Kai Han, Shihao Zhao, Kwan-Yee K. Wong
2023-06-01T17:58:44Z
http://arxiv.org/abs/2306.00971v2
# ViCo: Detail-Preserving Visual Condition for Personalized Text-to-Image Generation ###### Abstract Personalized text-to-image generation using diffusion models has recently been proposed and has attracted considerable attention. Given a handful of images containing a novel concept (_e.g._, a unique toy), we aim to tune the generative model to capture fine visual details of the novel concept and generate photorealistic images following a text condition. We present a plug-in method, named ViCo, for fast and lightweight personalized generation. Specifically, we propose an image attention module to condition the diffusion process on patch-wise visual semantics. We introduce an attention-based object mask that comes almost at no cost from the attention module. In addition, we design a simple regularization based on the intrinsic properties of text-image attention maps to alleviate the common overfitting degradation. Unlike many existing models, our method does not finetune any parameters of the original diffusion model. This allows more flexible and transferable model deployment. With only light parameter training (\(\sim\)6% of the diffusion U-Net), our method achieves comparable or even better performance than all state-of-the-art models, both qualitatively and quantitatively. Code: [https://github.com/haoosz/ViCo](https://github.com/haoosz/ViCo) ## 1 Introduction Nowadays, people can easily generate unprecedentedly high-quality photorealistic images with text prompts using fast-growing text-to-image diffusion models [14; 44; 36; 31; 42; 39]. However, these models are trained on a text corpus of seen words, and they fail to synthesize novel concepts like a special-looking dog or your Batman toy collection. Imagine how fascinating it would be if your plastic Batman toy could appear in scenes of the original 'Batman' movie. Recent works [8; 41; 23] make this fantasy come true, terming the task _personalized_ text-to-image generation. Specifically, given several images of a unique object, the goal is to capture the object and reconstruct it in text-guided image generation. DreamBooth [41] incorporates a unique identifier before the category word in the text embedding space and finetunes the entire diffusion model during training. The authors also finetune the text encoder, which empirically shows improved performance. Custom Diffusion [23] finds that tuning only a few parameters, _i.e._, the key and value projection matrices, is sufficiently powerful. DreamBooth and Custom Diffusion both face the issue of language drift [26; 27], since finetuning the pretrained model on new data can lead to a loss of the previously acquired language knowledge. They leverage a preservation loss to address this problem, which requires generating [41] or retrieving [23] massive sets of class-specific images. Textual Inversion [8] adopts minimal optimization by exclusively learning a novel text embedding to represent the given object, showing enhanced performance using latent diffusion models [39]. For the more powerful Stable Diffusion, however, the learned embedding struggles to express fine details of the visual object, and the generated results are prone to overfitting to the training samples due to the limited fine-grained expressiveness of CLIP [35]. In this work, we follow [8] and use a single learnable token embedding \(S_{\star}\) to represent the novel concept, instead of the "[V] class" form used in [41, 23]. 
In our vision, a single token embedding should be capable of effectively representing any visual concept within an ideal unified text-image space. To overcome the issue of the model's reduced expressiveness for novel concepts, we propose a plug-in approach that integrates visual conditions into the diffusion process. Specifically, we present an image cross-attention module that enables the integration of visual conditions from a reference image into the denoising process, without modifications or fine-tuning of any layers in the original diffusion model. Unlike feature concatenation [4] or direct element-wise addition [30, 56], which only preserve image layout, our cross-attention module is effective in capturing fine object-specific details from visual conditions. Another major challenge we address is the difficulty of isolating the foreground object of interest from the background. Instead of relying on prior annotated masks as in concurrent works [49, 43, 18], we propose an automatic mechanism to generate object masks that are naturally incorporated into the denoising process. Specifically, we leverage the notable semantic correlations between text and image in cross-attentions [13] and utilize the cross-attention map associated with the learnable object token to generate an object mask. Our method is computationally efficient, non-parametric, and online, and can effectively suppress the influence of distracting backgrounds in the training samples. To prevent the common overfitting problem, we introduce a simple yet effective regularization between the cross-attention maps associated with the end-of-text token and the learnable token. This regularization can be easily deployed without requiring any heavy preprocessing steps like image generation [41] and retrieval [23]. We name our model ViCo, which offers a number of advantages over previous works. (1) It is fast (\(\sim\)5 minutes) and lightweight (6% of the diffusion U-Net). (2) It is plug-and-play and requires no fine-tuning of the original diffusion model, allowing highly flexible and transferable deployment. (3) It is easy to implement and use, requiring no heavy preprocessing or mask annotations. (4) It can preserve fine object-specific details of the novel concept in text-guided generation (see Fig. 1). Our contributions include: (1) proposing an image cross-attention module to integrate visual conditions into the denoising process for capturing object-specific semantics; (2) introducing an automatic object mask generation mechanism based on the cross-attention map; (3) designing a simple yet effective regularization on the attention maps to overcome overfitting; and (4) providing quantitative and qualitative comparisons with state-of-the-art methods [41, 23, 8] and demonstrating the efficiency of ViCo in multiple applications. ## 2 Related work **Text-to-image synthesis.** In the literature of GANs [12, 3, 21, 22, 20], many works have made remarkable progress in text-to-image generation [38, 61, 45, 52, 55, 53] and image manipulation using text [10; 34; 51; 1], advancing the generation of images conditioned on plain text. These methods are trained on a fixed dataset that leverages strong prior knowledge of a specific domain. Figure 1: **Personalized text-to-image generation.** Generated images of the Batman toy (top) and the Toller (bottom) by ViCo. \(S_{\star}\) denotes the learnable text embedding [8]. 
Toward a zero-shot setting, auto-regressive models [37; 54] trained on large-scale data of text-image pairs achieve high-quality and content-rich text-to-image generation results. Based on the pretrained CLIP [35], Crowson _et al._ [7] apply CLIP similarity to optimize the generated image at test time without any training. The use of diffusion-based methods [14] has pushed the boundaries of text-to-image generation to a new level. Examples include DALL-E 2 [36], Imagen [42], GLIDE [31], and LDM [39]. Recently, some works consider personalized text-to-image generation by learning a token embedding [8] and finetuning [41] or partially finetuning [23] a diffusion model. Many concurrent works have emerged lately, but they require finetuning the whole or partial networks in the vanilla U-Net [46; 29; 49], or training with large-scale data on a specific category domain [43; 18; 9]. In contrast, our work tackles the general domain-agnostic task while keeping the pretrained diffusion model completely frozen. We compare the characteristics of different models in Tab. 1. **Visual condition.** Visual condition is commonly used in image-to-image translation [17; 59; 60; 6; 33], which involves training a model to map an input image to an output image based on a certain condition, _e.g._, edge, sketch, or semantic segmentation. Similar techniques have been used for tasks such as style transfer [11; 19], colorization [57; 24; 58], and super-resolution [25; 19; 48]. In the context of diffusion models, visual condition is also used for image editing [4] and controllable conditioning [30; 56]. Despite the massive study of visual condition, most works use it for controlling the spatial layout and geometric structure but discard its rich semantics. Our work stands out in capturing fine-grained semantics related to the specific visual appearance from visual conditions, an aspect that is rarely discussed. **Diffusion-based generative models.** Diffusion-based generative models are developing fast and continuously produce striking outcomes. Ho _et al._ [14] first present DDPMs, which progressively denoise from random noise to a synthesized image. DDIMs [44] accelerate the sampling process of DDPMs. Latent diffusion models (LDMs) [39] introduce multiple conditions in the latent diffusion space, producing realistic and high-fidelity text-to-image synthesis results. Following the implementation of LDMs [39], Stable Diffusion (SD) is trained on a large-scale text-image data collection, which achieves state-of-the-art text-to-image synthesis performance. Diffusion models are also widely used for generation tasks such as video generation [15; 50], inpainting [28], and semantic segmentation [16; 2]. ## 3 Method Given a handful of images (4-7) showing a novel object concept, we aim to generate images of this unique object following some text guidance. We aim to neatly inject the visual condition, which is neglected in previous works, along with the text condition into the diffusion model to better preserve the visual details. Following the attempt of Textual Inversion [8], we adopt a placeholder (\(S_{\star}\)) as the learnable text embedding to capture the unique visual object. We first quickly review Stable Diffusion [39], which serves as our base model (Sec. 3.1). We then introduce a simple yet efficient method to inject fine-grained semantics from visual conditions into the denoising process (Sec. 3.2), and show how to automatically generate object masks within training (Sec. 3.3). 
We finally present an attention regularization to avoid the common overfitting issue, together with our overall learning objective (Sec. 3.4). Fig. 2 shows an overview of our method. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & Placeholder type & Preprocessing & Diffusion U-Net & Text encoder & \#Trainable parameters & Visual condition \\ \hline DreamBooth [41] & [V] class & Image generation & Entirely finetuned & Entirely finetuned & 982.6M & ✗ \\ Custom Diffusion [23] & [V] class & Image retrieval & Partially finetuned & Frozen & 57.1M & ✗ \\ Textual Inversion [8] & \(S_{\star}\) & Null & Frozen & Frozen & 768 & ✗ \\ ViCo & \(S_{\star}\) & Null & Frozen & Frozen & 51.3M & ✓ \\ \hline \hline \end{tabular} \end{table} Table 1: **Model characteristics.** ### Stable Diffusion Stable Diffusion (SD) [39] is a latent text-to-image diffusion model derived from classic Denoising Diffusion Probabilistic Models (DDPMs) [14]. SD applies a pretrained autoencoder \(\mathcal{E}\) to extract latent codes for images, and a corresponding decoder \(\mathcal{D}\) to reconstruct the original images. Specifically, the autoencoder maps images \(x\in\mathcal{I}\) to latent code \(z=\mathcal{E}(x)\), and the decoder maps the latent code back to images \(\hat{x}=\mathcal{D}(\mathcal{E}(x))\), where \(\hat{x}\approx x\). SD adopts a diffusion model in the latent space of the autoencoder. For the text-to-image diffusion model, text conditions can be added to the diffusion process. The diffusion process can be formulated as iterative denoising that predicts the noise at the current timestep. In this process, we have the loss \[\mathcal{L}_{SD}=\mathbb{E}_{z\sim\mathcal{E}(x),y,\epsilon\sim\mathcal{N}(0,1),t}[\|\epsilon-\epsilon_{\theta}(z_{t},t,c_{\pi}(y))\|_{2}^{2}], \tag{1}\] where \(t\) is the timestep, \(z_{t}\) is the latent code at timestep \(t\), \(c_{\pi}\) is the text encoder that maps text prompts \(y\) into text embeddings, \(\epsilon\) is the noise sampled from a Gaussian distribution, and \(\epsilon_{\theta}\) is the denoising network (_i.e._, U-Net [40]) that predicts the noise. Training SD is flexible, such that we can jointly learn \(c_{\pi}\) and \(\epsilon_{\theta}\), or exclusively learn \(\epsilon_{\theta}\) with a frozen pretrained text encoder. ### Visual condition injection Common approaches for conditioning diffusion models on images include feature concatenation [4] and direct element-wise addition [30; 56]. These visual conditions show impressive performance in capturing the layout of images. However, visual semantics, especially fine-grained details, are hard to preserve, or even lost, using these image conditioning methods. Instead of only considering the patches at the same spatial location on the noisy latent code and the visual condition, we exploit correlations across all patches on both images. To this end, we propose to train an image cross-attention block that has the same structure as the text cross-attention block in the vanilla diffusion U-Net. The image cross-attention block takes an intermediate noisy latent code and a visual condition as inputs, integrating visual conditions into the denoising process. Some works [9; 43] acquire visual conditions from reference images by additionally training a visual feature extractor. This may cause a misalignment between the feature spaces of the latent code and the visual condition. 
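For concreteness, the denoising objective in Eq. (1) above can be sketched in a few lines of PyTorch. This is a minimal illustration with generic callables standing in for the autoencoder \(\mathcal{E}\), the text-encoder output \(c_{\pi}(y)\), and the U-Net \(\epsilon_{\theta}\); all names and the handling of the noise schedule are our assumptions, not the actual Stable Diffusion implementation:

```python
import torch
import torch.nn.functional as F

def sd_denoising_loss(eps_theta, encode, text_cond, x, alphas_cumprod):
    """Sketch of Eq. (1): MSE between sampled Gaussian noise and the U-Net's
    noise prediction on a noised latent. eps_theta(z_t, t, c) predicts noise;
    encode(x) returns the latent z = E(x)."""
    z = encode(x)                                    # z = E(x)
    t = torch.randint(0, alphas_cumprod.numel(), (z.shape[0],), device=z.device)
    eps = torch.randn_like(z)                        # eps ~ N(0, I)
    a = alphas_cumprod[t].view(-1, *([1] * (z.dim() - 1)))
    z_t = a.sqrt() * z + (1.0 - a).sqrt() * eps      # forward diffusion to step t
    return F.mse_loss(eps_theta(z_t, t, text_cond), eps)
```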
Instead of deploying extra networks, we directly feed the autoencoded reference image into the vanilla diffusion U-Net, and apply the intermediate latent codes as visual conditions. We use the pretrained autoencoder to map the reference image \(x_{r}\in\mathcal{I}\) to the latent space: \(z_{r}=\mathcal{E}(x_{r})\). Let \(\epsilon_{\theta}^{l}(\cdot)\) denote the output of the \(l\)-th attention block of the U-Net. The visual condition at the \(l\)-th attention block is then given by \[c_{I}^{l}=\epsilon_{\theta}^{l}(z_{r},t,c_{T}),\quad l\in\{0,1,\cdots,L-1\}, \tag{2}\] where \(L\) is the number of attention blocks in the U-Net, and \(c_{T}=c_{\pi}(y)\) is the text condition from the text encoder. Note that \(c_{T}\) is derived from token embeddings in which the embedding \(S_{\star}\) is learnable. Let the raw text cross-attention block from the vanilla U-Net be \(\mathcal{A}_{T}(q,kv)\) and the proposed image cross-attention block be \(\mathcal{A}_{I}(q,kv)\), where \(q\) denotes the query and \(kv\) the key/value input. We denote the new denoising process after incorporating \(\mathcal{A}_{I}\) as \(\epsilon_{\theta,\psi}\), in which the \(l\)-th attention block is denoted as \(\epsilon_{\theta,\psi}^{l}\). We can compute the intermediate latent code of the generated noisy image at the \(l\)-th attention block as \[n_{t}^{l}=\epsilon_{\theta,\psi}^{l}(z_{t},t,c_{T},c_{I}^{l}),\quad l\in\{0,1,\cdots,L-1\}. \tag{3}\] Figure 2: **Method overview.** We introduce a module of image (cross-)attention to integrate visual conditions into the frozen diffusion model. On the left, the noisy image and a reference image are fed into the diffusion U-Net. The frozen copy implies that both branches share the _same frozen_ U-Net. We follow [8] to learn the embedding \(S_{\star}\). On the right, we present the data stream comprising the original text attention and the proposed image attention; the two circled markers distinguish the attention output of the vanilla diffusion model from the visually conditioned output. The generation and use of the mask \(\hat{M}\) are further detailed in Sec. 3.3. Because all operations are executed at the \(l\)-th attention block, we can omit all superscripts \(l\) for simplicity. The original attention at each attention block in the U-Net, _i.e._, \(n^{\prime}_{t}=\mathcal{A}_{T}(n_{t},c_{T})\), can be replaced by \[\hat{n}_{t} =\mathcal{A}_{T}(n_{t},c_{T}) \tag{4}\] \[\hat{c}_{I} =\mathcal{A}_{T}(c_{I},c_{T}) \tag{5}\] \[n^{\prime}_{t} =\mathcal{A}_{I}(\hat{n}_{t},\hat{c}_{I}) \tag{6}\] where \(n^{\prime}_{t}\) is the output of the current attention block that is fed into the following layers of the U-Net. At the image cross-attention block \(\mathcal{A}_{I}\), we can capture visual semantics from the reference image and inject them into the noisy generated image. ### Emerging object masks To avoid capturing the background from the training samples and to exclusively learn the foreground object we are interested in, we propose an online, computationally efficient, and non-parametric method, naturally incorporated into our pipeline, to generate reliable object masks. Next, we will illustrate how attention maps of text and image conditions can be directly used as object masks to capture the object-exclusive patch regions. 
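Before that, a minimal sketch of the attention chain in Eqs. (4)-(6): a trainable image cross-attention \(\mathcal{A}_{I}\) is stacked on top of the frozen text cross-attention \(\mathcal{A}_{T}\). The use of `nn.MultiheadAttention`, the tensor shapes, and the optional `mask` argument (anticipating the mask \(\hat{M}\) of Eq. (8) below) are our illustrative choices, not the released ViCo code:

```python
import torch
import torch.nn as nn

class ImageCrossAttention(nn.Module):
    """Sketch of A_I in Eq. (6): the text-attended noisy tokens n_hat query
    the text-attended reference-image tokens c_hat."""
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, n_hat, c_hat, mask=None):
        # n_hat: (B, P, D) = A_T(n_t, c_T); c_hat: (B, P, D) = A_T(c_I, c_T).
        # mask (optional): boolean (P, P), True = background reference patch
        # to block, mimicking the Hadamard masking of the map in Eq. (8).
        out, _ = self.attn(n_hat, c_hat, c_hat, attn_mask=mask)
        return out  # n'_t, fed to the following U-Net layers
```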
Recall the process of computing the text cross-attention in the diffusion U-Net: \[\texttt{TextAttention}(Q,K,V)=\texttt{softmax}(\frac{QK^{T}}{\sqrt{d_{k}}})V \tag{7}\] where the query is the reference image, the key and value are the text condition, and the scaling factor \(d_{k}\) is the dimension of the query and key (all of these live in the latent space, after linear projections). Inspired by [13], which observes that diffusion models learn remarkably informative cross-attentions, we notice that the attention map \(\texttt{softmax}(\frac{QK^{T}}{\sqrt{d_{k}}})\) inherently implies a good object mask for the reference image. Specifically, the attention map for the text condition and the visual condition from the reference image (Eq. (5)) reveals the response distribution of each text token to all image patches at different resolutions. The learnable embedding \(S_{\star}\) has strong responses at the exact patch regions where the foreground object lies. After binarizing the similarity distribution of \(S_{\star}\) on the reference image, we can obtain a good-quality object mask \(\hat{M}\). In this paper, we simply apply Otsu thresholding [32] for binarization. The mask can be directly deployed in our proposed image cross-attention by simply masking the attention map between the noisy generated image and the reference image in the latent space. The masked image cross-attention is formulated as \[\texttt{ImageAttention}(Q,K,V)=\left(\hat{M}\odot\texttt{softmax}(\frac{QK^{T}}{\sqrt{d_{k}}})\right)V \tag{8}\] where the query is the noisy generated image, the key and value are the reference image, and \(\odot\) is the Hadamard product (element-wise product) with broadcasting of \(\hat{M}\). By masking the attention map in Eq. (8), distractors from the background can be drastically suppressed. We can thus condition the generation process exclusively on the foreground object that is captured in the reference image. ### Regularization on attentions Due to the small size of the training set, a common problem in personalized text-to-image generation is that the model tends to easily overfit to the training samples, _i.e._, the model will fail to follow the given text in generation. We observe that overfitting occurs when the learnable embedding \(S_{\star}\) is optimized not to be a descriptor of the unique object, but to be an uncontrolled set of parameters driven to memorize the training samples. Therefore, we present a simple regularization that forces \(S_{\star}\) to capture the true information of the object. The end-of-text token <|EOT|> is used as a global representation in transformers. We empirically find that <|EOT|> can maintain comparably good semantics on the unique object even when \(S_{\star}\) is overfitted. From this observation, we apply a regularization between the similarity maps of the reference image associated with \(S_{\star}\) and <|EOT|> in the text cross-attention. Specifically, from the cross-attentions, we have the attention map \(\mathrm{A}:=\texttt{softmax}(\frac{QK^{T}}{\sqrt{d_{k}}})\in\mathbb{R}^{B\times D_{p}\times D_{t}}\) where \(B\) is the batch size, \(D_{p}\) is the number of image patches, and \(D_{t}\) is the number of text tokens. Let \(S_{\star}\) be the \(i\)-th token and <|EOT|> be the \(j\)-th token, and let their corresponding similarity logits be \(\mathrm{A}_{\star,i}\) and \(\mathrm{A}_{\star,j}\). 
We define our regularization as \[\mathcal{L}_{reg}=\|\mathrm{A}_{\star,i}/\max(\mathrm{A}_{\star,i})-\mathrm{A}_{\star,j}/\max(\mathrm{A}_{\star,j})\|_{2}^{2} \tag{9}\] where we apply a max normalization to guarantee the same scale for the two logits. The regularization effectively mitigates overfitting and forces \(S_{\star}\) to learn meaningful object-specific semantics. **Training.** We train our model on 4-7 images with the vanilla diffusion U-Net frozen. We formulate the final training loss by integrating the standard denoising loss and the regularization term as \[\mathcal{L}=\mathbb{E}_{z\sim\mathcal{E}(x),z_{r}\sim\mathcal{E}(x_{r}),y,\epsilon\sim\mathcal{N}(0,1),t}[\|\epsilon-\epsilon_{\theta,\psi}(z_{t},t,z_{r},c_{\pi}(y))\|_{2}^{2}]+\lambda\mathcal{L}_{reg} \tag{10}\] where \(\lambda\) is the scaling weight of the regularization loss, and \(\epsilon_{\theta,\psi}\) is the new denoising network composed of the vanilla diffusion U-Net parameterized by \(\theta\) and the proposed image attention blocks parameterized by \(\psi\). During training, we freeze the pretrained diffusion model and only train the image attention blocks, while finetuning the learnable text embedding \(S_{\star}\) simultaneously. **Implementation details.** We use Stable Diffusion [39] as our backbone. The diffusion U-Net [40] contains encoder, middle, and decoder layers. We incorporate the proposed image attention module into every other attention block, exclusively in the decoder. Our image attention module follows the standard attention-feedforward fashion [47], and has the same structure as the text cross-attention used in LDMs [39], only differing in the dimension of the condition projection layer. We set \(\lambda=5\times 10^{-4}\) and the learning rates to \(10^{-4}\) for \(S_{\star}\) and \(10^{-5}\) for the image attention blocks. We train ViCo with a batch size of 4 for 400 steps. At inference, our model also requires a reference image input for the visual condition, injected into the denoising process in the same way as in training. Our method is insensitive and robust to the choice of the reference image. Therefore, either one of the training samples or a new image of the identical object is a feasible visual condition at sampling time. ## 4 Experiment ### Quantitative evaluation **Data.** Previous works (_e.g._, Textual Inversion (TI) [8], DreamBooth [41], and Custom Diffusion [23]) use different datasets for evaluation. For a fair and unbiased comparison, we collect a dataset of 16 unique concepts from these three works. The collected dataset spans a large range of object categories, covering 6 toys, 5 live animals, 2 accessories, 2 containers, and 1 building, allowing a comprehensive evaluation. Each object category contains 4-7 images of a unique object (except for one having 12 images). Based on the prompt list provided in [41], we remove one undesirable prompt, "a cube shaped \(S_{\star}\)", because we are more interested in keeping the appearance of the unique object. In addition, we add more detailed and informative prompts to test the expressiveness of richer and more complex textual knowledge (_e.g._, "a \(S_{\star}\) among the skyscrapers in New York city"). In total, we collect 31 prompts for the 11 non-live objects and 31 prompts for the 5 live animals. We generate 8 samples per object and per prompt, giving rise to 3,968 images in total, to robustly evaluate the quantitative performance. More details about the dataset can be found in the Appendix. 
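The quoted total is consistent with this setup, as a quick check confirms:

```python
# 16 unique concepts (11 non-live + 5 live), 31 prompts each,
# 8 generated samples per object-prompt pair:
assert (11 + 5) * 31 * 8 == 3968
```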
**Metric.** In our task, we are concerned with two core questions regarding personalized generative models: (1) how well do the generated images capture and preserve the input object? and (2) how well do the generated images follow the text condition? For the first question, we adopt two metrics, namely CLIP [35] image similarity \(I_{\text{CLIP}}\) and DINO [5] image similarity \(I_{\text{DINO}}\). Specifically, we compute the feature similarity between the generated image and the corresponding real image using CLIP [35] or DINO [5], respectively. DINO is trained in a self-supervised fashion without ground-truth class labels, thus not neglecting the differences among objects from the same category. Therefore, the DINO metric better reflects how well the generated object resembles the real one, as also noted in [41]. For the second question, we adopt one metric, namely CLIP text similarity \(T_{\text{CLIP}}\). Specifically, we compute the feature similarity between the CLIP visual feature of the generated image and the CLIP textual feature of the corresponding prompt text that omits the placeholder. The three metrics are derived from the average similarities of all compared pairs. In our experiments, we deploy ViT-B/32 for the CLIP vision model and ViT-S/16 for the DINO model to extract visual and textual features. Figure 3: **Mask mechanism.** We can obtain a similarity distribution from the cross-attention map of the reference image associated with the learnable object token \(S_{\star}\). The distribution can be unflattened into a similarity map. After binarization with Otsu thresholding [32], the derived binary mask can be applied to the image cross-attention map to discard the non-object patches. **Comparison.** We compare our method ViCo with three state-of-the-art models, namely Textual Inversion [8], DreamBooth [41], and Custom Diffusion [23]. Note that Textual Inversion originally uses LDM [39] as the backbone, and we use Stable Diffusion for all compared methods for a fair comparison. The results of the three quantitative metrics are shown in Tab. 2. Our model achieves the highest image similarity on both the DINO and CLIP metrics, indicating that our method best preserves the object-specific semantics of the image. DreamBooth and Custom Diffusion perform better on the text similarity metric because they use the "[V] class" formulation to represent the visual object in the text space. The class category word provides rich prior knowledge, while the learnable identifier "[V]" primarily serves as auxiliary guidance, such as controlling texture or facial appearance in the generation process. In contrast, Textual Inversion and our method employ a single token in the text embedding space, which, once learned, may dominate the text space and slightly weaken the influence of text-related information in the generated results. We deliberately choose the single-token design in our work because we believe that representing a visual concept with a single word token is crucial for achieving effective text-image alignment. This minimalist approach allows us to capture the essence of the concept in a concise and precise manner, focusing on the core problem of aligning textual and visual information. Besides, DreamBooth and Custom Diffusion require finetuning either the entire SD network [41] or a portion of it [23], while our model and Textual Inversion do not. With the same foundation, our method outperforms Textual Inversion by a significant margin on all metrics. \begin{table} \begin{tabular}{l c c c} \hline \hline & \(I_{\text{DINO}}\uparrow\) & \(I_{\text{CLIP}}\uparrow\) & \(T_{\text{CLIP}}\uparrow\) \\ \hline DreamBooth [41] & 0.638 & 0.811 & 0.234 \\ Custom Diffusion [23] & 0.557 & 0.766 & **0.251** \\ Textual Inversion [8] & 0.518 & 0.771 & 0.220 \\ ViCo & **0.643** & **0.816** & 0.228 \\ \hline \hline \end{tabular} \end{table} Table 2: **Quantitative comparison.** 
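All three metrics reduce to average cosine similarities between features from frozen encoders (CLIP ViT-B/32 or DINO ViT-S/16, as stated above); a minimal sketch, with the feature extraction assumed done elsewhere and the function name ours:

```python
import torch
import torch.nn.functional as F

def avg_cosine_similarity(feats_a: torch.Tensor, feats_b: torch.Tensor) -> float:
    """Mean cosine similarity over paired rows. Used for I_CLIP / I_DINO
    (generated vs. real image features) and for T_CLIP (generated-image
    features vs. prompt features, with the placeholder S* omitted)."""
    a = F.normalize(feats_a, dim=-1)
    b = F.normalize(feats_b, dim=-1)
    return float((a * b).sum(dim=-1).mean())
```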
### Qualitative evaluation In our extensive qualitative experiments, depicted in Fig. 4, we observe that ViCo produces text-guided images of high quality. We assess the qualitative results based on several aspects. **Image fidelity.** Our model preserves fine details of the object in the training samples. As a comparison, Textual Inversion fails to preserve sufficient details in many cases (the 3rd and 6th rows) due to its limited expressiveness. The use of "[V] class" in DreamBooth and Custom Diffusion, while providing strong class-related information, may result in the loss of object-specific details. For instance, in the second row, both DreamBooth and Custom Diffusion alter the appearance of the cat. Similarly, DreamBooth fails to preserve the holes in the generated elephant in the fifth row. **Text fidelity.** Our model can faithfully follow the text prompt guidance to generate reasonable results. For example, in the first row of the "teddy bear", our model successfully incorporates elements such as "a tree" and "autumn leaves" as indicated by the text, while other models may occasionally struggle to achieve this level of fidelity. In more complex cases, like the third and fifth rows, Textual Inversion fails to express any information from the text prompts. **Text-image equilibrium.** Our model excels at balancing the effects of both text conditions and visual conditions, resulting in a harmonious equilibrium between the text and the image. We notice that there are instances where the text prompts and the image samples may have varying degrees of influence on the generation results. For example, in the last row, Custom Diffusion successfully generates a visually appealing "lion face" guided by the text, but the generated image is almost no longer a "pot". Similarly, DreamBooth maintains the overall appearance of a pot but loses significant details of the original "wooden pot". In contrast, our method excels at preserving the original "pot" details while synthesizing a high-quality "lion face" on it. **Authenticity.** Our generation results are authentic and photorealistic, devoid of noticeable traces of artificial synthesis. For example, in the fifth row, although Custom Diffusion generates visually appealing images, they may appear noticeably synthetic. In comparison, our results are more photorealistic, authentically depicting a golden elephant statue positioned at Times Square. **Diversity.** All methods can produce diverse generation results, except for DreamBooth in the fourth row. Although DreamBooth generates very high-quality images, the layout and appearance of multiple runs are quite similar, which is not desirable. ### Ablation study We study the effect of the visual condition, the automatic object mask, and the regularization on generation. Representative comparison and visualization results are presented in Fig. 5. 
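Two of the ablated components can be made concrete in code. The automatic mask (Sec. 3.3, Fig. 3) binarizes the attention of \(S_{\star}\) with Otsu's threshold, and the regularization is Eq. (9); the following is a self-contained sketch in which all function names and the histogram resolution are our choices, not the released implementation:

```python
import torch

def otsu_threshold(x: torch.Tensor, bins: int = 256) -> float:
    """Otsu's method [32]: choose the threshold maximizing the between-class
    variance of the value histogram."""
    lo, hi = float(x.min()), float(x.max())
    hist = torch.histc(x, bins=bins, min=lo, max=hi)
    centers = torch.linspace(lo, hi, bins)
    w0 = torch.cumsum(hist, 0)                  # mass of the low class
    w1 = hist.sum() - w0                        # mass of the high class
    s0 = torch.cumsum(hist * centers, 0)        # cumulative first moment
    m0 = s0 / w0.clamp(min=1e-8)                # mean of the low class
    m1 = (s0[-1] - s0) / w1.clamp(min=1e-8)     # mean of the high class
    return float(centers[(w0 * w1 * (m0 - m1) ** 2).argmax()])

def object_mask(attn_star: torch.Tensor) -> torch.Tensor:
    """Binary mask M-hat over P patches from the attention of S*."""
    return attn_star > otsu_threshold(attn_star)

def attention_regularization(attn: torch.Tensor, i: int, j: int) -> torch.Tensor:
    """Eq. (9): match the max-normalized attention logits of S* (token i)
    and <|EOT|> (token j); attn has shape (B, D_p, D_t)."""
    a_i = attn[..., i] / attn[..., i].amax(dim=-1, keepdim=True)
    a_j = attn[..., j] / attn[..., j].amax(dim=-1, keepdim=True)
    return ((a_i - a_j) ** 2).sum(dim=-1).mean()
```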
**Visual condition.** The proposed visual condition module can significantly improve the visual expressiveness of the single learnable embedding used by Textual Inversion, yielding higher image fidelity. We compare the performance of Textual Inversion before and after adding the visual condition module in Fig. 5(a). We can observe that the degree of object detail preservation is considerably enhanced, without losing text information, after adding our visual condition module. We also show the quantitative evaluation in Tab. 3, where the image similarity between the training samples and the generated images is greatly boosted by adding visual conditions into the diffusion process, with a slight text similarity enhancement.

\begin{table} \begin{tabular}{l l l l} \hline \hline & \(I_{\text{DINO}}\uparrow\) & \(I_{\text{CLIP}}\uparrow\) & \(T_{\text{CLIP}}\uparrow\) \\ \hline Baseline [8] & 0.518 & 0.771 & 0.220 \\ + Visual condition & **0.646** (+24.7\%) & **0.814** (+5.6\%) & **0.228** (+3.6\%) \\ \hline \hline \end{tabular} \end{table} Table 3: Improvements on quantitative metrics (relative gains over the baseline in parentheses).

**Automatic mask.** Our automatic mask mechanism enables isolating the object from the distracting background, which further improves the object fidelity. As shown in Fig. 5(b), the generation results may occasionally be distorted without the mask. After adding the mask, the object can be well captured and reconstructed. We show some samples of attention masks along with the ascending sampling steps in Fig. 5(d), which exhibit the notable effect of image matting.

**Attention regularization.** The proposed regularization term can efficiently alleviate the dominance of the learnable \(S_{\star}\) in the embedding space. In terms of generation results, the dominance of \(S_{\star}\) occasionally impairs text guidance, which is addressed by the regularization as shown in Fig. 5(c). In terms of attention, the dominance makes \(S_{\star}\) attend to non-object patches. The regularization directs the high-response patches in the attention of \(S_{\star}\) towards the object region as shown in Fig. 5(e), ensuring that important visual information is focused on the desired object. We observe negligible effects on the quantitative metrics when adding the mask and the regularization on top of the model that already includes our visual condition. We hypothesize this is because the quantitative metrics evaluate the entire image and are not sensitive to the perceptual details in the generation.

Figure 4: **Qualitative comparison.** Given input images (first column), we generate three samples using ViCo (ours), Textual Inversion [8], Custom Diffusion (CD) [23], and DreamBooth (DB) [41]. The text prompt is under the generation samples, in which \(S_{\star}\) for CD and DB is "[V] class".

Figure 5: **Comparison and visualization in ablation study.** In the first row, we generate images to evaluate the effectiveness of each proposed component in our method. In the second row, we visualize attention maps and corresponding binarized masks along with sampling steps in (d) and compare the average attention map of the generated image over 50 steps, associated with \(S_{\star}\), with or without the use of the proposed regularization in (e).

### Applications

We show three types of applications of ViCo in Fig. 6. The first application is _recontextualization_. We generate images for a novel object in different contexts. The generated results present natural-looking and unobtrusive integration of the object and the contexts, with diverse poses (_e.g._, sitting, standing, and floating). We also generate _art renditions_ of novel objects in different painting styles. We use a text prompt "a painting of a \(S_{\star}\) in the style of [painter]". Our results have novel poses that are unseen in the training samples, _e.g._, the painting in the style of "Vermeer". In addition, we change the _costume_ for the novel object using a text prompt "a \(S_{\star}\) in a [figment] outfit", producing novel image variations while preserving the appearance of novel objects.

## 5 Conclusion

In conclusion, our paper presents a simple and effective method, ViCo, for personalized text-to-image generation that preserves fine object-specific details. Our approach introduces visual conditions that are incorporated into the diffusion process via a plug-in image cross-attention module. We also derive an accurate object mask using the inherent information contained within the cross-attention maps, which isolates the object of interest from distracting background elements in the latent space. Moreover, we introduce a simple regularization technique for attention that mitigates the dominating effect of the learnable embedding in the latent text space. Our visual condition module is seamlessly compatible with the pretrained diffusion model and requires no fine-tuning, enabling flexible, transferable, and scalable deployment. Additionally, our model is easy to use, as it does not rely on prior object masks or extensive preprocessing to obtain a regularization set. Overall, our model is fast (\(\sim\) 5 minutes) and lightweight (6% of the diffusion U-Net). Through quantitative and qualitative evaluations, we have demonstrated the superior performance of our model.

**Acknowledgements.** This work is partially supported by Hong Kong Research Grant Council - Early Career Scheme (Grant No. 27208022) and HKU Seed Fund for Basic Research.
2303.14102
Distributed Silhouette Algorithm: Evaluating Clustering on Big Data
In the big data era, the key feature that each algorithm needs to have is the possibility of efficiently running in parallel in a distributed environment. The popular Silhouette metric to evaluate the quality of a clustering, unfortunately, does not have this property and has a quadratic computational complexity with respect to the size of the input dataset. For this reason, its execution has been hindered in big data scenarios, where clustering had to be evaluated otherwise. To fill this gap, in this paper we introduce the first algorithm that computes the Silhouette metric with linear complexity and can easily execute in parallel in a distributed environment. Its implementation is freely available in the Apache Spark ML library.
Marco Gaido
2023-03-24T16:10:43Z
http://arxiv.org/abs/2303.14102v1
# Distributed Silhouette Algorithm: Evaluating Clustering on Big Data

###### Abstract

In the big data era, the key feature that each algorithm needs to have is the possibility of efficiently running in parallel in a distributed environment. The popular Silhouette metric to evaluate the quality of a clustering, unfortunately, does not have this property and has a quadratic computational complexity with respect to the size of the input dataset. For this reason, its execution has been hindered in big data scenarios, where clustering had to be evaluated otherwise. To fill this gap, in this paper we introduce the first algorithm that computes the Silhouette metric with linear complexity and can easily execute in parallel in a distributed environment. Its implementation is freely available in the Apache Spark ML library.

Silhouette, clustering, Apache Spark

## I Introduction

As the amount of data produced every day is huge and keeps increasing, the need for efficient solutions to process huge volumes of data has risen [1]. These solutions address the problem by parallelizing the work over different machines that belong to a cluster. In this way, each machine processes a portion of the data and the overall time required to perform the operations scales down. Among the systems for distributed and parallel data processing, Apache Spark1 is nowadays the most widespread solution, as it allows for easy and efficient execution of the required transformations and supports a wide range of operations.

Footnote 1: [https://spark.apache.org/](https://spark.apache.org/).

In particular, Apache Spark also features the distributed execution of many supervised and unsupervised machine-learning algorithms, which include clustering methods [2], such as K-Means and Gaussian Mixture models. The efficient implementation of these methods in a distributed environment is not complicated and their computational complexity is linear with the size of the input dataset [3]. However, the output of these clustering methods should also be evaluated with equal efficiency, and this is non-trivial, as one of the most widespread clustering evaluation metrics, the Silhouette [4], has a quadratic computational complexity with respect to the input size. This unfortunately holds true also for the efficient implementation by [5], which pre-computes (and caches) part of the operations; the quadratic cost prevents its adoption in big data settings due to the excessive computational time required on huge datasets. To overcome this limitation, and inspired by the idea of pre-computing part of the operations of [5], in this work we describe the first algorithm that computes the Silhouette scores with linear complexity with respect to the size of the dataset. Our method requires a dedicated implementation for each distance measure and currently it is defined (and described in this paper) for two distance measures: _i)_ the squared Euclidean distance, and _ii)_ the cosine distance. The algorithm is easy to distribute on different machines, being particularly suitable for parallel computation. In light of these appealing characteristics, it has been implemented and contributed to the Apache Spark ML library under the Apache 2.0 Licence and constitutes its current implementation.

## II Background: The Silhouette Metric

The Silhouette is a widespread metric used to evaluate the quality of a clustering operation.
Specifically, it is an unsupervised metric that measures how close the data in the same cluster are, in opposition to how separated they are from other clusters, in particular from the closest cluster to each given datum (named the "neighbouring cluster").

_Definition._ Formally, the Silhouette score \(s_{i}\) for each datum \(i\) is computed as the difference between its average distance to the other data in the same cluster, \(a_{i}\), and its average distance to the data in the neighbouring cluster, \(b_{i}\), rescaled by the maximum of the two so that the value lies in the interval \([-1,1]\): \[s_{i}=\frac{b_{i}-a_{i}}{\max\{a_{i},b_{i}\}} \tag{1}\] which can be rewritten as \[s_{i}=\left\{\begin{array}{ll}1-\frac{a_{i}}{b_{i}}&\mbox{ if }a_{i}\leq b_{i}\\ \frac{b_{i}}{a_{i}}-1&\mbox{ if }a_{i}>b_{i}\end{array}\right. \tag{2}\] The overall Silhouette score is then \(S=\sum_{i=1}^{N}s_{i}/N\), i.e. the average of all \(s_{i},\forall i\in[1,...,N]\), where \(N\) is the dataset size. As such, the metric ranges from -1 to +1 and the higher it is, the better the clustering is.

_Computational Complexity._ As we do not know in advance which is the closest cluster to each datum, the implementation of the Silhouette requires that, for each datum, we compute its distance to all the points in the dataset and average them by cluster. Once we have the average distance between one point and all the clusters, \(s_{i}\) can be easily computed with the equations above. However, computing the distance between each datum and all the others in the dataset has a computational complexity of \(O(N^{2}*D)\), where \(D\) is the number of dimensions in the given dataset, i.e. \(X_{i}\in\mathbb{R}^{D}\). Indeed, the distance metric computation, although depending on the actual distance considered, generally requires \(O(D)\) operations, and we need to compute \(O(N^{2})\) distances. As already discussed, this computational complexity leads to excessive computational costs and time in a big data environment, where \(N\) is a large number. In addition, in a distributed environment, this computation requires either that each machine hosts the whole dataset to compute the distances - which would cause an \(O(N)\) memory footprint per machine, causing OOM issues for large \(N\) - or that the whole dataset is exchanged over the network between the machines (usually named _workers_) at least \(W\) times, where \(W\) is the number of workers used. From this discussion, it is clear that the algorithm does not efficiently scale with the size of the input dataset and with the number of workers, as required in big data clusters.

## III Distributed Silhouette

To avoid the above-mentioned limitations of the Silhouette implementation, we designed methods to compute it with a linear complexity with respect to the input size and that allow for an efficient distribution of the workload across different workers. Namely, the critical operation is the computation of the average distance between a datum \(X_{i}\) and the points \(C_{j}\) belonging to each cluster \(\Gamma_{k}\) with \(k\in[1,...,K]\), where \(K\) is the number of clusters obtained from the execution of a clustering method. As such, the focus of this work is the computation with linear complexity of such distances: \[d(X_{i},\Gamma_{k})=\sum\limits_{j=1}^{N_{\Gamma_{k}}}d(X_{i},C_{j})/N_{\Gamma_{k}} \tag{3}\] for each \(k\). To do so, we designed specialized algorithms that strictly depend on the distance measure used.
In particular, in this work we describe the algorithms for two widespread distance measures: the _squared Euclidean distance_ (§III-A), and the _cosine distance_ (§III-B). However, a similar approach may be used for other distance measures as well.

### _Squared Euclidean Distance_

When using the squared Euclidean distance as the distance measure, the distance between one datum \(X_{i}\) and a cluster \(\Gamma_{k}\) can be rewritten as: \[\sum\limits_{j=1}^{N_{\Gamma_{k}}}d(X_{i},C_{j})^{2}=\sum\limits_{j=1}^{N_{\Gamma_{k}}}\Big{(}\sum\limits_{l=1}^{D}(x_{il}-c_{jl})^{2}\Big{)}\] \[=\sum\limits_{j=1}^{N_{\Gamma_{k}}}\Big{(}\sum\limits_{l=1}^{D}x_{il}^{2}+\sum\limits_{l=1}^{D}c_{jl}^{2}-2\sum\limits_{l=1}^{D}x_{il}c_{jl}\Big{)} \tag{4}\] \[=\sum\limits_{j=1}^{N_{\Gamma_{k}}}\sum\limits_{l=1}^{D}x_{il}^{2}+\sum\limits_{j=1}^{N_{\Gamma_{k}}}\sum\limits_{l=1}^{D}c_{jl}^{2}-2\sum\limits_{j=1}^{N_{\Gamma_{k}}}\sum\limits_{l=1}^{D}x_{il}c_{jl}\] where \(D\) is the number of dimensions of the data in the dataset, \(x_{il}\) is the \(l\)-th dimension of the \(X_{i}\) vector, and \(c_{jl}\) is the \(l\)-th dimension of the \(C_{j}\) vector belonging to the \(\Gamma_{k}\) cluster, which contains \(N_{\Gamma_{k}}\) elements. Then, the first term of Eq. 4 can be rewritten as: \[\sum\limits_{j=1}^{N_{\Gamma_{k}}}\sum\limits_{l=1}^{D}x_{il}^{2}=N_{\Gamma_{k}}\xi_{X_{i}}\text{, where }\xi_{X_{i}}=\sum\limits_{l=1}^{D}x_{il}^{2} \tag{5}\] where \(\xi_{X_{i}}\) can be pre-computed independently and in parallel for each point \(X_{i}\). In addition, keeping in mind the definition of \(\xi_{X_{i}}\), the second term of Eq. 4 can be rewritten as: \[\sum\limits_{j=1}^{N_{\Gamma_{k}}}\sum\limits_{l=1}^{D}c_{jl}^{2}=\sum\limits_{j=1}^{N_{\Gamma_{k}}}\xi_{C_{j}}=\Psi_{\Gamma_{k}} \tag{6}\] which can be pre-computed for each cluster with a single pass over the whole dataset, i.e. with linear complexity with respect to the size of the dataset.2

Footnote 2: In the implementation, each worker can independently compute the cluster-wise sums of its data, which are then collected on a node that aggregates them into the overall sums.

We can notice that the efficiency of this algorithm, hence, depends on the number of clusters, which in practice is a fairly small number and much lower than the dataset size. Finally, the last term of Eq. 4 can be rewritten as: \[\sum\limits_{j=1}^{N_{\Gamma_{k}}}\sum\limits_{l=1}^{D}x_{il}c_{jl}=\sum\limits_{l=1}^{D}\Big{(}\sum\limits_{j=1}^{N_{\Gamma_{k}}}c_{jl}\Big{)}x_{il} \tag{7}\] and, by defining a vector \(Y_{\Gamma_{k}}\) so that: \[Y_{\Gamma_{k}l}=\sum\limits_{j=1}^{N_{\Gamma_{k}}}c_{jl}\quad\forall l\in[1,...,D] \tag{8}\] we obtain: \[\sum\limits_{l=1}^{D}\Big{(}\sum\limits_{j=1}^{N_{\Gamma_{k}}}c_{jl}\Big{)}x_{il}=\sum\limits_{l=1}^{D}Y_{\Gamma_{k}l}x_{il} \tag{9}\] where the vectors \(Y_{\Gamma_{k}}\) can be pre-computed for each cluster with a single pass over the whole dataset, similarly to \(\Psi_{\Gamma_{k}}\).3

Footnote 3: In the implementation, \(Y\) and \(\Psi\) are jointly computed with a single pass over the whole dataset.

As such, by integrating Eq. 5, Eq. 6 and Eq. 9 into Eq. 4, we can rewrite Eq. 4 as: \[N_{\Gamma_{k}}\xi_{X_{i}}+\Psi_{\Gamma_{k}}-2\sum\limits_{l=1}^{D}Y_{\Gamma_{k}l}x_{il}. \tag{10}\] With this formula, the average distance of the datum \(X_{i}\) from cluster \(\Gamma_{k}\)
(Eq. 3) becomes: \[\frac{\sum\limits_{j=1}^{N_{\Gamma_{k}}}d(X_{i},C_{j})^{2}}{N_{\Gamma_{k}}}= \tag{11}\] \[=\frac{N_{\Gamma_{k}}\xi_{X_{i}}+\Psi_{\Gamma_{k}}-2\sum\limits_{l=1}^{D}Y_{\Gamma_{k}l}x_{il}}{N_{\Gamma_{k}}}\] \[=\xi_{X_{i}}+\frac{\Psi_{\Gamma_{k}}}{N_{\Gamma_{k}}}-2\frac{\sum\limits_{l=1}^{D}Y_{\Gamma_{k}l}x_{il}}{N_{\Gamma_{k}}}\] In this way, the distance between each element \(X_{i}\) and each cluster \(\Gamma_{k}\) does not require the computation of the distance between \(X_{i}\) and all the other data in the dataset. Indeed, each \(X_{i}\) can be processed independently, as it is enough to pre-compute the constant \(\xi_{X_{i}}\) for each point \(X_{i}\), and the constants \(\Psi_{\Gamma_{k}}\) and \(N_{\Gamma_{k}}\) and the vector \(Y_{\Gamma_{k}}\) for each cluster \(\Gamma_{k}\). In the Apache Spark implementation, the pre-computed values for the clusters are distributed among the worker nodes via broadcast variables, since the clusters can be assumed to be limited in number and, in any case, much fewer than the points. The main strengths of this algorithm are the low computational complexity and the intrinsic parallelism. As we have seen, \(\Psi_{\Gamma_{k}}\), \(N_{\Gamma_{k}}\) and the vector \(Y_{\Gamma_{k}}\) can be pre-computed with a computational complexity that is \(O(D*N/W)\). After that, every point can be analyzed independently of the others. Specifically, for every point we need to compute the average distance to all the clusters. Since Eq. 11 requires \(O(D)\) operations, this phase has a computational complexity of \(O(C*D*N/W)\), where \(C\) is the number of clusters (which we assume quite low). Lastly, each score \(s_{i}\) can be computed with Eq. 2, and the \(s_{i}\) scores are averaged. This average has a computational complexity of \(O(N/W)\). All in all, we can conclude that the computational complexity of the algorithm is \(O(C*D*N/W)\). As in big data settings it is reasonable to assume that \(N\gg C\) and \(N\gg D\), this is \(O(N/W)\), which means that the algorithm scales linearly with the size of the input dataset and that the time required to compute the metric reduces linearly with the number of workers used. This is an ideal condition in big data clusters, as it ensures that the size of a dataset can grow indefinitely without increasing the computation time, provided that the number of worker nodes is scaled by the same growth factor.

### _Cosine Distance_

To define the metric with the cosine distance, we use a similar approach. The cosine distance is defined as \(1-cs\), where \(cs\) is the cosine similarity: \[cs(X,Y)=\frac{\sum\limits_{l=1}^{D}x_{l}y_{l}}{\|X\|\|Y\|}. \tag{12}\] Hence, the average distance between a datum \(X_{i}\) and the data \(C_{j}\) of a cluster \(\Gamma_{k}\) is: \[\frac{\sum\limits_{j=1}^{N_{\Gamma_{k}}}d(X_{i},C_{j})}{N_{\Gamma_{k}}}=\frac{\sum\limits_{j=1}^{N_{\Gamma_{k}}}\left(1-\frac{\sum\limits_{l=1}^{D}x_{il}c_{jl}}{\|X_{i}\|\|C_{j}\|}\right)}{N_{\Gamma_{k}}}. \tag{13}\]
The numerator can be rewritten as: \[\begin{split}\sum\limits_{j=1}^{N_{\Gamma_{k}}}1-\sum\limits_{j=1}^{N_{\Gamma_{k}}}\sum\limits_{l=1}^{D}\frac{x_{il}}{\|X_{i}\|}\frac{c_{jl}}{\|C_{j}\|}\\ =N_{\Gamma_{k}}-\sum\limits_{l=1}^{D}\frac{x_{il}}{\|X_{i}\|}\left(\sum\limits_{j=1}^{N_{\Gamma_{k}}}\frac{c_{jl}}{\|C_{j}\|}\right).\end{split} \tag{14}\] Now, analogously to the squared Euclidean case, we can define the vectors \[\xi_{X_{i}}:\xi_{X_{i}l}=\frac{x_{il}}{\|X_{i}\|}\quad\forall l\in[1,...,D] \tag{15}\] which can be pre-computed for each datum, and \[\Omega_{\Gamma_{k}}:\Omega_{\Gamma_{k}l}=\sum\limits_{j=1}^{N_{\Gamma_{k}}}\xi_{C_{j}l}\quad\forall l\in[1,...,D] \tag{16}\] which can be pre-computed for each cluster. Eq. 14 hence becomes: \[N_{\Gamma_{k}}-\sum\limits_{l=1}^{D}\xi_{X_{i}l}\Omega_{\Gamma_{k}l}. \tag{17}\] Therefore, Eq. 13 can be computed as \[1-\frac{\sum\limits_{l=1}^{D}\xi_{X_{i}l}\Omega_{\Gamma_{k}l}}{N_{\Gamma_{k}}} \tag{18}\] which can be computed for each \(X_{i}\) without comparing it to all the data in the other clusters, but using only the above-defined pre-computed vectors. Once the average distance of each \(X_{i}\) from all the clusters is obtained, its Silhouette score \(s_{i}\) is computed with Eq. 2 and the Silhouette scores are averaged over all the elements in the dataset. As can be inferred from its definition, all the considerations regarding the computational costs made in the previous subsection for the squared Euclidean distance apply to this case as well.

## IV Experiments

To showcase the benefits of the proposed algorithm, we compare the runtime required by the standard Silhouette implementation and by the method proposed in this paper, with the squared Euclidean distance as the dissimilarity measure. All the experiments have been executed with a single thread on a MacBook Pro with a 2.8 GHz Intel Core i7 and 8 GB of 1600 MHz DDR3 RAM. In this condition, we do not exploit the ability of our method to parallelize over multiple workers, so in a big data scenario the difference would be even larger, or the standard implementation would not be an option at all in the case of very large datasets. We used a proprietary dataset with 129 features and observed the computational cost while increasing the dataset size. The results are reported in Fig. 1. First, we can notice that the quadratic complexity of the standard implementation emerges clearly in Fig. 1(a). In addition, when the dataset size reaches 100,000, its runtime explodes in comparison with our proposed method, with a difference of 3 orders of magnitude, as we can see from Fig. 1(b). With the proposed method, the runtime reaches at most 23 seconds, and this maximum is not reached at the largest dataset size: this happens because with different sizes we also have a different number of clusters in this experiment and, as previously seen, the number of clusters plays an important role in the computational cost of our method.

## V Conclusions

With the goal of enabling the execution and proper evaluation of clustering algorithms in a big data environment, in this work we described the first method to compute the Silhouette score with a linear computational complexity with respect to the input dataset size and with the possibility of being executed in parallel over different machines. Our scaling experiment, although performed on relatively small datasets (\(\sim\)150,000 data points), showed the great benefits of the proposed algorithm, which are even larger in a real distributed environment.
The implementation of our method has been contributed to the Apache Spark ML library under the Apache 2.0 Licence and is also available at [https://gitlab.com/mark91/SparkClusteringEvaluationMetrics/-/tree/master/src/main/scala/org/apache/spark/ml/evaluation](https://gitlab.com/mark91/SparkClusteringEvaluationMetrics/-/tree/master/src/main/scala/org/apache/spark/ml/evaluation) under the same license.

## VI Limitations

As discussed in §III, the proposed approach does not provide a generic algorithm that generalizes over any distance measure. On the contrary, it requires a dedicated implementation for each distance measure. Currently, the algorithms and implementations have been defined only for the squared Euclidean distance and the cosine distance, but similar definitions may be possible also for other distance measures. Unfortunately, it is hard to apply the same approach to the (plain) Euclidean distance, as the square root operator hinders the possibility of aggregating cluster-level statistics to be used in the final formula. While approximations are possible, e.g. by taking the square root of the element-cluster average squared distances, the exact computation of the Silhouette with the Euclidean distance is not possible with this method.
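To make the algorithm of §III-A concrete, the following is a minimal single-machine NumPy sketch of the linear-time Silhouette with the squared Euclidean distance. The function and variable names are ours, not those of the Spark ML implementation, which is written in Scala and distributes the pre-computation across workers.

```python
# Single-machine NumPy sketch of the linear-time Silhouette with the squared
# Euclidean distance; names are ours, not Spark ML's.
import numpy as np

def silhouette_sq_euclidean(X: np.ndarray, labels: np.ndarray) -> float:
    clusters = np.unique(labels)
    # One pass over the data: per-cluster N_k, Y_k (Eq. 8) and Psi_k (Eq. 6).
    N = {k: int(np.sum(labels == k)) for k in clusters}
    Y = {k: X[labels == k].sum(axis=0) for k in clusters}
    Psi = {k: float((X[labels == k] ** 2).sum()) for k in clusters}
    xi = (X ** 2).sum(axis=1)  # xi_i of Eq. 5, one value per point

    scores = np.empty(len(X))
    for i, (x, ki) in enumerate(zip(X, labels)):
        # Eq. 11: average squared distance of x to each cluster in O(C*D),
        # with no pairwise loop over the dataset.
        d = {k: xi[i] + Psi[k] / N[k] - 2.0 * Y[k].dot(x) / N[k] for k in clusters}
        if N[ki] > 1:
            a = d[ki] * N[ki] / (N[ki] - 1)  # exclude the point itself
            b = min(d[k] for k in clusters if k != ki)
            scores[i] = (b - a) / max(a, b) if max(a, b) > 0 else 0.0
        else:
            scores[i] = 0.0  # common convention for singleton clusters
    return float(scores.mean())
```

On small inputs, this sketch should agree with a naive \(O(N^{2})\) computation of the same metric up to floating-point error, which provides a simple sanity check of the pre-computation identities.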
2309.01232
Chirped Pulse Control of Raman Coherence in Atoms and Molecules
A novel chirped pulse control scheme is presented based on Coherent Anti-Stokes Raman Spectroscopy (C-CARS) aiming at maximizing the vibrational coherence in atoms and molecules. The scheme utilizes chirping of the three incoming pulses, the pump, the Stokes and the probe, in the four-wave mixing process of C-CARS to fulfill the adiabatic passage conditions. The derivation of the scheme is based on simplifying the four-level system into a 'super-effective' two level system via rotating wave approximation and adiabatic elimination of the excited state manifold. The robustness, spectral selectivity and adiabatic nature of C-CARS method may prove useful for sensing, imaging, and detection. It is demonstrated that the selectivity in excitation of vibrational degrees of freedom can be controlled by carefully choosing the spectral chirp rate of the pulses. The C-CARS control scheme is applied to a surrogate methanol molecule to generate an optimal anti-Stokes signal backscattered from a cloud of molecules a kilometer away. The theory is based on the solution of the coupled Maxwell-Liouville von Neumann equations and focuses on the quantum effects induced in the target molecules by the control pulse trains. The propagation effects of pulses through the medium are evaluated and the buildup of the molecular-specific anti-Stokes signal is demonstrated numerically. A deep learning technique, using Convolutional Neural Networks (CNN), is implemented to characterize the control pulses and evaluate time-dependent phase characteristics from them. The effects of decoherence induced by spontaneous decay and collisional dephasing are also examined. Additionally, we present the technique of Fractional Stimulated Raman Adiabatic Passage (F-STIRAP) and demonstrate that it can be utilized for remote detection in a multi-level system by creation of a maximally coherent superposition state.
Jabir Chathanathil, Svetlana A. Malinovskaya
2023-09-03T17:40:20Z
http://arxiv.org/abs/2309.01232v1
# Chirped Pulse Control of Raman Coherence in Atoms and Molecules

###### Abstract

A novel chirped pulse control scheme is presented based on Coherent Anti-Stokes Raman Spectroscopy (C-CARS) aiming at maximizing the vibrational coherence in atoms and molecules. The scheme utilizes chirping of the three incoming pulses, the pump, the Stokes and the probe, in the four-wave mixing process of C-CARS to fulfill the adiabatic passage conditions. The derivation of the scheme is based on simplifying the four-level system into a 'super-effective' two-level system via the rotating wave approximation and adiabatic elimination of the excited state manifold. The robustness, spectral selectivity and adiabatic nature of the C-CARS method may prove useful for sensing, imaging, and detection. It is demonstrated that the selectivity in excitation of vibrational degrees of freedom can be controlled by carefully choosing the spectral chirp rate of the pulses. The C-CARS control scheme is applied to a surrogate methanol molecule to generate an optimal anti-Stokes signal backscattered from a cloud of molecules a kilometer away. The theory is based on the solution of the coupled Maxwell-Liouville von Neumann equations and focuses on the quantum effects induced in the target molecules by the control pulse trains. The propagation effects of pulses through the medium are evaluated and the buildup of the molecular-specific anti-Stokes signal is demonstrated numerically. A deep learning technique, using Convolutional Neural Networks (CNN), is implemented to characterize the control pulses and evaluate time-dependent phase characteristics from them. The effects of decoherence induced by spontaneous decay and collisional dephasing are also examined. Additionally, we present the technique of Fractional Stimulated Raman Adiabatic Passage (F-STIRAP) and demonstrate that it can be utilized for remote detection in a multi-level system by creation of a maximally coherent superposition state.
###### Contents

* 1 Introduction
* 2 Quantum Control in CARS Using Chirped Pulses
 * 2.1 Coherent Anti-Stokes Raman Spectroscopy
 * 2.2 Adiabatic Elimination of excited states
 * 2.3 The C-CARS chirping scheme
  * 2.3.1 Wigner-Ville distributions
  * 2.3.2 Analysis of populations and coherence
  * 2.3.3 Comparison with the exact four-level system
 * 2.4 Analysis of Dressed States and Adiabatic Passage
 * 2.5 Section Summary
* 3 Application of Ultrafast C-CARS for Remote Detection
 * 3.1 Theoretical framework
  * 3.1.1 Maxwell - Liouville von Neumann formalism
  * 3.1.2 The target molecules distribution
  * 3.1.3 Propagation through the atmosphere
 * 3.2 Numerical Results
  * 3.2.1 Analysis of the state populations and coherence
  * 3.2.2 Analysis of the system dynamics subject to the interaction with the control pulse trains in the presence of decoherence
  * 3.2.3 Impact of Beer's law on the average intensity
  * 3.2.4 Analysis of the Maxwell - Liouville von Neumann equations and demonstration of the anti-Stokes signal generation
 * 3.3 Section Summary
* 4 Deep Neural Networks Applications in Quantum Control
 * 4.1 Deep learning and applications
 * 4.2 Classification and regression of chirped pulses using Convolutional Neural Networks (CNN)
 * 4.3 The structure of the CNN used
 * 4.4 Results
* 5 Creation of Maximally Coherent States Using Fractional Stimulated Raman Adiabatic Passage
 * 5.1 The Stimulated Raman Adiabatic Passage (STIRAP)
 * 5.2 Chirped-STIRAP: selective population of two nearly-degenerate states
 * 5.3 The Fractional-STIRAP
 * 5.4 Application of F-STIRAP for Remote Detection
* 6 Summary

## 1 Introduction

The quest for understanding and mastering the world has been an inherent part of human life ever since it existed. As a student of physics, whenever I explained my research to my friends and family members, it always fascinated them to know that we can predict the results of a microscopic process using a mathematical derivation. The fact that a pen and a paper are all that theoretical physicists require in order to describe and predict the profound realities of the universe still inspires me every day. The perfect sync between the results of mathematical calculations and physical processes is the manifestation of an elegant reality we live in. The development of quantum theory in the early twentieth century revolutionized the way we understood the physical realities of the universe. The phenomena of wave-matter duality and the uncertainty principle could only be perceived as 'miracles' in the classical world. The implications of quantum mechanics posed deep philosophical questions and troubled scientists like Albert Einstein. It took several more decades to settle the question of whether 'God plays dice or not'. After the introduction of Bell's theorem and its experimental realization, quantum theory was finally established as a fundamental physical reality of the universe. As we improved our understanding of the physical world, we mastered the ways in which this knowledge could be used to improve human lives. Just as the development of mechanical engines and the industrial revolution can be attributed to Newton's laws, the modern technological advancements are, in part, the result of our understanding of quantum mechanical processes. Today, after a century of progress, we may be close to realizing a quantum computer that will outperform classical computers. In the early twentieth century, along with the development of quantum theory came the discovery of the Raman scattering process.
A quantum mechanical description of physical systems was necessary to understand Raman scattering. Since then, modern Raman spectroscopy has paved new ways to learn about the structure of atoms and molecules. The advancement in laser technology since the 1960s accelerated this trend and opened a wide range of possibilities to control microscopic systems for various applications including sensing, imaging and detection. Before the discovery of Raman scattering, Rayleigh scattering could successfully describe classical phenomena like the color of the sky. This is a classical process of elastic scattering in which an electromagnetic field scatters off of a particle without changing the incident frequency. This happens when the particle is much smaller than the wavelength of the incident field. But as the particle gets smaller, down to the molecular scale, the scattering process becomes inelastic and the classical description of Rayleigh scattering proves to be insufficient to explain the process. This is because the output fields in this case do not have a continuous spectrum of frequencies; rather, they possess a frequency spectrum that corresponds to the quantum energy levels of the sample. This inelastic scattering of the electromagnetic field by microscopic systems is called Raman scattering, or spontaneous Raman scattering. In this process, a pump field, which excites the molecule into a virtual state, scatters off of the molecule, generating a field with a lower (red-shifted) or higher (blue-shifted) frequency than the pump, and bringing the molecule back to one of its vibrational states. The red-shifted field, known as Stokes, brings the molecule to a higher vibrational state, while the blue-shifted field, known as anti-Stokes, brings the molecule into a lower vibrational state. Even though modern Raman spectroscopy delivered tremendous progress in our probing and understanding of microscopic structures, it needed improvements, as the output signal was very weak, incoherent and non-directional. This led to the development of other, improved versions of Raman spectroscopy such as Stimulated Raman Spectroscopy (SRS) [1] and Coherent Anti-Stokes Raman Spectroscopy (CARS) [2]. SRS is a third-order nonlinear process in which two fields, a pump and a Stokes, are incident on the sample, stimulating the vibrational transition. In contrast to spontaneous Raman scattering, this is a coherent process and the resonant enhancement of the transition happens when the frequency difference of the pump and Stokes (\(\omega_{p}-\omega_{s}\)) is matched with the vibrational frequency. CARS is a four-wave mixing process in which the pump and Stokes pulses, having frequencies \(\omega_{p}\) and \(\omega_{s}\) respectively, excite the molecular vibrations to create a coherent superposition that a probe pulse, having frequency \(\omega_{pr}\), interacts with to generate an anti-Stokes signal. The output signal is blue-shifted and has a frequency of \(\omega_{as}=\omega_{p}-\omega_{s}+\omega_{pr}\). Owing to addressing the inherent vibrational properties of matter, CARS is one of the best suited and most frequently used methods for imaging, sensing and detection without labeling or staining [3, 4, 5, 6, 7, 8]. The high sensitivity, high resolution and non-invasiveness of CARS have been exploited for imaging of chemical and biological samples [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19], standoff detection [20, 21, 22, 23] and combustion thermometry [24, 25].
Recent developments in the applications of CARS in biology include imaging and classification of cancer cells that help early diagnosis [26, 27, 28] and rapid and label-free detection of the SARS-CoV-2 pathogens [29]. CARS has also been used recently for observing real-time vibrations of chemical bonds within molecules [30], direct imaging of molecular symmetries [31], graphene imaging [32], and femtosecond spectroscopy [33, 34]. Both SRS and CARS, being coherent processes, provide signals many orders of magnitude higher in amplitude compared to the spontaneous Raman process. But the detection in SRS requires a scheme that is more complicated than in CARS, as the output and input signals in SRS have the same frequencies. In CARS, the signal can be easily separated by an optical filter due to the anti-Stokes field having a frequency blue-shifted compared to the incoming pulses [4]. The direction of the signal in CARS is determined by the phase matching conditions [35, 36, 37, 38]. SRS has an advantage over CARS because of the absence of a nonresonant background [39], as the signal is stimulated only when the frequency difference matches that of the vibrational mode. The presence of the non-resonant background appearing in the spectra, which limits image contrast and sensitivity, has been one of the main challenges in CARS. To overcome the limitations of CARS, there has been tremendous work on removing the background from nonresonant processes and enhancing the signal amplitude [40, 41, 42, 43, 44, 45, 46, 47, 48, 49]. Quantum control is a way to manipulate the dynamics of a quantum system in order to achieve a more useful outcome. This is usually done with a tailored external field possessing controllable parameters. The amplitude of the anti-Stokes field in CARS is related to the coherence between the vibrational states within the framework of Maxwell's equations. Thus, maximizing coherence is the key to optimizing the intensity of the anti-Stokes signal [22, 50]. The primary aim of this work is to develop a quantum control scheme applicable to CARS that maximizes coherence by manipulating the field parameters involved in the process. In this chapter, we take a semiclassical approach to analyze the dynamics of light-matter interactions, which means that a classical electromagnetic field interacting with a quantum system of atoms or molecules is considered. The evolution of any quantum system is determined by the Schrodinger equation: \[i\hbar\frac{\partial}{\partial t}\left|\mathbf{\Psi}(t)\right\rangle=\mathbf{H}(t)\left|\mathbf{\Psi}(t)\right\rangle \tag{1}\] where \(\left|\mathbf{\Psi}(t)\right\rangle\) is the wave function that describes the quantum system and is a superposition of its eigenstates \(\left|n\right\rangle\): \[\left|\mathbf{\Psi}(t)\right\rangle=\sum_{n}a_{n}\left|n\right\rangle\,. \tag{2}\] The coefficients \(a_{n}(t)\) are known as the probability amplitudes, the squared moduli of which give the probability of observing the \(n\)-th eigenstate when making a measurement of an observable in the eigenstate basis. In most cases of light-matter interactions, the Hamiltonian \(\mathbf{H}(t)\) can be decomposed as: \[\mathbf{H}(t)=\mathbf{H}_{0}+\mathbf{V}(t) \tag{3}\] where \(\mathbf{H}_{0}\) is the time-independent Hamiltonian of the system and \(\mathbf{V}(t)\) is the time-dependent interaction Hamiltonian.
For a field \(\mathbf{E}(t)\) interacting with the quantum system, the interaction Hamiltonian is given by: \(\mathbf{V}(t)=-\boldsymbol{\mu}\cdot\mathbf{E}(t)\), where \(\boldsymbol{\mu}\) is the transition dipole moment. Using the Hamiltonian (3) in the Schrodinger equation (1) yields the dynamics of the quantum system during the interaction. As the simplest example, we can consider a quantum system with two energy levels \(\left|g\right\rangle\) and \(\left|e\right\rangle\), having frequencies \(\omega_{g}\) and \(\omega_{e}\) respectively, interacting with a monochromatic light field with frequency \(\omega_{L}\). The quantum state of this system at any time can be written as the Schrodinger wave function: \(\left|\mathbf{\Psi}(t)\right\rangle=a_{g}\left|g\right\rangle+a_{e}\left|e\right\rangle\). When the system involves multiple quantum states interacting with many fields, it is convenient to move to a different reference frame to simplify the numerical simulation of the problem. One way is to move to a frame that rotates with the frequencies of the given quantum states. This way of representing the evolution of the system is called the "interaction representation". To do this, the probability amplitudes are transformed by the equations: \[a_{n}(t)=\tilde{a}_{n}(t)e^{-i\omega_{n}t}\,,\ \ \ \ \ n=g,e \tag{4}\] The transformation of the Hamiltonian into this representation removes the time-independent diagonal elements and absorbs them into the interaction part of the Hamiltonian. Another representation, in which the system is transformed to a reference frame rotating at the frequency of the incident fields, is called the "field-interaction representation". For the two-level system described above, this can be done using the equations below: \[a_{g}(t)=\tilde{\tilde{a}}_{g}(t)e^{i\omega_{L}(t)t/2}\,,\ \ \ \ a_{e}(t)=\tilde{\tilde{a}}_{e}(t)e^{-i\omega_{L}(t)t/2} \tag{5}\] where \(\omega_{L}(t)\) represents the time-dependent frequency in the case of a chirped field. This transformation removes the exponential terms in the Hamiltonian and represents the energy levels in terms of the detuning, the difference between the frequency of the energy splitting in the system and the laser frequency, \((\omega_{e}-\omega_{g})-\omega_{L}(t)\). This representation is very convenient when analyzing the dynamics of the energy levels during the light-matter interaction. An alternative way of expressing quantum states is the density matrix formalism. When dealing with an ensemble of quantum states, the density matrix representation is more convenient, especially when the effects of decoherence are taken into account. The evolution of quantum states in the density matrix representation is described by the Liouville-von Neumann equation: \[i\hbar\dot{\boldsymbol{\rho}}(t)=[\mathbf{H}(t),\boldsymbol{\rho}(t)] \tag{6}\] where the density matrix elements can be expressed in terms of the probability amplitudes as: \(\boldsymbol{\rho}_{ij}=a_{i}a_{j}^{*}\). There has been a considerable number of studies published on different control methods aiming to improve the technique of coherent anti-Stokes Raman spectroscopy. Chirped pulses have been used in CARS-based imaging techniques to achieve high spectroscopic resolution [51, 52] and maximum coherence [53, 54, 55]. A method for selective excitation in a multimode system using a transform-limited pump pulse and a linearly chirped Stokes pulse in stimulated Raman scattering was proposed in [56].
The effects of chirped pump and Stokes pulses on the nonadiabatic coupling between vibrational modes are discussed in [57]. A 'roof' method of chirping to maximize coherence was introduced in [58], based on adiabatic passage in an effective two-level system. In this method, the Stokes pulse was linearly chirped at the same rate as the pump pulse in the first half of the pulse duration and was chirped with a negative rate afterwards. In the current work, we develop a scheme in which all the pulses in CARS are chirped in a way that creates maximum vibrational coherence robustly and selectively. The primary aim of this chapter is to develop theoretical frameworks that improve the existing methods of imaging, sensing and detection using quantum control methods. We seek to improve the techniques of Coherent Anti-Stokes Raman Spectroscopy (CARS) and Stimulated Raman Adiabatic Passage (STIRAP) by controlling the field parameters involved in the process. In section 2, we present a theoretical description of a general and robust technique for creating maximal-coherence superpositions of quantum states that can be used to optimize the signals in CARS-based applications. This technique is named C-CARS and is based on the idea of manipulating the amplitudes and phases of the incoming pulses in order to optimize the output signal and suppress the background. In the third section, we present a semiclassical theory based on this control scheme to simulate the output from a molecular system at the kilometer scale, aiming at remote detection. This theory combines the C-CARS scheme with the coupled Maxwell-Liouville-von Neumann equations, augmented with relaxation terms, and makes use of a machine learning technique to analyze the phase values of the scattered signal. In section 4, we give details of this machine learning technique, which uses deep Convolutional Neural Networks (CNN), and outline the application of this technique in quantum control methods. In section 5, we describe the process of Stimulated Raman Adiabatic Passage (STIRAP), the conditions for adiabatic passage, and the effects of chirping pulses in STIRAP. We show that a variation of STIRAP, namely fractional STIRAP (F-STIRAP), can be used to maximize coherence in a multi-level system. This provides another robust way for optimizing the output signal in detection methods. The chapter is concluded with a summary of results in section 6.

## 2 Quantum Control in CARS Using Chirped Pulses

### Coherent Anti-Stokes Raman Spectroscopy

In the previous section, we discussed the limitations of Coherent Anti-Stokes Raman Spectroscopy (CARS) and noted that it is important to maximize the vibrational coherence and suppress the background signal in CARS. Here, we present a chirping scheme for CARS in which all the incoming pulses are chirped to achieve this goal. The selectivity, robustness and adiabatic nature of this control scheme make it a viable candidate for improving the current methods for imaging, sensing and detection using CARS. A schematic diagram of the CARS process is given in Fig. 1.
The pump and Stokes fields create a coherent superposition between states \(|1\rangle\) and \(|2\rangle\), which a probe field scatters off of to generate an anti-Stokes signal.

Figure 1: Schematic of Coherent Anti-Stokes Raman Spectroscopy (CARS): the pump (\(\omega_{p}\)) and the Stokes (\(\omega_{s}\)) fields interact with the ground vibrational state \(|1\rangle\) and the excited vibrational state \(|2\rangle\) of the ground electronic state in the target molecule to create a superposition state with coherence \(\rho_{12}\). The probe (\(\omega_{pr}\)) field interacts with this superposition state to generate the anti-Stokes field at frequency \(\omega_{as}\). Parameters \(\Delta_{s}\) and \(\Delta_{as}\) are the one-photon detunings, and \(\delta\) is the two-photon detuning.

Consider chirped pump, Stokes and probe pulses with temporal chirp rates \(\alpha_{q}\), \(q=p,s,pr\), as \[E_{q}(t)=E_{q_{0}}(t)\cos\left[\omega_{q}(t-t_{c})+\frac{\alpha_{q}}{2}(t-t_{c})^{2}\right] \tag{7}\] and having Gaussian envelopes \[E_{q_{0}}(t)=\frac{\tilde{E}_{q_{0}}}{\left(1+\frac{\alpha_{q}^{\prime 2}}{\tau_{0}^{4}}\right)^{1/4}}e^{-\frac{(t-t_{c})^{2}}{2\tau^{2}}}, \tag{8}\] where \(\tau_{0}\) is the transform-limited pulse duration, \(\tau\) is the chirp-dependent pulse duration given by \(\tau=\tau_{0}[1+\alpha_{q}^{\prime 2}/\tau_{0}^{4}]^{1/2}\), and \(\alpha_{q}^{\prime}\) is the spectral chirp rate, which is related to the temporal chirp rate by \(\alpha_{q}=\alpha_{q}^{\prime}/[\tau_{0}^{4}(1+\alpha_{q}^{\prime 2}/\tau_{0}^{4})]\). The interaction Hamiltonian of the four-level system, after defining the one-photon detunings \(\Delta_{s}=\omega_{p}-\omega_{31}\) and \(\Delta_{as}=\omega_{as}-\omega_{41}\), reads \[H=\frac{\hbar}{2}\begin{pmatrix}0&0&\Omega_{p_{0}}(t)e^{i\Delta_{s}t+i\frac{\alpha_{p}}{2}t^{2}}&\Omega_{as_{0}}(t)e^{i\Delta_{as}t}\\ 0&0&\Omega_{s_{0}}(t)e^{i\Delta_{s}t+i\frac{\alpha_{s}}{2}t^{2}}&\Omega_{pr_{0}}(t)e^{i\Delta_{as}t+i\frac{\alpha_{pr}}{2}t^{2}}\\ \Omega_{p_{0}}^{*}(t)e^{-i\Delta_{s}t-i\frac{\alpha_{p}}{2}t^{2}}&\Omega_{s_{0}}^{*}(t)e^{-i\Delta_{s}t-i\frac{\alpha_{s}}{2}t^{2}}&0&0\\ \Omega_{as_{0}}^{*}(t)e^{-i\Delta_{as}t}&\Omega_{pr_{0}}^{*}(t)e^{-i\Delta_{as}t-i\frac{\alpha_{pr}}{2}t^{2}}&0&0\end{pmatrix} \tag{9}\] where the Rabi frequencies are given by \(\Omega_{q_{0}}=-\mu_{ij}E_{q_{0}}/\hbar\).

### Adiabatic Elimination of excited states

This Hamiltonian can be simplified to a two-level super-effective Hamiltonian by eliminating the states \(|3\rangle\) and \(|4\rangle\) adiabatically under the assumption of large one-photon detunings.
The dynamics of the four-level system interacting with the fields in Eq.(7) is described by the Liouville-von Neumann equation \(i\hbar\dot{\mathbf{\rho}}(t)=[\mathbf{H}_{int}(t),\mathbf{\rho}(t)].\) We define the two-photon detuning \(\delta=\omega_{p}-\omega_{s}-\omega_{21}=\omega_{as}-\omega_{pr}-\omega_{21}\), make the transformations in the interaction frame: \[\begin{split}\rho_{11}=&\tilde{\rho}_{11}\\ \rho_{12}=&\tilde{\rho}_{12}e^{-i(\omega_{1}-\omega_{2} )(t-t_{c})}\\ \rho_{13}=&\tilde{\rho}_{13}e^{i\omega_{p}(t-t_{c})}\\ \rho_{14}=&\tilde{\rho}_{14}e^{i\omega_{as}(t-t_{c})}\\ \rho_{22}=&\tilde{\rho}_{22}\\ \rho_{23}=&\tilde{\rho}_{23}e^{-i(\omega_{2}-\omega_{1}- \omega_{p})(t-t_{c})}\\ \rho_{24}=&\tilde{\rho}_{24}e^{-i(\omega_{2}-\omega_{1}- \omega_{as})(t-t_{c})}\\ \rho_{33}=&\tilde{\rho}_{33}\\ \rho_{34}=&\tilde{\rho}_{34}e^{-i(\omega_{p}-\omega_{as})(t-t_{c})} \\ \rho_{44}=&\tilde{\rho}_{44}\end{split} \tag{10}\] and obtain a system of differential equations for the density matrix elements after dropping the _tilde_ on both sides: \[i\dot{\rho}_{11}= \tfrac{1}{2}\Omega_{p0}(t)e^{\frac{i}{2}\alpha_{p}(t-t_{c})^{2}} \rho_{31}+\tfrac{1}{2}\Omega_{as0}(t)\rho_{41}-c.c\,, \tag{11}\] \[i\dot{\rho}_{22}= \tfrac{1}{2}\Omega_{s0}(t)e^{-i\delta(t-t_{c})+\frac{i}{2}\alpha_ {s}(t-t_{c})^{2}}\rho_{32}+\tfrac{1}{2}\Omega_{pr0}(t)e^{-i\delta(t-t_{c})+ \frac{i}{2}\alpha_{pr}(t-t_{c})^{2}}\rho_{42}-c.c\,,\] \[i\dot{\rho}_{33}= \tfrac{1}{2}\Omega_{p0}^{*}(t)e^{-\frac{i}{2}\alpha_{p}(t-t_{c})^ {2}}\rho_{13}+\tfrac{1}{2}\Omega_{s0}^{*}(t)e^{i\delta(t-t_{c})-\frac{i}{2} \alpha_{s}(t-t_{c})^{2}}\rho_{23}-c.c\,,\] \[i\dot{\rho}_{44}= \tfrac{1}{2}\Omega_{as0}^{*}(t)\rho_{14}+\tfrac{1}{2}\Omega_{pr0 }^{*}(t)e^{i\delta(t-t_{c})-\frac{i}{2}\alpha_{s}(t-t_{c})^{2}}\rho_{24}-c.c\,,\] \[i\dot{\rho}_{12}= \tfrac{1}{2}\Omega_{p0}(t)e^{\frac{i}{2}\alpha_{p}(t-t_{c})^{2}} \rho_{32}+\tfrac{1}{2}\Omega_{as0}(t)\rho_{42}-\tfrac{1}{2}\Omega_{s0}^{*}(t)e ^{i\delta(t-t_{c})-\frac{i}{2}\alpha_{s}(t-t_{c})^{2}}\rho_{13}\] \[-\tfrac{1}{2}\Omega_{pr0}^{*}(t)e^{i\delta(t-t_{c})-\frac{i}{2} \alpha_{pr}(t-t_{c})^{2}}\rho_{14}\,,\] \[i\dot{\rho}_{13}= \Delta_{s}\rho_{13}+\tfrac{1}{2}\Omega_{p0}(t)e^{\frac{i}{2} \alpha_{p}(t-t_{c})^{2}}\rho_{33}+\tfrac{1}{2}\Omega_{as0}(t)\rho_{43}-\tfrac {1}{2}\Omega_{p0}(t)e^{\frac{i}{2}\alpha_{p}(t-t_{c})^{2}}\rho_{11}\] \[-\tfrac{1}{2}\Omega_{s0}(t)e^{-i\delta(t-t_{c})+\frac{i}{2} \alpha_{pr}(t-t_{c})^{2}}\rho_{12}\,,\] \[i\dot{\rho}_{14}= \Delta_{as}\rho_{14}+\tfrac{1}{2}\Omega_{p0}(t)e^{\frac{i}{2} \alpha_{p}(t-t_{c})^{2}}\rho_{34}+\tfrac{1}{2}\Omega_{as0}(t)\rho_{44}- \tfrac{1}{2}\Omega_{a0}(t)\rho_{11}\] \[-\tfrac{1}{2}\Omega_{pr0}(t)e^{-i\delta(t-t_{c})+\frac{i}{2} \alpha_{pr}(t-t_{c})^{2}}\rho_{12}\,,\] \[i\dot{\rho}_{23}= \Delta_{s}\rho_{23}+\tfrac{1}{2}\Omega_{s0}(t)e^{-i\delta(t-t_{c} )+\frac{i}{2}\alpha_{s}(t-t_{c})^{2}}\rho_{33}+\tfrac{1}{2}\Omega_{pr0}(t)e^{ -i\delta(t-t_{c})+\frac{i}{2}\alpha_{pr}(t-t_{c})^{2}}\rho_{43}\] \[-\tfrac{1}{2}\Omega_{p0}(t)e^{\frac{i}{2}\alpha_{p}(t-t_{c})^{2}} \rho_{21}-\tfrac{1}{2}\Omega_{s0}(t)e^{-i\delta(t-t_{c})-\frac{i}{2}\alpha_{s }(t-t_{c})^{2}}\rho_{22}\,,\] \[i\dot{\rho}_{24}= \Delta_{as}\rho_{24}+\tfrac{1}{2}\Omega_{s0}(t)e^{-i\delta(t-t_{ c})+\frac{i}{2}\alpha_{s}(t-t_{c})^{2}}\rho_{34}+\tfrac{1}{2}\Omega_{pr0}(t)e^{ -i\delta(t-t_{c})+\frac{i}{2}\alpha_{pr}(t-t_{c})^{2}}\rho_{44}\] \[-\tfrac{1}{2}\Omega_{as0}(t)\rho_{21}-\tfrac{1}{2}\Omega_{pr0}(t) e^{-i\delta(t-t_{c})+\frac{i}{2}\alpha_{pr}(t-t_{c})^{2}}\rho_{22}\,,\] 
\[i\dot{\rho}_{34}= (\Delta_{as}-\Delta_{s})\rho_{34}+\tfrac{1}{2}\Omega_{p0}^{*}(t)e^{-\frac{i}{2}\alpha_{p}(t-t_{c})^{2}}\rho_{14}+\tfrac{1}{2}\Omega_{s0}^{*}(t)e^{i\delta(t-t_{c})-\frac{i}{2}\alpha_{s}(t-t_{c})^{2}}\rho_{24}\] \[-\tfrac{1}{2}\Omega_{as0}(t)\rho_{31}-\tfrac{1}{2}\Omega_{pr0}(t)e^{-i\delta(t-t_{c})+\frac{i}{2}\alpha_{pr}(t-t_{c})^{2}}\rho_{32}\,.\] The condition for chirping of the probe pulse, \(\alpha_{pr}=\alpha_{s}-\alpha_{p}\), is then imposed, which is necessary to equate the exponentials. The above set of equations can be simplified considering the conditions for adiabatic elimination, \(\dot{\rho}_{33}=\dot{\rho}_{44}=\dot{\rho}_{34}=0\), and substituting for \(\rho_{13},\rho_{14},\rho_{23}\) and \(\rho_{24}\) in the equations for \(\dot{\rho}_{11},\dot{\rho}_{22}\) and \(\dot{\rho}_{12}\). After defining the new Rabi frequencies: \[\Omega_{1}(t)=\frac{|\Omega_{p0}(t)|^{2}}{4\Delta_{s}}+\frac{|\Omega_{as0}(t)|^{2}}{4\Delta_{as}},\qquad\Omega_{2}(t)=\frac{|\Omega_{s0}(t)|^{2}}{4\Delta_{s}}+\frac{|\Omega_{pr0}(t)|^{2}}{4\Delta_{as}}\,, \tag{12}\] and \[\Omega_{3}(t)=\frac{\Omega_{p0}(t)\Omega_{s0}^{*}(t)}{4\Delta_{s}}+\frac{\Omega_{pr0}^{*}(t)\Omega_{as0}(t)}{4\Delta_{as}}\,, \tag{13}\] the density matrix equations are reduced to: \[i\dot{\rho}_{11}= \Omega_{3}(t)e^{i\delta(t-t_{c})-\frac{i}{2}(\alpha_{s}-\alpha_{p})(t-t_{c})^{2}}\rho_{21}-c.c\,, \tag{14}\] \[i\dot{\rho}_{22}= \Omega_{3}^{*}(t)e^{-i\delta(t-t_{c})+\frac{i}{2}(\alpha_{s}-\alpha_{p})(t-t_{c})^{2}}\rho_{12}-c.c\,,\] \[i\dot{\rho}_{12}= \left[\Omega_{1}(t)-\Omega_{2}(t)\right]\rho_{12}+\Omega_{3}(t)e^{i\delta(t-t_{c})-\frac{i}{2}(\alpha_{s}-\alpha_{p})(t-t_{c})^{2}}(\rho_{22}-\rho_{11})\,.\] Further transformations, \(\rho_{11}=\tilde{\rho}_{11}\), \(\rho_{12}=\tilde{\rho}_{12}e^{i\delta(t-t_{c})-\frac{i}{2}(\alpha_{s}-\alpha_{p})(t-t_{c})^{2}}\), \(\rho_{22}=\tilde{\rho}_{22}\), and shifting of the diagonal elements lead to the following Hamiltonian in the field-interaction representation for the "super-effective" two-level system \[\mathbf{H}_{se}(t)=\frac{\hbar}{2}\begin{pmatrix}\delta-(\alpha_{s}-\alpha_{p})(t-t_{c})+\Omega_{1}(t)-\Omega_{2}(t)&2\Omega_{3}(t)\\ 2\Omega_{3}^{*}(t)&-\delta+(\alpha_{s}-\alpha_{p})(t-t_{c})-\Omega_{1}(t)+\Omega_{2}(t)\end{pmatrix} \tag{15}\] The amplitudes of the incoming fields can be manipulated to make the AC Stark shifts equal, \(\Omega_{1}(t)=\Omega_{2}(t)\), which can be satisfied by taking \(\Omega_{s0}=\Omega_{pr0}=\Omega_{p0}/\sqrt{2}\), considering the fact that the anti-Stokes field is absent before the interaction, \(\Omega_{as0}=0\). The effective Rabi frequency, which is the relevant quantity in the dynamics, can then be written as: \[\Omega_{3}(t)=\frac{\Omega_{3(0)}}{\left[\left(1+\frac{\alpha_{p}^{\prime 2}}{\tau_{0}^{4}}\right)\left(1+\frac{\alpha_{s}^{\prime 2}}{\tau_{0}^{4}}\right)\right]^{1/4}}e^{-\frac{(t-t_{c})^{2}}{\tau^{2}}}\,. \tag{16}\] The peak effective Rabi frequency of the transform-limited pulses, \(\Omega_{3(0)}\), is given by \(\Omega_{3(0)}=\Omega_{p0}^{2}/(4\sqrt{2}\Delta)\). It is reduced when chirping is applied to the pump and Stokes pulses with the spectral rates \(\alpha_{p}^{\prime}\) and \(\alpha_{s}^{\prime}\), respectively. The relative amplitudes of all the Rabi frequencies involved in the dynamics are shown in Fig. 2. In this chapter and the following one, all frequency parameters are defined in units of the frequency \(\omega_{21}\) and time parameters in units of \(\omega_{21}^{-1}\).
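To make the super-effective dynamics concrete, the following is a compact numerical sketch, written by us for illustration, that propagates Eq. (15) with the effective Rabi frequency of Eq. (16). The parameter values mirror those used in the figures below, and the piecewise pump chirp anticipates the C-CARS scheme introduced in the next subsection.

```python
# Illustrative sketch of the super-effective two-level dynamics, Eqs. (15)-(16).
# Units: frequencies in omega_21, times in 1/omega_21; values are illustrative.
import numpy as np
from scipy.linalg import expm

tau0 = 10.0                       # transform-limited duration
alpha_s_spec = -7.5 * tau0**2     # spectral chirp alpha_s' (alpha_s'/tau0^2 = -7.5)
Omega30, delta, tc = 5.0, 0.0, 0.0

def temporal(a_spec):             # alpha = alpha' / (tau0^4 + alpha'^2)
    return a_spec / (tau0**4 + a_spec**2)

tau = tau0 * np.sqrt(1.0 + alpha_s_spec**2 / tau0**4)   # chirped duration
a_s = temporal(alpha_s_spec)

def alpha_p(t):
    # Sign reversal of the pump chirp at t_c (the C-CARS choice of Sec. 2.3);
    # use a constant -a_s everywhere to model opposite chirping throughout.
    return -a_s if t <= tc else a_s

def H_se(t):
    # Eq. (16): peak reduced by the chirp; Omega_1 = Omega_2 cancels in Eq. (15)
    Om3 = Omega30 / np.sqrt(1.0 + alpha_s_spec**2 / tau0**4) * np.exp(-(t - tc)**2 / tau**2)
    Deff = delta - (a_s - alpha_p(t)) * (t - tc)
    return 0.5 * np.array([[Deff, 2 * Om3], [2 * Om3, -Deff]], dtype=complex)

rho = np.diag([1.0, 0.0]).astype(complex)     # the system starts in state |1>
ts = np.linspace(-4 * tau, 4 * tau, 8000)
dt = ts[1] - ts[0]
for t in ts:                                   # stepwise rho -> U rho U^dagger
    U = expm(-1j * H_se(t) * dt)
    rho = U @ rho @ U.conj().T

print("rho11, rho22, |rho12|:", rho[0, 0].real, rho[1, 1].real, abs(rho[0, 1]))
```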
### The C-CARS chirping scheme

The dynamics of this process can be described as follows. During the interaction, at time \(t=t_{c}+\delta/(\alpha_{s}-\alpha_{p})\), the diagonal elements become equal to zero, creating a coherent superposition state having equal populations of the states \(|1\rangle\) and \(|2\rangle\) and, therefore, maximum coherence. This time can be determined for a fixed value of the two-photon detuning \(\delta\). At two-photon resonance, the system reaches this maximum coherence at the central time \(t_{c}\). The system can be preserved in this state of maximum coherence by imposing the condition that \((\alpha_{s}-\alpha_{p})\) is zero in the second half of the interaction. A smooth realization of this is possible by choosing the temporal chirp rates of the pump and Stokes pulses to be opposite in sign before the central time and equal in sign after it, along with the condition imposed for the chirp rate of the probe, \(\alpha_{pr}=\alpha_{s}-\alpha_{p}\), which was used while deriving the Hamiltonian in Eq. (15). This chirping scheme, namely C-CARS, can be summarized as: \(\alpha_{p}=-\alpha_{s}\) and \(\alpha_{pr}=2\alpha_{s}\) for \(t\leq t_{c}\), and \(\alpha_{p}=\alpha_{s}\) and \(\alpha_{pr}=0\) for \(t>t_{c}\).

Figure 2: The evolution of the different Rabi frequencies in the C-CARS scheme. The Stokes and probe Rabi frequencies have the same amplitude, which is less than the amplitude of the pump pulse by a factor of \(\sqrt{2}\). \(\Omega_{1}(t)\) and \(\Omega_{2}(t)\) are canceled in the Hamiltonian, making \(\Omega_{3}(t)\) the only relevant quantity in the scheme.

#### 2.3.1 Wigner-Ville distributions

The Wigner-Ville distribution is one of the important methods for time-frequency analysis. For a function \(f(t)\), the Wigner-Ville distribution is given by: \[W_{f}(t,\omega)=\int_{-\infty}^{\infty}f\left(t+\frac{t^{\prime}}{2}\right)\bar{f}\left(t-\frac{t^{\prime}}{2}\right)e^{-i\omega t^{\prime}}\,dt^{\prime}\,. \tag{17}\] To find the Wigner-Ville distribution of the incident pulses, we substitute the pulse equations (7) into (17). In the rotating wave approximation, the terms with \(2\omega_{q}\) can be ignored, as they oscillate at a higher frequency compared to the terms with \(\omega_{q}\) and average away to zero. The equation then becomes: \[\begin{split} W_{E_{q}}(t,\omega)&=\int_{-\infty}^{\infty}\frac{1}{4}E_{q_{0}}^{\prime 2}e^{-(t-t_{c})^{2}/\tau^{2}-t^{\prime 2}/4\tau^{2}}\left[e^{i\omega_{q}t^{\prime}+i\alpha_{q}(t-t_{c})t^{\prime}}+e^{-i\omega_{q}t^{\prime}-i\alpha_{q}(t-t_{c})t^{\prime}}\right]e^{-i\omega t^{\prime}}dt^{\prime}\\ &=\frac{1}{4}E_{q_{0}}^{\prime 2}e^{-(t-t_{c})^{2}/\tau^{2}}\left[\int_{-\infty}^{\infty}e^{-t^{\prime 2}/4\tau^{2}+[-i(\omega-\omega_{q})+i\alpha_{q}(t-t_{c})]t^{\prime}}dt^{\prime}\right.\\ &+\left.\int_{-\infty}^{\infty}e^{-t^{\prime 2}/4\tau^{2}+[-i(\omega+\omega_{q})-i\alpha_{q}(t-t_{c})]t^{\prime}}dt^{\prime}\right]\,.\end{split} \tag{18}\] Using the standard Gaussian integral \[\int_{-\infty}^{\infty}e^{-ax^{2}+bx}dx=\sqrt{\frac{\pi}{a}}e^{b^{2}/4a}\,, \tag{19}\] the Wigner-Ville distributions are given by: \[W_{E_{q}}(t,\omega)=\frac{\tau\sqrt{\pi}}{2}E_{q_{0}}^{\prime 2}e^{-(t-t_{c})^{2}/\tau^{2}}\left[e^{-\tau^{2}[\omega-\omega_{q}-\alpha_{q}(t-t_{c})]^{2}}+e^{-\tau^{2}[\omega+\omega_{q}+\alpha_{q}(t-t_{c})]^{2}}\right]\,. \tag{20}\] The positive solutions of these equations are given in Fig. 3.
The "turning off" of chirping in the second half is the essence of this scheme, resulting in a selective excitation of the molecules and suppressing any off-resonant background. If \(\alpha_{p}\) is not reversed, the coherence is not preserved leading to population reversal between states \(|1\rangle\) and \(|2\rangle\). #### 2.3.2 Analysis of populations and coherence To demonstrate the selective excitation of molecules using C-CARS, the time evolution of populations \(\rho_{11}\) and \(\rho_{22}\) and coherence \(\rho_{12}\) is presented in Fig. 4 for four different cases described below. The C-CARS control scheme is applied for the resonant (\(\delta=0\)) and off-resonant (\(\delta\neq 0\)) cases respectively in figures 4(a) and 4(b). The coherence reaches maximum at a central time in the resonant case, which is preserved till the end of dynamics owing to the zero net chirp rate attained by reversing the sign of \(\alpha_{p}\). On the contrary, the time of maximum coherence does not coincide with the central time in the non-resonant case, which results in a population transfer to the upper state and zero coherence. To emphasize the significance of reversing the sign of \(\alpha_{p}\) in C-CARS control scheme we compare it with the scheme when the pump and Stokes are oppositely chirped for the whole pulse duration, \(\alpha_{p}=-\alpha_{s}\). The dynamics of the system in such case is plotted in figures 4(c) and 4(d) for \(\delta=0\) and \(\delta=0.1\) respectively. In (c), even though the system reaches a perfect coherence at \(t=t_{c}\), it drops to zero because population is further adiabatically transferred to state \(|2\rangle\). Coherence in (d) behaves similar to that of (b). #### 2.3.3 Comparison with the exact four-level system The validity of adiabatic approximation, which led to a derivation of the super-effective Hamiltonian, can be tested by comparing the results of the super-effective two-level system with the exact solution using the Liouville von Neumann equation for the four-level systems. To this end, the field interaction Hamiltonian of the four level system, after imposing the condition for chirping of the probe pulse \(\alpha_{pr}=\alpha_{s}-\alpha_{p}\), can be written as: \[\mathbf{H}_{ex}(t)=\frac{\hbar}{2}\left(\begin{array}{cccc}2\alpha_{p}(t-t_ {c})&0&\Omega_{p_{0}}(t)&\Omega_{as_{0}}(t)\\ 0&2[\alpha_{s}(t-t_{c})-\delta]&\Omega_{s_{0}}(t)&\Omega_{pr_{0}}(t)\\ \Omega_{p_{0}}(t)&\Omega_{s_{0}}(t)&-2\Delta_{s}&0\\ \Omega_{as_{0}}(t)&\Omega_{pr_{0}}(t)&0&2[\alpha_{p}(t-t_{c})-\Delta_{as}]\\ \end{array}\right). \tag{21}\] Figure 5 shows the contour-plot of vibrational coherence \(\rho_{12}\) at the end of dynamics as a function of the peak Rabi frequency \(\Omega_{3(0)}(t)\) and dimensionless spectral chirp rate \(\alpha_{s}^{\prime}/\tau_{0}^{2}\). Figures (a) and (b) represent the \(\delta=0\) and \(\delta=0.1\) cases, respectively, of the super-effective two level system, and (c) and (d) represent the same cases obtained by the exact solution of the four-level system using the same set of parameters. In all the figures, the one-photon Figure 4: The evolution of the populations and coherence demonstrating selective coherent excitation in C-CARS: in (a) and (b), C-CARS scheme is applied to the resonant case (\(\delta=0\)) (a) and the off-resonant case (\(\delta=0.1\)) (b). Coherence is preserved at the maximum value in resonant case, while it is destroyed in the detuned case. 
This is in contrast with the chirping scheme where the pump and Stokes pulses are oppositely chirped, \(\alpha_{p}=-\alpha_{s}\), for the whole pulse duration, shown for the resonant case (\(\delta=0\)) in (c) and off-resonant (\(\delta=0.1\)) case in (d). The dynamics is similar and the coherence is zero in both these cases demonstrating the need for turning off the chirp at central time. The parameters are: \(\Omega_{3(0)}=5.0[\omega_{21}]\), \(\tau_{0}=10[\omega_{21}^{-1}]\), \(\Delta=1.0[\omega_{21}]\) and \(\alpha_{s}^{\prime}/\tau_{0}^{2}=-7.5\). Figure 5: Vibrational coherence as a function of spectral chirp and peak Rabi frequency when C-CARS chirping scheme is used: the above figures (a and b) are plotted using the super-effective two-level Hamiltonian, Eq. (80), and below figures (c and d) are plotted using the exact four-level Hamiltonian, Eq. (21). In figures (a) and (c), \(\delta=0\) and (b) and (d), \(\delta=0.1\). The similarity between the results of two Hamiltonians indicates the validity of the adiabatic approximation which is used to derive the chirping scheme. In the case of resonance, the coherence is maximum (blue) for most of the Rabi frequencies and spectral chirp rates, meaning that the chirping scheme is very robust against the changes in input parameters. In the absence of resonance, the coherence is zero (red) for most values of parameters, implying that the chirping scheme is effective in selectively exciting the system. The parameters used in this figure are: \(\tau_{0}=10[\omega_{21}^{-1}]\) and \(\Delta_{s}=\Delta_{as}=1.0[\omega_{21}]\). detuning is \(\Delta_{s}=\Delta_{as}=\Delta=1.0\). For the adiabatic elimination procedure to work properly, the terms in the Hamiltonian containing detunings should be larger than the terms containing Rabi frequencies \(|\Delta|\gg\Omega_{3}(0)\). Around the region where \(\alpha_{s}^{{}^{\prime}}/\tau_{0}^{2}=0\), \(|\Delta|\) is comparable to the peak effective Rabi frequency, and the adiabatic approximation disagrees with the exact solution. As the magnitude of spectral chirp rate increases, the adiabatic approximation is accurate owing to the reduction in the peak effective Rabi frequency, because of the presence of high temporal chirp. In the resonant cases (a) and (c), the coherence is at the maximum - color blue - for the most part, indicating the robustness of C-CARS chirping scheme in preparing the system in a coherent superposition. In the off-resonant coherence, zero coherence - color red - is seen for the most part; it is in stark contrast with that of the resonant case, revealing the selective nature of coherent excitation using the C-CARS chirping scheme. Figure 6: The evolution of the bare state and the dressed state energies and the non-adiabatic parameter: \(E_{1}(t)\) and \(E_{2}(t)\) (dashed lines) are the bare state energies and \(\lambda_{1}(t)\) and \(\lambda_{2}(t)\) (solid lines) are the dressed state energies. Figures (a) and (b) are the resonant and off-resonant cases respectively when C-CARS scheme is used. Figures (c) and (d) show the resonant and off-resonant cases, when the pump and Stokes pulses are oppositely chirped for the whole pulse duration. In contrast to all the other cases, figure (a) shows the non-adiabatic parameter \(\dot{\theta}(t)\), the dark solid line, remaining at zero after the central time. The parameters used are: \(\Omega_{3(0)}=5.0[\omega_{21}]\), \(\tau_{0}=10[\omega_{21}^{-1}]\), \(\Delta=1.0[\omega_{21}]\) and \(\alpha_{s}^{\prime}/\tau_{0}^{2}=-7.5\). 
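The qualitative behavior in Figs. 4 and 5 can be reproduced by integrating the Liouville von Neumann equation with the super-effective Hamiltonian of Eq. (15). Below is a minimal Python sketch, assuming \(\Omega_{1}=\Omega_{2}\) and the C-CARS chirp switching, with illustrative parameters of Fig. 4:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch: dynamics of the super-effective two-level system, Eq. (15),
# with Omega_1 = Omega_2 and the C-CARS chirp switching. Units [w21];
# illustrative parameters of Fig. 4 (set delta = 0.1 for the detuned case).
tau0, delta, t_c = 10.0, 0.0, 0.0
asp = -7.5 * tau0**2                          # spectral chirp alpha_s'
tau = tau0 * np.sqrt(1 + asp**2 / tau0**4)    # chirped pulse duration
alpha_s = (asp / tau0**4) / (1 + asp**2 / tau0**4)   # temporal chirp rate
O30 = 5.0 / np.sqrt(1 + asp**2 / tau0**4)     # chirp-reduced peak of Omega_3, Eq. (16)

def H(t):
    O3 = O30 * np.exp(-(t - t_c)**2 / tau**2)
    a_pr = 2 * alpha_s if t <= t_c else 0.0   # alpha_pr = alpha_s - alpha_p, off after t_c
    d = delta - a_pr * (t - t_c)              # diagonal term of Eq. (15)
    return 0.5 * np.array([[d, 2 * O3], [2 * O3, -d]], dtype=complex)

def rhs(t, y):
    rho = y.reshape(2, 2)
    return (-1j * (H(t) @ rho - rho @ H(t))).ravel()

rho0 = np.array([[1, 0], [0, 0]], dtype=complex).ravel()
sol = solve_ivp(rhs, (-5 * tau, 5 * tau), rho0, max_step=tau / 200)
rho = sol.y[:, -1].reshape(2, 2)
print("rho11, rho22, |rho12| =", rho[0, 0].real, rho[1, 1].real, abs(rho[0, 1]))
```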
### Analysis of Dressed States and Adiabatic Passage

When light interacts with any quantum system, the eigenstates undergo shifts, resulting in a set of quantum states that are said to be 'dressed' by the light. These states are called dressed states, and the initial states that were 'untouched' by the light are called bare states [59]. The robustness of the C-CARS chirping scheme stems from the adiabatic nature of the interaction, which can be demonstrated by analyzing the evolution of the dressed state energies. To this end, the density matrix \(\mathbf{\rho}(t)\) is transformed to a dressed density matrix using the transformation \(\mathbf{\rho}_{d}(t)=\mathbf{T}(t)\mathbf{\rho}(t)\mathbf{T}^{\dagger}(t)\), where \(\mathbf{T}(t)\) is an orthogonal matrix given by:

\[\mathbf{T}(t)=\begin{pmatrix}\cos\theta(t)&-\sin\theta(t)\\ \sin\theta(t)&\cos\theta(t)\end{pmatrix}\,. \tag{22}\]

Since \(\mathbf{T}\) is an orthogonal matrix, \(\mathbf{T}\mathbf{T}^{\dagger}=\mathbf{T}^{\dagger}\mathbf{T}=1\), and

\[\mathbf{\rho}=\mathbf{T}^{\dagger}\mathbf{\rho}_{d}\mathbf{T}\,, \tag{23}\]

the Liouville von Neumann equations can now be transformed to the dressed frame:

\[i\hbar\dot{\mathbf{\rho}}=[\mathbf{H}_{se},\,\mathbf{\rho}] \tag{24}\]

\[i\hbar\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{\rho}=\begin{bmatrix}\mathbf{H}_{se},\,\mathbf{T}^{\dagger}\mathbf{\rho}_{d}\mathbf{T}\end{bmatrix} \tag{25}\]

\[i\hbar\frac{\mathrm{d}}{\mathrm{d}t}\left(\mathbf{T}^{\dagger}\mathbf{\rho}_{d}\mathbf{T}\right)=i\hbar\left[\dot{\mathbf{T}}^{\dagger}\mathbf{\rho}_{d}\mathbf{T}+\mathbf{T}^{\dagger}\dot{\mathbf{\rho}}_{d}\mathbf{T}+\mathbf{T}^{\dagger}\mathbf{\rho}_{d}\dot{\mathbf{T}}\right] \tag{26}\]

Rewriting Eq. (25),

\[\begin{split} i\hbar\left[\mathbf{T}^{\dagger}\mathbf{\rho}_{d}\dot{\mathbf{T}}+\dot{\mathbf{T}}^{\dagger}\mathbf{\rho}_{d}\mathbf{T}+\mathbf{T}^{\dagger}\dot{\mathbf{\rho}}_{d}\mathbf{T}\right]&=\mathbf{H}_{se}\left(\mathbf{T}^{\dagger}\mathbf{\rho}_{d}\mathbf{T}\right)-\left(\mathbf{T}^{\dagger}\mathbf{\rho}_{d}\mathbf{T}\right)\mathbf{H}_{se}\\ i\hbar\left[\mathbf{T}^{\dagger}\dot{\mathbf{\rho}}_{d}\mathbf{T}\right]&=\mathbf{H}_{se}\left(\mathbf{T}^{\dagger}\mathbf{\rho}_{d}\mathbf{T}\right)-\left(\mathbf{T}^{\dagger}\mathbf{\rho}_{d}\mathbf{T}\right)\mathbf{H}_{se}\\ &-i\hbar\left[\mathbf{T}^{\dagger}\mathbf{\rho}_{d}\dot{\mathbf{T}}+\dot{\mathbf{T}}^{\dagger}\mathbf{\rho}_{d}\mathbf{T}\right]\end{split} \tag{27}\]

Multiplying this equation with \(\mathbf{T}\) from the left side and \(\mathbf{T}^{\dagger}\) from the right side gives:

\[i\hbar\dot{\mathbf{\rho}}_{d}=\mathbf{T}\mathbf{H}_{se}\mathbf{T}^{\dagger}\mathbf{\rho}_{d}-\mathbf{\rho}_{d}\mathbf{T}\mathbf{H}_{se}\mathbf{T}^{\dagger}-i\hbar\mathbf{\rho}_{d}\dot{\mathbf{T}}\mathbf{T}^{\dagger}-i\hbar\mathbf{T}\dot{\mathbf{T}}^{\dagger}\mathbf{\rho}_{d} \tag{28}\]

Because \(\mathbf{T}\) is orthogonal, \(\dot{\mathbf{T}}\mathbf{T}^{\dagger}=-\mathbf{T}\dot{\mathbf{T}}^{\dagger}\); using this in the above equation gives:

\[i\hbar\dot{\mathbf{\rho}}_{d}=\mathbf{T}\mathbf{H}_{se}\mathbf{T}^{\dagger}\mathbf{\rho}_{d}-\mathbf{\rho}_{d}\mathbf{T}\mathbf{H}_{se}\mathbf{T}^{\dagger}+i\hbar\mathbf{\rho}_{d}\mathbf{T}\dot{\mathbf{T}}^{\dagger}-i\hbar\mathbf{T}\dot{\mathbf{T}}^{\dagger}\mathbf{\rho}_{d} \tag{29}\]

Rearranging the equation gives the density matrix equations
in the dressed frame:

\[\begin{split} i\hbar\dot{\mathbf{\rho}}_{d}&=\mathbf{T}\mathbf{H}_{se}\mathbf{T}^{\dagger}\mathbf{\rho}_{d}-i\hbar\mathbf{T}\dot{\mathbf{T}}^{\dagger}\mathbf{\rho}_{d}-\mathbf{\rho}_{d}\mathbf{T}\mathbf{H}_{se}\mathbf{T}^{\dagger}+i\hbar\mathbf{\rho}_{d}\mathbf{T}\dot{\mathbf{T}}^{\dagger}\\ &=\left(\mathbf{T}\mathbf{H}_{se}\mathbf{T}^{\dagger}-i\hbar\mathbf{T}\dot{\mathbf{T}}^{\dagger}\right)\mathbf{\rho}_{d}-\mathbf{\rho}_{d}\left(\mathbf{T}\mathbf{H}_{se}\mathbf{T}^{\dagger}-i\hbar\mathbf{T}\dot{\mathbf{T}}^{\dagger}\right)\\ &=\left[\mathbf{T}\mathbf{H}_{se}\mathbf{T}^{\dagger}-i\hbar\mathbf{T}\dot{\mathbf{T}}^{\dagger}\,,\mathbf{\rho}_{d}\right]\\ i\hbar\dot{\mathbf{\rho}}_{d}&=\left[\mathbf{H}_{d}\,,\mathbf{\rho}_{d}\right]\end{split} \tag{30}\]

where \(\mathbf{H}_{d}=\mathbf{T}\mathbf{H}_{se}\mathbf{T}^{\dagger}-i\hbar\mathbf{T}\dot{\mathbf{T}}^{\dagger}\) and \(\mathbf{T}(t)\mathbf{H}_{se}(t)\mathbf{T}^{\dagger}(t)\) is a diagonal matrix. For adiabatic passage to occur, the separation of the dressed state energies given by the dressed state Hamiltonian \(\mathbf{H}_{d}(t)\) should be greater than the coupling term \(\mathbf{T}(t)\dot{\mathbf{T}}^{\dagger}(t)\), so that coupling between the dressed states is avoided [59; 60; 61]. The dressed state Hamiltonian is found to be:

\[\mathbf{H}_{d}(t)=\frac{\hbar}{2}\begin{pmatrix}-\sqrt{(-\delta+\alpha_{pr}(t-t_{c}))^{2}+(2\Omega_{3}(t))^{2}}&-i\dot{\theta}(t)\\ i\dot{\theta}(t)&\sqrt{(-\delta+\alpha_{pr}(t-t_{c}))^{2}+(2\Omega_{3}(t))^{2}}\end{pmatrix}\,, \tag{31}\]

where the non-adiabatic parameter \(\dot{\theta}(t)\), which comes from the matrix \(\mathbf{T}(t)\dot{\mathbf{T}}^{\dagger}(t)\), is given by:

\[\dot{\theta}(t)=\frac{(-\delta+\alpha_{pr}(t-t_{c}))\dot{\Omega}_{3}(t)-2\Omega_{3}(t)\alpha_{pr}}{(-\delta+\alpha_{pr}(t-t_{c}))^{2}+4\Omega_{3}(t)^{2}}\,. \tag{32}\]

Analyzing the equation for \(\dot{\theta}(t)\) reveals the selective nature of the adiabatic passage in the case of resonance. In the resonant case, the C-CARS chirping scheme ensures that the process is adiabatic in the second half of the pulse by keeping the non-adiabatic coupling parameter \(\dot{\theta}(t)\) at zero during this time period. But adiabaticity is not guaranteed in the off-resonant case due to the non-zero factor \(\delta\) in the equation. This is demonstrated in Fig. 6, where the bare state energies are given by \(E_{1}(t)=H_{se_{11}}(t)\) and \(E_{2}(t)=H_{se_{22}}(t)\), and the dressed state energies are given by \(\lambda_{1}(t)=H_{d_{11}}(t)\) and \(\lambda_{2}(t)=H_{d_{22}}(t)\). Figures (a) and (b) represent the resonant (\(\delta=0\)) and off-resonant (\(\delta\neq 0\)) cases when the C-CARS chirping scheme is used. Clearly, \(\dot{\theta}(t)\), the dark solid line, has non-zero values in the second half when the system is detuned. The perfectly adiabatic nature of the interaction in Fig. 6(a) corresponds to the maximum coherence in Fig. 4(a), and the non-adiabatic nature in Fig. 6(b) corresponds to the population inversion in Fig. 4(b). The parameters used in Fig. 6 are the same as those used in Fig. 4. In figures 6(c) and 6(d), the same quantities are plotted for \(\delta=0\) and \(\delta\neq 0\) respectively for the scheme in which the pump and Stokes are chirped oppositely for the whole pulse duration. The process is perfectly adiabatic only in the second half of (a), since a smooth realization of \(\dot{\theta}(t)=0\) was made possible owing to the developed C-CARS chirping scheme.
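Eqs. (31)-(32) make the adiabaticity criterion easy to check numerically: the dressed level splitting should dominate \(|\dot{\theta}(t)|\). A minimal Python sketch, assuming the same illustrative parameters as above:

```python
import numpy as np

# Minimal sketch of the adiabaticity check, Eqs. (31)-(32): compare the dressed
# state splitting with the non-adiabatic coupling theta_dot. Units [w21];
# illustrative parameters of Fig. 6.
tau0, delta, t_c = 10.0, 0.0, 0.0
asp = -7.5 * tau0**2
tau = tau0 * np.sqrt(1 + asp**2 / tau0**4)
alpha_s = (asp / tau0**4) / (1 + asp**2 / tau0**4)
O30 = 5.0 / np.sqrt(1 + asp**2 / tau0**4)

def Omega3(t):
    return O30 * np.exp(-(t - t_c)**2 / tau**2)

def dOmega3(t):
    return -2 * (t - t_c) / tau**2 * Omega3(t)

def alpha_pr(t):
    # C-CARS: alpha_pr = 2*alpha_s before t_c, 0 after
    return np.where(t <= t_c, 2 * alpha_s, 0.0)

t = np.linspace(-4 * tau, 4 * tau, 4001)
d_eff = -delta + alpha_pr(t) * (t - t_c)
splitting = np.sqrt(d_eff**2 + (2 * Omega3(t))**2)   # (lambda_2 - lambda_1)/hbar
theta_dot = ((d_eff * dOmega3(t) - 2 * Omega3(t) * alpha_pr(t))
             / (d_eff**2 + 4 * Omega3(t)**2))

# At two-photon resonance theta_dot vanishes identically for t > t_c,
# as required by the C-CARS scheme.
print(np.abs(theta_dot[t > t_c]).max(), np.abs(theta_dot).max(), splitting.max())
```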
The dynamics is greatly different and the selective excitation does not happen when the effective Rabi frequency, \(\Omega_{3}(t)\), is not strong enough, as shown in Fig. 7, where \(\Omega_{3(0)}=0.18\), \(\alpha_{s}^{\prime}/\tau_{0}^{2}=-0.8\) and \(\tau_{0}=25[\omega_{21}^{-1}]\). In both the (a), \(\delta=0\), and (b), \(\delta=0.1\), cases, the non-adiabatic coupling parameter is much greater compared to Fig. 6. In the off-resonant case (d), the coherence oscillates much more strongly compared to the resonant case (c), showing the absence of adiabatic passage. To investigate how the value of the two-photon detuning and the spectral chirp rate are related to the selectivity in the C-CARS method, the end-of-pulse vibrational coherence is plotted as a function of \(\delta\) and \(\alpha_{s}^{\prime}/\tau_{0}^{2}\) in Fig. 8. At two-photon resonance, the coherence is maximum for all the values of the chirp rate. If the detuning is large, a small chirp rate can selectively excite the system, while for smaller values of detuning, the chirp rate needs to be increased in order to suppress the non-resonant background. This provides a way to control the selectivity by adjusting the values of the chirp rate. The plot is symmetric across the diagonal lines, as flipping the signs of both \(\alpha_{s}\) and \(\delta\) will not change the dynamics; it would only result in switching of the diagonal elements in Hamiltonian (15). The plot is also nearly symmetric across the \(\delta=0\) line, indicating that the selectivity holds for both red and blue detunings. When the chirp rate is close to zero, the pulses are transform limited, implying that the maximum coherence and selectivity provided by the chirping scheme are absent in this region. This explains the vertical line present at zero on the abscissa.

### Section Summary

Here we presented a control scheme to prepare the ground electronic-vibrational states in the four-level system of CARS in a maximally coherent superposition. We derived the Hamiltonian for a 'super-effective' two-level system employing the adiabatic approximation. This two-level Hamiltonian is used to derive the conditions for adiabatic passage necessary for the implementation of a selective excitation of spectrally close vibrations. The amplitudes of the Stokes and probe pulses have to be equal to each other and smaller than that of the pump pulse by a factor of \(\sqrt{2}\). The pump pulse should be chirped at the rate opposite to that of the Stokes pulse before the central time and at the same rate after that. The probe pulse has to be chirped at a rate equal to the difference between the chirp rates of the Stokes and the pump pulses for the whole pulse duration. The solutions of the Liouville-von Neumann equation show that vibrational coherence is preserved until the end of the dynamics in the resonant case due to the adiabatic nature of the interaction. At two-photon resonance, vibrational coherence is maximum, 0.5, for a wide range of field parameters, revealing the robustness of the method. Conversely, coherence is almost zero in the off-resonant case for most of the peak Rabi frequency values and the chirp parameters. A comparison of the coherence in the four-level and the two-level systems reveals that the adiabatic approximation is valid except when the chirp rate is almost equal to zero. A dressed-state analysis further reveals the presence of adiabatic passage in the two-photon resonance case. Liquid crystal shapers have been successfully used for chirping of femtosecond pulses. Such a shaper can be used for the realization of the presented C-CARS chirping scheme.
This method can find important applications in sensing and imaging of molecular species because it creates a maximally coherent superposition of vibrational states in coherent anti-Stokes Raman scattering, allowing the system to emit an optimized signal suitable for detection. The robustness of this method against changes in Rabi frequencies and chirp rates is helpful in experiments. The method helps suppress the background species and excite only the desired species; the resolution needed for this distinction can be controlled by the chirp parameter.

## 3 Application of Ultrafast C-CARS for Remote Detection

In this section, we present the theory of the generation of the anti-Stokes signal in CARS and apply the control scheme we developed in the last section to optimize the signal for remote detection of molecules. To resolve the ultrafast dynamics and optimize the output signal in CARS, we use femtosecond control pulses. We take into account the field propagation effects in a cloud of molecules. The motivation is to demonstrate the buildup of the anti-Stokes signal, which may be used as a molecular signature in the backward CARS signal. The theory is based on the solution of the coupled sets of Maxwell's and the Liouville von Neumann equations and focuses on the quantum effects induced in the target molecules by the shaped laser pulse trains. We analyze the enhancement of the backscattered anti-Stokes signal upon multiple scattering of radiation from the target molecules, which modifies the propagating fields. We examine the impact of decoherence induced by spontaneous decay and collisional dephasing. We demonstrate that decoherence due to spontaneous decay can be mitigated by applying the control pulse trains with the train period close to the decay time. The novelty of the study is in the demonstration of the buildup of the coherent anti-Stokes signal as a result of the controllability of vibrational coherence in the target molecules upon propagation of four chirped pulse trains subject to multiple scattering events, in utilizing the pulse train properties to mitigate decoherence, and in implementing the deep Convolutional Network approach to evaluate the phase of the propagating fields, which provides the information about the relative phase change between the pump, the Stokes, the probe and the anti-Stokes pulses. As a case study we use methanol vapor. Methanol molecules have Raman active symmetric \(2837~{}cm^{-1}\) (\(85.05~{}\)THz) and asymmetric \(2942~{}cm^{-1}\) (\(88.20~{}\)THz) stretch modes. These values are within the range of molecular group vibrations in various biochemical species, which span from \(2800\) to \(3100~{}cm^{-1}\), making methanol a suitable choice as a surrogate molecule to allow for non-hazardous experiments in the lab. Thus, the results of the methanol studies would be useful for the development of remote detection schemes as well as for environmental analyses. Various setups are available to perform CARS experiments satisfying the phase-matching conditions to separate the directional anti-Stokes signal from the incident fields. However, for particles having a size comparable to or smaller than the wavelength, the phase-mismatch factor is small, and it was shown that non-phase-matched CARS can provide an effective method to probe complex molecules [22, 69].
For methanol, the ratio \(4\pi\rho_{0}/\lambda\ll 1\), where \(\rho_{0}\sim 10^{-10}m\) is the target molecule diameter; this relaxes the phase-matching condition and permits consideration of the collinear copropagating fields configuration. This section is organized as follows. In Section 3.1, the theoretical framework is formulated. Section 3.2 contains the numerical results for methanol and a discussion. The section concludes with a Summary.

### Theoretical framework

#### 3.1.1 Maxwell - Liouville von Neumann formalism

CARS is a third-order nonlinear process in which three beams, the pump, the Stokes and the probe, at frequencies \(\omega_{p}\), \(\omega_{s}\) and \(\omega_{pr}\) respectively, interact with the electronic vibrational - vibronic - states of the target molecules to generate the anti-Stokes field at frequency \(\omega_{as}=\omega_{p}+\omega_{pr}-\omega_{s}\), Fig.(1). In our control scheme, we use linearly chirped pulse trains which read

\[E_{i}(t)=\sum_{k=0}^{N-1}E_{i0}\exp\{-\frac{(t-t_{c}-kT)^{2}}{2\tau^{2}}\}\cos\{\omega_{i0}(t-t_{c}-kT)+\alpha_{i}\frac{(t-t_{c}-kT)^{2}}{2}\}. \tag{33}\]

Here \(T\) is the pulse train period, \(t_{c}\) is the central time when the peak value of the Gaussian field envelope is \(E_{i0}\), \(\tau\) is the chirp-dependent pulse duration, \(\omega_{i0}\) is the carrier frequency, and \(\alpha_{i}\), \(i=p,s,pr\), is the linear chirp rate of an individual pump, Stokes and probe pulse in the respective pulse train. The values of \(\alpha_{i}\) are chosen in accordance with the control scheme described in the last section, which means \(\alpha_{s}=-\alpha_{p}\) and \(\alpha_{pr}=\alpha_{s}-\alpha_{p}\) for \(t\leq t_{c}\); and \(\alpha_{s}=\alpha_{p}\) and \(\alpha_{pr}=0\) for \(t>t_{c}\)[53]. Such chirped pulses induce the maximum coherence between the vibronic states in the target molecules via adiabatic passage provided the two-photon detuning \(\delta=0\). Any slightly different vibrational mode not satisfying the two-photon resonance condition, \(\delta\neq 0\), is suppressed, as we explained in the last section. The selectivity of the mode excitation is determined by the condition \(\tau\delta\geq 1\). The chirped pulse duration \(\tau\) relates to the transform-limited pulse duration \(\tau_{0}\) as \(\tau=\tau_{0}(1+\alpha^{\prime 2}/\tau_{0}^{4})^{1/2}\), and the temporal (\(\alpha\)) and the spectral (\(\alpha^{\prime}\)) chirps relate as \(\alpha=\alpha^{\prime}\tau_{0}^{-4}/(1+\alpha^{\prime 2}/\tau_{0}^{4})\). The matrix Hamiltonian written in the interaction representation and in the rotating wave approximation (RWA) was given in Eq. (9). To account for the propagation effects in the scattering process, we combine the Liouville von Neumann equation for the states with Maxwell's equations for the fields. The electric displacement is determined as \(D=\epsilon_{0}E+P\), where \(P\) is the induced polarization, the expectation value of the dipole moment per unit volume, and \(\epsilon_{0}\) is the permittivity of free space. The effects arising from magnetization are neglected, giving \(B=\mu_{0}H\), where \(\mu_{0}\) is the permeability of free space.
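A minimal Python sketch of the pulse trains of Eq. (33) with the C-CARS chirp rules applied around each pulse center; the carrier frequency and amplitudes are illustrative placeholders:

```python
import numpy as np

# Minimal sketch of the chirped pulse trains, Eq. (33), with the C-CARS chirp
# switching applied around each pulse center. Units [w21]; the carrier
# frequency w0 and amplitude E0 are illustrative placeholders.
N, T, t_c = 10, 50.0, 0.0           # pulses per train, train period, central time
tau0 = 4.66
alpha_sp = -1.0 * tau0**2           # spectral chirp alpha_s'
tau = tau0 * np.sqrt(1 + alpha_sp**2 / tau0**4)
alpha_s = (alpha_sp / tau0**4) / (1 + alpha_sp**2 / tau0**4)

def chirp_rate(field, t_rel):
    """C-CARS rules: alpha_p = -alpha_s, alpha_pr = 2*alpha_s for t <= t_c;
    alpha_p = alpha_s, alpha_pr = 0 for t > t_c."""
    before = t_rel <= t_c
    if field == "s":
        return alpha_s * np.ones_like(t_rel)
    if field == "p":
        return np.where(before, -alpha_s, alpha_s)
    return np.where(before, 2 * alpha_s, 0.0)   # probe

def train(field, t, E0=1.0, w0=10.0):
    E = np.zeros_like(t)
    for k in range(N):
        tk = t - t_c - k * T
        a = chirp_rate(field, t - k * T)        # switch at this pulse's center
        E += E0 * np.exp(-tk**2 / (2 * tau**2)) * np.cos(w0 * tk + 0.5 * a * tk**2)
    return E

t = np.linspace(-3 * tau, (N - 1) * T + 3 * tau, 100000)
E_p = train("p", t)
E_s = train("s", t, E0=1 / np.sqrt(2))    # Stokes and probe: pump amplitude / sqrt(2)
E_pr = train("pr", t, E0=1 / np.sqrt(2))
```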
The wave equation for a field propagating in the \(\hat{z}\) direction and having linear polarization in the Y plane reads:

\[\left(\frac{\partial}{\partial z}+\frac{1}{c}\frac{\partial}{\partial t}\right)\left(-\frac{\partial}{\partial z}+\frac{1}{c}\frac{\partial}{\partial t}\right)E=-\mu_{0}\frac{\partial^{2}P}{\partial t^{2}} \tag{34}\]

Assuming the field is \(E(z,t)=\frac{1}{2}(E_{0}(z,t)e^{-i[\omega t-kz-\phi(z,t)]}+c.c)\) and considering \(E_{0}(z,t)\) and \(\phi(z,t)\) as slowly varying functions of position and time, Eq. (34) can be written as:

\[-2k\left(\frac{\partial E_{0}(z,t)}{\partial z}+\frac{1}{c}\frac{\partial E_{0}(z,t)}{\partial t}\right)\sin\left(\omega t-kz-\phi(z,t)\right)=-\mu_{0}\frac{\partial^{2}}{\partial t^{2}}P(z,t) \tag{35}\]

Substituting \(P(z,t)=\frac{1}{2}(P_{0}(z,t)e^{-i[\omega t-kz-\phi(z,t)]}+c.c)\) in the RHS, Eq. (35) becomes:

\[-2k(\frac{\partial E_{0}(z,t)}{\partial z}+\frac{1}{c}\frac{\partial E_{0}(z,t)}{\partial t})=\mu_{0}\omega^{2}Im\left[P_{0}(z,t)\right]. \tag{36}\]

In quantum theory, the macroscopic polarization P is given by the expectation value of the electric dipole moment \(\hat{\mu}\): \(\langle P(z,t)\rangle=N_{s}\,Tr\{\rho(z,t)\,\mu\}\), where \(N_{s}\) is the molecular density of the target molecules. Applied to the four-level system of CARS, the four components of P can be written as: \(P_{0p}(z,t)=N_{s}\mu_{13}\rho_{13}(z,t)\), \(P_{0s}(z,t)=N_{s}\mu_{23}\rho_{23}(z,t)\), \(P_{0pr}(z,t)=N_{s}\mu_{24}\rho_{24}(z,t)\), and \(P_{0as}(z,t)=N_{s}\mu_{14}\rho_{14}(z,t)\). The space derivative can be eliminated by transforming to the retarded time \(\bar{t}=t-z/c\), in which frame \(\frac{\partial}{\partial z}+\frac{1}{c}\frac{\partial}{\partial t}\rightarrow\frac{1}{c}\frac{\partial}{\partial\bar{t}}\). Using the above expressions for the polarizations, Eq. (36) casts into:

\[\frac{1}{c}\frac{\partial E_{q}}{\partial t}=-N_{s}\mu_{0}\mu_{ij}\frac{E_{q}(t)}{\hbar}\mathop{\mathrm{Im}}\{\rho_{ij}\} \tag{37}\]

where \(q=p,s,pr,as\) and \(i,j\) are the indexes of the states involved in the respective transitions.

Figure 9: The Gaussian distribution of the target molecules, shown in (a) based on the density of molecules, is converted in (b) into the multi-layer model; molecules are given different colors to distinguish the layers. Each layer in the multi-layer model is characterized by the fractional number density \(\eta\) and a distance to its adjacent layer \((\Delta z)_{\eta}\). If \(N_{s}\) is the number of the target molecules and \(N\) is the number of total molecules associated with the layer, the fractional number density of that layer is defined as \(\eta=N_{s}/N\). The distance between the adjacent layers \((\Delta z)_{\eta}\) changes according to the Gaussian distribution of molecules. The incoming pulses go through a series of scattering events with the target molecules within each layer to produce a detectable backscattered CARS signal.
The following transformations are applied to the density matrix elements:

\[\rho_{12} =\tilde{\rho}_{12}e^{i(\alpha_{p}-\alpha_{s})t^{2}/2}\] \[\rho_{13} =\tilde{\rho}_{13}e^{i(\Delta_{s}t+\alpha_{p}t^{2}/2)}\] \[\rho_{14} =\tilde{\rho}_{14}e^{i\Delta_{as}t}\] \[\rho_{23} =\tilde{\rho}_{23}e^{i(\Delta_{s}t+\alpha_{s}t^{2}/2)}\] \[\rho_{24} =\tilde{\rho}_{24}e^{i(\Delta_{as}t+\alpha_{pr}t^{2}/2)}\] \[\rho_{34} =\tilde{\rho}_{34}e^{i(\Delta_{as}-\Delta_{s})t-i\alpha_{p}t^{2}/2}\]

and the 'tilde' is removed. Then the density matrix elements \(\rho_{ij}\) for the corresponding transitions are found using the Liouville von Neumann equation \(i\hbar\dot{\rho}=[H,\rho]\) with the Hamiltonian from Eq. (9). After applying the rotating wave approximation and the adiabatic elimination of the excited states, assuming that \(\dot{\rho}_{13},\dot{\rho}_{14},\dot{\rho}_{23},\dot{\rho}_{24},\dot{\rho}_{34}\approx 0,\ \rho_{34}\approx 0,\ \rho_{33},\rho_{44}\ll\rho_{11},\rho_{22}\) and \(\dot{\rho}_{33},\dot{\rho}_{44}\approx 0\), and using the control condition on the chirp parameters \(\alpha_{s}-\alpha_{p}=\alpha_{pr}\), the density matrix elements \(\rho_{13},\rho_{23},\rho_{14},\rho_{24}\) read in terms of \(\rho_{11},\rho_{22}\) and \(\rho_{12}\) in the field-interaction representation as follows:

\[\rho_{13} =\frac{1}{2(\Delta_{s}+\alpha_{p}t)}\Omega_{p0}(t)\rho_{11}+\frac{1}{2(\Delta_{s}+\alpha_{p}t)}\Omega_{s0}(t)\rho_{12}\] \[\rho_{23} =\frac{1}{2(\Delta_{s}+\alpha_{s}t)}\Omega_{s0}(t)\rho_{22}+\frac{1}{2(\Delta_{s}+\alpha_{s}t)}\Omega_{p0}(t)\rho_{21} \tag{38}\] \[\rho_{14} =\frac{1}{2\Delta_{as}}\Omega_{as0}(t)\rho_{11}+\frac{1}{2\Delta_{as}}\Omega_{pr0}(t)\rho_{12}\] \[\rho_{24} =\frac{1}{2(\Delta_{as}+\alpha_{pr}t)}\Omega_{pr0}(t)\rho_{22}+\frac{1}{2(\Delta_{as}+\alpha_{pr}t)}\Omega_{as0}(t)\rho_{21}\,.\]

Further, substituting Eq. (38) into Eq. (37) and rewriting the equations in terms of Rabi frequencies lead to the following Maxwell's equations:

\[\begin{split}\frac{\partial\Omega_{p0}}{\partial t}&=c\frac{\partial\Omega_{p0}}{\partial z}=-\frac{\eta}{2(\Delta_{s}+\alpha_{p}t)}\kappa_{13}\omega_{p}\Omega_{s0}(t)\operatorname{Im}[\rho_{12}]\\ \frac{\partial\Omega_{s0}}{\partial t}&=c\frac{\partial\Omega_{s0}}{\partial z}=\frac{\eta}{2(\Delta_{s}+\alpha_{s}t)}\kappa_{23}\omega_{s}\Omega_{p0}(t)\operatorname{Im}[\rho_{12}]\\ \frac{\partial\Omega_{pr0}}{\partial t}&=c\frac{\partial\Omega_{pr0}}{\partial z}=\frac{\eta}{2(\Delta_{as}+\alpha_{pr}t)}\kappa_{24}\omega_{pr}\Omega_{as0}(t)\operatorname{Im}[\rho_{12}]\\ \frac{\partial\Omega_{as0}}{\partial t}&=c\frac{\partial\Omega_{as0}}{\partial z}=-\frac{\eta}{2(\Delta_{as})}\kappa_{14}\omega_{as}\Omega_{pr0}(t)\operatorname{Im}[\rho_{12}].\end{split} \tag{39}\]

Here \(\kappa_{ij}=n\mu_{0}\mu_{ij}^{2}c^{2}/(3\hbar)\), \(n\) is the number density of molecules given by \(N_{A}/V_{0}\) under the ideal gas conditions, where \(N_{A}\) is Avogadro's number, \(V_{0}\) is the molar volume, and \(\eta\) is the fractional number density, which will be described in detail in the next section. The factor \(1/3\) comes from the averaging over all orientations of the molecular dipole, \(\langle\mu_{x}\mu_{y}\rangle=\langle\mu_{x}\mu_{z}\rangle=\langle\mu_{y}\mu_{z}\rangle=0\) and \(\langle\mu_{j}^{2}\rangle=(1/3)\mu^{2},j=x,y,z\)[70]. Considering the dipole moment of methanol, \(\mu_{ij}=1.70D\), the constant \(\kappa_{ij}\) is found to be \(3.636\times 10^{-3}[\omega_{21}]\).
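The quoted value of \(\kappa_{ij}\) can be reproduced directly in SI units; a minimal sketch, assuming the ideal-gas molar volume \(V_{0}\approx 22.4\) L/mol at standard conditions and \(\omega_{21}=85.05\) THz as the frequency unit:

```python
import numpy as np

# Sketch: evaluate kappa_ij = n * mu0 * mu_ij^2 * c^2 / (3*hbar), Eq. (39),
# in units of the vibrational frequency w21 = 85.05 THz.
# Assumption: ideal-gas molar volume V0 ~ 22.4 L/mol at standard conditions.
mu0   = 4e-7 * np.pi    # vacuum permeability [T m/A]
c     = 2.998e8         # speed of light [m/s]
hbar  = 1.055e-34       # reduced Planck constant [J s]
N_A   = 6.022e23        # Avogadro's number [1/mol]
V0    = 22.4e-3         # molar volume [m^3/mol] (assumed)
debye = 3.336e-30       # 1 Debye in C m
mu_ij = 1.70 * debye    # methanol transition dipole moment
w21   = 85.05e12        # symmetric stretch mode frequency [Hz]

n = N_A / V0                                    # number density [1/m^3]
kappa = n * mu0 * mu_ij**2 * c**2 / (3 * hbar)  # [1/s]
print(kappa / w21)      # ~3.6e-3, matching the quoted 3.636e-3 [w21]
```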
The Eqs.(39) coupled with the multi-layer model described below are numerically solved using the transform-limited and the control pulse trains to find the scattered anti-Stokes signal. Note that the right side of Eqs.(39), which describes the induced polarization in the target molecules, depends, of all the density matrix elements, only on the imaginary part of the coherence \(\rho_{21}\). Thus, the maximum value of this coherence provides the optimal amplitude of the scattered signal. To analyze the impact of decoherence due to spontaneous decay and collisional dephasing of molecules, the Liouville von Neumann equations are augmented by the relaxation terms.

Figure 10: An example of the multi-layer model of a molecular distribution for the width of the Gaussian distribution in Eq.(42) of the target molecules \(\sigma=0.19\) m. Here, each of the 200 vertical lines represents the location of a scattering event, and the scattering layers become more dense as the density peaks at the center.

Spontaneous decay from state \(|i\rangle\) to state \(|j\rangle\) is denoted by \(\gamma_{ij}\), while collisional dephasing between states \(|i\rangle\) and \(|j\rangle\) is denoted by \(\Gamma_{ij}\). Spontaneous decay impacts state populations and coherence via the diagonal and off-diagonal reduced density matrix elements respectively, while collisional dephasing is assumed to be weak enough not to change state populations but to cause dipole phase interruption via the off-diagonal reduced density matrix elements. Vibrational energy relaxation [71, 72] within the ground electronic state is accounted for through the parameter \(\gamma_{21}\). We neglect vibrational energy relaxation within the excited electronic state since the respective vibrational states \(|3\rangle\) and \(|4\rangle\) are negligibly populated during the dynamics. Vibrational energy relaxation is an important topic in chemical physics, since it relates to fundamental reaction processes [73, 74], conformational changes [75] or spectroscopic measurements [76, 77], and its understanding is the first step toward controlling these phenomena. The augmented equations read:

\[\dot{\rho}_{11} =-i/\hbar[H,\rho]_{11}+\gamma_{21}\rho_{22}+\gamma_{31}\rho_{33}+\gamma_{41}\rho_{44} \tag{40}\] \[\dot{\rho}_{12} =-i/\hbar[H,\rho]_{12}-(\gamma_{21}/2+\Gamma_{21})\rho_{12}\] \[\dot{\rho}_{13} =-i/\hbar[H,\rho]_{13}-(\gamma_{31}/2+\gamma_{32}/2+\gamma_{21}/2+\gamma_{41}/2+\Gamma_{31})\rho_{13}\] \[\dot{\rho}_{14} =-i/\hbar[H,\rho]_{14}-(\gamma_{41}/2+\gamma_{42}/2+\gamma_{21}/2+\gamma_{31}/2+\Gamma_{41})\rho_{14}\] \[\dot{\rho}_{22} =-i/\hbar[H,\rho]_{22}-\gamma_{21}\rho_{22}+\gamma_{32}\rho_{33}+\gamma_{42}\rho_{44}\] \[\dot{\rho}_{23} =-i/\hbar[H,\rho]_{23}-(\gamma_{31}/2+\gamma_{32}/2+\gamma_{21}/2+\gamma_{42}/2+\Gamma_{32})\rho_{23}\] \[\dot{\rho}_{24} =-i/\hbar[H,\rho]_{24}-(\gamma_{41}/2+\gamma_{42}/2+\gamma_{21}/2+\gamma_{32}/2+\Gamma_{42})\rho_{24}\] \[\dot{\rho}_{33} =-i/\hbar[H,\rho]_{33}-(\gamma_{31}+\gamma_{32})\rho_{33}\] \[\dot{\rho}_{34} =-i/\hbar[H,\rho]_{34}-\Gamma_{43}\rho_{34}\] \[\dot{\rho}_{44} =-i/\hbar[H,\rho]_{44}-(\gamma_{41}+\gamma_{42})\rho_{44}.\]

#### 3.1.2 The target molecules distribution

We consider the target molecules as a cluster of molecules with its center located a large distance away from the source and its density following the Gaussian distribution. We introduce a multi-layer model to analyze the propagation and scattering of the pump, Stokes, probe and anti-Stokes pulses through this spatial distribution of molecules.
The model mimics the distribution of a cloud of molecules in the air and allows us to solve the propagation and scattering tasks in an elegant and simple way. In this model, each layer is characterized by the fractional number density \(\eta\) and a distance to its adjacent layer \((\Delta z)_{\eta}\). The distance between the layers changes according to the Gaussian distribution of molecules. If \(N_{s}\) is the number of the target molecules and \(N\) is the number of total molecules associated with the layer, the fractional number density of that layer is defined as \(\eta=N_{s}/N\). Suppose all target molecules in the central layer are arranged vertically next to each other with no background molecules between them; then the area occupied by these molecules is \(S=\pi(d/2)^{2}N_{s}\), giving \(N_{s}=4S/(\pi d^{2})\), where \(d\) is an approximate diameter of the target molecule. If \((\Delta z)_{\eta}\) is the width of this layer, the total number of molecules \(N\) is \((S(\Delta z)_{\eta}/V_{0})N_{A}\), where \(V_{0}\) is the molar volume and \(N_{A}\) is Avogadro's number. This gives

\[\eta=\frac{N_{s}}{N}=\frac{\frac{4S}{\pi d^{2}}}{(\frac{S(\Delta z)_{\eta}}{V_{0}})N_{A}}=\frac{4V_{0}}{\pi d^{2}(\Delta z)_{\eta}N_{A}}. \tag{41}\]

We consider \(N_{s}\) to be constant within each layer. Now, we take \(N=N_{s}\) for the central layer and calculate its width. For any subsequent layer the total number of molecules is different. Given \(N_{s}\), the increase in the layer width by \(\Delta z_{\eta}\) increases the layer's volume and, thus, decreases the target's density by the factor \((1+\Delta z_{\eta}/\Delta z_{0})\). The width of each sequential layer is calculated using Eq.(41). Consider that the density changes as per the Gaussian distribution function having the width \(\sigma\) and its maximum value at the center \(z_{0}\) of the cluster of molecules as

\[\eta=\frac{N_{s}V_{0}}{SN_{A}\sqrt{2\pi}\sigma}e^{-(z-z_{0})^{2}/(2\sigma^{2})}. \tag{42}\]

The maximum density \(\eta_{0}\) of the central layer is found by substituting \(z=z_{0}\) in Eq.(42). This value of \(\eta\) is then substituted in Eq.(41) to find the width of the central layer \((\Delta z)_{\eta}=(\Delta z)_{0}\). Once we find the width of the central layer, the \(\eta\) of the adjacent layer is found by substituting the new value of \(z\), \(z_{0}+(\Delta z)_{\eta}\), in Eq.(42). This process is repeated to find the entire density distribution of the cluster of molecules. The distance between scattering layers \((\Delta z)_{\eta}\) increases towards both ends of the distribution. Thus, we converted the three-dimensional cluster of molecules into a set of two-dimensional layers of molecules. Fig.(10) shows a set of layers, the distance between them and the density associated with each layer. In numerical calculations, we consider \(\sigma=0.2m\) with its center 1 km away from the source, which together with \(\eta_{0}\) determines the total number of layers to be equal to 199.

#### 3.1.3 Propagation through the atmosphere

For completeness of the picture, the effects of the atmosphere on the propagating pulses need to be taken into account. The propagation of femtosecond pulses through the atmosphere under various air conditions has been broadly investigated, e.g. [78, 79]. Various effects during the propagation, including dispersion and nonlinear self-focusing, are not within the scope of this research.
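Returning to the multi-layer construction of Sec. 3.1.2, Eqs. (41)-(42) can be implemented by walking outward from the central layer. Below is a minimal Python sketch in which the molecular diameter \(d\), the molar volume \(V_{0}\) and the peak fractional density \(\eta_{0}\) are illustrative assumptions (the actual \(\eta_{0}\) follows from the prefactor of Eq. (42)):

```python
import numpy as np

# Minimal sketch of the multi-layer model, Eqs. (41)-(42): starting from the
# central layer, walk outward, giving each layer the width consistent with the
# local Gaussian density. d, V0 and eta0 are illustrative assumptions.
N_A   = 6.022e23      # Avogadro's number [1/mol]
V0    = 22.4e-3       # molar volume [m^3/mol] (assumed, standard conditions)
d     = 1e-10         # approximate target molecule diameter [m]
sigma = 0.2           # width of the Gaussian distribution [m]
z0    = 1000.0        # center of the cloud, 1 km from the source [m]
eta0  = 1e-3          # assumed peak fractional density of the central layer

def layer_width(eta):
    # Inverting Eq. (41): (dz)_eta = 4 V0 / (pi d^2 eta N_A)
    return 4 * V0 / (np.pi * d**2 * eta * N_A)

# Walk outward on one side of the center; the other side is symmetric.
z, layers = z0, [(z0, eta0, layer_width(eta0))]
while True:
    z += layers[-1][2]                                  # step by the last width
    eta = eta0 * np.exp(-(z - z0)**2 / (2 * sigma**2))  # Eq. (42)
    if eta < 1e-3 * eta0:                               # truncate the tail
        break
    layers.append((z, eta, layer_width(eta)))

# The total layer count depends on the assumed eta0; the text quotes 199
# layers for sigma = 0.2 m with the physical normalization.
print(len(layers), "layers on one side of the center")
```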
We use Beer's law under the ideal conditions to account for the change in the amplitude of the pulses as they propagate through the atmosphere [80]. Assuming there is no turbulence and the air is homogeneous, the intensity of the pulse trains attenuates exponentially due to scattering and absorption as they propagate. The intensity \(I\) as a function of the distance \(z\) can be written as \(I(z)=I_{0}e^{-\beta_{e}z}\), where \(\beta_{e}\) is the extinction coefficient that contains factors of both scattering and absorption. We use the clear air atmospheric coefficient of \(0.55~{}km^{-1}\) in numerical calculations [81], as shown later.

### Numerical Results

Numerical analyses of the effects of the pulse shaping on the optimization of quantum coherence and mitigation of decoherence in the target molecules, as well as the impact of multiple scattering from the target molecules, are performed using the methanol molecule and addressing the Raman active symmetric mode having frequency \(2837~{}cm^{-1}\) (\(85.05\) THz) [65]. This mode is chosen as a frequency unit \([\omega_{21}]\). The control scheme provides the selectivity of excitation of Raman active modes with the resolution up to \(1/\tau\), where \(\tau\) is a chirped pulse duration, which is about 2 to \(3~{}cm^{-1}\). Thus, the asymmetric stretch mode having frequency \(2942~{}cm^{-1}\) (\(88.20\) THz) is not excited by the control scheme. The selectivity of excitation is not preserved when broadband, transform-limited pulse trains are applied. First, we present the results of the investigation of the dependence of the population and coherence on the peak Rabi frequency of the control pulses and reveal an adiabatic type of solution leading to the maximum vibrational coherence. Then we analyze the four-level system dynamics subject to the interaction with the control pulse trains in the presence of decoherence and demonstrate a sustainable value of vibrational coherence. Finally, we show the solution of the Maxwell - Liouville von Neumann equations for the control pulse trains interacting with an ensemble of methanol molecules, illustrating the growth of the vibrational coherence and the anti-Stokes component of the propagating fields. Where appropriate, we compare the results with those for the transform-limited pulse trains interacting with the symmetric stretch mode in the CARS configuration.

#### 3.2.1 Analysis of the state populations and coherence

Fig.(11(a)-(d)) shows the dependence of the populations and coherence as a function of the peak Rabi frequency for the case of the transform-limited pump, Stokes and probe pulses with zero and non-zero one-photon detuning, (a),(c), and the control pulses with zero and non-zero one-photon detuning, (b),(d), respectively. The envelope of the Rabi frequency is the same for all three transform-limited pulses, which are also used as an initial condition for chirping in the control scheme. The values of the Rabi frequency on the abscissa are presented for the transform-limited pulse. Decoherence is not taken into account to get a clear picture of the dependence of the state population and coherence on the Rabi frequencies. Under the one-photon resonance condition, shown for the transform-limited pulses in (a) and for the chirped pulses in (b), the population of the excited states is significant, which prevents achieving half population in each of the ground state \(|1\rangle\) and the excited state \(|2\rangle\).
In the transform-limited pulse scenario in (a), coherence periodically becomes zero, which is not the case for the control pulses solution shown in (b). Such a behavior in (a) is due to the pulse-area type of solution, when the probability amplitude of the states depends on the pulse area, with a pulse area of \(\pi\) leading to population inversion and of \(2\pi\) to population return. In contrast, the control pulse scheme provides an adiabatic type of response in the four-level system with a nonzero value of coherence, which depends on the strength of the fields as shown in (b). The one-photon detuning \(\Delta_{s}=\Delta_{as}=\Delta=1.0[\omega_{21}]\) minimizes the transitional population of the excited states \(|3\rangle\) and \(|4\rangle\) for both the transform-limited and the control pulse scenario, shown in (c) and (d) respectively. The one-photon detuning shifts the point of the first zero coherence toward higher values of the Rabi frequencies in the transform-limited case in (c). In the control case in (d), the first point of equal population giving the maximum vibrational coherence occurs at the peak Rabi frequency \(\Omega_{p0}=0.75[\omega_{21}]\) and is achieved due to two-photon adiabatic passage with a negligible involvement of the excited state manifold in the population dynamics. Beyond this point, the coherence varies within the range from 0.5 to 0.35 for the peak Rabi frequency \(\Omega_{p0}=1[\omega_{21}]\) and higher. Once coherence is built, it never drops to zero, in contrast to the transform-limited pulses solution. Thus, the detuned chirped pulse control scheme is more robust for the applications in CARS microscopy and spectroscopy because it provides one with a sustainable value of coherence resilient to fluctuations in the intensity of the Raman fields.

Figure 11: The population and coherence in the four-level system as a function of the peak Rabi frequency \(\Omega_{p}[\omega_{21}]\), which is the same for the pump, Stokes and probe pulses, \(\omega_{21}=85\) THz. Parameters used in the calculations are \(\tau_{0}=4.66[\omega_{21}^{-1}],\Gamma=\gamma=0\). In (a) the transform-limited pump, Stokes and probe pulses with zero one-photon detuning are applied, \(\Delta_{s}=\Delta_{as}=\Delta=0\); (b) the control pump, Stokes and probe pulses with zero one-photon detuning are applied, \(\alpha_{s}^{\prime}/\tau_{0}^{2}=-1.0,\Delta=0\); (c) the transform-limited pulses with non-zero one-photon detuning are applied, \(\Delta=1.0[\omega_{21}]\); (d) the control pulses with non-zero one-photon detuning are applied, \(\alpha_{s}^{\prime}/\tau_{0}^{2}=-1.0,\Delta=1.0[\omega_{21}]\). Once coherence is built by the control pulses, it never drops to zero, in contrast to the transform-limited pulses solution. The detuned control scenario is robust for applications in CARS microscopy and spectroscopy because it provides coherence resilient to fluctuations in the intensity of the Raman fields.

To demonstrate adiabatic passage generated under the condition of nonzero one-photon detuning, a time-dependent picture is presented in Fig.(12(a-d)). The time dependence of the population and coherence in the four-level system interacting with the transform-limited pump, Stokes and probe pulses, (a),(c), and with the control pulses, (b),(d), shows population dynamics and coherence for two values of the peak Rabi frequency, \(\Omega_{p}=1.08\) and \(1.5[\omega_{21}]\). The value of the Rabi frequency \(\Omega_{p0}=1.08[\omega_{21}]\) is chosen according to Fig.(11(d)), which generates the second equal population between the ground state \(|1\rangle\) and the excited state \(|2\rangle\) and the maximum coherence \(\rho_{21}\) in the control pulses scenario. It leads to adiabatic population transfer from the ground state \(|1\rangle\) to the excited state \(|2\rangle\). Meanwhile, the value of the peak Rabi frequency \(\Omega_{p0}=1.5[\omega_{21}]\) is chosen because it gives the first zero coherence for the transform-limited pulse scenario in Fig.(11(c)), which is not the case for the control scheme in Fig.(11(d)). The parameter \(\gamma\) is non-zero in order to see how spontaneous decay impacts the state dynamics for the chosen representative values of the Rabi frequency. The time dependence of the populations and a significant coherence are still observed in (d), demonstrating the benefits of the control scheme.

Figure 12: Dynamics of the population of the four states \(\rho_{11}\) (dashed red), \(\rho_{22}\) (dash-dotted green), \(\rho_{33}\) (dotted black), \(\rho_{44}\) (solid yellow) and the coherence \(\rho_{21}\) (solid black) in the four-level system interacting with the transform-limited pump, Stokes and probe pulses in (a),(c), and with the control pulses in (b),(d), \(\alpha_{s}^{\prime}/\tau_{0}^{2}=-1.0\), for the peak Rabi frequency of the pump, the Stokes and the probe pulses (before chirping for the control scheme) \(\Omega_{p}=1.08\)\([\omega_{21}]\) in (a),(b), and 1.5 \([\omega_{21}]\) in (c),(d). Other parameters are \(\tau_{0}=4.66\)\([\omega_{21}^{-1}]\), all \(\gamma_{ij}=\gamma=1.176\times 10^{-2}[\omega_{21}],\Gamma=0,\Delta=1.0[\omega_{21}]\).

#### 3.2.2 Analysis of the system dynamics subject to the interaction with the control pulse trains in the presence of decoherence

We analyze the impact of decoherence in the four-level system through its interaction with the control pump, Stokes and probe pulse trains, each consisting of ten pulses, in Fig.(13). The results in (a-d) are given for the peak Rabi frequency \(\Omega_{p0}=1.08[\omega_{21}]\), and the results in (e-h) for \(\Omega_{p0}=1.5[\omega_{21}]\). The value \(\Omega_{p0}=1.08[\omega_{21}]\) provides the maximum coherence (\(1/2\)) for the control pulse and a high value of coherence (\(0.45\)) for the transform-limited pulse according to Fig.(11c,d), and \(\Omega_{p0}=1.5[\omega_{21}]\) gives contrasting values of coherence for the control and the transform-limited pulse application, \(0.39\) and \(0.07\) respectively. We analyze the controllability and sustenance of vibrational coherence in the four-level system subject to a fast spontaneous decay and collisions (\(\sim 10fs\)); then we investigate the impact of vibrational relaxation considering the decay on the order of \(1ps\) and demonstrate how the loss of coherence due to this process may be mitigated by periodically restoring the population of the excited vibrational state \(|2\rangle\) of the ground electronic state; and then we compare this result to the case when collisional dephasing is on the same order of magnitude (\(\sim 1ps\)).
Fast spontaneous decay and collisional dephasing rates (\(10^{14}Hz\)) of the transitional excited states \(|3\rangle\) and \(|4\rangle\) impact the population dynamics and coherence even though these states are negligibly populated, as shown in Fig.(13(a),(e)). Here the populations and coherence \(\rho_{21}\) are presented as a function of time for \(\gamma_{4i}=\gamma_{3i}=\Gamma_{4i}=\Gamma_{3i}=10^{14}Hz\), i=1,2. The populations of states \(|1\rangle\) (\(\approx 0.6\)) and \(|2\rangle\) (\(\approx 0.4\)) are stable between pulses, but, even though the \(|3\rangle\) and \(|4\rangle\) states are negligibly populated owing to the control scheme applied, their fast decoherence while the pulse is on (the chirped pulse duration is \(55fs\)) impacts the populations of states \(|2\rangle\) and \(|1\rangle\), and the coherence \(\rho_{21}\) periodically drops to \(\sim 0.02\). Between pulses, such a fast relaxation from the excited states leads to a reduced but stable value of coherence, \(\rho_{21}\sim 0.2\).

Fig.(13(b),(f)) shows the system dynamics in the presence of the vibrational relaxation of state \(|2\rangle\) described by \(\gamma_{21}=10^{12}Hz\). Spontaneous decay from the excited states is also present, \(\gamma_{4i}=\gamma_{3i}=\gamma_{21}=10^{12}Hz;\Gamma_{4i}=\Gamma_{3i}=\Gamma_{21}=0.\) The figure demonstrates that the coherence \(\rho_{21}\) is periodically built up by the chirped pulses and then insignificantly reduces its value between the pulses in the trains. The spontaneous decay rate \(\gamma=1THz\) from the excited state \(|4\rangle\) to \(|3\rangle\) does not make any contribution to the population dynamics and was neglected. Because the pulse train period is chosen to match the decay time, \(T=1/\gamma_{21}=1ps\) (and there is no collisional dephasing, \(\Gamma_{ij}=0\)), the population of state \(|2\rangle\), decreased due to spontaneous decay, is periodically restored by the control fields, providing a sustainable value of coherence. When vibrational relaxation is much faster (e.g., \(10^{14}Hz\)) than the pulse repetition rate (\(10^{12}Hz\)), the coherence \(\rho_{21}\) becomes negligibly small between pulses (not shown here).

Figure 13: Dynamics of the population of the four states \(\rho_{11}\) (dashed red), \(\rho_{22}\) (dash-dotted green), \(\rho_{33}\) (dotted black), \(\rho_{44}\) (solid yellow) and the coherence \(\rho_{21}\) (solid black) in the four-level system interacting with the control pulse trains having a repetition rate of 1 THz and peak Rabi frequency in (a-d) equal to \(\Omega_{p0}=1.08[\omega_{21}]\), and in (e-h) equal to \(\Omega_{p0}=1.5[\omega_{21}]\). In (a),(e) \(\gamma_{4i}=\gamma_{3i}=\Gamma_{4i}=\Gamma_{3i}=10^{14}Hz\), i=1,2, with no vibrational relaxation, \(\gamma_{21}=\Gamma_{21}=0\); in (b),(f) \(\gamma_{4i}=\gamma_{3i}=\gamma_{21}=10^{12}Hz;\Gamma_{4i}=\Gamma_{3i}=\Gamma_{21}=0\); in (c),(g) \(\gamma_{4i}=\gamma_{3i}=\Gamma_{4i}=\Gamma_{3i}=\gamma_{21}=\Gamma_{21}=10^{12}Hz\); and in (d),(h) the four-level system interacting with the transform-limited pulse trains and \(\gamma_{4i}=\gamma_{3i}=\Gamma_{4i}=\Gamma_{3i}=\gamma_{21}=\Gamma_{21}=10^{12}Hz\). The rest of the field parameters are \(\tau_{0}=4.66[\omega_{21}^{-1}],\Delta_{s}=\Delta_{as}=1.0[\omega_{21}]\) and \(\alpha_{s}^{\prime}/\tau_{0}^{2}=-1.0\) for the control pulse scenario.
Switching on collisional dephasing such that \(\Gamma_{21}=\gamma_{21}=1THz\) results in a more dramatic reduction of the coherence \(\rho_{21}\), as shown in Fig.(13(c),(g)), because collisional dephasing, entering only through the off-diagonal density matrix elements, cannot be mitigated by this mechanism. However, the resultant coherence \(\rho_{21}\) does not drop to zero between pulses. This is due to the choice of the pulse repetition rate as well as the control scheme leading to a negligible population of the excited states \(|3\rangle\) and \(|4\rangle\) in the dynamics. In contrast, the simultaneous application of the transform-limited pump, Stokes and probe pulse trains, shown in Fig.(13(d),(h)), results in a strong dependence of the coherence on the peak Rabi frequency in accordance with the pulse-area solution. The simultaneous application of the transform-limited pulses in this calculation aims to compare with the chirped pulses scenario. (Note that within a different control scheme, e.g., F-STIRAP [89], which imposes a time delay between the Stokes and the pump pulses, the transform-limited pulses generate the maximum coherence.) The results of the calculations presented in Fig.(13) for various values of the Rabi frequency of the control pulses and the transform-limited pulses led to the conclusion that for the control scheme there is vibrational coherence in the system for any value of the peak Rabi frequency within the adiabatic range, while for the related transform-limited pulse scenario this is not the case.

#### 3.2.3 Impact of Beer's law on the average intensity

We apply Beer's law under the ideal conditions to evaluate the change in the amplitude of the anti-Stokes signal as the pulses propagate through the atmosphere. We apply ten transform-limited pulses in the pulse train. Numerical analysis shows that the amplitude of the pump, Stokes and probe pulse trains is reduced upon propagation, while the average intensity of the anti-Stokes pulse trains is amplified, as shown in Fig.(14) for propagation through 699 layers for both cases, with and without the impact from the air. The intensity of the anti-Stokes pulse trains in the presence of the air is reduced due to the scattering and absorption effects.

#### 3.2.4 Analysis of the Maxwell - Liouville von Neumann equations and demonstration of the anti-Stokes signal generation

Using Maxwell's equations, Eqs.(39), coupled to the Liouville von Neumann equations, Eqs.(40), we numerically analyzed the propagation effects of the control pump, Stokes, probe and the generated anti-Stokes fields scattered from the target molecules and observed the amplification of the anti-Stokes component. A detailed description of this analysis with chirped pulses, where a deep learning technique is implemented, is included in the next section. We also analyzed propagation effects using the transform-limited pump, Stokes, and probe pulse trains having the peak Rabi frequency \(\Omega_{p(s,pr)}=85THz=\omega_{21}\) and being largely detuned from the one-photon transitions, with the detuning \(\Delta_{s}=\Delta_{as}=\Delta=10\omega_{21}=850THz\) ensuring the adiabatic regime. We consider 10 pulses in the pulse train having period \(T=1ps\). The increase of the peak value of the anti-Stokes Rabi frequency \(\Omega_{as}(t)\) by two orders of magnitude is observed 1 meter (199 layers) away from the peak molecular density.
Coherence is increasing from pulse to pulse and the population is adiabatically transferred from the ground state \(|1\rangle\) to the excited state \(|2\rangle\) in the four-level system during the interaction with four fields in the Figure 14: An average intensity of the anti-Stokes pulses as a function of the number of scattering layers. The pulses are calculated by modeling the propagation of a transform limited pulse train containing 10 pulses using Beer’s law. The black solid curve represents the change in the average intensity as pulses undergo scattering through layers for the case of \(\beta_{e}=0\) (without taking air into consideration), and red dashed curves shows the intensity for \(\beta_{e}=0.55\)\(km^{-1}\). The one-photon detuning is \(\Delta=1[\omega_{21}]\) in (a), and \(\Delta=10[\omega_{21}]\) in (b). The width of the target molecules distribution is \(\sigma=1m\). The depreciation of intensity is due to scattering and absorption in the air. CARS configuration. Here adiabatic regime is achieved due to a large one-photon detuning \(\Delta=10\omega_{21}\) and the choice of the peak Rabi frequency \(\Omega_{p(s,pr)}=\omega_{21}\), which result in a negligible population of the transitional states \(|3\rangle\) and \(|4\rangle\). From the results above it follows that the implementation of the control pulse trains in the four-wave mixing of CARS is more robust for the generation of a sustainable anti-Stokes backscattered signal compared to the use of a set of transform-limited pulses. This is due to the adiabatic regime of light-matter interaction which preserves vibrational coherence and facilitates a build-up of the anti-Stokes signal. For the case of the phase-matching conditions relaxed, given the size of the molecules is less than the wavelength of the incident fields, a collinear copropagating configuration of CARS may be created using the methanol as a surrogate target. Because the anti-Stokes radiation is generated as a result of the stimulated Raman scattering process, it is highly directional and is built up in the forward and the backward directions dominantly [22, 90]. Therefore, the backscattered anti-Stokes signal will reach a detector near the laser source. The following parameters of the fields may be used in an experiment: the pulse duration of order \(100fs\), the peak field amplitude of Figure 15: Scattering dynamics using the transform-limited pump, Stokes, and probe pulse trains with the peak Rabi frequency equal to the frequency between states \(|1\rangle\) and \(|2\rangle\), \(\Omega_{p(s,pr)}=\omega_{21}\), and been largely detuned from the one-photon transitions, the detuning is \(\Delta_{s}=\Delta_{as}=\Delta=10\omega_{21}=850THz\) for the adiabatic regime. There are 10 pulses in each pulse train. The first column shows ten anti-Stokes pulses (top), the state coherence (middle) and populations (bottom) after the first scattering event; the second column shows the same after the 199th scattering event. Parameters \(\sigma=0.2m\); 199 layers provide a distance of 1 m away from the peak molecular density; \(\tau_{0}=54.8fs\); \(T=1ps\). 
\(1.6\times 10^{9}V/m\); the control pulse chirps obeying the relationship \(\alpha_{s}=-\alpha_{p}\), and \(\alpha_{pr}=\alpha_{s}-\alpha_{p}\) for the first half of the pulse duration \(t\leq t_{c}\), and \(\alpha_{s}=\alpha_{p}\), \(\alpha_{pr}=0\) for \(t>t_{c}\); the value \(\alpha_{s}=-7THz/fs\); a pulse-train period of the order of the spontaneous decay time; and a one-photon detuning of order \(\Delta\sim 1/fs\).

### Section Summary

We presented a semiclassical theory of the four-wave mixing process in coherent anti-Stokes Raman scattering implementing the control pulse trains. The theory is based on a set of Maxwell's equations for the propagation of the pump, the Stokes, the probe and the anti-Stokes components of the fields coupled to the Liouville von Neumann equations with relaxation for the dynamics in the target molecules. It is intended for investigations of the remote detection of biochemical molecules. The multi-layer model is developed to account for the spatial distribution of the target molecules in the air, mimicking environmental conditions. The machine learning approach is developed to analyze the evolving phase of the pulse trains as they undergo scattering within each layer. The approach makes use of deep Convolutional Neural Networks (discussed in the next section). The quantum control method for the incident pulse shaping is implemented, which optimizes the macroscopic induced polarization in the target molecules by maximizing vibrational coherence. The method implies chirping of the incident pulse trains, which induces adiabatic population transfer among the four states in the CARS scheme, leading to sustainable, high vibrational coherence. Importantly, the transitional excited states get negligibly populated, thus minimizing the impact of spontaneous decay and the associated losses of coherence from these states. Moreover, the choice of the pulse train period to match the spontaneous decay time permits mitigation of the vibrational decay. The enhancement of the anti-Stokes field is observed upon propagation through the ensemble of the target molecules, achieved by the control pulse trains as well as by the transform-limited pulse trains with a large detuning and a carefully chosen Rabi frequency. The coherent enhancement of the anti-Stokes signal and the mitigation of decoherence by chirped control fields form a foundation for the propagation of the anti-Stokes signal through distances on a kilometer scale.

## 4 Deep Neural Networks Applications in Quantum Control

In the previous section, we developed a control scheme that helps us optimize the signal from target molecules by maximizing the vibrational coherence. In order to apply this scheme effectively and to investigate the controllability of population dynamics and vibrational coherence in the target molecules by propagating electromagnetic fields, we need to know how the key field parameters evolve after each scattering event. This allows us to accurately calculate the quantum coherence and the induced polarization at the sequential steps of the numerical calculation. When the chirped-pulse control scheme is used within the multi-layer model of the molecular distribution, the Maxwell-Liouville von Neumann equations alter the initial, pre-determined phase of the incident pulses, impacting the response of the target molecules.
Thus, extracting the analytical phase from the numerical solutions of Eqs.(39) and verifying that the pre-determined chirping scheme is applied at each scattering event becomes an extremely important task for evaluating the response of the quantum system. To accomplish this goal, we need to develop a mechanism to extract the chirp parameters from the scattered pulses. To this end, we created a generic machine learning model that classifies a given pulse into one of three categories based on the kind of phase it has, and performs regression analysis to reveal the phase of the pulse. In this section, we first present this deep learning technique and then apply it to the formalism we developed in the last section to simulate the output signal using chirped pulses.

### Deep learning and applications

Although the idea of artificial intelligence has decades of history, it has picked up momentum recently with the development of machine learning and deep learning techniques along with the advancement in computational power. Deep learning is rapidly transforming almost all industries. It helps reduce human intervention and scale up the speed of solving complex problems. Deep learning, in general terms, can be thought of as training a computer to solve a certain type of problem by feeding it enormous amounts of data. At its core, it is the optimization of numerous parameter values of a mathematical function to fit the training data. The idea of applying deep learning to quantum control techniques is novel. A deep neural network consists of several layers, each having a certain number of nodes. We developed a deep neural network for classifying different kinds of pulses from the numerical data, based on their chirping, and for extracting the chirp parameters from these classified pulses using a machine learning technique [82, 83]. This approach of extracting the information about the phase of the pulses from the numerical grid and obtaining an accurate value of the chirp parameters is principally novel and may have a wide range of applications in quantum control and spectroscopy. There are different kinds of neural networks, each being used for specific purposes. The machine learning model that we created is a deep Convolutional Neural Network (CNN). A CNN is generally used for analyzing visual imagery. As the plots of pulses with different chirps can be analyzed visually, the CNN is one of the best choices in this case as well.

### Classification and regression of chirped pulses using Convolutional Neural Networks (CNN)

A CNN is built to classify a given pulse into one of three kinds: linear, quadratic, and the chirp shape according to our control scheme, \(\alpha_{s}=-\alpha_{p}\) and \(\alpha_{pr}=\alpha_{s}-\alpha_{p}\) for \(t\leq t_{c}\); and \(\alpha_{s}=\alpha_{p}\) and \(\alpha_{pr}=0\) for \(t>t_{c}\). Another CNN is built for the regression task; it calculates the parameters of the fields and shares a similar structure with the classification neural network. The structure of the CNN used is discussed later in this section. Of principal importance for studying the phase of the numerical pulses is the availability of training data. Massive training data is a necessary requirement for deep learning to conquer a problem [84]. Since it is difficult to collect thousands of actual data records from experiments, we created a program that generates scattered laser pulses randomly based on an arbitrary laser pulse model \[E(t)=E_{0}e^{-\frac{t^{2}}{2\tau^{2}}}\cos[\omega_{L}t+M(t)].
\tag{43}\]

Here \(\tau\) is a single pulse duration, \(E_{0}\) is the peak value of the field having the Gaussian envelope, and \(\omega_{L}t+M(t)\) is the phase of the field having the modulation \(M(t)\), which is the key to quantum control. A different parity of the phase modulation leads to different control scenarios [85, 86]. Here we present \(M(t)\) as a Taylor series expansion \[M(t)=a_{0}+a_{1}t+a_{2}t^{2}+a_{3}t^{3}+\ldots \tag{44}\] Since in most cases the higher orders have a very limited contribution, we created data for three kinds of the phase using terms up to the third power in time: 'The Linear', which is determined by two parameters, the carrier frequency (\(a_{1}\)) and the linear chirp (\(a_{2}\)); the field phase then reads \(\phi(t)=a_{1}t+a_{2}t^{2}\). 'The Second', which is determined by three parameters, the carrier frequency (\(a_{1}\)), the linear chirp (\(a_{2}\)), and the second order chirp (\(a_{3}\)); the phase then reads \(\phi(t)=a_{1}t+a_{2}t^{2}+a_{3}t^{3}\). And 'The Roof', which is comprised of two parts, before the central time and after, and is determined by three parameters: the carrier frequency (\(a_{1}\)), the linear chirp (\(\tilde{a}_{2}\)) for the first half of the pulse and the linear chirp (\(\bar{a}_{2}\)) for the second half of the pulse; the constructed phase of the field then reads \(\phi(t)=a_{1}t+\tilde{a}_{2}t^{2}\) for \(t\leq 0\), and \(\phi(t)=a_{1}t+\bar{a}_{2}t^{2}\) for \(t>0\).

Figure 16: Different shapes of the phase of the field obtained numerically (solid line) and using the deep convolutional neural network model (dashed line) with different types of the phase of the input pulse: (a) Linear chirp, \(\phi(t)=a_{1}t+a_{2}t^{2}\); (b) Quadratic dependence of the phase on time having \(a_{2}<0\) in \(\phi(t)=a_{1}t+a_{2}t^{2}+a_{3}t^{3}\); (c) 'Roof' chirp having a positive chirp rate for the first and a negative chirp rate for the second part of the pulse [53], \(\phi(t)=a_{1}t+\tilde{a}_{2}t^{2}\) for \(t\leq 0\), and \(\phi(t)=a_{1}t+\bar{a}_{2}t^{2}\) for \(t>0\); (d) Quadratic dependence of the phase on time having \(a_{2}>0\) in \(\phi(t)=a_{1}t+a_{2}t^{2}+a_{3}t^{3}\). The values of the parameters are printed in the titles of the pictures. Note that there is no discrepancy in the determination of the kind of the phase; only the parameters have rare errors.

We simulated the pulses with these three kinds of phases using characteristic values of the field parameters and generated training data in a quantity of \(5\times 10^{4}\) samples for each kind by varying the carrier frequency and the chirp rate. During the training process, we applied the Adam optimizer with a learning rate of 0.1 and a regularization of 0.02 [87]. The loss function is the cross entropy for the classification model and the mean squared error for the regression model. The early stopping technique was also used to control overfitting [88]. The details of the construction of the neural networks for both the classification and the regression models are presented in the next section. After training, the classification and the regression models are combined and used in sequence. The classification block classifies the random pulse and sends it to the corresponding regression block to solve for the analytical parameters of one of the three kinds of the phase.
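A minimal sketch of such a training-data generator is given below; only the pulse model of Eq. (43) and the three phase forms follow the text, while the parameter ranges, units, and time window are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(-5.0, 5.0, 2500)   # 2500 time steps, illustrative units

def pulse(phase, tau=1.0, E0=1.0):
    """Gaussian pulse E(t) = E0 exp(-t^2 / 2 tau^2) cos(phase(t)), Eq. (43)."""
    return E0 * np.exp(-t**2 / (2.0 * tau**2)) * np.cos(phase)

def random_sample():
    """Draw one labeled pulse: 0 = 'The Linear', 1 = 'The Second', 2 = 'The Roof'."""
    a1 = rng.uniform(5.0, 15.0)        # carrier frequency (illustrative range)
    a2 = rng.uniform(-1.0, 1.0)        # linear chirp (illustrative range)
    kind = int(rng.integers(3))
    if kind == 0:                      # phi = a1 t + a2 t^2
        phase = a1 * t + a2 * t**2
    elif kind == 1:                    # adds a second-order chirp a3
        phase = a1 * t + a2 * t**2 + rng.uniform(-0.1, 0.1) * t**3
    else:                              # 'Roof': distinct linear chirps per half
        phase = a1 * t + np.where(t <= 0.0, a2, rng.uniform(-1.0, 1.0)) * t**2
    return pulse(phase), kind

# The text uses 5e4 samples per kind; a smaller draw is shown for brevity.
X, y = zip(*(random_sample() for _ in range(1000)))
```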
The classification reaches an accuracy of 97.93%, and the overall root mean square error of the regression is smaller than 0.1, rendering the deep learning model's results sufficiently accurate. Both the classification and regression models are evaluated on a separate test data set, which contains \(3\times 10^{3}\) samples. To demonstrate the high accuracy of the analytical fit to the numerical data of the phase of the field, we show several prototypical phases in Fig.(16).

### The structure of the CNN used

Both the classification and the regression neural networks share the same core structure. Since the numerical pulses, which we generated as the training data, have 2500 time steps, all models have the input shape of \(2500\times 1\). There are three blocks of the mini-convolutional neural network in the models. The first block contains three 1D convolutional layers with a kernel size of 3. The second block has two layers of the 1D convolutional network with a kernel size of 5. The third block has a single 1D convolutional layer with a kernel size of 7. All the convolutional layers use the Rectified Linear Unit activation function [91] and Group Normalization [92]. There is a maximum pooling layer of pool size 4 after each block. There is a linear layer of size 1024 after the output of the convolutional blocks is flattened.

Figure 17: The structure of the deep neural network. The same structure is shared by the phase-type classifier and the three phase-value regression models, except for the last output layer. Three convolutional blocks are used sequentially to extract the highly non-linear information from the input time-dependent tensor. The linear layer is used after flattening the output from the last convolutional block.

The structure of the neural network, shown in Fig.(17), is determined by the validation results, together with the other hyperparameters, such as the learning rate, the choice of optimizer, and the regularization. We adjust the kernel size, the number of blocks and the number of layers in each block to obtain the optimal validation result. The 1D convolution layers are used because they are suitable for extracting the information within a sub-region of the whole input tensor. This matches our aim, which is to extract the instantaneous value of the analytical parameters from the numerical sequential, time-dependent data. Besides, we use several 1D convolution layers as a block to extract high-dimensional information from the input tensor. Three stacked kernels of size 3 cover the same area of the input tensor as a single kernel of size 7, but the former capture higher-dimensional information than the latter. We did not set all blocks to three layers of kernel size 3 because we wanted to control the overfitting problem.

### Results

The machine learning approach was implemented to reveal the modulation of the phase of the four field components after each scattering. Fig.(18) shows the control pump, Stokes, probe and the built-up anti-Stokes pulses after each of five consecutive scattering events for the field parameters \(\Omega_{p(s,pr)}=85THz\) (\(E_{p(s,pr)_{0}}\sim 1.6\times 10^{9}V/m\)), \(\tau_{0}=54.8fs\), \(\alpha_{s}=-7THz/fs\), and \(\Delta_{s}=\Delta_{as}=\Delta=850THz\). The neural networks explained in the previous section were optimized to work for these parameters. The classifier neural network predicted the pulses to be of the third kind described above, and the regression neural network provided the chirp parameters.
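For concreteness, a minimal PyTorch sketch of the backbone described in the previous subsection is given here; the kernel sizes, block layout, pooling, and the 1024-unit linear layer follow the text, while the channel widths, padding, GroupNorm group count, and the normalization/activation placement are our assumptions:

```python
import torch.nn as nn

def conv_unit(c_in, c_out, k):
    # Conv1d followed by GroupNorm and ReLU; the ordering is an assumption.
    return nn.Sequential(
        nn.Conv1d(c_in, c_out, kernel_size=k, padding=k // 2),
        nn.GroupNorm(4, c_out),
        nn.ReLU(),
    )

def make_cnn(n_out):
    """Shared backbone; n_out = 3 for the classifier, or the number of
    phase parameters for a regression head."""
    return nn.Sequential(
        conv_unit(1, 16, 3), conv_unit(16, 16, 3), conv_unit(16, 16, 3),  # block 1
        nn.MaxPool1d(4),
        conv_unit(16, 32, 5), conv_unit(32, 32, 5),                       # block 2
        nn.MaxPool1d(4),
        conv_unit(32, 64, 7),                                             # block 3
        nn.MaxPool1d(4),
        nn.Flatten(),                  # input length 2500 -> 64 * 39 features
        nn.Linear(64 * 39, 1024),
        nn.ReLU(),
        nn.Linear(1024, n_out),
    )

classifier = make_cnn(3)   # three phase kinds; expects input of shape (N, 1, 2500)
```

The classifier head outputs the three phase kinds; a regression model would replace the final layer with the number of phase parameters and be trained with the mean squared error, as described above.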
After 5 scattering events, the change in the initial chirp rate \(\alpha_{s}\) is less than 0.001%, indicating that the control scheme would work for a large number of layers. The anti-Stokes component builds up to a peak Rabi frequency of about \(10^{-6}\Omega_{p}\) after the fifth iteration.

### Section Summary

Machine learning is a powerful technique to solve problems in almost all branches of science. With the availability of immense amounts of data and increased computational efficiency, machine learning is transforming academic research and every major industry. We showed how deep neural networks can be used to analyze and understand chirped pulses. The analysis helped us to verify that control pulses can be used for optimization of the signal in the detection of molecular systems without losing their phase values. As signal optimization is the essence of any sensing and detection method, the technique we developed here could find a variety of applications in quantum control methods.

## 5 Creation of Maximally Coherent States Using Fractional Stimulated Raman Adiabatic Passage

Stimulated Raman Adiabatic Passage (STIRAP), which was first reported in 1990 [93], is a process that allows efficient population transfer in a quantum system to an initially unpopulated state via an intermediate state which is not populated in the process. As the intermediate state is not populated and the process is adiabatic, this method is very robust and immune to spontaneous decay. STIRAP is a two-photon process where the Stokes pulse is applied first, followed by the pump pulse, which is often referred to as a "counter-intuitive" ordering of pulses. A considerable overlap between the two pulses is necessary for the adiabatic process and efficient transfer of population. Since its discovery, STIRAP has been exploited for numerous applications and several reviews have been published [61, 94, 95, 96].

Figure 18: The pump, the Stokes, the probe and the built-up anti-Stokes chirped pulses after each of five consecutive scattering events. The labels 0, 1, 2, 3, 4, 5 represent the incoming pulse and the 1st, 2nd, 3rd, 4th and 5th scattering events, respectively. The incident pulses are chirped in accordance with the control scheme. The parameters of the fields are \(\Omega_{p(s,pr)}=85THz\) (\(E_{p(s,pr)_{0}}\sim 1.6\times 10^{9}V/m\)), \(\tau_{0}=54.8fs\), \(\alpha_{s}=-7THz/fs\), and \(\Delta_{s}=\Delta_{as}=\Delta=850THz\). The anti-Stokes field is built up gradually and constitutes \(\sim 10^{-6}\) of the amplitude of the incident field.

A variation of the STIRAP process, namely fractional STIRAP (F-STIRAP), was introduced by Vitanov _et al._ in [97], where they showed that a coherent superposition of the initial and final states can be prepared by keeping the amplitude of the Stokes pulse non-zero for a longer time and making both amplitudes vanish simultaneously. In [98], Sautenkov _et al._ used F-STIRAP to create a maximally coherent superposition in Rb vapor to enhance signal generation. This technique was based on the idea of delayed CARS, where the anti-Stokes signal is generated by a probe field applied at a later time, once the superposition of states is created by the pump and Stokes pulses. This is different from the process of ordinary CARS, where the anti-Stokes pulse is generated due to the four-wave mixing process involving the pump, Stokes and probe pulses, which are applied simultaneously.
In the previous sections, we developed and applied a quantum control theory to maximize the vibrational coherence and optimize the signal in CARS, for the purpose of remote detection. The process of F-STIRAP explained above can be used for similar applications, as it creates a maximally coherent superposition selectively via adiabatic processes. One variation of STIRAP is introduced in [99], where the pump and Stokes pulses were chirped to selectively populate one of the two states in a nearly degenerate system. It was shown that by changing the sign of the chirp rate, the population can be driven to a pre-determined state in the four-level system. This was a major improvement to the existing methods based on STIRAP. Just like the chirping of pulses in STIRAP results in selective population of states, the chirping of pulses in F-STIRAP can be used to selectively create coherent superpositions in a nearly degenerate system. This is the motivation for this section, because the already developed semiclassical theory may be applied for remote detection in the framework of F-STIRAP. We first explain the process of STIRAP and the effect of non-zero two-photon detuning. We investigate how chirping of pulses in STIRAP can be beneficial for populating the desired energy level in a nearly degenerate four-level system. Then the process of F-STIRAP is described, along with an explanation as to how it can be used to create arbitrary coherent superposition states. Finally, we lay the groundwork for a chirped-fractional-STIRAP scheme which can be used to improve the existing methods to achieve selective coherent superposition in a nearly degenerate system.

### The Stimulated Raman Adiabatic Passage (STIRAP)

The schematic diagram of the three-level system for STIRAP is shown in Fig. 19. The system is driven by two pulses, pump and Stokes, having frequencies \(\omega_{p}\) and \(\omega_{s}\) respectively. The population in state \(|1\rangle\) is transferred completely to the state \(|3\rangle\) via the intermediate state \(|2\rangle\). An important characteristic of this process is that the intermediate state \(|2\rangle\) does not receive any population, which makes the transfer of population immune to any spontaneous decay. Another peculiarity of this method is the ordering of the pulses: the Stokes pulse, which couples the initially unpopulated states \(|2\rangle\) and \(|3\rangle\), begins earlier than the pump pulse, which couples states \(|1\rangle\) and \(|2\rangle\). A considerable overlap between the two pulses is necessary to provide a smooth adiabatic transfer, as the mixing angle should vary very slowly. This will be further explained below. The basic STIRAP Hamiltonian in the Schrodinger representation can be written as: \[\mathbf{H}(t)=\hbar\begin{pmatrix}\omega_{1}&\frac{\mu_{21}}{\hbar}E_{p}(t)&0\\ \frac{\mu_{21}}{\hbar}E_{p}^{*}(t)&\omega_{2}&\frac{\mu_{23}}{\hbar}E_{s}^{*}(t)\\ 0&\frac{\mu_{23}}{\hbar}E_{s}(t)&\omega_{3}\end{pmatrix} \tag{45}\] where the pump and Stokes fields are, in the general case, considered to be chirped with chirp rates \(\alpha\) and \(\beta\), respectively.
The pulses, having Gaussian envelopes with time duration \(\tau\), are given by: \[E_{p}(t) =\tfrac{1}{2}E_{p0}e^{-\frac{(t-t_{p})^{2}}{\tau^{2}}}e^{i\omega_{p}(t-t_{p})+i\tfrac{1}{2}\alpha(t-t_{p})^{2}}+c.c.\] \[E_{s}(t) =\tfrac{1}{2}E_{s0}e^{-\frac{(t-t_{s})^{2}}{\tau^{2}}}e^{i\omega_{s}(t-t_{s})+i\tfrac{1}{2}\beta(t-t_{s})^{2}}+c.c.\] where \(t_{p}\) and \(t_{s}\) are the central times of the pump and Stokes, respectively, satisfying \(t_{s}<t_{p}\).

Figure 19: The three-level system for STIRAP. The population is transferred from the state \(|1\rangle\) to \(|3\rangle\) without populating the intermediate state \(|2\rangle\). The STIRAP process is applicable to a ladder system as well, in which case the state \(|3\rangle\) lies above \(|2\rangle\). The detunings in both cases are given by \(\Delta=\omega_{21}-\omega_{p}\) and \(\delta=\omega_{31}-(\omega_{p}-\omega_{s})\).

To transform the above Hamiltonian into the field-interaction representation, consider the Schrodinger equation: \[i\hbar\dot{\mathbf{a}}(t)=\mathbf{H}(t)\mathbf{a}(t) \tag{46}\] and apply the transformations: \[a_{1} =\tilde{a}_{1}e^{i\omega_{p}(t-t_{p})+i\tfrac{1}{2}\alpha(t-t_{p})^{2}} \tag{47}\] \[a_{2} =\tilde{a}_{2}\] \[a_{3} =\tilde{a}_{3}e^{i\omega_{s}(t-t_{s})+i\tfrac{1}{2}\beta(t-t_{s})^{2}}\] and shift the diagonal elements to obtain: \[H=\hbar\begin{pmatrix}\alpha(t-t_{p})&\tfrac{1}{2}\Omega_{p0}(t)&0\\ \tfrac{1}{2}\Omega_{p0}^{*}&\Delta&\tfrac{1}{2}\Omega_{s0}^{*}(t)\\ 0&\tfrac{1}{2}\Omega_{s0}(t)&\delta+\beta(t-t_{s})\end{pmatrix} \tag{48}\] where \(\Delta\) and \(\delta\) are the one-photon and two-photon detunings, defined by \(\Delta=\omega_{21}-\omega_{p}\) and \(\delta=\omega_{31}-(\omega_{p}-\omega_{s})\), respectively. Evidently, taking \(\alpha=\beta=0\) gives the conventional STIRAP Hamiltonian without chirp: \[H=\hbar\begin{pmatrix}0&\frac{1}{2}\Omega_{p0}(t)&0\\ \frac{1}{2}\Omega_{p0}^{*}&\Delta&\frac{1}{2}\Omega_{s0}^{*}(t)\\ 0&\frac{1}{2}\Omega_{s0}(t)&\delta\end{pmatrix}. \tag{49}\]

Figure 20: The STIRAP process. The Stokes pulse is followed by the pump pulse, allowing an adiabatic passage of population from state \(|1\rangle\) to \(|3\rangle\) without populating state \(|2\rangle\).

To investigate the conditions for adiabatic passage in STIRAP, it is useful to diagonalize the Hamiltonian in (49) by using an orthogonal matrix. Assume that the probability amplitudes of the bare states \(\mathbf{c}(t)\) evolve according to the Schrodinger equation \(i\hbar\dot{\mathbf{c}}(t)=\mathbf{H}(t)\mathbf{c}(t)\). The dressed (adiabatic) states with probability amplitudes \(\mathbf{a}(t)\) can be defined by the equation \(\mathbf{c}(t)=\mathbf{T}(t)\mathbf{a}(t)\), where \(\mathbf{T}(t)\) is an orthogonal matrix given by: \[\mathbf{T}(t)=\begin{pmatrix}\sin\theta(t)\sin\phi(t)&\cos\theta(t)&\sin\theta(t)\cos\phi(t)\\ \cos\phi(t)&0&-\sin\phi(t)\\ \cos\theta(t)\sin\phi(t)&-\sin\theta(t)&\cos\theta(t)\cos\phi(t)\end{pmatrix} \tag{50}\] where the mixing angles are defined as: \[\tan\theta(t)=\frac{\Omega_{p0}(t)}{\Omega_{s0}(t)} \tag{51}\] and \[\tan 2\phi(t)=\frac{\sqrt{\Omega_{p0}^{2}(t)+\Omega_{s0}^{2}(t)}}{\Delta(t)}=\frac{\Omega_{rms}(t)}{\Delta(t)}\,. \tag{52}\] The dressed state amplitudes \(\mathbf{a}(t)\) follow the Schrodinger equation \(i\hbar\dot{\mathbf{a}}(t)=\mathbf{H}_{\mathbf{a}}(t)\mathbf{a}(t)\), where \[\mathbf{H}_{\mathbf{a}}(t)=\mathbf{T}^{\dagger}(t)\mathbf{H}(t)\mathbf{T}(t)-i\hbar\mathbf{T}^{\dagger}(t)\dot{\mathbf{T}}(t)\]
which gives: \[\begin{array}{l}i\hbar\dot{\bf a}(t)=[{\bf T}^{\dagger}(t){\bf H}(t){\bf T}(t)-i\hbar{\bf T}^{\dagger}(t)\dot{\bf T}(t)]{\bf a}(t)\\ i\hbar\dot{\bf a}(t)={\bf H_{d}}(t){\bf a}(t)+i\dot{\bf\Theta}(t){\bf a}(t)\,.\end{array}\] Here, \({\bf H_{d}}(t)\) is a diagonal matrix and \(\dot{\bf\Theta}(t)\) is a matrix with only non-diagonal elements. For adiabatic passage to occur, the matrix \({\bf H_{a}}(t)\) should be very close to \({\bf H_{d}}(t)\), meaning the values in \(\dot{\bf\Theta}\) should be very small compared to the differences between the diagonal values of \({\bf H_{a}}(t)\). At two-photon resonance, \(\delta=0\), the matrix \({\bf H_{a}}(t)\) reads: \[{\bf H_{a}}(t)=\hbar\begin{pmatrix}\frac{1}{2}\Omega_{rms}\cot\phi&i\dot{\theta}\sin\phi&i\dot{\phi}\\ -i\dot{\theta}\sin\phi&0&-i\dot{\theta}\cos\phi\\ -i\dot{\phi}&i\dot{\theta}\cos\phi&-\frac{1}{2}\Omega_{rms}\tan\phi\end{pmatrix} \tag{53}\] The diagonal elements of this Hamiltonian are the dressed (adiabatic) energies, which can be written as: \[\begin{array}{l}\lambda_{+}(t)=\frac{1}{2}\Omega_{rms}(t)\cot\phi(t)=\frac{1}{2}\left(\Delta+\sqrt{\Delta^{2}+\Omega_{rms}^{2}(t)}\right)\\ \lambda_{0}(t)=0\\ \lambda_{-}(t)=-\frac{1}{2}\Omega_{rms}(t)\tan\phi(t)=\frac{1}{2}\left(\Delta-\sqrt{\Delta^{2}+\Omega_{rms}^{2}(t)}\right)\,.\end{array} \tag{54}\] At one-photon resonance, \(\Delta=0\), \(\tan 2\phi=\infty\) and \(\phi=\pi/4\). In this limit, the adiabaticity condition becomes \(|\Omega_{rms}(t)|\gg|\dot{\theta}(t)|\). To satisfy the adiabatic condition in STIRAP, the mixing angle \(\theta=\tan^{-1}(\Omega_{p0}(t)/\Omega_{s0}(t))\) should vary slowly. For this, it is necessary that the overlap between the pump and Stokes pulses is neither too large nor too small. The eigenstates (adiabatic or dressed states) corresponding to these eigenvalues are: \[\begin{array}{l}\Phi_{+}(t)=\psi_{1}\sin\theta(t)\sin\phi(t)+\psi_{2}\cos\phi(t)+\psi_{3}\cos\theta(t)\sin\phi(t)\\ \Phi_{0}(t)=\psi_{1}\cos\theta(t)-\psi_{3}\sin\theta(t)\\ \Phi_{-}(t)=\psi_{1}\sin\theta(t)\cos\phi(t)-\psi_{2}\sin\phi(t)+\psi_{3}\cos\theta(t)\cos\phi(t)\end{array} \tag{55}\] where \(\psi_{1}\), \(\psi_{2}\) and \(\psi_{3}\) are the eigenstates of the bare quantum system. The eigenstate corresponding to the dressed energy zero, \(\Phi_{0}(t)\), is called the dark state. In the beginning, when \(\Omega_{p0}(t)=0\) while \(\Omega_{s0}(t)>0\), the mixing angle \(\theta(t)=0\) and the dark state \(\Phi_{0}(t)=\psi_{1}\). In the end, when \(\Omega_{s0}(t)=0\) while \(\Omega_{p0}(t)>0\), the mixing angle \(\theta(t)=\pi/2\) and the dark state \(\Phi_{0}(t)=-\psi_{3}\). So the dark state has now gone from \(\psi_{1}\) to \(\psi_{3}\) without acquiring any component of \(\psi_{2}\). In order for the dark state not to acquire any component of the excited state, the condition for adiabaticity should be satisfied. We have now derived the conditions for adiabaticity when both \(\Delta=0\) and \(\delta=0\). Finding the adiabaticity conditions for non-zero detunings is not so trivial.
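A minimal numerical sketch of the mixing angle and dressed energies, Eqs. (51)-(54), for counter-intuitively ordered Gaussian pulses follows; all magnitudes here are illustrative assumptions:

```python
import numpy as np

# Mixing angle and dressed energies for Gaussian pump and Stokes envelopes
# in the counter-intuitive order (Stokes precedes pump); illustrative units.
t = np.linspace(-4.0, 4.0, 800)
tau, Omega0, Delta = 1.0, 10.0, 2.0
Omega_p = Omega0 * np.exp(-(t - 0.7)**2 / tau**2)   # pump, centered later
Omega_s = Omega0 * np.exp(-(t + 0.7)**2 / tau**2)   # Stokes, centered earlier

theta = np.arctan2(Omega_p, Omega_s)                # mixing angle, Eq. (51)
Omega_rms = np.hypot(Omega_p, Omega_s)              # Eq. (52)
lam_plus  = 0.5 * (Delta + np.sqrt(Delta**2 + Omega_rms**2))   # Eq. (54)
lam_minus = 0.5 * (Delta - np.sqrt(Delta**2 + Omega_rms**2))

# Dark state, Eq. (55): populations cos^2(theta) in psi_1 and sin^2(theta)
# in psi_3 rotate from (1, 0) to (0, 1) without ever populating psi_2.
theta_dot = np.gradient(theta, t)
adiabatic = np.all(np.abs(theta_dot) < Omega_rms)   # |Omega_rms| >> |theta_dot|
```

Diagonalizing the Hamiltonian of Eq. (48) or (49) numerically at each instant (e.g., with np.linalg.eigvalsh) reproduces the same levels, and adding a nonzero two-photon detuning \(\delta\) makes them cross, as discussed for Fig. 21 below.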
In the general case, when \(\Delta\neq 0\) and \(\delta\neq 0\), the matrix \({\bf H_{a}}(t)\) may be written as the sum of two matrices, \({\bf H_{a}}(t)={\bf H_{a0}}(t)+{\bf H_{a1}}(t)\), where \({\bf H_{a0}}(t)\) is the Hamiltonian at two-photon resonance, \(\delta=0\), given by Eq. (53), and \({\bf H_{a1}}(t)\) is the additional term due to the absence of two-photon resonance, which is given by: \[{\bf H_{a1}}(t)=\frac{1}{2}\hbar\delta\begin{pmatrix}\cos 2\theta\sin^{2}\phi&-\sin 2\theta\sin\phi&\frac{1}{2}\cos 2\theta\sin 2\phi\\ -\sin 2\theta\sin\phi&-\cos 2\theta&-\sin 2\theta\cos\phi\\ \frac{1}{2}\cos 2\theta\sin 2\phi&-\sin 2\theta\cos\phi&\cos 2\theta\cos^{2}\phi\end{pmatrix}\,. \tag{56}\] The two-photon detuning shifts all the energies of the adiabatic states in proportion to \(\delta\), and two-photon resonance is a necessary condition for adiabatic passage in STIRAP. For the specific case \(\Delta=0\), we have \(\phi=\pi/4\), and the above Hamiltonian becomes \[\begin{split}\mathbf{H}_{a1}(t)&=\frac{1}{2}\hbar\delta\begin{pmatrix}\frac{1}{2}\cos 2\theta&-\frac{1}{\sqrt{2}}\sin 2\theta&\frac{1}{2}\cos 2\theta\\ -\frac{1}{\sqrt{2}}\sin 2\theta&-\cos 2\theta&-\frac{1}{\sqrt{2}}\sin 2\theta\\ \frac{1}{2}\cos 2\theta&-\frac{1}{\sqrt{2}}\sin 2\theta&\frac{1}{2}\cos 2\theta\end{pmatrix}\\ &=\frac{1}{4}\hbar\delta\cos 2\theta\begin{pmatrix}1&-\sqrt{2}\tan 2\theta&1\\ -\sqrt{2}\tan 2\theta&-2&-\sqrt{2}\tan 2\theta\\ 1&-\sqrt{2}\tan 2\theta&1\end{pmatrix}\,.\end{split} \tag{57}\] The evolution of the dressed-state energies and populations in the cases of two-photon resonance and non-zero two-photon detuning is shown in Fig. 21. In the left panel, \(\Delta\neq 0\) and \(\delta=0\). As shown, adiabatic passage is possible at two-photon resonance even if the system is not in one-photon resonance, \(\Delta\neq 0\). The system is aligned with the dark state \(\Phi_{0}\) and the population is fully transferred to the state \(\psi_{3}\). There is no crossing of energy levels. But when \(\delta\) is non-zero, the dressed-state energy levels cross each other and adiabaticity is lost. The system is aligned with the dressed state \(\Phi_{-}\) in the beginning and becomes aligned with \(\Phi_{+}\) by the end. Even though the population is completely transferred to \(\psi_{3}\), the process is not fully adiabatic. So, in summary, two-photon resonance is necessary for adiabatic passage in the process of STIRAP.

Figure 21: STIRAP in the presence of two-photon resonance (left) and in the absence of two-photon resonance (right). In the left figure, \(\Delta\neq 0\) and \(\delta=0\). The system remains aligned with the adiabatic state \(\Phi_{0}\) throughout the process. The population is fully transferred to \(\psi_{3}\). In the right figure, \(\Delta\neq 0\) and \(\delta\neq 0\). This process is not completely adiabatic. The system is aligned with \(\Phi_{-}\) in the beginning and is aligned with \(\Phi_{+}\) in the end. Even though the population is transferred completely to \(\psi_{3}\), the process is not completely adiabatic, as there are two crossings with non-adiabatic coupling between the dressed states.

### Chirped-STIRAP: selective population of two nearly-degenerate states

In the previous section, we analyzed the ordinary STIRAP process and explained the origin of adiabatic passage. In this section, we will consider a four-level system with two nearly degenerate states and use chirped pulses in STIRAP to populate one of these states. This method was first introduced in [99]. The four-level system we consider for this analysis is shown in Fig. 22.
The states \(|3\rangle\) and \(|4\rangle\) are nearly degenerate, with the Stokes pulse in resonance with state \(|3\rangle\). The Hamiltonian for this four-level system in the field-interaction representation when chirped pulses are used can be obtained from Eq. (45) and is given by: \[H=\hbar\begin{pmatrix}\alpha(t-t_{p})&\frac{1}{2}\Omega_{p0}(t)&0&0\\ \frac{1}{2}\Omega_{p0}^{*}&\Delta&\frac{1}{2}\Omega_{s0}^{*}(t)&\frac{1}{2}\Omega_{s0}^{*}(t)\\ 0&\frac{1}{2}\Omega_{s0}(t)&\beta(t-t_{s})&0\\ 0&\frac{1}{2}\Omega_{s0}(t)&0&\delta+\beta(t-t_{s})\end{pmatrix} \tag{58}\] The evolution of the populations computed with the above Hamiltonian is shown in Fig. 23. In Fig. 23(a), both the pump and Stokes pulses are chirped with negative chirp rate \(\alpha=\beta=-0.001\). In this case, state \(|3\rangle\), which is in resonance with the Stokes pulse, is populated. This behavior is much like the ordinary STIRAP. But if the signs of both chirp rates are flipped, the detuned state \(|4\rangle\) is populated, as shown in Fig. 23(b). This means that the flow of population can be controlled and directed to the desired energy level by chirping the pulses in STIRAP. As in ordinary STIRAP, the intermediate level is not populated in the chirped-STIRAP process either. A broader analysis of this process is shown in Fig. 24, where the populations are plotted against the chirp rates of the pump and Stokes pulses for positive (a) and negative (b) values of the detuning. The chirp rates \(\alpha\) and \(\beta\) are equal in all the calculations here. In Fig. 24(a), \(\delta>0\), implying that \(|4\rangle\) is above \(|3\rangle\). In this case, a positive chirp causes the population to flow to the resonant final state \(|3\rangle\), while a negative chirp drives the population to the detuned state \(|4\rangle\). Fig. 24(b) shows the opposite behavior, as the detuning is now negative, meaning \(|4\rangle\) is below \(|3\rangle\). In short, we have shown that chirping the pulses in STIRAP is a powerful way to control the flow of population to a desired state adiabatically in a nearly degenerate four-level system.

Figure 23: Population dynamics in STIRAP when both pump and Stokes pulses are negatively chirped (a) and positively chirped (b). The population is driven exclusively to state \(|3\rangle\) when the chirp rate is negative and to \(|4\rangle\) when the chirp rate is positive.

Figure 24: Populations vs the chirp rate of the pump and Stokes when the two-photon detuning is positive (a) and negative (b). The chirp rates of the pump and Stokes are taken to be equal, \(\alpha=\beta\), in both cases.
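Before turning to fractional STIRAP, we note that the chirped four-level dynamics just discussed can be reproduced numerically. A minimal sketch of integrating the Schrodinger equation with the Hamiltonian of Eq. (58) follows (\(\hbar=1\); all magnitudes are illustrative assumptions, with the negative chirps mirroring Fig. 23(a)):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative dimensionless parameters; only the sign convention of the
# chirps alpha = beta < 0 follows the discussion of Fig. 23(a).
tau, Omega0 = 50.0, 0.2
t_s, t_p = -20.0, 20.0            # Stokes precedes pump (counter-intuitive)
Delta, delta = 0.5, 0.02
alpha = beta = -0.001

def H(t):
    """Four-level field-interaction Hamiltonian of Eq. (58) with hbar = 1."""
    Op = Omega0 * np.exp(-(t - t_p)**2 / tau**2)
    Os = Omega0 * np.exp(-(t - t_s)**2 / tau**2)
    return np.array([
        [alpha * (t - t_p), Op / 2, 0.0,               0.0],
        [Op / 2,            Delta,  Os / 2,            Os / 2],
        [0.0,               Os / 2, beta * (t - t_s),  0.0],
        [0.0,               Os / 2, 0.0, delta + beta * (t - t_s)],
    ], dtype=complex)

def rhs(t, a):
    return -1j * H(t) @ a             # i da/dt = H a (Schrodinger equation)

a0 = np.array([1, 0, 0, 0], dtype=complex)      # start in state |1>
sol = solve_ivp(rhs, (-200.0, 200.0), a0, max_step=0.5)
populations = np.abs(sol.y)**2                  # |a_i(t)|^2 for the four states
```

Flipping the signs of alpha and beta in this sketch directs the population to the detuned state \(|4\rangle\) instead, in line with Fig. 23(b).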
### The Fractional-STIRAP

We have now seen that STIRAP is an effective and robust way to transfer population to a particular quantum state. In this section, we will see that instead of transferring the population completely to the final state, a coherent superposition between the initial and final states can be created by slightly modifying the STIRAP technique. The idea is based on manipulating the amplitude of the Stokes pulse so that the mixing angle \(\theta(t)\) is a constant by the end of the process. Similar to STIRAP, the Stokes pulse begins earlier than the pump, but unlike STIRAP, both pulses vanish simultaneously. This provides a coherent superposition instead of a complete population transfer, while the process remains adiabatic. To derive the evolution of the amplitudes in this process, take the dark state of the STIRAP process given in Eq. (55): \[\Phi_{0}(t)=\psi_{1}\cos\theta(t)-\psi_{3}\sin\theta(t) \tag{59}\] At \(t=-\infty\), the system is in \(\psi_{1}\), and at \(t=\infty\), the system has moved to \(\psi_{3}\). We need to manipulate the mixing angle \(\theta(t)\) in such a way that at \(t=\infty\), the system is in a coherent superposition of \(\psi_{1}\) and \(\psi_{3}\). Let us assume \(\theta(\infty)=A\), where \(A\) is a constant. This gives: \[\Phi(t=-\infty)=\psi_{1},\ \ \ \ \ \Phi(t=\infty)=\psi_{1}\cos A-\psi_{3}\sin A \tag{60}\] which means the mixing angle satisfies: \[\theta(t=-\infty)=0,\ \ \ \ \ \theta(t=\infty)=A\,. \tag{61}\] To achieve a mixing angle that satisfies this condition, two Stokes pulses can be applied: the first one at time \(t=-t_{p}\) and the second one at \(t=t_{p}\), where \(t_{p}\) is the central time of the pump. The envelope equations of the pump and Stokes satisfying this condition can be written as: \[\Omega_{p_{0}}(t) =\Omega_{0}\sin Ae^{-\frac{(t-t_{p})^{2}}{\tau^{2}}} \tag{62}\] \[\Omega_{s_{0}}(t) =\Omega_{0}e^{-\frac{(t+t_{p})^{2}}{\tau^{2}}}+\Omega_{0}\cos Ae^{-\frac{(t-t_{p})^{2}}{\tau^{2}}}\,.\] Note that if the constant mixing angle \(A=\pi/2\), the second term of the Stokes equation is zero and we are back to STIRAP, where the Stokes has a central time of \(-t_{p}\) and the pump has a central time of \(t_{p}\). This provides a complete population transfer. On the other hand, when \(A=\pi/4\), \(\cos A=\sin A=1/\sqrt{2}\) and the system is transformed to a maximally coherent superposition. In this case, the dark state \(\Phi_{0}(t)=\frac{1}{\sqrt{2}}(\psi_{1}-\psi_{3})\). The envelopes of the two Stokes pulses and their superposition according to Eq. (62) for \(A=\pi/4\) are shown in Fig. 25. In this figure, \(\Omega_{s1_{0}}(t)\) and \(\Omega_{s2_{0}}(t)\) are the first and second Gaussian envelopes that make up the new Stokes field \(\Omega_{s_{0}}(t)\) given in Eq. (62). Note that the second Stokes pulse \(\Omega_{s2_{0}}(t)\) completely overlaps with the pump pulse \(\Omega_{p_{0}}(t)\) because they both have the same central time. This makes sure that the tail of the resultant Stokes field overlaps with that of the pump field, in order to achieve \(\tan\theta(t)=\Omega_{p_{0}}(t)/\Omega_{s_{0}}(t)=1\) as time \(t\to\infty\). In order to understand the population dynamics in fractional-STIRAP, it is useful to deal with the field-interaction Hamiltonian in this case. It can be seen that the STIRAP Hamiltonian in Eq. (49) can be used for F-STIRAP as well, replacing the Stokes pulse with the new Stokes field in Eq. (62). In the previous sections, we analyzed the dressed (adiabatic) states in STIRAP and the conditions for achieving adiabatic passage. As in STIRAP, two-photon resonance is necessary in order to have adiabatic passage in F-STIRAP as well. Apart from that, we saw that when \(\Delta=0\), the variation in the mixing angle should be very slow compared to the rms Rabi frequency, \(|\Omega_{rms}(t)|\gg|\dot{\theta}(t)|\). The evolution of the pulses, dressed states, mixing angle and populations is shown in Fig. 26. The dressed energies \(\lambda_{+}\), \(\lambda_{0}\) and \(\lambda_{-}\) evolve similarly in both processes and there is no crossing of energy levels. The mixing angle \(\theta(t)\) evolves slowly in both cases, indicating that \(\dot{\theta}(t)\) is very small compared to the difference between the dressed states \(\lambda_{+}\) and \(\lambda_{-}\).
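A quick numerical check of the envelope construction in Eq. (62) is given below (\(\tau\), \(t_{p}\) and \(\Omega_{0}\) are illustrative assumptions): the mixing angle indeed tends to the constant \(A\) at late times, so \(A=\pi/4\) produces the maximally coherent dark state.

```python
import numpy as np

# Envelopes of Eq. (62) for the fractional-STIRAP fields; A = pi/4 yields
# the maximally coherent superposition. All magnitudes are illustrative.
t = np.linspace(-6.0, 6.0, 1200)
tau, t_p, Omega0, A = 1.0, 1.0, 1.0, np.pi / 4

Omega_pump   = Omega0 * np.sin(A) * np.exp(-(t - t_p)**2 / tau**2)
Omega_stokes = (Omega0 * np.exp(-(t + t_p)**2 / tau**2)
                + Omega0 * np.cos(A) * np.exp(-(t - t_p)**2 / tau**2))

theta = np.arctan2(Omega_pump, Omega_stokes)   # mixing angle, Eq. (51)
print(theta[0], theta[-1])   # theta goes from 0 to A, as required by Eq. (61)
```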
Note that the final value of \(\theta(t)\) is \(\pi/2\,(\approx 1.5)\) in STIRAP, while it is \(\pi/4\,(\approx 0.8)\) in F-STIRAP. Fractional STIRAP can be understood as a generalized form of STIRAP. By varying the constant mixing angle, it is possible to create any arbitrary coherent superposition of the initial and final states. The final populations and coherence are plotted against the constant mixing angle in Fig. 27. For \(A=\pi/4\approx 0.8\), the coherence is maximum, and for \(A=\pi/2\approx 1.6\), the coherence is zero and the final-state population is 1. Any arbitrary coherence between the initial and final states can be produced by carefully choosing the angle \(A\).

Figure 25: Fractional STIRAP using the superposition of two Stokes pulses, \(\Omega_{s_{0}}(t)=\Omega_{s1_{0}}(t)+\Omega_{s2_{0}}(t)\), as given in Eq. (62). Here the constant mixing angle \(A=\pi/4\). Note that the pump pulse \(\Omega_{p_{0}}(t)\) overlaps exactly with the second Stokes pulse \(\Omega_{s2_{0}}(t)\).

Figure 26: Comparison of the STIRAP (left) and fractional-STIRAP (right) processes. In both cases, the evolution of the dressed-state energies is similar and there is no crossing of energy levels. The mixing angles vary slowly in both cases, implying that the process is adiabatic. As \(t\rightarrow\infty\), the mixing angle goes to \(\pi/2\) and \(\pi/4\) in STIRAP and F-STIRAP, respectively. The coherence \(\rho_{13}\) is zero in STIRAP while it is maximum, 0.5, in F-STIRAP.

Figure 27: The plot of the constant mixing angle \(A\) vs populations and coherence. If \(A=\pi/4\approx 0.8\), the coherence is maximum and the populations of the initial and final states are equal. If \(A=\pi/2\approx 1.6\), the population is completely transferred to the final state. This is equivalent to the ordinary STIRAP.

### Application of F-STIRAP for Remote Detection

We showed that fractional-STIRAP is a robust and efficient technique to create maximally coherent superposition states. In CARS, a coherent superposition state is generated through driving by the pump and Stokes fields, and the probe pulse interacts with this superposition to generate the anti-Stokes signal. An extension of fractional-STIRAP to the technique of CARS can be done by applying a pump and Stokes first to create the coherence, followed by a probe pulse at a later time. The schematic of this method is given in Fig. 28. This scheme of creating maximum coherence can be combined with the semiclassical theory we developed in section 3 to create control protocols that optimize the signal used for sensing and detection. We saw that chirping of pulses in STIRAP is beneficial as it helps us to control the population flow to a desired state in a four-level system. In the same way, chirping of pulses in fractional-STIRAP can be used as a way to control the formation of a coherent superposition between a desired pair of states in a four-level system. Combining the technique of chirping pulses in fractional-STIRAP with the semiclassical theory for remote detection is expected to make considerable improvements to the existing methods for imaging, sensing and detection.

## 6 Summary

In this chapter, we took a semiclassical approach to deal with light-matter interactions and developed several methods to prepare quantum systems in a predetermined state. The primary focus was to improve the existing methods of detection and sensing by controlling the incident field parameters in order to optimize the output signal.
We learned that the adiabatic passage regime of interaction gives a robust way of preparing maximally coherent superpositions of quantum states in a multilevel system. We presented ways of improving the techniques of CARS and STIRAP by chirping the incident laser fields. In the introductory section, the general theory of light-matter interaction and Raman spectroscopy was discussed. In the second section, we developed a quantum control method in which the amplitudes and phases of all the incident pulses in Coherent Anti-Stokes Raman Spectroscopy are carefully manipulated to satisfy the conditions for adiabatic passage. First, a large one-photon detuning was assumed so that the two excited states can be eliminated and the four-level system can be simplified to a "super-effective" two-level system. This reveals the dynamics of the energy levels, and a control scheme can be developed in order to maximize the vibrational coherence. The amplitudes of the probe and Stokes pulses should be equal and should be less than the amplitude of the pump by a factor of \(\sqrt{2}\). The chirp rates of the pump and Stokes need to be opposite in sign before the central time and equal after it. The probe should be chirped at a rate equal to the difference between the Stokes and pump chirp rates at all times. This chirping scheme, which we called C-CARS or Chirped-CARS, is a robust method to create a maximally coherent superposition of the system via adiabatic passage while suppressing the non-resonant background, overcoming one of the major limitations of CARS spectroscopy. We also showed that the selectivity of this scheme can be increased by controlling the chirp parameter in the chirping scheme.

Figure 28: The schematic of using the fractional STIRAP technique to optimize the signal for remote detection. First, the fields \(\Omega_{p}\) and \(\Omega_{s}\) distribute the populations equally and maximize the coherence between the sublevels. The field \(\Omega_{3}\) is then applied after some time, which generates the field \(\Omega_{4}\) by coherent scattering from the system.

In the third section, we developed a semiclassical theory which makes use of the C-CARS scheme and presented a realistic model of the detection method by taking methanol vapor as a surrogate system. The aim was to simulate the optimized output signal from a cloud of molecules using both pulses and pulse trains incident on the system. A detailed analysis was done to show the advantages of using control pulses and pulse trains, and to understand the effects of decoherence and propagation through the atmosphere. A layer model of the molecular distribution was created, where each layer is characterized by the fractional density of the target molecules. A set of coupled Maxwell-Liouville von Neumann equations was derived and numerically solved to find the output from each layer. The results from each layer were applied to the subsequent layers to generate the final output signal. When transform-limited pulses are propagated through a molecular distribution of 199 layers, equivalent to 0.5 meters from the center of the cloud, there is an amplification of 2 orders of magnitude by the final scattering compared to the first scattering. To ensure that the control pulses do not lose their phase values during the propagation through multiple layers, a machine learning model was created to extract the chirp parameters from the numerical outputs. An exclusive look at this machine learning technique, which is based on deep Convolutional Neural Networks (CNN), was given in section 4.
Two CNNs were created: one to classify the pulses based on their phase type and another to extract the phase parameters. Preliminary results show that the control scheme is efficient for hundreds of layers, as the average change in the chirp rate after each scattering is less than 0.001%. In the final section, we discussed the theory of the STIRAP process and the conditions for adiabatic passage in STIRAP. We showed that by controlling the sign of the chirp rate of the incident pulses, the population can be driven exclusively to a predetermined quantum state in a nearly degenerate system. Later, we explained how a variation of STIRAP, namely fractional STIRAP, can be used to create a maximally coherent superposition of two quantum states in a multilevel system. Finally, we laid the groundwork for how the fractional STIRAP technique can be used to optimize the signal in detection and sensing methods.

## Acknowledgment

The authors gratefully acknowledge support from the Office of Naval Research under awards N00014-20-1-2086 and N00014-22-1-2374. S.M. acknowledges the Helmholtz Institute Mainz Visitor Program and J. Ch. the support from Johannes Gutenberg University of Mainz.
2301.02119
A tensor bidiagonalization method for higher-order singular value decomposition with applications
The need to know a few singular triplets associated with the largest singular values of third-order tensors arises in data compression and extraction. This paper describes a new method for their computation using the t-product. Methods for determining a couple of singular triplets associated with the smallest singular values also are presented. The proposed methods generalize available restarted Lanczos bidiagonalization methods for computing a few of the largest or smallest singular triplets of a matrix. The methods of this paper use Ritz and harmonic Ritz lateral slices to determine accurate approximations of the largest and smallest singular triplets, respectively. Computed examples show applications to data compression and face recognition.
Anas El Hachimi, Khalide Jbilou, Ahmed Ratnani, Lothar Reichel
2023-01-05T15:53:17Z
http://arxiv.org/abs/2301.02119v2
# A Tensor Bidiagonalization Method for Higher-Order Singular Value Decomposition with Applications

###### Abstract

The need to know a few singular triplets associated with the largest singular values of third-order tensors arises in data compression and extraction. This paper describes a new method for their computation using the t-product. Methods for determining a couple of singular triplets associated with the smallest singular values also are presented. The proposed methods generalize available restarted Lanczos bidiagonalization methods for computing a few of the largest or smallest singular triplets of a matrix. The methods of this paper use Ritz and harmonic Ritz lateral slices to determine accurate approximations of the largest and smallest singular triplets, respectively. Computed examples show applications to data compression and face recognition.

tensors, t-product, partial tensor bidiagonalization, restarted tensor bidiagonalization, singular value decomposition, face recognition.

## 1 Introduction

The last 20 years have seen an immense growth of the amount of data that is collected for analysis, but it is a challenging problem to extract useful information from available data. This difficulty arises, e.g., in machine learning, data mining, and deep learning; see, e.g., Arnold et al. [1]. The extraction of useful information from data that is represented by a _matrix_ often is facilitated by the singular value decomposition of the matrix. Typically, only a few of the largest singular triplets, i.e., the largest singular values and associated right and left singular vectors, are required to extract useful information from the matrix. A restarted Lanczos bidiagonalization method for computing accurate approximations of these singular triplets is described in [5], and R code written by Bryan W. Lewis is available at [6]. In many recent applications the given data are represented by a multidimensional array. These arrays, known as _tensors_, are natural generalizations of matrices. Several approaches to define tensor-tensor products and tensor-matrix products are described in the literature, including the \(n\)-mode product [9, 25], the t-product [22, 31], and the c-product [21, 30]. Generalizations of the singular value decomposition (SVD) to tensors are described in [25] using the \(n\)-mode product (the so-called HOSVD), and in [21, 22] using the tensor c-product and t-product. The need to compute the SVD or a partial SVD of a tensor arises in a variety of applications, including image restoration, tensor completion [10], robust tensor principal component analysis [13], tensor compression [3], and recognition of color faces [17, 18]. These applications require knowledge of the largest singular values and associated lateral tensor singular slices. It is the purpose of this paper to introduce a new restarted tensor Lanczos bidiagonalization method for third-order tensors using the t-product for approximating a few of the largest singular values and associated lateral tensor singular slices. This method generalizes the approach described in [5] from matrices to tensors. We remark that the Lanczos bidiagonalization method (also known as the Golub-Kahan bidiagonalization method) for third-order tensors using the t-product has been described in [15, 16, 22, 32]; however, this bidiagonalization method differs from the one of the present paper.
In [5], the authors also describe a restarted Lanczos bidiagonalization method for the computation of a few of the smallest singular values and associated singular vectors of a large matrix by determining harmonic Ritz values. This paper presents an analogous scheme for third-order tensors. The organization of this paper is as follows. Section 2 recalls some properties of the t-product and Section 3 reviews tensor Lanczos bidiagonalization of third-order tensors using the t-product. Restarted tensor Lanczos bidiagonalization methods are presented for the approximation of a few of the largest singular values and associated lateral tensor singular slices by computing lateral tensor Ritz slices, as well as for approximating a few of the smallest singular values and associated lateral tensor singular slices by evaluating harmonic lateral tensor Ritz slices. Section 4 discusses multidimensional principal component analysis using a partial tensor HOSVD with application to face recognition, and Section 5 presents a few computed examples. Concluding remarks and possible extensions can be found in Section 6.

## 2 The tensor t-product

This section reviews results by Kilmer et al. [22, 23] and uses notation employed there and by Kolda and Bader [25]. A third-order tensor is an array \(\mathscr{A}=[a_{ijk}]\in\mathbb{R}^{\ell\times p\times n}\). Matrices and vectors are tensors of order two and one, respectively. A _slice_ or _frame_ of a third-order tensor \(\mathscr{A}\) is a section obtained by fixing any one of the three indices. Using MATLAB notation, \(\mathscr{A}(i,:,:)\), \(\mathscr{A}(:,j,:)\), and \(\mathscr{A}(:,:,k)\) denote the \(i\)th horizontal, the \(j\)th lateral, and the \(k\)th frontal slices of \(\mathscr{A}\), respectively. The lateral slice \(\mathscr{A}(:,j,:)\) also is denoted by \(\vec{\mathscr{A}}_{j}\), and the frontal slice \(\mathscr{A}(:,:,k)\) is an \(\ell\times p\) matrix that is sometimes denoted by \(\mathscr{A}^{(k)}\). A _fiber_ of a third-order tensor \(\mathscr{A}\) is defined by fixing any two of the three indices. The fiber \(\mathscr{A}(i,j,:)\) is called a _tube_ of \(\mathscr{A}\). We will use capital calligraphic letters \(\mathscr{A}\) to denote third-order tensors, capital letters \(A\) to identify matrices, bold face lower case letters \(\mathbf{a}\) to denote tubes, and lower case letters \(a\) to denote scalars. Further, \(\mathbb{K}_{n}^{\ell\times p}=\mathbb{R}^{\ell\times p\times n}\) denotes the space of third-order tensors of size \(\ell\times p\times n\), \(\mathbb{K}_{n}^{\ell}=\mathbb{R}^{\ell\times 1\times n}\) stands for the space of lateral slices of size \(\ell\times n\), and \(\mathbb{K}_{n}=\mathbb{R}^{1\times 1\times n}\) denotes the space of tubes with \(n\) entries.
For a third-order tensor \(\mathscr{A}\in\mathbb{K}_{n}^{\ell\times p}\) with frontal slices \(\mathscr{A}^{(i)}\), \(i=1,\ldots,n\), we define:

* The block circulant matrix associated with \(\mathscr{A}\): \[\mathtt{bcirc}(\mathscr{A})=\begin{bmatrix}\mathscr{A}^{(1)}&\mathscr{A}^{(n)}&\ldots&\mathscr{A}^{(2)}\\ \mathscr{A}^{(2)}&\mathscr{A}^{(1)}&\ldots&\mathscr{A}^{(3)}\\ \vdots&\ddots&\ddots&\vdots\\ \mathscr{A}^{(n)}&\mathscr{A}^{(n-1)}&\ldots&\mathscr{A}^{(1)}\end{bmatrix}\in\mathbb{R}^{\ell n\times pn}. \tag{2.1}\]
* The operator \(\mathtt{unfold}\) applied to \(\mathscr{A}\) gives the matrix made up of its frontal slices, \[\mathtt{unfold}(\mathscr{A})=\begin{bmatrix}\mathscr{A}^{(1)}\\ \mathscr{A}^{(2)}\\ \vdots\\ \mathscr{A}^{(n)}\end{bmatrix}\in\mathbb{R}^{\ell n\times p}.\] We also will need the inverse operator \(\mathtt{fold}\) such that \(\mathtt{fold}(\mathtt{unfold}(\mathscr{A}))=\mathscr{A}\).
* The block diagonal matrix associated with \(\mathscr{A}\) is defined as \[\mathtt{bdiag}(\mathscr{A})=\begin{bmatrix}\mathscr{A}^{(1)}&&&\\ &\mathscr{A}^{(2)}&&\\ &&\ddots&\\ &&&\mathscr{A}^{(n)}\end{bmatrix}\in\mathbb{R}^{\ell n\times pn}.\]

**Definition 1**: _([23]) Let \(\mathscr{A}\in\mathbb{K}_{n}^{\ell\times q}\) and \(\mathscr{B}\in\mathbb{K}_{n}^{q\times p}\) be third-order tensors. The t-product of \(\mathscr{A}\) and \(\mathscr{B}\) is defined by_ \[\mathscr{A}\star\mathscr{B}:=\texttt{fold}\left(\texttt{bcirc}(\mathscr{A})\,\texttt{unfold}(\mathscr{B})\right)\in\mathbb{K}_{n}^{\ell\times p}.\]

The block circulant matrix (2.1) can be block-diagonalized by using the discrete Fourier transform (DFT) as follows: \[\texttt{bcirc}(\mathscr{A})=\left(F_{n}^{H}\otimes I_{\ell}\right)\texttt{bdiag}(\widehat{\mathscr{A}})\left(F_{n}\otimes I_{p}\right),\] where \(F_{n}\in\mathbb{C}^{n\times n}\) is the discrete Fourier matrix, \(F_{n}^{H}\) denotes its conjugate transpose, \(\widehat{\mathscr{A}}\) stands for the Fourier transform of \(\mathscr{A}\) along each tube, \(I_{\ell}\in\mathbb{R}^{\ell\times\ell}\) denotes the identity matrix, and \(\otimes\) is the Kronecker product. The tensor \(\widehat{\mathscr{A}}\) can be computed with the fast Fourier transform (FFT) algorithm; see [23] for details. Using MATLAB notation, we have \[\widehat{\mathscr{A}}=\texttt{fft}(\mathscr{A},[\,],3).\] The inverse operation can be evaluated in MATLAB with the command \[\mathscr{A}=\texttt{ifft}(\widehat{\mathscr{A}},[\,],3).\] Hence, the t-product \(\mathscr{C}=\mathscr{A}\star\mathscr{B}\) can be evaluated as \[\widehat{\mathscr{C}}^{(i)}=\widehat{\mathscr{A}}^{(i)}\widehat{\mathscr{B}}^{(i)},\qquad i=1,2,\ldots,n, \tag{2.2}\] where \(\widehat{\mathscr{A}}^{(i)}\), \(\widehat{\mathscr{B}}^{(i)}\), and \(\widehat{\mathscr{C}}^{(i)}\) are the \(i\)th frontal slices of the tensors \(\widehat{\mathscr{A}}\), \(\widehat{\mathscr{B}}\), and \(\widehat{\mathscr{C}}\), respectively.
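A minimal NumPy sketch of these constructions and of the slice-wise evaluation (2.2) of the t-product follows; for simplicity it processes all \(n\) frontal slices instead of exploiting the conjugate symmetry discussed next:

```python
import numpy as np

def unfold(A):
    """Stack the frontal slices A[:, :, k] of an (l, p, n) tensor vertically."""
    return np.concatenate([A[:, :, k] for k in range(A.shape[2])], axis=0)

def fold(M, n):
    """Inverse of unfold: rebuild the (l, p, n) tensor from an (l*n, p) matrix."""
    return np.stack(np.split(M, n, axis=0), axis=2)

def bcirc(A):
    """Block circulant matrix of Eq. (2.1); block (i, j) holds A^{((i-j) mod n)+1}."""
    n = A.shape[2]
    return np.block([[A[:, :, (i - j) % n] for j in range(n)] for i in range(n)])

def t_product(A, B):
    """t-product via the DFT along the tubes, Eq. (2.2)."""
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    Ch = np.einsum('lqk,qpk->lpk', Ah, Bh)   # one matrix product per slice
    return np.fft.ifft(Ch, axis=2).real      # real output for real inputs

A, B = np.random.rand(4, 3, 5), np.random.rand(3, 2, 5)
C1 = t_product(A, B)
C2 = fold(bcirc(A) @ unfold(B), 5)           # Definition 1; agrees with C1
assert np.allclose(C1, C2)
```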
As already pointed out by Kilmer et al. [22], one can use symmetry properties of the DFT when applied to real data to reduce the computational effort when evaluating the t-product with the FFT. This is described by the following result, which can be found, e.g., in [33].

**Lemma 1**: _Given a real vector \(v\in\mathbb{R}^{n}\), the associated DFT vector \(\widehat{v}=F_{n}v\) satisfies_ \[\widehat{v}_{1}\in\mathbb{R},\quad\texttt{conj}\left(\widehat{v}_{i}\right)=\widehat{v}_{n-i+2},\quad i=2,3,\ldots,\left[\frac{n+1}{2}\right],\] _where \(\texttt{conj}\) denotes the complex conjugation operator and \(\left[\frac{n+1}{2}\right]\) denotes the integer part of \(\frac{n+1}{2}\)._

It follows that for a third-order tensor \(\mathscr{A}\in\mathbb{K}_{n}^{\ell\times p}\), we have \[\widehat{\mathscr{A}}^{(1)}\in\mathbb{R}^{\ell\times p},\quad\texttt{conj}\left(\widehat{\mathscr{A}}^{(i)}\right)=\widehat{\mathscr{A}}^{(n-i+2)},\quad i=2,3,\ldots,\left[\frac{n+1}{2}\right].\] This shows that the t-product of two third-order tensors can be determined by evaluating just about half the number of products involved in (2.2). Algorithm 1 describes the computations. The following definition is concerned with the t-product of a third-order tensor and a tube.

**Definition 2**: _Let \(\mathscr{A}\in\mathbb{K}_{n}^{\ell\times p}\) and \(\mathbf{b}\in\mathbb{K}_{n}\). Then \(\mathscr{C}:=\mathscr{A}\star\mathbf{b}\in\mathbb{K}_{n}^{\ell\times p}\) is obtained by applying the inverse DFT along each tube of \(\widehat{\mathscr{C}}\), where each frontal slice is determined by the standard matrix product between each frame of \(\widehat{\mathscr{A}}\) and \(\widehat{\mathbf{b}}\), i.e.,_ \[\widehat{\mathscr{C}}^{(i)}=\widehat{\mathscr{A}}^{(i)}\widehat{\mathbf{b}}^{(i)}=\widehat{\mathbf{b}}^{(i)}\widehat{\mathscr{A}}^{(i)},\quad i=1,2,\ldots,n.\]

A third-order tensor \(\mathscr{A}\in\mathbb{K}_{n}^{\ell\times p}\) can be written as \[\mathscr{A}=\left[\vec{\mathscr{A}}_{1},\vec{\mathscr{A}}_{2},\ldots,\vec{\mathscr{A}}_{p}\right];\] thus, for the tensors \(\mathscr{A}\in\mathbb{K}_{n}^{\ell\times q}\) and \(\mathscr{B}\in\mathbb{K}_{n}^{q\times p}\), the t-product \(\mathscr{A}\star\mathscr{B}\) can be expressed as \[\mathscr{A}\star\mathscr{B}=\left[\mathscr{A}\star\vec{\mathscr{B}}_{1},\mathscr{A}\star\vec{\mathscr{B}}_{2},\ldots,\mathscr{A}\star\vec{\mathscr{B}}_{p}\right],\] where \[\mathscr{A}\star\vec{\mathscr{B}}_{i}=(\mathscr{A}\star\mathscr{B})(:,i,:),\quad i=1,2,\ldots,p.\] The Frobenius norm of a third-order tensor \(\mathscr{A}\in\mathbb{K}_{n}^{\ell\times p}\) is given by \[\left\|\mathscr{A}\right\|_{F}:=\sqrt{\sum_{i_{1},i_{2},i_{3}=1}^{\ell,p,n}a_{i_{1},i_{2},i_{3}}^{2}},\] and the inner product of two third-order tensors of the same size \(\mathscr{A},\mathscr{B}\in\mathbb{K}_{n}^{\ell\times p}\) is defined as \[\left\langle\mathscr{A},\mathscr{B}\right\rangle:=\sum_{i_{1},i_{2},i_{3}=1}^{\ell,p,n}a_{i_{1},i_{2},i_{3}}b_{i_{1},i_{2},i_{3}}.\] We have the relations \[\left\|\mathscr{A}\right\|_{F}=\frac{1}{\sqrt{n}}\left\|\widehat{\mathscr{A}}\right\|_{F},\qquad\left\langle\mathscr{A},\mathscr{B}\right\rangle=\frac{1}{n}\langle\widehat{\mathscr{A}},\widehat{\mathscr{B}}\rangle.\] We recall for later use the definitions of some special tensors and operations:

* The identity tensor \(\mathscr{I}_{\ell}\in\mathbb{K}_{n}^{\ell\times\ell}\) is the tensor whose first frontal slice is the identity matrix and all other slices have zero entries only.
* The transpose of a real third-order tensor, \(\mathscr{A}\in\mathbb{K}_{n}^{\ell\times p}\), denoted by \(\mathscr{A}^{H}\in\mathbb{K}_{n}^{p\times\ell}\), is the tensor obtained by first transposing each one of the frontal slices of \(\mathscr{A}\), and then reversing the order of the transposed frontal slices \(2\) through \(n\); see [23]. Let the third-order tensors \(\mathscr{A}\) and \(\mathscr{B}\) be such that the products \(\mathscr{A}\star\mathscr{B}\) and \(\mathscr{B}^{H}\star\mathscr{A}^{H}\) are defined. Then, similarly to the matrix transpose, the tensor transpose satisfies \((\mathscr{A}\star\mathscr{B})^{H}=\mathscr{B}^{H}\star\mathscr{A}^{H}\). * A tensor \(\mathscr{Q}\in\mathbb{K}_{n}^{\ell\times\ell}\) is said to be orthogonal if and only if \[\mathscr{Q}^{H}\star\mathscr{Q}=\mathscr{Q}\star\mathscr{Q}^{H}=\mathscr{I}_{\ell}.\] * A square third-order tensor \(\mathscr{A}\in\mathbb{K}_{n}^{\ell\times\ell}\) is invertible if there is a third-order tensor \(\mathscr{B}\in\mathbb{K}_{n}^{\ell\times\ell}\) such that \[\mathscr{A}\star\mathscr{B}=\mathscr{I}_{\ell},\quad\mathscr{B}\star\mathscr{A}=\mathscr{I}_{\ell}.\] In this case \(\mathscr{B}\) is said to be the inverse of \(\mathscr{A}\), and is denoted by \(\mathscr{A}^{-1}\). **Definition 3**: _([22]) Let \(\vec{\mathscr{A}}_{i}\in\mathbb{K}_{n}^{\ell}\) for \(i=1,2,\ldots,p\) be lateral slices of the tensor \(\mathscr{A}\in\mathbb{K}_{n}^{\ell\times p}\). A t-linear combination of these slices is defined as_ \[\vec{\mathscr{A}}_{1}\star\mathbf{b}_{1}+\vec{\mathscr{A}}_{2}\star\mathbf{b}_{2}+\ldots+\vec{\mathscr{A}}_{p}\star\mathbf{b}_{p},\] _where the \(\mathbf{b}_{i}\) for \(i=1,2,\ldots,p\) are tubes in \(\mathbb{K}_{n}\). Moreover,_ \[\mathsf{span}\left\{\vec{\mathscr{A}}_{1},\vec{\mathscr{A}}_{2},\ldots,\vec{\mathscr{A}}_{p}\right\}=\left\{\sum_{i=1}^{p}\vec{\mathscr{A}}_{i}\star\mathbf{b}_{i}:\ \ \mathbf{b}_{i}\in\mathbb{K}_{n},\ \ i=1,2,\ldots,p\right\}.\] The tensor singular value decomposition (t-SVD) associated with the t-product, introduced by Kilmer and Martin [23], generalizes the classical SVD of a matrix. It is described in the next theorem. **Theorem 4**: _([23]) Let \(\mathscr{A}\in\mathbb{K}_{n}^{\ell\times p}\) be a third-order tensor. Then it can be represented as the t-product of three third-order tensors,_ \[\mathscr{A}=\mathscr{U}\star\mathscr{S}\star\mathscr{V}^{H}, \tag{2.3}\] _where \(\mathscr{U}\in\mathbb{K}_{n}^{\ell\times\ell}\) and \(\mathscr{V}\in\mathbb{K}_{n}^{p\times p}\) are orthogonal tensors, and \(\mathscr{S}\in\mathbb{K}_{n}^{\ell\times p}\) is an f-diagonal tensor, i.e., each frontal slice of the DFT of \(\mathscr{S}\) is a diagonal matrix._ Algorithm 2 summarizes the computation of the t-SVD of a third-order tensor with the aid of the FFT. The factorization (2.3) can be expressed as \[\mathscr{A}=\mathscr{U}\star\mathscr{S}\star\mathscr{V}^{H}=\sum_{i=1}^{\min\{\ell,p\}}\vec{\mathscr{U}}_{i}\star\mathbf{s}_{i}\star\vec{\mathscr{V}}_{i}^{H},\] where the \(\mathbf{s}_{i}=\mathscr{S}(i,i,:)\) are singular tubes, and \(\vec{\mathscr{U}}_{i}=\mathscr{U}(:,i,:)\) and \(\vec{\mathscr{V}}_{i}=\mathscr{V}(:,i,:)\) are left and right lateral tensor singular slices, respectively, for \(i=1,2,\ldots,\min(\ell,p)\). The triplets \(\{\mathbf{s}_{i},\vec{\mathscr{U}}_{i},\vec{\mathscr{V}}_{i}\}_{i=1:\min(\ell,p)}\) will be referred to as singular triplets of the tensor \(\mathscr{A}\).
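For illustration, the slice-wise computation summarized in Algorithm 2 might be realized in MATLAB as in the following minimal sketch (the function name tsvd is ours; the conjugate symmetry of Lemma 1 is used both to halve the work and to guarantee real factors after the inverse FFT):

```
function [U, S, V] = tsvd(A)
% Sketch of Algorithm 2: t-SVD of a real l x p x n tensor, A = U*S*V^H.
% Slices 1,...,ceil((n+1)/2) are factored in the Fourier domain; the
% remaining slices follow from the conjugate symmetry of Lemma 1.
[l, p, n] = size(A);
Ahat = fft(A, [], 3);
Uhat = zeros(l, l, n); Shat = zeros(l, p, n); Vhat = zeros(p, p, n);
for i = 1:ceil((n + 1) / 2)
    [Uhat(:,:,i), Shat(:,:,i), Vhat(:,:,i)] = svd(Ahat(:,:,i));
end
for i = ceil((n + 1) / 2) + 1 : n
    Uhat(:,:,i) = conj(Uhat(:,:,n-i+2));
    Shat(:,:,i) = Shat(:,:,n-i+2);        % singular values are real
    Vhat(:,:,i) = conj(Vhat(:,:,n-i+2));
end
U = real(ifft(Uhat, [], 3));
S = real(ifft(Shat, [], 3));
V = real(ifft(Vhat, [], 3));
end
```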
The singular tubes are ordered so that their norms \(\sigma_{i}=\|\mathbf{s}_{i}\|_{F}\) are decreasing with \(i\), i.e., \[\sigma_{1}\geq\sigma_{2}\geq\ldots\geq\sigma_{\min(\ell,p)}\geq 0.\] Note that we also have the relations \[\mathscr{A}\star\vec{\mathscr{V}}_{i}=\vec{\mathscr{U}}_{i}\star\mathbf{s}_{i},\quad\mathscr{A}^{H}\star\vec{\mathscr{U}}_{i}=\vec{\mathscr{V}}_{i}\star\mathbf{s}_{i},\quad i=1,2,\ldots,\min\{\ell,p\}.\] We remark that the latter relations have to be modified if \(\mathscr{A}\) has complex-valued entries. We note for future reference that \[\mathscr{S}(i,i,1)=\frac{1}{n}\sum_{j=1}^{n}\widehat{\mathscr{S}}(i,i,j). \tag{2.4}\] In the following, we will need the notion of rank of a third-order tensor. **Definition 5**.: _Let \(\mathscr{A}\in\mathbb{K}_{n}^{\ell\times p}\) be a third-order tensor. Then its tubal rank is defined as_ \[\mathtt{rank}_{t}\left(\mathscr{A}\right)=\mathtt{card}\left\{\sigma_{i}\neq 0,\ \ i=1,2,\ldots,\min\{\ell,p\}\right\},\] _where \(\sigma_{i}\) is the norm of the singular tube \(\mathbf{s}_{i}\) of \(\mathscr{A}\) and \(\mathtt{card}\) stands for the cardinality._ The next result generalizes the Eckart-Young theorem for matrices to third-order tensors. It is important in the context of data compression. **Theorem 6**.: _([3, 23]) Let the t-SVD of a third-order tensor \(\mathscr{A}\in\mathbb{K}_{n}^{\ell\times p}\) be given by \(\mathscr{A}=\mathscr{U}\star\mathscr{S}\star\mathscr{V}^{H}\). For \(1\leq k\leq\min\{\ell,p\}\), define the truncated t-SVD by_ \[\mathscr{A}_{k}=\sum_{i=1}^{k}\vec{\mathscr{U}}_{i}\star\mathbf{s}_{i}\star\vec{\mathscr{V}}_{i}^{H}.\] _Then_ \[\mathscr{A}_{k}=\operatorname*{arg\,min}_{\widetilde{\mathscr{A}}\in\mathbb{M}}\left\|\mathscr{A}-\widetilde{\mathscr{A}}\right\|_{F},\] _where \(\mathbb{M}\) is the set given by \(\mathbb{M}=\{\mathscr{X}\star\mathscr{Y}:\ \mathscr{X}\in\mathbb{K}_{n}^{\ell\times k},\ \mathscr{Y}\in\mathbb{K}_{n}^{k\times p}\}\)._ The matrix QR factorization also can be generalized to tensors. **Theorem 7**.: _([23]) Let \(\mathscr{A}\in\mathbb{K}_{n}^{\ell\times p}\). Then \(\mathscr{A}\) can be factored as_ \[\mathscr{A}=\mathscr{Q}\star\mathscr{R}, \tag{2.5}\] _where \(\mathscr{Q}\in\mathbb{K}_{n}^{\ell\times\ell}\) is an orthogonal tensor and \(\mathscr{R}\in\mathbb{K}_{n}^{\ell\times p}\) is an f-upper triangular tensor, i.e., each frontal slice of the DFT of \(\mathscr{R}\) is an upper triangular matrix. The factorization (2.5) is referred to as the t-QR factorization of \(\mathscr{A}\)._ Algorithm 3 summarizes the computation of the t-QR factorization (2.5). The function \(\mathtt{qr}\) in line 3 of the algorithm computes a QR factorization of the matrix \(\widehat{\mathscr{A}}^{(i)}\in\mathbb{C}^{\ell\times p}\); thus \(\widehat{\mathscr{A}}^{(i)}=\widehat{\mathscr{Q}}^{(i)}\widehat{\mathscr{R}}^{(i)}\), where the matrix \(\widehat{\mathscr{Q}}^{(i)}\in\mathbb{C}^{\ell\times\ell}\) is unitary and the matrix \(\widehat{\mathscr{R}}^{(i)}\in\mathbb{C}^{\ell\times p}\) has an upper triangular leading principal submatrix of order \(\min\{\ell,p\}\). ``` 0:\(\mathscr{A}\in\mathbb{K}_{n}^{\ell\times p}\). 0:\(\mathscr{Q}\in\mathbb{K}_{n}^{\ell\times\ell}\), \(\mathscr{R}\in\mathbb{K}_{n}^{\ell\times p}\). 1:\(\widehat{\mathscr{A}}=\mathtt{fft}(\mathscr{A},[],3)\). 2:for\(i=1,\ldots,\left\lceil\frac{n+1}{2}\right\rceil\)do 3:\(\left[\widehat{\mathscr{Q}}^{(i)},\widehat{\mathscr{R}}^{(i)}\right]=\mathtt{qr}(\widehat{\mathscr{A}}^{(i)})\).
4:endfor 5:for\(i=\left\lceil\frac{n+1}{2}\right\rceil+1,\ldots,n\)do 6:\(\widehat{\mathscr{Q}}^{(i)}=\mathtt{conj}\left(\widehat{\mathscr{Q}}^{(n-i+2)}\right)\) and \(\widehat{\mathscr{R}}^{(i)}=\mathtt{conj}\left(\widehat{\mathscr{R}}^{(n-i+2)}\right)\). 7:endfor 8: Compute \(\mathscr{Q}=\mathtt{ifft}(\widehat{\mathscr{Q}},[],3)\) and \(\mathscr{R}=\mathtt{ifft}(\widehat{\mathscr{R}},[],3)\). ``` **Algorithm 3** t-QR factorization of a third-order tensor. Following Kilmer et al. [22], we define orthogonality of lateral tensor slices. Let \(\vec{\mathscr{X}}\) and \(\vec{\mathscr{Y}}\) be two lateral tensor slices in \(\mathbb{K}_{n}^{\ell}\) and define the inner product of these slices as \[\left\langle\vec{\mathscr{X}},\vec{\mathscr{Y}}\right\rangle:=\vec{\mathscr{X}}^{H}\star\vec{\mathscr{Y}}\in\mathbb{K}_{n}.\] The lateral slices in the set \[\left\{\vec{\mathscr{X}}_{1},\vec{\mathscr{X}}_{2},\ldots,\vec{\mathscr{X}}_{p}\right\}, \tag{2.6}\] with \(p\geq 2\), are said to be _orthogonal_ if \[\left\langle\vec{\mathscr{X}}_{i},\vec{\mathscr{X}}_{j}\right\rangle=\left\{\begin{array}{ll}\alpha_{i}\mathbf{e}_{1}&\text{if }i=j,\\ \mathbf{0}&\text{if }i\neq j,\end{array}\right.\] where \(\mathbf{e}_{1}\) is the tube in \(\mathbb{K}_{n}\) whose first element is \(1\) and whose remaining elements vanish, and the \(\alpha_{i}\), \(i=1,2,\ldots,p\), are nonvanishing scalars. Furthermore, if \(\alpha_{i}=1\) for all \(i=1,2,\ldots,p\), then the set (2.6) is said to be _orthonormal_. Following [22], we observe that any lateral slice \(\vec{\mathscr{X}}\in\mathbb{K}_{n}^{\ell}\) can be normalized as \[\vec{\mathscr{X}}=\vec{\mathscr{Y}}\star\mathbf{a} \tag{2.7}\] with \(\vec{\mathscr{Y}}\in\mathbb{K}_{n}^{\ell}\), \(\left\|\vec{\mathscr{Y}}\right\|=1\), and \(\mathbf{a}\in\mathbb{K}_{n}\). Here the tensor norm is defined as \[\left\|\vec{\mathscr{Y}}\right\|=\frac{\left\|\left\langle\vec{\mathscr{Y}},\vec{\mathscr{Y}}\right\rangle\right\|_{F}}{\left\|\vec{\mathscr{Y}}\right\|_{F}}.\] Note that \(\vec{\mathscr{Y}}\) has unit norm if and only if \(\left\langle\vec{\mathscr{Y}},\vec{\mathscr{Y}}\right\rangle=\boldsymbol{e}_{1}\); see [22] for more detail. Algorithm 4 summarizes the normalization process. The MATLAB function randn in the algorithm generates a vector in \(\mathbb{R}^{\ell}\) with normally distributed pseudorandom entries with mean zero and variance one. ``` 0:\(\vec{\mathscr{X}}\in\mathbb{K}_{n}^{\ell}\). 0:\(\vec{\mathscr{Y}}\in\mathbb{K}_{n}^{\ell}\) of unit norm and \(\mathbf{a}\in\mathbb{K}_{n}\) that satisfy (2.7). 1:\(\widehat{\mathscr{Y}}=\mathtt{fft}(\vec{\mathscr{X}},[],3)\). 2:for\(i=1,\ldots,\left\lceil\frac{n+1}{2}\right\rceil\)do 3:\(\widehat{\mathbf{a}}^{(i)}=\left\|\widehat{\mathscr{Y}}^{(i)}\right\|_{F}\). 4:if\(\widehat{\mathbf{a}}^{(i)}>0\)then 5:\(\widehat{\mathscr{Y}}^{(i)}=\frac{\widehat{\mathscr{Y}}^{(i)}}{\widehat{\mathbf{a}}^{(i)}}\) 6:else 7:\(\widehat{\mathscr{Y}}^{(i)}=\mathtt{randn}(\ell,1)\); \(\mathbf{b}^{(i)}=\left\|\widehat{\mathscr{Y}}^{(i)}\right\|_{F}\), and \(\widehat{\mathscr{Y}}^{(i)}=\frac{\widehat{\mathscr{Y}}^{(i)}}{\mathbf{b}^{(i)}}\). 8:endif 9:endfor 10:for\(i=\left\lceil\frac{n+1}{2}\right\rceil+1,\ldots,n\)do 11:\(\widehat{\mathscr{Y}}^{(i)}=\mathtt{conj}\left(\widehat{\mathscr{Y}}^{(n-i+2)}\right)\), \(\widehat{\mathbf{a}}^{(i)}=\mathtt{conj}\left(\widehat{\mathbf{a}}^{(n-i+2)}\right)\). 12:endfor 13:\(\vec{\mathscr{Y}}=\mathtt{ifft}(\widehat{\mathscr{Y}},[],3)\), \(\mathbf{a}=\mathtt{ifft}(\widehat{\mathbf{a}},[],3)\). ``` **Algorithm 4** Normalize(\(\vec{\mathscr{X}}\)).
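To make the constructions of this section concrete, the following minimal MATLAB sketch evaluates the t-product via (2.2) in the manner of Algorithm 1, i.e., exploiting the conjugate symmetry of Lemma 1 (the function name tprod is ours):

```
function C = tprod(A, B)
% Sketch of Algorithm 1: t-product C = A * B of real tensors
% A (l x q x n) and B (q x p x n) via (2.2); only about half of the
% slice products are evaluated, by Lemma 1.
[l, ~, n] = size(A);
p = size(B, 2);
Ahat = fft(A, [], 3); Bhat = fft(B, [], 3);
Chat = zeros(l, p, n);
for i = 1:ceil((n + 1) / 2)
    Chat(:,:,i) = Ahat(:,:,i) * Bhat(:,:,i);
end
for i = ceil((n + 1) / 2) + 1 : n
    Chat(:,:,i) = conj(Chat(:,:,n-i+2));   % conjugate symmetry
end
C = real(ifft(Chat, [], 3));
end
```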
## 3 Tensor Lanczos bidiagonalization for computing the largest and smallest singular triplets This section describes the Lanczos bidiagonalization process for tensors using the t-product, and discusses how approximations of the largest and smallest singular triplets of a large third-order tensor \(\mathscr{A}\in\mathbb{K}_{n}^{\ell\times p}\) can be computed. ### The tensor Lanczos bidiagonalization algorithm The Lanczos bidiagonalization process was introduced for matrices by Golub and Kahan [14] and therefore sometimes is referred to as the Golub-Kahan bidiagonalization process. For a matrix \(A\in\mathbb{R}^{\ell\times p}\), this process is closely related to the symmetric Lanczos process applied to the real symmetric matrices \(AA^{T}\) and \(A^{T}A\), or alternatively to the symmetric matrix \[\begin{bmatrix}0&A\\ A^{T}&0\end{bmatrix}.\] Lanczos bidiagonalization algorithms have been applied to solve numerous problems, such as large-scale least squares problems [28], the approximation of the largest or smallest singular triplets of a large matrix [5, 19, 24], and Tikhonov regularization of large linear discrete ill-posed problems; see, e.g., [11, 12]. We note that the bidiagonalization method described in [28] and applied in [11, 12] reduces a large matrix \(A\) to a small lower bidiagonal matrix, while in [5] the matrix \(A\) is reduced to a small upper bidiagonal matrix. We will review the latter approach. Application of \(m\ll\min\{\ell,p\}\) steps of the Lanczos bidiagonalization process to the matrix \(A\in\mathbb{R}^{\ell\times p}\) with the initial unit vector \(p_{1}\in\mathbb{R}^{p}\) generically produces two matrices \[P_{m}=[p_{1},p_{2},\ldots,p_{m}]\in\mathbb{R}^{p\times m},\quad Q_{m}=[q_{1},q_{2},\ldots,q_{m}]\in\mathbb{R}^{\ell\times m}.\] The columns of \(P_{m}\) and \(Q_{m}\) form orthonormal bases for the Krylov subspaces \[\mathscr{K}_{m}\left(A^{T}A,p_{1}\right)=\texttt{span}\{p_{1},A^{T}Ap_{1},\left(A^{T}A\right)^{2}p_{1},\ldots,\left(A^{T}A\right)^{m-1}p_{1}\},\] \[\mathscr{K}_{m}\left(AA^{T},q_{1}\right)=\texttt{span}\{q_{1},AA^{T}q_{1},\left(AA^{T}\right)^{2}q_{1},\ldots,\left(AA^{T}\right)^{m-1}q_{1}\},\] respectively, where \(q_{1}=Ap_{1}/\|Ap_{1}\|_{2}\). A matrix interpretation of the recursion relations of the Lanczos process gives the matrix relations \[AP_{m}=Q_{m}B_{m}, \tag{3.1}\] \[A^{T}Q_{m}=P_{m}B_{m}^{T}+\beta_{m}p_{m+1}e_{m}^{T}, \tag{3.2}\] where \(e_{m}=[0,\ldots,0,1]^{T}\in\mathbb{R}^{m}\), \(\beta_{m}\geq 0\) is a scalar, and \(p_{m+1}\in\mathbb{R}^{p}\). The matrix \(B_{m}\in\mathbb{R}^{m\times m}\) is upper bidiagonal and satisfies \(B_{m}=Q_{m}^{T}AP_{m}\). When considering bidiagonalization of a third-order tensor \(\mathscr{A}\) using the t-product, the scalars and the columns of the matrices \(P_{m}\) and \(Q_{m}\) in the matrix decompositions (3.1) and (3.2) become tubes and lateral slices, respectively, in the decompositions determined by the tensor Lanczos bidiagonalization process.
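For reference, a compact MATLAB sketch of the matrix process (3.1)-(3.2), with the kind of reorthogonalization used later in Algorithm 5, might look as follows (the function name and the reorthogonalization strategy are ours; this is an illustration, not the implementation of the cited references):

```
function [P, Q, Bm, beta_m, p_next] = gk_bidiag(A, p1, m)
% Sketch of m steps of Golub-Kahan bidiagonalization (3.1)-(3.2):
% A*P = Q*Bm and A'*Q = P*Bm' + beta_m*p_next*em', Bm upper bidiagonal.
% p1 is a unit starting vector of length size(A,2).
[l, pd] = size(A);
P = zeros(pd, m); Q = zeros(l, m); Bm = zeros(m, m);
P(:,1) = p1;
q = A * p1; Bm(1,1) = norm(q); Q(:,1) = q / Bm(1,1);
for i = 1:m
    r = A' * Q(:,i) - Bm(i,i) * P(:,i);
    r = r - P(:,1:i) * (P(:,1:i)' * r);     % reorthogonalization
    if i < m
        Bm(i,i+1) = norm(r); P(:,i+1) = r / Bm(i,i+1);
        q = A * P(:,i+1) - Bm(i,i+1) * Q(:,i);
        q = q - Q(:,1:i) * (Q(:,1:i)' * q); % reorthogonalization
        Bm(i+1,i+1) = norm(q); Q(:,i+1) = q / Bm(i+1,i+1);
    end
end
beta_m = norm(r); p_next = r / beta_m;      % residual term in (3.2)
end
```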
The application of \(m\) steps of tensor Lanczos bidiagonalization to the third-order tensor \(\mathscr{A}\in\mathbb{K}_{n}^{\ell\times p}\) generically computes two tensors \[\mathscr{P}_{m}=\left[\vec{\mathscr{P}}_{1},\vec{\mathscr{P}}_{2},\ldots,\vec{\mathscr{P}}_{m}\right]\in\mathbb{K}_{n}^{p\times m}\ \ \text{and}\ \ \mathscr{Q}_{m}=\left[\vec{\mathscr{Q}}_{1},\vec{\mathscr{Q}}_{2},\ldots,\vec{\mathscr{Q}}_{m}\right]\in\mathbb{K}_{n}^{\ell\times m},\] whose lateral slices form bases for the tensor Krylov subspaces \(\mathscr{K}_{m}\left(\mathscr{A}^{H}\star\mathscr{A},\vec{\mathscr{P}}_{1}\right)\) and \(\mathscr{K}_{m}\left(\mathscr{A}\star\mathscr{A}^{H},\vec{\mathscr{Q}}_{1}\right)\), respectively. They are defined by \[\mathscr{K}_{m}\left(\mathscr{A}^{H}\star\mathscr{A},\vec{\mathscr{P}}_{1}\right)=\texttt{span}\{\vec{\mathscr{P}}_{1},\left(\mathscr{A}^{H}\star\mathscr{A}\right)\star\vec{\mathscr{P}}_{1},\ldots,\left(\mathscr{A}^{H}\star\mathscr{A}\right)^{m-1}\star\vec{\mathscr{P}}_{1}\},\] \[\mathscr{K}_{m}\left(\mathscr{A}\star\mathscr{A}^{H},\vec{\mathscr{Q}}_{1}\right)=\texttt{span}\{\vec{\mathscr{Q}}_{1},\left(\mathscr{A}\star\mathscr{A}^{H}\right)\star\vec{\mathscr{Q}}_{1},\ldots,\left(\mathscr{A}\star\mathscr{A}^{H}\right)^{m-1}\star\vec{\mathscr{Q}}_{1}\},\] where \(\vec{\mathscr{P}}_{1}\in\mathbb{K}_{n}^{p}\) is a lateral slice of unit norm, and the lateral slice \(\vec{\mathscr{Q}}_{1}\in\mathbb{K}_{n}^{\ell}\) is of unit norm and proportional to \(\mathscr{A}\star\vec{\mathscr{P}}_{1}\). Algorithm 5 describes the tensor Lanczos bidiagonalization algorithm. ``` 0:\(\mathscr{A}\in\mathbb{K}_{n}^{\ell\times p}\), number of steps \(m\leqslant\min\{\ell,p\}\), \(\vec{\mathscr{P}}_{1}\in\mathbb{K}_{n}^{p}\) with unit norm. 0:\(\mathscr{P}_{m}=[\vec{\mathscr{P}}_{1},\vec{\mathscr{P}}_{2},\ldots,\vec{\mathscr{P}}_{m}]\in\mathbb{K}_{n}^{p\times m}\) and \(\mathscr{Q}_{m}=[\vec{\mathscr{Q}}_{1},\vec{\mathscr{Q}}_{2},\ldots,\vec{\mathscr{Q}}_{m}]\in\mathbb{K}_{n}^{\ell\times m}\) with orthonormal lateral slices, \(\mathscr{B}_{m}\in\mathbb{K}_{n}^{m\times m}\) a bidiagonal tensor, and \(\vec{\mathscr{R}}_{m}\in\mathbb{K}_{n}^{p}\). 1:\(\mathscr{P}_{1}=\left[\vec{\mathscr{P}}_{1}\right]\). 2:\(\vec{\mathscr{Q}}_{1}=\mathscr{A}\star\vec{\mathscr{P}}_{1}\). 3:\([\vec{\mathscr{Q}}_{1},\boldsymbol{\alpha}_{1}]=\texttt{Normalize}(\vec{\mathscr{Q}}_{1})\). 4:\(\mathscr{Q}_{1}=\left[\vec{\mathscr{Q}}_{1}\right]\), \(\mathscr{B}_{m}(1,1,:)=\boldsymbol{\alpha}_{1}\). 5:for\(i=1\) to \(m\)do 6:\(\vec{\mathscr{R}}_{i}=\mathscr{A}^{H}\star\vec{\mathscr{Q}}_{i}-\boldsymbol{\alpha}_{i}\star\vec{\mathscr{P}}_{i}\). 7: Reorthogonalization \(\vec{\mathscr{R}}_{i}=\vec{\mathscr{R}}_{i}-\mathscr{P}_{i}\star(\mathscr{P}_{i}^{H}\star\vec{\mathscr{R}}_{i})\). 8:if\(i<m\)then 9:\([\vec{\mathscr{P}}_{i+1},\boldsymbol{\beta}_{i}]=\texttt{Normalize}(\vec{\mathscr{R}}_{i})\). 10:\(\mathscr{P}_{i+1}=\left[\mathscr{P}_{i},\vec{\mathscr{P}}_{i+1}\right]\), \(\mathscr{B}_{m}(i,i+1,:)=\boldsymbol{\beta}_{i}\). 11:\(\vec{\mathscr{Q}}_{i+1}=\mathscr{A}\star\vec{\mathscr{P}}_{i+1}-\boldsymbol{\beta}_{i}\star\vec{\mathscr{Q}}_{i}\). 12: Reorthogonalization \(\vec{\mathscr{Q}}_{i+1}=\vec{\mathscr{Q}}_{i+1}-\mathscr{Q}_{i}\star(\mathscr{Q}_{i}^{H}\star\vec{\mathscr{Q}}_{i+1})\). 13:\([\vec{\mathscr{Q}}_{i+1},\boldsymbol{\alpha}_{i+1}]=\texttt{Normalize}(\vec{\mathscr{Q}}_{i+1})\). 14:\(\mathscr{Q}_{i+1}=\left[\mathscr{Q}_{i},\vec{\mathscr{Q}}_{i+1}\right]\), \(\mathscr{B}_{m}(i+1,i+1,:)=\boldsymbol{\alpha}_{i+1}\).
15:endif 16:endfor ``` **Algorithm 5** Tensor Lanczos bidiagonalization using the t-product. We remark that Algorithm 5 differs from the tensor bidiagonalization algorithms described in [22, 32] in that the former produces an upper bidiagonal tensor \(\mathscr{B}_{m}\), while the latter determine a lower bidiagonal tensor. The use of an upper bidiagonal tensor in the present paper is inspired by the choices in [5, 14]. Algorithm 5 is said to break down when one of the tensor slices \(\vec{\mathscr{R}}_{i}\) or \(\vec{\mathscr{Q}}_{i+1}\) vanishes. We comment below on this situation, but note that breakdown is exceedingly rare. **Theorem 8**.: _Generically, Algorithm 5 determines the decompositions_ \[\mathscr{A}\star\mathscr{P}_{m}=\mathscr{Q}_{m}\star\mathscr{B}_{m}, \tag{3.3}\] \[\mathscr{A}^{H}\star\mathscr{Q}_{m}=\mathscr{P}_{m}\star\mathscr{B}_{m}^{H}+\vec{\mathscr{R}}_{m}\star\vec{\mathscr{E}}_{m}^{H}, \tag{3.4}\] _with \(\mathscr{P}_{m}\in\mathbb{K}_{n}^{p\times m}\) and \(\mathscr{Q}_{m}\in\mathbb{K}_{n}^{\ell\times m}\), where \(\mathscr{P}_{m}^{H}\star\mathscr{P}_{m}=\mathscr{I}_{m}\) and \(\mathscr{Q}_{m}^{H}\star\mathscr{Q}_{m}=\mathscr{I}_{m}\). The tensor \(\vec{\mathscr{E}}_{m}\in\mathbb{K}_{n}^{m}\) is the canonical lateral slice whose elements are zero except for the first element of the \(m\)th tube, which equals \(1\), and \(\vec{\mathscr{R}}_{m}\in\mathbb{K}_{n}^{p}\) is determined by lines 6 and 7 of Algorithm 5 such that \(\mathscr{P}_{m}^{H}\star\vec{\mathscr{R}}_{m}=0\). The tensor \(\mathscr{B}_{m}\in\mathbb{K}_{n}^{m\times m}\) is upper bidiagonal, i.e., each of its frontal slices is an upper bidiagonal matrix. Thus,_ \[\mathscr{B}_{m}=\left[\begin{array}{ccccc}\boldsymbol{\alpha}_{1}&\boldsymbol{\beta}_{1}&\boldsymbol{0}&\ldots&\boldsymbol{0}\\ \boldsymbol{0}&\boldsymbol{\alpha}_{2}&\boldsymbol{\beta}_{2}&\ddots&\vdots\\ \vdots&\ddots&\ddots&\ddots&\boldsymbol{0}\\ \boldsymbol{0}&\ldots&\boldsymbol{0}&\boldsymbol{\alpha}_{m-1}&\boldsymbol{\beta}_{m-1}\\ \boldsymbol{0}&\ldots&\ldots&\boldsymbol{0}&\boldsymbol{\alpha}_{m}\end{array}\right],\] _where \(\boldsymbol{\alpha}_{i}\) and \(\boldsymbol{\beta}_{i}\) are tubes in \(\mathbb{K}_{n}\)._ _Proof._ The relations (3.3) and (3.4) follow immediately from the recursion relations of Algorithm 5. The orthonormality of the lateral slices of \(\mathscr{P}_{m}\) and \(\mathscr{Q}_{m}\) can be shown by induction. The proof is closely related to the proof of the existence of the relations (3.1) and (3.2), and the properties of the matrices involved. The latter relations are used in [5]. The Lanczos bidiagonalization process may suffer from loss of orthogonality of the lateral slices of the tensors \(\mathscr{P}_{m}\) and \(\mathscr{Q}_{m}\). Therefore, reorthogonalization is carried out in lines 7 and 12 of Algorithm 5. We remark that reorthogonalization makes the algorithm more costly both in terms of storage and arithmetic floating point operations. The extra cost may be acceptable as long as the number of steps \(m\) is fairly small; see [5, 34] for discussions in the matrix case. Let \(\vec{\mathscr{R}}_{m}\) be the lateral slice computed in lines 6 and 7 of Algorithm 5 for \(i=m\). Then \[[\vec{\mathscr{P}}_{m+1},\boldsymbol{\beta}_{m}]=\texttt{Normalize}\left(\vec{\mathscr{R}}_{m}\right). \tag{3.5}\] In the rare event that some \(\boldsymbol{\beta}_{j}\), \(1\leqslant j<m\), vanishes, Algorithm 5 breaks down.
Then the singular tubes of \(\mathscr{B}_{j}\) are singular tubes of \(\mathscr{A}\), and the left and right lateral tensor singular slices are obtained as described below. When no breakdown takes place, we can express equation (3.4) as \[\mathscr{A}^{H}\star\mathscr{Q}_{m}=\mathscr{P}_{m+1}\star\mathscr{B}_{m,m+1}^{H},\] where \(\mathscr{P}_{m+1}\) is obtained from \(\mathscr{P}_{m}\) by appending the lateral slice \(\vec{\mathscr{P}}_{m+1}\), defined in (3.5), to get \(\mathscr{P}_{m+1}=\left[\mathscr{P}_{m},\vec{\mathscr{P}}_{m+1}\right]\in\mathbb{K}_{n}^{p\times(m+1)}\), and \(\mathscr{B}_{m,m+1}\in\mathbb{K}_{n}^{m\times(m+1)}\) is obtained by appending the lateral slice \(\boldsymbol{\beta}_{m}\star\vec{\mathscr{E}}_{m}\) to \(\mathscr{B}_{m}\), i.e., \(\mathscr{B}_{m,m+1}=\left[\mathscr{B}_{m},\boldsymbol{\beta}_{m}\star\vec{\mathscr{E}}_{m}\right]\). We turn to the connection between the partial Lanczos bidiagonalization of a third-order tensor and the partial Lanczos tridiagonalization of the tensor \(\mathscr{A}^{H}\star\mathscr{A}\). This connection will be used later. Multiplying (3.3) from the left by \(\mathscr{A}^{H}\), we get \[\mathscr{A}^{H}\star\mathscr{A}\star\mathscr{P}_{m}=\mathscr{A}^{H}\star\mathscr{Q}_{m}\star\mathscr{B}_{m}=\mathscr{P}_{m}\star\mathscr{B}_{m}^{H}\star\mathscr{B}_{m}+\vec{\mathscr{R}}_{m}\star\vec{\mathscr{E}}_{m}^{H}\star\mathscr{B}_{m}=\mathscr{P}_{m}\star\mathscr{B}_{m}^{H}\star\mathscr{B}_{m}+\vec{\mathscr{R}}_{m}\star\vec{\mathscr{E}}_{m}^{H}\star\boldsymbol{\alpha}_{m}. \tag{3.6}\] Let \(\mathscr{T}_{m}\) be the symmetric tridiagonal tensor defined by \[\mathscr{T}_{m}=\mathscr{B}_{m}^{H}\star\mathscr{B}_{m}\in\mathbb{K}_{n}^{m\times m}.\] Then (3.6) is a partial tensor Lanczos tridiagonalization of \(\mathscr{A}^{H}\star\mathscr{A}\) with initial lateral slice \(\vec{\mathscr{P}}_{1}=\mathscr{P}_{m}\star\vec{\mathscr{E}}_{1}\). The lateral slices of \(\mathscr{P}_{m}\) form an orthonormal basis for the tensor Krylov subspace \[\mathscr{K}_{m}\left(\mathscr{A}^{H}\star\mathscr{A},\vec{\mathscr{P}}_{1}\right)=\texttt{span}\{\vec{\mathscr{P}}_{1},\mathscr{A}^{H}\star\mathscr{A}\star\vec{\mathscr{P}}_{1},\left(\mathscr{A}^{H}\star\mathscr{A}\right)^{2}\star\vec{\mathscr{P}}_{1},\ldots,\left(\mathscr{A}^{H}\star\mathscr{A}\right)^{m-1}\star\vec{\mathscr{P}}_{1}\}.\] Similarly, multiplying (3.4) from the left by \(\mathscr{A}\), we obtain \[\mathscr{A}\star\mathscr{A}^{H}\star\mathscr{Q}_{m}=\mathscr{Q}_{m}\star\mathscr{B}_{m}\star\mathscr{B}_{m}^{H}+\mathscr{A}\star\vec{\mathscr{R}}_{m}\star\vec{\mathscr{E}}_{m}^{H}.\] It follows that the lateral slices of \(\mathscr{Q}_{m}\) form an orthonormal basis for the Krylov subspace \[\mathscr{K}_{m}\left(\mathscr{A}\star\mathscr{A}^{H},\vec{\mathscr{Q}}_{1}\right)=\mathsf{span}\{\vec{\mathscr{Q}}_{1},\mathscr{A}\star\mathscr{A}^{H}\star\vec{\mathscr{Q}}_{1},\left(\mathscr{A}\star\mathscr{A}^{H}\right)^{2}\star\vec{\mathscr{Q}}_{1},\ldots,\left(\mathscr{A}\star\mathscr{A}^{H}\right)^{m-1}\star\vec{\mathscr{Q}}_{1}\}.\] ### Approximating singular tubes and singular lateral slices We describe an approach to approximate the largest or smallest singular triplets (singular tubes and associated left and right lateral singular slices) of a large tensor \(\mathscr{A}\in\mathbb{K}_{n}^{\ell\times p}\) using restarted partial tensor Lanczos bidiagonalization.
Since the tensor \(\mathscr{A}\) is large, computing its \(k\) largest or smallest singular triplets by determining the t-SVD of \(\mathscr{A}\) is very expensive. The idea is to approximate the extreme singular triplets of the tensor \(\mathscr{A}\) by determining the extreme singular triplets of the bidiagonal tensor \(\mathscr{B}_{m}\), where \(m\) is small. Let \(\{\mathbf{s}_{i},\vec{\mathscr{U}}_{i},\vec{\mathscr{V}}_{i}\}\), \(1\leq i\leq m\), denote the singular triplets of \(\mathscr{B}_{m}\). They satisfy \[\mathscr{B}_{m}\star\vec{\mathscr{V}}_{i}=\mathbf{s}_{i}\star\vec{\mathscr{U}}_{i}\ \ \text{and}\ \ \mathscr{B}_{m}^{H}\star\vec{\mathscr{U}}_{i}=\mathbf{s}_{i}\star\vec{\mathscr{V}}_{i}.\] The \(k\leq m\) largest singular triplets of \(\mathscr{A}\) are approximated by the triplets \(\{\mathbf{s}_{i,m}^{\mathscr{A}},\vec{\mathscr{U}}_{i,m}^{\mathscr{A}},\vec{\mathscr{V}}_{i,m}^{\mathscr{A}}\}\) defined by \[\mathbf{s}_{i,m}^{\mathscr{A}}=\mathbf{s}_{i},\quad\vec{\mathscr{U}}_{i,m}^{\mathscr{A}}=\mathscr{Q}_{m}\star\vec{\mathscr{U}}_{i},\quad\vec{\mathscr{V}}_{i,m}^{\mathscr{A}}=\mathscr{P}_{m}\star\vec{\mathscr{V}}_{i},\quad i=1,2,\ldots,k. \tag{3.7}\] For \(i=1,2,\ldots,k\), we have \[\mathscr{A}\star\vec{\mathscr{V}}_{i,m}^{\mathscr{A}}=\mathscr{A}\star\mathscr{P}_{m}\star\vec{\mathscr{V}}_{i}=\mathscr{Q}_{m}\star\mathscr{B}_{m}\star\vec{\mathscr{V}}_{i}=\mathscr{Q}_{m}\star\mathbf{s}_{i}\star\vec{\mathscr{U}}_{i}=\mathscr{Q}_{m}\star\vec{\mathscr{U}}_{i}\star\mathbf{s}_{i}=\vec{\mathscr{U}}_{i,m}^{\mathscr{A}}\star\mathbf{s}_{i,m}^{\mathscr{A}}.\] Similarly, \[\mathscr{A}^{H}\star\vec{\mathscr{U}}_{i,m}^{\mathscr{A}}=\mathscr{A}^{H}\star\mathscr{Q}_{m}\star\vec{\mathscr{U}}_{i}=\left(\mathscr{P}_{m}\star\mathscr{B}_{m}^{H}+\vec{\mathscr{R}}_{m}\star\vec{\mathscr{E}}_{m}^{H}\right)\star\vec{\mathscr{U}}_{i}=\vec{\mathscr{V}}_{i,m}^{\mathscr{A}}\star\mathbf{s}_{i,m}^{\mathscr{A}}+\vec{\mathscr{R}}_{m}\star\vec{\mathscr{E}}_{m}^{H}\star\vec{\mathscr{U}}_{i}. \tag{3.8}\] To accept \(\{\mathbf{s}_{i,m}^{\mathscr{A}},\vec{\mathscr{U}}_{i,m}^{\mathscr{A}},\vec{\mathscr{V}}_{i,m}^{\mathscr{A}}\}\) as an approximate singular triplet of \(\mathscr{A}\), the remainder term \(\vec{\mathscr{R}}_{m}\star\vec{\mathscr{E}}_{m}^{H}\star\vec{\mathscr{U}}_{i}\) should be small enough.
We can bound the remainder term according to \[\left\|\vec{\mathscr{R}}_{m}\star\vec{\mathscr{E}}_{m}^{H}\star\vec{\mathscr{U}}_{i}\right\|_{F}=\frac{1}{\sqrt{n}}\left\|\mathsf{bdiag}\left(\widehat{\vec{\mathscr{R}}}_{m}\right)\mathsf{bdiag}\left(\widehat{\vec{\mathscr{E}}_{m}^{H}}\right)\mathsf{bdiag}\left(\widehat{\vec{\mathscr{U}}}_{i}\right)\right\|_{F}\leq\frac{1}{\sqrt{n}}\left\|\mathsf{bdiag}\left(\widehat{\vec{\mathscr{R}}}_{m}\right)\right\|_{F}\left\|\mathsf{bdiag}\left(\widehat{\vec{\mathscr{E}}_{m}^{H}}\right)\mathsf{bdiag}\left(\widehat{\vec{\mathscr{U}}}_{i}\right)\right\|_{F}\leq\left\|\boldsymbol{\beta}_{m}\right\|_{F}\sum_{s=1}^{n}\left|\widehat{\left(\vec{\mathscr{E}}_{m}^{H}\right)}^{(s)}\widehat{\vec{\mathscr{U}}}_{i}^{(s)}\right|,\] where we used that \(\frac{1}{\sqrt{n}}\|\mathsf{bdiag}(\widehat{\vec{\mathscr{R}}}_{m})\|_{F}=\|\vec{\mathscr{R}}_{m}\|_{F}=\|\boldsymbol{\beta}_{m}\|_{F}\). Analogously as in [5], we require for \(1\leq s\leq n\) that \[\left|\widehat{\left(\vec{\mathscr{E}}_{m}^{H}\right)}^{(s)}\widehat{\vec{\mathscr{U}}}_{i}^{(s)}\right|\leq\delta^{\prime}\left\|\widehat{\mathscr{A}}^{(s)}\right\|=\delta^{\prime}\left(\mathbf{s}_{1,m}^{\widehat{\mathscr{A}}}\right)^{(s)},\] for a user-chosen parameter \(\delta^{\prime}>0\), where \(\left(\mathbf{s}_{j,m}^{\widehat{\mathscr{A}}}\right)^{(s)}\) denotes the \(s\)th element of the \(j\)th approximate singular tube of \(\widehat{\mathscr{A}}\). We obtain from (2.4) that \[\left\|\vec{\mathscr{R}}_{m}\star\vec{\mathscr{E}}_{m}^{H}\star\vec{\mathscr{U}}_{i}\right\|_{F}\leq\delta^{\prime}\left\|\boldsymbol{\beta}_{m}\right\|_{F}\sum_{s=1}^{n}\left(\mathbf{s}_{1}^{\widehat{\mathscr{A}}}\right)^{(s)}=n\delta^{\prime}\left\|\boldsymbol{\beta}_{m}\right\|_{F}\left(\mathbf{s}_{1}^{\mathscr{A}}\right)^{(1)}=n\delta^{\prime\prime}\left(\mathbf{s}_{1}^{\mathscr{A}}\right)^{(1)},\] where \(\delta^{\prime\prime}=\delta^{\prime}\left\|\boldsymbol{\beta}_{m}\right\|_{F}\). The computed approximate singular triplets \(\{\mathbf{s}_{i,m}^{\mathscr{A}},\vec{\mathscr{U}}_{i,m}^{\mathscr{A}},\vec{\mathscr{V}}_{i,m}^{\mathscr{A}}\}\), \(i=1,2,\ldots,k\), of \(\mathscr{A}\) are accepted as singular triplets of \(\mathscr{A}\) if \[\left\|\vec{\mathscr{R}}_{m}\star\vec{\mathscr{E}}_{m}^{H}\star\vec{\mathscr{U}}_{i}\right\|_{F}\leq\delta\left(\mathbf{s}_{1,m}^{\mathscr{A}}\right)^{(1)},\quad i=1,2,\ldots,k, \tag{3.9}\] for some user-specified parameter \(\delta>0\). To keep the storage requirement fairly small for large-scale problems, we would like the number of steps \(m\) of the tensor Lanczos bidiagonalization process to be small. However, when \(m\) is small, it may not be possible to approximate the desired singular triplets sufficiently accurately using the available Krylov subspaces \(\mathscr{K}_{m}\left(\mathscr{A}^{H}\star\mathscr{A},\vec{\mathscr{P}}_{1}\right)\) and \(\mathscr{K}_{m}\left(\mathscr{A}\star\mathscr{A}^{H},\vec{\mathscr{Q}}_{1}\right)\). A remedy for this situation is to restart the tensor Lanczos bidiagonalization process. The idea is to repeatedly update the initial lateral slices used for the tensor Lanczos bidiagonalization process, and in this way determine a sequence of increasingly more appropriate Krylov subspaces, until the \(k\) desired singular triplets have been found with required accuracy.
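To make these steps concrete, the following hedged MATLAB sketch lifts the \(k\) largest approximate singular triplets of \(\mathscr{A}\) from the small tensor \(\mathscr{B}_{m}\) via (3.7) and tests the stopping criterion (3.9). It assumes that P, Q, Bm, and Rm hold \(\mathscr{P}_{m}\), \(\mathscr{Q}_{m}\), \(\mathscr{B}_{m}\), and \(\vec{\mathscr{R}}_{m}\) from Algorithm 5, that delta is a user-chosen tolerance, and that tprod and tsvd are the sketches given earlier:

```
% Lift approximate singular triplets of A from Bm and test (3.9).
[U, S, V] = tsvd(Bm);                 % t-SVD of the small m x m x n tensor
accepted = false(k, 1);
for i = 1:k
    s_i  = S(i, i, :);                % approximate singular tube, (3.7)
    UA_i = tprod(Q, U(:, i, :));      % lifted left lateral slice, (3.7)
    VA_i = tprod(P, V(:, i, :));      % lifted right lateral slice, (3.7)
    % Remainder term in (3.8); E_m^H * U_i reduces to the m-th tube of
    % the i-th lateral slice of U.
    res_i = tprod(Rm, U(m, i, :));
    accepted(i) = norm(res_i(:)) <= delta * S(1, 1, 1);
end
```

If not all of the first \(k\) triplets are accepted, the process is restarted with updated initial slices as described next.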
We remark that restarting techniques have been used for computing a few desired singular triplets or eigenvalue-eigenvector pairs of a large matrix, where properties of Ritz vectors, harmonic Ritz vectors, and refined Ritz vectors have been exploited; see, e.g., [5, 19, 20, 35, 36] for details. ### Augmentation by Ritz lateral slices Assume that we would like to approximate the \(k\) largest singular triplets of \(\mathscr{A}\in\mathbb{K}_{n}^{\ell\times p}\). To this end, we carry out \(m>k\) steps of tensor Lanczos bidiagonalization as described in the previous subsection. The approximate right singular lateral slice \(\vec{\mathscr{V}}_{i,m}^{\mathscr{A}}\) is a Ritz lateral slice of \(\mathscr{A}^{H}\star\mathscr{A}\) associated with the Ritz tube \(\left(\mathbf{s}_{i,m}^{\mathscr{A}}\right)^{2}=\mathbf{s}_{i,m}^{\mathscr{A}}\star\mathbf{s}_{i,m}^{\mathscr{A}}\) for \(i\in\{1,2,\ldots,m\}\), and we have \[\mathscr{A}^{H}\star\mathscr{A}\star\vec{\mathscr{V}}_{i,m}^{\mathscr{A}}=\mathscr{A}^{H}\star\vec{\mathscr{U}}_{i,m}^{\mathscr{A}}\star\mathbf{s}_{i,m}^{\mathscr{A}}=\left(\vec{\mathscr{V}}_{i,m}^{\mathscr{A}}\star\mathbf{s}_{i,m}^{\mathscr{A}}+\vec{\mathscr{R}}_{m}\star\vec{\mathscr{E}}_{m}^{H}\star\vec{\mathscr{U}}_{i}\right)\star\mathbf{s}_{i,m}^{\mathscr{A}}=\vec{\mathscr{V}}_{i,m}^{\mathscr{A}}\star\left(\mathbf{s}_{i,m}^{\mathscr{A}}\right)^{2}+\vec{\mathscr{R}}_{m}\star\vec{\mathscr{E}}_{m}^{H}\star\vec{\mathscr{U}}_{i}\star\mathbf{s}_{i,m}^{\mathscr{A}}.\] In what follows we will show some results that will help us to approximate the largest or smallest singular triplets of a third-order tensor. The idea behind these results is to find equations that are analogous to (3.3) and (3.4), such that the reduced tensor contains the \(k\) approximate singular tubes among its first \(k\) diagonal elements, the right projection tensor contains the \(k\) right Ritz lateral slices among its first \(k\) lateral slices, and the left projection tensor contains the \(k\) left Ritz lateral slices among its first \(k\) lateral slices. The following theorem will be helpful. **Theorem 9**.: _Assume that \(m\) steps of Algorithm 5 have been applied to the third-order tensor \(\mathscr{A}\in\mathbb{K}_{n}^{\ell\times p}\), and suppose that \(\boldsymbol{\beta}_{m}\) in (3.4) is nonvanishing._
_Then for \(k<m\), we have_ \[\mathscr{A}\star\widetilde{\mathscr{P}}_{k+1}=\widetilde{\mathscr{Q}}_{k+1}\star\widetilde{\mathscr{B}}_{k+1}, \tag{3.10}\] \[\mathscr{A}^{H}\star\widetilde{\mathscr{Q}}_{k+1}=\widetilde{\mathscr{P}}_{k+1}\star\widetilde{\mathscr{B}}_{k+1}^{H}+\widetilde{\boldsymbol{\beta}}_{k+1}\star\widetilde{\widetilde{\mathscr{P}}}_{k+2}\star\widetilde{\mathscr{E}}_{k+1}^{H}, \tag{3.11}\] _where \(\widetilde{\mathscr{P}}_{k+1}\in\mathbb{K}_{n}^{p\times(k+1)}\) and \(\widetilde{\mathscr{Q}}_{k+1}\in\mathbb{K}_{n}^{\ell\times(k+1)}\) have orthonormal lateral slices, and the first \(k\) lateral slices of \(\widetilde{\mathscr{P}}_{k+1}\) are the first \(k\) Ritz lateral slices of \(\mathscr{A}\); \(\widetilde{\mathscr{B}}_{k+1}\in\mathbb{K}_{n}^{(k+1)\times(k+1)}\) is an upper triangular tensor, \(\widetilde{\widetilde{\mathscr{P}}}_{k+2}\in\mathbb{K}_{n}^{p}\) is a lateral slice that is orthogonal to \(\widetilde{\mathscr{P}}_{k+1}\), \(\widetilde{\boldsymbol{\beta}}_{k+1}\in\mathbb{K}_{n}\), and \(\widetilde{\mathscr{E}}_{k+1}\in\mathbb{K}_{n}^{k+1}\) is the canonical lateral slice under the t-product._ _Proof._ Let the Ritz lateral slices \(\vec{\mathscr{V}}_{i,m}^{\mathscr{A}}\) for \(1\leq i\leq k\) be associated with the \(k\) Ritz tubes of \(\mathscr{A}\). Introduce the tensor \[\widetilde{\mathscr{P}}_{k+1}=\left[\vec{\mathscr{V}}_{1,m}^{\mathscr{A}},\vec{\mathscr{V}}_{2,m}^{\mathscr{A}},\ldots,\vec{\mathscr{V}}_{k,m}^{\mathscr{A}},\vec{\mathscr{P}}_{m+1}\right]\in\mathbb{K}_{n}^{p\times(k+1)}, \tag{3.12}\] where \(\vec{\mathscr{P}}_{m+1}\) is given by (3.5). Then, using the fact that \(\mathscr{A}\star\vec{\mathscr{V}}_{i,m}^{\mathscr{A}}=\vec{\mathscr{U}}_{i,m}^{\mathscr{A}}\star\mathbf{s}_{i,m}^{\mathscr{A}}\) for \(i=1,2,\ldots,k\), we obtain \[\mathscr{A}\star\widetilde{\mathscr{P}}_{k+1}=\left[\mathscr{A}\star\vec{\mathscr{V}}_{1,m}^{\mathscr{A}},\ldots,\mathscr{A}\star\vec{\mathscr{V}}_{k,m}^{\mathscr{A}},\mathscr{A}\star\vec{\mathscr{P}}_{m+1}\right]=\left[\vec{\mathscr{U}}_{1,m}^{\mathscr{A}}\star\mathbf{s}_{1,m}^{\mathscr{A}},\ldots,\vec{\mathscr{U}}_{k,m}^{\mathscr{A}}\star\mathbf{s}_{k,m}^{\mathscr{A}},\mathscr{A}\star\vec{\mathscr{P}}_{m+1}\right]. \tag{3.13}\] Orthogonalizing the term \(\mathscr{A}\star\vec{\mathscr{P}}_{m+1}\) against \(\{\vec{\mathscr{U}}_{i,m}^{\mathscr{A}}\}_{i=1:k}\) gives \[\mathscr{A}\star\vec{\mathscr{P}}_{m+1}=\sum_{i=1}^{k}\boldsymbol{\rho}_{i}\star\vec{\mathscr{U}}_{i,m}^{\mathscr{A}}+\widetilde{\widetilde{\mathscr{B}}}_{k}, \tag{3.14}\] where \(\widetilde{\widetilde{\mathscr{B}}}_{k}\) is orthogonal to \(\{\vec{\mathscr{U}}_{i,m}^{\mathscr{A}}\}_{i=1:k}\), and the \(\boldsymbol{\rho}_{i}\) for \(i\in\{1,2,\ldots,k\}\) are given by \[\boldsymbol{\rho}_{i}=\left(\vec{\mathscr{U}}_{i,m}^{\mathscr{A}}\right)^{H}\star\left(\mathscr{A}\star\vec{\mathscr{P}}_{m+1}\right)=\left(\mathscr{A}^{H}\star\vec{\mathscr{U}}_{i,m}^{\mathscr{A}}\right)^{H}\star\vec{\mathscr{P}}_{m+1}=\left(\vec{\mathscr{V}}_{i,m}^{\mathscr{A}}\star\mathbf{s}_{i,m}^{\mathscr{A}}+\vec{\mathscr{R}}_{m}\star\vec{\mathscr{E}}_{m}^{H}\star\vec{\mathscr{U}}_{i}\right)^{H}\star\vec{\mathscr{P}}_{m+1}=\boldsymbol{\beta}_{m}^{H}\star\left(\vec{\mathscr{U}}_{i}^{H}\star\vec{\mathscr{E}}_{m}\star\vec{\mathscr{P}}_{m+1}^{H}\right)\star\vec{\mathscr{P}}_{m+1}=\boldsymbol{\beta}_{m}\star\vec{\mathscr{U}}_{i}^{H}\star\vec{\mathscr{E}}_{m}=\boldsymbol{\beta}_{m}\star\left\langle\vec{\mathscr{U}}_{i},\vec{\mathscr{E}}_{m}\right\rangle,\] because \(\boldsymbol{\beta}_{m}=\boldsymbol{\beta}_{m}^{H}\). Let \(\widetilde{\widetilde{\mathscr{B}}}_{k}=\widetilde{\widetilde{\mathscr{B}}}{}_{k}^{\prime}\star\widetilde{\boldsymbol{\alpha}}_{k+1}\) be a normalization of \(\widetilde{\widetilde{\mathscr{B}}}_{k}\), and introduce the tensors \[\widetilde{\mathscr{Q}}_{k+1}=\left[\vec{\mathscr{U}}_{1,m}^{\mathscr{A}},\vec{\mathscr{U}}_{2,m}^{\mathscr{A}},\ldots,\vec{\mathscr{U}}_{k,m}^{\mathscr{A}},\widetilde{\widetilde{\mathscr{B}}}{}_{k}^{\prime}\right]\in\mathbb{K}_{n}^{\ell\times(k+1)} \tag{3.15}\] and \[\widetilde{\mathscr{B}}_{k+1}=\begin{bmatrix}\mathbf{s}_{1,m}^{\mathscr{A}}&\mathbf{0}&\ldots&\mathbf{0}&\boldsymbol{\rho}_{1}\\ \mathbf{0}&\mathbf{s}_{2,m}^{\mathscr{A}}&\ldots&\mathbf{0}&\boldsymbol{\rho}_{2}\\ \vdots&\ddots&\ddots&\vdots&\vdots\\ \mathbf{0}&\ldots&\mathbf{0}&\mathbf{s}_{k,m}^{\mathscr{A}}&\boldsymbol{\rho}_{k}\\ \mathbf{0}&\ldots&\ldots&\mathbf{0}&\widetilde{\boldsymbol{\alpha}}_{k+1}\end{bmatrix}\in\mathbb{K}_{n}^{(k+1)\times(k+1)}. \tag{3.16}\] Then, from (3.13) and (3.14), we obtain \[\mathscr{A}\star\widetilde{\mathscr{P}}_{k+1}=\left[\vec{\mathscr{U}}_{1,m}^{\mathscr{A}}\star\mathbf{s}_{1,m}^{\mathscr{A}},\ldots,\vec{\mathscr{U}}_{k,m}^{\mathscr{A}}\star\mathbf{s}_{k,m}^{\mathscr{A}},\sum_{i=1}^{k}\boldsymbol{\rho}_{i}\star\vec{\mathscr{U}}_{i,m}^{\mathscr{A}}+\widetilde{\widetilde{\mathscr{B}}}_{k}\right]=\widetilde{\mathscr{Q}}_{k+1}\star\widetilde{\mathscr{B}}_{k+1}. \tag{3.17}\] On the other hand, as \[\mathscr{A}^{H}\star\widetilde{\mathscr{Q}}_{k+1}=\left[\mathscr{A}^{H}\star\vec{\mathscr{U}}_{1,m}^{\mathscr{A}},\mathscr{A}^{H}\star\vec{\mathscr{U}}_{2,m}^{\mathscr{A}},\ldots,\mathscr{A}^{H}\star\vec{\mathscr{U}}_{k,m}^{\mathscr{A}},\mathscr{A}^{H}\star\widetilde{\widetilde{\mathscr{B}}}{}_{k}^{\prime}\right],\] using (3.8), we get \[\mathscr{A}^{H}\star\vec{\mathscr{U}}_{i,m}^{\mathscr{A}}=\vec{\mathscr{V}}_{i,m}^{\mathscr{A}}\star\mathbf{s}_{i,m}^{\mathscr{A}}+\vec{\mathscr{R}}_{m}\star\vec{\mathscr{E}}_{m}^{H}\star\vec{\mathscr{U}}_{i}=\vec{\mathscr{V}}_{i,m}^{\mathscr{A}}\star\mathbf{s}_{i,m}^{\mathscr{A}}+\vec{\mathscr{P}}_{m+1}\star\boldsymbol{\beta}_{m}\star\vec{\mathscr{E}}_{m}^{H}\star\vec{\mathscr{U}}_{i}=\vec{\mathscr{V}}_{i,m}^{\mathscr{A}}\star\mathbf{s}_{i,m}^{\mathscr{A}}+\vec{\mathscr{P}}_{m+1}\star\boldsymbol{\rho}_{i}^{H}.\] Since \[\left\langle\mathscr{A}^{H}\star\widetilde{\widetilde{\mathscr{B}}}{}_{k}^{\prime},\vec{\mathscr{V}}_{i,m}^{\mathscr{A}}\right\rangle=\left(\widetilde{\widetilde{\mathscr{B}}}{}_{k}^{\prime}\right)^{H}\star\mathscr{A}\star\vec{\mathscr{V}}_{i,m}^{\mathscr{A}}=\mathbf{s}_{i,m}^{\mathscr{A}}\star\left(\widetilde{\widetilde{\mathscr{B}}}{}_{k}^{\prime}\right)^{H}\star\vec{\mathscr{U}}_{i,m}^{\mathscr{A}}=\boldsymbol{0},\] the tensor \(\mathscr{A}^{H}\star\widetilde{\widetilde{\mathscr{B}}}{}_{k}^{\prime}\) is orthogonal to \(\vec{\mathscr{V}}_{i,m}^{\mathscr{A}}\). Moreover, in view of the fact that \(\vec{\mathscr{V}}_{i,m}^{\mathscr{A}}\) is orthogonal to \(\vec{\mathscr{P}}_{m+1}\), we obtain \[\mathscr{A}^{H}\star\widetilde{\widetilde{\mathscr{B}}}{}_{k}^{\prime}=\boldsymbol{\gamma}\star\vec{\mathscr{P}}_{m+1}+\vec{\mathscr{F}}_{k+1}, \tag{3.18}\] where \(\vec{\mathscr{F}}_{k+1}\) is orthogonal to \(\vec{\mathscr{P}}_{m+1}\) as well as to the \(\vec{\mathscr{V}}_{i,m}^{\mathscr{A}}\). Due to the orthogonality of \(\widetilde{\widetilde{\mathscr{B}}}_{k}\) (or \(\widetilde{\widetilde{\mathscr{B}}}{}_{k}^{\prime}\)) to \(\{\vec{\mathscr{U}}_{i,m}^{\mathscr{A}}\}_{i=1:k}\), the parameter \(\boldsymbol{\gamma}\) in (3.18) is given by \[\boldsymbol{\gamma}=\left\langle\vec{\mathscr{P}}_{m+1},\mathscr{A}^{H}\star\widetilde{\widetilde{\mathscr{B}}}{}_{k}^{\prime}\right\rangle=\left\langle\mathscr{A}\star\vec{\mathscr{P}}_{m+1},\widetilde{\widetilde{\mathscr{B}}}{}_{k}^{\prime}\right\rangle=\left\langle\sum_{i=1}^{k}\boldsymbol{\rho}_{i}\star\vec{\mathscr{U}}_{i,m}^{\mathscr{A}}+\widetilde{\widetilde{\mathscr{B}}}_{k},\widetilde{\widetilde{\mathscr{B}}}{}_{k}^{\prime}\right\rangle=\left\langle\widetilde{\widetilde{\mathscr{B}}}_{k},\widetilde{\widetilde{\mathscr{B}}}{}_{k}^{\prime}\right\rangle=\widetilde{\boldsymbol{\alpha}}_{k+1}.\] Consequently, \[\mathscr{A}^{H}\star\widetilde{\mathscr{Q}}_{k+1}=\left[\vec{\mathscr{V}}_{1,m}^{\mathscr{A}}\star\mathbf{s}_{1,m}^{\mathscr{A}}+\vec{\mathscr{P}}_{m+1}\star\boldsymbol{\rho}_{1}^{H},\ldots,\vec{\mathscr{V}}_{k,m}^{\mathscr{A}}\star\mathbf{s}_{k,m}^{\mathscr{A}}+\vec{\mathscr{P}}_{m+1}\star\boldsymbol{\rho}_{k}^{H},\widetilde{\boldsymbol{\alpha}}_{k+1}\star\vec{\mathscr{P}}_{m+1}+\vec{\mathscr{F}}_{k+1}\right]=\widetilde{\mathscr{P}}_{k+1}\star\widetilde{\mathscr{B}}_{k+1}^{H}+\vec{\mathscr{F}}_{k+1}\star\widetilde{\mathscr{E}}_{k+1}^{H}=\widetilde{\mathscr{P}}_{k+1}\star\widetilde{\mathscr{B}}_{k+1}^{H}+\widetilde{\boldsymbol{\beta}}_{k+1}\star\widetilde{\widetilde{\mathscr{P}}}_{k+2}\star\widetilde{\mathscr{E}}_{k+1}^{H}, \tag{3.19}\] where \(\widetilde{\boldsymbol{\beta}}_{k+1}\) and \(\widetilde{\widetilde{\mathscr{P}}}_{k+2}\) are determined by the normalization of \(\vec{\mathscr{F}}_{k+1}\), i.e., \(\vec{\mathscr{F}}_{k+1}=\widetilde{\boldsymbol{\beta}}_{k+1}\star\widetilde{\widetilde{\mathscr{P}}}_{k+2}\), because \[\widetilde{\mathscr{B}}_{k+1}^{H}=\begin{bmatrix}\mathbf{s}_{1,m}^{\mathscr{A}}&\mathbf{0}&\ldots&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\mathbf{s}_{2,m}^{\mathscr{A}}&\mathbf{0}&\ldots&\mathbf{0}\\ \vdots&\ddots&\ddots&\ddots&\vdots\\ \mathbf{0}&\ldots&\mathbf{0}&\mathbf{s}_{k,m}^{\mathscr{A}}&\mathbf{0}\\ \boldsymbol{\rho}_{1}^{H}&\boldsymbol{\rho}_{2}^{H}&\ldots&\boldsymbol{\rho}_{k}^{H}&\widetilde{\boldsymbol{\alpha}}_{k+1}\end{bmatrix}\in\mathbb{K}_{n}^{(k+1)\times(k+1)}.\] The orthonormality of the lateral slices of \(\widetilde{\mathscr{P}}_{k+1}\) and \(\widetilde{\mathscr{Q}}_{k+1}\) now follows from the orthonormality of the sequences \(\left\{\vec{\mathscr{V}}_{i,m}^{\mathscr{A}}\right\}_{i=1:k}\) and \(\left\{\vec{\mathscr{U}}_{i,m}^{\mathscr{A}}\right\}_{i=1:k}\), respectively, given by (3.7). In the preceding theorem we assumed \(\boldsymbol{\beta}_{m}\) to be nonvanishing. If, instead, \(\boldsymbol{\beta}_{m}\) vanishes, then the singular tubes of \(\mathscr{B}_{m}\) are singular tubes of \(\mathscr{A}\), and the left and right singular lateral slices of \(\mathscr{A}\) can be determined from those of \(\mathscr{B}_{m}\). Similarly, if \(\widetilde{\boldsymbol{\beta}}_{k+1}\) in (3.19) vanishes, then the singular tubes of \(\widetilde{\mathscr{B}}_{k+1}\) are singular tubes of \(\mathscr{A}\), and the singular lateral slices of \(\mathscr{A}\) can be determined from \(\widetilde{\mathscr{P}}_{k+1}\) and \(\widetilde{\mathscr{B}}_{k+1}\).
If \(\widetilde{\boldsymbol{\beta}}_{k+1}\) is nonvanishing, then we append new lateral slices to \(\widetilde{\mathscr{P}}_{k+1}\) and \(\widetilde{\mathscr{Q}}_{k+1}\) repeatedly until iteration \(m-k\). This is the subject of the following theorem. **Theorem 10**.: _Assume that \(m\) steps of Algorithm 5 have been applied to \(\mathscr{A}\) and that eqs. (3.17) and (3.19) hold. If the \(\widetilde{\boldsymbol{\beta}}_{k+1}\) are nonvanishing for \(1\leqslant k<m\), then we have the following relations_ \[\mathscr{A}\star\widetilde{\mathscr{P}}_{m}=\widetilde{\mathscr{Q}}_{m}\star\widetilde{\mathscr{B}}_{m},\] \[\mathscr{A}^{H}\star\widetilde{\mathscr{Q}}_{m}=\widetilde{\mathscr{P}}_{m}\star\widetilde{\mathscr{B}}_{m}^{H}+\widetilde{\boldsymbol{\beta}}_{m}\star\widetilde{\widetilde{\mathscr{P}}}_{m+1}\star\widetilde{\mathscr{E}}_{m}^{H},\] _where \(\widetilde{\mathscr{P}}_{m}\in\mathbb{K}_{n}^{p\times m}\) and \(\widetilde{\mathscr{Q}}_{m}\in\mathbb{K}_{n}^{\ell\times m}\) have orthonormal lateral slices, \(\widetilde{\mathscr{B}}_{m}\in\mathbb{K}_{n}^{m\times m}\) is an upper triangular tensor, \(\widetilde{\boldsymbol{\beta}}_{m}\in\mathbb{K}_{n}\), \(\widetilde{\widetilde{\mathscr{P}}}_{m+1}\in\mathbb{K}_{n}^{p}\) is orthogonal to \(\widetilde{\mathscr{P}}_{m}\), and \(\widetilde{\mathscr{E}}_{m}\in\mathbb{K}_{n}^{m}\) is the canonical lateral slice under the t-product. The first \(k\) lateral slices of \(\widetilde{\mathscr{P}}_{m}\) and \(\widetilde{\mathscr{Q}}_{m}\) are the same as those of the tensors \(\widetilde{\mathscr{P}}_{k+1}\) and \(\widetilde{\mathscr{Q}}_{k+1}\), respectively, given in Theorem 9._ _Proof_. Let the tensors \(\widetilde{\mathscr{P}}_{k+1}\) and \(\widetilde{\mathscr{Q}}_{k+1}\) defined in (3.17) and (3.19), respectively, be represented by \[\widetilde{\mathscr{P}}_{k+1}=\left[\widetilde{\widetilde{\mathscr{P}}}_{1},\widetilde{\widetilde{\mathscr{P}}}_{2},\ldots,\widetilde{\widetilde{\mathscr{P}}}_{k+1}\right]\in\mathbb{K}_{n}^{p\times(k+1)}\] and \[\widetilde{\mathscr{Q}}_{k+1}=\left[\widetilde{\widetilde{\mathscr{Q}}}_{1},\widetilde{\widetilde{\mathscr{Q}}}_{2},\ldots,\widetilde{\widetilde{\mathscr{Q}}}_{k+1}\right]\in\mathbb{K}_{n}^{\ell\times(k+1)},\] and the tensor \(\widetilde{\mathscr{P}}_{k+2}\) be given by \[\widetilde{\mathscr{P}}_{k+2}=\left[\widetilde{\mathscr{P}}_{k+1},\widetilde{\widetilde{\mathscr{P}}}_{k+2}\right]\in\mathbb{K}_{n}^{p\times(k+2)}.\] By normalizing the quantity \(\left(\mathscr{I}_{\ell}-\widetilde{\mathscr{Q}}_{k+1}\star\widetilde{\mathscr{Q}}_{k+1}^{H}\right)\star\mathscr{A}\star\widetilde{\widetilde{\mathscr{P}}}_{k+2}\), we obtain the lateral slice \(\widetilde{\widetilde{\mathscr{Q}}}_{k+2}\) such that \(\widetilde{\boldsymbol{\alpha}}_{k+2}\star\widetilde{\widetilde{\mathscr{Q}}}_{k+2}=\left(\mathscr{I}_{\ell}-\widetilde{\mathscr{Q}}_{k+1}\star\widetilde{\mathscr{Q}}_{k+1}^{H}\right)\star\mathscr{A}\star\widetilde{\widetilde{\mathscr{P}}}_{k+2}\).
Application of (3.11) gives \[\widetilde{\boldsymbol{\alpha}}_{k+2}\star\widetilde{\widetilde{\mathscr{Q}}}_{k+2}=\left(\mathscr{I}_{\ell}-\widetilde{\mathscr{Q}}_{k+1}\star\widetilde{\mathscr{Q}}_{k+1}^{H}\right)\star\mathscr{A}\star\widetilde{\widetilde{\mathscr{P}}}_{k+2}=\mathscr{A}\star\widetilde{\widetilde{\mathscr{P}}}_{k+2}-\widetilde{\mathscr{Q}}_{k+1}\star\left(\widetilde{\mathscr{B}}_{k+1}\star\widetilde{\mathscr{P}}_{k+1}^{H}+\widetilde{\boldsymbol{\beta}}_{k+1}\star\widetilde{\mathscr{E}}_{k+1}\star\widetilde{\widetilde{\mathscr{P}}}_{k+2}^{H}\right)\star\widetilde{\widetilde{\mathscr{P}}}_{k+2}=\mathscr{A}\star\widetilde{\widetilde{\mathscr{P}}}_{k+2}-\widetilde{\boldsymbol{\beta}}_{k+1}\star\widetilde{\widetilde{\mathscr{Q}}}_{k+1}. \tag{3.20}\] Consider the tensors \[\widetilde{\mathscr{Q}}_{k+2}=\left[\widetilde{\mathscr{Q}}_{k+1},\widetilde{\widetilde{\mathscr{Q}}}_{k+2}\right]\in\mathbb{K}_{n}^{\ell\times(k+2)}\] and \[\widetilde{\mathscr{B}}_{k+2}=\begin{bmatrix}\mathbf{s}_{1,m}^{\mathscr{A}}&\mathbf{0}&\ldots&\mathbf{0}&\boldsymbol{\rho}_{1}&\mathbf{0}\\ \mathbf{0}&\mathbf{s}_{2,m}^{\mathscr{A}}&\ldots&\mathbf{0}&\boldsymbol{\rho}_{2}&\mathbf{0}\\ \vdots&\ddots&\ddots&\ddots&\vdots&\vdots\\ \mathbf{0}&\ldots&\mathbf{0}&\mathbf{s}_{k,m}^{\mathscr{A}}&\boldsymbol{\rho}_{k}&\mathbf{0}\\ \mathbf{0}&\ldots&\ldots&\mathbf{0}&\widetilde{\boldsymbol{\alpha}}_{k+1}&\widetilde{\boldsymbol{\beta}}_{k+1}\\ \mathbf{0}&\ldots&\ldots&\ldots&\mathbf{0}&\widetilde{\boldsymbol{\alpha}}_{k+2}\end{bmatrix}\in\mathbb{K}_{n}^{(k+2)\times(k+2)}.\] Using (3.10) and (3.20), we get \[\mathscr{A}\star\widetilde{\mathscr{P}}_{k+2}=\widetilde{\mathscr{Q}}_{k+2}\star\widetilde{\mathscr{B}}_{k+2}.\] To determine the lateral slice \(\widetilde{\widetilde{\mathscr{P}}}_{k+3}\), we normalize \(\left(\mathscr{I}-\widetilde{\mathscr{P}}_{k+2}\star\widetilde{\mathscr{P}}_{k+2}^{H}\right)\star\mathscr{A}^{H}\star\widetilde{\widetilde{\mathscr{Q}}}_{k+2}\) so that \[\widetilde{\boldsymbol{\beta}}_{k+2}\star\widetilde{\widetilde{\mathscr{P}}}_{k+3}=\left(\mathscr{I}-\widetilde{\mathscr{P}}_{k+2}\star\widetilde{\mathscr{P}}_{k+2}^{H}\right)\star\mathscr{A}^{H}\star\widetilde{\widetilde{\mathscr{Q}}}_{k+2}\] and \[\widetilde{\boldsymbol{\beta}}_{k+2}\star\widetilde{\widetilde{\mathscr{P}}}_{k+3}=\mathscr{A}^{H}\star\widetilde{\widetilde{\mathscr{Q}}}_{k+2}-\widetilde{\boldsymbol{\alpha}}_{k+2}\star\widetilde{\widetilde{\mathscr{P}}}_{k+2}. \tag{3.21}\] It now follows from (3.10) and (3.21) that \[\mathscr{A}^{H}\star\widetilde{\mathscr{Q}}_{k+2}=\widetilde{\mathscr{P}}_{k+2}\star\widetilde{\mathscr{B}}_{k+2}^{H}+\widetilde{\boldsymbol{\beta}}_{k+2}\star\widetilde{\widetilde{\mathscr{P}}}_{k+3}\star\widetilde{\mathscr{E}}_{k+2}^{H}.\] We can continue this procedure until iteration \(m-k\) and then obtain \[\mathscr{A}\star\widetilde{\mathscr{P}}_{m}=\widetilde{\mathscr{Q}}_{m}\star\widetilde{\mathscr{B}}_{m},\quad\mathscr{A}^{H}\star\widetilde{\mathscr{Q}}_{m}=\widetilde{\mathscr{P}}_{m}\star\widetilde{\mathscr{B}}_{m}^{H}+\widetilde{\boldsymbol{\beta}}_{m}\star\widetilde{\widetilde{\mathscr{P}}}_{m+1}\star\widetilde{\mathscr{E}}_{m}^{H},\] where \(\widetilde{\mathscr{P}}_{m}\) and \(\widetilde{\mathscr{Q}}_{m}\) have orthonormal lateral slices and \[\widetilde{\mathscr{B}}_{m}=\begin{bmatrix}\mathbf{s}_{1,m}^{\mathscr{A}}&\mathbf{0}&\ldots&\boldsymbol{\rho}_{1}&\mathbf{0}&\ldots&\mathbf{0}\\ &\ddots&&\vdots&&&\\ &&\mathbf{s}_{k,m}^{\mathscr{A}}&\boldsymbol{\rho}_{k}&&&\\ &&&\widetilde{\boldsymbol{\alpha}}_{k+1}&\widetilde{\boldsymbol{\beta}}_{k+1}&&\\ &&&&\ddots&\ddots&\\ &&&&&\widetilde{\boldsymbol{\alpha}}_{m-1}&\widetilde{\boldsymbol{\beta}}_{m-1}\\ &&&&&&\widetilde{\boldsymbol{\alpha}}_{m}\end{bmatrix}\in\mathbb{K}_{n}^{m\times m}.\] This gives the desired result. If we would like to compute the smallest singular triplets of \(\mathscr{A}\), then we can use the same theorem, but instead of working with the first \(k\) right singular lateral slices \(\vec{\mathscr{V}}_{i,m}^{\mathscr{A}}\), \(1\leq i\leq k\), we use the last \(k\) right singular lateral slices in (3.12). The computations are analogous to those described above. ### Augmentation by harmonic Ritz lateral slices When the smallest singular values of a matrix \(A\) are clustered, their computation by the restarted Lanczos bidiagonalization method as described above may require many iterations. In this situation it may be beneficial to instead compute approximations of the smallest singular values of \(A\) by seeking to determine approximations of the largest singular values of the matrix \(\left(A^{T}A\right)^{-1}\) without explicitly computing the matrix \(\left(A^{T}A\right)^{-1}\). This was done for the matrix case by computing harmonic Ritz vectors; see [5, 27]. Harmonic Ritz vectors furnish approximations of eigenvectors of \(A^{T}A\) associated with the corresponding harmonic Ritz values. In the case of tensors, harmonic Ritz lateral slices furnish approximations of eigenvectors of \(\mathscr{A}^{H}\star\mathscr{A}\) associated with harmonic Ritz tubes of \(\mathscr{A}^{H}\star\mathscr{A}\). The harmonic Ritz tubes \(\widetilde{\boldsymbol{\theta}}_{j}\) of \(\mathscr{A}^{H}\star\mathscr{A}\) associated with the partial tensor tridiagonalization defined in (3.6) are the eigentubes of the generalized eigenvalue problem \[\left(\left(\mathscr{B}_{m}^{H}\star\mathscr{B}_{m}\right)^{2}+\boldsymbol{\alpha}_{m}^{2}\star\boldsymbol{\beta}_{m}^{2}\star\vec{\mathscr{E}}_{m}\star\vec{\mathscr{E}}_{m}^{H}\right)\star\vec{\omega}_{j}=\widetilde{\boldsymbol{\theta}}_{j}\star\mathscr{B}_{m}^{H}\star\mathscr{B}_{m}\star\vec{\omega}_{j},\quad 1\leq j\leq m. \tag{3.22}\] The eigenpairs \(\{\widetilde{\boldsymbol{\theta}}_{j},\vec{\omega}_{j}\}\) can be computed without forming the tensor \(\mathscr{B}_{m}^{H}\star\mathscr{B}_{m}\). Let \[\vec{\omega}_{j}^{\,\prime}=\mathscr{B}_{m}\star\vec{\omega}_{j}. \tag{3.23}\] Using the relations \[\boldsymbol{\alpha}_{m}\star\vec{\mathscr{E}}_{m}^{H}=\vec{\mathscr{E}}_{m}^{H}\star\mathscr{B}_{m}\ \ \text{and}\ \ \boldsymbol{\alpha}_{m}\star\vec{\mathscr{E}}_{m}=\mathscr{B}_{m}^{H}\star\vec{\mathscr{E}}_{m},\] we can write \[\boldsymbol{\alpha}_{m}^{2}\star\boldsymbol{\beta}_{m}^{2}\star\vec{\mathscr{E}}_{m}\star\vec{\mathscr{E}}_{m}^{H}=\boldsymbol{\beta}_{m}^{2}\star\mathscr{B}_{m}^{H}\star\vec{\mathscr{E}}_{m}\star\vec{\mathscr{E}}_{m}^{H}\star\mathscr{B}_{m}.\] Therefore, using (3.23), the relation (3.22) can be written as \[\mathscr{B}_{m}^{H}\star\left(\mathscr{B}_{m}\star\mathscr{B}_{m}^{H}\star\mathscr{B}_{m}+\boldsymbol{\beta}_{m}^{2}\star\vec{\mathscr{E}}_{m}\star\vec{\mathscr{E}}_{m}^{H}\star\mathscr{B}_{m}\right)\star\mathscr{B}_{m}^{-1}\star\vec{\omega}_{j}^{\,\prime}=\widetilde{\boldsymbol{\theta}}_{j}\star\mathscr{B}_{m}^{H}\star\mathscr{B}_{m}\star\mathscr{B}_{m}^{-1}\star\vec{\omega}_{j}^{\,\prime}.\] It follows that \[\left(\mathscr{B}_{m}\star\mathscr{B}_{m}^{H}+\boldsymbol{\beta}_{m}^{2}\star\vec{\mathscr{E}}_{m}\star\vec{\mathscr{E}}_{m}^{H}\right)\star\vec{\omega}_{j}^{\,\prime}=\widetilde{\boldsymbol{\theta}}_{j}\star\vec{\omega}_{j}^{\,\prime} \tag{3.24}\] and \[\mathscr{B}_{m}\star\mathscr{B}_{m}^{H}+\boldsymbol{\beta}_{m}^{2}\star\vec{\mathscr{E}}_{m}\star\vec{\mathscr{E}}_{m}^{H}=\mathscr{B}_{m,m+1}\star\mathscr{B}_{m,m+1}^{H}.\] In this subsection, we denote the singular triplets of \(\mathscr{B}_{m,m+1}\) by \(\{\mathbf{s}_{i}^{\prime},\vec{\mathscr{U}}_{i}^{\prime},\vec{\mathscr{V}}_{i}^{\prime}\}\) for \(1\leqslant i\leqslant m\), ordered so that the first \(k\) of them are the smallest singular triplets. Recall that we are interested in determining approximations of the smallest singular triplets of \(\mathscr{A}\). The \(k\) smallest singular triplets of \(\mathscr{B}_{m,m+1}\) form the tensors \[\mathscr{U}_{k}^{\prime}=\left[\vec{\mathscr{U}}_{1}^{\prime},\vec{\mathscr{U}}_{2}^{\prime},\ldots,\vec{\mathscr{U}}_{k}^{\prime}\right]\in\mathbb{K}_{n}^{m\times k},\quad\mathscr{V}_{k}^{\prime}=\left[\vec{\mathscr{V}}_{1}^{\prime},\vec{\mathscr{V}}_{2}^{\prime},\ldots,\vec{\mathscr{V}}_{k}^{\prime}\right]\in\mathbb{K}_{n}^{(m+1)\times k},\] \[\mathscr{S}_{k}^{\prime}=\left[\mathbf{s}_{1}^{\prime}\star\vec{\mathscr{E}}_{1},\mathbf{s}_{2}^{\prime}\star\vec{\mathscr{E}}_{2},\ldots,\mathbf{s}_{k}^{\prime}\star\vec{\mathscr{E}}_{k}\right]\in\mathbb{K}_{n}^{k\times k},\] where \[\mathscr{B}_{m,m+1}\star\mathscr{V}_{k}^{\prime}=\mathscr{U}_{k}^{\prime}\star\mathscr{S}_{k}^{\prime}\ \ \text{and}\ \ \mathscr{B}_{m,m+1}^{H}\star\mathscr{U}_{k}^{\prime}=\mathscr{V}_{k}^{\prime}\star\mathscr{S}_{k}^{\prime}.\] We obtain from the above equations that \[\mathscr{B}_{m,m+1}\star\mathscr{B}_{m,m+1}^{H}\star\mathscr{U}_{k}^{\prime}=\mathscr{U}_{k}^{\prime}\star\left(\mathscr{S}_{k}^{\prime}\right)^{2},\] where \[\left(\mathscr{S}_{k}^{\prime}\right)^{2}=\left[\left(\mathbf{s}_{1}^{\prime}\right)^{2}\star\vec{\mathscr{E}}_{1},\ldots,\left(\mathbf{s}_{k}^{\prime}\right)^{2}\star\vec{\mathscr{E}}_{k}\right].\] Consequently, the eigenpairs \(\left\{\left(\mathbf{s}_{i}^{\prime}\right)^{2},\vec{\mathscr{U}}_{i}^{\prime}\right\}\) satisfy (3.24), and \(\left\{\left(\mathbf{s}_{i}^{\prime}\right)^{2},\mathscr{B}_{m}^{-1}\star\vec{\mathscr{U}}_{i}^{\prime}\right\}\) are eigenpairs of (3.22).
It follows that the harmonic Ritz lateral slice associated with \(\widetilde{\boldsymbol{\theta}}_{j}\) is given by \[\vec{\mathscr{V}}_{j}=\mathscr{P}_{m}\star\vec{\omega}_{j}=\mathscr{P}_{m}\star\mathscr{B}_{m}^{-1}\star\vec{\mathscr{U}}_{j}^{\prime}. \tag{3.25}\] We turn to the computation of the residual of harmonic Ritz lateral slices. Using eqs. (3.6) and (3.24), we obtain the relations \[\mathscr{A}^{H}\star\mathscr{A}\star\vec{\mathscr{V}}_{j}-\widetilde{\boldsymbol{\theta}}_{j}\star\vec{\mathscr{V}}_{j}=\mathscr{A}^{H}\star\mathscr{A}\star\mathscr{P}_{m}\star\vec{\omega}_{j}-\widetilde{\boldsymbol{\theta}}_{j}\star\mathscr{P}_{m}\star\vec{\omega}_{j}=\left(\mathscr{P}_{m}\star\mathscr{B}_{m}^{H}\star\mathscr{B}_{m}+\boldsymbol{\beta}_{m}\star\vec{\mathscr{P}}_{m+1}\star\vec{\mathscr{E}}_{m}^{H}\star\mathscr{B}_{m}\right)\star\vec{\omega}_{j}-\widetilde{\boldsymbol{\theta}}_{j}\star\mathscr{P}_{m}\star\vec{\omega}_{j}\] \[=\mathscr{P}_{m}\star\mathscr{B}_{m}^{-1}\star\left(\mathscr{B}_{m}\star\mathscr{B}_{m}^{H}-\widetilde{\boldsymbol{\theta}}_{j}\star\mathscr{I}_{m}\right)\star\vec{\omega}_{j}^{\,\prime}+\boldsymbol{\beta}_{m}\star\vec{\mathscr{P}}_{m+1}\star\vec{\mathscr{E}}_{m}^{H}\star\vec{\omega}_{j}^{\,\prime}=-\boldsymbol{\beta}_{m}^{2}\star\mathscr{P}_{m}\star\mathscr{B}_{m}^{-1}\star\vec{\mathscr{E}}_{m}\star\vec{\mathscr{E}}_{m}^{H}\star\vec{\omega}_{j}^{\,\prime}+\boldsymbol{\beta}_{m}\star\vec{\mathscr{P}}_{m+1}\star\vec{\mathscr{E}}_{m}^{H}\star\vec{\omega}_{j}^{\,\prime}\] \[=\vec{\mathscr{E}}_{m}^{H}\star\vec{\omega}_{j}^{\,\prime}\star\boldsymbol{\beta}_{m}\star\left(\vec{\mathscr{P}}_{m+1}-\boldsymbol{\beta}_{m}\star\mathscr{P}_{m}\star\mathscr{B}_{m}^{-1}\star\vec{\mathscr{E}}_{m}\right).\] It follows that the residual can be expressed as \[\widetilde{\widetilde{\mathscr{R}}}_{m}=\vec{\mathscr{P}}_{m+1}-\boldsymbol{\beta}_{m}\star\mathscr{P}_{m}\star\mathscr{B}_{m}^{-1}\star\vec{\mathscr{E}}_{m}. \tag{3.26}\] We now proceed analogously as in the previous subsection, i.e., we use the smallest harmonic Ritz eigentubes of \(\mathscr{B}_{m+1,m}^{H}\star\mathscr{B}_{m+1,m}\) and associated eigenslices to approximate the \(k\) smallest singular triplets of \(\mathscr{A}\). This yields relations that are analogous to (3.3) and (3.4). The following theorem provides the details. **Theorem 11**.: _Apply \(m\) steps of Algorithm 5 to the third-order tensor \(\mathscr{A}\) and assume that the tensor \(\mathscr{B}_{m}\) in (3.3) and (3.4) is invertible. Then, for \(k=1,\ldots,m-1\), we have the relations_ \[\mathscr{A}\star\widetilde{\mathscr{P}}_{k+1}=\widetilde{\mathscr{Q}}_{k+1}\star\widetilde{\mathscr{B}}_{k+1}, \tag{3.27}\] \[\mathscr{A}^{H}\star\widetilde{\mathscr{Q}}_{k+1}=\widetilde{\mathscr{P}}_{k+1}\star\widetilde{\mathscr{B}}_{k+1}^{H}+\widetilde{\boldsymbol{\beta}}_{k+1}\star\widetilde{\widetilde{\mathscr{P}}}_{k+2}\star\widetilde{\mathscr{E}}_{k+1}^{H}, \tag{3.28}\] _where \(\widetilde{\mathscr{P}}_{k+1}\in\mathbb{K}_{n}^{p\times(k+1)}\) and \(\widetilde{\mathscr{Q}}_{k+1}\in\mathbb{K}_{n}^{\ell\times(k+1)}\) have orthonormal lateral slices, \(\widetilde{\mathscr{B}}_{k+1}\in\mathbb{K}_{n}^{(k+1)\times(k+1)}\) is an upper triangular tensor, the first \(k\) lateral slices of \(\widetilde{\mathscr{P}}_{k+1}\) are t-linear combinations of the first \(k\) harmonic Ritz lateral slices of \(\mathscr{A}\), and \(\widetilde{\widetilde{\mathscr{P}}}_{k+2}\in\mathbb{K}_{n}^{p}\) is orthogonal to \(\widetilde{\mathscr{P}}_{k+1}\).
Moreover, \(\widetilde{\mathscr{E}}_{k+1}\in\mathbb{K}_{n}^{m}\) is the canonical lateral slice under the t-product._ _Proof_. Let \(\{\widetilde{\mathscr{V}}_{i}\}_{i=1:k}\) be the first \(k\) harmonic Ritz lateral slices of \(\mathscr{A}\). Using (3.25) and (3.26), we get \[\begin{split}\left[\mathbf{s}_{1}^{\prime}\star\widetilde{\mathscr{V}}_{1},\mathbf{s}_{2}^{\prime}\star\widetilde{\mathscr{V}}_{2},\ldots,\mathbf{s}_{k}^{\prime}\star\widetilde{\mathscr{V}}_{k},\widetilde{\widetilde{\mathscr{R}}}_{m}\right]&=\left[\mathscr{P}_{m},\widetilde{\mathscr{P}}_{m+1}\right]\star\left[\begin{matrix}\mathscr{B}_{m}^{-1}\star\mathscr{U}_{k}^{\prime}\star\mathscr{S}_{k}^{\prime}&-\boldsymbol{\beta}_{m}\star\mathscr{B}_{m}^{-1}\star\widetilde{\mathscr{E}}_{m}\\ \mathbf{0}&\mathbf{e}\end{matrix}\right]\\ &=\,\mathscr{P}_{m+1}\star\left[\begin{matrix}\mathscr{B}_{m}^{-1}\star\mathscr{U}_{k}^{\prime}\star\mathscr{S}_{k}^{\prime}&-\boldsymbol{\beta}_{m}\star\mathscr{B}_{m}^{-1}\star\widetilde{\mathscr{E}}_{m}\\ \mathbf{0}&\mathbf{e}\end{matrix}\right].\end{split}\] Define the tensor \[\mathscr{J}_{k+1}=\left[\begin{matrix}\mathscr{B}_{m}^{-1}\star\mathscr{U}_{k}^{\prime}\star\mathscr{S}_{k}^{\prime}&-\boldsymbol{\beta}_{m}\star\mathscr{B}_{m}^{-1}\star\widetilde{\mathscr{E}}_{m}\\ \mathbf{0}&\mathbf{e}\end{matrix}\right]. \tag{3.29}\] Using the reduced t-QR factorization of \(\mathscr{J}_{k+1}\), we get \[\mathscr{J}_{k+1}=\mathscr{Q}_{k+1}^{\prime}\star\mathscr{R}_{k+1}^{\prime},\] where \(\mathscr{Q}_{k+1}^{\prime}\in\mathbb{K}_{n}^{(m+1)\times(k+1)}\) has orthonormal lateral slices and \(\mathscr{R}_{k+1}^{\prime}\in\mathbb{K}_{n}^{(k+1)\times(k+1)}\) is an f-upper triangular tensor. This factorization can be computed by a simple modification of Algorithm 3. Let \[\widetilde{\mathscr{P}}_{k+1}=\left[\widetilde{\widetilde{\mathscr{P}}}_{1},\widetilde{\widetilde{\mathscr{P}}}_{2},\ldots,\widetilde{\widetilde{\mathscr{P}}}_{k+1}\right]=\mathscr{P}_{m+1}\star\mathscr{Q}_{k+1}^{\prime}\in\mathbb{K}_{n}^{p\times(k+1)}. \tag{3.30}\] Then \[\mathscr{A}\star\widetilde{\mathscr{P}}_{k+1} =\mathscr{A}\star\mathscr{P}_{m+1}\star\mathscr{Q}^{\prime}_{k+1}\] \[=\left[\mathscr{A}\star\mathscr{P}_{m},\mathscr{A}\star\widetilde{\mathscr{P}}_{m+1}\right]\star\mathscr{Q}^{\prime}_{k+1}\] \[=\left[\mathscr{A}\star\mathscr{P}_{m},\mathscr{A}\star\widetilde{\mathscr{P}}_{m+1}\right]\star\mathscr{J}_{k+1}\star\left(\mathscr{R}^{\prime}_{k+1}\right)^{-1}\] \[=\left[\mathscr{A}\star\mathscr{P}_{m}\star\mathscr{B}_{m}^{-1}\star\mathscr{U}^{\prime}_{k}\star\mathscr{S}^{\prime}_{k},\mathscr{A}\star\widetilde{\mathscr{P}}_{m+1}-\mathscr{A}\star\mathscr{P}_{m}\star\boldsymbol{\beta}_{m}\star\mathscr{B}_{m}^{-1}\star\widetilde{\mathscr{E}}_{m}\right]\star\left(\mathscr{R}^{\prime}_{k+1}\right)^{-1}\] \[=\left[\mathscr{D}_{m}\star\mathscr{U}^{\prime}_{k}\star\mathscr{S}^{\prime}_{k},\mathscr{A}\star\widetilde{\mathscr{P}}_{m+1}-\widetilde{\mathscr{D}}_{m}\star\boldsymbol{\beta}_{m}\right]\star\left(\mathscr{R}^{\prime}_{k+1}\right)^{-1},\] where \(\mathscr{D}_{m}:=\mathscr{A}\star\mathscr{P}_{m}\star\mathscr{B}_{m}^{-1}\) and \(\widetilde{\mathscr{D}}_{m}:=\mathscr{D}_{m}\star\widetilde{\mathscr{E}}_{m}\) denotes its last lateral slice. Define \[\widetilde{\mathscr{Q}}_{k}=\mathscr{D}_{m}\star\mathscr{U}^{\prime}_{k}\in\mathbb{K}_{n}^{\ell\times k}.
\tag{3.31}\] Using the orthogonality of \(\mathscr{A}\star\widetilde{\mathscr{P}}_{m+1}-\mathbf{\beta}_{m}\star\widetilde{ \mathscr{Q}}_{m}\) against the lateral slices of \(\widetilde{\mathscr{Q}}_{k}\) gives \[\widetilde{\mathbf{\alpha}}_{k+1}\star\widetilde{\widetilde{\mathscr{Q}}}_{k+1}=- \mathbf{\beta}_{m}\star\widetilde{\mathscr{Q}}_{m}+\mathscr{A}\star\widetilde{ \mathscr{P}}_{m+1}-\widetilde{\mathscr{Q}}_{k}\star\begin{bmatrix}\widetilde{ \mathbf{\gamma}}_{1}\\ \widetilde{\mathbf{\gamma}}_{2}\\ \vdots\\ \widetilde{\mathbf{\gamma}}_{k}\end{bmatrix}, \tag{3.32}\] where \(\left\|\widetilde{\widetilde{\mathscr{Q}}}_{k+1}\right\|=1\) and \(\widetilde{\mathbf{\alpha}}_{k+1}\) is the tube obtained from the normalization of the tensor \[-\mathbf{\beta}_{m}\star\widetilde{\mathscr{Q}}_{m}+\mathscr{A}\star\widetilde{ \mathscr{P}}_{m+1}-\widetilde{\mathscr{Q}}_{k}\star\begin{bmatrix}\widetilde{ \mathbf{\gamma}}_{1}\\ \widetilde{\mathbf{\gamma}}_{2}\\ \vdots\\ \widetilde{\mathbf{\gamma}}_{k}\end{bmatrix}\] with \[\widetilde{\mathscr{Q}}_{k}^{H}\star\left(-\mathbf{\beta}_{m}\star\widetilde{ \mathscr{Q}}_{m}+\mathscr{A}\star\widetilde{\mathscr{P}}_{m+1}\right)= \begin{bmatrix}\widetilde{\mathbf{\gamma}}_{1}\\ \widetilde{\mathbf{\gamma}}_{2}\\ \vdots\\ \widetilde{\mathbf{\gamma}}_{k}\end{bmatrix}.\] It follows from (3.31) and (3.32) that \[\mathscr{A}\star\widetilde{\mathscr{P}}_{k+1} =\left[\mathscr{D}_{m}\star\mathscr{U}^{\prime}_{k}\star\mathscr{ S}^{\prime}_{k},\widetilde{\mathbf{\alpha}}_{k+1}\star\widetilde{\widetilde{ \mathscr{Q}}}_{k+1}+\widetilde{\mathscr{Q}}_{k}\star\begin{bmatrix}\widetilde{ \mathbf{\gamma}_{1}}\\ \vdots\\ \widetilde{\mathbf{\gamma}}_{k}\end{bmatrix}\right]\star\left(\mathscr{R}^{\prime} _{k+1}\right)^{-1}\] \[=\left[\mathscr{D}_{m}\star\mathscr{U}^{\prime}_{k},\widetilde{ \widetilde{\mathscr{Q}}}_{k+1}\right]\star\begin{bmatrix}\mathbf{s}^{\prime}_{1} &&\widetilde{\mathbf{\gamma}}_{1}\\ &\ddots&&\vdots\\ &&\mathbf{s}^{\prime}_{k}&\widetilde{\mathbf{\gamma}}_{k}\\ &&\widetilde{\mathbf{\alpha}}_{k+1}\end{bmatrix}\star\left(\mathscr{R}^{\prime} _{k+1}\right)^{-1}.\] Hence, \[\mathscr{A}\star\widetilde{\mathscr{P}}_{k+1}=\widetilde{\mathscr{Q}}_{k+1} \star\widetilde{\mathscr{B}}_{k+1}, \tag{3.33}\] with \[\widetilde{\mathscr{B}}_{k+1}=\begin{bmatrix}\mathbf{s}_{1}^{\prime}&&&\widetilde{ \mathbf{\gamma}}_{1}\\ &\ddots&&\vdots\\ &&\mathbf{s}_{k}^{\prime}&\widetilde{\mathbf{\gamma}}_{k}\\ &&&\widetilde{\mathbf{\alpha}}_{k+1}\end{bmatrix}\star\left(\mathscr{R}_{k+1} \right)^{-1}\in\mathbb{K}_{n}^{(k+1)\times(k+1)}, \tag{3.34}\] where \(\widetilde{\mathscr{B}}_{k+1}\) is an upper triangular tensor as it is the t-product of two upper triangular tensors. 
To show (3.28), we first notice that \[\mathscr{A}^{H}\star\widetilde{\mathscr{Q}}_{k}=\mathscr{A}^{H}\star\mathscr{Q}_{m}\star\mathscr{U}_{k}^{\prime}=\mathscr{P}_{m+1}\star\mathscr{B}_{m,m+1}^{H}\star\mathscr{U}_{k}^{\prime}=\mathscr{P}_{m+1}\star\mathscr{V}_{k}^{\prime}\star\mathscr{S}_{k}^{\prime}.\] Using the fact that \[\mathscr{B}_{m,m+1}=\left[\mathscr{B}_{m},\boldsymbol{\beta}_{m}\star\widetilde{\mathscr{E}}_{m}\right]=\mathscr{B}_{m}\star\left[\mathscr{I}_{m},\boldsymbol{\beta}_{m}\star\mathscr{B}_{m}^{-1}\star\widetilde{\mathscr{E}}_{m}\right],\] we get \[\mathscr{B}_{m,m+1}\star\mathscr{V}_{k}^{\prime}=\mathscr{U}_{k}^{\prime}\star\mathscr{S}_{k}^{\prime}\Leftrightarrow\left[\mathscr{I}_{m},\boldsymbol{\beta}_{m}\star\mathscr{B}_{m}^{-1}\star\widetilde{\mathscr{E}}_{m}\right]\star\mathscr{V}_{k}^{\prime}=\mathscr{B}_{m}^{-1}\star\mathscr{U}_{k}^{\prime}\star\mathscr{S}_{k}^{\prime}.\] It follows from the above result that \[\mathscr{V}_{k}^{\prime}=\begin{bmatrix}\mathscr{B}_{m}^{-1}\star\mathscr{U}_{k}^{\prime}\star\mathscr{S}_{k}^{\prime}&-\boldsymbol{\beta}_{m}\star\mathscr{B}_{m}^{-1}\star\widetilde{\mathscr{E}}_{m}\\ \mathbf{0}&\mathbf{e}\end{bmatrix}\star\begin{bmatrix}\mathscr{I}_{k}\\ \widetilde{\mathscr{E}}_{m+1}^{H}\star\mathscr{V}_{k}^{\prime}\end{bmatrix}=\mathscr{J}_{k+1}\star\begin{bmatrix}\mathscr{I}_{k}\\ \widetilde{\mathscr{E}}_{m+1}^{H}\star\mathscr{V}_{k}^{\prime}\end{bmatrix}.\] We obtain \[\mathscr{A}^{H}\star\widetilde{\mathscr{Q}}_{k} =\mathscr{A}^{H}\star\mathscr{Q}_{m}\star\mathscr{U}_{k}^{\prime}\] \[=\mathscr{P}_{m+1}\star\mathscr{B}_{m,m+1}^{H}\star\mathscr{U}_{k}^{\prime}\] \[=\mathscr{P}_{m+1}\star\mathscr{V}_{k}^{\prime}\star\mathscr{S}_{k}^{\prime}\] \[=\mathscr{P}_{m+1}\star\mathscr{J}_{k+1}\star\begin{bmatrix}\mathscr{I}_{k}\\ \widetilde{\mathscr{E}}_{m+1}^{H}\star\mathscr{V}_{k}^{\prime}\end{bmatrix}\star\mathscr{S}_{k}^{\prime}\] \[=\mathscr{P}_{m+1}\star\mathscr{Q}_{k+1}^{\prime}\star\mathscr{R}_{k+1}^{\prime}\star\begin{bmatrix}\mathscr{I}_{k}\\ \widetilde{\mathscr{E}}_{m+1}^{H}\star\mathscr{V}_{k}^{\prime}\end{bmatrix}\star\mathscr{S}_{k}^{\prime}\] \[=\widetilde{\mathscr{P}}_{k+1}\star\mathscr{R}_{k+1}^{\prime}\star\begin{bmatrix}\mathscr{I}_{k}\\ \widetilde{\mathscr{E}}_{m+1}^{H}\star\mathscr{V}_{k}^{\prime}\end{bmatrix}\star\mathscr{S}_{k}^{\prime}.\] The relation (3.33) now yields \[\widetilde{\mathscr{Q}}_{k}^{H}\star\mathscr{A}\star\widetilde{\mathscr{P}}_{k+1}=\widetilde{\mathscr{B}}_{k,k+1}\Leftrightarrow\widetilde{\mathscr{P}}_{k+1}^{H}\star\mathscr{A}^{H}\star\widetilde{\mathscr{Q}}_{k}=\widetilde{\mathscr{B}}_{k,k+1}^{H},\] where \(\widetilde{\mathscr{B}}_{k,k+1}\in\mathbb{K}_{n}^{k\times(k+1)}\) is the subtensor of \(\widetilde{\mathscr{B}}_{k+1}\), which is obtained by removing the last horizontal slice of \(\widetilde{\mathscr{B}}_{k+1}\).
Then \[\widetilde{\mathscr{P}}_{k+1}^{H}\star\mathscr{A}^{H}\star\widetilde{\mathscr{ Q}}_{k}=\mathscr{R}_{k+1}^{\prime}\star\begin{bmatrix}\mathscr{I}_{k}\\ \widetilde{\mathscr{E}}_{m+1}^{H}\star\mathscr{V}_{k}^{\prime}\end{bmatrix} \star\mathscr{S}_{k}^{\prime}=\widetilde{\mathscr{B}}_{k,k+1}^{H}\] and \[\widetilde{\mathscr{P}}_{k+1}^{H}\star\mathscr{A}^{H}\star\widetilde{\widetilde{ \mathscr{Q}}}_{k+1}=\widetilde{\mathscr{B}}_{k+1}^{H}\star\widetilde{\mathscr{Q }}_{k+1}^{H}\star\widetilde{\widetilde{\mathscr{Q}}}_{k+1}=\widetilde{\mathscr{ B}}_{k+1}^{H}\star\widetilde{\mathscr{E}}_{k+1}=\widetilde{\boldsymbol{\alpha}}_{k+1} \star\widetilde{\mathscr{E}}_{k+1}.\] Hence, \[\mathscr{A}^{H}\star\widetilde{\widetilde{\mathscr{Q}}}_{k+1}=\widetilde{ \boldsymbol{\alpha}}_{k+1}\star\widetilde{\mathscr{P}}_{k+1}+\widetilde{ \widetilde{\mathscr{B}}}_{k+1}^{\prime} \tag{3.35}\] with \(\widetilde{\widetilde{\mathscr{B}}}_{k+1}^{\prime}\perp\widetilde{\mathscr{P} }_{k+1}\). It follows that \[\mathscr{A}^{H}\star\widetilde{\mathscr{Q}}_{k+1}=\widetilde{\mathscr{P}}_{k+1 }\star\widetilde{\mathscr{B}}_{k+1}^{H}+\widetilde{\widetilde{\mathscr{B}}}_{ k+1}^{\prime}\star\widetilde{\mathscr{E}}_{k+1}^{H}.\] Normalization of \(\widetilde{\widetilde{\mathscr{B}}}_{k+1}^{\prime}\) gives \[\mathscr{A}^{H}\star\widetilde{\mathscr{Q}}_{k+1}=\widetilde{\mathscr{P}}_{k+ 1}\star\widetilde{\mathscr{B}}_{k+1}^{H}+\widetilde{\boldsymbol{\beta}}_{k+1} \star\widetilde{\widetilde{\mathscr{P}}}_{k+2}\star\widetilde{\mathscr{E}}_{k +1}^{H}.\] The orthonormality of the lateral slices of \(\widetilde{\mathscr{P}}_{k+1}\) and \(\widetilde{\mathscr{Q}}_{k+1}\) holds by the construction of these tensors. Specifically, it follows from (3.30) that the lateral slices of \(\widetilde{\mathscr{P}}_{k+1}\) are orthonormal. Due to (3.31), the first \(k\) lateral slices of \(\widetilde{\mathscr{Q}}_{k+1}\) are orthonormal. Notice that if \(\widetilde{\boldsymbol{\beta}}_{k+1}\) given in (3.28) vanishes, then we have determined \(k\) singular triplets, i.e., these singular triplets of \(\mathscr{A}\) can be computed by using the singular triplets of \(\widetilde{\mathscr{B}}_{k+1}\), as well as \(\widetilde{\mathscr{P}}_{k+1}\) and \(\widetilde{\mathscr{Q}}_{k+1}\) defined in (3.27) and (3.28). If \(\widetilde{\boldsymbol{\beta}}_{k+1}\) does not vanish, then we append new lateral slices to \(\widetilde{\mathscr{P}}_{k+1}\) and \(\widetilde{\mathscr{Q}}_{k+1}\) in a similar way as we did in the previous subsection. The following result is analogous to Theorem 10. **Theorem 12**: _Carry out \(m\) steps of Algorithm 5 and assume that eqs (3.27) and (3.28) hold for \(k=1,2,\ldots,m-1\). Further, let \(\widetilde{\boldsymbol{\beta}}_{k+1}\) in (3.28) be nonvanishing. 
Then we have the following relations_ \[\mathscr{A}\star\widetilde{\mathscr{P}}_{m} =\widetilde{\mathscr{Q}}_{m}\star\widetilde{\mathscr{B}}_{m},\] \[\mathscr{A}^{H}\star\widetilde{\mathscr{Q}}_{m} =\widetilde{\mathscr{P}}_{m}\star\widetilde{\mathscr{B}}_{m}^{H}+\widetilde{\boldsymbol{\beta}}_{m}\star\widetilde{\mathscr{P}}_{m+1}\star\widetilde{\mathscr{E}}^{H},\] _where \(\widetilde{\mathscr{P}}_{m}\in\mathbb{K}_{n}^{p\times m}\) and \(\widetilde{\mathscr{Q}}_{m}\in\mathbb{K}_{n}^{\ell\times m}\) are orthonormal tensors, \(\widetilde{\mathscr{B}}_{m}\in\mathbb{K}_{n}^{m\times m}\) is an upper triangular tensor, \(\widetilde{\boldsymbol{\beta}}_{m}\) is a tube of \(n\) elements, \(\widetilde{\mathscr{P}}_{m+1}\in\mathbb{K}_{n}^{p}\) is orthogonal to all the lateral slices of \(\widetilde{\mathscr{P}}_{m}\), and \(\widetilde{\mathscr{E}}\in\mathbb{K}_{n}^{m}\) is the canonical lateral slice under the \(t\)-product. The first \(k\) lateral slices of \(\widetilde{\mathscr{P}}_{m}\) and \(\widetilde{\mathscr{Q}}_{m}\) are the same as the lateral slices of \(\widetilde{\mathscr{P}}_{k+1}\) and \(\widetilde{\mathscr{Q}}_{k+1}\), respectively, given in Theorem 11._ _Proof._ These results can be shown in the same way as Theorem 10. Theorem 11 requires the invertibility of \(\mathscr{B}_{m}\). Notice that this tensor is well conditioned if all the frontal slices of \(\widehat{\mathscr{B}}_{m}\) are well conditioned, i.e., if \[\max_{1\leqslant i\leqslant n}\kappa\left(\widehat{\mathscr{B}}_{m}^{(i)}\right)\] is small, where \[\kappa(\widehat{\mathscr{B}}_{m}^{(i)})=\frac{\left(\widehat{\boldsymbol{s}}_{1}^{\mathscr{B}_{m}}\right)^{(i)}}{\left(\widehat{\boldsymbol{s}}_{m}^{\mathscr{B}_{m}}\right)^{(i)}}.\] Algorithm 6 describes computations required to compute approximations of either the \(k\) largest singular triplets or the \(k\) smallest singular triplets of a third-order tensor \(\mathscr{A}\) using the methods we developed in the present or previous subsections. ``` 0:\(\mathscr{A}\in\mathbb{K}_{n}^{\ell\times p}\). \(m\): the number of tensor Lanczos bidiagonalization steps. \(\widetilde{\mathscr{P}}_{1}\in\mathbb{K}_{n}^{p}\) with unit norm. \(k\): the number of the desired singular triplets. \(\delta\): the tolerance for accepting the approximate singular triplets. \(\epsilon\): machine epsilon. type: a variable for the kind of augmentation, which is either 'Ritz' for Ritz augmentation or 'Harm' for harmonic Ritz augmentation. 0: The \(k\) desired singular triplets of \(\mathscr{A}\), \(\{\sigma_{i},\widetilde{\mathscr{U}}_{i},\widetilde{\mathscr{V}}_{i}\}_{i=1:k}\). 1:Compute the partial Lanczos bidiagonalization of \(\mathscr{A}\) by Algorithm 5. 2:Compute the t-SVD of \(\mathscr{B}_{m}\) using Algorithm 2. 3:Check the convergence of Equation (3.9). If all the \(k\) desired singular triplets are well approximated, then exit. 4:Compute the augmented quantities: 5:if type='Ritz' or \(\kappa(\mathscr{B}_{m})>\epsilon^{-\frac{1}{2}}\)then 6: Compute the tensors \(\mathscr{P}:=\widetilde{\mathscr{P}}_{k+1}\), \(\mathscr{Q}:=\widetilde{\mathscr{Q}}_{k+1}\), \(\mathscr{B}:=\widetilde{\mathscr{B}}_{k+1}\) and the residual \(\widetilde{\mathscr{F}}_{k}\) from (3.12), (3.15), (3.16) and (3.18). 7:endif 8:if type='Harm' and \(\kappa(\mathscr{B}_{m})\leq\epsilon^{-\frac{1}{2}}\)then 9: Compute the t-SVD of \(\mathscr{B}_{m,m+1}\). 10: Compute the t-QR factorization of \(\mathscr{J}_{k+1}\) in (3.29).
11: Compute the tensors \(\mathscr{P}:=\widetilde{\mathscr{P}}_{k+1}\), \(\mathscr{Q}:=\widetilde{\mathscr{Q}}_{k+1}\), \(\mathscr{B}:=\widetilde{\mathscr{B}}_{k+1}\) and the residual \(\widetilde{\widetilde{\mathscr{R}}}_{m}\) from (3.30), (3.31), (3.34) and (3.35). 12:endif 13: Append \(m-k\) lateral slices to \(\mathscr{P}\) and \(\mathscr{Q}\), and \(m-k\) horizontal and lateral slices to \(\mathscr{B}\) to obtain \(\mathscr{P}_{m}\), \(\mathscr{Q}_{m}\) and \(\mathscr{B}_{m}\), and determine a new residual \(\widetilde{\mathscr{R}}_{m}\). 14: Go to step 2. ``` **Algorithm 6** Tensor Lanczos Bidiagonalization Ritz (t-LBR) algorithm for computing the largest and the smallest singular triplets. ## 4 Multidimensional principal component analysis for facial recognition Principal component analysis (PCA) is used in numerous areas of science and engineering, such as in data denoising, image classification, and facial recognition. Some approaches to color image classification involve conversion of color images to grayscale images to reduce the computational burden, because color images are represented by tensors, while grayscale images can be represented by matrices; see [2, 29]. However, this conversion entails loss of information. A color image in RGB format can be represented by a third-order tensor. This section discusses the application of PCA to third-order tensors. PCA, when applied to grayscale face recognition, computes a set of characteristics (eigenfaces) corresponding to the main components of the initial set of training images. Recognition is done by projecting the training images into the eigenface subspace, in which an image of a person is classified by comparing it with the other available images in that subspace. The main advantages of this procedure are its simplicity, speed, and insensitivity to small changes in the faces. When applying PCA to third-order tensors using the t-product, tubes, lateral slices, and third-order tensors are analogues of scalars, vectors, and matrices in the eigenface technique for classifying grayscale images. Using this identification, PCA for third-order tensors that represent color images is structurally very similar to PCA for matrices that represent grayscale images. The latter is described in [17]. Let \(N\) training color images \(I_{1},I_{2},\ldots,I_{N}\) of size \(\ell\times p\times n\) be available. They are represented by the third-order tensors \(\mathscr{I}_{1},\mathscr{I}_{2},\ldots,\mathscr{I}_{N}\) in \(\mathbb{R}^{\ell\times p\times n}\). The procedure of recognizing color facial images using third-order tensors is as follows: 1. For each image \(I_{i}\) for \(i=1,2,\ldots,N\), we determine a lateral slice \(\vec{\mathscr{X}}_{i}\in\mathbb{R}^{\ell p\times 1\times n}\) by vectorizing each frontal slice, i.e., \(\vec{\mathscr{X}}_{i}^{(s)}=\texttt{vec}(\mathscr{I}_{i}^{(s)})\) for \(s=1,2,\ldots,n\). We then construct a tensor whose lateral slices are given by the \(\vec{\mathscr{X}}_{i}\), i.e., \[\mathscr{X}=\Big{[}\vec{\mathscr{X}}_{1},\vec{\mathscr{X}}_{2},\ldots,\vec{\mathscr{X}}_{N}\Big{]}\in\mathbb{R}^{\ell p\times N\times n}.\] 2. Compute the mean of the lateral slices of \(\mathscr{X}\), i.e., \[\vec{\mathscr{M}}=\frac{1}{N}\sum_{i=1}^{N}\vec{\mathscr{X}}_{i},\] and let \[\overline{\mathscr{X}}=[\vec{\overline{\mathscr{X}}}_{1},\vec{\overline{\mathscr{X}}}_{2},\ldots,\vec{\overline{\mathscr{X}}}_{N}],\qquad\vec{\overline{\mathscr{X}}}_{i}=\vec{\mathscr{X}}_{i}-\vec{\mathscr{M}}.\] 3.
Determine the first \(k\) left singular lateral slices of \(\overline{\mathscr{X}}\). We denote them by \(\vec{\mathscr{U}}_{1},\ldots,\vec{\mathscr{U}}_{k}\). Construct the projection subspace \[\mathbb{U}_{k}=\texttt{span}\left\{\vec{\mathscr{U}}_{1},\vec{\mathscr{U}}_{2},\ldots,\vec{\mathscr{U}}_{k}\right\}\] (4.1) and let \[\mathscr{U}_{k}=\Big{[}\vec{\mathscr{U}}_{1},\vec{\mathscr{U}}_{2},\ldots,\vec{\mathscr{U}}_{k}\Big{]}\in\mathbb{R}^{\ell p\times k\times n}.\] 4. Project each face \(I_{i}\) onto the subspace (4.1) to obtain \(\mathscr{U}_{k}^{H}\star\vec{\overline{\mathscr{X}}}_{i}\). A test image \(I_{0}\) is also projected onto the same subspace to get \(\mathscr{U}_{k}^{H}\star\Big{(}\vec{\mathscr{X}}_{0}-\vec{\mathscr{M}}\Big{)}\). Finally, determine the closest image to the test image by computing the minimal distance between the projected test image and all the projected training images. The main difference between methods that use PCA for facial recognition is the way that the first (dominant) left singular lateral slices of \(\overline{\mathscr{X}}\) are computed. In the present paper, we use our proposed method to compute the dominant singular triplets that are used in PCA. The following algorithm summarises the different steps in our approach. ## 5 Numerical experiments This section illustrates the performance of Algorithm 6 for computing the largest or smallest singular triplets when applied to synthetic data, tensor compression, and facial recognition. All computations are carried out on a laptop computer with 2.3 GHz Intel Core i5 processors and 8 GB of memory using MATLAB 2018a. ### Examples with synthetic data We use synthetic data generated by the MATLAB command \(\mathtt{randn}(\ell,p,n)\), which generates a tensor \(\mathscr{A}\in\mathbb{R}^{\ell\times p\times n}\), whose entries are normally distributed pseudorandom numbers with mean zero and variance one. #### 5.1.1 Largest singular values Table 5.1 displays the error in the four largest approximate singular tubes computed by augmentation by Ritz lateral slices (referred to as Ritz in the table) and by the partial Lanczos bidiagonalization/Golub-Kahan algorithm (referred to as GK in the table) as described in [17], but using the t-product. These errors are given by \(\left\|\mathscr{S}(i,i,:)-\boldsymbol{\Sigma}(i,i,:)\right\|_{F}\) for \(i=1,2,3,4\) with \(m=20\). Table 5.2 shows the number of iterations required when using augmentation by Ritz lateral slices to approximate the four largest singular triplets for tensors of different sizes and the number of Lanczos bidiagonalization steps \(m\).
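For reference, the t-product and the t-SVD that serve as the baseline in the comparisons below can be sketched in a few lines of Python/NumPy. This is our own illustration under the standard FFT-based definition of the t-product (the experiments reported in this section were carried out in MATLAB); it assumes a real input tensor and fills the Fourier faces beyond \(n/2\) by conjugate symmetry so that the computed factors come out real:

```python
import numpy as np

def t_product(A, B):
    # t-product via FFT along the tube (third) axis and face-wise matrix products
    Ah, Bh = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
    return np.fft.ifft(np.einsum('ijk,jlk->ilk', Ah, Bh), axis=2).real

def t_transpose(A):
    # conjugate t-transpose: transpose each frontal face, reverse the order of faces 2..n
    At = A.transpose(1, 0, 2)
    return np.concatenate([At[:, :, :1], At[:, :, 1:][:, :, ::-1]], axis=2)

def t_svd(A):
    # face-wise SVDs in the Fourier domain; the remaining faces follow from
    # conjugate symmetry of the FFT of a real tensor
    l, p, n = A.shape
    Ah = np.fft.fft(A, axis=2)
    Uh = np.zeros((l, l, n), complex)
    Sh = np.zeros((l, p, n), complex)
    Vh = np.zeros((p, p, n), complex)
    for i in range(n // 2 + 1):
        u, s, vh = np.linalg.svd(Ah[:, :, i])
        Uh[:, :, i], Vh[:, :, i] = u, vh.conj().T
        Sh[:min(l, p), :min(l, p), i] = np.diag(s)
    for i in range(n // 2 + 1, n):
        Uh[:, :, i] = Uh[:, :, n - i].conj()
        Sh[:, :, i] = Sh[:, :, n - i].conj()
        Vh[:, :, i] = Vh[:, :, n - i].conj()
    U, S, V = (np.fft.ifft(T, axis=2).real for T in (Uh, Sh, Vh))
    return U, S, V

A = np.random.default_rng(1).standard_normal((30, 20, 3))
U, S, V = t_svd(A)
print(np.linalg.norm(t_product(t_product(U, S), t_transpose(V)) - A))
```

The reconstruction error printed in the last line should be at machine-precision level.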
\begin{table} \begin{tabular}{l|l|l|l|l|l|l} \hline \(i\) & Methods & \(100\times 100\times 3\) & \(500\times 500\times 3\) & \(1000\times 1000\times 3\) & \(100\times 100\times 5\) & \(500\times 500\times 5\) \\ \hline \multirow{2}{*}{1} & Ritz & 7.13e-14 & 1.60e-13 & 2.27e-13 & 2.85e-14 & 1.63e-13 \\ & GK & 8.16e-10 & 0.09 & 0.01 & 3.18e-08 & 0.01 \\ \hline \multirow{2}{*}{2} & Ritz & 9.29e-14 & 1.98e-13 & 1.56e-13 & 5.62e-14 & 1.48e-13 \\ & GK & 1.27e-05 & 0.07 & 0.44 & 3.12e-04 & 0.15 \\ \hline \multirow{2}{*}{3} & Ritz & 5.01e-14 & 2.70e-13 & 8.93e-14 & 5.41e-14 & 2.66e-13 \\ & GK & 0.02 & 0.95 & 1.78 & 6.05e-04 & 0.51 \\ \hline \multirow{2}{*}{4} & Ritz & 3.39e-13 & 4.92e-11 & 9.01e-13 & 3.39e-14 & 6.74e-13 \\ & GK & 0.01 & 1.60 & 3.37 & 0.08 & 2.03 \\ \hline \end{tabular} \end{table} Table 5.1: The Frobenius norm \(\left\|\mathscr{S}(i,i,:)-\boldsymbol{\Sigma}(i,i,:)\right\|_{F}\), where \(\mathscr{S}(i,i,:)\) denotes the singular tubes computed by either augmentation by Ritz lateral slices (Ritz) or by partial Lanczos bidiagonalization, also known as partial Golub-Kahan bidiagonalization (GK), and \(\boldsymbol{\Sigma}(i,i,:)\) stands for the singular tubes determined by the t-SVD method with \(m=20\) for \(i=1,2,3,4\). Table 5.1 shows that the Ritz augmentation method yields much higher accuracy than the GK method. Figures 5.1 and 5.2 display the values of some frames of the first 10 singular tubes of third-order tensors of sizes \(100\times 100\times 3\) and \(1000\times 1000\times 5\), respectively, computed by Ritz augmentation using Algorithm 6, the t-SVD, and partial Lanczos bidiagonalization (GK). Each tube is denoted by \(\mathscr{S}(k,k,:)\in\mathbb{K}_{n}\), where \(n\) is equal to 3 or 5, and \(k=1,2,\ldots,10\). In other words, for a fixed \(i\) with \(1\leq i\leq n\), we plot \(\mathscr{S}(k,k,i)\) for \(k=1,2,\ldots,10\). As mentioned above, the \(i\)th computed singular triplet is accepted as an approximate singular triplet if \(\|\widetilde{\mathscr{R}}_{m}\star\widetilde{\mathscr{E}}_{m}^{H}\star\widetilde{\mathscr{U}}_{i}\|_{F}\) is small enough for \(1\leq i\leq k\), where \(k\) is the number of desired singular triplets and the \(\widetilde{\mathscr{U}}_{i}\) are left singular lateral slices of the current tensor \(\mathscr{B}_{m}\); see eq. (3.9). Figure 5.3 shows the evolution of the error computed by (3.9) for the first three singular triplets determined by Algorithm 6 when applied to a third-order tensor of size \(1000\times 1000\times 3\) for \(m=20\). Figures 5.1 and 5.2 illustrate that using Algorithm 6 with the Ritz augmentation method gives more accurate approximations than the GK method. In particular, the frontal slices of each tube computed with Algorithm 6 are very close to the corresponding frontal slices of the tubes determined by the t-SVD, independently of the size of the third-order tensor. Figure 5.1: On the left, we display the values of the first frontal slices (frames) of the first 10 singular tubes computed by the t-SVD, Ritz augmentation and partial Lanczos bidiagonalization (GK) for synthetic data of size \(100\times 100\times 3\) with \(m=20\), and on the right we plot the third frontal slices of these tubes, i.e., \(\mathscr{S}(k,k,i)\) with \(k=1,2,\ldots,10\) and \(i=1,3\).
\begin{table} \begin{tabular}{l|l|l|l|l|l|l|l|l|l|l} \hline Ritz & \multicolumn{2}{l|}{\(100\times 100\times 3\)} & \multicolumn{2}{l|}{\(500\times 500\times 3\)} & \multicolumn{2}{l|}{\(1000\times 1000\times 3\)} & \multicolumn{2}{l|}{\(100\times 100\times 5\)} & \multicolumn{2}{l}{\(500\times 500\times 5\)} \\ \cline{2-11} augmentation & iter & time & iter & time & iter & time & iter & time & iter & time \\ \hline \(m=10\) & 15 & 0.40 & 29 & 2.84 & 41 & 18.20 & 13 & 0.41 & 29 & 4.09 \\ \hline \(m=20\) & 3 & 0.15 & 5 & 2.14 & 7 & 12.88 & 3 & 0.18 & 5 & 2.91 \\ \hline \end{tabular} \end{table} Table 5.2: Number of iterations (iter) needed by the Ritz augmentation method to determine the four largest singular tubes for third-order tensors of different sizes with \(m=10,\,20\). The columns with header “time” show the CPU time in seconds. #### Smallest singular values This subsection illustrates the performance of Algorithm 6 with Ritz augmentation (referred to as Ritz) and with harmonic Ritz augmentation (referred to as Harm) for computing the smallest singular triplets of synthetic third-order tensors of different sizes. Table 5.3 displays the error in the four smallest singular tubes computed by Ritz augmentation and harmonic Ritz augmentation for \(m=20\), and compares them with results determined by the t-SVD method. In Table 5.4 we show the number of iterations and the required CPU time (in seconds) for these methods when \(m=20\). Figure 5.3: Evolution of the remainder term for a third-order tensor of size \(1000\times 1000\times 3\) when computing the first three singular triplets by Algorithm 6 with Ritz augmentation. Figure 5.2: The left-hand side pane shows the values of the first frontal slices (frames) of the first 10 singular tubes computed by the t-SVD, Ritz augmentation, and the partial Lanczos bidiagonalization (GK) method for synthetic data of size \(1000\times 1000\times 5\) with \(m=20\). The right-hand side pane displays the third frontal slices of these tubes, i.e., \(\mathscr{S}(k,k,i)\) for \(k=1,2,\dots,10\) and \(i=1,3\). \begin{table} \begin{tabular}{l|l|l|l|l|l|l|l|l} \hline \multirow{2}{*}{Method} & \multicolumn{2}{l|}{\(100\times 100\times 3\)} & \multicolumn{2}{l|}{\(500\times 500\times 3\)} & \multicolumn{2}{l|}{\(100\times 100\times 5\)} & \multicolumn{2}{l}{\(500\times 500\times 5\)} \\ \cline{2-9} & CPU time & iter & CPU time & iter & CPU time & iter & CPU time & iter \\ \hline Ritz & 0.99 & 31 & 231.81 & 615 & 1.11 & 30 & 425.83 & 831 \\ \hline Harm & 0.85 & 29 & 227.49 & 606 & 1.03 & 30 & 355.35 & 723 \\ \hline \end{tabular} \end{table} Table 5.4: CPU time in seconds, and number of iterations required by Algorithm 6 with Ritz augmentation and harmonic Ritz augmentation for \(m=20\) to compute the four smallest singular triplets of synthetic third-order tensors of different sizes.
\begin{table} \begin{tabular}{l|l|l|l|l|l} \hline \multirow{2}{*}{i} & Method & \(100\times 100\times 3\) & \(100\times 100\times 5\) & \(500\times 500\times 3\) & \(500\times 500\times 5\) \\ \hline \multirow{2}{*}{\(n-3\)} & Ritz & 3.82e-11 & 5.22e-12 & 1.34e-10 & 2.50e-10 \\ & Harm & 1.03e-13 & 4.64e-13 & 4.66e-13 & 1.07e-13 \\ \hline \multirow{2}{*}{\(n-2\)} & Ritz & 1.99e-14 & 4.34e-13 & 1.20e-14 & 1.68e-11 \\ & Harm & 4.94e-15 & 3.10e-13 & 2.46e-14 & 3.77e-14 \\ \hline \multirow{2}{*}{\(n-1\)} & Ritz & 8.36e-14 & 4.56e-14 & 1.77e-14 & 6.86e-12 \\ & Harm & 1.64e-15 & 6.05e-15 & 2.88e-14 & 1.39e-13 \\ \hline \multirow{2}{*}{\(n\)} & Ritz & 1.38e-15 & 7.71e-16 & 6.49e-15 & 2.00e-12 \\ & Harm & 8.59e-16 & 7.90e-16 & 3.01e-15 & 1.41e-14 \\ \hline \end{tabular} \end{table} Table 5.3: The Frobenius norm \(\left\|\mathscr{S}(i,i,:)-\boldsymbol{\Sigma}(i,i,:)\right\|_{F}\), where \(\mathscr{S}(i,i,:)\) denotes the singular tubes determined by Ritz augmentation or harmonic Ritz augmentation for \(m=20\), and \(\boldsymbol{\Sigma}(i,i,:)\) are tubes computed by the t-SVD method for the four smallest tubes, i.e., for \(i=n-3,n-2,n-1,n\). Figures 5.4 and 5.5 show that the error \(\|\vec{\mathcal{R}}_{m}\star\vec{\mathcal{E}}_{m}^{H}\star\vec{\mathcal{U}}_{i}\|_{F}\) associated with Ritz augmentation in Algorithm 6 converges more smoothly than the corresponding error for harmonic Ritz augmentation. Both errors converge to zero as the number of iterations increases. Figure 5.4: The Frobenius norm of \(\vec{\mathcal{R}}_{m}\star\vec{\mathcal{E}}_{m}^{H}\star\vec{\mathcal{U}}_{i}\) obtained by Algorithm 6 with Ritz augmentation when approximating the two smallest singular triplets of a synthetic tensor of size \(500\times 500\times 5\) with \(m=20\) at each iteration for \(i=499,500\). Figure 5.5: The Frobenius norm of \(\vec{\mathcal{R}}_{m}\star\vec{\mathcal{E}}_{m}^{H}\star\vec{\mathcal{U}}_{i}\) obtained by harmonic Ritz augmentation when approximating the last two singular triplets of a synthetic tensor of size \(500\times 500\times 5\) with \(m=20\), at each iteration for \(i=499,500\). ### Application to data compression Figure 5.6 displays examples of image compression using two color images: "house" of size \(256\times 256\times 3\) and "Hawaii" of size \(1200\times 1200\times 3\). For each image, we compute the \(k\) largest singular triplets using Ritz augmentation in Algorithm 6, which will be referred to as "Ritz," for different numbers \(k\) of desired singular triplets. Figure 5.7 displays the relative error of the compressed images for \(k=5,10,15,25\), by using Ritz augmentation (Ritz) and the t-SVD method. This error is measured by \[\frac{\left\|\mathscr{A}_{k}-\mathscr{A}\right\|_{F}}{\left\|\mathscr{A}\right\|_{F}}, \tag{5.1}\] where \(\mathscr{A}\) denotes the tensor that represents the original image and \(\mathscr{A}_{k}=\sum_{i=1}^{k}\vec{\mathscr{U}}_{i}\star\mathbf{s}_{i}\star\vec{\mathscr{V}}_{i}^{H}\). Figure 5.6: Examples of image compression applied to the “house” and “Hawaii” images for \(k=5,10,15,25\) slices using Algorithm 6 with Ritz augmentation. Figure 5.7: Relative compression error (5.1) for the images “house” and “Hawaii” obtained with Algorithm 6 with Ritz augmentation (Ritz) and the t-SVD method. Figure 5.7 shows that the relative errors obtained with Algorithm 6 with Ritz augmentation and with the t-SVD are almost the same. This means that the approximate singular tubes and left singular lateral slices determined by Algorithm 6 with Ritz augmentation are very accurate.
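The compressed tensor \(\mathscr{A}_{k}\) and the relative error (5.1) are easy to reproduce: truncating each frontal face of \(\mathscr{A}\) to rank \(k\) in the Fourier domain is equivalent to keeping the \(k\) leading singular triplets of the t-SVD. The following self-contained Python/NumPy sketch is our own illustration, with a random tensor standing in for an actual image:

```python
import numpy as np

def truncated_tsvd_reconstruction(A, k):
    """A_k from the k leading singular triplets of the t-SVD: truncate each
    frontal face of A to rank k in the Fourier domain, then transform back."""
    Ah = np.fft.fft(A, axis=2)
    Ak = np.empty_like(Ah)
    for i in range(A.shape[2]):
        u, s, vh = np.linalg.svd(Ah[:, :, i], full_matrices=False)
        Ak[:, :, i] = (u[:, :k] * s[:k]) @ vh[:k, :]
    return np.fft.ifft(Ak, axis=2).real

A = np.random.default_rng(2).standard_normal((64, 64, 3))  # stand-in for an RGB image
for k in (5, 10, 15, 25):
    Ak = truncated_tsvd_reconstruction(A, k)
    print(k, np.linalg.norm(Ak - A) / np.linalg.norm(A))   # relative error (5.1)
```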
### Facial recognition We illustrate the application of Algorithm 7 to facial recognition using color images that are represented by third-order tensors. The images in our test are from the Georgia Tech database GTDB_crop [26], which contains 750 images of 50 persons, with each person represented by 15 images that show various facial expressions, facial orientations, and different illumination conditions. Figure 5.8 shows an example of images of one person in the data set. Each image in the data set is of size \(100\times 100\times 3\) pixels, and we use 3 randomly chosen images of each person as test images. The remaining 600 images form our training set and define the tensor \(\mathscr{X}\in\mathbb{R}^{10000\times 600\times 3}\). We applied Algorithm 7 and compared the results with those obtained by the t-SVD and also with results obtained by the Golub-Kahan (GK) algorithm using the t-product. The performance of these methods is measured by the identification rate given by \[\text{Identification rate}=\frac{\text{number of correctly matched images}}{\text{number of test images}}\times 100(\%). \tag{5.2}\] Figures 5.9 and 5.10 show results obtained for \(k=1\) and \(k=5\) for two different persons. The mean image is defined as in Algorithm 7. Figure 5.8: An example of a person with different facial expressions and orientations. Figure 5.9: A test for \(k=1\). Figure 5.10: A test for \(k=5\). Figures 5.9 and 5.10 show that Algorithm 7 performs well for some values of the truncation index \(k\). In Figure 5.11, we plot the identification rate (5.2) obtained with Algorithm 7 (Ritz augmentation), GK for \(m=k\), and with the exact t-SVD method for the 150 test images. Table 5.5 reports CPU times for Algorithm 7 for \(m=10\) (Ritz) and for the t-SVD method for different values of the truncation index \(k\). The results show that Algorithm 7 is very effective in terms of both accuracy and CPU time compared with the t-SVD and the classical Golub-Kahan methods. ## 6 Conclusion and extensions This paper presents two new methods for approximating the largest or smallest singular triplets of large third-order tensors using the t-product. We use restarted Lanczos bidiagonalization for third-order tensors to develop the Ritz augmentation method to determine the largest or smallest singular triplets. Moreover, we propose the harmonic Ritz augmentation method to compute the smallest singular triplets. These methods are applied to data compression and face recognition.
2305.07596
Visualizing Entanglement in multi-Qubit Systems
In the field of quantum information science and technology, the representation and visualization of quantum states and related processes are essential for both research and education. In this context, a focus especially lies on ensembles of few qubits. There exist many powerful representations for single-qubit and multi-qubit systems, such as the famous Bloch sphere and generalizations. Here, we utilize the dimensional circle notation as a representation of such ensembles, adapting the so-called circle notation of qubits and the idea of representing the n-particle system in an n-dimensional space. We show that the mathematical conditions for separability lead to symmetry conditions of the quantum state visualized, offering a new perspective on entanglement in few-qubit systems and therefore on various quantum algorithms. In this way, dimensional notations promise significant potential for conveying nontrivial quantum entanglement properties and processes in few-qubit systems to a broader audience, and could enhance understanding of these concepts as a bridge between intuitive quantum insight and formal mathematical descriptions.
Jonas Bley, Eva Rexigel, Alda Arias, Nikolas Longen, Lars Krupp, Maximilian Kiefer-Emmanouilidis, Paul Lukowicz, Anna Donhauser, Stefan Küchemann, Jochen Kuhn, Artur Widera
2023-05-12T16:41:59Z
http://arxiv.org/abs/2305.07596v4
# Visualizing Entanglement, Measurements and Unitary Operations in multi-Qubit Systems ###### Abstract In the field of quantum information science and technology, the representation and visualization of quantum states and processes are essential for both research and education. In this context, a focus especially lies on ensembles of few qubits. While powerful representations exist for single-qubit illustrations, such as the famous Bloch sphere, similar visualizations to intuitively understand quantum correlations or few-body entanglement are scarce. Here, we present the dimensional circle notation as a representation of such ensembles, adapting the so-called circle notation of qubits. The \(n\)-particle system is represented in an \(n\)-dimensional space, and the mathematical conditions for separability lead to symmetry conditions of the quantum state visualized. This notation promises significant potential for conveying nontrivial quantum properties and processes such as entanglement, measurements and unitary operations in few-qubit systems to a broader audience, and it could enhance understanding of these concepts beyond education as a bridge between intuitive quantum insight and formal mathematical descriptions. ## I Introduction Genuine quantum properties are hard to visualize and hence to intuitively understand. Powerful visualizations of simple two-level, single-particle systems, such as the Bloch vector representation of the density matrix, have been developed to represent properties and dynamics in various situations beyond the mathematical description. Due to the extraordinary mathematical complexity of multi-qubit systems, representing many-body correlations for even two- or few-qubit systems comes with many challenges. Geometric representations of pure multi-qubit states and entanglement were previously addressed from the perspective of the mathematical field of topology [1, 2]. Other representations include the Majorana representation depicting multi-qubit states on a Bloch sphere [3] or, alternatively, the use of separate Bloch spheres for the non-entangled part of the system and the entangled part [4, 5]. Also, generalized Wigner functions can be used to represent systems of few qubits [6]. Lastly, a haptic model of entanglement based on knot theory has been proposed [7]. In all of these works, entanglement is geometrically represented. However, they are difficult to generalize to more than two- or three-qubit systems. In addition, the profound mathematical background in, e.g., topology or advanced geometry often needed to understand these models adds multiple layers of complexity. These are, however, often unnecessary in the context of quantum computing algorithms. To solve the latter challenge, various two-qubit visualizations are used for educational purposes [8, 9] and also in the context of quantum games [10, 11, 12]. For more general applications, one needs to go beyond two- or three-qubit systems. Here, graphical languages like the ZX, ZW or ZH calculi, which can be seen as abstractions of circuit diagrams, are commonly used to visualize quantum states and algorithms [13, 14, 15, 16]. Their abstractness can be an advantage, e.g., for efficiently showing gate identities and the different possible entanglement properties of multi-qubit systems [14]. At the same time, they require an already existing understanding of the often complex underlying concepts and processes. To acquire this understanding, explicit visualizations are necessary.
As such an explicit visualization, the so-called circle notation [17] has been introduced. The aim of this visualization is to minimize the reluctance of learners towards quantum notations and linear algebra formalities, and instead highlight the basic ideas and mechanisms of quantum algorithms explicitly. In this notation, complex numbers are represented graphically by visualizing their magnitude as a filled area in a circle, and their phase as gauge in the circle. A drawback of this visualization is that the action of gate operations on the multi-qubit registers is not intuitive but rather has to be memorized. Furthermore, entanglement remains hidden. In this work, we present an extension of the circle notation associating every qubit with a separate dimension in space. This new representation visualizes entanglement and provides natural access to quantum operations on multi-qubit registers which enables the explicit visualization of quantum algorithms of up to at least five qubits. We call this extension _dimensional circle notation_ (DCN). The approach of assigning qubits to different dimensions in space is also utilized in [18] for educative purposes, however, considering only real coefficients, thus restricting use cases and prohibiting a general depiction of entanglement that DCN enables. In addition, we show extensions to four- and five-qubit systems. DCN considers the well known theory of learning and problem solving with multiple external representations (MERs) [19; 20] which aims to support learners' understanding by focusing not only on symbolic-mathematical or text-based representations (e.g., formulas or written text), but also on visual-graphical representations (e.g., pictures and diagrams). In addition, it provides a new perspective on separability of pure multi-qubit states that could be more suitable for learners than the often-used definitions using the density matrix formalism [21; 22; 23; 24; 25; 26; 27]. Therefore, we see its relevance as a bridge between single-particle visualization and mathematical many-body descriptions to build intuition for few-body quantum correlations. This can be used, for instance, in courses within the field of quantum information science and technology (QIST) as a facilitator for the construction of conceptual understanding of entanglement and gate operations in multi-qubit systems and, also, beyond education. This paper is structured as follows: In Sec. II, the underlying theory of MERs and its relevance for the discussion of DCN is described. In Sec. III the circle notation is introduced. It is followed by the introduction of the dimensional circle notation in Sec. IV. Examples in three-qubit systems using DCN are presented in Sec. V. After concluding in Sec. VI, we present an outlook in Sec. VII, where we introduce an interactive DCN web tool and illustrate further extensions of DCN, like visualization of quantum algorithms in four-and five-qubit systems. ## II Supporting conceptual understanding in QIST by using multiple external representations It is well known that learning and problem solving in different contexts of science, technology, engineering, and mathematics (STEM) can be supported by using not only one but multiple external representations (MERs) [20]. In particular, this finding can be utilized in supporting learners' understanding by focusing not only on text-based and symbolic-mathematical representations (e.g., written text and formulas), but also on graphical representations (e.g., pictures and diagrams). 
From the theoretical perspective, the benefit of using a text accompanied by a graphical representation in learning can be explained by taking advantage of dual coding in the verbal and visual channel of the working memory in contrast to the processing of verbal information only [28; 29]. In this way, information processing is distributed among the two channels, so the load on a channel is lower in comparison when all the information is processed only by single channel. However, each representation additional to a text implies a new effort for learners, because they need to know how a representation depicts information, i.e., learners need to possess representational competence [30]. Ref. [31] points out that learning with more than two representations, such as a text, an equation and a diagram, is only more efficient than learning with two representations if the learner possesses visual understanding (i.e., representational competence) of each representation. In this line, a representation that encodes complex information in a relatively intuitive way for learners, so that it is easy to acquire representational competence, may be a valuable asset for learning. As mentioned above, up to now, there are only a few representations of multi-qubit systems in QIST and they are rather limited in their capacities, so that it is not easily possible to visualize entanglement and the actions of gate operations. Therefore, DCN enables instructors to encode information in an easily accessible third representation additionally to a descriptive text and mathematical equations to exploit the benefits of MERs for these complex concepts. Based on Ainsworth's theoretical framework [19], there are three key functions MERs can fulfill to support learning. They can _complement each other_ either by containing different information or supporting different processes. Furthermore, they can _constrain each other_, e.g. by familiarity or inherent properties. Third, using MERs can _construct deeper understanding_ by confronting learners with the abstraction of underlying knowledge structures, the extension of knowledge to an unknown representation or enhancing understanding of the relations between different representations. By providing a graphical representation of qubit characteristics and gate operations, we supply learners with additional access to QIST basics complementary to the mathematical notation. In this way, we especially aim to facilitate the understanding of the corresponding mathematical concepts by providing the opportunity to extend existing knowledge structures based on DCN to mathematical formulations. More particularly in the context of entanglement, by using DCN, learners get access to new deciding factors for whether qubit systems or even subsystems are separable or entangled. However, it is important to note that, in order to benefit from MERs, learners have to cope with understanding of not only how the scientific knowledge is presented in one representation but also how to translate between different representations. Hence, the learning effectiveness of MERs does not only rely on the learning material but also on learner characteristics [19; 32]. In this work, we extend the mathematical formalism of multi-qubit systems and related processes with DCN. Based on current psychological and educational research, we expect that the use of DCN can utilize the known advantages of learning with MERs in QIST by constructing deeper understanding of entanglement and quantum gate operations. 
## III Circle notation We start by briefly introducing the circle notation. In an \(n\)-qubit system, there are \(2^{n}\) different possible basis states represented by \(2^{n}\) circles. We will work solely in the computational basis as it is commonly used in quantum computing. Here, the basis is given by \(\{\left|i\right\rangle\}\) with \(i\in\{0,1\}^{n}\) and \(\left|i\right\rangle=\left|i_{n}i_{n-1}\ldots i_{1}\right\rangle\), which defines the \(n\)-qubit register. Any pure \(n\)-qubit state \(\left|\psi\right\rangle\) can be written as a superposition of these basis states: \[\begin{split}\left|\psi\right\rangle&=\alpha_{0}\left|0\ldots 0\right\rangle+\alpha_{1}\left|0\ldots 01\right\rangle\\ &+\alpha_{2}\left|0\ldots 010\right\rangle+\ldots+\alpha_{2^{n}-1}\left|1\ldots 1\right\rangle\end{split} \tag{1}\] with \(\alpha_{i}\in\mathbb{C},\sum_{i=0}^{2^{n}-1}|\alpha_{i}|^{2}=1\). As per the convention used here, the rightmost entry in the ket state corresponds to the first qubit and the leftmost entry to the \(n\)'th qubit. This means that the least significant qubit in the binary system corresponds to the rightmost entry. As shown in Fig. 1, the circle notation graphically represents the magnitudes of the amplitudes \(\alpha_{i}\) as filled inner circles with radius \(|\alpha_{i}|\) and their phase \(\varphi\) of \(\alpha_{i}=e^{i\varphi}|\alpha_{i}|\) as the angle between the radial line and a vertical line. Figure 1: A qubit in the state \(\left|\psi\right\rangle=\sqrt{2/3}\left|0\right\rangle+1/\sqrt{3}e^{i\pi/2}\left|1\right\rangle\) in circle notation. The outer circles represent the basis states \(\left|0\right\rangle\) and \(\left|1\right\rangle\). The radii of the inner circles represent the absolute values of the corresponding coefficients. The radius of the blue circle is \(\sqrt{2/3}\) and the radius of the green circle \(1/\sqrt{3}\). The blue area is double the size of the green area, showing that measuring would, on average, yield the result \(0\) twice as often as \(1\). The angles of the lines with respect to a vertical line represent the phases of the corresponding coefficients. Here, the line of the coefficient \(1/\sqrt{3}e^{i\pi/2}\) of the basis state \(\left|1\right\rangle\) is horizontal and facing left, representing the phase \(\pi/2\). Some important single-qubit operations (in a single-qubit system) are shown in Fig. 13 in Appendix A. For two qubits, the possible states are lined up as shown in Fig. 2. In standard circle notation, one cannot immediately determine whether the represented state is separable or entangled. We refer to [17] for a precise and comprehensive introduction to the circle notation, in particular, unitary operations and measurements in multi-qubit systems. Calculating the effect of operations, if not memorized, requires the additional effort of checking each basis state in Dirac ket notation, which could reduce the advantage of this representation with respect to the mathematical one. We tackle these difficulties with DCN, where it is enough to understand these operations in single-qubit systems to understand them in any multi-qubit system. ## IV Dimensional circle notation of two-qubit systems Based on circle notation, we introduce DCN as a graphical representation of multi-qubit systems. Instead of arranging states in a row, we assign every qubit to an axis in a new direction in space, see Fig. 3. As shown there, building product states in DCN is an intuitive procedure following the standard Kronecker product. New qubits are simply attached to the original system in a new dimension in space. _Separability and Entanglement._ Entangled states are multi-qubit states that are not separable. In the classical circle notation, see Fig. 2, it is cumbersome to distinguish a separable state from an entangled one. In this section, we will show how DCN allows spotting separable states in the two-qubit case. A state \(\left|\psi\right\rangle=\alpha_{00}\left|00\right\rangle+\alpha_{01}\left|01\right\rangle+\alpha_{10}\left|10\right\rangle+\alpha_{11}\left|11\right\rangle\) is separable into \(\left|\psi\right\rangle=(\alpha_{1}\left|0\right\rangle+\beta_{1}\left|1\right\rangle)\otimes(\alpha_{2}\left|0\right\rangle+\beta_{2}\left|1\right\rangle)\), where \(\otimes\) is the Kronecker product, if and only if \[\alpha_{00}\alpha_{11}=\alpha_{01}\alpha_{10} \tag{2}\] as stated in, e.g., Ref. [2, p. 396].
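Condition (2) is straightforward to test numerically. The following Python/NumPy snippet is our own illustration (not part of DCN itself) and checks the condition for a product state and for a Bell state, with amplitudes ordered as in Eq. (1):

```python
import numpy as np

def is_separable(psi, tol=1e-12):
    """Test condition (2) for a two-qubit state psi = (a00, a01, a10, a11),
    with amplitudes ordered as |00>, |01>, |10>, |11>."""
    a00, a01, a10, a11 = psi
    return abs(a00 * a11 - a01 * a10) < tol

plus = np.array([1, 1]) / np.sqrt(2)                      # |+> = (|0> + |1>)/sqrt(2)
print(is_separable(np.kron(plus, plus)))                  # True: |+>|+> is a product state
print(is_separable(np.array([1, 0, 0, 1]) / np.sqrt(2)))  # False: Bell state is entangled
```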
We can represent this condition in terms of coefficient ratios \(\alpha_{00}/\alpha_{01}=\alpha_{10}/\alpha_{11}\) in the case of \(\alpha_{01},\alpha_{11}\neq 0\) or \(\alpha_{10}/\alpha_{00}=\alpha_{11}/\alpha_{01}\) in the case of \(\alpha_{00},\alpha_{01}\neq 0\). In the case of more than two coefficients being \(0\), the system is trivial. This means we can visually not only identify entangled states, but also get a sense for the degree of entanglement by comparing the ratios of the coefficients \(\alpha_{00}/\alpha_{01}=r_{1}e^{i\varphi_{1}},\alpha_{10}/\alpha_{11}=r_{2}e^{i\varphi_{2}}\) in terms of the ratio of their amplitudes \(r_{1}/r_{2}\) and the difference of their phases \(\varphi_{1}-\varphi_{2}\). For example, the concurrence \(\mathcal{C}\) is a common way to measure entanglement [33]. It is defined as \(\mathcal{C}=2|\alpha_{11}\alpha_{00}-\alpha_{10}\alpha_{01}|=2|\alpha_{01}\alpha_{11}|\,r_{1}|1-(r_{2}/r_{1})e^{i(\varphi_{2}-\varphi_{1})}|\) for pure two-qubit states (assuming \(\alpha_{01},\alpha_{11}\neq 0\) and \(r_{1}>0\)). It can be seen that the concurrence is large for large differences in phases (\(|\varphi_{2}-\varphi_{1}|\approx\pi\)) and large or small ratios of magnitudes (\(r_{2}/r_{1}\gg 1\) or \(r_{2}/r_{1}\ll 1\)). We compare these ratios for every pair of states along the axis of one qubit, where both of the corresponding coefficients are non-zero. Then, we can determine whether the system is symmetrical along that axis, apart from a (complex) ratio. If we find symmetry, we know that the system is separable. This is shown in Fig. 4. It is important to note that this representation of separability into single-particle states only holds if the chosen basis states are themselves separable. We consider exclusively the computational basis here, but in principle any _separable_ basis can be used. _Measurements._ When a single qubit is measured, the state collapses to a classical bit, 0 or 1. Similarly, in a terminal measurement of \(n\) qubits, the system collapses into the classical bit string \(i=i_{n}i_{n-1}\ldots i_{1}\), where \(i\in\{0,1\}^{n}\). The measurement of a subset of qubits is, however, more peculiar. In conventional circle notation, see Fig. 2, one needs to precisely identify the subset of qubits measured by evaluating the corresponding register state; see [17] for more details. In DCN, we expect this procedure to be more intuitive to understand. A partial measurement (see Fig. 5) in this new dimensional arrangement means that all circles along the measured qubit differing from the measured value turn empty. Afterwards, the state simply has to be renormalized.
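This rule can also be checked directly on the amplitude vector. The Python/NumPy sketch below is our own illustration, using the \(\left|q_{2}q_{1}\right\rangle\) ordering of Eq. (1): measuring qubit #1 empties the amplitudes that are inconsistent with the outcome and renormalizes the rest:

```python
import numpy as np

def measure_qubit1(psi, outcome):
    """Measure qubit #1 (the rightmost ket entry) of a two-qubit state
    psi = (a00, a01, a10, a11): amplitudes inconsistent with the outcome
    are set to zero ('turn empty') and the rest is renormalized."""
    psi = np.asarray(psi, dtype=complex).copy()
    keep = np.array([0, 1, 0, 1]) if outcome == 1 else np.array([1, 0, 1, 0])
    prob = float(np.sum(np.abs(psi[keep == 1]) ** 2))  # sum of inner-circle areas
    psi[keep == 0] = 0.0
    return prob, psi / np.sqrt(prob)

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
prob, post = measure_qubit1(bell, 0)
print(prob)   # 0.5
print(post)   # [1, 0, 0, 0]: the Bell state collapses to |00>
```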
Furthermore, the probabilities of measuring 0 or 1 are given by the sum of the areas of the inner circles of the basis states corresponding to that value. _Unitary Operations._ Examples of unitary operations in single-qubit systems are shown in Fig. 13 in Appendix A. Even having understood these, to generalize from single-qubit to multi-qubit systems in standard circle notation one still needs to memorize not only the effects of single-qubit operations but all possible actions of single-qubit gates on all possible qubits. We show here how the dimensional arrangement in DCN eliminates this drawback. Single-qubit gates need only be applied along the axis of the qubit considered. Thus, the visualization of single-qubit operations within two-qubit systems is transferable from the one-qubit case; importantly, this still holds for larger qubit systems, as we show in the following sections. A comparison of DCN with the standard circle notation is shown in Fig. 6 for the Pauli-\(X_{1}\)- and \(X_{2}\)-gates. Note that local unitary operations leave the ratio characterization of separable states intact, i.e., we cannot entangle a non-entangled system locally and vice versa, in agreement with the no-communication theorem [34]. Two-qubit operations also work geometrically in DCN and, again, avoid the necessity of memorizing multiple operations of, e.g., controlled gates where the target and control qubits are swapped. We show this for two gates that are fundamental to quantum algorithms: the CNOT-gate and the SWAP-gate. The CNOT-gate applies a NOT (X)-gate to the target qubit if the control qubit has value 1. In DCN, this has a geometrical explanation: the CNOT-gate swaps all states where the control qubit is 1 along the axis of the target qubit, as shown in Fig. 7. The SWAP-gate exchanges two qubits in the system, which is equivalent to swapping the two qubit axes. This gate can be decomposed into three CNOT gates, which is relevant in practice because existing quantum computer hardware can often only make use of CNOT gates for qubit interactions. Fig. 8 shows how DCN visualizes this decomposition geometrically. In Appendix D we provide additional DCN examples for \(\text{CNOT}_{12}=(\text{H}_{2}\otimes\text{H}_{1})\text{CNOT}_{21}(\text{H}_{2}\otimes\text{H}_{1})\) as an example of a phase kickback swapping the role of target and control qubit, see Fig. 14. We also show the Deutsch algorithm, which is often considered an example of quantum parallelism and an (albeit non-practical) use case of phase kickback, see Fig. 15. The representation of the Deutsch algorithm in DCN shows that although a CNOT-gate is present, no entanglement has been created, and therefore the algorithm could, in principle, be realised classically, as has been shown in classical optical systems [35].
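The gate identities used above are easy to verify at the level of matrices. The following Python/NumPy sketch is our own illustration, with all matrices written in the \(\left|q_{2}q_{1}\right\rangle\) basis ordering of Eq. (1); it checks the three-CNOT decomposition of the SWAP-gate and the Hadamard conjugation of Fig. 14 that exchanges control and target:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# CNOT_12 (control #1, target #2) swaps |01> <-> |11>; CNOT_21 (control #2,
# target #1) swaps |10> <-> |11>; SWAP exchanges |01> <-> |10>.
CNOT_12 = np.array([[1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]])
CNOT_21 = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])

print(np.array_equal(CNOT_12 @ CNOT_21 @ CNOT_12, SWAP))              # three-CNOT decomposition
print(np.allclose(np.kron(H, H) @ CNOT_21 @ np.kron(H, H), CNOT_12))  # phase-kickback identity
```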
## V Dimensional circle notation in three-qubit systems We now shift from two-qubit systems to three-qubit systems and explore the advantages of DCN with respect to standard circle notation. Similarly to the transfer from one-qubit systems to two-qubit systems, DCN operations in three-qubit systems are transferable from the one- or two-qubit cases. Still, the additional qubit leads to a few key differences that we will explain in the following. In addition, we show that DCN is a natural representation for quantum teleportation, which is an algorithm combining many fundamental concepts of QIST like entanglement, unitary operations, and measurement in one protocol. _(Partial) Separability and Entanglement._ To distinguish separable states from entangled ones, we apply a procedure similar to the two-qubit case to determine whether a three-qubit system is separable. The two key differences are: 1. In order to compare the ratios of coefficients, we look for symmetry _planes_ instead of axes. This way, we compare the ratios of the top coefficients with the bottom coefficients, left with right or front with back. This is shown in Fig. 9. 2. We have to (and can) differentiate between partial and full separability and compare along two planes. If the ratios are the same along only one plane, we have an entangled two-qubit system that the third qubit, represented by the axis perpendicular to this symmetry plane, is independent of (Fig. 9 is an example of such a state). If and only if the ratios are the same along two planes, they are also the same along the third plane and we have a fully separable system. This is also stated in more detail in Appendices B and C for the general case of \(n\)-qubit systems, formulated for the purpose of visualization in DCN [36; 37]. _Quantum Teleportation._ Quantum teleportation has been at the heart of quantum technologies for many years, allowing the transfer of quantum information between two parties over arbitrary distances when an EPR pair is shared between them. It has multiple applications in quantum communication [38] and quantum computation [39; 40] and is therefore an essential part of quantum information processing [41]. Because it incorporates many fundamental concepts of QIST, quantum teleportation is a suitable example of how DCN could enhance understanding of quantum algorithms in general. Quantum teleportation works as follows: A pair of entangled qubits \(\#2\) and \(\#3\) in the state \(\left|\phi^{+}\right\rangle_{32}=1/\sqrt{2}(\left|00\right\rangle+\left|11\right\rangle)\) is prepared. Qubit \(\#3\) is sent to Bob and qubit \(\#2\) to Alice. Alice also has qubit \(\#1\) in the state \(\left|\psi\right\rangle_{1}\), which she does not necessarily need to know and that she wants to teleport to Bob. \(\left|\phi^{+}\right\rangle_{32},\left|\psi\right\rangle_{1}\) and the product state \(\left|\psi\right\rangle=\left|\phi^{+}\right\rangle_{32}\otimes\left|\psi\right\rangle_{1}\) are shown in Fig. 10 in DCN. During quantum teleportation, the information of qubit #1 is transferred to qubit #3. Fig. 11 shows that, in DCN, this has geometric meaning: Because of the equivalence of an axis with a qubit, transferring information from one qubit to another is the same as transferring information from one _axis_ to another. This can be done using the unitary operations \(\text{CNOT}_{12}\) and \(H_{1}\). These operations only act on qubits #1 and #2, i.e., along the axes of qubits #1 and #2. In practice, this means that Alice does not need physical access to qubit #3. This transfer of information is only possible due to the entanglement of qubit #2 and qubit #3. To achieve her goal, Alice first applies a CNOT-gate with qubit #1 as control and qubit #2 as target. She then applies a Hadamard-gate to qubit #1. This is shown in Fig. 11. When Alice now measures qubit #1 and qubit #2, the four possible measurement outcomes 00, 01, 10 and 11 lie on the 2D plane spanned by qubit #1 and qubit #2. The resulting state of qubit #3 depends on the measurement result. Alice sends the result to Bob, who applies an \(X\) and/or a \(Z\)-gate if needed so that his qubit #3 is in the state that qubit #1 previously was in.
## VI Conclusions The standard circle notation is already a useful tool for introductory quantum computing courses, as the visualization lowers the barrier to entry into a mathematically challenging field. This is especially needed due to the interdisciplinarity of the field and the diverse academic backgrounds of learners [42]. In this paper, we showed that DCN has several advantages over standard circle notation on a conceptual level: DCN visualizes separability through the ratio characterization, and it could make the effects of measurements and unitary operations in two- and three-qubit systems more intuitive through its geometric depiction of single qubits as parts of these systems. It is important to consider the conceptual limitations of DCN. First of all, systems larger than six to seven qubits will be difficult to visualize due to the exponential scaling of the number of basis states, although one could argue that this is a drawback of any explicit visualization. An important limitation of DCN is that it cannot completely replace mathematics, for two reasons. Firstly, exact numerical amplitudes and phases are not visible, which, for example, means that many separable states can only _approximately_ be identified as such. Secondly, DCN cannot display variables and is restricted to specific examples. However, specific examples are often enough and even needed to understand the general case by abstraction. Lastly, the theory-based educational foundation of DCN lies in Ainsworth's framework of multiple external representations and, more specifically, in the relation and extension of currently used representations to construct a deeper understanding of QIST basics, as discussed in Sec. II, while complementing the mathematical notation. These theoretical functions of DCN will have to be proven in future systematic empirical education research. We conclude that DCN can find immediate educational use in introductory quantum computing and quantum technology courses, as well as in contexts beyond education, to visualize the entanglement properties of and gate operations in multi-qubit systems, complementing the mathematical formalism. It provides a new perspective on entanglement and the geometry of unitary operations in multi-qubit systems and, by doing so, could enhance understanding of quantum algorithms in general. Figure 11: The central part of the quantum teleportation algorithm. Qubit #1 starts in the arbitrary state \(\left|\psi\right\rangle_{1}=\sqrt{2/3}\left|0\right\rangle+1/\sqrt{3}e^{-i\pi/4}\left|1\right\rangle\). Qubits #2 and #3 start in the Bell state \(\left|\phi^{+}\right\rangle_{32}=1/\sqrt{2}(\left|00\right\rangle+\left|11\right\rangle)\). The product state is constructed as shown in Fig. 10. The information that is initially stored in qubit #1 (green), which is independent of the other two qubits, is transferred to qubit #3 using only unitary operations on qubits #1 and #2, i.e. operations only along the axes of qubits #1 and #2, in two steps. Step 1: Swap states on the right hand side (where qubit #1 is 1) along the axis of qubit #2 using a CNOT-gate with control qubit #1 and target qubit #2. Step 2: Split states along the axis of qubit #1 using a Hadamard-gate on qubit #1. The corresponding quantum circuit is displayed in the top left. Figure 12: The last steps of the quantum teleportation process: Alice measures and sends the information to Bob, who then applies single-qubit gates according to the measurement result.
The corresponding quantum circuit is displayed in the top left. The system starts in the fully entangled state \(2\left|\psi\right\rangle=\left(\sqrt{2/3}\left|0\right\rangle+1/\sqrt{3}e^{-i\pi/4}\left|1\right\rangle\right)\left|00\right\rangle+\left(\sqrt{2/3}\left|0\right\rangle+1/\sqrt{3}e^{3i\pi/4}\left|1\right\rangle\right)\left|01\right\rangle+\left(1/\sqrt{3}e^{-i\pi/4}\left|0\right\rangle+\sqrt{2/3}\left|1\right\rangle\right)\left|10\right\rangle+\left(1/\sqrt{3}e^{3i\pi/4}\left|0\right\rangle+\sqrt{2/3}\left|1\right\rangle\right)\left|11\right\rangle\) depicted in Fig. 11. Alice measures qubits #1 and #2, which is shown in DCN. a) The measurement of qubit #1; b) The measurement of qubit #2. c) The combined measurement of qubits #1 and #2. Because the sum of the areas of the inner circles is the same for all of the four possibilities, the chance of measuring any of the four values is 25%. d) The four possible states of qubit #3 depending on the measurement outcome. Bob has to apply an \(X\)- and/or a \(Z\)-gate such that qubit #3 is in the previous state of qubit #1. ## VII Outlook Following the discussed limitations, we are developing an interactive web tool which makes it possible for everyone to visualize quantum operations in DCN. The repositories for this project can be found here: [https://github.com/QuanTUK/](https://github.com/QuanTUK/), see also Appendix E, and the website can be accessed via [https://dcn.physik.rptu.de/](https://dcn.physik.rptu.de/). Furthermore, we show in Appendix F that we can visualize quantum algorithms of up to at least five qubits, demonstrated there for a four-qubit error detection and a five-qubit error correction algorithm. For this, we "modularize" DCN, arranging qubit systems in a variety of different ways to put the focus on specific entanglement properties and/or the geometry of unitary operations. By doing so, we aim to enhance understanding of complex multi-qubit algorithms. We can also represent density matrices and partial traces of density matrices in DCN, as shown in Appendix G. Here, the ratio characterization of separability applies similarly. This visualization could serve the purpose of making the transition from Dirac ket notation and DCN to the density matrix formalism more intuitive. Another possible extension is the visualization of qudit systems (qudits can be in \(d\) possible states instead of only two). Gates and algorithms in qudit systems are described in [43]. Although qudits are not in the general focus of quantum computing at the moment, recent advancements [44; 45] suggest that they could become relevant at some point. In this context, Theorem 3 in [36] can be applied similarly to reveal entanglement properties of such systems. As pointed out above, it still has to be studied whether DCN fosters learning, and it needs to be validated as a useful educational tool for conveying the basics of quantum computing. As discussed in Sec. II, the effectiveness of DCN likely depends on learner prerequisites. This should be considered in future empirical studies. Even beyond educational contexts, DCN can possibly be used to enhance understanding of many different quantum algorithms in order to shed more light on this complex field. For this, the flexibility of the representation that is shown in Appendix F is a particular strength. ## Acknowledgements We thank Stefan Heusler from the WWU Münster for valuable general discussions and specific input regarding the basis dependency of qubit models. M. K-E., P. L. and A. W. acknowledge support by the Quantum Initiative Rhineland-Palatinate (QUIP). J.B., E.R., A.A., M.
K-E., P.L. and A.W. acknowledge support by the project QuanTUK at the RPTU in Kaiserslautern, supported by the Federal Ministry of Education and Research (FKZ13N15995). N.L., L.K. and P.L. acknowledge support by the project KI4TUK at the RPTU in Kaiserslautern, supported by the Federal Ministry of Education and Research (BMBF) under grant number 16DHBKI058. A.D., S.K. and J.K acknowledge support by the project Quantum Lifelong Learning (QL3) at the LMU Munich, supported by the Federal Ministry of Education and Research (BMBF) under grant number 13N16024, and the project DigiQ (EU), supported by the European Union's Digital Europe programme under grant number 101084035. ## Appendix A Single-qubit operations in circle notation To understand single-qubit operations in multi-qubit systems in DCN, it is enough to understand these operations in single-qubit systems, which is one of the main advantages of DCN in comparison to the standard circle notation. Fig. 13 shows some important single-qubit operations in single-qubit systems in circle notation. ## Appendix B Separating single Qubits from \(n\)-Qubit States The following ratio characterization [36; 37] of separability in \(n\)-qubit systems is used throughout this work to visualize entanglement. It is formulated here for the purpose of showing separability in DCN. Figure 13: Single qubit operations in circle notation. The \(X\)-gate flips the coefficients of two states. The \(Z\)-gate adds a \(+\pi\) phase to the \(|1\rangle\)-state, flipping the sign of the coefficient. The Hadamard-gate splits a state into two, flipping the phase if starting at \(|1\rangle\). All these gates are self-adjoint, i.e. their own inverse. **Theorem 1**.: _Let \(\alpha,\beta,c_{i}\in\mathbb{C}\). An \(n\)-qubit state \(\ket{\psi}=\sum_{i=0}^{2^{n}-1}c_{i}\ket{i}\) is 2-\(2^{n-1}\) separable into \(\ket{\psi}=(\alpha\ket{0}+\beta\ket{1})\otimes\sum_{i=0}^{2^{n-1}-1}c^{\prime}_{i}\ket{i}\) if and only if either \(c_{2^{n-1}+i}=0\) for all \(i\in\{0,\ldots,2^{n-1}-1\}\) or there exists a ratio \(r\in\mathbb{C}\) such that \(c_{i}=rc_{2^{n-1}+i}\) for all \(i\in\{0,\ldots,2^{n-1}-1\}\)._ Proof.: "\(\Rightarrow\)": Let first \(\ket{\psi}=\sum_{i=0}^{2^{n}-1}c_{i}\ket{i}\) be separable into \(\ket{\psi}=(\alpha\ket{0}+\beta\ket{1})\otimes\sum_{i=0}^{2^{n-1}-1}c^{\prime}_{i}\ket{i}\). Then, \(\ket{\psi}=\sum_{i=0}^{2^{n-1}-1}\alpha c^{\prime}_{i}\ket{0}\ket{i}+\sum_{i=0}^{2^{n-1}-1}\beta c^{\prime}_{i}\ket{1}\ket{i}=\sum_{i=0}^{2^{n-1}-1}\alpha c^{\prime}_{i}\ket{i}+\sum_{i=2^{n-1}}^{2^{n}-1}\beta c^{\prime}_{i-2^{n-1}}\ket{i}\). If \(\beta=0\), then \(c_{2^{n-1}+i}=\beta c^{\prime}_{i}=0\) for all \(i\in\{0,\ldots,2^{n-1}-1\}\). If \(\beta\neq 0\), then with \(r=\alpha/\beta\): \(rc_{2^{n-1}+i}=c_{i}\), again, for all \(i\in\{0,\ldots,2^{n-1}-1\}\). "\(\Leftarrow\)": Let first \(c_{2^{n-1}+i}=0\) for all \(i\in\{0,\ldots,2^{n-1}-1\}\). Then, \(\ket{\psi}\) is separable into \(\ket{\psi}=\ket{0}\otimes\sum_{i=0}^{2^{n-1}-1}c_{i}\ket{i}\). Let otherwise \(c_{i}=rc_{2^{n-1}+i}\) for all \(i\in\{0,\ldots,2^{n-1}-1\}\).
Then, \(\ket{\psi}=\sum_{i=0}^{2^{n-1}-1}rc_{2^{n-1}+i}\ket{i}+\sum_{i=2^{n-1}}^{2^{n}-1}c_{i}\ket{i}=r\ket{0}\otimes\sum_{i=0}^{2^{n-1}-1}c_{2^{n-1}+i}\ket{i}+\ket{1}\otimes\sum_{i=0}^{2^{n-1}-1}c_{2^{n-1}+i}\ket{i}=(r\ket{0}+\ket{1})\otimes\sum_{i=0}^{2^{n-1}-1}c_{2^{n-1}+i}\ket{i}\), which is of the claimed form \((\alpha\ket{0}+\beta\ket{1})\otimes\sum_{i=0}^{2^{n-1}-1}c^{\prime}_{i}\ket{i}\) with \(r=\alpha/\beta\). ## Appendix C Full Separability of \(n\)-Qubit States Theorem 1 can be used in fully separable systems for every qubit [37]. Again, here we formulate this for the purpose of showing full separability in DCN. **Theorem 2**.: _Let \(\alpha_{i},\beta_{i},c_{i}\in\mathbb{C}\). An \(n\)-qubit state \(\ket{\psi}=\sum_{i\in\{0,1\}^{n}}c_{i}\ket{i}\) is fully separable into \(\ket{\psi}=(\alpha_{n}\ket{0}+\beta_{n}\ket{1})\otimes\ldots\otimes(\alpha_{1}\ket{0}+\beta_{1}\ket{1})\) if and only if for all \(j\in\{1,\ldots,n\}\):_ _for all pairs of bit strings \(i,i^{\prime}\in\{0,1\}^{n}\) which only differ at position \(j\) such that \(i_{j}=0\) and \(i^{\prime}_{j}=1\) and \(i_{k}=i^{\prime}_{k}\) for all \(k\neq j\):_ _either \(c_{i^{\prime}}=0\) (for all such \(i^{\prime}\)) or there exists a ratio \(r_{j}\in\mathbb{C}\) such that \(c_{i}=r_{j}c_{i^{\prime}}\)._ Proof.: In the two-qubit case, Theorem 2 is the same as Theorem 1. Assume that Theorem 2 is correct for \(n-1\) qubits and let \(\ket{\psi}=\sum_{i\in\{0,1\}^{n}}c_{i}\ket{i}\). Let, without loss of generality, \(j=n\). Then, according to Theorem 1, \(\ket{\psi}\) is separable into \(\ket{\psi}=(\alpha_{n}\ket{0}+\beta_{n}\ket{1})\otimes\sum_{i=0}^{2^{n-1}-1}c^{\prime}_{i}\ket{i}=(\alpha_{n}\ket{0}+\beta_{n}\ket{1})\otimes\ket{\psi^{\prime}}\) if and only if for all \(i,i^{\prime}\in\{0,1\}^{n}\) with \(i_{n}=0\) and \(i^{\prime}_{n}=1\) and \(i_{k}=i^{\prime}_{k}\) for all \(k\neq n\): either \(c_{i^{\prime}}=0\) for all such \(i^{\prime}\), or there exists a ratio \(r_{n}\in\mathbb{C}\) such that \(c_{i}=r_{n}c_{i^{\prime}}\). Then, we can apply Theorem 2 to the \((n-1)\)-qubit state \(\ket{\psi^{\prime}}\). ## Appendix D Multi-Qubit Gates and Algorithms in two-Qubit Systems Phase kickback is an inherently quantum concept and an essential part of quantum computing. The main idea is that under a local basis transformation, operations with a control and a target qubit are inverted such that the roles of control and target qubit are swapped. This happens because the control qubit inherits the phase of the target qubit while the target qubit is unchanged. This has applications in, e.g., so-called oracle functions that are part of many quantum algorithms: the controlled gates are applied to a set of auxiliary qubits in the Hadamard basis, such that the logical qubits are changed [46]. Fig. 14 shows the most basic example of a phase kickback, and Fig. 15 shows a use case of this: the Deutsch algorithm. ## Appendix E Interactive (web-based) DCN-tool We provide a Python package to visualize DCN, which can be accessed at github.com/QuanTUK/QC-Education-Package. Using this package we built a set of hands-on examples for exploring DCN. For easy and fast access we also provide an interactive web tool which builds on this Python package. This web tool can be accessed via [https://dcn.physik.rptu.de/](https://dcn.physik.rptu.de/); the source files are provided at github.com/QuanTUK/DCN_Webtool. We plan to further extend and improve the package and web tool in the near future, e.g. with visualizations for more than three qubits as discussed in Appendix F.
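To illustrate the kind of check such a package could perform, the ratio characterization of Theorems 1 and 2 can be implemented numerically in a few lines. The sketch below is our own illustration (it is not the actual API of the QC-Education-Package) and applies the per-qubit form of the criterion that underlies Theorem 2, with amplitudes ordered so that qubit 1 is the least-significant bit:

```python
import numpy as np

def qubit_separable(c, j, tol=1e-9):
    """Ratio condition for qubit j (1-indexed) of an n-qubit state c:
    pair up amplitudes that differ only in bit j and test whether all
    pairs share one common complex ratio (or one side is all zero)."""
    mask = 1 << (j - 1)
    top = np.array([c[i] for i in range(len(c)) if not i & mask])   # bit j = 0
    bot = np.array([c[i ^ mask] for i in range(len(c)) if not i & mask])
    if np.allclose(bot, 0, atol=tol) or np.allclose(top, 0, atol=tol):
        return True
    # One common ratio for every pair <=> the 2 x 2^(n-1) matrix has rank 1.
    return np.linalg.matrix_rank(np.vstack([top, bot]), tol=tol) == 1

def fully_separable(c):
    """Theorem 2: fully separable iff the ratio condition holds for every qubit."""
    n = int(np.log2(len(c)))
    return all(qubit_separable(c, j) for j in range(1, n + 1))

ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
prod = np.kron([1, 0], np.kron([1, 1], [1, 1j])) / 2
print(qubit_separable(ghz, 1), fully_separable(prod))   # False True
```

As noted in the conclusions, exact amplitudes are needed for such a check (up to numerical tolerance), which is precisely where code of this kind complements the visualization.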
Figure 14: Basic phase kickback, i.e. the relation \(\text{CNOT}_{12}=(\text{H}_{2}\otimes\text{H}_{1})\text{CNOT}_{21}(\text{H}_{2}\otimes\text{H}_{1})\), shown with the initial state \(|\psi\rangle=1/\sqrt{2}(|00\rangle-|11\rangle)\). The change of basis into the Hadamard basis by applying Hadamard gates on all qubits makes the \(\text{CNOT}_{21}\)-gate work like a \(\text{CNOT}_{12}\)-gate. Figure 15: The Deutsch algorithm to determine whether a function \(f:\{0,1\}\rightarrow\{0,1\}\) is constant (\(f=0\) or \(f=1\)) or balanced (\(f(x)=x\) or \(f(x)=x\oplus 1\), where \(1\oplus 1=0\)). The qubits are initialized to the state \(|10\rangle\). After application of Hadamard-gates on all qubits, the system is in equal superposition with a phase shift on qubit #2. Then the oracle \(U_{f}\) defined by \(U_{f}:|x\rangle\,|y\rangle\rightarrow|x\rangle\,|f(x)\oplus y\rangle\) is applied. The two cases where \(f\) is constant and the two cases where \(f\) is balanced only differ by a global phase, respectively. Therefore, only the cases \(f=0\) and \(f(x)=x\) are shown. After application of a Hadamard-gate on qubit #1, one can see that the operation \(U_{f}\) actually acted on qubit #1 due to phase kickback. When measuring qubit #1, the result will be 0 when \(f\) was constant and 1 when \(f\) was balanced. ## Appendix F Modular DCN in four- and five-qubit systems In this section, we give examples of how to represent qubit ensembles of four and five qubits in various ways. There are multiple ways to represent four-qubit systems (systems with 16 basis states) in three-dimensional space (and, on paper, then in two dimensions). One natural possibility is a projection of a four-dimensional hypercube into three dimensions. This retains the geometric depiction of entanglement that is presented in this paper. For the ratio characterization of separability, eight pairs of coefficients have to be compared for each qubit in order to check for separability of that qubit from the system. In quantum settings, decoherence is a common factor to consider. Quantum error correction can counteract the effects of decoherence. Classical error correction is often thought of in terms of hypercubes [47; 48; 49]. In fact, similar ideas exist for quantum error correction, as seen in hypercubes or hypercube-like lattices [50; 51]. Therefore, it makes sense to apply DCN to quantum error detection and correction. Here, we show the four-qubit error detection code demonstrated experimentally in [52] in a hypercube in Fig. 16. Note that for a code to also _correct_ the detected error, it needs five qubits to function [53]. Another possibility is to represent the system using a mixture of circle notation and DCN that we call modular DCN. We can have two or more qubits on every axis and assign only specific qubits to their own axis. We can then check, again via the ratio characterization, whether the qubits that have their own axis are separable from the system. The five-qubit error correction code that is shown in, e.g., [54] is visualized in Fig. 17 (simple three-qubit encoding process and three possible single-qubit flip errors), Fig. 18 (transfer of the syndrome and error correction in modular 2x2x8 DCN) and Fig. 19 (the last step of error correction in a four-cube system).
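The syndrome logic behind Figs. 17 and 18 can also be checked with a short simulation. The following numpy sketch is our own illustration (it replaces the coherent Toffoli-based correction of Fig. 19 with a classical readout of the ancillas, and all helper names are ours): it encodes \(\ket{\psi}_{1}\) into \(a\ket{000}+b\ket{111}\), applies each possible single bit flip, and transfers the syndrome onto ancillas #4 and #5 exactly as in Fig. 18:

```python
import numpy as np

def cnot(state, control, target, n):
    new = state.copy()
    for i in range(2 ** n):
        if (i >> (control - 1)) & 1:
            new[i ^ (1 << (target - 1))] = state[i]
    return new

def x(state, q, n):
    """Pauli X on qubit q: flip bit q-1 of every basis index."""
    return np.array([state[i ^ (1 << (q - 1))] for i in range(2 ** n)])

a, b = np.sqrt(2 / 3), np.exp(-1j * np.pi / 4) / np.sqrt(3)
psi = np.zeros(8, complex); psi[0] = a; psi[1] = b           # |00>|psi>_1
psi = cnot(psi, 1, 2, 3); psi = cnot(psi, 1, 3, 3)           # a|000> + b|111>

for err in (None, 1, 2, 3):                                  # possible bit flips
    s = x(psi, err, 3) if err else psi.copy()
    s = np.kron([1, 0, 0, 0], s)                             # ancillas #4, #5 in |00>
    for c, t in [(2, 4), (3, 4), (3, 5), (1, 5)]:            # transfer syndrome
        s = cnot(s, c, t, 5)
    # The ancilla bits are definite after the transfer, so read them from
    # any nonzero amplitude (classical readout instead of coherent Toffolis).
    i = int(np.argmax(np.abs(s)))
    m4, m5 = (i >> 3) & 1, (i >> 4) & 1
    fix = {(0, 0): None, (1, 0): 2, (0, 1): 1, (1, 1): 3}[(m4, m5)]
    print(f"flip on qubit {err}: syndrome (q4,q5)=({m4},{m5}) -> correct {fix}")
```

As the printout confirms, an \(X_{1}\)-error lands on ancilla #5 only, an \(X_{2}\)-error on ancilla #4 only, and an \(X_{3}\)-error on both, matching the caption of Fig. 18.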
DCN is flexible: we can arrange qubit ensembles in modular DCN in a variety of different ways, either to put the focus on specific multi-partite entanglement properties or such that the visualized unitary operations remain geometrically intuitive, with the aim of enhancing understanding of complex multi-qubit algorithms. Figure 16: Four-qubit quantum error detection code as demonstrated experimentally in [52], here in the case of a Hadamard error. The system is initialized to the state \(\ket{\psi}=1/\sqrt{2}(\ket{0}+\ket{1})\otimes\ket{0}\otimes 1/\sqrt{2}(\ket{00}+\ket{11})\), where qubits #1 and #2 are entangled and qubit #4 is brought into the Hadamard basis \(\ket{+}=1/\sqrt{2}(\ket{0}+\ket{1})\) in order to detect a phase flip. First, an error \(\epsilon_{1}\) is applied, in this case a Hadamard error \(H_{1}\) corresponding to half a bit flip and half a phase flip on qubit #1. Then, the bit flip error is encoded onto qubit #3 via the CNOT\({}_{13}\)CNOT\({}_{23}\) operation. Afterwards, the operation CNOT\({}_{41}\)CNOT\({}_{42}\) is applied, which can be seen as a 180\({}^{\circ}\) rotation, in the plane spanned by qubits #1 and #2, of the cube corresponding to qubit #4 being in the state 1. In the end, qubit #4 will be found in the state 1 if a phase flip has occurred, while qubit #3 will be found in the state 1 when a bit flip has occurred. In this case of a Hadamard error, the error detection algorithm will always find that there was some error, as qubits #3 and #4 are anti-correlated, as can be seen in DCN. Figure 17: The initial step of error correcting the (arbitrary) state \(\ket{\psi}_{1}=\sqrt{2}/\sqrt{3}\ket{0}+1/\sqrt{3}e^{-i\pi/4}\ket{1}\) using four additional qubits. First, qubit #1 is entangled with qubits #2 and #3 in a GHZ-like state \(\ket{\psi}=\sqrt{2}/\sqrt{3}\ket{000}+1/\sqrt{3}e^{-i\pi/4}\ket{111}\) with two CNOT gates. Then, a bit flip error is applied. Here, three possible bit flip errors are shown (lilac = bit flip error on qubit #1, orange = bit flip error on qubit #2 and green = bit flip error on qubit #3) as well as the case of no bit flip errors in gray-blue. We assume that only one bit flip error occurs at a time. Figure 18: The "transfer syndrome" step of error correcting the state \(\left|\psi\right\rangle_{1}=\sqrt{2}/\sqrt{3}\left|0\right\rangle+1/\sqrt{3}e^{-i\pi/4}\left|1\right\rangle\) using four additional qubits. We start in the final state \(\left|\psi\right\rangle\) of Fig. 17, flatten out the cube to standard circle notation and introduce the ancilla qubits #4 and #5, arranging the system in modular DCN. The CNOT\({}_{24}\)- and CNOT\({}_{34}\)-gates encode an \(X_{2}\)-error onto ancilla qubit #4 and the CNOT\({}_{35}\)- and CNOT\({}_{15}\)-gates encode an \(X_{1}\)-error onto ancilla qubit #5, while an interesting and desirable byproduct of these operations is that an \(X_{3}\)-error is encoded on both ancilla qubits. Figure 19: The last step of error correcting the state \(\left|\psi\right\rangle_{1}=\sqrt{2}/\sqrt{3}\left|0\right\rangle+1/\sqrt{3}e^{-i\pi/4}\left|1\right\rangle\). We start by transforming the depiction of the last state \(\left|\psi\right\rangle\) in Fig. 18 to a four-cube system where the cubes are arranged in space depending on ancilla qubits #4 and #5. Here, we can see that the three different kinds of bit flip errors correspond to three different configurations of ancilla qubits #4 and #5. Now, CNOT gates are applied to correct these errors.
The CNOT\({}_{51}\)-gate corrects the \(X_{1}\)-error, the CNOT\({}_{42}\)-gate corrects the \(X_{2}\)-error and the CCNOT\({}_{453}\)-gate corrects the \(X_{3}\)-error. Lastly, the CCNOT\({}_{452}\)- and CCNOT\({}_{451}\)-gates are needed to counteract the unwanted effects of the first two CNOT-gates in the case of an \(X_{3}\)-error. Now we can see that in all three cases, qubit #1 is in the desired state \(\left|\psi\right\rangle_{1}\). As can be seen, qubits #4 and #5 are now disentangled from the rest of the system and can be measured to see whether a bit flip error has occurred and, if so, which one. ## Appendix G Representing partial traces of density matrices of two-qubit systems Density matrices are used to describe general quantum states, including mixed states, and can be used to calculate probabilities of measurement outcomes of observables using the Born rule [55]. Being introduced to the density matrix formalism can come with challenges due to the outer product formulation and the general abstractness of the objects involved; a general introduction to this formalism can be found in, e.g., [56], and visualization could ease the entry. In the following we provide a way of representing density matrices of single qubits in DCN and show that the ratio characterization of separability can be seen, i.e. whether the single-qubit state is pure or mixed/part of a larger entangled state. This representation could be incorporated in the DCN web tool, allowing users to visually trace out single qubits from the system. For a general two-qubit state \[\ket{\psi}=\alpha_{00}\ket{00}+\alpha_{01}\ket{01}+\alpha_{10}\ket{10}+\alpha_{11}\ket{11}\in\mathcal{H}_{2}\otimes\mathcal{H}_{1} \tag{10}\] we can write the density matrix in the computational basis as \[\rho=\ket{\psi}\bra{\psi}=\begin{pmatrix}|\alpha_{00}|^{2}&\alpha_{00}\alpha_{01}^{*}&\alpha_{00}\alpha_{10}^{*}&\alpha_{00}\alpha_{11}^{*}\\ \alpha_{01}\alpha_{00}^{*}&|\alpha_{01}|^{2}&\alpha_{01}\alpha_{10}^{*}&\alpha_{01}\alpha_{11}^{*}\\ \alpha_{10}\alpha_{00}^{*}&\alpha_{10}\alpha_{01}^{*}&|\alpha_{10}|^{2}&\alpha_{10}\alpha_{11}^{*}\\ \alpha_{11}\alpha_{00}^{*}&\alpha_{11}\alpha_{01}^{*}&\alpha_{11}\alpha_{10}^{*}&|\alpha_{11}|^{2}\end{pmatrix}, \tag{11}\] and tracing out qubit #1 to find the density matrix of qubit #2, we find \[\rho_{2}=\text{tr}_{1}(\rho)=\begin{pmatrix}|\alpha_{00}|^{2}+|\alpha_{01}|^{2}&\alpha_{00}\alpha_{10}^{*}+\alpha_{01}\alpha_{11}^{*}\\ \alpha_{10}\alpha_{00}^{*}+\alpha_{11}\alpha_{01}^{*}&|\alpha_{10}|^{2}+|\alpha_{11}|^{2}\end{pmatrix}, \tag{12}\] whereas \[\rho_{1}=\text{tr}_{2}(\rho)=\begin{pmatrix}|\alpha_{00}|^{2}+|\alpha_{10}|^{2}&\alpha_{00}\alpha_{01}^{*}+\alpha_{10}\alpha_{11}^{*}\\ \alpha_{01}\alpha_{00}^{*}+\alpha_{11}\alpha_{10}^{*}&|\alpha_{01}|^{2}+|\alpha_{11}|^{2}\end{pmatrix}. \tag{13}\] As stated in Sec. IV, one possible characterization of separability is the following: \(\ket{\psi}\) is separable if and only if (i) \(\alpha_{00}=\alpha_{10}=0\) or (ii) there exists a ratio \(c\in\mathbb{C}\) such that \(\alpha_{01}=c\alpha_{00}\) and \(\alpha_{11}=c\alpha_{10}\). In the case of (i), \[\rho_{1}=\begin{pmatrix}0&0\\ 0&1\end{pmatrix}. \tag{14}\]
In the case of (ii), \[\rho_{1}=\begin{pmatrix}|\alpha_{00}|^{2}+|\alpha_{10}|^{2}&c^{*}(|\alpha_{00}|^{2}+|\alpha_{10}|^{2})\\ c(|\alpha_{00}|^{2}+|\alpha_{10}|^{2})&|c|^{2}(|\alpha_{00}|^{2}+|\alpha_{10}|^{2})\end{pmatrix}=\begin{pmatrix}p_{1}(0)&c^{*}p_{1}(0)\\ cp_{1}(0)&|c|^{2}p_{1}(0)\end{pmatrix}, \tag{15}\] where \(p_{1}(0)\) is the probability of obtaining 0 when measuring qubit #1. This way, the ratio characterization of separability can be visually represented, as can be seen in Fig. 20. When tracing out qubit #1 to find \(\rho_{2}\), we find an analogous and equivalent characterization of separability: (i) \(\alpha_{00}=\alpha_{01}=0\) or (ii) there exists a ratio \(c^{\prime}\in\mathbb{C}\) such that \(\alpha_{10}=c^{\prime}\alpha_{00}\) and \(\alpha_{11}=c^{\prime}\alpha_{01}\). Then we find, analogously to above, in the case of (i) \[\rho_{2}=\begin{pmatrix}0&0\\ 0&1\end{pmatrix} \tag{16}\] and otherwise \[\rho_{2}=\begin{pmatrix}p_{2}(0)&c^{\prime *}p_{2}(0)\\ c^{\prime}p_{2}(0)&|c^{\prime}|^{2}p_{2}(0)\end{pmatrix}. \tag{17}\]
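The reduced density matrices of Eqs. (12) and (13), together with the pure-versus-mixed distinction discussed above, can be computed directly. The following numpy sketch is our own illustration (amplitudes ordered as \((\alpha_{00},\alpha_{01},\alpha_{10},\alpha_{11})\)); it uses the purity \(\text{tr}(\rho^{2})\) as the numerical counterpart of the ratio characterization, since the purity equals 1 exactly when the traced-out qubit is separable:

```python
import numpy as np

def rho_reduced(c, keep):
    """Reduced density matrix of one qubit of a 2-qubit pure state.
    Amplitudes c are ordered |q2 q1> = (c00, c01, c10, c11); keep = 1 or 2."""
    psi = c.reshape(2, 2)                  # axes: (q2, q1)
    if keep == 1:
        return np.einsum('ij,ik->jk', psi, psi.conj())   # trace out qubit #2
    return np.einsum('ij,kj->ik', psi, psi.conj())       # trace out qubit #1

# Separable example: condition (ii) with ratio c = i along the qubit-1 axis.
amps = np.array([1, 1j, 2, 2j], complex); amps /= np.linalg.norm(amps)
r1 = rho_reduced(amps, keep=1)
print(np.round(r1, 3))                         # [[p, c* p], [c p, |c|^2 p]]
print(np.isclose(np.trace(r1 @ r1).real, 1))   # purity 1 <=> qubit separable

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
r1 = rho_reduced(bell, keep=1)
print(np.trace(r1 @ r1).real)                  # 0.5: maximally mixed, entangled
```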
2308.08881
Text-Only Training for Visual Storytelling
Visual storytelling aims to generate a narrative based on a sequence of images, necessitating both vision-language alignment and coherent story generation. Most existing solutions predominantly depend on paired image-text training data, which can be costly to collect and challenging to scale. To address this, we formulate visual storytelling as a visual-conditioned story generation problem and propose a text-only training method that separates the learning of cross-modality alignment and story generation. Our approach specifically leverages the cross-modality pre-trained CLIP model to integrate visual control into a story generator, trained exclusively on text data. Moreover, we devise a training-free visual condition planner that accounts for the temporal structure of the input image sequence while balancing global and local visual content. The distinctive advantage of requiring only text data for training enables our method to learn from external text story data, enhancing the generalization capability of visual storytelling. We conduct extensive experiments on the VIST benchmark, showcasing the effectiveness of our approach in both in-domain and cross-domain settings. Further evaluations on expression diversity and human assessment underscore the superiority of our method in terms of informativeness and robustness.
Yuechen Wang, Wengang Zhou, Zhenbo Lu, Houqiang Li
2023-08-17T09:32:17Z
http://arxiv.org/abs/2308.08881v1
# Text-Only Training for Visual Storytelling ###### Abstract. Visual storytelling aims to generate a narrative based on a sequence of images, necessitating both vision-language alignment and coherent story generation. Most existing solutions predominantly depend on paired image-text training data, which can be costly to collect and challenging to scale. To address this, we formulate visual storytelling as a visual-conditioned story generation problem and propose a text-only training method that separates the learning of cross-modality alignment and story generation. Our approach specifically leverages the cross-modality pre-trained CLIP model to integrate visual control into a story generator, trained exclusively on text data. Moreover, we devise a training-free visual condition planner that accounts for the temporal structure of the input image sequence while balancing global and local visual content. The distinctive advantage of requiring only text data for training enables our method to learn from external text story data, enhancing the generalization capability of visual storytelling. We conduct extensive experiments on the VIST benchmark, showcasing the effectiveness of our approach in both in-domain and cross-domain settings. Further evaluations on expression diversity and human assessment underscore the superiority of our method in terms of informativeness and robustness. Visual Storytelling, Text-Only Training, Story Planning + Footnote †: Corresponding authors: Wengang Zhou and Zhenbo Lu. ## 1. Introduction Visual storytelling approaches incorporating richer structure, such as scene graphs (Beng et al., 2019), have further enriched generated stories with additional details. More recently, the employment of large pre-trained Transformer-based language models has led to considerable improvements in visual storytelling (Beng et al., 2019). Nevertheless, the substantial cost associated with annotating and training on extensive datasets remains a significant bottleneck, limiting the scalability of visual storytelling approaches. On the other hand, the burgeoning capabilities of pre-trained models offer the potential to transfer their knowledge to downstream tasks such as visual storytelling, facilitating more data-efficient learning. To this end, some prior works have combined generative language models (Beng et al., 2019; Chen et al., 2020; Chen et al., 2020) with cross-modality pretrained models (Chen et al., 2020) to explore text-only training for image captioning (Chen et al., 2020; Chen et al., 2020). However, while these cross-modality models trained on paired image-text data successfully align text with individual images, they are limited in their capacity to comprehend the temporal structure of image sequences, an essential component of visual storytelling.
Motivated by the observations discussed above, we propose a novel framework that leverages pretrained generative language models and cross-modality models for data-efficient visual storytelling. We formulate visual storytelling as a visual-conditioned story generation task. As shown in Fig. 2, we first fine-tune a pre-trained language model using only textual data to develop a story generator. Then, we incorporate visual clues during the generation process. Specifically, at each decoding step, we use the pretrained cross-modality model CLIP (Chen et al., 2020) as a visual discriminator to compute a matching score between candidate text and input images. A visual condition planner is then designed to aggregate the matching results of the input images, emphasizing semantics in the corresponding image while retaining information from other images. Finally, the aggregated result is incorporated into the decoding probability distribution to guide the generation of the next token, resulting in a coherent and visually aligned story. To demonstrate the effectiveness of our proposed method, we conduct extensive experiments on the widely-used VIST benchmark (Beng et al., 2019). The results show that our approach achieves state-of-the-art performance on various evaluation metrics, including comparison-based automatic metrics, statistics-based metrics, and human evaluation. Additionally, our method exhibits impressive generalization ability in domain-transfer experiments, suggesting its potential for real-world applications. We summarize the major contributions of this work as follows: * We formulate visual storytelling as a visual-conditioned generation problem and propose a data-efficient framework which is trained solely on text-only data by leveraging the pre-trained CLIP model. * We introduce a visual condition planner which is free of training. The planner aggregates sequential visual inputs to provide local details while maintaining the global theme of the image album, thereby improving the quality of generated stories. * Extensive experiments on the VIST benchmark demonstrate the effectiveness of our proposed method, as evidenced by its superior performance compared to existing methods in both automatic metrics and human evaluations. ## 2. Related Work The main idea of our work is to model visual storytelling as a controlled text generation task and to exploit large pretrained models to reduce the cost of cross-modality training. In this section, we provide a brief review of the related areas. ### Visual Storytelling Visual storytelling was first introduced by Huang et al. (Huang et al., 2019); it involves the use of a sequence of images to convey a narrative, necessitating reasoning over temporal context rather than merely understanding a static moment. Early approaches expanded upon conventional image captioning models by learning contextualized image representations (Huang et al., 2019) and incorporating global visual information (Huang et al., 2019). Additionally, reinforcement learning was employed to learn an implicit reward function through adversarial reward learning, optimizing the policy model to better align with human demonstrations (Huang et al., 2019). Hierarchical architectures (Huang et al., 2019) and hierarchical reinforced training (Huang et al., 2019) have also demonstrated effectiveness in learning high-level semantics.
Given the imaginative nature of storytelling, external knowledge graphs have been integrated to introduce fictional concepts not present in images (Huang et al., 2019; Chen et al., 2020; Chen et al., 2020). To provide richer stories with greater visual detail, Wang et al. (Wang et al., 2019) incorporated scene graph generation, while Li et al. (Li et al., 2020) learned cross-modal rules for mining visual concepts. Braude et al. (Braude et al., 2020) proposed an ordered image attention approach to enhance story coherence through consistent grounding across sequenced images. Furthermore, Transformer-based frameworks have demonstrated capabilities in modeling spatial relationships between objects in images (Huang et al., 2020). In light of the proliferation of large pre-trained models, several studies have focused on leveraging pre-trained models (PTMs) for visual storytelling. Strategies include fine-tuning pre-trained Transformer encoders (Huang et al., 2019; Chen et al., 2020) and jointly tuning pre-trained language generation models with pre-trained image encoders (Beng et al., 2019). While the aforementioned approaches have demonstrated improvements in generated stories by incorporating external models, knowledge, and annotations, they also result in a significant increase in computational cost. In contrast, our proposed method circumvents the challenges associated with cross-modality training and annotation expenses by exclusively focusing on training using a text corpus. ### Controlled Text Generation In natural language generation, incorporating controllable constraints for open-ended text generation is both important and fundamental (Huang et al., 2019). With the advancements in pretraining, recent efforts have primarily concentrated on adapting pre-trained language models (LMs) to various attributes. A straightforward approach involves fine-tuning a pre-trained LM to generate text with specific attributes (Huang et al., 2019; Chen et al., 2020; Chen et al., 2020). Alternatively, it is feasible to design new large LM architectures or retrain large conditioned LMs from scratch (Wang et al., 2019; Wang et al., 2020; Wang et al., 2020). Recently, the exponentially increasing scale and capacity of pre-trained LMs have made it more viable and promising to fix pre-trained parameters and guide generation through post-processing. Dathathri et al. (Dathathri et al., 2020) first proposed this paradigm as Plug-and-Play language models, wherein an attribute discriminator updates LM hidden states through back-propagation for attribute-controlled text generation. To reduce the computational cost associated with classifier-like discriminators ranking generated text, fine-tuned small LMs have been employed as generative discriminators to guide the generation of large pre-trained LMs [33; 34; 35; 36]. Pascual et al. [37] extended the plug-and-play method to keyword constraints and designed a distribution shifting strategy to augment the decoding probability of keywords. Guided decoding methods have demonstrated remarkable flexibility in accommodating various constraint types and hold considerable potential due to their independence from language models. In this work, we model visual storytelling as a visual-conditioned story generation task and propose a visual-linguistic discriminator to guide the generation process.
### Large Pretrained Models **Generative language models.** Taking advantage of the parallelism in the Transformer architecture [38], generative language models have shown a remarkable improvement in their capabilities in the past few years. These models can be broadly classified into two categories based on their network architecture: Decoder-Only models [6; 8] and Encoder-Decoder models [39; 40]. Pretrained on large corpora, these models can effectively transfer to various language generation tasks, such as summarization, question answering, and story generation, with limited or even no supervised data. **Cross-modality pretrained models.** As the foundation of visual-language understanding, the idea to align the two modalities and learn a joint embedding space has been investigated extensively in the past decade [41; 42; 43; 44]. In recent years, large cross-modality aligning models based on Transformers have gained considerable attention [45; 9; 46]. A representative work is CLIP [9], which trains two encoders for image and text inputs using a contrastive loss. With 400 million data pairs for training, CLIP has demonstrated remarkable zero-shot capabilities on multiple downstream tasks. ## 3. Preliminaries A standard generative language model predicts the probability distribution of the next token based on previous inputs, which can be formulated as \(P_{LM}(x_{t}|x_{<t})\). As a result, the probability of a text sequence \(\mathbf{x}=\{x_{1},\dots,x_{T}\}\) can be modeled as follows: \[P_{LM}(\mathbf{x})=\Pi_{t=1}^{T}P_{LM}(x_{t}|x_{<t}). \tag{1}\] In order to incorporate controls during the generation process, a constraint \(c\) can be added to form a conditioned language model. This model generates the probability distribution of the next token based on the history inputs and the control constraint, and can be formulated as: \[P(\mathbf{x}|c)=\Pi_{t=1}^{T}P(x_{t}|x_{<t},c). \tag{2}\] Krause et al. [33] designed a generative discriminator to predict the probability that every candidate text sequence corresponds to the given constraint, which is given as: \[P_{\theta}(c|x_{t},x_{<t})=\frac{P(c)\Pi_{t=1}^{T}P(x_{t}|x_{<t},c)}{\sum_{c^{\prime}\in\{c,\bar{c}\}}P(c^{\prime})\Pi_{t=1}^{T}P(x_{t}|x_{<t},c^{\prime})} \tag{3}\] where \(\theta\) represents the learned parameters of the discriminator. Then, based on the Bayes rule, the conditioned language model can be decoupled as: \[P(x_{t}|x_{<t},c)\propto P_{LM}(x_{t}|x_{<t})P_{\theta}(c|x_{t},x_{<t}). \tag{4}\] Therefore, each step of the generation process is implemented by combining an unconditioned language model \(P_{LM}(x_{t}|x_{<t})\) and an attribute discriminator \(P_{\theta}(c|x_{t},x_{<t})\) with the guided decoding strategy as described in Eq. (4). Here the discriminator is trained externally and can be easily used with any language generator in a plug-and-play manner. Figure 2. The training and inference pipeline of our method. During training, we only train the language generator on a story dataset without visual information. Then, at inference time, we utilize a pretrained CLIP model as a visual discriminator to align images with candidate tokens. Additionally, we introduce a visual condition planner that aggregates image sequences, and the output visual control is then incorporated into the generation process.
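As a toy illustration of this plug-and-play decoupling, the following Python sketch combines an unconditioned next-token distribution with an external attribute discriminator at each decoding step, as in Eq. (4). Both models are random stand-ins (a real system would use a pretrained LM and a trained discriminator), and all names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "dog", "ran", "to", "park"]

def p_lm(prefix):
    """Stand-in for P_LM(x_t | x_<t): a random next-token distribution."""
    z = rng.random(len(vocab))
    return z / z.sum()

def p_attr(prefix, token):
    """Stand-in for the discriminator P_theta(c | x_t, x_<t)."""
    return rng.random()

def decode_step(prefix):
    # Eq. (4): multiply the LM distribution by the discriminator scores
    # and renormalize, then pick the next token greedily.
    scores = p_lm(prefix) * np.array([p_attr(prefix, t) for t in vocab])
    scores /= scores.sum()
    return vocab[int(np.argmax(scores))]

prefix = ["the"]
for _ in range(4):
    prefix.append(decode_step(prefix))
print(" ".join(prefix))
```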
## 4. Method Given a sequence of images \(\mathcal{I}=\{I_{1},\dots,I_{N}\}\), a visual storytelling approach aims to generate a multi-sentence story \(\mathbf{x}\) by predicting the probability \(P(\mathbf{x}|\mathcal{I})\). To achieve this, we propose a framework that combines a text-only trained language generator, a pretrained visual discriminator, and a visual condition planner. Fig. 2 illustrates the training and inference pipeline of our method. During the training phase, we fine-tune the language generator using story text, while in the inference phase, we employ the pre-trained visual discriminator and the visual condition planner to guide the generation process. ### Text-Only Training Compared to other supervised visual storytelling methods, our approach offers a notable advantage in that it requires training only on a text corpus, resulting in significant cost reductions in both training and annotation efforts. Specifically, we fine-tune a Transformer decoder-based language model on a text story corpus to bridge the gap between pretraining on generic text and generating coherent stories. Given a narrative text sequence \(\mathbf{x}=\{x_{1},\dots,x_{T}\}\), the language model is fine-tuned by minimizing the maximum likelihood estimation (MLE) loss: \[\mathcal{L}_{MLE}=-\frac{1}{T}\sum_{t=1}^{T}\log P_{LM}(x_{t}|x_{<t}) \tag{5}\] Inspired by Su et al. (Su et al., 2017), we incorporate an additional contrastive objective \(\mathcal{L}_{CL}\) to encourage the generation of diverse and distinct expressions. The objective is defined as: \[\mathcal{L}_{CL}=\frac{1}{T(T-1)}\sum_{i=1}^{T}\sum_{j=1,j\neq i}^{T}\max(0,\epsilon-s(x_{i},x_{i})+s(x_{i},x_{j})), \tag{6}\] where \(\epsilon\) is a predefined margin, and \(s\) is the cosine similarity between token representations, defined by: \[s(x_{i},x_{j})=\frac{h_{x_{i}}^{T}h_{x_{j}}}{|h_{x_{i}}||h_{x_{j}}|}. \tag{7}\] The overall training objective of the language generator is the combination of the above two losses: \[\mathcal{L}=\mathcal{L}_{MLE}+\alpha\mathcal{L}_{CL}, \tag{8}\] where \(\alpha\) is a hyper-parameter to balance the loss terms. After fine-tuning on a text story corpus, the language generator is able to generate coherent stories in a style that is aligned with the training data. However, since the generation process of the language generator is solely based on textual input, it may not take into account any visual content or the desired topic of the story. To address this, we introduce a visual discriminator and a visual condition planner to control the story topic and add details to the generated sentences.
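For concreteness, the combined objective of Eqs. (5)-(8) can be written down in a few lines. The following numpy sketch is our own illustration on toy inputs (a real implementation would operate on LM logits and hidden states inside a training loop):

```python
import numpy as np

def training_loss(logits, hidden, targets, alpha=1.0, eps=0.5):
    """Eq. (8): MLE loss of Eq. (5) plus the contrastive term of Eq. (6).
    logits: (T, V) next-token scores; hidden: (T, d) token representations."""
    T = len(targets)
    logits = logits - logits.max(axis=-1, keepdims=True)      # stable softmax
    logp = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    l_mle = -logp[np.arange(T), targets].mean()               # Eq. (5)

    h = hidden / np.linalg.norm(hidden, axis=-1, keepdims=True)
    sim = h @ h.T                                             # Eq. (7), s(x_i, x_j)
    margin = eps - sim.diagonal()[:, None] + sim              # Eq. (6) summand
    off_diag = ~np.eye(T, dtype=bool)
    l_cl = np.maximum(0.0, margin)[off_diag].mean()           # mean over T(T-1) pairs
    return l_mle + alpha * l_cl                               # Eq. (8)

rng = np.random.default_rng(1)
loss = training_loss(rng.normal(size=(6, 50)), rng.normal(size=(6, 8)),
                     rng.integers(0, 50, size=6))
print(loss)
```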
### Visual Discriminator and Story Planning As previously mentioned, we consider visual storytelling as a visual-conditioned story generation task, and employ the guided decoding paradigm to integrate visual controls into the language generator. To achieve this, we introduce a visual discriminator and a visual condition planner to score candidate sequences during generation. The visual discriminator is implemented using a pretrained visual-linguistic aligning model, while the visual condition planner is a training-free weighting model which aggregates the text matching results of different images. During each generation step \(t\), our language generator predicts a probability distribution \(P_{LM}(x_{t}|x_{<t})\) over the vocabulary \(V\) of possible next tokens, based on the context \(x_{<t}\). To guide the selection of candidate tokens, we employ a pretrained CLIP (Chen et al., 2017) model as a visual discriminator \(\mathbf{D}\). Although the CLIP model has been pretrained on a large-scale dataset of paired visual and textual data, the pretraining process does not specifically involve annotations for visual storytelling. Therefore, by utilizing the pretrained CLIP, our method does not require cross-modality training and is capable of handling open-domain visual input. This makes our approach data-efficient and more scalable than previous methods. Specifically, we feed each candidate token \(x_{t}\) into the text encoder of CLIP along with the context tokens \(x_{<t}\) to obtain a textual representation \(f_{x_{1:t}}\). For each image \(I_{j}\) in the input album, we extract a visual representation \(f_{I_{j}}\) using the visual encoder of CLIP, where \(j\in\{1,\dots,N\}\). Then, the cosine similarity of \(f_{x_{1:t}}\) and \(f_{I_{j}}\) is computed as: \[\mathbf{D}(x_{1:t},I_{j})=\frac{f_{x_{1:t}}\cdot f_{I_{j}}}{|f_{x_{1:t}}||f_{I_{j}}|},\quad j\in\{1,\dots,N\}. \tag{9}\] As the CLIP model is trained to map visual and textual input representations into a shared space, the matching score \(\mathbf{D}(x_{1:t},I_{j})\) measures the relevance between the candidate sequence \(x_{1:t}\) and the input image \(I_{j}\). Figure 3. Illustration of the visual condition planner. To ensure that the generated story aligns with the fine-level semantics of the image corresponding to the sentence being generated while maintaining the overall theme of the album, we propose a visual condition planner. It aggregates the scores of the input images to derive a visual control for the current decoding step. Inspired by the work of Lin and Riedl (Lin and Riedl, 2017), the planner does not require any training and achieves both global and local alignment through weighting and multiplication operations. As depicted in Fig. 3, the visual condition planner computes a control weight for each input image based on the position of the current sentence in the story. More precisely, the weight \(\omega_{j}\) for image \(I_{j}\) is: \[\omega_{j}=C\exp(-\frac{(i-j)^{2}}{2\sigma^{2}}), \tag{10}\] where \(i\in\{1,\dots,N\}\) represents the position of the current sentence in the story, and \(C\) is a constant to normalize the weights and ensure \(\sum_{j=1}^{N}\omega_{j}=1\). When \(i=j\), the current sentence should be the exact description of image \(I_{j}\), while remaining coherent with the other images \(I_{k\neq j}\). Therefore, the weight of \(I_{j}\) is the largest, and the weight of \(I_{k}\) descends as the distance \(|k-j|\) grows. Finally, the planner applies weighted multiplication on the scores of different images to obtain a unified matching score between the candidate sequence \(x_{1:t}\) and the input images \(\mathcal{I}\). Formally, \[P_{w}(\mathcal{I}|x_{t},x_{<t})=\Pi_{j=1}^{N}\mathbf{D}(x_{1:t},I_{j})^{\omega_{j}}. \tag{11}\] It is worth noting that in our experiments, the aforementioned process is applied to a subset of the entire vocabulary, thereby reducing the computational cost of encoding and aligning candidate text. Specifically, we select the top \(K\) tokens predicted by the language generator as the subset \(V_{(t)}^{K}\). Moreover, to eliminate the bias of the cross-modality alignment results, we normalize the scores among candidate tokens. The final output of the visual condition planner can be written as: \[P(\mathcal{I}|x_{t},x_{<t})=\frac{e^{P_{w}(\mathcal{I}|x_{t},x_{<t})}}{\sum_{x_{j}\in V_{(t)}^{K}}e^{P_{w}(\mathcal{I}|x_{j},x_{<t})}}. \tag{12}\]
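Putting Eqs. (9)-(12) together, one step of the discriminator-plus-planner pipeline can be sketched as follows. The encoder below is a random stand-in for CLIP (a real system would load a pretrained checkpoint), and the shift of the cosine similarities into \((0,1]\) is our own assumption to keep the fractional powers of Eq. (11) well defined:

```python
import numpy as np

rng = np.random.default_rng(2)

def encode(x):
    """Placeholder for CLIP's text/image encoders: returns a unit vector."""
    v = rng.normal(size=512)
    return v / np.linalg.norm(v)

def visual_control(context, candidates, images, sent_idx, sigma=1.0):
    """Eqs. (9)-(12): matching scores, planner weights, weighted geometric
    aggregation, and softmax normalization over the candidate tokens."""
    n = len(images)
    img = np.stack([encode(im) for im in images])
    # Eq. (9): cosine similarity of each extended sequence with each image,
    # shifted to (0, 1] (our assumption) before the powers of Eq. (11).
    d = np.stack([(img @ encode(context + " " + tok) + 1) / 2
                  for tok in candidates])                      # shape (K, N)
    w = np.exp(-((sent_idx - np.arange(1, n + 1)) ** 2) / (2 * sigma ** 2))
    w /= w.sum()                                               # Eq. (10)
    p_w = np.prod(d ** w, axis=1)                              # Eq. (11)
    return np.exp(p_w) / np.exp(p_w).sum()                     # Eq. (12)

p = visual_control("we went to the", ["beach", "park", "museum"],
                   ["img1", "img2", "img3", "img4", "img5"], sent_idx=2)
print(p, p.sum())                                              # sums to 1
```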
### Visual-Conditioned Generation Given the token probability predicted by the language generator and the aggregated cross-modality matching score, visual storytelling can be decoupled into the combination of language modeling and cross-modality aligning. Similar to Eq. (4), the probability of the next token \(x_{t}\) can be decoupled as follows: \[P(x_{t}|x_{<t},\mathcal{I})\propto P_{LM}(x_{t}|x_{<t})P(\mathcal{I}|x_{t},x_{<t})^{\gamma}, \tag{13}\] where the hyper-parameter \(\gamma\) controls the weight of visual information in the language generation process. While a higher value of \(\gamma\) can improve the alignment of visual semantics, it may also adversely affect the quality of the generated language. Finding the right balance between language and visual information is crucial for achieving high-quality visual storytelling. Furthermore, inspired by Su et al. (Su et al., 2017), we incorporate a degeneration penalty into Eq. (13) to prevent the repetitive degeneration problem. The final probability of visual-conditioned generation is formulated as: \[P(x_{t}|x_{<t},\mathcal{I})=P_{LM}(x_{t}|x_{<t})P(\mathcal{I}|x_{t},x_{<t})^{\gamma}-\beta\max_{j\in\{1,\dots,t-1\}}s(x_{t},x_{j}), \tag{14}\] where \(\beta\) is a hyper-parameter to control the degeneration penalty strength, and \(s(x_{i},x_{j})\) is defined in Eq. (7). \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Method & METEOR & BLEU-1 & BLEU-2 & BLEU-3 & BLEU-4 & ROUGE\_L & CIDEr \\ \hline \hline \multicolumn{8}{c}{_Fully-Supervised Methods_} \\ \hline INet (Wang et al., 2018) & 35.6 & 64.4 & 40.1 & 23.9 & 14.7 & 29.7 & 10.0 \\ TAPM (Chen et al., 2018) & 37.2 & - & - & - & - & 33.1 & 13.8 \\ OIAVist (Zhou et al., 2018) & 36.8 & 68.4 & 42.7 & 25.2 & 15.3 & 30.2 & 10.1 \\ KAGS (Kang et al., 2018) & 36.2 & 70.1 & 43.5 & 25.2 & 14.7 & 31.4 & 11.3 \\ \hline \multicolumn{8}{c}{_Text-Only Trained_} \\ \hline Top-\(k\) & 20.3 & 40.0 & 15.6 & 5.6 & 2.2 & 15.7 & 0.6 \\ Nucleus & 19.6 & 38.6 & 14.2 & 4.9 & 1.9 & 15.5 & 0.5 \\ MAGIC & 20.3 & 41.2 & 16.1 & 5.9 & 2.8 & 16.0 & **1.3** \\ **Ours** & **23.0** & **43.7** & **20.2** & **9.2** & **4.5** & **17.3** & 1.2 \\ \hline \hline \end{tabular} \end{table} Table 1. Comparison with existing methods on the VIST test set. “Fully-Supervised” methods are trained on paired data; “Text-Only Trained” methods are trained on the textual stories of VIST. The best results under each metric are highlighted in bold. \begin{table} \begin{tabular}{c c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{7}{c}{ROCStories} & \multicolumn{7}{c}{WritingPrompts} \\ \cline{2-15} & M & B-1 & B-2 & B-3 & B-4 & R\_L & C & M & B-1 & B-2 & B-3 & B-4 & R\_L & C \\ \hline Top-\(k\) & 15.3 & 28.6 & 9.5 & 2.5 & 0.7 & 12.1 & 0.2 & 15.0 & 26.8 & 8.0 & 2.0 & 0.4 & 12.2 & **0.2** \\ Nucleus & 15.0 & 28.4 & 9.0 & 2.4 & 0.7 & 12.0 & 0.3 & 14.3 & 25.6 & 7.3 & 1.5 & 0.3 & 11.9 & **0.2** \\ MAGIC & 16.4 & **29.7** & 10.1 & 2.7 & 0.7 & 12.6 & 0.1 & 15.4 & 27.8 & 9.6 & **2.9** & 0.5 & 12.8 & **0.2** \\ **Ours** & **16.6** & 28.6 & **11.5** & **3.8** & **1.2** & **12.9** & **0.2** & **16.2** & **28.8** & **9.9** & **2.9** & **0.9** & **13.7** & **0.2** \\ \hline \hline \end{tabular} \end{table} Table 2. Domain transfer results of text-only trained methods. The best results under each metric are highlighted in bold.
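A single decoding step of Eq. (14) then reduces to an elementwise combination over the top-\(K\) candidates. The following minimal sketch (our own, on toy inputs) assumes that the LM probabilities, the planner output of Eq. (12), and the maximum similarities of each candidate to the already generated tokens have been precomputed:

```python
import numpy as np

def next_token(p_lm, p_vis, prev_sims, gamma=1.0, beta=0.01):
    """One step of Eq. (14) over the top-K candidates: LM probability times
    visual control to the power gamma, minus the degeneration penalty (the
    maximum similarity of each candidate to the tokens generated so far)."""
    score = p_lm * p_vis ** gamma - beta * prev_sims
    return int(np.argmax(score))

rng = np.random.default_rng(3)
K = 45
p_lm = np.sort(rng.dirichlet(np.ones(K)))[::-1]   # toy top-K LM probabilities
p_vis = rng.dirichlet(np.ones(K))                 # toy output of Eq. (12)
prev_sims = rng.random(K)                         # max cosine sim to x_<t, Eq. (7)
print(next_token(p_lm, p_vis, prev_sims))
```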
## 5. Experiments **Dataset.** We evaluate on the widely-used VIST benchmark (Beng et al., 2019) for visual storytelling. VIST contains 210,819 images from 10,117 Flickr albums. Each sample in VIST contains five images selected from an album, and a five-sentence story is annotated as ground truth. After excluding broken images, the dataset contains 40,098, 4988, and 5050 samples for training, validation, and testing, respectively. We use the test split of VIST as the evaluation benchmark in all experiments. Following previous works (Chen et al., 2019; Chen et al., 2019), we evaluate at the album level, generating one story for each album regardless of the different selected images. During the training stage, we use the text part of the VIST training split, where all names are replaced with special placeholders. **Implementation Details.** The language generator is initialized with a pre-trained GPT-2 model, and fine-tuning is performed on 2 RTX 3090 GPUs for 40,000 steps with a batch size of 256. We set the training loss weight \(\alpha\) to 1. To implement the visual discriminator, we utilize a pretrained CLIP with the ViT-base architecture as the image encoder. The visual-conditioned generation is performed on 1 RTX 3090 GPU. In the reported results, we set the hyper-parameters \(K\), \(\gamma\), and \(\beta\) to 45, 1, and 0.01, respectively. **Evaluation Metrics.** Following the existing works on the VIST benchmark, we adopt a set of automatic evaluation metrics including METEOR (M) (Wang et al., 2019), BLEU (B-n) (Wang et al., 2019), ROUGE_L (R_L) (Wang et al., 2019) and CIDEr (C) (Wang et al., 2019). METEOR measures the semantic alignment between generated and reference sentences by leveraging WordNet. BLEU computes the unigram and n-gram overlap between generated and candidate sentences. ROUGE_L measures sentence-level similarity by computing the length of the longest common subsequence. CIDEr evaluates the consensus based on n-grams and weights n-grams using Term Frequency Inverse Document Frequency (TF-IDF) to emphasize informative content. However, we note that these metrics, as they rely on word correspondence with the ground truth, may not fully capture the quality of open-ended generation tasks such as storytelling. ### Quantitative Results **Comparison with Existing Methods.** We compare the generation quality of our method with text-only trained methods. First, we adopt top-\(k\) sampling (Wang et al., 2019) (\(k=40\)) and nucleus sampling (Wang et al., 2019) (\(p=0.95\)). Since these sampling-based decoding strategies take no account of visual inputs, we consider them the lower bound of the text-only trained methods. We also include MAGIC (Wang et al., 2019), which was proposed for image captioning and image-based story generation. MAGIC takes an image as input and generates text outputs by adding CLIP similarity scores to the language model predicted probabilities. To extend MAGIC to the visual storytelling task, we average the representations of the input image sequence to form the visual input of MAGIC. To provide a comprehensive comparison, we also report results of several fully-supervised baselines: INet (Wang et al., 2019), TAPM (Chen et al., 2019), OIAVist (Wang et al., 2019), and KAGS (Kag et al., 2019). In Table 1, we present the comparison of our proposed method with existing fully-supervised and text-only trained methods. As expected, the fully-supervised methods trained on cross-modality paired data exhibit better performance compared to the text-only trained methods.
However, our proposed method outperforms the text-only trained baselines on almost all metrics by a considerable margin, demonstrating the effectiveness of our visual-conditioned generation strategy. **Cross-domain Transfer.** In order to evaluate the generalization ability of our method, we also explore cross-domain transfer by using story datasets of different domains in the text-only training stage. Specifically, we use **ROCStories** (Wang et al., 2019) and **WritingPrompts** (Wang et al., 2019) for training. The training split of the ROCStories dataset contains 51,165 five-sentence commonsense stories. The training split of the WritingPrompts dataset contains 272,600 stories collected from Reddit's WRITINGPROMPTS forum. The average length of WritingPrompts stories is 734.5 words, and the average number of sentences is 39.4, making it significantly larger than the VIST dataset and introducing a larger domain gap. During training, we exclude the story title and writing prompts to align with the VIST evaluation process. Figure 4. Human evaluation results. “Tie” means the annotator cannot choose the better story. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Method & M & B-1 & B-2 & B-3 & B-4 & R\_L & C \\ \hline Ours-Max & 22.6 & 41.8 & 18.9 & 8.5 & 4.1 & 17.0 & 0.9 \\ Ours-Mean & 22.8 & 43.2 & 20.0 & 9.1 & 4.4 & 17.2 & **1.3** \\ Ours-Local & 22.4 & 42.1 & 19.4 & 8.7 & 4.2 & 17.2 & 1.0 \\ **Ours-Planner** & **23.0** & **43.7** & **20.2** & **9.2** & **4.5** & **17.3** & 1.2 \\ \hline \hline \end{tabular} \end{table} Table 4. Evaluation results of different image album aggregation strategies. The best results for each metric are highlighted in bold. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{VIST-Text} & \multicolumn{2}{c}{ROC} & \multicolumn{2}{c}{WP} \\ \cline{2-7} & D-1 & D-2 & D-1 & D-2 & D-1 & D-2 \\ \hline MAGIC & 2.4 & 6.6 & 0.8 & 1.3 & 1.1 & 2.5 \\ **Ours** & **4.7** & **17.2** & **5.5** & **18.8** & **7.4** & **26.5** \\ \hline \hline \end{tabular} \end{table} Table 3. Diversity evaluation results. “VIST-Text”, “ROC”, and “WP” indicate that the training of the language generator is conducted on the text part of the VIST dataset, ROCStories, and WritingPrompts, respectively. “D-\(n\)” refers to “Distinct-\(n\)”. The best results for each metric are highlighted in bold. In Table 2, we compare the cross-domain transfer ability between our method and the text-only trained baselines. We observe a considerable drop in performance for all methods when evaluated on datasets from different domains. This is expected since the style, theme and topic of the stories differ across datasets. However, our method still outperforms the others on most evaluation metrics, demonstrating its superior generalization ability. **Diversity Evaluation.** To further assess the expressive diversity of the generated stories, we use Distinct-\(n\), which calculates the number of distinct n-grams over all generated stories (Sutskever et al., 2016). The value is divided by the total number of generated tokens to avoid favoring long sentences. The results presented in Table 3 demonstrate that our method significantly outperforms the baseline in terms of diversity. This can be attributed to the ability of our method to attend to both global and local visual input, which results in more informative and diverse expressions. Additionally, it can be observed that the diversity of the generated stories is related to the training corpus, which suggests that incorporating external text corpora can benefit visual storytelling.
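Distinct-\(n\) itself is straightforward to compute; the following short sketch is our own implementation of the metric as described above, counting unique n-grams over a set of stories and normalizing by the total number of generated tokens:

```python
def distinct_n(stories, n):
    """Distinct-n: number of unique n-grams across all generated stories,
    divided by the total number of generated tokens."""
    ngrams, total = set(), 0
    for story in stories:
        toks = story.lower().split()
        total += len(toks)
        ngrams.update(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    return len(ngrams) / max(total, 1)

stories = ["we went to the beach", "we went to the park and played"]
print(distinct_n(stories, 1), distinct_n(stories, 2))   # 8/12 and 7/12
```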
Additionally, it can be observed that the diversity of the generated stories is related to the training corpus, which suggests that incorporating external text corpora can benefit visual storytelling. **Human Evaluation.** As illustrated in previous works (Kang et al., 2017; Wang et al., 2018), automatic evaluation metrics are insufficient for visual storytelling due to its subjective and imaginative nature. To obtain more reliable estimates, we also perform human evaluation. Following common practice, we randomly selected 150 examples from the test set, and invited 5 human annotators to rank the generation results of different methods. Specifically, the annotators were asked to evaluate the stories based on three criteria: relevance, expressiveness, and concreteness. Relevance refers to whether the story covers the topic and main objects in the images. Expressiveness refers to whether the story is coherent, grammatically and semantically correct, and free of repetition. Concreteness refers to whether the story is narrative and concrete. Fig. 4 shows the evaluation results of the 5 human annotators. Our method outperforms MAGIC by a large margin in all three aspects. The dominance of our method is most significant in terms of Concreteness, indicating a greater ability to incorporate visual details in the generated stories. MAGIC fares relatively better on expressiveness than on the other two aspects, which reflects the fact that the language quality of our method is slightly affected by introducing fine-level visual control. Additionally, the "Tie" option is selected for a large percentage of samples across all three criteria, which has not been reported for previous methods (Beng et al., 2016; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). We believe the reason is that the overall quality of stories generated by text-only trained methods is lower than that of fully-supervised methods, making them difficult for human annotators to rank. ### Ablation Study **Impact of Visual Condition Planner.** We conduct ablation experiments to analyze the effect of the visual condition planner, which aggregates the cross-modality matching results of the input images. Specifically, we replace the aggregation process with three straightforward strategies: 1) choosing the maximum matching score over all images, 2) averaging the scores of all images, and 3) using the score of the corresponding image only. The evaluation results in Table 4 indicate that the strategies of treating the images within the album equally ("Ours-Max" and "Ours-Mean") and of focusing solely on the corresponding local image ("Ours-Local") all have a negative impact on the quality of the generated stories. **Impact of Hyper-parameters.** During the visual-conditioned generation, the selection of the top-\(K\) candidate tokens used to compute the cross-modality matching score with the visual inputs, and the addition of visual control to the decoding process with a control weight \(\gamma\) in Eq. (14), are governed by predefined hyper-parameters. Therefore, it is important to analyze the influence of these hyper-parameters on the quality of the generated stories; a sketch of where they enter the decoding step is given below. From the results in Fig. 5, we observe that the performance improves with \(K\) when \(K<30\), and remains relatively stable as \(K\) continues to increase. However, when \(K\) is too large (\(>60\)), the performance slightly decreases as \(K\) keeps increasing. It is also worth noting that the inference time significantly increases as \(K\) increases.
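As referenced above, the following is a minimal sketch of a single visual-conditioned decoding step in this guided-decoding style. The helpers `lm_next_token_probs` and `clip_match_scores` are hypothetical stand-ins for the language generator and the CLIP-based visual discriminator, and the additive combination is only a schematic reading of Eq. (14), whose exact form is not reproduced in this excerpt.

```python
import torch

def guided_decoding_step(lm_next_token_probs, clip_match_scores,
                         prefix_ids, visual_condition, K=45, gamma=1.0):
    """One decoding step: re-rank the language model's top-K candidate
    tokens by an image-text matching score against the (planner-aggregated)
    visual condition. Schematic only; see Eq. (14) for the exact objective."""
    probs = lm_next_token_probs(prefix_ids)          # shape: [vocab_size]
    topk_probs, topk_ids = probs.topk(K)
    # Hypothetical helper: scores each candidate continuation against the
    # aggregated visual input, returning a tensor of shape [K].
    match = clip_match_scores(prefix_ids, topk_ids, visual_condition)
    scores = topk_probs + gamma * match              # visual control weight
    return topk_ids[scores.argmax()]                 # greedy over re-ranked set
```

Larger \(K\) widens the candidate pool the visual score can promote (at higher inference cost), while \(\gamma\) trades language-model fluency against visual grounding.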
Therefore, we choose \(K=45\) in our experiments as it strikes a balance between performance and efficiency. The results in Fig. 6 demonstrate the significant impact of the control weight \(\gamma\) on the generation process. Specifically, when the control weight is too small, the generated stories tend to be disconnected from the visual input, while an excessively large control weight leads to a disruption of the decoding process, thus deteriorating the overall quality of the generated text. These experimental findings align with our initial intuition and suggest the importance of selecting an appropriate control weight in the visual-conditioned generation. Figure 5. Analysis of the effect of the number of candidates \(K\). Figure 6. Analysis of the effect of the control weight \(\gamma\). ### Qualitative Results Fig. 7 presents two examples of stories generated by MAGIC and our proposed method. The results show that our approach generates stories with more accurate semantics that correspond to the images, as indicated by the red highlights. Moreover, the visual condition planner enables the generation of sentences that are relevant to the other images in the input sequence, as shown by the green highlights. Our method outperforms the baseline method in capturing visual contents within a single image and maintaining the theme of the album, resulting in stories of higher quality. ## 6. Conclusion and Discussion In this paper, we propose a novel approach for visual storytelling that only requires textual story data for training. By leveraging the capabilities of pretrained cross-modality models such as CLIP, we model the visual storytelling task as a visual-conditioned generation problem. We adopt a guided decoding paradigm and design a visual condition planner to aggregate the input visual sequence. Our method is evaluated on the VIST benchmark through extensive experiments, which demonstrate its effectiveness in generating high-quality visual stories. Although the proposed method averts the cost of cross-modality annotated data, the training-free visual condition planner does have limitations in understanding the complex temporal structures of the visual input, which may limit the complexity of the generated stories. In future work, it may be worth exploring few-shot learning methods to aggregate the alignment results of the image sequence and generate more informative and narrative stories. ###### Acknowledgements. This work was supported by NSFC under Contract U20A20183 and 62021001. It was also supported by the GPU cluster built by MCC Lab of Information Science and Technology Institution, USTC, and the Supercomputing Center of the USTC. Figure 7. Qualitative comparison between our method and MAGIC. Words highlighted in red represent exact descriptions of the corresponding image, and words highlighted in green represent information from other images.
2310.04229
Exploring the Origin of Solar Energetic Electrons I: Constraining the Properties of the Acceleration Region Plasma Environment
Solar flare electron acceleration is an efficient process, but its properties (mechanism, location) are not well constrained. Via hard X-ray (HXR) emission, we routinely observe energetic electrons at the Sun, and sometimes we detect energetic electrons in interplanetary space. We examine if the plasma properties of an acceleration region (size, temperature, density) can be constrained from in-situ observations, helping to locate the acceleration region in the corona, and infer the relationship between electrons observed in-situ and at the Sun. We model the transport of energetic electrons, accounting for collisional and non-collisional effects, from the corona into the heliosphere (to 1.0 AU). In the corona, electrons are transported through a hot, over-dense region. We test if the properties of this region can be extracted from electron spectra (fluence and peak flux) at different heliospheric locations. We find that cold, dense coronal regions significantly reduce the energy at which we see the peak flux and fluence for distributions measured out to 1.0 AU, the degree of which correlates with the temperature and density of plasma in the region. Where instrument energy resolution is insufficient to differentiate the corresponding peak values, the spectral ratio of [7-10) to [4-7) keV can be more readily identified and demonstrates the same relationship. If flare electrons detected in-situ are produced in, and/or transported through hot, over-dense regions close to HXR-emitting electrons, then this plasma signature should be present in their lower-energy spectra (1-20 keV), observable at varying heliospheric distances with missions such as Solar Orbiter.
Ross Pallister, Natasha L. S. Jeffrey
2023-10-06T13:16:12Z
http://arxiv.org/abs/2310.04229v1
Exploring the Origin of Solar Energetic Electrons I: Constraining the Properties of the Acceleration Region Plasma Environment ###### Abstract Solar flare electron acceleration is an efficient process, but its properties (mechanism, location) are not well constrained. Via hard X-ray (HXR) emission, we routinely observe energetic electrons at the Sun, and sometimes we detect energetic electrons in interplanetary space. We examine if the plasma properties of an acceleration region (size, temperature, density) can be constrained from in-situ observations, helping to locate the acceleration region in the corona, and infer the relationship between electrons observed in-situ and at the Sun. We model the transport of energetic electrons, accounting for collisional and non-collisional effects, from the corona into the heliosphere (to 1.0 AU). In the corona, electrons are transported through a hot, over-dense region. We test if the properties of this region can be extracted from electron spectra (fluence and peak flux) at different heliospheric locations. We find that cold, dense coronal regions significantly reduce the energy at which we see the peak flux and fluence for distributions measured out to 1.0 AU, the degree of which correlates with the temperature and density of plasma in the region. Where instrument energy resolution is insufficient to differentiate the corresponding peak values, the spectral ratio of [7-10) to [4-7) keV can be more readily identified and demonstrates the same relationship. If flare electrons detected in-situ are produced in, and/or transported through, hot, over-dense regions close to HXR-emitting electrons, then this plasma signature should be present in their lower-energy spectra (1-20 keV), observable at varying heliospheric distances with missions such as Solar Orbiter. ## 1 Introduction The Sun is an efficient particle accelerator capable of producing kilo-, Mega- and Giga-electronvolt (keV, MeV, GeV) energetic particles (e.g., Holman et al., 2011; Vilmer et al., 2011; Klein & Dalla, 2017), in transient events such as solar flares and coronal mass ejections (CME) via magnetic reconnection (e.g., Parker, 1957; Sweet, 1958; Priest & Forbes, 2000) and/or shocks (e.g., Forbes et al., 2006). In solar flares, a large fraction (\(\sim 10-50\%\)) of the released magnetic energy is converted into energetic particles, including energetic electrons (e.g., Emslie et al., 2012). However, the processes that accelerate and transport energetic electrons are widely debated, with the exact configuration(s) of the acceleration mechanism(s), environment and location(s) still undetermined. In the standard flare model, hard X-ray (HXR) producing energetic electrons (e.g., Kontar et al., 2011) are transported along newly formed magnetic field lines in the corona, precipitating into the dense layers of the lower atmosphere and losing energy. So-called flare-produced solar energetic electrons (SEEs) can also be detected in the heliosphere (e.g., Lin, 1985), either via in-situ measurements or by their radio emissions (cf. Pick & Vilmer, 2008), but the connection between these distinct electron populations, and indeed their connecting magnetic topology, is still poorly understood.
Moreover, multi-messenger diagnostics, whether remote sensing or in-situ, can be complicated by various particle propagation and emission effects such as Coulomb collisions (e.g., Jeffrey et al., 2014), X-ray albedo (e.g., Bai & Ramaty, 1978), radio wave scattering (e.g., Kontar et al., 2017), turbulence (parallel and cross-field diffusion) (e.g., Kontar et al., 2014) and field line meandering (e.g., Laitinen et al., 2016). HXR-emitting electrons are produced close to or within hot and dense flaring loops, possibly via turbulence (Kontar et al., 2017; Stores et al., 2021), and we see the signature of that plasma in their low-energy spectra (\(\leq 30\) keV). Non-thermal electrons accelerated out of a hot, dense plasma, and/or non-thermal electrons moving through a hot, dense region and undergoing (partial-) thermalization at lower energies (\(\leq 50\) keV) (e.g., Jeffrey et al., 2019; Kontar et al., 2015), will retain or imprint the properties of that plasma environment. Most studies to date explore the connection between different flare electron populations by studying the properties of the higher energy power law (usually above 40 keV). After the launch of the Ramaty High Energy Solar Spectroscopic Imager (RHESSI; Lin et al. (2002)), several studies examined flares with both HXR-producing electrons and SEEs detected at 1.0 AU. Krucker et al. (2007) compared HXR spectra with WIND/3DP (Lin et al., 1995) electron spectra at \(\sim 1.0\) AU. For so-called 'prompt' events, where the SEE release time appears to coincide with the flare HXR burst, Krucker et al. (2007) found a clear correlation for both power-law spectral indices and total number of electrons, which is consistent with a single process accelerating both electron populations. Under the assumption that both HXR-producing and escaping electron populations are accelerated by a similar mechanism within similar plasma conditions, we expect near-identical spectra. However, in Krucker et al. (2007), the peak flux spectrum was harder than the inferred cold-thick-target electron spectra at the Sun, possibly suggesting the effects of coronal or heliospheric transport processes. A near-identical study using flare data from solar cycle 24, Dresing et al. (2021), also found a strong correlation of about 0.8 between remote HXR-producing and in-situ spectral indices. Dresing et al. (2021) observed an increased correlation for events with 'significant anisotropy', suggesting that transport effects reduce the signature of the acceleration region properties in the data. In Wang et al. (2021), sixteen SEE-producing flares were examined; they determined that the spectral index of HXR-producing electrons was no less than the observed high-energy spectral index of SEEs (above a spectral break energy), showing a positive correlation with the high-energy spectral index of SEEs. Further, the spectral analysis (extending down to \(\sim 5\) keV) suggested that the source of SEEs was high in the corona at a heliocentric distance of \(\geq 1.3\) solar radii. In contrast, other studies looking at Type III radio bursts (e.g., Reid et al., 2011) offer conflicting results, suggesting that outward propagating electrons are accelerated between heights of \(40-60\) Mm (\(\approx 1.07\) solar radii), much closer to the flare location at the Sun. Only with a better understanding of the acceleration environment and transport processes therein can we explore the connection between flare-accelerated electron populations.
To this end, we aim to constrain the properties of an acceleration region (e.g., its size, temperature, density, turbulence profile); properties not currently well constrained by remote flare observation or in-situ detection alone. In this initial study (Paper I) we use SEE transport modelling in the corona and heliosphere alone to investigate the possibility of extracting acceleration region plasma properties from in-situ data, constraining the likely location of their acceleration, and hence their relation to HXR-producing electrons. The study mainly concentrates on the electron spectral observations at heliospheric locations of \(0.4-1.0\) AU, which can now be explored with both Parker Solar Probe (PSP; Fox et al. (2016)) and Solar Orbiter (SolO; Muller et al. (2020)). Later studies will combine these constraints with those derived from coronal HXR-centric studies to explore the possibility of a unified acceleration region parameter range for prompt flare events. We examine if more attention should be given to the understudied electron spectral range of \(1-30\) keV, and if signatures of hot and dense plasma can be extracted from in-situ datasets. Most studies concentrate on comparing the properties of interplanetary electrons with their HXR-emitting counterparts above \(20-30\) keV only. However, the analysis of HXR-producing electrons shows that the lower portion of the spectrum \(\leq 20\) keV is directly related to the presence and properties of the surrounding coronal plasma (e.g., Kontar et al., 2015). We suggest that any hot, over-dense plasma signature will help to locate the origin of such particles and in particular their relationship to HXR-emitting electrons. We take electron spectra (peak flux and fluence) in the range of \(1-20\) keV to be appropriate for this purpose. In Section 2 we present the model of electron transport for both the coronal region and heliospheric components, and the coronal plasma properties used in the former. In Section 3 we present the results of the study, including different diagnostics that can be used to estimate the acceleration plasma environment from a partially-thermalized electron distribution. In Section 4, we summarize and discuss the main results. ## 2 Coronal and heliospheric transport model We have developed an electron acceleration and transport model for the inner corona and heliosphere, starting at 1 solar radius \(R_{\odot}\). It is assumed that the extent of the simulated coronal region beyond the solar surface is always negligible compared to 1.0 AU (maximum 40 Mm vs approx. 150,000 Mm, or 0.026%). The model is composed of two discrete domains: a hot and over-dense (flare) coronal 'acceleration' region of length \(L\) with given uniform electron temperature \(T\) and density \(n_{e}\), and a sparser cold plasma (\(T\approx 0\)) representing the wider heliosphere for \(z>L\) (see Figure 1). Contrary to the name, the current study neglects acceleration mechanisms and instead injects a single non-thermal electron distribution into a region dominated by collisional effects, henceforth referred to as the 'collisional' region1. The extent and plasma properties of this collisional region are varied within reasonable limits to investigate the effect each variable has on the electron population ejected from this region.
Following different solar flare observations of (above-the-) loop-top sources (e.g., Caspi et al., 2014; Jeffrey et al., 2015; French et al., 2020), we choose a sensible range of parameters for the coronal region: temperature \(T\) ranging between \(10-30\) MK, electron number density \(n_{e}\) ranging between \(10^{9}-10^{10}\) cm\({}^{-3}\) and region size \(L\) ranging between \(10-40\) Mm (\(\sim 14^{\prime\prime}-55^{\prime\prime}\)). Footnote 1: The acceleration of the electrons out of the background thermal plasma will be discussed in Paper II. The effects acting on electrons in the following transport equations are broadly separable into two types: collisional and non-collisional (and the latter into focusing and diffusion). In the heliospheric environment, adiabatic focusing and pitch-angle diffusion act in opposition to align or disperse particles (respectively) with regard to the magnetic field, the former being more significant where the magnetic field is strong and the latter where the mean free path is short. In the coronal region, collisional energy losses may dominate in the dense environment, whereas collisions will be negligible in the sparser heliospheric domain, where non-collisional effects have previously been demonstrated to take precedence (due to the low heliospheric densities described by empirical models and observations, e.g., Newkirk Jr, 1967; Saito et al., 1977; Leblanc et al., 1998; Fludra et al., 1999; McCauley et al., 2018). Forces due to gravity and electric fields are neglected. In this preliminary study, we neglect the effects of Langmuir wave turbulence and Landau damping that can produce a clear spectral break at \(\sim 40\) keV or lower (Kontar & Reid 2009; Section 4 briefly discusses how wave-particle interactions may change the diagnostics outlined in Section 3). The effects of cross-field particle diffusion and field line meandering, which are vital for large spread events (Laitinen et al., 2016), are neglected here. Particle (re-)acceleration in the heliosphere, such as that induced by shocks associated with co-rotating interaction regions (CIRs) (as described by Heber et al., 1999; Allen et al., 2021; Zhao et al., 2019) and the heliospheric current sheet (e.g., Zharkova & Khabarova, 2012; Khabarova et al., 2015), is also ignored in the present study.
Figure 1: Simplified cartoon indicating one possible magnetic topology (such as emerging flux) that could lead to the production of both flare HXR-producing, and in-situ detected, electrons in close proximity, in hot and over-dense regions close to the flare energy release site. In Paper I, we concentrate on studying the properties of those electrons accelerated during the flare but observed in-situ, by modelling the transport of an injected electron population through a hot and over-dense coronal region related to the flare (black-lined rectangle) of various temperature \(T\), number density \(n_{e}\) and size \(L\), and then out into the heliosphere until 1.0 AU.
### Governing Fokker-Planck Equation
To describe the transport of a chosen electron distribution function \(f(t,z,v,\mu)\) in time \(t\), along a guiding magnetic field \(z\), speed \(v\) and cosine of the pitch-angle (\(\beta\)) to the guiding magnetic field \(\mu=\cos\beta\), we use the following Fokker-Planck equation (e.g., Lifshitz & Pitaevskii, 1981; Karney, 1986), \[\begin{split}\frac{\partial f}{\partial t}+\mu v\frac{\partial f}{\partial z}=&\underbrace{-\frac{v(1-\mu^{2})}{2L_{z}}\frac{\partial f}{\partial\mu}}_{\text{adiabatic focusing}}+\underbrace{\frac{\partial}{\partial\mu}\left[D_{\mu\mu}\frac{\partial f}{\partial\mu}\right]}_{\text{pitch-angle diffusion}}\\ &+\underbrace{\frac{\Gamma}{2v^{2}}\frac{\partial}{\partial v}\left(2vG(u)\frac{\partial f}{\partial v}+4u^{2}G(u)f\right)}_{\text{collisional energy losses}}\\ &+\underbrace{\frac{\Gamma}{2v^{3}}\frac{\partial}{\partial\mu}\left((1-\mu^{2})\left[\mathrm{erf}(u)-G(u)\right]\frac{\partial f}{\partial\mu}\right)}_{\text{collisional pitch-angle scattering}}\end{split} \tag{1}\] The first two terms on the right hand side of Equation 1 model a simple heliospheric environment with the presence of only adiabatic focusing and (parallel) pitch-angle scattering, following e.g., Roelof (1969); Droge & Kartavykh (2009); Agueda & Vainio (2013). Both terms are present within the defined collisional coronal 'acceleration' region \(z<L\) and the wider heliosphere. \(D_{\mu\mu}\) is the pitch-angle diffusion coefficient, which quantifies the diffusion of an electron subject to its pitch-angle, velocity and local mean free path \(\lambda\). The diffusion coefficient and its \(\mu\) derivative are given by: \[D_{\mu\mu}=K(1-\mu^{2})(|\mu|^{q-1}+h) \tag{2}\] \[\frac{\partial D_{\mu\mu}}{\partial\mu}=K\mu\bigg{[}(q-1)(1-\mu^{2})|\mu|^{q-3}-2(|\mu|^{q-1}+h)\bigg{]} \tag{3}\] where \[K=\frac{3v}{2\lambda(4-q)(2-q)} \tag{4}\] and \(q=5/3\) is the spectral index of the magnetic field fluctuations, taken as a Kolmogorov spectrum, while the constant \(h=0.01\) accounts for unmodelled scattering effects and ensures that \(D_{\mu\mu}\) does not vanish at \(\mu=0\). Following Alcock (2018), the mean free path \(\lambda\) is calculated as in Equation 5, \[\lambda=\lambda_{\oplus}\Bigg{(}\frac{z}{z_{\oplus}}\Bigg{)}^{\kappa}\Bigg{(}\frac{p}{p_{\min}}\Bigg{)}^{2\xi} \tag{5}\] dependent on the mean free path at 1.0 AU, \(\lambda_{\oplus}=0.3\) AU, the ratio of the electron position \(z\) relative to 1.0 AU (\(=z_{\oplus}\)) and the ratio of the current and minimum electron momenta \(p\) (derived from the minimum allowed kinetic energy in the heliosphere). \(\kappa\) and \(\xi\) are parameters that quantify the degree to which the electron momentum and radial distance from the Sun affect the mean free path. For this study they are set to \(\kappa=0.5\) and \(\xi=-0.2\), and not explored further. The magnetic field at any point along \(z\) is given by Equation 6 (Dulk & McLean, 1978), \[B=\frac{1}{2}\Bigg{(}\frac{z}{R_{\odot}}-1\Bigg{)}^{-\frac{3}{2}} \tag{6}\] and \(L_{z}\) in Equation 1 is the ratio of the local magnetic field to its spatial gradient, \[L_{z}=\frac{B(z)}{(-dB/dz)} \tag{7}\] The final two terms in Equation 1 describe collisional energy losses and pitch-angle scattering respectively (e.g., Jeffrey et al., 2014; Kontar et al., 2015), where \(\Gamma=4\pi e^{4}\mathrm{ln}\Lambda n_{e}/m_{e}^{2}\), for electron charge \(e\) [statC], Coulomb logarithm \(\mathrm{ln}\Lambda\) and electron mass \(m_{e}\) [g].
The error function \(\mathrm{erf}(u)\) and the Chandrasekhar function \(G(u)\) are given by \[\mathrm{erf}(u)\equiv(2/\sqrt{\pi})\int_{0}^{u}\exp(-t^{2})\,dt \tag{8}\] and \[G(u)\equiv\frac{\mathrm{erf}(u)-u\,\mathrm{erf}^{\prime}(u)}{2u^{2}} \tag{9}\] where \(u\) is the dimensionless velocity \(u=v/(\sqrt{2}v_{th})\) and \(v_{th}=\sqrt{k_{B}T_{e}/m_{e}}\). The error function and \(G(u)\) control the lower-energy (\(E\approx k_{B}T_{e}\)) electron collisional interactions, ensuring that such electrons become indistinguishable from the background thermal plasma. ### Conversion to Stochastic Differential Equations (SDEs) The Fokker-Planck equation (Equation 1) can be rewritten as a Kolmogorov forward equation (Kolmogorov, 1931) and then converted to a set of time-dependent stochastic differential equations (e.g., Gardiner (1986); Strauss & Effenberger (2017)) that describe the evolution of \(z\), \(E\), and \(\mu\) in Ito calculus. The electron transport equations are solved numerically by a series of first-order Euler expressions, returning the evolution of electron velocity \(v\), pitch-angle \(\mu\) and position \(z\) along the guiding field: \[\begin{split} v_{i+1}&=v_{i}-\frac{\Gamma}{v_{i}^{2}}\left(\mathrm{erf}(u_{i})-2u_{i}\mathrm{erf}^{\prime}(u_{i})+G(u_{i})\right)\,\Delta t\\ &+\sqrt{\frac{2\Gamma G(u_{i})}{v_{i}}\Delta t}\;W_{v}(t)\end{split} \tag{10}\] \[\begin{split}\mu_{i+1}&=\mu_{i}-\frac{\Gamma\mu_{i}(\mathrm{erf}(u_{i})-G(u_{i}))}{v_{i}^{3}}\,\Delta t+\left(\frac{dD_{\mu\mu}}{d\mu}+\frac{v_{i}(1-\mu_{i}^{2})}{2L_{z}}\right)\Delta t\\ &+\sqrt{\left(2D_{\mu\mu}+\frac{\Gamma(1-\mu_{i}^{2})(\mathrm{erf}(u_{i})-G(u_{i}))}{v_{i}^{3}}\right)\,\Delta t}\;W_{\mu}(t)\end{split} \tag{11}\] \[z_{i+1}=z_{i}+\mu_{i}v_{i}\Delta t \tag{12}\] \(W\) is a value drawn each time step from a normal (Gaussian) distribution with mean 0 and variance 1; this provides the stochastic (Wiener) element of the electron transport. In the coronal region, we set the time step \(\Delta t\) at a constant value of \(10^{-2}\) s; in order for collisional effects to have significance, there must be multiple simulation steps within the relatively small coronal region. With initial kinetic energies of order 10 keV, beamed electrons are expected to exit the largest modelled coronal region (40 Mm) within a few tenths of a second. Setting \(\Delta t=10^{-2}\) s allows even the most energetic electrons to have multiple simulation steps within the coronal region. As described in Lemons et al. (2009) and Jeffrey et al. (2014), at velocities less than \[v\leq\left(\Gamma\frac{8}{3}\sqrt{\frac{m_{e}}{2\pi k_{B}T_{e}}}\Delta t\right)^{1/2} \tag{13}\] the analytical equation \[v\simeq\left(v_{0}^{2}+\Gamma\frac{8}{3}\sqrt{\frac{m_{e}}{2\pi k_{B}T_{e}}}\Delta t\right)^{1/2} \tag{14}\] can be used to determine \(v\), removing any divergence at low \(v\) in Equations 1 and 10. For such velocities, \(\mu\) can be drawn randomly from an isotropic distribution between -1 and +1. Outside of the coronal collisional region, we use a time step \(\Delta t=1.0\) s and stop updating the velocity terms, as the mean free path in the simulated heliosphere is large enough that further collisional effects can be neglected. Any electrons with kinetic energies below 1 keV are taken to be thermal and have their pitch-angle values frozen, rather than randomised every step, saving computation time otherwise spent propagating electrons below energies of interest.
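As a concrete illustration of Equations 2-4 and 8-12, the following minimal sketch (in cgs units) advances a single electron through one Euler step inside the collisional region. It is a simplified reading of the scheme above, not the authors' production code; the mean free path \(\lambda\) and the focusing length \(L_{z}\) (Equations 5-7) are passed in as precomputed values.

```python
import math
import random

Q = 5.0 / 3.0   # spectral index of magnetic field fluctuations (Kolmogorov)
H = 0.01        # constant keeping D_mumu finite at mu = 0

def erf_prime(u):
    return 2.0 / math.sqrt(math.pi) * math.exp(-u * u)

def chandrasekhar_G(u):
    # Chandrasekhar function, Eq. (9)
    return (math.erf(u) - u * erf_prime(u)) / (2.0 * u * u)

def K_coeff(v, lam):
    # Eq. (4)
    return 3.0 * v / (2.0 * lam * (4.0 - Q) * (2.0 - Q))

def D_mumu(mu, v, lam):
    # pitch-angle diffusion coefficient, Eq. (2)
    return K_coeff(v, lam) * (1.0 - mu**2) * (abs(mu)**(Q - 1.0) + H)

def dD_dmu(mu, v, lam):
    # mu-derivative of D_mumu, Eq. (3); formally singular at mu = 0,
    # so a tiny floor on |mu| is used here as a numerical convenience
    m = max(abs(mu), 1e-12)
    return K_coeff(v, lam) * mu * ((Q - 1.0) * (1.0 - mu**2) * m**(Q - 3.0)
                                   - 2.0 * (m**(Q - 1.0) + H))

def euler_step(v, mu, z, dt, Gamma, v_th, lam, L_z):
    """One first-order Euler step of Eqs. (10)-(12); W_v, W_mu ~ N(0, 1)."""
    u = v / (math.sqrt(2.0) * v_th)
    W_v, W_mu = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    G_u = chandrasekhar_G(u)
    # Eq. (10): collisional drift and diffusion in speed
    v_new = (v - (Gamma / v**2) * (math.erf(u) - 2.0 * u * erf_prime(u) + G_u) * dt
             + math.sqrt(2.0 * Gamma * G_u / v * dt) * W_v)
    # Eq. (11): collisional scattering, non-collisional diffusion, focusing
    scat = math.erf(u) - G_u
    mu_new = (mu - (Gamma * mu * scat / v**3) * dt
              + (dD_dmu(mu, v, lam) + v * (1.0 - mu**2) / (2.0 * L_z)) * dt
              + math.sqrt((2.0 * D_mumu(mu, v, lam)
                           + Gamma * (1.0 - mu**2) * scat / v**3) * dt) * W_mu)
    # the full model resets |mu| > 1 to a random value in [0.89, 1.00);
    # a simple clamp is used here for brevity
    mu_new = max(-1.0, min(1.0, mu_new))
    return v_new, mu_new, z + mu * v * dt   # Eq. (12)
```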
### Initial, plasma and boundary conditions At a single point, \(z=R_{\odot}\), \(10^{6}\) electrons are initialised with a beamed pitch-angle \(\mu_{0}=0.99\), where positive values correspond to a direction outwards from the Sun and \(\mu_{0}=\pm 1.0\) corresponds to motion perpendicular to the solar surface. Since electrons are not accelerated from a thermal population in this initial study, we create a power-law velocity (energy) distribution between the energies of 5 keV and 100 keV, matching the electron energies detected via remote-sensing HXR observations at the Sun, with a power-law index in energy \(E\) of \(\delta=3\) (\(E^{-\delta}\)). The electrons are initialised at the lowest point of a coronal region, simulated for a length of time sufficient for all particles to be ejected into the heliosphere. Heliospheric electrons are then simulated for an additional period of \(t=50,000\) s, after which the majority of the initial distribution have reached positions \(z>1.0\) AU. This procedure is performed for all combinations of coronal region properties. As mentioned, we test the following coronal 'flaring' conditions: temperature \(T\) equal to either 10 MK, 20 MK or 30 MK, electron number density \(n_{e}\) equal to \(1\times 10^{9}\) cm\({}^{-3}\), \(5\times 10^{9}\) cm\({}^{-3}\) or \(1\times 10^{10}\) cm\({}^{-3}\), and region length \(L\) equal to 10 Mm, 20 Mm, 30 Mm or 40 Mm. The kinetic energies and pitch-angles of the population for each set of coronal region properties are examined at various points in space and time. In particular, we will examine the time-integrated electron fluence (spectra) at one location in space, and the flux peak spectra, where the energy spectra are created using the peak flux in each energy bin or channel, at the heliospheric locations of 0.01 AU, 0.4 AU and 1.0 AU. It is assumed that electrons are transported out to empty space, such that no planetary magnetospheres are present in the simulation. As the transport equations do not inherently prevent an unphysical pitch-angle value (\(|\mu|>1\)), care must be taken when modelling particles close to this boundary. In this case they are reset at the same position with a uniformly random pitch angle \(|\mu|\in[0.89,1.00)\). Particles cannot be transported below \(R_{\odot}\) and are perfectly reflected at this boundary. The collisional region is not present in the heliospheric component of the simulation, and so any ejected electrons that would re-enter that range of distances from the Sun are similarly reflected. This does not represent a physical effect, but rather a way to prevent extended thermalisation of particles that may be trapped close to the boundary, without affecting the statistics. We find that in all cases the coronal ejecta population remains relatively anisotropic, such that electrons that do re-encounter the boundary represent a very small proportion of the heliospheric energy spectra. ## 3 Results ### Collisional region plasma properties The ejected electron energy and pitch-angle fluence distributions of an example parameter set are shown in Figure 2. The ejected fluence corresponds to time-integrated distributions measured at one spatial location (similar to spacecraft data), here namely the furthest boundary from the Sun of the simulated collisional region of length \(L\) (\(z=R_{\odot}+L\)).
The ejecta fluence distribution (blue) is compared with the injected beamed power-law population (black) and a thermal distribution corresponding to the ambient plasma temperature (grey dotted) of \(T=20\) MK. Prior to ejection, the population steadily evolves towards a thermal shape, though the high-energy tail (\(>\)10 keV) remains relatively unchanged from injection, as expected in lower density (\(n_{e}=5\times 10^{9}\) cm\({}^{-3}\)) plasma. These electrons are ejected quickly and do not undergo enough collisions to begin to significantly thermalise above 10 keV. The population as a whole remains relatively anisotropic out to the point of ejection, though some pitch-angle scattering is evident, if largely insignificant2. We examine these properties in the respective fluence distributions of all parameter sets in order to determine how they are affected by changing the region size, plasma temperature and density. Footnote 2: Here, there is an obvious bias towards high positive pitch-angles since we inject a beamed distribution, and measurement is taken at the first crossing of a given boundary. However, the aim here is to study the effects of different plasma properties on a given distribution, not to exhaustively examine different pitch-angle injections. In Figure 3 these distributions are compared for the minimum and maximum tested values of \(L\), \(T\) and \(n_{e}\) in turn, fixing the other two parameters at intermediate values. In all parameter sets, the fluence (energy) distribution remains more or less unchanged above 20 keV, as these electrons are so energetic as to be practically unaffected by the plasma prior to ejection for the range of properties tested. As such, the focus of this analysis will be on energies below this approximate value, in particular in the 1-20 keV range, where spectral variations due to hot, over-dense plasma environments should show, if present. Changing the physical size of the region shows the smallest change in the energy and pitch-angle distributions. The smaller region has a fluence peak at a slightly higher energy and fewer electrons ejecting at lower energies. Smaller regions also appear to have somewhat less isotropic ejecta. As the electrons travel across a shorter distance (and so for a shorter time) before ejection, the population has had less time to thermalise than for a larger region, therefore resulting in the distributions retaining more of the features of the non-thermal injected population. Between plasma temperatures of 10 MK (blue) and 30 MK (pink), a higher \(T\) shows a higher energy for the fluence peak. In regions of similar size or equivalent density, hotter ambient temperatures result in the population being thermalised to higher energies before ejection. Such temperatures also appear to make the population less isotropic than for colder regions (since the collisional times are increased in a hotter plasma). In higher density regions, the population evolves more towards a thermal distribution before ejection than for a lower density, becoming more isotropic in the process. This is due to the greater number of collisions an average electron will undergo in this time, making the process of thermalisation more efficient. Conversely, the population in the lower density region has fewer collisions on average and retains more of the injected distribution's shape. The effects of plasma temperature and density are not entirely separable, as both play a role in the electron collisional time and by extension the efficiency of thermalisation.
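This interplay can be made quantitative with the drift term of Equation 10. The following back-of-the-envelope sketch (cgs units, with an assumed Coulomb logarithm \(\mathrm{ln}\Lambda\approx 20\)) estimates a characteristic deceleration timescale \(v/|dv/dt|\) for a 10 keV electron across the tested densities and temperatures; it is an order-of-magnitude illustration only, not a replacement for the full simulation.

```python
import math

M_E = 9.109e-28          # electron mass [g]
E_CHG = 4.803e-10        # electron charge [statC]
K_B = 1.381e-16          # Boltzmann constant [erg/K]
KEV = 1.602e-9           # erg per keV
LN_LAMBDA = 20.0         # assumed Coulomb logarithm

def erf_prime(u):
    return 2.0 / math.sqrt(math.pi) * math.exp(-u * u)

def G(u):
    return (math.erf(u) - u * erf_prime(u)) / (2.0 * u * u)

def decel_time(E_keV, T_K, n_e):
    """Characteristic timescale v/|dv/dt| from the drift term of Eq. (10)."""
    v = math.sqrt(2.0 * E_keV * KEV / M_E)
    u = v / (math.sqrt(2.0) * math.sqrt(K_B * T_K / M_E))
    Gamma = 4.0 * math.pi * E_CHG**4 * LN_LAMBDA * n_e / M_E**2
    drift = (Gamma / v**2) * (math.erf(u) - 2.0 * u * erf_prime(u) + G(u))
    return v / abs(drift)

for n_e in (1e9, 1e10):              # cm^-3
    for T in (10e6, 30e6):           # K; temperature enters through u
        print(f"n_e = {n_e:.0e} cm^-3, T = {T/1e6:.0f} MK: "
              f"tau(10 keV) ~ {decel_time(10.0, T, n_e):.1f} s")
```

Comparing these timescales with the sub-second crossing time of even the largest (40 Mm) region makes clear why only the denser regions appreciably thermalise the \(\sim 10\) keV population before ejection.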
Figure 2: Example ejected electron fluence (blue) versus energy (left) and pitch-angle (right). The injected distribution (power law, beamed) is denoted by the black curve in the left plot and by the dashed black line in the right. The injected distribution travels through a 'flare' region with the following plasma properties: length \(L=30\) Mm, temperature \(T=20\) MK and number density \(n_{e}=5\times 10^{9}\) cm\({}^{-3}\). A thermal distribution corresponding to a plasma temperature of \(T=20\) MK is also shown (grey dotted, left).
In Figure 4 we compare how varying the electron temperature for high and low densities affects the resulting fluence spectra at the collisional region boundary. In the low-density case, there are too few collisions for the distribution to thermalise above the lowest measured energies before ejection. The fluence peaks have approximately the same heights and correspond to the same band of kinetic energy. At higher densities, however, we see a much more significant divergence between the fluence distributions. Low temperatures greatly suppress the fluence peak and decelerate the injected distribution to energies of order \(\sim 1\) keV. For higher temperatures, the fluence peaks have comparable heights to the low density region but show more electrons being thermalised between \(1-10\) keV energies, corresponding to the thermal energy of the ambient plasma. It can be seen that ejecta from low-density regions have little temperature dependence, making it more difficult to constrain the region properties based on ejecta distributions than it would be for a higher density. Differences between lower (10 MK) and higher (\(\geq 20\) MK) temperatures can be clearly distinguished in high density plasma (\(10^{10}\) cm\({}^{-3}\)). However, for any temperature plasma, there are clear spectral differences between the low and high density cases. In Figure 5, for the ejected fluence spectra at the collisional region boundary, we plot the energy at which the fluence spectrum peaks for each studied plasma region varying in \(L\), \(T\) and \(n_{e}\). We determine if such properties are useful to help diagnose the plasma properties of the flare 'acceleration' region. At higher densities the energy at which the fluence peaks drops significantly at all temperatures, though to a lesser degree in hotter regions. Higher temperatures themselves increase this peak energy in regions of equal density, with seemingly diminishing returns at \(L\geq 20\) Mm and \(n_{e}\geq 5\times 10^{9}\) cm\({}^{-3}\). For each \(L\), we see the general pattern of the energy of the fluence peak increasing with hotter plasma temperature and decreasing with increasing plasma number density (with changes due to temperature more evident in higher density regions). In this example, a peak fluence sitting at an energy of \(\sim 5\) keV could correspond to \(T=20\) MK (high flare temperature), with \(L\) ranging between \(20-40\) Mm and \(n\) between \(5\times 10^{9}-1\times 10^{10}\) cm\({}^{-3}\) (i.e., higher density). We will now extend the use of these diagnostics into the heliosphere where such spectra can be observed. ### Extending transport to the heliosphere The distributions for energy, pitch angle, and ejection time generated by the coronal component are taken as injection profiles and iterated out to 1.0 AU.
In order to draw an approximate equivalence with the energy resolution of modern spacecraft in the heliosphere (and archived data), all data discussed after this point will be displayed with 1 and 3 keV binning unless otherwise stated.
Figure 3: Collisional-boundary electron energy (upper) and pitch-angle (lower) fluence distributions independently varying each parameter between their minimum (blue) and maximum (pink) values, while fixing the other parameters at intermediate values. Dotted lines represent the thermal distributions associated with the given temperature values. The fluence peaks at higher energies for smaller, hotter and sparser regions, which also correspond to less isotropic ejecta distributions. As expected, the electron spectrum above 20 keV remains relatively unchanged regardless of region parameters.
Figure 6 shows the fluence and peak flux energy spectra at 1.0 AU for independently varied values of \(L\), \(T\) and \(n_{e}\) (similar to Figure 3). There are minor differences between both distributions for a small and large collisional region, suggesting that region size is difficult to constrain within the selected range. Here, the most significant difference is between colder and hotter regions, with the maximum fluence and peak flux occurring at larger energies as the temperature increases, as expected. It also shows a significant deviation between the low and high temperature regions in the 10-19 keV range that is not as readily seen when other parameters are varied instead. For 20 MK, the density comparison demonstrates a similar (if weaker) pattern, with lower density regions allowing more of the non-thermal injected population to escape into (and propagate through) the heliosphere. In addition to varying the physical parameters of the collisional region, we can also vary the parameters of the analysis itself, accounting for the position of a virtual instrument \(R_{i}\), the total simulated runtime \(\tau_{i}\) and the width of the energy bins \(S\). These are displayed in Figure 7, with fixed values of \(R_{i}=1.0\) AU, \(\tau_{i}=50,000\) s and \(S=3\) keV unless otherwise stated. For the maximum simulated runtime, the fluence across the heliosphere remains relatively unchanged as the majority of electrons will cross all values of \(R_{i}\) that are being compared. The peak flux, on the other hand, is significantly higher closer to the Sun, though the shape of the distribution remains approximately the same. Scattering effects in the heliosphere delay transport along a 1D line out to 1.0 AU, so while the energy values are unchanged and electrons of all energies will arrive at every point along the heliosphere eventually, the rate of arrival is spread out, resulting in a lower peak flux. Reducing the simulated runtime shows a lower number of electrons arriving at 1.0 AU, reducing the overall fluence. The lowest-energy electrons are expected to arrive last, so steadily reducing \(\tau_{i}\) shows up first as a decrease in the low-energy fluence. This is also reflected in the flux peak plot, though to a lesser extent. Enlarging the bin size results in a higher fluence and peak flux per bin, as naturally more energy is collected within a wider range. However, this has the added effect of obfuscating the location of the spectral peaks. Sufficiently high-resolution energy measurement would be required to identify and measure this feature (\(\lesssim 3\) keV binning at energies \(<20\) keV).
Figure 4: The temperature dependence of the ejected electron fluence distributions is compared for minimum (left) and maximum (right) simulated densities, and \(L=30\) Mm. Distribution shape is very similar in all low-density regions, but the spectral differences due to temperature can be clearly seen in higher density cases.
Compiled in Figure 8 are the energies where the maximum values of fluence and peak flux occur, for electron distributions at two heliospheric positions (0.4 AU and 1.0 AU) and integration times (10,000 s and 50,000 s). We also consider how they would manifest in 1 and 3 keV binning schemes comparable to modern spacecraft instrumentation. In addition, we calculate the ratio of the \([7-10)\) to \([4-7)\) keV bin heights for each parameter set. With the exception of the \(R_{i}=1.0\) AU, \(\tau_{i}=10,000\) s case, the peaks and ratios show relatively 'settled' cases where the majority of electrons have reached \(R_{i}\) and little variation can be seen between these parameter sets.
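For reference, this bin-height ratio diagnostic is simple to evaluate from a list of electron energies; a minimal sketch for the fluence case (time-integrated counts recorded at one heliospheric location) follows, using a synthetic power law only for illustration.

```python
import numpy as np

def bin_height_ratio(energies_keV):
    """Ratio of the [7-10) to [4-7) keV channel heights (fluence case:
    simple time-integrated counts recorded at one heliospheric location)."""
    counts, _ = np.histogram(energies_keV, bins=[4.0, 7.0, 10.0])
    return counts[1] / counts[0]

# Example with a synthetic E^-3 power law (inverse-CDF sampling above 4 keV)
rng = np.random.default_rng(0)
E = 4.0 * (1.0 - rng.random(100_000)) ** (-0.5)
print(bin_height_ratio(E))
```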
Compiled in Figure 8 are the energies where the maximum values of fluence and peak flux occur, for electron distributions at two heliospheric positions (0.4 AU and 1.0 AU) and integration times (10,000 s and 50,000 s). We also consider how they would manifest in 1 and 3 keV binning schemes comparable to modern spacecraft instrumentation. In addition, we calculate the ratio of the \([7-10)\) to \([4-7)\) keV bin heights for each parameter set. With the exception of the \(R_{i}=1.0\) AU, \(\tau_{i}=10,000\) s case, the peaks and ratios show relatively'settled' cases where the majority of Figure 4: The temperature dependence of the ejected electron fluence distributions is compared for minimum (left) and maximum (right) simulated densities, and \(L=30\) Mm. Distribution shape is very similar in all low-density regions, but the spectral differences due to temperature can be clearly seen in higher density cases. electrons have reached \(R_{i}\) and little variation can be seen between these parameter sets. At high \(R_{i}\) and low \(\tau_{i}\) the maxima (fluences and peak fluxes) are biased towards higher energies that have been able to reach 1.0 AU in the reduced integration time. In addition, this is the only case where the fluence bin ratios exceed those of the flux ratios, indicating that the peak flux ratios are identifiable within a shorter simulation runtime than the fluence ratios. With regards to the collisional region parameters, there are clear patterns shown in both the peaks and ratios. For the high-resolution 1-keV-binned peak cases, it can be seen that increasing the temperature of the region results in a fluence and flux peaks occurring at a higher energy at both 0.4 and 1.0 AU, though there is relatively little difference when increasing the density at high temperatures. Increasing the density at lower temperatures on the other hand shows a significant drop in peak fluence and flux energies from \(1\times 10^{9}\) cm\({}^{-3}\) to \(5\times 10^{9}\) cm\({}^{-3}\). This indicates that a cold (\(<30\) MK) and dense (\(>1x10^{9}\) cm\({}^{-3}\)) collisional region is an effective decelerator of energetic electrons and evident even out at 1.0 AU. These patterns are somewhat more difficult to discern at 3 keV binning; with the exception of the lowest temperature parameter sets, the high-resolution peaks occupy the same \([4-7)\) keV bin. Instead we compare the relative heights of the \([7-10)\) and \([4-7)\) keV bins for fluence and peak flux, which show the same relations as the higher-resolution maxima. As such, in the absence of sufficiently high-resolution energy binning these ratios are our preferred metric for constraining collisional region properties using heliospheric data. ## 4 Summary and Discussion Locating the acceleration region is fundamental in understanding how and where energetic particles are produced in flares, as well as the relationship between different particle populations observed at the Sun and in interplanetary Figure 5: Peak (in energy) of ejected electron fluence for every simulated parameter set, with each window corresponding to a different region size. The parameter set labels (x-axis) are formatted in order of region temperature and electron density. Increasing the electron density of the region significantly reduces the energy at which the fluence peaks for most region sizes, with a greater degree of deceleration for colder plasmas. space (e.g., plasma environment, magnetic topology). 
Thus, examining whether signatures of hot, over-dense plasma exist in interplanetary electron spectra is vital. This analysis is concerned with so-called 'prompt' events where energetic electrons are evidenced to be directly related to the flare and not to any secondary acceleration mechanism such as a CME shock. Previous studies provide conflicting evidence regarding where in-situ electrons are energised compared to their HXR-emitting counterparts. In certain flares, for example, emerging flux or interchange reconnection (e.g., Battaglia et al., 2023; Wang et al., 2016; Heyvaerts et al., 1977) and/or the presence of a flare-jet structure (e.g., Musset et al., 2020; Krucker et al., 2011; Bain & Fletcher, 2009) may ultimately allow loop-top electrons to escape, while having access to hot, over-dense material usually related to HXR-emitting electrons. Alternatively, or in conjunction, the presence of a turbulent acceleration mechanism (e.g., Stores et al., 2021; Kontar et al., 2017) may both heat the surrounding plasma and energise electrons simultaneously.
Figure 6: Fluence (upper) and peak flux (lower) distributions of the ejected population at 1.0 AU with a simulated energy binning of 3 keV for the minimum and maximum collisional region parameters (shown in each legend). The fluence and flux distributions peak at higher energies for hotter and sparser collisional regions, similar to the region boundary results, with the largest divergence found between minimum and maximum temperature parameter sets.
Here, we analysed how the plasma properties of a hot, over-dense coronal region (including or close to an acceleration region) change the properties of \(\approx 1-20\) keV electrons and examined whether the signature of such plasma properties can be extracted from the electron fluence or peak flux spectrum. In this study, we only present very specific or 'extreme' plasma examples that are easily identifiable in the electron spectrum, unlike lower temperatures (\(<10\) MK) and densities (\(<10^{9}\) cm\({}^{-3}\)), possibly found at higher altitude locations away from the flare site producing HXR-emitting electrons. The results suggest that if interplanetary flare-related electrons are indeed produced in hot, over-dense regions, possibly undergoing continued acceleration in flares (similar to HXR-producing electrons) and within a magnetic topology that allows the escape of such electrons, then we should be able to detect these signatures, and even constrain the plasma properties of such a region. We tested how different bin sizes used for plotting electron fluence or peak flux affected the extraction of such properties, and how electron loss at different locations in the heliosphere can also hide these signatures. Thus, the lack of such 'thermal' signatures would suggest that such particles indeed originate from
If these features exist, then these properties can be readily extracted from modern in-situ instruments at comparable resolutions, meaning that such constraining is currently possible with existing and upcoming observational data. Such information will be extracted by carefully examining individual flares spectra with simulation outputs. The prospect of different plasma conditions changing the properties of an electron population (accelerated out of the thermal plasma) at higher energies (i.e., spectral index) will be studied in detail in Paper II. The model we used in this preliminary analysis is rather artificial; a chosen accelerated electron distribution (single power law) is injected and transported through a hot over-dense region. Paper II will perform more realistic simulation where electrons are accelerated out of different thermal plasma using a turbulent diffusion model (similar to Stores et al., 2023), with the resulting shape of the spectra also dependent on the plasma properties of the acceleration region, and producing a smoother spectral transition from thermal to non-thermal if this region exists within the \(1-20\) keV data. As mentioned, we purposely used higher temperatures and higher densities akin to hot flaring loops to show the types of spectra and diagnostics we may expect to see if such electrons are being produced in regions identical to HXR-emitting electrons. Moreover, we did not exhaustively input a range of electron spectral indices or low energy cutoffs into the simulation. We investigated the effect of independently decreasing the minimum initial energy (down Figure 7: Fluence (upper) and peak-electron flux (lower) spectra of the ejected population for intermediate collisional region plasma properties and varying analytical parameters: the distance of the virtual measurement from the Sun \(R_{i}\) (left), the total integration time of the simulation \(\tau_{i}\) (middle) and the energy bin resolution \(S\) (right). Other analytical parameters are fixed at \(R_{i}=1.0\) AU, \(\tau_{i}=50,000\) s and \(S=3\) keV. The fluence is effectively unchanged across the heliosphere at the maximum integration time with greater peak flux values at all energies closer to the Sun. Shorter integration times show a significant reduction of fluence at 1.0 AU. Bin sizes larger than \(\sim 3\) keV mask the maximum values of peak flux and fluence and make it difficult to identify features at the low-energy ranges (\(<10\) keV) of both distributions. Figure 8: Full collisional-region parameter analysis at 0.4 AU (upper) and 1.0 AU (lower). The energies where the maximum fluence (left) and peak flux (right) occur are shown for 3 keV (thin line) and 1 keV (thick line) resolutions. Ratios of the \([7-10)\) to \([4-7)\) keV bin heights for both fluence (circles) and flux (crosses) are calculated and displayed in the right column. While the size of some 1 and 3 keV bins mask the change in peaks (fluence and peak flux) between parameter sets for regions with temperatures above 10 MK, the bin height ratios demonstrate the same relation for a sufficiently long integration time. to 3 keV) and increasing the spectral index of the injected distribution (up to \(\delta=5\)). The result of both changes was a slight decrease in the energy of peak fluence at 1.0 AU, by 0.5 to 0.7 keV. There is a similar drop in the \([7-10)\):\([4-7)\) keV bin ratios, by 0.1 to 0.2. 
These indicate that making such changes to the injected distribution will result in the fluence distribution at 1.0 AU shifting slightly towards lower energies. As discussed earlier, Kontar & Reid (2009) investigated the transport of solar flare energetic electrons in the heliosphere taking into account the self-consistent generation and absorption of Langmuir waves, the effects of nonuniform plasma, collisions, and Landau damping, and found that such processes lead to a spectral break and flattening below approximately 40 keV, acting similarly to collisional effects in dense plasma and leading to overall flatter spectra at lower energies. For our study, further modelling is required to analyse these effects fully. However, we suggest that such processes should lead to flatter spectra in the \(1-20\) keV range, with any visible peak due to a (partially-)thermal component possibly shifted to lower energies. In real data the flare-accelerated population must be separated from the different populations of the solar wind (core, halo and super-halo) (e.g., Feldman et al., 1975; Pierrard et al., 2001; Wang et al., 2012). Although flares are evidenced by their increased electron flux compared to the background, at \(\approx 1\) keV the flare spectra will start to merge back into the solar wind halo and core (e.g., Pan et al., 1984; Lin, 1985; Wang, 2022) and produce a spectral upturn close to the merging point at \(\approx 1\) keV. Thus, as above, we suggest the \(1-20\) keV range is the optimal range for such analysis. Upturns in the spectral data within the \(1-20\) keV range suggest the presence of a hotter thermal component. Interestingly, in a rare published example of electron peak-intensity energy spectra (Jebaraj et al., 2023), produced by the SolO Energetic Particle Detector (STEP, EPT and HET; Rodriguez-Pacheco et al., 2020), we see a flattening (cf. Kontar & Reid 2009) but then a noticeable upturn in the energy spectrum at lower energies (at around \(10-20\) keV), which may indicate the signature of the accelerating plasma environment. Nevertheless, this provides an excellent example of why this spectral region deserves more attention and analysis. ## Acknowledgments RP & NLSJ gratefully acknowledge the current financial support from the Science and Technology Facilities Council (STFC) Grant ST/V000764/1. The authors acknowledge IDL support provided by STFC. NLSJ is supported by an international team grant "Measuring Solar Flare HXR Directivity using Stereoscopic Observations with SolO/STIX and X-ray Inst" from the International Space Sciences Institute (ISSI) Bern, Switzerland. The data that support the findings of this study are available from the corresponding author upon reasonable request. We thank Professor Eduard Kontar for insightful comments.
2303.04311
Atomic Representations of Local and Global Chemistry in Complex Alloys
The exceptional properties observed in complex concentrated alloys (CCAs) arise from the interplay between crystalline order and chemical disorder at the atomic scale, complicating a unique determination of properties. In contrast to conventional alloys, CCA properties emerge as distributions due to varying local chemical environments and the specific scale of measurement. Currently there are few ways to quantitatively define, track, and compare local alloy compositions (versus a global label, i.e. equiatomic) contained in a CCA. Molecular dynamics is used here to build descriptive metrics that connect a global alloy composition to the diverse local alloy compositions that define it. A machine-learned interatomic potential for MoNbTaTi is developed and we use these metrics to investigate how property distributions change with excursions in global-local composition space. Short-range order is examined through the lens of local chemistry for the equiatomic composition, demonstrating stark changes in vacancy formation energy with local chemistry evolution.
Megan J. McCarthy, Jacob Startt, Rémi Dingreville, Aidan P. Thompson, Mitchell A. Wood
2023-03-08T01:22:19Z
http://arxiv.org/abs/2303.04311v3
# Atomic Representations of Local and Global Chemistry in Complex Alloys ###### Abstract The exceptional properties observed in complex concentrated alloys (CCAs) arise from the interplay between crystalline order and chemical disorder at the atomic scale, complicating a unique determination of properties. In contrast to conventional alloys, CCA properties emerge as distributions due to varying local chemical environments and the specific scale of measurement. Currently there are few ways to quantitatively define, track, and compare 'local' alloy compositions (versus a 'global' label, i.e. equiatomic) contained in a CCA. Molecular dynamics is used here to build descriptive metrics that connect a global alloy composition to the diverse local alloy compositions that define it. A machine-learned interatomic potential for MoNbTaTi is developed and we use these metrics to investigate how property distributions change with excursions in global-local composition space. Short-range order is examined through the lens of local chemistry for the equiatomic composition, demonstrating stark changes in vacancy formation energy with local chemistry evolution. ## Introduction The basic metallurgical idea behind complex concentrated alloys (CCAs) is two-fold: the maximization of the solid solution strengthening effect and phase stabilization via configurational entropy versus enthalpy. Composed of three or more metallic elements, CCAs represent a family of alloys with a large composition space for property improvement to be explored. Many physical metallurgy research groups have explored this space and ventured away from the reference equiatomic composition to seek novel compositions that yield improved properties ranging from high temperature strength [1; 2; 3; 4; 5; 6], to wear resistance [7; 8], to corrosion and oxidation resistance [9; 10; 11], to magnetic properties [12; 13; 14]. For instance, Startt and coworkers [6] investigated how incremental changes in elemental compositions affected thermomechanical properties in the MoNbTaTi refractory CCA, finding that increases in the concentration of Mo lead to significantly improved yield strength, while higher Ti concentrations lead to greater ductility. Similarly, Zhao and colleagues [12] showed that increasing the amount of Co in a CoCrCuFeMnNi alloy resulted in a higher magnetization saturation moment and an improved soft magnetic response. While most studies use only the global alloy composition, \(c_{\text{global}}\), to characterize and describe an alloy and its properties, the reliance on a single global descriptor can be problematic. Though they exist in ordered crystalline states, CCAs typically possess a large degree of intrinsic chemical disorder, leading to large fluctuations in micro- and nanoscale properties. Those properties can in turn be highly sensitive to local environment and short-range ordering effects. Such local/global duality adds a new dimension and challenges to the atomic representation of the huge compositional space in these multicomponent alloys. Atomistic modeling techniques such as density functional theory (DFT) and molecular dynamics (MD) offer ways to quantify the connections between local and global chemistry in these complex alloys. Recent developments in the training and validation of machine-learned interatomic potentials (ML-IAPs) have made CCAs more accessible in MD, allowing for deeper investigation into local and global property relationships. 
Several ML-IAPs have already been developed to study MoNbTaW-based quaternary and quinary refractory BCC alloy systems and have been used to study various CCA properties such as segregation and defect formation [15], strengthening mechanisms [16], and dislocation mobility [17]. In all cases, both the ML-IAP training procedures as well as subsequent analysis were centered around global composition descriptions, potentially obscuring important trends that may arise from local chemical fluctuations. These descriptive limitations arise from the lack of a unified language or system to describe variability in chemistry. In this work, we are focused on the intricate relationships between the local and global chemistry in complex alloys. We are particularly interested in the implications these relationships have in estimating alloy properties, validating CCA interatomic potentials' performance through composition space, and in training improved CCA ML-IAPs in the future. To do this, we begin by defining the composition space of a quaternary CCA (elements labeled A, B, C, and D) in Fig. 1, where the concept of local and global duality is illustrated. In Fig. 1a, the tetrahedron represents the entire composition space spanned by our quaternary alloy. The inner volume of this tetrahedron contains all possible quaternary compositions and the center of the tetrahedron represents the equiatomic composition (black star). The vertices, edges, and faces of the tetrahedron correspond to pure, binary, and ternary compositions, respectively. From a macroscopic perspective, a global composition, \(c_{\text{global}}\), corresponds to a single point within this tetrahedron for an infinitely large random solid solution structure, such as that in Fig. 1b. Conversely, Fig. 1c illustrates the concept of a _local composition_, \(c_{\rm local}\), which can be defined as the concentration of elements found within a spherical volume element of radius \(r_{\rm cut}\) centered on any one atom or lattice site, \(i\). Mapping all local compositions for a given \(r_{\rm cut}\) from all atoms in a structure yields a point cloud of \(c_{\rm local}\) values centered around the parent structure's \(c_{\rm global}\), similar to how averaging all atoms' local compositions yields \(c_{\rm global}\). Varying \(r_{\rm cut}\) changes the number of atoms locally sampled and thus the density of the point cloud, as shown in the two tetrahedra of Fig. 1c. The variability in \(c_{\rm local}\) depends on \(r_{\rm cut}\) and will eventually dwindle and converge to \(c_{\rm global}\) as the cutoff increases to infinity. The relationship between these two perspectives can be quantified by three characteristic metrics: a composition deviation, \(\lambda\), which measures the distance between two compositions (either global and/or local); a volume fraction, \(V_{\rm f}\), that quantifies the amount of composition space a point cloud spans in the local composition tetrahedron; and the cutoff radius, \(r_{\rm cut}\), which defines a physical scale of interest as well as the chemical resolution tying both perspectives together. The equations corresponding to each metric are included in the Methods below. In what follows, we demonstrate how \(\lambda\) and \(V_{\rm f}\) can be used to draw connections between local and global compositions, and how material properties in CCAs can be affected by deviations in composition at the local scale.
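To make these definitions concrete, the following minimal Python sketch (illustrative, not the authors' code) builds a small random solid solution and computes \(c_{\rm local}\), \(c_{\rm global}\), and \(\lambda\) as defined above. The BCC builder, the random equiatomic decoration, and the inclusion of the central atom in each sampling sphere are assumptions made here for illustration.

```python
# Minimal sketch of the local-composition bookkeeping: c_local per atom within
# r_cut, their average c_global, and the composition deviation lambda.
# Assumes a BCC random solid solution; lattice constant and cell size are
# placeholder values.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
a0, s, n_elems = 3.2546, 10, 4                    # lattice constant (Angstrom), replicas

# Build an s x s x s BCC cell: corner sites plus body-centered sites.
grid = np.stack(np.meshgrid(*[np.arange(s)] * 3, indexing="ij"), -1).reshape(-1, 3)
pos = np.vstack([grid, grid + 0.5]) * a0
types = rng.integers(0, n_elems, len(pos))        # random equiatomic decoration

def local_compositions(pos, types, r_cut, box, n_elems=4):
    """c_local for every atom: element fractions inside a sphere of radius r_cut
    (periodic boundaries; the central atom is counted, an assumption here)."""
    tree = cKDTree(pos % box, boxsize=box)
    neighbors = tree.query_ball_point(pos % box, r_cut)
    counts = np.array([np.bincount(types[i], minlength=n_elems) for i in neighbors])
    return counts / counts.sum(axis=1, keepdims=True)

c_local = local_compositions(pos, types, r_cut=6.0, box=s * a0)
c_global = c_local.mean(axis=0)                   # average of all local environments
lam = np.linalg.norm(c_local - c_global, axis=1)  # deviation of each c_local
print(c_global.round(3), float(lam.max()))
```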
We start by building an ML-IAP for a four-component CCA using a training database generated from ab-initio calculations. The MoNbTaTi alloy is a BCC refractory CCA that has been shown to exhibit not just excellent strength and elastic stiffness but is also prone to significant deviations in strength when incremental changes are made to elemental compositions [6]. Thus, it is hypothesized that local composition fluctuations could prove to have significant effects on material properties and behaviors, making this CCA an ideal choice for this present study. We then use this trained ML-IAP to illustrate how to sample a representative local chemistry and the implications of this sampling on scale-dependent property measurement across composition space. Finally, we examine how local-global composition sampling changes when incorporating CCA chemical effects such as short-range order. ## Results **Multi-compositional refractory CCA ML-IAP for MoNbTaTi.** In this section, we describe the fitting process and the performance of the MoNbTaTi CCA ML-IAP used in the following sections for analysis. The primary target in fitting this potential was to ensure accuracy across a wide range of global composition space, with a particular focus on replicating DFT measured elastic properties. Because elasticity is a property measured over an entire simulation cell, and not locally, we did not apply the local composition analysis techniques outlined above to optimize the fitting process. Instead, we followed the global composition sampling scheme detailed in the Methods. For creating fitting data, the global composition was systematically varied via a pseudo-equiatomic composition sampling scheme by adjusting the concentration of one element, labeled _element_\(A\), against the other three elements, labeled _elements B, C, and D_. This sampling scheme follows that of the black dotted line in Fig. 1a, illustrating that all global composition training points fall along single lines connecting each vertex (corresponding to a pure element) to its opposing tetrahedral face (corresponding to an equiatomic ternary alloy completely absent of the vertex element). Each DFT training structure was averaged over four special quasirandom structures (SQS) to approximate bulk mixing character. Starting from the equiatomic composition in the center, each component element A was depleted to 12.5% at minimum and enriched to 50.0% at maximum. For the purposes of this work, we selected one well-performing ML-IAP generated using the above parameters, though many thousand ML-IAPs were generated and tested. A schematic of the fitting process is shown in Fig. 2a, consisting of two major parts: a linear regression which connects the DFT data to LAMMPS bispectrum calculations, run in the software FitSNAP [18] and a genetic algorithm to optimize SNAP hyperparameters, run in the DAKOTA software [19]. 
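The outer optimization loop of Fig. 2a can be summarized schematically. The sketch below is a heavily simplified, hypothetical stand-in for the FitSNAP + DAKOTA workflow: `fit_and_score` is a placeholder for a full SNAP regression followed by LAMMPS property tests, and the hyperparameter names and bounds are illustrative only.

```python
# Schematic stand-in for the single-objective genetic-algorithm loop of Fig. 2a.
# In the real workflow, FitSNAP performs the regression and LAMMPS runs the
# property tests; fit_and_score below is a hypothetical placeholder objective
# so that the control flow is runnable on its own.
import random

HYPER_BOUNDS = {"r_cut": (3.5, 6.0), "w_Mo": (0.1, 2.0), "w_Ti": (0.1, 2.0)}

def fit_and_score(h):
    # Placeholder objective: would return a weighted error over energies,
    # forces, and elastic-modulus slopes for the candidate hyperparameters.
    return sum((h[k] - 0.5 * (lo + hi)) ** 2 for k, (lo, hi) in HYPER_BOUNDS.items())

def mutate(h, scale=0.1):
    k = random.choice(list(h))
    lo, hi = HYPER_BOUNDS[k]
    child = dict(h)
    child[k] = min(hi, max(lo, child[k] + random.gauss(0.0, scale * (hi - lo))))
    return child

population = [{k: random.uniform(*b) for k, b in HYPER_BOUNDS.items()}
              for _ in range(20)]
for generation in range(50):
    population.sort(key=fit_and_score)            # rank by objective
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]
best = min(population, key=fit_and_score)
print("best hyperparameters:", best)
```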
More details on each part, and the specifics of fitting to multiple compositions, are included in the Methods.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \hline **Composition** & **Property** & **DFT** & **SNAP** & **Error (\%)** \\ \hline \multirow{8}{*}{Equi.} & \(\mathbb{C}_{11}\) [GPa] & 239.6 & 255.3 & 6.6\% \\ & \(\mathbb{C}_{12}\) [GPa] & 129.7 & 139.7 & 7.7\% \\ & \(\mathbb{C}_{44}\) [GPa] & 37.8 & 39.1 & 3.4\% \\ & \(B\) [GPa] & 165.7 & 178.4 & 7.7\% \\ & \(G\) [GPa] & 43.6 & 42.7 & 1.0\% \\ & \(E\) [GPa] & 120.1 & 126.4 & 5.2\% \\ & \(\nu\) & 0.379 & 0.382 & 0.8\% \\ \hline \multirow{8}{*}{Mo 12.5 at.\%} & \(\mathbb{C}_{11}\) [GPa] & 234.3 & 233.8 & 0.2\% \\ & \(\mathbb{C}_{12}\) [GPa] & 134.6 & 134.3 & 0.2\% \\ & \(\mathbb{C}_{44}\) [GPa] & 35.6 & 36.0 & 1.0\% \\ & \(B\) [GPa] & 167.8 & 167.5 & 0.2\% \\ & \(G\) [GPa] & 40.8 & 40.1 & 0.5\% \\ & \(E\) [GPa] & 113.2 & 113.6 & 0.4\% \\ & \(\nu\) & 0.388 & 0.387 & 0.2\% \\ \hline \multirow{8}{*}{Mo 50 at.\%} & \(\mathbb{C}_{11}\) [GPa] & 334.4 & 312.5 & 6.5\% \\ & \(\mathbb{C}_{12}\) [GPa] & 147.3 & 149.3 & 1.3\% \\ \cline{1-1} & \(\mathbb{C}_{44}\) [GPa] & 50.331 & 50.0 & 0.6\% \\ \cline{1-1} & \(B\) [GPa] & 209.7 & 203.7 & 2.9\% \\ \cline{1-1} & \(G\) [GPa] & 64.7 & 60.9 & 5.8\% \\ \cline{1-1} & \(E\) [GPa] & 175.9 & 166.2 & 5.5\% \\ \hline \hline \end{tabular} \end{table} Table 1: Elasticity value comparisons from the MoNbTaTi dataset and the SNAP model for the equiatomic random solid solution and an example at Mo enriched to 50 at.%. Each value was averaged over four DFT special quasirandom structure calculations and five SNAP calculations in MD.

Errors on the energies and forces for this ML-IAP are 12 meV and 176 meV/Å, respectively. Representative model errors for elastic constants and moduli for the equiatomic and two Mo compositions (Mo at 12.5 at.% and 50 at.%) are shown in Table 1. All errors for the equiatomic and for a great number of other fitted compositions remain in general below 10% of their respective DFT values and frequently near or below 5%. Fig. 2b plots the trained model's bulk modulus \(B\) and shear modulus \(G\) for 29 selected compositions, comparing the SNAP ML-IAP (curves) against DFT values (symbols). Data points from the DFT training (symbols) are averaged over four SQS elasticity calculations (taken from Startt et al. [6]), and the trained SNAP ML-IAP values (curves) are averaged over five MD calculations. The error bars of the SNAP curves show the standard deviation over five modulus calculations. Note that, because the DFT and SNAP values for the equiatomic calculations had \(<1\%\) error, the symbols have been left off for clarity. The horizontal axis of these plots follows the concentration of element A, whose species assignment is indicated by colors and symbols. In general, the ML-IAP moduli closely match the expected DFT values for the majority of the trained global alloy compositions. Depletion or enrichment of Mo, Nb, and Ti from the equiatomic composition is especially well fit. The largest errors are found in alloys with Ta at the highest (approx. 20% error at 50 at.% Ta) and lowest concentrations (approx. 8% error at 12.5 at.% Ta). **Representative Sampling of Chemistry.** Given the availability of an IAP, one of the key advantages of MD simulations is their ability to scale.
By dint of its larger possible size, a single MD simulation cell (\(N_{\rm MD}\leq 10^{10}\), \(\leq 1~\mu\)m\({}^{3}\)) can sample a vastly wider range of local compositions than even a large ensemble of DFT-sized ones (\(N_{\rm DFT}\leq 10^{4}\), \(\leq 10\) Å\({}^{3}\)). This range means that, for a given global composition \(c_{\rm global}\) of a random solid solution (_i.e._ Mo\({}_{\rm x}\)Nb\({}_{\rm y}\)Ta\({}_{\rm z}\)Ti\({}_{\rm 1-x-y-z}\)), a large fraction of \(N_{\rm MD}\) will contain many high-probability local compositions (\(c_{\rm local}\)) near \(c_{\rm global}\), as well as extreme values of \(c_{\rm local}\) that deviate from \(c_{\rm global}\). The first effort herein is focused on quantitatively defining the length scales at which \(N_{\rm MD}\) can be considered a representative volume of some average chemistry, \(c_{\rm global}\). Caveats to consider are then: would it suffice to examine only the most common environments? What represents an uncommon environment? And how different is it from the most common ones? In this section, we illustrate how metrics of distance and volume within composition space, represented by \(\lambda\) and \(V_{\rm f}\), can be used to answer these and related questions. The first task is to understand the distribution of \(c_{\rm local}\) around a single \(c_{\rm global}\). We define the scalar distance metric \(\lambda=|c_{\rm global}-c_{\rm local}|\), taken element-wise, that maps any arbitrary local composition's deviation from the global one. Its maximum value is \(\sqrt{2}\) (the distance between any two pure elements using the Euclidean norm), and its minimum value is determined by \(r_{\rm cut}\). The cumulative distribution function (CDF) of \(\lambda\) for three different \(r_{\rm cut}\) values is shown in Fig. 3a. Reflective of the qualitative local perspective data displayed in Fig. 1c, it is clear in Fig. 3a that the median \(\lambda\) is inversely proportional to \(r_{\rm cut}\). This makes intuitive sense: when fewer atoms are collected in calculating \(c_{\rm local}\), the concentration changes more drastically with a single atom swap.

Figure 1: Illustration of global and local composition spaces for a quaternary alloy of generic elements A, B, C, and D. **a** Global composition perspective. The location of a quaternary alloy's equiatomic composition in the phase tetrahedron is shown with a star. The vertices, edges, and faces of the tetrahedron correspond to pure, binary, and ternary compositions, respectively. The dotted black line tracks the changes in one element's concentration from pure A at the vertex, to a CCA enriched in A (square), to one depleted of A (triangle), to a ternary alloy consisting of B, C, and D. The gray dotted line starting at element D is included as a guide for the eye. **b** An example equiatomic random solid solution structure with elements A, B, C, and D. The mapping of local compositions \(c_{\rm local}\) for each atom \(i\) for different values of \(r_{\rm cut}\) is shown in panel **c**. **c** Local composition perspective. An atom \(i\)'s local composition is calculated by taking the concentration of all elements found within a sampling radius \(r_{\rm cut}\) around said atom. Different values of \(r_{\rm cut}\), here 4 Å (top) vs. 6 Å (bottom), capture different numbers of neighboring atoms. Mapping all of a structure's atoms' local compositions results in point cloud sizes that depend on \(r_{\rm cut}\).
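Both diagnostics used in the following analysis, the empirical CDF of \(\lambda\) and the duplicate count \(\delta\) introduced next, reduce to a few lines of bookkeeping. The sketch below assumes `c_local` and `lam` arrays shaped as in the earlier snippet.

```python
# Sketch of the Fig. 3a-b diagnostics: the empirical CDF of lambda and the
# duplicate-composition count delta. Assumes c_local (N x 4) and lam (N,)
# arrays as produced by the earlier local-composition snippet.
import numpy as np
from collections import Counter

def lambda_cdf(lam):
    """Empirical CDF: fraction of environments with deviation <= each value."""
    x = np.sort(np.asarray(lam))
    return x, np.arange(1, len(x) + 1) / len(x)

def duplicate_counts(c_local, decimals=6):
    """delta: how many atoms share each exact (discrete) local composition."""
    keys = (tuple(row) for row in np.round(c_local, decimals))
    return Counter(keys)

# x, F = lambda_cdf(lam)             # one CDF curve per r_cut, as in Fig. 3a
# delta = duplicate_counts(c_local)  # delta versus lambda, as in Fig. 3b
```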
Additionally, these single-atom composition changes also explain the more jagged nature of the CDF for smaller \(r_{\rm cut}\). From the discreteness of single-atom changes to \(c_{\rm local}\) and the on-lattice measurement of \(\lambda\) comes another metric, the _duplicate local compositions_, \(\delta\), which is a count of the number of atomic environments that have identical local composition values. Fig. 3b shows \(\delta\) as a function of \(\lambda\) for different values of \(r_{\rm cut}\). Projected into the composition tetrahedron, the inset shows local composition data for \(r_{\rm cut}=6\) Å with a log-scaled color bar. For \(r_{\rm cut}=4\) Å, the smallest possible compositional change is \(\sim\)5.77 at.%, where this definition of length scale corresponds to the two nearest-neighbor shells in BCC. The smaller cutoff radius results in higher degeneracy of observed \(c_{\rm local}\) and coarser \(\lambda\) spacing. Though duplicate compositions are chemically degenerate, they are likely to be structurally unique, just as is true for global compositions as a whole. The differences between each duplicate environment's atomic arrangements give rise to a distribution of atomic energies (as predicted by the SNAP ML-IAP developed herein) per \(c_{\rm local}\). Due to the coarse chemical representation of short cutoff distances, the volume contained by a bounding surface of \(c_{\rm local}\), \(V_{\rm f}\), shown in Fig. 3c, also increases. As the occurrence of extreme local chemistries is more probable at short cutoff distances, these outlier points are filtered out by a modest 0.5% rejection of the highest \(\lambda\) values (Fig. 3c, solid lines). Compositional volume fraction \(V_{\rm f}\) data are plotted against the size of a cubic simulation cell, where the number of unit cell replicas, \(s\), spans atom counts capable of being represented in DFT, 54 (\(s=3\)), to those only possible in MD, \(\sim 1\cdot 10^{6}\) (\(s\geq 70\)). Sampling of \(\lambda\) is taken from 5 unique random solid solution cubes per data point, each with a side length of \(s\) unit cells. Shaded bands around each line and point encompass the standard deviation for each \(r_{\rm cut}\). At small cell sizes, before \(V_{\rm f}\) has been saturated, not enough unique \(c_{\rm local}\) configurations will be present for the simulation cell to be chemically representative of \(c_{\rm global}\). The apparent lack of saturation in the data for \(r_{\rm cut}=4\) Å is mathematically explained by the probability for \(c_{\rm local}\) to be a single element species. The probability that the element labels of the 15 nearest atoms, drawn at random, all equal the central atom type is \(p=(\frac{1}{4})^{15}\); multiplied by the \(10^{7}\) atomic environments sampled for \(s=70\), this is not a vanishingly small probability (\(\approx 0.93\%\)). The same random draw is vanishingly unlikely for \(r_{\rm cut}=6\) Å, which can therefore safely be assumed to yield a complete chemically representative volume element.

Figure 2: **a** Schematic of the ML-IAP fitting process. The FitSNAP software [18] performs regression while LAMMPS is used to calculate descriptors. The resulting fit is run through a series of tests, whose errors are used to inform a single-objective genetic algorithm provided by DAKOTA. By iterating thousands of times, the genetic algorithm is able to isolate ML-IAPs with optimal material properties and excellent dynamic stability. **b** Calculations of the bulk modulus \(B\) (top) and shear modulus \(G\) (bottom) for selected compositions using the SNAP MoNbTaTi model. Solid lines follow the SNAP-calculated values for enrichment or depletion of each element in the refractory CCA, where corresponding shading indicates the standard deviation. The symbols show averaged data from the DFT training, taken from Startt et al. [6].

Filtering out the largest 0.5% of \(\lambda\) values (\(V_{\rm f}^{99.5\%}\)) leads to a significant drop in volume fraction, as shown by the solid lines in Fig. 3c. Fig. 3d captures saturated \(V_{\rm f}^{99.5\%}\) values while moving along a tie line of one element A (e.g., where A = Mo, then \(\rm Mo_{x}(NbTaTi)_{1-x}\)). There it can be seen that the chemically representative volume element is more compact for compositions near a dilute single component or ternary composition of the CCA. For example, the chemically representative volume element for random solid solution equiatomic cubes sampled with \(r_{\rm cut}=6\) Å is \(V_{\rm f}^{99.5\%}=0.15\), located directly in the center of the phase tetrahedron. In contrast, chemically representative volume elements for near-ternary alloys (where the concentration of element A falls below \(\sim\)10 at.%) or element-A-enriched alloys (increasing at.% of element A) are comparatively smaller than the equiatomic one. The same measure taken for a \(c_{\rm global}\) near a single-element vertex would result in a comparatively small \(V_{\rm f}^{99.5\%}\), given the fewer number of possible local chemical arrangements. It is also important to note that, though we have chosen to filter the largest 0.5% of \(\lambda\) values, one could also choose heavier filtering, which would decrease \(V_{\rm f}\) proportionally. As a visual guide, the two inset images of Fig. 3d show two cases of the bounding surface formed by the \(c_{\rm local}\) values: either the full CDF is used (top inset), resulting in a rough surface, or the lowest 99.5% of \(\lambda\) values are used (bottom inset), yielding a smooth surface. The exact choice of a saturation filter is less important than having the ability to make informed decisions about the degree of chemical accuracy desired for CCA modeling and simulation. Taking this quantitative measure one step further, the _complexity_ of an alloy can be defined by its \(V_{\rm f}^{99.5\%}\), a definition that has merit for alloys with fewer as well as more elements. While visualizing the volume contained by the collection of \(c_{\rm local}\) in a composition space of five or more elements is challenging, using these relatively simple measures of complexity makes comparisons possible. As will be explored in an upcoming section, complexity metrics also enable an assessment of the degree of chemical change within a single \(c_{\rm global}\). For example, where short-range order is present in a given CCA, \(\lambda\), \(\delta\) and \(V_{\rm f}\) will reflect the unique evolution with respect to an equivalent random solid solution at the same \(c_{\rm global}\). Furthermore, when selecting a computational method for predicting properties of CCAs, these chemically representative volume element measures confirm that DFT-sized simulation cells, which have maximal atom counts on the order of \(10^{2}\) (\(s\leq 5\)) for a single cell, sample far too small a fraction of the chemically representative volume element (which includes all duplicates per local composition \(\delta\)) to capture global CCA properties accurately.
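A compositional volume fraction of the kind plotted in Fig. 3c-d can be sketched with a standard convex-hull routine. The barycentric projection to 3-D used below is one standard choice and an assumption on our part; the normalization divides by the unit-tetrahedron volume of 1/3 (Eq. 3 of the Methods), and `keep=0.995` reproduces the 0.5% \(\lambda\) rejection.

```python
# Sketch of V_f and its filtered variant V_f^{99.5%}. The 4-component
# compositions are projected onto a regular tetrahedron of edge sqrt(2)
# (matching the distance between pure elements), and the hull volume is
# normalized by the unit-tetrahedron volume of 1/3.
import numpy as np
from scipy.spatial import ConvexHull

# Vertices of a regular tetrahedron with edge length sqrt(2).
VERTS = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 0.0],
                  [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])

def volume_fraction(c_local, lam=None, keep=1.0):
    """Volume fraction spanned by the c_local point cloud; keep=0.995
    discards the largest 0.5% of lambda values before building the hull."""
    c = np.asarray(c_local, dtype=float)
    if lam is not None and keep < 1.0:
        c = c[np.asarray(lam) <= np.quantile(lam, keep)]
    points = c @ VERTS                     # barycentric -> 3-D coordinates
    return ConvexHull(points).volume / (1.0 / 3.0)

# v_all = volume_fraction(c_local)                  # rough hull, all points
# v_995 = volume_fraction(c_local, lam, keep=0.995) # V_f^{99.5%}
```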
Having determined a reasonable simulation cell size from which to properly sample CCA properties (i.e., \(s\simeq 40\)), we now turn to predictions of material properties that can be reasonably assumed to be of the same length scale as these \(r_{\rm cut}\) definitions. **Scale-Dependent Property Measurement.** Material properties in CCAs are generally expressed as distributions rather than as single, averaged values typical of chemically simpler systems. The question then becomes whether this property distribution is explained by the \(c_{\rm local}\) distribution defined herein. This assertion is critically dependent on the characteristic length scale of the property of interest, and whether that length scale demonstrates the chemical variability discussed previously. Example properties and their associated length scales could include: elastic moduli that encompass the entire chemically representative volume element, dislocations that sample as many \(c_{\rm local}\) as atoms along the line, or point defects that may (depending on the strain field generated) only reflect a single \(c_{\rm local}\). In order to enable high-throughput alloy design in CCAs, we need an ability to characterize which properties are sensitive to the distribution of \(c_{\rm local}\), or can be assigned to \(c_{\rm global}\). A material defect whose properties are highly spatially localized is the vacancy formation energy. It has been shown that vacancy formation and migration energies in CCAs have varied distributions [20], which can, for example, affect the evolution of radiation damage cascades [21; 22]. Utilizing the SNAP ML-IAP described above and in the Methods, we sample the vacancy formation energy for each element species, measuring \(\lambda\) in order to address how properties are distributed around a given \(c_{\rm global}\). The local chemical definition coincides with the interaction range of the ML-IAP where \(r_{\rm cut}=6\) Å. This distance sufficiently resolves the strain field at a vacancy, where the outermost (see Fig. 1b) neighbor shell shows local strain values less than \(6\cdot 10^{-4}\) as measured by tools made available in OVITO [23]. Fig. 4 shows the vacancy formation energy, \(E_{\rm vac}\), of removed Nb atoms versus \(\lambda\), with colors indicating CCA compositions that are Nb-depleted and Nb-enriched in panels a and b, respectively.

Figure 3: **a** CDF of \(\lambda\) for three values of \(r_{\rm cut}\). **b** Duplicate local compositions \(\delta\) versus the deviation from the global composition \(\lambda\) for each \(r_{\rm cut}\). Inset is of s = 40 and \(r_{\rm cut}=6\) Å, with the color representing the number of duplicates \(\delta\). **c** Volume fraction \(V_{\rm f}\) as a function of simulation cell size, s. Dashed lines show \(V_{\rm f}\) calculated with all values of \(\lambda\) (as in the top inset of panel **d**), solid lines where the highest 0.5% of \(\lambda\) values are excluded. Excluding 0.5% of \(\lambda\)'s largest values not only significantly lowers \(V_{\rm f}\), but also stabilizes it to a constant value for all \(r_{\rm cut}\) shown. **d** Volume fraction \(V_{\rm f}=V_{\rm f}^{99.5\%}\) for a range of compositions included in the training set (see Fig. 1a for reference), using \(s=40a_{0}\) and \(r_{\rm cut}=6\) Å averaged over 5 runs for one example element. Insets show the convex hull geometry of \(r_{\rm cut}=6\) Å with all \(\lambda\) included (top), and with the largest 0.5% \(\lambda\) excluded (bottom).
Details on vacancy formation energy calculations can be found in the Methods. The contour plots indicate regions of similar data point density, calculated using a kernel density function. The outermost contour contains 90% of data points, with each line thereafter delineating a 20% change in point density up to the innermost line, which encompasses the final 1% of data. This smallest contour indicates the most probable values from the \(\lambda\) and \(E_{\rm vac}\) distributions. The univariate distribution plots provide another means of interpreting point densities with respect to their parallel axes. We keep the equiatomic composition included in both plots as the aforementioned baseline CCA distribution example. In Fig. 4a, the Nb-depleted compositions, which move toward the MoTiTa ternary face in the composition tetrahedron, show very little change in this localized property with respect to the composition deviation and maintain an \(E_{\rm vac}\) closer to that of the average equiatomic CCA (3.40 eV) than to that of pure Nb (2.76 eV). The converse is true for the enriched side of the tie line. The composition deviations gradually decrease in tandem with the property measurement, converging on the pure element's value. Given the locality (as determined by the rapidly diminishing strain field) of a vacancy, it is reasonable to attribute the formation energy to the structure change within the range of the \(c_{\rm local}\) definition. Therefore, it is incorrect to assign \(E_{\rm vac}\) to \(c_{\rm global}\). Rather, each \(c_{\rm local}\) carries its own property value and the global composition only determines the range of compositions where that property is averaged based on the chemically representative volume element. If a larger \(r_{\rm cut}\) were used to define the property measurement (and thus the local composition), it would, in a random solid solution, have the effect of concentrating \(c_{\rm local}\) at smaller values of \(\lambda\). But this effect is subverted by allowing for short-range order to develop. In a chemically-ordered CCA structure, more exotic local environments will be expressed in comparison to a random solid solution. As will be shown in the following section, these environments give even more merit to assigning properties to local chemistries than to globally averaged values. **Local composition and short-range order.** As useful as a _truly random_ solid solution is for general modeling purposes, it is unlikely to accurately reflect the equilibrium state of real CCAs. Far more probable is the development of at least some degree of short-range order [24; 25; 26]. Though small in CCAs by design, finite differences in chemical potential between species imply that some reordering is highly probable, especially given factors such as intrinsic defects and thermo-mechanical processing steps that can activate such reordering. In terms of the metrics defined above, even mild degrees of short-range order evolution will significantly change the geometry of the local composition point cloud. The values of \(\lambda\) that are chemically representative, \(V_{f}\), \(\delta\), and what constitutes an outlier will shift in turn. Critically, these all will change even for a single global composition label. This means that, for CCAs, global composition not only obscures important scale-dependent property measurements for the random solid solution, but also lacks descriptive power in capturing shifts in properties due to chemical interactions.
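For readers reproducing contour plots like Fig. 4, the density levels that enclose fixed fractions of the data points follow directly from a kernel density estimate. The sketch below is illustrative and assumes paired `lam` and `e_vac` arrays; plotting itself is omitted.

```python
# Sketch of the kernel-density bookkeeping behind contour plots like Fig. 4:
# estimate the joint density of (lambda, E_vac) and find iso-density levels
# enclosing chosen fractions of the samples.
import numpy as np
from scipy.stats import gaussian_kde

def density_levels(lam, e_vac, fractions=(0.9, 0.7, 0.5, 0.3, 0.1)):
    """Density at each sample plus the thresholds enclosing given fractions."""
    data = np.vstack([lam, e_vac])
    kde = gaussian_kde(data)
    d = kde(data)                                   # density at the samples
    # A contour at the (1 - f) quantile of d encloses roughly a fraction f.
    levels = {f: float(np.quantile(d, 1.0 - f)) for f in fractions}
    return d, levels

# d, levels = density_levels(lam, e_vac)
# A contour of the KDE drawn at levels[0.9] then encloses ~90% of the points.
```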
In this final section, we will use the above-defined metrics to explore how short-range order alters both local composition space and CCA properties. This portion of the study is critically dependent on the availability of an accurate energy model, even for extrapolative local chemistries. For the MoNbTaTi system specifically, hybrid Monte Carlo/molecular dynamics (MC/MD) and DFT studies of CCAs containing Mo, Ta, and Nb show that Mo and Ta strongly favor each other as neighbors and can form B2 phases [27; 15; 28; 16]. Hybrid MC/MD simulations of the SNAP MoNbTaTi featured in this work show exactly the same trend, which is clearly reflected in changes in the local composition point cloud as compared to an initially randomly-ordered sample. Details of the setup of the hybrid MC/MD simulations are included in the Methods. Fig. 5 demonstrates how chemical reordering in an equiatomic sample is reflected in local composition space through time, using \(r_{\rm cut}=6\) Å and a simulation cell with 128,000 atoms (\(s=40\)). Initially, the structure begins as a random solid solution (Fig. 5a), reflected in the spherical shape of the local composition point cloud. Note that the colors of the points reflect the species of the central atom being sampled. Fig. 5b tracks the evolution of both \(V_{f}^{99.5\%}\) and the cell's Warren-Cowley parameters [29] for the first nearest-neighbor shell \(\alpha_{1}\). Near-zero values of \(\alpha_{1}\) indicate random ordering, negative values indicate species pair attraction, and positive values indicate repulsion. Further details of the calculation of \(\alpha_{1}\) are described in the Methods. By the final simulation step at \(t=150\) ps, shown in Fig. 5c, the structure has become clearly ordered. This change in local chemistry is reflected in the point cloud geometry, which has narrowed and spread into new local composition regions. The volume fraction has significantly increased, from an initial \(V_{f}^{99.5\%}=0.15\) to approx. 0.41 in the chemically-ordered cell. Especially notable is a shift towards the binary Mo-Ta phase tie line, demonstrating these elements' capacity to form B2 phases in CCAs. The favored swapping towards Mo-Ta pairings depletes equiatomic local compositions and shifts those points into Ti- and Nb-richer regions of the tetrahedron. Both of these trends correspond to the pairs with the largest negative values of \(\alpha_{1}\) at \(t=150\) ps, namely Mo-Ta, Ti-Ti, and Nb-Nb. Supplemental Fig. 1 shows the evolution of average potential energy through time, as well as a visualization of B2 ordering. The structural changes arising from short-range ordering will leave their mark on the distribution of local chemical properties relative to the random solid solution. Fig. 6 illustrates how evolved short-range order alters the vacancy formation energy distributions of the initially-random equiatomic cell. In both panels, dashed lines indicate the initial states of vacancy energies, and solid lines show the result after reordering. All elements except Ti exhibit significant shifts in vacancy formation energies, as shown in Fig. 6a. Mo atoms undergo clear changes in the shape of the property distribution, with a total cumulative change of -0.28 eV especially represented in the lowest 50% of the curve. Ta undergoes an overall increase of 0.61 eV in formation energies. All Nb vacancy formation energies decrease in the ordered state, by 0.50 eV. The highest vacancy energies of Ti increase by 0.08 eV, the smallest change of all elements.
Negative (favorable) vacancy formation energies when removing Ti are reflective of the metastability of Ti in a BCC phase. The unusual shift in the energy distributions of Mo vacancies is explored in more detail in Fig. 6b, with respect to the composition deviation \(\lambda\). While there is some overlap in the contours of the random and ordered structures, the energy values have split into two distinct regions at larger values of \(\lambda\), one centered near \(E_{\rm vac}\approx 3.25\) eV and the other much lower at \(E_{\rm vac}\approx 1.25\) eV. An analysis of the characteristic local compositions at \(\lambda>0.3\) indicates that Mo vacancies in the lower energy region have local compositions especially rich in Ti-Nb. The higher-energy region, which generally also overlaps the random structure's energies, is characterized by Mo-Ta enriched local environments. Though further detailed analysis of trends in \(E_{\rm vac}\) is beyond the scope of the current work, the additional information given by \(\lambda\) enables straightforward segmentation of trends in atomic-scale properties. These results illustrate the potential power of local composition analysis, highlighting other major benefits of MD studies and ML-IAPs for CCAs. For example, the local composition analysis techniques developed here can be used to uncover the degree of short-range order change and shift in chemical space from the initial \(c_{\text{global}}\) not only from two single points in time, but also throughout an entire MC/MD trajectory. Such information could be used to correlate degrees of short-range ordering with, e.g., information gathered from advanced microscopy data [30], creating a potential bridge between the timescales of experimental and computational ordering measures. The above examples are viewed as avenues for future development of these alloy chemical complexity metrics.

Figure 4: **a-b** Vacancy energy trends vs. \(\lambda\), the deviation from a chosen global composition. Each plot features distributions for Nb vacancies. Panel **a** illustrates trends while depleting Nb and panel **b** shows those for enriching Nb in the CCA, with each global composition indicated by different colors. Contour lines, calculated using a kernel density estimate, enclose (outer to inner) 90%, 70%, 50%, 30%, and 10% of data points, and the innermost circle the most probable 1%. The dotted red lines show the average vacancy energy in a pure Nb sample, which corresponds to \(\lambda=0\). The X- and Y-axes show the univariate distributions for the vacancy energies and deviations from global, respectively.

## Discussion While the details of the model development were not the focus of this work, generating an interatomic potential for a CCA has brought a unique perspective on measuring material properties in CCAs. What makes classical molecular dynamics a powerful tool to study materials is the computational efficiency derived from the key approximation that interactions between atoms are localized within some distance. In turn, we have addressed how this locality has motivated alloy complexity measures such as the chemically representative volume element and the volume occupied in composition space for a given global composition. Given these measures of chemical complexity and the locality of a property, additional scrutiny of our computational methods is now required when predicting properties of CCAs; we believe this will cause a dramatic shift in the field.
Looking forward, data of \(c_{\text{local}}\) and chemically representative volume elements of CCAs should be universally used in training ML-IAPs themselves to ensure proper chemical fidelity. Clear metrics of chemical complexity allow confirmation that targeted regions of composition space have been adequately represented in training, and enable the automated generation of new training structures where alloy chemistry data is lacking. Looking beyond model validation, these techniques could also open new possibilities in scale bridging. In principle, one could compute material properties in a pre-defined grid in composition space. When querying a new alloy of a desired (or measured) \(c_{\text{global}}\), a property can then be inferred based on averaging nearby \(c_{\text{local}}\) and observing whether those points are contained within the surface of a chemically representative volume element. This type of prediction would allow MD results to inform property variability at much larger (micro-, meso-) scales as well, because chemical inhomogeneities can be defined at numerous length scales. For example, if experiments or CALPHAD indicate that certain phases or regions of global composition should be present in CCAs, the chemically representative volume elements for those phases can be added to the training process to ensure models are sampling chemical environments (and also material properties) adequately. This kind of _diverse-by-design chemical training_ approach could be expanded to allow CCA models to capture important phenomena not only through composition space, but also temperature and pressure. Such capabilities would greatly expand MD simulation utility, allowing it to become truly integrated into workflows of alloy design and discovery. All levels of property analysis involve choices about length scale - indeed, navigating those choices is perhaps the most fundamental challenge of materials science. Thus, what is needed to bring modeling and simulation efforts in closer concert with experimental capability is a review of the definition of critical length scales. When practitioners of _ab initio_ methods identify a material with its global composition, it should be cast as one of many local composition measurements in MD. The same can be said for global composition in MD - it is a very local measurement when compared to even the most resolved experimental measurements in, for example, atom probe tomography. If we are to achieve high-throughput, or even a baseline capability to screen for optimal CCA compositions, these ideas of chemical representation of length scales need to be addressed more completely. The results herein bring quantitative measures to this challenge, and address how scale-bridging efforts between DFT and MD can resolve the atomic length scale of chemical representation and its influence on simple material properties. ## Methods **Definition of local and global characteristic metrics.** For a simulation cell of arbitrary size, a single point in global composition space \(c_{\text{global}}\) is equivalent to the average chemistry of all its local atomic environments, \(\bar{c}_{\text{local}}\): \[c_{\text{global}}=\frac{1}{N}\sum_{i=1}^{N}c_{\text{local}}^{\text{atom},i}=\bar{c}_{\text{local}}(r_{\text{cut}}) \tag{1}\] where \(N\) is the total number of atoms (i.e., local chemical environments), and \(c_{\text{local}}^{\text{atom},i}(r_{\text{cut}})\) is the local composition of atom \(i\) given a sampling radius of \(r_{\text{cut}}\).
This metric considers the joint probability of chemical occupations of all neighbor shells within \(r_{\text{cut}}\), though for smaller \(r_{\text{cut}}\) it can be approximated by other well-known chemical ordering approaches (e.g., an average of Warren-Cowley parameters [29]). For the analysis of local composition, we define two metrics in composition space, the _composition deviation_ \(\lambda\), and the _convex hull volume fraction_ \(V_{\text{f}}\). The former value is the magnitude of a vector mapping of any two points in composition space: \[\lambda^{2}=\sum_{i=\text{Mo},\text{Nb},\text{Ta},\text{Ti}}(c_{\text{global}}^{i}-c_{\text{local}}^{i})^{2} \tag{2}\] The latter value requires an extra step. We can create a surface using the outermost points in the point clouds shown in Fig. 1c, or in other words, a convex hull. The volume enclosed by the surface of that convex hull, \(V_{\text{hull}}\), encompasses a certain space in a unit tetrahedron, whose side length in 4 dimensions is \(\sqrt{2}\). By dividing the hull volume by the volume of a unit tetrahedron (which is 1/3), we gain a fraction representing the span of a given sample in composition space: \[V_{\text{f}}=V_{\text{hull}}\Big/\frac{1}{3}=3\,V_{\text{hull}} \tag{3}\] **Density functional theory calculations for training set.** DFT training data for the MoNbTaTi ML-IAP developed in this work is based on the work of Startt et al. [6]. All data for that study was generated using the Vienna Ab-Initio Simulation Package (VASP) [31; 32; 33] using plane wave basis sets for the orbital wavefunctions and the projector augmented wave (PAW) method [34; 35] to describe interactions between the core and valence-state electrons. The CCA training data in this work was constructed around two primary composition classes: compositionally varied quaternary compounds and pure elemental systems. Within each class, DFT calculations were performed to extract forces from several material state conditions to be used in the training of the ML-IAP. These states included: structures with displaced atoms, isotropically and uniaxially strained structures, surfaces, and high-temperature NVT ensemble states. Regarding the simulations for the quaternary compounds, we calculated the aforementioned properties for specific compositional ranges following the composition nomenclature: A\({}_{\text{x}}\)(BCD)\({}_{1-\text{x}}\), where all four metals (Mo, Nb, Ta, and Ti) were taken in turn as the 'A' component. The value of x was sampled at x = 0.125, 0.1667, 0.2083, 0.25, 0.2917, 0.3333, 0.375, and 0.5 (atomic fraction). For quaternary compounds, we performed structural relaxations of four unique special quasirandom structure (SQS) [36] 72-atom supercells (a \(4\times 3\times 3\) multiplication of the 2-atom conventional BCC unit cell) at each composition point. These structures were modelled using a \(3\times 5\times 5\) Monkhorst-Pack k-point grid. For all four structures belonging to each composition, an atomic displacement calculation (_i.e._ an elastic constant calculation) was performed, where each atom was displaced a total of six times (two times, \(\pm\), along each Cartesian coordinate). For each displacement all atoms are fixed in place and the electronic wavefunction and atomic forces are minimized. For other material conditions, a subset of calculations were performed over a reduced range of composition for x = 0.1667, 0.25, and 0.3333.
Uniaxially and isotropically strained supercells were modelled for this reduced composition range and over compression-to-elongation ranges of 0.86 to 1.06 and 0.94 to 1.03 times the equilibrium lattice spacing, respectively. Surface supercells were modeled for the BCC 100 (72 atoms), 110 (72), 111 (162), and 112 (128) surfaces. Lastly, thermodynamic ab-initio MD (AIMD) NVT simulations were performed at temperatures of 300K, 1200K, 2400K, and 3200K, and at volumes set to match the expected thermal expansion as predicted by [6] at each temperature. Regarding simulations for the pure elements, we simulated each elemental component of the MoNbTaTi quaternary individually in their pure ground state structures (_i.e.,_ Mo, Nb, Ta - BCC; Ti - HCP (hexagonally close-packed)). These systems were modelled over a similar set of deformation, strain, and thermodynamic conditions as the quaternary compounds, with some notable differences. Given the simplicity of the atomic structures, structural relaxations were performed on smaller simulation cells, utilizing far denser k-point grids.

Figure 5: Evolution of the local composition tetrahedra (\(r_{\text{cut}}=6\) Å) and physical samples of a 128,000-atom equiatomic simulation of hybrid Monte Carlo/molecular dynamics at \(T_{MD}=T_{MC}=300\) K for \(t=150\) ps (768,000 MC atom swap attempts). The colors of the points in the tetrahedra correspond to the species of the sampled central atom. **a** The state of the composition tetrahedron at \(t=0\) ps. The local composition point cloud is spherical, reflecting the initially random atomic ordering. **b** Change of the volume fraction \(V_{f}^{99.5\%}\) (top panel) and the Warren-Cowley parameter for the first nearest-neighbor shell (bottom panel) through simulation time. Negative numbers indicate pair attraction and positive indicate pair repulsion. The most attractive pair by this criterion is Mo-Ta, followed by Ti-Ti and Nb-Nb. **c** The initially-spherical composition point cloud spreads out into broader regions of the local composition tetrahedron, saturating parts of the binary Mo-Ta phase tie line, centered around 50 at.%, and expanding into CCA compositions roughly spanning the entire Ti-Nb tie line.

Additionally, surface slabs were constructed and minimized for the 8-9 most stable surfaces known for each element as listed on the Materials Project website [37]. AIMD NVT simulations were carried out at 300K, 1200K, and one point at least 100K above the known melting temperature of each metal. Volumes at each temperature were again set to match the expected lattice constant according to known thermal expansions. For both classes of simulations, certain simulation parameters were kept constant. Notably, plane-wave energy cutoffs were kept at 400 eV for all material systems. Wavefunction energy and ionic force minimization criteria were set to \(10^{-6}\) eV and 0.02 eV/Å\({}^{2}\), respectively. Partial orbital occupancies were set according to a Gaussian smearing scheme using a smearing width of 0.02 eV. Exchange and correlation effects were handled according to the commonly employed Perdew, Burke, and Ernzerhof formalism of the generalized gradient approximation [38]. Lastly, the effects of spin-polarization were found to be negligible for all systems after rigorous testing and screening and so were excluded from calculations in the dataset. For more detailed information and an in-depth analysis of some features of the DFT training set, see Ref. [6].
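As a rough illustration of how strained training cells of the kind described above can be generated, the sketch below uses ASE. The random decoration stands in for the SQS construction actually used, and the file naming and strain step are assumptions.

```python
# Illustrative generator (not the authors' scripts) for isotropically strained
# 72-atom training cells like those described above, using ASE. A random
# decoration stands in for the SQS construction used in the paper.
import numpy as np
from ase.build import bulk

rng = np.random.default_rng(1)

def random_bcc_cca(a0=3.2546, reps=(4, 3, 3), elems=("Mo", "Nb", "Ta", "Ti")):
    atoms = bulk("Mo", "bcc", a=a0, cubic=True).repeat(reps)  # 72 atoms
    atoms.set_chemical_symbols(list(rng.choice(elems, len(atoms))))
    return atoms

for scale in np.arange(0.94, 1.031, 0.01):     # isotropic 0.94-1.03 sweep
    atoms = random_bcc_cca()
    atoms.set_cell(atoms.get_cell()[:] * scale, scale_atoms=True)
    atoms.write(f"iso_{scale:.2f}.xyz")        # input for a DFT/fitting run
```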
**Interatomic Potential Construction.** The energy model is constructed using the standard form of SNAP that is described in more detail in earlier publications [39; 40]. The total potential energy of a configuration of atoms is written as the sum of atomic energies combined with an additional reference potential, \[E(\mathbf{r}^{N})=E_{ref}(\mathbf{r}^{N})+\sum_{i=1}^{N}E_{i}(\mathbf{r}^{N}), \tag{4}\] where \(E\) is the total potential, \(\mathbf{r}^{N}\) are the positions of the \(N\) atoms in the configuration, \(E_{ref}\) is the reference energy, and \(E_{i}\) is the atomic energy of atom \(i\). The atomic energy of atom \(i\) is expressed as a sum of the bispectrum components \(\mathbf{B}_{i}\) for that atom weighted by regression coefficients \[E_{i}(\mathbf{r}^{N})=\mathbf{\beta}_{\nu_{i}}\cdot\left(\mathbf{B}_{i}-\mathbf{B}_{0\nu_{i}}\right), \tag{5}\] where the elements of the vector \(\mathbf{\beta}_{\nu}\) are constant linear coefficients for atoms of element \(\nu\) whose values are determined in training. The vector \(\mathbf{B}_{i}\) is a flattened list of bispectrum components for atom \(i\), while \(\mathbf{B}_{0\nu}\) is the list of bispectrum components for an isolated atom of type \(\nu\). By construction, the energy of an isolated atom is zero. The bispectrum components are real, rotationally invariant triple-products of four-dimensional hyperspherical harmonics \(\mathbf{U}_{j}\) [41] \[B_{j_{1}j_{2}j}=\mathbf{U}_{j_{1}}\otimes_{j_{1}j_{2}}^{j}\mathbf{U}_{j_{2}}:\mathbf{U}_{j}^{*}\,, \tag{6}\] where the symbol \(\otimes_{j_{1}j_{2}}^{j}\) indicates a Clebsch-Gordan product of two matrices of arbitrary rank, while : corresponds to an element-wise scalar product of two matrices of equal rank. The total hyperspherical harmonics for a central atom \(i\) are written as sums over neighbor contributions, \[\mathbf{U}_{j}=\mathbf{u}_{j}(\mathbf{0})+\sum_{k\in\mathcal{N}(i)}\;f_{c}(r_{ik})w_{\nu_{k}}\mathbf{u}_{j}(\mathbf{r}_{ik})\,, \tag{7}\] where the summation is over all neighbor atoms \(k\) within a cutoff distance \(R_{\nu_{i}\nu_{k}}\) of atom \(i\). Atoms of different chemical elements are distinguished by the element weights \(w_{\nu}\). The radial cutoff function \(f_{c}(r)\) ensures that atomic contributions go smoothly to zero as \(r\) approaches \(R\) from below. We used the 55 lowest order bispectrum components \(B_{j_{1}j_{2}j}\) with half-integral indices restricted to the range \(0\leq j_{2}\leq j_{1}\leq j\leq 4\). The SNAP element weights and cutoff distances were optimized for each element using the genetic algorithm search described below.

Figure 6: **a** Cumulative distribution functions of the vacancy formation energies \(E_{\text{vac}}\) for each element in the random solid solution (RSS, dashed curves) and short-range ordered (SRO, solid curves) cells from Fig. 5, panels **a** and **c** respectively. The shaded regions indicate the shift in energies due to chemical ordering. **b** \(E_{\text{vac}}\) and \(\lambda\) for Mo vacancies from panel **a**. The contour outlines share the same parameters as those from Fig. 4. Two regions of different average energies can be observed at values of \(\lambda\gtrsim 0.3\), which can be further analyzed to isolate local environmental trends. The region with distinctly lower energies (\(\approx 1.25\) eV) is correlated with local compositions rich in Ti-Nb.
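Returning to Eqs. 4 and 5: the SNAP energy is a per-element linear model over descriptors, and a minimal numpy transcription is given below. The descriptor arrays are random placeholders; in practice the bispectrum components come from LAMMPS' `compute sna/atom`.

```python
# Numpy transcription of the linear SNAP energy model, Eqs. (4)-(5). The
# descriptor arrays are random placeholders standing in for bispectrum
# components computed by LAMMPS.
import numpy as np

rng = np.random.default_rng(2)
n_atoms, n_desc, n_elems = 128, 55, 4

beta = rng.normal(size=(n_elems, n_desc))   # fitted coefficients per element
B0 = rng.normal(size=(n_elems, n_desc))     # isolated-atom bispectrum values
B = rng.normal(size=(n_atoms, n_desc))      # per-atom bispectrum components
types = rng.integers(0, n_elems, n_atoms)   # element index nu_i of each atom

E_ref = 0.0                                 # stand-in for the ZBL reference term
E_atom = np.einsum("ad,ad->a", beta[types], B - B0[types])  # Eq. (5)
E_total = E_ref + E_atom.sum()              # Eq. (4)
print(float(E_total))
```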
To ensure strong repulsion at short separations between all pairs of atoms, a short-ranged ZBL [42] reference potential was added (\(Z\) = 44.5, \(R_{cut}\) = 5.0 Å). **Multi-compositional refractory CCA ML-IAP for MoNbTaTi.** In this section, we will briefly describe the fitting process and the performance of the MoNbTaTi refractory CCA ML-IAP. The primary target in fitting this potential was to ensure accuracy across a wide range of global composition space, with a particular focus on replicating DFT-predicted elastic properties. Because elasticity is a property measured over an entire simulation cell, and not locally, we did not apply the local composition analysis techniques outlined above to optimize the fitting process. Instead, we followed the global composition sampling scheme detailed in the "Density functional theory calculations for training set" subsection above. For the present case of the MoNbTaTi CCA ML-IAP, two methodologies were combined: linear regression to generate individual IAPs and a genetic algorithm to explore parameter space. The first method is the core model connecting the DFT training set and a resulting IAP. We use a linear regression scheme whose general form is: \[\hat{\mathbf{\beta}}=\underset{\mathbf{\beta}}{\text{argmin}}\left(\|\mathbf{\epsilon}\circ(D\mathbf{\beta}-T)\|^{2}+\gamma_{n}\,\|\mathbf{\beta}\|^{n}\right) \tag{8}\] Here \(\mathbf{\epsilon}\) weights the regression error between the bispectrum descriptor predictions, \(D\mathbf{\beta}\), and the training data reference energies, \(T\), while \(\gamma_{n}\,\|\mathbf{\beta}\|^{n}\) is a regularization penalty of order \(n\) and weight \(\gamma_{n}\). To implement this scheme, descriptors of the training structures from the DFT must be reproduced in MD simulations. For this purpose, we used the open-source software FitSNAP [18] (GitHub link: [https://github.com/FitSNAP/FitSNAP](https://github.com/FitSNAP/FitSNAP)), which parses configurations from DFT into LAMMPS. Once each structure's corresponding bispectrum descriptors, reference energies, and reference forces have been calculated, FitSNAP retrieves that information from LAMMPS to form the \(D\) matrix from Eq. 8. The generation process is completed by solving for \(\hat{\mathbf{\beta}}\) using singular value decomposition and outputting an IAP (a minimal sketch of this solve is given below). Though the first process as described above will generate an IAP for use in LAMMPS, it does not guarantee that it will be optimal for applications of interest. To address the issue of tuning variables, we wrap a single-objective genetic algorithm (GA) around FitSNAP, as implemented by the DAKOTA optimization software [19]. The purpose of the GA is to take a user-generated selection of target IAP properties and discover the optimal set of variables to use within the core fitting calculations in FitSNAP. To accomplish this, the user creates a series of short simulations in LAMMPS that are run and used to evaluate a generated IAP's overall quality. Including multi-compositional data in the training set is not enough to guarantee that an IAP will perform well across those compositions. We found the key technique to be the setup of the simulations that feed into the GA's objective functions. For the purposes of this work, we aimed to optimize ML-IAPs on the first-principles elasticity data (see "Density functional theory calculations for training set" subsection above), which indicate that MoNbTaTi alloys undergo linear changes in elastic moduli with the enrichment and depletion of single elements.
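For the \(n=2\) (ridge) case, the solve in Eq. 8 has a closed form, sketched below with placeholder arrays; FitSNAP's actual solver, weighting, and regularization options may differ.

```python
# Sketch of the weighted, regularized linear solve in Eq. (8) for n = 2
# (ridge regression), plus the gamma = 0 SVD-based least-squares solve
# mentioned in the text. All arrays are placeholders.
import numpy as np

rng = np.random.default_rng(3)
n_rows, n_desc = 500, 55                     # energy/force rows, descriptors
D = rng.normal(size=(n_rows, n_desc))        # descriptor matrix
T = rng.normal(size=n_rows)                  # DFT reference energies/forces
eps = np.ones(n_rows)                        # per-row weights: eps o (D beta - T)
gamma = 1e-6                                 # regularization weight

Dw, Tw = eps[:, None] * D, eps * T           # apply the element-wise weights
beta_hat = np.linalg.solve(Dw.T @ Dw + gamma * np.eye(n_desc), Dw.T @ Tw)
beta_svd = np.linalg.lstsq(Dw, Tw, rcond=None)[0]   # gamma = 0 limit via SVD
print(float(np.linalg.norm(beta_hat - beta_svd)))
```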
Thus, our post-fit testing and matching GA objective functions were designed to make the GA especially sensitive to the rate of change of the bulk (\(B\)) and shear (\(G\)) moduli as compositions are varied. Fitting the moduli to slopes instead of single-compositional values forces the GA to favor not only low regression errors on single-composition fits, but also to couple those errors to accurate calculations of \(B\) and \(G\). To fit these slopes, it is also necessary to create objective functions that minimize errors on the components of the elasticity tensor, \(\mathbb{C}_{xx}\). As the stable MoNbTaTi single-phase refractory CCAs take on BCC structure, only three of the tensor components (\(\mathbb{C}_{11}\), \(\mathbb{C}_{12}\), and \(\mathbb{C}_{44}\)), which we will also refer to as the elastic constants, need to be calculated per IAP and composition tested. Having obtained those, the calculation of the bulk and shear moduli is as follows: \[B=\frac{\mathbb{C}_{11}+2\mathbb{C}_{12}}{3} \tag{9}\] \[G=\frac{1}{2}\left[\frac{\mathbb{C}_{11}-\mathbb{C}_{12}+3\mathbb{C}_{44}}{5}+\frac{5\mathbb{C}_{44}\left(\mathbb{C}_{11}-\mathbb{C}_{12}\right)}{4\mathbb{C}_{44}+3\left(\mathbb{C}_{11}-\mathbb{C}_{12}\right)}\right] \tag{10}\] Once calculated for three separate compositions, a slope for \(B\) and \(G\) for the ML-IAP can be fit and tested against slopes calculated from the training data. **Vacancy energy simulations.** In alloys, the vacancy formation energy of a given atomic species \(\mu\), \(E^{\mu}_{\text{vac}}\), is found by first calculating the total potential energy of all atoms in a cell, \(E^{\text{all}}_{\text{total}}\), removing an atom of one species \(\mu\), and then recalculating the cell's new energy \(E^{\mu}_{\text{removed}}\). The difference between the new energy and the total potential energy rescaled to a cell one atom smaller gives the penalty for vacancy formation for that species \(\mu\): \[E^{\mu}_{\text{vac},\text{f}}=E^{\mu}_{\text{removed}}-E^{\text{all}}_{\text{total}}\cdot\frac{N_{\text{cell}}-1}{N_{\text{cell}}} \tag{11}\] **Hybrid Monte Carlo/molecular dynamics (MC/MD) simulations.** To induce chemical re-ordering in the MoNbTaTi SNAP potential, we used a hybrid Monte Carlo/molecular dynamics procedure on one equiatomic random solid solution cube with side length of \(s\cdot a_{0}=40\cdot 3.2546\) Å, totalling 128,000 atoms. The random solid solution was initially relaxed at T = 300K for 10 ps in the NPT ensemble (Nosé-Hoover barostat), using a 1 fs timestep. For the hybrid algorithm, the same settings were used for all further MD steps. The MC element type swaps were conducted at intervals of 50 MD steps (0.050 ps) using a series of _fix atom/swap_ commands in LAMMPS at an MC temperature of 300K. In total, 150 ps of hybrid MC/MD (75,000 MC steps with 768,000 atom swap attempts, 138,748 accepted) were conducted to achieve the results found in Fig. 5c. Warren-Cowley parameters [29] were used to calculate trends in pair ordering for the 1st nearest-neighbor shell relative to elements A and B in Fig. 5b: \[\alpha_{1}^{AB}=1-\frac{P_{1}^{AB}}{c_{B}} \tag{12}\] where \(P_{1}^{AB}\) is the probability of finding an atom of type B in the first nearest-neighbor shell of a type-A atom, and \(c_{B}\) is the global concentration of element B. Details of the convergence of potential energy through MC steps can be found in the Supplemental Information.
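For convenience, the moduli, vacancy, and ordering formulas above (Eqs. 9-12) transcribe directly into code. The sketch below is a plain formula transcription, with the equiatomic DFT elastic constants from Table 1 as example inputs.

```python
# Plain transcriptions of Eqs. (9)-(12) with illustrative inputs.

def bulk_shear_moduli(C11, C12, C44):
    """Bulk and (Hill-averaged) shear moduli for a cubic crystal, Eqs. (9)-(10)."""
    B = (C11 + 2.0 * C12) / 3.0
    G_voigt = (C11 - C12 + 3.0 * C44) / 5.0
    G_reuss = 5.0 * C44 * (C11 - C12) / (4.0 * C44 + 3.0 * (C11 - C12))
    return B, 0.5 * (G_voigt + G_reuss)

def vacancy_formation_energy(E_removed, E_total, n_atoms):
    """Eq. (11): defective-cell energy minus the rescaled perfect-cell energy."""
    return E_removed - E_total * (n_atoms - 1) / n_atoms

def warren_cowley(P1_AB, c_B):
    """Eq. (12): first-shell order parameter; negative values mean A-B attraction."""
    return 1.0 - P1_AB / c_B

# Equiatomic DFT constants from Table 1 give B, G close to the tabulated values:
print(bulk_shear_moduli(239.6, 129.7, 37.8))   # ~ (166.3, 43.9) GPa
```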
## Data Availability The full training data sets as well as all validation and test cases are available from the corresponding author upon reasonable request. ## Acknowledgments This work was supported by the U.S. Department of Energy, Office of Fusion Energy Sciences (OFES) under Field Work Proposal Number 20-023149, and the Center for Integrated Nanotechnologies, an Office of Science user facility operated for the U.S. Department of Energy. This article has been authored by an employee of National Technology & Engineering Solutions of Sandia, LLC under Contract No. DE-NA0003525 with the U.S. Department of Energy (DOE). The employee owns all right, title and interest in and to the article and is solely responsible for its contents. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this article or allow others to do so, for United States Government purposes. The DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan [https://www.energy.gov/downloads/doe-public-access-plan](https://www.energy.gov/downloads/doe-public-access-plan). ## Author Contributions MJM : Development of interatomic potential, performed MD simulations, implementation of local composition analysis code, chemical complexity analysis. JS : Performed DFT simulations, chemical complexity analysis. APT : Implemented interatomic potential in LAMMPS, chemical complexity analysis. RD : Validation of DFT, MD and interatomic potential. MAW : Development of interatomic potential, chemical complexity analysis. All authors participated in conceiving the research and writing the manuscript. ## Competing Interests The authors declare no competing interests. ## Additional Information Supplementary Information is available for this paper.
2308.06674
Nonadiabatic holonomic quantum computation based on commutation relation
Nonadiabatic holonomic quantum computation has received increasing attention due to the merits of both robustness against control errors and high-speed implementation. A crucial step in realizing nonadiabatic holonomic quantum computation is to remove the dynamical phase from the total phase. For this reason, previous schemes of nonadiabatic holonomic quantum computation have to resort to the parallel transport condition, i.e., requiring the instantaneous dynamical phase to be always zero. In this paper, we put forward a strategy to design nonadiabatic holonomic quantum computation, which is based on a commutation relation rather than the parallel transport condition. Instead of requiring the instantaneous dynamical phase to be always zero, the dynamical part of the total phase is separated from the geometric part and then removed by properly choosing evolution parameters. This strategy enhances the flexibility to realize nonadiabatic holonomic quantum computation as the commutation relation is more relaxed than the parallel transport condition. It provides more options for realizing nonadiabatic holonomic quantum computation and hence allows us to optimize realizations such as the evolution time and evolution paths.
P. Z. Zhao, D. M. Tong
2023-08-13T03:30:13Z
http://arxiv.org/abs/2308.06674v1
# Nonadiabatic holonomic quantum computation based on commutation relation ###### Abstract Nonadiabatic holonomic quantum computation has received increasing attention due to the merits of both robustness against control errors and high-speed implementation. A crucial step in realizing nonadiabatic holonomic quantum computation is to remove the dynamical phase from the total phase. For this reason, previous schemes of nonadiabatic holonomic quantum computation have to resort to the parallel transport condition, i.e., requiring the instantaneous dynamical phase to be always zero. In this paper, we put forward a strategy to design nonadiabatic holonomic quantum computation, which is based on a commutation relation rather than the parallel transport condition. Instead of requiring the instantaneous dynamical phase to be always zero, the dynamical part of the total phase is separated from the geometric part and then removed by properly choosing evolution parameters. This strategy enhances the flexibility to realize nonadiabatic holonomic quantum computation as the commutation relation is more relaxed than the parallel transport condition. It provides more options for realizing nonadiabatic holonomic quantum computation and hence allows us to optimize realizations such as the evolution time and evolution paths. ## I Introduction Practical applications of circuit-based quantum computation need to realize a universal set of accurately controllable quantum gates. However, the errors resulting from the imperfect control of a quantum system and the decoherence caused by the interaction between the quantum system and its environment inevitably affect quantum gates, which is the main obstacle to quantum computation. This practical issue motivates researchers to design quantum gates by utilizing the features of geometric phases [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. Quantum computation based on nonadiabatic non-Abelian geometric phases [4] is known as nonadiabatic holonomic quantum computation [10, 11]. Since nonadiabatic non-Abelian geometric phases are only dependent on evolution paths but independent of evolution details, nonadiabatic holonomic gates possess a completely geometric property, being robust against control errors. Furthermore, nonadiabatic non-Abelian geometric phases avoid the long run times required for adiabatic geometric phases, and therefore nonadiabatic holonomic gates allow for high-speed implementation. Due to the merits of both robustness against control errors and high-speed implementation, nonadiabatic holonomic quantum computation has received increasing attention. The seminal scheme of nonadiabatic holonomic quantum computation is based on a three-level quantum system driven by two resonant lasers [10, 11], where a general one-qubit gate is realized by two-loop implementations. To simplify the operations, the single-shot scheme [12, 13] and the single-loop scheme [14] of nonadiabatic holonomic quantum computation were proposed. The latter two schemes allow us to realize an arbitrary one-qubit gate by a single-shot implementation, which reduces the exposure time of quantum gates to error sources. To have more choices of evolution paths, a general approach of constructing Hamiltonians for nonadiabatic holonomic quantum computation was put forward [15]. By this approach, one can find the Hamiltonian that makes the quantum system evolve along a desired path, and thus nonadiabatic holonomic gates can be realized with shortened evolution paths. 
Up to now, nonadiabatic holonomic quantum computation has been well developed in both theories [16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35] and experiments [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51]. The merit of nonadiabatic holonomic gates comes from their purely geometric property. A crucial step in realizing nonadiabatic holonomic quantum computation is to remove the dynamical phase from the total phase. In the previous schemes, the dynamical phase was removed by resorting to the parallel transport condition, which implies that the instantaneous dynamical phase is always zero. However, this strict requirement limits the realization of nonadiabatic holonomic quantum computation to a special family of quantum systems. Actually, it is not necessary to keep the instantaneous dynamical phase always zero for removing the dynamical phase [52]. In this paper, we put forward a strategy to design nonadiabatic holonomic quantum computation, which is based on a commutation relation rather than the parallel transport condition. Instead of requiring the instantaneous dynamical phase to be always zero, the dynamical part of the total phase is separated from the geometric part and then removed by properly choosing evolution parameters. Compared with the previous ones, the schemes based on this strategy are more flexible as the commutation relation is more relaxed than the parallel transport condition. The quantum systems satisfying the commutation relation, containing those satisfying the parallel transport condition as a subset, are more general than the latter. ## II Strategy Consider an \(N\)-dimensional quantum system governed by the Hamiltonian \(H(t)\), of which the evolution operator is denoted as \(U(t)=\mathbf{T}\exp[-i\int_{0}^{t}H(t^{\prime})dt^{\prime}]\) with \(\mathbf{T}\) being time ordering. We use \(\{|\phi_{k}(t)\rangle\}_{k=1}^{N}\) to represent \(N\) orthonormal solutions of the Schrödinger equation \(i|\dot{\phi}_{k}(t)\rangle=H(t)|\phi_{k}(t)\rangle\). Assume there is an \(L\)-dimensional subspace \(\mathcal{S}(t)=\mathrm{Span}\{|\phi_{k}(t)\rangle\}_{k=1}^{L}\) evolving cyclically with the period \(\tau\), i.e., \(\mathcal{S}(\tau)=\mathcal{S}(0)\). The computational basis can then be encoded into \(\mathcal{S}(0)\) and the final evolution operator \(U(\tau)\) acting on \(\mathcal{S}(0)\) is a quantum gate. \(U(\tau)\) acts as a holonomic gate if the dynamical part can be removed from it. To make this point clear, we introduce a set of auxiliary orthonormal bases \(\{|\nu_{k}(t)\rangle\}_{k=1}^{L}\) in the subspace \(\mathcal{S}(t)\), which satisfy \(|\nu_{k}(\tau)\rangle=|\nu_{k}(0)\rangle=|\phi_{k}(0)\rangle\). Expanding \(|\phi_{k}(t)\rangle\) in terms of the auxiliary bases gives \[|\phi_{k}(t)\rangle=\sum_{l=1}^{L}|\nu_{l}(t)\rangle C_{lk}(t), \tag{1}\] where \(C_{lk}(t)\) are time-dependent coefficients. By substituting Eq. (1) into the Schrödinger equation, the coefficient matrix can be calculated as \[C(t)=\mathbf{T}e^{i\int_{0}^{t}[A(t^{\prime})-K(t^{\prime})]dt^{\prime}} \tag{2}\] with \[A_{lk}(t)=i\langle\nu_{l}(t)|\dot{\nu}_{k}(t)\rangle,\quad K_{lk}(t)=\langle \nu_{l}(t)|H(t)|\nu_{k}(t)\rangle. \tag{3}\] After a period of time \(\tau\), we have \(|\phi_{k}(\tau)\rangle=\sum_{l=1}^{L}|\nu_{l}(\tau)\rangle C_{lk}(\tau)=\sum_{ l=1}^{L}|\phi_{l}(0)\rangle C_{lk}(\tau)\). 
Accordingly, the evolution operator acting on the subspace \(\mathcal{S}(0)\) is given by \[U(\tau)=C(\tau)=\mathbf{T}e^{i\int_{0}^{\tau}[A(t)-K(t)]dt}, \tag{4}\] where \(A(t)\) and \(K(t)\) lead to the geometric and dynamical parts of the evolution operator, respectively. If the commutation relation, \[[A(t),K(t^{\prime})]=0, \tag{5}\] is fulfilled for \(t\in[0,\tau]\) and \(t^{\prime}\in[0,\tau]\), the evolution operator can then be written as the product of two parts: \[U(\tau)=\left[\mathbf{T}e^{i\int_{0}^{\tau}A(t)dt}\right]\left[\mathbf{T}e^{-i\int_{0}^{\tau}K(t)dt}\right]. \tag{6}\] The first part \(\mathbf{T}\exp[i\int_{0}^{\tau}A(t)dt]\) corresponds to a non-Abelian geometric phase factor and the second part \(\mathbf{T}\exp[-i\int_{0}^{\tau}K(t)dt]\) corresponds to a non-Abelian dynamical phase factor. As the dynamical phase factor is separated from the geometric phase factor, it can be removed by letting \(\mathbf{T}\exp[-i\int_{0}^{\tau}K(t)dt]=I\). In this case, we have the evolution operator, \[U(\tau)=\mathbf{T}e^{i\int_{0}^{\tau}A(t)dt}, \tag{7}\] which acts as a holonomic gate on the subspace \(\mathcal{S}(0)\). The key to designing nonadiabatic holonomic quantum computation based on this strategy is to find a quantum system that possesses a cyclically evolutional subspace and satisfies the commutation relation. For this, one can start from the auxiliary bases \(\{|\nu_{k}(t)\rangle\}_{k=1}^{L}\). Without loss of generality, we take \(N=L+1\) and introduce the \((L+1)\)th auxiliary basis \(|\nu_{L+1}(t)\rangle=\exp[-i\gamma(t)]|\phi_{L+1}(t)\rangle\), where \(\gamma(t)\) is a time-dependent undetermined parameter with \(\gamma(0)=0\). Since \(|\phi_{k}(t)\rangle\) are the solutions of the Schrödinger equation \(i|\dot{\phi}_{k}(t)\rangle=H(t)|\phi_{k}(t)\rangle\), the Hamiltonian can be expressed as \[H(t)=i\sum_{k=1}^{L+1}|\dot{\phi}_{k}(t)\rangle\langle\phi_{k}(t)|. \tag{8}\] Substituting \(|\phi_{k}(t)\rangle=\sum_{l=1}^{L}|\nu_{l}(t)\rangle C_{lk}(t)\) and \(|\phi_{L+1}(t)\rangle=\exp[i\gamma(t)]|\nu_{L+1}(t)\rangle\) into Eq. (8), we can obtain \[H(t)= i\sum_{k=1}^{L}\langle\nu_{k}(t)|\dot{\nu}_{L+1}(t)\rangle|\nu_{k}(t) \rangle\langle\nu_{L+1}(t)|+\text{H.c.}\] \[+\sum_{k,l=1}^{L}\langle\nu_{k}(t)|H(t)|\nu_{l}(t)\rangle|\nu_{k}(t )\rangle\langle\nu_{l}(t)|\] \[+[i\langle\nu_{L+1}(t)|\dot{\nu}_{L+1}(t)\rangle-\dot{\gamma}(t) ]|\nu_{L+1}(t)\rangle\langle\nu_{L+1}(t)|, \tag{9}\] where H.c. represents the Hermitian conjugate terms. Equation (9) expresses the relation between the Hamiltonian \(H(t)\) and the auxiliary bases \(\{|\nu_{k}(t)\rangle\}_{k=1}^{L+1}\). This relation is useful to construct the Hamiltonian for realizing nonadiabatic holonomic quantum gates. In passing, we would like to point out that the commutation relation \([A(t),K(t^{\prime})]=0\) is naturally satisfied when \(K(t)=0\) is taken. In this case, the Hamiltonian in Eq. (9) is reduced to the special form given in Ref. [15]: \[H(t)= i\sum_{k=1}^{L}\langle\nu_{k}(t)|\dot{\nu}_{L+1}(t)\rangle|\nu_{k}(t) \rangle\langle\nu_{L+1}(t)|+\text{H.c.}\] \[+[i\langle\nu_{L+1}(t)|\dot{\nu}_{L+1}(t)\rangle-\dot{\gamma}(t) ]|\nu_{L+1}(t)\rangle\langle\nu_{L+1}(t)|. \tag{10}\] Since \(K(t)=0\) means that the parallel transport condition, \(\langle\nu_{l}(t)|H(t)|\nu_{k}(t)\rangle=0\) for \(l,k=1,\cdots,L\), is fulfilled, we can conclude that the commutation relation is more relaxed than the parallel transport condition. 
The quantum systems satisfying the commutation relation, containing those satisfying the parallel transport condition as a subset, are more general than the latter; therefore, the schemes of nonadiabatic holonomic quantum computation based on the commutation relation are more flexible than those based on the parallel transport condition. ## III Scheme We now show the practicability of our strategy, which is indeed effective for designing nonadiabatic holonomic gates. For a one-qubit nonadiabatic holonomic gate, the quantum system has at least three dimensions, where a two-dimensional subspace is used as a computational space while the remaining one-dimensional subspace acts as an auxiliary space. To this end, we consider a three-level quantum system consisting of two ground states \(|0\rangle\) and \(|1\rangle\) and an excited state \(|e\rangle\). To construct the quantum system that possesses a cyclically evolutional subspace and satisfies the commutation relation, we take the auxiliary bases as \[|\nu_{1}(t)\rangle= \cos\frac{\theta}{2}|0\rangle+\sin\frac{\theta}{2}e^{i\varphi}|1\rangle,\] \[|\nu_{2}(t)\rangle= \cos\frac{\alpha(t)}{2}\sin\frac{\theta}{2}e^{-i\varphi}|0\rangle- \cos\frac{\alpha(t)}{2}\cos\frac{\theta}{2}|1\rangle+\sin\frac{\alpha(t)}{2}e^{i\beta(t)}|e\rangle,\] \[|\nu_{3}(t)\rangle= \sin\frac{\alpha(t)}{2}\sin\frac{\theta}{2}e^{-i[\varphi+\beta(t)]}|0\rangle-\sin\frac{\alpha(t)}{2}\cos\frac{\theta}{2}e^{-i\beta(t)}|1\rangle-\cos\frac{\alpha(t)}{2}|e\rangle, \tag{11}\] where \(\theta\) and \(\varphi\) are time-independent parameters, and \(\alpha(t)\) and \(\beta(t)\) are evolution parameters, being functions of time \(t\) with \(\alpha(0)=\alpha(\tau)=0\). One can easily verify that \(\mathcal{S}(t)=\mathrm{Span}\{|\nu_{1}(t)\rangle,|\nu_{2}(t)\rangle\}\) undergoes cyclic variation such that \(\mathcal{S}(\tau)=\mathcal{S}(0)=\mathrm{Span}\{|0\rangle,|1\rangle\}\). Thus, we can take \(\{|0\rangle,|1\rangle\}\) as the computational basis. For our purpose, we expect the Hamiltonian to have the form of \[H(t)=\Delta(t)|e\rangle\langle e|+[\Omega(t)e^{i\kappa(t)}|e\rangle\langle b|+\mathrm{H.c.}], \tag{12}\] where \(\Delta(t)\) is the detuning of lasers, \(\Omega(t)\) is the pulse envelope, \(\kappa(t)\) is a time-dependent phase, and \(|b\rangle=\sin(\theta/2)\exp(-i\varphi)|0\rangle-\cos(\theta/2)|1\rangle\). To match the Hamiltonian with the auxiliary bases, we substitute Eqs. (11) and (12) into Eq. (9), and compare the coefficients of each term \(|i\rangle\langle j|\) on both sides of the resulting equation [53]. We can obtain \[\Delta(t)=-\dot{\alpha}(t)\cot\alpha(t)\cot[\kappa(t)-\beta(t)]- \dot{\beta}(t),\] \[\Omega(t)e^{i\kappa(t)}=\frac{1}{2}\{i\dot{\alpha}(t)+\dot{ \alpha}(t)\cot[\kappa(t)-\beta(t)]\}e^{i\beta(t)}, \tag{13}\] and \(\dot{\gamma}(t)=\dot{\beta}(t)+\dot{\alpha}(t)\cot[\alpha(t)/2]\cot[\kappa(t)- \beta(t)]/2\). By substituting Eqs. (11) and (12) with the parameters given in Eq. (13) into Eq. (3), a direct calculation shows \[A_{11}(t)=A_{12}(t)=A_{21}(t)=0,\] \[A_{22}(t)=-\frac{\dot{\beta}(t)}{2}[1-\cos\alpha(t)], \tag{14}\] and \[K_{11}(t)=K_{12}(t)=K_{21}(t)=0,\] \[K_{22}(t)=\frac{\dot{\alpha}(t)}{2}\tan\frac{\alpha(t)}{2}\cot[ \kappa(t)-\beta(t)]-\dot{\beta}(t)\sin^{2}\frac{\alpha(t)}{2}. \tag{15}\] One can readily verify that \([A(t),K(t^{\prime})]=0\), i.e., the commutation relation (5) is fulfilled. It implies that the dynamical phase factor \(\mathbf{T}\exp[-i\int_{0}^{\tau}K(t)dt]\) can be extracted from the evolution operator, as shown in Eq. (6). 
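The matrix elements in Eqs. (14)-(15) can be checked numerically from the auxiliary bases and the Hamiltonian alone; the sketch below uses illustrative parametrizations of \(\alpha(t)\), \(\beta(t)\) and \(\kappa(t)\) (not taken from any particular scheme) and the basis ordering \((|0\rangle,|1\rangle,|e\rangle)\):

```python
import numpy as np

theta, phi = np.pi/3, np.pi/5
t, dt = 1.2, 1e-6

def params(t):
    # illustrative alpha(t), beta(t), kappa(t)
    return 0.8*np.sin(t) + 0.3, 0.7*t, 0.7*t + 0.9

def nu2(t):
    """Second auxiliary basis vector of Eq. (11)."""
    a, b, _ = params(t)
    return np.array([np.cos(a/2)*np.sin(theta/2)*np.exp(-1j*phi),
                     -np.cos(a/2)*np.cos(theta/2),
                     np.sin(a/2)*np.exp(1j*b)])

a, b, k = params(t)
da, db = 0.8*np.cos(t), 0.7                       # analytic rates
# Hamiltonian of Eq. (12) with Delta and Omega e^{i kappa} from Eq. (13)
delta = -da/np.tan(a)/np.tan(k - b) - db
om_k = 0.5*(1j*da + da/np.tan(k - b))*np.exp(1j*b)  # Omega(t) e^{i kappa(t)}
bvec = np.array([np.sin(theta/2)*np.exp(-1j*phi), -np.cos(theta/2), 0.0])
evec = np.array([0.0, 0.0, 1.0])
off = om_k*np.outer(evec, bvec.conj())
H = delta*np.outer(evec, evec) + off + off.conj().T

A22 = (1j*np.vdot(nu2(t), (nu2(t + dt) - nu2(t - dt))/(2*dt))).real
K22 = np.vdot(nu2(t), H @ nu2(t)).real
print(np.isclose(A22, -db*(1 - np.cos(a))/2),                              # Eq. (14)
      np.isclose(K22, da/2*np.tan(a/2)/np.tan(k - b) - db*np.sin(a/2)**2))  # Eq. (15)
```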
The above discussion shows that the quantum system governed by the Hamiltonian in Eq. (12) with the parameters given in Eq. (13) satisfies the requirements that it possesses a cyclically evolutional subspace and satisfies the commutation relation. Such a quantum system is qualified to realize nonadiabatic holonomic quantum computation. We now only need to remove the dynamical part of \(U(\tau)\) by letting \(\mathbf{T}\exp[-i\int_{0}^{\tau}K(t)dt]=I\). From Eq. (15), we see that this is guaranteed if \[\int_{0}^{\tau}K_{22}(t)dt=0. \tag{16}\] Obviously, there are many candidates of \(K_{22}(t)\) satisfying Eq. (16). For instance, we can choose \(K_{22}(t)=0\). Alternatively, we can also choose \(K_{22}(t)=-\dot{\beta}(t)\) with \(\beta(0)=\beta(\tau)\). In any case, as long as \(\int_{0}^{\tau}K_{22}(t)dt=0\), there is \(U(\tau)=\mathbf{T}\exp[i\int_{0}^{\tau}A(t)dt]\). Substituting \(A(t)\) given in Eq. (14) into the integral, we have \[U(\tau)=|\nu_{1}(0)\rangle\langle\nu_{1}(0)|+e^{-i\phi(\tau)}|\nu_{2}(0) \rangle\langle\nu_{2}(0)| \tag{17}\] with \(\phi(\tau)=\int_{0}^{\tau}\dot{\beta}(t)[1-\cos\alpha(t)]dt/2\). Ignoring an unimportant global phase, it can be equivalently rewritten as \[U(\tau)=e^{i\phi(\tau)\mathbf{n}\cdot\mathbf{\sigma}/2}, \tag{18}\] where \(\mathbf{n}=(\sin\theta\cos\varphi,\sin\theta\sin\varphi,\cos\theta)\) and \(\mathbf{\sigma}=(\sigma_{x},\sigma_{y},\sigma_{z})\). It is an arbitrary one-qubit gate as the direction of the rotation axis \(\mathbf{n}\) and the value of the rotation angle \(\phi(\tau)\) can be freely chosen. If \(\alpha(t)\) and \(\beta(t)\) are taken as the polar angle and azimuthal angle of a spherical coordinate system, \((\alpha(t),\beta(t))\) represents a point on a unit two-sphere. It traces a closed path \(C\) in the parameter space when \(\alpha(t)\) varies from \(\alpha(0)=0\) to \(\alpha(\tau)=0\), and \(\phi(\tau)\) is just equal to half of the solid angle enclosed by the path \(C\): \[\phi(\tau)=\frac{1}{2}\oint_{C}(1-\cos\alpha)d\beta. \tag{19}\] Clearly, \(\phi(\tau)\) is only dependent on the path in the parameter space but independent of the changing rate of the parameters. ## IV Discussion After showing that it is practicable to design nonadiabatic holonomic quantum computation based on the commutation relation, we now discuss some details related to the choice of \(K_{22}(t)\) in the above scheme and illustrate the flexibility of our strategy, which allows us to optimize the evolution time and evolution paths. As stated in the last section, there are many candidates of \(K_{22}(t)\) to realize the holonomic gate \(U(\tau)\). For instance, it can be taken as \(K_{22}(t)=0\) or \(K_{22}(t)=-\dot{\beta}(t)\) with \(\beta(0)=\beta(\tau)\). From the expression of \(K_{22}(t)\) in Eq. (15), we see that \(K_{22}(t)=0\) means \(\dot{\alpha}(t)=\dot{\beta}(t)\sin\alpha(t)\tan[\kappa(t)-\beta(t)]\). Inserting this expression into Eq. (13), we have \(\Delta(t)=-\dot{\beta}(t)[1+\cos\alpha(t)]\) and \(\Omega(t)\exp[i\kappa(t)]=[i\dot{\alpha}(t)+\dot{\beta}(t)\sin\alpha(t)]\exp[i\beta(t)]/2\). Then, the Hamiltonian in Eq. (12) can be explicitly written as \[H(t)= -\dot{\beta}(t)[1+\cos\alpha(t)]|e\rangle\langle e|\] \[+\left\{\frac{1}{2}[i\dot{\alpha}(t)+\dot{\beta}(t)\sin\alpha(t)] e^{i\beta(t)}|e\rangle\langle b|+\mathrm{H.c.}\right\}. \tag{20}\] It is just the one given in Ref. [15], some specific expressions of which have been widely used in the previous schemes including the two-loop scheme [10; 11] and one-loop scheme [14]. 
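The construction can be verified end-to-end: integrating the Schrödinger equation with the Hamiltonian of Eq. (20) along any closed path with \(\alpha(0)=\alpha(\tau)=0\) should reproduce \(e^{i\phi(\tau)\mathbf{n}\cdot\mathbf{\sigma}/2}\) up to a global phase. A minimal sketch, with an illustrative path and step count:

```python
import numpy as np
from scipy.linalg import expm

th, vp, tau, N = np.pi/2, 0.0, 1.0, 4000        # axis angles; illustrative path
alpha = lambda t: (2*np.pi/3)*np.sin(np.pi*t/tau)**2
beta  = lambda t: 2*np.pi*t/tau
b = np.array([np.sin(th/2)*np.exp(-1j*vp), -np.cos(th/2), 0.0])   # |b>
e = np.array([0.0, 0.0, 1.0])                                     # |e>

def H(t, eps=1e-7):
    """Hamiltonian of Eq. (20), derivatives by central differences."""
    da = (alpha(t+eps) - alpha(t-eps))/(2*eps)
    db = (beta(t+eps) - beta(t-eps))/(2*eps)
    off = 0.5*(1j*da + db*np.sin(alpha(t)))*np.exp(1j*beta(t))*np.outer(e, b.conj())
    return -db*(1 + np.cos(alpha(t)))*np.outer(e, e) + off + off.conj().T

U, dt = np.eye(3, dtype=complex), tau/N
for n in range(N):                               # time-ordered product
    U = expm(-1j*H((n + 0.5)*dt)*dt) @ U

# Expected holonomic gate from Eqs. (17)-(19)
ts = np.linspace(0, tau, 10001)
phase = 0.5*np.trapz(np.gradient(beta(ts), ts)*(1 - np.cos(alpha(ts))), ts)
nvec = np.array([np.sin(th)*np.cos(vp), np.sin(th)*np.sin(vp), np.cos(th)])
sx, sy, sz = np.array([[0,1],[1,0]]), np.array([[0,-1j],[1j,0]]), np.diag([1,-1])
U_exp = expm(1j*phase/2*(nvec[0]*sx + nvec[1]*sy + nvec[2]*sz))

blk = U[:2, :2]                                  # computational block
g = np.trace(U_exp.conj().T @ blk); g /= abs(g)  # align global phases
print(np.max(np.abs(g*U_exp - blk)))             # ~0 up to integration error
```

Since \(A(t)\) is diagonal here, the time-ordered exponential reduces to the simple phase of Eq. (17), so this comparison tests the whole construction at once.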
Alternatively, if we take \(K_{22}(t)=-\dot{\beta}(t)\) with \(\beta(0)=\beta(\tau)\), Eq. (15) results in \(\dot{\alpha}(t)=-2\dot{\beta}(t)\cot[\alpha(t)/2]\cos^{2}[\alpha(t)/2]\tan[ \kappa(t)-\beta(t)]\). Inserting this expression into Eq. (13), we have \(\Delta(t)=\dot{\beta}(t)\cos\alpha(t)\cot^{2}[\alpha(t)/2]-\dot{\beta}(t)\) and \(\Omega(t)\exp[i\kappa(t)]=\{i\dot{\alpha}(t)-2\dot{\beta}(t)\cot[\alpha(t)/2]\cos^{2}[\alpha(t)/2]\}\exp[i\beta(t)]/2\). Then, the Hamiltonian in Eq. (12) can be explicitly written as \[H(t)= \left[\dot{\beta}(t)\cos\alpha(t)\cot^{2}\frac{\alpha(t)}{2}- \dot{\beta}(t)\right]|e\rangle\langle e|+\left\{\frac{1}{2}\left[i\dot{ \alpha}(t)-2\dot{\beta}(t)\cot\frac{\alpha(t)}{2}\cos^{2}\frac{\alpha(t)}{2} \right]e^{i\beta(t)}|e\rangle\langle b|+\text{H.c.}\right\}. \tag{21}\] Such a Hamiltonian has never been used in previous schemes of nonadiabatic holonomic quantum computation. Each choice of \(K_{22}(t)\) has its own advantages and we can optimize the realization by a proper choice. To illustrate this point, we compare the evolution time in the above two choices by using the pulse area as a measure of the evolution time. We take the evolution path which starts from the north pole along the great circle with \(\beta(t)=0\) to the point \((\alpha,0)\), then along the circle with \(\alpha(t)=\alpha\) for one round, and finally along the great circle with \(\beta(t)=0\) to the north pole. That is, the path consists of three segments, as shown in Fig. 1, corresponding to the time intervals \(t\in[0,\tau_{1}]\), \(t\in(\tau_{1},\tau_{2}]\) and \(t\in(\tau_{2},\tau]\), respectively. If we use the Hamiltonian in Eq. (20), which corresponds to \(K_{22}(t)=0\), to realize the nonadiabatic holonomic gate, its piecewise expression reads \(H(t)=i\dot{\alpha}(t)/2|e\rangle\langle b|+\text{H.c.}\) for \(t\in[0,\tau_{1}]\), \(-\dot{\beta}(t)(1+\cos\alpha)|e\rangle\langle e|+[\dot{\beta}(t)\sin\alpha \exp[i\beta(t)]/2|e\rangle\langle b|+\text{H.c.}]\) for \(t\in(\tau_{1},\tau_{2}]\), and \(i\dot{\alpha}(t)/2|e\rangle\langle b|+\text{H.c.}\) for \(t\in(\tau_{2},\tau]\). Therefore, the pulse envelope, denoted as \(\Omega(t)\), is \(\Omega(t)=\dot{\alpha}(t)/2\) for \(t\in[0,\tau_{1}]\), \([\dot{\beta}(t)\sin\alpha]/2\) for \(t\in(\tau_{1},\tau_{2}]\), and \(-\dot{\alpha}(t)/2\) for \(t\in(\tau_{2},\tau]\). We then can calculate the pulse area as \(\mathcal{A}_{1}=\int_{0}^{\tau}\Omega(t)dt=\pi\sin\alpha+\alpha\). If we use the Hamiltonian in Eq. (21), which corresponds to \(K_{22}(t)=-\dot{\beta}(t)\), to realize the nonadiabatic holonomic gate, its piecewise expression is \(H(t)=i\dot{\alpha}(t)/2|e\rangle\langle b|+\text{H.c.}\) for \(t\in[0,\tau_{1}]\), \(\dot{\beta}(t)[\cos\alpha\cot^{2}(\alpha/2)-1]|e\rangle\langle e|-[\dot{\beta}(t)\cot(\alpha/2)\cos^{2}(\alpha/2)\exp[i\beta(t)]|e\rangle\langle b|+\text{H.c.}]\) for \(t\in(\tau_{1},\tau_{2}]\), and \(i\dot{\alpha}(t)/2|e\rangle\langle b|+\text{H.c.}\) for \(t\in(\tau_{2},\tau]\). Correspondingly, the pulse envelope reads \(\Omega(t)=\dot{\alpha}(t)/2\) for \(t\in[0,\tau_{1}]\), \(\dot{\beta}(t)\cot(\alpha/2)\cos^{2}(\alpha/2)\) for \(t\in(\tau_{1},\tau_{2}]\), and \(-\dot{\alpha}(t)/2\) for \(t\in(\tau_{2},\tau]\). In this case, the pulse area reads \(\mathcal{A}_{2}=\int_{0}^{\tau}\Omega(t)dt=2\pi\cot(\alpha/2)\cos^{2}(\alpha/ 2)+\alpha\). Comparing the two cases, we have \(\mathcal{A}_{1}<\mathcal{A}_{2}\) for \(\alpha\in[0,\pi/2)\) and \(\mathcal{A}_{2}<\mathcal{A}_{1}\) for \(\alpha\in(\pi/2,\pi]\). 
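The crossover at \(\alpha=\pi/2\) is easy to check numerically; a minimal sketch evaluating the two pulse areas at a few values of \(\alpha\):

```python
import numpy as np

# Pulse areas A1 and A2 derived above, on the Fig. 1 path
for a in (np.pi/4, np.pi/2, 3*np.pi/4):
    A1 = np.pi*np.sin(a) + a                           # K_22 = 0, Eq. (20)
    A2 = 2*np.pi/np.tan(a/2)*np.cos(a/2)**2 + a        # K_22 = -beta_dot, Eq. (21)
    print(round(a, 3), round(A1, 3), round(A2, 3))
# A1 < A2 below alpha = pi/2, A1 = A2 at pi/2, and A2 < A1 above it,
# i.e. the faster choice depends on the target angle phi(tau) = pi(1 - cos(alpha)).
```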
Since \(\phi(\tau)=\pi(1-\cos\alpha)\), we see that it needs a shorter time to realize the quantum gate with \(\phi(\tau)\in[0,\pi)\) by using the Hamiltonian in Eq. (20) than that in Eq. (21). Conversely, it needs a shorter time to realize the quantum gate with \(\phi(\tau)\in(\pi,2\pi]\) by using the Hamiltonian in Eq. (21) than that in Eq. (20). Therefore, our strategy allows us to optimize the evolution time of realizing a nonadiabatic holonomic gate. Similarly, we can demonstrate that our strategy also allows us to optimize the evolution path of realizing a nonadiabatic holonomic gate, as many paths can be chosen for realizing the same nonadiabatic holonomic gate. For example, when using resonant coupling to realize nonadiabatic holonomic gates, the previous schemes based on the parallel transport condition only permit us to take the great circle and the orange-slice-shaped loop, while the scheme based on the commutation relation permits us to take many available paths, not limited to the great circle and the orange-slice-shaped loop. Here, resonant coupling means that the detuning of lasers is equal to zero. The Hamiltonians in Eqs. (20) and (21) are achievable in physical systems. To see this, we can generally express them in the form \(H(t)=\Delta(t)|e\rangle\langle e|+[\tilde{\Omega}(t)|e\rangle\langle b|+\text{H.c.}]\). By substituting \(|b\rangle=\sin(\theta/2)\exp(-i\varphi)|0\rangle-\cos(\theta/2)|1\rangle\) into the expression, \(H(t)\) can be further written as \(H(t)=\Delta(t)|e\rangle\langle e|+[\tilde{\Omega}(t)\sin(\theta/2)\exp(i\varphi)|e\rangle\langle 0|-\tilde{\Omega}(t)\cos(\theta/2)|e\rangle\langle 1|+\text{H.c.}]\). Obviously, such a Hamiltonian describes a three-level quantum system driven by two off-resonant lasers with common detuning \(\Delta(t)\) and different Rabi frequencies \(\tilde{\Omega}(t)\sin(\theta/2)\exp(i\varphi)\) and \(-\tilde{\Omega}(t)\cos(\theta/2)\). It can be implemented in many physical systems, such as superconducting circuits [49] and nitrogen-vacancy centers in diamond [54]. Here, \(\theta\) and \(\varphi\) completely determine the direction of the rotation axis \(\mathbf{n}\), and they are constants for a specific quantum gate. \(\Delta(t)\) and \(\tilde{\Omega}(t)\) are determined by \(\alpha(t)\) and \(\beta(t)\). For a given evolution path traced by \((\alpha(t),\beta(t))\), \(\Delta(t)\) and \(\tilde{\Omega}(t)\) can be fixed and thus \(H(t)\) is completely determined. For example, if the evolution path traced by \((\alpha(t),\beta(t))\) is taken as the one in Fig. 1, the Hamiltonian in Eq. (20) yields \(H(t)=i\dot{\alpha}(t)/2|e\rangle\langle b|+\text{H.c.}\) for \(t\in[0,\tau_{1}]\), \(-\dot{\beta}(t)(1+\cos\alpha)|e\rangle\langle e|+[\dot{\beta}(t)\sin\alpha\exp[ i\beta(t)]/2|e\rangle\langle b|+\text{H.c.}]\) for \(t\in(\tau_{1},\tau_{2}]\), and \(i\dot{\alpha}(t)/2|e\rangle\langle b|+\text{H.c.}\) for \(t\in(\tau_{2},\tau]\). Here, \(\Delta(t)=0\) and \(\tilde{\Omega}(t)=i\dot{\alpha}(t)/2\) for \(t\in[0,\tau_{1}]\cup(\tau_{2},\tau]\), and \(\Delta(t)=-\dot{\beta}(t)(1+\cos\alpha)\) and \(\tilde{\Omega}(t)=\dot{\beta}(t)\sin\alpha\exp[i\beta(t)]/2\) for \(t\in(\tau_{1},\tau_{2}]\). Figure 1: The Bloch sphere representation of the evolution path that starts from the north pole along the great circle with \(\beta(t)=0\) to the point \((\alpha,0)\), then along the circle with \(\alpha(t)=\alpha\) for one round, and finally along the great circle with \(\beta(t)=0\) back to the north pole. 
The success of our scheme depends on the condition stated in Eq. (16), which guarantees the removal of the dynamical part. If the Hamiltonian of the quantum system is exactly controlled, the condition determined by \((\alpha(t),\beta(t))\) is strictly satisfied and the holonomic gate can be accurately realized. However, if the control parameters, such as \(\Delta(t)\) and \(\tilde{\Omega}(t)\) in the Hamiltonian, contain errors due to inevitable noise, the evolution path traced by \((\alpha(t),\beta(t))\) will not be the desired one and thus the accuracy of the quantum gate may be affected. For this, we numerically simulate the performance of our scheme under imperfect parameters \(\Delta(t)\) and \(\tilde{\Omega}(t)\) by taking the widely used Hadamard gate \(H\) as an example. The Hadamard gate \(H\) is a rotation operation with the rotation axis \((\sigma_{x}+\sigma_{z})/\sqrt{2}\) and rotation angle \(\pi\), which correspond to \(\theta=\pi/4\), \(\varphi=0\), and \(\phi(\tau)=\pi\). For our purpose, we take the input state as \(|0\rangle\), the Hamiltonian as that in Eq. (20), and the evolution path traced by \((\alpha(t),\beta(t))\) as the one in Fig. 1. Furthermore, we set \(\alpha(t)=\pi\sin[\pi t/(2\tau_{1})]/2\) and \(\beta(t)=0\) for \(t\in[0,\tau_{1}]\), \(\alpha(t)=\pi/2\) and \(\beta(t)=2\pi(t-\tau_{1})/(\tau_{2}-\tau_{1})\) for \(t\in(\tau_{1},\tau_{2}]\), and \(\alpha(t)=\pi\sin\{\pi(\tau-t)/[2(\tau-\tau_{2})]\}/2\) and \(\beta(t)=0\) for \(t\in(\tau_{2},\tau]\). Then, we have \(\phi(\tau)=\oint_{C}(1-\cos\alpha)d\beta/2=\pi\) and thus the Hadamard gate \(H\) can be realized. In this case, the parameters should be taken as \(\Delta(t)=0\) and \(\tilde{\Omega}(t)=i\pi^{2}\cos[\pi t/(2\tau_{1})]/(8\tau_{1})\) for \(t\in[0,\tau_{1}]\), \(\Delta(t)=-2\pi/(\tau_{2}-\tau_{1})\) and \(\tilde{\Omega}(t)=\pi\exp[i2\pi(t-\tau_{1})/(\tau_{2}-\tau_{1})]/(\tau_{2}-\tau_{1})\) for \(t\in(\tau_{1},\tau_{2}]\), and \(\Delta(t)=0\) and \(\tilde{\Omega}(t)=-i\pi^{2}\cos\{\pi(\tau-t)/[2(\tau-\tau_{2})]\}/[8(\tau-\tau_{2})]\) for \(t\in(\tau_{2},\tau]\). Let us now assume that there exist systematic errors for the parameters such that \(\Delta(t)\rightarrow(1+\epsilon)\Delta(t)\) and \(\tilde{\Omega}(t)\rightarrow(1+\epsilon)\tilde{\Omega}(t)\), where \(\epsilon\) is a small number. With the aid of numerical simulation, we calculate the fidelity \(F=|\langle\phi_{d}|\phi_{r}\rangle|^{2}\) between the desired output state \(|\phi_{d}\rangle\) and the real output state \(|\phi_{r}\rangle\). The result indicates that the fidelities corresponding to \(\epsilon=0.05\), \(0.10\), \(0.15\), \(0.20\), \(0.25\), and \(0.30\) can be up to \(99.99\%\), \(99.95\%\), \(99.56\%\), \(98.74\%\), \(97.27\%\), and \(95.09\%\), respectively, as depicted in Fig. 2. 
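A simulation of this kind takes only a few lines; the sketch below integrates the piecewise Hamiltonian with the parameters listed above, for illustrative values of \(\tau_{1}\), \(\tau_{2}\) and \(\tau\) (the ideal gate is independent of these choices), and applies the common scaling \(1+\epsilon\) to \(\Delta(t)\) and \(\tilde{\Omega}(t)\):

```python
import numpy as np
from scipy.linalg import expm

t1, t2, tau, N = 0.25, 0.75, 1.0, 6000          # illustrative segment times
th = np.pi/4                                     # Hadamard: theta = pi/4, varphi = 0
b = np.array([np.sin(th/2), -np.cos(th/2), 0.0])  # |b>
e = np.array([0.0, 0.0, 1.0])                     # |e>

def H(t, eps):
    """Piecewise Hamiltonian above, scaled by (1 + eps)."""
    if t <= t1:
        delta, om = 0.0, 1j*np.pi**2*np.cos(np.pi*t/(2*t1))/(8*t1)
    elif t <= t2:
        delta = -2*np.pi/(t2 - t1)
        om = np.pi*np.exp(2j*np.pi*(t - t1)/(t2 - t1))/(t2 - t1)
    else:
        delta, om = 0.0, -1j*np.pi**2*np.cos(np.pi*(tau - t)/(2*(tau - t2)))/(8*(tau - t2))
    off = om*np.outer(e, b.conj())
    return (1 + eps)*(delta*np.outer(e, e) + off + off.conj().T)

psi_d = np.array([1.0, 1.0])/np.sqrt(2)          # Hadamard acting on |0>
dt = tau/N
for eps in (0.0, 0.1, 0.2, 0.3):
    psi = np.array([1.0, 0.0, 0.0], dtype=complex)
    for n in range(N):
        psi = expm(-1j*H((n + 0.5)*dt, eps)*dt) @ psi
    print(eps, round(abs(np.vdot(psi_d, psi[:2]))**2, 4))
# eps = 0 gives F ~ 1 up to integration error; F degrades smoothly with eps
```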
Besides arbitrary one-qubit gates, a nontrivial two-qubit gate is also needed for nonadiabatic holonomic quantum computation. To realize a two-qubit gate, one can take the auxiliary bases as \[|\nu_{1}(t)\rangle= |00\rangle,\quad|\nu_{2}(t)\rangle=|01\rangle,\] \[|\nu_{3}(t)\rangle= \cos\frac{\theta}{2}|10\rangle+\sin\frac{\theta}{2}e^{i\varphi}|11\rangle,\] \[|\nu_{4}(t)\rangle= \cos\frac{\alpha(t)}{2}\sin\frac{\theta}{2}e^{-i\varphi}|10\rangle -\cos\frac{\alpha(t)}{2}\cos\frac{\theta}{2}|11\rangle+\sin\frac{\alpha(t)}{2}e^{i\beta(t)}|ee\rangle,\] \[|\nu_{5}(t)\rangle= \sin\frac{\alpha(t)}{2}\sin\frac{\theta}{2}e^{-i[\varphi+\beta(t)]}|10 \rangle-\sin\frac{\alpha(t)}{2}\cos\frac{\theta}{2}e^{-i\beta(t)}|11\rangle-\cos\frac{\alpha(t)}{2}|ee\rangle, \tag{22}\] where \(\theta\) and \(\varphi\) are time-independent parameters and \(\alpha(t)\) and \(\beta(t)\) are time-dependent parameters with \(\alpha(0)=\alpha(\tau)=0\). Note that \(\{|\nu_{1}(t)\rangle,|\nu_{2}(t)\rangle\}\) are invariant and \(\{|\nu_{3}(t)\rangle,|\nu_{4}(t)\rangle,|\nu_{5}(t)\rangle\}\) have the same form as Eq. (11). Therefore, we can use a similar approach to one-qubit gates to realize the two-qubit gate. So far, we have demonstrated that besides requiring the instantaneous dynamical part to be always zero, nonadiabatic holonomic quantum computation can also be realized by separating the dynamical part of the total phase from the geometric part and then removing the dynamical part. This is similar to the case of nonadiabatic geometric quantum computation [8; 9], where the dynamical phase can be removed by requiring the instantaneous dynamical part to be always zero or by a dynamical compensation method with multiple evolution paths [55]. Furthermore, nonadiabatic geometric quantum computation can also be realized without removing the dynamical phase, by requiring the dynamical phase to be proportional to the geometric phase [56; 57]. However, it remains an open problem to realize nonadiabatic holonomic quantum computation without removing the dynamical phase. ## V Conclusion In conclusion, by introducing the commutation relation defined in Eq. (5), we put forward a strategy to design nonadiabatic holonomic quantum computation. The key to realizing a nonadiabatic holonomic gate based on this strategy is to construct the Hamiltonian of the quantum system that possesses a cyclically evolutional subspace and satisfies the commutation relation. The commutation relation guarantees that the dynamical part of the evolution operator is separated from the geometric part, which can be removed by properly choosing evolution parameters. To show the practicability of our strategy, a set of Hamiltonians that can realize nonadiabatic holonomic quantum computation is also given. The schemes of nonadiabatic holonomic quantum computation based on the commutation relation are more flexible than the previous ones as the commutation relation is more relaxed than the parallel transport condition. The quantum systems satisfying the commutation relation, containing those satisfying the parallel transport condition as a subset, are more general than the latter. They provide more options for realizing nonadiabatic holonomic quantum computation, and hence allow us to optimize realizations such as the evolution time and evolution paths. ###### Acknowledgements. We acknowledge support from the National Natural Science Foundation of China through Grant No. 12174224.
2301.05676
Simplified likelihoods using linearized systematic uncertainties
This paper presents a simplified likelihood framework designed to facilitate the reuse, reinterpretation and combination of LHC experimental results. The framework is based on the same underlying structure as the widely used HistFactory format, but with systematic uncertainties considered at linear order only. This simplification leads to large gains in computing performance for the evaluation and maximization of the likelihood function, compared to the original statistical model. The framework accurately describes non-Gaussian effects from low event counts, as well as correlated uncertainties in combinations. While primarily targeted towards binned descriptions of the data, it is also applicable to unbinned models.
Nicolas Berger
2023-01-13T17:48:42Z
http://arxiv.org/abs/2301.05676v5
# Simplified likelihoods using linearized systematic uncertainties ###### Abstract This paper presents a simplified likelihood framework designed to facilitate the reuse, reinterpretation and combination of LHC experimental results. The framework is based on the same underlying structure as the widely used HistFactory format, but with systematic uncertainties considered at linear order only. This simplification leads to large gains in computing performance for the evaluation and maximization of the likelihood function, compared to the original statistical model. The framework accurately describes non-Gaussian effects from low event counts, as well as correlated uncertainties in combinations. While primarily targeted towards binned descriptions of the data, it is also applicable to unbinned models. ## 1 Introduction The statistical models describing experimental measurements are a key component of LHC data analysis. Consisting of the probability distribution function (PDF) of the measurement together with the observed dataset, they are used to compute the final experimental results -- e.g. confidence intervals for model parameters, or significance values for possible excesses over background -- often through the use of frequentist profile-likelihood ratio (PLR) methods [1]. They can also be utilized to make further use of the measurement information, for instance in combinations with other results, or as reinterpretations in the context of alternative signal models. Despite this central role, statistical models are not systematically made available as part of experimental publications. This is partly for technical reasons: first, they are often complex, with up to \(O(10^{4})\) parameters in some cases [2]. A single maximization of the likelihood function, which is needed to compute the PLR, can therefore require up to several hours or days of computation time. Another limitation is the fact that the statistical models of LHC measurements are typically implemented within formats and tools not widely used in other fields, such as the ROOT framework [3]. The information provided in publications, such as the best-fit value of the parameters of interest (POIs) and the covariance matrix of their measurement, typically allows a partial reconstruction of the model. However, this is only possible under additional assumptions - in particular Gaussian approximations that do not accurately describe data taken in the Poisson regime with low expected event counts. In cases where full PLR scans are published, the description of systematic uncertainties also does not typically allow a full separation of the different sources of uncertainty, so that correlations across different measurements cannot be properly accounted for when performing their combination. For these reasons, recent efforts have encouraged the publication of faithful representations of the experimental statistical models under FAIR (Findable, Accessible, Interoperable, Reusable) principles [4], in particular with a view towards reinterpretations targeting alternative physics models [5; 6]. This objective can be realized in particular through their publication in open formats. Some recent progress has been achieved in this direction, such as the publication of statistical models by the ATLAS collaboration using the pyhf [7; 8] framework. These cases however remain rare so far, in particular due to the limitations described above. 
Simplified likelihoods offer compromise solutions that aim to provide less complex descriptions of the experimental statistical models while remaining more accurate than Gaussian approximations. Several approaches have been proposed [9; 10; 11; 12; 13]. This work describes a simplification applied to statistical models, in which the dependence on the POIs of the measurement is treated exactly, but the remaining _nuisance parameters_ (NPs) are considered at linear order only. This allows the maximization of the likelihood function with respect to the NPs (usually denoted as _profiling_ the likelihood function) to be performed in closed form using matrix algebra techniques. This in turn can significantly decrease the computing time of the PLR computation, since the NPs, which are used in particular to describe systematic uncertainties, typically form a large fraction of the model parameters. The structure of the simplified model, in terms of the POIs, NPs, measurement regions and event samples, remains faithful to the original model. The models are stored in plain text, and computations are performed using python-based tools. The method is applicable to both binned and unbinned descriptions of the experimental data, with unbinned models treated in a binned approximation. This flavor of simplified likelihoods is denoted as Simplified Likelihoods with Linearized Systematics (SLLS) in the rest of this paper to avoid confusion with other simplified likelihood formats. The paper is organized as follows: the SLLS formalism is presented in detail in Section 2; Section 3 shows a realistic application to an ATLAS search for supersymmetric particles; an application to an unbinned model is presented in Section 4, and Sections 5 and 6 present a discussion of these results and conclusions. ## 2 Simplified likelihood formalism ### The HistFactory framework The simplified likelihoods described in this work are based on the HistFactory framework [14], which is widely used in LHC experiments and implemented within both ROOT and pyhf. It encodes measurements derived from multiple event counts as a set of _channels_, each corresponding to an independent set of data, consisting of one or several counting bins. In each bin, a counting experiment is described using a Poisson distribution. Each expected event yield is expressed as a sum of contributions from several _samples_, representing both signal(s) and background(s), and each is a function of the POIs and NPs of the model. Systematic uncertainties are represented as NPs that are constrained by external information described by a constraint PDF. This constraint is a representation of a separate _auxiliary_ experiment, sensitive to the value of the NP through the measurement of an _auxiliary observable_. The full likelihood function is written as \[L(\mathbf{\mu},\mathbf{\theta})=\prod_{c=1}^{N_{\text{channels}}}\prod_{b=1}^{N_{\text {bins},c}}\text{Pois}\left(n_{cb},\sum_{s=1}^{N_{\text{samples},c}}\nu_{cbs}( \mathbf{\mu},\mathbf{\theta})\right)\prod_{l=1}^{N_{\text{constraints}}}C_{l}(\tilde{ \theta}_{l},\theta_{l}) \tag{1}\] where the index \(c\) runs over the \(N_{\text{channels}}\) measurement channels, \(b\) runs over the \(N_{\text{bins},c}\) bins in channel \(c\), and \(s\) over the \(N_{\text{samples},c}\) samples. The observed event yield in bin \(b\) of channel \(c\), denoted by \(n_{cb}\), is described by the Poisson PDF Pois in terms of the expected yields \(\nu_{cbs}(\mathbf{\mu},\mathbf{\theta})\) for each sample \(s\). 
The \(\mathbf{\mu}\) and \(\mathbf{\theta}\) refer collectively to the POIs and the NPs, respectively, and the index \(l\) runs over the \(N_{\text{constraints}}\) constrained NPs \(\theta_{l}\) and their respective auxiliary observables \(\tilde{\theta}_{l}\). The constraints \(C_{l}\) are in principle arbitrary but in practice either Poisson or Gaussian forms are used, depending on the properties of the associated systematic uncertainty. ### Simplified likelihoods with linearized systematics The SLLS formalism introduced in this paper brings two simplifications to the HistFactory description. Firstly, the impact of the NPs on the log-likelihood value is described at linear order only. In particular, the \(\nu_{cbs}\) are expressed as a linear function of the NPs, \[\nu_{cbs}(\mathbf{\mu},\mathbf{\theta})=\nu_{cbs}^{\text{nom}}(\mathbf{\mu})\left[1+\sum_ {k=1}^{N_{\text{NP}}}\Delta_{cbsk}(\theta_{k}-\theta_{k}^{\text{nom}})\right]. \tag{2}\] The \(\nu_{cbs}^{\text{nom}}(\mathbf{\mu})\) are the expected event yields computed at the nominal values \(\theta_{k}^{\text{nom}}\) of the NPs. The \(\Delta_{cbsk}\) are linear coefficients specifying the impact of \(\theta_{k}\) on \(\nu_{cbs}\), for each of the \(N_{\text{NP}}\) parameters \(\theta_{k}\). As noted above, the dependence of the \(\nu_{cbs}(\mathbf{\mu},\mathbf{\theta})\) on the parameters of interest \(\mathbf{\mu}\) is described exactly. The linear approximation in the impact of the NPs is also applied to the Poisson distributions, as described in Appendix A. Secondly, the constraints \(C_{l}\) are all assumed to be Gaussian, and are collectively represented as a single multivariate Gaussian PDF with central value \(\tilde{\mathbf{\theta}}\) and inverse covariance matrix \(\Gamma\). With these assumptions, the profiled value \(\hat{\hat{\theta}}_{k}(\mathbf{\mu})=\arg\max_{\theta_{k}}L(\mathbf{\mu},\mathbf{\theta})\) of the parameter \(\theta_{k}\) at a given value \(\mathbf{\mu}\) of the POIs can be computed in closed form as \[\hat{\hat{\theta}}_{k}(\mathbf{\mu})=\theta_{k}^{\rm nom}+\sum_{k^{\prime}}\left[( \Gamma+P(\mathbf{\mu}))^{-1}\right]_{kk^{\prime}}\left[\sum_{k^{\prime\prime}} \Gamma_{k^{\prime}k^{\prime\prime}}(\tilde{\theta}_{k^{\prime\prime}}-\theta_ {k^{\prime\prime}}^{\rm nom})-Q_{k^{\prime}}(\mathbf{\mu})\right] \tag{3}\] with the vector \(Q(\mathbf{\mu})\) and the matrix \(P(\mathbf{\mu})\) given by \[Q_{k}(\mathbf{\mu}) =\sum_{c=1}^{N_{\rm channels}}\sum_{b=1}^{N_{\rm bins,c}}(\nu_{cb }^{\rm nom}(\mathbf{\mu})-n_{cb})\sum_{s=1}^{N_{\rm samples,c}}\frac{\nu_{cbs}^{ \rm nom}(\mathbf{\mu})}{\nu_{cb}^{\rm nom}(\mathbf{\mu})}\Delta_{cbsk} \tag{4}\] \[P_{kk^{\prime}}(\mathbf{\mu}) =\sum_{c=1}^{N_{\rm channels}}\sum_{b=1}^{N_{\rm bins,c}}n_{cb} \sum_{s,s^{\prime}=1}^{N_{\rm samples,c}}\frac{\nu_{cbs}^{\rm nom}(\mathbf{\mu}) \nu_{cbs^{\prime}}^{\rm nom}(\mathbf{\mu})}{[\nu_{cb}^{\rm nom}(\mathbf{\mu})]^{2}} \Delta_{cbsk}\Delta_{cbs^{\prime}k^{\prime}} \tag{5}\] where \(\nu_{cb}^{\rm nom}(\mathbf{\mu})=\sum_{s}\nu_{cbs}^{\rm nom}(\mathbf{\mu})\). The \(Q_{k}\) and \(P_{kk^{\prime}}\) terms in Eq. 3 encode the impact of the data on the value \(\hat{\hat{\theta}}_{k}(\mathbf{\mu})\), while the terms involving \(\Gamma_{kk^{\prime}}\) originate from the constraint PDFs in the likelihood function. While \(P_{kk^{\prime}}\) is quadratic in the \(\Delta_{cbsk}\), it generally cannot be neglected, in particular in the case of NPs that are not associated with a constraint PDF. 
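The profiling step of Eqs. (3)-(5) amounts to building \(Q\) and \(P\) and solving one linear system; a minimal single-channel sketch (array shapes are illustrative, and a multi-channel model simply concatenates bins):

```python
import numpy as np

def profile_nps(n, nu_nom, delta, gamma, theta_nom, theta_tilde):
    """Closed-form profiled NPs of Eq. (3) for a single channel.
    n: observed yields [bins]; nu_nom: nominal per-sample yields at the
    chosen POI values [bins, samples]; delta: linear impacts
    [bins, samples, NPs]; gamma: inverse covariance of the constraints."""
    nu_tot = nu_nom.sum(axis=1)                  # nu_b
    frac = nu_nom / nu_tot[:, None]              # nu_bs / nu_b
    Q = np.einsum('b,bs,bsk->k', nu_tot - n, frac, delta)            # Eq. (4)
    P = np.einsum('b,bs,bt,bsk,btl->kl', n, frac, frac, delta, delta)  # Eq. (5)
    rhs = gamma @ (theta_tilde - theta_nom) - Q
    return theta_nom + np.linalg.solve(gamma + P, rhs)

# toy usage: 2 bins, 2 samples (signal, background), 1 unit-Gaussian NP
nu  = np.array([[5.0, 20.0], [1.0, 10.0]])
dlt = np.array([[[0.0], [0.1]], [[0.0], [0.1]]])   # 10% background impact
th_hat = profile_nps(np.array([30.0, 9.0]), nu, dlt,
                     np.array([[1.0]]), np.zeros(1), np.zeros(1))
```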
Using these relations, the profiling of the NPs at a given \(\mathbf{\mu}\) can be performed using simple matrix algebra. The size of the matrices is given by the number of NPs, which can be fairly large - in some cases up to \(O(10^{4})\) - but building these matrices and performing multiplication and inversion operations is nevertheless far quicker than the non-linear maximization of the full likelihood function using e.g. a gradient descent algorithm. While the form given in Eq. 2 is used to profile the NPs, the evaluation of the likelihood function uses instead the alternative form \[\nu_{cbs}(\mathbf{\mu},\mathbf{\theta})=\nu_{cbs}^{\rm nom}(\mathbf{\mu})\exp\left[\sum_{ k=1}^{N_{\rm NP}}\Delta_{cbsk}(\theta_{k}-\theta_{k}^{\rm nom})\right]. \tag{6}\] This guarantees that \(\nu_{cbs}(\mathbf{\mu},\mathbf{\theta})\geq 0\) for all \(\mathbf{\theta}\) as required for the expected event yield of a Poisson PDF, and provides a suitable approximation to Eq. 2 for small values of \(|\theta_{k}-\theta_{k}^{\rm nom}|\) since the two forms are equal at leading order in this quantity.1 Footnote 1: Alternatively, a variation of Eq. 2 with a truncation applied to avoid negative \(\nu_{cbs}\) can also be used. ### Implementation and storage format A python implementation of the SLLS formalism is provided in the fastprof public package2. It describes the full statistical model, including both the PDF of the measurement and the observed data. It includes tools to evaluate and profile the likelihood function and perform maximum-likelihood fits as well as higher-level computations such as hypothesis testing, limit setting and confidence interval estimation. Other tools are provided to validate the simplified models and perform other operations such as combining or pruning models. The computations make use of the linear algebra routines included in numpy[15] and the minimization routines provided by scipy[16]. Footnote 2: [https://github.com/fastprof-hep/fastprof](https://github.com/fastprof-hep/fastprof) The statistical models are stored in a plain-text format using the JSON markup language. The format specifies the POIs, NPs, auxiliary observables, and measurement channels. Each channel is described as a list of samples, specified by the nominal expected bin yields \(N_{cbs}^{\text{nom}}\), the linear impacts \(\Delta_{cbsk}\) of each NP on the expected yields, and an optional normalization factor \(K(\mathbf{\mu})\) that can be an arbitrary function of \(\mathbf{\mu}\). The expected yields are then expressed as in Eq. 2, with \(\nu_{cbs}^{\text{nom}}(\mathbf{\mu})=N_{cbs}^{\text{nom}}K(\mathbf{\mu})/K(\mathbf{\mu}^{ \text{nom}})\), where \(\mathbf{\mu}^{\text{nom}}\) is the value of the POIs for which the nominal yields \(N_{cbs}^{\text{nom}}\) are provided. The format also specifies the observed data, in terms of the observed yields for each bin of each channel and the observed values of the auxiliary observables. ### Example As an illustration, we consider a simple example measurement consisting of a single-bin counting experiment in the presence of both signal and background contributions. The expected background yield is \(b_{0}=1\), with a relative uncertainty \(\epsilon=25\%\). The background yield is treated as a NP in the fit, associated with a Gaussian constraint with an auxiliary observable \(\tilde{b}\) (as would occur in the case where the background is determined from a control region with a sufficiently large number of events). The observed yield is \(n=2\). 
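The exact likelihood of this example can be profiled numerically in a few lines; a minimal sketch, assuming the auxiliary observable sits at its nominal value (the resulting \(\Lambda(s)\) can be compared with the closed forms of Eqs. (8)-(9) below):

```python
import numpy as np
from scipy.optimize import minimize_scalar

b0, eps, n, b_tilde = 1.0, 0.25, 2, 1.0
sigma = eps * b0

def nll(s, b):
    """-2 log L up to constants: Poisson term plus Gaussian constraint."""
    return 2*(s + b) - 2*n*np.log(s + b) + ((b_tilde - b)/sigma)**2

def profiled_b(s):
    return minimize_scalar(lambda b: nll(s, b),
                           bounds=(1e-6, 10.0), method='bounded').x

s_grid = np.linspace(0.0, 5.0, 51)
prof = np.array([nll(s, profiled_b(s)) for s in s_grid])
Lambda = prof - prof.min()          # PLR scan (min approximated on the grid)
print(profiled_b(0.0), Lambda[0])   # compare with Eqs. (8)-(9) below
```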
The JSON specification of the statistical model is given in Figure 1. The results for the signal yield \(s\) are computed using its maximum likelihood estimator (MLE) \(\hat{s}\) and the profile-likelihood ratio \[\Lambda(s)=-2\log\frac{L(s,\hat{\hat{b}}(s))}{L(\hat{s},\hat{b})} \tag{7}\] where \(L(s,b)\) is the likelihood function of the measurement, \(\hat{b}\) is the MLE of \(b\) and \(\hat{\hat{b}}(s)\) its conditional MLE at a fixed value \(s\) of the signal yield. The values of \(\Lambda(s)\) can then be used to derive results such as confidence intervals on \(s\) or the discovery significance of the signal. Figure 2 shows the values of \(\Lambda(s)\) and \(\hat{\hat{b}}(s)\) computed from the model given in Figure 1 for a range of values of \(s\). In this simple case both results can also be computed in closed form as \[\hat{\hat{b}}(s) =\frac{1}{2}\left[\sqrt{(s+\tilde{b}-\tilde{b}^{2}\epsilon^{2})^ {2}+4\tilde{b}^{2}\epsilon^{2}n}-(s-\tilde{b}+\tilde{b}^{2}\epsilon^{2})\right] \tag{8}\] \[\Lambda(s) =2(s-\hat{s}+\hat{\hat{b}}(s)-\hat{b})-2n\log\left(\frac{s+ \hat{\hat{b}}(s)}{\hat{s}+\hat{b}}\right) \tag{9}\] and excellent agreement is observed between these expressions and the SLLS results. The asymmetric shape of \(\Lambda(s)\) is driven by the Poisson nature of the measurement, and the good agreement in this case is due to the fact that this feature is accounted for exactly in the simplified likelihood. Using a Gaussian approximation would yield a parabolic shape for \(\Lambda(s)\) that would provide a less accurate description. While the systematic uncertainty on the background yield plays only a small role in this example, the good agreement in the profiled values \(\hat{\hat{b}}(s)\) of the corresponding NP shows that systematic effects are also described accurately within the linear approximation. This agreement is not by construction, since SLLS only provides an approximation to the exact results of Eq. 9. For instance deviations of about 10% in the value of \(\hat{\hat{b}}(s)\) and \(\Lambda(s)\) are observed in a scenario in which the auxiliary background observable \(\tilde{b}\) is set to deviate from the nominal value \(b_{0}\) by \(2\sigma\) in the lower direction, which in turn pulls \(\hat{\hat{b}}(s)\) away from its nominal value. For the same reason, the SLLS computation of the total expected yield \(s+\hat{\hat{b}}(s)\) can take negative values when using the linear expression of Eq. 2, although this quantity is positive by construction in the exact computation. Since \(\Lambda(s)\) cannot be computed for null or negative values of \(s+\hat{\hat{b}}(s)\), this motivates the use of Eq. 6 which ensures positive-definite values for the expected event yields. Figure 1: Specification for the example SLLS model described in the text. ## 3 Application to an ATLAS search for new phenomena ### Full statistical model This section presents a realistic application of the SLLS framework to a search for new phenomena by the ATLAS collaboration [17] for which the full experimental statistical model has been published [18]. The search targets supersymmetric particles in final states with at least three charged leptons originating from the chargino decay \(\tilde{\chi}_{1}^{+}\to Z\ell\to 3\ell\). The analysis considers three signal regions (SRs), targeting signatures with 3 leptons (\(3\ell\)), 4 leptons (\(4\ell\)) and 4 leptons with a fully reconstructed \(W\), \(Z\) or \(H\) boson (FR). 
Each signal region is divided into 16 bins of the invariant mass \(m_{Z\ell}\) of the trilepton system. Three single-bin control regions are also included to provide data-driven estimates of the main backgrounds, from the Standard Model production of a \(WZ\) boson pair, a \(ZZ\) pair, or a \(t\bar{t}\) pair accompanied by a \(Z\) boson (\(t\bar{t}Z\)). The model includes a single parameter of interest, the signal strength \(\mu_{\rm signal}\), and 624 NPs: three unconstrained parameters representing normalization terms for the main backgrounds and 621 constrained parameters representing systematic uncertainties. The full statistical model of the analysis was published by the ATLAS collaboration as a pyhf model available in the HEPData repository [18]. In this example we consider the case of a chargino with a mass of 500 GeV with branching ratios to \(W\), \(Z\) and \(H\) bosons of respectively 20%, 60% and 20%, and equal branching ratios to \(e\), \(\mu\) and \(\tau\) for the accompanying lepton. ### Simplified model The SLLS model is computed by taking the nominal event yield for each signal and background sample in each bin of each region from the pyhf model, as published by the ATLAS collaboration.3 The \(1\sigma\) impacts of the systematic NPs are similarly determined from the definition of the systematic effects in the pyhf model. Figure 2: Values of (a) \(\Lambda(s)\) and (b) the conditional MLE \(\hat{\hat{b}}(s)\) for a range of values of the signal yield \(s\), computed from the model described in the text. In each plot, the simplified likelihood result (solid blue) is compared to an exact closed-form expression of the same quantity (dashed red), showing very close agreement. The impacts of the background normalization NPs are derived from the relative fractions of the corresponding backgrounds. The conversion is performed using an automated tool included in the fastprof package. The measurement regions of the analysis as implemented in the simplified model are shown in Figure 3. The profile likelihood scan of the signal strength parameter \(\mu_{\text{signal}}\) using the simplified model is shown in Figure 4(a). A reference scan computed using the full model is also presented for comparison, and shows that the simplified likelihood provides an adequate description of the full result. A simple Gaussian model, using the best-fit value of \(\mu_{\text{signal}}\) in the observed data computed by pyhf and the corresponding parabolic error, is also displayed and shows worse agreement. The 95% CL\({}_{s}\) upper limit on \(\mu_{\text{signal}}\) computed using the simplified model is 0.126, in good agreement with the value of 0.124 obtained using the full model. The Gaussian model yields a value of 0.114. The fits to the SLLS likelihood with fixed \(\mu_{\text{signal}}\) take about 50 ms on a laptop computer equipped with a 16-core Intel i7-10875H CPU. The fit with free \(\mu_{\text{signal}}\), which relies on non-linear rather than linear minimization for this parameter (since POIs are treated exactly), takes about 0.5 s. A full-likelihood fit performed with pyhf requires approximately 10 min on the same computing platform, a factor \(\approx 1000\) longer. Figure 3: Expected and observed event counts in the SR3l, SR4l and SRFR signal regions of the analysis of Ref. [17], shown respectively in panels (a), (b) and (c). Panel (d) shows the analysis control regions. The signal regions are binned in the \(m_{Z\ell}\) observable, while the control regions each use a single inclusive event count. 
The observed data (black points) is overlaid with stacked histograms (filled areas) representing the gaugino signal (dark blue) and the main background contributions. The full-likelihood fit times for the fixed-POI and free-POI cases are similar, since both are dominated by the non-linear minimization over the 624 NPs. These fits are performed with numpy as the numerical backend to pyhf, the same as used in the fastprof implementation of SLLS likelihoods. Better performance can however likely be achieved using other pyhf backends interfacing to tensorflow[19] or pytorch[20]. Figure 4: Comparison between the SLLS simplified model (solid lines) and the full model (dashed lines) for (a) the PLR \(\Lambda(\mu_{\text{signal}})\) as a function of \(\mu_{\text{signal}}\), (b) the profiled values of selected NPs describing systematic uncertainties and (c) the profiled values of NPs describing scale factors applied to the normalization of the main analysis backgrounds. The profiled values are shown as deviations from the nominal value of the parameters (0 for systematic uncertainties, 1 for background scaling factors), divided by the uncertainty on the parameter in the full-model fit to the observed data with free \(\mu_{\text{signal}}\). The SLLS results are computed using the fastprof tool, and the full-model results with the pyhf tool. Panel (a) also shows the PLR scan computed using a Gaussian model as described in the text. To validate the SLLS linear profiling, the profiled values of selected NPs are shown in Figures 4(b) and 4(c) for both the SLLS and the full model. Good agreement is seen between the two cases, illustrating that the original likelihood function is modeled to good approximation at the level of individual NPs. The largest deviation is seen in the scale factor for the \(t\bar{t}Z\) background, amounting to about 30% of the fit uncertainty. As a further illustration, the exclusion contour presented in Figure 9 of the original publication is recomputed using the SLLS models and compared to the full-model results. The results are shown in Figure 5, and good agreement is again observed. An exclusion contour based on Gaussian models built as described above is also presented for comparison and shows similar agreement in this case, in part due to the fact that the signal production cross-sections vary rapidly with the chargino mass. Figure 5: Exclusion plot in the plane of chargino mass and its branching ratio to \(Z\) bosons, assuming equal branching ratios to \(W\) and \(H\) and to all lepton flavors. The computation from SLLS simplified likelihoods (solid blue) is compared with a reference (dashed red) taken from the top-left panel of Figure 9 in Ref. [17] and good agreement is observed. Gaussian models computed from the full likelihood as described in the text (dotted black) also show good agreement in this case. ## 4 Simplified likelihoods for unbinned models ### Binned description of unbinned models The previous examples use a binned description of the experimental measurement, which employs only two types of PDFs: Poisson distributions to describe the counting experiments in each bin, and Gaussian distributions for the constraints. Another common modeling option is _unbinned_ models, which describe the continuous probability distribution of the measurement observables. 
It is used for instance to study the \(H\to\gamma\gamma\) decay of the Higgs boson at the LHC [21; 22], as well as in many results published by LHCb (see for instance Refs. [23; 24]). It requires support for arbitrary PDF forms, as needed to describe each measurement, and therefore more general and flexible tools than for binned models. For LHC measurements, this functionality is usually provided by the RooFit package [25] distributed as part of ROOT, but this and other similar tools are not widely used outside the high-energy physics experimental community. While there are some recent ongoing efforts to provide more portable alternatives, none is currently in wide use.

A possible way forward is based on the observation that unbinned models can be approximated by binned models with a sufficiently fine binning (see Appendix B). While this approach typically runs into practical difficulties for full likelihoods due to the large number of bins required, it is feasible for simplified likelihoods, which are quick to evaluate even for relatively large bin numbers. In the rest of this section, we present the application of the SLLS framework to an unbinned model loosely inspired by an ATLAS \(H\to\gamma\gamma\) measurement.

### Full model example

We consider a simple example based on the ATLAS \(H\to\gamma\gamma\) analysis of Ref. [21]. The analysis uses an unbinned model based on the distribution of the invariant mass \(m_{\gamma\gamma}\) of the two photons in the range \(105<m_{\gamma\gamma}<160\,\mathrm{GeV}\). The Higgs boson signal manifests itself as a sharp peak in the \(m_{\gamma\gamma}\) distribution, with a position close to the Higgs boson mass and a width of 1.1–2.1 GeV depending on event kinematics. The background contributions follow smoothly falling shapes. Several signal regions (referred to as _categories_ in the rest of this section) are defined according to the properties of the signal photons and of the rest of the event.

The example uses a simplified description of the 33 categories defined in Ref. [21] to study Higgs boson production in the gluon-fusion process. The signal and background distributions are represented respectively by Gaussian and exponential distributions, instead of the more complex shapes used in Ref. [21]. The peak position and width of the Gaussian, as well as the expected signal and background yields, are taken from Ref. [21], while the exponential slope of the background is assumed to be \(-0.02\,\mathrm{GeV}^{-1}\) in all categories. The background normalizations and exponential slopes are free to vary in the fit, except for the slopes in five low-statistics categories, which are kept fixed to avoid unstable fits.

Five NPs are used to describe the leading systematic uncertainties: the uncertainty on the integrated luminosity of the dataset; on the reference cross-section for the gluon-fusion production process; on the effect of parton shower modeling on the signal yields; on the \(H\to\gamma\gamma\) reconstruction efficiency; and on the photon energy resolution. This last uncertainty leads to a change in the width of the signal peak and therefore induces highly non-linear effects in the per-bin signal yields in a binned description of the likelihood. Systematic uncertainties on the background model are implemented using separate NPs in each category, following the _spurious signal_ method described in Ref. [21]. The values of the uncertainties listed above are all taken from Ref. [21]. In total, 99 NPs are defined.
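To make the per-category model concrete, a minimal sketch of the signal-plus-background density is given below. The yields, peak position and width are placeholder values standing in for the per-category numbers taken from Ref. [21].

```python
import numpy as np
from scipy.stats import norm

LO, HI = 105.0, 160.0  # m_yy fit range in GeV

def category_density(m_yy, mu, n_sig=30.0, n_bkg=500.0,
                     peak=125.0, width=1.5, slope=-0.02):
    """Expected event density in m_yy for one category (illustrative numbers).

    Signal: Gaussian peak scaled by the signal strength mu.
    Background: exponential with fixed slope, normalized over [LO, HI].
    """
    sig = norm.pdf(m_yy, loc=peak, scale=width)
    bkg_norm = (np.exp(slope * HI) - np.exp(slope * LO)) / slope
    bkg = np.exp(slope * m_yy) / bkg_norm
    return mu * n_sig * sig + n_bkg * bkg

# Integrating this density over 0.1 GeV bins gives the expected yields used
# in the binned SLLS approximation described in the next section.
m = np.linspace(LO, HI, 551)
print(category_density(m, mu=1.0).sum() * 0.1)  # ~ n_sig + n_bkg
```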
The single POI is the Higgs boson signal strength \(\mu\), applied as a scaling factor to the expected signal yield in all categories. A dataset of events randomly generated from the model PDF is used as the "observed" data in this example.

### Simplified model

The SLLS model is built as a binned approximation to the full model. A fine binning is required to obtain an accurate description of the signal peak. In this example a uniform bin width of \(0.1\,\mathrm{GeV}\) is used, leading to 18150 bins in total for the 33 categories4. The \(m_{\gamma\gamma}\) distributions for two selected categories (the first and last in the order used in Ref. [21]) are shown in Figure 6.

Footnote 4: A variable-width binning with wider bins away from the peak can also be considered, but a uniform binning was chosen in this example for simplicity.

The conversion is performed using an automated tool distributed as part of the fastprof package. All the NPs are retained, and their effect is described in terms of their linear impact on the event yield in each measurement bin, following the SLLS procedure. In some cases, in particular for normalization parameters such as the one shown in Figure 7(a), impacts are linear by construction. By contrast, the photon energy resolution systematic shown in Figure 7(b) exhibits non-linear behavior, since it induces a change in the width of the signal peak which does not propagate linearly to the bin contents. Non-linearities become larger closer to the tails of the signal peak, but with a smaller impact on the results due to lower signal yields. The linear approximation remains in any case typically accurate for small deviations of the NP from the nominal.

Figure 6: Distributions of the observable \(m_{\gamma\gamma}\) for two selected categories – the ones labeled 0-jet, \(p_{\mathrm{T}}^{H}<10\,\mathrm{GeV}\) (top) and \(p_{\mathrm{T}}^{H}\geq 650\,\mathrm{GeV}\) (bottom) in Ref. [21]. The bin width is \(0.1\,\mathrm{GeV}\) in both cases. The signal and background contributions (blue and green histograms respectively) are shown together with the example dataset (black points), for the best-fit value of the model parameters.

Figure 8(a) shows the profile likelihood scan for the signal strength parameter \(\mu\) obtained with the linearized model. The reference result obtained with the full unbinned likelihood, computed using the RooFitUtils package5, is also shown for comparison, and excellent agreement is observed. The resulting 68% CL likelihood intervals are \(\mu=1.082^{+0.117}_{-0.093}\) for the full model and \(\mu=1.082^{+0.113}_{-0.093}\) for the simplified model. A fully Gaussian approximation, constructed as described in the previous section, yields \(\mu=1.082\pm 0.098\). The fits take about 15 min to perform on the full model, compared to about 50 ms and 1 s for simplified likelihood fits with respectively a fixed and floating \(\mu\).

Footnote 5: [https://gitlab.cern.ch/cburgard/RooFitUtils](https://gitlab.cern.ch/cburgard/RooFitUtils)

To better compare the treatment of systematic effects, the profiled values of the five NPs describing the leading systematic uncertainties are shown in Figure 8(b). The agreement between the simplified and the full model is found to be accurate to about 10% of the parameter uncertainties. This agreement is crucial to the description of this example, since the uncertainty on \(\mu\) is dominated by systematic effects.
In particular, the profiling of the photon energy resolution systematic (which represents about 20% of the total uncertainty) shows good agreement with the full model in spite of the non-linear effects highlighted in Figure 7(b). Figure 8(c) shows the difference between the profiled values of the other NPs in the simplified and full models, normalized to their fit uncertainty. This difference is below 10% of the fit uncertainty for about 80% of the parameters.

Figure 7: Relative change in expected bin yields as a function of the normalized parameter value for two cases: (a) the impact of the background normalization parameter nBkg_000 on the expected background yield; and (b) the impact of the photon energy resolution parameter npPER on the expected signal yield. In both cases, the bin belongs to the first category of the model and is located at \(m_{\gamma\gamma}\approx 127\) GeV, about \(0.7\sigma\) above the signal peak. The impacts computed from the full model (dots) are compared with the linear impacts computed from Eq. 2 (dashed red line) and the non-linear impacts from Eq. 6 (solid blue line).

## 5 Discussion

As observed in the examples shown in this paper, linearized NP impacts provide a generally adequate approximation of their behavior in the full model, in particular in the description of systematic effects. It can be noted that the approximation approaches the exact description in situations where the impact of the NP is naturally linear, such as the case shown in Figure 7(a). Discrepancies are expected in cases which deviate from this ideal configuration, in particular for:

* Large non-linear systematic uncertainties, with effects that are not fully accounted for in the linear approximation, such as the one shown in Figure 7(b).
* Asymmetric systematic uncertainties, with different impacts for NPs above or below their nominal value. These effects cannot be included in the linearized profiling, although they can be taken into account for the evaluation of the likelihood function.
* Low expected event yields, leading to Poisson counts that are not well-described by Gaussian distributions. While the Poisson distribution itself is described exactly in the SLLS formalism, non-linearities can occur due to systematics on the expected event yield, since the Poisson PDF does not depend linearly on its expected yield.

Figure 8: Comparison between the SLLS model (solid lines) and the full model (dashed lines) for (a) the profile likelihood \(\Lambda(\mu)\) as a function of the POI \(\mu\); (b) the profiled values of a selection of NPs describing systematic uncertainties; and (c) the difference between the profiled values of each NP obtained from the SLLS model and the full model, divided by its uncertainty in the full-model fit to the data with free \(\mu\).

These situations are partially covered in the examples described in this paper, and it is encouraging that in these cases at least, the linear description seems adequate. Simplified likelihoods should nevertheless be carefully validated against the full model in each case. Tools to perform these checks are included in the fastprof package, using methods similar to those shown in this paper.
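The first of these cases can be explored with a toy computation. In the sketch below, a resolution-like NP scales the width of a Gaussian peak, and the exact yield in a single tail bin is compared to its linearized version; the \(\pm 10\%\) width change per \(1\sigma\) and all other numbers are assumptions chosen for illustration, not values from the models above.

```python
import numpy as np
from scipy.stats import norm

PEAK, WIDTH0 = 125.0, 1.5          # illustrative Gaussian signal peak
EDGES = (127.0, 127.1)             # one 0.1 GeV bin in the upper tail

def bin_yield(theta):
    # assumed effect: +10% relative width change per 1 sigma of the NP
    width = WIDTH0 * (1.0 + 0.10 * theta)
    return norm.cdf(EDGES[1], PEAK, width) - norm.cdf(EDGES[0], PEAK, width)

nominal = bin_yield(0.0)
impact = (bin_yield(+1.0) - bin_yield(-1.0)) / 2.0  # symmetrized linear impact

for theta in (0.5, 1.0, 2.0, 3.0):
    exact, linear = bin_yield(theta), nominal + impact * theta
    print(f"theta={theta:3.1f}: exact/linear = {exact / linear:.3f}")
```

The ratio drifts away from unity as the pull grows, which is exactly the kind of deviation that such validation checks are meant to expose.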
Another limitation to take into consideration is the memory footprint of the \(\Delta_{cbsk}\) coefficients which encode the linear impacts: their number is given by \(N_{\text{NPs}}\times N_{\text{bins}}\times N_{\text{samples}}\), which can reach \(O(10^{8})\) or more for complex models. For models with a large number of bins and NPs, such as converted unbinned models, memory constraints can therefore be more stringent than those related to computation times, since the computations mainly involve matrix operations that are quite efficient even for large models.

## 6 Conclusion

Simplified likelihoods provide a convenient setting for the reuse of experimental results, and functionality that is complementary to that of full models and Gaussian approximations. The SLLS framework is based on a linear description of NP impacts which provides an approximation with two main benefits: it describes the Poisson behavior of counting measurements exactly, and it also preserves the NPs of the full model and therefore a fully granular description of its systematic uncertainties. Both of these aspects make it well-suited to the description of LHC measurements, where systematic uncertainties and non-Gaussian effects from low event counts (e.g. in tails of distributions) both play important roles.

The preservation of the NP structure allows in particular a proper treatment of correlated systematic effects when performing combinations of measurements, by identifying parameters associated with identical sources of uncertainty in the combination inputs. Since the POIs of the original model are also preserved, reinterpretations of the simplified model can be performed as for the original model. These properties have particular relevance to global combinations of LHC measurements, for instance those performed in the context of effective-field theory models [26; 27; 28; 29; 30], which are based on measurements dominated by systematic uncertainties as well as others performed in high-momentum regions with low expected event counts. Currently these combinations are typically performed under Gaussian approximations without accounting for correlated uncertainties and Poisson behavior, and the use of simplified likelihoods could improve their accuracy.

SLLS models can be built automatically from binned likelihoods implemented using the HistFactory formalism within the ROOT and pyhf frameworks, or from unbinned likelihoods using binned approximations. An implementation of the SLLS framework is provided in the fastprof package at [https://github.com/fastprof-hep/fastprof](https://github.com/fastprof-hep/fastprof). The models are stored in a plain-text JSON format, and computations and other operations are performed using python tools based on the widely available numpy and scipy libraries. Together with other full and simplified likelihood formats with complementary functionality, it is hoped that the SLLS framework will encourage the further publication of detailed statistical models by LHC experiments and beyond.

The author would like to thank Nick Wardle for providing the code for the simplified likelihoods of Ref. [9], and Tetiana Hryn'ova for valuable feedback. Plots in this paper were produced with matplotlib using the SciencePlots style package [31]. This research was funded, in whole or in part, by l'Agence Nationale de la Recherche (ANR), project ANR-22-CE31-0022.

## Appendix A Linearization procedure

This section provides a sketch of the derivation of the profile value \(\hat{\hat{\theta}}_{k}(\mathbf{\mu})\) of the NP \(\theta_{k}\) given by Eq. 3.
Starting from the likelihood in Eq. 1 and applying the linearization procedure described in Section 2.2, we obtain the negative log-likelihood \[\lambda(\mathbf{\mu},\mathbf{\theta})=\sum_{c=1}^{N_{\text{channels}}}\sum_{b=1}^{N_{\text{bins},c}}\left[\nu_{cb}(\mathbf{\mu},\mathbf{\theta})-n_{cb}\log\nu_{cb}(\mathbf{\mu},\mathbf{\theta})\right]+\frac{1}{2}(\mathbf{\theta}-\tilde{\mathbf{\theta}})\Gamma(\mathbf{\theta}-\tilde{\mathbf{\theta}}) \tag{10}\] up to an additive constant. In the expression above, indices \(c\), \(b\) and \(s\) run respectively over measurement channels, bins within each channel, and event samples. The \(\nu_{cb}(\mathbf{\mu},\mathbf{\theta})=\sum_{s}\nu_{cbs}(\mathbf{\mu},\mathbf{\theta})\) are the total expected event yields for the corresponding channel bin, and the per-sample yields \(\nu_{cbs}(\mathbf{\mu},\mathbf{\theta})\) are given by Eq. 2. The Gaussian constraints on the NPs are parameterized using the auxiliary observables \(\tilde{\mathbf{\theta}}\) and the inverse covariance matrix \(\Gamma\). The derivative of \(\lambda(\mathbf{\mu},\mathbf{\theta})\) with respect to the NPs \(\mathbf{\theta}\) is \[\frac{\partial\lambda}{\partial\mathbf{\theta}}(\mathbf{\mu},\mathbf{\theta})=\sum_{c=1}^{N_{\text{channels}}}\sum_{b=1}^{N_{\text{bins},c}}\left[\sum_{s}\nu_{cbs}^{\text{nom}}(\mathbf{\mu})\mathbf{\Delta}_{cbs}\left(1-\frac{n_{cb}}{\nu_{cb}(\mathbf{\mu},\mathbf{\theta})}\right)\right]+\Gamma(\mathbf{\theta}-\tilde{\mathbf{\theta}}) \tag{11}\] where \(\mathbf{\Delta}_{cbs}\) is the vector with components \(\Delta_{cbsk}\), the linear impacts of the parameter \(\theta_{k}\) on \(\nu_{cbs}\). The linear approximation of NP impacts is applied to the denominator as \[\frac{n_{cb}}{\nu_{cb}(\mathbf{\mu},\mathbf{\theta})}\approx\frac{n_{cb}}{\nu_{cb}^{\text{nom}}(\mathbf{\mu})}\left[1-\sum_{s}\frac{\nu_{cbs}^{\text{nom}}(\mathbf{\mu})}{\nu_{cb}^{\text{nom}}(\mathbf{\mu})}\mathbf{\Delta}_{cbs}(\mathbf{\theta}-\mathbf{\theta}^{\text{nom}})\right] \tag{12}\] and one finally obtains \[\frac{\partial\lambda}{\partial\mathbf{\theta}}(\mathbf{\mu},\mathbf{\theta})=Q(\mathbf{\mu})+P(\mathbf{\mu})\left[\mathbf{\theta}-\mathbf{\theta}^{\text{nom}}\right]+\Gamma\left[\mathbf{\theta}-\tilde{\mathbf{\theta}}\right] \tag{13}\] with \(Q(\mathbf{\mu})\) and \(P(\mathbf{\mu})\) defined by Eq. 5. The profile values \(\hat{\hat{\mathbf{\theta}}}(\mathbf{\mu})\), defined by \(\partial\lambda/\partial\mathbf{\theta}(\mathbf{\mu},\hat{\hat{\mathbf{\theta}}}(\mathbf{\mu}))=0\), are therefore given by Eq. 3.
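Setting Eq. 13 to zero amounts to a single linear solve per value of the POIs. A minimal numpy sketch, assuming \(Q(\mathbf{\mu})\), \(P(\mathbf{\mu})\) and the constraint inputs have been precomputed for the given \(\mathbf{\mu}\), reads:

```python
import numpy as np

def profile_nps(Q, P, Gamma, theta_nom, theta_aux):
    """Solve dlambda/dtheta = 0 (Eq. 13) for the profiled NP values (Eq. 3).

    Q: vector Q(mu); P: matrix P(mu); Gamma: inverse covariance of the
    constraints; theta_nom, theta_aux: nominal values and auxiliary observables.
    """
    # Q + P (theta - theta_nom) + Gamma (theta - theta_aux) = 0
    lhs = P + Gamma
    rhs = P @ theta_nom + Gamma @ theta_aux - Q
    return np.linalg.solve(lhs, rhs)
```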
## Appendix B Binned approximation to an unbinned PDF

We consider an extended unbinned PDF for an observable \(x\), \[P(\mathbf{x};\mathbf{\theta})\prod_{i=1}^{n}dx_{i}=\frac{e^{-N(\mathbf{\theta})}}{n!}N(\mathbf{\theta})^{n}\prod_{i=1}^{n}f(x_{i};\mathbf{\theta})dx_{i} \tag{14}\] where \(f(x,\mathbf{\theta})\) is the PDF for one observation of \(x\), the dataset consists of the values \(x_{1}\cdots x_{n}\), \(\mathbf{\theta}\) are the model parameters, and the expected number of observations is \(N(\mathbf{\theta})\). We include the infinitesimal volume elements \(dx_{i}\) in the expression since these will be useful below. We introduce a set of bins \(B_{a}\), \(a=1\cdots N_{\text{bins}}\), that span the allowed range of \(x\). In the spirit of finite-element analysis, we approximate \(f(x)\) by a form that is constant over each bin as \[f(x,\mathbf{\theta})=\sum_{a}f_{a}(\mathbf{\theta})I_{a}(x) \tag{15a}\] \[f_{a}(\mathbf{\theta})=\frac{1}{w_{a}}\int_{B_{a}}f(x,\mathbf{\theta})dx \tag{15b}\] where the indicator \(I_{a}(x)\) is 1 if \(x\in B_{a}\) and 0 otherwise, and \(w_{a}=\int_{B_{a}}I_{a}(x)dx\) is the measure of bin \(B_{a}\). The value \(f_{a}(\mathbf{\theta})\) is the average of \(f(x,\mathbf{\theta})\) over the bin \(B_{a}\), so that for a sufficiently fine binning and smooth \(f(x,\mathbf{\theta})\), \(f(x,\mathbf{\theta})\approx f_{a}(\mathbf{\theta})\) for \(x\in B_{a}\).

One can remove the explicit dependence on the \(x_{i}\) by integrating them out of the likelihood. The integration of the product term of Eq. 14 can be written as \[\int\prod_{i=1}^{n}f(x_{i},\mathbf{\theta})dx_{i}=\prod_{i=1}^{n}\sum_{a}f_{a}(\mathbf{\theta})\int I_{a}(x_{i})dx_{i}=\prod_{i=1}^{n}f_{a_{i}}(\mathbf{\theta})w_{a_{i}}=\prod_{a=1}^{N_{\text{bins}}}\left[f_{a}(\mathbf{\theta})w_{a}\right]^{n_{a}} \tag{16}\] where \(a_{i}\) is the index of the bin to which \(x_{i}\) belongs, \(n_{a}\) is the number of observations that fall in bin \(B_{a}\), and we have used the fact that the \(x_{i}\) are independent to propagate the integral through the product. Returning to the full expression of Eq. 14, we can write the likelihood as a function of the \(\mathbf{n}\) as \[P(\mathbf{n};\mathbf{\theta})=\frac{e^{-N(\mathbf{\theta})}}{n_{1}!\cdots n_{N_{\text{bins}}}!}N(\mathbf{\theta})^{n}\prod_{a=1}^{N_{\text{bins}}}\left[w_{a}f_{a}(\mathbf{\theta})\right]^{n_{a}} \tag{17}\] after including an additional multiplicative factor, the multinomial coefficient \(\binom{n}{n_{1},n_{2},\cdots,n_{N_{\text{bins}}}}=n!/(n_{1}!\cdots n_{N_{\text{bins}}}!)\), to account for the number of different orderings of the \(x_{i}\) that can yield a given set of \(n_{a}\). One can introduce the per-bin expected yields \[N_{a}(\mathbf{\theta})=w_{a}f_{a}(\mathbf{\theta})N(\mathbf{\theta}) \tag{18}\] and note that since \[1=\int f(x,\mathbf{\theta})dx=\sum_{a=1}^{N_{\text{bins}}}\int_{B_{a}}f(x,\mathbf{\theta})dx=\sum_{a=1}^{N_{\text{bins}}}w_{a}f_{a}(\mathbf{\theta})\] one has \(N(\mathbf{\theta})=\sum_{a}N_{a}(\mathbf{\theta})\) as expected. One can finally rewrite \[P(\mathbf{n};\mathbf{\theta})=\prod_{a=1}^{N_{\text{bins}}}\frac{e^{-N_{a}(\mathbf{\theta})}}{n_{a}!}N(\mathbf{\theta})^{n_{a}}\left[w_{a}f_{a}(\mathbf{\theta})\right]^{n_{a}}=\prod_{a=1}^{N_{\text{bins}}}\frac{e^{-N_{a}(\mathbf{\theta})}}{n_{a}!}N_{a}(\mathbf{\theta})^{n_{a}}. \tag{19}\] This takes the usual form of a binned likelihood, with a Poisson distribution in each measurement bin with expected yields given by Eq. 18.
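The construction above is straightforward to reproduce numerically. The sketch below, assuming a unit-normalized Gaussian \(f(x)\) purely for illustration, computes the per-bin yields of Eq. 18 from exact bin integrals (Eq. 15b) and checks that they sum to \(N\):

```python
import numpy as np
from scipy.stats import norm

def binned_yields(cdf, edges, N):
    """Per-bin expected yields N_a = w_a * f_a * N (Eqs. 15b and 18).

    Here w_a * f_a is the integral of f over each bin, computed exactly
    as a difference of CDF values at the bin edges.
    """
    wf = np.diff(cdf(edges))  # w_a * f_a for each bin
    return N * wf

# Illustrative check: a Gaussian f(x) on a range wide enough to contain it.
edges = np.linspace(100.0, 150.0, 501)  # 500 bins of width 0.1
yields = binned_yields(lambda x: norm.cdf(x, 125.0, 1.5), edges, N=1000.0)
print(yields.sum())  # ~ 1000, i.e. N = sum_a N_a as expected
```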
2305.17785
Lighting and Rotation Invariant Real-time Vehicle Wheel Detector based on YOLOv5
Creating an object detector in computer vision poses some common challenges when it is initially developed based on a Convolutional Neural Network (CNN) architecture. These challenges are more apparent when creating a model that needs to adapt to images captured with various camera orientations, lighting conditions, and environmental changes. The availability of initial training samples covering all these conditions can be an enormous challenge, with a time and cost burden. While the problem can exist when creating any type of object detector, some object types are less common and have no publicly available pre-labeled image datasets. Sometimes public datasets are neither reliable nor comprehensive for a rare object type. The vehicle wheel is one such example, chosen here to demonstrate the approach of creating a lighting and rotation invariant real-time detector based on the YOLOv5 architecture. The objective is to provide a simple approach that could be used as a reference for developing other types of real-time object detectors.
Michael Shenoda
2023-05-28T18:06:46Z
http://arxiv.org/abs/2305.17785v1
# Lighting and Rotation Invariant Real-time Detector Based on YOLOv5: Vehicle Wheel Detector

###### Abstract

Creating an object detector in computer vision poses some common challenges when it is initially developed based on a Convolutional Neural Network (CNN) architecture. These challenges are more apparent when creating a model that needs to adapt to images captured with various camera orientations, lighting conditions, and environmental changes. The availability of initial training samples covering all these conditions can be an enormous challenge, with a time and cost burden. While the problem can exist when creating any type of object detector, some object types are less common and have no publicly available pre-labeled image datasets. Sometimes public datasets are neither reliable nor comprehensive for a rare object type. The vehicle wheel is one such example, chosen here to demonstrate the approach of creating a lighting and rotation invariant real-time detector based on the YOLOv5 architecture. The objective is to provide a simple approach that could be used as a reference for developing other types of real-time object detectors.

## I Introduction

Humans are surprisingly good at understanding the visual content of an image and can instantly provide information about the objects within it. Unfortunately, this task is a hard problem in computer vision. Since 2012, Convolutional Neural Networks (CNNs) have become very popular for object detection and classification tasks, yet the challenges of creating reliable models can be overwhelming. The YOLO architecture has taken a leap forward as the architecture of choice for real-time detectors. The original version of YOLO was designed with real-time object detection as its primary focus. The latest version, YOLOv5, has retained the same focus while simplifying the development process of creating custom models, with the power of image augmentation techniques. It provides the ability to visualize metrics and get insights into the changes made on each model update through an integration with the Weights and Biases machine learning tool. While the end result of a well-designed model works well in terms of performance and accuracy, the amount of initial development can be daunting. The objective of this paper is to outline a general guideline for achieving a model that can adapt to lighting and rotation changes with minimal initial image samples. These guidelines are presented with a practical example: implementing a model that is capable of detecting vehicle wheels in images captured with various camera orientations and lighting conditions.

After getting an overview of the YOLOv5 architecture and methodology, the basic approach is summarized as a series of steps that can be easily followed:

1. Understand the immediate use case of the detector.
2. Select the model size of YOLOv5 for the detector.
3. Understand the relationship of the object to its surroundings, and propose specific captured views that optimize the visual appearance of the object.
4. Collect initial image samples to cover the various camera orientations and lighting conditions.
5. Train the initial model with the initial image samples, with weights transferred from an existing YOLOv5 model, then evaluate.
6. Collect 3D synthetic images to cover the variety of visual appearances of the object, including various rotations.
7. Use ground truth labels extracted from the 3D models, if available.
Or use the initial model to pre-label the 3D synthetic images, then manually review them to remove any false positives and fine-tune any bounding box as necessary.
8. Train the model with the 3D synthetic images, with weights transferred from the initial training session, then evaluate.
9. Collect sample images from publicly available image datasets that represent the desired views of the object with various lighting conditions.
10. Use the previous model to pre-label the public image samples, then manually review them to remove any false positives and fine-tune any bounding box as necessary.
11. Train the model with the public image samples, with weights transferred from the previous training session, then evaluate.
12. Finally, the model should be sufficient to automatically label images collected specifically for the use case.

Fig. 1: The Proposed General Guideline for Model Development

The work presented here is a humble approach towards a simple way to develop a custom model that can be easily prototyped, speeding up the effort of creating an initial model that is reliable enough to provide machine-labeled images with minimal manual review. The main goal is to allow the machine learning engineer to carefully craft the initial model, which can then be used to automatically label images for a non-technical person to manually review and refine the bounding boxes.

## II Overview of YOLOv5

YOLO is one of the best architectures and families of object detection models, providing state-of-the-art performance with its focus on real-time detection and classification. YOLO stands for You Only Look Once. The initial YOLO [1] was created by Joseph Redmon, who later published the YOLOv2 [2] and YOLOv3 [3] papers. His work was further advanced by Alexey Bochkovskiy, who published YOLOv4 in 2020 [4]. YOLOv5, created by Ultralytics, is a family of object detection architectures and models pretrained on the COCO dataset, representing their open-source research into computer vision AI methods and incorporating lessons learned and best practices. It is a continuation of the great work that was initially developed in the earlier versions [5]. It's important to note that YOLOv5 is built using PyTorch, while the previous YOLOv4 was built using Darknet. The PyTorch approach has simplified development, since it uses Python by default.

The YOLOv5 model structure consists of two main blocks, and the overall architecture is shown in Fig. 2. The model consists of two parts, Backbone and Head. The Backbone is responsible for extracting low-level features using convolutional layers in combination with a Spatial Pixel Pair Features (SPPF) layer. The Head is responsible for extracting the high-level feature maps and performing the detections, applying anchor boxes on the feature maps to generate the final output vector of class probabilities and bounding boxes.

The YOLOv5 models originally came in four different sizes: small, medium, large, and xlarge. The size of the model depends on the complexity of the detection and classification task. The recent version 6.0 has introduced the nano size, which was primarily developed to be very lightweight to fit ultra-small devices. A nice aspect of YOLOv5 is that models are defined declaratively using yaml configuration files. The yaml configuration describes the model definition without the need to programmatically define it in Python. This aspect makes it superb for rapid model development.
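For instance, a single-class configuration can be derived from the stock medium model with a few lines. This is a hedged sketch: the file paths assume a local clone of the YOLOv5 repository, and only the class count is changed.

```python
import yaml

# Copy yolov5m.yaml and set the class count to 1 (vehicle wheel only).
with open("yolov5/models/yolov5m.yaml") as f:
    cfg = yaml.safe_load(f)

cfg["nc"] = 1  # number of classes: a single "wheel" class

with open("yolov5/models/yolov5m_wheel.yaml", "w") as f:
    yaml.safe_dump(cfg, f, sort_keys=False)
```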
The other powerful aspect of YOLOv5 is the image augmentation capability that it provides, especially when starting with a very small number of samples. It's important to note that when training on a custom dataset, a new model file needs to be created for the detector. One of the important parameters is the number of classes. In this case, we need to set it to 1 for a simple detector of a single object type like the vehicle wheel.

## III Proposed General Guideline of Model Development

### _Understanding the use case_

The proposed wheel detector has the direct use case of providing vehicle axle counts, which can be used in applications such as traffic analysis and tolling systems. The end goal is to provide a reliable method of detecting vehicle wheels regardless of camera orientation and lighting conditions.

Fig. 2: YOLOv5 architecture overview
Fig. 3: YOLOv5 model size comparison
Fig. 4: Wheel Detector Use Case

### _Model Size Selection_

Based on the published performance of the different YOLOv5 model sizes, the medium size has been chosen for the wheel detector to provide a good combination of accuracy and performance. The medium-size model definition file has been copied from the original yolov5m.yaml and the number of classes modified to 1, since we are dealing with a single type. Another important consideration during model selection is the available GPU and the training input image size. The GPU used in this case is an Nvidia GeForce RTX 3050 Ti with 4 GB of dedicated GPU memory. I chose a 512x512 image size based on initial testing of what could fit in the available GPU memory. By default YOLOv5 uses a square size for the input, and I decided to keep it that way. The reason behind that is the uncertainty of the image size for the final use case of the detector. In this case, a square input size minimizes the amount of pixel padding that needs to be added to a vertical or horizontal rectangular image, thus maximizing the image content. Of course, if the final image size is known, it would be optimal to build a detector with an input size that matches the aspect ratio of the final deployment image size.

### _Understanding Visual Appearance of Object_

The appearance of the wheel can vary with the camera orientation, with the different wheel types found on different vehicle types, and with the positioning of the wheel while the vehicle is turning left or right. Images should be collected to cover everything from simple camera angles to complex scenarios. In addition, image samples should be collected to cover various image capturing qualities. For example, samples need to cover low resolution, blurriness, and environmental changes such as fog, harsh sunlight, harsh shadows, etc. Consider the simple camera angles of straight-on 90 degrees, angled 45 degrees, and steeply angled 20 degrees, as shown in Fig. 7. Consider also the complex scenarios where the wheel could be occluded, partially visible, blurred, or at an ultra-steep angle; some examples are shown in Fig. 8.

Fig. 5: Reason for Using Square Detector Input Size
Fig. 7: Simple Camera Angles

### _Training Initial Model and Evaluation_

Initial image samples have been manually captured to cover various camera orientations as well as slight shadow and sunlight conditions. The initial set totals 72 images; some of them are shown in Figs. 9 and 10. The initial image samples have been manually labeled using LabelImg [7], a Python tool for labeling images with bounding boxes that supports the YOLO bounding box format.
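In later iterations, this manual pass is replaced by machine pre-labeling (steps 7 and 10 of the guideline). A hedged sketch using the YOLOv5 torch.hub API is shown below; the checkpoint path, image folder, and confidence threshold are illustrative assumptions. It writes label files in the YOLO format defined next.

```python
from pathlib import Path

import torch

# Load a previously trained wheel detector checkpoint (illustrative path).
model = torch.hub.load("ultralytics/yolov5", "custom",
                       path="runs/train/exp/weights/last.pt")
model.conf = 0.5  # keep only reasonably confident boxes before manual review

for image in Path("unlabeled").glob("*.jpg"):
    detections = model(str(image))
    # xywhn rows: normalized (x_center, y_center, width, height, conf, class)
    rows = detections.xywhn[0].tolist()
    lines = [f"0 {x:.6f} {y:.6f} {w:.6f} {h:.6f}" for x, y, w, h, conf, cls in rows]
    image.with_suffix(".txt").write_text("\n".join(lines))
```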
The YOLO label format for an object is defined as follows:

(**object-class**) (**x**) (**y**) (**width**) (**height**)

**object-class** is the numerical index of the object class, which in our case is 0, for a single object type. **x** and **y** represent the object center, expressed as a relative pixel position. **width** and **height** represent the width and height of the object, expressed as relative sizes. The format uses relative pixel positions and relative sizes to accommodate different scalings of the image without worrying about absolute numbers [1].

The initial model has been trained with the following parameters:

input size = 512
batch size = 4
model weights = yolov5m.pt
validation split ratio = 0.22

Fig. 8: Complex Scenarios to Consider
Fig. 9: Initial Image Samples Part 1 (total of 49 samples)
Fig. 10: Initial Image Samples Part 2 (total of 23 samples)
Fig. 11: Image Labeling using LabelImg

Let's evaluate the initial model, starting with the metrics. The metrics look reasonable for the initial training. The training and validation losses seem consistent, but the mean average precision is only hovering around 0.8. Let's check the detection performance on random samples from the validation set. The model does a good job of detecting the wheels on the validation set. Before getting overexcited, we need to consider that the validation set is not far off from how the training set looks visually. We will need to test how the model performs on the 3D synthetic images, which weren't part of the training yet. That should give better insight into how the model behaves on images that it hasn't seen before during training.

Fig. 13: Initial Model Training Box Loss
Fig. 14: Initial Model Validation Object Loss
Fig. 15: Initial Model Validation Box Loss
Fig. 16: Initial Model Mean Average Precision
Fig. 18: Initial Model Detection on Validation Set

We can clearly see that the model has false positives on other parts of the vehicles. It detects a truck mirror as a wheel, and also the mud flap of a semi truck. Also, some of the bounding boxes do not precisely localize the wheel.

### _Training with Synthetic Images and Evaluation_

3D synthetic image samples have been collected to cover various camera orientations, with the goal of providing different varieties of vehicle wheel shapes. The synthetic samples total 165 images; some of them are shown in Fig. 20. The 3D synthetic images here are just yet more 2D images, gathered from Hum3D [8]. Ideally, we would use ground truth wheel labels extracted from the 3D models, making it unnecessary to label those images by machine or by hand. Unfortunately, due to the lack of time for learning a 3D simulation tool that provides such capabilities, such as the Carla Simulator, I decided not to pursue that route. In this case, the wheels have been automatically labeled by machine using the initial model, then manually reviewed and corrected using the LabelImg [7] annotation tool.

The model has been trained with the 3D synthetic images added to the initial sample images, with the following parameters:

input size = 512
batch size = 6
model weights = last.pt, from the previous training session
validation split ratio = 0.22

Let's evaluate the model, starting with the metrics.

Fig. 19: Initial Model Detection on 3D Synthetic Images
Fig. 20: 3D Synthetic Images
Fig. 21: (3D Synthetic Images) Training Object Loss
The training and validation loss seems consistent, and the mean average precision is getting up to the 0.9 ranges Let's check the detection performance on random samples from validation set. The model has improved and the validation looks good. Now, let's test on some of the next image samples that are collected from CompCars [9] public dataset. Overall, the detection is looking good in terms of bounding box accuracy, but there are some miss detection and false positives. It seems like the false positives are shapes that looks like wheel and reflective dark gray ground that has same color shade as the vehicle tire! ### _Training with added Public Images and Evaluation_ The public datasets that have been selected are CompCars and OpenImages. I decided to start with CompCars, since it has a comprehensive dataset that contains 163 car makes with 1,716 car models [9] as well as combination of different roads with nature and buildings. The ability to get comprehensive set of vehicles is ideal to get variety of vehicle wheel shapes. Also, CompCars provide a close up view of the cars which makes it ideal for the model to learn more details about the different wheels introduced. Fig. 23: (3D Synthetic Images) Validation Object Loss Fig. 24: (3D Synthetic Images) Validation Box Loss Fig. 28: (3D Synthetic Images) Detection on CompCars Images Fig. 27: (3D Synthetic Images) Detection Validation Set Fig. 23: (3D Synthetic Images) Validation Object Loss Total of 827 images have been manually selected from CompCars to machine label them using the previous trained model then manually verified by LabelImg. Below are some of the image examples from CompCars: The model has been trained with CompCars images added to the previous sample images with the following parameters: input size = 512 batch size = 6 model weights = last.pt, from previous training session validation spit ratio = 0.22 Let's evaluate the model starting with the metrics: The model has performed well on the validation set. Then tested the model with CompCars on some of the samples from OpenImages to check the detection performance. It seems that the detection results are inconsistent. I decided to choose OpenImages as the next image set for training, since it contains some real examples that are visually close to the final use case of the detector. The total number of samples from OpenImages are 179 images that have been machine labeled using the previous trained model. Manual review has been done on the automatically labeled images. It seems that the model gets better every time with less manual edits needed to be made. The model has been trained with the samples from OpenImages added to the previous sample images with the following parameters: input size = 512 batch size = 6 model weights = last.pt, from previous training session validation spit ratio = 0.22 Let's evaluate the model starting with the metrics: This time will be comparing the metrics from OpenImages and CompCars in same plots so we can get better insight since they are getting very close to each other. Fig. 34: CompCars Detection on Random Validation Set Fig. 35: CompCars Model Testing on OpenImages Samples Fig. 36: OpenImages Samples Looking at those plots and trying to understand what just happened, it was an aha moment. The metrics all of the sudden started to look bad. Mean average precision has dropped. The interesting insight here is the bounding box loss. We can clearly see OpenImages has gotten worse than CompCars. 
At the same time, the object losses have remained very similar to each other. That's when I realized that my samples from OpenImages contain very small vehicle sizes, and their wheels are obviously too small after scaling the image down to 512 pixels during training. Those small bounding boxes can encounter rounding issues at the small scale of the 512 input size. This is especially magnified with rectangular image sizes whose aspect ratios are far from a square. That's when I came to the realization that I needed to crop the OpenImages samples to vehicle-sized regions of interest in order to maximize the wheel size in the image. I utilized the existing YOLOv5 model to do the detection and save the cropped images based on the detection bounding boxes classified as bus, car, truck, and others. Every other type of object has been used as a negative sample with an empty label file.

Before I proceeded with my theory, I decided to test the detector on a completely different image, found to mimic the final ideal use case for the detector. A good example here is the white van, highlighted with a red arrow in the full-view rectangular image: we can see how the localization has loose bounding boxes around the wheels.

Fig. 38: OpenImages and CompCars Train Object Loss
Fig. 39: OpenImages and CompCars Train Box Loss
Fig. 41: OpenImages and CompCars Validation Box Loss
Fig. 42: Model with OpenImages Model Test on Rectangular Full Image

After comparing the same vehicle in the cropped view, we can clearly see that the localization of the wheels has tight bounding boxes. This clearly explains why the object loss plot looked better than the box loss plot. The next approach is to remove the full-resolution sample images and train the model with cropped object views extracted from the full images. The cropped images total 1157, machine-labeled and manually reviewed. Finally, the model has been trained with the cropped samples from OpenImages added to the previous sample images, with the full views removed. The following parameters were used for the training:

input size = 512
batch size = 6
model weights = last.pt, from the previous training session
validation split ratio = 0.22

Let's evaluate the model and check the metrics. Again, we will be comparing the metrics from OpenImages, CompCars, and OpenImages-crop in the same plots, so we can get better insight, since they are getting really close to each other. We can clearly see that the model has improved with the cropped images from the OpenImages dataset. This confirms the theory once more, with evidence based on the metrics.

Fig. 48: OpenImages-Crop Model Train Box Loss

### _Final Model Evaluation_

Let's put everything into perspective and look at all the metrics from all iterations. Based on the metrics evaluation, the model seems to be in a good state. It's time to test with image samples that resemble the final use case of the model. Based on the detection testing results, the model has performed very well; the images are shown in the Final Model Testing Results section.

## IV Limitations and Future Improvements

The main limitation of this model is the square input size. For the final inference, you should try to stay as close to a square image as possible. The advice here is to build a model that is optimized for the final deployment image size and transfer the weights from this model to your final optimized model.
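To make the square-input padding cost explicit, the sketch below letterboxes an arbitrary frame into the 512x512 detector input. The gray pad value of 114 follows the YOLOv5 convention; the rest is a plain illustration written for this paper, not the library's own implementation.

```python
import cv2
import numpy as np

def letterbox(img: np.ndarray, size: int = 512, pad_value: int = 114):
    """Resize an image to fit a size x size square, padding the short side."""
    h, w = img.shape[:2]
    scale = size / max(h, w)
    nw, nh = int(round(w * scale)), int(round(h * scale))
    resized = cv2.resize(img, (nw, nh))
    canvas = np.full((size, size, 3), pad_value, dtype=img.dtype)
    top, left = (size - nh) // 2, (size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    # scale and offsets are needed to map detections back to the original frame
    return canvas, scale, (left, top)

# e.g. a 1920x1080 frame keeps all of its content, but over 40% of the square
# input ends up as padding, which is why near-square deployment images help.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
padded, scale, offset = letterbox(frame)
```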
Fig. 49: Wheel Model All mAP
Fig. 50: Wheel Model Train Object Loss
Fig. 51: Wheel Model Train Box Loss
Fig. 52: Wheel Model Validation Object Loss

Future work would include ground truth wheel labels extracted from a 3D simulation such as CARLA [11], or another type of 3D simulator chosen based on the type of objects of interest. A future improvement or addition to this work would be the ability to provide semantic segmentation for the detector.

## V Conclusion

In conclusion, the vehicle wheel detector model has reached a reliable stage that is a good starting point for further improvement towards a final deployment use case. We can clearly see the benefit of an iterative, analytical approach to building a deep neural network model. Crafting the model in an iterative approach can give the insights needed to correct mistakes early on in the model development. It's important to try to understand how the model behavior changes with the addition of samples, and how metrics and visual validation can be key to creating a reliable model. The guideline proposed to create the vehicle wheel detector is intended to be used as a future reference for creating any kind of detector.

## VI Final Model Testing Results

Figs. 54–64: Final Model Tests 1–11 (detection results on images resembling the final use case).
2308.09547
Test Code Refactoring Unveiled: Where and How Does It Affect Test Code Quality and Effectiveness?
Context. Refactoring has been widely investigated in the past in relation to production code quality, yet little is known about how developers apply refactoring to test code. Specifically, there is still a lack of investigation into how developers typically refactor test code and its effects on test code quality and effectiveness. Objective. This paper presents a research agenda aimed at bridging this gap of knowledge by investigating (1) whether test refactoring actually targets test classes affected by quality and effectiveness concerns and (2) the extent to which refactoring contributes to the improvement of test code quality and effectiveness. Method. We plan to conduct an exploratory mining software repository study to collect test refactoring data of open-source Java projects from GitHub and statistically analyze them in combination with quality metrics, test smells, and code/mutation coverage indicators. Furthermore, we will measure how refactoring operations impact the quality and effectiveness of test code.
Luana Martins, Valeria Pontillo, Heitor Costa, Filomena Ferrucci, Fabio Palomba, Ivan Machado
2023-08-18T13:25:53Z
http://arxiv.org/abs/2308.09547v1
# Test Code Refactoring Unveiled: Where and How Does It Affect Test Code Quality and Effectiveness?

###### Abstract

_Context._ Refactoring has been widely investigated in the past in relation to production code quality, yet little is known about how developers apply refactoring to test code. Specifically, there is still a lack of investigation into how developers typically refactor test code and its effects on test code quality and effectiveness. _Objective._ This paper presents a research agenda aimed at bridging this gap of knowledge by investigating (1) whether test refactoring actually targets test classes affected by quality and effectiveness concerns and (2) the extent to which refactoring contributes to the improvement of test code quality and effectiveness. _Method._ We plan to conduct an exploratory mining software repository study to collect test refactoring data of open-source Java projects from GitHub and statistically analyze them in combination with quality metrics, test smells, and code/mutation coverage indicators. Furthermore, we will measure how refactoring operations impact the quality and effectiveness of test code.

Software testing, Test smells, Test refactoring, Refactoring mining, Mining software repositories

## I Introduction

Refactoring is an engineered approach that allows developers to improve the quality of source code without affecting its external behavior [1]. Over the last decades, researchers have been proposing automated refactoring recommenders [2] and investigating how refactoring relates to code quality [3, 4, 5, 6]. In particular, researchers identified both benefits and drawbacks of its application [7, 8, 9], finding that, while refactoring is theoretically associated with modifications that do not affect the external behavior of source code, it may possibly induce defects [10, 11, 12], vulnerabilities [13], or even code smells [14]. These drawbacks are mainly due to refactoring activities performed manually, without the support of automated tools, and interleaved with other code changes [15].

Our research is motivated by these previous works. On the one hand, most previous studies focused on the refactoring of production code and, for this reason, we argue that there is a lack of investigations into _how refactoring is applied to test code_. On the other hand, we do not know if effects similar to those observed in previous work may arise with test refactoring, i.e., it may have some impact on both test quality and effectiveness, for instance in cases where refactoring actions target the logic of a test case. Hence, we point out a _limited knowledge of the effects of refactoring_ on both test quality and effectiveness.

An improved understanding of test refactoring would have a number of potential benefits for research and practice. In the first place, test cases represent a crucial asset for software dependability: developers' productivity is partly dependent on the quality of test cases [16], as these help practitioners decide whether to merge pull requests or deploy the system [17]. As such, analyzing how refactoring affects test cases may have a significant impact on practice. Secondly, researchers have been showing that the design of test code is approached in a substantially different way with respect to traditional development [18]. Indeed, test code must often interact with external systems, databases, or APIs to set up test environments and verify the system's behavior [19].
As a consequence, test code may suffer from different issues that, in turn, would require different refactoring operations [20]. For these reasons, new refactoring practices have been proposed with the aim of dealing with quality or effectiveness concerns [19, 20, 21]. While those refactoring practices were the target of some previous investigations, researchers limited their focus to how refactoring may influence test smells, i.e., symptoms of poor test code quality [22, 23, 24], hence not providing a comprehensive analysis of the _nature_ and _effects_ of test refactoring. More specifically, we highlight a lack of knowledge on (1) whether developers apply test refactoring operations on test classes that are actually affected by quality or effectiveness concerns, as it is supposed to be given the definition of refactoring; and (2) what the effect of refactoring is on both the quality and effectiveness of test cases.

This paper aims at addressing this gap of knowledge by proposing an _exploratory empirical study_. We first plan to collect test refactoring data from the change history of open-source Java projects from GitHub and combine them with data coming from automated instruments able to profile test code from the perspective of quality metrics, test smells, and code/mutation coverage information. Afterward, we plan to apply statistical analyses to address three main research goals, targeting (1) whether test classes with a low level of quality, in terms of test smells and code metrics, are associated with more test refactoring, (2) whether a low level of effectiveness, in terms of mutation coverage and code coverage, is associated with more test refactoring, and (3) to what extent the removal of test smells improves test code quality and effectiveness.

Our findings might benefit researchers and practitioners from multiple perspectives. In the first place, our research may reveal insights into the refactoring types that may deteriorate test code quality and effectiveness. Such information would be relevant for researchers in both the fields of refactoring and testing, as it may lead them to (1) extend the knowledge on the best and bad practices to properly apply test refactoring; (2) devise novel test refactoring approaches which are aware of the possible side effects of refactoring, e.g., we may envision multi-objective search-based refactoring approaches that optimize refactoring recommendations based on both quality and effectiveness attributes; and (3) design novel recommendation systems that may support developers in understanding how a refactoring would impact different test code properties. The results would also be useful to practitioners, who may have additional proof of the side effects of refactoring, hence possibly being stimulated further on the need to employ automated refactoring tools. In the second place, our findings may indicate the nature of the test cases more likely to be subject to refactoring operations. Researchers might use this information to define refactoring recommenders and refactoring prioritization approaches, while practitioners may become more aware of the implications of their actions.

## II Related Work

The current literature can be distinguished based on the type of empirical studies conducted. First, several studies analyzed change history information to extract knowledge about test smells and their impact. Spadini et al.
[25] investigated ten open-source projects to find a relation between six test smells and the change- and defect-proneness of both test and production code, finding that smelly JUnit tests are more change-prone and defect-prone than non-smelly ones. In addition, they found that production code is typically more defect-prone when tested by smelly tests. As such, the authors did not target test code refactoring, hence not assessing how the seemingly quality-improving actions performed by developers affect test code quality and effectiveness, i.e., the authors looked exactly in the opposite direction of our paper, focusing on how bad practices affect test code quality.

Wu et al. [26] explored the impact of eliminating test smells on the production code quality of ten open-source projects. In this respect, there are two key points that make our investigation novel: (1) test smell removal does not imply the application of refactoring: a previous empirical study [27] indeed showed that 83% of test smell removal activities are due to feature maintenance actions, i.e., our work can therefore further the knowledge on how developers apply test code refactoring; (2) the authors worked, also in this case, in the opposite direction to our work, focusing on the effects of test smells on code quality rather than analyzing the impact of test code refactoring actions. As such, our work extends the current knowledge by assessing how test refactoring is applied and what its impact is on multiple aspects of test code, such as quality and effectiveness.

Peruma et al. [24] investigated the relationship between refactoring changes and their effect on test smells. The authors used Refactoring Miner [28] to detect refactoring operations and the tsDetect tool [29] to identify the test smells in unit test files of 250 open-source Android apps. Results showed that refactoring operations in test and non-test files differ, and that refactorings co-occur with test smells. With respect to the work by Peruma et al. [24], we first do not limit ourselves to the analysis of test smells, but also consider additional indicators of test code quality and effectiveness: in this sense, ours will represent a more comprehensive analysis of the role of test refactoring. Second, we assess the actual effects of test refactoring on test code quality and effectiveness, providing insights into how various test refactoring types may support the evolutionary activities of developers.

A second line of research is represented by qualitative studies targeting developers' perception of test refactoring. Damasceno et al. [30] investigated the impact of test smell refactoring on internal quality attributes, reporting some insights that may potentially be in line with the results of our study, e.g., they highlighted the impact of test smell refactoring on internal quality attributes. Our work differs in several respects. In the first place, the authors specifically focused on the refactoring of test smells, while our work targets test code refactoring from a more general perspective, attempting to assess the extent to which this is applied to classes suggesting the presence of quality or effectiveness concerns. Secondly, the results of our work may possibly provide evidence-based, complementary insights with respect to what the authors found out in their qualitative study. Third, our work has a broader scope and, indeed, it also targets the effectiveness side of the problem.

Soares et al. [22] investigated how developers refactor test code to eliminate test smells.
The authors surveyed 73 open-source developers and submitted 50 pull requests to assess developers' preferences and motivation while refactoring test code. The results showed that developers preferred the refactored test code for most test smells. In another work, Soares et al. [23] investigated whether the JUnit 5 features help refactor test code to remove test smells. They conducted a mixed-method study to analyze the usage of the testing framework features in 485 popular Java open-source projects, identifying the features helpful for test smell removal and proposing novel refactorings to fix test smells. Also in this case, the authors focused on the refactoring of test smells, while our study has a broader scope. In addition, while we do not plan to conduct surveys or interviews--this is part of our future research agenda--we will extend the current body of knowledge by assessing whether test code quality and effectiveness indicators may trigger refactoring activities, as well as by providing a comprehensive overview of how test refactoring relates to branch and mutation coverage, which is a novelty of our study.

## III Research Questions and Objectives

The _goal_ of the empirical study is to analyze the test refactoring operations performed by developers over the history of software projects, with the _purpose_ of understanding (1) whether low-quality test classes, in terms of structural metrics and test smells, provide indications on which test classes are more likely to be refactored, (2) whether test classes with low effectiveness, in terms of code coverage and mutation coverage, provide indications on which test classes are more likely to be refactored, and (3) as a consequence, to what extent test refactoring operations are effective in improving the quality and effectiveness of test classes. In other words, we are first interested in assessing the **quantity** of test refactoring operations performed on classes exhibiting test code quality and effectiveness issues and, in the second place, the **quality** of the test refactoring operations applied, in terms of the improvements provided to test code quality and effectiveness. The _perspective_ is that of both researchers and practitioners who are interested in understanding the relationship and effects of test refactoring operations on the quality and effectiveness of test classes. More specifically, our empirical investigation will first aim at addressing the following research questions (**RQs**):

**RQ\({}_{1}\).** _Are test refactoring operations performed on test classes having a low level of quality, as indicated by quality metrics and test smell detectors?_

**RQ\({}_{2}\).** _Are test refactoring operations performed on test classes having a low level of effectiveness, as indicated by code and mutation coverage?_

Through **RQ\({}_{1}\)** and **RQ\({}_{2}\)**, we aim to address the first objective of the study, hence understanding whether the low quality and effectiveness of test classes are associated with more test refactoring operations. The results of these two research questions might have multiple implications for software maintenance, evolution, and testing researchers. An improved understanding of these aspects may indeed reveal the characteristics of the test suites that trigger more refactoring operations, informing researchers on (1) the factors that are associated with test refactoring and (2) the design of novel or improved instruments to better support developers in their activities.
For instance, should we discover that test refactoring is not frequently applied to test classes exhibiting test smells, this would imply that further research should be conducted on the motivations leading developers to refactor test code, other than on how test smell detectors should be designed to ease the application of refactoring operations. Upon completion of this investigation, we will further elaborate on the impact of test refactoring, addressing the following research questions: **RQ\({}_{3}\).**_What is the effect of test refactoring on test code quality, as indicated by quality metrics and test smell detectors?_ **RQ\({}_{4}\).**_What is the effect of test refactoring on test code effectiveness, as indicated by code and mutation coverage?_ Through **RQ\({}_{3}\)** and **RQ\({}_{4}\)**, we aim to extend the current knowledge on the impact of test refactoring, assessing whether test code quality and effectiveness increase, decrease, or remain the same after the application of test refactoring operations. It is worth remarking that addressing these two research questions would be important independently of the results obtained in **RQ\({}_{1}\)** and **RQ\({}_{2}\)**. Indeed, regardless of the amount of refactoring operations performed on test classes exhibiting quality or effectiveness concerns, it would still be possible that the specific refactoring actions targeting those classes have an impact. To make our argumentation more practical, consider the case of the _Extract Method_ refactoring, whose suboptimal implementation may potentially affect test code effectiveness. Given a verbose test method with several steps and assertions, the refactoring enables the extraction of multiple test methods, which are supposed to be more cohesive and focused on the verification of specific conditions of production methods. However, if developers do not appropriately perform such an extraction, this would potentially change the logic of the test and be detrimental to test effectiveness. For instance, consider test T, which verifies two branches, B1 and B2, of the production method M. In this case, an Extract Method operation is supposed to split T so that the resulting tests T1 and T2 target B1 and B2 individually. However, should there be a logical relation between B1 and B2, T2 will still need to pass through B1 to ensure that the logical relation is still met: a suboptimal refactoring may overlook this requirement, possibly not embedding in T2 the statements required to reach B1. As a result, this operation would affect the overall level of coverage of the production code. As such, **RQ\({}_{3}\)** and **RQ\({}_{4}\)** provide an orthogonal view on the matter. Also in this case, the outcome of our investigation may lead to implications for research and practice. First, our findings may help researchers measure the actual, practical impact of test refactoring--this may drive considerations on how future research efforts should be prioritized, e.g., by favoring more research on impactful refactoring operations. Second, our results may increase the practitioner's awareness of test refactoring, possibly increasing its application in practice. To design and report our empirical study, we will follow the empirical software engineering guidelines by Wohlin et al.
[31], in addition to the ACM/SIGSOFT Empirical Standards.1 Footnote 1: Available at: [https://github.com/acmsigsoft/EmpiricalStandards](https://github.com/acmsigsoft/EmpiricalStandards) ## IV Experimental Plan This section reports the research method that we plan to apply to address our **RQs**. ### _Context of the study_ The _context_ of our investigation will be composed of (i) empirical study variables, i.e., the independent and dependent variables that we will statistically analyze, and (ii) software systems, i.e., the projects that will be mined to collect the data required to address our research objectives. **Software Systems.** The selection of suitable software systems will be driven by various considerations. First, we will focus on open-source projects, as we need access to change history information. Second, we will rely on popular, large real-world projects having enough releases to collect data that can be analyzed statistically. Third, we will standardize the building process to ease dependency management and streamline build configurations across all projects. As such, we plan to use the SEART tool2 to select 100 open-source, non-fork projects from GitHub that have at least 100 stars, 10 major releases, 1,000 lines of code, and 10 test classes. We will seek Java projects that can be compiled with Maven and Java 8--Java 8 is the most popular Java version in use nowadays.3 Should our search identify more than 100 projects, we will apply random sampling and verify whether the sampled projects can be properly built until we have 100 projects. Footnote 2: [https://seart-ghs.si.usi.ch/](https://seart-ghs.si.usi.ch/) Footnote 3: [https://www.jetbrains.com/lp/devecosystem-2021/java/](https://www.jetbrains.com/lp/devecosystem-2021/java/) It is worth noting that some projects may adopt the so-called _Boy Scout_ rule, i.e., "Leave every piece of code you touch cleaner than you found it".4 These projects may be more inclined to apply refactoring, and we may therefore observe a higher test code quality and effectiveness in them. As part of our study, we will manually analyze the contribution guidelines of the selected projects, looking for any insight suggesting that those projects follow the _Boy Scout_ rule. Should we identify a sufficient number of such projects, we will perform an additional analysis, comparing the results obtained between Boy Scout and non-Boy Scout projects. Footnote 4: The Boy Scout Rule: [https://www.oreilly.com/library/view/97-things-every/9780596809515/ch08.html](https://www.oreilly.com/library/view/97-things-every/9780596809515/ch08.html). **Empirical Study Variables.** In the context of **RQ\({}_{1}\)** and **RQ\({}_{2}\)**, we are interested in assessing whether refactoring operations are more likely to be observed on test classes exhibiting test code quality and effectiveness concerns. As such, we define the following empirical study variables: _Independent Variables._ These are the factors that will be related to the application of test refactoring, namely (i) test code quality metrics; (ii) presence of test smells (of different types); (iii) branch coverage; and (iv) mutation coverage. Tables I and II list and describe the independent variables of the study. These metrics will all be computed across releases of the different software systems and will be statistically analyzed as described later in this section. The selection of these independent variables is driven by multiple considerations.
First, we consider test code quality metrics and test smells that were targeted by previous research in the field [33, 34] and found to impact test code in different manners [27, 35]. Second, branch and mutation coverage are widely considered as two key indicators of test code effectiveness, which may estimate the goodness of test cases in dealing with real defects [36, 37].

\begin{table}
\begin{tabular}{l|l|l}
\hline \hline
**Acronym** & **Quality Metric** & **Description** \\
\hline
LOC & Number of Lines & Counts the number of lines \\
NOM & Number of Methods & Counts the number of methods \\
WMC & Weighted Methods per Class & Counts the number of branch instructions in a class \\
RFC & Response for a Class & Counts the number of method invocations in a class \\
AD & Assertion Density & Percentage of assert statements with respect to the total number of statements in a test class \\
LCOV & Line Coverage & Lines exercised by the test \\
BCOV & Branch Coverage & Branches exercised by the test \\
MUT & Mutation Coverage & Percentage of mutated statements in the production class that are covered by the test \\
\hline \hline
\end{tabular}
\end{table} TABLE I: Description of quality metrics as detected by VITRuM [32]

\begin{table}
\begin{tabular}{l|l|l|r|r}
\hline \hline
**Acronym** & **Test Smell** & **Description** & **Precision** & **Recall** \\
\hline
AR & Assertion Roulette & A test method contains assertion statements without an explanation/message & 94.7\% & 90.0\% \\
DA & Duplicate Assert & A test method contains more than one assertion statement with the same parameters & 85.7\% & 90.0\% \\
ECT & Exception Handling & A test method that contains throws statements & 100.0\% & 100.0\% \\
ET & Eager Test & A test method contains multiple calls to multiple production methods & 100.0\% & 100.0\% \\
GF & General Fixture & Fields within the setUp method are not utilized by all test methods & 95.2\% & 100.0\% \\
LT & Lazy Test & Multiple test methods call the same method of the class under test & 90.9\% & 100.0\% \\
\hline \hline
\end{tabular}
\end{table} TABLE II: Description of test smells as detected by tsDetect [29]

\begin{table}
\begin{tabular}{l|l|r|r}
\hline \hline
**Refactoring** & **Description** & **Precision** & **Recall** \\
\hline
Add assert explanation & Add an optional parameter to the assert methods to provide an explanatory message & 100.0\% & 78.0\% \\
Extract Class & Create a new class and place the fields and methods responsible for the relevant functionality in it & 100.0\% & 100.0\% \\
Extract Method & Move a code fragment to a separate new method and replace the old code with a call to the method & 99.9\% & 96.9\% \\
Inline Method & Replace calls to the method with the method's content and delete the method itself & 100.0\% & 98.2\% \\
Parameterize Test & Remove duplicate code using the @ParameterizedTest annotation to define a variety of arguments & 100.0\% & 100.0\% \\
Replace @Test w/ assertThrows & Remove the expected attribute from the @Test annotation and add the assertThrows method & 100.0\% & 93.0\% \\
Replace @Rule w/ assertThrows & Remove the @Rule annotation and add the assertThrows method & 100.0\% & 88.0\% \\
Replace try/catch w/ assertThrows & Remove try/catch blocks and add the assertThrows method & 100.0\% & 89.0\% \\
Split Method & Separate a long function by splitting it into short methods and adding calls to the new methods & 100.0\% & 100.0\% \\
\hline \hline
\end{tabular}
\end{table} TABLE III: Description of refactorings detected by the TestRefactoringMiner tool

_Dependent Variables._ These are the
refactoring operations (of different types) observed across releases of the different software systems. To select suitable test refactoring operations for our purpose, we investigated the literature to elicit the test refactoring operations that were previously associated with our independent variables--we basically surveyed the previous papers on the matter, discussed in Section II, to extract the test refactoring types that researchers have observed as potentially impacting testing evolutionary activities. Table III lists the refactoring operations that will be targeted, along with a brief description. When it comes to \(\mathbf{RQ}_{3}\) and \(\mathbf{RQ}_{4}\), we are interested in assessing the impact of test refactoring on the test code quality and effectiveness aspects considered. As such, we need to swap independent and dependent variables: indeed, in this case we are interested in observing how refactoring impacts test code properties rather than the opposite: _Independent Variables._ These are the different types of refactoring operations (Table III) computed across the releases of the software systems considered. _Dependent Variables._ These will be the test code quality and effectiveness metrics described in Tables I and II, which will be computed across releases of the software systems. In both \(\mathbf{RQ}_{3}\) and \(\mathbf{RQ}_{4}\) we will include a number of control variables, which will help us better verify the extent to which test refactoring impacts the variation of test code quality and effectiveness in relation to project- and process-level characteristics that may impact the dependent variables. _Control Variables._ We will first account for the frequency of releases and activities in the project, as these may provide insights into the development speed which, in turn, may impact test code quality and effectiveness. Given a release \(R_{i}\), we will compute the number of releases issued within the last 1, 3, 6, and 12 months. In addition, for each class \(C_{j}\) within \(R_{i}\), we will compute the number of commits performed by developers between the releases \(R_{i-1}\) and \(R_{i}\). We will also consider project-level metrics such as (1) project size in terms of lines of code; (2) number of contributors; (3) number of branches; and (4) number of pull requests. On the one hand, these metrics provide a good overview of the main characteristics of the project and the community around it. On the other hand, all these metrics can impact test code quality and effectiveness in various manners, e.g., a higher number of branches may indicate a higher level of activity around the project, which in turn can influence the way test cases are maintained and evolved. ### _Data Collection_ We will use different automated tools available in the literature to extract data on quality and effectiveness metrics, test smells, and refactoring operations. Then, we will merge the data to compose our dataset. **Collecting test code quality and effectiveness metrics.** To collect both test code quality and effectiveness metrics (Table I), we will run VITRuM, a plug-in for the visualization of test-related metrics, in order to calculate five static metrics and three dynamic metrics from the test code [32]. Note that the tool uses JaCoCo to calculate line and branch coverage, and PIT for the mutation coverage. Therefore, we will have to build the projects to calculate the dynamic metrics.
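Since both the dynamic metrics and the mutation analysis require compiled code, the collection will be driven by a per-release build loop. The sketch below outlines how this step could be orchestrated; it is a minimal outline under stated assumptions--a local clone of each project, release tags obtained from the SEART query, and the standard goals of the JaCoCo and PIT Maven plugins--and does not reproduce the exact invocations of the VITRuM tool chain.

```python
import subprocess
from pathlib import Path

def run(cmd: list[str], cwd: Path) -> None:
    """Run a command inside the project clone, raising on failure so that
    broken builds can be diagnosed (or the project replaced, see Sec. V)."""
    subprocess.run(cmd, cwd=cwd, check=True)

def collect_dynamic_metrics(repo: Path, release_tags: list[str]) -> None:
    """Check out each major release and produce the coverage reports that
    feed the LCOV/BCOV and MUT variables of Table I."""
    for tag in release_tags:
        run(["git", "checkout", "--force", tag], cwd=repo)
        # Line and branch coverage via the JaCoCo Maven plugin.
        run(["mvn", "clean", "test", "jacoco:report"], cwd=repo)
        # Mutation coverage via the PIT Maven plugin.
        run(["mvn", "org.pitest:pitest-maven:mutationCoverage"], cwd=repo)
        # The reports left under target/ are then parsed and joined, per
        # test class, with the tsDetect and TestRefactoringMiner output.
```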
**Collecting test smells.** Among the test smell detection tools available for Java code [38], we will use tsDetect[29], which is the most accurate tool, with a precision score ranging from 85% to 100% and a recall score ranging from 90% to 100%. tsDetect performs a static analysis of the test code through an AST (Abstract Syntax Tree) to apply the test smell detection rules to the test files. A test file in the JUnit testing framework should follow the naming convention of either pre-pending or appending the word 'Test' to the name of the production class under test, at the same package hierarchy [29]. With the detection rules, the tool can detect (i) the presence or absence of a test smell in a test class, or (ii) the number of instances per test smell in a test class. In addition, the tool receives a configuration of the severity thresholds for each test smell [39]. We will run the tool to identify the number of instances of the six test smells described in Table II with default values for the severity thresholds (i.e., the tool reports all instances of the test smells detected). **Collecting refactoring data.** To detect test refactoring operations, we will use the TestRefactoringMiner tool [40]. The tool is built on top of the state-of-the-art refactoring mining tool RefactoringMiner, which has the highest precision (99.8%) and recall (97.6%) scores among the currently available refactoring mining tools [28]. In more detail, TestRefactoringMiner analyzes the added, deleted, and changed files between two project versions to detect specific test refactorings, reaching precision and recall scores of 100% and 92.5%, respectively. The tool operationalizes the detection of all the refactoring operations considered in the study--Table III presents the set of test refactorings that we will investigate. It is worth noting that this set covers various refactoring operations, such as integrating new technologies like JUnit 5 or improving the organization of test classes. **Data integration.** Although some tools allow a finer granularity during the code analysis, all of them can also report the results at the class level. Therefore, we will establish traceability links between the test classes reported by the tsDetect, VITRuM, and TestRefactoringMiner tools, finally integrating their outcomes into a single data source to be further analyzed from a statistical standpoint. ### _Data Analysis_ We first formulate the working hypotheses that we will later statistically assess. As for \(\mathbf{RQ}_{1}\), given a quality metric \(Qm_{i}\), with \(Qm_{i}\) in {LOC, NOM, WMC, RFC, AD}, and a refactoring \(ref_{k}\) in the set of refactoring operations considered in the study, our null hypothesis is the following: **Hn1\({}_{Qm_{i}-ref_{k}}\).**: There is _no significant difference_ in terms of the amount of \(ref_{k}\) operations performed on test classes having different values of \(Qm_{i}\). Still within **RQ\({}_{1}\)**, we will also evaluate the relation between test refactoring and test smells. Given a test smell \(Ts_{i}\) in the set of test smells considered in the study and \(ref_{k}\), we define a second null hypothesis: **Hn2\({}_{Ts_{i}-ref_{k}}\).**: There is _no significant difference_ in terms of the amount of \(ref_{k}\) operations performed on test classes affected and not affected by \(Ts_{i}\).
As for **RQ\({}_{2}\)**, given an effectiveness metric \(Em_{i}\), where \(Em_{i}\) assumes values in the set {Branch Coverage, Mutation Coverage}, and \(ref_{k}\), the null hypothesis is the following: **Hn3\({}_{Em_{i}-ref_{k}}\).**: There is _no significant difference_ in terms of the amount of \(ref_{k}\) operations performed on test classes having different values of \(Em_{i}\). As for **RQ\({}_{3}\)**, given a quality metric \(Qm_{i}\), a test smell \(Ts_{i}\), and a refactoring \(ref_{k}\), the null hypotheses are: **Hn4\({}_{Qm_{i}-ref_{k}}\).**: There is _no significant difference_ in terms of \(Qm_{i}\) before and after the application of \(ref_{k}\). **Hn5\({}_{Ts_{i}-ref_{k}}\).**: There is _no significant difference_ in the number of \(Ts_{i}\) instances before and after the application of \(ref_{k}\). Finally, as for **RQ\({}_{4}\)**, the null hypothesis will be: **Hn6\({}_{Em_{i}-ref_{k}}\).**: There is _no significant difference_ in terms of \(Em_{i}\) before and after the application of \(ref_{k}\). If one of the null hypotheses is statistically rejected, we will accept the corresponding alternative hypothesis, namely: **An1\({}_{Qm_{i}-ref_{k}}\).**: The amount of \(ref_{k}\) operations on test classes having different values of \(Qm_{i}\) is _statistically different_. **An2\({}_{Ts_{i}-ref_{k}}\).**: The amount of \(ref_{k}\) operations on test classes affected and not affected by \(Ts_{i}\) is _statistically different_. **An3\({}_{Em_{i}-ref_{k}}\).**: The amount of \(ref_{k}\) operations on test classes having different values of \(Em_{i}\) is _statistically different_. **An4\({}_{Qm_{i}-ref_{k}}\).**: The value of \(Qm_{i}\) before and after the application of \(ref_{k}\) is _statistically different_. **An5\({}_{Ts_{i}-ref_{k}}\).**: The number of \(Ts_{i}\) instances before and after the application of \(ref_{k}\) is _statistically different_. **An6\({}_{Em_{i}-ref_{k}}\).**: The value of \(Em_{i}\) before and after the application of \(ref_{k}\) is _statistically different_. We will then verify the working hypotheses, hence accepting or rejecting them, by building statistical models. **Statistical modeling for RQ\({}_{1}\) and RQ\({}_{2}\).** To address our first two research questions, we will devise a _Logistic Regression Model_ for each refactoring operation considered in the study. Such a model belongs to the class of Generalized Linear Models (GLM) [41] and relates a (dichotomous) dependent variable--in our case, whether or not a particular type of refactoring is performed--with continuous or discrete independent variables--the quality and effectiveness metrics considered in **RQ\({}_{1}\)** and **RQ\({}_{2}\)**. Before building the statistical model, we plan to assess the presence of multi-collinearity [42], which arises when two or more independent variables are highly correlated and can be predicted one from the other. We will use the vif (Variance Inflation Factor) function and discard highly correlated variables, using a threshold value equal to 5 [42]. For each statistical model, we will assess (i) whether each independent variable is significantly correlated with the dependent variable (using a significance level of \(\alpha\) = 5%), and (ii) the strength of this correlation, quantified using the Odds Ratio (OR) [43], which is a measure of the strength of the association between each independent variable and the dependent variable.
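As an illustration of this modeling step, the sketch below shows how one such model could be fit with off-the-shelf Python tooling; it assumes an integrated dataset with one row per test class per release, a binary column marking whether \(ref_{k}\) was applied, and placeholder predictor names--it is not the exact analysis script, which we will release with the online appendix.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def fit_refactoring_model(df: pd.DataFrame, predictors: list[str]) -> pd.DataFrame:
    """Logistic model relating quality/effectiveness metrics (and smell
    indicators) to the occurrence of one refactoring type ('refactored')."""
    X = df[predictors].astype(float)

    # Multi-collinearity check: iteratively drop the predictor with the
    # largest Variance Inflation Factor until all VIFs are <= 5.
    while X.shape[1] > 1:
        vifs = pd.Series(
            [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
            index=X.columns,
        )
        if vifs.max() <= 5:
            break
        X = X.drop(columns=vifs.idxmax())

    model = sm.Logit(df["refactored"], sm.add_constant(X)).fit(disp=False)
    # Odds ratios and their significance at alpha = 0.05 (the 'const'
    # row is the intercept and is not interpreted).
    return pd.DataFrame({
        "odds_ratio": np.exp(model.params),
        "p_value": model.pvalues,
        "significant": model.pvalues < 0.05,
    })
```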
Higher OR values for an independent variable indicate a higher probability of explaining the dependent variable, i.e., a higher likelihood that a refactoring operation has been triggered by the independent variable. Nonetheless, the interpretation of OR values changes depending on the measurement scale of the independent variables, i.e., ratio for the test code quality and effectiveness metrics and categorical for the test smells. As for the metrics, the OR for an independent variable indicates the increase in the chances of a test class being subject to refactoring as a consequence of a one-unit increase of the independent variable. As for test smells, the OR indicates how likely a smelly test class is to be involved in refactoring operations with respect to a non-affected class. The statistical significance of the correlation between independent and dependent variables will allow us to accept or reject **Hn1\({}_{Qm_{i}-ref_{k}}\)**, **Hn2\({}_{Ts_{i}-ref_{k}}\)**, and **Hn3\({}_{Em_{i}-ref_{k}}\)**, while OR values will measure the strengths of the correlations. **Statistical modeling for RQ\({}_{3}\) and RQ\({}_{4}\).** To statistically assess the impact of test refactoring on test code quality and effectiveness metrics and smells, we will first collect all the test classes subject to the refactoring type \(ref_{k}\) in a generic release \(R_{i}\). Afterward, for each of those test classes, we will compute the values of the test code quality and effectiveness metrics and smells on the release \(R_{i}\) as well as on the release \(R_{i-1}\). We will produce two distributions: the first representing the metric values (or the number of test smells) in \(R_{i-1}\), i.e., before the application of \(ref_{k}\); the second representing the metric values (or the number of test smells) in \(R_{i}\), i.e., after the application of \(ref_{k}\). On this basis, we will employ the non-parametric Wilcoxon Rank Sum Test [44] (with \(\alpha\)-value = 0.05), through which we will accept or reject the null hypotheses **Hn4\({}_{Qm_{i}-ref_{k}}\)**, **Hn5\({}_{Ts_{i}-ref_{k}}\)**, and **Hn6\({}_{Em_{i}-ref_{k}}\)**. In addition, we will also rely on the Vargha-Delaney (\(\hat{A}_{12}\)) [45] effect size measure to quantify the magnitude of the differences observed in the considered distributions. According to the direction and value given by \(\hat{A}_{12}\), we will have a practical interpretation of our findings, which will depend on the test code factor considered. Specifically, should the \(\hat{A}_{12}\) values be lower than 0.5, this would imply that: * The metric values computed on the release \(R_{i-1}\) are lower than those on \(R_{i}\), i.e., the refactoring \(ref_{k}\) would have a _positive_ effect on the quality or effectiveness metric considered--lower metric values in \(R_{i-1}\) would indeed indicate that the refactoring induced an increase of the metric in \(R_{i}\), hence having a positive effect; * The number of test smells computed on the release \(R_{i-1}\) is lower than the one computed on \(R_{i}\), i.e., the refactoring \(ref_{k}\) would have a _negative_ effect, hence suggesting that, rather than improving test code design, the refactoring induced the emergence of some form of test smells. Similarly, an \(\hat{A}_{12}>0.50\) indicates the opposite, hence that either \(ref_{k}\) has a _negative_ impact on the considered test code quality or effectiveness metric, or that the refactoring has a _positive_ impact on the removal of test smells.
Finally, \(\hat{A}_{12}=0.50\) indicates that the two distributions are identical, i.e., the refactoring has limited to no effect on the dependent variables. ### _Publication of generated dataset_ The dataset that we will collect by merging test code metrics, test smells, test effectiveness metrics, and test refactoring data will be made publicly available in an online repository [46]. We also plan to release the scripts for the data collection and analysis that we will use to perform this study. ## V Threats to validity This section discusses the potential threats that may affect the validity of our empirical study plan. **Construct validity.** A first threat concerns the criteria we will use to select software projects: despite the actions to standardize the building process, we might still encounter build failures. Should this happen, we will attempt to manually diagnose the reasons for the failures, trying to fix them--in this respect, we will exploit recent research [47, 48] reporting insights on how to fix build failures. In the best case, we would still be able to build the project. In the worst case, we would not be able to fix the build failure; in this case, we will discard the project from our study and replace it with another project retrieved using the SEART tool. As for the set of test smells and the structural and dynamic metrics we will use to assess test code quality, we will not calculate all the Chidamber & Kemerer metrics, as some of them do not apply to the context of test code (e.g., _Depth of Inheritance Tree_). Nevertheless, we have chosen a mix of metrics capturing the test code size and its structural and dynamic characteristics. Another threat to validity concerns the identification of test smells and refactoring operations. We will use tools already validated and used by the research community. Although the tools present high precision and recall scores, they might report some false positive or false negative instances of test smells or refactorings: in response to this limitation, we will perform preliminary, manual investigations to assess the degree of accuracy of the tools before running them on a large scale--in this way, we will be able to provide indications on the confidence level of our conclusions. **Internal Validity.** This category of threats to validity concerns by-product changes of other maintenance activities (e.g., bug fixes or changes in requirements) that could also contribute to the removal of test smells. Therefore, the data analysis will not indicate a causal relationship, but rather that there is a possibility of a relationship that may be further investigated. We will attempt to corroborate our quantitative results by means of some qualitative insights. In addition, we acknowledge test flakiness as a potential threat to internal validity which can impact the reliability of our findings. However, despite being a severe issue for practitioners, previous investigations found test flakiness to arise in a limited amount of cases, e.g., Luo et al. [49] found that flaky tests affect up to 4.56% of test cases. In this sense, it is reasonable to believe that the problem of test flakiness will have a limited impact on our findings. **External Validity.** This class of threats to validity mainly concerns the subject projects of our study. We will select open-source Java projects from GitHub, which are only a fraction of the complete picture of open-source software and do not necessarily represent industrial practices.
Therefore, the results may not generalize to industrial contexts and other programming languages. In addition, we will select projects based on the number of stars, which may introduce some popularity bias. Replications of our work would therefore be beneficial to corroborate our findings in different contexts: to stimulate further research, we will release all materials and scripts as part of an archived online appendix [46]. **Conclusion validity.** To address how frequently test refactoring is performed on test classes affected by quality or effectiveness concerns, we will use logistic regression models to identify correlations. Other than highlighting cases of significant correlations, we will report and discuss OR values. In addition, to investigate the effect of test refactoring on test code quality and effectiveness, we will employ well-established statistical instruments such as the Wilcoxon Rank Sum Test [44], complemented by the Vargha-Delaney (\(\hat{A}_{12}\)) [45] effect size. Our analysis will be conducted at the granularity of classes because the tools we plan to employ work at this level. This may bias our conclusions, as this granularity may be subject to various confounding variables. On the one hand, this is a limitation that we unfortunately share with all the other research works that analyze dynamic test code metrics [50]. On the other hand, we plan for the inclusion of multiple process- and project-level control variables, through which we will be able to partially mitigate this threat to validity. An additional point to remark is that our data collection procedure cannot distinguish between changes that were meant as refactoring and other changes where refactoring was applied as part of other modifications. We might have mitigated this limitation by extracting refactoring changes through the analysis of issues and pull requests, i.e., collecting changes explicitly intended as refactoring. Nonetheless, such an alternative method could have biased the conclusions drawn even further, for two reasons connected to the availability and reliability of the information available within the developers' discussions on GitHub. More particularly: _Availability._ Previous studies established that developers perform "floss refactoring", combining refactoring operations and behavioral change edits within individual commits [15]. From a practical standpoint, this means that developers do not often apply refactoring for the sake of refactoring source code, but as an instrument to perform other changes, e.g., to simplify a piece of code before making further evolutionary changes. As such, it is unlikely to find "pure" refactoring changes or discussions, in the form of issues or pull requests, around refactoring operations to be applied. _Reliability._ The literature found that developers not only rarely document refactoring activities explicitly [51, 52], but also that, when they do, they are inconsistent [53], i.e., they label changes as refactoring although no refactoring is done at all. Other researchers found that the term "refactoring" is misused, i.e., developers do not often correctly distinguish between refactoring changes and normal code modifications [54]. In this respect, the seminal paper by Murphy-Hill et al. [55] reported that _"messages in version histories are unreliable indicators of refactoring activities. This is due to the fact that developers do not consistently report/document refactoring activities"_.
This latter observation was also backed up by the findings of Ratzinger et al. [56], who discovered that the extraction of refactoring documentation from repositories may lead to several false positives, as the words used by developers are too generic and do not often refer to real refactoring operations. As a consequence, the analysis of issues and pull requests would have led to unreliable conclusions. On the contrary, the goal of a statistical study is precisely to identify hidden relations between dependent and independent variables while controlling for possible confounding effects [57]: we believe that such an approach better fits our research goals. Through a large-scale, statistical investigation, we may indeed end up discovering the intrinsic factors associated with the refactoring actions performed by developers, finally providing evidence of how test refactoring is done in practice. ## VI Conclusion The ultimate goal of our research plan is to understand whether test code quality and effectiveness provide indications of which test classes are more likely to be refactored, and to what extent test refactoring operations can improve test code quality and effectiveness. We will conduct this study on a set of 100 open-source Java projects, starting from the collection of data on the test code quality, test smells, and refactoring operations arising in the major releases of the projects. Then, we will employ statistical approaches to address the goals of our investigation and, based on the conclusions we will be able to draw, finally provide actionable items and implications for researchers and practitioners. As an outcome of our exploratory study, we expect to provide the following key contributions: 1. An empirical understanding of the factors triggering test refactoring operations, which comprises an analysis of how test code quality and effectiveness come into play; 2. Evidence of the impact of test refactoring on test code quality and effectiveness; 3. An online appendix providing all the material and scripts employed to address the goals of the study. ## Acknowledgment This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001, and by FAPESB grants BOL0188/2020 and PIE0002/2022. Fabio is supported by the Swiss National Science Foundation through the SNF Project No. PZ00P2_186090 (TED).
2303.10364
Evolution of the number and temperature of the remaining cold atoms in CW-laser photoionization of laser-cooled $^{87}$Rb atoms
Based on the Rb$^+$-Rb hybrid trap, we investigate the effect of ion-atom elastic collisions on the number and temperature of the remaining atoms. We measured the remaining atomic number and temperature as a function of the wavelength and intensity of the ionization laser, and whether the ion trap was turned on. Fittings with a single exponential decay function plus an offset to the number and radius of the remaining atoms are found to be in good agreement. We found a difference in the exponential factor of different wavelengths of ionization laser with the ion trap on or off. We suppose that the presence of electrons affects ion-atom collisions through disorder-induced heating. Our research contributes to a better understanding of how ultracold neutral plasma evolves, particularly the subsequent kinetics of atomic processes, which also serves as a useful reference for high-energy-density plasma.
Fei Wang, Feng-Dong Jia, Wei-Chen Liang, Xiao-Kang Li, Yu-Han Wang, Jing-Yu Qian, Dian-Cheng Zhang, Yong Wu, Jian-Guo Wang, Rong-Hua Lu, Xiang-Yuan Xu, Ya-Ping Ruan, Ping Xue, Zhi-Ping Zhong
2023-03-18T08:47:07Z
http://arxiv.org/abs/2303.10364v2
Evolution of the number and temperature of the remaining cold atoms in CW-laser photoionization of laser-cooled \({}^{87}\)Rb atoms ###### Abstract Based on the Rb\({}^{+}\)-Rb hybrid trap, we investigate the effect of ion-atom elastic collisions on the number and temperature of the remaining atoms. We measured the remaining atomic number and temperature as a function of the wavelength and intensity of the ionization laser, and whether the ion trap was turned on. Fittings with a single exponential decay function plus an offset to the number and radius of the remaining atoms are found to be in good agreement. We found a difference in the exponential factor of different wavelengths of ionization laser with the ion trap on or off. We suppose that the presence of electrons affects ion-atom collisions through disorder-induced heating. Our research contributes to a better understanding of how ultracold neutral plasma evolves, particularly the subsequent kinetics of atomic processes, which also serves as a useful reference for high-energy-density plasma. **PACS numbers**: 34.50.Cx,34.80.Dp,34.90.+q,52.20.Hv,52.27.Gr ## I Introduction Photoionization is an important process in many fields of science, such as atomic and molecular physics, astrophysics, plasma physics, and atmospheric science. The mixture of electrons, ions, and neutral particles created by photoionizing atoms or neutral molecules provides a powerful tool for understanding the structure and dynamics of complex physical systems, e.g., the collisions in the mixture. Laser-cooled atoms offer great opportunities for studies of precise spectroscopic measurements, quantum coherent phenomena, and low-energy collisions with neutral atoms, owing to the significant mitigation of the Doppler broadening effect and the low collision rate. Atom-trap-based techniques have been widely used to measure absolute cross-sections for collisional and photoionization processes in a magneto-optical trap (MOT), such as the pioneering work on electron collisional processes [1; 2] and photoionization processes [3]. This is because collisions typically involve a change in the kinetic energies, velocities, and/or chemical structure of the collision partners, as well as trap loss; a photoionization process also results in trap loss. Ruan _et al._[4] discovered that cold ion-atom collisions heat the remaining atoms by extending the trap-loss measurement to measure both the temperature and number of remaining atoms in the two-step CW-laser photoionization of a laser-cooled \({}^{87}\)Rb cloud in a standard vapor-loaded MOT with a glass chamber. Furthermore, ultracold neutral plasmas (UNPs) can be created by pulsed photoionization of laser-cooled atoms near the ionization threshold[5]. A UNP is an effective high-energy-density plasma (HEDP) simulator because the two overlap in the strongly-coupled region and have similar coupling parameters \(\Gamma\) and screening lengths \(\kappa\)[6; 7; 8]. Collisions are crucial in the evolution of a UNP. However, up to now, only electron-ion collisions have been considered in the study of ultracold neutral plasmas, and interactions of the charged particles with the neutral atoms have been neglected, since Killian _et al._[5] argued that the mean free path for neutral-charged particle collisions is much larger than the sample size (typically on the order of 0.5 mm)[5]. This was verified for electron-atom collisions based on the calculations for elastic scattering of slow electrons from noble gases (\(\sim 38\) a.u.
at 1 K)[9]; ion-neutral collisions, however, are dominated by long-range interactions between atoms and ions that scale as \(C_{4}/r^{4}\)[10; 11; 12; 13]. Here \(C_{4}\) is the leading long-range induction coefficient and \(r\) is the internuclear distance. As a result, large elastic scattering cross-sections (\(\sim 10^{6}\) a.u. at 1 mK)[14; 10] are expected, allowing for strong ion-atom interactions. Owing to the advantages of laser trapping, cooling, and ion-trapping techniques, cold hybrid ion-atom systems have emerged over the past 20 years, paving the way for the study of ion-atom collisions in the quantum regime. The hybrid system provides highly controllable quantum systems with tunable ion-atom long-range interactions, and the theoretical and experimental progress has been well summarized in the most recent review [10]. Owing to the large ion-atom elastic scattering cross-section, a wide range of exciting experiments have been proposed, such as reaching ultra-low temperatures with sympathetic cooling, ultracold charge transport, new many-body bound states and strongly coupled polaritons, quantum information processing, quantum simulations, etc. [12; 13; 15] The majority of these studies rely on interactions mediated by elastic ion-atom collisions, but little is known about the subsequent kinetics of ion-atom collision processes [16]. In the two-step CW-laser photoionization, these ion-atom collisions might be reactive, inelastic, or elastic. In this study, we investigate how the elastic processes develop and influence the number and temperature of the remaining atoms. The present work is an in-depth extension of the work of Ruan _et al._[4]. Using atomic absorption imaging techniques and adjusting the system parameters, we measured the change in the number and temperature of the remaining atoms as a function of system parameters during the two-step CW-laser photoionization of laser-cooled \({}^{87}\)Rb atoms in the ion-neutral hybrid trap. The system parameters include the wavelength and intensity of the ionization light, and whether the ion trap is turned on. These in-depth studies of ion-atom cold collisions contribute to a deep understanding of UNP evolution, the physics of strongly correlated many-body systems, quantum simulations, etc. ## II Method The detailed description of our ion-neutral hybrid trap for rubidium atoms can be found in our previous study [17]. This trap consists of an Rb standard MOT and a mass-selective linear Paul trap (LPT), which are concentrically arranged in a polyhedral flat non-magnetic stainless-steel cavity. The following steps are performed during each experimental cycle in the ion-atom hybrid trap: the MOT is loaded to a steady state, the CW ionization light is turned on for a predetermined irradiation time with the ion trap either on or off, the CW ionization light is switched off, and the trapped ions are pushed to the MCP by turning off the voltage on the end-cap ring electrode closer to the MCP. The ion time-of-flight (TOF) spectrum is recorded by an oscilloscope. In the meantime, the TOF approach is utilized to obtain the temperature of the cold atoms, while atomic absorption imaging techniques are used to count the atoms and measure the radius of the atomic cloud. The system parameters are listed below. The atom number and radius \(r_{atom}\) (\(1/e^{2}\) half-waist) of the cold atomic cloud were measured as \(\sim 5\times 10^{7}\) and \(\sim 0.5\) mm by absorption imaging.
The first excitation laser was the MOT cooling laser with a detuning of -12 MHz from the resonant frequency of the \(5\ ^{2}S_{1/2},F=2\to 5\ ^{2}P_{3/2},F^{\prime}=3\) transition. The second excitation laser, that is, the ionization laser, was provided by another CW-diode laser with a variable wavelength in the range of \(\lambda_{ion}=450\sim 479\) nm. The dimensions of the ion cloud along the radial directions x, y and the axial direction z were 2.32, 2.32, and 20.20 mm, respectively [17; 18]. The trap depth of the ion trap is approximately 0.7 eV, corresponding to a maximum temperature in the range of \(10^{3}-10^{4}\) K for the trapped Rb\({}^{+}\) ion. Due to the need to fulfill the laws of energy and momentum conservation, as well as the fact that the electron mass is much smaller than the ion mass, the initial temperature of the ion is slightly higher than the atomic temperature and is at the mK level. The majority of the excess photon energy, i.e., the difference between the photon energy and the ionization threshold, is carried by the electrons. The wavelength of the ionization laser is varied among 447, 450, 475, 476, 477, 478, 478.8, and 479 nm, corresponding to initial electron temperatures of 1438.5, 1295.2, 171.4, 128.9, 86.6, 44.4, 10.8 and 2.5 K, respectively. In the case of UNPs created by pulsed photoionization of laser-cooled atoms near the ionization threshold, the ions are then heated to several K on a time scale of approximately \(\sim 10^{2}\) ns by disorder-induced heating (DIH)[6]. DIH is related to the spatial distribution of the charged particles during ionization[6; 7]. Thus, the ion-atom collision energy \(E_{col}\) in our experiment is roughly in the range from 10 mK to a few K. Côté and Dalgarno[19] obtained the expression for the elastic cross-section for ultracold atom-ion collisions as \(\sigma_{ela.}\propto E_{col}^{-1/3}\). Since the relative velocity scales as \(v\propto E_{col}^{1/2}\), the ion-atom elastic rate constant follows \(K=\langle\sigma_{ela.}v\rangle\propto E_{col}^{1/6}\) (\(\langle\cdot\rangle\) indicates averaging over velocities) and thus increases with the collision energy, which is nearly equal to the ion temperature. The atomic number variation with time can be fitted with a single exponential decay function plus an offset according to the rate equation for the total atomic number[17; 20; 21] \[N_{atom}(t)=Ae^{-\gamma_{x}t}+N_{e},\quad\gamma_{x}=\gamma_{L}+\gamma_{PI}+\gamma_{ia}. \tag{1}\] \[\gamma_{PI}=\frac{f\sigma_{ion}}{E_{ion}}I_{PI},\quad f=\frac{I/I_{s}}{1+2I/I_{s}+(2\delta/\Gamma)^{2}}. \tag{2}\] Here \(I_{PI}\) and \(I\) stand for the intensity of the ionization laser and the total cooling laser, respectively, \(E_{ion}\) for the ionization laser photon energy, \(f\) for the excited-state fraction, and \(\sigma_{ion}\) for the photoionization cross-section. \(I_{s}\) is the saturation intensity of the transition \(5\ ^{2}S_{1/2},F=2\to 5\ ^{2}P_{3/2},F^{\prime}=3\). \(\delta\) is the detuning of the cooling laser frequency from resonance, which is -12 MHz in the present experiment. The offset \(N_{e}\) represents the equilibrium number of atoms, and the exponential factor \(\gamma_{x}\) is the loss rate of the MOT atoms, including \(\gamma_{L}\) caused by collisions between cold atoms, \(\gamma_{PI}\) caused by photoionization of cold atoms, and \(\gamma_{ia}\) caused by ion-atom collisions. It is interesting to note that this kind of analytical expression can also describe the radius \(r_{atom}\) of the cold atomic cloud, as shown in Fig.1: \[r_{atom}=Be^{-\gamma_{r}t}+r_{e}. \tag{3}\] Here the offset \(r_{e}\) represents the equilibrium radius of the cold atomic cloud.
The exponential factor \(\gamma_{r}\) is the reduction rate of the atomic cloud radius and characterizes the decreasing rate of the temperature of the remaining atoms. The relation between the atomic temperature and the radius of the atomic cloud can be expressed as \(T\propto r_{atom}^{2}\)[22], as illustrated in Fig.2. Furthermore, our experimental results show that the exponential factors \(\gamma_{x}\) and \(\gamma_{r}\) are well linearly related when measured at various ionization laser intensities and wavelengths with the ion trap on or off, as shown in Fig.3. This is because the number of atoms in a magneto-optical trap and the atomic temperature generally follow a power law [23; 24; 25; 26; 27; 28]. The experimental error in this study resulted from the following factors: the approximately 10-15% systematic error from the fluctuation of the temperature and number of cold atoms, the approximately 1-5% error resulting from the deconvolution procedure, the statistical uncertainties, the uncertainties in determining the intensity of the ionization laser, and the uncertainties in determining the frequency/wavelength of the lasers, specifically 1 nm for \(\lambda_{ion}=447\) and 450 nm and 600 MHz for \(\lambda_{ion}=475\)-479 nm.

Figure 1: The number (up) and the radius (down) of the atomic cloud in the MOT as a function of the irradiation time of the ionization laser in the two-step CW-laser photoionization of laser-cooled \({}^{87}\)Rb atoms in the ion-neutral hybrid trap [17]. The wavelength of the ionization laser is 478.8 nm. Time zero is the moment that the ionization laser turns on, and so in the following graphs.

Figure 2: Comparison of the cold atom temperature measured by the time-of-flight method and the square of the radius of the cold atomic cloud measured by atomic absorption imaging techniques. Measurements are performed in the two-step CW-laser photoionization of laser-cooled \({}^{87}\)Rb atoms in the ion-neutral hybrid trap [17].

Figure 3: \(\gamma_{r}\) as a function of \(\gamma_{x}\) measured at different intensities and wavelengths of the ionization laser with the ion trap on or off.

## III Result We investigate how ion-atom elastic collisions and the presence of electrons affect the number and temperature of the remaining atoms. We first discuss the exponential factors \(\gamma_{x}\) and \(\gamma_{r}\) that characterize the rate of evolution of the atomic number and temperature, respectively. We performed these experiments as a function of the irradiation period of the ionization laser in the Rb\({}^{+}\)-Rb hybrid trap, with the wavelength of the ionization laser being 447 nm or 478.8 nm and the ion trap on or off. Next, we survey the variation of the atomic number and temperature with the intensity of the ionization laser in the case that the irradiation period of the ionization laser was set to 4 s, with the ion trap on or off, and the wavelength of the ionization laser varied among 450, 475, 476, 477, 478 and 479 nm, corresponding to initial electron temperatures of 1295.2, 171.4, 128.9, 86.6, 44.4, and 2.5 K, respectively. As shown in Fig.4, both exponential factors \(\gamma_{x}\) and \(\gamma_{r}\) in the ion trap-on case are larger than those in the ion trap-off case with \(\lambda_{ion}=447\) nm. However, the two exponential factors \(\gamma_{x}\) and \(\gamma_{r}\) when the ion trap is on are not always larger than those in the ion trap-off case with \(\lambda_{ion}=478.8\) nm, as shown in Fig.5. We will discuss the mechanisms underlying these experimental phenomena as follows.
Firstly, the loss rate caused by ion-atom collisions, \(\gamma_{ia}\), can be divided into \(\gamma_{ia}^{MOT}\) and \(\gamma_{ia}^{LPT}\), as shown in the following Eq.4: \[\gamma_{x}=\gamma_{L}+\gamma_{PI}+\gamma_{ia}^{MOT}+\gamma_{ia}^{LPT}. \tag{4}\] \(\gamma_{ia}^{MOT}\) is the loss rate of the MOT atoms caused by ion-atom collisions with the ions in the MOT area. \(\gamma_{ia}^{LPT}\) is the loss rate of the MOT atoms caused by ion-atom collisions with the ions in the ion trap area. Certainly, \(\gamma_{ia}^{LPT}\) is zero when the ion trap is off. Now we discuss how the presence of electrons affects ion-atom elastic collisions. Studies of ultracold neutral plasmas show that the ions are rapidly heated to several K due to disorder-induced heating (DIH). Given the size of the MOT atomic cloud, a plasma can only be created when the initial electron temperature is less than about 1000 K. DIH is brought on by electron-ion spatial correlations [6; 7]. When the ion trap is turned on, it repels the electrons regardless of the initial electron temperature, making it difficult to establish electron-ion spatial correlations; therefore, the heating caused by DIH is reduced or even eliminated entirely when the ion trap works. In the case of \(\lambda_{ion}=447\) nm, \(\gamma_{ia}^{LPT}\) is equal to zero when the ion trap is turned off, and becomes \(\gamma_{ia}^{LPT}>0\) when the ion trap is turned on. Therefore, the atomic loss rate \(\gamma_{x}\) in the ion trap-on case is larger than that in the ion trap-off case, as shown in Fig.4. As discussed above, the exponential factors \(\gamma_{x}\) and \(\gamma_{r}\) are well linearly related, so a similar conclusion holds for \(\gamma_{r}\). By contrast, when \(\lambda_{ion}=478.8\) nm, the initial electron temperature is 10.8 K, and our results show that the exponential factors in the ion trap-on case are occasionally greater than those in the ion trap-off case, and occasionally smaller. We hypothesize that the ions in the MOT area are heated by DIH in the ion trap-off situation. Since the ion-atom elastic rate constant increases with the collision energy, as discussed above, when the ion trap is off the ion temperature in the MOT is higher than it would be with the ion trap on. As a result, the ion-atom elastic rate constant is higher than it would be with the ion trap on. The discrepancy brought on by the collision energy is mitigated by the elastic ion-atom collisions from the ions in the ion trap. \(\gamma_{r}\) and \(\gamma_{x}\) are linearly related, so the same conclusion is reached for \(\gamma_{r}\). Now we survey the variation of the atomic number and temperature with the intensity of the ionization laser in the case that the irradiation period of the ionization light was set to 4 s and the wavelength of the ionization laser was varied among 450, 475, 476, 477, 478 and 479 nm, corresponding to initial electron temperatures of 1295.2, 171.4, 128.9, 86.6, 44.4, and 2.5 K, respectively. The ratios \(N/N_{0}\) and \(T/T_{0}\) are taken to overcome the fluctuations of the MOT [4]. \(N_{0}\) and \(T_{0}\) represent the number and the temperature of trapped atoms in a steady state without photoionization, respectively. As demonstrated in Figs.6 to 9, for any wavelength of the ionization laser, the variation of the atomic number ratio \(N/N_{0}\) is well fit by a single exponential decay function plus an offset, which can be explained by Eq.1.
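These fits are ordinary non-linear least-squares regressions; a minimal sketch with SciPy is given below, assuming hypothetical arrays of measured ratios at each ionization-laser intensity (the functional forms, \(ae^{-bI_{PI}}+c\) for the atomic number and \((ae^{-bI_{PI}}+c)^{2}\) for the temperature, follow the captions of Figs.6 to 9).

```python
import numpy as np
from scipy.optimize import curve_fit

def n_ratio(I_pi, a, b, c):
    """Single exponential decay plus offset for N/N0 versus intensity."""
    return a * np.exp(-b * I_pi) + c

def t_ratio(I_pi, a, b, c):
    """Squared form for T/T0, following T proportional to r_atom^2."""
    return (a * np.exp(-b * I_pi) + c) ** 2

# Illustrative usage with made-up data (intensities and measured ratios):
intensity = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
n_over_n0 = np.array([1.00, 0.82, 0.70, 0.55, 0.43, 0.38])

popt, pcov = curve_fit(n_ratio, intensity, n_over_n0, p0=(0.6, 0.5, 0.4))
a, b, c = popt
a_err, b_err, c_err = np.sqrt(np.diag(pcov))  # 1-sigma parameter errors
```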
The variation of the temperature ratio \(T/T_{0}\) is well fit by a double exponential decay function plus an offset, because \(T\propto r_{atom}^{2}\)[22]. As for the relationship between \(-\log(T/T_{0})\) and \(-\log(N/N_{0})\), it depends not only on the wavelength of the ionization laser but also on whether the ion trap is turned on, as shown in Figs.10 to 13. For the case that the wavelength of the ionization laser is 450 nm, shown in Fig.10, \(-\log(T/T_{0})\) linearly increases with \(-\log(N/N_{0})\). It means that the number \(N\) and the temperature \(T\) of trapped atoms follow a power law \(T\propto N^{\gamma}\). More interestingly, the data obtained with or without the ion trap on almost fall on the same curve within the experimental error. The \(\gamma\) obtained in the ion-neutral hybrid trap is 0.21(0.03), which is nearly the same as the value of 0.19(0.01) obtained in the glass MOT[4]. The difference between this work, measured in the hybrid trap, and the result measured in the MOT with a glass chamber[4] is the difference in the offset. This may be mainly due to different collisional loss rates \(\gamma_{L}\) for the two apparatuses. The relationships between \(-\log(T/T_{0})\) and \(-\log(N/N_{0})\) are similar when the wavelength of the ionization laser is varied among 475, 476, 477, 478 and 479 nm, as shown in Figs.11 to 13. We take the 477 nm ionization laser case as an example and show it in Fig.12. In the case that the ion trap is turned off, at first \(-\log(T/T_{0})\) linearly increases as \(-\log(N/N_{0})\) increases, then jumps at a certain ionization intensity \(I_{thre}\) and remains constant after the jump. However, when the ion trap is turned on, \(-\log(T/T_{0})\) linearly increases as \(-\log(N/N_{0})\) increases over the whole range of experimental ionization laser intensities. To reveal the underlying physical mechanism, we compare the expansion velocities of ions in an ultracold neutral plasma as a function of the initial electron temperature[29] with the intensity of the ionization laser at the onset of the curves for \(-\log(T/T_{0})\) versus \(-\log(N/N_{0})\), as shown in Fig.14; their variation behaviors are similar. This result further indicates that electrons affect ion-atom collisions through disorder-induced heating in CW-laser photoionization. Specifically, the linear relationship between \(-\log(T/T_{0})\) and \(-\log(N/N_{0})\) results from the fact that \(N\) and \(T\) follow a power law \(T\propto N^{\gamma}\). Jumping to a constant value means that the temperature achieves equilibrium when the irradiation period of the ionization laser is 4 s. It illustrates a jump in the behavior of the decreasing rate of the temperature, \(\gamma_{r}\). By taking DIH into account, this is clearly understood. As was already mentioned, the rate of atomic loss \(\gamma_{x}\) should increase with the intensity of the ionization laser, the DIH effect intensifies, and the increase in \(\gamma_{r}\) follows. Thus, the temperature approaches equilibrium more quickly as the intensity of the ionization laser increases.

Figure 4: Comparison of exponential factors \(\gamma_{x}\) and \(\gamma_{r}\) as a function of the ionization laser intensity at \(\lambda_{ion}\)=447 nm, ion trap on or off.

Figure 5: Comparison of exponential factors \(\gamma_{x}\) and \(\gamma_{r}\) as a function of the ionization laser intensity at \(\lambda_{ion}\)=478.8 nm, ion trap on or off.

Figure 6: The number and temperature of the remaining atoms as a function of the intensity of the ionization laser. Measurements are performed in the ion-neutral hybrid trap[17] with or without the ion trap on, and in the standard vapor-loaded MOT with a glass vacuum rectangular chamber[4], respectively. The wavelength of the ionization laser is 450 nm and the detuning \(\Delta\) of the first excitation laser frequency from the transition \(5\ ^{2}S_{1/2},F=2\to 5\ ^{2}P_{3/2},F^{\prime}=3\) is -12 MHz. The irradiation period is 4 s. \(N_{0}\) and \(T_{0}\) represent the number and the temperature of trapped atoms in a steady state without photoionization, respectively. The atomic number is fitted by a single exponential curve \(ae^{-bI_{PI}}+c\), and the temperature is fitted by a double exponential curve \((ae^{-bI_{PI}}+c)^{2}\); the same holds for the following graphs.

Figure 7: The number of the remaining atoms \(N/N_{0}\) as a function of the intensity of the ionization laser in the two-step CW-laser photoionization of laser-cooled \({}^{87}\)Rb atoms. Measurements are performed in the ion-neutral hybrid trap [17] with or without the ion trap on. The wavelength of the ionization laser is 475 nm. The detuning \(\Delta\) of the first excitation laser is -12 MHz. The irradiation period of the ionization laser is 4 s.

Figure 8: The number of the remaining atoms \(N/N_{0}\) as a function of the intensity of the ionization laser in the two-step CW-laser photoionization of laser-cooled \({}^{87}\)Rb atoms. Measurements are performed in the ion-neutral hybrid trap [17] with or without the ion trap on. The wavelength of the ionization laser is 477 nm and the detuning \(\Delta\) of the first excitation laser is -12 MHz. The irradiation period of the ionization laser is 4 s.

Figure 9: The number of the remaining atoms \(N/N_{0}\) as a function of the intensity of the ionization laser in the two-step CW-laser photoionization of laser-cooled \({}^{87}\)Rb atoms. Measurements are performed in the ion-neutral hybrid trap [17] with or without the ion trap on. The wavelength of the ionization laser is 476 (up), 478 (middle), and 479 (bottom) nm, respectively. The detuning \(\Delta\) of the first excitation laser is -12 MHz. The irradiation period of the ionization laser is 4 s.

Figure 11: The behavior of \(-\log(T/T_{0})\) as a function of \(-\log(N/N_{0})\) obtained with different \(I_{PI}\) in the two-step CW-laser photoionization of laser-cooled \({}^{87}\)Rb atoms with \(\lambda_{ion}\)=475 nm. Measurements are performed in the ion-neutral hybrid trap [17] with or without the ion trap on. The irradiation period of the ionization laser is 4 s.

Figure 12: The behavior of \(-\log(T/T_{0})\) as a function of \(-\log(N/N_{0})\) obtained with different \(I_{PI}\) in the two-step CW-laser photoionization of laser-cooled \({}^{87}\)Rb atoms with \(\lambda_{ion}\)=477 nm. Measurements are performed in the ion-neutral hybrid trap [17] with or without the ion trap on. The irradiation period of the ionization laser is 4 s.

## IV Conclusion To investigate the impact of ion-atom elastic collisions on the number and temperature of the remaining atoms in CW-laser photoionization, we measured the number of the remaining atoms and the radius and temperature of the cold atomic cloud in an Rb\({}^{+}\)-Rb hybrid trap. We have investigated the dependence of the number and temperature of the remaining atoms on the wavelength and intensity of the ionization laser as well as on whether the ion trap is on or off.
We find that the number and radius of the remaining atoms are each well fit by a single exponential decay function plus an offset. The two exponential factors \(\gamma_{x}\) and \(\gamma_{r}\) exhibit a clear linear relationship, since the temperature \(T\) and number \(N\) of the remaining atoms in an MOT follow a power law, \(T\propto N^{\gamma}\). Therefore, the evolution of the number and temperature of the remaining atoms can be obtained by analyzing the loss rate of atoms. Ion-atom elastic collisions cause atomic loss. We believe that electrons can affect ion-atom collisions through disorder-induced heating (DIH) caused by spatial correlations between electrons and ions. Specifically, even if plasma is not produced, DIH still heats the ions. The ion temperature determines the ion-atom collision energy and thus affects the number and temperature of the remaining atoms. On the other hand, when the ion trap is turned on to repel the electrons, or when the initial electron temperature is above 1000 K, there is no DIH. These results, shown in Figs. 4, 5 and 10 to 13, reveal the effect of both ion-atom collisions and the presence of electrons on the temperature \(T\) and number \(N\) of the remaining atoms. They demonstrate that the number and temperature of the remaining atoms constitute a powerful experimental tool to study the relaxation of the ion temperature and to exhibit the effect of ion-atom elastic collisions in CW-laser photoionization. Our findings open new avenues for research into atomic processes, transport, and related phenomena in ultracold plasma, and provide an important reference for high-energy-density plasma (HEDP) research.

Figure 13: The behavior of \(-\log(T/T_{0})\) as a function of \(-\log(N/N_{0})\) obtained with different \(I_{PI}\) in the two-step CW-laser photoionization of laser-cooled \({}^{87}\)Rb atoms with \(\lambda_{ion}\)=476 (top), 478 (middle), and 479 (bottom) nm, respectively. Measurements are performed in the ion-neutral hybrid trap [17] without the ion trap on. The irradiation period of the ionization laser is 4 s.

Figure 14: Comparison of the expansion velocities of ions \(v_{0}\) [29] and the ionization laser intensity at the onset \(I_{thre}\) as functions of the initial electron temperature.

Acknowledgements

This study was supported by the National Key Research and Development Program of China (Grant Nos. 2017YFA0402300 and 2017YFA0304900), the Beijing Natural Science Foundation (Grant No. 1212014), the Fundamental Research Funds for the Central Universities, the Key Research Program of the Chinese Academy of Sciences (Grant No. XDPB08-3), the specialized research fund for the CAS Key Laboratory of Geospace Environment (GE2020-01), and the National Natural Science Foundation of China (Grants Nos. 61975091 and 61575108).
2307.02297
RIS with insufficient phase shifting capability: Modeling, beamforming, and experimental validations
Most research works on reconfigurable intelligent surfaces (RIS) rely on idealized models of the reflection coefficients, i.e., uniform reflection amplitude for any phase and sufficient phase shifting capability. In practice, however, such models are oversimplified. This paper introduces a realistic reflection coefficient model for RIS based on measurements. The reflection coefficients are modeled as discrete complex values that have non-uniform amplitudes and suffer from insufficient phase shifting capability. We then propose a group-based query algorithm that takes the imperfect coefficients into consideration while optimizing the reflection coefficients. We analyze the performance of the proposed algorithm and derive closed-form expressions that characterize the received power of an RIS-aided wireless communication system. The performance gains of the proposed algorithm are confirmed in simulations. Furthermore, we validate the theoretical results by experiments with our fabricated RIS prototype systems. The simulation and measurement results match well with the theoretical analysis.
Lin Cao, Haifan Yin, Li Tan, Xilong Pei
2023-07-05T13:55:40Z
http://arxiv.org/abs/2307.02297v2
# RIS with Insufficient Phase Shifting Capability: Modeling, Beamforming, and Experimental Validations

###### Abstract

Most research works on reconfigurable intelligent surfaces (RIS) rely on idealized models of the reflection coefficients, i.e., uniform reflection amplitude at any phase and sufficient phase shifting capability. In practice, however, such models are oversimplified. This paper introduces a realistic reflection coefficient model for RIS based on measurements. The reflection coefficients are modeled as discrete complex values that have non-uniform amplitudes and suffer from insufficient phase shifting capability. We then propose a group-based query algorithm that takes the imperfect coefficients into consideration while optimizing the reflection coefficients. We analyze the performance of the proposed algorithm and derive closed-form expressions that characterize the received power of an RIS-aided wireless communication system. The performance gains of the proposed algorithm are confirmed in simulations. Furthermore, we validate the theoretical results by experiments with our fabricated RIS prototype systems. The simulation and measurement results match well with the theoretical analysis.

Reconfigurable intelligent surface, practical reflection coefficient, performance analysis, wireless propagation measurements.

## I Introduction

As the fifth-generation (5G) mobile communication gradually matures, the sixth-generation (6G) mobile communication has appeared on the horizon, calling for much higher data rates, connection density, and energy efficiency. With the expansion of the network scale and the ever-increasing throughput requirements, mobile communications face challenges of high energy consumption and low cost efficiency [1]. The recently proposed reconfigurable intelligent surface (RIS) provides a new paradigm for wireless communication, owing to its potential for re-designing the wireless propagation environment and counteracting adverse radio conditions. As a result, RIS is actively being discussed as a prospective technology for 6G [2]. An RIS is a two-dimensional array of sub-wavelength elements that can be configured using a large number of passive components [3]. The passive electromagnetic response (e.g., phase and amplitude) of each element is controlled by simple programmable components, such as positive-intrinsic-negative (PIN) diodes [4, 5], varactor diodes [6, 7], micro-electro-mechanical systems (MEMS) switches [8, 9], etc. By jointly manipulating these elements, an RIS is able to build a programmable wireless environment at low additional power and hardware expense [10], thereby improving the spectral efficiency, energy efficiency, and physical-layer security of wireless communication systems. To explore the potential of RIS techniques, RIS-aided wireless communication systems have recently been investigated in various applications and setups, such as physical layer security [11], orthogonal frequency division multiplexing (OFDM) [12], and integrated sensing and communications (ISAC) [13]. These works propose cost-effective RIS-based solutions that achieve high beamforming gains and effective interference suppression using only low-cost reflecting elements.
The existing works are mostly based on the following three assumptions, which make the system model concise yet idealized:

* Continuous phase shifts at the reflecting elements,
* Uniform reflection amplitude at any phase shift,
* Sufficient phase shifting capability covering the range from 0 to \(2\pi\).

However, an RIS element with continuous phase shift is very challenging to implement due to its limited size and cost. Taking RISs with varactor diodes or PIN diodes as examples, an RIS element with finely tuned phase shifts requires a wide range of biasing voltages for the varactor diodes, or many PIN diodes together with the corresponding controlling signals from the RIS controller. As such, for practical RISs, it is more cost-effective to consider discrete phase shifts with a small number of control bits for each element. Besides, due to hardware limitations [14, 15], it is difficult and unrealistic to implement an RIS satisfying the ideal reflection model, in which the reflection amplitude of each element is uniform at any phase shift. The experimental results reported in [6] show that the amplitude and phase shift of the reflected signal in a practical RIS system with varactor diodes are related to both the frequency of the incident signal and the biasing voltage of the varactor diode. This is due to the fact that changing the frequency of the incident signal or the control voltage shifts the equivalent impedance of the RIS element, leading to a variation in the ohmic loss of the system, which subsequently affects the amplitude and phase shift of the reflected signal. In reality, this has long been a problem in RIS implementation [16]. Moreover, the phase shifting capability is not always sufficient. The experimental results in [6] show that the phase response of an RIS element is sensitive to the angle of the incident signal, which is due to the RIS being spatially dispersive [17, 18]. Besides, most existing RIS elements have limited phase shifting capability and cannot cover the range from 0 to \(2\pi\). Some previous studies have investigated practical system models of RIS-aided systems [19, 20, 21, 22, 23, 24, 25]. In these studies, however, the authors focused on discrete phase shifts at the RIS elements or on the non-ideal reflection model, in which the amplitude of the reflected signal varies with the phase shift. Few previous studies have focused on a practical reflection coefficient model, especially one accounting for insufficient phase shifting capability. Motivated by the above, in this paper we study an RIS-aided wireless communication system by establishing a practical system model with discrete reflection coefficients and a limited phase shift range. We formulate and solve the problem of maximizing the received power at the user by proposing a group-based query algorithm that optimizes the reflection coefficients in the scenario where the above three idealized assumptions are not valid. To analyze the effect of non-ideal reflection coefficients, the asymptotic performance is analyzed and the corresponding closed-form expressions are derived. We validate the theoretical results by both numerical simulations and experimental measurements using our prototype RIS systems. The main contributions of this paper are as follows:

* Based on experimental measurements, we introduce a realistic model for the reflection coefficients, which, to the best of our knowledge, is the first model taking into account the limited phase shifting capability.
* With the above model, we formulate a maximization problem for the received power, which is non-convex and difficult to solve. We propose a group-based query algorithm that finds the solutions efficiently by calculating the corresponding phase range of each discrete reflection coefficient.
* We analyze the performance of the proposed algorithm. Closed-form expressions for the performance of RIS-aided communication systems are derived, including the cases of uniform and non-uniform reflection amplitude.
* We conduct experiments with our fabricated RIS prototypes to evaluate and validate the theoretical performance under practical deployment conditions. Two different RIS prototypes working at 5.8 GHz and 2.6 GHz are employed in the measurements. The experimental results match well with our theoretical analysis.

The rest of this paper is organized as follows. Section II introduces the system model for RIS-aided communication systems and derives the received power. In Section III, we propose a realistic reflection coefficient model and a group-based query algorithm for solving the received power maximization problem. In Section IV we validate our theoretical results using both simulations and experimental measurements. Section V concludes the paper.

## II System Model

We consider an RIS-aided wireless communication system as shown in Fig. 1, where an RIS is adopted to reflect the signal from the base station (BS) towards the user. The RIS is composed of \(M\) reflecting elements. By utilizing varactor diodes or PIN diodes, each element can shift the phase of the reflected signal. We show two examples of the structure of the element in Fig. 1. A tunable phase shift of the reflected signal is achieved by varying the bias voltage of the diode. For ease of exposition, we assume that there is no line-of-sight (LoS) path between the BS and the user. However, the following derivation can easily be extended to scenarios with a LoS path. The signal received at the user is expressed as

\[y=\sqrt{P_{t}}\mathbf{f}^{H}\mathbf{\Phi}\mathbf{h}s+n, \tag{1}\]

where \(n\sim\mathcal{CN}\left(0,\sigma^{2}\right)\) is the additive white Gaussian noise (AWGN) with zero mean and variance \(\sigma^{2}\), \(P_{t}\) is the transmit power, \(s\) is the transmitted signal with \(\left|s\right|^{2}=1\), and \(\mathbf{\Phi}\overset{\Delta}{=}\mathrm{diag}(A_{1}e^{-j\theta_{1}},\cdots,A_{M}e^{-j\theta_{M}})\in\mathbb{C}^{M\times M}\) denotes the reflection coefficient matrix of the RIS, where \(A_{m}\) and \(\theta_{m}\) are the reflection amplitude and the phase shift applied to the incident signal, respectively. \(\mathbf{h}=\left[h_{1},\cdots,h_{M}\right]^{T}\in\mathbb{C}^{M\times 1}\) denotes the channel from the BS to the RIS, and \(\mathbf{f}=\left[f_{1},\cdots,f_{M}\right]^{T}\in\mathbb{C}^{M\times 1}\) denotes the channel from the RIS to the user. Although the direct path between the BS and the user is blocked, LoS components exist in practical implementations due to the directed reflection of the RIS.
For this reason, Rician fading is used to model the channels between the BS and the RIS, as well as between the RIS and the user, denoted by \(\mathbf{h}\) and \(\mathbf{f}\), which are written as

\[\mathbf{h}=\sqrt{\frac{K_{1}}{K_{1}+1}}\mathbf{\bar{h}}+\sqrt{\frac{1}{K_{1}+1}}\mathbf{\tilde{h}}, \tag{2}\]

and

\[\mathbf{f}=\sqrt{\frac{K_{2}}{K_{2}+1}}\mathbf{\bar{f}}+\sqrt{\frac{1}{K_{2}+1}}\mathbf{\tilde{f}}, \tag{3}\]

where \(\mathbf{\bar{h}}\) and \(\mathbf{\bar{f}}\) are the LoS components of each channel; \(\mathbf{\tilde{h}}\) and \(\mathbf{\tilde{f}}\) are the non-LoS (NLoS) components; and \(K_{1}\) and \(K_{2}\) are the Rician K-factors of \(\mathbf{h}\) and \(\mathbf{f}\), respectively.

Fig. 1: An RIS-based wireless communication system.

Since the distances between the BS and the RIS, as well as between the RIS and the user, are significantly greater than the distance between any two RIS elements, we assume that the path loss of the BS-RIS link and of the RIS-user link via different RIS elements is identical. The LoS components of each channel via the \(m\)-th RIS element are denoted by [26]

\[\mathbf{\bar{h}}=\sqrt{G_{a}D_{1}^{-\alpha}}\left[e^{-j\frac{2\pi}{\lambda}D_{1}},\cdots,e^{-j\frac{2\pi}{\lambda}D_{m}},\cdots,e^{-j\frac{2\pi}{\lambda}D_{M}}\right]^{T}, \tag{4}\]

and

\[\mathbf{\bar{f}}=\sqrt{d_{1}^{-\alpha}}\left[e^{-j\frac{2\pi}{\lambda}d_{1}},\cdots,e^{-j\frac{2\pi}{\lambda}d_{m}},\cdots,e^{-j\frac{2\pi}{\lambda}d_{M}}\right]^{T}, \tag{5}\]

where \(\alpha\) is the path loss factor, \(G_{a}\) is the antenna gain, and \(\lambda\) is the wavelength of the signal. \(D_{m}\) and \(d_{m}\) are the distances between the BS and the \(m\)-th RIS element and between the \(m\)-th RIS element and the user, respectively, as illustrated in Fig. 1. During a channel coherence interval, the LoS components are constant, whereas the NLoS components of \(\mathbf{h}\) and \(\mathbf{f}\) follow \(i.i.d.\) complex Gaussian distributions [27]. The NLoS components of each channel are respectively denoted by

\[\mathbf{\tilde{h}}=L(D_{1})\left[g_{1},\cdots,g_{m},\cdots,g_{M}\right]^{T}, \tag{6}\]

and

\[\mathbf{\tilde{f}}=L(d_{1})\left[b_{1},\cdots,b_{m},\cdots,b_{M}\right]^{T}, \tag{7}\]

where \(L(\cdot)\) is the channel gain of the NLoS component, and \(g_{m}\sim\mathcal{CN}\left(0,1\right)\) and \(b_{m}\sim\mathcal{CN}\left(0,1\right)\) denote the small-scale NLoS fading coefficients.
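As a concrete illustration of Eqs. (2)-(7), the sketch below draws one realization of the Rician channels \(\mathbf{h}\) and \(\mathbf{f}\). All numerical values (wavelength, distances, gains, and the NLoS gain \(L(\cdot)\)) are illustrative assumptions rather than the paper's simulation settings.

```python
# Minimal sketch (assumed parameters, not the authors' code): one realization
# of the Rician-faded channels h (BS-RIS) and f (RIS-user) of Eqs. (2)-(7).
import numpy as np

rng = np.random.default_rng(1)

M, lam, alpha, Ga = 64, 0.115, 2.2, 10.0    # illustrative values only
K1, K2 = 4.0, 4.0                           # Rician K-factors
D = 90.0 + 0.05 * np.arange(M)              # hypothetical BS to m-th element distances
d = 70.0 + 0.05 * np.arange(M)              # hypothetical element-to-user distances
L_nlos = 1e-3                               # stand-in NLoS channel gain L(.)

# LoS components, Eqs. (4)-(5)
h_bar = np.sqrt(Ga * D[0]**(-alpha)) * np.exp(-1j * 2 * np.pi / lam * D)
f_bar = np.sqrt(d[0]**(-alpha)) * np.exp(-1j * 2 * np.pi / lam * d)

# NLoS components, Eqs. (6)-(7): i.i.d. CN(0,1) small-scale fading
cn = lambda n: (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
h_til = L_nlos * cn(M)
f_til = L_nlos * cn(M)

# Rician combination, Eqs. (2)-(3)
h = np.sqrt(K1 / (K1 + 1)) * h_bar + np.sqrt(1 / (K1 + 1)) * h_til
f = np.sqrt(K2 / (K2 + 1)) * f_bar + np.sqrt(1 / (K2 + 1)) * f_til
print(np.abs(h[:3]), np.abs(f[:3]))
```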
We then derive the analytical expression for the maximum received power of the system. The instantaneous received power is given by

\[P_{r}=P_{t}\big{\|}\mathbf{f}^{H}\mathbf{\Phi}\mathbf{h}\big{\|}^{2}. \tag{8}\]

The instantaneous received power is an exponentially distributed random variable. The long-term average received power (LARP) \(\Gamma\) is denoted by

\[\Gamma=\mathbb{E}\left\{P_{r}\right\}=P_{t}\,\mathbb{E}\left\{\left\|\mathbf{f}^{H}\mathbf{\Phi}\mathbf{h}\right\|^{2}\right\}. \tag{9}\]

To gain a deeper understanding of the LARP \(\Gamma\), we provide another form of it in the following proposition.

**Proposition 1**.: _The LARP \(\Gamma\) is given by_

\[\Gamma=\kappa_{\text{NLoS}}\sum_{m}A_{m}^{2}+\kappa_{\text{LoS}}\sum_{m,m^{\prime}}A_{m}A_{m^{\prime}}e^{-j\left[\phi_{m}-\phi_{m^{\prime}}+\theta_{m}-\theta_{m^{\prime}}\right]}, \tag{10}\]

_where \(\phi_{m}=\frac{2\pi}{\lambda}(D_{m}+d_{m})\) denotes the total phase shift induced by the LoS components of each channel; \(\kappa_{\text{LoS}}\) and \(\kappa_{\text{NLoS}}\) are constants defined as_

\[\kappa_{\text{LoS}}=\frac{K_{1}K_{2}\eta_{\text{LoS}}}{(K_{1}+1)(K_{2}+1)}, \tag{11}\]

\[\kappa_{\text{NLoS}}=\frac{K_{1}\eta_{\text{NLoS}1}+K_{2}\eta_{\text{NLoS}2}+\eta_{\text{NLoS}3}}{(K_{1}+1)(K_{2}+1)}, \tag{12}\]

_where \(\eta_{\text{LoS}}\), \(\eta_{\text{NLoS}1}\), \(\eta_{\text{NLoS}2}\), and \(\eta_{\text{NLoS}3}\) are constants related to the path loss of the channels, which are defined as_

\[\eta_{\text{LoS}}=\sqrt{D_{1}^{-\alpha}d_{1}^{-\alpha}}P_{t}G_{a}, \tag{13}\]
\[\eta_{\text{NLoS}1}=\sqrt{G_{a}D_{1}^{-\alpha}}P_{t}L(d_{1}), \tag{14}\]
\[\eta_{\text{NLoS}2}=\sqrt{G_{a}d_{1}^{-\alpha}}P_{t}L(D_{1}), \tag{15}\]
\[\eta_{\text{NLoS}3}=P_{t}L(D_{1})L(d_{1}). \tag{16}\]

_Proof:_ See Appendix A. \(\Box\)

We aim to maximize the received power at the user by optimizing the response of each RIS element. The problem is formulated as

\[\text{(P0):}\max_{\mathbf{\Phi}}\ P_{t}\mathbb{E}\left\{\left\|\mathbf{f}^{H}\mathbf{\Phi}\mathbf{h}\right\|^{2}\right\} \tag{17}\]
\[s.t.\ \ [\mathbf{\Phi}]_{m,m}=A_{m}e^{-j\theta_{m}},\ m=1,\cdots,M,\]
\[0\leq\theta_{m}\leq 2\pi,\ m=1,\cdots,M.\]

The maximum LARP is obtained when \(\phi_{m}-\phi_{m^{\prime}}+\theta_{m}-\theta_{m^{\prime}}=0\) and \(A_{m}=1\) for any \(m\) and \(m^{\prime}\). In other words, the optimal continuous phase shift \(\theta_{m}^{*}\) of the \(m\)-th RIS element should satisfy the following constraint:

\[\theta_{m}^{*}+\phi_{m}=C, \tag{18}\]

where \(C\) is an arbitrary constant. When the phase shift of each RIS element satisfies (18) and all elements share the same amplitude value \(A_{m}=1\), the maximum LARP is obtained:

\[\Gamma_{\text{max}}=\max_{\mathbf{\Phi}}P_{t}\mathbb{E}\left\{\left\|\mathbf{f}^{H}\mathbf{\Phi}\mathbf{h}\right\|^{2}\right\}=\kappa_{\text{NLoS}}M+\kappa_{\text{LoS}}M^{2}. \tag{19}\]

\(\Gamma_{\text{max}}\) will, therefore, serve as an upper bound on the received power. According to (19), the maximum LARP increases with the Rician K-factors of the channels. The relationship between the RIS size \(M\) and the maximum LARP for different values of \(K_{1}\) and \(K_{2}\) is as follows. For a pure LoS channel, i.e., \(K_{1},K_{2}\rightarrow\infty\), an asymptotically squared maximum LARP of \(O\left(M^{2}\right)\) is achieved. For a Rayleigh channel, i.e., \(K_{1},K_{2}\to 0\), an asymptotically linear LARP of \(O\left(M\right)\) is achieved.
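The \(O(M^{2})\) scaling under the phase-alignment condition (18) can be reproduced numerically. The following is a minimal sketch under stated assumptions (unit-gain pure-LoS channels and hypothetical distances), which sets each element's phase to cancel the cascade phase, up to the paper's sign conventions.

```python
# Minimal sketch: with the phase-alignment condition of Eq. (18), the pure-LoS
# received power |f^H Phi h|^2 scales as M^2. Distances are hypothetical.
import numpy as np

lam = 0.115  # assumed wavelength in meters
for M in (16, 64, 256):
    D = 90.0 + 0.05 * np.arange(M)                   # BS to m-th element
    d = 70.0 + 0.05 * np.arange(M)                   # m-th element to user
    h_bar = np.exp(-1j * 2 * np.pi / lam * D)        # unit-gain LoS channels
    f_bar = np.exp(-1j * 2 * np.pi / lam * d)
    theta = np.angle(np.conj(f_bar) * h_bar)         # cascade phase per element
    Phi = np.diag(np.exp(-1j * theta))               # aligns every term, A_m = 1
    P = np.abs(np.conj(f_bar) @ Phi @ h_bar) ** 2    # f^H Phi h with unit power
    print(M, P / M**2)                               # ratio stays at 1.0
```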
## III Performance Analysis for Practical Systems

Since continuous phase shifts are difficult to realize due to hardware limitations, discrete phase shifts are usually employed in practical systems. In this section, we introduce a realistic discrete phase shifting model and discuss how the employment of realistic discrete phase shifters affects the maximum LARP of an RIS-aided system.

### _The realistic discrete phase shifting model_

We first assume that the RIS has a phase shifting capability of \(\omega\), which means it can generate a phase shift covering the range from \(0\) to \(\omega\), and that the phase shift is \(k\)-bit uniformly quantized. In other words, we control the programmable components, such as varactor diodes or PIN diodes, to generate \(2^{k}\) patterns of the reflection coefficients, which are denoted by:

\[\Phi_{i}=A_{\theta_{i}}e^{-j\theta_{i}},\quad i=1,2,\cdots,2^{k}, \tag{20}\]

where \(A_{\theta_{i}}\) is the amplitude when the phase shift is \(\theta_{i}\). To investigate the effect of the phase shifting capability on the performance, RIS systems are divided into two categories: systems with sufficient phase shifting capability and systems with insufficient phase shifting capability. We consider the phase shifting capability to be sufficient when it meets the quantization requirements: for example, when it surpasses \(180^{\circ}\) in a 1-bit quantized system, exceeds \(270^{\circ}\) in a 2-bit quantized system, etc. Otherwise, the RIS system is called a system with insufficient phase shifting capability. For systems with sufficient phase shifting capability, i.e., \(\omega\geq\frac{2^{k}-1}{2^{k}}2\pi\), the uniform phase interval is \(\Delta\theta=\frac{2\pi}{2^{k}}\). For systems with insufficient phase shifting capability, i.e., \(\omega<\frac{2^{k}-1}{2^{k}}2\pi\), the uniform phase interval after quantization is \(\Delta\theta=\frac{\omega}{2^{k}-1}\). \(\theta_{i}\) is given by

\[\theta_{i}=\begin{cases}\frac{\omega}{2^{k}-1}\cdot(i-1)\,,&\omega<\frac{2^{k}-1}{2^{k}}\cdot 2\pi;\\ \frac{2\pi}{2^{k}}\cdot(i-1)\,,&\omega\geq\frac{2^{k}-1}{2^{k}}\cdot 2\pi.\end{cases} \tag{21}\]

We will analyze the performance of an RIS-aided system employing this realistic phase shifter, with or without the assumption of ideal reflection, in the following subsections.

### _Analysis on the realistic discrete phase shifting model with uniform amplitude_

In this subsection, we discuss the impact of a limited phase shift range on the maximum LARP under the ideal reflection model with uniform reflective amplitude, in which each discrete phase shift \(\theta_{i}\) corresponds to an amplitude \(A_{\theta_{i}}=1\). In this scenario, the problem (P0) of maximizing the received power is transformed into

\[\text{(P1)}:\max_{\mathbf{\Phi}}\ P_{t}\mathbb{E}\left\{\left\|\mathbf{f}^{H}\mathbf{\Phi}\mathbf{h}\right\|^{2}\right\} \tag{22}\]
\[s.t.\ \hat{\phi}_{m}=e^{-j\hat{\theta}_{m}},\ m=1,\cdots,M, \tag{23}\]
\[\hat{\theta}_{m}\in(\theta_{1},\cdots,\theta_{i},\cdots,\theta_{2^{k}}),\ m=1,\cdots,M. \tag{24}\]

Under the constraints in (23) and (24), the LARP of the RIS-aided system is written as

\[\Gamma=\kappa_{\text{NLoS}}M+\kappa_{\text{LoS}}\sum_{m,m^{\prime}}e^{-j\left[(\phi_{m}+\hat{\theta}_{m})-(\phi_{m^{\prime}}+\hat{\theta}_{m^{\prime}})\right]}. \tag{25}\]

To obtain the maximum LARP in this scenario, for the \(m\)-th RIS element, we select the discrete phase shift closest to the optimal one \(\theta_{m}^{*}\) given in (18), and denote it by \(\hat{\theta}_{m}\). The phase errors resulting from the discrete phase shifts are defined as

\[\delta_{m}=\theta_{m}^{*}-\hat{\theta}_{m},\ m=1,\cdots,M. \tag{26}\]
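A minimal sketch of Eqs. (21) and (26) follows: it builds the \(2^{k}\) quantized levels and returns the phase error \(\delta\) for a desired phase \(\theta^{*}\). The 200° capability used in the example is an assumed value, not a measured one.

```python
# Minimal sketch of Eq. (21) (quantized levels) and Eq. (26) (phase error delta).
import numpy as np

def phase_levels(k: int, omega: float) -> np.ndarray:
    """The 2^k quantized phase shifts theta_i of Eq. (21)."""
    n = 2 ** k
    if omega < (n - 1) / n * 2 * np.pi:       # insufficient capability
        return omega / (n - 1) * np.arange(n)
    return 2 * np.pi / n * np.arange(n)       # sufficient capability

def phase_error(theta_star: float, levels: np.ndarray) -> float:
    """delta = theta* - theta_hat for the nearest level (circular distance)."""
    diff = np.angle(np.exp(1j * (theta_star - levels)))  # wrap to (-pi, pi]
    return diff[np.argmin(np.abs(diff))]

levels = phase_levels(k=2, omega=np.deg2rad(200))  # an assumed 200-deg capability
print(np.rad2deg(levels))                           # [0, 66.7, 133.3, 200]
print(np.rad2deg(phase_error(np.deg2rad(300), levels)))  # error of -60 deg
```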
The maximum LARP \(\hat{\Gamma}_{\text{max}}\) with discrete phase shifts is given by

\[\hat{\Gamma}_{\text{max}}=\kappa_{\text{NLoS}}M+\kappa_{\text{LoS}}\sum_{m,m^{\prime}}e^{-j\left(\delta_{m}-\delta_{m^{\prime}}\right)}=\kappa_{\text{NLoS}}M+\kappa_{\text{LoS}}\sum_{m,m^{\prime}}(\sin\delta_{m}\sin\delta_{m^{\prime}}+\cos\delta_{m}\cos\delta_{m^{\prime}}). \tag{27}\]

Since \(\phi_{m}\) is jointly determined by the wavelength of the incident signal, the distance between the BS and the \(m\)-th RIS element, and the distance between the \(m\)-th RIS element and the user, and since \(\theta_{m}^{*}+\phi_{m}=C\), we assume that the optimal phase shift \(\theta_{m}^{*}\) in the practical system is uniformly distributed in \([0,2\pi)\). The following theorem gives the expectation of the maximum LARP \(\mathbb{E}(\hat{\Gamma}_{\text{max}})\) of RIS-aided wireless communication systems in this scenario.

**Theorem 1**.: _Assuming that all elements of the RIS have the same reflection amplitude \(A=1\), the closed-form expression for the expectation of the maximum LARP at the user is given by_

\[\mathbb{E}\left(\hat{\Gamma}_{\text{max}}\right)=\begin{cases}\kappa_{\text{NLoS}}M+4\kappa_{\text{LoS}}M^{2}\left[P_{1}\sin b+P_{2}(\sin a-\sin b)\right]^{2},&\omega<\frac{2^{k}-1}{2^{k-1}}\pi;\\ \kappa_{\text{NLoS}}M+\kappa_{\text{LoS}}M^{2}\cdot\frac{2^{2k}}{\pi^{2}}\sin^{2}\frac{\pi}{2^{k}},&\omega\geq\frac{2^{k}-1}{2^{k-1}}\pi,\end{cases} \tag{28}\]

_where \(a=\pi-\frac{\omega}{2}\), \(b=\frac{\omega}{2(2^{k}-1)}\), \(P_{1}=\frac{2^{k}}{2\pi}\), \(P_{2}=\frac{1}{2\pi}\)._

_Proof:_ See Appendix B. \(\Box\)

Theorem 1 indicates that the expectation of the maximum LARP \(\mathbb{E}\left(\hat{\Gamma}_{\text{max}}\right)\) is determined by the size and topology of the system, the propagation environment, the number of quantization bits \(k\), and the phase shifting capability \(\omega\). We will further show in the next section that the phase shifting capability \(\omega\) is a key factor for performance.
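Theorem 1's closed form can be sanity-checked numerically. The sketch below is an assumption-laden check, not the paper's code: it draws uniform desired phases, quantizes them to the nearest level, and compares the empirical \(\mathbb{E}[\varphi(\delta_{m},\delta_{m^{\prime}})]\) against Eq. (62) of Appendix B.

```python
# Minimal Monte Carlo check of the Theorem 1 building block: with
# theta* ~ U[0, 2*pi) and nearest-level quantization, E[phi(dm, dm')] should
# match Eq. (62): 4*[P1*sin(b) + P2*(sin(a)-sin(b))]^2 when omega is insufficient.
import numpy as np

rng = np.random.default_rng(2)
k, omega = 2, np.deg2rad(200)                       # assumed 2-bit, 200-deg RIS
n = 2 ** k
levels = omega / (n - 1) * np.arange(n)             # insufficient-capability levels

theta = rng.uniform(0, 2 * np.pi, 200_000)          # desired phases theta*
wrap = np.angle(np.exp(1j * (levels[None, :] - theta[:, None])))
delta = wrap[np.arange(theta.size), np.abs(wrap).argmin(axis=1)]  # quant. errors

mc = np.cos(delta).mean() ** 2 + np.sin(delta).mean() ** 2  # independent dm, dm'

a, b = np.pi - omega / 2, omega / (2 * (n - 1))
P1, P2 = n / (2 * np.pi), 1 / (2 * np.pi)
closed = 4 * (P1 * np.sin(b) + P2 * (np.sin(a) - np.sin(b))) ** 2
print(mc, closed)   # the two values agree to Monte Carlo accuracy
```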
### _Analysis on the realistic discrete phase shifting model_

In practical systems, the amplitude response of the RIS reflecting elements generally depends on the phase shift value. In this subsection, we further consider the influence of the phase shifting capability on the maximum LARP based on the non-ideal reflection model, in which the reflection amplitude varies with the phase shift. In this scenario, based on the constraints of the realistic discrete phase shifting model, the problem of maximizing the LARP is written as

\[\text{(P2)}:\max_{\mathbf{\Phi}}\ P_{t}\mathbb{E}\left\{\left\|\mathbf{f}^{H}\mathbf{\Phi}\mathbf{h}\right\|^{2}\right\} \tag{29}\]
\[s.t.\ \hat{\phi}_{m}=A_{\hat{\theta}_{m}}e^{-j\hat{\theta}_{m}},\ m=1,\cdots,M, \tag{30}\]
\[\hat{\theta}_{m}\in(\theta_{1},\cdots,\theta_{i},\cdots,\theta_{2^{k}}),\ m=1,\cdots,M. \tag{31}\]

Although the objective function of (P2) is convex in this scenario, solving (P2) is difficult due to the non-convex constraint in (30). When using the non-ideal reflection model, the reflection design should strike an appropriate balance between the amplitude and the phase of the reflected signal. To solve this problem, we propose a low-complexity algorithm based on vector quantization of the reflection coefficients to find an approximate solution to (P2). We begin by defining a quantization loss function \(\mathcal{L}\) to assess the difference between the quantized and desired reflection coefficients:

\[\mathcal{L}(\theta_{i},m)=1-A_{\theta_{i}}\cos(\theta_{i}-\theta_{m}^{*}), \tag{32}\]

which can be thought of as the difference between the desired reflection coefficient and the quantized reflection coefficient projected onto it. The optimization problem for the reflection coefficient of the \(m\)-th element is then simplified to (P3):

\[\text{(P3)}:\max_{\hat{\phi}_{m}}\ a_{\text{NLoS}}A_{\hat{\theta}_{m}}^{2}+a_{\text{LoS}}A_{\hat{\theta}_{m}}\cos(\hat{\theta}_{m}-\theta_{m}^{*}) \tag{33}\]
\[s.t.\ \hat{\theta}_{m}\in(\theta_{1},\cdots,\theta_{i},\cdots,\theta_{2^{k}}),\ m=1,\cdots,M, \tag{34}\]

where \(a_{\text{NLoS}}\) and \(a_{\text{LoS}}\) denote constants related to the channel path loss:

\[a_{\text{NLoS}}=\frac{K_{1}\eta_{\text{NLoS}1}+K_{2}\eta_{\text{NLoS}2}+\eta_{\text{NLoS}3}}{(K_{1}+1)(K_{2}+1)}, \tag{35}\]
\[a_{\text{LoS}}=M\bar{A}\frac{K_{1}K_{2}\eta_{\text{LoS}}}{(K_{1}+1)(K_{2}+1)}. \tag{36}\]

Here, \(\bar{A}=\sum_{i}A_{\theta_{i}}^{2}/\sum_{i}A_{\theta_{i}}\) is a constant related to the power loss caused by the reflection coefficients of the RIS, which is derived based on the observation that the optimized phase shifts usually concentrate towards the phase shifts with the larger reflective amplitudes [22]. Note that (P3) can be solved by the exhaustive search method. Since in practical systems the number of control bits \(k\) is generally not greater than 3, the complexity of this method is not exceptionally high.
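As a minimal sketch of this exhaustive search (the amplitudes, levels, and the \(a_{\text{NLoS}}\), \(a_{\text{LoS}}\) values below are illustrative assumptions, not measured values):

```python
# Minimal sketch of the per-element exhaustive search for (P3): for each desired
# phase theta*, pick the discrete coefficient maximizing
# a_nlos*A^2 + a_los*A*cos(theta_i - theta*).
import numpy as np

def solve_p3(theta_star, levels, amps, a_nlos, a_los):
    """Return the index of the best discrete reflection coefficient for theta*."""
    obj = a_nlos * amps**2 + a_los * amps * np.cos(levels - theta_star)
    return int(np.argmax(obj))

# A 2-bit example: phases and (linear) amplitudes of the four states
levels = np.deg2rad([0.0, 90.0, 180.0, 270.0])
amps = 10 ** (np.array([0.0, -6.0, -10.0, -3.0]) / 20)   # dB -> linear amplitude
a_nlos, a_los = 0.1, 1.0                                  # stand-in constants

for deg in (10, 100, 190, 280):
    i = solve_p3(np.deg2rad(deg), levels, amps, a_nlos, a_los)
    print(deg, "-> state", i, "amp", round(float(amps[i]), 3))
```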
Moreover, since each RIS element can generate the same patterns of reflection coefficients, any RIS element with the identical expected phase shift has the same optimal reflection coefficient when solving problem (P3). Therefore, we may build a look-up table by calculating the expected phase shift range \(c_{i}\) of each quantized reflection coefficient. This table provides the optimized reflection coefficients for any possible value of the expected phase shift. In other words, the reflection coefficients are quantized using the look-up table, which further reduces the computational complexity of solving (P3). Below we show how this table is developed. For notational simplicity, we omit the subscript \(m\) of the expected phase \(\theta_{m}^{*}\). Firstly, we substitute each quantized reflection coefficient into the objective function (33) and obtain a series of functions as follows:

\[f_{1}(\theta)=a_{\text{NLoS}}A_{\theta_{1}}^{2}+a_{\text{LoS}}A_{\theta_{1}}\cos(\theta_{1}-\theta),\]
\[\cdots\]
\[f_{i}(\theta)=a_{\text{NLoS}}A_{\theta_{i}}^{2}+a_{\text{LoS}}A_{\theta_{i}}\cos(\theta_{i}-\theta), \tag{37}\]
\[\cdots\]
\[f_{2^{k}}(\theta)=a_{\text{NLoS}}A_{\theta_{2^{k}}}^{2}+a_{\text{LoS}}A_{\theta_{2^{k}}}\cos(\theta_{2^{k}}-\theta).\]

As shown in Fig. 2, the solution to problem (P3) for the expected phase \(\theta^{*}=\theta\) is the reflection coefficient corresponding to the envelope of the set of curves at phase shift \(\theta\). As a result, determining the expected phase range \(c_{i}\) is equivalent to finding the function among (37) that maximizes the objective, and calculating the corresponding range of \(\theta\). To solve this problem, we start by seeking the intersections of every two curves in \([0,2\pi)\). For instance, for \(f_{i}(\theta)\) and \(f_{i^{\prime}}(\theta)\), we have

\[a_{\text{NLoS}}A_{\theta_{i}}^{2}+a_{\text{LoS}}A_{\theta_{i}}\cos(\theta_{i}-\theta)=a_{\text{NLoS}}A_{\theta_{i^{\prime}}}^{2}+a_{\text{LoS}}A_{\theta_{i^{\prime}}}\cos(\theta_{i^{\prime}}-\theta). \tag{38}\]

The above formula can be converted into

\[\sqrt{C_{\text{sin}}^{2}+C_{\text{cos}}^{2}}\,\sin(\theta+\vartheta)=\frac{a_{\text{NLoS}}(A_{\theta_{i^{\prime}}}^{2}-A_{\theta_{i}}^{2})}{a_{\text{LoS}}}, \tag{39}\]

where \(C_{\text{sin}}\) and \(C_{\text{cos}}\) are constants determined by

\[C_{\text{sin}}=A_{\theta_{i}}\sin\theta_{i}-A_{\theta_{i^{\prime}}}\sin\theta_{i^{\prime}}, \tag{40}\]
\[C_{\text{cos}}=A_{\theta_{i}}\cos\theta_{i}-A_{\theta_{i^{\prime}}}\cos\theta_{i^{\prime}}, \tag{41}\]

and the auxiliary angle \(\vartheta\) satisfies \(\tan\vartheta=C_{\text{cos}}/C_{\text{sin}}\), with its quadrant determined by the signs of \(C_{\text{sin}}\) and \(C_{\text{cos}}\). Taking \(C_{\text{sin}}>0\) as an example, we solve the intersection of \(f_{i}(\theta)\) and \(f_{i^{\prime}}(\theta)\) as

\[\theta_{ii^{\prime}}=\arcsin\frac{a_{\text{NLoS}}(A_{\theta_{i^{\prime}}}^{2}-A_{\theta_{i}}^{2})}{a_{\text{LoS}}\sqrt{C_{\text{sin}}^{2}+C_{\text{cos}}^{2}}}-\arctan\frac{C_{\text{cos}}}{C_{\text{sin}}}. \tag{42}\]

Then we need to determine whether the following equality holds:

\[f_{i}(\theta_{ii^{\prime}})=\max[f_{1}(\theta_{ii^{\prime}}),\cdots,f_{i}(\theta_{ii^{\prime}}),\cdots,f_{2^{k}}(\theta_{ii^{\prime}})]. \tag{43}\]

If it does, we call it a valid intersection. Following that, we compare the derivatives \(f_{i}^{\prime}(\theta_{ii^{\prime}})\) and \(f_{i^{\prime}}^{\prime}(\theta_{ii^{\prime}})\). If \(f_{i}^{\prime}(\theta_{ii^{\prime}})>f_{i^{\prime}}^{\prime}(\theta_{ii^{\prime}})\), the phase shift range between \(\theta_{ii^{\prime}}\) and the next valid intersection belongs to \(c_{i}\), and the phase shift range between the last valid intersection and \(\theta_{ii^{\prime}}\) belongs to \(c_{i^{\prime}}\), and vice versa. By calculating all the valid intersections between the curves and comparing the derivatives of the corresponding curves at the valid intersections, the expected phase range corresponding to each quantized state is obtained. Once the phase ranges \(c_{i}\) are obtained, we may easily look up the quantized coefficient for any expected phase shift, which greatly reduces the computational complexity. The overall procedure to solve (P2) is summarized in Algorithm 1, which is referred to as the group-based query algorithm.

Fig. 2: An example of a set of curves of a 2-bit quantized RIS-aided system.
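Algorithm 1 obtains the ranges \(c_{i}\) analytically from the intersections above. As a cross-check, a brute-force numeric alternative (not Algorithm 1 itself) is to sample \(\theta^{*}\) densely, solve (P3) at each sample, and read off which state wins; the sketch below, with illustrative parameters, also reports the lengths \(\mu_{i}\) that appear later in Eq. (44).

```python
# Numeric alternative to the analytic intersection search: sample theta*
# densely, solve (P3) pointwise, and measure each state's phase range c_i.
import numpy as np

levels = np.deg2rad([0.0, 90.0, 180.0, 270.0])            # illustrative 2-bit RIS
amps = 10 ** (np.array([0.0, -6.0, -10.0, -3.0]) / 20)
a_nlos, a_los = 0.1, 1.0                                   # stand-in constants

grid = np.linspace(0, 2 * np.pi, 3600, endpoint=False)     # dense theta* samples
obj = a_nlos * amps[None, :] ** 2 + a_los * amps[None, :] * np.cos(
    levels[None, :] - grid[:, None])
best = obj.argmax(axis=1)                                  # winning state per sample

for i in range(levels.size):
    mu_i = 2 * np.pi * (best == i).mean()                  # length of c_i, cf. (44)
    print(f"state {i}: mu_i = {np.rad2deg(mu_i):6.1f} deg")
```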
Below we analyze the performance of the proposed algorithm and derive the expectation of the maximum LARP \(\mathbb{E}\left(\hat{\Gamma}_{\max}\right)\) at the user in RIS-aided wireless communication systems.

**Theorem 2**.: _Assuming that each RIS element can produce \(2^{k}\) discrete reflection coefficients as defined in (20), the expectation of the maximum LARP \(\mathbb{E}\left(\hat{\Gamma}_{\max}\right)\) at the user is given by_

\[\mathbb{E}\left(\hat{\Gamma}_{\max}\right)=M\kappa_{\text{NLoS}}\sum_{i}\frac{\mu_{i}}{2\pi}A_{\theta_{i}}^{2}+\frac{M^{2}\kappa_{\text{LoS}}}{4\pi^{2}}\sum_{i,i^{\prime}}A_{\theta_{i}}A_{\theta_{i^{\prime}}}\int_{\delta_{m}\in d_{i}}\int_{\delta_{m^{\prime}}\in d_{i^{\prime}}}\left(\sin\delta_{m}\sin\delta_{m^{\prime}}+\cos\delta_{m}\cos\delta_{m^{\prime}}\right)d\delta_{m}d\delta_{m^{\prime}}, \tag{44}\]

_where \(\mu_{i}\) is the length of the expected phase range \(c_{i}\), and \(d_{i}\) is the quantization error range of the \(i\)-th quantized reflection coefficient \(\Phi_{i}\)._

_Proof:_ See Appendix C. \(\Box\)

The proposed algorithm shows that the expected phase shift range \(c_{i}\) depends on the parameters of the RIS-aided system, such as the phase shifting capability \(\omega\), the reflective amplitudes \(A_{\theta_{i}}\), etc. Therefore, whenever we compute the expectation of the maximum LARP \(\mathbb{E}\left(\hat{\Gamma}_{\max}\right)\) of a different system, we need to recalculate the \(c_{i}\) according to the parameters of that system. However, \(c_{i}\) has a simpler solution when the number of bits is \(k=1\) and the channels contain pure LoS paths, which means the power of the RIS-reflected signal dominates the total received power; in other words, the Rician K-factors of the channels satisfy \(K_{1}\), \(K_{2}\rightarrow\infty\). In this case, the expectation of the maximum LARP \(\mathbb{E}\left(\hat{\Gamma}_{\max}\right)\) at the user is given in Corollary 1.

**Corollary 1**.: _Assume that each RIS element can produce two discrete reflection coefficients, \(\Phi_{1}=A_{\theta_{1}}e^{-j\theta_{1}}\) and \(\Phi_{2}=A_{\theta_{2}}e^{-j\theta_{2}}\), and that the channels contain pure LoS paths. The optimal phase shift ranges are_

\[c_{1}\in\left[0,\psi_{1}\right)\cup\left[\psi_{2},2\pi\right),\quad c_{2}\in\left[\psi_{1},\psi_{2}\right), \tag{45}\]

_where \(\psi_{1}=-\arctan\frac{A_{\theta_{2}}\cos\omega^{\prime}-A_{\theta_{1}}}{A_{\theta_{2}}\sin\omega^{\prime}}\in\left[0,\frac{\pi}{2}\right)\cup\left(\frac{\pi}{2},\pi\right)\), \(\psi_{2}=\pi-\arctan\frac{A_{\theta_{2}}\cos\omega^{\prime}-A_{\theta_{1}}}{A_{\theta_{2}}\sin\omega^{\prime}}\in\left[\pi,\frac{3\pi}{2}\right)\cup\left(\frac{3\pi}{2},2\pi\right)\), and \(\omega^{\prime}=\min(\omega,\pi)\)._

_The closed-form expression for the expectation of the maximum LARP \(\mathbb{E}\left(\hat{\Gamma}_{\max}\right)\) at the user is given by_

\[\mathbb{E}\left(\hat{\Gamma}_{\max}\right)=\begin{cases}\frac{\eta_{\text{LoS}}M^{2}}{\pi^{2}}\left[A_{\theta_{1}}^{2}+A_{\theta_{2}}^{2}-2A_{\theta_{1}}A_{\theta_{2}}\cos\omega\right],&\omega<\pi;\\ \frac{\eta_{\text{LoS}}M^{2}}{\pi^{2}}\left[A_{\theta_{1}}^{2}+A_{\theta_{2}}^{2}+2A_{\theta_{1}}A_{\theta_{2}}\right],&\omega\geq\pi.\end{cases} \tag{46}\]

_Proof:_ See Appendix D. \(\Box\)

The above corollary shows that, given the amplitudes of the quantized reflection coefficients, the LARP of a 1-bit RIS-aided system tends to decrease as the phase shifting capability \(\omega\) declines. However, for a given phase shifting capability \(\omega\), the impact of the reflection amplitudes on the RIS-aided system is not straightforward; we will show this in the following section using simulations.
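A minimal sketch evaluating the 1-bit, pure-LoS closed form of Corollary 1 against \(\omega\) follows; the amplitudes and the normalization \(\eta_{\text{LoS}}=1\) are illustrative assumptions.

```python
# Minimal sketch of Eq. (46): 1-bit, pure-LoS LARP versus capability omega.
import numpy as np

def larp_1bit(omega, A1, A2, eta_los=1.0, M=256):
    """E(Gamma_max) of Eq. (46); omega' = min(omega, pi) unifies both branches."""
    w = min(omega, np.pi)
    return eta_los * M**2 / np.pi**2 * (A1**2 + A2**2 - 2 * A1 * A2 * np.cos(w))

A1, A2 = 1.0, 10 ** (-2.0 / 20)   # state '0' at 0 dB, state '1' at -2 dB (assumed)
for deg in (30, 60, 90, 120, 150, 180):
    print(deg, larp_1bit(np.deg2rad(deg), A1, A2))  # LARP shrinks with omega
```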
## IV Simulation and Experimental Results

In this section, we conduct both simulations and experimental measurements to evaluate the performance of the group-based query algorithm and to validate the theoretical results presented in this work.

### _Simulation Results_

In this subsection, we analyze the performance of an RIS-aided communication system with users distributed randomly and uniformly on a quarter sphere of radius \(d_{0}\) centered on the RIS. The simulation results in all figures are averaged over 2000 independent realizations of the user locations. The channel parameters and RIS system parameters are chosen in accordance with the 3GPP propagation environment outlined in [28]. Unless otherwise noted, the simulation parameters are as follows. \(D_{0}=90\) m is the distance between the BS and the RIS center, and \(d_{0}=70\) m is the distance between the user and the RIS center. The number of RIS elements is \(M=4096\), and the sizes of the RIS elements are \(d_{h}=d_{v}=0.05\) m. The transmit power is \(20\) dBm, and the noise power is \(-90\) dBm. The path loss of the LoS and NLoS components is configured based on the UMa model defined in [28]. The carrier frequency is \(2.6\) GHz, and the Rician factors are \(K_{1}=K_{2}=4\).

Fig. 3: The LARP versus the RIS size \(M\), continuous phase shifts.

Fig. 3 shows the maximum LARP with continuous phase shifts versus the number of elements \(M\). As shown in Fig. 3, our theoretical results are very close to the simulated ones. The figure also shows that for all three channel conditions, the maximum received power increases with the number of elements \(M\). Furthermore, the slope of the maximum received power curve for the pure LoS channel is \(20\) dB/decade, indicating that the received power is proportional to the square of the number of RIS elements \(M\). Similarly, the slope of the maximum LARP curve for the Rayleigh channel is \(10\) dB/decade, meaning that the received power is proportional to \(M\). In addition, we can see that the maximum LARP increases with the Rician factors \(K_{1}\) and \(K_{2}\). To quantify the performance degradation in this scenario, we define a loss factor \(\varepsilon\):

\[\varepsilon=\log_{10}\frac{\mathbb{E}\left(\hat{\Gamma}_{\max}\right)}{\Gamma_{\max}}. \tag{47}\]

The loss factor is the ratio (on a logarithmic scale) of the average received power based on the realistic discrete phase shifting model to the maximum received power with continuous phase shifts given by (19); it represents the performance degradation caused by practical phase shifters. Fig. 4 shows the expectation of the LARP as a function of the decrement of the phase shifting capability \(c\) under the ideal reflection model. Here, \(c\) is defined as the difference between the phase capability of the RIS and the necessary phase shifting capability for the given number of quantization bits, i.e., \(c=\max\{0,\frac{2^{k}-1}{2^{k}}\cdot 2\pi-\omega\}\). In Fig. 4, the numbers of quantization bits are set to \(1\), \(2\), and \(3\), respectively. As can be seen, more quantization bits lead to higher performance for the same phase shifting capability, which is expected because a larger number of bits reduces the phase quantization error. Furthermore, according to Fig. 4, for systems with different numbers of control bits, the LARP decreases significantly with the decrement of the phase shifting capability of the system, especially when the number of quantization bits is small.
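Under the uniform-amplitude model, the degradation shown in Fig. 4 can be reproduced from Theorem 1 alone in the LoS-dominated regime, where \(\varepsilon\) reduces to the logarithm of \(\mathbb{E}[\varphi]\) from Eq. (62). The following is a minimal sketch (values printed in dB, an assumed convention); the 3 dB figures quoted below can be read off by differencing against the \(c=0\) row for each \(k\).

```python
# Minimal sketch of the loss factor under the uniform-amplitude model, using
# the E[phi] closed form of Eq. (62) (LoS-dominated regime assumed).
import numpy as np

def e_phi(k: int, omega: float) -> float:
    """E[phi(dm, dm')] from Eq. (62) for a k-bit RIS with capability omega."""
    n = 2 ** k
    if omega >= (n - 1) / n * 2 * np.pi:
        return (n / np.pi) ** 2 * np.sin(np.pi / n) ** 2
    a, b = np.pi - omega / 2, omega / (2 * (n - 1))
    P1, P2 = n / (2 * np.pi), 1 / (2 * np.pi)
    return 4 * (P1 * np.sin(b) + P2 * (np.sin(a) - np.sin(b))) ** 2

for k in (1, 2, 3):
    full = (2 ** k - 1) / 2 ** k * 2 * np.pi        # needed capability
    for c_deg in (0, 90, 140, 175):                  # capability decrement c
        omega = max(full - np.deg2rad(c_deg), 0.0)
        print(k, c_deg, 10 * np.log10(e_phi(k, omega)), "dB")
```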
For the \(1\)-bit, \(2\)-bit, and \(3\)-bit quantized reflection coefficients, a \(3\) dB LARP degradation is caused by a \(90^{\circ}\), \(140^{\circ}\), and \(175^{\circ}\) decrement of the phase capability, respectively. Besides, the theoretical results obtained according to Theorem 1 are in good agreement with the simulation results in Fig. 4.

Fig. 4: The performance degradation \(\varepsilon\) versus the decrement of phase shifting capability \(c\), uniform reflection amplitude.

Fig. 5: The maximum LARP versus the decrement of phase shifting capability \(c\) when \(k=2\) and \(k=3\).

Fig. 6: The maximum LARP versus the decrement of phase shifting capability \(c\) under different amplitudes of the reflected signal when \(k=2\).

Next, by varying the decrement of phase shifting capability \(c\), the LARP is compared in Fig. 5 for the following two schemes of computing the discrete RIS phase shifts: (i) the group-based query algorithm, and (ii) the ideal model (i.e., assuming reflection amplitude \(A=1\) and sufficient phase shifting capability). 'AM [dB]' in this figure denotes the amplitudes of the reflected signal, which are determined by the amplitude and phase responses of the prototypes. Curves (1) and (2) in Fig. 5 represent \(2\)-bit quantized systems with states '00', '01', '10', and '11' corresponding to reflection amplitudes of \(0\) dB, \(-6\) dB, \(-10\) dB, and \(-3\) dB, respectively. Curves (3) and (4) in Fig. 5 represent \(3\)-bit quantized systems, with reflection amplitudes of \(0\) dB, \(-3\) dB, \(-6\) dB, \(-9\) dB, \(-10\) dB, \(-7\) dB, \(-3\) dB, and \(-2\) dB corresponding to the '000', '001', '010', '011', '100', '101', '110', and '111' states, respectively. This figure shows that the proposed group-based query algorithm outperforms the scheme based on the ideal model, thanks to its ability to strike a balance between the reflected signal amplitudes of the individual elements and the phase alignment over all the elements, so as to achieve the maximum signal power at the receiver. When the number of bits \(k\) increases, the performance gap between these two schemes increases, especially when the phase shifting capability is relatively small. In Fig. 6, we evaluate the differences in the maximum LARP between two 2-bit quantized RIS-aided systems in which the RIS elements have different amplitudes of the reflected signal. The reflection coefficients of the RISs are determined by the proposed group-based query algorithm. The '00', '01', '10', '11' states in curve (1) correspond to reflection amplitudes of 0 dB, \(-5\) dB, \(-6\) dB, and \(-2\) dB, respectively. In curve (2), the states '00', '01', '10', '11' correspond to reflection amplitudes of 0 dB, \(-6\) dB, \(-10\) dB, and \(-3\) dB, respectively. It can be seen from this figure that the derived theoretical expressions match well with the simulation results, which validates the proposed theorem. Besides, from the comparison of curves (1) and (2), we can see that when the RIS system has sufficient or comparatively large phase shifting capability, lower reflection amplitudes lead to a lower LARP, which is expected because lower reflected signal amplitudes mean a lower reflected signal power. However, when the phase shifting capability of the RIS system falls below a certain threshold, lower reflection amplitudes yield a higher LARP. This is due to the fact that when the phase shifting capability of the system is reduced to a certain level, the quantization error \(\delta\) will be greater than \(90^{\circ}\) for some desired phases, meaning that the reflected signals of those RIS elements have a negative impact on the LARP; in this case, lower reflection amplitudes make this negative impact smaller.
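The comparison of the two amplitude profiles can be emulated numerically via the structure of Theorem 2. The sketch below is a Monte Carlo approximation under assumed placeholder values of \(\kappa_{\text{NLoS}}\) and \(\kappa_{\text{LoS}}\) (so only the relative ordering of the outputs is meaningful, not their absolute scale).

```python
# Monte Carlo sketch of Theorem 2's LARP for the two 2-bit amplitude profiles
# of Fig. 6, under group-based-query-style quantization (solving (P3) pointwise).
import numpy as np

def larp(amps_db, levels_deg, kappa_nlos, kappa_los, M=4096, ns=100_000, seed=3):
    rng = np.random.default_rng(seed)
    amps = 10 ** (np.asarray(amps_db, dtype=float) / 20)
    levels = np.deg2rad(levels_deg)
    # Eq. (35)-(36): a_NLoS = kappa_NLoS, a_LoS = M * Abar * kappa_LoS
    a_nlos = kappa_nlos
    a_los = M * (amps**2).sum() / amps.sum() * kappa_los
    theta = rng.uniform(0, 2 * np.pi, ns)                     # desired phases
    obj = a_nlos * amps[None, :] ** 2 + a_los * amps[None, :] * np.cos(
        levels[None, :] - theta[:, None])
    best = obj.argmax(axis=1)                                 # (P3) per sample
    A, delta = amps[best], levels[best] - theta
    # E[A^2] and |E[A e^{j delta}]|^2 for independent elements, cf. Eq. (44)
    return kappa_nlos * M * (A**2).mean() + kappa_los * M**2 * np.abs(
        (A * np.exp(1j * delta)).mean()) ** 2

levels = [0, 90, 180, 270]                          # sufficient-capability 2-bit RIS
print(larp([0, -5, -6, -2], levels, 1e-9, 1e-9))    # curve (1) amplitudes
print(larp([0, -6, -10, -3], levels, 1e-9, 1e-9))   # curve (2) amplitudes
```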
### _Experimental Measurements_

In this subsection, experimental results validate the effect of realistic discrete phase shifters on the performance of RIS-aided communication systems. We established a measurement system and employed two different RISs with non-ideal reflection coefficients.

Fig. 7: The measurement platform for RIS-aided wireless communications.

Fig. 7 illustrates the measurement system, which includes the RIS, an RF vector signal generator (Tektronix TSG4106A), a Tx horn antenna, an RF signal spectrum analyzer (Rohde & Schwarz ZNB 8), cables, and blockages (electromagnetic absorbers). The RIS, Tx horn antenna, and Rx horn antenna are horizontally polarized and well matched in the experimental measurements. As shown in Fig. 7 (a), the transmitting and receiving antennas are positioned on a semicircle with the RIS at the center and a radius of \(d=2.5\) m. The transmitting and receiving horn antennas are aligned with the center of the RIS. The RF vector signal generator provides the RF signal to the Tx horn antenna. The signal reflected by the RIS propagates over the distance \(d\) and is received by the Rx horn antenna and the RF signal spectrum analyzer, which gives the measured received signal power. Fig. 8 shows the RISs used in the different scenarios. The RIS in Fig. 8 (a) operates at 5.8 GHz with element sizes of \(d_{h}=14.3\) mm and \(d_{v}=10.27\) mm, and the number of elements is \(M=1100\). More details of this RIS can be found in our previous work [6]. The RIS in Fig. 8 (b) operates at 2.6 GHz and has \(M=256\) elements with element sizes \(d_{h}=45\) mm and \(d_{v}=45\) mm. Both RISs are 1-bit regulated by varactor diodes. Since altering the bias voltage changes the impedance of the varactor diode as well as the losses induced by the dielectric substrate, metal plate, etc., the phase and amplitude of the reflected signal fluctuate accordingly. We measured the phase and amplitude differences of the two states of the 5.8 GHz RIS at different incident angles at 3 V and 7 V bias voltages, representing state '0' and state '1', respectively. As shown in Table I, the reflection coefficient of the RIS is sensitive to the incident angle, implying that an RIS system with sufficient phase shifting capability at a specific incident angle may become less than satisfactory when the incident angle is altered. Then, using the 5.8 GHz RIS, we conduct experiments to investigate the impact of incidence angle changes on the received power and compare the measured results to those calculated according to Proposition 1. In this scenario, for evaluating the system performance degradation, the system performance at \(10^{\circ}\) incidence serves as the baseline. We move the transmitting antenna to various angles and select 12 random positions on the circular arc \(R\) illustrated in Fig. 7 (a), measure the power after RIS beamforming at each of these positions, and average the results. As shown in Table I, the system with sufficient phase shifting capability at \(10^{\circ}\) incidence shows a diminishing trend in beamforming capability as the incident angle increases.
As shown in Fig. 9, the system performance decreases as the angle of incidence increases. Besides, the measured curve follows the same trend as the theoretical curve, with the biggest difference being only about 0.3 dB. This discrepancy may be due to environmental factors. Furthermore, based on the 2.6 GHz RIS prototype, we emulate an RIS system lacking sufficient phase shifting capability, as can result from the RIS element design, by varying the bias voltages corresponding to state '0' and state '1'; we evaluate the performance of the RIS system with different phase shifting capabilities and compare it to the theoretical results. We fixed the incident angle at 10\({}^{\circ}\) and measured the relationship between the 2.6 GHz RIS control voltage and the phase shift and amplitude of each RIS element; the results are presented in Fig. 10. The bias voltage is then manually adjusted to vary the phase difference between state '0' and state '1' of the RIS. Six sets of bias voltages based on the measured results in Fig. 10 are chosen to make the phase difference 180\({}^{\circ}\), 150\({}^{\circ}\), 120\({}^{\circ}\), 90\({}^{\circ}\), 60\({}^{\circ}\), and 30\({}^{\circ}\), respectively. In this scenario, the received power of the RIS at the set of bias voltages that produces a 180\({}^{\circ}\) phase difference serves as the baseline for the performance comparison. For each pair of bias voltages, 12 positions are chosen at random on the circular arc \(R\) shown in Fig. 7 (a). The received power after RIS beamforming is measured at these positions and the results are averaged to obtain the measurement results shown in Fig. 11.

\begin{table}
\begin{tabular}{|c c c c c c c|} \hline \hline
**Incident angle (degree)** & 10 & 20 & 30 & 40 & 50 & 60 \\ \hline
**Phase difference (degree)** & 180 & 160 & 132 & 117 & 107 & 76 \\ \hline
**Amplitude difference (dB)** & 2 & 0.7 & 0.1 & 0.3 & 2.3 & 1.5 \\ \hline \hline
\end{tabular}
\end{table}
TABLE I: Measured Reflection Coefficients

Fig. 8: Photographs of the RISs utilized for the performance measurements of RIS-aided wireless communications.

Fig. 9: Performance degradation versus incident angle.

Fig. 10: The relationship between the control voltage and the coefficient of the elements for the 2.6 GHz RIS.

We observe that the beamforming capability of the RIS diminishes as the phase shifting capability of the system decreases. Hence, sufficient phase shifting capability must be ensured at the RIS element design stage. The measured curve has the same trend as the theoretical curve, and the biggest difference is only about 0.3 dB. This discrepancy, like that in the previous experiment, could be attributed to environmental influences.

## V Conclusion

In this paper, we proposed a realistic reflection coefficient model for RIS-aided wireless communication systems, which takes into account the discreteness of the phase shifts, the attenuation of the reflected signal, and the limited phase shifting capability. The maximum received power of the user based on this model was derived. We then proposed a group-based query algorithm to maximize the received power for the RIS-aided system with the realistic reflection coefficient model. We analyzed the asymptotic performance of the proposed algorithm and derived the closed-form expression for the maximum long-term average received power. Finally, by conducting both simulations and corresponding experiments with the fabricated RIS prototype systems, we verified the proposed theoretical results.
The simulation and measurement results all match quite well with the analytical results.

Fig. 11: Performance degradation versus phase shifting capability.

## Appendix A Proof of Proposition 1

By applying (2) and (3) in \(\left\|\mathbf{f}^{H}\mathbf{\Phi}\mathbf{h}\right\|^{2}\), the expansion (48) into the four terms \(x_{1}=\sqrt{K_{1}K_{2}}\,\mathbf{\bar{f}}^{H}\mathbf{\Phi}\mathbf{\bar{h}}\), \(x_{2}=\sqrt{K_{1}}\,\mathbf{\tilde{f}}^{H}\mathbf{\Phi}\mathbf{\bar{h}}\), \(x_{3}=\sqrt{K_{2}}\,\mathbf{\bar{f}}^{H}\mathbf{\Phi}\mathbf{\tilde{h}}\), and \(x_{4}=\mathbf{\tilde{f}}^{H}\mathbf{\Phi}\mathbf{\tilde{h}}\) holds. Therefore, \(\mathbb{E}\left\{\left\|\mathbf{f}^{H}\mathbf{\Phi}\mathbf{h}\right\|^{2}\right\}\) is given by

\[\mathbb{E}\left\{\left\|\mathbf{f}^{H}\mathbf{\Phi}\mathbf{h}\right\|^{2}\right\}=\frac{1}{\left(K_{1}+1\right)\left(K_{2}+1\right)}\left(\sum_{i=1}^{4}\mathbb{E}\left\{\left\|x_{i}\right\|^{2}\right\}+\sum_{i=1,j=1,i\neq j}^{4}\mathbb{E}\left\{x_{i}^{H}x_{j}\right\}\right). \tag{49}\]

Since the NLoS components are independent of each other and have zero means, any correlation between the channel matrices is zero. We observe that

\[\mathbb{E}\left\{x_{i}^{H}x_{j}\right\}=0,\ i,j=1,2,3,4,\ i\neq j. \tag{50}\]

For the LoS channel, by applying (4) and (5) in \(\mathbb{E}\left\{\left\|x_{1}\right\|^{2}\right\}\), we may derive

\[\mathbb{E}\left\{\left\|x_{1}\right\|^{2}\right\}=K_{1}K_{2}\left(\mathbf{\bar{f}}^{H}\mathbf{\Phi}\mathbf{\bar{h}}\right)^{H}\left(\mathbf{\bar{f}}^{H}\mathbf{\Phi}\mathbf{\bar{h}}\right)=K_{1}K_{2}\sqrt{D_{1}^{-\alpha}d_{1}^{-\alpha}}G_{a}\left(\sum_{m,m^{\prime}}A_{m}A_{m^{\prime}}e^{-j\left[\phi_{m}-\phi_{m^{\prime}}+\theta_{m}-\theta_{m^{\prime}}\right]}\right). \tag{51}\]

According to random matrix theory, it is easy to obtain

\[\mathbb{E}\left\{\mathbf{\Phi}^{H}\mathbf{\Phi}\right\}=\mathrm{diag}\left(A_{1}^{2},\cdots,A_{m}^{2},\cdots,A_{M}^{2}\right),\quad\mathbb{E}\left\{\mathbf{\bar{f}}^{H}\mathbf{\bar{f}}\right\}=M,\quad\mathbb{E}\left\{\mathbf{\tilde{h}}^{H}\mathbf{\tilde{h}}\right\}=1. \tag{52}\]

Thus, we may derive

\[\mathbb{E}\left\{\left\|x_{2}\right\|^{2}\right\}=K_{1}\mathbb{E}\left\{\mathbf{\bar{h}}^{H}\mathbf{\Phi}^{H}\mathbf{\tilde{f}}\mathbf{\tilde{f}}^{H}\mathbf{\Phi}\mathbf{\bar{h}}\right\}=K_{1}\sqrt{G_{a}D_{1}^{-\alpha}}L(d_{1})\sum_{m}A_{m}^{2}. \tag{53}\]

Similarly,

\[\mathbb{E}\left\{\left\|x_{3}\right\|^{2}\right\}=K_{2}\mathbb{E}\left\{\mathbf{\tilde{h}}^{H}\mathbf{\Phi}^{H}\mathbf{\bar{f}}\mathbf{\bar{f}}^{H}\mathbf{\Phi}\mathbf{\tilde{h}}\right\}=K_{2}\sqrt{G_{a}d_{1}^{-\alpha}}L(D_{1})\sum_{m}A_{m}^{2}, \tag{54}\]

\[\mathbb{E}\left\{\left\|x_{4}\right\|^{2}\right\}=\mathbb{E}\left\{\mathbf{\tilde{h}}^{H}\mathbf{\Phi}^{H}\mathbf{\tilde{f}}\mathbf{\tilde{f}}^{H}\mathbf{\Phi}\mathbf{\tilde{h}}\right\}=L(D_{1})L(d_{1})\sum_{m}A_{m}^{2}. \tag{55}\]

Then, by applying (49) and (51)-(55) to (9), we obtain:

\[\Gamma=\kappa_{\text{NLoS}}\sum_{m}A_{m}^{2}+\kappa_{\text{LoS}}\sum_{m,m^{\prime}}A_{m}A_{m^{\prime}}e^{-j\left[\phi_{m}-\phi_{m^{\prime}}+\theta_{m}-\theta_{m^{\prime}}\right]}. \tag{56}\]

This ends the proof.

## Appendix B Proof of Theorem 1

We first define a function \(\varphi(\delta_{m},\delta_{m^{\prime}})\) as

\[\varphi(\delta_{m},\delta_{m^{\prime}})=\sin\delta_{m}\sin\delta_{m^{\prime}}+\cos\delta_{m}\cos\delta_{m^{\prime}}. \tag{58}\]

Then, according to the expression of the LARP shown in (10), we obtain the expectation of the maximum LARP \(\mathbb{E}\left(\hat{\Gamma}_{\text{max}}\right)\):

\[\mathbb{E}\left(\hat{\Gamma}_{\text{max}}\right)=\kappa_{\text{NLoS}}M+\kappa_{\text{LoS}}M^{2}\mathbb{E}\left[\varphi(\delta_{m},\delta_{m^{\prime}})\right]. \tag{59}\]
Since \(K_{1}\), \(K_{2}\), \(\eta_{\text{NLoS}1}\), \(\eta_{\text{NLoS}2}\), \(\eta_{\text{NLoS}3}\), \(\eta_{\text{LoS}}\), and \(M\) are constant, we will focus on \(\mathbb{E}\left[\varphi(\delta_{m},\delta_{m^{\prime}})\right]\) in the following. Because the expected phase shift \(\theta_{m}^{*}\) is uniformly distributed in \([0,2\pi)\) and the discrete phase shift closest to the expected one is chosen to achieve the maximum LARP, depending on the phase shifting capability \(\omega\), the quantization error \(\delta\) is either uniformly distributed on \(\left[-\frac{\pi}{2^{k}},\frac{\pi}{2^{k}}\right)\), or piecewise uniformly distributed over the three contiguous subintervals \(\left[\frac{\omega}{2}-\pi,-\frac{\omega}{2\left(2^{k}-1\right)}\right)\), \(\left[-\frac{\omega}{2\left(2^{k}-1\right)},\frac{\omega}{2\left(2^{k}-1\right)}\right)\), and \(\left[\frac{\omega}{2\left(2^{k}-1\right)},\pi-\frac{\omega}{2}\right)\). When \(\omega<\frac{2^{k}-1}{2^{k-1}}\pi\), the probability density function (PDF) of \(\delta\) is obtained as

\[f_{\delta}\left(\delta\right)=\begin{cases}\frac{2^{k}}{2\pi},&\delta\in\left[-\frac{\omega}{2\left(2^{k}-1\right)},\frac{\omega}{2\left(2^{k}-1\right)}\right);\\ \frac{1}{2\pi},&\delta\in\left[\frac{\omega}{2}-\pi,-\frac{\omega}{2\left(2^{k}-1\right)}\right)\cup\left[\frac{\omega}{2\left(2^{k}-1\right)},\pi-\frac{\omega}{2}\right);\\ 0,&\text{otherwise}.\end{cases} \tag{60}\]

Otherwise, when \(\omega\geq\frac{2^{k}-1}{2^{k-1}}\pi\), the PDF of \(\delta\) is obtained as

\[f_{\delta}\left(\delta\right)=\begin{cases}\frac{2^{k}}{2\pi},&\delta\in\left[-\frac{\pi}{2^{k}},\frac{\pi}{2^{k}}\right);\\ 0,&\text{otherwise}.\end{cases} \tag{61}\]

In terms of the PDF of \(\delta\), \(\mathbb{E}\left[\varphi(\delta_{m},\delta_{m^{\prime}})\right]\) for an RIS-aided system with phase shifting capability \(\omega\) is expressed as

\[\mathbb{E}\left[\varphi(\delta_{m},\delta_{m^{\prime}})\right]=\begin{cases}\int_{-\frac{\pi}{2^{k}}}^{\frac{\pi}{2^{k}}}\int_{-\frac{\pi}{2^{k}}}^{\frac{\pi}{2^{k}}}\frac{2^{2k-2}}{\pi^{2}}\varphi(\delta_{m},\delta_{m^{\prime}})\,d\delta_{m}d\delta_{m^{\prime}},&\omega\geq\frac{2^{k}-1}{2^{k-1}}\pi;\\[4pt] \int_{-a}^{-b}\int_{-a}^{-b}P_{2}^{2}\,\varphi\,d\delta_{m}d\delta_{m^{\prime}}+\int_{-a}^{-b}\int_{-b}^{b}P_{2}P_{1}\,\varphi\,d\delta_{m}d\delta_{m^{\prime}}+\int_{-a}^{-b}\int_{b}^{a}P_{2}^{2}\,\varphi\,d\delta_{m}d\delta_{m^{\prime}}\\ +\int_{-b}^{b}\int_{-a}^{-b}P_{1}P_{2}\,\varphi\,d\delta_{m}d\delta_{m^{\prime}}+\int_{-b}^{b}\int_{-b}^{b}P_{1}^{2}\,\varphi\,d\delta_{m}d\delta_{m^{\prime}}+\int_{-b}^{b}\int_{b}^{a}P_{1}P_{2}\,\varphi\,d\delta_{m}d\delta_{m^{\prime}}\\ +\int_{b}^{a}\int_{-a}^{-b}P_{2}^{2}\,\varphi\,d\delta_{m}d\delta_{m^{\prime}}+\int_{b}^{a}\int_{-b}^{b}P_{2}P_{1}\,\varphi\,d\delta_{m}d\delta_{m^{\prime}}+\int_{b}^{a}\int_{b}^{a}P_{2}^{2}\,\varphi\,d\delta_{m}d\delta_{m^{\prime}},&\omega<\frac{2^{k}-1}{2^{k-1}}\pi,\end{cases} \tag{57}\]

where \(a=\pi-\frac{\omega}{2}\), \(b=\frac{\omega}{2\left(2^{k}-1\right)}\), \(P_{1}=\frac{2^{k}}{2\pi}\), \(P_{2}=\frac{1}{2\pi}\), and \(\varphi\) abbreviates \(\varphi(\delta_{m},\delta_{m^{\prime}})\). Applying (58) to (57) and through some basic algebraic manipulations, we derive that

\[\mathbb{E}\left[\varphi(\delta_{m},\delta_{m^{\prime}})\right]=\begin{cases}\frac{2^{2k}}{\pi^{2}}\sin^{2}\frac{\pi}{2^{k}},&\omega\geq\frac{2^{k}-1}{2^{k-1}}\pi;\\ 4\left[P_{1}\sin b+P_{2}(\sin a-\sin b)\right]^{2},&\omega<\frac{2^{k}-1}{2^{k-1}}\pi.\end{cases} \tag{62}\]

By substituting (62) into (59), \(\mathbb{E}\left(\hat{\Gamma}_{\text{max}}\right)\) is obtained as (28). This ends the proof.

## Appendix C Proof of Theorem 2

According to the expression of the LARP shown in (10), the maximum LARP expectation \(\mathbb{E}(\hat{\Gamma}_{\text{max}})\) in this scenario is given by

\[\mathbb{E}(\hat{\Gamma}_{\text{max}})=\kappa_{\text{NLoS}}M\,\mathbb{E}\left[A_{m}^{2}\right]+\kappa_{\text{LoS}}M^{2}\,\mathbb{E}\left[\varrho(\delta_{m},\delta_{m^{\prime}})\right], \tag{63}\]

where

\[\varrho(\delta_{m},\delta_{m^{\prime}})=A_{m}A_{m^{\prime}}\left(\sin\delta_{m}\sin\delta_{m^{\prime}}+\cos\delta_{m}\cos\delta_{m^{\prime}}\right). \tag{64}\]

As \(K_{1}\), \(K_{2}\), \(\eta_{\text{NLoS}1}\), \(\eta_{\text{NLoS}2}\), \(\eta_{\text{NLoS}3}\), \(\eta_{\text{LoS}}\), and \(M\) are constant, the key to deriving \(\mathbb{E}(\hat{\Gamma}_{\text{max}})\) is to derive \(\mathbb{E}\left[A_{m}^{2}\right]\) and \(\mathbb{E}\left[\varrho(\delta_{m},\delta_{m^{\prime}})\right]\). Since the optimal phase shift \(\theta^{*}\) is uniformly distributed in \([0,2\pi)\), the probability density of \(\theta^{*}\) in \([0,2\pi)\) is always \(1/2\pi\).
Therefore, the probability of using the \(i\)-th quantized reflection coefficient is obtained as \[P_{i}=\frac{\mu_{i}}{2\pi}, \tag{65}\] where \(\mu_{i}\) denotes the length of \(c_{i}\), which is the optimal phase shift range corresponding to each quantized reflection coefficient \(\Phi_{i}\) and is obtained from the group-based query algorithm. Thus, \(\mathbb{E}\left[A_{m}^{2}\right]\) is obtained as \[\mathbb{E}\left[A_{m}^{2}\right]=\sum_{i}P_{i}A_{\theta_{i}}^{2}=\sum_{i}\frac{\mu_{i}}{2\pi}A_{\theta_{i}}^{2}. \tag{66}\] For each \(c_{i}\), the corresponding quantization error range \(d_{i}\) is denoted by \[d_{i}=c_{i}-\theta_{i}. \tag{67}\] Since the probability density of \(\theta^{*}\) in \([0,2\pi)\) is always \(1/2\pi\), the joint probability density of \(\delta_{m}\) and \(\delta_{m^{\prime}}\) is \(1/4\pi^{2}\) when \(\delta_{m}\in d_{i}\) and \(\delta_{m^{\prime}}\in d_{i^{\prime}}\), \(i,i^{\prime}=1,\cdots,2^{k}\). Thus, \(\mathbb{E}\left[\varrho(\delta_{m},\delta_{m^{\prime}})\right]\) is obtained as \[\mathbb{E}\left[\varrho(\delta_{m},\delta_{m^{\prime}})\right]=\sum_{i,i^{\prime}}\int_{\delta_{m}\in d_{i}}\int_{\delta_{m^{\prime}}\in d_{i^{\prime}}}\frac{1}{4\pi^{2}}\varrho(\delta_{m},\delta_{m^{\prime}})\,d\delta_{m}d\delta_{m^{\prime}}. \tag{68}\] By substituting (66) and (68) into (63), the LARP expectation \(\mathbb{E}\left(\hat{\Gamma}_{\text{max}}\right)\) is obtained as (44). This ends the proof. ## Appendix D Proof of Corollary 1 We first define \(\omega^{\prime}=\min\left(\omega,\pi\right)\).
When the RIS is 1-bit coded and the channels are pure LoS paths, the optimization problem (P3) for the reflection coefficient of the \(m\)-th element is simplified to \[\hat{\phi}_{m}^{*}=\max\left[A_{\theta_{1}}\cos(-\theta_{m}^{*}),A_{\theta_{2}}\cos(\omega^{\prime}-\theta_{m}^{*})\right], \tag{69}\] which can be rewritten as \[\hat{\phi}_{m}^{*}=\begin{cases}\Phi_{1},&\left[A_{\theta_{1}}\cos(-\theta_{m}^{*})-A_{\theta_{2}}\cos(\omega^{\prime}-\theta_{m}^{*})\right]>0;\\ \Phi_{2},&\left[A_{\theta_{1}}\cos(-\theta_{m}^{*})-A_{\theta_{2}}\cos(\omega^{\prime}-\theta_{m}^{*})\right]<0.\end{cases} \tag{70}\] To solve for the optimal phase shift ranges \(c_{i}\), we define a function \[\zeta(\theta)=A_{\theta_{1}}\cos(-\theta)-A_{\theta_{2}}\cos(\omega^{\prime}-\theta). \tag{71}\] Eq. (71) can be converted to \[\zeta(\theta)=-\sqrt{A_{\theta_{2}}^{2}\sin^{2}\omega^{\prime}+\left(A_{\theta_{2}}\cos\omega^{\prime}-A_{\theta_{1}}\right)^{2}}\times\sin\left(\theta+\arctan\frac{A_{\theta_{2}}\cos\omega^{\prime}-A_{\theta_{1}}}{A_{\theta_{2}}\sin\omega^{\prime}}\right). \tag{72}\] Since (72) is in sinusoidal form, we rewrite (70) as \[\hat{\phi}_{m}^{*}=\begin{cases}A_{\theta_{1}}e^{-j\omega^{\prime}\cdot 0},&\theta_{m}^{*}\in[0,\psi_{1})\cup[\psi_{2},2\pi);\\ A_{\theta_{2}}e^{-j\omega^{\prime}\cdot 1},&\theta_{m}^{*}\in[\psi_{1},\psi_{2}),\end{cases} \tag{73}\] where \(\psi_{1}\) and \(\psi_{2}\) are given by \[\psi_{1}=-\arctan\frac{A_{\theta_{2}}\cos\omega^{\prime}-A_{\theta_{1}}}{A_{\theta_{2}}\sin\omega^{\prime}}\in\left[0,\frac{\pi}{2}\right)\cup\left(\frac{\pi}{2},\pi\right), \tag{74}\] \[\psi_{2}=\pi-\arctan\frac{A_{\theta_{2}}\cos\omega^{\prime}-A_{\theta_{1}}}{A_{\theta_{2}}\sin\omega^{\prime}}\in\left[\pi,\frac{3\pi}{2}\right)\cup\left(\frac{3\pi}{2},2\pi\right). \tag{75}\] Applying \(c_{1}\in[0,\psi_{1})\cup[\psi_{2},2\pi)\) and \(c_{2}\in[\psi_{1},\psi_{2})\) in (68), we derive that \[\mathbb{E}\left[\varrho(\delta_{m},\delta_{m^{\prime}})\right]=\frac{A_{\theta_{1}}^{2}+A_{\theta_{2}}^{2}-2A_{\theta_{1}}A_{\theta_{2}}\cos\omega^{\prime}}{\pi^{2}}. \tag{76}\] By substituting (76) into (63), \(\mathbb{E}\left(\hat{\Gamma}_{\text{max}}\right)\) is obtained as (46). This ends the proof.
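As a quick sanity check of the closed forms above, the following is a minimal Monte Carlo sketch (not from the paper): assuming, per (61), that the quantization errors \(\delta_{m}\) and \(\delta_{m^{\prime}}\) are independent and uniform on \([-\pi/2^{k},\pi/2^{k}]\) in the full-capability case, it compares the sample mean of \(\varphi(\delta_{m},\delta_{m^{\prime}})\) with the first case of (62).

```python
import numpy as np

# Monte Carlo check of the first case of Eq. (62): with delta uniform on
# [-pi/2^k, pi/2^k], E[phi] should equal (2^{2k}/pi^2) * sin^2(pi/2^k).
rng = np.random.default_rng(0)
for k in (1, 2, 3, 4):
    half = np.pi / 2**k
    dm = rng.uniform(-half, half, 1_000_000)   # quantization error delta_m
    dmp = rng.uniform(-half, half, 1_000_000)  # quantization error delta_m'
    mc = np.mean(np.sin(dm) * np.sin(dmp) + np.cos(dm) * np.cos(dmp))
    closed = (2 ** (2 * k) / np.pi**2) * np.sin(np.pi / 2**k) ** 2
    print(f"k={k}: Monte Carlo {mc:.5f} vs. closed form {closed:.5f}")
```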
2310.07720
Parametric Leaky Tanh: A New Hybrid Activation Function for Deep Learning
Activation functions (AFs) are crucial components of deep neural networks (DNNs), having a significant impact on their performance. An activation function in a DNN is typically a smooth, nonlinear function that transforms an input signal into an output signal for the subsequent layer. In this paper, we propose the Parametric Leaky Tanh (PLTanh), a novel hybrid activation function designed to combine the strengths of both the Tanh and Leaky ReLU (LReLU) activation functions. PLTanh is differentiable at all points and addresses the 'dying ReLU' problem by ensuring a non-zero gradient for negative inputs, consistent with the behavior of LReLU. By integrating the unique advantages of these two diverse activation functions, PLTanh facilitates the learning of more intricate nonlinear relationships within the network. This paper presents an empirical evaluation of PLTanh against established activation functions, namely ReLU, LReLU, and ALReLU utilizing five diverse datasets.
Stamatis Mastromichalakis
2023-08-11T08:59:27Z
http://arxiv.org/abs/2310.07720v1
# Parametric Leaky Tanh: A New Hybrid Activation Function for Deep Learning ###### Abstract Activation functions (AFs) are crucial components of deep neural networks (DNNs), having a significant impact on their performance. An activation function in a DNN is typically a smooth, nonlinear function that transforms an input signal into an output signal for the subsequent layer. In this paper, we propose the Parametric Leaky Tanh (PLTanh), a novel hybrid activation function designed to combine the strengths of both the Tanh and Leaky ReLU (LReLU) activation functions. PLTanh is differentiable at all points and addresses the 'dying ReLU' problem by ensuring a non-zero gradient for negative inputs, consistent with the behavior of LReLU. By integrating the unique advantages of these two diverse activation functions, PLTanh facilitates the learning of more intricate nonlinear relationships within the network. This paper presents an empirical evaluation of PLTanh against established activation functions, namely ReLU, LReLU, and ALReLU utilizing five diverse datasets. A Parametric Leaky Tanh: A New Hybrid Activation Function for Deep Learning Stamatis Mastromichalakis1 1London South Bank University / IST College, Pireos 72, GR-18346, Moschato, Athens, Greece Email: [email protected] Footnote 1: [https://github.com/pamatis/](https://github.com/pamatis/) **MSC Subject Classification:** 68T07, 68T45, 68T10, 68T50, 68U35 **Keywords:** Activation Function, dying / vanishing gradients, Tanh, Leaky ReLU, Deep Neural Networks ## 1 Introduction Activation functions (AFs) are instrumental in shaping the performance of Deep Neural Networks (DNNs), responsible for transforming input signals into output signals for subsequent network layers. In this study, we propose a novel activation function, Parametric Leaky Tanh (PLTanh) that harnesses the benefits of both Tanh and LReLU to enhance DNN performance. The Tanh activation function is a smooth, differentiable function that maps real numbers to the interval [-1, 1]. Its bounded nature and symmetric properties around the origin make it a fitting choice for applications that require resilience to outliers or expect output centered around zero. In contrast, the LReLU activation function, favored in DNNs for its computational efficiency, introduces a slight gradient for negative input values, preventing the 'dying ReLU' problem commonly encountered with traditional ReLU. This ensures that all negative inputs contribute to the learning process. By merging the properties of Tanh and LReLU, we formulate a new hybrid activation function capable of learning more complex nonlinear relationships. Such a combination can pave the way for models that establish more sophisticated relationships between input and output data, contributing to more robust performance and better generalization. While Tanh and LReLU are widely employed in DNNs, they each come with their own set of strengths and weaknesses. Our proposed combination seeks to significantly enhance the performance of a DNN by capitalizing on their individual strengths and mitigating their weaknesses. The combined benefits of centering from Tanh and non-zero gradients for negative inputs from LReLU can lead to improved DNN performance. 
Despite significant advancements in the development of activation functions, such as QReLU/m-QReLU (Parisi et al., 2020), ALReLU (Mastromichalakis, 2020), and SigmaReLU (Mastromichalakis, 2021), traditional activation functions like the Sigmoid and Tanh are still plagued by the well-known vanishing gradient problem. Traditional ReLU offers more accuracy and scalability for DNNs but is susceptible to the 'dying ReLU' problem. Several variants of ReLU, such as the Leaky ReLU (LReLU), Parametric ReLU (PReLU), Randomised ReLU (RReLU), and Concatenated ReLU (CReLU), were developed to address these challenges. For instance, LReLU (Maas et al., 2013) provides a small negative slope for negative inputs, leading to minor improvements in classification performance compared to the original ReLU. However, these AFs often encounter robustness issues in classification tasks of varying complexity, such as slow convergence or non-convergence (Vert and Vert, 2006), and frequently fall into local minima (Parisi et al., 2020). In this study, we introduce a novel variant of the tanh AF to alleviate the common vanishing gradient and 'dying ReLU' issues. Based on numerical evaluations, our method offers substantial improvements in training and classification procedures compared with ReLU, LReLU, and ALReLU across five distinct datasets. Evaluation metrics such as accuracy, AUC, recall, precision, and F1-score were computed to assess the performance of our proposed technique and provide a reliable, objective basis for comparison. The rest of this paper is structured as follows: Section 2 provides the main contribution of this study, detailing the implementation of PLTanh in Keras. Section 3 presents experimental results and an evaluation of the training accuracy, comparing PLTanh with other established AFs in the field. Finally, Section 4 concludes with a discussion and summary of our findings. ## 2 Methods and Algorithms ### Datasets and Models Used for Training The experiments in this study utilized various datasets encompassing image, text, and tabular data classifications. The specific datasets employed were: * MNIST Dataset * Fashion MNIST Dataset * TensorFlow Flowers Dataset * CIFAR-10 Dataset * Histopathologic Cancer Detection Dataset used in the 2019 Kaggle competition ([https://www.kaggle.com/c/histopathologic-cancer-detection/data](https://www.kaggle.com/c/histopathologic-cancer-detection/data)) For the training of the MNIST and Fashion MNIST datasets, a deep Convolutional Neural Network (CNN) model was used, with the following layers: * A convolutional layer consisting of 32 filters, each with a kernel size of 3x3. The corresponding activation function (AF) was applied after this layer. * A Max Pooling 2D layer for downsampling the input. * A Flatten layer to transform the 2D matrix data to a 1D vector. * A final Dense layer with softmax activation for outputting probabilities for the classes. For the TensorFlow Flowers dataset, a deeper Convolutional Neural Network (CNN) model was utilized. The architecture is as follows: * The first layer is a Conv2D layer equipped with 32 filters of size 3x3, utilizing the corresponding activation function under test in each respective case. The layer also processes an input shape of (32, 32, 3). * A Max Pooling 2D layer is then used for downsampling the input representation, followed by a Dropout layer with a rate of 0.25 to reduce overfitting.
* A second Conv2D layer with 64 filters of size 3x3 is then added, again using the corresponding AF, followed by another Max Pooling 2D layer and Dropout layer (with the same rate of 0.25). * A third Conv2D layer is then applied, this time with 128 filters of size 3x3 and the corresponding AF, followed by a Dropout layer with a rate of 0.4. * The data is then flattened from a 2D matrix to a 1D vector using a Flatten layer. * This is followed by a Dense layer with 128 units and the corresponding AF, and another Dropout layer with a rate of 0.3. * Finally, the output layer is a Dense layer with 5 units (representing the number of classes in the Flowers dataset) and a softmax activation function for outputting the probability distribution across the classes. For the CIFAR-10 dataset, the following CNN was used: * The first layer of the model is a Conv2D layer with 32 filters of size 3x3, using the corresponding AF under test. This layer applies 'same' padding and accepts an input shape of (32, 32, 3). This is followed by a Batch Normalization layer. * This is followed by another Conv2D layer with 32 filters of size 3x3, also using the corresponding AF and 'same' padding. This is followed by another Batch Normalization layer, a MaxPooling2D layer with a pool size of 2x2, and a Dropout layer with a rate of 0.2. * The model then repeats a similar structure: a Conv2D layer with 64 filters and the corresponding AF, a Batch Normalization layer, another Conv2D layer with 64 filters and the same activation function, another Batch Normalization layer, a MaxPooling2D layer (2x2), and a Dropout layer with a rate of 0.3. * Again, a similar structure follows, with Conv2D layers having 128 filters, along with Batch Normalization, MaxPooling2D (2x2), and Dropout (rate 0.4) layers. * The Conv2D layers are followed by a Flatten layer, a Dense layer with 128 units using the activation function under test, a Batch Normalization layer, a Dropout layer with a rate of 0.5, and a final Dense layer with 10 units and a softmax activation function. For the Histopathologic Cancer Detection dataset, the following CNN was used: * The model incorporates five convolutional layers in total. The first layer employs a kernel size of 5x5, whereas the remaining four utilize a kernel size of 3x3. * The number of convolutional filters in these layers increases in a progressive sequence, starting from 32 for the first layer and ending with 512 for the fifth layer. * After each convolutional layer, a Max Pooling operation and Batch Normalization are applied. * Dropout layers are also included after each convolutional layer. The dropout rates used for these layers gradually increase from 0.1 in the first layer to 0.5 in the fifth layer. * The tested activation functions are incorporated after every convolutional layer. * The model includes a Global Average Pooling layer, followed by the chosen activation function, Batch Normalization, and a Dropout layer with a rate of 0.3. * A Dense layer with 256 units is then added. This layer is followed by the activation function, Batch Normalization, and a Dropout layer with a rate of 0.4. * The final layer is a Dense layer with a softmax activation function. The number of neurons in this layer matches the number of output classes for each respective dataset. All models are compiled with the Adam optimizer. ### The PLTanh Activation Function The Rectified Linear Unit (ReLU) is among the most frequently employed activation functions (AFs) in contemporary neural networks.
Its use between layers introduces nonlinearity, thus enabling the network to handle complex, nonlinear datasets. ReLU and its derivative are expressed in Eqs. (1) and (2). \[f\left(x\right)=\begin{cases}0,&x<0\\ x,&x\geq 0\end{cases} \tag{1}\] \[\frac{d}{dx}f\left(x\right)=\begin{cases}0,&x<0\\ 1,&x>0\end{cases} \tag{2}\] Despite its widespread use and success in deep neural networks (DNNs), ReLU possesses some inherent drawbacks. First, ReLU is not continuously differentiable, which, while not detrimental, can slightly affect the training performance. This is due to the undefined gradient at \(x=0\). Furthermore, ReLU sets all values less than 0 to zero, a feature that can be advantageous for sparse data. However, the gradient there is also 0, meaning that neurons reaching large negative values risk getting stuck at 0 - a phenomenon colloquially referred to as the 'dying ReLU' problem. Consequently, these 'dead' neurons halt the network's learning progression, leading to suboptimal performance. Even with the careful initialization of weights to small random values, the summed input to the traditional ReLU AF can remain negative, irrespective of the input values supplied to the neural network. To address these issues, variants of the ReLU function, such as the Leaky ReLU (LReLU), have been developed. These variants aim to deliver a more nonlinear output for small negative values or ease the transition from positive to small negative values, albeit without fully resolving the issue. The LReLU attempts to solve these problems by providing a small, non-zero slope for negative inputs to the ReLU function. Fig. 1 and Eqs. (3) and (4) show the LReLU and its derivative. \[f(x)=\begin{cases}x,&x>0\\ \alpha x,&x\leq 0\end{cases}\qquad\text{where }\alpha=0.01 \tag{3}\] \[\frac{d}{dx}f(x)=\begin{cases}0.01,&x<0\\ 1,&x>0\end{cases} \tag{4}\] Although LReLU theoretically solves the 'dying ReLU' problem, it has not actually been proven to improve classification performance. Indeed, in several studies the LReLU performance is the same as or lower than that of ReLU. The Parametric Leaky Tanh (PLTanh) activation function introduced in this research aims to mitigate the challenges often associated with the traditional Leaky ReLU and tanh functions. This activation function is given by \(\max(\tanh(x),\alpha|x|)\), harnessing the strengths of both Tanh and Leaky ReLU. The tanh activation function maps real numbers into the interval [-1,1], producing a normalized output that stands resilient against outliers. On the other hand, the Leaky ReLU function, apart from maintaining the positive part of its input, also introduces a slight gradient for the negative values, ensuring that all neurons remain active during the learning process. By doing so, it addresses the "dead neuron" issue, a scenario where neurons may only output zero for all inputs, often seen with the conventional ReLU. The PLTanh function synergizes the merits of these two activation functions while effectively circumventing their limitations. The versatility of the PLTanh activation function comes from its potential to handle a diverse range of input data, thereby bolstering the learning process of deep neural networks. The inclusion of the alpha parameter offers adaptability, allowing the function to be attuned to various data distributions.
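Before the formal case analysis below, here is a minimal NumPy sketch (not from the paper) of the definition \(\max(\tanh(x),\alpha|x|)\) with \(\alpha=0.01\); a central finite difference illustrates which branch is active at a few sample points.

```python
import numpy as np

# A minimal sketch of PLTanh with alpha = 0.01, plus a finite-difference
# derivative to illustrate the branch structure discussed below.
def pltanh(x, alpha=0.01):
    return np.maximum(np.tanh(x), alpha * np.abs(x))

xs = np.array([-50.0, -1.0, 0.5, 2.0, 150.0])
h = 1e-6
grad = (pltanh(xs + h) - pltanh(xs - h)) / (2 * h)
for x, g in zip(xs, grad):
    print(f"x={x:7.1f}  PLTanh={pltanh(x):8.4f}  dPLTanh/dx={g:8.4f}")
# Negative inputs sit on the alpha*|x| branch (slope -0.01); moderate positive
# inputs follow tanh (slope sech^2(x)); very large x crosses to slope +0.01.
```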
In essence, PLTanh is crafted to amalgamate the advantages of both Tanh and LReLU, while simultaneously countering their inherent weaknesses. By offering a well-adjusted response to both positive and negative inputs, it stands as a potent candidate for an efficient activation function in deep neural networks.

Figure 1: Blue: LReLU AF, Orange: LReLU Derivative

Fig. 3 and Eqs. (5) and (6) show PLTanh and its derivative, with \(\alpha=0.01\): \[f\left(x\right)=\begin{cases}\tanh(x),&\tanh(x)\geq\alpha|x|\\ \alpha|x|,&\tanh(x)<\alpha|x|\end{cases} \tag{5}\] \[\frac{d}{dx}f\left(x\right)=\begin{cases}0.01,&x>0\ \wedge\ 0.01x\geq\tanh(x)\\ -0.01,&x<0\ \wedge\ 0.01x+\tanh(x)\leq 0\\ \text{undefined},&\tanh(x)=0.01|x|\ \text{(crossover points)}\\ \operatorname{sech}^{2}(x),&\text{otherwise}\end{cases} \tag{6}\] The derivative of the Parametric Leaky Tanh (PLTanh) function adheres to the specified rules, divided into distinct conditions. The derivative is 0.01 when \(x\) is greater than 0 and \(0.01x\) is greater than or equal to \(\tanh(x)\). This reflects a 'leaky' behavior for positive values where the linear term surpasses the tanh component. The derivative is \(-0.01\) when \(x\) is less than 0 and \(0.01x+\tanh(x)\) is less than or equal to 0, i.e., when \(0.01|x|\geq\tanh(x)\). This demonstrates the 'leaky' behavior for negative values, effectively taking the larger of the linear term and the tanh component.

Figure 3: Blue: PLTanh AF, Orange: PLTanh Derivative

At the crossover points, where \(\tanh(x)=0.01|x|\) and the two branches meet, the derivative is not uniquely defined. In all other circumstances, the derivative is expressed as \(\operatorname{sech}^{2}(x)\), reflecting the derivative behavior of the Tanh function. The unique configuration of the PLTanh function results in a special derivative profile. It incorporates the merits of the LReLU activation function, notably its swift learning attributes and the ability to introduce gradients for positive and slightly negative inputs. Simultaneously, it taps into the Tanh activation function's ability to render a non-linear, consistently varying gradient over its entire domain. The derivative of the PLTanh function seamlessly fuses the benefits of both the LReLU and Tanh activation functions. It offers a mix of gradients - some constant, others more dynamic - depending on the input. As such, PLTanh is adept at circumventing challenges often seen with either function in isolation, like the 'dying ReLU' phenomenon or the vanishing gradients challenge associated with the Tanh function. Listing 1 provides the Keras implementation code for PLTanh.

```python
import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras.utils import get_custom_objects

def PLTanh(x):
    alpha = 0.01
    return K.maximum(tf.keras.activations.tanh(x), alpha * K.abs(x))

# Register PLTanh so that layers can refer to it by name.
get_custom_objects().update({'PLTanh': tf.keras.layers.Activation(PLTanh)})

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(32, kernel_size=(3, 3), activation='PLTanh',
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax')
])
```

## 3 Experimental study and results

The evaluation of our trained neural network models was conducted on specific datasets using a 5-Fold cross-validation approach.
This statistical method plays a crucial role in preventing overfitting, while also serving as a reliable means of comparing different learning algorithms. Moreover, Bayesian optimization was utilized to pinpoint the \(\alpha\) parameter of PLTanh for each dataset. To ensure consistency and dependability in our results, all tests were performed on an RTX3090 GPU. The results discussed in this section are average measures derived from the 5-Fold cross-validation process. Notably, these results underline the strengths of the proposed Parametric Leaky Tanh (PLTanh) activation function in handling image classification tasks. The classification performance metrics are depicted in Table 1 and further elaborated in the subsequent sections. The experimental results, as indicated in the table, highlight the performance of the proposed Parametric Leaky Tanh (PLTanh) activation function, ALReLU, LReLU, and ReLU on various datasets. For the MNIST dataset, the PLTanh model exhibited superior performance metrics, with Macro Precision, Accuracy, Macro Recall, and Macro F1 scores of 98.16%, 98.17%, 98.16%, and 98.15% respectively, slightly outperforming the other activation functions. On the Fashion MNIST dataset, the PLTanh function also prevailed with the highest Macro Precision of 90.58%, Accuracy of 90.33%, Macro Recall of 90.33%, and Macro F1 score of 90.38%, surpassing the performance of the other functions.

\begin{table} \begin{tabular}{l l l l l l} & **Performance measures** & **PLTanh (this study)** & **ALReLU** & **LReLU** & **ReLU** \\ \hline \multirow{5}{*}{**MNIST**} & **Macro Precision** & 98.16\% & 98.11\% & 98.13\% & 98.09\% \\ & **Accuracy** & 98.17\% & 98.11\% & 98.14\% & 98.09\% \\ & **Macro Recall** & 98.16\% & 98.09\% & 98.12\% & 98.07\% \\ & **AUC** & 99.99\% & 99.99\% & 99.99\% & 99.99\% \\ & **Macro F1** & 98.15\% & 98.09\% & 98.12\% & 98.08\% \\ \hline \multirow{5}{*}{**Fashion MNIST**} & **Macro Precision** & 90.58\% & 90.31\% & 90.34\% & 90.39\% \\ & **Accuracy** & 90.33\% & 90.22\% & 90.21\% & 90.21\% \\ & **Macro Recall** & 90.33\% & 90.22\% & 90.21\% & 90.21\% \\ & **AUC** & 99.26\% & 99.21\% & 99.21\% & 99.21\% \\ & **Macro F1** & 90.38\% & 90.22\% & 90.24\% & 90.22\% \\ \hline \multirow{5}{*}{**Tf Flowers**} & **Macro Precision** & 73.36\% & 72.75\% & 72.50\% & 73.45\% \\ & **Accuracy** & 73.21\% & 72.07\% & 72.31\% & 72.77\% \\ & **Macro Recall** & 72.85\% & 71.52\% & 71.95\% & 72.07\% \\ & **AUC** & 92.93\% & 92.48\% & 92.74\% & 92.57\% \\ & **Macro F1** & 72.58\% & 71.80\% & 72.01\% & 72.37\% \\ \hline \multirow{5}{*}{**CIFAR-10**} & **Macro Precision** & 85.89\% & 85.36\% & 85.75\% & 85.60\% \\ & **Accuracy** & 85.87\% & 85.16\% & 85.56\% & 85.5\% \\ & **Macro Recall** & 85.87\% & 85.16\% & 85.56\% & 85.5\% \\ & **AUC** & 98.81\% & 98.75\% & 98.78\% & 98.78\% \\ & **Macro F1** & 85.81\% & 85.11\% & 85.53\% & 85.46\% \\ \hline \multirow{5}{*}{**Histopathologic Cancer**} & **Macro Precision** & 87\% & 88\% & 89\% & 89\% \\ & **Accuracy** & 86.68\% & 86.69\% & 87.34\% & 87.48\% \\ & **Macro Recall** & 87\% & 87\% & 87\% & 87\% \\ & **AUC** & 92.78\% & 95.3\% & 95.45\% & 95.21\% \\ & **Macro F1** & 87\% & 87\% & 87\% & 87\% \\ \hline \end{tabular} \end{table} Table 1: Classification performance measures for PLTanh, ALReLU, LReLU and ReLU on the various datasets.

In the Tf Flowers dataset, the PLTanh model demonstrated a higher Accuracy of 73.21%, Macro Recall of 72.85%, and Macro F1 score of 72.58%, as compared to the other activation functions.
On the CIFAR-10 dataset, the PLTanh model outshone the others with an Accuracy, Macro Recall, and Macro F1 score of 85.87%, 85.87%, and 85.81%, respectively. However, for the Histopathologic Cancer Detection (Kaggle) dataset, the PLTanh model's performance was slightly lower than the others, but still commendable with an Accuracy of 86.68%, Macro Precision and Macro Recall of 87%, and a Macro F1 score of 87%. In all cases, the Area Under the Curve (AUC) scores were remarkably high for all activation functions, indicating a strong discriminative power for the positive and negative classes. These results collectively validate the strong performance of the PLTanh activation function, demonstrating its potential in handling various types of classification tasks. ## 4 Conclusion In conclusion, this study has examined the potential of a new activation function, the Parametric Leaky Tanh (PLTanh), which is a combination of LReLU and Tanh, in comparison to existing ones such as ALReLU, LReLU, and ReLU across multiple datasets. Our experiments demonstrated that PLTanh generally exhibits superior performance metrics, outshining its counterparts in most cases. This is particularly noteworthy considering that PLTanh addresses some fundamental limitations of both LReLU and Tanh functions. PLTanh proved its robustness and efficiency in diverse datasets, with different types of data, varying from images to text. However, there is still room for improvement, as seen in the Histopathologic Cancer Detection dataset, where PLTanh was slightly outperformed by the other functions. Future work could investigate further refinement of the PLTanh parameters, aiming to further improve its generalization capabilities across a wider range of tasks. This study highlights the importance of continuous exploration in the field of neural network activation functions, driving improvements in model performance and efficiency.
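For readers who wish to reproduce the evaluation protocol of Section 3, the following is a rough sketch of the 5-fold cross-validation loop. It is not the authors' exact pipeline: it assumes the small MNIST CNN of Section 2.1, the PLTanh definition of Listing 1, and scikit-learn's KFold, and the epoch and batch-size values are placeholders.

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import KFold

# PLTanh as in Listing 1, written with plain TF ops.
def pltanh(x, alpha=0.01):
    return tf.maximum(tf.tanh(x), alpha * tf.abs(x))

(x, y), _ = tf.keras.datasets.mnist.load_data()
x = x[..., None].astype("float32") / 255.0

scores = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(x):
    # Small MNIST CNN of Section 2.1, with PLTanh after the convolution.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, (3, 3), activation=pltanh,
                               input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x[train_idx], y[train_idx], epochs=3, batch_size=128, verbose=0)
    scores.append(model.evaluate(x[val_idx], y[val_idx], verbose=0)[1])
print(f"mean 5-fold accuracy: {np.mean(scores):.4f}")
```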
2306.14904
Determining Smallest Path Size of Multiplication Transducers Without a Restricted Digit Set
Directed multiplication transducers are a tool for performing non-decimal base multiplication without an additional conversion to base 10. This allows for faster computation and provides easier visualization depending on the problem at hand. By building these multiplication transducers computationally, new patterns can be identified as these transducers can be built with much larger bases and multipliers. Through a recursive approach, we created artificial multiplication transducers, allowing for the formation of several unique conjectures specifically focused on the smallest closed loop around a multiplication transducer starting and ending at zero. We show a general recursive pattern for this loop; through this recurrence relation, the length of the smallest closed loop for a particular transducer base b along with the range of multipliers having this particular length for multiplier m was also identified. This research is expected to be explored further by testing reductions of the digit set and determining whether similar properties will hold.
Aditya Mittal, Karthik Mittal
2023-06-14T22:56:12Z
http://arxiv.org/abs/2306.14904v1
# Determining Smallest Path Size of Multiplication Transducers Without a Restricted Digit Set ###### Abstract Directed multiplication transducers are a tool for performing non-decimal base multiplication without an additional conversion to base 10. This allows for faster computation and provides easier visualization depending on the problem at hand. By building these multiplication transducers computationally, new patterns can be identified as these transducers can be built with much larger bases and multipliers. Through a Python-based recursive approach, we created artificial multiplication transducers, allowing for the formation of several unique conjectures specifically focused on the smallest closed loop around a multiplication transducer starting and ending at zero. We show a general recursive pattern for this loop; through this recurrence relation, the length of the smallest closed loop for a particular transducer base \(b\) along with the range of multipliers having this particular length for multiplier \(m\) was also identified. This research is expected to be explored further by testing reductions of the digit set and determining whether similar properties will hold. ## 1 Introduction Directed multiplication transducers are tools to multiply a number written in base \(b\) without the need for conversion into an intermediary base such as base 10. These transducers can run at computationally faster speeds due to this property. Finding patterns in these transducers (e.g. recursive formulas and minimum path lengths for specific base and multiplier transducers) can introduce faster methods for base computations and create new breakthroughs in the field of quotient sets. This paper analyzes multiplication transducers with no excluded digits and determines patterns in paths (where all states in the path are distinct) within the multiplication transducer starting and ending with zero as a state. For all bases \(b\) and multipliers \(m\), a generalized conclusion can be made on the length and values of states within the shortest path. This paper presents a formula for the length of the shortest possible closed loop given base \(b\) and multiplier \(m\) and provides a recursive solution for finding this using depth first search and Python libraries. This newfound method of identifying these paths can lead to faster computational analysis regarding the multiplication of numbers in different bases; research will be performed in the future to find this formula for restricted digits. The applications of multiplication transducers are varied, but they can be seen mostly in number theory and automata. By recognizing these paths, faster computations can be made, and shortcuts can be found to emerging problems within the field. ### Overview of Multiplication Transducers Multiplication transducers can consolidate the information stored in base multiplication into an easily explainable diagram [1][3]. Base multiplication has five essential components at step \(i\): the carry-in value \(c_{i}\), the read value \(r_{i}\), the total value \(t_{i}\), the write value \(w_{i}\) and the carry over value \(c_{i+1}\). Elementary one-digit base ten multiplication (which has only one step) can be discussed in order to understand these characteristics. **Example:** Let's take a base ten example where we are multiplying 5 by 6. In variables, this means \(m=6\), \(b=10\), and \(r=5\). We know this can be done with ordinary multiplication, but it can also be completed using multiplication transducers.
Some notes can be taken: * The carry-over value for the first step is 0, because we aren't carrying over anything. This means that \(c_{0}=0\). * The read value for the first step is 5, which means \(r_{0}=5\). Note that if the read value was 15 for instance, then \(r_{0}=5\) and \(r_{1}=1\). **Step 1** (\(i=0\)): \(c_{0}=0\) (our initial state) and \(r_{0}=5\). The following is determined: * We first compute the total. This can be done by multiplying our current read by our multiplier, and then adding over any carry values from earlier. This gives us a total of \((5*6)+0=30\). * Next, we have to calculate the carry value. Since 30 consists of 3 10's, this means that the carry value or \(c_{1}=\lfloor\frac{30}{10}\rfloor=3\). * Lastly, we have to calculate the write value, the value that we will write down from this step. This is calculated by finding the remainder when dividing the total by the base. This means that the first write value or \(w_{0}=30\,(\text{mod}\,10)=0\). Some things to note before the next step: * The carry-over value for the second step is 3, which means that \(c_{1}=3\). * The read value for the second step is 0 because our read was only one digit. This means that \(r_{1}=0\). **Step 2** (\(i=1\)): \(c_{1}=3\) and \(r_{1}=0\). The following is determined: * We first compute the total. This can be done by multiplying our current read by our multiplier, and then adding over any carry values from earlier. This gives us a total of \((6*0)+3=3\). * Next, we have to calculate the carry value. Since 3 consists of \(0\,10\)'s, this means that the carry value or \(c_{2}=\lfloor\frac{3}{10}\rfloor=0\). * Lastly, we have to calculate the write value, the value that we will write down from this step. This is calculated by finding the remainder when dividing the total by the base. This means that the second write value or \(w_{1}=3\,(\mathrm{mod}\,10)=3\). Note that both our read value and carry value for the third step are zero. This means that our total will be zero, which means that the write value will be zero. Therefore, we are done with our calculations. We have \(w_{0}=0\) and \(w_{1}=3\), so our final answer is \(w=30\). This can now be expressed with variables. For a one-digit \(r\) by \(m\) base ten multiplication operation, \(c_{0}=0\) since there is no underlying carry value from a previous multiplication. \(r\) is the read value while \(m\) is the multiplier. The total can be represented as: \[t_{0}=r_{0}m+c_{0}=rm+c_{0}=10c_{1}+w_{0} \tag{1}\] The equation above is justified since the carry value \(c_{i+1}\) is always written when the total is too large to be expressed as a single digit, as in base ten multiplication operations. In this case, \(c_{1}\) would be \(w_{1}\) since \(r_{1}=0\) and \(r_{1}m=0\) (assuming that \(r\) and \(m\) are both one digit). However, when dealing with larger numbers, this number will be carried over to the next multiplication, until the operation is solved, so that the carry over value \(c_{i+1}\) becomes the next step's carry-in value. We can expand this concept to any \(r\) by \(m\) multiplication operation in base \(b\). Let \(l_{w}\) be the length of the final write value \(w\) in base \(b\) and \(l_{r}\) be the length of the read value in base \(b\). Then, for \(r\) and \(w\) in base ten, \[w=\sum_{i=0}^{l_{w}-1}w_{i}b^{i},\quad r=\sum_{i=0}^{l_{r}-1}r_{i}b^{i} \tag{2}\] For \(r\) and \(w\) in base \(b\), \(r=[r_{l_{r}-1}r_{l_{r}-2}...r_{0}]_{b}\) and \(w=[w_{l_{w}-1}w_{l_{w}-2}...w_{0}]_{b}\).
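As a small illustration (a sketch, not from the paper) of the positional expansions in Eq. (2), the hypothetical helpers to_digits and from_digits below convert between an integer and its base-\(b\) digit list:

```python
# to_digits writes a number as its base-b digits (least significant first);
# from_digits evaluates sum_i d_i * b^i, matching Eq. (2).
def to_digits(n, b):
    digits = []
    while n:
        digits.append(n % b)   # r_0, then r_1, ...
        n //= b
    return digits or [0]

def from_digits(digits, b):
    return sum(d * b**i for i, d in enumerate(digits))

assert to_digits(20, 3) == [2, 0, 2]      # 20 = [202]_3
assert from_digits([2, 0, 2], 3) == 20
```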
The read value \(r_{i}\) for multi-digit operations will be the last value index of \(r\) for the first operation (\(i=0\)), the penultimate value index for the second (\(i=1\)), and so on. Therefore, the generalized total for step \(i\) in a base \(b\) multiplication operation can be written as: \[t_{i}=r_{i}m+c_{i}=bc_{i+1}+w_{i} \tag{3}\] The write value \(w_{i}\) and the carry over value \(c_{i+1}\) are found with: \[c_{i+1}=\left\lfloor\frac{t_{i}}{b}\right\rfloor;\,w_{i}=t_{i}\ (\mbox{mod b}) \tag{4}\] A multiplication transducer represents all the distinct combinations of this base multiplication for a predefined multiplier \(m\) and base \(b\), iterating through the different possible combinations between the read value \(r_{i}\) and the carry in value \(c_{i}\). In multiplication transducers, carry in values are often represented by states (denoted by circles). As shown in Figure 1, the corresponding read value \(r_{i}\) and write value \(w_{i}\) are written adjacent to the arrow pointing from state \(c_{i}\) to \(c_{j}\), in the notation (\(r_{i}\), \(w_{i}\)). The total is calculated using a different interpretation of the equation above: \[t_{i}=r_{i}m+c_{i}=bc_{j}+w_{i} \tag{5}\] The carry value \(c_{i}=\{0,1,...,m-1\}\) and read value \(r_{i}=\{0,1,...,b-1\}\) represent the total of \(mb\) combinations in the multiplication transducer of designated base \(b\) and multiplier \(m\), or \(T_{m,b}\).

Figure 1: Representation of the multiplication transducer \(T_{4,3}\).

### Base 3 Example Let's take an example of \(m=4\) and \(b=3\). Let \(r=[20]_{10}=[202]_{3}\). We can use multiplication transducers to determine \([202]_{3}*4\) without converting to base 10. **Step 1** (\(i=0\)): \(c_{0}=0\) (our initial state) and \(r_{0}=2\). The following is determined: * \(t_{0}=r_{0}m+c_{0}=2*4+0=8\) * \(c_{1}=\lfloor\frac{t_{0}}{b}\rfloor=\lfloor\frac{8}{3}\rfloor=2\) * \(w_{0}=t_{0}\) (mod b) = 8 (mod 3) \(=2\) * \(c_{0}\) to \(c_{1}\): The state in Figure 1 changes from 0 to 2 with a read value of 2 and a write value of 2. This corresponds with our calculations in Step 1, as \(r_{0}=2\) and \(w_{0}=2\). **Step 2** (\(i=1\)): \(c_{1}=2\) and \(r_{1}=0\). The following is determined: * \(t_{1}=r_{1}m+c_{1}=0*4+2=2\) * \(c_{2}=\lfloor\frac{t_{1}}{b}\rfloor=\lfloor\frac{2}{3}\rfloor=0\) * \(w_{1}=t_{1}\) (mod b) = 2 (mod 3) \(=2\) * \(c_{1}\) to \(c_{2}\): The state in Figure 1 changes from 2 to 0 with a read value of 0 and a write value of 2. This corresponds with our calculations in Step 2, as \(r_{1}=0\) and \(w_{1}=2\). **Step 3** (\(i=2\)): \(c_{2}=0\) and \(r_{2}=2\). The following is determined: * \(t_{2}=r_{2}m+c_{2}=2*4+0=8\) * \(c_{3}=\lfloor\frac{t_{2}}{b}\rfloor=\lfloor\frac{8}{3}\rfloor=2\) * \(w_{2}=t_{2}\) (mod b) = 8 (mod 3) \(=2\) * \(c_{2}\) to \(c_{3}\): The state in Figure 1 changes from 0 to 2 with a read value of 2 and a write value of 2. This corresponds with our calculations in Step 3, as \(r_{2}=2\) and \(w_{2}=2\). **Step 4** (\(i=3\)): \(c_{3}=2\) and \(r_{3}=0\) (because \([0202]_{3}=[202]_{3}\)). The following is determined: * \(t_{3}=r_{3}m+c_{3}=0*4+2=2\) * \(c_{4}=\lfloor\frac{t_{3}}{b}\rfloor=\lfloor\frac{2}{3}\rfloor=0\) * \(w_{3}=t_{3}\) (mod b) = 2 (mod 3) \(=2\) * \(c_{3}\) to \(c_{4}\): The state in Figure 1 changes from 2 to 0 with a read value of 0 and a write value of 2. This corresponds with our calculations in Step 4, as \(r_{3}=0\) and \(w_{3}=2\). The iteration is terminated when there are no more read values (\(i>l_{r}\)) and when \(c_{i+1}=0\).
Adding all of the write values will give our final value. We know \(w\) in base 3 is \([w_{3}w_{2}w_{1}w_{0}]_{3}=[2222]_{3}\). Using Equation 2 for \(w\), \(w=(2*3^{0})+(2*3^{1})+(2*3^{2})+(2*3^{3})=80\). We can see this equates to the more familiar base 10 multiplication of \(r*m=20*4=80\). ## 2 Methods The methods below will primarily cover the steps towards finding the minimum length for a path of \(c\)'s starting and ending at zero for \(T_{m,b}\). Finding these minimum lengths (if the length is greater than zero) will allow us to determine whether a quotient set exists for \(T_{m,b}\); these quotient sets will form the basis of many conjectures that will be outlined later in this paper. ### Visualizer The visualizer works by producing an artificial multiplication transducer (with values stored in a data structure) and then transforming this structural representation into a visual one similar to that of Figure 1. A visual representation of the multiplication transducer was generated using Python libraries like Matplotlib in order to further support the logic behind the conjectures outlined in this paper; however, these visualizers became difficult to read as \(b\) and \(m\) increase due to the fast growth of combinations between different states. Since the visualizer supports these conjectures for smaller bases/multipliers, this limitation does not hinder how the conjectures are established. For making the artificial representation of the transducer, every combination of \(c_{i}\) and \(r_{i}\) was iterated through, and the corresponding \(c_{i+1}\) and \(w_{i}\) were calculated from these. Note that Equations 3 and 4 can be used to calculate these values. Since \(c_{i}\) has \(m\) possible values while \(r_{i}\) has \(l_{r}\) values, the runtime complexity of this step is \(O(ml_{r})\). Note that since there are only \(m\) carry values, even if \(l_{r}>m\), only \(m\) calculations need to be made. Therefore, the runtime complexity for this is \(O(m^{2})\). Secondly, Matplotlib was used to convert this data into a visual multiplication transducer. Various functions were used, such as plt.Circle (which created the structure to house the carry-in values), plt.arrow (which created lines between the carry-in values), and plt.text (which helped create text on the graph that made the visualizer easier to view). In addition, different colors and line styles (e.g. blue dashed lines) were used to represent the write and read values respectively for the arrow between the carry-in values. Using these processes, a figure similar to Figure 2 was created. These particular linestyles were chosen due to their distinctness from the other linestyles represented on the transducer; for easier comprehension of the transducer itself, it was necessary to choose differentiating characteristics for each of the linestyles so that the reader can understand which line corresponds to a specific read value. Besides the simple linestyles provided by Matplotlib (e.g. 'solid', 'dotted', and 'dashed'), more complex linestyles were adopted to increase the number of read values that can be represented on the visualizer; this control was achieved by providing a dash tuple with the form (offset, (on_off_seq)). For example, (0, (2, 7, 1, 14)) represents a 2 pt line, a 7 pt space, a 1 pt line, and a 14 pt space with no offset. A similar process was done for the colors with the write values, where only extremely distinct colors were chosen (e.g. red, green, blue, etc.).
More nuanced colors were discarded due to their poor visibility. Note that the visual multiplication transducer should only be used for \(m<8\) and \(b<8\), because values greater than that will produce a multiplication transducer that is difficult to comprehend (see Figure 3). The implementation behind the visualizer can be referenced here. ### Multiplication Transducer Traversal To find and validate these conjectures, an artificial multiplication transducer was formed in Python by iterating through the possible read and carry value combinations for a particular base \(b\) and multiplier \(m\). Since multiplication transducers for different bases and multipliers can be difficult to compute and draw non-computationally, a multiplication transducer formed by a computational algorithm provided the perfect method to scale bases and multipliers efficiently. Two different approaches were mainly used to generate the pathways to make the multiplication transducer itself and to traverse these pathways to find the minimum length path \(p\) that starts and ends at zero: the networkx library and the depth first search algorithm. Note that these two different methods were both implemented in order to ensure the validity of the findings outlined in this paper.

Figure 3: Representation of a more complex multiplication transducer \(T_{10,7}\) using the visualizer.

Figure 2: Representation of the multiplication transducer \(T_{4,3}\) using the visualizer.

#### 2.2.1 NetworkX Library The networkx library provides a platform for building and analyzing graphs and networks using Python. By building and manipulating complex structures, the networkx library is widely used by computational mathematicians wanting to solve new conjectures in graph theory. As shown in Figure 1, multiplication transducers can be seen as the kind of networks that networkx manipulates. The states can be interpreted as the vertices of the network, while the corresponding arrows with the read and write values represent the edges of the network. These transducers can be formed computationally using the networkx library through the .add_node() and .add_edge() commands. Since the networkx library has a number of different standard graph algorithms for different niche cases and provides different measures for analysis, it was the optimal library for building a multiplication transducer. Specifically, the .DiGraph() command was used, as the multiplication transducer acts as a directed graph since there are arrows pointing to the next possible state inside the transducer (meaning that it has a direction). After the generation of this multiplication transducer, this directed graph was then traversed using the networkx library to find the shortest possible paths in the graph that start at state zero and end at state zero (one of the conditions necessary for a path to be part of the quotient set). The implementation behind the networkx library can be referenced here. [2] #### 2.2.2 Depth First Search The depth first search algorithm (DFS) traverses tree structures by starting at the root node and travelling down each possible path to minimize a specified parameter. Although algorithms such as breadth-first search (BFS) have similar time complexities of \(O(|V|+|E|)\), with \(V\) being the number of states and \(E\) the number of arrows between the states, DFS is more suitable due to its inherent algorithmic structure since the first states explored (e.g. state 0, state 1, etc.)
often provide the optimal solution, and there are many more solutions farther away from the source. The generation of the multiplication transducer uses a similar strategy to the one seen in Section 2.1, since the networkx library is not utilized for graph traversal in this scenario, unlike in Section 2.2.1. Note that the logic in Section 2.2.1 showing that a multiplication transducer is a directed graph also allows DFS to be implemented on the artificial transducer. Since this algorithm was run in Python, arrays were used instead of stacks (which follow a last-in-first-out pattern). A recursive approach was mainly utilized to perform this depth first search; the code used can be referenced here. ### Challenges #### 2.3.1 C++ Implementation For most programs, C++ is computationally faster at running algorithms than Python; this is why it is often the preferred language for time-intensive programs. Therefore, we attempted a C++ implementation for forming the artificial multiplication transducer to reduce the time and space complexity of our operations. However, the operations performed by this algorithm ran slower instead of faster, as allocating space to a vector took a computationally intensive amount of time, especially for larger bases and multipliers. Therefore, in the end, we took a Python-based approach to build the transducer. The implementation for building the transducer using C++ can be seen here. #### 2.3.2 NetworkX Visualizer We considered the networkx library when building the visualizer, in order to build a more efficient and visually appealing model than with the Matplotlib library. Due to its versatility, we believed that the networkx library could be used to not only build the artificial multiplication transducer but also create it visually; this would provide a consolidated approach for building these multiplication transducers, involving the use of only one library. However, when creating this visualization, there was no option to produce different linestyles or colors, which meant that different read and write values could not be differentiated. Additionally, the scale of the graph could not be altered, which meant that the multiplication transducer became too cluttered even for small bases. After seeing this effect with networkx, we decided that a manually-made multiplication transducer would be optimal.

Figure 4: Side by side representation of a BFS vs. DFS approach.

## 3 Results **Conjecture 1:** For all natural numbers \(b\), \(m>1\), the path of carry values \(c_{i}\)'s that is the smallest closed loop across a multiplication transducer \(T_{m,b}\) starting and ending from \(0\) is: * \(c_{0}=0\). * \(c_{1}=\lfloor\frac{m}{b}\rfloor\). * \(c_{i}=\lfloor\frac{c_{i-1}}{b}\rfloor\) for \(i\geq 2\). _Note._ The conjecture has been computationally checked for all \(b<2000\) and \(m<2000\). Additionally, note that for the conditions stated in Theorem 1, \(c_{0}=0\), so \(t_{0}=r_{0}m\) and \(c_{1}=\lfloor\frac{r_{0}m}{b}\rfloor\). This indicates that \(r_{0}=1\), since \(c_{1}=\lfloor\frac{m}{b}\rfloor\) as stated in the theorem. Similarly, since \(c_{i}=\lfloor\frac{c_{i-1}}{b}\rfloor\), \(c_{2}=\lfloor\frac{c_{1}}{b}\rfloor=\lfloor\frac{t_{1}}{b}\rfloor\). This indicates \(t_{1}=c_{1}\), and since \(t_{1}=r_{1}m+c_{1}\), \(r_{1}m=0\). Since multiplier \(m\geq 2\), \(r_{1}=0\). Following this pattern, it can be noted that \(r_{0}=1\) and \(r_{i}=0\) for all \(i=1,...,l_{r}-1\).
Therefore, \(r\) can be represented as \([00...1]_{b}=[1]_{b}=1\) for all instances of base \(b\) and multiplier \(m\). **Theorem 1:** For the path of carry values \(c_{i}\)'s that make the smallest closed loop across a multiplication transducer \(T_{m,b}\) starting and ending from \(0\), the read and write values are: * \(r=[1]_{b}=1\). * \(w=m\). _Proof._ First, note that the smallest closed loop across a multiplication transducer \(T_{m,b}\) must contain a \(c_{i}\neq 0\). Therefore, in order to arrive at the smallest closed loop, the path needs to produce a non-zero carry value that will get the path back to the carry value of zero as quickly as possible. It needs to choose the smallest carries in order to arrive at this smallest path. Since the total is calculated as \(t_{0}=r_{0}m+c_{0}\) and \(c_{0}=0\), the fastest way to get to the smallest state/carry value is by making \(r_{0}=1\). This is because \(t_{0}=r_{0}m\), and any \(r_{0}>1\) would produce a state greater than what is produced by \(r_{0}=1\), since the next state is \(c_{1}=\lfloor\frac{t_{0}}{b}\rfloor=\lfloor\frac{r_{0}m}{b}\rfloor\). Notice that this mirrors the result seen in the note following the first conjecture. If \(r_{0}=1\), then \(t_{0}=m\). Therefore, the next carry value is \(\lfloor\frac{m}{b}\rfloor\) while the write value is \(w_{0}=t_{0}\,(\text{mod b})=m\,(\text{mod b})\). Now that it has left state zero, the path aims to be as short as possible. In our expression \(t_{i}=r_{i}m+c_{i}\), we can't control anything except \(r_{i}\), since \(m\) is predefined and \(c_{i}\) depends on the previous calculation. The simplest way to get the transducer back to state zero is to make \(r_{i}=0\). Therefore, \(t_{i}=(0)(m)+c_{i}\), so \(t_{i}=c_{i}\). Note that \(t_{1}=\lfloor\frac{m}{b}\rfloor\), and each subsequent carry can be calculated by taking the integer division of the previous state by the base. \(\Box\) To understand this concept in further detail, let's take an example of base \(b=3\) and multiplier \(m=10\): **Step 1** (\(i=0\)): \(c_{0}=0\) (our initial state) and \(r_{0}=1\). Therefore, * \(t_{0}=r_{0}m+c_{0}=1*10+0=10\) * \(c_{1}=\lfloor\frac{t_{0}}{b}\rfloor=\lfloor\frac{10}{3}\rfloor=3\) * \(w_{0}=t_{0}\) (mod b) = 10 (mod 3) \(=1\)
_Proof:_ Note that this combination of read values always produces the shortest path as it goes to the state that is just far enough to escape zero, and then it takes the fastest approach to go back to zero afterwards. Knowing this, the length of the shortest path is \(\lfloor m^{1/b}\rfloor+2\) for \(b\geq 2\) by using arithmetic logic. First, note that the addition of two is to account for the zeroes in the beginning and ending of the shortest path. For the path between these zeroes, \(\lfloor m^{1/b}\rfloor\) can be used to denote the length. Remember that these numbers are \(c_{i}=\lfloor\frac{l_{i-1}}{b}\rfloor\) where \(t_{0}=m\) and \(t_{i}=c_{i}\). Knowing this is the case, that means that there is an integer division between m and \(b^{l_{w}-1}\) at the very last step where \(l_{w}\) represents the length of the write value. This is because base \(m\) is divided by base \(b\) in all steps of this base division except for the first since \(t_{i}\) equals \(c_{i}\). Knowing that the integer division needs to produce a zero in order to produce a closed path, then \(\frac{m}{b^{l_{w}-1}}<1\) or \(m<b^{l_{w}-1}\). Since \(l_{w}-1=l_{p}\) where \(l_{p}\) represents the length of the smallest path of base \(b\) and multiplier \(m\) (excluding the first and last zeroes), this formula can be rewritten as \(m<b^{l_{p}}\). Doing algebraic manipulation on this inequality, we can reasonably conclude that the length of the smallest closed set (excluding the zeroes) or \(l_{p}\) of base \(b\) and multiplier \(m\) is \(\lfloor m^{1/b}\rfloor\). Therefore, we can conclude that the length of the smallest closed set (including the zeroes) of a particular base \(b\) and multiplier \(m\) is \(\lfloor m^{1/b}\rfloor+2\). \(\Box\) **Corollary 2:** The multipliers that have a length of \(n+1\) for the shortest closed loop across a multiplication transducer \(T_{m,b}\) starting and ending at \(0\) for a particular \(b\) has a range of \(m\in[b^{n-1},b^{n}-1]\) for all \(n\geq 3\) and \(b\geq 2\). Therefore, the number of multipliers that have a length of \(n+1\) for a particular \(b\) is \(b^{n-1}(b-1)\) for all \(n\geq 3\) and \(b\geq 2\). _Proof._ We prove this corollary by proving that there are sharp bounds for multipliers that have a length of \(n+1\) for the shortest closed loop and that the length of the shortest closed loop with respect to multipliers is monotonically increasing. First, note that Theorem 1 shows that the length of the shortest closed loop with respect to multipliers in monotonically increasing, since \(f(m)=\lfloor m^{1/b}\rfloor+2\) is monotonically increasing. We then prove the lower bound of \(m=b^{n-1}\) is sharp. Let \(m=b^{n-1}\). Then, we try to prove that the length of the shortest closed loop around a multiplication transducer \(T_{b^{n-1},b}\) starting and ending at \(0\) for any \(b\) is \(n+1\). Note that the first two elements in the path are \(0\) and \(\lfloor\frac{b^{n-1}}{b}\rfloor=\lfloor b^{n-2}\rfloor\). The third element in the path is \(\lfloor\frac{b^{n-2}}{b}\rfloor=\lfloor b^{n-3}\rfloor\). This means that the ith element is \(\lfloor b^{n-i}\rfloor\) and the nth element is \(\lfloor b^{n-n}\rfloor=b^{0}=1\). Therefore, the (n+1)th element is \(\lfloor\frac{1}{m}\rfloor=0\). We can see that the length is \(n+1\) for \(m=b^{n-1}\). The lower bound can be shown to be \(m=b^{n-1}\) by showing \(m=b^{n-1}-1\) has a length of \(n\). Note the second element in the path would be \(\lfloor\frac{b^{n-1}-1}{b}\rfloor=\lfloor b^{n-2}-1\rfloor\). 
The third element in the path would be \(\lfloor\frac{b^{n-2}-1}{b}\rfloor=b^{n-3}-1\). This means that the \(i\)th element is \(b^{n-i}-1\) and the \(n\)th element is \(b^{n-n}-1=b^{0}-1=1-1=0\). Therefore, the length is \(n\) for \(m=b^{n-1}-1\). Next, the upper bound has to be shown to be \(m=b^{n}-1\). We can do this by proving that \(b^{n}\) has a length of \(n+2\). Note that the proof that the length of \(m=b^{n-1}\) is \(n+1\) can be altered such that the length of \(m=b^{n}\) is \(n+2\). We then show that \(m=b^{n}-1\) has a length of \(n+1\). Note that the proof that the length of \(m=b^{n-1}-1\) is \(n\) can be altered such that \(m=b^{n}-1\) has a length of \(n+1\). Therefore, the upper and lower bounds have been proven to be sharp and the lengths are monotonically non-decreasing with respect to the multipliers. Thus, the corollary is proven. \(\Box\) ## 4 Conclusion The theorems shown above provide an overview of multiplication transducers with no excluded digits and analyze paths through the multiplication transducer when \(m=1\). These establish basic properties of multiplication transducers for base multiplication and will help with rapidly calculating paths for small multipliers. Further research includes generalizing the theorems to multiplication transducers with a reduction of the digit set and determining whether some of the same properties hold. Additionally, the argument for Corollary 1 remains to be made fully rigorous, which provides another topic of exploration. Furthermore, pathways with larger multipliers can be explored, and better ways of visualizing multiplication transducers with large \(b\)'s and \(m\)'s have yet to be discovered. ### Exploring Quotient Sets With Restricted Digits As seen above, with these multiplication transducers, we can calculate an output \(w\) when multiplying \(m\) by \(r\) in base \(b\). We can now add a further constraint to limit the number of \(r\) values that can be multiplied by \(m\), which will, in turn, reduce the set of all outputs and reduce the number of states in the multiplication transducer. This constraint involves reducing the original digit set \(\{d_{1},d_{2},...,d_{k}\}\), the set of digits that can be used to represent \(r\) in base \(b\), where \(k=b\) for the full set. For instance, for digit set \(\{0,1\}\) for \(b=3\), \(r=\{1,3,4,9,...\}=\{[1]_{3},[10]_{3},[11]_{3},[100]_{3},...\}\), and for digit set \(\{0,1,2\}\) (the entire set for base three), \(r=\{1,2,3,4,...\}=\{[1]_{3},[2]_{3},[10]_{3},[11]_{3},...\}=\mathbb{N}\). Let \(S(b;\{d_{1},...,d_{k}\})\) be the set of all \(r\) that can be created in base \(b\) using digit set \(\{d_{1},...,d_{k}\}\). For the previous example, \(S(3;\{0,1\})=\{1,3,4,9,...\}\). To express this mathematically, \[S(b;\{d_{1},...,d_{k}\})=\{s\in\mathbb{N};s=\sum_{i=0}^{\infty}\alpha_{i}b^{i }\text{ with }\alpha_{i}\in\{d_{1},...,d_{k}\}\text{ for all }i\} \tag{6}\] We are interested in studying the positive whole numbers that arise as quotients of numbers in \(S(b;\{d_{1},...,d_{k}\})\) for a particular \(b\). This set is known as a quotient set, and is denoted as \(Q(b;\{d_{1},...,d_{k}\})\). Expressed mathematically, \[Q(b;\{d_{1},...,d_{k}\})=\{x\in\mathbb{Z}:x=\frac{s}{s^{\prime}}\text{ for some }s,s^{\prime}\in S(b;\{d_{1},...,d_{k}\})\} \tag{7}\] Note that in \(Q(b;\{d_{1},...,d_{k}\})\), \(s^{\prime}\neq 0\). We can verify that a particular number \(n\) is in this quotient set if two conditions are met (a brute-force sketch of these sets follows below): * \(w=n\). 
* \(p_{0}=p_{l_{w}-1}=0\) (or, equivalently, there is a closed loop in the multiplication transducer starting and ending at \(0\)). Note that when no digits are restricted (i.e., the original digit set is kept in full), \(Q=\mathbb{N}\).
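To make the preceding results concrete, here is a small Python sketch (ours, not part of the original text; `shortest_loop` and `ilog` are names we introduce) that simulates the shortest closed loop of \(T_{m,b}\) from Theorem 1, directly implementing the recurrences \(t_{i}=r_{i}m+c_{i}\), \(c_{i+1}=\lfloor t_{i}/b\rfloor\), \(w_{i}=t_{i}\ (\text{mod }b)\), and numerically checks the length formula of Corollary 1.

```python
def shortest_loop(m, b):
    """Simulate the shortest closed loop of T_{m,b}: read digit 1 first,
    then 0s until the carry returns to zero (Theorem 1)."""
    carries, writes = [0], []
    r = 1
    while True:
        t = r * m + carries[-1]   # total t_i = r_i * m + c_i
        writes.append(t % b)      # write digit w_i = t_i (mod b)
        carries.append(t // b)    # next carry c_{i+1} = floor(t_i / b)
        r = 0                     # all subsequent read digits are 0
        if carries[-1] == 0:
            break
    w = sum(d * b**i for i, d in enumerate(writes))  # write digits, LSB first
    return carries, w

def ilog(m, b):
    """Integer part of log_b(m), computed exactly by repeated division."""
    k = 0
    while m >= b:
        m //= b
        k += 1
    return k

# Reproduces the worked example (b = 3, m = 10) and checks Corollary 1.
assert shortest_loop(10, 3) == ([0, 3, 1, 0], 10)
for b in range(2, 6):
    for m in range(1, 300):
        carries, w = shortest_loop(m, b)
        assert w == m                          # Theorem 1: write value is m
        assert len(carries) == ilog(m, b) + 2  # Corollary 1: loop length
```

Similarly, the quotient-set definitions (6) and (7) can be explored by brute force. The sketch below (again ours; the truncation bound `max_len` is an assumption, so a negative answer is not conclusive) enumerates a truncated version of \(S(b;\{d_{1},...,d_{k}\})\) and tests membership in \(Q(b;\{d_{1},...,d_{k}\})\) directly, rather than through the transducer conditions above.

```python
from itertools import product

def S(b, digits, max_len):
    """All positive integers representable in base b with at most max_len
    digits drawn from the allowed digit set, as in (6)."""
    out = set()
    for length in range(1, max_len + 1):
        for tup in product(digits, repeat=length):
            val = sum(d * b**i for i, d in enumerate(tup))  # digits, LSB first
            if val > 0:
                out.add(val)
    return out

def in_quotient_set(n, b, digits, max_len=8):
    """Brute-force membership test for Q(b; digits) as in (7): is n = s/s'
    for some s, s' in the truncated S? False results are not conclusive."""
    s_set = S(b, digits, max_len)
    return any(n * sp in s_set for sp in s_set)

print(sorted(S(3, (0, 1), 4))[:6])    # [1, 3, 4, 9, 10, 12], matching the text
print(in_quotient_set(4, 3, (0, 1)))  # True, since 4 = 4/1 with 4, 1 in S
```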
2301.01410
Kernel Subspace and Feature Extraction
We study kernel methods in machine learning from the perspective of feature subspace. We establish a one-to-one correspondence between feature subspaces and kernels and propose an information-theoretic measure for kernels. In particular, we construct a kernel from Hirschfeld–Gebelein–Rényi maximal correlation functions, coined the maximal correlation kernel, and demonstrate its information-theoretic optimality. We use the support vector machine (SVM) as an example to illustrate a connection between kernel methods and feature extraction approaches. We show that the kernel SVM on the maximal correlation kernel achieves minimum prediction error. Finally, we interpret the Fisher kernel as a special maximal correlation kernel and establish its optimality.
Xiangxiang Xu, Lizhong Zheng
2023-01-04T02:46:11Z
http://arxiv.org/abs/2301.01410v2
# Kernel Subspace and Feature Extraction ###### Abstract We study kernel methods in machine learning from the perspective of feature subspace. We establish a one-to-one correspondence between feature subspaces and kernels and propose an information-theoretic measure for kernels. In particular, we construct a kernel from Hirschfeld-Gebelein-Renyi maximal correlation functions, coined the maximal correlation kernel, and demonstrate its information-theoretic optimality. We use the support vector machine (SVM) as an example to illustrate a connection between kernel methods and feature extraction approaches. We show that the kernel SVM on the maximal correlation kernel achieves minimum prediction error. Finally, we interpret the Fisher kernel as a special maximal correlation kernel and establish its optimality. ## I Introduction One main objective of machine learning is to obtain useful information from often high-dimensional data. To this end, it is a common practice to extract meaningful feature representations from original data and then process the features [1]. Neural networks [2] and kernel methods [3, 4, 5, 6] are two of the most representative approaches for mapping data into feature space. In neural networks, the features are represented as the outputs of hidden neurons in the network. In contrast, the feature mapping in kernel methods is defined by the chosen kernel, which is applied implicitly and is often infinite dimensional. While kernel approaches require far fewer parameters and can obtain good empirical performance on certain tasks [7], the performance relies significantly on the choice of kernel. Despite many attempts to investigate kernel methods [6, 8, 9], a theoretical understanding of the mechanism behind kernel methods is still lacking, which restricts their application to complicated data. On the other hand, the feature extraction in deep neural networks has been studied recently by information-theoretic and statistical analyses [10, 11]. For example, it was shown in [10] that the feature extracted by deep neural networks coincides with the most informative feature, which is essentially related to the classical Hirschfeld-Gebelein-Renyi (HGR) maximal correlation problem [12, 13, 14]. Such theoretical characterizations provide a better understanding of existing algorithms and have been shown useful in designing algorithms for multimodal learning tasks [15]. In this paper, our goal is to characterize kernel methods from the perspective of feature subspace and reveal their connection with other learning approaches. We first introduce the kernel associated with each given feature subspace, which we coin the _projection kernel_, to establish a correspondence between kernel operations and geometric operations in feature subspaces. This connection allows us to study kernel methods by analyzing the corresponding feature subspaces. Specifically, we propose an information-theoretic measure for projection kernels, and demonstrate that the information-theoretically optimal kernel can be constructed from the HGR maximal correlation functions, coined the _maximal correlation kernel_. We further demonstrate that the support vector machine (SVM) with the maximal correlation kernel can obtain the minimum prediction error, which justifies its optimality in learning tasks. Our analysis also reveals connections between SVM and other classification approaches including neural networks. 
Finally, we interpret the Fisher kernel, a classical kernel induced from parameterized distribution families [16], as a special case of maximal correlation kernels, thus demonstrating its optimality. ## II Preliminaries and Notations Throughout this paper, we use \(X,Y\) to denote two random variables with alphabets \(\mathcal{X},\mathcal{Y}\), and denote their joint distribution and marginals as \(P_{X,Y}\) and \(P_{X},P_{Y}\), respectively. We also use \(\mathbb{E}[\cdot]\) to denote the expectation with respect to \(P_{X,Y}\). ### _Feature Space_ We adopt the notation convention introduced in [15], and let \(\mathcal{F}_{\mathcal{X}}\triangleq\{\mathcal{X}\rightarrow\mathbb{R}\}\) denote the feature space formed by the (one-dimensional) features of \(X\), with the geometry defined as follows. The inner product \(\langle\cdot,\cdot\rangle_{\mathcal{F}_{\mathcal{X}}}\) on \(\mathcal{F}_{\mathcal{X}}\) is defined as \(\langle f_{1},f_{2}\rangle_{\mathcal{F}_{\mathcal{X}}}\triangleq\mathbb{E}_{ P_{X}}[f_{1}(X)f_{2}(X)]\) for \(f_{1},f_{2}\in\mathcal{F}_{\mathcal{X}}\). This induces a norm \(\|\cdot\|_{\mathcal{F}_{\mathcal{X}}}\) with \(\|f\|_{\mathcal{F}_{\mathcal{X}}}\triangleq\sqrt{\langle f,f\rangle_{\mathcal{ F}_{\mathcal{X}}}}\) for \(f\in\mathcal{F}_{\mathcal{X}}\). Then, for given \(f\in\mathcal{F}_{\mathcal{X}}\) and subspace \(\mathcal{G}\) of \(\mathcal{F}_{\mathcal{X}}\), we denote the projection of \(f\) onto \(\mathcal{G}\) as \[\Pi(f;\mathcal{G})\triangleq\operatorname*{arg\,min}_{h\in\mathcal{G}}\|h-f \|_{\mathcal{F}_{\mathcal{X}}}. \tag{1}\] In addition, for a \(d\)-dimensional feature \(f=(f_{1},\ldots,f_{d})^{\mathrm{T}}\colon\mathcal{X}\rightarrow\mathbb{R}^{d}\), we use \(\operatorname{span}\{f\}\triangleq\operatorname{span}\{f_{1},\ldots,f_{d}\}\) to denote the subspace spanned by all dimensions. We also use \(\tilde{f}\) to denote the centered \(f\), i.e., \(\tilde{f}(x)\triangleq f(x)-\mathbb{E}_{P_{X}}[f(X)]\), and denote \(\Lambda_{f}\triangleq\mathbb{E}_{P_{X}}\big{[}f(X)f^{\mathrm{T}}(X)\big{]}\). ### _Kernel_ Given \(\mathcal{X}\), \(\mathpzc{K}\colon\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}\) is a kernel on \(\mathcal{X}\) if for every finite subset \(\mathcal{I}\subset\mathcal{X}\), the \(|\mathcal{I}|\) by \(|\mathcal{I}|\) matrix \([\mathpzc{K}(x,x^{\prime})]_{x\in\mathcal{I},x^{\prime}\in\mathcal{I}}\) is positive semidefinite. For each kernel \(\mathpzc{K}\), we define the associated functional operator \(\tau\colon\mathcal{F}_{\mathcal{X}}\rightarrow(\mathcal{X}\rightarrow\mathbb{R})\) as \[[\tau(f)](x)\triangleq\mathbb{E}_{P_{X}}[\mathpzc{K}(X,x)f(X)], \tag{2}\] and we use \(\mathpzc{K}\leftrightarrow\tau\) to denote the correspondence between \(\mathpzc{K}\) and \(\tau\). Furthermore, we define the centered kernel \(\tilde{\mathpzc{K}}\colon\mathscr{X}\times\mathscr{X}\to\mathbb{R}\) as \[\tilde{\mathpzc{K}}(x,x^{\prime})\triangleq\mathpzc{K}(x,x^{\prime})-\bar{k}( x)-\bar{k}(x^{\prime})+\mathbb{E}_{P_{X}}\big{[}\bar{k}(X)\big{]}, \tag{3}\] where we have defined \(\bar{k}\colon(x\mapsto\mathbb{E}_{P_{X}}\big{[}\mathpzc{K}(X,x)\big{]})\in \mathscr{F}_{\mathscr{X}}\). The following fact is the basis of the kernel trick in learning algorithms. _Fact 1:_ For each given kernel \(\mathpzc{K}\), there exists an inner product space \(\mathcal{V}\) with inner product \(\langle\cdot,\cdot\rangle_{\mathcal{V}}\), and a mapping \(\nu\colon\mathscr{X}\to\mathcal{V}\), such that \(\mathpzc{K}(x,x^{\prime})=\langle\nu(x),\nu(x^{\prime})\rangle_{\mathcal{V}}\). 
**Remark 1**: _Suppose \(\nu\) is one mapping for \(\mathpzc{K}\) satisfying Fact 1. Then for the centered kernel \(\tilde{\mathpzc{K}}\) [cf. (3)], we have \(\tilde{\mathpzc{K}}(x,x^{\prime})=\langle\tilde{\nu}(x),\tilde{\nu}(x^{\prime} )\rangle_{\mathcal{V}}\), where \(\tilde{\nu}(x)\triangleq\nu(x)-\mathbb{E}_{P_{X}}[\nu(X)]\)._ In addition, we introduce the kernelized discriminative model (KDM) as follows. **Definition 1** (Kernelized Discriminative Model): _For each kernel \(\mathpzc{K}\), we define its associated kernelized discriminative model \(P_{Y|X}^{(\mathpzc{K})}\) as_ \[P_{Y|X}^{(\mathpzc{K})}(y|x)\triangleq P_{Y}(y)\Big{(}1+\mathbb{E}\Big{[} \tilde{\mathpzc{K}}(X,x)\Big{|}Y=y\Big{]}\Big{)}. \tag{4}\] _Then, we use \(\hat{y}^{(\mathpzc{K})}\) to denote the maximum a posteriori (MAP) estimation induced from KDM \(P_{Y|X}^{(\mathpzc{K})}\), i.e.,_ \[\hat{y}^{(\mathpzc{K})}(x)\triangleq\operatorname*{arg\,max}_{y\in\mathpzc{Y}}P_{Y|X} ^{(\mathpzc{K})}(y|x). \tag{5}\] _The KDM can be regarded as a generalized probability distribution, since we have \(\sum_{y\in\mathpzc{Y}}P_{Y|X}^{(\mathpzc{K})}(y|x)=1\) for all \(x\in\mathscr{X}\) while \(P_{Y|X}^{(\mathpzc{K})}(y|x)\) can sometimes take negative values._ ### _Modal Decomposition, Maximal Correlation, and H-score_ We first introduce the modal decomposition of joint distribution \(P_{X,Y}\)[11, 15]. **Proposition 1** (Modal Decomposition [11]): _For given \(P_{X,Y}\), there exists \(K\leq\min\{|\mathscr{X}|,|\mathpzc{Y}|\}-1\), such that_ \[P_{X,Y}(x,y)=P_{X}(x)P_{Y}(y)\Bigg{(}1+\sum_{i=1}^{K}\sigma_{i}f_{i}^{*}(x)g_{ i}^{*}(y)\Bigg{)}, \tag{6}\] _where \(\sigma_{1}\geq\sigma_{2}\geq\cdots\geq\sigma_{K}>0\), and \(\mathbb{E}\big{[}f_{i}^{*}(X)f_{j}^{*}(X)\big{]}=\mathbb{E}\big{[}g_{i}^{*}(Y )g_{j}^{*}(Y)\big{]}=\mathbb{1}_{\{i=j\}}\) for all \(1\leq i,j\leq K\), where \(\mathbb{1}_{\{\cdot\}}\) denotes the indicator function._ It can be shown that \((f_{i}^{*},g_{i}^{*})\) pairs are the most correlated function pairs of \(X\) and \(Y\), referred to as maximal correlation functions. We also denote \(\varrho\triangleq\sigma_{1}\), known as the HGR maximal correlation [12, 13, 14] of \(X\) and \(Y\), and define the \(K\)-dimensional feature \(f^{*}(x)\triangleq[f_{1}^{*}(x),\ldots,f_{K}^{*}(x)]^{\mathrm{T}}\). In particular, when \(Y\) is binary, we have \(f^{*}=f_{1}^{*}\in\mathscr{F}_{\mathscr{X}}\). It has been shown in [11] that the maximal correlation functions \(f_{i}^{*},i=1,\ldots,K\) are the optimal features of \(X\) in inferring or estimating \(Y\). In general, given a \(d\)-dimensional feature \(f\) of \(X\), the effectiveness of \(f\) in inferring or estimating \(Y\) can be measured by its _H-score_[10, 11], defined as \[\mathscr{H}(f)\triangleq\frac{1}{2}\cdot\mathbb{E}\bigg{[}\Big{\|}\mathbb{E }\Big{[}\Lambda_{f}^{-\frac{1}{2}}\tilde{f}(X)\Big{|}Y\Big{]}\Big{\|}^{2} \bigg{]}, \tag{7}\] where \(\tilde{f}(x)\triangleq f(x)-\mathbb{E}[f(X)]\). It can be verified that for all \(d\) and \(f\colon\mathscr{X}\to\mathbb{R}^{d}\), we have \[\mathscr{H}(f)\leq\mathscr{H}(f^{*})=\frac{1}{2}\sum_{i=1}^{K}\sigma_{i}^{2}, \tag{8}\] where \(\sigma_{1},\ldots,\sigma_{K}\) are as defined in (6). ### _Binary Classification_ We consider the binary classification problem which predicts binary label \(Y\) from the data variable \(X\). For convenience, we assume \(Y\) takes values from \(\mathpzc{Y}\triangleq\{-1,1\}\). 
Suppose the training dataset contains \(n\) data samples \(\{(x_{i},y_{i})\}_{i=1}^{n}\), then the corresponding alphabet \(\mathscr{X}\) is given by \(\mathscr{X}\triangleq\{x_{i}\colon i=1,\ldots,n\}\), and we use \(P_{X,Y}\) to denote the empirical distribution of training data, i.e., \[P_{X,Y}(x,y)\triangleq\frac{1}{n}\sum_{i=1}^{n}\mathbb{1}_{\{x_{i}=x,y_{i}=y\}}. \tag{9}\] #### II-D1 Support Vector Machine The support vector machine (SVM) solves binary classification tasks by finding the optimal hyperplane that separates two classes with maximum margin [7]. Given \(d\)-dimensional feature mapping \(f\colon\mathscr{X}\to\mathbb{R}^{d}\), the loss for SVM based on \(f\) can be written as \[L_{\mathsf{SVM}}(f,w,b;\lambda)\] \[\triangleq\mathbb{E}_{P_{X,Y}}\big{[}\ell_{\mathrm{hinge}}(Y, \langle w,f(X)\rangle+b)\big{]}+\frac{\lambda}{2}\cdot\|w\|^{2} \tag{10}\] where \(w\in\mathbb{R}^{d}\) and \(b\in\mathbb{R}\) are the parameters of the hyperplane, where \(\lambda>0\) is a hyperparameter of SVM, and where \(\ell_{\mathrm{hinge}}\colon\mathscr{Y}\times\mathbb{R}\to\mathbb{R}\) denotes the hinge loss, defined as \(\ell_{\mathrm{hinge}}(y,z)\triangleq(1-yz)^{+}\) with \(x^{+}\triangleq\max\{0,x\}\). Moreover, let \((w_{\mathsf{SVM}},b_{\mathsf{SVM}})\triangleq\operatorname*{arg\,min}_{w,b}L_{ \mathsf{SVM}}(f,w,b;\lambda)\) and \(L_{\mathsf{SVM}}^{*}(f;\lambda)\triangleq L_{\mathsf{SVM}}(f,w_{\mathsf{SVM }},b_{\mathsf{SVM}};\lambda)\) denote the optimal parameters and the value of the loss function, respectively. Then, the prediction of SVM is \[\hat{y}_{\mathsf{SVM}}(x;f,\lambda)\triangleq\mathrm{sgn}(\langle w_{\mathsf{ SVM}},f(x)\rangle+b_{\mathsf{SVM}}), \tag{11}\] where \(\mathrm{sgn}(\cdot)\) denotes the sign function. Specifically, for a given kernel \(\mathpzc{K}\), the prediction of the corresponding _kernel SVM_ is1\(\hat{y}_{\mathsf{SVM}}^{(\mathpzc{K})}(x;\lambda)\triangleq\hat{y}_{\mathsf{ SVM}}(x;\nu,\lambda),\) where \(\nu\) is any mapping given by Fact 1. Footnote 1: It is worth mentioning that the practical implementation of kernel SVM is typically done by solving a dual optimization problem without explicitly using \(\nu\). See [17, Section 12] for detailed discussions. #### II-D2 Logistic Regression and Neural Networks Given \(d\)-dimensional feature \(f\) of \(X\), the discriminative model of logistic regression is \(\tilde{P}_{Y|X}(y|x;f,w,b)\triangleq\mathrm{sigmoid}(y\cdot(\langle w,f(x)\rangle+b))\), where \(w\in\mathbb{R}^{d}\), \(b\in\mathbb{R}\) are the weight and bias, respectively, and where \(\mathrm{sigmoid}(\cdot)\) is defined as \(\mathrm{sigmoid}(x)\triangleq\frac{1}{1+\exp(-x)}\). Then, the loss of logistic regression is \(L_{\mathsf{LR}}(f,w,b)\triangleq-\mathbb{E}\Big{[}\log\tilde{P}_{Y|X}(Y|X;f,w,b )\Big{]}\), and the optimal parameters \(w_{\mathsf{LR}},b_{\mathsf{LR}}\) are learned by minimizing the loss, i.e., \((w_{\mathsf{LR}},b_{\mathsf{LR}})\triangleq\operatorname*{arg\,min}_{w,b}L_{ \mathsf{LR}}(f,w,b)\). The resulting decision rule is \[\hat{y}_{\mathsf{LR}}(x;f) \triangleq\operatorname*{arg\,max}_{y\in\mathcal{Y}}\tilde{P}_{Y| X}(y|x;f,w_{\mathsf{LR}},b_{\mathsf{LR}})\] \[=\operatorname*{sgn}(\langle w_{\mathsf{LR}},f(x)\rangle+b_{ \mathsf{LR}}). \tag{12}\] The logistic regression is often used as the classification layer for multi-layer neural networks, where \(w\) and \(b\) correspond to weights and the bias term, respectively. In this case, the feature mapping \(f(\cdot)\) also takes a parameterized form, and the parameters of \(f(\cdot)\) are jointly learned with \(w\) and \(b\). 
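Before proceeding, here is a minimal numerical sketch (ours, not from the paper; `modal_decomposition` and `h_score` are hypothetical names) of the quantities defined in this section. It computes the modal decomposition (6) via an SVD of the matrix \(B[x,y]=P_{X,Y}(x,y)/\sqrt{P_{X}(x)P_{Y}(y)}\), one standard construction for (6), and checks the H-score identity (8) on a random joint distribution.

```python
import numpy as np

def modal_decomposition(P):
    """Modal decomposition (Proposition 1) of a joint pmf P[x, y] via SVD of
    B[x, y] = P(x, y) / sqrt(P_X(x) P_Y(y)); mode 0 is the trivial
    (sqrt(P_X), sqrt(P_Y)) pair with singular value 1 and is dropped."""
    Px, Py = P.sum(axis=1), P.sum(axis=0)
    B = P / np.sqrt(np.outer(Px, Py))
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    f = U[:, 1:] / np.sqrt(Px)[:, None]     # f_i^*(x): zero mean, unit variance
    g = Vt[1:, :].T / np.sqrt(Py)[:, None]  # g_i^*(y)
    return s[1:], f, g

def h_score(f, P):
    """H-score (7) of a feature matrix f[x, i] under the joint pmf P[x, y]."""
    Px, Py = P.sum(axis=1), P.sum(axis=0)
    fc = f - Px @ f                          # centered feature values
    Lam = (f * Px[:, None]).T @ f            # Lambda_f = E[f(X) f(X)^T]
    cond = (P / Py).T @ fc                   # row y: E[f~(X) | Y = y]
    quad = np.einsum("yi,ij,yj->y", cond, np.linalg.inv(Lam), cond)
    return 0.5 * Py @ quad

rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(12)).reshape(4, 3)   # random joint pmf, |X|=4, |Y|=3
sigma, f_star, g_star = modal_decomposition(P)
assert np.isclose(h_score(f_star, P), 0.5 * np.sum(sigma**2))  # checks (8)
```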
## III Projection Kernel and Informative Features In this section, we introduce a one-to-one correspondence between kernels and feature subspaces, and then characterize the informativeness of kernels by investigating the features in the associated subspaces. ### _Projection Kernel and Feature Subspace_ We first introduce a family of kernels in one-to-one correspondence with feature subspaces. **Definition 2** (Projection Kernel): _Let \(\mathcal{G}\) denote a \(d\)-dimensional subspace of \(\mathcal{F}_{\mathcal{X}}\) with a basis \(\{f_{1},\ldots,f_{d}\}\). We use \(\mathpzc{k}_{\mathcal{G}}\colon\mathcal{X}\times\mathcal{X}\to\mathbb{R}\) to denote the projection kernel associated with \(\mathcal{G}\), defined as \(\mathpzc{k}_{\mathcal{G}}(x,x^{\prime})\triangleq f^{\mathrm{T}}(x)\Lambda_ {f}^{-1}f(x^{\prime})\), where we have defined \(f\triangleq(f_{1},\ldots,f_{d})^{\mathrm{T}}\) and \(\Lambda_{f}\triangleq\mathbb{E}\big{[}f(X)f^{\mathrm{T}}(X)\big{]}\)._ With slight abuse of notation, we also denote \(\mathpzc{k}_{f}\triangleq\mathpzc{k}_{\mathrm{span}\{f\}}\), the projection kernel associated with \(\operatorname{span}\{f\}\). Note that \(\mathpzc{k}_{\mathcal{G}}\) is a valid kernel function, and the corresponding \(\nu\) mapping in Fact 1 can be chosen as \(\nu(x)=[f_{1}(x),\ldots,f_{d}(x)]^{\mathrm{T}}\) for any orthonormal basis \(\{f_{1},\ldots,f_{d}\}\) of \(\mathcal{G}\). It turns out that the functional operators associated with projection kernels are projection operators in the feature space, which we formalize as follows. A proof is provided in Appendix A. **Property 1**: _Let \(\tau\leftrightarrow\mathpzc{k}_{\mathcal{G}}\) denote the operator corresponding to subspace \(\mathcal{G}\) [cf. (2)], then we have \(\tau(f)=\Pi(f;\mathcal{G})\) for all \(f\in\mathcal{F}_{\mathcal{X}}\)._ Therefore, given a projection kernel \(\mathpzc{k}\), the associated subspace can be represented as \(\{f\in\mathcal{F}_{\mathcal{X}}\colon\tau(f)=f\}\), where \(\tau\leftrightarrow\mathpzc{k}\) is the associated operator. This establishes a one-to-one correspondence between projection kernels and feature subspaces. ### _H-score and Informative Features_ The projection kernel provides a connection between feature subspace and kernel, from which we can characterize subspace \(\mathcal{G}\) in terms of the corresponding kernel \(\mathpzc{k}_{\mathcal{G}}\). Specifically, we can represent the H-score [cf. (7)] of a feature \(f\) in terms of the projection kernel \(\mathpzc{k}_{f}\), formalized as follows. A proof is provided in Appendix B. **Proposition 2**: _For all \(f\) with \(\operatorname*{span}\{f\}=\mathcal{G}\), we have \(\mathscr{H}(f)=\frac{1}{2}\cdot\big{(}\mathbb{E}_{P_{XX^{\prime}}}[ \mathpzc{k}_{\mathcal{G}}(X,X^{\prime})]-\mathbb{E}_{P_{X}P_{X^{\prime}}}[\mathpzc{k}_{\mathcal{G}}(X,X^{\prime})]\big{)}\), where we have defined \(X^{\prime}\) such that the joint distribution of \(X\) and \(X^{\prime}\) is_ \[P_{XX^{\prime}}(x,x^{\prime})\triangleq\sum_{y\in\mathcal{Y}}P_{Y}(y)P_{X|Y=y }(x)P_{X|Y=y}(x^{\prime}). \tag{13}\] With slight abuse of notation, we can use \(\mathscr{H}(\mathcal{G})\) to denote the H-score corresponding to feature subspace \(\mathcal{G}\). In particular, we have the following characterization of \(\mathscr{H}(\mathcal{G})\) when \(Y\) is binary. A proof is provided in Appendix C. **Proposition 3**: _Suppose \(Y\) is binary, and \(f^{*}\) is the maximal correlation function of \(P_{X,Y}\). 
Then, for each subspace \(\mathcal{G}\) of \(\mathcal{F}_{\mathcal{X}}\), we have_ \[\mathscr{H}(\mathcal{G})=\frac{\varrho^{2}}{2}\cdot\big{\|}\Pi(f^{*}; \mathcal{G})\big{\|}_{\mathcal{F}_{\mathcal{X}}}^{2}=\max_{f\in\mathcal{G}} \mathscr{H}(f)=\mathscr{H}(\Pi(f^{*};\mathcal{G})). \tag{14}\] From Proposition 3, \(\mathscr{H}(\mathcal{G})\) depends only on the projection of \(f^{*}\) onto \(\mathcal{G}\), which is also the most informative feature in \(\mathcal{G}\). In addition, note that since \(\|f^{*}\|_{\mathcal{F}_{\mathcal{X}}}=1\), \(\big{\|}\Pi(f^{*};\mathcal{G})\big{\|}_{\mathcal{F}_{\mathcal{X}}}^{2}\) is also the squared cosine of the principal angle between \(f^{*}\) and \(\mathcal{G}\). Therefore, we can interpret the H-score as a measure of the principal angle between the optimal feature \(f^{*}\) and the given subspace. ### _Maximal Correlation Kernel_ Note that from (8), \(\mathscr{H}(f)\) is maximized when \(f\) takes the maximal correlation function \(f^{*}\). Therefore, the subspace \(\operatorname*{span}\{f^{*}\}\) (and thus projection kernel \(\mathpzc{k}_{f^{*}}\)) is optimal in terms of the H-score measure. We will denote \(\mathpzc{k}^{*}\triangleq\mathpzc{k}_{f^{*}}\), referred to as the _maximal correlation kernel_. Specifically, the KDM (cf. Definition 1) of maximal correlation kernel \(\mathpzc{k}^{*}\) coincides with the underlying conditional distribution \(P_{Y|X}\), demonstrated as follows. A proof is provided in Appendix D. **Property 2**: _For all \(x\) and \(y\), we have \(P_{Y|X}(y|x)=P_{Y|X}^{(\mathpzc{k}^{*})}(y|x)\) and \(\hat{y}^{(\mathpzc{k}^{*})}(x)=\hat{y}_{\mathsf{MAP}}(x)\), where \(\hat{y}_{\mathsf{MAP}}\) denotes the MAP estimation, i.e.,_ \[\hat{y}_{\mathsf{MAP}}(x)\triangleq\operatorname*{arg\,max}_{y\in\mathcal{Y}}P _{Y|X}(y|x). \tag{15}\] As we will develop in the next section, the maximal correlation kernel also achieves the optimal performance in support vector machine. ## IV Support Vector Machine Analysis In this section, we investigate support vector machine, a representative kernel approach for binary classification. Let \((X,Y)\) denote the training data and corresponding label taken from \(\mathcal{Y}=\{-1,1\}\), with \(P_{X,Y}\) denoting the empirical distribution as defined in (9). Throughout this section, we will focus on the balanced dataset with \[P_{Y}(-1)=P_{Y}(1)=\frac{1}{2}. \tag{16}\] It can be verified that in this case, the MAP estimation [cf. (15)] can be expressed in terms of the maximal correlation function. A proof is provided in Appendix E. **Property 3**: _Under assumption (16), we can express the MAP estimation as \(\hat{y}_{\mathsf{MAP}}(x)=\operatorname{sgn}(f^{*}(x))\) for all \(x\in\mathcal{X}\), where \(f^{*}\in\mathcal{F}_{\mathcal{X}}\) is the maximal correlation function of \(P_{X,Y}\)._ ### _SVM on Given Features_ We first consider the SVM algorithm applied on a given feature representation \(f(X)\in\mathbb{R}^{d}\), which can also be regarded as the kernel SVM on kernel \(\mathpzc{k}(x,x^{\prime})=\langle f(x),f(x^{\prime})\rangle\). To begin, for each given feature \(f\) and \(\lambda>0\), let us define \[\hat{L}(f;\lambda)\triangleq 1-\frac{1}{2\lambda}\cdot\|\mathbb{E}[f(X)Y]\|^{2}.\] Then we have the following characterization, a proof of which is provided in Appendix F. 
**Theorem 1**: _For every given feature \(f\) and \(\lambda>0\), we have_ \[\hat{L}(f;\lambda)\leq L^{*}_{\mathsf{SVM}}(f;\lambda)\leq\hat{L}(f;\lambda)+ \left(\frac{\lambda_{\mathrm{T}}}{\lambda}-1\right)^{+}, \tag{17}\] _where we have defined \(\lambda_{\mathrm{T}}\triangleq M\cdot\|\mathbb{E}[f(X)Y]\|\) and \(M\triangleq\max_{x\in\mathcal{X}}\bigl{\|}\tilde{f}(x)\bigr{\|}\), with \(\tilde{f}(x)\triangleq f(x)-\mathbb{E}[f(X)]\), and where \(x^{+}\triangleq\max\{0,x\}\)._ Specifically, when \(\lambda\geq\lambda_{\mathrm{T}}\), we have \(L^{*}_{\mathsf{SVM}}(f;\lambda)=\hat{L}(f;\lambda)\), which can be achieved by \[w_{\mathsf{SVM}}=\frac{1}{\lambda}\cdot\mathbb{E}[f(X)Y],\quad b_{\mathsf{ SVM}}=-\langle w_{\mathsf{SVM}},\mathbb{E}[f(X)]\rangle, \tag{18}\] and the resulting SVM prediction is \[\hat{y}_{\mathsf{SVM}}(x;f,\lambda) =\operatorname{sgn}\Bigl{(}\Bigl{\langle}\mathbb{E}[\tilde{f}(X) Y],\tilde{f}(x)\Bigr{\rangle}\Bigr{)} \tag{19}\] \[=\operatorname*{arg\,min}_{y\in\mathcal{Y}}\|f(x)-\mathbb{E}[f(X )|Y=y]\|. \tag{20}\] From Theorem 1, when \(\lambda\geq\lambda_{\mathrm{T}}\), the SVM decision \(\hat{y}_{\mathsf{SVM}}(x;f,\lambda)\) does not depend on the value of \(\lambda\). In the remainder, we will focus on the regime where \(\lambda\geq\lambda_{\mathrm{T}}\), and drop the \(\lambda\) in expressions whenever possible, e.g., we simply denote \(\hat{y}_{\mathsf{SVM}}(x;f,\lambda)\) by \(\hat{y}_{\mathsf{SVM}}(x;f)\). As we will see soon, SVM can still obtain minimum prediction error in this regime, by using a good feature mapping \(f\) (or equivalently, a good kernel). From (20), the SVM prediction can be interpreted as a nearest centroid classifier, where the decision is based on comparing the distances between \(f(x)\) and the class centroids \(\mathbb{E}[f(X)|Y=y]\), \(y\in\mathcal{Y}\). In addition, from \[\mathbb{E}[f(X)Y] =\mathbb{E}[Y\cdot\mathbb{E}[f(X)|Y]]\] \[=\frac{1}{2}(\mathbb{E}[f(X)|Y=1]-\mathbb{E}[f(X)|Y=-1]),\] we can interpret the SVM loss \(L^{*}_{\mathsf{SVM}}=\hat{L}\) as measuring the distance between two class centroids. Furthermore, when \(f\) is a one-dimensional feature, we can rewrite (19) as \[\hat{y}_{\mathsf{SVM}}(x;f)=\operatorname{sgn}\Bigl{(}\Bigl{\langle}\mathbb{ E}\bigl{[}\tilde{f}(X)Y\bigr{]},\tilde{f}(x)\Bigr{\rangle}\Bigr{)}= \operatorname{sgn}\Bigl{(}\hat{f}(x)\Bigr{)},\] where \(\hat{f}\triangleq\Pi\Bigl{(}f^{*};\operatorname{span}\{\tilde{f}\}\Bigr{)}\). Therefore, the decision rule depends only on the projection of \(f^{*}\) onto the subspace \(\operatorname{span}\{\tilde{f}\}\), which is also the most informative feature on the subspace (cf. Proposition 3). Later on we will see a similar geometric illustration of kernel SVM. Moreover, we can establish a connection between SVM loss and the H-score measure, formalized as the following corollary. A proof is provided in Appendix G. **Corollary 1**: _Suppose \(\lambda\geq\lambda_{\mathrm{T}}\), then we have_ \[1-\frac{r_{\max}}{\lambda}\cdot\mathscr{H}(\tilde{f})\leq L^{*}_{\mathsf{ SVM}}(f;\lambda)\leq 1-\frac{r_{\min}}{\lambda}\cdot\mathscr{H}(\tilde{f}),\] _where \(r_{\max}\) and \(r_{\min}\) denote the maximum and minimum positive eigenvalues of the covariance matrix \(\Lambda_{\tilde{f}}\), respectively. 
Specifically, if \(\Lambda_{\tilde{f}}=I\), then we have \(L^{*}_{\mathsf{SVM}}(f;\lambda)=1-\lambda^{-1}\cdot\mathscr{H}(\tilde{f})\)._ As a result, for each normalized feature \(f\) with covariance matrix \(\Lambda_{\tilde{f}}=I\), the SVM loss \(L^{*}_{\mathsf{SVM}}\) measures the informativeness of \(f\) in inferring the label \(Y\). ### _Kernel SVM_ In practice, instead of applying SVM on a given or manually designed feature \(f\), it is more common to implement SVM directly on a kernel \(\mathpzc{k}\). Similar to Theorem 1, we have the following characterization, from which we can interpret KDM as a probabilistic output for kernel SVM. **Theorem 2**: _For each given kernel \(\mathpzc{k}\), there exists a constant \(\lambda_{\mathrm{T}}>0\), such that when \(\lambda\geq\lambda_{\mathrm{T}}\), the SVM prediction is \(\hat{y}^{(\mathpzc{k})}_{\mathsf{SVM}}(x)=\operatorname{sgn}([\tau(f^{*})](x))\), where \(\tau\leftrightarrow\tilde{\mathpzc{k}}\) is the operator associated with centered kernel \(\tilde{\mathpzc{k}}\) [cf. (2) and (3)]. In addition, the SVM prediction coincides with the KDM prediction (cf. Definition 1) obtained from \(\mathpzc{k}\), i.e., we have \(\hat{y}^{(\mathpzc{k})}_{\mathsf{SVM}}(x)=\hat{y}^{(\mathpzc{k})}(x)\) for all \(x\in\mathcal{X}\)._ Let \(\mathcal{V}\) and \(\nu\colon\mathcal{X}\to\mathcal{V}\) denote the inner product space and mapping associated with kernel \(\mathpzc{k}\) (cf. Fact 1), and let \(\tilde{\nu}(x)\triangleq\nu(x)-\mathbb{E}_{P_{X}}[\nu(X)]\). Then, we have \[\langle\mathbb{E}[\tilde{\nu}(X)Y],\tilde{\nu}(x)\rangle_{ \mathcal{V}} =\mathbb{E}[\langle\tilde{\nu}(X),\tilde{\nu}(x)\rangle_{\mathcal{V}} \cdot Y]\] \[=\mathbb{E}\Bigl{[}\tilde{\mathpzc{k}}(X,x)\cdot Y\Bigr{]}, \tag{21}\] which can be rewritten as \[\mathbb{E}\Bigl{[}\tilde{\mathpzc{k}}(X,x)\cdot Y\Bigr{]}\] \[\quad=\mathbb{E}_{P_{X,Y}}\Bigl{[}\tilde{\mathpzc{k}}(X,x) \cdot Y\Bigr{]}\] \[\quad=\mathbb{E}_{P_{X}P_{Y}}\Bigl{[}\tilde{\mathpzc{k}}(X,x) \cdot Y\cdot(1+\varrho\cdot f^{*}(X)\cdot Y)\Bigr{]}\] \[\quad=\mathbb{E}\Bigl{[}\tilde{\mathpzc{k}}(X,x)\Bigr{]}\cdot\mathbb{E}[Y] +\varrho\cdot\mathbb{E}\Bigl{[}\tilde{\mathpzc{k}}(X,x)f^{*}(X)\Bigr{]} \cdot\mathbb{E}\bigl{[}Y^{2}\bigr{]}\] \[\quad=\varrho\cdot\mathbb{E}\Bigl{[}\tilde{\mathpzc{k}}(X,x)f^{* }(X)\Bigr{]}\] \[\quad=\varrho\cdot[\tau(f^{*})](x),\] where, to obtain the second equality, we have used the modal decomposition of \(P_{X,Y}\) (cf. Fact 2). Hence, from Theorem 1 we obtain \[\hat{y}^{(\mathpzc{k})}_{\mathsf{SVM}}(x)=\hat{y}_{\mathsf{SVM}}(x;\nu) =\operatorname{sgn}(\langle\mathbb{E}[\tilde{\nu}(X)Y],\tilde{ \nu}(x)\rangle)\] \[=\operatorname{sgn}\Bigl{(}\mathbb{E}\Bigl{[}\tilde{\mathpzc{k}}(X,x)\cdot Y\Bigr{]}\Bigr{)}\] \[=\operatorname{sgn}([\tau(f^{*})](x)).\] It remains only to establish the equivalence between \(\hat{y}^{(\xi)}_{\text{SVM}}\) and the KDM decision \(\hat{y}^{(\xi)}\). 
From Theorem 2, the final decision \(\hat{y}^{(\xi)}_{\text{SVM}}\) depends on \(\xi\) only through the centered kernel \(\tilde{\xi}\). Moreover, compare Theorem 2 with Property 3, kernel SVM prediction differs from MAP only in applying the operator \(\tau\) on \(f^{*}\). In particular, when the maximal correlation function \(f^{*}\) is an eigenfunction of the corresponding operator \(\tau\leftrightarrow\tilde{\xi}\), i.e., \(\tau(f^{*})=c\cdot f^{*}\) for some \(c>0\), the SVM prediction coincides with the MAP prediction, i.e., \(\hat{y}^{(\xi)}_{\text{SVM}}(x)=\hat{y}_{\text{MAP}}(x)\) for all \(x\in\mathscr{X}\). If we restrict our attention to projection kernels, the kernel SVM decision can be further interpreted as a projection operation on the associated subspace. To see this, let \(\mathscr{G}\) denote a feature subspace of \(\mathscr{F}_{\mathscr{X}}\) spanned by zero-mean features, then from Theorem 1 and Proposition 3, the kernel SVM loss for \(\xi_{\mathbb{S}}\) is \[1-\frac{1}{\lambda}\cdot\mathscr{H}(\mathscr{G})=1-\frac{\varrho^{2}}{2 \lambda}\cdot\big{\|}\Pi(f^{*};\mathscr{G})\big{\|}^{2}_{\mathscr{F}_{\mathscr{ X}}},\] which measures the principal angle between \(f^{*}\) and \(\mathscr{G}\). In addition, the decision rule can be expressed as \[\hat{y}^{(\xi_{\mathbb{S}})}_{\text{SVM}}(x)=\operatorname*{sgn}([\Pi(f^{*}; \mathscr{G})](x)), \tag{22}\] From Proposition 3, \(\Pi(f^{*};\mathscr{G})\) is also the most informative feature in \(\mathscr{G}\). Therefore, kernel SVM on \(\xi_{\mathbb{S}}\) is equivalent to first extracting the most informative feature in \(\mathscr{G}\), and then using the extracted feature to make decision. ### _Relationship to Other Classification Approaches_ #### Iv-C1 Maximum a Posteriori (MAP) Estimation From (22), when the maximal correlation kernel \(\xi^{*}\) is applied, the kernel SVM decision is \(\operatorname*{sgn}(f^{*}(x))\), which coincides with the MAP prediction (cf. Property 3). Since MAP achieves the minimum prediction error, kernel SVM on the maximal correlation kernel also obtains the minimum prediction error. #### Iv-C2 Logistic Regression and Neural Networks We have interpreted SVM as extracting the most informative feature, where the informativeness is measured by H-score. The analysis in [10] has shown that logistic regression is also equivalent to maximizing the H-score, when \(X\) and \(Y\) are weakly independent. Indeed, we can show that SVM and logistic regression lead to the same prediction in a weak dependence regime, which we formalize as follows. A proof is provided in Appendix H. **Proposition 4**: _Suppose \(\varrho=O(\epsilon)\) for some \(\epsilon>0\). For SVM and logistic regression applied on feature \(f\colon\mathscr{X}\to\mathbb{R}^{d}\) with covariance \(\Lambda_{\tilde{f}}=I_{d}\), the optimal parameters satisfy_ \[w_{\text{LR}} =2\lambda\cdot w_{\text{SVM}}+o(\epsilon),\] \[b_{\text{LR}} =2\lambda\cdot b_{\text{SVM}}+o(\epsilon),\] _where \(\lambda\) is the hyperparameter in SVM. In addition, we have \(\hat{y}_{\text{SVM}}(x;f)=\hat{y}_{\text{LR}}(x;f)\) for \(\epsilon\) sufficiently small._ **Remark 2**: _Since H-score can also be directly maximized by implementing the maximal correlation regression [18], a similar connection holds for SVM and maximal correlation regression._ ## V Fisher Kernel We demonstrate that _Fisher kernel_[16, 19] can also be interpreted as a maximal correlation kernel. 
Given a family of distributions \(\pi(\cdot;\theta)\) supported on \(\mathscr{X}\) and parameterized by \(\theta\in\mathbb{R}^{m}\), suppose the score function \(s_{\theta}(x)\triangleq\frac{\partial}{\partial\theta}\log\pi(x;\theta)\) exists. Then, the Fisher kernel is defined as the projection kernel associated with the score function \(s_{\theta}\), i.e., \(\xi_{s_{\theta}}\). Specifically, we consider classification tasks where the joint distribution between data variable \(X\) and label \(Y\) are a mixture of the parameterized forms. Suppose for each class \(Y=y\in\mathscr{Y}\), the data variable \(X\) is generated from \[P_{X|Y}(x|y)=\pi(x;\theta_{y}) \tag{23}\] for some \(\theta_{y}\in\mathbb{R}^{m}\). Then we have the following result, a proof of which is provided in Appendix I. **Theorem 3**: _Suppose \(\|\theta_{y}\|<\epsilon\) for all \(y\in\mathscr{Y}\), and let \(s(x)\triangleq s_{0}(x)\). Then for the joint distribution \(P_{X,Y}=P_{X|Y}P_{Y}\) generated according to (23), we have_ \[P_{X,Y}(x,y) =P_{X}(x)P_{Y}(y)\Big{(}1+\langle s(x),\tilde{\theta}_{y}\rangle \Big{)}+o(\epsilon), \tag{24}\] \[\xi_{s}(x,x^{\prime})=\xi^{*}(x,x^{\prime})+o( \epsilon), \tag{25}\] _where \(\tilde{\theta}_{y}\triangleq\theta_{y}-\mathbb{E}[\theta_{Y}]\) denotes the centered \(\theta_{y}\), and where \(\xi^{*}\) is the maximal correlation kernel defined on \(P_{X,Y}\). In addition, the H-score of \(s\) satisfies_ \[\mathscr{H}(s)=I(X;Y)+o(\epsilon^{2}), \tag{26}\] _where \(I(X;Y)\) denotes the mutual information between \(X\) and \(Y\)._ From (24), the score function \(s\) is equal to the maximal correlation function \(f^{*}\) of \(P_{X,Y}\) up to a linear transformation [cf. (6)], and we have \(P_{Y|X}(y|x)=P_{Y|X}^{(\xi)}(y|x)=P_{Y|X}^{(\xi_{s})}(y|x)+o(\epsilon)\). Therefore, the Fisher kernel is the optimal kernel for tasks generated from (23). ## VI Conclusion In this paper, we study kernel methods from the perspective of feature subspace, where we demonstrate a connection between kernel methods and informative feature extraction problems. With SVM as an example, we illustrate the relationship between kernel methods and neural networks. The theoretical results can help guide practical kernel designs and incorporate kernel methods with feature-based learning approaches. ## Appendix ### _Proof of Property 1_ Suppose \(\mathcal{G}\) is a \(d\)-dimensional feature subspace with an orthonormal basis \(\{g_{1},\ldots,g_{d}\}\) satisfying \(\langle g_{i},g_{j}\rangle=\mathbb{1}_{\{i=j\}}\). Let \(g(x)\triangleq(g_{1}(x),g_{2}(x),\ldots,g_{d}(x))^{\mathrm{T}}\). Then we have \(\Lambda_{g}=I_{d}\) and \(\xi_{\mathcal{G}}(x,x^{\prime})=\langle g(x),g(x^{\prime})\rangle\). It follows that \[[\tau(f)](x) =\mathbb{E}[\xi_{\mathcal{G}}(X,x)f(X)]\] \[=\mathbb{E}[\langle g(x),g(X)\rangle\cdot f(X)]\] \[=\sum_{i=1}^{d}\mathbb{E}[f(X)\cdot g_{i}(X)]\cdot g_{i}(x)\] \[=\sum_{i=1}^{d}\langle f,g_{i}\rangle_{\mathcal{F}_{\chi}}\cdot g _{i}(x),\] which implies that \(\tau(f)=\sum_{i=1}^{d}\langle f,g_{i}\rangle_{\mathcal{F}_{\chi}}\cdot g_{i}\). From the orthogonality principle, it suffices to prove that \(\langle f-\tau(f),\hat{g}\rangle_{\mathcal{F}_{\chi}}=0\) for all \(f\in\mathcal{F}_{\mathcal{X}}\) and \(\hat{g}\in\mathcal{G}\). To this end, suppose \(\hat{g}=\sum_{i=1}^{d}c_{i}\cdot g_{i}\) for some \(c_{1},\ldots,c_{d}\in\mathbb{R}\). 
Then, we have \[\langle\tau(f),\hat{g}\rangle_{\mathcal{F}_{\chi}} =\left\langle\sum_{i=1}^{d}\langle f,g_{i}\rangle_{\mathcal{F}_{ \chi}}\cdot g_{i},\sum_{j=1}^{d}c_{j}\cdot g_{j}\right\rangle_{\mathcal{F}_{ \chi}}\] \[=\sum_{i=1}^{d}\sum_{j=1}^{d}\langle f,g_{i}\rangle_{\mathcal{F}_{ \chi}}\cdot c_{j}\cdot\langle g_{i},g_{j}\rangle_{\mathcal{F}_{\chi}}\] \[=\sum_{i=1}^{d}\sum_{j=1}^{d}\langle f,g_{i}\rangle_{\mathcal{F}_{ \chi}}\cdot c_{j}\cdot\mathbb{1}_{\{i=j\}}\] \[=\sum_{i=1}^{d}c_{i}\cdot\langle f,g_{i}\rangle_{\mathcal{F}_{ \chi}}\] \[=\left\langle f,\sum_{i=1}^{d}c_{i}\cdot g_{i}\right\rangle_{ \mathcal{F}_{\chi}}\] \[=\langle f,\hat{g}\rangle_{\mathcal{F}_{\chi}},\] which completes the proof. ### _Proof of Proposition 2_ First, note that for each \(y\in\mathcal{Y}\), \[\mathbb{E}\Big{[}\tilde{f}(X)\Big{|}Y=y\Big{]} =\mathbb{E}[f(X)|Y=y]-\mathbb{E}[f(X)]\] \[=\sum_{x\in\mathcal{X}}[P_{X|Y}(x|y)-P_{X}(x)]\cdot f(x).\] Therefore, we have \[2\cdot\mathscr{H}(f)\] \[=\sum_{y\in\mathcal{Y}}P_{Y}(y)\cdot\Big{(}\mathbb{E}\Big{[} \tilde{f}(X)\Big{|}Y=y\Big{]}\Big{)}^{\mathrm{T}}\Lambda_{f}^{-1}\mathbb{E} \Big{[}\tilde{f}(X)\Big{|}Y=y\Big{]}\] \[=\sum_{y\in\mathcal{Y}}P_{Y}(y)\cdot\sum_{x\in\mathcal{X}}\sum_{ x^{\prime}\in\mathcal{X}}\bigg{(}[P_{X|Y}(x|y)-P_{X}(x)]\] \[\quad\cdot\Big{(}f^{\mathrm{T}}(x)\Lambda_{f}^{-1}f(x^{\prime}) \Big{)}\cdot[P_{X|Y}(x^{\prime}|y)-P_{X}(x^{\prime})]\bigg{)}\] \[=\mathbb{E}_{P_{XX^{\prime}}}[\xi_{\mathcal{G}}(X,X^{ \prime})]-\mathbb{E}_{P_{X}P_{X^{\prime}}}[\xi_{\mathcal{G}}(X,X^{\prime})],\] which completes the proof. ### _Proof of Proposition 3_ We start with the first equality. Suppose \(P_{X,Y}\) has modal decomposition (cf. Proposition 1) \[P_{X,Y}(x,y)=P_{X}(x)P_{Y}(y)\cdot[1+\varrho\cdot f^{*}(x)\cdot g^{*}(y)],\] where \(g^{*}\) satisfies \(\mathbb{E}[g^{*}(Y)]=0\) and \(\mathbb{E}\big{[}(g^{*}(Y))^{2}\big{]}=1\). Then, we obtain \[P_{X|Y=y}(x)=P_{X}(x)\cdot[1+\varrho\cdot f^{*}(x)\cdot g^{*}(y)],\] and the \(P_{X,X^{\prime}}\) as defined in (13) can be expressed as \[P_{XX^{\prime}} (x,x^{\prime})\] \[=\sum_{y\in\mathcal{Y}}P_{Y}(y)P_{X|Y=y}(x)P_{X|Y=y}(x^{\prime})\] \[=P_{X}(x)P_{X}(x^{\prime})\cdot\left[\sum_{y\in\mathcal{Y}}P_{Y}( y)(1+\varrho\cdot f^{*}(x)\cdot g^{*}(y))\right.\] \[\quad\quad\cdot(1+\varrho\cdot f^{*}(x^{\prime})\cdot g^{*}(y))\right]\] \[=P_{X}(x)P_{X}(x^{\prime})\cdot\big{[}1+\varrho^{2}\cdot f^{*}( x)\cdot f^{*}(x^{\prime})\big{]},\] where the last equality follows from the fact that \(\mathbb{E}[g^{*}(Y)]=0,\mathbb{E}\big{[}(g^{*}(Y))^{2}\big{]}=1\). Note that since \(P_{X^{\prime}}=P_{X}\), we have \[P_{XX^{\prime}} (x,x^{\prime})-P_{X}(x)P_{X^{\prime}}(x^{\prime})\] \[=P_{XX^{\prime}}(x,x^{\prime})-P_{X}(x)P_{X}(x^{\prime})\] \[=P_{X}(x)P_{X^{\prime}}(x^{\prime})\cdot\varrho^{2}\cdot f^{*}(x) \cdot f^{*}(x^{\prime}). \tag{27}\] In addition, let \(\tau\leftrightarrow\xi_{\mathcal{G}}\) denote the operator associated with \(\xi_{\mathcal{G}}\), then from Property 1 we have \(\tau(f^{*})=\Pi(f^{*};\mathcal{G})\). 
In addition, from the orthogonality principle, we have \(\langle f^{*}-\tau(f^{*}),\tau(f^{*})\rangle_{\mathcal{F}_{\chi}}=0\) and thus \[\langle f^{*},\tau(f^{*})\rangle_{\mathcal{F}_{\chi}} =\langle\tau(f^{*}),\tau(f^{*})\rangle_{\mathcal{F}_{\chi}}+ \langle f^{*}-\tau(f^{*}),\tau(f^{*})\rangle_{\mathcal{F}_{\chi}}\] \[=\langle\tau(f^{*}),\tau(f^{*})\rangle_{\mathcal{F}_{\chi}}\] \[=\big{\|}\tau(f^{*})\big{\|}_{\mathcal{F}_{\chi}}^{2}=\big{\|} \Pi(f^{*};\mathcal{G})\big{\|}_{\mathcal{F}_{\chi}}^{2}. \tag{28}\] Hence, the first equality of (14) can be obtained from \[\mathscr{H}(\mathcal{G}) =\frac{\varrho^{2}}{2}\cdot\big{(}\mathbb{E}_{P_{XX^{\prime}}}[\xi_{\mathcal{G}}(X,X^{\prime})]-\mathbb{E}_{P_{X}P_{X^{\prime}}}[ \xi_{\mathcal{G}}(X,X^{\prime})]\big{)}\] \[=\frac{\varrho^{2}}{2}\cdot\mathbb{E}_{P_{X}P_{X^{\prime}}}[f^{*}( X)\cdot f^{*}(X^{\prime})\cdot\xi_{\mathcal{G}}(X,X^{\prime})]\] \[=\frac{\varrho^{2}}{2}\cdot\sum_{x\in\mathfrak{X}}P_{X}(x)\cdot f^{*}( x)\cdot\mathbb{E}[f^{*}(X)\cdot\mathpzc{K}_{\mathcal{G}}(X,x)]\] \[=\frac{\varrho^{2}}{2}\cdot\sum_{x\in\mathfrak{X}}P_{X}(x)\cdot f^ {*}(x)\cdot[\tau(f^{*})](x)\] \[=\frac{\varrho^{2}}{2}\cdot\langle f^{*},\tau(f^{*})\rangle_{ \mathpzc{F}_{\mathfrak{X}}}\] \[=\frac{\varrho^{2}}{2}\cdot\left\|\Pi(f^{*};\mathcal{G})\right\|_ {\mathpzc{F}_{\mathfrak{X}}}^{2},\] where the first equality follows from Proposition 2, where the second equality follows from (27), and where the last equality follows from (28). To obtain the second and third equalities of (14), it suffices to note that for all \(f\in\mathcal{G}\), we have \[\left\|\Pi(f^{*};\mathrm{span}\{f\})\right\|_{\mathpzc{F}_{ \mathfrak{X}}}^{2}\] \[\quad=\left\|f^{*}\right\|_{\mathpzc{F}_{\mathfrak{X}}}^{2}- \left\|f^{*}-\Pi(f^{*};\mathrm{span}\{f\})\right\|_{\mathpzc{F}_{\mathfrak{X}}} ^{2}\] \[\quad\leq\left\|f^{*}\right\|_{\mathpzc{F}_{\mathfrak{X}}}^{2}- \left\|f^{*}-\Pi(f^{*};\mathcal{G})\right\|_{\mathpzc{F}_{\mathfrak{X}}}^{2}\] \[\quad=\left\|\Pi(f^{*};\mathcal{G})\right\|_{\mathpzc{F}_{ \mathfrak{X}}}^{2}\] where the equalities follow from the orthogonality principle, and where the inequality follows from the definition of projection [cf. (1)]. In addition, it can be verified that the inequality holds with equality when \(f=\Pi(f^{*};\mathcal{G})\). Hence, for all \(f\in\mathcal{G}\), we have \[\frac{\mathscr{H}(f)}{\mathscr{H}(\mathcal{G})} =\frac{\mathscr{H}(\mathrm{span}\{f\})}{\mathscr{H}(\mathcal{G})}\] \[=\frac{\left\|\Pi(f^{*};\mathrm{span}\{f\})\right\|_{\mathpzc{F} _{\mathfrak{X}}}^{2}}{\left\|\Pi(f^{*};\mathcal{G})\right\|_{\mathpzc{F}_{ \mathfrak{X}}}^{2}}\leq 1=\frac{\mathscr{H}(\Pi(f^{*};\mathcal{G}))}{ \mathscr{H}(\mathcal{G})},\] which completes the proof. ### _Proof of Property 2_ It suffices to prove that \(P_{Y|X}=P_{Y|X}^{(\mathpzc{K}^{*})}\). To this end, suppose \(P_{X,Y}\) satisfies the modal decomposition (6), and let \(f^{*}(x)\triangleq(f_{1}^{*}(x),\ldots,f_{K}^{*}(x))^{\mathrm{T}},g^{*}(y) \triangleq(g_{1}^{*}(y),\ldots,g_{K}^{*}(y))^{\mathrm{T}}\), \(\Sigma\triangleq\mathrm{diag}(\sigma_{1},\ldots,\sigma_{K})\). Then, it can be verified that \(\mathbb{E}[f_{i}^{*}(X)|Y=y]=\sigma_{i}\cdot g_{i}^{*}(y)\) for all \(i=1,\ldots,K\), which implies that \(\mathbb{E}[f^{*}(X)|Y=y]=\Sigma\cdot g^{*}(y)\). Since \(\Lambda_{f^{*}}=I\), we have \(\mathpzc{K}^{*}(x,x^{\prime})=\langle f^{*}(x),f^{*}(x^{\prime})\rangle\), for all \(x,x^{\prime}\in\mathfrak{X}\). 
Hence, for all \(x\in\mathfrak{X},y\in\mathfrak{Y}\), we have \[P_{Y|X}^{(\mathpzc{K}^{*})}(y|x) =P_{Y}(y)\cdot(1+\mathbb{E}[\mathpzc{K}^{*}(X,x)|Y=y])\] \[=P_{Y}(y)\cdot(1+\mathbb{E}[\langle f^{*}(X),f^{*}(x)\rangle|Y=y])\] \[=P_{Y}(y)\cdot(1+\langle\mathbb{E}[f^{*}(X)|Y=y],f^{*}(x)\rangle)\] \[=P_{Y}(y)\cdot(1+\langle\Sigma\cdot g^{*}(y),f^{*}(x)\rangle)\] \[=P_{Y}(y)\cdot\left(1+\sum_{i=1}^{K}\sigma_{i}\cdot f_{i}^{*}(x) \cdot g_{i}^{*}(y)\right)\] \[=P_{Y|X}(y|x),\] which completes the proof. ### _Proof of Property 3_ Our proof will make use of the following fact. _Fact 2:_ If \(\mathfrak{Y}=\{-1,1\}\) and \(P_{Y}(y)\equiv\frac{1}{2}\), the modal decomposition of \(P_{X,Y}\) (cf. Proposition 1) can be written as \[P_{X,Y}(x,y)=P_{X}(x)P_{Y}(y)(1+\varrho\cdot f^{*}(x)\cdot y),\] where \(\varrho\) is the maximal correlation coefficient, and \(f^{*}\) is the maximal correlation function with \(\mathbb{E}\big{[}(f^{*}(X))^{2}\big{]}=1\). From Fact 2, we have \[P_{Y|X}(y|x) =P_{Y}(y)(1+y\cdot\varrho\cdot f^{*}(x))\] \[=\frac{1}{2}\cdot(1+y\cdot\varrho\cdot f^{*}(x)),\] which implies that \[\hat{y}_{\mathsf{MAP}}(x) =\operatorname*{arg\,max}_{y\in\mathfrak{Y}}P_{Y|X}(y|x)\] \[=\operatorname*{arg\,max}_{y\in\mathfrak{Y}}\;\left[y\cdot f^{*}( x)\right]=\mathrm{sgn}(f^{*}(x)).\] ### _Proof of Theorem 1_ Our proof will make use of the following simple fact. _Fact 3:_ Given a random variable \(Z\) taking values from \(\mathpzc{Z}\), let \(z_{\min}\) denote the minimum element of \(\mathpzc{Z}\), then we have \[\mathbb{E}[Z]\leq\mathbb{E}\big{[}Z^{+}\big{]}\leq\mathbb{E}[Z]+(z_{\min})^{-},\] where \(x^{-}\triangleq\max\{-x,0\}\). From Fact 3, we obtain \[\mathbb{E}[\ell_{\mathrm{hinge}}(Y,\langle w,f(X)\rangle+b)]\] \[\quad=\mathbb{E}\Big{[}\big{(}1-Y\cdot\langle w,f(X)\rangle-Y\cdot b\big{)}^{+}\Big{]}\] \[\quad\geq 1-\langle w,\mathbb{E}[Y\cdot f(X)]\rangle-\mathbb{E}[Y]\cdot b\] \[\quad=1-\langle w,\mathbb{E}[Y\cdot f(X)]\rangle. 
\tag{29}\] Therefore, for all \(w,b\), we have \[L_{\mathsf{SVM}}(f,w,b;\lambda) =\mathbb{E}[\ell_{\mathrm{hinge}}(Y,\langle w,f(X)\rangle+b)]+ \frac{\lambda}{2}\cdot\|w\|^{2}\] \[\quad\geq 1-\langle w,\mathbb{E}[Y\cdot f(X)]\rangle+\frac{\lambda}{2} \cdot\|w\|^{2}\] \[\quad=1-\frac{1}{2\lambda}\cdot\|\mathbb{E}[f(X)Y]\|^{2}\] \[\quad\quad\quad+\frac{\lambda}{2}\cdot\left\|w-\frac{1}{\lambda} \cdot\mathbb{E}[Y\cdot f(X)]\right\|^{2}\] \[\quad\geq 1-\frac{1}{2\lambda}\cdot\|\mathbb{E}[f(X)Y]\|^{2}\] \[\quad=\hat{L}(f;\lambda).\] Hence, we have \[L_{\mathsf{SVM}}^{*}(f;\lambda)=L_{\mathsf{SVM}}(f;w\mathsf{SVM},b\mathsf{SVM },\lambda)\geq\hat{L}(f;\lambda).\] Let \(w^{\prime}\triangleq\frac{1}{\lambda}\cdot\mathbb{E}[Y\cdot f(X)],b^{\prime} \triangleq-\langle w^{\prime},\mathbb{E}[f(X)]\rangle\), then we have \[\langle w^{\prime},f(X)\rangle+b^{\prime}=\Big{\langle}w^{\prime},\tilde{f}(X) \Big{\rangle}.\] Therefore, from the upper bound in Fact 3, we have \[\mathbb{E}[\ell_{\mathrm{hinge}}(Y,\langle w^{\prime},f(X)\rangle+b^{ \prime})]\] \[\qquad\leq 1-\mathbb{E}\Big{[}Y\cdot\Big{\langle}w^{\prime},\tilde{f} (X)\Big{\rangle}\Big{]}\] \[\qquad\qquad+\Big{(}\min_{i}\Bigl{\{}1-y_{i}\cdot\Big{\langle}w^ {\prime},\tilde{f}(x_{i})\Big{\rangle}\Bigr{\}}\Bigr{)}^{-}\] \[\qquad\leq 1-\Big{\langle}w^{\prime},\mathbb{E}\Big{[}Y\cdot \tilde{f}(X)\Big{]}\Big{\rangle}+\left(1-\frac{\lambda_{\mathrm{T}}}{ \lambda}\right)^{-} \tag{30}\] \[\qquad=1-\lambda\|w^{\prime}\|^{2}+\left(1-\frac{\lambda_{ \mathrm{T}}}{\lambda}\right)^{-} \tag{31}\] where to obtain the last inequality, we have used the fact that \[\min_{i}\Bigl{\{}1-y_{i}\cdot\Big{\langle}w^{\prime},\tilde{f}(x_{i})\Big{\rangle} \Bigr{\}}\geq 1-\frac{\lambda_{\mathrm{T}}}{\lambda}\] since for each \(i\), we have \[1-y_{i}\cdot\Big{\langle}w^{\prime},\tilde{f}(x_{i})\Big{\rangle} \geq 1-\Big{|}\Big{\langle}w^{\prime},\tilde{f}(x_{i})\Big{\rangle} \Big{|}\] \[\geq 1-M\cdot\|w^{\prime}\|\] \[=1-\frac{M}{\lambda}\cdot\left\|\mathbb{E}[Y\cdot f(X)]\right\|\] \[=1-\frac{\lambda_{\mathrm{T}}}{\lambda}.\] Therefore \[L^{*}_{\mathsf{SVM}}(f;\lambda) =\min_{w,b}L_{\mathsf{SVM}}(f,w,b;\lambda)\] \[\leq L_{\mathsf{SVM}}(f,w^{\prime},b^{\prime};\lambda)\] \[\leq 1-\lambda\|w^{\prime}\|^{2}+\left(1-\frac{\lambda_{ \mathrm{T}}}{\lambda}\right)^{-}+\frac{\lambda}{2}\|w^{\prime}\|^{2}\] \[\leq 1-\frac{\lambda}{2}\|w^{\prime}\|^{2}+\left(1-\frac{\lambda_{ \mathrm{T}}}{\lambda}\right)^{-}\] \[=\hat{L}(f;\lambda)+\left(1-\frac{\lambda_{\mathrm{T}}}{\lambda} \right)^{-}\] \[=\hat{L}(f;\lambda)+\left(\frac{\lambda_{\mathrm{T}}}{\lambda}-1 \right)^{+}.\] Finally, if \(\lambda\geq\lambda_{T}\), it can be readily verified that \(L^{*}_{\mathsf{SVM}}(f;\lambda)=\hat{L}(f;\lambda)=L_{\mathsf{SVM}}(f,w^{ \prime},b^{\prime};\lambda)\). 
As a result, the optimal solution is given by \[w_{\mathsf{SVM}}=w^{\prime}=\frac{1}{\lambda}\cdot\mathbb{E}[Y \cdot f(X)],\] \[b_{\mathsf{SVM}}=b^{\prime}=-\langle w^{\prime},\mathbb{E}[f(X) ]\rangle=-\langle w_{\mathsf{SVM}},\mathbb{E}[f(X)]\rangle,\] and we have \[\langle w_{\mathsf{SVM}},f(x)\rangle+b_{\mathsf{SVM}} =\langle w_{\mathsf{SVM}},\tilde{f}(x)\rangle\] \[=\frac{1}{\lambda}\cdot\Big{\langle}\mathbb{E}[Y\cdot f(X)], \tilde{f}(x)\Big{\rangle}\] \[=\frac{1}{\lambda}\cdot\Big{\langle}\mathbb{E}\Big{[}\tilde{f}(X )\cdot Y\Big{]},\tilde{f}(x)\Big{\rangle}.\] Therefore, the SVM prediction is given by \[\hat{y}_{\mathsf{SVM}}(x;f,\lambda) =\mathrm{sgn}(\langle w_{\mathsf{SVM}},f(x)\rangle+b_{\mathsf{SVM}})\] \[=\mathrm{sgn}\Big{(}\Big{\langle}\mathbb{E}\Big{[}\tilde{f}(X)Y \Big{]},\tilde{f}(x)\Big{\rangle}\Big{)}.\] To obtain (20), note that for each \(x\in\mathcal{X}\), we have \[\|f(x)-\mathbb{E}[f(X)|Y=-1]\|^{2}-\|f(x)-\mathbb{E}[f(X)|Y=1]\|^ {2}\] \[=\Big{\|}\tilde{f}(x)-\mathbb{E}\Big{[}\tilde{f}(X)\Big{|}Y=-1 \Big{]}\Big{\|}^{2}-\Big{\|}\tilde{f}(x)-\mathbb{E}\Big{[}\tilde{f}(X)\Big{|}Y=1 \Big{]}\Big{\|}^{2}\] \[=2\cdot\Big{\langle}\tilde{f}(x),\mathbb{E}\Big{[}\tilde{f}(X) \Big{|}Y=1\Big{]}-\mathbb{E}\Big{[}\tilde{f}(X)\Big{|}Y=-1\Big{]}\Big{\rangle}\] \[\qquad+\Big{\|}\mathbb{E}\Big{[}\tilde{f}(X)\Big{|}Y=-1\Big{]} \Big{\|}^{2}-\Big{\|}\mathbb{E}\Big{[}\tilde{f}(X)\Big{|}Y=1\Big{]}\Big{\|}^{2}\] \[=4\cdot\Big{\langle}\tilde{f}(x),\mathbb{E}\Big{[}\tilde{f}(X)Y \Big{]}\Big{\rangle},\] where we have used the facts that \[\mathbb{E}\Big{[}\tilde{f}(X)Y\Big{]} =\mathbb{E}\Big{[}Y\cdot\mathbb{E}\Big{[}\tilde{f}(X)|Y\Big{]}\Big{]}\] \[=\frac{1}{2}\Big{(}\mathbb{E}\Big{[}\tilde{f}(X)\Big{|}Y=1\Big{]} -\mathbb{E}\Big{[}\tilde{f}(X)\Big{|}Y=-1\Big{]}\Big{)},\] and \[0 =\mathbb{E}\Big{[}\tilde{f}(X)\Big{]}\] \[=\mathbb{E}\Big{[}\mathbb{E}\Big{[}\tilde{f}(X)|Y\Big{]}\Big{]}\] \[=\frac{1}{2}\Big{(}\mathbb{E}\Big{[}\tilde{f}(X)\Big{|}Y=1\Big{]} +\mathbb{E}\Big{[}\tilde{f}(X)\Big{|}Y=-1\Big{]}\Big{)}.\] ### _Proof of Corollary 1_ From Theorem 1, when \(\lambda\geq\lambda_{\mathrm{T}}\), we have \[L^{*}_{\mathsf{SVM}}(f;\lambda)=\hat{L}(f;\lambda)=1-\frac{1}{2\lambda}\cdot \left\|\mathbb{E}[f(X)\cdot Y]\right\|^{2}.\] Therefore, it suffices to prove that \[r_{\min}\cdot\mathscr{H}(\tilde{f})\leq\frac{1}{2}\cdot\left\|\mathbb{E}[f(X)Y] \right\|^{2}\leq r_{\max}\cdot\mathscr{H}(\tilde{f}). \tag{32}\] To this end, note that we have \[\left\|\mathbb{E}[f(X)Y]\right\|^{2}=\left\|\mathbb{E}\Big{[} \tilde{f}(X)Y\Big{]}\right\|^{2} =\mathbb{E}\bigg{[}\left\|Y\cdot\mathbb{E}\Big{[}\tilde{f}(X)Y \Big{]}\right\|^{2}\bigg{]}\] \[=\mathbb{E}\bigg{[}\left\|\mathbb{E}\Big{[}\tilde{f}(X)\Big{|}Y \Big{]}\right\|^{2}\bigg{]},\] where the last equality follows from the fact that for zero-mean \(f\) and \(Y\) uniformly distributed on \(\{1,-1\}\), we have \(\mathbb{E}[f(X)|Y=y]=y\cdot\mathbb{E}[f(X)\cdot Y]\) for \(y\in\mathcal{Y}\). In addition, for all \(v\in\mathrm{span}\Big{\{}\tilde{f}(x)\colon x\in\mathcal{X}\Big{\}}\), we have \[r_{\min}\leq\frac{\|v\|^{2}}{\left\|\Lambda_{\tilde{f}}^{-\frac{1}{2}}v\right\|^{2} }\leq r_{\max}.\] Therefore, we obtain \[r_{\min}\cdot\left\|\Lambda_{\tilde{f}}^{-\frac{1}{2}}\mathbb{E}\Big{[}\tilde{f}(X )\Big{|}Y\Big{]}\right\|^{2}\leq\left\|\mathbb{E}\Big{[}\tilde{f}(X)\Big{|}Y \Big{]}\right\|^{2}\] \[\leq r_{\max}\cdot\left\|\Lambda_{\tilde{f}}^{-\frac{1}{2}}\mathbb{E}\Big{[} \tilde{f}(X)\Big{|}Y\Big{]}\right\|^{2}. 
\tag{33}\] Taking the expectation of (33) over \(P_{Y}\) yields (32). ### _Proof of Proposition 4_ First, note that logistic regression can be regarded as a special case of softmax regression with the correspondences \(w_{\mathsf{LR}}=w(1)-w(-1),b_{\mathsf{LR}}=b(1)-b(-1)\), where \(w(y)\in\mathbb{R}^{d}\) and \(b(y)\in\mathbb{R}\) are the weights and bias for softmax regression, respectively. In addition, from [10, Theorem 2], the centered weight \(\tilde{w}(y)\triangleq w(y)-\mathbb{E}[w(Y)]\) and \(b(y)\) are \[\tilde{w}(y) =\Lambda_{f}^{-1}\mathbb{E}\Big{[}\tilde{f}(X)\Big{|}Y=y\Big{]}+o(\epsilon)\] \[=\mathbb{E}\Big{[}\tilde{f}(X)\Big{|}Y=y\Big{]}+o(\epsilon),\] \[b(y) =-\langle\mathbb{E}[f(X)],\tilde{w}(y)\rangle+o(\epsilon).\] Therefore, we obtain \[w_{\mathsf{LR}} =w(1)-w(-1)\] \[=\tilde{w}(1)-\tilde{w}(-1)\] \[=2\cdot\mathbb{E}\Big{[}Y\cdot\tilde{f}(X)\Big{]}+o(\epsilon)\] \[=2\lambda\cdot w_{\mathsf{SVM}}+o(\epsilon)\] and \[b_{\mathsf{LR}} =b(1)-b(-1)\] \[=-\langle\mathbb{E}[f(X)],\tilde{w}(1)-\tilde{w}(-1)\rangle+o(\epsilon)\] \[=-\langle\mathbb{E}[f(X)],w_{\mathsf{LR}}\rangle+o(\epsilon)\] \[=-2\lambda\cdot\langle\mathbb{E}[f(X)],w_{\mathsf{SVM}}\rangle+o (\epsilon)\] \[=2\lambda\cdot b_{\mathsf{SVM}}+o(\epsilon),\] which implies that \[\langle w_{\mathsf{LR}},f(x)\rangle+b_{\mathsf{LR}}=2\lambda\cdot(\langle w_ {\mathsf{SVM}},f(x)\rangle+b_{\mathsf{SVM}})+o(\epsilon).\] From (11) and (12), we have \(\hat{y}_{\mathsf{SVM}}(x;f)=\hat{y}_{\mathsf{LR}}(x;f)\) for \(\epsilon\) sufficiently small. ### _Proof of Theorem 3_ To begin, note that we have \[P_{X|Y}(x|y) =\pi(x;\theta_{y})\] \[=\pi(x;0)+\bigg{\langle}\left.\frac{\partial}{\partial\theta}\pi( x;\theta)\right|_{\theta=0},\theta_{y}\bigg{\rangle}+o(\epsilon)\] \[=\pi(x;0)(1+\langle s(x),\theta_{y}\rangle)+o(\epsilon),\] which implies that \[P_{X}(x) =\sum_{y\in\mathbb{Y}}P_{X|Y}(x|y)P_{Y}(y)\] \[=\sum_{y\in\mathbb{Y}}\pi(x;\theta_{y})P_{Y}(y)\] \[=\pi(x;0)(1+\langle s(x),\mathbb{E}[\theta_{Y}]\rangle)+o(\epsilon).\] Therefore, we obtain \[\frac{P_{X|Y}(x|y)}{P_{X}(x)} =\frac{1+\langle s(x),\theta_{y}\rangle}{1+\langle s(x),\mathbb{E }[\theta_{Y}]\rangle}+o(\epsilon)\] \[=(1+\langle s(x),\theta_{y}\rangle)\cdot[1-\langle s(x),\mathbb{ E}[\theta_{Y}]\rangle+o(\epsilon)]\] \[=1+\big{\langle}s(x),\tilde{\theta}_{y}\big{\rangle}+o(\epsilon),\] where we have used the fact that \(\|\mathbb{E}[\theta_{Y}]\|=O(\epsilon)\) since \(\|\theta_{y}\|=O(\epsilon)\) for all \(y\in\mathbb{Y}\). Hence, we have \[P_{X,Y}(x,y) =P_{X|Y}(x|y)P_{Y}(y)\] \[=P_{X}(x)P_{Y}(y)\Big{(}1+\langle s(x),\tilde{\theta}_{y}\rangle \Big{)}+o(\epsilon). \tag{34}\] Without loss of generality, we assume \(\Lambda_{s}\) is non-singular, since otherwise we can reparameterize \(\theta\) to a vector with dimension less than \(m\). Comparing (34) with (6), we have \(K=m\). In addition, there exists \(A\in\mathbb{R}^{m\times m}\) such that \[s(x)=A\cdot f^{*}(x)+o(\epsilon),\quad\text{for all }x\in\mathscr{X}, \tag{35}\] Then, we can readily verify (25) from definition. Finally, (26) follows from (35) and the fact that \[\mathscr{H}(f^{*})=\frac{1}{2}\sum_{i=1}^{K}\sigma_{i}^{2}=I(X;Y)+o(\epsilon^{ 2}),\] where the second equality follows from the modal decomposition of mutual information (see, e.g., [11, Lemma 16]).
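As a closing sanity check, here is a short numerical sketch (ours, not part of the paper; it reuses the SVD construction of the maximal correlation function shown earlier, and all names are our assumptions). It verifies Property 3 and the closed-form SVM decision (19) for a balanced binary label: the decision built from \(f^{*}\) coincides with the MAP decision.

```python
import numpy as np

rng = np.random.default_rng(1)

# Balanced binary Y as in (16); random class-conditional pmfs over |X| = 6.
PX_given_Y = rng.dirichlet(np.ones(6), size=2)   # rows: P_{X|Y=-1}, P_{X|Y=+1}
P = 0.5 * PX_given_Y.T                           # joint pmf P[x, y_index]
Px = P.sum(axis=1)
y_vals = np.array([-1.0, 1.0])

# Maximal correlation function f* and coefficient rho (Fact 2) via SVD.
B = P / np.sqrt(np.outer(Px, [0.5, 0.5]))
U, s, _ = np.linalg.svd(B, full_matrices=False)
rho, f_star = s[1], U[:, 1] / np.sqrt(Px)
assert 0.0 < rho <= 1.0 + 1e-9

# Property 3: the MAP decision equals sgn(f*), up to the SVD sign ambiguity.
map_dec = y_vals[P.argmax(axis=1)]
assert np.all(map_dec == np.sign(f_star)) or np.all(map_dec == -np.sign(f_star))

# Theorem 1 / Theorem 2: for lambda >= lambda_T, the SVM decision on the
# one-dimensional feature f* is sgn(E[f*(X) Y] * f*(x)); this expression is
# invariant to the sign of f*, and it matches MAP.
EfY = f_star @ (P @ y_vals)                      # E[f*(X) Y] (= rho here)
assert np.all(np.sign(EfY * f_star) == map_dec)
```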
2310.17533
Decoding The Digital Fuku: Deciphering Colonial Legacies to Critically Assess ChatGPT in Dominican Education
Educational disparities within the Dominican Republic (DR) have long-standing origins rooted in economic, political, and social inequity. Addressing these challenges has necessarily called for capacity building with respect to educational materials, high-quality instruction, and structural resourcing. Generative AI tools like ChatGPT have begun to pique the interest of Dominican educators due to their perceived potential to bridge these educational gaps. However, a substantial body of AI fairness literature has documented ways AI disproportionately reinforces power dynamics reflective of jurisdictions driving AI development and deployment policies, collectively termed the AI Global North. As such, indiscriminate adoption of this technology for DR education, even in part, risks perpetuating forms of digital coloniality. Therefore, this paper centers embracing AI-facilitated educational reform by critically examining how AI-driven tools like ChatGPT in DR education may replicate facets of digital colonialism. We provide a concise overview of 20th-century Dominican education reforms following the 1916 US occupation. Then, we employ identified neocolonial aspects historically shaping Dominican education to interrogate the perceived advantages of ChatGPT for contemporary Dominican education, as outlined by a Dominican scholar. This work invites AI Global North & South developers, stakeholders, and Dominican leaders alike to exercise a relational contextualization of data-centric epistemologies like ChatGPT to reap its transformative benefits while remaining vigilant of safeguarding Dominican digital sovereignty.
Anaelia Ovalle
2023-10-26T16:20:35Z
http://arxiv.org/abs/2310.17533v2
# Decoding The Digital Fuku: Deciphering Colonial Legacies to Critically Assess ChatGPT in Dominican Education ###### Abstract Educational disparities within the Dominican Republic (DR) have long-standing origins rooted in economic, political, and social inequity.1 Addressing these challenges has necessarily called for capacity building with respect to educational materials, high-quality instruction, and structural resourcing. Generative AI tools like ChatGPT have begun to pique the interest of Dominican educators due to their perceived potential to bridge these educational gaps. However, a substantial body of AI fairness literature has documented ways AI disproportionately reinforces power dynamics reflective of jurisdictions driving AI development and deployment policies, collectively termed the _AI Global North_. As such, indiscriminate adoption of this technology for DR education, even in part, risks perpetuating forms of digital coloniality. Therefore, this paper centers embracing AI-facilitated educational reform by critically examining how AI-driven tools like ChatGPT in DR education may replicate facets of digital colonialism. We provide a concise overview of 20th-century Dominican education reforms following the 1916 US occupation. Then, we employ identified neocolonial aspects historically shaping Dominican education to interrogate the perceived advantages of ChatGPT for contemporary Dominican education, as outlined by a Dominican scholar. This work invites AI Global North & South developers, stakeholders, and Dominican leaders alike to exercise a relational contextualization of data-centric epistemologies like ChatGPT to reap its transformative benefits while remaining vigilant of safeguarding Dominican digital sovereignty. Footnote 1: [https://www.education-inequalities.org/countries/dominican-rep](https://www.education-inequalities.org/countries/dominican-rep)

## 1 Introduction

In Dominican lore, _fuku_ describes the curse brought to Hispaniola on slave ships by Spanish imperialists in their pursuit of the New World. It is a persistent doom cast upon the island and its inhabitants since colonization, reverberating throughout generations as manifestations of political, economic, social, and personal misery [19; 18; 34; 36]. We ground this paper in the colonial imaginary to center ways interlocking forms of coloniality have endured across dimensions of Dominican life. The educational system in the Dominican Republic (DR) is no exception, historically grappling with instability and inequality stemming from historical colonial influences that persist to this day. Colonial forces exhibit a fluid nature, manifesting in the contemporary digital sphere as digital coloniality [43]. This is marked by the prevalence of data-centric epistemologies and technologies which entrench power asymmetries [32], leading critical AI fairness researchers to raise concerns about the potential for epistemic violence in AI-driven technologies like ChatGPT. These technologies often exhibit a bias toward predominantly white and Anglo-Saxon perspectives, favoring AI Global North contexts while marginalizing underrepresented voices [35; 40; 7]. Contextualized to the Dominican Republic's educational system, we identify parallels between neocolonialism and present-day techno-optimism.
Responding to existing work by Dominican scholars who share enthusiasm for incorporating technologies like ChatGPT into Dominican education, this paper aims to assess the risks of manifesting a digital _fuku_ through the inadvertent relinquishing of Dominican intellectual sovereignty via the uncritical adoption of these technologies. While ChatGPT may contribute to expanded educational access, we strongly urge Dominican public institutions like the Ministry of Education, AI developers from the Global South, and, most importantly, Dominican citizens from various socioeconomic backgrounds to actively participate in a critical assessment of these technologies within the context of their colonial histories. This approach will enable a deeper understanding of the sociotechnical implications they carry, so as to empower stakeholders to make more informed decisions surrounding their digital agency.

## 2 Divisions between The AI Global North and AI Global South

The terms _AI Global North_ and _AI Global South_ are often used to describe the disparities in the development, deployment, and impact of artificial intelligence technologies between regions of the world [41]. The predominant dissemination of AI Global North-oriented technology and AI governance reflect resource inequalities, infrastructural gaps, and varying concentrations of power [52; 37; 51]. This concentration of authority reinforces dominant values [35] onto vulnerable communities, thereby exacerbating inequality and harm [51]. This results in the creation of LLMs whose sociolinguistic boundaries reflect formations of privilege and power [44; 47]. As just one example, the most commonly used large language models (LLMs) are pretrained on data primarily sourced from English, anglo-centric texts located in North America and the UK [23]. As a result, less-resourced languages are more prone to jailbreaking an LLM [56], and non-native English speakers are more likely to be flagged for plagiarism by an AI system [30]. This _digital privileging_ is reminiscent of colonial legacies which wiped indigenous knowledge through coercive cultural assimilation and exploitative practice. Generative AI systems derived by the Global North are developed and trained within AI Global North contexts, holding societal impacts tied to propagating stereotypes, representational and allocational harm, and cultural erasure [51]. Empirical risk minimization (ERM), a theoretical foundation underpinning many machine learning (ML) algorithms, is a general method for fitting some parametrized predictor (e.g. linear regression, logistic regression, neural networks) on real-world data. ERM finds model parameters that minimize a user-defined loss function by taking its expectation, resulting in an _averaging_ of the loss over a training dataset. Yet, it is through this expectation that majority groups are privileged in representation learning, as minimizing average error fits majority populations [14; 6] (a toy sketch below illustrates this mechanism). This subsequent tyranny of the majority [13] leads language representations to favor majority groups. English and American culture and lexicon [23] are the prevailing data used for LLM pretraining. Consequently, these behaviors lead to direct and indirect harms on non-dominant groups, inclusive of vulnerable and marginalized communities. It is through this severe power asymmetry that we recognize parallels to coloniality that endanger the structural fabric of countries in the AI Global South, including their educational systems.
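To make the averaging behavior of ERM concrete, consider the following toy sketch (our illustration, not from the cited works; the synthetic groups and the NumPy code are assumptions for exposition):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup: a "majority" group (90%) and a "minority" group (10%)
# whose label depends on the feature in opposite directions.
n_maj, n_min = 9000, 1000
x_maj = rng.normal(size=(n_maj, 1)); y_maj = (x_maj[:, 0] > 0).astype(float)
x_min = rng.normal(size=(n_min, 1)); y_min = (x_min[:, 0] < 0).astype(float)

X = np.vstack([x_maj, x_min]); y = np.concatenate([y_maj, y_min])
Xb = np.hstack([X, np.ones((len(X), 1))])   # add a bias column

# ERM with squared loss: minimize the *average* loss over the pooled data.
w = np.linalg.lstsq(Xb, y, rcond=None)[0]
pred = (Xb @ w > 0.5).astype(float)

err = lambda mask: np.mean(pred[mask] != y[mask])
maj = np.arange(len(y)) < n_maj
print(f"majority error: {err(maj):.2f}, minority error: {err(~maj):.2f}")
# The average-loss minimizer fits the majority pattern; the minority
# group's error approaches that of random guessing, or worse.
```

Because the pooled average is dominated by the majority group's pattern, the minimizer can look accurate overall while systematically failing the minority group - the statistical mechanism behind the "tyranny of the majority" described above.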
Notably, this may occur even in the absence of malicious intent, even when individuals act in good faith. In practice, existing inequities are algorithmically exacerbated in many forms. AI detectors consistently misclassify non-native English writing samples as AI-generated [30]. Biases appear against minoritized groups, where LLMs have displayed, for instance, anti-Muslim [3] and anti-LGBTQIA behavior [42]. [55; 48] found that LLMs overfilter voices from historically marginalized communities during LLM detoxification. Meanwhile, users are increasingly using LLMs to analyze information, even though these models have shown persistent tendencies toward harm and hallucination with respect to factual information [24; 58]. In the absence of diverse social and infrastructural investment defending against an undue sense of trust in and dependence on these systems [51], the dissemination of biased and inaccurate information has large-scale ramifications. Therefore, ensuring the trustworthiness of AI necessarily calls for carefully situating these systems within AI Global South contexts. While several works have examined aspects of coloniality in AI, none have contextualized what this may mean for the Dominican Republic. In the following section, we provide a concise historical overview of education in the DR, which later helps contextualize some implications of incorporating ChatGPT into the DR educational system.

## 3 Dominican Republic: Historical Grapplings with Education and Agency

The Dominican Republic has consistently grappled with domination by external entities.2 At the turn of the 20th century, the DR faced US occupation between 1916-1924. During this time, the DR's education system went through several overhauls. In this section, while limited by space, we do our best to identify a handful of threads which cogently unveil how 20th-century neocolonialism shaped education in the Dominican Republic. We acknowledge that this recounting is not exhaustive. Footnote 2: Initially colonized by Spain, the country gained independence in 1821, was briefly occupied by Haiti from 1822 to 1844, and then came under Spanish rule again until 1865.

**U.S. Paternalism & White Supremacy** At the turn of the 20th century, U.S. concerns about geopolitical instability3 were heightened by the DR's debt to European creditors and the looming First World War. In addition to the US consolidating Dominican loans in exchange for control over DR customs, President Woodrow Wilson authorized the 1916 military occupation as a preemptive measure to protect U.S. assets, subsequently cementing the paternalistic role of the U.S. [46]. While framing their actions as altruistic, several historical documents authored by US military governors outline clear initiatives aimed at dissolving Dominican autonomy and identity. Working-class adult Dominicans were described as "mulatto peasants" and as children lacking the mental capacity for self-sovereignty, reflecting clear forms of explicit racial minoritization [54, 1, 2, 46]. As such, overhauling the DR educational system was seen as a priority so that individuals could better participate as citizens [46]. Simultaneously, Dominican intellectual elites, termed _letrados_, varied in their opinions of the Dominican _campesinos_, or rural masses. Some embraced their Black lineage and _nueva raza_ while others fed white supremacist ideology, characterizing the lower social classes - often non-white - as "semi-savage", apathetic, and indifferent to their duties as citizens [33, 53, 46].
These _letrados_, in charge of instilling US mandates post-occupation, were the same citizens advocating for increased immigration of Europeans so as to dilute the DR's African heritage [10, 33, 17]. Footnote 3: US investments in Dominican agriculture and the Panama Canal.

**Coercive Tutelage** Notably, the DR iterated on educational reform prior to the US occupation, with its "Codigo Organico y Reglamentario de Educacion Comun" issued in 1914 [8]. However, US officials introduced new reforms and a new educational code that centered American values and ensured alignment with US interests. This was enacted through the appointing of American allies to key government positions and the implementation of educational reforms. Reminiscent of Rudyard Kipling's White Man's Burden [27], the US undertook efforts to "civilize" the nation through its teaching of democracy, citizenship, and capitalism [46]. Between 1917 and 1924, the US addressed high illiteracy rates in a coercive manner that further cemented social inequities. By redirecting funds from secondary schools to rural primary school construction, US officials solidified a labor division in the Dominican Republic; education prioritized preparing future agricultural laborers, thereby perpetuating existing social mobility gaps. The US introduced enrollment mandates that obligated educators in the Dominican Republic to police attendance and penalize the parents and guardians of school children so as to fulfill these mandates [16, 22, 38, 33, 57].

**Unsustainable Growth** Rural schools grew faster than the capacity to resource them well. Letters within the Dominican educational administration evidence how schools relied on free labor from community members for rudimentary school construction. Even then, schools had large student-to-teacher ratios, with teachers paid up to 75% less than others who taught in grade schools [20, 46]. In an attempt to improve resourcing, the U.S. military government imposed property taxes, though this had limited benefit and only strained rural communities. As a result, the military government permanently closed many schools and centers for educator training, leaving Dominicans with gutted educational structures.

## 4 Assessing the Colonial Threads of ChatGPT for Dominican Education

In an analysis of ChatGPT for Dominican education, Bueno (2023) reminds readers that capacity building is long overdue in the Dominican Republic, optimistically framing the tool as a new avenue to educational resourcing and outlining benefits such as 1) access to educational material, 2) personalized tutoring, and 3) training instructors. Acknowledging ChatGPT's potential to influence pedagogy and educational access in the Dominican Republic, assessing its actual benefits for students and educators relies on a thorough evaluation within the existing education system. In this section, we position each claimed advantage at the crossroads of the DR's neocolonial grapplings and digital coloniality. By demonstrating how AI tools can perpetuate these historical harms, our paper emphasizes that validating their benefits relies on empowering DR constituents to critically assess these tools.

**Access to Educational Material** Bueno (2023) highlights ChatGPT's potential to widen access to educational materials previously inaccessible. While ChatGPT holds potential for providing educational materials on-the-fly, critical works in natural language processing (NLP) reveal the tendency of LLMs to hallucinate, potentially leading to the generation of incorrect or fake content [24].
Secondly, these works show significant deficiencies in LLMs' abilities to retrieve facts in non-English languages [49, 25]. Therefore, non-English speakers engaging with the tool are more likely to consume low-quality or misleading information. Furthermore, access to "educational material" presupposes that text outputted by ChatGPT is universally relevant across cultural contexts. However, knowledge is not neutral; every LLM operates within a specific frame of reference [51]. As such, the success of an LLM like ChatGPT hinges on the quality and diversity of its pretraining data so that it can operate across contexts. And yet, LLMs like ChatGPT predominantly learn from texts from or sponsored by the AI Global North (e.g. preference annotations) [40, 23]. Fraught with familiar epistemic violence through the erasure of indigenous voices, LLMs which train on these hegemonies output "material" that can perpetuate power imbalances through the privileging of AI Global North values, norms, discourse, and knowledge, all of which need to be scrutinized. Therefore, in assessing ChatGPT's viability for DR education, the "view from nowhere" must be rejected through the sociohistorical contextualization of these technologies.

**Personalized Digital Tutelage** Bueno (2023) points to ChatGPT as a new resource for assisting students in exam preparation, delivering real-time homework feedback, and enabling personalized learning. While the technology holds promise, concerns linger over its impact on student learning and on understanding student progress. For instance, potential plagiarism resulting from the LLM's generated content necessitates new approaches to fairly assessing student knowledge [31, 15]. However, failing to incorporate critical sociocultural context in these approaches, especially if they incorporate AI tools, risks harming students. Namely, LLMs used for plagiarism detection exhibit biases against non-native English writers, resulting in their marginalization in both evaluative and educational settings [30]. This highlights yet another aspect of harm caused by Anglo-centric LLMs within multilingual contexts [21, 25, 47]. English's global dominance stems from a historical process of militarized colonization, raising concerns about the erosion of multilingualism, language and culture rights, and the perpetuation of marginalization through generative AI systems [51]. Meanwhile, scholarship exploring the benefits of ChatGPT-based tutoring remains within a Western context [31], which in turn raises further concern over the tool's ability to recognize alternative worldviews and historical accounts. The risk of AI tools imposing particular forms of historical knowledge is reminiscent of the DR's struggle to maintain a united sense of identity and agency [50]. Given the aforementioned dependence on diverse training data, one may reasonably question the extent to which these tools can offer tutoring contextualized to varying sociohistorical positionalities, inclusive of those from the margins. Parallels emerge with the US military government in the DR celebrating the expansion of schools as its own achievement, while community narratives offered a differing viewpoint which centered achieving this through local community efforts [45]. Within techno-optimistic discourse, it is critical to approach new AI tools with reasonable skepticism, especially if students come to rely heavily on digital tutoring.
Therefore, the prospect of AI tool adoption in DR education requires a careful examination of their contextualized tutoring benefits, the imparting of critical AI assessment skills to students, and new methods to fairly assess student knowledge [31].

**Teacher Aids and Training** Another claim by Bueno (2023) centers on how ChatGPT can assist teachers in administrative work and professional training. LLMs like ChatGPT have the potential to revolutionize these aspects, though scholarly discourse on computer science and education describes the success of AI education as closely dependent on the readiness of teachers and their trust in AI tools [5, 12]. Situated in a historical context, DR educational capacity building has faced substantial demand. Meeting this demand necessitates a comprehensive understanding of educators' needs, capabilities, and constraints, shaping case-specific guidance [26]. Open educational resources - like tutorials, studies, and guidelines - offer a way for educators and institutions to gain knowledge about using LLMs in education [26]. For instance, learning to differentiate between model-generated and student-generated answers is a valuable skill in AI-assisted pedagogical practice. Nevertheless, exclusively providing this training is no substitute for essential educator needs, such as fair teacher salaries and addressing persistent structural inequalities in educational funding and maintenance.

**Safety** Bueno (2023) emphasizes the need to secure the storage and protection of student personal information while using ChatGPT, raising concerns about third-party access. However, ChatGPT already owns user-submitted queries, thereby necessitating a reconceptualization of privacy and safety within this context. Reassessing safety implications within the existing Dominican educational system presupposes clear definitions of digital privacy, agency, and safety infringements. In this regard, gaps in digital literacy also hold significant risk to both the digital sovereignty of the user and their wider community. The previously mentioned scholarship detailing AI hallucinations and the possibilities for consuming and propagating misinformation raises concerning questions around safety within and across these domains. If ChatGPT is relied upon as a sole educational resource rather than an auxiliary one, this safety concern only amplifies (Krishnan et al., 2020). Addressing these safety concerns requires building capacity which critically drives transparent discourse surrounding digital sovereignty for both educators and parents alike. One approach may include dedicating resources for educational seminars and audits by educators, educational staff, and students so that they learn about data privacy, regulations, ethical concerns, and best practices surrounding data protection (Krishnan et al., 2020). Furthermore, despite being presented as a transformative tool free of charge, dimensions of invisible data labor by users paradoxically reify the notion that user data - the driving force behind machine learning - is exclusively created by corporations. Consequently, this reinforces the idea that corporations should be the sole custodians of such data. For instance, each thumbs-up or thumbs-down elicited after a ChatGPT generation is a data point consumed for RLHF training (Beng et al., 2019; Bueno et al., 2020; Beng et al., 2020).
While one may argue that free user-tailored systems already bring substantial benefits to users, masking this data labor prevents broader public discourse concerning the equitable distribution of gains within the data economy. In unveiling these aspects, addressing safety concerns goes beyond merely acknowledging the need to secure personal information. It necessitates both questioning and understanding the safety implications behind (1) _who_ is responsible for safeguarding personal data, (2) _how_ that data is used, and (3) our capacity to recognize potential misinformation and manage unsafe or adverse AI-driven outcomes.

## 5 Conclusion

The Dominican Republic's colonial history reveals important considerations for assessing ChatGPT for Dominican education. While the prospect of integrating the AI-driven tool for improved educational access drives speculative optimism, validating its benefits on the ground relies on DR constituents holistically assessing these tools for themselves. This requires centering the exercise of digital agency in praxis; empowering Dominican citizens to critically assess AI-driven systems requires digital capacity building that is supported by structural initiatives aimed at broadening access to AI technologies in a way that reflects the will of its constituents.
2310.01351
Streaming Motion Forecasting for Autonomous Driving
Trajectory forecasting is a widely-studied problem for autonomous navigation. However, existing benchmarks evaluate forecasting based on independent snapshots of trajectories, which are not representative of real-world applications that operate on a continuous stream of data. To bridge this gap, we introduce a benchmark that continuously queries future trajectories on streaming data and we refer to it as "streaming forecasting." Our benchmark inherently captures the disappearance and re-appearance of agents, presenting the emergent challenge of forecasting for occluded agents, which is a safety-critical problem yet overlooked by snapshot-based benchmarks. Moreover, forecasting in the context of continuous timestamps naturally asks for temporal coherence between predictions from adjacent timestamps. Based on this benchmark, we further provide solutions and analysis for streaming forecasting. We propose a plug-and-play meta-algorithm called "Predictive Streamer" that can adapt any snapshot-based forecaster into a streaming forecaster. Our algorithm estimates the states of occluded agents by propagating their positions with multi-modal trajectories, and leverages differentiable filters to ensure temporal consistency. Both occlusion reasoning and temporal coherence strategies significantly improve forecasting quality, resulting in 25% smaller endpoint errors for occluded agents and 10-20% smaller fluctuations of trajectories. Our work is intended to generate interest within the community by highlighting the importance of addressing motion forecasting in its intrinsic streaming setting. Code is available at https://github.com/ziqipang/StreamingForecasting.
Ziqi Pang, Deva Ramanan, Mengtian Li, Yu-Xiong Wang
2023-10-02T17:13:16Z
http://arxiv.org/abs/2310.01351v1
# Streaming Motion Forecasting for Autonomous Driving ###### Abstract Trajectory forecasting is a widely-studied problem for autonomous navigation. However, existing benchmarks evaluate forecasting based on independent _snapshots_ of trajectories, which are not representative of real-world applications that operate on a _continuous stream_ of data. To bridge this gap, we introduce a benchmark that continuously queries future trajectories on streaming data and we refer to it as "streaming forecasting." Our benchmark inherently captures the disappearance and re-appearance of agents, presenting the emergent challenge of _forecasting for occluded agents_, which is a safety-critical problem yet overlooked by snapshot-based benchmarks. Moreover, forecasting in the context of continuous timestamps naturally asks for _temporal coherence_ between predictions from adjacent timestamps. Based on this benchmark, we further provide solutions and analysis for streaming forecasting. We propose a plug-and-play meta-algorithm called "_Predictive Streamer_" that can adapt any snapshot-based forecaster into a streaming forecaster. Our algorithm estimates the states of occluded agents by propagating their positions with multi-modal trajectories, and leverages differentiable filters to ensure temporal consistency. Both occlusion reasoning and temporal coherence strategies significantly improve forecasting quality, resulting in 25% smaller endpoint errors for occluded agents and 10-20% smaller fluctuations of trajectories. Our work is intended to generate interest within the community by highlighting the importance of addressing motion forecasting in its _intrinsic_ streaming setting. Code is available at [https://github.com/ziqipang/StreamingForecasting](https://github.com/ziqipang/StreamingForecasting). ## I Introduction Motion forecasting is _inherently_ a _streaming_ task for autonomous driving, as it operates on continuous data streams. Imagine agents constantly moving in a dynamic traffic scene, as illustrated in Fig. 1a. Naturally, agents may temporarily disappear due to _occlusions_ and then re-appear, such as agent B. Even when an agent is invisible, a forecasting algorithm or forecaster is still supposed to predict its future trajectories. Ignoring occlusions could lead to the sudden appearance of agents, posing critical safety risks. Moreover, in this streaming setting where trajectories predicted on continuous timestamps have overlaps, ensuring _temporal coherence_ between the trajectories becomes a key challenge. The forecaster needs to produce stable and smooth trajectories over time to serve downstream planning and control models. Unfortunately, current motion forecasting benchmarks universally adopt a _snapshot_-based setup [2][5][6], as illustrated in Fig. 1b - it only considers _independent_ and _isolated_ snapshots of trajectories, without explicitly modeling the spatial-temporal connection among them. Such a setting standardizes but over-simplifies the motion forecasting problem, making it not representative of the streaming nature of the real world. To overcome this limitation and demonstrate the streaming reality of autonomous driving, we propose "_streaming forecasting_" which involves _continuously_ querying the future trajectories of agents _on every frame_. Such a new perspective of motion forecasting presents a clear departure from the snapshot-based setting with independent samples. Importantly, our formulation also exhibits two novel real-world challenges. 
(1) _Forecasting for occluded agents_: as in frame \(t\) of Fig. 1a, our streaming setup captures the changing visibility of agents and reveals the occlusion challenge. However, snapshot-based datasets [2][5][6] uniformly assume that the target agents are visible and collect the data accordingly. (2) _Temporal coherence_: as in frames \(t\) and \(t+\Delta T\) of Fig. 1a, trajectories in neighboring frames have overlaps and our streaming setting reflects such a constraint by modeling motion forecasting continuously, which is impossible for isolated samples in the snapshot-based setting. Fig. 1: **(Best viewed in color) Gray lines: HD-Map; Blue lines: predicted trajectories. (a) Our streaming setup queries predictions on _continuous frames_ to reflect the streaming property in the real world. Under our setting, the challenge of _forecasting for occluded agents_ emerges from the frequent disappearance and re-appearance of agents (_e.g._, agent B, highlighted by “B’s future”), and _temporal coherence_ becomes a natural constraint for predictions in adjacent timestamps (highlighted by “Overlap”). (b) The snapshot-based setup in existing forecasting benchmarks sample queries _independently_ that are isolated in space and time. Such a setting hides away _occlusion_ and _temporal coherence_ challenges, which are presented in realistic streaming forecasting.** Based on this formulation, we introduce a new benchmark for streaming forecasting. Our key insight is to _re-purpose tracking_ datasets, which offer realistic observations from the ego vehicle. This strategy mitigates the bias of forecasting datasets in their data collection and processing. Specifically, Argoverse [5] is chosen for supporting modern vectorized high-definition maps (HD-Maps) on its tracking dataset. We call our benchmark _Argoverse-SF_, which faithfully captures the challenges of frequent occlusions and temporal continuity of predictions. To evaluate the performance of forecasting algorithms in this streaming setting, we design tailored metrics that investigate occlusion reasoning and temporal coherence. To address both challenges, we further propose a simple yet effective meta-algorithm, called "_Predictive Streamer_," to adapt any existing snapshot-based forecaster into the streaming world. Our predictive streamer estimates the states of occluded agents to enable robust predictions for them. While inferring occluded positions is a long-standing problem in 3D perception [26][25][29], existing strategies of using the Kalman filter or single-modal trajectories are insufficient for the forecasting purpose, because the forecaster depends on the estimated occluded positions and amplifies the errors in occlusion reasoning. We discover that a more advanced strategy of utilizing _multi-modal trajectories_ predicted by the forecaster performs better, as it provides higher-quality estimation covering a larger spectrum of motion distributions. For _temporal coherence_, our predictive streamer introduces _differentiable filters_ (DFs) [12] into multi-modal trajectory forecasting, adapting them from the state estimation literature. Our modified DFs represent future trajectories as hidden states and emphasize the temporal coherence of multi-modal trajectories in the process model of DFs. Compared with recurrent models, _e.g._, long short-term memory networks (LSTMs), our DFs also perform better because of the explicit modeling of temporal continuity. 
Finally, our approach enhances the temporal coherence of trajectories and improves the accuracy of forecasting results accordingly. In brief, we have made the following contributions.

1. _Benchmark._ **(a)** We introduce a novel _streaming forecasting_ formulation and construct an associated "Argoverse-SF" benchmark by re-purposing the tracking dataset. **(b)** Our benchmark reveals and captures the inherent challenges in a streaming world: "_occlusion reasoning_" and "_temporal coherence_," which are previously ignored by the widely-used snapshot-based benchmarks.
2. _Solution._ **(a)** We propose a plug-and-play meta-algorithm called "_Predictive Streamer_" that can adapt any snapshot-based forecaster into a streaming forecaster. **(b)** We instantiate our meta-algorithm by applying _multi-modal trajectories_ to estimate occluded positions and introducing _differentiable filters_ to improve the temporal coherence of multi-modal trajectories. Our solution addresses the two key challenges in streaming forecasting, decreasing the minimum final displacement error (minFDE) by 25% for occluded agents and reducing the fluctuations of trajectories by 10-20%.

## II Related Work

### _Motion Forecasting in Autonomous Driving_

Motion forecasting methods aim to predict the trajectories of agents by modeling their dynamics, interactions, and relationship with high-definition maps (HD-Maps). Previous work has developed effective encoders to extract agent features using graph neural networks (GNNs) [7][20], LSTMs [27], and transformers [21][24][31]. Other work has focused on decoding trajectories from agent features using improved sampling strategies and learning objectives [8][9][11][28][33]. Despite rapid progress, they are all developed and evaluated under the contrived snapshot-based forecasting benchmarks [2][5][6]. Although some work [27][34] provides interfaces for making predictions on continuous timestamps, we found that adding recurrent neural networks or relative positional embeddings cannot fully address the streaming forecasting challenges, especially occlusion reasoning. Instead, we analyze the limitations of existing models and propose simple yet effective strategies to estimate occluded positions and improve temporal coherence. In addition, our streaming forecasting is broadly relevant to _open-loop planning_ [3], which similarly models a streaming world, with the key difference of predicting the motion of the ego-vehicle, while we focus on surrounding agents.

### _Joint Perception and Forecasting_

Motion forecasting is often studied jointly with other perception modules. This naturally positions forecasting in data streams. End-to-end perception and prediction connects forecasting with 3D perception [4][10][13][23][25], but mainly explores how motion forecasting can benefit from 3D perception features instead of improving the capability of forecasting models. By contrast, our work analyzes the shortcomings of existing forecasting models under a streaming formulation and proposes an advanced "predictive streamer" to address the challenges of occlusion reasoning and temporal coherence.

### _Differentiable Filters_

Differentiable filters (DFs) [1][12] combine the benefits of neural networks with Bayes filters [15], where a neural network handles high-dimensional sensor inputs and endows the filters with the flexibility to adapt to a wide range of scenarios.
Recently, DFs have demonstrated their effectiveness in 3D tracking and visual odometry [14][17][18][30], as well as robot manipulation [19][32]. However, they mainly concentrate on state estimation and apply filters to the _current_ positions of objects. Instead, we extend DFs to enhance the prediction of the _long-term future_ trajectories of agents in autonomous driving scenarios, which exhibit more significant stochastic fluctuations and multi-modalities.

## III Streaming Forecasting Benchmark

### _Background: Conventional Motion Forecasting_

**Motion forecasting.** Given historical observations, motion forecasting aims to predict the trajectories of agents in future timestamps. In the conventional formulation (Fig. 2a), historical observations have a fixed length of \(\tau_{h}\), and consist of the center positions of \(N\) agents \(C_{t-\tau_{h}+1:t}=\{c^{i}_{t-\tau_{h}+1:t}\}_{i=1}^{N}\) and an HD-Map \(M\). The future trajectories are multi-modal predictions denoted as \(P_{t:t+\tau_{f}}=\{\{p^{i,k}_{t:t+\tau_{f}}\}_{k=1}^{K}\}_{i=1}^{N}\), where \(K\) is the number of predictions and each \(p^{i,k}_{t_{1}:t_{2}}\) represents the estimated _movement_ between timestamps \(t_{1}\) and \(t_{2}\).

**Snapshot-based benchmarks.** As illustrated in Fig. 1b, existing benchmarks, such as [2][5][6], are typically constructed by collecting _independent_ and _isolated_ "snapshots" of trajectories, without explicitly considering the spatial-temporal connection among them. Within each snapshot, only the predictions for a _single_ timestamp are queried.

**Forecasters.** Most of the current forecasting models are based on encoder-decoder architectures. The encoder takes as input the past locations \(C_{t-\tau_{h}+1:t}\) and the HD-Map \(M\) to generate agent features \(F_{t}\). The decoder then predicts the trajectories \(P_{t:t+\tau_{f}}\) from the agent features \(F_{t}\). That is,
\[F_{t}=\texttt{Encoder}(C_{t-\tau_{h}+1:t},M),\quad P_{t:t+\tau_{f}}=\texttt{Decoder}(F_{t}). \tag{1}\]
We denote a snapshot-based forecasting model as \(\texttt{Model}=\{\texttt{Encoder},\texttt{Decoder}\}\).

### _Our Formulation_

Although it is widely used, the snapshot-based formulation is incompatible with the "streaming" nature of the real world. We thus propose a _streaming forecasting_ formulation that evaluates a forecasting algorithm on continuous timestamps instead of a single run. As shown in Fig. 2b, streaming forecasting iteratively executes two steps: (1) _input_ the positions of agents; (2) _query_ the corresponding future trajectories.

**Input.** We define \(A_{t}\) as the set of IDs of all the agents that have ever appeared up to time \(t\) (details of agent selection are in Sec. III-C). At time \(t\), the input consists of the 3D coordinates \(c^{a}_{t}\) and visibility \(v^{a}_{t}\) for every agent \(a\in A_{t}\). If agent \(a\) is visible or present at time \(t\), we set \(v^{a}_{t}\) as true; otherwise, we set \(v^{a}_{t}\) as false and \(c^{a}_{t}\leftarrow\varnothing\). We represent all the input data at time \(t\) as \(D_{t}=\{(c^{a}_{t},v^{a}_{t})\}_{a\in A_{t}}\), and \(D_{t-\tau_{h}+1:t}\) denotes all the input data between frames \(t-\tau_{h}+1\) and \(t\).
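To make the streaming input concrete, the per-frame data \(D_{t}\) can be represented as follows (our illustrative sketch, not the benchmark's actual code; the names are assumptions):

```python
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

AgentID = int

@dataclass
class AgentObservation:
    """One entry (c_t^a, v_t^a) of the streaming input D_t."""
    position: Optional[Tuple[float, float, float]]  # 3D center; None when occluded
    visible: bool

# D_t maps every agent that has ever appeared up to time t (the set A_t)
# to its observation; occluded agents remain in the dictionary with
# visible=False rather than being dropped.
FrameInput = Dict[AgentID, AgentObservation]
```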
**Query.** Streaming forecasting queries the future trajectories of the agents in \(A_{t}\) on every timestamp. We adopt the same multi-modality setting with \(K\) futures for each agent as the conventional formulation in Sec. III-A. Thus, the predictions for agent \(a\in A_{t}\) are \(p^{a}_{t:t+\tau_{f}}=\{p^{a,k}_{t:t+\tau_{f}}\}_{k=1}^{K}\). The corresponding ground truth is \(G_{t:t+\tau_{f}}=\{g^{a}_{t:t+\tau_{f}}\}_{a\in A_{t}}\), which comes from the observations in the future frames \(D_{t:t+\tau_{f}}\) of the streaming data.

**Evaluated agents.** Our streaming forecasting queries and evaluates all the agents in \(A_{t}\), further setting it apart from conventional forecasting benchmarks that only concern a subset of target agents. While selecting specific target agents may help to focus on interesting movements, pre-assuming target agents can be risky in safety-critical applications. For similar considerations, joint perception and forecasting studies [4][10][23][25] often evaluate all the agents. By evaluating all the agents, our streaming forecasting can capture the challenge of "_forecasting for occluded agents._" By contrast, previous forecasting benchmarks implicitly assume the visibility of target agents and overlook this issue. Note that occluded agents also have available ground truth for future trajectories in streaming forecasting, because _they can re-appear after occlusion_. This makes streaming forecasting a natural scenario for investigating occlusions and addressing the challenges they pose.

### _Argoverse-SF Benchmark_

Guided by our formulation, we construct the Argoverse-SF benchmark for streaming forecasting. We make the following important design choices and contributions.

**Using tracking instead of forecasting data.** Autonomous driving datasets [2][5][6] typically have separate splits for tracking and forecasting. While using the forecasting data seems the obvious choice, such data have been processed for snapshot-based evaluation: predicting on a single timestamp instead of continuous timestamps, and filtering out noisy (_e.g._, occluded) agents for a cleaner setup. By contrast, tracking data satisfy our purpose, as the input to forecasting components comes from an upstream tracker in autonomous driving. Therefore, tracking data are a more appropriate option for our benchmark to reflect the real world faithfully.

**Argoverse vs. other datasets.** We prioritize evaluating datasets that support algorithms based on modern vectorized HD-Maps. As Waymo [6] does not include HD-Maps in its tracking data, Argoverse [5] and nuScenes [2] become our main options. We ultimately choose Argoverse, because more forecasters using vectorized HD-Maps are studied on Argoverse than on nuScenes.

Fig. 2: Formulations of _snapshot_-based and _streaming_ forecasting. **(a)** Conventional forecasting handles snapshots with a fixed length and is only intended for single-timestamp predictions. **(b)** Our _streaming_ setup queries the future trajectories on _every_ timestamp, aligning better with the streaming world. Compared with the snapshot-based setting, the length of history grows without an upper bound.

**Agent positions.** As for the agent positions in our benchmark, instead of relying on the tracking results from a specific tracker, we leverage high-quality ground truth from Argoverse to create an "oracle" tracker. Specifically, we use the centers of ground truth bounding boxes as the input positions. This design avoids the influence of specific trackers and enables us to concentrate on the difficulties of forecasting. For instance, the occlusions in our Argoverse-SF are faithful in that they reflect the actual invisibility of occluded agents and cannot be filled in by human annotators.
We further control the set of agents \(A_{t}\) by using oracle life-cycle management [26][29]: an agent is removed from the set of agents only if it does not re-appear.

**Selection of agents.** For a realistic setup and a fair assessment of forecasting algorithms, Argoverse-SF carefully selects the agents. First, agents outside the perception range (\(>\)100m) or far from the road (outside the regions of interest in Argoverse) are excluded. Second, Argoverse-SF begins querying at frame \(\tau_{h}=20\), ensuring that the benchmark does not create artificially short histories. Finally, Argoverse-SF only includes automobile agents such as vehicles, buses, and trailers, aligning with the Argoverse forecasting dataset, and we leave the inclusion of additional agent types and datasets as future work.

**Dataset splits.** Argoverse-SF adopts the same training and validation sets as Argoverse's tracking dataset: 65 and 24 sequences for training and evaluation, respectively. We use all the frames in the logs of Argoverse, which are 10Hz sensor inputs with a duration of 15-30 seconds. The training and validation sets have 9,937 and 3,839 timestamps with valid queries of future trajectories, respectively. Note that Argoverse-SF allows using Argoverse's forecasting dataset for pre-training forecasters. This is useful, because the forecasting dataset contains a curated collection of diversified trajectories that are necessary for learning effective initialization and comparing fairly with existing models.

**Evaluation metrics.** Argoverse-SF quantitatively evaluates the performance of the predictions \(P_{t:t+\tau_{f}}\) by comparing them with the ground truth \(G_{t:t+\tau_{f}}\), which is the observations of agents' positions in the future frames \(D_{t:t+\tau_{f}}\) (Sec. III-B). We adopt the commonly-used evaluation metrics of minimum final displacement error (minFDE), minimum average displacement error (minADE), and miss rate (MR). However, due to the frequent occlusion and de-occlusion of agents in streaming data, some frames and agents may not have ground truth. Therefore, we only compute the metric values for timestamps with available ground truth, _i.e._, where the visibility mask \(v^{a}_{t}\) is true. For example, the minFDE for agent \(a\) is calculated as:
\[\text{minFDE}(a)=\frac{1}{|T^{a}|}\sum_{t\in T^{a}}\text{minFDE}(\{p^{a,k}_{t:t+\tau_{f}}\}_{k=1}^{K},g^{a}_{t:t+\tau_{f}}), \tag{2}\]
where \(T^{a}\) is the set of all the queried timestamps with valid ground truth for agent \(a\): \(T^{a}=\{t\mid v^{a}_{t+\tau_{f}}\text{ is true}\}\). For metrics like minADE, we adopt a "masked" version by averaging only over the frames with \(v^{a}_{t}\) being true.
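As a concrete reading of Eqn. (2) and the masked metrics, the following sketch illustrates the computation (our illustration, not the benchmark's official evaluation code; array names and shapes are assumptions):

```python
import numpy as np

def masked_min_fde(preds, gt, vis):
    """minFDE over K hypotheses, evaluated only where ground truth exists.

    preds: (T, K, H, 2) multi-modal forecasts issued at each query frame t
    gt:    (T, H, 2)    future positions observed later in the stream
    vis:   (T,)         True where the endpoint frame t + H has ground truth
    """
    endpoint_err = np.linalg.norm(preds[:, :, -1] - gt[:, None, -1], axis=-1)
    min_fde = endpoint_err.min(axis=1)   # best of the K hypotheses per frame
    return min_fde[vis].mean()           # average over valid query frames only
```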
**Breakdown into subsets for evaluation.** Conventional motion forecasting only considers moving and visible agents. By contrast, our streaming forecasting introduces three new types of agents: moving-occluded, static-visible, and static-occluded, and we evaluate all of them. Table I summarizes the statistics with respect to the number of agents and queries, showing that the four types (subsets) of agents have a highly imbalanced distribution. Here, we intuitively define _moving_ agents as those having an overall traveling distance greater than 3.0 meters. To avoid the domination of a single motion pattern or visibility, we introduce the four-subset division of movements and visibility for evaluation. Specifically, we first average the metrics within each subset \(A\) of the agents, calculated as \(\text{minFDE}(A)=(1/|A|)\sum_{a\in A}\text{minFDE}(a)\), and then average the metrics across the four subsets to evaluate overall performance.

| | Moving-Visible | Moving-Occluded | Static-Visible | Static-Occluded |
| --- | --- | --- | --- | --- |
| Queries | 90814 | 8342 | 103838 | 20875 |
| Agents | 1042 | 520 | 2424 | 1494 |

| | Moving-Visible | Moving-Occluded | Static-Visible | Static-Occluded |
| --- | --- | --- | --- | --- |
| Queries | 29124 | 2968 | 47957 | 5069 |
| Agents | 350 | 187 | 742 | 451 |

TABLE I: The number of queried predictions ("Queries") and the total number of agents to which these predictions belong ("Agents") in Argoverse-SF.

## IV Algorithms

### _Pipeline of Streaming Forecasting_

We propose a general pipeline to adapt a well-learned snapshot-based model seamlessly into streaming forecasting.

**Three-stage training.** (1) _Pretraining_. We train a standard snapshot-based model on the forecasting dataset of Argoverse to reasonably initialize the forecaster. (2) _Finetuning_. We extract snapshots from the Argoverse-SF training set to finetune the pretrained model. This mitigates the distribution shift between Argoverse's forecasting dataset and Argoverse-SF. (3) _Streaming training_. We design and train the _predictive streamer_ to better integrate with the streaming setup, as detailed in Sec. IV-B and Sec. IV-C.

Fig. 3: Predictive streamer. It serves as an intermediate integration between snapshot-based forecasters and streaming data. The occlusion reasoning module estimates the positions of occluded agents from the predicted movement (Sec. IV-B). Differentiable filters (DFs) estimate future trajectories as their hidden states and refine the forecasting results through temporal coherence in adjacent frames (Sec. IV-C).

**Streaming inference.** To align with the interface of a snapshot-based forecaster, we integrate it into streaming data in a sliding-window manner. At frame \(t\), the model takes as input the last \(\tau_{h}\) frames, \(D_{t-\tau_{h}+1:t}\), and predicts the future \(\tau_{f}\) frames, \(P_{t:t+\tau_{f}}\). If our predictive streamer is available, it uses the predictions and agent features from the forecaster and refines \(P_{t:t+\tau_{f}}\) into \(\widetilde{P}_{t:t+\tau_{f}}\) as the final output predictions.

**Baseline.** Hallucinating the occluded positions is necessary to operate forecasting models on all the agents. Our baseline uses a Kalman filter to predict occluded positions, inspired by how 3D multi-object tracking addresses occlusions [26][29].

**Overview of predictive streamers.** Occlusion reasoning and temporal coherence are emergent challenges in streaming forecasting. The baseline described above lacks specialized solutions to address these issues. Therefore, our _predictive streamer_ accommodates these challenges and acts as a general intermediate component as in Fig. 3, better integrating a snapshot-based forecaster into the streaming setup.

### _Forecasting for Occluded Agents_

**Occlusion reasoning with multi-modal predictions.** Estimating the occluded positions is the prerequisite of forecasting for those agents. The common approach is adding predicted motion to previously visible positions.
Although our Kalman filter baseline (Sec. IV-A) follows this intuition, it is insufficient in that the Kalman filter is unable to handle complex motions and contextual information, such as HD-Maps and the movements of surrounding agents. Therefore, we propose to leverage the forecasting results from snapshot-based models to provide a more accurate estimation. In practice, a special branch \(\widetilde{p}_{t:t+\tau_{f}}\) of the multi-modal trajectories is selected according to the highest confidence score. The movement in \(\widetilde{p}_{t:t+\tau_{f}}\) propagates the agents' positions through the occluded timestamps as in Algorithm 1. Specifically, if an agent \(a\) is occluded at time \(t\), we replace its invisible location with the propagated position \(c^{a}_{t-1}+\widetilde{p}^{\,a}_{t-1:t}\). We highlight that this selection process connects the multi-modal trajectories with the single-modal input of forecasters. In this way, our approach retains the advantages of multi-modal trajectories from learning-based forecasters, including their diverse movements, HD-Map context, and cross-agent interaction.

**Discussion.** Our occlusion reasoning differs from prior 3D perception work [25][26][29] in the forecasting context. First, we leverage _multi-modal_ trajectories, which cover a larger spectrum of motion distributions and thus outperform single-modal trajectories (empirically validated in Sec. V-C). Second, while previous studies avoid updating predictions on occluded frames due to unreliable information, our streaming module updates \(p_{t:t+\tau_{f}}\) on every timestamp, providing a solution for handling long occlusions (\(>\tau_{f}\) frames) and incorporating the latest information for instant reactions. The remaining unreliability challenge is addressed by leveraging temporal coherence, as discussed in Sec. IV-C.
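The propagation step can be sketched as follows (our illustration rather than the paper's Algorithm 1; the variable names and shapes are assumptions):

```python
import numpy as np

def propagate_occluded(last_pos, top_traj, visible, pos_now):
    """Fill in an occluded agent's position at the current frame.

    last_pos: (2,)    agent position maintained from frame t-1
    top_traj: (H, 2)  per-step movements of the highest-confidence branch
                      predicted at frame t-1 (top_traj[0] covers t-1 -> t)
    visible:  bool    whether the agent is observed at frame t
    pos_now:  (2,)    observed position at frame t (ignored if occluded)
    """
    if visible:
        return pos_now
    # Occluded: hallucinate the position by adding the first predicted
    # movement of the most confident trajectory to the previous position.
    return last_pos + top_traj[0]
```

In the full method, the top branch is re-selected on every frame from the forecaster's \(K\) hypotheses (e.g., `trajs[np.argmax(conf)]`), so the hallucinated track keeps absorbing the latest map and interaction context.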
### _Differentiable Filters for Temporal Coherence_

**Temporal coherence.** As streaming forecasting operates on continuous frames, adjacent timestamps have strong temporal consistency because of their large overlaps, as in Fig. 4. Suppose we use the superscript in \(\widetilde{p}^{\,t}_{t:t+\tau_{f}}\) to denote predictions originating from frame \(t\). Then the trajectories predicted on \(t-1\), denoted as \(\widetilde{p}^{\,t-1}_{t-1:t+\tau_{f}-1}\), and the trajectories predicted on \(t\), denoted as \(\widetilde{p}^{\,t}_{t:t+\tau_{f}}\), should be aligned between frames \(t\) and \(t+\tau_{f}-1\), which cover most of their prediction horizons. To account for the predictions in nearby frames, _our key insight_ is that differentiable filters (DFs) can effectively enhance temporal coherence by recursively fusing predictions into their hidden states.

Fig. 4: Illustration of temporal coherence. The predictions on adjacent frames (\(t-1\) and \(t\)) have a large overlap, and such consistency is useful for improving the forecasting quality.

**Background of DFs.** The basis of DFs is Bayes filtering, _e.g._, Kalman filters [15]. It considers a _process model_ and an _observation model_ as in Eqn. (3), where \(x_{t}\) denotes hidden states, \(z_{t}\) denotes observations, and the assumptions are linear dynamics, no control signals, and Gaussian distributions. Additionally, \(w\) and \(\delta\) are noises following the distributions \(N(0,Q)\) and \(N(0,R)\), where \(Q\) and \(R\) are covariance matrices:
\[x_{t}=Ax_{t-1}+w,\quad\text{and}\quad z_{t}=Cx_{t}+\delta. \tag{3}\]
The filter recursively approximates the posterior distribution of \(x_{t}\) in the form of \(N(\mu_{x_{t}},\Sigma_{x_{t}})\). During this process, DF [12] proposes using a neural network \(\phi\) to infer the low-dimensional \(z_{t}\) from high-dimensional sensor inputs or to generate an adaptive covariance \(R\). As the Kalman filter is a differentiable process, the neural network \(\phi\) can be learned jointly with it.

**DFs for motion forecasting.** In the case of motion forecasting, we treat the future trajectories \(p^{\,t}_{t:t+\tau_{f}}\) as both the hidden states \(x_{t}\) and the observations \(z_{t}\) in DF. DF then represents the hidden states with a Gaussian distribution \(N(\mu_{x_{t}},\Sigma_{x_{t}})\) and recursively estimates the mean \(\mu_{x_{t}}\) of the hidden states \(x_{t}\) as the refined forecasting results, denoted as \(\widetilde{p}^{\,t}_{t:t+\tau_{f}}\). We illustrate this process as follows:
\[\mu_{x_{t}}\leftarrow\text{DF}(p^{\,t}_{t:t+\tau_{f}},\mu_{x_{t-1}},\Sigma_{x_{t-1}}),\ \text{ then }\ \widetilde{p}^{\,t}_{t:t+\tau_{f}}\leftarrow\mu_{x_{t}}. \tag{4}\]
In DF, we first use a learnable neural network \(\phi_{R}\) to predict an adaptive covariance \(R_{t}\) for varied agents and timestamps based on their features \(F_{t}\), which enables DF to fuse the highly stochastic trajectories properly. Then we feed the predicted \(R_{t}\) into a standard Kalman filter to update \(\mu_{x_{t-1}}\) as in Fig. 5, according to the following filtering steps:
\[\widetilde{\mu}_{x_{t}}\leftarrow A\mu_{x_{t-1}},\quad\widetilde{\Sigma}_{x_{t}}\leftarrow A\Sigma_{x_{t-1}}A^{\top}+Q_{t}\quad\text{[Prediction]} \tag{5}\]
\[K_{t}\leftarrow\widetilde{\Sigma}_{x_{t}}C^{\top}(C\widetilde{\Sigma}_{x_{t}}C^{\top}+R_{t})^{-1}\quad\text{[Kalman gain]} \tag{6}\]
\[\mu_{x_{t}}\leftarrow\widetilde{\mu}_{x_{t}}+K_{t}(z_{t}-C\widetilde{\mu}_{x_{t}}),\quad\Sigma_{x_{t}}\leftarrow(\mathbb{I}-K_{t}C)\widetilde{\Sigma}_{x_{t}}\quad\text{[Update]} \tag{7}\]
We further discover that _specifying inter-frame dynamics_ is the key to capturing temporal coherence. Taking the movements on the X-axis as an example, denoted as \(\widetilde{p}^{\,t,x}_{t:t+\tau_{f}}\), we define the matrix \(A\) in the process model of DF (Eqn. 3) as:
\[A\leftarrow\left[\begin{array}{c|c}0_{(\tau_{f}-1)\times 1}&\mathbb{I}_{(\tau_{f}-1)\times(\tau_{f}-1)}\\ \hline 0_{1\times(\tau_{f}-1)}&1_{1\times 1}\end{array}\right],\qquad\widetilde{p}^{\,t,x}_{t:t+\tau_{f}}\leftarrow A\,\widetilde{p}^{\,t-1,x}_{t-1:t+\tau_{f}-1}. \tag{8}\]
The above matrix \(A\) uses the identity matrix \(\mathbb{I}_{(\tau_{f}-1)\times(\tau_{f}-1)}\) to indicate that the overlapped forecasting frames, _i.e._, \(\widetilde{p}^{\,t,x}_{t:t+\tau_{f}-1}\) and \(\widetilde{p}^{\,t-1,x}_{t:t+\tau_{f}-1}\), should be consistent. The bottom-right \(1_{1\times 1}\) means that the process model pads the additional timestamp \(t+\tau_{f}\) with the last movement of the previous timestamp, where we assume that motion cannot change drastically. As for \(C\) in the observation model, we define it as the identity matrix \(\mathbb{I}\), because the hidden states \(x_{t}\) and the observations \(z_{t}\) both represent future movements. For the implementation, we follow prior work [12][18] in treating \(Q_{t}\) as a fixed hyper-parameter. Moreover, as our future trajectories have larger dimensions than conventional state estimation tasks, we assume \(R_{t}\) is diagonal and predict it by \((\phi_{R}(F_{t}))^{2}\) to avoid invalid covariance matrices.

**Training and inference.** We train DF using the same forecasting loss penalizing the distance between predictions and ground truth (details in Sec. V-A). During inference, we recursively apply DF as in Fig. 5 to refine every trajectory.
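For concreteness, here is a minimal NumPy sketch of Eqns. (5)-(8) (our illustration, not the released implementation; it operates on one axis of one agent, and in the actual model \(R\) would come from \(\phi_{R}(F_{t})\)):

```python
import numpy as np

def process_matrix(horizon):
    """Shift-and-pad dynamics from Eqn. (8): the first horizon-1 rows copy
    the overlapped movements; the last row repeats the final movement."""
    A = np.zeros((horizon, horizon))
    A[:-1, 1:] = np.eye(horizon - 1)   # overlapped frames stay consistent
    A[-1, -1] = 1.0                    # pad the new step with the last one
    return A

def df_update(mu, sigma, z, R, Q):
    """One Kalman step (Eqns. 5-7) on a per-axis future trajectory.

    mu, z: (H,) previous hidden state and new observed trajectory (movements)
    sigma, R, Q: (H, H) covariances; R is the learned observation covariance.
    """
    H = len(mu)
    A, C = process_matrix(H), np.eye(H)
    mu_p = A @ mu                                              # [Prediction]
    sigma_p = A @ sigma @ A.T + Q
    K = sigma_p @ C.T @ np.linalg.inv(C @ sigma_p @ C.T + R)   # [Kalman gain]
    mu_new = mu_p + K @ (z - C @ mu_p)                         # [Update]
    sigma_new = (np.eye(H) - K @ C) @ sigma_p
    return mu_new, sigma_new
```

The shift-and-pad structure of `process_matrix` is what encodes the overlap constraint: running `df_update` recursively blends each new forecast with the shifted previous one, weighted by the learned covariance.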
**Discussion.** Our DF in the predictive streamer not only improves trajectory prediction but also highlights the temporal coherence property of streaming forecasting. In response to the question from the discussion of Sec. IV-B, our DF-based approach allows the occluded frames to aggregate the information from previous visible frames, which addresses the unreliability issue of hallucinated occluded positions.

## V Experiments

### _Implementation Details_

**Snapshot forecasting models.** We use two encoder/decoder architectures for generalizability: VectorNet [7] and mmTransformer [21]. To ensure a fair comparison, we _double the number of layers_ of the original VectorNet, as it was proposed earlier and is relatively light-weight. Our loss functions follow the _winner-takes-all_ strategy, where the best branch is selected from the multi-modal trajectories based on the smallest distance to the ground truth. Then a cross-entropy loss supervises its confidence score to approach 1, and a smooth L1 loss penalizes its distance to the ground truth (a schematic form of this objective is sketched at the end of this subsection). Our re-implementation of VectorNet and mmTransformer outperforms their reported results in [7][21].

**Differentiable filters.** \(\phi_{R}\) adopts the same multi-layer perceptron (MLP) architecture as the decoder in VectorNet. During streaming training, we freeze the encoder and decoder of the snapshot-based forecaster and supervise DFs with the same smooth L1 loss for multi-modal trajectories.

**Model training.** We introduce the details of three-stage training. (1) _Pretraining_: we use AdamW [22], 5e-4 as the learning rate, 1e-4 as the weight decay, and train for 24 epochs. (2) _Finetuning_: we use AdamW, 1e-4 as the learning rate, 1e-4 as the weight decay, and train for 8 epochs. The training samples come from Argoverse-SF, and every iteration involves _forecasting on 5 adjacent frames_. (3) _Streaming training_: we freeze the snapshot encoder and decoder and adopt the same configuration as the finetuning stage.
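The winner-takes-all supervision can be written compactly as follows; this is a schematic PyTorch-style sketch under our own naming, not the exact released code:

```python
import torch
import torch.nn.functional as F

def wta_loss(pred, logits, gt):
    """Winner-takes-all loss for multi-modal forecasting.

    pred:   (A, K, T, 2) multi-modal trajectories
    logits: (A, K)       confidence scores before softmax
    gt:     (A, T, 2)    ground-truth future
    """
    dist = (pred - gt[:, None]).norm(dim=-1).mean(dim=-1)  # (A, K) mean error per mode
    best = dist.argmin(dim=1)                              # winner per agent
    reg = F.smooth_l1_loss(pred[torch.arange(len(best)), best], gt)
    cls = F.cross_entropy(logits, best)  # push the winner's confidence toward 1
    return reg + cls
```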
### _Ablation Studies_

**Predictive streamer.** We analyze the _predictive streamer_ in Table II and Table III based on the encoders and decoders from VectorNet and mmTransformer, respectively. As illustrated, _every streaming strategy significantly improves the forecasting quality_, especially for the minFDE and minADE. We highlight that DF is vital for better predictions. It refines occlusion reasoning by fusing the results from previous reliable frames. Furthermore, better temporal coherence also improves the trajectories of visible agents without modifying the underlying encoders and decoders, which is plug-and-play for deploying snapshot forecasters.

**Properties of streaming datasets.** Before further analysis, we clarify the differences between a snapshot-based dataset and our streaming datasets, which are reflected in Table II and Table III and subtle to notice at first look. (1) The minADE for occluded cases can be larger than the minFDE. This is because minADE computes more occlusion cases than minFDE in Argoverse-SF. Concretely, we adopt a "masked" minADE that computes the metric once there is at least one frame with ground truth (as in Sec. III-C, evaluation metrics); thus, any prediction accounted for by minFDE is also considered by minADE, but not the reverse. (2) The static agents (excluded in the table due to the space limit) have much smaller minADE and minFDE values and decrease the scale of changes in the overall metrics. (3) We compare the performance between VectorNet and mmTransformer, where mmTransformer is better for visible agents and K=1 cases, but fails in the K=6 occlusion scenarios. This is because the queries of mmTransformer are tailored to distinct movement patterns, and their less confident directions have a weaker ability to adapt to noisy occlusions. Even so, our predictive streamer consistently improves performance in all scenarios.

**Three-stage training.** The three-stage procedure of adapting forecasters, which is proposed in Sec. IV-A, is necessary: finetuning improves the performance, where the overall minFDE decreases from 3.07m to 1.67m and the overall MR drops from 25% to 20%.

### _Quantitative Analysis_

We conduct several analyses of our design choices and provide further insights for streaming forecasting. By default, we use the VectorNet encoder and decoder.

**Comparing DF with KF and LSTM.** We compare the effects of DF and LSTM, because LSTM is a common approach for temporal coherence [12][18]. Specifically, we replace the DF in our predictive streamer with a 4-layer LSTM to enhance \(F_{t}\), and jointly train it with the decoder. For a fair comparison, both DF and LSTM _adopt the occlusion reasoning_ described in Sec. IV-B. As shown in Table IVa, LSTM cannot outperform our DF. After a more detailed analysis in Table IVb, we find that LSTM can slightly improve over the snapshot models on visible cases, but not by as large a margin as DF, because DF explicitly specifies temporal coherence through a process model (Eqn. 8). The main disadvantage of LSTMs is on the occluded cases, because the problematic features of occluded agents can corrupt the hidden states of the LSTM and influence the features in subsequent frames. We also compare DF to the baseline of Kalman filters, where DF performs better by learning adaptive covariance matrices.

**DF improves temporal coherence.** We mimic [16] in defining a second-order metric, "fluctuation", to analyze the temporal coherence. Specifically, we convert the predictions on \(t-1\) and \(t\) into their absolute coordinates \(\mathcal{C}^{t-1}_{t-1:t+\tau_{f}-1}\) and \(\mathcal{C}^{t}_{t:t+\tau_{f}}\) and compute their average distance on the overlapped frames, \(\frac{1}{\tau_{f}-1}\|\mathcal{C}^{t-1}_{t:t+\tau_{f}-1}-\mathcal{C}^{t}_{t:t+\tau_{f}-1}\|_{2}\). As in Fig. 6, DF decreases the fluctuation by a large margin.
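Concretely, the fluctuation metric can be computed as in the sketch below (our illustrative code, up to the exact normalization in the formula above):

```python
import numpy as np

def fluctuation(coords_prev, coords_cur):
    """Mean displacement between consecutive forecasts on overlapped frames.

    coords_prev: (T, 2) absolute coordinates predicted at t-1 (frames t-1 .. t+T-2)
    coords_cur:  (T, 2) absolute coordinates predicted at t   (frames t   .. t+T-1)
    """
    overlap_prev = coords_prev[1:]   # frames t .. t+T-2 from the old forecast
    overlap_cur = coords_cur[:-1]    # the same frames from the new forecast
    return np.linalg.norm(overlap_prev - overlap_cur, axis=-1).mean()
```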
TABLE II: Analysis of predictive streamer with VectorNet. We analyze the overall metrics (primary) and the breakdown metrics specifically for moving agents (secondary). Beginning from the baseline of directly adapting snapshot models with a Kalman filter (Sec. IV-A), we use the most confident trajectory for occlusion reasoning ("OCC") and learn a differentiable filter to improve the temporal coherence ("DF"). Our strategies significantly improve the performance, indicating the benefits of viewing motion forecasting from a streaming perspective. More analysis is in Sec. V-B.

| OCC | DF | Overall (minFDE↓ / minADE↓ / MR↓) | Moving-Occluded (minFDE↓ / minADE↓ / MR↓) | Moving-Visible (minFDE↓ / minADE↓ / MR↓) |
|---|---|---|---|---|
|   |   | 1.67 / 1.64 / 0.20 | 4.05 / 4.59 / 0.50 | 1.70 / 1.09 / 0.25 |
| ✓ |   | 1.43 / 1.24 / 0.18 | 3.22 / 3.17 / 0.44 | 1.70 / 1.09 / 0.25 |
| ✓ | ✓ | **1.37** / **1.20** / **0.17** | **3.03** / **3.07** / **0.41** | **1.67** / **1.07** / **0.24** |

TABLE III: Analysis of predictive streamer with mmTransformer. Abbreviations are the same as in Table II. Our algorithm improves the forecasting quality on mmTransformer, further supporting the significance of exploring streaming forecasting.

| OCC | DF | Overall (minFDE↓ / minADE↓ / MR↓) | Moving-Occluded (minFDE↓ / minADE↓ / MR↓) | Moving-Visible (minFDE↓ / minADE↓ / MR↓) |
|---|---|---|---|---|
|   |   | 1.74 / 1.61 / 0.19 | 4.27 / 4.48 / 0.49 | 1.70 / 1.07 / 0.23 |
| ✓ |   | 1.61 / 1.37 / 0.18 | 3.89 / 3.67 / 0.43 | 1.70 / 1.07 / 0.23 |
| ✓ | ✓ | **1.54** / **1.30** / **0.17** | **3.60** / **3.40** / **0.41** | **1.66** / **1.05** / 0.23 |

TABLE IV: Comparing DF with the Kalman filter (KF) and LSTM for temporal coherence. Our DF outperforms both KF and LSTM, especially for occlusion cases.

(a) Overall metrics.

| Model | Overall, K=6 (minFDE↓ / minADE↓ / MR↓) | Overall, K=1 (minFDE↓ / minADE↓ / MR↓) |
|---|---|---|
| Snapshot | 1.67 / 1.64 / 0.20 | 3.90 / 2.57 / 0.37 |
| KF | 1.42 / 1.24 / 0.18 | 3.39 / 2.03 / 0.37 |
| LSTM | 1.56 / 1.39 / 0.18 | 3.75 / 2.30 / **0.36** |
| DF (Ours) | **1.37** / **1.20** / **0.17** | **3.23** / **1.98** / **0.36** |

(b) Breakdown metrics for moving agents (K=6).

| Model | Moving-Occluded (minFDE↓ / minADE↓ / MR↓) | Moving-Visible (minFDE↓ / minADE↓ / MR↓) |
|---|---|---|
| Snapshot | 4.05 / 4.59 / 0.50 | 1.70 / 1.09 / 0.25 |
| KF | 3.13 / 3.15 / 0.42 | 1.70 / 1.09 / 0.25 |
| LSTM | 3.74 / 3.78 / 0.46 | 1.68 / **1.07** / 0.25 |
| DF (Ours) | **3.03** / **3.07** / **0.41** | **1.67** / **1.07** / **0.24** |

Fig. 6: DF decreases the fluctuations of trajectories by 10%-20%. The Y-axis is measured in meters/frame.

**Occlusion reasoning is better with multi-modality.** We compare our estimation of occluded positions with previous 3D perception strategies that capitalize on occlusion reasoning [26][25][29]. The main distinction is that we utilize multi-modal predictions. To validate the benefits of multi-modality, we integrate an additional single-modal decoder \(\texttt{Decoder}_{O}\) and a DF into our predictive streamer for occlusion reasoning. As in Table V, _multi-modal predictions are beneficial to estimating invisible positions_. The reason is that multi-modal trajectories can cover a larger spectrum of futures. We also notice that the improvement of DF generalizes to both single-modal and multi-modal trajectories.

### _Qualitative Analysis_
We first illustrate the intuition of streaming forecasting in Fig. 6(a), where our predictions gradually adapt to an agent's incoming observations. For example, the trajectories have smaller divergence on \(t+3\) than on \(t\). Furthermore, we demonstrate the challenge of occlusion reasoning in Fig. 6(b). Although the errors for occluded positions are subtle, the Kalman filter and single-modal methods produce seriously wrong predictions, while our multi-modal method has relatively higher fidelity. Please check our demo for more visualizations.

## VI Conclusions and Future Work

This paper proposes a novel perspective to study motion forecasting: _streaming forecasting_. By repurposing tracking data, we capture the intrinsic challenges of _forecasting for occluded agents_ and _temporal coherence_ in real-world traffic, which are overlooked by previous snapshot-based setups. In addition, we propose general _predictive streamers_ that leverage multi-modal forecasting to address occlusions and introduce differentiable filters to enhance temporal continuity. These solutions and analyses are intended to generate interest in studying motion forecasting in a realistic streaming setting. As the first study on streaming forecasting, we expect future works to improve predictive streamers in terms of miss rate. In addition, the benchmark is also extensible to more types of agents and larger datasets.

**Acknowledgement.** This work was supported in part by NSF Grant 2106825, NIFA Award 2020-67021-32799, the Jump ARCHES endowment, the NCSA Fellows program, the IBM-Illinois Discovery Accelerator Institute, the Illinois-Insper Partnership, and the Amazon Research Award.
2303.16715
Large time behavior to a 2D micro-macro model for compressible polymeric fluids near equilibrium
In this paper, we mainly study the large time behavior of a 2D micro-macro model for compressible polymeric fluids with small initial data. This model is a coupling of the isentropic compressible Navier-Stokes equations with a nonlinear Fokker-Planck equation. Firstly, the Fourier splitting method yields the logarithmic decay rate. By virtue of the time weighted energy estimate, we can improve the decay rate to $(1 + t)^{-\frac{1}{4}}$. Under the low-frequency condition and by the Littlewood-Paley theory, we show that the solutions belong to some Besov spaces with negative index and obtain the optimal $L^2$ decay rate. Finally, we obtain the $\dot{H}^s$ decay rate by establishing a new Fourier splitting estimate.
Wenjie Deng, Wei Luo, Zhaoyang Yin
2023-03-29T14:15:31Z
http://arxiv.org/abs/2303.16715v1
# Large time behavior to a 2D micro-macro model for compressible polymeric fluids near equilibrium

###### Abstract

In this paper, we mainly study the large time behavior of a 2D micro-macro model for compressible polymeric fluids with small initial data. This model is a coupling of the isentropic compressible Navier-Stokes equations with a nonlinear Fokker-Planck equation. Firstly, the Fourier splitting method yields the logarithmic decay rate. By virtue of the time weighted energy estimate, we can improve the decay rate to \((1+t)^{-\frac{1}{4}}\). Under the low-frequency condition and by the Littlewood-Paley theory, we show that the solutions belong to some Besov spaces with negative index and obtain the optimal \(L^{2}\) decay rate. Finally, we obtain the \(\dot{H}^{s}\) decay rate by establishing a new Fourier splitting estimate.

_2010 Mathematics Subject Classification_: 35Q30, 76B03, 76D05, 76D99.

_Keywords_: compressible polymeric fluids; global strong solutions; time decay rate.

###### Contents

* 1 Introduction
  * 1.1 Short reviews for the incompressible polymeric fluid models
  * 1.2 Short reviews for the compressible polymeric fluid models
  * 1.3 Main results
* 2 Preliminaries
* 3 The \(L^{2}\) decay rate
* 4 The \(\dot{H}^{s}\) decay rate

## 1 Introduction

In this paper, we consider a micro-macro model for compressible polymeric fluids near equilibrium with dimension \(d\geq 2\) [9]: \[\left\{\begin{array}{l}\varrho_{t}+\mathrm{div}(\varrho u)=0,\\ (\varrho u)_{t}+\mathrm{div}(\varrho u\otimes u)-\mathrm{div}\,\Sigma(u)+\frac{1}{Ma^{2}}\nabla P(\varrho)=\frac{\kappa}{De}\,\mathrm{div}\,\tau,\\ \psi_{t}+u\cdot\nabla\psi=\mathrm{div}_{q}[-\nabla u\cdot q\psi+\frac{\sigma}{De}\nabla_{q}\psi+\frac{1}{De\cdot r}\nabla_{q}\mathcal{U}\psi],\\ \tau_{ij}=\int_{\mathbb{R}^{d}}(q_{i}\nabla_{q_{j}}\mathcal{U})\psi dq,\\ \varrho|_{t=0}=\varrho_{0},\ \ u|_{t=0}=u_{0},\ \ \psi|_{t=0}=\psi_{0},\end{array}\right. \tag{1.1}\] where \(\varrho(t,x)\) is the density of the solvent, \(u(t,x)\) stands for the velocity of the polymeric liquid and \(\psi(t,x,q)\) denotes the distribution function for the internal configuration. Here the polymer elongation \(q\in\mathbb{R}^{d}\), \(x\in\mathbb{R}^{d}\) and \(t\in[0,\infty)\). The notation \[\Sigma(u)=\mu\left(\nabla u+\nabla^{T}u\right)+\mu^{\prime}\mathrm{div}\,u\cdot Id\] stands for the stress tensor, with \(\mu\) and \(\mu^{\prime}\) being the viscosity coefficients satisfying \(\mu>0\) and \(2\mu+\mu^{\prime}>0\). The pressure obeys the so-called \(\gamma\)-law: \[P(\varrho)=a\varrho^{\gamma}\] with \(\gamma\geq 1,\ a>0\). \(\sigma\) is a constant satisfying the relation \(\sigma=k_{B}T_{a}\), where \(k_{B}\) is the Boltzmann constant and \(T_{a}\) is the absolute temperature. Furthermore, \(r>0\) is related to the linear damping mechanism in the dynamics of the microscopic variable \(q\), and \(\kappa>0\) is some parameter describing the ratio between kinetic and elastic energy. The parameter \(De\) denotes the Deborah number, which represents the ratio of the time scales for elastic stress relaxation, so it characterizes the fluidity of the system. The Mach number \(Ma\) describes the ratio between the fluid velocity and the sound speed, which measures the compressibility of the system.
Moreover, the potential \(\mathcal{U}(q)\) satisfies the same assumptions as those of [9]: \[\left\{\begin{array}{l}|q|\lesssim(1+|\nabla_{q}\mathcal{U}|),\\ \Delta_{q}\mathcal{U}\leq C+\delta|\nabla_{q}\mathcal{U}|^{2},\\ \int_{\mathbb{R}^{d}}|\nabla_{q}\mathcal{U}|^{2}\psi_{\infty}dq\leq C,\ \ \int_{\mathbb{R}^{d}}|q|^{4}\psi_{\infty}dq\leq C,\end{array}\right. \tag{1.2}\] with \(\delta\in(0,1)\) and \[\left\{\begin{array}{l}|\nabla_{q}^{k}(q\nabla_{q}\mathcal{U})|\lesssim(1+|q||\nabla_{q}\mathcal{U}|),\\ \int_{\mathbb{R}^{d}}|\nabla_{q}^{k}(q\nabla_{q}\mathcal{U}\sqrt{\psi_{\infty}})|^{2}dq\leq C,\\ |\nabla_{q}^{k}(\Delta_{q}\mathcal{U}-\frac{1}{2}|\nabla_{q}\mathcal{U}|^{2})|\lesssim(1+|\nabla_{q}\mathcal{U}|^{2}),\end{array}\right. \tag{1.3}\] with the integer \(k\in[1,3]\). The simplest case among them is the Hookean spring \(\mathcal{U}(q)=\frac{1}{2}|q|^{2}\). From the Fokker-Planck equation, one can derive the following compressible Oldroyd-B equation, which has been studied in depth in [6, 11, 22, 23, 28]: \[\tau_{t}+u\cdot\nabla\tau+2\tau+Q(\nabla u,\tau)=Du,\qquad(Du)^{ij}=\sum_{l,m=1}^{d}\nabla^{l}u^{m}\int_{\mathbb{R}^{d}}q^{l}q^{m}q^{i}q^{j}e^{-|q|^{2}}dq. \tag{1.4}\] It is well known that the system (1.1) can be used to describe fluids coupled with polymers. The system is of great interest in many branches of physics, chemistry, and biology; see [2, 5]. In this model, a polymer is idealized as an "elastic dumbbell" consisting of two "beads" joined by a spring that can be modeled by a vector \(q\). The polymer particles are described by a probability function \(\psi(t,x,q)\) satisfying \(\int_{\mathbb{R}^{d}}\psi(t,x,q)dq=1\), which represents the distribution of the particles' elongation vector \(q\in\mathbb{R}^{d}\). In comparison with the macro-micro models studied in [9], it is reasonable for us to remove the term \(\mathrm{div}u\,\psi\) from the equation that \(\psi\) obeys. Otherwise, one could derive \(\mathrm{div}u=0\) from the equation that \(\psi\) obeys under the assumption \(\int_{\mathbb{R}^{d}}\psi dq=\int_{\mathbb{R}^{d}}\psi_{0}dq\). However, the condition \(\int_{\mathbb{R}^{d}}\psi dq=\int_{\mathbb{R}^{d}}\psi_{0}dq\) is of great significance in the estimation of \(\mathrm{div}_{q}\left(\nabla uq\psi\right)\), and it seems impossible to obtain a global a priori estimate without any dissipation or conservation law for \(\int_{\mathbb{R}^{d}}\psi dq\). Therefore, we underline that the system (1.1) is meaningful and that the results in this paper indeed cover those obtained in [9, 28]. At the level of the liquid, the system couples the Navier-Stokes equations for the fluid velocity with a Fokker-Planck equation describing the evolution of the polymer density. One can refer to [2, 5, 17, 18] for more details. In this paper we take \(a,\ \sigma,\ \kappa,\ r,\ De,\ Ma\) equal to \(1\). It is easy to check that \((1,0,\psi_{\infty})\) with \[\psi_{\infty}(q)=\frac{e^{-\mathcal{U}(q)}}{\int_{\mathbb{R}^{d}}e^{-\mathcal{U}(q)}dq}\,\] is a stationary solution to the system (1.1).
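Indeed, since \(\nabla_{q}\psi_{\infty}=-\nabla_{q}\mathcal{U}\,\psi_{\infty}\) by the definition of \(\psi_{\infty}\), the microscopic flux in the Fokker-Planck equation vanishes identically at \((\varrho,u,\psi)=(1,0,\psi_{\infty})\) (recall \(\sigma=De=r=1\)): \[\mathrm{div}_{q}\big[\nabla_{q}\psi_{\infty}+\nabla_{q}\mathcal{U}\,\psi_{\infty}\big]=\mathrm{div}_{q}\big[-\nabla_{q}\mathcal{U}\,\psi_{\infty}+\nabla_{q}\mathcal{U}\,\psi_{\infty}\big]=0,\] while the drift term \(\mathrm{div}_{q}(\nabla u\cdot q\,\psi_{\infty})\) vanishes because \(u=0\). The fluid equations hold trivially since \(\varrho\equiv 1\) and \(\tau(\psi_{\infty})\) is independent of \(x\), so that \(\mathrm{div}\,\tau=0\).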
Considering the perturbations near the global equilibrium: \[\rho=\varrho-1,\ \ u=u\ \ \ \text{and}\ \ \ g=\frac{\psi-\psi_{\infty}}{\psi_{\infty}}\,\] we can rewrite the system (1.1) as follows: \[\left\{\begin{array}{l}\rho_{t}+(1+\rho)\,\mathrm{div}\,u=-u\cdot\nabla\rho,\\ u_{t}-\frac{1}{1+\rho}\mathrm{div}\,\Sigma(u)+\frac{P^{\prime}(1+\rho)}{1+\rho}\nabla\rho=-u\cdot\nabla u+\frac{1}{1+\rho}\mathrm{div}\,\tau,\\ g_{t}+\mathcal{L}g=-u\cdot\nabla g-\frac{1}{\psi_{\infty}}\nabla_{q}\cdot(\nabla u\,q\,g\,\psi_{\infty})-\mathrm{div}\,u-\nabla u\,q\,\nabla_{q}\mathcal{U},\\ \tau_{ij}(g)=\int_{\mathbb{R}^{d}}(q_{i}\nabla_{q_{j}}\mathcal{U})g\psi_{\infty}dq,\\ \rho|_{t=0}=\rho_{0},\ \ u|_{t=0}=u_{0},\ \ g|_{t=0}=g_{0},\end{array}\right. \tag{1.5}\] where \(\mathcal{L}g=-\frac{1}{\psi_{\infty}}\nabla_{q}\cdot(\psi_{\infty}\nabla_{q}g)\).

### Short reviews for the incompressible polymeric fluid models

So far, many results about the incompressible model have been established. M. Renardy [24] established the local well-posedness in Sobolev spaces with potential \(\mathcal{U}(q)=(1-|q|^{2})^{1-\sigma}\) for \(\sigma>1\). Later, B. Jourdain et al. [10] proved local existence of a stochastic differential equation with potential \(\mathcal{U}(q)=-k\log(1-|q|^{2})\) in the case \(k>3\) for a Couette flow. P. L. Lions and N. Masmoudi [13] constructed global weak solutions for some Oldroyd models in the co-rotation case. In order to give a sufficient condition of non-breakdown for an incompressible viscoelastic fluid of the Oldroyd type, J. Y. Chemin and N. Masmoudi [3] established a new a priori estimate for the 2D Navier-Stokes system and derived a losing derivative estimate for the transport equation, which is of significance in the proof of the global well-posedness for viscoelastic fluids. Later, N. Masmoudi et al. [19] obtained global solutions for 2D polymeric fluid models under the co-rotational assumption without any smallness conditions. In addition, Z. Lei et al. [12] provided a new method to improve the criterion for viscoelastic systems of Oldroyd type considered in [3]. It is worth noting that this method is much easier and is expected to be applicable to other problems involving a priori estimates with loss of derivatives. L. He and P. Zhang [8] studied the long time decay of the \(L^{2}\) norm for the incompressible Hookean dumbbell models and found that the solutions tend to the equilibrium at the rate \((1+t)^{-\frac{3}{4}}\) under the low-frequency assumption \(u_{0}\in L^{1}(\mathbb{R}^{3})\). Recently, M. Schonbek [26] studied the \(L^{2}\) decay of the velocity \(u\) for the co-rotation FENE dumbbell model and proved that the velocity \(u\) tends to zero in \(L^{2}\) at the rate \((1+t)^{-\frac{d}{4}+\frac{1}{2}}\), \(d\geq 2\), when the initial perturbation is additionally bounded in \(L^{1}(\mathbb{R}^{d})\). More recently, W. Luo and Z. Yin [14, 15] improved Schonbek's result and showed that the optimal decay rate of the velocity \(u\) in \(L^{2}\) should be \((1+t)^{-\frac{d}{4}}\).

### Short reviews for the compressible polymeric fluid models

Z. Lei [11] first investigated the incompressible limit problem of the compressible Oldroyd-B model in a torus. Recently, D. Fang and R. Zi [6] studied the global well-posedness for the compressible Oldroyd-B model in critical Besov spaces with \(d\geq 2\). Z. Zhou et al. [28] proved the global well-posedness and time decay rates for the 3D compressible Oldroyd-B model.
More details on the compressible Oldroyd-B type model based on the deformation tensor can be found in [22, 23]. Recently, N. Jiang et al. [9] employed the energetic variational method to derive a micro-macro model for compressible polymeric fluids and proved the global existence near the equilibrium in the Sobolev space \(H^{3}(\mathbb{R}^{3})\). W. Deng et al. [4] established the global well-posedness for a micro-macro model for compressible polymeric fluids near equilibrium in Sobolev spaces with \(d\geq 2\) and obtained the optimal time decay rates in \(L^{2}(\mathbb{R}^{d})\) and \(\dot{H}^{s}(\mathbb{R}^{d})\) with \(d\geq 3\) when the initial perturbation is additionally bounded in \(\dot{B}_{2,\infty}^{-\frac{d}{2}}(\mathbb{R}^{d})\). Z. Luo et al. [16] studied the global strong solutions for the compressible FENE models near equilibrium with \(d\geq 2\) and obtained the optimal decay rate in \(L^{2}(\mathbb{R}^{d})\) under the low-frequency assumption on the initial data.

### Main results

The long time behavior of polymeric models was brought to attention by N. Masmoudi [7]. To the best of our knowledge, the large time behaviour of the 2D compressible polymeric fluid model with general Hookean potentials given by (1.2), (1.3) has not been studied yet. In this paper, we are devoted to the study of the optimal time decay rates of (1.5) in \(L^{2}(\mathbb{R}^{2})\) and \(\dot{H}^{s}(\mathbb{R}^{2})\). A brief outline of the proof is as follows. Firstly, we prove the logarithmic decay rate for the velocity. The main difficulty for us is to get the initial algebraic decay rate for \(u\). By virtue of the time weighted energy estimate and the logarithmic decay rate already obtained, we then improve the initial decay rate to \((1+t)^{-\frac{1}{4}}\) for the velocity in \(L^{2}(\mathbb{R}^{2})\). Different from the incompressible case, we cannot obtain the estimate of \(\|u\|_{L^{1}}\), which forces us to obtain only half of the optimal decay rate at the very beginning. We will use three steps to overcome the difficulty.

**Step 1**. Estimate the \(\dot{B}_{2,\infty}^{-\frac{1}{2}}(\mathbb{R}^{2})\)-norm of the velocity as in [16].

**Step 2**. By the Littlewood-Paley decomposition theory and the Fourier splitting method, we can improve the time decay rate: \[\|u\|_{L^{2}}\leq(1+t)^{-\frac{5}{16}}.\]
**Theorem 1.1**.: _[_4_]_ _Let \(d\geq 2\) and \(s>1+\frac{d}{2}\). Assume \((\rho,u,g)\) be a classical solution of (1.5) with the initial data \((\rho_{0},u_{0},g_{0})\) satisfying the conditions \(\int_{\mathbb{R}^{d}}g_{0}\psi_{\infty}dq=0\) and \(1+g_{0}>0\), then there exists some sufficiently small constant \(\varepsilon_{0}\) such that if_ \[E_{\lambda}(0)=\|\rho_{0}\|^{2}_{H^{s}}+\|u_{0}\|^{2}_{H^{s}}+\|g_{0}\|^{2}_{H ^{s}(\mathcal{L}^{2})}+\lambda\|\langle q\rangle g_{0}\|^{2}_{H^{s-1}( \mathcal{L}^{2})}\leq\varepsilon_{0}, \tag{1.6}\] _then the compressible system (1.5) admits a unique global classical solution \((\rho,u,g)\) with \(\int_{\mathbb{R}^{d}}g\psi_{\infty}dq=0\) and \(1+g>0\) and_ \[\sup_{t\in[0,+\infty)}E_{\lambda}(t)+\int_{0}^{\infty}D_{\lambda}(t)dt\leq\varepsilon, \tag{1.7}\] _where \(\varepsilon\) is a small constant dependent on the viscosity coefficients._ Our main result can be stated as follows. **Theorem 1.2**.: _Set \(d=2\). Let \((\rho,u,g)\) be a global strong solution of (1.5) with the initial data \((\rho_{0},u_{0},g_{0})\) under the condition in Theorem 1.1. In addition, if \((\rho_{0},u_{0},\int_{\mathbb{R}^{2}}q\otimes\nabla_{q}\mathcal{U}g_{0}dq)\in \dot{B}^{-1}_{2,\infty}(\mathbb{R}^{2})\times\dot{B}^{-1}_{2,\infty}(\mathbb{ R}^{2})\times\dot{B}^{-1}_{2,\infty}(\mathbb{R}^{2})\), then there exists a constant \(C\) such that_ \[\left\{\begin{array}{ll}\|\Lambda^{\sigma}(\rho,u)\|_{L^{2}}\leq C(1+t)^{- \frac{1}{2}-\frac{\sigma}{2}},\;\sigma\in[0,s],\\ \|\Lambda^{\delta}g\|_{L^{2}(\mathcal{L}^{2})}\leq C(1+t)^{-\frac{1}{2}-\frac{ 1}{2}-\frac{\delta}{2}},\;\delta\in[0,s-1],\\ \|\Lambda^{\eta}g\|_{L^{2}(\mathcal{L}^{2})}\leq C(1+t)^{-\frac{1}{2}-\frac{ \sigma}{2}},\;\;\eta\in(s-1,s],\end{array}\right. \tag{1.8}\] **Remark 1.3**.: _One can see that the system (1.1) includes the Oldroyd-B type equations by taking the classical Hookean spring \(\mathcal{U}(q)=\frac{1}{2}|q|^{2}\). Therefore, Theorems 1.2 implies the optimal decay rate for the classical Oldroyd-B model. It is noting that we obtain the \(\frac{1}{2}\) faster algebraic decay rate for \(\tau\) or \(\|g\|_{\mathcal{L}^{2}}\) in \(\dot{H}^{\eta}\) for any \(\eta\in[s-2,s]\) than those the Oldroyd-B type equations can attain originally. To the best of our knowledge, Theorem 1.2 is the first result for the highest order derivative decay of solutions to the \(2D\) micro-macro model for compressible polymeric fluids._ **Remark 1.4**.: _When we derive the dissipation of \(\rho\) in high frequency, \(\|\Lambda^{s}u^{high}\|^{2}_{L^{2}}\) arises from linear term \(\mathrm{div}\ u\), which may lead to the loss of time decay rate. In order to overcome the difficulty, we consider the time weight \((1+t)^{-1}\) such that it can be controlled by the dissipation of \(u\), i.e._ \[(1+t)^{-1}\|\Lambda^{s}u^{high}\|^{2}_{L^{2}}\lesssim\|\Lambda^{s+1}u\|^{2}_{L^ {2}}\.\] _We point out that the time weight we consider is critial indeed in our proof since it would be failed if we choose the time weight as \((1+t)^{-\alpha}\) for any \(\alpha\in(0,1)\). More details refer to Lemma 4.2._ The paper is organized as follows. In Section 2 we introduce some notations and give some preliminaries which will be used in the sequel. In Section 3 we study the \(L^{2}\) decay of solutions to a micro-macro model for compressible polymeric fluids near equilibrium with general Hookean potentials by using the Fourier splitting method. 
In Section 4 we study the \(\dot{H}^{s}\) decay by establishing a new critical Fourier splitting estimate.

## 2 Preliminaries

In this section we introduce some notations and useful lemmas which will be used in the sequel. If the function spaces are over \(\mathbb{R}^{d}\) for the variables \(x\) and \(q\), for simplicity, we drop \(\mathbb{R}^{d}\) in the notation of function spaces if there is no ambiguity. For \(p\geq 1\), we denote by \(\mathcal{L}^{p}\) the space \[\mathcal{L}^{p}=\big\{f\ \big|\ \|f\|_{\mathcal{L}^{p}}^{p}=\int_{\mathbb{R}^{d}}\psi_{\infty}|f|^{p}dq<\infty\big\}.\] We will use the notation \(L^{p}_{x}(\mathcal{L}^{q})\) to denote \(L^{p}[\mathbb{R}^{d};\mathcal{L}^{q}]\): \[L^{p}_{x}(\mathcal{L}^{q})=\big\{f\ \big|\ \|f\|_{L^{p}_{x}(\mathcal{L}^{q})}=\big(\int_{\mathbb{R}^{d}}(\int_{\mathbb{R}^{d}}\psi_{\infty}|f|^{q}dq)^{\frac{p}{q}}dx\big)^{\frac{1}{p}}<\infty\big\}.\] The symbol \(\widehat{f}=\mathcal{F}(f)\) denotes the Fourier transform of \(f\). Let \(\Lambda^{s}f=\mathcal{F}^{-1}(|\xi|^{s}\widehat{f})\). If \(s\geq 0\), we denote by \(H^{s}(\mathcal{L}^{2})\) the space \[H^{s}(\mathcal{L}^{2})=\big\{f\ \big|\ \|f\|_{H^{s}(\mathcal{L}^{2})}^{2}=\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}(|f|^{2}+|\Lambda^{s}f|^{2})\psi_{\infty}dqdx<\infty\big\}.\] Then we introduce the energy and energy dissipation functionals for the fluctuation \((\rho,u,g)\) as follows: \[E_{\lambda}(t)=\sum_{m=0,s}\Big(\|h(\rho)^{\frac{1}{2}}\Lambda^{m}\rho\|_{L^{2}}^{2}+\|(1+\rho)^{\frac{1}{2}}\Lambda^{m}u\|_{L^{2}}^{2}+\|\Lambda^{m}g\|_{L^{2}(\mathcal{L}^{2})}^{2}\Big)+\sum_{m=0,s-1}\lambda\|\langle q\rangle\Lambda^{m}g\|_{L^{2}(\mathcal{L}^{2})}^{2}\,\] and \[D_{\lambda}(t)=\gamma\|\nabla\rho\|_{H^{s-1}}^{2}+\mu\|\nabla u\|_{H^{s}}^{2}+(\mu+\mu^{\prime})\|\mathrm{div}\,u\|_{H^{s}}^{2}+\|\nabla_{q}g\|_{H^{s}(\mathcal{L}^{2})}^{2}+\lambda\|\langle q\rangle\nabla_{q}g\|_{H^{s-1}(\mathcal{L}^{2})}^{2}\,\] where the positive constant \(\lambda\) is small enough. Sometimes we write \(f\lesssim g\) instead of \(f\leq Cg\), where \(C\) is a positive constant. We agree that \(\nabla\) stands for \(\nabla_{x}\) and \(\mathrm{div}\) stands for \(\mathrm{div}_{x}\). We now recall the Littlewood-Paley decomposition theory and Besov spaces.

**Proposition 2.1**.: _[_1_]_ _Let \(\mathcal{C}\) be the annulus \(\{\xi\in\mathbb{R}^{d}:\frac{3}{4}\leq|\xi|\leq\frac{8}{3}\}\). There exist radial functions \(\chi\) and \(\varphi\), valued in the interval \([0,1]\), belonging respectively to \(\mathcal{D}(B(0,\frac{4}{3}))\) and \(\mathcal{D}(\mathcal{C})\), and such that_ \[\forall\xi\in\mathbb{R}^{d},\ \chi(\xi)+\sum_{j\geq 0}\varphi(2^{-j}\xi)=1,\] \[\forall\xi\in\mathbb{R}^{d}\backslash\{0\},\ \sum_{j\in\mathbb{Z}}\varphi(2^{-j}\xi)=1,\] \[|j-j^{\prime}|\geq 2\Rightarrow\mathrm{Supp}\ \varphi(2^{-j}\cdot)\cap\mathrm{Supp}\ \varphi(2^{-j^{\prime}}\cdot)=\emptyset,\] \[j\geq 1\Rightarrow\mathrm{Supp}\ \chi(\cdot)\cap\mathrm{Supp}\ \varphi(2^{-j}\cdot)=\emptyset.\] _The set \(\widetilde{\mathcal{C}}=B(0,\frac{2}{3})+\mathcal{C}\) is an annulus, and_ \[|j-j^{\prime}|\geq 5\Rightarrow 2^{j}\mathcal{C}\cap 2^{j^{\prime}}\widetilde{\mathcal{C}}=\emptyset.\] _Further, we have_ \[\forall\xi\in\mathbb{R}^{d},\ \frac{1}{2}\leq\chi^{2}(\xi)+\sum_{j\geq 0}\varphi^{2}(2^{-j}\xi)\leq 1,\] \[\forall\xi\in\mathbb{R}^{d}\backslash\{0\},\ \frac{1}{2}\leq\sum_{j\in\mathbb{Z}}\varphi^{2}(2^{-j}\xi)\leq 1.\] \(\mathcal{F}\) represents the Fourier transform and its inverse is denoted by \(\mathcal{F}^{-1}\).
Let \(u\) be a tempered distribution in \(\mathcal{S}^{\prime}(\mathbb{R}^{d})\). For all \(j\in\mathbb{Z}\), define \[\dot{\Delta}_{j}u=\mathcal{F}^{-1}\left(\varphi(2^{-j}\cdot)\mathcal{F}u\right)\qquad\text{and}\qquad\dot{S}_{j}u=\sum_{j^{\prime}<j}\dot{\Delta}_{j^{\prime}}u\.\] Then the Littlewood-Paley decomposition is given as follows: \[u=\sum_{j\in\mathbb{Z}}\dot{\Delta}_{j}u\quad\text{in}\quad\mathcal{S}^{{}^{\prime}}_{h}(\mathbb{R}^{d}).\] Let \(s\in\mathbb{R},\ 1\leq p,r\leq\infty.\) The homogeneous Besov spaces \(\dot{B}^{s}_{p,r}\) and \(\dot{B}^{s}_{p,r}(\mathcal{L}^{p^{\prime}})\) are defined by \[\dot{B}^{s}_{p,r}=\{u\in S^{{}^{\prime}}_{h}:\|u\|_{\dot{B}^{s}_{p,r}}=\Big\|\left(2^{js}\|\dot{\Delta}_{j}u\|_{L^{p}}\right)_{j}\Big\|_{l^{r}(\mathbb{Z})}<\infty\},\] \[\dot{B}^{s}_{p,r}(\mathcal{L}^{p^{\prime}})=\{\phi\in S^{{}^{\prime}}_{h}:\|\phi\|_{\dot{B}^{s}_{p,r}(\mathcal{L}^{p^{\prime}})}=\Big\|\left(2^{js}\|\dot{\Delta}_{j}\phi\|_{L^{p}_{x}(\mathcal{L}^{p^{\prime}})}\right)_{j}\Big\|_{l^{r}(\mathbb{Z})}<\infty\}.\] The following lemma concerns the embedding between Lebesgue and Besov spaces.

**Lemma 2.2**.: _[_1_]_ _Let \(1\leq p\leq 2\) and \(d\geq 2\). Then it holds that_ \[L^{p}\hookrightarrow\dot{B}^{\frac{d}{2}-\frac{d}{p}}_{2,\infty}.\] _Moreover, it follows that_ \[L^{2}=\dot{B}^{0}_{2,2}.\]

The following lemma is the Gagliardo-Nirenberg inequality.

**Lemma 2.3**.: _[_21_]_ _Let \(d\geq 2,\ p\in[2,+\infty)\) and \(0\leq s,s_{1}\leq s_{2}\). Then there exists a constant \(C\) such that_ \[\|\Lambda^{s}f\|_{L^{p}}\leq C\|\Lambda^{s_{1}}f\|_{L^{2}}^{1-\theta}\|\Lambda^{s_{2}}f\|_{L^{2}}^{\theta},\] _where \(0\leq\theta\leq 1\) and \(\theta\) satisfies_ \[s+d(\frac{1}{2}-\frac{1}{p})=s_{1}(1-\theta)+\theta s_{2}.\] _Note that we require \(0<\theta<1\), \(0\leq s_{1}\leq s\), when \(p=\infty\)._

The following lemma allows us to estimate the extra stress tensor \(\tau\).

**Lemma 2.4**.: _[_9_]_ _Assume \(g\in H^{s}(\mathcal{L}^{2})\) with \(\int_{\mathbb{R}^{d}}g\psi_{\infty}dq=0\). Then it follows that_ \[\left\{\begin{array}{l}\|\nabla_{q}\mathcal{U}g\|_{\dot{H}^{\sigma}(\mathcal{L}^{2})}+\|qg\|_{\dot{H}^{\sigma}(\mathcal{L}^{2})}\lesssim\|\nabla_{q}g\|_{\dot{H}^{\sigma}(\mathcal{L}^{2})},\\ \|q\nabla_{q}\mathcal{U}g\|_{\dot{H}^{\sigma}(\mathcal{L}^{2})}+\||q|^{2}g\|_{\dot{H}^{\sigma}(\mathcal{L}^{2})}\lesssim\|\langle q\rangle\nabla_{q}g\|_{\dot{H}^{\sigma}(\mathcal{L}^{2})},\end{array}\right.\] _for any \(\sigma\in[0,s]\)._

**Lemma 2.5**.: _[_9_]_ _Assume \(g\in H^{s}(\mathcal{L}^{2})\), then it holds that_ \[|\tau(g)|^{2}\lesssim\|g\|_{\mathcal{L}^{2}}^{2}\lesssim\|\nabla_{q}g\|_{\mathcal{L}^{2}}^{2}.\]

**Lemma 2.6**.: _[_20_]_ _Let \(s\geq 1\). Assume \(p_{1},...,p_{4}\) and \(p\in(1,\infty)\) with \(\frac{1}{p}=\frac{1}{p_{1}}+\frac{1}{p_{2}}=\frac{1}{p_{3}}+\frac{1}{p_{4}}\). Then it holds that_ \[\|[\Lambda^{s},f]g\|_{L^{p}}\leq C\left(\|\Lambda^{s}f\|_{L^{p_{1}}}\|g\|_{L^{p_{2}}}+\|\nabla f\|_{L^{p_{3}}}\|\Lambda^{s-1}g\|_{L^{p_{4}}}\right).\] _Analogously,_ \[\|[\Lambda^{s},f]g\|_{L^{2}(\mathcal{L}^{2})}\leq C\left(\|\Lambda^{s}f\|_{L^{2}}\|g\|_{L^{\infty}(\mathcal{L}^{2})}+\|\nabla f\|_{L^{\infty}}\|\Lambda^{s-1}g\|_{L^{2}(\mathcal{L}^{2})}\right).\]

## 3 The \(L^{2}\) decay rate

This section is devoted to investigating the long time behaviour of the 2D micro-macro model for compressible polymeric fluids with general Hookean springs given by (1.2), (1.3).
As in [25, 14, 15], we attribute the failure to obtain the optimal decay rate in \(L^{2}\) to the additional stress tensor \(\tau\), which does not decay fast enough. To deal with this term, we need to use the coupling effect between \(\rho\), \(u\) and \(g\). Different from the case of the classical Hookean potential \(\mathcal{U}=\frac{1}{2}|q|^{2}\), we cannot obtain the optimal decay estimate directly due to the lack of the estimate for \(\|(\rho,u)\|_{L^{1}}\). To overcome the difficulty, we should consider estimating some Besov norms with negative index instead. We introduce some notations for simplicity. Denote the energy and energy dissipation functionals considered in this section as follows: \[E_{0}(t)=\|(\rho,u)\|_{H^{s}}^{2}+\|g\|_{H^{s}(\mathcal{L}^{2})}^{2}\,\] \[E_{1}(t)=\|\Lambda^{1}(\rho,u)\|_{H^{s-1}}^{2}+\|\Lambda^{1}g\|_{H^{s-1}(\mathcal{L}^{2})}^{2}\,\] \[D_{0}(t)=\gamma\eta\|\nabla\rho\|_{H^{s-1}}^{2}+\mu\|\nabla u\|_{H^{s}}^{2}+(\mu+\mu^{\prime})\|\mathrm{div}\,u\|_{H^{s}}^{2}+\|\nabla_{q}g\|_{H^{s}(\mathcal{L}^{2})}^{2}\,\] \[D_{1}(t)=\gamma\eta\|\nabla\Lambda^{1}\rho\|_{H^{s-2}}^{2}+\mu\|\nabla\Lambda^{1}u\|_{H^{s-1}}^{2}+(\mu+\mu^{\prime})\|\mathrm{div}\,\Lambda^{1}u\|_{H^{s-1}}^{2}+\|\nabla_{q}\Lambda^{1}g\|_{H^{s-1}(\mathcal{L}^{2})}^{2}\.\] Denote the following domains involved in the Fourier splitting method: \[S(t)=\left\{\xi:|\xi|^{2}\leq C_{2}\frac{f^{\prime}(t)}{f(t)}\right\}\quad\text{with}\quad f(t)=1+t\ \ \text{or}\ \ f(t)=\ln^{l}(e+t)\,\ l\in\mathbb{N}.\] Denote the following energy and energy dissipation functionals involved in the Fourier splitting method: \[H_{0}=\mu\|u\|_{H^{s}}^{2}+\eta\gamma\|\rho\|_{H^{s-1}}^{2}\qquad\text{and}\qquad H_{1}=\mu\|\Lambda^{1}u\|_{H^{s-1}}^{2}+\eta\gamma\|\Lambda^{1}\rho\|_{H^{s-2}}^{2}\.\] Denote two important factors \(B_{1}\) and \(B_{2}\): \[B_{1}=\int_{0}^{t}\|(\rho,u)\|_{L^{2}}^{4}ds\quad\text{and}\quad B_{2}=\int_{0}^{t}\int_{S(t)}|\hat{G}\cdot\bar{\hat{u}}|d\xi ds\,\] where \[G=-u\cdot\nabla u+\frac{\rho}{1+\rho}\left(\mathrm{div}\,\Sigma(u)+\mathrm{div}\,\tau\right)+\left[\gamma-\frac{P^{{}^{\prime}}(1+\rho)}{1+\rho}\right]\nabla\rho.\] Let us recall the following energy estimate.

**Proposition 3.1**.: _[_4_]_ _Under the conditions of Theorem 1.2, it holds for \(\sigma=0\) or \(1\) that_ \[\frac{d}{dt}E_{\sigma}+D_{\sigma}\leq 0. \tag{3.1}\]

Let us recall the key lemma for the time decay estimates as follows.

**Lemma 3.2**.: _[_4_]_ _Let \(d=2\). Assume \((\rho,u,g)\) is a global strong solution of system (1.5) with initial data \((\rho_{0},u_{0},g_{0})\) under the conditions in Theorem 1.2. Then there exists a positive time \(T_{0}\) such that_ \[\int_{S(t)}\left(|\hat{\rho}|^{2}+|\hat{u}|^{2}+\|\hat{g}\|_{\mathcal{L}^{2}}^{2}\right)d\xi\lesssim\frac{f^{\prime}(t)}{f(t)}\left(1+\|(\rho_{0},u_{0})\|_{\dot{B}_{2,\infty}^{-\frac{d}{2}}}^{2}+\|g_{0}\|_{\dot{B}_{2,\infty}^{-\frac{d}{2}}(\mathcal{L}^{2})}^{2}\right)+\frac{f^{\prime}(t)}{f(t)}B_{1}+B_{2}\, \tag{3.2}\] _for any \(t>T_{0}\)._
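The strategy behind the proofs below rests on an elementary frequency-splitting observation: since the dissipation dominates all frequencies outside \(S(t)\), Plancherel's theorem gives \[\|\nabla u\|_{L^{2}}^{2}=\int_{\mathbb{R}^{2}}|\xi|^{2}|\hat{u}|^{2}d\xi\geq C_{2}\frac{f^{\prime}(t)}{f(t)}\left(\|u\|_{L^{2}}^{2}-\int_{S(t)}|\hat{u}|^{2}d\xi\right),\] and similarly for \(\rho\). Hence, after multiplying (3.1) by \(f(t)\), only the low-frequency integral \(\int_{S(t)}(|\hat{\rho}|^{2}+|\hat{u}|^{2})d\xi\) remains to be controlled, which is precisely what Lemma 3.2 provides.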
Firstly, we prove the logarithmic decay rate with \(d=2\).

**Proposition 3.3**.: _Under the conditions of Theorem 1.2, for any \(l\in\mathbb{N}^{+}\) there exists a positive constant \(C\) such that_ \[E_{0}+(e+t)E_{1}\leq C\ln^{-l}(e+t). \tag{3.3}\]

Proof.: Taking \(\sigma=0\) in (3.1), we have \[\frac{d}{dt}E_{0}(t)+D_{0}(t)\leq 0. \tag{3.4}\] Define \(S_{0}(t)=\left\{\xi:|\xi|^{2}\leq C_{2}\frac{f^{\prime}(t)}{f(t)}\right\}\) with \(f(t)=\ln^{3}(e+t)\) and \(C_{2}\) large enough. Applying Schonbek's strategy to (3.4) as in [27] leads to \[\frac{d}{dt}[f(t)E_{0}(t)]+C_{2}f^{\prime}(t)H_{0}(t)+f(t)\|g\|_{H^{s}(\mathcal{L}^{2})}^{2}\lesssim f^{\prime}(t)\int_{S_{0}(t)}|\hat{u}|^{2}+|\hat{\rho}|^{2}d\xi+f^{\prime}(t)\|\rho\|_{\dot{H}^{s}}^{2}. \tag{3.5}\] It follows from Lemma 3.2 that \[\int_{S_{0}(t)}|\hat{\rho}|^{2}+|\hat{u}|^{2}d\xi\lesssim\ln^{-\frac{1}{2}}(e+t)+B_{2}\lesssim\ln^{-\frac{1}{2}}(e+t)+\left(\frac{f^{{}^{\prime}}(t)}{f(t)}\right)^{\frac{1}{2}}\int_{0}^{t}\|G\|_{L^{1}}\|u\|_{L^{2}}ds\lesssim\ln^{-\frac{1}{2}}(e+t)+\left(\frac{f^{{}^{\prime}}(t)}{f(t)}\right)^{\frac{1}{2}}\int_{0}^{t}\|(\rho,u)\|_{L^{2}}^{2}\|\nabla(\rho,u,\tau)\|_{H^{1}}ds\lesssim\ln^{-\frac{1}{2}}(e+t). \tag{3.6}\] According to (3.5) and (3.6), we deduce that \[\frac{d}{dt}[\ln^{3}(e+t)E_{0}(t)]\lesssim\frac{\ln^{\frac{3}{2}}(e+t)}{e+t}+\|\rho\|_{\dot{H}^{s}}^{2}, \tag{3.7}\] which implies \[E_{0}(t)\lesssim\ln^{-\frac{1}{2}}(e+t). \tag{3.8}\] Next, we improve the above decay rate by an inductive argument. Define \(S(t)=\left\{\xi:|\xi|^{2}\leq C_{2}\frac{f^{\prime}(t)}{f(t)}\right\}\) with \(f(t)=\ln^{l+3}(e+t)\) and \(C_{2}\) large enough, and temporarily assume \[E_{0}(t)\lesssim\ln^{-\frac{l}{2}}(e+t). \tag{3.9}\] According to (3.9), we obtain \[B_{2}=\int_{0}^{t}\int_{S(t)}|\hat{G}||\hat{u}|d\xi ds\lesssim\left(\frac{f^{\prime}(t)}{f(t)}\right)^{\frac{1}{2}}\int_{0}^{t}\|G\|_{L^{1}}\|u\|_{L^{2}}ds\lesssim\left(\frac{f^{\prime}(t)}{f(t)}\right)^{\frac{1}{2}}\left(\int_{0}^{t}\ln^{-l}(e+s)ds\right)^{\frac{1}{2}}\lesssim\ln^{-\frac{l+1}{2}}(e+t), \tag{3.10}\] where we have used the fact that \[\lim_{t\to\infty}\frac{\int_{0}^{t}\ln^{-l}(e+s)ds}{(e+t)\ln^{-l}(e+t)}<+\infty, \tag{3.11}\] for any \(l\in\mathbb{N}^{+}\). Thus it follows from (3.10) that \[\frac{d}{dt}[\ln^{l+3}(e+t)E_{0}(t)]\lesssim\frac{\ln^{\frac{l+3}{2}}(e+t)}{e+t}+\|\rho\|_{\dot{H}^{s}}^{2}, \tag{3.12}\] which implies \[E_{0}(t)\lesssim\ln^{-\frac{l+1}{2}}(e+t). \tag{3.13}\] By virtue of the inductive argument, we finally deduce that \[E_{0}\lesssim\ln^{-l}(e+t), \tag{3.14}\] for any \(l\in\mathbb{N}^{+}\). Further, multiplying (3.4) by \(\ln^{k}(e+t)\), we obtain \[\frac{d}{dt}[\ln^{k}(e+t)E_{0}(t)]+\ln^{k}(e+t)D_{0}(t)\lesssim\frac{\ln^{k-1}(e+t)}{e+t}E_{0}(t)\lesssim\frac{\ln^{-2}(e+t)}{e+t}, \tag{3.15}\] which implies the following enhanced integrability: \[\int_{0}^{t}\ln^{k}(e+s)D_{0}ds\leq C, \tag{3.16}\] for any \(k\in\mathbb{N}^{+}\). Multiplying (3.1) by \((e+t)\ln^{l}(e+t)\) and taking \(\sigma=1\), we infer that \[\frac{d}{dt}[(e+t)\ln^{l}(e+t)E_{1}(t)]\lesssim\ln^{l}(e+t)E_{1}(t)\lesssim\ln^{l}(e+t)D_{0}(t), \tag{3.17}\] which implies, by using (3.16), that \[(e+t)E_{1}\lesssim\ln^{-l}(e+t). \tag{3.18}\] We thus complete the proof of Proposition 3.3.

By virtue of Proposition 3.3, we are going to prove the initial algebraic time decay rate.

**Proposition 3.4**.: _Under the conditions in Theorem 1.2, there exists a positive constant \(C\) such that_ \[E_{0}+(1+t)E_{1}\leq C(1+t)^{-\frac{1}{2}}. \tag{3.19}\]

Proof.: Define \(S(t)=\{\xi:|\xi|^{2}\leq\frac{C_{2}}{1+t}\}\) for some \(C_{2}\) large enough.
Taking \(\sigma=0\) in (3.1), we infer from Schonbek's strategy that \[\frac{d}{dt}E_{0}+\frac{C_{2}}{1+t}\int_{\mathbb{R}^{2}}|\hat{u}|^{2}+|\hat{\rho}|^{2}d\xi+\|\Lambda^{s}\rho\|_{L^{2}}^{2}+\|\Lambda^{s+1}u\|_{L^{2}}^{2}+\|g\|_{H^{s}(\mathcal{L}^{2})}^{2}\lesssim\frac{1}{1+t}\int_{S(t)}|\hat{u}|^{2}+|\hat{\rho}|^{2}d\xi. \tag{3.20}\] It follows from Lemma 3.2 that \[\int_{S(t)}|\hat{\rho}|^{2}+|\hat{u}|^{2}d\xi\lesssim(1+t)^{-1}+(1+t)^{-1}B_{1}+B_{2}\lesssim(1+t)^{-1}+(1+t)^{-1}\int_{0}^{t}\|(\rho,u)\|_{L^{2}}^{4}ds+(1+t)^{-\frac{1}{2}}\int_{0}^{t}\|(\rho,u)\|_{L^{2}}^{2}\|\nabla(\rho,u,\tau)\|_{H^{1}}ds. \tag{3.21}\] Combining the estimates (3.20) and (3.21), we deduce that \[\frac{d}{dt}E_{0}+\frac{C_{2}}{1+t}\int_{\mathbb{R}^{2}}|\hat{u}|^{2}+|\hat{\rho}|^{2}d\xi+\|\Lambda^{s}\rho\|_{L^{2}}^{2}+\|\Lambda^{s+1}u\|_{L^{2}}^{2}+\|g\|_{H^{s}(\mathcal{L}^{2})}^{2}\lesssim(1+t)^{-2}+(1+t)^{-2}\int_{0}^{t}\|(\rho,u)\|_{L^{2}}^{4}ds+(1+t)^{-\frac{3}{2}}\int_{0}^{t}\|(\rho,u)\|_{L^{2}}^{2}\|\nabla(\rho,u,\tau)\|_{H^{1}}ds. \tag{3.22}\] Multiplying (3.22) by \((1+t)^{2}\) and integrating over \([0,t]\) leads to \[(1+t)^{2}E_{0}\lesssim(1+t)+(1+t)\int_{0}^{t}\|(\rho,u)\|_{L^{2}}^{4}ds+(1+t)^{\frac{3}{2}}\int_{0}^{t}\|(\rho,u)\|_{L^{2}}^{2}\|\nabla(\rho,u,\tau)\|_{H^{1}}ds. \tag{3.23}\] Define \(M(t)=\sup\limits_{s\in[0,t]}(1+s)^{\frac{1}{2}}E_{0}(s)\). According to Proposition 3.3 with \(l=2\), we obtain \[M(t)\lesssim 1+\int_{0}^{t}(1+s)^{-1}\|(\rho,u,\tau)\|_{L^{2}}^{2}M(s)ds+\int_{0}^{t}(1+s)^{-\frac{1}{2}}\|\nabla(\rho,u,\tau)\|_{H^{1}}M(s)ds\lesssim 1+\int_{0}^{t}\frac{\ln^{-2}(e+s)}{1+s}M(s)ds. \tag{3.24}\] We infer from Gronwall's inequality that \(M(t)\lesssim 1\), which implies \[E_{0}\lesssim(1+t)^{-\frac{1}{2}}. \tag{3.25}\] Further, multiplying (3.4) by \((1+t)^{\frac{3}{2}}\), we obtain \[\frac{d}{dt}[(1+t)^{\frac{3}{2}}E_{0}(t)]+(1+t)^{\frac{3}{2}}D_{0}(t)\lesssim(1+t)^{\frac{1}{2}}E_{0}(t), \tag{3.26}\] which implies the following enhanced integrability: \[(1+t)^{-1}\int_{0}^{t}(1+s)^{\frac{3}{2}}D_{0}ds\leq C. \tag{3.27}\] Multiplying (3.1) by \((1+t)^{\frac{5}{2}}\) and taking \(\sigma=1\), we infer that \[\frac{d}{dt}[(1+t)^{\frac{5}{2}}E_{1}(t)]\lesssim(1+t)^{\frac{3}{2}}E_{1}(t)\lesssim(1+t)^{\frac{3}{2}}D_{0}(t), \tag{3.28}\] which implies, by using (3.27), that \[(1+t)E_{1}(t)\lesssim(1+t)^{-\frac{3}{2}}\int_{0}^{t}(1+s)^{\frac{3}{2}}D_{0}(s)ds\lesssim(1+t)^{-\frac{1}{2}}. \tag{3.29}\] We thus complete the proof of Proposition 3.4.

By Proposition 3.4, we can show that the solution of (1.5) belongs to some Besov spaces with negative index.

**Lemma 3.5**.: _Let \(0<\alpha,\sigma\leq 1\) and \(\sigma<2\alpha\). Assume that \((\rho_{0},u_{0},g_{0})\) satisfies the conditions in Theorem 1.2. If_ \[E_{0}(t)+(1+t)E_{1}(t)\leq C(1+t)^{-\alpha}, \tag{3.30}\] _then it holds that_ \[(\rho,u,\tau)\in L^{\infty}(0,\infty;\dot{B}_{2,\infty}^{-\sigma}). \tag{3.31}\]
Proof.: Applying \(\dot{\Delta}_{j}\) to (1.5), we obtain by virtue of the standard energy estimate that \[\frac{1}{2}\frac{d}{dt}(\|\dot{\Delta}_{j}g\|_{L^{2}(\mathcal{L}^{2})}^{2}+\|\dot{\Delta}_{j}(\rho,u)\|_{L^{2}}^{2}+\eta\langle\dot{\Delta}_{j}u,\nabla\dot{\Delta}_{j}\rho\rangle)+\eta 2^{2j}\|\dot{\Delta}_{j}\rho\|_{L^{2}}^{2}+2^{2j}\|\dot{\Delta}_{j}u\|_{L^{2}}^{2}+\|\nabla_{q}\dot{\Delta}_{j}g\|_{L^{2}(\mathcal{L}^{2})}^{2}\] \[\leq\|\dot{\Delta}_{j}(\rho u)\|_{L^{2}}^{2}+2^{2j}\|\dot{\Delta}_{j}(\rho u)\|_{L^{2}}\|\dot{\Delta}_{j}u\|_{L^{2}}+\|\dot{\Delta}_{j}(u\cdot\nabla u)\|_{L^{2}}\|\dot{\Delta}_{j}u\|_{L^{2}}+\|\dot{\Delta}_{j}(u\cdot\nabla u)\|_{L^{2}}^{2}\] \[\quad+\|\dot{\Delta}_{j}g\|_{L^{2}(\mathcal{L}^{2})}^{2}+\|\dot{\Delta}_{j}g\|_{L^{2}(\mathcal{L}^{2})}\|\dot{\Delta}_{j}u\|_{L^{2}}+\|\dot{\Delta}_{j}(u\cdot\nabla g)\|_{L^{2}(\mathcal{L}^{2})}^{2}+\|\dot{\Delta}_{j}(\nabla u\,qg)\|_{L^{2}(\mathcal{L}^{2})}^{2}. \tag{3.32}\] Multiplying both sides of (3.32) by \(2^{-2j\sigma}\) and taking the \(l^{\infty}\) norm over \(j\in\mathbb{Z}\), we obtain \[\frac{1}{2}\frac{d}{dt}(\|g\|^{2}_{\dot{B}^{-\sigma}_{2,\infty}(\mathcal{L}^{2})}+\|\rho\|^{2}_{\dot{B}^{-\sigma}_{2,\infty}}+\|u\|^{2}_{\dot{B}^{-\sigma}_{2,\infty}})+\eta\|\rho\|^{2}_{\dot{B}^{-\sigma+1}_{2,\infty}}+\|u\|^{2}_{\dot{B}^{-\sigma+1}_{2,\infty}}+\|\nabla_{q}g\|^{2}_{\dot{B}^{-\sigma}_{2,\infty}(\mathcal{L}^{2})}\] \[\leq\|\rho u\|^{2}_{\dot{B}^{-\sigma}_{2,\infty}}+\|\rho u\|_{\dot{B}^{-\sigma+1}_{2,\infty}}\|u\|_{\dot{B}^{-\sigma+1}_{2,\infty}}+\|u\cdot\nabla u\|_{\dot{B}^{-\sigma}_{2,\infty}}\|u\|_{\dot{B}^{-\sigma}_{2,\infty}}+\|u\cdot\nabla u\|^{2}_{\dot{B}^{-\sigma}_{2,\infty}}\] \[+\|g\|^{2}_{\dot{B}^{-\sigma}_{2,\infty}}+\|g\|_{\dot{B}^{-\sigma}_{2,\infty}}\|u\|_{\dot{B}^{-\sigma}_{2,\infty}}+\|u\cdot\nabla g\|^{2}_{\dot{B}^{-\sigma}_{2,\infty}(\mathcal{L}^{2})}+\|\nabla u\,qg\|^{2}_{\dot{B}^{-\sigma}_{2,\infty}(\mathcal{L}^{2})}. \tag{3.33}\] According to Lemma 2.5 and (3.30), we infer \[\|\rho u\|^{2}_{\dot{B}^{-\sigma}_{2,\infty}}\leq\|\rho u\|^{2}_{L^{\frac{2}{\sigma+1}}}\lesssim\|\rho\|^{2}_{L^{2}}\|u\|^{2}_{L^{\frac{2}{\sigma}}}\lesssim\|\rho\|^{2}_{L^{2}}\|u\|^{2\sigma}_{L^{2}}\|\nabla u\|^{2-2\sigma}_{L^{2}}\lesssim(1+t)^{-2\alpha-1+\sigma}, \tag{3.34}\] and \[\|u\cdot\nabla u\|^{2}_{\dot{B}^{-\sigma}_{2,\infty}}\leq\|u\cdot\nabla u\|^{2}_{L^{\frac{2}{\sigma+1}}}\lesssim\|\nabla u\|^{2}_{L^{2}}\|u\|^{2}_{L^{\frac{2}{\sigma}}}\lesssim\|u\|^{2\sigma}_{L^{2}}\|\nabla u\|^{4-2\sigma}_{L^{2}}\lesssim(1+t)^{-2\alpha-2+\sigma}, \tag{3.35}\] as well as \[\|\rho u\|^{2}_{\dot{B}^{-\sigma+1}_{2,\infty}}\leq\|\rho u\|^{2}_{L^{\frac{2}{\sigma}}}\lesssim\|(\rho,u)\|^{2}_{H^{s-1}}\|(\rho,u)\|^{2\sigma}_{L^{2}}\|\nabla(\rho,u)\|^{2-2\sigma}_{L^{2}}\lesssim(1+t)^{-2\alpha-2+\sigma}. \tag{3.36}\] Similarly, by virtue of Lemmas 2.5, 2.4 and the condition (3.30), we have \[\|u\cdot\nabla g\|^{2}_{\dot{B}^{-\sigma}_{2,\infty}(\mathcal{L}^{2})}\leq\|u\cdot\nabla g\|^{2}_{L^{\frac{2}{\sigma+1}}(\mathcal{L}^{2})}\lesssim\|\nabla g\|^{2}_{L^{2}(\mathcal{L}^{2})}\|u\|^{2\sigma}_{L^{2}}\|\nabla u\|^{2-2\sigma}_{L^{2}}\lesssim(1+t)^{-2\alpha-2+\sigma}, \tag{3.37}\] and \[\|g\|^{2}_{\dot{B}^{-\sigma}_{2,\infty}}\leq\|g\|^{2}_{L^{\frac{2}{\sigma+1}}}\lesssim\|\nabla(\rho,u,\tau)\|^{2}_{H^{s-1}}\|\rho\|^{2\sigma}_{L^{2}}\|\nabla\rho\|^{2-2\sigma}_{L^{2}}\lesssim(1+t)^{-2\alpha-4+\sigma}, \tag{3.38}\] as well as \[\|\nabla u\,qg\|^{2}_{\dot{B}^{-\sigma}_{2,\infty}(\mathcal{L}^{2})}\leq\|\nabla u\,qg\|^{2}_{L^{\frac{2}{\sigma+1}}(\mathcal{L}^{2})}\lesssim\|\nabla u\|^{2}_{L^{2}}\|qg\|^{2}_{L^{\frac{2}{\sigma}}(\mathcal{L}^{2})}+\|qg\|^{2}_{L^{2}(\mathcal{L}^{2})}\|\nabla u\|^{2}_{L^{\frac{2}{\sigma}}}\lesssim\|\nabla u\|^{2}_{L^{2}}+\|\nabla_{q}g\|^{2}_{L^{2}(\mathcal{L}^{2})}. \tag{3.39}\] Therefore, according to the estimates from (3.33) to (3.39), we conclude that \[\|g\|^{2}_{\dot{B}^{-\sigma}_{2,\infty}(\mathcal{L}^{2})}+\|\rho\|^{2}_{\dot{B}^{-\sigma}_{2,\infty}}+\|u\|^{2}_{\dot{B}^{-\sigma}_{2,\infty}}\lesssim 1+\int_{0}^{t}(1+s)^{-(\alpha+1-\frac{\sigma}{2})}ds\lesssim 1. \tag{3.40}\] This completes the proof of Lemma 3.5.

Thanks to Lemma 3.5, we can improve the time decay rate by the Littlewood-Paley decomposition theory and the Fourier splitting method.

**Lemma 3.6**.: _Let \(0<\beta,\sigma\leq 1\) and \(\frac{1}{2}\leq\alpha\). Assume that \((\rho_{0},u_{0},\tau_{0})\) satisfies the conditions in Theorem 1.2. For any \(t\in[0,\infty)\), if_ \[E_{0}(t)+(1+t)E_{1}(t)\lesssim(1+t)^{-\alpha}, \tag{3.41}\]
\tag{3.36}\] Similarly, by virtue of Lemmas 2.5, 2.4 and condition (3.30), we have \[\|u\cdot\nabla g\|^{2}_{B^{-\sigma}_{2,\infty}(\mathcal{L}^{2})}\leq\|u\cdot \nabla g\|^{2}_{L^{\frac{2}{\sigma+1}}(\mathcal{L}^{2})}\lesssim\|\nabla g\|^{ 2}_{L^{2}(\mathcal{L}^{2})}\|u\|^{2\sigma}_{L^{2}}\|\nabla u\|^{2-2\sigma}_{L ^{2}}\lesssim(1+t)^{-2\alpha-2+\sigma}, \tag{3.37}\] and \[\|g\|^{2}_{B^{-\sigma}_{2,\infty}}\leq\|g\|^{2}_{L^{\frac{2}{\sigma+1}}} \lesssim\|\nabla(\rho,u,\tau)\|^{2}_{H^{s-1}}\|\rho\|^{2\sigma}_{L^{2}}\| \nabla\rho\|^{2-2\sigma}_{L^{2}}\lesssim(1+t)^{-2\alpha-4+\sigma}, \tag{3.38}\] as well as \[\|\nabla uqg\|^{2}_{B^{-\sigma}_{2,\infty}(\mathcal{L}^{2})} \leq\|\nabla uqg\|^{2}_{L^{\frac{2}{\sigma+1}}(\mathcal{L}^{2})}\] \[\lesssim\|\nabla u\|^{2}_{L^{2}}\|qg\|^{2}_{L^{\frac{2}{\sigma}}( \mathcal{L}^{2})}+\|qg\|^{2}_{L^{2}(\mathcal{L}^{2})}\|\nabla u\|^{2}_{L^{ \frac{2}{\sigma}}}\] \[\lesssim\|\nabla u\|^{2}_{L^{2}}+\|\nabla_{q}g\|^{2}_{L^{2}}. \tag{3.39}\] Therefore, according to estimates from (3.33) to (3.39), we conclude that \[\|g\|^{2}_{B^{-\sigma}_{2,\infty}(\mathcal{L}^{2})} +\|\rho\|^{2}_{B^{-\sigma}_{2,\infty}}+\|u\|^{2}_{B^{-\sigma}_{2, \infty}}\] \[\lesssim 1+\int_{0}^{t}(1+t)^{-(\alpha+1-\frac{\delta}{2})}ds \lesssim 1. \tag{3.40}\] This completes the proof of Lemma 3.5. Thanks to Lemma 3.5, we can improve the time decay rate by Littlewood-Paley decomposition theory and Fourier splitting method. **Lemma 3.6**.: _Let \(0<\beta,\sigma\leq 1\) and \(\frac{1}{2}\leq\alpha\). Assume that \((\rho_{0},u_{0},\tau_{0})\) satisfies the condition in Theorem 1.2. For any \(t\in[0,\infty)\), if_ \[E_{0}(t)+(1+t)E_{1}(t)\lesssim(1+t)^{-\alpha}, \tag{3.41}\] _and_ \[(\rho,u,\tau)\in L^{\infty}(0,\infty;B_{2,\infty}^{-\sigma}), \tag{3.42}\] _then there exists a positive constant \(C\) such that_ \[E_{0}(t)+(1+t)E_{1}(t)\leq C(1+t)^{-\beta}, \tag{3.43}\] _where \(\beta<\frac{\sigma+1}{2}\) for \(\alpha=\frac{1}{2}\) and \(\beta=\frac{\sigma+1}{2}\) for \(\alpha>1\)._ Proof.: By virtue of conditions (3.41) and (3.42), we infer that \[(1+t)^{-1}B_{1}=(1+t)^{-1}\int_{0}^{t}\|(\rho,u)\|_{L^{2}}^{4}ds \lesssim(1+t)^{-1}\int_{0}^{t}(1+t)^{-2\alpha}ds\] \[\lesssim(1+t)^{-\beta}, \tag{3.44}\] and \[B_{2}=\int_{S(t)}\int_{0}^{t}|\hat{G}||\hat{u}|dsd\xi \lesssim\int_{0}^{t}\|G\|_{L^{1}}\sum_{j\leq\log_{2}[\frac{4}{3}C_ {2}^{\frac{1}{2}}(1+t)^{-\frac{1}{2}}]}\int_{S(t)}2^{\sigma j}2^{-\sigma j} \varphi^{2}(2^{-j}\xi)|\hat{u}|d\xi ds\] \[\lesssim(1+t)^{-\frac{1}{2}-\frac{\sigma}{2}}\int_{0}^{t}\|u\|_{ \dot{B}_{2,\infty}^{-\sigma}}\|(\rho,u)\|_{L^{2}}\|\nabla(\rho,u,\tau)\|_{H^{ 1}}ds\] \[\lesssim(1+t)^{-\frac{1}{2}-\frac{\sigma}{2}}\int_{0}^{t}(1+t)^{- \alpha-\frac{1}{2}}ds\] \[\lesssim(1+t)^{-\beta}, \tag{3.45}\] According to the proof of Proposition 3.3 and Lemma 3.2, we deduce that \[\frac{d}{dt}E_{0}+\frac{C_{2}}{(1+t)}\int|\hat{u}|^{2}+|\hat{ \rho}|^{2}d\xi +\|\Lambda^{s}\rho\|_{L^{2}}+\|\Lambda^{s+1}u\|_{L^{2}}^{2}+\|g \|_{H^{s}(\mathcal{L}^{2})}^{2}\] \[\lesssim(1+t)^{-\beta-1}. \tag{3.46}\] which implies \[E_{0}(t)\lesssim(1+t)^{-\beta}. \tag{3.47}\] By performing a routine procedure, one can arrive at \[E_{1}(t)\lesssim(1+t)^{-\beta-1}. \tag{3.48}\] We thus complete the proof of Lemma 3.6. Thus we can obtain the optimal decay rate in \(L^{2}\) by using the bootstrap argument as follows. **Proposition 3.7**.: _Assume that \((\rho_{0},u_{0},\tau_{0})\) satisfies the condition in Theorem 1.2, then_ \[E_{0}(t)+(1+t)E_{1}(t)\lesssim(1+t)^{-1}. 
Proof.: According to Proposition 3.4 and Lemma 3.5 with \(\alpha=\sigma=\frac{1}{2}\), we have \[(\rho,u)\in L^{\infty}(0,\infty;\dot{B}_{2,\infty}^{-\frac{1}{2}}),\quad g\in L^{\infty}(0,\infty;\dot{B}_{2,\infty}^{-\frac{1}{2}}(\mathcal{L}^{2})). \tag{3.50}\] Taking advantage of Lemma 3.6 with \(\alpha=\sigma=\frac{1}{2}\) and \(\beta=\frac{5}{8}\), we deduce that \[E_{0}(t)\lesssim(1+t)^{-\frac{5}{8}},\quad E_{1}(t)\lesssim(1+t)^{-\frac{13}{8}}. \tag{3.51}\] Taking \(\sigma=1\) and \(\alpha=\frac{5}{8}\) in Lemma 3.5, we infer that \[(\rho,u)\in L^{\infty}(0,\infty;\dot{B}_{2,\infty}^{-1}),\quad g\in L^{\infty}(0,\infty;\dot{B}_{2,\infty}^{-1}(\mathcal{L}^{2})). \tag{3.52}\] Using Lemma 3.6 again with \(\alpha=\frac{5}{8}\) and \(\sigma=\beta=1\), we finally obtain \[E_{0}(t)\lesssim(1+t)^{-1},\quad E_{1}(t)\lesssim(1+t)^{-2}. \tag{3.53}\] This completes the proof of Proposition 3.7.

## 4 The \(\dot{H}^{s}\) decay rate

In this section, we consider the optimal decay rates of \((\rho,u)\) in \(\dot{H}^{s}\) and of \(g\) in \(\dot{H}^{s}(\mathcal{L}^{2})\). To the best of our knowledge, the dissipation of \(\rho\) is of great significance for obtaining the decay rate of \((\rho,u)\). However, one cannot obtain the optimal decay rate of \((\rho,u)\) in \(\dot{H}^{s}\) in the same way as that of \(L^{2}\), since the equivalence \[\|\Lambda^{s}(\rho,u)\|_{L^{2}}^{2}+\eta\langle\Lambda^{s-1}u,\Lambda^{s}\rho\rangle\simeq\|\Lambda^{s}(\rho,u)\|_{L^{2}}^{2}\] does not hold anymore for any \(\eta>0\). In the usual way, we can only show that \(\|\Lambda^{s}(\rho,u)\|_{L^{2}}^{2}\) has the same decay rate as the quantity \(\|\Lambda^{s-1}(\rho,u)\|_{L^{2}}^{2}\). Motivated by [4], however, we observe that the dissipation of \(\rho\) in the high-frequency part alone fully enables us to obtain the optimal decay rate in \(\dot{H}^{s}\). More precisely, to overcome the difficulty above, \(\langle\Lambda^{s}\dot{\Delta}_{j}\rho,\Lambda^{s-1}\dot{\Delta}_{j}u\rangle\) is considered in the high-frequency region \(S^{c}(t)\), and it results in the dissipation \(\int_{S^{c}(t)}|\xi|^{2s}|\widehat{\rho}|^{2}d\xi\). To make full use of the benefit that the dissipation of \(\rho\) provides, a critical Fourier splitting estimate is established in the following lemma, which helps to obtain the optimal decay rate in \(\dot{H}^{s}\). Define \(S(t)=\{\xi\in\mathbb{R}^{d}\,|\,|\xi|^{2}\leq\frac{C_{2}}{1+t}\}\) and consider the frequency decomposition as follows: \[\left\{\begin{array}{ll}\rho=\mathcal{F}^{-1}(\chi_{S(t)}\hat{\rho})+\mathcal{F}^{-1}(\chi_{S^{c}(t)}\hat{\rho})=\rho^{low}+\rho^{high},\\ u=\mathcal{F}^{-1}(\chi_{S(t)}\hat{u})+\mathcal{F}^{-1}(\chi_{S^{c}(t)}\hat{u})=u^{low}+u^{high}.\end{array}\right. \tag{4.1}\]
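By Plancherel's theorem, this decomposition exchanges one derivative for a power of \(1+t\): \[\|\Lambda^{s}\rho^{low}\|_{L^{2}}^{2}\leq\frac{C_{2}}{1+t}\|\Lambda^{s-1}\rho\|_{L^{2}}^{2},\qquad\|\Lambda^{s-1}\rho^{high}\|_{L^{2}}^{2}\leq\frac{1+t}{C_{2}}\|\Lambda^{s}\rho^{high}\|_{L^{2}}^{2},\] which is exactly how the high-frequency dissipation of \(\rho\) and the time weight \((1+t)^{-1}\) from Remark 1.4 enter the \(\dot{H}^{s}\) estimates below.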
Then we have the following lemma.

**Lemma 4.1**.: _Assume that \((\rho_{0},u_{0},g_{0})\) satisfies the conditions in Theorem 1.2, then there exists a positive constant \(\delta\) such that_ \[\|\Lambda^{s-1}u\|_{L^{4}}\|\nabla u\|_{L^{4}}\|\Lambda^{s+1}u\|_{L^{2}}\,\ \|\Lambda^{s}u\|_{L^{4}}\|\nabla g\|_{L^{4}(\mathcal{L}^{2})}\|\Lambda^{s}g\|_{L^{2}(\mathcal{L}^{2})}\] \[\lesssim\varepsilon\left(\|\Lambda^{s+1}u\|_{L^{2}}^{2}+\|\Lambda^{s}g\|_{L^{2}(\mathcal{L}^{2})}^{2}\right)+(1+t)^{-1-\delta}\|\Lambda^{s}u^{low}\|_{L^{2}}^{2}, \tag{4.2}\] _and_ \[\|\nabla u\|_{L^{\infty}}\|\Lambda^{s}\rho\|_{L^{2}}^{2}\,\ \|\rho\|_{L^{\infty}}^{2}\|\Lambda^{s}\rho\|_{L^{2}}^{2}\,\ \|\nabla\rho\|_{L^{4}}\|\Lambda^{s}u\|_{L^{4}}\|\Lambda^{s}\rho\|_{L^{2}}\,\] \[\|\Lambda^{s-1}\rho\|_{L^{4}}\|\nabla^{2}u\|_{L^{4}}\|\Lambda^{s+1}u\|_{L^{2}}\,\ \|\Lambda^{s-1}\rho\|_{L^{4}}\|\nabla(\rho,\tau)\|_{L^{4}}\|\Lambda^{s+1}u\|_{L^{2}}\] \[\lesssim\varepsilon\left(\|\Lambda^{s+1}u\|_{L^{2}}^{2}+\|\Lambda^{s}g\|_{L^{2}(\mathcal{L}^{2})}^{2}\right)+(1+t)^{-1-\delta}\left(\|\Lambda^{s}\rho^{high}\|_{L^{2}}^{2}+\|\Lambda^{s}\rho^{low}\|_{L^{2}}^{2}\right). \tag{4.3}\]

Proof.: According to Lemma 2.3 and Proposition 3.7, we deduce that \[\|\rho\|_{L^{\infty}}^{2}\|\Lambda^{s}\rho\|_{L^{2}}^{2}\lesssim\|\rho\|_{L^{2}}^{2-\frac{2}{s+1}}\|\Lambda^{s}\rho\|_{L^{2}}^{2+\frac{2}{s+1}}\lesssim(1+t)^{-1-\frac{1}{s+1}}\left(\|\Lambda^{s}\rho^{high}\|_{L^{2}}^{2}+\|\Lambda^{s}\rho^{low}\|_{L^{2}}^{2}\right), \tag{4.4}\] and \[\|\nabla u\|_{L^{\infty}}\|\Lambda^{s}\rho\|_{L^{2}}^{2}\lesssim\|\nabla u\|_{L^{2}}^{1-\frac{1}{s}}\|\Lambda^{s+1}u\|_{L^{2}}^{\frac{1}{s}}\|\Lambda^{s}\rho\|_{L^{2}}^{2}\lesssim\varepsilon\|\Lambda^{s+1}u\|_{L^{2}}^{2}+\|\nabla u\|_{L^{2}}^{\frac{2(s-1)}{2s-1}}\|\Lambda^{s}\rho\|_{L^{2}}^{\frac{2}{2s-1}+2}\lesssim(1+t)^{-1-\frac{1}{2s-1}}\left(\|\Lambda^{s}\rho^{high}\|_{L^{2}}^{2}+\|\Lambda^{s}\rho^{low}\|_{L^{2}}^{2}\right)+\varepsilon\|\Lambda^{s+1}u\|_{L^{2}}^{2}. \tag{4.5}\] By virtue of Lemmas 2.3, 2.5 and Proposition 3.7, we infer that \[\|\Lambda^{s-1}\rho\|_{L^{4}}\|\nabla\tau\|_{L^{4}}\|\Lambda^{s+1}u\|_{L^{2}}\lesssim\|\nabla\rho\|_{L^{2}}^{\frac{1}{2(s-1)}}\|\Lambda^{s}\rho\|_{L^{2}}^{\frac{2s-3}{2(s-1)}}\|\nabla\tau\|_{L^{2}}^{\frac{2s-3}{2(s-1)}}\|\Lambda^{s}\tau\|_{L^{2}}^{\frac{1}{2(s-1)}}\|\Lambda^{s+1}u\|_{L^{2}}\] \[\lesssim\varepsilon\left(\|\Lambda^{s+1}u\|_{L^{2}}^{2}+\|\Lambda^{s}g\|_{L^{2}(\mathcal{L}^{2})}^{2}\right)+\|\nabla\rho\|_{L^{2}}^{\frac{2-s}{s-1}}\|\nabla g\|_{L^{2}(\mathcal{L}^{2})}^{2}\|\Lambda^{s}\rho\|_{L^{2}}^{2}\] \[\lesssim(1+t)^{-2-\frac{2}{2-s}}\left(\|\Lambda^{s}\rho^{high}\|_{L^{2}}^{2}+\|\Lambda^{s}\rho^{low}\|_{L^{2}}^{2}\right)+\varepsilon\left(\|\Lambda^{s+1}u\|_{L^{2}}^{2}+\|\Lambda^{s}\nabla_{q}g\|_{L^{2}(\mathcal{L}^{2})}^{2}\right), \tag{4.6}\] and \[\|\nabla\rho\|_{L^{4}}\|\Lambda^{s}u\|_{L^{4}}\|\Lambda^{s}\rho\|_{L^{2}}\lesssim\|\nabla\rho\|_{L^{2}}^{\frac{2s-3}{2(s-1)}}\|\Lambda^{s}\rho\|_{L^{2}}^{\frac{2s+3}{2(s+1)}+\frac{1}{s-1}}\|u\|_{L^{2}}^{\frac{1}{2(s+1)}}\|\Lambda^{s+1}u\|_{L^{2}}^{\frac{2s+3}{2(s+1)}}\] \[\lesssim\varepsilon\|\Lambda^{s+1}u\|_{L^{2}}^{2}+\|\nabla\rho\|_{L^{2}}^{\frac{2(s-3)(s+1)}{s-1}}\|u\|_{L^{2}}^{\frac{2}{2s-3}}\|\Lambda^{s}\rho\|_{L^{2}}^{\frac{4}{2(s-1)(s-1)}+2}\] \[\lesssim\varepsilon\|\Lambda^{s+1}u\|_{L^{2}}^{2}+C(1+t)^{-1-\frac{2s}{2s+3}}\left(\|\Lambda^{s}\rho^{high}\|_{L^{2}}^{2}+\|\Lambda^{s}\rho^{low}\|_{L^{2}}^{2}\right).
\tag{4.7}\] According to Lemma 2.3 and Proposition 3.7, one can arrive at \[\|\Lambda^{s-1}\rho\|_{L^{4}}\|\nabla^{2}u\|_{L^{4}}\|\Lambda^{s+1}u\|_{L^{2}} \lesssim\|\rho\|_{L^{2}}^{\frac{1}{2s}}\|\Lambda^{s}\rho\|_{L^{2}}^{\frac{1}{2}+\frac{2s-3}{2s}}\|\nabla u\|_{L^{2}}^{\frac{2s-3}{2}}\|\Lambda^{s+1}u\|_{L^{2}}^{\frac{2s+3}{2}}\] \[\lesssim\varepsilon\|\Lambda^{s+1}u\|_{L^{2}}^{2}+\|\rho\|_{L^{2}}^{\frac{2}{2s}}\|\nabla u\|_{L^{2}}^{2}\|\Lambda^{s}\rho\|_{L^{2}}^{\frac{4}{2}+2}\] \[\lesssim\varepsilon\|\Lambda^{s+1}u\|_{L^{2}}^{2}+C(1+t)^{-1-\frac{2s+2}{2s-3}}\left(\|\Lambda^{s}\rho^{high}\|_{L^{2}}^{2}+\|\Lambda^{s}\rho^{low}\|_{L^{2}}^{2}\right), \tag{4.8}\] and \[\|\Lambda^{s-1}\rho\|_{L^{4}}\|\nabla\rho\|_{L^{4}}\|\Lambda^{s+1}u\|_{L^{2}} \lesssim\|\rho\|_{L^{2}}^{1-\frac{1}{s}}\|\Lambda^{s}\rho\|_{L^{2}}^{1+\frac{1}{2}}\|\Lambda^{s+1}u\|_{L^{2}}\] \[\lesssim\varepsilon\|\Lambda^{s+1}u\|_{L^{2}}^{2}+\|\rho\|_{L^{2}}^{2-\frac{2}{s}}\|\Lambda^{s}\rho\|_{L^{2}}^{\frac{2}{2}+2}\] \[\lesssim\varepsilon\|\Lambda^{s+1}u\|_{L^{2}}^{2}+C(1+t)^{-1-\frac{1}{s}}\left(\|\Lambda^{s}\rho^{high}\|_{L^{2}}^{2}+\|\Lambda^{s}\rho^{low}\|_{L^{2}}^{2}\right). \tag{4.9}\] Analogously, \[\|\Lambda^{s-1}u\|_{L^{4}}\|\nabla u\|_{L^{4}}\|\Lambda^{s+1}u\|_{L^{2}} \lesssim\|\nabla u\|_{L^{2}}\|\Lambda^{s}u\|_{L^{2}}\|\Lambda^{s+1}u\|_{L^{2}}\] \[\lesssim\varepsilon\|\Lambda^{s+1}u\|_{L^{2}}^{2}+C(1+t)^{-2}\|\Lambda^{s}u^{low}\|_{L^{2}}^{2}, \tag{4.10}\] and \[\|\Lambda^{s}u\|_{L^{4}}\|\nabla g\|_{L^{4}}\|\Lambda^{s}g\|_{L^{2}} \lesssim\|\nabla g\|_{L^{2}}^{\frac{2s-3}{2(s-1)}}\|\Lambda^{s}g\|_{L^{2}}^{\frac{1}{(s-1)s}+1}\|\Lambda^{s}u\|_{L^{2}}^{\frac{1}{2(s-1)s}+1}\|\Lambda^{s}u\|_{L^{2}}^{\frac{1}{2}}\|\Lambda^{s+1}u\|_{L^{2}}^{\frac{1}{2}} \tag{4.11}\] \[\lesssim\varepsilon\|\Lambda^{s+1}u\|_{L^{2}}^{2}+C\|\nabla g\|_{L^{2}}^{2}\|\Lambda^{s}g\|_{L^{2}}^{2}+\|\rho\|_{L^{\infty}}\|\Lambda^{s+1}u\|_{L^{2}}^{2}\left\|\Lambda^{s}\rho\right\|_{L^{2}}^{2}\] \[\lesssim\varepsilon\|\Lambda^{s+1}u\|_{L^{2}}^{2}+(1+t)^{-1-\delta}\left(\|\Lambda^{s}\rho^{high}\|_{L^{2}}^{2}+\|\Lambda^{s}\rho^{low}\|_{L^{2}}^{2}\right),\] which implies \[\langle\Lambda^{s}(u\cdot\nabla\rho),\Lambda^{s}\rho\rangle =\langle u\cdot\nabla\Lambda^{s}\rho,\Lambda^{s}\rho\rangle+\langle[\Lambda^{s},u\cdot\nabla]\rho,\Lambda^{s}\rho\rangle\] \[\lesssim\varepsilon\|\Lambda^{s+1}u\|_{L^{2}}^{2}+(1+t)^{-1-\delta}\left(\|\Lambda^{s}\rho^{high}\|_{L^{2}}^{2}+\|\Lambda^{s}\rho^{low}\|_{L^{2}}^{2}\right). \tag{4.16}\] Combining estimates (4.13), (4.14) and (4.16), we conclude that \[\frac{d}{dt}\|\Lambda^{s}\rho\|_{L^{2}}^{2}-\langle\Lambda^{s}u,\Lambda^{s+1}\rho\rangle\lesssim\varepsilon\|\Lambda^{s+1}u\|_{L^{2}}^{2}+(1+t)^{-1-\delta}\left(\|\Lambda^{s}\rho^{high}\|_{L^{2}}^{2}+\|\Lambda^{s}\rho^{low}\|_{L^{2}}^{2}\right). \tag{4.17}\] Similarly, applying \(\Lambda^{s}\) to (1.5)\({}_{2}\) and taking the \(L^{2}\) inner product with \(\Lambda^{s}u\), we obtain \[\frac{d}{dt}\|\Lambda^{s}u\|_{L^{2}}^{2} +\gamma\langle\Lambda^{s}u,\Lambda^{s+1}\rho\rangle+\|\Lambda^{s+1}u\|_{L^{2}}^{2}-\langle\Lambda^{s}\text{div }\tau,\Lambda^{s}u\rangle=\langle\Lambda^{s}G,\Lambda^{s}u\rangle, \tag{4.18}\] with \[\langle\Lambda^{s}G,\Lambda^{s}u\rangle =\langle\Lambda^{s}\left(\frac{\rho}{1+\rho}\text{div }\Sigma u+[h(\rho)-\gamma]\nabla\rho\right),\Lambda^{s}u\rangle\] \[\quad+\langle\Lambda^{s}\left(\frac{\rho}{1+\rho}\text{div }\tau+u\cdot\nabla u\right),\Lambda^{s}u\rangle.
\tag{4.19}\] According to Lemma 4.2 and Proposition 3.7, we infer that \[\langle\Lambda^{s}\left(\frac{\rho}{1+\rho}\text{div }\Sigma u\right),\Lambda^{s}u\rangle \leq\|\Lambda^{s-1}\left(\frac{\rho}{1+\rho}\text{div }\Sigma u\right)\|_{L^{2}}\|\Lambda^{s+1}u\|_{L^{2}}\] \[\lesssim\left(\|\Lambda^{s-1}\rho\|_{L^{4}}\|\nabla^{2}u\|_{L^{4}}+\|\rho\|_{L^{\infty}}\|\Lambda^{s+1}u\|_{L^{2}}\right)\|\Lambda^{s+1}u\|_{L^{2}}\] \[\lesssim\varepsilon\|\Lambda^{s+1}u\|_{L^{2}}^{2}+(1+t)^{-1-\delta}\left(\|\Lambda^{s}\rho^{high}\|_{L^{2}}^{2}+\|\Lambda^{s}\rho^{low}\|_{L^{2}}^{2}\right), \tag{4.20}\] and \[\langle\Lambda^{s}([h(\rho)-\gamma]\nabla\rho),\Lambda^{s}u\rangle \leq\|\Lambda^{s-1}([h(\rho)-\gamma]\nabla\rho)\|_{L^{2}}\|\Lambda^{s+1}u\|_{L^{2}}\] \[\lesssim\left(\|\Lambda^{s-1}\rho\|_{L^{4}}\|\nabla\rho\|_{L^{4}}+\|\rho\|_{L^{\infty}}\|\Lambda^{s}\rho\|_{L^{2}}\right)\|\Lambda^{s+1}u\|_{L^{2}}\] \[\lesssim\varepsilon\|\Lambda^{s+1}u\|_{L^{2}}^{2}+(1+t)^{-1-\delta}\left(\|\Lambda^{s}\rho^{high}\|_{L^{2}}^{2}+\|\Lambda^{s}\rho^{low}\|_{L^{2}}^{2}\right). \tag{4.21}\] Analogously, we have \[\langle\Lambda^{s}\left(\frac{\rho}{1+\rho}\text{div }\tau\right),\Lambda^{s}u\rangle \leq\|\Lambda^{s-1}\left(\frac{\rho}{1+\rho}\text{div }\tau\right)\|_{L^{2}}\|\Lambda^{s+1}u\|_{L^{2}}\] \[\lesssim\left(\|\Lambda^{s-1}\rho\|_{L^{4}}\|\nabla\tau\|_{L^{4}}+\|\rho\|_{L^{\infty}}\|\Lambda^{s}\tau\|_{L^{2}}\right)\|\Lambda^{s+1}u\|_{L^{2}}\] \[\lesssim(1+t)^{-1-\delta}\left(\|\Lambda^{s}\rho^{high}\|_{L^{2}}^{2}+\|\Lambda^{s}\rho^{low}\|_{L^{2}}^{2}\right)\] \[\quad+\varepsilon\left(\|\Lambda^{s+1}u\|_{L^{2}}^{2}+\|\Lambda^{s}g\|_{L^{2}(\mathcal{L}^{2})}^{2}\right), \tag{4.22}\] and \[\langle\Lambda^{s}(u\cdot\nabla u),\Lambda^{s}u\rangle \leq\|\Lambda^{s-1}(u\cdot\nabla u)\|_{L^{2}}\|\Lambda^{s+1}u\|_{L^{2}}\] \[\lesssim\left(\|\Lambda^{s-1}u\|_{L^{4}}\|\nabla u\|_{L^{4}}+\|u\|_{L^{4}}\|\Lambda^{s}u\|_{L^{4}}\right)\|\Lambda^{s+1}u\|_{L^{2}}\] \[\lesssim\varepsilon\|\Lambda^{s+1}u\|_{L^{2}}^{2}+(1+t)^{-1-\delta}\|\Lambda^{s}u^{low}\|_{L^{2}}^{2}. \tag{4.23}\] Hence, together with the estimates from (4.20) to (4.23), one can arrive at \[\frac{1}{2}\frac{d}{dt}\|\Lambda^{s}u\|_{L^{2}}^{2}+\mu\|\nabla\Lambda^{s}u\|_{L^{2}}^{2}+(\mu+\mu^{\prime})\|\text{div }\Lambda^{s}u\|_{L^{2}}^{2}\] \[\quad+\gamma\langle\Lambda^{s}\nabla\rho,\Lambda^{s}u\rangle-\langle\Lambda^{s}\text{div }\tau,\Lambda^{s}u\rangle\] \[\lesssim\varepsilon\left(\|\Lambda^{s+1}u\|_{L^{2}}^{2}+\|\Lambda^{s}g\|_{L^{2}(\mathcal{L}^{2})}^{2}\right)+(1+t)^{-1-\delta}\|\Lambda^{s}(\rho,u)^{low}\|_{L^{2}}^{2}. \tag{4.24}\] Applying \(\Lambda^{s}\) to (1.5)\({}_{3}\) and taking the \(L^{2}(\mathcal{L}^{2})\) inner product with \(\Lambda^{s}g\), we obtain \[\frac{d}{dt}\|\Lambda^{s}g\|_{L^{2}(\mathcal{L}^{2})}^{2}+\|\nabla_{q}\Lambda^{s}g\|_{L^{2}(\mathcal{L}^{2})}^{2}+\langle\Lambda^{s}u,\Lambda^{s}\mathrm{div}\ \tau\rangle=\langle\Lambda^{s}H,\Lambda^{s}g\rangle, \tag{4.25}\] with \[\langle\Lambda^{s}H,\Lambda^{s}g\rangle=-\langle\Lambda^{s}\left(u\cdot\nabla g\right),\Lambda^{s}g\rangle+\langle\Lambda^{s}\left(\nabla uqg\right),\nabla_{q}\Lambda^{s}g\rangle.
\tag{4.26}\] According to Lemmas 2.5, 4.2 and Proposition 3.7, we infer that \[\langle\Lambda^{s}(u\cdot\nabla g),\Lambda^{s}g\rangle =\langle u\cdot\nabla\Lambda^{s}g,\Lambda^{s}g\rangle+\langle[\Lambda^{s},u\cdot\nabla]g,\Lambda^{s}g\rangle\] \[\lesssim\|\Lambda^{s}u\|_{L^{4}}\|\nabla g\|_{L^{4}(\mathcal{L}^{2})}\|\Lambda^{s}g\|_{L^{2}(\mathcal{L}^{2})}+\|\nabla u\|_{L^{\infty}}\|\Lambda^{s}g\|_{L^{2}(\mathcal{L}^{2})}^{2}\] \[\lesssim\varepsilon\left(\|\Lambda^{s+1}u\|_{L^{2}}^{2}+\|\nabla_{q}\Lambda^{s}g\|_{L^{2}(\mathcal{L}^{2})}^{2}\right)+(1+t)^{-1-\delta}\|\Lambda^{s}u^{low}\|_{L^{2}}^{2}. \tag{4.27}\] Similarly, we obtain from Theorem 1.1 that \[\langle\Lambda^{s}\left(\nabla uqg\right),\nabla_{q}\Lambda^{s}g\rangle \lesssim\|\langle q\rangle g\|_{H^{s-1}(\mathcal{L}^{2})}\|\Lambda^{s+1}u\|_{L^{2}}\|\nabla_{q}\Lambda^{s}g\|_{L^{2}(\mathcal{L}^{2})}\] \[\quad+\|\nabla u\|_{L^{\infty}}\|q\Lambda^{s}g\|_{L^{2}(\mathcal{L}^{2})}\|\nabla_{q}\Lambda^{s}g\|_{L^{2}(\mathcal{L}^{2})}\] \[\lesssim\varepsilon\left(\|\Lambda^{s+1}u\|_{L^{2}}^{2}+\|\nabla_{q}\Lambda^{s}g\|_{L^{2}(\mathcal{L}^{2})}^{2}\right). \tag{4.28}\] Therefore, we deduce from (4.25) to (4.28) that \[\frac{1}{2}\frac{d}{dt}\|\Lambda^{s}g\|_{L^{2}(\mathcal{L}^{2})}^{2}+\|\nabla_{q}\Lambda^{s}g\|_{L^{2}(\mathcal{L}^{2})}^{2}+\int_{\mathbb{R}^{d}}\nabla\Lambda^{s}u:\Lambda^{s}\tau dx\] \[\quad\lesssim\varepsilon(\|\Lambda^{s+1}u\|_{L^{2}}^{2}+\|\nabla_{q}\Lambda^{s}g\|_{L^{2}(\mathcal{L}^{2})}^{2})+(1+t)^{-1-\delta}\|\Lambda^{s}u^{low}\|_{L^{2}}^{2}. \tag{4.29}\] Together with estimates (4.17), (4.24) and (4.29), we conclude that \[\frac{d}{dt}\left(\gamma\|\Lambda^{s}\rho\|_{L^{2}}^{2}+\|\Lambda^{s}u\|_{L^{2}}^{2}+\|\Lambda^{s}g\|_{L^{2}(\mathcal{L}^{2})}^{2}\right)\] \[\quad+\mu\|\nabla\Lambda^{s}u\|_{L^{2}}^{2}+(\mu+\mu^{\prime})\|\mathrm{div}\ \Lambda^{s}u\|_{L^{2}}^{2}+\|\nabla_{q}\Lambda^{s}g\|_{L^{2}(\mathcal{L}^{2})}^{2}\] \[\lesssim(1+t)^{-1-\delta}\left(\|\Lambda^{s}\rho^{high}\|_{L^{2}}^{2}+\|\Lambda^{s}(\rho,u)^{low}\|_{L^{2}}^{2}\right). \tag{4.30}\] **Dissipation of \(\rho\) in high frequency:** Applying \(\Lambda^{s}\dot{\Delta}_{j}\) to (1.5)\({}_{1}\) and taking the \(L^{2}\) inner product with \(\Lambda^{s-1}\dot{\Delta}_{j}u\), we have \[\partial_{t}\langle\Lambda^{s}\dot{\Delta}_{j}\rho,\Lambda^{s-1}\dot{\Delta}_{j}u\rangle-\|\Lambda^{s}\dot{\Delta}_{j}u\|_{L^{2}}^{2}=\langle\Lambda^{s}\dot{\Delta}_{j}F,\Lambda^{s-1}\dot{\Delta}_{j}u\rangle. \tag{4.31}\] We infer from Lemma 4.2 and Proposition 3.7 that \[\langle\Lambda^{s}\dot{\Delta}_{j}F,\Lambda^{s-1}\dot{\Delta}_{j}u\rangle =-\langle\Lambda^{s}\dot{\Delta}_{j}(\rho u),\Lambda^{s}\dot{\Delta}_{j}u\rangle\] \[\leq\|\Lambda^{s-1}\dot{\Delta}_{j}(\rho u)\|_{L^{2}}\|\Lambda^{s+1}\dot{\Delta}_{j}u\|_{L^{2}}\] \[\lesssim d_{j}\|\Lambda^{s+1}u\|_{L^{2}}^{2}+d_{j}\left(\|\Lambda^{s-1}\rho\|_{L^{2}}^{2}\|u\|_{L^{\infty}}^{2}+\|\Lambda^{s-1}u\|_{L^{2}}^{2}\|\rho\|_{L^{\infty}}^{2}\right)\] \[\lesssim d_{j}\|\Lambda^{s+1}u\|_{L^{2}}^{2}+d_{j}\|\rho\|_{L^{2}}^{\frac{2}{2}}\|u\|_{L^{2}}^{2-\frac{2}{2}}\|\Lambda^{s}\rho\|_{L^{\infty}}^{2-\frac{2}{2}}\|\Lambda^{s}u\|_{L^{2}}^{\frac{2}{2}}\] \[\quad+d_{j}\|\rho\|_{L^{2}}^{-\frac{2}{2}}\|u\|_{L^{2}}^{2}\|\Lambda^{s}\rho\|_{L^{\infty}}^{2}\|\Lambda^{s}u\|_{L^{2}}^{2-\frac{2}{2}}\] \[\lesssim d_{j}\|\Lambda^{s+1}u\|_{L^{2}}^{2}+d_{j}(1+t)^{-1}\left(\|\Lambda^{s}\rho^{high}\|_{L^{2}}^{2}+\|\Lambda^{s}(\rho,u)^{low}\|_{L^{2}}^{2}\right) \tag{4.32}\] for some \(\{d_{j}\}_{j\in\mathbb{Z}}\in\ell^{1}\).
Therefore \[\langle\partial_{t}\Lambda^{s}\dot{\Delta}_{j}\rho,\Lambda^{s-1}\dot{\Delta}_{j}u\rangle-\|\Lambda^{s}\dot{\Delta}_{j}u\|_{L^{2}}^{2}\lesssim d_{j}\|\Lambda^{s+1}u\|_{L^{2}}^{2}+d_{j}(1+t)^{-1}\left(\|\Lambda^{s}\rho^{high}\|_{L^{2}}^{2}+\|\Lambda^{s}(\rho,u)^{low}\|_{L^{2}}^{2}\right). \tag{4.33}\] Similarly, applying \(\Lambda^{s-1}\dot{\Delta}_{j}\) to (1.5)\({}_{2}\) and taking the \(L^{2}\) inner product with \(\Lambda^{s}\dot{\Delta}_{j}\rho\) leads to \[\langle\Lambda^{s}\dot{\Delta}_{j}\rho,\partial_{t}\Lambda^{s-1}\dot{\Delta}_{j}u\rangle+\gamma\|\Lambda^{s}\dot{\Delta}_{j}\rho\|_{L^{2}}^{2} \lesssim d_{j}\left(\|\Lambda^{s+1}u\|_{L^{2}}^{2}+\|\Lambda^{s}g\|_{L^{2}(\mathcal{L}^{2})}^{2}\right)\] \[\qquad+d_{j}(1+t)^{-\frac{1}{2}}\left(\|\Lambda^{s}\rho^{high}\|_{L^{2}}^{2}+\|\Lambda^{s}(\rho,u)^{low}\|_{L^{2}}^{2}\right). \tag{4.34}\] Adding up (4.33) and (4.34), we conclude that \[\frac{d}{dt}\langle\Lambda^{s}\dot{\Delta}_{j}\rho,\Lambda^{s-1}\dot{\Delta}_{j}u\rangle+\gamma\|\Lambda^{s}\dot{\Delta}_{j}\rho\|_{L^{2}}^{2}-\|\Lambda^{s}\dot{\Delta}_{j}u\|_{L^{2}}^{2}\] \[\lesssim d_{j}\left(\|\Lambda^{s+1}u\|_{L^{2}}^{2}+\|\Lambda^{s}g\|_{L^{2}(\mathcal{L}^{2})}^{2}\right)\] \[\qquad+d_{j}(1+t)^{-\frac{1}{2}}\left(\|\Lambda^{s}\rho^{high}\|_{L^{2}}^{2}+\|\Lambda^{s}(\rho,u)^{low}\|_{L^{2}}^{2}\right), \tag{4.35}\] which implies \[\frac{d}{dt}\left(\frac{C_{2}\eta}{(1+t)\ln^{2}(e+t)}\langle\Lambda^{s}\dot{\Delta}_{j}\rho,\Lambda^{s-1}\dot{\Delta}_{j}u\rangle\right)+\frac{C_{2}\gamma\eta}{(1+t)\ln^{2}(e+t)}\|\Lambda^{s}\dot{\Delta}_{j}\rho\|_{L^{2}}^{2}\] \[\qquad\lesssim d_{j}\eta\left(\|\Lambda^{s+1}u\|_{L^{2}}^{2}+\|\Lambda^{s}\tau\|_{L^{2}}^{2}\right)+d_{j}\eta(1+t)^{-\frac{3}{2}}\left(\|\Lambda^{s}\rho^{high}\|_{L^{2}}^{2}+\|\Lambda^{s}(\rho,u)^{low}\|_{L^{2}}^{2}\right)\] \[\qquad+\frac{\eta}{(1+t)^{2}\ln^{2}(e+t)}\langle\Lambda^{s}\dot{\Delta}_{j}\rho,\Lambda^{s-1}\dot{\Delta}_{j}u\rangle+\frac{\eta}{(1+t)\ln^{2}(e+t)}\|\Lambda^{s}\dot{\Delta}_{j}u\|_{L^{2}}^{2}. \tag{4.36}\] Summing over \(j\in\sigma_{R}\) leads to \[\frac{d}{dt}\left(\frac{C_{2}\eta}{(1+t)\ln^{2}(e+t)}\underset{j\in\sigma_{R}}{\Sigma}\langle\Lambda^{s}\dot{\Delta}_{j}\rho,\Lambda^{s-1}\dot{\Delta}_{j}u\rangle\right)+\frac{C_{2}\gamma\eta}{(1+t)\ln^{2}(e+t)}\int_{S^{c}(R)}|\xi|^{2s}|\widehat{\rho}|^{2}d\xi\] \[\qquad\lesssim\eta\left(\|\Lambda^{s+1}u\|_{L^{2}}^{2}+\|\Lambda^{s}\tau\|_{L^{2}}^{2}\right)+\eta(1+t)^{-\frac{3}{2}}\left(\|\Lambda^{s}\rho^{high}\|_{L^{2}}^{2}+\|\Lambda^{s}(\rho,u)^{low}\|_{L^{2}}^{2}\right)\] \[\qquad+\frac{\eta}{(1+t)^{2}\ln^{2}(e+t)}\underset{j\in\sigma_{R}}{\Sigma}\langle\Lambda^{s}\dot{\Delta}_{j}\rho,\Lambda^{s-1}\dot{\Delta}_{j}u\rangle+\frac{\eta}{(1+t)\ln^{2}(e+t)}\underset{j\in\sigma_{R}}{\Sigma}\|\Lambda^{s}\dot{\Delta}_{j}u\|_{L^{2}}^{2}. \tag{4.37}\] Combining estimates (4.30) and (4.37), we thus complete the proof of Lemma 4.2.
**The proof of Theorem 1.2:** According to Schonbek's strategy and Lemma 4.2, one can arrive at \[\frac{d}{dt^{\prime}}\left(\|\Lambda^{s}(\sqrt{\gamma}\rho,u)\|_{L^{2}}^{2}+\|\Lambda^{s}g\|_{L^{2}(\mathcal{L}^{2})}^{2}+\frac{C_{2}\eta}{(1+t^{\prime})\ln^{2}(e+t^{\prime})}\underset{j\in\sigma_{R}}{\Sigma}\langle\Lambda^{s}\dot{\Delta}_{j}\rho,\Lambda^{s-1}\dot{\Delta}_{j}u\rangle\right)\] \[\qquad+\frac{C_{2}}{1+t^{\prime}}\int|\xi|^{2s}|\widehat{u}|^{2}d\xi+\frac{C_{2}\eta\gamma}{(1+t^{\prime})\ln^{2}(e+t^{\prime})}\int|\xi|^{2s}|\widehat{\rho}|^{2}d\xi+\|\Lambda^{s}g\|_{L^{2}(\mathcal{L}^{2})}^{2}\] \[\qquad\lesssim\frac{C_{2}}{1+t^{\prime}}\int_{S(t^{\prime})}|\xi|^{2s}|\widehat{u}|^{2}d\xi+\frac{\eta}{(1+t^{\prime})\ln^{2}(e+t^{\prime})}\int_{S(R)}|\xi|^{2s}\left(|\widehat{\rho}|^{2}+|\widehat{u}|^{2}\right)d\xi\] \[\qquad+(1+t^{\prime})^{-1-\delta}\int_{S^{c}(t^{\prime})}|\xi|^{2s}|\widehat{\rho}|^{2}d\xi+\frac{\eta}{(1+t^{\prime})^{2}\ln^{2}(e+t^{\prime})}\underset{j\in\sigma_{R}}{\Sigma}\langle\Lambda^{s}\dot{\Delta}_{j}\rho,\Lambda^{s-1}\dot{\Delta}_{j}u\rangle. \tag{4.38}\] Fix a sufficiently large positive constant \(T_{2}\). By virtue of Proposition 3.7, we deduce that \[\frac{d}{dt^{\prime}}\left(\|\Lambda^{s}(\sqrt{\gamma}\rho,u)\|_{L^{2}}^{2}+\|\Lambda^{s}g\|_{L^{2}(\mathcal{L}^{2})}^{2}+\frac{C_{2}\eta}{(1+t^{\prime})\ln^{2}(e+t^{\prime})}\underset{j\in\sigma_{R}}{\Sigma}\langle\Lambda^{s}\dot{\Delta}_{j}\rho,\Lambda^{s-1}\dot{\Delta}_{j}u\rangle\right) \tag{4.39}\] \[\leq C_{0}+C_{d}(1+t)^{2}+\frac{C\eta(1+t)^{s+2}}{(1+t)^{s}}+\int_{0}^{t}\frac{\eta(1+R)}{\ln^{2}(e+t^{\prime})}(1+t^{\prime})^{s+1}\|\Lambda^{s}u\|_{L^{2}}^{2}dt^{\prime},\] where \(C_{0}=C_{\gamma}\left(\|(\rho_{0},u_{0})\|_{H^{s}}^{2}+\|g_{0}\|_{H^{s}(\mathcal{L}^{2})}^{2}+\|\langle q\rangle g_{0}\|_{H^{s-1}(\mathcal{L}^{2})}^{2}\right)\). Since \[\frac{C_{2}}{1+t}\|\Lambda^{s-1}u^{high}\|_{L^{2}}^{2}\leq\|\Lambda^{s}u^{high}\|_{L^{2}}^{2},\] it follows that \[\|\Lambda^{s}(\sqrt{\gamma}\rho,u)\|_{L^{2}}^{2}+\|\Lambda^{s}g\|_{L^{2}(\mathcal{L}^{2})}^{2} +\frac{C_{2}\eta}{(1+t)\ln^{2}(e+t)}\underset{j\in\sigma_{t}}{\Sigma}\langle\Lambda^{s}\dot{\Delta}_{j}\rho,\Lambda^{s-1}\dot{\Delta}_{j}u\rangle\] \[\geq\frac{1}{2}\left(\|\Lambda^{s}(\sqrt{\gamma}\rho,u)\|_{L^{2}}^{2}+\|\Lambda^{s}g\|_{L^{2}(\mathcal{L}^{2})}^{2}\right) \tag{4.41}\] for some \(\eta\) small enough. Therefore, taking \(R=t\), we obtain \[(1+t)^{s+3} \left(\|\Lambda^{s}(\sqrt{\gamma}\rho,u)\|_{L^{2}}^{2}+\|\Lambda^{s}g\|_{L^{2}(\mathcal{L}^{2})}^{2}\right)\lesssim C_{0}+(1+t)^{2}\] \[+(1+t)\int_{0}^{t}\frac{C\eta}{\ln^{2}(e+t^{\prime})}(1+t^{\prime})^{s+1}\|\Lambda^{s}u\|_{L^{2}}^{2}dt^{\prime}. \tag{4.42}\] Denoting \(\mathrm{M}(t)=\sup\limits_{t^{\prime}\in[0,t]}(1+t^{\prime})^{s+1}\left(\|\Lambda^{s}(\sqrt{\gamma}\rho,u)\|_{L^{2}}^{2}+\|\Lambda^{s}g\|_{L^{2}(\mathcal{L}^{2})}^{2}\right)\), we deduce that \[\mathrm{M}(t)\lesssim C_{0}+1+\int_{0}^{t}\frac{\mathrm{M}(t^{\prime})}{(1+t^{\prime})\ln^{2}(e+t^{\prime})}dt^{\prime}, \tag{4.43}\] which implies \[\|\Lambda^{s}(\rho,u)\|_{L^{2}}^{2}+\|\Lambda^{s}g\|_{L^{2}(\mathcal{L}^{2})}^{2}\lesssim(1+t)^{-s-1}.
\tag{4.44}\] Furthermore, taking the \(L^{2}(\mathcal{L}^{2})\) inner product of (1.5)\({}_{3}\) with \(g\), we have \[\frac{d}{dt}\|g\|_{L^{2}(\mathcal{L}^{2})}^{2}+\|g\|_{L^{2}(\mathcal{L}^{2})}^{2}\lesssim\|\nabla u\|_{L^{2}}^{2}, \tag{4.45}\] and then, by Duhamel's principle, we infer that \[\|g\|_{L^{2}(\mathcal{L}^{2})}^{2} \lesssim e^{-t}\|g_{0}\|_{L^{2}(\mathcal{L}^{2})}^{2}+\int_{0}^{t}e^{-(t-t^{\prime})}\|\nabla u\|_{L^{2}}^{2}dt^{\prime}\] \[\lesssim e^{-t}\|g_{0}\|_{L^{2}(\mathcal{L}^{2})}^{2}+\int_{0}^{t}e^{-(t-t^{\prime})}(1+t^{\prime})^{-2}dt^{\prime}\] \[\lesssim(1+t)^{-2}. \tag{4.46}\] Applying \(\Lambda^{\sigma}\) to (1.5)\({}_{3}\) and taking the \(L^{2}(\mathcal{L}^{2})\) inner product with \(\Lambda^{\sigma}g\) leads to \[\frac{1}{2}\frac{d}{dt}\|\Lambda^{\sigma}g\|_{L^{2}(\mathcal{L}^{2})}^{2}+\|\nabla_{q}\Lambda^{\sigma}g\|_{L^{2}(\mathcal{L}^{2})}^{2}+\int_{\mathbb{R}^{d}}\nabla\Lambda^{\sigma}u:\Lambda^{\sigma}\tau dx=-\langle\Lambda^{\sigma}(u\cdot\nabla g),\Lambda^{\sigma}g\rangle_{M}\] \[-\langle\frac{1}{\psi_{\infty}}\nabla_{q}\cdot(\Lambda^{\sigma}\nabla ugg\psi_{\infty}),\Lambda^{\sigma}g\rangle_{M}-\langle\frac{1}{\psi_{\infty}}\nabla_{q}\cdot(q\psi_{\infty}[\Lambda^{\sigma},g]\nabla u),\Lambda^{\sigma}g\rangle_{M}. \tag{4.47}\] According to Lemmas 2.4, 2.5, Proposition 3.7 and (4.44), we deduce that \[\|\Lambda^{\sigma}g\|_{L^{2}(\mathcal{L}^{2})}^{2} \lesssim e^{-t}\|g_{0}\|_{\dot{H}^{\sigma}(\mathcal{L}^{2})}^{2}+\int_{0}^{t}e^{-(t-t^{\prime})}\|\Lambda^{\sigma}u\|_{L^{2}}^{2}\|\nabla g\|_{H^{s-1}(\mathcal{L}^{2})}^{2}dt^{\prime}\] \[+\int_{0}^{t}e^{-(t-t^{\prime})}(\|\langle q\rangle g\|_{H^{s-1}(\mathcal{L}^{2})}^{2}+1)\|\Lambda^{\sigma+1}u\|_{L^{2}}^{2}dt^{\prime}\] \[\lesssim e^{-t}\|g_{0}\|_{\dot{H}^{\sigma}(\mathcal{L}^{2})}^{2}+\int_{0}^{t}e^{-(t-t^{\prime})}((1+t^{\prime})^{-\sigma-2}+(1+t^{\prime})^{-\sigma-3})dt^{\prime}\] \[\lesssim e^{-t}\|g_{0}\|_{\dot{H}^{\sigma}(\mathcal{L}^{2})}^{2}+(1+t)^{-\sigma-2}, \tag{4.48}\] which implies \[\|g\|_{\dot{H}^{\sigma}(\mathcal{L}^{2})}\lesssim(1+t)^{-\frac{\sigma}{2}-1},\] for any \(\sigma\in[0,s-1]\). We thus complete the proof of Theorem 1.2 by interpolation. \(\square\) **Acknowledgments** This work was partially supported by the National Key R&D Program of China (2021YFA1002100), the National Natural Science Foundation of China (No. 12171493), the Macao Science and Technology Development Fund (No. 098/2013/A3), the Guangdong Province of China Special Support Program (No. 8-2015), and the Key Project of the Natural Science Foundation of Guangdong Province (No. 2016A030311004).
2304.13812
Guaranteed Quantization Error Computation for Neural Network Model Compression
Neural network model compression techniques can address the computation issue of deep neural networks on embedded devices in industrial systems. The guaranteed output error computation problem for neural network compression with quantization is addressed in this paper. A merged neural network is built from a feedforward neural network and its quantized version to produce the exact output difference between two neural networks. Then, optimization-based methods and reachability analysis methods are applied to the merged neural network to compute the guaranteed quantization error. Finally, a numerical example is proposed to validate the applicability and effectiveness of the proposed approach.
Wesley Cooke, Zihao Mo, Weiming Xiang
2023-04-26T20:21:54Z
http://arxiv.org/abs/2304.13812v1
# Guaranteed Quantization Error Computation for Neural Network Model Compression ###### Abstract Neural network model compression techniques can address the computation issue of deep neural networks on embedded devices in industrial systems. The guaranteed output error computation problem for neural network compression with quantization is addressed in this paper. A merged neural network is built from a feedforward neural network and its quantized version to produce the exact output difference between two neural networks. Then, optimization-based methods and reachability analysis methods are applied to the merged neural network to compute the guaranteed quantization error. Finally, a numerical example is proposed to validate the applicability and effectiveness of the proposed approach. model compression, neural networks, quantization + Footnote †: This research was supported by the National Science Foundation, under NSF CAREER Award 2143351, and NSF CNS Award no. 2223035. ## I Introduction Neural networks have been demonstrated to be powerful and effective tools for solving complex problems such as image processing [1], high-performance adaptive control [2], etc. Due to the increasing complexity of the problems in various applications, the scale and complexity of neural networks also grow exponentially to meet the desired accuracy and performance. Recent progress in machine learning, such as training and using a new generation of large neural networks, heavily depends on the availability of exceptionally large computational resources, e.g., the Transformer model with neural architecture search proposed in [3], if trained from scratch for each case, requires 274,120 hours of training on 8 NVIDIA P100 GPUs [4]. Additionally, even for already trained neural networks, the verification process is quite time- and resource-consuming, e.g., verifying some simple properties of the simple 5-layer ACAS Xu neural network proposed in [5] takes more than 100 hours. To avoid unaffordable computation when using neural networks, a variety of neural network acceleration and compression methods have been proposed, such as neural network pruning and quantization, which can significantly reduce the size and memory footprint of neural networks as well as expedite the speed of model inference. Quantization as a reduction method is mainly concerned with the amount of memory utilized for the learnable parameters of a neural network. The weights and biases of a typical neural network are usually stored as 32-bit floating-point values, which entail millions of floating-point operations at inference time. Quantization aims to shrink the memory footprint of deep neural networks by reducing the number of bits used to store the values of the learnable parameters and activations. This is not only ideal for application scenarios with restricted memory resources, such as embedded systems or microcontroller environments, but the reduced weight representation can also facilitate faster inference through cheaper arithmetic operations [6]. With the reduction in parameter bit precision, however, a quantized neural network will typically perform worse in terms of accuracy than its non-quantized counterpart trained with gradient-based learning methods. These drops in accuracy are usually considered minimal and worth the benefits in memory reduction and inference speed-up.
There exists much literature describing various quantization techniques and successful results thereof, including works utilizing stochastic rounding to select weight values beneficial to gradient training [7] and applications on modern deep architectures [8], as well as quantization methods that reduce the number of multiplication operations required during training time [9]. Significant research has also been done on quantization-aware training methods [10], where the loss in accuracy due to bit precision reduction is minimized. Some of these quantization-aware training methods utilize a straight-through gradient estimator (STE) [11] to more appropriately select weights during network training in a way that minimizes the accuracy loss and further reduces the computational burden [12]. As quantization methods are used for neural network reduction, there inevitably exist discrepancies between the performances of the original and compressed neural networks. In this work, we propose a computationally tractable approach to compute the guaranteed output error caused by quantization. A merged neural network is constructed to generate the output differences between two neural networks, and then reachability analysis on the merged neural network can be performed to obtain the guaranteed error. The remainder of the paper is organized as follows: Preliminaries are given in Section II. The main results on quantization error computation are presented in Section III. A numerical example is given in Section IV. The conclusion is presented in Section V. ## II Preliminaries In this work, we consider a class of fully-connected feedforward neural networks which can be described by the following recursive equations \[\begin{cases}\mathbf{u}_{0}=\mathbf{u}\\ \mathbf{u}_{\ell}=\phi_{\ell}(\mathbf{W}_{\ell}\mathbf{u}_{\ell-1}+\mathbf{b}_{\ell}),\ \ell=1,\ldots,L\\ \mathbf{y}=\mathbf{u}_{L}\end{cases} \tag{1}\] where \(\mathbf{u}_{0}=\mathbf{u}\in\mathbb{R}^{n_{u}}\) is the input vector of the neural network, \(\mathbf{y}=\mathbf{u}_{L}\in\mathbb{R}^{n_{y}}\) is the output vector of the neural network, and \(\mathbf{W}_{\ell}\in\mathbb{R}^{n_{\ell}\times n_{\ell-1}}\) and \(\mathbf{b}_{\ell}\in\mathbb{R}^{n_{\ell}}\) are the weight matrices and bias vectors for the \(\ell\)-th layer, respectively. \(\phi_{\ell}=[\psi_{\ell},\cdots,\psi_{\ell}]\) is the concatenation of activation functions of the \(\ell\)-th layer, in which \(\psi_{\ell}:\mathbb{R}\rightarrow\mathbb{R}\) is the activation function, e.g., logistic, tanh, ReLU, or sigmoid functions. In addition, the input-output mapping of the above neural network \(\Phi:\mathbb{R}^{n_{u}}\rightarrow\mathbb{R}^{n_{y}}\) is denoted in the form of \[\mathbf{y}=\Phi(\mathbf{u}) \tag{2}\] where \(\mathbf{u}\in\mathbb{R}^{n_{u}}\) and \(\mathbf{y}\in\mathbb{R}^{n_{y}}\) are the input and output of the neural network, respectively. A common quantization procedure \(\mathsf{Q}(\cdot)\) mapping a floating-point value \(r\) to an integer can be formulated as follows \[\mathsf{Q}(r)=\mathsf{int}(r/S)-Z \tag{3}\] where \(S\) is a floating-point scaling factor, and \(Z\) is an integer value that represents \(0\) in the quantization policy (it may be \(0\) or another value). Here, \(\mathsf{int}:\mathbb{R}\rightarrow\mathbb{Z}\) is the function rounding a floating-point value to an integer. To reduce the size and complexity of the neural network, the quantization procedure is applied to the neural network parameters, i.e., the weights and biases.
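As a concrete illustration of the map (3), the following minimal Python sketch (not from the paper; the function names, the symmetric 8-bit scheme, and the choice of \(S\) and \(Z\) are our illustrative assumptions) quantizes a weight matrix and measures the resulting per-weight error:

```python
import numpy as np

def quantize(r, S, Z, n_bits=8):
    # Q(r) = int(r / S) - Z, per (3), clipped to the signed n-bit range.
    q = np.round(r / S).astype(np.int64) - Z
    lo, hi = -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1
    return np.clip(q, lo, hi)

def dequantize(q, S, Z):
    # Approximate inverse: r is recovered up to a rounding error of S / 2.
    return S * (q + Z).astype(np.float64)

# Example: pick the scaling factor S from the weight range so that the
# full int8 range is used, and take Z = 0 (symmetric quantization).
W = np.random.randn(50, 50).astype(np.float32)
S = float(np.abs(W).max()) / (2 ** 7 - 1)
Z = 0
W_q = quantize(W, S, Z)
print("max per-weight error:", np.abs(W - dequantize(W_q, S, Z)).max())
```

The elementwise rounding error is bounded by \(S/2\); the question addressed below is how such errors propagate through the layers to the network output.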
The quantized version of the neural network (1) is in the form of \[\begin{cases}\mathbf{u}_{0}=\mathbf{u}\\ \mathbf{u}_{\ell}=\phi_{\ell}(\mathsf{Q}(\mathbf{W}_{\ell})\mathbf{u}_{\ell-1}+\mathsf{Q}(\mathbf{b}_{\ell})),\ \ell=1,\ldots,L\\ \mathbf{y}=\mathbf{u}_{L}\end{cases} \tag{4}\] where \(\mathsf{Q}(\mathbf{W}_{\ell})\in\mathbb{Z}^{n_{\ell}\times n_{\ell-1}}\) and \(\mathsf{Q}(\mathbf{b}_{\ell})\in\mathbb{Z}^{n_{\ell}}\) are the quantized weight matrices and bias vectors for the \(\ell\)-th layer under the quantization process \(\mathsf{Q}(\cdot)\). Furthermore, the quantized version of the neural network \(\Phi:\mathbb{R}^{n_{u}}\rightarrow\mathbb{R}^{n_{y}}\) is expressed as \(\Phi_{\mathsf{Q}}:\mathbb{Z}^{n_{u}}\rightarrow\mathbb{Z}^{n_{y}}\) in the form of \[\mathbf{y}=\Phi_{\mathsf{Q}}(\mathbf{u}). \tag{5}\] The quantization can significantly reduce the size and computational complexity of a neural network, e.g., mapping the 32-bit floating-point representation to an 8-bit integer representation, leading to smaller models that can fit in hardware with high computational efficiency. However, the price to pay is the loss of performance and precision post-quantization. To formally characterize the performance loss caused by quantization, a natural approach is to compute the quantization error between the neural network and its quantized version. **Definition 1**: _Given a tuple \(\mathbb{M}\triangleq\langle\Phi,\mathsf{Q},\mathcal{U}\rangle\) where \(\Phi\) is a neural network defined by (1), \(\mathsf{Q}\) is the quantization process of (3) producing the quantized neural network \(\Phi_{\mathsf{Q}}\), and \(\mathcal{U}\in\mathbb{R}^{n_{u}}\) is a compact input set, the guaranteed quantization error is defined by_ \[\rho(\mathbb{M})=\sup_{\mathbf{u}\in\mathcal{U}}\|\Phi(\mathbf{u})-\Phi_{\mathsf{Q}}(\mathbf{u})\| \tag{6}\] _where \(\Phi_{\mathsf{Q}}\) is the quantized neural network of \(\Phi\)._ **Remark 1**: _The assumption that the input set \(\mathcal{U}\) is a compact set is reasonable since neural networks are rarely applied to raw data sets. Instead, standardization and rescaling techniques such as normalization are used, which ensure the inputs are always within a compact set such as \([0,1]\) or \([-1,1]\). Given the compact input set \(\mathcal{U}\) which contains all possible inputs to the neural network, the guaranteed quantization error \(\rho(\mathbb{M})\) characterizes the upper bound for the difference between the outputs of the neural network \(\Phi\) and its quantized version \(\Phi_{\mathsf{Q}}\) generated from the same inputs in set \(\mathcal{U}\), which quantifies the discrepancy caused by the quantization process \(\mathsf{Q}\) in terms of outputs._ ## III Quantization Error Computation To address the quantization error computation problem, the key is to estimate a \(\gamma>0\) such that \(\rho(\mathbb{M})\leq\gamma\). Due to the complexity of the neural network, it is challenging to estimate \(\gamma\) directly from the discrepancy \(\Phi(\mathbf{u})-\Phi_{\mathsf{Q}}(\mathbf{u})\). Rather than directly analyzing the discrepancy between the two neural networks, we propose to construct a new fully-connected neural network \(\tilde{\Phi}\) merged from \(\Phi\) and \(\Phi_{\mathsf{Q}}\) which produces the discrepancy of the outputs of the two neural networks, i.e., \(\tilde{\Phi}(\mathbf{u})=\Phi(\mathbf{u})-\Phi_{\mathsf{Q}}(\mathbf{u})\), and then to search for the upper bound of the outputs of the merged neural network.
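To make this idea tangible before formalizing it, here is a small self-contained Python sketch (all names and sizes are our illustrative choices, with truncation-based quantization as in the numerical example of Section IV). It builds a random ReLU network \(\Phi\), its quantized copy \(\Phi_{\mathsf{Q}}\), and the merged network \(\tilde{\Phi}\) by stacking the weights block-diagonally exactly as in (7)-(10) below, then samples inputs to obtain an empirical lower bound on \(\rho(\mathbb{M})\); the reachability methods discussed below replace the sampling when a guaranteed upper bound is required:

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)
Q = lambda a: np.trunc(a * 1e4) / 1e4   # truncation to 4 decimal places

# A random network Phi: ReLU hidden layers, linear output layer.
Ws = [rng.standard_normal((50, 1)), rng.standard_normal((50, 50)),
      rng.standard_normal((1, 50))]
bs = [rng.standard_normal(50), rng.standard_normal(50), rng.standard_normal(1)]

def forward(Ws, bs, u):
    for l, (W, b) in enumerate(zip(Ws, bs)):
        u = W @ u + b
        if l < len(Ws) - 1:
            u = relu(u)
    return u

# Merged weights per (7)-(10): stacked first layer, block-diagonal
# middle layers; the final linear layer [I, -I] is applied by slicing.
Wm = [np.vstack([Ws[0], Q(Ws[0])])]
bm = [np.concatenate([bs[0], Q(bs[0])])]
for W, b in zip(Ws[1:], bs[1:]):
    Wm.append(np.block([[W, np.zeros_like(W)], [np.zeros_like(W), Q(W)]]))
    bm.append(np.concatenate([b, Q(b)]))

def merged_forward(u):
    x = u
    for l, (W, b) in enumerate(zip(Wm, bm)):
        x = W @ x + b
        if l < len(Wm) - 1:
            x = relu(x)
    n = x.size // 2
    return x[:n] - x[n:]               # layer L+1: [I, -I], zero bias

# Sanity check of Theorem 1: merged output equals the direct difference.
Wq, bq = [Q(W) for W in Ws], [Q(b) for b in bs]
u0 = np.array([0.3])
assert np.allclose(merged_forward(u0),
                   forward(Ws, bs, u0) - forward(Wq, bq, u0))

# Empirical (sampled) lower bound on rho(M) over U = [0, 1].
samples = rng.uniform(0.0, 1.0, size=(10000, 1))
print("sampled lower bound on rho(M):",
      max(np.linalg.norm(merged_forward(u)) for u in samples))
```

Sampling can only certify a lower bound on \(\rho(\mathbb{M})\); the construction below makes the merged network amenable to sound tools that bound its output set from above.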
Given the \(L\)-layer neural network \(\Phi\) and its quantized version \(\Phi_{\mathsf{Q}}\), the merged neural network \(\tilde{\Phi}:\mathbb{R}^{n_{u}}\rightarrow\mathbb{R}^{n_{y}}\) is constructed with \(L+1\) layers as follows: \[\begin{cases}\tilde{\mathbf{u}}_{0}=\mathbf{u}\\ \tilde{\mathbf{u}}_{\ell}=\tilde{\phi}_{\ell}(\tilde{\mathbf{W}}_{\ell}\tilde{\mathbf{u}}_{\ell-1}+\tilde{\mathbf{b}}_{\ell}),\ \ell=1,\ldots,L+1\\ \tilde{\mathbf{y}}=\tilde{\mathbf{u}}_{L+1}\end{cases} \tag{7}\] where \[\tilde{\mathbf{W}}_{\ell}=\begin{cases}\begin{bmatrix}\mathbf{W}_{1}\\ \mathsf{Q}(\mathbf{W}_{1})\end{bmatrix},&\ell=1\\ \begin{bmatrix}\mathbf{W}_{\ell}&\mathbf{0}_{n_{\ell}\times n_{\ell-1}}\\ \mathbf{0}_{n_{\ell}\times n_{\ell-1}}&\mathsf{Q}(\mathbf{W}_{\ell})\end{bmatrix},&1<\ell\leq L\\ \begin{bmatrix}\mathbf{I}_{n_{y}}&-\mathbf{I}_{n_{y}}\end{bmatrix},&\ell=L+1\\ \end{cases} \tag{8}\] \[\tilde{\mathbf{b}}_{\ell}=\begin{cases}\begin{bmatrix}\mathbf{b}_{\ell}\\ \mathsf{Q}(\mathbf{b}_{\ell})\\ \end{bmatrix},&1\leq\ell\leq L\\ \begin{bmatrix}\mathbf{0}_{n_{y}\times 1}\end{bmatrix},&\ell=L+1\end{cases} \tag{9}\] \[\tilde{\phi}_{\ell}(\cdot)=\begin{cases}\phi_{\ell}(\cdot),&1\leq\ell\leq L\\ \mathsf{L}(\cdot),&\ell=L+1\end{cases} \tag{10}\] where \(\mathsf{L}(\cdot)\) is the linear transfer function, i.e., \(\mathsf{L}(x)=x\). **Theorem 1**: _Given a tuple \(\mathbb{M}\triangleq\langle\Phi,\mathsf{Q},\mathcal{U}\rangle\) where \(\Phi\) is a neural network defined by (1), \(\mathsf{Q}\) is the quantization process of (3), and \(\mathcal{U}\in\mathbb{R}^{n_{u}}\) is a compact input set, the guaranteed quantization error \(\rho(\mathbb{M})\) can be computed by_ \[\rho(\mathbb{M})=\sup_{\mathbf{u}\in\mathcal{U}}\left\|\tilde{\Phi}(\mathbf{u})\right\| \tag{11}\] _where \(\tilde{\Phi}\) is the fully-connected neural network defined in (7)._ _Proof_. First, let us consider \(\ell=1\). Given an input \(\tilde{\mathbf{u}}_{0}=\mathbf{u}\in\mathbb{R}^{n_{u}}\), one can obtain that \[\tilde{\mathbf{u}}_{1}=\tilde{\phi}_{1}(\tilde{\mathbf{W}}_{1}\tilde{\mathbf{u}}_{0}+\tilde{\mathbf{b}}_{1})=\begin{bmatrix}\phi_{1}(\mathbf{W}_{1}\tilde{\mathbf{u}}_{0}+\mathbf{b}_{1})\\ \phi_{1}(\mathsf{Q}(\mathbf{W}_{1})\tilde{\mathbf{u}}_{0}+\mathsf{Q}(\mathbf{b}_{1}))\end{bmatrix}. \tag{12}\] Then, we consider \(1<\ell\leq L\). Starting from \(\ell=2\), we have \[\tilde{\mathbf{W}}_{2}\tilde{\mathbf{u}}_{1} =\begin{bmatrix}\mathbf{W}_{2}&\mathbf{0}_{n_{2}\times n_{1}}\\ \mathbf{0}_{n_{2}\times n_{1}}&\mathsf{Q}(\mathbf{W}_{2})\end{bmatrix}\begin{bmatrix}\phi_{1}(\mathbf{W}_{1}\tilde{\mathbf{u}}_{0}+\mathbf{b}_{1})\\ \phi_{1}(\mathsf{Q}(\mathbf{W}_{1})\tilde{\mathbf{u}}_{0}+\mathsf{Q}(\mathbf{b}_{1}))\end{bmatrix}\] \[=\begin{bmatrix}\mathbf{W}_{2}\phi_{1}(\mathbf{W}_{1}\tilde{\mathbf{u}}_{0}+\mathbf{b}_{1})\\ \mathsf{Q}(\mathbf{W}_{2})\phi_{1}(\mathsf{Q}(\mathbf{W}_{1})\tilde{\mathbf{u}}_{0}+\mathsf{Q}(\mathbf{b}_{1}))\end{bmatrix}.\]
Furthermore, it leads to \[\tilde{\mathbf{u}}_{2} =\tilde{\phi}_{2}(\tilde{\mathbf{W}}_{2}\tilde{\mathbf{u}}_{1}+\tilde{\mathbf{b}}_{2})\] \[=\begin{bmatrix}\phi_{2}(\mathbf{W}_{2}\phi_{1}(\mathbf{W}_{1}\tilde{\mathbf{u}}_{0}+\mathbf{b}_{1})+\mathbf{b}_{2})\\ \phi_{2}(\mathsf{Q}(\mathbf{W}_{2})\phi_{1}(\mathsf{Q}(\mathbf{W}_{1})\tilde{\mathbf{u}}_{0}+\mathsf{Q}(\mathbf{b}_{1}))+\mathsf{Q}(\mathbf{b}_{2}))\end{bmatrix}.\] Iterating the above process from \(\ell=2\) to \(\ell=L\), the following recursive equation can be derived \[\tilde{\mathbf{u}}_{\ell}=\tilde{\phi}_{\ell}(\tilde{\mathbf{W}}_{\ell}\tilde{\mathbf{u}}_{\ell-1}+\tilde{\mathbf{b}}_{\ell})=\begin{bmatrix}\phi_{\ell}(\mathbf{W}_{\ell}\tilde{\mathbf{u}}_{\ell-1}+\mathbf{b}_{\ell})\\ \phi_{\ell}(\mathsf{Q}(\mathbf{W}_{\ell})\tilde{\mathbf{u}}_{\ell-1}+\mathsf{Q}(\mathbf{b}_{\ell}))\end{bmatrix}\] where \(\ell=2,\ldots,L\). Together with (12) when \(\ell=1\), it yields that \[\tilde{\mathbf{u}}_{L}=\begin{bmatrix}\Phi(\mathbf{u})\\ \Phi_{\mathsf{Q}}(\mathbf{u})\end{bmatrix}. \tag{13}\] Furthermore, when considering the last layer \(\ell=L+1\), the following result can be obtained \[\tilde{\mathbf{u}}_{L+1}=\mathsf{L}\left(\begin{bmatrix}\mathbf{I}_{n_{y}}&-\mathbf{I}_{n_{y}}\end{bmatrix}\begin{bmatrix}\Phi(\mathbf{u})\\ \Phi_{\mathsf{Q}}(\mathbf{u})\end{bmatrix}\right)=\Phi(\mathbf{u})-\Phi_{\mathsf{Q}}(\mathbf{u}) \tag{14}\] which means \(\tilde{\Phi}(\mathbf{u})=\Phi(\mathbf{u})-\Phi_{\mathsf{Q}}(\mathbf{u})\). Based on the definition of the guaranteed quantization error \(\rho(\mathbb{M})\), i.e., Definition 1, we can conclude that \[\rho(\mathbb{M})=\sup_{\mathbf{u}\in\mathcal{U}}\left\|\Phi(\mathbf{u})-\Phi_{\mathsf{Q}}(\mathbf{u})\right\|=\sup_{\mathbf{u}\in\mathcal{U}}\left\|\tilde{\Phi}(\mathbf{u})\right\|. \tag{15}\] The proof is complete. \(\square\) **Remark 2**: _Theorem 1 implies that we can analyze the merged neural network \(\tilde{\Phi}\) to compute the quantization error between the neural network \(\Phi\) and its quantized version \(\Phi_{\mathsf{Q}}\). This result facilitates the computation process by employing existing analysis tools, such as optimization and reachability analysis tools, on the merged neural network \(\tilde{\Phi}\)._ * _Using interval arithmetic for the neural network, we can employ the Moore-Skelboe algorithm_ _[_13_]_ _to search for the upper bound of_ \(\|\tilde{\Phi}(\mathbf{u})\|\) _subject to_ \(\mathbf{u}\in\mathcal{U}\)_, where_ \(\mathcal{U}\) _is a compact set. The key to implementing the Moore-Skelboe algorithm is to construct the interval extension of_ \([\tilde{\Phi}]:\mathbb{R}^{n_{u}}\rightarrow\mathbb{R}^{n_{y}}\)_.
First, from Theorem 1 in_ _[_14_]_, _under the assumption that activation functions are monotonically increasing, the interval extension of the merged neural network_ \([\tilde{\Phi}]\) _can be constructed as_ \[[\tilde{\Phi}]=[\tilde{\Phi}^{-},\tilde{\Phi}^{+}]\] (16) _where_ \(\tilde{\Phi}^{-}\) _and_ \(\tilde{\Phi}^{+}\) _are the left (limit inferior) and right (limit superior) bounds of the interval_ \([\tilde{\Phi}]\)_, defined as follows_ \[\tilde{\Phi}^{-}:\begin{cases}\tilde{\mathbf{u}}_{0}^{-}&=\mathbf{u}^{-}\\ \tilde{\mathbf{u}}_{\ell}^{-}&=\tilde{\phi}_{\ell}\left(\begin{bmatrix}\tilde{\mathbf{W}}_{\ell}^{-}&\tilde{\mathbf{W}}_{\ell}^{+}\end{bmatrix}\begin{bmatrix}\tilde{\mathbf{u}}_{\ell-1}^{+}\\ \tilde{\mathbf{u}}_{\ell-1}^{-}\end{bmatrix}+\tilde{\mathbf{b}}_{\ell}\right)\\ \tilde{\mathbf{y}}^{-}&=\tilde{\mathbf{u}}_{L+1}^{-}\end{cases}\] \[\tilde{\Phi}^{+}:\begin{cases}\tilde{\mathbf{u}}_{0}^{+}&=\mathbf{u}^{+}\\ \tilde{\mathbf{u}}_{\ell}^{+}&=\tilde{\phi}_{\ell}\left(\begin{bmatrix}\tilde{\mathbf{W}}_{\ell}^{-}&\tilde{\mathbf{W}}_{\ell}^{+}\end{bmatrix}\begin{bmatrix}\tilde{\mathbf{u}}_{\ell-1}^{-}\\ \tilde{\mathbf{u}}_{\ell-1}^{+}\end{bmatrix}+\tilde{\mathbf{b}}_{\ell}\right)\\ \tilde{\mathbf{y}}^{+}&=\tilde{\mathbf{u}}_{L+1}^{+}\end{cases}\] _in which_ \(\mathcal{U}\subseteq[\mathbf{u}]=[\mathbf{u}^{-},\mathbf{u}^{+}]\)_, and_ \[\underline{w}_{\ell}^{i,j}=\min(w_{\ell}^{i,j},0),\qquad\overline{w}_{\ell}^{i,j}=\max(w_{\ell}^{i,j},0),\] (17) _with_ \(w_{\ell}^{i,j}\)_,_ \(\underline{w}_{\ell}^{i,j}\)_, and_ \(\overline{w}_{\ell}^{i,j}\) _being the elements in the_ \(i\)_-th row and_ \(j\)_-th column of the matrices_ \(\mathbf{W}_{\ell}\)_,_ \(\mathbf{W}_{\ell}^{-}\)_, and_ \(\mathbf{W}_{\ell}^{+}\)_, respectively. With the above tractable calculation of_ \(\tilde{\Phi}^{-}\) _and_ \(\tilde{\Phi}^{+}\)_, we can perform the Moore-Skelboe algorithm to compute the guaranteed quantization error_ \(\rho(\mathbb{M})\)_._ * _Under the framework of reachability analysis of neural networks, the guaranteed quantization error computation problem can be turned into a reachable set computation problem for the merged neural network_ \(\tilde{\Phi}\)_. Given the input set_ \(\mathcal{U}\)_, the following set_ \[\mathcal{Y}=\left\{\tilde{\mathbf{y}}\in\mathbb{R}^{n_{y}}\mid\tilde{\mathbf{y}}=\tilde{\Phi}(\mathbf{u}),\ \mathbf{u}\in\mathcal{U}\right\}\] (19) _is called the output set of the merged neural network (7). The guaranteed quantization error_ \(\rho(\mathbb{M})\) _can be obtained by_ \[\rho(\mathbb{M})=\max\{\|\tilde{\mathbf{y}}\|\mid\tilde{\mathbf{y}}\in\mathcal{Y}\}.\] (20) _The key step is the computation of the reachable set \(\mathcal{Y}\). This can be efficiently done through neural network reachability analysis. There exist a number of neural network verification tools available for the reachable set computation. Some tools produce the reachable set \(\mathcal{Y}\) in the form of a union of polyhedral sets, such as NNV [15] and veritex [16]. The IGNNV tool computes the reachable set \(\mathcal{Y}\) as a union of interval sets [17, 14]. With the reachable set \(\mathcal{Y}\), the guaranteed quantization error \(\rho(\mathbb{M})\) can be easily obtained by searching for the maximal value of \(\|\tilde{\mathbf{y}}\|\) in \(\mathcal{Y}\), e.g., testing throughout a finite number of vertices in the interval or polyhedral sets. ## IV Numerical Example To verify the effectiveness of the quantization error computation, a numerical example is used. First, a large neural network, \(\Phi\), is generated such that it has a 1-D input layer, three hidden layers with 50 neurons in each layer, and a 1-D output layer.
Each layer has a ReLU activation function, except for the output layer, which has a linear function. The weights and biases were randomly initialized, with the condition that they were normally distributed with a mean of zero and a standard deviation of one. After generating \(\Phi\), a quantization method was applied to reduce the size of the weights and biases, which introduces a slight reduction in accuracy. This quantized network is called \(\Phi_{\mathsf{Q}}\). While there exist several quantization methods and tools to quantize networks, a basic technique that truncates the weights and biases to 4 decimal places is used in this numerical example. Next, a merged network \(\tilde{\Phi}\) was constructed from \(\Phi\) and \(\Phi_{\mathsf{Q}}\) according to (7). The veritex neural network reachability tool [16] computed the reachable output set of \(\tilde{\Phi}\) given the input interval normalized as \([0,1]\). Finally, the quantization error was obtained using (20), which is \(\rho(\mathbb{M})=0.5008\). Using this error, lower and upper bounds can be constructed as \(\Phi(u)\pm\rho(\mathbb{M})\), as shown in Fig. 1. Please note that this quantization error is very small compared with the range of outputs; thus, to provide more detail, the figure is zoomed to an appropriate scale. Moreover, the memory sizes of the models are shown in Table I. ## V Conclusions This paper addressed the guaranteed output error computation problem for neural network compression with quantization. Based on the original neural network and its compressed version resulting from quantization, a merged neural network computation framework is developed, which can utilize optimization-based methods and reachability analysis methods to compute the guaranteed quantization error. Finally, a numerical example validates the applicability and effectiveness of the proposed approach. Future work will extend the framework to more complex neural network architectures such as convolutional neural networks.
2303.09635
Universality and Control of Fat Tails
Motivated by applications in hydrodynamics and networks of thermostatically-controlled loads in buildings, we study control of linear dynamical systems driven by additive and also multiplicative noise of a general position. Utilizing the mathematical theory of stochastic multiplicative processes, we present a universal way to estimate fat, algebraic tails of the state vector probability distributions. This prompts us to introduce and analyze the mean-q-power stability criterion, generalizing the mean-square stability criterion, and then juxtapose it to other tools in control.
Michael Chertkov
2023-03-16T20:29:30Z
http://arxiv.org/abs/2303.09635v4
# Universality and Control of Fat Tails ###### Abstract Motivated by applications in hydrodynamics and networks of thermostatically-controlled loads in buildings, we study control of linear dynamical systems driven by additive and also multiplicative noise of a general position. Utilizing the mathematical theory of stochastic multiplicative processes, we present a universal way to estimate fat, algebraic tails of the state vector probability distributions. This prompts us to introduce and analyze the mean-\(q\)-power stability criterion, generalizing the mean-square stability criterion, and then juxtapose it to other tools in control. ## I Introduction Study of multiplicative noise models has a long history in control, with many foundational results reported in the late 20th and early 21st centuries [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. These classic approaches included testing stability of the control solutions via perturbations, analyzing the mean-square stability measure [5, 9], and studying the multiplicative version of the Linear Quadratic Gaussian (LQG) control problem by solving the generalized Riccati equations [3, 6], followed by a path to efficient resolution via semi-definite-programming computations of the linear inequality type [9, 12]. This letter, inspired by applications in hydrodynamics and thermal control of buildings, and also by theoretical overlap with studies in stochastic multiplicative processes [13, 14] and in statistical hydrodynamics [15], suggests the following complementary (and to the best of our knowledge novel) contributions to the classic subject of stability and control of multiplicative linear systems: (1) We introduce and analyze the effect of a multiplicative noise of a general position, that is possibly neither white nor Gaussian, on the stability of a linear state feedback solution. Utilizing the theory of stochastic multiplicative processes, and specifically the so-called Oseledets theorem [13, 14, 15], we observe that if the probability distribution of the state vector stabilizes, it shows a fat, algebraic tail whose exponent scales linearly with the vector of the feedback rates. Moreover, we express the algebraic exponent via universal characteristics of the so-called time-ordered exponential of the multiplicative matrix, in particular via the Cramer function of the Lyapunov exponents, which measure the exponential rates of growth of uncertainty in different components of the initial state vector. Our analysis also reveals that at times larger than the inverse of the largest Lyapunov exponent ANY multiplicative noise can be described by a white-Gaussian substitute with a properly re-normalized covariance. (2) Motivated by (1) we introduce the Mean-\(q\)-Power (M\(q\)P) stability criterion, requiring that the mean of the \(q/2\)-th moment of the state-vector squared, \(\lim_{t\rightarrow\infty}\mathbb{E}[(\mathbf{x}\mathbf{x}^{T})^{q/2}]\), is finite. (This criterion is a generalization of the mean-square stability criterion, standard in control theory, corresponding to the case \(q=2\).) The M\(q\)P criterion is useful because flexibility in the choice of the parameter \(q\) helps us to better test stability of the feedback control in the cases where the noise results in algebraic tails of the state vector probability distribution function.
Exploring the regime of the "slower than the inverse of the largest Lyapunov exponent" control, and thus replacing multiplicative noise of a general position by white-Gaussian noise, we are able, following classic approaches of control theory, to estimate the minimal linear state feedback which guarantees M\(q\)P-stability, and then to verify consistency with the long-time asymptotic of the noise-multiplicative version of the Linear Quadratic Gaussian approach. The Section-to-Concepts map of the letter is shown in Fig. 1. A reader interested only in the theory is advised to check the main parts of Sections II and III, skipping the subsections devoted to the applications, and then see Section IV-B, where the synthesis of the main general results, concerning the probability distribution of the state vector in the controlled/stabilized regime, is reported. The two applications - "swimmers" and "thermal" - are introduced in Section II-A and Section II-B, and the results are presented in Sections III-A, V-A and Sections III-B, III-C, V-B, respectively. Fig. 1: Section-to-Concepts map of the letter. ## II Basic Dynamic Model Consider stochastic dynamics of a state vector \(\mathbf{x}(t)\in\mathbb{R}^{d}\) which is governed by the following linear equation: \[\frac{dx_{i}}{dt}=\sum_{j}\left(m_{ij}+\sigma_{ij}(t)\right)x_{j}(t)+\xi_{i}(t)+u_{i}(t), \tag{1}\] where \(i=1,\cdots,d\) and \(\mathbf{m}=\left(m_{ij}:i,j=1,\cdots,d\right)\) is a constant matrix; \(\mathbf{\xi}(t)=\left(\xi_{i}(t):i=1,\cdots,d\right)\) is a zero-mean white-Gaussian noise fully described by \[\forall i,j:\quad\mathbb{E}\left[\xi_{i}(t)\xi_{j}(t^{\prime})\right]=\kappa_{i}\delta_{ij}\delta(t-t^{\prime});\] \(\mathbf{u}(t)=\left(u_{i}(t):i=1,\cdots,d\right)\) is a vector of control; and the multiplicative matrix \(\mathbf{\sigma}(t)=\left(\sigma_{ij}(t):i,j=1,\cdots,d\right)\) is zero-mean, stochastic, and independent of the vector of additive noise \(\mathbf{\xi}\). We consider two ways to model \(\mathbf{\sigma}\) in Eq. (1) - _special_ and _general_. In the _special_ case \(\mathbf{\sigma}\) is white-Gaussian with a constant covariance. To introduce the _general_ model of \(\mathbf{\sigma}(t)\) we consider an auxiliary multiplicative dynamics \[\frac{d}{dt}\mathbf{W}=\mathbf{\sigma}\mathbf{W}, \tag{2}\] where the matrix \(\mathbf{W}\in\mathbb{R}^{d\times d}\) is called the time-ordered exponential of \(\mathbf{\sigma}\). According to the Oseledets theorem (see [13, 14, 15] and references therein), at sufficiently large times \(t\) the matrix \(\log(\mathbf{W}^{+}\mathbf{W})/t\) stabilizes. That is, the eigenvectors of the matrix tend to \(d\) fixed orthonormal eigenvectors \(\mathbf{f}_{i}\) of \(\mathbf{W}\), and the respective set of ordered eigenvalues \(\lambda_{i}=\log|\mathbf{W}\mathbf{f}_{i}|/t\), where \(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{d}\) are called the Lyapunov exponents, stabilizes to its mean values asymptotically: \[\text{general}:\ P(\lambda_{1},\cdots,\lambda_{d}|t)\propto\exp\left(-tS(\lambda_{1},\cdots,\lambda_{d})\right), \tag{3}\] where \(S(\cdot)\) is the so-called Cramer function. In the remainder of this Section we present two examples where the basic model applies. ### _Active and Passive Swimmers_ We consider a smooth, chaotic velocity field of a general position, discussed extensively in stochastic hydrodynamics; see the review [15] and references therein. Particles/swimmers placed in such a flow separate exponentially fast.
Our task is to navigate the active swimmer to control its separation from the passive swimmer, assuming that the two were released at the same position initially. The vector of separation of the two swimmers \(\mathbf{r}=\left(r_{i}:i=1,\cdots,d\right)\) in \(d\) dimensions (where \(d=2,3\)) evolves according to \[-\alpha\Big{(}\frac{d\mathbf{r}}{dt}-\mathbf{\sigma}(t)\mathbf{r}\Big{)}=\mathbf{u}(t)+\mathbf{\xi}(t), \tag{4}\] where \(\alpha\) is the friction coefficient (which we set to unity without loss of generality) and \(\mathbf{u}\) and \(\mathbf{\xi}\) are the control force exerted by the active swimmer and the difference of the thermal forces acting on the active and passive swimmers, respectively. The \(\mathbf{\sigma}(t)\mathbf{r}\) term in Eq. (4) represents the first term of the Taylor expansion of the velocity difference between the two swimmers in the separation vector between the two. (This expansion is justified in the case of a smooth large-scale flow.) Following assumptions which are standard in stochastic hydrodynamics [15], we model \(\mathbf{\sigma}(t)\) and \(\mathbf{\xi}\) as stochastic and independent. ### _Dynamics of Temperature in Multi-Zone Buildings_ We follow a "gray box" modeling of the multi-zone building describing thermal exchange between a zone and the outside environment and between a zone and the building's Air Handling Unit (AHU), as discussed in [16] (see also references therein), and then generalize the model to account for thermal exchange between neighboring zones [17]. Consider, first, the case of a single zone, where the temperature within the zone is governed by \[\frac{dT}{dt}=-c_{o}(T-T_{o})-c_{s}(T-T_{s})u(t)+\xi(t), \tag{5}\] where \(c_{o}\) and \(c_{s}\) are the rates of thermal exchange between the zone and the "outside" environment (which can be viewed as the outside of the building, but may also represent an aggregation of other zones of the building) and the AHU, respectively (the rates are measured in units of inverse time); \(T_{o}\) and \(T_{s}\) are the "outside" and AHU temperatures, respectively; \(\xi(t)\) is the white-Gaussian noise modeling additive fluctuations, with the covariance \(\kappa\) proportional to occupancy and thus expressing behavioral uncertainty; and \(u(t)\) is the control of the opening in the pipe connecting the zone to the AHU. Given fluctuations of the zone's occupancy in time, it is also reasonable to model its contribution to the thermal exchange with the "outside" as split into mean (constant or slowly varying in time) and uncertain (thus stochastic) terms: \(c_{o}=\underline{c}_{o}+\sigma(t)\). Now, suppose that \(T_{o}\) and \(T_{s}\) are constant and assume that the single-zone control \(u(t)\) is split into constant and linear feedback components, i.e., \(u(t)=\underline{u}+\phi\theta\), where \(\theta=T-\underline{T}\) is the deviation from the desired comfort temperature \(\underline{T}\) and the constant component of the control \(\underline{u}\) is chosen according to \(0=-c_{o}(\underline{T}-T_{o})-c_{s}(\underline{T}-T_{s})\underline{u}\), thus guaranteeing that, when \(\xi\) and \(\sigma\) are set to zero, \(T\) stabilizes to \(\underline{T}\). Then Eq. (5) results in the following stochastic ODE for \(\theta\) \[\frac{d\theta}{dt}=-c(\phi)\theta+\tilde{\xi}(t)-\sigma(t)\theta, \tag{6}\] where \(c(\phi)=c_{0}+c_{1}\phi\), \(c_{0}=\underline{c}_{o}+c_{s}\underline{u}\), \(c_{1}=c_{s}(\underline{T}-T_{s})\) and \(\tilde{\xi}(t)=\xi(t)+T_{o}\sigma(t)\). Network generalization of Eq.
(6) accounting for thermal flows between zones results in the following system of equations for the components of the temperature vector \(\mathbf{\theta}=(\theta_{i}:i\in\mathcal{V})\) (counted from the comfort temperature \(\underline{T}\), set the same for all zones in the building), where \(\mathcal{V}\) stands for the set of zones: \[\frac{d\theta_{i}}{dt}=-(c_{i}(\phi)+\sigma_{io})\,\theta_{i}-\sum_{j:\{i,j\}\in\mathcal{E}}\left(\underline{c}_{ij}+\sigma_{ij}\right)(\theta_{i}-\theta_{j})+\xi_{i}(t). \tag{7}\] Here \(\mathcal{E}\) denotes the set of edges in the network linking neighboring zones; \(c_{ij}\) is the rate of thermal exchange between the pair of neighboring zones \((i,j)\in\mathcal{E}\); it is assumed that the constant components of the control vector \(\underline{\mathbf{u}}=(\underline{u}_{i}:i\in\mathcal{V})\) are chosen according to \(\forall i:\ 0=\underline{c}_{io}(\underline{T}-T_{o})+c_{is}(\underline{T}-T_{s})\underline{u}_{i}\), where \(c_{io}\) and \(c_{is}\) are the rates of thermal exchange, respectively, between zone \(i\) and the outside environment, kept at the constant temperature \(T_{o}\), and between zone \(i\) and the AHU, kept at the constant temperature \(T_{s}\); and \(c_{i}(\phi)=\underline{c}_{io}+\sum_{j}c_{js}(\underline{T}-T_{s})\phi_{ij}\), where \(\phi=(\phi_{ij}:i,j\in\mathcal{V})\) is the vector of the linear feedback rates. We also assume that both \(c_{io}\) and \(c_{ij}\) are split into constant and fluctuating parts, \(\forall i:\ c_{io}=\underline{c}_{io}+\sigma_{io}\); \(\forall(i,j):\ c_{ij}=\underline{c}_{ij}+\sigma_{ij}(t)\). It is clear that Eq. (4), with \(\mathbf{r}\) substituted by \(\mathbf{x}\), and Eqs. (6,7), with \(\mathbf{\theta}\) substituted by \(\mathbf{x}\) and with properly re-defined uncertainty/noise terms, constitute particular cases of Eq. (1). ## III Control of Steady State (CSS) Consider state feedback control, that is \(\mathbf{u}(t)\to\mathbf{w}(\mathbf{x}(t))\), where \(\mathbf{w}(\cdot)\) is a yet-to-be-defined parameterized function. Then, in the case of white-Gaussian \(\mathbf{\sigma}\), Eq. (1) results in the following Kolmogorov-Fokker-Planck (KFP) equation for the stationary probability density function of the state vector \(\mathbf{x}\) conditioned on \(\mathbf{w}(\cdot)\): \[\hat{\mathcal{D}}P(\mathbf{x}|\mathbf{w})=0,\quad\hat{\mathcal{D}}=\partial_{x_{i}}(w_{i}(\mathbf{x})+m_{ij}x_{j})+\kappa_{ij}\partial_{x_{i}}\partial_{x_{j}}+D_{ik;j\ell}\partial_{x_{i}}x_{k}\partial_{x_{j}}x_{\ell}, \tag{8}\] where \(\hat{\mathcal{D}}\) is a second-order differential operator and \(\kappa\) and \(D\) are the elements of the matrix and tensor of covariances associated with the additive and multiplicative terms, respectively. Assuming that the steady state is achieved, i.e., that the solution of the stationary KFP equation is well-defined, we pose the following steady version of the stochastic optimal control \[\mathbf{\phi}^{*}=\arg\min_{\mathbf{\phi}}\bar{C}(\mathbf{\phi}),\quad\bar{C}(\mathbf{\phi})=\int d\mathbf{x}P(\mathbf{x}|\mathbf{w}_{\mathbf{\phi}})C(\mathbf{x},\mathbf{w}_{\mathbf{\phi}}),\] \[C(\mathbf{x},\mathbf{w}_{\mathbf{\phi}})=\underbrace{C_{c}(\mathbf{w}_{\mathbf{\phi}})}_{\text{cost of control}}\ +\underbrace{C_{g}(\mathbf{x})}_{\text{cost of achieving the goal}}, \tag{9}\] where \(\mathbf{\phi}\) stands for a vector of parameters selected to represent \(\mathbf{w}_{\mathbf{\phi}}(\cdots)\).
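Before specializing (8)-(9) to the two applications, it is instructive to see the fat tail emerge numerically. The following minimal Python sketch (an Euler-Maruyama experiment in the Ito convention; the coefficient values, and the factor-of-two conventions relating \(\kappa\) and \(D\) to the KFP coefficients, are our illustrative assumptions) simulates the scalar version of (1) with linear feedback \(u=-\phi x\) and estimates the algebraic tail exponent of \(|x|\) with a simple Hill-type fit:

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar case of (1) with linear feedback u = -phi * x (Ito convention):
#   dx = -(m + phi) x dt + sqrt(kappa) dW1 - x sqrt(D) dW2
m, phi, kappa, D = 0.0, 1.0, 1.0, 0.5
dt, n_steps, n_paths = 1e-3, 50_000, 4_000

x = np.zeros(n_paths)
sqdt = np.sqrt(dt)
for _ in range(n_steps):
    dW1 = rng.standard_normal(n_paths) * sqdt
    dW2 = rng.standard_normal(n_paths) * sqdt
    x += -(m + phi) * x * dt + np.sqrt(kappa) * dW1 - x * np.sqrt(D) * dW2

# Hill-type estimate of the tail exponent alpha in P(|x| > z) ~ z^(-alpha),
# using the largest order statistics of the stationary samples.
tail = np.sort(np.abs(x))[-401:]
alpha = 1.0 / np.mean(np.log(tail[1:] / tail[0]))
print(f"estimated tail exponent: {alpha:.2f}")
# E[|x|^q] is finite only for q < alpha, which is exactly what the MqP
# criterion tests; increasing phi steepens the tail, as in (18) below.
```

The multiplicative term makes the stationary distribution algebraic rather than Gaussian, which is why the choice of \(q\) in the M\(q\)P cost below matters.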
Notice that the CSS analysis will also help us to solve the M\(q\)P-stability problem. Indeed, we will see below that the steady state (if settled) results in the algebraic decay of \(P(\mathbf{x}|\mathbf{w}_{\mathbf{\phi}})\) with \(|\mathbf{x}|\), and thus the \(\phi\)-parameterized linear state feedback is M\(q\)P-stable, i.e. the integral \[\int d\mathbf{x}P(\mathbf{x}|\mathbf{w}_{\mathbf{\phi}})(\mathbf{x}\mathbf{x}^{T})^{q/2} \tag{10}\] is convergent, if \(\mathbf{\phi}\) is sufficiently large or if \(q\) is sufficiently small. This explains our choice of the cost-of-achieving-the-goal in the M\(q\)P form \(C_{g}(\mathbf{x})\to\beta(\mathbf{x}\mathbf{x}^{T})^{q/2}\) in the following subsections, where we discuss the solution of the CSS problem for our two enabling examples. ### _Swimmers CSS: short-correlated large-scale flow_ Consider the swimmers' linear state feedback version of Eqs. (9), thus with \(\mathbf{x}\) substituted by \(\mathbf{r}\), \[\mathbf{u}(t)\to\mathbf{w}(\mathbf{r})=\phi\,\mathbf{r},\ C_{c}\{\mathbf{w}\}\to\mathbf{w}^{2},\ C_{g}(\mathbf{r})\to\beta r^{q}, \tag{11}\] and apply it to the stochastic dynamics governed by Eq. (4). Let us also choose the Batchelor-Kraichnan model for the chaotic flow in \(d\) dimensions, described by the following pair-correlation function of the velocity gradient matrix \(\mathbf{\sigma}\) entering Eq. (4): \[\forall i,j,k,l=1,\cdots,d:\ \mathbb{E}\left[\sigma_{ij}(t)\sigma_{kl}(t^{\prime})\right]=D(d+1)\delta(t-t^{\prime})\left(\delta_{jl}\delta_{ik}-\frac{\delta_{ij}\delta_{kl}+\delta_{jk}\delta_{il}}{d+1}\right), \tag{12}\] where \(\delta(\cdot)\) and \(\delta_{ij}\) are the \(\delta\)-function and the Kronecker symbol, respectively. Then, the KFP Eq. (8) for the spherically symmetric probability density \(P(r|\phi)\), where \(r=|\mathbf{r}|\), becomes (see [18] for details) \[\mathcal{L}_{sw}P(r|\phi)=0, \tag{13}\] \[\mathcal{L}_{sw}=r^{1-d}\frac{d}{dr}r^{d}\left(\phi+\frac{1}{2}\left(D(d-1)r+\frac{\kappa}{r}\right)\frac{d}{dr}\right).\] Here in Eq. (13) \(\kappa\) stands for the covariance of the thermal noise in Eq. (4). The solution of Eq. (13) is \[P(r|\phi)=N^{-1}\left(\frac{\kappa}{D}+(d-1)r^{2}\right)^{-\phi/((d-1)D)}, \tag{14}\] \[N=2^{\phi/(d-1)}d\left(\frac{(d-1)D}{\kappa\pi}\right)^{d/2}\frac{\Gamma(\phi/(D(d-1)))}{\Gamma(\phi/(D(d-1))-d/2)},\] where \(N\) is the normalization coefficient which guarantees that \(\int_{0}^{\infty}\Omega_{r}drP(r|\phi)=1\) and \(\Omega_{r}=(\pi^{d/2}/\Gamma(d/2+1))r^{d-1}\). The solution is valid, i.e. the normalization integral is bounded, if \(\phi>(d-1)dD/2\). Substituting \(P(r|\phi)\), given by Eq. (14), into Eq. (9), with \(C_{c}(\cdots)\) and \(C_{g}(\cdots)\) chosen according to Eq. (11), we observe that the cost of control is well-defined, i.e. the respective integral converges and the control is M\(q\)P-stable, at \(\phi\geq\phi^{(s)}=(d+q)(d-1)D/2\). To find the optimal \(\phi^{*}\), which should obviously be larger than the M\(q\)P threshold, \(\phi^{*}>\phi^{(s)}\), one needs to solve a straightforward one-parametric convex optimization. The solution, which is generally a bulky but explicit expression in terms of special functions, simplifies in the \(q=2\) case: \[\phi^{*}=\frac{D(d+2)(d-1)+\sqrt{4\beta+D^{2}(d+2)^{2}(d-1)^{2}}}{2}. \tag{15}\] ### _Thermal CSS: single zone_ Assuming that \(\tilde{\xi}(t)\) and \(\sigma(t)\) in Eq.
(6) are zero-mean, independent and white-Gaussian with the pair correlation functions described by \[\mathbb{E}\left[\tilde{\xi}(t)\tilde{\xi}(t^{\prime})\right]/\kappa=\mathbb{E}\left[\sigma(t)\sigma(t^{\prime})\right]/D=\delta(t-t^{\prime}), \tag{16}\] we arrive at the following KFP equation for the probability distribution function of \(\theta\) \[\left(\partial_{\theta}c(\phi)\theta+\kappa\partial_{\theta}^{2}+D(\partial_{\theta}\theta)^{2}\right)P(\theta|\phi)=0. \tag{17}\] The solution of Eq. (17), \[P(\theta|\phi)=\sqrt{\frac{D}{\pi\kappa}}\frac{\Gamma\left(\frac{c(\phi)}{2D}+\frac{1}{2}\right)}{\Gamma\left(\frac{c(\phi)}{2D}\right)}\left(1+\frac{D\theta^{2}}{\kappa}\right)^{-\frac{1}{2}-\frac{c(\phi)}{2D}}, \tag{18}\] is normalizable if \(c(\phi)>0\). Then the expectation of the cost evaluated according to Eqs. (9) is finite and convex at \(c(\phi)=c_{0}+c_{1}\phi>D(\max(q,2)-1)\), i.e. the system is M\(q\)P-stable at \(\phi>\phi^{(s)}=(D(\max(q,2)-1)-c_{0})/c_{1}\). The optimal value is achieved at \(\phi^{*}>\phi^{(s)}\), which returns the following explicit expression at \(q=2\): \[\phi^{*}=\frac{2D-c_{0}+\sqrt{(2D-c_{0})^{2}+\beta c_{1}^{2}}}{c_{1}}. \tag{19}\] ### _Thermal CSS: multi-zone_ Assuming that \(\xi_{i}\), \(\sigma_{io}\) and \(\sigma_{ij}\) entering Eq. (7) are independent, zero mean, white-Gaussian with covariances \(\kappa_{i}\), \(D_{io}\) and \(D_{ij}\), respectively, we arrive at the following multi-zone version of the single-zone KFP Eq. (17): \[\hat{\mathcal{L}}_{m}P(\mathbf{\theta}|\mathbf{\phi})=0, \tag{20}\] \[\hat{\mathcal{L}}_{m}=\sum_{i\in\mathcal{V}}\left(c_{i}\hat{l}_{i}+D_{io}(\hat{l}_{i})^{2}+\kappa_{i}\partial_{\theta_{i}}^{2}\right)+\sum_{\{i,j\}\in\mathcal{E}}\hat{l}_{ij}\left(\underline{c}_{ij}+D_{ij}\hat{l}_{ij}\right),\] where \(\hat{l}_{i}:=\partial_{\theta_{i}}\theta_{i}\) and \(\hat{l}_{ij}:=(\partial_{\theta_{i}}-\partial_{\theta_{j}})(\theta_{i}-\theta_{j})\). Notice that the stationary version of the last line in Eq. (24) settles if \(\phi>\tilde{\lambda}_{1}+1/(2S^{\prime\prime}(\tilde{\lambda}_{1}))\) and that it is fully consistent with Eq. (14) derived for the short (\(\delta\))-correlated velocity gradient, where \(S_{1}^{\prime\prime}(\tilde{\lambda}_{1})=1/((d-1)D)\) and \(\tilde{\lambda}_{1}=(d-1)^{2}D/2\). ### _General Model: Synthesis_ Assume that \(\mathbf{u}(t)\) in Eq. (1) is substituted by a general linear feedback \(u_{i}(t)\rightarrow\sum_{j}\phi_{ij}x_{j}\). Then Eq. (21) generalizes to \[\mathbf{x}(t)=e^{-(\mathbf{m}+\phi)t}\mathbf{W}(t)\tilde{x}, \tag{25}\] where \(\tilde{x}\) stabilizes to a constant as \(t\) grows. Projecting Eq. 
(25) to the \(i\)-th eigen-vector \(\mathbf{f}_{i}\) of \(\mathbf{W}\), assuming that the linear feedback is sufficiently strong and taking the \(t\rightarrow\infty\) limit, we arrive at the following explicit and simple expression for the fat tail of \((\mathbf{x}\mathbf{f}_{i}^{T})\) \[\log P_{st}(\mathbf{x}\mathbf{f}_{i}^{T})\propto 2\mathbf{f}_{i}\left(\mathbf{m}+\phi\right)\mathbf{f}_{i}^{T}S_{i}^{\prime\prime}(0)\log\frac{x_{d}}{\mathbf{x}\mathbf{f}_{i}^{T}}, \tag{26}\] which is bi-linear in the sum of the system's dynamic matrix \(\mathbf{m}\) and the linear feedback rate \(\phi\), and in the curvature \(S_{i}^{\prime\prime}(0)\) evaluated at the minimum of the Cramer function of the \(i\)-th Lyapunov exponent of \(\mathbf{W}\). Two additional remarks are in order. First, notice that the dependence on \(\tilde{x}\) is "under the logarithm", thus weak, and is replaced by \(x_{d}\), which is an estimate for the size of the center of the probability distribution of \((\mathbf{x}\mathbf{f}_{i}^{T})\), dependent on the additive noise. Second, it follows from Eq. (26) that the statistics of any norm of \(\mathbf{x}\) is equivalent to the statistics of \((\mathbf{x}\mathbf{f}_{1}^{T})\) associated with the largest Lyapunov exponent \(\lambda_{1}\). ## V Stochastic Optimal Control (SOC) Our next step is to derive and solve the Hamilton-Jacobi-Bellman (HJB) equation associated with the dynamics governed by Eqs. (1) and with the dynamic version of the cost function described by Eq. (9). This material is auxiliary (to the main message of the letter) and is presented here as a check of consistency and also as a link to the classic (and thus benchmark) methodology in the field. The derivation of the HJB equations which follow is standard (see e.g. [20] and references therein for details) and is thus abbreviated. We study a finite horizon version of Eqs. (1), evaluated in the "running time" \(\tau\) where \(\tau\in[t,t_{f}]\), and we introduce the cost-to-go \(S(t,\mathbf{x})\) considered as a function of \(t\) and \(\mathbf{x}(t)\): \[S(t,\mathbf{x})=\min_{\{\mathbf{u}(t)\}}\Bigl{(}S_{f}(\mathbf{x}(t_{f}))+\int_{t}^{t_{f}}\!\!\!d\tau\ \mathbb{E}\left[C(\mathbf{x}(\tau),\mathbf{u}(\tau))\right]\Bigr{)}, \tag{27}\] where \(S_{f}(\mathbf{x}(t_{f}))\) is the contribution to the cost-to-go associated with the final position \(\mathbf{x}(t_{f})\), and the expectation in Eq. (27) is over the random \(\mathbf{\xi}(t)\) and \(\mathbf{\sigma}(t)\). Then \(S(t,\mathbf{x})\) satisfies the following HJB equation, supplemented by the condition \(S(t_{f},\mathbf{x})=S_{f}(\mathbf{x})\): \[-\partial_{t}S(t,\mathbf{x})=\min_{\mathbf{u}}\left(C(\mathbf{x},\mathbf{u})+\left(\underline{a}_{i}+u_{i}+\underline{m}_{ij}x_{j}\right)\partial_{x_{i}}S(t,\mathbf{x})\right.\] \[+\left.\left(\kappa_{ij}\partial_{x_{i}}\partial_{x_{j}}+D_{ik;jl}x_{k}\partial_{x_{i}}x_{l}\partial_{x_{j}}\right)S(t,\mathbf{x})\right). \tag{28}\] ### _Swimmers SOC: short-correlated large-scale flow_ Assume that \(\mathbf{u}\) is aligned with \(\mathbf{r}\), i.e. \(\mathbf{u}=u\mathbf{r}/r\); then \(C(u,r)=u^{2}+\beta r^{q}\). Accounting for Eqs. (11,13), we arrive at the following "simple swimmers" version of Eq. (28) \[-\partial_{t}S =\beta r^{q}+\frac{r^{1-d}}{2}\partial_{r}r^{d-1}\left(D(d-1)r^{2}+\kappa\right)\partial_{r}S+J,\] \[J =\min_{u}\left(u^{2}+u\partial_{r}S\right)=-\frac{1}{4}\left(\partial_{r}S\right)^{2}, \tag{29}\] where the optimal control is \(u^{*}(t,r)=-\partial_{r}S(t,r)/2\). 
Assuming that \(q=2\) and that the function defining the final condition, \(S_{f}(r)\), is quadratic in \(r\), and thus looking for a solution of Eq. (29) in the (quadratic in \(r\)) form \(S(t,r)=\varsigma(t)r^{2}+s(t)\), we arrive at the following system of equations: \[d\kappa\varsigma+ds/dt=0,\ (d^{2}+d-2)D\varsigma-\varsigma^{2}+\beta+d\varsigma/dt=0,\] which results in the following solution for \(\varsigma(t)\): \[\varsigma(t)=\frac{1}{2}\Biggl{(}D(d+2)(d-1)+\sqrt{4\beta+D^{2}(d+2)^{2}(d-1)^{2}}\,\tanh\left(\frac{(t_{1}-t)}{2}\sqrt{4\beta+D^{2}(d+2)^{2}(d-1)^{2}}\right)\Biggr{)}, \tag{30}\] where \(t_{1}\) is tuned to satisfy the final condition, \(S(t_{f},r)=S_{f}(r)\). We observe that \(\varsigma(t)\), defined by Eq. (30) and considered in the \(t\rightarrow-\infty\) asymptotic, is fully consistent with Eq. (15) derived under the assumption of CSS control. ### _Thermal SOC_ Formally, the analysis of the single-zone version of the thermal SOC is similar to what was just described for the case of two swimmers. Adapting the HJB Eq. (28) to the case of multi-zone temperature control and then generalizing the thermal CSS setting discussed in Section III-C, we arrive at the following HJB equation for the cost-to-go \(S(t,\mathbf{\theta})\): \[-\partial_{t}S =\sum_{i\in\mathcal{V}}\Bigl{(}\beta_{i}|\theta_{i}|^{q}+D_{io}(\hat{l}_{i}^{(r)})^{2}S+\kappa_{i}\partial_{\theta_{i}}^{2}S+J_{i}\Bigr{)} \tag{31}\] \[+\sum_{\{i,j\}\in\mathcal{E}}\hat{l}_{ij}^{(r)}\Bigl{(}\underline{c}_{ij}+D_{ij}\hat{l}_{ij}^{(r)}\Bigr{)}\,S,\] \[\forall i:\ \ J_{i}=\min_{\tilde{u}}\left(\alpha_{i}\tilde{u}^{2}-\tilde{u}c_{is}(\underline{T}-T_{s})\partial_{\theta_{i}}S\right)=-\frac{\gamma_{i}}{2}\left(\partial_{\theta_{i}}S\right)^{2},\ \ \gamma_{i}=\frac{c_{is}^{2}(\underline{T}-T_{s})^{2}}{2\alpha_{i}},\] where \(\hat{l}_{i}^{(r)}\) and \(\hat{l}_{ij}^{(r)}\) are (time) conjugates of the operators \(\hat{l}_{i}\) and \(\hat{l}_{ij}\), i.e., \(\hat{l}_{i}^{(r)}=\theta_{i}\partial_{\theta_{i}}\) and \(\hat{l}_{ij}^{(r)}=(\theta_{i}-\theta_{j})(\partial_{\theta_{i}}-\partial_{\theta_{j}})\). As in the other HJB examples discussed so far, we look for the cost-to-go solving the multi-zone HJB Eq. (31) corresponding to \(q=2\) as a symmetric quadratic form in \(\mathbf{\theta}\) with coefficients dependent on \(t\): \(S(t,\mathbf{\theta})=\sum_{i,j}\theta_{i}\varsigma_{ij}(t)\theta_{j}+s(t)\), where \(\varsigma_{ij}=\varsigma_{ji}\). Substituting the quadratic ansatz into Eq. (31), we arrive at a system of generalized Riccati equations (which we do not present here to save space). It is straightforward to check that at \(\kappa=0\) and in the \(t\to-\infty\) limit the system of equations is consistent with a solution of the KFP Eq. (20) at the optimal \(\phi^{(*)}\). ## VI Conclusions and Path Forward We analyzed a linear dynamic system driven by additive and multiplicative noise of general position, stabilized by a linear feedback control. We introduced an orthonormal basis built from the time-ordered exponential of the multiplicative noise matrix and showed that the stationary statistics of the state vector projected onto an element of the basis exhibits an algebraic tail. The exponent of the tail is given by the explicit expression (26), which is bi-linear in the sum of the constant part of the system's dynamic matrix and the linear feedback rate matrix, and in the curvature at the minimum of the Cramer function of the element of the basis. 
We believe that it is of interest to extend the approach in the future to a data-driven setting where the Cramer functions of the Lyapunov exponents and the respective eigen-vectors are learned from observations. The emergence of fat tails in linear systems driven by multiplicative noise suggests using the newly introduced Mean-\(q\)-Power criterion for adjusting the linear feedback control. We have illustrated that the criterion can be validated on, and is useful for, approaches which are classic in control, specifically for Control of Steady State (Section III) and for Stochastic Optimal Control (Section V). We plan to extend this approach to systems with partial observability and control-dependent noise. Our results are motivated by and illustrated on examples from hydrodynamics and civil engineering. We envision extending this work to more complex fluid flows and to "white box" modeling of thermostatically controlled multi-zone buildings. This can be achieved via data-driven reinforcement learning approaches improving the control schemes described in this letter. Footnote: The author is grateful to L. Pagnier, R. Ferrando, C. Koh, S. Konkimalla and A. Larsen for multiple discussions. This work is a part of the team collaboration.
2303.06506
Information propagation in long-range quantum many-body systems
We study general lattice bosons with long-range hopping and long-range interactions decaying as $|x-y|^{-\alpha} $ with $\alpha\in (d+2,2d+1)$. We find a linear light cone for the information propagation starting from suitable initial states. We apply these bounds to estimate the minimal time needed for quantum messaging, for the propagation of quantum correlations, and for quantum state control. The proofs are based on the ASTLO method (adiabatic spacetime localization observables). Our results pose previously unforeseen limitations on the applicability of fast-transfer and entanglement-generation protocols developed for breaking linear light cones in long-range and/or bosonic systems.
Marius Lemm, Carla Rubiliani, Israel Michael Sigal, Jingxuan Zhang
2023-03-11T22:20:53Z
http://arxiv.org/abs/2303.06506v3
# Propagation of information in long-range quantum systems ###### Abstract We present bounds on the minimal time for quantum messaging, propagation/creation of correlations, and control of states for _general_, long-range, lattice quantum many-body bosonic systems. The proofs are based on a maximal velocity bound and the light-cone approximation of dynamics, which provide different expressions of the fact that the many-body evolution stays, up to small leaking probability tails, within a light cone of the support of the initial conditions and imply, in particular, Lieb-Robinson-type bounds on commutators of evolving observables. Maximal propagation speed; Lieb-Robinson bounds; quantum dynamical systems; quantum many-body systems; quantum information; quantum light cones; quantum correlations; entanglement pacs: 03.65.Ta Theorems II.1-II.3 below establish, respectively, a maximal velocity bound, a light-cone approximation of the dynamics, and a weak Lieb-Robinson-type bound which provides power-law decay of commutators and holds (uniformly) on _a subset of localized states_. Theorems II.4-II.7 provide general constraints on propagation/creation of correlation, quantum messaging, state control times, and the relation between a spectral gap and the decay of correlations. They are derived readily from Theorems II.2 and II.3. Theorem II.8, whose proof is an essential extension of the proof of Theorem II.1, describes macroscopic particle transport. For pure states, since the correlation signifies entanglement, Theorem II.4 gives bounds on the time for propagation/creation of entanglement between different regions. In connection with our bounds on correlations, we introduce the notion of _weakly correlated states_ within given spatial domains, see Section II. To emphasize, our results yield the existence of a linear light cone for the most general lattice quantum many-body systems, providing general constraints on the evolution of information for such systems. The bounds on the maximal speed of propagation are given in terms of the norm of the 1-particle group velocity operator \(i[h,x]\), where \(h\) and \(x\) are the 1-particle Hamiltonian entering (I.1) and the position observable, respectively. All of our results hold for bosonic systems with long-range interactions, say, \(|h_{xy}|\leq C(1+|x-y|)^{-\alpha}\) with \(\alpha>d+n+1\) and similarly for \(v\), which suffices for (I.2). Taking \(n=1\), we see that, for \(d>1\) and \(\alpha\in(d+2,2d+1)\), our result gives a linear light cone as defined in terms of the weak LRB (II.11). On the other hand, fast state-transfer and entanglement-generation protocols [24; 25; 26; 27; 28] show that linear light cones, defined in terms of the LRB, do not exist for \(\alpha<2d+1\). See [11] for the phase diagram summarizing the situation for the Lieb-Robinson light cones and [29; 30] for reviews of the effect of the long-range interactions on quantum many-body dynamics and, in particular, on the transmission of quantum information. Thus our bounds narrow the class of systems for which long-range interactions lead to speed-up of the spreading of information [31]. Our results can be extended readily to Hamiltonians with time-dependent and few-body interactions, to fermionic systems, and to open quantum systems (i.e. the Lindblad equation, see [32]), and to estimating the decoherence and thermalization times. Detailed discussion and proofs of the results presented in this Letter are given in [33]. Related results. Results similar to Theorems II.1-II.3, II.5, and II.6, but for the Bose-Hubbard model, were obtained in [10; 17]. Our proofs of those theorems follow the corresponding proofs in [10; 17]. 
Earlier on, results similar to Thms II.1, II.2 and II.3 were obtained in [34], [35] and [25], respectively (the last two papers deal with the Bose-Hubbard model). Moreover, an LRB for a special class of bosonic lattice systems was proved in [36]. The constraints imposed by the LRB on the propagation of correlations were first discussed in [4], with rigorous results for fermionic systems given in [37]. The relation between a spectral gap and the decay of correlations for fermionic systems was established in [2; 3; 38; 39], with the sharpest results given in [3]. As we were preparing [33] and the present Letter for publication, a new preprint [40] was posted with deep results related to those in Thms. II.1 and II.2 for nearest-neighbour quantum many-body Hamiltonians. Assuming the initial state satisfies a uniform low density condition, the authors of [40] proved the existence of the superlinear light cone \(|x|\sim t\log t\) (resp. \(|x|\sim t^{d}\,\mathrm{polylog}\,t\), where \(d\) is the dimension and \(\mathrm{polylog}\) is the polylogarithmic function) for particle transport (resp. the light-cone approximation of observables), up to fast decaying leaking probability tails. Notation. We fix the underlying lattice \(\mathcal{L}\), with grid size \(\geq 1\), and the domain \(\Lambda\subset\mathcal{L}\), and we do not display these in our notations. We denote by \(\|\cdot\|\) the norm of operators on \(\mathcal{F}\). All quantities and equations we work with are dimensionless and, in our units, the Planck constant is set to \(2\pi\) and the speed of light to one (\(\hbar=c=1\)). ## II Setup and main results In this section, we fix a subset \(\Lambda\subset\mathcal{L}\) and drop it from the notation, writing e.g. \(H\equiv H_{\Lambda}\). For symmetric \(h\) and \(v\), the Hamiltonian \(H\) in (I.1) is symmetric and, in fact, self-adjoint. To show the latter, one observes that the number operator \(N\equiv N_{\Lambda}\), where \(N_{X}:=\sum_{x\in X}a_{x}^{*}a_{x}\), commutes with \(H\). Since the operators \(H_{n}:=H\ |_{\{N=n\}}\) are symmetric and bounded, they are self-adjoint. Hence so is \(H=\oplus_{n=0}^{\infty}H_{n}\) as an infinite direct sum of self-adjoint operators. Therefore the propagator \(e^{-itH}\) is well-defined for every \(t\in\mathbb{R}\). Denote by \(S(\mathcal{F})\) the space of density operators on \(\mathcal{F}\), i.e. positive trace-class operators \(\rho\) on \(\mathcal{F}\), which we identify with positive linear functionals (i.e. expectations) of observables, \(\omega(A)\equiv\omega_{\rho}(A):=\mathrm{Tr}(A\rho).\) Consequently, we pass from the Schrödinger equation \(i\partial_{t}\psi=H\psi\) on \(\mathcal{F}\) to the von Neumann equation: \[\partial_{t}\rho_{t}=-i[H,\rho_{t}],\quad\text{ or }\quad\partial_{t}\omega_{t}(A)=\omega_{t}(i[H,A]).\] (II.1) We denote by \(\mathcal{D}\) the domain of \(\mathrm{ad}_{H}:A\mapsto[A,H]\) in the space \(S(\mathcal{F})\) [41] and write \(\omega\in\mathcal{D}\) if \(\omega=\omega_{\rho}\) for some \(\rho\in\mathcal{D}\). For each \(\rho\in\mathcal{D}\), eq. (II.1) has a unique solution with initial state \(\rho\), given by \(\rho_{t}\equiv\alpha_{t}^{\prime}(\rho):=e^{-itH}\rho e^{itH}\). This evolution preserves total probability, i.e. \(\mathrm{Tr}(\rho_{t})\equiv\mathrm{Tr}(\rho)\), as well as the eigenvalues of \(\rho\). The evolution of observables, dual to \(\alpha_{t}^{\prime}\) w.r.t. 
the coupling \((A,\rho)\mapsto\mathrm{Tr}(A\rho)\), is given by \[\alpha_{t}(A):=e^{itH}Ae^{-itH},\,\text{so that}\,\,\omega_{t}(A)=\omega(\alpha_{t}(A)).\] (II.2) In this Letter, we present several bounds imposing general constraints on many-body quantum evolutions. We will consider test (or initial) states satisfying \[\omega\in\mathcal{D},\quad\omega(N^{2})<\infty.\] (II.3) Below, \(\forall\;X\subset\Lambda\), we let \(X^{\rm c}:=\Lambda\setminus X\), \(d_{X}(x):=\inf_{y\in X}|x-y|\), \(X_{\alpha}:=\{x:d_{X}(x)\leq\alpha\}\), \(X_{\alpha}^{\rm c}\equiv(X_{\alpha})^{\rm c}\), and \[\kappa:=\sup_{x\in\Lambda}\sum_{y\in\Lambda}\left|h_{xy}\right|\left|x-y\right|.\] (II.4) Maximal velocity bound. We have: **Theorem II.1** (MVB).: _Suppose \(\kappa_{n}<\infty\) with some \(n\geq 1\) (see (I.2)). Then, for every \(c>\kappa\), there exists \(C=C(n,\kappa_{n},c)>0\) s.th. \(\forall\;\eta\geq 1,\,X\subset\Lambda\), \(\left|t\right|<\eta/c\):_ \[\alpha_{t}(N_{X_{\eta}^{\rm c}})\leq C(N_{X^{\rm c}}+\eta^{-n}N).\] (II.5) Continuing with the terminology of [10, 17, 18], we call such an estimate the _maximal velocity bound_ (MVB). We outline the proof of Theorem II.1 in Section III. Here and below, an operator inequality \(A\leq B\) means that \(\omega(A)\leq\omega(B)\;\forall\) states \(\omega\) satisfying (II.3). Estimate (II.5) shows that, if the initial condition \(\omega\) satisfies (II.3) and \(\omega(N_{X^{\rm c}})=0\) (i.e. \(\omega\) is localized in \(X\)), then, \(\forall\;\left|t\right|<\eta/c\), \[\omega_{t}(N_{X_{\eta}^{\rm c}})=\omega(\alpha_{t}(N_{X_{\eta}^{\rm c}}))\leq C\eta^{-n}\omega(N_{X}).\] (II.6) At the last step, we used the observation, due to M. Lemm, that under \(\omega(N_{X^{\rm c}})=0\) one has \(\omega(N^{p})=\omega(N^{p}_{X}),\;p=1,2.\) In other words, up to polynomially vanishing probability tails, the particles propagate within the strictly linear light cone (LC) \[X_{ct}\equiv\left\{d_{X}(x)\leq ct\right\},\] (II.7) for every fixed \(c>\kappa\) and all \(t\). Put differently, the probability that particles are transported from \(X\) to any test (or probe) domain \(Y\) outside the LC \(X_{ct}\) is of the order \(O(\eta^{-n})\), where \(\eta={\rm dist}(X,Y)\). LC approximation of evolution (II.2). We say that an operator \(A\) acting on \(\mathcal{F}\) is _localized_ in \(X\subset\Lambda\) (in symbols, \(\operatorname{supp}A\subset X\)) if \(\left[A,a^{\sharp}_{x}\right]=0\;\forall\;x\in X^{\rm c}\), where \(a^{\sharp}_{x}\) stands for either \(a_{x}\) or \(a^{*}_{x}\). The support of an initially localized observable generally spreads over the entire space \(\forall\;t>0\). Nonetheless, in Theorem II.2 below, we show that the evolution of local observables under (II.2) is approximated by a family of observables localized within the LC of the initial support. For any subset \(S\subset\Lambda\), we define the _localized evolution_ of observables as \(\alpha^{S}_{t}(A):=e^{itH_{S}}Ae^{-itH_{S}}\), where \(H_{S}\) is defined in (I.1) with \(S\) in place of \(\Lambda\), and \[\mathcal{B}_{S}:=\left\{A\in\mathcal{B}(\mathcal{F}):[A,N]=0,\,\operatorname{supp}A\subset S\right\},\] (II.8) where \(\mathcal{B}(\mathcal{F})\) is the space of bounded operators on \(\mathcal{F}\). Then, one can check that \(\forall\;S\subset\Lambda\), \(A\in\mathcal{B}_{S}\), and \(t\in\mathbb{R}\), we have \(\alpha^{S}_{t}(A)\in\mathcal{B}_{S}\). 
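A minimal numerical sketch of the MVB in the single-particle sector (an illustration, not part of the proofs): for one boson, \(\omega_{t}(N_{X_{\eta}^{\rm c}})\) is just the probability \(\sum_{|x|>\eta}|(e^{-ith}\psi_{0})_{x}|^{2}\), which can be checked directly. The chain length, the exponent \(\alpha=4\), and the choice \(c=1.5\kappa\) below are illustrative assumptions.

```python
# One boson hopping on a 1D chain with |h_xy| = (1 + |x - y|)^(-alpha);
# kappa is the one-particle analogue of Eq. (II.4) on this chain.
import numpy as np

L, alpha = 401, 4.0                              # illustrative chain length and exponent
x = np.arange(L) - L // 2
dist = np.abs(x[:, None] - x[None, :]).astype(float)
h = (1.0 + dist) ** (-alpha)
np.fill_diagonal(h, 0.0)

kappa = np.max((np.abs(h) * dist).sum(axis=1))   # sup_x sum_y |h_xy| |x - y|
c, eta = 1.5 * kappa, 20                         # any c > kappa; probe region X_eta^c

evals, evecs = np.linalg.eigh(h)                 # h is real symmetric
psi0 = np.zeros(L)
psi0[L // 2] = 1.0                               # initial state localized in X = {0}

for t in (10.0, 25.0, 0.99 * eta / c):           # estimate (II.6) applies for |t| < eta/c
    psi_t = evecs @ (np.exp(-1j * evals * t) * (evecs.T @ psi0))
    leak = np.sum(np.abs(psi_t[np.abs(x) > eta]) ** 2)
    print(f"t = {t:6.1f}: probability outside X_eta = {leak:.2e}")
```

Throughout the allowed window \(|t|<\eta/c\), the leaked probability stays at the level of the power-law tail of the hopping, in line with the \(O(\eta^{-n})\) behavior in (II.6).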
**Theorem II.2** (LC approximation of quantum evolution).: _Let (I.2) hold with some \(n\geq 1\), and let a state \(\omega\) satisfy (II.3) and_ \[\omega(N_{X^{\rm c}})=0,\] (II.9) _with some \(X\subset\Lambda\). Then, for every \(c>2\kappa\), there exists \(C=C(n,\kappa_{n},\nu_{n},c)>0\) s.th. for all \(\xi\geq 1\) and operators \(A\in\mathcal{B}_{X}\), the full evolution \(\alpha_{t}(A)\) is approximated by the local evolution \(\alpha_{t}^{X_{\xi}}(A)\), for all \(\left|t\right|<\xi/c\), as_ \[\left|\omega\Big{(}\alpha_{t}(A)-\alpha_{t}^{X_{\xi}}(A)\Big{)}\right|\leq C\left|t\right|\xi^{-n}\left\|A\right\|\omega(N_{X}^{2}).\] (II.10) _c. Lieb-Robinson-type bounds._ Theorem II.2 leads to a Lieb-Robinson-type bound: **Theorem II.3** (Weak Lieb-Robinson bound).: _Suppose the assumptions of Theorem II.2 hold with \(n\geq 1,\,X\subset\Lambda\). Then, for every \(c>2\kappa\), there exists \(C=C(n,\kappa_{n},\nu_{n},c)>0\) s.th. \(\forall\;\xi\geq 1\), \(Y\subset\Lambda\) with \({\rm dist}(X,Y)\geq 2\xi\), and operators \(A\in\mathcal{B}_{X},\,B\in\mathcal{B}_{Y}\), we have, \(\forall\;\left|t\right|<\xi/c\):_ \[\left|\omega\left([\alpha_{t}(A),B]\right)\right|\leq C\left|t\right|\left\|A\right\|\left\|B\right\|\xi^{-n}\omega(N_{X}^{2}).\] (II.11) We call a bound of the form (II.11) the _weak Lieb-Robinson bound (LRB)_. Unlike the classical LRB, estimate (II.11) depends on a subclass of states and provides power-law, rather than exponential, decay. Estimate (II.11) shows that, with probability approaching \(1\) as \(t\to\infty\), an evolving family of observables \(A_{t}=\alpha_{t}(A)\) continues to commute with any other observable supported outside the LC (II.7) \(\forall\;c>2\kappa\), provided the supports of these observables are separated by initially empty regions. Propagation/creation of correlations. Assuming a state \(\omega\) is weakly correlated in a domain \(Z^{\rm c}\subset\Lambda\), how long does it take for the correlations in \(Z\) to spread, under the evolution (II.2), into \(Z^{\rm c}\)? Put differently, how long does it take to create correlations in \(Z^{\rm c}\)? The notion of a weakly correlated state is defined as follows: **Definition II.1**.: Let \(Z\subset\Lambda\). For subsets \(X,\,Y\subset\Lambda\), let \(d_{XY}:={\rm dist}(X,Y)\) and \(d_{XY}^{Z}:=\min(d_{XY},d_{XZ},d_{YZ})\). We say a state \(\omega\) is _weakly correlated in a subset \(Z^{\rm c}\) at a scale \(\lambda>0\)_, or \({\rm WC}(Z^{\rm c},\lambda,C,n)\), with \(C>0\), \(n\geq 1\), if \(\forall\;X\), \(Y\subset Z^{\rm c}\) with \(d_{XY}^{Z}>0\) and operators \(A\in\mathcal{B}_{X},\,B\in\mathcal{B}_{Y}\) (see (II.8)), the following holds: \[\left|\omega^{c}(A,B)\right|\leq C(d_{XY}^{Z}/\lambda)^{1-n}\left\|A\right\|\left\|B\right\|,\] (II.12) where \(\omega^{c}(A,B):=\omega(AB)-\omega(A)\omega(B)\). As for exponentially decaying correlations, \(\lambda\) characterizes the scale of the decay of correlations. **Theorem II.4** (Propagation/creation of correlation).: _Suppose (I.2) holds with some \(n\geq 1\). Let \(Z\subset\Lambda\) and suppose the initial state \(\omega\) satisfies (II.3), \(\omega(N_{Z^{\rm c}})=0\), and is \({\rm WC}(Z^{\rm c},\lambda,C,n)\)._ 
Then, \(\omega_{t}\) is \({\rm WC}(Z^{\rm c},3\lambda,C\omega(N_{Z}^{2}),n)\) for all \(\left|t\right|<\lambda/3\kappa\); specifically, \(\forall\;A\in\mathcal{B}_{X},\,B\in\mathcal{B}_{Y}\) supported in \(X,\,Y\subset Z^{\rm c}\) with \(d_{XY}^{Z}>0\) and \(\left|t\right|<\lambda/3\kappa\),_ \[\left|\omega_{t}^{c}(A,B)\right|\leq C\omega(N_{Z}^{2})(d_{XY}^{Z}/3\lambda)^{1-n}\left\|A\right\|\left\|B\right\|.\] (II.13) _For short-range (i.e. exponentially decaying) interactions, (II.13) holds for all \(n\geq 1\)._ For the second statement, we note that for short-range interactions, condition (I.2) is valid \(\forall\;n\geq 1\). Constraint on the propagation of quantum signals. The weak LRB (II.11) imposes a direct constraint on the speed of quantum messaging (c.f. [4; 10; 42]). Assume that Bob, at a location \(Y\), is in possession of a state \(\rho\) and an observable \(B\) and would like to send a signal through the quantum channel \(\alpha_{t}^{\prime}\) to Alice, who is at \(X\) and who possesses the same state \(\rho\) and an observable \(A\). To send a message, Bob uses \(B\) as a Hamiltonian to evolve \(\rho\) for a time \(r>0\), and then sends Alice the resulting state \(\rho_{r}=\tau_{r}(\rho)\), where \(\tau_{r}(\rho):=e^{-iBr}\rho e^{iBr}\), as \(\alpha_{t}^{\prime}(\rho_{r})\). To see whether Bob sent his message, Alice computes the difference between the expectations of \(A\) in the states \(\alpha_{t}^{\prime}(\rho_{r})\) and \(\alpha_{t}^{\prime}(\rho)\), which we call the _signal detector_: \[\operatorname{SD}(t,r):=\operatorname{Tr}\big{(}A\,\alpha_{t}^{\prime}(\rho_{r})\big{)}-\operatorname{Tr}\big{(}A\,\alpha_{t}^{\prime}(\rho)\big{)}.\] The weak LRB (II.11) implies: **Theorem II.5** (Bound on messaging time).: _Let the assumptions of Theorem II.2 hold with \(n\geq 1\), \(X\subset\Lambda\) and \(\omega(\cdot)=\operatorname{Tr}((\cdot)\rho)\). Then, for every \(c>4\kappa\), there exists \(C=C(n,\kappa_{n},\nu_{n},c)>0\) s.th. \(\forall\;\xi\geq 2\), \(X,\,Y\subset\Lambda\) with \(\operatorname{dist}(X,Y)\geq 2\xi\), and operators \(A\in\mathcal{B}_{X}\), \(B\in\mathcal{B}_{Y}\) with \(\left\|B\right\|_{n}<\infty\) (see after (I.2)), we have, \(\forall\;r,\left|t\right|<\xi/c\):_ \[\left|\operatorname{SD}(t,r)\right|\leq Cr\left|t\right|\xi^{-n}\left\|A\right\|\left\|B\right\|\operatorname{Tr}(N_{X}^{2}\rho).\] (II.14) Bound on quantum state control. For any subset \(S\subset\Lambda\), we denote by \(\mathcal{F}_{S}\) the (bosonic) Fock space over the one-particle Hilbert space \(\ell^{2}(S)\), i.e., \(\mathcal{F}_{S}:=\oplus_{n=0}^{\infty}\otimes_{\mathbb{S}}^{n}\ell^{2}(S)\), where \(\otimes_{\mathbb{S}}\) stands for the symmetric tensor product of \(n\) copies of \(\ell^{2}(S)\), and let \(\mathcal{F}\equiv\mathcal{F}_{\Lambda}\). Due to the tensorial structure \(\mathcal{F}\simeq\mathcal{F}_{Y}\otimes\mathcal{F}_{Y^{\rm c}}\) (see [33, App. A] for the definitions and discussions), we can define the partial trace \(\operatorname{Tr}_{\mathcal{F}_{Y^{\rm c}}}\) over \(\mathcal{F}_{Y^{\rm c}}\), e.g. by the equation \(\operatorname{Tr}_{\mathcal{F}_{Y}}(A\operatorname{Tr}_{\mathcal{F}_{Y^{\rm c}}}\rho)=\operatorname{Tr}((A\otimes\mathbf{1}_{\mathcal{F}_{Y^{\rm c}}})\rho)\) for every bounded operator \(A\) acting on \(\mathcal{F}_{Y}\). This allows one to define a _restriction_ of a state \(\rho\) to the density operators on the local Fock space \(\mathcal{F}_{Y}\), \(Y\subset\Lambda\), by \(\rho_{Y}:=\operatorname{Tr}_{\mathcal{F}_{Y^{\rm c}}}\rho\). Let \(\tau\) be a quantum map (or _state control map_) supported in \(X\). 
Given a density operator \(\rho\), our task is to design \(\tau\) so that at some time \(t\), the evolution \(\rho_{t}^{\tau}:=\alpha_{t}^{\prime}(\rho^{\tau})\) of the density operator \(\rho^{\tau}:=\tau(\rho)\) has the restriction \([\rho_{t}^{\tau}]_{Y}\) to \(S(\mathcal{F}_{Y})\) close to a desired state, say \(\sigma\). To measure the success of the transfer operation, one can use the figure of merit \(F([\rho_{t}^{\tau}]_{Y},\sigma)\), where \(F(\rho,\sigma)=\left\|\sqrt{\rho}\sqrt{\sigma}\right\|_{S_{1}}\) is the _fidelity_. Here \(\|\rho\|_{S_{1}}:=\operatorname{Tr}(|\rho|)\) is the Schatten 1-norm. One would like to find \(\tau\) that maximizes \(F([\rho_{t}^{\tau}]_{Y},\sigma)\). Using this figure of merit, one can estimate the upper bound on the state transfer time. On the other hand, to show that state transfer is impossible in a given time interval, we would compare \(\rho_{t}^{\tau}\) and \(\rho_{t}:=\alpha_{t}^{\prime}(\rho)\) by using \(F([\rho_{t}^{\tau}]_{Y},[\rho_{t}]_{Y})\) as a figure of merit (c.f. [9; 10]), and try to show that it is close to 1 for \(t\leq t_{*}\) and \(\forall\) state preparation (unitary) maps \(\tau\) localized in \(X\). If this is true, then clearly using \(\tau\)'s localized in \(X\) does not affect states in \(Y\). Let \(\tau(\rho)=U\rho U^{*}\equiv\rho^{U}\), where \(U\) is a unitary operator. **Theorem II.6** (Quantum control bound).: _Let the assumptions of Theorem II.2 hold with \(n\geq 1\), \(X\subset\Lambda\), and \(\omega\equiv\omega_{\rho}\), and let \(\rho\) be a pure state. Then, for every \(c>8\kappa\), there exists \(C=C(n,\kappa_{n},\nu_{n},c)>0\) s.th. \(\forall\;\xi\geq 4\), \(Y\subset\Lambda\) with \(\operatorname{dist}(X,Y)\geq 2\xi\), and unitary operator \(U\in\mathcal{B}_{X}\) (see (II.8)), we have, \(\forall\;\left|t\right|<\xi/c\):_ \[F(\operatorname{Tr}_{Y^{\rm c}}(\alpha_{t}^{\prime}(\rho)),\operatorname{Tr}_{Y^{\rm c}}(\alpha_{t}^{\prime}(\rho^{U})))\geq 1-C\left|t\right|\xi^{-n}\operatorname{Tr}(N_{X}^{2}\rho).\] The estimate above imposes a lower bound on the time for the best-possible quantum control protocols for the quantum many-body dynamics. (We view Theorem II.6 as a bound on the quantum control time, rather than on the time for state transfer as in [9; 10].) Spectral gap and decay of correlation. Denote by \(\Omega\) the ground state of the Hamiltonian \(H\) in (I.1). **Theorem II.7** (Gap at the ground state implies decay of ground state correlations).: _Suppose \(H\) in (I.1) has a spectral gap of size \(\gamma>0\) at the ground state energy. Suppose the assumptions of Theorem II.2 hold with \(n\geq 1,\,X\subset\Lambda\), and \(\omega=\left\langle\Omega,\,(\cdot)\Omega\right\rangle.\) Then, there exists \(C=C(n,\kappa_{n},\nu_{n})>0\) s.th. \(\forall\;\xi\geq 1\), \(Y\subset\Lambda\) with \(\operatorname{dist}(X,Y)\geq 2\xi\), and operators \(A\in\mathcal{B}_{X}\), \(B\in\mathcal{B}_{Y}\), we have:_ \[\left|\omega(BA)\right|\leq C\left\|A\right\|\left\|B\right\|(\gamma^{-1}\xi^{-2}+\xi^{1-n}\omega(N_{X}^{2})).\] (II.15) LC in macroscopic particle transport. For a given \(S\subset\Lambda\), we define the (macroscopic) local relative particle numbers as \(\bar{N}_{S}:=\frac{N_{S}}{N_{\Lambda}}.\) For \(0\leq\nu\leq 1\), we write \(P_{\bar{N}_{S}\leq\nu}\), \(P_{\bar{N}_{S}\geq\nu}\) for the spectral projections associated with \(\bar{N}_{S}\). **Theorem II.8** (LC for macroscopic particle transport).: _Suppose \(\kappa_{n}<\infty\) with some \(n\geq 1\) (see (I.2))._ 
Suppose the initial state \(\omega\in\mathcal{D}\) satisfies \(\omega(P_{\bar{N}_{X}\geq\nu})=0\) with some \(\nu\geq 0,\,X\subset\Lambda\). Then, \(\forall\;\nu^{\prime}>\nu\), \(c>\kappa,\) there exists \(C=C(n,\kappa_{n},c,\nu^{\prime}-\nu)>0\) s.th. \(\forall\;\eta\geq 1\), \(\left|t\right|<\eta/c\):_ \[\omega_{t}\left(P_{\bar{N}_{X}\geq\nu^{\prime}}\right)\leq C\eta^{-n}.\] (II.16) Note that (a) estimate (II.16) holds for rather general initial states (including ones with particle densities uniformly bounded from below) and controls macroscopic fractions of particles and (b) the constant \(C>0\) depends on system parameters but is independent of \(\Lambda\) and stays bounded in the thermodynamic limit. Extensions. Results from the preceding subsections can be extended to (a) time-dependent one-particle and two-particle operators \(h\) and \(v\) satisfying (I.2) uniformly in time, (b) few-body potentials in (I.1), (c) observables which are polynomials in \(\{a_{x},\,a_{x}^{*}\}_{x\in\Lambda}\), and (d) Fermi systems. ## III Main ideas in the proofs of Theorems II.1-II.3 and II.8 Theorem II.1. Recall that the second quantization \(\mathrm{d}\Gamma\) of 1-particle operators \(b\) on \(\mathfrak{h}\equiv\ell^{2}(\Lambda)\) is given by \(\operatorname{d}\Gamma(b):=\sum_{x,y\in\Lambda}b_{xy}a_{x}^{*}a_{y}\), where \(b_{xy}\) is the matrix of \(b\). As we identify a function \(f:\Lambda\to\mathbb{C}\) with the multiplication operator induced by it on \(\mathfrak{h}\equiv\ell^{2}(\Lambda)\), we can write \[\hat{f}:=\mathrm{d}\Gamma(f)=\sum_{x\in\Lambda}f(x)a_{x}^{*}a_{x}.\] We denote by \(\chi_{S}^{\sharp}\) the characteristic function of a subset \(S\subset\Lambda\). For \(f=\chi_{S}^{\sharp}\), the above gives the local particle number operators \(N_{S}\equiv\mathrm{d}\Gamma(\chi_{S}^{\sharp})\). As in [10, 17], we control the time evolution associated to (I.1) by a _recursive monotonicity estimate_ (RME) for _adiabatic spacetime localization observables_ (ASTLOs): \[\hat{\chi}_{ts}:=\mathrm{d}\Gamma(\chi_{ts}),\quad\chi_{ts}=\chi\left(\tfrac{d_{X}-vt}{s}\right),\] (III.1) where \(s>t\geq 0\), \(d_{X}\) is the distance function to \(X\), \(v\in(\kappa,c)\) (with \(\kappa\) from (II.4) and \(c\) from the statement of Theorem II.1), and \(\chi\) is a smooth cutoff function with \(\mathrm{supp}\,\chi^{\prime}\subset(0,c-v)\), \(\chi^{\prime}\geq 0\). For a differentiable path of observables, define the Heisenberg derivative \(DA(t)=\tfrac{\partial}{\partial t}A(t)+i[H,A(t)]\), with \[\partial_{t}\alpha_{t}(A(t))=\alpha_{t}(DA(t)).\] (III.2) **Theorem III.1** (RME).: _Suppose the assumptions of Theorem II.1 hold. Then, for every \(\chi\in\mathcal{X}\), there exist \(C=C(n,\kappa_{n},\chi)>0\) and, for \(n\geq 2\), functions \(\xi^{k}=\xi^{k}(\chi)\in\mathcal{X}\), \(k=2,\ldots,n\), s.th. \(\forall\;s,t>0\),_ \[D\hat{\chi}_{ts}\leq-\,\frac{(v-\kappa)\widehat{\chi^{\prime}}_{ts}}{s}+C\sum_{k=2}^{n}\frac{\widehat{(\xi^{k})^{\prime}}_{ts}}{s^{k}}+\frac{CN}{s^{n+1}}.\] (III.3) _(The sum in the r.h.s. is dropped if \(n=1\).)_ Since the second term on the r.h.s. is of the same form as the leading, negative term (recall \(v>\kappa\) in (III.1)), estimate (III.3) can be bootstrapped to obtain an integral inequality with \(O(s^{-n})\) remainder. Thus we call (III.3) the _recursive monotonicity estimate_ (RME). 
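For intuition, a cutoff of the kind entering (III.1) can be realized explicitly by the standard bump-function construction; the following small sketch (an illustration, not taken from the paper, with arbitrary assumed values of \(v\), \(c\), \(s\)) shows the ASTLO profile \(\chi_{ts}\) sweeping outward at speed \(v\) along the light cone.

```python
# Smooth cutoff chi with chi' >= 0 supported in (0, c - v), applied as
# chi_ts(x) = chi((d_X(x) - v t)/s) for X = the half-line (-inf, 0].
import numpy as np

def smooth_step(u):
    """C-infinity step: 0 for u <= 0, 1 for u >= 1, strictly increasing in between."""
    f = lambda r: np.where(r > 0, np.exp(-1.0 / np.clip(r, 1e-12, None)), 0.0)
    return f(u) / (f(u) + f(1.0 - u))

v, c, s = 1.0, 2.0, 10.0                  # need kappa < v < c; illustrative values only
xg = np.linspace(0.0, 120.0, 2001)        # d_X(x) = x on this half-line geometry
for t in (0.0, 5.0, 10.0):
    # rescaling the step onto (0, c - v) puts supp(chi') exactly where (III.1) requires
    chi_ts = smooth_step((xg - v * t) / (s * (c - v)))
    front = xg[np.searchsorted(chi_ts, 0.5)]
    print(f"t = {t}: chi_ts reaches 1/2 at distance ~ {front:.1f}")
```

The half-height point of \(\chi_{ts}\) moves at exactly the chosen velocity \(v\), which is the mechanism the RME exploits: the observable tracks, adiabatically in \(s\), the region just outside the propagating front.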
Inequality (III.3) can be derived from a similar inequality for the 1-particle observable \(\chi_{ts}\): \[d\chi_{ts}\leq-\frac{v-\kappa}{s}\chi_{ts}^{\prime}+C\,\sum_{k=2}^{n}\frac{(\xi^{k})_{ts}^{\prime}}{s^{k}}+Cs^{-(n+1)},\] (III.4) where \(db\) is the 1-particle Heisenberg derivative, \(db(t):=\partial_{t}b(t)+i[h,b(t)]\), for a 1-particle operator-family \(b(t)\) on \(\mathfrak{h}\), and the rest of the symbols are as in Theorem III.1. Theorem II.2. Let \(A_{t}=\alpha_{t}(A)\) and \(A_{t}^{\xi}\equiv\alpha_{t}^{X_{\xi}}(A)\). By the fundamental theorem of calculus, we have \(A_{t}-A_{t}^{\xi}=\int_{0}^{t}\partial_{r}\alpha_{r}(\alpha_{t-r}^{X_{\xi}}(A))\,dr.\) Using identity (III.2) for \(\alpha_{r}\) and \(\alpha_{t-r}^{X_{\xi}}\) in the integrand above, as well as the fact that \(\alpha_{t-r}^{X_{\xi}}([H_{X_{\xi}},A])=[H_{X_{\xi}},\alpha_{t-r}^{X_{\xi}}(A)]\), we find \[A_{t}-A_{t}^{\xi}=i\int_{0}^{t}\alpha_{r}([R^{\prime},A_{t-r}^{\xi}])\,dr,\] (III.5) where \(R^{\prime}:=H-H_{X_{\xi}}\). Since \(A_{s}^{\xi}\) is localized in \(X_{\xi}\), only terms in \(R^{\prime}\) which connect \(X_{\xi}\) and \(X_{\xi}^{\rm c}\) contribute to \([R^{\prime},A_{t-r}^{\xi}]\) (see Figure 1). Assuming \(h\) and \(v\) are finite-range, we see that the commutator \([R^{\prime},A_{t-r}^{\xi}]\) is localized near the boundary \(\partial X_{\xi}\). Considering for simplicity the _Hubbard model_, i.e. \(v_{xy}=\lambda\delta_{xy}\), \(\lambda\in\mathbb{R}\), and assuming \(A\) and thus \(A_{s}^{\xi}\) are self-adjoint, we bound \(i[R^{\prime},A_{s}^{\xi}]\leq C\left\|A\right\|N_{\partial X_{\xi}}.\) Next, we take \(X\) so that \(X^{\rm c}\) is 'bounded', i.e. independent of \(\Lambda\) (see Figure 1 below), and set \(Y:=X_{\xi}^{\rm c}\), so that \(X^{\rm c}=Y_{\xi}\). Then MVB (II.5) gives the 'incoming' light cone estimate \[\alpha_{r}(N_{Y})\leq C(N_{Y_{\xi}}+\xi^{-n}N),\;r\leq\xi/c,\;c>2\kappa.\] Inserting the last two estimates into (III.5) and using \(r\leq\xi/c\), \(c>2\kappa\) and \((\partial X_{\xi})_{\xi}=X_{2\xi}\setminus X\) yields \(\left|\omega\big{(}\alpha_{t}(A)-\alpha_{t}^{X_{\xi}}(A)\big{)}\right|\leq C\left|t\right|\left\|A\right\|\left(\omega(N_{X_{2\xi}\setminus X})+\xi^{-n}\omega(N)\right)\). Since \(X_{2\xi}\setminus X\) lies in the particle-free region \(X^{\rm c}\), this, together with \(\omega(N^{p})=\omega(N_{X}^{p}),p=1,2\), and (II.9), gives the Hubbard version of (II.10) in the finite-range case. For infinite-range \(h\) and \(v\), we refine the argument presented above. Let \(X_{a,b}:=X_{b}\setminus X_{a}\) for \(b>a\geq 0\). To estimate \([R^{\prime},A_{t-r}^{\xi}]\), we split the annulus \(X_{0,2\xi}\) into four annuli, say \(X_{j}:=X_{(j-1)\xi/2,\,j\xi/2}\), \(j=1,\ldots,4\). In \(X_{2}\), \(X_{3}\), we use the MVB from Thm. II.1 and in \(X_{1}\), \(X_{4}\), the decay properties of \(h_{xy}\) and \(v_{xy}\) as \(|x-y|\to\infty\). Theorem II.3. Write \(A_{t}=A_{t}^{\xi}+\mathrm{Rem}(A,t)\) with \(\mathrm{Rem}(A,t)\) satisfying (II.10). Plugging this into \(\omega([A_{t},B])\) and using that \([A_{t}^{\xi},B]=0\), we obtain (II.11). Theorem II.8. For \(\chi\in\mathcal{X}\) and a suitable smooth cutoff function \(f\) with derivative supported in \((0,\nu^{\prime}-\nu)\), we define the ASTLOs \(\bar{\chi}_{ts}:=\hat{\chi}_{ts}/N\), with \(f_{ts}:=f(\bar{\chi}_{ts})\), for \(|t|<s\). 
Using (III.3) and the 'integral chain rule' \(Df_{ts}=\int R_{ts}(z)D\bar{\chi}_{ts}R_{ts}(z)\,d\tilde{f}(z)\), where \(R_{ts}(z)=(z-\bar{\chi}_{ts})^{-1}\) for \(\mathrm{Im}z\neq 0\) and \(d\tilde{f}(z)\) is a complex measure vanishing for \(\mathrm{Im}z=0\), we obtain the RME for \(f_{ts}\): \[Df_{ts}\leq f_{ts}^{\prime}\left(\frac{\kappa-v}{s}\overline{\chi^{\prime}}_{ts}+\sum_{k=2}^{n}s^{-k}\overline{(\xi^{k})^{\prime}}_{ts}+Cs^{-(n+1)}\right).\] From here we proceed as in the proof of Theorem II.1. ###### Acknowledgements. The first author is grateful to Jeremy Faupin, Marius Lemm, and Avy Soffer for enjoyable and fruitful collaborations. Both authors thank M. Lemm for helpful discussions. The research of I.M.S. is supported in part by NSERC Grant NA7901. J. Zhang is supported by DNRF Grant CPH-GEOTOP-DNRF151, DAHES Fellowship Grant 2076-00006B, DFF Grant 7027-00110B, and the Carlsberg Foundation Grant CF21-0680. His research was also supported in part by NSERC Grant NA7901. Parts of this work were done while the second author was visiting MIT.
2302.11353
Polymerization in magnetic metamaterials
We numerically study a mesoscopic system consisting of magnetic nanorings in the presence of thermal magnetization fluctuations. We find the formation of dipolar-field-mediated ``bonds" promoting the formation of annuli clusters, where the amount of bonds between two rings varies between zero and two. This system resembles the formation of polymers from artificial atoms, which in our case are the annuli and where the valency of the atom is set by the ring multipolarity. We investigate the thermodynamic properties of the resulting structures, and find a transition associated with the formation of the bonds. In addition, we find that the system has a tendency to form topological structures, with a distinct critical temperature in relation to the one for bond formation.
Samuel D. Slöetjes, Matías P. Grassi, Vassilios Kapaklis
2023-02-22T12:55:13Z
http://arxiv.org/abs/2302.11353v2
# Polymerization in magnetic metamaterials ###### Abstract We numerically study a mesoscopic system consisting of magnetic nanorings in the presence of thermal magnetization fluctuations. We find the formation of dipolar-field-mediated "bonds" promoting the formation of annuli clusters, where the amount of bonds between two rings varies between zero and two. This system resembles the formation of polymers from artificial atoms, which in our case are the annuli and where the valency of the atom is set by the ring multipolarity. We investigate the thermodynamic properties of the resulting structures, and find a transition associated with the formation of the bonds. In addition, we find that the system has a tendency to form topological structures, with a distinct critical temperature in relation to the one for bond formation. Polymerization typically refers to the process in which monomer molecules react to form larger chain-like or three-dimensional molecular networks, polymers. Essential for these processes are the ability of monomers to bond with other monomers and their steric effects, which relate to the way atoms can arrange spatially. Extrapolating these observations into the realm of magnetism and artificial arrays of mesoscopic magnetic entities allows for a new look and design approach for the collective magnetic order and dynamics of magnetic metamaterials [1]. Important to this shift in perspective is introducing the concept of bonds between magnetic entities, captured by the dipole coupling between them and the associated magnetostatic charges. The formation of 1D chains in magnetic metamaterials has been observed before, for example in square arrays of circular disks in the form of antiferromagnetic lines [2; 3], which can be regarded as polymers with a trivial topology. Another 1D entity in metamaterials consisting of bistable elements are Dirac strings, which connect certain high energy vertex configurations [4; 5]. However, structures with topologies beyond that of an open chain remain scarcely explored. The magnetic texture within the individual elements can be harnessed in order to allow for increased complexity of the emergent artificial structures in these materials. One way to achieve this is by altering the topology of the building blocks themselves. Tailoring of the topology in magnetic metamaterials has so far only been realized on the lattice level, for the realization of frustrated magnetic systems [7], and has led, for example, to the investigations of Shakti [8; 9], Saint George [10], and Tetris [11] artificial spin ice, among others. In this work, we will consider a system consisting of building blocks with an altered topology compared to the usual Ising-like (elongated) or disk-shaped mesospins (for a classification scheme see Table 1 by Skovdal _et al._[12]), namely magnetic rings. Nanomagnetic rings have been studied previously, albeit in a different context. Early efforts consisted of studying domain walls in single rings [13; 14; 15], focusing on switching between micromagnetic states in the ring, with subsequent investigations of the dynamics [16; 17]. Studies on the thermally driven dynamics in ring systems are scarce, and only focus on the thermally excited transition from a vortex state to an onion state [18]. Experimentally, Laufenberg _et al._[19] have reported on the observation of coupling between rings in patterned arrays, showing that it is possible to realize such systems. 
More recently, it was demonstrated that it is possible to do basic neuromorphic computations in arrays of connected rings, utilizing the domain walls in these arrays [20]. However, in all of these works, the temperature was not a parameter of interest and concepts like emerging order and phase transitions were not considered. Here, we inspect the temperature-dependent magnetic order in this system, and find the formation of clusters, where the amount of bonds between rings can be more than one. Consequently, this system mimics the formation of polymers coupling together a significant number of individual magnetic textures, with the bonding valency set by the multipolarity of a ring. Figure 1: (a) Magnetic rings, each with two domain walls with opposite-sign magnetostatic charges [6]. These charges can couple across the elements, forming bonds via stray fields (gray shaded areas). (b) Energy landscape for two domain wall states in magnetic nanorings, as a function of the net magnetic moment of the ring (indicated on the right). We used the micromagnetic simulation package MuMax3, which solves the Landau-Lifshitz-Gilbert equation for a grid of cells describing magnetic moments [21]. The cell size was \(l_{x}\times l_{y}\times l_{z}\) = 2.5 nm \(\times\) 2.5 nm \(\times\) 4 nm, where \(l_{z}\) is equal to the thickness of the rings. The saturation magnetization and exchange stiffness are \(M_{\mathrm{S}}\) = 1\(\times\)10\({}^{6}\) A/m and \(A_{\mathrm{ex}}\) = 1\(\times\)10\({}^{-11}\) J/m, and the damping is \(\alpha\) = 0.01, effectively describing a material that resembles Permalloy. The rings have an outer and inner radius of \(r_{o}\) = 125 nm and \(r_{i}\) = 75 nm, respectively. The square grid in the simulation provides an effective fourfold anisotropy to the rings due to the corrugated edges, which was partially compensated by a cubic anisotropy set along the \([1,1]\) and \([1,-1]\) directions, resulting in a weak 8-fold anisotropy. The rings were placed on a square 16 \(\times\) 16 grid, with a mutual spacing of \(d\) = 50 nm. A finite temperature was taken into account by way of a stochastic field which is proportional to the square root of the temperature, \(\sqrt{T}\), and uncorrelated both in space and time [22]. We set out by considering the magnetization within a single ring. Magnetic rings have a fundamentally different magnetic texture than conventional nanomagnet elements, as the topology is the same as that of a strip with continuous boundary conditions. As such, their coupling to neighbouring elements occurs via the stray field emitted from domain walls, instead of via the majority part of the magnetic texture, as is conventional (see Fig. 1a). When exposed to temperature, the domain walls are free to move around the magnetic ring. This results in a key difference compared to magnetic disks, namely the additional freedom of the domain walls to move with respect to each other without a high energy cost. The amount of bonds available for one ring, and thereby the valency, depends on the amount and type of domain walls. A ring with a vortex state has zero bonds, an onion state provides a double valency, and an antivortex state has a quadruple valency. In most of the cases, the sum of the magnetostatic charges is zero, i.e., in the case of the onion state the two magnetostatic charges are \(+\) and \(-\). 
This is the case if the winding numbers of the topological defects on the outer edges, as defined by Tchernyshyov and Chern [6], are positive (\(n=+1/2\)), and the ones on the inner edges are negative (\(n=-1/2\)). This configuration is most often the case, since defects with positive topological numbers have the lowest energy on positively curved edges, and vice versa for defects with \(n=-1/2\) on negatively curved edges. In some cases, \(n=-1/2\) charges can be found on the outer edges of the ring, which results in an uncompensated net magnetostatic charge. In such a case, the state can decay through annihilation of an \(n=+1/2\) charge. The energy landscape of the magnetization in the ring can be mapped out as a function of the total net in-plane magnetization (its components being \(m_{x}\) and \(m_{y}\)), as shown in Fig. 1b for the case of two domain walls. In this case, the energy landscape is flat in the azimuthal direction, due to the rotational symmetry of the ring, but curved in the radial direction. The landscape features one deep inner trough and a shallow outer rim. The deep inner trough represents the groundstate, corresponding to the configuration in which the domain walls are close together, due to the attractive interaction between magnetostatic charges, but still remain apart due to the repulsive interaction of the topological charges. This repulsive interaction leads to an upturn in the energy landscape at the smallest net magnetization values. The outer rim in the energy landscape corresponds to a state where the two domain walls are on opposite sides of the ring. Figure 2: (a) Excerpt from the 16 \(\times\) 16 array of rings at \(\widetilde{T}=0.171\), after relaxing from a random magnetization. The colormap is given by the dot product between the normalized radial and magnetization vectors, and represents the magnetostatic charge of the domain wall. The bonding field, \(H_{\mathrm{b}}\), is shown in gray. (b) Average cluster size for different reduced temperatures \(\widetilde{T}=k_{\mathrm{B}}T/E_{\mathrm{b}}\), starting from a random configuration. The window average of the data is shown in blue, the red dots are the raw data. (c) Time evolution of the average domain wall density, as a function of azimuthal position with respect to the individual magnetic rings (calculated as \(\langle|\hat{\mathbf{n}}\cdot\hat{\mathbf{m}}|\rangle\)), for three different temperatures. When magnetic rings are organized in an array, and starting from a paramagnetic state, the emergent order is not immediately apparent from just considering the magnetization. As mentioned previously, the rings can couple to one another through the stray fields, which are produced at the domain walls in the magnetic texture of the ring. In order to define a bond, we introduce a scalar "_bonding field_", \(H_{\rm b}\), which has binary values, obtained by thresholding the demagnetizing field, \(|{\bf H}_{\rm dem}|\): \[|{\bf H}_{\rm dem}(x,y)| > H_{\rm thresh}\to H_{\rm b}(x,y)=1\] \[|{\bf H}_{\rm dem}(x,y)| < H_{\rm thresh}\to H_{\rm b}(x,y)=0\] We used \(\mu_{0}H_{\rm thresh}\) = 0.03 T, and details on the justification of this value and further identification of bonds can be found in the supplementary material. If two rings are connected by \(H_{\rm b}\), this defines a bond. In order to reveal the underlying order, \(H_{\rm b}\) must be included in the visualisation. The appearance of bonds is shown in Fig. 
2a, where bond coupling causes clustering into polymers, with the individual rings considered as monomers. As such, the type of order that emerges is of a percolative nature. We will now inspect the behaviour of the artificial polymers upon thermal excitation. In this analysis, the order parameter is taken to be the amount of bonds on the lattice, \(N_{\rm b}\). The energy needed to break one bond is \(E_{\rm b}=0.50\) eV [23], and henceforth we will make use of a dimensionless temperature, scaled by this value, \(\widetilde{T}=k_{\rm B}T/E_{\rm b}\), where \(k_{\rm B}\) is the Boltzmann constant. We have simulated the magnetization for seven different temperatures between \(\widetilde{T}=0.043\) and 0.341, for a duration of 200 ns, in each case starting from the same magnetization state. This initial state has 56 rings with vortex states, 151 rings with two charges, 48 rings with four charges, and one ring with six charges. This initial magnetization state is relaxed from a random magnetization in the absence of temperature, and is not the groundstate, but rather a metastable state with \(N_{\rm b}=96\). As time progresses, it can be seen in Fig. 2b that the amount of bonds on the lattice increases for temperatures up to \(\widetilde{T}=0.256\). In the cases of \(\widetilde{T}=0.085\) to 0.171, there is a steep increase in \(N_{\rm b}\) during the initial 15-20 ns, after which the rate slows down. We attribute this increase to the annihilation of charges (domain walls) with an inverted winding, thus providing more space for the other charges to move to a bonding position (see supplementary material for the total amount of domain walls over time). An exception is the high temperature case of \(\widetilde{T}=0.341\), where the thermal energy is high enough to annihilate domain walls with regular winding on the timescales of the simulation. Overall, the largest increase in \(N_{\rm b}(t)\) over 200 ns is seen to occur for a temperature of \(\widetilde{T}=0.171\). This behaviour can be rationalized by the fact that, below this temperature, increased thermal agitation leads to a higher annealing rate, which means that it can be expected that eventually all cases with \(\widetilde{T}\leq 0.171\) converge to the same value of \(N_{\rm b}(t)\). At higher temperatures, the entropy begins to prevail, which results in the breaking of bonds and unpairing of charges. As such, we can establish that there exists a ceiling temperature around \(\widetilde{T}^{*}=0.171\), analogous to the ceiling temperature in realistic polymer systems, at which the rate of polymerization equals that of depolymerization. This behaviour signifies a soft transition, involving the unpairing of charges, thereby bearing resemblance to a Berezinskii-Kosterlitz-Thouless phase transition [24; 25]. At the critical temperature, the system optimizes the mobility of the domain walls to find bonding positions. A detailed investigation of this transition is left for future work. The difference in the behaviour of the system as the temperature is varied is ultimately reflected in the positions of the domain walls, whose motion is governed by the underlying energy landscape. The domain wall positions are tracked, and a histogram was made as a function of their position in polar angle, for all timesteps, shown in Fig. 2c. At low temperatures, the domain wall positions strongly reflect the 8-fold anisotropy of the individual rings. 
When the temperature is increased, the system has enough energy to overcome this anisotropy, and the domain wall position is mostly dominated by the lattice symmetry, as seen from the four-fold anisotropy of the histogram. When the temperature is increased further to \(\widetilde{T}=0.341\), we observe that the domain walls explore the full phase space, unimpeded by local or global anisotropies. As such, the system becomes fully ergodic at the highest temperature. We observe that the tendency of the ring system to compensate the magnetic charges leads to the formation of emergent topological structures on the next length scale, namely loops, as can be seen in Fig. 3b. The typical size of these loops is two and four rings. Loops play an important role in conventional artificial spin ice systems, where they realize a topological model system, in which trivial and non-trivial loops can be distinguished [26; 27]. The (possibly degenerate) groundstate of the ring lattice must feature only loops, in order for all charges to be compensated. As such, we expect an increased amount of loops at lower temperatures. This tendency to form loops is enhanced by the intra-ring interactions, which favour small angles between domain walls, causing the emerging polymers to bend. This can be contrasted to a lattice of disks, which favours straight lines due to the contribution of the exchange interaction in the interior [28; 2; 3; 29]. Moreover, rings with four domain walls serve as pinch points in the polymers, also promoting the formation of loops. In the following, we shall investigate the thermal dynamics of these loops. We start by considering the average polymer length, given by \(\langle L\rangle\). This parameter is closely related to \(N_{\rm b}\). However, if one is to relate the two quantities mathematically, the amount of loops, \(l\), must also be taken into account: \[\langle L\rangle=\frac{N}{N-N_{\rm b}+l} \tag{1}\] where \(N\) is the total amount of rings. As such, once \(\langle L\rangle\) and \(N_{\rm b}\) are found, \(l\) can be determined via this relationship. The amount of loops over time is shown for different temperatures in Fig. 3a. For lower temperatures, the amount of loops initially (near \(t=0\)) drops drastically, before increasing again. The initial relative drop in loops (\(\Delta l/l(t=0)\)) is much larger than that for the amount of bonds (\(\Delta N_{\rm b}/N_{\rm b}(t=0)\)). This can perhaps be attributed to a different relaxation time associated with loop formation versus bond formation. The total amount of loops fluctuates between 0 and 18, where the maximum is seen for \(\widetilde{T}=0.128\), in contrast to the number of bonds, which stabilizes at \(\widetilde{T}=0.171\). This contrast suggests a different critical temperature associated with bond formation versus that of loop formation. We investigate this possibility by inspecting the susceptibility (see Fig. 3c), which is calculated as \(\chi_{x}(T)=(1/k_{\rm B}T)\sigma_{x}^{2}\), where \(\sigma_{x}=\sqrt{\langle x^{2}\rangle-\langle x\rangle^{2}}\) is the standard deviation of \(x\), with \(x=N_{\rm b}\) or \(l\) [30]. A peak in the susceptibility is associated with a maximum in the fluctuations, and could indicate the occurrence of a phase transition. We observe peaks in the susceptibilities for both \(\chi_{N_{\rm b}}\) and \(\chi_{l}\). However, the susceptibility of the bonds peaks at \(\widetilde{T}=\widetilde{T}^{*}=0.171\), while the susceptibility peak for the loops occurs at \(\widetilde{T}\) = 0.128. 
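The loop count entering Eq. (1) can be extracted from the bond list alone. The following post-processing sketch (an illustration, not the authors' analysis code; the bond configuration is a hypothetical toy example) counts clusters with a union-find pass and obtains the number of independent loops from the Euler relation, cycles = edges minus vertices plus components, which also makes the bookkeeping behind Eq. (1) explicit.

```python
import numpy as np

def bonds_clusters_loops(n_rings, bonds):
    """bonds: list of (i, j) ring-index pairs; returns (N_b, number of clusters, l)."""
    parent = list(range(n_rings))
    def find(a):                          # union-find with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i, j in bonds:
        parent[find(i)] = find(j)
    n_clusters = len({find(a) for a in range(n_rings)})
    n_loops = len(bonds) - n_rings + n_clusters   # Euler: cycles = E - V + C
    return len(bonds), n_clusters, n_loops

n = 16
idx = lambda r, c: r * n + c
# toy configuration: one 4-ring loop (closed plaquette) plus an open 3-ring chain;
# a double bond between two rings (a 2-ring loop) would simply appear twice in the list
bonds = [(idx(0, 0), idx(0, 1)), (idx(0, 1), idx(1, 1)), (idx(1, 1), idx(1, 0)),
         (idx(1, 0), idx(0, 0)), (idx(5, 5), idx(5, 6)), (idx(5, 6), idx(5, 7))]
N_b, n_cl, l = bonds_clusters_loops(n * n, bonds)
print(f"N_b = {N_b}, clusters = {n_cl}, loops = {l}, <L> = {n*n/(n*n - N_b + l):.3f}")

def susceptibility(series, kBT):
    """chi_x = variance of the time series of x divided by k_B T."""
    return np.var(series) / kBT
```

Applied to the full 16 \(\times\) 16 bond field at each time step, the same routine yields the \(N_{\rm b}(t)\) and \(l(t)\) traces whose variances enter the susceptibilities of Fig. 3c.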
The susceptibility for the loops was also calculated when normalized by the amount of bonds, \(\chi_{l/N_{\rm b}}\), and it was found that the peak remained localized at \(\widetilde{T}=0.128\). The difference in critical temperature for these two entities is surprising, as the loops consist of bonds, and one would therefore expect the same critical temperature. However, there is one crucial difference: whereas bond formation only depends on inter-ring interactions, the formation of loops depends on both inter- and intra-ring interactions, thus setting a different energy scale. Additionally, the entropy associated with \(N_{\rm b}\) and \(l\) should be considered. The configurational entropy associated with placing a bond on a lattice is smaller than the entropy associated with placing structures on the lattice that consist of multiple bonds, such as loops, thus reducing the critical temperature for \(l\) with respect to \(N_{\rm b}\). In conclusion, we have investigated a novel artificial spin system, in which the topology of the individual building blocks is altered with respect to conventional magnetic metamaterials. We have found that bonding between elements occurs via stray fields emitted and absorbed by domain walls, giving rise to a magnetic order that resembles polymerization. Moreover, the formation of bonds is associated with a thermodynamic transition. On the next length scale, we observe an additional transition associated with the formation of a topological structure known as a loop, with a definite handedness. The critical temperatures associated with the transitions of these two entities differ, which we attribute to different energy scales and configurational entropy. The data that support the reported findings are available upon reasonable request. ## Acknowledgements We wish to thank Prof. Bjorgvin Hjorvarsson for fruitful discussions. S.D.S. and V.K. acknowledge support from the Swedish Research Council (Project No. 2019-03581). M.P.G. and V.K. also acknowledge support from the Carl Trygger Foundation (Project No. CTS21:1219). The authors have no conflicts of interest to disclose. Figure 3: (a) Amount of loop-clusters for different temperatures, starting from a random configuration. The blue line indicates the windowed average, the red dots are the raw data. (b) Examples of loop clusters spanning 2 and 4 rings. (c) The susceptibility is shown for different temperatures for the amount of bonds (upper panel), the amount of loops (middle panel), and for the amount of loops normalized by the amount of bonds (lower panel).
2308.04063
Geographical space based on urban allometry and fractal dimension
The conventional concept of geographical space mainly refers to the actual space based on landscapes, maps, and remote sensing images. However, this notion of space is not enough to interpret different types of fractal dimension of cities. The fractal dimensions derived from Zipf's law and time series analysis do not belong to the traditional geographical space. Based on the nature of the datasets, urban allometry can be divided into three types: longitudinal allometry indicating time, transversal allometry indicating hierarchy, and isoline allometry indicating space. According to the principle of dimension consistency, an allometric scaling exponent must be a ratio of one fractal dimension to another. From the abovementioned three allometric models, we can derive three sets of fractal dimension. In light of the three sets of fractal dimension and the principle of dimension uniqueness, urban geographical space falls into three categories: the real space based on isoline allometry and spatial distribution, the phase space based on longitudinal allometry and time series, and the order space based on transversal allometry and rank-size distribution. The generalized space not only helps to explain the various fractal dimensions of cities, but can also be used to develop new theory and methods of geospatial analysis.
Yanguang Chen
2023-08-08T05:55:45Z
http://arxiv.org/abs/2308.04063v1
# Geographical Space Based on Urban Allometry and Fractal Dimension ###### Abstract The conventional concept of geographical space mainly refers to the actual space based on landscapes, maps, and remote sensing images. However, this notion of space is not enough to interpret different types of fractal dimension of cities. The fractal dimensions derived from Zipf's law and time series analysis do not belong to the traditional geographical space. Based on the nature of the datasets, urban allometry can be divided into three types: longitudinal allometry indicating time, transversal allometry indicating hierarchy, and isoline allometry indicating space. According to the principle of dimension consistency, an allometric scaling exponent must be a ratio of one fractal dimension to another. From the abovementioned three allometric models, we can derive three sets of fractal dimension. In light of the three sets of fractal dimension and the principle of dimension uniqueness, urban geographical space falls into three categories: the real space based on isoline allometry and spatial distribution, the phase space based on longitudinal allometry and time series, and the order space based on transversal allometry and rank-size distribution. The generalized space not only helps to explain the various fractal dimensions of cities, but can also be used to develop new theory and methods of geospatial analysis. Keywords:allometry; fractals; hierarchy; scaling; cities; geographical space ## 1 Introduction Three basic laws are important in geographical analysis, that is, the distance decay law, the rank-size law, and the allometric growth law. The first is a spatial law, the second is a hierarchical law, and the third is originally a temporal law. Nowadays, the law of allometric growth has been generalized to an allometric scaling law and can be used to associate space and time with hierarchy. In urban studies, the allometric scaling law has been employed to describe urban growth, urban form, and urban systems. The law of allometric growth initially implied that the rate of relative growth of an organ is a constant fraction of the rate of relative growth of the total organism (Beckmann, 1958; Lee, 1989). In general system theory, allometry means a constant ratio of one relative growth rate to another (Bertalanffy, 1968). We can find two types of allometric scaling relations. First, the ratio of the relative growth rate of a part to that of the whole is a constant; for example, the relationship between a central city and a system of cities (Beckmann, 1958; Chen, 2017). Second, the ratio of the relative growth rate of one part to that of another part is a constant; for example, the relationships between urban perimeter and urban area (Batty and Longley, 1994; Benguigui _et al_, 2006), between urban area and population (Batty and Longley, 1994; Lee, 1989), between two cities (Chen, 2017), and so on (West, 2017). The question is how to interpret the allometric scaling exponent. This involves two basic principles about dimension, that is, the dimension consistency principle and the dimension uniqueness principle. An allometric relation always takes on a power law, and a power law is in essence a geometric measure relation, which obeys the principle of dimensional consistency. In this sense, an allometric scaling exponent is a ratio of one dimension value to another dimension value. 
Unfortunately, it is hard to explain the empirical values of allometric scaling exponents using the concepts of traditional mathematics. It is fractal geometry rather than Euclidean geometry that can be adopted to effectively interpret the allometric scaling exponents in scientific research (Batty and Longley, 1994; Chen and Xu, 1999; Chen, 2010; West _et al_, 1997; West _et al_, 1999). On the other hand, dimension is a kind of geometric characteristic quantity of space. A dimension value corresponds to a spatial form. A geometric object, and an aspect of the object, bears only one dimension value. This is the principle of dimension uniqueness. However, in geographical analysis, we can obtain several dimension values for the same geographical object. To solve this dimension paradox, we must reconsider the concepts of geographical space. This work is devoted to deriving three types of geographical space from different allometric scaling relations. With the help of the new results of space classification, some specious problems in geographical research can be clarified, including the confusion between the box dimension and the similarity dimension of an urban system. ## 2 Allometry models of cities ### Allometric scaling classification Allometric scaling relations can be divided into three categories, that is, longitudinal allometry, transversal allometry, and isoline allometry. The transversal allometry can be equivalently expressed by cross-sectional allometry and hierarchical allometry (Table 1). The urban area-population allometric growth is a simple and good example to illustrate the three types of geographical allometric relations. Using the allometric scaling relation between urban area and population size, we can derive three concepts of geographical space. The urban area-population allometric relation is well known to geographers (Nordbeck, 1971; Lo and Welch, 1977). The model can be formulated as \[A=aP^{b}=aP^{D_{\mathrm{a}}/D_{\mathrm{p}}}\,, \tag{1}\] in which \(a\) refers to the proportionality coefficient, and \(b\)=\(D_{\mathrm{a}}\)/\(D_{\mathrm{p}}\) to the allometric scaling exponent (Lee, 1989). According to the principle of dimension consistency, the allometric exponent is a ratio of two fractal dimensions (Batty and Longley, 1994; Chen, 2010), that is \[b=\frac{D_{\mathrm{a}}}{D_{\mathrm{p}}}\,, \tag{2}\] where \(D_{\mathrm{a}}\) refers to the fractal dimension of urban area, and \(D_{\mathrm{p}}\) to the fractal dimension of urban population. The allometric scaling relation between urban area and population size can be examined from three angles of view. The first is the isoline allometry based on spatial distribution, which reflects the spatial pattern of urban form. For a given city at a certain time, equations (1) and (2) should be replaced by \[A(r)=a^{\prime}P(r)^{b^{\prime}}=a^{\prime}P(r)^{D^{\prime}_{\mathrm{a}}/D^{\prime}_{\mathrm{p}}}\,, \tag{3}\] where \(r\) denotes the radius from the city center, \(A(r)\) refers to the land-use area within a radius of \(r\) units from the center (0\(\leq\)\(r\)\(\leq\)\(R\), where \(R\) is the maximum radius of a city), \(P(r)\) to the population within the same sphere as \(A(r)\), \(a^{\prime}\) is the proportionality coefficient, and \(b^{\prime}=D^{\prime}_{\mathrm{a}}/D^{\prime}_{\mathrm{p}}\) is the scaling exponent. 
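In practice, \(b^{\prime}\) is estimated by log-log regression. The sketch below (our illustration; the dimension values and noise level are assumptions, not measurements) generates synthetic radial data of the power-law form used in this section and recovers the exponent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed (illustrative) fractal dimensions of land use and population:
Da, Dp, A1, P1 = 1.70, 1.90, 2.0, 5.0
r = np.linspace(1.0, 30.0, 60)                          # radii from the centre
A = A1 * r**Da * np.exp(rng.normal(0, 0.02, r.size))    # A(r) with small noise
P = P1 * r**Dp * np.exp(rng.normal(0, 0.02, r.size))    # P(r) with small noise

# Eq. (3) in log form: log A = log a' + b' log P, so b' is the slope.
b_fit, _ = np.polyfit(np.log(P), np.log(A), 1)
print(f"fitted b' = {b_fit:.3f}   vs   Da'/Dp' = {Da/Dp:.3f}")
```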
Equation (3) can be derived from two fractal models as follows \[A(r)=A_{1}r^{D^{\prime}_{\mathrm{a}}}\,,\quad P(r)=P_{1}r^{D^{\prime}_{\mathrm{p}}}\,, \tag{4}\] where \(A_{1}\) and \(P_{1}\) are two proportionality constants, \(D^{\prime}_{\mathrm{a}}\) is the fractal dimension of urban land use form, and \(D^{\prime}_{\mathrm{p}}\) is the fractal dimension of the population distribution of the city. This suggests that the fractal dimensions \(D^{\prime}_{\mathrm{a}}\) and \(D^{\prime}_{\mathrm{p}}\) belong to the real geographical space (\(0\leq D^{\prime}_{\mathrm{a}},D^{\prime}_{\mathrm{p}}\leq 2\)), which is always confined in a 2-dimensional Euclidean space by maps or digital maps (Chen _et al_, 2019). The second is the longitudinal allometry based on time series, which reflects the dynamic process of urban growth. For a given city, equations (1) and (2) should be replaced by \[A(t)=a^{\prime\prime}P(t)^{b^{\prime\prime}}=a^{\prime\prime}P(t)^{D^{\prime\prime}_{\mathrm{a}}/D^{\prime\prime}_{\mathrm{p}}}\,, \tag{5}\] where \(t\) denotes the time (\(t=1,2,\ldots,n\)), \(A(t)\) refers to the land-use area at the \(t\)th time within a radius of \(R\) units from the center, \(P(t)\) to the population at the same time within the same sphere as \(A(t)\), \(a^{\prime\prime}\) is the proportionality coefficient, and \(b^{\prime\prime}=D^{\prime\prime}_{\mathrm{a}}/D^{\prime\prime}_{\mathrm{p}}\) is the scaling exponent. This implies that the fractal dimensions \(D^{\prime\prime}_{\mathrm{a}}\) and \(D^{\prime\prime}_{\mathrm{p}}\) belong to a generalized geographical space--phase space (\(0\leq D^{\prime\prime}_{\mathrm{a}},D^{\prime\prime}_{\mathrm{p}}\leq 3\)), which is always determined by one or more time series. The third is the transversal allometry based on cross-sectional data, which reflects the hierarchical structure of a system of \(N\) cities. In this case, equations (1) and (2) should be replaced by \[A(k)=a^{\prime\prime\prime}P(k)^{b^{\prime\prime\prime}}=a^{\prime\prime\prime}P(k)^{D^{\prime\prime\prime}_{\mathrm{a}}/D^{\prime\prime\prime}_{\mathrm{p}}}\,, \tag{7}\] where \(k\) denotes the rank of a city, \(D^{\prime\prime\prime}_{\mathrm{a}}\) is the average fractal dimension of the land use form of the \(N\) cities, and \(D^{\prime\prime\prime}_{\mathrm{p}}\) is the average dimension of the population distribution of the same urban system. This implies that the fractal dimensions \(D^{\prime\prime\prime}_{\mathrm{a}}\) and \(D^{\prime\prime\prime}_{\mathrm{p}}\) belong to another generalized geographical space--order space (\(0\leq D^{\prime\prime\prime}_{\mathrm{a}},D^{\prime\prime\prime}_{\mathrm{p}}\leq 3\)), which is always determined by one or more rank-size series. In fact, equation (7) can be equivalently expressed as the following hierarchical scaling relation \[A(m)=a^{\prime\prime\prime}P(m)^{b^{\prime\prime\prime}}=a^{\prime\prime\prime}P(m)^{D^{\prime\prime\prime}_{\mathrm{a}}/D^{\prime\prime\prime}_{\mathrm{p}}}, \tag{9}\] which can be derived from \[A(m)=A_{1}R(m)^{D^{\prime\prime\prime}_{\mathrm{a}}}\,,\quad P(m)=P_{1}R(m)^{D^{\prime\prime\prime}_{\mathrm{p}}}\,, \tag{10}\] where \(m\)=1,2,3,... represents the order number of levels in a hierarchy (Chen, 2010; Chen and Feng, 2017). This suggests that the rank-size series that is/are used to define an order space can be substituted with one or more hierarchical series. What is more, equation (7) can be derived from two Zipf laws, the area-based Zipf law and the population-based Zipf law (Table 1). ### Allometric scaling exponents and fractal dimension According to the principle of dimension consistency, the allometric scaling exponent is a ratio of one fractal dimension to another fractal dimension. In geometry, a measure (e.g., length) is proportional to another measure (e.g., area) if and only if the two measures bear the same dimension. For example, length is not proportional to area, but length is proportional to the square root of area. Generally speaking, we have the geometric measure relation \[L^{1/1}\propto A^{1/2}\propto V^{1/3}\,, \tag{11}\] where \(L\), \(A\), and \(V\) represent length, area, and volume, respectively. Equation (11) can be generalized to a fractal measure relation such as (Mandelbrot, 1982; Takayasu, 1990) \[L^{1/1}\propto A^{1/2}\propto V^{1/3}\propto M^{1/D}\,, \tag{12}\] where \(M\) refers to a generalized volume, and \(D\) to fractal dimension. Here the Euclidean dimension is regarded as a special case of fractal dimension. Suppose that the dimension of urban area is \(d\)=2, and the dimension of urban population is \(d\)=3. 
According to equation (11), we have a geometric measure relation between one measure \(A\) and another measure \(V\) such as \(A\)=\(aV^{2/3}\), where \(a\) denotes a proportionality coefficient. According to equation (12), we have a general proportional relation between one measure \(A\) and another measure \(M\) such as \(A\)=\(aM^{2/D}\). In short, an allometric scaling model can be decomposed into two growth processes, or two spatial distributions, or two probability distributions. Based on spatial data, an allometric scaling process can be decomposed into a pair of power law distributions; based on time series, an allometric growth can be decomposed into a pair of exponential or logistic growths; based on cross-sectional data, an allometric scaling process can be decomposed into a pair of exponential distributions or power law distributions of probability. Thus we have isoline allometry indicative of spatial patterns, longitudinal allometry indicative of temporal processes, and transversal allometry indicative of hierarchical structure (Table 1). From each allometry, we can derive a pair of fractal parameters indicating the dimension of some type of geographical space. **Table 1 The spatial, longitudinal, and transversal allometric scaling relations of cities and the related growth or distribution functions** \begin{tabular}{|l|l|l|l|l|l|} \hline Item & Type & Sub-type & Basic models & Main model & Parameters \\ \hline Space & Isoline allometry & Spatial allometry & \(S(r)=S_{1}r^{D_{s}}\), \(A(r)=A_{1}r^{D_{a}}\) & \(A(r)=aS(r)^{b}\) & \(a=A_{1}S_{1}^{-b}\), \(b=D_{a}/D_{s}\) \\ \hline Time & Longitudinal allometry & Exponential allometry & \(S_{t}=S_{0}e^{ut}\), \(A_{t}=A_{0}e^{vt}\) & \(A_{t}=aS_{t}^{b}\) & \(a=A_{0}S_{0}^{-b}\), \(b=v/u\) \\ \cline{3-6} & & Logistic allometry & \(S_{t}=\dfrac{S_{\max}}{1+(S_{\max}/S_{0}-1)e^{-ut}}\), \(A_{t}=\dfrac{A_{\max}}{1+(A_{\max}/A_{0}-1)e^{-vt}}\) & \(\dfrac{A_{t}}{A_{\max}-A_{t}}=a\Big(\dfrac{S_{t}}{S_{\max}-S_{t}}\Big)^{b}\) & \(a=\dfrac{A_{0}}{A_{\max}-A_{0}}\Big(\dfrac{S_{0}}{S_{\max}-S_{0}}\Big)^{-b}\) \\ \hline Hierarchy & Cross-sectional allometry & Power allometry & \(S_{k}=S_{1}k^{-q}\), \(A_{k}=A_{1}k^{-p}\) & \(A_{k}=aS_{k}^{b}\) & \(a=A_{1}S_{1}^{-b}\), \(b=p/q\) \\ \cline{2-6} & Hierarchical allometry & Exponential allometry & \(S_{m}=S_{1}r_{s}^{1-m}\), \(A_{m}=A_{1}r_{a}^{1-m}\) & \(A_{m}=aS_{m}^{b}\) & \(a=A_{1}S_{1}^{-b}\), \(b=\ln r_{a}/\ln r_{s}\) \\ \cline{3-6} & & Power allometry & \(S_{m}=S_{1}N_{m}^{-q}\), \(A_{m}=A_{1}N_{m}^{-p}\) & \(A_{m}=aS_{m}^{b}\) & \(a=A_{1}S_{1}^{-b}\), \(b=p/q\) \\ \hline \end{tabular} **Note:** The symbols are as follows: \(t\)—time; \(r\)—distance; \(k\)—rank; \(m\)—level; \(S\)—(population) size; \(A\)—urban area; \(a\), \(b\), \(p\), \(q\), \(u\), \(v\), \(r_{s}\), \(r_{a}\), \(N_{m}\), \(A_{0}\), \(A_{1}\), \(A_{\max}\), \(S_{0}\), \(S_{1}\), \(S_{\max}\) are all parameters (proportionality coefficients, fractal dimensions, scaling exponents, ratios, capacities, etc.). ### Three types of geographical space Since there are three types of fractal dimensions for a city as a system or a system of cities, the notion of generalized space should be introduced into geography. 
Given different spatio-temporal conditions, the allometric scaling model suggests three types of urban geographical space: real space, phase space, and order space (Chen, 2014). The first geographical space is the _real space_ (R-space). This is the conventional, concrete geographical space, which can be described by field investigation, maps, remote sensing image data, etc. Using the spatial data of a city or an urban system, we can model it through the real space. The second geographical space is the _phase space_ (P-space). This is the first abstract geographical space, which can be depicted by one or more time series. Using the observational data of the temporal process of a city or an urban system, we can model it through the phase space. The third geographical space is the _order space_ (O-space). This is the second abstract geographical space, which can be characterized by rank-size series or hierarchical series. Using the cross-sectional data of a city or an urban system, we can model it through the order space. Different types of geographical space correspond to different types of fractal dimension (Table 2). In theory, the same kind of fractal dimension of different spaces should be equal to one another. For a given city at a given time (\(t\) is determined), if the urban radius is defined according to a certain criterion (\(r\)=\(R\)), we will have \[b=\frac{D^{\prime}_{\text{a}}}{D^{\prime}_{\text{p}}}=\frac{D^{\prime\prime}_{\text{a}}}{D^{\prime\prime}_{\text{p}}}=\frac{D^{\prime\prime\prime}_{\text{a}}}{D^{\prime\prime\prime}_{\text{p}}}. \tag{13}\] However, because of random disturbances and varied human factors, the observational data do not always support this equation. In practice, equation (13) should be replaced by an approximate relation of the following form \[b=\frac{D^{\prime}_{\text{a}}}{D^{\prime}_{\text{p}}}\approx\frac{D^{\prime\prime}_{\text{a}}}{D^{\prime\prime}_{\text{p}}}\approx\frac{D^{\prime\prime\prime}_{\text{a}}}{D^{\prime\prime\prime}_{\text{p}}}, \tag{14}\] which can be validated with the statistical average of large-sized samples. \begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Space & Description & Physical base and data & Basic fractal dimension & Dimension value range \\ \hline Real space (R-space: the first space) & Empirical space & Spatial series or random observational data based on maps, digital maps, remotely sensed images, etc. & Box dimension, radial dimension & 0\(\leq\)\(D\)\(\leq\)2 \\ \hline Phase space (P-space: the second space) & Abstract space & Temporal series based on daily/monthly/yearly observations and measurements, etc. & Correlation dimension & 0\(\leq\)\(D\)\(\leq\)3 \\ \hline Order space (O-space: the third space) & Abstract space & Cross-sectional data based on regional observations and measurements, etc. & Similarity dimension, correlation dimension & 0\(\leq\)\(D\)\(\leq\)3 \\ \hline \end{tabular} \end{table} Table 2: Three types of geographical space: real space, phase space, and order space ### Fractal methods of spatial analysis In the past, we used distance to characterize geographical space. If a geographical phenomenon bears characteristic scales, distance will be an effective measure for the spatial description of geographical systems. On the contrary, if a geographical phenomenon possesses no characteristic scale, the distance measurement will be invalid for geographical spatial analysis. 
In this case, the characteristic scale should be replaced by scaling, and distance-based space should be replaced by dimension-based space. Anyway, dimension is the characteristic parameter of space. The Euclidean dimension carries little geographical information, but the fractal dimension gives us useful geographical information for spatial analysis. In short, fractal geometry provides a powerful tool for scaling analysis in geography, especially in urban studies. We have various fractal parameters, which can be applied to the geographical analysis of different types of space (Table 3). \begin{table} \begin{tabular}{l|l|l|l} \hline **Space** & **Method** & **Object** & **Fractal dimension** \\ \hline **R-space for** & Box counting method & Form/network & Box dimension \\ \cline{2-3} **pattern:** & Sandbox & Growth & Sandbox dimension \\ \cline{2-3} **Spatial** & Radius-area/number scaling & Growth & Radial dimension \\ \cline{2-3} **structure,** & (cluster growing) & & \\ \cline{2-3} **texture, and** & Wave spectral analysis & Form/network & Image dimension \\ \cline{2-3} **distribution** & Walking-divider method & Boundary & Boundary dimension \\ \cline{2-3} & Perimeter-area scaling & Boundary & Boundary dimension \\ \cline{2-3} &....... &....... &....... \\ \hline & Power spectral analysis & Process & Self-affine dimension \\ \cline{2-3} & Reconstructing phase space & Dynamics & Correlation dimension \\ \hline \end{tabular} \end{table} Table 3: Methods of urban fractal dimension estimation for the three types of space based on time series, spatial structure, and hierarchical structure ## 3 Empirical analysis ### Case of R-space Three typical examples can be presented to illustrate the allometric scaling defined in different types of geographical space. The first case is the spatial allometric relation between urban area and total length of streets based on spatial distribution data. This type of allometry reflects the real space (R-space) of urban geographical systems. Two Chinese cities, Changchun and Jinan, are taken into account (Chen _et al_, 2019). Based on different searching radii \(r\), a number of urban envelopes of a city can be identified. Each urban envelope gives an urban area, \(A(r)\), and a total length of streets, \(L(r)\), of the city. The relationships between urban area and corresponding street length follow the spatial allometric scaling law (Figure 1). The scaling exponent is the ratio of the fractal dimension of the street network to the fractal dimension of urban form in the real geographical space. Figure 1: The allometric scaling relations between urban area and total street length of two Chinese cities defined in real space (2011) [Note: Different searching radius \(r\) yields different urban boundaries for a city. A set of urban areas \(A(r)\) and total street lengths \(L(r)\) can thus be obtained.] ### Case of P-space The second example is the longitudinal allometric growth relation between urban population size and built-up area based on time series data. This type of allometry reflects the phase space (P-space) of urban geographical systems. Two Chinese cities, Beijing and Shanghai, are taken as cases (Chen and Feng, 2017). For a city, urban population and urban built-up area data can be extracted every year. In principle, the relationships between urban population, \(P_{t}\), and urban area, \(A_{t}\), follow the law of allometric growth. The scaling exponent is the ratio of the fractal dimension of urban form to the fractal dimension of urban population in geographical phase space. 
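To make the P-space exponent concrete, the following sketch (ours; the growth rates are illustrative assumptions, not fitted city data) implements the exponential longitudinal allometry of Table 1, for which \(b=v/u\):

```python
import numpy as np

# Longitudinal (P-space) allometry per Table 1: exponential growth
# P_t = P0*exp(u t) and A_t = A0*exp(v t) implies b = v/u.
u, v, P0, A0 = 0.030, 0.025, 1.0e6, 80.0      # assumed growth rates
t = np.arange(0, 40)                           # yearly observations
P = P0 * np.exp(u * t)
A = A0 * np.exp(v * t)

u_fit = np.polyfit(t, np.log(P), 1)[0]         # relative growth rate of P
v_fit = np.polyfit(t, np.log(A), 1)[0]         # relative growth rate of A
print(f"b = v/u = {v_fit / u_fit:.4f}")        # -> 0.8333
```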
Unfortunately, owing to the unstable statistical caliber of Chinese cities, the allometric scaling relationships based on time series exhibit significant variability. Nevertheless, this example can be utilized to show the allometric growth defined in phase space (Figure 2). Figure 2: The allometric scaling relationships between population size and area of built district of Chinese cities defined in phase space. ### Case of O-space The third example is the transversal allometric relation between urban population size and built-up area based on cross-sectional data. This type of allometry is associated with the rank-size distribution and reflects the order space (O-space) of urban systems. Chinese cities in two years, 2011 and 2014, are taken as cases (Chen and Feng, 2017). Suppose that the rank of urban population size is \(k\), the city size of the \(k\)th city is \(P_{k}\), and the corresponding urban area is \(A_{k}\). The relationships between urban population and urban area follow the rank-size allometric scaling law (Figure 3). The scaling exponent is the ratio of the fractal dimension of urban form to the fractal dimension of urban population in geographical order space. **Figure 3 The rank-size allometric scaling relationships between urban population and urbanized area of Chinese cities of two years defined in order space** **[Note: Only comparable data of 657 to 658 Chinese cities from 1991 to 2014 are available.]** **Figure 4 The hierarchical allometric scaling relationships between population size and area of built district of Chinese cities of two years defined in order space (10 levels)** **[Note: The hierarchical allometric scaling relation is equivalent to the rank-size allometric scaling relation because Zipf's law reflects a self-similar hierarchy with cascade structure.]** The cross-sectional allometry can be equivalently transformed into the hierarchical allometry of cities. The hierarchical allometry also reflects the order space of urban geographical systems. Suppose that the level order of the urban hierarchy is numbered as \(m\), the average city size of the \(m\)th level is \(P_{m}\), and the corresponding average urban area is \(A_{m}\). The relationships between average urban population and urban area follow the hierarchical allometric scaling law (Figure 4). The scaling exponent is also the ratio of two fractal dimensions of geographical order space. ## 4 Questions and discussion Dimension is a measurement in space indicative of length, width, or height. Generally speaking, dimension implies the magnitude of something in a particular direction, e.g., length, width, or height. In mathematics and science, dimension is a spatial concept, which is used to describe points, lines, and solids. In Euclidean geometry, the dimensions of points, lines, areas, and cubes are 0, 1, 2, 3, respectively. In analytic geometry, a dimension represents one of three Cartesian coordinates that determine a position in space. If we talk about the dimensions of an object or place, we always refer to its aspects, sizes, or proportions. In Euclidean geometry, dimension is a priori and cannot provide much spatial information for scientific research, because the dimension value of each Euclidean object is known. However, in fractal geometry, things are different. Fractal dimension is empirical and no longer a known quantity, but an unknown quantity to be measured. Thus, fractal dimension can provide useful information for spatial analysis. 
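As an illustration of how such an empirical dimension is actually measured, the following sketch (our own; the test set and parameters are assumptions) implements the box-counting estimator listed in Table 3 and applies it to a synthetic fractal of known dimension:

```python
import numpy as np

def box_counting_dimension(points, eps_list):
    """Estimate the box-counting dimension of a 2-D point set
    (e.g., built-up cells of a city extracted from a map)."""
    counts = []
    for eps in eps_list:
        boxes = {tuple(np.floor(p / eps).astype(int)) for p in points}
        counts.append(len(boxes))
    # D is the negative slope of log N(eps) against log eps.
    slope, _ = np.polyfit(np.log(eps_list), np.log(counts), 1)
    return -slope

# Test on a chaos-game Sierpinski set (known D = log 3 / log 2 ~ 1.585).
rng = np.random.default_rng(1)
verts = np.array([[0, 0], [1, 0], [0.5, np.sqrt(3) / 2]])
p, pts = np.zeros(2), []
for _ in range(20000):
    p = (p + verts[rng.integers(3)]) / 2
    pts.append(p.copy())
print(box_counting_dimension(np.array(pts), [0.2, 0.1, 0.05, 0.025]))
```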
Two basic principles, which were known in Hellenic times, are important for understanding fractal dimension. One is the _principle of dimension uniqueness_. For a given object (e.g., a line), or an aspect of an object (e.g., the perimeter or area of a circle), the dimension is unique. The other is the _principle of dimension consistency_. As indicated above, one measure (e.g., length, area) is in proportion to another measure (e.g., area, volume) if and only if the two measures share the same dimension. In geospatial analysis, we have encountered and still face a series of problems to be solved. Power laws can be found everywhere in the geographical world. However, in many cases the power exponent is hard to explain by means of traditional mathematical notions (Table 4). Typical difficult problems are the allometric exponent of urban area _vs_ population growth and the distance exponent of gravity models. Suppose that the dimension of urban area is \(d\)=2, and the dimension of urban population is \(d\)=3. According to the principle of dimension consistency, we have a geometric measure relation between urban area and population such as \(A=aP^{2/3}\). However, a large number of observational datasets do not lend support to this measure relation based on Euclidean dimension. Generally speaking, the allometric scaling exponent values come between 2/3 and 1, and approach 0.85 (Chen, 2010; Chen and Xu, 1999; Louf and Barthelemy, 2014). In other words, urban form and growth are fractal patterns and processes. If so, it is easy to explain the values of the allometric scaling exponent. Suppose that a fractal city is defined in a 2-dimensional space. Assuming the dimension of urban form is \(D\)=1.7, and the dimension of urban population is \(d\)=2, we have \(b=D/d\)=0.85. If we employ an inverse power law as an impedance function of spatial interaction, the gravity model will encounter a difficult problem of dimension. It is hard to interpret the experimental values of the distance exponent using Euclidean geometry. According to the principle of dimension consistency, the distance exponent is supposed to be an integer or an integer ratio. However, the calculation results based on observational data are arbitrary values varying from 0 to 4. In fact, the distance exponent \(\sigma\) proved to be the fractal dimension of city size, \(D_{\text{p}}\), or the product of the Zipf exponent, \(q\), and the fractal dimension of the central place network, \(D_{\text{f}}\). That is, we have \(\sigma\)=\(D_{\text{p}}\)=\(qD_{\text{f}}\). Today, many such problems can be solved by using ideas from fractals. However, the phenomenon of fractal dimension values violating the principle of dimension uniqueness has not yet been explained. For the same city at a given time, we can obtain one fractal dimension value from spatial measurement and another fractal dimension value from time series. The only solution to the problem of dimension uniqueness is to distinguish one type of geographical space from another. Different types of geographical space correspond to different types of fractal dimension, which depend on different types of observational data and calculation methods (Table 4). On the other hand, different types of fractal dimension can be employed to make different types of spatial analysis for geographical systems. 
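As one more concrete illustration, the transversal (O-space) allometry of Section 3 can be emulated with two Zipf laws; in this sketch (ours, with assumed Zipf exponents rather than Chinese city data), the two rank-size laws of Table 1 combine into the allometric exponent \(b=p/q\):

```python
import numpy as np

# Transversal (O-space) allometry per Table 1: Zipf-type rank-size laws
# P_k = P1*k^{-q} and A_k = A1*k^{-p} imply the allometric exponent b = p/q.
q, p, P1, A1 = 1.00, 0.85, 9.0e6, 1.5e3     # assumed parameter values
k = np.arange(1, 301)                        # city ranks
P, A = P1 * k**(-q), A1 * k**(-p)

b_fit = np.polyfit(np.log(P), np.log(A), 1)[0]
print(f"fitted b = {b_fit:.3f}, expected p/q = {p/q:.3f}")
```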
\begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline **Principle** & **Meaning** & **Parameter** & **Traditional explanation** & **New explanation** \\ \hline **Dimension consistency** & Measure X is proportional to measure Y if and only if X and Y bear the same dimension & Scaling exponent of allometry & A ratio of one Euclidean dimension to another dimension & A ratio of one fractal dimension to another fractal dimension \\ \hline \end{tabular} \end{table} Table 4: Several typical scientific conundrums associated with dimensions in geography ## 5 Conclusions So far, we have had four paradigms for scientific research, that is, mathematical theory, laboratory experiment, computer simulation, and data-intensive computing. The four paradigms represent the mathematical method, the controlled experimental method, the simulation method, and the computational method, respectively. Among these four paradigms, the mathematical method is the basic and very important paradigm. Anyway, scientific research comprises two correlated processes: description and understanding. No exact description, no correct understanding. Mathematical modeling is the precondition for effective description and deep understanding. There are three difficult problems for mathematical modeling in scientific research: spatial dimension, time lag (response delay), and interaction (coupling). The three-part classification of geographical space is helpful for the effective mathematical modeling of geographical phenomena. The introduction of fractal dimension is helpful for the spatial characterization of geographical systems. Only when the concept of geographical space is clarified can fractal dimension be effectively used. Based on three types of allometric scaling of cities, geographical space was divided into three categories: real space (R-space), phase space (P-space), and order space (O-space). The real space can be described with the box dimension and radial dimension, the phase space can be depicted by the correlation dimension, and the order space can be characterized by the similarity dimension. In the future, new geospatial theory and analytical methodologies can be developed on the basis of the three-space concepts. ## Acknowledgements This research was sponsored by the National Natural Science Foundation of China (Grant No. 42171192). The support is gratefully acknowledged.
2310.11624
The free energy balance equation applied to gyrokinetic instabilities, the effect of the charge flux constraint, and application to simplified kinetic models
The free energy balance equation for gyrokinetic fluctuations is derived and applied to instabilities. An additional term due to electromagnetic sources is included. This can provide a simpler way to compute the free energy balance in practical applications, and is also conceptually clarifying. The free energy balance, by itself, is not sufficient to determine an eigenfrequency. The preceding results are derived in general geometry. The charge flux constraint in gyrokinetics can provide a necessary additional relation, and the combination of these two can be equivalent to a dispersion relation. The charge flux constraint can prevent the appearance of an unstable eigenmode even though the free energy balance would allow strongly growing fluctuations. The application of these concepts to simplified kinetic models in simplified geometry is also indicated.
M. Kotschenreuther, X. Liu, S. M. Mahajan, D. R. Hatch
2023-10-17T23:21:42Z
http://arxiv.org/abs/2310.11624v1
The free energy balance equation applied to gyrokinetic instabilities, the effect of the charge flux constraint, and application to simplified kinetic models ###### Abstract The free energy balance equation for gyrokinetic fluctuations is derived and applied to instabilities. An additional term due to electromagnetic sources is included. This can provide a simpler way to compute the free energy balance in practical applications, and is also conceptually clarifying. The free energy balance, by itself, is not sufficient to determine an eigenfrequency. The preceding results are derived in general geometry. The charge flux constraint in gyrokinetics can provide a necessary additional relation, and the combination of these two can be equivalent to a dispersion relation. The charge flux constraint can prevent the appearance of an unstable eigenmode even though the free energy balance would allow strongly growing fluctuations. The application of these concepts to simplified kinetic models in simplified geometry is also indicated. ## I Derivation of the gyrokinetic free energy balance equation, and its application to instabilities Instabilities that are described by the gyrokinetic equation are typically the main source of transport in magnetically confined plasmas. The gyrokinetic equation describes small fluctuations around a local Maxwellian that has relatively small gradients of temperature and density (in comparison to the fluctuation scales) [1]. The preceding sentence describes an ideal situation in which to apply the concepts of non-equilibrium thermodynamics, where it is often very advantageous that the departures from a Maxwellian are small. Surprisingly, such concepts have not been widely used to understand gyrokinetic instabilities and transport. But we will see that the use of fundamental non-equilibrium thermodynamic relationships is extremely illuminating. Here, we derive and describe a free energy balance equation that describes the conversion of free energy in equilibrium gradients into the free energy of the fluctuations. Unlike other derivations, we include potential source terms in the equation. This has several important computational and conceptual advantages, as we will see. The free energy balance equation can be viewed as the instability process from the thermodynamic viewpoint. The generality of this relation means that it pertains to all the classes of instabilities usually considered, e.g., curvature driven modes, modes driven by parallel resonances and drift resonances, trapped particle modes, collisionally driven modes, etc. One particular application of this equation is to allow us to consider how the charge flux constraint affects the instability process [2] without limiting our framework to a specific instability type. And, if our focus is on any specific instability (e.g., the ITG/TEM as in [2]), it allows us to consider that mode from the most general viewpoint: essential thermodynamic and statistical mechanical concepts. The effects of the charge flux constraint (FC) are often crucially important in weakening instabilities in transport barriers, where the gradients are strong but the transport is astonishingly weak. Transport barriers (TBs) are both an extraordinary phenomenon observed in existing devices and important in future devices to obtain thermonuclear energy gain. One thermodynamic definition of free energy is the capacity to do work at a fixed temperature (Helmholtz free energy), given that, of course, entropy cannot decrease. 
In the gyrokinetic system, the presence of the gradients of the Maxwellian is the source of free energy that imbues it with such a capacity for work. A crucial manifestation of this capacity is the system performing work on fluctuations so that they grow, i.e., the presence of instabilities. The very strong gradients of TBs should enable a large amount of work to be performed on the fluctuations so that they grow rapidly to large amplitude. And for most physical systems, this is precisely what happens when there are steep gradients. Instability growth is possible because, simultaneously, the fluctuations produce transport fluxes that act to reduce the free energy in the gradients. The stronger the instability, the more rapid the relaxation. This is just an instance of the universal tendency of systems to minimize free energy, and is the most general perspective from which to examine instabilities. One of our goals is to understand how it is that extremely large gradients can exist in TBs, and yet this large free energy is apparently unable to cause rapid instabilities that would give rapid relaxation of that free energy. For this, we start by deriving a free energy equation for the gyrokinetic equation. Since this point of view is not usually taken in the community for gyrokinetic instability dynamics, we will be slightly pedagogical. We begin with the nonlinear electrostatic gyrokinetic equation, which describes fluctuations of the non-adiabatic part of the distribution function around a local Maxwellian \(F_{Ms}\), for each species s, with a temperature \(T_{s}\). Specifically, it describes the distribution of the positions of the center of gyromotion: \[\frac{\partial h_{s}}{\partial t}+v_{tot}^{*}\cdot\vec{\nabla}h_{s}=\frac{q_{s}}{T_{s}}\frac{\partial\langle\phi\rangle}{\partial t}F_{Ms}-\langle v_{E\times B}\rangle\cdot\vec{\nabla}F_{Ms}+\mathbf{C}(h_{s}) \tag{1}\] where \(\langle\,\rangle\) denotes the gyro-average, \(\vec{\nabla}F_{Ms}\) is the local gradient of the background Maxwellian, and \(v_{tot}^{*}\) includes the parallel motion, \(\vec{E}\times\vec{B}\) drifts, curvature drift and grad B drift. Within the nominal gyrokinetic ordering that applies to the large majority of calculations done in the field, \(F_{Ms}\) is regarded as a constant in space in eq(1). Similarly, the local gradients are also taken as constant in the perpendicular direction. (And note \(F_{Ms}\) is constant on a flux surface, so \(\vec{\nabla}F_{Ms}(\psi)=\partial F_{Ms}/\partial\psi\,\vec{\nabla}\psi\), where \(\psi\) is a flux function.) We multiply eq(1) by \(h_{s}/F_{Ms}\) and integrate over \(\vec{x}\) and \(\vec{v}\). With conventional boundary conditions the \(v_{tot}^{*}\cdot\vec{\nabla}\) terms vanish. Furthermore, we define \[\delta f_{s}=-\frac{q_{s}\langle\phi\rangle}{T_{s}}F_{Ms}+h_{s} \tag{2}\] and also multiply by the species temperature \(T_{s}\), and sum over species. 
We use the fact that \(\int\langle a\rangle b=\int a\langle b\rangle\) (for conventional boundary conditions), to obtain the free energy balance equation \[\frac{\partial}{\partial t}\left[\sum_{s}\frac{T_{s}}{2}\int{\rm d}\vec{x}{\rm d}\vec{v}\left(\frac{\delta f_{s}^{2}}{F_{Ms}}+(\phi^{2}-\langle\phi\rangle^{2})F_{Ms}\right)+\frac{|\vec{\nabla}_{\perp}\phi|^{2}}{8\pi}\right]\] \[=\sum_{s}\left[\mathbf{n}_{s}(\mathbf{Q}_{s}\frac{1}{T_{s}}\frac{dT_{s}}{dx}+\Gamma_{s}\frac{T_{s}}{n_{s}}\frac{dn_{s}}{dx})\right]-\sum_{s}\int d\mathbf{x}d\mathbf{v}\frac{h_{s}}{F_{M}}\mathbf{C}(h_{s})\] \[+\int{\rm d}\vec{x}\phi\frac{\partial}{\partial t}(\frac{\nabla_{\perp}^{2}\phi}{4\pi}-\sum_{s}q_{s}\delta n_{s}) \tag{3}\] So far we have not imposed the condition that the fluctuations obey Maxwell's equations. But the last term on the RHS vanishes when the fluctuations satisfy Poisson's equation, which in the gyrokinetic limit is \[\frac{\nabla_{\perp}^{2}\phi}{4\pi}=\sum_{s}q_{s}\delta n_{s} \tag{4}\] where the \(\delta n_{s}\) are the perturbed densities, which are related to \(h_{s}\) by \[\delta n_{s}=\int{\rm d}v\langle h_{s}\rangle-\frac{q_{s}\phi}{T_{s}} \tag{5}\] For the typical ITG/TEM, Poisson's equation becomes quasi-neutrality, since the LHS of eq(4) is smaller than the RHS by \(\sim k_{\perp}^{2}\lambda_{Debye}^{2}\ll 1\). However, for ETG modes, this may not be the case. It will be instructive to not take the limit of small Debye length, and hence to include the field energy in eq(3). Although the last term in eq(3) vanishes for fluctuations that satisfy Poisson's equation, it will prove useful to include it nonetheless, for some conceptual and computational reasons. This is a fundamental thermodynamic relation for the gyrokinetic system, so we discuss its interpretation. In the limit \(k_{\perp}^{2}\to 0\), this approaches eq(2) of [2] (for fluctuations that obey Poisson's equation). The apparently obscure term \(\sim(\phi^{2}-\langle\phi\rangle^{2})\) in eq(3) becomes the perpendicular kinetic energy from \(E\times B\) motion in this limit, so it can be considered to be a part of the fluctuation energy. \[\frac{\partial}{\partial t}\sum_{s}\left[\frac{T_{s}}{2}\int d\mathbf{x}d\mathbf{v}\frac{\delta f_{s}^{2}}{F_{M}}+\frac{\mathbf{m}_{s}\mathbf{n}_{s}\delta V_{E\times B}^{2}}{2}+\frac{\delta E^{2}}{8\pi}\right]=\] \[\sum_{s}\left[\mathbf{n}_{s}(\mathbf{Q}_{s}\frac{1}{T_{s}}\frac{dT_{s}}{dx}+\Gamma_{s}\frac{T_{s}}{n_{s}}\frac{dn_{s}}{dx})\right]-\sum_{s}\int d\mathbf{x}d\mathbf{v}\frac{\delta f_{s}}{F_{M}}\mathbf{C}(\delta f_{s}) \tag{6}\] As a pedagogical exercise, let us see that this equation can clearly be interpreted as a free energy balance for fluctuations about a Maxwellian plasma at a given temperature. The gyrokinetic equation applies to small fluctuations away from a Maxwellian with temperature T, \(f=f_{M}+\delta f\). The perturbed entropy \(\delta S\) due to small fluctuations \(\delta f\) is \[\delta S=-\delta(f\log f)=-\delta f\,\log f_{M}-\frac{\delta f^{2}}{2f_{M}}+H.O.T. \tag{7}\] (So at constant energy and particles, the maximum entropy state is a Maxwellian, \(\delta f\)=0, noting that \(\log f_{M}=-m_{s}v^{2}/2T_{s}+Constant\)). There is also a contribution to the total entropy production in the equilibrium, and the change in this entropy is the usual product of thermodynamic forces and thermodynamic fluxes on the RHS of eq(6). From Boltzmann's H-theorem there is also entropy production from the collision operator \(\mathbf{C}\). Furthermore, the U in the Helmholtz free energy should include the field energy as well as the perturbed kinetic energy. 
Finally, since the gyrokinetic equation conserves particles for each species, no changes in the free energy result from terms \(\sim Constant\cdot\delta f\). Taking all this into account, the LHS of eq(6) corresponds to the rate of change of the free energy of fluctuations, summed over all species. A crucial feature of eq(3) is that the LHS is the derivative of a positive definite quantity. There is an entropy "cost" to increase the fluctuations \(\delta f\) away from a Maxwellian, and also an energy "cost" from the other terms on the LHS, and this is "paid for" by the decrease in the equilibrium free energy (relaxing the gradients). Equation (3) has corrections due to gyro-averaging, since \(h_{s}\) is a distribution function of gyro-centers, but it is otherwise, clearly, essentially the same equation. One might also increase the fluctuations in the system, and hence its free energy, by doing work on the system by an external agent. The last term on the RHS of eq(3) is potentially such a term. If Poisson's equation were not satisfied for the plasma charges, we could presume that there must be additional hypothetical external charges \(\rho_{external}\) that bring Poisson's equation into balance, \(\frac{\nabla_{\perp}^{2}\phi}{4\pi}-\sum_{s}q_{s}\delta n_{s}=\rho_{external}\). Using elementary electrodynamics, it is easily shown that the last term on the RHS is equal to the work done on the gyrokinetic charges by the external charges. So, as expected, the free energy is increased by the amount of external work done on the system. This is the electrodynamic equivalent of a piston in elementary thermodynamics. For the usual plasma instabilities this external term vanishes. The system itself does the work on the fluctuations to cause them to grow. This comes at the expense of decreasing the free energy of the background, which is given by the usual terms with products of thermodynamic forces and corresponding fluxes. The equilibrium gradients act as a thermodynamic "potential energy" that is tapped to increase the free energy in the fluctuations. The free energy balance equation will be satisfied as long as the last term in eq(3) vanishes: \[\int\mathrm{d}\vec{x}\phi\frac{\partial}{\partial t}(\frac{\nabla_{\perp}^{2}\phi}{4\pi}-\sum_{s}q_{s}\delta n_{s})=0. \tag{8}\] For eigenmodes that satisfy Poisson's equation, this obviously holds. Equation (8) means that fluctuations do no net electrostatic work on themselves. Or, in other words, the fluctuation growth is sustained without any "assistance" from external agents doing additional work on the system. The growth rate is determined purely by how efficiently the fluctuations tap the thermodynamic potential energy (free energy) of the equilibrium gradients. Equation (8) is usually much easier to compute than the free energy balance equation eq(3). For fluctuations that satisfy the gyrokinetic equation, the vanishing of eq(8) is equivalent to the free energy equation being satisfied. This is a thermodynamic description of the instability growth process. ## II The insufficiency of free energy balance for determining instabilities, and the effect of the charge flux constraint What is the connection of the free energy equation to the usual eigenvalue problem? One considers linear gyrokinetic fluctuations that evolve in time as \(\sim e^{-i\omega t}\). 
The usual approach is to solve the gyrokinetic equation for given \(\omega\), insert this into Maxwell's equations (here, Poisson's equation), and solve for the value of \(\omega\) that allows a nontrivial solution. Suppose we knew what the spatial structure of the eigenmode was, or had a reasonable approximation to it. Using it, we could compute \(\delta n_{s}\) as a function of the complex \(\omega\) by using the gyrokinetic equation eq(1). Then, an instructive integral of Poisson's equation, when written for the usual complex \(\phi\) and \(\delta n\), and the complex conjugate \(\phi^{*}\), is \[Re[\int\mathrm{d}\vec{x}\phi^{*}(\frac{\nabla_{\perp}^{2}\phi}{4\pi}-\sum_{s}q_{s}\delta n_{s})]=0. \tag{9}\] \[Im[\int\mathrm{d}\vec{x}\phi^{*}(\frac{\nabla_{\perp}^{2}\phi}{4\pi}-\sum_{s}q_{s}\delta n_{s})]=0. \tag{10}\] Thus we have two real equations in two real unknowns (\(\omega_{r}\) and \(\gamma\)), which constitutes a full dispersion relation to determine them. Let us compare this to the free energy balance equation eq(8), which, when written for the usual complex \(\phi\) and \(\delta n\) and the complex conjugate \(\phi^{*}\), is \[Re[\int\mathrm{d}\vec{x}\phi^{*}i\omega(\frac{\nabla_{\perp}^{2}\phi}{4\pi}-\sum_{s}q_{s}\delta n_{s})]=0. \tag{11}\] _But crucially, this is only one real equation, a linear combination of eq(9) and eq(10). To determine the eigenfrequency, we need two relations, not just one._ _The crucial physical consequence is: considerations of free energy balance, although they are absolutely fundamental to the physics of the instability process, are not sufficient to determine the realizable eigenfrequencies._ The flux constraint can provide the other needed condition. In the case of an instability, using the quasilinear fluxes (averaged over long space scales), and with some manipulation, the FC is: \[\sum_{s}q_{s}\Gamma_{rs}=0 \tag{12}\] After some manipulation, it can be shown that the charge flux constraint eq(12) for exponential eigenmodes is equivalent to \[Im[\int\mathrm{d}\vec{x}\phi(\frac{\nabla_{\perp}^{2}\phi}{4\pi}-\sum_{s}q_{s}\delta n_{s})]=0. \tag{13}\] Together, eq(8) and eq(13) are equivalent to eq(9) and eq(10), as long as the growth rate \(\gamma\) does not vanish. _Thus, the free energy balance together with the flux constraint can constitute a dispersion relation._ As described in detail in [3] and [4], the physics of the FC is totally different from free energy dynamics, and can be interpreted as akin to the constraint of local momentum conservation of localized fluctuations. _A crucial physical conclusion is that the FC can be a serious constraint upon the realizable free energy dynamics, as discussed in the section below._ Even if the free energy balance has solutions for growing modes, the FC might not have a solution for \(\gamma>0\). As shown in [2], there are circumstances where the FC is insoluble. That work is dedicated to explicating these concepts for the ITG/TEM modes in detail. As described there, stronger density gradients tend to bring this situation about, because of the basic statistical mechanical fact that thermodynamic forces drive their respective thermodynamic fluxes. So stronger density gradients can drive the fluxes to be too strong to balance in the FC eq(12), if the dynamics of electrons and ions are sufficiently different. 
_And such a circumstance has no physical connection to the amount of free energy in the equilibrium, i.e., how large \(\gamma\) might be capable of reaching by only considering the free energy balance._ It is straightforward to include electromagnetic effects in these considerations, and they leave the essential structure just described intact. Including the parallel vector potential \(A_{\parallel}\), the result is \[\frac{\partial}{\partial t}\Bigg{[}\sum_{s}\frac{T_{s}}{2}\int\mathrm{d}\vec{x}\mathrm{d}\vec{v}\left(\frac{\delta f_{s}^{2}}{F_{Ms}}+(\phi^{2}-\langle\phi\rangle^{2})F_{Ms}\right)+\frac{|\vec{\nabla}_{\perp}\phi|^{2}+B_{\perp}^{2}}{8\pi}\Bigg{]}\] \[=\sum_{s}\left[\mathbf{n}_{s}(\mathbf{Q}_{s}\frac{1}{T_{s}}\frac{dT_{s}}{dx}+\Gamma_{s}\frac{T_{s}}{n_{s}}\frac{dn_{s}}{dx})\right]-\sum_{s}\int d\mathbf{x}d\mathbf{v}\frac{h_{s}}{F_{M}}\mathbf{C}(h_{s})\] \[+\int\mathrm{d}\vec{x}\phi\frac{\partial}{\partial t}(\frac{\nabla_{\perp}^{2}\phi}{4\pi}-\sum_{s}q_{s}\delta n_{s})\] \[+\int\mathrm{d}\vec{x}\frac{A_{\parallel}}{c}\frac{\partial}{\partial t}(\frac{\nabla_{\perp}^{2}A_{\parallel}}{4\pi c}-\sum_{s}\delta j_{\parallel s}) \tag{14}\] Magnetic field energy is now included in the free energy of the fluctuations on the LHS. On the RHS, the fluxes now include electromagnetic contributions from the \(A_{\parallel}\) terms, in addition to the electrostatic fluxes from \(E\times B\) fluctuations. Such magnetic terms embody transport effects such as those from stochastic magnetic fields. And, if the fluctuations do not satisfy Ampere's law, additional electromagnetic work is done on the system by the last term of eq(14). For these magnetic contributions too, free energy considerations capture only the real component of the last complex quantity in eq(14); this quantity is obviously closely related to Ampere's law, but only its real part is constrained. In other words, once again, free energy considerations alone are insufficient to imply that Ampere's law is satisfied. As in the electrostatic case, an additional relation is needed. This additional condition is provided by the charge flux constraint _due to the electromagnetic part of the fluxes alone. This can be shown to vanish separately from the electrostatic component of the charge flux constraint_ (see [3]). And just like the electrostatic FC, this necessary condition might also be a serious constraint upon the realizable free energy dynamics. (For example, as shown in [4], for tearing-mode-like fluctuations, this magnetic part of the FC essentially requires that the real frequency be of order \(\omega^{*}\).) We note a final small detail in closing. One often considers gyrokinetic instabilities in the ballooning limit. To apply eq(3) or eq(14) to such cases, note that this is a flux tube geometry, where modes only depend upon the length along a field line \(l\). The volume element in coordinate space \(\mathrm{d}\vec{x}\rightarrow\mathrm{d}l\,A\sim\mathrm{d}l/B\), since the area perpendicular to a flux tube is \(\sim 1/B\). ## III Application of these considerations to a simplified kinetic model Often, approximate 0D dispersion relations are considered for electrostatic modes. An example is the Simplified Kinetic Model (SKiM) of [2]. Even though the geometry of SKiM is simplified, it is still a version of the gyrokinetic equation. Hence, one can show that the free energy balance equation is still obeyed, and the flux constraint is also obeyed. 
Quasineutrality, using eq(5), gives the dispersion relation, and for such local dispersion relations we denote it as \(D(\omega)=0\). From eq(3), the vanishing of the LHS of the free energy balance equation is equivalent to \[Re(i\omega D(\omega))=0 \tag{15}\] What about the FC? In the gyrokinetic equation, guiding centers move by the gyroaveraged \(E\times B\) drift. This velocity, for the SKiM case, in the radial direction, is \(i(c/B)k_{y}J_{0}(\langle k_{\perp}\rangle\rho_{i})\phi\). The total flux of gyrocenters is the product of this with the non-adiabatic part of the distribution function for each species \(h_{s}\), integrated over velocity. The FC implies that the sum over all species of the charge flux from this vanishes, or \[\sum_{s}Re[-i(c/B)q_{s}\int\mathrm{d}v\,k_{y}J_{0}(\langle k_{\perp}\rangle\rho_{i})\phi^{*}h_{s}]=0 \tag{16}\] Using typical 0D expressions for \(h_{s}\) (as in [2]) this becomes \[Im(D(\omega))=0 \tag{17}\] which is eq(13) for simplified models. As long as the growth rate does not vanish, these are two separate relations. When \(\gamma>0\), these dual equations, together, are equivalent to \(D(\omega)=0\). Indeed, writing \(\omega=\omega_{r}+i\gamma\) and \(D=D_{r}+iD_{i}\), eq(15) reads \(-\gamma D_{r}-\omega_{r}D_{i}=0\) while eq(17) reads \(D_{i}=0\); for \(\gamma\neq 0\) the two together force \(D_{r}=D_{i}=0\). As described above, the free energy balance together with the flux constraint specifies the mode frequency \(\omega=(\omega_{r},\gamma)\). The free energy balance describes how fast the fluctuation can potentially grow, given the free energy in the equilibrium. One could construct a graph, in the upper half plane of \(\omega\) space, of solutions of the free energy balance. It would be a continuous path of points \((\omega_{r},\gamma)\) with positive growth rates, if the system contains sufficient free energy so that growth is _possible_. And quite likely, the peak growth rate for this curve could be very large when gradients are also strong. This is the situation in a TB, for example. Another curve in the upper half plane of \(\omega\) space gives the solution of the FC. The actual mode frequency is at the intersection of this curve and the free energy balance curve. There is no unstable mode unless _the FC is also satisfied_. And, as described in [2], there are generic circumstances when the FC becomes insoluble. So even though there is strong free energy, there will be no instability. Or, there are circumstances when the FC only has solutions for very low growth rates. Then, no matter where the intersection of the FC and free energy balance occurs (if at all), the mode can, at best, only have a low growth rate. As shown in [2], this is exactly the situation that allows stability of the ITG/TEM in a TB without velocity shear. There, one finds numerous graphs that show the situation described in the paragraphs above. These considerations are not limited to the ITG/TEM. The same basic structure applies to quite different modes, including strongly electromagnetic ones. The circumstances described above can be quite generic. If there are strong gradients in the equilibrium, the RHS of eq(14) is large. In other words, solutions of the free energy balance will likely trace a curve in the upper half of the \((\omega_{r},\gamma)\) plane which reaches large \(\gamma\) for some \(\omega_{r}\). But such growth rates may not be realizable, because the restriction of the FC always applies. It can still be quite possible that the FC has no solution, or only has a solution at small \(\gamma\). 
As described in [2], such a circumstance tends to arise generically when the density gradient is large and when the physical dynamics of the ions and electrons are very different, so that each tends to produce a very different charge flux.
2305.04921
The semigroup of increasing functions on the rational numbers has a unique Polish topology
The set of increasing functions on the rational numbers, equipped with the composition operation, naturally forms a topological semigroup with respect to the topology of pointwise convergence in which a sequence of increasing functions converges if and only if it is eventually constant at every argument. We develop new techniques to prove there is no other Polish topology turning this semigroup into a topological one, and show that previous techniques are insufficient for this matter.
Michael Pinsker, Clemens Schindler
2023-05-08T17:56:17Z
http://arxiv.org/abs/2305.04921v2
# The semigroup of increasing functions on the rational numbers has a unique Polish topology ###### Abstract. We consider the semigroup of all increasing functions on the rational numbers, equipped with the composition operation, and study the Polish topologies which are compatible with this operation in the sense that the composition operation shall be continuous with respect to the topology. One such topology is the one inherited from the product topology on the power \(\mathbb{Q}^{\mathbb{Q}}\) where each copy of \(\mathbb{Q}\) carries the discrete topology. We show that this is in fact the only compatible Polish topology. This research was funded in whole or in part by the Austrian Science Fund (FWF) [P 32337, I 5948]. For the purpose of Open Access, the authors have applied a CC BY public copyright licence to any Author Accepted Manuscript (AAM) version arising from this submission. This research is also funded by the European Union (ERC, POCOCOP, 101071674). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them. The second author is a recipient of a DOC Fellowship of the Austrian Academy of Sciences at the Institute of Discrete Mathematics and Geometry, TU Wien. a coincidence since Solovay [14] and Shelah [15] showed the consistency of ZF (without choice) with the fact that _any_ Polish group has a unique Polish group topology. In the realm of semigroups, it was shown in [1] that the pointwise topology on the so-called full transformation monoid of all functions on a countably infinite set is the unique Polish semigroup topology (with respect to function composition), meaning that the topological and the algebraic structure on this monoid are closely connected. With this paradigm of reconstruction in mind, we arrive at the following problem: **Question A**.: _Is the pointwise topology the only Polish semigroup topology on the space \(\mathcal{M}_{\mathbb{Q}}\) of increasing functions on \(\mathbb{Q}\)?_ Note that a natural alternative topology is the subspace topology of the product topology on \(\mathbb{Q}^{\mathbb{Q}}\) where instead each copy of \(\mathbb{Q}\) is endowed with the Euclidean topology on \(\mathbb{Q}\). However, this topology is not Polish. And indeed, the goal of the present paper is to prove the following result: **Theorem A**.: _The pointwise topology is the unique Polish semigroup topology on \(\mathcal{M}_{\mathbb{Q}}\)._ ### Context The reconstruction problem has been considered in a significantly wider setting: if \(A\) is a countably infinite set, the pointwise topology on \(A^{A}\) induces a Polish semigroup topology on any subsemigroup of \(A^{A}\) which is a \(G_{\delta}\) set; we refer to this subspace topology also as _pointwise topology_. The most prominent examples of such \(G_{\delta}\) subsemigroups are \(\operatorname{Sym}(A)\), the space of all permutations of \(A\), as well as - more generally - the automorphism group \(\operatorname{Aut}(\mathbb{A})\) and the endomorphism monoid \(\operatorname{End}(\mathbb{A})\) of any given (model-theoretic) structure \(\mathbb{A}\). In the first two cases, the algebraic structure is even a group and the topology is a _group topology_, meaning that the inversion map on the group is also continuous.
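As a quick concrete illustration of the pointwise topology (and of the convergence notion quoted in the abstract: a sequence converges iff it is eventually constant at every argument), here is a minimal stdlib-Python sketch. The representation of increasing maps as callables on `Fraction`s and the sample sequence \(g_{n}(x)=\max(x,-n)\) are our own illustrative choices, not taken from the paper.

```python
from fractions import Fraction as Q

# A basic open set of the pointwise topology is a finite constraint
# {f : f(x_1)=y_1, ..., f(x_n)=y_n}, representable as a dict.
def in_basic_open(f, constraint):
    return all(f(x) == y for x, y in constraint.items())

# Sample sequence of increasing maps Q -> Q: truncation below at -n.
def g(n):
    return lambda x: max(x, Q(-n))

# Pointwise convergence g_n -> identity: at every argument x, the value
# g_n(x) is eventually constant (equal to x once n >= -x).  Hence every
# basic open neighbourhood of the identity contains g_n for all large n.
constraint = {Q(-7): Q(-7), Q(1, 2): Q(1, 2), Q(3): Q(3)}
for n in (1, 5, 7, 8, 100):
    print(n, in_basic_open(g(n), constraint))
# False for n < 7 (since g_n(-7) = -n != -7), True from n = 7 onwards.
```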
Our question concerning the increasing functions on the rational numbers fits into this framework since \(\mathcal{M}_{\mathbb{Q}}=\operatorname{End}(\mathbb{Q},\leq)\). As it turns out, the algebraic and the topological structure on many subsemigroups of \(A^{A}\) are very closely intertwined, yielding results of the following shape (which our Theorem A is parallel to): On the (semi-)group \(S\subseteq A^{A}\), the pointwise topology is the unique Polish (semi-)group topology. (hereafter: \(S\) has the Unique Polish Property or UPP for short) However, Question A escaped existing techniques. For the class of groups, UPP has been extensively studied; examples include the full symmetric group \(\operatorname{Sym}(A)\) ([1] combined with [16]) and the automorphism group of the random (di-)graph ([12] combined with [13]). Additionally, \(\operatorname{Aut}(\mathbb{Q},\leq)\) - explicitly: the space of all increasing permutations of \(\mathbb{Q}\) - has UPP as well ([13] combined with [16]). Recent years brought results for the case of endomorphism monoids as well; apart from the full transformation monoid \(A^{A}\) mentioned above, it turns out that the endomorphism monoids of the random graph, the random digraph and the equivalence relation with countably many equivalence classes of countable size all have UPP, see [1]. One notices that the examples listed above either contain only bijective functions (the groups) or contain both non-injective and non-surjective functions. This is essential for UPP to hold: by constructions given in [1], both the monoid \(\operatorname{Inj}(A)\) of all injective functions on \(A\) and the monoid \(\operatorname{Surj}(A)\) of all surjective functions on \(A\) carry multiple Polish semigroup topologies. The construction on \(\operatorname{Inj}(A)\) also applies to the so-called self-embedding monoid \(\operatorname{Emb}(\mathbb{A})\) of any \(\omega\)_-categorical_ (defined e.g. in [1]) structure \(\mathbb{A}\), see [1]. This in particular encompasses the monoid of all strictly increasing (but not necessarily surjective) functions on \(\mathbb{Q}\). Thus, if we were to replace \(\mathcal{M}_{\mathbb{Q}}=\operatorname{End}(\mathbb{Q},\leq)\) by \(\operatorname{End}(\mathbb{Q},<)\) in Question A, the resulting question would have a simple (negative) answer. ### Known technique The papers [1] and subsequently [1] show uniqueness of Polish semigroup topologies in two natural steps: 1. Show that the pointwise topology is coarser than any Polish semigroup topology. 2. Show that the pointwise topology is finer than any Polish semigroup topology. Usually, Step (2) takes considerably more work than Step (1). The essential tool for the first step is often the so-called _Zariski topology_, a topology guaranteed to be coarser than any Hausdorff semigroup topology; it then clearly is sufficient to prove that the Zariski topology coincides with the pointwise topology. The second step is accomplished by means of lifting from a subset, usually the automorphism group, to the endomorphism monoid. To this end, the following crucial instrument called _Property_\(\mathbf{X}\) was introduced in [1]: **Definition 1.1**.: Let \((S,\mathcal{T})\) be a topological semigroup and let \(D\subseteq S\) be a subset of \(S\). Then \((S,\mathcal{T})\) has _Property_\(\mathbf{X}\) with respect to \(D\) if for all \(s\in S\) there exist \(f_{s},g_{s}\in S\) and \(a_{s}\in D\) such that 1. \(s=g_{s}a_{s}f_{s}\) 2. 
for every \(\mathcal{T}\)-neighbourhood \(O\subseteq S\) of \(a_{s}\), the set \(g_{s}(O\cap D)f_{s}\) is a \(\mathcal{T}\)-neighbourhood of \(s\). When trying to use the above outline on \(\mathcal{M}_{\mathbb{Q}}\), Step (1) works smoothly via a direct construction and has already been executed in [1] (see also Section 2; our proof implicitly shows that the Zariski topology coincides with the pointwise topology). However, the common technique to perform Step (2) is not directly applicable since Property \(\mathbf{X}\) does not hold. ### The proof Our strategy to prove Theorem A is a threefold generalisation of the technique involving Property \(\mathbf{X}\), two of these aspects being essential extensions and one merely a technical one. First, we consider topologies that are finer than the pointwise topology in intermediate steps, showing that \(\mathcal{M}_{\mathbb{Q}}\) endowed with a finer topology has a form of Property \(\mathbf{X}\) with respect to the automorphism group (evidently, we subsequently have to reduce from that richer topology to the pointwise topology in an additional step). Second, we admit the postcomposition of a specific endomorphism (left-invertible is the key) on the left hand side of the term \(s=g_{s}a_{s}f_{s}\) serving as basis for Property \(\mathbf{X}\). Third, and this is only a technical complication, we increase the length of the right hand side of the term. All in all, we will consider \(e_{s}s=h_{s}b_{s}g_{s}a_{s}f_{s}\), leading to a generalisation of Property \(\mathbf{X}\) that we call _Pseudo-Property_\(\overline{\mathbf{X}}\); see Definition 3.1. In Section 2, we introduce some relevant notions and state the known results which we will use in the sequel, in particular the fact that the pointwise topology on \(\mathcal{M}_{\mathbb{Q}}\) is coarser than any Polish semigroup topology on \(\mathcal{M}_{\mathbb{Q}}\). The remainder of this paper is devoted to showing that, conversely, the pointwise topology is finer than any Polish semigroup topology on \(\mathcal{M}_{\mathbb{Q}}\). We start by presenting the proof strategy more thoroughly and defining the so-called _rich_ topology on \(\mathcal{M}_{\mathbb{Q}}\) in Section 3. The details of the proof are contained in Sections 4 and 5, where the former proves that the rich topology has Pseudo-Property \(\overline{\mathbf{X}}\) with respect to the pointwise topology on the automorphism group and the latter focuses on reducing the rich topology to the pointwise topology. ### Related notions Several notions related to the existence of a unique Polish (semi-)group topology have been considered over the years, all of them capturing various degrees to which the topological structure can be _reconstructed_ from the algebraic one. For the purposes of this paper, the most important ones are _automatic homeomorphicity_ as studied e.g. in [1, 2, 1, 1, 2, 3] and _automatic continuity_ as discussed e.g. in [1, 1, 2, 1, 3, 4, 5, 6, 7, 8]. Similar concepts have also been studied, e.g. in [1, 1, 2, 3, 4]. Automatic homeomorphicity, the former notion, means that any algebraic isomorphism from the closed sub-(semi-)group \(S\) of \(\operatorname{Sym}(A)\) (or \(A^{A}\)) in question to another closed sub-(semi-)group \(T\) of \(\operatorname{Sym}(A)\) (or \(A^{A}\)) is indeed a homeomorphism between the respective pointwise topologies on \(S\) and \(T\). This property clearly is a weakening of UPP; it can be paraphrased as "unique _pointwise-like_ semigroup topology". 
In [1], it was shown that \(\mathcal{M}_{\mathbb{Q}}\) has automatic homeomorphicity - this result as well as the fact that UPP holds for \(\operatorname{Aut}(\mathbb{Q},\leq)\) form the central motivation for the present paper. Automatic continuity, the latter notion, is in fact a template for results, parametrised by a class \(\mathcal{K}\) of topological (semi-)groups; traditionally: For the topological (semi-)group \(S\), all algebraic homomorphisms \(S\to H\), where \(H\in\mathcal{K}\), are continuous. The fact that \(\mathcal{M}_{\mathbb{Q}}\)_does not_ have automatic continuity (with respect to the class of second countable topological semigroups), see Proposition 2.3, plays an important role in the present paper - it can be seen as the reason why we need a new method for the proof; see Section 3 for more details. ## 2. Preliminaries and known facts In this section, we make precise the terminology used in the following and give some known results crucial to our reasoning. ### Structures and functions A _(relational) structure_\(\mathbb{A}=\langle A,(R_{i})_{i\in I}\rangle\) consists of a _domain_\(A\) endowed with relations \(R_{i}\) on \(A\) of arity \(m_{i}\), i.e. \(R_{i}\subseteq A^{m_{i}}\). If no misunderstandings can arise, we will not strictly distinguish between the structure \(\mathbb{A}\) and its domain \(A\). Contrary to some works in the area (e.g. [1, 2]), we write the application of a function \(f\) to an element \(a\) as \(f(a)\) and compose functions from right to left, i.e. \(fg:=f\circ g:=(a\mapsto f(g(a)))\). If \(\bar{a}=(a_{1},\ldots,a_{m})\) is a tuple of elements of the domain of \(f\), we write \(f(\bar{a})=(f(a_{1}),\ldots,f(a_{m}))\). A function \(f\colon A\to A\) and a relation \(R\) on \(A\) are _compatible_ if whenever \(\bar{a}\in R\), we have \(f(\bar{a})\in R\). An _endomorphism_ of a structure \(\mathbb{A}=\langle A,(R_{i})_{i\in I}\rangle\) is a function \(f\colon A\to A\) which is compatible with all the relations \(R_{i}\). The semigroup of all endomorphisms is denoted by \(\operatorname{End}(\mathbb{A})\). An _automorphism_ of \(\mathbb{A}\) is a bijective function \(f\colon A\to A\) such that both \(f\) and \(f^{-1}\) are endomorphisms. The group of all automorphisms is denoted by \(\operatorname{Aut}(\mathbb{A})\). In the sequel, we will focus on \(\mathbb{A}=\langle\mathbb{Q},\leq\rangle\), the rational numbers equipped with the non-strict order. As already noted in the introduction, \[\mathcal{M}_{\mathbb{Q}} :=\operatorname{End}(\mathbb{Q},\leq)=\left\{f\colon\mathbb{Q} \to\mathbb{Q}\mid f\text{ increasing}\right\},\] \[\mathcal{G}_{\mathbb{Q}} :=\operatorname{Aut}(\mathbb{Q},\leq)=\left\{f\colon\mathbb{Q} \to\mathbb{Q}\mid f\text{ bijective, (strictly) increasing}\right\}.\] Additionally, it will be useful to embed \(\mathbb{Q}\) into the real numbers \(\mathbb{R}\). Consequently, we will allow intervals with irrational boundary points as well. Differing from standard notation, we only consider the _rational_ points in this interval, unless explicitly mentioned otherwise: for \(\gamma_{1},\gamma_{2}\in\mathbb{R}\cup\{\pm\infty\}\), we put \((\gamma_{1},\gamma_{2}):=\left\{q\in\mathbb{Q}:\gamma_{1}<q<\gamma_{2}\right\}\). To avoid lengthy typesetting, we will denote \(s((-\infty,\gamma))\) by \(s(-\infty,\gamma)\) et cetera. In the same spirit, we will write \(\sup\operatorname{Im}(s)\) as \(\sup s\) and \(\inf\operatorname{Im}(s)\) as \(\inf s\). 
Finally, we abbreviate \(\mathbb{I}:=\mathbb{R}\setminus\mathbb{Q}\) and distinguish intervals as follows: An interval is _rational_ if its boundary points are contained in \(\mathbb{Q}\cup\{\pm\infty\}\), and _irrational_ if its boundary points are contained in \(\mathbb{I}\cup\{\pm\infty\}\). ### Topologies We endow \(A^{A}\) with the _pointwise topology_\(\mathcal{T}_{pw}\), that is the product topology where each copy of \(A\) carries the discrete topology. Explicitly, a basis for the pointwise topology is given by the sets \(\left\{f\in A^{A}:f(x_{1})=y_{1},\ldots,f(x_{n})=y_{n}\right\}\), where \(n\geq 1\) and \(x_{1},\ldots,x_{n},y_{1},\ldots,y_{n}\in A\). If \(A\) is countable, it is a folklore fact that the pointwise topology on \(A^{A}\) is _Polish_ (second countable and completely metrisable) as a countable product of Polish topologies. As can be easily seen, the composition operation \(\circ\colon A^{A}\times A^{A}\to A^{A}\) is continuous with respect to this topology; hence, the pointwise topology is a _semigroup topology_. The induced topology on any \(G_{\delta}\) subsemigroup of \(A^{A}\) with respect to the pointwise topology is a Polish semigroup topology as well; notable examples are the spaces \(\operatorname{End}(\mathbb{A})\) and \(\operatorname{Aut}(\mathbb{A})\) - the former is closed in \(A^{A}\) with respect to the pointwise topology, the latter is closed in the set \(\operatorname{Sym}(A)\) of all permutations on \(A\), which in turn is readily seen to be \(G_{\delta}\) in \(A^{A}\). On any of these spaces, we will refer to the induced topology also as _pointwise topology_ and always denote it by \(\mathcal{T}_{pw}\), unless the underlying set is not clear from the context. We will make frequent use of the left and right translations, defined on any semigroup \(S\) as follows: Given a fixed \(t\in S\), let \[\lambda_{t}\colon S\to S,\quad\lambda_{t}(s):=ts\] \[\rho_{t}\colon S\to S,\quad\rho_{t}(s):=st\] denote the _left_ and _right translation_ on \(S\) by \(t\), respectively. If \(S\) is a topological semigroup, then \(\lambda_{t}\) and \(\rho_{t}\) are continuous maps for any \(t\in S\). In the sequel, we will have to distinguish multiple topologies on the same set; whenever the topology is not clear from the context, we will write \((S,\mathcal{T})\) for the space \(S\) endowed with the topology \(\mathcal{T}\). ### Automatic continuity In Subsection 1.5, we presented a version of automatic continuity for topological (semi-)groups which we now generalise in a straightforward way: **Definition 2.1**.: Let \(S\) be a (semi-)group and let \(\mathcal{T}\) be a topology on \(S\) (which need not be a (semi-)group topology). Given a class \(\mathcal{K}\) of topological (semi-)groups, we say that \((S,\mathcal{T})\) has _automatic continuity_ with respect to \(\mathcal{K}\) if for any \((H,\mathcal{O})\in\mathcal{K}\), all algebraic homomorphisms \(S\to H\) are continuous as maps \((S,\mathcal{T})\to(H,\mathcal{O})\). The present subsection contains references to results which imply that \((\mathcal{M}_{\mathbb{Q}},\mathcal{T}_{pw})\) does not have automatic continuity while \((\mathcal{G}_{\mathbb{Q}},\mathcal{T}_{pw})\) satisfies a very strong version of automatic continuity. We begin with the negative result, reformulating it to match our terminology. **Proposition 2.2** ([1, Proposition 9]).: _Let \(M\) be a \(\mathcal{T}_{pw}\)-closed submonoid of \(A^{A}\) for a countable set \(A\). Suppose that \(M\) contains a submonoid \(N\) such that_ 1.
\(N\) _is not_ \(\mathcal{T}_{pw}\)_-closed in_ \(M\)_;_ 2. _composing any element of_ \(M\) _with an element outside_ \(N\) _yields an element outside_ \(N\)_._ _Then \((M,\mathcal{T}_{pw})\) does not have automatic continuity with respect to the class of all subsemigroups of \(A^{A}\), equipped with the respective pointwise topologies._ By a straightforward application of this observation, we obtain: **Proposition 2.3**.: \((\mathcal{M}_{\mathbb{Q}},\mathcal{T}_{pw})\) _does not have automatic continuity with respect to the class of second countable topological semigroups._ Proof.: We set \(M:=\mathcal{M}_{\mathbb{Q}}\) as well as \[N:=\{f\in\mathcal{M}_{\mathbb{Q}}:\inf f=-\infty\text{ and }\sup f=+\infty\}\] and check the assumptions of Proposition 2.2. Clearly, \(N\) is a submonoid of \(M\). If we define \(f_{n}\in\mathcal{M}_{\mathbb{Q}}\) by \[f_{n}(x):=\begin{cases}0,&-n<x<n\\ x,&x\leq-n\text{ or }x\geq n\end{cases}\] we have \(f_{n}\in N\), but the sequence \((f_{n})_{n\in\mathbb{N}}\) converges with respect to the pointwise topology, namely to the constant function with value \(0\) - which is not in \(N\). Hence, \(N\) is not \(\mathcal{T}_{pw}\)-closed. Finally, if \(g\in M\) and \(f\notin N\), then \(\inf f>-\infty\) or \(\sup f<+\infty\); say \(\sup f<+\infty\) (the other case is symmetric) and pick \(v\in\mathbb{Q}\) with \(\operatorname{Im}(f)\subseteq(-\infty,v]\). Then \(\operatorname{Im}(fg)\subseteq(-\infty,v]\) and \(\operatorname{Im}(gf)\subseteq(-\infty,g(v)]\), so \(fg,gf\notin N\). By Proposition 2.2, the topological semigroup \((\mathcal{M}_{\mathbb{Q}},\mathcal{T}_{pw})\) does not have automatic continuity with respect to the class of all subsemigroups of \(A^{A}\). Since all these subsemigroups are second countable, \((\mathcal{M}_{\mathbb{Q}},\mathcal{T}_{pw})\) in particular does not have automatic continuity with respect to the class of second countable topological semigroups. On the other hand, \(\mathcal{G}_{\mathbb{Q}}\) with the pointwise topology does have automatic continuity by the following result by Rosendal and Solecki (which we again reformulate to fit our notation). **Theorem 2.4** ([1, Corollary 5] combined with the remarks before [13, Corollary 3]).: \((\mathcal{G}_{\mathbb{Q}},\mathcal{T}_{pw})\) _has automatic continuity with respect to the class of second countable topological groups._ Explicitly, this means: If \((H,\mathcal{O})\) is a second countable topological group, then any group homomorphism \(\varphi\colon(\mathcal{G}_{\mathbb{Q}},\mathcal{T}_{pw})\to(H,\mathcal{O})\) is continuous. Considering \(\mathcal{G}_{\mathbb{Q}}\) as a semigroup and forgetting that it is in fact a group, one can ask whether it is sufficient that \(H\) be a second countable topological semigroup and \(\varphi\) be multiplicative. By [1, Proposition 4.1], the notions of automatic continuity with respect to the classes of second countable topological groups and second countable topological semigroups are indeed equivalent, so we obtain: **Proposition 2.5**.: \((\mathcal{G}_{\mathbb{Q}},\mathcal{T}_{pw})\) _has automatic continuity with respect to the class of second countable topological semigroups, explicitly: If \((H,\mathcal{O})\) is a second countable topological semigroup, then any semigroup homomorphism \((\mathcal{G}_{\mathbb{Q}},\mathcal{T}_{pw})\to(H,\mathcal{O})\) is continuous._ ### Coarsest topology As it turns out, even very mild assumptions on a topology on \(\mathcal{M}_{\mathbb{Q}}\) are enough for ascertaining that the pointwise topology is coarser than the given topology. This was shown in [1] using a construction from [1].
In order to keep the present paper as self-contained as possible, we include a proof for our special case (we refer to [1] for more details, in particular on the abstract properties that are used and on the link to the Zariski topology mentioned in Subsection 1.3): **Theorem 2.6** ([1, Lemma 5.1] and [1, Theorem 3.3]).: _Let \(\mathcal{T}\) be a Polish semigroup topology on \(\mathcal{M}_{\mathbb{Q}}\). Then \(\mathcal{T}_{pw}\subseteq\mathcal{T}\), i.e. \(\mathcal{T}_{pw}\) is coarser than \(\mathcal{T}\)._ Proof.: Consider the maps \(h_{y}\colon\mathbb{Q}\to\mathbb{Q}\) defined for all \(y\in\mathbb{Q}\) by \[h_{y}(x):=\begin{cases}y-1,&x<y\\ y,&x=y\\ y+1,&x>y\end{cases}\] as well as the constant functions \(c_{x}\) with value \(x\in\mathbb{Q}\). Clearly, both \(h_{y}\) and \(c_{x}\) are elements of \(\mathcal{M}_{\mathbb{Q}}\). It suffices to show that the subbasic open sets \(\left\{f\in\mathcal{M}_{\mathbb{Q}}:f(x)=y\right\}\), where \(x,y\in\mathbb{Q}\), are contained in \(\mathcal{T}\). In other words, we must prove that \(\left\{f\in\mathcal{M}_{\mathbb{Q}}:f(x)\neq y\right\}\) is \(\mathcal{T}\)-closed. For \(f\in\mathcal{M}_{\mathbb{Q}}\), we observe \[f(x)\neq y\Leftrightarrow fc_{x}\neq c_{y}\Leftrightarrow h_{y}fc_{x}\neq c_{y }\Leftrightarrow h_{y}fc_{x}\in\{c_{y-1},c_{y+1}\}.\] The finite set \(\{c_{y-1},c_{y+1}\}\) is \(\mathcal{T}\)-closed since \(\mathcal{T}\) is Polish (and thus satisfies the first separation axiom). Using the \(\mathcal{T}\)-continuity of the translations \(\lambda_{h_{y}}\) and \(\rho_{c_{x}}\), we obtain that \[\left\{f\in\mathcal{M}_{\mathbb{Q}}:f(x)\neq y\right\}=\lambda_{h_{y}}^{-1}( \rho_{c_{x}}^{-1}(\{c_{y-1},c_{y+1}\}))\] is closed as well. ### Back&Forth In our proofs, we will repeatedly use the "Back&Forth" method, see for instance [1]. **Definition 2.7**.: Let \(\mathbb{X}\) and \(\mathbb{Y}\) be countably infinite structures in the same language and let \(\mathcal{S}\) be a set of finite partial homomorphisms from \(\mathbb{X}\) to \(\mathbb{Y}\). 1. \(\mathcal{S}\) is a _Forth system_ between \(\mathbb{X}\) and \(\mathbb{Y}\) if for all \(m\in\mathcal{S}\) and all \(x\in\mathbb{X}\) with \(x\notin\operatorname{Dom}(m)\), there exists \(m^{\prime}\in\mathcal{S}\) such that \(m^{\prime}\) extends \(m\) and \(x\in\operatorname{Dom}(m^{\prime})\). 2. \(\mathcal{S}\) is a _Back system_ between \(\mathbb{X}\) and \(\mathbb{Y}\) if for all \(m\in\mathcal{S}\) and all \(y\in\mathbb{Y}\) with \(y\notin\operatorname{Im}(m)\), there exists \(m^{\prime}\in\mathcal{S}\) such that \(m^{\prime}\) extends \(m\) and \(y\in\operatorname{Im}(m^{\prime})\). 3. \(\mathcal{S}\) is a _Back&Forth system_ between \(\mathbb{X}\) and \(\mathbb{Y}\) if it is both a Back system and a Forth system. Iteratively extending finite partial homomorphisms so that their domains exhaust the entire structure \(\mathbb{X}\) (Forth) or in an alternating fashion so that their domains and images exhaust \(\mathbb{X}\) and \(\mathbb{Y}\), respectively (Back&Forth), one obtains the following folklore result: **Lemma 2.8**.: _Let \(\mathbb{X}\) and \(\mathbb{Y}\) be countably infinite structures in the same language._ 1. _If_ \(\mathcal{S}\) _is a Forth system between_ \(\mathbb{X}\) _and_ \(\mathbb{Y}\) _which is closed under restriction, then any_ \(m\in\mathcal{S}\) _can be extended to a total homomorphism_ \(s\colon\mathbb{X}\to\mathbb{Y}\) _such that every finite restriction of_ \(s\) _is contained in_ \(\mathcal{S}\)_. 
In particular, if_ \(\mathcal{S}\) _consists of injective finite partial homomorphisms, then_ \(s\) _can be picked to be injective as well._ 2. _If_ \(\mathcal{S}\) _is a Back&Forth system between_ \(\mathbb{X}\) _and_ \(\mathbb{Y}\) _which is closed under restriction, then any_ \(m\in\mathcal{S}\) _can be extended to a total and surjective homomorphism_ \(s\colon\mathbb{X}\to\mathbb{Y}\) _such that every finite restriction of_ \(s\) _is contained in_ \(\mathcal{S}\)_. In particular, if_ \(\mathcal{S}\) _consists of finite partial isomorphisms, then_ \(s\) _can be picked to be an automorphism._ In the sequel, we will repeatedly need an answer to the following question: given \(s,f\in\mathcal{M}_{\mathbb{Q}}\), under which conditions does there exist a map \(s^{\prime}\in\mathcal{M}_{\mathbb{Q}}\) such that \(s=fs^{\prime}\)? **Lemma 2.9**.: _Let \(s,f\in\mathcal{M}_{\mathbb{Q}}\) such that \(\operatorname{Im}(s)\subseteq\operatorname{Im}(f)\)._ 1. _Any finite partial increasing map_ \(m_{0}\) _from_ \(\mathbb{Q}\) _to_ \(\mathbb{Q}\) _satisfying_ \(s(p)=fm_{0}(p)\) _for all_ \(p\in\operatorname{Dom}(m_{0})\) _can be extended to_ \(s^{\prime}\in\mathcal{M}_{\mathbb{Q}}\) _with_ \(s=fs^{\prime}\)_._ 2. _Additionally suppose that for each_ \(w\in\operatorname{Im}(f)\) _the preimage_ \(f^{-1}\{w\}\) _is an irrational interval. Then any finite partial increasing_ injective _map_ \(m_{0}\) _from_ \(\mathbb{Q}\) _to_ \(\mathbb{Q}\) _satisfying_ \(s(p)=fm_{0}(p)\) _for all_ \(p\in\operatorname{Dom}(m_{0})\) _can be extended to an_ injective __\(s^{\prime}\in\mathcal{M}_{\mathbb{Q}}\) _with_ \(s=fs^{\prime}\)_._ Proof.: The proofs of both statements are almost parallel: one verifies that the system \(\mathcal{S}\) of all finite partial increasing [for (ii): strictly increasing] maps \(m\) from \(\mathbb{Q}\) to \(\mathbb{Q}\) satisfying \(s(p)=fm(p)\) for all \(p\in\operatorname{Dom}(m)\) is a Forth system and applies Lemma 2.8. In Section 5, we will also employ the following variant. **Definition 2.10**.: Let \(\mathbb{X}\) and \(\mathbb{Y}\) be countably infinite structures in the same language and let \(A\subseteq\mathbb{X}\) as well as \(C\subseteq\mathbb{Y}\). Let further \(\mathcal{S}\) be a set of finite partial homomorphisms from \(\mathbb{X}\) to \(\mathbb{Y}\). 1. \(\mathcal{S}\) is an \((A,C)\)_-Back system_ between \(\mathbb{X}\) and \(\mathbb{Y}\) if the following holds: For all \(m\in\mathcal{S}\) and all1\(y\in C\), there exists \(m^{\prime}\in\mathcal{S}\) such that \(m^{\prime}\) extends \(m\) and \(\exists x\in A\cap\operatorname{Dom}(m^{\prime})\colon m^{\prime}(x)=y\). Footnote 1: Note: Contrary to “Back” from above, \(y\in\operatorname{Im}(m)\) is in general possible! 2. \(\mathcal{S}\) is an \((A,C)\)_-Back&Forth system_ between \(\mathbb{X}\) and \(\mathbb{Y}\) if it is both an \((A,C)\)-Back system and a Forth system. **Lemma 2.11**.: _Let \(\mathbb{X}\) and \(\mathbb{Y}\) be countably infinite structures in the same language and let \(A\subseteq\mathbb{X}\) as well as \(C\subseteq\mathbb{Y}\). 
If \(\mathcal{S}\) is an \((A,C)\)-Back&Forth system between \(\mathbb{X}\) and \(\mathbb{Y}\), then any \(m\in\mathcal{S}\) can be extended to a total homomorphism \(s\colon\mathbb{X}\to\mathbb{Y}\) such that_ \[\forall y\in C\colon s^{-1}\{y\}\cap A\neq\emptyset.\] Proof.: The argument proceeds in almost the same way as a standard Back&Forth construction: Instead of applying a Back step to all elements of \(\mathbb{Y}\setminus\operatorname{Im}(m)\), one applies an \((A,C)\)-Back step to all elements of \(C\) (even if they are contained in \(\operatorname{Im}(m)\)). ## 3. Finest topology - strategy and definitions In this section, we elaborate on our strategy for the proof of Theorem A by introducing our generalisation of Property \(\mathbf{X}\) called _Pseudo-Property_\(\overline{\mathbf{X}}\) and defining the _rich_ topology on \(\mathcal{M}_{\mathbb{Q}}\). ### Pseudo-Property \(\overline{\mathbf{X}}\) **Definition 3.1**.: Let \(S\) be a monoid with neutral element \(1_{S}\) endowed with a topology2 \(\mathcal{T}\), let \(D\subseteq S\) be a subset of \(S\) endowed with a topology \(\mathcal{T}_{D}\) and let \(m\geq 1\). Then \((S,\mathcal{T})\) has _Pseudo-Property_\(\overline{\mathbf{X}}\) of length \(m\) with respect to \((D,\mathcal{T}_{D})\) if the following holds: For all \(s\in S\) there exist \(e_{s},h_{s}^{(1)},\ldots,h_{s}^{(m+1)}\in S\) and \(a_{s}^{(1)},\ldots,a_{s}^{(m)}\in D\) such that Footnote 2: Note: \((S,\mathcal{T})\) need not be a topological semigroup! 1. \(e_{s}\) is _left-invertible_ in \(S\), i.e. there exists \(p\in S\) such that \(pe_{s}=1_{S}\). 2. \(e_{s}s=h_{s}^{(m+1)}a_{s}^{(m)}h_{s}^{(m)}a_{s}^{(m-1)}\dots a_{s}^{(1)}h_{s}^{(1)}\)_._ 3. _For all_ \(V^{(1)},\dots,V^{(m)}\in\mathcal{T}_{D}\) _with_ \(a_{s}^{(i)}\in V^{(i)}\)_, there exists_ \(U\in\mathcal{T}\) _with_ \(s\in U\) _such that_ \[e_{s}U\subseteq h_{s}^{(m+1)}V^{(m)}h_{s}^{(m)}V^{(m-1)}\dots V^{(1)}h_{s}^{(1)}.\] _Remark 3.2_.: Pseudo-Property \(\overline{\mathbf{X}}\) of length \(m\) can thus be verified as follows: Given \(s\in S\), we find suitable \(e_{s},h_{s}^{(1)},\dots,h_{s}^{(m+1)}\in S\) with \(e_{s}\) left-invertible and devise a method to write \(e_{s}s=h_{s}^{(m+1)}a_{s}^{(m)}h_{s}^{(m)}a_{s}^{(m-1)}\dots a_{s}^{(1)}h_{s}^{(1)}\) (where \(a_{s}^{(i)}\in D\)) in such a way that for arbitrary \(\mathcal{T}_{D}\)-neighbourhoods \(V^{(i)}\) of \(a_{s}^{(i)}\), there exists a \(\mathcal{T}\)-neighbourhood \(U\) of \(s\) such that our method applied to any \(\tilde{s}\in U\) yields \(\tilde{a}^{(i)}\in V^{(i)}\) (not just \(\tilde{a}^{(i)}\in D\)) with \(e_{s}\tilde{s}=h_{s}^{(m+1)}\tilde{a}^{(m)}h_{s}^{(m)}\tilde{a}^{(m-1)}\dots \tilde{a}^{(1)}h_{s}^{(1)}\). Thus, this neighbourhood \(U\) must be small enough to ensure two properties: first, it must encode enough information about \(s\) to make sure that the _same_ auxiliary elements \(e_{s},h_{s}^{(1)},\dots,h_{s}^{(m+1)}\) can be used for \(\tilde{s}\); second, it must ascertain that \(\tilde{s}\) is "close enough" to \(s\) so that the resulting elements \(\tilde{a}^{(i)}\) are "close enough" to \(a_{s}^{(i)}\). Note the following equilibrium at the heart of Pseudo-Property \(\overline{\mathbf{X}}\): Increasing the length \(m\), it becomes easier to decompose a large class of elements \(s\) in the desired form. However, there are more conditions \(\tilde{a}^{(i)}\in V^{(i)}\) to be taken care of, potentially interacting with each other and yielding a more complex situation.
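For orientation, the only instance needed for \(\mathcal{M}_{\mathbb{Q}}\) later on is the one of length \(m=2\): writing \(h_{s}^{(1)}=g_{s}\), \(a_{s}^{(1)}=b_{s}\), \(h_{s}^{(2)}=h_{s}\), \(a_{s}^{(2)}=a_{s}\) and \(h_{s}^{(3)}=f_{s}\), the decomposition and neighbourhood conditions of Definition 3.1 specialise to \[e_{s}s=f_{s}a_{s}h_{s}b_{s}g_{s}\qquad\text{and}\qquad e_{s}U\subseteq f_{s}Vh_{s}Wg_{s}\] for \(\mathcal{T}_{D}\)-neighbourhoods \(V\ni a_{s}\) and \(W\ni b_{s}\); this is precisely the shape of the decomposition that will be constructed in Section 4.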
The notation \(\overline{\mathbf{X}}\) instead of \(\mathbf{X}\) refers to the arbitrary number \(m\) of elements of \(D\) on the right hand side, while the term "Pseudo" refers to the composition with the left-invertible element \(e_{s}\) on the left hand side, see [1, 1, 10]. Thus, the "traditional" Property \(\mathbf{X}\) from Definition 1.1 corresponds in our terminology to Property \(\overline{\mathbf{X}}\) of length \(1\) (without "Pseudo"). We will apply Pseudo-Property \(\overline{\mathbf{X}}\) via the following proposition which generalises parts of [1, Theorem 3.1]: **Proposition 3.3**.: _Let \(S\) be a monoid endowed with a topology \(\mathcal{T}\) and let \(D\subseteq S\) be a subset of \(S\) endowed with a topology \(\mathcal{T}_{D}\). If \((S,\mathcal{T})\) has Pseudo-Property \(\overline{\mathbf{X}}\) with respect to \((D,\mathcal{T}_{D})\), then the following statements hold:_ 1. _If_ \((H,\mathcal{O})\) _is a topological semigroup and_ \(\varphi\colon S\to H\) _is a homomorphism such that the restriction_ \(\varphi|_{D}\) _is continuous as a map_ \(\varphi|_{D}\colon(D,\mathcal{T}_{D})\to(H,\mathcal{O})\)_, then_ \(\varphi\) _is continuous as a map_ \(\varphi\colon(S,\mathcal{T})\to(H,\mathcal{O})\)_._ 2. _If_ \(D\) _is a semigroup such that_ \((D,\mathcal{T}_{D})\) _has automatic continuity with respect to a class_ \(\mathcal{K}\) _of topological semigroups, then_ \((S,\mathcal{T})\) _also has automatic continuity with respect to_ \(\mathcal{K}\)_._ Proof.: Since 1 immediately implies 2, we only prove the former. We denote the neutral element of \(S\) by \(1_{S}\). Without loss of generality, \(\varphi\) is surjective. Therefore, \(H\) can be assumed to be a monoid with neutral element \(\varphi(1_{S})\). Let \(O\in\mathcal{O}\) and \(s\in S\) such that \(\varphi(s)\in O\). We need to find \(U\in\mathcal{T}\) such that \(s\in U\) and \(\varphi(U)\subseteq O\). Let \(m\) be the length of Pseudo-Property \(\overline{\mathbf{X}}\). Thus, there exist \(e_{s},h_{s}^{(1)},\dots,h_{s}^{(m+1)}\in S\) and \(a_{s}^{(1)},\dots,a_{s}^{(m)}\in D\) with \(e_{s}\) left-invertible such that \[e_{s}s=h_{s}^{(m+1)}a_{s}^{(m)}h_{s}^{(m)}a_{s}^{(m-1)}\dots a_{s}^{(1)}h_{s}^{ (1)}\] and such that for arbitrary \(V^{(1)},\dots,V^{(m)}\in\mathcal{T}_{D}\) with \(a_{s}^{(i)}\in V^{(i)}\), there exists \(U\in\mathcal{T}\) with \(s\in U\) satisfying \[e_{s}U\subseteq h_{s}^{(m+1)}V^{(m)}h_{s}^{(m)}V^{(m-1)}\dots V^{(1)}h_{s}^{( 1)}.\] Denote the left inverse of \(e_{s}\) by \(p\). The left translations \[\lambda_{\varphi(e_{s})}\colon(H,\mathcal{O})\to(H,\mathcal{O})\quad\text{and} \quad\lambda_{\varphi(p)}\colon(H,\mathcal{O})\to(H,\mathcal{O})\] are continuous (since \(\mathcal{O}\) is a semigroup topology). Further, \[\lambda_{\varphi(e_{s})}\colon(H,\mathcal{O})\to(\varphi(e_{s})H,\mathcal{O} |_{\varphi(e_{s})H})\quad\text{and}\quad\lambda_{\varphi(p)}\colon(\varphi(e_{ s})H,\mathcal{O}|_{\varphi(e_{s})H})\to(H,\mathcal{O})\] form inverse maps because \(\varphi(p)\) is a left inverse of \(\varphi(e_{s})\) - here we use that \(H\) is a monoid with neutral element \(\varphi(1_{S})\). Thus, \(\lambda_{\varphi(e_{s})}\colon(H,\mathcal{O})\to(\varphi(e_{s})H,\mathcal{O}|_{ \varphi(e_{s})H})\) is a homeomorphism and we obtain \(\varphi(e_{s})O=\lambda_{\varphi(e_{s})}(O)=P\cap\varphi(e_{s})H\) for some \(P\in\mathcal{O}\). 
Consequently, \[\varphi(h_{s}^{(m+1)})\varphi(a_{s}^{(m)})\varphi(h_{s}^{(m)})\varphi(a_{s}^{(m-1)})\dots\varphi(a_{s}^{(1)})\varphi(h_{s}^{(1)})=\varphi(e_{s})\varphi(s)\in P\cap\varphi(e_{s})H.\] Using that the map \((b^{(1)},\dots,b^{(m)})\mapsto\varphi(h_{s}^{(m+1)})b^{(m)}\varphi(h_{s}^{(m)})b^{(m-1)}\dots b^{(1)}\varphi(h_{s}^{(1)})\) is continuous with respect to \(\mathcal{O}\) (since \(\mathcal{O}\) is a semigroup topology) yields sets \(W^{(i)}\in\mathcal{O}\) such that \(\varphi(a_{s}^{(i)})\in W^{(i)}\) and \[\varphi(h_{s}^{(m+1)})W^{(m)}\varphi(h_{s}^{(m)})W^{(m-1)}\dots W^{(1)}\varphi(h_{s}^{(1)})\subseteq P.\] By the assumed continuity of \(\varphi|_{D}\colon(D,\mathcal{T}_{D})\to(H,\mathcal{O})\), the preimages \(V^{(i)}:=\varphi|_{D}^{-1}(W^{(i)})\) are contained in \(\mathcal{T}_{D}\). Thus, we can invoke Pseudo-Property \(\overline{\mathbf{X}}\) to obtain a set \(U\in\mathcal{T}\) such that \(s\in U\) and \[e_{s}U\subseteq h_{s}^{(m+1)}V^{(m)}h_{s}^{(m)}V^{(m-1)}\dots V^{(1)}h_{s}^{(1)}.\] Applying \(\varphi\), we conclude \[\varphi(e_{s})\varphi(U)\subseteq\varphi(h_{s}^{(m+1)})W^{(m)}\varphi(h_{s}^{(m)})W^{(m-1)}\dots W^{(1)}\varphi(h_{s}^{(1)})\subseteq P,\] and thus \(\varphi(e_{s})\varphi(U)\subseteq P\cap\varphi(e_{s})H=\varphi(e_{s})O\). Multiplying with \(\varphi(p)\) from the left, we obtain \(\varphi(U)\subseteq O\) as desired. The previous proposition implies the already mentioned fact that \((\mathcal{M}_{\mathbb{Q}},\mathcal{T}_{pw})\) _cannot_ have Pseudo-Property \(\overline{\mathbf{X}}\) _of any length_ with respect to \((\mathcal{G}_{\mathbb{Q}},\mathcal{T}_{pw})\): for otherwise, Proposition 3.3 in tandem with Proposition 2.5 would yield that \((\mathcal{M}_{\mathbb{Q}},\mathcal{T}_{pw})\) has automatic continuity with respect to the class of second countable topological semigroups, in violation of Proposition 2.3. Therefore, we have to improve our approach by enriching the topology on \(\mathcal{M}_{\mathbb{Q}}\) to make Pseudo-Property \(\overline{\mathbf{X}}\) possible. To motivate, consider \(s\in\mathcal{M}_{\mathbb{Q}}\) which is "unbounded on both sides", i.e. \(\inf s=-\infty\) and \(\sup s=+\infty\). We strive for a representation of the form \(e_{s}s=h_{s}^{(m+1)}a_{s}^{(m)}h_{s}^{(m)}a_{s}^{(m-1)}\dots a_{s}^{(1)}h_{s}^{(1)}\) with \(e_{s}\) left-invertible. In particular, \(e_{s}\) has to be unbounded on both sides as well, thus so too is the right hand side \(h_{s}^{(m+1)}a_{s}^{(m)}h_{s}^{(m)}a_{s}^{(m-1)}\dots a_{s}^{(1)}h_{s}^{(1)}\) and consequently each \(h_{s}^{(i)}\). For any \(V^{(i)}\subseteq\mathcal{G}_{\mathbb{Q}}\), the set \(h_{s}^{(m+1)}V^{(m)}h_{s}^{(m)}V^{(m-1)}\dots V^{(1)}h_{s}^{(1)}\) therefore only contains functions which are unbounded on both sides. Hence, any set \(U\) such that \(e_{s}U\subseteq h_{s}^{(m+1)}V^{(m)}h_{s}^{(m)}V^{(m-1)}\dots V^{(1)}h_{s}^{(1)}\) must consist of such functions. Thus, in any topology on \(\mathcal{M}_{\mathbb{Q}}\) which yields Pseudo-Property \(\overline{\mathbf{X}}\), the set of all functions which are unbounded on both sides must have nonempty interior. Similar reasonings apply to the remaining kinds of "boundedness behaviour". We define several types of subsets of \(\mathcal{M}_{\mathbb{Q}}\): **Definition 3.4**.:

0. \(O_{x,y}^{(0)}:=\{s\in\mathcal{M}_{\mathbb{Q}}:s(x)=y\}\) (pointwise), for \(x,y\in\mathbb{Q}\).
1. \(O_{I,J}^{(1)}:=\{s\in\mathcal{M}_{\mathbb{Q}}:s(I)\subseteq J\}\) (generalised pointwise), where either \(I=(-\infty,p)\) and \(J\in\{(-\infty,q],(-\infty,q)\}\), or \(I=(p,+\infty)\) and \(J\in\{[q,+\infty),(q,+\infty)\}\), for \(p,q\in\mathbb{Q}\).
2. \(O_{LU}^{(2)}:=\{s\in\mathcal{M}_{\mathbb{Q}}:\inf\operatorname{Im}(s)\in L,\ \sup\operatorname{Im}(s)\in U\}\) (boundedness types), where \(L=\mathbb{R}\) or \(L=\{-\infty\}\), and \(U=\mathbb{R}\) or \(U=\{+\infty\}\). Explicitly, these are the following four sets: \[O_{\mathbb{R},\mathbb{R}}^{(2)}:=\{s\in\mathcal{M}_{\mathbb{Q}}:\inf\operatorname{Im}(s)\in\mathbb{R},\ \sup\operatorname{Im}(s)\in\mathbb{R}\}\quad\text{(bounded-bounded)}\] \[O_{-\infty,\mathbb{R}}^{(2)}:=\{s\in\mathcal{M}_{\mathbb{Q}}:\inf\operatorname{Im}(s)=-\infty,\ \sup\operatorname{Im}(s)\in\mathbb{R}\}\quad\text{(unbounded-bounded)}\] \[O_{\mathbb{R},+\infty}^{(2)}:=\{s\in\mathcal{M}_{\mathbb{Q}}:\inf\operatorname{Im}(s)\in\mathbb{R},\ \sup\operatorname{Im}(s)=+\infty\}\quad\text{(bounded-unbounded)}\] \[O_{-\infty,+\infty}^{(2)}:=\{s\in\mathcal{M}_{\mathbb{Q}}:\inf\operatorname{Im}(s)=-\infty,\ \sup\operatorname{Im}(s)=+\infty\}\quad\text{(unbounded-unbounded)}\]
3. \(O_{K}^{(3)}:=\{s\in\mathcal{M}_{\mathbb{Q}}:\operatorname{Im}(s)\cap K=\emptyset\}\) (avoiding), where \(K\in\{[q_{1},q_{2}],[q_{1},q_{2}),(q_{1},q_{2}],(q_{1},q_{2})\}\) for \(q_{1},q_{2}\in\mathbb{Q}\cup\{\pm\infty\}\), \(q_{1}\leq q_{2}\).
4. \(O_{K}^{(3^{opn})}:=\{s\in\mathcal{M}_{\mathbb{Q}}:\operatorname{Im}(s)\cap K=\emptyset\}\) (avoiding, open constraint), where \(K=(q_{1},q_{2})\) for \(q_{1},q_{2}\in\mathbb{Q}\cup\{\pm\infty\}\), \(q_{1}<q_{2}\).

We mention explicitly that the sets formed analogously to type 1 but with closed intervals \(I\) (type \(1^{cls}\)) are already encompassed by type 0, i.e. the pointwise topology. For instance, if \(I=(-\infty,p]\) and \(J=(-\infty,q]\) or \(J=(-\infty,q)\), then \(\{s\in\mathcal{M}_{\mathbb{Q}}:s(I)\subseteq J\}=\bigcup_{y\in J}\left\{s\in\mathcal{M}_{\mathbb{Q}}:s(p)=y\right\}\). We will make use of this fact in Section 5. The types of sets defined above yield a template for constructing topologies. **Definition 3.5**.: If \(M\subseteq\{0,1,1^{cls},2,3,3^{opn}\}\), then \(\mathcal{T}_{M}\) is the topology generated by the sets of the types occurring in \(M\). We further define the _rich topology_\(\mathcal{T}_{rich}:=\mathcal{T}_{0123}\); explicitly, this is the topology generated by \[\left\{O_{x,y}^{(0)}:x,y\in\mathbb{Q}\right\} \cup\left\{O_{I,J}^{(1)}:I=(-\infty,p),J\in\{(-\infty,q],(-\infty,q)\},p,q\in\mathbb{Q}\right\}\] \[\cup\left\{O_{I,J}^{(1)}:I=(p,+\infty),J\in\{[q,+\infty),(q,+\infty)\},p,q\in\mathbb{Q}\right\}\] \[\cup\left\{O_{\mathbb{R},\mathbb{R}}^{(2)},O_{-\infty,\mathbb{R}}^{(2)},O_{\mathbb{R},+\infty}^{(2)},O_{-\infty,+\infty}^{(2)}\right\}\] \[\cup\left\{O_{K}^{(3)}:K\in\{[q_{1},q_{2}],[q_{1},q_{2}),(q_{1},q_{2}],(q_{1},q_{2})\},q_{1},q_{2}\in\mathbb{Q}\cup\{\pm\infty\},q_{1}\leq q_{2}\right\}.\] If \(x_{1},\ldots,x_{n},y_{1},\ldots,y_{n}\in\mathbb{Q}\), it will sometimes be convenient to abbreviate \(O_{\bar{x},\bar{y}}^{(0)}:=\bigcap_{i=1}^{n}O_{x_{i},y_{i}}^{(0)}\).
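The following stdlib-Python sketch, an illustrative toy of ours rather than anything from the paper, spells out what membership in some of these generating sets amounts to. Since no finite set of values determines the boundedness type of a map, the toy represents each sample map symbolically, as a callable together with its declared image infimum and supremum.

```python
from fractions import Fraction as Q

NEG_INF, POS_INF = "-inf", "+inf"

class MonotoneMap:
    """An increasing map Q -> Q given as a callable plus the (declared)
    infimum and supremum of its image; the declarations stand in for
    information that no finite set of values can determine."""
    def __init__(self, func, inf_im, sup_im):
        self.func, self.inf_im, self.sup_im = func, inf_im, sup_im

    def in_O0(self, x, y):                     # type (0): s(x) = y
        return self.func(x) == y

    def boundedness_type(self):                # type (2) classification
        lower = "unbounded" if self.inf_im == NEG_INF else "bounded"
        upper = "unbounded" if self.sup_im == POS_INF else "bounded"
        return f"{lower}-{upper}"

    def in_O3_closed(self, q1, q2):            # type (3) with K = [q1, q2]
        # Im(s) avoids [q1, q2] whenever the whole image lies strictly
        # below q1 or strictly above q2; the declared bounds certify
        # these two easy cases, and this toy reports False otherwise.
        if self.sup_im not in (NEG_INF, POS_INF) and self.sup_im < q1:
            return True
        if self.inf_im not in (NEG_INF, POS_INF) and self.inf_im > q2:
            return True
        return False

identity = MonotoneMap(lambda x: x, NEG_INF, POS_INF)
# truncation of the identity into [0, 1]: bounded on both sides
trunc = MonotoneMap(lambda x: min(max(x, Q(0)), Q(1)), Q(0), Q(1))

print(identity.boundedness_type())       # unbounded-unbounded
print(trunc.boundedness_type())          # bounded-bounded
print(trunc.in_O0(Q(1, 2), Q(1, 2)))     # True: trunc fixes 1/2
print(trunc.in_O3_closed(Q(2), Q(3)))    # True: image avoids [2, 3]
```

With this terminology, we can formulate our main technical results.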
**Proposition 3.6**.: \((\mathcal{M}_{\mathbb{Q}},\mathcal{T}_{rich})\) _has Pseudo-Property \(\overline{\mathbf{X}}\) of length 2 with respect to \((\mathcal{G}_{\mathbb{Q}},\mathcal{T}_{pw})\)._ Let us note that we deem it unlikely that \(\mathcal{M}_{\mathbb{Q}}\) equipped with any meaningful topology could have Pseudo-Property \(\overline{\mathbf{X}}\) of length 1 (so Pseudo-Property \(\mathbf{X}\)) with respect to \((\mathcal{G}_{\mathbb{Q}},\mathcal{T}_{pw})\) since it is only the second automorphism which gives us enough flexibility and control over discontinuity points (see Definition 4.1 for this notion). **Proposition 3.7**.: _Let \(\mathcal{T}\) be a Polish semigroup topology on \(\mathcal{M}_{\mathbb{Q}}\) such that \(\mathcal{T}_{pw}\subseteq\mathcal{T}\subseteq\mathcal{T}_{rich}\). Then \(\mathcal{T}=\mathcal{T}_{pw}\)._ Before we get to their proofs, let us comment on how Theorem A follows from these results. Proof (of Theorem A given Propositions 3.6 and 3.7).: Let \(\mathcal{T}\) be a Polish semigroup topology on \(\mathcal{M}_{\mathbb{Q}}\). By Theorem 2.6, we obtain \(\mathcal{T}_{pw}\subseteq\mathcal{T}\). On the other hand, we note that \((\mathcal{G}_{\mathbb{Q}},\mathcal{T}_{pw})\) has automatic continuity with respect to the class of second countable topological semigroups by Proposition 2.5. Combining Proposition 3.6 with Proposition 3.3(ii) yields that \((\mathcal{M}_{\mathbb{Q}},\mathcal{T}_{rich})\) has automatic continuity with respect to the class of second countable topological semigroups as well. Since \((\mathcal{M}_{\mathbb{Q}},\mathcal{T})\) is second countable, the identity map \(\operatorname{id}:(\mathcal{M}_{\mathbb{Q}},\mathcal{T}_{rich})\to(\mathcal{M }_{\mathbb{Q}},\mathcal{T})\) is therefore continuous, in other words \(\mathcal{T}\subseteq\mathcal{T}_{rich}\). By Proposition 3.7, we finally conclude \(\mathcal{T}=\mathcal{T}_{pw}\). The proofs of Propositions 3.6 and 3.7 are the subject of Sections 4 and 5, respectively. In the former section, we will find _generic_ maps \(e,f,g,h\in\mathcal{M}_{\mathbb{Q}}\) (with \(e\) left-invertible) so that the compositions \(fahbg\) for \(a,b\in\mathcal{G}_{\mathbb{Q}}\) exhaust the maps \(es\) for a great variety of \(s\in\mathcal{M}_{\mathbb{Q}}\). Further, we will - roughly speaking - analyse how the compositions \(fahbg\) change with varying \(a,b\in\mathcal{G}_{\mathbb{Q}}\). Some of the complexity arises from the requirement that \(a,b\) be automorphisms. The latter section has a different flavour in that we can allow maps to vary within \(\mathcal{M}_{\mathbb{Q}}\), yielding less intricate constructions. Nonetheless, most (but not all) intermediate results can be reformulated as a - albeit easier - Property \(\overline{\mathbf{X}}\)-type statement, namely with respect to the entire semigroup \(\mathcal{M}_{\mathbb{Q}}\) (equipped with different topologies) instead of \(\mathcal{G}_{\mathbb{Q}}\). One major exception (Proposition 5.20) crucially employs regularity of the topology \(\mathcal{T}\) in combination with Polishness and cannot be reformulated as a (Pseudo-)Property \(\overline{\mathbf{X}}\)-type statement, for good reason: if the proof of Proposition 3.7 just consisted of a series of such statements, we could start from Proposition 3.6 and repeatedly apply Proposition 3.3 to show that \((\mathcal{M}_{\mathbb{Q}},\mathcal{T}_{pw})\) has automatic continuity with respect to the class of second countable topological semigroups, contradicting Proposition 2.3. ## 4. 
The rich topology has Pseudo-Property \(\overline{\mathbf{X}}\)

This section is devoted to proving Proposition 3.6. With Remark 3.2 in mind, we want to find a decomposition \(e_{s}s=f_{s}a_{s}h_{s}b_{s}g_{s}\) of a given \(s\in\mathcal{M}_{\mathbb{Q}}\) with \(e_{s},f_{s},g_{s},h_{s}\in\mathcal{M}_{\mathbb{Q}}\) and \(a_{s},b_{s}\in\mathcal{G}_{\mathbb{Q}}\) as well as a \(\mathcal{T}_{rich}\)-neighbourhood \(U\) of \(s\) such that for any \(\tilde{s}\in U\), we can similarly decompose \(e_{s}\tilde{s}=f_{s}\tilde{a}h_{s}\tilde{b}g_{s}\) with \(\tilde{a},\tilde{b}\in\mathcal{G}_{\mathbb{Q}}\) and the _same_ maps \(e_{s},f_{s},g_{s},h_{s}\). Given \(\mathcal{T}_{pw}\)-neighbourhoods \(V\) and \(W\) of \(a_{s}\) and \(b_{s}\), respectively, we additionally have to make sure that \(U\) can be taken small enough that for any \(\tilde{s}\in U\), we can pick \(\tilde{a}\in V\) and \(\tilde{b}\in W\). This means that \(\tilde{a}\) and \(a_{s}\) need to have the same behaviour on a given finite set, as do \(\tilde{b}\) and \(b_{s}\). We will proceed in three steps. First, we will derive "compatibility conditions" such that \(e_{s}s\) can be written in the form \(f_{s}a_{s}\iota_{s}\). These conditions exhibit such a tight connection between \(s\) and \(\iota_{s}\) that \(U\) can never force \(e_{s}\tilde{s}\) to satisfy these conditions for all \(\tilde{s}\in U\) and a fixed \(\iota_{s}\). In a second step, we will therefore expand \(\iota_{s}\) in the form \(\iota_{s}=h_{s}b_{s}g_{s}\) for fixed \(g_{s},h_{s}\in\mathcal{M}_{\mathbb{Q}}\) and varying \(b_{s}\in\mathcal{G}_{\mathbb{Q}}\), yielding indeed \(e_{s}s=f_{s}a_{s}h_{s}b_{s}g_{s}\). For \(\tilde{s}\in U\), it turns out that we can pick \(\tilde{\iota}=h_{s}\tilde{b}g_{s}\) which is compatible with \(e_{s}\tilde{s}\) to obtain \(e_{s}\tilde{s}=f_{s}\tilde{a}\tilde{\iota}=f_{s}\tilde{a}h_{s}\tilde{b}g_{s}\). All the while, we have to make sure that \(\tilde{a}\) and \(a_{s}\) as well as \(\tilde{b}\) and \(b_{s}\) coincide on given finite sets, resulting in a third major step. ### Generic surjections, generic injections, sparse injections and basic formulas **Definition 4.1**.: Let \(s\in\mathcal{M}_{\mathbb{Q}}\). We set \[\operatorname{Cont}(s) :=\left\{\gamma\in\mathbb{R}:\sup s(-\infty,\gamma)=\inf s(\gamma,+\infty)\right\}\] \[\operatorname{Dc}(s) :=\left\{\gamma\in\mathbb{R}:\sup s(-\infty,\gamma)<\inf s(\gamma,+\infty)\right\},\] the sets of _continuity points_ and _discontinuity points_ of \(s\), respectively. Additionally, we write \(\operatorname{Dc}^{\mathbb{I}}(s):=\operatorname{Dc}(s)\cap\mathbb{I}\) for notational simplicity. Finally, we extend \(s\) to an increasing map \(\bar{s}\colon\mathbb{R}\to\mathbb{R}\) by setting \(\bar{s}(\gamma):=\sup s(-\infty,\gamma)\) for all \(\gamma\in\mathbb{I}\). We will frequently use the notion of limit points in the following sense: **Definition 4.2**.: Let \(A\subseteq\mathbb{Q}\) and \(\gamma\in\mathbb{R}\). We say that \(\gamma\) is a _limit point_ of \(A\) if \(\gamma\) is contained in the closure of \(A\setminus\left\{\gamma\right\}\) with respect to the standard topology on \(\mathbb{R}\). The set of all limit points of \(A\) will be denoted by \(\operatorname{LP}(A)\). If \(s\in\mathcal{M}_{\mathbb{Q}}\), we will abbreviate \(\operatorname{LP}(\operatorname{Im}(s))\) as \(\operatorname{LP}(s)\) for better readability. We collect a few easy facts: **Lemma 4.3**.: _Let \(s\in\mathcal{M}_{\mathbb{Q}}\)._ 1. \(\operatorname{Dc}(s)\) _is at most countable._ 2.
_If_ \(s\) _is injective and_ \(\operatorname{LP}(s)\subseteq\mathbb{I}\)_, then_ \(\mathbb{Q}\subseteq\operatorname{Dc}(s)\) _and_ \(\bar{s}(\mathbb{I})\subseteq\mathbb{I}\)_. In fact,_ \(\sup s(-\infty,q)<s(q)<\inf s(q,+\infty)\) _for all_ \(q\in\mathbb{Q}\)_. Additionally,_ \((\mathbb{R}\setminus\operatorname{Im}(\bar{s}))\cap\mathbb{I}\) _is topologically dense in_ \(\mathbb{R}\)_._ 3. _If_ \(b\in\mathcal{G}_{\mathbb{Q}}\)_, then_ \(\operatorname{Dc}(b)=\emptyset\) _and_ \(\bar{b}\colon\mathbb{R}\to\mathbb{R}\) _is a strictly increasing bijection with_ \(\bar{b}(\mathbb{I})\subseteq\mathbb{I}\)_. Additionally, any increasing extension_ \(\beta\) _of_ \(b\) _to a set_ \(M\subseteq\mathbb{R}\) _coincides with_ \(\bar{b}|_{M}\)_._ We define three kinds of _generic_ maps in \(\mathcal{M}_{\mathbb{Q}}\). **Definition 4.4**.: 1. A map \(f\in\mathcal{M}_{\mathbb{Q}}\) is called a _generic surjection_ if it is surjective and for each \(q\in\mathbb{Q}\), the preimage \(f^{-1}\{q\}\) is an irrational interval, i.e. \(f^{-1}\{q\}=(r_{q},t_{q})\) for \(r_{q},t_{q}\in\mathbb{I}\). 2. A map \(g\in\mathcal{M}_{\mathbb{Q}}\) is called a _generic injection_ if it is injective and unbounded-unbounded with \(\mathrm{Dc}^{\mathbb{I}}(g)=\emptyset\) and \(\mathrm{LP}(g)\subseteq\mathbb{I}\). 3. A map \(h\in\mathcal{M}_{\mathbb{Q}}\) is called a _sparse injection_ if it is injective, \(\mathrm{Dc}^{\mathbb{I}}(h)\) is topologically dense in \(\mathbb{R}\) and \(\mathrm{LP}(h)\subseteq\mathbb{I}\). It is an easy observation that such maps really exist. **Lemma 4.5**.: 1. _For every_ \(A\subseteq\mathbb{Q}\)_, there exists a map_ \(f\in\mathcal{M}_{\mathbb{Q}}\) _with_ \(\mathrm{Im}(f)=A\) _such that the_ \(f\)_-preimages of single elements are irrational intervals. In particular, there exists a generic surjection._ 2. _For every finite or countably infinite_ \(A\subseteq\mathbb{I}\) _and every boundedness type_ \(O^{(2)}_{LU}\)_, there exists an injective map_ \(\iota\in O^{(2)}_{LU}\) _which satisfies_ \(\mathrm{Dc}(\iota)=A\mathbin{\dot{\cup}}\mathbb{Q}\) _as well as_ \(\mathrm{LP}(\iota)\subseteq\mathbb{I}\)_._ 3. _There exists a generic injection in_ \(\mathcal{M}_{\mathbb{Q}}\)_._ 4. _There exists a sparse injection in_ \(\mathcal{M}_{\mathbb{Q}}\) _of any boundedness type._ Proof.: **(i).** We put \(M:=A\times\mathbb{Q}\) and set \(<_{M}\) to be the lexicographic order on \(M\) where the first component is the significant one. Define \(\pi\colon M\to\mathbb{Q}\) by \(\pi(w,q):=w\). Since \((M,<_{M})\) is countably infinite and densely ordered without greatest or least element, there exists an order isomorphism \(\alpha\colon\mathbb{Q}\to M\). Setting \(f:=\pi\circ\alpha\), we obtain a map as desired. **(ii).** We only consider the case \(O^{(2)}_{LU}=O^{(2)}_{\mathbb{R},+\infty}\); the others are treated analogously. Put \[M:=(A\mathbin{\dot{\cup}}\mathbb{Q}\mathbin{\dot{\cup}}\{-\infty\})\times \mathbb{Q}\] and set \(<_{M}\) to be the lexicographic order on \(M\) where the first component is the significant one. Define \(j\colon\mathbb{Q}\to M\) by \(j(x):=(x,0)\). Since \((M,<_{M})\) is countably infinite and densely ordered without greatest or least element, there exists an order isomorphism \(\beta\colon M\to\mathbb{Q}\). Setting \(\iota:=\beta\circ j\in\mathcal{M}_{\mathbb{Q}}\), we obtain a map as desired. **(iii).** Setting \(A=\emptyset\) as well as \(O^{(2)}_{LU}=O^{(2)}_{-\infty,+\infty}\) and using 2, we obtain a generic injection. 
**(iv).** Setting \(A\subseteq\mathbb{I}\) to be a countably infinite topologically dense set and using 2, we obtain a sparse injection with any boundedness type. Another useful notion is given by the _generalised inverse_ of maps in \(\mathcal{M}_{\mathbb{Q}}\). **Definition 4.6**.: Let \(s\in\mathcal{M}_{\mathbb{Q}}\) and \(y\in\mathbb{Q}\). Define3 \(s^{L}(y):=\sup s^{-1}(-\infty,y)\in\mathbb{R}\cup\{\pm\infty\}\) and \(s^{R}(y):=\inf s^{-1}(y,+\infty)\in\mathbb{R}\cup\{\pm\infty\}\). If \(s^{L}(y)\) and \(s^{R}(y)\) coincide, we define \(s^{\dagger}(y):=s^{L}(y)=s^{R}(y)\) (the _generalised inverse_ of \(s\) at \(y\)). Footnote 3: We put \(\sup\emptyset:=-\infty\) and \(\inf\emptyset:=+\infty\). The following observations are easily deduced directly from the definitions. **Lemma 4.7**.: 1. _Let_ \(s\in\mathcal{M}_{\mathbb{Q}}\)_. If_ \(y\in\mathrm{Im}(s)\) _and if_ \(x\in\mathbb{Q}\) _is the only_ \(s\)_-preimage of_ \(y\)_, then_ \(s^{\dagger}(y)=x\)_._ 2. _Let_ \(s\in\mathcal{M}_{\mathbb{Q}}\) _be injective. Then_ \(s^{\dagger}(y)\) _is a well-defined real number for all elements_ \(y\in(\inf s,\sup s)\)_._ 3. _Let_ \(g\in\mathcal{M}_{\mathbb{Q}}\) _be a generic injection. Then_ \(g^{\dagger}(y)\in\mathbb{Q}\) _for all_ \(y\in\mathbb{Q}\)_. In particular,_ \(g\) _is left-invertible in_ \(\mathcal{M}_{\mathbb{Q}}\) _with left inverse_ \(g^{\dagger}\)_. Moreover, for all rational intervals_ \(J\) _with boundary points_ \(q_{1}\) _and_ \(q_{2}\) _in_ \(\mathbb{Q}\cup\{\pm\infty\}\)_, the preimage_ \(g^{-1}(J)\) _is again a rational interval with boundary points4_ \(g^{\dagger}(q_{1})\) _and_ \(g^{\dagger}(q_{2})\)_._ Footnote 4: Note, however, that e.g. \(g^{-1}[q_{1},q_{2}]\) need not be closed. 4. _Let_ \(g\in\mathcal{M}_{\mathbb{Q}}\) _be a generic injection. Then the translation_ \(\lambda_{g}\colon s\mapsto gs\) _is continuous as a map_5 \((\mathcal{M}_{\mathbb{Q}},\mathcal{T}_{rich})\to(\mathcal{M}_{\mathbb{Q}},\mathcal{T}_{rich})\)_._ Footnote 5: Since \(\mathcal{T}_{rich}\) is not a semigroup topology – a fact on which most of Section 5 hinges – this cannot be taken for granted and depends on the genericity of \(g\). _(combine the \(\mathcal{T}_{pw}\)-continuity of \(\lambda_{g}\) with 2 and the fact that \(s\) and \(gs\) have the same boundedness type since \(g\) is unbounded-unbounded)_ As mentioned in the introduction of Section 4, we will derive compatibility conditions such that \(e_{s}s=f_{s}a_{s}\iota_{s}\) for maps \(s,e_{s},f_{s},\iota_{s}\in\mathcal{M}_{\mathbb{Q}}\) and \(a_{s}\in\mathcal{G}_{\mathbb{Q}}\). It will be convenient to consider a slightly more general situation and aim for \(\sigma=\pi a\iota\); this will then be applied for \(\pi=f_{s}\) and \(\sigma:=e_{s}s\), and later on for \(\tilde{\sigma}:=e_{s}\tilde{s}\). Again by the introductory remarks, we will need to make sure that the function \(a\) maps \(\bar{x}\mapsto\bar{y}\) for given tuples \(\bar{x},\bar{y}\). First, we reformulate our problem in model-theoretic language. As a starting point, note that \(\sigma=\pi a\iota\) is equivalent to the fact that \(a(\iota(q))\in\pi^{-1}\{\sigma(q)\}\) for all \(q\in\mathbb{Q}\).
Since \(\pi\) is increasing, the preimage \(\pi^{-1}\{\sigma(q)\}\) is an interval. Thus, if \(\sigma(q)=\sigma(q^{\prime})\) for some \(q,q^{\prime}\in\mathbb{Q}\), then not only do \(\iota(q)\) and \(\iota(q^{\prime})\) have to be mapped to the same interval, but all points _between_ \(\iota(q)\) and \(\iota(q^{\prime})\) have to be as well. This motivates the following definition: **Definition 4.8**.: 1. Let \(\{P_{q}:q\in\mathbb{Q}\}\) be a set of unary relation symbols and define the language \(L\) by6 \(L:=\{<\}\cup\{P_{q}:q\in\mathbb{Q}\}\). Footnote 6: Note: \(<\) instead of \(\leq\)! 2. Let \(\sigma,\pi,\iota\in\mathcal{M}_{\mathbb{Q}}\). For \(q\in\mathbb{Q}\), we set \[P_{q}^{\mathbb{A}} :=\text{convex hull of }\iota(\sigma^{-1}\{\sigma(q)\})\] \[P_{q}^{\mathbb{B}} :=\pi^{-1}\{\sigma(q)\}\] and define \(L\)-structures \(\mathbb{A}=\big{\langle}\mathbb{Q},<,(P_{q}^{\mathbb{A}})_{q\in\mathbb{Q}}\big{\rangle}\) and \(\mathbb{B}=\big{\langle}\mathbb{Q},<,(P_{q}^{\mathbb{B}})_{q\in\mathbb{Q}}\big{\rangle}\). If \(\sigma,\pi,\iota\) are not clear from the context, we will write \(\mathbb{A}(\sigma,\pi,\iota)\) and \(\mathbb{B}(\sigma,\pi,\iota)\). In the sequel, \(L\), \(\mathbb{A}\) and \(\mathbb{B}\) will always denote the objects just defined. Note that a surjective \(L\)-homomorphism \(a\colon\mathbb{A}\to\mathbb{B}\) is automatically contained in \(\mathcal{G}_{\mathbb{Q}}\) and satisfies \(\sigma=\pi a\iota\). Thus, our aim is to construct a surjective \(L\)-homomorphism extending a given map \(\bar{x}\mapsto\bar{y}\). We will do so using the Back&Forth method, see Subsection 2.5. **Definition 4.9**.: A formula \(\psi(\bar{z})\) over \(L\) is called _basic_ if it is one of the formulas 1. \(P_{q}(z_{i}),\quad q\in\mathbb{Q}\) 2. \(z_{i}<z_{j}\) 3. \(L_{q}(z_{i}):\leftrightarrow\exists u\colon u<z_{i}\wedge P_{q}(u),\quad q\in\mathbb{Q}\) 4. \(R_{q}(z_{i}):\leftrightarrow\exists u\colon u>z_{i}\wedge P_{q}(u),\quad q\in\mathbb{Q}\) For a basic formula \(\psi(\bar{z})\) and a tuple \(\bar{x}\) in \(\mathbb{A}\), we write \(\mathbb{A}\models\psi(\bar{x})\) if \(\bar{x}\) satisfies the formula \(\psi(\bar{z})\) in \(\mathbb{A}\); we analogously define \(\mathbb{B}\models\psi(\bar{y})\). If \(m\) is a (potentially partial) map from \(\mathbb{A}\) to \(\mathbb{B}\), then \(m\) is said to _preserve_ \(\psi(\bar{z})\) if \(\mathbb{A}\models\psi(\bar{x})\) implies \(\mathbb{B}\models\psi(m(\bar{x}))\) for all tuples \(\bar{x}\) in the domain of \(m\). Note that basic formulas contain only existential and no universal quantifiers, so total homomorphisms \(\mathbb{A}\to\mathbb{B}\) always preserve all basic formulas. In the following, we will work with partial maps from \(\mathbb{A}\) to \(\mathbb{B}\) preserving all basic formulas, either extending maps without losing that property or analysing when a given map indeed preserves all basic formulas.
\(\pi\in\mathcal{M}_{\mathbb{Q}}\) is a generic surjection, 3. \(\iota\in\mathcal{M}_{\mathbb{Q}}\) is injective with \(\mathrm{LP}(\iota)\subseteq\mathbb{I}\), has the same boundedness type as \(\sigma\) and satisfies \(\mathrm{Dc}^{\mathbb{I}}(\iota)=\mathrm{Dc}^{\mathbb{I}}(\sigma)\). **Lemma 4.11** (Sandwich Lemma).: _Let \(\sigma,\pi,\iota\in\mathcal{M}_{\mathbb{Q}}\) be compatible._ _Then the following statements hold:_ 1. _The set of all finite partial_ \(L\)_-homomorphisms_ \(m\) _from_ \(\mathbb{A}\) _to_ \(\mathbb{B}\) _preserving all basic formulas is a Back&Forth system._ 2. _There exists_ \(a\in\mathcal{G}_{\mathbb{Q}}\) _such that_ \(\sigma=\pi a\iota\)_. Indeed, if_ \(m\) _is a finite partial_ \(L\)_-homomorphism from_ \(\mathbb{A}\) _to_ \(\mathbb{B}\) _preserving all basic formulas, there exists_ \(a\in\mathcal{G}_{\mathbb{Q}}\) _extending_ \(m\) _such that_ \(\sigma=\pi a\iota\)_._ Referring back to the overview presented in the introduction of Section 4, we can now precisely state why our approach requires aiming for Pseudo-Property \(\overline{\mathbf{X}}\) of length \(2\): to apply the Sandwich Lemma 4.11, we need that \(\sigma=e_{s}s\) and \(\iota\) have the same irrational discontinuity points, so the irrational discontinuity points of \(s\) and \(\iota\) need to be closely connected. Since no \(\mathcal{T}_{rich}\)-neighbourhood \(U\) can encode \(\mathrm{D}\mathrm{c}^{\mathbb{I}}(s)\), we cannot use a fixed map \(\iota\) for all \(\tilde{s}\) in \(U\). Thus, we need to adapt \(\iota\) to \(\tilde{s}\). We will write \(\iota=hbg\), where \(b\) varies in \(\mathcal{G}_{\mathbb{Q}}\) and \(g,h\in\mathcal{M}_{\mathbb{Q}}\) are fixed elements. As it will turn out, it is crucial that we are very free in stipulating finite pointwise behaviour not only of \(b\) on \(\mathbb{Q}\) but also of the extension \(\bar{b}\) on \(\mathbb{I}\). **Lemma 4.12** (Preconditioning Lemma).: _Let \(g\in\mathcal{M}_{\mathbb{Q}}\) be a generic injection, let \(h\in\mathcal{M}_{\mathbb{Q}}\) be a sparse injection and let \(A\subseteq\mathbb{I}\) be finite or countably infinite7. Then there exists \(b\in\mathcal{G}_{\mathbb{Q}}\) such that \(\iota:=hbg\) satisfies \(\mathrm{D}\mathrm{c}^{\mathbb{I}}(\iota)=A\) as well as \(\mathrm{LP}(\iota)\subseteq\mathbb{I}\), namely any \(b\in\mathcal{G}_{\mathbb{Q}}\) with \(\bar{b}^{-1}(\mathrm{D}\mathrm{c}^{\mathbb{I}}(h))\cap\mathrm{Im}(\bar{g})= \bar{g}(A)\). The boundedness type of \(\iota\) coincides with the boundedness type of \(h\)._ Footnote 7: When applying this lemma, we will put either \(A=\mathrm{D}\mathrm{c}(e_{s}s)\) or \(A=\mathrm{D}\mathrm{c}(e_{s}\tilde{s})\). _Moreover, suppose that \(\bar{z}\) and \(\bar{w}\) are tuples in \(\mathbb{Q}\), that \(\bar{z}^{\prime}\) and \(\bar{w}^{\prime}\) are tuples in \((\mathbb{R}\setminus\mathrm{Im}(\bar{g}))\cap\mathbb{I}\) and \(\mathrm{D}\mathrm{c}^{\mathbb{I}}(h)\), respectively, and that \(\bar{z}^{\prime\prime}\) and \(\bar{w}^{\prime\prime}\) are tuples in \(\bar{g}(A)\) and \(\mathrm{D}\mathrm{c}^{\mathbb{I}}(h)\), respectively. 
If the partial map sending \(\bar{z}\mapsto\bar{w}\), \(\bar{z}^{\prime}\mapsto\bar{w}^{\prime}\) and \(\bar{z}^{\prime\prime}\mapsto\bar{w}^{\prime\prime}\) is strictly increasing, then \(b\in\mathcal{G}_{\mathbb{Q}}\) can be picked so that \(\bar{b}\) extends this map._ Combining the Preconditioning Lemma 4.12 (putting \(A=\mathrm{D}\mathrm{c}^{\mathbb{I}}(e_{s}s)\), see the proof of Proposition 3.6 below) with the Sandwich Lemma 4.11, we can show that \(\sigma:=e_{s}s\) can be written in the form \(\pi a_{s}\iota=f_{s}a_{s}h_{s}b_{s}g_{s}\) with \(a_{s},b_{s}\in\mathcal{G}_{\mathbb{Q}}\) if \(\pi=f_{s}\) is a generic surjection, \(g_{s}\) and \(e_{s}\) are generic injections and \(h_{s}\) is a sparse injection with the same boundedness type as \(s\) - note in particular that the choice of the maps \(e_{s},f_{s},g_{s},h_{s}\) only depends on the boundedness type of \(s\). For the remaining part of Pseudo-Property \(\overline{\mathbf{X}}\), we have to prove the following: If \(a_{s}(\bar{x})=\bar{y}\) as well as \(b_{s}(\bar{z})=\bar{w}\), there is a \(\mathcal{T}_{rich}\)-neighbourhood \(U\) of \(s\) such that for all \(\tilde{s}\in U\) one can write \(e_{s}\tilde{s}=f_{s}\tilde{a}h_{s}\tilde{b}g_{s}\), where \(\tilde{a},\tilde{b}\in\mathcal{G}_{\mathbb{Q}}\) with \(\tilde{a}(\bar{x})=\bar{y}\) as well as \(\tilde{b}(\bar{z})=\bar{w}\). By the Preconditioning Lemma 4.12, we could find \(\tilde{b}\in\mathcal{G}_{\mathbb{Q}}\) with \(\tilde{b}(\bar{z})=\bar{w}\) such that \(\tilde{\sigma}:=e_{s}\tilde{s},f_{s},\tilde{\iota}:=h_{s}\tilde{b}g_{s}\) are compatible. Thus, the Sandwich Lemma 4.11 would yield \(\tilde{a}\in\mathcal{G}_{\mathbb{Q}}\) with \(e_{s}\tilde{s}=f_{s}\tilde{a}h_{s}\tilde{b}g_{s}\) - however, this automorphism \(\tilde{a}\) need not satisfy the condition \(\tilde{a}(\bar{x})=\bar{y}\). To improve upon this strategy, the final statement of the Sandwich Lemma 4.11 suggests we construct \(\tilde{b}\) in such a way that the finite partial map defined by \(\bar{x}\mapsto\bar{y}\) preserves all basic formulas when considered as a map from \(\mathbb{A}(\tilde{\sigma},f_{s},\tilde{\iota})\) to \(\mathbb{B}(\tilde{\sigma},f_{s},\tilde{\iota})\). **Lemma 4.13** (Variation Lemma).: _Let \(\sigma,f,g,h\in\mathcal{M}_{\mathbb{Q}}\) and \(a,b\in\mathcal{G}_{\mathbb{Q}}\) such that \(\sigma=fahbg\), where \(\mathrm{LP}(\sigma)\subseteq\mathbb{I}\), \(f\) is a generic surjection, \(g\) is a generic injection, \(h\) is a sparse injection with the same boundedness type as \(\sigma\), and finally \(\bar{b}^{-1}(\mathrm{D}\mathrm{c}^{\mathbb{I}}(h))\cap\mathrm{Im}(\bar{g})= \bar{g}(\mathrm{D}\mathrm{c}^{\mathbb{I}}(\sigma))\). Let further \(\bar{x},\bar{y},\bar{z},\bar{w}\) be tuples in \(\mathbb{Q}\) such that \(a(\bar{x})=\bar{y}\) and \(b(\bar{z})=\bar{w}\). 
Then there exists a \(\mathcal{T}_{rich}\)-neighbourhood \(O\) of \(\sigma\) such that the following holds:_ _For any_ \(\tilde{\sigma}\in O\) _with_ \(\mathrm{LP}(\tilde{\sigma})\subseteq\mathbb{I}\)_, there exist tuples_ \(\bar{z}^{*}\) _and_ \(\bar{w}^{*}\) _in_ \(\mathbb{Q}\) _and tuples_ \(\bar{z}^{\prime}\) _and_ \(\bar{w}^{\prime}\) _in_ \((\mathbb{R}\setminus\mathrm{Im}(\bar{g}))\cap\mathbb{I}\) _and_ \(\mathrm{D}\mathrm{c}^{\mathbb{I}}(h)\)_, respectively, and tuples_ \(\bar{z}^{\prime\prime}\) _and_ \(\bar{w}^{\prime\prime}\) _in_ \(\bar{g}(\mathrm{D}\mathrm{c}^{\mathbb{I}}(\tilde{\sigma}))\) _and_ \(\mathrm{D}\mathrm{c}^{\mathbb{I}}(h)\)_, respectively, such that_ \(\bullet\) _the finite partial map_ \(\bar{z}\mapsto\bar{w}\)_,_ \(\bar{z}^{*}\mapsto\bar{w}^{*}\)_,_ \(\bar{z}^{\prime}\mapsto\bar{w}^{\prime}\)_,_ \(\bar{z}^{\prime\prime}\mapsto\bar{w}^{\prime\prime}\) _is strictly increasing,_ _if_ \(\tilde{b}\in\mathcal{G}_{\mathbb{Q}}\) _satisfies_ \(\tilde{b}(\bar{z})=\bar{w}\)_,_ \(\tilde{b}(\bar{z}^{*})=\bar{w}^{*}\)_,_ \(\tilde{b}(\bar{z}^{\prime})=\bar{w}^{\prime}\)_,_ \(\tilde{b}(\bar{z}^{\prime\prime})=\bar{w}^{\prime\prime}\) _and is such that_ \(\tilde{\sigma},f,\tilde{t}:=h\tilde{b}g\) _are compatible, then_ \(\bar{x}\mapsto\bar{y}\) _preserves all basic formulas when considered as a finite partial map from_ \(\mathbb{A}(\tilde{\sigma},f,\tilde{t})\) _to_ \(\mathbb{B}(\tilde{\sigma},f,\tilde{t})\)_._ Combining these results, we can prove that \(\mathcal{M}_{\mathbb{Q}}\) equipped with the rich topology has Pseudo-Property \(\overline{\mathbf{X}}\) of length \(2\) with respect to \((\mathcal{G}_{\mathbb{Q}},\mathcal{T}_{pw})\): Proof (of Proposition 3.6 given Lemmas 4.11, 4.12 and 4.13).: Let \(s\in\mathcal{M}_{\mathbb{Q}}\). We follow the strategy outlined in Figure 1. First, we construct a decomposition \(e_{s}s=f_{s}a_{s}h_{s}b_{s}g_{s}\). We use Lemma 4.5 to find a generic injection \(e_{s}\in\mathcal{M}_{\mathbb{Q}}\), a generic surjection \(f_{s}\in\mathcal{M}_{\mathbb{Q}}\), a generic injection \(g_{s}\in\mathcal{M}_{\mathbb{Q}}\) and a sparse injection \(h_{s}\in\mathcal{M}_{\mathbb{Q}}\) with the same boundedness type as \(s\). By Lemma 4.7(iii), the map \(e_{s}\) is left-invertible. Since \(e_{s}\) is unbounded-unbounded, \(\sigma:=e_{s}s\) has the same boundedness type as \(s\) (and as \(h_{s}\)) and satisfies \(\mathrm{LP}(\sigma)\subseteq\mathrm{LP}(e_{s})\subseteq\mathbb{I}\). Applying the Preconditioning Lemma 4.12 with \(A=\mathrm{Dc}^{\mathbb{I}}(\sigma)\), we obtain \(b_{s}\in\mathcal{G}_{\mathbb{Q}}\) such that \(\iota_{s}:=h_{s}b_{s}g_{s}\) is compatible with \(\sigma\) and \(f_{s}\), namely \(b_{s}\in\mathcal{G}_{\mathbb{Q}}\) with \(\bar{b}_{s}^{-1}(\mathrm{Dc}^{\mathbb{I}}(h))\cap\mathrm{Im}(\bar{g}_{s})= \bar{g}_{s}(\mathrm{Dc}^{\mathbb{I}}(\sigma))\). Using the Sandwich Lemma 4.11, we obtain \(a_{s}\in\mathcal{G}_{\mathbb{Q}}\) such that \[e_{s}s=\sigma=f_{s}a_{s}\iota_{s}=f_{s}a_{s}h_{s}b_{s}g_{s}.\] This proves conditions 1 and 2 in the definition of Pseudo-Property \(\overline{\mathbf{X}}\). For condition 3, let \(V,W\subseteq\mathcal{G}_{\mathbb{Q}}\) be open sets in the pointwise topology on \(\mathcal{G}_{\mathbb{Q}}\) with \(a_{s}\in V\) and \(b_{s}\in W\). We need to find \(U\in\mathcal{T}_{rich}\) with \(s\in U\) such that \(e_{s}U\subseteq f_{s}Vh_{s}Wg_{s}\). 
By shrinking the sets if necessary, we can assume that \(V=\{\tilde{a}\in\mathcal{G}_{\mathbb{Q}}:\tilde{a}(\bar{x})=\bar{y}\}\) and \(W=\left\{\tilde{b}\in\mathcal{G}_{\mathbb{Q}}:\tilde{b}(\bar{z})=\bar{w}\right\}\) for tuples \(\bar{x},\bar{y},\bar{z},\bar{w}\) in \(\mathbb{Q}\). We apply the Variation Lemma 4.13 for \(\sigma=f_{s}a_{s}h_{s}b_{s}g_{s}\) to obtain a \(\mathcal{T}_{rich}\)-neighbourhood \(O\) of \(\sigma\) with the following property: If \(\tilde{s}\in\mathcal{M}_{\mathbb{Q}}\) is such that \(\tilde{\sigma}:=e_{s}\tilde{s}\in O\), there exist tuples \(\bar{z}^{*}\) and \(\bar{w}^{*}\) in \(\mathbb{Q}\) and tuples \(\bar{z}^{\prime}\) and \(\bar{w}^{\prime}\) in \((\mathbb{R}\setminus\mathrm{Im}(\bar{g}))\cap\mathbb{I}\) and \(\mathrm{Dc}^{\mathbb{I}}(h)\), respectively, and tuples \(\bar{z}^{\prime\prime}\) and \(\bar{w}^{\prime\prime}\) in \(\bar{g}(\mathrm{Dc}^{\mathbb{I}}(\tilde{\sigma}))\) and \(\mathrm{Dc}^{\mathbb{I}}(h)\), respectively, such that \(\bar{z}\mapsto\bar{w}\), \(\bar{z}^{*}\mapsto\bar{w}^{*}\), \(\bar{z}^{\prime}\mapsto\bar{w}^{\prime}\), \(\bar{z}^{\prime\prime}\mapsto\bar{w}^{\prime\prime}\) is strictly increasing and, additionally, if \(\tilde{b}\in\mathcal{G}_{\mathbb{Q}}\) satisfies \(\tilde{b}(\bar{z})=\bar{w}\), \(\tilde{b}(\bar{z}^{*})=\bar{w}^{*}\), \(\tilde{b}(\bar{z}^{\prime})=\bar{w}^{\prime}\), \(\tilde{b}(\bar{z}^{\prime\prime})=\bar{w}^{\prime}\) and is such that \(\tilde{\sigma},f,\tilde{t}:=h\tilde{b}g\) are compatible, then \(\bar{x}\mapsto\bar{y}\) preserves all basic formulas when considered as a finite partial map from \(\mathbb{A}(\tilde{\sigma},f_{s},\tilde{t})\) to \(\mathbb{B}(\tilde{\sigma},f_{s},\tilde{t})\). Given such \(\tilde{s}\in\mathcal{M}_{\mathbb{Q}}\), the Preconditioning Lemma 4.12 with \(A=\mathrm{Dc}^{\mathbb{I}}(\tilde{\sigma})\) as well as \(\bar{z}\cup\bar{z}^{*}\) and \(\bar{w}\cup\bar{w}^{*}\) in place of \(\bar{z}\) and \(\bar{w}\), respectively, yields \(\tilde{b}\in\mathcal{G}_{\mathbb{Q}}\) with the above properties. Hence, \(\bar{x}\mapsto\bar{y}\) preserves all basic formulas when considered as a finite partial map from \(\mathbb{A}(\tilde{\sigma},f_{s},\tilde{t})\) to \(\mathbb{B}(\tilde{\sigma},f_{s},\tilde{t})\), and the Sandwich Lemma 4.11 gives \(\tilde{a}\in\mathcal{G}_{\mathbb{Q}}\) with \(\tilde{a}(\bar{x})=\bar{y}\) and \(e_{s}\tilde{s}=f_{s}\tilde{a}\tilde{t}=f_{s}\tilde{a}h_{s}\tilde{b}g_{s}\in f_ {s}Vh_{s}Wg_{s}\). In other words, setting \(U:=\lambda_{e_{s}}^{-1}(O)\) gives \(e_{s}U\subseteq f_{s}Vg_{s}Wh_{s}\) as desired. Noting that \(U\) is a \(\mathcal{T}_{rich}\)-neighbourhood of \(s\) by Lemma 4.7(iv) finishes the proof. ### Proving the Sandwich Lemma 4.11 The proof of the Sandwich Lemma 4.11 requires two additional auxiliary facts. Since \(\pi\) is a generic surjection, the preimages \(\pi^{-1}\{z\}\) have neither a greatest nor a least element. This implies the following simple yet crucial interpretation of the formulas \(P_{q}(z)\), \(L_{q}(z)\) and \(R_{q}(z)\) in \(\mathbb{B}\): **Lemma 4.14**.: _Let \(\pi\in\mathcal{M}_{\mathbb{Q}}\) be a generic surjection. Then the following holds for all \(q,y\in\mathbb{Q}\):_ 1. \(\mathbb{B}\models P_{q}(y)\) _if and only if_ \(\sigma(q)=\pi(y)\)_._ 2. \(\mathbb{B}\models L_{q}(y)\) _if and only if_ \(\sigma(q)\leq\pi(y)\)_._ 3. 
\(\mathbb{B}\models R_{q}(y)\) _if and only if_ \(\sigma(q)\geq\pi(y)\)_._ _In particular, \(\mathbb{B}\models P_{q}(y)\) implies \(\mathbb{B}\models L_{q}(y)\) as well as \(\mathbb{B}\models R_{q}(y)\)._ The following straightforward lemma intuitively means that our definition of \(P_{q}^{\mathbb{A}}\) is the "correct" one: **Lemma 4.15**.: _Let \(\sigma,\pi,\iota\in\mathcal{M}_{\mathbb{Q}}\). For all \(q,q^{\prime}\in\mathbb{Q}\), we have_ \[P_{q}^{\mathbb{A}}\cap P_{q^{\prime}}^{\mathbb{A}}\neq\emptyset\Leftrightarrow P_{q}^{ \mathbb{A}}=P_{q^{\prime}}^{\mathbb{A}}\Leftrightarrow\sigma(q)=\sigma(q^{ \prime})\Leftrightarrow P_{q}^{\mathbb{B}}=P_{q^{\prime}}^{\mathbb{B}} \Leftrightarrow P_{q}^{\mathbb{B}}\cap P_{q^{\prime}}^{\mathbb{B}}\neq\emptyset.\] Now we can prove the Sandwich Lemma 4.11: Proof (of the Sandwich Lemma 4.11).: Since (ii) follows by combining (i) with Lemma 2.8 to obtain a surjective \(L\)-homomorphism \(a\colon\mathbb{A}\to\mathbb{B}\) extending \(m\), we only have to show (i). We will verify that the set of all finite partial \(L\)-homomorphisms \(m\) from \(\mathbb{A}\) to \(\mathbb{B}\) preserving all basic formulas has the Forth property and the Back property. Let \(m\) be such a homomorphism. **Forth.** Given \(x\in\mathbb{A}\setminus\operatorname{Dom}(m)\), we need to find \(y\in\mathbb{B}\setminus\operatorname{Im}(m)\) such that the extension \(m^{\prime}\) of \(m\) by \(x\mapsto y\) is a finite partial \(L\)-homomorphism preserving all basic formulas. We will use the following general strategy: We first identify the desired position of \(y\) with respect to the predicates \(P_{q},L_{q},R_{q}\), and then employ the fact that \(m\) preserves all basic formulas to find \(y\) such that \(m\) and \(x\mapsto y\) are additionally order-compatible. Let \(\bar{a}=(a_{1},\dots,a_{n})\) be an ascending enumeration of \(\operatorname{Dom}(m)\) and let \(\bar{b}:=m(\bar{a})\). Since \(m\) is strictly increasing, \(\bar{b}\) is an ascending enumeration of \(\operatorname{Im}(m)\). Setting \(a_{0}:=-\infty\) and \(a_{n+1}:=+\infty\) as well as \(b_{0}:=-\infty\) and \(b_{n+1}:=+\infty\), there exists an index \(i_{0}\in\{0,\dots,n\}\) such that \(a_{i_{0}}<x<a_{i_{0}+1}\). We distinguish two cases: _Case 1_ (\(\exists q_{0}\in\mathbb{Q}\colon x\in P_{q_{0}}^{\mathbb{A}}\)): Since \(\pi\) is a generic surjection (property (b) of compatibility), it suffices to find \(y\) with \[\sigma(q_{0})=\pi(y)\text{ and }b_{i_{0}}<y<b_{i_{0}+1};\] note that even though we do not know whether \(x\) satisfies \(L_{q_{0}}\) and \(R_{q_{0}}\) in \(\mathbb{A}\), the element \(y\) certainly satisfies \(L_{q_{0}}\) and \(R_{q_{0}}\) in \(\mathbb{B}\), see Lemma 4.14. Applying that \(m\) preserves \(R_{q_{0}}\) and \(L_{q_{0}}\), one obtains \(\pi(b_{i_{0}})\leq\sigma(q_{0})\leq\pi(b_{i_{0}+1})\) via Lemma 4.14 which yields the existence of \(y\) with the desired properties (by property (b) of compatibility, the preimage \(\pi^{-1}\{\sigma(q_{0})\}\) does not have a greatest or least element). _Case 2_ (\(\nexists q\in\mathbb{Q}\colon x\in P_{q}^{\mathbb{A}}\)): In this case, we have \(J_{-}:=\{q\in\mathbb{Q}:\mathbb{A}\models L_{q}(x)\}=\iota^{-1}(-\infty,x)\) as well as \(J_{+}:=\{q\in\mathbb{Q}:\mathbb{A}\models R_{q}(x)\}=\iota^{-1}(x,+\infty)\), and further \(\mathbb{Q}=J_{-}\cup J_{+}\) where the common boundary point of \(J_{-}\) and \(J_{+}\) is \(\iota^{1}(x)\); note that \(J_{\pm}\) could be empty, in which case \(\iota^{1}(x)=\pm\infty\). 
Similarly to Case 1, it suffices to find \(y\) with (for \(J_{-}=\emptyset\), we put \(\sup\sigma(J_{-})=-\infty\); analogously for \(J_{+}=\emptyset\)) \[\sup\sigma(J_{-})\leq\pi(y)\leq\inf\sigma(J_{+})\text{ and }b_{i_{0}}<y<b_{i_{0}+1}.\] Figure 1. Illustration of the proof of Proposition 3.6. This is accomplished by verifying \[\sup\sigma(J_{-}) <\inf\sigma(J_{+}) \tag{2}\] \[\pi(b_{i_{0}}) \leq\inf\sigma(J_{+})\] (3) \[\sup\sigma(J_{-}) \leq\pi(b_{i_{0}+1})\] (4) \[\exists u_{0}\in\mathbb{Q}\colon\max(\sup\sigma(J_{-}),\pi(b_{i_{0 }})) \leq u_{0}\leq\min(\inf\sigma(J_{+}),\pi(b_{i_{0}+1})). \tag{1}\] If \(u_{0}\) is as in (4), there exists \(y\in\pi^{-1}\{u_{0}\}\) with \(b_{i_{0}}<y<b_{i_{0}+1}\). Any such \(y\) has the desired properties. By our assumption for the current case, the element \(x\) is in a "gap" of \(\mathbb{A}\); the inequality (1) expresses that there exists a matching "gap" of \(\mathbb{B}\). To verify, one distinguishes by \(\iota^{\dagger}(x)\) and applies convenient parts of the properties 1 and 2 of compatibility: If \(\iota^{\dagger}(x)=-\infty\), i.e. \(J_{-}=\emptyset\) and \(J_{+}=\mathbb{Q}\), then \(\iota\) is bounded below, so \(\sigma\) is bounded below by property 2 of compatibility, yielding (1). For \(\iota^{\dagger}(x)=+\infty\), one argues analogously. If \(J_{-}\) has a greatest element \(q\), observe that \(\sigma(J_{+})\) consists of elements strictly greater than \(\sigma(q)\). Use \(\operatorname{LP}(\sigma)\subseteq\mathbb{I}\) (property 1 of compatibility) combined with Lemma 4.15 to see that \(\inf\sigma(J_{+})\) is either contained in \(\sigma(J_{+})\) or irrational. Conclude \(\sigma(q)<\inf\sigma(J_{+})\) which yields (1). If \(J_{+}\) has a least element, one argues analogously. It remains to consider the case that \(\iota^{\dagger}(x)\in\mathbb{R}\) and neither \(J_{-}\) nor \(J_{+}\) has a greatest or least element, respectively. Then \(\iota^{\dagger}(x)\in\mathbb{I}\) and, since \(x\notin\operatorname{LP}(\iota)\) by property 2 of compatibility, also \(\iota^{\dagger}(x)\in\operatorname{Dc}(\iota)\). Another application of property 2 of compatibility yields \(\iota^{\dagger}(x)\in\operatorname{Dc}(\sigma)\) and thus (1). The inequalities (2) and (3) are clear since \(m\) preserves all basic formulas, the inequality (4) is immediate from the previous ones. **Back.** Given \(y\in\mathbb{B}\setminus\operatorname{Im}(m)\), we need to find \(x\in\mathbb{A}\setminus\operatorname{Dom}(m)\) such that the extension \(m^{\prime}\) of \(m\) by \(x\mapsto y\) is a finite partial \(L\)-homomorphism preserving all basic formulas. We proceed similarly to the Forth step. As before, let \(\bar{a}=(a_{1},\dots,a_{n})\) be an ascending enumeration of \(\operatorname{Dom}(m)\) and let \(\bar{b}:=m(\bar{a})\) be the corresponding ascending enumeration of \(\operatorname{Im}(m)\). We again set \(a_{0}:=-\infty\) and \(a_{n+1}:=+\infty\) as well as \(b_{0}:=-\infty\) and \(b_{n+1}:=+\infty\), and define the index \(i_{0}\in\{0,\dots,n\}\) such that \(b_{i_{0}}<y<b_{i_{0}+1}\). We further set \(I_{-}:=\sigma^{-1}(-\infty,\pi(y))\), \(I:=\sigma^{-1}\{\pi(y)\}\) and \(I_{+}:=\sigma^{-1}(\pi(y),+\infty)\). If \(x\) satisfies \[\sup\iota(I_{-})<x<\inf\iota(I_{+})\text{ and }a_{i_{0}}<x<a_{i_{0}+1},\] then \(m^{\prime}\) extending \(m\) by \(x\mapsto y\) preserves all basic formulas since \(\sup_{q\in I_{-}}P_{q}^{\mathbb{A}}=\sup\iota(I_{-})\) by Lemma 4.15 (and analogously for \(I_{+}\)). 
If \(I\neq\emptyset\), note that even though we cannot predict whether \(x\) will be contained in \(P_{q}^{\mathbb{A}}\) or will be below or above \(P_{q}^{\mathbb{A}}\) for one (and thus for all) \(q\in I\), the element \(y\) satisfies \(P_{q}\) as well as \(L_{q},R_{q}\) in \(\mathbb{B}\). One finds the desired element \(x\) by verifying \[\sup\iota(I_{-}) <\inf\iota(I_{+}) \tag{6}\] \[a_{i_{0}} <\inf\iota(I_{+})\] (7) \[\sup\iota(I_{-}) <a_{i_{0}+1} \tag{5}\] and picking any \(x\) with \(\max(\sup\iota(I_{-}),a_{i_{0}})<x<\min(\inf\iota(I_{+}),a_{i_{0}+1})\). Using the properties 1 and 2 of compatibility, the inequality (5) follows just as (1) did in the Forth step. For the inequality (6), observe that \(a_{i_{0}}<\iota(q)\) for all \(q\in I_{+}\) (for otherwise, \(m\) would not preserve \(P_{q}\) and \(R_{q}\)) and that \(\inf\iota(I_{+})\) is either contained in \(\iota(I_{+})\) or irrational. The same argument yields the inequality (7). ### Proving the Preconditioning Lemma 4.12 To find \(\tilde{b}\), we use a Back&Forth strategy. Proof (of the Preconditioning Lemma 4.12).: Combining the facts that \(g\) is continuous at all irrational points (by definition), that \(\tilde{g}(\mathbb{I})\subseteq\mathbb{I}\) (by Lemma 4.3(ii)) and that any \(\tilde{b}\) for \(b\in\mathcal{G}_{\mathbb{Q}}\) is continuous at all irrational points (by Lemma 4.3(iii)), we obtain that \(bg\) will always be continuous at all irrational points as well. Thus, \[\operatorname{Dc}^{\mathbb{I}}(hbg)=\bar{g}^{-1}\Big{(}\bar{b}^{-1}( \operatorname{Dc}(h))\cap\bar{g}(\mathbb{I})\Big{)}\cap\mathbb{I}.\] If we use that \(\bar{g}(\mathbb{I}),\bar{b}(\mathbb{I})\subseteq\mathbb{I}\), we conclude \[\operatorname{Dc}^{\mathbb{I}}(hbg)=\bar{g}^{-1}\left(\bar{b}^{-1}( \operatorname{Dc}^{\mathbb{I}}(h))\cap\operatorname{Im}(\bar{g})\right)\cap \mathbb{I}.\] Hence, it is sufficient to construct \(b\) in such a way that \[\bar{b}^{-1}(\operatorname{Dc}^{\mathbb{I}}(h))\cap\operatorname{Im}(\bar{g}) =\bar{g}(A); \tag{8}\] if we set \(\iota:=hbg\), then \(\operatorname{LP}(\iota)\subseteq\operatorname{LP}(h)\subseteq\mathbb{I}\) and \(\iota\) has the same boundedness type as \(h\) since both \(g\) and \(b\) are unbounded-unbounded. To fulfil (8), note first that there exists a countable set \(D\subseteq(\mathbb{R}\setminus\operatorname{Im}(\bar{g}))\cap\mathbb{I}\) which is topologically dense in \(\mathbb{R}\): by Lemma 4.3(ii), the set \((\mathbb{R}\setminus\operatorname{Im}(\bar{g}))\cap\mathbb{I}\) is topologically dense in \(\mathbb{R}\), so it suffices to pick \(D\) to be topologically dense in \((\mathbb{R}\setminus\operatorname{Im}(\bar{g}))\cap\mathbb{I}\) (which is possible since the latter is a subset of a separable metric space and therefore separable itself). In doing so, we can make sure that \(D\) contains all entries of \(\bar{z}^{\prime}\). 
Instead of directly constructing a map \(b\colon\mathbb{Q}\to\mathbb{Q}\) which satisfies (8), we will find an order isomorphism \(\beta\colon\mathbb{Q}\mathbin{\dot{\cup}}(\bar{g}(A)\cup D)\to\mathbb{Q} \mathbin{\dot{\cup}}\operatorname{Dc}^{\mathbb{I}}(h)\) satisfying \[\beta(\mathbb{Q})=\mathbb{Q}\quad\text{and}\quad\beta(\bar{g}(A) \cup D)=\operatorname{Dc}^{\mathbb{I}}(h)\qquad\text{as well as}\] \[\beta(\bar{z})=\bar{w}\quad\text{and}\quad\beta(\bar{z}^{\prime}) =\bar{w}^{\prime},\beta(\bar{z}^{\prime\prime})=\bar{w}^{\prime\prime}\] Setting \(b:=\beta|_{\mathbb{Q}}\) then yields the map as in (8) since \(\bar{b}|_{\mathbb{Q}\mathbin{\dot{\cup}}(\bar{g}(A)\cup D)}=\beta\) by uniqueness of the increasing extension (see Lemma 4.3(iii)) and therefore \[\bar{b}^{-1}(\operatorname{Dc}^{\mathbb{I}}(h))\cap\operatorname{Im}(\bar{g}) =(\bar{g}(A)\cup D)\cap\operatorname{Im}(\bar{g})=\bar{g}(A).\] To obtain \(\beta\), we show that the system \(\mathcal{S}\) of all finite partial order isomorphisms \(m\) from \(\mathbb{Q}\mathbin{\dot{\cup}}(\bar{g}(A)\cup D)\) to \(\mathbb{Q}\mathbin{\dot{\cup}}\operatorname{Dc}^{\mathbb{I}}(h)\) such that \[m\big{(}\mathbb{Q}\cap\operatorname{Dom}(m)\big{)}=\mathbb{Q}\cap \operatorname{Im}(m)\quad\text{and}\quad m\big{(}(\bar{g}(A)\cup D)\cap \operatorname{Dom}(m)\big{)}=\operatorname{Dc}^{\mathbb{I}}(h)\cap \operatorname{Im}(m)\] is a Back&Forth system - by Lemma 2.8, the finite partial order isomorphism defined by \(\bar{z}\mapsto\bar{w}\), \(\bar{z}^{\prime}\mapsto\bar{w}^{\prime}\) and \(\bar{z}^{\prime\prime}\mapsto\bar{w}^{\prime\prime}\) (which is a member of \(\mathcal{S}\)) can then be extended to a map \(\beta\) with the desired properties. For the Back step, suppose \(m\in\mathcal{S}\) and \(\gamma\notin\operatorname{Im}(m)\). Let \(\gamma^{\prime\prime}\) be the greatest element of \(\operatorname{Im}(m)\cap(-\infty,\gamma)\) (or \(-\infty\) if no such element exists) and, dually, let \(\gamma^{\prime}\) be the least element of \(\operatorname{Im}(m)\cap(\gamma,+\infty)\) (or \(+\infty\) if no such element exists). If \(\gamma\in\mathbb{Q}\), pick \(\delta\in(m^{-1}(\gamma^{\prime\prime}),m^{-1}(\gamma^{\prime}))_{\mathbb{R}} \cap\mathbb{Q}\); if \(\gamma\in\operatorname{Dc}^{\mathbb{I}}(h)\), pick \(\delta\in(m^{-1}(\gamma^{\prime\prime}),m^{-1}(\gamma^{\prime}))_{\mathbb{R}} \cap(\bar{g}(A)\cup D)\) - by topological density of \(\mathbb{Q}\) and \(\bar{g}(A)\cup D\supseteq D\) in \(\mathbb{R}\), this is always possible. Then the extension of \(m\) by \(\delta\mapsto\gamma\) is an element of \(\mathcal{S}\) as well. For the Forth step, one argues analogously. ### Proving the Variation Lemma 4.13, special cases In a series of lemmas, we first consider the cases that can occur in the special situation that \(\bar{x}\) and \(\bar{y}\) consist of a single element. In Subsection 4.6, we will then amalgamate these special cases to a full proof. 
We will always consider the same setup: **Notation 4.16**.: We say that \((*)\) holds if we are in the following situation: * \(\sigma,\tilde{\sigma}\in\mathcal{M}_{\mathbb{Q}}\) satisfy \(\operatorname{LP}(\sigma),\operatorname{LP}(\tilde{\sigma})\subseteq\mathbb{I}\) and have the same boundedness type, * \(f\in\mathcal{M}_{\mathbb{Q}}\) is a generic surjection, * \(g\in\mathcal{M}_{\mathbb{Q}}\) is a generic injection, * \(h\in\mathcal{M}_{\mathbb{Q}}\) is a sparse injection with the same boundedness type as \(\sigma\) and \(\tilde{\sigma}\), * \(a\in\mathcal{G}_{\mathbb{Q}}\), * \(b\in\mathcal{G}_{\mathbb{Q}}\) satisfies \(\bar{b}^{-1}(\operatorname{Dc}^{\mathbb{I}}(h))\cap\operatorname{Im}(\bar{g})= \bar{g}(\operatorname{Dc}^{\mathbb{I}}(\sigma))\), 2. \(\sigma=fahbg\), so \(a\) preserves all basic formulas as a map from \(\mathbb{A}(\sigma,f,\iota)\) to \(\mathbb{B}(\sigma,f,\iota)\), where \(\iota:=hbg\). To simplify the arguments, we reformulate the property of preserving all basic formulas central to the Sandwich Lemma 4.11. **Lemma 4.17**.: _Let \(\sigma,\pi,\iota\in\mathcal{M}_{\mathbb{Q}}\) such that \(\pi\) is a generic surjection. Then the map \(x\mapsto y\) preserves all basic formulas when considered as a finite partial map from \(\mathbb{A}\) to \(\mathbb{B}\) if and only if the following two conditions hold:_ \[\sigma\left(\iota^{-1}(-\infty,x]\right)\subseteq(-\infty,\pi(y)]\quad\text{ and}\quad\sigma\left(\iota^{-1}[x,+\infty)\right)\subseteq[\pi(y),+\infty). \tag{9}\] Proof.: Assume first that (9) holds. Since the finite partial map in question has a one-element domain and image, we do not have to consider formulas of the form \(z_{i}<z_{j}\). If \(\mathbb{A}\models P_{q}(x)\) for some \(q\in\mathbb{Q}\), then there exist \(q^{\prime},q^{\prime\prime}\in\mathbb{Q}\) such that \(\iota(q^{\prime\prime})\leq x\leq\iota(q^{\prime})\) and \(\sigma(q^{\prime\prime})=\sigma(q)=\sigma(q^{\prime})\). By (9), we obtain \(\sigma(q)=\sigma(q^{\prime\prime})\leq\pi(y)\leq\sigma(q^{\prime})=\sigma(q)\), so \(y\in P_{q}^{\mathbb{B}}\). If \(\mathbb{A}\models L_{q}(x)\) for some \(q\in\mathbb{Q}\), there exists \(u\in P_{q}^{\mathbb{A}}\) such that \(u<y\). By the previous argument we have \(\sigma(q)=\pi(u)\leq\pi(y)\), so \(\mathbb{B}\models L_{q}(y)\) (see Lemma 4.14). Finally, if \(\mathbb{A}\models R_{q}(x)\) for some \(q\in\mathbb{Q}\), we argue analogously. Now assume that \(x\mapsto y\) preserves all basic formulas as a finite partial map from \(\mathbb{A}\) to \(\mathbb{B}\). We only show \(\sigma\left(\iota^{-1}(-\infty,x]\right)\subseteq(-\infty,\pi(y)]\). Let \(q\in\iota^{-1}(-\infty,x]\), i.e. \(\iota(q)\leq x\). If \(\iota(q)=x\), then \(x\in P_{q}^{\mathbb{A}}\), so \(y\in P_{q}^{\mathbb{B}}\) by assumption and thus \(\sigma(q)=\pi(y)\). If \(\iota(q)<x\), then \(x\in L_{q}^{\mathbb{A}}\), so \(y\in L_{q}^{\mathbb{B}}\) and thus \(\sigma(q)\leq\pi(y)\) (see Lemma 4.14). After these preparations, we can state and prove the series of auxiliary lemmas. **Lemma 4.18** (see Figure 2).: _Let \(\sigma,\tilde{\sigma},f,g,h,a,b\) such that \((*)\) holds. Let further \(x,y\in\mathbb{Q}\) such that \(a(x)=y\)._ _Suppose that \(x\notin(\inf h,\sup h)\). Then one of the following two cases occurs:_ 1. _[label=(0)]_ 2. 
_(i)_ \(\operatorname{Im}(h)\subseteq(-\infty,x)\) _and_ \(\operatorname{Im}(\sigma)\subseteq(-\infty,f(y)]\)_._ (ii) _If_ \[\operatorname{Im}(\tilde{\sigma})\subseteq(-\infty,f(y)],\quad\text{i.e.} \operatorname{Im}(\tilde{\sigma})\cap(f(y),+\infty)=\emptyset,\] _then for any_ \(\tilde{b}\in\mathcal{G}_{\mathbb{Q}}\)_, the map_ \(x\mapsto y\) _preserves all basic formulas as a finite partial map from_ \(\mathbb{A}(\tilde{\sigma},f,\tilde{\iota})\) _to_ \(\mathbb{B}(\tilde{\sigma},f,\tilde{\iota})\) _where_ \(\tilde{\iota}:=h\tilde{b}g\)_._ 3. _(i)_ \(\operatorname{Im}(h)\subseteq(x,+\infty)\) _and_ \(\operatorname{Im}(\sigma)\subseteq[f(y),+\infty)\)_._ (ii) _If_ \[\operatorname{Im}(\tilde{\sigma})\subseteq[f(y),+\infty),\quad\text{i.e.} \operatorname{Im}(\tilde{\sigma})\cap(-\infty,f(y))=\emptyset,\] _then for any_ \(\tilde{b}\in\mathcal{G}_{\mathbb{Q}}\)_, the map_ \(x\mapsto y\) _preserves all basic formulas as a finite partial map from_ \(\mathbb{A}(\tilde{\sigma},f,\tilde{\iota})\) _to_ \(\mathbb{B}(\tilde{\sigma},f,\tilde{\iota})\) _where_ \(\tilde{\iota}:=h\tilde{b}g\)_._ Proof.: Our assumption \(x\notin(\inf h,\sup h)\) implies that either \(h(r)\leq x\) for all \(r\in\mathbb{Q}\) or \(h(r)\geq x\) for all \(r\in\mathbb{Q}\). Since \(h\) is injective, \(\operatorname{Im}(h)\) cannot have a greatest or least element. Thus, either \(\operatorname{Im}(h)\subseteq(-\infty,x)\) or \(\operatorname{Im}(h)\subseteq(x,+\infty)\). We only treat the former case which corresponds to (1). **(i).** We have to show that \(\operatorname{Im}(\sigma)\subseteq(-\infty,f(y)]\) - this follows directly from Lemma 4.17 by noting \(\iota^{-1}(-\infty,x]=g^{-1}(b^{-1}(h^{-1}(-\infty,x]))=\mathbb{Q}\). **(ii).** With the same argument as in (i), observe \(\tilde{\iota}^{-1}(-\infty,x]=\mathbb{Q}\) and \(\tilde{\iota}^{-1}[x,+\infty)=\emptyset\). Thus, the statement follows by another application of Lemma 4.17. **Lemma 4.19** (see Figure 3).: _Let \(\sigma,\tilde{\sigma},f,g,h,a,b\) such that \((*)\) holds. Let further \(x,y\in\mathbb{Q}\) such that \(a(x)=y\)._ _Suppose that \(x\in(\inf h,\sup h)\) with9\(r:=h^{\dagger}(x)\in\mathbb{Q}\), and set \(p:=b^{-1}(r)\) as well as \(I_{-}:=\iota^{-1}(-\infty,x]\) and \(I_{+}:=\iota^{-1}[x,+\infty)\). Then the following holds:_ Footnote 9: Note that this encompasses the case \(x\in\operatorname{Im}(h)\) and in particular \(x\in\operatorname{Im}(\iota)\). _(i) \(I_{-}\) and \(I_{+}\) are rational intervals with \(\sigma(I_{-})\subseteq(-\infty,f(y)]\) and \(\sigma(I_{+})\subseteq[f(y),+\infty)\)._ _._ 2. _If_ \[\tilde{\sigma}\left(I_{-}\right)\subseteq(-\infty,f(y)]\quad\text{and}\quad \tilde{\sigma}\left(I_{+}\right)\subseteq[f(y),+\infty),\] _then for any_ \(\tilde{b}\in\mathcal{G}_{\mathbb{Q}}\) _with_ \(\tilde{b}(p)=b(p)\left(=r\right)\)_, the map_ \(x\mapsto y\) _preserves all basic formulas as a finite partial map from_ \(\mathbb{A}(\tilde{\sigma},f,\tilde{iota})\) _to_ \(\mathbb{B}(\tilde{\sigma},f,\tilde{iota})\) _where_ \(\tilde{iota}:=h\tilde{b}g\)_._ Proof.: **(i).** Since \(r=h^{\dagger}(x)\in\mathbb{Q}\), we know that \(h^{-1}(-\infty,x)\) and \(h^{-1}(x,+\infty)\) are rational intervals, both with boundary point \(r\). Hence, the intervals \(h^{-1}(-\infty,x]\) and \(h^{-1}[x,+\infty)\) also have boundary point \(r\) (if \(x\in\operatorname{Im}(h)\), the intervals become closed; otherwise, they do not change). 
By \(b\in\mathcal{G}_{\mathbb{Q}}\), the preimages \(b^{-1}(h^{-1}(-\infty,x])\) and \(b^{-1}(h^{-1}[x,+\infty))\) are rational intervals as well, both with boundary point \(p=b^{-1}(r)\). Finally, \(I_{-}=g^{-1}(b^{-1}(h^{-1}(-\infty,x]))\) and \(I_{+}=g^{-1}(b^{-1}(h^{-1}[x,+\infty)))\) are rational intervals by applying Lemma 4.7(iii). The inclusions \(\sigma(I_{-})\subseteq(-\infty,f(y)]\) and \(\sigma(I_{+})\subseteq[f(y),+\infty)\) follow from Lemma 4.17. **(ii).** We claim that \(\tilde{\iota}^{-1}(-\infty,x]=\iota^{-1}(-\infty,x]=I_{-}\) and \(\tilde{\iota}^{-1}[x,+\infty)=\iota^{-1}[x,+\infty)=I_{+}\); the statement then follows by another application of Lemma 4.17. To this end, it suffices to note that \(\tilde{b}^{-1}(h^{-1}(-\infty,x])\) coincides with \(b^{-1}(h^{-1}(-\infty,x])\) since they have the same structure Figure 3. Illustration of Lemma 4.19. Figure 2. Illustration of Lemma 4.18, Case (1). (open/closed) as \(h^{-1}(-\infty,x]\) and the same boundary point, namely \(\bar{b}^{-1}(r)=p=b^{-1}(r)\). Analogously, \(\tilde{b}^{-1}(h^{-1}[x,+\infty))\) coincides with \(b^{-1}(h^{-1}[x,+\infty))\). **Lemma 4.20** (see Figures 4 and 5).: _Let \(\sigma,\tilde{\sigma},f,g,h,a,b\) such that \((*)\) holds. Let further \(x,y\in\mathbb{Q}\) such that \(a(x)=y\)._ _Suppose that \(x\in(\inf h,\sup h)\) with \(\gamma:=h^{\dagger}(x)\in\mathbb{I}\) and \(q:=\iota^{\dagger}(x)\in\mathbb{Q}\). Then one of the following four cases occurs:_ 1. 1. \(g(q)<\bar{b}^{-1}(\gamma)<\inf g(q,+\infty)\)_. Additionally, there exist_ \(u,v\in\mathbb{Q}\) _such that_ \(g(q)<u<v<\inf g(q,+\infty)\) _and_ \(hb(u)<x<hb(v)\)_. Finally,_ \(\sigma(q)\leq f(y)\) _and_ \(\sigma(q,+\infty)\subseteq[f(y),+\infty)\)_._ 2. _If_ \[\tilde{\sigma}(q)=\sigma(q)\quad\text{and}\quad\tilde{\sigma}(q,+\infty) \subseteq[f(y),+\infty),\] _then for any_ \(\tilde{b}\in\mathcal{G}_{\mathbb{Q}}\) _with_ \(\tilde{b}(u)=b(u)\) _and_ \(\tilde{b}(v)=b(v)\)_, the map_ \(x\mapsto y\) _preserves all basic formulas as a finite partial map from_ \(\mathbb{A}(\tilde{\sigma},f,\tilde{\iota})\) _to_ \(\mathbb{B}(\tilde{\sigma},f,\tilde{\iota})\) _where_ \(\tilde{\iota}:=h\tilde{b}g\)_._ 2. 1. \(g(q)<\bar{b}^{-1}(\gamma)=\inf g(q,+\infty)\)_. Additionally, there exists_ \(u\in\mathbb{Q}\) _such that_ \(g(q)<u<\inf g(q,+\infty)\) _and_ \(hb(u)<x\)_. Finally,_ \(\sigma(q)\leq f(y)\) _and_ \(\sigma(q,+\infty)\subseteq[f(y),+\infty)\) _as well as_ \(\bar{b}^{-1}(\gamma)\in(\mathbb{R}\setminus\operatorname{Im}(\bar{g}))\cap \mathbb{I}\) _and_ \(\gamma\in\operatorname{Dc}^{\mathbb{I}}(h)\)_._ 2. _If_ \[\tilde{\sigma}(q)=\sigma(q)\quad\text{and}\quad\tilde{\sigma}(q,+\infty) \subseteq[f(y),+\infty),\] _then for any_ \(\tilde{b}\in\mathcal{G}_{\mathbb{Q}}\) _with_ \(\tilde{b}(u)=b(u)\) _and_ \(\bar{\tilde{b}}(\bar{b}^{-1}(\gamma))=\bar{b}(\bar{b}^{-1}(\gamma))=\gamma\) _(so_ \(\bar{\tilde{b}}^{-1}(\gamma)=\bar{b}^{-1}(\gamma)\)_), the map_ \(x\mapsto y\) _preserves all basic formulas as a finite partial map from_ \(\mathbb{A}(\tilde{\sigma},f,\tilde{\iota})\) _to_ \(\mathbb{B}(\tilde{\sigma},f,\tilde{\iota})\) _where_ \(\tilde{\iota}:=h\tilde{b}g\)_._ 3. 1. \(\sup g(-\infty,q)<\bar{b}^{-1}(\gamma)<g(q)\)_. Additionally, there exist_ \(u,v\in\mathbb{Q}\) _such that_ \(\sup g(-\infty,q)<u<v<g(q)\) _and_ \(hb(u)<x<hb(v)\)_. Finally,_ \(\sigma(-\infty,q)\subseteq(-\infty,f(y)]\) _and_ \(\sigma(q)\geq f(y)\)_._ 2. 
_If_ \[\sigma(-\infty,q)\subseteq(-\infty,f(y)]\quad\text{and}\quad\tilde{\sigma}(q)= \sigma(q),\] _then for any_ \(\tilde{b}\in\mathcal{G}_{\mathbb{Q}}\) _with_ \(\tilde{b}(u)=b(u)\) _and_ \(\tilde{b}(v)=b(v)\)_, the map_ \(x\mapsto y\) _preserves all basic formulas as a finite partial map from_ \(\mathbb{A}(\tilde{\sigma},f,\tilde{\iota})\) _to_ \(\mathbb{B}(\tilde{\sigma},f,\tilde{\iota})\) _where_ \(\tilde{\iota}:=h\tilde{b}g\)_._ 4. 1. \(\sup g(-\infty,q)=\bar{b}^{-1}(\gamma)<g(q)\)_. Additionally, there exists_ \(v\in\mathbb{Q}\) _such that_ \(\sup g(-\infty,q)<v<g(q)\) _and_ \(x<hb(v)\)_. Finally,_ \(\sigma(-\infty,q)\subseteq(-\infty,f(y)]\) _and_ \(\sigma(q)\geq f(y)\) _as well as_ \(\bar{b}^{-1}(\gamma)\in(\mathbb{R}\setminus\operatorname{Im}(\bar{g}))\cap \mathbb{I}\) _and_ \(\gamma\in\operatorname{Dc}^{\mathbb{I}}(h)\)_._ 2. _If_ \[\sigma(-\infty,q)\subseteq(-\infty,f(y)]\quad\text{and}\quad\tilde{\sigma}(q)= \sigma(q),\] _then for any_ \(\tilde{b}\in\mathcal{G}_{\mathbb{Q}}\) _with_ \(\tilde{b}(v)=b(v)\) _and_ \(\bar{\tilde{b}}(\bar{b}^{-1}(\gamma))=\bar{b}(\bar{b}^{-1}(\gamma))=\gamma\) _(so_ \(\bar{\tilde{b}}^{-1}(\gamma)=\bar{b}^{-1}(\gamma)\)_), the map_ \(x\mapsto y\) _preserves all basic formulas as a finite partial map from_ \(\mathbb{A}(\tilde{\sigma},f,\tilde{\iota})\) _to_ \(\mathbb{B}(\tilde{\sigma},f,\tilde{\iota})\) _where_ \(\tilde{\iota}:=h\tilde{b}g\)_._ Proof.: First of all, note that \(\iota^{\dagger}(x)\) is welldefined since \(x\in(\inf h,\sup h)\) which coincides with \((\inf\iota,\sup\iota)\) by the unboundedness on either side of both \(g\) and \(b\). Since \(\gamma=h^{\dagger}(x)\) is irrational and \(h\) is injective, \(x\) cannot be contained in \(\operatorname{Im}(h)\), in particular not in \(\operatorname{Im}(\iota)\). In any case, we have \(h^{-1}(-\infty,x]=h^{-1}(-\infty,x)=(-\infty,\gamma)\) and \(h^{-1}[x,+\infty)=h^{-1}(x,+\infty)=(\gamma,+\infty)\). The cases 1 and 2 correspond to \(\iota(q)<x\), in other words \(\iota^{-1}(-\infty,x]=(-\infty,q]\) and \(\iota^{-1}[x,+\infty)=(q,+\infty)\), while the cases 3 and 4 correspond to \(\iota(q)>x\), in other words \(\iota^{-1}(-\infty,x]=(-\infty,q)\) and \(\iota^{-1}[x,+\infty)=[q,+\infty)\). We only treat the former cases. Since \(\operatorname{LP}(h)\subseteq\mathbb{I}\), the point \(x\) cannot be a limit point of \(h\), so we have \(\sup h(-\infty,\gamma)<x<\inf h(\gamma,+\infty)\) and \(\gamma\in\operatorname{Dc}^{\mathbb{I}}(h)\). Additionally, \(x<\inf\iota(q,+\infty)\) by the same argument. If \(\iota(q)<x\), we obtain \(bg(q)<\gamma\leq\inf bg(q,+\infty)\), i.e. \(g(q)<\bar{b}^{-1}(\gamma)\leq\inf g(q,+\infty)\). The first two cases are distinguished by checking whether the latter inequality is strict or not. 1. \(g(q)<\bar{b}^{-1}(\gamma)<\inf g(q,+\infty)\). **(i).** Take any \(u,v\in\mathbb{Q}\) with \(g(q)<u<\bar{b}^{-1}(\gamma)<v<\inf g(q,+\infty)\) to satisfy \(g(q)<u<v<\inf g(q,+\infty)\) and \(hb(u)<x<hb(v)\). The remaining statements follows from Lemma 4.17: \(\sigma(q)\in\sigma(-\infty,q]=\sigma\left(\iota^{-1}(-\infty,x]\right)\subseteq(- \infty,f(y)]\) and \(\sigma(q,+\infty)=\sigma\left(\iota^{-1}[x,+\infty)\right)\subseteq[f(y),+\infty)\). **(ii).** We use \(\tilde{b}(u)=b(u)\) and \(\tilde{b}(v)=b(v)\) to verify the conditions in Lemma 4.17. 
Note that \(h^{-1}(-\infty,x]\subseteq(-\infty,b(v))\) and \(h^{-1}[x,+\infty)\subseteq(b(u),+\infty)\), so that \[\tilde{\iota}^{-1}(-\infty,x]=g^{-1}(\tilde{b}^{-1}(h^{-1}(- \infty,x]))\subseteq g^{-1}(-\infty,v)=(-\infty,q]\qquad\text{and}\] \[\tilde{\iota}^{-1}[x,+\infty)=g^{-1}(\tilde{b}^{-1}(h^{-1}[x,+ \infty)))\subseteq g^{-1}(u,+\infty)=(q,+\infty)\] which yields \[\tilde{\sigma}\left(\tilde{\iota}^{-1}(-\infty,x]\right) \subseteq\tilde{\sigma}(-\infty,q]\subseteq(-\infty,\tilde{\sigma}(q )]=(-\infty,\sigma(q)]\subseteq(-\infty,f(y)]\quad\text{and}\] \[\tilde{\sigma}\left(\tilde{\iota}^{-1}[x,+\infty)\right) \subseteq\tilde{\sigma}(q,+\infty)\subseteq[f(y),+\infty).\] 2. \(g(q)<\bar{b}^{-1}(\gamma)=\inf g(q,+\infty)\). **(i).** Take any \(u\in\mathbb{Q}\) with \(g(q)<u<\bar{b}^{-1}(\gamma)\) to satisfy \(g(q)<u<\inf g(q,+\infty)\) and \(hb(u)<x\). The statements \(\sigma(q)\leq f(y)\) and \(\sigma(q,+\infty)\subseteq[f(y),+\infty)\) follow just as in (1). Figure 4. Illustration of Lemma 4.20, Case (1). Figure 5. Illustration of Lemma 4.20, Case (2). We have already argued that \(\gamma\in\mathrm{Dc}^{\mathrm{I}}(h)\), so it remains to show \(\bar{b}^{-1}(\gamma)\in(\mathbb{R}\setminus\mathrm{Im}(\bar{g}))\cap\mathbb{I}\). We know that \(\gamma\) is irrational, so \(\bar{b}^{-1}(\gamma)\) is as well by Lemma 4.33. Additionally, \(\bar{b}^{-1}(\gamma)=\inf g(q,+\infty)\) cannot be contained in \(\mathrm{Im}(\bar{g})\) since \(g\) is injective. **(ii).** Similarly to (1), we use \(\tilde{b}(u)=b(u)\) and \(\bar{\tilde{b}}^{-1}(\gamma)=\bar{b}^{-1}(\gamma)\) to verify the conditions in Lemma 4.17. Observe \(h^{-1}(-\infty,x]=(-\infty,\gamma)\) and \(h^{-1}[x,+\infty)\subseteq(b(u),+\infty)\), so that \[\tilde{t}^{-1}(-\infty,x] =g^{-1}(-\infty,\bar{\tilde{b}}^{-1}(\gamma))=\iota^{-1}(-\infty, x]=(-\infty,q]\quad\text{and}\] \[\tilde{t}^{-1}[x,+\infty) \subseteq g^{-1}(u,+\infty)=(q,+\infty),\] which yields \(\tilde{\sigma}(\tilde{t}^{-1}(-\infty,x])\subseteq(-\infty,f(y)]\) and \(\tilde{\sigma}(\tilde{t}^{-1}[x,+\infty))\subseteq[f(y),+\infty)\) as in (1). In the remaining case that \(x\in(\inf h,\sup h)\) and both \(h^{\dagger}(x)\) and \(\iota^{\dagger}(x)\) are irrational, we take a similar but somewhat more involved route in that the automorphisms \(\tilde{b}\in\mathcal{G}_{\mathbb{Q}}\) we are picking do not simply mimic the behaviour of \(b\) on sufficiently many elements. Instead, we redefine \(\tilde{b}\) on certain crucial points which are tuned to the specific \(\tilde{\sigma}\) being considered. In doing so, we have to make sure that our desired redefinition does not violate the condition for \(\tilde{b}\) on finitely many points given by Pseudo-Property \(\overline{\mathbf{X}}\) and the previous auxiliary lemmas. We split our treatment of this problem into two subcases. **Lemma 4.21** (see Figure 6).: _Let \(\sigma,\tilde{\sigma},f,g,h,a,b\) such that \((*)\) holds. Let further \(x,y\in\mathbb{Q}\) such that \(a(x)=y\)._ _Suppose that \(x\in(\inf h,\sup h)\) with \(\gamma:=h^{\dagger}(x)\in\mathbb{I}\) and \(\delta:=\iota^{\dagger}(x)\in\mathbb{I}\). Additionally, suppose that \(f(y)\in\mathrm{Im}(\sigma)\). Let \(\bar{z}\) and \(\bar{w}\) be tuples in \(\mathbb{Q}\) and let \(\bar{z}^{\prime}\) and \(\bar{w}^{\prime}\) be tuples in \((\mathbb{R}\setminus\mathrm{Im}(\bar{g}))\cap\mathbb{I}\) and \(\mathrm{Dc}^{\mathrm{I}}(h)\), respectively, such that \(b(\bar{z})=\bar{w}\) and \(\bar{b}(\bar{z}^{\prime})=\bar{w}^{\prime}\). 
Assume that \(\bar{z}\cup\bar{z}^{\prime}\) contains both an element greater and less than \(\bar{g}(\delta)\). Put \(z_{-}\) and \(z_{+}\) to be the greatest entry of \(\bar{z}\cup\bar{z}^{\prime}\) less than \(\bar{g}(\delta)\) and the least entry of \(\bar{z}\cup\bar{z}^{\prime}\) greater than \(\bar{g}(\delta)\), respectively, and put \(w_{-}\) and \(w_{+}\) to be the corresponding entries of \(\bar{w}\cup\bar{w}^{\prime}\). Then one of the following two cases occurs10:_ Footnote 10: It is possible that both cases occur simultaneously; it this happens, pick one of them arbitrarily. 1. _[label=(0)]_ 2. _[label=(0)]_ 3. _There exist_ \(q,q^{\prime}\in\mathbb{Q}\) _such that_ \(q<q^{\prime}<\delta\) _and_ \(z_{-}<g(q)<g(q^{\prime})<\bar{g}(\delta)<z_{+}\) _as well as_ \(\sigma(q)=\sigma(q^{\prime})=f(y)\)_; further,_ \(\gamma=\bar{b}(\bar{g}(\delta))\) _and_ \(w_{-}<\gamma<w_{+}\)_._ 4. _If_ \[\tilde{\sigma}(q)=\sigma(q)=f(y)\quad\text{and}\quad\tilde{\sigma}(q^{\prime })=\sigma(q^{\prime})=f(y),\] _and if_ \(\tilde{u},\tilde{v},\hat{u},\hat{v}\in\mathbb{Q}\) _satisfy_ \(g(q)<\tilde{u}<\tilde{v}<g(q^{\prime})\) _as well as_ \(w_{-}<\hat{u}<\gamma<\hat{v}<w_{+}\)_, then the finite partial map_ \(\tilde{z}\mapsto\bar{w}\)_,_ \(\tilde{z}^{\prime}\mapsto\bar{w}^{\prime}\) _and_ \(\tilde{u}\mapsto\hat{u},\tilde{v}\mapsto\hat{v}\) _is strictly increasing. Additionally, for any_ \(b\in\mathcal{G}_{\mathbb{Q}}\) _such that_ \(\tilde{b}(\tilde{z})=\bar{w}\)_,_ \(\tilde{b}(\tilde{z}^{\prime})=\bar{w}^{\prime}\) _and_ \(b(\tilde{u})=\hat{u}\)_,_ \(b(\tilde{v})=\hat{v}\)_, the map_ \(x\mapsto y\) _preserves all basic formulas as a finite partial map from_ \(\mathbb{A}(\tilde{\sigma},f,\tilde{t})\) _to_ \(\mathbb{B}(\tilde{\sigma},f,\tilde{t})\) _where_ \(\tilde{t}:=h\tilde{b}g\)_._ 4. _There exist_ \(q,q^{\prime}\in\mathbb{Q}\) _such that_ \(\delta<q<q^{\prime}\) _and_ \(z_{-}<\bar{g}(\delta)<g(q)<g(q^{\prime})<z_{+}\) _as well as_ \(\sigma(q)=\sigma(q^{\prime})=f(y)\)_; further,_ \(\gamma=\bar{b}(\bar{g}(\delta))\) _and_ \(w_{-}<\gamma<w_{+}\)_._ 5. _If_ \[\tilde{\sigma}(q)=\sigma(q)=f(y)\quad\text{and}\quad\tilde{\sigma}(q^{\prime})= \sigma(q^{\prime})=f(y),\] _and if_ \(\tilde{u},\tilde{v},\hat{u},\hat{v}\in\mathbb{Q}\) _satisfy_ \(g(q)<\tilde{u}<\tilde{v}<g(q^{\prime})\) _as well as_ \(w_{-}<\hat{u}<\gamma<\hat{v}<w_{+}\)_, then the finite partial map_ \(\tilde{z}\mapsto\bar{w}\)_,_ \(\tilde{z}^{\prime}\mapsto\bar{w}^{\prime}\) _and_ \(\tilde{u}\mapsto\hat{u},\tilde{v}\mapsto\hat{v}\) _is strictly increasing. Additionally, for any_ \(b\in\mathcal{G}_{\mathbb{Q}}\) _such that_ \(\tilde{b}(\tilde{z})=\bar{w}\)_,_ \(\tilde{b}(\tilde{z}^{\prime})=\bar{w}^{\prime}\) _and_ \(b(\tilde{u})=\hat{u}\)_,_ \(b(\tilde{v})=\hat{v}\)_, the map_ \(x\mapsto y\) _preserves all basic formulas as a finite partial map from_ \(\mathbb{A}(\tilde{\sigma},f,\tilde{t})\) _to_ \(\mathbb{B}(\tilde{\sigma},f,\tilde{t})\) _where_ \(\tilde{t}:=h\tilde{b}g\)_._ Proof.: As in the proof of Lemma 4.20, the generalised inverse \(\iota^{\dagger}(x)\) is welldefined and \(x\) cannot be contained in \(\mathrm{Im}(h)\), in particular \(\mathrm{Im}(\iota)\). Additionally, we have \(h^{-1}(-\infty,x]=h^{-1}(-\infty,x)=(-\infty,\gamma)\) and \(h^{-1}[x,+\infty)=h^{-1}(x,+\infty)=(\gamma,+\infty)\) as well as \(\iota^{-1}(-\infty,x]=\iota^{-1}(-\infty,x)=\iota^{-1}( \((-\infty,\delta)\) and \(\iota^{-1}[x,+\infty)=\iota^{-1}(x,+\infty)=(\delta,+\infty)\). This also yields that \(\bar{b}(\bar{g}(\delta))=\gamma\). 
By Lemma 4.17, we conclude11 Footnote 11: Note that we cannot express \(\tilde{\sigma}(-\infty,\delta)\subseteq(-\infty,f(y)]\) and \(\tilde{\sigma}(\delta,+\infty)\subseteq[f(y),+\infty)\) using the rich topology from Definition 3.5 since \(\delta\) is irrational. This is one of the reasons why we need to redefine \(\tilde{b}\) instead of transferring the behaviour of \(b\) at sufficiently many points. \[\sigma(-\infty,\delta) =\sigma\left(\iota^{-1}(-\infty,x]\right)\subseteq(-\infty,f(y) ]\quad\text{and}\] \[\sigma(\delta,+\infty) =\sigma\left(\iota^{-1}[x,+\infty)\right)\subseteq[f(y),+\infty).\] Since \(f(y)\in\operatorname{Im}(\sigma)\) and since \(\delta\) is irrational, this is only possible if \(\sigma\) is locally constant with value \(f(y)\) either below or above \(\delta\) (or both). These two situations form the cases (1) and (2), respectively. We only treat the former option. **(i).** By our preparatory reasoning, there exist \(q,q^{\prime}\in\mathbb{Q}\) with \(q<q^{\prime}<\delta\) and \(\sigma(q)=\sigma(q^{\prime})=f(y)\). The number \(\bar{g}(\delta)\) is irrational by Lemma 4.32, and obviously not an element of \(\mathbb{R}\setminus\operatorname{Im}(\bar{g})\). Thus, \(\bar{g}(\delta)\) cannot be contained in \(\bar{z}\cup\bar{z}^{\prime}\). Consequently, \(\gamma=\bar{b}(\bar{g}(\delta))\) cannot be contained in \(\bar{w}\cup\bar{w}^{\prime}\), and we conclude \(w_{-}<\gamma<w_{+}\) from \(z_{-}<\bar{g}(\delta)<z_{+}\). Since \(\delta\in\mathbb{I}=\operatorname{Cont}(g)\), we can pick \(q,q^{\prime}\) close enough to \(\delta\) to ascertain \(z_{-}<g(q)<g(q^{\prime})<\bar{g}(\delta)<z_{+}\). **(ii).** By the definitions of \(z_{-}\) and \(z_{+}\) and the fact that \(\tilde{z}\mapsto\bar{w}\), \(\tilde{z}^{\prime}\mapsto\bar{w}^{\prime}\) is strictly increasing, the finite partial map \(\bar{z}\mapsto\bar{w}\), \(\bar{z}^{\prime}\mapsto\bar{w}^{\prime}\) and \(\tilde{u}\mapsto\tilde{u},\tilde{v}\mapsto\hat{v}\) is strictly increasing. For the second statement, we check the assumptions of Lemma 4.17. Note that \[\tilde{\iota}^{-1}(-\infty,x] =g^{-1}(\tilde{b}^{-1}(-\infty,\gamma))\subseteq g^{-1}(-\infty, \tilde{v})\subseteq(-\infty,q^{\prime}]\qquad\text{and}\] \[\tilde{\iota}^{-1}[x,+\infty) =g^{-1}(\tilde{b}^{-1}(\gamma,+\infty))\subseteq g^{-1}(\tilde{u },+\infty)\subseteq[q,+\infty),\] so \[\tilde{\sigma}\left(\tilde{\iota}^{-1}(-\infty,x]\right) \subseteq\tilde{\sigma}(-\infty,q^{\prime}]\subseteq(-\infty, \tilde{\sigma}(q^{\prime})]=(-\infty,\sigma(q^{\prime})]=(-\infty,f(y)]\quad \text{and}\] \[\tilde{\sigma}\left(\tilde{\iota}^{-1}[x,+\infty)\right) \subseteq\tilde{\sigma}[q,+\infty)\subseteq[\tilde{\sigma}(q),+ \infty)=[\sigma(q),+\infty)=[f(y),+\infty).\qed\] Our final auxiliary lemma treats the second subcase of \(x\in(\inf h,\sup h)\) and \(h^{\dagger}(x),\iota^{\dagger}(x)\in\mathbb{I}\). **Lemma 4.22** (see Figure 7).: _Let \(\sigma,\tilde{\sigma},f,g,h,a,b\) such that \((*)\) holds. Let further \(x,y\in\mathbb{Q}\) such that \(a(x)=y\)._ _Suppose that \(x\in(\inf h,\sup h)\) with \(\gamma:=h^{\dagger}(x)\in\mathbb{I}\) and \(\delta:=\iota^{\dagger}(x)\in\mathbb{I}\). Additionally, suppose that \(f(y)\notin\operatorname{Im}(\sigma)\). 
Let \(\bar{z}\) and \(\bar{w}\) be tuples in \(\mathbb{Q}\) and let \(\bar{z}^{\prime}\) and \(\bar{w}^{\prime}\) be tuples in \((\mathbb{R}\setminus\operatorname{Im}(\bar{g}))\cap\mathbb{I}\) and \(\operatorname{Dc}^{\mathbbm{I}}(h)\), respectively, such that \(b(\bar{z})=\bar{w}\) and \(\bar{b}(\bar{z}^{\prime})=\bar{w}^{\prime}\). Assume that \(\bar{z}\cup\bar{z}^{\prime}\) contains both an element greater and less than \(\bar{g}(\delta)\). Put \(z_{-}\) and \(z_{+}\) to be the greatest entry of \(\bar{z}\cup\bar{z}^{\prime}\) less than Figure 6. Illustration of Lemma 4.21, Case (1). \(\bar{g}(\delta)\) and the least entry of \(\bar{z}\cup\bar{z}^{\prime}\) greater than \(\bar{g}(\delta)\), respectively, and put \(w_{-}\) and \(w_{+}\) to be the corresponding entries of \(\bar{w}\cup\bar{w}^{\prime}\). Then the following holds:_ 1. \(\sigma^{\dagger}(f(y))=\delta\) _and_ \(\delta\in\mathrm{Dc}^{\mathbb{I}}(\sigma)\)_. Additionally,_ \(\gamma\in\mathrm{Dc}^{\mathbb{I}}(h)\) _and_ \(\gamma=\bar{b}(\bar{g}(\delta))\)_. Further,_ \(z_{-}<\bar{g}(\delta)<z_{+}\) _as well as_ \(w_{-}<\gamma<w_{+}\)_. Finally,_ \(\sigma\left(g^{-1}(-\infty,z_{-}]\right)\subseteq(-\infty,f(y))\) _and_ \(\sigma\left(g^{-1}[z_{+},+\infty)\right)\subseteq(f(y),+\infty)\)_._ _If_ \(z_{\pm}\) _and_ \(w_{\pm}\) _are rational, then the intervals_ \(I_{-}:=g^{-1}(-\infty,z_{-}]\) _and_ \(I_{+}:=g^{-1}[z_{+},+\infty)\) _are rational as well._ 2. _If_ \(z_{\pm}\) _and_ \(w_{\pm}\) _are rational and if_ \(\mathrm{Im}(\tilde{\sigma})\cap\{f(y)\}=\emptyset\quad\text{and}\quad\tilde{ \sigma}(I_{-})\subseteq(-\infty,f(y))\quad\text{and}\quad\tilde{\sigma}(I_{+} )\subseteq(f(y),+\infty),\)__ _then - setting_ \(\tilde{H}_{-}:=\tilde{\sigma}^{-1}(-\infty,f(y)]=\tilde{\sigma}^{-1}(-\infty, f(y))\) _and_ \(\tilde{H}_{+}:=\tilde{\sigma}^{-1}[f(y),+\infty)=\tilde{\sigma}^{-1}(f(y),+\infty)\) _- we have_ \(\sup g(\tilde{H}_{-})<z_{+}\) _as well as_ \(z_{-}<\inf g(\tilde{H}_{+})\)_. Further, there exists_ \(\tilde{\rho}\in((\mathbb{R}\setminus\mathrm{Im}(\tilde{g}))\cap\mathbb{I}) \cup\bar{g}(\mathrm{Dc}^{\mathbb{I}}(\tilde{\sigma}))\) _such that_ \(z_{-}<\tilde{\rho}<z_{+}\) _and_ \(g(\tilde{H}_{-})\subseteq(-\infty,\tilde{\rho})\) _as well as_ \(g(\tilde{H}_{+})\subseteq(\tilde{\rho},+\infty)\)_. The finite partial map_ \(\bar{z}\mapsto\bar{w}\)_,_ \(\bar{z}^{\prime}\mapsto\bar{w}^{\prime}\) _and_ \(\tilde{\rho}\mapsto\gamma\) _is strictly increasing and, additionally, for any_ \(\tilde{b}\in\mathcal{G}_{\mathbb{Q}}\) _such that_ \(\tilde{b}(\bar{z})=\bar{w}\)_,_ \(\bar{\tilde{b}}(\bar{z}^{\prime})=\bar{w}^{\prime}\) _and_ \(\bar{\tilde{b}}(\tilde{\rho})=\gamma\)_, the map_ \(x\mapsto y\) _preserves all basic formulas as a finite partial map from_ \(\mathbb{A}(\tilde{\sigma},f,\tilde{\iota})\) _to_ \(\mathbb{B}(\tilde{\sigma},f,\tilde{\iota})\) _where_ \(\tilde{\iota}:=h\tilde{b}g\)_._ Proof.: As in the proof of Lemma 4.21, the generalised inverse \(\iota^{\dagger}(x)\) is welldefined and \(x\) cannot be contained in \(\mathrm{Im}(h)\), in particular \(\mathrm{Im}(\iota)\). Additionally, \(h^{-1}(-\infty,x]=h^{-1}(-\infty,x)=(-\infty,\gamma)\) and \(h^{-1}[x,+\infty)=h^{-1}(x,+\infty)=(\gamma,+\infty)\) as well as \(\iota^{-1}(-\infty,x]=\iota^{-1}(-\infty,x)=(-\infty,\delta)\) and \(\iota^{-1}[x,+\infty)=\iota^{-1}(x,+\infty)=(\delta,+\infty)\). We again conclude \(\tilde{b}(\bar{g}(\delta))=\gamma\). 
Combining Lemma 4.17 with \(f(y)\notin\mathrm{Im}(\sigma)\), we obtain \(\sigma(-\infty,\delta)\subseteq(-\infty,f(y))\) as well as \(\sigma(\delta,+\infty)\subseteq(f(y),+\infty)\) which yields \(\delta=\sigma^{\dagger}(f(y))\) and \[\sigma\left(g^{-1}(-\infty,z_{-}]\right)\subseteq\sigma\left(g^{-1}(-\infty, \bar{g}(\delta))\right)\subseteq(-\infty,f(y))\quad\text{as well as}\] \[\sigma\left(g^{-1}[z_{+},+\infty)\right)\subseteq\sigma\left(g^{-1}(\bar{g}( \delta),+\infty)\right)\subseteq(f(y),+\infty).\] **(i).** We know that \(f(y)\in\mathbb{Q}\) cannot be a limit point of \(\sigma\), so \(\sup\sigma(-\infty,\delta)<f(y)<\inf\sigma(\delta,+\infty)\), in particular \(\delta\in\mathrm{Dc}^{\mathbb{I}}(\sigma)\). Using \(\mathrm{LP}(h)\subseteq\mathbb{I}\) in the same fashion, we obtain \(\gamma\in\mathrm{Dc}^{\mathbb{I}}(h)\). The remaining statements \(z_{-}<\bar{g}(\delta)<z_{+}\) as well as \(w_{-}<\gamma<w_{+}\) follow just as in the proof of Lemma 4.21. Finally, if \(z_{\pm}\) and \(w_{\pm}\) are rational, then the intervals \(I_{-}:=g^{-1}(-\infty,z_{-}]\) and \(I_{+}:=g^{-1}[z_{+},+\infty)\) are rational by Lemma 4.7(iii). Figure 7. Illustration of Lemma 4.22. **(ii).** We have \(\mathbb{Q}=\tilde{H}_{-}\dot{\cup}\,\tilde{H}_{+}\), so \(\sup\tilde{H}_{-}=\inf\tilde{H}_{+}\). By our assumption on \(\tilde{\sigma}\), we know that \(I_{-}\cap\tilde{H}_{+}=\emptyset\), so \(\inf g(\tilde{H}_{+})\geq z_{-}\). In fact, this inequality is strict since \(\inf g(\tilde{H}_{+})\) is either contained in \(g(\tilde{H}_{+})\) or irrational by \(\operatorname{LP}(g)\subseteq\mathbb{I}\). One argues analogously to show \(\sup g(\tilde{H}_{-})<z_{+}\). To find \(\tilde{\rho}\), we distinguish whether \(\sup\tilde{H}_{-}=\inf\tilde{H}_{+}\) is rational or irrational. _Case 1_ (\(\tilde{q}:=\tilde{\sigma}^{\dagger}(f(y))=\sup\tilde{H}_{-}=\inf\tilde{H}_{+} \in\mathbb{Q}\)): We conclude \(\sup g(\tilde{H}_{-})<\inf g(\tilde{H}_{+})\) from Lemma 4.3(ii). Combined with \(\sup g(\tilde{H}_{-})<z_{+}\) and \(z_{-}<\inf g(\tilde{H}_{+})\), this implies \(\max\left(\sup g(\tilde{H}_{-}),z_{-}\right)<\min\left(\inf g(\tilde{H}_{+}), z_{+}\right)\). Any irrational \(\tilde{\rho}\) between these two numbers satisfies the requirements - note that \(\tilde{\rho}\) is contained in \(\mathbb{R}\setminus\operatorname{Im}(\bar{g})\) by injectivity of \(g\). _Case 2_ (\(\tilde{\delta}:=\tilde{\sigma}^{\dagger}(f(y))=\sup\tilde{H}_{-}=\inf\tilde{H}_ {+}\in\mathbb{I}\)): We obtain \(\sup g(\tilde{H}_{-})=\tilde{g}(\tilde{\delta})=\inf g(\tilde{H}_{+})\) since \(\tilde{\delta}\in\operatorname{Cont}(g)\), so \(z_{-}<\bar{g}(\tilde{\delta})<z_{+}\). Since \(\tilde{\sigma}(-\infty,\tilde{\delta})\subseteq(-\infty,f(y))\) and \(\tilde{\sigma}(\tilde{\delta},+\infty)\subseteq(f(y),+\infty)\) and since \(f(y)\in\mathbb{Q}\) cannot be a limit point of \(\tilde{\sigma}\), we conclude \(\tilde{\delta}\in\operatorname{Dc}^{\mathbb{I}}(\tilde{\sigma})\). Hence, we set \(\tilde{\rho}:=\bar{g}(\tilde{\delta})\). By construction, the finite partial map \(\bar{z}\mapsto\bar{w}\), \(\bar{z}^{\prime}\mapsto\bar{w}^{\prime}\) and \(\tilde{\rho}\mapsto\gamma\) is strictly increasing. For the preservation statement, we verify the conditions in Lemma 4.17. 
Note that \[\tilde{t}^{-1}(-\infty,x] =g^{-1}(-\infty,\tilde{\rho})=\mathbb{Q}\setminus g^{-1}(\tilde{ \rho},+\infty)\subseteq\mathbb{Q}\setminus g^{-1}(g(\tilde{H}_{+}))=\tilde{H} _{-}\qquad\text{and}\] \[\tilde{t}^{-1}[x,+\infty) =g^{-1}(\tilde{\rho},+\infty)=\mathbb{Q}\setminus g^{-1}(-\infty, \tilde{\rho})\subseteq\mathbb{Q}\setminus g^{-1}(g(\tilde{H}_{-}))=\tilde{H} _{+},\] so \[\tilde{\sigma}\left(\tilde{t}^{-1}(-\infty,x]\right) \subseteq\tilde{\sigma}(\tilde{H}_{-})\subseteq(-\infty,f(y))\quad \text{and}\] \[\tilde{\sigma}\left(\tilde{t}^{-1}[x,+\infty)\right) \subseteq\tilde{\sigma}(\tilde{H}_{+})\subseteq(f(y),+\infty).\qed\] _Remark 4.23_.: Examining the last proof more closely, one observes that we never used \(a(x)=y\) other than via the inclusion of intervals from Lemma 4.17. Hence, we in fact proved the following slightly stronger statement which will be useful when amalgamating the auxiliary lemmas: _Let \(\sigma,\tilde{\sigma},f,g,h,a,b\) such that \((*)\) holds. Let further \(x,y\in\mathbb{Q}\) such that_ \[\sigma\left(t^{-1}(-\infty,x]\right)\subseteq(-\infty,f(y)]\quad\text{and} \quad\sigma\left(t^{-1}[x,+\infty)\right)\subseteq[f(y),+\infty).\] _Suppose that \(x\in(\inf h,\sup h)\) with \(\gamma:=h^{\dagger}(x)\in\mathbb{I}\) and \(\delta:=t^{\dagger}(x)\in\mathbb{I}\). Additionally, suppose that \(f(y)\notin\operatorname{Im}(\sigma)\). Let \(\bar{z}\) and \(\bar{w}\) be tuples in \(\mathbb{Q}\) and let \(\tilde{z}^{\prime}\) and \(\bar{w}^{\prime}\) be tuples in \((\mathbb{R}\setminus\operatorname{Im}(\bar{g}))\cap\mathbb{I}\) and \(\operatorname{Dc}^{\mathbb{I}}(h)\), respectively, such that \(b(\bar{z})=\bar{w}\) and \(\bar{b}(\tilde{z}^{\prime})=\bar{w}^{\prime}\). Assume that \(\bar{z}\cup\tilde{z}^{\prime}\) contains both an element greater and less than \(\bar{g}(\delta)\). Put \(z_{-}\) and \(z_{+}\) to be the greatest entry of \(\bar{z}\cup\tilde{z}^{\prime}\) less than \(\bar{g}(\delta)\) and the least entry of \(\bar{z}\cup\tilde{z}^{\prime}\) greater than \(\bar{g}(\delta)\), respectively, and put \(w_{-}\) and \(w_{+}\) to be the corresponding entries of \(\bar{w}\cup\bar{w}^{\prime}\). Then the following holds:_ 1. \(\sigma^{\dagger}(f(y))=\delta\) _and_ \(\delta\in\operatorname{Dc}^{\mathbb{I}}(\sigma)\)_. Additionally,_ \(\gamma\in\operatorname{Dc}^{\mathbb{I}}(h)\) _and_ \(\gamma=\bar{b}(\bar{g}(\delta))\)_. Further,_ \(z_{-}<\bar{g}(\delta)<z_{+}\) _as well as_ \(w_{-}<\gamma<w_{+}\)_. Finally,_ \(\sigma\left(g^{-1}(-\infty,z_{-}]\right)\subseteq(-\infty,f(y))\) _and_ \(\sigma\left(g^{-1}[z_{+},+\infty)\right)\subseteq(f(y),+\infty)\)_._ _If_ \(z_{\pm}\) _and_ \(w_{\pm}\) _are rational, then the intervals_ \(I_{-}:=g^{-1}(-\infty,z_{-}]\) _and_ \(I_{+}:=g^{-1}[z_{+},+\infty)\) _are rational as well._ 2. _If_ \(z_{\pm}\) _and_ \(w_{\pm}\) _are rational and if_ \[\operatorname{Im}(\tilde{\sigma})\cap\{f(y)\}=\emptyset\quad\text{and}\quad \tilde{\sigma}(I_{-})\subseteq(-\infty,f(y))\quad\text{and}\quad\tilde{ \sigma}(I_{+})\subseteq(f(y),+\infty),\] \[\text{then - setting }\tilde{H}_{-}:=\tilde{\sigma}^{-1}(-\infty,f(y)]=\tilde{\sigma}^{-1}(- \infty,f(y))\text{ and }\tilde{H}_{+}:=\tilde{\sigma}^{-1}[f(y),+\infty)=\tilde{\sigma}^{-1}(f(y),+ \infty)\text{ - we have }\sup g(\tilde{H}_{-})<z_{+}\text{ as well as }z_{-}<\inf g(\tilde{H}_{+})\text{. 
_Further, there exists \(\tilde{\rho}\in((\mathbb{R}\setminus\operatorname{Im}(\bar{g}))\cap\mathbb{I})\cup\bar{g}(\operatorname{Dc}^{\mathbb{I}}(\tilde{\sigma}))\) such that \(z_{-}<\tilde{\rho}<z_{+}\) and \(g(\tilde{H}_{-})\subseteq(-\infty,\tilde{\rho})\) as well as \(g(\tilde{H}_{+})\subseteq(\tilde{\rho},+\infty)\). The finite partial map \(\bar{z}\mapsto\bar{w}\), \(\bar{z}^{\prime}\mapsto\bar{w}^{\prime}\) and \(\tilde{\rho}\mapsto\gamma\) is strictly increasing and, additionally, for any \(\tilde{b}\in\mathcal{G}_{\mathbb{Q}}\) such that \(\tilde{b}(\bar{z})=\bar{w}\), \(\bar{\tilde{b}}(\bar{z}^{\prime})=\bar{w}^{\prime}\) and \(\bar{\tilde{b}}(\tilde{\rho})=\gamma\), we have_ \[\tilde{\sigma}\left(\tilde{\iota}^{-1}(-\infty,x]\right)\subseteq(-\infty,f(y)]\quad\text{and}\quad\tilde{\sigma}\left(\tilde{\iota}^{-1}[x,+\infty)\right)\subseteq[f(y),+\infty),\] _where \(\tilde{\iota}:=h\tilde{b}g\)._

### Proving the Variation Lemma 4.13 in full

Finally, we amalgamate the special cases.

Proof (of the Variation Lemma 4.13).: We construct \(O\) as an intersection of \(\mathcal{T}_{rich}\)-subbasic open sets, i.e. of sets of the types \(0\), \(1\), \(2\), \(3\). By adding to the intersection \(O\) the condition that \(\tilde{\sigma}\) has the same boundedness type as \(h\) (type \(2\)), we can ascertain that \((*)\) holds. Considering that \(\bar{x}\mapsto\bar{y}\) automatically preserves the formulas \(z_{i}<z_{j}\) since \(a(\bar{x})=\bar{y}\) and that all the other basic formulas are unary, it suffices to pick the automorphism \(\tilde{b}\in\mathcal{G}_{\mathbb{Q}}\) in such a way that the map \(x\mapsto y\) preserves all basic formulas for each corresponding pair \(x,y\) in \(\bar{x},\bar{y}\). First, we treat those corresponding pairs \(x,y\) in \(\bar{x},\bar{y}\) for which

(a) \(x\notin(\inf h,\sup h)\), OR

(b) \(x\in(\inf h,\sup h)\) with \(h^{\dagger}(x)\in\mathbb{Q}\), OR

(c) \(x\in(\inf h,\sup h)\) with \(h^{\dagger}(x)\in\mathbb{I}\) and \(\iota^{\dagger}(x)\in\mathbb{Q}\).

Applying Lemmas 4.18, 4.19 and 4.20 in each case yields a finite intersection of sets of types \(0\), \(1\), \(2\), \(3\) and additional conditions of the form \(\tilde{b}(z)=w=b(z)\) for \(z,w\in\mathbb{Q}\) or \(\bar{\tilde{b}}(z^{\prime})=w^{\prime}=\bar{b}(z^{\prime})\) for \(z^{\prime}\in(\mathbb{R}\setminus\operatorname{Im}(\bar{g}))\cap\mathbb{I}\) and \(w^{\prime}\in\operatorname{Dc}^{\mathbb{I}}(h)\) under which \(x\mapsto y\) always preserves all basic formulas as a finite partial map from \(\mathbb{A}(\tilde{\sigma},f,\tilde{\iota})\) to \(\mathbb{B}(\tilde{\sigma},f,\tilde{\iota})\) where \(\tilde{\iota}:=h\tilde{b}g\). We add the sets of types \(0\), \(1\), \(2\), \(3\) to the intersection \(O\), we add the points \(z\) and \(w\) to \(\bar{z}^{*}\) and \(\bar{w}^{*}\), respectively, and we add the points \(z^{\prime}\) and \(w^{\prime}\) to \(\bar{z}^{\prime}\) and \(\bar{w}^{\prime}\), respectively. Summarising, we obtain that if \(\tilde{\sigma}\) is contained in the set \(O\) constructed thus far and if \(\tilde{b}(\bar{z}^{*})=\bar{w}^{*}\) and \(\bar{\tilde{b}}(\bar{z}^{\prime})=\bar{w}^{\prime}\), then \(x\mapsto y\) preserves all basic formulas for each corresponding pair \(x,y\) with one of the three properties (a)-(c). It remains to consider those corresponding pairs \(x,y\) in \(\bar{x},\bar{y}\) for which

(d) \(x\in(\inf h,\sup h)\) with \(\gamma:=h^{\dagger}(x)\in\mathbb{I}\) and \(\delta:=\iota^{\dagger}(x)\in\mathbb{I}\).
Put \(z_{-}\) and \(z_{+}\) to be the greatest entry of \(\bar{z}\cup\bar{z}^{*}\cup\bar{z}^{\prime}\) less than \(\bar{g}(\delta)\) and the least entry of \(\bar{z}\cup\bar{z}^{*}\cup\bar{z}^{\prime}\) greater than \(\bar{g}(\delta)\), respectively, and put \(w_{-}\) and \(w_{+}\) to be the corresponding entries of \(\bar{w}\cup\bar{w}^{*}\cup\bar{w}^{\prime}\). As a first step, Lemmas 4.21 and 4.22 yield that \(z_{-}<\bar{g}(\delta)<z_{+}\) (as well as \(w_{-}<\gamma<w_{+}\)) and \(\gamma=\bar{b}(\bar{g}(\delta))\). Hence, we can find _rationals_ \(\hat{z}_{-},\hat{z}_{+}\in\mathbb{Q}\) such that \(z_{-}<\hat{z}_{-}<\bar{g}(\delta)<\hat{z}_{+}<z_{+}\). We add \(\hat{z}_{\pm}\) to \(\bar{z}^{*}\) and \(\hat{w}_{\pm}:=b(\hat{z}_{\pm})\) to \(\bar{w}^{*}\). In this way, we can assume that \(z_{\pm}\) and \(w_{\pm}\) are always rational for each corresponding pair \(x,y\).

If \(x_{1},y_{1}\) and \(x_{2},y_{2}\) are two such pairs (without loss of generality, let \(x_{1}<x_{2}\)) and if \[\gamma_{1}:=h^{\dagger}(x_{1})<\gamma_{2}:=h^{\dagger}(x_{2}),\] we enrich \(\bar{z}^{*}\) and \(\bar{w}^{*}\) even further: putting \(\delta_{1}:=\iota^{\dagger}(x_{1})\) and \(\delta_{2}:=\iota^{\dagger}(x_{2})\), we know that \[\bar{b}(\bar{g}(\delta_{1}))=\gamma_{1}<\gamma_{2}=\bar{b}(\bar{g}(\delta_{2}))\] by Lemmas 4.21 and 4.22, and hence \(\bar{g}(\delta_{1})<\bar{g}(\delta_{2})\). If we pick \(\tilde{z},\tilde{w}\in\mathbb{Q}\) such that \[\bar{g}(\delta_{1})<\tilde{z}<\bar{g}(\delta_{2})\text{ and }\tilde{w}:=b(\tilde{z}), \tag{10}\] then \(\gamma_{1}<\tilde{w}<\gamma_{2}\). We add \(\tilde{z}\) to \(\bar{z}^{*}\) and \(\tilde{w}\) to \(\bar{w}^{*}\). If \(z_{\pm,1},w_{\pm,1},z_{\pm,2},w_{\pm,2}\) denote the values \(z_{\pm},w_{\pm}\) for \(x_{1},y_{1}\) and \(x_{2},y_{2}\), respectively (they are necessarily rational!), we obtain \(z_{+,1}\leq\tilde{z}\leq z_{-,2}\) and \(w_{+,1}\leq\tilde{w}\leq w_{-,2}\). Distinguishing cases, we conclude that whichever combination of Lemmas 4.21 and 4.22 applies to \(x_{1},y_{1}\) and \(x_{2},y_{2}\), the resulting conditions on \(\tilde{b}\) will be compatible, i.e. strictly increasing. By way of example, consider the case that \(x_{1},y_{1}\) fall into the scope of Lemma 4.21 and \(x_{2},y_{2}\) fall into the scope of Lemma 4.22. Then we are required to pick \[\tilde{u},\tilde{v},\hat{u},\hat{v}\in\mathbb{Q}\quad\text{and}\quad\tilde{\rho}\in((\mathbb{R}\setminus\operatorname{Im}(\bar{g}))\cap\mathbb{I})\cup\bar{g}(\operatorname{Dc}^{\mathbb{I}}(\tilde{\sigma}))\] with (in particular) \[z_{-,1}<\tilde{u}<\tilde{v}<z_{+,1}\leq z_{-,2}<\tilde{\rho}<z_{+,2}\quad\text{and}\quad w_{-,1}<\hat{u}<\hat{v}<w_{+,1}\leq w_{-,2}<\gamma_{2}<w_{+,2}.\] Thus, for any \(\tilde{u},\tilde{v},\hat{u},\hat{v},\tilde{\rho}\) we could pick, the finite partial map \(\bar{z}\mapsto\bar{w}\), \(\bar{z}^{*}\mapsto\bar{w}^{*}\), \(\bar{z}^{\prime}\mapsto\bar{w}^{\prime}\), \(\tilde{u}\mapsto\hat{u}\), \(\tilde{v}\mapsto\hat{v}\) and \(\tilde{\rho}\mapsto\gamma_{2}\) is automatically strictly increasing.

Finally, we treat the possibility that \[h^{\dagger}(x_{1})=h^{\dagger}(x_{2}).\] We will show that we can reduce to a single application of Lemma 4.21, Lemma 4.22 or Remark 4.23. First, let \(x_{1},y_{1}\) and \(x_{2},y_{2}\) and \(x_{3},y_{3}\) be three corresponding pairs with \(x_{1}<x_{2}<x_{3}\) (and consequently \(y_{1}<y_{2}<y_{3}\)) but \(h^{\dagger}(x_{1})=h^{\dagger}(x_{2})=h^{\dagger}(x_{3})\).
Then \[h^{-1}(-\infty,x_{1}]=h^{-1}(-\infty,x_{2}]=h^{-1}(-\infty,x_{3}]\text{ and }h^{-1}[x_{1},+\infty)=h^{-1}[x_{2},+\infty)=h^{-1}[x_{3},+\infty),\] so \[\tilde{\iota}^{-1}(-\infty,x_{1}]=\tilde{\iota}^{-1}(-\infty,x_{2}]=\tilde{\iota}^{-1}(-\infty,x_{3}]\text{ and }\tilde{\iota}^{-1}[x_{1},+\infty)=\tilde{\iota}^{-1}[x_{2},+\infty)=\tilde{\iota}^{-1}[x_{3},+\infty)\] for _all_ \(\tilde{\iota}=h\tilde{b}g\) we could pick in the sequel. It is immediate from Lemma 4.17 that we can drop \(x_{2},y_{2}\) from \(\bar{x},\bar{y}\); more precisely: if \(x_{1}\mapsto y_{1}\) and \(x_{3}\mapsto y_{3}\) preserve all basic formulas, then so does \(x_{2}\mapsto y_{2}\). Hence, we can assume that \(\bar{x},\bar{y}\) contains only two corresponding pairs \(x_{1},y_{1}\) and \(x_{2},y_{2}\) with \(h^{\dagger}(x_{1})=h^{\dagger}(x_{2})\). If additionally \(f(y_{1})=f(y_{2})\), we can drop one of the pairs from \(\bar{x},\bar{y}\) and apply Lemma 4.21 or 4.22 to the remaining one. If on the other hand \(f(y_{1})<f(y_{2})\), we apply Lemma 4.17 to \(x_{1}\mapsto y_{1}\) and \(x_{2}\mapsto y_{2}\) as finite partial maps from \(\mathbb{A}\) to \(\mathbb{B}\) to obtain \[\sigma\left(\iota^{-1}(-\infty,x_{1}]\right)\subseteq(-\infty,f(y_{1})]\quad\text{and}\quad\sigma\left(\iota^{-1}[x_{1},+\infty)\right)\subseteq[f(y_{2}),+\infty).\] Since \(\iota^{-1}(-\infty,x_{1}]\) and \(\iota^{-1}[x_{1},+\infty)\) partition the whole of \(\mathbb{Q}\) (note that \(x_{1}\notin\operatorname{Im}(\iota)\)), this implies \(\operatorname{Im}(\sigma)\cap(f(y_{1}),f(y_{2}))=\emptyset\). We add the condition \[\operatorname{Im}(\tilde{\sigma})\cap(f(y_{1}),f(y_{2}))=\emptyset\quad\text{(type 3)}\] to the intersection \(O\) and pick \(\hat{y}\in\mathbb{Q}\) such that \(f(y_{1})<f(\hat{y})<f(y_{2})\). Then \[\sigma\left(\iota^{-1}(-\infty,x_{1}]\right)\subseteq(-\infty,f(\hat{y})]\quad\text{and}\quad\sigma\left(\iota^{-1}[x_{1},+\infty)\right)\subseteq[f(\hat{y}),+\infty).\] Applying Remark 4.23 to the pair \(x_{1},\hat{y}\), one obtains \[\tilde{\sigma}\left(\tilde{\iota}^{-1}(-\infty,x_{1}]\right)\subseteq(-\infty,f(\hat{y})]\quad\text{and}\quad\tilde{\sigma}\left(\tilde{\iota}^{-1}[x_{1},+\infty)\right)\subseteq[f(\hat{y}),+\infty)\] under suitable conditions on \(\tilde{\sigma}\) and \(\tilde{b}\) (see below). By our choice of \(\hat{y}\), since \(\tilde{\iota}^{-1}[x_{2},+\infty)=\tilde{\iota}^{-1}[x_{1},+\infty)\) and since \(\operatorname{Im}(\tilde{\sigma})\cap(f(y_{1}),f(y_{2}))=\emptyset\), Lemma 4.17 yields that this is equivalent to \(x_{1}\mapsto y_{1}\) and \(x_{2}\mapsto y_{2}\) both preserving all basic formulas. To complete the proof, we apply either Lemma 4.21, Lemma 4.22 or Remark 4.23 (the latter only if we use the reduction from two instances of Lemmas 4.21 or 4.22 to a single instance of Remark 4.23 as derived above) to each corresponding pair \(x,y\) in \(\bar{x},\bar{y}\) satisfying (d).
This yields additional sets of types 0, 1, 2, 3 and additional tuples \(\bar{\zeta}^{*},\bar{\eta}^{*}\) in \(\mathbb{Q}\), \(\bar{\zeta}^{\prime}\) in \((\mathbb{R}\setminus\operatorname{Im}(\bar{g}))\cap\mathbb{I}\), \(\bar{\zeta}^{\prime\prime}\) in \(\bar{g}(\operatorname{Dc}^{\mathbb{I}}(\tilde{\sigma}))\) and \(\bar{\eta}^{\prime},\bar{\eta}^{\prime\prime}\) in \(\operatorname{Dc}^{\mathbb{I}}(h)\) such that for all these pairs \(x,y\), the map \(x\mapsto y\) preserves all basic formulas as a finite partial map from \(\mathbb{A}(\tilde{\sigma},f,\tilde{\iota})\) to \(\mathbb{B}(\tilde{\sigma},f,\tilde{\iota})\) where \(\tilde{\iota}:=h\tilde{b}g\), whenever \(\tilde{\sigma}\) is contained in the additional sets and \(\tilde{b}\in\mathcal{G}_{\mathbb{Q}}\) satisfies \(\tilde{b}(\bar{z})=\bar{w}\), \(\tilde{b}(\bar{z}^{*})=\bar{w}^{*}\), \(\bar{\tilde{b}}(\bar{z}^{\prime})=\bar{w}^{\prime}\) as well as \(\tilde{b}(\bar{\zeta}^{*})=\bar{\eta}^{*}\), \(\bar{\tilde{b}}(\bar{\zeta}^{\prime})=\bar{\eta}^{\prime}\), \(\bar{\tilde{b}}(\bar{\zeta}^{\prime\prime})=\bar{\eta}^{\prime\prime}\). We add the additional sets to the intersection \(O\) and add the tuples \(\bar{\zeta}^{*},\bar{\eta}^{*}\) to \(\bar{z}^{*},\bar{w}^{*}\), the tuples \(\bar{\zeta}^{\prime},\bar{\eta}^{\prime}\) to \(\bar{z}^{\prime},\bar{w}^{\prime}\) and the tuples \(\bar{\zeta}^{\prime\prime},\bar{\eta}^{\prime\prime}\) to \(\bar{z}^{\prime\prime},\bar{w}^{\prime\prime}\), respectively. Note that the resulting finite partial map \(\bar{z}\mapsto\bar{w}\), \(\bar{z}^{*}\mapsto\bar{w}^{*}\), \(\bar{z}^{\prime}\mapsto\bar{w}^{\prime}\), \(\bar{z}^{\prime\prime}\mapsto\bar{w}^{\prime\prime}\) is strictly increasing: different entries of the new tuples \(\bar{\zeta}^{*},\bar{\zeta}^{\prime},\bar{\zeta}^{\prime\prime},\bar{\eta}^{\prime},\bar{\eta}^{\prime\prime}\) cannot interfere with each other since the generalised inverses \(h^{\dagger}(x)\) are pairwise distinct and since we added the elements \(\tilde{z}\) and \(\tilde{w}\) from (10) to \(\bar{z}^{*}\) and \(\bar{w}^{*}\). If \(\tilde{\sigma}\in O\) and if \(\tilde{b}\in\mathcal{G}_{\mathbb{Q}}\) satisfies \(\tilde{b}(\bar{z})=\bar{w}\), \(\tilde{b}(\bar{z}^{*})=\bar{w}^{*}\), \(\bar{\tilde{b}}(\bar{z}^{\prime})=\bar{w}^{\prime}\), \(\bar{\tilde{b}}(\bar{z}^{\prime\prime})=\bar{w}^{\prime\prime}\), then by our previous construction of \(\bar{z}^{*},\bar{z}^{\prime},\bar{w}^{*},\bar{w}^{\prime}\), the finite partial map \(x\mapsto y\) preserves all basic formulas as a map from \(\mathbb{A}(\tilde{\sigma},f,\tilde{\iota})\) to \(\mathbb{B}(\tilde{\sigma},f,\tilde{\iota})\) not only for each pair \(x,y\) with property (d) but also for each pair \(x,y\) with one of the properties (a)-(c) - thus completing the proof.

## 5. Reduction of the rich to the pointwise topology

The aim of this section is to prove Proposition 3.7. We will argue in several steps, each having the following general form:

**Notation 5.1**.: If \(\mathcal{T}_{a}\) and \(\mathcal{T}_{b}\) are topologies on \(\mathcal{M}_{\mathbb{Q}}\) with \(\mathcal{T}_{pw}\subseteq\mathcal{T}_{a},\mathcal{T}_{b}\), then \(\mathcal{T}_{a}\rightsquigarrow\mathcal{T}_{b}\) shall denote the following statement (in many - but not all! - applications of this notation, we will have \(\mathcal{T}_{b}\subseteq\mathcal{T}_{a}\)): Let \(\mathcal{T}\) be a Polish semigroup topology on \(\mathcal{M}_{\mathbb{Q}}\) such that \(\mathcal{T}_{pw}\subseteq\mathcal{T}\subseteq\mathcal{T}_{a}\).
Then \(\mathcal{T}\subseteq\mathcal{T}_{b}\).

We will require an additional auxiliary type of subsets of \(\mathcal{M}_{\mathbb{Q}}\) which encompasses type 3 (see Definition 3.4).

**Definition 5.2**.:

* \(O_{A}^{(4)}:=\{s\in\mathcal{M}_{\mathbb{Q}}:\operatorname{Im}(s)\subseteq A\}\) for \(A\subseteq\mathbb{Q}\) (restricting)

The proof will proceed along the following route: \[\mathcal{T}_{rich}=\mathcal{T}_{0123}\rightsquigarrow\mathcal{T}_{01^{cls}23^{opn}}\rightsquigarrow\mathcal{T}_{024}\rightsquigarrow\mathcal{T}_{023^{opn}}\rightsquigarrow\mathcal{T}_{03^{opn}}\rightsquigarrow\mathcal{T}_{0}=\mathcal{T}_{pw}.\]

Proof (of Proposition 3.7 given Lemmas 5.6, 5.9, 5.12, 5.20 and 5.23).: Let \(\mathcal{T}\) be a Polish semigroup topology on \(\mathcal{M}_{\mathbb{Q}}\) with \(\mathcal{T}_{pw}\subseteq\mathcal{T}\subseteq\mathcal{T}_{rich}=\mathcal{T}_{0123}\). By Lemma 5.6, we know that \(\mathcal{T}_{pw}\subseteq\mathcal{T}\subseteq\mathcal{T}_{01^{cls}23^{opn}}\). Analogously, we apply Lemmas 5.9, 5.12, 5.20 and 5.23 in sequence to finally conclude \(\mathcal{T}_{pw}\subseteq\mathcal{T}\subseteq\mathcal{T}_{pw}\), i.e. \(\mathcal{T}=\mathcal{T}_{pw}\) as claimed.

### Reductions \(\mathcal{T}_{0123}\rightsquigarrow\mathcal{T}_{01^{cls}23^{opn}}\rightsquigarrow\mathcal{T}_{024}\)

For the first two reductions, we will need to determine the image of \(\mathcal{T}_{0123}\)-basic open (and in particular \(\mathcal{T}_{01^{cls}23^{opn}}\)-basic open) sets under a suitable left translation \(\lambda_{f}\). This requires a canonical representation of basic open sets in \(\mathcal{T}_{0123}\) and \(\mathcal{T}_{01^{cls}23^{opn}}\) which will later be also applied to \(\mathcal{T}_{023^{opn}}\)-basic open sets.

**Definition 5.3**.: Let \(O\neq\emptyset\) be a \(\mathcal{T}_{0123}\)-basic open (or \(\mathcal{T}_{01^{cls}23^{opn}}\)-basic open or \(\mathcal{T}_{023^{opn}}\)-basic open) set, i.e. \[O=\bigcap_{i=1}^{n}O_{x_{i},y_{i}}^{(0)}\cap\bigcap_{j=1}^{m}O_{I_{j},J_{j}}^{(1)}\cap\bigcap_{k=1}^{\widetilde{m}}O_{\tilde{I}_{k},\tilde{J}_{k}}^{(1)}\cap O_{LU}^{(2)}\cap\bigcap_{\ell=1}^{N}O_{K_{\ell}}^{(3)} \tag{11}\] where \(I_{j}=(-\infty,p_{j})\) and \(\tilde{I}_{k}=(\tilde{p}_{k},+\infty)\) for \(p_{j},\tilde{p}_{k}\in\mathbb{Q}\).
We call the representation (11) _stratified_ if

* (S1) \(\forall i=1,\ldots,n-1\colon x_{i}<x_{i+1}\) (then automatically, \(y_{i}\leq y_{i+1}\) since \(O\neq\emptyset\))
* (S2) \(\forall j=1,\ldots,m-1\colon p_{j}<p_{j+1}\) and \(J_{j}\subseteq J_{j+1}\)
* (S3) \(\forall k=1,\ldots,\widetilde{m}-1\colon\tilde{p}_{k}<\tilde{p}_{k+1}\) and \(\tilde{J}_{k}\supseteq\tilde{J}_{k+1}\)
* (S4) \(\forall\ell=1,\ldots,N-1\colon\sup K_{\ell}\leq\inf K_{\ell+1}\) and \((\inf K_{\ell},\sup K_{\ell+1})\setminus(K_{\ell}\cup K_{\ell+1})\neq\emptyset\)
* (S5) \(\forall j=1,\ldots,m\,\forall i=1,\ldots,n\colon p_{j}\leq x_{i}\Rightarrow y_{i}\notin J_{j}\)
* (S6) \(\forall k=1,\ldots,\widetilde{m}\,\forall i=1,\ldots,n\colon\tilde{p}_{k}\geq x_{i}\Rightarrow y_{i}\notin\tilde{J}_{k}\)
* (S7) \(\forall j=1,\ldots,m\,\forall\ell=1,\ldots,N\colon(J_{j}\cap K_{\ell}\neq\emptyset\Rightarrow\exists t\in J_{j}\colon t>K_{\ell})\)
* (S8) \(\forall k=1,\ldots,\widetilde{m}\,\forall\ell=1,\ldots,N\colon(\tilde{J}_{k}\cap K_{\ell}\neq\emptyset\Rightarrow\exists t\in\tilde{J}_{k}\colon t<K_{\ell})\)

**Lemma 5.4**.: _Any \(\mathcal{T}_{0123}\)-basic open set \(O\) has a stratified representation._

_The same holds for a \(\mathcal{T}_{01^{cls}23^{opn}}\)-basic open set, where the resulting representation again consists of sets of types \(0\), \(1^{cls}\), \(2\) and \(3^{opn}\)._

_The same holds for a \(\mathcal{T}_{023^{opn}}\)-basic open set, where the resulting representation again consists of sets of types \(0\), \(2\) and \(3^{opn}\)._

Proof.: We start with any representation \[O=\bigcap_{i=1}^{n}O_{x_{i},y_{i}}^{(0)}\cap\bigcap_{j=1}^{m}O_{I_{j},J_{j}}^{(1)}\cap\bigcap_{k=1}^{\widetilde{m}}O_{\tilde{I}_{k},\tilde{J}_{k}}^{(1)}\cap O_{LU}^{(2)}\cap\bigcap_{\ell=1}^{N}O_{K_{\ell}}^{(3)}\] and turn it into a stratified one in several steps, one for each item in Definition 5.3.

**(S1).** Rearrange the \(x_{i}\) in increasing order.

**(S2).** Rearrange the \(p_{j}\) in increasing order; if \(p_{j}=p_{j+1}\), drop the larger set of \(J_{j}\) and \(J_{j+1}\). If \(J_{j}\) is not a subset of \(J_{j+1}\), then \(J_{j+1}\subseteq J_{j}\) and \(O^{(1)}_{I_{j},J_{j}}\cap O^{(1)}_{I_{j+1},J_{j+1}}\) can be replaced by \(O^{(1)}_{I_{j+1},J_{j+1}}\).

**(S3).** Analogously to (S2).

**(S4).** Rearrange the \(K_{\ell}\) by increasing order of \(\inf K_{\ell}\). If \(\sup K_{\ell}>\inf K_{\ell+1}\) or if \((\inf K_{\ell},\sup K_{\ell+1})\subseteq K_{\ell}\cup K_{\ell+1}\), then \(K_{\ell}\cup K_{\ell+1}\) is again an interval and \(O^{(3)}_{K_{\ell}}\cap O^{(3)}_{K_{\ell+1}}\) can be replaced by \(O^{(3)}_{K_{\ell}\cup K_{\ell+1}}\).

**(S5).** If \(p_{j}\leq x_{i}\) and \(y_{i}\in J_{j}\), then \(O^{(0)}_{x_{i},y_{i}}\cap O^{(1)}_{I_{j},J_{j}}\) can be replaced by \(O^{(0)}_{x_{i},y_{i}}\).

**(S6).** Analogously to (S5).

**(S7).** If \(J_{j}\cap K_{\ell}\neq\emptyset\) but no element of \(J_{j}\) is greater than \(K_{\ell}\), then \(J_{j}\setminus K_{\ell}\) is again a rational interval and \(O^{(1)}_{I_{j},J_{j}}\cap O^{(3)}_{K_{\ell}}\) can be replaced by \(O^{(1)}_{I_{j},J_{j}\setminus K_{\ell}}\cap O^{(3)}_{K_{\ell}}\).

**(S8).** Analogously to (S7).

If all intervals \(J_{j}\) and \(\tilde{J}_{k}\) are closed and all intervals \(K_{\ell}\) are open, then so are the respective intervals in the representation we obtain after going through (S1)-(S8), proving the second statement.
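The merging steps (S1)-(S8) are purely combinatorial and easy to mechanise. As an illustration, here is a minimal sketch (our own, not part of the paper; integers stand in for rationals) of the two merging steps (S2) and (S4) for the simplest constraint shapes: a pair \((p,q)\) encodes the type-\(1\) condition "map \((-\infty,p)\) into \((-\infty,q]\)", and a pair \((u,v)\) encodes the type-\(3\) condition "avoid the open interval \((u,v)\)".

```python
def merge_s2(type1):
    """(S2)-style merging: sort by threshold p; for equal p keep the smaller
    target ray; drop any constraint implied by a later, stronger one
    (J_{j+1} contained in J_j)."""
    out = []
    for p, q in sorted(type1):
        if out and out[-1][0] == p:
            out[-1] = (p, min(out[-1][1], q))  # "drop the larger set"
        else:
            out.append((p, q))
    kept = []
    for p, q in reversed(out):
        if kept and kept[-1][1] <= q:
            continue  # implied by a later constraint with a smaller target
        kept.append((p, q))
    return list(reversed(kept))

def merge_s4(type3):
    """(S4)-style merging of avoided intervals; for simplicity we merge only
    on genuine overlap (sup K_l > inf K_{l+1}), ignoring the touching case."""
    merged = []
    for u, v in sorted(type3):
        if merged and u < merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], v))
        else:
            merged.append((u, v))
    return merged

print(merge_s2([(0, 5), (0, 3), (1, 2)]))  # [(1, 2)]
print(merge_s4([(0, 2), (1, 3), (5, 6)]))  # [(0, 3), (5, 6)]
```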
If additionally no sets of type \(1^{cls}\) occur, then the representation will only contain sets of types \(0\), \(2\) and \(3^{opn}\) since the above procedure never generates sets of type 1 if there are none in the original representation.

Now we can provide the result about images of basic open sets under certain left translations (which can be seen as a generalisation of Lemma 2.9). Its proof is somewhat technical, but the main idea is very straightforward: if \(f,s^{\prime}\in\mathcal{M}_{\mathbb{Q}}\) and if \(s^{\prime}\) satisfies \(s^{\prime}(-\infty,p)\subseteq(-\infty,q)\), then one might be led to believe that necessarily \(fs^{\prime}(-\infty,p)\subseteq(-\infty,f(q))\). However, in general only the conclusion \(fs^{\prime}(-\infty,p)\subseteq(-\infty,f(q)]\) is true; the value \(f(q)\) can actually be attained, namely if the preimage \(f^{-1}\{f(q)\}\) contains not only \(q\) but also elements less than \(q\). This can be ensured by requiring that \(f^{-1}\{f(q)\}\) is an irrational interval. Indeed, if \(f\) is also surjective, then _any_ \(s\) with \(s(-\infty,p)\subseteq(-\infty,f(q)]\) can be rewritten as \(s=fs^{\prime}\) where \(s^{\prime}(-\infty,p)\subseteq(-\infty,q)\). An analogous fact holds for sets of type 3 - if \(s^{\prime}\) avoids \([u,v]\), then \(fs^{\prime}\) in general only avoids \((f(u),f(v))\) and, conversely, if \(f\) is surjective with \(f^{-1}\{f(u)\}\), \(f^{-1}\{f(v)\}\) irrational intervals and if \(s\) avoids \((f(u),f(v))\), then \(s\) can be rewritten as \(s=fs^{\prime}\) where \(s^{\prime}\) avoids \([u,v]\). Combining these facts for the building blocks of (stratified representations of) basic open sets requires thorough bookkeeping.

**Lemma 5.5**.: _Let \(O\neq\emptyset\) be a nonempty \(\mathcal{T}_{0123}\)-basic open set with stratified representation_ \[O=\bigcap_{i=1}^{n}O^{(0)}_{x_{i},y_{i}}\cap\bigcap_{j=1}^{m}O^{(1)}_{(-\infty,p_{j}),J_{j}}\cap\bigcap_{k=1}^{\widetilde{m}}O^{(1)}_{(\tilde{p}_{k},+\infty),\tilde{J}_{k}}\cap O^{(2)}_{LU}\cap\bigcap_{\ell=1}^{N}O^{(3)}_{K_{\ell}}.\] _Define_ \[q_{j}:=\sup J_{j},\quad\tilde{q}_{k}:=\inf\tilde{J}_{k},\quad u_{\ell}:=\inf K_{\ell}\quad\text{and}\quad v_{\ell}:=\sup K_{\ell}.\] _Let further \(f\in\mathcal{M}_{\mathbb{Q}}\) be unbounded-unbounded such that for all \(w\in\operatorname{Im}(f)\), the preimage \(f^{-1}\{w\}\) is an irrational interval. Then (putting \(f(\pm\infty):=\pm\infty\)) we have_ \[\lambda_{f}(O)=\{s:\operatorname{Im}(s)\subseteq\operatorname{Im}(f)\}\cap\bigcap_{i=1}^{n}O^{(0)}_{x_{i},f(y_{i})}\cap\\ \bigcap_{j=1}^{m}O^{(1)}_{(-\infty,p_{j}),(-\infty,f(q_{j})]}\cap\bigcap_{k=1}^{\widetilde{m}}O^{(1)}_{(\tilde{p}_{k},+\infty),[f(\tilde{q}_{k}),+\infty)}\cap O^{(2)}_{LU}\cap\bigcap_{\ell=1}^{N}O^{(3)}_{(f(u_{\ell}),f(v_{\ell}))}.\]

Proof.: The inclusion "\(\subseteq\)" is immediate, so we deal only with "\(\supseteq\)". Take \(s\in\mathcal{M}_{\mathbb{Q}}\) such that \(\operatorname{Im}(s)\subseteq\operatorname{Im}(f)\) and \[s\in\bigcap_{i=1}^{n}O^{(0)}_{x_{i},f(y_{i})}\cap\bigcap_{j=1}^{m}O^{(1)}_{(-\infty,p_{j}),(-\infty,f(q_{j})]}\cap\bigcap_{k=1}^{\widetilde{m}}O^{(1)}_{(\tilde{p}_{k},+\infty),[f(\tilde{q}_{k}),+\infty)}\cap O^{(2)}_{LU}\cap\bigcap_{\ell=1}^{N}O^{(3)}_{(f(u_{\ell}),f(v_{\ell}))}. \tag{12}\] We want to find \(s^{\prime}\in O\) such that \(s=fs^{\prime}\). The latter statement is equivalent to \(s^{\prime}(s^{-1}\{w\})\subseteq f^{-1}\{w\}\) for all \(w\in\operatorname{Im}(s)\).
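As a toy illustration of this fiberwise reformulation (a sketch of our own, not the paper's construction; finite dictionaries stand in for the maps), factoring \(s=fs^{\prime}\) amounts to sending each \(s\)-fiber \(s^{-1}\{w\}\) into the corresponding \(f\)-fiber \(f^{-1}\{w\}\) by any increasing choice:

```python
from fractions import Fraction as Q

# Toy sketch of "s = f∘s'  iff  s'(s^{-1}{w}) ⊆ f^{-1}{w} for all w in Im(s)".
# All encodings are ours: f is given by its fibers (value w -> sorted list of
# rationals), s is a dict on a finite set of rationals.

def factor_through(s, f_fibers):
    """Return one increasing s' with s = f∘s', assuming Im(s) ⊆ Im(f) and
    that the f-fibers are ordered blocks (as for an increasing f)."""
    s_prime, last = {}, None
    for x in sorted(s):
        fiber = f_fibers[s[x]]
        # any fiber element >= the previous choice keeps s' increasing
        s_prime[x] = next(t for t in fiber if last is None or t >= last)
        last = s_prime[x]
    return s_prime

f_fibers = {0: [Q(0), Q(1, 2)], 1: [Q(1), Q(3, 2)]}   # f^{-1}{0}, f^{-1}{1}
s = {Q(0): 0, Q(1): 0, Q(2): 1}
print(factor_through(s, f_fibers))  # one valid s': 0 -> 0, 1 -> 0, 2 -> 1
```

Any other increasing choice of fiber elements would do equally well; this freedom is exactly what the proof exploits below to additionally satisfy (iii')-(v').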
Since \(\operatorname{Im}(s)\subseteq\operatorname{Im}(f)\), we have \(f^{-1}\{w\}\neq\emptyset\) for all \(w\in\operatorname{Im}(s)\). Note that if one takes \(s^{\prime}|_{s^{-1}\{w\}}\) to be an increasing map \(s^{-1}\{w\}\to f^{-1}\{w\}\) independently for each \(w\in\operatorname{Im}(s)\), their union will be increasing as well since \(s^{-1}\{w_{1}\}<s^{-1}\{w_{2}\}\) and \(f^{-1}\{w_{1}\}<f^{-1}\{w_{2}\}\) for all \(w_{1}<w_{2}\). Additionally requiring \(s^{\prime}\in O\) amounts to the following properties:

(i) \(\forall i=1,\dots,n\colon s^{\prime}(x_{i})=y_{i}\)

(ii) \(s^{\prime}\in O^{(2)}_{LU}\)

(iii) \(\forall j=1,\dots,m\,\forall w\in\operatorname{Im}(s)\colon s^{\prime}\left(s^{-1}\{w\}\cap(-\infty,p_{j})\right)\subseteq J_{j}\cap f^{-1}\{w\}\)

(iv) \(\forall k=1,\dots,\widetilde{m}\,\forall w\in\operatorname{Im}(s)\colon s^{\prime}\left(s^{-1}\{w\}\cap(\tilde{p}_{k},+\infty)\right)\subseteq\tilde{J}_{k}\cap f^{-1}\{w\}\)

(v) \(\forall\ell=1,\dots,N\,\forall w\in\operatorname{Im}(s)\colon s^{\prime}\left(s^{-1}\{w\}\right)\cap K_{\ell}=\emptyset\)

To simplify the proof, we replace (i) by:

(vi) \(\forall i=1,\dots,n\,\forall w\in\operatorname{Im}(s)\colon s^{\prime}\left(s^{-1}\{w\}\cap(-\infty,x_{i})\right)\subseteq(-\infty,y_{i}]\cap f^{-1}\{w\}\)

(vii) \(\forall i=1,\dots,n\,\forall w\in\operatorname{Im}(s)\colon s^{\prime}\left(s^{-1}\{w\}\cap(x_{i},+\infty)\right)\subseteq[y_{i},+\infty)\cap f^{-1}\{w\}\)

If we find \(s^{\prime}\) satisfying (ii)-(vii), then we can redefine \(s^{\prime}(x_{i}):=y_{i}\) to obtain \(s^{\prime}\in O\) - by (vi) and (vii), the resulting map will still be an element of \(\mathcal{M}_{\mathbb{Q}}\); and since \(O\neq\emptyset\), mapping \(x_{i}\mapsto y_{i}\) cannot contradict (ii)-(v). As a first step, we show that the statements in (ii)-(vii) are already implied by \(s^{\prime}(s^{-1}\{w\})\subseteq f^{-1}\{w\}\) for many values \(w\) (and are therefore automatically satisfied). If \(w<f(q_{j})\), then \(f^{-1}\{w\}\subseteq J_{j}\), so (iii) automatically holds for \(w<f(q_{j})\) and (vi) for \(w<f(y_{i})\); a dual argument yields (iv) for \(w>f(\tilde{q}_{k})\) and (vii) for \(w>f(y_{i})\). Finally, if \(w<f(u_{\ell})\) or \(w>f(v_{\ell})\), then (v) is automatically satisfied as well since \(f^{-1}\{w\}\cap K_{\ell}=\emptyset\). Using (12), we obtain that (ii)-(vii) hold for many more values \(w\). For instance, (iii) holds for \(w>f(q_{j})\): since \(s(-\infty,p_{j})\subseteq(-\infty,f(q_{j})]\), we have \(s^{-1}\{w\}\cap(-\infty,p_{j})=\emptyset\). Similarly, (vi) holds for \(w>f(y_{i})\), (iv) holds for \(w<f(\tilde{q}_{k})\) and (vii) holds for \(w<f(y_{i})\). In (v), we do not have to consider \(f(u_{\ell})<w<f(v_{\ell})\) since \(\operatorname{Im}(s)\cap(f(u_{\ell}),f(v_{\ell}))=\emptyset\). Finally, since \(f\) is unbounded-unbounded, any function \(s^{\prime}\) with \(s=fs^{\prime}\) has the same boundedness type as \(s\), i.e. (ii) is automatically satisfied as well. Collecting the previous arguments and additionally reformulating (v), it suffices to ascertain the following properties (instead of (ii)-(vii)):

(iii') \(\forall j=1,\dots,m\colon s^{\prime}\left(s^{-1}\{f(q_{j})\}\cap(-\infty,p_{j})\right)\subseteq J_{j}\cap f^{-1}\{f(q_{j})\}\)

(iv') \(\forall k=1,\dots,\widetilde{m}\colon s^{\prime}\left(s^{-1}\{f(\tilde{q}_{k})\}\cap(\tilde{p}_{k},+\infty)\right)\subseteq\tilde{J}_{k}\cap f^{-1}\{f(\tilde{q}_{k})\}\)

(v')
\(\forall\ell=1,\dots,N\colon s^{\prime}\left(s^{-1}\{f(u_{\ell})\}\right)\subseteq f^{-1}\{f(u_{\ell})\}\setminus K_{\ell}\) and \(s^{\prime}\left(s^{-1}\{f(v_{\ell})\}\right)\subseteq f^{-1}\{f(v_{\ell})\}\setminus K_{\ell}\)

(vi') \(\forall i=1,\dots,n\colon s^{\prime}\left(s^{-1}\{f(y_{i})\}\cap(-\infty,x_{i})\right)\subseteq(-\infty,y_{i}]\cap f^{-1}\{f(y_{i})\}\)

(vii') \(\forall i=1,\dots,n\colon s^{\prime}\left(s^{-1}\{f(y_{i})\}\cap(x_{i},+\infty)\right)\subseteq[y_{i},+\infty)\cap f^{-1}\{f(y_{i})\}\)

We replace \(O\) by \[\bigcap_{i=1}^{n}O^{(1)}_{(-\infty,x_{i}),(-\infty,y_{i}]}\cap O^{(1)}_{(x_{i},+\infty),[y_{i},+\infty)}\cap\bigcap_{j=1}^{m}O^{(1)}_{(-\infty,p_{j}),J_{j}}\cap\bigcap_{k=1}^{\widetilde{m}}O^{(1)}_{(\tilde{p}_{k},+\infty),\tilde{J}_{k}}\cap O^{(2)}_{LU}\cap\bigcap_{\ell=1}^{N}O^{(3)}_{K_{\ell}}, \tag{13}\] observing that this representation is still stratified (up to adding the elements \(x_{i}\) to \(\{p_{1},\dots,p_{m}\}\) as well as \(\{\tilde{p}_{1},\dots,\tilde{p}_{\widetilde{m}}\}\) and rearranging). Since we have \[s\in\big{\{}s^{\prime}:\operatorname{Im}(s^{\prime})\subseteq\operatorname{Im}(f)\big{\}}\cap\bigcap_{i=1}^{n}O^{(1)}_{(-\infty,x_{i}),(-\infty,f(y_{i})]}\cap O^{(1)}_{(x_{i},+\infty),[f(y_{i}),+\infty)}\cap\\ \bigcap_{j=1}^{m}O^{(1)}_{(-\infty,p_{j}),(-\infty,f(q_{j})]}\cap\bigcap_{k=1}^{\widetilde{m}}O^{(1)}_{(\tilde{p}_{k},+\infty),[f(\tilde{q}_{k}),+\infty)}\cap O^{(2)}_{LU}\cap\bigcap_{\ell=1}^{N}O^{(3)}_{(f(u_{\ell}),f(v_{\ell}))},\] we can, without loss of generality, subsume (vi') and (vii') in (iii') and (iv') so that we only have to deal with (iii')-(v'). If we can find an increasing map satisfying (iii')-(v'), then any extension \(s^{\prime}\) of that map satisfying \(s^{\prime}(s^{-1}\{w\})\subseteq f^{-1}\{w\}\) for \(w\neq f(q_{j}),f(\tilde{q}_{k}),f(u_{\ell}),f(v_{\ell})\) (\(j=1,\ldots,m\); \(k=1,\ldots,\widetilde{m}\); \(\ell=1,\ldots,N\)) will be an element of \(\mathcal{M}_{\mathbb{Q}}\) for which (ii)-(vii) hold - thus completing the proof. Since the \(f\)-preimages of single elements are assumed to be irrational intervals, the right hand sides in (iii')-(v') are nonempty (this is the first half of the main observation behind the lemma!): the preimage \(f^{-1}\{f(q_{j})\}\) in (iii') is an irrational interval which contains \(q_{j}\), so \(q_{j}\) must be contained in the interior of \(f^{-1}\{f(q_{j})\}\); we conclude \(J_{j}\cap f^{-1}\{f(q_{j})\}\neq\emptyset\). For the other items, we argue analogously, noting in (v') that \(u_{\ell}\) and \(v_{\ell}\) are limit points not only of \(K_{\ell}\) but also of \(\mathbb{Q}\setminus K_{\ell}\).

In the remainder of the proof, we will use that the representation of \(O\) is stratified to show that combinations of (iii')-(v') are not contradictory, either. This could only happen if they are making statements about the same \(s\)-preimage, i.e. if the \(f\)-images of some of the points \(q_{j},\tilde{q}_{k},u_{\ell},v_{\ell}\) coincide. For each \[w\in\{f(q_{1}),\ldots,f(q_{m}),f(\tilde{q}_{1}),\ldots,f(\tilde{q}_{\widetilde{m}}),f(u_{1}),\ldots,f(u_{N}),f(v_{1}),\ldots,f(v_{N})\},\] we will define \(s^{\prime}\) on \(s^{-1}\{w\}\). We first show that we can find an image \(s^{\prime}(z)\) satisfying (iii')-(v') for each individual \(z\in s^{-1}\{w\}\). If \(f(q_{j})=f(q_{j+1})\), then \(J_{j}\cap f^{-1}\{f(q_{j})\}\subseteq J_{j+1}\cap f^{-1}\{f(q_{j+1})\}\) by (S2).
Therefore, it suffices to consider the _least_ \(j\) such that \(z\in(-\infty,p_{j})\) (if such a \(j\) exists) and fulfil \(s^{\prime}(z)\in J_{j}\) - the other conditions of the same type will then be automatically satisfied. Analogously, it is enough to consider the _greatest_ \(k\) such that \(z\in(\tilde{p}_{k},+\infty)\) (if it exists). For given \(z\), we can thus reduce (iii') and (iv') to a _single_ condition of the respective types (if they occur at all). Therefore, we need to map \(z\) to the intersection of \(f^{-1}\{w\}\) and a combination of \(J_{j}\) and \(\tilde{J}_{k}\) and \(\bigcap_{\ell=\ell_{1}}^{\ell_{2}}\mathbb{Q}\setminus K_{\ell}\) which respectively occur if \(z\in(-\infty,p_{j})\wedge f(q_{j})=w\) and \(z\in(\tilde{p}_{k},+\infty)\wedge f(\tilde{q}_{k})=w\) and \(f(v_{\ell_{1}})=f(u_{\ell_{1}+1})=f(v_{\ell_{1}+1})=\ldots=f(u_{\ell_{2}})=w\) (and possibly \(f(u_{\ell_{1}})=w\) or \(f(v_{\ell_{2}})=w\) as well - we do not know whether the chain \(f^{-1}\{w\}\cap(\{u_{\ell}:\ell=1,\ldots,N\}\cup\{v_{\ell}:\ell=1,\ldots,N\})\) begins and ends with an element \(u_{*}\) or \(v_{*}\)!). We distinguish cases by the types of sets actually occurring and show that this intersection is always nonempty (this is the second half of the main observation behind the lemma!):

* We have already argued that \(f^{-1}\{w\}\cap J_{j}\neq\emptyset\), \(f^{-1}\{w\}\cap\tilde{J}_{k}\neq\emptyset\) and \(f^{-1}\{w\}\cap\mathbb{Q}\setminus K_{\ell}\neq\emptyset\).
* If the sets \(\mathbb{Q}\setminus K_{\ell}\) occur for \(\ell=\ell_{1},\ldots,\ell_{2}\), then \(f^{-1}\{w\}\cap\bigcap_{\ell=\ell_{1}}^{\ell_{2}}\mathbb{Q}\setminus K_{\ell}\neq\emptyset\), since \(f^{-1}\{w\}\subseteq\bigcup_{\ell=\ell_{1}}^{\ell_{2}}K_{\ell}\) combined with (S4) would yield \(f^{-1}\{w\}\subseteq K_{\ell}\) for some \(\ell\), contradicting the previous item.
* If both \(J_{j}\) and \(\tilde{J}_{k}\) occur, then \(J_{j}\cap\tilde{J}_{k}\neq\emptyset\) since \(s(z)\in J_{j}\cap\tilde{J}_{k}\) by \(s\in O\). Therefore, \(\tilde{q}_{k}\leq q_{j}\) and \(\sup(J_{j}\cap\tilde{J}_{k})=q_{j}\) as well as \(\inf(J_{j}\cap\tilde{J}_{k})=\tilde{q}_{k}\). We know that both \(q_{j}\) and \(\tilde{q}_{k}\) are contained in the interval \(f^{-1}\{w\}\), whence \(f^{-1}\{w\}\cap J_{j}\cap\tilde{J}_{k}=J_{j}\cap\tilde{J}_{k}\neq\emptyset\).
* If \(J_{j}\) and \(\bigcap_{\ell=\ell_{1}}^{\ell_{2}}\mathbb{Q}\setminus K_{\ell}\) occur, then \(f^{-1}\{w\}\cap J_{j}\cap\bigcap_{\ell=\ell_{1}}^{\ell_{2}}\mathbb{Q}\setminus K_{\ell}=\emptyset\) would imply \(f^{-1}\{w\}\cap J_{j}\subseteq K_{\ell}\) for some \(\ell\), again via (S4). We pick any \(r\in f^{-1}\{w\}\cap J_{j}\neq\emptyset\), and thus \(r\in K_{\ell}\). By (S7), there exists \(t\in J_{j}\) such that \(t>K_{\ell}\). In particular, \(t\in[r,q_{j}]\subseteq f^{-1}\{w\}\) which yields the contradiction \(t\in f^{-1}\{w\}\cap J_{j}\) but \(t\notin K_{\ell}\).
* If \(\tilde{J}_{k}\) and \(\bigcap_{\ell=\ell_{1}}^{\ell_{2}}\mathbb{Q}\setminus K_{\ell}\) occur, one argues analogously.
* If both \(J_{j}\) and \(\tilde{J}_{k}\) as well as \(\bigcap_{\ell=\ell_{1}}^{\ell_{2}}\mathbb{Q}\setminus K_{\ell}\) occur, we again derive a contradiction from \(f^{-1}\{w\}\cap J_{j}\cap\tilde{J}_{k}\cap\bigcap_{\ell=\ell_{1}}^{\ell_{2}}\mathbb{Q}\setminus K_{\ell}=\emptyset\). As in the previous cases, (S4) yields \(\emptyset\neq f^{-1}\{w\}\cap J_{j}\cap\tilde{J}_{k}\subseteq K_{\ell}\) for some \(\ell\). By (S7), there exists \(t\in J_{j}\) such that \(t>K_{\ell}\).
Increasing \(t\) if necessary, one obtains the contradiction \(t\in J_{j}\cap\tilde{J}_{k}=f^{-1}\{w\}\cap J_{j}\cap\tilde{J}_{k}\) but \(t\notin K_{\ell}\).

Finally, we combine our arguments for each individual \(z\in s^{-1}\{w\}\) to a definition of \(s^{\prime}\) on the whole of \(s^{-1}\{w\}\). We define an equivalence relation \(\sim\) on \(s^{-1}\{w\}\) by putting \(z\sim z^{\prime}\) if and only if \(z\) and \(z^{\prime}\) are contained in the same intervals of the shapes \((-\infty,p_{j})\) and \((\tilde{p}_{k},+\infty)\). Clearly, there are only finitely many \(\sim\)-equivalence classes. Let \(z_{1},\ldots,z_{M}\) be a system of representatives of these equivalence classes which we assume to be arranged in increasing order. By the previous part of our proof, we can pick images \(s^{\prime}(z_{1}),\ldots,s^{\prime}(z_{M})\in f^{-1}\{w\}\) such that (iii')-(v') hold, where \(s^{\prime}(z_{1}),\ldots,s^{\prime}(z_{M})\) are in increasing order - the latter is possible by (S2) and (S3). Defining \(s^{\prime}\) on the equivalence class represented by \(z_{h}\) to be the constant function with value \(s^{\prime}(z_{h})\), we obtain an increasing function \(s^{\prime}\colon s^{-1}\{w\}\to f^{-1}\{w\}\) such that (iii')-(v') hold.

Now we can prove the first two reductions.

**Lemma 5.6**.: _It holds that \(\mathcal{T}_{0123}\rightsquigarrow\mathcal{T}_{01^{cls}23^{opn}}\)._

Proof.: Let \(O\in\mathcal{T}\subseteq\mathcal{T}_{0123}\). We show that \(O\) is a \(\mathcal{T}_{01^{cls}23^{opn}}\)-neighbourhood of every element of \(O\). Take \(s\in O\) and, using Lemma 4.5, pick a generic surjection \(f\). Since \(\operatorname{Im}(s)\subseteq\mathbb{Q}=\operatorname{Im}(f)\), there exists \(s^{\prime}\in\mathcal{M}_{\mathbb{Q}}\) such that \(s=fs^{\prime}=\lambda_{f}(s^{\prime})\) by Lemma 2.9(i). Therefore, \(s^{\prime}\in\lambda_{f}^{-1}(O)\) where this set is \(\mathcal{T}\)-open by continuity of \(\lambda_{f}\). In particular, \(\lambda_{f}^{-1}(O)\in\mathcal{T}_{0123}\), so there exists \[O^{\prime}=O_{\bar{x},\bar{y}}^{(0)}\cap\bigcap_{j=1}^{m}O_{I_{j},J_{j}}^{(1)}\cap\bigcap_{k=1}^{\widetilde{m}}O_{\tilde{I}_{k},\tilde{J}_{k}}^{(1)}\cap O_{LU}^{(2)}\cap\bigcap_{\ell=1}^{N}O_{K_{\ell}}^{(3)} \tag{14}\] such that \(s^{\prime}\in O^{\prime}\subseteq\lambda_{f}^{-1}(O)\). We conclude \(s=\lambda_{f}(s^{\prime})\in\lambda_{f}(O^{\prime})\subseteq O\). By Lemma 5.4, we can assume the representation (14) to be stratified. Since \(\operatorname{Im}(f)=\mathbb{Q}\), Lemma 5.5 asserts that \(\lambda_{f}(O^{\prime})\) is a \(\mathcal{T}_{01^{cls}23^{opn}}\)-basic open set, so \(O\) is indeed a \(\mathcal{T}_{01^{cls}23^{opn}}\)-neighbourhood of \(s\).

_Remark 5.7_.: We can reformulate the proof of Lemma 5.6 as follows: We show that \((\mathcal{M}_{\mathbb{Q}},\mathcal{T}_{01^{cls}23^{opn}})\) has Property \(\mathbf{X}\) with respect to \((\mathcal{M}_{\mathbb{Q}},\mathcal{T}_{0123})\), using the decomposition \(s=fs^{\prime}\operatorname{id}_{\mathbb{Q}}\) for a fixed generic surjection \(f\), the fixed map \(\operatorname{id}_{\mathbb{Q}}\) and varying \(s^{\prime}\).
Applying Proposition 3.3(i) to the map \(\operatorname{id}\colon(\mathcal{M}_{\mathbb{Q}},\mathcal{T}_{0123})\to(\mathcal{M}_{\mathbb{Q}},\mathcal{T})\) - which is continuous since \(\mathcal{T}\subseteq\mathcal{T}_{0123}\), note also that \((\mathcal{M}_{\mathbb{Q}},\mathcal{T})\) is a topological semigroup - yields the continuity of \(\operatorname{id}\colon(\mathcal{M}_{\mathbb{Q}},\mathcal{T}_{01^{cls}23^{opn}})\to(\mathcal{M}_{\mathbb{Q}},\mathcal{T})\), so \(\mathcal{T}\subseteq\mathcal{T}_{01^{cls}23^{opn}}\).

The second reduction is a slightly more involved application of Lemma 5.5, picking both \(f\) and \(s^{\prime}\) in a more thoughtful way (tuned to the specific \(s\) being considered) by the following construction.

**Lemma 5.8**.: _Let \(s,f\in\mathcal{M}_{\mathbb{Q}}\) with \(\operatorname{Im}(f)=(-\infty,\inf s)\cup\operatorname{Im}(s)\cup(\sup s,+\infty)\) and such that the preimages \(f^{-1}\{w\}\) are irrational intervals, i.e. \(f^{-1}\{w\}=(r_{w},t_{w})\) for all \(w\in\operatorname{Im}(f)\), where \(r_{w},t_{w}\in\mathbb{I}\). Then there exists \(s^{\prime}\in\mathcal{M}_{\mathbb{Q}}\) such that \(s=fs^{\prime}\) and the following hold for all \(p\in\mathbb{Q}\):_

1. _If_ \(\sup s^{\prime}(-\infty,p)<s^{\prime}(p)\) _then_ \(\sup s^{\prime}(-\infty,p)=r_{s(p)}\)_._
2. _If_ \(\inf s^{\prime}(p,+\infty)>s^{\prime}(p)\) _then_ \(\inf s^{\prime}(p,+\infty)=t_{s(p)}\)_._

Proof.: Defining \(s^{\prime}\) as the union of order isomorphisms between \(s^{-1}\{w\}\) and either \([z,z^{\prime}]\) or \((r_{w},z^{\prime}]\) or \([z,t_{w})\) or \((r_{w},t_{w})\) where \(z\) and \(z^{\prime}\) are fixed elements of \(f^{-1}\{w\}\) - depending on the order type of \(s^{-1}\{w\}\) - we obtain a map with the following properties:

1. \(s=fs^{\prime}\)
2. \(\forall w\in\operatorname{Im}(s)\colon\big{(}s^{-1}\{w\}\text{ has no greatest element }\Rightarrow\sup s^{\prime}(s^{-1}\{w\})=t_{w}\big{)}\) and \(\forall w\in\operatorname{Im}(s)\colon\big{(}s^{-1}\{w\}\text{ has no least element }\Rightarrow\inf s^{\prime}(s^{-1}\{w\})=r_{w}\big{)}\) (in other words: \(s^{\prime}\) exhausts \(f^{-1}\{w\}\) whenever possible)
3. \(\forall w\in\operatorname{Im}(s)\colon s^{\prime}|_{s^{-1}\{w\}}\) is continuous (note: if \(s^{-1}\{w\}\) has e.g. a greatest element, this does _not_ mean that \(s^{\prime}\) is continuous at that point but rather that \(s^{\prime}\) is left-continuous there)

We only show the first assertion of the lemma; the second follows analogously. Assuming \(\sup s^{\prime}(-\infty,p)<s^{\prime}(p)\), we distinguish two cases:

_Case 1_ (\(s(-\infty,p)\) has a greatest element): We set \(w:=\max s(-\infty,p)\). Then there exists \(p_{0}<p\) such that \(s|_{(p_{0},p)}\equiv w\). Observe first that \(s(p)>w\), i.e. \(p\) is the supremum of \(s^{-1}\{w\}\) but not a greatest element - for otherwise \((p_{0},p]\subseteq s^{-1}\{w\}\), so 3 would yield \(\sup s^{\prime}(-\infty,p)=s^{\prime}(p)\). By 2, we have \(\sup s^{\prime}(s^{-1}\{w\})=t_{w}\). Since \(\sup s^{\prime}(-\infty,p)=\sup s^{\prime}(s^{-1}\{w\})\), it remains to show \(t_{w}=r_{s(p)}\), equivalently \((w,s(p))\cap\operatorname{Im}(f)=\emptyset\). It suffices to note that \((w,s(p))\cap\operatorname{Im}(s)=\emptyset\) and that \(\operatorname{Im}(f)\setminus\operatorname{Im}(s)\) and the convex hull of \(\operatorname{Im}(s)\) are disjoint by choice of \(\operatorname{Im}(f)\).
_Case 2_ (\(s(-\infty,p)\) does not have a greatest element): For each \(p^{\prime}<p\), there exists \(p^{\prime\prime}\) such that \(p^{\prime}<p^{\prime\prime}<p\) and \(s(p^{\prime})<s(p^{\prime\prime})\leq s(p)\). We have \(s^{\prime}(p^{\prime\prime})\in f^{-1}\{s(p^{\prime\prime})\}\) and thus \[\sup s^{\prime}(-\infty,p)\geq s^{\prime}(p^{\prime\prime})\geq\inf f^{-1}\{s(p^{\prime\prime})\}=r_{s(p^{\prime\prime})}\geq t_{s(p^{\prime})}.\] Hence, \(\sup s^{\prime}(-\infty,p)\geq\sup_{p^{\prime}<p}t_{s(p^{\prime})}\). We claim that \(\sup_{p^{\prime}<p}t_{s(p^{\prime})}\geq r_{s(p)}\). The opposite would yield \(f(q)\in\big{(}\sup s(-\infty,p),s(p)\big{)}\) for any \(q\in\big{(}\sup_{p^{\prime}<p}t_{s(p^{\prime})},r_{s(p)}\big{)}\). However, \(\big{(}\sup s(-\infty,p),s(p)\big{)}\cap\operatorname{Im}(f)=\emptyset\) with the same reasoning as in Case 1. On the other hand, \(s^{\prime}(p^{\prime})\leq t_{s(p^{\prime})}<r_{s(p)}\) for each \(p^{\prime}<p\) since \(s(p^{\prime})<s(p)\), so \(\sup s^{\prime}(-\infty,p)\leq r_{s(p)}\).

For our reduction, we take into account the following two observations: on the one hand, if \(s^{\prime}\in O^{(1)}_{(-\infty,p),(-\infty,q]}\) and \(s^{\prime}(p)\leq q\), then \(O^{(1)}_{(-\infty,p),(-\infty,q]}\) can be replaced by \(O^{(0)}_{p,s^{\prime}(p)}\); compare with (S5). On the other hand, if \(r:=\sup s^{\prime}(-\infty,p)\in\mathbb{I}\), then no set of the form \(O^{(1)}_{(-\infty,p),(-\infty,q]}\) with \(q\in\mathbb{Q}\) (!) containing \(s^{\prime}\) can prohibit that \(\sup\tilde{s}^{\prime}(-\infty,p)>r\) for some \(\tilde{s}^{\prime}\in O^{(1)}_{(-\infty,p),(-\infty,q]}\).

**Lemma 5.9**.: _It holds that \(\mathcal{T}_{01^{cls}23^{opn}}\rightsquigarrow\mathcal{T}_{024}\)._

Proof.: Let \(O\in\mathcal{T}\subseteq\mathcal{T}_{01^{cls}23^{opn}}\). We show that \(O\) is a \(\mathcal{T}_{024}\)-neighbourhood of every element of \(O\). Take \(s\in O\) and, using Lemma 4.5, pick \(f\in\mathcal{M}_{\mathbb{Q}}\) such that \[\operatorname{Im}(f)=(-\infty,\inf s)\cup\operatorname{Im}(s)\cup(\sup s,+\infty)\] and all the preimages \(f^{-1}\{w\}\) are irrational intervals, i.e. \(f^{-1}\{w\}=(r_{w},t_{w})\) for all \(w\in\operatorname{Im}(f)\), where \(r_{w},t_{w}\in\mathbb{I}\) (note that \(r_{w}=-\infty\) or \(t_{w}=+\infty\) is impossible since \(f\) is unbounded-unbounded). By Lemma 5.8, there exists \(s^{\prime}\in\mathcal{M}_{\mathbb{Q}}\) satisfying \(s=fs^{\prime}\) and the following for all \(p\in\mathbb{Q}\):

1. If \(\sup s^{\prime}(-\infty,p)<s^{\prime}(p)\) then \(\sup s^{\prime}(-\infty,p)=r_{s(p)}\).
2. If \(\inf s^{\prime}(p,+\infty)>s^{\prime}(p)\) then \(\inf s^{\prime}(p,+\infty)=t_{s(p)}\).

Similarly to the proof of Lemma 5.6, we use \(s=fs^{\prime}=\lambda_{f}(s^{\prime})\), the \(\mathcal{T}\)-continuity of \(\lambda_{f}\) and the assumption \(\mathcal{T}\subseteq\mathcal{T}_{01^{cls}23^{opn}}\) to obtain a \(\mathcal{T}_{01^{cls}23^{opn}}\)-basic open set \[O^{\prime}=O^{(0)}_{\bar{x},\bar{y}}\cap\bigcap_{j=1}^{m}O^{(1)}_{(-\infty,p_{j}),(-\infty,q_{j}]}\cap\bigcap_{k=1}^{\widetilde{m}}O^{(1)}_{(\tilde{p}_{k},+\infty),[\tilde{q}_{k},+\infty)}\cap O^{(2)}_{LU}\cap\bigcap_{\ell=1}^{N}O^{(3)}_{(u_{\ell},v_{\ell})} \tag{15}\] such that \(s\in\lambda_{f}(O^{\prime})\subseteq O\). We additionally use Lemma 5.4 and assume that the representation (15) is stratified.
If we have \(s^{\prime}(p_{j})\leq q_{j}\) for some \(j\in\{1,\ldots,m\}\), then \(s^{\prime}\in O^{(0)}_{p_{j},s^{\prime}(p_{j})}\subseteq O^{(1)}_{(-\infty,p_{j}),(-\infty,q_{j}]}\) and we replace \(O^{(1)}_{(-\infty,p_{j}),(-\infty,q_{j}]}\) in (15) by \(O^{(0)}_{p_{j},s^{\prime}(p_{j})}\). We proceed analogously if \(s^{\prime}(\tilde{p}_{k})\geq\tilde{q}_{k}\). By rerunning the stratification procedure from Lemma 5.4, we again obtain a stratified representation. In our situation, with (S7) and (S8) already holding, the proof of Lemma 5.4 never adds new sets of type 1. Hence, we can assume that \[s^{\prime}(p_{j})>q_{j}\quad\text{for all }j\qquad\text{and}\qquad s^{\prime}(\tilde{p}_{k})<\tilde{q}_{k}\quad\text{for all }k. \tag{16}\] Lemma 5.5 yields \[\lambda_{f}(O^{\prime})=\{\tilde{s}:\operatorname{Im}(\tilde{s})\subseteq\operatorname{Im}(f)\}\cap O^{(0)}_{\bar{x},f(\bar{y})}\cap\\ \bigcap_{j=1}^{m}O^{(1)}_{(-\infty,p_{j}),(-\infty,f(q_{j})]}\cap\bigcap_{k=1}^{\widetilde{m}}O^{(1)}_{(\tilde{p}_{k},+\infty),[f(\tilde{q}_{k}),+\infty)}\cap O^{(2)}_{LU}\cap\bigcap_{\ell=1}^{N}O^{(3)}_{(f(u_{\ell}),f(v_{\ell}))}. \tag{17}\] From (16) and \(s^{\prime}\in O^{\prime}\), we obtain \(\sup s^{\prime}(-\infty,p_{j})\leq q_{j}<s^{\prime}(p_{j})\) for all \(j\) as well as \(\inf s^{\prime}(\tilde{p}_{k},+\infty)\geq\tilde{q}_{k}>s^{\prime}(\tilde{p}_{k})\) for all \(k\). By 1 and 2, we conclude \(\sup s^{\prime}(-\infty,p_{j})=r_{s(p_{j})}\) for all \(j\) and \(\inf s^{\prime}(\tilde{p}_{k},+\infty)=t_{s(\tilde{p}_{k})}\) for all \(k\). Therefore, \(q_{j}\geq r_{s(p_{j})}\) for all \(j\) and \(\tilde{q}_{k}\leq t_{s(\tilde{p}_{k})}\) for all \(k\). Since the left hand sides of these inequalities are rational numbers while the right hand sides are irrational, we even obtain \(q_{j}>r_{s(p_{j})}\) for all \(j\) and \(\tilde{q}_{k}<t_{s(\tilde{p}_{k})}\) for all \(k\). In other words, we have \(f(q_{j})\geq s(p_{j})\) for all \(j\) and \(f(\tilde{q}_{k})\leq s(\tilde{p}_{k})\) for all \(k\). Consequently, we can replace the sets of type 1 in (17) by sets of type 0, similarly to the above: we set \[P:=\{\tilde{s}:\operatorname{Im}(\tilde{s})\subseteq\operatorname{Im}(f)\}\cap O^{(0)}_{\bar{x},f(\bar{y})}\cap O^{(0)}_{\bar{p},s(\bar{p})}\cap O^{(0)}_{\overline{\tilde{p}},s(\overline{\tilde{p}})}\cap O^{(2)}_{LU}\cap\bigcap_{\ell=1}^{N}O^{(3)}_{(f(u_{\ell}),f(v_{\ell}))}\] where \(\bar{p}=(p_{1},\dots,p_{m}),\overline{\tilde{p}}=(\widetilde{p}_{1},\dots,\widetilde{p}_{\widetilde{m}})\) to obtain \(s\in P\subseteq\lambda_{f}(O^{\prime})\subseteq O\). Putting \[A:=\operatorname{Im}(f)\cap\mathbb{Q}\setminus\bigcup_{\ell=1}^{N}(f(u_{\ell}),f(v_{\ell})),\] we see that \[P=O^{(0)}_{\bar{x},f(\bar{y})}\cap O^{(0)}_{\bar{p},s(\bar{p})}\cap O^{(0)}_{\overline{\tilde{p}},s(\overline{\tilde{p}})}\cap O^{(2)}_{LU}\cap O^{(4)}_{A}\] is a \(\mathcal{T}_{024}\)-(basic) open set. Hence, \(O\) is indeed a \(\mathcal{T}_{024}\)-neighbourhood of \(s\), as claimed.
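The rational-versus-irrational step just used (a rational \(q_{j}\) strictly above the irrational boundary \(r_{s(p_{j})}\) forces \(f(q_{j})\geq s(p_{j})\)) can be checked on a toy model; in the following sketch (ours; floats approximate the irrational fiber boundaries) \(f\) is a step map whose fibers are cut at \(\sqrt{2}\) and \(\sqrt{5}\):

```python
import math

# Toy model (ours): f takes the values 0, 1, 2; its fibers are cut at the
# irrational points sqrt(2) and sqrt(5), approximated by floats here.
fibers = [(-math.inf, math.sqrt(2), 0),
          (math.sqrt(2), math.sqrt(5), 1),
          (math.sqrt(5), math.inf, 2)]   # entries (r_w, t_w, w)

def f(q):
    # a rational q never equals an irrational cut, so it lies strictly
    # inside exactly one fiber
    return next(w for r, t, w in fibers if r < q < t)

# every rational q with q > r_1 = sqrt(2) satisfies f(q) >= 1:
assert all(f(q) >= 1 for q in [1.5, 2.0, 2.25, 3.0, 100.0])
```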
_Remark 5.10_.: We can reformulate the proof of Lemma 5.9 as follows: We show that \((\mathcal{M}_{\mathbb{Q}},\mathcal{T}_{024})\) has Property \(\mathbf{X}\) with respect to \((\mathcal{M}_{\mathbb{Q}},\mathcal{T}_{01^{cls}23^{opn}})\), again using the decomposition \(s=fs^{\prime}\operatorname{id}_{\mathbb{Q}}\) - this time for a fixed map \(f\) with \(\operatorname{Im}(f)=(-\infty,\inf s)\cup\operatorname{Im}(s)\cup(\sup s,+\infty)\) whose preimages of single points are irrational intervals, the fixed map \(\operatorname{id}_{\mathbb{Q}}\) and varying \(s^{\prime}\). As in Remark 5.7, we apply Proposition 3.3(i) to the continuous map \(\operatorname{id}\colon(\mathcal{M}_{\mathbb{Q}},\mathcal{T}_{01^{cls}23^{opn}})\to(\mathcal{M}_{\mathbb{Q}},\mathcal{T})\) to obtain \(\mathcal{T}\subseteq\mathcal{T}_{024}\).

### Reduction \(\mathcal{T}_{024}\rightsquigarrow\mathcal{T}_{023^{opn}}\)

For the next statement, we again aim at showing that a \(\mathcal{T}\)-open set is a \(\mathcal{T}_{023^{opn}}\)-neighbourhood of its elements. Instead of directly doing this for _all_ elements, we start by restricting to _injective_ elements - this special case contains the bulk of the work. The main observation behind it is an analysis of "products" of the form \(O^{(0)}_{\bar{z},\bar{z}}\circ O^{(4)}_{A}\) if \(\bar{z}=(z_{1},\dots,z_{n})\) is a tuple in \(\mathbb{Q}\) and \(A\) is densely ordered (which is connected to \(g\) being injective). Clearly, if \((z_{i},z_{j})\cap A=\emptyset\), then no element of \(O^{(0)}_{\bar{z},\bar{z}}\circ O^{(4)}_{A}\) can hit \((z_{i},z_{j})\) - yielding a condition of type \(3^{opn}\) instead of 4. As it turns out, this is the only obstruction to points in the image.

**Lemma 5.11**.: _Let \(\mathcal{T}\) be a semigroup topology on \(\mathcal{M}_{\mathbb{Q}}\) such that \(\mathcal{T}\subseteq\mathcal{T}_{024}\). Then any injective endomorphism \(g\in\mathcal{M}_{\mathbb{Q}}\) has a neighbourhood basis consisting of \(\mathcal{T}_{023^{opn}}\)-open sets._

Proof.: Given an injective endomorphism \(g\), let \(O\) be any \(\mathcal{T}\)-open neighbourhood of \(g\). We use continuity of the composition map. Since \(g=\operatorname{id}_{\mathbb{Q}}\circ g\in O\), there exist \(\mathcal{T}\)-neighbourhoods \(V_{1}\) of \(\operatorname{id}_{\mathbb{Q}}\) and \(V_{2}\) of \(g\) such that \(V_{1}\circ V_{2}\subseteq O\). By assumption, \(\mathcal{T}\subseteq\mathcal{T}_{024}\), hence there exist \(\mathcal{T}_{024}\)-basic open sets \(U_{1},U_{2}\) such that \(\operatorname{id}_{\mathbb{Q}}\in U_{1}\subseteq V_{1}\) and \(g\in U_{2}\subseteq V_{2}\). Note that a \(\mathcal{T}_{024}\)-basic open set containing \(\operatorname{id}_{\mathbb{Q}}\) has the form \(O^{(0)}_{\bar{z},\bar{z}}\cap O^{(2)}_{-\infty,+\infty}\) for a tuple \(\bar{z}\) in \(\mathbb{Q}\) - sets of type 4 cannot occur. We can assume that \(U_{2}\) has the form \(U_{2}=O^{(0)}_{\bar{x},\bar{y}}\cap O^{(2)}_{LU}\cap O^{(4)}_{A}\), where \(A\) is a densely ordered set (for otherwise, replace \(A\) by \(\operatorname{Im}(g)\)). We obtain \[g\in\left(O^{(0)}_{\bar{z},\bar{z}}\cap O^{(2)}_{-\infty,+\infty}\right)\circ\left(O^{(0)}_{\bar{x},\bar{y}}\cap O^{(2)}_{LU}\cap O^{(4)}_{A}\right)\subseteq V_{1}\circ V_{2}\subseteq O.
\tag{18}\]

The lemma will be proved once we find a \(\mathcal{T}_{023^{opn}}\)-open set \(P\) with \[g\in P\subseteq\left(O^{(0)}_{\bar{z},\bar{z}}\cap O^{(2)}_{-\infty,+\infty}\right)\circ\left(O^{(0)}_{\bar{x},\bar{y}}\cap O^{(2)}_{LU}\cap O^{(4)}_{A}\right).\] Since (18) remains valid if we expand the tuple \(\bar{z}\), we can assume that the elements listed in \(\bar{y}\) are contained in \(\bar{z}\). We write \(\bar{z}=(z_{1},\ldots,z_{n})\) where the elements \(z_{i}\) shall be sorted in ascending order. Adding additional elements \(z_{\pm}\) to \(\bar{z}\) if necessary, we can assume that \(z_{1}=z_{-}<\inf A\) if \(A\) is bounded below and that \(z_{n}=z_{+}>\sup A\) if \(A\) is bounded above. To simplify notation, we set \(z_{0}:=-\infty\) as well as \(z_{n+1}:=+\infty\). Further, we define \[\mathcal{M}_{0}:=\left\{(i,j)\in\{0,\ldots,n+1\}^{2}:i<j,\,(z_{i},z_{j})\cap A=\emptyset\right\},\] \[\mathcal{M}_{1}:=\left\{(i,j)\in\{0,\ldots,n+1\}^{2}:i<j,\,|(z_{i},z_{j})\cap A|=1\right\}.\] Note that \(\{(z_{i},z_{j}):(i,j)\in\mathcal{M}_{0}\}\) always contains \((-\infty,z_{-})\) if \(A\) is bounded below and \((z_{+},+\infty)\) if \(A\) is bounded above. For \((i,j)\in\mathcal{M}_{1}\), define \(w_{i,j}\) such that \((z_{i},z_{j})\cap A=\{w_{i,j}\}\). Expanding the tuple \(\bar{z}\) once more, we can assume that the elements \(w_{i,j}\) are also contained in \(\bar{z}\). Defining \(\mathcal{M}_{0}\) and \(\mathcal{M}_{1}\) from this expanded tuple, we obtain \(\mathcal{M}_{1}=\emptyset\). For each pair \((i,j)\), the set \((z_{i},z_{j})\cap A\) is thus either empty or it contains at least two elements - in which case it contains an infinite densely ordered set. We claim that \[g\in P:=O^{(0)}_{\bar{x},\bar{y}}\cap O^{(2)}_{LU}\cap\bigcap_{(i,j)\in\mathcal{M}_{0}}O^{(3)}_{(z_{i},z_{j})}\subseteq\left(O^{(0)}_{\bar{z},\bar{z}}\cap O^{(2)}_{-\infty,+\infty}\right)\circ\left(O^{(0)}_{\bar{x},\bar{y}}\cap O^{(2)}_{LU}\cap O^{(4)}_{A}\right); \tag{19}\] note that \(\operatorname{Im}(g)\cap(z_{i},z_{j})\subseteq A\cap(z_{i},z_{j})=\emptyset\) for all \((i,j)\in\mathcal{M}_{0}\). To prove the set inclusion in (19), the crucial step is to find \(f\in O^{(0)}_{\bar{z},\bar{z}}\cap O^{(2)}_{-\infty,+\infty}\) such that \[\forall q\in\mathbb{Q}\setminus\bigcup_{(i,j)\in\mathcal{M}_{0}}(z_{i},z_{j})\colon f^{-1}\{q\}\cap A\neq\emptyset. \tag{20}\] This will be accomplished via a Back&Forth strategy, distinguishing whether \(A\) is bounded or unbounded above and below. In the following, we will consider the case that \(A\) is bounded below and unbounded above; the other cases are treated analogously. We will first find an increasing map \(\varphi\colon[z_{-},+\infty)\to[z_{-},+\infty)\) such that (20) holds with \(\varphi\) in place of \(f\) (note that \(\mathbb{Q}\setminus\bigcup_{(i,j)\in\mathcal{M}_{0}}(z_{i},z_{j})=[z_{-},+\infty)\setminus\bigcup_{(i,j)\in\mathcal{M}_{0}}(z_{i},z_{j})\)). To this end, we consider the following property of a finite partial increasing map \(m\) from \([z_{-},+\infty)\) to \([z_{-},+\infty)\):

* \((+)\) For any \(q\in\mathbb{Q}\setminus\bigcup_{(i,j)\in\mathcal{M}_{0}}(z_{i},z_{j})\) and all \(u,u^{\prime}\in\operatorname{Dom}(m)\) with \(u<u^{\prime}\), if \(m(u)<q<m(u^{\prime})\), then \((u,u^{\prime})\cap A\) is an infinite densely ordered set (equivalently: this intersection contains at least two elements).
Setting \(C:=\mathbb{Q}\setminus\bigcup_{(i,j)\in\mathcal{M}_{0}}(z_{i},z_{j})\), we claim that the system of all finite partial increasing maps \(m\) from \([z_{-},+\infty)\) to \([z_{-},+\infty)\) satisfying \((+)\) is an \((A,C)\)-Back&Forth system (see Definition 2.10). In order to simplify notation, we formally add the elements \(+\infty,-\infty\) to both \(\operatorname{Dom}(m)\) and \(\operatorname{Im}(m)\).

\(\mathbf{(A,C)}\)**-Back:** Given \(q\in C\), we set \(u_{-}:=\max\left\{u\in\operatorname{Dom}(m):m(u)<q\right\}\) and further \(u_{+}:=\min\left\{u\in\operatorname{Dom}(m):q<m(u)\right\}\); since we added \(\pm\infty\) to the domain and image of \(m\), these elements are well-defined. We claim that \((u_{-},u_{+})\cap A\) is an infinite densely ordered set. Note that \(u_{-}\neq-\infty\) since \(q\in C\subseteq[z_{-},+\infty)\). If \(u_{+}\) is finite as well, our claim follows from condition \((+)\), and if \(u_{+}=+\infty\), it follows from \(A\) being unbounded above. Taking \(p\in(u_{-},u_{+})\cap A\) such that both \((u_{-},p)\cap A\) and \((p,u_{+})\cap A\) are infinite densely ordered sets, we obtain that the extension \(m^{\prime}\) of \(m\) by \(m^{\prime}(p):=q\) is an increasing map which still satisfies condition \((+)\).

**Forth:** Given \(p\in\mathbb{Q}\setminus\operatorname{Dom}(m)\), we set \(u_{-}:=\max\left\{u\in\operatorname{Dom}(m):u<p\right\}\) and \(u_{+}:=\min\left\{u\in\operatorname{Dom}(m):p<u\right\}\) (here, there can never exist elements \(u\) of \(\operatorname{Dom}(m)\) in between \(u_{-}\) and \(u_{+}\)!). We distinguish cases:

_Case 1_ (both \((u_{-},p)\cap A\) and \((p,u_{+})\cap A\) are infinite densely ordered): Pick _any_ \(q\in\mathbb{Q}\) with \(m(u_{-})\leq q\leq m(u_{+})\).

_Case 2_ (\((u_{-},p)\cap A\) is infinite densely ordered, but \((p,u_{+})\cap A\) is not): Pick \(q:=m(u_{+})\).

_Case 3_ (\((u_{-},p)\cap A\) is not infinite densely ordered, but \((p,u_{+})\cap A\) is): Pick \(q:=m(u_{-})\).

_Case 4_ (neither \((u_{-},p)\cap A\) nor \((p,u_{+})\cap A\) are infinite densely ordered): Pick _any_ \(q\in\mathbb{Q}\) with \(m(u_{-})\leq q\leq m(u_{+})\).

Observe that the extension \(m^{\prime}\) of \(m\) by \(m^{\prime}(p):=q\) is an increasing map which still satisfies condition \((+)\); for Case 4, we see that \((u_{-},u_{+})\cap A\) is not infinite densely ordered - since \((+)\) holds, \((u_{-},u_{+})\cap A\) is never considered as a set of the form \((u,u^{\prime})\cap A\) in condition \((+)\), even less so \((u_{-},p)\cap A\) and \((p,u_{+})\cap A\).

Since \(\mathcal{M}_{1}=\emptyset\), we know that \(m\) defined by \(\bar{z}\mapsto\bar{z}\) satisfies \((+)\). Thus, Lemma 2.11 yields \(\varphi\colon[z_{-},+\infty)\to[z_{-},+\infty)\) with \(\varphi(\bar{z})=\bar{z}\) and \[\forall q\in C=\mathbb{Q}\setminus\bigcup_{(i,j)\in\mathcal{M}_{0}}(z_{i},z_{j})\colon\varphi^{-1}\{q\}\cap A\neq\emptyset.\] Extending \(\varphi\) to a total map \(f\) by setting \(f(q):=q\) for \(q\in(-\infty,z_{-})\), this finishes the definition of \(f\); by design, \(f\in O^{(0)}_{\bar{z},\bar{z}}\). Since \(C\) is unbounded above, \(f\) must be as well. Moreover, \(f\) is obviously unbounded below, yielding \(f\in O^{(0)}_{\bar{z},\bar{z}}\cap O^{(2)}_{-\infty,+\infty}\) as desired. Using \(f\), we can finally prove (19). Let \(s\in O^{(0)}_{\bar{x},\bar{y}}\cap O^{(2)}_{LU}\cap\bigcap_{(i,j)\in\mathcal{M}_{0}}O^{(3)}_{(z_{i},z_{j})}\).
We will prove \(s\in\left(O^{(0)}_{\bar{z},\bar{z}}\cap O^{(2)}_{-\infty,+\infty}\right)\circ\left(O^{(0)}_{\bar{x},\bar{y}}\cap O^{(2)}_{LU}\cap O^{(4)}_{A}\right)\) by finding \(h\in O^{(0)}_{\bar{x},\bar{y}}\cap O^{(2)}_{LU}\cap O^{(4)}_{A}\) such that \(s=fh\). The latter equality can be certainly satisfied by picking, for each \[q\in\operatorname{Im}(s)\subseteq\mathbb{Q}\setminus\bigcup_{(i,j)\in\mathcal{M}_{0}}(z_{i},z_{j})\subseteq\operatorname{Im}(f),\] _any_ element \(p_{q}\in f^{-1}\{q\}\neq\emptyset\) and defining \(h\) by \(h(c):=p_{s(c)}\). Because of (20), the elements \(p_{q}\) can be chosen in \(A\), thus yielding \(\operatorname{Im}(h)\subseteq A\). For the entries \(y_{i}\) of \(\bar{y}\), we can pick \(p_{y_{i}}=y_{i}\) since \(f(\bar{y})=\bar{y}\) - note that \(\bar{y}\) has been added to \(\bar{z}\), that the entries are pairwise different since \(g\) is injective and that \(y_{i}\in A\) (by \(g(\bar{x})=\bar{y}\)). Thus, \(s(\bar{x})=\bar{y}\) implies \(h\in O^{(0)}_{\bar{x},\bar{y}}\). Since \(f\in O^{(2)}_{-\infty,+\infty}\), the boundedness type of \(h\) is the same as the boundedness type of \(s\) which in turn is the same as the boundedness type of \(g\). Hence, \(h\in O^{(2)}_{LU}\) and we conclude \(h\in O^{(0)}_{\bar{x},\bar{y}}\cap O^{(2)}_{LU}\cap O^{(4)}_{A}\). Therefore \[s=fh\in\left(O^{(0)}_{\bar{z},\bar{z}}\cap O^{(2)}_{-\infty,+\infty}\right)\circ\left(O^{(0)}_{\bar{x},\bar{y}}\cap O^{(2)}_{LU}\cap O^{(4)}_{A}\right),\] thus proving (19) and, consequently, the lemma.

**Lemma 5.12**.: _It holds that \(\mathcal{T}_{024}\rightsquigarrow\mathcal{T}_{023^{opn}}\)._

Proof.: Let \(O\in\mathcal{T}\). We show that \(O\) is a \(\mathcal{T}_{023^{opn}}\)-neighbourhood of every element of \(O\). Take \(s\in O\). We claim that for any generic surjection \(f\in\mathcal{M}_{\mathbb{Q}}\) (which exists by Lemma 4.5), there is some injective \(g\in\mathcal{M}_{\mathbb{Q}}\) such that \(s=fg\): Since \(\operatorname{Im}(s)\subseteq\mathbb{Q}=\operatorname{Im}(f)\) and since the preimages \(f^{-1}\{w\}\) are irrational intervals, Lemma 2.9(ii) applies and yields an injective \(g\in\mathcal{M}_{\mathbb{Q}}\) as desired. We use continuity of the translation map \(\lambda_{f}\). Since \(\lambda_{f}(g)=s\in O\), there exists a \(\mathcal{T}\)-neighbourhood \(V\) of \(g\) such that \(\lambda_{f}(V)\subseteq O\). By Lemma 5.11, there exists a \(\mathcal{T}_{023^{opn}}\)-basic open set \(U\) such that \(g\in U\subseteq V\); we assume the representation of \(U\) to be stratified via Lemma 5.4. Hence, \(s\in\lambda_{f}(U)\subseteq O\). Using Lemma 5.5, we obtain that \(\lambda_{f}(U)\) is a \(\mathcal{T}_{023^{opn}}\)-basic open set which proves the lemma.

_Remark 5.13_.: We can combine Lemmas 5.11 and 5.12 and reformulate the proof of \(\mathcal{T}_{024}\rightsquigarrow\mathcal{T}_{023^{opn}}\) as follows: We show that \((\mathcal{M}_{\mathbb{Q}},\mathcal{T}_{023^{opn}})\) has Property \(\overline{\mathbf{X}}\) of length \(2\) with respect to \((\mathcal{M}_{\mathbb{Q}},\mathcal{T}_{024})\), using the decomposition \(s=f\operatorname{id}_{\mathbb{Q}}\operatorname{id}_{\mathbb{Q}}g\operatorname{id}_{\mathbb{Q}}\) where the first, third and fifth position are fixed and the second and fourth position are varying, subsequently yielding \(\tilde{s}=f\tilde{f}\operatorname{id}_{\mathbb{Q}}\tilde{h}\operatorname{id}_{\mathbb{Q}}\).
As in Remarks 5.7 and 5.10, we apply Proposition 3.3(i) to the continuous map \(\operatorname{id}\colon(\mathcal{M}_{\mathbb{Q}},\mathcal{T}_{024})\to(\mathcal{M}_{\mathbb{Q}},\mathcal{T})\) to obtain \(\mathcal{T}\subseteq\mathcal{T}_{023^{opn}}\).

### Reduction \(\mathcal{T}_{023^{opn}}\rightsquigarrow\mathcal{T}_{03^{opn}}\)

In our next reduction, we eliminate the sets of type 2, i.e. the boundedness types, from the upper bound. Compared to our previous reductions, this requires a different approach; we use the regularity of the given topology \(\mathcal{T}\) in a crucial way. The main observation is the following: if \(O\) is \(\mathcal{T}\)-open and \(s\in O\), there exists a \(\mathcal{T}\)-open set \(P\) such that \(s\in P\subseteq\overline{P}^{\mathcal{T}}\subseteq O\), where \(\overline{P}^{\mathcal{T}}\) denotes the topological closure of \(P\) with respect to \(\mathcal{T}\). Our proof essentially amounts to showing that taking this topological closure eliminates the sets \(O^{(2)}_{LU}\) from \(P\); this corresponds to \(O^{(2)}_{LU}\) being topologically dense. It is easy to see that \(O^{(2)}_{LU}\) is dense with respect to the pointwise topology; however, this set is obviously not dense with respect to \(\mathcal{T}_{023^{opn}}\). Hence, independently of the above sketch, it can also be seen as an important step in showing \(\mathcal{T}=\mathcal{T}_{pw}\) that indeed \(O^{(2)}_{LU}\) is dense with respect to \(\mathcal{T}\) as well. This will depend on the Polishness of \(\mathcal{T}\). We start with a variant of Lemma 5.8.

**Lemma 5.14**.: _Let \(s,f\in\mathcal{M}_{\mathbb{Q}}\) and \(q\in\mathbb{Q}\setminus\operatorname{Im}(s)\) such that \(\operatorname{Im}(f)=\operatorname{Im}(s)\mathbin{\dot{\cup}}\{q\}\) where the preimages \(f^{-1}\{w\}\) are irrational intervals, i.e. \(f^{-1}\{w\}=(r_{w},t_{w})\) for all \(w\in\operatorname{Im}(f)\), where \(r_{w},t_{w}\in\mathbb{I}\cup\{\pm\infty\}\). Then the following hold:_

1. _Suppose there is_ \(p\in\mathbb{Q}\) _such that_ \(\sup s(-\infty,p)=\max s(-\infty,p)<q<s(p)\)_. Then there exists_ \(s^{\prime}\in\mathcal{M}_{\mathbb{Q}}\) _such that_ \(s=fs^{\prime}\) _and_ \(\sup s^{\prime}(-\infty,p)=r_{q}\in\mathbb{I}\)_._
2. _Suppose that_ \(\sup s=\max s<q\)_. Then there exists_ \(s^{\prime}\in\mathcal{M}_{\mathbb{Q}}\) _such that_ \(s=fs^{\prime}\) _and_ \(\sup s^{\prime}=r_{q}\in\mathbb{I}\)_._
3. _Suppose that_ \(q<\min s=\inf s\)_. Then there exists_ \(s^{\prime}\in\mathcal{M}_{\mathbb{Q}}\) _such that_ \(s=fs^{\prime}\) _and_ \(\inf s^{\prime}=t_{q}\in\mathbb{I}\)_._

Proof (of Lemma 5.14).: One picks \(s^{\prime}\in\mathcal{M}_{\mathbb{Q}}\) with

1. \(s=fs^{\prime}\)
2. \(\forall w\in\operatorname{Im}(s)\colon\,\bigl{(}s^{-1}\{w\}\text{ has no greatest element }\Rightarrow\sup s^{\prime}(s^{-1}\{w\})=t_{w}\bigr{)}\) and \(\forall w\in\operatorname{Im}(s)\colon\,\bigl{(}s^{-1}\{w\}\text{ has no least element }\Rightarrow\inf s^{\prime}(s^{-1}\{w\})=r_{w}\bigr{)}\)
3. \(\forall w\in\operatorname{Im}(s)\colon\,s^{\prime}|_{s^{-1}\{w\}}\) is continuous

and argues as in Case 1 of the proof of Lemma 5.8 with \(q\) in place of \(s(p)\). The boundary points \(r_{q}\) (for (1),(2)) and \(t_{q}\) (for (3)) are finite since there exist elements in \(\operatorname{Im}(f)\) which are below \(q\) and above \(q\), respectively.
Next, we show that the set \(\operatorname{Surj}(\mathbb{Q})\) of all surjective elements of \(\mathcal{M}_{\mathbb{Q}}\) is dense with respect to our given Polish semigroup topology \(\mathcal{T}\) with \(\mathcal{T}_{pw}\subseteq\mathcal{T}\subseteq\mathcal{T}_{023^{opn}}\); this uses Polishness in an essential way and is another step in matching \(\mathcal{T}\) to \(\mathcal{T}_{pw}\).

**Lemma 5.15**.: _Let \(\mathcal{T}\) be a Polish semigroup topology on \(\mathcal{M}_{\mathbb{Q}}\) such that \(\mathcal{T}_{pw}\subseteq\mathcal{T}\subseteq\mathcal{T}_{023^{opn}}\)._

1. _For each_ \(q\in\mathbb{Q}\)_, the set_ \(M_{q}:=\{s\in\mathcal{M}_{\mathbb{Q}}:q\in\operatorname{Im}(s)\}\) _is_ \(\mathcal{T}\)_-dense._
2. _The set_ \(\operatorname{Surj}(\mathbb{Q})\) _of surjective endomorphisms on_ \(\mathbb{Q}\) _is_ \(\mathcal{T}\)_-dense._

Proof.: **(i).** Let \(O\in\mathcal{T}\) be open and nonempty; we have to show \(O\cap M_{q}\neq\emptyset\). Since \(\mathcal{T}\subseteq\mathcal{T}_{023^{opn}}\), the set \(O\) contains a nonempty \(\mathcal{T}_{023^{opn}}\)-basic open set; we write \[\emptyset\neq O^{(0)}_{\bar{x},\bar{y}}\cap O^{(2)}_{LU}\cap\bigcap_{\ell=1}^{N}O^{(3)}_{(u_{\ell},v_{\ell})}\subseteq O\] which we assume to be a stratified representation, see Lemma 5.4. If \(q\) is contained in \(\bar{y}\), then _any_ \(s\in O\) has \(q\) in its image, so we assume the contrary. Distinguishing by the position of \(q\) relative to \(\bar{y}=(y_{1},\ldots,y_{n})\) and \((u_{1},v_{1},\ldots,u_{N},v_{N})\) and by the required boundedness type \(O^{(2)}_{LU}\), one easily constructs (by a piecewise definition) a map \(s\in O\), possibly together with a rational \(p\in\mathbb{Q}\), such that \(\sup s(-\infty,p)=\max s(-\infty,p)<q<s(p)\) (if \(\mathbb{Q}\setminus\bigcup_{\ell=1}^{N}(u_{\ell},v_{\ell})\) contains elements less and elements greater than \(q\)) or \(\sup s=\max s<q\) (if \(\mathbb{Q}\setminus\bigcup_{\ell=1}^{N}(u_{\ell},v_{\ell})\) contains only elements less than \(q\)) or \(q<\min s=\inf s\) (if \(\mathbb{Q}\setminus\bigcup_{\ell=1}^{N}(u_{\ell},v_{\ell})\) contains only elements greater than \(q\)). We use Lemma 4.5 to find \(f\in\mathcal{M}_{\mathbb{Q}}\) with \(\operatorname{Im}(f)=\operatorname{Im}(s)\mathbin{\dot{\cup}}\{q\}\) and \(f^{-1}\{w\}=(r_{w},t_{w})\) for all \(w\in\operatorname{Im}(f)\), where \(r_{w},t_{w}\in\mathbb{I}\cup\{\pm\infty\}\). By Lemma 5.14, there exists \(s^{\prime}\in\mathcal{M}_{\mathbb{Q}}\) such that \(s=fs^{\prime}\) and \(\sup s^{\prime}(-\infty,p)=r_{q}\in\mathbb{I}\) or \(\sup s^{\prime}=r_{q}\in\mathbb{I}\) or \(\inf s^{\prime}=t_{q}\in\mathbb{I}\). Applying continuity of the translation map \(\lambda_{f}\) at \(s^{\prime}\) as well as \(\mathcal{T}\subseteq\mathcal{T}_{023^{opn}}\), we obtain a \(\mathcal{T}_{023^{opn}}\)-basic open set \[O^{\prime}=O^{(0)}_{\bar{x}^{\prime},\bar{y}^{\prime}}\cap O^{(2)}_{L^{\prime}U^{\prime}}\cap\bigcap_{\ell=1}^{N^{\prime}}O^{(3)}_{(u_{\ell}^{\prime},v_{\ell}^{\prime})}\] such that \(s^{\prime}\in O^{\prime}\) and \(s\in\lambda_{f}(O^{\prime})\subseteq O\). In particular, \(\operatorname{Im}(s^{\prime})\subseteq\mathbb{Q}\setminus\bigcup_{\ell=1}^{N^{\prime}}(u_{\ell}^{\prime},v_{\ell}^{\prime})=:A^{\prime}\), so either \(r_{q}\) or \(t_{q}\) is a limit point of \(A^{\prime}\). Since \(r_{q}\) and \(t_{q}\) are irrational while the boundary points of \(A^{\prime}\) are rational, either \(r_{q}\) or \(t_{q}\) must in fact be contained in the interior of \(A^{\prime}\).
Thus, \(A^{\prime}\cap f^{-1}\{q\}=A^{\prime}\cap(r_{q},t_{q})\neq\emptyset\); we pick \(z^{\prime}\) in this intersection. Similarly to our construction of \(s\), we distinguish by the position of \(z^{\prime}\) relative to \(\bar{y}^{\prime}=(y_{1}^{\prime},\ldots,y_{n^{\prime}}^{\prime})\) and \((u_{1}^{\prime},v_{1}^{\prime},\ldots,u_{N^{\prime}}^{\prime},v_{N^{\prime}}^{\prime})\) and by the required boundedness type \(O^{(2)}_{L^{\prime}U^{\prime}}\) to find a map \(\bar{s}^{\prime}\in O^{\prime}\) with \(z^{\prime}\in\operatorname{Im}(\bar{s}^{\prime})\). We obtain \(\tilde{s}:=\lambda_{f}(\bar{s}^{\prime})=f\bar{s}^{\prime}\in O\) and \(q\in\operatorname{Im}(\tilde{s})\), i.e. \(\tilde{s}\in O\cap M_{q}\neq\emptyset\).

**(ii).** For each \(q\in\mathbb{Q}\), the set \(M_{q}=\{s\in\mathcal{M}_{\mathbb{Q}}:q\in\operatorname{Im}(s)\}\) is \(\mathcal{T}\)-open since \(\mathcal{T}_{pw}\subseteq\mathcal{T}\). By (i), it is also \(\mathcal{T}\)-dense. Since \(\mathcal{T}\) is a Polish topology, Baire's Category Theorem applies and yields the \(\mathcal{T}\)-density of \(\operatorname{Surj}(\mathbb{Q})=\bigcap_{q\in\mathbb{Q}}M_{q}\).

By definition, any \(\mathcal{T}_{023^{opn}}\)-open set can be represented as a union of sets of the form \[O^{(0)}_{\bar{x},\bar{y}}\cap O^{(2)}_{LU}\cap\bigcap_{\ell=1}^{N}O^{(3)}_{(u_{\ell},v_{\ell})}.\] If we rearrange to separate the \(\mathcal{T}_{02}\)-interior from the "proper" type \(3^{\text{opn}}\) portion, we obtain the following alternative notation which will prove to be very helpful:

**Notation 5.16**.: Setting \[A:=O_{-\infty,+\infty}^{(2)}\qquad B:=O_{-\infty,\mathbb{R}}^{(2)}\qquad C:=O_{\mathbb{R},+\infty}^{(2)}\qquad D:=O_{\mathbb{R},\mathbb{R}}^{(2)},\] we can rewrite any \(\mathcal{T}_{023^{opn}}\)-open set \(O\) as \[O=(O_{A}\cap A)\cup(O_{B}\cap B)\cup(O_{C}\cap C)\cup(O_{D}\cap D)\cup\bigcup_{i\in I}\left(O_{\bar{x}^{(i)},\bar{y}^{(i)}}^{(0)}\cap O_{L^{(i)},U^{(i)}}^{(2)}\cap\bigcap_{\ell=1}^{N^{(i)}}O_{(u_{\ell}^{(i)},v_{\ell}^{(i)})}^{(3)}\right)\] where \(O_{A},O_{B},O_{C},O_{D}\in\mathcal{T}_{pw}\), \(\bar{x}^{(i)},\bar{y}^{(i)}\) are tuples in \(\mathbb{Q}\), \(N^{(i)}\geq 1\) and \(u_{\ell}^{(i)},v_{\ell}^{(i)}\in\mathbb{Q}\).

Note that the sets \(O_{A},O_{B},O_{C},O_{D}\) could in general be empty even if \(O\) is nonempty. However, for \(O\in\mathcal{T}\subseteq\mathcal{T}_{023^{opn}}\), one uses the previous lemma to prove:

**Lemma 5.17**.: _Let \(\mathcal{T}\) be a Polish semigroup topology on \(\mathcal{M}_{\mathbb{Q}}\) such that \(\mathcal{T}_{pw}\subseteq\mathcal{T}\subseteq\mathcal{T}_{023^{opn}}\), and let \(O\in\mathcal{T}\) be nonempty. Then \(O\subseteq\overline{O_{A}}^{\mathcal{T}}\). In particular, \(O_{A}\neq\emptyset\)._

Proof.: Aiming for a contradiction, we assume \(O\nsubseteq\overline{O_{A}}^{\mathcal{T}}\). Thus, denoting the complement of \(\overline{O_{A}}^{\mathcal{T}}\) by \(\left(\overline{O_{A}}^{\mathcal{T}}\right)^{\mathrm{c}}\), we know that \(O\cap\left(\overline{O_{A}}^{\mathcal{T}}\right)^{\mathrm{c}}\) is a nonempty \(\mathcal{T}\)-open set.
However, \(O_{(u_{\ell}^{(i)},v_{\ell}^{(i)})}^{(3)}\cap\operatorname{Surj}(\mathbb{Q})=\emptyset\) and \((B\cup C\cup D)\cap\operatorname{Surj}(\mathbb{Q})=\emptyset\) imply \[O\cap\left(\overline{O_{A}}^{\mathcal{T}}\right)^{\mathrm{c}}\cap\operatorname{ Surj}(\mathbb{Q})=O_{A}\cap A\cap\left(\overline{O_{A}}^{\mathcal{T}}\right)^{ \mathrm{c}}\cap\operatorname{Surj}(\mathbb{Q})\subseteq O_{A}\cap\left( \overline{O_{A}}^{\mathcal{T}}\right)^{\mathrm{c}}=\emptyset,\] which contradicts Lemma 5.15(ii). With this result, we can attain an important intermediate step already hinted at in our proof outline in the introductory remarks to Subsection 5.3. **Lemma 5.18**.: _Let \(\mathcal{T}\) be a Polish semigroup topology on \(\mathcal{M}_{\mathbb{Q}}\) such that \(\mathcal{T}_{pw}\subseteq\mathcal{T}\subseteq\mathcal{T}_{023^{opn}}\). Then any nonempty \(O\in\mathcal{T}\) has nonempty \(\mathcal{T}_{pw}\)-interior. Consequently, a subset of \(\mathcal{M}_{\mathbb{Q}}\) is \(\mathcal{T}\)-dense if and only if it is \(\mathcal{T}_{pw}\)-dense; in particular, every boundedness type \(O^{(2)}_{LU}\) is \(\mathcal{T}\)-dense._ Proof.: By regularity, there exists a nonempty \(P\in\mathcal{T}\) such that \(\overline{P}^{\mathcal{T}}\subseteq O\) and thus \(\overline{P_{A}\cap A}^{\mathcal{T}}\subseteq O\). Since \(A\supseteq\mathrm{Surj}(\mathbb{Q})\) is \(\mathcal{T}\)-dense by Lemma 5.15(ii) and \(P_{A}\) is \(\mathcal{T}\)-open, we obtain \(\overline{P_{A}\cap A}^{\mathcal{T}}=\overline{P_{A}}^{\mathcal{T}}\) from elementary topology. Therefore, \(P_{A}\subseteq\overline{P_{A}\cap A}^{\mathcal{T}}\subseteq O\) and the \(\mathcal{T}_{pw}\)-interior of \(O\) contains the set \(P_{A}\) which is nonempty by Lemma 5.17. That any \(\mathcal{T}\)-dense set is \(\mathcal{T}_{pw}\)-dense follows from \(\mathcal{T}_{pw}\subseteq\mathcal{T}\). For the converse, assume that \(M\) is \(\mathcal{T}_{pw}\)-dense and let \(O\) be nonempty and \(\mathcal{T}\)-open. Since the \(\mathcal{T}_{pw}\)-interior of \(O\) is nonempty, it has nonempty intersection with \(M\), in particular \(M\cap O\neq\emptyset\). Next, we use the previous results to show that taking the topological closure with respect to \(\mathcal{T}\) eliminates the boundedness types from open sets. This is the crucial technical step in the proof of our reduction. **Lemma 5.19**.: _Let \(\mathcal{T}\) be a Polish semigroup topology on \(\mathcal{M}_{\mathbb{Q}}\) such that \(\mathcal{T}_{pw}\subseteq\mathcal{T}\subseteq\mathcal{T}_{023^{opn}}\). Let further \(O^{(0)}_{\tilde{x},\tilde{y}}\cap O^{(2)}_{LU}\cap\bigcap_{\ell=1}^{N}O^{(3)} _{(u_{\ell},v_{\ell})}\neq\emptyset\) be a nonempty \(\mathcal{T}_{023^{opn}}\)-basic open set. Then_ \[\overline{O^{(0)}_{\tilde{x},\tilde{y}}\cap O^{(2)}_{LU}\cap\bigcap_{\ell=1}^ {N}O^{(3)}_{(u_{\ell},v_{\ell})}}^{\mathcal{T}}=O^{(0)}_{\tilde{x},\tilde{y}} \cap\bigcap_{\ell=1}^{N}O^{(3)}_{(u_{\ell},v_{\ell})}.\] Proof.: The inclusion "\(\subseteq\)" follows from \(O^{(0)}_{\tilde{x},\tilde{y}}\cap\bigcap_{\ell=1}^{N}O^{(3)}_{(u_{\ell},v_{ \ell})}\) being \(\mathcal{T}_{pw}\)-closed, in particular \(\mathcal{T}\)-closed. For the other inclusion "\(\supseteq\)", take \(s\in O^{(0)}_{\tilde{x},\tilde{y}}\cap\bigcap_{\ell=1}^{N}O^{(3)}_{(u_{\ell}, v_{\ell})}\) and consider a \(\mathcal{T}\)-open set \(O\) containing \(s\). We have to show \(O^{(0)}_{\tilde{x},\tilde{y}}\cap O^{(2)}_{LU}\cap\bigcap_{\ell=1}^{N}O^{(3)} _{(u_{\ell},v_{\ell})}\cap O\neq\emptyset\). 
Pick \(f\in\mathcal{M}_{\mathbb{Q}}\) such that \(\mathrm{Im}(f)=\mathbb{Q}\setminus\left(\bigcup_{\ell=1}^{N}(u_{\ell},v_{\ell})\right)\supseteq\mathrm{Im}(s)\). By continuity of the translation map \(\lambda_{f}\), the preimage \(\lambda_{f}^{-1}(O)\) is \(\mathcal{T}\)-open. Lemma 2.9(i) yields a map \(s^{\prime}\in\mathcal{M}_{\mathbb{Q}}\) such that \(s=fs^{\prime}\). We conclude from \(\mathcal{T}_{pw}\subseteq\mathcal{T}\) that the intersection \(\emptyset\neq\lambda_{f}^{-1}(O)\cap O^{(0)}_{\bar{x},s^{\prime}(\bar{x})}\ni s^{\prime}\) is \(\mathcal{T}\)-open. By Lemma 5.18, the boundedness type \(O^{(2)}_{LU}\) is \(\mathcal{T}\)-dense, therefore there exists \(\tilde{s}^{\prime}\in\lambda_{f}^{-1}(O)\cap O^{(0)}_{\bar{x},s^{\prime}(\bar{x})}\cap O^{(2)}_{LU}\). We define \(\tilde{s}:=f\tilde{s}^{\prime}=\lambda_{f}(\tilde{s}^{\prime})\) and claim \[\tilde{s}\in O^{(0)}_{\bar{x},\bar{y}}\cap O^{(2)}_{LU}\cap\bigcap_{\ell=1}^{N}O^{(3)}_{(u_{\ell},v_{\ell})}\cap O\] which will complete the proof. We only argue \(\tilde{s}\in O^{(2)}_{LU}\), the rest is straightforward. If \(-\infty\) occurs among the \(u_{\ell}\), then \(L=\mathbb{R}\) since \(O^{(0)}_{\bar{x},\bar{y}}\cap O^{(2)}_{LU}\cap\bigcap_{\ell=1}^{N}O^{(3)}_{(u_{\ell},v_{\ell})}\neq\emptyset\). Further, \(\tilde{s}\) is bounded below since \(\tilde{s}\in\bigcap_{\ell=1}^{N}O^{(3)}_{(u_{\ell},v_{\ell})}\). If on the other hand \(-\infty\) is not contained among the \(u_{\ell}\), then \(f\) is unbounded below, so \(\tilde{s}\) is unbounded below if and only if \(\tilde{s}^{\prime}\) is unbounded below, which occurs if and only if \(L=\{-\infty\}\). Arguing analogously for upper bounds, we conclude \(\tilde{s}\in O^{(2)}_{LU}\).

Lemmas 5.18 and 5.19 finally enable us to show our reduction:

**Lemma 5.20**.: _It holds that \(\mathcal{T}_{023^{opn}}\rightsquigarrow\mathcal{T}_{03^{opn}}\)._

Proof.: Let \(O\in\mathcal{T}\). We show that \(O\) is a \(\mathcal{T}_{03^{opn}}\)-neighbourhood of every element of \(O\). Take \(s\in O\). By regularity, there exists \(P\in\mathcal{T}\) such that \(s\in P\subseteq\overline{P}^{\mathcal{T}}\subseteq O\). Since \(\mathcal{T}\subseteq\mathcal{T}_{023^{opn}}\), there exists a \(\mathcal{T}_{023^{opn}}\)-basic open set \[U=O^{(0)}_{\bar{x},\bar{y}}\cap O^{(2)}_{LU}\cap\bigcap_{\ell=1}^{N}O^{(3)}_{(u_{\ell},v_{\ell})}\] such that \(s\in U\subseteq P\), in particular \(s\in\overline{U}^{\mathcal{T}}\subseteq O\). By Lemma 5.19, the \(\mathcal{T}\)-closure of \(U\) is \(O^{(0)}_{\bar{x},\bar{y}}\cap\bigcap_{\ell=1}^{N}O^{(3)}_{(u_{\ell},v_{\ell})}\). Hence, \(O\) is indeed a \(\mathcal{T}_{03^{opn}}\)-neighbourhood of \(s\).

_Remark 5.21_.: As already stated in the concluding remarks of Section 3, the reduction \(\mathcal{T}_{023^{opn}}\rightsquigarrow\mathcal{T}_{03^{opn}}\) is the only one whose proof cannot be reformulated as a (Pseudo-)Property \(\overline{\mathbf{X}}\)-type statement. Starting from Proposition 3.6 and applying Proposition 3.3 along the route \(\mathcal{T}_{rich}=\mathcal{T}_{0123}\rightsquigarrow\mathcal{T}_{01^{cls}23^{opn}}\rightsquigarrow\mathcal{T}_{024}\rightsquigarrow\mathcal{T}_{023^{opn}}\), we obtain that \((\mathcal{M}_{\mathbb{Q}},\mathcal{T}_{023^{opn}})\) has automatic continuity with respect to the class of second countable topological semigroups. However, we cannot continue on to \(\mathcal{T}_{03^{opn}}\).
Thus, the reduction \(\mathcal{T}_{023^{opn}}\rightsquigarrow\mathcal{T}_{03^{opn}}\) is indeed fundamentally different.

### Reduction \(\mathcal{T}_{03^{opn}}\rightsquigarrow\mathcal{T}_{0}=\mathcal{T}_{pw}\)

In our final reduction, we eliminate the sets of type \(3^{\mathrm{opn}}\). The technique resembles those of Subsections 5.1 and 5.2, albeit with crucial involvement of the \(\mathcal{T}\)-density of \(\mathrm{Surj}(\mathbb{Q})\) shown in Subsection 5.3. The following easy observation gives an idea as to why this is important.

**Lemma 5.22**.: _Let \(\mathcal{T}\) be a Polish semigroup topology on \(\mathcal{M}_{\mathbb{Q}}\) such that \(\mathcal{T}\subseteq\mathcal{T}_{03^{opn}}\). Let further \(O\in\mathcal{T}\) and let \(f\in O\) be surjective. Then \(O\) is a \(\mathcal{T}_{pw}\)-neighbourhood of \(f\), in other words, there exists \(P\in\mathcal{T}_{pw}\) such that \(f\in P\subseteq O\)._

Proof.: With the same spirit as in Notation 5.16, we can write \[O=O_{pw}\cup\bigcup_{i\in I}\left(O^{(0)}_{\bar{x}^{(i)},\bar{y}^{(i)}}\cap\bigcap_{\ell=1}^{N^{(i)}}O^{(3)}_{(u^{(i)}_{\ell},v^{(i)}_{\ell})}\right),\] where \(O_{pw}\in\mathcal{T}_{pw}\), \(\bar{x}^{(i)},\bar{y}^{(i)}\) are tuples in \(\mathbb{Q}\), \(N^{(i)}\geq 1\) and \(u^{(i)}_{\ell},v^{(i)}_{\ell}\in\mathbb{Q}\). Since none of the sets \(O^{(0)}_{\bar{x}^{(i)},\bar{y}^{(i)}}\cap\bigcap_{\ell=1}^{N^{(i)}}O^{(3)}_{(u^{(i)}_{\ell},v^{(i)}_{\ell})}\) can contain surjective functions, \(f\) has to be contained in \(P:=O_{pw}\).

**Lemma 5.23**.: _It holds that \(\mathcal{T}_{03^{opn}}\rightsquigarrow\mathcal{T}_{0}=\mathcal{T}_{pw}\)._

Proof.: Let \(O\in\mathcal{T}\). We show that \(O\) is a \(\mathcal{T}_{pw}\)-neighbourhood of every element of \(O\). Take \(s\in O\). By \(\mathcal{T}\)-continuity of the composition map \(\circ\) and since \(s\circ\mathrm{id}_{\mathbb{Q}}\in O\), there exist \(\mathcal{T}\)-open sets \(U\) and \(V\) such that \(s\in U\), \(\mathrm{id}_{\mathbb{Q}}\in V\) and \(U\circ V\subseteq O\). Using Lemma 5.22, we can shrink \(V\) and assume that \(V\) is \(\mathcal{T}_{pw}\)-open; shrinking further, we can even take \(V\) to be \(\mathcal{T}_{pw}\)-basic open, so \(V=O^{(0)}_{\bar{x},\bar{x}}\). The set \(U\cap O^{(0)}_{\bar{x},s(\bar{x})}\) is a nonempty \(\mathcal{T}\)-open set. By Lemma 5.15(ii), the surjective functions form a \(\mathcal{T}\)-dense set, so there exists \(f\in U\cap O^{(0)}_{\bar{x},s(\bar{x})}\cap\mathrm{Surj}(\mathbb{Q})\). We claim that \(f\circ O^{(0)}_{\bar{x},\bar{x}}=O^{(0)}_{\bar{x},f(\bar{x})}\) (\(=O^{(0)}_{\bar{x},s(\bar{x})}\)). The inclusion "\(\subseteq\)" is clear; for the converse inclusion "\(\supseteq\)", we argue as follows: given \(\tilde{s}\in O^{(0)}_{\bar{x},f(\bar{x})}\), the finite partial map \(m\) defined by \(\bar{x}\mapsto\bar{x}\) satisfies \(\tilde{s}(p)=fm(p)\) for all \(p\in\mathrm{Dom}(m)\). Since \(f\) is surjective, we can apply Lemma 2.9(i) to find \(\tilde{s}^{\prime}\in\mathcal{M}_{\mathbb{Q}}\) such that \(\tilde{s}^{\prime}(\bar{x})=\bar{x}\) and \(\tilde{s}=f\tilde{s}^{\prime}\), thus proving the claim. We obtain \[s\in O^{(0)}_{\bar{x},s(\bar{x})}=f\circ O^{(0)}_{\bar{x},\bar{x}}\subseteq U\circ V\subseteq O,\] showing that \(O\) is indeed a \(\mathcal{T}_{pw}\)-neighbourhood of \(s\), as desired.
_Remark 5.24_.: We can reformulate the proof of Lemma 5.23 as follows: We show that \((\mathcal{M}_{\mathbb{Q}},\mathcal{T}_{pw})\) has Property \(\overline{\mathbf{X}}\) of length \(2\) with respect to \((\mathcal{M}_{\mathbb{Q}},\mathcal{T}_{03^{opn}})\), using the decomposition \(s=\mathrm{id}_{\mathbb{Q}}\,s\,\mathrm{id}_{\mathbb{Q}}\,\mathrm{id}_{\mathbb{Q}}\,\mathrm{id}_{\mathbb{Q}}\) where the first, third and fifth positions are fixed and the second and fourth positions are varying, subsequently yielding \(\tilde{s}=\mathrm{id}_{\mathbb{Q}}\,f\,\mathrm{id}_{\mathbb{Q}}\,\tilde{s}^{\prime}\,\mathrm{id}_{\mathbb{Q}}\). As in Remarks 5.7, 5.10 and 5.13, we apply Proposition 3.3(i) to the continuous map \(\mathrm{id}\colon(\mathcal{M}_{\mathbb{Q}},\mathcal{T}_{03^{opn}})\to(\mathcal{M}_{\mathbb{Q}},\mathcal{T})\) to obtain \(\mathcal{T}\subseteq\mathcal{T}_{pw}\). Observe that the existence of \(f\) requires the density statements from the previous reduction (which were shown using Polishness).
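For the reader's convenience, we summarize the complete route of reductions obtained in this section; the first three steps were listed in Remark 5.21, and the final two are Lemmas 5.20 and 5.23: \[\mathcal{T}_{rich}=\mathcal{T}_{0123}\rightsquigarrow\mathcal{T}_{01^{cls}23^{opn}}\rightsquigarrow\mathcal{T}_{024}\rightsquigarrow\mathcal{T}_{023^{opn}}\rightsquigarrow\mathcal{T}_{03^{opn}}\rightsquigarrow\mathcal{T}_{0}=\mathcal{T}_{pw}.\] In particular, every Polish semigroup topology \(\mathcal{T}\) on \(\mathcal{M}_{\mathbb{Q}}\) satisfying \(\mathcal{T}_{pw}\subseteq\mathcal{T}\subseteq\mathcal{T}_{rich}\) must coincide with \(\mathcal{T}_{pw}\).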
2310.06358
Core-Intermediate-Peripheral Index: Factor Analysis of Neighborhood and Shortest Paths-based Centrality Metrics
We perform factor analysis on the raw data of the four major neighborhood and shortest paths-based centrality metrics (Degree, Eigenvector, Betweenness and Closeness) and propose a novel quantitative measure called the Core-Intermediate-Peripheral (CIP) Index to capture the extent to which a node could play the role of a core node (nodes at the center of a network with larger values for any centrality metric) vis-a-vis a peripheral node (nodes that exist at the periphery of a network with lower values for any centrality metric). We conduct factor analysis (varimax-based rotation of the Eigenvectors) on the transpose matrix of the raw centrality metrics dataset, with the node ids as features, under the hypothesis that there are two factors (core and peripheral) that drive the values incurred by the nodes with respect to the centrality metrics. We test our approach on a diverse suite of 12 complex real-world networks.
Natarajan Meghanathan
2023-10-10T06:52:20Z
http://arxiv.org/abs/2310.06358v1
###### Abstract

We perform factor analysis on the raw data of the four major neighborhood and shortest paths-based centrality metrics (Degree, Eigenvector, Betweenness and Closeness) and propose a novel quantitative measure called the Core-Intermediate-Peripheral (CIP) Index to capture the extent to which a node could play the role of a core node (nodes at the center of a network with larger values for any centrality metric) vis-a-vis a peripheral node (nodes that exist at the periphery of a network with lower values for any centrality metric). We conduct factor analysis (varimax-based rotation of the Eigenvectors) on the transpose matrix of the raw centrality metrics dataset, with the node ids as features, under the hypothesis that there are two factors (core and peripheral) that drive the values incurred by the nodes with respect to the centrality metrics. We test our approach on a diverse suite of 12 complex real-world networks.

**Core-Intermediate-Peripheral Index: Factor Analysis of Neighborhood and Shortest Paths-based Centrality Metrics**

Natarajan Meghanathan, PhD, Professor of Computer Science, Jackson State University, MS. E-mail: [email protected]

**Keywords:** Factor analysis, Core nodes, Intermediate Nodes, Peripheral nodes, Centrality metrics, Varimax rotation

## 1 Introduction

The topological importance of nodes in complex networks has been analyzed in the literature from the perspectives of core-periphery structure and centrality metrics. While the core-periphery structure analysis of a network is more of a qualitative approach (and sometimes quantitative) at a mesoscopic level, centrality metrics are designed to quantify the topological importance of individual nodes in a network. The core-periphery analysis of a network is aimed at categorizing a node as either a core node or a peripheral node. The current status quo in the literature on the definitions of core nodes and peripheral nodes is that the core nodes need to be of larger degree and form a highly dense backbone to which the low degree peripheral nodes are connected; the peripheral nodes are expected not to be connected to other peripheral nodes either. Some of the works (e.g., [1-3]) in the literature have suggested that high degree nodes need not always be core nodes; but they still analyze the core-periphery structure and quantify the extent of coreness of a node within the realms of the above model. Several of the existing algorithms to analyze the core-periphery structure of a network would fail to identify node 2 (in Figure 1) of our running example graph of Section 2 as a core node and, on the other hand, would overrate and classify nodes 3 and 6 as core nodes. A close look at the example graph of Figure 1 would indicate that nodes 1 and 2 bring the two otherwise disconnected components of nodes ([1, 3, 5, 6, 9] and [2, 4, 7, 8, 10]) together and must be highly ranked as core nodes, whereas nodes 3 and 6 are to be treated as peripheral nodes, because their presence does not add any value to the network, other than increasing the degrees of nodes 1 and 5. We do not claim a core node has to be an articulation point [4] (whose removal would disconnect an otherwise connected graph); but we do expect a core node to add value to the connectivity of the nodes in the network to justify the notion that the core nodes are to form the center of the network and the peripheral nodes are at the boundary of the network.
In this paper, we seek to enhance the core-periphery model by proposing a more comprehensive definition for the core nodes and peripheral nodes on the basis of the values incurred by the nodes for both neighborhood-based and shortest paths-based centrality metrics. The four commonly studied centrality metrics spanning these two categories are: Degree (DEG) [5], Eigenvector (EVC) [6], Betweenness (BWC) [7] and Closeness (CLC) [8]. While DEG (measure of the number of neighbors of a node) and EVC (measure of the degree of a node as well as the degrees of its neighbors) are neighborhood-based metrics, BWC (measure of the extent to which the shortest paths between any two nodes in the network go through the node) and CLC (measure of the lengths of the shortest paths from the node to the rest of the nodes in the network) are shortest paths-based metrics. Our premise is: nodes that are part of the shortest paths for several node pairs and located closer to the majority (if not all) of the nodes in the network are more likely to incur reasonably larger values for the neighborhood-based centrality metrics as well. Accordingly, we propose the following definitions for a core node and a peripheral node: a core node should form the center of the network by being closer to the rest of the nodes and located on the shortest paths between several node pairs, whereas the peripheral nodes form the boundary of the network by being far away from a majority of the nodes and barely located on the shortest paths for any node pairs.

Several works in the literature have focused on either qualitatively/quantitatively assessing the core-periphery structure of a network or quantitatively assessing the centrality metric values incurred for the nodes and their correlations. To the best of our knowledge, we have not come across any work that analyzes the core-periphery structure of a network by taking into consideration the values incurred by the nodes with respect to a comprehensive set of centrality metrics spanning both the neighborhood-based and shortest paths-based categories. Our work in this paper takes the latter direction. Our hypothesis for this research is that the values incurred by a node with respect to different centrality metrics are majorly influenced by its location: either at the core or at the periphery of a network. Our hypothesis stems from the observation that the core nodes incur significantly larger values for the centrality metrics (especially BWC, CLC and DEG), while the peripheral nodes incur significantly lower centrality values. Though there are quite a few works in the literature (e.g., [1]) that quantify the extent of coreness of a node, none of these works takes the shortest paths aspect into consideration; they are all heavily based on degree centrality. This forms the motivation for our research. We propose to conduct factor analysis [9] on the centrality dataset (with respect to values incurred for the DEG, EVC, BWC and CLC metrics) of the nodes in a network and seek to quantify the latent (hidden) factors (core or peripheral) that drive the values incurred by a node with respect to the different centrality metrics.

The rest of the paper is organized as follows: Section 2 explains the proposed centrality dataset-based factor analysis approach for core-periphery analysis along with a running example graph.
Section 3 presents the results of running the proposed factor analysis approach on a suite of 12 diverse complex real-world networks and classifies them as heavy with respect to one or two of the three classes of nodes (core, peripheral and intermediate) that we identify from the factor analysis approach. Section 4 discusses related work in the literature and highlights the uniqueness of our work. Section 5 concludes the paper and presents plans for future work. Throughout the paper, the terms 'node' and 'vertex', 'edge' and 'link', 'network' and 'graph', 'measure' and 'metric' are used interchangeably. They mean the same.

## 2 Factor Analysis of Centrality Dataset

Factor analysis [9] is a widely used approach in machine learning to quantify the hidden factors that are behind the values incurred for the different features (columns) in a dataset (matrix) of records. For the problem at hand, the features are the nodes and we hence conduct factor analysis on the transpose matrix (that will have four rows, each pertaining to a centrality metric: DEG, EVC, BWC and CLC, and \(n\) columns, where \(n\) is the number of nodes in the network) of the centrality dataset. Figure 1 presents a toy 10-node graph that we will use as a running example in this section to illustrate our proposed procedure. The transpose matrix of the centrality dataset with the 10 nodes as features is shown as well.

We first obtain the covariance matrix (see Figure 1) of the transpose matrix of the centrality dataset and determine its Eigenvalues and Eigenvectors [10]. The covariance matrix comprises the Pearson's correlation coefficients [10] between the centrality metric values for any two nodes. Since we hypothesize that there are only two factors behind the centrality values incurred by the nodes, we retain only the Eigenvectors (EVs; see Figure 2) corresponding to the largest and second largest Eigenvalues. The entries in these two Eigenvectors (referred to respectively as the first principal and second principal Eigenvectors) are considered as the initial loadings for the nodes (features). We build a two-dimensional coordinate system (see Figure 2) of the Eigenvectors with these node loadings as the data points. We seek to synchronously rotate the two Eigenvector coordinate-axes (that are orthogonal to each other) in such a way that as many of the data points as possible are aligned on either of them (a procedure referred to as Varimax rotation [10] in the literature). Varimax rotation aims to maximize the communality score (sum of the squares of the loadings) for each node by going through repeated orthogonal rotations of the Eigenvector coordinate-axes. We conducted Varimax rotation using the relevant libraries available in Python (Pandas) [11]. The axes in the resulting rotated coordinate system (see Figure 2) correspond to the two factors (core and peripheral); more specifically, we treat the axis with which the nodes with larger values for BWC align as the vertical Y-axis (referred to as the core-axis) and the axis with which the nodes with lower or zero BWC values align as the horizontal X-axis (referred to as the peripheral-axis). The coordinates (final loadings; see Figure 2) of the nodes in such a rotated coordinate system are expected to be close to either (1, 0), if the node is a peripheral node, or (0, 1), if the node is a core node.

Figure 1: Running Example Graph; Transpose Matrix of its Centrality Dataset and its Covariance Matrix

Figure 2: Varimax Rotation of the Eigenvector-Axes to the Factor-Axes
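To make the pipeline concrete, the steps above can be sketched in Python (a minimal sketch, assuming an undirected networkx graph \(G\); the function names and the numpy-based Varimax routine below are ours and not the exact library calls used for this paper):

```python
import numpy as np
import networkx as nx

def centrality_transpose_matrix(G):
    """4 x n transpose matrix of the centrality dataset:
    rows = DEG, EVC, BWC, CLC; columns (features) = node ids."""
    nodes = sorted(G.nodes())
    metrics = (nx.degree_centrality(G),
               nx.eigenvector_centrality_numpy(G),
               nx.betweenness_centrality(G),
               nx.closeness_centrality(G))
    return np.array([[m[v] for v in nodes] for m in metrics]), nodes

def varimax(Phi, gamma=1.0, max_iter=100, tol=1e-6):
    """Orthogonal Varimax rotation of an n x 2 initial-loadings matrix Phi."""
    n, k = Phi.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        L = Phi @ R
        u, s, vh = np.linalg.svd(
            Phi.T @ (L ** 3 - (gamma / n) * L @ np.diag(np.diag(L.T @ L))))
        R = u @ vh
        d_new = s.sum()
        if d_new < d * (1 + tol):
            break
        d = d_new
    return Phi @ R

# G: the graph under study (e.g., built via nx.Graph(edge_list))
X, nodes = centrality_transpose_matrix(G)
C = np.corrcoef(X, rowvar=False)       # n x n matrix of Pearson's correlations
eigvals, eigvecs = np.linalg.eigh(C)   # eigenvalues in ascending order
initial = eigvecs[:, [-1, -2]]         # first and second principal Eigenvectors
final = varimax(initial)               # final loadings: one (x, y) point per node
# Orient the rotated axes so that the axis on which the high-BWC nodes load
# becomes the vertical core-axis, as described above.
```

Note that the procedure treats the node ids as the features, so the correlations in the matrix C are computed between nodes over only four observations (the four centrality metrics).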
**Core-Intermediate-Peripheral (CIP) Index:** When we conducted the above-described procedure for factor analysis on the centrality datasets of several complex real-world networks, we observed the density and pattern of the distribution of the final loadings (coordinates) of the nodes in the peripheral axis-core axis coordinate system to depend on whether the network is a random network [12] or a scale-free network [13] or in between these two extreme categories. This motivated us to determine the angle, computed via its tangent (referred to as the Core-Intermediate-Peripheral (CIP) Index), of the line joining the origin (0, 0) and the coordinates for a node in the peripheral axis-core axis coordinate system. The angle is measured anti-clockwise from the peripheral-axis for those coordinates lying in the first and second quadrants, and measured clockwise (yielding a negative value) from the peripheral-axis for those coordinates lying in the fourth quadrant. With the core-axis being the vertical Y-axis (and the peripheral-axis being the horizontal X-axis), the CIP index value for a node could be construed as a quantitative estimate of the extent to which a node could serve as a core node in the network. If the CIP index value for a node is closer to 90 degrees, then the node could be considered a core node, and if the CIP index value for a node is closer to 0 degrees, the node could be considered a peripheral node. Figure 3 illustrates the CIP angle measurements for nodes 2 (a core node), 4 (an intermediate node) and 9 (a peripheral node) in the example graph.

**3-element CIP Bins-Fraction Tuple:** We also propose a binning approach (with intervals of 10 degrees: (..., 0), [0, 10), [10, 20), ..., [80, 90) and [90, ...)) to group the nodes based on their CIP values. We treat the bins for CIP values falling in either of the two ranges [80, 90) and [90, ...) as those corresponding to the core nodes and the bins for CIP values falling in either of the two ranges (..., 0) and [0, 10) as those corresponding to the peripheral nodes. We categorize the rest of the CIP bins to belong to nodes that are neither core nor peripheral (we refer to such nodes as intermediate nodes). We observe the centrality values for such intermediate nodes to be neither too high nor too low (i.e., not negligible, but not appreciably high either). For a given network (of \(n\) nodes), we determine a 3-element CIP bins-fraction tuple that would comprise the fractions of the nodes in the core class, intermediate class and peripheral class. If the value for any of these three fractions is greater than or equal to 0.5, we categorize the network as heavy with respect to that class (i.e., core-heavy or peripheral-heavy or intermediate-heavy). If all the three fractions in the 3-element CIP bins-fraction tuple for a network are less than 0.5, we categorize the network as heavy with respect to the classes with the top two fraction values. We observe the random networks to be typically intermediate-heavy; scale-free networks with a lower variation of node degree to be core-heavy on their own or core/intermediate-heavy or core/peripheral-heavy; and scale-free networks with a larger variation of node degree to be peripheral-heavy. Figure 4 presents the CIP index values for all the nodes in the network as well as ranks them on the basis of the extent of coreness.
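The CIP index and the binning/classification rules just described can be sketched as follows (our own rendering; with the convention that fourth-quadrant angles come out negative, matching the (..., 0) bin):

```python
import numpy as np

def cip_index(x, y):
    """CIP index of a node with final loadings (x, y): the angle, in degrees,
    of the line joining the origin to (x, y), measured from the peripheral
    (X) axis. arctan2 returns positive (anti-clockwise) angles for the first
    and second quadrants and negative (clockwise) angles for the fourth."""
    return np.degrees(np.arctan2(y, x))

def cip_class(angle):
    if angle >= 80:           # bins [80, 90) and [90, ...): core
        return "core"
    if angle < 10:            # bins (..., 0) and [0, 10): peripheral
        return "peripheral"
    return "intermediate"     # bins [10, 20), ..., [70, 80)

def bins_fraction_tuple(angles):
    """3-element CIP bins-fraction tuple [C, I, P] for a network."""
    labels = [cip_class(a) for a in angles]
    return tuple(labels.count(c) / len(labels)
                 for c in ("core", "intermediate", "peripheral"))

def classify(cip_tuple):
    """Heavy w.r.t. one class if its fraction is >= 0.5, else the top two."""
    ranked = sorted(zip(cip_tuple, ("Core", "Intermediate", "Peripheral")),
                    reverse=True)
    if ranked[0][0] >= 0.5:
        return ranked[0][1] + "-heavy"
    return ranked[0][1] + "/" + ranked[1][1] + "-heavy"
```

For the example graph, feeding the ten final-loading pairs through these functions yields the tuple [0.3, 0.2, 0.5] and the label Peripheral-heavy reported below.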
To the best of our knowledge, this is the first such work to be able to individually rank the nodes with respect to the extent of coreness in the form of a real-valued quantitative measure taking into consideration both the neighborhood-based and shortest paths-based centrality metrics (unlike the \(k\)-core measure [14] and its variants [1] that can take only integer values and are also insensitive to the shortest paths-based centrality metrics). Figure 4 also presents a count of the number of nodes falling in each of the CIP index range bins: we observe 3 of the 10 nodes to be core, 2 nodes to be intermediate and 5 nodes to be peripheral. The 3-element CIP bins-fraction tuple for the example graph is thus [3/10, 2/10, 5/10] = [0.3, 0.2, 0.5]. Since the fraction of peripheral nodes satisfies the criterion of being 0.5 or above, we conclude the example graph to be peripheral-heavy. In addition, Figure 4 also visually illustrates the core-peripheral structure of the graph in a typical layout as well as using the Yifan Hu proportional layout algorithm [15] (in both the layouts, the node color corresponds to the class of the node). In the Yifan Hu proportional layout, the node size is proportional to the CIP index values of the nodes as well.

Figure 3: Sample Core-Intermediate-Peripheral (CIP) Index Angle Measurements for Nodes in the Example Graph

**Comparison with \(k\)-Core:** From the perspective of the procedure (see Section 4) to determine the \(k\)-core measure, all the four nodes 1, 3, 5 and 6 in the example graph of Section 2 would form a 3-core graph, the densest subgraph possible in the graph. But, per the CIP index measure proposed in this paper, node 1 is the only core node among these four nodes, whereas nodes 3 and 6 are peripheral nodes. We see the CIP index-based classification of these four nodes to more appropriately fit our definition of core nodes and peripheral nodes stated in Section 1. The shortest paths for none of the node pairs in the network go through nodes 3 or 6, and these two nodes are also relatively (compared to node 1) farther away from the rest of the nodes in the network. On the other hand, we see that the classification of nodes 1, 2 and 7 as core nodes (forming the center of the network) is well-justified by the visualization obtained using the Yifan Hu proportional layout algorithm.

## 3 Evaluation with Real-World Networks

In this section, we present the results obtained by running the proposed factor analysis procedure of Section 2 on a suite of 12 diverse real-world networks, ranging from pure random networks to extremely scale-free networks. The diversity among these networks is captured in the form of the spectral radius ratio for node degree (represented as \(\lambda_{\text{sp}}\)) [16], a metric that seamlessly captures the variation in node degree independent of the number of nodes and edges in the network. The \(\lambda_{\text{sp}}\) value [16] for a network is the ratio of the largest Eigenvalue of the 0-1 adjacency matrix of the network to the average node degree. Table 1 presents the name, number of nodes and edges and the \(\lambda_{\text{sp}}\) value of the networks as well as the results (the 3-element bins-fraction tuple and the network classification) for the 12 real-world networks. The networks in Table 1 are ordered in the increasing order of their \(\lambda_{\text{sp}}\) values.

Figure 4: Ranking, Classification and Visualization of the Nodes as Core, Intermediate and Peripheral Nodes based on their CIP Index Values
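The spectral radius ratio for node degree follows directly from its definition above (a minimal sketch; the function name is ours):

```python
import numpy as np
import networkx as nx

def spectral_radius_ratio(G):
    """lambda_sp: largest eigenvalue of the 0-1 adjacency matrix of G
    divided by the average node degree."""
    A = nx.to_numpy_array(G)                        # 0-1 adjacency matrix
    largest_eigenvalue = np.linalg.eigvalsh(A)[-1]  # symmetric matrix: real spectrum
    average_degree = 2.0 * G.number_of_edges() / G.number_of_nodes()
    return largest_eigenvalue / average_degree
```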
Most of these real-world networks have been used as benchmark networks in several studies related to complex network analysis. We use the magnitude of the entries in the 3-element bins-fraction tuple as the basis to classify a network: if the fraction of nodes for one of the three classes is greater than or equal to 0.50, we classify the network as heavy with respect to that particular class only (i.e., either core-heavy or intermediate-heavy or peripheral-heavy). If all the three fractions of nodes are less than 0.50, we classify the network as heavy with respect to the top two classes (referred to in the decreasing order of the fraction values). For example, the well-studied Karate Network (Net 6) incurs a 3-element bins-fraction tuple of [0.35, 0.21, 0.44] and accordingly the network is classified as peripheral/core-heavy, implying that the largest fraction of nodes in the network are peripheral nodes and the next largest fraction of nodes are the core nodes.

We observe the random networks (US Football Network and the Taro Exchange Network) to be intermediate-heavy. The rest of the real-world networks are scale-free networks with different levels of variation in node degree. We observe the scale-free networks with relatively lower \(\lambda_{\text{sp}}\) values to be predominantly core-heavy (mostly on their own or sometimes in association with either intermediate nodes or peripheral nodes). Among the five real-world networks with \(\lambda_{\text{sp}}\) values ranging from 1.21 to 1.73, we observe three of them to be core-heavy, one network to be intermediate/core-heavy and the other network to be peripheral/core-heavy. On the other hand, all the five scale-free real-world networks with a relatively larger variation in node degree (i.e., with \(\lambda_{\text{sp}}\) values \(\geq\) 1.82) are observed to be peripheral-heavy. These are significant observations that have hitherto not been reported in the literature.

In Figure 5, we notice the intermediate nodes to be typically nodes that are adjacent to the core nodes and/or the peripheral nodes; they may not be at the center of the network, but they are not at the boundary of the network either. It is very important to recognize such nodes and categorize them into a separate class rather than strictly following the core-peripheral two-layer model. Figure 5 presents a visualization of each of the 12 real-world networks (identified with the Net #s used in Table 1), in the increasing order of their \(\lambda_{\text{sp}}\) values. All the real-world networks are displayed per the Yifan Hu proportional layout algorithm run in Gephi [17]. We also follow the same coloring convention for the core, intermediate and peripheral nodes as is shown in Figure 4 (i.e., the core nodes are colored in blue, the peripheral nodes are colored in red, and the intermediate nodes are colored in whitish-yellow).

\begin{table} \begin{tabular}{|l|l|l|l|l|l|l|} \hline Net \# & Network Name & \#Nodes & \#Edges & \(\lambda_{\text{sp}}\) & 3-Element Bins-Fraction Tuple [C, I, P] & Network Classification \\ \hline Net 1 & US Football Net & 115 & 613 & 1.01 & [0.00, 1.00, 0.00] & Intermediate-heavy \\ \hline Net 2 & Taro Exchange Net & 22 & 39 & 1.06 & [0.36, 0.59, 0.05] & Intermediate-heavy \\ \hline Net 3 & Flying Teams Cadets Net & 48 & 170 & 1.21 & [0.50, 0.35, 0.15] & Core-heavy \\ \hline Net 4 & Dolphin Net & 62 & 159 & 1.40 & [0.68, 0.16, 0.16] & Core-heavy \\ \hline Net 5 & Band Jazz Net & 198 & 2742 & 1.44 & [0.33, 0.46, 0.21] & Intermediate/Core-heavy \\ \hline Net 6 & Karate Net & 34 & 78 & 1.47 & [0.35, 0.21, 0.44] & Peripheral/Core-heavy \\ \hline Net 7 & Adjacency Noun Net & 112 & 425 & 1.73 & [0.62, 0.26, 0.12] & Core-heavy \\ \hline Net 8 & Les Miserables Net & 77 & 254 & 1.82 & [0.28, 0.14, 0.58] & Peripheral-heavy \\ \hline Net 9 & Copper Field Net & 87 & 406 & 1.83 & [0.11, 0.39, 0.50] & Peripheral-heavy \\ \hline Net 10 & Anna Karenina Net & 138 & 493 & 2.48 & [0.22, 0.10, 0.68] & Peripheral-heavy \\ \hline Net 11 & US Airports 1997 Net & 332 & 2126 & 3.22 & [0.27, 0.18, 0.55] & Peripheral-heavy \\ \hline Net 12 & EU Air Transport Net & 405 & 1981 & 3.81 & [0.24, 0.21, 0.55] & Peripheral-heavy \\ \hline \end{tabular} \end{table} Table 1: Real-World Networks used for Centrality-based Factor Analysis and their 3-Element Bins CIP Fractions Tuple-based Classification
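As a quick cross-check (hypothetical; it reuses the classify() sketch from the Section 2 illustration), the classification labels in Table 1 follow mechanically from the tabulated tuples:

```python
# A few of the [C, I, P] tuples from Table 1, re-classified with classify().
table1_samples = {
    "US Football Net":      (0.00, 1.00, 0.00),
    "Band Jazz Net":        (0.33, 0.46, 0.21),
    "Karate Net":           (0.35, 0.21, 0.44),
    "EU Air Transport Net": (0.24, 0.21, 0.55),
}
for name, tup in table1_samples.items():
    print(name, "->", classify(tup))
# US Football Net -> Intermediate-heavy
# Band Jazz Net -> Intermediate/Core-heavy
# Karate Net -> Peripheral/Core-heavy
# EU Air Transport Net -> Peripheral-heavy
```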
We observe the real-world networks to showcase the trend explained above (i.e., random networks are intermediate-heavy; scale-free networks with lower \(\lambda_{\text{sp}}\) values are core-heavy on their own or in association with another class; scale-free networks with larger \(\lambda_{\text{sp}}\) values are peripheral-heavy). The two airport networks (US and EU) are peripheral-heavy and this is understandable given the presence of several stub airports compared to the hub airports (even though the number of hub airports is not negligible, vindicating the scale-free degree distribution of these networks). We also observe that for four of the five peripheral-heavy networks (including the two airport networks), the next largest fraction of nodes are the core nodes and not the intermediate nodes. Note that the Yifan Hu proportional layout algorithm was run on the real-world networks without any inputs regarding the CIP index values for the nodes. Only the node size and color were set per the CIP index value and the network classification, respectively. The presence of the blue-colored nodes at the core/center of the network and the red-colored nodes at the periphery of the network per the Yifan Hu proportional layout algorithm is an implicit validation of our approach.

Figure 5: Yifan Hu Proportional Layout Algorithm-based Visualization of the Real-World Networks [Node Size and Color set per the CIP Index Values and the Node Classification]

## 4 Related Work

Borgatti and Everett [18] formulated a hub-and-spoke structure for the core-peripheral nodes in a network: the core nodes are mainly adjacent to other core nodes and in some instances adjacent to peripheral nodes, whereas peripheral nodes do not connect with other peripheral nodes. The above model formulation fits mainly scale-free networks, but not random networks.
On the other hand, the \(k\)-core measure [14, 19] divides the network into a layered hierarchy and is by far the most commonly used centrality metric in the literature to capture the coreness of a node. The \(k\)-core of a graph is determined by removing all nodes whose degrees are less than \(k\); this procedure is repeatedly applied on the graphs resulting from the node removals, and it is stopped when all the remaining nodes in the graph (forming a connected subgraph) have degree greater than or equal to \(k\). The \(k\)-core value for all the nodes in the subgraph is \(k\). The above procedure is used repeatedly (with different values of \(k\)) to find the \(k\)-core values for all the nodes in the network. The \(k\)-core measure is strongly correlated with the EVC (Eigenvector centrality) of the nodes; but it does not take into account the participation of the nodes in the shortest paths between any two nodes, nor the proximity of the nodes to the rest of the nodes in the network. The procedure for determining the \(k\)-core measure is hence vulnerable (as we observed in the case of the running example graph of Section 2) to finding a highly dense subgraph (for a larger \(k\) value) that may be farther away from the rest of the nodes in the network. Besides, the \(k\)-core value for a node will be just an integer and several nodes could incur the same value of \(k\). Hence, the \(k\)-core procedure could lead to ambiguity in the ranking of the nodes (two or more nodes might incur the same \(k\)-core value). In 2010, a centrality metric referred to as _coreness_ [1] of a node was proposed and formulated as the sum of the \(k\)-core measures of the neighbors of the node.

The above-described \(k\)-core procedure is ideally suited for undirected and unit-weight graphs. Various modified versions of this procedure have been proposed in the literature, especially with respect to weighted graphs. Garas et al. [20] define the weighted degree of a node as the sum of its degree and the weights of the edges incident on the node. Proposals (e.g., [21-22]) that weigh these two terms (node degree and the weights of the incident edges) are also available in the literature. Our proposal to introduce the notion of intermediate nodes also fits within the \(k\)-core idea. The contribution of non-core nodes (the intermediate nodes and peripheral nodes in our jargon) has been recognized in [23] for the effective spread of social media protests from the core nodes (forming the epicenter) to the rest of the nodes in the network; though the non-core nodes may not have as many neighbors as the core nodes, the sheer number of such non-core nodes (especially for scale-free networks) was stated to be critical for information diffusion. Likewise, in the study of human brain dynamics, the authors of [24] note that the separation between the stiff temporal core region (composed primarily of sensorimotor and visual regions) and the flexible temporal peripheral region (composed primarily of multimodal association regions) is important to assess/predict the individual differences in learning success. In [1], the authors state that the most influential spreaders in a social network may not be the most central nodes with high BWC, thus acknowledging the contribution of the non-core nodes for information diffusion and epidemic spread. In [2-3], the authors advocated a core-periphery structure for networks wherein there are multiple cores and multiple peripheries, rather than a single hub-and-spoke model.
This in turn implies that not all cores and peripheries are of the same size and density, indicating a need for a third class of nodes (the intermediate nodes). We now review some of the methodologies (other than those based on the \(k\)-core measure) available in the literature to determine the core-periphery structure. In [25], the authors differentiate core-periphery structure from assortative mixing or disassortative mixing [26] by stating that the probability of connection between two core nodes is greater than the probability of connection between a core node and a peripheral node, which is in turn greater than the probability of connection between two peripheral nodes. The above rule formed the basis of their stochastic block model (involving the method of maximum likelihood) to determine the core-periphery structure. In [27], the authors use a random walker approach to build the core-periphery profile of the nodes in a network and use the measure of persistence probability (the fraction of time a walker would spend on a core node across the entire walk) as a quantitative measure of the coreness of the node. Note that the CIP index value proposed in our work is a deterministic quantity and captures the exact level of coreness for a node. In [28], the authors propose a motif-based approach that involves spectral analysis of the motif-adjacency matrix to identify the core nodes and periphery nodes of a network such that the degrees of the core nodes (respectively, peripheral nodes) are greater (respectively, lower) than the average degree; still, this approach is only degree-based and does not take into account the shortest paths-based centrality metrics. The minimum residual method [29] seeks to assign both the end vertices of an edge \((i,j)\) to either core or peripheral sets such that the square of the number of residuals is minimized. A residual for a pair of vertices \((i,j)\) is a 1 if there is an edge between \(i\) and \(j\), but \(i\) and \(j\) are not in the same grouping (core or periphery), or a -1 if there is no edge between \(i\) and \(j\), but \(i\) and \(j\) are in the same grouping of vertices.

## 5 Conclusions and Future Work

Our contributions in this paper are the following: (1) We propose a quantitative measure (referred to as the Core-Intermediate-Peripheral (CIP) Index) to capture the extent to which a node could serve as a core node in a network. Unless two nodes incur the same values for all the four centrality metrics (DEG, EVC, BWC, CLC) based on which the CIP index values for the nodes are determined, the CIP index values (a real-valued measure) for any two nodes are expected to be different and could be used to unambiguously rank the nodes in the network with respect to the extent the nodes could serve as core nodes. (2) Rather than two classes (core or peripheral), we propose three classes of nodes (core, intermediate and peripheral); the CIP index value incurred for a node could be used to assign the class to which a node belongs. The 3-element CIP bins-fraction tuple determined for a network could be used to decide whether the network is heavy with respect to a particular class or two of the three classes.
(3) Upon evaluation of the centrality metrics datasets of a suite of 12 diverse real-world networks, we observe random networks to be intermediate-heavy; scale-free networks with lower variation in node degree to be core-heavy on their own or in association with another class (intermediate or peripheral); and scale-free networks with relatively larger variation in node degree to be peripheral-heavy.

As part of future work, we plan to conduct factor analysis on centrality metrics datasets of real-world networks with the above four centrality metrics as features and determine the number of latent factors behind the centrality metric values incurred by nodes in random networks and scale-free networks with lower and higher variations in node degree.

## Acknowledgments

The work leading to this paper was partly funded through the U.S. National Science Foundation (NSF) grant OAC-1835439. The views and conclusions contained in this paper are those of the authors and do not represent the official policies, either expressed or implied, of the funding agency.
2307.14393
The cycle class of the supersingular locus of principally polarized abelian varieties
We prove a formula for the cycle class of the supersingular locus in the Chow ring with rational coefficients of the moduli space of principally polarized abelian varieties in characteristic $p$. This formula determines this class as a monomial in the Chern classes of the Hodge bundle up to a factor that is a polynomial in $p$. This factor is known for $g\leq 3$. We determine the factor for $g=4$.
Gerard van der Geer, Shushi Harashita
2023-07-26T08:45:18Z
http://arxiv.org/abs/2307.14393v1
# The cycle class of the supersingular locus of principally polarized abelian varieties

###### Abstract

We prove a formula for the cycle class of the supersingular locus in the Chow ring with rational coefficients of the moduli space of principally polarized abelian varieties in characteristic \(p\). This formula determines this class as a monomial in the Chern classes of the Hodge bundle up to a factor that is a polynomial in \(p\). This factor is known for \(g\leq 3\). We determine the factor for \(g=4\).

2020 Mathematics Subject Classification: 14G, 14K, 11G

## 1. Introduction

An abelian variety over a field \(k\) of characteristic \(p>0\) is called supersingular if it is isogenous to a product of supersingular elliptic curves over the algebraic closure of \(k\); equivalently, by [21, Thm. 4.2], if its formal isogeny type has a Newton polygon with all slopes equal to \(1/2\). Recall that the Newton polygon of an abelian variety starts at \((0,0)\), ends at \((2g,g)\), is lower convex and satisfies a symmetry condition. The two extreme cases are the polygon with slopes \(0\) and \(1\) and break point \((g,0)\) (the ordinary case) and the case with slope \(1/2\) (the supersingular case). Let \(\mathcal{A}_{g}\otimes\mathbb{F}_{p}\) be the moduli space of principally polarized abelian varieties of dimension \(g\) in characteristic \(p>0\). The supersingular locus \(S_{g}\) is defined as the closed subset of \(\mathcal{A}_{g}\otimes\mathbb{F}_{p}\) consisting of the principally polarized abelian varieties that are supersingular. This locus can be considered as the most degenerate stratum in the Newton polygon stratification on \(\mathcal{A}_{g}\otimes\mathbb{F}_{p}\). Its dimension is known by Li and Oort to be \([g^{2}/4]\) and also the number of irreducible components is known, see below.

Besides the Newton polygon stratification there is another stratification on \(\mathcal{A}_{g}\otimes\mathbb{F}_{p}\), the Ekedahl-Oort stratification. While the cycle classes of the Ekedahl-Oort stratification on \(\mathcal{A}_{g}\otimes\mathbb{F}_{p}\) are known, the cycle classes of the Newton polygon strata in general are not. For \(g=1\) and \(g=2\) the supersingular locus is a stratum of the Ekedahl-Oort stratification and thus the class is known. For \(g=3\) the supersingular locus is not a stratum of the Ekedahl-Oort stratification, but its cycle class was determined in joint work of the first author with Ekedahl, and the result was presented in [9]. In this paper we will prove a formula for the cycle class of the supersingular locus in the Chow ring with rational coefficients of a Faltings-Chai compactification \(\tilde{\mathcal{A}}_{g}\otimes\mathbb{F}_{p}\). This formula determines this class as a monomial in the Chern classes of the Hodge bundle up to a factor that is a polynomial in \(p\). Furthermore, we will determine the factor for the cycle class of the supersingular locus for \(g=4\). This latter determination builds upon the method used for the case of \(g=3\) and calculates the degrees of the top Chern classes of the Hodge bundle on a component of the supersingular locus. For this we construct an explicit smooth model of each irreducible component of \(S_{4}\). We also give the proof for the class for \(g=3\) that was not published in [9]. Including the well-known results for \(g=1\) and \(g=2\) we arrive at the following theorem.
**Theorem 1.1**.: _The cycle class of the supersingular locus \(S_{g}\) in the Chow ring with rational coefficients of a Faltings-Chai compactification \(\tilde{\mathcal{A}}_{g}\otimes\mathbb{F}_{p}\) of the moduli space \(\mathcal{A}_{g}\otimes\mathbb{F}_{p}\) lies in the tautological ring. More precisely, its class is of the form_ \[[S_{g}]=f_{g}(p)\begin{cases}\lambda_{g}\lambda_{g-2}\cdots\lambda_{2}&g\text{ even},\\ \lambda_{g}\lambda_{g-2}\cdots\lambda_{1}&g\text{ odd}\,,\end{cases}\] _where \(f_{g}(p)\) is a polynomial in \(p\) with rational coefficients and \(\lambda_{i}\) is the \(i\)th Chern class of the Hodge bundle on \(\tilde{\mathcal{A}}_{g}\otimes\mathbb{F}_{p}\). For \(g\leq 4\) the cycle class is given by_ \[[S_{g}]=\begin{cases}(p-1)\,\lambda_{1}&g=1\\ (p-1)(p^{2}-1)\,\lambda_{2}&g=2\\ (p-1)^{2}(p^{3}-1)(p^{4}-1)\,\lambda_{3}\lambda_{1}&g=3\\ (p-1)^{3}(p^{3}-1)(p^{4}-1)(p^{6}-1)\,\lambda_{4}\lambda_{2}&g=4.\end{cases}\] We also discuss for \(g=3\) and \(g=4\) the loci in the supersingular locus where the \(a\)-number is at least \(2\). ###### Contents * 1 Introduction * 2 The moduli space \(\mathcal{A}_{g}\otimes\mathbb{F}_{p}\) * 3 Irreducible components of the supersingular locus * 4 Flag type quotients * 5 Dieudonne modules and displays * 6 The cycle class of the supersingular locus * 7 Moduli of flag type quotients for \(g=3\) * 8 The cycle class of \(S_{3}\) * 9 Loci for \(g=3\) defined by conditions on the \(a\)-number * 10 Moduli of flag type quotients for \(g=4\) * 11 Interpretation of the morphism \(\mathcal{F}_{1}\to\mathcal{F}_{2}\) * 12 The Hodge bundle on the supersingular locus * 13 Loci with \(a\)-number \(\geq 2\) for \(g=4\) * 13.1 Loci of the first type * 13.2 Loci of the second type * 14 The fibres over \(\mathcal{F}_{2}(\mathbb{F}_{p^{2}})\) * 15 Superspecial points of \(S_{4}\) * 16 The cycle class of \(S_{4}\) and intersection numbers * 17 Determination of intersection numbers ## 2. The moduli space \(\mathcal{A}_{g}\otimes\mathbb{F}_{p}\) By \(\mathcal{A}_{g}\) we denote the moduli stack of principally polarized abelian varieties of dimension \(g\) and by \(\pi:\mathcal{X}_{g}\to\mathcal{A}_{g}\) the universal abelian variety over \(\mathcal{A}_{g}\). It is a Deligne-Mumford stack defined over \(\mathbb{Z}\). The moduli space \(\mathcal{A}_{g}\) carries a natural vector bundle \(\mathbb{E}\) of rank \(g\), the Hodge bundle, defined as \(\pi_{*}\Omega^{1}_{\mathcal{X}_{g}/\mathcal{A}_{g}}\). We denote by \(\tilde{\mathcal{A}}_{g}\) a Faltings-Chai compactification of \(\mathcal{A}_{g}\) as defined and treated in [6]. The Hodge bundle extends to \(\tilde{\mathcal{A}}_{g}\) and will again be denoted by \(\mathbb{E}\). In the rest of this paper we consider the moduli stack \(\mathcal{A}_{g}\otimes\mathbb{F}_{p}\) in characteristic \(p>0\). We set \(\lambda_{i}=c_{i}(\mathbb{E})\) for the \(i\)th Chern class of \(\mathbb{E}\) for \(i=1,\ldots,g\). These classes satisfy the relation \[(1+\lambda_{1}+\cdots+\lambda_{g})(1-\lambda_{1}+\cdots+(-1)^{g}\lambda_{g})=1\] and these classes generate a subring \(R_{g}\) of the Chow ring \(\mathrm{CH}^{*}_{\mathbb{Q}}(\tilde{\mathcal{A}}_{g}\otimes\mathbb{F}_{p})\) called the tautological ring, see [9, 5]. For \(0\leq n\leq g(g+1)/2\) the graded part of \(R_{g}\) of degree \(n\) has a basis \(\lambda_{1}^{e_{1}}\cdots\lambda_{g}^{e_{g}}\) with \(0\leq e_{i}\leq 1\) and \(\sum_{i}e_{i}i=n\). The ring \(R_{g}\) is a Gorenstein ring with socle generated by \(\lambda_{1}\lambda_{2}\cdots\lambda_{g}\). 
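To illustrate this description of \(R_{g}\) with a small worked example, for \(g=3\) the graded parts in degrees \(0,1,\ldots,6\) have bases \[1;\quad\lambda_{1};\quad\lambda_{2};\quad\lambda_{3},\,\lambda_{1}\lambda_{2};\quad\lambda_{1}\lambda_{3};\quad\lambda_{2}\lambda_{3};\quad\lambda_{1}\lambda_{2}\lambda_{3}\,,\] with the socle \(\lambda_{1}\lambda_{2}\lambda_{3}\) sitting in top degree \(g(g+1)/2=6\). 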
We will denote the degree of this \(0\)-cycle by \[v(g)=\deg\lambda_{1}\lambda_{2}\cdots\lambda_{g}\,,\] the Hirzebruch proportionality constant, and we have \[v(g)=(-1)^{g(g+1)/2}2^{-g}\zeta(-1)\zeta(-3)\cdots\zeta(1-2g),\] where \(\zeta(s)\) is the Riemann zeta function. We give a little table with relevant values: \begin{tabular}{|c|c|c|c|c|c|} \hline \(g\) & \(0\) & \(1\) & \(2\) & \(3\) & \(4\) \\ \hline \(v(g)\) & \(1\) & \(1/24\) & \(1/5760\) & \(1/2903040\) & \(1/1393459200\) \\ \hline \end{tabular} The tautological ring of \(\mathcal{A}_{g}\otimes\mathbb{F}_{p}\) is the quotient \(R_{g}/(\lambda_{g})\cong R_{g-1}\). The moduli space \(\mathcal{A}_{g}\otimes\mathbb{F}_{p}\) carries two important stratifications, the Ekedahl-Oort stratification and the Newton polygon stratification, see [22] and [23]. The strata of the Ekedahl-Oort stratification \(\mathcal{V}_{\mu}\) are indexed by Young diagrams or tuples \(\mu=[\mu_{1},\ldots,\mu_{r}]\) of integers with \(0\leq r\leq g\) and \(\mu_{i}>\mu_{i+1}\), according to the usage of [9, 4]. The largest open stratum \(\mathcal{V}_{[\emptyset]}\) is the locus of ordinary abelian varieties. The codimension of \(\mathcal{V}_{\mu}\) is \(\sum_{i}\mu_{i}\). The stratification can be extended to \(\tilde{\mathcal{A}}_{g}\). By [9, 4] we can calculate the cycle classes of the closed Ekedahl-Oort strata in \(\mathcal{A}_{g}\otimes\mathbb{F}_{p}\) and \(\tilde{\mathcal{A}}_{g}\otimes\mathbb{F}_{p}\). For example the cycle class of the locus of abelian varieties with \(p\)-rank \(\leq f\) (corresponding to \(\mu=[g-f]\)) is \[[\overline{\mathcal{V}}_{[g-f]}]=(p-1)(p^{2}-1)\cdots(p^{g-f}-1)\lambda_{g-f} \tag{1}\] and the cycle class of the smallest stratum, the locus of superspecial abelian varieties (corresponding to \(\mu=[g,g-1,\ldots,1]\)) is \[[\mathcal{V}_{[g,g-1,\ldots,1]}]=(p-1)(p^{2}+1)\cdots(p^{g}+(-1)^{g})\lambda_{1}\lambda_{2}\cdots\lambda_{g}\,.\] This formula implies as a special case a result of Ekedahl [3], namely that \[\sum\frac{1}{\#\mathrm{Aut}(X)}=(p-1)(p^{2}+1)\cdots(p^{g}+(-1)^{g})\,v(g)\,, \tag{2}\] where the sum is over the isomorphism classes of principally polarized superspecial abelian varieties over \(k\) and \(v(g)\) the proportionality constant defined above. A formula for the actual number of isomorphism classes of superspecial abelian varieties with a level \(n\geq 3\) structure is obtained by multiplying the formula for the degree of \(\mathcal{V}_{[g,g-1,\ldots,1]}\) by the degree of the natural map \(\mathcal{A}_{g}[n]\to\mathcal{A}_{g}\) (as stacks) with \(\mathcal{A}_{g}[n]\) the moduli space of principally polarized abelian varieties with a level \(n\) structure. ## 3. Irreducible components of the supersingular locus The number of irreducible components of the supersingular locus \(S_{g}\) in \(\mathcal{A}_{g}\otimes\mathbb{F}_{p}\) was determined by Deuring for \(g=1\), by Katsura and Oort for \(g=2\) ([15, 16]) and in general by Li and Oort for \(g\geq 3\), [18, 4.9]. The actual number of irreducible components in \(\mathcal{A}_{g}\otimes\overline{\mathbb{F}}_{p}\) is given by a class number \(h_{p}(g)\) for \(g\) odd and a similar class number \(h_{p}^{\prime}(g)\) for \(g\) even. Here \(h_{p}(g)\) (resp. \(h_{p}^{\prime}(g)\)) is the class number of the principal (resp. non-principal) genus in the hermitian space \(B^{g}\), with \(B\) the definite quaternion algebra ramified at \(p\) and \(\infty\). These class numbers are difficult to deal with, see for example [13, p. 
147], and one gets better and more useful formulas by counting in a stacky way, that is, taking into account weights equal to the inverse of the order of the automorphism groups of the objects that one counts. For example, for \(g=1\) the class number of the quaternion algebra \(B=\mathbb{Q}_{p,\infty}\) over \(\mathbb{Q}\) split outside \(p\) and \(\infty\) is given by \[h_{p}(1)=\frac{p-1}{12}+\frac{1}{4}\left(1-\left(\frac{-4}{p}\right)\right)+\frac{1}{3}\left(1-\left(\frac{-3}{p}\right)\right)\,,\] where \(\left(\frac{\cdot}{p}\right)\) denotes the Legendre symbol. We will denote by \(N_{g}\) the number of irreducible components of the supersingular locus, where each irreducible component is counted with a weight \(w\) with \(1/w=\#\text{Aut}(X_{\eta})\), where \(X_{\eta}\) denotes the principally polarized abelian variety corresponding to the generic point of the irreducible component. This number \(N_{g}\) has the property that the number \(N_{g}[n]\) of irreducible components of the supersingular locus on the moduli space \(\mathcal{A}_{g}[n]\) with a level \(n\geq 3\) structure equals \[N_{g}[n]=N_{g}\cdot\deg(\mathcal{A}_{g}[n]\to\mathcal{A}_{g})\,.\] **Proposition 3.1**.: _The number \(N_{g}\) of irreducible components of the supersingular locus in \(\mathcal{A}_{g}\otimes\overline{\mathbb{F}}_{p}\) is_ \[\begin{cases}(p-1)(p^{2}+1)(p^{3}-1)\cdots(p^{g}-1)\,v(g)&\text{for $g$ odd},\\ (p^{2}-1)(p^{6}-1)\cdots(p^{2g-2}-1)\,v(g)&\text{for $g$ even}.\end{cases}\] The stacky interpretation that we use reduces to the mass of the principal (resp. non-principal) genus and can be deduced from [3] or [12]. One finds this mass formula also in [8, p. 123]. For odd \(g\) the irreducible components of \(S_{g}\) are in bijective correspondence with the isomorphism classes of superspecial principally polarized abelian varieties of dimension \(g\), hence the formula for \(N_{g}\) follows immediately from Ekedahl's result (2). For even \(g\) one has a correction factor \[\frac{(p+1)(p^{3}+1)\cdots(p^{g-1}+1)}{(p^{2}+1)(p^{4}+1)\cdots(p^{g}+1)}\,.\] Here for \(g\) even the numerator can be interpreted as the number of totally isotropic subspaces of dimension \(g/2\) in a \(g\)-dimensional hermitian space over \(\mathbb{F}_{p^{2}}\) with conjugation given by Frobenius, while the denominator equals the number of totally isotropic subspaces of dimension \(g/2\) in a symplectic space of dimension \(g\) over \(\mathbb{F}_{p^{2}}\). ## 4. Flag type quotients Work of Oda and Oort ([20]) makes it possible to parametrize the irreducible components of the supersingular locus \(S_{g}\) by so-called flag type quotients. For an abelian variety \(X\) over a field of characteristic \(p\) we will denote the subgroup scheme \(\ker(F)\cap\ker(V)\) by \(A(X)\) with \(F\) and \(V\) Frobenius and Verschiebung on \(X\). 
It is a subgroup scheme of order \(p^{a(X)}\) with \(a(X)\) the \(a\)-number of \(X\). A supersingular abelian variety has \(1\leq a(X)\leq g\) and if \(a(X)=g\) and \(g\geq 2\) then \(X\) is isomorphic to the base change to \(k\) of a product \(E^{g}\) with \(E\) a supersingular elliptic curve defined over \(\mathbb{F}_{p}\). For a supersingular abelian variety \(X\) of dimension \(g\) one has \(a(X/A(X))\geq\min(g,a(X)+1)\), see [17]. By starting with \(X=X_{0}\) and putting \(X_{i+1}=X_{i}/A(X_{i})\) one arrives after \(g-1\) steps at a superspecial abelian variety \(X_{g-1}\), that is, an abelian variety with \(a(X_{g-1})=g\). Then the kernel of the dual map is contained in \(\ker(F^{g-1})\), hence one finds a homomorphism \(Y\to X\) with \(Y=X_{g-1}^{(p^{g-1})}\). This implies the fact that for a supersingular abelian variety \(X\) there exists a minimal isogeny \(\rho:E^{g}\to X\) with \(E\) a supersingular elliptic curve with the property that any other homomorphism \(h:Z\to X\) of a superspecial abelian variety \(Z\) factors uniquely through \(\rho\). If \(a(X)=1\) this minimal isogeny is obtained in \(g-1\) steps \[Y_{g-1}\to Y_{g-2}\to\dots\to Y_{0}=X\] where \(Y_{g-1}=E^{g}\otimes\operatorname{Spec}(k)\) and \(Y_{i}=Y_{g-1}/G_{i}\) for \(i=1,\dots,g-1\) with \(G_{i}=\ker(\rho)\cap Y_{g-1}[F^{g-1-i}]\). If \(a(X)>1\) this sequence need not be unique. Taking into account also the polarizations leads to the definition of a (polarized) flag type quotient. **Definition 4.1**.: A polarized flag type quotient of dimension \(g\) is a diagram of abelian varieties and homomorphisms \[(Y_{g-1},\eta_{g-1})\stackrel{\rho_{g-1}}{\longrightarrow}(Y_{g-2},\eta_{g-2})\stackrel{\rho_{g-2}}{\longrightarrow}\cdots\stackrel{\rho_{1}}{\longrightarrow}(Y_{0},\eta_{0})\] with \(Y_{g-1}\) superspecial and \(\eta_{g-1}\) a polarization with kernel \(Y_{g-1}[F^{g-1}]\) satisfying 1. \(\ker(\rho_{i})\subset A(Y_{i})\) is of order \(p^{i}\); 2. \(\ker(\eta_{i})\subseteq\ker(F^{i-j}\circ V^{j})\) for \(0\leq j\leq i/2\). This flag type quotient is called rigid if \(G_{i}=G_{0}\cap Y_{g-1}[F^{g-1-i}]\) with \(G_{0}=\ker(Y_{g-1}\to Y_{0})\cap Y_{g-1}[F^{g-1}]\). The term 'rigid' refers to the fact that in this case the corresponding flag type is unique. The main references for flag type quotients are [20] and [18, 9.6-9.7]. ## 5. Dieudonne modules and displays The theory of Dieudonne modules makes it possible to describe flag type quotients in terms of Dieudonne modules. Here \(k\) will denote an algebraically closed field of characteristic \(p\) and \(W=W(k)\) the ring of Witt vectors of \(k\). We define a ring \[A=W[F,V]/(FV-p,VF-p,Fa-a^{\sigma}F,aV-Va^{\sigma},\,\forall a\in W)\] and set \(A_{1,1}:=A/(F-V)\). A polarized flag type quotient as described in Definition 4.1 corresponds to a flag of contravariant Dieudonne modules \[M_{0}\subset M_{1}\subset\cdots\subset M_{g-1}\] satisfying 1. \(M_{g-1}=A_{1,1}^{g}\) provided with a quasi-polarization \[\langle\,,\,\rangle:M_{g-1}\otimes_{W}M_{g-1}\to Q(W)\] that induces an identification \(M_{g-1}^{t}=F^{g-1}M_{g-1}\); 2. \((F,V)M_{i}\subset M_{i-1}\) and \(\dim(M_{i}/M_{i-1})=i\) for \(i=1,\dots,g-1\); 3. \((F,V)^{i}M_{i}\subset M_{i}^{t}\) for \(i=0,\dots,g-1\). We call such a flag a polarized Dieudonne flag of length \(g\). It is called rigid if \(M_{i}=M_{0}+F^{g-1-i}M_{g-1}\) for \(i=0,\dots,g-1\). We observe that rigidity implies \[M_{i}=M_{m}+F^{g-1-i}M_{g-1}\quad\text{for }m<i\leq(g-1)\,.\] We can translate rigid polarized flag type quotients in terms of displays, replacing Dieudonne modules by displays. We recall the definition of displays (cf. [24, Section 1]). Let \(R\) be a commutative unitary ring. 
Let \(W(R)\) be the ring of Witt vectors and \(Q(W)\) its field of fractions. Let \(\mathfrak{f}:W(R)\to W(R)\) be Frobenius and \(\mathfrak{v}:W(R)\to W(R)\) Verschiebung. Set \(I_{R}=\mathfrak{v}(W(R))\). A _display over \(R\)_ is a quadruple \((P,Q,F,V^{-1})\) consisting of a finitely generated projective \(W(R)\)-module \(P\), a \(W(R)\)-submodule \(Q\) of \(P\) and homomorphisms \(F:P^{(p)}\to P\) and \(V^{-1}:Q^{(p)}\to P\), where \(M^{(p)}:=W(R)\otimes_{\mathfrak{f},W(R)}M\), with the properties: 1. \(I_{R}P\subset Q\subset P\) and there exists a decomposition of \(P\) into a direct sum of \(W(R)\)-modules \(P=L\oplus T\), such that \(Q=L\oplus I_{R}T\); 2. \(V^{-1}\) is an epimorphism; 3. For \(x\in P\) and \(w\in W(R)\) we have \(V^{-1}(1\otimes\mathfrak{v}(w)x)=wFx\). By [24, Lemma 9], we have an isomorphism \[V^{-1}\oplus F:(L\oplus T)^{(p)}\to P. \tag{3}\] The matrix (with respect to a basis of \(P\)) associated to this isomorphism is a generalization of the classical display ([19]). _Remark 5.1_.: If \(R\) is a perfect field, then \(P\) is the usual Dieudonne module, \(I_{R}=p\,W(R)\) and \(Q\) is the \(V\)-image \(VP\), so \(Q\) is determined by the Dieudonne module \(P\). But if \(R\) is not a perfect field, then \(Q\) is not determined by \(P\) with \(F,V\); conversely \(P\) is determined by the \(V^{-1}\)-image of \(Q^{(p)}\). By a result of Li-Oort [18, 3.7] the moduli space of polarized Dieudonne flags of length \(g\) exists and is projective. Moreover, by [18, 3.7] the moduli of rigid polarized Dieudonne flags of length \(g\) exists and is quasi-projective, and by [18, 7.6] it is non-singular. ## 6. The cycle class of the supersingular locus In this section we will show that the cycle class of the supersingular locus \(S_{g}\) in \(\mathcal{A}_{g}\otimes\mathbb{F}_{p}\) lies in the tautological ring \(R_{g}\) generated by the Chern classes \(\lambda_{i}\) \((i=1,\dots,g)\) of the Hodge bundle \(\mathbb{E}\) on a Faltings-Chai compactification of \(\mathcal{A}_{g}\otimes\mathbb{F}_{p}\) and give a formula for it that fixes the class up to a multiplicative constant. Here the cycle class is taken in the Chow ring with rational coefficients of a Faltings-Chai compactification \(\tilde{\mathcal{A}}_{g}\otimes\mathbb{F}_{p}\) of \(\mathcal{A}_{g}\otimes\mathbb{F}_{p}\). **Theorem 6.1**.: _The cycle class of the supersingular locus on \(\mathcal{A}_{g}\otimes\mathbb{F}_{p}\) in \(\operatorname{CH}_{\mathbb{Q}}(\tilde{\mathcal{A}}_{g}\otimes\mathbb{F}_{p})\) is a non-zero multiple of \(\lambda_{g}\lambda_{g-2}\cdots\lambda_{1}\) if \(g\) is odd and of \(\lambda_{g}\lambda_{g-2}\cdots\lambda_{2}\) if \(g\) is even. The multiple is a polynomial in \(p\)._ Proof.: For the proof we will use the presentation of Frobenius on the covariant Dieudonne module \(M\) with \(p\)-rank \(0\) and \(a\)-number \(1\) as given by Oort in [23]. His description of the display of such a module \(M\) is as follows. 
With \(W\) the Witt ring of \(k\), an algebraically closed field of characteristic \(p>0\), there exists a \(W\)-basis \(e_{1},\dots,e_{g},e_{g+1},\dots,e_{2g}\) such that Frobenius is given by the formulas \[Fe_{j}=\sum_{i=1}^{2g}\gamma_{ij}e_{i},\quad(1\leq j\leq g)\,,\] \[e_{j}=V(\sum_{i=1}^{2g}\gamma_{ij}e_{i})\quad(g+1\leq j\leq 2g)\,,\] where \(\gamma=(\gamma_{ij})\) is a \(W\)-valued \(2g\times 2g\) matrix which is symplectic in the sense that \[\gamma\begin{pmatrix}0&1_{g}\\ -1_{g}&0\end{pmatrix}\gamma^{t}=\begin{pmatrix}0&1_{g}\\ -1_{g}&0\end{pmatrix}\,.\] We write \(\gamma\) as a matrix of \(g\times g\) blocks \[\gamma=(\gamma_{ij})=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\,.\] We denote the Frobenius endomorphism of the Witt ring \(W\) by \(\sigma\). Note that the \(\sigma\)-linear map \(F\) is given by the matrix \[\begin{pmatrix}a&pb\\ c&pd\end{pmatrix}\,.\] Oort shows ([23, p. 191]) that if \(M\) has \(p\)-rank \(0\) and \(a\)-number \(1\) we may choose the basis such that the matrix \(\gamma\) is of the form \[a_{ij}=d_{ij}=\begin{cases}1&i=j+1\\ 0&i\neq j+1\end{cases},\quad c_{ij}=\begin{cases}1&(i,j)=(1,g)\\ 0&\text{else}\end{cases}\,,\] and \(b_{ig}=0\) for \(i\neq 1\). Given such a basis in normal form, we have according to [23, Lemma 2.6] that there exists a \(P\in A\) such that \[F^{2g}e_{1}=Pe_{1}\quad\text{with}\quad P=\sum_{i=1}^{g}\sum_{j=g+1}^{2g}p^{j-g}\gamma_{ij}^{\sigma^{2g-j}}F^{2g+i-j-1}\,, \tag{4}\] with \(Fx=x^{\sigma}F\) for \(x\in W[F,V]\) and repeated application of \(F\) is in the \(\sigma\)-linear sense (cf. [23, p. 195]). **Lemma 6.2**.: _We have \(\gamma_{1,j}=0\) for \(j=g+1,\ldots,2g-1\). Moreover, the square matrix_ \[\tilde{\gamma}=\begin{pmatrix}\gamma_{2,g+1}&\ldots&\gamma_{2,2g-1}\\ \vdots&&\vdots\\ \gamma_{g,g+1}&\ldots&\gamma_{g,2g-1}\end{pmatrix}\] _is symmetric._ Proof.: We have \(ab^{t}=ba^{t}\) and \(b^{t}d=d^{t}b\). In view of the shape of the matrices \(a\) and \(d\) the result follows as \(\gamma\) is symplectic. _Remark 6.3_.: We know that the Ekedahl-Oort stratum \(\mathcal{V}_{\mu}\) with \(\mu=[g,1]\) corresponding to \(p\)-rank \(0\) and \(a\)-number \(2\) has codimension \(1\) in the \(p\)-rank \(0\) locus \(V_{0}\), hence the generic point of every irreducible component of \(V_{0}\) has \(a=1\). Moreover, by the results of Li-Oort [18] we know that each irreducible component of the supersingular locus \(S_{g}\) has an open dense subset where the \(a\)-number equals \(1\). One can read off supersingularity from the matrix \(\tilde{\gamma}\) using Oort's result on the action of \(F\) on \(e_{1}\) given in (4), see [23, Prop. 2.7]. **Corollary 6.4**.: _The module \(M\) is supersingular if \(\gamma_{ij}\equiv 0\,(\operatorname{mod}p)\) for \(2\leq i\leq g-1\), \(g+1\leq j\leq 2g-2\) with \(i+j\leq 2g\)._ Note that because of the symmetry this gives a priori \[\sum_{j=1}^{\lfloor g/2\rfloor}(g-2j)=\frac{g(g-1)}{2}-[\frac{g^{2}}{4}]=\dim V_{0}-\dim S_{g}\] conditions for supersingularity, where \(V_{0}\) is the \(p\)-rank zero locus. 
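As an illustration of this count, spelled out for the case \(g=4\) treated later in this paper, Corollary 6.4 asks for the vanishing modulo \(p\) of \(\gamma_{2,5}\) and \(\gamma_{3,5}\) only, the remaining entry \(\gamma_{2,6}\) being equal to \(\gamma_{3,5}\) by the symmetry of Lemma 6.2; this agrees with \[\sum_{j=1}^{2}(4-2j)=2=\frac{4\cdot 3}{2}-\left[\frac{4^{2}}{4}\right]=\dim V_{0}-\dim S_{4}\,.\] 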
The strategy is now to impose consecutively conditions that together imply supersingularity by Corollary 6.4; we begin by requiring the vanishing modulo \(p\) of the column of entries that is the transpose of \[(\gamma_{2,g+1},\ldots,\gamma_{g-1,g+1})\,,\] and continue by requiring the vanishing modulo \(p\) of the column of entries whose transpose is \[(\gamma_{3,g+2},\ldots,\gamma_{g-2,g+2})\,,\] and so on, till finally the column with transpose \((\gamma_{g/2,3g/2-1},\gamma_{g/2+1,3g/2-1})\) of length \(2\) for \(g\) even or the vanishing of the single entry \(\gamma_{(g+1)/2,(3g-1)/2}\) for \(g\) odd. For example, for \(g=5\) we require the vanishing modulo \(p\) of the entries \(\gamma_{26},\gamma_{36},\gamma_{46}\) and \(\gamma_{37}\) in the symmetric matrix \[\tilde{\gamma}=\begin{pmatrix}\gamma_{26}&\gamma_{27}&\gamma_{28}&\gamma_{29}\\ \gamma_{36}&\gamma_{37}&\gamma_{38}&\gamma_{39}\\ \gamma_{46}&\gamma_{47}&\gamma_{48}&\gamma_{49}\\ \gamma_{56}&\gamma_{57}&\gamma_{58}&\gamma_{59}\end{pmatrix}\] giving \(4\) conditions. In terms of displays, we have an \(\mathfrak{f}\)-linear map \(V^{-1}\oplus F:M=L\oplus T\to M\), see (3). We write \(F/p\) for the composition \[VM/pM\to M/pM\xrightarrow{V^{-1}\oplus F}M/pM\to M/VM\,.\] This map is given by the square matrix \((\gamma_{ij})_{1\leq i\leq g,g+1\leq j\leq 2g}\). Then by the vanishing indicated in Lemma 6.2 we may restrict to submodules of rank \(g-1\) generated by \(g-1\) consecutive generators in \(VM/pM\) and \(M/VM\): \[G=\langle e_{g+1},e_{g+2},\ldots,e_{2g-1}\rangle\longrightarrow H=\langle e_{2},e_{3},\ldots,e_{g}\rangle\,.\] We have increasing filtrations for \(i=1,\ldots,g-1\) given by \[G_{i}=\langle e_{g+1},e_{g+2},\ldots,e_{g+i}\rangle\quad\text{and}\quad H_{i}=\langle e_{g},e_{g-1},\ldots,e_{g+1-i}\rangle\,.\] That the \(p\)-rank is zero means that the image of \(G_{g-1}\) is in \(H_{g-1}\). If we identify \(\operatorname{Lie}(X)\) with \(VM/pM\) for the abelian variety \(X\) corresponding to the dual of \(M\) (cf. [1, 4.3.12] and [18, 5.4, 7.4]), we can view the induced map \(F/p:G_{g-1}\to H_{g-1}\) as a symmetric morphism between vector bundles of rank \(g-1\) made from the Hodge bundle and its dual by Frobenius twists. Since we wish to have the filtrations we will have to work on a cover of the \(p\)-rank zero locus \(V_{0}\). We now consider \(G\langle 1\rangle\), the module generated by \(e_{g+1}\). We require that it maps to zero modulo \(p\) under \(F/p:G\langle 1\rangle\to H\langle 1\rangle\) with the module \(H\langle i\rangle\) generated by \(e_{i+1},\ldots,e_{g-i}\). We can view the semi-linear map \(G\langle 1\rangle\to H\langle 1\rangle\) defined by \(F/p\) modulo \(p\) as a morphism of a line bundle to a vector bundle of rank \(g-2\), where these bundles are made from the Hodge bundle by truncations and Frobenius twists. We consider the locus where this morphism vanishes. The vanishing of this morphism corresponds to the vanishing modulo \(p\) of the vector \((\gamma_{2,g+1},\ldots,\gamma_{g-1,g+1})\). If this morphism vanishes then by the symmetry \(\gamma_{2,g+2}\) vanishes modulo \(p\) and we can consider a morphism \(G\langle 2\rangle\to H\langle 2\rangle\) induced by \(F/p\) with \(G\langle j\rangle=G_{j}/G_{j-1}\) generated by \(e_{g+j}\) and require its vanishing modulo \(p\). By induction, assuming the vanishing modulo \(p\) of the semi-linear morphism \[G\langle j\rangle\longrightarrow H\langle j\rangle \tag{5}\] for \(j=1,\ldots,s\), we get a next morphism \(G\langle s+1\rangle\to H\langle s+1\rangle\). 
We require inductively that these morphisms vanish for \(j=1,\ldots,[(g-1)/2]\) on an appropriate covering space of \(V_{0}\) where we have the filtrations. Supersingularity follows if the conditions that the induced map \(G\langle j\rangle\to H\langle j\rangle\) is zero are satisfied successively for \(j=1,\ldots,[(g-1)/2]\). The locus where the morphism (5) for \(j=1\) is zero has cycle class the \((g-2)\)th Chern class of the dual of \(G\langle 1\rangle\otimes(H\langle 1\rangle)^{\vee}\). We now work on the space of flags \(\mathfrak{F}\) on the cohomology \(H^{1}_{\text{dR}}\) of the universal principally polarized abelian variety as introduced in [4, Section 3]; more precisely, we work on the closure of the final stratum \(\mathfrak{F}_{w}\) corresponding to \(p\)-rank zero. Here the Hodge bundle \(\mathbb{E}\) has a filtration \(\mathbb{E}(i)\) for \(i=1,\ldots,g\) with \(\operatorname{rank}(\mathbb{E}(i))=i\). We can view the induced map \(F/p:G_{g-1}\to H_{g-1}\) as a symmetric morphism between modules of rank \(g-1\) that induces a morphism of vector bundles \(G\langle 1\rangle\to H\langle 1\rangle\) on \(\mathfrak{F}_{w}\). The vector bundles induced by \(G\) and \(H\) have filtrations and the Chern classes of their graded quotients are of the form \(\pm p^{r_{i}}\ell_{i}\) where \(\ell_{i}=c_{1}(\mathbb{E}(i)/\mathbb{E}(i-1))\) (\(i=1,\ldots,g\)) are the Chern classes of the graded quotients of the Hodge bundle on the final stratum and \(r_{i}\in\mathbb{Z}\). The conditions on the vanishing modulo \(p\) of rows of entries can now be viewed as a degeneracy condition for a symmetric morphism between vector bundles on \(\mathfrak{F}_{w}\). We shall calculate the cycle class of the Zariski closure of the degeneracy locus of this map over the open part of \(V_{0}\) where \(a=1\). This Zariski closure is contained in the supersingular locus as the Newton polygon can only go up under specialization. Moreover, for \(g\geq 2\) each irreducible component of \(S_{g}\) has an open dense set with \(a=1\), hence intersects the degeneracy locus over \(V_{0}\). We know that the codimension of the degeneracy locus equals the number of conditions imposed by Corollary 6.4 in the supersingular case, hence also for the intermediate cases defined by the vanishing of \(G\langle j\rangle\to H\langle j\rangle\). The theory of degeneracy loci [7] tells us that the cycle classes of these degeneracy loci on \(\mathfrak{F}_{w}\) are polynomials in the classes \(\ell_{i}\). To calculate these, we begin by remarking that the cycle class of the \(p\)-rank zero locus \(V_{0}\) in \(\tilde{\mathcal{A}}_{g}\otimes\mathbb{F}_{p}\) is a multiple of \(\lambda_{g}\) by [4]. We carry out induction and assume that the image under the Gysin map from \(\mathfrak{F}_{w}\) to \(\mathcal{A}_{g}\otimes\mathbb{F}_{p}\) of the class of the locus over \(V_{0}\) where \(F/p\) maps \(G\langle s\rangle\) to zero in \(H\langle s\rangle\) for \(s=1,\ldots,j-1\) is a multiple of \(\lambda_{g}\lambda_{g-2}\cdots\lambda_{g+2-2j}\). The locus where the morphism \(G\langle j\rangle\to H\langle j\rangle\) is zero has as cycle class the \((g-2j)\)th Chern class of the dual of \(G\langle j\rangle\otimes(H\langle j\rangle)^{\vee}\). 
With \(r=g-2j=\operatorname{rank}(H\langle j\rangle)\) this Chern class is \[(-1)^{r}(c_{r}(H\langle j\rangle)-c_{r-1}(H\langle j\rangle)c_{1}(G\langle j\rangle))\,.\] In order to calculate the class of the corresponding locus on \(\tilde{\mathcal{A}}_{g}\otimes\mathbb{F}_{p}\) we have to apply a Gysin map from the Chow group of \(\mathfrak{F}_{w}\) to the Chow group of \(\tilde{\mathcal{A}}_{g}\otimes\mathbb{F}_{p}\) and calculate the image of the class of the degeneracy locus. We first look at the case \(j=1\). **Lemma 6.5**.: _The pushdown to \(\tilde{\mathcal{A}}_{g}\otimes\mathbb{F}_{p}\) of the classes \(c_{g-2}(H\langle 1\rangle)\) and \(c_{g-3}(H\langle 1\rangle)c_{1}(G\langle 1\rangle)\) on \(\mathfrak{F}_{w}\) are multiples of \(\lambda_{g-2}\)._ Proof.: The filtration on \(\mathbb{E}\) is extended to the de Rham bundle by \(\mathbb{E}_{g+i}=(\mathbb{E}_{g-i})^{\perp}\) as in [4, Section 3]. This symplectic pairing is different from the one used in the description of the display in [23]. Since we use covariant Dieudonne modules we have to take duals and Frobenius twists to relate the Chern roots of \(G\langle j\rangle\) and \(H\langle j\rangle\) to those of the Hodge bundle. The Chern roots of \(G\langle j\rangle\) and \(H\langle j\rangle\) are determined by the filtrations \(G_{i}\) and \(H_{i}\). We write \(l_{i}\) for these roots, while writing \(\ell_{i}\) for the roots of \(\mathbb{E}\). Then the Chern roots of \(H\langle 1\rangle\) given by this filtration are \(l_{2},...,l_{g-1}\) and that of \(G\langle 1\rangle\) is \(-l_{1}\). The Chern class \(c_{g-2}(H\langle 1\rangle)\) is then the \((g-2)\)th elementary symmetric function in \(l_{2},\dots,l_{g-1}\). The \((g-2)\)th symmetric function in \(l_{2},\dots,l_{g-1}\) is a Frobenius twist of the \((g-2)\)th symmetric function in \(l_{1},\dots,l_{g-2}\) (cf. the proof of [4, Lemma 12.3]) and is a multiple of \(\lambda_{g-2}(g-2)=c_{g-2}(\mathbb{E}(g-2))\). Now by [4, Lemma 12.3] the pushdown of \(\lambda_{g-2}(g-2)\) equals a non-zero multiple of \(\lambda_{g-2}\). The morphism from \(\mathfrak{F}\) to \(\mathcal{A}_{g}\) is fibered by generically finite morphisms \(\pi_{i}\) defined by forgetting a step of the flag \(\mathbb{E}(i)\subsetneq\mathbb{E}(i+1)\subsetneq\dots\subsetneq\mathbb{E}(g)\). We have for the Chern classes \(\lambda_{r}(i)=c_{r}(\mathbb{E}(i))\) of the partial flag the formula \((\pi_{i})^{*}(\lambda_{r}(i+1))=\ell_{i+1}\lambda_{r-1}(i)+\lambda_{r}(i)\). For the Chern roots \(l_{i}\) that we use here a similar formula holds. Therefore, again by [4, Lemma 12.3], the pushdown of \(c_{g-3}(H\langle 1\rangle)c_{1}(G\langle 1\rangle)\) is also a multiple of \(\lambda_{g-2}\). We conclude that the class of the locus where \(G\langle 1\rangle\to H\langle 1\rangle\) vanishes on \(V_{0}\) is a multiple of the class \(\lambda_{g-2}\) on \(V_{0}\). Since this is a Chern class of a vector bundle on \(\tilde{\mathcal{A}}_{g}\otimes\mathbb{F}_{p}\) and the class of \(V_{0}\) is a multiple of \(\lambda_{g}\) we find that the class of the vanishing locus in \(\tilde{\mathcal{A}}_{g}\otimes\mathbb{F}_{p}\) of this bundle morphism on \(V_{0}\) is a multiple of \(\lambda_{g}\lambda_{g-2}\). We now carry out induction. We restrict to the locus \(Z\) where the consecutive morphisms \(G\langle s\rangle\to H\langle s\rangle\) for \(s=1,\dots,j-1\) vanish. 
Then the class of the locus of vanishing of (5) equals up to a sign the \((g-2j)\)th Chern class of \(G\langle j\rangle\otimes(H\langle j\rangle)^{\vee}\) and this is \(c_{g-2j}(H\langle j\rangle)-c_{g-2j-1}(H\langle j\rangle)c_{1}(G\langle j\rangle)\). By the argument given in Lemma 6.5 the class \(c_{g-2j}(H\langle j\rangle)\) is a non-zero multiple of the \((g-2j)\)th elementary symmetric function in \(g-2j\) consecutive classes \(\ell_{i}\). We can view this as obtained by applying a Frobenius power to \(\ell_{1},\dots,\ell_{g-2j}\), or use [4, Lemma 12.3], hence this elementary symmetric function represents a multiple of \(\lambda_{g-2j}(g-2j)\). The pushdown of this is a multiple of \(\lambda_{g-2j}\). The argument for \(c_{g-2j-1}(H\langle j\rangle)c_{1}(G\langle j\rangle)\) is similar, as in Lemma 6.5. Therefore the cycle class of the vanishing locus is a multiple of \(\lambda_{g-2j}\) on the locus \(Z\) and \(Z\) has as class a multiple of \(\lambda_{g}\lambda_{g-2}\cdots\lambda_{g+2-2j}\). As \(\lambda_{g-2j}\) is the Chern class of a vector bundle on \(\tilde{\mathcal{A}}_{g}\otimes\mathbb{F}_{p}\) we find as cycle class on \(\tilde{\mathcal{A}}_{g}\otimes\mathbb{F}_{p}\) a multiple of \(\lambda_{g}\lambda_{g-2}\cdots\lambda_{g-2j}\). _Remark 6.6_.: i) By analyzing more precisely the characteristic classes of the degeneracy loci in the proof, it should be possible to determine the multiple \(f(p)\) as a polynomial in \(p\), but this involves many subtleties. ii) By interpreting Newton polygon strata contained in the \(p\)-rank zero locus as degeneracy loci as done in the proof of Theorem 6.1 we see that the cycle classes of these loci lie in the tautological ring. This suggests that all Newton polygon classes are tautological. ## 7. Moduli of flag type quotients for \(g=3\) In this section and the next we calculate the cycle class of the supersingular locus \(S_{3}\). We consider an irreducible component of the space of polarized flags of Dieudonne modules for \(g=3\), defined by the choice of a quasi-polarization on \(A^{3}_{1,1}\). This space is the Zariski closure of the moduli of rigid polarized Dieudonne flags. A description was given in [18, p. 58]. Thus we look at polarized flags \((E^{3},\eta)=(Y_{2},\eta)\,\stackrel{{\rho_{2}}}{{\longrightarrow}}(Y_{1},\eta_{1})\,\stackrel{{\rho_{1}}}{{\longrightarrow}}\,(Y_{0},\eta_{0})\) corresponding to a polarized flag of Dieudonne modules \[M_{0}\subset M_{1}\subset M_{2}=A^{3}_{1,1}=A\langle x,y,z\rangle\] with the quasi-polarization given by \[\langle x,Fx\rangle=\langle y,Fy\rangle=\langle z,Fz\rangle=1/p\,.\] Since \(FM_{2}\subset M_{1}\) with \(\dim(M_{1}/FM_{2})=1\) the module \(M_{1}\) is determined by a \(1\)-dimensional subspace of \(M_{2}/FM_{2}\), say generated by a vector \(v=ax+by+cz\). The condition \((F,V)M_{1}\subset M_{1}^{t}\) requires \(\langle v,Fv\rangle\in W\), that is, if we view the coefficients \(a,b,c\) as elements of \(k\), the condition \((F,V)M_{1}\subset M_{1}^{t}\) is satisfied if and only if \[a^{p+1}+b^{p+1}+c^{p+1}=0\,.\] Thus the moduli space \(\mathcal{F}_{1}\) of truncated flags \(M_{1}\subset M_{2}\) can be identified with a Fermat curve \(\mathcal{X}_{p+1}\subset\mathbb{P}^{2}=\operatorname{Gr}(2,3)\). The module \(M_{0}\) is determined by a \(2\)-dimensional subspace \(M_{0}/FM_{1}\subset M_{1}/FM_{1}\). 
Assuming rigidity, we see that it is spanned by two vectors \[w_{1}=v,\qquad w_{2}=\alpha Fx+\beta Fy+\gamma Fz\,,\] and the condition \(M_{0}\subseteq M_{0}^{t}\) gives \(a\alpha+b\beta+c\gamma=0\). This implies that \(M_{1}/M_{0}\) defines a sheaf isomorphic to \(\mathcal{O}_{\mathcal{F}_{1}}(1)\). Moreover, the degree \(p^{2}\) homomorphism \[\eta_{1}:Y_{1}\to Y_{0}\,\stackrel{{\sim}}{{\to}}\,Y_{0}^{t}\to Y_{1}^{t}\] shows that \(M_{1}/M_{1}^{t}\) is self dual, and it defines a locally free sheaf isomorphic to \(\mathcal{O}_{\mathcal{F}_{1}}(1)\oplus\mathcal{O}_{\mathcal{F}_{1}}(-1)\). This implies that the moduli space of rigid polarized Dieudonne flags with given quasi-polarization \(\eta\) admits a structure \[\mathcal{F}_{0}^{0}\to\mathcal{F}_{1}\to\mathcal{F}_{2}=\text{point}\] with \(\mathcal{F}_{0}^{0}\) the open dense part of the \(\mathbb{P}^{1}\)-bundle \(\mathcal{F}_{0}=\mathbb{P}(\mathcal{O}_{\mathcal{F}_{1}}(1)\oplus\mathcal{O}_{\mathcal{F}_{1}}(-1))\) that is the complement of the unique section with negative self-intersection number. The Zariski closure is obtained by taking the full \(\mathbb{P}^{1}\)-bundle \(\mathcal{F}_{0}\). The morphism \(\mathcal{F}_{0}\to S_{3}\subset\mathcal{A}_{3}\otimes\mathbb{F}_{p}\) is of degree \(1\) onto its image, and the image forms an irreducible component of \(S_{3}\). The natural morphism to \(\mathcal{A}_{3}\otimes\mathbb{F}_{p}\) contracts the section. ## 8. The cycle class of \(S_{3}\) Here we give the proof of the formula for the cycle class of \(S_{3}\) stated in [9, Thm. 11.3]. The first author learned from Ekedahl at that time how to calculate the Hodge bundle for flag type quotients. Ekedahl employed this in [2, Cor. 3.4]. This idea was used in [9] to calculate the cycle class of \(S_{3}\). As done at the time of [9], here we will not use the results of Section 6. The Chow rings with rational coefficients of \(\mathcal{A}_{3}\) and \(\tilde{\mathcal{A}}_{3}\) are known by [10]. The ring \(\operatorname{CH}_{\mathbb{Q}}^{*}(\tilde{\mathcal{A}}_{3})\) is generated by the Chern classes of the Hodge bundle and boundary classes \(\sigma_{1}\) and \(\sigma_{2}\). A priori the class of \(S_{3}\) is a linear combination of the generators of \(\operatorname{CH}_{\mathbb{Q}}^{4}(\tilde{\mathcal{A}}_{3})\), viz. \(\lambda_{1}^{4}\), \(\lambda_{1}^{3}\sigma_{1}\), \(\lambda_{1}^{2}\sigma_{1}^{2}\) and \(\lambda_{1}\sigma_{1}\sigma_{2}\), see [10]. But since \(S_{3}\cdot\sigma_{1}^{2}=0=S_{3}\cdot\sigma_{2}\) we see from the multiplication table 3f in [10, p. 765] that the class of \(S_{3}\) is a multiple of \(\lambda_{1}^{4}=8\,\lambda_{1}\lambda_{3}\). Alternatively, this follows from the fact that \(S_{3}\) is contained in \(V_{0}\), the \(p\)-rank \(0\) locus, whose class is a multiple of \(\lambda_{3}\). **Theorem 8.1**.: _The class of the supersingular locus for genus \(3\) in the Chow ring with rational coefficients of a Faltings-Chai compactification of \(\mathcal{A}_{3}\otimes\mathbb{F}_{p}\) is given by_ \[[S_{3}]=(p-1)^{2}(p^{3}-1)(p^{4}-1)\,\lambda_{1}\lambda_{3}\,.\] Proof.: The class \([S_{3}]\) is a multiple of \(\lambda_{1}\lambda_{3}\) and the multiple can be determined by calculating the intersection number with \(\lambda_{2}\). 
Using the flag type quotients described above we see that an irreducible component of the supersingular locus \(S_{3}\) in \(\mathcal{A}_{3}\otimes\mathbb{F}_{p}\) is the image of a surface \(\mathcal{F}_{0}\) under a map \(\mathcal{F}_{0}\to\mathcal{A}_{3}\otimes\mathbb{F}_{p}\) of degree \(1\) and \(\mathcal{F}_{0}\) is of the form \[\mathcal{F}_{0}\xrightarrow{\pi_{0}}\mathcal{F}_{1}\xrightarrow{\pi_{1}}\mathcal{F}_{2}=\operatorname{point}\,,\] where \(\mathcal{F}_{i}\) parametrizes partial flag type quotients \(Y_{2}\to\cdots\to Y_{i}\). More precisely, a component of \(S_{3}\) is the image under a morphism of a \(\mathbb{P}^{1}\)-bundle \(\mathcal{B}=\mathcal{F}_{0}\) over the Fermat curve \(\mathcal{F}_{1}=\mathcal{X}_{p+1}\) of degree \(p+1\) in \(\mathbb{P}^{2}\) that blows down the unique section \(S\) with negative self-intersection number of the \(\mathbb{P}^{1}\)-bundle \(\mathbb{P}(\mathcal{O}(1)\oplus\mathcal{O}(-1))\) over \(\mathcal{X}_{p+1}\). A point of \(\mathcal{F}_{1}\) corresponds to the choice of a subgroup scheme \(\alpha_{p}^{2}\) in \(E^{3}[F]\). If we use contravariant Dieudonne modules over a geometric point of \(\mathcal{F}_{i}\) we have for \(i=0\) and \(i=1\) an exact sequence \[0\to pM_{i+1}/pM_{i}\to VM_{i}/pM_{i}\to VM_{i+1}/pM_{i+1}\to VM_{i+1}/VM_{i}\to 0.\] Over \(\mathcal{F}_{i}\), we can identify \(\operatorname{Lie}(Y_{i})^{\vee}\) with \(VM_{i}/pM_{i}\) (cf. [1, 4.3.12] and [18, 5.4, 7.4]), more precisely with \(\operatorname{Q}_{i}/I_{\mathcal{O}_{\mathcal{F}_{i}}}\mathrm{P}_{i}\), where \((\mathrm{P}_{i},\operatorname{Q}_{i},F,V^{-1})\) is the display associated to \(Y_{i}\). (Note that \(\operatorname{Q}_{i}\) and \(I_{\mathcal{O}_{\mathcal{F}_{i}}}\mathrm{P}_{i}\) become \(VM_{i}\) and \(pM_{i}\) respectively if we pull back them to the spectrum of a perfect field.) By the exact sequence we have in the Grothendieck group \(K_{0}(\mathcal{F}_{i})\) the relation \[\operatorname{Lie}(Y_{i})^{\vee}=\operatorname{Lie}(Y_{i+1})^{\vee}-Q_{i}+Q_{i}^{(p)}\] with \(Q_{i}\) the locally free \(\mathcal{O}_{\mathcal{F}_{i}}\)-module defined by \(VM_{i+1}/VM_{i}\). Here \(\operatorname{Lie}(Y_{i+1})\) denotes the pull back under \(\pi_{i}\). We pull back the relation \[\operatorname{Lie}(Y_{1})^{\vee}=\operatorname{Lie}(Y_{2})^{\vee}-Q_{1}+Q_{1}^{(p)}\] under \(\pi_{0}\) to \(K_{0}(\mathcal{F}_{0})\) and then find in \(K_{0}(\mathcal{F}_{0})\) suppressing the \(\pi_{0}^{*}\) \[\operatorname{Lie}(Y_{0})^{\vee}=[3]-Q_{1}+Q_{1}^{(p)}-Q_{0}+Q_{0}^{(p)}\,,\] where the [3] stands for the class of the trivial rank 3 bundle \(\pi_{0}^{*}\pi_{1}^{*}(\text{Lie}(Y_{2}))^{\vee}\). From the short exact sequence \[0\to VM_{1}/pM_{2}\to VM_{2}/pM_{2}\to VM_{2}/VM_{1}\to 0\] we get the exact sequence of vector bundles \[0\to U_{1}\to\pi_{1}^{*}(\text{Lie}(Y_{2})^{\vee})\to Q_{1}\to 0\] with \(\text{rank}(Q_{1})=2\) that comes from the tautological exact sequence of bundles on the Grassmannian. Here \(U_{1}\) has rank 1 and \(\pi_{1}^{*}\text{Lie}(Y_{2})\) is trivial. This implies that \([Q_{1}]=[3]-[U_{1}]\) in the Grothendieck group of vector bundles and so the total Chern class of \(\text{Lie}(Y_{0})^{\vee}\) is given by \[(1-\ell_{1})(1-p\ell_{1})^{-1}(1+\ell_{0})^{-1}(1+p\ell_{0})\,,\] where \(\ell_{i}=c_{1}(Q_{i})\). Now \(\ell_{1}\) lives on the curve \(\mathcal{F}_{1}=\mathcal{X}_{p+1}\), so \(\ell_{1}^{2}=0\). 
This gives for the classes \(\lambda_{1}\) and \(\lambda_{2}\) the relations \[\lambda_{1}=(p-1)(\ell_{0}+\ell_{1}),\quad\lambda_{2}=(p-1)^{2}\ell_{0}\ell_{1}-(p-1)\ell_{0}^{2}\,.\] The identity \(\lambda_{1}^{2}=2\lambda_{2}\) that holds in the tautological ring \(R_{3}\) implies that \((p^{2}-1)(\ell_{0}^{2}-\ell_{1}^{2})=0\), hence \(\ell_{0}^{2}=0\). Since \(\deg(\ell_{1})=p+1\) on \(\mathcal{F}_{1}\) and \(\ell_{0}\) represents \(\mathcal{O}(1)\) on the fibres of \(\mathcal{F}_{0}\to\mathcal{F}_{1}\) we find \(\deg(\ell_{0}\ell_{1})=p+1\). We thus find that \(\deg(\lambda_{2})=(p+1)(p-1)^{2}\) on each irreducible component of \(S_{3}\). We get \[\deg(\lambda_{2}[S_{3}]) =(p+1)(p-1)^{2}\,N_{3}\] \[=(p+1)(p-1)^{2}(p-1)(p^{2}+1)(p^{3}-1)\,v(3)\,.\] On the other hand, \(\deg(\lambda_{1}\lambda_{2}\lambda_{3})=v(3)\) and this implies the result. The morphism \(\pi_{0}:\mathcal{F}_{0}\to\mathcal{F}_{1}\) is a \(\mathbb{P}^{1}\)-bundle over a Fermat curve of degree \(p+1\) with a section with image \(S\). The Picard group of \(\mathcal{F}_{0}\) is generated by the pullback under \(\pi_{0}\) of the Picard group of \(\mathcal{F}_{1}\) and by the class of the section \(S\). **Proposition 8.2**.: _We have \([S]=\ell_{0}-\ell_{1}\) and \(S^{2}=-2(p+1)\)._ Proof.: Let \(X\) be a fibre of \(\pi_{0}\). We have \(XS=1\) and \((S-\ell_{0})X=0\), hence \(S-\ell_{0}=\pi_{0}^{*}(D)\) with \(D\) a divisor class on \(\mathcal{F}_{1}\). This gives \((S-\ell_{0})^{2}=0\). The identity \(\lambda_{1}^{2}=2\,\lambda_{2}\) implies \(\ell_{0}^{2}=0\) and thus \(S^{2}-2\ell_{0}S=0\). Now we use the fact that \(S\) is contracted under the map of \(\mathcal{F}_{0}\) to \(\mathcal{A}_{3}\otimes\mathbb{F}_{p}\). This implies that \(\lambda_{1}\) restricted to \(S\) vanishes, hence \((\ell_{0}+\ell_{1})S=0\). We thus get \(S^{2}=2\ell_{0}S=-2\ell_{1}S\) and on the other hand \(S^{2}=\ell_{0}S+\pi_{0}^{*}(D)S=-\ell_{1}S+\pi_{0}^{*}(D)S\), hence \(\pi_{0}^{*}(D)=-\ell_{1}\) and \(S=\ell_{0}-\ell_{1}\). The facts that \(\deg(\ell_{0}\ell_{1})=p+1\) and \(\ell_{0}^{2}=\ell_{1}^{2}=0\) imply \(S^{2}=-2(p+1)\). ## 9. Loci for \(g=3\) defined by conditions on the \(a\)-number We now discuss the subloci of \(S_{3}\) defined by the inequality \(a\geq 2\). Here \(a\) indicates the \(a\)-number of an abelian variety. Let \(J\) with \(\#J=N_{3}\) be the set of irreducible components of \(S_{3}\) (where we count in the stacky way). Each irreducible component of \(S_{3}\) is the image under a morphism of a \(\mathbb{P}^{1}\)-bundle \(\mathcal{F}_{0}\to\mathcal{F}_{1}\) that blows down a section. The curve \(\mathcal{F}_{1}\) has \(p^{3}+1\) points rational over \(\mathbb{F}_{p^{2}}\) and \(\#\mathcal{F}_{0}(\mathbb{F}_{p^{2}})=(p^{3}+1)(p^{2}+1)\) and each point of \(\mathcal{F}_{0}(\mathbb{F}_{p^{2}})\) defines a superspecial abelian variety. Let \(\sqcup_{j\in J}\mathcal{F}_{0}^{j}\) be the disjoint union of the smooth models of the irreducible components of \(S_{3}\). Under the natural morphism \[m:\sqcup_{j\in J}\mathcal{F}_{0}^{j}\longrightarrow S_{3}\subset\mathcal{A}_{3}\otimes\mathbb{F}_{p}\] the \(N_{3}(p^{3}+1)(p^{2}+1)\) superspecial points of \(\sqcup_{j\in J}\mathcal{F}_{0}^{j}\) map to the \(N_{3}\) superspecial points of \(S_{3}\). Thus each superspecial point of \(S_{3}\) is the image of \((p^{3}+1)(p^{2}+1)\) points and this multiplicity can be explained as follows. 
On each surface \(\mathcal{F}_{0}^{j}\) a section is contracted giving a factor \(p^{3}+1\), while the image of an \(\mathbb{F}_{p^{2}}\)-rational fibre of \(\mathcal{F}_{0}^{j}\to\mathcal{F}_{1}^{j}\) lies on the image of \(p^{2}+1\) surfaces \(\mathcal{F}_{0}^{j}\). This can be checked by using Ekedahl-Oort strata and their classes as follows. Each \(\mathbb{F}_{p^{2}}\)-rational point of \(\mathcal{F}_{1}^{j}\) determines a fibre in the \(\mathbb{P}^{1}\)-bundle \(\mathcal{F}_{0}^{j}\to\mathcal{F}_{1}^{j}\) and the image under \(m\) of such a fibre provides a component of the Ekedahl-Oort locus \(\mathcal{V}_{[3,2]}\). This locus \(\mathcal{V}_{[3,2]}\) consists of a finite union of \(\mathbb{P}^{1}\)s. By [4] we know the class of this locus: \[[\overline{\mathcal{V}}_{[3,2]}]=(p-1)^{2}(p^{6}-1)\,\lambda_{2}\lambda_{3}\,.\] Since the degree of the determinant \(\lambda_{1}\) of the Hodge bundle restricted to such a \(\mathbb{P}^{1}\) is \(p-1\), we find that \(\overline{\mathcal{V}}_{[3,2]}\) has \[m_{3,2}=\frac{\deg(\lambda_{1}|\,\overline{\mathcal{V}}_{[3,2]})}{p-1}=(p-1)(p^{6}-1)\,v(3)\] irreducible components, each a copy of \(\mathbb{P}^{1}\). Here we count in the stacky sense. Each such component contributes \(p^{2}+1\) superspecial points and we see from \[m_{3,2}\,(p^{2}+1)=\deg(\overline{\mathcal{V}}_{[3,2,1]})\,(p^{3}+1)\] that this fits with the fact that through a superspecial point there pass \(p^{3}+1\) components of \(\overline{\mathcal{V}}_{[3,2]}\). In fact, under the map \(\mathcal{F}_{0}^{j}\to\mathcal{A}_{3}\otimes\mathbb{F}_{p}\) a section is blown down and this section intersects the \(p^{3}+1\) fibres of \(\mathcal{F}_{0}^{j}\to\mathcal{F}_{1}^{j}\) over \(\mathcal{F}_{1}^{j}(\mathbb{F}_{p^{2}})\). We can also check that each such fibre lies on \(p^{2}+1\) irreducible components of \(S_{3}\); hence we find for the number of superspecial points \[N_{3}\,(p^{3}+1)(p^{2}+1)=\deg(\overline{\mathcal{V}}_{[3,2,1]})(p^{2}+1)(p^{3}+1)\] in agreement with the fact that \(\mathcal{V}_{[3,2,1]}\) is the superspecial locus and that \(N_{g}\) equals the degree of the superspecial locus for odd \(g\). ## 10. Moduli of flag type quotients for \(g=4\) In this section we construct a smooth model for each irreducible component of the supersingular locus \(S_{4}\). The model is obtained by taking the Zariski closure of the moduli of rigid flag type quotients for \(g=4\) and by showing that this moduli space is smooth. We consider the space \(\mathcal{M}=\mathcal{M}_{\eta}\) of polarized flags of contravariant Dieudonne modules \[M_{0}\subset M_{1}\subset M_{2}\subset M_{3}\] satisfying 1. \(M_{3}=A_{1,1}^{4}\) provided with \(\eta\), a fixed quasi-polarization \(\langle\,,\,\rangle\) that induces an identification \(M_{3}^{t}=F^{3}M_{3}\); 2. \((F,V)M_{i}\subset M_{i-1}\) and \(\dim(M_{i}/M_{i-1})=i\); 3. \((F,V)^{i}M_{i}\subset M_{i}^{t}\). We say that it is rigid if \(M_{i}=M_{0}+F^{3-i}M_{3}\) for \(i=0,\dots,3\). 
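To fix ideas, the numerical data of such a flag for \(g=4\) are the following instance of condition (2): \[M_{0}\subset M_{1}\subset M_{2}\subset M_{3}=A_{1,1}^{4},\qquad\dim(M_{1}/M_{0})=1,\quad\dim(M_{2}/M_{1})=2,\quad\dim(M_{3}/M_{2})=3\,,\] and the parameter count carried out below yields \(\dim\mathcal{F}_{0}=4=[4^{2}/4]=\dim S_{4}\), in accordance with the dimension formula of Li and Oort quoted in the introduction. 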
**Theorem 10.1**.: _The Zariski closure \(\mathcal{F}_{0}\) of the moduli space of rigid polarized Dieudonne flags of length \(4\) with given quasi-polarization on \(M_{3}\) inside \(\mathcal{M}\) is non-singular._ Proof.: We assume we have a skeleton of \(M_{3}\), that is, generating elements \(x_{1},x_{2},x_{3},x_{4}\) with \((F-V)x_{i}=0\) and such that the pairing defined by \(\eta\) satisfies \[\langle x_{1},F^{4}x_{4}\rangle=\langle x_{2},F^{4}x_{3}\rangle=1\,.\] For a rigid polarized Dieudonne flag \(M\) the module \(M_{2}\) is generated by \(FM_{3}\) and a vector \[v_{0}=\sum_{i=1}^{4}a_{i}x_{i}\,\in M_{3}/FM_{3}\] with the condition \(\langle v_{0},F^{2}v_{0}\rangle\in W\). Viewing the coefficients \(a_{i}\) as lying in \(k\), this amounts to the equation \[f:=a_{1}a_{4}^{p^{2}}-a_{1}^{p^{2}}a_{4}+a_{2}a_{3}^{p^{2}}-a_{2}^{p^{2}}a_{3}=0\,.\] This defines a smooth surface \(\mathcal{F}_{2}\) in \(\mathbb{P}^{3}\). This surface was studied in detail by Katsura [14]. Locally on this surface we may assume without loss of generality that \(a_{1}\neq 0\) and that \(a_{1}=1\). Now \(M_{1}\) is generated by \(FM_{2}\) and a \(2\)-dimensional subspace \(M_{1}/FM_{2}\) in \(M_{2}/FM_{2}\). Since \(a_{1}=1\) we can assume that this \(2\)-dimensional subspace is generated by non-zero elements \(v\) and \(w\) with \[v=a_{5}v_{0}+a_{6}Fx_{2}+a_{7}Fx_{3}+a_{8}Fx_{4},\quad w=a_{9}Fx_{2}+a_{10}Fx_{3}+a_{11}Fx_{4}\,. \tag{6}\] We then have the conditions \[\langle v,Fv\rangle\in W,\quad\langle v,Fw\rangle\in W,\quad\langle Fv,w\rangle\in W\,. \tag{7}\] Viewing the coefficients as elements of \(k\) we find three equations all divisible by \(a_{5}\). But \(a_{5}=0\) yields a flag that is not rigid; indeed, \[M_{1}+FM_{3}=(F,V)M_{2}+Aw+FM_{3}=(F,V)(Av_{0}+FM_{3})+Aw+FM_{3}\subset FM_{3}\] but \(M_{2}\not\subset FM_{3}\), hence \(M_{2}\neq M_{1}+FM_{3}\), contradicting rigidity. Removing the factor \(a_{5}\) from the equations (7) by considering \(\langle v,Fv\rangle/a_{5},\langle v,Fw\rangle/a_{5}\) and \(\langle Fv,w\rangle/a_{5}^{p}\), we get the equations \[g_{1}:= a_{1}a_{8}^{p}-a_{1}^{p}a_{5}^{p-1}a_{8}+a_{2}a_{7}^{p}-a_{2}^{p}a_{5}^{p-1}a_{7}+a_{3}^{p}a_{5}^{p-1}a_{6}-a_{3}a_{6}^{p}=0,\] \[g_{2}:= a_{1}a_{11}^{p}+a_{2}a_{10}^{p}-a_{3}a_{9}^{p}=0\,, \tag{8}\] \[g_{3}:= a_{1}^{p}a_{11}+a_{2}^{p}a_{10}-a_{3}^{p}a_{9}=0\,.\] _Remark 10.2_.: The reader may verify that if the point \((1:a_{2}:a_{3}:a_{4})\in\mathcal{F}_{2}(k)\) is not rational over \(\mathbb{F}_{p^{2}}\) then we may choose as \(w\) the element \[(F-V)\,v_{0}\,.\] Indeed, it satisfies \(g_{2}=0\) and \(g_{3}=0\) for any non-zero choice of \(v\); namely with \(a_{1}=1\) we have \(a_{1}(a_{4}^{p^{2}}-a_{4})+a_{2}(a_{3}^{p^{2}}-a_{3})-a_{3}(a_{2}^{p^{2}}-a_{2})=0\) and similarly for \(g_{3}\). Now we first look at a point with \(a_{5}\neq 0\). If both \(a_{9}\) and \(a_{10}\) vanish we have by \(g_{3}=0\) that \(w=0\). So we may assume that, say, \(a_{9}\neq 0\) and then have \(a_{1}=a_{5}=a_{9}=1\) and by changing \(v\) to \(v-a_{6}w\) we may assume \(a_{6}=0\). The Jacobian matrix of the equations \(f,g_{1},g_{2},g_{3}\) with respect to the variables \(a_{j}\) for \(j=2,3,4,7,8,10,11\) is \[\begin{pmatrix}a_{3}^{p^{2}}&-a_{2}^{p^{2}}&-a_{1}^{p^{2}}&0&0&0&0\\ a_{7}^{p}&-a_{6}^{p}&0&-a_{2}^{p}a_{5}^{p-1}&-a_{1}^{p}a_{5}^{p-1}&0&0\\ a_{10}^{p}&-a_{9}^{p}&0&0&0&0&0\\ 0&0&0&0&0&a_{2}^{p}&a_{1}^{p}\end{pmatrix}\] and this is of rank \(4\). Next we look at the case where \(a_{5}=0\). 
The vanishing of \(a_{9}\) and \(a_{10}\) implies by \(g_{3}\) that \(a_{11}=0\), so we may assume that \(a_{9}\neq 0\) or \(a_{10}\neq 0\). Again without loss of generality we may assume \(a_{9}\neq 0\). Changing \(v\) by a multiple of \(w\) we may assume \(a_{6}=0\). If now \(a_{7}=0\) then \(g_{1}\) forces \(a_{8}=0\), hence \(v=0\). So we may assume that \(a_{7}\neq 0\). Then it suffices to treat the case of \(a_{1}=a_{7}=a_{9}=1\) and \(a_{6}=0\). Then the Jacobian matrix of the equations \(f,g_{1},g_{2},g_{3}\) with respect to the variables \(a_{j}\) for \(j=2,3,4,5,8,10,11\) is \[\begin{pmatrix}a_{3}^{p^{2}}&-a_{2}^{p^{2}}&-a_{1}^{p^{2}}&0&0&0&0\\ a_{7}^{p}&0&0&(a_{1}^{p}a_{8}+a_{2}^{p}a_{7})a_{5}^{p-2}&-a_{1}^{p}a_{5}^{p-1}&0&0\\ a_{10}^{p}&-a_{9}^{p}&0&0&0&0&0\\ 0&0&0&0&0&a_{2}^{p}&a_{1}^{p}\end{pmatrix}\] which is of rank \(4\) as required. By writing \(\mathcal{F}_{i}\) for the Zariski closure in \(\mathcal{M}\) of the moduli space of rigid polarized Dieudonne flags \(M_{i}\subset\cdots\subset M_{3}\) we get a sequence \[\mathcal{F}_{0}\stackrel{{\pi_{0}}}{{\longrightarrow}}\mathcal{F}_{1}\stackrel{{\pi_{1}}}{{\longrightarrow}}\mathcal{F}_{2}\stackrel{{\pi_{2}}}{{\longrightarrow}}\mathcal{F}_{3}=\text{point}\] with \(\dim\mathcal{F}_{i}=4-i\) for \(i=0,1,2\). We now describe the fibres of the morphism \(\pi_{1}:\mathcal{F}_{1}\to\mathcal{F}_{2}\). We start by remarking that by using the symmetry of \(\mathcal{F}_{2}\) there is no loss of generality if we look at the fibre over a point \((a_{1}:a_{2}:a_{3}:a_{4})\) of \(\mathcal{F}_{2}\) with \(a_{1}=1\). If one of \(a_{2},a_{3},a_{4}\) lies in \(\mathbb{F}_{p^{2}}\) then the point lies on one of the lines of \(\mathcal{F}_{2}\). Indeed, if \(a_{4}\in\mathbb{F}_{p^{2}}\) then such a line is parametrically \((1:t:t:a_{4})\), while if, say, \(a_{2}\in\mathbb{F}_{p^{2}}\) then such a line is \((t:a_{2}:1:t)\). For describing the fibre over a point \((1:a_{2}:a_{3}:a_{4})\) we consider the equations \[a_{8}^{p}+a_{2}a_{7}^{p}-a_{3}a_{6}^{p}-a_{5}^{p-1}(a_{8}+a_{2}^{p}a_{7}-a_{3}^{p}a_{6})=0\,, \tag{9}\] and \[a_{11}^{p}+a_{2}a_{10}^{p}-a_{3}a_{9}^{p}=0,\quad a_{11}+a_{2}^{p}a_{10}-a_{3}^{p}a_{9}=0\,. \tag{10}\] By the two equations \(g_{2}\), \(g_{3}\) of (10) we eliminate \(a_{11}\) and get \[\frac{a_{10}^{p}}{a_{9}^{p}}=\frac{a_{3}^{p^{2}}-a_{3}}{a_{2}^{p^{2}}-a_{2}}\,. \tag{11}\] In the neighborhood of an \(\mathbb{F}_{p^{2}}\)-valued point of \(\mathcal{F}_{2}\), say \((1:a_{2}:a_{3}:a_{4})\), the expressions \(a_{2}-a_{2}^{p^{2}}\) and \(a_{3}-a_{3}^{p^{2}}\) are local coordinates. This shows that the function field of \(\mathcal{F}_{1}\) can be generated over the function field of \(\mathcal{F}_{2}\) by adjoining the \(p\)th root of \((a_{2}-a_{2}^{p^{2}})/(a_{3}-a_{3}^{p^{2}})\) as determined by (11) and then adjoining a further element via an Artin-Schreier equation (9). Hence the degree of inseparability of \(\mathcal{F}_{1}\) over \(\mathcal{F}_{2}\) is \(p\). 
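Explicitly, the elimination behind (11) runs as follows: the second equation of (10) gives \(a_{11}=a_{3}^{p}a_{9}-a_{2}^{p}a_{10}\), and substituting this into the first equation yields \[(a_{3}^{p}a_{9}-a_{2}^{p}a_{10})^{p}+a_{2}a_{10}^{p}-a_{3}a_{9}^{p}=(a_{3}^{p^{2}}-a_{3})\,a_{9}^{p}-(a_{2}^{p^{2}}-a_{2})\,a_{10}^{p}=0\,,\] which is equation (11). 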
Over an open neighborhood \(U\) of an \(\mathbb{F}_{p^{2}}\)-rational point with local coordinates \(a_{2}-a_{2}^{p^{2}}\) and \(a_{3}-a_{3}^{p^{2}}\), the equation (11) describes an inseparable cover of the blow-up of \(U\) (in \(U\times\mathbb{P}^{1}\) with coordinates \((u:v)\) on \(\mathbb{P}^{1}\)) given by \[u(a_{2}-a_{2}^{p^{2}})-v(a_{3}-a_{3}^{p^{2}})=0,\quad u/v=(a_{10}/a_{9})^{p}\,.\] Thus we see that the morphism \(\pi_{1}:\mathcal{F}_{1}\to\mathcal{F}_{2}\) factors via an inseparable cover of the blow-up \(\tilde{\mathcal{F}}_{2}\) of \(\mathcal{F}_{2}\) in the \(\mathbb{F}_{p^{2}}\)-rational points. If we have a point not on a line we may assume \(a_{9}=1\) and then that \(a_{6}=0\). The reduced fibre is a curve in \(\mathbb{P}^{2}\) with coordinates \((a_{5}:a_{7}:a_{8})\) given by \[a_{8}^{p}+a_{2}a_{7}^{p}-a_{5}^{p-1}(a_{8}+a_{2}^{p}a_{7})=0\,.\] This is a curve with one singularity of order \(p\), a cusp located at \(a_{5}=0\) and \(a_{8}+a_{2}^{1/p}a_{7}=0\). Next we consider the case of a point on a line. Since the automorphism group of \(\mathcal{F}_{2}\) acts transitively on the set of lines defined over \(\mathbb{F}_{p^{2}}\) (by Witt's theorem, see [14]) we may assume that the line is given as \((1:t:0:0)\). The last two equations give \((t^{p^{2}}-t)a_{10}^{p}=0\) and the first equation yields \(a_{8}^{p}+ta_{7}^{p}-a_{5}^{p-1}(a_{8}+t^{p}a_{7})=0\), again a curve with a cusp. So if the point is not an \(\mathbb{F}_{p^{2}}\)-valued point of \(\mathcal{F}_{2}\) we get \(a_{10}^{p}=0\) and as reduced fibre again a curve with a single singularity, a cusp. If \(t\in\mathbb{F}_{p^{2}}\), then the first equation splits as the union of \(p\) lines passing through one point: indeed, with \(b=a_{8}+t^{p}a_{7}\) and \(t^{p^{2}}=t\) it becomes \(b^{p}-a_{5}^{p-1}b=\prod_{c\in\mathbb{F}_{p}}(b-c\,a_{5})=0\). We summarize. **Proposition 10.3**.: _Let \(\tilde{\mathcal{F}}_{2}\) be the blow-up of \(\mathcal{F}_{2}\) in all \(\mathbb{F}_{p^{2}}\)-rational points. The morphism \(\pi_{1}:\mathcal{F}_{1}\to\mathcal{F}_{2}\) factors through \(\mathcal{F}_{1}\to\tilde{\mathcal{F}}_{2}\to\mathcal{F}_{2}\). The morphism \(\pi_{1}^{\prime}:\mathcal{F}_{1}\to\tilde{\mathcal{F}}_{2}\) has inseparability degree \(p\). The reduced fibre over a non-\(\mathbb{F}_{p^{2}}\)-rational point is an irreducible curve with one singularity, a cusp singularity of order \(p\), while the fibre over a point on an exceptional curve is a union of \(p\) lines meeting in one point._ ## 11. Interpretation of the morphism \(\mathcal{F}_{1}\to\mathcal{F}_{2}\) The morphism \(\pi_{1}:\mathcal{F}_{1}\to\mathcal{F}_{2}\) is inseparable and factors through the blow-up surface \(\tilde{\mathcal{F}}_{2}\). We give an interpretation of this factorization by describing the blow-up \(\tilde{\mathcal{F}}_{2}\) in terms of Dieudonne modules and by showing that \(\mathcal{F}_{1}\) is realized in a natural \(\mathbb{P}^{2}\)-bundle over \(\tilde{\mathcal{F}}_{2}\). We begin with a moduli interpretation of the fibers of \(\tilde{\mathcal{F}}_{2}\to\mathcal{F}_{2}\). **Proposition 11.1**.: _The fiber of \(\tilde{\mathcal{F}}_{2}\to\mathcal{F}_{2}\) over a point \((M_{2}\subset M_{3})\in\mathcal{F}_{2}(k)\) is given by the set of lines in a \(2\)-dimensional vector space_ \[\{L\subset V^{-1}M_{2}^{t}/FM_{2}\mid\dim L=1,\ \ L\ \text{contains}\ (F,V)M_{2}\bmod FM_{2}\}.\] Proof.: We begin by observing two facts: * (i) \((FM_{2}\subset)\) \((F,V)M_{2}\subset V^{-1}M_{2}^{t}\) * (ii) \(V^{-1}M_{2}^{t}/FM_{2}\) is a \(k\)-vector space of dimension two. Indeed, (i) follows from \(V(F,V)M_{2}\subset(F,V)^{2}M_{2}\subset M_{2}^{t}\), the last inclusion being condition (3) for \(i=2\). 
To prove (ii), consider the dual of \(FM_{3}\subset M_{2}\subset M_{3}\): \[M_{3}^{t}\subset M_{2}^{t}\subset V^{-1}M_{3}^{t}.\] By \(V^{-1}M_{3}^{t}=F^{-1}M_{3}^{t}=F^{2}M_{3}\subset FM_{2}\), we have \(M_{2}^{t}\subset FM_{2}\). This means that \(V\) (and therefore \(p\)) kills \(V^{-1}M_{2}^{t}/FM_{2}\), whence \(V^{-1}M_{2}^{t}/FM_{2}\) is a \(k\)-vector space. Looking at the inclusions \(M_{2}^{t}\subset FM_{2}\subset V^{-1}M_{2}^{t}\), we have \[\dim V^{-1}M_{2}^{t}/FM_{2} =\dim V^{-1}M_{2}^{t}/M_{2}^{t}-\dim FM_{2}/M_{2}^{t}\] \[=4-\dim FM_{2}/F^{2}M_{3}-\dim V^{-1}M_{3}^{t}/M_{2}^{t}\] \[=4-1-1=2\] and this proves (ii). If \((M_{2}\subset M_{3})\) represents a point of \(\mathcal{F}_{2}\) that is not rational over \(\mathbb{F}_{p^{2}}\) then \(FM_{2}\neq VM_{2}\) and \(L\) is unique. If \((M_{2}\subset M_{3})\) represents an \(\mathbb{F}_{p^{2}}\)-rational point, then \(FM_{2}=VM_{2}\) and the fibre is a \(\mathbb{P}^{1}\). _Remark 11.2_.: We point out that the Dieudonne module \(V^{-1}M_{2}^{t}/FM_{2}\) is self-dual. We now describe the morphism \(\mathcal{F}_{1}\to\tilde{\mathcal{F}}_{2}\). On \(\tilde{\mathcal{F}}_{2}\) we have by Proposition 11.1 the subspace \(L\subset V^{-1}M_{2}^{t}/FM_{2}\). It determines a \(W\)-module \(\tilde{L}\) with \[(F,V)M_{2}\subset\tilde{L}\subset V^{-1}M_{2}^{t}\,,\] the inverse image of \(L\) under the projection \(V^{-1}M_{2}^{t}\to V^{-1}M_{2}^{t}/FM_{2}\). It has the property that outside \(\tilde{\pi}_{2}^{-1}(\mathcal{F}_{2}(\mathbb{F}_{p^{2}}))\) we have \(\tilde{L}=(F,V)M_{2}\). We can now consider over a point of \(\tilde{\mathcal{F}}_{2}\) the \(3\)-dimensional vector space \(M_{2}/\tilde{L}\). This should define a rank \(3\) vector bundle \(B\), but as the equations show we can realize \(B\) only after an inseparable base change. **Lemma 11.3**.: _The threefold \(\mathcal{F}_{1}\) is a divisor in a \(\mathbb{P}^{2}\)-bundle \(\mathbb{P}(B)\) with \(B\) the rank \(3\) vector bundle defined by \(M_{2}/\tilde{L}\) over a surface \(\tilde{\mathcal{F}}_{2}^{\prime}\) obtained by an inseparable base change \(\tilde{\mathcal{F}}_{2}^{\prime}\to\tilde{\mathcal{F}}_{2}\) of degree \(p\)._ Proof.: Recall that in order to define \(M_{1}\subset M_{2}\), we chose a basis \[v=a_{5}v_{0}+a_{6}Fx_{2}+a_{7}Fx_{3}+a_{8}Fx_{4},\quad w=a_{9}Fx_{2}+a_{10}Fx_{3}+a_{11}Fx_{4}\] as in (6) with \(\langle v,Fv\rangle\), \(\langle v,Fw\rangle\), \(\langle Fv,w\rangle\) all in \(W\). The equations \((g_{2})\) and \((g_{3})\) correspond to the inseparable base change \(\tilde{\mathcal{F}}_{2}^{\prime}\to\tilde{\mathcal{F}}_{2}\) given on the locus with \(a_{1}\neq 0\) by (11) \[(a_{9}/a_{10})^{p}=(a_{2}-a_{2}^{p^{2}})/(a_{3}-a_{3}^{p^{2}})\,.\] Then on \(\tilde{\mathcal{F}}_{2}^{\prime}\) we have the bundle \(\mathbb{P}(B)\). If \(a_{5}\neq 0\) the morphism \(\mathcal{F}_{1}\to\tilde{\mathcal{F}}_{2}\) is defined by sending \((M_{1}\subset M_{2}\subset M_{3})\) to the point defined by \(L:=M_{1}\cap V^{-1}M_{2}^{t}\bmod FM_{2}\). Indeed, by \((F,V)M_{2}\subset M_{1}\), the subspace \(L\) contains \((F,V)M_{2}\bmod FM_{2}\), and \(L\) is the one-dimensional space generated by \(w\) of (6), since one can check \(\langle Vw,M_{2}\rangle=\{0\}\) and if \(a_{5}\neq 0\), then \(\langle Vv,M_{2}\rangle\neq\{0\}\). 
For \(a_{5}\neq 0\) we find from \(\langle v,Fv\rangle\in W\) an equation \[a_{1}a_{8}^{p}+a_{2}a_{7}^{p}-a_{3}a_{6}^{p}-a_{5}^{p-1}(a_{1}^{p}a_{8}+a_{2}^ {p}a_{7}-a_{3}^{p}a_{6})=0\,.\] This defines a rational curve with a cusp in \(\mathbb{P}^{2}=\mathbb{P}(M_{2}/\tilde{L})\). As \(\mathcal{F}_{1}\) is defined as the closure of the space of rigid flags, we obtain that this equation defines \(\mathcal{F}_{1}\) in \(\mathbb{P}(B)\). Observe that in order to analyze this we may assume as we did in the preceding section that \(a_{1}=1\) and \(a_{2}\neq 0\) and then \(a_{6}=0\) and the curve can be written in coordinates \((a_{5}:a_{7}:a_{8})\) as \[a_{1}a_{8}^{p}+a_{2}a_{7}^{p}-a_{5}^{p-1}(a_{1}^{p}a_{8}+a_{2}^{p}a_{7})=0\,.\] The cusp is determined by \(a_{5}=0\) and \(a_{8}+a_{2}^{1/p}a_{7}=0\). In particular we see that after an inseparable base change the bundle \(B\) admits a nowhere vanishing section. ## 12. The Hodge bundle on the supersingular locus The description of principally polarized supersingular abelian varieties of dimension \(4\) via a flag gives us for each irreducible component \(S\) of \(S_{4}\) a morphism \(\mathcal{F}_{0}\to S\) and a fibration of \(\mathcal{F}_{0}\) \[\mathcal{F}_{0}\stackrel{{\pi_{0}}}{{\longrightarrow}}\mathcal{F} _{1}\stackrel{{\pi_{1}}}{{\longrightarrow}}\mathcal{F}_{2}\to \mathcal{F}_{3}\,,\] where \(\mathcal{F}_{i}\) for \(i=0,\dots,3\) is the closure of the moduli space of rigid polarized flag type quotients \(Y_{3}\to\dots\to Y_{i}\). Note that \(\mathcal{F}_{3}\) is a point. We have seen above that these spaces \(\mathcal{F}_{i}\) are non-singular. **Lemma 12.1**.: _For each irreducible component \(S\) of \(S_{4}\) in \(\mathcal{A}_{4}\otimes\mathbb{F}_{p}\) the natural morphism \(\mathcal{F}_{0}\to S\) is a morphism of degree \(p\)._ Proof.: Let \(x\) be a geometric point of \(\mathcal{F}_{2}\). Let \(\mathcal{F}_{1,x}\) be the fiber \(\pi_{1}^{-1}(x)\). We claim that \(\pi_{0}^{-1}(\mathcal{F}_{1,x})\to\mathcal{A}_{g}\) is a \(p\)-to-\(1\) map onto the image. Indeed if \(x\) is represented by \((a_{1},a_{2},a_{3},a_{4})\in k^{4}\) for an algebraically closed field \(k\), then the fiber \(\pi_{1}^{-1}(x)\) is described in \(a_{5},\dots,a_{11}\) by \[g_{1}:= a_{1}a_{8}^{p}-a_{1}^{p}a_{5}^{p-1}a_{8}+a_{2}a_{7}^{p}-a_{2}^{p}a_{5}^ {p-1}a_{7}+a_{3}^{p}a_{5}^{p-1}a_{6}-a_{3}a_{6}^{p}=0,\] \[g_{2}:= a_{1}a_{11}^{p}+a_{2}a_{10}^{p}-a_{3}a_{9}^{p}=0\,,\] \[g_{3}:= a_{1}^{p}a_{11}+a_{2}^{p}a_{10}-a_{3}^{p}a_{9}=0\,.\] But \(g_{2}\) is the \(p\)-th power of \[g_{2}^{\prime}:=a_{1}^{1/p}a_{11}+a_{2}^{1/p}a_{10}-a_{3}^{1/p}a_{9}\,.\] The space defined by \(g_{1},g_{2}^{\prime},g_{3}\), say \(\mathcal{F}_{1,x}^{\prime}\), coincides on an open part of \(\mathcal{F}_{1}\) with the fiber of \(\mathcal{V}_{11}\to\mathcal{V}_{2}\) studied in [18, 9.7], where \(\mathcal{V}_{2}\) corresponds to our \(\mathcal{F}_{2}\) and \(\mathcal{V}_{11}\) is the non-garbage component considered in [18, 9.7]. Note that \(\mathcal{F}_{1,x}^{\prime}\) is a closed subscheme of \(\mathcal{F}_{1,x}\). Thanks to the proof by Li and Oort (cf. [18, 7.11]), the map \((\pi_{0})^{-1}(\mathcal{F}_{1}^{\prime})\to\mathcal{A}_{g}\) is one-to-one on its image as stacks; indeed, the proof of Li and Oort was done by fiberwise arguments. The claim follows. The space \(\mathcal{F}_{i}\) carries an abelian variety \(\mathcal{Y}_{i}\). Its cotangent bundle along the zero section may be described by Dieudonne theory. 
Using contravariant Dieudonne theory with the Dieudonne module \(M_{i}\) of a fibre \(Y_{i}\) of \(\mathcal{Y}_{i}\), we have \[\operatorname{Lie}(Y_{i})^{\vee}=VM_{i}/pM_{i}\,.\] The flag type quotient provides an inductive construction. For \(i=2,1,0\) we have the exact sequence \[0\to pM_{i+1}/pM_{i}\to VM_{i}/pM_{i}\to VM_{i+1}/pM_{i+1}\to VM_{i+1}/VM_{i} \to 0\,.\] In the Grothendieck group of vector bundles on \(\mathcal{F}_{i}\) we thus get the identity \[[\operatorname{Lie}(\mathcal{Y}_{i})^{\vee}]=[\pi_{i}^{*}(\operatorname{Lie}( \mathcal{Y}_{i+1})^{\vee})]-[Q_{i}]+[Q_{i}^{(p)}]\,,\] where \(Q_{i}\) is the locally free \(\mathcal{O}_{\mathcal{F}_{i}}\)-module of rank \(i+1\) corresponding to \(VM_{i+1}/VM_{i}\). Moreover, the exact sequence for \(i=1\) and \(i=2\) \[0\to VM_{i}/pM_{i+1}\to VM_{i+1}/pM_{i+1}\to VM_{i+1}/VM_{i}\to 0\] gives us an exact sequence of \(\mathcal{O}_{\mathcal{F}_{i}}\)-modules \[0\to U_{i}\to\pi_{i}^{*}(\operatorname{Lie}(\mathcal{Y}_{i+1})^{\vee})\to Q_{ i}\to 0\] with \(U_{i}\) the locally free \(\mathcal{O}_{\mathcal{F}_{i}}\)-module defined by \(VM_{i}/pM_{i+1}\). For \(i=0\) we have \[0\to VM_{0}/VM_{1}^{t}\to VM_{1}/VM_{1}^{t}\to VM_{1}/VM_{0}\to 0\] and this gives a short exact sequence of \(\mathcal{O}_{\mathcal{F}_{0}}\)-modules \[0\to U_{0}\to\pi_{0}^{*}(K_{1})\to Q_{0}\to 0\] with \(K_{1}\) the locally free sheaf corresponding to the Dieudonne module of \(\ker(\mathcal{Y}_{1}\stackrel{{\eta_{1}}}{{\longrightarrow}} \mathcal{Y}_{1}^{t})\). In the following we will abuse the notation \(Q_{i}\) also for the pullback of \(Q_{i}\) to \(\mathcal{F}_{i-1}\) in order to simplify notation. Since \(\operatorname{Lie}(\mathcal{Y}_{3})^{\vee}\) is trivial of rank \(4\) we get from the above the class of the Hodge bundle \(\mathbb{E}=\operatorname{Lie}(\mathcal{Y}_{0})^{\vee}\) in the Grothendieck group of vector bundle on \(\mathcal{F}_{0}\). **Proposition 12.2**.: _The class of Hodge bundle of \(\mathcal{Y}_{0}\) in the Grothendieck group of vector bundles on \(\mathcal{F}_{0}\) is given by_ \[[\mathbb{E}] =4-[Q_{2}]+[Q_{2}^{(p)}]-[Q_{1}]+[Q_{1}^{(p)}]-[Q_{0}]+[Q_{0}^{(p )}]\] \[=4+[U_{2}]-[U_{2}^{(p)}]-[Q_{1}]+[Q_{1}^{(p)}]-[Q_{0}]+[Q_{0}^{(p )}]\,,\] _where \(U_{2}\) and \(Q_{0}\) have rank \(1\), while \(Q_{1}\) has rank \(2\)._ Note that here we abuse the notation \(Q_{i}\) for the pull back of \(Q_{i}\) to \(\mathcal{F}_{0}\). We now set \[\ell_{i}=c_{1}(Q_{i})\quad\text{for $i=0,1,2$.}\] We may consider \(\ell_{i}\) as a class living on \(\mathcal{F}_{i}\), but we will denote the pull back \(\pi_{0}^{*}(\ell_{1})\), \(\pi_{1}^{*}(\ell_{2})\) and \(\pi_{0}^{*}(\pi_{1}^{*}(\ell_{2}))\) also by \(\ell_{1}\), \(\ell_{2}\) in order to simplify notation. Proposition 12.2 implies the following. **Proposition 12.3**.: _The total Chern class \(c(\mathbb{E})\) of the Hodge bundle on \(\mathcal{F}_{0}\) is given by_ \[c(\mathbb{E})=\frac{(1-\ell_{2})(1+p\,\ell_{1}+p^{2}c_{2}(Q_{1}))(1+p\,\ell_{0 })}{(1-p\,\ell_{2})(1+\ell_{1}+c_{2}(Q_{1}))(1+\ell_{0})}\,.\] **Corollary 12.4**.: _We have \(c_{2}(Q_{1})=(\ell_{0}^{2}+\ell_{1}^{2}-\ell_{2}^{2})/2\). Moreover, the class \(\ell_{0}^{2}\) is a pullback from \(\mathcal{F}_{1}\)._ Proof.: We deduce \(\lambda_{1}=(p-1)(\ell_{0}+\ell_{1}+\ell_{2})\) and \[2\,\lambda_{2}-\lambda_{1}^{2}=(p^{2}-1)(2\,c_{2}(Q_{1})-\ell_{0}^{2}-\ell_{1 }^{2}+\ell_{2}^{2})\] and since \(\lambda_{1}^{2}=2\,\lambda_{2}\) on \(\mathcal{A}_{4}\) the formula for \(c(\mathbb{E})\) follows. 
Since \(Q_{1}\) lives on \(\mathcal{F}_{1}\) it implies that the class \(\ell_{0}^{2}\) is a pullback from \(\mathcal{F}_{1}\). ## 13. Loci with \(a\)-number \(\geq 2\) for \(g=4\) The abelian variety corresponding to the generic point of an irreducible component \(S\) of \(S_{4}\) has \(a\)-number equal to \(1\). An irreducible component of the closed stratum of \(S\) where \(a\geq 2\) is of one of two types, as shown in [18, Section 9.9]. See also [11]. The first type maps under \(\mathcal{F}_{0}\to\mathcal{F}_{2}\) to a line on \(\mathcal{F}_{2}\), while the second type maps dominantly to \(\mathcal{F}_{0}\) or maps to a point of \(\mathcal{F}_{2}(\mathbb{F}_{p^{2}})\). ### Loci of the first type The first type parametrizes flag types \(M_{3}\supset M_{2}\supset M_{1}\supset M_{0}\) such that there exists a totally isotropic subspace \(I\) of \(M_{3}/FM_{3}\) such that \(M_{1}\subset N\) with \(N\subset M_{3}\) the submodule generated by \(I\) and \(FM_{3}\). Since the automorphism group of \(M_{3}\) acts transitively on totally isotropic subspaces defined over \(\mathbb{F}_{p^{2}}\), we may assume that \(I=\langle x_{1},x_{2}\rangle\). In terms of abelian varieties, such a flag type can be obtained from a flag type \[E^{4}=Y_{3}\stackrel{{\rho_{3}}}{{\longrightarrow}}Y_{2} \stackrel{{\rho_{2}}}{{\longrightarrow}}Y_{1}\stackrel{{ \rho_{1}}}{{\longrightarrow}}Y_{0} \tag{12}\] with quasi-polarization \(\eta_{3}:Y_{3}\to Y_{3}^{t}\) with \(\ker\eta_{3}=E^{4}[F^{3}]\) if the composition \(\rho_{2}\rho_{3}:E^{4}\to Y_{1}\) factors through \[1_{E^{2}}\times F_{E^{2}}:E^{4}\longrightarrow E^{2}\times E^{2}/E^{2}[F]\,.\] By identifying \(E^{2}\times E^{2}/E^{2}[F]\) with \(E^{4}\) and thus factoring \(\rho_{2}\rho_{3}\), we put \(Z_{2}=E^{2}\times E^{2}/E^{2}[F]\cong E^{4}\) and \(Z_{1}=Y_{1}\) and then associate to it the flag \[E^{4}=Z_{2}\stackrel{{\zeta_{2}}}{{\longrightarrow}}Z_{1} \stackrel{{\zeta_{1}}}{{\longrightarrow}}Z_{0}\,, \tag{13}\] where \(\deg(\zeta_{2})=p^{3}\) and \(\deg(\zeta_{1})=p\) and \(\theta_{2}:Z_{2}\to Z_{2}^{t}\) is a quasi-polarization with kernel equal to \(E^{4}[p]\). This can be described by Dieudonne modules: consider the Dieudonne module \(N_{2}=\langle x_{1},x_{2},Fx_{3},Fx_{4}\rangle\) with \(x_{1},x_{2},x_{3},x_{4}\) the skeleton of \(M_{3}\). It satisfies \(N_{2}^{t}=F^{2}N_{2}\). We choose a submodule \(N_{1}\) generated by \(u=ax_{1}+bx_{2}+cFx_{3}+dFx_{4}\) and \(FM_{2}\) with \(\langle u,Fu\rangle=0\). By viewing \(u\) as an element of \(N_{2}/FM_{2}\) and the coefficients \(a,b,c,d\) in \(k\) we obtain an equation \[ad^{p}-a^{p}d+bc^{p}-b^{p}c=0\,. \tag{14}\] Then \(\dim N_{2}/N_{1}=3\) and \(\dim N_{1}/N_{1}^{t}=2\). We then can choose a Dieudonne submodule \(N_{0}\) with \(N_{1}^{t}\subset N_{0}\subset N_{1}\) with \(\dim N_{1}/N_{0}=1\). The filtration \(N_{0}\subset N_{1}\subset N_{2}\) corresponds to (13). The moduli of \(N_{2}\supset N_{1}\) defines a surface \(\mathcal{G}_{1}\) in projective space \(\mathbb{P}^{3}\) given by (14) and choosing \(N_{0}\) defines a \(\mathbb{P}^{1}\)-bundle \(\mathcal{G}_{0}\to\mathcal{G}_{1}\). We now discuss how to map \(\mathcal{G}_{1}\) to \(\mathcal{F}_{1}\). Given \(u\) we choose \(v_{0}\) as a multiple of \(u\). This determines a submodule \(M_{2}\) of \(M_{3}\), generated by \(FM_{3}\) and \(v_{0}\), that contains \(N_{1}\) and we set \(M_{1}=N_{1}\). Note that \(M_{2}\) is generated also by \(ax_{1}+bx_{2}\) and \(FM_{3}\). 
Then we can choose two generators \(v,w\) for \(M_{1}\) modulo \(FM_{2}\) and assuming that \(a\neq 0\) we may choose \[v=a_{5}v_{0}+cFx_{3}+dFx_{4}=ax_{1}+bx_{2}+cFx_{3}+dFx_{4},\quad w=Fx_{2}\,.\] In terms of the coordinates in Section 10 we have \[a=a_{1}a_{5},b=a_{2}a_{5},c=a_{7},d=a_{8}\,.\] The fibre of \(\mathcal{G}_{1}\to\mathcal{F}_{1}\) over a point \(v_{0}=(1:t:0:0)\) of \(\mathcal{F}_{2}\) consists of all \((a:b:c:d)\) with \(d^{p}-a^{p-1}d+tc^{p}-t^{p}a^{p-1}c=0\); it is defined by a Lefschetz pencil on \(\mathcal{G}_{1}\) defined by \(b=ta\). We refer to the paper [14] for such a Lefschetz fibering. The general fibre is a rational curve with one singularity given by \(a=0\). Recall that the automorphism group of \(\mathcal{F}_{2}\) acts transitively on the set of lines of \(\mathcal{F}_{2}\) defined over \(\mathbb{F}_{p^{2}}\). For each line \(L\) defined over \(\mathbb{F}_{p^{2}}\) on the surface \(\mathcal{F}_{2}\) we find a surface isomorphic to \(\mathcal{G}_{1}\) that is contained in the inverse image \(\pi_{1}^{-1}(L)\). The fibration \(\mathcal{G}_{0}\to\mathcal{G}_{1}\) has a natural section \(S\) by taking \(Z_{0}=Z_{2}/Z_{2}[F]\). Note that then \(N_{0}=FN_{2}\subset N_{1}\). This implies that \(Z_{0}\), determined by \(N_{0}\), is constant for all choices of \(N_{1}\). It also implies that this section is blown down under the natural morphism \(\mathcal{G}_{0}\to S_{4}\subset\mathcal{A}_{4}\) that associates to a flag type quotient (13) the isomorphism class of \(Z_{0}\). We summarize: Let \(M=A_{1,1}^{4}\) with quasi-polarization such that \(M^{t}=F^{3}M\). **Proposition 13.1**.: _For each totally isotropic subspace of \(M_{3}/FM_{3}\) there is a threefold \(\mathcal{G}_{0}\) that is a \(\mathbb{P}^{1}\)-bundle over a surface given by (14) with a section and a morphism \(\mathcal{G}_{0}\to\mathcal{F}_{0}\) whose image is a locus of supersingular abelian \(4\)-folds with \(a\geq 2\). Under \(\mathcal{G}_{0}\to\mathcal{F}_{2}\) it maps to a line on \(\mathcal{F}_{2}\). Under the morphism \(\mathcal{G}_{0}\to\mathcal{A}_{4}\otimes\mathbb{F}_{p}\) the section of \(\mathcal{G}_{0}\to\mathcal{G}_{1}\) is blown down._ ### Loci of the second type The locus of \(a\)-number \(\geq 2\) of the second type inside an irreducible component \(S\) of the supersingular locus \(S_{4}\) is characterized by the condition \(M_{1}\subset FM_{3}\). This condition is determined on \(\mathcal{F}_{1}\). The condition \(M_{1}\subset FM_{3}\) can be paraphrased by saying that the natural homomorphism \[M_{1}/(F,V)M_{2}\to M_{2}/FM_{3} \tag{15}\] induced by \(M_{1}\hookrightarrow M_{2}\) is zero. Let \(\mathcal{L}\) be the sheaf corresponding to the module \(M_{1}/(F,V)M_{2}\) and \(U_{2}\) the one corresponding to \(VM_{2}/pM_{3}\). The invertible sheaf \(U_{2}\) lives on \(\mathcal{F}_{2}\), and \(\mathcal{L}\) lives on \(\mathcal{F}_{1}\) and is invertible only outside \(\pi_{1}^{-1}(\mathcal{F}_{0}(\mathbb{F}_{p^{2}}))\). Thus we work on the open set \(\mathcal{F}_{1}^{0}\) that is the complement of \(\pi_{1}^{-1}(\mathcal{F}_{2}(\mathbb{F}_{p^{2}}))\). The homomorphism (15) defines a homomorphism of sheaves \[\psi:\mathcal{L}\to\pi_{1}^{*}(U_{2}^{(p)})\,.\] The locus \(\mathcal{H}_{1}\) can now be defined as the Zariski closure in \(\mathcal{F}_{1}\) of the zero locus \(D(\psi)\) in \(\mathcal{F}_{1}^{0}\) of the map \(\psi\). 
**Lemma 13.2**.: _The cycle class of the Zariski closure in \(\mathcal{F}_{1}\) of the zero locus \(D(\psi)\) of \(\psi\) equals_ \[[\overline{D(\psi)}]=p\,\ell_{1}-(p^{2}+1)\ell_{2}+e\,,\] _where \(e\) is a class with support in the fibres of \(\pi_{1}\) over \(\mathcal{F}_{2}(\mathbb{F}_{p^{2}})\)._ Proof.: We work on the open set \(\mathcal{F}_{1}^{0}\) that is the complement of \(\pi_{1}^{-1}(\mathcal{F}_{2}(\mathbb{F}_{p^{2}}))\). Consider the exact sequence \[0\to VM_{2}/FM_{2}\cap VM_{2}\to M_{1}/FM_{2}\to M_{1}/(F,V)M_{2}\to 0\,.\] If \(M_{2}\) is generated by \(FM_{3}\) and \(v_{0}=a_{1}x_{1}+a_{2}x_{2}+a_{3}x_{3}+a_{4}x_{4}\) with \((a_{1}:a_{2}:a_{3}:a_{4})\) determining a point not in \(\mathcal{F}_{2}(\mathbb{F}_{p^{2}})\) then \(FM_{2}\cap VM_{2}=pM_{3}\) and \[VM_{2}/FM_{2}\cap VM_{2}=VM_{2}/pM_{3}.\] This translates into a short exact sequence of \(\mathcal{O}_{\mathcal{F}_{1}^{0}}\)-modules \[0\to\pi_{1}^{*}(U_{2})\to U_{1}^{(p)}\to\mathcal{L}\to 0\] with \(U_{1}\) the locally free \(\mathcal{O}_{\mathcal{F}_{1}}\)-module determined by \(VM_{1}/pM_{2}\). We view \(\psi\) as a section of \(\pi_{1}^{*}(U_{2}^{(p)})\otimes\mathcal{L}^{-1}\) on \(\mathcal{F}_{1}^{0}\) with class \((p+1)[U_{2}]-p[U_{1}]\). From Section 12 we use the identities \([U_{2}]=[4]-[Q_{2}]\), \([U_{1}]=[\operatorname{Lie}(Y_{2})^{\vee}]-[Q_{1}]=[4]-[Q_{2}]+[Q_{2}^{(p)}]-[ Q_{1}]\), hence \(c_{1}((p+1)[U_{2}]-p[U_{1}])=p\ell_{1}-(p^{2}+1)\ell_{2}\). When taking the closure of the degeneracy locus of \(\psi\) we have to take into account a class with support in the fibres over \(\mathcal{F}_{2}(\mathbb{F}_{p^{2}})\) and the result follows. **Corollary 13.3**.: _An irreducible component of the locus with \(a\geq 2\) of the second type maps dominantly to \(\mathcal{F}_{2}\) or is contained in the fibres over \(\mathcal{F}_{2}(\mathbb{F}_{p^{2}})\)._ Proof.: If an irreducible component is not contained in the fibres over \(\mathcal{F}_{2}(\mathbb{F}_{p^{2}})\) then it is contained in \(\overline{D(\psi)}\) and the class is given by Lemma 13.2. The intersection number of \(\overline{D(\psi)}\) with a generic fibre of \(\pi_{1}\) equals the degree of \(\ell_{1}\) on such a fibre, that is, \(1\). That means that it intersects the generic fibre of \(\pi_{1}\) and is irreducible. The resulting abelian variety of a flag type with \(M_{1}\subset FM_{3}\) is defined by the filtration of Dieudonne modules \(FM_{3}\supset M_{1}\supset M_{0}\). By forgetting \(M_{1}\) between \(FM_{3}\) and \(M_{0}\), we have polarized flag types with \(\ker(\eta_{1})=E^{4}[F]\) and \(\ker(\rho_{1})\cong\alpha_{p}^{2}\). The choice of \(\ker(\rho_{1})\) in \(E^{4}[F]\) defines a point in the Grassmann variety \(G=\operatorname{Gr}(2,4)\). Note that \(G\) can be identified with a quadric in \(\mathbb{P}^{5}\) in terms of Plucker coordinates. If we choose a basis \(x_{1},x_{2},x_{3},x_{4}\) of the Dieudonne module of \(E^{4}\) with \((F-V)x_{i}=0\) and quasi-polarization with \[\langle x_{i},p\,x_{5-j}\rangle=\delta_{ij},\quad\langle x_{i},Fx_{j}\rangle= 0\,,\] then \(M_{0}\) can be generated by two vectors \(a=\sum_{i=1}^{4}a_{i}x_{i}\), \(b=\sum_{i=1}^{4}b_{i}x_{i}\) and the condition \(\langle a,b\rangle\in W\) says \[a_{1}b_{4}-a_{4}b_{1}+a_{2}b_{3}-a_{3}b_{2}\equiv\,0\,(\operatorname{mod}p)\] and this defines a hyperplane section \(\mathcal{Q}=H\cap G\) of the Grassmann variety. 
Indeed, with the Plucker coordinates \(\lambda_{ij}=a_{i}b_{j}-a_{j}b_{i}\) the variety \(G\) is given by \(\lambda_{12}\lambda_{34}-\lambda_{13}\lambda_{24}+\lambda_{14}\lambda_{23}=0\) and \(H\) by \(\lambda_{14}+\lambda_{23}=0\). We summarize. **Lemma 13.4**.: _The image in \(S_{4}\) of an irreducible component of the locus of second type with \(a\geq 2\) can be identified with a hyperplane section \(\mathcal{Q}\) of the Grassmann variety \(\operatorname{Gr}(2,4)\)._ We will analyze the case of loci of the second type contained in the fibres over \(\mathbb{F}_{p^{2}}\)-rational points on \(\mathcal{F}_{2}\) in the next section. ## 14. The fibres over \(\mathcal{F}_{2}(\mathbb{F}_{p^{2}})\) Here we study the fibre under \(\mathcal{F}_{0}\to\mathcal{F}_{2}\) of a rational point \(\xi\in\mathcal{F}_{2}(\mathbb{F}_{p^{2}})\). Since \(\mathcal{F}_{0}\to\mathcal{F}_{1}\) is a \(\mathbb{P}^{1}\)-bundle it suffices to study the fibre under \(\mathcal{F}_{1}\to\mathcal{F}_{2}\). The automorphism group \(\operatorname{Aut}(\mathcal{F}_{2})\) can be identified with the general unitary group \(\operatorname{GU}_{4}(p^{2})\) of \(4\)-dimensional space over \(\mathbb{F}_{p^{2}}\) that fixes the Hermitian form \[\xi_{1}\bar{\xi}_{4}-\xi_{4}\bar{\xi}_{1}+\xi_{2}\bar{\xi}_{3}-\xi_{3}\bar{ \xi}_{2}\] where \(\bar{\xi}=\xi^{p^{2}}\). By a theorem of Witt this group acts transitively on isotropic subspaces of dimension \(1\) and \(2\). This implies that it acts transitively on the set of lines of \(\mathcal{F}_{2}\) and on the set of \(\mathbb{F}_{p^{2}}\)-rational points, see [14, Appendix]. We thus may restrict to analyzing the fibre over the point \((1:0:0:0)\) of \(\mathcal{F}_{2}\). This corresponds to the case with \(M_{2}\subset M_{3}\) generated by \(v_{0}=x_{1}\) and \(FM_{3}\). The fibre \(\pi_{1}^{-1}(\xi)\) corresponds to the choices of \(M_{1}\). It can be given by a choice of basis \[v=a_{5}v_{0}+a_{6}Fx_{2}+a_{7}Fx_{3}+a_{8}Fx_{4},\quad w=a_{9}Fx_{2}+a_{10}Fx_ {3}+a_{11}Fx_{4},\] satisfying the equations \(g_{1}\), \(g_{2}\) and \(g_{3}\) of Section 10 \[a_{8}^{p}-a_{5}^{p-1}a_{8}=0,\quad a_{11}^{p}=0,\quad a_{11}=0\,.\] We distinguish whether \(a_{5}\neq 0\) or \(a_{5}=0\). Case i). \(a_{5}\neq 0\). We may assume \(a_{5}=1\) and find \(a_{8}\in\mathbb{F}_{p}\). We can change basis of \(M_{3}\) by \((x_{1},x_{2},x_{3},x_{4})\mapsto(x_{1}+a_{8}Fx_{4},x_{2},x_{3},x_{4})\) and then may assume that \(a_{8}=0\). Then \(M_{1}\) is generated inside \(M_{2}\) by \(FM_{2}=\langle Fx_{1},F^{2}x_{2},F^{2}x_{3},F^{2}x_{4}\rangle\) and \(v=x_{1}+a_{6}Fx_{2}+a_{7}Fx_{3}\) and \(w=a_{9}Fx_{2}+a_{10}Fx_{3}\). We now construct a flag of Dieudonne modules \[F^{2}M_{3}^{\prime}\subset M_{1}^{t}\subset M_{1}\subset FM_{3}^{\prime}\] with \(M_{3}^{\prime}=\langle F^{-1}x_{1},x_{2},x_{3},Fx_{4}\rangle\) and show that we can extend it to a flag type \[M_{1}\subset F^{2}M_{3}^{\prime}\subset M_{2}^{\prime}\subset M_{3}^{\prime} \tag{16}\] so that we can associate to it a point of a locus of the second type as treated in the Section 13.2 with respect to a changed basis \(\langle F^{-1}x_{1},x_{2},x_{3},Fx_{4}\rangle\) of \(M_{3}\). To prove our claim we have to construct \(M_{2}^{\prime}\) with \((F,V)M_{2}^{\prime}\subset M_{1}\). We take \(v_{0}^{\prime}=\alpha_{1}F^{-1}x_{1}+\alpha_{2}x_{2}+\alpha_{3}x_{3}+Fx_{4}\) and impose the following conditions 1. 
\(\langle v^{\prime}_{0},Fv^{\prime}_{0}\rangle\in W\), that is, \(\alpha_{1}-\alpha_{1}^{p^{2}}+\alpha_{2}\alpha_{3}^{p^{2}}-\alpha_{2}^{p^{2}} \alpha_{3}=0\), 2. \(Fv^{\prime}_{0}\in M_{1}\), equivalently, there exists \(\beta\) with \(Fv^{\prime}_{0}=\alpha_{1}^{p}+\beta w+F^{2}x_{4}\), that is, \(\alpha_{2}^{p}=\alpha_{1}^{p}a_{6}+\beta a_{9}\) and \(\alpha_{3}^{p}=\alpha_{1}^{p}a_{7}+\beta a_{10}\), 3. \(Vv^{\prime}_{0}\in M_{1}\), equivalently, there exists \(\gamma\) with \(Vv^{\prime}_{0}=\alpha^{1/p}v+\gamma w+F^{2}x_{4}\), that is, \(\alpha_{2}^{1/p}=\alpha_{1}^{1/p}a_{6}+\gamma a_{9}\) and \(\alpha_{3}^{1/p}=\alpha_{1}^{1/p}a_{7}+\gamma a_{10}\). For generic \(a_{i}\) (that is, \(a_{7}a_{9}-a_{6}a_{10}\) and \(a_{9}a_{10}\) not in \(\mathbb{F}_{p^{2}}\)) we find a solution. We then set \[M^{\prime}_{2}=Av^{\prime}_{0}+FM^{\prime}_{3}\] and then by (2) we have \(\langle v,w,FM^{\prime}_{2}\rangle=\langle v,w,FM_{2}\rangle=M_{1}\). Thus we have a filtration (16) and it gives a point of a locus \(\mathcal{H}_{1}\) with respect to the module \(M^{\prime}_{3}\). Case ii). Here \(a_{5}=0\). Then by \(g_{2}\) we have \(a_{11}=0\) and find that \(M_{1}\) is generated by \(v=a_{6}Fx_{2}+a_{7}Fx_{3}\), \(w=a_{9}Fx_{2}+a_{10}Fx_{3}\) and \(F^{2}M_{3}\), hence \(M_{1}=\langle Fx_{1},Fx_{2},Fx_{3},F^{2}x_{4}\rangle\). So \(M_{1}\) is fixed and this case thus yields one point. Moreover \(M_{1}^{t}=\langle Fx_{1},F^{2}x_{2},F^{2}x_{3},F^{2}x_{4}\rangle\). We thus see that the supersingular abelian variety corresponding to a generic point of an irreducible component \(\mathcal{E}\) of the fibre over a rational point \(\xi\in\mathcal{F}_{2}(\mathbb{F}_{p^{2}})\) can be viewed as the supersingular abelian variety defined by a generic point of a locus with \(a\geq 2\) of the second kind with \(M_{1}\subset FM^{\prime}_{3}\). This means that there is an irreducible component \(S^{\prime}\) of \(S_{4}\) with model \(\mathcal{F}^{\prime}_{0}\) and a locus \(\mathcal{H}^{\prime}_{0}\) mapping dominantly to \(\mathcal{F}^{\prime}_{2}\) such that image of \(\mathcal{E}\) and \(\mathcal{H}^{\prime}_{0}\) coincide in \(S_{4}\subset\mathcal{A}_{4}\). We summarize. **Proposition 14.1**.: _Let \(S\) be a component of \(S_{4}\) and \(\mathcal{F}_{0}\) be the model constructed in Section 10. The fibre in \(\mathcal{F}_{0}\) over a rational point \(\xi\in\mathcal{F}_{2}(\mathbb{F}_{p^{2}})\) consists of \(p\) irreducible components. The image of each of these in \(S_{4}\) is a hyperplane section of the Grassmann variety \(\operatorname{Gr}(2,4)\) and can be seen as the image of a locus of \(a\geq 2\) of the second type in another component \(S^{\prime}\) of \(S_{4}\)._ ## 15. Superspecial points of \(S_{4}\) The number of points of \(S_{4}\) representing isomorphism classes of superspecial abelian varieties counted in the stacky sense was given in formula (2) in Section 2 and equals \[\Sigma_{4}=(p-1)(p^{2}+1)(p^{3}-1)(p^{4}+1)\,v(4)\,.\] Each superspecial principally polarized abelian variety of dimension \(4\) defines a \(\mathbb{F}_{p^{2}}\)-rational point of \(S_{4}\subset\mathcal{A}_{4}\). By Proposition 3.1 we have \(N_{4}=(p^{2}-1)(p^{6}-1)\,v(4)\) irreducible components (again counted in the stacky sense) of \(S_{4}\). Each irreducible component is the image of \(\mathcal{F}_{0}\) under a degree \(p\) morphism to its image in \(S_{4}\) that induces a bijection between geometric points. 
**Lemma 15.1**.: _We have \(\#\mathcal{F}_{0}(\mathbb{F}_{p^{2}})=(p^{2}+1)^{3}(p^{3}+1)(p^{4}+1)\)._ Proof.: We have \(\#\mathcal{F}_{2}(\mathbb{F}_{p^{2}})=(p^{2}+1)(p^{4}+1)\), see for example [14], hence \(\#\tilde{\mathcal{F}}_{0}(\mathbb{F}_{p^{2}})=(p^{2}+1)^{2}(p^{4}+1)\) and these points are the \(\mathbb{F}_{p^{2}}\)-rational points on the exceptional curves of \(\tilde{\mathcal{F}}_{2}\). The fibre in \(\mathcal{F}_{1}\) over a \(\mathbb{F}_{p^{2}}\)-rational point of \(\tilde{\mathcal{F}}_{2}\) consists of a union of \(p\) lines through one point. So we find \(\#\mathcal{F}_{1}(\mathbb{F}_{p^{2}})=(p^{2}+1)^{2}(p^{4}+1)(p^{3}+1)\). Since \(\mathcal{F}_{0}\) is a \(\mathbb{P}^{1}\)-bundle over \(\mathcal{F}_{1}\) the formula follows. Let \(J\) be the set of irreducible components of \(S_{4}\) and for \(j\in J\) we let \(\mathcal{F}_{0}^{j}\) be the corresponding smooth model. The disjoint union of these smooth models has \[\#(\bigsqcup_{j\in J}\mathcal{F}_{0}^{j})(\mathbb{F}_{p^{2}})=N_{4}\,(p^{2}+1 )^{3}(p^{3}+1)(p^{4}+1)\] \(\mathbb{F}_{p^{2}}\)-rational points mapping to \(\Sigma_{4}\) superspecial points of \(S_{4}\). The variety \(\mathcal{F}_{0}\) contains \((p^{2}+1)(p^{4}+1)\) loci \(\mathcal{G}_{0}^{n}\) of the first kind, each isomorphic to \(\mathcal{G}_{0}\). We have \(\#\mathcal{G}_{1}(\mathbb{F}_{p^{2}})=(p^{2}+1)(p^{3}+1)\) (see [14]) and \(\#\mathcal{G}_{0}(\mathbb{F}_{p^{2}})=(p^{2}+1)^{2}(p^{3}+1)\) since \(\mathcal{G}_{0}\) is a \(\mathbb{P}^{1}\)-bundle over \(\mathcal{G}_{1}\). On \(\mathcal{F}_{1}\) these loci \(\mathcal{G}_{1}\) of the first kind are disjoint and we see \[\#\mathcal{F}_{0}(\mathbb{F}_{p^{2}})=(p^{2}+1)(p^{4}+1)\,\#\mathcal{G}_{0}( \mathbb{F}_{p^{2}})\,.\] On each component \(\mathcal{G}_{0}^{n}\) a section of \(\mathcal{G}_{0}\to\mathcal{G}_{1}\) is blown down. This section has \((p^{2}+1)(p^{3}+1)\) points rational over \(\mathbb{F}_{p^{2}}\). **Lemma 15.2**.: _Each superspecial point of \(S_{4}\) lies on \((p+1)(p^{3}+1)\) irreducible components of \(S_{4}\)._ Proof.: The number of totally isotropic subspaces of dimension \(2\) in a \(4\)-dimensional unitary space over \(\mathbb{F}_{p^{2}}\) with conjugation given by Frobenius is equal to \((p+1)(p^{3}+1)\). A choice of an irreducible component corresponds exactly to the choice of a totally isotropic subspace. We thus see that under the natural map \[\bigsqcup_{j\in J}\mathcal{F}_{0}^{j}\longrightarrow S_{4}\] the inverse image of each of the \(\Sigma_{4}\) superspecial points of \(S_{4}\) has \[(p+1)(p^{3}+1)\times(p^{2}+1)(p^{3}+1)\times(p^{2}+1)\] points, where the second factor corresponds to blowing down the section of \(\mathcal{G}_{0}\to\mathcal{G}_{1}\), and the third one comes from the fact that each exceptional curve on \(\tilde{\mathcal{F}}_{2}\) intersects \(p^{2}+1\) proper images of the lines defined over \(\mathbb{F}_{p^{2}}\), in agreement with the formula \[N_{4}\,(p^{2}+1)^{3}(p^{3}+1)(p^{4}+1)=\Sigma_{4}\,(p+1)(p^{2}+1)^{2}(p^{3}+1 )^{2}\,.\] ## 16. The cycle class of \(S_{4}\) and intersection numbers In this section we express the cycle class of the supersingular locus \(S_{4}\) for dimension \(g=4\) in terms of intersection numbers. We know that the cycle class of \(S_{4}\) lies in the tautological ring and is a multiple of \(\lambda_{4}\lambda_{2}\). This multiple can be determined by intersection numbers. We identify the degree of a top-dimensional Chern class with an intersection number. 
**Proposition 16.1**.: _We have \([S_{4}]=a\,\lambda_{4}\lambda_{2}\) with_ \[a=\frac{\lambda_{3}\lambda_{1}\,[S_{4}]}{v(4)}=\frac{\lambda_{1}^{4}[S_{4}]}{8 \,v(4)}\] _with \(v(4)\) the proportionality constant defined in Section 2._ Proof.: We have \(\lambda_{3}\lambda_{1}[S_{4}]=a\,\lambda_{4}\lambda_{3}\lambda_{2}\lambda_{1}= a\,v(4)\). In the tautological ring \(R_{4}\) we have \(\lambda_{3}\lambda_{1}=\lambda_{1}^{4}/8\). We shall calculate the intersection number \([S]\cdot\lambda_{3}\lambda_{1}\) for each irreducible component \(S\) of \(S_{4}\). We will do this by pulling back the Hodge bundle of \(\mathcal{A}_{4}\) to \(\mathcal{F}_{0}\) and calculating the degrees of the top Chern classes of the Hodge bundle on \(\mathcal{F}_{0}\). ## 17. Determination of intersection numbers Our goal is to calculate the intersection number \(\lambda_{1}\lambda_{3}[S]\) for each irreducible component \(S\) of the supersingular locus. For this we calculate \(\deg(\lambda_{3}\lambda_{1})\) on the \(4\)-dimensional variety \(\mathcal{F}_{0}\). By Proposition 12.3, that describes the total Chern class of the Hodge bundle, and by Corollary 12.4 this intersection number can be expressed in the intersection numbers given by the monomials of degree \(4\) in \(\ell_{0},\ell_{1},\ell_{2}\) evaluated at the fundamental class of \(\mathcal{F}_{0}\). Note that we write \(\ell_{1}\) and \(\ell_{2}\) for their pullbacks to \(\mathcal{F}_{0}\) and sometimes identify such a monomial \(\ell_{0}^{a}\ell_{1}^{b}\ell_{2}^{c}\) with \(\deg(\ell_{0}^{a}\ell_{1}^{b}\ell_{2}^{c})\). **Lemma 17.1**.: _The following intersection numbers vanish on \(\mathcal{F}_{0}\):_ \[\ell_{0}^{4},\ell_{0}^{2}\ell_{1}^{2},\ell_{0}^{2}\ell_{1}\ell_{2},\ell_{0}^{2 }\ell_{2}^{2},\ell_{0}\ell_{2}^{3},\ell_{1}^{4},\ell_{1}^{3}\ell_{2},\ell_{1}^ {2}\ell_{2}^{2},\ell_{1}\ell_{2}^{3},\ell_{2}^{4}\,.\] Proof.: Since \(\dim\mathcal{F}_{1}=3\) and \(\ell_{0}^{2}\) is a pullback from \(\mathcal{F}_{1}\) by Corollary 12, and \(\ell_{1}\) and \(\ell_{2}\) are also pullbacks from \(\mathcal{F}_{1}\) we find that \(\ell_{0}^{4},\ell_{0}^{2}\ell_{1}^{2},\ell_{0}^{2}\ell_{1}\ell_{2},\ell_{0}^{2 }\ell_{2}^{2}\) vanish. Since the class \(\ell_{2}\) is a pullback from \(\mathcal{F}_{2}\), which is of dimension \(2\), we have \(\ell_{2}^{3}=0\), implying that \(\ell_{2}^{4}=\ell_{0}\ell_{2}^{3}=\ell_{1}\ell_{2}^{3}=0\). Similarly, \(\ell_{2}\) and \(\ell_{1}\) are induced from \(\mathcal{F}_{1}\), which is of dimension \(3\), hence the monomials of degree \(4\) in \(\ell_{1}\) and \(\ell_{2}\) vanish. Proposition 12.3 together with Lemma 17.1 imply the following relation. **Corollary 17.2**.: _We have on \(\mathcal{F}_{0}\)_ \[\deg(\lambda_{3}\lambda_{1})=\frac{1}{2}(p-1)^{4}\left(\ell_{0}^{3}\ell_{1}+ \ell_{0}^{3}\ell_{2}+\ell_{0}\ell_{1}^{3}+3\,\ell_{0}\ell_{1}^{2}\ell_{2}+3\, \ell_{0}\ell_{1}\ell_{2}^{2}\right)\,.\] Thus we need the intersection numbers defined by the five monomials in \(\ell_{0},\ell_{1},\ell_{2}\) appearing in Corollary 17.2. The intersection numbers \((\ell_{0}\ell_{1}^{3},\ell_{0}\ell_{1}^{2}\ell_{2},\ell_{0}\ell_{1}\ell_{2}^{2})\) on \(\mathcal{F}_{0}\) are equal to the intersection numbers \((\ell_{1}^{3},\ell_{1}^{2}\ell_{2},\ell_{1}\ell_{2}^{2})\) on \(\mathcal{F}_{1}\) as the degree of \(\ell_{0}\) on a generic fibre of \(\pi_{0}\) is \(1\). 
**Lemma 17.3**.: _We have \(\deg\ell_{1}\ell_{2}^{2}=p^{2}(p^{2}+1)\) on \(\mathcal{F}_{1}\)._ Proof.: The space \(\mathcal{F}_{2}\) can be identified with the surface in \(\mathbb{P}^{3}\) over \(\mathbb{F}_{p}\) given by the equation \[x_{1}x_{4}^{p^{2}}-x_{1}^{p^{2}}x_{4}+x_{2}x_{3}^{p^{2}}-x_{2}^{p^{2}}x_{3}=0\] and \(\ell_{2}\) is represented by the pullback under \(\pi_{1}\) of the hyperplane class \(h\) on \(\mathcal{F}_{2}\). Therefore \(h^{2}\) can be represented by an effective zero cycle of degree \(p^{2}+1\). The surface \(\mathcal{F}_{2}\) is unirational (see [14]), hence \(h^{2}\) can be represented by \(p^{2}+1\) times a point. The morphism \(\pi_{1}\) is inseparable of degree \(p\), hence the pullback of a point \(\mathcal{F}_{2}\) is \(p\) times a fibre of \(\mathcal{F}_{1}\). Since the degree of \(\ell_{1}\) on a fibre of \(\pi_{1}\) is \(p\) we get \(\deg(\ell_{1}\ell_{2}^{2})=p\cdot p\cdot(p^{2}+1)\). **Lemma 17.4**.: _We have on \(\mathcal{F}_{0}\) the relation_ \[p\,\ell_{0}^{3}\ell_{1}-(p^{2}+1)\,\ell_{0}^{3}\ell_{2}+p\,\ell_{0}\ell_{1}^{3 }-(p-1)^{2}\,\ell_{0}\ell_{1}^{2}\ell_{2}-(2p^{2}-p+2)\ell_{0}\ell_{1}\ell_{2} ^{2}=0\,.\] Proof.: This follows from the fact that \(\lambda_{4}\) vanishes in the Chow ring of \(\mathcal{A}_{g}\) as explained in Section 2 and the expression for \(\lambda_{4}\) as a polynomial in the \(\ell_{i}\) by Proposition 12.3 and Corollary 12.4. **Corollary 17.5**.: _On \(\mathcal{F}_{1}\) we have the relation_ \[p\,\ell_{0}^{2}\ell_{1}-(p^{2}+1)\,\ell_{0}^{2}\ell_{2}+p\,\ell_{1}^{3}-(p-1) ^{2}\,\ell_{1}^{2}\ell_{2}-(2p^{2}-p+2)\ell_{1}\ell_{2}^{2}=0\,.\] Proof.: We know that \(\mathcal{F}_{0}\) is a \(\mathbb{P}^{1}\)-bundle over \(\mathcal{F}_{1}\). Therefore each cycle class \(\xi\in A_{k}(\mathcal{F}_{0})\) can be written uniquely as \(\xi=\pi_{0}^{*}(\xi_{0})+\pi_{0}^{*}(\xi_{1})\ell_{0}\) with \(\xi_{0}\in A_{k-1}(\mathcal{F}_{1})\) and \(\xi_{1}\in A_{k}(\mathcal{F}_{1})\). In particular, the map \(\xi_{1}\mapsto\pi_{0}^{*}(\xi_{1})\ell_{0}\) is injective. The result thus follows from Lemma 17.4. **Lemma 17.6**.: _We have on \(\mathcal{F}_{1}\) the relation_ \[2\,\ell_{0}^{2}\ell_{1}-(p-1)\ell_{0}^{2}\ell_{2}+(p-1)\ell_{1}^{2}\ell_{2}-2( p^{2}-p+1)\ell_{1}\ell_{2}^{2}=0\,.\] Proof.: We have the exact sequence of Dieudonne modules \[0\to A\to VM_{2}/pM_{2}\to VM_{2}/VM_{1}\to 0\] with \(\operatorname{Lie}(Y_{2})^{\vee}=VM_{2}/pM_{2}\) and \(Q_{1}=VM_{2}/VM_{1}\). The total Chern class of the sheaf corresponding to \(A\) has the form \[c(A)=(1-\ell_{2})(1-p\,\ell_{2})^{-1}(1+\ell_{1}+c_{2}(Q_{1}))^{-1}\,.\] Since \(\operatorname{rank}(A)=2\) the third Chern class should vanish; this gives a relation on \(\mathcal{F}_{1}\) \[2\ell_{0}^{2}\ell_{1}-(p-1)\ell_{0}^{2}\ell_{2}+(p-1)\ell_{1}^{2}\ell_{2}-2( p^{2}-p+1)\ell_{1}\ell_{2}^{2}=0\,.\] **Lemma 17.7**.: _On \(\mathcal{F}_{1}\) we have the relation_ \[p\,\ell_{0}^{2}\ell_{2}-p\,\ell_{1}^{2}\ell_{2}+2(p^{2}-p+1)\ell_{1}\ell_{2}^{2}= 0\,.\] Proof.: Let \(H\) be a hyperplane section of \(\mathcal{F}_{2}\) with \(H\cap\mathcal{F}(\mathbb{F}_{p^{2}})=\emptyset\). We work on \(\pi_{1}^{-1}(H)\). Here we have that \(\dim M_{2}/(F,V)M_{2}=3\) and we thus have a rank \(3\) locally free sheaf \(B\) on \(H\) determined by \(M_{2}/(F,V)M_{2}\). Because of the exact sequence \[0\to VM_{2}/VM_{2}\cap FM_{2}\to M_{2}/FM_{2}\to M_{2}/(F,V)M_{2}\to 0\] we have the exact sequence \[0\to U_{2}\to\operatorname{Lie}(Y_{2})^{(p)}{}^{\vee}\to B\to 0\,,\] since \(VM_{2}/VM_{2}\cap FM_{2}=VM_{2}/pM_{3}\). 
We thus find \[[B]=[4]-[Q_{2}^{(p)}]+[Q_{2}^{(p^{2})}]-[U_{2}]=[4]+[U_{2}^{(p)}]-[U_{2}^{(p^ {2})}]-[U_{2}]\,.\] We also have the inclusions \((F,V)M_{2}\subset M_{1}\subset M_{2}\) on \(\pi_{1}^{-1}(H)\) and we thus have a locally free sheaf \(L\) corresponding to \(M_{1}/(F,V)M_{2}\). In the Grothendieck group we have the corresponding relation \([B]=[L]+[Q_{1}^{(p)}]\). Thus we find \([L]=[4]+[U_{2}^{(p)}]-[U_{2}^{(p^{2})}]-[U_{2}]-[Q_{1}^{(p)}]\) and we see that the total Chern class of \(L\) is given by \[c(L)=\frac{(1-p\ell_{2})}{(1-p^{2}\ell_{2})(1-\ell_{2})}\,\frac{1}{(1+p\ell_{1 }+p^{2}c_{2}(Q_{1}))}\,.\] But \(L\) has rank \(1\), so \(c_{2}(L)=0\). With \(c_{2}(Q_{1})=(\ell_{0}^{2}+\ell_{1}^{2}-\ell_{2}^{2})/2\) this gives \[(p^{4}-p^{3}+\frac{3}{2}p^{2}-p+1)\ell_{2}^{2}-(p^{3}-p^{2}+p)\ell_{1}\ell_{2 }-\frac{1}{2}p^{2}\ell_{0}^{2}+\frac{1}{2}p^{2}\ell_{1}^{2}=0\,.\] Recall now that the class of \(H\) is \(\ell_{2}\). Multiplying the preceding relation by \(\ell_{2}\) and using \(\ell_{2}^{3}=0\) we find \[p\ell_{0}^{2}\ell_{2}-p\ell_{1}^{2}\ell_{2}+2(p^{2}-p+1)\ell_{1}\ell_{2}^{2}= 0\,.\] As remarked above we need five intersection numbers: \[\ell_{0}^{3}\ell_{1},\,\ell_{0}^{3}\ell_{2},\,\ell_{0}\ell_{1}^{3},\,\ell_{0} \ell_{1}^{2}\ell_{2},\,\ell_{0}\ell_{1}\ell_{2}^{2}\,.\] We know already the last one by Lemma 17.3. By multiplying the relations of Lemma 17.6 and 17.7 by \(\ell_{0}\) we find in total three relations coming from Lemmas 17.4, 17.6 and 17.7 between these five intersection numbers. **Corollary 17.8**.: _We have \(\deg(\ell_{0}^{3}\ell_{1})=p(p^{2}+1)(p^{2}-p+1)\)._ Proof.: The sum of \(p\) times the relation of 17.6 and \((p-1)\) times that of 17.7 gives the relation \(2p\,\ell_{0}^{3}\ell_{1}-2(p^{2}-p+1)\ell_{0}\ell_{1}\ell_{2}^{2}=0\). Using the three relations and Lemma 17.3 our five intersection numbers depend on one unknown. **Corollary 17.9**.: _With \(x=\deg(\ell_{0}\ell_{1}^{2}\ell_{2})\) we find that_ \[\deg\left[\begin{array}{c}\ell_{0}^{3}\ell_{1}\\ \ell_{0}^{3}\ell_{2}\\ \ell_{0}\ell_{1}^{3}\\ \ell_{0}\ell_{1}^{2}\ell_{2}\\ \ell_{0}\ell_{1}\ell_{2}^{2}\end{array}\right]=\left[\begin{array}{c}p(p^{2} +1)(p^{2}-p+1)\\ x-2p(p^{2}+1)(p^{2}-p+1)\\ 2(p-1+1/p)x-(p^{2}+1)^{2}(2p^{2}-3p+2)\\ x\\ p^{2}(p^{2}+1)\end{array}\right]\] _Remark 17.10_.: We have on \(\mathcal{F}_{0}\) \[\deg\lambda_{1}^{4}=8\,(p-1)^{4}(p^{2}+p+1)\left(\frac{\deg(\ell_{0}\ell_{1}^ {2}\ell_{2})}{p}-(p^{2}+1)(p-1)^{2}\right)\,.\] Since \(\lambda_{1}\) is ample on \(S_{4}\) this should be positive and this gives \[\deg(\ell_{0}\ell_{1}^{2}\ell_{2})>p(p^{2}+1)(p-1)^{2}\,.\] We now determine the last intersection number. Recall that the second Chern class \(c_{2}(Q_{1})\) satisfies \(c_{2}(Q_{1})=(\ell_{0}^{2}+\ell_{1}^{2}-\ell_{2}^{2})/2\). Furthermore, recall the cycle class \([\overline{D(\psi)}]\) of a 'horizontal' \(a\geq 3\)-locus on \(\mathcal{F}_{1}\) given by \[[\overline{D(\psi)}]=p\,\ell_{1}-(p^{2}+1)\ell_{2}+e\] with \(e\) a class with support in the exceptional fibres as given in Lemma 13.2. **Proposition 17.11**.: _We have \(c_{2}(Q_{1})\cdot[\overline{D(\psi)}]=0\) and \(c_{2}(Q_{1})\cdot e=0\)._ Proof.: Since \(Q_{1}\) is the tautological quotient of the \(O_{\mathcal{F}_{1}}\)-module associated to \(M_{2}/FM_{2}\) by the universal rank \(2\) subbundle \(U_{1}\), the second Chern class can be realized as the class of the locus where the fibre of \(U_{1}\) contains a fixed vector. 
For this we choose an element \(v^{\prime}\) of \(M_{2}/FM_{2}\) that has the property that over each affine part of \(\mathcal{F}_{2}\) with \(a_{i}\neq 0\) (for \(i=1,\dots,4\)) it is of the form \[v^{\prime}=\alpha_{5}\,v_{0}+\alpha_{6}\,Fx_{2}+\alpha_{7}Fx_{3}+\alpha_{8}Fx_ {4}\] with the property that the equation \(g_{2}=0\), that is, \[a_{1}\,\alpha_{8}^{p}-a_{1}^{p}\alpha_{5}^{p-1}\alpha_{8}+a_{2}\alpha_{7}^{p} -a_{2}^{p}\alpha_{5}^{p-1}\alpha_{7}+a_{3}^{p}\alpha_{5}^{p-1}\alpha_{6}-a_{3} \alpha_{6}^{p}=0\] has no solutions with \((a_{1},a_{2},a_{3},a_{4})\in\mathbb{F}_{p^{2}}\) with \(a_{i}\neq 0\). Indeed, choosing \(\alpha_{5}\neq 0\), \(\alpha_{6}\) and \(\alpha_{7}\) there are only finitely many \(\alpha_{8}\) satisfying this equation. Then since \(\alpha_{5}\neq 0\), we see that this locus has zero intersection with \(\overline{D(\psi)}\). We get \(c_{2}(Q_{1})\cdot[\overline{D(\psi)}]=0\). By the requirement that we put over \(\mathcal{F}_{2}(\mathbb{F}_{p^{2}})\) we see that also \(c_{2}(Q_{1})\cdot e=0\). **Corollary 17.12**.: _We have \((\ell_{0}^{2}+\ell_{1}^{2}-\ell_{2}^{2})(p\ell_{1}-(p^{2}+1)\ell_{2})=0\)._ Proof.: Recall that \(c_{2}(Q_{1})=(\ell_{0}^{2}+\ell_{1}^{2}-\ell_{2}^{2})/2\) and \([\overline{D(\psi)}]=p\ell_{1}-(p^{2}+1)\ell_{2}+e\) with \(e\) a class with support in the exceptional fibres. By combining Corollary 17.9 and Corollary 17.12 we can determine all the intersection numbers. **Corollary 17.13**.: _We have on \(\mathcal{F}_{0}\)_ \[\deg\left[\begin{array}{c}\ell_{0}^{3}\ell_{1}\\ \ell_{0}^{3}\ell_{2}\\ \ell_{0}\ell_{1}^{2}\\ \ell_{0}\ell_{1}\ell_{2}^{2}\end{array}\right]=p\left(p^{2}+1\right)\left[ \begin{array}{c}p^{2}-p+1\\ -p^{2}+p-1\\ -(p-1)^{2}\\ p^{2}-p+1\\ p\end{array}\right]\] Finally we are ready to calculate the coefficient \(f_{4}(p)\) of Theorem 1.1. **Theorem 17.14**.: _The class of the supersingular locus \(S_{4}\subset\mathcal{A}_{4}\otimes\mathbb{F}_{p}\) in the Chow ring of \(\tilde{\mathcal{A}}_{4}\otimes\mathbb{F}_{p}\) equals_ \[[S_{4}]=(p-1)^{3}(p^{3}-1)(p^{4}-1)(p^{6}-1)\lambda_{4}\lambda_{2}\,.\] Proof.: For each irreducible component \(S\) of \(S_{4}\) we calculate the degree of \(\lambda_{3}\lambda_{1}\) on the model \(\mathcal{F}_{0}\) of \(S\). Indeed, we have \([S_{4}]=a\lambda_{4}\lambda_{2}\) with \(a=\lambda_{3}\lambda_{1}[S_{4}]/v(4)\) by Proposition 16.1. A calculation using Corollary 17.13 and taking into account the degree \(p\) of the map \(\mathcal{F}_{0}\to S\) (see Lemma 12.1) yields that \(\deg(\lambda_{3}\lambda_{1})\) on \(S\) equals \(1/p\) times the degree on \(\mathcal{F}_{0}\) of \[(p^{2}-3p+1)\ell_{0}^{3}\ell_{1}+(2p^{2}-2p+2)\ell_{0}^{3}\ell_{2 }+(p^{2}-3p+1)\ell_{0}\ell_{1}^{3}+\] \[4(p-1)^{2}\ell_{0}\ell_{1}^{2}\ell_{2}+(5p^{2}-7p+5)\ell_{0}\ell _{1}\ell_{2}^{2}\] and this equals \((p-1)^{4}(p^{2}+p+1)(p^{2}+1)\). Multiplying this with the number of irreducible components \((p^{2}-1)(p^{6}-1)v(4)\) we find the coefficient \(a=(p-1)^{3}(p^{3}-1)(p^{4}-1)(p^{6}-1)\).
2310.19278
he Cauchy problem for the Novikov equation under a nonzero background: Painlevé asymptotics in a transition zone
In this paper, we investigate the Painlev\'e asymptotics in a transition zone for the solutions to the Cauchy problem of the Novikov equation under a nonzero background \begin{align} &u_{t}-u_{txx}+4 u_{x}=3uu_xu_{xx}+u^2u_{xxx}, \nonumber &u(x, 0)=u_{0}(x),\nonumber \end{align} where $u_0(x)\rightarrow \kappa>0, \ x\rightarrow \pm \infty$ and $u_0(x)-\kappa$ is assumed in the Schwarz space. This result is established by performing the $\overline\partial$-steepest descent analysis to a Riemann-Hilbert problem associated with the the Cauchy problem in a new spatial scale \begin{equation*} y = x - \int_{x}^{\infty} \left((u-u_{xx}+1)^{2/3}-1\right)ds, \end{equation*} for large times in the transition zone $y/t \approx -1/8 $. It is shown that the leading order term of the asymptotic approximation comes from the contribution of solitons, while the sub-leading term is related to the solution of the Painlev\'e \uppercase\expandafter{\romannumeral2} equation.n.
Zhaoyu Wang, Xuan Zhou, Engui Fan
2023-10-30T05:27:59Z
http://arxiv.org/abs/2310.19278v2
The Cauchy problem for the Novikov equation under a nonzero background: Painleve asymptotics in a transition zone ###### Abstract In this paper, we investigate the Painleve asymptotics in a transition zone for the solutions to the Cauchy problem of the Novikov equation under a nonzero background \[u_{t}-u_{txx}+4u_{x}=3uu_{x}u_{xx}+u^{2}u_{xxx},\] \[u(x,0)=u_{0}(x),\] where \(u_{0}(x)\rightarrow\kappa>0,\ x\rightarrow\pm\infty\) and \(u_{0}(x)-\kappa\) is assumed in the Schwarz space. This result is established by performing the \(\overline{\partial}\)-steepest descent analysis to a Riemann-Hilbert problem associated with the the Cauchy problem in a new spatial scale \[y=x-\int_{x}^{\infty}\left((u-u_{xx}+1)^{2/3}-1\right)ds,\] for large times in the transition zone \(y/t\approx-1/8\). It is shown that the leading order term of the asymptotic approximation comes from the contribution of solitons, while the sub-leading term is related to the solution of the Painleve II equation. keywords: Novikov equation, Riemann-Hilbert problem, \(\overline{\partial}\)-steepest descent method, Painleve transcendents, large time asymptotics. _Mathematics Subject Classification:_ 35Q51; 35Q15; 35C20; 37K15. ###### Contents * 1 Introduction * 2 Inverse Scattering and RH Problem * 2.1 The Lax pair and spectral analysis * 2.2 An RH characterization * 3 Interpolation and Conjugation * 4 Painleve Asymptotics in Transition Zone \(y/t\approx-1/8\) * 4.1 Opening \(\bar{\partial}\)-lenses * 4.2 A hybrid \(\bar{\partial}\)-RH problem and its decomposition * 4.3 Contribution from discrete spectrum * 4.4 Contribution from jump contours * 4.4.1 Local model near critical points * 4.4.2 RH problem near singularities * 4.4.3 Small norm RH problem for the residual error * 4.5 Contribution from \(\bar{\partial}\)-components * 4.6 Proof of Theorem 1.1 * A Modified Painleve II RH Problem * B Model RH Problem for the Transition Zone ## 1 Introduction In this paper, we are concerned with the Painleve asymptotics of solutions to the Cauchy problem for the Novikov equation under a nonzero background \[u_{t}-u_{txx}+4u_{x}=3uu_{x}u_{xx}+u^{2}u_{xxx}, \tag{1.1}\] \[u(x,0)=u_{0}(x),\quad x\in\mathbb{R},\ t>0,\] (1.2) \[u_{0}(x)\rightarrow\kappa>0,\ x\rightarrow\pm\infty, \tag{1.3}\] where \(u=u(x,t)\) is a real-valued function of \(x\) and \(t\). By introducing the momentum variable \(m=u-u_{xx}\), the Novikov equation (1.1) can be rewritten as the conversation law form \[(m^{2/3})_{t}+\left(u^{2}m^{2/3}\right)_{x}=0. \tag{1.4}\] The Novikov equation (1.1) as a new integrable system was derived from the classification of integrable generalized Camassa-Holm equations of the form \[(1-\partial_{x}^{2})u_{t}=F(u,u_{x},u_{xx},...) \tag{1.5}\] possessing infinite hierarchies of higher symmetries. The Novikov equation possesses a scalar Lax pair involving the third order derivative with respect to \(x\), which has been provided [1; 2]. Furthermore, by employing the prolongation algebra method, Hone and Wang introduced a \(3\times 3\) matrix Lax pair and established a bi-Hamiltonian structure for the Novikov equation (1.1) [3]. This Lax pair was used to explicitly construct peakon solutions on a zero background, replicating a feature characterizing the waves of great height-waves of largest amplitude that were exact solutions of the governing equations for water waves [3; 4; 5; 6; 7]. Hone et al. further derived the explicit formulas for multipeakon solutions of the Novikov equation (1.1) [8]. 
Matsuno, using the Hirota bilinear method, presented parametric representations of smooth multisoliton solutions for the Novikov equation (1.1) on a nonzero constant background [9]. He also demonstrated that a smooth soliton converges to a peakon in the limit where the constant background approaches zero while the velocity of the soliton is fixed. Wu et al. obtained \(N\)-soliton solutions for the Novikov equation through Darboux transformations [10]. Recently, Chang et al. applied Pfaffian technique to investigate multipeakons of the Novikov equation, establishing a connection between the Novikov peakons and the finite Toda lattice of BKP type, as well as employing Hermite-Pade approximation to address the Novikov peakon problem [11, 12]. There exists a unique global solution \(u(x,t)\) of the Novikov equation (1.1), such that \(u(x,t)\to 0\) as \(x\to\pm\infty\) for all \(t>0\)[14]. Boutet de Monvel et al. developed the inverse scattering theory to the Novikov equation (1.1) with a nonzero constant background. They proved that under a transformation \[u(x,t)\to\kappa\tilde{u}(x-\kappa^{2}t,\kappa^{2}t)+\kappa, \tag{1.6}\] the Cauchy problem (1.1)-(1.3) can be reduced into the following Cauchy problem on zero background \[(\tilde{m}^{2/3})_{t}+\left(\tilde{m}^{2/3}\left(u^{2}+2u\right) \right)_{x}=0,\;\tilde{m}=u-u_{xx}+1, \tag{1.7}\] \[u(x,0)=u_{0}(x), \tag{1.8}\] where \(u_{0}(x)\) satisfies the sign condition \[u_{0}(x)-u_{0,xx}(x)+1>0.\] Building upon the above characteristics, a Riemann-Hilbert (RH) formalism for the Cauchy problem (1.7)-(1.8) has been established [13]. The Novikov equation and DP equation possess numerous common characteristics in their RH problem and face some difficulties. One of the difficulties is the Lax pair associated with (1.7) has six spectral singularities at \(\varkappa_{n}=e^{\frac{n\pi i}{3}}\) for \(n=1,\cdots,6\), which means that the corresponding RH problem also exhibits spectral singularities. During the large time asymptotic analysis for the DP equation, Boutet de Monvel et al. considered a row vector RH problem to avoid the impact of singularities [15, 16]. On the contrary, in the case of the Novikov equation, the situation is different. The solution of the similar row vector RH problem cannot be directly used to recover \(e^{x(y,t)-y}\)[13]. Recently, this difficulty was overcome by the establishment of a small-norm RH problem near the singular points [17]. Furthermore, the large time asymptotic expansions for the solution to the Cauchy problem (1.7)-(1.8) of the Novikov equation in four different space-time regions (See Figure 1.1) \[\text{I. }\xi<-1/8;\text{ II. }-1/8<\xi<0;\text{ III. }0<\xi<1;\text{ \ IV. }\xi>1,\quad\xi:=y/t\] were obtained with the \(\bar{\partial}\) nonlinear steepest approach, which was originally introduced by McLaughlin-Miller [18, 19]. This method has been successfully applied to analyze the large time asymptotics and soliton resolution of integrable systems [20, 21, 22, 23, 24, 25]. The remaining question is: How to describe the asymptotics of the solution to the Cauchy problem (1.7)-(1.8) in the transition zones V. \(\xi\approx-1/8\) and VI. \(\xi\approx 1\) as illustrated in Figure 1.1? The aim of the present work is to present the large time asymptotics of the Novikov equation in the transition zone V. \(\xi\approx-1/8\). The leading term of the asymptotic expansion is influenced by the discrete spectrum and the sub-leading term is in terms of the solutions of the Painleve II equation. 
Our main result is stated as follows. **Theorem 1.1**.: _Let \(u(x,t)=u(y(x,t),t)\) be the solution for the Novikov equation (1.7) with generic initial data \(u_{0}\in\mathcal{S}(\mathbb{R})\) and its associated setting data \(\left\{r(z),\{\zeta_{n},c_{n}\}_{n=1}^{6N_{0}}\right\}\). Let \(u^{\diamond}(y,t)\) be the \(\mathcal{N}(\diamondsuit)\)-soliton solution corresponding to the scattering data \(\widetilde{\mathcal{D}}_{\diamondsuit}=\left\{\zeta_{n},c_{n}T^{2}(\zeta_{n}) \right\}_{n\in\diamondsuit}\) shown in Corollary 4.1. Then in the transition zone \(|\xi+\frac{1}{8}|t^{2/3}<C\) with \(\xi=y/t\), there exists a large constant \(T_{1}\) such that for all \(t>T_{1}\), we have_ \[u(y,t) =u^{\diamondsuit}(y,t;\widetilde{\mathcal{D}}_{\diamondsuit}) \left(T_{1}(e^{\frac{\pi i}{6}})T_{3}(e^{\frac{\pi i}{6}})\right)^{-1/2}-1\] \[+\frac{1}{2}\left(T_{1}(e^{\frac{\pi i}{6}})T_{3}(e^{\frac{\pi i }{6}})\right)^{-1/2}f_{11}t^{-1/3}+\mathcal{O}(t^{-2/3+2\delta_{1}}), \tag{1.9}\] \[x(y,t) =x^{\diamondsuit}(y,t;\widetilde{\mathcal{D}}_{\diamondsuit})+ \frac{1}{2}\ln T_{13}(e^{\frac{\pi i}{6}})+\frac{1}{2}f_{12}t^{-1/3}+\mathcal{ O}(t^{-2/3+2\delta_{1}}), \tag{1.10}\] _where \(u^{\diamondsuit}(y,t;\widetilde{\mathcal{D}}_{\diamondsuit})\) and \(x^{\diamondsuit}(y,t;\widetilde{\mathcal{D}}_{\diamondsuit})\) are defined in Corollary 4.1, \(T_{13}\), \(f_{11}\), and \(f_{12}\) are given by (3.14), (4.151), and (4.152) respectively, while functions \(f_{11}\) and \(f_{12}\) are related to the solution of the Painleve II equation_ \[v_{ss}=2v^{3}+sv,\quad s\in\mathbb{R}, \tag{1.11}\] _with asymptotics_ \[v(s)\sim\left|r\left(\frac{\sqrt{7}+\sqrt{3}}{2}\right)\right|\mathrm{Ai}(s),\quad s\to-\infty. \tag{1.12}\] **Remark 1.2**.: _In the subsequent paper, we will provide the large time asymptotic analysis of the Novikov equation in the transition zone VI. \(\xi\approx 1\). For this case, the critical points and spectral singularities are the same, which implies that singularities will emerge in the local model, requiring the use of alternative methods rather than the approach described in this paper._ The organization of our paper is as follows: In Section 2, we quickly review some basic results, especially the construction of a RH problem for \(M(z)\) related to the Cauchy problem (1.1)-(1.3), which will be used to analyze the large time asymptotics of the Novikov equation. For details, refer to [13, 17]. Figure 1.1: The different asymptotic regions of the Novikov equation in the \((y,t)\)-half plane, where \(\xi=y/t\). In Section 3, we convert the original RH problem for \(M(z)\) to a RH problem for \(M^{(2)}(z)\), whose jump contour can be opened with different factorizations of the jump matrix (2.29) according to the sign of the phase functions (2.25). In Section 4, we focus on the large time asymptotic analysis in the transition zone \(|\xi+1/8|t^{2/3}<C\) with the following steps. First of all, introducing a matrix-valued function \(\mathcal{R}^{(3)}(z)\) to make a continuous extension of \(M^{(2)}(z)\) into a hybrid \(\bar{\partial}\)-RH problem for \(M^{(3)}(z)\), which can be decomposed into a pure RH problem for \(M^{R}(z)\) and a pure \(\bar{\partial}\)-problem for \(M^{(4)}(z)\). We observe that the contribution to \(M^{R}(z)\) arises from three distinct components: * The first contribution originates from the discrete spectrum, where a modified reflectionless RH problem for \(M^{O}(z)\) is constructed in Subsection 4.3. 
* The second contribution comes from jump contours near the critical points generated by phase points after colliding. The corresponding local parametrix for \(M^{L}(z)\) can be approximated by the modified Painleve II RH problem near the critical points in Subsection 4.4.1. * The third contribution generates from a pure jump RH problem near the singularities in Subsection 4.4.2, which has an order of \(\mathcal{O}(t^{-1})\). The residual error function results from a small norm RH problem outside the neighborhood of critical points in Subsection 4.4.3. * In Subsection 4.5, we analyze the contribution associated with the \(\bar{\partial}\)-problem for \(M^{(4)}(z)\). * Finally, in Subsection 4.6, based on the result obtained above,we provide the proof of Theorem 1.1. ## 2 Inverse Scattering and RH Problem In this section, we state some main results on the inverse scattering transform and the RH problem associated with the Cauchy problem (1.1)-(1.3). The details can be found in [13; 17]. ### The Lax pair and spectral analysis The Novikov equation (1.7) admits the Lax pair [3; 13] \[\ddot{\Phi}_{x}=\tilde{X}\tilde{\Phi},\ \ \ \ \ \ddot{\Phi}_{t}=\tilde{T}\ddot{ \Phi}, \tag{2.1}\] where \(k\) is a spectral parameter, \(\ddot{\Phi}=\ddot{\Phi}(k;x,t)\) is a \(3\times 3\) matrix valued eigenfunction, the matrices \(\tilde{X}\) and \(\tilde{T}\) are defined by \[\tilde{X} =\begin{pmatrix}0&k\tilde{m}&1\\ 0&0&k\tilde{m}\\ 1&0&0\end{pmatrix},\] \[\tilde{T} =\begin{pmatrix}-(u+1)u_{x}+\frac{1}{3k^{2}}&\frac{u_{x}}{k}-(u^{ 2}+2u)k\tilde{m}&u_{x}^{2}+1\\ \frac{u+1}{k}&-\frac{2}{3k^{2}}&-\frac{u_{x}}{k}-(u^{2}+2u)k\tilde{m}\\ -u^{2}-2u&\frac{u+1}{k}&(u+1)u_{x}+\frac{1}{3k^{2}}\end{pmatrix}.\] Denote \[k^{2}(z)=\frac{1}{3\sqrt{3}}\left(z^{3}+\frac{1}{z^{3}}\right),\ \omega=e^{\frac{2i\pi}{3}},\] then the following algebraic equation \[\lambda^{3}(z)-\lambda(z)=k^{2}(z) \tag{2.2}\] admits three roots in the form \[\lambda_{j}(z)=\frac{1}{\sqrt{3}}\left(\omega^{j}z+\frac{1}{\omega^{j}z}\right),\ j=1,2,3. \tag{2.3}\] Moreover, \(k(\kappa_{n})=0\), for \(\kappa_{n}=e^{\frac{i\pi}{6}+\frac{i\pi(n-1)}{3}}\), \(n=1,\cdots,6\). In order to control the large \(k\) behavior of the solutions of the Lax pair (2.1), we define \[D(x,t)=\mbox{diag}\{q,q^{-1},1\},\quad q:=\tilde{m}^{1/3}(x,t),\] \[P(z)=\left(\begin{array}{ccc}\lambda_{1}^{2}(z)&\lambda_{2}^{2 }(z)&\lambda_{3}^{2}(z)\\ k(z)&k(z)&k(z)\\ \lambda_{1}(z)&\lambda_{2}(z)&\lambda_{3}(z)\end{array}\right), \tag{2.4}\] \[P^{-1}(z)=\left(\begin{array}{ccc}\frac{1}{3\lambda_{1}^{2}(z )-1}&0&0\\ 0&\frac{1}{3\lambda_{2}^{2}(z)-1}&0\\ 0&0&\frac{1}{3\lambda_{3}^{2}(z)-1}\end{array}\right)\left(\begin{array}{ccc }1&\frac{z}{\lambda_{1}(z)}&\lambda_{1}(z)\\ 1&\frac{z}{\lambda_{2}(z)}&\lambda_{2}(z)\\ 1&\frac{z}{\lambda_{3}(z)}&\lambda_{3}(z)\end{array}\right). 
\tag{2.5}\] Then the new function \[\Phi=P^{-1}(z)D^{-1}(x,t)\ddot{\Phi} \tag{2.6}\] satisfies the Lax pair \[\Phi_{x}-q^{2}\Lambda(z)\Phi=U\Phi, \tag{2.7}\] \[\Phi_{t}+\left[(u^{2}+2u)q^{2}\Lambda(z)-A(z)\right]\Phi=V\Phi, \tag{2.8}\] where \[\Lambda(z)=\mbox{diag}\{\lambda_{1}(z),\lambda_{2}(z),\lambda_{3} (z)\},\ \ A(z)=\frac{1}{3k^{2}}+\Lambda(z)^{-1},\] \[U=U_{1}U_{2},\ \ V=U_{1}(V_{1}+V_{2}\Lambda), \tag{2.9}\] \[U_{1}=\mbox{diag}\left\{\frac{1}{3\lambda_{1}^{2}-1},\frac{1}{3 \lambda_{2}^{2}-1},\frac{1}{3\lambda_{3}^{2}-1}\right\},\] \[U_{2}=\left(\begin{array}{ccc}c_{2}\lambda_{1}&c_{1}(\lambda_ {1}\lambda_{2}-\lambda_{2}^{2})+c_{2}\lambda_{2}&c_{1}(\lambda_{1}\lambda_{3}- \lambda_{3}^{2})+c_{2}\lambda_{3}\\ c_{1}(\lambda_{1}\lambda_{2}-\lambda_{1}^{2})+c_{2}\lambda_{1}&c_{2}\lambda_{2}& c_{1}(\lambda_{3}\lambda_{2}-\lambda_{3}^{2})+c_{2}\lambda_{3}\\ c_{1}(\lambda_{1}\lambda_{3}-\lambda_{3}^{2})+c_{2}\lambda_{1}&c_{1}(\lambda_{3} \lambda_{2}-\lambda_{2}^{2})+c_{2}\lambda_{2}&c_{2}\lambda_{3}\end{array} \right),\] with \(c_{1}=\frac{q_{x}}{q}\) and \(c_{2}=q^{-2}-q^{2}\). While \(V_{1}\) has same form of \(U_{1}\) with \(c_{1}\) and \(c_{2}\) replaced by \(c_{3}=-(u^{2}+2u)\frac{q_{x}}{q}\) and \(c_{4}=(u^{2}+2u)q^{2}+\frac{u_{x}^{2}+1}{q^{2}}-1\), respectively; \(V_{2}\) is given by \[V_{2}=[V_{2}^{(jl)}]_{3\times 3},\quad V_{2}^{(jl)}=c_{5}\left(\frac{1}{ \lambda_{l}}-\frac{1}{\lambda_{j}}\right)+c_{6}\left(\frac{\lambda_{j}}{ \lambda_{l}}+\frac{\lambda_{l}}{\lambda_{j}}\right). \tag{2.10}\] with \(c_{5}=\frac{u_{x}}{q}\), \(c_{6}=(u+1)q-1\), Introduce a new eigenfunction \(\mu=\mu(z;x,t)\) satisfying \[\mu=\Phi e^{-Q}, \tag{2.11}\] where \(Q=Q(z;x,t)\) is a \(3\times 3\) diagonal function \[Q=y(x,t)\Lambda(z)+tA(z),\] with \[Q_{x}=q^{2}\Lambda,\quad Q_{t}=-\left(u^{2}+2u\right)q^{2}\Lambda -A,\] \[y(x,t)=x-\int_{x}^{\infty}\left(q^{2}(s,t)-1\right)ds. \tag{2.12}\] Then the Lax pair (2.8) is changed to \[\mu_{x}-[Q_{x},\mu]=U\mu,\quad\mu_{t}-[Q_{t},\mu]=V\mu, \tag{2.13}\] whose solutions satisfy the Fredholm integral equations \[\mu^{\pm}(z;x,t)=I-\int_{x}^{\pm\infty}e^{-\bar{\Lambda}(z)\int_{x}^{s}q^{2}(v,t)dv}[U\mu_{\pm}(s,t;z)]ds. \tag{2.14}\] We define six rays at \(z=0\), \[\Sigma=\cup_{n=1}^{6}L_{n},\quad L_{n}=e^{\frac{\pi(n-1)t}{3}}\mathbb{R}^{+}, \ n=1,\cdots,6,\] which divide the complex plane \(\mathbb{C}\) into six open cones \[S_{n}=\{z\in\mathbb{C};\arg z\in((n-1)\pi/3,n\pi/3)\},\ n=1,\cdots,6,\] see Figure 2.1. Denote the matrix \[\mu^{\pm}=\left(\mu_{1}^{\pm},\mu_{2}^{\pm},\mu_{3}^{\pm}\right),\] where the scripts \(1,2\) and \(3\) denote the first, second and third column of \(\mu^{\pm}(z)\) respectively. Then from (2.14), we can show that \(\left(\mu_{1}^{+},\mu_{2}^{+},\mu_{3}^{+}\right)\) is analytical in the domains \[\left(\bar{S}_{1}\cup\bar{S}_{2},\ \bar{S}_{5}\cup\bar{S}_{6},\ \bar{S}_{3}\cup \bar{S}_{4}\right),\] while \(\left(\mu_{1}^{-},\mu_{2}^{-},\mu_{3}^{-}\right)\) is analytical in the domains \[\left(\bar{S}_{4}\cup\bar{S}_{5},\ \bar{S}_{3}\cup\bar{S}_{2},\ \bar{S}_{1}\cup \bar{S}_{6}\right).\] Here, \(\bar{S}_{j}\) denotes the closure of \(S_{j}\) for \(j=1,\cdots,6\), respectively. The initial points of integration \(\infty_{j,l}\) are specified as follows for each matrix entry \((j,l)\) for \(j,l=1,2,3\): \[\infty_{j,l}=\left\{\begin{array}{l}+\infty,\,\mbox{if}\ \mbox{Re}\lambda_{j} \geq\mbox{Re}\lambda_{l},\\ \\ -\infty,\,\mbox{if}\ \mbox{Re}\lambda_{j}<\mbox{Re}\lambda_{l}.\end{array}\right. 
\tag{2.15}\] Then (2.14) can be rewritten as a system of Fredholm integral equations \[\mu_{jl}(z;x,t)=I_{jl}-\int_{x}^{\infty_{jl}}e^{-\int_{x}^{s}q^{2}(v,t)dv( \lambda_{j}(z)-\lambda_{l}(z))}[U\mu(s,t;z)]_{jl}ds. \tag{2.16}\] It was shown that the eigenfunction \(\mu(z)\) defined by (2.11) has the following properties [13]. **Proposition 2.1**.: _The equations (2.14) uniquely define a \(3\times 3\)-matrix valued solutions \(\mu(z):=\mu(z;x,t)\) of (2.13) with the following properties:_ 1. _det_ \(\mu(z)\)_=1._ 2. \(\mu(z)\) _is piecewise meromorphic with respect to_ \(\mathbb{C}\setminus\Sigma\)_, as a function of the spectral parameter_ \(z\)_._ 3. \(\mu(z)\) _obeys the symmetries_ \[\mu(z)=\Gamma_{1}\overline{\mu(\bar{z})}\Gamma_{1}=\Gamma_{2}\overline{\mu( \omega^{2}\bar{z})}\Gamma_{2}=\Gamma_{3}\overline{\mu(\omega\bar{z})}\Gamma_{ 3}=\overline{\mu(\bar{z}^{-1})},\] _where_ \[\Gamma_{1}=\left(\begin{array}{cc}0&1&0\\ 1&0&0\\ 0&0&1\end{array}\right),\quad\Gamma_{2}=\left(\begin{array}{cc}0&0&1\\ 0&1&0\\ 1&0&0\end{array}\right),\quad\Gamma_{3}=\left(\begin{array}{cc}1&0&0\\ 0&0&1\\ 0&1&0\end{array}\right).\] (2.17) 4. \(\mu(z)\) _has pole singularities at_ \(\varkappa_{n}=e^{\frac{n\pi i}{3}}\) _with_ \(\mu=\mathcal{O}(\frac{1}{z-\varkappa_{n}})\) _as_ \(z\to\varkappa_{n}\)_._ 5. \(\mu(z)\to I\) _as_ \(z\to\infty\)_, and for_ \(z\in\mathbb{C}\setminus\Sigma\)_,_ \(\mu(z)\) _is bounded as_ \(x\to-\infty\) _and_ \(\mu(z)\to I\) _as_ \(x\to+\infty\)_._ **Remark 2.1**.: _From the symmetries in Proposition 2.1(3), it follows that the values of \(\mu\) at \(z\) and at \(\omega z\) are related by_ \[\mu(\omega z)=C^{-1}\mu(z)C,\ \ \mbox{where}\ \ C=\left(\begin{array}{cc}0&0&1 \\ 1&0&0\\ 0&1&0\end{array}\right). \tag{2.18}\] Figure 2.1: The \(z\)-plane is divided into six analytical domains \(S_{n},n=1,\cdots,6\), by six rays \(L_{n}\). The red points \(\varkappa_{n}\) and \(\kappa_{n}\) are spectral singularities. Denote \(\mu_{\pm}(z)\) as the limiting values of \(\mu(z^{\prime})\) as \(z^{\prime}\to z\) from the positive or negative side of \(L_{n}\), then they are related as follows \[\mu_{+}(z)=\mu_{-}(z)e^{Q}V_{n}(z)e^{-Q},\quad z\in L_{n},\ n=1,\cdots,6, \tag{2.19}\] where \(V_{n}(z)\) only depends on \(z\) and is completely determined by \(u(x,0)\), i.e., by the initial data for the Cauchy problem (1.7). Take \(L_{1}=\mathbb{R}^{+}\) and \(L_{4}=\mathbb{R}^{-}\) as an example, \(V_{n}\) for \(n=1,4\) has a special matrix structure \[V_{n}(z)=\left(\begin{array}{cc}1&-r_{\pm}(z)&0\\ \bar{r}_{\pm}(z)&1-|r_{\pm}(z)|^{2}&0\\ 0&0&1\end{array}\right),\quad z\in\mathbb{R}^{\pm}. \tag{2.20}\] where \(r_{\pm}(z)\) are single scalar functions, with \(r_{\pm}(z)\in L^{\infty}(\mathbb{R}^{\pm})\), and \(r_{\pm}(z)=\mathcal{O}(z^{-1})\) as \(z\to\pm\infty\). The symmetry of \(\mu(z)\) gives that \(r_{\pm}(z)=\overline{r_{\pm}(z^{-1})}\), therefore it also has \(r_{\pm}(z)\in L^{2}(\mathbb{R}^{\pm})\) and \(\lim_{z\to 0^{\pm}}r_{\pm}(z)=0\). Moreover, the singularities at \(\pm 1\) give that \(r_{\pm}(\pm 1)=0\). So we define the _reflection coefficient_ \[r(z)=\left\{\begin{array}{ll}r_{\pm}(z),\,z\in\mathbb{R}^{\pm},\\ \\ 0,&z=0.\end{array}\right. \tag{2.21}\] Then \(r\in L^{\infty}(\mathbb{R})\cap L^{2}(\mathbb{R})\) and \(r(z)=\mathcal{O}(z^{-1})\) as \(z\to\infty\). In references [13, 26], it was shown that there exist at most a finite number of simple poles \(z_{n}\) of \(\mu(z)\) lying in \(S_{1}\cap\{z\in\mathbb{C};\ |z|>1\}\) and \(w_{m}\) lying in \(S_{1}\cap\{z\in\mathbb{C};\ |z|=1\}\). 
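The conjugation relation (2.18) can be verified directly from the symmetries in Proposition 2.1(3); we record the short computation here for convenience (it is not spelled out in the source). The third symmetry gives \(\overline{\mu(\omega\bar{z})}=\Gamma_{3}\mu(z)\Gamma_{3}\); replacing \(z\) by \(\bar{z}\) and taking complex conjugates yields \(\mu(\omega z)=\Gamma_{3}\overline{\mu(\bar{z})}\Gamma_{3}\). Inserting the first symmetry \(\overline{\mu(\bar{z})}=\Gamma_{1}\mu(z)\Gamma_{1}\) then gives

\[\mu(\omega z)=\Gamma_{3}\Gamma_{1}\,\mu(z)\,\Gamma_{1}\Gamma_{3},\]

and a direct matrix multiplication shows \(\Gamma_{1}\Gamma_{3}=C\) and \(\Gamma_{3}\Gamma_{1}=C^{-1}\), which is exactly (2.18).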
Moreover, there are no poles on the contour \(\Sigma\) except \(\pm 1\), \(\pm\omega\) and \(\pm\omega^{2}\). To distinguish these two types of poles, we denote them by \(z_{n}\), \(z_{n}^{A}\) and \(w_{m}\), \(w_{m}^{A}\), respectively. Denote \(N_{1}\), \(N_{1}^{A}\), \(N_{2}\) and \(N_{2}^{A}\) as the numbers of \(z_{n}\), \(z_{n}^{A}\), \(w_{m}\), and \(w_{m}^{A}\), respectively. The symmetries of \(\mu(z)\) imply that \(\bar{z}_{n}^{-1}\) and \(\frac{1}{\bar{z}_{n}^{A}}\) are also poles of \(\mu(z)\) in \(S_{1}\). It is convenient to define \(\zeta_{n}=z_{n}\) and \(\zeta_{n+N_{1}}=\bar{z}_{n}^{-1}\) for \(n=1,\cdots,N_{1}\); \(\zeta_{m+2N_{1}}=w_{m}\) for \(m=1,\cdots,N_{2}\); \(\zeta_{j+2N_{1}+N_{2}}=z_{j}^{A}\) and \(\zeta_{j+2N_{1}+N_{2}+N_{1}^{A}}=\frac{1}{\bar{z}_{j}^{A}}\) for \(j=1,\cdots,N_{1}^{A}\); \(\zeta_{m+2N_{1}+2N_{1}^{A}+N_{2}}=w_{m}^{A}\) for \(m=1,\cdots,N_{2}^{A}\). For the sake of brevity, let

\[N_{0}=2N_{1}+2N_{1}^{A}+N_{2}+N_{2}^{A}.\]

Moreover, \(\omega\zeta_{n}\), \(\omega^{2}\zeta_{n}\), \(\bar{\zeta}_{n}\), \(\omega\bar{\zeta}_{n}\), \(\omega^{2}\bar{\zeta}_{n}\) are also poles of \(\mu(z)\) in \(S_{j}\), \(j=2,\cdots,6\). For convenience, let \(\zeta_{n+N_{0}}=\omega\bar{\zeta}_{n}\), \(\zeta_{n+2N_{0}}=\omega\zeta_{n}\), \(\zeta_{n+3N_{0}}=\omega^{2}\bar{\zeta}_{n}\), \(\zeta_{n+4N_{0}}=\omega^{2}\zeta_{n}\) and \(\zeta_{n+5N_{0}}=\bar{\zeta}_{n}\) for \(n=1,\cdots,N_{0}\). Therefore, define the discrete spectrum \(\mathcal{Z}\) as

\[\mathcal{Z}=\{\zeta_{n}\}_{n=1}^{6N_{0}}, \tag{2.22}\]

with \(\zeta_{n}\in S_{1}\) and \(\bar{\zeta}_{n}\in S_{6}\), whose distribution in the \(z\)-plane is shown in Figure 2.2.

As shown in [13], denote the _norming constant_ \(c_{n}\) and the residue conditions as

\[\underset{z=\zeta_{n}}{\text{Res}}\mu(z)=\lim_{z\to\zeta_{n}}\mu(z)e^{Q}\left(\begin{array}{ccc}0&-c_{n}&0\\ 0&0&0\\ 0&0&0\end{array}\right)e^{-Q}, \tag{2.23}\]

for \(n=1,\cdots,2N_{1}+N_{2}\) and

\[\underset{z=\zeta_{n}}{\text{Res}}\mu(z)=\lim_{z\to\zeta_{n}}\mu(z)e^{Q}\left(\begin{array}{ccc}0&0&0\\ 0&0&-c_{n+2N_{1}+N_{2}}\\ 0&0&0\end{array}\right)e^{-Q}, \tag{2.24}\]

for \(n=1+2N_{1}+N_{2},\cdots,N_{0}\). In addition, the collection \(\sigma_{d}=\{\zeta_{n},C_{n}\}_{n=1}^{6N_{0}}\) is called the _scattering data_, with \(C_{n}=c_{n}\), \(C_{n+N_{0}}=\omega\bar{c}_{n}\), \(C_{n+2N_{0}}=\omega c_{n}\), \(C_{n+3N_{0}}=\omega^{2}\bar{c}_{n}\), \(C_{n+4N_{0}}=\omega^{2}c_{n}\) and \(C_{n+5N_{0}}=\bar{c}_{n}\) for \(n=1,\cdots,N_{0}\).

For the work that follows, we assume the initial data satisfies \(u_{0}\in\mathcal{S}(\mathbb{R})\) and generates generic scattering data, so that \(\mu(z)\) has no poles on \(L_{n}\setminus\{\varkappa_{n}\}\), \(n=1,\cdots,6\), or at the point \(z=e^{\frac{\pi i}{6}}\). For the reflection coefficient \(r(z)\), we have the following proposition [17].

**Proposition 2.2**.: _If the initial data \(u_{0}(x)\in\mathcal{S}(\mathbb{R})\), then \(r(z)\) belongs to \(\mathcal{S}(\mathbb{R})\). There exist fixed constants \(C_{1,r}>0\) and \(C_{2,r}\) such that if \(u_{0}-u_{0,xx}>C_{2,r}>-1\), \(\parallel u_{0}-u_{0,xx}\parallel_{L^{1}}<C_{1,r}\) and \(\parallel u_{0}\parallel_{W^{3,1}\cup W^{3,\infty}}<C_{1,r}\), then \(|r(z)|<1\) for \(z\in\mathbb{R}\). In particular, \(|r(\pm 1)|=0\)._

### An RH characterization

We replace the variable \(x\) with \(y\) defined by (2.12). The price to pay for this is that the solution of the initial problem can be given only implicitly, or parametrically.
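As a small consistency check (added here for orientation; it is implicit in [13; 17]), differentiating the new scale (2.12) in \(x\) gives

\[\frac{\partial y}{\partial x}=1+\left(q^{2}(x,t)-1\right)=q^{2}(x,t),\]

so the diagonal phase \(Q=y(x,t)\Lambda(z)+tA(z)\) indeed satisfies \(Q_{x}=q^{2}\Lambda\), as recorded in (2.12); this is what makes \(y\) the natural variable for the phase functions introduced next.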
Figure 2.2: Distribution of the discrete spectrum \(\mathcal{Z}\) in the \(z\)-plane.

Let \(\xi=\frac{y}{t}\). By the definition of the new scale \(y(x,t)\), we denote the phase function

\[\theta_{jl}(z)=-i\left[\xi(\lambda_{j}(z)-\lambda_{l}(z))+\left(\frac{1}{\lambda_{j}(z)}-\frac{1}{\lambda_{l}(z)}\right)\right]. \tag{2.25}\]

In particular,

\[\theta_{12}(z)=\sqrt{3}\left(z-\frac{1}{z}\right)\left[\xi-\frac{1}{z^{2}-1+z^{-2}}\right], \tag{2.26}\]

with \(\theta_{23}(z)=\theta_{12}(\omega z)\) and \(\theta_{31}(z)=\theta_{12}(\omega^{2}z)\). Define

\[M(z;y,t):=\mu(z;x(y,t),t), \tag{2.27}\]

which solves the following RH problem.

**RH problem 2.1**.: _Find a matrix-valued function \(M(z):=M(z;y,t)\) which satisfies_

* \(M(z)\) _is meromorphic in_ \(\mathbb{C}\setminus\Sigma\) _and has finitely many simple poles._
* \(M(z)=\Gamma_{1}\overline{M(\bar{z})}\Gamma_{1}=\Gamma_{2}\overline{M(\omega^{2}\bar{z})}\Gamma_{2}=\Gamma_{3}\overline{M(\omega\bar{z})}\Gamma_{3}=\overline{M(\bar{z}^{-1})}.\)
* \(M(z)\) _has continuous boundary values_ \(M_{\pm}(z)\) _on_ \(\Sigma\)_, and_ \[M_{+}(z)=M_{-}(z)V(z),\ \ z\in\Sigma,\] (2.28) _where_ \[V(z)=\begin{cases}\left(\begin{array}{ccc}1&-r(z)e^{it\theta_{12}}&0\\ \bar{r}(z)e^{-it\theta_{12}}&1-|r(z)|^{2}&0\\ 0&0&1\end{array}\right),\ z\in L_{1},\\ \left(\begin{array}{ccc}1&0&0\\ 0&1&-r(\omega z)e^{it\theta_{23}}\\ 0&\bar{r}(\omega z)e^{-it\theta_{23}}&1-|r(\omega z)|^{2}\end{array}\right),\ z\in L_{2},\\ \left(\begin{array}{ccc}1-|r(\omega^{2}z)|^{2}&0&\bar{r}(\omega^{2}z)e^{it\theta_{13}}\\ 0&1&0\\ -r(\omega^{2}z)e^{-it\theta_{13}}&0&1\end{array}\right),\ z\in L_{3},\\ \left(\begin{array}{ccc}1&-r(z)e^{it\theta_{12}}&0\\ \bar{r}(z)e^{-it\theta_{12}}&1-|r(z)|^{2}&0\\ 0&0&1\end{array}\right),\ z\in L_{4},\\ \left(\begin{array}{ccc}1&0&0\\ 0&1&-r(\omega z)e^{it\theta_{23}}\\ 0&\bar{r}(\omega z)e^{-it\theta_{23}}&1-|r(\omega z)|^{2}\end{array}\right),\ z\in L_{5},\\ \left(\begin{array}{ccc}1-|r(\omega^{2}z)|^{2}&0&\bar{r}(\omega^{2}z)e^{it\theta_{13}}\\ 0&1&0\\ -r(\omega^{2}z)e^{-it\theta_{13}}&0&1\end{array}\right),\ z\in L_{6}.\end{cases} \tag{2.29}\]
* \(M(z)=I+\mathcal{O}(z^{-1}),\ \ \ \ z\to\infty\).
* _As_ \(z\to\varkappa_{n}=e^{\frac{i\pi(n-1)}{3}}\)_,_ \(n=1,\cdots,6\)_, the limit of_ \(M(z)\) _has pole singularities_ \[M(z)=\frac{1}{z\mp 1}\left(\begin{array}{ccc}\alpha_{\pm}&\alpha_{\pm}&\beta_{\pm}\\ -\alpha_{\pm}&-\alpha_{\pm}&-\beta_{\pm}\\ 0&0&0\end{array}\right)+\mathcal{O}(1),\ z\to\pm 1,\] (2.30) \[M(z)=\frac{\pm\omega^{2}}{z\mp\omega^{2}}\left(\begin{array}{ccc}0&0&0\\ \beta_{\pm}&\alpha_{\pm}&\alpha_{\pm}\\ -\beta_{\pm}&-\alpha_{\pm}&-\alpha_{\pm}\end{array}\right)+\mathcal{O}(1),\ z\to\pm\omega^{2},\] (2.31) \[M(z)=\frac{\pm\omega}{z\mp\omega}\left(\begin{array}{ccc}-\alpha_{\pm}&-\beta_{\pm}&-\alpha_{\pm}\\ 0&0&0\\ \alpha_{\pm}&\beta_{\pm}&\alpha_{\pm}\end{array}\right)+\mathcal{O}(1),\ z\to\pm\omega,\] (2.32) _with_ \(\alpha_{\pm}=\alpha_{\pm}(y,t)=-\bar{\alpha}_{\pm}\)_,_ \(\beta_{\pm}=\beta_{\pm}(y,t)=-\bar{\beta}_{\pm}\)_, and_ \(M^{-1}(z)\) _has the same matrix structure with_ \(\alpha_{\pm}\)_,_ \(\beta_{\pm}\) _replaced by_ \(\tilde{\alpha}_{\pm}\)_,_ \(\tilde{\beta}_{\pm}\)_.
Moreover,_ \((\alpha_{\pm},\ \beta_{\pm})\neq 0\) _iff_ \(\left(\tilde{\alpha}_{\pm},\ \tilde{\beta}_{\pm}\right)\neq 0\)_._ * \(M(z)\) _has simple poles at each point in_ \(\mathcal{Z}\) _with_ \[\begin{split}&\operatorname*{Res}_{k=\zeta_{n}}M(k)=\lim_{k\to \zeta_{n}}M(k)B_{n},\\ &\operatorname*{Res}_{k=\omega\zeta_{n}}M(k)=\lim_{k\to\omega \zeta_{n}}M(k)\Gamma_{3}(\omega\bar{B}_{n})\Gamma_{3}:=\lim_{k\to\zeta_{n}}M(k )B_{n+N},\\ &\operatorname*{Res}_{k=\omega\zeta_{n}}M(k)=\lim_{k\to\omega \zeta_{n}}M(k)C^{2}(\omega^{2}B_{n})C^{-2}:=\lim_{k\to\zeta_{n}}M(k)B_{n+2N}, \\ &\operatorname*{Res}_{k=\omega^{2}\zeta_{n}}M(k)=\lim_{k\to\omega ^{2}\zeta_{n}}M(k)\Gamma_{2}(\omega^{2}\bar{B}_{n})\Gamma_{2}:=\lim_{k\to\zeta _{n}}M(k)B_{n+3N},\\ &\operatorname*{Res}_{k=\omega^{2}\zeta_{n}}M(k)=\lim_{k\to\omega ^{2}\zeta_{n}}M(k)C(\omega^{2}B_{n})C^{-1}:=\lim_{k\to\zeta_{n}}M(k)B_{n+4N}, \\ &\operatorname*{Res}_{k=\tilde{\zeta}_{n}}M(k)=\lim_{k\to\tilde{ \zeta}_{n}}M(k)\Gamma_{1}\bar{B}_{n}\Gamma_{1}:=\lim_{k\to\zeta_{n}}M(k)B_{n+5 N},\end{split}\] (2.33) _where_ \[B_{n}=\begin{cases}\left(\begin{array}{ccc}0&-c_{n}e^{\mathrm{i}t\theta_{12 }(\zeta_{n})}&0\\ 0&0&0\\ 0&0&0\end{array}\right),\ n=1,\cdots,2N_{1}+N_{2},\\ \left(\begin{array}{ccc}0&0&0\\ 0&0&-c_{n}e^{\mathrm{i}t\theta_{23}(\zeta_{n})}\\ 0&0&0\end{array}\right),\ n=2N_{1}+N_{2}+1,\cdots,2N_{1}+N_{2}+2N_{1}^{A}+N_{2} ^{A}.\end{cases}\] Denote \(M(z;y,t)=(M_{jl}(z;y,t))_{jl=1}^{3}\). Then the solution of Novikov equation (1.7) can be obtained by the following reconstruction formula \[u(x,t)=u(y(x,t),t)= \frac{1}{2}\tilde{m}_{1}(y,t)\left(\frac{M_{33}(e^{\frac{i\pi}{6}} ;y,t)}{M_{11}(e^{\frac{i\pi}{6}};y,t)}\right)^{1/2}\] \[+\frac{1}{2}\tilde{m}_{3}(y,t)\left(\frac{M_{33}(e^{\frac{i\pi}{6} };y,t)}{M_{11}(e^{\frac{i\pi}{6}};y,t)}\right)^{-1/2}-1, \tag{2.34}\] where \[x(y,t)=y+\frac{1}{2}\ln\frac{M_{33}(e^{\frac{i\pi}{6}};y,t)}{M_{11}(e^{\frac{i\pi} {6}};y,t)}, \tag{2.35}\] and \[\tilde{m}_{l}:=\sum_{j=1}^{3}M_{jl}(e^{\frac{i\pi}{6}};y,t),\ l=1,2,3.\] ## 3 Interpolation and Conjugation In this section, we aim to convert the original RH problem to a new RH problem which satisfies the following conditions: * It is well behaved as \(t\to\infty\) with \(\xi\) fixed. * Different factorizations of the jump matrix (2.29) should be taken for different transition sectors. For convenience, we denote \[\mathcal{N}:=\left\{1,\cdots,N_{0}\right\},\ \tilde{\mathcal{N}}:=\left\{1, \cdots,2N_{1}+N_{2}\right\},\ \tilde{\mathcal{N}}^{A}:=\left\{1+2N_{1}+N_{2},\cdots,N_{0}\right\}.\] Further, to distinguish different types of zeros, we introduce a small positive constant \(\delta_{0}\) to give the partitions \(\Delta,\nabla\) and \(\Diamond\) of \(\mathcal{N}\) as follows: \[\nabla_{1}=\left\{j\in\tilde{\mathcal{N}};\mathrm{Im}\theta_{12}( \zeta_{j})>\delta_{0}\right\},\ \Delta_{1}=\left\{j\in\tilde{\mathcal{N}};\mathrm{Im}\theta_{12}(\zeta_{j})<0 \right\},\] \[\nabla_{2}=\left\{i\in\tilde{\mathcal{N}}^{A};\mathrm{Im}\theta_ {23}(\zeta_{i})>\delta_{0}\right\},\ \Delta_{2}=\left\{i\in\tilde{\mathcal{N}}^{A};\mathrm{Im}\theta_{23}(\zeta_{i} )<0\right\}, \tag{3.1}\] \[\Diamond_{1}=\left\{j_{0}\in\tilde{\mathcal{N}};0\leq\mathrm{Im} \theta_{12}(\zeta_{j_{0}})\leq\delta_{0}\right\},\ \Diamond_{2}=\left\{i_{0}\in\tilde{\mathcal{N}}^{A};0\leq\mathrm{Im}\theta_{23}( \zeta_{i_{0}})\leq\delta_{0}\right\},\] (3.2) \[\nabla=\nabla_{1}\cup\nabla_{2},\ \Delta=\Delta_{1}\cup\Delta_{2},\ \Diamond= \Diamond_{1}\cup\Diamond_{2}. 
\tag{3.3}\]

For \(\zeta_{n}\) with \(n\in\Delta\), the residue of \(M(z)\) at \(\zeta_{n}\) in RH problem 2.1 grows without bound as \(t\to\infty\). However, for \(\zeta_{n}\) with \(n\in\nabla\), the residue decays to \(0\). Denote the two constants \(\mathcal{N}(\Diamond)=|\Diamond|\) and

\[\rho_{0}=\min_{n\in\mathcal{N}\setminus\Diamond}\left\{|\mathrm{Im}\theta_{12}(\zeta_{n})|,\ |\mathrm{Im}\theta_{23}(\zeta_{n})|\right\}>0. \tag{3.4}\]

For the poles \(\zeta_{n}\) with \(n\in\mathcal{N}\setminus\Diamond\), we want to convert the residues at these poles into jumps along small closed loops enclosing them, respectively.

The jump matrix \(V(z)\) on \(\Sigma\) in (2.28) has the following factorizations. On the contour \(\mathbb{R}\),

\[V(z)=\left(\begin{array}{ccc}1&0&0\\ \bar{r}e^{-it\theta_{12}}&1&0\\ 0&0&1\end{array}\right)\left(\begin{array}{ccc}1&-re^{it\theta_{12}}&0\\ 0&1&0\\ 0&0&1\end{array}\right) \tag{3.5}\]
\[=\left(\begin{array}{ccc}1&\frac{-re^{it\theta_{12}}}{1-|r|^{2}}&0\\ 0&1&0\\ 0&0&1\end{array}\right)\left(\begin{array}{ccc}\frac{1}{1-|r|^{2}}&0&0\\ 0&1-|r|^{2}&0\\ 0&0&1\end{array}\right)\left(\begin{array}{ccc}1&0&0\\ \frac{\bar{r}e^{-it\theta_{12}}}{1-|r|^{2}}&1&0\\ 0&0&1\end{array}\right);\]

On the contour \(\omega^{2}\mathbb{R}\),

\[V(z)=\left(\begin{array}{ccc}1&0&0\\ 0&1&0\\ 0&\bar{r}(\omega z)e^{-it\theta_{23}}&1\end{array}\right)\left(\begin{array}{ccc}1&0&0\\ 0&1&-r(\omega z)e^{it\theta_{23}}\\ 0&0&1\end{array}\right) \tag{3.6}\]
\[=\left(\begin{array}{ccc}1&0&0\\ 0&1&-\frac{r(\omega z)e^{it\theta_{23}}}{1-|r(\omega z)|^{2}}\\ 0&0&1\end{array}\right)\left(\begin{array}{ccc}1&0&0\\ 0&\frac{1}{1-|r(\omega z)|^{2}}&0\\ 0&0&1-|r(\omega z)|^{2}\end{array}\right)\left(\begin{array}{ccc}1&0&0\\ 0&1&0\\ 0&\frac{\bar{r}(\omega z)e^{-it\theta_{23}}}{1-|r(\omega z)|^{2}}&1\end{array}\right);\]

On the contour \(\omega\mathbb{R}\),

\[V(z)=\left(\begin{array}{ccc}1&0&\bar{r}(\omega^{2}z)e^{it\theta_{13}}\\ 0&1&0\\ 0&0&1\end{array}\right)\left(\begin{array}{ccc}1&0&0\\ 0&1&0\\ -r(\omega^{2}z)e^{-it\theta_{13}}&0&1\end{array}\right) \tag{3.7}\]
\[=\left(\begin{array}{ccc}1&0&0\\ 0&1&0\\ \frac{-r(\omega^{2}z)e^{-it\theta_{13}}}{1-|r(\omega^{2}z)|^{2}}&0&1\end{array}\right)\left(\begin{array}{ccc}1-|r(\omega^{2}z)|^{2}&0&0\\ 0&1&0\\ 0&0&\frac{1}{1-|r(\omega^{2}z)|^{2}}\end{array}\right)\left(\begin{array}{ccc}1&0&\frac{\bar{r}(\omega^{2}z)e^{it\theta_{13}}}{1-|r(\omega^{2}z)|^{2}}\\ 0&1&0\\ 0&0&1\end{array}\right).\]

To achieve our goal, we utilize these factorizations to deform the jump contours so that the oscillating factors \(e^{\pm it\theta_{12}}\), whose behavior is determined by the sign of \(\mathrm{Im}\,\theta_{12}\), decay in the corresponding regions. We consider the signature table of \(\mathrm{Im}\,\theta_{12}\),

\[\mathrm{Im}\,\theta_{12}=\sqrt{3}\mathrm{Im}z\left(1+|z|^{-2}\right)\xi-\frac{\sqrt{3}\mathrm{Im}z\left(1+|z|^{-2}\right)\left(-|z|^{6}-|z|^{4}+4\mathrm{Re}^{2}z|z|^{2}-|z|^{2}\right)}{|z|^{8}+1+2[(\mathrm{Re}^{2}z-\mathrm{Im}^{2}z)^{2}-4\mathrm{Re}^{2}z\mathrm{Im}^{2}z]-2(1+|z|^{4})(\mathrm{Re}^{2}z-\mathrm{Im}^{2}z)+|z|^{4}}, \tag{3.8}\]

which is depicted in Figure 3.1. To proceed, we introduce the following scalar RH problem for the transition zone (as shown in Figure 1(b)).
**RH problem 3.1**.: _Find a scalar function \(\delta_{1}(z)\) with_

* \(\delta_{1}(z)\) _is analytic in_ \(\mathbb{C}\setminus\mathbb{R}\)_._
* \(\delta_{1}(z)\) _has the jump relation_ \[\delta_{1,+}(z)=\delta_{1,-}(z)(1-|r(z)|^{2})^{-1},\ z\in\mathbb{R}.\]
* \(\delta_{1}(z)\to 1\) _as_ \(z\to\infty\)_._

By the Plemelj formula, this RH problem admits a unique solution

\[\delta_{1}(z)=\exp\left(-i\int_{\mathbb{R}}\frac{\nu(s)ds}{s-z}\right), \tag{3.9}\]

where \(\nu(s)=-\frac{1}{2\pi}\log(1-|r(s)|^{2})\). Moreover, define

\[H(z)=\prod_{n\in\Delta_{1}}\frac{z-\zeta_{n}}{z-\bar{\zeta}_{n}}\prod_{m\in\Delta_{2}}\frac{z-\omega\zeta_{m}}{z-\omega\bar{\zeta}_{m}}\,\delta_{1}(z,\xi)^{-1}; \tag{3.10}\]
\[T_{1}(z)=T_{1}(z,\xi)=\frac{H(\omega^{2}z)}{H(z)}; \tag{3.11}\]
\[T_{2}(z)=T_{2}(z,\xi)=T_{1}(\omega z)=\frac{H(z)}{H(\omega z)}; \tag{3.12}\]
\[T_{3}(z)=T_{3}(z,\xi)=T_{1}(\omega^{2}z)=\frac{H(\omega z)}{H(\omega^{2}z)}; \tag{3.13}\]
\[T_{ij}(z)=T_{ij}(z,\xi)=\frac{T_{i}(z)}{T_{j}(z)},\;i,j=1,2,3. \tag{3.14}\]

In the above formulas, we choose the principal branch of the power and logarithm functions.

**Proposition 3.1**.: _The functions defined by (3.11) and (3.14) have the following properties:_

1. \(T_{1}(z)\) _is meromorphic in_ \(\mathbb{C}\setminus\mathbb{R}\)_. For each_ \(n\in\Delta_{1}\)_,_ \(T_{1}(z)\) _exhibits a simple pole at_ \(\zeta_{n}\) _and a simple zero at_ \(\bar{\zeta}_{n}\)_; while for each_ \(m\in\Delta_{2}\)_,_ \(T_{1}(z)\) _possesses a simple pole at_ \(\omega\zeta_{m}\) _and a simple zero at_ \(\omega\bar{\zeta}_{m}\)_._
2. \(\overline{T_{1}(\bar{z})}=T_{1}(\omega z)=T_{1}(z^{-1})\)_._
3. \(T_{1}(z)\) _satisfies the jump conditions_ \[T_{1,-}(z)=(1-|r(z)|^{2})T_{1,+}(z),\quad z\in\mathbb{R},\] (3.15) \[T_{1,+}(z)=(1-|r(\omega^{2}z)|^{2})T_{1,-}(z),\quad z\in\omega\mathbb{R}.\] (3.16)
4. \(\lim_{z\to\infty}T_{1}(z):=T_{1}(\infty)\) _exists, with_ \(T_{1}(\infty)=1\)_._
5. \(T_{1}(e^{\frac{i\pi}{6}})\) _exists as a constant._
6. \(T_{1}(z)\) _is continuous at_ \(z=0\)_, with_ \(T_{1}(0)=1\)_._

For

\[\varrho:=\frac{1}{4}\min\left\{\min_{j\in\mathcal{N}}|\mathrm{Im}\zeta_{j}|,\ \min_{j\in\mathcal{N},\ \arg z=\frac{\pi}{3}}|\zeta_{j}-z|,\ \min_{j\in\mathcal{N}\setminus\Diamond,\ \mathrm{Im}\theta_{jk}(z)=0}|\zeta_{j}-z|,\ \min_{j\in\mathcal{N}}|\zeta_{j}-e^{\frac{i\pi}{6}}|,\ \min_{j\neq k\in\mathcal{N}}|\zeta_{j}-\zeta_{k}|\right\}, \tag{3.17}\]

we also define

\[\mathbb{D}_{n}:=\mathbb{D}(\zeta_{n},\varrho)=\{z:|z-\zeta_{n}|\leq\varrho\},\ n\in\mathcal{N},\]

to be small disks, which are pairwise disjoint, and disjoint from the critical lines \(\{z\in\mathbb{C};\mathrm{Im}\theta(z)=0\}\) as well as from the contours \(\mathbb{R}\), \(\omega\mathbb{R}\) and \(\omega^{2}\mathbb{R}\). Besides, \(e^{\frac{i\pi}{6}}\notin\mathbb{D}_{n}\). By the above definition and the symmetry of the poles and of \(\theta_{jk}(z)\), for every \(n\), there exists a \(k\in\{0,\cdots,5\}\) such that \(n-kN_{0}\in\mathcal{N}\setminus\Diamond\).
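We note, as a check added here, that (3.9) is consistent with the jump condition just stated: writing \(\delta_{1}=e^{-ig}\) with \(g(z)=\int_{\mathbb{R}}\frac{\nu(s)ds}{s-z}\), the Plemelj formula gives \(g_{+}(z)-g_{-}(z)=2\pi i\nu(z)\) on \(\mathbb{R}\), hence

\[\frac{\delta_{1,+}(z)}{\delta_{1,-}(z)}=e^{-i(g_{+}-g_{-})}=e^{2\pi\nu(z)}=(1-|r(z)|^{2})^{-1},\]

using \(\nu=-\frac{1}{2\pi}\log(1-|r|^{2})\). Similarly, the Blaschke-type factors in (3.10) place a simple pole of \(H(z)^{-1}\), and hence of \(T_{1}(z)\), at \(\zeta_{n}\) and a simple zero at \(\bar{\zeta}_{n}\) for each \(n\in\Delta_{1}\), in agreement with Proposition 3.1(1).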
Denote a piecewise matrix function

\[G(z)=\left\{\begin{array}{ll}I-\frac{B_{n}}{z-\zeta_{n}},&z\in\mathbb{D}_{n},\ n-kN_{0}\in\nabla,\ k\in\{0,\cdots,5\},\\ \left(\begin{array}{ccc}1&0&0\\ -\frac{z-\zeta_{n}}{C_{n}e^{it\theta_{12}(\zeta_{n})}}&1&0\\ 0&0&1\end{array}\right),&z\in\mathbb{D}_{n},\ n\in\Delta_{1}\text{ or }n-2N_{0}\in\Delta_{2},\\ \left(\begin{array}{ccc}1&0&0\\ 0&1&0\\ 0&-\frac{z-\zeta_{n}}{C_{n}e^{it\theta_{23}(\zeta_{n})}}&1\end{array}\right),&z\in\mathbb{D}_{n},\ n-N_{0}\in\Delta_{1}\text{ or }n-5N_{0}\in\Delta_{2},\\ \left(\begin{array}{ccc}1&0&-\frac{z-\zeta_{n}}{C_{n}e^{-it\theta_{13}(\zeta_{n})}}\\ 0&1&0\\ 0&0&1\end{array}\right),&z\in\mathbb{D}_{n},\ n-2N_{0}\in\Delta_{1}\text{ or }n-4N_{0}\in\Delta_{2},\\ \left(\begin{array}{ccc}1&0&0\\ 0&1&-\frac{z-\zeta_{n}}{C_{n}e^{-it\theta_{23}(\zeta_{n})}}\\ 0&0&1\end{array}\right),&z\in\mathbb{D}_{n},\ n-3N_{0}\in\Delta_{1}\text{ or }n-N_{0}\in\Delta_{2},\\ \left(\begin{array}{ccc}1&0&0\\ 0&1&0\\ 0&-\frac{z-\zeta_{n}}{C_{n}e^{it\theta_{23}(\zeta_{n})}}&1\end{array}\right),&z\in\mathbb{D}_{n},\ n-4N_{0}\in\Delta_{1}\text{ or }n\in\Delta_{2},\\ \left(\begin{array}{ccc}1&-\frac{z-\zeta_{n}}{C_{n}e^{-it\theta_{12}(\zeta_{n})}}&0\\ 0&1&0\\ 0&0&1\end{array}\right),&z\in\mathbb{D}_{n},\ n-5N_{0}\in\Delta_{1}\text{ or }n-3N_{0}\in\Delta_{2},\\ I,&z\text{ elsewhere},\end{array}\right. \tag{3.18}\]

where \(B_{n}\), \(n=1,\cdots,2N_{1}+N_{2}+2N_{1}^{A}+N_{2}^{A}\), are defined in RH problem 2.1, and define

\[T(z)=\mathrm{diag}\{T_{1}(z),T_{2}(z),T_{3}(z)\}.\]

Now we introduce the following transformation to construct a regular RH problem:

\[M^{(1)}(z):=M^{(1)}(z;y,t)=M(z)G(z)T(z), \tag{3.19}\]

which then satisfies the following RH problem.

**RH problem 3.2**.: _Find a matrix-valued function \(M^{(1)}(z)\) which satisfies:_

* \(M^{(1)}(z)\) _is meromorphic in_ \(\mathbb{C}\setminus\Sigma^{(1)}\)_, where_ \(\Sigma^{(1)}=\Sigma\cup\left(\underset{n\in\mathcal{N}\setminus\Diamond,\ k=0,\cdots,5}{\cup}\partial\mathbb{D}_{n+kN_{0}}\right)\)_._
* \(M^{(1)}(z)=\Gamma_{1}\overline{M^{(1)}(\bar{z})}\Gamma_{1}=\Gamma_{2}\overline{M^{(1)}(\omega^{2}\bar{z})}\Gamma_{2}=\Gamma_{3}\overline{M^{(1)}(\omega\bar{z})}\Gamma_{3}=\overline{M^{(1)}(\bar{z}^{-1})}\)_._
* \(M^{(1)}_{+}(z)=M^{(1)}_{-}(z)V^{(1)}(z),\ \ \ z\in\Sigma^{(1)}\)_, where_ \[V^{(1)}(z)=\left\{\begin{array}{ll}\left(\begin{array}{ccc}1&-\frac{rT_{21}e^{it\theta_{12}}}{1-|r|^{2}}&0\\ 0&1&0\\ 0&0&1\end{array}\right)\left(\begin{array}{ccc}1&0&0\\ \frac{\bar{r}T_{12}e^{-it\theta_{12}}}{1-|r|^{2}}&1&0\\ 0&0&1\end{array}\right),&z\in\mathbb{R},\\ \left(\begin{array}{ccc}1&0&0\\ 0&1&-\frac{r(\omega z)T_{32}e^{it\theta_{23}}}{1-|r(\omega z)|^{2}}\\ 0&0&1\end{array}\right)\left(\begin{array}{ccc}1&0&0\\ 0&1&0\\ 0&\frac{\bar{r}(\omega z)T_{23}e^{-it\theta_{23}}}{1-|r(\omega z)|^{2}}&1\end{array}\right),&z\in\omega\mathbb{R},\\ \left(\begin{array}{ccc}1&0&0\\ 0&1&0\\ \frac{-r(\omega^{2}z)T_{13}e^{-it\theta_{13}}}{1-|r(\omega^{2}z)|^{2}}&0&1\end{array}\right)\left(\begin{array}{ccc}1&0&\frac{\bar{r}(\omega^{2}z)T_{31}e^{it\theta_{13}}}{1-|r(\omega^{2}z)|^{2}}\\ 0&1&0\\ 0&0&1\end{array}\right),&z\in\omega^{2}\mathbb{R},\\ T^{-1}(z)G(z)T(z),&z\in\partial\mathbb{D}_{n}\cap\left(\underset{k=1}{\overset{3}{\cup}}S_{2k}\right),\\ T^{-1}(z)G^{-1}(z)T(z),&z\in\partial\mathbb{D}_{n}\cap\left(\underset{k=1}{\overset{3}{\cup}}S_{2k-1}\right).\end{array}\right.\] (3.20)
* \(M^{(1)}(z)=I+\mathcal{O}(z^{-1}),\ \ \ z\rightarrow\infty\)_._
* _As_ \(z\rightarrow\varkappa_{l}=e^{\frac{i\pi(l-1)}{3}}\)_,_ \(l=1,\cdots,6\)_, the limit of_ \(M^{(1)}(z)\)
_has the pole singularities_ \[M^{(1)}(z) =\frac{1}{z\mp 1}\left(\begin{array}{ccc}\alpha^{(1)}_{\pm}& \alpha^{(1)}_{\pm}&\beta^{(1)}_{\pm}\\ -\alpha^{(1)}_{\pm}&-\alpha^{(1)}_{\pm}&-\beta^{(1)}_{\pm}\\ 0&0&0\end{array}\right)T(\pm 1)+\mathcal{O}(1),\ z\rightarrow\pm 1,\] (3.21) \[M^{(1)}(z) =\frac{1}{z\mp\omega^{2}}\left(\begin{array}{ccc}0&0&0\\ \beta^{(1)}_{\pm}&\alpha^{(1)}_{\pm}&\alpha^{(1)}_{\pm}\\ -\beta^{(1)}_{\pm}&-\alpha^{(1)}_{\pm}&-\alpha^{(1)}_{\pm}\end{array}\right)T (\pm\omega^{2})+\mathcal{O}(1),\ z\rightarrow\pm\omega^{2},\] (3.22) \[M^{(1)}(z) =\frac{1}{z\mp\omega}\left(\begin{array}{ccc}-\alpha^{(1)}_{\pm }&-\beta^{(1)}_{\pm}&-\alpha^{(1)}_{\pm}\\ 0&0&0\\ \alpha^{(1)}_{\pm}&\beta^{(1)}_{\pm}&\alpha^{(1)}_{\pm}\end{array}\right)T(\pm \omega)+\mathcal{O}(1),\ z\rightarrow\pm\omega,\] (3.23) _with_ \(\alpha^{(1)}_{\pm}=\alpha^{(1)}_{\pm}(y,t)=-\bar{\alpha}^{(1)}_{\pm}\)_,_ \(\beta^{(1)}_{\pm}=\beta^{(1)}_{\pm}(y,t)=-\bar{\beta}^{(1)}_{\pm}\) _and_ \(M^{(1)}(z)^{-1}\) _has same specific matrix structure with_ \(\alpha^{(1)}_{\pm}\)_,_ \(\beta^{(1)}_{\pm}\) _replaced by_ \(\tilde{\alpha}^{(1)}_{\pm}\)_,_ \(\tilde{\beta}^{(1)}_{\pm}\)_._ * \(M^{(1)}(z)\) _has simple poles at each point_ \(\zeta_{n}\) _for_ \(n-kN_{0}\in\Diamond\) _with_ \[\operatorname*{Res}_{z=\zeta_{n}}M^{(1)}(z)=\lim_{z\to\zeta_{n}}M^{(1)}(z)\left[ T^{-1}(z)B_{n}T(z)\right].\] (3.24) Proof.: The above properties of RH problem 3.2 can be directly obtained by the properties of RH problem 2.1, Proposition 3.1, and (3.18)- (3.19). Since the jump matrices on the disks \(\mathbb{D}_{n},n-kN_{0}\in\mathcal{N}\setminus\Diamond\) decay exponentially to the identity matrix as \(t\to\infty\), it follows that the RH problem is asymptotically equivalent to the following RH problem. 
**RH problem 3.3**.: _Find a matrix-valued function \(M^{(2)}(z)\) which satisfies:_

* \(M^{(2)}(z)\) _is meromorphic in_ \(\mathbb{C}\setminus\Sigma^{(2)}\)_, where_ \(\Sigma^{(2)}=\Sigma\)_._
* \(M^{(2)}(z)\) _obeys the symmetries_ \[M^{(2)}(z)=\Gamma_{1}\overline{M^{(2)}(\bar{z})}\Gamma_{1}=\Gamma_{2}\overline{M^{(2)}(\omega^{2}\bar{z})}\Gamma_{2}=\Gamma_{3}\overline{M^{(2)}(\omega\bar{z})}\Gamma_{3}=\overline{M^{(2)}(\bar{z}^{-1})}.\] (3.25)
* \(M^{(2)}_{+}(z)=M^{(2)}_{-}(z)V^{(2)}(z),\quad z\in\Sigma^{(2)},\) _where_ \[V^{(2)}(z)=\left\{\begin{array}{ll}\left(\begin{array}{ccc}1&-\frac{rT_{21}e^{it\theta_{12}}}{1-|r|^{2}}&0\\ 0&1&0\\ 0&0&1\end{array}\right)\left(\begin{array}{ccc}1&0&0\\ \frac{\bar{r}T_{12}e^{-it\theta_{12}}}{1-|r|^{2}}&1&0\\ 0&0&1\end{array}\right),&z\in\mathbb{R},\\ \left(\begin{array}{ccc}1&0&0\\ 0&1&\frac{-r(\omega z)T_{32}e^{it\theta_{23}}}{1-|r(\omega z)|^{2}}\\ 0&0&1\end{array}\right)\left(\begin{array}{ccc}1&0&0\\ 0&1&0\\ 0&\frac{\bar{r}(\omega z)T_{23}e^{-it\theta_{23}}}{1-|r(\omega z)|^{2}}&1\end{array}\right),&z\in\omega\mathbb{R},\\ \left(\begin{array}{ccc}1&0&0\\ 0&1&0\\ \frac{-r(\omega^{2}z)T_{13}e^{-it\theta_{13}}}{1-|r(\omega^{2}z)|^{2}}&0&1\end{array}\right)\left(\begin{array}{ccc}1&0&\frac{\bar{r}(\omega^{2}z)T_{31}e^{it\theta_{13}}}{1-|r(\omega^{2}z)|^{2}}\\ 0&1&0\\ 0&0&1\end{array}\right),&z\in\omega^{2}\mathbb{R}.\end{array}\right.\] (3.26)
* \(M^{(2)}(z)=I+\mathcal{O}(z^{-1}),\quad z\to\infty\)_._
* _As_ \(z\to\varkappa_{l}=e^{\frac{i\pi(l-1)}{3}}\)_,_ \(l=1,\cdots,6\)_,_ \[M^{(2)}(z)=\frac{1}{z\mp 1}\left(\begin{array}{ccc}\alpha^{(2)}_{\pm}&\alpha^{(2)}_{\pm}&\beta^{(2)}_{\pm}\\ -\alpha^{(2)}_{\pm}&-\alpha^{(2)}_{\pm}&-\beta^{(2)}_{\pm}\\ 0&0&0\end{array}\right)T(\pm 1)+\mathcal{O}(1),\ z\to\pm 1,\] (3.27) \[M^{(2)}(z)=\frac{1}{z\mp\omega^{2}}\left(\begin{array}{ccc}0&0&0\\ \beta^{(2)}_{\pm}&\alpha^{(2)}_{\pm}&\alpha^{(2)}_{\pm}\\ -\beta^{(2)}_{\pm}&-\alpha^{(2)}_{\pm}&-\alpha^{(2)}_{\pm}\end{array}\right)T(\pm\omega^{2})+\mathcal{O}(1),\ z\to\pm\omega^{2},\] (3.28) \[M^{(2)}(z)=\frac{1}{z\mp\omega}\left(\begin{array}{ccc}-\alpha^{(2)}_{\pm}&-\beta^{(2)}_{\pm}&-\alpha^{(2)}_{\pm}\\ 0&0&0\\ \alpha^{(2)}_{\pm}&\beta^{(2)}_{\pm}&\alpha^{(2)}_{\pm}\end{array}\right)T(\pm\omega)+\mathcal{O}(1),\ z\to\pm\omega,\] (3.29) _with_ \(\alpha^{(2)}_{\pm}=\alpha^{(2)}_{\pm}(y,t)=-\bar{\alpha}^{(2)}_{\pm}\)_,_ \(\beta^{(2)}_{\pm}=\beta^{(2)}_{\pm}(y,t)=-\bar{\beta}^{(2)}_{\pm}\)_, and_ \(M^{(2)}(z)^{-1}\) _has the same matrix structure with_ \(\alpha^{(2)}_{\pm}\)_,_ \(\beta^{(2)}_{\pm}\) _replaced by_ \(\tilde{\alpha}^{(2)}_{\pm}\)_,_ \(\tilde{\beta}^{(2)}_{\pm}\)_._
* \(M^{(2)}(z)\) _has simple poles at each point_ \(\zeta_{n}\) _for_ \(n-kN_{0}\in\Diamond\) _with_ \[\operatorname*{Res}_{z=\zeta_{n}}M^{(2)}(z)=\lim_{z\to\zeta_{n}}M^{(2)}(z)\left[T^{-1}(z)B_{n}T(z)\right].\] (3.30)

**Proposition 3.2**.: _The solution of RH problem 3.2 can be approximated by the solution of RH problem 3.3:_

\[M^{(1)}(z)=M^{(2)}(z)(I+\mathcal{O}(e^{-ct})), \tag{3.31}\]

_where \(c\) is a positive constant._

Proof.: The result is derived from the theorem of Beals-Coifman and the corresponding norm estimates.

## 4 Painleve Asymptotics in Transition Zone \(y/t\approx-1/8\)

In this section, we consider the large time asymptotics in the zone \(0<(\xi+\frac{1}{8})t^{2/3}<C\) with \(C>0\), corresponding to Figure 1(c). A comparable discussion can be conducted for the other half region \(-C<(\xi+\frac{1}{8})t^{2/3}<0\), corresponding to Figure 1(b).
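Before entering the analysis, we record a heuristic for the scale \(t^{-2/3}\) defining this zone (our gloss; the precise statements are (4.49)-(4.53) below). At \(\xi=-\frac{1}{8}\) the phase \(\theta_{12}\) acquires a degenerate (double) critical point, so near it \(\theta_{12}\) is effectively cubic: the two merging saddle points are separated by a distance of order \((\xi+\frac{1}{8})^{1/2}\), while the natural oscillation scale of \(e^{it\theta_{12}}\) near a cubic point is \(t^{-1/3}\). The two scales balance precisely when \((\xi+\frac{1}{8})t^{2/3}=\mathcal{O}(1)\), which is what produces the Painleve-type behavior established in this section.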
From (2.26), the phase function \(\theta_{12}\) has eight saddle points such that \(\theta_{12}^{\prime}=0\). Introducing \[\tilde{k}=z-\frac{1}{z},\] then one gets \[\theta_{12}^{\prime}(z)=\sqrt{3}\left(\xi-\frac{1-\tilde{k}^{2}}{(\tilde{k}^{ 2}+1)^{2}}\right)\left(1+\frac{1}{z^{2}}\right) \tag{4.1}\] Thus, in the \(\tilde{k}\)-plane the real critical points \(\tilde{k}_{1}\in\mathbb{R}\) are determined by the equation \[\xi=\frac{1-\tilde{k}^{2}}{(\tilde{k}^{2}+1)^{2}}, \tag{4.2}\] or, equivalently, in terms of \(\varpi=\tilde{k}^{2}+1\geq 1\), by \[\xi\varpi^{2}+\varpi-2=0. \tag{4.3}\] In \(0<(\xi+\frac{1}{8})t^{2/3}<C\), this equation has two solutions \(\geq 1\), which gives in the \(\tilde{k}\)-plane four real saddle points \(\pm\kappa_{0},\pm\kappa_{1}\): \[\kappa_{0}(\xi) =\left(\frac{\sqrt{1+8\xi}-1-2\xi}{2\xi}\right)^{\frac{1}{2}}, \tag{4.4}\] \[\kappa_{1}(\xi) =\left(-\frac{\sqrt{1+8\xi}+1+2\xi}{2\xi}\right)^{\frac{1}{2}}. \tag{4.5}\] Applying the inverse map \(\tilde{k}\leadsto z\) to obtain the corresponding saddle points in the \(z\)-plane we get \(\pm p_{0},\pm\frac{1}{p_{0}},\pm p_{1},\pm\frac{1}{p_{1}}\): \[p_{0}(\xi)=\frac{\sqrt{\kappa_{0}^{2}+4}-\kappa_{0}}{2}, \tag{4.6}\] \[p_{1}(\xi)=\frac{\sqrt{\kappa_{1}^{2}+4}-\kappa_{1}}{2}. \tag{4.7}\] As \(\xi\to-\frac{1}{8}^{+}\), \(\kappa_{0},\kappa_{1}\to\sqrt{3}\). Thus, pairs of saddle points collide in the \(z\)-plane \[p_{0},p_{1}\to z_{b}=\frac{\sqrt{7}-\sqrt{3}}{2},\quad\frac{1}{p_{0}},\frac{1}{ p_{1}}\to z_{a}=\frac{\sqrt{7}+\sqrt{3}}{2}.\] By (3.26), the jump matrix for \(M^{(2)}(z)\) on \(\mathbb{R}\) is \[V^{(2)}(z)=\left(\begin{array}{ccc}1&-\overline{\tilde{r}(z)}e^{it\theta_{1 2}(z)}&0\\ 0&1&0\\ 0&0&1\end{array}\right)\left(\begin{array}{ccc}1&0&0\\ \tilde{r}(z)e^{-it\theta_{12}(z)}&1&0\\ 0&0&1\end{array}\right), \tag{4.8}\] where \[\tilde{r}(z):=\frac{\overline{r(z)}T_{12}(z)}{1-|r(z)|^{2}}. \tag{4.9}\] The jump matrix on \(\omega\mathbb{R}\) and \(\omega^{2}\mathbb{R}\) can be obtained by the symmetries (3.25). ### Opening \(\bar{\partial}\)-lenses According to the signature table of \(\operatorname{Im}\theta_{12}\) for \(0<\left(\xi+\frac{1}{8}\right)t^{2/3}<C\) illustrated in Figure 1(c), we now want to remove the jump from the intervals \((-\infty,-\frac{1}{p_{1}})\cup(-\frac{1}{p_{0}},-p_{0})\cup(-p_{1},p_{1}) \cup(p_{0},\frac{1}{p_{0}})\) in such a way that the new problem takes advantage of the decay/growth of \(e^{\pm i\theta_{12}(z)}\). Additionally, we want to "open the lens" in such a way that the lenses are bounded away from the disks introduced previously to remove the poles from the RH problem. For \(l=0,1,2\), define \[\Omega_{1}^{l} :=\{z\in\mathbb{C}:0\leq\arg(z-\omega^{l}\frac{1}{p_{1}})\leq \varphi_{0}\}, \tag{4.10}\] \[\Omega_{2}^{l} :=\{z\in\mathbb{C}:\pi-\varphi_{0}\leq\arg(z-\omega^{l}\frac{1}{p _{0}})\leq\pi,\ |\operatorname{Re}(z-\omega^{l}\frac{1}{p_{0}})|\leq\frac{1-p_{0}^{2}}{2p_{0} }\},\] (4.11) \[\Omega_{3}^{l} :=\{z\in\mathbb{C}:0\leq\arg(z-\omega^{l}p_{0})\leq\varphi_{0},\ | \operatorname{Re}(z-\omega^{l}p_{0})|\leq\frac{1-p_{0}^{2}}{2p_{0}}\},\] (4.12) \[\Omega_{4}^{l} :=\{z\in\mathbb{C}:\pi-\varphi_{0}\leq\arg(z-\omega^{l}p_{1})\leq \pi,\ |\operatorname{Re}(z-\omega^{l}p_{1})|\leq\frac{p_{1}}{2}\}, \tag{4.13}\] where \(0<\varphi_{0}<\frac{\pi}{8}\) is a sufficiently small angle such that each \(\Omega_{j}^{l}\) doesn't intersect the set \(\{z\in\mathbb{C}:\operatorname{Im}\theta_{12}(z)=0\}\) and any small disks \(\mathbb{D}_{n},\ n\in\mathcal{N}\). 
Denote by \(\Omega_{j}^{l}\), \(j=5,6,7,8\), the regions symmetric to \(\Omega_{j}^{l}\), \(j=1,2,3,4\), about the imaginary axis. Moreover, we use \(\Sigma_{j}^{l}\), \(j=1,\cdots,8\), \(l=0,1,2\), to denote the boundaries of \(\Omega_{j}^{l}\), \(j=1,\cdots,8\), \(l=0,1,2\), in the upper half plane, and set

\[\Sigma_{2,3}^{l}=\left\{z\in\mathbb{C}:\frac{z}{\omega^{l}}=\frac{1+p_{0}^{2}}{2p_{0}}+i\rho,\ \rho\in(0,\tfrac{1-p_{0}^{2}}{2p_{0}\tan\varphi_{0}})\right\}, \tag{4.14}\]
\[\Sigma_{6,7}^{l}=\left\{z\in\mathbb{C}:\frac{z}{\omega^{l}}=-\frac{1+p_{0}^{2}}{2p_{0}}+i\rho,\ \rho\in(0,\tfrac{1-p_{0}^{2}}{2p_{0}\tan\varphi_{0}})\right\}, \tag{4.15}\]
\[I_{1}^{l}:=\omega^{l}(\tfrac{1}{p_{0}},\tfrac{1}{p_{1}}),\ I_{2}^{l}:=\omega^{l}(p_{1},p_{0}),\ I_{3}^{l}:=\omega^{l}(-p_{0},-p_{1}),\ I_{4}^{l}:=\omega^{l}(-\tfrac{1}{p_{1}},-\tfrac{1}{p_{0}}). \tag{4.16}\]

For convenience, we use \(f^{*}(z):=\overline{f(\bar{z})}\), \(z\in\mathbb{C}\), to denote the Schwartz conjugate of a complex-valued function \(f(z)\). Using the rays defined above, we define the new contours obtained when opening the jump contours \(\omega^{l}\mathbb{R}\setminus\cup_{j=1}^{4}I_{j}^{l}\), \(l=0,1,2\):

\[\Sigma_{p}^{l}=\Sigma_{2,3}^{l}\cup\Sigma_{6,7}^{l},\quad I^{l}=\cup_{j=1}^{4}I_{j}^{l},\]
\[\tilde{\Sigma}^{l}=\left(\mathop{\cup}\limits_{j=1,\cdots,8}(\Sigma_{j}^{l}\cup(\Sigma_{j}^{l})^{*})\right)\cup\Sigma_{p}^{l}\cup(\Sigma_{p}^{l})^{*},\]
\[\Sigma^{(3)}=\cup_{l=0,1,2}(\tilde{\Sigma}^{l}\cup I^{l}),\]

as shown in Figure 4.1.

Figure 4.1: The contour \(\Sigma^{(3)}\).

Additionally, we define the open domains along the jump contours \(\omega^{l}\mathbb{R}\setminus I^{l}\), \(l=0,1,2\):

\[\Omega=\mathop{\cup}\limits_{\begin{subarray}{c}j=1,\cdots,8\\ l=0,1,2\end{subarray}}\Omega_{j}^{l}\cup(\Omega_{j}^{l})^{*}.\]

From Figure 1(c), we open the contours \(\mathbb{R}\setminus I^{0}\) via continuous extensions of the jump matrix by defining appropriate extension functions; the other contours can be opened by the symmetries. We can construct a matrix function \(\mathcal{R}^{(3)}\) as in [21]; the difference is that here there are extra singularities on the boundary. Hence, to deal with the singularities at \(\varkappa_{k}\), \(k=1,\cdots,6\), we need to introduce a fixed cutoff function \(\mathcal{X}(z)\) in \(C_{0}^{\infty}(\mathbb{R},[0,1])\) with support near \(1\), with

\[\mathcal{X}(z)=\left\{\begin{aligned} &0,\,|z-1|>2\varepsilon,\\ &1,\,|z-1|<\varepsilon,\end{aligned}\right. \tag{4.17}\]

where \(\varepsilon\) is a small enough positive constant such that the support of \(\mathcal{X}(z)\) does not contain any of the phase points, with \(\varepsilon<\frac{1-p_{0}^{2}}{16p_{0}}\).
We now define the continuous extension functions in this case: for \(j=1,\cdots,8\),

\[\mathcal{R}^{(3)}(z)=\left\{\begin{array}{ll}\left(\begin{array}{ccc}1&0&0\\ -R_{j}(z)e^{-it\theta_{12}}&1&0\\ 0&0&1\end{array}\right),&z\in\Omega_{j}^{0},\\ \left(\begin{array}{ccc}1&-R_{j}^{*}(z)e^{it\theta_{12}}&0\\ 0&1&0\\ 0&0&1\end{array}\right),&z\in\Omega_{j}^{0*},\\ \left(\begin{array}{ccc}1&0&0\\ 0&1&0\\ -R_{j}(\omega^{2}z)e^{-it\theta_{13}}&0&1\end{array}\right),&z\in\Omega_{j}^{1},\\ \left(\begin{array}{ccc}1&0&-R_{j}^{*}(\omega^{2}z)e^{it\theta_{13}}\\ 0&1&0\\ 0&0&1\end{array}\right),&z\in\Omega_{j}^{1*},\\ \left(\begin{array}{ccc}1&0&0\\ 0&1&-R_{j}(\omega z)e^{it\theta_{23}}\\ 0&0&1\end{array}\right),&z\in\Omega_{j}^{2},\\ \left(\begin{array}{ccc}1&0&0\\ 0&1&0\\ 0&-R_{j}^{*}(\omega z)e^{-it\theta_{23}}&1\end{array}\right),&z\in\Omega_{j}^{2*},\\ I,&\text{elsewhere},\end{array}\right. \tag{4.18}\]

where the functions \(R_{j}(z)\), \(j=1,\cdots,8\), are given by the following proposition.

**Proposition 4.1**.: _Define functions \(R_{j}\): \(\overline{\Omega}_{j}^{0}\rightarrow\mathbb{C}\), \(j=1,\cdots,8\), continuous on \(\overline{\Omega}_{j}^{0}\), with continuous first partials on \(\Omega_{j}^{0}\), and boundary values_

\[R_{j}(z)=\begin{cases}\tilde{r}^{*}(z),&z\in\mathbb{R},\\ \tilde{r}^{*}(\frac{1}{p_{1}}),&z\in\Sigma_{1}^{0},\\ \tilde{r}^{*}(\frac{1}{p_{0}}),&z\in\Sigma_{2}^{0},\\ \tilde{r}^{*}(p_{0}),&z\in\Sigma_{3}^{0},\\ \tilde{r}^{*}(p_{1}),&z\in\Sigma_{4}^{0},\\ \tilde{r}^{*}(-p_{1}),&z\in\Sigma_{5}^{0},\\ \tilde{r}^{*}(-p_{0}),&z\in\Sigma_{6}^{0},\\ \tilde{r}^{*}(-\frac{1}{p_{0}}),&z\in\Sigma_{7}^{0},\\ \tilde{r}^{*}(-\frac{1}{p_{1}}),&z\in\Sigma_{8}^{0},\end{cases} \tag{4.19}\]

_such that_

\[|\bar{\partial}R_{j}(z)|\lesssim|\tilde{r}^{*\prime}(\operatorname{Re}z)|+|\mathcal{X}^{\prime}(\operatorname{Re}z)|+|\operatorname{Re}z-\xi_{j}|^{-1/2},\ \text{for all }z\in\Omega_{j}^{0}, \tag{4.20}\]
\[|\bar{\partial}R_{j}(z)|\lesssim|\tilde{r}^{*\prime}(\operatorname{Re}z)|+|\mathcal{X}^{\prime}(\operatorname{Re}z)|,\ \text{for all }z\in\Omega_{j}^{0}, \tag{4.21}\]
\[R_{j}(z)=\bar{\partial}R_{j}(z)=0,\ \text{for all }z\in\Omega_{j}^{0}\text{ with }|\operatorname{Re}z\pm 1|<\varepsilon, \tag{4.22}\]
\[\bar{\partial}R_{j}(z)=0,\qquad\text{elsewhere}, \tag{4.23}\]
Applying \(\bar{\partial}\) operator to (4.24), it is readily seen that \[\bar{\partial}R_{1}(z) =\frac{1}{2}\tilde{r}^{*^{\prime}}(u)\cos\left(\frac{\pi\varphi} {2\varphi_{0}}\right)-\frac{1}{2}\mathcal{X}^{\prime}(u)\left(\tilde{r}^{*}(u )-\tilde{r}^{*}(\frac{1}{p_{1}})\right)\cos\left(\frac{\pi\varphi}{2\varphi_{0 }}\right)\] \[+\frac{1}{2}(1-\mathcal{X}(u))\left(\tilde{r}^{*}(u)-\tilde{r}^{* }(\frac{1}{p_{1}})\right)\bar{\partial}\cos\left(\frac{\pi\varphi}{2\varphi_{ 0}}\right). \tag{4.25}\] From the definition (4.9) and Holder's inequality, we obtain \[|\tilde{r}^{*}(u)-\tilde{r}^{*}(\frac{1}{p_{1}})|=|\int_{\frac{1}{p_{1}}}^{u} \tilde{r}^{*^{\prime}}(s)ds|\leq\parallel\tilde{r}^{*^{\prime}}\parallel_{L^{ 2}(\mathbb{R})}|u-\frac{1}{p_{1}}|^{1/2}, \tag{4.26}\] which yields (4.20). Moreover, (4.21) is obtained from the boundedness of \(\tilde{r}^{\prime}(z)\). In addition, \(\mathcal{R}^{(3)}\) achieves the symmetry: \[\mathcal{R}^{(3)}(z)=\Gamma_{1}\overline{\mathcal{R}^{(3)}(\bar{z})}\Gamma_{ 1}=\Gamma_{2}\overline{\mathcal{R}^{(3)}(\omega^{2}\bar{z})}\Gamma_{2}=\Gamma _{3}\overline{\mathcal{R}^{(3)}(\omega\bar{z})}\Gamma_{3}=\overline{\mathcal{ R}^{(3)}(\bar{z}^{-1})}. \tag{4.27}\] ### A hybrid \(\bar{\partial}\)-RH problem and its decomposition Now we use \(\mathcal{R}^{(3)}\) to define a new transformation \[M^{(3)}(z):=M^{(3)}(z;y,t)=M^{(2)}(z)\mathcal{R}^{(3)}(z), \tag{4.28}\] which satisfies the following hybrid \(\bar{\partial}\)-RH problem. **RH problem 4.1**.: _Find a matrix valued function \(M^{(3)}(z)\) with following properties:_ * \(M^{(3)}(z)\) _has sectionally continuous first partial derivatives in_ \(\mathbb{C}\backslash\left(\Sigma^{(3)}\cup\{\zeta_{n}\}_{n-kN_{0}\in\Diamond}\right)\)_, and is meromorphic outside_ \(\bar{\Omega}\)_._ * \(M^{(3)}(z)=\Gamma_{1}\overline{M^{(3)}(\bar{z})}\Gamma_{1}=\Gamma_{2}\overline{ M^{(3)}(\omega^{2}\bar{z})}\Gamma_{2}=\Gamma_{3}\overline{M^{(3)}(\omega\bar{z})} \Gamma_{3}=\overline{M^{(3)}(\bar{z}^{-1})}\)_._ * \(M^{(3)}(z)\) _has continuous boundary values_ \(M^{(3)}_{\pm}(z)\) _on_ \(\Sigma^{(3)}\) _with_ \[M^{(3)}_{+}(z)=M^{(3)}_{-}(z)V^{(3)}(z),\quad\ z\in\Sigma^{(3)},\] _where_ \[V^{(3)}(z)=\left\{\begin{array}{ll}\left(\begin{array}{ccc}1&-\overline{ \tilde{r}(z)}e^{it\theta_{12}(z)}&0\\ 0&1&0\\ 0&0&1\end{array}\right)\left(\begin{array}{ccc}1&0&0\\ \tilde{r}(z)e^{-it\theta_{12}(z)}&1&0\\ 0&0&1\end{array}\right),&z\in I^{0},\\ \left(\begin{array}{ccc}1&0&0\\ 0&1&-\overline{\tilde{r}(\omega z)}e^{it\theta_{23}(z)}\\ 0&0&1\end{array}\right)\left(\begin{array}{ccc}1&0&0\\ 0&1&0\\ 0&\tilde{r}(\omega z)e^{-it\theta_{23}(z)}&1\end{array}\right),&z\in I^{2},\\ \left(\begin{array}{ccc}1&0&0\\ 0&1&0\\ -\overline{\tilde{r}(\omega^{2}z)}e^{-it\theta_{13}(z)}&0&1\end{array}\right) \left(\begin{array}{ccc}1&0&\tilde{r}(\omega^{2}z)we^{it\theta_{13}(z)}\\ 0&1&0\\ 0&0&1\end{array}\right),&z\in I^{1},\\ \mathcal{R}^{(3)}(z)|_{z\in\Omega_{j+1}}-\mathcal{R}^{(2)}(z)|_{z\in\Omega_{j}}, &z\in\Sigma^{l}_{p},l=0,1,2,\\ \mathcal{R}^{(3)}(z)|_{z\in\Omega^{*}_{j+1}}-\mathcal{R}^{(2)}(z)|_{z\in \Omega^{*}_{j}},&z\in\Sigma^{l*}_{p},l=0,1,2\\ \mathcal{R}^{(3)}(z),&z\in\tilde{\Sigma}^{l},l=0,1,2\\ \mathcal{R}^{(3)}(z)^{-1},&z\in\tilde{\Sigma}^{l*},l=0,1,2.\end{array}\right.\] (4.29) * \(M^{(3)}(z)=I+\mathcal{O}(z^{-1}),\quad\ z\to\infty\). 
* _For_ \(z\in\mathbb{C}\)_,_ \[\bar{\partial}M^{(3)}(z)=M^{(3)}(z)\bar{\partial}\mathcal{R}^{(3)}(z), \tag{4.30}\] _where_ \[\bar{\partial}\mathcal{R}^{(3)}(z)=\left\{\begin{array}{ll}\left(\begin{array}{ccc}0&0&0\\ -\bar{\partial}R_{j}(z)e^{-it\theta_{12}}&0&0\\ 0&0&0\end{array}\right),&z\in\Omega^{0}_{j},\\ \left(\begin{array}{ccc}0&-\bar{\partial}R_{j}^{*}(z)e^{it\theta_{12}}&0\\ 0&0&0\\ 0&0&0\end{array}\right),&z\in\Omega^{0*}_{j},\\ \left(\begin{array}{ccc}0&0&0\\ 0&0&0\\ -\bar{\partial}R_{j}(\omega^{2}z)e^{-it\theta_{13}}&0&0\end{array}\right),&z\in\Omega^{1}_{j},\\ \left(\begin{array}{ccc}0&0&-\bar{\partial}R_{j}^{*}(\omega^{2}z)e^{it\theta_{13}}\\ 0&0&0\\ 0&0&0\end{array}\right),&z\in\Omega^{1*}_{j},\\ \left(\begin{array}{ccc}0&0&0\\ 0&0&-\bar{\partial}R_{j}(\omega z)e^{it\theta_{23}}\\ 0&0&0\end{array}\right),&z\in\Omega^{2}_{j},\\ \left(\begin{array}{ccc}0&0&0\\ 0&0&0\\ 0&-\bar{\partial}R_{j}^{*}(\omega z)e^{-it\theta_{23}}&0\end{array}\right),&z\in\Omega^{2*}_{j},\\ \mathbf{0},&\text{elsewhere},\end{array}\right.\] (4.31)
* \(M^{(3)}(z)\) _satisfies the singularity conditions in (3.27)-(3.29) with_ \(M^{(3)}(z)\) _replacing_ \(M^{(2)}(z)\)_._
* \(M^{(3)}(z)\) _has simple poles at each point_ \(\zeta_{n}\) _for_ \(n-kN_{0}\in\Diamond\) _with_ \[\operatorname*{Res}_{z=\zeta_{n}}M^{(3)}(z)=\lim_{z\to\zeta_{n}}M^{(3)}(z)\left[T^{-1}(z)B_{n}T(z)\right].\] (4.32)

To solve RH problem 4.1, we decompose it into a pure RH problem for \(M^{R}(z):=M^{R}(z;y,t)\) with \(\bar{\partial}\mathcal{R}^{(3)}\equiv 0\) and a pure \(\bar{\partial}\)-problem with nonzero \(\bar{\partial}\)-derivatives. By omitting the \(\bar{\partial}\)-derivative part of RH problem 4.1, we obtain the pure RH problem for \(M^{R}(z)\) as follows.
**RH problem 4.2**.: _Find a matrix-valued function \(M^{R}(z)\) with the following properties:_

* \(M^{R}(z)\) _is meromorphic in_ \(\mathbb{C}\setminus\Sigma^{(3)}\)_._
* \(M^{R}(z)\) _has continuous boundary values_ \(M^{R}_{\pm}(z)\) _on_ \(\Sigma^{(3)}\) _and_ \[M^{R}_{+}(z)=M^{R}_{-}(z)V^{(3)}(z),\ \ \ z\in\Sigma^{(3)},\] _where_ \(V^{(3)}(z)\) _is defined by (4.29)._
* \(M^{R}(z)=\Gamma_{1}\overline{M^{R}(\bar{z})}\Gamma_{1}=\Gamma_{2}\overline{M^{R}(\omega^{2}\bar{z})}\Gamma_{2}=\Gamma_{3}\overline{M^{R}(\omega\bar{z})}\Gamma_{3}=\overline{M^{R}(\bar{z}^{-1})}\)_._
* \(M^{R}(z)=I+\mathcal{O}(z^{-1}),\ \ \ z\to\infty\)_._
* _As_ \(z\to\varkappa_{l}=e^{\frac{i\pi(l-1)}{3}}\)_,_ \(l=1,\cdots,6\)_, the limit of_ \(M^{R}(z)\) _has the pole singularities_ \[M^{R}(z)=\frac{1}{z\mp 1}\left(\begin{array}{ccc}\alpha^{R}_{\pm}&\alpha^{R}_{\pm}&\beta^{R}_{\pm}\\ -\alpha^{R}_{\pm}&-\alpha^{R}_{\pm}&-\beta^{R}_{\pm}\\ 0&0&0\end{array}\right)+\mathcal{O}(1),\ z\to\pm 1,\] (4.33) \[M^{R}(z)=\frac{1}{z\mp\omega^{2}}\left(\begin{array}{ccc}0&0&0\\ \beta^{R}_{\pm}&\alpha^{R}_{\pm}&\alpha^{R}_{\pm}\\ -\beta^{R}_{\pm}&-\alpha^{R}_{\pm}&-\alpha^{R}_{\pm}\end{array}\right)+\mathcal{O}(1),\ z\to\pm\omega^{2},\] (4.34) \[M^{R}(z)=\frac{1}{z\mp\omega}\left(\begin{array}{ccc}-\alpha^{R}_{\pm}&-\beta^{R}_{\pm}&-\alpha^{R}_{\pm}\\ 0&0&0\\ \alpha^{R}_{\pm}&\beta^{R}_{\pm}&\alpha^{R}_{\pm}\end{array}\right)+\mathcal{O}(1),\ z\to\pm\omega,\] (4.35) _with_ \(\alpha^{R}_{\pm}=\alpha^{R}_{\pm}(y,t)=-\bar{\alpha}^{R}_{\pm}\)_,_ \(\beta^{R}_{\pm}=\beta^{R}_{\pm}(y,t)=-\bar{\beta}^{R}_{\pm}\)_, and_ \(M^{R}(z)^{-1}\) _has the same matrix structure with_ \(\alpha^{R}_{\pm}\)_,_ \(\beta^{R}_{\pm}\) _replaced by_ \(\tilde{\alpha}^{R}_{\pm}\)_,_ \(\tilde{\beta}^{R}_{\pm}\)_._
* \(M^{R}(z)\) _has simple poles at each point_ \(\zeta_{n}\) _for_ \(n-kN_{0}\in\Diamond\) _with_ \[\operatorname*{Res}_{z=\zeta_{n}}M^{R}(z)=\lim_{z\to\zeta_{n}}M^{R}(z)\left[T^{-1}(z)B_{n}T(z)\right].\]

To proceed, define \(\mathbb{B}_{j}\) as the neighborhood of \(\varkappa_{j}\), \(j=1,\cdots,6\), with

\[\mathbb{B}_{j}=\{z\in\mathbb{C}\setminus\{\varkappa_{j}\}_{j=1}^{6}:|\operatorname{Re}(z/\varkappa_{j})-1|<2\varepsilon,\ |\operatorname{Im}(z/\varkappa_{j})|<2\varepsilon\}. \tag{4.36}\]

For convenience, let \(z_{c}=-z_{b}\) and \(z_{d}=-z_{a}\). Then denote

\[U_{a}^{l}=\left\{z\in\mathbb{C}:\left|z-\omega^{l}z_{a}\right|\leq\varrho^{0}\right\},\ U_{b}^{l}=\left\{z\in\mathbb{C}:\left|z-\omega^{l}z_{b}\right|\leq\varrho^{0}\right\},\]
\[U_{c}^{l}=\left\{z\in\mathbb{C}:\left|z-\omega^{l}z_{c}\right|\leq\varrho^{0}\right\},\ U_{d}^{l}=\left\{z\in\mathbb{C}:\left|z-\omega^{l}z_{d}\right|\leq\varrho^{0}\right\},\]

where

\[\varrho^{0}:=\min\left\{\frac{p_{1}}{2},\ \frac{1}{8}|p_{i}\pm 1|,\ 2\left|p_{i}-\frac{\sqrt{7}-\sqrt{3}}{2}\right|t^{\delta_{1}},\ 2\left|\frac{1}{p_{i}}-\frac{\sqrt{7}+\sqrt{3}}{2}\right|t^{\delta_{1}},\ i=0,1\right\}\]

with \(\frac{1}{9}<\delta_{1}<\frac{1}{6}\), which makes \(U_{j}^{l}\) and \(\mathbb{B}_{j}\) disjoint. Denote \(U\) as the union of the sets \(U_{j}^{l}\) with \(j=a,b,c,d\) and \(l=0,1,2\). For \(t\) large enough, we have \(\omega^{l}\frac{1}{p_{0}},\omega^{l}\frac{1}{p_{1}}\in U_{a}^{l}\), \(\omega^{l}p_{1},\omega^{l}p_{0}\in U_{b}^{l}\), \(-\omega^{l}p_{0},-\omega^{l}p_{1}\in U_{c}^{l}\) and \(-\omega^{l}\frac{1}{p_{1}},-\omega^{l}\frac{1}{p_{0}}\in U_{d}^{l}\).
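For orientation (our reading of the exponents, not stated explicitly at this point in the text): in the present zone \(0<(\xi+\frac{1}{8})t^{2/3}<C\) the merging saddle points satisfy \(|p_{i}-z_{b}|=\mathcal{O}(t^{-1/3})\) and \(|1/p_{i}-z_{a}|=\mathcal{O}(t^{-1/3})\), so \(\varrho^{0}=\mathcal{O}(t^{-1/3+\delta_{1}})\). The disks \(U_{j}^{l}\) therefore shrink as \(t\to\infty\), but since \(\delta_{1}>0\) they remain large compared with the distance between the two saddle points they enclose; the upper restriction \(\delta_{1}<\frac{1}{6}\) is what keeps error terms of size \(t^{-1/3+2\delta_{1}}\) (see Proposition 4.3 below) decaying.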
From (4.29), we know that the jump matrix \(V^{(3)}(z)\) tends uniformly to \(I\) on \(\Sigma^{(3)}\) except in \(U\), which suggests constructing the solution \(M^{R}(z)\) as follows:

\[M^{R}(z)=\left\{\begin{array}{ll}E(z)M^{O}(z),&z\notin U\cup\mathbb{B}_{j},\\ E(z)M^{O}(z)M^{L}(z),&z\in U,\\ E(z)M^{O}(z)M_{j}^{B}(z),&z\in\mathbb{B}_{j},\end{array}\right. \tag{4.37}\]

where \(M^{O}(z)\) is an outer model including the influence of the solitons, \(M^{L}(z)\) is a local model which can be well approximated by the Painleve II RH model, \(M_{j}^{B}(z)\) is the solution of a RH problem which only has jumps near \(\varkappa_{j}\), and \(E(z)\) is the error function, whose existence we will prove and which we will bound asymptotically. Then we use \(M^{R}(z)\) to construct a new matrix function

\[M^{(4)}(z):=M^{(4)}(z;y,t)=M^{(3)}(z)M^{R}(z)^{-1}, \tag{4.38}\]

which removes the analytic component \(M^{R}(z)\) and leaves a pure \(\bar{\partial}\)-problem.

\(\bar{\partial}\)**-problem**. Find a matrix-valued function \(M^{(4)}(z):=M^{(4)}(z;y,t)\) such that

* \(M^{(4)}(z)\) has sectionally continuous first partial derivatives in \(\mathbb{C}\).
* \(M^{(4)}(z)=I+\mathcal{O}(z^{-1}),\ \ \ \ z\to\infty\).
* \(M^{(4)}(z)\) satisfies the \(\bar{\partial}\)-equation \[\bar{\partial}M^{(4)}(z)=M^{(4)}(z)W^{(4)}(z),\ \ z\in\mathbb{C},\] where \[W^{(4)}(z)=M^{R}(z)\bar{\partial}\mathcal{R}^{(3)}(z)M^{R}(z)^{-1}.\] (4.39)

Proof.: The derivation of this pure \(\bar{\partial}\)-problem can be given in the same way as in Section 4.3 of [17]. The existence and asymptotics of the above pure \(\bar{\partial}\)-problem for \(M^{(4)}(z)\) will be shown in Section 4.4.3.

### Contribution from discrete spectrum

Now we construct a model solution outside \(U\) which ignores the jumps completely. The outer model \(M^{O}(z)\) satisfies the following RH problem.
**RH problem 4.3**.: _Find a matrix-valued function \(M^{O}(z)\) with the following properties:_

* \(M^{O}(z)\) _is analytical in_ \(\mathbb{C}\setminus\{\zeta_{n}\}_{n-kN_{0}\in\Diamond}\)_._
* \(M^{O}(z)=\Gamma_{1}\overline{M^{O}(\bar{z})}\Gamma_{1}=\Gamma_{2}\overline{M^{O}(\omega^{2}\bar{z})}\Gamma_{2}=\Gamma_{3}\overline{M^{O}(\omega\bar{z})}\Gamma_{3}=\overline{M^{O}(\bar{z}^{-1})}\)_._
* \(M^{O}(z)=I+\mathcal{O}(z^{-1}),\quad\ z\to\infty\)_._
* _As_ \(z\to\varkappa_{l}=e^{\frac{i\pi(l-1)}{3}}\)_,_ \(l=1,\cdots,6\)_, the limit of_ \(M^{O}(z)\) _has the pole singularities_ \[M^{O}(z)=\frac{1}{z\mp 1}\left(\begin{array}{ccc}\alpha_{\pm}^{O}&\alpha_{\pm}^{O}&\beta_{\pm}^{O}\\ -\alpha_{\pm}^{O}&-\alpha_{\pm}^{O}&-\beta_{\pm}^{O}\\ 0&0&0\end{array}\right)+\mathcal{O}(1),\ z\to\pm 1,\] (4.40) \[M^{O}(z)=\frac{1}{z\mp\omega^{2}}\left(\begin{array}{ccc}0&0&0\\ \beta_{\pm}^{O}&\alpha_{\pm}^{O}&\alpha_{\pm}^{O}\\ -\beta_{\pm}^{O}&-\alpha_{\pm}^{O}&-\alpha_{\pm}^{O}\end{array}\right)+\mathcal{O}(1),\ z\to\pm\omega^{2},\] (4.41) \[M^{O}(z)=\frac{1}{z\mp\omega}\left(\begin{array}{ccc}-\alpha_{\pm}^{O}&-\beta_{\pm}^{O}&-\alpha_{\pm}^{O}\\ 0&0&0\\ \alpha_{\pm}^{O}&\beta_{\pm}^{O}&\alpha_{\pm}^{O}\end{array}\right)+\mathcal{O}(1),\ z\to\pm\omega,\] (4.42) _with_ \(\alpha_{\pm}^{O}=\alpha_{\pm}^{O}(y,t)=-\bar{\alpha}_{\pm}^{O}\)_,_ \(\beta_{\pm}^{O}=\beta_{\pm}^{O}(y,t)=-\bar{\beta}_{\pm}^{O}\)_, and_ \(M^{O}(z)^{-1}\) _has the same matrix structure with_ \(\alpha_{\pm}^{O}\)_,_ \(\beta_{\pm}^{O}\) _replaced by_ \(\tilde{\alpha}_{\pm}^{O}\)_,_ \(\tilde{\beta}_{\pm}^{O}\)_._
* \(M^{O}(z)\) _has simple poles at each point_ \(\zeta_{n}\) _for_ \(n-kN_{0}\in\Diamond\) _with_ \[\underset{z=\zeta_{n}}{\mathrm{Res}}M^{O}(z)=\lim_{z\to\zeta_{n}}M^{O}(z)\left[T^{-1}(z)B_{n}T(z)\right].\] (4.43)

The essential fact we need concerning \(M^{O}\) is as follows.

**Proposition 4.2**.: _The unique solution \(M^{O}\) of RH problem 4.3 is given by_

\[M^{O}(z)=M^{\Diamond}(z;\tilde{\mathcal{D}}), \tag{4.44}\]

_where \(M^{\Diamond}\) is the solution of RH problem 3.3 corresponding to the reflectionless scattering data \(\tilde{\mathcal{D}}=\{(\zeta_{n},\tilde{C}_{n})_{n-kN_{0}\in\Diamond}\}\). Here the modified norming constants are given by_

\[\tilde{C}_{n}=c_{n}T_{12}(\zeta_{n}),\]

_with \(T_{12}\) defined by (3.14)._

Proof.: The detailed proof of the existence and uniqueness of the solution of RH problem 4.3 can be found in Proposition 4.3 of [17]. From this, we know that RH problem 4.3 has a unique solution \(M^{O}(z)\) with modified scattering data \(\tilde{\mathcal{D}}=\left\{\{\zeta_{n},\tilde{C}_{n}\}_{n-kN_{0}\in\Diamond}\right\}\), where \(\tilde{C}_{n}=c_{n}T_{12}(\zeta_{n})\).

Denote \(u^{\diamond}(y,t;\tilde{\mathcal{D}})\) as the \(\mathcal{N}(\Diamond)\)-soliton solution with scattering data \(\tilde{\mathcal{D}}\).
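In other words, within this zone the discrete spectrum enters the asymptotics only through the \(\mathcal{N}(\Diamond)\)-soliton ensemble, with each norming constant renormalized multiplicatively,

\[\frac{\tilde{C}_{n}}{c_{n}}=T_{12}(\zeta_{n}),\]

which one interprets as a constant shift of soliton position and phase coming from the interaction with the removed radiation.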
By the reconstruction formulae (2.34) and (2.35), we then have

**Corollary 4.1**.: _The soliton solution of the Novikov equation (1.7) is given by_

\[u^{\diamond}(y,t;\tilde{\mathcal{D}})=\frac{1}{2}\tilde{m}_{1}^{\diamond}(y,t)\left(\frac{M_{33}^{O}(e^{\frac{i\pi}{6}};y,t)}{M_{11}^{O}(e^{\frac{i\pi}{6}};y,t)}\right)^{1/2}+\frac{1}{2}\tilde{m}_{3}^{\diamond}(y,t)\left(\frac{M_{33}^{O}(e^{\frac{i\pi}{6}};y,t)}{M_{11}^{O}(e^{\frac{i\pi}{6}};y,t)}\right)^{-1/2}-1, \tag{4.45}\]

_in which_

\[x^{\diamond}(y,t;\tilde{\mathcal{D}})=y+\frac{1}{2}\ln\frac{M_{33}^{O}(e^{\frac{i\pi}{6}};y,t)}{M_{11}^{O}(e^{\frac{i\pi}{6}};y,t)},\ \tilde{m}_{l}^{\diamond}:=\sum_{j=1}^{3}M_{jl}^{O}(e^{\frac{i\pi}{6}};y,t),\ l=1,2,3. \tag{4.46}\]

### Contribution from jump contours

Since the jump conditions were ignored completely in the previous subsection, we now construct the solutions which match the jump conditions. Again, by the property

\[\|V^{(3)}(z)-I\|_{L^{q}(\Sigma^{(3)}\setminus U)}=\mathcal{O}(e^{-K_{q}t}),\quad t\to\infty, \tag{4.47}\]

where \(K_{q}\) is a positive constant and \(1\leq q\leq+\infty\), the jump matrix is exponentially close to the identity outside of \(U\), and hence we need to investigate the local properties near the saddle points.

#### 4.4.1 Local model near critical points

We define a new local contour

\[\Sigma^{L}=\Sigma^{L,0}\cup\Sigma^{L,1}\cup\Sigma^{L,2},\]

where \(\Sigma^{L,l}\), \(l=0,1,2\), are the local contours on the jump contours \(\omega^{l}\mathbb{R}\), respectively:

\[\Sigma^{L,l}=\left(\mathop{\cup}\limits_{j=a,b,c,d}(\Sigma^{l}_{j}\cup\Sigma^{l*}_{j})\cup I^{l}\right)\cap U,\ l=0,1,2.\]

Since there are 12 colliding points \(\omega^{l}\xi_{j}\), \(l=0,1,2\), \(j=a,b,c,d\), the entire local jump contour \(\Sigma^{L}\) consists of 12 separate local pieces in this case; see Figure 4.2. Further denote the local jump contour for each colliding point \(\omega^{l}\xi_{j}\):

\[\Sigma^{L,l}_{j}=(\Sigma^{l}_{j}\cup\Sigma^{l*}_{j}\cup I^{l}_{j})\cap U^{l}_{j},\ l=0,1,2,\ j=a,b,c,d.\]

We consider the following local RH problem.

**RH problem 4.4**.: _Find a matrix-valued function \(M^{L}(z)\) with the following properties:_

* \(M^{L}(z)\) _is analytical in_ \(\mathbb{C}\setminus\Sigma^{L}\)_._
* \(M^{L}(z)=\Gamma_{1}\overline{M^{L}(\bar{z})}\Gamma_{1}=\Gamma_{2}\overline{M^{L}(\omega^{2}\bar{z})}\Gamma_{2}=\Gamma_{3}\overline{M^{L}(\omega\bar{z})}\Gamma_{3}=\overline{M^{L}(\bar{z}^{-1})}.\)
* \(M^{L}(z)\) _has continuous boundary values_ \(M^{L}_{\pm}\) _on_ \(\Sigma^{L}\) _and_ \[M^{L}_{+}(z)=M^{L}_{-}(z)V^{L}(z),\ \ \ \ z\in\Sigma^{L},\] _where_ \(V^{L}(z)=V^{(3)}(z)\big{|}_{z\in\Sigma^{L}}\)_._
* \(M^{L}(z)=I+\mathcal{O}(z^{-1}),\ \ \ \ z\to\infty.\)

This local RH problem, which consists of 12 local models on \(\Sigma^{L,l}_{j}\) about the phase points \(\omega^{l}\xi_{j}\), \(j=a,b,c,d\), \(l=0,1,2\), has only jump conditions and no poles.
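Concretely (our paraphrase of how the Beals-Coifman theory enters in the next step): the twelve disks \(U_{j}^{l}\) are pairwise disjoint and separated by distances of order one, while each local jump is supported in a disk of shrinking radius \(\varrho^{0}\), so the local models interact only weakly and

\[M^{L}(z)=I+\sum_{l=0,1,2}\sum_{j=a,b,c,d}\left(M_{j}^{L,l}(z)-I\right)+\text{lower-order terms},\]

which is the additive approximation used below.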
**RH problem 4.5**.: _Find a matrix-valued function \(M^{L,l}_{j}(z)\) with the following properties:_

* \(M^{L,l}_{j}(z)\) _is analytical in_ \(\mathbb{C}\setminus\Sigma^{L,l}_{j}\)_._
* \(M^{L,l}_{j}(z)\) _has continuous boundary values_ \(M^{L,l}_{j,\pm}\) _and_ \[M^{L,l}_{j,+}(z)=M^{L,l}_{j,-}(z)V^{L,l}_{j}(z),\ \ \ \ z\in\Sigma^{L,l}_{j},\] _where_ \(V^{L,l}_{j}(z)=V^{L}(z)\big{|}_{z\in\Sigma^{L,l}_{j}}\)_._
* \(M^{L,l}_{j}(z)=I+\mathcal{O}(z^{-1}),\ \ \ \ z\to\infty.\)

According to the theorem of Beals-Coifman, we know that as \(t\to\infty\), the solution \(M^{L}(z)\) of RH problem 4.4 is approximated by the sum of the separate local models \(M^{L,l}_{j}(z)\) of RH problem 4.5 in the neighborhoods of \(\omega^{l}\xi_{j}\), \(j=a,b,c,d\), \(l=0,1,2\), respectively. As an illustrative example, we only consider the local model at the point \(z_{a}\) on \(\mathbb{R}\), whose jump contour (red lines in Figure 4.2) is

\[\Sigma_{a}^{L,0}=(\Sigma_{a}^{0}\cup\Sigma_{a}^{0*}\cup I_{1}^{0})\cap U_{a}^{0},\]

and which corresponds to the following local RH problem.

**RH problem 4.6**.: _Find a matrix-valued function \(M_{a}^{L,0}(z)\) with the following properties:_

* \(M_{a}^{L,0}(z)\) _is analytical in_ \(\mathbb{C}\setminus\Sigma_{a}^{L,0}\)_._
* \(M_{a}^{L,0}(z)\) _has continuous boundary values_ \(M_{a\pm}^{L,0}(z)\) _with_ \[M_{a+}^{L,0}(z)=M_{a-}^{L,0}(z)V_{a}^{L,0}(z),\ z\in\Sigma_{a}^{L,0},\] _where_ \[V_{a}^{L,0}(z)=\left\{\begin{array}{ll}\left(\begin{array}{ccc}1&0&0\\ \tilde{r}(1/p_{1})e^{-it\theta_{12}(z)}&1&0\\ 0&0&1\end{array}\right),&z\in\Sigma_{1}^{0}\cap U_{a}^{0},\\ \left(\begin{array}{ccc}1&-\overline{\tilde{r}(1/p_{1})}e^{it\theta_{12}(z)}&0\\ 0&1&0\\ 0&0&1\end{array}\right),&z\in\Sigma_{1}^{0*}\cap U_{a}^{0},\\ \left(\begin{array}{ccc}1&0&0\\ \tilde{r}(1/p_{0})e^{-it\theta_{12}(z)}&1&0\\ 0&0&1\end{array}\right),&z\in\Sigma_{2}^{0}\cap U_{a}^{0},\\ \left(\begin{array}{ccc}1&-\overline{\tilde{r}(1/p_{0})}e^{it\theta_{12}(z)}&0\\ 0&1&0\\ 0&0&1\end{array}\right),&z\in\Sigma_{2}^{0*}\cap U_{a}^{0},\\ \left(\begin{array}{ccc}1&-\overline{\tilde{r}(z)}e^{it\theta_{12}(z)}&0\\ 0&1&0\\ 0&0&1\end{array}\right)\left(\begin{array}{ccc}1&0&0\\ \tilde{r}(z)e^{-it\theta_{12}(z)}&1&0\\ 0&0&1\end{array}\right),&z\in(1/p_{0},1/p_{1}).\end{array}\right.\] (4.48)
* \(M_{a}^{L,0}(z)=I+\mathcal{O}(z^{-1}),\quad z\to\infty\).

To solve the RH problem for \(M_{a}^{L,0}(z)\), we observe that for \(z\in U_{a}^{0}\) and \(t\) large enough,

\[-t\theta_{12}(z)=-t\theta_{12}(z_{a})+\frac{8}{3}\hat{z}^{3}+2\tilde{s}\hat{z}+\mathcal{O}(t^{-1/3}\hat{z}^{2}). \tag{4.49}\]

Here, we define the scaled spectral parameter \(\hat{z}\) as

\[\hat{z}=\left(c_{a}t\right)^{1/3}(z-z_{a}), \tag{4.50}\]

where the constant \(c_{a}\) is given by

\[c_{a}=\frac{21}{256}(14\sqrt{3}-9\sqrt{7}). \tag{4.51}\]

Additionally, we introduce the parameter \(\tilde{s}\) as

\[\tilde{s}=s_{1}(1+8\xi)t^{2/3} \tag{4.52}\]

with

\[s_{1}=\frac{3^{1/6}(-7+\sqrt{21})}{2^{7/3}7^{1/3}(14\sqrt{3}-9\sqrt{7})}<0, \tag{4.53}\]

which parametrizes the space-time region. Next we show that after scaling, RH problem 4.6 can be well approximated by the model RH problem B.1 in Appendix B, which is associated with the Painleve II equation.
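The following short script is an illustrative numerical check added by us (it uses only `numpy`; the relation \(c_{a}=-\theta_{12}^{\prime\prime\prime}(z_{a})/16\) is our reading of the cubic rescaling (4.49)-(4.51), since at \(\xi=-\frac{1}{8}\) the first two derivatives of \(\theta_{12}\) vanish at \(z_{a}\)). It confirms that the saddle points (4.6)-(4.7) collide at \(z_{b}\) as \(\xi\to-\frac{1}{8}^{+}\) and that the constant \(c_{a}\) in (4.51) matches the third derivative of the phase at \(z_{a}\):

```python
import numpy as np

def theta12(z, xi):
    # phase function (2.26)
    return np.sqrt(3.0) * (z - 1.0 / z) * (xi - 1.0 / (z**2 - 1.0 + z**-2))

# saddle points (4.4)-(4.7) just inside the zone xi > -1/8
xi = -1.0 / 8.0 + 1e-6
kappa0 = np.sqrt((np.sqrt(1 + 8 * xi) - 1 - 2 * xi) / (2 * xi))
kappa1 = np.sqrt(-(np.sqrt(1 + 8 * xi) + 1 + 2 * xi) / (2 * xi))
p0 = (np.sqrt(kappa0**2 + 4) - kappa0) / 2
p1 = (np.sqrt(kappa1**2 + 4) - kappa1) / 2
z_b = (np.sqrt(7) - np.sqrt(3)) / 2
print(abs(p0 - z_b), abs(p1 - z_b))   # both O(1e-3): the pair merges at z_b

# third derivative of theta12 at z_a for xi = -1/8, by central differences
z_a, h = (np.sqrt(7) + np.sqrt(3)) / 2, 1e-3
d3 = (theta12(z_a + 2 * h, -0.125) - 2 * theta12(z_a + h, -0.125)
      + 2 * theta12(z_a - h, -0.125) - theta12(z_a - 2 * h, -0.125)) / (2 * h**3)
c_a = 21.0 / 256.0 * (14 * np.sqrt(3) - 9 * np.sqrt(7))
print(-d3 / 16, c_a)                  # both approximately 0.03584
```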
Through this change of variable (4.50), it is directly inferred that \(M_{a}^{L,0}(\hat{z})=M_{a}^{L,0}(z(\hat{z}))\) is holomorphic for \(\hat{z}\in\mathbb{C}\setminus\hat{\Sigma}_{a}^{L,0}\), where \[\hat{\Sigma}_{a}^{L,0}=\cup_{j=1,2}\left(\hat{\Sigma}_{j}^{0}\cup\hat{\Sigma}_{j}^{0*}\right)\cup(\hat{z}_{1},\hat{z}_{2}), \tag{4.54}\] with \(\hat{\Sigma}_{j}^{0},\ j=1,2\), being the contours corresponding to \(\Sigma_{j}^{0},\ j=1,2\), after scaling, and \[\hat{z}_{1}=(c_{a}t)^{1/3}(1/p_{1}-z_{a}),\ \hat{z}_{2}=(c_{a}t)^{1/3}(1/p_{0}-z_{a}). \tag{4.55}\]

In addition, the jump matrix \(V_{a}^{L,0}(\hat{z})\) satisfies \[V_{a}^{L,0}(\hat{z})=\left\{\begin{array}{ll}\left(\begin{array}{ccc}1&0&0\\ \tilde{r}(1/p_{1})e^{-it\theta_{12}((c_{a}t)^{-1/3}\hat{z}+z_{a})}&1&0\\ 0&0&1\end{array}\right),&\hat{z}\in\hat{\Sigma}_{1}^{0},\\[2mm] \left(\begin{array}{ccc}1&-\overline{\tilde{r}(1/p_{1})}e^{it\theta_{12}((c_{a}t)^{-1/3}\hat{z}+z_{a})}&0\\ 0&1&0\\ 0&0&1\end{array}\right),&\hat{z}\in\hat{\Sigma}_{1}^{0*},\\[2mm] \left(\begin{array}{ccc}1&0&0\\ \tilde{r}(1/p_{0})e^{-it\theta_{12}((c_{a}t)^{-1/3}\hat{z}+z_{a})}&1&0\\ 0&0&1\end{array}\right),&\hat{z}\in\hat{\Sigma}_{2}^{0},\\[2mm] \left(\begin{array}{ccc}1&-\overline{\tilde{r}(1/p_{0})}e^{it\theta_{12}((c_{a}t)^{-1/3}\hat{z}+z_{a})}&0\\ 0&1&0\\ 0&0&1\end{array}\right),&\hat{z}\in\hat{\Sigma}_{2}^{0*},\\[2mm] \mathcal{A}_{1}\left(\begin{array}{ccc}1&-\overline{\tilde{r}((c_{a}t)^{-1/3}\hat{z}+z_{a})}&0\\ 0&1&0\\ 0&0&1\end{array}\right)\left(\begin{array}{ccc}1&0&0\\ \tilde{r}((c_{a}t)^{-1/3}\hat{z}+z_{a})&1&0\\ 0&0&1\end{array}\right)\mathcal{A}_{1}^{-1},&\hat{z}\in(\hat{z}_{1},\hat{z}_{2}),\end{array}\right. \tag{4.56}\] where \[\mathcal{A}_{1}=\left(\begin{array}{ccc}e^{it\theta_{12}((c_{a}t)^{-1/3}\hat{z}+z_{a})/2}&0&0\\ 0&e^{-it\theta_{12}((c_{a}t)^{-1/3}\hat{z}+z_{a})/2}&0\\ 0&0&1\end{array}\right). \tag{4.57}\]

Since (4.49) holds, we will show that the RH problem for \(M_{a}^{L,0}(\hat{z})\) in the \(\hat{z}\)-plane can be explicitly approximated by the model RH problem for \(N^{P}(\hat{z})\) in Appendix B. To do this, we give the following basic estimates.

**Proposition 4.3**.: _As \(t\to+\infty\),_

\[|\tilde{r}((c_{a}t)^{-1/3}\hat{z}+z_{a})e^{-it\theta_{12}((c_{a}t)^{-1/3}\hat{z}+z_{a})}-R_{a}e^{i(8\hat{z}^{3}/3+2\tilde{s}\hat{z})}|\lesssim t^{-1/3+2\delta_{1}},\ \hat{z}\in(\hat{z}_{1},\hat{z}_{2}), \tag{4.58}\] \[|\tilde{r}(1/p_{1})e^{-it\theta_{12}((c_{a}t)^{-1/3}\hat{z}+z_{a})}-R_{a}e^{i(8\hat{z}^{3}/3+2\tilde{s}\hat{z})}|\lesssim t^{-1/3+2\delta_{1}},\ \hat{z}\in\hat{\Sigma}_{1}^{0}, \tag{4.59}\] \[|\tilde{r}(1/p_{0})e^{-it\theta_{12}((c_{a}t)^{-1/3}\hat{z}+z_{a})}-R_{a}e^{i(8\hat{z}^{3}/3+2\tilde{s}\hat{z})}|\lesssim t^{-1/3+2\delta_{1}},\ \hat{z}\in\hat{\Sigma}_{2}^{0}.
\tag{4.60}\] _where_ \[R_{a}:=\tilde{r}(z_{a})e^{-it\theta_{12}(z_{a})} \tag{4.61}\] Proof.: For \(\hat{z}\in(\hat{z}_{1},\hat{z}_{2})\), we have \[|e^{-it\theta_{12}((c_{a}t)^{-1/3}\hat{z}+z_{a})}|=|e^{i(8\hat{z}^{3}/3+2 \tilde{s}\hat{z})}|=1,\] and \[\left|\tilde{r}((c_{a}t)^{-1/3}\hat{z}+z_{a})-\tilde{r}(z_{a})\right|=\left| \int_{z_{a}}^{(c_{a}t)^{-1/3}\hat{z}+z_{a}}\tilde{r}^{\prime}(\eta)d\eta \right|\leq\|\tilde{r}^{\prime}\|_{L^{\infty}(\mathbb{R})}\left|(c_{a}t)^{-1/ 3}\hat{z}\right|\lesssim t^{-1/3}.\] For \(\hat{z}\in\hat{\Sigma}_{1}^{0}\), since \(\mathrm{Re}(i(8\hat{z}^{3}/3+2\tilde{s}\hat{z}))<0\), it follows that \(|e^{i(8\hat{z}^{3}/3+2\tilde{s}\hat{z})}|\) is bounded and \[\left|e^{-it\theta_{12}((c_{a}t)^{-1/3}\hat{z}+z_{a})}-e^{-it\theta_{12}(z_{a })}e^{i(8\hat{z}^{3}/3+2\tilde{s}\hat{z})}\right|=\left|e^{\mathcal{O}(t^{-1/3 }\hat{z}^{3})}-1\right|\lesssim t^{-1/3+2\delta_{1}}. \tag{4.62}\] On the other hand, \[|\tilde{r}(1/p_{1})-\tilde{r}(z_{a})|=\left|\int_{z_{a}}^{1/p_{1}}\tilde{r}^{ \prime}(\eta)d\eta\right|\leq\|\tilde{r}^{\prime}\|_{L^{\infty}(\mathbb{R})} \left|1/p_{1}-z_{a}\right|\lesssim t^{-1/3}. \tag{4.63}\] Then (4.59) can be derived from (4.62) and (4.63). For \(\hat{z}\in\hat{\Sigma}_{2}^{0}\), we can use a similar method to prove (4.60). Then we obtain the following proposition as a direct corollary of Proposition 4.3. **Proposition 4.4**.: _As \(t\to+\infty\),_ \[M_{a}^{L,0}(\hat{z})=\hat{M}_{a}^{L,0}(\hat{z})+\mathcal{O}(t^{-1/3+2\delta_{1 }}), \tag{4.64}\] _where \(\hat{M}_{a}^{L,0}(\hat{z})\) satisfies the following RH problem._ **RH problem 4.7**.: _Find a matrix-valued function \(\hat{M}_{a}^{L,0}(\hat{z})\) with properties:_ * \(\hat{M}_{a}^{L,0}(\hat{z})\) _is analytical in_ \(\mathbb{C}\setminus\hat{\Sigma}_{a}^{L,0}\)_._ * \(\hat{M}^{L,0}_{a}(\hat{z})=\hat{M}^{L,0}_{a-}(\hat{z})\hat{V}^{L,0}_{a}(\hat{z}), \ \hat{z}\in\hat{\Sigma}^{L,0}_{a}\)_, where_ \[\hat{V}^{L,0}_{a}(\hat{z})=\left\{\begin{array}{ccc}1&0&0\\ \left(\begin{array}{ccc}R_{a}e^{i(8\hat{z}^{3}/3+2\tilde{s}\hat{z})}&1&0\\ 0&0&1\end{array}\right),&\hat{z}\in\hat{\Sigma}^{0}_{j},\ j=1,2,\\ \left(\begin{array}{ccc}1&-\overline{R_{a}}e^{-i(8\hat{z}^{3}/3+2\tilde{s} \hat{z})}&0\\ 0&1&0\\ 0&0&1\end{array}\right),&\hat{z}\in\hat{\Sigma}^{0*}_{j},\ j=1,2,\\ \left(\begin{array}{ccc}1&-\overline{R_{a}}e^{-i(8\hat{z}^{3}/3+2\tilde{s} \hat{z})}&0\\ 0&1&0\\ 0&0&1\end{array}\right)\left(\begin{array}{ccc}1&0&0\\ R_{a}e^{i(8\hat{z}^{3}/3+2\tilde{s}\hat{z})}&1&0\\ 0&0&1\end{array}\right),\ \hat{z}\in(\hat{z}_{1},\hat{z}_{2}).\end{array}\right.\] (4.65) * \(\hat{M}^{L,0}_{a}(\hat{z})=I+\mathcal{O}(\hat{z}^{-1}),\ \ \ \ \hat{z}\to\infty\)_._ To match RH problem 4.7 with the model RH problem, we need to convert the coefficients in front of the exponential terms in the jump matrix into the pure imaginary number. Rewriting \(R_{a}\) as \[R_{a}=|R_{a}|e^{i\varphi_{a}} \tag{4.66}\] with \[|R_{a}|=|r(z_{a})|,\ \varphi_{a}=\arg R_{a}=\arg\tilde{r}(z_{a})-t \theta_{12}(z_{a}), \tag{4.67}\] we make the following transformation \[\tilde{M}^{L,0}_{a}(\hat{z})=\mathcal{B}_{1}\hat{M}^{L,0}_{a}( \hat{z})\mathcal{B}^{-1}_{1}, \tag{4.68}\] where \[\mathcal{B}_{1}=\left(\begin{array}{ccc}e^{i(\varphi_{a}/2-\pi/4)}&0&0\\ 0&e^{-i(\varphi_{a}/2-\pi/4)}&0\\ 0&0&1\end{array}\right). \tag{4.69}\] Then \(\tilde{M}^{L,0}_{a}(\hat{z})\) satisfies the following RH problem. 
**RH problem 4.8**.: _Find a matrix-valued function \(\tilde{M}^{L,0}_{a}(\hat{z})\) with the following properties:_

* \(\tilde{M}^{L,0}_{a}(\hat{z})\) _is analytical in_ \(\mathbb{C}\setminus\hat{\Sigma}^{L,0}_{a}\)_._
* \(\tilde{M}^{L,0}_{a+}(\hat{z})=\tilde{M}^{L,0}_{a-}(\hat{z})\tilde{V}^{L,0}_{a}(\hat{z}),\ \hat{z}\in\hat{\Sigma}^{L,0}_{a}\)_, where_ \[\tilde{V}^{L,0}_{a}(\hat{z})=\left\{\begin{array}{ll}\left(\begin{array}{ccc}1&0&0\\ i|r(z_{a})|e^{i(8\hat{z}^{3}/3+2\tilde{s}\hat{z})}&1&0\\ 0&0&1\end{array}\right),&\hat{z}\in\hat{\Sigma}^{0}_{j},\ j=1,2,\\[2mm] \left(\begin{array}{ccc}1&i|r(z_{a})|e^{-i(8\hat{z}^{3}/3+2\tilde{s}\hat{z})}&0\\ 0&1&0\\ 0&0&1\end{array}\right),&\hat{z}\in\hat{\Sigma}^{0*}_{j},\ j=1,2,\\[2mm] \left(\begin{array}{ccc}1&i|r(z_{a})|e^{-i(8\hat{z}^{3}/3+2\tilde{s}\hat{z})}&0\\ 0&1&0\\ 0&0&1\end{array}\right)\left(\begin{array}{ccc}1&0&0\\ i|r(z_{a})|e^{i(8\hat{z}^{3}/3+2\tilde{s}\hat{z})}&1&0\\ 0&0&1\end{array}\right),&\hat{z}\in(\hat{z}_{1},\hat{z}_{2}).\end{array}\right. \tag{4.70}\]
* \(\tilde{M}_{a}^{L,0}(\hat{z})=I+\mathcal{O}(\hat{z}^{-1})\), \(\hat{z}\to\infty\).

Observing the jump matrix (4.70), the RH problem 4.8 can be explicitly solved by using the model RH problem in Appendix B.

**Proposition 4.5**.: _Let \(c_{1}=i|r(z_{a})|\); then \(\tilde{M}_{a}^{L,0}(\hat{z})=\left(I+\mathcal{O}(t^{-1/3+2\delta_{1}})\right)N^{P}(\hat{z})\)._

Proof.: Since \(N^{P}(\hat{z})\) is invertible, we define \[\Xi(\hat{z}):=\tilde{M}_{a}^{L,0}(\hat{z})N^{P}(\hat{z})^{-1}, \tag{4.71}\] which satisfies the following RH problem.

**RH problem 4.9**.: _Find a matrix-valued function \(\Xi(\hat{z})\) with the following properties:_

* \(\Xi(\hat{z})\) _is analytical in_ \(\mathbb{C}\setminus\hat{\Sigma}_{a}^{L,0}\)_._
* \(\Xi_{+}(\hat{z})=\Xi_{-}(\hat{z})V^{\Xi}(\hat{z}),\ \hat{z}\in\hat{\Sigma}_{a}^{L,0}\)_, where_ \[V^{\Xi}(\hat{z})=N_{-}^{P}(\hat{z})\tilde{V}_{a}^{L,0}(\hat{z})V^{P}(\hat{z})^{-1}N_{+}^{P}(\hat{z})^{-1}. \tag{4.72}\]
* \(\Xi(\hat{z})=I+\mathcal{O}(\hat{z}^{-1})\), \(\hat{z}\to\infty\)_._

Because of the boundedness of \(N^{P}(\hat{z})\) (see (A.7) in Appendix A), it is sufficient to estimate the error between the jump matrices \(\tilde{V}_{a}^{L,0}(\hat{z})\) and \(V^{P}(\hat{z})\): \[\tilde{V}_{a}^{L,0}(\hat{z})-V^{P}(\hat{z})=\begin{cases}\begin{pmatrix}0&0&0\\ (i|r(z_{a})|-p)e^{i(8\hat{z}^{3}/3+2\tilde{s}\hat{z})}&0&0\\ 0&0&0\end{pmatrix},&\hat{z}\in\hat{\Sigma}_{j}^{0},\ j=1,2,\\[2mm] \begin{pmatrix}0&(i|r(z_{a})|+p^{*})e^{-i(8\hat{z}^{3}/3+2\tilde{s}\hat{z})}&0\\ 0&0&0\\ 0&0&0\end{pmatrix},&\hat{z}\in\hat{\Sigma}_{j}^{0*},\ j=1,2,\\[2mm] \begin{pmatrix}-|r(z_{a})|^{2}+|p|^{2}&(i|r(z_{a})|+p^{*})e^{-i(8\hat{z}^{3}/3+2\tilde{s}\hat{z})}&0\\ (i|r(z_{a})|-p)e^{i(8\hat{z}^{3}/3+2\tilde{s}\hat{z})}&0&0\\ 0&0&0\end{pmatrix},&\hat{z}\in(\hat{z}_{1},\hat{z}_{2}).\end{cases}\] Then, using Proposition 4.3, we have \[\|V^{\Xi}(\hat{z})-I\|_{L^{1}\cap L^{2}\cap L^{\infty}(\hat{\Sigma}_{a}^{L,0})}\lesssim t^{-1/3+2\delta_{1}}. \tag{4.73}\] Thus, the existence and uniqueness of \(\Xi(\hat{z})\) follow from a small-norm RH problem argument, which also yields \[\Xi(\hat{z})=I+\mathcal{O}(t^{-1/3+2\delta_{1}}),\quad\text{as }t\to+\infty. \tag{4.74}\]

Finally, the following proposition follows directly from Propositions 4.3-4.5.

**Proposition 4.6**.: _As \(t\to+\infty\),_ \[M_{a}^{L,0}(\hat{z})=\mathcal{B}_{1}^{-1}N^{P}(\hat{z})\mathcal{B}_{1}+\mathcal{O}(t^{-1/3+2\delta_{1}}).
\tag{4.75}\] _Moreover, as \(\hat{z}\to\infty\),_ \[M_{a}^{L,0}(\hat{z})=I+\frac{M_{a1}^{L,0}}{\hat{z}}+\mathcal{O}(\hat{z}^{-2}), \tag{4.76}\] _where \(M_{a1}^{L,0}\) is the coefficient of the term \(1/\hat{z}\) in the large-\(\hat{z}\) expansion of \(M_{a}^{L,0}(\hat{z})\). As \(t\to+\infty\),_ \[M_{a1}^{L,0}=\frac{i}{2}\begin{pmatrix}-\int_{\tilde{s}}^{\infty}u(\eta)^{2}d \eta&u(\tilde{s})e^{-i\varphi_{a}}&0\\ -u(\tilde{s})e^{i\varphi_{a}}&\int_{\tilde{s}}^{\infty}u(\eta)^{2}d\eta&0\\ 0&0&0\end{pmatrix}+\mathcal{O}(t^{-1/3+2\delta_{1}}). \tag{4.77}\] The RH problem for \(M_{b}^{L,0}\), \(M_{c}^{L,0}\) and \(M_{d}^{L,0}\) can be solved in a similar manner. For \(z\in U_{b}^{0}\) and \(t\) large enough, we have \[-t\theta_{12}(z)=-t\theta_{12}(z_{b})+\frac{8}{3}\tilde{z}^{3}+2\tilde{s} \tilde{z}+\mathcal{O}(t^{-1/3}\tilde{z}^{2}), \tag{4.78}\] where \(\tilde{s}\) is defined by (4.52) and \[\tilde{z}=(c_{b}t)^{\frac{1}{3}}(z-z_{b}), \tag{4.79}\] is the scaled spectral parameter in this case with \[c_{b}=\frac{21}{256}(14\sqrt{3}+9\sqrt{7}). \tag{4.80}\] Through the analysis, we obtain the result of the large-\(\tilde{z}\) expansion of \(M_{b}^{L,0}(\tilde{z})\), \[M_{b}^{L,0}(\tilde{z})=I+\frac{M_{b1}^{L,0}}{\tilde{z}}+\mathcal{O}(\tilde{z} ^{-2}), \tag{4.81}\] where as \(t\to+\infty\), \[M_{b1}^{L,0}=\frac{i}{2}\begin{pmatrix}-\int_{\tilde{s}}^{\infty}u(\eta)^{2}d \eta&u(\tilde{s})e^{-i\varphi_{b}}&0\\ -u(\tilde{s})e^{i\varphi_{b}}&\int_{\tilde{s}}^{\infty}u(\eta)^{2}d\eta&0\\ 0&0&0\end{pmatrix}+\mathcal{O}(t^{-1/3+2\delta_{1}}), \tag{4.82}\] with \[\varphi_{b}=\arg\tilde{r}(z_{b})-t\theta_{12}(z_{b}). \tag{4.83}\] For \(z\in U_{c}^{0}\), and for sufficiently large \(t\), the following relationship holds: \[-t\theta_{12}(z)=-t\theta_{12}(z_{c})+\frac{8}{3}\tilde{z}^{3}+2\tilde{s} \tilde{z}+\mathcal{O}(t^{-1/3}\tilde{z}^{2}). \tag{4.84}\] Here, \(\tilde{s}\) is defined in accordance with (4.52), and the scaled spectral parameter \(\tilde{z}\) is given by: \[\tilde{z}=(c_{c}t)^{\frac{1}{3}}(z-z_{c}), \tag{4.85}\] where \[c_{c}=c_{b}, \tag{4.86}\] which is as specified in (4.80). Moreover, as \(\tilde{z}\to\infty\), \[M_{c}^{L,0}(\tilde{z})=I+\frac{M_{c1}^{L,0}}{\tilde{z}}+\mathcal{ O}(\tilde{z}^{-2}), \tag{4.87}\] where as \(t\to+\infty\), \[M_{c1}^{L,0}=\frac{i}{2}\begin{pmatrix}-\int_{\tilde{s}}^{\infty }u(\eta)^{2}d\eta&-u(\tilde{s})e^{i\varphi_{b}}&0\\ u(\tilde{s})e^{-i\varphi_{b}}&\int_{\tilde{s}}^{\infty}u(\eta)^{2}d\eta&0\\ 0&0&0\end{pmatrix}+\mathcal{O}(t^{-1/3+2\delta_{1}}). \tag{4.88}\] Here we use the symmetry \(|r(z)|=|r(-z)|\) whose detailed proof could be given by the similar method in the proof of Proposition 4.2 in [15]. For \(z\in U_{d}^{0}\), and for sufficiently large \(t\), the following relationship holds: \[-t\theta_{12}(z)=-t\theta_{12}(z_{d})+\frac{8}{3}\breve{z}^{3}+2 \tilde{s}\tilde{z}+\mathcal{O}(t^{-1/3}\breve{z}^{2}). \tag{4.89}\] Here, \(\tilde{s}\) is defined in accordance with (4.52), and the scaled spectral parameter \(\breve{z}\) is given by: \[\breve{z}=(c_{d}t)^{\frac{1}{3}}(z-z_{d}), \tag{4.90}\] where \[c_{d}=c_{a}, \tag{4.91}\] which is as specified in (4.51). 
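The same phase rotation (4.66)-(4.69) produces the purely imaginary coefficients in each of these local models, with \(\varphi_{a}\) replaced by the corresponding \(\varphi_{b}\). As a quick symbolic check (our own illustration, not part of the original argument), conjugation by \(\mathcal{B}_{1}\) indeed rotates \(R_{a}=|R_{a}|e^{i\varphi_{a}}\) into \(i|R_{a}|\), as used in (4.70):

```python
import sympy as sp

rho, phi, psi = sp.symbols("rho phi psi", real=True, positive=True)

# B_1 from (4.69); psi stands in for the phase 8*z^3/3 + 2*s*z
B1 = sp.diag(sp.exp(sp.I * (phi / 2 - sp.pi / 4)),
             sp.exp(-sp.I * (phi / 2 - sp.pi / 4)), 1)
V = sp.Matrix([[1, 0, 0],
               [rho * sp.exp(sp.I * phi) * sp.exp(sp.I * psi), 1, 0],
               [0, 0, 1]])

entry = sp.simplify((B1 * V * B1 ** -1)[1, 0])
print(entry)  # rho*exp(I*(psi + pi/2)), i.e. i*|R_a|*e^{i psi}, matching (4.70)
```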
Moreover, as \(\breve{z}\to\infty\), \[M_{d}^{L,0}(\breve{z})=I+\frac{M_{d1}^{L,0}}{\breve{z}}+ \mathcal{O}(\breve{z}^{-2}), \tag{4.92}\] where as \(t\to+\infty\), \[M_{d1}^{L,0}=\frac{i}{2}\begin{pmatrix}-\int_{\tilde{s}}^{\infty }u(\eta)^{2}d\eta&-u(\tilde{s})e^{i\varphi_{a}}&0\\ u(\tilde{s})e^{-i\varphi_{a}}&\int_{\tilde{s}}^{\infty}u(\eta)^{2}d\eta&0\\ 0&0&0\end{pmatrix}+\mathcal{O}(t^{-1/3+2\delta_{1}}). \tag{4.93}\] According to the symmetries of RH problem 2.1, we have the following proposition. **Proposition 4.7**.: _As \(z\to\infty\), the coefficients \(M_{j1}^{L,l},\ j=a,b,c,d,\ l=0,1,2\) of the term of \(z^{-1}\) have the following relationships:_ \[M_{j1}^{L,1}=\omega\Gamma_{3}\overline{M_{j1}^{L,0}}\Gamma_{3}, \quad M_{j1}^{L,2}=\omega^{2}\Gamma_{2}\overline{M_{j1}^{L,0}}\Gamma_{2}. \tag{4.94}\] Combining the results above, finally we obtain the following proposition. **Proposition 4.8**.: _As \(t\to+\infty\),_ \[M^{L}(z)=I+t^{-1/3}\sum_{j=a,b,c,d}c_{j}^{-1/3}\left(\frac{M_{j1}^ {L,0}}{z-\xi_{j}}+\frac{\omega\Gamma_{3}\overline{M_{j1}^{L,0}}\Gamma_{3}}{z- \omega\xi_{j}}+\frac{\omega^{2}\Gamma_{2}\overline{M_{j1}^{L,0}}\Gamma_{2}}{z- \omega^{2}\xi_{j}}\right)+\mathcal{O}(t^{-1}), \tag{4.95}\] _where \(c_{j},\ j=a,b,c,d\) are defined by (4.51), (4.80), (4.86), and (4.91), and \(M_{j1}^{L,0},\ j=a,b,c,d\) are given by (4.77), (4.82), (4.88), and (4.93)._ #### 4.4.2 RH problem near singularities RH problems near singularities \(\varkappa_{j},\,j=1,\cdots,6\) have the following properties. **RH problem 4.10**.: _Find a matrix-valued function \(M_{j}^{B}(z)\) with following properties:_ * \(M_{j}^{B}(z)\) _is meromorphic in_ \(\mathbb{C}\setminus(\mathbb{B}_{j}\cap L_{j})\)_._ * \(M_{j}^{B}\) _has continuous boundary values_ \(M_{j\pm}^{B}\) _on_ \(\mathbb{B}_{j}\cap L_{j}\) _and_ \[M_{j+}^{B}(z)=M_{j-}^{B}(z)V^{B}(z),\ \ \ \ z\in\mathbb{B}_{j}\cap L_{j},\] (4.96) _where_ \(V^{B}(z)=V^{(3)}(z)|_{z\in\mathbb{B}_{j}\cap L_{j}}\)_._ * \(M_{j}^{B}(z)=I+\mathcal{O}(z^{-1}),\ \ \ \ z\to\infty\)_._ By the symmetry of \(M^{R}(z)\), we obtain \(M_{3}^{B}(z)=\Gamma_{3}\overline{M_{1}^{B}(\omega\bar{z})}\Gamma_{3}\) for \(z\in\mathbb{B}_{3}\), \(M_{5}^{B}(z)=\Gamma_{2}\overline{M_{1}^{B}(\omega^{2}\bar{z})}\Gamma_{2}\) for \(z\in\mathbb{B}_{5}\), \(M_{6}^{B}(z)=\Gamma_{3}\overline{M_{4}^{B}(\omega\bar{z})}\Gamma_{3}\) for \(z\in\mathbb{B}_{6}\), and \(M_{2}^{B}(z)=\Gamma_{2}\overline{M_{4}^{B}(\omega^{2}\bar{z})}\Gamma_{2}\) for \(z\in\mathbb{B}_{2}\). Thus, we only need to give the detail of RH problem for \(M_{1}^{B}(z)\). Then \(M_{4}^{B}(z)\) can be obtained analogously and the others can be obtained by symmetry. \(M_{1}^{B}(z)\) only has the jump condition on \((1-2\varepsilon,1+2\varepsilon)\) with \[V^{B}(z)=\left(\begin{array}{ccc}1&\frac{rT_{21}\mathcal{X}e^{ i\theta_{12}}}{1-|r|^{2}}&0\\ 0&1&0\\ 0&0&1\end{array}\right)\left(\begin{array}{ccc}1&0&0\\ \frac{\bar{r}T_{12}\mathcal{X}e^{-it\theta_{12}}}{1-|r|^{2}}&1&0\\ 0&0&1\end{array}\right).\] To proceed, we first give the following lemma about the properties of the imaginary part of the phase function \(\theta_{12}\). **Lemma 4.1**.: _Let \(\Omega_{j}^{l},\ j=1,\cdots,8,\ l=0,1,2\), denote the sectors defined in (4.10)- (4.13). The following estimates for the imaginary part of the phase function \(\theta_{12}(z)\) in the transition zone \(0<(\xi+1/8)t^{2/3}<C\), as defined in (2.26), are valid. 
Similar estimates can also be provided for the imaginary parts of the phase functions \(\theta_{23}(z)\) and \(\theta_{31}(z)\)._ * _If_ \(|\operatorname{Re}z(1-|z|^{-2})|\leq 2\)_, then_ \[\operatorname{Im}\theta_{12}(z) \leq-\frac{\sqrt{3}(7-3\tilde{k}_{1}^{2})}{(\tilde{k}_{1}+1)^{2}} \operatorname{Im}z\left(\operatorname{Re}z-\xi_{j}\right)^{2},\ \ \ z\in\Omega_{j}^{0},\ j=1,\cdots,8,\] (4.97a) \[\operatorname{Im}\theta_{12}(z) \geq\frac{\sqrt{3}(7-3\tilde{k}_{1}^{2})}{(\tilde{k}_{1}+1)^{2}} \operatorname{Im}z\left(\operatorname{Re}z-\xi_{j}\right)^{2},\ \ \ z\in\Omega_{j}^{0*},\ j=1,\cdots,8,\] (4.97b) * _If_ \(|\operatorname{Re}z(1-|z|^{-2})|>2\)_, then_ \[\operatorname{Im}\theta_{12}(z) \leq-\frac{\sqrt{3}}{8}\operatorname{Im}z,\quad z\in\Omega_{j}^{0},\ j=1,\cdots,8,\] (4.98a) \[\operatorname{Im}\theta_{12}(z) \geq\frac{\sqrt{3}}{8}\operatorname{Im}z,\quad z\in\Omega_{j}^{ 0*},\ j=1,\cdots,8,\] (4.98b) Proof.: We give the proof on the sector \(\Omega_{1}\) and the proof on other sectors can be obtained similarly. For \(z\in\Omega_{1}\), denote \[z-1/z:=u+vi,\ u,v\in\mathbb{R},\ \text{and}\ \tilde{k}_{1}:=\xi_{1}-1/\xi_{1}.\] Then we obtain \(u=\operatorname{Re}z(1-1/|z|^{2})\), \(v=\operatorname{Im}z(1+1/|z|^{2})\). From (2.26), \[\operatorname{Im}\theta_{12}=\sqrt{3}v\left(\xi+F(u,v)\right),\ F(u,v):=\frac {u^{2}+v^{2}-1}{(u^{2}-v^{2}+1)^{2}+4u^{2}v^{2}}. \tag{4.99}\] It is evident that \[F(u,v)\leq\begin{cases}&F(u,0),\quad u^{2}\leq 4,\\ &0,\quad u^{2}>4.\end{cases} \tag{4.100}\] Moreover, from (4.2),we have \[\xi+F(u,0)=\frac{1-\tilde{k}_{1}^{2}}{(1+\tilde{k}_{1}^{2})^{2}}+\frac{u^{2}- 1}{(u^{2}+1)^{2}}=(u^{2}-\tilde{k}_{1}^{2})\frac{3+u^{2}-\tilde{k}_{1}^{2}(u^{ 2}-1)}{(1+\tilde{k}_{1}^{2})^{2}(u^{2}+1)^{2}} \tag{4.101}\] Inserting (4.100) and (4.101) into (4.99), we obtain (4.97a) and (4.98a) from the ranges of \(u\) and \(\tilde{k}_{1}\). To obtain the large time asymptotic behavior of \(M_{1}^{B}(e^{\frac{\pi i}{\theta}})\), we transform it to a pure \(\bar{\partial}\)-problem by multiplying a new function \(R^{B}\) defined as follow: \[R^{B}(z)=\left\{\begin{array}{ccc}\left(\begin{array}{ccc}1&R_{+}^{B}e^{ it\theta_{12}}&0\\ 0&1&0\\ 0&0&1\end{array}\right),&z\in\mathbb{C}^{-},\\ \left(\begin{array}{ccc}1&0&0\\ R_{-}^{B}e^{-it\theta_{12}}&1&0\\ 0&0&1\end{array}\right),&z\in\mathbb{C}^{+},\end{array}\right.\] in which \[R_{+}^{B}(z)=\mathcal{X}(\operatorname{Re}z)\mathcal{X}( \operatorname{Im}z+1)f(\operatorname{Re}z)g(z),\quad R_{-}^{B}(z)=\overline{ R_{+}^{B}(\bar{z})},\] with \[f(z)=\frac{r(z)}{1-|r(z)|^{2}},\quad\ g(z)=T_{12}(z). \tag{4.102}\] Then direct calculations yield \[|\bar{\partial}R_{+}^{B}|\lesssim|\mathcal{X}^{\prime}(\mathrm{Re}z)\mathcal{X}( \mathrm{Im}z+1)|+|\mathcal{X}(\mathrm{Re}z)\mathcal{X}^{\prime}(\mathrm{Im}z+1)|. \tag{4.103}\] It is obvious that the support of \(R_{+}^{B}\) and \(\bar{\partial}R_{+}^{B}\) are contained in \(\mathbb{B}_{1}\). Denote \[\tilde{M}_{1}^{B}(z)=M_{1}^{B}(z)R^{B}(z). \tag{4.104}\] Then \[\bar{\partial}\tilde{M}_{1}^{B}(z)=M_{1}^{B}(z)\bar{\partial}R^{B}(z),\quad \tilde{M}_{1}^{B}(z)\sim I,\ z\to\infty. \tag{4.105}\] Specially, \(\tilde{M}_{1}^{B}\) has no jump. Therefore, its solution can be given by the following integral equation \[\tilde{M}_{1}^{B}(z)=I+\frac{1}{\pi}\iint_{\mathbb{C}}\frac{\tilde{M}_{1}^{B} (\eta)\bar{\partial}R^{B}(\eta)}{\eta-z}dm(\eta). 
\tag{4.106}\] Denote \(C_{B}:L^{\infty}(\mathbb{C})\to L^{\infty}(\mathbb{C})\) be the integral operator as \[C_{B}f(z)=\frac{1}{\pi}\iint_{\mathbb{C}}\frac{f(\eta)\bar{\partial}R^{B}(\eta )}{\eta-z}dm(\eta). \tag{4.107}\] **Proposition 4.9**.: \(C_{B}\) _is a bounded integral operator from \(L^{\infty}(\mathbb{C})\) to \(L^{\infty}(\mathbb{C})\). Moreover, \(C_{B}\) has the following estimate:_ \[\parallel C_{B}\parallel_{L^{\infty}}\lesssim t^{-1/p},\ p>1, \tag{4.108}\] _which implies that \((I-C_{B})^{-1}\) exists as \(t\to\infty\)._ Proof.: A direct calculation shows that \[\parallel C_{B}\parallel_{L^{\infty}}\lesssim\iint_{\mathbb{C}^{+}}\frac{| \bar{\partial}R^{B}(\eta)|}{|\eta-z|}dm(\eta)+\iint_{\mathbb{C}^{-}}\frac{| \bar{\partial}R^{B}(\eta)|}{|\eta-z|}dm(\eta). \tag{4.109}\] Take the first term as an example. Let \(\eta=u+vi\), \(z=x+yi\). Using Holder's inequality, Lemma 4.1, and the following basic inequalities \[\parallel|\eta-z|^{-1}\parallel_{L^{q}(0,+\infty)}\lesssim|v-y|^{1/q-1}, \tag{4.110}\] where \(1<q<+\infty\) and \(\frac{1}{p}+\frac{1}{q}=1\), it follows that \[\iint_{\mathbb{C}^{+}}\frac{|\bar{\partial}R^{B}(\eta)|}{|\eta-z |}dm(\eta) \leq\int_{0}^{2\varepsilon}\int_{1-2\varepsilon}^{1+2\varepsilon}| \eta-z|^{-1}|\bar{\partial}R^{B}(\eta)|e^{-\frac{\sqrt{3}(7-3\tilde{k}_{1}^{2} )}{(\tilde{k}_{1}+1)^{2}}t(u-z_{b})^{2}v}dudv\] \[\lesssim\int_{0}^{2\varepsilon}v^{-1/q}e^{-\frac{\sqrt{3}(7-3 \tilde{k}_{1}^{2})}{(\tilde{k}_{1}+1)^{2}}(1+2\varepsilon-z_{b})^{2}tv}dv \lesssim t^{-1/p}. \tag{4.111}\] Hence, from \(\tilde{M}_{1}^{B}=\left(I-C_{B}\right)^{-1}\cdot I\), we get the existence and uniqueness of \(\tilde{M}_{1}^{B}(z)\). Take \(z=e^{\frac{\pi i}{6}}\) in (4.106), then \[\tilde{M}_{1}^{B}(z)-I=\frac{1}{\pi}\iint_{\mathbb{C}}\frac{\tilde{M}_{1}^{B} (\eta)\bar{\partial}R^{B}(\eta)}{\eta-e^{\frac{\pi i}{6}}}dm(\eta). \tag{4.112}\] **Proposition 4.10**.: _There exists a constant \(T_{1}\), such that for all \(t>T_{1}\), \(\tilde{M}^{B}(z)\) admits the following estimate_ \[\parallel\tilde{M}^{B}(e^{\frac{i\pi}{6}})-I\parallel\lesssim t^{-1}. \tag{4.113}\] Proof.: Since \(e^{\frac{\pi i}{6}}\notin\mathbb{B}_{1}\), \(|s-e^{\frac{\pi i}{6}}|\lesssim 1\) in \(\mathbb{B}_{1}\). Then, we have \[\frac{1}{\pi}\iint_{\mathbb{C}}\frac{\tilde{M}_{1}^{B}(\eta)\bar{\partial}R^{ B}(\eta)}{\eta-e^{\frac{\pi i}{6}}}dm(\eta)\lesssim\iint_{\mathbb{C}^{+}}| \bar{\partial}R^{B}(\eta)|dm(\eta)+\iint_{\mathbb{C}^{-}}|\bar{\partial}R^{B}( \eta)|dm(\eta).\] Here we take the estimate of the first term as an example \[\iint_{\mathbb{C}^{+}}|\bar{\partial}R^{B}(\eta)|dm(\eta)\lesssim \int_{0}^{2\varepsilon}\int_{1-2\varepsilon}^{1+2\varepsilon}|\bar{\partial}R ^{B}(\eta)|e^{-\frac{\sqrt{3}(\tau-3\tilde{k}_{1}^{2})}{(k_{1}+1)^{2}}(1+2 \varepsilon-z_{b})^{2}tv}dudv\] \[\lesssim\int_{0}^{2\varepsilon}e^{-\frac{\sqrt{3}(\tau-3\tilde{k }_{1}^{2})}{(k_{1}+1)^{2}}(1+2\varepsilon-z_{b})^{2}tv}dv\lesssim t^{-1}. \tag{4.114}\] The estimate of the second term can be given in a similar way. Finally, we obtain \[M_{1}^{B}(e^{\frac{i\pi}{6}})=\tilde{M}_{1}^{B}(e^{\frac{i\pi}{6}})R^{B}(e^{ \frac{i\pi}{6}})=I+\mathcal{O}(t^{-1}). \tag{4.115}\] In addition, for \(z\in\partial\mathbb{B}_{1}\), \(R^{B}(z)=I\) and then \(M_{1}^{B}(z)=\tilde{M}_{1}^{B}(z)\). 
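Both estimates above ultimately rely on the exponential decay rate from Lemma 4.1, whose proof rests on the algebraic identity (4.101). That identity is easy to confirm symbolically (our own illustration in Python/SymPy, not part of the original argument):

```python
import sympy as sp

u, k = sp.symbols("u k", real=True)  # k plays the role of k~_1

xi = (1 - k**2) / (1 + k**2)**2                 # value of xi used in Lemma 4.1
lhs = xi + (u**2 - 1) / (u**2 + 1)**2           # xi + F(u, 0), cf. (4.99)
rhs = ((u**2 - k**2) * (3 + u**2 - k**2 * (u**2 - 1))
       / ((1 + k**2)**2 * (u**2 + 1)**2))

print(sp.simplify(lhs - rhs))  # -> 0, confirming (4.101)
```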
Furthermore, for \(z\in\partial\mathbb{B}_{1}\) and \(\eta\in\partial\mathbb{B}_{1}\), we still have \(|\bar{\partial}R^{B}(\eta)||\eta-z|^{-1}=0\), which implies that \(|\bar{\partial}R^{B}(\eta)||\eta-z|^{-1}\) is bounded for \(z\in\partial\mathbb{B}_{1}\) and \(\eta\in\mathbb{B}_{1}\). Then, through direct calculations, we obtain for \(z\in\partial\mathbb{B}_{1}\), \[M_{1}^{B}(z)=I+\mathcal{O}(t^{-1}). \tag{4.116}\] Moreover, \(M_{1}^{B}(1)=R^{B}(1)^{-1}+\mathcal{O}(t^{-1/p})\). Here, \(p\) is an arbitrary constant with \(p>1\). For ease of use, we write \(1/p=1-\rho\), where \(\rho\) is a small enough positive constant with \(\rho<\frac{1}{4}\). Then we obtain \[M_{1}^{B}(1)=R^{B}(1)^{-1}+\mathcal{O}(t^{-1+\rho}).\] Similarly, for \(j=2,\cdots,6\), \[\lim_{z\to\varkappa_{j}}\frac{M_{j}^{B}(z)-M_{j}^{B}(\varkappa_{j})}{z-\varkappa_{j}}=\mathcal{O}(t^{-1+\rho}). \tag{4.117}\]

#### 4.4.3 Small norm RH problem for the residual error

In this section, we consider the error matrix-function \(E(z)\). From the definition (4.37), we can obtain the RH problem for \(E(z)\):

**RH problem 4.11**.: _Find a matrix-valued function \(E(z)\) with the following properties:_

* \(E(z)\) _is analytical in_ \(\mathbb{C}\setminus\Sigma^{E}\)_, where_ \[\Sigma^{E}=\partial U\cup\left(\cup_{j=1}^{6}\partial\mathbb{B}_{j}\right)\cup\left(\Sigma^{(3)}\setminus(U\cup(\cup_{j=1}^{6}\mathbb{B}_{j}))\right).\]
* \(E(z)\) _has continuous boundary values_ \(E_{\pm}(z)\) _satisfying_ \[E_{+}(z)=E_{-}(z)V^{E}(z),\ \ z\in\Sigma^{E},\] _where the jump matrix_ \(V^{E}(z)\) _is given by_ \[V^{E}(z)=\left\{\begin{aligned} & M^{O}(z)V^{(3)}(z)M^{O}(z)^{-1},&& z\in\Sigma^{(3)}\setminus(U\cup(\cup_{j=1}^{6}\mathbb{B}_{j})),\\ & M^{O}(z)M_{j}^{L,l}(z)^{-1}M^{O}(z)^{-1},&& z\in\partial U_{j}^{l},\ j=a,b,c,d,\ l=0,1,2,\\ & M^{O}(z)M_{j}^{B}(z)^{-1}M^{O}(z)^{-1},&& z\in\partial\mathbb{B}_{j},\ j=1,\cdots,6.\end{aligned}\right. \tag{4.118}\] _See Figure 4.3._
* \(E(z)\) _has the following asymptotic behavior:_ \[E(z)=I+\mathcal{O}(z^{-1}),\ \ \ \ z\to\infty.\]
* _As_ \(z\to\varkappa_{j}=e^{\frac{i\pi(j-1)}{3}},\ j=1,\cdots,6\)_, the limit of_ \(E(z)\) _has pole singularities with leading terms of a specific matrix structure,_ \[\lim_{z\to\varkappa_{j}}E(z)=\lim_{z\to\varkappa_{j}}M^{R}(z)M_{j}^{B}(z)^{-1}M^{O}(z)^{-1}=\mathcal{O}((z-\varkappa_{j})^{-2}).\] (4.119)

Figure 4.3: The jump contour \(\Sigma^{E}\). The red circles represent \(U\) and the orange rectangular boxes stand for \(\mathbb{B}_{j}\), \(j=1,\cdots,6\).

Next we prove that the above RH problem 4.11 can be approximated by the following RH problem, which has the same jump condition as RH problem 4.11, but no pole singularities.

**RH problem 4.12**.: _Find a matrix-valued function \(E^{(1)}(z)\) with the following properties:_

* \(E^{(1)}(z)\) _is analytical in_ \(\mathbb{C}\setminus\Sigma^{E}\)_._
* \(E^{(1)}(z)\) _has continuous boundary values_ \(E^{(1)}_{\pm}(z)\) _satisfying_ \[E^{(1)}_{+}(z)=E^{(1)}_{-}(z)V^{E}(z),\ \ z\in\Sigma^{E},\] _where the jump matrix_ \(V^{E}(z)\) _is given by (4.118)._
* \(E^{(1)}(z)=I+\mathcal{O}(z^{-1}),\ \ \ z\to\infty.\)

We will show that the RH problem 4.12 for \(E^{(1)}(z)\) is solvable for large \(t\) as a small-norm RH problem. By Proposition 4.8, the jump matrix \(V^{E}(z)\) admits the following estimates.
\[\parallel V^{E}(z)-I\parallel_{L^{p}(\Sigma^{E})}\lesssim\left\{\begin{aligned} & e^{-ct^{3\delta_{1}}},&& z\in\Sigma^{E}\setminus(U\cup(\cup_{j=1}^{6}\mathbb{B}_{j})),\\ & t^{-K_{p}},&& z\in\partial U,\\ & t^{-1},&& z\in\partial\mathbb{B}_{j},\end{aligned}\right. \tag{4.120}\] for some positive constant \(c\), with \(K_{\infty}=\delta_{1}\) and \(K_{2}=1/6+\delta_{1}\). Therefore, the existence and uniqueness of the solution of RH problem 4.12 can be shown by using a small-norm RH problem argument [28, 29]. Moreover, its solution can be given by \[E^{(1)}(z)=I+\frac{1}{2\pi i}\int_{\Sigma^{E}}\frac{\varpi(\eta)(V^{E}(\eta)-I)}{\eta-z}d\eta, \tag{4.121}\] where \(\varpi\in I+L^{2}(\Sigma^{E})\) is the unique solution of the equation \[\varpi=I+\mathcal{C}_{E}\varpi. \tag{4.122}\] Here \(\mathcal{C}_{E}\) is an integral operator defined by \[\mathcal{C}_{E}(f)(z)=\mathcal{C}^{-}\left(f(V^{E}(z)-I)\right), \tag{4.123}\] and \(\mathcal{C}^{-}\) is the Cauchy projection operator on \(\Sigma^{E}\). By (4.120), we have \[\parallel\mathcal{C}_{E}\parallel_{L^{2}(\Sigma^{E})}\leq\parallel\mathcal{C}^{-}\parallel_{L^{2}(\Sigma^{E})}\parallel V^{E}(z)-I\parallel_{L^{\infty}(\Sigma^{E})}\lesssim t^{-\delta_{1}}, \tag{4.124}\] which implies that \(1-\mathcal{C}_{E}\) is invertible for sufficiently large \(t\). So \(\varpi\) exists and is unique, with \[\varpi=I+(1-\mathcal{C}_{E})^{-1}(\mathcal{C}_{E}I). \tag{4.125}\] Moreover, the following estimates hold: \[\parallel\mathcal{C}_{E}I\parallel_{L^{2}(\Sigma^{E})}\lesssim t^{-1/6-\delta_{1}/2},\ \ \ \parallel\varpi-I\parallel_{L^{2}(\Sigma^{E})}\lesssim t^{-1/6-\delta_{1}/2}. \tag{4.126}\]

In order to reconstruct the solution \(u(y,t)\) of (1.7), it is necessary to consider the large time asymptotic behavior of \(E^{(1)}(e^{\frac{i\pi}{6}})\). Note that, based on (4.121) and (4.120), when estimating this asymptotic behavior our calculations need only focus on \(\partial U\), since the integrand tends to zero exponentially on the remaining parts of the contour.

**Proposition 4.11**.: _When \(z=e^{\frac{i\pi}{6}}\), we have_ \[E^{(1)}(e^{\frac{i\pi}{6}})=I+\frac{1}{2\pi i}\int_{\Sigma^{E}}\frac{\varpi(\eta)(V^{E}(\eta)-I)}{\eta-e^{\frac{i\pi}{6}}}d\eta,\] _with the large time asymptotic behavior_ \[E^{(1)}(e^{\frac{i\pi}{6}})=I+t^{-1/3}E_{1}+\mathcal{O}(t^{-2/3+2\delta_{1}}), \tag{4.127}\] _where \(E_{1}\) is explicitly computed by_ \[E_{1}=-\sum_{l=0}^{2}\sum_{j\in\{a,b,c,d\}}c_{j}^{-1/3}\frac{M^{O}(\omega^{l}z_{j})M_{j1}^{L,l}M^{O}(\omega^{l}z_{j})^{-1}}{\omega^{l}z_{j}-e^{\frac{i\pi}{6}}}=-\sum_{j\in\{a,b,c,d\}}c_{j}^{-1/3}\Big{(}\frac{M^{O}(z_{j})M_{j1}^{L,0}M^{O}(z_{j})^{-1}}{z_{j}-e^{\frac{i\pi}{6}}}+\frac{\omega M^{O}(\omega z_{j})\Gamma_{3}\overline{M_{j1}^{L,0}}\Gamma_{3}M^{O}(\omega z_{j})^{-1}}{\omega z_{j}-e^{\frac{i\pi}{6}}}+\frac{\omega^{2}M^{O}(\omega^{2}z_{j})\Gamma_{2}\overline{M_{j1}^{L,0}}\Gamma_{2}M^{O}(\omega^{2}z_{j})^{-1}}{\omega^{2}z_{j}-e^{\frac{i\pi}{6}}}\Big{)}. \tag{4.128}\]

Proof.: From (4.95) and (4.118), it follows that \[E^{(1)}(e^{\frac{i\pi}{6}})=I+\frac{1}{2\pi i}\sum_{l=0}^{2}\sum_{j\in\{a,b,c,d\}}\oint_{\partial U_{j}^{l}}\frac{M^{O}(\eta)(M_{j}^{L,l}(\eta)^{-1}-I)M^{O}(\eta)^{-1}}{\eta-e^{\frac{i\pi}{6}}}d\eta+\mathcal{O}(t^{-1/3-\delta_{1}})\] \[=I-\frac{1}{2\pi i}\sum_{j\in\{a,b,c,d\}}(c_{j}t)^{-1/3}\oint_{\partial U_{j}^{0}}\frac{M^{O}(\eta)M_{j1}^{L,0}M^{O}(\eta)^{-1}}{(\eta-e^{\frac{i\pi}{6}})(\eta-z_{j})}d\eta-\frac{1}{2\pi i}\sum_{j\in\{a,b,c,d\}}(c_{j}t)^{-1/3}\oint_{\partial U_{j}^{1}}\omega\frac{M^{O}(\eta)\Gamma_{3}\overline{M_{j1}^{L,0}}\Gamma_{3}M^{O}(\eta)^{-1}}{(\eta-e^{\frac{i\pi}{6}})(\eta-\omega z_{j})}d\eta\] \[-\frac{1}{2\pi i}\sum_{j\in\{a,b,c,d\}}(c_{j}t)^{-1/3}\oint_{\partial U_{j}^{2}}\omega^{2}\frac{M^{O}(\eta)\Gamma_{2}\overline{M_{j1}^{L,0}}\Gamma_{2}M^{O}(\eta)^{-1}}{(\eta-e^{\frac{i\pi}{6}})(\eta-\omega^{2}z_{j})}d\eta+\mathcal{O}(t^{-2/3+2\delta_{1}})\] \[=I-\sum_{j\in\{a,b,c,d\}}(c_{j}t)^{-1/3}M^{O}(e^{\frac{i\pi}{6}})\left(\frac{M_{j1}^{L,0}}{z_{j}-e^{\frac{i\pi}{6}}}+\frac{\omega\Gamma_{3}\overline{M_{j1}^{L,0}}\Gamma_{3}}{\omega z_{j}-e^{\frac{i\pi}{6}}}+\frac{\omega^{2}\Gamma_{2}\overline{M_{j1}^{L,0}}\Gamma_{2}}{\omega^{2}z_{j}-e^{\frac{i\pi}{6}}}\right)M^{O}(e^{\frac{i\pi}{6}})^{-1}+\mathcal{O}(t^{-2/3+2\delta_{1}}).\]

Moreover, for \(j=1,\cdots,6\), \[E^{(1)}(\varkappa_{j})=I+\mathcal{O}(t^{-1+\rho}). \tag{4.129}\]

Finally, we consider the error between \(E(z)\) and \(E^{(1)}(z)\). Define the error function \[E^{(2)}(z)=E(z)E^{(1)}(z)^{-1}, \tag{4.130}\] which solves a RH problem whose only singularities are \[\lim_{z\to\varkappa_{j}}E^{(2)}(z)=\lim_{z\to\varkappa_{j}}M^{R}(z)M^{B}_{j}(z)^{-1}M^{O}(z)^{-1}E^{(1)}(z)^{-1}, \tag{4.131}\] which leads to \[E^{(2)}(z)=I+\sum_{j=1}^{6}\left(\frac{E^{(2),j}_{-2}}{(z-\varkappa_{j})^{2}}+\frac{E^{(2),j}_{-1}}{z-\varkappa_{j}}\right), \tag{4.132}\] where \(E^{(2),j}_{-l}\), \(j=1,\cdots,6\) and \(l=1,2\), represent the coefficients of the terms \((z-\varkappa_{j})^{-l}\) in the expansion of \(E^{(2)}(z)\). Then, from (4.33) and (4.40), it follows that \[E^{(2),j}_{-2}=\left(\begin{array}{ccc}\alpha^{R}_{\pm}&\alpha^{R}_{\pm}&\beta^{R}_{\pm}\\ -\alpha^{R}_{\pm}&-\alpha^{R}_{\pm}&-\beta^{R}_{\pm}\\ 0&0&0\end{array}\right)M^{B}_{j}(\varkappa_{j})^{-1}\left(\begin{array}{ccc}\tilde{\alpha}^{O}_{\pm}&\tilde{\alpha}^{O}_{\pm}&\tilde{\beta}^{O}_{\pm}\\ -\tilde{\alpha}^{O}_{\pm}&-\tilde{\alpha}^{O}_{\pm}&-\tilde{\beta}^{O}_{\pm}\\ 0&0&0\end{array}\right)E^{(1)}(\varkappa_{j})^{-1}. \tag{4.133}\] As \(t\to\infty\), inserting (4.117) into the above formula leads to \[E^{(2),j}_{-2}=\mathcal{O}(t^{-1+\rho}). \tag{4.134}\] Analogously, as \(t\to\infty\), the coefficient of \((z-\varkappa_{j})^{-1}\) satisfies \[E^{(2),j}_{-1}=\mathcal{O}(t^{-1+\rho}). \tag{4.135}\] Summarizing the above results gives the following proposition.

**Proposition 4.12**.: _Taking \(z=e^{\frac{i\pi}{6}}\), we have_ \[E(e^{\frac{i\pi}{6}})=I+t^{-1/3}E_{1}+\mathcal{O}(t^{-2/3+2\delta_{1}}), \tag{4.136}\] _where \(E_{1}\) is given by (4.128)._

### Contribution from \(\bar{\partial}\)-components

Now we consider the asymptotics of \(M^{(4)}\) of the \(\bar{\partial}\)-problem 4.1, whose solution can be given by the integral equation \[M^{(4)}(z)=I+\frac{1}{\pi}\iint_{\mathbb{C}}\frac{M^{(4)}(\eta)W^{(4)}(\eta)}{\eta-z}dm(\eta), \tag{4.137}\] where \(m(\eta)\) is the Lebesgue measure on \(\mathbb{C}\).
Define the left Cauchy-Green integral operator, \[fC_{z}(z)=\frac{1}{\pi}\iint_{\mathbb{C}}\frac{f(\eta)W^{(4)}(\eta)}{\eta-z}dm (\eta),\] then the above equation (4.137) can be rewritten as \[(I-C_{z})\,M^{(4)}(z)=I. \tag{4.138}\] Aiming at estimating \(M^{(4)}(z)\), we need to evaluate the norm of the integral operator \((I-C_{z})^{-1}\) in this transition zone. By Lemma 4.1, we can show that for sufficiently large \(t\) the operator \(\mathcal{C}_{z}\) is small-norm, so that the resolvent operator \((I-\mathcal{C}_{z})^{-1}\) exists and can be expressed as a Neumann series. **Proposition 4.13**.: _The norm of the integral operator \(C_{z}\) satisfies the inequality_ \[\parallel C_{z}\parallel_{L^{\infty}\to L^{\infty}}\lesssim t^{-1/3},\ \ t\to\infty. \tag{4.139}\] Proof.: For any \(f\in L^{\infty}\), \[\parallel fC_{z}\parallel_{L^{\infty}}\leq \parallel f\parallel_{L^{\infty}}\frac{1}{\pi}\iint_{\mathbb{C}} \frac{|W^{(4)}(\eta)|}{|z-\eta|}dm(\eta).\] We detail the case for matrix functions having support in the region \(\Omega_{1}^{0}\), the case for the other regions follows similarly. Recall the definition of \(W^{(4)}(z)=M^{R}(z)\bar{\partial}\mathcal{R}^{(3)}(z)M^{R}(z)^{-1}\). Note that \(W^{(4)}(z)\equiv 0\) out of \(\overline{\Omega}\). Proposition 4.8 and 4.11 implies the boundedness of \(M^{R}(z)\) and \(M^{R}(z)^{-1}\) for \(z\in\overline{\Omega}_{1}\). By (4.31) and (4.39), it follows that \[\frac{1}{\pi}\iint_{\Omega_{1}}\frac{|W^{(4)}(\eta)|}{|z-\eta|}dm(\eta) \lesssim\frac{1}{\pi}\iint_{\Omega_{1}}\frac{|\bar{\partial}R_{1}(\eta)e^{-it \theta_{12}}|}{|z-\eta|}dm(\eta).\] From Lemma 4.1, we divide \(\Omega_{1}^{0}\) in two regions: * \(\{z\in\Omega_{1}^{0}:\operatorname{Re}z(1-|z|^{-2})\leq 2\}\subseteq\Omega_{A}:= \{z\in\Omega_{1}^{0}:\operatorname{Re}z\leq 3\}\), * \(\{z\in\Omega_{1}^{0}:\operatorname{Re}z(1-|z|^{-2})>2\}\subseteq\Omega_{B}:= \{z\in\Omega_{1}^{0}:\operatorname{Re}z\geq 2\}\). Set \(\eta=u+1/p_{1}+vi\), \(z=z_{R}+iz_{I}\), \(u,v,z_{R},z_{I}\in\mathbb{R}\), and \(1/q+1/p=1\) with \(p>2\). Referring to (4.20) and (4.21) in Proposition 4.1, then the following integral can be divided into three part: \[\iint_{\Omega_{1}^{0}}\frac{|\bar{\partial}R_{1}(\eta)|e^{t\mathrm{Im}\theta_ {12}}}{|z-\eta|}dm(\eta)\lesssim\hat{I}_{1}+\hat{I}_{2}+\hat{I}_{3},\] with \[\hat{I}_{1} :=\iint_{\Omega_{1}^{0}}\frac{\left(|\vec{r}^{\prime}(u+1/p_{1}) |+|\mathcal{X}^{\prime}(u+1/p1)|\right)e^{t\mathrm{Im}\theta_{12}}}{|z-\eta|}dm (\eta),\] \[\hat{I}_{2} :=\iint_{\Omega_{B}}\frac{|u|^{-1/2}e^{t\mathrm{Im}\theta_{12}}}{ |z-\eta|}dm(\eta).\] Our task now is to estimate the above integrals \(\hat{I}_{i}\), \(i=1,2,3\), respectively. \[\hat{I}_{1} \leq\iint_{\Omega_{A}}\frac{\left(|\vec{r}^{\prime}(u+1/p_{1})|+| \mathcal{X}^{\prime}(u+1/p_{1})|\right)e^{t\mathrm{Im}\theta_{12}}}{|z-\eta|}dm (\eta)\] \[+\iint_{\Omega_{B}}\frac{\left(|\vec{r}^{\prime}(u+1/p_{1})|+| \mathcal{X}^{\prime}(u+1/p_{1})|\right)e^{t\mathrm{Im}\theta_{12}}}{|z-\eta|}dm (\eta)\] \[\leq \int_{0}^{(3-1/p_{1})\tan\varphi_{0}}\int_{v}^{3}\frac{e^{-t\frac{ \sqrt{3}(7-3k_{1}^{2})}{(k_{1}+1)^{2}}}vu^{2}}{|z-\eta|}dudv+\int_{2}^{+\infty} \int_{u\tan(\varphi_{0}/3)}^{u\tan\varphi_{0}}\frac{e^{-t\frac{\sqrt{3}}{8}v}}{| z-\eta|}dvdu.\] Note that the following basic inequalities hold \[\||z-\eta|^{-1}\|_{L^{q}_{u}(v,\infty)}\lesssim|v-y|^{-1+1/q},\quad\|e^{-tvu^{ 2}}\|_{L^{p}_{u}(v,\infty)}\lesssim(tv)^{-1/(2p)},\quad\|e^{-tv}\|_{L^{p}_{u}( v,\infty)}\lesssim(pt)^{-1/p}. 
\tag{4.140}\] Then using Cauchy-Schwartz's inequality, we have \[\int_{0}^{(3-1/p_{1})\tan\varphi_{0}}\int_{v}^{3}\frac{e^{-t\frac {\sqrt{3}(7-3k_{1}^{2})}{(k_{1}+1)^{2}}}vu^{2}}{|z-\eta|}dudv\] \[\lesssim t^{-1/4}\int_{0}^{(3-1/p_{1})\tan\varphi_{0}}|v-z_{I}|^ {-1/2}v^{-1/4}e^{-t\frac{\sqrt{3}(7-3k_{1}^{2})}{(k_{1}+1)^{2}}}v^{3}dv \lesssim t^{-1/3}. \tag{4.141}\] Using Holder's inequality, we obtain \[\int_{2}^{+\infty}\int_{u\tan(\varphi_{0}/3)}^{u\tan\varphi_{0}}\frac{e^{-t \frac{\sqrt{3}}{8}v}}{|z-\eta|}dvdu\lesssim t^{-1/p}\int_{2}^{\infty}e^{-t \frac{\sqrt{3}}{8}u\tan(\varphi_{0}/3)}|u-z_{R}|^{-1+1/q}du\lesssim t^{-1}. \tag{4.142}\] Together (4.141) with (4.142) gives us that \[\hat{I}_{1}\lesssim t^{-1/3}. \tag{4.143}\] To estimate \(\hat{I}_{2}\), it follows from Holder's inequality again that \[\hat{I}_{2} \leq \iint_{\Omega_{B}}\frac{|u|^{-1/2}e^{t\text{Im}\theta_{12}}}{|z- \eta|}dm(\eta)\leq\int_{2}^{+\infty}\int_{u\tan(\varphi_{0}/3)}^{u\tan\varphi _{0}}\frac{|u|^{-1/2}e^{-t\frac{\sqrt{3}}{8}v}}{|z-\eta|}dvdu\] \[\lesssim \int_{2}^{+\infty}u^{1/p-1/2}|u-z_{R}|^{-1+1/q}e^{-t\frac{\sqrt{ 3}}{8}u\tan(\varphi_{0}/3)}du\lesssim t^{-1/2}.\] Then combining the estimates for \(\hat{I}_{1}\) and \(\hat{I}_{2}\) yields the estimate \[\frac{1}{\pi}\iint_{\Omega_{1}}\frac{|W^{(4)}(\eta)|}{|z-\eta|}dm(\eta)\lesssim t ^{-1/3}.\] The integral over other regions can be estimated in similar manners, which finally confirms (4.139). Then from (4.138), we immediately arrive at the existence and uniqueness of \(M^{(4)}(z)\) for \(z\in\mathbb{C}\). Take \(z=e^{\frac{i\pi}{6}}\) in (4.137), then \[M^{(4)}(e^{\frac{\pi i}{6}})=I+\frac{1}{\pi}\iint_{\mathbb{C}}\frac{M^{(4)}( \eta)W^{(4)}(\eta)}{\eta-e^{\frac{\pi i}{6}}}dm(\eta). \tag{4.144}\] To reconstruct the solution of (1.7), we need the local behaviors of (4.144) as \(t\to\infty\). **Proposition 4.14**.: _There exists a constant \(T_{1}\), such that for all \(t>T_{1}\), the solution \(M^{(4)}(z)\) of the \(\bar{\partial}\)-problem admits the following estimate:_ \[|M^{(4)}(e^{\frac{\pi i}{6}})-I|\lesssim t^{-2/3}. \tag{4.145}\] Proof.: Similar to the proof of Proposition 4.13, we only give the proof for the integral over \(\Omega_{1}^{0}\) and the integral on other regions can be obtained in the same way. Proposition 4.13 and (4.138) imply that the boundedness of \(M^{(4)}(z)\) for \(z\in\Omega_{1}^{0}\) as \(t\to\infty\). Then it is sufficient to consider the integral \[\iint_{\mathbb{C}}\frac{|M^{(4)}(\eta)W^{(4)}(\eta)|}{\eta-e^{\frac{\pi i}{6}} }dm(\eta)\lesssim\iint_{\mathbb{C}}\frac{|W^{(4)}(\eta)|}{\eta-e^{\frac{\pi i }{6}}}dm(\eta).\] Referring (4.21) in Proposition 4.1, and note that \(|\mathcal{X}^{\prime}(\mathrm{Re}z)|=0\) in \(\overline{\Omega_{1}^{0}}\), this integral can be divided into two parts \[\iint_{\Omega_{1}^{0}}\frac{|W^{(4)}(\eta)|}{\eta-e^{\frac{\pi i}{6}}}dm(\eta )\leq\left(\iint_{\Omega_{A}}+\iint_{\Omega_{B}}\right)\frac{|\bar{\partial} R_{1}(\eta)|e^{2t\mathrm{Im}\theta_{12}}}{\eta-e^{\frac{\pi i}{6}}}dm(\eta). \tag{4.146}\] Let \(\eta=u+1/p_{1}+vi\) with \(u,v\in\mathbb{R}\). 
From (4.21) and \(|\eta-e^{\frac{\pi i}{6}}|^{-1}\) is bounded for \(z\in\Omega_{A}\), we have \[\iint_{\Omega_{A}}\frac{|\bar{\partial}R_{1}(\eta)|e^{2t\mathrm{ Im}\theta_{12}}}{\eta-e^{\frac{\pi i}{6}}}dm(\eta) \leq\int_{0}^{(3-1/p_{1})\tan\varphi_{0}}\int_{v}^{3}|\tilde{r}^{ \prime}(u+1/p_{1})|e^{-t\frac{\sqrt{3}(7-3k_{1}^{2})}{(k_{1}+1)^{2}}eu^{2}}dudv\] \[\lesssim\int_{0}^{\infty}\int_{v}^{\infty}e^{-t\frac{\sqrt{3}(7-3 k_{1}^{2})}{(k_{1}+1)^{2}}eu^{2}}dudv\lesssim t^{-2/3}.\] To estimate the integral over \(\Omega_{B}\), we use the fact \(|\eta-e^{\frac{\pi i}{6}}|^{-1}\leq|\eta|^{-1}\) and the estimate (4.21) to obtain that \[\iint_{\Omega_{B}}\frac{|\bar{\partial}R_{1}(\eta)|e^{2t\mathrm{ Im}\theta_{12}}}{\eta-e^{\frac{\pi i}{6}}}dm(\eta)\leq\int_{2}^{+\infty}\int_{0}^{u }(u^{2}+v^{2})^{-1/2}e^{-t\frac{\sqrt{3}}{8}v}dvdu\lesssim t^{-1}. \tag{4.147}\] Combining the above two estimates, we arrive at the desired result (4.145). ### Proof of Theorem 1.1 Now we begin to construct the large time asymptotics of the Novikov equation (1.7) in the transition zone \(|\xi+\frac{1}{8}|t^{2/3}<C\). Inverting the sequence of transformations (3.19), (3.31), (4.28), (4.37) and (4.38), we have for \(z\in\mathbb{C}\setminus U\), \[M(z)= M^{(4)}(z)E(z)M^{O}(z)\mathcal{R}^{(3)}(z)^{-1}T(z)^{-1}G(z)^{-1}+ \mathcal{O}(e^{-ct}). \tag{4.148}\] To reconstruct the solution \(u(x,t)\) by using (2.34), we take \(z=e^{\frac{\pi i}{6}}\). In this case, \(\mathcal{R}^{(3)}(z)=G(z)=I\), and then we can obtain that \[M(e^{\frac{\pi i}{6}})= M^{(4)}(e^{\frac{\pi i}{6}})E(e^{\frac{\pi i}{6}})M^{O}(e^{ \frac{\pi i}{6}})T(e^{\frac{\pi i}{6}})^{-1}+\mathcal{O}(e^{-ct}). \tag{4.149}\] Using Proposition 4.14, (4.149) comes down to \[M(e^{\frac{\pi i}{6}})=E(e^{\frac{\pi i}{6}})M^{O}(e^{\frac{\pi i}{6}})T(e^{\frac {\pi i}{6}})^{-1}+\mathcal{O}(t^{-2/3}). 
\tag{4.150}\] Substitute the above estimates into (2.34) and (2.35), and obtain \[u(y,t) =u^{\Diamond}(y,t;\tilde{\mathcal{D}}_{\Diamond})\left(T_{1}(e^{ \frac{\pi i}{6}})T_{3}(e^{\frac{\pi i}{6}})\right)^{-1/2}-1\] \[+\frac{1}{2}\left(T_{1}(e^{\frac{\pi i}{6}})T_{3}(e^{\frac{\pi i} {6}})\right)^{-1/2}f_{11}t^{-1/3}+\mathcal{O}(t^{-2/3+2\delta_{1}}),\] \[x(y,t) =y+\frac{1}{2}\ln\frac{M_{33}^{O}(e^{\frac{\pi i}{6}};y,t)}{M_{11 }^{O}(e^{\frac{\pi i}{6}};y,t)}+\frac{1}{2}\ln\left(\frac{T_{1}(e^{\frac{\pi i }{6}})}{T_{3}(e^{\frac{\pi i}{6}})}\right)+\frac{1}{2}f_{12}t^{-1/3}+\mathcal{ O}(t^{-2/3+2\delta_{1}}),\] \[=x^{\Diamond}(y,t;\tilde{\mathcal{D}}_{\Diamond})+\frac{1}{2}\ln T _{13}(e^{\frac{\pi i}{6}})+\frac{1}{2}f_{12}t^{-1/3}+\mathcal{O}(t^{-2/3+2 \delta_{1}}),\] where \[f_{11} =\frac{1}{2}\left(\hat{m}_{1}^{\Diamond}(y,t)\left(\frac{M_{33} ^{O}(e^{\frac{\pi i}{6}})}{M_{11}^{O}(e^{\frac{\pi i}{6}})}\right)^{1/2}-\hat {m}_{3}^{\Diamond}(y,t)\left(\frac{M_{33}^{O}(e^{\frac{\pi i}{6}})}{M_{11}^{O} (e^{\frac{\pi i}{6}})}\right)^{-1/2}\right)f_{12}\] \[+(E_{1}M^{O}(e^{\frac{\pi i}{6}}))_{1}\left(\frac{M_{33}^{O}(e^{ \frac{\pi i}{6}})}{M_{11}^{O}(e^{\frac{\pi i}{6}})}\right)^{1/2}+(E_{1}M^{O}( e^{\frac{\pi i}{6}}))_{3}\left(\frac{M_{33}^{O}(e^{\frac{\pi i}{6}})}{M_{11}^{O} (e^{\frac{\pi i}{6}})}\right)^{-1/2}, \tag{4.151}\] \[f_{12} =(E_{1}M^{O}(e^{\frac{\pi i}{6}}))_{33}M_{33}^{O}(e^{\frac{\pi i }{6}})^{-1}-(E_{1}M^{O}(e^{\frac{\pi i}{6}}))_{11}M_{11}^{O}(e^{\frac{\pi i}{ 6}})^{-1}, \tag{4.152}\] \((E_{1}M^{O})_{ij}\) represents the element in the \(i\)-th row and \(j\)-th column of the matrix \(E_{1}M^{O}\), \((E_{1}M^{O})_{j}=\sum_{i=1}^{3}(E_{1}M^{O})_{ij}\), and \(u^{\Diamond}(y,t;\tilde{\mathcal{D}}_{\Diamond})\), \(x^{\Diamond}(y,t;\tilde{\mathcal{D}}_{\Diamond})\), and \(\hat{m}_{j}^{\Diamond}(y,t)\) are defined in Corollary 4.1. Bring (3.14) into the above formulas, we obtain the final result. ## Appendix A Modified Painleve II RH Problem The (homogeneous) Painleve II equation is \[v_{ss}=2v^{3}+sv,\quad s\in\mathbb{R}.\] (A.1) The standard Painleve II equation is related to a \(2\times 2\) matrix-valued RH problem, here we give a modified \(3\times 3\) matrix-valued RH problem related to (A.1) as follows. Denote \(\Sigma^{P}=\bigcup_{n=1}^{6}\left\{\Sigma_{n}^{P}=e^{i\left(\frac{\pi}{6}+(n- 1)\frac{\pi}{3}\right)}\mathbb{R}_{+}\right\}\), see Figure A.1. Let \(\mathcal{C}=\{c_{1},c_{2},c_{3}\}\) be a set of complex constants such that \[c_{1}-c_{2}+c_{3}+c_{1}c_{2}c_{3}=0,\] (A.2) and define the matrices \(\{C_{n}\}_{n=1}^{6}\) by \[C_{n}=\begin{pmatrix}1&0&0\\ c_{n}e^{2i(\frac{4}{3}k^{3}+sk)}&1&0\\ 0&0&1\end{pmatrix},\ n\ \text{odd};\quad C_{n}=\begin{pmatrix}1&c_{n}e^{-2i( \frac{4}{3}k^{3}+sk)}&0\\ 0&1&0\\ 0&0&1\end{pmatrix},\ n\ \text{even},\] where \(c_{n+3}=-c_{n},\ n=1,2,3\). Then there exists a countable set \(\mathcal{S}_{\mathbb{C}}=\{s_{j}\}_{j=1}^{\infty}\subset\mathbb{C}\) with \(s_{j}\to\infty\) as \(j\to\infty\), such that the following RH problem **RH problem Appendix A.1**.: _Find \(M^{P}(k)=M^{P}(k,s)\) with properties_ * _Analyticity:_ \(M^{P}(k)\) _is analytical in_ \(\mathbb{C}\setminus\Sigma^{P}\)_._ * _Jump condition:_ \[M^{P}_{+}(k)=M^{P}_{-}(k)C_{n},\quad k\in\Sigma^{P}_{n}.\] * _Asymptotic behavior:_ \[M^{P}(k) =I+\mathcal{O}(k^{-1}),\quad k\to\infty,\] \[M^{P}(k) =\mathcal{O}(1),\quad k\to 0.\] has a unique solution \(M^{P}(k,s)\) for each \(s\in\mathbb{C}\setminus\mathcal{S}_{\mathcal{C}}\). 
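For intuition, the solutions of (A.1) relevant below, namely those decaying like a multiple of the Airy function as \(s\to+\infty\) (cf. (A.8) below), can be produced numerically by integrating the Painleve II equation backwards from Airy-function data. The following sketch (our own illustration with SciPy, not part of the original text; the amplitude \(\kappa\), playing the role of \(-\operatorname{Im}c_{1}\), and all numerical choices are placeholders) does exactly that:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import airy

kappa, s0 = 0.5, 8.0                    # placeholder amplitude, |kappa| < 1
ai0, aip0, _, _ = airy(s0)              # Ai(s0), Ai'(s0)

def painleve_ii(s, y):                  # y = (u, u'); u'' = 2 u^3 + s u
    return [y[1], 2.0 * y[0] ** 3 + s * y[0]]

# Backward integration (decreasing s) is numerically stable here, since the
# growing Bi-type mode of the linearization decays in this direction.
sol = solve_ivp(painleve_ii, (s0, -2.0), [kappa * ai0, kappa * aip0],
                dense_output=True, rtol=1e-10, atol=1e-12)

for s in (6.0, 4.0, 2.0, 0.0):          # u stays close to kappa*Ai(s) while small
    print(f"s={s:4.1f}  u={sol.sol(s)[0]: .6e}  kappa*Ai={kappa * airy(s)[0]: .6e}")
```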
For each \(n\), the restriction of \(M^{P}(k,s)\) to \(\arg k\in\left(\frac{\pi(2n-3)}{6},\frac{\pi(2n-1)}{6}\right)\) admits an analytic continuation to \((\mathbb{C}\setminus\mathcal{S}_{\mathcal{C}})\times\mathbb{C}\), and there are smooth functions \(\{M^{P}_{j}(s)\}_{j=1}^{\infty}\) of \(s\in\mathbb{C}\setminus\mathcal{S}_{\mathcal{C}}\) such that, for each integer \(N\geq 0\), \[M^{P}(k)=I+\sum_{j=1}^{N}\frac{M^{P}_{j}(s)}{k^{j}}+\mathcal{O}(k^{-N-1}),\quad k\to\infty,\] (A.3) uniformly for \(s\) in compact subsets of \(\mathbb{C}\setminus\mathcal{S}_{\mathcal{C}}\) and for \(\arg k\in[0,2\pi]\). Moreover, \[\big{(}M^{P}_{1}(s)\big{)}_{12}=\big{(}M^{P}_{1}(s)\big{)}_{21}=\frac{1}{2}u(s),\] (A.4) where the function \(u(s)\) solves the Painleve II equation. The map \((c_{1},c_{2},c_{3})\mapsto u(\cdot;c_{1},c_{2},c_{3})\) is a bijection \[\{(c_{1},c_{2},c_{3})\in\mathbb{C}^{3}\,|\,c_{1}-c_{2}+c_{3}+c_{1}c_{2}c_{3}=0\}\to\{\text{solutions of (A.1)}\},\] (A.5) and \(\mathcal{S}_{\mathcal{C}}\) is the set of poles of \(u(\cdot;c_{1},c_{2},c_{3})\). Moreover, if \(\mathcal{C}=(c_{1},0,-c_{1})\) where \(c_{1}\in i\mathbb{R}\) with \(|c_{1}|<1\), then the leading coefficient \(M^{P}_{1}\) is given by \[M^{P}_{1}(s)=\frac{1}{2}\begin{pmatrix}-i\int_{s}^{\infty}u(\eta)^{2}d\eta&u(s)&0\\ u(s)&i\int_{s}^{\infty}u(\eta)^{2}d\eta&0\\ 0&0&0\end{pmatrix}.\] (A.6)

Figure A.1: The jump contour \(\Sigma^{P}\).

For each \(C_{1}>0\), \[\sup_{k\in\mathbb{C}\setminus\Sigma^{P}}\sup_{s\geq-C_{1}}|M^{P}(k)|<\infty.\] (A.7)

The solution \(u(s)\) of the Painleve II equation is specified by its asymptotics as \(s\to+\infty\), \[u(s)\sim-\operatorname{Im}c_{1}\,\text{Ai}(s)\sim-\frac{\operatorname{Im}c_{1}}{2\sqrt{\pi}}\,s^{-\frac{1}{4}}e^{-\frac{2}{3}s^{3/2}},\] (A.8) where \(\text{Ai}(s)\) denotes the Airy function.

## Appendix B Model RH Problem for the Transition Zone

Let \(\tilde{\Sigma}^{P}=\tilde{\Sigma}^{P}(k_{0})\) denote the contour \(\tilde{\Sigma}^{P}=\cup_{j=1}^{5}\tilde{\Sigma}_{j}^{P}\), as depicted in Figure B.1, where \[\tilde{\Sigma}_{1}^{P}=\{k\,|\,k=k_{0}+re^{\frac{\pi i}{6}},\ 0\leq r<\infty\},\quad\tilde{\Sigma}_{2}^{P}=\{k\,|\,k=-k_{0}+re^{\frac{5\pi i}{6}},\ 0\leq r<\infty\},\] \[\tilde{\Sigma}_{3}^{P}=\{k\,|\,k\in\overline{\Sigma_{2}^{L}}\},\quad\tilde{\Sigma}_{4}^{P}=\{k\,|\,k\in\overline{\Sigma_{1}^{L}}\},\quad\tilde{\Sigma}_{5}^{P}=\{k\,|\,-k_{0}\leq k\leq k_{0}\}.\] The model RH problem for the transition zone is given as follows:

**RH problem B.1**.: _Find \(N^{P}(k)=N^{P}(k,s,c_{1},k_{0})\) with the following properties:_

* _Analyticity:_ \(N^{P}(k)\) _is analytical in_ \(\mathbb{C}\setminus\tilde{\Sigma}^{P}\)_._
* _Jump condition:_ \[N^{P}_{+}(k)=N^{P}_{-}(k)V^{P}(k),\quad k\in\tilde{\Sigma}^{P},\] _where_ \[V^{P}(k)=\begin{cases}\begin{pmatrix}1&0&0\\ c_{1}e^{2i(\frac{4k^{3}}{3}+sk)}&1&0\\ 0&0&1\end{pmatrix},&k\in\tilde{\Sigma}_{1}^{P}\cup\tilde{\Sigma}_{2}^{P},\\[2mm] \begin{pmatrix}1&-\bar{c}_{1}e^{-2i(\frac{4k^{3}}{3}+sk)}&0\\ 0&1&0\\ 0&0&1\end{pmatrix},&k\in\tilde{\Sigma}_{3}^{P}\cup\tilde{\Sigma}_{4}^{P},\\[2mm] \begin{pmatrix}1-|c_{1}|^{2}&-\bar{c}_{1}e^{-2i(\frac{4k^{3}}{3}+sk)}&0\\ c_{1}e^{2i(\frac{4k^{3}}{3}+sk)}&1&0\\ 0&0&1\end{pmatrix},&k\in\tilde{\Sigma}_{5}^{P}.\end{cases}\] (B.1)
* _Asymptotic behavior:_ \(N^{P}(k)=I+\mathcal{O}(k^{-1}),\quad k\to\infty\).

Define the parameter subset \(\mathcal{P}_{T}\) of \(\mathbb{R}^{3}\) by \[\mathcal{P}_{T}=\{(s,t,k_{0})\in\mathbb{R}^{3}\,|\,-C_{1}\leq s\leq 0,\ t\geq T,\ \sqrt{|s|}/2\leq k_{0}\leq C_{2}\},\] (B.2) where \(C_{1},C_{2}>0\) are constants. Then there exists a \(T\geq 1\) such that the above RH problem has a unique solution,
and, for each integer \(N\geq 1\), \[N^{P}(k)=I+\sum_{j=1}^{N}\frac{N_{j}^{P}(s)}{k^{j}}+\mathcal{O}(k^{-N-1}),\quad k\to\infty,\] (B.3) uniformly with respect to \(\arg k\in[0,2\pi]\) and \((s,t,k_{0})\in\mathcal{P}_{T}\) as \(k\to\infty\), where the \(\{N_{j}^{P}(s)\}_{j=1}^{N}\) are smooth functions of \(s\), as in (A.3).

Proof.: Let \(u(s;c_{1},0,-c_{1})\) denote the smooth real-valued solution of (A.1) corresponding to \((c_{1},0,-c_{1})\), and let \(M^{P}(k)=M^{P}(k,s;c_{1},0,-c_{1})\) be the corresponding solution of RH problem A.1. Denote the open subsets \(\{V_{j}\}_{j=1}^{4}\), as shown in Figure B.2. To match with the modified Painleve II RH problem, we make the transformation \(N(k)=N(k,s,c_{1},k_{0})\) defined by \[N(k)=M^{P}(k)\times\begin{cases}\begin{pmatrix}1&0&0\\ c_{1}e^{2i(\frac{4k^{3}}{3}+sk)}&1&0\\ 0&0&1\end{pmatrix},&k\in V_{1}\cup V_{2},\\[2mm] \begin{pmatrix}1&\bar{c}_{1}e^{-2i(\frac{4k^{3}}{3}+sk)}&0\\ 0&1&0\\ 0&0&1\end{pmatrix},&k\in V_{3}\cup V_{4}.\end{cases}\] Then \(N(k)\) satisfies the above RH problem B.1.

**Acknowledgements** This work is supported by the National Science Foundation of China (Grant No. 11671095, 51879045).
2308.14516
Prediction of Tourism Flow with Sparse Geolocation Data
Modern tourism in the 21st century is facing numerous challenges. Among these the rapidly growing number of tourists visiting space-limited regions like historical cities, museums and bottlenecks such as bridges is one of the biggest. In this context, a proper and accurate prediction of tourism volume and tourism flow within a certain area is important and critical for visitor management tasks such as sustainable treatment of the environment and prevention of overcrowding. Static flow control methods like conventional low-level controllers or limiting access to overcrowded venues could not solve the problem yet. In this paper, we empirically evaluate the performance of state-of-the-art deep-learning methods such as RNNs, GNNs, and Transformers as well as the classic statistical ARIMA method. Granular limited data supplied by a tourism region is extended by exogenous data such as geolocation trajectories of individual tourists, weather and holidays. In the field of visitor flow prediction with sparse data, we are thereby capable of increasing the accuracy of our predictions, incorporating modern input feature handling as well as mapping geolocation data on top of discrete POI data.
Julian Lemmel, Zahra Babaiee, Marvin Kleinlehner, Ivan Majic, Philipp Neubauer, Johannes Scholz, Radu Grosu, Sophie A. Neubauer
2023-08-28T12:03:03Z
http://arxiv.org/abs/2308.14516v1
# Prediction of Tourism Flow with Sparse Geolocation Data

###### Abstract

Modern tourism in the 21st century is facing numerous challenges. Among these the rapidly growing number of tourists visiting space-limited regions like historical cities, museums and bottlenecks such as bridges is one of the biggest. In this context, a proper and accurate prediction of tourism volume and tourism flow within a certain area is important and critical for visitor management tasks such as sustainable treatment of the environment and prevention of overcrowding. Static flow control methods like conventional low-level controllers or limiting access to overcrowded venues could not solve the problem yet. In this paper, we empirically evaluate the performance of state-of-the-art deep-learning methods such as RNNs, GNNs, and Transformers as well as the classic statistical ARIMA method. Granular limited data supplied by a tourism region is extended by exogenous data such as geolocation trajectories of individual tourists, weather and holidays. In the field of visitor flow prediction with sparse data, we are thereby capable of increasing the accuracy of our predictions, incorporating modern input feature handling as well as mapping geolocation data on top of discrete POI data.

Keywords: Tourism, Time series forecasting, Sustainable tourism, Sparse geolocation data

## 1 Introduction

With increasing population and travel capacities (e.g. easy access to international flights), cultural tourism destinations have seen a rise in visitors. In addition, recent needs for social distancing and attendance limitations due to the global COVID-19 pandemic have confronted tourism destinations with significant challenges, e.g. in creating and establishing sustainable treatment of both the urbanised and the natural environment, or in preventing overcrowded waiting lines. The perception of tourists regarding health hazards, safety and unpleasant tourism experiences may be influenced by social distance and better physical separation [24]. Based on the United Nations' 2030 Agenda for Sustainable Development [25], tourism is obligated to contribute to several Sustainable Development Goals, including sustainable cities, responsible consumption, and economic growth. Sustainable tourism can achieve this by understanding and controlling visitor flows, preserving natural landmarks, reducing emissions and waste, establishing sustainable energy consumption, creating harmony between residents and tourists, and maximizing tourist satisfaction for economic prosperity.

Insufficient data availability in real-world problems is caused by factors such as compliance issues and a lack of data collection and transfer. Nonpersonal data from POIs, tourist facilities, and anonymized digital device data are used in research, but location data collected by mobile apps is controversial due to profit-oriented collection practices. It is important to consider whether people are aware of what they are sharing when using these services, even if the datasets do not contain direct personal data. The question of how to improve awareness of the data shared by such apps or services is not answered in this research. This work focuses on what is possible to achieve in the given environment, considering the given data and data history, with regard to tourist flow prediction, since sparse data is a widespread, generic problem. The first step towards controlling tourist flows is to predict authentic movement and behavior patterns.
However, since the tourist visitor flow is affected by many factors such as the weather, cultural events, holidays, and regional traffic and hotspots throughout a specific day, it is a very challenging task to accurately predict the future flow [15]. Due to the availability of large datasets and computational resources, deep neural networks have become the state-of-the-art methods for forecasting time-series data [20], including tourism flow applications [21]. In this work, we focus on tourist flow prediction based on a local dataset from the visitors of the tourist attractions of the city of Salzburg as well as third-party geolocation data of individual tourists. After data preprocessing and dataset preparation, we compare the performance of different deep-learning-based methods for time-series prediction with ARIMA, a traditional statistics-based method. According to Li and Cao [13], ARIMA is the most popular classical time-series forecasting method based on exponential smoothing, and it was made popular in the 1970s when Ahmed and Cook [1] proposed its use for short-term freeway traffic predictions. We summarize the specific contributions of our paper as follows:

* We perform a comprehensive comparison of DL and ARIMA, a traditional technique, on a real-world dataset to reveal the shortcomings and point out necessary future improvements.
* Per point-of-interest (POI), we perform granular predictions on an hourly basis, which is critical for the task of tourism flow control.
* We further evaluate modern DL techniques such as Transformers and GNNs.
* To the best of our knowledge, we are the first to apply a wide range of DL models to tourist flow prediction.

## 2 Related Work

Considering the importance of predicting tourist flows in a growing industry, visitor forecasting has gained attention in recent years. Recurrent Neural Networks are used to forecast tourist demand, e.g. LSTMs in conjunction with deep neural networks or hidden Markov Models [22, 13]. Only a limited set of models is used in most of these studies to make predictions. Another important aspect of tourism data is its granularity. Several studies focus on long-term estimates of monthly, quarterly, yearly, or in the best case daily, numbers of tourists in large regions as a measure of city- or country-level tourism demand [2]. For tourism flow control, it is vital to perform granular predictions on an hourly basis and per POI.

**DL-based models.** Time-series data prediction is typically handled by recurrent neural networks (RNNs). With RNNs, neural networks gain memory, allowing them to forecast sequence-based data. Gated RNNs such as LSTM [10] and GRU [5] are able to produce good performance. RNNs have limitations when faced with irregularly sampled time series, such as those encountered in tourist flow forecasting. In order to overcome this limitation, phased-LSTM [18] adds a time gate to the LSTM cells. GRU-D [4] incorporates time intervals via a trainable decay mechanism to deal with missing data and long-term dependencies in time series. Instead of discrete-time models, continuous-time models with a latent state defined at all times can also be used, such as CT-RNN [8], CT-LSTM [16], and CT-GRU [17], as well as NeuralODEs [7], which define the hidden state of the network as the solution to an ordinary differential equation. Augmented-NeuralODEs [7] can alleviate some limitations of NeuralODEs, such as non-intersecting trajectories, by using augmentation strategies.
These continuous-time models have favorable properties, such as adaptive computation and training with constant memory cost. GoTube [9] can be used to statistically verify them by constructing stochastic reach tubes of continuous-time systems. On the other hand, transformer-based models [26] have been successful in various applications due to their powerful capability for sequence learning and representation. They have also been explored in time-series forecasting tasks for datasets with long sequences and extensive historical information. The multi-head self-attention mechanism is the primary component of transformer models and can extract correlations in long sequences. However, the permutation-invariant nature of self-attention requires positional encodings to prevent the loss of temporal dependencies. Graph Neural Networks (GNNs) are an interesting new class of deep-learning algorithms that allow the inputs to be structured as graphs. Most GNN models build on the notion of graph convolutions, which can be seen as a generalization of Convolutional Neural Networks to graph-structured data, as opposed to data arranged in a grid. An even more fascinating type of DL model are temporal GNNs, which combine graph convolutions with RNNs. Such temporal GNN models are most prominent in traffic flow prediction applications [29].

**Traditional techniques.** For time-series forecasting with traditional techniques, we use the Autoregressive Integrated Moving Average (ARIMA) model. ARIMA has been used in recent studies as a baseline for the evaluation of novel deep-learning-based models [28] and is thus selected as a baseline model for this paper as well.

## 3 Data

Several different data sources were combined to enable the use of their different features in the training of the models and the prediction of future visitor counts. The first dataset we used stems from the "Salzburg Card", which was kindly provided to us by TSG Tourismus Salzburg GmbH. Upon purchase of such a card, the owner has the ability to enter 32 different tourist attractions and museums included in the portfolio of the Salzburg Card. The dataset consists of the time-stamps of the entries to each POI. Additionally, we used data about weather and holidays in Austria. We utilized mobile phone location data from a third-party service to improve the tourist flow predictions in Salzburg. The dataset covers around 3% of tourists and provides information on the number of tourists moving between points of interest. However, the data is sparse and lacks a distinct recording frequency. To further improve our predictions, we incorporated a street graph obtained from OpenStreetMap using the osmnx Python package. The resulting graph contains 2064 nodes and 5359 edges, with edge values corresponding to the lengths of the street segments. We then mapped the location data onto the graph by assigning each location to the nearest node and aggregating the total number of people per hour; a sketch of this mapping step is given at the end of this section.

**COVID-19.** Tourism around the globe saw huge drops during the global COVID-19 pandemic. Starting in March 2020, Austria took preemptive measures to prevent the spread of the virus. These travel restrictions and closures of public spaces, hotels, and restaurants severely reduced the number of tourists in and around the city of Salzburg. As a consequence, prediction accuracy can be diminished when using models that have been trained on pre-COVID data.
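The following lines sketch this mapping step. They are an illustrative reconstruction rather than our production code; `pings` stands for a hypothetical pandas DataFrame of raw location records with columns `lon`, `lat`, and a datetime column `timestamp`, and the place-query string is an assumption.

```python
# Illustrative sketch of the graph construction and ping-to-node mapping.
import osmnx as ox

# Street network of Salzburg (the exact query string is an assumption).
G = ox.graph_from_place("Salzburg, Austria", network_type="walk")

# Snap every geolocation ping to its nearest street-graph node ...
pings["node"] = ox.distance.nearest_nodes(
    G, X=pings["lon"].values, Y=pings["lat"].values
)

# ... and aggregate the number of observed people per node and hour.
counts = (
    pings.groupby([pings["timestamp"].dt.floor("H"), "node"])
         .size()
         .rename("n_people")
)
```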
## 4 Methods

For this work, we built our own dataset from hourly data collected at tourist attractions and then extended it with geolocation data. Including many different data sources is a key challenge for this real-world prediction task. Sparse geolocation data is therefore fed into our GNN model as features. With this approach, we aim to create models that can easily integrate new data sources that might become available in the future. We then perform predictions with a rich set of models and carry out a comprehensive comparison of the results. In this section, we first introduce the dataset we used for the experiments. Then we go over the methods we chose to evaluate and compare their performance.

### Deep-Learning models

We use a large set of RNN variations on the tourist-flow dataset to perform a comprehensive comparison of the state-of-the-art models and provide insight into their performance. The set comprises vanilla-RNN, LSTM, phased-LSTM, GRU-D, CT-RNN, CT-LSTM and NeuralODE networks. Moreover, we used a Transformer model, using only the encoder part with 8 heads, 64 hidden units, and 3 layers, to forecast the tourist flow. Finally, we applied a naive continuous-time temporal GNN approach based on CT-RNNs to our prediction problem in order to utilize the geolocation data of individual tourists. All of the neural networks were trained with Backpropagation-Through-Time and the Adam optimizer [12], using the parameters given in the Appendix.

In order to incorporate the inductive bias stemming from the street layout of Salzburg, we used a simplified CT-RNN-based GNN model that we call the Continuous-Time Recurrent Graph Network (**CT-GRN**) in the following. It consists of one neuron per node in the street graph and exhibits the same connectivity. This is achieved by point-wise multiplying the recurrent kernel with the graph's normalized adjacency matrix, whose entries are the inverses of the corresponding street segment lengths:

\[y_{t+1}=y_{t}-\tau y_{t}+a\odot\tanh((W_{rec}\odot\hat{A})y_{t}+W_{in}x_{t}+b)\]

where \(y_{t}\) is the network's state at time \(t\), \(x_{t}\) is the exogenous input, \(\tau\), \(a\), \(W_{rec}\), \(W_{in}\) and \(b\) are trainable parameters, and \(\hat{A}=D^{-1}A\) is the normalized adjacency matrix. The resulting model inherits all the favorable ODE properties of CT-RNNs, such as the ability to be evaluated at arbitrary points in (continuous) time and differentiable dynamics that can be used in verification. Finally, we used a variation of the _Teacher Forcing_ [27] technique, which basically amounts to resetting the nodes of the network to the target value after each step. Our _Mixed Teacher Forcing_ version forces the hidden state of the POI nodes to the true value and adds up the predicted and true values for the other nodes.

### Traditional methods

In this study, we used a non-seasonal ARIMA model (_ARIMA(p,d,q)_) that ignores seasonal patterns in a time series, where \(p\) is the number of autoregressive terms, \(d\) is the number of non-seasonal differences, and \(q\) is the number of lagged forecast errors in the prediction equation [3]. We utilized the _auto.arima_ function from the R _forecast_ library to automatically determine the best values of \(p\), \(d\), and \(q\) for each of the 32 POIs. The ARIMA model was then fitted individually to each POI's training dataset using the _pmdarima_ library in Python.
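A rough sketch of this per-POI procedure, including the rolling one-step updates described next, could look as follows; `train` and `test` are placeholder arrays holding a single POI's hourly visitor counts, not our actual data loaders.

```python
# Sketch of the per-POI ARIMA workflow with rolling one-step updates.
import pmdarima as pm

# Automatic selection of (p, d, q) on the training series, as with auto.arima.
model = pm.auto_arima(train, seasonal=False, suppress_warnings=True)

predictions = []
for y_true in test:
    y_hat = model.predict(n_periods=1)[0]   # one-hour-ahead forecast
    predictions.append(y_hat)
    model.update([y_true])                  # feed the observed value back in
```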
Each time the number of visitors is predicted for the next hour in the test data, the true value (i.e., the number of visitors) for that hour is added to update the existing ARIMA model and make it aware of all previous values before making the next prediction. This approach prioritized prediction accuracy over time complexity.

### Preprocessing

We used the Salzburg Card data from the years 2017, 2018, and 2019 for our first set of experiments. In order to create the time-series data, we accumulated the hourly entries to each location. The data then consists of the hour of the day and the number of entries at that hour to each of the 32 POIs. For the DL models, we added additional features to the dataset: year, month, day of month, day of week, holidays, and weather data. For the holiday data, we used the national holidays and school holidays and count the days to the next school day. For the weather data, we used the hourly weather data with these features: temperature, feels-like temperature, wind speed, precipitation, and clouds, as well as a one-hot-encoded single-word description of the weather (e.g. "Snow"). We performed further pre-processing by normalizing all features to values between 0 and 1. To account for seasons, we applied a sine-cosine transformation to the month; intuitively, since it is a circular feature, we do not want the values for December and January to be far apart. Finally, we split the data into sequences of length 30 and used the data from the years 2017 and 2018 as the training set and 2019 as the test set.

**Graph Neural Networks.** For the GNN, we used the OSM graphs as illustrated in Section 3. Our dataset of tourist locations was very sparse, which subsequently resulted in very sparse inputs for each node. Since we are trying to predict the numbers of entries at the POIs, we added them as additional nodes to the graph, connecting each of them to up to 5 of the nearest nodes present in the graph within a maximum distance of 80 m. Finally, the global features such as weather and holidays are added to the graph by a linear mapping from features to nodes. This way, we obtained a series of graphs where each sample constitutes the OSM graph, with the edge values corresponding to the distances and the node values corresponding to the aggregated number of people near this location / the POI entries. One sample is visualized in Figure 1. For inference, we predicted the whole graph and discarded the nodes that do not represent POIs.

Figure 1: One sample of the series of OSM graphs of the Salzburg city center obtained from preprocessing. Encircled nodes are the special POI nodes. Color-coded are the normalized aggregated entry and tracking data, where most of the nodes indicate zero (pale).

## 5 Main Results

### Forecasting visitor numbers

We performed a diverse set of experiments with ARIMA and the DL models to evaluate and compare their forecasting accuracy, training time, and prediction time. Table 1 shows the Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) achieved by each method applied to the timeframe from 2017 to 2019, i.e., before COVID. In order to find the optimal model size and loss function, and to decide whether to use normalized visitor counts, we did a grid search, conducting three runs per configuration and keeping the one which achieved the lowest average RMSE. As a baseline, we include the naive approach of using the last true value as the prediction at each step, i.e. \(\hat{y}_{t}=y_{t-1}\).
The table includes the model size, the number of parameters, and the training and prediction times for the best run of each deep-learning model. We excluded the non-normalized models, since normalized visitor counts consistently led to better results. MAE was the best loss function for all models except ANODE, which performed better with the Huber loss. The phased LSTM achieved comparable results with the fewest parameters. Our DL models outperformed ARIMA in both metrics, with and without additional features. Adding more features did not significantly improve performance, suggesting that it may lead to over-fitting. We report results with and without additional features for the DL models to ensure fairness in the comparison with ARIMA, which cannot use external features. Additionally, ARIMA struggles with short sequences, while the DL models can handle them when trained on the full dataset.

\begin{table}
\begin{tabular}{|l|c|c c|c c|c c|} \hline
 & \# Cells / & \multicolumn{2}{c|}{Time} & \multicolumn{2}{c|}{only visitors} & \multicolumn{2}{c|}{external features} \\
**Model** & \# Parameters & Train (min) & Pred (ms) & MAE & RMSE & MAE & RMSE \\ \hline
ARIMA & **224** & - & 69k & 5.217 & 7.833 & - & - \\ \hline
ANODE & 64 / 21.3k & 145.6 & 3.01 & 4.599 & 6.965 & 4.410 & 6.663 \\
Vanilla RNN & 128 / 43.7k & 5.9 & **0.18** & 3.958 & 6.321 & 3.802 & 6.160 \\
LSTM & 32 / 11.9k & **1.5** & 0.24 & 3.713 & 6.209 & 3.630 & 6.113 \\
Phased LSTM & 32 / **11.8k** & 27.0 & 0.46 & 3.825 & 6.359 & 3.651 & 6.120 \\
CT-LSTM & 32 / 19.9k & 18.1 & 0.31 & 3.734 & 6.239 & 3.700 & 6.185 \\
CT-RNN & 128 / 27.4k & 57.1 & 0.60 & 3.694 & 6.131 & 3.629 & **5.983** \\
GRU-D & 64 / 27.7k & 16.6 & 0.33 & **3.638** & **6.121** & **3.621** & 6.073 \\ \hline
Naive & - & - & - & 6.466 & 9.483 & - & - \\ \hline
\end{tabular}
\end{table}
Table 1: Averaged prediction errors.

In Table 1, we also compare the training and prediction times of ARIMA and the DL models. ARIMA took 69 s to perform a single prediction for all POIs, while the DL models took fractions of a millisecond, with the trade-off of having longer training times. ARIMA does not have a dedicated training step, and its calculations are time-consuming, since it makes predictions for each POI separately. In contrast, the DL models are trained with the visitors to all POIs in a single vector and make predictions for all of them at the same time. This allows the DL models to leverage implicit data about the total number of visitors in the city, which ARIMA loses.

In order to visually explore the predictions made by the models, we plotted the predictions and the ground truth for a few selected time windows (see Figure 2). We plot the predictions made by the DL models (including the external features) with the best MAE and RMSE, which were GRU-D and CT-RNN, respectively. The predictions made by the DL models with the visitors-only data were only slightly worse, which is why we omit these evaluations in the plots.

Figure 2: Predicted and true visitor counts for the Funicular Railway (top), Mozart's Birthplace Museum (mid) and the Festival Hall (bottom). Predictions are computed using CT-RNN (orange), GRU-D (green) and ARIMA (red).

Our plots show that although ARIMA is outperformed by the DL methods in the average error over all predictions, there are cases where it actually performs better than the other models. The top plot shows the forecast and real values for the tourists who entered the Funicular Railway descent, the cable car ride leading up to Salzburg Castle.
As visible in the plot, the DL models show better performance, especially in the valleys, where ARIMA fails to predict the downturns accurately. The middle plot shows visitor predictions for Mozart's Birthplace Museum around the time of New Year's Eve. The reduced number of visitors on the 1st and 2nd of January is overestimated by all our models. Finally, the bottom plot shows the predictions for the Festival Hall (Festspielhaus) guided tour, whose data is sparse, since the tour takes place once a day at 2 pm. All models fail to predict the second and third peaks at this location. However, CT-RNN shows a very good performance in predicting the first and last peaks and at least shows an upward trend for the second and third peaks. ARIMA cannot handle this type of sparse data at all.

### Including geolocation data

We conducted a second set of experiments on the timeframe from 2019 to 2021 that includes the geolocation data of individual (anonymized) tourists. The results are presented in Table 2, which shows for each model the MAE when using only the visitor counts, when adding external features, and when additionally including the geolocation data. This time we included the Transformer and GNN models but excluded ARIMA for reasons of computation time. Since the _Salzburg Card dataset_ for this particular timeframe contains a significantly lower number of datapoints due to the lockdowns enforced by the government, the numbers must not be compared directly to the results discussed in the last section. This time, the naive approach outlined above led to surprisingly good results, and only the Transformers with exogenous features were able to surpass it. Transformers can handle multi-variate data well thanks to the multi-head self-attention mechanism, which enables them to extract hidden correlations in the input and hence achieve a better loss when using additional features. However, they require considerably more parameters in comparison to the RNN models.

\begin{table}
\begin{tabular}{|l|c|c|c|} \hline
 & \multicolumn{3}{c|}{MAE} \\
**Model** & only visitors & + features & + geolocation \\ \hline
Vanilla RNN & 2.48 & 2.42 & 3.37 \\
LSTM & 2.66 & 2.58 & 3.54 \\
Phased LSTM & 2.44 & 2.44 & 3.05 \\
CT-LSTM & 2.61 & 2.57 & 3.32 \\
CT-RNN & 2.50 & 2.45 & 3.16 \\
ANODE & 2.63 & 2.57 & 3.64 \\
GRU-D & 2.99 & 2.87 & 3.58 \\
Transformer & 2.19 & **2.04** & 2.65 \\ \hline
Naive & 2.187 & - & - \\ \hline
CT-GRN & - & - & **2.63** \\ \hline
\end{tabular}
\end{table}
Table 2: Average prediction results for 2021 after training on data from 2019 & 2020.

For the GNN, we only conducted experiments with the additional geolocation data, since the input graph attributes would be even sparser without it, defeating the point of using a graph-based approach. Without using visitors and features as input, the CT-GRN scored a prediction error only slightly worse than those of the other models. However, all other methods scored worse when trained on the sparse geolocation data, which shows the usefulness of the GNN approach. Our GNN approach enables us to incorporate the sparse geolocation data into our model. Since more sparse geolocation data is expected to be processed in real-life scenarios, this is the only approach that fits these needs.

## 6 Conclusions and future work

Our study demonstrated the effectiveness of DL models for tourist flow time-series forecasting, particularly when external features are included. The DL models outperformed the traditional ARIMA method and were faster in terms of prediction time.
We also showed that GNNs are more suitable for incorporating spatial structure using sparse geolocation data. Moving forward, there are several directions for future research. One possibility is to investigate methods to further improve the performance of the DL models, such as regularization or learning-rate scheduling. Another option is to explore the use of Vector Auto-Regression (VAR) to address the univariate limitation of ARIMA. Finally, we plan to develop specialized models that can outperform existing state-of-the-art models in short-term prediction, with the ultimate goal of helping tourism stakeholders make informed decisions and promote sustainable tourism practices.

## Acknowledgements

This work is supported by the Austrian Research Promotion Agency (FFG), project grant No. FO999887513. SG is partially funded by the Austrian Science Fund (FWF), project number W1255-N23. Map data copyrighted OpenStreetMap contributors and available from [https://www.openstreetmap.org](https://www.openstreetmap.org).
# Lattice path matroidal subdivisions, positive tropical Grassmannian and amplituhedron

Ayush Kumar Tewari, Ahmed Umer Ashraf

###### Abstract.

We introduce the notion of lattice path matroidal subdivisions, or LPM subdivisions for short, and show that these subdivisions are regular and hence the weight vectors for them lie in the Dressian. This leads us to explore the structure of the set of these weights inside the Dressian and, owing to the fact that lattice path matroids are positroids, to move to the positive Dressian, which in turn is equal to the positive tropical Grassmannian, an object of immense current interest in physics. This is related to the amplituhedron and the positive configuration space, which we describe here and whose connections we wish to explore further.

Key words and phrases: Lattice path matroidal subdivision, LPMfan, LPM polytope decomposition.

2020 Mathematics Subject Classification: 52B40, 14T15, 81U99.

We would like to thank Michael Joswig and Luis Ferroni for going through earlier drafts of this article and for providing valuable suggestions and comments. We are also thankful to David Speyer for providing his comments and for pointing us to Luis Ferroni and his work on lattice path matroids.

## 1. Introduction

Lattice path matroids (LPM)1 were introduced by Bonin et al. in [10], and matroidal properties, including the Tutte polynomial, were derived for them. Subsequently, it was proven that they are positroids [40], and they also enjoy multiple connections with the positive Grassmannian. Lattice paths in themselves are ubiquitous in various topics within mathematics, for example in combinatorics, representation theory, etc. In our work, we see this feature helping us connect our study not only to various topics in mathematics but also to a recently defined concept in physics, the _amplituhedron_ [5], which is a geometric object encoding information concerning the scattering amplitudes of particles.

Footnote 1: We use this abbreviation for lattice path matroid and lattice path matroidal depending on the context

We begin with the introduction of _lattice path matroidal subdivisions_, which are matroidal subdivisions in which each maximal cell corresponds to a lattice path matroid polytope. The idea for this class of subdivisions comes from the lattice path matroid polytope decompositions [15], which form a subclass of matroid base polytope decompositions, studied in detail in [15, 16]. Lattice path matroidal decompositions enjoy a unique property: they are obtained in an iterative way via simple decompositions into two LPMs, termed _hyperplane splits_. We harness this property to relate them to the well-known class of _split subdivisions_. This relation eventually helps us in proving one of our first results.

**Theorem 1**.: _Any LPM subdivision of a lattice path matroid polytope \(\mathrm{P}_{\mathsf{M}[P,Q]}\) is regular._

Not only are we able to establish regularity for LPM subdivisions, but we also show that they are obtained as common refinements of split subdivisions, which lends much more structure to these subdivisions. We introduce the notion of the LPMfan as the polyhedral fan that corresponds to LPM subdivisions. We discuss the relation of the LPMfan to various well-known polyhedral fan structures which correspond to regular matroidal subdivisions, namely the _tropical Grassmannian_ and the _Dressian_. Since LPMs are positroids as well, this discussion can also be connected to the positive part of the tropical Grassmannian and the Dressian.
We furnish computational examples of both LPM subdivisions and LPMfans for the underlying hypersimplex \(\Delta(k,n)\), which is an LPM polytope, for \(k=3,4\) and \(n=6,8\), respectively. Postnikov [43, 44] led the study of the stratification of the positive Grassmannian into cells that enjoy equivalences with various combinatorial objects, like _decorated permutations_, _reduced plabic graphs_, etc. We also put our results into perspective by discussing how our LPM subdivisions correspond to these combinatorial objects. This also helps us in drawing the connections to the geometric object called the _amplituhedron_, first introduced by Arkani-Hamed et al. [5] to study problems concerning scattering amplitudes in high-energy physics. We point the reader to [3] for an exploration of the connections between scattering amplitudes in physics and the geometry of the Grassmannian in full detail. Our discussion mostly revolves around the connections between the positive Grassmannian, the positive tropical Grassmannian, and the amplituhedron. Firstly, for the \(m=2\) amplituhedron, we provide a purely matroidal treatment of the definition of BCFW2-style recurrence relations for positroid dissections of the hypersimplex in the form of Theorem 51. These positroidal dissections were introduced in [50], where it is shown that via _T-duality_ they are also related to certain dissections of the \(m=2\) amplituhedron \(\mathcal{A}_{n,k,2}\). Secondly, for the \(m=4\) amplituhedron, it is shown in [33] that BCFW cells of the amplituhedron correspond to _noncrossing_ lattice paths of a certain lattice rectangle. Additionally, a recent work [18] shows that BCFW cells provide a triangulation of the amplituhedron \(\mathcal{A}_{n,k,4}\). In light of these results, we prove the following result, which is the first result highlighting the relation between the BCFW triangulation of \(\mathcal{A}_{n,k,4}\) and a positroidal dissection of a certain hypersimplex.

Footnote 2: the abbreviation is after the names of the physicists Britto, Cachazo, Feng, and Witten

**Theorem 2**.: _Each triangulation of the amplituhedron \(\mathcal{A}_{n,k,4}\) into \((k,n)\)-BCFW cells provides a positroid dissection \(\{\Gamma_{i}\}\) of the hypersimplex \(\Delta(k,n-4)\), where each BCFW cell corresponds to a lattice path matroid polytope \(\Gamma_{i}\)._

Lastly, [4] discusses the relation between positroidal cells of the positive Grassmannian and the positive configuration space, via the Chow quotient of the Grassmannian. We also encounter a special class of LPMs throughout our study, namely _snakes_, which are _minimal_, and we use this property to provide examples of _clusters_ for them, which implies intricate connections between LPMs and the underlying cluster algebra, which we wish to explore further in subsequent work. This minimality of snakes also helps us partially answer a question asked in [42]. We would like to make special mention of the various salient features which we encounter for _snakes_, stated as follows:

_Snakes are lattice path matroids, positroids, minimal, binary, indecomposable, series-parallel, graphical [34], order, alcoved 3_

Footnote 3: We do acknowledge that order and alcoved are properties satisfied by the matroid polytopes of snakes.

In Section 2 we introduce all the basic definitions which we will use in further discussions. Section 3 introduces the notion of LPM subdivisions, and Theorem 14 is proven here. Section 4 describes the relation between the positive tropical Grassmannian and LPM subdivisions.
Section 5 collects all our computational examples, which are mostly LPM subdivisions and LPMfans for the LPM polytopes \(\Delta(3,6)\) and \(\Delta(4,8)\). Section 6 introduces the notion of the amplituhedron and relates it in detail to our findings pertaining to LPMs. Finally, we discuss probable future problems and open questions in Section 7.

## 2. Preliminaries

We would like to guide readers unfamiliar with the concepts in this section to [43, 44] and [38] for further details. A _matroid_ of rank \(k\) on the set \([n]:=\{1,2,\ldots,n\}\) is a nonempty collection \(\mathsf{M}\subseteq\binom{[n]}{k}\) of \(k\)-element subsets of \([n]\), called _bases_ of \(\mathsf{M}\), that satisfies the exchange axiom: for any \(I,J\in\mathsf{M}\) and \(i\in I\), there exists \(j\in J\) such that \(I\setminus\{i\}\cup\{j\}\in\mathsf{M}\). A matroid is called _realizable_ if it can be represented by the columns of a matrix over some field \(\mathbb{K}\). A _positroid_ of rank \(k\) is a matroid that can be represented by a \(k\times n\) matrix with non-negative maximal minors. The _Grassmannian_ \(\operatorname{Gr}(k,n)\) parameterizes the family of all \(k\)-dimensional subspaces of the \(n\)-dimensional vector space \(\mathbb{K}^{n}\). It also carries the structure of a smooth projective variety, corresponding to the vanishing set of the _Plucker ideal_ \(\mathcal{I}_{k,n}\). An element in the Grassmannian \(\operatorname{Gr}(k,n)\) can be understood as a collection of \(n\) vectors \(v_{1},\ldots,v_{n}\in\mathbb{K}^{k}\) spanning the space \(\mathbb{K}^{k}\), modulo the simultaneous action of \(\operatorname{GL}_{k}\) on the vectors, where the vectors \(v_{i}\) are the columns of a \(k\times n\) matrix \(A\). Then an element \(V\in\operatorname{Gr}(k,n)\) represented by \(A\) gives the matroid \(\mathsf{M}_{V}\) whose bases are the \(k\)-subsets \(I\subset[n]\) such that \(\det_{I}(A)\neq 0\). Here, \(\det_{I}(A)\) denotes the determinant of \(A_{I}\), the \(k\times k\) submatrix of \(A\) with column set \(I\). An element \(V\in\operatorname{Gr}(k,n)\) is termed _totally non-negative_ if \(\det_{I}(V)\geq 0\) for all \(I\in\binom{[n]}{k}\). The set of all totally non-negative \(V\in\operatorname{Gr}(k,n)\) is the _totally non-negative Grassmannian_ \(\operatorname{Gr}^{\geq 0}(k,n)\); abusing notation, we refer to \(\operatorname{Gr}^{\geq 0}(k,n)\) as the _positive Grassmannian_ [50]. Tropical geometry is the study of polynomials over the tropical semiring \(\mathbb{T}=(\mathbb{R}\cup\{-\infty\},\max,+)\). Given \(e=(e_{1},\ldots,e_{N})\in\mathbb{Z}_{\geq 0}^{N}\), we let \(x^{e}\) denote \(x_{1}^{e_{1}}\cdots x_{N}^{e_{N}}\). For a polynomial \(f=\sum_{e\in E}a_{e}x^{e}\), we first associate a corresponding tropical polynomial, in which the binary operations are replaced by tropical addition and multiplication, respectively, and we denote by \(\operatorname{Trop}(f)\) the _tropical hypersurface_ associated to \(f\), which is the collection of all points where the maximum is achieved at least twice. Let \(E=E^{+}\cup E^{-}\subseteq\mathbb{Z}_{\geq 0}^{N}\), and let \(f\) be a nonzero polynomial with real coefficients such that \(f=\sum_{e\in E^{+}}a_{e}x^{e}-\sum_{e\in E^{-}}a_{e}x^{e}\), where all of the coefficients \(a_{e}\) are non-negative real numbers.
Then \(\operatorname{Trop}^{+}(f)\) denotes the _positive part_ of \(\operatorname{Trop}(f)\), i.e., the set of all points \((x_{1},\ldots,x_{N})\) such that, if we form the collection of numbers \(\sum_{i}e_{i}x_{i}\) for \(e\) ranging over \(E\), then the maximum of this collection is not unique and, furthermore, is achieved for some \(e\in E^{+}\) and some \(e\in E^{-}\) [50]. The _tropical Grassmannian_ \(\operatorname{TropGr}(k,n)\) is the intersection of the tropical hypersurfaces \(\operatorname{Trop}(f)\), where \(f\) ranges over all elements of the _Plucker ideal_ \(\mathcal{I}_{k,n}\), which is generated by the _quadratic Plucker relations_ [38]. The _Dressian_ \(\operatorname{Dr}(k,n)\) is the intersection of the tropical hypersurfaces \(\operatorname{Trop}(f)\), where \(f\) ranges over all three-term Plucker relations. Similarly, the _positive tropical Grassmannian_ \(\operatorname{Trop}^{+}\operatorname{Gr}(k,n)\) is the intersection of the positive tropical hypersurfaces \(\operatorname{Trop}^{+}(f)\), where \(f\) ranges over all elements of the Plucker ideal. The _positive Dressian_ \(\operatorname{Dr}^{+}(k,n)\) is the intersection of the positive tropical hypersurfaces \(\operatorname{Trop}^{+}(f)\), where \(f\) ranges over all three-term Plucker relations. The underlying matroid in the definitions of the tropical Grassmannian and the Dressian is the _uniform matroid_ \(\mathsf{U}_{k,n}\). However, the notion of the Dressian can be extended to arbitrary matroids via the definition of a _local Dressian_. The _local Dressian_ \(\operatorname{Dr}(\mathsf{M})\) is defined as the tropical prevariety given by the set of quadrics obtained from the three-term Plucker relations by setting the variables \(p_{B}\) to zero, where \(B\) is not a basis of \(\mathsf{M}\) [42]. A subdivision \(\Sigma\) of a polytope \(P\) in \(\mathbb{R}^{d}\) is said to be _regular_ if there exists a weight vector \(w\) such that, if the vertices of \(P\) are lifted to the heights provided by \(w\) in \(\mathbb{R}^{d+1}\) and subsequently the lower convex hull is projected back to \(\mathbb{R}^{d}\), then the subdivision \(\Sigma\) is retrieved. A tropical polynomial with Newton polytope \(P\) defines a _tropical hypersurface_ that is dual to a regular subdivision of \(P\). We point the reader to [38, Chapters 1 and 3] and [29, Chapter 1] for further details about this duality. We recall details about a special class of subdivisions that appears in our work. A _split_ subdivision is a subdivision with exactly two maximal cells [27]. Two splits \(S_{1}\) and \(S_{2}\) are said to be _compatible_ if the hyperplanes along the split edges do not intersect in the interior of the polytope. We now introduce definitions dealing with lattice path matroids. Let \(E\) be a set (which is going to be the ground set of the matroid), and let \(\mathcal{A}=(A_{j}:j\in J)\) be a set system over \(E\), that is, a multiset of subsets of \(E\). A _transversal_ of \(\mathcal{A}\) is a set \(\{x_{j}:j\in J\}\) of \(|J|\) distinct elements such that \(x_{j}\in A_{j}\) for all \(j\in J\). A _partial transversal_ of \(\mathcal{A}\) is a transversal of a set system of the form \((A_{k}:k\in K)\) with \(K\) a subset of \(J\). A _transversal matroid_ is a matroid whose independent sets are the partial transversals of some set system \(\mathcal{A}=(A_{j}:j\in J)\), and \(\mathcal{A}\) is called a _presentation_ of the transversal matroid. We denote this matroid by \(\mathsf{M}[\mathcal{A}]\).
The bases of a transversal matroid are the maximal partial transversals of \(\mathcal{A}\) [10]. We now recall the definition of a lattice path matroid as a certain kind of transversal matroid [10, Definition 3.1]. Consider an \(r\times(n-r)\) rectangular lattice grid \(\mathsf{U}_{r,n}\). This consists of all the lattice points \(\left\{(a,b):0\leq a\leq n-r,\ 0\leq b\leq r\right\}\) and all the edges between neighboring lattice points. This can also be thought of as a Young diagram [24] consisting of the \(r\cdot(n-r)\) unit squares of the partition \(\lambda=(\underbrace{n-r,n-r,\cdots,n-r}_{r})\). An NE-_path_ over \(\mathsf{U}_{r,n}\) is a path from the point \((0,0)\) to the point \((n-r,r)\), each of whose steps is either a step in the \((1,0)\) direction (i.e. an \(E\)-step) or a step in the \((0,1)\) direction (i.e. an \(N\)-step). Note that for each edge in \(\mathsf{U}_{r,n}\), its position in any NE-path is the same; hence we can denote it by this position. Using this observation, we can denote each NE-path by the sequence of its north steps.

**Definition 3**.: Let \(P\) and \(Q\) be two NE-paths on \(\mathsf{U}_{r,n}\), denoted by

\[P=p_{1}p_{2}\ldots p_{r}\]

\[Q=q_{1}q_{2}\ldots q_{r}\]

then the set of all NE-paths between \(P\) and \(Q\) forms a matroid. That is,

\[\mathsf{M}[P,Q]=\left\{\{i_{1},i_{2},\ldots,i_{r}\}:p_{j}\leq i_{j}\leq q_{j}\text{ for }j=1,\ldots,r\right\} \tag{1}\]

Sometimes we denote the matroid \(\mathsf{M}[P,Q]\) by just \(\mathsf{M}[J]\), where \(J\) is the skew Young diagram bounded by \(P\) and \(Q\). An example of a lattice path matroid is depicted in Figure 1, where the edges in the north direction are marked with their respective indices.

Figure 1. The lattice path in red depicts the path \(P\) and the lattice path in green depicts the path \(Q\) for the lattice path matroid \(\mathsf{M}[P,Q]\).

## 3. LPM Subdivisions

We use the following definition from [34].

**Definition 4**.: We call a lattice path matroid \(\mathsf{M}[P,Q]\) a _snake_ if it has at least two elements, it is connected, and the strip contained between the paths \(P\) and \(Q\) does not contain any interior lattice point.

Snakes are also referred to as _border strip_ matroids. Snakes have the minimal number of bases that a rank \(r\) connected matroid over \(n\) elements can have; that is why they are also called _minimal_ matroids [19]. In contrast to this, uniform matroids are _maximal_ with respect to this property. We introduce a new class of subdivisions as follows:

**Definition 5**.: Let \(\mathsf{M}[P,Q]\) be a lattice path matroid and \(\mathrm{P}_{\mathsf{M}[P,Q]}\) be its matroid polytope. A subdivision \(\Sigma\) of \(\mathrm{P}_{\mathsf{M}[P,Q]}\) is called a _lattice path matroidal_ (LPM) subdivision if all maximal cells of \(\Sigma\) are lattice path matroid polytopes.
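The membership test of Definition 3 is straightforward to put into code. The helper below is our own illustration (hypothetical, not from [10]); for the snake bounded by the north-step sequences \(P=(1,2)\) and \(Q=(2,4)\) on four elements it returns five bases, while \(P=(1,2)\), \(Q=(3,4)\) yields all six bases of the uniform matroid \(\mathsf{U}_{2,4}\).

```python
# Hypothetical helper: enumerate the bases of M[P,Q] as in Definition 3.
from itertools import combinations

def lpm_bases(P, Q, n):
    """P, Q: positions of the north steps of the bounding NE-paths."""
    r = len(P)
    return [I for I in combinations(range(1, n + 1), r)
            if all(P[j] <= I[j] <= Q[j] for j in range(r))]

print(lpm_bases((1, 2), (2, 4), 4))       # a snake: 5 bases
print(len(lpm_bases((1, 2), (3, 4), 4)))  # U_{2,4}: all 6 two-subsets of [4]
```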
In [15, 20], matroid base polytope decompositions are studied in detail, and it is shown that for a lattice path matroid \(\mathsf{M}[P,Q]\) which is not a _snake_, its matroid polytope \(\mathrm{P}_{\mathsf{M}[P,Q]}\) admits a decomposition into lattice path matroid polytopes such that

\[\mathrm{P}_{\mathsf{M}[P,Q]}=\bigcup_{i=1}^{t}\mathrm{P}_{\mathsf{M}[P_{i},Q_{i}]} \tag{2}\]

where each \(\mathrm{P}_{\mathsf{M}[P_{i},Q_{i}]}\) is also a lattice path matroid base polytope for some lattice path matroid \(\mathsf{M}[P_{i},Q_{i}]\), and for each \(1\leq i\neq j\leq t\), the intersection \(\mathrm{P}_{\mathsf{M}[P_{i},Q_{i}]}\cap\mathrm{P}_{\mathsf{M}[P_{j},Q_{j}]}\) is a face of both \(\mathrm{P}_{\mathsf{M}[P_{i},Q_{i}]}\) and \(\mathrm{P}_{\mathsf{M}[P_{j},Q_{j}]}\). A _hyperplane LPM split_ decomposition is a decomposition into exactly two lattice path matroid polytopes, i.e., \(t=2\), and as a consequence of [15, Corollary 1] we also know that these two LPM polytopes are full-dimensional. We feel that this is a good moment to recall the notion of a polytopal subdivision [29, Section 1.2].

**Definition 6**.: For a polytope \(P\in\mathbb{R}^{d}\), a _(polyhedral) subdivision_ \(\Sigma\) is a polytopal complex whose vertices are the vertices of \(P\) and that covers \(P\). \(\Sigma\) can be understood as a collection of faces \(F\) such that for any two faces \(F_{i}\) and \(F_{j}\), \(F_{i}\cap F_{j}\in\Sigma\).

It is evident from the definition above that the notions of LPM decompositions and LPM subdivisions coincide, and we state this in the form of Corollary 7.

**Corollary 7**.: _Let \(\Sigma^{\prime}=(\mathrm{P}_{\mathsf{M}[P_{1},Q_{1}]},\ldots,\mathrm{P}_{\mathsf{M}[P_{t},Q_{t}]})\) be a decomposition of \(\mathrm{P}_{\mathsf{M}[P,Q]}\) into lattice path matroid polytopes. Then \(\Sigma^{\prime}\) coincides with the subdivision \(\Sigma\) of \(\mathrm{P}_{\mathsf{M}[P,Q]}\) where each maximal cell is \(C_{i}=\mathrm{P}_{\mathsf{M}[P_{i},Q_{i}]}\)._

The LPM subdivision corresponding to a hyperplane LPM split decomposition is called a split subdivision. The subsequent subdivisions are obtained iteratively via split subdivisions, which correspond to hyperplane LPM split decompositions. We take this opportunity to specify our terminology, so as to minimize any confusion in the text: split with the prefix 'hyperplane' always refers to the LPM decomposition of a lattice path matroid into two LPMs, whereas split with the suffix 'hyperplane' refers to the hyperplane defining a split subdivision. From now on, our discussion will mostly focus on LPM subdivisions; however, because of the equivalence in Corollary 7, most of our results also extend to LPM decompositions, unless stated otherwise.

_Remark 8_.: The property of being obtained via iterative hyperplane LPM split decompositions is unique to the LPM decompositions described in [15, 8] and differs in this aspect from the concept of matroid decompositions defined in [9], which defines a new quasisymmetric invariant for matroids that acts as a valuation on decompositions of matroid polytopes. Kapranov [32] showed, however, that for rank 2 matroids such matroid decompositions can be obtained via hyperplane split decompositions.

We recall first this technical result regarding split subdivisions.

**Lemma 9** (Lemma 3.5 [27]).: _Split subdivisions are regular._

Proof.: Let \(S\) be a split subdivision of a polytope \(P\). We provide a canonical weight vector for this subdivision in the following way. Let \(a\) be the normal vector to the split hyperplane \(H_{S}\). We define the weight vector for \(S\) as \(w_{S}:\operatorname{Vert}(P)\to\mathbb{R}\) such that

\[w_{S}(v)=\begin{cases}|a\cdot v|&\text{if }v\in S_{+}\\ 0&\text{if }v\in S_{-}\end{cases}\]

It is clear that this weight function is well-defined and induces the split subdivision \(S\). 

We now state a technical result concerning split LPM subdivisions.
We call an LPM polytope \(\operatorname{P}_{\mathsf{M}[J]}\subseteq\operatorname{P}_{\mathsf{M}[P,Q]}\) a _truncated_ LPM polytope if \(\operatorname{P}_{\mathsf{M}[J]}=\operatorname{P}_{\mathsf{M}[P,Q]}\setminus(\operatorname{P}_{\mathsf{M}[P,Q]}\cap H_{-})\), where \(H_{-}\) is a halfspace defined by the split hyperplane \(H\) of a split subdivision (cf. Figure 2).

**Lemma 10**.: _A split subdivision of a truncated LPM polytope \(\operatorname{P}_{\mathsf{M}[J]}\) into two LPMs can be extended to a split subdivision of the LPM polytope \(\operatorname{P}_{\mathsf{M}[P,Q]}\) into two LPMs._

Proof.: We consider a split \(S\) of the LPM polytope \(\operatorname{P}_{\mathsf{M}[P,Q]}\). By Lemma 9 we know there exists a weight vector \(w_{S}\) of the form

\[w_{S}(v)=\begin{cases}|a\cdot v|&\text{if }v\in S_{+}\\ 0&\text{if }v\in S_{-}\end{cases}\]

where \(a\) is the normal vector to the split hyperplane \(H_{S}\). Similarly, let us consider a split \(S^{\prime}\) of the truncated LPM polytope \(\operatorname{P}_{\mathsf{M}[J]}\). Again by Lemma 9, we know that, restricted to \(\operatorname{P}_{\mathsf{M}[J]}\), there exists a weight vector \(w_{S^{\prime}}\) of the form

\[w_{S^{\prime}}(v)=\begin{cases}|b\cdot v|&\text{if }v\in S^{\prime}_{+}\\ 0&\text{if }v\in S^{\prime}_{-}\end{cases}\]

where \(b\) is the normal vector to the split hyperplane \(H_{S^{\prime}}\), and we choose \(S^{\prime}_{-}\) such that \(S_{-}\subseteq S^{\prime}_{-}\). Now we notice that there exists an extension of the weight vector \(w_{S^{\prime}}\) to \(w^{\prime}_{S^{\prime}}\), which is defined as follows:

\[w^{\prime}_{S^{\prime}}(v)=\begin{cases}w_{S^{\prime}}(v)&\text{if }v\in\operatorname{P}_{\mathsf{M}[J]}\\ 0&\text{if }v\in\operatorname{P}_{\mathsf{M}[P,Q]}\cap S_{-}\end{cases}\]

**Lemma 11**.: _For an LPM polytope \(\operatorname{P}_{\mathsf{M}[P,Q]}\), the split subdivisions induced from a hyperplane split decomposition are compatible._

Proof.: We proceed by proving the claim for two arbitrarily chosen split subdivisions. Let \(S_{1}\) and \(S_{2}\) be two split subdivisions of \(\operatorname{P}_{\mathsf{M}[P,Q]}\). Since split LPM subdivisions are defined in an iterative manner, without loss of generality we assume that \(S_{2}\) restricted to the truncated LPM polytope \(\operatorname{P}_{\mathsf{M}[J]}=\operatorname{P}_{\mathsf{M}[P,Q]}\setminus(\operatorname{P}_{\mathsf{M}[P,Q]}\cap S_{1_{-}})\) defines a split subdivision for \(\operatorname{P}_{\mathsf{M}[J]}\). But this implies that the split hyperplane \(H_{S_{2}}\) lies in \(\operatorname{P}_{\mathsf{M}[J]}\). Therefore, the split hyperplanes \(H_{S_{1}}\) and \(H_{S_{2}}\) cannot meet in the interior of \(\operatorname{P}_{\mathsf{M}[P,Q]}\). Hence, the splits \(S_{1}\) and \(S_{2}\) are compatible.

_Remark 12_.: As for the case of the hypersimplex \(\Delta(k,n)\), which is also an LPM polytope, we already know that any two splits are always compatible [27, Corollary 5.6].

_Remark 13_.: The compatibility of the splits which provide the iterative description of LPM subdivisions also shows that LPMs are _split matroids_, introduced by Joswig and Schroeter in [31].

**Theorem 14**.: _Any LPM subdivision \(\Sigma\) of a lattice path matroid polytope \(\mathrm{P}_{\mathsf{M}[P,Q]}\) is regular._

Proof.: Let \(\sigma\) be the LPM decomposition corresponding to \(\Sigma\). We know that \(\sigma\) can be obtained via iterative hyperplane LPM split decompositions. These hyperplane LPM split decompositions correspond to split subdivisions.
Let \(\{S_{1},S_{2},\ldots,S_{n}\}\) be the sequence of split subdivisions which corresponds to \(\Sigma\). We note that \(\{S_{2},\ldots,S_{n}\}\) are splits of the corresponding truncated LPM polytope \(\mathrm{P}_{\mathsf{M}[J]}\). By Lemma 10 we know that the splits \(\{S_{2},S_{3},\ldots,S_{n}\}\) can be extended to split subdivisions of \(\mathrm{P}_{\mathsf{M}[P,Q]}\), and we let \(\{S^{\prime}_{2},S^{\prime}_{3},\ldots,S^{\prime}_{n}\}\) be the corresponding split subdivisions on \(\mathrm{P}_{\mathsf{M}[P,Q]}\) for \(\Sigma\). We see that \(\Sigma\) is the common refinement of the splits \(\{S_{1},S^{\prime}_{2},\ldots,S^{\prime}_{n}\}\), and since we know from Lemma 11 that these splits are compatible, this common refinement is well defined. We now invoke the Split Decomposition Theorem [27, Theorem 3.10] to conclude that there exists a canonical weight vector

\[w=\sum_{S^{\prime}}\alpha^{w}_{w_{S^{\prime}}}w_{S^{\prime}}\]

which induces \(\Sigma\), where the sum runs over all splits and \(\alpha^{w}_{w_{S}}\) represents the _coherency index_ [27]. Hence, \(\Sigma\) is a regular subdivision.

Figure 2. A pictorial description of a truncated polytope \(P^{\prime}\) obtained from \(P\) with respect to a split \(S\), where \(S^{\prime}\) is a split of the truncated polytope \(P^{\prime}\) which can be extended to the polytope \(P\).

**Example 15**.: For the hypersimplex \(\Delta(3,6)\) we describe an LPM subdivision \(\Sigma^{\mathrm{LPM}}\) in Section 5, illustrated in Figure 4, whose corresponding LPM polytope decomposition \((M_{1},\ldots,M_{6})\), shown in Figure 5, is obtained as the common refinement of four splits, namely \(S_{1},S_{2},S_{3}\) and \(S_{4}\). The weight which induces \(\Sigma^{\mathrm{LPM}}\) is

\[w_{\Sigma^{\text{LPM}}}=\{0,0,0,0,0,0,0,1,1,2,0,0,0,1,1,2,2,2,3,5\}\]

and

\[w_{S_{1}}=\{0,0,0,0,0,0,0,1,1,1,0,0,0,1,1,1,1,1,1,2\}\]
\[w_{S_{2}}=\{0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,1,1\}\]
\[w_{S_{3}}=\{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1\}\]
\[w_{S_{4}}=\{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1\}\]

are the weights which induce the splits \(S_{1},S_{2},S_{3}\) and \(S_{4}\). With this, we see an instance of the result described in Theorem 14, with the split decomposition in the following form,

\[w_{\Sigma^{\text{LPM}}}=w_{S_{1}}+w_{S_{2}}+w_{S_{3}}+w_{S_{4}}\]
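As a quick machine check of this identity, the following lines (an illustrative script of ours, not part of the original computation) verify the componentwise sum of the four split weights:

```python
# Componentwise check of the split decomposition in Example 15; the vectors
# are indexed by the 20 = C(6,3) vertices of Delta(3,6).
w    = [0,0,0,0,0,0,0,1,1,2,0,0,0,1,1,2,2,2,3,5]
w_s1 = [0,0,0,0,0,0,0,1,1,1,0,0,0,1,1,1,1,1,1,2]
w_s2 = [0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,1,1]
w_s3 = [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1]
w_s4 = [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1]
assert all(len(v) == 20 for v in (w, w_s1, w_s2, w_s3, w_s4))
assert [a + b + c + d for a, b, c, d in zip(w_s1, w_s2, w_s3, w_s4)] == w
```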
**Corollary 16**.: _Let \(w_{\Sigma}\) be a weight vector for an LPM subdivision \(\Sigma\) of a lattice path matroid polytope \(\operatorname{P}_{\mathsf{M}[P,Q]}\). Then \(w_{\Sigma}\in\operatorname{Dr}(\mathsf{M}[P,Q])\)._

Proof.: Since \(w_{\Sigma}\) induces a regular and matroidal subdivision \(\Sigma\), by [42, Corollary 4.1] \(w_{\Sigma}\) lies in the Dressian \(\operatorname{Dr}(\mathsf{M}[P,Q])\).

We know that the Dressian is endowed with two polyhedral fan structures: one coming from the tropical prevariety definition, with points satisfying the Plucker relations, termed the _Plucker fan structure_ [42] on the Dressian; the other, termed the _secondary fan structure_ [42], comes from being a subfan of the secondary fan. Moreover, we know that these two fan structures coincide [42, Theorem 4.1]. We now have the required setup to describe a new polyhedral fan structure for LPM subdivisions. We begin this exploration with the following definition.

**Definition 17**.: Let \(\mathsf{M}[P,Q]\) be a lattice path matroid. We define the \(\operatorname{LPMfan}(\mathsf{M}[P,Q])\) to be the polyhedral fan which is the collection of all weight vectors \(w\) such that \(w\) is a weight vector for an LPM subdivision of \(\mathsf{M}[P,Q]\). Two weight vectors \(w_{1}\) and \(w_{2}\) lie in the same cone \(C\) if the induced LPM subdivisions \(\Sigma_{1}\) and \(\Sigma_{2}\) are the same.

Clearly,

\[\operatorname{LPMfan}(\mathsf{M}[P,Q])\subseteq\operatorname{Dr}(\mathsf{M}[P,Q])\subseteq\operatorname{Secfan}(\mathrm{P}_{\mathsf{M}[P,Q]}) \tag{3}\]

where all inclusions are inclusions of subfans. Additionally, from the definition of LPM subdivisions, given that they are obtained via refinements of split subdivisions, the LPMfan sits as a subfan inside the _split complex_ \(\operatorname{Split}(\mathrm{P}_{\mathsf{M}[P,Q]})\), which is an abstract simplicial complex defined on the set of compatible splits of \(\operatorname{P}_{\mathsf{M}[P,Q]}\) [27]. Hence, we get the refined containment relation of subfans

\[\operatorname{LPMfan}(\mathsf{M}[P,Q])\subseteq\operatorname{Split}(\mathrm{P}_{\mathsf{M}[P,Q]})\subseteq\operatorname{Dr}(\mathsf{M}[P,Q])\subseteq\operatorname{Secfan}(\mathrm{P}_{\mathsf{M}[P,Q]}) \tag{4}\]

An important observation is that the hypersimplex \(\Delta(k,n)\) is a lattice path matroid polytope, and hence all our results for LPM polytopes apply in this case:

\[\operatorname{LPMfan}(k,n)\subseteq\operatorname{Split}(\Delta(k,n))\subseteq\operatorname{Dr}(k,n)\subseteq\operatorname{Secfan}(\Delta(k,n))\]

An important avenue of research has been to understand the structure of the Dressian \(\operatorname{Dr}(k,n)\), particularly for certain low values of \(k\) and \(n\), namely \((3,6),(3,7)\) [26] and \((3,8)\) [28], etc. We describe LPMfans for certain values of \(k,n\) and discuss the calculations in Section 5.

## 4. Positive Tropical Grassmannian and LPM subdivisions

In this section, our aim is to highlight the consequences of the fact that LPMs are positroids, and towards the end we are also able to provide a partial answer to a question asked in [42] concerning finest matroidal subdivisions of the hypersimplex. Since it is a major theme for this section, we recall the result from [40] which shows that lattice path matroids are positroids, and upon which we build further in this section.

**Theorem 18** (Lemma 23 [40]).: _A lattice path matroid is a positroid._

Proof.: Let \(\mathsf{M}[P,Q]\) be an LPM. For the result to hold, it is sufficient to construct a \(k\times n\) matrix \(A\) such that

\[\det(A_{I})=\begin{cases}0&I\in\binom{[n]}{k}\setminus\mathsf{M}[P,Q]\\ \alpha&I\in\mathsf{M}[P,Q]\end{cases}\]

where \(\alpha>0\). Such a matrix can be constructed as follows. Let \(A=(a_{i,j})_{i,j=1,1}^{k,n}\) be the \(k\times n\) _Vandermonde_ matrix. Set \(a_{i,j}=0\) for all \(j\notin[P_{i},Q_{i}]\), where \(P_{i}\) and \(Q_{i}\) represent the \(i\)-th north steps in the lattice paths \(P\) and \(Q\), respectively. So \(A\) has the following form:

\[a_{i,j}=\begin{cases}x_{i}^{j-1}&\text{if }P_{i}\leq j\leq Q_{i}\\ 0&\text{otherwise}\end{cases} \tag{5}\]

Assign values to the variables \(x_{1},\ldots,x_{k}\) such that \(x_{1}>1\) and \(x_{i+1}=x_{i}^{k^{2}}\) for all \(i\in[k-1]\). We denote by \(A_{[1,\ldots,i][c_{1},\ldots,c_{i}]}\) the submatrix of \(A\) which has rows indexed from \(1\) to \(i\) and columns indexed by \(c_{1}\) to \(c_{i}\). We have \(\det(A_{I})>0\) if and only if \(A_{[1,\ldots,k]I}\) has nonzero diagonal entries, which happens if and only if \(I\in\mathsf{M}[P,Q]\).
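To make the construction concrete, here is a small numerical check (our own illustrative script, not part of [40]) for the snake with \(P=(1,2)\) and \(Q=(2,4)\) from Section 3: it builds the matrix \(A\) of (5) with \(x_{1}=2\) and confirms that the maximal minors are positive exactly on the bases.

```python
# Numerical check of the matrix construction (5) in the proof of Theorem 18.
from itertools import combinations
import numpy as np

k, n, P, Q = 2, 4, (1, 2), (2, 4)
bases = {I for I in combinations(range(1, n + 1), k)
         if all(P[i] <= I[i] <= Q[i] for i in range(k))}

x = [2.0]                                  # x_1 > 1
for _ in range(k - 1):
    x.append(x[-1] ** (k * k))             # x_{i+1} = x_i^{k^2}

A = np.array([[x[i] ** (j - 1) if P[i] <= j <= Q[i] else 0.0
               for j in range(1, n + 1)] for i in range(k)])

for I in combinations(range(1, n + 1), k):
    minor = np.linalg.det(A[:, [j - 1 for j in I]])
    assert (minor > 1e-9) == (I in bases)  # positive exactly on the bases
```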
Lusztig [37] and Postnikov [43, 44] introduced the notion of positivity for Grassmannians. This notion extends naturally to the tropical Grassmannian and the Dressian [38, Section 4.3]. In [46], and independently in [4], the authors prove the following equality between \(\operatorname{Trop}^{+}\operatorname{Gr}(k,n)\) and \(\operatorname{Dr}^{+}(k,n)\).

**Theorem 19** (Theorem 3.9 [46]).: _The positive tropical Grassmannian \(\operatorname{Trop}^{+}\operatorname{Gr}(k,n)\) equals the positive Dressian \(\operatorname{Dr}^{+}(k,n)\)._

A generalization of this theorem to the case of the positive local Dressian with respect to a positroid \(\mathsf{M}\) is provided in [4]. An important parameterization of points residing in the positive Dressian is explained in the following result.

**Theorem 20** (Theorem 4.3 [46]).: _Let \(\Sigma\) be a regular subdivision of \(\Delta(k,n)\) induced by a weight vector \(w\). Then the following are equivalent:_

1. _\(w\) is a positive tropical Plucker vector._
2. _Every face of \(\Sigma\) is a positroid._

The generalization of this to the local positive Dressian is provided again in [4, Proposition 8.3]. With this parameterization, we conclude that a point inducing an LPM subdivision resides in the positive Dressian.

**Lemma 21**.: _Let \(\Sigma\) be an LPM subdivision of \(\operatorname{P}_{\mathsf{M}[P,Q]}\) and let \(w_{\Sigma}\) be the weight vector for \(\Sigma\). Then \(w_{\Sigma}\in\operatorname{Dr}^{+}(\mathsf{M}[P,Q])=\operatorname{Trop}^{+}\operatorname{Gr}(\mathsf{M}[P,Q])\)._

Proof.: We know that a point \(w\) lies in the positive Dressian if all the maximal cells of the subdivision induced by this point as a weight vector on \(\operatorname{P}_{\mathsf{M}[P,Q]}\) are matroid polytopes of positroids, i.e., \(w\) induces a _positroidal_ subdivision [4, Proposition 8.3]. We know that LPMs are positroids; hence \(w_{\Sigma}\) induces a positroidal subdivision, and therefore \(w_{\Sigma}\in\operatorname{Dr}^{+}(\mathsf{M}[P,Q])=\operatorname{Trop}^{+}\operatorname{Gr}(\mathsf{M}[P,Q])\).

Another important result proven in [46] concerns the classification of the finest positroidal subdivisions of the hypersimplex \(\Delta(k,n)\).

**Theorem 22**.: _Let \(\Sigma\) be a regular positroidal subdivision of \(\Delta(k,n)\). Then the following are equivalent:_

1. _\(\Sigma\) is a finest subdivision._
2. _Every facet of \(\Sigma\) is the matroid polytope of a series-parallel matroid._
3. _Every octahedron in \(\Sigma\) is subdivided._

Along with the classification, [46] also provides the exact number of maximal cells in a finest positroidal subdivision of \(\Delta(k,n)\).

**Corollary 23**.: _Every finest positroidal subdivision of \(\Delta(k,n)\) has exactly \(\binom{n-2}{k-1}\) facets._

We also recall the following classification of connected positroids which are series-parallel.

**Lemma 24**.: _A connected positroid is series-parallel if and only if it has no uniform matroid \(\mathsf{U}_{2,4}\) as a minor._

In light of these results, we provide results about positroidal subdivisions of \(\Delta(k,n)\) obtained from LPMs. We begin with our first technical result concerning snakes.

**Lemma 25**.: _Snakes are series-parallel matroids._

Proof.: We note that the uniform matroid \(\mathsf{U}_{2,4}\) is also an LPM, as shown in Figure 3. Clearly, \(\mathsf{U}_{2,4}\) has an interior lattice point and therefore cannot be a minor of a lattice path matroid which is a snake. Therefore, by Lemma 24, snakes are series-parallel matroids. We also acknowledge that another proof of this result is present in [20, Proposition 5.14].
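As a quick sanity check, Corollary 23 matches the finest LPM subdivision of \(\Delta(3,6)\) from Example 15, whose decomposition \((M_{1},\ldots,M_{6})\) has exactly six maximal cells:

```python
# Facet count of a finest positroidal subdivision of Delta(k,n), Corollary 23,
# for k = 3, n = 6, compared with the six cells (M_1, ..., M_6) of Example 15.
from math import comb
assert comb(6 - 2, 3 - 1) == 6
```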
With Lemma 25 and Theorem 22, we state the following result.

**Corollary 26**.: _Let \(\Sigma\) be an LPM subdivision of \(\Delta(k,n)\) such that the underlying matroid of each maximal cell is a snake. Then \(\Sigma\) is a finest positroidal subdivision of \(\Delta(k,n)\) and has exactly \(\binom{n-2}{k-1}\) facets._

With Theorem 22 and Lemma 24, we are also able to provide a partial answer to Question 6.2 posed in [42].

**Question 27** (Question 6.2 [42]).: _Are all cells in the finest matroid subdivisions of a hypersimplex matroid polytopes of indecomposable matroids?_

The authors show that the answer to this question is affirmative in the case when the hypersimplex is \(\Delta(2,n)\) [42, Proposition 5.3]. However, we know of explicit counterexamples, provided in [13], which show that there exist finest matroidal subdivisions of certain hypersimplices whose cells do not correspond to indecomposable matroids. We state some technical definitions before stating the partial answer. We recall the following classification of _binary matroids_, which are the matroids representable over the field with two elements.

**Theorem 28** (Tutte [49]).: _A matroid is binary if and only if it has no minor isomorphic to the uniform matroid \(\mathsf{U}_{2,4}\)._

Figure 3. The uniform matroid \(\mathsf{U}_{2,4}\) as a lattice path matroid.

**Definition 29** (Definition 5.2 [42]).: A matroid is said to be _indecomposable_ if and only if its polytope does not allow a non-trivial matroid subdivision.

Therefore, we obtain Corollary 30 as an answer to Question 27, when restricted to the case of positroidal subdivisions of the hypersimplex.

**Corollary 30**.: _The cells of a finest positroidal subdivision of \(\Delta(k,n)\) correspond to binary matroids. In particular, they are indecomposable._

Proof.: We know from Theorem 22 that the maximal cells of a finest positroidal subdivision of \(\Delta(k,n)\) correspond to connected series-parallel positroids, and by Lemma 24 we know that they do not have \(\mathsf{U}_{2,4}\) as a minor and are therefore binary matroids.

With Lemma 21, it is clear that the corresponding fan structure for LPM subdivisions also resides as a subfan inside the positive Dressian:

\[\operatorname{LPMfan}(\mathsf{M}[P,Q])\subseteq\operatorname{Dr}^{+}(\mathsf{M}[P,Q])=\operatorname{Trop}^{+}\operatorname{Gr}(\mathsf{M}[P,Q]) \tag{6}\]

\[\operatorname{LPMfan}(\Delta(k,n))\subseteq\operatorname{Dr}^{+}(k,n)=\operatorname{Trop}^{+}\operatorname{Gr}(k,n) \tag{7}\]

Also, in [4] a third fan structure on the positive Dressian \(\operatorname{Dr}^{+}(\mathsf{M})\) is defined, the _positive fan structure_. This fan structure is based on the underlying _cluster algebra_, studied in detail in [4]. We refer the reader to [23, 39] for basic details concerning cluster algebras. Our aim here is to highlight the third fan structure on the positive Dressian that is induced via these clusters, although clusters will emerge again later in our discussion concerning minimal positroids and the positive configuration space in Section 6.3. We define the notion of a _cluster_ associated with a matroid [4].

**Definition 31**.: A cluster \(\mathcal{C}\) for a matroid \(\mathsf{M}\) is a subset of \(\mathsf{M}\) that indexes a seed in the cluster structure of the cluster algebra isomorphic to \(\mathbb{C}[\tilde{\pi}_{\mathsf{M}}]\), where \(\mathbb{C}[\tilde{\pi}_{\mathsf{M}}]\) is the coordinate ring associated to the positroid variety.
**Definition 32**.: The _positive fan structure_ on \(\operatorname{Dr}^{+}(\mathsf{M})\) is the fan whose cones are the images of the domains of linearity for a positive parameterization by a cluster \(\mathcal{C}\). Two points lie in the same cone of \(\operatorname{Dr}^{+}(\mathsf{M})\) if they determine the same common domains of linearity for all the functions \(p_{J},J\in\mathsf{M}\).

The authors in [4] also prove that this new fan structure coincides with the previous two fan structures,

**Theorem 33** (Theorem 10.3 [4]).: _The three fan structures on \(\operatorname{Dr}^{+}(\mathsf{M})\) coincide._

With the subfan relation (6) in place, we obtain

**Corollary 34**.: _The three fan structures on \(\operatorname{LPMfan}(\operatorname{P}_{\mathsf{M}[P,Q]})\) coincide._

_Remark 35_.: We also want to highlight that matroid decompositions are invariant under matroid duality, which is reflected in our description of the LPMfan: if a \(k\)-dimensional cone \(C\) in \(\operatorname{LPMfan}(\operatorname{P}_{\mathsf{M}[P,Q]})\) corresponds to an LPM decomposition \(\{\mathsf{M}_{t}[P^{t},Q^{t}]\}\), then there exists a \(k\)-dimensional cone \(C^{\prime}\) that represents the LPM decomposition \(\{\mathsf{M}_{t}^{*}[P^{t},Q^{t}]\}\), where \(*\) represents the matroid dual. This fact can be verified in the case of \(\Delta(3,6)\) from Figure 9.

## 5. Computations for LPM polytope \(\Delta(k,n)\)

In this section we look at some computational examples, concentrating on the case of \(\Delta(k,n)\) for \(k=3,4\) and \(n=6,8\), respectively. We use polymake [25] for our computations.

### Computations for LPM polytope \(\Delta(3,6)\)

Figure 4 illustrates an LPM subdivision \(\Sigma^{\text{LPM}}\) of \(\Delta(3,6)\), with the lattice path matroids corresponding to the maximal cells also shown. We also calculate the weight vector \(w\) which induces this subdivision

\[w=\{0,0,0,0,0,0,1,1,2,0,0,0,1,1,2,2,2,3,5\}\]

Figure 4. An LPM subdivision of \(\Delta(3,6)\), which is also one of the finest matroidal subdivisions, and hence indexes a maximal cone of \(\operatorname{Dr}(3,6)\).

We illustrate the LPM polytope decomposition which corresponds to the subdivision in Figure 4, and in Figure 5 we see the truncated LPM polytope after each iterative step of taking a hyperplane split decomposition.

Figure 5. The LPM decomposition of \(\mathsf{U}_{3,6}\), which corresponds to the LPM subdivision in Figure 4. The dashed portions show the truncated parts of \(\mathsf{U}_{3,6}\).

We also see that \(\Sigma^{\text{LPM}}\) corresponds to the metric tree arrangement shown in Figure 6.

Figure 6. Metric tree arrangement corresponding to the LPM subdivision in Figure 4.

It is easy to see that under the permutation

\[1\to 1,\quad 2\to 5,\quad 3\to 3,\quad 4\to 2,\quad 5\to 4,\quad 6\to 6\]

this tree arrangement permutes to the tree arrangement shown in Figure 7, which corresponds to \(\operatorname{Cone}_{4}\) [47] in the classification of all maximal cones of \(\operatorname{Dr}(3,6)\) [26].

Figure 7. Metric tree arrangement and the matroidal subdivision corresponding to \(\operatorname{Cone}_{4}\) in \(\operatorname{Dr}(3,6)\).

### Decorated permutations and reduced plabic graphs

We now connect our computations to some other parameterizations of the positive Grassmannian, namely _decorated permutations_ and _reduced plabic graphs_; we rely on [50] for most of our definitions in this subsection.

**Definition 36**.: A _decorated permutation_ of \([n]\) is a bijection \(\pi:[n]\to[n]\) whose fixed points are each colored either black or white. A black fixed point \(i\) is denoted by \(\pi(i)=\underline{i}\), and
a white fixed point \(i\) by \(\pi(i)=\overline{i}\). An _anti-excedance_ of the decorated permutation \(\pi\) is an element \(i\in[n]\) such that either \(\pi^{-1}(i)>i\) or \(\pi(i)=i\). A decorated permutation on \([n]\) is of type \((k,n)\) if it has \(k\) anti-excedances.

We now establish the connection between decorated permutations and positroid cells of the positive Grassmannian.

**Definition 37**.: Given a \(k\times n\) matrix \(C=(c_{1},\ldots,c_{n})\) written as a list of its columns, a decorated permutation \(\pi:=\pi_{C}\) is associated to \(C\) as follows. Set \(\pi(i):=j\) to be the label of the first column \(j\) such that \(c_{i}\in\operatorname{span}\{c_{i+1},c_{i+2},\ldots,c_{j}\}\). If \(c_{i}\) is the all-zero vector, it is called a _loop_, and if \(c_{i}\) is not in the span of the other column vectors, it is called a _coloop_. The positroid cell associated to this decorated permutation is defined as

\[S_{\pi}=\{C\in\operatorname{Gr}(k,n)^{\geq 0}\mid\pi_{C}=\pi\}\]

Postnikov showed that \(S_{\pi}\) is a cell, and that the positive Grassmannian \(\operatorname{Gr}(k,n)^{\geq 0}\) is the union of the cells \(S_{\pi}\), where \(\pi\) ranges over decorated permutations of type \((k,n)\) [43].

**Definition 38**.: A _plabic graph_ is an undirected planar graph \(G\) drawn inside a disk (considered modulo homotopy) with \(n\) boundary vertices on the boundary of the disk, labeled \(1,\ldots,n\) in clockwise order, as well as some internal vertices. Each boundary vertex is incident to a single edge, and each internal vertex is colored either black or white. If a boundary vertex is incident to a leaf (a vertex of degree 1), it is called a lollipop.

**Definition 39**.: A _perfect orientation_ \(\mathcal{O}\) of a plabic graph \(G\) is a choice of orientation of each of its edges such that each black internal vertex \(u\) is incident to exactly one edge directed away from \(u\), and each white internal vertex \(v\) is incident to exactly one edge directed toward \(v\). A plabic graph is called _perfectly orientable_ if it admits a perfect orientation. Let \(G_{\mathcal{O}}\) denote the directed graph associated with a perfect orientation \(\mathcal{O}\) of \(G\). The _source set_ \(I_{\mathcal{O}}\subseteq[n]\) of a perfect orientation \(\mathcal{O}\) is the set of \(i\) which are sources of the directed graph \(G_{\mathcal{O}}\). Similarly, if \(j\in\overline{I_{\mathcal{O}}}:=[n]-I_{\mathcal{O}}\), then \(j\) is a _sink_ of \(\mathcal{O}\).

The following result links positroids with plabic graphs [43, 50].

**Theorem 40** (Theorem 12.6 [50]).: _Let \(G\) be a plabic graph of type \((k,n)\). Then we have a positroid \(M_{G}\) on \([n]\) defined by_

\[M_{G}=\left\{I_{\mathcal{O}}\mid\mathcal{O}\text{ is a perfect orientation of }G\right\}\]

_where \(I_{\mathcal{O}}\) is the set of sources of \(\mathcal{O}\). Moreover, every positroid cell has the form \(S_{M_{G}}\) for some plabic graph \(G\)._

If a plabic graph \(G\) is _reduced_ [43, 23], we have that \(S_{M_{G}}=S_{\pi_{G}}\), where \(\pi_{G}\) is the decorated permutation defined as follows.

**Definition 41**.: Let \(G\) be a reduced plabic graph with boundary vertices \(1,\ldots,n\). For each boundary vertex \(i\in[n]\), we follow a path along the edges of \(G\) starting at \(i\), turning (maximally) right at every internal black vertex, and (maximally) left at every internal white vertex.
This path ends at some boundary vertex \(\pi(i)\). The fact that \(G\) is reduced implies that each fixed point of \(\pi\) is attached to a lollipop; we color each fixed point by the color of its lollipop. This defines a decorated permutation, called the _decorated trip permutation_ \(\pi_{G}=\pi\) of \(G\).

In [40], the following result elaborates on the way to compute the associated decorated permutation of an LPM.

**Theorem 42** (Theorem 25 [40]).: _Let \(I\) and \(J\) be two lattice paths starting at the origin and terminating at \((k,n-k)\), such that \(I\) never crosses \(J\). Let \(I=\{i_{1}<\ldots<i_{k}\}\) and \(J=\{j_{1}<\ldots<j_{k}\}\in\binom{[n]}{k}\). Denote \([n]\setminus J=\{d_{1}<\ldots<d_{n-k}\}\) and \([n]\setminus I=\{c_{1}<\ldots<c_{n-k}\}\). Then \(\mathsf{M}[I,J]\) is a positroid and its decorated permutation \(\pi_{\mathsf{M}[I,J]}\) is given by:_

\[\pi_{\mathsf{M}[I,J]}(j_{r})=i_{r}\quad\forall r\in[k]\]

\[\pi_{\mathsf{M}[I,J]}(d_{r})=c_{r}\quad\forall r\in[n-k]\]

_If \(\pi_{\mathsf{M}[I,J]}(t)=t\), then_

\[\operatorname{col}(t)=\begin{cases}-1&\text{if }t\in J\\ 1&\text{otherwise}\end{cases}\]

where \(\operatorname{col}(\cdot)\) represents the coloring map for the loop and coloop elements of the permutation. Figure 8 lists the decorated permutations and the reduced plabic graphs corresponding to the snakes in the snake decomposition of \(\mathsf{U}_{3,6}\) described in Figure 4.

Figure 8. Decorated permutations and reduced plabic graphs corresponding to snakes of \(\mathsf{U}_{3,6}\).

### LPMfan(3,6)

We first inspect the _f-vector_ of the fans associated to \(\mathsf{U}_{3,6}\) [26]

\[f\text{-vector}(\operatorname{Dr}(3,6))=(1,65,535,1350,1005)\]

\[f\text{-vector}(\operatorname{Trop}(\operatorname{Gr}(3,6)))=(1,65,550,1395,1035)\]

Out of the 65 rays of the Dressian \(\operatorname{Dr}(3,6)\), 35 correspond to splits and lie in the split complex, whereas the other 30 correspond to coarsest subdivisions of \(\Delta(3,6)\) into three maximal cells. Restricting to the positive tropical Grassmannian, we get the following vector [4, 45, 50], where \(F_{3,6}\) is the fan associated to \(\operatorname{Trop}^{+}(\operatorname{Gr}(3,6))\),

\[f\text{-vector}(F_{3,6})=(1,16,66,98,48)\]

Out of these 16 rays, five occur in the LPMfan in the form of \(S_{1},S_{2},S_{3},S_{4}\) and \(S_{5}\), which we see in Figure 9. The _f-vector_ of the LPMfan for \(\Delta(3,6)\) is listed below, where all cones are obtained as refinements of the five splits \(S_{1},S_{2},S_{3},S_{4}\) and \(S_{5}\) illustrated in Figure 9, and where the labeled edges between cones signify the combination of the corresponding splits,

\[f\text{-vector}(\operatorname{LPMfan}(\Delta(3,6)))=(1,5,7,3,1)\]

The LPMfan(3,6) sits inside the split subcomplex generated by the refinements of the splits \(S_{1},S_{2},S_{3},S_{4}\) and \(S_{5}\). Also, to reiterate, the cones are defined as secondary cones with rays defined by the corresponding splits, i.e., the collection of all weight vectors which induce the same LPM subdivision lie in the same cone.

**Definition 43**.: We refer to a lattice path matroidal subdivision which is a split with a snake as a maximal cell as a _snake split subdivision_, and we refer to the snakes appearing in a snake split subdivision as _split snakes_.

_Remark 44_.: We point out that there are LPM decompositions of \(\mathsf{U}_{3,6}\) other than the ones shown in Figure 5; these are depicted in Figure 10. One of the weight vectors inducing the split subdivision \(S_{5}\) is the zero vector

\[w_{S^{\prime}}=\{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\}\]
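Before moving on, we note that the map in Theorem 42 is straightforward to compute. The following Python sketch (ours; function names are illustrative, and we use the convention that the two bounding paths are recorded by their step-index sets) produces the decorated permutation of \(\mathsf{M}[I,J]\) and recovers the permutation \((3,4,1,2)\) for \(\mathsf{U}_{2,4}\).

```python
def lpm_decorated_permutation(I, J, n):
    """Decorated permutation of the LPM M[I, J], following Theorem 42:
    pi(j_r) = i_r and pi(d_r) = c_r, where D = [n] \\ J and C = [n] \\ I;
    fixed points t are colored -1 if t is in J and +1 otherwise."""
    I, J = sorted(I), sorted(J)
    D = [t for t in range(1, n + 1) if t not in J]
    C = [t for t in range(1, n + 1) if t not in I]
    pi = {}
    for j, i in zip(J, I):
        pi[j] = i
    for d, c in zip(D, C):
        pi[d] = c
    colors = {t: (-1 if t in J else 1) for t in pi if pi[t] == t}
    return pi, colors

# U_{2,4} as the LPM with I = {3,4} and J = {1,2} (cf. Figure 3):
pi, colors = lpm_decorated_permutation([3, 4], [1, 2], 4)
print(pi)      # {1: 3, 2: 4, 3: 1, 4: 2}, i.e. the permutation (3,4,1,2)
print(colors)  # no fixed points, hence no decoration
```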
_Remark 45_.: We also want to point out to the reader that there exists a natural action of the symmetric group \(S_{n}\) on the cones of the Dressian \(\operatorname{Dr}(\Delta(k,n))\), well documented in [26] and well described in their computations; with respect to this action there are only 7 maximal cells of \(\operatorname{Dr}(3,6)\) [26].

Figure 9. LPMfan(3,6).

Our description of the LPMfan implicitly incorporates this symmetry; for example, the weight vectors

\[w_{1}=\{0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1\}\]

and

\[w_{2}=\{1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\}\]

both induce the split \(S_{3}\), but we know that both of them are equivalent under the action of \(S_{6}\).

### Computations for LPM polytope \(\Delta(4,8)\)

The subdivision is described in Figure 11 and Figure 12, and it is induced by the weight vector

\[w=\{0,0,0,0,0,0,0,0,0,0,1,1,1,2,2,3,0,0,0,0,0,1,1,1,2,2,3,2,\\ 2,2,3,3,4,5,5,6,8,0,0,0,0,1,1,1,2,2,3,2,2,2,3,\\ 3,4,5,5,6,8,3,3,3,3,4,4,4,5,6,6,7,9,8,8,8,9,11,14\}\]

Figure 10. A split LPM decomposition \(S^{\prime}\) along with other decompositions that it corresponds to.

Figure 11. An LPM decomposition of \(\mathsf{U}_{4,8}\) which corresponds to the LPM subdivision in Figure 12 and indexes a maximal cone of \(\operatorname{Dr}^{+}(4,8)\).

Figure 12. The LPM subdivision of \(\Delta(4,8)\), corresponding to the decomposition in Figure 11.

A subsequent computation of \(\operatorname{LPMfan}(\Delta(4,8))\) is more intricate than the computation of \(\operatorname{LPMfan}(\Delta(3,6))\), and we leave it for future work, where we believe the symmetric group action should also be exploited in the computation in order to produce bigger examples. All the files containing the code used for these computations can be found at the following link [https://github.com/Ayush-Tewari13/LPM_SUBDIVISIONS](https://github.com/Ayush-Tewari13/LPM_SUBDIVISIONS)

## 6. Amplituhedron and Positive configuration spaces

We now describe an important implication of our results and connections to topics in physics which have gained immense interest in recent times. In [5], Arkani-Hamed et al. introduced the notion of the _amplituhedron_, which is obtained from the positive Grassmannian via the _amplituhedron map_. It has been noted that the amplituhedron encodes information concerning scattering amplitudes in \(\mathcal{N}=4\) super Yang-Mills theory, which in turn explains the etymology of the term. In [50], the authors introduce the notion of _positroid dissections_ for the hypersimplex \(\Delta(k+1,n)\) and of _Grasstope dissections_ for the amplituhedron, and explain the ways in which these two dissections can be related via a duality map. We begin with the definition of the amplituhedron [5], [50],

**Definition 46**.: For \(a\leq b\), define \(\operatorname{Mat}_{a,b}^{>0}\) as the set of real \(a\times b\) matrices whose \(a\times a\) minors are all positive. Let \(Z\in\operatorname{Mat}_{n,k+m}^{>0}\). The amplituhedron map \(\overline{Z}:\operatorname{Gr}(k,n)^{\geq 0}\to\operatorname{Gr}(k,k+m)\) is defined by \(\overline{Z}(C):=CZ\), where \(C\) is a \(k\times n\) matrix representing an element of \(\operatorname{Gr}(k,n)^{\geq 0}\) and \(CZ\) is a \(k\times(k+m)\) matrix representing an element of \(\operatorname{Gr}(k,k+m)\). The _amplituhedron_ \(\mathcal{A}_{n,k,m}(Z)\subseteq\operatorname{Gr}(k,k+m)\) is the image \(\overline{Z}(\operatorname{Gr}(k,n)^{\geq 0})\).
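As a quick numerical illustration of Definition 46, the following Python sketch (ours; the matrices and the positivity test are illustrative assumptions, not taken from [5] or [50]) checks the minor-positivity conditions and applies the amplituhedron map \(\overline{Z}(C)=CZ\) to a toy point of \(\operatorname{Gr}(1,4)^{\geq 0}\).

```python
import numpy as np
from itertools import combinations

def maximal_minors_positive(M, strict=True):
    """Check positivity of all a x a minors of an a x b matrix M (a <= b)."""
    a, b = M.shape
    minors = [np.linalg.det(M[:, list(cols)]) for cols in combinations(range(b), a)]
    return all(m > 0 for m in minors) if strict else all(m > -1e-12 for m in minors)

def amplituhedron_map(C, Z):
    """The amplituhedron map Z_bar(C) = C Z of Definition 46."""
    return C @ Z

# Toy example with k = 1, n = 4, m = 2.
Z = np.array([[1., 0., 0.], [1., 1., 0.], [1., 2., 1.], [1., 3., 3.]])
C = np.array([[1., 2., 0.5, 1.]])               # a point of Gr(1,4)^{>=0}
assert maximal_minors_positive(Z.T)             # Z lies in Mat^{>0}_{4,3}
assert maximal_minors_positive(C, strict=False)
print(amplituhedron_map(C, Z))                  # a point of A_{4,1,2}(Z)
```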
We briefly state some of the results from [50] to sketch the outline of their discussion,

**Definition 47**.: Let \(\mathcal{C}=\{\Gamma_{\pi}\}\) be a collection of positroid polytopes, and let \(S_{\pi}\) be the collection of corresponding positroid cells. \(\mathcal{C}\) is a _positroid dissection_ of \(\Delta(k,n)\) if

* \(\dim(\Gamma_{\pi})=n-1\) for each \(\Gamma_{\pi}\in\mathcal{C}\),
* the open parts \(\Gamma_{\pi}^{o}=\mu(S_{\pi})\) and \(\Gamma_{\pi^{\prime}}^{o}=\mu(S_{\pi^{\prime}})\) of any two distinct positroid polytopes are disjoint, and
* \(\cup_{\pi}\Gamma_{\pi}=\Delta(k,n)\).

**Definition 48**.: Let \(A\) be a \(k\times n\) matrix representing a point in \(\operatorname{Gr}(k,n)^{\geq 0}\). The _moment map_ \(\mu:\operatorname{Gr}(k,n)^{\geq 0}\to\mathbb{R}^{n}\) is defined by

\[\mu(A)=\frac{\sum_{I\in\binom{[n]}{k}}|p_{I}(A)|^{2}e_{I}}{\sum_{I\in\binom{[n]}{k}}|p_{I}(A)|^{2}}\]

A positroid dissection is called a _positroid tiling_ if \(\mu\) is injective on each \(S_{\pi}\).

As can be seen from the definition, dissections are a more general notion than polytopal subdivisions of a hypersimplex, with no restrictions on how individual pieces meet at the boundary, although the notion of a _good dissection_ [50] exactly agrees with the notion of a subdivision,

**Definition 49**.: Let \(\mathcal{C}=\{\Gamma_{\pi^{(1)}},\ldots,\Gamma_{\pi^{(l)}}\}\) be a dissection of \(\Delta(k+1,n)\). We say that \(\mathcal{C}\) is a good dissection of \(\Delta(k+1,n)\) if the following condition is satisfied: for \(i\neq j\), if \(\Gamma_{\pi^{(i)}}\cap\Gamma_{\pi^{(j)}}\) has codimension one, then \(\Gamma_{\pi^{(i)}}\cap\Gamma_{\pi^{(j)}}\) equals \(\Gamma_{\pi}\), where \(\Gamma_{\pi}\) is a facet of both \(\Gamma_{\pi^{(i)}}\) and \(\Gamma_{\pi^{(j)}}\).

In [50] a dissection of the hypersimplex is provided, inspired by the _BCFW recurrence_ relations for tilings of the \(m=4\) amplituhedron, which is referred to as the _BCFW-style recurrence_,

**Theorem 50** (Theorem 4.5 [50]).: _Let \(\mathcal{C}_{k+1,n-1}\) (respectively \(\mathcal{C}_{k,n-1}\)) be a collection of positroid polytopes that dissects the hypersimplex \(\Delta(k+1,n-1)\) (respectively \(\Delta(k,n-1)\)). Then_

\[\mathcal{C}_{k+1,n}=i_{\text{pre}}(\mathcal{C}_{k+1,n-1})\cup i_{\text{inc}}(\mathcal{C}_{k,n-1})\]

_dissects \(\Delta(k+1,n)\), where \(i_{\text{pre}}\) and \(i_{\text{inc}}\) are maps defined on reduced plabic graphs in [50, Definition 4.1]._

### Matroidal definition for BCFW dissections of hypersimplex

We now build a purely matroidal relation for BCFW-style recurrence dissections of hypersimplices. We first provide some context for our notation. For a positroid polytope \(\mathcal{P}\), we write \(\mathcal{P}=\operatorname{Conv}(\mathcal{M})\) for the underlying positroid \(\mathcal{M}\), where \(\operatorname{Conv}\) denotes taking the convex hull of the indicator vectors of the bases of \(\mathcal{M}\). We now provide the matroidal definition for _BCFW style recurrence dissections_ of the hypersimplex.
**Theorem 51** (matroidal BCFW style relations for the hypersimplex).: _Let \(\mathcal{C}_{k+1,n}\) be a collection of positroid polytopes that dissects the hypersimplex \(\Delta(k+1,n)=\operatorname{Conv}(\mathcal{U}_{k+1,n})\). Then_

\[\mathcal{C}_{k+1,n}=((\mathcal{C}_{k+1,n})/e_{i})\cup((\mathcal{C}_{k+1,n})\setminus e_{i})\]

_and the set \(((\mathcal{C}_{k+1,n})/e_{i})\) provides a positroid dissection of \(\Delta(k,n-1)\) and \(((\mathcal{C}_{k+1,n})\setminus e_{i})\) provides a positroid dissection of \(\Delta(k+1,n-1)\), where '\(/\)' represents contraction and '\(\setminus\)' represents deletion of matroids._

Proof.: Firstly, we note that the hypersimplex \(\Delta(k+1,n)\) is a 0-1 polytope obtained by the intersection of the unit cube \([0,1]^{n}\) with the affine hyperplane \(\sum_{i=1}^{n}x_{i}=k+1\). The facet corresponding to the hyperplane \(x_{i}=0\) is termed the _i-th deletion facet_ of \(\Delta(k+1,n)\) and is isomorphic to \(\Delta(k+1,n-1)\). Similarly, the facet corresponding to the hyperplane \(x_{i}=1\) is termed the _i-th contraction facet_ of \(\Delta(k+1,n)\) and is isomorphic to \(\Delta(k,n-1)\). Also, these facets can be obtained as deletion and contraction, respectively, on the uniform matroid \(\mathcal{U}_{k+1,n}\) [26]. With these definitions, the notions of contraction and deletion extend to the respective dissections and subdivisions, a fact which is used in [26]. We point out the natural dissections of the hypersimplex into two minors provided by contraction and deletion. Let \(v\in\operatorname{Vert}(\Delta(k+1,n))\). Since each dissection into \(\operatorname{Conv}((\mathcal{M}_{k+1,n})/e_{i})\) and \(\operatorname{Conv}((\mathcal{M}_{k+1,n})\setminus e_{i})\) is defined by the hyperplanes \(x_{i}=0\) or \(x_{i}=1\), every vertex \(v\) lies in either \(\operatorname{Conv}((\mathcal{M}_{k+1,n})/e_{i})\) or \(\operatorname{Conv}((\mathcal{M}_{k+1,n})\setminus e_{i})\). Given a positroid dissection \(\mathcal{C}_{k+1,n}\), we consider the minors with respect to an element \(i\in[n]\) and obtain the minors \(((\mathcal{C}_{k+1,n})/e_{i})\) and \(((\mathcal{C}_{k+1,n})\setminus e_{i})\). We recognize that these minors correspond to the dissections induced on the contraction and deletion facets of \(\Delta(k+1,n)\), which are isomorphic to \(\Delta(k,n-1)\) and \(\Delta(k+1,n-1)\) respectively, giving us the two required positroid dissections.

_Remark 52_.: We point out that Theorem 51 provides a matroidal formulation of BCFW style relations for the hypersimplex and proves an almost converse statement of Theorem 50. We say _almost_ converse since not all positroid dissections of \(\Delta(k+1,n)\) occur from BCFW style recursions [50], whereas the statement of Theorem 51 involves matroidal operations, so for any positroid dissection of \(\Delta(k+1,n)\) we can obtain dissections of \(\Delta(k,n-1)\) and \(\Delta(k+1,n-1)\) in this way. We point out that it is not obvious that there exist matroidal operations equivalent to the operations \(i_{pre}\) and \(i_{inc}\) used in Theorem 50. We also wish to explore a possible generalization of the statement of Theorem 51 to matroid dissections, and not necessarily positroid dissections; the deletion and contraction operations on bases used above are illustrated in the sketch below.
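Here is a minimal Python sketch of deletion and contraction on bases (ours; it assumes \(e\) is neither a loop nor a coloop, so that both minors have the expected ranks and ground sets).

```python
from itertools import combinations

def delete(bases, e):
    """Bases of M \\ e (assuming e is not a coloop of M)."""
    return [B for B in bases if e not in B]

def contract(bases, e):
    """Bases of M / e (assuming e is not a loop of M)."""
    return [tuple(x for x in B if x != e) for B in bases if e in B]

# Deletion and contraction of U_{3,6} at e = 1, as in Theorem 51:
U36 = list(combinations(range(1, 7), 3))
print(len(delete(U36, 1)))    # 10 = C(5,3), the bases of U_{3,5} on {2,...,6}
print(len(contract(U36, 1)))  # 10 = C(5,2), the bases of U_{2,5} on {2,...,6}
```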
However, such a discussion would require an appropriate definition of a matroid dissection and a generalization of Theorem 50 to the case of matroid dissections, as the non-trivial part of the proof of Theorem 50 rests on a refined description of the facets of positroid polytopes defined by Postnikov, described in [2, Proposition 5.6].

**Example 53**.: We again consider the snake polytope decomposition of \(\Delta(3,6)\) described in Figure 4. We know that this is also a regular positroidal subdivision, or equivalently a regular positroid good dissection. We now perform the contraction and deletion with respect to the element \(i=1\) on this subdivision and obtain two collections: \(\{M_{2}\setminus\{1\},\ldots,M_{6}\setminus\{1\}\}\) provides a positroidal subdivision (equivalently, a positroid good dissection) of \(\Delta(3,5)\) on the letters \([6]\setminus\{1\}=\{2,3,4,5,6\}\), and \(\{M_{1}/\{1\},M_{2}/\{1\},M_{3}/\{1\}\}\) provides a positroidal subdivision of \(\Delta(2,5)\) (cf. Figure 13).

Figure 13. The contraction and deletion operation on the positroidal subdivision of \(\Delta(3,6)\) from Figure 4.

### BCFW cells correspond to lattice path matroids

We point out that in the discussion in this section we focus only on the \(m=4\) amplituhedron. In a recent breakthrough work [18], the authors prove the conjecture that BCFW cells provide a triangulation of the amplituhedron \(\mathcal{A}_{n,k,4}\). In [18] and [33] the authors establish the equivalence between BCFW cells and noncrossing lattice walks (paths). We use this observation to explore the connection between BCFW triangulations and lattice path matroids. We mostly borrow our notation from [33]. Let \(\mathcal{L}_{n,k,4}\) denote the set of all pairs \((P_{\mathcal{L}},Q_{\mathcal{L}})\) of _noncrossing_ lattice paths inside a \(k\times(n-k-4)\) rectangle, where the notion of noncrossing is the same as \(P\) never going above \(Q\), implicit in Definition 3. Therefore, we state one of our first conclusions in the form of Corollary 54.

**Corollary 54**.: _Let \((P_{\mathcal{L}},Q_{\mathcal{L}})\in\mathcal{L}_{n,k,4}\) be a pair of noncrossing lattice paths. Then \((P_{\mathcal{L}},Q_{\mathcal{L}})\) determines a lattice path matroid \(\mathcal{M}[P_{\mathcal{L}},Q_{\mathcal{L}}]\) which lies inside the lattice path matroid \(\mathcal{U}_{k,n-4}\)._

We now describe the connection between noncrossing lattice paths and BCFW cells of \(\mathcal{A}_{n,k,4}\). Firstly, in [33] the authors introduce the notion of a \(\oplus\)-diagram of type \((k,n)\), which is defined as follows [33, Definition 2.3],

**Definition 55**.: Fix \(0\leq k\leq n\). Given a partition \(\lambda\), we let \(Y_{\lambda}\) denote the Young diagram of \(\lambda\). A \(\oplus\)-diagram of type \((k,n)\) is a filling \(D\) of a Young diagram \(Y_{\lambda}\) fitting inside a \(k\times(n-k)\) rectangle with the symbols \(0\) and \(+\) (such that each box of \(Y_{\lambda}\) is filled with exactly one symbol), and \(\lambda\) is called the shape of \(D\) (cf. Figure 14).

The rules according to which the filling of a \(\oplus\)-diagram is obtained are elaborated in [33, Definition 6.2]. Let \(\mathcal{D}_{n,k,4}\) be the space of \(\oplus\)-diagrams of type \((k,n)\).
We infer the following result from [33, Definition 6.2],

**Lemma 56**.: _There exists a bijection \(\Omega_{\mathcal{LD}}\) such that_

\[\Omega_{\mathcal{LD}}:\mathcal{L}_{n,k,4}\to\mathcal{D}_{n,k,4}\]

**Theorem 57** (Theorem 6.3 [33]).: _The \(\oplus\)-diagrams \(\mathcal{D}_{n,k,4}\) index the \((k,n)\)-BCFW cells \(\mathcal{C}_{n,k,4}\)._

This theorem is proven by using another bijection between the space of _binary rooted trees_ \(\mathcal{T}_{n,k,4}\) and \(\mathcal{L}_{n,k,4}\), and the authors use reduced plabic graphs to produce _decorated permutations_ for the \(\oplus\)-diagrams. We point the reader to [33] to explore these concepts and proofs in full detail.

Figure 14. A \(\oplus\)-diagram \(D\) of type \((3,12)\).

Our interest develops with Corollary 54, which inspires us to enquire about the existence of a duality between cells of the amplituhedron and dissections of the hypersimplex, established via T-duality in the case of the \(m=2\) amplituhedron in [50]. In [18] the following result concerning BCFW cells is proven, which was stated as a conjecture in [5, 33].

**Theorem 58**.: _For every \(k\geq 1\) and \(n\geq k+4\), the \((k,n)\)-BCFW cells form a triangulation of the amplituhedron \(\mathcal{A}_{n,k,4}\)._

We now state our result based on this discussion,

**Theorem 59**.: _Each triangulation of the amplituhedron \(\mathcal{A}_{n,k,4}\) into \((k,n)\)-BCFW cells provides a positroid dissection \(\{\Gamma_{i}\}\) of the hypersimplex \(\Delta(k,n-4)\), where each BCFW cell corresponds to a lattice path matroid polytope \(\Gamma_{i}\)._

Proof.: By Corollary 54 we already know that each \((k,n)\)-BCFW cell corresponds to an LPM \(\mathcal{M}[P_{\mathcal{L}},Q_{\mathcal{L}}]\) inside \(\mathcal{U}_{k,n-4}\), where \((P_{\mathcal{L}},Q_{\mathcal{L}})\in\mathcal{L}_{n,k,4}\). Therefore, each \((k,n)\)-BCFW cell corresponds to a lattice path matroid polytope \(\mathcal{P}(\mathcal{M}[P_{\mathcal{L}},Q_{\mathcal{L}}])\) which lies inside \(\Delta(k,n-4)=\operatorname{Conv}(\mathcal{U}_{k,n-4})\). Therefore, a triangulation of \(\mathcal{A}_{n,k,4}\) into \((k,n)\)-BCFW cells corresponds to a collection of lattice path matroid polytopes which lie inside \(\Delta(k,n-4)=\operatorname{Conv}(\mathcal{U}_{k,n-4})\), which is clearly a positroid dissection by Definition 47.

With Theorem 59 we establish a first notion in the direction of _T-duality_ for the \(m=4\) amplituhedron; in the case of the \(m=2\) amplituhedron, [50] shows that subdivisions of the amplituhedron correspond to positroid dissections of the corresponding hypersimplex. We provide this in the case of the \(m=4\) amplituhedron for the BCFW triangulation, which motivates the exploration of other triangulations and subdivisions of \(\mathcal{A}_{n,k,4}\). Also, BCFW style dissections enjoy a recursive description and can be understood as coming from splits, as discussed in the case of the \(m=2\) amplituhedron in [50, Remark 4.8], and we believe that a positroid dissection into LPM cells captures this in essence as well, owing to the recursive definition of LPM polytope decompositions.

### Positive configuration spaces, weakly separated collections and connected minimal positroids

We highlight some of the connections between our study of LPMs and [4]. Firstly, in [4] the authors relate the _positive Chow cells_ of the _Chow quotient_ of the Grassmannian with positroidal subdivisions. Let \(Ch(k,n)_{\geq 0}\) denote the nonnegative part of the Chow quotient of the Grassmannian.
**Theorem 60** (Theorem 1.1 [4]).: _There are canonical bijections between the following sets._

* _The set_ \(\{\Theta_{\bar{\Delta}>0}\}\) _of positive Chow cells of_ \(Ch(k,n)_{\geq 0}\)_._
* _The set_ \(D(k,n)\) _of regular positroidal subdivisions of_ \(\Delta(k,n)\)_._
* _The set of cones in the positive tropical Grassmannian_ \(\operatorname{Trop}^{+}\operatorname{Gr}(k,n)\)_, the space of valuations of positive Puiseux series points_ \(\operatorname{Gr}(k,n)(\mathcal{R}_{>0})\)_._
* _The set of cones in the positive Dressian_ \(\operatorname{Dr}^{+}(k,n)\)_, which satisfy the three-term positive Plücker relations._

As LPMs are positroids, all these equivalences remain true when restricted to the LPMfan. We also delve into the connection between the _cluster_ of a matroid, _weakly separated collections_ [41], and snakes. We fix some notation relevant to our discussion. We define the _cyclic ordering_ [40] (referred to as the _\(t\)-th Gale order_ in [7]) \(\leq_{t}\) on \([n]\) for some \(t\in[n]\) by the total order \(t\leq_{t}t+1\leq_{t}\ldots\leq_{t}n\leq_{t}1\leq_{t}\ldots\leq_{t}t-1\). For \(I,J\in\binom{[n]}{k}\), where

\[I=\{i_{1},\ldots,i_{k}\},\quad i_{1}\leq_{t}i_{2}\leq_{t}\ldots\leq_{t}i_{k}\]

and

\[J=\{j_{1},\ldots,j_{k}\},\quad j_{1}\leq_{t}j_{2}\leq_{t}\ldots\leq_{t}j_{k}\]

we set

\[I\leq_{t}J\quad\text{if and only if}\quad i_{1}\leq_{t}j_{1},\ldots,i_{k}\leq_{t}j_{k}\]

**Definition 61**.: For each \(I\in\binom{[n]}{k}\) and \(t\in[n]\), we define the _cyclically shifted Schubert matroid_ as

\[SM_{I}^{t}=\left\{J\in\binom{[n]}{k}\mid I\leq_{t}J\right\}\]

We recall the definition of weakly separated sets from [41],

**Definition 62**.: Let \(I\) and \(J\) be two subsets of \([n]\). \(I\) and \(J\) are said to be weakly separated if either

* \(|I|\leq|J|\) and \(I\setminus J\) can be partitioned as \(I_{1}\cup I_{2}\) such that \(I_{1}\prec J\setminus I\prec I_{2}\), or
* \(|J|\leq|I|\) and \(J\setminus I\) can be partitioned as \(J_{1}\cup J_{2}\) such that \(J_{1}\prec I\setminus J\prec J_{2}\),

where \(A\prec B\) indicates that every element of \(A\) is less than every element of \(B\). Equivalently, the sets \(I,J\in\binom{[n]}{k}\) are said to be _weakly separated_ if we cannot find cyclically ordered elements \(a,b,c,d\) such that \(a,c\in I\setminus J\) and \(b,d\in J\setminus I\) (together with the symmetric statement with \(I\) and \(J\) swapped).

We also recall the definition of _Grassmann necklaces_ [43, 41, 40, 44].

**Definition 63**.: A Grassmann necklace is a sequence \(I=(I_{1},\ldots,I_{n})\) of subsets \(I_{r}\subseteq[n]\) such that:

* if \(i\in I_{i}\) then \(I_{i+1}=(I_{i}\setminus\{i\})\cup\{j\}\) for some \(j\in[n]\),
* if \(i\not\in I_{i}\) then \(I_{i+1}=I_{i}\).

The indices are taken modulo \(n\). In particular, we have \(|I_{1}|=\ldots=|I_{n}|\). There exists a canonical bijection between positroids and Grassmann necklaces. We state the characterization of the _cluster_ of a matroid (Definition 31) in terms of weakly separated sets and Grassmann necklaces [41],

**Lemma 64**.: _A subset \(\mathcal{C}\subseteq\mathcal{M}\) is a cluster if it is pairwise weakly separated, has size \(\dim(\mathcal{M})+1\), and contains the Grassmann necklace \(\mathcal{I}\) of \(\mathcal{M}\). Any pairwise weakly separated subset of \(\binom{[n]}{k}\) can be extended to a cluster._
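The cyclic criterion in Definition 62 is easy to check by machine. The following small Python sketch (ours; the function name is illustrative) tests weak separation by counting the cyclic blocks formed by \(I\setminus J\) and \(J\setminus I\); it verifies the cluster \(\mathcal{C}_{1}\) computed in the example below, and confirms that the pair \(\{124,135\}\) discussed at the end of this section is not weakly separated.

```python
def weakly_separated(I, J):
    """Cyclic criterion of Definition 62: I and J are weakly separated
    iff the elements of I \\ J and J \\ I do not interleave cyclically,
    i.e. their membership pattern has at most two cyclic blocks."""
    A, B = set(I) - set(J), set(J) - set(I)
    pattern = ['a' if t in A else 'b' for t in sorted(A | B)]
    changes = sum(pattern[i] != pattern[i - 1] for i in range(len(pattern)))
    return changes <= 2  # number of cyclic block boundaries

C1 = [{1, 2, 3}, {2, 3, 4}, {1, 3, 4}, {1, 2, 4}, {1, 2, 5}, {1, 2, 6}]
print(all(weakly_separated(I, J) for I in C1 for J in C1))  # True
print(weakly_separated({1, 2, 4}, {1, 3, 5}))               # False
```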
As one of the takeaways in [4], the authors state the following result concerning minimal connected positroids and their clusters,

**Lemma 65**.: _A connected positroid \(\mathcal{M}\) is minimal if and only if the associated reduced plabic graph \(G(\mathcal{C})\) is a tree, for some cluster \(\mathcal{C}\) of \(\mathcal{M}\). In this case, \(\mathcal{M}\) has a unique cluster \(\mathcal{C}\subseteq\mathcal{M}\)._

We already know that, among lattice path matroids, snakes are minimal matroids. Hence, by Lemma 65, we obtain a unique cluster in this case. We explain this with one of our running examples: the snake decomposition of \(\mathcal{U}_{3,6}\) shown in Figure 4. We obtain the cluster \(\mathcal{C}_{1}\) for the snake \(M_{1}\),

\[\mathcal{C}_{1}=\{123,234,134,124,125,126\}\]

It is easy to verify that \(\mathcal{C}_{1}\) is pairwise weakly separated, contains the Grassmann necklace for \(\mathcal{M}_{1}\), and has cardinality \(\dim(\mathcal{M}_{1})+1=5+1=6\). Likewise, we obtain unique clusters for all the snakes. The corresponding graphs for these snakes are described in Figure 8. We conclude with another interesting observation. For both of the _snake split_ (Definition 43) matroids, we notice that they contain exactly \(k(n-k)+1\) elements. This is exactly the cardinality of a _maximal weakly separated collection_, i.e., a maximal collection of pairwise weakly separated elements inside the matroid \(\mathcal{M}\); the bound on its cardinality was famously conjectured by Leclerc and Zelevinsky and proven in [41]. However, the elements in a snake split are not all pairwise weakly separated, so they are not examples of maximal weakly separated collections. For example, for the snake decomposition of \(\mathcal{U}_{3,6}\), the snake split \(M_{1}\) has the elements 124 and 135, which are not weakly separated.

## 7. Future Perspectives

We utilize this section to condense our discussion and to highlight the takeaways from our results; we also point to subsequent questions which arise from our work. Firstly, we want to mention recent work concerning lattice path matroid decompositions into snakes and _alcoved triangulations_ [7], in which the authors prove results based on the snake decomposition of LPMs and also discuss the Ehrhart theory of LPMs. They prove that the alcoved triangulation of an alcoved polytope is regular, and we observe that our discussion of lattice path matroid subdivisions being regular generalizes this result for LPMs. We point the reader to Figure 1 in [7] for the context of where LPMs lie with respect to other well-known families of matroids. We also want to point the reader to [14], where the authors show that there exist finest matroid subdivisions of matroid polytopes that do not contain matroid polytopes of indecomposable matroids as maximal cells. Hence, it might be a worthwhile question to ask for which other families of matroids, apart from positroids, a result like Corollary 30 can be obtained; the class of transversal matroids might be a good candidate to consider. Also, a natural generalization of this question would be to consider the finest subdivisions of Dressians of arbitrary matroids, and not necessarily the hypersimplex, and to see if some of these results can be recovered.
With the introduction of the notion of the LPMfan, we believe that there are many questions that can be asked pertaining to its structure, and we hope this will interest readers in further research. Some interesting queries are to understand whether there exists a bound on the number of LPM splits and how it behaves with respect to the Dressian, and to compute the dimension of the LPMfan. We believe there is much more to analyze about the LPMfan, and we aim to pursue this in future work. We also acknowledge, via [6], recursive relations between LPMs defined by quotients and direct sums; these could be very interesting for understanding LPM subdivisions of larger LPM polytopes, and we wish to employ such a technique for computing the LPMfan recursively. Additionally, it would be interesting to inquire about the specific Plücker relations that are satisfied by points corresponding to LPM subdivisions, which lie in the Dressian. We already know that they satisfy the positive Plücker relations, owing to the fact that LPMs are positroids, but these might be refined even further, which could be done by analyzing the forbidden minors for a matroid to be an LPM, classified in [11]. One of our future goals is also to find an equivalent of Theorem 50 for LPM dissections. Also, in [50], the authors provide a characterization of positroid polytopes in the form of the following statement,

**Theorem 66** (Theorem 3.9 [50]).: _Let \(M\) be a matroid of rank \(k\), and consider the matroid polytope \(P_{M}\). It is a positroid polytope if and only if all of its two-dimensional faces are positroid polytopes._

We are able to obtain a one-way implication similar to this in the case of LPM polytopes as follows,

**Lemma 67**.: _The faces of a lattice path matroid polytope \(P_{M[P,Q]}\) are also lattice path matroid polytopes._

Proof.: It is clear that it is sufficient to prove the claim for the facets of an LPM polytope \(P_{M[P,Q]}\). We utilize the characterization of the facets of matroid polytopes described in [31, Proposition 7], which says that the facets of a matroid polytope are either induced by hypersimplex facets or by hypersimplex splits. If the facet is induced by a hypersimplex facet, we know that these correspond to matroidal deletions and contractions, and LPMs are closed under these operations [12]. Hence, the facet of an LPM polytope is again an LPM polytope in this case. Alternatively, if the facet is induced by a hypersimplex split, we know that it is induced by an _F-hyperplane_ [31], where \(F\) is a flat of the LPM \(M[P,Q]\) such that \(0<\operatorname{rank}(F)<\#F\), in which case the facet can be described as \(P_{M[P,Q]}(F)=P_{(M[P,Q]\,|\,F\oplus M[P,Q]/F)}\). Since LPMs are also closed under direct sums and restrictions [11], this facet is again an LPM polytope.

We do highlight the fact that a characterization of snake polytopes does exist: snake polytopes are unimodularly equivalent to order polytopes of zigzag posets [35, Theorem 4.7]. Additionally, the facial structure of LPMs has also been classified in terms of certain sets of deletions, contractions and direct sums in [1].

_Remark 68_.: Lemma 67 also appears as a result in [8, Theorem 3.12]; however, the argument there appears incomplete, since only hypersimplex facets are considered in the proof and not the facets induced via hypersimplex splits.

Another important connection to our results which we want to highlight is the work of Fink and Rincón on _Stiefel tropical linear spaces_ [22].
For the uninitiated, the _Stiefel map_ assigns to a \(k\times n\) matrix over a field \(\mathbb{K}\) an element of the Grassmannian \(\operatorname{Gr}(k,n)\). The authors study the tropicalization of this map as well as the properties of its image, called the _Stiefel image_, inside the tropical Grassmannian. The authors in [22] relate the points inside the Stiefel image to the class of _regular transversal matroid subdivisions_, which, as the name suggests, is the class of regular matroidal subdivisions where each maximal cell corresponds to a transversal matroid. Since LPMs are transversal, LPM subdivisions are also transversal matroid subdivisions. Additionally, we obtain the following corollary as a direct consequence of [21, Theorem 6.20],

**Corollary 69**.: _Let \(L\) be the tropical linear space dual to an LPM subdivision. Then \(L\) lies in the corresponding Stiefel image._

In [22, Proposition 5.1] a facet description of transversal matroid polytopes is provided, and [21, Corollary 6.21] provides a partial characterization of transversal matroids in terms of their facets. Based on these results, we propose the following question,

**Question 70**.: _Let \(P_{M}\) be the matroid polytope of a matroid \(M\), such that all of its faces are LPM polytopes. Does this imply that \(P_{M}\) is also an LPM polytope?_

We observe that an affirmative answer, along with Lemma 67, would provide a full characterization of LPM polytopes in terms of their faces. Also, we already know, due to prior results, that with the assumptions in the question, \(M\) is both transversal [21, Corollary 6.20] and a positroid [50, Theorem 3.20]. Hence, it is also worthwhile to inquire about the ways in which the three different classes of matroids, namely transversal matroids, lattice path matroids and positroids, interact. A subsequent study of the relations between Stiefel tropical linear spaces and LPM subdivisions will be explored elsewhere.

We recall that a matroidal subdivision is completely determined by its 3-skeleton [42]. In recent work [30], the authors introduce the class of _permutahedral subdivisions_, i.e., polyhedral subdivisions of generalized permutahedra into cells that are generalized permutahedra. They also show that the 2-skeleton of a permutahedral subdivision does not completely determine the subdivision. Against the background of these results, we would like to understand how the class of _LPM subdivisions_ introduced in this paper behaves, and possibly to find a criterion which completely determines an LPM subdivision.

We also comment on the location of the positroid cells corresponding to LPMs in the stratification of the positive Grassmannian. We consider two well-known families of cells in the positive Grassmannian [7],

**Definition 71**.: A positroid cell \(\Pi\) is called a _Schubert cell_ if a generic point \(U\in\Pi\) gives rise to a representable matroid \(\mathcal{M}_{I}=([n],\mathcal{B})\) where \(B\in\mathcal{B}\) if and only if \(I<_{1}B\), where \(<_{1}\) is the usual total order on \([n]\).

**Definition 72**.: A positroid cell \(\Pi\) is called a _Richardson cell_ if a generic point \(U\in\Pi\) gives rise to a representable matroid \(\mathcal{M}_{I}^{J}=([n],\mathcal{B})\) where \(B\in\mathcal{B}\) if and only if \(I<_{1}B<_{1}J\), where \(<_{1}\) is the usual total order on \([n]\).

Schubert matroids correspond to Schubert cells and lattice path matroids correspond to Richardson cells.
We wish to understand these Richardson cells in depth, given the context of lattice path matroids and in light of questions from algebraic geometry concerning positroid and Richardson varieties, as mentioned in [36, 7]. We are currently working on a sequel to our work here, in the context of the new definition of lattice path flag matroids [7], looking at equivalent questions in the realm of flag matroids, along with the flag matroid equivalent of the Dressian, i.e., the _flag Dressian_ [13], and the associated tropical flag variety [48]. Our results about the amplituhedron have two facets. Firstly, we provide a matroidal treatment of the well-known BCFW style recurrence relations for positroidal dissections of the hypersimplex. For the \(m=2\) amplituhedron, via the _T-duality_ described in [50], these dissections correspond to a dissection of the amplituhedron in terms of _Grasstopes_ [50]. However, not much is known about the relations between triangulations of the amplituhedron and dissections of the hypersimplex in the case of the \(m=4\) amplituhedron. We provide a first counterpart of positroid dissections of the hypersimplex for BCFW triangulations of \(\mathcal{A}_{n,k,4}\). We wish to explore the possibility of equivalent notions of T-duality for the \(m=4\) amplituhedron as well. We also wish to examine connections between LPMs and combinatorial objects other than the ones discussed here, for example the _chord diagrams_ and _domino bases_ described in [18]. We also point the reader to recent work on weakly separated collections and matroidal subdivisions [17], which correlates with some of our observations and is an interesting avenue for further exploration.
2303.00611
Track-To-Track Association for Fusion of Dimension-Reduced Estimates
Network-centric multitarget tracking under communication constraints is considered, where dimension-reduced track estimates are exchanged. Previous work on target tracking in this subfield has focused on fusion aspects only and derived optimal ways of reducing dimensionality based on fusion performance. In this work we propose a novel problem formalization where estimates are reduced based on association performance. The problem is analyzed theoretically and problem properties are derived. The theoretical analysis leads to an optimization strategy that can be used to partly preserve association quality when reducing the dimensionality of communicated estimates. The applicability of the suggested optimization strategy is demonstrated numerically in a multitarget scenario.
Robin Forsling, Zoran Sjanic, Fredrik Gustafsson, Gustaf Hendeby
2023-03-01T16:07:33Z
http://arxiv.org/abs/2303.00611v2
# Track-To-Track Association for Fusion of Dimension-Reduced Estimates

###### Abstract

Network-centric multitarget tracking under communication constraints is considered, where dimension-reduced track estimates are exchanged. Previous work on target tracking in this subfield has focused on fusion aspects only and derived optimal ways of reducing dimensionality based on fusion performance. In this work we propose a novel problem formalization where estimates are reduced based on association performance. The problem is analyzed theoretically and problem properties are derived. The theoretical analysis leads to an optimization strategy that can be used to partly preserve association quality when reducing the dimensionality of communicated estimates. The applicability of the suggested optimization strategy is demonstrated numerically in a multitarget scenario.

Network-centric estimation, target tracking, track-to-track association, communication constraints, dimension-reduced estimates.

## I Introduction

The multitarget tracking (MTT, [1]) problem is a well-studied topic. Two popular classical MTT methods are the global nearest neighbor (GNN) tracker and the multiple hypothesis tracker [2, 3]. In the last few decades different MTT methods based on random finite sets have emerged that provide a solid mathematical framework for the multitarget Bayesian filter, see, e.g., [4, 5]. A key feature of all of these MTT algorithms is how they deal with the _association problem_ where measurements are assigned to existing tracks. Association problems also arise in _network-centric_ MTT where multiple agents estimate a common set of targets and the communicated tracks must be associated with local tracks. This is a _track-to-track association_ problem. In addition, the communication channel is a limited resource and in certain situations the exchanged data must be reduced [6, 7], which in general has a negative impact on the association quality. A network-centric MTT scenario with dimension-reduced estimates is illustrated in Fig. 1(a).

The problem of fusing dimension-reduced measurements and estimates has been studied before: In [8, 9, 10] it is done in centralized and distributed configurations, and in [11, 12, 13] it is done for decentralized sensor networks. However, all of these papers assume that the association process, see Fig. 1(b), can be neglected or is trivially solved such that the dimension-reduction can be optimized for fusion performance only. The corresponding association problem--where the dimension-reduction takes data association into account--remains untreated.

In this paper we deal with the association problem in network-centric MTT with dimension-reduced estimates. The main goal is to find a way to compute the dimension-reduction such that satisfactory association performance is obtained. This problem is formalized and the relationship to fusion optimal dimension-reduction is discussed. As a result of a problem analysis an optimization strategy is suggested for computing dimension-reductions that yield good association performance. The contributions are listed below.

* We propose a novel formalization of the association problem in network-centric MTT with dimension-reduced estimates. This problem formulation essentially involves a GNN tracker and computation of the dimension-reduction such that satisfactory association quality is obtained.

Fig. 1: A multiagent multitarget tracking scenario where agent 2 transmits dimension-reduced estimates to agent 1.
The colored numbered circles in (a) represent agents. The black symbols represent targets and the corresponding colored symbols and ellipses are estimates. Before fusing received dimension-reduced estimates, agent 1 must associate these estimates with its local estimates. The scope of this paper is highlighted by the red dashed box in (b).

* The proposed problem is analyzed theoretically and problem properties are derived.
* Based on the problem analysis we suggest an optimization algorithm for the dimension-reduction computation.

## II Network-Centric Target Tracking Using Dimension-Reduced Estimates

In this section we introduce the studied multiagent MTT problem. The outlined estimation model forms the basis for fusion related operations and is mainly related to previous work. The provided association model is fundamental for the contributions of this paper. We give a motivating example to illustrate why association properties should be taken into account when reducing dimensionality. The considered problem is formalized at the end.

### _Preliminaries_

Let \(\mathbb{R}^{n}\) and \(\mathbb{R}^{m\times n}\) denote the set of all real-valued \(n\)-dimensional vectors and the set of all real-valued \(m\times n\) matrices, respectively. Let \(\mathbb{S}_{+}^{n}\) and \(\mathbb{S}_{++}^{n}\) denote the set of all symmetric positive semidefinite \(n\times n\) matrices and the set of all symmetric positive definite \(n\times n\) matrices, respectively. Targets and estimates are distinguished by subscript \((i)\), e.g., the state of the \(i\)th target is \(x_{(i)}\in\mathbb{R}^{n}\). We use boldface to express random variables and normal face for a realization of the random variable, e.g., \(y\) is a realization of \(\mathbf{y}\). The expectation operator is denoted by \(\mathsf{E}(\cdot)\). A random variable \(\mathbf{y}\) is said to be Gaussian distributed with mean \(\mu=\mathsf{E}(\mathbf{y})\) and covariance matrix \(\Sigma=\mathsf{E}(\mathbf{y}-\mu)(\mathbf{y}-\mu)^{\mathsf{T}}\) if \(\mathbf{y}\sim\mathcal{N}(\mu,\Sigma)\).

### _Estimation Model_

We consider two agents. Let

\[y_{1(i)}=x_{(i)}+v_{1(i)},\qquad\mathbf{v}_{1(i)}\sim\mathcal{N}(0,R_{1(i)}), \tag{1a}\]

\[y_{2(i)}=x_{(i)}+v_{2(i)},\qquad\mathbf{v}_{2(i)}\sim\mathcal{N}(0,R_{2(i)}), \tag{1b}\]

be the local estimates of \(x_{(i)}\) in agent 1 and agent 2, respectively. For instance, \(y_{1(i)}\) is the state estimate and \(R_{1(i)}\) the corresponding covariance of the \(i\)th target in agent 1. All cross-covariances \(R_{12(i)}=\mathsf{E}(\mathbf{v}_{1(i)}\mathbf{v}_{2(i)}^{\mathsf{T}})\) are assumed to be zero1.

Footnote 1: In network-centric MTT estimates are typically correlated to some degree. Here it is assumed that estimates have been decorrelated before they are communicated, for instance by using the techniques in [13, 14].

A dimension-reduced estimate is given by

\[y_{\Psi(i)}=\Psi_{(i)}y_{2(i)},\qquad\quad R_{\Psi(i)}=\Psi_{(i)}R_{2(i)}\Psi_{(i)}^{\mathsf{T}}, \tag{2}\]

where \(\Psi_{(i)}\in\mathbb{R}^{m\times n}\) with \(m<n\) and \(\mathrm{rank}(\Psi_{(i)})=m\). The sets of local estimates of agent 1 and agent 2 are

\[\mathcal{Y}_{1}=\left\{(y_{1(1)},R_{1(1)}),\ldots,(y_{1(N)},R_{1(N)})\right\}, \tag{3a}\]

\[\mathcal{Y}_{2}=\left\{(y_{2(1)},R_{2(1)}),\ldots,(y_{2(N)},R_{2(N)})\right\}. \tag{3b}\]

Agent 1 and agent 2 track exactly the same targets and hence have the same number of tracks.
Moreover, it is assumed that the elements of \(\mathcal{Y}_{1}\) and \(\mathcal{Y}_{2}\) are labeled according to \(x_{(1)},\ldots,x_{(N)}\), e.g., \((y_{1(i)},R_{1(i)})\) and \((y_{2(i)},R_{2(i)})\) are estimates of the same target \(x_{(i)}\). This might sound a bit counterintuitive, but the assumption is not a restriction since here the actual correct association result is assumed to be known, as described later, and the task is to compute \(\Psi_{(1)},\ldots,\Psi_{(N)}\). We also define

\[\mathcal{Y}_{\Psi}=\left\{(y_{\Psi(1)},R_{\Psi(1)}),\ldots,(y_{\Psi(N)},R_{\Psi(N)})\right\}. \tag{4}\]

Since \(R_{12(i)}=0\), \((y_{1(i)},R_{1(i)})\) and \((y_{\Psi(i)},R_{\Psi(i)})\) are mean square error (MSE) optimally fused according to [11]

\[\hat{x}_{(i)}=P_{(i)}\left(R_{1(i)}^{-1}y_{1(i)}+\Psi_{(i)}^{\mathsf{T}}R_{\Psi(i)}^{-1}y_{\Psi(i)}\right), \tag{5a}\]

\[P_{(i)}=\left(R_{1(i)}^{-1}+\Psi_{(i)}^{\mathsf{T}}R_{\Psi(i)}^{-1}\Psi_{(i)}\right)^{-1}. \tag{5b}\]

This fusion rule is denoted the Kalman fuser (KF). For the KF, a fusion optimal2 \(\Psi_{(i)}\) is computed using Algorithm 1 [11].

Footnote 2: Fusion optimal in the sense that this \(\Psi_{(i)}\) yields the smallest MSE when fusing \((y_{1(i)},R_{1(i)})\) and \((y_{\Psi(i)},R_{\Psi(i)})\).

### _Association Model_

The association problem is formulated as a linear assignment problem [15]. In case of full estimates, the assignment matrix is

\[\mathcal{A}_{\text{full}}=\begin{bmatrix}d_{(11)}^{2}&\ldots&d_{(1N)}^{2}\\ \vdots&\ddots&\vdots\\ d_{(N1)}^{2}&\ldots&d_{(NN)}^{2}\end{bmatrix}, \tag{6}\]

where \(d_{(ij)}^{2}\) is a Mahalanobis distance (MD) given by

\[d_{(ij)}^{2}=\bar{y}_{(ij)}^{\mathsf{T}}S_{(ij)}^{-1}\bar{y}_{(ij)}, \tag{7a}\]

\[\bar{y}_{(ij)}=y_{1(i)}-y_{2(j)}, \tag{7b}\]

\[S_{(ij)}=R_{1(i)}+R_{2(j)}, \tag{7c}\]

since \(\mathsf{E}(\mathbf{v}_{1(i)}\mathbf{v}_{2(j)}^{\mathsf{T}})=0\). Similarly, the dimension-reduced assignment matrix \(\mathcal{A}_{\text{red}}\) is defined as

\[\mathcal{A}_{\text{red}}=\begin{bmatrix}r_{(11)}^{2}&\ldots&r_{(1N)}^{2}\\ \vdots&\ddots&\vdots\\ r_{(N1)}^{2}&\ldots&r_{(NN)}^{2}\end{bmatrix}, \tag{8}\]

where \(r_{(ij)}^{2}\) is an MD given by

\[r_{(ij)}^{2}=(\Psi_{(j)}y_{1(i)}-y_{\Psi(j)})^{\mathsf{T}}\left(\Psi_{(j)}R_{1(i)}\Psi_{(j)}^{\mathsf{T}}+R_{\Psi(j)}\right)^{-1}(\Psi_{(j)}y_{1(i)}-y_{\Psi(j)})=\bar{y}_{(ij)}^{\mathsf{T}}\Psi_{(j)}^{\mathsf{T}}\left(\Psi_{(j)}S_{(ij)}\Psi_{(j)}^{\mathsf{T}}\right)^{-1}\Psi_{(j)}\bar{y}_{(ij)}. \tag{9}\]

Agent 1 receives estimates from agent 2 and solves the association problem using the following optimization formulation. Let \(\mathbb{P}^{N}\) be the set of all \(N\times N\) permutation matrices, i.e.,

\[\mathbb{P}^{N}=\left\{\Pi\in\mathbb{R}^{N\times N}\;\big{|}\;[\Pi]_{ij}\in\{0,1\},\ \Pi\Pi^{\mathsf{T}}=I\right\}. \tag{10}\]

A permutation matrix \(\Pi\in\mathbb{P}^{N}\) assigns exactly one estimate in \(\mathcal{Y}_{1}\) to each of the estimates in \(\mathcal{Y}_{\Psi}\). The optimal \(\Pi\) for a certain assignment matrix \(\mathcal{A}\) is computed using [15]

\[\begin{aligned}\underset{\Pi}{\text{minimize}}&\quad\operatorname{tr}(\Pi\mathcal{A})\\ \text{subject to}&\quad\Pi\in\mathbb{P}^{N}.\end{aligned} \tag{11}\]

In this formulation correct assignment is given by \(\Pi_{0}=I\).
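For concreteness, the following Python sketch (ours; it assumes SciPy's `linear_sum_assignment` as the solver of (11), and the variable names are illustrative) builds \(\mathcal{A}_{\text{red}}\) from (8)-(9), solves the assignment problem, and implements the KF fusion rule (5).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assignment_matrix_reduced(Y1, Ypsi, Psi):
    """A_red in (8): entry (i, j) is the Mahalanobis distance (9)."""
    N = len(Y1)
    A = np.zeros((N, N))
    for j in range(N):
        Pj, (y_psi, R_psi) = Psi[j], Ypsi[j]
        for i, (y1, R1) in enumerate(Y1):
            r = Pj @ y1 - y_psi                  # Psi_(j) y_1(i) - y_Psi(j)
            S = Pj @ R1 @ Pj.T + R_psi           # Psi_(j) S_(ij) Psi_(j)^T
            A[i, j] = r @ np.linalg.solve(S, r)  # r^T S^{-1} r
    return A

def associate(A):
    """Solve the assignment problem (11): minimize tr(Pi A)."""
    rows, cols = linear_sum_assignment(A)
    Pi = np.zeros_like(A)
    Pi[cols, rows] = 1.0  # then tr(Pi A) = sum of A[rows, cols]
    return Pi

def kalman_fuser(y1, R1, y_psi, R_psi, Psi_i):
    """MSE optimal fusion (5) of a full and a dimension-reduced estimate."""
    P = np.linalg.inv(np.linalg.inv(R1) + Psi_i.T @ np.linalg.solve(R_psi, Psi_i))
    xhat = P @ (np.linalg.solve(R1, y1) + Psi_i.T @ np.linalg.solve(R_psi, y_psi))
    return xhat, P
```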
_Remark 1_.: Let \(Z=[z_{ij}]\), where \(z_{ij}\in\{0,1\}\). The problem in (11) is a matrix version of

\[\begin{aligned}\underset{Z}{\text{minimize}}&\quad\sum_{i,j}z_{ij}[\mathcal{A}]_{ij}\\ \text{subject to}&\quad\sum_{i}z_{ij}=1,\quad\forall j,\\ &\quad\sum_{j}z_{ij}=1,\quad\forall i.\end{aligned}\]

This formulation is more common in the MTT literature [16]. However, here we use (11).

### _Motivating Example_

We will now illustrate how the choice of \(\Psi_{(i)}\) affects the association performance. Consider the scenario in Fig. 2(a), where \(N=2\), \(n=2\) and \(m=1\). Each agent has a local estimate of each of the two targets as defined in Fig. 2(a), where \(R_{1(1)}=R_{1(2)}\) and \(R_{2(1)}=R_{2(2)}\). Assume

\[\Psi_{(1)}=\Psi_{(2)}=\Psi=\left[\cos\alpha\quad\sin\alpha\right],\]

where \(\alpha\) is an angle. Based on this parametrization it is possible to define \(\mathcal{A}_{\text{red}}\) as a function of \(\alpha\). Let

\[J_{0}=\left.\operatorname{tr}(\Pi\mathcal{A}_{\text{red}})\right|_{\Pi=\Pi_{0}}=r_{(11)}^{2}+r_{(22)}^{2},\]

\[J_{e}=\left.\operatorname{tr}(\Pi\mathcal{A}_{\text{red}})\right|_{\Pi=\left[\begin{smallmatrix}0&1\\ 1&0\end{smallmatrix}\right]}=r_{(12)}^{2}+r_{(21)}^{2},\]

be the costs corresponding to correct and incorrect assignment, respectively. By construction \(J_{0}\), \(J_{e}\) and \(\operatorname{tr}(P_{(1)})=\operatorname{tr}(P_{(2)})\) are functions of \(\alpha\). The fusion and association performance with respect to (w.r.t.) \(\alpha\) is evaluated by computing \(J_{0}\), \(J_{e}\) and \(\operatorname{tr}(P_{(i)})\) for each \(\alpha\in[0^{\circ},180^{\circ}]\). The results are shown in Fig. 2(b). The fusion optimal \(\Psi\) corresponds to \(\alpha^{\star}=90^{\circ}\). However, this \(\Psi\) lies in the interval where \(J_{0}>J_{e}\), which would imply incorrect assignment. To have correct assignment in the dimension-reduced case while maintaining good fusion performance, the selected \(\Psi\) should be such that it minimizes \(\operatorname{tr}(P_{(i)})\) subject to \(J_{0}<J_{e}\).

### _Problem Formalization_

Assume the targets \(x_{(1)},\ldots,x_{(N)}\) are well separated such that solving the assignment problem in (11) with \(\mathcal{A}=\mathcal{A}_{\text{full}}\) yields \(\Pi_{0}\). Moreover, assume that agent 2 has no knowledge about \(\mathcal{Y}_{1}\). The problem is, at agent 2, to compute \(\Psi_{(1)},\ldots,\Psi_{(N)}\in\mathbb{R}^{1\times n}\) such that when agent 1 solves (11) with \(\mathcal{A}=\mathcal{A}_{\text{red}}\) the solution \(\Pi\) is as close as possible to \(\Pi_{0}\). In other words, since it in general is not possible to obtain correct association in the dimension-reduced case, we want to compute \(\Psi_{(1)},\ldots,\Psi_{(N)}\) in such a way that the association is not degraded too much. The focus is on the case \(m=1\). However, some of the results are given for arbitrary \(m\geq 1\).

_Remark 2_.: The considered problem is not the common association problem of network-centric MTT where received tracks are associated with local tracks and correct assignment \(\Pi_{0}\) is _unknown_. Here, the correct assignment is _known_ by construction and hence, for the presentation, we have the freedom of defining \(\mathcal{A}_{\text{full}}\) and \(\mathcal{A}_{\text{red}}\) such that \(\Pi_{0}=I\).

Fig. 2: Motivating example. Two agents estimate two targets. By construction \(\Psi_{(1)}=\Psi_{(2)}=\Psi\), where \(\Psi(\alpha)=\left[\cos\alpha\quad\sin\alpha\right]\) and \(\alpha\in[0^{\circ},180^{\circ}]\). The dashed lines in (a) represent projections of the state estimates along \(\Psi(0^{\circ})\) and \(\Psi(90^{\circ})\).
The effect of \(\Psi\) on the fusion and association performance is evaluated by varying \(\alpha\). The fusion loss function is \(\operatorname{tr}(P_{(i)})\) and the association loss function is \(\operatorname{tr}(\Pi\mathcal{A}_{\text{red}})\), with \(J_{0}\) and \(J_{e}\) defined as the losses corresponding to correct and incorrect assignment, respectively. The fusion optimal \(\Psi\) is given by \(\alpha^{\star}=90^{\circ}\). At \(\alpha^{\star}\), \(\Psi y_{1(1)}=\Psi y_{2(2)}\) and \(\Psi y_{1(2)}=\Psi y_{2(1)}\), which implies \(J_{e}=0<J_{0}\). ## III Problem Analysis In this section we examine properties of the considered association problem. Sufficient conditions for correct assignment are given. An example is used to show that the problem is further complicated by inherent randomness. Statistical properties of the problem are derived at the end to be used in the subsequent section. ### _A Sufficient Condition for Correct Assignment_ Consider now an oracle's perspective. The example of Sec. II-D illustrates an important property of the problem. That is, for \(\Psi_{(j)}\neq 0\) and \(\bar{y}_{(ij)}\neq 0\) \[\Psi_{(j)}\perp\bar{y}_{(ij)}^{\mathsf{T}}\iff\Psi_{(j)}\bar{y}_{(ij)}=0,\] where \(\bar{y}_{(ij)}=y_{1(i)}-y_{2(j)}\). From this it can be inferred that for the association we want \[\Psi_{(j)}\bar{y}_{(jj)}=0\quad\wedge\quad i\neq j\implies\Psi_{(j)}\bar{y}_{(ij)}\neq 0, \tag{12}\] where \(\wedge\) denotes logical conjunction, since in this case \(r_{(jj)}^{2}=0\) and \(r_{(ij)}^{2}>0\) if \(i\neq j\). A sufficient condition for correct assignment is hence that (12) holds for all \(j\), as this would imply \(\mathrm{tr}(\mathcal{A}_{\text{red}})=0\). However, by assumption agent 2 has no knowledge about \(\mathcal{Y}_{1}\) and hence without further knowledge agent 2 cannot compute \(\Psi_{(j)}\) such that (12) is satisfied. ### _Problem Properties_ In the example of the previous section the fusion optimal \(\Psi\) gave incorrect association. Luckily, it is not generally the case that the fusion optimal \(\Psi\) yields incorrect assignments. Unfortunately, it is also impossible to say anything general about the tradeoff between fusion and association performance. The main reasons for this are described below. Consider \(\Psi_{(j)}^{\mathsf{T}}\in\mathbb{R}^{n}\), and let \(Q_{(j)}=R_{1(j)}^{2}\in\mathbb{S}_{++}^{n}\) and \(S_{(jj)}=R_{1(j)}+R_{2(j)}\in\mathbb{S}_{++}^{n}\). In the fusion case the optimal \(\Psi_{(j)}\) solves [11] \[\underset{\|\Psi_{(j)}\|=1}{\text{maximize}}\quad\frac{\Psi_{(j)}Q_{(j)}\Psi_{(j)}^{\mathsf{T}}}{\Psi_{(j)}S_{(jj)}\Psi_{(j)}^{\mathsf{T}}}. \tag{13}\] Hence the fusion optimal \(\Psi_{(j)}\) for a certain target \(x_{(j)}\) can be solved in isolation from the other targets. This is not true in the association problem, where the optimal \(\Psi_{(j)}\) for a certain target \(x_{(j)}\) depends on all estimates in both \(\mathcal{Y}_{1}\) and \(\mathcal{Y}_{2}\) through \(\mathcal{A}_{\text{red}}\). A slightly less restrictive sufficient condition for correct assignment, cf. (12), is that for each \(j\) \[r_{(jj)}^{2}<r_{(ij)}^{2},\quad\forall i\neq j. \tag{14}\] If this condition holds, nearest neighbor [16] association yields the same results as global nearest neighbor (GNN) association.
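To make the quantities above concrete, the following minimal Python sketch (assuming NumPy and SciPy are available) computes a fusion-optimal \(\Psi_{(j)}\) via the generalized eigenvalue problem underlying (13), builds the assignment matrices of (6)-(9), solves (11) with a standard linear assignment solver, and checks the sufficient condition (14). All function and variable names are illustrative; this is not the authors' reference implementation.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import linear_sum_assignment

def fusion_optimal_psi(R1, R2):
    """Fusion-optimal projection for one target, cf. (13): maximize
    Psi Q Psi^T / (Psi S Psi^T) with Q = R1 @ R1 and S = R1 + R2,
    solved as a generalized symmetric eigenvalue problem."""
    w, V = eigh(R1 @ R1, R1 + R2)
    psi = V[:, np.argmax(w)]              # eigenvector of the largest eigenvalue
    return (psi / np.linalg.norm(psi)).reshape(1, -1)

def assignment_matrices(y1, R1, y2, R2, Psi, yPsi, RPsi):
    """A_full of (6)-(7) and A_red of (8)-(9), evaluated from an oracle's
    perspective. Here yPsi[j] = Psi[j] @ y2[j] and
    RPsi[j] = Psi[j] @ R2[j] @ Psi[j].T are the received reduced estimates."""
    N = len(y1)
    A_full, A_red = np.zeros((N, N)), np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            yb = y1[i] - y2[j]
            A_full[i, j] = yb @ np.linalg.solve(R1[i] + R2[j], yb)
            r = Psi[j] @ y1[i] - yPsi[j]
            Sr = Psi[j] @ R1[i] @ Psi[j].T + RPsi[j]
            A_red[i, j] = r @ np.linalg.solve(Sr, r)
    return A_full, A_red

def gnn_assignment(A):
    """Solve (11): estimate i of agent 1 is matched to column col[i]."""
    _, col = linear_sum_assignment(A)
    return col

def sufficient_condition_holds(A_red):
    """Check (14): each diagonal entry is the strict minimum of its column."""
    d = np.diag(A_red)
    masked = A_red + np.diag(np.full(A_red.shape[0], np.inf))
    return bool(np.all(d < masked.min(axis=0)))
```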
The condition in (14) can also be expressed as [11] \[\frac{\Psi_{(j)}\bar{y}_{(jj)}\bar{y}_{(jj)}^{\mathsf{T}}\Psi_{(j)}^{\mathsf{T}}}{\Psi_{(j)}S_{(jj)}\Psi_{(j)}^{\mathsf{T}}}<\frac{\Psi_{(j)}\bar{y}_{(ij)}\bar{y}_{(ij)}^{\mathsf{T}}\Psi_{(j)}^{\mathsf{T}}}{\Psi_{(j)}S_{(ij)}\Psi_{(j)}^{\mathsf{T}}},\quad\forall i\neq j, \tag{15}\] where each fraction is structurally similar to the fraction in (13). However, a complication compared to the fusion case is that \(r_{(ij)}^{2}\) is a realization of a random variable \[\mathbf{r}_{(ij)}^{2}=\bar{\mathbf{y}}_{(ij)}^{\mathsf{T}}\Psi_{(j)}^{\mathsf{T}}\left(\Psi_{(j)}S_{(ij)}\Psi_{(j)}^{\mathsf{T}}\right)^{-1}\Psi_{(j)}\bar{\mathbf{y}}_{(ij)},\] where \(\bar{\mathbf{y}}_{(ij)}=\mathbf{y}_{1(i)}-\mathbf{y}_{2(j)}\). Hence, assuming that agent 2 has access to \(R_{1(i)}\) and a good estimate of \(x_{(i)}\), the fusion optimal \(\Psi_{(i)}\) could be computed while it would still be difficult to predict \(r_{(ij)}^{2}\) due to randomness. Fig. 3 shows two possible realizations of each of the random variables \[\mathbf{y}_{1(1)}=x_{(1)}+\mathbf{v}_{1(1)},\qquad\mathbf{y}_{2(1)}=x_{(1)}+\mathbf{v}_{2(1)},\] \[\mathbf{y}_{1(2)}=x_{(2)}+\mathbf{v}_{1(2)},\qquad\mathbf{y}_{2(2)}=x_{(2)}+\mathbf{v}_{2(2)},\] where \(\mathbf{v}_{1(1)},\mathbf{v}_{1(2)}\sim\mathcal{N}(0,R_{1})\) and \(\mathbf{v}_{2(1)},\mathbf{v}_{2(2)}\sim\mathcal{N}(0,R_{2})\). Since the covariances are the same in each case and since by assumption \(R_{1(1)}=R_{1(2)}=R_{1}\) and \(R_{2(1)}=R_{2(2)}=R_{2}\), we have that the fusion optimal \(\Psi_{(j)}\) satisfy \(\Psi_{(1)}=\Psi_{(2)}=\Psi\) in both cases. Computing \(\mathcal{A}_{\text{red}}(\Psi)\) in realization 1 and realization 2 yields \[\mathcal{A}_{1}=\begin{bmatrix}0.05&1.01\\ 0.31&0.05\end{bmatrix},\qquad\mathcal{A}_{2}=\begin{bmatrix}0.11&0.01\\ 0.01&0.11\end{bmatrix},\] respectively. In realization 1 we will hence have correct assignment \(\Pi_{0}\) while in realization 2 the incorrect combination is chosen. The example illustrates that, due to the inherent randomness, it is in general impossible to decide if a fusion optimal \(\Psi_{(j)}\) will imply correct or incorrect assignment without knowing the actual realization. Fig. 3: Two noise realizations of the same scenario. The target states and the covariances \(R_{1(1)}=R_{1(2)}\) and \(R_{2(1)}=R_{2(2)}\) are the same in both realizations. What differs are the state estimates \(y_{1(1)}\), \(y_{2(1)}\), \(y_{1(2)}\) and \(y_{2(2)}\). In realization 1 correct assignment is obtained while in realization 2 incorrect assignment is obtained. ### _Statistical Properties_ Assume \(m\geq 1\). By construction \[\Psi_{(j)}\bar{\mathbf{y}}_{(ij)}\sim\mathcal{N}\left(\Psi_{(j)}\bar{x}_{(ij)},\Psi_{(j)}S_{(ij)}\Psi_{(j)}^{\mathsf{T}}\right),\] where \(\bar{x}_{(ij)}=x_{(i)}-x_{(j)}\). Hence [17] \[\mathbf{r}_{(ij)}^{2}\sim\begin{cases}\chi_{m}^{2},&\text{ if }i=j,\\ \chi_{m,\nu}^{2},&\text{ if }i\neq j,\end{cases} \tag{16}\] where \(\chi_{m}^{2}\) is the central chi-squared distribution with \(m\) degrees of freedom, and \(\chi_{m,\nu}^{2}\) is the noncentral chi-squared distribution, where \(\nu\) is the noncentrality parameter. The expectation value is \[\mathsf{E}\left(\mathbf{r}_{(ij)}^{2}\right)=m+\nu_{(ij)}, \tag{17}\] where \(\nu_{(ij)}=\bar{x}_{(ij)}^{\mathsf{T}}\Psi_{(j)}^{\mathsf{T}}\left(\Psi_{(j)}S_{(ij)}\Psi_{(j)}^{\mathsf{T}}\right)^{-1}\Psi_{(j)}\bar{x}_{(ij)}\) is the noncentrality parameter.
The variance is given by [17] \[\mathrm{var}\left(\mathbf{r}_{(ij)}^{2}\right)=2m+4\nu_{(ij)}. \tag{18}\] One conclusion is that as \(\nu_{(ij)}\) increases the relative effect of randomness decreases, since \(\mathsf{E}(\mathbf{r}_{(ij)}^{2})\) scales as \(\nu_{(ij)}\) while \(\sqrt{\mathrm{var}(\mathbf{r}_{(ij)}^{2})}\) only scales as \(\sqrt{\nu_{(ij)}}\). This result is important and is used in the solution proposed in the next section. ## IV Preserving Correct Assignment With Dimension-Reduced Estimates In this section a method for preserving high association quality is suggested. Based on the analysis of Sec. III, an optimization formulation is provided for computation of \(\Psi_{(j)}\). This leads to the proposed descent-based optimization strategy, where a key ingredient and contribution is an adaptive step size. At the end we provide a numerical example and some comments about the optimization strategy. ### _Approximated Assignment Matrix_ The proposed solution is based on the analysis of the previous section. In particular, we estimate \(r_{(ij)}^{2}\) using \(\mathsf{E}(\mathbf{r}_{(ij)}^{2})\) in (17). To compute \(r_{(ij)}^{2}\) agent 2 must have access to both \((y_{1(i)},R_{1(i)})\) and \((y_{2(j)},R_{2(j)})\), but \((y_{1(i)},R_{1(i)})\) is unknown to agent 2. An approximation to \((y_{1(i)},R_{1(i)})\) which is already locally available is \((y_{2(i)},R_{2(i)})\). Let \[\hat{r}_{(ij)}^{2}=\hat{y}_{(ij)}^{\mathsf{T}}\Psi_{(j)}^{\mathsf{T}}\left(\Psi_{(j)}\hat{S}_{(ij)}\Psi_{(j)}^{\mathsf{T}}\right)^{-1}\Psi_{(j)}\hat{y}_{(ij)}, \tag{19}\] where \(\hat{y}_{(ij)}=y_{2(i)}-y_{2(j)}\) and \(\hat{S}_{(ij)}=R_{2(i)}+R_{2(j)}\) such that \(\hat{\mathbf{y}}_{(ij)}\sim\mathcal{N}(\bar{x}_{(ij)},\hat{S}_{(ij)})\). This is consistent with \(\mathbf{r}_{(ij)}^{2}\) in the sense that \[\mathsf{E}\left(\hat{\mathbf{r}}_{(ij)}^{2}\right)\] \[=\begin{cases}m,&\text{if $i=j$,}\\ m+\bar{x}_{(ij)}^{\mathsf{T}}\Psi_{(j)}^{\mathsf{T}}\left(\Psi_{(j)}\hat{S}_{(ij)}\Psi_{(j)}^{\mathsf{T}}\right)^{-1}\Psi_{(j)}\bar{x}_{(ij)},&\text{if $i\neq j$,}\end{cases}\] which is identical to (17) except that \(S_{(ij)}\) is replaced by \(\hat{S}_{(ij)}\). We then define the _approximated assignment matrix_ as \[\hat{\mathcal{A}}_{\text{red}}=\begin{bmatrix}\hat{r}_{(11)}^{2}&\ldots&\hat{r}_{(1N)}^{2}\\ \vdots&\ddots&\vdots\\ \hat{r}_{(N1)}^{2}&\ldots&\hat{r}_{(NN)}^{2}\end{bmatrix}. \tag{20}\] ### _Proposed Solution_ Since we only have access to an approximation \(\hat{\mathcal{A}}_{\text{red}}\) of \(\mathcal{A}_{\text{red}}\), \(\Psi_{(j)}\) is computed based on the sufficient condition in Sec. III, cf. (14). The condition is utilized because we want to have some margin when choosing \(\Psi_{(j)}\), to avoid that \(r_{(ij)}^{2}\) is zero or very small if \(i\neq j\). Moreover, if \(\Psi_{(j)}\) satisfies this sufficient condition there is no need to take into account the other \(\Psi_{(i)},i\neq j\) when computing \(\Psi_{(j)}\); correct assignment is obtained regardless. Consider now a certain \(j\) and \(\Psi_{(j)}\). Let \[f_{i}(z)=\frac{z^{\mathsf{T}}\hat{Y}_{(ij)}z}{z^{\mathsf{T}}\hat{S}_{(ij)}z},\qquad\quad\hat{Y}_{(ij)}=\hat{y}_{(ij)}\hat{y}_{(ij)}^{\mathsf{T}}, \tag{21}\] be defined for all \(i\neq j\). To maximize \(f_{i}(z)\) simultaneously for all \(i\) is in general impossible since this is a _multiobjective_ optimization problem. However, we can consider a worst-case approach and maximize the minimum \(f_{i}(z)\).
This implies a _maximin_ formulation where \(\Psi_{(j)}\) is computed using \[\underset{\Psi_{(j)}}{\mathrm{maximize}}\quad\left(\underset{i\neq j}{\mathrm{min}}\quad f_{i}(\Psi_{(j)}^{\mathsf{T}})\right). \tag{22}\] The problem in (22) is a _nonconvex problem_ involving optimization over a finite set of quadratic form ratios. The problem is difficult to solve in general and therefore the following optimization strategy is proposed. ### _Optimization Strategy_ For each individual \(f_{i}(z)\), the \(z\) that maximizes \(f_{i}(z)\) is known to be given by the eigenvector \(u\) that corresponds to the maximum eigenvalue \(\lambda\) of [11] \[\hat{Y}_{(ij)}u=\lambda\hat{S}_{(ij)}u. \tag{23}\] As \(\hat{Y}_{(ij)}\in\mathbb{S}_{+}^{n}\) and \(\mathrm{rank}(\hat{Y}_{(ij)})=1\), this eigenvalue problem has only one strictly positive eigenvalue \(\lambda\), for which the corresponding eigenvector is denoted by \(u_{i}\). Since \(u_{i}\) in general differs for different \(i\), it is not possible to maximize all \(f_{i}(z)\) simultaneously. However, for a certain \(z\) we know the values of all \(f_{i}(z)\) and hence are able to compute \[i^{*}=\underset{i\neq j}{\mathrm{arg\,min}}\quad f_{i}(z). \tag{24}\] To increase \(f_{i^{*}}\) it is suggested that \[z\gets z+\alpha u_{i^{*}}, \tag{25}\] where \(\alpha\) is the step size used to traverse along \(u_{i^{*}}\). If \(|\alpha|\) is too large, there is a risk that \(f_{i}\) for some other \(i\neq i^{*}\) is severely decreased; if \(|\alpha|\) is too small, convergence is slow. From Proposition 3 we have that a first-order approximation of \(f_{i}\) evaluated at \(z\) in the direction of \(\alpha u_{i^{*}}\) is given by \[f_{i}(z+\alpha u_{i^{*}})\approx f_{i}(z)+2\alpha\frac{u_{i^{*}}^{\mathsf{T}}(\hat{Y}_{(ij)}-f_{i}(z)\hat{S}_{(ij)})z}{z^{\mathsf{T}}\hat{S}_{(ij)}z}. \tag{26}\] We proceed by solving \[f_{i}(z)+2\alpha\frac{u_{i^{*}}^{\mathsf{T}}(\hat{Y}_{(ij)}-f_{i}(z)\hat{S}_{(ij)})z}{z^{\mathsf{T}}\hat{S}_{(ij)}z}\] \[\quad=f_{i^{*}}(z)+2\alpha\frac{u_{i^{*}}^{\mathsf{T}}(\hat{Y}_{(i^{*}j)}-f_{i^{*}}(z)\hat{S}_{(i^{*}j)})z}{z^{\mathsf{T}}\hat{S}_{(i^{*}j)}z}, \tag{27}\] for each \(i\neq j,i^{*}\). This yields \(N-2\) solutions for \(\alpha\), where some might be negative and others positive. Since the task is to increase \(f_{i^{*}}\) while not decreasing the other \(f_{i}\) too much, \(\alpha\) is chosen such that \(|\alpha|\) is the smallest among all the ones that satisfy \[\alpha\frac{u_{i^{*}}^{\mathsf{T}}(\hat{Y}_{(i^{*}j)}-f_{i^{*}}(z)\hat{S}_{(i^{*}j)})z}{z^{\mathsf{T}}\hat{S}_{(i^{*}j)}z}>0. \tag{28}\] This last condition is introduced to ensure that the correct sign is chosen for \(\alpha\). The operations in (24)-(28) are performed iteratively until some termination criterion is met. The optimization algorithm is summarized in Algorithm 2. **Proposition 3**.: Let \(u,z\in\mathbb{R}^{n}\), \(Y,S\in\mathbb{R}^{n\times n}\) and \(f(z)=(z^{\mathsf{T}}Yz)/(z^{\mathsf{T}}Sz)\), where \(z\neq 0\) and \(\mathrm{rank}(S)=n\).
Then a first-order approximation of \(f(z+\alpha u)\), for any scalar \(\alpha\), is given by \[f(z+\alpha u)\approx f(z)+2\alpha\frac{u^{\mathsf{T}}(Y-f(z)S)z}{z^{\mathsf{T}}Sz}.\] Proof.: From [18] we have \[\frac{\partial f(z)}{\partial z} =-\frac{2Szz^{\mathsf{T}}Yz}{(z^{\mathsf{T}}Sz)^{2}}+\frac{2Yz}{z^{\mathsf{T}}Sz}=-\frac{2f(z)Sz}{z^{\mathsf{T}}Sz}+\frac{2Yz}{z^{\mathsf{T}}Sz}\] \[=2\frac{(Y-f(z)S)}{z^{\mathsf{T}}Sz}z.\] A first-order approximation of \(f(z+\alpha u)\) is given by \[f(z+\alpha u) \approx f(z)+\left.\alpha u^{\mathsf{T}}\frac{\partial f(z^{\prime})}{\partial z^{\prime}}\right|_{z^{\prime}=z}\] \[=f(z)+2\alpha\frac{u^{\mathsf{T}}(Y-f(z)S)z}{z^{\mathsf{T}}Sz}.\] ### _Example_ As an example of the proposed optimization strategy, consider a scenario with \(N=3\) and \(n=4\). Assume \(j=3\). Since \(N=3\) we consider two loss functions \[f_{1}(z)=\frac{z^{\mathsf{T}}\hat{Y}_{(13)}z}{z^{\mathsf{T}}\hat{S}_{(13)}z},\qquad\quad f_{2}(z)=\frac{z^{\mathsf{T}}\hat{Y}_{(23)}z}{z^{\mathsf{T}}\hat{S}_{(23)}z}.\] The multiobjective problem of maximizing \(f_{1}\) and \(f_{2}\) simultaneously is not solvable, hence we will use the maximin approach and Algorithm 2. The original Algorithm 2 uses an adaptive step size \(\alpha\in[\alpha_{\text{min}},\alpha_{\text{max}}]\). We will compare this to the same algorithm with: (i) a small fixed step size \(\alpha=\alpha_{\text{min}}\), and (ii) a large fixed step size \(\alpha=\alpha_{\text{max}}\). The optimization results for the three cases, which all use the same initial vector \(z_{0}\), are shown in Fig. 4 for \(k_{\max}=25\) iterations. In Fig. 4a \(f_{1}\) is plotted against \(f_{2}\). The yellow dots represent \(f_{1}\) and \(f_{2}\) at randomly sampled \(z\). Fig. 4b visualizes \[f_{\min}=\min\;\left(f_{1},f_{2}\right),\] for each iteration \(k=1,2,\ldots,k_{\max}\). In this case the adaptive step size provides the best results. The small fixed step size gives slow convergence, while the large fixed step size oscillates because the first-order approximation becomes inaccurate for long steps. It cannot be concluded whether Algorithm 2 has reached a global maximum or merely a stationary point. ### _Comments_ In essence the proposed optimization strategy in Algorithm 2 is an iterative descent-based optimization method, where the descent directions are chosen from a finite set of predefined directions. In this interpretation steps 4-6 correspond to a backtracking line search where the step size \(\alpha\) is selected. Algorithm 2 takes \(\alpha_{\text{min}}>0\) as an input to avoid getting stuck at local minima, and \(\alpha_{\text{max}}>\alpha_{\text{min}}\) such that the linear approximation given by (26) does not become too poor. The stopping criterion used in this paper is \(k>k_{\max}\), i.e., the algorithm terminates after \(k_{\max}\) iterations. It is possible to include more sophisticated optimization techniques for better performance, but such techniques are out of the scope of this paper. It should be emphasized that there are no guarantees that Algorithm 2 converges to a global maximum w.r.t. the problem in (22). In fact, simulations verify that in general only local maxima are reached; a minimal code sketch of the procedure is included after the concluding remarks below. ## V Numerical Evaluation In this section we provide a numerical evaluation of Algorithm 2. The association performance when computing \(\Psi_{(j)}\) using Algorithm 2 is compared to the case when \(\Psi_{(j)}\) is computed using Algorithm 1. ### _Simulation Specification_ A target tracking scenario with \(N=10\) targets is assumed.
It is assumed that the dimensionality \(n=6\), which we here interpret as a constant acceleration model in two spatial dimensions [19]. For each target \(x_{(i)}\) a pair of covariances \(R_{1(i)}\) and \(R_{2(i)}\) are defined and are held fixed throughout the simulations. A _Monte Carlo_ (MC) approach is used, where in each MC run the state estimates \(y_{1(i)}\) and \(y_{2(i)}\) are sampled using \(R_{1(i)}\) and \(R_{2(i)}\), respectively, and the model in (1). We also use a _scaling factor_ \(c\) that scales the two spatial uncertainty components. Hence, for larger \(c\) the association problem becomes more difficult to solve, and for smaller \(c\) the association problem becomes easier to solve. The assumed target tracking scenario is depicted in Fig. 5 with \(c=1\). To evaluate association performance the _incorrect assignment rate_ \(q_{e}\) is computed for a certain \(c\) as the mean over all MC runs of the number of incorrect assignments divided by \(N\). We compute \(q_{e}\) for the following cases: * \((\mathcal{Y}_{1},\mathcal{Y}_{2})\): The full estimate configuration where agent 1 receives \(\mathcal{Y}_{2}\) from agent 2. * \((\mathcal{Y}_{1},\mathcal{Y}_{\Psi})\) + \(\Psi_{(j)}\) using Alg. 1: A dimension-reduced configuration where agent 1 receives \(\mathcal{Y}_{\Psi}\) from agent 2 and \(\Psi_{(j)}\) is computed using Algorithm 1. In this case it is assumed that agent 2 has access to \(\mathcal{Y}_{1}\) such that the fusion optimal \(\Psi_{(j)}\) can be computed. * \((\mathcal{Y}_{1},\mathcal{Y}_{\Psi})\) + \(\Psi_{(j)}\) using Alg. 2: A dimension-reduced configuration where agent 1 receives \(\mathcal{Y}_{\Psi}\) from agent 2 and \(\Psi_{(j)}\) is computed using the proposed optimization strategy of Algorithm 2. The standard deviation of \(q_{e}\) is also computed. _Remark 4_.: Since agent 1 needs \(\Psi_{(1)},\ldots,\Psi_{(N)}\) to be able to fuse the estimates in \(\mathcal{Y}_{\Psi}\) with its local estimates, agent 2 must also include \(\Psi_{(1)},\ldots,\Psi_{(N)}\) when transmitting \(\mathcal{Y}_{\Psi}\). Functionality for encoding \(\Psi_{(j)}\) is described in [12] with Matlab\({}^{\text{\tiny\textregistered}}\) code available at [https://gitlab.com/robinforsling/dt/](https://gitlab.com/robinforsling/dt/). ### _Results_ The results of the numerical evaluation are visualized in Fig. 6, where \(q_{e}\) is plotted against \(c\). For each value of \(c\), \(M=1\,000\) MC runs are evaluated. The quantity \(q_{e}\) is computed from the same realizations of \(\mathcal{Y}_{1}\) and \(\mathcal{Y}_{2}\) for each of the cases described previously. The shaded areas in the plot represent 1-\(\sigma\) confidence intervals. Perfect association is maintained in the full estimate case for all values of \(c\). The approach that utilizes Algorithm 2 clearly outperforms the approach that computes \(\Psi_{(j)}\) for optimal fusion performance. Fig. 4: Example of the proposed optimization strategy with \(N=3\) and \(n=4\). Algorithm 2 is compared to the same algorithm but with fixed step size. The black circle marks the common initial value \(z_{0}\). The squares mark the final point of each case. Yellow dots represent \(f_{1}\) and \(f_{2}\) evaluated at randomly generated \(z\). A small step size yields slow convergence. A large step size yields inaccurate results but possibly a higher convergence rate. The adaptive step size outperforms the other two.
Fig. 5: Numerical scenario. The targets are represented by black dots. The ellipse around a target illustrates the uncertainty of the corresponding estimate in the two spatial dimensions. ## VI Concluding Remarks The association problem for multitarget tracking in a dimension-reduced context has been proposed. In it, the track estimates to be communicated from one agent are dimension-reduced with respect to the association quality in the agent that receives the dimension-reduced estimates. The implied problem was analyzed theoretically, where it was illustrated that the problem is versatile and complex, and where no general solution exists. An _optimization strategy_ has been suggested for computing dimension-reduced estimates while preserving _association performance_. The optimization strategy was demonstrated using a numerical evaluation in which the suggested method outperformed a method that reduces dimensionality based on optimal _fusion performance_. Possible future extensions include a generalization of Algorithm 2 for the \(m>1\) case, and a more general configuration where agents have different sets of tracks. Another possibility is to consider a setting where there is partial knowledge available about the local estimates of the agent that receives the dimension-reduced estimates. A joint problem formulation which includes both fusion and association performance simultaneously is also of interest.
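For readers who want to experiment with the approach, the following compact Python sketch implements the adaptive-step maximin iteration described by (21)-(28) for the \(m=1\) case. It is a simplified reconstruction under illustrative step-size safeguards and a fixed iteration budget; it is not the published implementation (the Matlab code linked in Remark 4 is the authoritative reference).

```python
import numpy as np
from scipy.linalg import eigh

def ratio(z, Y, S):
    """Quadratic-form ratio f_i(z) from (21)."""
    return float(z @ Y @ z) / float(z @ S @ z)

def maximin_psi(j, y2, R2, z0, a_min=1e-3, a_max=1.0, k_max=25):
    """Adaptive-step maximin iteration sketched from (22)-(28), m = 1.
    y2, R2: agent 2's local estimates and covariances (lists of arrays),
    used to form the approximated quantities in (19)-(21)."""
    idx = [i for i in range(len(y2)) if i != j]
    Y = {i: np.outer(y2[i] - y2[j], y2[i] - y2[j]) for i in idx}  # rank-1
    S = {i: R2[i] + R2[j] for i in idx}
    u = {}
    for i in idx:                          # ascent directions, cf. (23)
        w, V = eigh(Y[i], S[i])
        u[i] = V[:, np.argmax(w)]
    z = np.asarray(z0, dtype=float).copy()
    for _ in range(k_max):
        f = {i: ratio(z, Y[i], S[i]) for i in idx}
        i_star = min(f, key=f.get)         # worst ratio, cf. (24)
        # first-order slope of each f_i along u[i_star], cf. (26)
        s = {i: 2.0 * float(u[i_star] @ (Y[i] - f[i] * S[i]) @ z)
                 / float(z @ S[i] @ z) for i in idx}
        # candidate steps equalizing the linearized ratios, cf. (27)-(28)
        cands = []
        for i in idx:
            if i == i_star or abs(s[i_star] - s[i]) < 1e-12:
                continue
            a = (f[i] - f[i_star]) / (s[i_star] - s[i])
            if a * s[i_star] > 0:          # sign condition (28)
                cands.append(a)
        if cands:
            a = float(np.clip(min(cands, key=abs), -a_max, a_max))
            if abs(a) < a_min:
                a = a_min * (1.0 if a > 0 else -1.0)
        else:
            a = a_min if s[i_star] >= 0 else -a_min
        z = z + a * u[i_star]              # update step (25)
    return (z / np.linalg.norm(z)).reshape(1, -1)
```

The returned row vector can then be used by agent 2 to form the reduced estimate \(y_{\Psi(j)}=\Psi_{(j)}y_{2(j)}\) with covariance \(R_{\Psi(j)}=\Psi_{(j)}R_{2(j)}\Psi_{(j)}^{\mathsf{T}}\) before transmission.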
2308.02374
Optimal Sizing of On-site Renewable Resources for Offshore Microgrids
The offshore oil and natural gas platforms, mostly powered by diesel or gas generators, consume approximately 16TWh of electricity worldwide per year, which emits a large amount of CO2. To limit their contribution to climate change, a proposed solution is to replace the traditional fossil fuel based energy resources with offshore clean energy. One of the main challenges in designing such a system is to ensure that energy demand is met while minimizing cost and reducing environmental impact. To address this challenge, several strategies including microgrid systems consisting of offshore wind turbines, wave energy converters, tidal energy converters, floating photovoltaic systems and battery energy storage systems are being proposed. In this paper, cost optimization for sizing these renewable energy sources is investigated. A cost optimization renewable sizing (CORS) model is proposed to optimize the sizes of the generation and storage resources. The proposed CORS model considers the variability of the power outputs of various renewable energy sources and load, as well as the cost of different generation technologies and the energy storage system. Simulations conducted on three test systems show the proposed resource sizing method significantly reduces the total lifetime cost of energy while maintaining a high level of reliability and sustainability.
Ann Mary Toms, Xingpeng Li, Kaushik Rajashekara
2023-08-04T15:14:28Z
http://arxiv.org/abs/2308.02374v1
# Optimal Sizing of On-site Renewable Resources for Offshore Microgrids ###### Abstract The offshore oil and natural gas platforms, mostly powered by diesel or gas generators, consume approximately 16TWh of electricity worldwide per year, which emits a large amount of CO2. To limit their contribution to climate change, a proposed solution is to replace the traditional fossil fuel based energy resources with offshore clean energy. One of the main challenges in designing such a system is to ensure that energy demand is met while minimizing cost and reducing environmental impact. To address this challenge, several strategies including microgrid systems consisting of offshore wind turbines, wave energy converters, tidal energy converters, floating photovoltaic systems and battery energy storage systems are being proposed. In this paper, cost optimization for sizing these renewable energy sources is investigated. A cost optimization renewable sizing (CORS) model is proposed to optimize the sizes of the generation and storage resources. The proposed CORS model considers the variability of the power outputs of various renewable energy sources and load, as well as the cost of different generation technologies and the energy storage system. Simulations conducted on three test systems show the proposed resource sizing method significantly reduces the total lifetime cost of energy while maintaining a high level of reliability and sustainability. Battery energy storage systems, Floating photovoltaic systems, Microgrid planning, Offshore platforms, Offshore wind turbines, Optimization, Renewable energy sources, Tidal energy converters, Wave energy converters. ## Nomenclature \begin{tabular}{l l} Sets & \\ \(T\) & Set of time intervals. \\ Indices & \\ \(t\) & Time interval. \\ Parameters & \\ \(c_{BESS}^{capital}\) & Capital cost of each kWh of BESS. \\ \(c_{FPV}^{capital}\) & Capital cost of an FPV unit. \\ \(c_{OWT}^{capital}\) & Capital cost of an OWT unit. \\ \(c_{TEC}^{capital}\) & Capital cost of a TEC unit. \\ \(c_{WEC}^{capital}\) & Capital cost of a WEC unit. \\ \(c_{BESS}^{decom}\) & Decommissioning cost for each kWh of BESS. \\ \(c_{FPV}^{decom}\) & Decommissioning cost of an FPV unit. \\ \(c_{OWT}^{decom}\) & Decommissioning cost of an OWT unit. \end{tabular} \begin{tabular}{l l} \(c_{TEC}^{decom}\) & Decommissioning cost of a TEC unit. \\ \(c_{WEC}^{decom}\) & Decommissioning cost of a WEC unit. \\ \(c_{BESS}^{O\&M}\) & O\&M cost of each kWh of the BESS unit per annum. \\ \(c_{FPV}^{O\&M}\) & O\&M cost associated with each FPV unit per annum. \\ \(c_{OWT}^{O\&M}\) & O\&M cost associated with each OWT unit per annum. \\ \(c_{TEC}^{O\&M}\) & O\&M cost associated with each TEC unit per annum. \\ \(c_{WEC}^{O\&M}\) & O\&M cost associated with each WEC unit per annum. \\ \(c_{BESS}^{precom}\) & Pre-commissioning cost of each kWh of BESS. \\ \(c_{FPV}^{precom}\) & Pre-commissioning cost of each FPV unit. \\ \(c_{OWT}^{precom}\) & Pre-commissioning cost of each OWT unit. \\ \(c_{TEC}^{precom}\) & Pre-commissioning cost of each TEC unit. \\ \(c_{WEC}^{precom}\) & Pre-commissioning cost of each WEC unit. \\ \(P_{MAX}^{Char}\) & Maximum charging power of BESS. \\ \(P_{MAX}^{Dis}\) & Maximum discharging power of BESS. \\ \(P_{t}^{FPV}\) & Available solar power at time period \(t\). \\ \(P_{t}^{Load}\) & Load at time period \(t\).
\\ \(P_{t}^{OWT}\) & Available wind power at time period \(t\). \\ \(P_{t}^{TEC}\) & Available tidal power at time period \(t\). \\ \(P_{t}^{WEC}\) & Available wave power at time period \(t\). \\ \(SOC_{min}\) & Minimum state of charge of BESS. \\ \(SOC_{max}\) & Maximum state of charge of BESS. \\ \(T_{e}\) & Expected lifetime of the OHRES system. \\ \(c^{degrad}\) & Battery degradation cost factor. \\ \(\eta_{BESS}^{Char}\) & Charging efficiency of the BESS. \\ \(\eta_{BESS}^{Dis}\) & Discharging efficiency of the BESS. \\ Variables & \\ \(E_{BESS}\) & Total energy capacity of the BESS unit. \\ \(E_{BESS}^{initial}\) & Initial energy capacity of the BESS unit. \\ \(E_{BESS}^{t}\) & Energy capacity of the BESS unit at time period \(t\). \\ \(N_{FPV}\) & Number of floating photovoltaic panels. \\ \(N_{OWT}\) & Number of offshore wind turbines. \\ \(N_{TEC}\) & Number of tidal energy converters. \\ \(N_{WEC}\) & Number of wave energy converters. \\ \(P_{t}^{Char}\) & Charging power of BESS at time period \(t\). \\ \(P_{t}^{Curt}\) & Renewable power curtailment at time period \(t\). \\ \(P_{t}^{Dis}\) & Discharging power of BESS at time period \(t\). \\ \(U_{t}^{Char}\) & Charging status of BESS at time period \(t\); 1 represents charging mode and 0 represents discharging or idle mode. \\ \(U_{t}^{Dis}\) & Discharging status of BESS at time period \(t\); 1 represents discharging mode and 0 represents charging or idle mode. \\ \end{tabular} ## I Introduction Offshore oil and gas (O&G) production plays a pivotal role in the energy landscape of many countries. However, it is worth noting that these operations are quite energy intensive, with some larger and more complex platforms requiring several hundreds of megawatts of power to keep the system running smoothly. Collectively, offshore O&G platforms consume approximately 16TWh of energy per year [1]. In 2019, gas accounted for 21% of global CO\({}_{2}\) emissions from fuel, while oil accounted for 34%, with a significant portion coming from offshore O&G rigs [2]. Offshore O&G rigs in Louisiana, Texas, California, and Alaska account for a significant amount of the USA's oil and gas supply. According to the US Energy Information Administration, the theoretical annual energy that can be extracted from waves off the coasts of the United States is anticipated to be as much as 2640 TWh [3]. According to the National Renewable Energy Laboratory (NREL), the theoretical annual recoverable energy from tidal resources is estimated to be approximately 252 TWh [4]. The potential for offshore wind generation in the US is vast, with more than 2TW estimated [5]. Offshore wind turbines (OWT) are significantly larger than onshore wind turbines. Because offshore winds are stronger and more stable than onshore winds, the capacity factor for offshore wind farms is typically higher [6]-[9]. The US has set a national offshore wind target of 30 GW by 2030 [10]. It is also worth noting that floating wind farms, capable of being built in deep and ultra-deep waters, are rapidly developing. Currently, there exist O&G rigs that are already powered by OWT. The Beatrice wind farm serves as a prime example, featuring two 5MW wind turbines connected to the Beatrice oil production platform in Scotland via submarine cables [11]-[12].
In this paper, an offshore hybrid renewable energy source (OHRES)-based microgrid that integrates wave energy converters (WEC), tidal energy converters (TEC), offshore wind turbines (OWT), floating photovoltaic systems (FPV) and battery energy storage systems (BESS) is considered, envisaging an offshore O&G platform powered solely by renewable sources of energy. A cost optimization for renewable sizing (CORS) model is proposed to optimize the sizes of the generation and storage resources considering various practical factors. The rest of this paper is organized as follows. A literature review is conducted in Section II. The proposed CORS model for OHRES microgrids is explained in Section III. The dataset is described in Section IV, while the calculations are detailed in Section V. Three test cases and the associated simulation results are presented in Section VI. Section VII concludes the paper. Potential future works are discussed in Section VIII. ## II Literature Review Currently, there is an increase in research focused on optimizing the use of renewable energy systems for offshore O&G rigs. Various approaches were studied to meet the electrical demand of O&G rigs by utilizing marine renewable resources, wind energy and solar energy. A feasibility study assessed the integration of WEC and solar energy systems for supplying power to offshore O&G rigs [13]. The results demonstrated that combining these renewable resources led to an increase in electricity production, reduced intra-annual variability, and mitigated intermittency issues. The levelized cost of energy ranged from 140-282 $/MWh [13]. Similar studies were also performed for O&G rigs in the Caspian Sea integrating OWT along with FPV systems [14], and FPV along with ocean thermal energy conversion systems [15]. The results for these studies were also comparable. Researchers are also interested in the introduction of OWT for meeting the needs of O&G rigs. Preliminary studies were conducted to explore the possibility of replacing fossil fuel powered offshore O&G rigs with OWT in Brazil [16]. Studies were conducted to investigate the possibility of operating four 5 MW OWT in conjunction with gas turbines and BESS [17]-[18]. Additionally, another research effort proposed the implementation of a 40MW wind farm to power an isolated offshore O&G rig [19]. The simulation results demonstrate that using OWT to power O&G rigs resulted in cost savings by reducing fuel consumption. Since FPV is a relatively nascent technology, it has greater untapped potential [20]-[23]. Thus, a lucrative alternative is the integration of FPV with other renewables, especially OWT [24]. This type of system hybridization enhances the overall productivity of the energy generation system [25]-[27]. Another potential solution is the combination of TEC with WEC and OWT to generate electricity [28]-[29]. One of the research gaps identified is the lack of studies on the optimal sizing of offshore microgrids that incorporate renewable resources such as WEC, TEC, FPV or OWT. It is suggested that future research should focus on developing advanced optimization techniques for optimal design and sizing of microgrids, which would help maximize the use of renewable resources and reduce the reliance on fossil fuels. Another gap was the need for more comprehensive analysis of the impact of environmental factors such as weather, sea patterns, and energy consumption profiles on the performance of offshore grids.
There is also a need for sensitivity analysis to test the robustness of these microgrids under different scenarios and parameter settings. ## III CORS Model for Sizing OHRES Microgrid The proposed CORS sizing model for the OHRES microgrid is designed to be efficient, flexible, and cost-effective. By using renewable energy sources and smart energy storage solutions, it aims to reduce the carbon footprint of offshore platforms, contributing to a sustainable future. The proposed CORS model for sizing candidate resources in the OHRES microgrid is composed of (1)-(13), which minimizes the total cost of supplying power for an offshore platform over its lifetime. The system achieves this objective by considering the costs of five subsystems: WEC, TEC, OWT, FPV and BESS. The ultimate goal is to reduce the total cost of energy while ensuring reliable power supply. The objective function is defined in (1). \[\begin{split} min\,F(Cost)=f(WEC)+f(TEC)+f(OWT)\\ +f(FPV)+f(BESS)\end{split} \tag{1}\] To achieve the goal of minimizing the total cost of maintaining the power supply for offshore platforms, the CORS model considers the cost of each subsystem, including the pre-commissioning cost, capital cost, operations and maintenance (O&M) costs, decommissioning cost and expected lifetime of the subsystem, which are presented in (2)-(6) for WEC, TEC, OWT, FPV and BESS respectively. \[f(WEC)=N_{WEC}\big(C_{WEC}^{precom}+C_{WEC}^{capital}+C_{WEC}^{O\&M}\times T_{e}+C_{WEC}^{decom}\big) \tag{2}\] \[f(TEC)=N_{TEC}\big(C_{TEC}^{precom}+C_{TEC}^{capital}+C_{TEC}^{O\&M}\times T_{e}+C_{TEC}^{decom}\big) \tag{3}\] \[f(OWT)=N_{OWT}\big(C_{OWT}^{precom}+C_{OWT}^{capital}+C_{OWT}^{O\&M}\times T_{e}+C_{OWT}^{decom}\big) \tag{4}\] \[f(FPV)=N_{FPV}\big(C_{FPV}^{precom}+C_{FPV}^{capital}+C_{FPV}^{O\&M}\times T_{e}+C_{FPV}^{decom}\big) \tag{5}\] \[f(BESS)=E_{BESS}\big(C_{BESS}^{precom}+C_{BESS}^{capital}+C_{BESS}^{O\&M}\times T_{e}+C_{BESS}^{decom}\big) \tag{6}\] ### _Output Power of OWT_ As mentioned earlier in Section IV, the wind speed is measured at a height of 3.8-5m above sea level. However, the hub of an OWT could be at a height of at least 80m above sea level. Thus, the log wind profile method [34]-[35] is used to estimate the wind speed at 80m height. Equation (15) represents the governing equation of the electrical power output that can be obtained from an OWT. The size of a single OWT is rated at 8000kW: \[P_{OWT}=\frac{1}{2}\rho_{w}\pi r_{w}^{2}(v_{w})^{3}\times\mathcal{C}_{p}\times\eta \tag{15}\] where \(\rho_{w}\) denotes the density of air; \(r_{w}\) is the radius of the rotor; \(v_{w}\) is the wind velocity; \(\mathcal{C}_{p}\) is the coefficient of power; and \(\eta\) denotes the efficiency of the electrical system. The hourly averages of the electrical power output obtained by a single OWT over a year for the 3 regions at each hour are shown in Fig. 3. ### _Output Power of FPV_ The output power of the FPV was calculated using the NREL PVWatts Calculator. The size of a single FPV panel is set to 0.4kW.
The hourly averages of the electrical power output obtained by a single FPV over a year for the 3 regions at each hour are shown in Fig. 4. ## VI Case Studies An offshore oil project consists of four platforms, including one central power platform, one oil extraction and process platform, one oil storage platform and one maintenance platform. The energy demand of O&G rigs can vary substantially. For instance, the peak power demand of the facilities in a rig could be around 44MW with a heating demand of more than 12 MW [36]. However, depending on the distance from shore and the amount of heavy equipment employed in the rig, the electric power requirement can go up to 250 MW [37]. To demonstrate the proposed sizing model CORS for OHRES, a typical offshore platform is used as a test bed. The load profile of the test bed platform, shown in Fig. 5, does not fluctuate like residential loads since it operates continuously for 24 hours. The mining and oil processing activities constitute 80% of the load [38]. It is assumed in this paper that a typical day repeats itself for the next two decades. The case study is conducted on 3 different regions: Gulf of Mexico, Alaska, and California. The proposed CORS model is implemented in Python using the package "Pyomo" [39], and "Gurobi" [40] was employed as the optimization solver (a minimal illustrative sketch of such a model is given below). Fig. 1: Average power output of a single TEC of 500 kW. Fig. 2: Average power output of a single WEC unit of 750 kW. Fig. 3: Average power output of a single OWT unit of 8,000 kW. Fig. 4: Average power output of a single FPV of 0.4 kW. Fig. 5: Load profile for the offshore platform. The charging efficiency of the BESS is set at 80% and the discharging efficiency is set at 95%. The preset cost parameters are detailed in Table I. Table II shows the results for the optimized lifetime cost of the proposed OHRES system in the 3 different regions. The cost was calculated for the total electricity consumption of 8,760,000MWh by an offshore platform over 2 decades. In analyzing the OHRES in the Gulf of Mexico, Alaska and California, it is evident that the utilization of WEC and TEC was not deemed cost effective in any of these locations. Although WEC technology exhibits a relatively stable output, its high associated costs prevented its selection in all cases. Among the three regions, Alaska stood out with its consistent power generation from OWT. With 13 turbines in place, Alaska experienced a steady and reliable output. Consequently, due to the dependable power supply from the OWT, the size of the BESS installed in Alaska could be much smaller. ## VII Conclusion A resource sizing model, CORS, for an OHRES system that could replace traditional electricity generation for offshore platforms with 100% clean energy and reduce CO\({}_{2}\) emissions is investigated. Although the average electricity cost of the proposed system is currently higher than that of traditional diesel generators, the elimination of CO\({}_{2}\) emissions from offshore platforms makes it a viable alternative. Based on the proposed CORS model, the OHRES system is a feasible and reliable replacement for traditional systems, and its zero emission benefits contribute significantly to global decarbonization. ## VIII Future Work The ocean is a vast resource for renewable energy that can be harnessed for generating electricity. However, technological limitations as well as logistical, infrastructural and regulatory hurdles prevent us from exploiting these resources to their fullest potential.
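The following minimal Pyomo sketch illustrates how a CORS-style sizing model of the form (1)-(6), together with an hourly power balance and a simple BESS state-of-charge model, can be posed for a solver such as Gurobi. All numbers (costs, load, per-unit output profiles) and the simplified two-resource structure are placeholders for illustration only; they do not correspond to the study's data or to the full constraint set (7)-(13).

```python
import pyomo.environ as pyo

m = pyo.ConcreteModel()
T = range(24)                                        # one representative day
m.N_owt = pyo.Var(domain=pyo.NonNegativeIntegers)    # number of OWT units
m.N_fpv = pyo.Var(domain=pyo.NonNegativeIntegers)    # number of FPV panels
m.E_bess = pyo.Var(domain=pyo.NonNegativeReals)      # BESS capacity, kWh
m.p_char = pyo.Var(T, domain=pyo.NonNegativeReals)   # charging power, kW
m.p_dis = pyo.Var(T, domain=pyo.NonNegativeReals)    # discharging power, kW
m.p_curt = pyo.Var(T, domain=pyo.NonNegativeReals)   # curtailment, kW
m.soc = pyo.Var(T, domain=pyo.NonNegativeReals)      # state of charge, kWh

# Placeholder lifetime unit costs, i.e., precom + capital + O&M * Te + decom
c_owt, c_fpv, c_bess = 2.0e7, 8.0e2, 3.0e2
load = [50_000.0] * 24                               # kW, flat placeholder
p_owt = [3_000.0] * 24                               # kW per OWT, placeholder
p_fpv = [0.2 if 6 <= t <= 18 else 0.0 for t in T]    # kW per panel, placeholder

m.cost = pyo.Objective(expr=c_owt * m.N_owt + c_fpv * m.N_fpv
                            + c_bess * m.E_bess, sense=pyo.minimize)

def balance(mdl, t):     # hourly power balance, cf. the load in Fig. 5
    return (mdl.N_owt * p_owt[t] + mdl.N_fpv * p_fpv[t]
            + mdl.p_dis[t] - mdl.p_char[t] - mdl.p_curt[t] == load[t])
m.balance = pyo.Constraint(T, rule=balance)

def soc_dyn(mdl, t):     # cyclic SOC with the 80%/95% efficiencies above
    prev = mdl.soc[23] if t == 0 else mdl.soc[t - 1]
    return mdl.soc[t] == prev + 0.80 * mdl.p_char[t] - mdl.p_dis[t] / 0.95
m.soc_dyn = pyo.Constraint(T, rule=soc_dyn)
m.soc_cap = pyo.Constraint(T, rule=lambda mdl, t: mdl.soc[t] <= mdl.E_bess)

# pyo.SolverFactory('gurobi').solve(m)               # solves the MILP
```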
Further research and development can help design the most efficient, yet cost-effective and reliable offshore hybrid renewable energy systems. For this work, it is assumed that the power requirement and generation of a single day would repeat itself for 20 years, which could be improved by considering the power generation and consumption requirements at higher temporal resolution. The OHRES system and the proposed CORS model considered four specific energy converters; however, they can be extended to choose the best sizes of single candidate units of energy converters. Also, the model presented in this research only deals with five subsystems; more subsystems, such as hydrogen energy storage systems, can be incorporated. ## Acknowledgment This research is supported by the Texas Commission on Environmental Quality through an award to the Subsea Systems Institute. This project was paid for [in part] with federal funding from the Department of the Treasury through the State of Texas under the Resources and Ecosystems Sustainability, Tourist Opportunities, and Revived Economies of the Gulf Coast States Act of 2012 (RESTORE Act). The content, statements, findings, opinions, conclusions, and recommendations are those of the author(s) and do not necessarily reflect the views of the State of Texas or the Treasury.
2305.13285
NASA's Cold Atom Laboratory: Four Years of Quantum Science Operations in Space
The Cold Atom Laboratory (CAL) is a quantum facility for studying ultra-cold gases in the microgravity environment of the International Space Station. It enables research in a temperature regime and force-free environment inaccessible to terrestrial laboratories. In the microgravity environment, observation times over a few seconds and temperatures below 100 pK are achievable, unlocking the potential to observe new quantum phenomena. CAL launched to the International Space Station in May 2018 and has been operating since then as the world's first multi-user facility for studying ultra-cold atoms in space. CAL is the first quantum science facility to produce the fifth state of matter called a Bose-Einstein condensate with rubidium-87 and potassium-41 in Earth orbit. We will give an overview of CAL's operational setup, outline its contributions to date, present planned upgrades for the next few years, and consider design choices for microgravity BEC successor-mission planning.
Kamal Oudrhiri, James M. Kohel, Nate Harvey, James R. Kellogg, David C. Aveline, Roy L. Butler, Javier Bosch-Lluis, John L. Callas, Leo Y. Cheng, Arvid P. Croonquist, Walker L. Dula, Ethan R. Elliott, Jose E. Fernandez, Jorge Gonzales, Raymond J. Higuera, Shahram Javidnia, Sandy M. Kwan, Norman E. Lay, Dennis K. Lee, Irena Li, Gregory J. Miles, Michael T. Pauken, Kelly L. Perry, Leah E. Phillips, Diane C. Malarik, DeVon W. Griffin, Bradley M. Carpenter, Michael P. Robinson, Kirt Costello, Sarah K. Rees, Matteo S. Sbroscia, Christian Schneider, Robert F. Shotwell, Gregory Y. Shin, Cao V. Tran, Michel E. William, Jason R. Williams, Oscar Yang, Nan Yu, Robert J. Thompson
2023-05-22T17:47:34Z
http://arxiv.org/abs/2305.13285v1
# NASA's Cold Atom Laboratory: Four Years of Quantum Science Operations in Space ###### Abstract The Cold Atom Laboratory (CAL) is a quantum facility for studying ultra-cold gases in the microgravity environment of the International Space Station. It enables research in a temperature regime and force-free environment inaccessible to terrestrial laboratories. In the microgravity environment, observation times over a few seconds and temperatures below 100 pK are achievable, unlocking the potential to observe new quantum phenomena. CAL launched to the International Space Station in May 2018 and has been operating since then as the world's first multi-user facility for studying ultracold atoms in space. CAL is the first quantum science facility to produce the fifth state of matter called a Bose-Einstein condensate with rubidium-87 and potassium-41 in Earth orbit. We will give an overview of CAL's operational setup, outline its contributions to date, present planned upgrades for the next few years, and consider design choices for microgravity BEC successor-mission planning. ## I Introduction Cold atom experiments in space are poised to revolutionize our understanding of physics in the coming decades. Among the myriad of proposed experiments are probes of the nature of the quantum vacuum, tests of quantum theories of gravity, investigations of novel quantum matter, and searches for dark energy and dark matter [1]. Space-based cold atom technologies offer the possibility for creating quantum sensors of unprecedented sensitivity, and practical applications abound, ranging from using atom interferometry to monitor the effects of climate change, to developing space-based optical clocks that can synchronize timekeeping worldwide. The Cold Atom Laboratory (CAL) is the first experimental facility for the study of unique quantum-engineered states of matter in the microgravity environment of the International Space Station [2]. This multi-user facility is the culmination of over three decades of rapid scientific and engineering development which has enabled the deployment of laboratory-based techniques to generate ultracold atomic gases into space [2, 3, 4]. CAL has reported the first on-orbit production of the quantum state of matter known as a Bose-Einstein condensate (BEC) with rubidium [2] and also potassium [5]. A BEC is formed, in simplest terms, when atoms with integer spin are cooled below a critical temperature where the individual atoms' de Broglie wavelengths become comparable to their mean separation; at this point the indistinguishable particles begin to condense into a single macroscopic wavefunction corresponding to the lowest accessible quantum state. The condensed atoms exhibit collective behavior in response to perturbations, allowing researchers to investigate quantum effects on a macroscopic scale using precisely controllable interactions with light, magnetic or electro-magnetic fields [6]. Since installation on the station in June 2018, CAL has operated for over four years on orbit, traveling well over 700 million miles and performing more than 111 000 such experiments with ultracold atoms. The persistent free-fall environment on the ISS provides several compelling advantages for the production and study of ultracold gas mixtures. Compared to ground-based experiments, the microgravity environment allows the use of much weaker traps and, as a result, the realization of even colder temperatures where quantum effects are magnified. 
Longer observation times are also possible, as the perturbation-free expansion times are not limited by gravitational acceleration within the measurement region. For certain measurements, such as of acceleration (or gravity), the sensitivity scales as the _square_ of the observation time, giving space-based instruments a dramatic advantage over their terrestrial counterparts. Microgravity also enables optimal overlap of mixtures of different atomic masses without the need to compensate for the differential gravitational sag with applied magnetic field gradients (Fig. 1). Microgravity, however, is but one reason to deploy instruments with ultracold atoms in space. The cold temperatures and vacuum of deep space provide intriguing possibilities for pushing the limits of ultracold experiments well beyond anything that could be achieved on Earth [7]. Vast distance scales are accessible, as well, and experiments can be performed in a variety of reference frames and gravitational potentials. Finally, the solar system provides a wide variety of observational targets for quantum sensing instruments. Figure 1: Harmonic trap potentials aligned with gravity on Earth (_left_) and in microgravity (_right_). The microgravity environment allows the use of much shallower potentials with ultracold atoms and optimal overlap in mixtures of different atomic species. Figure 2: Onset of Bose–Einstein condensation of rubidium atoms on the ISS [2]. Each false-color image represents a separate experiment where atoms are released after evaporative cooling in a harmonic trap, then imaged following a short time of free expansion to reveal the velocity distribution of the atomic ensemble. The final image shows a macroscopic cloud of almost 50 000 atoms with over one quarter in a single quantum wave function determined by the initial conditions in the trap. In this paper, we present an overview of the CAL science mission after over four years of operation on orbit, organized as follows: Section II highlights CAL's scientific contributions to date, Section III describes the instrument design and operations, Section IV summarizes changes made in-flight, and Section V briefly considers options for future missions. ## II Science Achievements The Cold Atom Laboratory utilizes the microgravity environment of the ISS to study ultracold quantum gases at unprecedentedly low energies and long free-fall times. CAL's science objectives are derived from the 2011 NASA Decadal Survey for Life and Physical Sciences [9]. The facility offers investigators the ability to perform experiments with three different atomic species, \({}^{87}\)Rb, \({}^{41}\)K, and \({}^{39}\)K, and to prepare them in specific atomic states (or superpositions of states). Atoms can be confined in a variety of trapping potential geometries, and dressed with both rf and microwave fields. Imaging of each species can be performed along two orthogonal directions, and interactions between atoms can be precisely tuned by varying an applied bias field over a magnetic Feshbach resonance [10]. Finally, light pulses from a far off-resonant laser at 785 nm (chosen so that it interacts with equal strength for both Rb and K atoms) can be applied for dual-species atom interferometry experiments. The instrument was designed not just to demonstrate these tools, which will be needed for a wide variety of future missions, but also to enable a variety of unique experiments that can only be performed in microgravity.
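As a back-of-the-envelope illustration of the condensation criterion sketched in the Introduction, the snippet below evaluates the thermal de Broglie wavelength \(\lambda_{\rm dB}=h/\sqrt{2\pi mk_{B}T}\) and the textbook phase-space-density threshold \(n\lambda_{\rm dB}^{3}\gtrsim 2.612\) for \({}^{87}\)Rb at 100 nK. The chosen temperature and the resulting numbers are illustrative and are not CAL flight parameters.

```python
import math

h = 6.62607015e-34                   # Planck constant, J s
kB = 1.380649e-23                    # Boltzmann constant, J/K
m_rb87 = 86.909 * 1.66053907e-27     # Rb-87 mass, kg

def lambda_dB(T):
    """Thermal de Broglie wavelength h / sqrt(2 pi m kB T)."""
    return h / math.sqrt(2 * math.pi * m_rb87 * kB * T)

# Degeneracy criterion: condensation sets in near n * lambda^3 = zeta(3/2) ~ 2.612
T = 100e-9                           # 100 nK, illustrative
lam = lambda_dB(T)                   # roughly 0.6 micrometers at this T
n_crit = 2.612 / lam**3              # critical number density, m^-3
print(f"lambda_dB = {lam * 1e6:.2f} um, critical density = {n_crit:.2e} m^-3")
```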
CAL was designed as a versatile multi-user science facility, enabling a world-class group of scientists to perform a diverse range of investigations of quantum phenomena in the microgravity environment of the ISS. A NASA Research Announcement (NRA) was released on July 11, 2013 to solicit proposals from academic and research institutions to utilize the Cold Atom Lab facility. From this NRA, five flight Principal Investigators (PIs) [11, 12, 13, 14, 15] and two ground PIs [16, 17] were selected. Among the selected PI teams are three recent Nobel Prize laureates. PI-led investigations have demonstrated the production of quantum gases in rf-dressed "bubble" geometry traps [8, 18], as shown in Figure 3. Other experiments have demonstrated adiabatic cooling in extremely weak traps [19], and the use of "shortcut-to-adiabaticity" protocols [20] and delta-kick cooling techniques [21] to achieve temperatures of about 50 pK, corresponding to free-expansion velocities as low as 100 \(\upmu\)m/s, with unprecedented precision in positioning cold atomic samples [22]. Ongoing investigations include experiments to study the formation of Efimov molecules in microgravity, demonstrate unique methods to correlate the positions of atoms, demonstrate a quantum rotation sensor, and search for novel phenomena involving mixtures of quantum gases. Atom interferometry is a particularly important application for cold atoms in space, and is an essential component in three of the five CAL PI science campaigns. In an atom interferometer, cold atoms serve as matter waves, while a laser light field creates the periodic grating structures that the atoms scatter from in order to realize a closed-loop _matter wave_ interferometer. This is in contrast to a traditional light interferometer, where photons behave as waves that diffract off of a physical structure. Figure 4 shows a typical image resulting from the atom interferometer in CAL, illustrating the macroscopic separation of the two quantum superposition states for each atom. Beyond a simple demonstration of atom interferometry, CAL PIs have applied this quantum interference measurement technique for a proof-of-principle photon recoil measurement and to observe the influence of matter-wave interference over hundreds of milliseconds in free-fall [23]. A dual-species interferometer (\({}^{87}\)Rb and \({}^{41}\)K) has also been recently demonstrated [5]. Figure 3: Absorption images of non-condensed rubidium atoms after release from a shell potential in microgravity. The series of images illustrates the behavior as the bubble-shaped potential is "inflated" prior to release of the atoms. The near-uniform densities are only observable in the absence of gravity. The darker lobes at the upper and lower bounds of each cloud are artifacts of the column-averaged absorption imaging technique combined with the finite imaging resolution. Adapted from Ref. [8]. Figure 4: Superposition of momentum states observed in ultracold rubidium atoms after applying a series of optical pulses to realize an atom interferometer. Each spatially-separated atom cloud is approximately 40 \(\upmu\)m by 48 \(\upmu\)m in size, and the clouds are separated in momentum space by two photon recoils. Prior to the observation, an individual atom's wavefunction exists simultaneously in both locations.
Efforts to increase the interaction time (and sensitivity) in this dual-species interferometer are ongoing, with the goal to employ differential interferometry for a proof-of-principle test of Einstein's equivalence principle. ## III Instrument Design and Operations ### ISS Accommodation As a space-based platform, the ISS offers a relatively benign environment for scientific payloads. The Cold Atom Laboratory was designed to fit into an EXPRESS (EXpedite the PRocessing of Experiments to Space Station) Rack that provides standardized power, mechanical, thermal, and data interfaces for scientific payloads on the ISS. CAL occupies one full "quad" locker plus a single locker in EXPRESS Rack 7 (ER-7), located inside the Destiny Module near the station's center of gravity. The instrument draws up to 565 W of power from the station's 28 VDC power, and thermal management is provided by water and forced-air cooling in ER-7. A communications port supports daily real-time science operations and continuous telemetry monitoring on the ground by the CAL Operations Team. The CAL Science Instrument, which includes the Science Module as well as the majority of the lasers, optics and control electronics to support the proposed research programs, is housed within the ER-7 quad locker. These hardware subsystems are described in the following section. The DC power conversion electronics are housed separately in a single locker in ER-7, along with an additional laser to support dual-species atom interferometry and the optical amplifier used for laser cooling potassium. ### Operational Concept The CAL Operations Team operates the CAL Flight Instrument from the Earth Orbiting Missions Operations Center at JPL. Communication with the payload is over the Ku-band IP service through ISS Payload Operations at the Huntsville Operations Support Center (HOSC) at Marshall Space Flight Center (MSFC). The ground-to-station data link is provided by the Tracking and Data Relay Satellite System (TDRSS), a network of communications satellites and ground stations used to provide a near-continuous real-time communications relay with the ISS. All data is transferred to and from the instrument using the delay tolerant networking (DTN) functionality provided by MSFC's Telescience Resource Kit (TReK) software suite, and TReK's CCSDS File Delivery Protocol (CFDP) utility provides a standardized transport mechanism for file transfers over the DTN. A diagram of the CAL mission operations architecture is provided in Fig. 5. The operator interface to the Flight Instrument is provided via a Windows Remote Desktop session on the ground data system (GDS) computer, and experimental definition tables are executed on the Flight Instrument via sequence control by the CAL flight software. Science definition tables are developed by the PI Science Teams working with the CAL Team at JPL, and all new tables and sequences are flight rule checked before upload to the Flight instrument. Once on the Flight Instrument, the instrument operator will queue the science table into the flight software's "Sequence Engine," and the table can then be executed according to a time series of commands as specified in the corresponding time sequence file. A typical experimental sequence for single-species (Rb-only) science on CAL proceeds as follows: 1. 
1. **Laser cooling**: Collect and cool atoms in a magneto-optical trap (MOT) inside the science region of the vacuum enclosure, followed by a brief stage of so-called "optical molasses" where the quadrupole magnetic field is turned off and the laser frequencies further detuned to reach atom temperatures below 100 \(\mu\)K. Typical atom numbers for Rb are \(N\approx 3\times 10^{8}\) after this stage.
2. **State preparation**: Optically pump the cooled atoms to the low magnetic field-seeking quantum state.
3. **Transfer to atom chip**: An intermediate quadrupole magnetic trap is used to transfer atoms from the MOT region to the atom chip-based trap at the top of the science region.
4. **Evaporative cooling**: An rf or microwave field is employed to eject the hottest Rb atoms from the chip trap by selectively transferring these atoms from the low field-seeking state to a high field-seeking state. The rf or microwave frequency is reduced over approximately 1.5 s to eject atoms at decreasing temperatures in a process known as "forced evaporative cooling." At the critical temperature \(T_{\rm c}\approx 100\) nK, atoms begin to macroscopically occupy the BEC phase.
5. **Decompression and release**: The atom trap is relaxed to further cool the atoms, then atoms are released and allowed to freely expand.
6. **Interrogation**: Atoms may be further probed using precise laser, magnetic, or rf pulses, as specified in the science definition table.
7. **Detection**: After a specified time of flight, an image of the expanded atom cloud is recorded by a camera using laser absorption imaging, followed by a reference image recorded after the destructive absorption image.

Additional details related to dual-species operation, including sympathetic cooling in Rb/K gas mixtures, can be found in Refs. 5 and 24.

Figure 5: CAL Mission System Architecture.

The primary science product generated with each table execution is the pair of absorption and reference images recorded at a specified time of flight. These images are transferred automatically from the Flight computer via CFDP to the GDS computer, where the operator can review both the raw absorption images and the calculated optical densities in real time using image analysis software developed at JPL for this purpose.

### Flight Hardware Overview

#### 1. Science Module

The CAL Science Module includes a physics package containing Rb and K atoms in an ultrahigh vacuum (UHV) enclosure, along with an opto-mechanical bench that supports the surrounding laser beam collimators and free-space optics, cameras, rf and microwave antennas, and current coils for magnetic field control. The cameras and current coils are coupled to a water-cooling loop via flexible copper heat pipes for thermal control. A dual-layer magnetic shield encloses the entire science module and provides greater than 55 dB attenuation of external magnetic fields. The physics package is derived from ColdQuanta's commercial RuBECi chamber [25], modified for dual-species (Rb and K) operation and ruggedized for flight. The CAL physics package also incorporates a custom silicon chip-based atom trap containing a high-quality through-chip window. The custom "atom chip" provides a unique configuration of conductive current traces for creating the magnetic trapping potential near the chip surface, as well as features for improved electrical, thermal, and mechanical integrity for use in flight.
The vacuum enclosure of the physics package consists of two distinct regions, both made with high optical quality glass walls, separated by a differential pumping aperture. The source region contains two _in vacuo_ alkali metal dispensers for Rb and K, while the UHV science region includes a miniaturized \(2\ell/s\) ion pump plus a graphite non-evaporable getter to maintain background pressures below \(10^{-10}\) Torr within this region to allow long trap lifetimes. A two-dimensional magneto-optical trap (MOT), created by two pairs of circularly-polarized laser beams along the orthogonal horizontal axes plus a two-dimensional quadrupole magnetic field, acts as an "atom funnel" to collect and transfer the slowest atoms from a dilute thermal vapor of Rb and K in the source region to the UHV science region through the aperture. This collimated beam of laser-cooled Rb and K atoms is captured in the science region by a three-dimensional MOT formed by circularly-polarized laser beams along three orthogonal axes and centered on a quadrupole magnetic field formed by a pair of current coils in the anti-Helmholtz configuration.

Figure 6: Illustration of the optical beam geometry in CAL's science module (_left_); and an image of the science module during assembly, prior to installation of magnetic shields (_right_). Originally published in Ref. 2.

After further laser cooling followed by optical pumping to a pure magnetic field-sensitive state, the atoms are transferred to the atom chip-based magnetic potential, where rf or microwave forced evaporative cooling is employed to reach the transition to a Bose condensate. After release from the magnetic trap, ultra-cold atoms are imaged using absorption imaging, where a laser pulse resonant with the atomic transition probes the density distribution of the expanded atom cloud to reveal the initial momentum state of the atomic ensemble. Two imaging subsystems are provided within the Science Module to allow absorption imaging along orthogonal axes, either parallel or perpendicular to the atom chip. The wide-field imaging subsystem employs a large diameter (12 mm FW(\(1/e^{2}\))M) laser beam aligned just below the atom chip to provide a wide field of view. The orthogonal "through-chip" imaging axis has a much smaller beam, and makes use of the high optical quality window at the center of the atom chip. Both cameras used for absorption imaging employ a near-infrared enhanced scientific CMOS sensor, with a quantum efficiency of approximately 35% at the resonant absorption wavelength of 780 nm for Rb.
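In its simplest form, the absorption-imaging analysis described above reduces to a per-pixel Beer-Lambert estimate of optical density from the absorption/reference image pair. The sketch below is an illustrative reconstruction (the array handling and the neglect of saturation and dark-frame corrections are our simplifications; this is not CAL's actual JPL analysis code):

```python
import numpy as np

def optical_density(absorption_img: np.ndarray, reference_img: np.ndarray,
                    floor: float = 1e-6) -> np.ndarray:
    """Per-pixel optical density OD = -ln(I_abs / I_ref), i.e. the
    Beer-Lambert law, ignoring saturation and dark-frame corrections."""
    ratio = absorption_img / np.clip(reference_img, floor, None)
    return -np.log(np.clip(ratio, floor, 1.0))

# The atomic column density then follows by dividing OD by the resonant
# absorption cross section of the imaging transition (e.g., Rb D2 at 780 nm).
```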
#### 2. Lasers and Optics Subsystem

The Lasers and Optics Subsystem in CAL performs the initial laser cooling and trapping, optical pumping, and resonant detection of Rb and K atoms within the Science Module. To accomplish this, two laser frequencies are required for each atomic species. These frequencies are generated by tunable narrow-linewidth "trapping" and "repumping" lasers which are frequency-offset locked to a reference laser, which is in turn frequency-stabilized to a narrow atomic transition in a spectroscopy module containing a vapor cell of either rubidium or potassium. This offset-lock scheme provides the required frequency agility from the trapping and repumping lasers for the laser cooling, optical pumping, and detection stages.

The tunable laser outputs are further amplified using two tapered-chip semiconductor amplifiers to provide up to 350 mW of optical power at the 780 nm and 767 nm wavelengths for Rb and K, respectively, and delivered to the Science Module through a polarization-maintaining optical fiber-based network of optical switches and fiber splitters/combiners. Beam delivery via fiber optics allows the placement of lasers and optical components outside the Science Module, and facilitates replacement of individual subassemblies during integration or, if necessary, on orbit. A separate fixed-wavelength laser at 785 nm generates the far-off-resonant light for dual-species atom interferometry using Bragg diffraction in an optical lattice. The interferometer pulse sequence is generated using an acousto-optical modulator (AOM) driven at the resonant RF frequencies for simultaneous Bragg diffraction of \({}^{87}\)Rb and \({}^{41}\)K (or \({}^{39}\)K). The multiple frequencies for driving the AOM are directly synthesized by an arbitrary waveform generator, as described in the following section.

#### 3. Control and Electronics Subsystem

A Windows-based computer controller plus three field-programmable gate array (FPGA) modules, housed in a DC-powered PXI chassis, provide dynamic control of the magnetic field currents, RF and microwave emitter frequencies, and laser frequencies and amplitudes during each experimental sequence. The primary FPGA provides digital and analog timing waveforms for synchronous control of the current drivers, direct digital synthesizers, arbitrary waveform generator, and RF and optical switches with a timing resolution of 10 \(\upmu\)s. The remaining two FPGAs are used to implement digital servo control loops to acquire and maintain the frequency-stabilization locks for the Rb and K reference lasers. Executive functions, such as loading and processing experimental configuration tables, running experimental control sequences, collecting and reporting real-time telemetry data, and monitoring for off-nominal conditions, are handled by the LabVIEW-based "PXI Host" flight software running as a Windows application on the PXI controller. Two dozen lower-level hardware-control software modules directly interface with the various hardware subsystems, and are managed by the PXI Host. The Current Driver Assembly (CDA) contains independently-controllable low noise current drivers for the six magnetic field coils inside the science module and the three atom chip trap current traces, as well as additional current drivers for the Rb and K dispensers in the Science Module. Two of the three atom chip drivers are switchable across three different atom chip traces and can provide either unidirectional or bidirectional current (depending on the switch selection) to generate multiple magnetic trap configurations. The third current driver can be directed across a "fast Feshbach" current loop to generate a bias field up to 90 G in the Science Module. This bias field is employed to access magnetic Feshbach resonances in mixtures of Rb and K atoms, where the sign and strength of interactions between atomic species can be precisely tuned by varying an external magnetic field. The Laser Frequency Lock Assembly (LFLA) contains the laser control electronics for the six narrow-linewidth lasers as well as the ultra-high frequency (UHF) and microwave frequency sources for driving atomic transitions in Rb or K.
The laser control electronics include all the laser drivers, frequency stabilization electronics and reference synthesizers for the offset-locked lasers, and are housed as six individually removable electronic "slices" within the LFLA chassis. Frequency sources for evaporative cooling of atoms include an 80-MHz arbitrary waveform generator (AWG) housed within the PXI chassis, as well as three RF/UHF direct digital synthesizers (DDS) on LFLA Slices 7 and 8, along with a phase-locked 7.3 GHz dielectric resonant oscillator (DRO) on Slice 7 that is mixed with a 1 GHz DDS to generate the microwave frequencies for evaporative cooling. LFLA Slice 9 contains the RF amplifiers to generate high-power RF from the AWG output to drive either the RF loop antenna within the Science Module for evaporative cooling of Rb or the AOM used to generate the optical Bragg diffraction pulses for atom interferometry. An RF relay on Slice 9 directs the amplified signal to either subsystem.

## IV On-Orbit Upgrades

CAL was designed to allow on-orbit replacement of limited-lifetime components, including lasers, optical amplifiers, and the alkali-metal dispensers inside the Science Module, in order to extend science operations beyond its three-year primary mission. To support the anticipated replacement of the identified hardware, the CAL payload launched in 2018 with a suite of on-orbit replacement unit (ORU) lasers and amplifiers, as well as additional modules for the PXI chassis. The CAL Operations Team continuously monitors telemetry from these lifetime-limited and consumable items to identify any change in performance indicating end-of-life behaviors, as well as to monitor the health and status of the various hardware subsystems. The ability of the ISS crew to access a scientific payload housed inside the station also enables the replacement, under certain conditions, of hardware due to unanticipated failures or degraded performance on orbit or, as in CAL's Science Module upgrade described below, to deliver enhanced science capabilities to the instrument.

### Enhanced Science Module

In December 2019, after 18 months of CAL's operation in orbit, a new atom-interferometry capable science module was delivered to the ISS on the SpaceX CRS-19 resupply mission. The upgraded science module, known as Science Module 3 (SM3), was designed and assembled at JPL to fully support planned experiments with ultracold atom interferometry in space, including a proof-of-principle test of Einstein's Equivalence Principle. During transport on ground and after unloading on the station, the ion pump in the science module was operated using a GSE ion pump controller assembly (IPCA), developed at JPL, to maintain vacuum integrity within the module's physics package. The science module can be stored for greater than three months without power to its ion pump under normal conditions on ground, but this powered stowage was a precaution against vacuum degradation due to helium permeation into the glass-walled physics package in an elevated helium environment.

Figure 7: _Left:_ Astronaut Christina Koch unloads a new Science Module aboard the International Space Station in December 2019, prior to installation in the Cold Atom Laboratory in January 2020. _Right:_ Astronaut Megan McArthur is shown wearing the augmented-reality headset used during the installation of hardware inside the Cold Atom Laboratory in July 2021. Images courtesy of NASA.

The removal and replacement (R&R) of CAL's science module was performed in January 2020 by the ISS crew
under the direction of the CAL Operations Team. This R&R activity involved twelve separate crew procedures that took place on five days over a nine-day span. After closeout of the newly-installed SM3, the original science module, SM2, was connected to the GSE IPCA to maintain vacuum until its return to ground for analysis at JPL. During SM2's return flight, the IPCA was continuously powered by a specialized lithium ion battery that had been flight-qualified for use on the station for extravehicular activities (EVAs) by the astronauts. Immediately following this R&R activity, the CAL Operations Team confirmed the vacuum integrity within the UHV science region of SM3 from instrument telemetry, then the Team proceeded to demonstrate laser-cooled atoms. After uploading new experimental definition tables developed on ground for the specific atom chip geometry in SM3, the CAL Team was able to confirm nominal and repeatable generation of BECs in the upgraded instrument.

### Upgraded Microwave Frequency Source

In July 2021, the ISS crew upgraded the Cold Atom Laboratory with new hardware to enable the production of ultracold potassium atoms alongside rubidium, as required for dual-species science operations. This hardware, referred to as "Slice 7B" and installed in the LFLA chassis, completed the microwave frequency synthesis chain required to directly cool rubidium atoms using evaporative cooling with microwave frequencies, rather than RF; the microwave-cooled rubidium atoms then sympathetically cool either \({}^{39}\)K or \({}^{41}\)K atoms within the same magnetic trap. Previous experiments with rubidium relied solely on RF for evaporative cooling, which is far less efficient in dual-species mixtures with potassium [24]. During the installation of this hardware, astronaut Megan McArthur employed a Microsoft HoloLens mixed and augmented reality headset, in a first demonstration of this technology to assist a crew procedure aboard the ISS. Preparation for this activity took six months and involved a collaboration between engineers at NASA's JPL, Johnson Space Center, and Marshall Space Flight Center. Through a live video feed from the headset camera, the CAL Operations Team at JPL was able to share the astronaut's view of the hardware being replaced on orbit, and to simultaneously affix virtual text and graphical annotations alongside physical objects within the augmented reality environment to assist the installation in real time. For example, during the detailed procedure of reconnecting each cable assembly as the new hardware was being installed, the CAL Team was able to place a cursor to indicate a specific connector or cable tie to supplement the written procedure. The virtual cursor would remain fixed relative to the indicated object, independent of the motion of the headset-mounted camera. Following installation, the CAL Operations Team was able to validate the performance of this hardware by demonstrating evaporative cooling of rubidium atoms to a BEC using only microwave frequencies from Slice 7B. Subsequently, the CAL Team was further able to generate a Bose-condensed sample of \({}^{41}\)K using only sympathetic cooling of potassium atoms by microwave evaporatively-cooled rubidium atoms within the same trap.

### CPU Controller and SSD Replacement

During science operations on 12 August 2021, the CAL Operations Team lost communication with the Flight Computer and were unable to reconnect.
Efforts to ping the Flight Computer were unsuccessful, even after multiple remote power-cycles of the payload in an attempt to induce a reboot of the Flight Computer. From the available power draw telemetry, it was determined that the most likely causes were a failure of the PXI-8108 CPU controller or the solid-state drive (SSD) inside this controller, or a corruption of the Windows operating system on this drive. A spare ORU controller had been in stowage on the station since the original payload delivery in 2018, and the ISS crew was able to remove and replace the original controller with this ORU on 28 August 2021. The newly-installed controller was then reconfigured by the ISS Network Team for operation on the ISS network. After communication with ground was established, the latest flight software was remotely installed on the new controller, along with the TReK and ION DTN software suites, allowing CAL to resume operation on 3 September 2021. A subsequent R&R procedure was necessary after the boot volume in this ORU controller became corrupted after less than three months of operation. The non-functional SSD was replaced on 16 December 2021 with a bootable drive that was delivered to the station on SpaceX CRS-23. Following the successful R&R, the CAL Operations Team was again able to reconnect and remotely install the Flight Software on the newly installed drive. Once the FSW installation was verified, recent experimental control tables and sequences required to operate the Flight Instrument were re-uploaded. After confirming nominal telemetry from all hardware subsystems, the Operations Team proceeded with a successful checkout of the rubidium subsystem and was thereafter able to resume science operations.

## V Future Plans

As the CAL mission continues into its fourth year, a number of upgrades are planned to further enhance the science return and allow new categories of investigations. The Science Module 3B ORU upgrade is currently scheduled for launch in June 2023. This science module should allow a significant increase in the number of ultracold atoms produced. Currently, CAL produces large numbers of laser-cooled atoms, but the transfer process to the atom chip is highly inefficient due to the orders-of-magnitude difference in the volume of the initial magnetic trap (formed by a pair of coils in the Helmholtz configuration) and the volume of the atom chip-based microtrap. The solution is to add an intermediate "mesoscale" trap which produces a magnetic trap that can be dynamically varied to roughly match the size of the initial and final traps. Such an enhancement might improve atom numbers by as much as an order of magnitude, bringing CAL's performance in line with typical terrestrial experiments in terms of this metric. We are currently planning to operate CAL at least through 2027, so at least one more science module will likely be required. After consulting with the scientific community, the two next highest priority upgrades (after the enhancement of atom numbers) are the implementation of optical dipole traps and improved control of magnetic fields within the instrument. We are currently studying means of introducing a crossed-beam dipole trap geometry with deformable beams in an upgraded science module.

### Lessons Learned for Future Mission Development

Our experience with CAL guides the development of future missions.
A number of areas on which we can immediately focus are apparent, and each of these has the potential to improve both the utility and reliability of future cold atom based missions.

#### 1. Flight hardware improvements

**Faster experimental cycle times:** CAL typically runs experiments on a 75-90 s experiment cycle, and is limited by thermal considerations to cycle times no shorter than 60 s. Decreasing this cycle time to ten or even five seconds would dramatically increase the science throughput of the instrument.

**Better experimental diagnostics:** Dynamic _in situ_ measurements of laser powers and both magnetic and rf field strengths would dramatically improve our ability to identify and diagnose systematic issues or any hardware degradation, and could aid scientific investigations.

**More modular design:** Improvements in the ability to quickly swap out components on orbit can vastly increase both the reliability and scientific versatility of the instrument. This would allow us to take advantage of the increased crew time availability that has come with the advent of commercial crew, perhaps allowing trained atomic physicists to work in space.

#### 2. Operational improvements

**Operation from PI host institutions:** Giving PI Teams direct control of the instrument will allow them to design new experiments fluidly, similar to how atomic physics experiments are conducted on the ground. For this to become possible, it will be necessary to further automate the instrument to allow experiments to run autonomously, and to establish the level of diagnostics necessary to ensure the safety of the facility.

### Future Missions

The Bose-Einstein Condensate Cold Atom Lab (BECCAL) is a complementary NASA-DLR quantum matter research facility expected to launch to the ISS after CAL completes its nominal mission in 2027. BECCAL is designed to provide a fast experimental duty cycle, unique dynamically-configurable trapping potentials, and novel capabilities for atom interferometry [26]. In contrast to the CAL atom interferometer design, which uses a single far-detuned laser to interact simultaneously with two atomic species (Rb and K) for unprecedented common-mode suppression of vibration and noise in weak equivalence principle experiments, BECCAL will use two separate lasers for dual-species atom interferometry, with higher sensitivity to spurious vibrations but allowing a factor of 10 larger atom beams, and active control of the retro-optic for unprecedented long interrogation times. The novel capabilities of BECCAL will enable new studies of non-linear atom optics, matter-wave cavities, gravity gradients, and tests of the gravitational constant. A proposed astronaut-operated Quantum Explorer [27] follow-on to BECCAL would provide a reconfigurable facility for easy swap-out of custom hardware, PI-specific instruments, lasers, and science modules. Research enabled by such a facility could include the study of topics as diverse as the nature of the quantum vacuum; quantum chaos and pattern formation; atom lasers and matter-wave holography; matter-wave localization; and quantum simulations of astrophysical objects, such as the early universe, black holes, and neutron stars, as well as condensed matter systems such as high-temperature superconductors.
## VI Conclusion

The Cold Atom Laboratory is a pathfinder mission for fundamental studies of quantum matter in microgravity, and for future space-based cold-atom sensors enabling exquisitely precise measurements for both applied and fundamental science applications. While there are microgravity alternatives to an orbiting platform (e.g., parabolic flights [28, 29], drop towers [30, 31], or sounding rockets [3, 4]), the science return can be much greater in an orbital facility where hundreds or even thousands of experimental sequences can be processed per day. As a multi-user facility, CAL was designed to support a diversity of experimental campaigns with multiple science teams, and to provide the flexibility to evolve as the supporting ground-based research matures. For an exploratory mission investigating a wide variety of quantum phenomena over an extended mission lifetime, these advantages were compelling. Future missions employing advanced quantum sensors tailored toward a single science objective may have a single PI and find accommodation on a dedicated satellite platform. CAL is also unique in that it is the only Flight mission simultaneously in Phases C/D (Design & Development) through E (Operations), as the project continues to develop and test hardware for on-orbit upgrades while it operates through an extended science-phase mission. The ability to enhance science capabilities and replace limited-lifetime hardware is a unique advantage of the crewed ISS platform. CAL has undergone several on-orbit upgrades and repairs over its four years of operations with support from the station crew. These have enabled CAL not only to continue operations beyond its primary science mission but also to provide _enhanced capabilities for increased science return_. As part of one such upgrade, we have provided a first demonstration of augmented reality technology to improve the real-time interaction between an astronaut and payload engineers on ground during an R&R procedure. This technology promises to find applications far beyond CAL for maintaining science payloads on the station.

## Acknowledgments

The Cold Atom Laboratory is supported by the Biological and Physical Sciences Division of NASA's Science Mission Directorate and the ISS Program Office. We thank Dr. Craig Kundrot, former Division Director of NASA BPS, and Dr. Ulf Israelsson, former Program Manager for the Fundamental Physics Office at JPL, for their long-term support. We also acknowledge the early support of Dr. Mark Lee, former Program Scientist for NASA BPS; without his vision the Cold Atom Laboratory would not have become a reality. CAL was designed, built, and is currently managed and operated by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. US Government sponsorship is acknowledged. © 2023 All Rights Reserved. Published by the Mohammed Bin Rashid Space Centre (MBRSC) on behalf of SpaceOps, with permission and released to the MBRSC to publish in all forms.
2308.15608
Excited Hadron Channels in Hadronization
The proper treatment of hadronic resonances plays an important role in many aspects of heavy ion collisions. This is expected to be the case also for hadronization, due to the large degeneracies of excited states, and the abundant production of hadrons from their decays. We first show how a comprehensive treatment of excited meson states can be incorporated into quark recombination, and in extension, into Hybrid Hadronization. We then discuss the quantum mechanics of forming excited states, utilizing the Wigner distribution functions of angular momentum eigenstates of isotropic 3-D harmonic oscillators. We further describe how resonance decays can be handled, based on a set of minimal assumptions, by creating an extension of hadron decays in PYTHIA 8. Finally, we present first results by simulating $e^+e^-$ collisions using PYTHIA and Hybrid Hadronization with excited mesons up to orbital angular momentum $L=4$ and radial quantum number 2. We find that states up to $L=2$ are produced profusely by quark recombination.
Rainer J. Fries, Jacob Purcell, Michael Kordell II, Che-Ming Ko
2023-08-29T19:59:59Z
http://arxiv.org/abs/2308.15608v1
# Excited Hadron Channels in Hadronization

###### Abstract:

The proper treatment of hadronic resonances plays an important role in many aspects of heavy ion collisions. This is expected to be the case also for hadronization, due to the large degeneracies of excited states, and the abundant production of hadrons from their decays. We first show how a comprehensive treatment of excited meson states can be incorporated into quark recombination, and in extension, into Hybrid Hadronization. We then discuss the quantum mechanics of forming excited states, utilizing the Wigner distribution functions of angular momentum eigenstates of isotropic 3-D harmonic oscillators. We further describe how resonance decays can be handled, based on a set of minimal assumptions, by creating an extension of hadron decays in PYTHIA 8. Finally, we present first results by simulating \(e^{+}+e^{-}\) collisions using PYTHIA and Hybrid Hadronization with excited mesons up to orbital angular momentum \(L=4\) and radial quantum number 2. We find that states up to \(L=2\) are produced profusely by quark recombination.

Hadronization of partons is a longstanding problem for Monte Carlo (MC) event generators. The process of forming bound states of quarks and gluons cannot be described from first principles. Instead, several models have been developed over the years to successfully describe certain aspects of hadronization. Among them are the string fragmentation model, which is deployed, for example, in the PYTHIA event generator [1, 2], and the quark recombination model [3, 4, 5, 6]. String fragmentation is based on the idea of QCD strings forming between color charges at large distances, and has been successfully applied to all kinds of "small" collision systems, i.e. those not involving nuclei. On the other hand, quark recombination has had success describing certain aspects of hadronization in nuclear collisions, in particular large baryon/meson ratios and the constituent-quark-number scaling of elliptic flow. The idea behind Hybrid Hadronization is to combine these two models so that a comprehensive and consistent description of hadronization in all collision systems at a broad range of collision energies can be achieved [7, 8]. The overarching idea is that systems of partons that are ready to hadronize are first allowed to recombine by sampling the recombination probabilities of all quark-antiquark pairs and all quark and antiquark triplets in the system. Gluons are assumed to have decayed into quark-antiquark octet pairs for this step. The recombination probabilities are computed in a phase-space formalism briefly sketched below. Some partons, preferentially those close in phase space, thus recombine directly into hadrons. All remnant partons are then assumed to be connected by strings, and these strings are subjected to a string fragmentation model, in our case the one implemented in PYTHIA 8. If the original system of partons has color tags assigned, e.g. in output from PYTHIA 8 for \(e^{+}+e^{-}\) or \(p+p\) collisions, these color tags are used in the computation of recombination probabilities, and they are again used to form the remnant strings, preserving color flow in the parton system. The focus of this work is the implementation of proper physical hadronic resonances in the recombination process.
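As an illustrative aid, the recombine-then-fragment flow described above might be organized as in the following sketch (a heavily simplified toy, not the actual Hybrid Hadronization code; all names are hypothetical, and the placeholder probability uses only the ground-state channel of Eqs. (4)-(6) below):

```python
import math
import random

def recomb_probability(u: float) -> float:
    """Placeholder ground-state coalescence probability P_00 = exp(-u),
    where u is the dimensionless squared phase-space distance of the pair
    (see Eq. (4) below); excited channels would add further terms."""
    return math.exp(-u)

def hybrid_hadronize(n_partons, pairs, fragment_strings):
    """pairs: list of (i, j, u) quark-antiquark candidates, ordered by
    phase-space proximity; fragment_strings: callback handing remnant
    partons (connected into strings by color tags) to string fragmentation."""
    used, mesons = set(), []
    for i, j, u in pairs:
        if i in used or j in used:
            continue
        if random.random() < recomb_probability(u):  # sample recombination
            mesons.append((i, j))
            used.update((i, j))
    remnants = [k for k in range(n_partons) if k not in used]
    return mesons, fragment_strings(remnants)
```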
In the original work [7] excited hadrons were not mapped onto the proper physical states in the Particle Data Book [10], since the recombination probabilities into eigenstates of orbital angular momentum in the Wigner phase-space formalism were not known. Our study of angular momentum eigenstates of the 3-D isotropic harmonic oscillator [9] remedies this shortcoming. We briefly discuss the work done in [9] and then present first results from an application to \(e^{+}+e^{-}\) collisions. We assume the potential between color singlet quark-antiquark pairs to be modelled by a 3-D isotropic harmonic oscillator. Our work restricts itself to mesons for now, for simplicity. The widths of the potentials can be fixed to data by computing the squared charge radii for stable mesons and comparing those results to measured values of \(\langle r^{2}\rangle\), as laid out in [7]. The inverse length scale of the harmonic oscillator is denoted by \(\nu\) in the following. We compute the Wigner distributions \(W_{kl}({\bf r},{\bf q})\) of eigenstates of the potential with radial quantum number \(k\) and orbital angular momentum quantum number \(l\). The magnetic quantum number \(m\) is averaged over, since we do not wish to consider the polarization of hadrons. In that case the distributions only depend on the magnitudes of the position and momentum vectors \({\bf r}\) and \({\bf q}\), and the angle \(\theta\) between those vectors. In our work we reduce the complicated problem of 3-D phase-space distributions to the known Wigner distributions of the 1-D harmonic oscillator [11]. These distributions have been computed before in [12] using a very different approach. The lowest energy distributions are

\[W_{00}=\frac{1}{\pi^{3}\hbar^{3}}e^{-\frac{q^{2}}{\hbar^{2}\nu^{2}}-\nu^{2}r^{2}}\,,\tag{1}\]
\[W_{01}=W_{00}\left(-1+\frac{2}{3}\nu^{2}r^{2}+\frac{2}{3}\frac{q^{2}}{\hbar^{2}\nu^{2}}\right)\,,\tag{2}\]
\[W_{10}=W_{00}\left(1+\frac{2}{3}\nu^{4}r^{4}-\frac{4}{3}\nu^{2}r^{2}-\frac{4}{3}\frac{r^{2}q^{2}}{\hbar^{2}}+\frac{8}{3}\frac{(\mathbf{r}\cdot\mathbf{q})^{2}}{\hbar^{2}}-\frac{4}{3}\frac{q^{2}}{\hbar^{2}\nu^{2}}+\frac{2}{3}\frac{q^{4}}{\hbar^{4}\nu^{4}}\right)\,.\tag{3}\]

The distributions for \(k=1\), \(l=0\) are also visualized in Fig. 1. We proceed by computing the probabilities for two Gaussian wave packets with given widths \(\delta\), representing the quark and antiquark, to coalesce into a bound state given by the Wigner distributions above. The probabilities \(P_{kl}(\mathbf{x},\mathbf{p})\) depend on the relative distance vectors \(\mathbf{x}\) and \(\mathbf{p}\) of the wave packets in position and momentum space, respectively. Here the magnetic quantum number \(m\) is summed over to capture all polarization states. The probabilities for the lowest energy states are

\[\mathcal{P}_{00}=e^{-u}\,,\tag{4}\]
\[\mathcal{P}_{01}=e^{-u}u\,,\tag{5}\]
\[\mathcal{P}_{10}=\frac{1}{2}e^{-u}\left(\frac{1}{3}u^{2}-\frac{1}{3}t\right)\tag{6}\]

for the simplest case \(2\delta\nu=1\). For a discussion of other ratios between the length scales see [9]. The distributions have been written in terms of the variables \(u=\nu^{2}x^{2}/2+p^{2}/(2\hbar^{2}\nu^{2})\) and \(t=(\mathbf{x}\times\mathbf{p})^{2}/\hbar^{2}\), which have straightforward physical interpretations as the dimensionless squared distance of the initial partons in phase space, and their dimensionless squared angular momentum, respectively.
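Eqs. (4)-(6) are straightforward to evaluate numerically. The following minimal sketch (our own illustration, not code from Ref. [9]) computes \(u\), \(t\), and the three lowest coalescence probabilities for the case \(2\delta\nu=1\):

```python
import numpy as np

def phase_space_vars(x, p, nu, hbar=1.0):
    """u: dimensionless squared phase-space distance of the pair;
    t: dimensionless squared relative angular momentum (see text)."""
    x, p = np.asarray(x, float), np.asarray(p, float)
    u = 0.5 * nu**2 * x.dot(x) + p.dot(p) / (2.0 * hbar**2 * nu**2)
    L = np.cross(x, p)
    t = L.dot(L) / hbar**2
    return u, t

def coalescence_probs(u, t):
    """P_00, P_01, P_10 of Eqs. (4)-(6), valid for 2*delta*nu = 1."""
    w = np.exp(-u)
    return w, w * u, 0.5 * w * (u**2 / 3.0 - t / 3.0)
```

Note that \(u\geq|\mathbf{x}||\mathbf{p}|/\hbar\) by the arithmetic-geometric-mean inequality, so \(u^{2}\geq t\) and \(\mathcal{P}_{10}\) is non-negative, as a probability must be.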
It is then interesting to analyze the mapping of initial angular momentum \(L^{2}\) in the parton system onto the choice of angular momentum quantum numbers \(l\), in particular if states of several different \(l\) with degenerate energy quantum number \(n=2k+l\) are available [9]. Maps of the coalescence probabilities \(P_{10}\) are shown in Fig. 2.

Figure 1: Phase-space distributions \(W_{10}\) for quantum numbers \(k=1\), \(l=0\) as functions of \(r=|\mathbf{r}|\) and \(q=|\mathbf{q}|\). The white lines indicate nodes where \(W_{10}=0\). Distributions are shown for several values of the angle \(\theta\) given by \(\cos\theta=\mathbf{r}\cdot\mathbf{q}/rq\).

We now turn to the implementation of this formalism for the recombination of mesons within the Hybrid Hadronization framework. The phase-space probabilities \(P_{kl}\) are supplemented with probabilities \(P_{s}\) to form spin singlet or triplet states and the probability \(P_{c}\) to form color singlets. We have implemented light excited mesons up to \(n=4\). This means we include mesons up to \(G\)-wave for the radial ground state \(k=0\), and up to \(D\)-wave for a single radial excitation \(k=1\). This set of states exceeds the states listed as confirmed by the Particle Data Group [10]. To preserve unitarity we opt to keep the experimentally unconfirmed states. However, we are in need of the masses and strong decay branching ratios of these states. Missing masses are estimated by using the phenomenologically established scaling of squared masses with radial and spin quantum numbers, thus using linear interpolation and extrapolation. We consider decays into the simplest possible sets of stable hadrons plus additional pions, up to a maximum of five hadrons in total, which are allowed by the quantum numbers. Branching ratios are determined using phase-space weights and isospin algebra. As a random example, we pick the \(1^{3}F_{2}\) isospin triplet state, which would be known as \(a_{2}(1918)\). We estimate its mass to be 1.918 GeV and allow its decay into two 3-pion states (63.3%) and three 5-pion states (36.7%). The necessary data on the full set of meson states considered by us is collected in an XML particle data file following PYTHIA 8 standards, which enables PYTHIA to read in and decay the excited states [13]. With the formalism set, we look at a first example. We use PYTHIA 8 to generate \(e^{+}e^{-}\) di-jet events at \(\sqrt{s}=91.2\) GeV. The partons after final-state showers are extracted and fed into a standalone version of the Hybrid Hadronization code that supports the generation of the new physical meson resonances. Baryons are still treated as described in [7]. Hybrid Hadronization calls another instance of PYTHIA 8 to hand over remnant strings to fragment and includes hadrons from recombination. Both recombination and fragmentation hadrons can then decay according to settings in PYTHIA 8. We focus on one important preliminary result from this study. If one analyzes the quantum numbers of hadrons from recombination before any secondary decays, one finds the result shown in Fig. 3. The three panels depict the relative abundances of mesons as functions of the radial and orbital angular momentum quantum numbers \(k\) and \(l\), as well as the total angular momentum quantum number \(j\), which comes from the addition of the orbital and spin angular momentum operators, \({\bf J}={\bf L}+{\bf S}\).
The distribution of radial quantum numbers is strongly peaked at the ground state \(k=0\), with the next excited state adding about one third of that strength. On the other hand, \(P\)- and \(D\)-wave states (\(l=1\) and \(2\), respectively) are as numerous as \(S\)-wave states, showing the importance of orbital angular momentum excitations in these jet systems. Even \(F\)-wave states are created with roughly half the strength of \(S\)-waves. As a result, vector and tensor mesons are, by far, the dominant channels in recombination, with \(j=3\) mesons still being more than twice as numerous as spinless mesons.

Figure 2: Coalescence probabilities \(P_{10}\) for quantum numbers \(k=1\), \(l=0\) for two Gaussian wave packets interacting through an isotropic 3-D harmonic oscillator potential. Probabilities are shown as functions of relative coordinates \(x=|{\bf x}|\) and \(p=|{\bf p}|\) for several values of the angle \(\cos\theta={\bf x}\cdot{\bf p}/xp\) between the vectors.

In summary, we have developed a formalism that allows for the inclusion of physical excited mesons into the quark recombination model and, in extension, the Hybrid Hadronization model. To this end we have computed the probabilities for the coalescence of Gaussian wave packets into angular momentum eigenstates of 3-D isotropic oscillator potentials, approximating the dynamics of quark-antiquark systems at distances which are not too large. We have utilized a phase-space formalism which allows us to compute quark-antiquark coalescence probabilities from the mean distance of the partons in coordinate and momentum space. We have compiled masses and branching ratios for strong decays for mesons up to \(G\)-waves, using estimates for those not found in the Particle Data Book. We find that in \(e^{+}e^{-}\) collisions at \(\sqrt{s}=91.2\) GeV, using PYTHIA 8 and Hybrid Hadronization, meson states with excited orbital angular momenta dominate recombination. In the future, we plan to extend our formalism to heavy flavor mesons and baryons.

This work was supported by the U.S. National Science Foundation under awards 1812431 and 2111568, and under award 2004571 through a subcontract with Wayne State University. In addition, this work was supported by the U.S. Department of Energy under Award No. DE-SC0015266. RJF would like to thank the ExtreMe Matter Institute EMMI at the GSI Helmholtzzentrum für Schwerionenforschung, Darmstadt, for support and the Institute of Theoretical Physics at the University of Frankfurt and the Frankfurt Institute for Advanced Studies for their hospitality.
2302.07257
ChatCAD: Interactive Computer-Aided Diagnosis on Medical Image using Large Language Models
Large language models (LLMs) have recently demonstrated their potential in clinical applications, providing valuable medical knowledge and advice. For example, a large dialog LLM like ChatGPT has successfully passed part of the US medical licensing exam. However, LLMs currently have difficulty processing images, making it challenging to interpret information from medical images, which are rich in information that supports clinical decisions. On the other hand, computer-aided diagnosis (CAD) networks for medical images have seen significant success in the medical field by using advanced deep-learning algorithms to support clinical decision-making. This paper presents a method for integrating LLMs into medical-image CAD networks. The proposed framework uses LLMs to enhance the output of multiple CAD networks, such as diagnosis networks, lesion segmentation networks, and report generation networks, by summarizing and reorganizing the information presented in natural language text format. The goal is to merge the strengths of LLMs' medical domain knowledge and logical reasoning with the vision understanding capability of existing medical-image CAD models to create a more user-friendly and understandable system for patients compared to conventional CAD systems. In the future, LLM's medical knowledge can be also used to improve the performance of vision-based medical-image CAD models.
Sheng Wang, Zihao Zhao, Xi Ouyang, Qian Wang, Dinggang Shen
2023-02-14T18:54:06Z
http://arxiv.org/abs/2302.07257v1
# ChatCAD: Interactive Computer-Aided Diagnosis on Medical Image using Large Language Models

###### Abstract

Large language models (LLMs) have recently demonstrated their potential in clinical applications, providing valuable medical knowledge and advice. For example, a large dialog LLM like ChatGPT has successfully passed part of the US medical licensing exam. However, LLMs currently have difficulty processing images, making it challenging to interpret information from medical images, which are rich in information that supports clinical decisions. On the other hand, computer-aided diagnosis (CAD) networks for medical images have seen significant success in the medical field by using advanced deep-learning algorithms to support clinical decision-making. This paper presents a method for integrating LLMs into medical-image CAD networks. The proposed framework uses LLMs to enhance the output of multiple CAD networks, such as diagnosis networks, lesion segmentation networks, and report generation networks, by summarizing and reorganizing the information presented in natural language text format. The goal is to merge the strengths of LLMs' medical domain knowledge and logical reasoning with the vision understanding capability of existing medical-image CAD models to create a more user-friendly and understandable system for patients compared to conventional CAD systems. In the future, LLMs' medical knowledge can also be used to improve the performance of vision-based medical-image CAD models.

## 1 Introduction

Large Language Models (LLMs) are advanced artificial intelligence systems that have been trained on vast amounts of text data [5, 22]. These models use deep learning techniques to generate human-like responses, making them useful for a variety of tasks such as language translation, question answering, and text generation. LLMs like OpenAI's GPT-3 [3] have shown remarkable results in natural language processing and have the potential to revolutionize various industries, including marketing, education, and customer service. The ability of LLMs to process and understand large amounts of data has made them highly sought after for solving complex problems. In the medical domain, LLMs have demonstrated their potential as valuable tools for providing medical knowledge and advice. For instance, a large dialog-based LLM, such as ChatGPT [17], has demonstrated remarkable results in a critical evaluation of its medical knowledge. ChatGPT has successfully passed part of the US medical licensing exams, showcasing its potential to augment medical professionals in delivering care. Inspired by this remarkable progress in natural language processing, integrating LLMs to understand visual information in computer vision tasks is an interesting topic. Processing images involves understanding the spatial relationships between objects, recognizing patterns and textures, and extracting features that describe the objects in an image. These tasks require a deep understanding of visual information, which is challenging for LLMs that have been primarily trained on text data. This limitation presents a major challenge in the medical field, where images play a crucial role in supporting clinical decisions. Medical images, such as X-rays, CT scans and MRIs, are rich in information that can provide critical insights into a patient's condition. However, LLMs currently struggle to interpret and extract information from these images, limiting their ability to fully support clinical decision-making processes.
As the "pure" computer vision method, medical-image computer-aided diagnosis (CAD) networks have achieved significant success in supporting clinical decision-making processes in the medical field [24]. These networks leverage advanced deep learning algorithms to analyze medical images and provide valuable insights to support clinical decision-making. CAD networks have been designed specifically to handle the complexities of visual information in medical images, making them well-suited for tasks such as disease diagnosis [30], lesion segmentation [35], and report generation. These networks have been trained on large amounts of medical image data, allowing them to learn to recognize complex patterns and relationships in visual information that are specific to the medical field. The aim of this paper is to provide a scheme that combines the strengths of LLMs and CAD models. In this scheme, namely ChatCAD, the image is first fed into multiple networks, i.e., an image classification network, a lesion segmentation network, and a report generation network, as depicted in Figure 1. The results produced by classification or segmentation are a vector or a mask, which cannot be understood by LLMs. Therefore, we transform these results into the text representation form, as shown in the middle panel of Figure 1. These text-form results will then be concatenated together as a prompt "_Revise the report based on results from Network A and Network B_" for the LLM. The LLM then summarizes the results from all the CAD networks. As in the example in this figure, the refined report combines the findings from all three networks to provide a clear and concise summary of the patient's condition, highlighting the presence of pneumonia and the extent of the infection in the left lower lobe. In this way, the LLM could correct errors in the generated report based on the results from the CAD networks. Our experiment shows that our scheme can improve the diagnosis performance score of the state-of-the-art report generation methods by \(16.42\%\). A major benefit of our approach is the utilization of LLMs' robust logical reasoning capabilities to combine various decisions from multiple models. This allows us to fine-tune each model individually. For instance, in response to an emergency outbreak such as COVID-19, we can add a pneumonia classification model (differentiating between community-acquired pneumonia and COVID-19 [19]) using very few cases without affecting the other models. Since classifiers are usually less data-hungry than other models, we mark this model as "trainable" (green) in Figure 1. Another advantage of bootstrapping LLMs to CAD models is that their extensive and robust medical knowledge can be leveraged to provide interactive explanations and medical advice, as we illustrate in Figure 2. For example, based on an image and generated report, patients can inquire about appropriate treatment options (second panel) or define medical terms such as "airspace consolidation" (third panel). Or, given the patient's chief complaint (fourth panel), LLMs can explain why such a symptom happens. In this manner, patients can gain a deeper understanding of their symptoms, diagnosis, and treatment more efficiently. It can efficiently help patients to reduce consultation costs with clinical experts.

Figure 2: Interactive CAD with LLMs. This example uses ChatGPT as the LLM.
As the performances of CAD models and LLMs become increasingly improved in the future, the proposed scheme has the potential to improve the quality of radiology reports and enhance the feasibility of online healthcare services.

## 2 Related Works

### Large Language Models

Recent advances in Transformer architecture [28] and computing power have enabled the training of large language models with billions of parameters, leading to a significant improvement in their ability to summarize, translate, predict and generate human-like text [23, 25, 3]. Several domain-specific LLMs have been developed using general-purpose model weights and training schemes. BioBERT [13] and PubMedBERT [8] are examples of BERT [5] models trained on PubMed for biomedical data, while ClinicalBERT [2] was further trained on the MIMIC dataset and outperformed its predecessor. Med-PaLM [25] was developed in late 2022 using curated biomedical corpora and human feedback, and showed promising results, including a 67.6% accuracy on the MedQA exam. ChatGPT, which was not given supplementary medical training, passed all three parts of the USMLE and achieved over 50% accuracy across all exams and surpassed 60% accuracy in the majority of them [12].

### Vision-Language Model

A popular method of converting visual information into language is through image captioning. Deep learning-based image caption models [33, 9] can generate descriptive and coherent captions using large datasets such as Microsoft COCO and Flickr 30K. In medical image analysis, image captioning methods are employed to generate exam image reports. For example, Li et al. [14] implement explicit medical abnormality graph learning for report generation, and Zhang et al. [34] utilize a pre-constructed knowledge graph based on disease topics. Another line of research [29, 4] learns cross-modal patterns using self-attention architectures. The recent emergence of foundation models with more clinical knowledge holds promise as a potential future direction. Recently, with the increase in model size, advances in the field have shifted towards Vision-Language Pretraining (VLP) and utilizing pre-trained models. CLIP [21] merges visual and language information into a shared feature space, setting new state-of-the-art performance on various downstream tasks. Frozen [27] fine-tunes an image encoder, whose outputs serve as soft prompts for the language model. Flamingo [1] introduces cross-attention layers into the LLM to incorporate visual features, pre-training these new layers on billions of image-text pairs.

## 3 Method

### Bridge between Image and Text

The key idea is to utilize the powerful logical reasoning capabilities of the LLMs to make more robust disease diagnoses from medical images. Therefore, we need to build a bridge to translate medical images into texts as inputs for the LLM. Our strategy is straightforward: 1) Feed exam images (e.g., X-Ray) into trained CAD models to obtain outputs; 2) Translate these outputs (typically tensors) into natural language; 3) Use language models to summarize the results and make a final conclusion; 4) Based on the results from the visual models and the pre-trained medical knowledge in the language models, engage in conversation about symptoms, diagnosis, and treatment. In this section, we mainly discuss the details of our proposed scheme. An example is illustrated in Figure 3, where the output of a disease classifier is a 5-value vector indicating the probabilities of five diseases, i.e., Cardiomegaly, Edema, Consolidation, Atelectasis, and Pleural Effusion.
After that, we need to translate this result into a prompt sentence for the LLM. A natural way of prompting is to show all five kinds of pathology and their corresponding scores. We first tell the LLM "Higher disease score means higher possibility of illness" as the basic rule, in order to avoid misconceptions. Then, we represent the score of each disease as "${disease} score: ${score}", as shown in the upper-right panel (Prompt#1). Reports generated using Prompt#1 can be found in the second column of Figure 8 and Figure 9. One may notice that the LLMs are heavily influenced by Prompt#1, usually repeating all the numbers in the output. Reports generated from Prompt#1 are very different from radiologists' reports, since concrete diagnostic scores are not frequently used in clinical settings. To align with the language commonly used in clinical reports, we propose to transform the concrete scores into descriptions of disease severity, as shown in the lower-left panel (Prompt#2). Prompt#2 is designed using a grading system, which divides the scores into four categories: "No sign" [0.0-0.2), "Small possibility" [0.2-0.5), "Likely" [0.5-0.9), and "Definitely" [0.9 and above). These categories are used to describe the likelihood of each of the five observations. Prompt#3 is a concise one that reports only diseases with diagnosis scores higher than 0.5. If no prediction is made among all five diseases, the prompt will be "No Finding". Reports generated from Prompt#2 and Prompt#3 are generally acceptable and reasonable in most cases, as one can observe in Figure 8 and Figure 9. "Network A" is frequently referenced in the generated reports. Some prompt tricks, e.g., "_Revise the report based on results from Network A but without mentioning Network A_", can be applied to remove its mention. We do not utilize these tricks in the current experiments.

Figure 3: Prompts that bridge between tensor and text. We show three different prompt designs.
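To make the prompt construction above concrete, here is a minimal sketch of the Prompt#2 grading and the Prompt#3 threshold rule (our own illustration using the thresholds stated in the text; the function names and output phrasing are hypothetical, not taken from the ChatCAD code):

```python
# Severity grades for Prompt#2: upper bound (exclusive) -> label.
GRADES = [(0.2, "No sign"), (0.5, "Small possibility"),
          (0.9, "Likely"), (1.01, "Definitely")]
DISEASES = ["Cardiomegaly", "Edema", "Consolidation",
            "Atelectasis", "Pleural Effusion"]

def prompt2(scores):
    """Prompt#2: map each diagnostic score to a severity grade."""
    parts = []
    for disease, s in zip(DISEASES, scores):
        grade = next(label for bound, label in GRADES if s < bound)
        parts.append(f"{grade} of {disease}")
    return ". ".join(parts) + "."

def prompt3(scores, threshold=0.5):
    """Prompt#3: report only diseases with score above the threshold."""
    found = [d for d, s in zip(DISEASES, scores) if s > threshold]
    return ", ".join(found) if found else "No Finding"

# Example: prompt3([0.1, 0.7, 0.2, 0.55, 0.05]) -> "Edema, Atelectasis"
```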
### Dataset and Implementation

In this paper, we evaluate the performance of the combination of a report generation network (R2GenCMN [4]) and a classification network (PCAM [32]). The report generation networks (CvT2DistilGPT2 and R2GenCMN) are trained on the MIMIC-CXR dataset [11]. The MIMIC-CXR dataset is a large-scale public dataset of chest x-ray images with free-text radiology reports. It contains 377,110 images corresponding to 227,835 radiographic studies performed at the Beth Israel Deaconess Medical Center in Boston, MA. At the same time, the classifier is trained on the CheXpert dataset [10]. CheXpert is a large public dataset for chest radiograph interpretation, consisting of 224,316 chest radiographs of 65,240 patients. The reports from the LLMs are tested on the official test set of the MIMIC dataset. Due to the current limitation on ChatGPT usage (i.e., around 20 requests per hour), we are unable to test the entire test set of MIMIC-CXR at this time. Therefore, 300 cases are randomly selected, including 50 cases of Cardiomegaly, 50 cases of Edema, 50 cases of Consolidation, 50 cases of Atelectasis, 50 cases of Pleural effusion, and 50 cases with no findings. During the evaluation process, the text reports were converted to multi-class labels using CheXbert [26]. The LLMs are constantly updated to include more new knowledge and events, leading to the improvement of their reasoning capability. The GPT-3 model we use in this paper is _text-davinci-003_, which was released by OpenAI in February 2023, based on InstructGPT [18]. The maximum output length is set to 1024 and the temperature to 0.5. The ChatGPT [17] model used is the _Jan-30-2023_ version. In the section "Interactive and Understandable CAD", ChatGPT is used to generate the example. During our test, GPT-3 can also provide accurate and helpful chat.

## 4 Report Generation

### Quality Improvement of the Generated Report

In this section, we compare the performance of our proposed method with two other report-generation methods, i.e., R2GenCMN [4] and CvT2DistilGPT2 [16]. On the basis of clinical importance and prevalence, we focus on five kinds of observations. Three metrics, including precision (PR), recall (RC), and F1-score (F1), are reported in Table 1. The strengths of our method are clearly shown in Table 1. It has obvious advantages in RC and F1, and is only weaker than R2GenCMN in terms of PR. Our method has a relatively high Recall and F1-score on the MIMIC-CXR dataset. For all five kinds of diseases, both CvT2DistilGPT2 and R2GenCMN show inferior performance to our method concerning RC and F1. Specifically, their performances on Edema and Consolidation are rather low. Their RC values on Edema are 0.468 and 0.252, respectively, while our method achieves an RC value of 0.626 based on GPT-3. The same phenomenon can be observed in Consolidation, where the first two methods hold the values of 0.239 and 0.121 while ours (GPT-3) drastically outperforms them, with an RC value of 0.803. The R2GenCMN has a higher PR value compared to our method on three of five diseases. However, the cost of R2GenCMN's high performance on Precision is its weakness in the other two metrics, which can lead to biased report generation, e.g., seldom reporting any potential diseases. At the same time, our method has the highest F1 among all methods, and we believe it can be the most trustworthy report generator.

Figure 4: Length comparison of generated reports.

### How LLMs affect Report Quality

In this section, we compare the performance of different LLMs for report generation. We use Prompt#3 as the default prompt. OpenAI provides four different sizes of GPT-3 models through its publicly accessible API: text-ada-001, text-babbage-001, text-curie-001, and text-davinci-003. The smallest, text-ada-001, cannot generate meaningful reports and is therefore not included in this experiment. The sizes of the models have not been officially disclosed. The figures listed in Table 2 are approximate estimates based on the information in [7]. We report the F1-score of all observations in Table 2. It is noteworthy that language models struggle to perform well in clinical tasks when their model size is limited. The diagnostic performances of text-babbage-001 and text-curie-001 are subpar, as demonstrated by their low average F1-scores over five observations compared with the last two models. The improvement in diagnostic performance is evident in text-davinci-003, whose model size is hundreds of times larger than that of text-babbage-001. On average, the F1-score is improved from 0.471 to 0.591. ChatGPT is slightly better than text-davinci-003, achieving an improvement of 0.014, and their diagnostic abilities are comparable. Overall, the diagnostic capability of language models is proportional to their size, highlighting the critical role of the logical reasoning capability of LLMs. In our experiments, it can be observed that more capable models generally produce longer reports, as shown in Figure 6.
Meanwhile, nearly 40% of the reports generated by text-babbage-001 and nearly 15% of those generated by text-curie-001 contain no meaningful content.

## 5 Interactive and Understandable CAD

The proposed ChatCAD offers several benefits, including its ability to utilize the LLM's extensive and reliable medical knowledge to provide interactive explanations and advice. As shown in Figure 7, two examples of the interactive CAD are provided, with one chat discussing pleural effusion and the other addressing edema and its relationship to swelling. Through this approach, patients can gain a clearer understanding of their symptoms, diagnosis, and treatment options, leading to more efficient and cost-effective consultations with medical experts. As language models continue to advance and become more accurate with access to more trustworthy medical training data, ChatCAD has the potential to significantly enhance the quality of online healthcare services.

## 6 Limitations and Discussion

In this paper, we explore a novel framework, ChatCAD, introducing large language models into CAD. The proposed method, however, still has limitations to be solved. First, LLM-generated reports are not fully human-like. The LLM is likely to output sentences like "Network A's diagnosis prediction is consistent with the findings in the radiological report" or "The findings from Network A's diagnosis prediction are supported by the X-ray". This is reflected in natural-language similarity metrics when comparing to our baseline method: ChatCAD improves the diagnostic accuracy but lowers the BLEU score [20]. A promising way to address this issue is to add a module after ChatGPT to filter the generated reports, or to add a prompt such as "please do not mention Network A". Additionally, we only design three typical kinds of prompts that are intuitive, and there is room for improvement. LLMs are capable of solving logical reasoning problems without additional computational costs [31]. In the current ChatCAD, we do not provide the network with the patient's major complaint, since no such dataset is available. We believe the LLMs can process more complex information than what we currently provide. Better datasets and benchmarks are needed. Our experiments demonstrate the significant impact of language model size on diagnostic accuracy. Larger, more advanced, and more truthful LLMs, such as the upcoming GPT-4, may further improve the accuracy and report quality. However, the role of vision classifiers has not yet been explored, and additional research is necessary to determine whether models such as ViT [6] or SwinTransformer [15], which have more parameters, can deliver improved results. On the other hand, LLMs can also be used to help the training of vision models, e.g., by correcting the outputs of vision models using the medical knowledge learned by LLMs. In our work, we have only carried out a qualitative analysis of the prompt design instead of a quantitative one. Further in-depth investigations will be undertaken once the API for ChatGPT becomes available for use. Moreover, the specifics of this paper have not been discussed with any clinical professionals, and therefore it still lacks rigor in many places. We will improve it in subsequent versions.

Figure 5: F1-score comparison on 5 observations.

Figure 6: Length of reports generated by different models. "Babbage", "Curie" and "Davinci" represent the three GPT-3 models with different model sizes, i.e., text-babbage-001, text-curie-001, and text-davinci-003.
2301.11165
Timing performances of front-end electronics with 3D-trench silicon sensors
Detectors based on pixels with timing capabilities have gained increasing importance in recent years. Upcoming high-energy physics experiments at colliders require the use of time information in tracking, due to the expected levels of track densities in the foreseen experimental conditions. A promising solution to gain high-resolution performance at the sensor level is given by so-called 3D silicon sensors. The excellent intrinsic time resolution of a special case of 3D sensors, the trench type, is limited by residual non-uniformities in the duration of the induced currents. The intrinsic contribution of the sensor to the total time resolution of the system, when the detector is coupled to a front-end electronics, depends on the characteristics of the electronics itself and can be minimized with a proper design. This paper aims to analyze the achievable timing performance of a typically used front-end circuit, the Trans-Impedance Amplifier, considering different possible configurations. Evidence of the preferred modes of operation in sensor read-out for timing measurement will be given.
Gian Matteo Cossu, Adriano Lai
2023-01-26T15:09:39Z
http://arxiv.org/abs/2301.11165v1
# Timing performances of front-end electronics with 3D-trench silicon sensors

###### Abstract

Detectors based on pixels with timing capabilities have gained increasing importance in recent years. Upcoming high-energy physics experiments at colliders require the use of time information in tracking, due to the expected levels of track densities in the foreseen experimental conditions. A promising solution to gain high-resolution performance at the sensor level is given by so-called 3D silicon sensors. The excellent intrinsic time resolution of a special case of 3D sensors, the trench type, is limited by residual non-uniformities in the duration of the induced currents. The intrinsic contribution of the sensor to the total time resolution of the system, when the detector is coupled to a front-end electronics, depends on the characteristics of the electronics itself and can be minimized with a proper design. This paper aims to analyze the achievable timing performance of a typically used front-end circuit, the Trans-Impedance Amplifier, considering different possible configurations. Evidence of the preferred modes of operation in sensor read-out for timing measurement will be given.

Keywords: Front-end electronics for detector readout, Timing detectors, Analogue electronic circuits, 3D pixel sensors

###### Contents

* 1 Introduction
* 2 General contributions to the time resolution
* 3 Front-end electronics for timing
* 3.1 Condition I: Charge Sensitive TIA (\(\tau\gg t_{c}\))
* 3.2 Condition II: Fast-TIA (\(\tau\approx t_{c}\))
* 4 Intrinsic time resolution of 3D-trench sensor with different front-end electronics
* 4.1 Sensor contribution to the time resolution
* 4.2 Intrinsic time resolution for different discrimination algorithms
* 4.2.1 The timing Propagation Coefficient \(\mathcal{P}\)
* 4.2.2 Constant-fraction time resolution in CS-TIA
* 4.2.3 Leading-edge time resolution in Fast-TIA
* 4.2.4 Constant-fraction time resolution in Fast-TIA
* 4.2.5 Propagation coefficient \(\mathcal{P}\) for different \(\tau\)
* 5 Contributions to time resolution for a real 3D trench detector
* 5.1 Intrinsic time resolution of real 3D trench sensor
* 5.2 Front-end electronics jitter
* 6 Conclusions
* A Propagation coefficient \(\mathcal{P}\) for the Constant Fraction Discrimination case: \(t_{s}>t_{c}\)
* B Propagation coefficient \(\mathcal{P}\) for the Constant Fraction Discrimination case: \(t_{s}<t_{c}\)
* C Jitter approximation for CS-TIA and Fast-TIA
* C.1 CS-TIA
* C.2 Fast-TIA

## 1 Introduction

An important emerging requirement in experimental high-energy physics concerns the need of introducing time measurements at the level of the single pixel sensor. As an example, the Upgrade-II of the LHCb experiment at the CERN LHC, scheduled to take data in about a decade from now, has requirements of concurrent space and time resolutions of the order of 10 \(\mu\)m and at least 50 ps, respectively, per single pixel hit [1, 2]. Such a trend is foreseen to continue with even more severe requirements in the subsequent generation of collider experiments, where time resolutions in the range of 10-20 ps per hit will be necessary [2]. Radiation resistance against fluences approaching \(10^{17}\) 1-MeV \(n_{eq}/cm^{2}\) is also a fundamental requirement. 3D silicon sensors have demonstrated the capability to satisfy all these extreme requirements at the same time [3, 4]. A timing-optimized 3D sensor, designed with trench geometry (3D-trench in the following), was developed and produced within the TimeSPOT project [5].
The induced-current signals produced in 3D-trench sensors, although extremely fast (typical charge collection time of 200 ps), have different durations depending on the position of the ionizing tracks with respect to the electrodes [6, 7]. The effect on the output signals, once these current signals are processed by a specific electronics, can be analyzed with a simple model that takes into account the duration variations of the currents. This can be done by considering a specific circuit topology for the electronics. Usually, the signal produced by the sensor is processed by a trans-impedance amplifier (TIA in the following), characterized by a time constant which is related to the bandwidth of the system. Once the type of electronics is chosen, it is possible to understand how the timing performance changes as a consequence of the different durations of the set of current signals that the system has to process. In this respect, the present paper aims to describe the relationship between the so-called _intrinsic_ contribution of the sensor and the characteristics of the electronics, which affect both the time constant of the system and the electronic jitter. We start by discussing some general concepts in section 2. Section 3 is dedicated to describing the typical circuit solution for reading a capacitive sensor (i.e., the TIA amplifier), in two particular conditions defined according to the value of the time constant of the system with respect to the average duration of the currents in the sensor. Section 4 is dedicated to analysing the intrinsic resolution of the 3D sensor in the two circuit conditions described in section 3. Here we introduce the concept of the _timing propagation coefficient_ \(\mathcal{P}\) from sensor to electronics. The propagation coefficient \(\mathcal{P}\) allows us to highlight the link between the intrinsic contribution of the 3D sensor and the characteristics of the front-end electronics. It is also useful to understand under which conditions the timing performance of the system can be improved.

## 2 General contributions to the time resolution

When quoting the contributions to the uncertainty in the measurement of time, the following main quantities are normally considered:

\[\sigma_{\mathrm{t}}=\sqrt{\sigma_{\mathrm{tw}}^{2}+\sigma_{\mathrm{TDC}}^{2}+\sigma_{\mathrm{sens}}^{2}+\sigma_{\mathrm{ej}}^{2}}\ . \tag{2.1}\]

\(\sigma_{\mathrm{tw}}\) (_time-walk_) depends systematically on the fluctuations of the signal amplitude and can be corrected with dedicated signal processing techniques, such as Constant Fraction Discrimination (CFD) or Leading Edge Discrimination (LED) with time-over-threshold (TOT) measurement. \(\sigma_{\mathrm{TDC}}\) depends on the digital resolution of the electronics (conversion error) and can be made negligible by proper design. The \(\sigma_{\mathrm{sens}}\) term is what is commonly called the intrinsic resolution of the sensor and can depend on the effect of longitudinal non-uniformity in the energy deposit, due to delta rays, but also on differences in the signal shapes. The latter, in a 3D geometry, is mainly due to the different possible drift paths of the charge carriers in the sensor and produces variations in the signal current duration. This contribution depends only on the geometry of the sensitive volume and can be minimized if maximum uniformity in the electric field is obtained by sensor design [6; 7].
The \(\sigma_{\rm ej}\) term (electronic jitter) depends on the front-end electronics rise time and signal-to-noise ratio (SNR) and, together with \(\sigma_{\rm sens}\), represents the main contribution to the time resolution of 3D detectors. In order to obtain the best performance in terms of time resolution it is then necessary to minimize these two contributions:

\[\sigma_{\rm t}\sim\sqrt{\sigma_{\rm sens}^{2}+\sigma_{\rm ej}^{2}}\ . \tag{2.2}\]

In the next sections we will deal with these two fundamental contributions to the time resolution (i.e. \(\sigma_{\rm sens}\) and \(\sigma_{\rm ej}\)) and with how they change depending on the characteristics of the electronics.

## 3 Front-end electronics for timing

The traditional textbook solution for the read-out of capacitive sensors is the well-known Charge Sensitive Amplifier (CSA), possibly followed by a suitable number of differentiating (CR) and integrating (RC) stages, realizing a so-called _Shaper_ [8]. Actually, the CSA circuit is a particular case of a more general configuration, that is, the Trans-Impedance Amplifier (TIA) with shunt-shunt feedback (FB-TIA), schematically shown in Fig. 1 (left). A simplified implementation of the TIA amplifier can be realized by a common-source NMOS in a so-called self-biased topology (Fig. 1, right). The circuit can be solved analytically [9] using the equivalent small-signal model, thus finding the following second-order transfer function:

Figure 1: General representation of the FB-TIA circuit (left). NMOS FB-TIA with a self-biased topology (right). The current generator \(I_{D}(t)\) and the \(C_{D}\) capacitance model the operation of the capacitive sensor.

\[R_{m}(s)=\frac{R_{f}\,G_{0}}{1+G_{0}}\frac{(1-s\tau_{z})}{(1+s\tau)^{2}}\,, \tag{3.1}\]

where \(\tau_{z}=R^{*}C_{f}/G_{0}\) is the time constant corresponding to the zero and \(G_{0}=(g_{m}R^{*}-\frac{R^{*}}{R_{f}})\) is the DC gain (\(g_{m}\) is the trans-conductance of the transistor and \(R^{*}=R_{f}\,||\,R_{D}\)). The time constant \(\tau\), relative to the second-order pole, reads

\[\tau\approx\sqrt{\frac{R_{f}\,\xi}{g_{m}}}\,, \tag{3.2}\]

and depends on the quantity \(\xi\) that contains all the capacitances involved in the circuit,

\[\xi=(C_{L}C_{in}+C_{L}C_{f}+C_{in}C_{f}). \tag{3.3}\]

The trans-impedance in the \(s\)-domain \(R_{m}(s)\) (Eq. 3.1) needs to be convoluted with the sensor current signal \(I_{D}(s)\) in order to get the output voltage of the circuit. We consider here the simplified condition of a 3D-trench sensor (Fig. 2), operating with both charge carrier types at saturation velocity. In this case, the current has a shape that can be modelled as a simple rectangular pulse, having a width of duration \(t_{c}\) and an amplitude \(I_{0}\), such that the product \(I_{0}\cdot t_{c}\) equals the total charge \(Q_{in}\) deposited by the particle (Fig. 2). The time \(t_{c}\) is the _charge collection time_ and is defined as the time required for all charge carriers to reach their respective electrodes and stop inducing. This rectangular-shaped description does not take into account the different drift velocities of the carriers, but it is still a more realistic description compared to describing the current pulse as a pure Dirac delta function. The current can then be expressed in the \(s\)-domain as

\[I_{D}(s)=I_{0}\frac{1-e^{-st_{c}}}{s}\,, \tag{3.4}\]

so that the output voltage \(V_{out}(s)\) can be written as

Figure 2: Current pulse \(I_{D}(t)\) (left) for a 3D pixel sensor with trench geometry (right).
The simulated signal is obtained by a TCoDe simulation [6, 7]. The size of the pixel is \(55\times 55\times 150\ \mu m^{3}\).

\[V_{out}(s)=I_{0}\frac{1-e^{-st_{c}}}{s}\,\frac{R_{f}\,G_{0}}{1+G_{0}}\frac{(1-s\tau_{z})}{(1+s\tau)^{2}}. \tag{3.5}\]

Taking the inverse Laplace transform we have, in the time domain, the signal

\[V_{out}(t)=\mathcal{L}^{-1}\left\{I_{0}\frac{1-e^{-st_{c}}}{s}\,\frac{R_{f}\,G_{0}}{1+G_{0}}\frac{(1-s\tau_{z})}{(1+s\tau)^{2}}\right\}\,, \tag{3.6}\]

which corresponds to a voltage signal described by the function

\[V_{out}(t)=I_{0}\frac{R_{f}\,G_{0}}{1+G_{0}}\left\{\left[1-e^{-\frac{t}{\tau}}\left(1+\frac{t}{\tau}\left(1+\frac{\tau_{z}}{\tau}\right)\right)\right]-\theta(t-t_{c})\left[1-e^{-\frac{(t-t_{c})}{\tau}}\left(1+\frac{(t-t_{c})}{\tau}\left(1+\frac{\tau_{z}}{\tau}\right)\right)\right]\right\}. \tag{3.7}\]

It is interesting to analyze the behaviour of the TIA circuit in two different operating conditions, distinguished by the size of the circuit time constant \(\tau\) with respect to the duration of the input current, i.e. the _charge collection time_ \(t_{c}\). This is accomplished in the next two subsections.

### 3.1 Condition I: Charge Sensitive TIA (\(\tau\gg t_{c}\))

This condition (named CS-TIA in the following) is typical of a CSA-based input stage, where the value of the feedback resistor \(R_{f}\) is maximized to have a better SNR. This is an optimal configuration when the precision of the signal amplitude measurement is more important than preserving the signal speed and time resolution. In any case, the use of the CSA configuration often remains a convenient compromise between overall performance and power consumption. The bandwidth of the TIA is kept much smaller than the bandwidth of the current pulse and, consequently, the shape of the current signal is not preserved. With a given trans-conductance \(g_{m}\) of the input transistor, the output voltage quickly reaches the maximum achievable slope and then decreases exponentially with time. Since we have the factor \(\theta(t-t_{c})\) in the solution 3.7, we can consider two cases: when \(t<t_{c}\) we get the output signal

\[V_{out}(t)_{t<t_{c}}=I_{0}\frac{R_{f}\,G_{0}}{1+G_{0}}\left[1-e^{-\frac{t}{\tau}}\left(1+\frac{t}{\tau}\left(1+\frac{\tau_{z}}{\tau}\right)\right)\right]\,, \tag{3.8}\]

while when \(t>t_{c}\) the output signal expression becomes

\[V_{out}(t)_{t>t_{c}}=I_{0}\frac{R_{f}\,G_{0}}{1+G_{0}}e^{-\frac{t}{\tau}}\left[(e^{\frac{t_{c}}{\tau}}-1)\left(\frac{\tau_{z}+\tau}{\tau}\right)\frac{t}{\tau}+\frac{e^{\frac{t_{c}}{\tau}}\left(\tau^{2}-t_{c}(\tau-\tau_{z})\right)}{\tau^{2}}-1\right]. \tag{3.9}\]

We can then define

\[A=I_{0}\frac{R_{f}\,G_{0}}{1+G_{0}}\ \ ;\ \ B=(e^{\frac{t_{c}}{\tau}}-1)\left(\frac{\tau_{z}+\tau}{\tau}\right)\ \ ;\ \ C=\frac{e^{\frac{t_{c}}{\tau}}\left(\tau^{2}-t_{c}(\tau-\tau_{z})\right)}{\tau^{2}}-1\ .\]

Therefore, Eq. 3.9 can be written as

\[V_{out}(t)_{t>t_{c}}=Ae^{-\frac{t}{\tau}}\left(B\frac{t}{\tau}+C\right)\,. \tag{3.10}\]

The peaking time \(T_{peak}\) and voltage peak \(V_{peak}\) are

\[T_{peak}=\frac{B-C}{B}\tau=\frac{e^{\frac{t_{c}}{\tau}}(\tau_{z}(\tau-t_{c})+\tau t_{c})-\tau\tau_{z}}{(\tau_{z}+\tau)(e^{\frac{t_{c}}{\tau}}-1)}\ ; \tag{3.11}\]

\[V_{peak}=I_{0}\frac{R_{f}\,G_{0}}{1+G_{0}}e^{-\frac{T_{peak}}{\tau}}B\approx\frac{Q_{in}\cdot R_{f}}{e\tau}\ . \tag{3.12}\]

The output signal \(V_{out}\) is shown in Fig. 3. During charge collection, the signal initially takes slightly negative values (under-shoot).
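This behaviour can be checked numerically. The following minimal sketch evaluates Eq. (3.7) with illustrative component values (loosely inspired by Table 1; the numbers are assumptions, not a statement about any measured device), confirming the small under-shoot and a peaking time close to \(\tau\):

```python
import numpy as np

# Illustrative CS-TIA parameters (assumed): tau >> t_c
tau, tau_z, t_c = 10.8e-9, 91e-12, 200e-12   # seconds

def v_out(t):
    """Normalised CS-TIA response of Eq. (3.7), in units of A = I0*Rf*G0/(1+G0)."""
    k = 1 + tau_z / tau
    early = 1 - np.exp(-t / tau) * (1 + (t / tau) * k)
    late = np.where(t > t_c,
                    1 - np.exp(-(t - t_c) / tau) * (1 + ((t - t_c) / tau) * k),
                    0.0)
    return early - late

t = np.linspace(0, 5 * tau, 200001)
v = v_out(t)
print("minimum (under-shoot):", v.min())              # small negative value
print("peaking time / tau:", t[np.argmax(v)] / tau)   # close to 1, as expected
```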
For \(t>t_{c}\) the signal is positive with a positive derivative, reaching its maximum at \(T_{peak}\approx\tau\). We can anticipate here that this condition is not the best possible one when the speed of the sensor is to be fully exploited.

Figure 3: Calculated output voltage \(V_{out}\) in the CS-TIA configuration. Left: due to the high peaking time (\(T_{peak}\approx\tau\)) with respect to the average collection time, the current signal can be approximated by a delta function. Right: detail of the under-shoot during \(t<t_{c}\).

If the duration of the induced current signal is much shorter than the time constant (\(\tau\gg t_{c}\)), the slope of the voltage output signal becomes almost independent of \(t_{c}\). As shown in Fig. 4, different charge collection times lead to a delay of the output signal, but the initial slope stays approximately constant. The maximum slope for every current is reached at time \(t_{c}\) and then decreases exponentially. The maximum signal slope is

\[\frac{dV}{dt}\approx\frac{Q_{in}\cdot g_{m}}{\xi}\,, \tag{3.13}\]

where \(Q_{in}\) is the total charge (for the rectangular pulse \(Q_{in}=I_{0}t_{c}\)), while \(\xi\) has been defined in Eq. 3.3.

### 3.2 Condition II: Fast-TIA (\(\tau\approx t_{c}\))

A possible solution for realizing a FB-TIA having a time constant of the same order of the charge collection time \(t_{c}\) is using the same self-biased scheme of Fig. 1 implemented with a high-bandwidth Si-Ge bipolar transistor stage, as illustrated in Fig. 5. It is thus possible to take advantage of the benefits of Si-Ge devices, which allow reaching small input capacitances and transition frequencies of the order of 100 GHz also for discrete-component circuit solutions. Also in this case the circuit can be solved analytically [9], finding the following transfer function

\[R_{m}(s)=\frac{R_{m_{0}}}{(1+s\tau)^{2}}\,, \tag{3.14}\]

where \(R_{m_{0}}\) is the DC trans-impedance

\[R_{m_{0}}=\frac{r_{\pi}g_{m}R_{C}R_{f}-r_{\pi}R_{C}}{R_{f}+R_{C}+r_{\pi}(1+g_{m}R_{C})}\,, \tag{3.15}\]

and the time constant is

\[\tau=\sqrt{\frac{R_{f}\,R_{C}r_{\pi}\xi}{r_{\pi}(1+g_{m}R_{C})+R_{C}+R_{f}}}\approx\sqrt{\frac{R_{m_{0}}\xi}{g_{m}}}. \tag{3.16}\]

Figure 4: Output voltage \(V_{out}\) in the CS-TIA configuration for current pulses with different duration \(t_{c}\) and same charge \(Q_{in}=I_{0}\cdot t_{c}\) (left). Detail of the under-shoot and slope of the signal for different values of \(t_{c}\) (right).

The output voltage is given by the convolution with the current pulse and reads

\[V_{out}(t)=I_{0}R_{m_{0}}\left\{\left[1-e^{-\frac{t}{\tau}}\left(1+\frac{t}{\tau}\right)\right]-\theta(t-t_{c})\left[1-e^{-\frac{(t-t_{c})}{\tau}}\left(1+\frac{(t-t_{c})}{\tau}\right)\right]\right\}\,. \tag{3.17}\]

Considering separately the two regimes with respect to the charge collection time \(t_{c}\),

\[V_{out}(t)=\begin{cases}I_{0}R_{m_{0}}(1-e^{-\frac{t}{\tau}}(1+\frac{t}{\tau}))&\text{if $t<t_{c}$,}\\ \\ I_{0}R_{m_{0}}e^{-\frac{t}{\tau}}(B\frac{t}{\tau}+C)&\text{if $t>t_{c}$,}\end{cases} \tag{3.18}\]

where

\[B=e^{\frac{t_{c}}{\tau}}-1\ \ ;\ \ C=e^{\frac{t_{c}}{\tau}}\left(1-\frac{t_{c}}{\tau}\right)-1\ .\]

Taking the derivative of the voltage signal for \(t<t_{c}\) we can find the slope of the signal:

\[V^{\prime}_{out}(t)_{t<t_{c}}=\frac{I_{0}R_{m_{0}}}{\tau^{2}}e^{-\frac{t}{\tau}}t. \tag{3.19}\]

The slope is maximum at \(t=t_{c}\), so we can write

\[V^{\prime}_{out}(t_{c})=\frac{Q_{in}R_{m_{0}}}{\tau^{2}e}. \tag{3.20}\]
The peaking time \(T_{peak}\) is

\[T_{peak}=\frac{e^{\frac{t_{c}}{\tau}}t_{c}}{e^{\frac{t_{c}}{\tau}}-1}. \tag{3.21}\]

The derivative is shown in Fig. 6. Also in this case the maximum value of the slope is reached at time \(t_{c}\), but here we are at a much higher fraction of the voltage signal. In particular,

\[V_{peak}=V_{out}(T_{peak})_{t>t_{c}}=I_{0}R_{m_{0}}e^{-\frac{e}{e-1}}(e-1)\approx 0.353\cdot I_{0}R_{m_{0}}\,, \tag{3.22}\]

while the voltage at time \(t=t_{c}\), that is, the maximum-slope voltage, is

\[V_{Slope}=I_{0}R_{m_{0}}\left(\frac{e-2}{e}\right)\approx 0.264\cdot I_{0}R_{m_{0}}. \tag{3.23}\]

Figure 5: Schematic of the FB-TIA with bipolar NPN transistor (left) and corresponding small-signal model (right).

Taking the ratio between \(V_{Slope}\) and \(V_{peak}\) we find

\[V_{Slope}\approx 0.748\cdot V_{peak}. \tag{3.24}\]

In the condition \(\tau\approx t_{c}\), the maximum slope is therefore reached at about 75% of the peaking value (Fig. 6).

## 4 Intrinsic time resolution of 3D-trench sensor with different front-end electronics

In the present section, we analyze the time resolution performance of the two configurations analysed above, that is, the CS-TIA (\(\tau\gg t_{c}\)) and the Fast-TIA (\(\tau\approx t_{c}\)) configurations. It is of particular interest to study how the characteristics of the electronics affect the _primitive_ sensor time resolution. The latter is usually referred to as _intrinsic resolution_, and is strictly related to the spread of the charge collection time (CCT) distribution, which in 3D sensors can be relatively small (standard deviations in the range of tens of ps).

### 4.1 Sensor contribution to the time resolution

We consider an ideal 3D geometry with flat parallel-plate electrodes (Fig. 8). This choice is motivated by the high intrinsic speed of this kind of geometry [10]. In this case \(t_{c}\) depends on the hit position of the impinging particle. For tracks closer to the electrode at the higher potential, the sensor will collect electrons very quickly, while holes will induce for a longer time, since they have to travel a longer distance and they move slower. The same argument can be used in the opposite case, with tracks close to the electrode at lower potential giving a short current signal from holes and a longer pulse from electrons.

Figure 6: Example of the output signal and its derivative for a Fast-TIA.

The shape of such current signals (Fig. 7, right) is very similar to simple rectangular pulses, with the difference that, since the two carriers typically induce for different times, when one has finished inducing we get a drop in the current amplitude. In the case where the charge collection time is minimum we get exactly a rectangular pulse. We can assume for now that this difference in shape does not affect very much the obtainable intrinsic resolution, which will instead be dominated by the dispersion of the current signal duration \(t_{c}\). This will allow us, in the following, to calculate the obtainable intrinsic resolution using the voltage signal expressions derived in the previous sections. When the electric fields are strong enough for both charge carriers to reach their respective saturation velocities \(v_{e}\) and \(v_{h}\), the electron and hole speeds in silicon become similar and are the same throughout the sensor. This is equivalent to neglecting weighting-field non-uniformity.
We will have a minimum \(t_{c}^{min}\) and a maximum \(t_{c}^{max}\) for the charge collection time \(t_{c}\) (see Fig. 4, right). Assuming that the distance between the electrodes is \(d=20\)\(\mu m\) and using reasonable values for the silicon saturation velocities, we obtain

\[t_{c}=\begin{cases}t_{c}^{min}=\frac{d}{v_{e}+v_{h}}\sim 100\text{ ps}&\text{if }x=\frac{v_{e}d}{v_{e}+v_{h}}\\ t_{c}^{max}=\frac{d}{v_{h}}\sim 210\text{ ps}&\text{if }x=d\end{cases} \tag{4.1}\]

The current signal durations \(t_{c}\) of the detector populate a distribution that has an average \(\overline{t_{c}}\) and a certain standard deviation \(\sigma_{t_{c}}\). For simplicity we can assume that the charge collection times generated across the detector width are all equally probable (actually, shorter charge collection times are more probable, as electrons move faster; Fig. 8, right). Under this assumption, and with both charge carriers at the same saturation velocity, we obtain a rectangular distribution corresponding, in the ideal case, to a dispersion

\[\sigma_{t_{c}}\sim\frac{t_{c}^{max}-t_{c}^{min}}{\sqrt{12}}\approx 32\text{ ps}. \tag{4.2}\]

This is not yet the intrinsic resolution of the sensor, but we will see in the next subsections that the final resolution of 3D-trench sensors is strongly related to the standard deviation \(\sigma_{t_{c}}\) through a well-defined _propagation coefficient_.

Figure 7: Sensor with 3D geometry seen from the top; the colored dots represent the points where the tracks have passed (left) and the corresponding ideal currents induced on the sensor (right).
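The arithmetic of Eqs. (4.1)-(4.2) is easy to reproduce; a minimal sketch, assuming the saturation velocities quoted later for the simulation of Section 4.2.2:

```python
from math import sqrt

d = 20.0                   # inter-electrode distance, micrometers
v_e, v_h = 0.100, 0.095    # assumed saturation velocities, micrometers/ps

t_min = d / (v_e + v_h)    # ~103 ps, track at x = v_e*d/(v_e + v_h)
t_max = d / v_h            # ~211 ps, track at x = d
sigma_tc = (t_max - t_min) / sqrt(12)   # rectangular distribution, Eq. (4.2)
# ~31 ps, consistent with the ~32 ps quoted in the text
print(f"t_c in [{t_min:.0f}, {t_max:.0f}] ps -> sigma_tc = {sigma_tc:.0f} ps")
```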
\tag{4.5}\] The introduction the propagation coefficient \(\mathcal{P}\) can help us to understand how the CCT distribution is decisive about the final performance of the system. In the following, we are interested to see how \(\mathcal{P}\) changes for the two cases: CS-TIA and Fast-TIA. #### 4.2.2 Constant-fraction time resolution in CS-TIA When using a CFD, the threshold is always set at the same fraction of the maximum value of the output voltage. This allows us to correct the time-walk given by the signal with the same charge collection times but different charges. In the CS-TIA case, being the electronics slow with respect to the duration of the pulse, this correction is essential to have good performance, otherwise the time resolution would strongly worsen. If a leading edge discrimination is used instead, this needs to be supported by a dedicated processing to compensate for the time-walk fluctuations, such as measuring the time over threshold (TOT), which can allow to reach an equivalent performance as the CFD. For this reason, we limit the calculation of the propagation coefficient in the CS-TIA case only to the CFD case, showing that the obtainable intrinsic resolution is independent of the chosen threshold. The voltage at the threshold can be written as \[V_{out}\left(t_{s}\right)_{t>t_{c}}=\alpha V_{out}(T_{peak})\, \tag{4.6}\] \(t_{s}\) is the time at threshold, set at the fraction \(\alpha\) of the voltage peak. Taking the derivative of both sides of Eq. 4.6 we can find the expression for the propagation coefficient \(\mathcal{P}\) in the CS-TIA case which is given by (see appendix A for the detailed derivation): \[\mathcal{P}=\frac{\partial T_{peak}}{\partial t_{c}}\, \tag{4.7}\] \[\mathcal{P}=\frac{e^{\frac{\mathrm{i}c}{\tau}}(\tau-\tau_{z})(\tau e^{\frac{ \mathrm{i}c}{\tau}}-\tau-t_{c})}{\tau(\tau+\tau_{z})(e^{\frac{\mathrm{i}c}{\tau }}-1)^{2}}. \tag{4.8}\] The value of the propagation coefficient \(\mathcal{P}\) as a function of the ratio \(\frac{\tau}{t_{c}}\) is shown in Fig. 9: for time constant \(\tau\) of the electronics much greater of the average collection time \(t_{c}\), \(\mathcal{P}\) approaches the value \(\frac{1}{2}\). If we consider the presence of the zero with time constant \(\tau_{z}\) in the transfer function 3.1, we find that the propagation coefficient has a smaller value. For slow electronics \(\tau_{z}\) can help to obtain a slightly better timing resolution. For very fast electronics the contribution of \(\tau_{z}\) can usually be neglected since its zero goes to extremely high frequency. The fact that the propagation coefficient for relatively slow electronics approaches \(\frac{1}{2}\) is not surprising. Considering the discussion in [11], when the time constant is much greater than the duration of the signal, the electronics start behaving as a system able to measure accurately the _time centroid_\(t_{cog}\), defined in Eq. 4.9. The intrinsic resolution \(\sigma_{sens}\) is, in this case, given by the standard deviation of all the time centroids \(\sigma_{t_{cog}}\). \[t_{cog}=\frac{\int I(t)\cdot tdt}{\int I(t)dt}. \tag{4.9}\] The response of the CS-TIA is the pulse response \(h(t)\) delayed by the centroid time \[V_{out}(t)\approx h(t-t_{cog})\, \tag{4.10}\] and the peaking time of the voltage output becomes \[T_{peak}\approx\tau+t_{cog}. \tag{4.11}\] Considering rectangular pulses it means that the time centroid is exactly \[t_{cog}=\frac{t_{c}}{2}. 
\tag{4.12}\] The peaking time for a rectangular pulse of duration \(t_{c}\) is then \[T_{peak}\approx\tau+\frac{t_{c}}{2}. \tag{4.13}\] Since the propagation coefficient \(\mathcal{P}\) is given by 4.7, the resolution of the system can be written as \[\sigma_{t_{s}}=\frac{\sigma_{t_{c}}}{2}. \tag{4.14}\] In this special case, the CCT distribution and the distribution of all the time centroids are both rectangular distributions, with the difference that all the time centroids are exactly half the charge collection times \(t_{c}\), which leads also to half the standard deviation. This suggests a possible criterion for which considering charge-sensitive electronics and 3D sensors with a population of Figure 9: Propagation coefficient as a function of the ratio \(\frac{\tau}{t_{c}}\) obtained using Eq. 4.8. The red curve refers to the case where the time constant \(\tau_{z}=0\). current signals with different duration, the time resolution can be estimated by taking half the standard deviation of the charge collection time distribution. As an application of this theory to CS-TIA we can show here a simulation performed by means of the TFBoost code [12; 13], where we make the convolution of the current signal from an ideal 3D sensor, as those in Fig. 7, having a distance between the electrodes \(d=20\mu m\) and saturation velocities equal to \(v_{e}=0.1\frac{\mu\text{m}}{\text{ps}}\) and \(v_{h}=0.095\frac{\mu\text{m}}{\text{ps}}\). The front-end electronics used in the simulation has the same characteristics of the Timespot1 ASIC [14], corresponding to what listed in table 1. The results of the simulation are shown in Fig. 10. We find a standard deviation of the CCT equal to \(\sigma_{t_{c}}\sim 30ps\) and a resolution with a CFD with the threshold set at 35% of the voltage peak equal to \(\sigma_{t_{s}}\sim 15.5ps\). Taking the ratio of the two standard deviations we find that the propagation coefficient is equal to \[\mathcal{P}(\tau\sim 11ns)\sim 0.52. \tag{4.15}\] #### 4.2.3 Leading-edge time resolution in Fast-TIA The propagation coefficient calculated for the CS-TIA in the previous section increases if we have faster electronics with smaller time constant \(\tau\) (Fig. 9). This could induce thinking that a faster electronics would lead to a worse temporal resolution with respect to the CS-TIA case. Indeed this could be true, but only in specific cases. When the time constant starts being of the same order as the average duration of current pulse \(t_{c}\) the resolution becomes dependent on the threshold position and the propagation coefficient \(\mathcal{P}\) changes with it. The shape of the voltage signal changes for currents with different duration \(t_{c}\) and we start suffering from ballistic deficit (Fig. 11). Signals with the same input charge will have variations of the voltage amplitude and, more important, the slope \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \(R_{f}\) & \(R_{D}\) & \(C_{in}\) & \(C_{f}\) & \(C_{L}\) \\ \hline \(3M\Omega\) & \(570k\Omega\) & \(100fF\) & \(5fF\) & \(21fF\) \\ \hline \end{tabular} \begin{tabular}{|c|c|c|c|} \hline \(g_{m}\) & \(\tau\) & \(\tau_{z}\) \\ \hline \(55\cdot 10^{-6}S\) & \(10.8ns\) & \(91ps\) \\ \hline \end{tabular} \end{table} Table 1: CS-TIA values used in the simulation. Figure 10: CCT distribution of the currents used in the simulation (left), Time of arrival distribution obtained with TFBoost (right) of the signal changes significantly. 
By decreasing the time constant \(\tau\) the electronics is increasingly capable to follow the input currents and the peaking time \(T_{peak}\) does not measure the time centroid anymore, but gets closer to measuring the duration of the current pulse \(t_{c}\). Considering the case when \(\tau\approx t_{c}\), we use now the expression of \(V_{out}(t)_{t<t_{c}}\) and set it equal to the chosen voltage threshold \(V_{th}\). If we approximate the exponential to the second order we can solve for the threshold time \(t_{s}\) \[V_{th}=I_{0}R_{m_{0}}(1-e^{-\frac{t_{s}}{\tau}}(1+\frac{t_{s}}{ \tau}))\, \tag{4.16}\] \[V_{th}\sim I_{0}R_{m_{0}}(1-(1-\frac{t_{s}}{\tau})(1+\frac{t_{s} }{\tau}))\,\] (4.17) \[t_{s}\sim\tau\sqrt{\frac{V_{th}t_{c}}{Q_{in}R_{m_{0}}}}. \tag{4.18}\] To calculate the propagation coefficient \(\mathcal{P}\) we propagate the fluctuations finding \[\mathcal{P}(t_{s})\sim\frac{1}{2}\frac{t_{s}}{t_{c}}. \tag{4.19}\] The resolution at the threshold \(V_{th}\) is Figure 11: Voltage signals with Fast-TIA for currents with different charge collection time \(t_{c}\). \[\sigma_{t_{s}}=\mathcal{P}(t_{s})\sigma_{t_{c}}. \tag{4.20}\] The time resolution \(\sigma_{t_{s}}\) is now a function of the threshold time \(t_{s}\). To understand what this means we can consider Fig. 11: our sensor is producing currents with rectangular shapes and different duration with a certain average \(\overline{t_{c}}\); the current with average duration \(\overline{t_{c}}\) will cross the threshold \(V_{th}\) at time \(t_{s}(\overline{t_{c}})\). Currents with shorter charge collection time will reach the threshold before time \(t_{s}(\overline{t_{c}})\) while longer current pulse after time \(t_{s}(\overline{t_{c}})\). If the charge collection times do not change in a large range we can assume that this variation is linear so that \[t_{s}(t_{c})\sim t_{s}(\overline{t_{c}})+\mathcal{P}(V_{th},\overline{t_{c}}) (t_{c}-\overline{t_{c}})\, \tag{4.21}\] the propagation coefficient can also be written using Eq. 4.18 \[\mathcal{P}(V_{th},\overline{t_{c}})\sim\frac{1}{2}\sqrt{\frac{V_{th} \overline{t_{c}}}{Q_{in}R_{m_{0}}}}\, \tag{4.22}\] where now we made explicit the dependence on the chosen voltage threshold. The time at the threshold \(t_{s}\) and the propagation coefficient \(\mathcal{P}\) grow as the square root of the voltage threshold \(V_{th}\). By setting \(V_{th}\) to a small value according to a \(t_{s}<t_{c}\) condition, we can reduce the sensor's intrinsic jitter. However, the approximation of equation 4.17 tends to underestimate the fluctuations, being valid only for very small threshold values. For a more complete understanding, we can consider the time \(t\) as a function of \(t_{c}\) and derive both sides of the equation 4.16 as follows \[\frac{\partial}{\partial t_{c}}\Bigg{(}\frac{V_{th}t_{c}}{Q_{in}R_{m_{0}}} \Bigg{)}=\frac{\partial}{\partial t_{c}}\Bigg{(}1-e^{-\frac{t_{s}}{\tau}} \Big{(}1+\frac{t_{s}}{\tau}\Big{)}\Bigg{)}\, \tag{4.23}\] \[\frac{V_{th}}{Q_{in}R_{m_{0}}}=\frac{e^{\frac{t_{s}}{\tau}}t_{s}}{\tau^{2}} \frac{\partial t_{s}}{\partial t_{c}}\, \tag{4.24}\] solving for \(\frac{\partial t_{s}}{\partial t_{c}}\) we can write the derivative as \[\frac{\partial t_{s}}{\partial t_{c}}\equiv\frac{V_{th}}{V^{\prime}(t_{s})t_{ c}}. \tag{4.25}\] The resolution at the threshold becomes \[\sigma_{t_{s}}=\frac{V_{th}}{V^{\prime}(t_{s})t_{c}}\sigma_{t_{c}}. 
\tag{4.26}\] Setting the threshold to the value corresponding to the maximum slope condition when \(t_{s}=\overline{t_{c}}\), we can use equations 3.20 and 3.23 and get the time resolution \[\sigma_{t_{s}}\approx(e-2)\sigma_{t_{c}}\sim 0.71\cdot\sigma_{t_{c}}. \tag{4.27}\] The condition \(V_{th}=V_{\begin{subarray}{c}Max\\ Slope\end{subarray}}\) implies transporting about 70% of the charge collection time fluctuations to the time resolution at the threshold. The total time resolution, in this case, could be dominated by the sensor contribution with respect to the electronic jitter that depends on the amount of charge generated (see section 5.2). Using equation 4.16 and the definition of the slope, the propagation coefficient becomes \[\mathcal{P}(t_{s})=\frac{\tau^{2}(e^{\frac{t_{c}}{\tau}}-1)-\tau t_{s}}{t_{s} \cdot t_{c}}. \tag{4.28}\] Fig. 12 shows a plot of the propagation coefficient as a function of the fraction \(\alpha\) of the signal for \(t_{c}=\overline{t_{c}}\). For the LE, only the value of \(\mathcal{P}\) for \(t_{s}<\overline{t_{c}}\) are shown. A higher threshold could be also chosen but this would always lead to worse results in terms of intrinsic resolution. It must be also pointed out that if we choose a threshold higher than a certain value, with the leading edge discrimination we could lose some of the events with larger charge collection time (Fig. 10). This happens when the chosen threshold is higher than the voltage reached at the maximum peaking time \(T_{peak}(t_{c_{max}})\) relative to the maximum charge collection time \(t_{c_{max}}\). Even for fast electronics, the LE will be affected by time walk fluctuation, but they can be tolerable allowing us to still reach very good performances, as shown in [10]. Of course, best performances can be obtained if discrimination with time walk correction is used, such as the constant fraction method that will be treated in the next section. #### 4.2.4 Constant-fraction time resolution in Fast-TIA To find the propagation coefficient \(\mathcal{P}\) for the constant fraction case we consider the following equation \[V_{out}(t_{s})_{t<t_{c}}=\alpha V_{out}(T_{peak}). \tag{4.29}\] The derivation is similar to the case for the CFD of the CS-TIA and is done in appendix B. Similarly to the LE for the Fast-TIA we find that the propagation coefficient \(\mathcal{P}\) is a function of the Figure 12: Propagation coefficient \(\mathcal{P}\) for the leading edge discrimination and for the constant fraction discrimination as a function of the threshold \(\alpha\) time \(t_{s}\) that is the time at threshold fixed at the fraction \(\alpha\) of the voltage peak \(V_{peak}=V_{out}(T_{peak})\). In particular, \[\mathcal{P}(t_{s})_{t_{s}<t_{c}}=\frac{t_{c}}{\tau}\frac{e^{\frac{t_{c}}{\tau} }(\tau e^{\frac{t_{s}}{\tau}}-t_{s}-\tau)}{t_{s}(e^{\frac{t_{c}}{\tau}}-1)^{2}}\, \tag{4.30}\] while for \(t_{s}>t_{c}\), we find that the propagation coefficient \(\mathcal{P}\) is independent of \(t_{s}\) and is given by \[\mathcal{P}_{t_{s}>t_{c}}=\frac{\partial T_{peak}}{\partial t_{c}}\, \tag{4.31}\] \[\mathcal{P}_{t_{s}>t_{c}}=\frac{e^{\frac{t_{c}}{\tau}}(\tau e^{\frac{t_{c}}{ \tau}}-\tau-t_{c})}{\tau(e^{\frac{t_{c}}{\tau}}-1)^{2}}. \tag{4.32}\] Eq. 4.29 can be solved numerically to find the time \(t_{s}\) as a function of the fraction \(\alpha\) so that also the propagation coefficient can be expressed as a function of the fraction of the CFD. Fig. 
12 shows the propagation coefficient \(\mathcal{P}(\alpha)\) that as expected is a growing function of the threshold. For fractions \(\alpha\) higher than \(V_{\begin{subarray}{c}Max\\ Slope\end{subarray}}\) the value of \(\mathcal{P}\) stays constant and the front-end propagates about 70% of the charge collection times fluctuation \(\sigma_{t_{c}}\). Fig. 12 shows also a comparison between LE and CFD discrimination for time constant \(\tau\approx t_{c}\) where it can be seen that the two discrimination algorithms perform in a similar manner with fast electronics if we always deposit the same charge. If charge fluctuations are present, we will have an additional time walk of the time at threshold \(t_{s}\) that can be eliminated with the constant fraction method or with a Leading Edge assisted with some kind of compensation (e.g. amplitude correction or TOT correction). #### 4.2.5 Propagation coefficient \(\mathcal{P}\) for different \(\tau\) Up to now we have considered the value that the propagation coefficient \(\mathcal{P}\) assumes in two particular cases, namely the CS-TIA where the time constant \(\tau\gg t_{c}\) and the Fast-TIA in which \(\tau\approx t_{c}\). However, as Eq. 3.2 and 3.16 shows, the value of the time constant \(\tau\) depends on several factors: the capacitances involved, through the quantity \(\xi\), in particular the capacitance \(C_{D}\) of the sensor that can be the dominant one in certain configurations; the value of the DC trans-impedance and finally the trans-conductance \(g_{m}\) of the transistor. The trans-conductance \(g_{m}\) is often the most limiting factor since is directly related to the power consumption and, in circuits with very strict power constraints it could be impossible to reach the value of the time constant \(\tau\) of the same order of the average duration of the current pulse. To understand what happens if more speed of the electronics, could be exploited, Fig.13 (top) the value of the propagation coefficient \(\mathcal{P}\) is shown as a function of the threshold \(\alpha\) for different time constant \(\tau\). To take advantage of the reduction of the propagation coefficient as a function of the threshold, the time constant can be several times greater than the average charge collection time \(t_{c}\) provided that a high SNR is given. As an example, let us consider a system with an SNR\(=50\); setting the threshold to \(5\sigma_{v}\) to avoid false hits, the threshold would be at 10% of the voltage peak. If we look at Fig.13 (bottom), that shows the propagation coefficient \(\mathcal{P}\) as a function of the ratio \(\frac{\tau}{t_{c}}\), with a time constant five times greater than the average collection time \(t_{c}\), we still be able to halve the propagation coefficient \(\mathcal{P}\) reaching an intrinsic time resolution \(\sigma_{t_{\mathrm{s}}}=\sigma_{t_{c}}/4\) instead of half \(\sigma_{t_{c}}\). ## 5 Contributions to time resolution for a real 3D trench detector This section summarizes the two contributions to the time resolution of real 3D trench sensors, as seen in Eq. 2. The intrinsic contribution of a real sensor can be quite different from what is seen in the ideal case and requires accurate simulations to obtain the induced currents. The contribution due to the electronic jitter is instead reported using the known equations. 
### Intrinsic time resolution of real 3D trench sensor A description of a 3D sensor using an ideal 3D parallel plate geometry helps to understand one of the most important contributions to the intrinsic resolution of this type of sensors, namely, the fact Figure 13: Propagation coefficient \(\mathcal{P}\) for the constant fraction discrimination as a function of the threshold \(\alpha\) for different time constant \(\tau\) (top), Propagation coefficient \(\mathcal{P}\) for the constant fraction discrimination as a function of the ratio \(\frac{\tau}{t_{c}}\) for different threshold \(\alpha\) (bottom) that the currents generated have different durations due to the different drift times of the carriers depending of the position of the impinging particle. This leads to obtaining a distribution of these durations \(t_{c}\) which is the CCT distribution. In a real 3D detector there are other aspects that can affect the duration and the shape of the signals, in particular, diffusion and variation of the electric and weighting fields. When all the effects are considered, the shape and duration of the currents inside the detector can be quite different to what we would expect considering an ideal 3D parallel plate geometry with carriers at saturation velocities and in a constant weighting field. Indeed, a great effort has been made in order to optimize the geometry of this type of sensors in order to optimize their performance [6; 7]. Fig. 14 shows the CCT distribution obtained with a simulation with the software TCoDe [6; 7] of the 3D detector developed within the TimeSPOT project [5]. The distribution shows a standard deviation \(\sigma_{t_{c}}\sim 53\) ps. Considering the detector geometry, the distance between electrodes and the saturation velocities used previously in section 4.1, we would expect a value of \(\sigma_{t_{c}}^{Ideal}\sim 36\) ps. Therefore, also in an strongly optimized detector the value of \(\sigma_{t_{c}}\) is indeed dominated by the fact that we have different duration because of the different drift distances that the carriers have to cover but all the other contributions (i.e. weighting field, diffusion) could worsen severely the spread of the duration of the currents. The propagation coefficient \(\mathcal{P}\) defined in the previous section is still useful also for CCT distributions such as Fig.14 and the criterion found for the CS-TIA case (Eq. 4.14) still holds. In particular, supposing to process the currents of the detector with the CCT shown in Fig.14, we would expect to obtain an intrinsic resolution of about \[\sigma_{t_{s}}^{CS-TIA}\sim\frac{\sigma_{t_{c}}}{2}=26\,\mathrm{ps}. \tag{5.1}\] This has also been confirmed by extensive simulations made with the software packages TCoDe and TFBoost [6; 7; 13]. If a Fast-TIA is used instead with a time constant \(\tau\approx t_{c}\), the resolution could be strongly improved depending of the noise present in the system. If a very low threshold can be used, the intrinsic contribution could be heavily reduced as seen both in experiment [10], and simulations [3], where an intrinsic resolution of about \(15ps\) was estimated for the TimeSPOT sensor. Considering the value of \(\sigma_{t_{c}}\) found in the TCoDe simulation, using fast electronics and a low Figure 14: CCT distribution obtained with a TCoDe simulation of the TimeSPOT sensor [6; 7]. threshold allow to obtain a propagation coefficient \(\mathcal{P}\sim 0.28\) that is consistent with the threshold fraction used in [10]. 
### Front-end electronics jitter Conceptually, the contribution of the front-end electronics to the system time resolution can be interpreted as the projection of the electronic noise onto the time axis and is defined as electronic time jitter or \(\sigma_{\rm ej}\), already found in Eq. 1. This is given by [8], \[\sigma_{\rm ej}=\sigma_{v}\left(\frac{dV}{dt}\right)^{-1}\,. \tag{10}\] To evaluate \(\sigma_{\rm ej}\) we need both the time derivative of the signal function \(V_{out}(t)\) and the voltage noise \(\sigma_{v}\)2. By looking at the expressions for the slope of the voltage signal in the two cases (Eq. 12 and Eq. 13), we see that is directly proportional to the total charge and trans-conductance \(g_{m}\) and inversely proportional to the quantity \(\xi\) (Eq. 12). The obtainable jitter depends strictly by the amount of charge available, the power constraints (i.e. the \(g_{m}\), from which also depends the noise performance) and all the capacitance involved in the circuit the are crucial to the final speed of the electronics. Footnote 2: Explicit calculation of the noise performance of the two configurations can be found in [9]. As shown in [15], the electronic jitter performance improves if the time constant is of the same order of the duration of the current pulse. For planar detectors, this means a trade-off between intrinsic contribution to time resolution, (that gets smaller for electrodes closer to each other), and electronic jitter which is inversely proportional to the total charge (that for planar detector is proportional to the sensor thickness). The 3D detectors, instead, allow to decouple electrodes distance and produced charge so that an higher signal to noise ratio can be obtained and very good performance in terms of time resolution can be reached, even without benefiting of a gain charge mechanism [4; 10; 16]. Considering the two electronics analyzed, we can express the electronic jitter with the following relations (more details can be found in appendix C) \[\sigma_{\rm ej}^{CS}\sim\frac{\sigma_{v}\xi}{\overline{Q_{in}g_{m}}}\,. \tag{11}\] Here the term \(\overline{Q_{in}}\) refers to the average amount of charge deposited. Another way to express the jitter is in term of the signal to noise ratio SNR and time constant of the system \(\tau\), \[\sigma_{\rm ej}^{CS}\sim\frac{1}{SNR}\frac{\tau}{e}\,. \tag{12}\] For the Fast-TIA case, the jitter at the maximum derivative is \[\sigma_{\rm ej}^{F}\sim\frac{0.96}{SNR}\cdot\tau\,. \tag{13}\] Using a Fast-TIA that has time constant \(\tau\sim 200ps\) of the same order of the average duration of the currents in the 3D detector, we find that with SNR= 20 the electronic jitter \(\sigma_{ej}\) would be less than 10 ps. Conclusions Starting from the Eq. 2, that is completely general for many types of sensors, we have found an explicit description of the two contributions \(\sigma_{\rm sens}\) and \(\sigma_{ej}\) in the case we have a detector that produce currents with different durations, such as the 3D silicon sensor with parallel plate geometry (_trench_ structure) connected to a feedback TIA electronics. The final performance in terms of time resolution depends strongly on the characteristic of the system detector+electronics, that can be characterized by a time constant \(\tau\), that depends on the electronics itself (i.e. trans-conductance \(g_{m}\), feedback network, capacitances etc. ) and sensor capacitance \(C_{D}\). 
The time resolution achievable with such system can be summarized in two cases: #### CS-TIA (\(\tau\gg t_{c}\)) The time constant of the system \(\tau\) is much greater than the average current duration \(t_{c}\). The intrinsic contribution of the detector is independent of the discrimination threshold value and is given by \[\sigma_{\rm sens}=\mathcal{P}\sigma_{t_{c}}\, \tag{6.1}\] where \(\mathcal{P}\) is the _timing propagation coefficient_ and \(\sigma_{t_{c}}\) is the standard deviation of the CCT distribution. For 3D trench detector we have that \[\mathcal{P}\approx 0.5\, \tag{6.2}\] \[\sigma_{t_{c}}\sim\frac{t_{c}^{max}-t_{c}^{min}}{\sqrt{12}}\, \tag{6.3}\] where \(t_{c}^{max}\) and \(t_{c}^{min}\) are the maximum and minimum charge collection time of the currents in the detector. With short inter-electrode distance, \(t_{c}^{max}\) and \(t_{c}^{min}\), and, consequently, \(\sigma_{t_{c}}\) can be made very small, still having enough charge since 3D detectors have thickness and distance of electrodes decoupled. If we introduce also the electronic jitter for the charge-sensitive case, \(\sigma_{ej}^{CS}\), the final time resolution can be expressed as \[\sigma_{t}=\sqrt{\left(\mathcal{P}\sigma_{t_{c}}\right)^{2}+\left(\sigma_{ej }^{CS}\right)^{2}}. \tag{6.4}\] If we can set the threshold very low becomes \[\sigma_{t}\sim\sqrt{\left(\frac{\sigma_{t_{c}}}{2}\right)^{2}+\left(\frac{1}{ SNR}\frac{\tau}{e}\right)^{2}}\, \tag{6.5}\] where \(\tau\) is the time constant of the system and SNR is the signal to noise ratio. #### Fast-TIA (\(\tau\sim t_{c}\)) The time constant of the system is of the same order of the average duration of the currents. The intrinsic contribution of the detector is now dependent on the chosen threshold. This can still be expressed using the propagation coefficient that is a growing function of the threshold \(\alpha\). Referring to the constant fraction method the intrinsic contribution is \[\sigma_{\rm sens}(\alpha)=\mathcal{P}(\alpha)\sigma_{t_{c}}. \tag{100}\] Also the electronic jitter for the Fast-TIA \(\sigma_{ej}^{F}\) is threshold dependent, being minimum at about 75% of \(V_{peak}\) and growing from lower threshold. The standard deviation \(\sigma_{\rm sens}\) gets smaller for lower threshold but the jitter increases, which means that exist an optimum value of the threshold that minimize the time resolution. The total time resolution can be written as \[\sigma_{t}(\alpha)=\sqrt{\left(\mathcal{P}(\alpha)\sigma_{t_{c}}\right)^{2}+ \left(\sigma_{ej}^{F}(\alpha)\right)^{2}}. \tag{101}\] A reasonable estimate with threshold at about 25% of \(V_{peak}\) is \[\sigma_{t}(\alpha=25\%)\sim\sqrt{\left(\frac{\sigma_{t_{c}}}{4}\right)^{2}+ \left(\frac{1.2\ \tau}{SNR}\right)^{2}}. \tag{102}\] #### Acknowledgments This work was supported by the Fifth Scientific Commission (CSN5) of the Italian National Institute for Nuclear Physics (INFN), within the Project TimeSPOT. The authors wish to thank Angelo Loi for his help in providing the plots used in Fig. 14 and Davide Brundu for the useful discussions. Appendix A Propagation coefficient \(\mathcal{P}\) for the Constant Fraction Discrimination case: \(t_{s}>t_{c}\) Let us consider first the CS-TIA case with the voltage signal for \(t>t_{c}\). 
The threshold time \(t_{s}\) is the one that satisfied the following equation \[e^{-\frac{t_{s}}{\tau}}\left(B\frac{t_{s}}{\tau}+C\right)=\alpha\left[e^{- \frac{T_{peak}}{\tau}}\left(B\frac{T_{peak}}{\tau}+C\right)\right]\, \tag{103}\] where \(\alpha\) indicate the fraction of the voltage peak. Taking the derivative of both sides with respect to \(t_{c}\) we obtain \[V^{{}^{\prime}}(t_{s})\frac{\partial t_{s}}{\partial t_{c}}+e^{-\frac{t_{s}}{ \tau}}\left(\frac{\partial B}{\partial t_{c}}\frac{t_{s}}{\tau}+\frac{ \partial C}{\partial t_{c}}\right)=\alpha\left[e^{-\frac{T_{peak}}{\tau}} \left(\frac{\partial B}{\partial t_{c}}\frac{T_{peak}}{\tau}+\frac{\partial C }{\partial t_{c}}\right)\right]\, \tag{104}\] where \(V^{{}^{\prime}}(t_{s})\) is the normalized derivative (i.e. obtained dividing for \(I_{0}R_{m_{0}}\)). Solving for \(\frac{\partial t_{s}}{\partial t_{c}}\), \[\frac{\partial t_{s}}{\partial t_{c}}=\frac{\alpha\left[e^{-\frac{T_{peak}}{ \tau}\left(\frac{\partial B}{\partial t_{c}}\frac{T_{peak}}{\tau}+\frac{ \partial C}{\partial t_{c}}\right)}\right]-e^{-\frac{t_{s}}{\tau}}\left(\frac{ \partial B}{\partial t_{c}}\frac{t_{s}}{\tau}+\frac{\partial C}{\partial t_{c }}\right)}{V^{{}^{\prime}}(t_{s})}\, \tag{100}\] and the derivative \(V^{{}^{\prime}}(t_{s})\) can be written, \[V^{{}^{\prime}}(t_{s})=e^{-\frac{t_{s}}{\tau}}\frac{B}{\tau} \left(\frac{T_{peak}-t_{s}}{\tau}\right)\, \tag{101}\] substituting \(\alpha\) using Eq. 101 we find: \[\frac{\partial t_{s}}{\partial t_{c}}=\frac{\left(B\frac{t_{s}}{ \tau}+C\right)\left(\frac{\partial B}{\partial t_{c}}\frac{T_{peak}}{\tau}+ \frac{\partial C}{\partial t_{c}}\right)-\left(B\frac{T_{peak}}{\tau}+C \right)\left(\frac{\partial B}{\partial t_{c}}\frac{t_{s}}{\tau}+\frac{ \partial C}{\partial t_{c}}\right)}{\left(B\frac{T_{peak}}{\tau}+C\right) \frac{B}{\tau}\left(\frac{T_{peak}-t_{s}}{\tau}\right)} \tag{102}\] some of the terms cancel out and the propagation coefficient becomes independent from the threshold time \(t_{s}\) \[\frac{\partial t_{s}}{\partial t_{c}}=\tau\frac{\left(C\frac{ \partial B}{\partial t_{c}}-B\frac{\partial C}{\partial t_{c}}\right)}{B^{2}}\, \tag{103}\] since \(T_{peak}\) is given by \[T_{peak}=\tau\frac{B-C}{B}\, \tag{104}\] taking the derivative of \(T_{peak}\) with respect to \(t_{c}\) we find that \[\frac{\partial t_{s}}{\partial t_{c}}=\frac{\partial T_{peak}}{ \partial t_{c}}. \tag{105}\] We conclude that the propagation coefficient for \(t>t_{c}\) is given by the derivative of the peaking time \(T_{peak}\) with respect to the charge collection time \(t_{c}\) \[\mathcal{P}=\frac{\partial T_{peak}}{\partial t_{c}} \tag{106}\] Appendix B Propagation coefficient \(\mathcal{P}\) for the Constant Fraction Discrimination case: \(t_{s}<t_{c}\) Let's consider first the Fast-TIA case with the voltage signal for \(t<t_{c}\). The threshold time \(t_{s}\) is the one that satisfied the following equation \[\left(1-e^{-\frac{t_{s}}{\tau}}\left(1+\frac{t_{s}}{\tau}\right) \right)=\alpha\left[e^{-\frac{T_{peak}}{\tau}}\left(B\frac{T_{peak}}{\tau}+C \right)\right]. \tag{107}\] Taking the derivative of both sides with respect to \(t_{c}\) we obtain \[V^{{}^{\prime}}(t_{s})\frac{\partial t_{s}}{\partial t_{c}}=\alpha\left[e^{-\frac {T_{peak}}{\tau}}\left(\frac{\partial B}{\partial t_{c}}\frac{T_{peak}}{\tau}+ \frac{\partial C}{\partial t_{c}}\right)\right]\,, \tag{120}\] again \(V^{{}^{\prime}}(t_{s})\) is the normalized derivative (i.e. obtained dividing for \(I_{0}R_{m_{0}}\)). 
## Appendix B Propagation coefficient \(\mathcal{P}\) for the Constant Fraction Discrimination case: \(t_{s}<t_{c}\)

Let us now consider the Fast-TIA case with the voltage signal for \(t<t_{c}\). The threshold time \(t_{s}\) is the one that satisfies the following equation: \[\left(1-e^{-\frac{t_{s}}{\tau}}\left(1+\frac{t_{s}}{\tau}\right)\right)=\alpha\left[e^{-\frac{T_{peak}}{\tau}}\left(B\frac{T_{peak}}{\tau}+C\right)\right]. \tag{B.1}\] Taking the derivative of both sides with respect to \(t_{c}\) we obtain \[V^{{}^{\prime}}(t_{s})\frac{\partial t_{s}}{\partial t_{c}}=\alpha\left[e^{-\frac{T_{peak}}{\tau}}\left(\frac{\partial B}{\partial t_{c}}\frac{T_{peak}}{\tau}+\frac{\partial C}{\partial t_{c}}\right)\right]\,, \tag{B.2}\] where again \(V^{{}^{\prime}}(t_{s})\) is the normalized derivative (i.e., obtained by dividing by \(I_{0}R_{m_{0}}\)). Solving for \(\frac{\partial t_{s}}{\partial t_{c}}\), \[\frac{\partial t_{s}}{\partial t_{c}}=\alpha\frac{\left[e^{-\frac{T_{peak}}{\tau}}\left(\frac{\partial B}{\partial t_{c}}\frac{T_{peak}}{\tau}+\frac{\partial C}{\partial t_{c}}\right)\right]}{V^{{}^{\prime}}(t_{s})}\,, \tag{B.3}\] and substituting \(\alpha\) using Eq. (B.1) we find \[\frac{\partial t_{s}}{\partial t_{c}}=\frac{\left(1-e^{-\frac{t_{s}}{\tau}}\left(1+\frac{t_{s}}{\tau}\right)\right)\left(\frac{\partial B}{\partial t_{c}}\frac{T_{peak}}{\tau}+\frac{\partial C}{\partial t_{c}}\right)}{\left(e^{-\frac{t_{s}}{\tau}}\frac{t_{s}}{\tau^{2}}\right)\left(B\frac{T_{peak}}{\tau}+C\right)}\,. \tag{B.4}\] Using the value of \(T_{peak}\) for the Fast-TIA, \[T_{peak}=\frac{e^{\frac{t_{c}}{\tau}}t_{c}}{e^{\frac{t_{c}}{\tau}}-1}\,, \tag{B.5}\] we find that for \(t<t_{c}\) the derivative \(\frac{\partial t_{s}}{\partial t_{c}}\) is equal to \[\frac{\partial t_{s}}{\partial t_{c}}=\frac{t_{c}}{\tau}\frac{e^{\frac{t_{c}}{\tau}}\left(\tau e^{\frac{t_{s}}{\tau}}-t_{s}-\tau\right)}{t_{s}(e^{\frac{t_{c}}{\tau}}-1)^{2}}\,. \tag{B.6}\] The propagation coefficient is in this case dependent on the chosen threshold \(t_{s}\), and gets smaller for a lower fraction \(\alpha\): \[\mathcal{P}(t_{s})=\frac{t_{c}}{\tau}\frac{e^{\frac{t_{c}}{\tau}}\left(\tau e^{\frac{t_{s}}{\tau}}-t_{s}-\tau\right)}{t_{s}(e^{\frac{t_{c}}{\tau}}-1)^{2}}. \tag{B.7}\]

## Appendix C Jitter approximation for CS-TIA and Fast-TIA

### CS-TIA

\[\sigma_{\rm ej}^{CS}\sim\frac{\sigma_{v}\xi}{Q_{in}g_{m}}\ ; \tag{C.1}\] using the expression of \(\tau\) in Eq. 3.2 we have \[\sigma^{CS}_{\rm ej}\sim\frac{\sigma_{v}\tau^{2}}{\bar{Q}_{in}R_{f}}\ ; \tag{C.2}\] using the value of \(V_{peak}\) in Eq. 3.12, \[\sigma^{CS}_{\rm ej}\sim\frac{\sigma_{v}\tau}{\bar{Q}_{in}R_{f}}\tau=\frac{\sigma_{v}e\tau}{\bar{Q}_{in}R_{f}}\frac{\tau}{e}=\frac{\sigma_{v}}{V_{peak}}\frac{\tau}{e}=\frac{1}{SNR}\frac{\tau}{e}\, \tag{C.3}\] \[\sigma^{CS}_{\rm ej}\sim\frac{1}{SNR}\frac{\tau}{e}. \tag{C.4}\]

### Fast-TIA

First we calculate the jitter with the threshold set at the maximum-slope condition. Using Eq. 3.20 we have \[\sigma^{F}_{\rm ej}\sim\frac{\sigma_{v}\,e\tau^{2}}{\bar{Q}_{in}R_{m_{0}}}\ ; \tag{C.5}\] considering the \(V_{peak}\) expression we can write \[\sigma^{F}_{\rm ej}\sim\frac{\sigma_{v}\,e\tau^{2}}{\bar{Q}_{in}R_{m_{0}}}\cdot\frac{t_{c}\cdot e^{-\frac{T_{peak}}{\tau}}B}{t_{c}\cdot e^{-\frac{T_{peak}}{\tau}}B}\, \tag{C.6}\] with \(B=e^{\frac{t_{c}}{\tau}}-1\), \(\frac{\overline{Q}_{in}}{t_{c}}=I_{0}\), \(T_{peak}\) given by Eq. 3.21, and taking \(\tau=t_{c}\), we find \[\sigma^{F}_{\rm ej}\sim\frac{\sigma_{v}}{V_{peak}}\Big{(}e(e-1)e^{-\frac{e}{e-1}}\Big{)}\tau\, \tag{C.7}\] \[\sigma^{F}_{\rm ej}\sim\frac{0.96}{SNR}\tau. \tag{C.8}\]
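As a quick sanity check on the closed-form prefactors in Eqs. (C.4) and (C.7)-(C.8), the snippet below evaluates \(1/e\) and \(e(e-1)e^{-e/(e-1)}\) numerically; the second indeed comes out at about 0.96.

```python
import math

e = math.e
cs_prefactor = 1 / e                                   # prefactor in Eq. (C.4)
fast_prefactor = e * (e - 1) * math.exp(-e / (e - 1))  # prefactor in Eq. (C.7)

print(f"CS-TIA  jitter ~ {cs_prefactor:.3f} * tau / SNR")    # ~0.368
print(f"Fast-TIA jitter ~ {fast_prefactor:.3f} * tau / SNR") # ~0.960, cf. Eq. (C.8)
```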
2302.08164
Campana points on diagonal hypersurfaces
We construct an integral model for counting Campana points of bounded height on diagonal hypersurfaces of degree greater than one, and give an asymptotic formula for their number, generalising work by Browning and Yamagishi. The paper also includes background material on the theory of Campana points on hyperplanes and previous results in the field.
Francesca Balestrieri, Julia Brandes, Miriam Kaesberg, Judith Ortmann, Marta Pieropan, Rosa Winter
2023-02-16T09:22:27Z
http://arxiv.org/abs/2302.08164v2
# Campana points on diagonal hypersurfaces ###### Abstract. We construct an integral model for counting Campana points of bounded height on diagonal hypersurfaces of degree greater than one, and give an asymptotic formula for their number, generalising work by Browning and Yamagishi. The paper also includes background material on the theory of Campana points on hyperplanes and previous results in the field. ###### Contents * 1 Introduction * 2 The circle method * 3 Campana points * 4 Campana points on diagonal hypersurfaces * 5 Application of the circle method * 6 Proof of the main theorem ## 1. Introduction The study of Campana points on varieties has gained a lot of attention in recent years. Philosophically, sets of Campana points on a variety over a number field interpolate between the set of rational points and the set of integral points. As such, the general aim when studying Campana points is to use techniques from the study of rational points to say something about Campana points, which can in turn be used to understand integral points better. There are several definitions of Campana points (see Section 3), all having in common that they are integral points on a proper variety satisfying prescribed 'intersection conditions' with respect to a boundary divisor. When \(X\) is a hypersurface in a projective space \(\mathbb{P}^{n}_{\mathbb{Q}}\) and the closure of \(X\) in \(\mathbb{P}^{n}_{\mathbb{Z}}\) is a flat proper model of \(X\) over \(\mathbb{Z}\), a set of Campana points on \(X\) for the boundary divisor induced by the coordinate hyperplanes of the projective space is the set of \(\mathbb{Z}\)-points in \(X\) for which the coordinates are \(m\)-full for different \(m\) (an integer \(a\) is \(m\)-full if \(p|a\) implies \(p^{m}|a\) for all primes \(p\)), see Example 3.8. Many classical results and conjectures for rational points have been formulated also for Campana points; an overview of some of the following can be found in [16]. An analogue of the Mordell Conjecture for Campana points (orbifold Mordell conjecture) over function fields was introduced and proven in characteristic 0 in [17], and in positive characteristic in [19]. Asymptotics for the number of Campana points of bounded height were first given in [20] for linear hypersurfaces in \(\mathbb{P}^{n}_{\mathbb{Q}}\) for \(n\geq 4\), with the prescribed intersection conditions corresponding to points with squareful coordinates. The same was done for \(n=3\) in [21], while [1] deals with general \(m\)-full conditions on each coordinate for \(n\) large enough (see Section 2.3 for more details on these counting results). Upper and lower bounds for \(n=2\) are given in [18]. In [19], the authors gave an analogue of the Manin Conjecture for Campana points of bounded height on Fano orbifolds (PSTVA Conjecture), and proved it for equivariant compactifications of vector groups. The PSTVA Conjecture has also been proven for split toric varieties with torus invariant boundary [22, 23], and biequivariant compactifications of the Heisenberg group [24]. It is also compatible with the order of growth in the results in [20], [18] and [19]. However, in [21], it is shown that the leading constant in [18] does not agree with the one in the PSTVA Conjecture on a specific orbifold, and a counterexample to the PSTVA Conjecture is given by counting Campana points on \(\mathbb{P}^{1}\) with non-linear boundary divisor. It is likely that a similar phenomenon also arises in the results of [42] and [3]. 
The first asymptotics for the arguably harder case of weak Campana points are given in [20], where the author counts such points of bounded height on orbifolds associated to norm forms for Galois extensions of number fields. The same paper also contains the first asymptotics for Campana points on singular orbifolds. Weak Campana points also appear in [21], where the author counts diagonal quartic surfaces with a Brauer-Manin obstruction to the Hasse principle, and shows that this number gives a lower bound for the number of weak Campana points in \(\mathbb{P}^{3}\) for the boundary divisor induced by the coordinate hyperplanes.

The study of the potential density of rational points (i.e., density of rational points over a finite extension of the base field) was translated to Campana points as well, with Campana first stating a conjecture in this direction for Campana points on curves over number fields in [1]. In [1], the authors extended the conjecture to a much larger class of varieties, and they showed that their conjecture recovers Lang's conjecture for rational points on varieties of general type, and the Lang-Vojta conjecture for \(S\)-integral points on varieties of logarithmic general type. The authors of [1] proved the conjecture when the boundary divisor is a general smooth divisor satisfying some extra conditions. Instances of the conjecture formulated over function fields are proven in [14] and in [15]. Finally, analogues of local-global principles for Campana points are also studied: in [23] the authors defined the Campana analogues of weak weak approximation and the Hilbert property, and proved the Campana version of a result by Colliot-Thélène and Ekedahl that weak weak approximation implies the Hilbert property. A Campana analogue of the Hasse Principle is defined in [26], where the authors studied weak approximation and Brauer-Manin obstructions for Campana points.

In the setting mentioned earlier, where \(X\) is a hypersurface in projective space over \(\mathbb{Q}\), counting Campana points of bounded height is equivalent to counting \(m\)-full solutions of bounded size to homogeneous equations. This counting problem lends itself very well to the circle method, a powerful counting method in analytic number theory. So far, the circle method has been used to count Campana points when both \(X\) and the boundary divisor are linear [42, 3, 30, 21]; we give more detail on this in Section 2.3. In this paper we extend the techniques in [3] to study the problem of counting Campana points on a non-linear diagonal hypersurface \(X\) in \(\mathbb{P}^{n}\). Our main result is the following.

**Theorem 1.1**.: _Let \(k,n,m_{0},\ldots,m_{n}\) be positive integers. Let \(B\) be a positive real number. Let \(X\subset\mathbb{P}^{n}_{\mathbb{Q}}\) be the hypersurface given by \(\sum_{i=0}^{n}c_{i}x_{i}^{k}=0\) with \(c_{0},\ldots,c_{n}\in\mathbb{Z}_{\neq 0}\) and \(\gcd(c_{0},\ldots,c_{n})=1\). Let \(D\) be the \(\mathbb{Q}\)-divisor on \(X\) given by \(D=\sum_{i=0}^{n}(1-\frac{1}{m_{i}})\{x_{i}=0\}\)._
_With the model defined in Section 4.1 and the height function induced by the Weil height on \(\mathbb{P}^{n}(\mathbb{Q})\), the set of Campana \(\mathbb{Z}\)-points of height at most \(B\) on \(X\cap(\mathbb{P}^{n}\smallsetminus\bigcup_{i=0}^{n}\{x_{i}=0\})\) with respect to the boundary divisor \(D\) is given by_ \[N(X,D,B)=\left\{(x_{0}:\cdots:x_{n})\in\mathbb{P}^{n}(\mathbb{Q})\;\left|\;\begin{array}{l}x_{0},\ldots,x_{n}\in\mathbb{Z}_{\neq 0},\quad\gcd(x_{0},\ldots,x_{n})=1,\\ x_{i}\text{ is }m_{i}\text{-full }\forall i\in\{0,\ldots,n\},\\ |x_{0}|,\ldots,|x_{n}|\leq B,\quad c_{0}x_{0}^{k}+\cdots+c_{n}x_{n}^{k}=0\end{array}\right.\right\}.\] _Assume that \(k\geq 2\), \(2\leq m_{0}\leq\cdots\leq m_{n}\) and that_ \[\sum_{i=0}^{n}\frac{1}{2s_{0}(km_{i})}>1, \tag{1.1}\] _where_ \[s_{0}(m)=\min\{2^{m-1},\tfrac{1}{2}m(m-1)+\lfloor\sqrt{2m+2}\rfloor\}\qquad\forall m\in\mathbb{N}.\] _Then there exist constants \(C\geq 0\) and \(\eta>0\) such that for all \(B>0\) we have_ \[\#N(X,D,B)=CB^{\sum_{i=0}^{n}\frac{1}{m_{i}}-k}+O_{\eta}\left(B^{\sum_{i=0}^{n}\frac{1}{m_{i}}-k-\eta}\right). \tag{1.2}\]

An explicit expression for the constant \(C\) is given in (6.8).

**Remark 1.2**.: For \(k=1\), a version of Theorem 1.1 was established in [1], and it is compatible with the PSTVA Conjecture as far as the order of growth is concerned. For \(k\geq 2\), the integral model obtained by the closure of \(X\) in \(\mathbb{P}_{\mathbb{Z}}^{n}\) is not regular. However, the order of growth in Theorem 1.1 is still compatible with the prediction in [15, Conjecture 1.1]. In Section 4.2 we describe the set of Campana points on the regular model obtained by inverting all the prime numbers dividing \(k\cdot c_{0}\cdots c_{n}\). The set of Campana points with respect to this model is larger than the set in Theorem 1.1, but the corresponding asymptotic, which we compute in a forthcoming paper, is expected to have the same order of magnitude as (1.2).

**Remark 1.3**.: Browning and Yamagishi [1] had the condition \(\sum_{i=0}^{n-1}\frac{1}{m_{i}(m_{i}+1)}\geq 1\) in their result. This seems stricter than necessary, since on the one hand it isolates the last coefficient, creating a somewhat artificial condition, and on the other hand it fails to take advantage of the strongest available bounds from [20, Section 14]. In our result, we address both of those issues in the case \(k\geq 2\). In principle, the methods can be extended also to the case \(k=1\), but this would require a more careful treatment in some parts of the argument. Since the main focus of our paper is on equations of degree \(2\) or higher, we did not expend the extra effort that would be required to achieve a strengthening of the result of [1]. For context, in the smallest possible case that is admissible within Theorem 1.1, namely \(k=m_{0}=\ldots=m_{n}=2\), we have \(s_{0}(km_{i})=s_{0}(4)=8\), and consequently the condition (1.1) translates into the bound \(n\geq 16\). Although it improves the range of applicability compared to [1], the condition in our theorem is still far from the log Fano condition \(\sum_{i=0}^{n}\frac{1}{km_{i}}>1\), which is the geometric condition under which it makes sense to expect an asymptotic as predicted by the PSTVA Conjecture. In fact, as far as applications of the circle method involving mean value estimates are concerned, the limit of the method is at \(\sum_{i=0}^{n}\frac{1}{km_{i}}>2\).
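The quantity \(s_{0}(m)\) and the hypothesis (1.1) are purely arithmetic, so the numerology of Remark 1.3 is easy to reproduce. The short Python sketch below (an illustration, not part of the paper's argument) evaluates \(s_{0}\) and tests (1.1) in the case \(k=m_{0}=\cdots=m_{n}=2\).

```python
import math

def s0(m: int) -> int:
    """s_0(m) = min{2^(m-1), m(m-1)/2 + floor(sqrt(2m+2))}, as in Theorem 1.1."""
    return min(2 ** (m - 1), m * (m - 1) // 2 + math.isqrt(2 * m + 2))

def condition_1_1(k: int, ms) -> bool:
    """Hypothesis (1.1): sum_{i=0}^{n} 1/(2 s_0(k m_i)) > 1, with ms = (m_0, ..., m_n)."""
    return sum(1 / (2 * s0(k * m)) for m in ms) > 1

print(s0(4))  # 8, so each summand in (1.1) equals 1/16 when k = m_i = 2
for n in (14, 15, 16):
    print(n, condition_1_1(2, [2] * (n + 1)))  # False, False, True: need n >= 16
```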
### Outline of the paper

The paper is organized as follows. In Section 2 we give a brief history of the circle method and explain the main ideas of the technique. We also give an overview of applications of the circle method to the problem of counting Campana points of bounded height. In Section 3 we define Campana points, and give an overview of different definitions that have been used in the literature. We focus on diagonal hypersurfaces in Section 4, where we describe sets of Campana points of bounded height and set up the counting problem for Theorem 1.1. In Section 5 we develop a variant of the circle method of [1] combined with the bounds in [20, Section 14], and we finally prove Theorem 1.1 in Section 6.

### Notation

Given \(m\in\mathbb{N}\) and a set \(S\) of prime numbers, an integer \(x\) is \(m\)-full outside \(S\) if \(p^{m}|x\) holds for every prime \(p\notin S\) dividing \(x\); we say that \(x\) is _\(m\)-full_ if we can take \(S=\emptyset\). To avoid confusion, we stress here that we do not consider \(0\) to be an element of \(\mathbb{N}\). For a number field \(F\), and for a finite set \(S\) of places of \(F\) containing all infinite places, we denote by \(\mathcal{O}_{F,S}\) the ring of \(S\)-integers, that is, all elements of \(F\) with non-negative \(v\)-adic valuation for all places \(v\notin S\). We denote by \(\infty\) the infinite place of \(\mathbb{Q}\). For a prime number \(p\), we denote by \(\mathrm{val}_{p}\) the \(p\)-adic valuation. In Sections 4, 5 and 6, we denote tuples of integers by bold letters as follows: \(\boldsymbol{c}=(c_{0},\ldots,c_{n})\in\mathbb{Z}^{n+1}\), and similarly \(\boldsymbol{d},\boldsymbol{\varepsilon},\boldsymbol{s},\boldsymbol{x},\boldsymbol{\tilde{u}}\in\mathbb{Z}^{n+1}\). We further have \(\boldsymbol{t}=(t_{i,r})_{0\leq i\leq n,1\leq r\leq m_{i}-1}\in\mathbb{N}^{\sum_{i}m_{i}-n}\) for \(m_{i}\) as in Theorem 1.1, and similarly \(\tilde{\boldsymbol{v}}\in\mathbb{N}^{\sum_{i}m_{i}-n}\). Throughout, we assume that \(B\) is a large positive real number. For \(\boldsymbol{x}\in\mathbb{Z}^{n+1}\), we write \(|\boldsymbol{x}|=\max_{0\leq i\leq n}|x_{i}|\). We denote by \(\mathbb{Z}_{\mathrm{prim}}^{n+1}\) the set \(\{\boldsymbol{x}\in\mathbb{Z}^{n+1}:\gcd(x_{0},\ldots,x_{n})=1\}\), by \(\prod_{p}\) a product over all prime numbers, and by \(\mu\) the Möbius function. For \(\alpha\in\mathbb{R}\), we write \(e(\alpha)=\exp(2\pi i\alpha)\), and we denote by \(\|\alpha\|\) the distance between \(\alpha\) and its closest integer. In Theorem 1.1 and in Section 6 the implicit constants in the estimates \(\ll\) and \(O(\cdot)\) are allowed to depend on the fixed data \(k\), \(c_{0},\ldots,c_{n}\), \(m_{0},\ldots,m_{n}\), \(\epsilon\), \(\delta\). In Section 5 they depend on the following data: \(d_{0},\ldots,d_{n}\), \(\tilde{m}_{0},\ldots,\tilde{m}_{n}\), \(\epsilon\), \(\delta\). In particular, all statements involving the symbol \(\epsilon\) are asserted to hold for all \(\epsilon>0\). Here, we do not track the precise 'value' of \(\epsilon\), which consequently may vary from one expression to the next.
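The notion of an \(m\)-full integer is elementary enough to experiment with directly. The Python sketch below (purely illustrative) tests \(m\)-fullness from the definition and, as a demonstration, lists the coprime triples of squareful positive integers with \(x+y=z\) up to a small bound, the \(n=2\) counting problem that reappears in Section 2.3.

```python
from math import gcd

def is_m_full(a: int, m: int) -> bool:
    """True iff every prime p dividing a satisfies p**m | a (with a != 0)."""
    a = abs(a)
    if a == 0:
        return False
    p = 2
    while p * p <= a:
        if a % p == 0:
            e = 0
            while a % p == 0:
                a //= p
                e += 1
            if e < m:
                return False
        p += 1
    # A leftover factor a > 1 is a prime with exponent 1, which needs m <= 1.
    return a == 1 or m <= 1

# Demo: coprime squareful (m = 2) solutions of x + y = z with z <= 200.
sols = [(x, y, x + y)
        for x in range(1, 200) for y in range(x, 200 - x + 1)
        if is_m_full(x, 2) and is_m_full(y, 2) and is_m_full(x + y, 2)
        and gcd(x, gcd(y, x + y)) == 1]
print(sols)  # e.g. (1, 8, 9), (9, 16, 25), (4, 121, 125), ...
```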
### Acknowledgements

We want to thank the organizers of the workshop Women in Numbers Europe 4, which took place in the summer of 2022 in Utrecht, The Netherlands, and which is where this project originated. We thank Tim Browning for useful comments. While working on this paper Judith Ortmann was partially supported by the Caroline Herschel Programme of Leibniz University Hannover, as well as by a scholarship for a research stay abroad from the Graduiertenakademie of Leibniz University Hannover, since part of the work was done while visiting the University of Bath. Rosa Winter was supported by UKRI Fellowship MR/T041609/1. In the final stages of the work, Julia Brandes was supported by Project Grant 2022-03717 from Vetenskapsrådet (Swedish Research Council), and Marta Pieropan was supported by the grant VI.Vidi.213.019 of the Nederlandse Organisatie voor Wetenschappelijk Onderzoek. Part of the work was completed while Julia Brandes was visiting the Max Planck Institute for Mathematics in Bonn, whose generous support is also gratefully acknowledged.

## 2. The circle method

### History of the circle method

In the 1920s Hardy and Littlewood developed the so-called 'Hardy-Littlewood circle method' (sometimes called just the 'Hardy-Littlewood method' or the 'circle method'), which is an analytic method to treat additive problems in number theory. These problems deal with the representation of a large number as a sum of numbers of some specified type. The most famous additive problem is Waring's problem: Let \(k\) be a positive integer. Can every large integer \(N\) be written as a sum of a bounded number of \(k\)-th powers \[N=x_{1}^{k}+\cdots+x_{s}^{k}, \tag{2.1}\] where \(s\) is a positive integer? Let \(r_{k,s}(N)\) denote the number of such representations of \(N\). Then, Waring's problem is equivalent to showing that \(r_{k,s}(N)>0\) for some \(s\) and all sufficiently large integers \(N\). Waring's problem was first solved by Hilbert in 1909. From 1918 to 1920, Hardy and Littlewood together with Ramanujan were the first to prove an explicit upper bound for \(r_{k,s}(N)\) by using a new analytic method. Their work laid the foundations of the circle method in its original form (see [10, 2]). It turned out that the circle method is a very powerful method, since it can be used to solve a diverse range of number-theoretic problems. For example, it is one of the most significant all-purpose tools when studying rational points on higher-dimensional algebraic varieties. We refer to [1] for an overview.

In its earliest versions, the starting point of the circle method is the generating function \[f(z)=\sum_{a\in A}z^{a}\quad(|z|<1),\] where \(A\) denotes a set of nonnegative integers. Taking \(f(z)\) to the \(s\)-th power yields the series \[f(z)^{s}=\sum_{N=0}^{\infty}R_{A,s}(N)z^{N},\] whose coefficients \(R_{A,s}(N)\) encode the number of solutions of the equation \[N=a_{1}+\cdots+a_{s} \tag{2.2}\] with \(a_{1},\ldots,a_{s}\in A\). We can isolate the \(N\)-th coefficient by means of Cauchy's integral formula, which yields \[R_{A,s}(N)=\frac{1}{2\pi i}\int_{|z|=\rho}\frac{f(z)^{s}}{z^{N+1}}\mathrm{d}z \tag{2.3}\] for any \(\rho\in(0,1)\). In the original (Hardy-Ramanujan) version of the circle method this integral is evaluated by dividing the circle of integration into two disjoint sets, the 'major arcs' and the 'minor arcs'. Classically, the major arcs contribute to the main term and the minor arcs to the error term. In 1928 Vinogradov introduced a helpful simplification by transferring the problem from complex analysis to Fourier analysis. The Fourier transform maps the set of integers \(\mathbb{Z}\) onto the real unit interval \(\mathbb{R}/\mathbb{Z}\simeq[0,1)\).
For any integer \(N\) we denote as before by \(R_{A,s}(N)\) the number of representations of \(N\) as a sum of \(s\) elements in a finite set \(A\). The inverse Fourier transform \(\mathcal{F}^{-1}R_{A,s}\) of \(R_{A,s}\) is then a function of \(\alpha\) and is given by \[\mathcal{F}^{-1}R_{A,s}(\alpha)=\sum_{N\in\mathbb{Z}}R_{A,s}(N)e(\alpha N)=\sum_{a_{1},\ldots,a_{s}\in A}e(\alpha(a_{1}+\cdots+a_{s}))=\left(\sum_{a\in A}e(\alpha a)\right)^{s}.\] It is convenient to put \[F(\alpha)=\sum_{a\in A}e(\alpha a),\] and the reader may note that with this definition we have \(F(\alpha)=f(e(\alpha))\) where \(f\) is the generating function considered above. We can now apply the forward Fourier transform and obtain \[\mathcal{F}\mathcal{F}^{-1}R_{A,s}(N)=\int_{0}^{1}F(\alpha)^{s}e(-\alpha N)\mathrm{d}\alpha.\] By the Fourier inversion theorem, we have \(\mathcal{F}\mathcal{F}^{-1}R_{A,s}(N)=R_{A,s}(N)\), and thus we obtain the formula \[R_{A,s}(N)=\int_{0}^{1}F(\alpha)^{s}e(-\alpha N)\mathrm{d}\alpha. \tag{2.4}\] Clearly, since we assume \(A\) to consist of non-negative integers, the equation (2.2) implies that without loss of generality the sum \(F(\alpha)\) can be truncated at \(a\leq N\), as larger values trivially cannot contribute. We remark that if the set \(A\) is allowed to include negative numbers as well, one typically imposes some other truncation \(|a|\leq B\) for some large parameter \(B\).

### The modern circle method: Main steps

We are interested in the situation when the set \(A\) is the set of \(k\)-th powers. In that case the exponential sum is given by \[F(\alpha)=\sum_{1\leq x\leq P}e(\alpha x^{k}),\] where we put \(P=\lfloor N^{1/k}\rfloor\). The main strategy is now to try to understand the integral in (2.4) by studying the size of the integrand, and in particular the size of the exponential sums \(F(\alpha)\) as \(\alpha\) ranges over the unit interval. For a typical (irrational) \(\alpha\), the individual summands within the exponential sum \(F(\alpha)\) are more or less equidistributed over the unit circle, and consequently the behaviour of the exponential sum \(F(\alpha)\) is reminiscent of Brownian motion. In fact, [2, Corollary 2.2] shows that \(F(\alpha)\ll P^{1/2+\epsilon}\) for all \(\alpha\) in a set \(\mathcal{L}\subset[0,1]\) with Lebesgue measure \(1\). At the same time, when \(\alpha\) is a rational number with denominator \(q\), the summands inside \(F(\alpha)\) can take at most \(q\) distinct values on the unit circle. Thus, in particular for small values of \(q\) there is significant potential for interference. The extreme case of this is when \(\alpha=0=\frac{0}{1}\), which gives the value \(F(0)=P\). Moreover, since \(F\) is continuous, this interference behaviour extends to suitably small neighbourhoods of rational numbers with small denominator. This motivates the dissection of the interval \([0,1]\) into major and minor arcs, where the major arcs \(\mathfrak{M}\) comprise all \(\alpha\) that are close to a rational number with small denominator, so that \(F(\alpha)\) is potentially large, and the minor arcs \(\mathfrak{m}\) collect the remaining \(\alpha\) that lack a similarly strong rational approximation.
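This dichotomy is easy to observe numerically. The sketch below (illustrative parameters only) evaluates the exponential sum for \(k=2\) at a rational point with small denominator and at a generic irrational, and compares the outcomes with the trivial bound \(P\) and the almost-sure order \(P^{1/2}\).

```python
import cmath, math

def F(alpha: float, P: int, k: int = 2) -> complex:
    """Exponential sum F(alpha) = sum_{1 <= x <= P} e(alpha * x^k)."""
    return sum(cmath.exp(2j * math.pi * alpha * x**k) for x in range(1, P + 1))

P = 1000
print(f"trivial bound P       : {P}")
print(f"|F(1/3)|  (q = 3)     : {abs(F(1/3, P)):.1f}")        # of size c * P
print(f"|F(sqrt(2))| generic  : {abs(F(math.sqrt(2), P)):.1f}")  # of size ~ P^(1/2)
print(f"P^(1/2)               : {math.sqrt(P):.1f}")
```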
The treatment of the major arcs is by now mainly standard. By using the fact that \(\alpha\in\mathfrak{M}\) has a good approximation by a rational number \(a/q\), we can write \(\alpha=a/q+\theta\) for some small \(\theta\). This leads to a factorization of the integral over the major arcs into a product of two terms: the 'singular integral' and the 'singular series', which can then be evaluated to obtain the main term. Here, the singular integral can be interpreted as the volume of the solution set of the equation (2.1) when viewed as a submanifold inside \(\mathbb{R}^{s}\); this also provides the expected order of growth. Meanwhile, the singular series encodes all congruence information related to the equation (2.1). In fact, it can be factorised into an Euler product, where each factor describes the volume of the solution set of (2.1) when viewed as a subset of the \(p\)-adic numbers \(\mathbb{Q}_{p}\) as \(p\) ranges over the primes. Thus, the canonical outcome of the circle method is an asymptotic formula with a main term of size \(\asymp P^{s-k}\sim N^{s/k-1}\), which is modulated by a product of local factors that encode any local obstructions the problem might have.

The bottleneck of the problem is the treatment of the minor arcs. The main difficulty is that although we have good control over the size of \(F(\alpha)\) in an almost-all sense, our understanding in a _pointwise_ sense remains comparatively poor. Fortunately, however, for additive problems like the one in (2.1) the contribution to (2.4) that stems from the minor arcs can be estimated by \[\left|\int_{\mathfrak{m}}F(\alpha)^{s}e(-N\alpha)\mathrm{d}\alpha\right|\leq\sup_{\alpha\in\mathfrak{m}}|F(\alpha)|\int_{0}^{1}|F(\alpha)|^{s-1}\mathrm{d}\alpha.\] In other words, it becomes less crucial to have strong pointwise bounds for \(F(\alpha)\) if in addition we have good control over the _average_ behaviour of moments of \(F\). Since \(F\) exhibits square-root cancellation almost everywhere, this is a more tractable problem at least for small moments. For larger moments, however, this problem still presents formidable difficulties. A major breakthrough in the field was the resolution of the main conjecture associated with Vinogradov's mean value theorem ([1, 16, 17, 18]), which gives a near-complete understanding of the average behaviour of moments of the related exponential sum \[G(\alpha)=\sum_{1\leq x\leq P}e(\alpha_{1}x+\alpha_{2}x^{2}+\ldots+\alpha_{k}x^{k}).\] In fact, the strongest available bounds for mean values of moments of \(F\) are derived from these results (see [17, Section 14]). These are the bounds we will exploit in our arguments below.
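By orthogonality, the even moment \(\int_{0}^{1}|F(\alpha)|^{2s}\mathrm{d}\alpha\) equals the number of solutions of \(x_{1}^{k}+\cdots+x_{s}^{k}=y_{1}^{k}+\cdots+y_{s}^{k}\) with all variables in \([1,P]\), so mean values can be checked by brute force for small parameters. The sketch below (illustration only) does this for \(k=s=2\), where the moment is known to grow like \(P^{2}\) times a logarithmic factor, consistent with a bound of the shape \(P^{2s-k+\epsilon}\).

```python
from collections import Counter
from itertools import product

def moment(P: int, k: int, s: int) -> int:
    """2s-th moment of the exponential sum: by orthogonality this equals the
    number of solutions of x_1^k+...+x_s^k = y_1^k+...+y_s^k, 1 <= x_i, y_i <= P."""
    powers = [x**k for x in range(1, P + 1)]
    counts = Counter(sum(t) for t in product(powers, repeat=s))
    return sum(c * c for c in counts.values())

for P in (25, 50, 100, 200):
    M = moment(P, k=2, s=2)
    print(f"P={P:4d}  moment={M:9d}  moment/P^2={M / P**2:.2f}")  # slow, log-type growth
```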
### Examples of counting Campana points by the circle method

Here, we summarize instances in which the circle method is used to count Campana points of bounded height. So far, there are only a few examples of this, and Van Valckenborgh and Browning were the first to do so. Much of the early investigation is centered around the set of Campana points for the boundary divisor \(\frac{1}{2}\sum_{i=0}^{n}\{x_{i}=0\}\), corresponding to squareful numbers. In this setting, Van Valckenborgh [13] gave an asymptotic formula for the number of integral points \((a_{0}:\cdots:a_{n})\) of bounded height on the hyperplane \(\{\sum_{i=0}^{n}x_{i}=0\}\) in \(\mathbb{P}^{n}\) such that \(a_{i}\) is squareful, provided that \(n\geq 4\). The picture becomes more complicated for \(n\leq 3\), as underlying geometric considerations begin to have an effect. In [1] Browning and Van Valckenborgh considered the case \(n=2\) and investigated the number of positive coprime squareful numbers \(x,y,z\) of bounded height \(B\) that satisfy the equation \(x+y=z\). In particular, they give a lower bound of order \(B^{1/2}\) for the number of corresponding Campana points, which they conjecture to be sharp. Unfortunately, establishing the corresponding upper bound turns out to be significantly harder, and here they use a method rooted in the determinant method to give an upper bound of order \(O(B^{3/5})\). Finally, Shute [15] settled the case \(n=3\) by adapting the delta-symbol method [17]. In this context, it is noteworthy that in order to put the problem in the framework of Manin's conjecture, the author has to exclude certain accumulating special subvarieties from the count, reinforcing the principle that geometric phenomena tend to have a disproportionate effect in small dimension. We can therefore summarise that in the setting of counting Campana points with boundary divisor \(\frac{1}{2}\sum_{i=0}^{n}\{x_{i}=0\}\) on the hyperplane \(x_{0}+\cdots+x_{n}=0\) only the case \(n=2\) is still open.

Meanwhile, Browning and Yamagishi generalized the work of Van Valckenborgh [13] in [1] by extending it to more general boundary divisors. Instead of squareful integers \(a_{i}\) they consider more generally \(m_{i}\)-full integers, where \(m_{i}\geq 2\). They further generalize to hypersurfaces \(c_{0}x_{0}+\cdots+c_{n-1}x_{n-1}=x_{n}\) for fixed nonzero integers \(c_{0},\ldots,c_{n-1}\). Similarly to [13], they give an asymptotic formula for the number of Campana points of bounded height in this setting by using the circle method under the assumption that the following condition is satisfied: \[\sum_{\begin{subarray}{c}0\leq i\leq n\\ i\neq j\end{subarray}}\frac{1}{m_{i}(m_{i}+1)}\geq 1\text{ for some }0\leq j\leq n.\] It turns out that counting these Campana points can be interpreted as studying Waring's problem with mixed exponents. Browning and Yamagishi used this to further prove an asymptotic formula for a Waring problem with mixed powers. In this paper we extend the result of [1] to diagonal hypersurfaces of degree \(\geq 2\).

## 3. Campana points

In this section we introduce the notion of Campana points that we use in Theorem 1.1. Campana introduced the notions of 'orbifoldes géométriques' and 'orbifold rational point' in his papers [1, 1, 2, 3]. Since then, several different definitions of Campana points appeared in the literature [1, 2], which agree with the original definition of Campana on curves. As is explained in [13], this is not the case for higher dimensional varieties, where the different definitions can lead to significant differences in the associated counting problems. All the papers just mentioned define Campana points on a regular integral model of the variety over the ring of \(S\)-integers of the field of definition, for some finite set of places \(S\). Very recently, Mitankin, Nakahara and Streeter gave a definition of Campana points for nonregular models [14]. We will use this definition in this paper, since it allows us to directly interpret the counting problem in Theorem 1.1 in the setting of Campana points.

**Definition 3.1**.: Let \(F\) be any field.
A Campana orbifold over \(F\) is a pair \((X,D)\) where

* \(X\) is a smooth proper variety over \(F\), and
* \(D\) is an effective Weil \(\mathbb{Q}\)-divisor on \(X\) defined over \(F\) satisfying \[D=\sum_{\alpha\in\mathcal{A}}\epsilon_{\alpha}D_{\alpha},\] where \(\mathcal{A}\) is a finite index set, the \(D_{\alpha}\)'s are distinct prime divisors on \(X\), and \(\epsilon_{\alpha}\) belongs to the set \[\mathcal{M}=\left\{\left.1-\frac{1}{m}\right|m\in\mathbb{N},m\geq 2\right\}.\]

Note that the \(D_{\alpha}\)'s are irreducible over \(F\), but not necessarily over \(\overline{F}\). We define \(D_{\mathrm{red}}=\sum_{\alpha\in\mathcal{A}}D_{\alpha}\), and say that \((X,D)\) is smooth if \(D_{\mathrm{red}}\) is a strict normal crossing divisor on \(X\).

**Remark 3.2**.: In the existing literature, the set \(\mathcal{M}\) is usually taken to include \(0\) and \(1\). For the purposes of this paper we do not need these values, hence we omit them in our definition.

Since Campana points depend on the choice of an integral model for the Campana orbifold, we need to define what a good choice for such an integral model is. From now on, we take \(F\) to be a number field.

**Definition 3.3**.: Let \((X,D)\) be a Campana orbifold over \(F\). Let \(S\subset\Omega_{F}\) be a finite set of places of \(F\) containing all the infinite places. A proper integral model of \((X,D)\) over \(\mathcal{O}_{F,S}\) is a pair \((\mathcal{X},\mathcal{D})\) such that \(\mathcal{X}\) is a flat proper model of \(X\) over \(\mathcal{O}_{F,S}\), and \(\mathcal{D}=\sum_{\alpha\in\mathcal{A}}\epsilon_{\alpha}\mathcal{D}_{\alpha}\), where \(\mathcal{D}_{\alpha}\) is the Zariski-closure of \(D_{\alpha}\) in \(\mathcal{X}\). If \(\mathcal{X}\) is regular, we say that \((\mathcal{X},\mathcal{D})\) is a good integral model of \((X,D)\).

Finally, before we can define Campana points, we need to understand the 'intersection behaviour' of a given point \(P\in X(F)\) with the divisor \(D\).

**Definition 3.4**.: Let \((X,D)\) be a Campana orbifold over \(F\), and let \((\mathcal{X},\mathcal{D})\) be a proper integral model over \(\mathcal{O}_{F,S}\). Take \(P\in X(F)\). Since \(\mathcal{X}\) is proper, we have \(X(F)=\mathcal{X}(\mathcal{O}_{F,S})\) and so \(P\) extends uniquely to a point \(\mathcal{P}\in\mathcal{X}(\mathcal{O}_{F,S})\). Fix \(v\notin S\) and let \(\mathcal{P}_{v}\in\mathcal{X}(\mathcal{O}_{v})\) be the point corresponding to \(P\). By definition, we can also view it as a map \(\mathcal{P}_{v}:\mathrm{Spec}(\mathcal{O}_{v})\to\mathcal{X}\). Fix \(\alpha\in\mathcal{A}\). Since \(\mathcal{D}_{\alpha}\subset\mathcal{X}\) is a closed subscheme, the fiber product \(\mathcal{D}_{\alpha}\times_{\mathcal{X}}\mathrm{Spec}(\mathcal{O}_{v})\subset\mathrm{Spec}(\mathcal{O}_{v})\) is a closed subscheme corresponding to an ideal \(I_{v,\mathcal{P},\alpha}\) in \(\mathcal{O}_{v}\), using the correspondence between closed subschemes of \(\mathrm{Spec}(\mathcal{O}_{v})\) and ideals in \(\mathcal{O}_{v}\).
We distinguish two cases: * If \(\mathcal{P}_{v}\not\subseteq\mathcal{D}_{\alpha}\), we define the intersection multiplicity of \(P\) and \(\mathcal{D}_{\alpha}\) at \(v\) to be \[n_{v}(\mathcal{D}_{\alpha},P)=\mathrm{length}(\mathcal{O}_{v}/I_{v,\mathcal{ P},\alpha}).\] Equivalently, since \(\mathcal{O}_{v}\) is a discrete valuation ring, say with a choice of uniformiser \(\pi_{v}\) for its maximal ideal, it follows that the non-zero ideal \(I_{v,\mathcal{P},\alpha}\) can be written as \[I_{v,\mathcal{P},\alpha}=\left(\pi_{v}\right)^{n_{v}(\mathcal{D}_{\alpha},P)}.\] * If \(\mathcal{P}_{v}\subseteq\mathcal{D}_{\alpha}\), we set the intersection multiplicity of \(P\) and \(\mathcal{D}_{\alpha}\) at \(v\) to be \(+\infty\) (in this case, the ideal \(I_{v,\mathcal{P},\alpha}\) is just the zero ideal (\(0\)) in \(\mathcal{O}_{v}\)). To summarise, the intersection multiplicity of \(P\) and \(\mathcal{D}_{\alpha}\) at \(v\) is \[n_{v}(\mathcal{D}_{\alpha},P)=\begin{cases}\mathrm{length}(\mathcal{O}_{v}/I_{ v,\mathcal{P},\alpha})&\text{ if }\mathcal{P}_{v}\not\subseteq\mathcal{D}_{\alpha},\\ +\infty&\text{ if }\mathcal{P}_{v}\subseteq\mathcal{D}_{\alpha}.\end{cases}\] **Remark 3.5**.: In practice, given a point \(\mathcal{P}_{v}:\mathrm{Spec}(\mathcal{O}_{v})\to\mathcal{X}\), and an open subset \(U\subseteq\mathcal{X}\) containing the image of \(\mathcal{P}_{v}\), such that \(\mathcal{D}_{\alpha}|_{U}\) is principal and defined by a rational function \(f_{\alpha,U}\) which is regular on \(U\) (i.e., \(\mathcal{D}_{\alpha}\) is Cartier in a neighbourhood of \(\mathcal{P}_{v}\)), we have \(n_{v}(\mathcal{D}_{\alpha},P)=\mathrm{val}_{v}(f_{\alpha,U}(\mathcal{P}_{v}))\), where \(f_{\alpha,U}(\mathcal{P}_{v})\) is the image of \(f_{\alpha,U}\) in \(\mathcal{O}_{v}\) under the ring homomorphism that defines \(\mathcal{P}_{v}:\mathrm{Spec}(\mathcal{O}_{v})\to U\), and \(\mathrm{val}_{v}\) is the \(v\)-adic valuation on \(\mathcal{O}_{v}\). We are now in the position to define Campana points. **Definition 3.6**.: We keep the notation as in Definition 3.4. For \(\alpha\in\mathcal{A}\) we write \(\epsilon_{\alpha}=1-\frac{1}{m_{\alpha}}\). A point \(P\in X(F)\) is a Campana \(\mathcal{O}_{F,S}\)-point on \((\mathcal{X},\mathcal{D})\) if, for every place \(v\notin S\) and every \(\alpha\in\mathcal{A}\), we have either \(n_{v}(\mathcal{D}_{\alpha},P)=0\) or \(n_{v}(\mathcal{D}_{\alpha},P)\geq m_{\alpha}\). **Remark 3.7**.: Different choices of the finite set of places \(S\subset\Omega_{F}\) lead to potentially different sets of Campana points, and thus to potentially different counting problems. **Example 3.8**.: Using Remark 3.5, we describe Campana points for \(F=\mathbb{Q}\) and \(X\subseteq\mathbb{P}_{\mathbb{Q}}^{n}\), in the setting where the divisor \(D_{\alpha}\) is the restriction to \(X\) of a divisor \(D_{\alpha}^{\prime}\subseteq\mathbb{P}^{n}\) for all \(\alpha\in\mathcal{A}\), and we have a model \(\mathcal{X}\subseteq\mathbb{P}_{\mathbb{Z}_{S}}^{n}\) for some set of places \(S\) (for example, when \(\mathcal{X}\) is the Zariski closure of \(X\) in \(\mathbb{P}_{\mathbb{Z}_{S}}^{n}\)). For all \(\alpha\in\mathcal{A}\), let \(f_{\alpha}\in\mathbb{Z}[x_{0},\ldots,x_{n}]\) be homogeneous with coprime coefficients (i.e., with content \(1\)) such that \(D_{\alpha}\) is defined by \(f_{\alpha}\). Fix a point \(P\in X(\mathbb{Q})\) and write \(P=(a_{0}:\cdots:a_{n})\) in projective homogeneous coordinates with \(a_{0},\ldots,a_{n}\in\mathbb{Z}\) such that \(\gcd(a_{0},\ldots,a_{n})=1\). 
Let \(l_{P}\in\mathbb{Z}[x_{0},\ldots,x_{n}]\) be a linear form with \(l_{P}(a_{0},\ldots,a_{n})=1\). Then the image of \(\mathcal{P}\) in \(\mathcal{X}\) is contained in the affine patch \(U=\mathbb{P}_{\mathbb{Z}_{S}}^{n}\smallsetminus\{l_{P}=0\}\), so for all \(\alpha\in\mathcal{A}\) we can take \(f_{\alpha,U}=f_{\alpha}/l_{P}^{\deg f_{\alpha}}\), and find \(n_{p}(\mathcal{D}_{\alpha},P)=\mathrm{val}_{p}(f_{\alpha}(a_{0},\ldots,a_{n}))\) for every prime \(p\notin S\). It follows that \(P\in X(\mathbb{Q})=\mathcal{X}(\mathbb{Z}_{S})\) is a Campana point precisely when, for all \(\alpha\in\mathcal{A}\), we have that \(f_{\alpha}(a_{0},\ldots,a_{n})\) is \(m_{\alpha}\)-full outside \(S\).

## 4. Campana points on diagonal hypersurfaces

In this section we set up the problem of counting Campana points of bounded height on diagonal hypersurfaces in \(\mathbb{P}_{\mathbb{Q}}^{n}\) with boundary divisor induced by the coordinate hyperplanes. We do this in two settings: first by constructing a proper integral model over \(\mathbb{Z}\), and then by constructing a good integral model over \(\mathbb{Z}_{S}\) for a suitable finite set of places \(S\).

### A proper model

Let \(F=\mathbb{Q}\) and let \(k\in\mathbb{Z}_{\geq 1}\). Let \(X\subset\mathbb{P}_{\mathbb{Q}}^{n}\) be the diagonal hypersurface given by the equation \[\sum_{i=0}^{n}c_{i}x_{i}^{k}=0,\] where \(c_{0},\ldots,c_{n}\in\mathbb{Z}_{\neq 0}\), and \(\gcd(c_{0},\ldots,c_{n})=1\). We consider the effective \(\mathbb{Q}\)-divisor on \(X\) given by \[D=\sum_{i=0}^{n}\epsilon_{i}D_{i},\] where \(D_{i}=\{x_{i}=0\}\cap X\) and \(\epsilon_{i}\) belongs to the set \(\mathcal{M}\) for all \(i\in\{0,\ldots,n\}\). Then \((X,D)\) is a Campana orbifold over \(\mathbb{Q}\). Write \(\epsilon_{i}=1-\frac{1}{m_{i}}\) for all \(i\in\{0,\ldots,n\}\).

**Definition 4.1**.: We denote by \((\mathcal{X}_{1},\mathcal{D}_{1})\) the proper integral model of \((X,D)\) given by the same equations for \(X\) and \(D\) over \(\mathbb{Z}_{S}\), where \(S=\{\infty\}\).

**Lemma 4.2**.: _Let \(B>0\) be a real number. With the model \((\mathcal{X}_{1},\mathcal{D}_{1})\) and the height induced by the Weil height on \(\mathbb{P}^{n}(\mathbb{Q})\), the set of Campana \(\mathbb{Z}\)-points on \(X\cap(\mathbb{P}^{n}\smallsetminus\bigcup_{i=0}^{n}\{x_{i}=0\})\) of height bounded by \(B\) is given by_ \[N(X,D,B)=\left\{(x_{0}:\cdots:x_{n})\in\mathbb{P}^{n}(\mathbb{Q})\;\left|\;\begin{array}{l}x_{0},\ldots,x_{n}\in\mathbb{Z}_{\neq 0},\quad\gcd(x_{0},\ldots,x_{n})=1,\\ x_{i}\text{ is }m_{i}\text{-full }\forall i\in\{0,\ldots,n\},\\ |x_{0}|,\ldots,|x_{n}|\leq B,\quad c_{0}x_{0}^{k}+\cdots+c_{n}x_{n}^{k}=0\end{array}\right.\right\}. \tag{4.1}\]

Proof.: Take \(a_{0},\ldots,a_{n}\in\mathbb{Z}_{\neq 0}\) with \(\gcd(a_{0},\ldots,a_{n})=1\), such that \(P=(a_{0}:\cdots:a_{n})\) is contained in \(X\cap(\mathbb{P}^{n}\smallsetminus\bigcup_{i=0}^{n}\{x_{i}=0\})\) and of height at most \(B\). The only non-obvious condition to check in order to prove the lemma is that \(P\) satisfies Definition 3.6 if and only if the third condition in \(N(X,D,B)\) holds. Fix \(i\) in \(\{0,\ldots,n\}\). As in Example 3.8, let \(l_{P}\) be a linear form in \(\mathbb{Z}[x_{0},\ldots,x_{n}]\) with \(l_{P}(a_{0},\ldots,a_{n})=1\). Then the divisor \(\mathcal{D}_{i}\) is Cartier on the open set \(U=\mathbb{P}^{n}\smallsetminus\{l_{P}=0\}\), given by the rational function \(x_{i}/l_{P}\).
We find \(n_{p}(\mathcal{D}_{i},P)=\mathrm{val}_{p}(a_{i})\) for every prime \(p\), and by definition, \(P\) is a Campana point if and only if \(\mathrm{val}_{p}(a_{i})\in\mathbb{Z}_{\geq m_{i}}\cup\{0\}\) for every prime \(p\), which is equivalent to \(a_{i}\) being \(m_{i}\)-full.

### Bad primes and a regular model

The model \((\mathcal{X}_{1},\mathcal{D}_{1})\) over \(\mathbb{Z}\) is not a good integral model of \((X,D)\) in general, since \(\mathcal{X}_{1}\) is not regular as a scheme over \(\mathbb{Z}\). The following lemma gives sufficient conditions on a finite set \(S\) of primes for the corresponding model over \(\mathbb{Z}_{S}\) to be smooth.

**Lemma 4.3**.: _For \(S\) a finite set of places of \(\mathbb{Q}\) containing \(\infty\) and all primes dividing \(k\) and all \(c_{i}\), the model \((\mathcal{X}_{2},\mathcal{D}_{2})\) of \((X,D)\) over \(\mathbb{Z}_{S}\) defined by the same equations as \(X\) and \(D\) over \(\mathbb{Z}_{S}\) is smooth._

Proof.: Let \(\mathcal{X}\subset\mathbb{P}_{\mathbb{Z}}^{n}\) be the projective scheme over \(\operatorname{Spec}\mathbb{Z}\) defined by the equation \[c_{0}x_{0}^{k}+...+c_{n}x_{n}^{k}=0.\] Notice that the structure morphism \(s:\mathcal{X}\to\operatorname{Spec}\mathbb{Z}\) is flat by [13, Proposition 4.3.9], as \(\mathcal{X}\) is integral because \(\gcd(c_{0},\ldots,c_{n})=1\), \(\mathbb{Z}\) is a Dedekind domain, and \(s\) is non-constant. Let \(S\) be a finite set of places of \(\mathbb{Q}\) containing \(\infty\) and all the primes \(p\) which divide \(k\cdot\prod_{i=0}^{n}c_{i}\). Consider \(\mathcal{X}_{S}=\mathcal{X}\times_{\operatorname{Spec}\mathbb{Z}}\operatorname{Spec}(\mathbb{Z}_{S})\). The structure morphism \(s_{S}:\mathcal{X}_{S}\to\operatorname{Spec}(\mathbb{Z}_{S})\) is flat, since flatness is stable under base change and \(s\) is flat. Since \(s_{S}\) is flat, by [10, Theorem 10.2] it suffices to show that the geometric fibers of \(s_{S}\) are regular. The geometric generic fiber \(\mathcal{X}_{\overline{\mathbb{Q}}}\) is regular by the Jacobian criterion [10, Exercise I.5.8], as \(c_{0},\ldots,c_{n}\neq 0\), and since any Noetherian scheme is regular if and only if it is regular at its closed points [13, Chapter 4, Corollary 2.17]. For every prime number \(p\notin S\), the geometric fiber \(\mathcal{X}_{\overline{\mathbb{F}}_{p}}\) over \((p)\in\operatorname{Spec}\mathbb{Z}\) is regular, again by the Jacobian criterion, as \(p\nmid k\cdot\prod_{i=0}^{n}c_{i}\), combined with the same reduction to closed points.

From Lemma 4.3 it follows that we can always obtain a good integral model for \((X,D)\) by setting \[S=\left\{p\text{ prime}:p\ |\ k\cdot\prod_{i=0}^{n}c_{i}\right\}\cup\{\infty\},\] and defining \((\mathcal{X}_{2},\mathcal{D}_{2})\) over \(\mathbb{Z}_{S}\) by the same equations for \(X\) and \(D\) over \(\mathbb{Z}_{S}\). The set of Campana points on \(X\) with respect to the model \(\mathcal{X}_{2}\) is then similar to the set in Lemma 4.2, but requiring the \(x_{i}\)-coordinate of the point to be \(m_{i}\)-full outside \(S\) instead of just \(m_{i}\)-full. This leads to a larger set of Campana points than the one in Lemma 4.2. It is not clear that the set \(S\) defined above (or a larger one) is necessary in order to obtain a regular model; for certain choices of \(X\) we might get regular models using smaller \(S\) as well.
However, we remark that primes dividing one or more of the coefficients \(c_{i}\) can be potentially problematic and induce non-regular points on \(\mathcal{X}_{1}\), as the following example shows.

**Example 4.4**.: Let \(k\geq 2\) be an integer. Let \(p\) be a prime not dividing \(k\). Let \(\mathcal{X}\) be defined by \[p^{k}x_{0}^{k}-x_{1}^{k}+x_{2}^{k}=0\subset\mathbb{P}_{\mathbb{Z}}^{2};\] it is clear that, e.g., the point \((1:p:0)\) is on \(\mathcal{X}\). Notice that the Jacobian matrix over \(\mathbb{F}_{p}\) vanishes at the point \((1:0:0)\), so \(\mathcal{X}\) is not smooth in the fiber over \(\mathbb{F}_{p}\). We now show that \(\mathcal{X}\) is not regular. We work in the affine patch of \(\mathbb{P}_{\mathbb{Z}}^{2}\) given by \(\{x_{0}\neq 0\}\), and we get the affine equation \(p^{k}-y_{1}^{k}+y_{2}^{k}=0\). Consider the maximal ideal \[\mathfrak{m}=(p,y_{1},y_{2})\in\operatorname{Spec}\left(\frac{\mathbb{Z}[y_{1},y_{2}]}{(p^{k}-y_{1}^{k}+y_{2}^{k})}\right),\] which contains, for example, the prime ideal corresponding to the point \((y_{1},y_{2})=(p,0)\). Notice that \(p^{k}-y_{1}^{k}+y_{2}^{k}\) is contained in \(\mathfrak{m}\). We have that \(\operatorname{Spec}\left(\frac{\mathbb{Z}[y_{1},y_{2}]}{(p^{k}-y_{1}^{k}+y_{2}^{k})}\right)\) is regular at a point corresponding to \(\mathfrak{m}/(p^{k}-y_{1}^{k}+y_{2}^{k})\) if and only if \(p^{k}-y_{1}^{k}+y_{2}^{k}\notin\mathfrak{m}^{2}\). But, since \(k\geq 2\), it is clear that \(p^{k}-y_{1}^{k}+y_{2}^{k}\in\mathfrak{m}^{2}\). Hence, \(\operatorname{Spec}\left(\frac{\mathbb{Z}[y_{1},y_{2}]}{(p^{k}-y_{1}^{k}+y_{2}^{k})}\right)\) is not regular at \(\mathfrak{m}\).

The primes dividing the exponent \(k\) can also be potentially problematic, as the following example shows.

**Example 4.5**.: Let \(k\geq 2\) be an integer. Let \(p\) be a prime dividing \(k\) and write \(k=p\lambda\). We consider \(\mathcal{X}\) defined by \[x_{0}^{k}-x_{1}^{k}+x_{2}^{k}=0\subset\mathbb{P}_{\mathbb{Z}}^{2},\] with the obvious point \((x_{0}:x_{1}:x_{2})=(1:1:0)\) on it. First of all, we notice that the Jacobian matrix over \(\mathbb{F}_{p}\) vanishes identically at every point. We work in the affine patch of \(\mathbb{P}_{\mathbb{Z}}^{2}\) given by \(\{x_{0}\neq 0\}\), and we get the affine equation \(1-y_{1}^{k}+y_{2}^{k}=0\). Consider the maximal ideal \[\mathfrak{m}=(p,y_{1}-1,y_{2})\in\operatorname{Spec}\left(\frac{\mathbb{Z}[y_{1},y_{2}]}{(1-y_{1}^{k}+y_{2}^{k})}\right),\] which contains, for example, the prime ideal corresponding to the point \((y_{1},y_{2})=(1,0)\). Notice that \(1-y_{1}^{k}+y_{2}^{k}\) is contained in \(\mathfrak{m}\): indeed, \(1-y_{1}^{k}+y_{2}^{k}=(1-y_{1})(1+y_{1}+\cdots+y_{1}^{k-1})+y_{2}^{k}\). As in the previous example, we have that \(\operatorname{Spec}\left(\frac{\mathbb{Z}[y_{1},y_{2}]}{(1-y_{1}^{k}+y_{2}^{k})}\right)\) is regular at a point corresponding to \(\mathfrak{m}/(1-y_{1}^{k}+y_{2}^{k})\) if and only if \(1-y_{1}^{k}+y_{2}^{k}\notin\mathfrak{m}^{2}\). We now show that \(1-y_{1}^{k}+y_{2}^{k}\) is in \(\mathfrak{m}^{2}\). Since \(y_{2}^{k}\in\mathfrak{m}^{2}\) (as \(k\geq 2\)), it suffices to show that \(1-y_{1}^{k}\in\mathfrak{m}^{2}\). If \(p\) is odd, we notice that \[(y_{1}^{\lambda}-1)^{p}=y_{1}^{k}-1+p(-(y_{1}^{\lambda})^{p-1}+y_{1}^{\lambda})+O(p^{2})\] and thus that \[y_{1}^{k}-1=(y_{1}^{\lambda}-1)^{p}+py_{1}^{\lambda}((y_{1}^{\lambda})^{p-2}-1)+O(p^{2}).\] But \((y_{1}-1)\) is a factor of \((y_{1}^{\lambda}-1)\), and thus \((y_{1}-1)^{2}\) is a factor of \((y_{1}^{\lambda}-1)^{p}\).
Moreover, since \(\lambda(p-2)>0\), we have that \((y_{1}-1)\) is a factor of \(((y_{1}^{\lambda})^{p-2}-1)\). Hence, it follows that \(1-y_{1}^{k}\in\mathfrak{m}^{2}\). If \(p=2\), then completing the square yields \[(y_{1}^{\lambda})^{2}-1=(y_{1}^{\lambda}-1)^{2}+2(y_{1}^{\lambda}-1).\] But \((y_{1}-1)\) is a factor of \((y_{1}^{\lambda}-1)\). Hence, it follows that \(1-y_{1}^{k}\in\mathfrak{m}^{2}\). In both cases, we conclude that \(1-y_{1}^{k}+y_{2}^{k}\) is contained in \(\mathfrak{m}^{2}\) and thus that \(\operatorname{Spec}\left(\frac{\mathbb{Z}[y_{1},y_{2}]}{(1-y_{1}^{k}+y_{2}^{k})}\right)\) is not regular at \(\mathfrak{m}\).

### The associated counting problem

We now set up the counting problem for Theorem 1.1. As in the statement of Theorem 1.1 and in Section 4.1, we fix nonzero coprime integers \(c_{0},\ldots,c_{n}\), and integers \(k,m_{0},\ldots,m_{n}\geq 2\). Up to reordering the indices, we can assume that \(m_{0}\leq\cdots\leq m_{n}\). We use freely the notation introduced in Section 1.2. We consider the counting function \[N(B)=\#\left\{\boldsymbol{x}\in\mathbb{Z}_{\neq 0}^{n+1}\left|\text{gcd}(x_{0},\ldots,x_{n})=1,\,|\boldsymbol{x}|\leq B,\,x_{i}\text{ is $m_{i}$-full }\forall i\in\{0,\ldots,n\},\,\sum_{i=0}^{n}c_{i}x_{i}^{k}=0\right.\right\}\] analogous to [1, (1.1)]. Then \[\#N(X,D,B)=\frac{1}{2}N(B), \tag{4.2}\] as every point in \(N(X,D,B)\) has exactly two representatives \((x_{0},\ldots,x_{n})\in\mathbb{Z}^{n+1}\) satisfying the conditions in (4.1). In the rest of this section we rephrase the counting problem in order to apply the circle method. We follow the strategy of [1, §3]. Recall that for each \(m\in\mathbb{N}\), an \(m\)-full integer \(x\neq 0\) has a unique representation \[x=\pm u^{m}\prod_{r=1}^{m-1}v_{r}^{m+r},\] with \(u,v_{1},\ldots,v_{m-1}\in\mathbb{N}\) such that \(v_{1},\ldots,v_{m-1}\) are squarefree and pairwise coprime. Thus, one can rewrite \(N(B)\) as the cardinality of the set of tuples \(\mathbf{x}\in\mathbb{Z}_{\neq 0}^{n+1}\) satisfying \[\gcd(x_{0},\ldots,x_{n})=1,\quad|\mathbf{x}|\leq B, \tag{4.3}\] \[x_{i}=\pm u_{i}^{m_{i}}\prod_{r=1}^{m_{i}-1}v_{i,r}^{m_{i}+r}\quad\forall i\in\{0,\ldots,n\}, \tag{4.4}\] \[\mu^{2}(v_{i,r})=1,\quad\gcd(v_{i,r},v_{i,\tilde{r}})=1,\quad\forall i\in\{0,\ldots,n\},\forall r,\tilde{r}\in\{1,\ldots,m_{i}-1\},r\neq\tilde{r}, \tag{4.5}\] \[\sum_{i=0}^{n}c_{i}x_{i}^{k}=0. \tag{4.6}\]
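The unique representation just recalled is constructive: for each prime exponent \(e\geq m\), write \(e=mq+(m+r)\) with \(r\equiv e\ (\mathrm{mod}\ m)\) when \(m\nmid e\), and absorb \(p^{e/m}\) into \(u\) when \(m\mid e\). The Python sketch below is our own illustration of this (it is not taken from the paper), using sympy for factorization.

```python
from math import prod
from sympy import factorint

def m_full_decomposition(x: int, m: int):
    """Canonical form |x| = u^m * prod_{r=1}^{m-1} v_r^(m+r) of an m-full integer."""
    u, v = 1, {r: 1 for r in range(1, m)}
    for p, e in factorint(abs(x)).items():
        assert e >= m, f"{x} is not {m}-full"
        r = e % m
        if r == 0:
            u *= p ** (e // m)
        else:
            # e = m*(e - m - r)//m + (m + r): p enters v_r once, the rest goes to u.
            v[r] *= p
            u *= p ** ((e - m - r) // m)
    return u, v

# Example: x = 2^4 * 3^5 * 5^6 is 3-full; expect u = 5^2, v_1 = 2, v_2 = 3.
x, m = 2**4 * 3**5 * 5**6, 3
u, v = m_full_decomposition(x, m)
print(u, v)  # 25 {1: 2, 2: 3}
assert x == u**m * prod(p ** (m + r) for r, p in v.items())
```

By construction each \(v_{r}\) is squarefree and the \(v_{r}\)'s are pairwise coprime, since every prime is routed to exactly one residue class \(r\).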
Put \(\Lambda=\sum_{i=0}^{n}(m_{i}-1)\). For integer vectors \(\boldsymbol{d}=(d_{0},\ldots,d_{n})\in\mathbb{Z}_{\neq 0}^{n+1}\) as well as \(\boldsymbol{s}=(s_{0},\ldots,s_{n})\in\mathbb{N}^{n+1}\) and \(\boldsymbol{t}=(t_{i,r})_{0\leq i\leq n,1\leq r\leq m_{i}-1}\in\mathbb{N}^{\Lambda}\), let \[N_{\boldsymbol{d}}(B,\boldsymbol{s},\boldsymbol{t})=\left\{\boldsymbol{x}\in(\mathbb{N}\cap[1,B])^{n+1}\;\left|\;d_{0}x_{0}^{k}+\ldots+d_{n}x_{n}^{k}=0,\ \text{(4.4) and (4.5)}\right.\right\}.\]

[The remaining conditions in the display above, the definition of the auxiliary counts \(N^{*}_{\boldsymbol{d}}\), the multiplicative function \(\varpi\) of Lemma 4.6, and equations (4.7)-(4.9) are garbled in the source and cannot be reconstructed here.]
For any pair \((\boldsymbol{s},\boldsymbol{t})\in\mathbb{N}^{n+1}\times\mathbb{N}^{\Lambda}\) and any \(R\in\mathbb{R}_{>0}\cup\{\infty\}\) define the set \[V_{R}(\boldsymbol{s},\boldsymbol{t})=\left\{\tilde{\boldsymbol{v}}\in\mathbb{N}^{\Lambda}\left|\begin{array}{c}s_{i}^{km_{i}}\prod_{r=1}^{m_{i}-1}t_{i,r}^{k(m_{i}+r)}\tilde{v}_{i,r}^{k(m_{i}+r)}\leq R^{k}\quad\forall i\in\{0,\ldots,n\}\\ \mu^{2}(\tilde{v}_{i,r}t_{i,r})=1\quad\forall i\in\{0,\ldots,n\},r\in\{1,\ldots,m_{i}-1\}\\ \gcd(\tilde{v}_{i,r}t_{i,r},\tilde{v}_{i,r^{\prime}}t_{i,r^{\prime}})=1\quad\forall i,r,r^{\prime}\text{ with }r\neq r^{\prime}\end{array}\right.\right\}.\] Thus \[\#N_{\boldsymbol{d}}(B,\boldsymbol{s},\boldsymbol{t})=\sum_{\tilde{\boldsymbol{v}}\in V_{B}(\boldsymbol{s},\boldsymbol{t})}M_{\boldsymbol{d},\boldsymbol{\gamma}}(B^{k}),\] where \(\boldsymbol{\gamma}\) is given by (4.9). For \(R\in\mathbb{R}_{>0}\cup\{\infty\}\) we define further \[\mathcal{T}_{R}=\left\{\boldsymbol{s}\in\mathbb{N}^{n+1},\boldsymbol{t}\in\mathbb{N}^{\Lambda}\left|\begin{array}{c}s_{i}^{m_{i}}\prod_{r=1}^{m_{i}-1}t_{i,r}^{m_{i}+r}\leq R\\ \mu^{2}(s_{i})=\mu^{2}(t_{i,r})=1\quad\forall i\in\{0,\ldots,n\},r\in\{1,\ldots,m_{i}-1\}\\ \gcd(t_{i,r},t_{i,r^{\prime}})=1\quad\forall i\in\{0,\ldots,n\},r\neq r^{\prime}\in\{1,\ldots,m_{i}-1\}\\ p\mid s_{j}t_{j,1}\cdots t_{j,m_{j}-1}\implies p\mid s_{i}t_{i,1}\cdots t_{i,m_{i}-1}\quad\forall i,j\in\{0,\ldots,n\}\end{array}\right.\right\}.\] We can thus rewrite the relation (4.8) in the shape \[\#N_{\boldsymbol{d}}^{*}(B,\boldsymbol{1},\boldsymbol{1})=\sum_{(\boldsymbol{s},\boldsymbol{t})\in\mathcal{T}_{B}}\varpi(\boldsymbol{s},\boldsymbol{t})\sum_{\tilde{\boldsymbol{v}}\in V_{B}(\boldsymbol{s},\boldsymbol{t})}M_{\boldsymbol{d},\boldsymbol{\gamma}}(B^{k}) \tag{4.10}\] due to the properties of the function \(\varpi\) in Lemma 4.6. The functions \(M_{\boldsymbol{d},\boldsymbol{\gamma}}(B^{k})\) can be estimated via [1, Theorem 2.7] whenever \(\sum_{i=0}^{n-1}\frac{1}{km_{i}(km_{i}+1)}\geq 1\). In the next section we improve the circle method result of Browning and Yamagishi to a version that applies under the weaker assumptions of Theorem 1.1.

## 5. Application of the circle method

Here we prove a sharper version of [1, Theorem 2.7] with parameters \(N=0\), \(H=1\), \(\boldsymbol{h}=\boldsymbol{0}\). Fix integers \(2\leq\tilde{m}_{0}\leq\cdots\leq\tilde{m}_{n}\).
For \(\boldsymbol{d}\in\mathbb{Z}_{\neq 0}^{n+1}\), \(\boldsymbol{\zeta}\in\mathbb{N}^{n+1}\) and \(\tilde{B}>0\), put \[\tilde{M}_{\boldsymbol{d},\boldsymbol{\zeta}}(\tilde{B})=\#\left\{\boldsymbol{u}\in\mathbb{N}^{n+1}\left|\zeta_{i}u_{i}^{\tilde{m}_{i}}\leq\tilde{B}\ \forall i\in\{0,\ldots,n\},\,\sum_{i=0}^{n}d_{i}\zeta_{i}u_{i}^{\tilde{m}_{i}}=0\right.\right\}. \tag{5.1}\] Thus, if \(\tilde{m}_{i}=km_{i}\) for \(0\leq i\leq n\) we have \(M_{\boldsymbol{d},\boldsymbol{\zeta}}(B^{k})=\tilde{M}_{\boldsymbol{d},\boldsymbol{\zeta}}(B^{k})\). Our next immediate goal is to establish an asymptotic formula for \(\tilde{M}_{\boldsymbol{d},\boldsymbol{\gamma}}(\tilde{B})\). Here, we use the Hardy-Littlewood circle method much in the way of [1, Section 2], but we aim for a slightly sharper bound than the one presented in that paper.

For any integer \(m\geq 2\), let \(s_{0}(m)\) be a positive integer such that for all \(s\geq s_{0}(m)\) and \(\epsilon>0\) one has \[\int_{0}^{1}\left|\sum_{1\leq x\leq X}e(\alpha x^{m})\right|^{2s}\mathrm{d}\alpha\ll X^{2s-m+\epsilon}.\] Similarly, let \(\sigma(m)\) be a positive integer with the property that for any \(\beta\in[0,1)\) and any \(q\in\mathbb{N}\) with \(\|q\beta\|\leq q^{-1}\) one has the bound \[\left|\sum_{1\leq x\leq X}e(\beta x^{m})\right|\ll X^{1+\epsilon}(X^{-1}+q^{-1}+qX^{-m})^{\sigma(m)}\] for all \(\epsilon>0\). It follows from [1, Theorem 2.1] (see also [14, Theorem 14.8] for the bound on \(s_{0}(m)\)) that the choices \[s_{0}(m)=\min\{2^{m-1},\tfrac{1}{2}m(m-1)+\lfloor\sqrt{2m+2}\rfloor\}\quad\text{and}\quad\sigma(m)^{-1}=2s_{0}(m)\] are admissible. (We remark here that even \(\sigma(m)^{-1}=2s_{0}(m-1)\) is admissible. Since \(s_{0}\) is strictly increasing, choosing \(\sigma(m)\) as above constitutes a weakening.)

Suppose for the remainder of the section that the parameters \(\boldsymbol{d}\in\mathbb{Z}_{\neq 0}^{n+1}\), \(\boldsymbol{\zeta}\in\mathbb{N}^{n+1}\) and \(\tilde{B}>0\) are fixed. For every \(i\in\{0,\dots,n\}\), let \(\tilde{B}_{i}=(\tilde{B}/\zeta_{i})^{\frac{1}{\tilde{m}_{i}}}\) and \[S_{i}(\alpha)=\sum_{1\leq u\leq\tilde{B}_{i}}e(\alpha d_{i}\zeta_{i}u^{\tilde{m}_{i}}),\] so that \[\tilde{M}_{\boldsymbol{d},\boldsymbol{\zeta}}(\tilde{B})=\int_{0}^{1}\prod_{i=0}^{n}S_{i}(\alpha)\mathrm{d}\alpha.\] For fixed \(\delta>0\), we recall the set of minor arcs from [1, §2], which is given by \[\mathfrak{m}=[0,1]\smallsetminus\bigcup_{\begin{subarray}{c}0\leq a\leq q\leq\tilde{B}^{\delta}\\ \gcd(a,q)=1\end{subarray}}\{\alpha\in[0,1):|\alpha-a/q|<\tilde{B}^{\delta-1}\}.\]

**Lemma 5.1**.: _For \(i\in\{0,\dots,n\}\), \(0<\delta<\frac{1}{(2n+5)\tilde{m}_{n}(\tilde{m}_{n}+1)}\) and \(\epsilon>0\), we have_ \[\sup_{\alpha\in\mathfrak{m}}|S_{i}(\alpha)|\ll\tilde{B}^{1/\tilde{m}_{i}-\delta\sigma(\tilde{m}_{i})+\epsilon}\zeta_{i}^{-1/\tilde{m}_{i}+\sigma(\tilde{m}_{i})}.\]

Proof.: The proof is identical to that of [1, Lemma 2.5] (see in particular the last display on p. 1083), with the only difference being the precise meaning of the quantity \(\sigma(\tilde{m}_{i})\).

Let \[\tilde{\Theta}=\sum_{i=0}^{n}\frac{1}{2s_{0}(\tilde{m}_{i})}-1\qquad\text{and}\qquad\tilde{\Gamma}=\sum_{i=0}^{n}\frac{1}{\tilde{m}_{i}}-1. \tag{5.2}\]

**Lemma 5.2**.: _Assume that \(\tilde{\Theta}>0\). Let \(0<\delta<\frac{1}{(2n+5)\tilde{m}_{n}(\tilde{m}_{n}+1)}\) and \(\epsilon>0\)._
Then_

\[\int_{\mathfrak{m}}\left|\prod_{i=0}^{n}S_{i}(\alpha)\right|\mathrm{d}\alpha\ll\tilde{B}^{\tilde{\Gamma}-\delta\tilde{\Theta}+\epsilon}\prod_{i=0}^{n}\zeta_{i}^{1/(2s_{0}(\tilde{m}_{i}))-1/\tilde{m}_{i}}.\]

Proof.: From the condition \(\tilde{\Theta}>0\) it follows that we can find \(\beta_{0},\dots,\beta_{n}\in(0,1)\) so that

\[\sum_{i=0}^{n}\frac{\beta_{i}}{2s_{0}(\tilde{m}_{i})}=1. \tag{5.3}\]

Take \(\ell_{i}=\frac{2s_{0}(\tilde{m}_{i})}{\beta_{i}}\) for all \(0\leq i\leq n\). It then follows from Hölder's inequality, the definition of \(s_{0}(\tilde{m}_{i})\) and Lemma 5.1 above that

\[\begin{aligned}\int_{\mathfrak{m}}\left|\prod_{i=0}^{n}S_{i}(\alpha)\right|\mathrm{d}\alpha&\ll\left(\prod_{i=0}^{n}\sup_{\alpha\in\mathfrak{m}}|S_{i}(\alpha)|^{1-\beta_{i}}\right)\int_{0}^{1}\prod_{i=0}^{n}|S_{i}(\alpha)|^{\beta_{i}}\mathrm{d}\alpha\\&\ll\prod_{i=0}^{n}\left(\sup_{\alpha\in\mathfrak{m}}|S_{i}(\alpha)|^{1-\beta_{i}}\left(\int_{0}^{1}|S_{i}(\alpha)|^{\beta_{i}\ell_{i}}\mathrm{d}\alpha\right)^{1/\ell_{i}}\right)\\&\ll\tilde{B}^{\epsilon}\prod_{i=0}^{n}\left(\left(\tilde{B}^{1/\tilde{m}_{i}-\delta\sigma(\tilde{m}_{i})}\zeta_{i}^{-1/\tilde{m}_{i}+\sigma(\tilde{m}_{i})}\right)^{1-\beta_{i}}\left(\tilde{B}/\zeta_{i}\right)^{\frac{\beta_{i}\ell_{i}-\tilde{m}_{i}}{\ell_{i}\tilde{m}_{i}}}\right)\\&\ll\tilde{B}^{\omega_{1}+\epsilon}\prod_{i=0}^{n}\zeta_{i}^{\omega_{2,i}},\end{aligned}\]

where

\[\omega_{1}=\sum_{i=0}^{n}\left((1/\tilde{m}_{i}-\delta\sigma(\tilde{m}_{i}))(1-\beta_{i})+\frac{\beta_{i}\ell_{i}-\tilde{m}_{i}}{\ell_{i}\tilde{m}_{i}}\right)\]

and

\[\omega_{2,i}=(-1/\tilde{m}_{i}+\sigma(\tilde{m}_{i}))(1-\beta_{i})-\frac{\beta_{i}\ell_{i}-\tilde{m}_{i}}{\ell_{i}\tilde{m}_{i}}\qquad\forall i\in\{0,\dots,n\}.\]

After inserting our choice for \(\ell_{i}\), a modicum of computation shows that

\[\omega_{2,i}=-\frac{1}{\tilde{m}_{i}}+\sigma(\tilde{m}_{i})-\beta_{i}\left(\sigma(\tilde{m}_{i})-\frac{1}{2s_{0}(\tilde{m}_{i})}\right)=-\frac{1}{\tilde{m}_{i}}+\frac{1}{2s_{0}(\tilde{m}_{i})},\]

where in the last step we took advantage of our choice \((2s_{0}(\tilde{m}_{i}))^{-1}=\sigma(\tilde{m}_{i})\). Similarly, we have

\[\omega_{1}=\sum_{i=0}^{n}\left(\frac{1}{\tilde{m}_{i}}-\frac{\beta_{i}}{2s_{0}(\tilde{m}_{i})}-\delta\sigma(\tilde{m}_{i})(1-\beta_{i})\right)=\tilde{\Gamma}-\delta\tilde{\Theta},\]

where the last equality follows from (5.3).

Combining Lemma 5.2 with [1, Lemma 2.2] yields the desired asymptotic formula. Define

\[\mathfrak{S}_{\boldsymbol{d},\boldsymbol{\zeta}}=\sum_{q=1}^{\infty}\frac{1}{q^{n+1}}\sum_{\begin{subarray}{c}a\;(\mathrm{mod}\;q)\\ (a,q)=1\end{subarray}}\prod_{i=0}^{n}\sum_{r=1}^{q}e(ad_{i}\zeta_{i}r^{\tilde{m}_{i}}/q)\qquad\text{and}\qquad\mathfrak{J}_{\boldsymbol{d}}=\int_{-\infty}^{\infty}\prod_{i=0}^{n}\left(\int_{0}^{1}e(\lambda d_{i}\xi^{\tilde{m}_{i}})d\xi\right)d\lambda, \tag{5.4}\]

noting that these coincide with the definitions of \(\mathfrak{S}_{\boldsymbol{d};\boldsymbol{\zeta}}(\boldsymbol{h},H;N)\) and \(\mathfrak{J}_{\boldsymbol{d}}\) of [1, Section 2.1] with the choices \(\boldsymbol{h}=\boldsymbol{0}\), \(H=1\) and \(N=0\). Thus, we obtain the following.

**Theorem 5.3**.: _Let \(2\leq\tilde{m}_{0}\leq\cdots\leq\tilde{m}_{n}\) be positive integers such that_

\[\sum_{i=0}^{n}\frac{1}{2s_{0}(\tilde{m}_{i})}>1\qquad\text{and}\qquad\sum_{i=0}^{n}\frac{1}{\tilde{m}_{i}}>3.\]

_Let \(\boldsymbol{d}\in\mathbb{Z}_{\neq 0}^{n+1}\), \(\boldsymbol{\zeta}\in\mathbb{N}^{n+1}\), \(\tilde{B}>0\). Let \(0<\delta<\frac{1}{(2n+5)\tilde{m}_{n}(\tilde{m}_{n}+1)}\) and \(\epsilon>0\).
With the notation introduced in (5.1) and (5.2), we have_

\[\tilde{M}_{\boldsymbol{d},\boldsymbol{\zeta}}(\tilde{B})=\frac{\mathfrak{S}_{\boldsymbol{d},\boldsymbol{\zeta}}\mathfrak{J}_{\boldsymbol{d}}}{\prod_{i=0}^{n}\zeta_{i}^{\frac{1}{\tilde{m}_{i}}}}\tilde{B}^{\tilde{\Gamma}}+O\left(E_{1}(\boldsymbol{\zeta})+\frac{\tilde{B}^{\tilde{\Gamma}-\delta}E_{2}(\boldsymbol{\zeta})}{\prod_{i=0}^{n}\zeta_{i}^{\frac{1}{\tilde{m}_{i}}}}+\tilde{B}^{\tilde{\Gamma}-\delta\tilde{\Theta}+\epsilon}\prod_{i=0}^{n}\zeta_{i}^{-\frac{1}{\tilde{m}_{i}}+\frac{1}{2s_{0}(\tilde{m}_{i})}}\right),\]

_and the error terms are given by_

\[E_{1}(\boldsymbol{\zeta})=\frac{\prod_{i=0}^{n}\tilde{B}_{i}}{\tilde{B}}\left(\frac{1}{\tilde{B}_{0}}+\cdots+\frac{1}{\tilde{B}_{n}}\right)\tilde{B}^{(2n+5)\delta}\qquad\text{and}\qquad E_{2}(\boldsymbol{\zeta})=\sum_{q=1}^{\infty}q^{1-\tilde{\Gamma}+\epsilon}\prod_{i=0}^{n}\gcd(\zeta_{i},q)^{\frac{1}{\tilde{m}_{i}}}.\]

## 6. Proof of the main theorem

In this section we conclude the proof of Theorem 1.1. We start with some numerical conditions that follow from the assumptions of Theorem 1.1.

**Lemma 6.1**.: _The assumptions \(k,m_{0},\ldots,m_{n}\geq 2\) and \(\sum_{i=0}^{n}\frac{1}{2s_{0}(km_{i})}>1\) imply_

\[\frac{1}{2s_{0}(km_{i})}\leq\frac{1}{km_{i}}-\frac{1}{k(m_{i}+1)}\quad\forall i\in\{0,\ldots,n\}\qquad\text{and}\qquad\sum_{i=0}^{n}\frac{1}{km_{i}}>3.\]

Proof.: The first inequality in the statement can be rearranged to

\[km_{i}(m_{i}+1)\leq 2s_{0}(km_{i}).\]

Now, suppose first that \(km_{i}\geq 6\), so that \(s_{0}(km_{i})=\frac{1}{2}km_{i}(km_{i}-1)+\lfloor\sqrt{2km_{i}+2}\rfloor\). In that situation, the above bound becomes

\[km_{i}(m_{i}+1)\leq km_{i}(km_{i}-1)+2\lfloor\sqrt{2km_{i}+2}\rfloor,\]

which can be rearranged to

\[km_{i}((k-1)m_{i}-2)+2\lfloor\sqrt{2km_{i}+2}\rfloor\geq 0,\]

which is clearly satisfied for \(k\geq 2\) and \(m_{i}\geq 2\). This settles all cases in which \(k\geq 3\) or \(m_{i}\geq 3\). The remaining case \(k=m_{i}=2\) can be checked by hand. Finally, the second statement follows upon observing that for all \(i\in\{0,\ldots,n\}\) we have \(2s_{0}(km_{i})\geq 3km_{i}\), as \(km_{i}\geq 4\).

Now we continue the proof of Theorem 1.1 from where we left off at the end of Section 4.3.

**Proposition 6.2**.: _Let \(\mathbf{d}\in\mathbb{Z}_{\neq 0}^{n+1}\). Fix integers \(m_{0},\ldots,m_{n}\geq 2\) and \(k\geq 2\), and put \(\Gamma=\sum_{i=0}^{n}\frac{1}{km_{i}}-1\). There exists a real number \(\eta>0\) such that_

\[\#N^{*}_{\mathbf{d}}(B,\mathbf{1},\mathbf{1})=C_{\mathbf{d}}B^{k\Gamma}+O(B^{k\Gamma-\eta}), \tag{6.1}\]

_where_

\[C_{\mathbf{d}}=\mathfrak{J}_{\mathbf{d}}\sum_{(\mathbf{s},\mathbf{t})\in\mathcal{T}_{\infty}}\varpi(\mathbf{s},\mathbf{t})\sum_{\tilde{\mathbf{v}}\in V_{\infty}(\mathbf{s},\mathbf{t})}\mathfrak{S}_{\mathbf{d},\mathbf{\gamma}}\prod_{i=0}^{n}\gamma_{i}^{-\frac{1}{km_{i}}} \tag{6.2}\]

_with \(\mathfrak{S}_{\mathbf{d},\mathbf{\gamma}}\) and \(\mathfrak{J}_{\mathbf{d}}\) as in (5.4), and \(\mathbf{\gamma}\) defined via (4.9)._

Proof.: Our strategy is to apply Theorem 5.3, with \(\tilde{B}=B^{k}\) and \(\tilde{m}_{i}=km_{i}\) for \(0\leq i\leq n\), to estimate the quantities \(M_{\mathbf{d},\mathbf{\gamma}}(B^{k})\) in (4.10). Note that our choice of the parameters ensures that \(M_{\mathbf{d},\mathbf{\gamma}}(B^{k})=\tilde{M}_{\mathbf{d},\mathbf{\gamma}}(\tilde{B})\), and that the quantity \(\tilde{\Gamma}\) coincides with the \(\Gamma\) defined in the statement of the proposition.
Thus, for every \(\epsilon>0\) and \(0<\delta<\frac{1}{(2n+5)km_{n}(km_{n}+1)}\) we obtain

\[\begin{split}\#N^{*}_{\mathbf{d}}(B,\mathbf{1},\mathbf{1})=&\sum_{(\mathbf{s},\mathbf{t})\in\mathcal{T}_{B}}\varpi(\mathbf{s},\mathbf{t})\sum_{\tilde{\mathbf{v}}\in V_{B}(\mathbf{s},\mathbf{t})}\left[\frac{\mathfrak{S}_{\mathbf{d},\mathbf{\gamma}}\mathfrak{J}_{\mathbf{d}}}{\prod_{i=0}^{n}\gamma_{i}^{\frac{1}{km_{i}}}}B^{k\Gamma}+O\left(E_{1}(\mathbf{\gamma})+\frac{B^{k\Gamma-k\delta}E_{2}(\mathbf{\gamma})}{\prod_{i=0}^{n}\gamma_{i}^{\frac{1}{km_{i}}}}\right)\right.\\ &\quad+\left.O\left(B^{k\Gamma-k\delta\Theta+\epsilon}\prod_{i=0}^{n}\gamma_{i}^{-\frac{1}{km_{i}}+\frac{1}{2s_{0}(km_{i})}}\right)\right],\end{split} \tag{6.3}\]

where \(\Theta=\sum_{i=0}^{n}\frac{1}{2s_{0}(km_{i})}-1\). We observe that by [1, p. 1093], the leading constant is

\[\sum_{(\mathbf{s},\mathbf{t})\in\mathcal{T}_{B}}\varpi(\mathbf{s},\mathbf{t})\sum_{\tilde{\mathbf{v}}\in V_{B}(\mathbf{s},\mathbf{t})}\mathfrak{S}_{\mathbf{d},\mathbf{\gamma}}\mathfrak{J}_{\mathbf{d}}\prod_{i=0}^{n}\gamma_{i}^{-\frac{1}{km_{i}}}=C_{\mathbf{d}}+O(B^{-\eta}), \tag{6.4}\]

where \(C_{\mathbf{d}}\) is the expression given in (6.2) and \(\eta\) is some suitable positive number. It thus remains to bound the error terms. Put

\[F_{1}(B)=B^{-k\Gamma}\sum_{(\mathbf{s},\mathbf{t})\in\mathcal{T}_{B}}\varpi(\mathbf{s},\mathbf{t})\sum_{\tilde{\mathbf{v}}\in V_{B}(\mathbf{s},\mathbf{t})}E_{1}(\mathbf{\gamma}),\qquad F_{2}(B)=\sum_{(\mathbf{s},\mathbf{t})\in\mathcal{T}_{B}}\varpi(\mathbf{s},\mathbf{t})\sum_{\tilde{\mathbf{v}}\in V_{B}(\mathbf{s},\mathbf{t})}\frac{B^{-k\delta}E_{2}(\mathbf{\gamma})}{\prod_{i=0}^{n}\gamma_{i}^{\frac{1}{km_{i}}}}\]

and

\[F_{3}(B)=B^{-k\delta\Theta+\epsilon}\sum_{(\mathbf{s},\mathbf{t})\in\mathcal{T}_{B}}\varpi(\mathbf{s},\mathbf{t})\sum_{\tilde{\mathbf{v}}\in V_{B}(\mathbf{s},\mathbf{t})}\prod_{i=0}^{n}\gamma_{i}^{-\frac{1}{km_{i}}+\frac{1}{2s_{0}(km_{i})}},\]

then the desired result will follow if we can show that \(F_{j}(B)\ll B^{-\eta}\) for some \(\eta>0\). We now begin with the estimation of \(F_{1}(B)\). Upon inserting the definition of \(E_{1}(\mathbf{\gamma})\) from the statement of Theorem 5.3, one can show by a simple computation (see also [1, (3.6)]), using the last statement of Lemma 4.6, that

\[F_{1}(B)\ll B^{k(2n+5)\delta+\epsilon}\sum_{l=0}^{n}B^{-\frac{1}{m_{l}}}\sum_{(\mathbf{s},\mathbf{t})\in\mathcal{T}_{B}}\sum_{\tilde{\mathbf{v}}\in V_{B}(\mathbf{s},\mathbf{t})}\prod_{\begin{subarray}{c}i=0\\ i\neq l\end{subarray}}^{n}\gamma_{i}^{-\frac{1}{km_{i}}}.\]

For \(0\leq i\leq n\), we introduce the notation

\[w_{i}=\tilde{v}_{i,1}^{m_{i}+1}\cdots\tilde{v}_{i,m_{i}-1}^{2m_{i}-1}\quad\text{and}\quad\tau_{i}=s_{i}^{m_{i}}\prod_{r=1}^{m_{i}-1}t_{i,r}^{m_{i}+r},\]

so that \(\gamma_{i}=w_{i}^{k}\tau_{i}^{k}\).
In that notation we can write

\[\begin{aligned}F_{1}(B)&\ll B^{k(2n+5)\delta+\epsilon}\sum_{l=0}^{n}B^{-\frac{1}{m_{l}}}\sum_{(\boldsymbol{s},\boldsymbol{t})\in\mathcal{T}_{B}}\sum_{\tilde{\boldsymbol{v}}\in V_{B}(\boldsymbol{s},\boldsymbol{t})}\prod_{\begin{subarray}{c}i=0\\ i\neq l\end{subarray}}^{n}(w_{i}\tau_{i})^{-\frac{1}{m_{i}}}\\&\ll B^{k(2n+5)\delta+\epsilon}\sum_{l=0}^{n}B^{-\frac{1}{m_{l}}}\sum_{(\boldsymbol{s},\boldsymbol{t})\in\mathcal{T}_{B}}\left(\sum_{\tilde{v}_{l,1},\ldots,\tilde{v}_{l,m_{l}-1}:\,w_{l}\leq\frac{B}{\tau_{l}}}1\right)\prod_{\begin{subarray}{c}i=0\\ i\neq l\end{subarray}}^{n}\tau_{i}^{-\frac{1}{m_{i}}}\sum_{\tilde{v}_{i,1},\ldots,\tilde{v}_{i,m_{i}-1}:\,w_{i}\leq\frac{B}{\tau_{i}}}\left(\frac{1}{w_{i}}\right)^{\frac{1}{m_{i}}}.\end{aligned}\]

Using the estimates

\[\sum_{v_{1}^{m+1}\cdots v_{m-1}^{2m-1}\leq\frac{B}{\tau}}1\ll\sum_{v_{2},\ldots,v_{m-1}=1}^{\infty}\left(\frac{B/\tau}{v_{2}^{m+2}\cdots v_{m-1}^{2m-1}}\right)^{\frac{1}{m+1}}\ll\left(\frac{B}{\tau}\right)^{\frac{1}{m+1}}\]

and

\[\sum_{v_{1}^{m+1}\cdots v_{m-1}^{2m-1}\leq\frac{B}{\tau}}\left(\frac{1}{v_{1}^{m+1}\cdots v_{m-1}^{2m-1}}\right)^{\frac{1}{m}}\ll 1\]

from [1, p. 1090] within the above bound, it follows that

\[\begin{aligned}F_{1}(B)&\ll B^{k(2n+5)\delta+\epsilon}\sum_{l=0}^{n}B^{-\frac{1}{m_{l}}}\sum_{(\boldsymbol{s},\boldsymbol{t})\in\mathcal{T}_{B}}\left(\frac{B}{\tau_{l}}\right)^{\frac{1}{m_{l}+1}}\prod_{\begin{subarray}{c}i=0\\ i\neq l\end{subarray}}^{n}\tau_{i}^{-\frac{1}{m_{i}}}\\&\ll B^{-\frac{1}{m_{n}(m_{n}+1)}+k(2n+5)\delta+\epsilon}\sum_{(\boldsymbol{s},\boldsymbol{t})\in\mathcal{T}_{B}}\prod_{i=0}^{n}\tau_{i}^{-\frac{1}{m_{i}+1}}.\end{aligned}\]

The sum can be bounded by a slight modification of [1, (3.8)]. Write \(\sigma_{i}=s_{i}\prod_{r=1}^{m_{i}-1}t_{i,r}\), then clearly \(\sigma_{i}^{m_{i}}\leq\tau_{i}\), and thus

\[\begin{aligned}\sum_{(\boldsymbol{s},\boldsymbol{t})\in\mathcal{T}_{B}}\prod_{i=0}^{n}\tau_{i}^{-\frac{1}{m_{i}+1}}&\leq\sum_{(\boldsymbol{s},\boldsymbol{t})\in\mathcal{T}_{B}}\prod_{i=0}^{n}\sigma_{i}^{-\frac{m_{i}}{m_{i}+1}}=\prod_{p}\left(1+\sum_{\begin{subarray}{c}(\boldsymbol{s},\boldsymbol{t})\in\mathcal{T}_{B}\\ (\boldsymbol{s},\boldsymbol{t})\neq(\boldsymbol{1},\boldsymbol{1})\end{subarray}}p^{-\sum_{i=0}^{n}\operatorname{val}_{p}(\sigma_{i})\frac{m_{i}}{m_{i}+1}}\right)\\&\leq\prod_{p}\left(1+\prod_{i=0}^{n}p^{-\frac{m_{i}}{m_{i}+1}}\sum_{\begin{subarray}{c}(\boldsymbol{s},\boldsymbol{t})\in\mathcal{T}_{B}\\ \operatorname{val}_{p}(\sigma_{i})\geq 1\end{subarray}}1\right)\leq\prod_{p}\left(1+\prod_{i=0}^{n}p^{-\frac{m_{i}}{m_{i}+1}}(2m_{i}-1)\right)\ll 1,\end{aligned}\]

as \(\sum_{i=0}^{n}\frac{m_{i}}{m_{i}+1}>1\) under the assumption \(m_{0},\ldots,m_{n}\geq 2\). Thus altogether we obtain the bound

\[F_{1}(B)\ll B^{-\frac{1}{m_{n}(m_{n}+1)}+k(2n+5)\delta+\epsilon}, \tag{6.5}\]

which is sufficient for our purposes, provided that \(\delta\) was taken small enough. We now turn to \(F_{2}(B)\).
As before, we rewrite the quantity under consideration by inserting the definition of \(E_{2}(\mathbf{\gamma})\) (see also [1, (3.6)]), whereupon we can use the upper bound

\[\begin{aligned}F_{2}(B)&\ll B^{-k\delta+\epsilon}\sum_{(\boldsymbol{s},\boldsymbol{t})\in\mathcal{T}_{B}}\sum_{\tilde{\boldsymbol{v}}\in V_{B}(\boldsymbol{s},\boldsymbol{t})}\sum_{q=1}^{\infty}q^{1-\Gamma+\epsilon}\prod_{i=0}^{n}\frac{\gcd(\gamma_{i},q)^{\frac{1}{km_{i}}}}{\gamma_{i}^{\frac{1}{km_{i}}}}\\&\leq B^{-k\delta+\epsilon}\sum_{q=1}^{\infty}q^{1-\Gamma+\epsilon}f_{1}(q)f_{2}(q),\end{aligned}\]

with

\[f_{1}(q)=\sum_{(\mathbf{s},\mathbf{t})\in\mathcal{T}_{\infty}}\prod_{i=0}^{n}\left(\frac{\gcd(\tau_{i}^{k},q)}{\tau_{i}^{k}}\right)^{\frac{1}{km_{i}}}\qquad\text{and}\qquad f_{2}(q)=\sum_{\tilde{\mathbf{v}}\in V_{B}(\mathbf{1},\mathbf{1})}\prod_{i=0}^{n}\left(\frac{\gcd(w_{i}^{k},q)}{w_{i}^{k}}\right)^{\frac{1}{km_{i}}}.\]

Since \(\tilde{v}_{i,1},\ldots,\tilde{v}_{i,m_{i}-1}\) are pairwise coprime for \(\tilde{\mathbf{v}}\in V_{B}(\mathbf{1},\mathbf{1})\), we can estimate the second expression via

\[f_{2}(q)\leq\prod_{i=0}^{n}\prod_{r=1}^{m_{i}-1}\sum_{\tilde{v}_{i,r}\leq B^{\frac{1}{m_{i}+r}}}\mu^{2}(\tilde{v}_{i,r})\frac{\gcd(\tilde{v}_{i,r}^{km_{i}+kr},q)^{\frac{1}{km_{i}}}}{\tilde{v}_{i,r}^{(m_{i}+r)/m_{i}}}\ll q^{\epsilon}\]

for all \(\epsilon>0\), where the last bound follows from [1, (3.9)]. For \(f_{1}(q)\) we proceed as in [1, p. 1091]. Let

\[\mathscr{T}=\left\{\overline{\tau}=(\overline{\tau}_{0},\ldots,\overline{\tau}_{n})\in\mathbb{N}^{n+1}:\begin{array}{c}\operatorname{val}_{p}(\overline{\tau}_{i})\in\{0,m_{i},m_{i}+1,\ldots,3m_{i}-1\}\quad\forall p,\forall i\in\{0,\ldots,n\}\\ p\mid\overline{\tau}_{i}\implies p\mid\overline{\tau}_{j}\quad\forall i,j\in\{0,\ldots,n\}\end{array}\right\}.\]

For every \(\overline{\tau}\in\mathscr{T}\) there is a unique pair \((\mathbf{s},\mathbf{t})\in\mathcal{T}_{\infty}\) such that \(\overline{\tau}_{i}=s_{i}^{m_{i}}\prod_{r=1}^{m_{i}-1}t_{i,r}^{m_{i}+r}\) for all \(i\in\{0,\ldots,n\}\). Then

\[f_{1}(q)\leq\sum_{\overline{\tau}\in\mathscr{T}}\prod_{i=0}^{n}\left(\frac{\gcd(\overline{\tau}_{i}^{k},q)}{\overline{\tau}_{i}^{k}}\right)^{\frac{1}{km_{i}}}\leq\prod_{p}\left(1+\prod_{i=0}^{n}\sum_{m_{i}\leq\alpha_{i}\leq 3m_{i}-1}p^{(\min\{k\alpha_{i},\operatorname{val}_{p}(q)\}-k\alpha_{i})/km_{i}}\right)=O(1),\]

as the local contribution to the product is \(1+O(p^{-(n+1)})\) if \(p\nmid q\), and \(O(1)\) if \(p\mid q\). Thus

\[F_{2}(B)\ll B^{-k\delta+\epsilon}\sum_{q=1}^{\infty}q^{1-\Gamma+\epsilon}\ll B^{-k\delta+\epsilon} \tag{6.6}\]

for all \(\epsilon>0\), as \(\sum_{i=0}^{n}\frac{1}{km_{i}}>3\) by Lemma 6.1. Lastly, we turn our attention to \(F_{3}(B)\). By Lemma 6.1 we have

\[\begin{aligned}\sum_{(\mathbf{s},\mathbf{t})\in\mathcal{T}_{B}}\sum_{\tilde{\mathbf{v}}\in V_{B}(\mathbf{s},\mathbf{t})}\prod_{i=0}^{n}\gamma_{i}^{-\frac{1}{km_{i}}+\frac{1}{2s_{0}(km_{i})}}&\leq\sum_{(\mathbf{s},\mathbf{t})\in\mathcal{T}_{B}}\left(\sum_{\tilde{\mathbf{v}}\in V_{B}(\mathbf{s},\mathbf{t})}\prod_{i=0}^{n}w_{i}^{-\frac{1}{m_{i}+1}}\right)\prod_{i=0}^{n}\tau_{i}^{-\frac{1}{m_{i}+1}}\\&\ll\left(\prod_{i=0}^{n}\sum_{\tilde{v}_{i,1}^{m_{i}+1}\cdots\tilde{v}_{i,m_{i}-1}^{2m_{i}-1}\leq B}\left(\tilde{v}_{i,1}^{m_{i}+1}\cdots\tilde{v}_{i,m_{i}-1}^{2m_{i}-1}\right)^{-\frac{1}{m_{i}+1}}\right)\sum_{(\mathbf{s},\mathbf{t})\in\mathcal{T}_{B}}\prod_{i=0}^{n}\tau_{i}^{-\frac{1}{m_{i}+1}}\ll B^{\epsilon}\end{aligned}\]

for all \(\epsilon>0\), where the last estimate follows from [1, (3.8)] and the bound \(\ll\log B\) for the sum over \(v_{i,r}\) in [1, p. 1092].
Thus

\[F_{3}(B)\ll B^{-k\delta\Theta+\epsilon} \tag{6.7}\]

for all \(\epsilon>0\). The desired conclusion follows now upon combining (6.3) with (6.4) as well as the bounds (6.5), (6.6) and (6.7).

Theorem 1.1 is now immediate upon combining Lemma 4.2, (4.2), (4.7) and Proposition 6.2. In particular, the leading constant in (1.2) is

\[C=\begin{cases}\frac{1}{2}\sum_{\boldsymbol{\epsilon}\in\{\pm 1\}^{n+1}}C_{\boldsymbol{\epsilon}\boldsymbol{d}}&\text{if $k$ is odd,}\\ 2^{n}C_{\boldsymbol{d}}&\text{if $k$ is even.}\end{cases} \tag{6.8}\]

**Remark 6.3**.: Note that the condition that \(k\geq 2\) enters in two places. On the one hand, we need \(\sum_{i=0}^{n}\frac{1}{km_{i}}>3\) to control the main term in Theorem 5.3, and on the other hand it plays a role in the estimation of \(F_{3}(B)\), via the application of Lemma 6.1. By implementing a suitable pruning argument in the treatment of the minor arcs in the proof of Theorem 5.3, it seems likely that the last error term in that theorem can be improved, giving rise to a corresponding quantity \(F_{3}\) with better convergence properties. As for the main term, the condition \(\sum_{i=0}^{n}\frac{1}{km_{i}}>3\) can presumably be relaxed by means of the techniques described in [20, Chapter 4]. Consequently, the expectation is that extending Proposition 6.2, and thus Theorem 1.1, to \(k=1\) should be a question of determination rather than any potential structural obstacles.
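The numerical hypotheses above are easy to probe computationally. The following Python sketch — our illustration, not part of the paper — implements the admissible choice of \(s_{0}(m)\) from Section 5, checks the first inequality of Lemma 6.1 for small \(k\) and \(m_{i}\), and brute-forces the counting function \(\tilde{M}_{\mathbf{d},\boldsymbol{\zeta}}(\tilde{B})\) of (5.1) for a tiny diagonal equation; the choice \(\mathbf{d}=(1,1,-2)\), \(\boldsymbol{\zeta}=(1,1,1)\) is an arbitrary illustrative example.

```python
import math
from itertools import product

def s0(m):
    # Admissible choice from Section 5: min(2^(m-1), m(m-1)/2 + floor(sqrt(2m+2))).
    return min(2 ** (m - 1), m * (m - 1) // 2 + math.isqrt(2 * m + 2))

def M_tilde(d, zeta, m_tilde, B_tilde):
    # Brute-force count of (5.1): u_i >= 1, zeta_i * u_i^{m_i} <= B_tilde,
    # and sum_i d_i * zeta_i * u_i^{m_i} = 0.
    ranges = [range(1, int((B_tilde / z) ** (1.0 / m)) + 2) for z, m in zip(zeta, m_tilde)]
    count = 0
    for u in product(*ranges):
        terms = [z * ui ** m for z, ui, m in zip(zeta, u, m_tilde)]
        if all(t <= B_tilde for t in terms) and sum(di * t for di, t in zip(d, terms)) == 0:
            count += 1
    return count

# First inequality of Lemma 6.1: k*m_i*(m_i + 1) <= 2*s0(k*m_i) for k, m_i >= 2.
for k in range(2, 7):
    for m in range(2, 9):
        assert k * m * (m + 1) <= 2 * s0(k * m)

# Count solutions of u0^4 + u1^4 = 2*u2^4 with all terms at most 10^4;
# the diagonal u0 = u1 = u2 alone contributes 10 solutions here.
print(M_tilde((1, 1, -2), (1, 1, 1), (4, 4, 4), 10 ** 4))
```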
2310.07046
Highly efficient visible and near-IR photon pair generation with thin-film lithium niobate
Efficient on-chip entangled photon pair generation at telecom wavelengths is an integral aspect of emerging quantum optical technologies, particularly for quantum communication and computing. However, moving to shorter wavelengths enables the use of more accessible silicon detector technology and opens up applications in imaging and spectroscopy. Here, we present high brightness ($(1.6 \pm 0.3) \times 10^{9}$ pairs/mW/nm) visible-near-IR photon pair generation in a periodically poled lithium niobate nanophotonic waveguide. The degenerate spectrum of the photon pairs is centered at 811 nm with a bandwidth of 117 nm. The measured on-chip source efficiency of $(2.3\pm 0.5) \times 10^{11}$ pairs/mW is on par with source efficiencies at telecom wavelengths and is also orders of magnitude higher than the efficiencies of other visible sources implemented in bulk crystal or diffused waveguide-based technologies. These results represent the shortest wavelength of photon pairs generated in a nanophotonic waveguide reported to date by nearly an octave.
Nathan A. Harper, Emily Y. Hwang, Ryoto Sekine, Luis Ledezma, Christian Perez, Alireza Marandi, Scott K. Cushing
2023-10-10T22:19:01Z
http://arxiv.org/abs/2310.07046v2
# Highly efficient visible and near-IR photon pair generation with thin-film lithium niobate

###### Abstract

Efficient on-chip entangled photon pair generation at telecom wavelengths is an integral aspect of emerging quantum optical technologies, particularly for quantum communication and computing. However, moving to shorter wavelengths enables the use of more accessible silicon detector technology and opens up applications in imaging and spectroscopy. Here, we present high brightness (\((1.6\pm 0.3)\times 10^{9}\) pairs/mW/nm) visible-near-IR photon pair generation in a periodically poled lithium niobate nanophotonic waveguide. The degenerate spectrum of the photon pairs is centered at 811 nm with a bandwidth of 117 nm. The measured on-chip source efficiency of \((2.3\pm 0.5)\times 10^{11}\) pairs/mW is on par with source efficiencies at telecom wavelengths and is also orders of magnitude higher than the efficiencies of other visible sources implemented in bulk crystal or diffused waveguide-based technologies. These results represent the shortest wavelength of photon pairs generated in a nanophotonic waveguide reported to date by nearly an octave.

## 1 Introduction

Spontaneous parametric downconversion (SPDC) has been used for decades to produce quantum entanglement in various photonic degrees of freedom, serving as a workhorse in emerging quantum optical technologies. Compared to most nonlinear processes, SPDC is relatively inefficient, requiring over one million pump photons to produce one pair of entangled photons in even the highest performing crystals. However, recent advances in nanophotonics, particularly in thin-film lithium niobate (TFLN), have enabled significantly more efficient frequency conversion and quantum state generation [1, 2, 3] through sub-\(\mu\)m interaction areas, high nonlinearities, and low material losses [4, 5]. By exploiting this platform, many recent demonstrations of SPDC in TFLN [6, 7, 8, 9] have achieved efficiencies three orders of magnitude greater than that of bulk crystal-based sources [10] and one order of magnitude greater than that of large diffused waveguide-based sources [11]. To date, most TFLN-based photon pair sources are designed for SPDC at telecom wavelengths because of the low losses in optical fibers at 1550 nm [12, 13] and back-compatibility for applications such as quantum communication [14], computing [15], and a globally connected quantum network [16]. Although telecom photons are preferred for quantum information applications, visible and near-infrared photons are generally better suited for imaging and spectroscopy. Experiments at these wavelengths can take advantage of multi-pixel detectors such as electron-multiplying charge coupled devices (EM-CCD) and single photon avalanche detector (SPAD) arrays, enabling the measurements needed for imaging [17, 18, 19] and characterization of high-dimensional entangled states [20, 21]. Furthermore, the electronic transitions of molecules and atoms are accessible at near-IR wavelengths, allowing for fluorescence lifetime measurements [22, 23, 24], compatibility with quantum memories [25], and fundamental studies of few-photon nonlinearities [26, 10, 27]. More generally, near-IR and visible photons can be detected with high quantum efficiency and low dark noise using existing mature silicon technology at room temperature, compared to IR detectors, which require cryogenic cooling [28].
Despite these advantages, all demonstrations of nanophotonic pair production have resided in the telecom region, and the best near-IR and visible photon pair sources are still large-area waveguides [29, 30, 31, 32] and bulk periodically poled crystals [26, 29, 33, 34, 35, 36]. Potential reasons for this discrepancy stem from the difficulty in fabricating visible nonlinear circuits on thin-film lithium niobate due to factors such as the ultra-short poling periods required for quasi-phase matching and losses from material absorption [37, 38, 39] and scattering [40]. In spite of these difficulties, high-performance visible devices in thin-film lithium niobate are becoming increasingly common for classical applications such as electro-optic modulation and second harmonic generation [41, 42, 43, 44, 45]. Here, we extend TFLN-based SPDC sources by nearly an octave in frequency to produce high-brightness photon pairs in the visible and near-IR. The device produces an on-chip efficiency of \((2.3\pm 0.5)\times 10^{11}\) pairs/mW, which corresponds to a per-photon conversion efficiency of more than 1 photon pair converted in every 10,000 pump photons (\((1.1\pm 0.2)\times 10^{-4}\) pairs/photon). This efficiency is nearly two orders of magnitude better than visible-light diffused waveguide SPDC sources [31] and is on par with other TFLN sources in the telecom regime [8]. The SPDC from this device exhibits a broad spectrum centered at 811 nm with a degenerate FWHM bandwidth of 117 nm and an average brightness of \((1.6\pm 0.3)\times 10^{9}\) pairs/mW/nm, a number limited by at least an order of magnitude by the pump laser linewidth (0.8 nm). Consistent with this bandwidth, we measure an ultrashort coherence time of \(\sim\)40 fs for the entangled photons with an indistinguishability of \(100\pm 1\%\). Our results therefore show that, although pumped at wavelengths near what would usually be considered its cutoff range, TFLN can equally be a platform for visible-near-IR entangled photon applications as it is at telecom wavelengths.

Figure 1: (a) Schematic of the periodically poled lithium niobate nanophotonic waveguide. (b) Second harmonic microscopy image of the periodic poling. The poling electrode period in this image is 2 \(\mu\)m. (c) Mode profiles and waveguide geometry of the fundamental quasi-TE modes at the designed pump and SPDC center wavelengths. (d) Refractive indices and corresponding poling periods for a range of SPDC wavelengths.

## 2 Device design and fabrication

The periodically poled lithium niobate waveguides (Figure 1a) were simulated in Lumerical MODE to determine the quasi-phase matching poling period. The guided modes at the design pump wavelength (406 nm) and SPDC center wavelength (812 nm) were simulated using the bulk Sellmeier coefficients of lithium niobate [46] and silicon dioxide [47] with the geometric parameters shown in Figure 1c and a 60° sidewall, which is consistent with the fabrication process. To take advantage of lithium niobate's largest nonlinear tensor element (\(d_{33}\) = 28 pm/V) [48], only the fundamental quasi-transverse electric (TE) modes were considered. An etch depth of 420 nm and a top width of 1.5 \(\mu\)m with a total LN thickness of 600 nm were targeted for ease of optical coupling, fabrication, and \(\geq\) 2 \(\mu\)m poling period while providing high performance.
For these parameters, the effective refractive indices (\(n_{\mathrm{eff,pump}}\) = 2.29, \(n_{\mathrm{eff,SPDC}}\) = 2.09) result in a quasi-phase matching poling period of \(\Lambda\) = \(\lambda_{\mathrm{pump}}/\Delta n_{\mathrm{eff}}\) = 2.03 \(\mu\)m at the target pump wavelength of 406 nm (Figure 1d). The devices were fabricated from a 5% MgO-doped X-cut thin-film lithium niobate on insulator wafer (NanoLN), which consists of 600 nm of lithium niobate bonded to 2 \(\mu\)m of silicon dioxide on a 0.4 mm silicon substrate. To quasi-phase match the SPDC sources, poling electrodes (7 mm long) were first fabricated by performing a metal lift-off through electron beam lithography with bilayer polymethyl methacrylate (PMMA) resist followed by electron beam evaporation of titanium (15 nm) and gold (55 nm). The electrodes were poled with a 490 V and 70 \(\mu\)s square wave, and the poled domain formation was monitored with second harmonic microscopy [49] (Figure 1b). After poling, waveguides were defined through an aligned electron beam lithography step with hydrogen silsesquioxane (HSQ) resist followed by argon inductively coupled plasma reactive ion etching to achieve an etch depth of 420 nm, verified through atomic force microscopy. The chip facets were manually polished to increase coupling efficiency. A broadband oscillator was used to verify the phase matching wavelength through second harmonic generation and was found to match with the computationally predicted second harmonic wavelength within \(\pm\)3 nm. This small discrepancy was likely due to fabrication tolerances, particularly in the etch depth and film thickness.

Figure 2: Schematic for coupling and detection of the photon pairs. I, isolator; ND, variable neutral density filter; HWP, half-wave plate; L, aspheric lens; LPF, long-pass filter; emICCD, electron-multiplying intensified charge-coupled device; BS, beamsplitter; D, single-photon avalanche detector; TT, time-tagger; M, mirror. (a) Optical setup to couple into the TFLN waveguide. (b) Characterization scheme to measure the photon pair spectra. (c) Characterization scheme for coincidence counting. (d) Optical setup for the Michelson interferometer.

## 3 Device characterization

The spectrum, generation rate, and coherence properties of the entangled photon pairs produced from the fabricated device are characterized as shown in Figure 2. In these experiments, the room-temperature periodically poled waveguide is pumped with a free-running laser diode (Coherent OBIS LX 405 nm) to produce entangled pairs (Figure 2a). An antireflection-coated aspheric lens (NA = 0.58, Thorlabs C140TMD-A) couples the free-space pump beam to the fundamental TE mode of the waveguide. The photon pairs produced in the fundamental TE mode are collected off-chip and collimated using a similar aspheric lens (NA = 0.58, Thorlabs C140TMD-B). The spectra of the entangled photon pairs are measured to assess the phase-matching properties and tunability of the device (Figure 3). Pairs collected from the waveguide are transmitted to a grating spectrometer and measured using an electron-multiplying intensified camera (Figure 2b). To tune the SPDC emission, the center wavelength of the pump wavelength is varied from 405-406.4 nm by changing the drive current of the laser diode. Variable neutral-density filters are used to keep the pump power for all the collected spectra at a consistent 10 \(\mu\)W.
In doing so, three distinct phase matching regions are explored (Figure 3a): 1) at long pump wavelengths, the phase matching condition is not satisfied and emission is not observed; 2) from 405.6-405.9 nm, the degenerate wavelengths are phasematched; and 3) at short pump wavelengths, the spectrum splits and nondegenerate emission extending to the cutoff wavelength of the filter is observed. The dip in intensity in Figure 3a around 760 nm is likely due to impurities in the thin film, contamination during the device fabrication process, or ambient absorption. Due to the linewidth of the laser used in the experiment (0.8 nm FWHM), the spectra are considerably broadened compared to the spectra expected from a single-frequency pump laser (Supplemental Figure S3). Nevertheless, experiment and theory reach qualitative agreement by factoring the pump linewidth into the calculations (Figure 3b). For all subsequent experiments, a laser center wavelength of 405.7 nm is used for degenerate phase matching. The resulting spectrum (Figure 3c) is centered at 811.4 \(\pm\) 0.7 nm with a FWHM bandwidth of 117 nm (53 THz) that accounts for 85% of the overall flux. The pair generation efficiency of the device is measured through co Figure 3: (a) Measured and (b) theoretical SPDC spectra as a function of pump wavelength. (c) Lineouts of the measured and theoretical SPDC spectra at a pump wavelength of 405.7 nm, corresponding to the dashed lines in (a) and (b). two SPADs (Figure 2c). In this experiment, the pairs from the chip were split at a 50:50 broadband beamsplitter (Thorlabs BS014) and coupled to multimode optical fibers connected to the detectors. Coincidence detection events between the SPADs were recorded with a time-tagger (Picoquant PicoHarp 300). Figure 4a shows a representative coincidence histogram recorded at 45 nW of pump power. The temporal correlation in this graph (3.4 ns FWHM) is given by the response time of the SPADs and not the entangled photon correlations (see Figure 5 later in the text). The coincidence counts are corrected by background subtraction of the number of counts in a 9.5 ns window at the histogram peak from the number of counts in another 9.5 ns window in a background region far from the peak. Sweeping the laser power with a neutral density filter yields the curves in Figure 4b-c, which are linearly fit to determine the pair generation efficiency. To account for the wavelength dependence of the SPAD quantum efficiency (Supplemental Equation S11), all wavelengths in the spectrum are integrated over to calculate the average detection efficiency for single photons (\(\eta_{1}=0.52\)) as well as the average joint pair detection probability (\(\eta_{12}=0.27\)). Including a factor of 2 due to the probability of splitting pairs at the beamsplitter yields Equation 1 for the measured efficiency of the source: \[E=\frac{m_{1}m_{2}}{m_{c}}\frac{\eta_{12}}{2\eta_{1}^{2}} \tag{1}\] Here \(E\) is the pair generation efficiency, \(m_{1}\) and \(m_{2}\) are the singles rates at the two detectors, and \(m_{c}\) is the rate of coincidences, all in units of counts/mW. Accounting for the 10.2 dB transmission loss of the pump laser into the waveguide, a pair generation efficiency of \((2.3\pm 0.5)\times 10^{11}\) pairs/mW is measured, which is equivalent to a per-pump-photon efficiency of \((1.1\pm 0.2)\times 10^{-4}\) pairs/photon. Over the 117 nm FWHM bandwidth of the spectrum, this efficiency translates to an average brightness of \((1.6\pm 0.3)\times 10^{9}\) pairs/nm/mW. 
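The rate arithmetic of Equation 1 is compact enough to state in code. The following Python sketch is our illustration rather than anything from the paper: the \(\eta_{1}\) and \(\eta_{12}\) values are the spectrum-averaged efficiencies quoted above, but the count rates are hypothetical placeholders (the raw rates are not tabulated here), and the direction of the 10.2 dB pump-loss correction is our reading of the text.

```python
def pair_generation_efficiency(m1, m2, mc, eta1=0.52, eta12=0.27):
    """Equation (1): E = (m1 * m2 / mc) * eta12 / (2 * eta1**2).

    m1, m2 : singles rates at the two detectors (counts/mW)
    mc     : coincidence rate (counts/mW)
    eta1   : spectrum-averaged single-photon detection efficiency
    eta12  : spectrum-averaged joint pair detection probability
    """
    return (m1 * m2 / mc) * eta12 / (2 * eta1 ** 2)

def on_chip_efficiency(measured_E, pump_loss_db=10.2):
    # Refer the efficiency to on-chip pump power: since the pump loses
    # pump_loss_db before the waveguide, pairs per on-chip mW are higher
    # than pairs per incident mW by 10^(pump_loss_db / 10) (our assumption).
    return measured_E * 10 ** (pump_loss_db / 10)

# Hypothetical example rates, for illustration only:
E = pair_generation_efficiency(m1=4.0e9, m2=4.2e9, mc=6.7e8)
print(f"{on_chip_efficiency(E):.2e} pairs/mW referred to on-chip pump power")
```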
The uncertainties here and throughout this work are reported as one standard deviation, derived from the standard error in the fits of Figure 4b-c and the uncertainty in the detector quantum efficiency. The ratio of singles counts to coincidence counts suggests that the transmission of the SPDC from the waveguide to each of the two detectors is 8.4 dB and 8.0 dB, respectively, which includes losses out of the waveguide and of the free-space optics. Our theoretical efficiency (Supplemental), including the FWHM of the pump laser, is \(2.66\times 10^{11}\) pairs/mW, in close agreement with our experimental results.

Figure 4: (a) Coincidence histogram at an input pump power of 45 nW including accidentals. Measured (b) coincidence and (c) singles counts while sweeping the input power. The fitted slopes (solid lines) produce the pair generation efficiency of our device.

Finally, the two-photon interference is measured (Figure 2d) to demonstrate the non-classical behaviour of the produced photon pairs. Figure 5 shows the measured two-photon interferogram (Figure 5a) obtained from the device without subtracting accidentals, as well as the one-photon interferogram (Figure 5c) for comparison. Due to the aforementioned temporal resolution of the SPADs used here, the four unique paths through the interferometer are indistinguishable and combine to yield the interference pattern. The important features of the interferogram are as follows: 1) Near the zero path length difference, photons are delayed within the coherence length of the source and exhibit both one- and two-photon interference. A visibility of \(100\pm 1\%\) is measured within the coherence length, with an uncertainty derived assuming Poissonian statistics. This near-perfect visibility indicates good mode overlap in the interferometer and indistinguishability of the photons within the pair. 2) Far from the coherence length of the source (delays greater than 20 fs), interference between two photons taking different paths disappears, which explains why the single-photon interference disappears in Figure 5c and 5e. Notably, interference between pairs of photons that travel together through the interferometer persists with a fringe period at half the pair wavelength and is shown in more detail in Figure 5d. This feature suggests quantum interference due to the energy-time entanglement of the pairs, and would not be observed if the light was generated from a coherent or a thermal source with a similar spectrum [50]. A fringe visibility of \(43\pm 3\%\) is observed in this region far from the coherence length, which is close to the theoretical maximum of 50% for this experiment due to the temporal resolution of the detectors. The uncertainty in visibility is given from the standard error in the fit of Figure 5d. The qualitative agreement between the measured and theoretical two-photon interferogram (Figure 5a-b) suggests that SPDC and genuine energy-time entanglement are being produced.

## 4 Discussion

Compared to the state of the art for visible photon pair sources, the device presented in this work exhibits substantially improved brightness and efficiency due to the small effective area of the waveguide. The device's performance against reported literature values is plotted in Figure 6 using efficiency and wavelength as the figures of merit. Telecom and MIR TFLN devices are also included, demonstrating comparable or improved performance.
Figure 5: The (a) measured and (b) simulated (Supplemental) two photon Michelson interferogram for the device. (c) Corresponding singles counts out of the interferometer, demonstrating single photon interference within the coherence length of the source. The (d) coincidence counts and (e) singles counts far from the coherence length of the source. Note that all measured coincidence counts ((a) and (d)) do not have accidentals subtracted.

At short wavelengths, the phase-matching bandwidth decreases due to group velocity dispersion in lithium niobate, but the efficiency remains high because the downconversion spectral power density scales with the inverse fifth power of the SPDC wavelength (\(\lambda_{s}^{-5}\)) [29]. In addition to the benefits of higher-energy photon pairs, the strategy of decreasing the SPDC wavelength to access higher efficiencies could enable single-photon nonlinearities when integrated with a resonator [51]. The high efficiency of this device, which is limited here by the linewidth of the pump laser, has significant implications for practical uses of entangled photons, including reducing integration times, allowing the use of low-power laser diodes for pair generation, and reducing fluorescence and stray light. Even with the losses present in this setup, high signal to noise coincidence measurements are acquired with nanowatts of laser power, highlighting the potential of this device. The efficiency and brightness of the device can be further improved by narrowing the bandwidth of the pump laser, as discussed further in the Supplemental. A single-frequency pump is estimated to shrink the phase-matching bandwidth from 117 nm to 35 nm, increasing the brightness by a comparable factor. Furthermore, the SPDC process is most efficient near degeneracy because the group velocity of the signal and idler are equal to first order, so the efficiency could potentially increase by a factor of 5. However, one benefit of using a multifrequency pump is that the device sensitivity to the laser wavelength is reduced, yielding a higher stability average response with greater bandwidth.

Figure 6: Comparison of relevant literature SPDC sources to this work. Horizontal error bars represent the reported bandwidth of the sources. Data for the efficiency, center wavelength, and bandwidth are taken from Refs. [10, 26, 29, 33, 35, 36, 52, 53] for bulk crystal sources, Refs. [11, 29, 30, 31, 32, 54, 55, 56, 57, 58, 59] for large waveguide sources (including micromachined and diffused waveguides), and Refs. [6, 7, 8, 9] for TFLN sources. The yellow shaded region represents the typical telecommunication wavelength window.

## 5 Conclusion

Efficient photon pair generation has been demonstrated with an integrated thin-film lithium niobate waveguide at visible and NIR wavelengths (720-900 nm). An on-chip SPDC efficiency of \((2.3\pm 0.5)\times 10^{11}\) pairs/mW, which is on-par with reported literature at telecom wavelengths, has been produced near the usually associated cut-off wavelengths for the pump (406 nm) in TFLN. The photon pair spectra has an average brightness of \((1.6\pm 0.3)\times 10^{9}\) pairs/mW/nm, centered at 811 nm with a 117 nm bandwidth. To date, these results are the shortest wavelength photon pairs generated in a thin film platform by nearly an octave. The work opens up opportunities to exploit the quantum advantage of integrated entangled photon circuits beyond telecom to imaging and spectroscopy applications.

**Funding.** Air Force Office of Scientific Research (FA9550-20-1-0040); Army Research Office (W911NF-23-1-0048); National Science Foundation (EECS 1846273, DGE-1745301); U.S. Department of Energy (DE-SC0020151).

**Acknowledgments.** The authors gratefully acknowledge the critical support and infrastructure provided for this work by The Kavli Nanoscience Institute (KNI) and the Beckman Biological Imaging Facility at Caltech. This work was additionally supported by the KNI-Wheatley Scholar in Nanoscience and the Rothenberg Innovation Initiative. N.H. was supported by the Department of Defense (DoD) through the National Defense Science and Engineering Graduate (NDSEG) Fellowship Program. E.H. was supported by the National Science Foundation Graduate Research Fellowship Program under Grant no. DGE-1745301. Any opinion, findings, and conclusions or recommendations expressed in this material are those of the authors(s) and do not necessarily reflect the views of the National Science Foundation.

## Disclosures.

The authors declare no conflicts of interest.

## Data Availability Statement.

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

## Supplemental document.

See Supplement 1 for supporting content.
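As a small numerical cross-check of the design relation quoted in Section 2, \(\Lambda=\lambda_{\mathrm{pump}}/\Delta n_{\mathrm{eff}}\), the following sketch — ours, not from the paper — reproduces the 2.03 \(\mu\)m poling period from the simulated effective indices (\(n_{\mathrm{eff,pump}}=2.29\), \(n_{\mathrm{eff,SPDC}}=2.09\)).

```python
def qpm_period_um(lambda_pump_nm, n_eff_pump, n_eff_spdc):
    # First-order quasi-phase matching for degenerate SPDC:
    # Lambda = lambda_pump / (n_eff_pump - n_eff_spdc), converted to micrometres.
    return lambda_pump_nm / (n_eff_pump - n_eff_spdc) / 1000.0

print(qpm_period_um(406, 2.29, 2.09))  # ~2.03 um, matching the value in Section 2
```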
2302.13222
Speech Corpora Divergence Based Unsupervised Data Selection for ASR
Selecting training data that matches the application scenario is important for automatic speech recognition (ASR) training, but it is difficult to measure how well a training corpus matches the target. This study proposes an unsupervised target-aware data selection method based on the speech corpora divergence (SCD), which can measure the similarity between two speech corpora. We first use the self-supervised Hubert model to discretize the speech corpora into label sequences and calculate the N-gram probability distributions. We then calculate the Kullback-Leibler divergence between the N-grams as the SCD. Finally, we can choose the subset with the minimum SCD to the target corpus for annotation and training. Compared to previous data selection methods, the SCD data selection method can focus on more acoustic details and guarantee the diversity of the selected set. We evaluate our method on different accents from Common Voice. Experiments show that the proposed SCD data selection can realize a 14.8% relative improvement over random selection, comparable or even superior to the result of supervised selection.
Changfeng Gao, Gaofeng Cheng, Pengyuan Zhang, Yonghong Yan
2023-02-26T03:26:26Z
http://arxiv.org/abs/2302.13222v1
# Speech Corpora Divergence Based Unsupervised Data Selection for ASR

###### Abstract

Selecting training data that matches the application scenario is important for automatic speech recognition (ASR) training, but it is difficult to measure how well a training corpus matches the target. This study proposes an unsupervised target-aware data selection method based on the speech corpora divergence (SCD), which can measure the similarity between two speech corpora. We first use the self-supervised Hubert model to discretize the speech corpora into label sequences and calculate the N-gram probability distributions. We then calculate the Kullback-Leibler divergence between the N-grams as the SCD. Finally, we can choose the subset with the minimum SCD to the target corpus for annotation and training. Compared to previous data selection methods, the SCD data selection method can focus on more acoustic details and guarantee the diversity of the selected set. We evaluate our method on different accents from Common Voice. Experiments show that the proposed SCD data selection can realize a 14.8% relative improvement over random selection, comparable or even superior to the result of supervised selection.

Automatic speech recognition, data selection, self-supervised learning.

## I Introduction

For automatic speech recognition (ASR), speech annotation is expensive and time-consuming. Enormous unlabeled corpora can easily be collected from websites, broadcasts, and podcasts, but only a minority of them can be annotated manually. Moreover, as matching the training data to the application is crucial for an ASR system [1, 2], it is important to select a suitable subset for annotation and training according to target scenarios such as accented [3], far-field [4], and children's [5] ASR. Although we can manually select a matching training corpus by accent, speaker, or channel for annotation, it is difficult to describe corpus similarity mathematically and select the training speech automatically. In general, most works [6, 7, 8, 9, 10, 11, 12] hold that a well-designed training set should have a distribution similar to that of the target corpus, but it is difficult to measure the distribution of a speech corpus. To solve this problem, the most common approach is to measure the speech distribution via the transcription [9, 10, 11, 12]. [9] uses the frequency of words, characters, or phonemes to measure the transcription distribution and then samples data uniformly. To sample unlabeled speech, [11] uses a baseline ASR model to decode the N-best hypotheses and then calculates the term frequency-inverse document frequency (tf-idf) for data selection. As the text-level distribution is too coarse to measure the acoustic differences of the speech, [12] counts the distribution over context-dependent HMM states to capture more acoustic detail. However, the HMM states still largely depend on the speech transcription and the lexicon, so they still cannot capture differences in sex, accent, or other acoustic characteristics. Besides selection by distribution, contrastive sampling [13, 14, 15, 16, 17, 18] is another recently popular data selection method. Most such methods use a universal and a target-domain ASR model to score the utterances by the confidence score or the hypothesis perplexity. They then sample, one by one, the utterances with the largest gap between the target and the universal scores.
Using ASR models from different domains can evaluate the speech well in terms of acoustic characteristics; however, this approach also tends to choose similar utterances and reduces the diversity of the selected set. In this study, we design a novel target-aware data selection method by proposing the speech corpora divergence (SCD). We use the self-supervised learning (SSL) model Hubert [19] to discretize the speech and then measure the speech distribution in the discrete space. We count the N-grams of the discretized corpus and use the Kullback-Leibler divergence (KLD) to calculate the SCD. We can then select a subset from the universal unlabeled corpus by minimizing the SCD between the selected corpus and the target corpus, and further use greedy search to reduce the algorithmic complexity. Compared with previous works, the Hubert discrete labels can contain both acoustic and semantic information, so they can represent the speech distribution better than text-related labels such as words, characters, and HMM states. And as the SCD selection method considers the relationship between the whole selected set and the target set, it can sample more diverse speech than contrastive sampling. We evaluate the SCD data selection method on differently accented speech from Common Voice. Experiments show that our proposed method can realize a 14.8% relative improvement over random selection and can reach or even exceed the result of human selection with accent labels.

## II Related Work

Hubert [19] is one of the most successful SSL models and has been applied to different speech tasks. The Hubert model uses a CNN feature extractor to convert the waveform into hidden representations. It then masks a part of the representations and uses a transformer encoder to predict the discrete labels of the masked part. The discrete labels are initialized by K-means clustering on MFCC features and are then refined by the Hubert model iteratively. Several works find that the labels discovered by Hubert can be used in different tasks. For example, [20, 21] use these labels to pre-train the ASR decoder and find that the Hubert discrete labels can help the decoder learn how to generate text sequences. [22, 23, 24] find that they can resynthesize the speech by combining the discrete labels with pitch and speaker information. Other works apply these discrete labels to emotion conversion [25] and NLP tasks [26]. These studies indicate that, compared to traditional labels like words or phones, the Hubert discrete labels contain richer acoustic and semantic information. So in this paper, we will use the Hubert discrete labels to represent the continuous speech and then design a data selection method according to the distribution of the Hubert labels.

## III Method

### _Speech Corpora Divergence_

In this subsection, we give the definition of the SCD, which measures the similarity between two speech corpora. A speech corpus \(X\) can be represented by a stochastic process with probability distribution \(P(X)=P(X_{1}X_{2}\dots X_{t}\dots)\), and each speech utterance \(x_{i}=x_{i1}x_{i2}\dots x_{it}\dots\) is a sampling result, where \(i\) and \(t\) denote the utterance index and the time step. Then we can use the KLD to measure the difference between the two distributions:

\[\mathrm{SCD}(X,Y)=D_{\mathrm{KL}}(X||Y) \tag{1}\]

However, it is not easy to calculate the SCD by eq (1), because corpora \(X\) and \(Y\) are in continuous space.
Inspired by recent SSL work [27, 28, 29, 30, 19], we use the hidden units discovery system in Hubert to discretize the speech corpora. For each corpus, every utterance \(x_{i}\) will be converted into a label sequence \(\widetilde{x}_{i1},\widetilde{x}_{i2}\dots\widetilde{x}_{in}\dots\), with \(\widetilde{x}_{in}\in\mathcal{L}\), \(\mathcal{L}=\{1,2,\dots,K\}\), where \(K\) is the number of clusters of the hidden units discovery system. After obtaining the discrete \(\widetilde{X}\), we can use an N-gram model \(P_{\widetilde{X}}(L)\) to represent \(P(X)\):

\[P_{\widetilde{X}}(L=l_{i})=\frac{\mathrm{cnt}_{\widetilde{X}}(l_{i})}{\sum_{l_{j}}\mathrm{cnt}_{\widetilde{X}}(l_{j})} \tag{2}\]

where \(L\in\mathcal{L}^{N}\), \(\mathcal{L}^{N}\) is the N-th Cartesian power of \(\mathcal{L}\), and \(\mathrm{cnt}_{\widetilde{X}}\) stands for the count operation in corpus \(\widetilde{X}\). Finally, the SCD can be calculated as:

\[\mathrm{SCD}(X,Y)=\sum_{l_{i}\in\mathcal{L}^{N}}P_{\widetilde{X}}(L=l_{i})\log\frac{P_{\widetilde{X}}(L=l_{i})}{P_{\widetilde{Y}}(L=l_{i})} \tag{3}\]

We summarize the calculation of the SCD in Fig. 1.

Fig. 1: Calculation of the SCD. We use the Hubert model to convert the speech corpora into label corpora. Then we use the N-gram to measure the distribution. The SCD can be defined by the KLD between the N-grams.

### _Target-aware data selection with SCD_

With the help of the SCD, we can re-define target-aware data selection as a combinatorial optimization problem. Given the unlabeled universal speech corpus \(U\) and the query speech set \(Q\), sample a subset \(S\) from \(U\) with size \(C\) that minimizes \(\mathrm{SCD}(Q,S)\):

\[S^{*}=\operatorname*{argmin}_{S}\mathrm{SCD}(Q,S),where\quad|S|=C \tag{4}\]

In practice, the available query corpus \(Q\) is always small and cannot fully represent the target scenario. So directly using \(P_{\widetilde{Q}}(L)\) to calculate the SCD could make \(S\) overfit \(Q\). To increase the generalization ability of the selected set \(S\), we use an interpolation between \(U\) and \(Q\) as follows:

\[P_{\widetilde{Q^{\prime}}}(L)=\lambda P_{\widetilde{Q}}(L)+(1-\lambda)P_{\widetilde{U}}(L) \tag{5}\]

\[S^{*}=\operatorname*{argmin}_{S}\mathrm{SCD}(Q^{\prime},S),where\quad|S|=C \tag{6}\]

However, finding the global optimum solution \(S^{*}\) is an NP-hard problem, and the solution-space size is \(\binom{|U|}{C}\). So we use a greedy-search method to find a local optimum solution and reduce the algorithm complexity. Details are shown in Algorithm 1. As each utterance is visited only once, the complexity is \(O(|U|K^{N})\), where \(O(K^{N})\) is the SCD complexity and \(O(|U|)\) is the search complexity. When \(N\) is large, we can prune rare grams to further reduce the complexity.

## IV Experiment

### _Dataset_

We conduct the experiments on the English Common Voice (CMV) v5.1 [31] and take the accent as the basis of data selection. In CMV, volunteers worldwide contribute audio by reading sentences provided by the website, and some of the volunteers also provide their accents. We select the training subset from CMV and evaluate on the Indian (ind) and Australian (aus) accents, which account for only 5% and 4% of the whole CMV. For evaluation, besides the official evaluation set, we split a _dev_ set (5 hours) and a _test_ set (5 hours) for the ind-accent and aus-accent. These split parts will be excluded during data selection.
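Before turning to the selection baselines, here is a minimal Python sketch (ours, not the authors' code) of the SCD machinery from Section III: the N-gram distribution of Eq. (2), the KLD of Eq. (3), and the interpolation of Eq. (5). The additive smoothing and the restriction of the sum to grams observed in \(Q\) are our assumptions, since the paper does not state how zero counts or the full \(K^{N}\)-sized sum are handled.

```python
import math
from collections import Counter

def ngram_counts(label_seqs, n):
    # Raw N-gram counts over a corpus of Hubert label sequences (Eq. 2, numerator).
    counts = Counter()
    for seq in label_seqs:
        counts.update(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))
    return counts

def prob(gram, counts, n, vocab_size, alpha=1.0):
    # Smoothed version of Eq. (2); additive smoothing is our assumption.
    total = sum(counts.values()) + alpha * vocab_size ** n
    return (counts.get(gram, 0) + alpha) / total

def scd(counts_q, counts_s, n, vocab_size):
    # Eq. (3), restricted to grams observed in Q (an approximation of the
    # full sum over L^N, which is intractable for large K and N).
    return sum(
        prob(g, counts_q, n, vocab_size)
        * math.log(prob(g, counts_q, n, vocab_size) / prob(g, counts_s, n, vocab_size))
        for g in counts_q
    )

def interp_prob(gram, counts_q, counts_u, lam, n, vocab_size):
    # Eq. (5): P_{Q'}(L) = lambda * P_Q(L) + (1 - lambda) * P_U(L).
    return lam * prob(gram, counts_q, n, vocab_size) + (1 - lam) * prob(gram, counts_u, n, vocab_size)

# Tiny usage example with toy label sequences and a 500-cluster vocabulary:
q = ngram_counts([[1, 2, 3, 1, 2]], n=1)
s = ngram_counts([[1, 2, 2, 4]], n=1)
print(scd(q, s, n=1, vocab_size=500))
```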
### _Data selection_

We use different data selection methods, including random selection, supervised accent-label selection, and the proposed unsupervised SCD selection, to sample the training set. For random selection, we shuffle the corpus and select the first utterances. For accent-label selection, we only sample the speech with an ind-label or aus-label. These two methods can be regarded as the lower bound and the upper bound of the experiments. For the SCD selection, we use the open-source self-supervised Hubert-Large model1 [30] to discretize the speech into 500 cluster labels. We count the distributions of the discrete corpus as a uni-gram (\(N\)=1). During the greedy search, we use the _dev-ind_ or _dev-aus_ set as the \(Q\), and \(\lambda\) is adjusted from 0 to 1. Finally, we fine-tune a Hubert as the downstream ASR model with 1 hour, 10 hours, and 100 hours of differently selected training speech2. Footnote 1: [https://dl.fbaipublicfiles.com/hubert/hubert_large_160k_pt](https://dl.fbaipublicfiles.com/hubert/hubert_large_160k_pt) Footnote 2: We cannot sample the 100 hours set by the accent-label selection because the audios with the ind-label or aus-label are less than 100 hours.

### _Main results_

Table I lists the main results of the experiments. According to the table, our proposed SCD selection performs better than random selection on both the ind-accent and the aus-accent. Furthermore, compared to the supervised accent-label selection, the SCD selection realizes better results on the ind-accent and comparable results on the aus-accent. We also find that the generalization of the SCD selection is better than that of the accent-label selection. The accent-label selection benefits the recognition performance of the target accent but hurts other accents at the same time. In contrast, the SCD selection improves the target accent result with little influence on others. For example, in the 10-hour experiments, the ind-label selection brings a 5.7% relative improvement on _test-ind_ but also relatively increases the WER by 11.6% and 44.5% on _test-cmv_ and _test-aus_. In contrast, the ind-SCD selection reduces the WER of _test-ind_ by 12.7%; meanwhile, the WER increase on _test-aus_ is only 9.3% and the _test-cmv_ result is even better. By comparing the results with different training set sizes, we find that the SCD selection becomes more powerful as the sample size grows. When the selected training set increases from 1 to 100 hours, the relative improvement of SCD selection over random selection increases from 5.8% to 14.8% for the ind-accent, and from 4.0% to 7.0% for the aus-accent. Because the SCD selection is based on statistics, a larger sample size provides a more accurate probability distribution.

### _Analysis_

#### IV-D1 Influence of the discretization and N-gram

We analyze the influence of the discretization and the N-gram and show the results in Table II. For the discretization, we also use K-means clustering on MFCC features (the original target labels of Hubert) and the Hubert-Base model to discretize the speech. We find that all of them can exceed random selection. This means that the SCD selection can still be useful even without a self-supervised model. Although similar performances are shown on the _dev-ind_, the Hubert-Large discretization gets the best results on the _test-ind_. This indicates that more informative discrete labels bring better generalization ability.
For the N-gram, we also use a larger \(N\) to measure the distribution of the discrete label corpus and find that when \(N\) becomes larger, the _dev-ind_ WER always decreases but the _test-ind_ result can become worse. This shows that the selected training set overfits the _dev-ind_. We believe the reason is that _dev-ind_ only contains 5 hours of speech, which is insufficient for a higher-order statistical model.

#### IV-D2 Comparison with other methods

We also use the transcription distribution and contrastive sampling for data selection, as in [9] and [18]; the results are also shown in Table II. We find that selecting data by word or character distribution is useless and even harmful in this task, because the transcription cannot reflect the differences between accents. Contrastive sampling achieves similar performance on the 10-hour _ind-test_ set but much worse results on the _cmv-test_ set and the 1-hour training task, because it tends to select similar utterances, especially when the selected size is small.

#### IV-D3 Influence of the interpolation factor \(\lambda\)

We evaluate the influence of the interpolation factor \(\lambda\), which is used to prevent the selected set from overfitting the query corpus. The experiments are based on the ind-accent with 1- or 10-hour training sets, and the results are shown in Fig. 2. On the _dev-ind_, the WER continuously decreases as \(\lambda\) grows, which means that the selected subset fits the _dev-ind_ better. However, for the _test-ind_, the WER first decreases and then rises again as \(\lambda\) approaches 1.0. The phenomenon is more evident when the selected training set is small. This means that without interpolation with the universal corpus, the generalization ability of the SCD selection is hurt.

#### IV-D4 Selected training set construction

We draw the composition of three 10-hour training sets selected by random selection, ind-SCD selection, and aus-SCD selection in Fig. 3. The SCD-selected sets contain more target-accent speech than the random selection. For example, under random selection, the ind-accented and aus-accented speech makes up only 7.5%. For the ind-SCD and aus-SCD selection, the proportion of ind-accented and aus-accented speech increases to 48% and 21%, respectively. It should be noted that during SCD selection, no accent label or transcription is used. Fig. 3 also shows the relationship between different accents. The ind-SCD selection chooses more ind-accented speech and less speech from other accents. However, the aus-SCD also samples more speech with England (eng) and New Zealand (nzl) labels besides the aus-accented speech. As we know, the aus-accent is more closely related to the eng-accent and the nzl-accent than to others. This indicates that the proposed SCD is consistent with human cognition.

## V Conclusion

This study proposes the SCD to measure speech corpora similarity via the Hubert discrete label distribution and then selects the training speech by the SCD. Compared to previous works, the SCD selection method considers both acoustic and semantic information in the speech corpus and also guarantees the diversity of the selected speech. Experiments on differently accented speech show that, with the same training set size, the proposed SCD selection realizes up to 14.8% relative improvement over random selection and achieves comparable or even better performance than supervised selection with accent labels.
As the SCD data selection method is independent of the transcription, we will apply it to other audio tasks that need to sample the best training subset from a large-scale corpus.
2308.06112
Lip2Vec: Efficient and Robust Visual Speech Recognition via Latent-to-Latent Visual to Audio Representation Mapping
Visual Speech Recognition (VSR) differs from the common perception tasks as it requires deeper reasoning over the video sequence, even by human experts. Despite the recent advances in VSR, current approaches rely on labeled data to fully train or finetune their models predicting the target speech. This hinders their ability to generalize well beyond the training set and leads to performance degeneration under out-of-distribution challenging scenarios. Unlike previous works that involve auxiliary losses or complex training procedures and architectures, we propose a simple approach, named Lip2Vec that is based on learning a prior model. Given a robust visual speech encoder, this network maps the encoded latent representations of the lip sequence to their corresponding latents from the audio pair, which are sufficiently invariant for effective text decoding. The generated audio representation is then decoded to text using an off-the-shelf Audio Speech Recognition (ASR) model. The proposed model compares favorably with fully-supervised learning methods on the LRS3 dataset achieving 26 WER. Unlike SoTA approaches, our model keeps a reasonable performance on the VoxCeleb test set. We believe that reprogramming the VSR as an ASR task narrows the performance gap between the two and paves the way for more flexible formulations of lip reading.
Yasser Abdelaziz Dahou Djilali, Sanath Narayan, Haithem Boussaid, Ebtessam Almazrouei, Merouane Debbah
2023-08-11T12:59:02Z
http://arxiv.org/abs/2308.06112v1
Lip2Vec: Efficient and Robust Visual Speech Recognition via Latent-to-Latent Visual to Audio Representation Mapping ###### Abstract Visual Speech Recognition (VSR) differs from the common perception tasks as it requires deeper reasoning over the video sequence, even by human experts. Despite the recent advances in VSR, current approaches rely on labeled data to fully train or finetune their models predicting the target speech. This hinders their ability to generalize well beyond the training set and leads to performance degeneration under out-of-distribution challenging scenarios. Unlike previous works that involve auxiliary losses or complex training procedures and architectures, we propose a simple approach, named Lip2Vec, that is based on learning a prior model. Given a robust visual speech encoder, this network maps the encoded latent representations of the lip sequence to their corresponding latents from the audio pair, which are sufficiently invariant for effective text decoding. The generated audio representation is then decoded to text using an off-the-shelf Audio Speech Recognition (ASR) model. The proposed model compares favorably with fully-supervised learning methods on the LRS3 dataset achieving 26 WER. Unlike SoTA approaches, our model keeps a reasonable performance on the VoxCeleb test set. We believe that reprogramming the VSR as an ASR task narrows the performance gap between the two and paves the way for more flexible formulations of lip reading.

## 1 Introduction The process of inferring visual cues from a speaker's facial expressions and lip movements to interpret speech in a silent setting is referred to as lip-reading or visual speech recognition (VSR). VSR is mostly useful in environments where the speech is unclear or difficult to hear due to some confounding factors [8, 7]. Hearing and speech-impaired individuals also greatly benefit from VSR [60]. Despite the small variations around the mouth area, the space of spoken words can be large due to the phoneme composition mechanism. This makes the task highly ambiguous, as several phonemes incur similar visual characteristics. Moreover, VSR needs to be robust to variations w.r.t. multiple speakers, head pose movements, non-verbal facial expressions and imaging conditions. Furthermore, lip-reading requires the integration of visual features and contextual information (_i.e_., topic, key words search, environment and place, _etc_.) [56, 37, 6]. Over the last few years, computational methods for VSR have seen a surge alongside recently proposed datasets, and can be grouped into (_i_) word-level prediction, which classifies a silent video segment into a pre-defined vocabulary of words; and (_ii_) continuous visual speech recognition, which predicts sentences for varying-length video sequences. Most existing VSR approaches employ a common pipeline, where lip sequences are spatially encoded using a convolution-based backbone and passed to a contextual encoder (_i.e_., transformer [62] or conformer [21]) to model temporal dependencies. Finally, an auto-regressive transformer decoder cross-attends to these representations for predicting the text. Previous works focused on enhancing the video representations for better decoding, while early approaches pretrained the backbone on the word-level LRW dataset [14] for better convergence on continuous VSR [1, 32]. In contrast, [34, 3] exploit audio information as extra supervision for an auxiliary task.
Recently, cross-modal self-supervised pretraining has been a dominant paradigm, enabling smoother supervised finetuning afterwards [53, 54, 22]. Notably, the audio latent space exhibits local smoothness between an input and its representation, is temporally coherent over a sequence of observations, has simple dependencies among its factors, and is sparsely activated for a specific input, leading to robust and performant models [5, 4, 46, 48]. The lip sequence, in contrast, is more ambiguous, with complex dependencies across the sequence, as the movements are only a partial observation of a larger system that includes the tongue and other facial muscles [20]. This highlights a fundamental concern about supervised learning on lip-reading data: it is likely to result in local generalization while lacking robustness on out-of-distribution data. In this work, we study these questions, uncovering key representational analogies between audio and lip sequences and the ways in which these analogies can act as a robust support for downstream task transfer, allowing for reprogramming the VSR using off-the-shelf ASR models. Specifically, our contributions are: * We propose the Lip2Vec framework that simulates VSR as an ASR task by learning a prior network that maps lip sequence features to audio-like representations, which can then be decoded to text using an ASR model. * Through extensive evaluation, we show that learning the prior network can be exploited for decoding text. Furthermore, it performs on par with fully supervised methods on the LRS3 [2] test set and generalizes better on the VoxCeleb2-en [13] test set. * Our approach addresses the generalization and robustness challenges encountered by VSR models. The design explicitly bridges the gap between the VSR and ASR performances, which is proportional to the quality of the learned prior network. * Our approach benefits from CTC-only decoding of ASR models and is 10\(\times\) faster compared to standard VSR approaches, which decode text auto-regressively.

## 2 Related Works Here, we briefly discuss the works related to the task of visual speech recognition. ### Visual Speech Recognition Sentence-level VSR, also referred to as continuous visual speech recognition, is challenging due to the unconstrained, large corpus and the complex dependencies across the sequence length with regard to the text target. Whilst we briefly overview the recent sentence-level VSR efforts, we refer to [51, 63, 18] for extensive reviews. Learning from scratch on VSR datasets [2, 1] raises serious optimization issues. This difficulty emerges as the decoder cross-attention is under-optimized in early training, resulting in noisy contextual information for the queries. Several hypotheses have been proposed to account for this. The work of [33] proposed a curriculum learning approach, where shorter sequences are initially used for training, followed by progressively adding longer ones. Differently, VTP [43] proposed a sub-word learning scheme using frame-word boundaries to crop out training samples for better convergence. These training strategies are computationally demanding and hard to scale to larger datasets. The recent works of [34, 3] proposed to exploit the audio latent representations as part of an auxiliary task, where the network is optimized to predict pretrained ASR representations along with the target text, making the optimization more stable as it provides extra supervision.
Intuitively, if the transformer encoder is able to match the audio feature statistics, it has to adjust the attention weights for better decoding. Another line of research leverages pretraining on larger datasets in a self-supervised way (SSL), then finetuning on labeled VSR data using video-text pairs [53, 54, 22, 64, 32]. AV-HuBERT [53] fuses the masked audio-visual representations to predict the cluster assignments created from the audio features, thus distilling knowledge from the audio stream features to model visual inputs. VATLM [64] attempts to unify both modalities using a one-tower design, where a single network is optimized to construct a common representation space for video, audio and text. This is achieved by setting a unified tokenizer for all modalities, and then performing the masked prediction task over the unified tokens. The works of [52, 35] designed cross-modal self-supervised learning frameworks by adopting contrastive learning [24] to learn discriminative visual representations that appear to improve VSR performance and generalization. Recently, RAVen [22] designed an asymmetric SSL framework to induce a one-way knowledge distillation, where the audio network predicts both audio and video representations, whereas the visual network is restricted to predict the audio features only. This forces the audio network to serve as a strong teacher, as it has to adjust to both modalities at the same time. In this work, we argue that, despite the remarkable results of SSL pretraining, its expressive power can be further exploited differently. One design choice is to freeze the learned representation for the downstream task of VSR. Unlike the classification setting, the common practice of linear probing [11] is not effective on the VSR datasets [32]. The contributions of this paper attempt to address this question. ### Latent-to-Latent Models Over the last few years, latent-to-latent approaches have attracted much attention, especially in the cross-modal generation literature. The high-level idea aims to match representations from two manifolds unified by a unique generating process, where correspondences are recovered and knowledge from one domain is transferred to another. In fact, DALL-E 1 [47] trained a prior network on large-scale datasets to map text to image tokens so as to perform text-guided image generation using VQ-VAEs [61], while in the work of [65], a latent-to-latent network is employed to map dense visual features to discrete music representations. The work of [26] manipulates the GAN's latent space by steering the representations to change facial attributes. Adversarial reprogramming [38] was taken out of the realm of adversarial attacks to repurpose an image classification model for sequence classification tasks. In this work, we take a step forward and extend latent-to-latent techniques to the VSR task, which is more fine-grained and requires better temporal modeling.

## 3 Method As mentioned earlier, audio encoders of ASR models learn to transform the audio inputs to well-structured latent representations that are sufficiently robust for the task of text decoding. Our approach takes advantage of these audio representations by utilizing them as targets for training a differentiable parametric function \(\mathbf{f}_{\theta}:\mathbf{z_{v}}\mapsto\mathbf{z_{asr}}\), with parameters \(\theta\) (e.g., a neural network).
Such a prior network transforms video latent representations computed by a video encoder to synthetic audio representations, which are then input to the corresponding ASR decoder for predicting the text. Our prior network is optimized to model the joint distribution over the video and audio representations by maximizing the cosine similarity between the respective representations of the pairs. ### Preliminaries We require a function \(\mathbf{f}_{\mathbf{\omega}}:\mathbf{V}^{T\times W\times H}\mapsto \mathbf{z_{v}}\). This function is trained in a self-supervised way such that it encodes the lip sequences by explicitly capturing the characteristics of the lip motion (e.g., temporal smoothness, invariance to small and local changes in the lip sequences), while still being unconditioned by the text labels. For the audio modality, the goal is to learn a model \(\mathbf{f}_{\mathbf{\gamma}}:\mathbf{A}^{T\times S}\mapsto y\), which maps the input audio signal to the corresponding text labels \(y\). **Video encoder:** We adopt the self-supervised model from AV-HuBERT [53] as our video encoder. It comprises a modified ResNet [23] as a frontend, followed by a transformer encoder. The 2D frontend of the ResNet is replaced with a 3D convolutional layer [42]. The AV-HuBERT model is pretrained to solve the masked-prediction task, where, given the masked audio and video representations output by the ResNet, the goal is to predict the clusters assigned using acoustic features (e.g., MFCC). This is iteratively refined using k-means clustering of features learned by the audio-visual encoder. Consequently, the encoder learns to better encode the characteristics of a video sequence. Given a video sequence in \(\mathbb{R}^{T\times W\times H}\), the video encoder \(\mathbf{f}_{\mathbf{\omega}}(\cdot)\) maps it to \(\mathbf{z_{v}}\in\mathbb{R}^{T\times D}\). Figure 1 (left) shows the video encoder architecture for extracting \(\mathbf{z_{v}}\) from a video sequence. **ASR model:** While our framework can host any off-the-shelf ASR model, we leverage Wav2Vec2.0 [5] for its simplicity and generalization capacity. Its contrastive pretraining maximizes the mutual information between a set of anchors from contextualized representations output by the transformer encoder, and their positive pair samples from quantized representations of the ResNet features, while pushing away the set of negatives. Such a pretraining on \(53\)k hours of unlabeled data promotes better temporal modeling and achieves a low WER of 4.8 on Librispeech [40] even when finetuning on just ten minutes of labeled data. The ASR model \(\mathbf{f}_{\mathbf{\gamma}}(\cdot)\) maps an acoustic signal to audio representations \(\mathbf{z_{asr}}\) using a feature extractor and projector. The \(\mathbf{z_{asr}}\) is then contextualized by the transformer encoder and mapped to a vocabulary of 32 characters using a linear layer, making it faster compared to auto-regressive decoding techniques [58]. Figure 1 (right) shows the pipeline for decoding the text from an audio input.
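As a rough illustration of the components introduced above, the sketch below instantiates a prior network in PyTorch. The 6-layer/768-d/12-head BASE configuration and the temporal upsampling to the 50 fps audio rate are taken from the implementation details reported later; the exact ordering of the projection and upsampling modules, and the module names, are assumptions.

```python
import torch
import torch.nn as nn

class PriorNetwork(nn.Module):
    """Encoder-only transformer mapping video latents z_v (25 fps) to
    synthetic audio latents z_asr^g (50 fps). A sketch of the BASE
    configuration (6 layers, 768-d, 12 heads, 3072 FFN)."""
    def __init__(self, dim=768, layers=6, heads=12, ffn=3072):
        super().__init__()
        self.in_proj = nn.Linear(dim, dim)
        # temporal conv upsampling: doubles the frame rate (25 -> 50 fps)
        self.upsample = nn.ConvTranspose1d(dim, dim, kernel_size=2, stride=2)
        enc_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=ffn, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)

    def forward(self, z_in):                     # z_in: (B, T, D) at video rate
        x = self.in_proj(z_in)
        x = self.upsample(x.transpose(1, 2)).transpose(1, 2)  # (B, 2T, D)
        return self.encoder(x)                   # synthetic z_asr^g: (B, 2T, D)
```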
Given a video-audio pair as inputs, we employ \(\mathbf{f}_{\mathbf{\omega}}(\cdot)\) to encode the lip sequence, whereas the audio signal is encoded with \(\mathbf{f}_{\mathbf{\gamma}}(\cdot)\) up to the ResNet level. The video representations \(\mathbf{z_{v}}\) are summed with their corresponding masked audio features \(\mathcal{M}(\mathbf{z_{asr}})\), where \(\mathcal{M}(\cdot)\) denotes the time-masking operation. The resulting combined representation is modeled as a single data source to generate the synthetic audio representations \(\mathbf{z_{asr}^{g}}\). The prior network is an encoder-only transformer model that exploits the expressive power of self-attention to perform the manifold mapping from video to audio. Moreover, the task is to model the joint distribution over video and audio representations by associating the recurring patterns, comparing their dependencies, and inferring analogies on how the lip movements can be synthesized as an audio signal. Finally, the prior network \(\mathbf{f}_{\theta}(\cdot)\) is optimized to predict the unmasked audio representations.

Figure 1: **On the left:** The video encoder takes a sequence of frames as input and computes the corresponding video representation \(\mathbf{z_{v}}\). **On the right:** The frontend of the ASR model takes an audio input and obtains the audio representation \(\mathbf{z_{asr}}\), which is then passed through a transformer encoder and linear layer for obtaining the text output.

**Avoiding collapse:** Albeit representing the same target speech, the audio and video manifolds are likely disjoint and may not transport easily. In the process of maximizing the similarity between the respective representations, the task is to construct an input stream that achieves the optimization sweet spot, thereby allowing the prior network to smoothly learn the mapping between the two manifolds. On the one hand, utilizing only video representations as input leads to degraded performance due to the difficulty in optimization resulting from missing informative features. On the other hand, utilizing the audio representations summed with the video representations results in the prior network relying solely on the former for the prediction while neglecting the latter completely, which also degrades VSR performance. To alleviate these issues of collapse, we opt for a masking schedule over the audio representations, where the mask proportion is linearly increased as the training progresses, ensuring that the input stream consists of video features only during the final epochs of training. Such a progressive masking of the audio representations at the input of the prior network promotes smoother optimization during the early stage of training, and pushes the transformer to slowly learn the generalizable features for the VSR task. ### Training and Inference **Training:** For a pair of video and audio representations \(\mathbf{z_{v}}\) and \(\mathbf{z_{asr}}\), the prior network optimizes: \[f_{\theta}:\mathbf{z_{in}}\mapsto\mathbf{z_{asr}^{g}},\quad\text{where}\quad\mathbf{z_{in}}=\mathbf{z_{v}}+\mathcal{M}(\mathbf{z_{asr}}).\] We define \(\mathcal{M}(\cdot)\) as the masking operation with a probability \(p\) that is a function of the training step. Given the audio input, the corresponding representations \(\mathbf{z_{asr}}\) and logits \(\mathbf{h_{asr}}\in\mathbb{R}^{T\times C}\) (with \(C\) being the vocabulary size of the ASR model) are utilized as targets for optimization.
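A minimal PyTorch sketch of one training step with the progressive masking described above. The linear schedule (growing from \(p{=}0.3\) to \(1.0\)) and the loss weight \(\alpha{=}0.01\) anticipate the values reported in the ablations, the loss terms anticipate Eqs. (1)-(3) below, `prior` and `asr_encoder` stand for the trainable prior network and the frozen ASR transformer encoder plus linear head, and, for simplicity, the video and audio representations are assumed to share the same frame rate.

```python
import torch
import torch.nn.functional as F

def mask_prob(step, total_steps, p0=0.3):
    # progressive schedule: p grows linearly from p0 to 1.0, so the input
    # is video-only by the end of training (linear form is an assumption)
    return min(1.0, p0 + (1.0 - p0) * step / total_steps)

def train_step(prior, asr_encoder, z_v, z_asr, h_asr, step, total_steps,
               alpha=0.01):
    """One step: video encoder and ASR model stay frozen; only the prior
    network receives gradients. h_asr are the target ASR logits."""
    p = mask_prob(step, total_steps)
    keep = (torch.rand(z_asr.shape[:2], device=z_asr.device) > p).float()
    z_in = z_v + keep.unsqueeze(-1) * z_asr        # z_in = z_v + M(z_asr)
    z_g = prior(z_in)                              # synthetic z_asr^g
    h_g = asr_encoder(z_g)                         # predicted logits
    # negative cosine similarity summed over time (cf. Eq. (1) below)
    l_cos = -F.cosine_similarity(z_g, z_asr, dim=-1).sum(dim=1).mean()
    l_mse = F.mse_loss(h_g, h_asr)                 # logit matching, Eq. (2)
    return l_cos + alpha * l_mse                   # total loss, Eq. (3)
```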
Here, \(\mathbf{z_{asr}}\) is extracted at the ResNet level, while the logits are extracted after the final linear projection layer of the Wav2Vec2.0. The training objective is to minimize the negative cosine similarity between representations, summed over the temporal dimension, while maintaining a small distance to the logits. Particularly, the losses are given by \[\mathcal{L}_{\text{cosine}}=-\sum_{i=1}^{T}\mathbf{z}_{\mathbf{asr},i}^{\top}\,\mathbf{z}_{\mathbf{asr},i}^{\mathbf{g}},\quad\text{and} \tag{1}\] \[\mathcal{L}_{\text{mse}}=\frac{1}{T}\sum_{i=1}^{T}(\mathbf{h}_{\mathbf{asr},i}^{\mathbf{g}}-\mathbf{h}_{\mathbf{asr},i})^{2}. \tag{2}\] The final objective function is given by \[\mathcal{L}=\mathcal{L}_{\text{cosine}}+\alpha\mathcal{L}_{\text{mse}}, \tag{3}\] where \(\alpha\) is a hyperparameter for weighting the MSE loss.

Figure 2: **Training pipeline of our Lip2Vec framework.** The video representations \(\mathbf{z_{v}}\) are summed with the masked audio representations \(\mathcal{M}(\mathbf{z_{asr}})\) and input to the prior network. The prior network generates corresponding synthetic audio representations \(\mathbf{z_{asr}^{g}}\), which are compared with the original \(\mathbf{z_{asr}}\) through a cosine similarity loss (\(\mathcal{L}_{cosine}\)). Furthermore, the representations \(\mathbf{z_{asr}^{g}}\) and \(\mathbf{z_{asr}}\) are passed independently through the transformer encoder and linear layer of the ASR model to obtain the predicted and target logits, respectively, which are aligned through an MSE loss (\(\mathcal{L}_{mse}\)). Note that the video encoder and the ASR model parameters are kept frozen throughout the training.

**Inference:** At test time, a query video is input to the video encoder to obtain the video representation \(\mathbf{z}_{\mathbf{v}}\). The prior network takes this \(\mathbf{z}_{\mathbf{v}}\) and generates a corresponding audio representation \(\mathbf{z}_{\mathbf{asr}}^{\mathbf{g}}\), which is then passed to the transformer encoder and linear layer of the ASR model to obtain the predicted text \(\hat{y}\). Figure 3 shows our inference pipeline for decoding the text from a video-only input. Note that audio is not utilized at inference time for decoding the text.

Figure 3: **Decoding text from video during inference.** The video representations \(\mathbf{z_{v}}\) computed by the video encoder are input to our learned prior network, which synthesizes audio representations \(\mathbf{z_{asr}^{g}}\). These representations are then passed through the encoder and linear layer of the ASR model to predict the text. Note that audio representations are not used at test time.

## 4 Experiments **Datasets:** We train the prior network using the video-audio pairs of LRS3 [2] and VoxCeleb2-en [13]. The LRS3 dataset comprises a total of \(433\) hours of training videos from the pretrain and trainval sets. From the multi-lingual VoxCeleb2 dataset, a subset of \(1326\) hours of English-language videos is selected as VoxCeleb2-en, as in [53]. We evaluate the prior network using the two test sets of LRS3 and VoxCeleb2-en, as detailed below: * LRS3: a small-scale test set of around \(1\) hour in total, consisting of \(1321\) sequences. We leverage the \(68\) facial landmarks provided by [34] to crop the utterances around the mouth area. * VoxCeleb2-en: we randomly sample \(5\)K videos from the VoxCeleb2-en test set, with the same duration statistics as the LRS3 test set. We use Whisper medium [46] as the labeling tool to obtain the text transcripts.
Moreover, for efficiency reasons, we utilize Yolo5Face [45] to obtain the landmarks instead of relying on the RetinaFace [15, 9] face detector. We found that the resulting \(5\) facial landmarks are sufficient for cropping the mouth regions 1. Footnote 1: This new pseudo-labelled test set will be made publicly available to serve as an extra benchmark for the community. **Evaluation metric:** As in [34], we employ the word error rate (WER) to measure the matching between the predicted text and the ground truth transcript. **Implementation details:** We adopt the implementations of AV-HuBERT [53] and Wav2Vec2.0 [5] from the official fairseq repository2. For the prior network, we consider two configurations: BASE with \(6\) transformer layers and LARGE with \(12\) layers. The embedding dimension/feed-forward dimension/attention heads in each transformer layer are \(768\)/\(3072\)/\(12\) for both variants. Furthermore, we employ a fully-connected layer and a temporal convolution upsampling to match the \(50\) fps of the audio representations. The base and large variants are trained in the low- and high-resource settings, respectively. The prior network is implemented in PyTorch [41] and trained using \(4\) and \(8\) NVidia A100 40GB GPUs for the base and large models, respectively. All the models are trained for \(30\) epochs using the AdamW [31] optimizer. We employ a warmup of \(5\) epochs and a cosine learning rate scheduler with the maximum \(lr\) set to \(10^{-3}\). Footnote 2: [https://github.com/facebookresearch/fairseq/tree/main/fairseq](https://github.com/facebookresearch/fairseq/tree/main/fairseq) **On using labeled video-text data:** It is worth mentioning that the prior network weights are not fine-tuned using labeled data containing video-text pairs. Both the video encoder and the ASR model are kept frozen when performing the latent-to-latent training. The main motivation is to set a robust evaluation procedure and to prevent the prior network from adapting its parameters to represent the video as audio, but rather to semantically match their latent spaces. ### Main Results **Finetuning _vs._ latent-to-latent:** Table 1 shows the performance comparison between supervised finetuning and our proposed latent-to-latent training in terms of WER score on the LRS3 test set. For both settings, the identical pretrained video encoder from [53] is utilized. The supervised finetuning using AV-HuBERT [53] is performed with either a linear layer (CTC) or a decoder (CE) using labeled video-text pairs. In contrast, our latent-to-latent training employs unlabeled video-audio pairs for training the prior network alone, while the pretrained video encoder and ASR decoder (Wav2Vec2.0) are kept frozen. We observe that our latent-to-latent approach obtains consistent improvements across different settings. However, we observe that when the large video encoder is pretrained only on LRS3 (433h), the supervised finetuning achieves better performance.
This is likely due to the large encoder overfitting to the pretraining data while being less generalizable and being prone to change at the finetuning stage to fit the labeled video-text pairs. Since the latent-to-latent procedure does not involve training the video encoder, our approach suffers when the pretrained video encoder is not generalizable. Such an issue does not arise for the base video encoder or when pretraining is performed on LRS3+VoxCeleb2-en (1759h), which helps in obtaining robust video representations that are better suited for latent-to-latent learning. It is also worth mentioning that Wav2Vec2.0 achieves 6.2 WER on the LRS3 test set. Furthermore, when using the large encoder pretrained on LRS3+VoxCeleb2-en (1759h) and finetuning on 433h, our approach achieves the best WER score of \(26.0\), with gains of \(12.6\) and \(2.6\) over the supervised CTC and CE finetuning, respectively. These results show the efficacy of our latent-to-latent learning approach for the VSR task.

\begin{table} \begin{tabular}{c c c|c c c} \hline \hline \multirow{2}{*}{**Encoder**} & \multirow{2}{*}{**Pretrain**} & \multirow{2}{*}{**Finetune**} & \multicolumn{2}{c}{**Supervised** [53]} & **Ours: Lip2Vec** \\ & & & CTC & CE & CTC \\ \hline \multirow{4}{*}{**Base**} & \multirow{2}{*}{433h} & 30h & 55.3 & 51.8 & 49.5 \\ & & 433h & 49.3 & 44.0 & 42.0 \\ \cline{2-6} & \multirow{2}{*}{1759h} & 30h & 47.3 & 46.1 & 40.6 \\ & & 433h & 43.0 & 34.8 & 34.1 \\ \hline \multirow{4}{*}{**Large**} & \multirow{2}{*}{433h} & 30h & 48.4 & 44.8 & 55.4 \\ & & 433h & 44.3 & 41.6 & 50.1 \\ \cline{2-6} & \multirow{2}{*}{1759h} & 30h & 40.7 & 32.5 & 31.2 \\ & & 433h & 38.6 & 28.6 & 26.0 \\ \hline \hline \end{tabular} \end{table} Table 1: **Supervised finetuning _vs._ latent-to-latent training.** Comparison in terms of WER on the LRS3 test set is shown. The same pretrained video encoder from AV-HuBERT [53] is finetuned or utilized for the prior network. For the supervised learning, AV-HuBERT is trained with either a linear layer (CTC) or a decoder (CE). Our Lip2Vec consistently improves the performance across different settings with simple CTC decoding.

**State-of-the-art comparison:** Here, we compare the Lip2Vec approach to SoTA VSR approaches on the LRS3 test set. Tables 2 and 3 show the performance comparison in terms of WER for the low-resource and high-resource settings, respectively. While the low-resource setting denotes that finetuning is performed with only 30h of LRS3 trainval data, the high-resource setting indicates finetuning with 433h of LRS3. Supervised methods using varying amounts of labeled data are also reported in Table 3 for comparison. We observe that our Lip2Vec performs favorably against existing approaches across different settings. Furthermore, the approach depends on the generalizability of the pretrained video encoder representations, since training Lip2Vec does not utilize labeled video-text pairs and keeps the parameters of the video encoder frozen. This is in contrast to the supervised finetuning, which is likely to significantly alter the video encoder parameters to align them with text decoding. Furthermore, from Table 3, we observe that our Lip2Vec trained with the large encoder and 1759h of pretraining obtains the best result of \(26.0\). This results in gains of \(2.6\), \(2.2\) and \(2.4\) over AV-HuBERT, RAVen and VATLM, respectively, when self-training (_i.e_., pseudo-labeling the data and additionally using it for finetuning) is not employed by these approaches.
**Results on VoxCeleb2-en:** In Table 5, we report the WER scores on three folds of the VoxCeleb2-en test set: the first fold consists of \(5\)k randomly selected videos, while the second and third are subsets of these \(5\)k on which Wav2Vec2.0 obtains WER scores of less than \(30\) and \(20\), respectively. We follow this procedure to reduce the bias and aim for a fair comparison, as the labels are obtained with another ASR model (_i.e_., Whisper [46]). First, we observe that SoTA approaches fail to generalize on this benchmark. Both the model from [34] and VTP [44] score around 70 WER. It is worth mentioning that VTP was trained on \(2.7\)k hours of video. As expected, Wav2Vec2.0 gets relatively reasonable results (10 to 25 WER). Interestingly enough, our Lip2Vec approach tracks the Wav2Vec2.0 scores with an upper bound proportional to the quality of the prior network. When only trained on 30h of LRS3, our Lip2Vec deviates from Wav2Vec2.0 by an average WER score of 23 across the three folds, thereby showing the generalization capability of our approach. Note that the self-trained variants of RAVen and AV-HuBERT are not considered for OOD generalization since they are trained on the pseudo-labelled VoxCeleb2-en train set. It can be seen that our Lip2Vec also achieves consistent gains in terms of WER, in comparison to RAVen and AV-HuBERT across different folds, demonstrating better generalization to unseen or novel speakers. This trend holds for the 433h finetuning as well.

\begin{table} \begin{tabular}{c l c c c c} \hline \hline & **Method** & **Unlabeled AV data** & **Labeled Data** & **Decoding** & **VSR** \\ \hline \multirow{8}{*}{**Base**} & AV-HuBERT [53] & 433h & 30h & CTC & 55.3 \\ & AV-HuBERT [53] & 433h & 30h & CE & 51.8 \\ & RAVen [22] & 433h & 30h & CTC+CE & 47.0 \\ & VATLM [64] & 433h & 30h & CE & 48.0 \\ & **Ours: Lip2Vec** & 433h & 30h\({}^{\dagger}\) & CTC & 49.5 \\ \cline{2-6} & AV-HuBERT [53, 54] & 1759h & 30h & CTC & 47.3 \\ & AV-HuBERT [53, 54] & 1759h & 30h & CE & 46.1 \\ & RAVen [22] & 1759h & 30h & CTC+CE & 40.2 \\ & VATLM [64] & 1759h & 30h & CE & 42.6 \\ & **Ours: Lip2Vec** & 1759h & 30h\({}^{\dagger}\) & CTC & 40.6 \\ \hline \multirow{8}{*}{**Large**} & AV-HuBERT [53] & 433h & 30h & CTC & 48.4 \\ & AV-HuBERT [53] & 433h & 30h & CE & 44.8 \\ & **Ours: Lip2Vec** & 433h & 30h\({}^{\dagger}\) & CTC & 55.4 \\ \cline{2-6} & AV-HuBERT [53, 54] & 1759h & 30h & CTC & 40.7 \\ & AV-HuBERT [53, 54] & 1759h & 30h & CE & 32.5 \\ & RAVen [22] & 1759h & 30h & CTC+CE & 33.1 \\ & VATLM [64] & 1759h & 30h & CE & 31.6 \\ & **Ours: Lip2Vec** & 1759h & 30h\({}^{\dagger}\) & CTC & 31.2 \\ \hline \hline \end{tabular} \end{table} Table 2: **Performance comparison on LRS3 test set in low-resource setting.** In this setting, only 30h of the LRS3 trainval set is utilized for finetuning after pretraining on unlabeled data from either LRS3 (433h) or LRS3+VoxCeleb2-en (1759h). ‘Base’ and ‘Large’ denote the size of the pretrained video encoder employed. Our Lip2Vec achieves favorable gains across different settings. Furthermore, compared to other approaches that require an auto-regressive decoder (CE), our inference speed is significantly higher due to CTC decoding. \(\dagger\) denotes that our method does not utilize labeled video-text data during finetuning, but uses unlabeled video-audio pairs for the same.

**Training on VoxCeleb2-en.** We investigate the impact of training with VoxCeleb2-en [13] data.
In practice, this scenario might arise if one has access to a dataset comprising unlabelled lip sequences. Our Lip2Vec framework is a suitable fit for this setting as it does not require labeled video-text pairs to learn the prior network. We take advantage of this property and train the model variants in a low-resource (30h) setting of the VoxCeleb2-en dataset. Table 4 shows the WER scores for various training sets on both the LRS3 and VoxCeleb2-en test sets. As expected, combining 30h from VoxCeleb2-en with the LRS3 low-resource setting improves the performance on the LRS3 test set as compared to training on 30h of LRS3 only (\(30.5\) _vs._ \(31.2\) for large and \(40.1\) _vs._ \(40.6\) for base). It is worth mentioning that training on 30h of VoxCeleb2-en achieves similar WER compared to using the LRS3 low-resource set. This highlights the robustness of the proposed Lip2Vec approach and its considerable advantages over supervised finetuning.

\begin{table} \begin{tabular}{l l|c c} \hline \hline \multirow{2}{*}{**Encoder**} & \multirow{2}{*}{**Training set**} & \multicolumn{2}{c}{**Test set**} \\ & & LRS3 & VoxCeleb2-en \\ \hline \multirow{3}{*}{**Base**} & LRS3-30h & 40.6 & 58.2 \\ & LRS3+VoxCeleb2-en-60h & 40.1 & 54.6 \\ & VoxCeleb2-en-30h & 41.2 & 57.3 \\ \hline \multirow{3}{*}{**Large**} & LRS3-30h & 31.2 & 39.4 \\ & LRS3+VoxCeleb2-en-60h & 30.4 & 33.1 \\ & VoxCeleb2-en-30h & 30.5 & 33.8 \\ \hline \hline \end{tabular} \end{table} Table 4: **Training the Lip2Vec on VoxCeleb2-en.** Comparing the effects of varying the training set on the WER scores on both the LRS3 and VoxCeleb2-en test sets. We randomly select 30h from VoxCeleb2-en and use it in different settings. We observe that the prior network can generalize to LRS3 when seeing VoxCeleb2-en data only.

\begin{table} \begin{tabular}{c l c c c c} \hline \hline & **Method** & **Unlabeled AV data** & **Labeled Data** & **Decoding** & **VSR** \\ \hline \multirow{7}{*}{**Supervised**} & Afouras _et al._ [1] & - & 1519h & CE & 58.9 \\ & Shillingford _et al._ [55] & - & 3886h & CTC & 55.1 \\ & Ma _et al._ [34] & - & 813h & CTC+CE & 34.7 \\ & Makino _et al._ [36] & - & 31000h & Transducer & 33.6 \\ & Prajwal _et al._ [44] & - & 2676h & CE & 30.7 \\ & Serdyuk _et al._ [50] & - & 90000h & Transducer & 25.9 \\ & Chang _et al._ [10] & - & 100000h & Transducer & 12.8 \\ \hline \multirow{9}{*}{\begin{tabular}{c} **Self-Supervised** \\ **Base** \\ \end{tabular} } & AV-HuBERT [53] & 433h & 433h & CTC & 49.3 \\ & AV-HuBERT [53] & 433h & 433h & CE & 44.0 \\ & RAVen [22] & 433h & 433h & CTC+CE & 39.1 \\ & **Ours: Lip2Vec** & 433h & 433h\({}^{\dagger}\) & CTC & 42.0 \\ \cline{2-6} & AV-HuBERT [53, 54] & 1759h & 433h & CTC & 43.0 \\ & AV-HuBERT [53, 54] & 1759h & 433h & CE & 34.8 \\ & RAVen [22] & 1759h & 433h & CTC+CE & 33.1 \\ & VATLM [64] & 1759h & 433h & CE & 34.2 \\ & **Ours: Lip2Vec** & 1759h & 433h\({}^{\dagger}\) & CTC & 34.1 \\ \hline \multirow{11}{*}{\begin{tabular}{c} **Self-Supervised** \\ **Large** \\ \end{tabular} } & AV-HuBERT [53] & 433h & 433h & CTC & 44.3 \\ & AV-HuBERT [53] & 433h & 433h & CE & 41.6 \\ & **Ours: Lip2Vec** & 433h & 433h\({}^{\dagger}\) & CTC & 50.1 \\ \cline{2-6} & AV-HuBERT [53, 54] & 1759h & 433h & CTC & 38.6 \\ & AV-HuBERT [53, 54] & 1759h & 433h & CE & 28.6 \\ & AV-HuBERT [53, 54] & 1759h & 433h & CTC+CE & 26.9 \\ & RAVen [22] & 1759h & 433h & CTC+CE & 28.2 \\ & RAVen [22] w/ self-training & 1759h & 433h & CTC+CE & 24.9 \\ & VATLM [64] & 1759h & 433h & CE & 28.4 \\ & VATLM [64] w/ self-training & 1759h & 433h & CE & 26.2 \\ & **Ours: Lip2Vec** & 1759h & 433h\({}^{\dagger}\) & CTC & 26.0 \\ \hline \hline \end{tabular} \end{table} Table 3: **Performance comparison on LRS3 test set in high-resource setting.** ‘Base’ and ‘Large’ denote the size of the self-supervised video encoder employed. Performance of supervised approaches is also reported. \(\dagger\) denotes that our Lip2Vec does not utilize labeled video-text data during finetuning, but uses unlabeled video-audio pairs for the same. Our approach achieves favorable gains across different settings with significantly higher inference speed due to CTC decoding, compared to other approaches that require an auto-regressive decoder (CE). Particularly, when using the large encoder pretrained on 1759h, our approach achieves the best score of \(26.0\) and is on par with the \(25.9\) of [50] that utilizes 90k hours of labeled data in a supervised setting.

The Whisper [46] pseudo-labelled VoxCeleb2-en test set turns out to be more challenging due to the high variety of speakers, vocabulary, _etc_. The Lip2Vec variants' scores on this benchmark are still far from their performance on the LRS3 test set. Future work will focus on the generalization aspects on both the LRS3 and VoxCeleb2-en test sets. **Inference speed:** As model efficiency is a key factor for real-world VSR applications, Table 5 shows a GPU runtime comparison (processing time per 100 frames) of the different approaches on sample videos. Compared with the other approaches, our model exhibits a remarkable improvement, being over \(10\times\) faster than VTP, the fastest among the tested models. This is explained by the fact that CTC decoding does not require any computationally expensive auto-regressive procedures, beam search, _etc_. ### Ablation Study Here, we evaluate the performance of our Lip2Vec when ablating the key components: varying the hyperparameter \(\alpha\) for the \(\mathcal{L}_{mse}\) loss and the masking function \(\mathcal{M}(\cdot)\). For this study, evaluation is conducted on the LRS3 test set and the large video encoder (self-supervisedly pretrained on 1759 hours of LRS3 and VoxCeleb2-en) is employed. **Impact of varying \(\alpha\):** Table 6 presents the performance of our framework when the hyperparameter weight \(\alpha\) (Eq. 3) is varied. We observe that higher values of \(\alpha\) degrade the performance, since the similarity between the predicted representations \(\mathbf{z_{asr}^{g}}\) and the target \(\mathbf{z_{asr}}\) diverges due to the gradients from \(\mathcal{L}_{mse}\) dominating over \(\mathcal{L}_{cosine}\). Furthermore, when training for a fixed number of epochs, \(\alpha{=}0\) achieves a WER score of 34.6 compared to the best result of 31.2 when \(\alpha{=}0.01\). We also observe that training longer without the MSE loss (denoted by \(\dagger\) in Table 6) can achieve a WER score of 31.4, indicating that \(\mathcal{L}_{mse}\) aids in faster training convergence. **Impact of different masking strategies:** From Table 7, we observe that not masking the audio representations \(\mathbf{z_{asr}}\) results in the prior network learning a shortcut from its input to output while ignoring the video representations, and thereby performing poorly at test time when no audio is available.
Similarly, maintaining the same masking probability \(p\) throughout the training results in the prior network expecting the masked audio to be present at test time as well for generating synthetic \(\mathbf{z_{asr}^{g}}\) accurately. In contrast, initializing \(p\) to a low value of \(0.3\) and gradually increasing it to \(1.0\) (simulating no \(\mathbf{z_{asr}}\) input) by the end of training enables the prior network to learn better representations \(\mathbf{z_{asr}^{g}}\) from the input \(\mathbf{z_{v}}\). Consequently, our progressive masking achieves a WER score of \(31.2\), thereby validating its efficacy for training. Additional results are provided in the supplementary.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline \(\alpha\) & 0.0 & 0.0\({}^{\dagger}\) & 0.01 & 0.2 & 0.5 \\ \hline **WER** & 34.6 & 31.4 & 31.2 & 52.1 & 91.3 \\ \hline \hline \end{tabular} \end{table} Table 6: **Impact of varying \(\alpha\).** WER comparison on the LRS3 test set when varying the weight \(\alpha\) for \(\mathcal{L}_{mse}\). When \(\alpha\) is increased beyond 0.05, the \(\mathcal{L}_{mse}\) dominates over \(\mathcal{L}_{cosine}\), resulting in \(\mathbf{z_{asr}^{g}}\) diverging from the target \(\mathbf{z_{asr}}\). While performance is slightly degraded without the MSE loss for a fixed training budget, longer training (denoted by \(\dagger\)) can reach similar optimal performance as with \(\alpha{=}0.01\), validating that \(\mathcal{L}_{mse}\) improves the convergence.

\begin{table} \begin{tabular}{l c} \hline \hline **Masking** & **WER** \\ \hline No Masking & 75.2 \\ 50\% Masking & 66.9 \\ 80\% Masking & 61.5 \\ 100\% Masking & 65.3 \\ **Progressive Masking** & 31.2 \\ \hline \hline \end{tabular} \end{table} Table 7: **Masking strategy.** WER comparison on the LRS3 test set with different masking strategies \(\mathcal{M}(\cdot)\) for the audio representations \(\mathbf{z_{asr}}\). No masking performs poorly since the prior network discards the \(\mathbf{z_{v}}\) input. Similarly, masking with the same probability (\(p\)) throughout the training shows only marginal improvement over no masking. The best results are obtained with the proposed progressive masking, where \(p\) is gradually increased during the training.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Unlabeled**} & \multicolumn{3}{c}{**Video folds**} & \multirow{2}{*}{**Runtime (s)**} \\ \cline{3-5} & & 01 & 02 & 03 & \\ \hline Wav2Vec2.0 [5] & – & 25.1 & 15.0 & 10.1 & 0.05 \\ \hline Ma et al. [34] & – & 69.4 & 64.1 & 61.3 & 3.91 \\ VTP [43] & – & 71.7 & 69.1 & 66.9 & 0.97 \\ \hline RAVen\({}^{\star}\) & 433h & 78.2 & 74.3 & 72.3 & – \\ AV-HuBERT\({}^{\star}\) & 433h & 79.1 & 76.3 & 73.2 & – \\ **Ours: Lip2Vec**\({}^{\star}\) & 433h & 71.2 & 65.7 & 57.3 & 0.07 \\ \hline \hline \end{tabular} \end{table} Table 5: **Out-of-distribution generalization on VoxCeleb2-en test set in terms of WER.** The folds are selected using Wav2Vec2.0 scores. Base and Large models are denoted by \({}^{\star}\) and \({}^{\dagger}\). All models are fine-tuned in the 30h low-resource setting of LRS3. The performance on the LRS3 test set is also shown for ease of reference. The last column reports the average computational load in seconds for decoding a 100-frame video (4 seconds) on a single Nvidia A100.

## 5 Discussion and Future Work **Supervised _vs._ self-supervised video encoder:** As discussed in the experiments, we employed a self-supervised video encoder from AV-HuBERT [53] for training the prior network. In contrast, here, we evaluate the efficacy of a supervised video encoder in the Lip2Vec framework by utilizing the encoder from [34]. For this experiment, we train the prior network following the low-resource setting. This achieves WER scores of 45.0 and 76 on the LRS3 and VoxCeleb2-en test sets, respectively. This is likely due to the explicit text supervision, which trains the video encoder to output representations aligned with the text decoding task rather than towards better representing the lip movements. This shows that self-supervised encoders are highly suited for learning latent-to-latent mappings and generalize better. This is also supported by findings in neuroscience research, which demonstrate that the silent lip-reading signal first synthesizes a coarse-grained auditory speech representation in the early auditory cortices. Then, the right angular gyrus excites the temporal visual speech area, and extracts and possibly predicts the slower features of the lip movements. Finally, the auditory cortices are fed with this signal to decode it as an audio signal [39, 25]. Consequently, our approach opens a new line of research for exploring the subtle definition of visual speech recognition embedded in the human brain [8]. **VSR as interpolation _vs._ extrapolation:** Most perception problems are interpolative in their nature [12] and satisfy the manifold hypothesis [17]. These tasks are intuitive for humans, and are usually solved in the early layers of the visual cortex in a matter of milliseconds (e.g., classification, recognition) [29, 59]. For such problems, deep learning is a perfect fit with its ability to perform non-linear interpolation in a complex high-dimensional manifold, enabling arbitrarily complex behavior [57, 12]. However, lip-reading experts allude to high-level, step-wise and iterative reasoning to solve the task. This likely suggests that VSR involves a higher level of extrapolation compared to the common perception tasks. Thus, we hypothesize that learning the manifold transfer without exposing the lip sequences explicitly to the text labels would induce some interpolation, thereby allowing for better generalization. We believe improving the prior network by leveraging better training procedures and architectures such as [49] would be an important future research direction for tightening the bound with the ASR performance. **Impact of fine-tuning on learned self-supervised encoders:** From our experiments above, we observed that supervised video encoders and models pretrained on LRS3 only are not suitable for latent-to-latent learning. A potential future direction includes studying the effect of text labels on self-supervised learned weights using measures such as Centered Kernel Alignment (CKA) [28] for obtaining deeper insights into the VSR task.

## 6 Conclusion We introduced Lip2Vec, a simple VSR framework that makes the most of ASR and VSR models by combining the knowledge acquired by an off-the-shelf VSR encoder and an ASR model. The approach exploits the latent space structure to perform inter-modality mapping, and learns how to transfer the visual representations to a suitable decoding space. Results on various benchmarks demonstrated the competitiveness and robustness of the approach. We believe this is an important step towards better VSR modeling using latent-to-latent methods. In summary, the results and discussions presented in the paper, along with those in the appendices, demonstrate the efficacy of our Lip2Vec approach for the task of visual speech recognition.
2310.13033
LASER: Linear Compression in Wireless Distributed Optimization
Data-parallel SGD is the de facto algorithm for distributed optimization, especially for large scale machine learning. Despite its merits, communication bottleneck is one of its persistent issues. Most compression schemes to alleviate this either assume noiseless communication links, or fail to achieve good performance on practical tasks. In this paper, we close this gap and introduce LASER: LineAr CompreSsion in WirEless DistRibuted Optimization. LASER capitalizes on the inherent low-rank structure of gradients and transmits them efficiently over the noisy channels. Whilst enjoying theoretical guarantees similar to those of the classical SGD, LASER shows consistent gains over baselines on a variety of practical benchmarks. In particular, it outperforms the state-of-the-art compression schemes on challenging computer vision and GPT language modeling tasks. On the latter, we obtain $50$-$64 \%$ improvement in perplexity over our baselines for noisy channels.
Ashok Vardhan Makkuva, Marco Bondaschi, Thijs Vogels, Martin Jaggi, Hyeji Kim, Michael C. Gastpar
2023-10-19T13:18:57Z
http://arxiv.org/abs/2310.13033v2
# LASER: Linear Compression in Wireless Distributed Optimization ###### Abstract Data-parallel SGD is the de facto algorithm for distributed optimization, especially for large-scale machine learning. Despite its merits, the communication bottleneck is one of its persistent issues. Most compression schemes to alleviate this either assume noiseless communication links or fail to achieve good performance on practical tasks. In this paper, we close this gap and introduce LASER: LineAr CompreSsion in WirEless DistRibuted Optimization. LASER capitalizes on the inherent low-rank structure of gradients and transmits them efficiently over noisy channels. Whilst enjoying theoretical guarantees similar to those of the classical SGD, LASER shows consistent gains over baselines on a variety of practical benchmarks. In particular, it outperforms the state-of-the-art compression schemes on challenging computer vision and GPT language modeling tasks. On the latter, we obtain \(50\)-\(64\%\) improvement in perplexity over our baselines for noisy channels.

## 1 Introduction Distributed optimization is one of the most widely used frameworks for training large-scale deep learning models (Bottou et al., 2018; Dean et al., 2012; Tang et al., 2020). In particular, data-parallel SGD is the workhorse algorithm for this task. Underpinning this approach is the _communication_ of large gradient vectors between the workers and the central server, which performs their _aggregation_. While these methods harness the inherent parallelism to reduce the overall training time, their communication cost is a major bottleneck that limits scalability to large models. Design of communication-efficient distributed algorithms is thus a must for reaping the full benefits of distributed optimization (Xu et al., 2020). Existing approaches to reduce the communication cost can be broadly classified into two themes: (i) compressing the gradients before transmission; or (ii) utilizing the communication link for native 'over-the-air' aggregation (averaging) across workers. Along (i), a number of gradient compression schemes have been designed, such as quantization (Bernstein et al., 2018; Vargaftik et al., 2022), sparsification (Aji and Heafield, 2017; Isik et al., 2022), hybrid methods (Jiang et al., 2018; Basu et al., 2019), and low-rank compression (Wang et al., 2018; Vogels et al., 2019). These methods show gains over the full-precision SGD in various settings (see Xu et al. (2020) for a detailed survey). Notwithstanding the merits, their key shortcoming is that they assume a _noiseless_ communication link between the clients and the server. In settings such as federated learning with differential privacy or wireless communication, these links are noisy. Making them noiseless requires error-correcting codes, which exacerbates the latency, as the server needs to wait till it receives the gradient from each worker before aggregating (Guo et al., 2020). Under theme (ii), communication cost is reduced by harnessing the physical layer aspects of (noisy) communication. In particular, the superposition nature of wireless channels is exploited to perform over-the-air averaging of gradients across workers, which reduces the latency, see e.g. Shi et al. (2020) and the references therein. Notable works include A-DSGD Amiri and Gunduz (2020), analog-gradient-aggregation Guo et al. (2020); Zhu et al. (2019), channel aware quantization Chang and Tandon (2020), etc.
However, to the best of our knowledge, the majority of these approaches are restricted to synthetic datasets and shallow neural networks (often single layer) and do not scale well to practical neural network models (which we verify in Sec. 4). This leads to a natural question: _Can we design efficient and practical gradient compression schemes for noisy communication channels?_ In this work, we precisely address this and propose LASER, a principled gradient compression scheme for distributed training over noisy wireless channels. Specifically, we make the following contributions: * Capitalizing on the inherent low-rank structure of the gradients, LASER efficiently computes these low-rank factors and transmits them reliably over the noisy channel while allowing the gradients to be averaged in transit (Sec. 3). * We show that LASER enjoys a convergence rate similar to that of the classical SGD for both quasi-convex and non-convex functions, except for a small additive constant depending on the channel degradation (Theorem 1). * We empirically demonstrate the superiority of LASER over the baselines on the challenging tasks of (i) language modeling with GPT-2 \(\rightarrow\) WikiText-103 and (ii) image classification with ResNet18 \(\rightarrow\) (Cifar10, Cifar100) and a 1-layer NN \(\rightarrow\) Mnist. With high gradient compression (\(165\times\)), LASER achieves \(50\)-\(64\%\) perplexity improvement in the low and moderate power regimes on WikiText-103. To the best of our knowledge, LASER is the first to exhibit such gains for GPT language modeling (Sec. 4).

\begin{table} \begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{**Target**} & \multicolumn{2}{c}{**Power required**} & \multirow{2}{*}{**Reduction**} \\ \cline{2-3} & Z-SGD & LASER & \\ \hline 80 & \(160\,\mathrm{K}\) & \(10\,\mathrm{K}\) & \(16\times\) \\ 50 & \(640\,\mathrm{K}\) & \(40\,\mathrm{K}\) & \(16\times\) \\ 40 & \(2560\,\mathrm{K}\) & \(160\,\mathrm{K}\) & \(16\times\) \\ 35 & \(2560\,\mathrm{K}\) & \(160\,\mathrm{K}\) & \(16\times\) \\ \hline \hline \end{tabular} \end{table} Table 1: Power required _(lower is better)_ to reach the target perplexity on WikiText-103. Z-SGD sends the uncompressed gradients directly, while LASER sends a rank-4 approximation. LASER requires \(16\times\) less power than Z-SGD to achieve the target perplexity over a wide interval. In the very-high-power regime with perplexity close to that of the noiseless SGD, we see no power gains.

Figure 1: Final test perplexity after 20k iterations _(lower is better)_ vs. power budget for GPT-2 language modeling on WikiText-103. LASER consistently requires orders-of-magnitude less power than other methods for the same perplexity.

**Notation.** Euclidean vectors and matrices are denoted by bold letters \(\mathbf{x},\mathbf{y},\mathbf{M}\), etc. \(\|\cdot\|\) denotes the Frobenius norm for matrices and the \(\ell_{2}\)-norm for Euclidean vectors. \(\mathcal{O}(\cdot)\) is an upper bound subsuming universal constants whereas \(\tilde{\mathcal{O}}(\cdot)\) hides any logarithmic problem-variable dependencies.

## 2 Background **Distributed optimization**. Consider the (synchronous) data-parallel distributed setting where we minimize an objective \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}\) defined as the empirical loss on a global dataset \(\mathcal{D}=\{(\mathbf{x}_{j},y_{j})\}_{j=1}^{N}\): \[\min_{\mathbf{\theta}\in\mathbb{R}^{d}}f(\mathbf{\theta}),\quad f(\mathbf{\theta})\triangleq\frac{1}{N}\sum_{j=1}^{N}\ell(\mathbf{x}_{j},y_{j};\mathbf{\theta}),\] where \(\ell(\cdot)\) evaluates the loss for each data sample \((\mathbf{x}_{j},y_{j})\) on model \(\mathbf{\theta}\). In this setup, there are \(k\) (data-homogeneous) training clients, where the \(i^{\text{th}}\) client has access to a stochastic gradient oracle \(\mathbf{g}_{i}\), e.g., a mini-batch gradient on a set of samples randomly chosen from \(\mathcal{D}\), such that \(\mathbb{E}[\mathbf{g}_{i}|\mathbf{\theta}]=\nabla f(\mathbf{\theta})\) for all \(\mathbf{\theta}\in\mathbb{R}^{d}\). In distributed SGD (Robbins and Monro, 1951; Bottou et al., 2018), the server aggregates all the \(\mathbf{g}_{i}\)s and performs the following updates: \[\mathbf{\theta}_{t+1}=\mathbf{\theta}_{t}-\gamma_{t}\cdot\frac{1}{k}\sum_{i=1}^{k}\mathbf{g}_{i}^{(t)},\quad\mathbb{E}[\mathbf{g}_{i}^{(t)}|\mathbf{\theta}_{t}]=\nabla f(\mathbf{\theta}_{t}),\quad t\geq 0,\] (SGD) where \(\{\gamma_{t}\}_{t\geq 0}\) is a stepsize schedule. Implicit here is the assumption that the communication link between the clients and the server is noiseless, which we expound upon next. **Communication model.** For the communication uplink from the clients to the server, we consider the standard wireless channel for over-the-air distributed learning (Amiri and Gunduz, 2020; Guo et al., 2020; Zhu et al., 2019; Chang and Tandon, 2020; Wei and Shen, 2022): the _additive slow-fading channel_, e.g., the classical multiple-access channel (Nazer and Gastpar, 2007). The defining property of this family is the superposition of the incoming wireless signals (enabling over-the-air computation), possibly corrupted by an independent channel noise (Shi et al., 2020). Specifically, we denote the channel as a (random) mapping \(\mathcal{Z}_{P}(\cdot)\) that transforms the set of (time-varying) messages transmitted by the clients \(\{\mathbf{x}_{i}\}_{i\in[k]}\subset\mathbb{R}^{d}\) to its noisy version \(\mathbf{y}\in\mathbb{R}^{d}\) received by the server: \[\mathbf{y}=\mathcal{Z}_{P}(\{\mathbf{x}_{i}\})\triangleq\sum_{i=1}^{k}\mathbf{x}_{i}+\mathbf{Z},\quad\|\mathbf{x}_{i}\|^{2}\leq P_{t},\;\frac{1}{T}\sum_{t=0}^{T-1}P_{t}\leq P, \tag{1}\] where the noise \(\mathbf{Z}\in\mathbb{R}^{d}\) is independent of the channel inputs and has zero mean and unit variance per dimension, i.e. \(\mathbb{E}\|\mathbf{Z}\|^{2}=d\). The power constraint on each client \(\|\mathbf{x}_{i}\|^{2}\leq P_{t}\) at time \(t\) serves as a communication cost (and budget), while the power policy \(\{P_{t}\}\) allots the total budget \(P\) over \(T\) epochs as per the average power constraint (Wei and Shen, 2022; Amiri and Gunduz, 2020). A key metric that captures the channel degradation is the signal-to-noise ratio per coordinate (SNR), defined as the ratio between the average signal energy (\(P\)) and that of the noise (\(d\)), i.e. \(\mathrm{SNR}\triangleq P/d\). The larger it is, the better the signal fidelity. The power budget \(P\) encourages the compression of signals: if each client can transmit the same information \(\mathbf{x}_{i}\) via fewer entries (smaller \(d\)), it can utilize more power per entry (higher \(\mathrm{SNR}\)) and hence obtain a more faithful signal. For the downlink communication from the server to the clients (broadcast channel), we assume that it is noiseless and thus the clients receive exactly what the server transmits (McMahan and Ramage, 2017; Konecny et al., 2016a;b). In the rest of the paper, by channel we mean the uplink channel.
**Gradient transmission over the channel.** In the distributed optimization setting the goal is to communicate the (time-varying) local gradients \(\mathbf{g}_{i}\in\mathbb{R}^{d}\) to the central server over the noisy channel in Eq. (1). Here we set the messages \(\mathbf{x}_{i}\) as linear scalings of the gradients (as we want to estimate the gradient average), i.e. \(\mathbf{x}_{i}=a_{i}\,\mathbf{g}_{i}\) with the scalars \(a_{i}\in\mathbb{R}\) enforcing the power constraints: \[\mathbf{y}=\sum_{i=1}^{k}a_{i}\,\mathbf{g}_{i}+\mathbf{Z},\quad\|a_{i}\,\mathbf{g}_{i}\|^{2}\leq P_{t}. \tag{2}\] Now the received signal is a weighted sum of the gradients corrupted by noise, whereas we need the sum of the gradients \(\sum_{i}\mathbf{g}_{i}\) (up to zero-mean additive noise) for the model training. Towards this goal, a common mild technical assumption is that the gradient norms \(\{\|\mathbf{g}_{i}\|\}\) are known at the receiver at each communication round (Chang and Tandon, 2020; Guo et al., 2020); this can be relaxed in practice (Sec. 4). The optimal scalars are then given by \(a_{i}=\sqrt{P_{t}}/(\max_{j}\|\mathbf{g}_{j}\|),\forall i\in[k]\), which are uniform across all the clients (§ E.1). Now substituting this \(a_{i}\) in Eq. (2) and rearranging, the effective channel can be written as \[\mathbf{y}=\widetilde{\mathcal{Z}}_{P}(\{\mathbf{g}_{i}\})\triangleq\frac{1}{k}\sum_{i=1}^{k}\mathbf{g}_{i}+\frac{\max_{i}\|\mathbf{g}_{i}\|}{k\sqrt{P_{t}}}\,\mathbf{Z}.\] (noisy channel) Or equivalently, we can assume this as the actual channel model where the server receives the gradient average corrupted by a zero-mean noise proportional to the gradients. Note that the noise magnitude decays in time as the gradients converge to zero. We denote \(\widetilde{\mathcal{Z}}_{P}(\cdot)\) simply as \(\mathcal{Z}_{P}(\cdot)\) henceforth as these two mappings are equivalent.

**Z-SGD.** Recall that SGD aggregates the uncompressed gradients directly. In the presence of the noisy channel, it naturally modifies to \[\mathbf{\theta}_{t+1}=\mathbf{\theta}_{t}-\gamma_{t}\:\mathcal{Z}_{P}(\{\mathbf{g}_{i}^{(t)}\}).\] (Z-SGD) Thus Z-SGD is a canonical baseline to compare against. It has two sources of stochasticity: one stemming from the stochastic gradients and the other from the channel noise. While the aggregated gradient in the Z-SGD update still has the same conditional mean as in the noiseless case (the channel noise in (noisy channel) is zero mean), it has higher variance due to the noise term. When \(P=\infty\), Z-SGD reduces to SGD.
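The scaling-and-rescaling argument above takes only a few lines. Reusing the hypothetical `channel` function from the previous snippet, this sketch (our naming, not the authors' code) reproduces the (noisy channel) form on the server side:

```python
def zsgd_aggregate(grads, P_t, rng):
    """Transmit gradients with a_i = sqrt(P_t) / max_j ||g_j|| and rescale, so the server
    sees y = (1/k) * sum_i g_i + (max_i ||g_i|| / (k * sqrt(P_t))) * Z."""
    k = len(grads)
    a = np.sqrt(P_t) / max(np.linalg.norm(g) for g in grads)
    y = channel([a * g for g in grads], P_t, rng)  # each ||a g_i||^2 <= P_t by construction
    return y / (a * k)

# One Z-SGD step would then be: theta -= gamma * zsgd_aggregate([g_1, ..., g_k], P, rng)
```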
## 3 LASER: Novel linear compression cum transmission scheme

In this section we describe our main contribution, LASER, a novel method to compress gradients and transmit them efficiently over noisy channels. The central idea underpinning our approach is that, given the channel power constraint in Eq. (1), we can get a more faithful gradient signal at the receiver by transmitting its 'appropriate' compressed version (fewer entries sent and hence more power per entry) as opposed to sending the full gradient naively as in Z-SGD. This raises a natural question: _what's a good compression scheme that facilitates this?_ To address this, we posit that we can capitalize on the inherent low-rank structure of the gradient matrices (Martin and Mahoney, 2021; Mazumder et al., 2010; Yoshida and Miyato, 2017) for efficient gradient compression and transmission. Indeed, as illustrated below and in Theorem 1, we can get a variance reduction of the order of the smaller dimension when the gradient matrices are approximately low-rank.

More concretely, let us consider the single-worker case where the goal is to transmit the stochastic gradient \(\mathbf{g}\in\mathbb{R}^{m\times m}\) (viewed as a matrix) to the server with constant power \(P_{t}=P\). Further, let us suppose that \(\mathbf{g}\) is approximately rank-one, i.e. \(\mathbf{g}\approx\mathbf{p}\mathbf{q}^{\top}\), with the factors \(\mathbf{p},\mathbf{q}\in\mathbb{R}^{m}\) known. If we transmit \(\mathbf{g}\) uncompressed over the noisy channel, as in Z-SGD, the server receives \(\mathbf{y}_{\text{Z-SGD}}=\mathbf{g}+(\|\mathbf{g}\|/\sqrt{P})\:\mathbf{Z}\in\mathbb{R}^{m\times m}\). On the other hand, if we capitalize on the low-rank structure of \(\mathbf{g}\) and instead transmit the factors \(\mathbf{p}\) and \(\mathbf{q}\) with power \(P/2\) each, the server would receive: \[\mathbf{y}_{\mathbf{p}}=\mathbf{p}+(\sqrt{2}\|\mathbf{p}\|/\sqrt{P})\:\mathbf{Z}_{\mathbf{p}}\in\mathbb{R}^{m},\quad\mathbf{y}_{\mathbf{q}}=\mathbf{q}+(\sqrt{2}\|\mathbf{q}\|/\sqrt{P})\:\mathbf{Z}_{\mathbf{q}}\in\mathbb{R}^{m},\] where \(\mathbf{Z}_{\mathbf{p}}\) and \(\mathbf{Z}_{\mathbf{q}}\) are the channel noise. Now we reconstruct the stochastic gradient as \[\mathbf{y}_{\text{LASER}}\triangleq\mathbf{y}_{\mathbf{p}}\mathbf{y}_{\mathbf{q}}^{\top}=(\mathbf{p}+(\sqrt{2}\|\mathbf{p}\|/\sqrt{P})\:\mathbf{Z}_{\mathbf{p}})(\mathbf{q}+(\sqrt{2}\|\mathbf{q}\|/\sqrt{P})\:\mathbf{Z}_{\mathbf{q}})^{\top}. \tag{3}\] Conditioned on the gradient \(\mathbf{g}\), the received signal \(\mathbf{y}\) has the same mean \(\mathbf{g}\) under both Z-SGD and LASER, but for Z-SGD it has variance \(\mathbb{E}\|\mathbf{y}_{\text{Z-SGD}}-\mathbf{g}\|^{2}=\|\mathbf{g}\|^{2}/\mathrm{SNR}\) with \(\mathrm{SNR}\triangleq P/m^{2}\), whereas that of LASER is roughly \(\|\mathbf{g}\|^{2}\cdot(4/(m\,\mathrm{SNR}))(1+1/(m\,\mathrm{SNR}))\), as further elaborated in Definition 1. When \(\mathrm{SNR}\) is of constant order \(\Omega(1)\), we observe that the variance for LASER is roughly \(\mathcal{O}(m)\) times smaller than that of Z-SGD, which is significant given that the variance directly affects the convergence speed of stochastic-gradient-based methods (Bottou et al., 2018). More generally, even if the gradients are not inherently low-rank and we only know their rank factors approximately, with standard techniques like error feedback (Seide et al., 2014) we can naturally generalize the aforementioned procedure, which is the basis for LASER. Algorithm 1 below details LASER and Theorem 1 establishes its theoretical justification. While LASER works with any power policy \(\{P_{t}\}\) in (noisy channel), it suffices to consider the constant law \(P_{t}=P\) as justified in Sec. 4.2.
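This rank-one variance comparison is easy to check numerically. The sketch below is our own construction (with an exactly rank-one \(\mathbf{g}\) for simplicity) and compares the empirical reconstruction errors of the two schemes:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 128
P = 4.0 * m ** 2                          # so that SNR = P/m^2 = 4
p, q = rng.standard_normal(m), rng.standard_normal(m)
g = np.outer(p, q)                        # exactly rank-one for this check

err_full, err_fact = 0.0, 0.0
for _ in range(200):
    # Z-SGD: transmit all m^2 entries with power P
    y_full = g + (np.linalg.norm(g) / np.sqrt(P)) * rng.standard_normal((m, m))
    # LASER-style: transmit the two m-dim factors with power P/2 each (Eq. 3)
    y_p = p + (np.sqrt(2) * np.linalg.norm(p) / np.sqrt(P)) * rng.standard_normal(m)
    y_q = q + (np.sqrt(2) * np.linalg.norm(q) / np.sqrt(P)) * rng.standard_normal(m)
    err_full += np.sum((y_full - g) ** 2)
    err_fact += np.sum((np.outer(y_p, y_q) - g) ** 2)

print(err_full / err_fact)  # roughly m / (4 * (1 + 1/(m*SNR))), an O(m) gain
```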
### Algorithm

For distributed training of neural network models, we apply Algorithm 1 to each layer independently. Further, we use it only for the weight matrices (fully connected layers) and the convolutional filters (after reshaping the multi-dimensional tensors to matrices), and transmit the bias vectors uncompressed. Now we delineate the two main components of LASER: (i) Gradient compression + Error feedback (EF), and (ii) Power allocation + Channel transmission.

**Gradient compression and error feedback (7-9).** Since we transmit low-rank gradient approximations, we use error feedback (EF) to incorporate the previous errors into the current gradient update. This ensures convergence of SGD with biased compressed gradients (Karimireddy et al., 2019). For the rank-\(r\) compression of the updated gradient \(\mathbf{M}\), \(\mathcal{C}_{r}(\mathbf{M})\), we use the PowerSGD algorithm from Vogels et al. (2019), a linear compression scheme that computes the left and right singular components \(\mathbf{P}\in\mathbb{R}^{m\times r}\) and \(\mathbf{Q}\in\mathbb{R}^{n\times r}\) respectively. PowerSGD uses a single step of the subspace iteration (Stewart and Miller, 1975) with a warm start from the previous updates to compute these factors. The approximation error, \(\mathbf{M}-\mathbf{P}\mathbf{Q}^{\top}\), is then used to update the error feedback for the next iteration. Note that the clients do not have access to the channel output and only include the local compression errors in their feedback. The decompression function in line 9 is given by \(\textsc{decompress}(\mathbf{P},\mathbf{Q})\triangleq\mathbf{P}\mathbf{Q}^{\top}\in\mathbb{R}^{m\times n}\).

**Power allocation and channel transmission (10-11).** This block is similar to Eq. (3) we saw earlier but generalized to multiple workers and higher rank. For each client, to transmit the rank-\(r\) factors \(\mathbf{P}\) and \(\mathbf{Q}\) over the noisy channel, we compute the corresponding power-allocation vectors \(\mathbf{\alpha},\mathbf{\beta}\in\mathbb{R}^{r}_{+}\), given by \(\mathbf{\alpha},\mathbf{\beta}=\textsc{poweralloc}(\mathbf{P},\mathbf{Q},\mathbf{M})\). This allocation is uniform across all the clients. Given these power scalars, all the clients synchronously transmit the corresponding left factors over the channel, which results in \(\mathbf{Y_{p}}\in\mathbb{R}^{m\times r}\), and similarly for \(\mathbf{Y_{q}}\in\mathbb{R}^{n\times r}\). Finally, the stochastic gradient for the model update is reconstructed as \(\mathbf{g}=\mathbf{Y_{p}}\mathbf{Y_{q}^{\top}}\). For brevity we defer the full details to § E.1.
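Putting the two components together, a single-client round for one weight matrix might look as follows. This is a minimal sketch under our own assumptions: the power split between the two factors is simplified to a uniform \(P/2\) each (the POWERALLOC vectors \(\mathbf{\alpha},\mathbf{\beta}\) of the paper are more refined), and the compressor is one warm-started subspace-iteration step in the spirit of PowerSGD:

```python
import numpy as np

def laser_round(g, e, Q_warm, P_pow, rng):
    """One sketched LASER round (single client) for a gradient matrix g of shape (m, n).

    g: stochastic gradient; e: error-feedback memory; Q_warm: (n, r) warm-start factor.
    Returns the server-side reconstruction, the new memory, and the next warm start.
    """
    M = e + g                               # fold in the feedback memory (lines 7-9)
    P_fac, _ = np.linalg.qr(M @ Q_warm)     # one subspace-iteration step, orthonormalized
    Q_fac = M.T @ P_fac                     # right factor, so that M ~ P_fac @ Q_fac.T
    e_new = M - P_fac @ Q_fac.T             # keep only the local compression error
    # transmit each factor with half the power (simplified uniform allocation)
    y_P = P_fac + (np.sqrt(2) * np.linalg.norm(P_fac) / np.sqrt(P_pow)) * rng.standard_normal(P_fac.shape)
    y_Q = Q_fac + (np.sqrt(2) * np.linalg.norm(Q_fac) / np.sqrt(P_pow)) * rng.standard_normal(Q_fac.shape)
    return y_P @ y_Q.T, e_new, Q_fac        # reconstruction g_hat = Y_p Y_q^T
```

The server would then update \(\mathbf{\theta}\) with the returned reconstruction, while the client keeps `e_new` and warm-starts the next round with `Q_fac`.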
### Theoretical results

We now provide theoretical justification for LASER for learning parameters in \(\mathbb{R}^{m\times n}\) with \(m\leq n\) (without loss of generality). While our algorithm works for any number of clients, for the theory we consider \(k=1\) to illustrate the primary gains with our approach. Our results readily extend to the multiple-clients setting following Cordonnier (2018). Specifically, Theorem 1 below highlights that the asymptotic convergence rate of LASER is _almost the same as that of the classical_ SGD, except for a small additive constant \(\lambda_{\mathrm{LASER}}\) which is \(\mathcal{O}(m)\) times smaller than that of Z-SGD. Our results hold for both quasi-convex and arbitrary non-convex functions. We start with the preliminaries.

**Definition 1** (**Channel influence factor)**.: _For any compression cum transmission algorithm \(\mathrm{ALG}\), let \(\mathbf{y}_{\mathrm{ALG}}(\mathbf{g})\) be the reconstructed gradient at the server after transmitting \(\mathbf{g}\) over the noisy channel. Then the channel influence factor \(\lambda_{\mathrm{ALG}}\) is defined as_ \[\lambda_{\mathrm{ALG}}\triangleq\frac{\mathbb{E}_{\mathbf{Z}}\|\mathbf{y}_{\mathrm{ALG}}(\mathbf{g})-\mathbf{g}\|^{2}}{\|\mathbf{g}\|^{2}}. \tag{4}\]

The influence factor gauges the effect of the channel on the variance of the final gradient \(\mathbf{y}_{\mathrm{ALG}}\): if the original stochastic gradient \(\mathbf{g}\) has variance \(\sigma^{2}\) with respect to the actual gradient \(\nabla f\), then \(\mathbf{y}_{\mathrm{ALG}}\) has variance \((1+\lambda_{\mathrm{ALG}})\sigma^{2}\). This variance directly affects the convergence speed of the SGD and hence the smaller \(\lambda_{\mathrm{ALG}}\) is, the better the compression scheme is. In view of this, the following fact (§ B.2) illustrates the crucial gains of LASER compared to Z-SGD, which are roughly of order \(\mathcal{O}(m)\): \[\lambda_{\mathrm{LASER}}\leq\frac{4}{(m/r)\mathrm{SNR}}\left(1+\frac{1}{(n/r)\mathrm{SNR}}\right)\ll\frac{1}{\mathrm{SNR}}=\lambda_{\mathrm{Z-SGD}}. \tag{5}\] In the low-rank (Vogels et al., 2019) and constant-order SNR regime where \(r=\mathcal{O}(1)\) and \(\mathrm{SNR}=\Omega(1)\), we observe that \(\lambda_{\mathrm{LASER}}\) is roughly \(\mathcal{O}(m)\) times smaller than \(\lambda_{\mathrm{Z-SGD}}\). In other words, the effective \(\mathrm{SNR}\) seen by LASER roughly gets boosted to \(\mathcal{O}(m\,\mathrm{SNR})\) by capitalizing on the low-rank factors, whereas Z-SGD perceives only the standard factor \(\mathrm{SNR}\). Constant-order SNR, i.e. \(P/mn=\Omega(1)\), means that the energy used to transmit each coordinate is roughly a constant, analogous to the constant-order bits used in quantization schemes (Vargaftik et al., 2021). In fact, a weaker condition that \(P/4r^{2}>1\) suffices (§ E.3). With a slight abuse of notation, we denote the first upper-bounding quantity in Eq. (5) as \(\lambda_{\mathrm{LASER}}\) too, and \(\textsc{Decompress}(\mathcal{C}_{r}(\cdot))\) as \(\mathcal{C}_{r}(\cdot)\) for brevity.

We briefly recall the standard assumptions for SGD convergence following the framework in Bottou et al. (2018) and Stich & Karimireddy (2019).

**Assumption 1**.: _The objective \(f:\mathbb{R}^{m\times n}\to\mathbb{R}\) is differentiable and \(\mu\)-quasi-convex for a constant \(\mu\geq 0\) with respect to \(\mathbf{\theta}_{\star}\), i.e. \(f(\mathbf{\theta})-f(\mathbf{\theta}_{\star})+\frac{\mu}{2}\|\mathbf{\theta}-\mathbf{\theta}_{\star}\|^{2}\leq\langle\nabla f(\mathbf{\theta}),\mathbf{\theta}-\mathbf{\theta}_{\star}\rangle,\ \forall\mathbf{\theta}\in\mathbb{R}^{m\times n}\)._

**Assumption 2**.: \(f\) _is \(L\)-smooth for some \(L>0\), i.e. \(f(\mathbf{\theta}^{\prime})\leq f(\mathbf{\theta})+\langle\nabla f(\mathbf{\theta}),\mathbf{\theta}^{\prime}-\mathbf{\theta}\rangle+\frac{L}{2}\|\mathbf{\theta}^{\prime}-\mathbf{\theta}\|^{2},\ \forall\mathbf{\theta},\mathbf{\theta}^{\prime}\in\mathbb{R}^{m\times n}\)._

**Assumption 3**.: _For any \(\mathbf{\theta}\), a gradient oracle \(\mathbf{g}(\mathbf{\theta},\mathbf{\xi})=\nabla f(\mathbf{\theta})+\mathbf{\xi}\), and conditionally independent noise \(\mathbf{\xi}\), there exist scalars \((M,\sigma^{2})\geq 0\) such that \(\mathbb{E}[\mathbf{\xi}|\mathbf{\theta}]=0,\ \mathbb{E}[\|\mathbf{\xi}\|^{2}\,|\,\mathbf{\theta}]\leq M\|\nabla f(\mathbf{\theta})\|^{2}+\sigma^{2}\)._

**Assumption 4**.: _The compressor \(\mathcal{C}_{r}(\cdot)\) satisfies the \(\delta_{r}\)-compression property: there exists a \(\delta_{r}\in[0,1]\) such that \(\mathbb{E}_{\mathcal{C}_{r}}\|\mathcal{C}_{r}(\mathbf{M})-\mathbf{M}\|^{2}\leq(1-\delta_{r})\|\mathbf{M}\|^{2},\ \forall\mathbf{M}\in\mathbb{R}^{m\times n}\)._

\(\delta_{r}\)-compression is a standard assumption in the convergence analysis of Error Feedback SGD (EF-SGD) (Stich & Karimireddy, 2020). It ensures that the norm of the feedback memory remains bounded.
We make the following assumption on the influence factor \(\lambda_{\mathrm{LASER}}\), which ensures that the overall composition of the channel and compressor mappings, \(\mathcal{Z}_{P}(\mathcal{C}_{r}(\cdot))\), still behaves nicely.

**Assumption 5**.: _The channel influence factor \(\lambda_{\mathrm{LASER}}\) satisfies \(\lambda_{\mathrm{LASER}}\leq 1/(10(2/\delta_{r}+M))\)._

We note that a similar assumption is needed for convergence even in the hypothetical ideal scenario when the clients have access to the channel output (§ B.2), which we do not have. This bound can be roughly interpreted as \(\lambda_{\mathrm{LASER}}=\mathcal{O}(\delta_{r})\). We are now ready to state our main result.

**Theorem 1** (**LASER convergence)**.: _Let \(\{\mathbf{\theta}_{t}\}_{t\geq 0}\) be the LASER iterates (Alg. 1) with constant stepsize schedule \(\{\gamma_{t}=\gamma\}_{t\geq 0}\) and suppose Assumptions 2-5 hold. Denote \(\mathbf{\theta}_{\star}\triangleq\mathrm{argmin}_{\mathbf{\theta}}f(\mathbf{\theta}),f_{\star}\triangleq f(\mathbf{\theta}_{\star})\), and \(\tau\triangleq 10L\left(\frac{2}{\delta_{r}}+M\right)\). Then for \(k=1\),_

1. _if_ \(f\) _is_ \(\mu\)_-quasi-convex for_ \(\mu>0\)_, there exists a stepsize_ \(\gamma\leq\frac{1}{\tau(1+\lambda_{\mathrm{LASER}})}\) _such that_ \[\mathbb{E}f(\mathbf{\theta}_{\mathrm{out}})-f_{\star}=\widetilde{\mathcal{O}}\left(\tau(1+\lambda_{\mathrm{LASER}})\|\mathbf{\theta}_{0}-\mathbf{\theta}_{\star}\|^{2}\exp\left(\frac{-\mu T}{\tau(1+\lambda_{\mathrm{LASER}})}\right)+\frac{\sigma^{2}(1+\lambda_{\mathrm{LASER}})}{\mu T}\right),\] _where_ \(\mathbf{\theta}_{\mathrm{out}}\) _is chosen from_ \(\{\mathbf{\theta}_{t}\}_{t=0}^{T-1}\) _such that_ \(\mathbf{\theta}_{\mathrm{out}}=\mathbf{\theta}_{t}\) _with probability proportional to_ \((1-\mu\gamma/2)^{-t}\)_._
2. _if_ \(f\) _is_ \(\mu\)_-quasi-convex for_ \(\mu=0\)_, there exists a stepsize_ \(\gamma\leq\frac{1}{\tau(1+\lambda_{\mathrm{LASER}})}\) _such that_ \[\mathbb{E}f(\mathbf{\theta}_{\mathrm{out}})-f_{\star}=\mathcal{O}\left(\frac{\tau\|\mathbf{\theta}_{0}-\mathbf{\theta}_{\star}\|^{2}(1+\lambda_{\mathrm{LASER}})}{T}+\sigma\|\mathbf{\theta}_{0}-\mathbf{\theta}_{\star}\|\sqrt{\frac{1+\lambda_{\mathrm{LASER}}}{T}}\right),\] _where_ \(\mathbf{\theta}_{\mathrm{out}}\) _is chosen uniformly at random from_ \(\{\mathbf{\theta}_{t}\}_{t=0}^{T-1}\)_._
3. _if_ \(f\) _is an arbitrary non-convex function, there exists a stepsize_ \(\gamma\leq\frac{1}{\tau(1+\lambda_{\mathrm{LASER}})}\) _such that_ \[\mathbb{E}\|\nabla f(\mathbf{\theta}_{\mathrm{out}})\|^{2}=\mathcal{O}\left(\frac{\tau(f(\mathbf{\theta}_{0})-f_{\star})(1+\lambda_{\mathrm{LASER}})}{T}+\sigma\sqrt{\frac{L(f(\mathbf{\theta}_{0})-f_{\star})(1+\lambda_{\mathrm{LASER}})}{T}}\right),\] _where_ \(\mathbf{\theta}_{\mathrm{out}}\) _is chosen uniformly at random from_ \(\{\mathbf{\theta}_{t}\}_{t=0}^{T-1}\)_._
4. \(\mathrm{Z-SGD}\) _obeys the convergence bounds (i)-(iii) with_ \(\delta_{r}=1\) _and_ \(\lambda_{\mathrm{LASER}}\) _replaced by_ \(\lambda_{\mathrm{Z-SGD}}\)_._

**LASER vs. Z-SGD.** Thus the asymptotic rate of LASER is dictated by the timescale \((1+\lambda_{\mathrm{LASER}})/T\), very close to the \(1/T\) rate of the classical SGD. In contrast, Z-SGD has the factor \((1+\lambda_{\mathrm{Z-SGD}})/T\) with \(\lambda_{\mathrm{Z-SGD}}=\mathcal{O}(m)\,\lambda_{\mathrm{LASER}}\).
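The size of this gap is easy to get a feel for by plugging representative numbers into the two bounds of Eq. (5); the values below are our own illustrative choices, not the paper's:

```python
m, n, r = 256, 1024, 4   # illustrative layer shape and rank
snr = 2.0                # a constant-order SNR
lam_zsgd = 1.0 / snr
lam_laser = (4.0 / ((m / r) * snr)) * (1.0 + 1.0 / ((n / r) * snr))
print(lam_zsgd, lam_laser, lam_zsgd / lam_laser)  # gap ~ m/(4r), i.e. O(m) for constant r
```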
**Multiple clients.** As all the workers in LASER (Alg. 1) apply the same linear operations for gradient compression (via PowerSGD), Theorem 1 can be extended to (homogeneous) multiple workers by shrinking the constants \(\sigma^{2},\mathrm{SNR},\lambda_{\text{LASER}}\), and \(\lambda_{\text{Z-SGD}}\) by a factor of \(k\), following Cordonnier (2018).

Proof.: (Sketch) First we write the LASER iterates \(\{\mathbf{\theta}_{t}\}_{t\geq 0}\) succinctly as \[\mathbf{\theta}_{t+1} =\mathbf{\theta}_{t}-\mathcal{Z}_{P}(\mathcal{C}_{r}(\mathbf{e}_{t}+\gamma_{t}\mathbf{g}_{t})),\] \[\mathbf{e}_{t+1} =(\mathbf{e}_{t}+\gamma_{t}\mathbf{g}_{t})-\mathcal{C}_{r}(\mathbf{e}_{t}+\gamma_{t}\mathbf{g}_{t}).\] We then establish a bound on the gap to the optimum, \(\mathbb{E}\|\mathbf{\theta}_{t+1}-\mathbf{\theta}_{\star}\|^{2}\), by the descent lemma (Lemma 11). This optimality gap depends on the behavior of the error updates via \(\mathbb{E}\|\mathbf{e}_{t}\|^{2}\), which we characterize by the error-control lemma (Lemma 12). When \(f\) is quasi-convex, these two lemmas help us establish a recursive inequality between the optimality gap \(\mathbb{E}f(\mathbf{\theta}_{t+1})-f_{\star}\) at time \(t+1\) and that at time \(t\): \(\mathbb{E}f(\mathbf{\theta}_{t})-f_{\star}\). Upon unrolling this recursion and taking a weighted summation, Lemma 3 establishes the desired result. In the case of non-convexity, the same idea helps us to control \(\mathbb{E}\|\nabla f(\mathbf{\theta}_{t})\|^{2}\) in a similar fashion and, when combined with Lemma 6, yields the final result. The proof for Z-SGD is similar.

## 4 Experimental results

We empirically demonstrate the superiority of LASER over state-of-the-art baselines on a variety of benchmarks, summarized in Table 2.

**Setup.** We consider four challenging tasks of practical interest: (i) GPT language modeling on WikiText-103, and (ii, iii, iv) image classification on Mnist, Cifar10 and Cifar100. For the language modeling, we use a GPT-2-like architecture following Pagliardini (2023) (§ F). ResNet18 is used for the Cifar datasets. For Mnist, we use a \(1\)-hidden-layer network for a fair comparison with A-DSGD (Amiri and Gunduz, 2020b). To vary the noisy conditions, we vary the power \(P\) geometrically in the range \([0.1,10]\) for Mnist, \([250,128000]\) for Cifar10 and Cifar100, and \([10000,1024\times 10000]\) for WikiText-103. The chosen ranges can be roughly split into low-moderate-high power regimes. Recall from (noisy channel) that the smaller the power, the higher the noise in the channel.

**Baselines.** We benchmark LASER against three different sets of baselines: (i) Z-SGD, (ii) Signum, Random-K, Sketching, and (iii) A-DSGD. Z-SGD sends the uncompressed gradients directly over the noisy channel and acts as a canonical baseline. The algorithms in (ii) are state-of-the-art distributed compression schemes for noiseless communication (Vogels et al., 2019). Signum (Bernstein et al., 2018) transmits the gradient sign followed by the majority vote, and Sketching (Rothchild et al., 2020; Haddadpour et al., 2020) uses a Count Mean Sketch to compress the gradients. We omit comparison with quantization methods (Vargaftik et al., 2022) given the difference in our objectives and the settings (noisy channel). A-DSGD (Amiri and Gunduz, 2020b) is a popular compression scheme for noisy channels, relying on Top-K and random sketching. However, A-DSGD does not scale to tasks of the size we consider and hence we benchmark against it only on Mnist. SGD serves as the noiseless baseline (Table 2).
All the compression algorithms use error feedback and a compression factor (compressed-gradient-size/original-size) of \(0.2\), the optimum in the range \([0.1,0.8]\). We report the best results among \(3\) independent runs for all the baselines (§ F).

### Results on language modeling and image classification

For GPT language modeling, Fig. 1 in Sec. 1 highlights that LASER outperforms the baselines over a wide range of power levels. To the best of our knowledge, this is the first result of its kind to demonstrate gains for GPT training over noisy channels. Specifically, we obtain a \(64\%\) improvement in perplexity over Z-SGD (\(76\) vs. \(212\)) in the low power regime (\(P=10\,\mathrm{K}\)) and \(50\%\) (\(35\) vs. \(71\)) for the moderate one (\(P=160\,\mathrm{K}\)). This demonstrates the efficacy of LASER especially in limited-power environments. Indeed, Table 1 illustrates that for a fixed target perplexity, LASER requires \(16\times\) less power than the second best, Z-SGD. In the very high power regime, we observe no clear gains (as expected) compared to transmitting the uncompressed gradients directly via Z-SGD. We observe a similar trend for Cifar10 classification, as Fig. 2 and Table 3 demonstrate the superiority of LASER over other compression schemes; Random-K does better than the other baselines up to moderate power levels, after which Z-SGD dominates. Signum is considerably worse than the others, as it has not yet converged after \(150\) epochs, and is hence omitted. With regards to power reduction, Table 3 highlights that LASER requires just \((1/16)^{\text{th}}\) the power compared to Z-SGD to reach any target accuracy up to \(91\%\). We observe similar gains for Cifar100 (§ F). Table 4 compares the performance of LASER against various compression algorithms on Mnist. In the very noisy regime (\(P=0.1\)), Random-K is slightly better than LASER and outperforms the other baselines, whereas in the moderate (\(P=1\)) and high power (\(P=10\)) regimes, LASER is slightly better than the other algorithms. On the other hand, we observe that A-DSGD performs worse than even simple compression schemes like Random-K in all the settings.

The formulation in (noisy channel) allows for any power control law \(P_{t}\) as long as it satisfies the average power constraint: \(\sum_{t}(P_{t}/T)\leq P\). This raises a natural question: _what's the best power scheme for LASER?_ To answer this, for Cifar10 classification, under a fixed budget \(P\) we consider different power policies with both increasing and decreasing power across epochs: the constant, piecewise constant and linear schemes. Fig. 3 illustrates the results for the decreasing power laws, while Fig. 7 shows their increasing counterparts. These results highlight that the _constant_ power policy achieves the _best_ performance for both LASER and Z-SGD, compared to the time-varying ones. Further, LASER attains significant accuracy gains over Z-SGD for all the power control laws. Interestingly, LASER performs the _same_ with all the power schemes. We attribute this behavior to the fact that the (noisy channel) model already contains a time-varying noise due to the term \(\max_{i}\|\mathbf{g}_{i}\|/(k\sqrt{P_{t}})\). Since the gradients decay over time, this inherently allows for an implicit power/SNR-control law even with a constant \(P_{t}\), thus enabling the constant power scheme to fare as well as the others. Hence, without loss of generality, we consider the static power schedule for our theory and experiments. We refer to § F.7 for a detailed discussion.
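For concreteness, the candidate policies can be written as simple schedules normalized to exactly meet the average budget; the shapes below are our own illustrative parameterizations of the constant, linear and piecewise-constant laws, not the paper's exact settings:

```python
import numpy as np

def power_schedule(P, T, kind="constant"):
    """Power policies {P_t} rescaled so that (1/T) * sum_t P_t == P (cf. Eq. 1)."""
    t = np.arange(T, dtype=float)
    if kind == "constant":
        P_t = np.full(T, float(P))
    elif kind == "linear_decreasing":
        P_t = T - t                              # decays linearly towards zero
    elif kind == "piecewise":
        P_t = np.where(t < T // 2, 3.0, 1.0)     # high first half, low second half
    else:
        raise ValueError(kind)
    return P_t * (P / P_t.mean())                # exactly meets the average budget

# e.g. power_schedule(640_000, 20_000, "linear_decreasing")
```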
Figure 3: Accuracy vs. budget \(P\) for various power laws. Constant is the best for both LASER and Z-SGD.

### Computational complexity and communication cost

Recall from Algorithm 1 that the two critical components of LASER are gradient compression and channel transmission. To gauge their efficacy we analyze them via two important metrics: (i) the _computational complexity_ of compression and (ii) the _communication cost_ of transmission. For (ii), recall from Eq. (1) that the power constraint indirectly serves as a communication cost and encourages compression. Table 5 quantitatively measures the total data sent by the clients for each training iteration (which does not change with the power \(P\)) for GPT language modeling on WikiText-103. As illustrated, LASER incurs the lowest communication cost among all the baselines, with a \(165\times\) cost reduction as compared to Z-SGD, followed by Signum which obtains a \(33\times\) reduction. Interestingly, LASER also achieves the best perplexity scores, as highlighted in Fig. 1. For these experiments, we let rank \(r=4\) for LASER and the best compression factor \(0.2\) for the baselines (as detailed earlier). Signum does not require any compression factor. For (i), since LASER relies on PowerSGD for the rank decomposition, it inherits the same low-complexity benefits: Tables \(3\)-\(7\) of Vogels et al. (2019) demonstrate that PowerSGD is efficient with significantly lower computational needs and a much smaller processing time per batch as compared to baselines, without any accuracy drop. In fact, it is the core distributed algorithm behind the recent breakthrough DALL-E (§ E in Ramesh et al. (2021)).

**Slow and fast fading channels.** The slow/non-fading model in Eq. (1) readily generalizes to the popular fast fading channel (Guo et al., 2020; Amiri and Gunduz, 2020): \(\mathbf{y}=\sum_{i}\gamma_{i}\mathbf{x}_{i}+\mathbf{Z}\), where the \(\gamma_{i}\) are the channel fading coefficients. A standard technique in the literature is to assume that channel-state information (CSI) is known in the form of the fading coefficients or their statistics, which essentially reduces the problem to a non-fading one. Likewise, LASER can be extended to the fast fading channel as well. The challenging setting without CSI is an interesting topic of future research.

## 5 Related work

**(i) Compression schemes with noiseless communication.** Assuming a noiseless bit pipe from the clients to the server, quantization methods (Dettmers, 2015; Alistarh et al., 2017; Horváth et al., 2022; Li et al., 2018; Wen et al., 2017; Yu et al., 2019; Vargaftik et al., 2021) quantize each coordinate and send as few bits as possible. Sparsification techniques (Ivkin et al., 2019; Stich et al., 2018; Sun et al., 2019; Tsuzuku et al., 2018; Wangni et al., 2018) send a reduced number of coordinates, based on criteria such as Top/Random-K, as opposed to sending the full gradient directly. Hybrid methods (Dryden et al., 2016; Lim et al., 2019) combine both. Rank compression methods (Yu et al., 2018; Cho et al., 2019; Wang et al., 2018) spectrally decompose the gradient matrix (often via SVD) and transmit these factors. Since SVD is computationally prohibitive, we rely on the state-of-the-art light-weight compressor PowerSGD (Vogels et al., 2019).
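The communication-cost argument above reduces to a simple count of the entries on the uplink. The back-of-the-envelope sketch below uses a hypothetical layer shape of our choosing (not a figure from the paper) to show why rank-\(r\) factors are so much cheaper than a full gradient matrix:

```python
def uplink_compression(m, n, r):
    """Entries transmitted per iteration: full matrix (Z-SGD) vs. rank-r factors (LASER)."""
    full = m * n             # Z-SGD sends the whole m x n gradient
    factors = r * (m + n)    # LASER sends P (m x r) and Q (n x r)
    return full / factors

# A hypothetical 1600 x 6400 weight block at rank 4:
print(uplink_compression(1600, 6400, 4))  # => 320.0, two orders of magnitude fewer entries
```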
**(ii) Compression schemes for noisy channels.** The main idea here is to enable over-the-air aggregation of gradients via the superposition nature of wireless channels (Nazer and Gastpar, 2007), thus reducing the communication latency and bandwidth. The popular A-DSGD (Amiri and Gunduz, 2020) relies on Top-K sparsification and random sketching. However, being memory intensive, A-DSGD is restricted to Mnist with a \(1\)-layer NN and does not scale beyond that. Guo et al. (2020) propose an analog-gradient-aggregation scheme, but it is limited to shallow neural networks. Chang and Tandon (2020) design a digital quantizer for training over Gaussian MAC channels.

**(iii) Power laws.** In the absence of explicit power constraints, Wei and Shen (2022) show that an \(\mathcal{O}(1/t^{2})\) noise decay ensures the standard \(1/T\) convergence rate for noisy FED-AVG, whereas Saha et al. (2022) propose a \(t^{0.8}\) increase in SNR for the decentralized setup.

## 6 Conclusion

We propose a principled gradient compression scheme, LASER, for wireless distributed optimization over additive noise channels. LASER attains significant gains over its baselines on a variety of metrics such as accuracy/perplexity, complexity and communication cost. It is an interesting avenue of future research to extend LASER to channels with downlink noise and fast fading without CSI.
2307.08962
REX: Rapid Exploration and eXploitation for AI Agents
In this paper, we propose an enhanced approach for Rapid Exploration and eXploitation for AI Agents called REX. Existing AutoGPT-style techniques have inherent limitations, such as a heavy reliance on precise descriptions for decision-making, and the lack of a systematic approach to leverage try-and-fail procedures akin to traditional Reinforcement Learning (RL). REX introduces an additional layer of rewards and integrates concepts similar to Upper Confidence Bound (UCB) scores, leading to more robust and efficient AI agent performance. This approach has the advantage of enabling the utilization of offline behaviors from logs and allowing seamless integration with existing foundation models while it does not require any model fine-tuning. Through comparative analysis with existing methods such as Chain-of-Thoughts (CoT) and Reasoning viA Planning (RAP), REX-based methods demonstrate comparable performance and, in certain cases, even surpass the results achieved by these existing techniques. Notably, REX-based methods exhibit remarkable reductions in execution time, enhancing their practical applicability across a diverse set of scenarios.
Rithesh Murthy, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Le Xue, Weiran Yao, Yihao Feng, Zeyuan Chen, Akash Gokul, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
2023-07-18T04:26:33Z
http://arxiv.org/abs/2307.08962v2
# REX: Rapid Exploration and eXploitation for AI Agents

###### Abstract

In this paper, we propose an enhanced approach for **R**apid **E**xploration and e**X**ploitation for AI Agents called **REX**. Existing AutoGPT-style techniques have inherent limitations, such as a heavy reliance on precise descriptions for decision-making, and the lack of a systematic approach to leverage try-and-fail procedures akin to traditional Reinforcement Learning (RL). REX introduces an additional layer of rewards and integrates concepts similar to Upper Confidence Bound (UCB) scores, leading to more robust and efficient AI agent performance. This approach has the advantage of enabling the utilization of offline behaviours from logs and allowing seamless integration with existing foundation models while it does not require any model fine-tuning. Through comparative analysis with existing methods such as Chain-of-Thoughts (CoT) and Reasoning viA Planning (RAP), REX-based methods demonstrate comparable performance and, in certain cases, even surpass the results achieved by these existing techniques. Notably, REX-based methods exhibit remarkable reductions in execution time, enhancing their practical applicability across a diverse set of scenarios.

**Keywords:** Large Language Model, Upper Confidence Bound, Reinforcement Learning, Large Action Model, AI Agents

## 1 Introduction

AI agents driven by Large Language Models (LLMs) have become a very active research topic recently. A series of applications, such as AutoGPT (Gravitas, 2023), BabyAGI (Nakajima, 2023), and Langchain (Chase, 2023), have been proposed and well adopted in various application scenarios. In this paper, we employ the terms "AI Agents" and "Large Action Models (LAM)" interchangeably. AI agents based on LLMs usually craft the users' inputs into a standard prompt based on certain pre-designed prompt templates. Usually, users are required to provide a detailed description of the goals and plans of their task. A series of actions, together with their input, output arguments and descriptions, are pre-defined inside the prompt templates so that the language model understands the action space. In the context of LLM-based action agents, an essential element for effective engineering lies in the explicit specification of the prompt. This specification outlines the requirement for the language model's output format to be structured, enabling the environment to accurately extract the actions selected by the model. Based on the action of the model, the environment follows the same rationale as in most reinforcement learning: it runs the state transition and provides a possible reward or feedback to the agent. Recently, Hao et al. (2023a) propose to integrate Monte Carlo Tree Search (MCTS) in the state-action transition to help LLMs explore undiscovered paths. Ouyang and Li (2023) propose to integrate the rewards directly into the prompt of the LLM agent to help guide the trajectory search. Despite the considerable success of LLM-based AI agents in various application scenarios, they are evidently in an early stage of development, leaving much room for improvement. One specific area of concern is the integration of rewards, which requires further attention and enhancement. Classical RL-based agent training is able to improve the model performance using the rewards provided by the environment.
Popular RL algorithms, such as Policy Gradient (Sutton et al., 2000), Proximal Policy Optimization (PPO) (Schulman et al., 2017), Trust Region Policy Optimization (TRPO) (Schulman et al., 2015), and Advantage Actor-Critic methods (Mnih et al., 2016), are able to update the model weights of the agents based on the rewards or feedback from the environments. However, LLM-based AI agents suffer from the following limitations:

1. **Lack of systematic reward incorporation:** While they can generate actions based on input data, they often struggle to systematically integrate reward signals, hindering their ability to learn and optimize their performance.
2. **Exploration-exploitation trade-off:** LLMs face challenges in striking the right balance between exploration and exploitation. Optimal performance requires exploring new strategies to discover potentially better rewards while also exploiting existing knowledge to maximize short-term gains.
3. **Insufficient long-term planning:** LLMs may focus on immediate rewards and overlook the importance of long-term planning. This lack of foresight can hinder their ability to identify more profitable paths and actions that may yield larger rewards in the future.

To address these limitations, in this paper we propose a novel technique, **R**apid **E**xploration and e**X**ploitation for AI Agents (**REX**), that is designed to empower LLMs with the capability to seamlessly integrate rewards into their models and effectively manage the exploration-exploitation trade-off, all while significantly accelerating the overall learning process.

## 2 Action Agents Driven by LLMs

Action agents driven by LLMs (Large Language Models) refer to systems or applications that utilize LLMs like GPT-3.5, Bard, etc. to generate natural language responses and perform various actions or tasks based on user inputs or queries. These agents are designed to leverage the capabilities of LLMs to understand and generate human-like text, enabling them to interact with users in a conversational manner and take specific actions. LLMs can be used to generate sequences of actions in response to a given context or stimulus. By conditioning the model on the input and utilizing appropriate decoding strategies, LLMs can effectively generate action plans, instructions, or responses. For instance, in the context of video games, LLMs can act as virtual agents, making decisions based on the game state and generating appropriate actions to interact with the environment.

### 2.1 Understanding LLMs as Action Agents

LLMs are deep neural networks that have been trained on vast amounts of text data to predict the next word in a sequence (Vaswani et al., 2017; Brown et al., 2020; Devlin et al., 2019; OpenAI, 2023). They learn the statistical patterns and semantic relationships between words, allowing them to generate contextually relevant text. This ability to comprehend and generate text forms the basis for LLMs to become action agents.

#### 2.1.1 Natural Language Understanding (NLU)

One of the fundamental components of LLMs is their ability to understand natural language. Through pre-training on massive amounts of text, LLMs learn to extract meaning and context from input text. This NLU capability enables LLMs to comprehend user instructions and requests, which is crucial for acting as action agents. By interpreting the intent behind user commands, LLMs can generate appropriate responses and take suitable actions.
#### 2.1.2 Knowledge Integration

LLMs can access a wide range of knowledge due to their training on diverse textual data. This knowledge integration allows LLMs to understand complex concepts, facts, and relationships present in the text. As action agents, LLMs can leverage this knowledge to perform tasks that require understanding and reasoning. For example, an LLM acting as a virtual assistant can provide detailed answers to complex questions or perform research tasks by extracting relevant information from vast knowledge bases.

#### 2.1.3 Contextual Generation

LLMs excel at generating human-like text that is contextually relevant. This capability makes them invaluable as action agents for tasks involving content creation or communication. LLMs can generate emails, write code, compose articles, or even create conversational dialogue. By utilizing their understanding of context, LLMs can generate highly personalized and accurate content, tailored to specific requirements.

#### 2.1.4 Decision Making and Planning

LLMs can go beyond simple text generation and demonstrate decision-making abilities as action agents. By incorporating reinforcement learning techniques, LLMs can be trained to make optimal decisions based on specific goals or objectives. These models can consider various factors, evaluate potential actions, and choose the most appropriate course of action. This decision-making capability makes LLMs suitable for tasks that require planning, optimization, and intelligent action.

#### 2.1.5 Integration with External Systems

To act as action agents, LLMs need to interact with the real world. This can be achieved through integration with external systems, such as APIs, databases, or IoT devices. By connecting to these systems, LLMs can perform physical actions, control devices, retrieve data, or manipulate information. For example, an LLM could be integrated with a home automation system to control lights, adjust thermostats, or manage smart appliances based on user commands.

### 2.2 Limitations

While LLMs have shown immense potential as action agents, it is important to acknowledge their limitations. Here are some key limitations to consider:

#### 2.2.1 Limited understanding of the physical world

LLMs lack direct sensory perception or physical embodiment. They rely solely on textual input and lack the ability to perceive the physical world like humans do. This limitation makes it difficult for LLMs to perform tasks that require physical manipulation, visual recognition, or spatial understanding.

#### 2.2.2 Need for explicit instructions

LLMs heavily rely on explicit instructions and well-defined queries to perform tasks. They struggle with ambiguity and may require specific and detailed instructions to generate accurate responses or take appropriate actions. LLMs can misinterpret ambiguous commands, leading to undesired outcomes or incorrect actions.

#### 2.2.3 Lack of common sense reasoning

Although LLMs have access to vast amounts of textual data, they may struggle with common sense reasoning and understanding context beyond what is explicitly mentioned in the training data. This limitation can lead to responses or actions that are technically correct but do not align with human expectations or common sense reasoning.

#### 2.2.4 Biases and Ethical considerations

LLMs are trained on large-scale datasets that reflect the biases and patterns present in the data. As a result, they may inadvertently exhibit biased behavior or generate outputs that reflect the biases encoded in the training data.
This limitation poses ethical challenges when LLMs are used as action agents, especially in sensitive domains such as decision-making or content generation.

#### 2.2.5 Lack of causal reasoning and explanations

LLMs excel at generating text based on patterns and statistical correlations in the training data. However, they often struggle to provide causal explanations or reason through cause-and-effect relationships. This limitation can hinder their ability to justify their actions or provide detailed explanations for their decision-making processes.

#### 2.2.6 Limited generalization to unseen scenarios

While LLMs can generalize well to tasks and scenarios similar to those encountered during training, they may struggle when faced with unseen or novel situations. LLMs might generate inaccurate or nonsensical responses when exposed to inputs that differ significantly from their training data.

## 3 How Reinforcement Learning Comes into Play?

Reinforcement Learning (RL) plays a crucial role in enabling LLMs to act as action agents. RL is a machine learning paradigm that involves training an agent to make sequential decisions in an environment to maximize a reward signal. In the context of LLMs as action agents, RL provides a framework for training the models to make optimal decisions based on specific goals or objectives.

### 3.1 Why is RL necessary in the LAM setting?

#### 3.1.1 Goal-oriented decision-making

RL allows LLMs to learn how to make decisions that maximize a reward signal. The LLM agent interacts with an environment, receiving observations (input text) and taking actions (generating responses or performing actions). The agent receives feedback in the form of rewards or penalties based on the quality of its actions. By exploring and exploiting different actions, the LLM agent learns to make decisions that lead to higher rewards, aligning with the desired objectives.

#### 3.1.2 Exploration and exploitation

RL enables LLMs to balance exploration (trying out different actions to discover optimal strategies) and exploitation (leveraging known strategies to maximize rewards). During training, the LLM agent explores various actions and their consequences, gradually learning which actions lead to higher rewards. As the agent gains experience, it begins to exploit the learned knowledge by prioritizing actions that are known to yield higher rewards. This process allows LLMs to fine-tune their decision-making capabilities and improve their performance as action agents.

#### 3.1.3 Reward shaping

RL involves defining a reward signal that guides the learning process. In the case of LLMs as action agents, reward shaping is crucial to provide feedback on the quality of the generated responses or performed actions. Rewards can be designed to encourage desirable behavior, such as providing informative and accurate responses, while penalizing undesired behavior, such as generating incorrect or nonsensical text. Through RL, LLMs learn to generate actions that optimize for higher rewards, aligning with the desired objectives of the application.

#### 3.1.4 Training scenarios and simulations

RL allows for the creation of simulated environments or training scenarios that LLMs can interact with during the learning process. These environments can mimic real-world situations or specific tasks, providing a controlled setting for the LLM agent to learn and improve its action-taking abilities.
By training in simulated environments, LLMs can acquire skills and behaviors that generalize to real-world scenarios, enabling them to act as action agents in practical applications.

#### 3.1.5 Policy optimization

RL involves optimizing the policy of the LLM agent, which determines the actions it takes in response to different inputs. The policy can be represented by a neural network that takes input text and generates appropriate responses or action instructions (Sutton et al., 2000; Schulman et al., 2017, 2015; Mnih et al., 2016).

### 3.2 Monte Carlo Tree Search

In recent years, Monte Carlo Tree Search (MCTS) has gained significant attention as a powerful algorithm for decision-making in various domains, particularly in game-playing applications. MCTS has demonstrated remarkable performance in games such as Chess, Go, and Poker, surpassing human expertise in some cases. However, its applications are not limited to games alone, as MCTS can be adapted to tackle complex decision-making problems in diverse fields. At its core, MCTS is a simulation-based search algorithm that leverages random sampling to explore the search space and make informed decisions. It combines elements of tree search and Monte Carlo sampling to iteratively build a search tree and estimate the value of each possible action. The main steps of the MCTS algorithm can be summarized as follows:

1. **Selection:** Starting from the root of the search tree, the algorithm traverses down the tree by selecting actions that balance exploration and exploitation. It uses a selection policy, such as the Upper Confidence Bound (UCB), to determine the most promising nodes.
2. **Expansion:** Once a leaf node is reached, the algorithm expands the tree by generating one or more child nodes corresponding to possible actions from the current state.
3. **Simulation:** MCTS performs Monte Carlo simulations (also known as rollouts) from each newly expanded node. These simulations play out random sequences of actions until reaching a terminal state, yielding an outcome or reward.
4. **Backpropagation:** After a simulation is completed, the result is backpropagated up the tree. The statistics of the visited nodes, such as the number of visits and accumulated rewards, are updated accordingly.

These four steps are repeated iteratively for a specified number of iterations or until a time limit is reached; a minimal sketch of this loop is given below. As more iterations are performed, the search tree evolves and provides increasingly accurate estimates of the value of different actions. Figure 1 shows one possible way of implementing MCTS in the LAM setting. This particular implementation serves as the foundation for our proposed algorithm, UCL (UCB for updating Logits of LLM), which is discussed extensively in the UCL-CoT section below.

Figure 1: Vanilla MCTS in the LAM setting. UCB values are integrated in the prompt so that the model can make an informed decision. The UCB scores are mapped to HIGH or LOW expected reward.
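As referenced above, the four steps translate directly into a short generic loop. This is a minimal Python sketch with our own class and function names; `expand` and `rollout` are assumed to be supplied by the environment (e.g., an LLM proposing candidate actions and judging rollout outcomes):

```python
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = [], 0, 0.0

def ucb(parent, child, c=1.4):
    if child.visits == 0:
        return float("inf")   # always try unvisited actions first
    return child.value / child.visits + c * math.sqrt(math.log(parent.visits) / child.visits)

def mcts(root, n_iters, expand, rollout):
    for _ in range(n_iters):
        node = root
        while node.children:                                    # 1. selection
            node = max(node.children, key=lambda ch: ucb(node, ch))
        node.children = [Node(s, parent=node) for s in expand(node.state)]  # 2. expansion
        leaf = random.choice(node.children) if node.children else node
        reward = rollout(leaf.state)                            # 3. simulation
        while leaf is not None:                                 # 4. backpropagation
            leaf.visits += 1
            leaf.value += reward
            leaf = leaf.parent
    return max(root.children, key=lambda ch: ch.visits).state   # most-visited action
```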
## 4 Proposed Methodology

### 4.1 Rapid Exploration and Exploitation: REX

The major drawback of the vanilla MCTS algorithm is that it is computationally expensive and fails to arrive at a solution quickly. To address these issues we propose an accelerated version of Monte Carlo Tree Search, called REX, using an LLM. The LLM's ability to learn from the context and to take multiple actions at once makes it an ideal candidate for this methodology. Figure 2 shows the pictorial representation of this algorithm. The four major steps of MCTS can be compressed to two steps in REX as follows:

1. **Selection + Expansion + Simulation:** Rather than progressing step by step from the initial state, taking actions, and moving to the next state until reaching a terminal state, the new algorithm considers all possible actions simultaneously at each stage. It predicts the entire solution, including the sequence of intermediate steps and the final answer, in one go. This approach removes the need for state transitions and multiple predictions.
2. **Backpropagation:** Once the final answer is determined to be correct, the results are propagated back to all the intermediate steps in the solution.

Figure 2: REX: UCB-style scores are estimated based on the current and historical exploration of the paths instead of random simulations. For each pass, the scores are bucketized into HIGH and LOW and integrated into the prompt to assist future explorations.

#### 4.1.1 Algorithm 1: UCB driven Chain-of-Thoughts (UCB-CoT)

REX is the basis of this algorithm. Feedback/reward is taken into account in the context/prompt, which is the input to the LLM.

1. **Approach:** This approach combines the ideas of Multi-pass CoT and MCTS. In the traditional Multi-pass CoT, the language model is queried randomly and independently multiple times, and a successful outcome is achieved if at least one of the queries yields a correct solution. However, UCB-CoT takes this a step further by assigning rewards to the Agent's actions based on the correctness of its generated solutions at the end of each pass. **Note-** In our setup, we define the term question or problem as the core issue we aim to resolve. We use the term solution to encompass both the sequence-of-intermediate-steps and the final-answer. The final-answer corresponds to the final block configuration in the Blocksworld dataset or the end answer in the GSM8K dataset. On the other hand, the sequence-of-intermediate-steps refers to the steps that lead to the final-answer. We use action and step interchangeably; in either case it refers to a single step in the sequence-of-intermediate-steps. We use state to refer to the step-index in the sequence-of-intermediate-steps.
2. **Reward Assignment:** After each pass, the language model's solution is evaluated. If the final-answer is correct, a reward of \(+1\) is assigned to each step in the sequence-of-intermediate-steps of the solution, indicating a "High reward action." Conversely, if the final-answer is incorrect, a reward of \(0\) is assigned to each step in the sequence-of-intermediate-steps of the solution in that pass, representing a "Low reward action." But how do we identify the correctness of a solution?
   * **Reward Learning from Environment:** In the Blocksworld dataset, where the expected answer is already known, a reward of +1 is assigned if the final-answer matches the target block configuration; otherwise, the reward is 0.
   * **Reward Learning from LLM:** In the GSM8K dataset, where the expected answer is not known, the final answer's correctness is determined by the LLM, i.e. we provide the question, sequence-of-intermediate-steps, and final-answer from the current pass and query the LLM asking if the solution is correct. If yes, a reward of +1 is assigned; else the reward is 0.
3. **UCB Scoring:** In each pass, the UCB score is calculated for each action taken by the agent. The UCB score represents a balance between exploration and exploitation.
This scoring mechanism encourages the language model to explore different actions while favoring those with higher potential rewards. In Eq. (1) below, s represents the state, and a represents the action taken at s. N is the number of times the Agent has produced s in its solution, and N(s, a) is the number of times the Agent took an action a from a state s. C is a constant to balance the exploration vs. exploitation trade-off. \(\hat{Q}(s,a)\) is the cumulative reward for taking an action a at state s. \[UCB(s,a)=\hat{Q}(s,a)+C*\sqrt{\frac{\ln N}{N(s,a)}}\] (1)

4. **Reinforcement Learning and Policy Optimization:** By incorporating UCB into the prompts, this approach leverages reinforcement learning techniques to encourage the language model to generate more "High reward actions" and avoid "Low reward actions." The model learns from the rewards associated with each pass and adjusts its policy accordingly (via in-context learning). The goal is to optimize the model's decision-making process over time, leading to improved performance and more accurate solutions.
5. **Selection of Optimal Action Sequence:** After a specified number of passes, the sequence of actions that yields the highest cumulative reward at each stage is selected as the final solution. This selection process ensures that the language model's decision-making is guided by the reinforcement learning framework and prioritizes actions that maximize the cumulative reward, thus promoting the generation of accurate solutions.

#### 4.1.2 Algorithm 2: Simple-Reward driven Chain-of-Thoughts (\(\mathcal{R}\)-CoT)

\(\mathcal{R}\)-CoT closely resembles UCB-CoT, differing only in the method used to determine the HIGH/LOW expected rewards. While UCB-CoT relies on UCB scores, \(\mathcal{R}\)-CoT utilizes simple rewards of +1 and 0 for this purpose. The inherent nature of \(\mathcal{R}\)-CoT leans towards being more exploitative than exploratory.

```
0: Problem P, final-answer Z, sequence-of-intermediate-steps H, action space A, state space S, action space for state s A(s) where s \(\in\) S, number of passes N, reward function Q: S x A \(\rightarrow\) R, upper confidence bound function U: S x A \(\rightarrow\) R, expected reward function E: S x A \(\rightarrow\) {HIGH, LOW}, expected-answer X, reward \(\mathcal{R}\)
1: agent = Agent()
2: for i \(\leftarrow\) 0, ..., N-1 do
3:   U(s, a) = calculate_ucb_scores(s, a) \(\forall\) (s, a) \(\in\) Q
4:   for s in S do
5:     E(s, a) \(\leftarrow\) HIGH if U(s, a) == \(\max_{a\in A(s)}\) U(s, a) else LOW
6:   endfor
7:   H, Z = agent.solve(P, E)
8:   if X is available then
9:     \(\mathcal{R}\) \(\leftarrow\) +1 if Z == X else 0
10:  else
11:    valid_answer = agent.validate_action(P, H, Z)
12:    \(\mathcal{R}\) \(\leftarrow\) +1 if valid_answer else 0
13:  endif
14:  Q(s, a) += \(\mathcal{R}\) \(\forall\) (s, a) \(\in\) H
15: endfor
```
**Algorithm 1** UCB driven Chain-of-Thoughts (UCB-CoT)
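To make lines 3-5 of Algorithm 1 concrete, here is a small Python sketch of the UCB bookkeeping and the HIGH/LOW bucketing that is injected into the prompt. The helper names are our own, and \(\hat{Q}\) is kept cumulative, as defined for Eq. (1):

```python
import math
from collections import defaultdict

Q = defaultdict(float)    # cumulative reward Q-hat(s, a)
N_sa = defaultdict(int)   # count of action a taken from state s
N_s = defaultdict(int)    # count of state s appearing in solutions

def ucb_score(s, a, C=1.0):
    if N_sa[(s, a)] == 0:
        return float("inf")   # untried actions get top priority
    return Q[(s, a)] + C * math.sqrt(math.log(N_s[s]) / N_sa[(s, a)])

def expected_reward_labels(s, actions):
    """HIGH for the argmax-UCB action at state s, LOW for the rest (Algorithm 1, line 5)."""
    scores = {a: ucb_score(s, a) for a in actions}
    best = max(scores.values())
    return {a: ("HIGH" if scores[a] == best else "LOW") for a in actions}
```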
### Algorithm 3: Multi-pass CoT with adjusted loglikelihood based on UCB scores (UCL-CoT)

In this section we propose a novel approach called UCL (UCB applied to the Logits of the LLM). This approach is a variant of REX, featuring a distinct modification in its methodology. While REX employs in-context learning, this variation involves adjusting the logits of actions to influence the decision-making process of the LLM instead. In certain cases, language models may generate alternative actions instead of following the recommendations in the prompt. To address this, a novel approach has been introduced that adjusts the logits corresponding to the tokens associated with the actions, ensuring that the intended actions are consistently chosen.

1. **Approach:** This approach builds upon the foundation of the Multi-pass CoT model by introducing a novel scoring mechanism using the Upper Confidence Bound (UCB), called UCL. In order to modify the logits for a given query, we adhere to a one-time alteration constraint and therefore execute a query for each step in the solution. This approach differs from the previous method, where a single call was made to retrieve the entire solution, as we now make a separate call for each individual state.
2. **Reward Assignment:** Reward assignment for UCL-CoT is exactly the same as the reward assignment for UCB-CoT.
3. **UCL Scoring:** The UCL score is calculated based on Equation (2). This score is used to update the loglikelihoods of the tokens corresponding to all possible actions for the current state in the language model. By updating the loglikelihoods using the UCL scores, the language model is forced to execute the high-reward action sequence. The constants B & K control the extent to which the logits of the LLM are offset. \[UCL(s,a)=B*\ln\frac{UCB(s,a)}{K}\] (2)
4. **Selection of Optimal Action Sequence:** Similar to previous approaches, after a specified number of passes, the sequence of actions that yields the highest cumulative reward at each stage is selected as the final solution.

```
0: Problem P, next-action \(\hat{a}\), action space A, state space S, action space for state s A(s) where s \(\in\) S, number of passes N, reward function Q: S x A \(\rightarrow\) R, upper confidence bound function U: S x A \(\rightarrow\) R, upper confidence bound for LAM function L: S x A \(\rightarrow\) R, expected-answer X, state at depth d \(S_{d}\), maximum permissible depth D, constant B, constant K
1: agent = Agent()
2: for i \(\leftarrow\) 0, ..., N-1 do
3:   \(S_{d}\) \(\leftarrow\) P
4:   trajectory = [ ]
5:   for j \(\leftarrow\) 0, ..., D-1 do
6:     U(\(S_{d}\), a) = calculate_ucb_scores(\(S_{d}\), a) \(\forall\) a \(\in\) A(\(S_{d}\))
7:     L(\(S_{d}\), a) = \(B*\ln\frac{U(S_{d},a)}{K}\)
8:     for a in A(\(S_{d}\)) do
9:       T \(\leftarrow\) get_token_ids(a)
10:      LOGIT_BIAS(t) = L(\(S_{d}\), a) \(\forall t\in T\)
11:    endfor
12:    \(\hat{a}\) = agent.solve(P, LOGIT_BIAS)
13:    if X is available then
14:      \(\mathcal{R}\) \(\leftarrow\) +1 if \(\hat{a}\) == X else 0
15:    else
16:      valid_answer = agent.validate_action(P, \(S_{d}\), \(\hat{a}\))
17:      \(\mathcal{R}\) \(\leftarrow\) +1 if valid_answer else 0
18:    endif
19:    trajectory += (\(S_{d}\), \(\hat{a}\))
20:    Q(s, a) += \(\mathcal{R}\) \(\forall\) (s, a) \(\in\) trajectory
21:    Update: \(S_{d}\) \(\leftarrow\) \(S_{d}\) \(\bigoplus\) \(\hat{a}\)
22:  endfor
23: endfor
```
**Algorithm 3** CoT with adjusted loglikelihood based on UCB scores (UCL-CoT)
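In practice, the UCL offsets map naturally onto the logit-bias facility that several LLM APIs expose (e.g., the `logit_bias` argument of the OpenAI completion endpoints). The sketch below is our own illustrative construction of such a bias map from Eq. (2), not the exact implementation:

```python
import math

def ucl_logit_bias(ucb_scores, action_token_ids, B=5.0, K=1.0):
    """Build a token-id -> bias map via UCL(s, a) = B * ln(UCB(s, a) / K).

    ucb_scores: dict mapping each candidate action string to its UCB score.
    action_token_ids: dict mapping each action string to its tokenizer ids.
    """
    bias = {}
    for action, u in ucb_scores.items():
        offset = B * math.log(max(u, 1e-6) / K)        # guard: ln needs a positive UCB
        for t in action_token_ids[action]:
            bias[t] = max(-100.0, min(100.0, offset))  # APIs typically clip biases to [-100, 100]
    return bias
```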
## 5 Experiments & Discussion

### Baseline

1. **Single-pass CoT:** The Single-pass CoT (Wei et al., 2022) technique serves as our initial baseline for evaluating language models. It involves designing prompts that include a series of step-by-step examples, ultimately leading to the final solution. The language model is queried only once (a single pass) to solve the given problem. If the solution provided by the language model is accurate, it is deemed a successful call.

2. **Multi-pass CoT:** Multi-pass CoT (with at least one correct pass) builds upon Single-pass CoT by extending the number of queries to the language model. Instead of querying the LLM only once, we query it multiple times. A successful outcome is achieved if at least one of these queries yields a correct answer.

3. **RAP:** The Reasoning via Planning (RAP) framework (Hao et al., 2023a) leverages Language Models (LLMs) to strategically plan and execute coherent reasoning processes for a wide range of tasks. By repurposing the LLM and constructing a world model through prompting, the framework enables the LLM to anticipate future outcomes and make informed decisions.

### Blocksworld

The Blocksworld dataset (Valmeekam et al., 2023) represents a planning problem that involves the arrangement of blocks of varying colors into a predetermined configuration. Each instance of the dataset provides an initial block configuration and the desired final configuration. The term "block configuration" refers to the specific arrangement of the blocks, where each block can be positioned either on top of another block, on a table surface, or held in hand, but not all options are available simultaneously. The blocks are uniquely identified by their respective colors, such as red, blue, and so on.

Table 1 illustrates the performance of the methodologies proposed in this study. Each experiment is run for 10 iterations/passes, and ChatGPT (model: gpt-3.5-turbo) is used for all the experiments below. The Blocksworld dataset is divided into 3 subcategories (2, 4, and 6 steps) based on the number of steps required to transform the block configuration from initial to final. Let 'n' denote the number of instances in each of these subcategories. To transform the block configuration from initial to final, the model is expected to propose a sequence of steps/actions. In Blocksworld, there are only four major actions: Stack, Unstack, Pick, and Put. Table 1 presents different variants of the Chain of Thoughts (CoT) algorithm discussed in the previous section. The best score is in bold and the second best score is underlined.

\begin{table} \begin{tabular}{|l|l|l|l|l|} \hline Model & LLM & 2-step & 4-step & 6-step \\ & config. & (n=30) & (n=56) & (n=114) \\ \hline 3-shot CoT & T=0.0 & 40\% & 17.85\% & 8.77\% \\ \hline 3-shot CoT & T=1.0 & 73.33\% & 46.64\% & 16.67\% \\ \hline 3-shot \(\mathcal{R}\)-CoT & T=0.0 & 53.33\% & 37.5\% & 14.91\% \\ \hline 3-shot \(\mathcal{R}\)-CoT & T=1.0 & 75\% & **50.89\%** & 16.67\% \\ \hline 3-shot UCB-CoT & T=0.0 & 80\% & 39.28\% & 25.43\% \\ \hline 3-shot UCB-CoT & T=1.0 & 78.33\% & 49.10\% & **27.63\%** \\ \hline 3-shot UCL-CoT & T=0.0 & 60\% & 44.64\% & 20.17\% \\ \hline 3-shot UCL-CoT & T=1.0 & **85\%** & 47.32\% & 17.98\% \\ \hline \end{tabular} \end{table} Table 1: Blocksworld

In order to assess the efficacy of the proposed algorithms more accurately, it is advisable to compare their performance when the temperature (T) is set to 0.0. Setting a higher temperature value can obscure the impact of the proposed approaches, making it difficult to evaluate their effectiveness.

### GSM8k

The GSM8K dataset (Cobbe et al., 2021) comprises a collection of 8.5K grade school math word problems of exceptional quality. Each problem within this dataset typically requires a solution involving a sequence of elementary calculations, using the fundamental arithmetic operations (+, \(-\), \(\times\), \(\div\)). The number of steps required to solve each problem falls within the range of 2 to 8 steps.
While the problems exhibit a high level of diversity, the solutions rely solely on elementary concepts, making high test performance an achievable objective. The performance of the proposed methodologies is presented in Table 2. The temperature is set to 0.0 for all methods.

### Comparison with another MCTS technique

Table 3 provides a comprehensive comparison between our model and another Monte Carlo Tree Search (MCTS) based model, namely RAP (Hao et al., 2023b). The table presents a detailed analysis of various performance metrics and highlights the distinguishing features and advantages of our model in comparison to RAP. We introduce three important variables: n, which denotes the number of iterations or passes; d, representing the depth limit; and m, indicating the number of possible actions generated at each state. Table 3 also compares the number of queries issued to the LLM/Agent by RAP, vanilla CoT, and our proposed techniques UCB-CoT and UCL-CoT; for example, with n=10 passes, depth d=6, and m=4 actions per state, RAP issues n*m*d = 240 queries, whereas UCB-CoT issues only n = 10. The results clearly demonstrate the significant speed advantage of UCB-CoT over RAP. Our approach not only outperforms RAP in terms of computational efficiency but also offers enhanced flexibility. Notably, UCB-CoT can be seamlessly implemented using any Large Language Model (LLM), including popular APIs like OpenAI's, even in scenarios where access to the underlying logits is restricted. This versatility makes UCB-CoT a highly adaptable and practical choice for a wide range of applications.

\begin{table} \begin{tabular}{|l|l|} \hline Model & GSM8K-test \\ & (n=1319) \\ \hline 3-shot CoT 1-pass & 76.95\% \\ \hline 3-shot CoT 10-pass & 80.81\% \\ \hline 3-shot \(\mathcal{R}\)-CoT 10-pass & 81.34\% \\ \hline 3-shot UCB-CoT 10-pass & 82.03\% \\ \hline 3-shot UCL-CoT 10-pass & **90.44\%** \\ \hline \end{tabular} \end{table} Table 2: GSM8K

### Is UCB a better choice than simple reward (\(\mathcal{R}\))?

From Table 1 and Table 2 it is clear that UCB-based feedback outperforms simple reward-based feedback on the Planning (Blocksworld, 2- and 6-step setups) and Mathematical Reasoning (GSM8K) datasets. We believe the ability of UCB to strike a balance between exploration and exploitation is the reason for this. An example instance from the Blocksworld dataset (2-step) is presented in Figure 3. Figure 4 shows the solution provided by \(\mathcal{R}\)-CoT and Figure 5 shows the solution provided by UCB-CoT. This example shows how UCB-CoT was able to solve a problem that \(\mathcal{R}\)-CoT could not. Clearly, encouraging the model to pick a HIGH reward action based on the UCB score has helped the model solve the given problem. The \(\mathcal{R}\)-CoT approach involves providing the model with specific information and instructions to solve a problem during each pass. The prompt includes details about the problem, instructions, the action space, and three examples of solved problems (3-shot examples). Additionally, feedback from previous passes is given to the model in each pass. In the Simple-Reward (\(\mathcal{R}\)) setting, if the action sequence leads to the correct answer, a 'HIGH' reward is assigned to each action or step in the solution.
Conversely, if the action sequence does not lead to the correct answer, a 'LOW' reward is assigned. This reward information is included in the prompt for the subsequent pass. It is important to note that unless the action sequence results in a correct answer, none of the actions or steps will receive a 'HIGH' reward. Consequently, the model is encouraged to propose new actions in order to improve its performance.

Figure 3: Sample problem from Blocksworld

\begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Methodology & Underlying LLM & 2-step Blocksworld & 4-step Blocksworld & 6-step Blocksworld & GSM8K-test & queries \\ \hline 4-shot RAP & LLaMA-33B & 100\% & 86\% & 26\% & 51.6\% & n*m*d \\ \hline 3-shot CoT & gpt-3.5-turbo & 73.33\% & 46.64\% & 16.67\% & 80.81\% & n \\ \hline 3-shot UCB-CoT & gpt-3.5-turbo & 78.33\% & 49.10\% & 27.63\% & 82.03\% & n \\ \hline 3-shot UCL-CoT & gpt-3.5-turbo & 60\% & 44.64\% & 20.17\% & 90.44\% & n*d \\ \hline \end{tabular} \end{table} Table 3: RAP vs. CoT vs. UCB-CoT vs. UCL-CoT

Figure 4: \(\mathcal{R}\)-CoT based Solution

Figure 5: UCB-CoT based Solution

The UCB-CoT solution is depicted in Figure 5. Similar to the \(\mathcal{R}\)-CoT approach, the UCB-CoT prompt incorporates comprehensive information about the problem, including problem details, instructions, the action space, and three examples of solved problems (referred to as 3-shot examples). Moreover, feedback from previous passes is incorporated into the model at each pass. In line with the methodology described in Section 4.1.1, we compute the Upper Confidence Bound (UCB) score for each action within the solution during each pass. Subsequently, we assign a 'HIGH' reward to the action associated with the highest UCB score, while the remaining actions are designated as 'LOW' reward. This UCB scoring mechanism ensures an effective selection process, striking a good balance between exploration and exploitation, for identifying the most promising action to execute, thereby optimizing the model's performance within the UCB-CoT framework.

The analysis presented in Table 4, showing the average number of unique actions per step in the solution, reveals that the UCB-CoT approach promotes a greater degree of action exploration at each step compared to the \(\mathcal{R}\)-CoT method. This emphasis on exploration within the action space is believed to facilitate the model in discovering novel trajectories for problem-solving, ultimately leading to an increased success rate. By encouraging the model to explore alternative actions, UCB-CoT expands the scope of potential solutions and enhances the model's ability to overcome challenges effectively.

## 6 Conclusion

In conclusion, this paper presented REX and its variations, aimed at enhancing the performance of Large Action Models (LAMs). These techniques effectively strike a balance between exploration and exploitation, which is crucial for successful problem-solving. Through extensive evaluations on Planning and Mathematical Reasoning datasets, we have demonstrated the superiority of UCB-CoT and UCL-CoT in terms of accuracy and efficacy. Notably, UCB-CoT outperforms the previously discussed RAP technique in terms of efficiency, as indicated by the reduced number of queries. The strengths of UCB-CoT lie in its simplicity, efficiency, flexibility, and speed, making it a compelling choice for real-world applications.
By incorporating UCB-based formulas, our techniques provide a robust framework for optimizing LAMs, opening doors for advancements in various domains that require large-scale action modeling. \begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Model & Blocksworld & Blocksworld & Blocksworld \\ & 2-step & 4-step & 6-step \\ \hline \(\mathcal{R}\)-CoT & 1.63 & 2.32 & 2.34 \\ \hline UCB-CoT & 3.61 & 3.89 & 4.24 \\ \hline \end{tabular} \end{table} Table 4: Avg. No. of unique actions per step
2308.16301
On some problems of primes with the floor function
Let $\left[x\right]$ be the largest integer not exceeding $x$. For $0<\theta \leq 1$, let $\pi_{\theta}(x)$ denote the number of integers $n$ with $1 \leq n \leq x^{\theta}$ such that $\left[\frac{x}{n}\right]$ is prime, and let $S_{\mathbb{P}}(x)$ denote the number of primes in the sequence $\left\{\left[\frac{x}{n}\right]\right\}_{n \geqslant 1}$. In this paper, we obtain the asymptotic formula $$ \pi_{\theta}(x)=\frac{x^{\theta}}{(1-\theta) \log x}+O\left(x^{\theta}(\log x)^{-2}\right) $$ provided that $\frac{435}{923}<\theta<1$, and prove that $$ S_{\mathbb{P}}(x)=x\sum_{p} \frac{1}{p(p+1)}+O_{\varepsilon}\left(x^{435/923+\varepsilon}\right) $$ as $x \rightarrow \infty$. This improves previous results due to Ma, Wu and the author.
Runbo Li
2023-08-18T13:25:41Z
http://arxiv.org/abs/2308.16301v1
# On some problems of primes with the floor function

###### Abstract.

Let \([x]\) be the largest integer not exceeding \(x\). For \(0<\theta\leq 1\), let \(\pi_{\theta}(x)\) denote the number of integers \(n\) with \(1\leq n\leq x^{\theta}\) such that \(\left[\frac{x}{n}\right]\) is prime, and let \(S_{\mathbb{P}}(x)\) denote the number of primes in the sequence \(\left\{\left[\frac{x}{n}\right]\right\}_{n\geqslant 1}\). In this paper, we obtain the asymptotic formula \[\pi_{\theta}(x)=\frac{x^{\theta}}{(1-\theta)\log x}+O\left(x^{\theta}(\log x)^{-2}\right)\] provided that \(\frac{435}{923}<\theta<1\), and prove that \[S_{\mathbb{P}}(x)=x\sum_{p}\frac{1}{p(p+1)}+O_{\varepsilon}\left(x^{435/923+\varepsilon}\right)\] as \(x\to\infty\). This improves previous results due to Ma, Wu and the author.

Key words and phrases: Exponential sum, Distribution of the primes, the floor function

2020 Mathematics Subject Classification: 11N37, 11L07

## 1. Introduction

Investigations of summations of arithmetic functions evaluated at \(\left[\frac{x}{n}\right]\) have become quite popular in recent years. It seems that this new wave of enthusiasm started with the paper of Bordellès, Dai, Heyman, Pan and Shparlinski [1], where the following asymptotic formula \[\sum_{n\leqslant x}f\left(\left[\frac{x}{n}\right]\right)=x\sum_{n=1}^{\infty}\frac{f(n)}{n(n+1)}+O_{f}\left(x^{1/2+\varepsilon}\right) \tag{1}\] is given, provided that \(f\) satisfies a broad condition on the growth of its magnitude. Here, as usual, \(\varepsilon\) denotes an arbitrarily small positive number. An example of particular interest is the one for \(f=\Lambda\). In [7], Liu, Wu and Yang proved the following elaborate asymptotic formula \[\sum_{n\leqslant x}\Lambda\left(\left[\frac{x}{n}\right]\right)=x\sum_{n=1}^{\infty}\frac{\Lambda(n)}{n(n+1)}+O\left(x^{9/19+\varepsilon}\right). \tag{2}\] In 2023, Li and Ma improved the exponent \(\frac{9}{19}\) to \(\frac{17}{36}\) in [5], and Zhang further improved the exponent to \(\frac{435}{923}\) in [10].

In another direction, the work of Bordellès, Luca, Moree and Shparlinski [2] about the Bernoulli polynomials has led to the consideration of the truncated distribution of the primes represented by \([x/n]\). Let \(0<\theta\leq 1\) be a real number and let \(\pi_{\theta}(x)\) be the number of integers \(n\) with \(1\leqslant n\leqslant x^{\theta}\) such that \(\left\lfloor\frac{x}{n}\right\rfloor\) is prime. In [9], Ma, Chen and Wu proved \[\pi_{\theta}(x)=\begin{cases}\sum_{k=1}^{L}\frac{(-1)^{k-1}(k-1)!}{(1-\theta)^{k}}\frac{x^{\theta}}{(\log x)^{k}}+O\left(\frac{x^{\theta}}{(\log x)^{L+1}}\right),&\frac{23}{47}<\theta<1,\\ x\sum_{p}\frac{1}{p(p+1)}+O\left(x^{26/53}(\log x)^{\frac{119}{53}}\right),&\theta=1,\end{cases} \tag{3}\] where \(L\geqslant 1\) is any given integer. It is remarkable that the leading term in the asymptotic formula of \(\pi_{\theta}(x)\) is not continuous at the point \(\theta=1\) when \(x\) is a given large number. Ma, Chen and Wu further proposed the following conjecture:

**Conjecture**.: _For any \(0<\theta<1\) and any given integer \(L\geqslant 1\), we have_ \[\pi_{\theta}(x)=\sum_{k=1}^{L}\frac{(-1)^{k-1}(k-1)!}{(1-\theta)^{k}}\frac{x^{\theta}}{(\log x)^{k}}+O\left(\frac{x^{\theta}}{(\log x)^{L+1}}\right). \tag{4}\]

In 2022, Zhou and Feng [11] proved that the conjecture is true for any \(\frac{9}{19}<\theta<1\) and \(L\geqslant 1\). In 2023, Li [6] showed that even \(\frac{17}{36}<\theta<1\) is admissible.
In this paper, we shall give an improvement of the results obtained by Zhou, Feng and Li. We obtain this result by combining the methods in [10] and [11]. We now state our main result as the following theorem.

**Theorem 1.1**.: _Let \(\theta\) be a number with \(\frac{435}{923}<\theta<1\) and let \(L\geqslant 1\) be a given integer. Then_ \[\pi_{\theta}(x)=\sum_{k=1}^{L}\frac{(-1)^{k-1}(k-1)!}{(1-\theta)^{k}}\frac{x^{\theta}}{(\log x)^{k}}+O\left(\frac{x^{\theta}}{(\log x)^{L+1}}\right), \tag{5}\] _where the implied constant depends on \(\theta\), \(L\) and the real number \(\varepsilon>0\) which is contained in Lemma 2.1._

By the same method as in [11], we also prove the following theorem.

**Theorem 1.2**.: _Let \(\theta\) be a number with \(\frac{435}{923}<\theta<1\). For any integer \(A\geqslant 1\), we have_ \[\Lambda_{\theta}(x):=\sum_{n\leq x^{\theta}}\Lambda\left(\left[\frac{x}{n}\right]\right)=x^{\theta}+O\left(x^{\theta}(\log x)^{-A}\right), \tag{6}\] _where the implied constant depends on \(\theta\), \(A\) and the real number \(\varepsilon>0\) which is contained in Lemma 2.1._

Let \(\mathbb{P}\) be the set of all primes and let \(\mathbb{P}_{\mathrm{w}}\) be the set of all prime powers. Denote by \(\mathbb{1}_{\mathbb{P}}\) and \(\mathbb{1}_{\mathbb{P}_{\mathrm{w}}}\) their characteristic functions, respectively. In 2021, Heyman [4] proposed to study the number of primes or prime powers in the sequence \(\left\{\left[\frac{x}{n}\right]\right\}_{n\geqslant 1}\): \[S_{\mathbb{P}}(x):=\sum_{n\leqslant x}\mathbb{1}_{\mathbb{P}}\left(\left[\frac{x}{n}\right]\right),\quad S_{\mathbb{P}_{\mathrm{w}}}(x):=\sum_{n\leqslant x}\mathbb{1}_{\mathbb{P}_{\mathrm{w}}}\left(\left[\frac{x}{n}\right]\right).\] Theorems 5 and 7 of [4] can be stated as follows: \[S_{\mathbb{P}}(x)=C_{\mathbb{1}_{\mathbb{P}}}x+O\left(x^{1/2}\right), \tag{7}\] \[S_{\mathbb{P}_{\mathrm{w}}}(x)=C_{\mathbb{1}_{\mathbb{P}_{\mathrm{w}}}}x+O\left(x^{1/2}\right), \tag{8}\] where \(C_{\mathbb{1}_{\mathbb{P}}}:=\sum_{p}\frac{1}{p(p+1)}\) and \(C_{\mathbb{1}_{\mathbb{P}_{\mathrm{w}}}}:=\sum_{p,\nu\geqslant 1}\frac{1}{p^{\nu}(p^{\nu}+1)}\). In 2021, Ma and Wu [8] obtained better results by breaking the \(\frac{1}{2}\)-barrier in the error term. They proved that for any \(\varepsilon>0\), \[S_{\mathbb{P}}(x)=C_{\mathbb{1}_{\mathbb{P}}}x+O_{\varepsilon}\left(x^{9/19+\varepsilon}\right), \tag{9}\] \[S_{\mathbb{P}_{\mathrm{w}}}(x)=C_{\mathbb{1}_{\mathbb{P}_{\mathrm{w}}}}x+O_{\varepsilon}\left(x^{9/19+\varepsilon}\right), \tag{10}\] as \(x\to\infty\), where the implied constants depend on \(\varepsilon\). In this paper, we shall give an improvement of the result obtained by Ma and Wu. We now state our result as the following theorem.

**Theorem 1.3**.: _For any \(\varepsilon>0\),_ \[S_{\mathbb{P}}(x)=C_{\mathbb{1}_{\mathbb{P}}}x+O_{\varepsilon}\left(x^{435/923+\varepsilon}\right), \tag{11}\] \[S_{\mathbb{P}_{\mathrm{w}}}(x)=C_{\mathbb{1}_{\mathbb{P}_{\mathrm{w}}}}x+O_{\varepsilon}\left(x^{435/923+\varepsilon}\right), \tag{12}\] _as \(x\to\infty\), where the implied constants depend on \(\varepsilon\)._

## 2. Some Lemmas

From now on, let \(x\) be a large positive number. Let \(\varepsilon>0\) be an arbitrarily small positive number, not necessarily the same at each occurrence. Let \(\mathbb{N}\) and \(\mathbb{P}\) be the set of positive integers and the set of prime numbers, respectively. The notation \(p\) will always denote a prime number.
Let \(\pi(x)\) be the number of primes up to \(x\) and let \[\Lambda(n)=\left\{\begin{array}{ll}\log p&\text{ if }n=p^{\alpha},\\ 0&\text{ otherwise}\end{array}\right.\] be the von Mangoldt function. For any real number \(t\), let \[\rho(t)=t-[t]-1/2.\] For \(0<D\leqslant x\), \(D<t\leqslant 2D\) and \(\delta\notin-\mathbb{N}\), let \[\Sigma_{\delta}(x,D,t)=\sum_{D<d\leqslant t}\Lambda(d)\rho\left(\frac{x}{d+\delta}\right).\] We need some auxiliary results before the proof of Theorems 1.1 and 1.3.

**Lemma 2.1**.: _Let \(\delta\notin-\mathbb{N}\) be a fixed constant. For \(D<t\leqslant 2D\) and \(D<x^{2/3}\), we have_ \[\Sigma_{\delta}(x,D,t)\ll_{\varepsilon}\left\{\begin{array}{ll}x^{1/2+\varepsilon}D^{-1/6}&\text{ if }\ D<x^{3/7},\\ x^{1/3+\varepsilon}D^{2/9}&\text{ if }\ x^{3/7}\leqslant D<x^{6/13},\\ x^{1/6+\varepsilon}D^{7/12}&\text{ if }\ x^{6/13}\leqslant D<x^{482/923},\\ x^{435/923+\varepsilon}&\text{ if }\ x^{482/923}\leqslant D<x^{488/923}.\end{array}\right.\]

Proof.: From [7, Proposition 4.1] with \((\kappa,\lambda)=(\kappa^{\prime},\lambda^{\prime})=(1/2,1/2)\) and [10, (3.6)] with \((\kappa,\lambda)=(\kappa^{\prime},\lambda^{\prime})=(1/2,1/2)\), \(\varrho=6/923\) and \(\varpi=20.5/923\), we have \[\Sigma_{\delta}(x,D,2D)\ll_{\varepsilon}x^{\varepsilon}\left(x^{1/6}D^{7/12}+D^{5/6}+x^{1/3}D^{2/9}+x^{1/2}D^{-1/6}\right)\] for \(D<x^{3/4}\), and \[\Sigma_{\delta}(x,D,2D)\ll_{\varepsilon}x^{435/923+\varepsilon}\] for \(x^{482/923}\leqslant D<x^{488/923}\). In fact, by carefully checking the proofs of [7, Proposition 4.1] and [10, (3.6)], we still have \[\Sigma_{\delta}(x,D,t)\ll_{\varepsilon}x^{\varepsilon}\left(x^{1/6}D^{7/12}+D^{5/6}+x^{1/3}D^{2/9}+x^{1/2}D^{-1/6}\right) \tag{13}\] for \(D<x^{3/4}\), and \[\Sigma_{\delta}(x,D,t)\ll_{\varepsilon}x^{435/923+\varepsilon} \tag{14}\] for \(x^{482/923}\leqslant D<x^{488/923}\). The lemma then follows by direct case analysis.

For \(D\leq x\) and \(\delta\notin-\mathbb{N}\), let \[\mathscr{S}_{\delta}(x,D)=\sum_{D<p\leqslant 2D}\rho\left(\frac{x}{p+\delta}\right).\]

**Lemma 2.2**.: _Let \(\delta\notin-\mathbb{N}\) be a fixed constant. For \(D<x^{2/3}\), we have_ \[\mathscr{S}_{\delta}(x,D)\ll_{\varepsilon}\left\{\begin{array}{ll}x^{1/2+\varepsilon}D^{-1/6}+D^{1/2}&\text{ if }\ D<x^{3/7},\\ x^{1/3+\varepsilon}D^{2/9}+D^{1/2}&\text{ if }\ x^{3/7}\leqslant D<x^{6/13},\\ x^{1/6+\varepsilon}D^{7/12}+D^{1/2}&\text{ if }\ x^{6/13}\leqslant D<x^{482/923},\\ x^{435/923+\varepsilon}+D^{1/2}&\text{ if }\ x^{482/923}\leqslant D<x^{488/923}.\end{array}\right.\]

Proof.: For \(0<D\leq x\), \(D<t\leq 2D\) and \(\delta\notin-\mathbb{N}\), let \[\mathscr{G}_{\delta}(x,D,t)=\sum_{D<p\leqslant t}\vartheta(p)\rho\left(\frac{x}{p+\delta}\right),\] where \[\vartheta(n)=\left\{\begin{array}{ll}\log p&\text{ if }n=p\text{ is a prime},\\ 0&\text{ otherwise}.\end{array}\right.\] Note that \[\sum_{D<d\leqslant t}\Lambda(d)\rho\left(\frac{x}{d+\delta}\right)=\sum_{D<d\leqslant t}\vartheta(d)\rho\left(\frac{x}{d+\delta}\right)+O\left(t^{1/2}\right), \tag{15}\] so we have \[\mathscr{G}_{\delta}(x,D,t)=\Sigma_{\delta}(x,D,t)+O\left(D^{1/2}\right) \tag{16}\] for any \(D<t\leq 2D\). Integrating by parts, we have \[\mathscr{S}_{\delta}(x,D)=\frac{\mathscr{G}_{\delta}(x,D,2D)}{\log 2D}+\int_{D}^{2D}\frac{\mathscr{G}_{\delta}(x,D,t)}{t(\log t)^{2}}dt. \tag{17}\] Then our lemma follows from Lemma 2.1 by routine computations.

**Lemma 2.3** ([11, Lemma 4]).: _Let \(\theta\) be a positive number with \(0<\theta<1\) and let \(L\geqslant 1\) be a given integer._
_Then_ \[x\sum_{p\geqslant x^{1-\theta}}\frac{1}{p(p+1)}=\sum_{k=1}^{L}\frac{(-1)^{k-1}(k-1)!}{(1-\theta)^{k}}\frac{x^{\theta}}{(\log x)^{k}}+O\left(\frac{x^{\theta}}{(\log x)^{L+1}}\right),\] _where the implied constant depends only on \(L\) and \(\theta\)._

**Lemma 2.4** ([3, Proposition 3.1]).: _Let \(f\) be a positive-valued function on \(\mathbb{N}\) and \(D\) a parameter with \(D\leq x\). Then,_ \[\sum_{D<n\leq x}f\left(\left[\frac{x}{n}\right]\right)=\sum_{d\leq x/D}f(d)\sum_{x/(d+1)<n\leq x/d}1+O\left(f\left(\frac{x}{D}\right)\left(1+\frac{D^{2}}{x}\right)\right).\]

**Lemma 2.5** ([11, Lemma 6]).: _Let \(\theta\) be a positive number with \(0<\theta<1\) and let \(A\) be any given positive number. Then we have_ \[x\sum_{d\geq x^{1-\theta}}\frac{\Lambda(d)}{d(d+1)}=x^{\theta}+O\left(x^{\theta}(\log x)^{-A}\right),\] _where the implied constant depends only on \(A\) and \(\theta\)._

## 3. Proof of Theorem 1.1

**Proof of Theorem 1.1.** For \(\theta>\frac{435}{923}\), we split the sum \(\pi_{\theta}(x)\) into the following two shorter sums \[\pi_{\theta}(x)=S_{1}+S_{2}, \tag{18}\] where \[S_{1}=\sum_{n\leq x^{435/923},[x/n]\in\mathbb{P}}1\quad\text{and}\quad S_{2}=\sum_{x^{435/923}<n\leq x^{\theta},[x/n]\in\mathbb{P}}1.\] A trivial estimate leads to the bound \[S_{1}\leqslant\sum_{n\leqslant x^{435/923}}1\leqslant x^{435/923}. \tag{19}\] By Lemma 2.4, we can rewrite \(S_{2}\) as \[S_{2} =\sum_{x^{1-\theta}\leqslant p\leqslant x^{488/923}}\sum_{x/(p+1)<n\leqslant x/p}1+O\left(x^{2\theta-1}\right)\] \[=\sum_{x^{1-\theta}\leqslant p\leqslant x^{488/923}}\left(\frac{x}{p}-\rho\left(\frac{x}{p}\right)-\frac{x}{p+1}+\rho\left(\frac{x}{p+1}\right)\right)+O\left(x^{2\theta-1}\right)\] \[=x\sum_{p\geqslant x^{1-\theta}}\frac{1}{p(p+1)}-x\sum_{p>x^{488/923}}\frac{1}{p(p+1)}+R_{1}(x)-R_{0}(x)+O\left(x^{2\theta-1}\right), \tag{20}\] where \[R_{\delta}(x)=\sum_{x^{1-\theta}\leqslant p\leqslant x^{488/923}}\rho\left(\frac{x}{p+\delta}\right)\quad(\delta=0\text{ or }1).\] It is easy to see that \(x^{2\theta-1}\ll x^{\theta-\varepsilon}\) for any \(\theta<1\), and it is clear that \[x\sum_{p>x^{488/923}}\frac{1}{p(p+1)}\leqslant x\sum_{n\geqslant x^{488/923}}\frac{1}{n(n+1)}\ll x^{435/923}. \tag{21}\] From Lemma 2.3, we have \[x\sum_{p\geqslant x^{1-\theta}}\frac{1}{p(p+1)}=\sum_{k=1}^{L}\frac{(-1)^{k-1}(k-1)!}{(1-\theta)^{k}}\frac{x^{\theta}}{(\log x)^{k}}+O\left(\frac{x^{\theta}}{(\log x)^{L+1}}\right). \tag{22}\] To complete the proof of our theorem, it remains to show that \[R_{\delta}(x)\ll x^{\theta}(\log x)^{-(L+1)} \tag{23}\] for \(\delta=0\) and \(1\). For any positive integer \(i\), let \(D_{i}=x^{488/923}2^{-i}\). Since \(\theta>435/923\), we have \(D_{i}\leqslant x^{488/923}<x^{2/3}\) for all \(1\leqslant i\leqslant\left[\frac{\theta-435/923}{\log 2}\log x\right]+1\). By Lemma 2.2, \[|R_{\delta}(x)|\leqslant\sum_{1\leqslant i\leqslant\left[\frac{\theta-435/923}{\log 2}\log x\right]+1}\mathscr{S}_{\delta}\left(x,D_{i}\right)\] \[\ll_{\varepsilon}\sum_{1\leqslant i\leqslant\left[\frac{\theta-435/923}{\log 2}\log x\right]+1}\left(x^{1/2+\varepsilon}D_{i}^{-1/6}+x^{1/3+\varepsilon}D_{i}^{2/9}+x^{1/6+\varepsilon}D_{i}^{7/12}+x^{435/923+\varepsilon}+D_{i}^{1/2}\right)\] \[\ll_{\varepsilon}x^{(\theta+2)/6}+x^{435/923+\varepsilon}\ll_{\varepsilon,\theta}x^{435/923+\varepsilon}\ll_{\varepsilon,\theta}x^{\theta}(\log x)^{-(L+1)}, \tag{24}\] valid for \(\frac{435}{923}<\theta<\frac{764}{923}\). Combined with the range \(\frac{17}{36}<\theta<1\) of Li [6], we get the theorem.
The proof of Theorem 1.2 is similar to that of Theorem 1.1; one only needs to replace Lemma 2.3 by Lemma 2.5.

## 4. Proof of Theorem 1.3

**Proof of Theorem 1.3.** Let \(f=\mathbb{1}_{\mathbb{P}}\) or \(\mathbb{1}_{\mathbb{P}_{\mathrm{w}}}\). First we write \[S_{f}(x)=\sum_{n\leqslant x}f\left(\left[\frac{x}{n}\right]\right)=S_{f1}(x)+S_{f2}(x) \tag{25}\] with \[S_{f1}(x):=\sum_{n\leqslant x^{435/923}}f\left(\left[\frac{x}{n}\right]\right),\quad S_{f2}(x):=\sum_{x^{435/923}<n\leqslant x}f\left(\left[\frac{x}{n}\right]\right).\] A trivial estimate leads to the bound \[S_{f1}(x)\ll x^{\frac{435}{923}}. \tag{26}\] By Lemma 2.4, we can rewrite \(S_{f2}\) as \[S_{f2}(x) =\sum_{d\leqslant x^{488/923}}f(d)\sum_{x/(d+1)<n\leqslant x/d}1\] \[=\sum_{d\leqslant x^{488/923}}f(d)\left(\frac{x}{d}-\rho\left(\frac{x}{d}\right)-\frac{x}{d+1}+\rho\left(\frac{x}{d+1}\right)\right)\] \[=x\sum_{d\geqslant 1}\frac{f(d)}{d(d+1)}+R_{1}^{f}(x)-R_{0}^{f}(x)+O(x^{435/923}), \tag{27}\] where we have used the bounds \[x\sum_{d>x^{488/923}}\frac{f(d)}{d(d+1)}\ll x^{435/923},\quad\sum_{d\leqslant x^{435/923}}f(d)\left(\rho\left(\frac{x}{d+1}\right)-\rho\left(\frac{x}{d}\right)\right)\ll x^{435/923},\] and where \[R_{\delta}^{f}(x)=\sum_{x^{435/923}<d\leqslant x^{488/923}}f(d)\rho\left(\frac{x}{d+\delta}\right).\] Combining (25), (26) and (27), it follows that \[S_{f}(x)=x\sum_{d\geqslant 1}\frac{f(d)}{d(d+1)}+O_{\varepsilon}\left(\left|R_{1}^{f}(x)\right|+\left|R_{0}^{f}(x)\right|+x^{435/923}\right).\] On the other hand, we have \[R_{\delta}^{\mathbb{1}_{\mathbb{P}_{\mathrm{w}}}}(x)=\sum_{x^{435/923}<p^{\nu}\leqslant x^{488/923}}\rho\left(\frac{x}{p^{\nu}+\delta}\right)=R_{\delta}^{\mathbb{1}_{\mathbb{P}}}(x)+O\left(x^{244/923}\right).\] Thus, in order to prove Theorem 1.3, it remains to show that \[R_{\delta}^{\mathbb{1}_{\mathbb{P}}}(x)\ll_{\varepsilon}x^{435/923+\varepsilon}\quad(x\geqslant 1) \tag{28}\] for \(\delta=0\) and \(1\). By using Lemma 2.2, \[R_{\delta}^{\mathbb{1}_{\mathbb{P}}}(x) \ll_{\varepsilon}x^{\varepsilon}\max_{x^{435/923}<D\leqslant x^{488/923}}\mathscr{S}_{\delta}(x,D)\] \[\ll_{\varepsilon}x^{\varepsilon}\max_{x^{435/923}<D\leqslant x^{482/923}}\mathscr{S}_{\delta}(x,D)+x^{\varepsilon}\max_{x^{482/923}<D\leqslant x^{488/923}}\mathscr{S}_{\delta}(x,D)\] \[\ll_{\varepsilon}\max_{x^{435/923}<D\leqslant x^{482/923}}\left(x^{1/6+\varepsilon}D^{7/12}+D^{1/2}\right)+\max_{x^{482/923}<D\leqslant x^{488/923}}\left(x^{435/923+\varepsilon}+D^{1/2}\right)\] \[\ll_{\varepsilon}x^{435/923+\varepsilon}.\] This completes the proof of Theorem 1.3.
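Numerically, the leading constant of Theorem 1.3 is easy to approximate, since the telescoping identity \(\sum_{n>N}1/(n(n+1))=1/(N+1)\) bounds the tail of the prime sum. The following short Python sketch (an illustration only; the cutoff \(N=10^{6}\) is an arbitrary choice) estimates \(C_{\mathbb{1}_{\mathbb{P}}}=\sum_{p}1/(p(p+1))\) with a sieve of Eratosthenes, with truncation error at most \(1/(N+1)\).

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: return all primes p <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [p for p in range(2, n + 1) if sieve[p]]

N = 10 ** 6
C = sum(1.0 / (p * (p + 1)) for p in primes_up_to(N))
print(f"C ~ {C:.6f} (truncation error < {1 / (N + 1):.1e})")
# Prints a value close to 0.3302, the constant C in S_P(x) = C*x + O(x^{435/923+eps}).
```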
2305.01859
Regularities and multiplicities of Veronese type algebras
In this paper, we study the algebra of Veronese type. We show that the presentation ideal of this algebra has an initial ideal whose Alexander dual has linear quotients. As an application, we explicitly obtain the Castelnuovo-Mumford regularity of the Veronese type algebra. Furthermore, we give an effective upper bound on the multiplicity of this algebra.
Kuei-Nuan Lin, Yi-Huang Shen
2023-05-03T02:06:32Z
http://arxiv.org/abs/2305.01859v1
# Regularities and Multiplicities of Veronese type algebras

###### Abstract.

In this paper, we study the algebra of Veronese type. We show that the presentation ideal of this algebra has an initial ideal whose Alexander dual has linear quotients. As an application, we explicitly obtain the Castelnuovo-Mumford regularity of the Veronese type algebra. Furthermore, we give an effective upper bound on the multiplicity of this algebra.

2020 Mathematics Subject Classification: Primary 05E40, 13F55, 13F65; Secondary 14M25, 13H15, 13D02

Keywords: Algebra of Veronese type, Regularity, Cohen-Macaulay, Multiplicity

## 1. Introduction

Let \(S=\mathbb{K}[x_{1},\dots,x_{n}]\) be a polynomial ring over a field \(\mathbb{K}\) with \(n\geq 2\). Suppose that \(I\) is an ideal minimally generated by some monomials \(f_{1},\dots,f_{u}\) in \(S\). The _semigroup ring_ associated to \(I\), denoted by \(\mathbb{K}[I]\), is the subalgebra of \(S\) generated by \(f_{1},\dots,f_{u}\). This ring is also known as the _toric ring_ associated with \(I\). In this work, we focus on investigating several algebraic invariants of \(\mathbb{K}[I]\) when \(I\) is an ideal of Veronese type.

We fix a degree \(d\) and a sequence \(\boldsymbol{\alpha}=(\alpha_{1},\dots,\alpha_{n})\) of integers with \(1\leq\alpha_{1}\leq\dots\leq\alpha_{n}\leq d\) and \(d<\sum_{i=1}^{n}\alpha_{i}\). Let \(\mathbb{N}=\{\,0,1,2,\dots\,\}\) be the set of non-negative integers and \[I_{d,\boldsymbol{\alpha}}\coloneqq\left\langle\boldsymbol{x}^{\boldsymbol{c}}:\boldsymbol{c}=(c_{1},\dots,c_{n})\in\mathbb{N}^{n},\,\sum_{i=1}^{n}c_{i}=d\text{ and }c_{i}\leq\alpha_{i}\text{ for each }i\,\right\rangle\] be _an ideal of Veronese type_. Relatedly, let \(\mathcal{A}_{d,\boldsymbol{\alpha}}=\mathbb{K}[I_{d,\boldsymbol{\alpha}}]\subset S\) and call it an _algebra of Veronese type_. If \(\alpha_{1}=\dots=\alpha_{n}\), then \(I_{d,\boldsymbol{\alpha}}\) is a special strongly symmetric shifted ideal. Moreover, if \(\alpha_{i}=d\) for all \(i\), then \(\mathcal{A}_{d,\boldsymbol{\alpha}}\) is the \(d\)-th Veronese subring of \(S\).

Algebras of Veronese type have been studied from the viewpoint of toric varieties; see [23]. De Negri and Hibi also classified the Gorensteinness of \(\mathcal{A}_{d,\boldsymbol{\alpha}}\) in [6, Theorem 2.4]. In addition, Costantini and Seceleanu studied algebras associated with strongly symmetric shifted ideals in [5]. In this paper, we continue their work and compute the Castelnuovo-Mumford regularity and the multiplicity of \(\mathcal{A}_{d,\boldsymbol{\alpha}}\).

Recall that for a finitely generated graded module \(M\) over the polynomial ring \(S\), the _Castelnuovo-Mumford regularity_ of \(M\), denoted by \(\operatorname{reg}(M)\), is \(\max\left\{\,j-i:\beta_{i,j}\neq 0\,\right\}\), where the \(\beta_{i,j}\)'s are the graded Betti numbers of \(M\). This regularity can be used to bound the degrees of the syzygy generators of the module \(M\). Another important invariant that we study here is the _multiplicity_ \(\mathsf{e}(M)\) of \(M=\bigoplus_{k}M_{k}\) with respect to the graded maximal ideal, where \(M_{k}\) is the degree \(k\) component of the graded module \(M\). Recall that the Hilbert function \(H(M,k):=\dim_{\mathbb{K}}(M_{k})\) is eventually a polynomial in \(k\) of degree \(\dim(M)-1\). The leading coefficient of this polynomial is of the form \(\mathsf{e}(M)/(\dim(M)-1)!\). If \(M\) is the homogeneous coordinate ring of a projective variety \(X\), then the multiplicity is just the _degree_ of \(X\).
One way to study the semigroup ring \(\mathbb{K}[I]\) algebraically is to investigate its presentation ideal. Let \(\mathbb{K}[\boldsymbol{T}]=\mathbb{K}[T_{f_{1}},\dots,T_{f_{u}}]\) be a polynomial ring in \(u\) variables over \(\mathbb{K}\). Then the _presentation ideal_ \(J\) is the kernel of the canonical \(\mathbb{K}\)-algebra homomorphism \(\psi:\mathbb{K}[\boldsymbol{T}]\to\mathbb{K}[I]\) with \(\psi(T_{f_{i}})=f_{i}\). Obviously, \(J\) is a prime ideal and we have \(\mathbb{K}[\boldsymbol{T}]/J\cong\mathbb{K}[I]\).

The study of the algebra \(\mathbb{K}[I]\) becomes more feasible when an explicit Gröbner basis of the presentation ideal \(J\) is available; see [2] for the case of chordal bipartite graphs, [21] for the \(d\)-th Veronese subring, and [19] for the case of three-dimensional Ferrers diagrams. This is because the invariants of the initial ideal and the original ideal are closely related, cf. [14, Section 3.3]. According to the work of Conca and Varbaro in [3], these relations become tight when the initial ideal is squarefree. On the other hand, without knowing a Gröbner basis of \(J\), finding the Castelnuovo-Mumford regularity or the multiplicity of \(\mathbb{K}[I]\), for example, becomes a difficult task; cf. [16]. It is worth noting that if \(J\) has a quadratic Gröbner basis, then the algebra \(\mathbb{K}[I]\) is Koszul (that is, \(\mathbb{K}\) has a linear resolution over \(\mathbb{K}[I]\)) by [10].

The authors of the current paper proved in [18] that if \(I\) is associated with a three-dimensional Ferrers diagram satisfying a mild property, then the presentation ideal \(J\) has a quadratic Gröbner basis. This is the foundation for the calculation of the Castelnuovo-Mumford regularity and the multiplicity in [19]. However, the way we obtained the quadratic Gröbner basis there seems to be model-dependent. A more useful tool is the idea of _sorting monomials_ introduced by Sturmfels. With this technique, Sturmfels [23] proved that the presentation ideal of \(\mathcal{A}_{d,\boldsymbol{\alpha}}\) has a quadratic Gröbner basis. More recently, Herzog et al. [15] generalized the notion of sortability to the non-equigenerated case and proved that the presentation ideal of the algebra associated with a sortable monomial ideal also has a quadratic Gröbner basis.

With the quadratic Gröbner basis of \(J\) described earlier by Sturmfels, we can study the Veronese type algebra \(\mathcal{A}_{d,\boldsymbol{\alpha}}\) in more detail. As a main result, we obtain the Castelnuovo-Mumford regularity of \(\mathcal{A}_{d,\boldsymbol{\alpha}}\) explicitly in Theorem 5.6. This generalizes the result [21, Theorem 4.2] of Nitsche in the case of the \(d\)-th Veronese subring. We then offer an effective upper bound on the multiplicity of \(\mathcal{A}_{d,\boldsymbol{\alpha}}\) in Theorem 5.14.

This work is organized as follows. In Section 2, we recall some essential definitions and terminology that we will need later. In Section 3, we use the combinatorial structure of the sorting operation to study the algebra \(\mathcal{A}_{d,\boldsymbol{\alpha}}\). From the initial ideal of the presentation ideal \(J\) of \(\mathcal{A}_{d,\boldsymbol{\alpha}}\), we introduce a structural graph \(\mathcal{G}_{d,\boldsymbol{\alpha}}\) and study its maximal cliques. As a result, we derive the generators of the Alexander dual of \(\operatorname{in}(J)\) in Corollary 3.10. In Section 4, we propose a carefully constructed order in Setting 4.13. With it, we show in Theorem 4.21 that this Alexander dual ideal has linear quotients.
Combinatorial details of the quotients are also presented in this section. Finally, in Section 5, we gather all the tools and results and present the two main theorems mentioned above.

## 2. Preliminaries

Throughout this paper, we fix a positive integer \(n\geq 2\). Following the convention, \([n]\) stands for the set \(\{1,2,\ldots,n\}\). Let \(S=\mathbb{K}[x_{1},\ldots,x_{n}]\) be a polynomial ring over a field \(\mathbb{K}\), and let \(\mathfrak{S}_{n}\) be the symmetric group of \([n]\).

**Notation 2.1**.:

1. Let \(\boldsymbol{a}\in\mathbb{Z}^{n}\) be a tuple written in boldface. Unless otherwise stated, we write \(a_{i}\), with a subscript and not in boldface, for its \(i\)-th coordinate. Namely, we expect that \(\boldsymbol{a}=(a_{1},\ldots,a_{n})\in\mathbb{Z}^{n}\). Following this convention, if we encounter several tuples \(\boldsymbol{a}^{1},\ldots,\boldsymbol{a}^{t}\in\mathbb{Z}^{n}\), we will have \(\boldsymbol{a}^{i}=(a_{1}^{i},\ldots,a_{n}^{i})\) for each \(i\in[t]\).

2. Suppose that \(\boldsymbol{a}\in\mathbb{N}^{n}\). We will write \(\boldsymbol{x}^{\boldsymbol{a}}\) for the monomial \(x_{1}^{a_{1}}\cdots x_{n}^{a_{n}}\) in \(S\) and define \(|\boldsymbol{a}|\coloneqq a_{1}+\cdots+a_{n}\).

3. Let \(I\) be a monomial ideal of \(S\). We write \(G(I)\) for the minimal monomial generating set of \(I\), and \(\mu(I)=\#G(I)\) for the minimal number of generators.

Recall that Sturmfels introduced in [23, Chapter 14] a sorting operator for monomials of the same degree.

**Definition 2.2**.:

1. Let \(\operatorname{Mon}_{d}\) be the set of monomials of degree \(d\) in \(S=\mathbb{K}[x_{1},\ldots,x_{n}]\). Then, we have the _sorting operator_ for \(p\geq 2\): \[\operatorname{sort}:\underbrace{\operatorname{Mon}_{d}\times\cdots\times\operatorname{Mon}_{d}}_{p\text{ times}}\to\underbrace{\operatorname{Mon}_{d}\times\cdots\times\operatorname{Mon}_{d}}_{p\text{ times}},\qquad(u_{1},\ldots,u_{p})\mapsto(v_{1},\ldots,v_{p}),\] which is defined as follows. Suppose that \(u_{1}\cdots u_{p}=x_{i_{1}}x_{i_{2}}\cdots x_{i_{pd}}\) with \(i_{1}\leq i_{2}\leq\cdots\leq i_{pd}\). Then, \(v_{k}\coloneqq x_{i_{k}}x_{i_{p+k}}\cdots x_{i_{(d-1)p+k}}\) for \(k\in[p]\). The sequence \((u_{1},\ldots,u_{p})\) of monomials in \(\operatorname{Mon}_{d}\) is called _sorted_ if \(\operatorname{sort}(u_{1},\ldots,u_{p})=(u_{1},\ldots,u_{p})\). A subset \(U\) of \(\operatorname{Mon}_{d}\) is _sortable_ if \(\operatorname{sort}(U\times U)\subseteq U\times U\).

2. Let \(V_{n,d}\coloneqq\{\boldsymbol{a}\in\mathbb{N}^{n}:|\boldsymbol{a}|=d\}\). Since we have a natural correspondence between \(\boldsymbol{x}^{\boldsymbol{a}}\in\operatorname{Mon}_{d}\) and \(\boldsymbol{a}\in V_{n,d}\), by abuse of notation, we also derive a sorting operator \[\operatorname{sort}:V_{n,d}\times\cdots\times V_{n,d}\to V_{n,d}\times\cdots\times V_{n,d}.\] Similarly, we can also talk about whether a subset of \(V_{n,d}\) is sortable.
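For readers who prefer an operational description, the following is a minimal Python sketch of the sorting operator of Definition 2.2, acting on exponent tuples (our illustration; the function name `sort_tuples` and the worked example are our own).

```python
from collections import Counter

def sort_tuples(tuples):
    """Sturmfels' sorting operator (Definition 2.2) on exponent tuples.

    Each tuple a encodes the monomial x^a of degree d = |a|.  The product
    of all p monomials is written as x_{i_1} ... x_{i_{pd}} with weakly
    increasing indices, and the k-th output monomial collects the indices
    in positions k, p + k, ..., (d - 1)p + k.
    """
    p, n = len(tuples), len(tuples[0])
    d = sum(tuples[0])
    assert all(sum(t) == d for t in tuples), "all tuples must have degree d"
    # The multiset of variable indices of the product, weakly increasing.
    indices = sorted(j for t in tuples for j, e in enumerate(t) for _ in range(e))
    out = []
    for k in range(p):
        counts = Counter(indices[k::p])   # deal the indices round-robin
        out.append(tuple(counts.get(j, 0) for j in range(n)))
    return out

# Example with n = 3, d = 3:
print(sort_tuples([(2, 0, 1), (0, 2, 1)]))   # [(1, 1, 1), (1, 1, 1)]
```

A pair is sorted exactly when `sort_tuples` fixes it; in the example, the unsorted pair \(((2,0,1),(0,2,1))\) is redistributed to \(((1,1,1),(1,1,1))\), spreading the product \(x_{1}^{2}x_{2}^{2}x_{3}^{2}\) as evenly as possible.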
In this paper, we always assume the following.

**Setting 2.3**.: Suppose that \(n\geq 3\). We fix a degree \(d\) and a sequence \(\boldsymbol{\alpha}=(\alpha_{1},\ldots,\alpha_{n})\) of integers with \(1\leq\alpha_{1}\leq\cdots\leq\alpha_{n}\leq d\) and \(d<|\boldsymbol{\alpha}|\). Let \(V^{\boldsymbol{\alpha}}_{n,d}\coloneqq\{\,\boldsymbol{c}=(c_{1},\ldots,c_{n})\in V_{n,d}:c_{i}\leq\alpha_{i}\text{ for all }i\,\}\). Furthermore, let \(I_{d,\boldsymbol{\alpha}}\coloneqq\langle\boldsymbol{x}^{c}:\boldsymbol{c}\in V^{\boldsymbol{\alpha}}_{n,d}\rangle\) be the Veronese type ideal in \(S\), and \(\mathcal{A}_{d,\boldsymbol{\alpha}}=\mathbb{K}[I_{d,\boldsymbol{\alpha}}]\) be the Veronese type algebra.

**Remark 2.4**.:

1. When \(n=2\), it is not difficult to see that the Veronese type algebra \(\mathcal{A}_{d,\boldsymbol{\alpha}}\) is isomorphic to a Veronese ring, up to a shift.

2. If \(d=|\boldsymbol{\alpha}|\), then the Veronese type algebra \(\mathcal{A}_{d,\boldsymbol{\alpha}}\) is generated principally by the monomial \(\boldsymbol{x}^{\boldsymbol{\alpha}}\).

**Remark 2.5**.: Let \(\mathcal{A}_{d,\boldsymbol{\alpha}}\) be the Veronese type algebra in Setting 2.3. Since \(V^{\boldsymbol{\alpha}}_{n,d}\) is sortable by [9, Proposition 6.11], the set \[\mathscr{J}\coloneqq\left\{\,\underline{T_{\boldsymbol{a}^{1}}T_{\boldsymbol{a}^{2}}}-T_{\boldsymbol{b}^{1}}T_{\boldsymbol{b}^{2}}:\boldsymbol{a}^{1},\boldsymbol{a}^{2}\in V^{\boldsymbol{\alpha}}_{n,d},\ (\boldsymbol{a}^{1},\boldsymbol{a}^{2})\text{ unsorted and sort}(\boldsymbol{a}^{1},\boldsymbol{a}^{2})=(\boldsymbol{b}^{1},\boldsymbol{b}^{2})\,\right\}\] is a Gröbner basis of the presentation ideal \(J\) of \(\mathcal{A}_{d,\boldsymbol{\alpha}}\) with respect to a term order such that the marked (underlined) monomials are the leading terms, by [9, Theorem 6.16]. In particular, \(J\) has a quadratic squarefree initial ideal. Consequently, \(\mathcal{A}_{d,\boldsymbol{\alpha}}\) is a Cohen-Macaulay Koszul normal domain, by [9, Theorems 5.16, 5.17, and 6.7].

**Proposition 2.6**.: _Let \(\mathcal{A}_{d,\boldsymbol{\alpha}}\) be the Veronese type algebra in Setting 2.3. Then, \(\dim(\mathcal{A}_{d,\boldsymbol{\alpha}})=n\)._

Proof.: By assumption, we have \(d<|\boldsymbol{\alpha}|\). Since \(I_{d,\boldsymbol{\alpha}}\) is equigenerated, the dimension of \(\mathcal{A}_{d,\boldsymbol{\alpha}}\) equals the rank of the matrix \(\boldsymbol{M}_{d,\boldsymbol{\alpha}}\) whose rows are the exponent vectors of the monomial generators of \(I_{d,\boldsymbol{\alpha}}\) (see [17, Exercise 8.21]). Based on this, we consider two cases.

1. Suppose that \(d<n\). Then, let \(\boldsymbol{\alpha}^{\prime}=(1,1,\ldots,1)\). Since the corresponding matrix \(\boldsymbol{M}_{d,\boldsymbol{\alpha}^{\prime}}\) is a submatrix of \(\boldsymbol{M}_{d,\boldsymbol{\alpha}}\), it remains to show that \(\operatorname{rank}(\boldsymbol{M}_{d,\boldsymbol{\alpha}^{\prime}})=n\), or equivalently, \(\dim(\mathcal{A}_{d,\boldsymbol{\alpha}^{\prime}})=n\). Notice that \(I_{d,\boldsymbol{\alpha}^{\prime}}\) is a strongly symmetric shifted ideal. That \(\dim(\mathcal{A}_{d,\boldsymbol{\alpha}^{\prime}})=n\) follows from [5, Proposition 3.29].

2. Suppose that \(d\geq n\). Then, we can find a suitable \(\boldsymbol{\alpha}^{\prime}\) such that \(1\leq\alpha^{\prime}_{i}\leq\alpha_{i}\) for all \(i\) and \(d=|\boldsymbol{\alpha}^{\prime}|-1\). Since the corresponding matrix \(\boldsymbol{M}_{d,\boldsymbol{\alpha}^{\prime}}\) is a submatrix of \(\boldsymbol{M}_{d,\boldsymbol{\alpha}}\), it remains to show that \(\operatorname{rank}(\boldsymbol{M}_{d,\boldsymbol{\alpha}^{\prime}})=n\). Let \(\boldsymbol{e}^{i}\) be the \(i\)-th standard basis vector of \(\mathbb{Z}^{n}\) for \(1\leq i\leq n\). It is not difficult to see that \(\boldsymbol{M}_{d,\boldsymbol{\alpha}^{\prime}}\) is an \(n\times n\) matrix, whose rows take the form \(\boldsymbol{\alpha}^{\prime}-\boldsymbol{e}^{i}\) for \(i=1,\ldots,n\).
It is then an elementary task to show that \(\boldsymbol{\alpha}^{\prime}-\boldsymbol{e}^{1},\ldots,\boldsymbol{\alpha}^{\prime}-\boldsymbol{e}^{n}\) are linearly independent. Say, \(\sum_{i}k_{i}(\boldsymbol{\alpha}^{\prime}-\boldsymbol{e}^{i})=\boldsymbol{0}\) for suitable \(k_{i}\in\mathbb{R}\). Thus, \(\sum_{i}k_{i}\boldsymbol{e}^{i}=(\sum_{i}k_{i})\boldsymbol{\alpha}^{\prime}\) and consequently \(\sum_{i}k_{i}=(\sum_{i}k_{i})|\boldsymbol{\alpha}^{\prime}|\). If \(\sum_{i}k_{i}=0\), then \(\sum_{i}k_{i}\boldsymbol{e}^{i}=(\sum_{i}k_{i})\boldsymbol{\alpha}^{\prime}=\boldsymbol{0}\). As a result, each \(k_{i}=0\). If instead \(\sum_{i}k_{i}\neq 0\), then \(|\boldsymbol{\alpha}^{\prime}|=1\), which contradicts the assumption that \(|\boldsymbol{\alpha}^{\prime}|-1=d\geq n\geq 3\).

## 3. Maximal cliques

Let \(\mathcal{A}_{d,\boldsymbol{\alpha}}\) be the Veronese type algebra in Setting 2.3. In one of the main results of this paper, we determine the Castelnuovo-Mumford regularity of the algebra \(\mathcal{A}_{d,\boldsymbol{\alpha}}\). Recall that if \(\beta_{i,j}\) is the graded Betti number of \(\mathcal{A}_{d,\boldsymbol{\alpha}}\) considered as a \(\mathbb{K}[\boldsymbol{T}]\)-module, then the Castelnuovo-Mumford regularity of \(\mathcal{A}_{d,\boldsymbol{\alpha}}\) is defined to be \[\operatorname{reg}(\mathcal{A}_{d,\boldsymbol{\alpha}})\coloneqq\max_{i,j}\,\{\,j-i:\beta_{i,j}\neq 0\,\}\,.\] Our exploration of the Castelnuovo-Mumford regularity depends on the following key observations.

**Observation 3.1**.: Let \(J\) be the presentation ideal of \(\mathcal{A}_{d,\boldsymbol{\alpha}}\) generated by the set \(\mathscr{J}\) in Remark 2.5. Since \(\mathcal{A}_{d,\boldsymbol{\alpha}}\) is Cohen-Macaulay and \(\operatorname{in}(J)\) is squarefree by Remark 2.5, it follows from [3, Corollary 2.7] that the quotient ring \(\mathbb{K}[\boldsymbol{T}]/\operatorname{in}(J)\) is Cohen-Macaulay. Let \((\operatorname{in}(J))^{\vee}\) be the Alexander dual of the squarefree ideal \(\operatorname{in}(J)\). By [3, Corollary 2.7] and [14, Proposition 8.1.10], \(\operatorname{reg}(\mathcal{A}_{d,\boldsymbol{\alpha}})=\operatorname{reg}(\mathbb{K}[\boldsymbol{T}]/J)=\operatorname{reg}(\mathbb{K}[\boldsymbol{T}]/\operatorname{in}(J))=\operatorname{pd}((\operatorname{in}(J))^{\vee})\).

Inspired by the above observation, we turn to finding the projective dimension of \((\operatorname{in}(J))^{\vee}\). To do this, we proceed as follows. First, since \(\mathbb{K}[\boldsymbol{T}]/\operatorname{in}(J)\) is Cohen-Macaulay, \((\operatorname{in}(J))^{\vee}\) is an equigenerated ideal. We will describe the minimal monomial generators of this ideal as the maximal cliques of length \(n\) of some graph in Proposition 3.5. Then, in Section 4, we give a total order on the minimal monomial generating set \(G((\operatorname{in}(J))^{\vee})\) and show that \((\operatorname{in}(J))^{\vee}\) has linear quotients with respect to this order. The details of these quotients are important, as the following lemma shows.

**Lemma 3.2** ([14, Corollary 8.2.2]).: _Let \(I\) be an equigenerated ideal in the polynomial ring \(S\). Suppose that the minimal generating set \(G(I)\) consists of \(f_{1},\ldots,f_{m}\) such that \(Q_{i}\coloneqq\langle f_{1},\ldots,f_{i-1}\rangle:_{S}f_{i}\) is linear for each \(i\). Let \(p=\max_{i}\mu(Q_{i})\) be the maximum of the minimal numbers of generators of the successive linear quotient ideals.
Then, the top nonzero total Betti number of \(I\) is \(\beta_{p}(I)\), which has the value \(\#\left\{\,i:\mu(Q_{i})=p\,\right\}\). In particular, the projective dimension of \(I\) is given by \(p\)._ **Definition 3.3**.: 1. Recall that \(V_{n,d}\) is the set \(\{\mathbf{a}\in\mathbb{N}^{n}:|\mathbf{a}|=d\}\). For any distinct \(\mathbf{a},\mathbf{b}\in V_{n,d}\), we write \(\mathbf{a}>_{\operatorname{lex}}\mathbf{b}\) if the leftmost nonzero entry of \(\mathbf{a}-\mathbf{b}\) is positive. 2. Let \(\mathbf{\eta}=(\eta_{1},\ldots,\eta_{n})\) be the smallest tuple in \(V_{n,d}^{\mathbf{\alpha}}\) with respect to \(>_{\operatorname{lex}}\), namely, if \(\mathbf{a}\in V_{n,d}^{\mathbf{\alpha}}\), then \(\mathbf{a}\geq_{\operatorname{lex}}\mathbf{\eta}\). It is clear that \(\eta_{n}=\alpha_{n}\) and \(\eta_{i}=\min\{\alpha_{i},d-\eta_{n}-\eta_{n-1}-\cdots-\eta_{i+1}\}\) for \(i=n-1,n-2,\ldots,1\). 3. Let \(\mathcal{G}=\mathcal{G}(d,\mathbf{\alpha})\) be a simple graph on the set \(V_{n,d}^{\mathbf{\alpha}}\), such that \(\{\mathbf{a},\mathbf{b}\}\) with \(\mathbf{a}>_{\operatorname{lex}}\mathbf{b}\) is an edge if and only if \((\mathbf{a},\mathbf{b})\) is sorted. A _clique_ of \(\mathcal{G}\) is a collection of vertices, in which, every two different vertices are adjacent. A _maximal clique_ is a clique that is maximal with respect to inclusion. Let \(\operatorname{MC}(\mathcal{G})=\operatorname{MC}(\mathcal{G}(d,\mathbf{\alpha}))\) be the set of maximal cliques of \(\mathcal{G}\). **Remark 3.4**.: 1. If \((\mathbf{a},\mathbf{b})\) is a sorted pair in \(V_{n,d}\), then \(\mathbf{a}\geq_{\operatorname{lex}}\mathbf{b}\). 2. The fact (ii) before [23, Theorem 14.2] can be restated as follows. Suppose that \(\{\mathbf{a}^{1}>_{\operatorname{lex}}\mathbf{a}^{2}>_{\operatorname{lex}}\cdots>_{ \operatorname{lex}}\mathbf{a}^{s}\}\) is a subset of \(V_{n,d}^{\mathbf{\alpha}}\) such that \(\operatorname{sort}(\mathbf{a}^{i},\mathbf{a}^{j})=(\mathbf{a}^{i},\mathbf{a}^{j})\) for all \(1\leq i<j\leq s\). Then, \(\operatorname{sort}(\mathbf{a}^{1},\mathbf{a}^{2},\ldots,\mathbf{a}^{s})=(\mathbf{a}^{1},\mathbf{a }^{2},\ldots,\mathbf{a}^{s})\). In other words, sorted subsets of \(V_{n,d}^{\mathbf{\alpha}}\) are precisely cliques in \(\mathcal{G}\). 3. It follows from the description in Remark 2.5 and the fact just stated in (b) that the squarefree monomial ideal \(\operatorname{in}(J)\) is the edge ideal of the complement graph \(\mathcal{G}^{\complement}\). Suppose that \(T_{\mathbf{a}^{1}}T_{\mathbf{a}^{2}}\cdots T_{\mathbf{a}^{m}}\in G(\operatorname{in}(J)^{ \vee})\) is a minimal monomial generator of the Alexander dual ideal. This is equivalent to saying that \(\{\mathbf{a}^{1},\ldots,\mathbf{a}^{m}\}\) is a minimal vertex cover of the complement graph \(\mathcal{G}^{\complement}\), by [14, Corollary 9.1.5]. In other words, the set complement \(\{\mathbf{a}^{1},\ldots,\mathbf{a}^{m}\}^{\complement}\) is a maximal clique of \(\mathcal{G}\). Consequently, in this section, we turn to the study of the maximal cliques. **Proposition 3.5**.: _Every maximal clique of \(\mathcal{G}\) has cardinality \(n\)._ Proof.: The Veronese type algebra \(\mathcal{A}_{d,\mathbf{\alpha}}\) is Cohen-Macaulay by Remark 2.5. In other words, the presentation ideal \(J\) is Cohen-Macaulay. Since \(\operatorname{in}(J)\) is squarefree, this implies that \(\operatorname{in}(J)\) is Cohen-Macaulay by [3, Corollary 2.7]. In particular, \(\operatorname{in}(J)\) is height unmixed. 
Notice that \(\operatorname{ht}(\operatorname{in}(J))=\operatorname{ht}(J)=\#\boldsymbol{T}-n\) by Proposition 2.6, where \(\#\boldsymbol{T}\) is the number of variables in \(\mathbb{K}[\boldsymbol{T}]\), i.e., the dimension of this polynomial ring. Consequently, every minimal monomial generator of \(\operatorname{in}(J)^{\vee}\) has degree \(\#\boldsymbol{T}-n\), and every maximal clique has size \(\#\boldsymbol{T}-(\#\boldsymbol{T}-n)=n\) by Remark 3.4 (c).

We need more tools to describe these maximal cliques in detail.

**Definition 3.6**.: Let \(\mathbf{a},\mathbf{b}\in\mathbb{Z}^{n}\) be two tuples. Suppose that the following condition is satisfied:

(\(\mathsf{S}\)) There exist \(1\leq i_{1}<i_{2}<\cdots<i_{2k-1}<i_{2k}\leq n\) such that \(b_{i_{2j-1}}=a_{i_{2j-1}}-1\) while \(b_{i_{2j}}=a_{i_{2j}}+1\) for \(1\leq j\leq k\). Furthermore, for \(t\in[n]\setminus\{i_{1},i_{2},\ldots,i_{2k}\}\), one has \(b_{t}=a_{t}\).

In this case, the _sorting signature set_ \(\Delta(\mathbf{a},\mathbf{b})\) of this pair is defined to be \[\Delta(\mathbf{a},\mathbf{b})\coloneqq\bigcup_{j=1}^{k}[i_{2j-1},i_{2j}),\] which is considered as a union of intervals on the real line. Consequently, the _length_ of this set is \(\sum_{j=1}^{k}(i_{2j}-i_{2j-1})\), denoted by \(\operatorname{length}(\Delta(\mathbf{a},\mathbf{b}))\).

From the definition, we can directly verify the following simple fact.

**Lemma 3.7**.: _Let \(\mathbf{a}>_{\rm lex}\mathbf{b}\) be two tuples in \(V_{n,d}\). Then, \(\operatorname{sort}(\mathbf{a},\mathbf{b})=(\mathbf{a},\mathbf{b})\) if and only if the condition \((\mathsf{S})\) is satisfied._

The following are further important observations when using the sorting operator.

**Lemma 3.8**.: _If \(\{\mathbf{a}>_{\rm lex}\mathbf{b}>_{\rm lex}\mathbf{c}\}\) is a sorted subset in \(V_{n,d}\), then \(\Delta(\mathbf{a},\mathbf{c})=\Delta(\mathbf{a},\mathbf{b})\sqcup\Delta(\mathbf{b},\mathbf{c})\)._

Proof.: It suffices to show that \(\Delta(\mathbf{a},\mathbf{b})\cap\Delta(\mathbf{b},\mathbf{c})=\emptyset\). Suppose for contradiction that \(\Delta(\mathbf{a},\mathbf{b})\cap\Delta(\mathbf{b},\mathbf{c})\) is not empty. Then, we can find suitable intersecting intervals \([i_{2j-1},i_{2j})\) and \([i^{\prime}_{2j^{\prime}-1},i^{\prime}_{2j^{\prime}})\) in the definitions of \(\Delta(\mathbf{a},\mathbf{b})\) and \(\Delta(\mathbf{b},\mathbf{c})\) respectively. Let \(j\) and \(j^{\prime}\) be the smallest indices for which this happens.

1. Suppose that \(i_{2j-1}=i^{\prime}_{2j^{\prime}-1}\). Whence \(b_{i_{2j-1}}=a_{i_{2j-1}}-1\) and \(c_{i_{2j-1}}=b_{i_{2j-1}}-1\). This forces \(c_{i_{2j-1}}=a_{i_{2j-1}}-2\). Since condition \((\mathsf{S})\) is not satisfied here, we have a contradiction to the assumption that \((\mathbf{a},\mathbf{c})\) is a sorted pair by Lemma 3.7.

2. Suppose that \(i_{2j-1}<i^{\prime}_{2j^{\prime}-1}<i_{2j}\). Now, we derive \(i^{\prime}_{2j^{\prime}-2}\leq i_{2j-1}\) from the minimality assumption. From the interval positions, it is clear that \(a_{i^{\prime}_{2j^{\prime}-1}}=b_{i^{\prime}_{2j^{\prime}-1}}=c_{i^{\prime}_{2j^{\prime}-1}}+1\) and consequently \(a_{i^{\prime}_{2j^{\prime}-1}}+b_{i^{\prime}_{2j^{\prime}-1}}+c_{i^{\prime}_{2j^{\prime}-1}}\equiv 2\pmod{3}\). Meanwhile, it is not difficult to check that \(-1+\sum_{k\leq i_{2j-1}}a_{k}=\sum_{k\leq i_{2j-1}}b_{k}=\sum_{k\leq i_{2j-1}}c_{k}\). Furthermore, we have \(a_{t}=b_{t}=c_{t}\) for \(i_{2j-1}<t<i^{\prime}_{2j^{\prime}-1}\).
Whence, the sorting operator will produce \(a_{i^{\prime}_{2j^{\prime}-1}}+1=b_{i^{\prime}_{2j^{\prime}-1}}=c_{i^{\prime}_{2j^{\prime}-1}}\), which is a contradiction.

3. Suppose that \(i^{\prime}_{2j^{\prime}-1}<i_{2j-1}<i^{\prime}_{2j^{\prime}}\). This case is similar to case 2 above.

**Lemma 3.9**.:

1. _If_ \(\{\mathbf{a}>_{\rm lex}\mathbf{b}>_{\rm lex}\mathbf{d}\}\) _and_ \(\{\mathbf{b}>_{\rm lex}\mathbf{c}>_{\rm lex}\mathbf{d}\}\) _are two cliques in_ \(\mathcal{G}\)_, then_ \(\{\mathbf{a}>_{\rm lex}\mathbf{b}>_{\rm lex}\mathbf{c}>_{\rm lex}\mathbf{d}\}\) _is also a clique in_ \(\mathcal{G}\)_._

2. _Symmetrically, if_ \(\{\mathbf{a}>_{\rm lex}\mathbf{b}>_{\rm lex}\mathbf{c}\}\) _and_ \(\{\mathbf{a}>_{\rm lex}\mathbf{c}>_{\rm lex}\mathbf{d}\}\) _are two cliques in_ \(\mathcal{G}\)_, then_ \(\{\mathbf{a}>_{\rm lex}\mathbf{b}>_{\rm lex}\mathbf{c}>_{\rm lex}\mathbf{d}\}\) _is also a clique in_ \(\mathcal{G}\)_._

Proof.: By symmetry, we will only consider the first case. Since \(\Delta(\mathbf{a},\mathbf{b})\cap\Delta(\mathbf{b},\mathbf{d})=\emptyset\) while \(\Delta(\mathbf{b},\mathbf{c})\subseteq\Delta(\mathbf{b},\mathbf{d})\) by Lemma 3.8, we have \(\Delta(\mathbf{a},\mathbf{b})\cap\Delta(\mathbf{b},\mathbf{c})=\emptyset\). Then, we can apply Lemma 3.7 and Lemma 3.8 to show that \(\operatorname{sort}(\mathbf{a},\mathbf{c})=(\mathbf{a},\mathbf{c})\) with \(\Delta(\mathbf{a},\mathbf{c})=\Delta(\mathbf{a},\mathbf{b})\sqcup\Delta(\mathbf{b},\mathbf{c})\). Whence, \(\{\mathbf{a}>_{\rm lex}\mathbf{b}>_{\rm lex}\mathbf{c}>_{\rm lex}\mathbf{d}\}\) is a clique in \(\mathcal{G}\).

We know from Proposition 3.5 that every maximal clique has cardinality \(n\). Additional information can be drawn here.

**Corollary 3.10**.: _For each maximal clique \(\{\mathbf{a}^{1}>_{\rm lex}\cdots>_{\rm lex}\mathbf{a}^{n}\}\) in \(\mathcal{G}\), we have \(\operatorname{length}(\Delta(\mathbf{a}^{i},\mathbf{a}^{j}))=j-i\) when \(1\leq i<j\leq n\). Furthermore, \(\Delta(\mathbf{a}^{1},\mathbf{a}^{n})=[1,n)\). In other words, \(a_{1}^{n}=a_{1}^{1}-1\), \(a_{n}^{n}=a_{n}^{1}+1\) and \(a_{i}^{n}=a_{i}^{1}\) for \(i\in\{2,\ldots,n-1\}\)._

Proof.: Since \(\Delta(\mathbf{a}^{1},\mathbf{a}^{n})\subseteq[1,n)\) and \(\operatorname{length}(\Delta(\mathbf{a}^{1},\mathbf{a}^{n}))\geq\sum_{i=1}^{n-1}\operatorname{length}(\Delta(\mathbf{a}^{i},\mathbf{a}^{i+1}))\geq n-1\) by Lemma 3.8, the first two statements are clear. The last statement follows from the definition of the sorting signature set.

**Corollary 3.11**.: _Let \(\{\mathbf{a}^{1}>_{\rm lex}\mathbf{a}^{2}>_{\rm lex}\cdots>_{\rm lex}\mathbf{a}^{n}>_{\rm lex}\mathbf{a}^{n+1}\}\) be a set of vertices in \(\mathcal{G}\). Suppose that \(\{\mathbf{a}^{1}>_{\rm lex}\mathbf{a}^{2}>_{\rm lex}\cdots>_{\rm lex}\mathbf{a}^{n}\}\) is a maximal clique of \(\mathcal{G}\) and \(\Delta(\mathbf{a}^{1},\mathbf{a}^{2})=\Delta(\mathbf{a}^{n},\mathbf{a}^{n+1})\). Then, \(\{\mathbf{a}^{2}>_{\rm lex}\cdots>_{\rm lex}\mathbf{a}^{n}>_{\rm lex}\mathbf{a}^{n+1}\}\) is also a maximal clique of \(\mathcal{G}\)._

Proof.: We have \(\Delta(\mathbf{a}^{2},\mathbf{a}^{n})\sqcup\Delta(\mathbf{a}^{n},\mathbf{a}^{n+1})=\Delta(\mathbf{a}^{2},\mathbf{a}^{n})\sqcup\Delta(\mathbf{a}^{1},\mathbf{a}^{2})=\Delta(\mathbf{a}^{1},\mathbf{a}^{n})=[1,n)\) by Corollary 3.10. Thus, we can verify by definition that \(\Delta(\mathbf{a}^{2},\mathbf{a}^{n+1})=[1,n)\). In particular, \(\{\mathbf{a}^{2}>_{\rm lex}\mathbf{a}^{n+1}\}\) is an edge of \(\mathcal{G}\) by Lemma 3.7.
Now, as \(\{\mathbf{a}^{2}>_{\rm lex}\cdots>_{\rm lex}\mathbf{a}^{n}\}\) is a clique and \(\{\mathbf{a}^{n}>_{\rm lex}\mathbf{a}^{n+1}\}\) is an edge, \(\{\mathbf{a}^{2}>_{\rm lex}\cdots>_{\rm lex}\mathbf{a}^{n}>_{\rm lex}\mathbf{a}^{n+1}\}\) is also a maximal clique of \(\mathcal{G}\) by Lemma 3.9. ## 4. Linear Quotients In this section, we continue to assume that \(\mathcal{A}_{d,\mathbf{\alpha}}\) is the Veronese type algebra of Setting 2.3 and \(\mathbf{\eta}\) is the tuple given in Definition 3.3. As explained at the beginning of Section 3, we intend to show that the Alexander dual ideal \((\operatorname{in}(J))^{\vee}\) has linear quotients. This tactic allows us to have more control over its minimal free resolution. In particular, we can explicitly calculate the Castelnuovo-Mumford regularity of \(\mathcal{A}_{d,\mathbf{\alpha}}\) in Section 5, and also give a reasonable upper bound on its multiplicity. To prove the linear quotient property, we need to impose a total order \(\prec\) on the minimal monomial generating set \(G(\operatorname{in}(J)^{\vee})\), such that with respect to this order, the ideal \(\operatorname{in}(J)^{\vee}\) has linear quotients. Let \(\operatorname{MC}(\mathcal{G})\) be the set of maximal cliques of the graph \(\mathcal{G}=\mathcal{G}(d,\boldsymbol{\alpha})\). As observed in Remark 3.4(c), \[G(\operatorname{in}(J)^{\vee})=\left\{\;\boldsymbol{T}_{A^{\complement}} \coloneqq\prod_{v\in V^{\boldsymbol{\alpha}}_{n,d}\setminus A}T_{v}:A\in \operatorname{MC}(\mathcal{G})\;\right\}.\] Thus, we will consider the corresponding total order, still denoted by \(\prec\), on \(\operatorname{MC}(\mathcal{G})\). By definition, we want to show that the quotient ideal \(\langle\boldsymbol{T}_{B^{\complement}}:B\prec A\rangle:\boldsymbol{T}_{A^{ \complement}}\) is generated by some ring variables of \(\mathbb{K}[\boldsymbol{T}]\) for each \(A\in\operatorname{MC}(\mathcal{G})\). Notice that \[\langle\boldsymbol{T}_{B^{\complement}}\rangle:\boldsymbol{T}_{A^{\complement}} =\prod_{v\in A\setminus B}T_{v}. \tag{1}\] Therefore, for any given maximal clique \(B\) with \(B\prec A\), it suffices to find a suitable maximal clique \(D\) with \(D\prec A\) such that \(A\setminus D\) is a singleton set with \(A\setminus D\subseteq A\setminus B\). Unfortunately, the total order \(\prec\) introduced here is really involved. We have to postpone its debut to Setting 4.13. Before that, we need to make some preparations. **Definition 4.1**.: 1. A priori, a maximal clique of the graph \(\mathcal{G}\) is a set of vertices. When we write a maximal clique \(A=(\boldsymbol{a}^{1},\boldsymbol{a}^{2},\ldots,\boldsymbol{a}^{n})\) in the tuple form, we intend to indicate that \(\boldsymbol{a}^{1}>_{\operatorname{lex}}\boldsymbol{a}^{2}>_{\operatorname{ lex}}\cdots>_{\operatorname{lex}}\boldsymbol{a}^{n}\). 2. Two maximal cliques \(A=(\boldsymbol{a}^{1},\boldsymbol{a}^{2},\ldots,\boldsymbol{a}^{n})\) and \(B=(\boldsymbol{b}^{1},\ldots,\boldsymbol{b}^{n})\) are called _equivalent_, if and only if \(\boldsymbol{a}^{1}=\boldsymbol{b}^{1}\). It follows from Corollary 3.10 that this is also equivalent to saying that \(\boldsymbol{a}^{n}=\boldsymbol{b}^{n}\). With respect to this binary relation, we have _equivalence classes_. We will write \(\mathcal{E}_{A}\) for the equivalence class to which \(A\) belongs. 3. 
(c) The _rank_ of a tuple \(\boldsymbol{a}=(a_{1},\ldots,a_{n})\in V_{n,d}\) (with respect to the given tuple \(\boldsymbol{\eta}\)) is defined to be

\[\operatorname{rank}(\boldsymbol{a})\coloneqq\sum_{j=1}^{n-1}(a_{j}-\eta_{j})(n-j).\]

It is clear that \(\operatorname{rank}(\boldsymbol{\eta})=0\). Furthermore, if \(\boldsymbol{a}>_{\operatorname{lex}}\boldsymbol{b}\) belong to some common maximal clique of \(\mathcal{G}\), then it is easy to verify directly that \(\operatorname{rank}(\boldsymbol{a})=\operatorname{rank}(\boldsymbol{b})+\operatorname{length}(\Delta(\boldsymbol{a},\boldsymbol{b}))\).

(d) For each maximal clique \(A=(\boldsymbol{a}^{1},\ldots,\boldsymbol{a}^{n})\), we define the _rank_ of \(\mathcal{E}_{A}\) to be \(\operatorname{rank}(\boldsymbol{a}^{n})\). This is well-defined, since if \(B=(\boldsymbol{b}^{1},\ldots,\boldsymbol{b}^{n})\) and \(A\) are equivalent, then \(\boldsymbol{b}^{n}=\boldsymbol{a}^{n}\).

(e) Suppose that \(A=(\boldsymbol{a}^{1},\ldots,\boldsymbol{a}^{n})\), where \(\boldsymbol{a}^{i}=(a_{1}^{i},\ldots,a_{n}^{i})\in\mathbb{Z}^{n}\) and \(\sum_{k}a_{k}^{i}=d\) for each \(i\in[n]\). Assume that there exists a permutation \((s_{1},\ldots,s_{n-1})\) of \(\{1,2,\ldots,n-1\}\), written in one-line notation, such that \(\Delta(\boldsymbol{a}^{i},\boldsymbol{a}^{i+1})=[s_{i},s_{i}+1)\) for \(i\in[n-1]\). Then we say the tuple \((s_{1},\ldots,s_{n-1})\) is the _signature_ of \(A\) and denote it by \(\operatorname{sgn}(A)\). Of course, we are mostly interested in the case where \(A\) is a maximal clique of \(\mathcal{G}\). Whence, by Lemma 3.8 and Corollary 3.10, the signature of \(A\) must exist.

(f) Let \(A=(\boldsymbol{a}^{1},\ldots,\boldsymbol{a}^{n})\) be a maximal clique. If there exists a maximal clique \(B=(\boldsymbol{b}^{1},\ldots,\boldsymbol{b}^{n})\) such that \((\boldsymbol{a}^{2},\ldots,\boldsymbol{a}^{n})=(\boldsymbol{b}^{1},\ldots,\boldsymbol{b}^{n-1})\), then we will say that \(B\) is the _root_ of \(A\) and write \(B=rA\). It follows from Corollary 3.10 that if the root of \(A\) exists, then it is unique.

**Remark 4.2**.:

(a) For any \(\boldsymbol{a}\in V^{\boldsymbol{\alpha}}_{n,d}\), if \(\boldsymbol{a}\neq\boldsymbol{\eta}\), we can find \(\boldsymbol{b}\in V^{\boldsymbol{\alpha}}_{n,d}\) such that \(\boldsymbol{a}>_{\operatorname{lex}}\boldsymbol{b}\geq_{\operatorname{lex}}\boldsymbol{\eta}\), \(\operatorname{sort}(\boldsymbol{a},\boldsymbol{b})=(\boldsymbol{a},\boldsymbol{b})\), and \(\operatorname{length}(\Delta(\boldsymbol{a},\boldsymbol{b}))=1\). If we use this repeatedly, we can find by induction \(\boldsymbol{b}^{0},\boldsymbol{b}^{1},\ldots,\boldsymbol{b}^{m}\in V^{\boldsymbol{\alpha}}_{n,d}\) such that \(\boldsymbol{b}^{0}=\boldsymbol{a}\), \(\boldsymbol{b}^{m}=\boldsymbol{\eta}\) and \(\operatorname{rank}(\boldsymbol{b}^{i})=\operatorname{rank}(\boldsymbol{b}^{i+1})+1\) for each \(i\). In particular, \(\operatorname{rank}(\boldsymbol{a})=m\in\mathbb{N}\).

(b) For each maximal clique \(A\), we have \(\operatorname{rank}(A)\in\mathbb{N}\).

(c) If \(B=rA\), then \(\operatorname{rank}(A)=\operatorname{rank}(B)+1\).

(d) We have precisely one equivalence class \(\mathcal{E}_{A}\) such that \(\operatorname{rank}(\mathcal{E}_{A})=0\). If \(B=(\boldsymbol{b}^{1},\ldots,\boldsymbol{b}^{n})\) belongs to such \(\mathcal{E}_{A}\), then \(\boldsymbol{b}^{n}=\boldsymbol{\eta}\).

Here is the converse of Corollary 3.10.

**Lemma 4.3**.: _Let \(\boldsymbol{a}^{1}=(a^{1}_{1},\ldots,a^{1}_{n})\) be a tuple in \(V^{\boldsymbol{\alpha}}_{n,d}\).
Suppose that \(1\leq a^{1}_{1}\) while \(a^{1}_{n}\leq\alpha_{n}-1\). Then, there exists a maximal clique in \(\mathcal{G}\) of the form \(\{\,\boldsymbol{a}^{1}>_{\mathrm{lex}}\cdots>_{\mathrm{lex}}\boldsymbol{a}^{n}\,\}\)._

Proof.: It is clear that \(\boldsymbol{a}^{n}\coloneqq(a^{1}_{1}-1,a^{1}_{2},\ldots,a^{1}_{n-1},a^{1}_{n}+1)\) belongs to \(V^{\boldsymbol{\alpha}}_{n,d}\), by assumption. Notice that \(\{\boldsymbol{a}^{1}>_{\mathrm{lex}}\boldsymbol{a}^{n}\}\) is a clique in \(\mathcal{G}\) and \(\Delta(\boldsymbol{a}^{1},\boldsymbol{a}^{n})=[1,n)\). Thus, we can complete \(\{\boldsymbol{a}^{1}>_{\mathrm{lex}}\boldsymbol{a}^{n}\}\) to a maximal clique. This maximal clique must have the form \(\{\,\boldsymbol{a}^{1}>_{\mathrm{lex}}\cdots>_{\mathrm{lex}}\boldsymbol{a}^{n}\,\}\), due to Proposition 3.5 and the rank relation given at the end of Definition 4.1(c).

Next, we describe a necessary and sufficient condition for a given maximal clique \(A\) such that \(\operatorname{rank}(A)>0\) and \(rA\) does not exist. This characterization ensures that we pick the correct element when ordering for the linear quotients.

**Lemma 4.4**.: _Suppose that \(A=(\boldsymbol{a}^{1},\ldots,\boldsymbol{a}^{n})\) is a maximal clique such that \(\boldsymbol{a}^{1}=(a^{1}_{1},\ldots,a^{1}_{n})\) and \(\operatorname{sgn}(A)=(s_{1},\ldots,s_{n-1})\). If \(\operatorname{rank}(A)>0\), then \(rA\) does not exist if and only if either \(a^{1}_{1}=1\) and \(s_{1}=1\), or \(a^{1}_{n}=\alpha_{n}-1\) and \(s_{1}=n-1\)._

Proof.: Suppose that \(\boldsymbol{a}^{2}=(a^{2}_{1},\ldots,a^{2}_{n})\). Let \(\boldsymbol{a}^{n+1}\) be the tuple such that \(\Delta(\boldsymbol{a}^{2},\boldsymbol{a}^{n+1})=[1,n)\). It is easy to see that \(\boldsymbol{a}^{n+1}=(a^{2}_{1}-1,a^{2}_{2},\ldots,a^{2}_{n-1},a^{2}_{n}+1)\) and \(\Delta(\boldsymbol{a}^{n},\boldsymbol{a}^{n+1})=[s_{1},s_{1}+1)\). Then, \(rA\) does not exist if and only if \(\boldsymbol{a}^{2},\ldots,\boldsymbol{a}^{n},\boldsymbol{a}^{n+1}\) is not a legitimate maximal clique. By Corollary 3.11, the latter happens precisely when \(\boldsymbol{a}^{n+1}\notin V^{\boldsymbol{\alpha}}_{n,d}\), which means either \(a^{2}_{1}=0\) or \(a^{2}_{n}=\alpha_{n}\).

1. Suppose that \(a^{2}_{1}=0\). We claim that \(s_{1}=1\). Otherwise, \(s_{j}=1\) for some \(2\leq j\leq n-1\). This implies that \(\boldsymbol{a}^{k}=(a^{2}_{1},\ldots)\) for \(k=1,\ldots,j\), and \(\boldsymbol{a}^{k}=(a^{2}_{1}-1,\ldots)\) for \(k=j+1,\ldots,n\). But as \(a^{2}_{1}-1=-1\) in this case, we have a contradiction. Now, since \(s_{1}=1\), it is clear that \(a^{1}_{1}=1\).
2. Suppose that \(a^{2}_{n}=\alpha_{n}\). The argument is similar.

Conversely, if \(a^{1}_{1}=1\) and \(s_{1}=1\), or \(a^{1}_{n}=\alpha_{n}-1\) and \(s_{1}=n-1\), then \(a^{2}_{1}=0\) or \(a^{2}_{n}=\alpha_{n}\). By the argument at the beginning of this proof, \(rA\) does not exist.

To facilitate exhibiting the claimed linear quotients property, we need the subsequent handy tool.

**Lemma 4.5**.: _Let \(A=(\boldsymbol{a}^{1},\ldots,\boldsymbol{a}^{n})\) and \(B=(\boldsymbol{b}^{1},\ldots,\boldsymbol{b}^{n})\) be two tuples of elements in \(\mathbb{Z}^{n}\) such that \(\boldsymbol{a}^{1}=\boldsymbol{b}^{1}\) and \(\boldsymbol{a}^{n}=\boldsymbol{b}^{n}\). Suppose that \(\operatorname{sgn}(A)=(s_{1},\ldots,s_{n-1})\) and \(\operatorname{sgn}(B)=(t_{1},\ldots,t_{n-1})\). Then, the following conditions are equivalent for an index \(k\in[n-2]\):_

(a) _the signature_ \(\operatorname{sgn}(B)\) _takes the form_ \((s_{1},\ldots,s_{k-1},s_{k+1},s_{k},s_{k+2},\ldots,s_{n-1})\)_;_
(b) _the difference set_ \(\operatorname{diff}(A,B)\coloneqq\{i:\boldsymbol{a}^{i}\neq\boldsymbol{b}^{i}\}\) _is exactly_ \(\{k+1\}\)_._

_If in addition \(A\) and \(B\) are two maximal cliques in an equivalence class \(\mathcal{E}\), then the following is an additional equivalent condition:_

(c) _the quotient ideal_ \(\langle\boldsymbol{T}_{B^{\complement}}\rangle:_{\mathbb{K}[\boldsymbol{T}]}\boldsymbol{T}_{A^{\complement}}\) _is generated by_ \(T_{\boldsymbol{a}^{k+1}}\)_._

Proof.: Firstly, we show that (a)\(\Rightarrow\)(b). By assumption, we have \(\boldsymbol{a}^{1}=\boldsymbol{b}^{1}\) and \(s_{i}=t_{i}\) for \(i=1,2,\ldots,k-1\). Since

\[\Delta(\boldsymbol{a}^{i},\boldsymbol{a}^{i+1})=[s_{i},s_{i}+1)=[t_{i},t_{i}+1)=\Delta(\boldsymbol{b}^{i},\boldsymbol{b}^{i+1}),\]

by induction, we have \(\boldsymbol{a}^{j}=\boldsymbol{b}^{j}\) for \(j=1,2,\ldots,k\). Similarly, we have \(\boldsymbol{a}^{n}=\boldsymbol{b}^{n}\) and \(s_{i}=t_{i}\) for \(i=n-1,n-2,\ldots,k+2\) by assumption. Thus, we also have \(\boldsymbol{a}^{j}=\boldsymbol{b}^{j}\) for \(j=n,n-1,\ldots,k+2\). On the other hand, since \(s_{k}\neq t_{k}\) while \(\boldsymbol{a}^{k}=\boldsymbol{b}^{k}\), we have

\[\Delta(\boldsymbol{a}^{k},\boldsymbol{a}^{k+1})=[s_{k},s_{k}+1)\neq[t_{k},t_{k}+1)=\Delta(\boldsymbol{b}^{k},\boldsymbol{b}^{k+1}),\]

and hence \(\boldsymbol{a}^{k+1}\neq\boldsymbol{b}^{k+1}\). It is now clear that \(\operatorname{diff}(A,B)=\{k+1\}\).

Secondly, we show that (b)\(\Rightarrow\)(a). By assumption, we have \(\boldsymbol{a}^{j}=\boldsymbol{b}^{j}\) whenever \(j\neq k+1\). Since

\[\bigsqcup_{j\leq i}[s_{j},s_{j}+1)=\Delta(\boldsymbol{a}^{1},\boldsymbol{a}^{i+1})=\Delta(\boldsymbol{b}^{1},\boldsymbol{b}^{i+1})=\bigsqcup_{j\leq i}[t_{j},t_{j}+1)\]

for all \(1\leq i\leq k-1\), we must have \(s_{i}=t_{i}\) for each such \(i\). Similarly, we have \(s_{i}=t_{i}\) when \(k+2\leq i\leq n-1\), by looking at \(\Delta(\boldsymbol{a}^{i},\boldsymbol{a}^{n})=\Delta(\boldsymbol{b}^{i},\boldsymbol{b}^{n})\). Notice that \(\{s_{1},\ldots,s_{n-1}\}=\{1,\ldots,n-1\}=\{t_{1},\ldots,t_{n-1}\}\). Therefore, \(\{s_{k},s_{k+1}\}=\{t_{k},t_{k+1}\}\). Since \(A\neq B\), there is only one possibility for this: \(s_{k}=t_{k+1}\) and \(s_{k+1}=t_{k}\).

Finally, assume that \(A\) and \(B\) are two maximal cliques in \(\mathcal{E}\). The equivalence of (b) and (c) is then clear from Equation (1).

We need tools to determine whether a potential maximal clique is really legitimate.

**Definition 4.6**.: Suppose that \(\boldsymbol{a}=(a_{1},\ldots,a_{n})\in V^{\boldsymbol{\alpha}}_{n,d}\). If there exists some \(\boldsymbol{b}\in V^{\boldsymbol{\alpha}}_{n,d}\) such that \(\Delta(\boldsymbol{a},\boldsymbol{b})=[s,s+1)\), we say that we apply an _\(s\)-jump_ to \(\boldsymbol{a}\) in order to get \(\boldsymbol{b}\). It is clear that such an operation exists if and only if \(a_{s}>0\) and \(a_{s+1}<\alpha_{s+1}\).

**Remark 4.7**.: Suppose that \(A=(\boldsymbol{a}^{1},\ldots,\boldsymbol{a}^{n})\) is a maximal clique and \(\operatorname{sgn}(A)=(s_{1},\ldots,s_{n-1})\). For each \(i,j\in[n]\), we have \(a^{i}_{j}\in\{a^{1}_{j}-1,a^{1}_{j},a^{1}_{j}+1\}\). The case \(a^{i}_{j}=a^{1}_{j}-1\) occurs if and only if the \(j\)-jump is applied before \(\boldsymbol{a}^{i}\) (i.e., \(j=s_{k}\) for some \(k<i\)), while the \((j-1)\)-jump is not applied before it. Similarly, the case \(a^{i}_{j}=a^{1}_{j}+1\) happens if and only if the \((j-1)\)-jump is applied before \(\boldsymbol{a}^{i}\), while the \(j\)-jump is not applied before it.

**Definition 4.8**.: Let \(\mathcal{E}\) be an equivalence class in which every maximal clique starts with \(\boldsymbol{a}^{1}=(a^{1}_{1},\ldots,a^{1}_{n})\).
We consider a partial order \(\triangleleft\) on \([n-1]\) with respect to \(\mathcal{E}\) as follows. Suppose that \(i\in[n-1]\).

* If \(a^{1}_{i}=0\) and \(i>1\), then we require that \(\underline{(i-1)\triangleleft i}\).
* Likewise, if \(a^{1}_{i+1}=\alpha_{i+1}\) and \(i+1<n\), then we require that \(\underline{(i+1)\triangleleft i}\).

After considering each \(i\in[n-1]\), the underlined parts _generate_ a partial order \(\triangleleft\) on \([n-1]\). The induced poset will be called the _poset of obstructions_ with respect to \(\mathcal{E}\).

The next two lemmas justify the terminology of this poset from the point of view of allowing permutations in \(\mathfrak{S}_{n-1}\) to be legitimate signatures.

**Lemma 4.9**.: _The poset of Definition 4.8 is well-defined. Furthermore, suppose that \(A=(\boldsymbol{a}^{1},\ldots,\boldsymbol{a}^{n})\) is a maximal clique in \(\mathcal{E}\) and \(\operatorname{sgn}(A)=(s_{1},\ldots,s_{n-1})\). Then, the following condition is satisfied:_

(PO) _whenever_ \(s_{i}\triangleleft s_{j}\)_, we have_ \(i<j\)_._

Proof.: For each \(i\in[n-1]\), by definition, we apply an \(s_{i}\)-jump to \(\boldsymbol{a}^{i}\) in order to get \(\boldsymbol{a}^{i+1}\).

* If \(a^{1}_{s_{i}}=0\) and \(s_{i}>1\), then we must have applied an \((s_{i}-1)\)-jump somewhere before the \(s_{i}\)-jump. Meanwhile, we require that \((s_{i}-1)\triangleleft s_{i}\) in Definition 4.8.
* Likewise, if \(a^{1}_{s_{i}+1}=\alpha_{s_{i}+1}\) and \(s_{i}+1<n\), then we must have applied an \((s_{i}+1)\)-jump somewhere before the \(s_{i}\)-jump. Meanwhile, we require that \((s_{i}+1)\triangleleft s_{i}\) in Definition 4.8.

Note that we won't have \(s_{i}\triangleleft(s_{i}+1)\) and \((s_{i}+1)\triangleleft s_{i}\) at the same time: the first requires the \(s_{i}\)-jump to be applied before the \((s_{i}+1)\)-jump in \(A\), while the second requires the opposite. This shows that the poset is well-defined. It is also clear that the condition (PO) holds.

**Lemma 4.10**.: _Conversely, let \(\boldsymbol{s}=(s_{1},\ldots,s_{n-1})\) be a permutation in \(\mathfrak{S}_{n-1}\). Suppose that the condition (PO) is satisfied by \(\boldsymbol{s}\). Then, \(\boldsymbol{s}\) is a legitimate signature with respect to \(\mathcal{E}\), namely, there exists some \(A\in\mathcal{E}\) such that \(\operatorname{sgn}(A)=\boldsymbol{s}\)._

Proof.: Suppose that the maximal cliques in \(\mathcal{E}\) all start with \(\boldsymbol{a}^{1}\) and end with \(\boldsymbol{a}^{n}\). It suffices to construct \(\boldsymbol{a}^{2},\ldots,\boldsymbol{a}^{n-1}\in V^{\boldsymbol{\alpha}}_{n,d}\), such that \(\Delta(\boldsymbol{a}^{i-1},\boldsymbol{a}^{i})=[s_{i-1},s_{i-1}+1)\) when \(i\geq 2\). We do this by induction on \(i\). The degenerate base case when \(i=1\) is trivial. Next, assume that \(\boldsymbol{a}^{1},\ldots,\boldsymbol{a}^{i-1}\) have been constructed and \(2\leq i\leq n-1\).

1. If \(a^{i-1}_{s_{i-1}}>0\) and \(a^{i-1}_{s_{i-1}+1}<\alpha_{s_{i-1}+1}\), it is clear that we can apply the \(s_{i-1}\)-jump to \(\boldsymbol{a}^{i-1}\) to get a tuple \(\boldsymbol{a}^{i}\in V^{\boldsymbol{\alpha}}_{n,d}\). Certainly \(\Delta(\boldsymbol{a}^{i-1},\boldsymbol{a}^{i})=[s_{i-1},s_{i-1}+1)\).
2. Suppose that \(a^{i-1}_{s_{i-1}}=0\). As \(a^{1}_{s_{i-1}}\geq 0\), we deduce that \(a^{i-1}_{s_{i-1}}\neq a^{1}_{s_{i-1}}+1\). Since \(\boldsymbol{s}\) is a permutation, we cannot apply the \(s_{i-1}\)-jump before \(\boldsymbol{a}^{i-1}\).
Thus, it follows from Remark 4.7 that we cannot use the \((s_{i-1}-1)\)-jump before \(\boldsymbol{a}^{i-1}\).

   (i) Suppose that \(s_{i-1}=1\). Whence, \(s_{k}\neq 1\) for \(k<i-1\). Consequently, we deduce that \(a^{1}_{1}=\cdots=a^{i-1}_{1}=0\). But this is impossible since it is necessary that \(a^{1}_{1}\geq 1\).

   (ii) Suppose that \(s_{i-1}>1\). If in addition \(a^{1}_{s_{i-1}}=0\), we have \((s_{i-1}-1)\triangleleft s_{i-1}\) from Definition 4.8. Then, the condition (PO) implies that the \((s_{i-1}-1)\)-jump is applied before the \(s_{i-1}\)-jump, i.e., the \((s_{i-1}-1)\)-jump is applied before \(\boldsymbol{a}^{i-1}\), a contradiction. If instead \(a^{1}_{s_{i-1}}>0\), then \(a^{i-1}_{s_{i-1}}=a^{1}_{s_{i-1}}-1\). It follows from Remark 4.7 that the \(s_{i-1}\)-jump is applied before \(\boldsymbol{a}^{i-1}\), again a contradiction.

3. Suppose that \(a^{i-1}_{s_{i-1}+1}=\alpha_{s_{i-1}+1}\). We can argue as in the previous case to see that this is impossible.

The final step is to provide each equivalence class with additional structures.

**Remark 4.11**.: Suppose that \(i\triangleleft j\) in the poset of obstructions with respect to \(\mathcal{E}\). If \(i<j\), then we have \(i\triangleleft(i+1)\triangleleft\cdots\triangleleft(j-1)\triangleleft j\). If instead \(i>j\), then we have \(i\triangleleft(i-1)\triangleleft\cdots\triangleleft(j+1)\triangleleft j\).

**Definition 4.12**.: Let \(\mathcal{E}\) be an equivalence class of maximal cliques.

(a) In \(\mathcal{E}\), we will take a maximal clique \(L=L(\mathcal{E})\) as follows. If \(a^{1}_{1}=1\), then let \(\kappa_{1}\) be the largest integer such that \(\kappa_{1}\leq n-1\) and \(a^{1}_{2}=\cdots=a^{1}_{\kappa_{1}}=0\). When \(a^{1}_{2}>0\), we will simply take \(\kappa_{1}=1\). Symmetrically, if \(a^{1}_{n}=\alpha_{n}-1\), then let \(\kappa_{2}\) be the smallest integer such that \(\kappa_{2}\geq 2\) and \(a^{1}_{j}=\alpha_{j}\) for all \(\kappa_{2}\leq j\leq n-1\). When \(a^{1}_{n-1}<\alpha_{n-1}\), we will simply take \(\kappa_{2}=n\). It is clear that \(\kappa_{1}<\kappa_{2}\) if both exist. Furthermore, in Definition 4.8, we have the relations \(1\triangleleft 2\triangleleft\cdots\triangleleft\kappa_{1}\) and \((n-1)\triangleleft(n-2)\triangleleft\cdots\triangleleft(\kappa_{2}-1)\). Note that these relations are the only nontrivial ones that include the integers \(1,2,\ldots,\kappa_{1}-1,\kappa_{2},\kappa_{2}+1,\ldots,n-1\) in the poset of obstructions with respect to \(\mathcal{E}\). On the other hand, although it is possible to have relations like \((\kappa_{1}+1)\triangleleft\kappa_{1}\) or \((\kappa_{2}-2)\triangleleft(\kappa_{2}-1)\) in the poset, we won't have any \(t\in[n-1]\) such that either \(\kappa_{1}\triangleleft t\) or \((\kappa_{2}-1)\triangleleft t\), by the choice of \(\kappa_{1}\) and \(\kappa_{2}\), as well as Remark 4.11. Thus, we can choose an \(L\) in \(\mathcal{E}\) such that

\[\operatorname{sgn}(L)\coloneqq\begin{cases}(\tau_{1},\ldots,\tau_{\kappa_{2}-\kappa_{1}-2},\underbrace{n-1,n-2,\ldots,\kappa_{2}-1},\underbrace{1,2,\ldots,\kappa_{1}}),&\text{if }\kappa_{1}<\kappa_{2}-1,\\ (\underbrace{n-1,n-2,\ldots,\kappa_{2}},\underbrace{1,2,\ldots,\kappa_{1}}),&\text{if }\kappa_{1}=\kappa_{2}-1\end{cases}\]

for suitable \(\tau_{i}\)'s that are compatible with the poset of obstructions. In the case where \(\kappa_{2}\) is not defined, namely when \(a^{1}_{n}<\alpha_{n}-1\), we can superficially assume that \(\kappa_{2}=n+1\) and treat it as a degenerate case of the first one.
Similarly, when \(\kappa_{1}\) is not defined, namely, when \(a^{1}_{1}>1\), we can assume that \(\kappa_{1}=0\). A priori, the \(\operatorname{sgn}(L)\) just constructed is only a permutation in \(\mathfrak{S}_{n-1}\). It is indeed a legal signature, since such a maximal clique \(L\) exists by Lemma 4.10.

(b) We will call \((\mathcal{E},L)\) a _marked equivalence class_. This is an equivalence class with a chosen representative. Since we will take this particular \(L\) for this \(\mathcal{E}\) once and for all in the rest of this paper, we will simply write \((\mathcal{E},L)\) as \(\mathcal{E}\).

(c) Suppose that \(\boldsymbol{\tau}\coloneqq\operatorname{sgn}(L)=(\tau_{1},\tau_{2},\ldots,\tau_{n-1})\). Now, take any \(A\in\mathcal{E}\) and assume that \(\operatorname{sgn}(A)=(k_{1},\ldots,k_{n-1})\). Let \(s_{i}\) be the index such that \(\tau_{s_{i}}=k_{i}\). We will say that the _relative signature of \(A\) with respect to \(\boldsymbol{\tau}\)_ (or equivalently, with respect to \(L\)) is \(\operatorname{sgn}_{\boldsymbol{\tau}}(A)\coloneqq(s_{1},\ldots,s_{n-1})\). Obviously, \(\operatorname{sgn}_{\boldsymbol{\tau}}(L)=(1,2,\ldots,n-1)\).

Since we have all the necessary definitions and notations in place, we are ready to state the order that gives rise to the linear quotient property. Recall that \(\operatorname{MC}(\mathcal{G})\) is the set of maximal cliques of \(\mathcal{G}\).

**Setting 4.13** (**Rules of order**).: Let \(\prec\) be a total order on \(\operatorname{MC}(\mathcal{G})\) satisfying the following conditions.

(a) If \(\operatorname{rank}(B)<\operatorname{rank}(A)\), then \(B\prec A\).

(b) Suppose that \(\operatorname{rank}(B)=\operatorname{rank}(A)\), \(\mathcal{E}_{A}\neq\mathcal{E}_{B}\), and \(B\prec A\). Then, for any \(A^{\prime}\in\mathcal{E}_{A}\) and \(B^{\prime}\in\mathcal{E}_{B}\), we have \(B^{\prime}\prec A^{\prime}\). Therefore, \(\prec\) also induces a total order on the set of equivalence classes.

(c) Let \(\mathcal{E}\) be an equivalence class and \(L\) be the special maximal clique we chose in Definition 4.12. Suppose that \(\boldsymbol{\tau}\coloneqq\operatorname{sgn}(L)=(\tau_{1},\ldots,\tau_{n-1})\). The restriction of \(\prec\) to \(\mathcal{E}\) is given by the lexicographical order with respect to \(\tau_{1}<\tau_{2}<\cdots<\tau_{n-1}\). In other words, if \(A,B\in\mathcal{E}\), then \(B\prec A\) if and only if the first nonzero entry of \(\operatorname{sgn}_{\boldsymbol{\tau}}(B)-\operatorname{sgn}_{\boldsymbol{\tau}}(A)\) is negative.

Such a total order \(\prec\) exists, but in general, it is not unique. In the rest of this paper, we will simply fix one that works.

**Example 4.14**.: We present here an example showing how the maximal cliques in a marked equivalence class \((\mathcal{E},L)\) are ordered according to Setting 4.13. Consider \(n=5\), \(d=8\), and \(\boldsymbol{\alpha}=(2,2,2,3,3)\). Let \(\mathcal{E}\) be the equivalence class containing

\[A=((1,1,1,3,2),(0,2,1,3,2),(0,1,2,3,2),(0,1,2,2,3),(0,1,1,3,3)).\]

Write \(A=(\boldsymbol{a}^{1},\ldots,\boldsymbol{a}^{5})\). Note that \(rA\) does not exist. This is because if \(\Delta(\boldsymbol{a}^{2},\boldsymbol{a}^{5+1})=[1,5)\), then \(\boldsymbol{a}^{5+1}=(-1,2,1,3,3)\), which does not belong to \(V_{n,d}^{\boldsymbol{\alpha}}\). Using the notation in Definition 4.12, we have \(\kappa_{1}=1\) (as \(a^{1}_{1}=1\) and \(a^{1}_{2}>0\)) and \(\kappa_{2}=4\) (as \(a^{1}_{5}=\alpha_{5}-1\) and \(a^{1}_{4}=\alpha_{4}\), while \(a^{1}_{3}<\alpha_{3}\)). Consequently, we choose a maximal clique \(L\) with \(\boldsymbol{\tau}\coloneqq\operatorname{sgn}(L)=(2,4,3,1)\).
Since \(\boldsymbol{a}^{1}\) is given, we have

\[L=((1,1,1,3,2),(1,0,2,3,2),(1,0,2,2,3),(1,0,1,3,3),(0,1,1,3,3))\]

for our equivalence class \(\mathcal{E}\). The poset of obstructions, defined in Definition 4.8, contains only the nontrivial relation \(4\triangleleft 3\). As a result, we have exactly \(4!/2=12\) maximal cliques in the equivalence class \(\mathcal{E}\). In Table 1, we list all their signatures and their relative signatures with respect to \(\boldsymbol{\tau}\). Due to lack of space, we will not explicitly list every maximal clique in this equivalence class. We only mention one here for illustration. Since every such maximal clique starts with \(\boldsymbol{a}^{1}=(1,1,1,3,2)\), the maximal clique \(B\) satisfying \(\operatorname{sgn}(B)=(1,4,2,3)\) is

\[B=((1,1,1,3,2),(0,2,1,3,2),(0,2,1,2,3),(0,1,2,2,3),(0,1,1,3,3)).\]

We can then check that \(\operatorname{sgn}_{\boldsymbol{\tau}}(B)=(4,2,1,3)\).

The rest of this section is devoted to showing the linear quotient property with respect to the order \(\prec\) introduced in Setting 4.13. Let \(A\) be a maximal clique. Given Equation (1), to show that the quotient ideal \(\langle\boldsymbol{T}_{B^{\complement}}:B\prec A\rangle:\boldsymbol{T}_{A^{\complement}}\) is linear, we show that for every maximal clique \(B\prec A\), we can find \(C\prec A\) such that \(\#\operatorname{diff}(A,C)=1\) and \(\operatorname{diff}(A,C)\subseteq\operatorname{diff}(A,B)\). The following technical lemma constructs the candidate maximal cliques that we will need in the later proof.

**Lemma 4.15**.: _Suppose that \(A=(\boldsymbol{a}^{1},\ldots,\boldsymbol{a}^{n})\) and \(B=(\boldsymbol{b}^{1},\ldots,\boldsymbol{b}^{n})\) are two different maximal cliques in the same equivalence class \(\mathcal{E}\) such that \(\operatorname{sgn}(A)=(s_{1},\ldots,s_{n-1})\) and \(\operatorname{sgn}(B)=(t_{1},\ldots,t_{n-1})\). Suppose that \(s_{i}=t_{i}\) for \(i\in\{1,2,\ldots,k\}\cup\{\ell,\ell+1,\ldots,n-1\}\) and \(s_{k+1}\neq t_{k+1}\); we allow \(\ell=n\) so that the part \(\{\ell,\ell+1,\ldots,n-1\}\) disappears. Then, we can find maximal cliques \(C,D\in\mathcal{E}\) such that_

\[\operatorname{sgn}(C)=(s_{1},\ldots,s_{k},s_{k+1},t_{k+1},p_{k+3},\ldots,p_{\ell-1},s_{\ell},\ldots,s_{n-1})\]

_and_

\[\operatorname{sgn}(D)=(s_{1},\ldots,s_{k},t_{k+1},s_{k+1},q_{k+3},\ldots,q_{\ell-1},s_{\ell},\ldots,s_{n-1})\]

_for suitable \(p_{i}\)'s and \(q_{i}\)'s._

Proof.: Suppose that \(\boldsymbol{a}^{k+1}=(a_{1}^{k+1},\ldots,a_{n}^{k+1})\). Then

\[\boldsymbol{a}^{k+2}=(a_{1}^{k+1},\ldots,a_{s_{k+1}-1}^{k+1},a_{s_{k+1}}^{k+1}-1,a_{s_{k+1}+1}^{k+1}+1,a_{s_{k+1}+2}^{k+1},\ldots,a_{n}^{k+1}).\]

Since \(\boldsymbol{a}^{1}=\boldsymbol{b}^{1}\) and \(s_{i}=t_{i}\) for \(i\in[k]\), we have \(\boldsymbol{b}^{k+1}=\boldsymbol{a}^{k+1}\) and consequently

\[\boldsymbol{b}^{k+2}=(a_{1}^{k+1},\ldots,a_{t_{k+1}-1}^{k+1},a_{t_{k+1}}^{k+1}-1,a_{t_{k+1}+1}^{k+1}+1,a_{t_{k+1}+2}^{k+1},\ldots,a_{n}^{k+1}).\]

Without loss of generality, we can assume that \(s_{k+1}<t_{k+1}\).
1. Suppose that \(s_{k+1}+1=t_{k+1}\). Let \(\boldsymbol{c}^{k+3}\) be the tuple such that \(\Delta(\boldsymbol{a}^{k+2},\boldsymbol{c}^{k+3})=[t_{k+1},t_{k+1}+1)\), namely,

\[\boldsymbol{c}^{k+3}=(a_{1}^{k+1},\ldots,a_{s_{k+1}-1}^{k+1},a_{s_{k+1}}^{k+1}-1,a_{s_{k+1}+1}^{k+1},a_{s_{k+1}+2}^{k+1}+1,a_{s_{k+1}+3}^{k+1},\ldots,a_{n}^{k+1}).\]

Due to the existence of \(\boldsymbol{a}^{k+2}\) and \(\boldsymbol{b}^{k+2}\), it can be verified that \(\boldsymbol{c}^{k+3}\in V^{\boldsymbol{\alpha}}_{n,d}\). Furthermore, \(\Delta(\boldsymbol{c}^{k+3},\boldsymbol{a}^{\ell})=\Delta(\boldsymbol{a}^{k+1},\boldsymbol{a}^{\ell})\setminus[s_{k+1},s_{k+1}+2)\). Whence, \(\boldsymbol{a}^{k+2}>_{\mathrm{lex}}\boldsymbol{c}^{k+3}>_{\mathrm{lex}}\boldsymbol{a}^{\ell}\) is a clique in \(\mathcal{G}\). It is then not difficult to see that \(\boldsymbol{a}^{1}>_{\mathrm{lex}}\cdots>_{\mathrm{lex}}\boldsymbol{a}^{k+2}>_{\mathrm{lex}}\boldsymbol{c}^{k+3}>_{\mathrm{lex}}\boldsymbol{a}^{\ell}>_{\mathrm{lex}}\cdots>_{\mathrm{lex}}\boldsymbol{a}^{n}\) is a clique, by Lemma 3.9. As \(\Delta(\boldsymbol{a}^{1},\boldsymbol{a}^{n})=[1,n)\), we can complete it to a maximal clique, which must have the form \(\boldsymbol{a}^{1}>_{\mathrm{lex}}\cdots>_{\mathrm{lex}}\boldsymbol{a}^{k+2}>_{\mathrm{lex}}\boldsymbol{c}^{k+3}>_{\mathrm{lex}}\boldsymbol{c}^{k+4}>_{\mathrm{lex}}\cdots>_{\mathrm{lex}}\boldsymbol{c}^{\ell-1}>_{\mathrm{lex}}\boldsymbol{a}^{\ell}>_{\mathrm{lex}}\cdots>_{\mathrm{lex}}\boldsymbol{a}^{n}\) by Corollary 3.10. We will take this as our expected maximal clique \(C\). Now, \(\boldsymbol{a}^{1}>_{\mathrm{lex}}\cdots>_{\mathrm{lex}}\boldsymbol{a}^{k+1}>_{\mathrm{lex}}\boldsymbol{b}^{k+2}>_{\mathrm{lex}}\boldsymbol{c}^{k+3}>_{\mathrm{lex}}\boldsymbol{c}^{k+4}>_{\mathrm{lex}}\cdots>_{\mathrm{lex}}\boldsymbol{c}^{\ell-1}>_{\mathrm{lex}}\boldsymbol{a}^{\ell}>_{\mathrm{lex}}\cdots>_{\mathrm{lex}}\boldsymbol{a}^{n}\) is also a maximal clique, which will be our expected \(D\). One can check directly that they satisfy the requirements.

2. Suppose that \(s_{k+1}+1<t_{k+1}\). We can make a similar argument.

\begin{table}
\begin{tabular}{c c c}
\hline\hline
 & \(\operatorname{sgn}(A)\) & \(\operatorname{sgn}_{\boldsymbol{\tau}}(A)\) \\
\hline
\(1\) & \((2,4,3,1)\) & \((1,2,3,4)\) \\
\(2\) & \((2,4,1,3)\) & \((1,2,4,3)\) \\
\(3\) & \((2,1,4,3)\) & \((1,4,2,3)\) \\
\(4\) & \((4,2,3,1)\) & \((2,1,3,4)\) \\
\(5\) & \((4,2,1,3)\) & \((2,1,4,3)\) \\
\(6\) & \((4,3,2,1)\) & \((2,3,1,4)\) \\
\(7\) & \((4,3,1,2)\) & \((2,3,4,1)\) \\
\(8\) & \((4,1,2,3)\) & \((2,4,1,3)\) \\
\(9\) & \((4,1,3,2)\) & \((2,4,3,1)\) \\
\(10\) & \((1,2,4,3)\) & \((4,1,2,3)\) \\
\(11\) & \((1,4,2,3)\) & \((4,2,1,3)\) \\
\(12\) & \((1,4,3,2)\) & \((4,2,3,1)\) \\
\hline\hline
\end{tabular}
\caption{Ordered maximal cliques in an equivalence class}
\end{table}

Note that each equivalence class \(\mathcal{E}\) is ordered with respect to the total order \(\prec\) in Setting 4.13. We will divide it into subclasses, for the convenience of our later argument.

**Notation 4.16**.: Let \((\mathcal{E},L)\) be a marked equivalence class with \(\operatorname{sgn}(L)=\boldsymbol{\tau}\). Then, the _subclasses_ \(\mathcal{E}_{j_{1},\ldots,j_{k}}\subset\mathcal{E}\) _of level_ \(k\) are defined such that the following conditions are satisfied.

(a) If \(k<n-1\), then \(\mathcal{E}_{j_{1},\ldots,j_{k}}=\bigsqcup_{t=1}^{m}\mathcal{E}_{j_{1},\ldots,j_{k},t}\) is the disjoint union of nonempty subclasses of level \(k+1\), for some suitable positive integer \(m\).
(b) For any \(A,B\in\mathcal{E}_{j_{1},\ldots,j_{k}}\) with \(\operatorname{sgn}_{\boldsymbol{\tau}}(A)=(p_{1},\ldots,p_{n-1})\) and \(\operatorname{sgn}_{\boldsymbol{\tau}}(B)=(q_{1},\ldots,q_{n-1})\), one has \(p_{i}=q_{i}\) for all \(i=1,\ldots,k\).

(c) It follows from the previous two conditions that every subclass of level \(n-1\) contains exactly one maximal clique. Now, if \(A\in\mathcal{E}_{j_{1},\ldots,j_{n-1}}\) and \(B\in\mathcal{E}_{\ell_{1},\ldots,\ell_{n-1}}\) such that \(A\prec B\) (or equivalently, the first nonzero entry of \(\operatorname{sgn}_{\boldsymbol{\tau}}(A)-\operatorname{sgn}_{\boldsymbol{\tau}}(B)\) is negative), then the first nonzero entry of \((j_{1},\ldots,j_{n-1})-(\ell_{1},\ldots,\ell_{n-1})\) must be negative.

As a quick example, the maximal clique \(B\) near the end of Example 4.14 belongs to the corresponding subclass \(\mathcal{E}_{3,2,1,1}\) of level \(4\).

**Lemma 4.17**.: _Let \((\mathcal{E},L)\) be a marked equivalence class with \(\operatorname{sgn}(L)=\boldsymbol{\tau}\) and consider a subclass \(\mathcal{E}_{j_{1},\ldots,j_{k}}\) of level \(k\) with \(j_{k}>1\). Suppose that \(\{s_{1},\ldots,s_{k-1},s_{k,1},\ldots,s_{k,j_{k}}\}\) is a subset of \([n-1]\) and the signatures of the maximal cliques in \(\mathcal{E}_{j_{1},\ldots,j_{k-1},\ell}\) have the form \((s_{1},\ldots,s_{k-1},s_{k,\ell},p_{k+1},\ldots,p_{n-1})\) for each \(\ell\leq j_{k}\), where \(p_{i}\in[n-1]\setminus\{s_{1},\ldots,s_{k-1},s_{k,\ell}\}\) for \(i=k+1,\ldots,n-1\). Besides, assume that \(t\) belongs to \([n-1]\setminus\{s_{1},\ldots,s_{k-1},s_{k,1},\ldots,s_{k,j_{k}-1}\}\) such that \(t\) precedes \(s_{k,j_{k}}\) with respect to \(\boldsymbol{\tau}\). Then, there is no maximal clique \(A\) in \(\mathcal{E}\) such that \(\operatorname{sgn}(A)\) has the form \((s_{1},\ldots,s_{k-1},s_{k,j_{k}},t,q_{k+2},\ldots,q_{n-1})\), where \(q_{i}\in[n-1]\setminus\{s_{1},\ldots,s_{k-1},s_{k,j_{k}},t\}\) for \(i=k+2,\ldots,n-1\)._

Proof.: Suppose for contradiction that there is a maximal clique \(A\) in \(\mathcal{E}\) such that \(\operatorname{sgn}(A)\) has the form \((s_{1},\ldots,s_{k-1},s_{k,j_{k}},t,q_{k+2},\ldots,q_{n-1})\) for suitable \(q_{i}\)'s. Since \(\boldsymbol{\tau}\) is a legitimate signature and \(t\) precedes \(s_{k,j_{k}}\) with respect to \(\boldsymbol{\tau}\), we don't have \(s_{k,j_{k}}\triangleleft t\) with respect to the partial order in Definition 4.8. Consequently, \((s_{1},\ldots,s_{k-1},t,s_{k,j_{k}},p_{k+2},\ldots,p_{n-1})\) is a legitimate signature within \(\mathcal{E}\) by Lemma 4.10. Let \(\mathcal{E}_{j_{1},\ldots,j_{k-1},\ell^{\prime}}\) be the subclass in which the signatures of the maximal cliques have the form \((s_{1},\ldots,s_{k-1},t,r_{k+1},\ldots,r_{n-1})\) for suitable \(r_{i}\)'s. Since \(t\) precedes \(s_{k,j_{k}}\) with respect to \(\boldsymbol{\tau}\), we have \(\ell^{\prime}<j_{k}\) by the condition (c) in our construction of subclasses. But this contradicts the choice of \(t\).

The following proposition guarantees the linear quotients within an equivalence class.

**Proposition 4.18**.: _Suppose that \(A\) and \(B\) are two maximal cliques in an equivalence class \(\mathcal{E}\) such that \(B\prec A\). Then, there exists a maximal clique \(D\in\mathcal{E}\) such that \(D\prec A\), and the set difference \(A\setminus D\) is a singleton set with \(A\setminus D\subseteq A\setminus B\)._

Proof.: Suppose that in the marked equivalence class \((\mathcal{E},L)\), one has \(\operatorname{sgn}(L)=\boldsymbol{\tau}\). In addition, assume that \(A=(\boldsymbol{a}^{1},\ldots,\boldsymbol{a}^{n})\) and \(B=(\boldsymbol{b}^{1},\ldots,\boldsymbol{b}^{n})\in\mathcal{E}\).
Recall that we previously defined \(\operatorname{diff}(A,B)\coloneqq\{\,i:\boldsymbol{a}^{i}\neq\boldsymbol{b}^{i}\,\}\). We may further assume that \(A\in\mathcal{E}_{j_{1},\ldots,j_{k-1},j_{k}}\) and \(B\in\mathcal{E}_{j_{1},\ldots,j_{k-1},j_{k}^{\prime}}\) with \(j_{k}^{\prime}<j_{k}\). Then \(k+1=\min\operatorname{diff}(A,B)\) and it is clear that \(k\leq n-2\).

Let us consider a set of maximal cliques

\[\mathcal{D}\coloneqq\left\{\,F\in\mathcal{E}:\operatorname{diff}(A,F)\subseteq\operatorname{diff}(A,B)\setminus\left\{k+1\right\}\,\right\}.\]

Since \(A\in\mathcal{D}\), the set \(\mathcal{D}\) is not empty. Let \(A^{\prime}\) be the first maximal clique in \(\mathcal{D}\) with respect to \(\prec\) and suppose that \(\operatorname{sgn}(A^{\prime})=(s_{1},\ldots,s_{n-1})\). Obviously, \(A^{\prime}\in\mathcal{E}_{j_{1},\ldots,j_{k}}\) and \(A^{\prime}\preceq A\). Furthermore, we have both \(\operatorname{diff}(A^{\prime},B)\subseteq\operatorname{diff}(A,B)\) and \(\operatorname{diff}(A,A^{\prime})\subseteq\operatorname{diff}(A,B)\).

Let \(C=(\boldsymbol{c}^{1},\ldots,\boldsymbol{c}^{n})\) be the tuple of elements in \(\mathbb{Z}^{n}\), such that \(\boldsymbol{c}^{1}=\boldsymbol{a}^{1}\) and \(\operatorname{sgn}(C)=(s_{1},\ldots,s_{k-1},s_{k+1},s_{k},s_{k+2},\ldots,s_{n-1})\). By Lemma 4.5, \(\operatorname{diff}(C,A^{\prime})=\left\{k+1\right\}\). In addition, we **claim** that \(C\) is a legitimate maximal clique in \(\mathcal{E}\) and \(C\prec A^{\prime}\). With this, if \(A=A^{\prime}\), then we take \(D=C\) since \(\operatorname{diff}(A,B)\supseteq\operatorname{diff}(A,C)=\left\{k+1\right\}\). If instead \(A\neq A^{\prime}\), then we will replace \(B\) by \(A^{\prime}\). Our proof will be done by induction on the cardinality \(\#\operatorname{diff}(A,B)\), since \(\operatorname{diff}(A,A^{\prime})\subsetneq\operatorname{diff}(A,B)\) and \(A^{\prime}\prec A\).

It remains to prove the above claim about \(C\). Since \(A\in\mathcal{E}_{j_{1},\ldots,j_{k-1},j_{k}}\), we may assume that the signatures of the maximal cliques in \(\mathcal{E}_{j_{1},\ldots,j_{k-1},\ell}\) have the form \((s_{1},\ldots,s_{k-1},s_{k,\ell},p_{k+1},\ldots,p_{n-1})\) for each \(\ell<j_{k}\) with suitable \(p_{i}\)'s. From Lemma 4.17 we derive that either \(s_{k}\) precedes \(s_{k+1}\) with respect to \(\boldsymbol{\tau}\), or \(s_{k+1}\in\left\{s_{k,1},\ldots,s_{k,j_{k}-1}\right\}\).

1. Suppose that \(s_{k}\) precedes \(s_{k+1}\) with respect to \(\boldsymbol{\tau}\). Since \(B\in\mathcal{E}_{j_{1},\ldots,j_{k-1},j_{k}^{\prime}}\) with \(j_{k}^{\prime}<j_{k}\), we can write \(\operatorname{sgn}(B)=(s_{1},\ldots,s_{k-1},s_{k,j_{k}^{\prime}},p_{k+1},\ldots,p_{n-1})\) with suitable \(p_{i}\)'s. Since \(j_{k}^{\prime}<j_{k}\), \(s_{k,j_{k}^{\prime}}\) precedes \(s_{k}\) with respect to \(\boldsymbol{\tau}\). As a result of our assumption here, \(s_{k,j_{k}^{\prime}}\) also precedes \(s_{k+1}\) with respect to \(\boldsymbol{\tau}\). Without loss of generality, we may assume that \(\operatorname{diff}(A^{\prime},B)=\left\{k+1,k+2,\ldots,r\right\}\) is a "continuous" segment. Applying Lemma 4.15 to \(A^{\prime}\) and \(B\), we can find some maximal clique \(A^{\prime\prime}\) in \(\mathcal{E}\) such that \(\operatorname{diff}(A^{\prime},A^{\prime\prime})\subseteq\operatorname{diff}(A^{\prime},B)\) and \(\operatorname{sgn}(A^{\prime\prime})=(s_{1},\ldots,s_{k},s_{k,j_{k}^{\prime}},r_{k+2},\ldots,r_{n-1})\) for suitable \(r_{i}\)'s. In particular, \(k+1\notin\operatorname{diff}(A,A^{\prime\prime})\).
Meanwhile, we observe that

\[\operatorname{diff}(A,A^{\prime\prime})\subseteq\operatorname{diff}(A,A^{\prime})\cup\operatorname{diff}(A^{\prime},A^{\prime\prime})\subseteq\operatorname{diff}(A,B)\cup\operatorname{diff}(A^{\prime},B)=\operatorname{diff}(A,B).\]

Thus, the maximal clique \(A^{\prime\prime}\) belongs to the previously defined set \(\mathcal{D}\). But since \(s_{k,j_{k}^{\prime}}\) precedes \(s_{k+1}\) with respect to \(\boldsymbol{\tau}\) while \(\operatorname{sgn}(A^{\prime})=(s_{1},\ldots,s_{n-1})\), this contradicts our choice of \(A^{\prime}\).

2. Suppose instead that \(s_{k+1}=s_{k,\ell}\) for some \(\ell<j_{k}\). Then, \(\boldsymbol{c}^{k+1}\) is a legitimate tuple by the existence of \(\mathcal{E}_{j_{1},\ldots,j_{k-1},\ell}\). Meanwhile, notice that \(A^{\prime}\) is a legitimate maximal clique such that \(\operatorname{diff}(C,A^{\prime})=\left\{k+1\right\}\). Consequently, \(C\) is also a legitimate maximal clique. Since \(\ell<j_{k}\), the index \(s_{k+1}\) precedes \(s_{k}\) with respect to \(\boldsymbol{\tau}\). Thus, \(C\prec A^{\prime}\), fully confirming our claim about \(C\).

Next, we consider the linear quotients across equivalence classes.

**Lemma 4.19**.: _Suppose that \(\mathcal{E}\) is an equivalence class such that \(\operatorname{rank}(\mathcal{E})>0\). Let \(\kappa_{1}\), \(\kappa_{2}\), and \(L\) be as introduced in Definition 4.12. Then, we have \(\kappa_{1}+1<\kappa_{2}-1\)._

Proof.: Suppose for contradiction that \(\kappa_{1}+1\geq\kappa_{2}-1\). Since \(\kappa_{1}<\kappa_{2}\), we obtain either \(\kappa_{1}+1=\kappa_{2}\) or \(\kappa_{1}+1=\kappa_{2}-1\). Since \(n\geq 3\), \(\kappa_{1}\leq n-1\), and \(\kappa_{2}\geq 2\), we encounter the following four cases.

1. Suppose that \(\kappa_{1}=n-1\). Then \(\boldsymbol{a}^{1}=(1,0,\ldots,0,a_{n}^{1})\) with \(0\leq a_{n}^{1}\leq\alpha_{n}-1\). Consequently, \(\boldsymbol{a}^{n}=(0,\ldots,0,a_{n}^{1}+1)\), which has to be \(\boldsymbol{\eta}\).
2. Suppose that \(\kappa_{2}=2\). Then \(\boldsymbol{a}^{1}=(a_{1}^{1},\alpha_{2},\ldots,\alpha_{n-1},\alpha_{n}-1)\) with \(a_{1}^{1}>1\). Consequently, \(\boldsymbol{a}^{n}=(a_{1}^{1}-1,\alpha_{2},\ldots,\alpha_{n})\), which has to be \(\boldsymbol{\eta}\).
3. Suppose that \(\kappa_{1}+1=\kappa_{2}\) and \(2\leq\kappa_{1}\leq n-2\). Then \(\boldsymbol{a}^{1}=(1,0,\ldots,0,\alpha_{\kappa_{1}+1},\ldots,\alpha_{n-1},\alpha_{n}-1)\). Consequently, \(\boldsymbol{a}^{n}=(0,\ldots,0,\alpha_{\kappa_{1}+1},\ldots,\alpha_{n})\), which has to be \(\boldsymbol{\eta}\).
4. Suppose that \(\kappa_{1}+1=\kappa_{2}-1\) and \(1\leq\kappa_{1}\leq n-2\). Then

\[\boldsymbol{a}^{1}=(1,0,\ldots,0,a_{\kappa_{1}+1}^{1},\alpha_{\kappa_{1}+2},\ldots,\alpha_{n-1},\alpha_{n}-1)\]

with \(0<a_{\kappa_{1}+1}^{1}<\alpha_{\kappa_{1}+1}\). Consequently, \(\boldsymbol{a}^{n}=(0,\ldots,0,a_{\kappa_{1}+1}^{1},\alpha_{\kappa_{1}+2},\ldots,\alpha_{n})\), which has to be \(\boldsymbol{\eta}\).

In each case, we always have \(\operatorname{rank}(\mathcal{E})=0\), which is a contradiction.

**Proposition 4.20**.: _Let \(A=(\boldsymbol{a}^{1},\ldots,\boldsymbol{a}^{n})\) be a maximal clique in the equivalence class \(\mathcal{E}\) such that \(rA\) does not exist. Suppose that \(B\prec A\) is a maximal clique such that \(B\notin\mathcal{E}\). Then, there exists another maximal clique \(D\in\mathcal{E}\) and some \(j\in[n-1]\) such that \(D\prec A\) and \(A\setminus D=\{\boldsymbol{a}^{j+1}\}\subseteq A\setminus B\)._

Proof.: We will prove this by using the notation and arguments of Definition 4.12.
As \(\operatorname{rank}(\mathcal{E})>0\) by the existence of \(B\), we showed in Lemma 4.19 that \(\kappa_{1}+1<\kappa_{2}-1\), and the first maximal clique \(L\) in \(\mathcal{E}\) satisfies

\[\operatorname{sgn}(L)=(\tau_{1},\ldots,\tau_{\kappa_{2}-\kappa_{1}-2},\underbrace{n-1,n-2,\ldots,\kappa_{2}-1},\underbrace{1,2,\ldots,\kappa_{1}}),\]

for appropriate \(\tau_{i}\)'s. Suppose that \(\operatorname{sgn}(A)=(s_{1},\ldots,s_{n-1})\) and let \(W\) be the set

\[[n-1]\setminus(\{n-1,n-2,\ldots,\kappa_{2}-1\}\sqcup\{1,2,\ldots,\kappa_{1}\})=\{\kappa_{1}+1,\ldots,\kappa_{2}-2\}.\]

Since \(\kappa_{1}+1<\kappa_{2}-1\), \(W\) is not empty. Meanwhile, since \(rA\) does not exist, \(s_{1}=1\) or \(n-1\) by Lemma 4.4, and \(s_{1}\notin W\). In the following, let \(j\geq 1\) be the smallest index such that \(s_{j+1}\in W\). We can construct a maximal clique \(D\) in \(\mathcal{E}\) such that \(\operatorname{sgn}(D)=(s_{1},\ldots,s_{j-1},s_{j+1},s_{j},s_{j+2},\ldots,s_{n-1})\).

To confirm the legitimacy of \(D\), notice that \(s_{j}\notin W\) while \(s_{j+1}\in W\). Since \(s_{j+1}\) precedes \(s_{j}\) in \(\operatorname{sgn}(L)\) while \(s_{j}\) precedes \(s_{j+1}\) in \(\operatorname{sgn}(A)\), these two indices are not comparable in the poset of obstructions, i.e., they can exchange positions in any legitimate signature. As \(\operatorname{sgn}(A)\) is a legitimate signature, so is \(\operatorname{sgn}(D)\), namely, \(D\) is a maximal clique in \(\mathcal{E}\). Next, since \(s_{j+1}\) precedes \(s_{j}\) in \(\operatorname{sgn}(L)\), we deduce that \(D\prec A\). From Lemma 4.5 we also have \(A\setminus D=\{\boldsymbol{a}^{j+1}\}\).

It remains to show that this \(\boldsymbol{a}^{j+1}\notin B\). Suppose for contradiction that this is not true. Then, we can write \(B=(\boldsymbol{b}^{1},\ldots,\boldsymbol{b}^{n})\) and \(\boldsymbol{b}^{j^{\prime}}=\boldsymbol{a}^{j+1}\) for some \(j^{\prime}\). Since \(B\prec A\), we must have \(j^{\prime}\leq j+1\), for rank reasons. At the same time, given Lemma 4.9, we can find \(i_{1}\leq\kappa_{1}\) and \(i_{2}\leq n-\kappa_{2}\) such that \(\{s_{1},\ldots,s_{j}\}=\{1,\ldots,i_{1}\}\sqcup\{n-i_{2},\ldots,n-1\}\), by the choice of \(j\). Whence, we can write \(\boldsymbol{a}^{j+1}=(\underbrace{0,\ldots,0}_{i_{1}},a_{i_{1}+1}^{j+1},\ldots,a_{n-i_{2}}^{j+1},\underbrace{\alpha_{n-i_{2}+1},\ldots,\alpha_{n}}_{i_{2}})\). Since \(\boldsymbol{b}^{j^{\prime}}=\boldsymbol{a}^{j+1}\), we obtain that \(1,2,\ldots,i_{1},n-i_{2},\ldots,n-1\notin\Delta(\boldsymbol{b}^{j^{\prime}},\boldsymbol{b}^{n})\). Thus, if \(\operatorname{sgn}(B)=(q_{1},\ldots,q_{n-1})\), then \(\{1,\ldots,i_{1},n-i_{2},\ldots,n-1\}\subseteq\{q_{1},\ldots,q_{j^{\prime}-1}\}\), forcing \(j\leq j^{\prime}-1\). Therefore, \(j^{\prime}=j+1\). It is now clear that \(\{q_{1},\ldots,q_{j}\}=\{1,\ldots,i_{1}\}\sqcup\{n-i_{2},\ldots,n-1\}\), and \(\Delta(\boldsymbol{b}^{1},\boldsymbol{b}^{j+1})=\Delta(\boldsymbol{a}^{1},\boldsymbol{a}^{j+1})\). As \(\boldsymbol{b}^{j+1}=\boldsymbol{a}^{j+1}\), it follows that \(\boldsymbol{b}^{1}=\boldsymbol{a}^{1}\) and \(B\in\mathcal{E}\) as well. This contradicts the assumption that \(B\notin\mathcal{E}\).

Finally, we are ready to show the announced linear quotient result.

**Theorem 4.21**.: _The total order \(\prec\) of Setting 4.13 induces a linear quotient order of the monomial generating set \(G(\operatorname{in}(J)^{\vee})\)._

Proof.: Take arbitrary maximal cliques \(A\) and \(B\) such that \(B\prec A\).
Given Equation (1), it suffices to find a suitable maximal clique \(D\) with \(D\prec A\) such that \(A\setminus D\) is a singleton set with \(A\setminus D\subseteq A\setminus B\). Suppose that \(A=(\boldsymbol{a}^{1},\ldots,\boldsymbol{a}^{n})\) and \(A\in\mathcal{E}\). We have two cases.

(a) Assume that \(B\in\mathcal{E}\). We apply Proposition 4.18 for the existence of such \(D\).

(b) Assume that \(B\notin\mathcal{E}\).

   (i) If \(rA\) exists, then \(A\setminus rA=\{\boldsymbol{a}^{1}\}\). Note that \(\boldsymbol{a}^{1}\notin B\) for rank reasons. So in this case we take \(D=rA\).

   (ii) Assume that \(rA\) does not exist. We apply Proposition 4.20 for the existence of such \(D\).

## 5. Applications

This section is devoted to two applications of the linear quotient structure that we established earlier. Here, we continue to assume that \(\mathcal{A}_{d,\boldsymbol{\alpha}}\) is the Veronese type algebra of Setting 2.3 and \(\boldsymbol{\eta}\) is the tuple given in Definition 3.3. Meanwhile, \(J\) is the presentation ideal so that \(\mathcal{A}_{d,\boldsymbol{\alpha}}=\mathbb{K}[\boldsymbol{T}]/J\).

### Regularity of the algebra

First of all, we determine the Castelnuovo-Mumford regularity of the algebra \(\mathcal{A}_{d,\boldsymbol{\alpha}}\). For each maximal clique \(A\) in the graph \(\mathcal{G}=\mathcal{G}(d,\boldsymbol{\alpha})\), we will denote the minimal number of generators of the linear quotient ideal \(\langle\boldsymbol{T}_{B^{\complement}}:B\prec A\rangle:_{\mathbb{K}[\boldsymbol{T}]}\boldsymbol{T}_{A^{\complement}}\) by \(\omega_{\boldsymbol{\alpha}}(A)\), or \(\omega(A)\) for short. By the proof of Theorem 4.21, we have

\[\omega(A)=\#\left\{\,B\prec A:\operatorname{diff}(A,B)\text{ is a singleton set}\,\right\}. \tag{2}\]

Furthermore, by Lemma 3.2, we have the following formula:

\[\operatorname{pd}((\operatorname{in}(J))^{\vee})=\max_{A}\omega(A). \tag{3}\]

Since \(\operatorname{pd}((\operatorname{in}(J))^{\vee})=\operatorname{reg}(\mathcal{A}_{d,\boldsymbol{\alpha}})\) by Observation 3.1, the task is now clear: find the largest \(\omega(A)\). We start with a quick estimate.

**Lemma 5.1**.: _We have \(\omega(A)\leq n-1\) for each maximal clique \(A=(\boldsymbol{a}^{1},\ldots,\boldsymbol{a}^{n})\)._

Proof.: Since \(\langle\boldsymbol{T}_{B^{\complement}}:B\prec A\rangle:_{\mathbb{K}[\boldsymbol{T}]}\boldsymbol{T}_{A^{\complement}}\) is linear, it follows from Equation (1) that

\[\omega(A)=\#\left\{\,\boldsymbol{a}^{i}:\text{there exists some $B\prec A$ with $\{\boldsymbol{a}^{i}\}=A\setminus B$}\,\right\}.\]

Therefore, it suffices to show that there is no \(B\) with \(B\prec A\) such that \(\{\boldsymbol{a}^{n}\}=A\setminus B\). Suppose for contradiction that this is not true. By Corollary 3.10, we have either \(B=(\boldsymbol{b},\boldsymbol{a}^{1},\boldsymbol{a}^{2},\ldots,\boldsymbol{a}^{n-1})\) or \(B=(\boldsymbol{a}^{1},\boldsymbol{a}^{2},\ldots,\boldsymbol{a}^{n-1},\boldsymbol{b})\) for some suitable tuple \(\boldsymbol{b}\). In the first case, \(\operatorname{rank}(B)=\operatorname{rank}(A)+1\), which contradicts the assumption that \(B\prec A\). In the second case, we have \(B=A\) by Corollary 3.10, which is also a contradiction.

It is natural to ask: when do we have \(\omega(A)=n-1\)? Let us start with a simple observation. In any equivalence class \(\mathcal{E}\), every maximal clique \(A\) uniquely corresponds to the signature \(\operatorname{sgn}(A)\), which is a permutation in \(\mathfrak{S}_{n-1}\). Thus, it is clear that \(\mathcal{E}\) contains at most \((n-1)!\) elements.
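This bound is easy to probe computationally. The following Python sketch (an illustration of ours; the helper names are not part of the formal development) counts the maximal cliques of an equivalence class by testing which permutations in \(\mathfrak{S}_{n-1}\) are realizable through successive jumps in the sense of Definition 4.6. For the class of Example 4.14, it confirms that only \(12\) of the \(4!=24\) permutations survive, in accordance with the obstruction \(4\triangleleft 3\):

```python
from itertools import permutations

def jump(a, s, alpha):
    """The s-jump of Definition 4.6 (s is 1-based), or None if it is inapplicable."""
    if a[s - 1] > 0 and a[s] < alpha[s]:
        return a[:s - 1] + (a[s - 1] - 1, a[s] + 1) + a[s + 1:]
    return None

def class_size(a1, alpha):
    """Number of maximal cliques in the equivalence class starting at a^1,
    i.e. the number of permutations realizable by successive jumps."""
    n = len(a1)
    count = 0
    for sgn in permutations(range(1, n)):
        a = a1
        for s in sgn:
            a = jump(a, s, alpha)
            if a is None:
                break
        else:  # every jump was applicable, so sgn is a legitimate signature
            count += 1
    return count

# The class of Example 4.14: the relation 4 <| 3 in the poset of obstructions
# kills half of the 4! = 24 permutations, leaving 12 maximal cliques.
print(class_size((1, 1, 1, 3, 2), (2, 2, 2, 3, 3)))  # 12
```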
In the following, we classify when an equivalence class \(\mathcal{E}\) contains exactly \((n-1)!\) elements.

**Lemma 5.2**.: _Let \(\mathcal{E}\) be an equivalence class such that every maximal clique in it begins with \(\boldsymbol{a}^{1}=(a^{1}_{1},\ldots,a^{1}_{n})\). Then, the following are equivalent:_

(a) _the cardinality_ \(\#\mathcal{E}=(n-1)!\)_;_

(b) _the poset of obstructions in Definition 4.8 is trivial for_ \(\mathcal{E}\)_;_

(c) _one has_ \(1\leq a^{1}_{1}\leq\alpha_{1}\)_,_ \(1\leq a^{1}_{j}\leq\alpha_{j}-1\) _for all_ \(2\leq j\leq n-1\)_, and_ \(0\leq a^{1}_{n}\leq\alpha_{n}-1\)_._

Proof.: The cardinality \(\#\mathcal{E}=(n-1)!\) if and only if every permutation in \(\mathfrak{S}_{n-1}\) is the signature of some maximal clique in \(\mathcal{E}\). But the latter is equivalent to saying that the poset of obstructions in Definition 4.8 is trivial for \(\mathcal{E}\), namely, \(1\leq a^{1}_{j}\leq\alpha_{j}-1\) for all \(2\leq j\leq n-1\). That \(1\leq a^{1}_{1}\leq\alpha_{1}\) and \(0\leq a^{1}_{n}\leq\alpha_{n}-1\) is automatic for all such \(\boldsymbol{a}^{1}\) in view of Corollary 3.10.

In the following, we characterize when \(\operatorname{pd}((\operatorname{in}(J))^{\vee})\) is exactly \(n-1\).

**Proposition 5.3**.: _The following conditions are equivalent:_

(a) _the projective dimension_ \(\operatorname{pd}((\operatorname{in}(J))^{\vee})=n-1\)_;_

(b) _there exists a maximal clique_ \(A\) _such that_ \(\omega(A)=n-1\)_;_

(c) _one has_ \(n\leq d\leq\sum_{i=1}^{n}(\alpha_{i}-1)\)_;_

(d) _there exists a maximal clique_ \(A=(\boldsymbol{a}^{1},\ldots,\boldsymbol{a}^{n})\) _with_ \(\boldsymbol{a}^{1}=(a^{1}_{1},\ldots,a^{1}_{n})\) _such that_

\[2\leq a^{1}_{1}\leq\alpha_{1},\quad 1\leq a^{1}_{j}\leq\alpha_{j}-1\text{ for all }2\leq j\leq n-1,\quad\text{and}\quad 0\leq a^{1}_{n}\leq\alpha_{n}-2.\]

Proof.: The equivalence of (a) and (b) is clear from the explanation before Lemma 5.2.

Next, we show the equivalence of (c) and (d). Since \(|\boldsymbol{a}^{1}|=d\), we can easily deduce (c) from (d). Conversely, if (c) is satisfied, we can easily find a suitable \(\boldsymbol{a}^{1}=(a^{1}_{1},\ldots,a^{1}_{n})\in\mathbb{N}^{n}\) as in (d). Let \(A=(\boldsymbol{a}^{1},\ldots,\boldsymbol{a}^{n})\) be the tuple of elements in \(\mathbb{Z}^{n}\) such that \(\operatorname{sgn}(A)=(1,2,\ldots,n-1)\). It can be verified directly that \(A\) is a legitimate maximal clique. Thus, we get (d).

In the following, we show the equivalence of (b) and (d).

(b)\(\Leftarrow\)(d): Suppose that the condition in (d) is satisfied. Then, the equivalence class \(\mathcal{E}\) to which \(A\) belongs has exactly \((n-1)!\) maximal cliques by Lemma 5.2, and every permutation in \(\mathfrak{S}_{n-1}\) is legitimate as a signature with respect to \(\mathcal{E}\). Without loss of generality, we may assume that \(A\) is the very last maximal clique of \(\mathcal{E}\) with \(\operatorname{sgn}(A)=(s_{1},\ldots,s_{n-1})\). For each \(k=2,3,\ldots,n-1\), we consider the maximal clique \(B^{k}\) in \(\mathcal{E}\) with \(\operatorname{sgn}(B^{k})=(s_{1},\ldots,s_{k-2},s_{k},s_{k-1},s_{k+1},\ldots,s_{n-1})\). Then \(B^{k}\prec A\) and \(A\setminus B^{k}=\{\boldsymbol{a}^{k}\}\) by Lemma 4.5. Meanwhile, since \(a^{1}_{n}\leq\alpha_{n}-2\), it follows from Corollary 3.10 that \(\boldsymbol{a}^{n}\) is not the tuple \(\boldsymbol{\eta}\) of Definition 3.3. In other words, \(\operatorname{rank}(\mathcal{E})>0\).
Thus, \(rA\) exists and \(A\setminus rA=\{\boldsymbol{a}^{1}\}\) by Lemma 4.4. In conclusion, \(\langle\boldsymbol{T}_{C^{\complement}}:C\prec A\rangle:_{\mathbb{K}[\boldsymbol{T}]}\boldsymbol{T}_{A^{\complement}}=\langle T_{\boldsymbol{a}^{1}},\ldots,T_{\boldsymbol{a}^{n-1}}\rangle\) is linear and has the maximal size by the proof of Lemma 5.1. In particular, (b) holds.

(b)\(\Rightarrow\)(d): Suppose that the condition in (b) is satisfied. Then, there exists a maximal clique \(A=(\boldsymbol{a}^{1},\ldots,\boldsymbol{a}^{n})\) in some equivalence class \((\mathcal{E},L)\) such that the quotient ideal \(Q\coloneqq\langle\boldsymbol{T}_{B^{\complement}}:B\prec A\rangle:_{\mathbb{K}[\boldsymbol{T}]}\boldsymbol{T}_{A^{\complement}}\) is linear with \(n-1\) minimal monomial generators. In view of Lemma 5.1 and its proof, this implies that \(Q=\langle T_{\boldsymbol{a}^{1}},T_{\boldsymbol{a}^{2}},\ldots,T_{\boldsymbol{a}^{n-1}}\rangle\). Since \(T_{\boldsymbol{a}^{1}}\in Q\), \(rA\) must exist and \(\operatorname{rank}(\mathcal{E})>0\). Additionally, for each \(k\in\{2,\ldots,n-1\}\), we have a maximal clique \(B^{k}\) such that \(B^{k}\prec A\) and \(A\setminus B^{k}=\{\boldsymbol{a}^{k}\}\). Since \(B^{k}\) obviously starts with \(\boldsymbol{a}^{1}\), this maximal clique belongs to \(\mathcal{E}\).

Now, suppose that \(\operatorname{sgn}(A)=(s_{1},\ldots,s_{n-1})\). It follows from Lemma 4.5 that \(\operatorname{sgn}(B^{k})=(s_{1},\ldots,s_{k-2},s_{k},s_{k-1},s_{k+1},\ldots,s_{n-1})\). Since \(B^{k}\prec A\), we must have \(s_{k}<s_{k-1}\) when considering the lexicographical order at the end of Setting 4.13(c). Since this holds for any \(k\in\{2,\ldots,n-1\}\), we conclude that \(s_{n-1}<s_{n-2}<\cdots<s_{1}\) with respect to this lexicographical order. Whence, \(A\) is the last maximal clique of \(\mathcal{E}\) and \(\operatorname{sgn}(L)=(s_{n-1},\ldots,s_{1})\) is the reverse of \(\operatorname{sgn}(A)\). Consequently, the poset of obstructions considered in Definition 4.8 is trivial, and we have \(0<a_{i}^{1}<\alpha_{i}\) for \(i\in\{2,\ldots,n-1\}\) by Lemma 5.2. Furthermore, since \(A\) is legitimate, \(1\leq a_{1}^{1}\leq\alpha_{1}\) and \(0\leq a_{n}^{1}\leq\alpha_{n}-1\) by Corollary 3.10. If \(a_{1}^{1}=1\), then \(\operatorname{sgn}(L)\) takes the form \((\tau_{1},\ldots,\tau_{n-2},1)\) by the requirement in Definition 4.12. Whence, \(s_{1}=1\). But this implies that \(rA\) does not exist by Lemma 4.4, a contradiction. Similarly, we can prove that \(a_{n}^{1}\leq\alpha_{n}-2\). Thus, (d) holds.

In what follows, we consider the case where the equivalent conditions of Proposition 5.3 are not satisfied. Before doing so, we introduce a reduction.

**Remark 5.4**.: Suppose that \(I\) is an equigenerated monomial ideal with minimal monomial generators \(\boldsymbol{x}^{\boldsymbol{a}^{1}},\ldots,\boldsymbol{x}^{\boldsymbol{a}^{t}}\) in \(\mathbb{K}[x_{1},\ldots,x_{n}]\). In addition, suppose that \(\boldsymbol{b}\) is a tuple in \(\mathbb{N}^{n}\) such that \(\boldsymbol{b}-\boldsymbol{a}^{i}\in\mathbb{N}^{n}\) for each \(i\). Whence, we will call \(I^{[\boldsymbol{b}]}\coloneqq\langle\boldsymbol{x}^{\boldsymbol{b}-\boldsymbol{a}^{1}},\ldots,\boldsymbol{x}^{\boldsymbol{b}-\boldsymbol{a}^{t}}\rangle\) the _generalized Newton dual of \(I\) with respect to \(\boldsymbol{b}\)_. It follows from [1, Theorem 3.1] that the algebra \(\mathbb{K}[I]\) is isomorphic to \(\mathbb{K}[I^{[\boldsymbol{b}]}]\). Of course, we are only interested in the case where \(I=I_{d,\boldsymbol{\alpha}}\), and \(\boldsymbol{b}=\boldsymbol{\alpha}\). In this case, we have

\[\mathcal{A}_{d,\boldsymbol{\alpha}}=\mathbb{K}[I]\cong\mathbb{K}[I^{[\boldsymbol{\alpha}]}]=\mathcal{A}_{|\boldsymbol{\alpha}|-d,\boldsymbol{\alpha}}. \tag{4}\]
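This reduction can be verified numerically for small parameters. Below is a minimal Python sketch (our own illustration, identifying monomials with their exponent tuples; the helper names are hypothetical) that forms the generalized Newton dual for the data of Example 4.14 and checks that the dual generators all sit in degree \(|\boldsymbol{\alpha}|-d\), in line with the isomorphism (4):

```python
from itertools import product

def generators(d, alpha):
    """Exponent tuples of the minimal generators of I_{d,alpha}:
    all a in N^n with a_i <= alpha_i and |a| = d."""
    return [a for a in product(*(range(x + 1) for x in alpha)) if sum(a) == d]

def newton_dual(gens, b):
    """Generalized Newton dual with respect to b: send each exponent a to b - a.
    Requires b - a to be componentwise nonnegative, as in Remark 5.4."""
    duals = [tuple(bi - ai for bi, ai in zip(b, a)) for a in gens]
    assert all(min(t) >= 0 for t in duals)
    return duals

alpha, d = (2, 2, 2, 3, 3), 8   # the data of Example 4.14
G = generators(d, alpha)
D = newton_dual(G, alpha)       # the case b = alpha
# Every dual generator has degree |alpha| - d, so I^[alpha] is again
# of Veronese type, generated in degree |alpha| - d.
assert {sum(t) for t in D} == {sum(alpha) - d}
```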
**Proposition 5.5**.: _Suppose that either \(n>d\) or \(\sum_{i=1}^{n}(\alpha_{i}-1)<d\). Let \(d^{\prime}=\min(d,|\boldsymbol{\alpha}|-d)\). Then, \(\operatorname{pd}((\operatorname{in}(J))^{\vee})=\left\lfloor n-\frac{n}{d^{\prime}}\right\rfloor\)._

Proof.: It follows from the isomorphism in Equation (4) that we can assume that \(d^{\prime}=d\) and \(n>d\). From this, we also deduce that \(2d\leq|\boldsymbol{\alpha}|\).

As the first step, we show that \(\operatorname{pd}((\operatorname{in}(J))^{\vee})\geq\left\lfloor n-\frac{n}{d}\right\rfloor\). In the extremal case when \(d=1\), since \(|\boldsymbol{\eta}|=d\), one obviously has \(\boldsymbol{\eta}=(0,\ldots,0,1)\) and \(I_{d,\boldsymbol{\alpha}}\) is the graded maximal ideal of \(S\). Whence, the claimed formula is guaranteed by [21, Theorem 4.2]. Therefore, in the following, we will assume that \(d\geq 2\). By Equation (3), we need to find a maximal clique \(A\) such that \(\omega(A)\geq\left\lfloor n-\frac{n}{d}\right\rfloor\). Suppose that \(n-1=pd+q\) such that \(p=\left\lfloor\frac{n-1}{d}\right\rfloor\) and \(0\leq q<d\). Then, \(\left\lfloor n-\frac{n}{d}\right\rfloor=n-1-p\).

Let \(\mathcal{E}\) be the equivalence class of maximal cliques, all of which start with the vertex

\[\boldsymbol{a}^{1}\coloneqq(\underbrace{1,0,\ldots,0}_{p},\ldots,\underbrace{1,0,\ldots,0}_{p},\underbrace{1,0,\ldots,0}_{p+1},\ldots,\underbrace{1,0,\ldots,0}_{p+1},0)\]

in \(\mathcal{G}(d,\boldsymbol{\alpha})\). Thus, the maximal cliques in \(\mathcal{E}\) all end with \(\boldsymbol{a}^{n}\) such that \(\Delta(\boldsymbol{a}^{1},\boldsymbol{a}^{n})=[1,n)\) by Corollary 3.10. It is clear that \(\boldsymbol{a}^{n}=(0,a_{2}^{1},\ldots,a_{n-1}^{1},1)\). If \(\alpha_{n}\geq 2\), since \(\boldsymbol{\eta}=(*,\ldots,*,\alpha_{n})\), we have \(\boldsymbol{a}^{n}\neq\boldsymbol{\eta}\). Therefore, \(\operatorname{rank}(\mathcal{E})>0\). If instead \(\alpha_{n}=1\), then \(\boldsymbol{\alpha}=(1,\ldots,1)\). Whence, \(2d\leq|\boldsymbol{\alpha}|=n\). We still have \(\operatorname{rank}(\mathcal{E})>0\). Otherwise, we will have

\[\boldsymbol{a}^{n}=(\underbrace{0,\ldots,0}_{p},\underbrace{1,0,\ldots,0}_{p},\ldots,\underbrace{1,0,\ldots,0}_{p},\underbrace{1,0,\ldots,0}_{p+1},\ldots,\underbrace{1,0,\ldots,0}_{p+1},1)=\boldsymbol{\eta}.\]

Since \(p\geq 1\), by the description of \(\boldsymbol{\eta}\) in Definition 3.3, the only possibility is \(q=0\) and \(p=1\). Whence, \(n=d+1\). But as \(n\geq 3\) and \(2d\leq n\), this is impossible.

To simplify the following proof, we write accordingly

\[(1,2,\ldots,n-1)=(\underbrace{s_{1}^{1},\ldots,s_{p}^{1}}_{p},\ldots,\underbrace{s_{1}^{d-q},\ldots,s_{p}^{d-q}}_{p},\underbrace{s_{0}^{d-q+1},\ldots,s_{p}^{d-q+1}}_{p+1},\ldots,\underbrace{s_{0}^{d},\ldots,s_{p}^{d}}_{p+1}).\]

Then, we have

\[s_{i}^{\ell}\triangleleft s_{j}^{\ell}\qquad\text{for all $i<j$ and all $\ell$} \tag{5}\]

in the poset of obstructions defined in Definition 4.8. Furthermore, we have

\[s_{1}^{i+1}\triangleleft s_{p}^{i}\qquad\text{if $1\leq i\leq d-q-1$ and $\alpha_{s_{1}^{i+1}}=1$}, \tag{6}\]

and

\[s_{0}^{j+1}\triangleleft s_{p}^{j}\qquad\text{if $d-q\leq j\leq d-1$ and $\alpha_{s_{0}^{j+1}}=1$}, \tag{7}\]

in the poset of obstructions. Indeed, they are the generating relations of that poset.
Since the Castelnuovo-Mumford regularity of \(\mathcal{A}_{d,\boldsymbol{\alpha}}\) is independent of the rules we impose in Section 4, we can further require that

\[\operatorname{sgn}(L)\coloneqq(\underbrace{s_{0}^{d},\ldots,s_{p-1}^{d}}_{p},\underbrace{s_{0}^{d-1},\ldots,s_{p}^{d-1}}_{p+1},\ldots,\underbrace{s_{0}^{d-q+1},\ldots,s_{p}^{d-q+1}}_{p+1},\underbrace{s_{1}^{d-q},\ldots,s_{p}^{d-q}}_{p},\ldots,\underbrace{s_{1}^{2},\ldots,s_{p}^{2}}_{p},s_{p}^{d},\underbrace{s_{1}^{1},\ldots,s_{p}^{1}}_{p})\]

when \(q\geq 1\). If instead \(q=0\), we then require that

\[\operatorname{sgn}(L)\coloneqq(\underbrace{s_{1}^{d},\ldots,s_{p-1}^{d}}_{p-1},\underbrace{s_{1}^{d-1},\ldots,s_{p}^{d-1}}_{p},\ldots,\underbrace{s_{1}^{2},\ldots,s_{p}^{2}}_{p},s_{p}^{d},\underbrace{s_{1}^{1},\ldots,s_{p}^{1}}_{p}).\]

In any case, the prescribed \(\operatorname{sgn}(L)\) is a legitimate signature by Lemma 4.10, since it satisfies the requirements of Equations (5), (6), (7), and Definition 4.12.

Now, let \(A\) be the maximal clique starting from \(\boldsymbol{a}^{1}\) such that

\[\operatorname{sgn}(A)\coloneqq(\underbrace{s_{0}^{d-q+1},s_{0}^{d-q+2},\ldots,\boxed{s_{0}^{d}}}_{q},\underbrace{s_{1}^{1},s_{1}^{2},\ldots,\boxed{s_{1}^{d}}}_{d},\ldots,\underbrace{s_{p-1}^{1},s_{p-1}^{2},\ldots,\boxed{s_{p-1}^{d}}}_{d},\underbrace{s_{p}^{1},s_{p}^{d},s_{p}^{2},\ldots,\boxed{s_{p}^{d-1}}}_{d}).\]

Then \(A\) is legitimate by Lemma 4.10, since it satisfies the requirements of Equations (5), (6), and (7). Note that all end positions of the underbraced segments in \(\operatorname{sgn}(A)\) are boxed. Suppose that we also write \(\operatorname{sgn}(A)=(t_{1},\ldots,t_{n-1})\). For each \(k\in[n-2]\) such that \(t_{k}\) is not boxed, we can find a tuple \(B^{k}=(\boldsymbol{b}_{1}^{k},\ldots,\boldsymbol{b}_{n}^{k})\) of elements in \(\mathbb{Z}^{n}\) such that \(\boldsymbol{b}_{1}^{k}=\boldsymbol{a}^{1}\) and

\[\operatorname{sgn}(B^{k})=(t_{1},\ldots,t_{k-1},t_{k+1},t_{k},t_{k+2},\ldots,t_{n-1}).\]

Note that \(B^{k}\) is also a legitimate maximal clique in \(\mathcal{E}\), since it satisfies the requirements of Equations (5), (6), and (7). Moreover, \(B^{k}\prec A\) due to our choice of \(\operatorname{sgn}(L)\), and \(\operatorname{diff}(A,B^{k})\) is a singleton set by Lemma 4.5. Consequently, we have a natural lower bound for the minimal number of generators:

\[\mu(\langle\boldsymbol{T}_{B^{\complement}}:B\in\mathcal{E}\text{ and }B\prec A\rangle:_{\mathbb{K}[\boldsymbol{T}]}\boldsymbol{T}_{A^{\complement}})\geq\begin{cases}n-2-p,&\text{if $q\neq 0$},\\ n-1-p,&\text{if $q=0$}\end{cases}\]

by the first part of the proof of Theorem 4.21. Notice that \(q=0\) if and only if \(\operatorname{sgn}(A)\) starts with \(1\), if and only if \(rA\) does not exist by Lemma 4.4. Thus,

\[\omega(A)=\mu(\langle\boldsymbol{T}_{B^{\complement}}:B\prec A\rangle:_{\mathbb{K}[\boldsymbol{T}]}\boldsymbol{T}_{A^{\complement}})\geq n-1-p=\left\lfloor n-\frac{n}{d}\right\rfloor\]

by the second part of the proof of Theorem 4.21. Therefore, we get \(\operatorname{pd}((\operatorname{in}(J))^{\vee})=\max_{B}\omega(B)\geq\left\lfloor n-\frac{n}{d}\right\rfloor\), as planned.

As the second step, we show that \(\operatorname{pd}((\operatorname{in}(J))^{\vee})\leq\left\lfloor n-\frac{n}{d}\right\rfloor\). For this purpose, we introduce the new tuple \(\boldsymbol{\alpha}^{\prime}=(d,\ldots,d)\).
Notice that the sets of maximal cliques satisfy \(\operatorname{MC}(\mathcal{G}(d,\boldsymbol{\alpha}))\subseteq\operatorname{MC}(\mathcal{G}(d,\boldsymbol{\alpha}^{\prime}))\). For any fixed \(A=(\boldsymbol{a}^{1},\ldots,\boldsymbol{a}^{n})\in\operatorname{MC}(\mathcal{G}(d,\boldsymbol{\alpha}))\), let \(\mathcal{E}(\boldsymbol{\alpha})\) (resp. \(\mathcal{E}(\boldsymbol{\alpha}^{\prime})\)) be the equivalence class in \(\mathcal{G}(d,\boldsymbol{\alpha})\) (resp. \(\mathcal{G}(d,\boldsymbol{\alpha}^{\prime})\)) to which \(A\) belongs. Let \(\boldsymbol{\eta}\) (resp. \(\boldsymbol{\eta}^{\prime}\)) be the tuple given in Definition 3.3 for \(\boldsymbol{\alpha}\) (resp. \(\boldsymbol{\alpha}^{\prime}\)). It is clear that the poset of obstructions of \(\mathcal{E}(\boldsymbol{\alpha}^{\prime})\) is a subposet of that of \(\mathcal{E}(\boldsymbol{\alpha})\). Let \(\kappa_{1}\) and \(\kappa_{2}\) (resp. \(\kappa_{1}^{\prime}\) and \(\kappa_{2}^{\prime}\)) be the indices defined in Definition 4.12 for \(\boldsymbol{\alpha}\) (resp. \(\boldsymbol{\alpha}^{\prime}\)); then we have \(\kappa_{1}=\kappa_{1}^{\prime}\) by definition. As for \(\kappa_{2}\) and \(\kappa_{2}^{\prime}\), notice first that \(a_{n}^{1}\leq\alpha_{n}^{\prime}-1=d-1\) and \(|\boldsymbol{a}^{1}|=d\). If \(a_{n}^{1}=d-1\), we must have \(\boldsymbol{a}^{1}=(1,0,\ldots,0,d-1)\) and \(\boldsymbol{a}^{n}=(0,\ldots,0,d)=\boldsymbol{\eta}=\boldsymbol{\eta}^{\prime}\). Whence, \(\mathcal{E}(\boldsymbol{\alpha})=\mathcal{E}(\boldsymbol{\alpha}^{\prime})\) has rank \(0\) and contains exactly one maximal clique: \[A=((1,0,\ldots,0,d-1),(0,1,0,\ldots,0,d-1),\ldots,(0,\ldots,0,d)).\] In particular, \(\omega_{\boldsymbol{\alpha}}(A)=0=\omega_{\boldsymbol{\alpha}^{\prime}}(A)\). On the other hand, if \(a_{n}^{1}<d-1\), then \(\kappa_{2}^{\prime}\) does not exist. As a result, the special \(L\) we designate to \(\mathcal{E}(\boldsymbol{\alpha})\) in Definition 4.12 also works for the equivalence class \(\mathcal{E}(\boldsymbol{\alpha}^{\prime})\). Consequently, we have \[\{\,B\in\operatorname{MC}(\mathcal{G}(d,\boldsymbol{\alpha})):B\prec_{\boldsymbol{\alpha}}A\,\}=\{\,B\in\operatorname{MC}(\mathcal{G}(d,\boldsymbol{\alpha}^{\prime})):B\prec_{\boldsymbol{\alpha}^{\prime}}A\,\}\cap\operatorname{MC}(\mathcal{G}(d,\boldsymbol{\alpha})),\] and \(\omega_{\boldsymbol{\alpha}}(A)\leq\omega_{\boldsymbol{\alpha}^{\prime}}(A)\) by Equation (2). Then it is easy to deduce that \[\operatorname{pd}((\operatorname{in}(J(\boldsymbol{\alpha})))^{\vee})=\max\limits_{A\in\operatorname{MC}(\mathcal{G}(d,\boldsymbol{\alpha}))}\omega_{\boldsymbol{\alpha}}(A)\leq\max\limits_{A\in\operatorname{MC}(\mathcal{G}(d,\boldsymbol{\alpha}))}\omega_{\boldsymbol{\alpha}^{\prime}}(A)\leq\max\limits_{A\in\operatorname{MC}(\mathcal{G}(d,\boldsymbol{\alpha}^{\prime}))}\omega_{\boldsymbol{\alpha}^{\prime}}(A)=\operatorname{pd}((\operatorname{in}(J(\boldsymbol{\alpha}^{\prime})))^{\vee}).\] Since \(\operatorname{pd}((\operatorname{in}(J(\boldsymbol{\alpha}^{\prime})))^{\vee})=\operatorname{reg}(\mathcal{A}_{d,\boldsymbol{\alpha}^{\prime}})=\left\lfloor n-\frac{n}{d}\right\rfloor\) by [21, Theorem 4.2], this completes the proof. We can summarize the above results and state the first main theorem of this section. **Theorem 5.6**.: _Let \(\mathcal{A}_{d,\boldsymbol{\alpha}}\) be the Veronese type algebra in Setting 2.3. Set \(d^{\prime}=\min(d,|\boldsymbol{\alpha}|-d)\). 
Then, \(\operatorname{reg}(\mathcal{A}_{d,\boldsymbol{\alpha}})=\operatorname{pd}((\operatorname{in}(J))^{\vee})=\left\lfloor n-\frac{n}{d^{\prime}}\right\rfloor\)._ Proof.: Let \(J\) be the presentation ideal of \(\mathcal{A}_{d,\boldsymbol{\alpha}}\). From Remark 2.5, [3, Corollary 2.7] and [14, Proposition 8.1.10], we derive that \(\operatorname{reg}(\mathcal{A}_{d,\boldsymbol{\alpha}})=\operatorname{reg}(\mathbb{K}[\boldsymbol{T}]/J)=\operatorname{reg}(\mathbb{K}[\boldsymbol{T}]/\operatorname{in}(J))=\operatorname{pd}((\operatorname{in}(J))^{\vee})\). The formulas then follow from Propositions 5.3 and 5.5. Recall that the \(\mathsf{a}\)_-invariant_ of an algebra was introduced by Goto and Watanabe in [12, Definition 3.1.4]. Since the Veronese type algebra \(\mathcal{A}_{d,\boldsymbol{\alpha}}\) is Cohen-Macaulay by Remark 2.5, we have \[\mathsf{a}(\mathcal{A}_{d,\boldsymbol{\alpha}})=\operatorname{reg}(\mathcal{A}_{d,\boldsymbol{\alpha}})-\operatorname{dim}(\mathcal{A}_{d,\boldsymbol{\alpha}}) \tag{8}\] in view of the equivalent definition of Castelnuovo-Mumford regularity in [22, Definitions 1 and 3]. Notice that the dimension and the Castelnuovo-Mumford regularity of \(\mathcal{A}_{d,\boldsymbol{\alpha}}\) are known by Proposition 2.6 and Theorem 5.6, respectively. Therefore, we obtain the \(\mathsf{a}\)-invariant of this algebra for free. Moreover, the _reduction number_ of the ideal \(I_{d,\boldsymbol{\alpha}}\) and the Castelnuovo-Mumford regularity of the algebra \(\mathcal{A}_{d,\boldsymbol{\alpha}}\) are equal by the following lemma. For the definition and further discussion of the reduction numbers of ideals, see [17, Section 8.2]. **Lemma 5.7** ([4, Proposition 6.6] or [11, Proposition 1.2]).: _Let \(I\) be an equigenerated monomial ideal in some polynomial ring over a field \(\mathbb{K}\). Assume that the algebra \(\mathbb{K}[I]\) is Cohen-Macaulay and the field \(\mathbb{K}\) is infinite. Then \(I\) has the reduction number \(\operatorname{\mathsf{r}}(I)=\operatorname{reg}(\mathbb{K}[I])\)._ **Corollary 5.8**.: _Let \(\mathcal{A}_{d,\boldsymbol{\alpha}}=\mathbb{K}[I_{d,\boldsymbol{\alpha}}]\) be the Veronese type algebra in Setting 2.3. Assume that \(\mathbb{K}\) is an infinite field and set \(d^{\prime}=\min(d,|\boldsymbol{\alpha}|-d)\). Then, \(\operatorname{\mathsf{r}}(I)=\left\lfloor n-\frac{n}{d^{\prime}}\right\rfloor\) and \(\mathsf{a}(\mathcal{A}_{d,\boldsymbol{\alpha}})=-\left\lceil\frac{n}{d^{\prime}}\right\rceil\)._ Proof.: The algebra \(\mathcal{A}_{d,\boldsymbol{\alpha}}\) is Cohen-Macaulay by Remark 2.5. Thus, the statements follow from Proposition 2.6, Theorem 5.6, Lemma 5.7, and Equation (8). ### Multiplicity bound of the algebra We conclude this work with a reasonable upper bound on the multiplicity of the Veronese type algebra \(\mathcal{A}_{d,\boldsymbol{\alpha}}\). To begin with, we count the number of generators of the ideal \(I_{d,\boldsymbol{\alpha}}\), the number of different equivalence classes of maximal cliques, and the number of equivalence classes that have precisely \((n-1)!\) maximal cliques. 
**Lemma 5.9**.: _The dimension of the polynomial ring \(\mathbb{K}[\boldsymbol{T}]\coloneqq\mathbb{K}[T_{\boldsymbol{a}}:\boldsymbol{a}\in V_{n,d}^{\boldsymbol{\alpha}}]\) is given by_ \[\dim(\mathbb{K}[\boldsymbol{T}])=\sum_{i\geq 0}(-1)^{i}\sum_{\begin{subarray}{c}P\subseteq[n],\\ \#P=i\end{subarray}}\binom{d-\sum_{p\in P}(\alpha_{p}+1)+n-1}{n-1}.\] _Moreover, there are_ \[G\coloneqq\sum_{i,j\geq 0}(-1)^{i+j}\sum_{\begin{subarray}{c}P\subseteq\{2,3,\ldots,n-1\},\,Q\subseteq\{1,n\},\\ \#P=i,\,\#Q=j\end{subarray}}\binom{d-\sum_{p\in P}(\alpha_{p}+1)-\sum_{q\in Q}\alpha_{q}+n-2}{n-1}\] _different equivalence classes of maximal cliques in the graph \(\mathcal{G}(d,\boldsymbol{\alpha})\), and there are_ \[H\coloneqq\sum_{i,j\geq 0}(-1)^{i+j}\sum_{\begin{subarray}{c}P\subseteq\{2,3,\ldots,n-1\},\,Q\subseteq\{1,n\},\\ \#P=i,\,\#Q=j\end{subarray}}\binom{d-\sum_{p\in P}(\alpha_{p}-1)-\sum_{q\in Q}\alpha_{q}}{n-1}\] _equivalence classes which have precisely \((n-1)!\) maximal cliques. In particular, there exist at most \(H\cdot(n-1)!+(G-H)\cdot(n-1)!/2\) different maximal cliques._ Proof.: It is clear that \(\dim(\mathbb{K}[\boldsymbol{T}])=\#V_{n,d}^{\boldsymbol{\alpha}}\) is equal to the number of ways to write \(a_{1}+\cdots+a_{n}=d\) such that \(0\leq a_{i}\leq\alpha_{i}\). Furthermore, by Corollary 3.10 and Lemma 4.3, the number \(G\) of equivalence classes of maximal cliques is the number of ways to have \(a_{1}+\cdots+a_{n}=d\), under the conditions that \(0\leq a_{i}\leq\alpha_{i}\) for \(i=2,\ldots,n-1\), \(1\leq a_{1}\leq\alpha_{1}\), and \(0\leq a_{n}\leq\alpha_{n}-1\). It is not hard to see that \(G=\#V_{n,d-1}^{\boldsymbol{\alpha}^{\prime}}\), where \(\boldsymbol{\alpha}^{\prime}=(\alpha_{1}-1,\alpha_{2},\alpha_{3},\ldots,\alpha_{n-1},\alpha_{n}-1)\). Similarly, by Lemma 5.2, finding the number \(H\) is counting how many possible ways one can write \(a_{1}+\cdots+a_{n}=d\), subject to the conditions \(1\leq a_{i}\leq\alpha_{i}-1\) for \(i=2,\ldots,n-1\), \(1\leq a_{1}\leq\alpha_{1}\), and \(0\leq a_{n}\leq\alpha_{n}-1\). It is not difficult to see that \(H=\#V_{n,d-(n-1)}^{\boldsymbol{\alpha}^{\prime\prime}}\), where \(\boldsymbol{\alpha}^{\prime\prime}=(\alpha_{1}-1,\alpha_{2}-2,\alpha_{3}-2,\ldots,\alpha_{n-1}-2,\alpha_{n}-1)\). The three formulas given above then follow from the classical _stars and bars method_ and the _inclusion-exclusion principle_ in combinatorics. Notice that if an equivalence class \(\mathcal{E}\) does not have \((n-1)!\) maximal cliques, then its poset of obstructions is not trivial by Lemma 5.2. Say, we have \(p\triangleleft q\) in this poset. Then, for any \(A\in\mathcal{E}\), we have \(p\) preceding \(q\) in \(\operatorname{sgn}(A)\). Thus, \[\#\mathcal{E}=\#\left\{\,\operatorname{sgn}(A):A\in\mathcal{E}\,\right\}\leq(n-1)!/2,\] namely, \(\mathcal{E}\) contains at most \((n-1)!/2\) maximal cliques. The "in particular" part then follows. For a homogeneous \(\mathbb{K}\)-algebra \(R\), let \(\mathsf{e}(R)\) denote its _multiplicity_ with respect to its graded maximal ideal. This number is clear in the squarefree case, by the work of Terai [24]. **Lemma 5.10** ([24, Lemma 4.1]).: _Let \(\mathfrak{a}\) be a squarefree monomial ideal in a polynomial ring \(A\). 
Then \(\mathsf{e}(A/\mathfrak{a})=\beta_{1,h_{1}}(A/\mathfrak{a}^{\vee})\), where \(h_{1}=\operatorname{indeg}(\mathfrak{a}^{\vee})\) is the initial degree of the Alexander dual ideal \(\mathfrak{a}^{\vee}\)._ **Corollary 5.11**.: _The multiplicity of \(\mathcal{A}_{d,\boldsymbol{\alpha}}\) is equal to \(\#\operatorname{MC}(\mathcal{G}(d,\boldsymbol{\alpha}))\)._ Proof.: Note that \(\mathsf{e}(\mathcal{A}_{d,\boldsymbol{\alpha}})\) can be calculated from the Hilbert series of \(\mathcal{A}_{d,\boldsymbol{\alpha}}\cong\mathbb{K}[\boldsymbol{T}]/J\). Meanwhile, the Hilbert series of \(\mathbb{K}[\boldsymbol{T}]/J\) and \(\mathbb{K}[\boldsymbol{T}]/\operatorname{in}(J)\) coincide. Thus, thanks to Lemma 5.10, in order to find \(\mathsf{e}(\mathcal{A}_{d,\boldsymbol{\alpha}})\), we only need to compute the minimal number of generators of the equigenerated squarefree monomial ideal \((\operatorname{in}(J))^{\vee}\). This number is obviously the number of maximal cliques of \(\mathcal{G}(d,\boldsymbol{\alpha})\). In addition, Terai gave the following upper bound on multiplicity. **Lemma 5.12** ([24, Theorem 4.2]).: _Let \(R=A/\mathfrak{a}\) be a homogeneous \(\mathbb{K}\)-algebra of codimension \(g\geq 2\). Then_ \[\mathsf{e}(R)\leq\binom{\operatorname{reg}(R)+g}{g}-\binom{\operatorname{reg}(R)-\operatorname{indeg}(\mathfrak{a})+g}{g}.\] Relatedly, Eisenbud and Goto in [8] made a conjecture linking Castelnuovo-Mumford regularity and multiplicity. Although a counterexample was recently given by McCullough and Peeva in [20], the statement of the original conjecture still holds in the Cohen-Macaulay case. **Lemma 5.13** ([7, Corollary 4.15]).: _Suppose that \(A\) is a polynomial ring over an algebraically closed field. If \(\mathfrak{a}\) is a nondegenerate homogeneous prime ideal in \(A\) and \(A/\mathfrak{a}\) is Cohen-Macaulay, then_ \[\operatorname{reg}(A/\mathfrak{a})\leq\mathsf{e}(A/\mathfrak{a})-\operatorname{codim}(\mathfrak{a}).\] We are ready to state our final result regarding the multiplicity of \(\mathcal{A}_{d,\boldsymbol{\alpha}}\). **Theorem 5.14**.: _Suppose that \(\mathcal{A}_{d,\boldsymbol{\alpha}}=\mathbb{K}[I_{d,\boldsymbol{\alpha}}]\) is the Veronese type algebra of Setting 2.3 whose presentation ideal \(J\) is not zero. Let \(r\coloneqq\operatorname{reg}(\mathcal{A}_{d,\boldsymbol{\alpha}})\) and \(t\coloneqq\dim(\mathbb{K}[\boldsymbol{T}])\), and let \(G\) and \(H\) be the numbers computed in Theorem 5.6 and Lemma 5.9. Furthermore, we write \(d_{1}\coloneqq d\) and \(d_{2}\coloneqq\sum_{i=1}^{n}\alpha_{i}-d\). Then, we have_ \[r+t-n\leq\mathsf{e}(\mathcal{A}_{d,\boldsymbol{\alpha}})\leq\min\Bigg\{H\cdot(n-1)!+\frac{(G-H)\cdot(n-1)!}{2},\ d_{1}^{n-1},\ d_{2}^{n-1},\ \binom{r+t-n}{t-n}-\binom{r-2+t-n}{t-n}\Bigg\}.\] Proof.: The codimension of \(J\) is \(t-n\) by Proposition 2.6. Since we can replace \(\mathbb{K}\) by its algebraic closure, the first inequality follows from Lemma 5.13. As for the second inequality, we first apply Lemma 5.9 and Corollary 5.11. In addition, notice that the multiplicity in the Veronese case is well-known: \(\mathsf{e}(\mathcal{A}_{d_{1},(d_{1},\ldots,d_{1})})=d_{1}^{n-1}\). Since obviously \(\operatorname{MC}(\mathcal{G}(d,\boldsymbol{\alpha}))\subseteq\operatorname{MC}(\mathcal{G}(d_{1},(d_{1},\ldots,d_{1})))\), we have \(\mathsf{e}(\mathcal{A}_{d,\boldsymbol{\alpha}})\leq d_{1}^{n-1}\) by Corollary 5.11. 
Similarly, since \(\mathcal{A}_{d,\boldsymbol{\alpha}}\cong\mathcal{A}_{d_{2},\boldsymbol{\alpha}}\), we also have \(\mathsf{e}(\mathcal{A}_{d,\boldsymbol{\alpha}})\leq d_{2}^{n-1}\). As for the remaining piece, we observe that the presentation ideal \(J\) and its initial ideal are quadratic by Remark 2.5. Thus, when \(\operatorname{codim}(J)\geq 2\), we can apply Lemma 5.12. If instead \(\operatorname{codim}(J)=1\), since \(\operatorname{in}(J)\) is height-unmixed, squarefree, and of the same height, this initial ideal is the intersection of principal monomial prime ideals. Consequently, the quadratic ideal \(\operatorname{in}(J)\) is generated by a squarefree monomial of degree \(2\). In particular, \(\mathsf{e}(\mathcal{A}_{d,\boldsymbol{\alpha}})=\mathsf{e}(\mathbb{K}[\boldsymbol{T}]/\operatorname{in}(J))=2\). On the other hand, \[\binom{r+t-n}{t-n}-\binom{r-2+t-n}{t-n}=\binom{r+1}{1}-\binom{r-2+1}{1}=2,\] which means that we have equality in this case. **Example 5.15**.: Let \(\boldsymbol{\alpha}=(1,4,4,5,7)\) and \(d=7\). Using the notation in Theorem 5.14, we have that \(d_{2}^{4}=(21-7)^{4}=14^{4}=38416>d_{1}^{4}=7^{4}=2401\). By Lemma 5.9, we have \(t=171\), \(G=75\), and \(H=18\). Therefore, \(H\cdot(5-1)!+(G-H)\cdot(5-1)!/2=1116\). By Theorem 5.6, we have \(r=\operatorname{reg}(\mathcal{A}_{d,\boldsymbol{\alpha}})=\left\lfloor 5-\frac{5}{7}\right\rfloor=4\). Thus, \(r+t-n=4+171-5=170\) and \[\binom{r+t-n}{t-n}-\binom{r-2+t-n}{t-n}=\binom{4+171-5}{171-5}-\binom{4-2+171-5}{171-5}=33571342.\] It follows from Theorem 5.14 that \[170\leq\mathsf{e}(\mathcal{A}_{d,\boldsymbol{\alpha}})\leq 1116.\] By directly enumerating the maximal cliques, we can check that \(\mathsf{e}(\mathcal{A}_{d,\boldsymbol{\alpha}})=960\) by Corollary 5.11. _Acknowledgment_.: The authors are grateful to the software system Macaulay2 [13] for serving as an excellent source of inspiration. The second author is partially supported by the "Anhui Initiative in Quantum Information Technologies" (No. AHY150200) and the "Innovation Program for Quantum Science and Technology" (2021ZD0302902).
2309.01554
Rogue wave patterns associated with Adler-Moser polynomials in the nonlinear Schrödinger equation
We report new rogue wave patterns in the nonlinear Schr\"{o}dinger equation. These patterns include heart-shaped structures, fan-shaped sectors, and many others, that are formed by individual Peregrine waves. They appear when multiple internal parameters in the rogue wave solutions get large. Analytically, we show that these new patterns are described asymptotically by root structures of Adler-Moser polynomials through a dilation. Since Adler-Moser polynomials are generalizations of the Yablonskii-Vorob'ev polynomial hierarchy and contain free complex parameters, these new rogue patterns associated with Adler-Moser polynomials are much more diverse than previous rogue patterns associated with the Yablonskii-Vorob'ev polynomial hierarchy. We also compare analytical predictions of these patterns to true solutions and demonstrate good agreement between them.
Bo Yang, Jianke Yang
2023-09-04T12:08:54Z
http://arxiv.org/abs/2309.01554v1
# Rogue wave patterns associated with Adler-Moser polynomials in the nonlinear Schrodinger equation ###### Abstract We report new rogue wave patterns in the nonlinear Schrodinger equation. These patterns include heart-shaped structures, fan-shaped sectors, and many others that are formed by individual Peregrine waves. They appear when multiple internal parameters in the rogue wave solutions get large. Analytically, we show that these new patterns are described asymptotically by root structures of Adler-Moser polynomials through a dilation. Since Adler-Moser polynomials are generalizations of the Yablonskii-Vorob'ev polynomial hierarchy and contain free complex parameters, these new rogue patterns associated with Adler-Moser polynomials are much more diverse than previous rogue patterns associated with the Yablonskii-Vorob'ev polynomial hierarchy. We also compare analytical predictions of these patterns to true solutions and demonstrate good agreement between them. ## I Introduction Rogue waves are unusually large, mysterious, and suddenly appearing surface water waves that can be dangerous even to large ships [1]. Their counterparts in optics and many other physical fields have also been reported [2; 3; 4]. Due to their unexpected nature and potential damage, rogue waves have been heavily studied in the physical and mathematical communities in recent years. Physically, many laboratory experiments on rogue waves have been performed in diverse fields such as optical fibers [5; 6; 7; 8], water tanks [9; 10; 11; 12], superfluid helium [13], plasma [14; 15], and Bose-Einstein condensates [16]. Mathematically, rogue wave studies have been greatly facilitated by the fact that many integrable equations that govern diverse physical processes, such as the nonlinear Schrodinger (NLS) equation for wave packet evolution in deep water, optical fibers, plasma, and Bose-Einstein condensates [17; 18; 19; 20; 21; 22; 23], and the Manakov system for light transmission in randomly birefringent optical fibers [24], admit explicit solutions that exhibit rogue-wave characteristics. The first such solution was reported by Peregrine in the NLS equation [25]. This Peregrine solution starts from a slightly perturbed constant-amplitude background wave. Then, it develops a localized peak that is three times the height of the background. Afterwards, this peak decays and merges into the background again. This is a rogue wave since its transient peak is much higher than its original wave amplitude, and this peak appears and disappears unexpectedly. It turns out that the Peregrine solution is just the simplest (fundamental) rogue wave in the NLS equation. More intricate NLS rogue waves that could reach even higher transient peak amplitudes or exhibit multiple peaks were later discovered [26; 27; 28; 29; 30; 31; 32]. In addition, rogue waves were also derived for many other integrable systems, such as the Manakov system [33; 34; 35; 36; 37; 38]. These analytical expressions of rogue waves shed much light on the intricate rogue wave dynamics and guided their observations in laboratory experiments [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 16]. Pattern formation in rogue waves is an important question, since such information allows prediction of later rogue wave shapes from earlier rogue wave forms. One of the simplest rogue patterns is a rogue triplet, which comprises three fundamental rogue waves forming a triangle in the space-time plane. 
Such rogue triplets have been reported theoretically in many integrable equations [37; 36; 32; 27; 28] and observed experimentally in both water tanks and randomly birefringent optical fibers [8; 12]. Beyond rogue triplets, more sophisticated rogue patterns comprising Peregrine waves forming shapes such as pentagons, heptagons, and rings have also been reported for the NLS equation [39; 40; 41; 29]. In addition, it was revealed in [41] that those sophisticated patterns would reliably arise when one of the internal parameters in NLS rogue wave solutions gets large, and their shapes are predicted asymptotically by root structures of the Yablonskii-Vorob'ev polynomial hierarchy through a dilation and rotation (each nonzero root of the hierarchy predicts the spatial-temporal location of a Peregrine wave). In [42], it was further revealed that those NLS rogue patterns associated with root structures of the Yablonskii-Vorob'ev polynomial hierarchy are universal and would arise in many other integrable systems, such as the derivative NLS equations, the Boussinesq equation, and the Manakov system. Given the importance of the NLS equation in diverse physical disciplines, an important question is whether this equation admits other types of rogue patterns. If so, what special polynomials can be used to predict those patterns? In this paper, we show that the NLS equation does admit many new rogue patterns. These patterns arise when _multiple_ internal parameters in rogue wave solutions get large, and their shapes are predicted asymptotically by root structures of Adler-Moser polynomials through a simple dilation. Adler-Moser polynomials are generalizations of the Yablonskii-Vorob'ev polynomial hierarchy, and their root structures are much more diverse. As a consequence, we report many new rogue patterns in the NLS equation, such as heart-shaped structures, fan-shaped sectors, and others. Our analytical predictions for these new patterns based on root structures of Adler-Moser polynomials are compared to true rogue solutions, and excellent agreement is demonstrated. ## II Preliminaries The NLS equation is \[\mathrm{i}u_{t}+\frac{1}{2}u_{xx}+|u|^{2}u=0. \tag{1}\] This equation governs nonlinear wave packet evolution in numerous physical systems such as deep water, optical fibers, plasma, and two-component Bose-Einstein condensates [17; 18; 19; 20; 23]. ### General bilinear rogue wave solutions Rogue wave solutions in this NLS equation have been derived by various methods before [27; 30; 31; 32]. The most explicit forms of these rogue wave solutions are the ones that were derived by the bilinear method in [31] and then further simplified in [41]. 
These simplified bilinear rogue wave solutions of the \(N\)-th order are \[u_{N}(x,t)=\frac{\sigma_{1}}{\sigma_{0}}e^{\mathrm{i}t}, \tag{2}\] \[\sigma_{n}=\det_{1\leq i,j\leq N}\left(\,\phi_{2i-1,2j-1}^{(n)}\,\right), \tag{3}\] \[\phi_{i,j}^{(n)}=\sum_{\nu=0}^{\min(i,j)}\frac{1}{4^{\nu}}\,S_{i-\nu}(\mathbf{x}^{ +}(n)+\nu\mathbf{s})\,S_{j-\nu}(\mathbf{x}^{-}(n)+\nu\mathbf{s}), \tag{4}\] \[x_{1}^{\pm}=x\pm\mathrm{i}t\pm n,\quad x_{2k}^{\pm}=0, \tag{5}\] \[x_{2k+1}^{+}=\frac{x+2^{2k}(\mathrm{i}t)}{(2k+1)!}+a_{2k+1},\quad x_{2k+1}^{- }=(x_{2k+1}^{+})^{*}, \tag{6}\] where \(S_{k}(\mathbf{x})\) with \(\mathbf{x}=(x_{1},x_{2},\ldots)\) are Schur polynomials defined by the generating function \[\sum_{k=0}^{\infty}S_{k}(\mathbf{x})\epsilon^{k}=\exp\left(\sum_{k=1}^{\infty}x_{ k}\epsilon^{k}\right), \tag{7}\] \(\mathbf{s}=(0,s_{2},0,s_{4},\cdots)\) are coefficients from the expansion \[\sum_{j=1}^{\infty}s_{j}\lambda^{j}=\ln\left[\frac{2}{\lambda}\tanh\left( \frac{\lambda}{2}\right)\right], \tag{8}\] the asterisk * represents complex conjugation, and \(a_{3},a_{5},\cdots,a_{2N-1}\) are free irreducible complex parameters which control the shape of this rogue wave solution. When \(N=1\), the above solution is \(u_{1}(x,t)=\hat{u}_{1}(x,t)\,e^{\mathrm{i}t}\), where \[\hat{u}_{1}(x,t)=1-\frac{4(1+2\mathrm{i}t)}{1+4x^{2}+4t^{2}}. \tag{9}\] This is the fundamental rogue wave in the NLS equation that was discovered by Peregrine in [25]. ### Adler-Moser polynomials and their root structures Adler-Moser polynomials were proposed by Adler and Moser [43], who expressed rational solutions of the Korteweg-de Vries equation in terms of those polynomials. In a different context of point vortex dynamics, it was discovered unexpectedly that the zeros of these polynomials also form stationary vortex configurations when the vortices have the same strength but positive or negative orientations, and the numbers of those positive and negative vortices are consecutive triangular numbers [44; 45]. Adler-Moser polynomials \(\Theta_{N}(z)\) can be written as a determinant [45] \[\Theta_{N}(z)=c_{N}\left|\begin{array}{cccc}\theta_{1}(z)&\theta_{0}(z)& \cdots&\theta_{2-N}(z)\\ \theta_{3}(z)&\theta_{2}(z)&\cdots&\theta_{4-N}(z)\\ \vdots&\vdots&\vdots&\vdots\\ \theta_{2N-1}(z)&\theta_{2N-2}(z)&\cdots&\theta_{N}(z)\end{array}\right|, \tag{10}\] where \(\theta_{k}(z)\) are Schur polynomials defined by \[\sum_{k=0}^{\infty}\theta_{k}(z)\epsilon^{k}=\exp\left(z\epsilon+\sum_{j=1}^{ \infty}\kappa_{j}\epsilon^{2j+1}\right), \tag{11}\] \(\theta_{k}(z)\equiv 0\) if \(k<0\), \(c_{N}=\prod_{j=1}^{N}(2j-1)!!\), and \(\kappa_{j}\,(j\geq 1)\) are arbitrary complex constants. Note that our \(\kappa_{j}\) constant is slightly different from that in [45] by a factor of \(-1/(2j+1)\), and this different parameter definition will be more convenient for our purpose. The determinant in (10) is a Wronskian since we can see from Eq. (11) that \(\theta_{k}^{\prime}(z)=\theta_{k-1}(z)\), where the prime denotes differentiation. In addition, these \(\Theta_{N}(z)\) polynomials are monic with degree \(N(N+1)/2\), which can be seen by noticing that the highest \(z\) term of \(\theta_{k}(z)\) is \(z^{k}/k!\), and the determinant in (10) with \(\theta_{k}(z)\) replaced by its highest \(z\) term can be explicitly calculated as \(z^{N(N+1)/2}\)[31]. Adler-Moser polynomials reduce to the Yablonskii-Vorob'ev polynomial hierarchy when all \(\kappa_{j}\) constants are set as zero except for one of them [41]. 
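The determinant definition (10), together with the generating function (11), is straightforward to evaluate symbolically. The following minimal sympy sketch (our illustration, not part of the original derivation) constructs \(\Theta_{N}(z)\) for small \(N\); setting all but one \(\kappa_{j}\) to zero in it reproduces Yablonskii-Vorob'ev hierarchy polynomials, in line with the reduction just mentioned.

```python
# Illustration: build Adler-Moser polynomials Theta_N(z) from the
# determinant (10), with theta_k(z) taken from the generating function (11).
import sympy as sp

z, eps = sp.symbols('z epsilon')
kappa = sp.symbols('kappa1:8')  # kappa_1, ..., kappa_7

def theta(k, terms=3):
    """theta_k(z): coefficient of eps**k in exp(z*eps + sum_j kappa_j*eps**(2j+1))."""
    if k < 0:
        return sp.Integer(0)
    arg = z * eps + sum(kappa[j] * eps**(2 * j + 3) for j in range(terms))
    poly = sp.expand(sp.series(sp.exp(arg), eps, 0, k + 1).removeO())
    return poly.coeff(eps, k)

def adler_moser(N):
    """Theta_N(z) as c_N times the N x N Wronskian-type determinant in (10)."""
    cN = sp.Integer(1)
    for j in range(1, N + 1):
        cN *= sp.factorial2(2 * j - 1)  # c_N = prod of (2j-1)!!
    M = sp.Matrix(N, N, lambda i, j: theta(2 * i + 1 - j))
    return sp.expand(cN * M.det())

print(adler_moser(2))  # z**3 - 3*kappa1
print(adler_moser(3))  # z**6 - 15*kappa1*z**3 + 45*kappa2*z - 45*kappa1**2
```

Its output matches the explicit low-order polynomials listed below.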
Thus, we can view Adler-Moser polynomials as generalizations of the Yablonskii-Vorob'ev polynomial hierarchy. The first few Adler-Moser polynomials are \[\Theta_{1}(z) =z,\] \[\Theta_{2}(z) =z^{3}-3\kappa_{1},\] \[\Theta_{3}(z) =z^{6}-15\kappa_{1}z^{3}+45\kappa_{2}z-45\kappa_{1}^{2},\] \[\Theta_{4}(z) =z^{10}-45\kappa_{1}z^{7}+315\kappa_{2}z^{5}-1575\kappa_{3}z^{3}\] \[+4725\kappa_{1}\kappa_{2}z^{2}-4725\kappa_{1}^{3}z-4725\kappa_{2} ^{2}+4725\kappa_{1}\kappa_{3}.\] Root structures of Adler-Moser polynomials are important to us, since we will link them to rogue wave patterns in the later text. Due to the free complex parameters \(\{\kappa_{j}\}\) in them, their root structures will be understandably very diverse -- much more diverse than root structures of Yablonskii-Vorob'ev hierarchy polynomials. Indeed, when setting all \(\{\kappa_{j}\}\) as zero except for one of them, we get root structures of Yablonskii-Vorob'ev hierarchy polynomials which are in the shape of triangles, pentagons, heptagons, and so on. When we continuously change those \(\{\kappa_{j}\}\) values, we will get root structures which smoothly deform from one type of Yablonskii-Vorob'ev root structure to another, such as from a triangle to a pentagon. In this process, uncountably infinite new root shapes will be generated. These roots are generically simple roots. Indeed, if a root happens to be a multiple root, it will split into simple roots when the complex parameters \(\{\kappa_{j}\}\) are slightly perturbed. For this reason, we will focus on the case when all roots of \(\Theta_{N}(z)\) are simple in this article. In this case, \(\Theta_{N}(z)\) will have \(N(N+1)/2\) roots. Of the uncountably infinite root structures of Adler-Moser polynomials, we illustrate only three of them for brevity. These three samples are for \(\Theta_{5}(z;\kappa_{1},\kappa_{2},\kappa_{3},\kappa_{4})\), with three sets of \((\kappa_{1},\kappa_{2},\kappa_{3},\kappa_{4})\) values as \[(\mathrm{i,i,i,i}),\;(\mathrm{5i/3,i,5i/7,-5i/9}),\;(1,1,1,1). \tag{12}\] Their root structures are displayed in Fig. 1(a, b, c), respectively. In these panels, every root is a simple root. The (a) panel shows a heart-shaped structure, (b) shows a fan-shaped circular sector, and (c) shows a two-arc structure combined with a triangle. ## III Analytical predictions for rogue patterns with multiple large internal parameters Rogue wave solutions \(u_{N}(x,t)\) in Eq. (2) contain \(N-1\) free internal complex parameters \(a_{3},a_{5},\cdots,a_{2N-1}\). If only one of those parameters is large, then the resulting rogue pattern is predicted by root structures of Yablonskii-Vorob'ev hierarchy polynomials, see [41]. In this section, we consider patterns of these rogue solutions when _multiple_ of these internal parameters are large. Specifically, suppose parameters \(a_{3},a_{5},\cdots,a_{2N-1}\) in \(u_{N}(x,t)\) are of the following form \[a_{2j+1}=\kappa_{j}\,A^{2j+1},\quad 1\leq j\leq N-1, \tag{13}\] where \(A\gg 1\) is a large positive constant, and \((\kappa_{1},\kappa_{2},\ldots,\kappa_{N-1})\) are \(O(1)\) complex constants not being all zero. Suppose also that roots of the Adler-Moser polynomial \(\Theta_{N}(z)\) with parameters \(\{\kappa_{j}\}\) are all simple. Then, our analytical prediction on the pattern of this rogue wave solution \(u_{N}(x,t)\) is given by the following theorem. **Theorem 1.**_If all roots of \(\Theta_{N}(z)\) are simple, then the \(N\)-th order rogue wave \(u_{N}(x,t)\) in Eq. 
(2) with its internal large parameters \((a_{3},a_{5},\cdots,a_{2N-1})\) as given by Eq. (13) would asymptotically split into \(N(N+1)/2\) fundamental (Peregrine) rogue waves of the form \(\hat{u}_{1}(x-\hat{x}_{0},t-\hat{t}_{0})\,e^{it}\), where \(\hat{u}_{1}(x,t)\) is given in Eq. (9), and positions \((\hat{x}_{0},\hat{t}_{0})\) of these Peregrine waves are given by_ \[\hat{x}_{0}+\mathrm{i}\,\hat{t}_{0}=z_{0}A, \tag{14}\] _with \(z_{0}\) being every one of the \(N(N+1)/2\) simple roots of \(\Theta_{N}(z)\). The error of this Peregrine wave approximation is \(O(A^{-1})\). Expressed mathematically, when \((x-\hat{x}_{0})^{2}+(t-\hat{t}_{0})^{2}=O(1)\), we have the following solution asymptotics_ \[u_{N}(x,t;a_{3},a_{5},\cdots,a_{2N-1})=\hat{u}_{1}(x-\hat{x}_{0},t-\hat{t}_{0} )\,e^{\mathrm{i}t}+O\left(A^{-1}\right).\] _When \((x,t)\) is not in the neighborhood of any of these Peregrine waves, \(u_{N}(x,t)\) would asymptotically approach the constant-amplitude background \(e^{it}\) as \(A\to+\infty\)._ This theorem indicates that the rogue pattern is asymptotically a simple dilation of the root structure of the underlying Adler-Moser polynomial by a factor of \(A\), with each root predicting the location of a Peregrine wave in the \((x,t)\) plane according to Eq. (14). Thus, this theorem establishes a direct connection between rogue patterns and root structures of Adler-Moser polynomials. One may notice that in the present case of multiple large parameters, the rogue pattern is a simple dilation of the root structure of an Adler-Moser polynomial, while in the previous case of a single large parameter as studied in [41], the rogue pattern was a dilation _and rotation_ of the root structure of a Yablonskii-Vorob'ev hierarchy polynomial. The reason our current rogue pattern does not involve rotation to the root structure is that, the Adler-Moser polynomial contains free complex constants \(\{\kappa_{j}\}\), which automatically put its root structure in proper orientation to match the rogue pattern. Comparatively, a Yablonskii-Vorob'ev hierarchy polynomial does not contain such free complex constants, and thus the orientation of its root structure is fixed. In this case, in order for its root structure to match the orientation of the rogue wave, a proper rotation is needed. ## IV Numerical confirmation Now, we numerically verify Theorem 1 by comparing its predictions with true rogue-wave solutions. This comparison will be done only for fifth-order rogue waves \(u_{5}(x,t)\) for brevity. Such fifth-order solutions have internal complex parameters \((a_{3},a_{5},a_{7},a_{9})\). We will do this comparison on three examples. Internal parameter values in these three examples are of the form (13) with \(A=5\), which is large as desired, and their \((\kappa_{1},\kappa_{2},\kappa_{3},\kappa_{4})\) values are given in Eq. (12). These \(\kappa_{j}\) values are used since root structures of Adler-Moser polynomials \(\Theta_{5}(z)\) for these values have been displayed in Fig. 1. For these three sets of internal parameters, true rogue wave solutions are plotted in the upper three panels of Fig. 2, respectively. It is seen that each panel comprises 15 lumps (Peregrine waves) in the \((x,t)\) plane. In the first panel, these 15 Peregrine waves form a heart-shaped structure, with another mini-heart in its interior. In the second panel, these 15 Peregrine waves form a fan-shaped structure. 
In the third panel, these 15 Peregrine waves form two vertically-oriented arcs plus a smaller triangle on their right side. Figure 2: Comparison between true rogue solutions \(|u(x,t)|\) (upper row) and their analytical predictions (lower row) for \(N=5\) and \(A=5\). From left to right columns: \((\kappa_{1},\kappa_{2},\kappa_{3},\kappa_{4})=(\mathrm{i},\mathrm{i},\mathrm{i},\mathrm{i})\), \((\mathrm{5i}/3,\mathrm{i},\mathrm{5i}/7,-\mathrm{5i}/9)\), \((1,1,1,1)\). In all panels, \(-30\leq x,t\leq 30\). Our analytical predictions \(|u_{5}^{(p)}(x,t)|\) for these rogue waves from Theorem 1 can be assembled into a simple formula, \[\left|u_{5}^{(p)}(x,t)\right|=1+\sum_{j=1}^{15}\left(\left|\hat{u}_{1}(x-\hat{x}_{0}^{(j)},t-\hat{t}_{0}^{(j)})\right|-1\right), \tag{15}\] where \(\hat{u}_{1}(x,t)\) is the Peregrine wave given in (9), and their positions \((\hat{x}_{0}^{(j)},\hat{t}_{0}^{(j)})\) are given by (14) with \(z_{0}\) being every one of the \(N(N+1)/2=15\) simple roots of the Adler-Moser polynomial \(\Theta_{5}(z)\). These predicted solutions for the same \((a_{3},a_{5},a_{7},a_{9})\) values as in the true solutions are plotted in the lower three panels of Fig. 2. When compared to the root structures of the Adler-Moser polynomial \(\Theta_{5}(z)\) in Fig. 1, our predicted rogue patterns in these lower panels are obviously a simple dilation of those root structures, by a factor of \(A=5\), with each root replaced by a Peregrine wave, as Theorem 1 says. When comparing the true rogue solutions in the upper row to their analytical predictions in the lower row, we can clearly see that they agree with each other very well. In fact, one can hardly notice the difference between them, which is an indication that our prediction in Theorem 1 is highly accurate. Quantitatively, we have also measured the error of our analytical predictions versus the \(A\) value, similar to what we did in Fig. 5 of Ref. [41]. That error analysis confirmed that the error does decay in proportion to \(A^{-1}\), as Theorem 1 predicts. Thus, Theorem 1 is fully confirmed numerically. Details of this quantitative comparison are omitted here for brevity. ## V Proof of Theorem 1 In this section, we prove the analytical predictions on NLS rogue patterns in Theorem 1. The main idea of our proof resembles that in Ref. [41] for the case of a single large internal parameter. To derive the large-parameter asymptotics of the rogue wave solution \(u_{N}(x,t)\) in Eq. (2), we need asymptotic expressions for the determinant \(\sigma_{n}\) in Eq. (3). For this purpose, we first use determinant identities and the Laplace expansion to rewrite \(\sigma_{n}\) as [31] \[\sum_{0\leq\nu_{1}<\nu_{2}<\cdots<\nu_{N}\leq 2N-1}\det_{1\leq i,j\leq N}\left[\frac{1}{2^{\nu_{j}}}S_{2i-1-\nu_{j}}(\mathbf{x}^{+}(n)+\nu_{j}\mathbf{s})\right]\times\det_{1\leq i,j\leq N}\left[\frac{1}{2^{\nu_{j}}}S_{2i-1-\nu_{j}}(\mathbf{x}^{-}(n)+\nu_{j}\mathbf{s})\right], \tag{16}\] where \(S_{k}\equiv 0\) if \(k<0\). When internal parameters \((a_{3},a_{5},\cdots,a_{2N-1})\) are of the form (13) with \(A\gg 1\), and \(x,t=O(A)\) or smaller, we have \[S_{k}(\mathbf{x}^{+}(n)+\nu\mathbf{s})=S_{k}\left(x_{1}^{+},\nu s_{2},x_{3}^{+},\nu s_{4},\cdots\right)=S_{k}\left(x_{1}^{+},0,\kappa_{1}A^{3},0,\kappa_{2}A^{5},\cdots\right)\left[1+O(A^{-2})\right]=S_{k}(\hat{\mathbf{v}})\left[1+O(A^{-2})\right], \tag{17}\] where \(\hat{\mathbf{v}}=\left(x+\mathrm{i}t+n,0,\kappa_{1}A^{3},0,\kappa_{2}A^{5},0,\cdots\right)\). From the definition (7) of Schur polynomials, one can see that the polynomial \(S_{k}(\hat{\mathbf{v}})\) is related to \(\theta_{k}(z)\) in (11) as \[S_{k}(\hat{\mathbf{v}})=A^{k}\theta_{k}(\hat{z}), \tag{18}\] where \(\hat{z}\equiv A^{-1}(x+\mathrm{i}t+n)\). The dominant contribution in the Laplace expansion (16) of \(\sigma_{n}\) comes from two index choices, \(\nu=(0,1,\cdots,N-1)\) and \(\nu=(0,1,\cdots,N-2,N)\). With the first index choice, in view of Eqs. (17)-(18), the determinant involving \(\mathbf{x}^{+}(n)\) inside the summation of (16) is asymptotically \[\alpha\,A^{\frac{N(N+1)}{2}}\Theta_{N}(\hat{z})\left[1+O\left(A^{-2}\right)\right], \tag{19}\] where \(\alpha=2^{-N(N-1)/2}c_{N}^{-1}\). Let us define \((\hat{x}_{0},\hat{t}_{0})\) by Eq. (14), i.e., \(z_{0}=A^{-1}(\hat{x}_{0}+\mathrm{i}\hat{t}_{0})\), where \(z_{0}\) is a simple root of the Adler-Moser polynomial \(\Theta_{N}(z)\). Then, when \((x,t)\) is in the \(O(1)\) neighborhood of \((\hat{x}_{0},\hat{t}_{0})\), we expand \(\Theta_{N}(\hat{z})\) around \(\hat{z}=z_{0}\). Recalling \(\Theta_{N}(z_{0})=0\), we get \[\Theta_{N}(\hat{z})=A^{-1}\left[(x-\hat{x}_{0})+\mathrm{i}(t-\hat{t}_{0})+n\right]\Theta_{N}^{\prime}(z_{0})\left[1+O\left(A^{-1}\right)\right].\] Inserting this equation into (19), the determinant involving \(\mathbf{x}^{+}(n)\) inside the summation of (16) becomes \[\left[(x-\hat{x}_{0})+\mathrm{i}(t-\hat{t}_{0})+n\right]\,\alpha\,A^{\frac{N(N+1)}{2}-1}\Theta_{N}^{\prime}(z_{0})\left[1+O\left(A^{-1}\right)\right].\] Similarly, the determinant involving \(\mathbf{x}^{-}(n)\) inside this summation becomes \[\left[(x-\hat{x}_{0})-\mathrm{i}(t-\hat{t}_{0})-n\right]\,\alpha\,A^{\frac{N(N+1)}{2}-1}\Theta_{N}^{\prime}(z_{0}^{*})\left[1+O\left(A^{-1}\right)\right].\] Next, we consider the contribution from the second index choice of \(\nu=(0,1,\cdots,N-2,N)\). For this index choice, the determinant involving \(\mathbf{x}^{+}(n)\) inside the summation of (16) becomes \[\frac{1}{2}\alpha\,A^{\frac{N(N+1)-2}{2}}\Theta_{N}^{\prime}(\hat{z})\left[1+O\left(A^{-2}\right)\right].\] When \((x,t)\) is in the \(O(1)\) neighborhood of \((\hat{x}_{0},\hat{t}_{0})\), the above term is asymptotically equal to \[\frac{1}{2}\alpha\,A^{\frac{N(N+1)-2}{2}}\Theta_{N}^{\prime}(z_{0})\left[1+O\left(A^{-1}\right)\right].\] Similarly, the determinant involving \(\mathbf{x}^{-}(n)\) inside the summation of (16) becomes \[\frac{1}{2}\alpha\,A^{\frac{N(N+1)-2}{2}}\Theta_{N}^{\prime}(z_{0}^{*})\left[1+O\left(A^{-1}\right)\right].\] Summarizing the above two dominant contributions in the Laplace expansion (16), we find that \[\sigma_{n}(x,t)=\alpha^{2}\,\left|\Theta_{N}^{\prime}(z_{0})\right|^{2}A^{N(N+1)-2}\left[(x-\hat{x}_{0})^{2}+\left(t-\hat{t}_{0}\right)^{2}-2\mathrm{i}n\left(t-\hat{t}_{0}\right)-n^{2}+\frac{1}{4}\right]\left[1+O\left(A^{-1}\right)\right]. \tag{20}\] Since the root \(z_{0}\) has been assumed simple, \(\Theta_{N}^{\prime}(z_{0})\neq 0\). Thus, the above leading-order asymptotics for \(\sigma_{n}(x,t)\) does not vanish. Therefore, when \(A\) is large and \((x,t)\) is in the \(O(1)\) neighborhood of \(\left(\hat{x}_{0},\hat{t}_{0}\right)\), we get from (20) that \[u_{N}(x,t)=\frac{\sigma_{1}}{\sigma_{0}}e^{\mathrm{i}t}=e^{\mathrm{i}t}\left(1-\frac{4[1+2\mathrm{i}(t-\hat{t}_{0})]}{1+4(x-\hat{x}_{0})^{2}+4(t-\hat{t}_{0})^{2}}\right)+O\left(A^{-1}\right),\] which is a Peregrine wave \(\hat{u}_{1}(x-\hat{x}_{0},t-\hat{t}_{0})e^{\mathrm{i}t}\), and the error of this Peregrine prediction is \(O\left(A^{-1}\right)\). 
Theorem 1 is then proved. ## VI Conclusions and discussions In this paper, we have reported many new rogue patterns in the NLS equation which are predicted by root structures of new special polynomials. Specifically, we have shown that when multiple internal parameters in the rogue wave solutions are large, many new rogue patterns would arise, including heart-shaped structures, fan-shaped structures, and others. Analytically, these rogue patterns are determined by root structures of Adler-Moser polynomials. If all roots of the Adler-Moser polynomial are simple, then the rogue pattern is simply a dilation of the Adler-Moser polynomial's root structure, with each root replaced by a Peregrine wave. Since Adler-Moser polynomials contain free complex parameters, their root structures would be very diverse. As a result, NLS rogue waves could assume much more varied spatial-temporal patterns beyond those reported earlier. Since NLS rogue waves have been observed in diverse physical systems [5; 6; 9; 10; 11; 12; 13; 14; 16], these new NLS rogue patterns open up more varieties of rogue dynamics which could be verified in experiments too. The previous NLS rogue patterns associated with root structures of the Yablonskii-Vorob'ev polynomial hierarchy were later found to be universal and would appear in many other integrable systems [41; 42]. The present rogue patterns associated with Adler-Moser polynomials are expected to be universal as well, and they should arise in other integrable systems too when multiple internal parameters in rogue waves of those integrable equations are large. This prospect will be pursued in the near future. Our rogue-pattern predictions in this article were made under the assumption that all roots of the Adler-Moser polynomial are simple. A very interesting question is what will happen if some roots of the Adler-Moser polynomial are not simple. This question is still open and merits further study. ## Acknowledgment The work of B.Y. was supported in part by the National Natural Science Foundation of China (Grant No. 12201326), and the work of J.Y. was supported in part by the National Science Foundation (U.S.) under award number DMS-1910282.
2303.03318
Generalized Method for the Optimization of Pulse Shape Discrimination Parameters
Organic scintillators exhibit fast timing, high detection efficiency for fast neutrons and pulse shape discrimination (PSD) capability. PSD is essential in mixed radiation fields, where different types of radiation need to be detected and discriminated. In neutron measurements for nuclear security and non proliferation effective PSD is crucial, because a weak neutron signature needs to be detected in the presence of a strong gamma-ray background. The most commonly used deterministic PSD technique is charge integration (CI). This method requires the optimization of specific parameters to obtain the best gamma-neutron separation. These parameters depend on the scintillating material and light readout device and typically require a lengthy optimization process and a calibration reference measurement with a mixed source. In this paper, we propose a new method based on the scintillation fluorescence physics that enables to find the optimum PSD integration gates using only a gamma-ray emitter. We demonstrate our method using three organic scintillation detectors: deuterated trans-stilbene, small-molecule organic glass, and EJ-309. In all the investigated cases, our method allowed finding the optimum PSD CI parameters without the need of iterative optimization.
Jianxin Zhou, Abdullah Abdulaziz, Yoann Altmann, Angela Di Fulvio
2023-03-06T17:44:08Z
http://arxiv.org/abs/2303.03318v1
# Generalized Method for the Optimization of Pulse Shape Discrimination Parameters ###### Abstract Organic scintillators exhibit fast timing, high detection efficiency for fast neutrons, and pulse shape discrimination (PSD) capability. PSD is essential in mixed radiation fields, where different types of radiation need to be detected and discriminated. In neutron measurements for nuclear security and non-proliferation, effective PSD is crucial, because a weak neutron signature needs to be detected in the presence of a strong gamma-ray background. The most commonly used deterministic PSD technique is charge integration (CI). This method requires the optimization of specific parameters to obtain the best gamma-neutron separation. These parameters depend on the scintillating material and light readout device and typically require a lengthy optimization process and a calibration reference measurement with a mixed source. In this paper, we propose a new method based on the scintillation fluorescence physics that enables finding the optimum PSD integration gates using only a gamma-ray emitter. We demonstrate our method using three organic scintillation detectors: deuterated trans-stilbene, small-molecule organic glass, and EJ-309. In all the investigated cases, our method allowed finding the optimum PSD CI parameters without the need for iterative optimization. keywords: Exponential model, PSD, fast neutron detection ## 1 Introduction Pulse-shape-discrimination (PSD) capable organic scintillators are the detectors of choice when it is necessary to detect and discriminate different radiation types, e.g., gamma rays and neutrons, with fast timing and high efficiency. Therefore, organic scintillators are used for a wide range of applications, from nuclear security to diagnostic radiology and nuclear physics [1; 2; 3; 4]. The dependence of the fluorescence time constants on the particle linear energy transfer (LET) enables PSD [5]. In practice, PSD is possible because the shape of the detected pulses changes with the LET of the particle depositing its energy in the detector. The traditional and most commonly used method to find a parameter that depends on the pulse shape and hence enables PSD is based on charge integration (CI) [5; 6]. The CI-based PSD parameter is the tail-to-total ratio (TTR), which is the ratio between the area under the terminal portion of the pulse, i.e., the pulse tail, and the whole pulse area. TTR is calculated for each pulse and ranges between zero and one. A relatively high TTR corresponds to pulses with an increased delayed fluorescence emission with respect to the prompt fluorescence emission. Despite the increasing number of alternative PSD approaches, e.g., based on zero-crossing [7; 8; 9; 10; 11], time-over-threshold [12], and machine learning [13; 14], CI remains the most frequently used method for PSD. While being simple to implement, CI requires a lengthy, source- and material-specific optimization of the pulse integration gate parameters, which is typically performed by evaluating the PSD figure-of-merit [15] for many iterations of such parameters. In this work, we present a generalized method to optimize the choice of the CI gate parameters. The proposed method is based on an exponential model to fit a template gamma-ray pulse. The model includes only the prompt fluorescence component, without accounting for the delayed fluorescence. 
The pulse time stamp at which the model and the measured template differ the most reveals the onset of the delayed component, which can be used as the tail start time in CI-based PSD. This method is based on the intrinsic scintillation decay times, and hence avoids the cumbersome gate optimization process needed in CI PSD. Moreover, it does not require any neutron source to find the optimized charge integration parameter. We validated the method using three organic scintillators: deuterated trans-stilbene (stilbene-d\({}_{12}\)) [16], EJ-309, and small-molecule organic glass [17], hereafter referred to as organic glass detector. The model-determined tail start time yielded the best PSD results for all three organic scintillation detectors. ## 2 Methods We present a model-based method to find the integration time gates for CI PSD without the need for any iterative optimization process. This method was validated using three organic scintillation detectors, namely, stilbene-d\({}_{12}\), EJ-309, and organic glass. ### Workflow of the model-based charge integration PSD Figure 1 shows the workflow of the proposed method to find the integration parameters for a PSD-capable scintillator. First, approximately one thousand gamma-ray pulses were acquired and averaged to generate a pulse template. The details of the averaging process are described in the next section. Then, an exponential model was used to fit the template. After the fitting process, the pulse height differences between the original pulse template and the fitting result were calculated. The time stamp corresponding to the maximum difference between the two is the optimum tail start time for charge integration PSD. The details of this procedure are presented in the following sections. Figure 1: Workflow of the model-based charge integration method for neutron/gamma discrimination. ### Derivation of CI integration time gate from the pulse fit Measured gamma-ray pulses and an exponential pulse model were used to obtain the optimal tail start time for the CI PSD method. We first acquired approximately one thousand gamma-ray pulses and averaged them into a pulse template, normalized to its peak value. Then, we fit the template with a bi-exponential pulse model that is widely used to describe the fluorescence signal produced by organic scintillators [18]. The model is shown in Equation (1). The first two exponential terms represent the rising and decay of the fast component, respectively, and the last two account for the rising and decay of the slow component. \(A\) and \(B\) are the amplitudes of the fast and slow fluorescence components, respectively. \(\tau_{r}\), \(\tau_{f}\), and \(\tau_{s}\) are the time constants of the rising edge, the fast light decay, and the slow light decay. \(t_{0}\) is the time offset with respect to the acquisition window. \[L(t)=A\left(-e^{\left(-\frac{(t-t_{0})}{\tau_{r}}\right)}+e^{\left(-\frac{(t-t_{0})}{\tau_{f}}\right)}\right)+B\left(-e^{\left(-\frac{(t-t_{0})}{\tau_{r}}\right)}+e^{\left(-\frac{(t-t_{0})}{\tau_{s}}\right)}\right) \tag{1}\] Gamma-ray-produced pulses exhibit mainly the fast light component [19]. Delayed fluorescence also exists, but its relative intensity is lower compared to prompt fluorescence. Therefore, the fit of the slow decay constant yields a large associated uncertainty [16]. We hence set the amplitude of the slow component (\(B\) in Equation 1) to zero and only fit the fast component of the gamma pulse template. 
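To make the fitting step concrete, the following Python sketch mirrors the procedure just described using the curve_fit function of scipy.optimize, which is also used for the template fits in Section 3. The template here is synthetic, with hypothetical time constants, so the script illustrates the method rather than reproducing the measured results.

```python
# A minimal sketch of the template fit: Eq. (1) with B = 0, fitted with
# scipy.optimize.curve_fit. The "measured" template below is synthetic
# (hypothetical time constants), for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def fast_component(t, A, tau_r, tau_f, t0):
    """Prompt-fluorescence term of Eq. (1): rise tau_r, fast decay tau_f."""
    out = A * (-np.exp(-(t - t0) / tau_r) + np.exp(-(t - t0) / tau_f))
    return np.where(t >= t0, out, 0.0)

t = np.arange(200.0)  # 1-ns samples, as in the measured templates
template = fast_component(t, 1.4, 1.0, 8.0, 5.0) \
         + 0.03 * np.where(t >= 5.0, np.exp(-(t - 5.0) / 40.0), 0.0)
template /= template.max()  # peak-normalized, like the averaged template

popt, _ = curve_fit(fast_component, t, template,
                    p0=[1.0, 1.0, 5.0, 3.0], maxfev=20000)
residual = template - fast_component(t, *popt)
print(f"maximum template-fit difference at {int(np.argmax(residual))} ns")
```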
After the fit, we calculated the difference between the measured template pulse and the exponential fit. The pulse time stamp of maximum difference can be considered as the time when the fit of the fast component fails to describe the complete gamma pulse, because the model deliberately neglects the delayed fluorescence component. This timestamp represents the maximum of the delayed fluorescence component. Therefore, we chose it as the tail start time to start the tail integration of CI PSD. ### Figure of merit as PSD evaluation metrics The figure of merit (FOM), detailed below, was used as the quantitative metric to evaluate the PSD results. We performed charge integration PSD on each measured pulse and calculated the ratio of the tail and total pulse integrals (tail-to-total ratio). We analyzed the FOM of pulses in 20-keVee light output bins and evaluated the distribution of the tail-to-total ratio for each light output bin, as shown in Figure 2. The two peaks represent the gamma and neutron pulse distributions, and their centroids and full widths at half maximum (FWHMs) were used to calculate the FOM, defined in Equation (2): \[FOM=\frac{S}{\Gamma_{1}+\Gamma_{2}} \tag{2}\] \(S\) is the distance between the maximum values of the neutron and the gamma-ray distributions, and \(\Gamma_{1}\) and \(\Gamma_{2}\) are the FWHMs of the gamma-ray and neutron distributions, respectively. A larger FOM value represents a better neutron-gamma discrimination capability. Figure 2: Distribution of \({}^{252}\)Cf counts in terms of tail-to-total ratio in the charge-integration-based PSD method, in the 200-210 keVee light output interval. ### Experimental Setup The model-based PSD method was demonstrated on three organic scintillation detectors: a stilbene-d\({}_{12}\), an EJ-309, and an organic glass detector. Table 1 shows the chemical composition and main properties of the detectors. \begin{table} \begin{tabular}{|c|c|} \hline Scintillator & Composition \\ & (wt \%) \\ \hline Stilbene-d\({}_{12}\) & 46.15\% deuterium, 53.85\% carbon \\ EJ-309 & 55.52\% hydrogen, 44.48\% carbon \\ Organic glass & 45.59\% hydrogen, 53.17\% carbon, and 1.24\% silicon \\ \hline \end{tabular} \end{table} Table 1: Material properties of the scintillators The EJ-309 and the organic glass crystals are 5.08 cm tall cylinders with 5.08 cm diameter. The stilbene-d\({}_{12}\) has a 140 cm\({}^{3}\) non-equilateral hexagonal prismatic shape [16] with a 5.4 cm height and was custom-grown at Lawrence Livermore National Laboratory [20]. The three detectors are all coupled with photomultiplier tubes (PMTs) that convert light pulses into electrical waveforms. The PMT models of the stilbene-d\({}_{12}\), EJ-309, and organic glass detectors were HAMAMATSU H6559, ET Enterprises 9214B, and ET Enterprises 9214B, respectively. These waveforms are acquired and digitized by a CAEN DT5730 500-MSps 14-bit digitizer. A 1 \(\mu\)Ci \({}^{137}\)Cs source was used to calibrate the detectors in terms of light output. Then, we irradiated the detectors using a 5 \(\mu\)Ci \({}^{252}\)Cf neutron source to evaluate their PSD performance. Each detector recorded approximately 2\(\times\)10\({}^{6}\) mixed neutron and gamma pulses emitted by the \({}^{252}\)Cf source. The distance between the source and the front face of the detectors was set to 50 cm, to have an acceptable detector count rate while minimizing pile-up pulses. The data processing was performed with Python and Matlab custom codes. ## 3 Results The PSD performance of the three organic scintillators was evaluated using the proposed PSD method. We calculated the FOM values obtained using the model-based method and compared them with those of the traditional PSD method based on iterative optimization of the integration time gates. 
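For reference, the FOM evaluation of Equation (2) can be condensed into a short script. The sketch below is our illustration only (synthetic tail-to-total values with hypothetical peak positions), not the analysis code used for the measurements: it histograms the TTR values of one light-output bin, fits two Gaussians, and returns \(S/(\Gamma_{1}+\Gamma_{2})\).

```python
# A minimal sketch of the Eq. (2) FOM: fit two Gaussians to the
# tail-to-total histogram of one light-output bin.
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a1, mu1, s1, a2, mu2, s2):
    return (a1 * np.exp(-(x - mu1) ** 2 / (2 * s1 ** 2))
            + a2 * np.exp(-(x - mu2) ** 2 / (2 * s2 ** 2)))

def fom(ttr, bins=200):
    counts, edges = np.histogram(ttr, bins=bins, range=(0.0, 1.0))
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Initial guesses: gamma peak at low TTR, neutron peak at higher TTR.
    p0 = [counts.max(), 0.10, 0.02, counts.max() / 10, 0.25, 0.02]
    (a1, mu1, s1, a2, mu2, s2), _ = curve_fit(two_gaussians, centers, counts, p0=p0)
    k = 2.0 * np.sqrt(2.0 * np.log(2.0))  # FWHM = k * sigma for a Gaussian
    return abs(mu2 - mu1) / (k * abs(s1) + k * abs(s2))

# Synthetic example: a strong gamma peak and a weaker neutron peak.
rng = np.random.default_rng(0)
ttr = np.concatenate([rng.normal(0.10, 0.02, 20000),
                      rng.normal(0.25, 0.02, 2000)])
print(f"FOM = {fom(ttr):.2f}")  # about 1.6 for these synthetic peaks
```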
### Detector calibration and pulse template generation with the \({}^{137}\)Cs source We calibrated the stilbene-d\({}_{12}\), EJ-309, and organic glass detectors with the \({}^{137}\)Cs gamma source. Figure 3 shows the measured \({}^{137}\)Cs pulse integral spectrum with the stilbene-d\({}_{12}\). We fit the Compton edge with a Gaussian distribution and calibrated the light output response of the stilbene-d\({}_{12}\) to Compton electrons using the 85% of the edges produced by 662 keV gamma rays that correspond to 478 keV electron energy deposited via Compton scattering[16]. The light output calibration of the stilbene-d\({}_{12}\) is: light output (keVee) = 4345.45 \begin{table} \begin{tabular}{|c|c|} \hline Scintillator & Composition \\ & (wt \%) \\ \hline Stilbene-d\({}_{12}\) & 46.15\% deuterium, 53.85\% carbon \\ EJ-309 & 55.52\% hydrogen, 44.48\% carbon \\ Organic glass & 45.59\% hydrogen, 53.17\% carbon, and 1.24\% silicon \\ \hline \end{tabular} \end{table} Table 1: Material properties of the scintillators (keVee/V)\(\times\)Pulse height (V). We used the same calibration method for the EJ-309 and organic glass detectors, and the results are light output (keVee) = 2108.51 (keVee/V)\(\times\)Pulse height (V) and light output (keVee) = 1671.33 (keVee/V)\(\times\)Pulse height, respectively. We also used the \({}^{137}\)Cs measurement to generate the gamma pulse template. In order to ensure the quality of the pulse template, we rejected the piled-up pulses in the measurement. Approximately one thousand pulses whose pulse heights were within the \(\pm\) 0.05V of \({}^{137}\)Cs Compton edge region were chosen to build the template to reject low-amplitude pulses with high noise. The start of each gamma pulse was defined as the time when its amplitude reached 10% of its maximum. Then, we normalized the peaks of all gamma pulses to 1 and averaged them to form a pulse template, as shown in Figure 4. The time interval of two contiguous sampled points is 1 ns. Exponential fit of the pulse template and the acquisition of the tail start time from the fit result The exponential model (Equation 1) was used to fit the gamma pulse template. Figure 5 shows the fit result of the pulse template from the stilbene-d\({}_{12}\) detector. The fit was calculated using the curve_fit function of the Python scipy.optimize package. Since we only used the exponential model to fit the fast component of the fluorescence signal, the fit does not resemble the template shape well at the tail region. This discrepancy allows identifying the onset of the delayed fluorescence component. Although the delayed fluorescence is relatively more intense in pulses produced by high-LET interactions, Compton electrons also exhibit a delayed fluorescence signal [19]. Figure 5 (b) shows the sample-by-sample difference between the pulse and the fit for the stilbene-d\({}_{12}\) Figure 3: Measured pulse height distribution of the stilbene-d\({}_{12}\) detector with the \({}^{137}\)Cs source. detector. The maximum difference occurs 25 ns from the beginning of the pulse (18 ns from the pulse peak). We used this 18 ns from the peak as the tail start time to perform CI PSD for the stilbene-d\({}_{12}\). The tail start times of the EJ-309 and organic glass detectors were also obtained with this model-based method and they were 10 ns after the peak, as shown in Table 2. 
### PSD performance with the model-determined integration setting

The model-determined tail start time was used to perform CI PSD for the stilbene-d\({}_{12}\), EJ-309, and organic glass detectors. The other charge integration settings were: a) the total integration started 2 ns before the pulse peak, and b) both the total and tail integrations ended at 150 ns. Figure 6 shows the \({}^{252}\)Cf PSD scatter-density plot of the three detectors when using the model-determined tail start time settings. One can observe that the neutron and gamma pulses are best separated in Figure 6 (a), which demonstrates that the stilbene-d\({}_{12}\) outperformed the other detectors in PSD. The EJ-309 detector exhibited a better PSD capability than the organic glass detector.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline Scintillator & Stilbene-d\({}_{12}\) & EJ-309 & Organic glass \\ \hline Fast decay time (\(\tau_{f}\)) from the fit & 7.9 ns & 4.3 ns & 3.9 ns \\ Model-determined tail start time & 18 ns & 10 ns & 10 ns \\ \hline \end{tabular} \end{table} Table 2: Model-determined tail start time of the three scintillators

Figure 4: Gamma pulse template obtained from the \({}^{137}\)Cs measurement with the stilbene-d\({}_{12}\) detector.

Figure 5: Exponential fit result of the stilbene-d\({}_{12}\) gamma pulse template.

Figure 6: PSD scatter-density plot of the \({}^{252}\)Cf source with the model-determined integration setting.

### FOM sensitivity to the tail start time

We calculated the FOM as a function of light output to quantitatively evaluate the PSD performance. Figure 7 shows the FOM of the three detectors for varying tail start time values. The tail start time values range from 10 ns to 30 ns for the stilbene-d\({}_{12}\) and from 4 ns to 22 ns for the EJ-309 and the organic glass. The other charge integration settings were kept the same (the pulse total integration started 2 ns before the pulse peak, and the total and tail integrations ended at 150 ns). In Figure 7, we can observe that the tail start time determined by our method yields the highest FOM at all light output values. In our previous work [20], we reported that the FOM value of a stilbene-d\({}_{12}\) crystal was approximately 3 at 400 keVee. In Figure 7, the FOM value with the optimized integration settings is 2.74 \(\pm\) 0.06 at 400 keVee. The stilbene-d\({}_{12}\) crystal in this paper is approximately 4.3 times larger than the one we used before [20]. Light scattering effects within the crystal increase with its size and broaden the PSD distributions, hence increasing the FWHMs in Equation (2) and worsening the PSD performance.

Figure 7: FOM of the three detectors with various tail start time settings. The highlighted data represent the FOM values using the model-determined tail start time.

As for the FOM results of the glass detector, Laplace et al [21] reported the PSD performance of a 2-inch-diameter, 2-inch-tall organic glass scintillator, with a FOM value of approximately 1.4 at 600 keVee. This result is in good agreement with the FOM value obtained in this work, 1.42 \(\pm\) 0.03 at 600 keVee. Shin et al [22] also reported the PSD performance of a 2-inch-diameter, 2-inch-tall organic glass crystal, and found a slightly higher FOM of approximately 1.7 at 600 keVee. As for the PSD performance of the EJ-309 detector, we obtained a FOM value of 1.91 \(\pm\) 0.10 at 600 keVee.
Using the same detector size and the 600 keVee light output range, Laplace et al [21] reported an approximate 1.4 FOM value, while Stevanato et al [23] reported an approximate 1.75 FOM value. Small discrepancies between our FOM and the values reported in the literature could be due to the different PMT models coupled to the scintillators, the physical condition of the crystals/glass, and the optical reflectors or impedance-matching gel between the PMT and the crystal. Any optical phenomenon that affects the light propagation between the photon production at the position of interaction and its detection at the PMT photocathode could also lead to a different PSD performance. Additionally, not all the cited papers reported their methods for PSD optimization; therefore, the cited PSD FOM parameters may not have been thoroughly optimized.

### FOM sensitivity to the total start time

The proposed model-based method can find the optimum PSD tail start time without the need for any neutron source or iterative algorithm. Besides the tail start time, the start time of the total integration is the other parameter that could affect the PSD performance. However, this parameter is expected to be consistent across the three measured scintillators because it does not depend on the fluorescence decay constants, which are material specific. We varied the total start time settings for all three organic scintillators and calculated the FOM values, while using the same model-determined tail start time setting. The results are shown in Figure 8. We can observe that the PSD FOM is the highest when the total integration gate starts 2 ns before the peak for the EJ-309 and stilbene-d\({}_{12}\), and slightly improves (within a single standard deviation) when the total gate starts at the peak for the glass detector.

Figure 8: FOM of the stilbene-d\({}_{12}\), the EJ-309, and the organic glass detectors for the \({}^{252}\)Cf measurement, using the model-determined tail start time settings. The legend represents the different total start times used in PSD.

## 4 Conclusions

The optimization of PSD CI parameters is very important when high accuracy in radiation classification is needed, such as in nuclear reaction studies and nuclear non-proliferation measurements. CI time gate parameters need careful optimization to obtain the best discrimination between gamma-ray and neutron pulses. CI exploits the differential response in two different time gates of each detected pulse to derive a parameter that is shape-dependent and higher in pulses that exhibit a longer tail, i.e., more intense delayed fluorescence, with respect to faster-decaying pulses. We showed that the parameter that influences the PSD FOM the most is the tail start time, as expected, since it is the most sensitive to the fluorescence decay time constants. We demonstrated that it is possible to optimize this parameter using a model-based method and a \({}^{137}\)Cs source. The model-based method relies on the fact that the scintillation process in response to ionizing interactions is always characterized by a prompt and one or multiple delayed fluorescence components. Even gamma-ray produced pulses, which are generally fast-decaying, exhibit all these components, but the prompt fluorescence is the prominent one. We found the maximum discrepancy between a gamma-ray template pulse and its reduced exponential model, i.e., one including only the fast component. The corresponding time stamp in the pulse indicates the onset of the delayed fluorescence component and can be used as the tail start time.
One could also fit the full model, including both fast and delayed components, but the goodness of the fit is poorer due to the relatively low intensity of the delayed fluorescence in gamma-ray pulses. With this method, we found that starting the pulse tail at 18 ns, 10 ns, and 10 ns after the peak, in stilbene-d\({}_{12}\), EJ-309, and organic glass pulses, respectively, yields the best PSD FOM. The FOM values obtained with the parameters that we determined are in good agreement with those found by other researchers and available in the literature for organic glass and stilbene-d\({}_{12}\). We reported a higher EJ-309 FOM than Laplace et al and Stevanato et al [21; 23]. These discrepancies could be due to the digital pulse sampling time [24], to physical differences between the detector and PMT setups, and to differences in the CI gate determination methods. This variability also confirms the PSD sensitivity to multiple physical parameters and the need for a robust PSD FOM maximization strategy. The method reported in this paper is simple to implement, does not require an iterative optimization process, and can be performed with a common laboratory \({}^{137}\)Cs source.

## 5 Acknowledgements

This work was funded in part by the Nuclear Regulatory Commission (NRC), United States Faculty Development Grant 31310019M0011 and in part by the Royal Academy of Engineering under the Research Fellowship scheme RF201617/16/31.
2310.15442
Direct measurements of cosmic rays and their possible interpretations
The last two decades have brought spectacular advances in astrophysics of cosmic rays (CRs) and space- and ground-based astronomy. Launches of missions that employ forefront detector technologies enabled measurements with large effective areas, wide fields of view, and precision that we recently could not even dream of. Meanwhile, interpretation of the individual slices of information about the internal working of the Milky Way provided by such experiments poses challenges to the traditional astrophysical models. New mysteries arise in the composition and spectra of CR species at low and high energies, in the energy range where we thought the main features were already understood fairly well. This accumulation of unsolved puzzles highlights the peculiarity of the current epoch and means that major breakthroughs are still ahead. In my talk, I review the current state of direct measurements of CRs and discuss their possible interpretations. Unfortunately, many important ideas and publications are not discussed here due to the space limitations.
Igor V. Moskalenko
2023-10-24T01:28:48Z
http://arxiv.org/abs/2310.15442v1
# Direct measurements of cosmic rays

###### Abstract: The last two decades have brought spectacular advances in astrophysics of cosmic rays (CRs) and space- and ground-based astronomy. Launches of missions that employ forefront detector technologies enabled measurements with large effective areas, wide fields of view, and precision that we recently could not even dream of. Meanwhile, interpretation of the individual slices of information about the internal working of the Milky Way provided by such experiments poses challenges to the traditional astrophysical models. New mysteries arise in the composition and spectra of CR species at low and high energies, in the energy range where we thought the main features were already understood fairly well. This accumulation of unsolved puzzles highlights the peculiarity of the current epoch and means that major breakthroughs are still ahead. In my talk, I review the current state of direct measurements of CRs and discuss their possible interpretations. Unfortunately, many important ideas and publications are not discussed here due to the space limitations.

## 1 Introduction

It is appropriate to start with the definition of the term "direct measurements." The latter implies direct contact between CR particles and an instrument, which can only be done at the top of the atmosphere or in space. Apparently, the design, weight, and exposure of a scientific payload depend on the current technology and change over time, and so does the definition of "direct measurements." A plot of the all-particle CR spectrum, originally made by Simon Swordy in 2001 [1], and its well-known variations placed direct measurements below \(\sim\)1 TeV, reflecting the technology of the late 20th century, although there are several exceptions, e.g., the Sokol spacecraft [2, 3] and the JACEE experiment [4]. Now, two decades later, we routinely make direct precise measurements approaching the energy of the knee at \(\sim\)3 PeV, thanks to the talents of experimentalists and recent technological advances. I enthusiastically predict that in about two decades, by 2050, direct measurements will advance the next three orders of magnitude in energy and reach the 1-10 EeV range. Today, this can be considered a very challenging goal, but not an impossible one. Meanwhile, several breakthrough experiments have been proposed, e.g., AMS-100 [5], HERD [6], HERO [7], ALADInO [8], which significantly extend the capabilities of the current instrumentation. For example, AMS-100 is designed to have a geometrical acceptance of \(\sim\)100 m\({}^{2}\) sr. CR research is currently experiencing a golden age. Thanks to the outstanding progress in the energy coverage and precision of direct measurements, we are witnessing a constant stream of discoveries of new features in the spectra of CR species and anomalous behavior of elemental and isotopic ratios in the energy range from MV to 100s of TV, and even more new experiments are preparing to launch (e.g., GAPS, HERD, TIGERISS, HERO, HELIX, COSI). A natural consequence is an _infinite_ number of publications with interpretations of the newly observed features. Unfortunately, this means that many important ideas and publications are not mentioned here due to space limitations. For most of the 20th century, the two popular models describing the observed spectra and composition of CRs were the leaky-box model and the Galactic diffusion model with a halo [9]. The leaky-box model is the simplest, and some people still use it.
This model considers the Galaxy as a volume uniformly filled with gas, sources, and CRs with a small leakage - hence the name "leaky-box." Tuned to local measurements, it can correctly reproduce the fluxes of stable nuclei at a single point in the Galaxy. The diffusion model is a realistic model in which gas and sources are distributed in the Galactic disk. CRs fill a large volume around the disk called the halo, and can escape into intergalactic space through its outer "boundaries." The spatial distributions of CR species, gas, background radiation, and magnetic field are non-uniform and generally consistent with observations of diffuse Galactic thermal and non-thermal emissions. This model is widely used today. At the turn of the 21st century, the general success of the diffusion model led Vitaly Ginzburg to the conclusion [10]: "In respect of CR with \(E_{\rm CR}\)\(<\)10\({}^{15}\)\(-\)10\({}^{16}\) eV, there generally remain some vague points, but in the whole the picture is _clear_ enough..." This is reminiscent of the popular view at the turn of the 20th century (ca. \(\sim\)1900), formulated by Lord Kelvin (William Thomson): "There is nothing new to be discovered in physics now. All that remains is more and more precise measurement..." Interestingly, increasingly precise measurements are exactly what led us to the current situation! Today we have such excellent data that the whole picture becomes rather _unclear_. This creates exciting opportunities for theorists and experimentalists and promises new breakthroughs! A word of caution though. The large amount of new material and attempts to comprehend it often lead to the development of overly complicated models, which add many parameters in an attempt to achieve consistency with the data. Such models feature two halos with different diffusion coefficients, anisotropic diffusion, local sources with different injection spectra, sources in spiral arms, slow diffusion zones, etc., sometimes combined within a single model. Although this allows models to reproduce some of the observed features, it does not necessarily lead to a better understanding of the underlying processes. Here we must adhere to time-tested wisdom, such as that formulated by William of Ockham (known as Occam's razor): _Numquam ponenda est pluralitas sine necessitate_ ("Plurality must never be posited without necessity") or follow the advice attributed to Albert Einstein: "Everything should be as simple as it can be but not simpler."

## 2 Low-energy features

Low-energy measurements provide unique information about the isotopic spectra and composition of CRs. However, such measurements are subject to heliospheric (or solar) modulation, which was difficult to handle due to the lack of measurements outside the heliosphere. This problem was largely resolved with Voyager 1 and 2 entering the very local interstellar medium (ISM), V1 in 2012 and V2 in 2018, and beaming back the elemental spectra of CR species from interstellar space [11]. This unimaginable breakthrough provides solid ground for studies of low-energy particles. Low-energy features, often called excesses, are observed in the spectra of some elements when compared to the spectra of their neighbors; there are also mismatches between the data taken by different instruments operating in different energy ranges, even after their correction for solar modulation, and/or significant deviations from model predictions.
Let us start with iron and its radioactive isotope, \({}^{60}\)Fe (\(\beta^{-}\) decay, half-life of 2.6 Myr [12]), which may provide some clues to the origin of low-energy features in the spectra of other species.

### \({}^{60}\)Fe as a tracer of supernova activity in the solar neighborhood

Evidence of past SN activity in the local ISM is abundant [13, 14, 15]. There is no general agreement on the exact number of SN events and their exact timing, but it seems quite clear that several events may have occurred at distances of up to 100 pc in the last \(\sim\)10 Myr. The most recent SN events in the solar vicinity occurred 1.5-3.2 Myr and 6.5-8.7 Myr ago [13, 14]. The measured signal spread of \(\sim\)1.5 Myr implies a series of SN explosions. Besides, the Local Bubble is a low-density region about \(\sim\)200 pc around the sun filled with hot H ii gas, itself formed by a series of SN explosions [16, 17]. Studies suggest 14-20 SNe within a moving group, the surviving members of which are now in the Scorpius-Centaurus stellar association [15, 17]. An excess of radioactive \({}^{60}\)Fe found in deep ocean core samples of FeMn crust [14, 18, 19], in the Pacific Ocean sediments [20], in lunar regolith samples [21, 22, 23], and in the Antarctic snow [24] indicates that it may have been deposited by SN explosions in the solar neighborhood. Fifteen \({}^{60}\)Fe events and only one \({}^{61}\)Co event were observed by ACE-CRIS [25] in CRs, while there were about equal numbers of \({}^{58}\)Fe and \({}^{59}\)Co events. This implies that the \({}^{60}\)Fe events are real and not the spillover of a more abundant neighboring isotope. Meanwhile, only an upper limit was established for \({}^{59}\)Ni (\(\tau_{1/2}\)\(\sim\)76 kyr) [26], suggesting a \(\gtrsim\)100 kyr time delay between the ejecta and the next SN. The low-energy feature in the _iron spectrum_ [27], perhaps also associated with SN activity in the solar neighborhood, was revealed for the first time when comparing data from Voyager 1 [11] and ACE-CRIS with AMS-02 [28]. It is most clearly visible as a bump in the Fe/He, Fe/O, and Fe/Si ratios at 1-3 GV, while a similar feature in the He/O and Si/O ratios is absent. The large fragmentation cross section and fast ionization losses of iron hint at a local origin of the excess. The calculations [27] use the Monte Carlo code HelMod [29], based on the Parker equation and developed to describe CR transport through the heliosphere from interstellar space to the Earth. Interestingly, the Ni/Fe ratio reported by CALET [30] (see also a highlight CALET talk by Shoji Torii [31]) is constant between 10 and 200 GeV/n, indicating the same origin of the elements of the iron group. Precise measurements of the sub-Fe/Fe = (Sc+Ti+V)/Fe ratio can shed light on the origin of the iron group and provide further details about CR sources in our local neighborhood.

### Aluminum excess

The excess in the Al spectrum in the narrow rigidity range of 3-10 GV (\(\sim\)0.8-4 GeV/n) becomes clearly visible if we compare the Al/Si ratio measured by AMS-02 [32] with the model predictions [33], while a similar feature in the Na/Si ratio is absent. There are four possible physical reasons for the discrepancy between data and model calculations [33]: (i) an incorrect spectrum of \({}^{28}\)Si, the main progenitor of secondary \({}^{26,27}\)Al, (ii) errors in the total Al inelastic cross sections, (iii) errors in the production cross sections of the \({}^{26,27}\)Al isotopes, and (iv) an additional local component of primary Al.
Reason (i) can be rejected because all model calculations are based on the available data. The Si spectrum is tuned to data from Voyager 1, ACE-CRIS, and AMS-02 [34]. Importantly, AMS-02 data are available above 2 GV, the same rigidity range for all CR species. Contributions of other CR species are very minor and cannot be the cause of the observed excess. (ii) Significant errors in the total inelastic cross section of Al can be excluded as the primary cause of the excess, taking into account the accelerator data. The total inelastic cross section of \({}^{27}\)Al is measured below \(\sim\)1 GeV/n (\(<\)3 GV) [35] and in the rigidity range 10-19 GV in inverse kinematics [36]. The parameterizations of the total inelastic cross sections used in the model calculations are tuned to the available data. One can also notice the absence of similar excesses in the spectra of neighboring nuclei, such as Ne, Na, Mg, Si [33, 34]. (iii) Almost 100% of secondary \({}^{27}\)Al is produced through fragmentation of \({}^{28}\)Si, with minor contributions from \({}^{29}\)Si, \({}^{32}\)S, and \({}^{56}\)Fe [37]. Unfortunately, the isotopic production cross section (\(p\)+\({}^{28}\)Si\(\rightarrow^{27}\)Al) is the major source of uncertainty. Only a couple of data points are available for this reaction, at 1.4 GV and 2.3 GV in inverse kinematics, which constrain \({}^{27}\)Al production in the lower range of rigidities where the excess is observed. Fortunately, measurements of \({}^{28}\)Si fragmentation \({}^{28}\)Si+\(p\)\(\rightarrow\)Al performed by a Siegen group [38] indicate that the cross section remains flat between 1 and 14.5 GeV/n, corresponding to the rigidity range 3.5-31 GV (\({}^{27}\)Al), which covers the excess range and extends to significantly higher rigidities. (iv) The observed low-energy excess in the Al spectrum is most likely due to the stable isotope \({}^{27}\)Al. However, because it is stable, its yield and Galactic distribution are difficult to measure. Meanwhile, an abundant literature exists on observations of the distribution of the diffuse \(\gamma\)-ray 1.809 MeV line emission from the decay of the radioactive \({}^{26}\)Al isotope and its origin. Observations of the diffuse Galactic 1.809 MeV emission line by COMPTEL [39] and INTEGRAL [40] have shown that \({}^{26}\)Al nucleosynthesis is ongoing in the present Galaxy. Potential sources include AGB stars, novae, core-collapse SNe, and Wolf-Rayet stellar winds [41, 42]. There are also reports of the discovery of a close SNR (G275.5+18.4) with an angular diameter of 24\({}^{\circ}\) in the constellation Antlia Pneumatica [43]. The SNR distance is estimated as 60-340 pc, but is most likely \(\sim\)200 pc [44]. A marginally significant feature is detected in the 1.8 MeV \(\gamma\)-ray emission line within the Antlia SNR.

### Lithium excess

The standard propagation model [45], tuned to the B/C ratio data of Voyager 1 [11], ACE-CRIS [46], and AMS-02 [47], demonstrates good agreement with measurements of CR species in a wide energy range, implying that the Li spectrum should also be well reproduced by the same model. However, a comparison of the model calculations of secondary Li with data exhibits a significant excess over the model predictions above a few GV [45]. This may be an indication of errors in the production cross sections or of an unexpected primary Li component.
From a compilation of the majority of Li production cross sections [48], one can see that the main production channels are the fragmentations of \({}^{12}\)C and \({}^{16}\)O, measured in several different experiments. Although they are not measured perfectly, each contributes 12%-14%, and thus a 20% error in one of them would correspond to only 2%-3% of the total Li production. Other production channels contribute at a level of 1%-2% or less. It is not impossible, but rather unlikely, that the cross section errors are all biased in the same direction, resulting in the observed 20% excess. The obvious "solution" is to renormalize the LiBeB production cross sections to match the CR data and/or reduce the diffusion coefficient to boost Li production, while ensuring the calculated B/C ratio is still consistent with the experimental data [49, 50]. The scientific terms for this approach are scale factors and nuisance parameters. While these may seem cool and produce the desired results, hastily removing the excess risks throwing the baby out with the bathwater. Another cross section hypothesis suggests that the contribution of Fe fragmentation is calculated incorrectly, and that with new updated cross sections the Li data can be reproduced well enough [51]. Meanwhile, a back-of-the-envelope estimate shows that the contribution of Fe to Li production is less than 5% at 22 GV (at the maximum excess). Indeed, the _local ISM fluxes_ of the main progenitors taken at \(E_{\rm kin}\)=10.44 GeV/n (\(\approx\)22 GV) are: \(F_{\rm Fe}\)\(\approx\)\(4.476\times 10^{-3}\), \(F_{\rm Si}\)\(\approx\)\(6.723\times 10^{-3}\), \(F_{\rm Mg}\)\(\approx\)\(7.904\times 10^{-3}\), \(F_{\rm Ne}\)\(\approx\)\(6.456\times 10^{-3}\), \(F_{\rm O}\)\(\approx\)\(4.131\times 10^{-2}\), \(F_{\rm N}\)\(\approx\)\(9.089\times 10^{-3}\), \(F_{\rm C}\)\(\approx\)\(3.979\times 10^{-2}\), \(F_{\rm B}\)\(\approx\)\(7.709\times 10^{-3}\) in units of m\({}^{-2}\) s\({}^{-1}\) sr\({}^{-1}\) [GeV/n]\({}^{-1}\), where the fluxes of B-Si are taken from [34], and Fe - from [27]. The heliospheric transport in [27, 34] is calculated using the HelMod code, but the solar modulation at \(\approx\)22 GV is weak anyway. At 10 GeV/n, the cross sections of the reactions \(A+p\)\(\rightarrow\)Li +\(X\), where \(A=\) B, C, N, O, Ne, Mg, Si, Fe, and Li=\({}^{6}\)He+\({}^{6}\)Li+\({}^{7}\)Li, are about the same, in the range of 23-32 mb, see Figs. 2, 3 in [51]. Assuming they are _all the same_, we can simply use the fluxes \(F_{A}\) shown above. This gives for the iron contribution \(F_{\rm Fe}\)/\(\Sigma_{A}F_{A}\)\(\approx\)0.036 (3.6%), where \(\Sigma_{A}F_{A}\)\(\approx\)0.123. Even if the Fe\(\rightarrow\)Li cross section is 50% larger (unlikely), the Fe contribution increases only to \(\approx\)0.053 (5.3%). These numbers are calculated without taking into account the contributions of all other species, so the Fe contribution is even smaller in reality. Note that the effect of possible cross section errors in reactions with a He target is minor [51]. Although the contribution of Fe increases with rigidity due to its hard spectrum, this estimate shows that incorrect cross sections cannot be the main cause of the excess. An exciting possibility is that the primary Li may come from nova explosions [45]. Indeed, the \(\alpha\)-capture reaction of \({}^{7}\)Be production, \({}^{3}\)He(\(\alpha,\gamma\))\({}^{7}\)Be, in stars was proposed a while ago [52, 53]. A subsequent decay of \({}^{7}\)Be (half-life of 53.22 days) yields the \({}^{7}\)Li isotope.
\({}^{7}\)Be should be transported into cooler layers where it can decay to \({}^{7}\)Li - the so-called Cameron-Fowler mechanism in AGB stars. The production of \({}^{7}\)Li in the same reactions in novae was discussed in [54, 55, 56]. The observation of blue-shifted absorption lines of partly ionized \({}^{7}\)Be in the spectrum of the classical nova V339 Del [57] about 40-50 days after the explosion is the first evidence that the mechanism proposed in the 1970s is indeed working [58]. Subsequent observations of several other novae (V1369 Cen, V5668 Sgr, V2944 Oph, V407 Lupi, V838 Her) also reveal the presence of \({}^{7}\)Be lines in their spectra. Meanwhile, there could be other sources of \({}^{6,7}\)Li. Low-energy measurements by PAMELA [59] show that \({}^{7}\)Li/\({}^{6}\)Li\(\approx\)1.1 below 1 GeV/n. The _latest_ preliminary AMS-02 analysis [60] indicates \({}^{7}\)Li/\({}^{6}\)Li\(\approx\)1 below 10 GeV/n, but this still may change when the analysis is completed. Precise measurements of the isotopic ratio can shed light on the origin of Li in CRs.

## 3 Inventory of Galactic cosmic ray sources

The main sources of Galactic CRs remain SNe and SNRs, with the total kinetic energy of the ejecta in the range of \(10^{51}\) erg. The Green catalog currently lists 294 SNRs. Meanwhile, the isotopic and spectral anomalies observed recently force us to look at other sources, especially local ones, which can also contribute to the observed CRs. These are primarily Wolf-Rayet stars (currently 354 are known) and O-stars (20,000 observed), which over their fairly short lifetimes provide, respectively, \(10^{51}\) erg and \(10^{50}\) erg in high-velocity winds reaching \((2-4)\times 10^{3}\) km/s; pulsars (\(\sim\)1,500 observed), with their total rotational energy reaching \(4\times 10^{49}\) erg (Crab); and novae, providing \(10^{45}\) erg (estimated frequency 30-50/year, \(\approx\)350 observed). For comparison, countless stellar flares can provide up to \(10^{36}\) erg each, and can also add to low-energy CRs.

## 4 Protons/Helium ratio

The monotonic decrease of the H/He ratio was first noticed in the PAMELA data when plotted vs. rigidity [61] and has been confirmed by other experiments covering the impressive rigidity range from \(\sim\)100 MV to 50 TV, such as Voyager 1 [11], AMS-02 [62] (Fig. 79), and CALET [63], while DAMPE [64] presents only their latest proton and He spectra, but not their ratio. Meanwhile, we note that a flatter spectrum of He vs. H was already spotted by many earlier experiments, such as Sokol [2, 65], JACEE [4], ATIC [66], and CREAM [67]. However, theory told us that the spectral indices in rigidity should not depend on the specific properties of the primary particles. Thus, many researchers have attributed this difference to systematic effects, and I myself am no exception. Apparently, it is not only He that behaves differently from protons. Accurate measurements by AMS-02 [62] show that other, mainly primary, species, such as C, O, Si, have spectral indices above \(\sim\)60 GV very similar to He. This raises the question of whether the spectrum of primary CR species depends on the \(A/Z\) ratio, which is equal to 1 for protons and 2 for the most abundant isotopes of He, C, O, Si.

### Hypothesis of the spatial distribution of elements

The main idea is that a SN explodes into the pre-SN wind, which is composed of lighter elements when the star is young, but becomes increasingly enriched in heavier elements in its later stages [68, 69].
The SN shell then accelerates heavier elements when it is young, and lighter elements as it fades. This would make the spectra of heavier species flatter, as they are accelerated by a stronger shock, while the spectra of lighter elements are produced at later stages. A SN can also explode into a medium enriched with heavier elements from previous SN explosions. An argument against this hypothesis is that the spectra of He, C, and O have the same index, while the spectra of Ne, Mg, and most importantly Si are somewhat steeper [62]. This implies that the spatial distribution of C and O in the pre-SN wind should match the distribution of He, while the heavier Ne, Mg, and Si should be accelerated by a weaker shock at a later time.

### Hypothesis of two components in the H spectrum

This is an empirical hypothesis [62], which suggests that the observed CR proton spectrum is a combination of spectra from two distinctly different types of proton sources. One of them is a regular source that accelerates all particles and injects them into the ISM with spectra similar to He. The other source is enriched with hydrogen (or depleted in heavier species) and injects protons with a steeper spectrum (by \(\approx\)0.3 in index). Earlier, a similar idea [70, 71] was proposed to reproduce the observed positron excess. The harder-spectrum (younger) sources should be surrounded by gas, producing more secondary species including positrons, and this could explain their flatter spectrum. An argument against this hypothesis is that it requires sources that are unique to protons. It is not clear what kind of sources these are, and what makes them so unique.

### Hypothesis of different acceleration efficiency

The ideas proposed in [72] and [73] are somewhat different, but the authors came to similar conclusions: most particles in the shock are protons (\(A/Z\)=1), which generate Alfven waves and become frozen into the generated turbulence. The nuclei with \(A/Z\)\(>\)1 or \(A/Q\)\(>\)1 (\(Q\) is the charge of a partly ionized atom) are not in sync with the Alfven waves generated by protons, and are more efficiently injected into the shock and then accelerated. The hypothesis predicts that the injection efficiency of heavier species increases relative to protons with increasing shock Mach number and \(A/Q\) value. The same applies to all species with \(A/Q\)\(>\)1, but the efficiency should saturate for sufficiently high \(A/Q\) values.

## 5 Silicon & fluorine puzzles

The increase in the accuracy of CR data reveals unexpected puzzles in the O-Si group, implying that CR acceleration and transport processes are still far from being fully understood. Let us look at the Si/O ratio in the local ISM [27], i.e., take their local interstellar spectra [34], which eliminates the solar modulation. The most abundant isotopes, \({}^{16}\)O and \({}^{28}\)Si, are primaries with \(A/Z\)=2. The larger fragmentation cross section and faster ionization energy losses of Si nuclei result in the rise of the Si/O ratio with rigidity below \(\sim\)10 GV. In the absence of energy losses in the middle range (\(\sim\)10-300 GV), the Si/O ratio is constant, while at higher rigidities the ratio decreases for no apparent reason. Note that the He/O ratio is flat above 60 GV. It is also interesting to examine the differences in two secondary/primary ratios, B/O and F/Si.
If we look at rigidities \(>\)10 GV, where the solar modulation is small, the ratios diverge in the entire range up to 1 TV, with the F/Si ratio being flatter by \(\approx\)0.052 in the index (Fig. 3 in [74]), albeit with large error bars in the F/Si ratio. This usually implies a difference in the interstellar propagation, with the indices of the effective diffusion coefficients probed by these ratios differing by that same number. Then, a weak monotonic increase can be expected for the F/B ratio. However, the increase is observed only up to 100-200 GV, while in the range \(\sim\)100 GV-1 TV the ratio becomes constant (Fig. 1 in [74]), again with large error bars. The break at \(\approx\)200 GV, if confirmed, perhaps has the same origin as the break observed in the spectra of all CR species and discussed in the next section. It is striking that the increase in the F/B ratio is observed in the range 10-200 GV, where Si/O\(\approx\)const. The latter defies the hypothesis of a difference in propagation, as in this case the Si/O ratio cannot be constant, assuming that the injection spectra of O and Si are the same. Let us look at the fluorine puzzle from a different perspective. A comparison of standard propagation calculations tuned to the B/C or B/O ratio with the measured F/Si ratio shows a deficit in secondary fluorine, which increases with rigidity up to the break at 200 GV [75]. However, it becomes consistent with the AMS-02 data [74], and thus with the B/O ratio, above 200 GV, albeit with large error bars. This is a serious issue, which cannot be cured by a simple renormalization of the cross sections - the latter are flat above \(\sim\)1-2 GeV/n. For example, if we assume that the fluorine production cross sections are off by \(\approx\)10% and renormalize the calculated fluorine spectrum down by the same factor, we obtain excesses from \(\approx\)3-10 GV and at \(\sim\)100 GV. The described rigidity-dependent discrepancies imply different origins of the Si group and the CNO group, or an as yet unclear difference in their propagation, or perhaps a non-negligible primary F component. The F anomaly is also confirmed in other studies, e.g., [76]. Precise measurements of the \({}_{15}\)P/\({}_{16}\)S and sub-Fe/Fe ratios should be able to clarify the issue or add more puzzles.

## 6 200 GV & 10 TV breaks or the TV bump

The \(\sim\)200 GV breaks in the spectra of protons and He are clearly visible in the data collected by ATIC-2 [66], CREAM [67], and even earlier experiments, but initially looked like a calibration problem between lower- and higher-energy (\(\lesssim\)200 GeV) experiments. The break became widely accepted after the PAMELA publication [61] and a confirmation by Fermi-LAT [77], which used observations of the CR-induced \(\gamma\)-ray emission from the Earth's limb. The latter actually confirmed the flatter proton spectrum above \(\sim\)200 GeV, in agreement with the PAMELA measurements. The break is most clearly visible in the latest data by AMS-02 [62], the instrument best suited for this energy range. A comparison of the fits made in the range from 30-50 GV to 200 GV shows an interesting picture, where Fe has the hardest spectrum, followed by He, O, C, and then Si, S, Ne, Mg. Steeper spectra are observed for H, Al, N, Na, F, B, Be, and the steepest is Li (partly tertiary). The fluorine spectrum is flatter than the spectrum of boron, as already discussed above, and may indicate a different origin or the presence of a primary component.
Yet another break, at 10 TV, was also observed by the ATIC [66] and CREAM [78] teams, but the former made no claim, while the latter stated that more data were needed. The first decisive evidence was provided by the NUCLEON team [79], which observed the break in the spectra of protons, He, and light elements at the same rigidity. This break is now confirmed by several instruments: DAMPE [80], CALET [81], and ISS-CREAM [82]. A striking increase in anisotropy in the same energy range, from \(\sim\)0.2-100 TeV [83], indicates that these two breaks form a single bump structure from \(\sim\)0.2-100 TV rather than being two independent features; note that for protons, which dominate in CRs, the kinetic energy is approximately equal to the rigidity at high energies.

### Anisotropy map

The anisotropy map of the entire sky at 10 TeV, which corresponds to the bump maximum, was produced using the combined data of the HAWC and IceCube experiments [83]. It includes the large-scale and small-scale anisotropy features, and provides information that is crucial for understanding the bump origin. The large-scale map features a very sharp jump in the relative CR intensity across the magnetic equator - a hint at the proximity of the source. The dominant spot in the residual small-scale anisotropy map (Region A, Figs. 5, 11 [83]) points in the direction of the source, which coincides with the Galactic _anti-center_ and the direction of the local B-field, and is about 45\({}^{\circ}\) off the "tail" of the heliosphere. This is in remarkable contradiction with the conventional understanding that the phase of the dipole anisotropy should point in the direction of the Galactic center, where the majority of CR sources are located and the CR number density is the highest.

## 7 Models of the TeV bump

On March 3, 2011, PAMELA reported a break in the spectra of H and He at about the same rigidity, 230-240 GV [61]. No one knew yet that there was another break at higher rigidity. Therefore, the early papers proposed interpretations of this first break. The first paper [84], submitted on August 4, 2011, proposed four different scenarios for the origin of the break: interstellar propagation, the injection spectrum (or spectra from two distinctly different source populations), and a local source at low or high energies. The analysis showed that the propagation scenario, in which the break is associated with a change in the rigidity dependence of the diffusion coefficient, is preferable. These scenarios and their variations are still discussed in the literature. A physical interpretation of the propagation scenario [85] was proposed 10 months later (submitted on May 30, 2012): "...the diffusive propagation is no longer determined by the self-generated turbulence, but rather by the cascading of externally generated turbulence (for instance due to supernova bubbles) from large spatial scales to smaller scales where CRs can resonate." The propagation scenario naturally explains the same break rigidity for all species and reproduces the observed difference between the spectra of primary and secondary species in subsequent AMS-02 publications. Essentially, the values of the spectral breaks in the spectra of primary (C, O) and secondary (B) species are connected as \(\Delta\delta_{\rm sec}{\approx}2\Delta\delta_{\rm pri}\), where \(\Delta\delta\) is the difference in the spectral power-law indices below and above the break.
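A schematic way to see this factor of two, assuming (as a standard steady-state simplification, not a derivation from [85]) an injection spectrum \(Q\propto R^{-\gamma}\) and a diffusion coefficient \(D\propto R^{\delta}\): the equilibrium primary flux scales as \(Q/D\), while a secondary produced from that primary picks up one more factor of \(1/D\),

\[N_{\rm pri}\propto\frac{Q}{D}\propto R^{-\gamma-\delta},\qquad N_{\rm sec}\propto\frac{N_{\rm pri}}{D}\propto R^{-\gamma-2\delta},\]

so a change \(\Delta\delta\) of the diffusion index across the break shifts the primary index by \(\Delta\delta_{\rm pri}=\Delta\delta\) and the secondary index by \(\Delta\delta_{\rm sec}=2\Delta\delta\).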
Meanwhile, it is difficult to imagine a compelling physical reason for a repeated change in the diffusion coefficient, now at 10 TV, and so another explanation is needed.

### A local SNR surrounded by gas clouds

There are many models discussing the origin of the observed spectral break at 200 GV [86], but only a few of them try to reproduce the entire TV bump with the two breaks at 200 GV and 10 TV. One of the most popular is the model speculating on the idea of a local SNR (e.g., the Geminga SNR at \(\sim\)300 pc), surrounded by gas cloud(s) where the secondary species are produced. It is claimed that this model reproduces all the observed features in the spectra of CR nuclei, \(e^{\pm}\), \(\bar{p}\), and the dipole anisotropy. Various versions of this model are discussed in [87, 88, 89, 71] and elsewhere. In such a model [95], the local SNR accelerates primary species (\(e^{-}\) and nuclei). Secondary species (LiBeB, \(\bar{p}\), \(e^{\pm}\)) are produced by the accelerated nuclei in the gas cloud(s) surrounding the shell. The nuclear species coming from the SNR have a _convex_ spectral shape, when scaled with \(E_{\rm kin}^{2.6}\), formed by the cutoff of the injection spectrum \(\propto\)\(R^{-\nu}\)\(e^{-R/R_{c}}\), with \(R_{c}\)=15 TV, while the low-energy decrease is formed due to a time delay in the propagation of particles from the SNR that have not yet reached us. The SNR age, distance, and the diffusion coefficient are tuned in such a way that these bumps fit in between 200 GV and 100 TV. To match the observed CR spectra and make room for the SNR component, the spectra of Galactic CR species must have a _concave_ shape. The latter is done by adjusting the parameters of the two-halo scenario [100]. The steepening in the nuclear spectra, which becomes visible at 5 TV, is tuned to reproduce the observed steepening in the \(e^{+}\) spectrum at \(\approx\)300 GV, assuming that all observed excess positrons are produced in the proposed scenario (5 TV/300 GV\(\approx\)17 reflects the mean fraction of the kinetic energy of the primary proton transferred to the secondary pion per collision [101, 102]). Primary electrons lose energy due to inverse Compton scattering and synchrotron emission to make the observed break at 1 TV (see Sect. 8). The suggested source is the Geminga SNR, with an age of 330 kyr and a distance of 330 pc. There are some issues with the local SNR model. First, it requires fine tuning. To fit the observed data, it is necessary to make a dip in the spectra of Galactic species and a bump in the corresponding local SNR components at the same energy simultaneously. The number of free parameters in this model, not counting the normalizations of the Galactic CR species, is about 50. These include 8 transport parameters + 6 spectral parameters + 28 (maybe somewhat fewer) individual normalizations of the SNR components for each species + 7 parameters for primary electrons + 1 gas cloud grammage. Second, a simple estimate of the diffusion length shows that it cannot reproduce the observed sharp jump in the relative CR intensity across the magnetic equator [83]. The gyroradius of 10 TV particles in the interstellar 3 \(\mu\)G magnetic field is \(\sim\)0.003 pc. For a source at \(\sim\)330 pc, this gives \(\sim\)10\({}^{5}\) mean free paths - there is no way to produce the sharp jump in the anisotropy map at such a distance. Even if the mean free path is \(\sim\)1 pc, it is still \(\sim\)330 mean free paths. In fact, all models of the TV bump with relatively distant sources have the same problem.
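These numbers are easy to verify. A minimal numeric check, assuming the textbook relation \(r_{g}\,[{\rm cm}]=R\,[{\rm V}]/(300\,B\,[{\rm G}])\) for a relativistic particle of rigidity \(R\) in a field \(B\):

```python
# Back-of-the-envelope check of the gyroradius argument, assuming
# r_g [cm] = R [V] / (300 * B [G]) for a relativistic particle.
R_volts = 10e12          # rigidity: 10 TV
B_gauss = 3e-6           # interstellar field: 3 microgauss
PC_CM = 3.086e18         # parsec in cm

r_g_pc = R_volts / (300.0 * B_gauss) / PC_CM
print(f"gyroradius ~ {r_g_pc:.4f} pc")                        # ~0.004 pc

d_source_pc = 330.0      # assumed distance to the Geminga SNR
print(f"distance in gyroradii ~ {d_source_pc / r_g_pc:.1e}")  # ~1e5
```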
This leads us to the conclusion that the "source" should be much closer to the solar system, and it should be a different type of source.

### Reacceleration bump

The large number of free parameters discussed above can be avoided if, instead of a source surrounded by gas clouds, we assume that pre-existing CRs are _reaccelerated_ in a local shock [103, 104, 105]. If the shock is located a few particle mean free paths from the observer, or is connected to the observer by a magnetic field line, it can also explain the observed sharp jump in the relative CR intensity across the magnetic equator. Such a model requires only a moderate reacceleration below a rigidity of \(\sim\)50 TV; a shock with a Mach number of \(\sim\)1.5 should suffice. Reaccelerated particles below \(\sim\)0.2 TV are convected with the ISM flow and do not reach us, thus creating the bump. This single universal process acts on all CR species in the rigidity range below 100 TV. The position of the sun relative to the shock defines the bump parameters, which can change over time. The model does not specify the location of the shock. A passing star's bow shock, a shock from an old SNR, or any local shock with a small Mach number \(\sim\)1.5 can do the job. The distance-shock-size relation gives an estimate of the distance: \(\zeta_{obs}({\rm pc})\sim 100\sqrt{L_{\perp}({\rm pc})}\); for sufficiently large bow shocks, \(L_{\perp}\)=10\({}^{-3}\)-10\({}^{-2}\) pc, the distance is \(\zeta_{obs}\)=3-10 pc. The _only three_ unknown bump-specific fitting parameters can be obtained from a fit to the proton spectrum, the best measured among CR species. The spectra of all other CR species can then be calculated using a simple analytical formula, where the input parameters are their normalizations and spectral power-law indices derived from their local interstellar spectra (LIS) below the bump. The steeper the spectrum of ambient particles, the stronger the effect of reacceleration, and the larger the bump. The LIS for all species H-Ni are provided in [34], and the updated Fe LIS in [27]. Interestingly, the increased CR intensity feature in the small-scale residual anisotropy map aligns well with the direction of the local magnetic field [83] and may indicate the direction to the shock. One of the favorite candidates is the star \(\epsilon\) Eri [105], with its configuration of bow and termination shocks, projected just 6.7\({}^{\circ}\) off the direction of the local magnetic field. \(\epsilon\) Eri is a solar-like star, a K2 dwarf (5 000 K) with a mass of 0.82\(M_{\odot}\) and a radius of 0.74\(R_{\odot}\). It is located at a distance of 3.2 pc and has a speed of 20 km/s, a bit small, but it has an astonishing mass loss rate: \(\dot{m}\)\(\sim\)30-1500\(\dot{M}_{\odot}\). It also has a huge astrosphere - 8000 au, or 47\({}^{\prime}\) as seen from Earth (larger than the angular size of the Moon!). Other candidate stars are \(\epsilon\) Indi, a triple system K4.5V (0.77\(M_{\odot}\))+T1.5 (0.072\(M_{\odot}\))+T6 (0.067\(M_{\odot}\)) at a distance of 3.6 pc, moving with a radial speed of 40.4 km/s, or Scholz's Star, a binary M9.5 (0.095\(M_{\odot}\))+T5.5 (0.063\(M_{\odot}\)) at a distance of 6.8 pc, moving with a radial speed of 82.4 km/s. There could be other types of shocks in the solar neighborhood; see the discussion in [103, 105].

## 8 Cosmic ray electrons

Electrons in CRs are subject to severe energy losses at all energies.
The fastest losses are at low and high energies, due to ionization and to inverse Compton and synchrotron emission, respectively. Therefore, in order for very-high-energy electrons to reach us, their sources must be close to us and relatively young. Perhaps Nishimura et al. [106] were the first to suggest that "the electron spectrum in TeV region would deviate from smooth power law behavior due to small number of sources which are capable of contributing to the observed flux... several bumps would be observed in the spectrum correlating to each source..." Other early papers [107, 108] also show possible contributions of local sources. Subsequent publications discussed the origin of the observed spectrum and modeled the contribution of local sources above 1 TeV. The all-electron (\(e^{-}\)+\(e^{+}\)) spectrum up to 1 TeV was measured by _Fermi_-LAT [109, 110] (pre-_Fermi_-LAT measurements are summarized in [109]). It appears to be too flat, contrary to the expectations of a steep decrease (see Fig. 57 in [62] and Fig. 5 in [111]), and cannot be reproduced with a single component. The sharp cutoff above \(\sim\)1 TeV was reported by H.E.S.S. [112, 113] and confirmed by VERITAS [114]. Subsequent measurements by PAMELA [115], AMS-02 [62, 116, 117], CALET [118], DAMPE [119], and _Fermi_-LAT [120] revealed a flattening at \(\sim\)60 GeV and confirmed the cutoff at 1 TeV. Interestingly, the experiments are consistent in pairs; that is, CALET and AMS-02 are consistent with each other, but differ by \(\approx\)20% from DAMPE and _Fermi_-LAT, which are also consistent with each other. Above 1 TeV, the H.E.S.S., CALET, and DAMPE measurements are consistent with each other; there is also a slight hint at a possible feature above 3 TeV, albeit with large error bars. Slow diffusion zones observed around several pulsars [121] can increase the energy losses of TeV electrons, making observations of such features less likely. The all-electron spectrum includes the \(e^{+}\) excess (the so-called "signal"), and may include an identical \(e^{-}\) "signal" if the source is charge-sign symmetric, such as pulsars or dark matter (DM) [62]. There are quite a number of publications discussing the contributions to the all-electron spectrum from local sources. Such multi-component models include the average electron spectrum from distant Galactic sources and parametrized contributions from local catalog sources (SNRs, PWNe), and may utilize the observed radio spectral indices of local SNRs; see examples in [122, 123]. AMS-02 data on electrons (\(e^{-}\)) may offer a clue to the origin of the break at \(\sim\)1 TeV. A preliminary analysis [124] uses a three-component fit, which includes low- and high-energy power laws plus an \(e^{+}\)-like source term. According to this fit, the break in the all-electron spectrum at \(\sim\)1 TeV is related to the cutoff in the \(e^{+}\) spectrum plus the identical \(e^{-}\) component, and implies a charge-symmetric source of the excess positrons (pulsars, DM). However, more accurate data are needed to test whether the charge symmetry is exact (e.g., hadronic processes do not produce identical \(e^{\pm}\) spectra).

## 9 Cosmic ray positrons

Unexpected behavior of the positron fraction \(e^{+}/(e^{+}\!+\!e^{-})\) was first noticed in the data from the TS93 balloon flight [125], which observed a constant positron fraction of 0.078\(\pm\)0.016 in the range of 5-60 GeV, and in the HEAT experiment [126], which detected "a small positron flux of nonstandard origin."
Earlier experiments were contaminated with protons due to insufficient rejection capability. The results did not attract much attention until the PAMELA team reported a surprising rise in the positron fraction up to 100 GeV [127], contrary to the expectations of a monotonic decrease with energy [128, 129]. Conventional models imply a smooth CR source distribution and steady-state production of secondary species in hadronic CR interactions with the ISM gas. The rise was confirmed by _Fermi_-LAT [130], which used the geomagnetic field to identify positrons, and, with a higher precision and up to \(\sim\)500 GeV, by AMS-02 [131, 132]. The latest AMS-02 data indicate that the \(e^{+}\) flux is the sum of low-energy secondaries from CR production plus a high-energy component from a new source with a cutoff at \(749^{+197}_{-137}\) GeV [62, 124]. Perhaps most striking is the fact that the \(e^{+}/\bar{p}\) and \(\bar{p}/p\) ratios barely change from 60-525 GeV [62, 133], hinting at some connection between the three species. Fitting a constant to the flux ratio in this range yields \(e^{+}/\bar{p}=2.01\pm 0.03\)(stat.) \(\pm 0.05\)(syst.) [134], consistent with a constant. It is unclear whether this has a fundamental importance or is a chance coincidence. Some authors argue that it indicates that both species (\(e^{+},\bar{p}\)) are secondary and are produced in the same process [135, 136]. However, the fit could also be performed with a moderately rising functional dependence, because only a few lower-energy points, from 60-130 GeV, are constraining. Given the high collected positron statistics, the authors of [137] point to small but significant peaks in the positron fraction at 12 and 21 GeV, which could be associated with a powerful explosion in the Galactic center and the _Fermi_ Bubbles [138, 139], or with yet unknown processes. The positron anomaly gave rise to a huge number of interpretation papers, most of which connect the positron excess with DM. However, the accurate multi-messenger data (\(\gamma,\bar{p}\), CRs) collected after the PAMELA discovery impose tight constraints on the most simplistic DM models (see a review talk by Francesca Calore [140]). The astrophysical interpretations can be divided into groups of models with primary and secondary origin of positrons. In turn, models with primary positrons discuss their production in pulsars, with additional effects produced by pulsar bow shocks and slow diffusion zones. Secondary production models are divided between positron production in the ISM and in SNR shocks; the latter exploit various properties of SNR shocks and various configurations of target gas distributions. There are also models exploiting the inhomogeneity of the SNR (CR source) distribution. Here we mention only a few examples. _A model with a local SNR surrounded by gas clouds (secondary production in the local ISM) has already been discussed above. We refer the reader to Sect. 7.1._

### Positron production in Galactic SNR shocks

The first models [141] speculated on the idea of producing secondary species in the SNR shock where CRs are accelerated (originally proposed in [142]). Therefore, the secondary \(e^{\pm}\) participate in the acceleration process and turn out to have a very flat spectrum, which is responsible, after propagation in the Galaxy, for the observed positron excess.
However, it soon became clear that this process should also increase the production of other secondary species, and thus other secondary-to-primary ratios (\(\bar{p}/p\), B/C) should rise too [143, 144, 145, 146], contrary to observations. The non-observation of such a rise provides significant constraints on the physical conditions in the shock.

### Secondary positrons in the volume charge model

In this model [147], CRs accelerated in the SNR shell penetrate into the dense gas clumps upstream, where they interact with the gas and produce secondary particles (\(\bar{p},e^{\pm}\)). The predominance of positively charged particles in the shock and in the precursor develops a positive electric volume charge in the gas cloud, which preferentially expels secondary positrons into the upstream plasma, where they are accelerated by the shock. Since the shock is modified, these positrons develop a harder spectrum than the CR electrons accelerated in other SNRs. Mixing these populations explains the increase in the positron fraction \(e^{+}/(e^{+}+e^{-})\) above 8 GeV. Besides, there are also other sources of positrons in the ISM, such as radioactive decay.

### Primary positrons from pulsars

Pulsars are the primary charge-symmetric suspects, as they disconnect positrons from nuclear species and therefore remove the constraints associated with the production of other secondaries. Probably the first mention of pulsars as sources of CR positrons can be found in [148], and the first calculation of secondary production and the pulsar contribution to CR positrons in [149]. A calculation of the positron fraction using the data available at that time (contaminated by protons above \(\approx\)5 GeV) was performed in [150]. This model included secondary \(e^{\pm}\), primary \(e^{-}\) from SNRs, and primary \(e^{\pm}\) from pulsars. Examples of modern calculations of the positron fraction, which include \(e^{\pm}\) from known sources and secondary \(e^{\pm}\), can be found in [151, 152].

### Primary positrons in the pulsar bow shock model

Pulsars with high spin-down power produce relativistic winds. Some pulsars move relative to their surrounding ISM at supersonic speeds, producing bow shocks. Ultrarelativistic particles accelerated at the termination surface of the pulsar wind can experience reacceleration in the converging flow system, producing a universal spectrum similar to that of protons accelerated in the SNR shell. This scenario naturally explains why the \(e^{+}/p\) ratio remains constant above 60 GeV. Primary positrons and electrons in this scenario have similar spectra. The idea of positron reacceleration in a pulsar bow shock was proposed in [153], and further detailed in [154]. It is suggested that the 5.7 ms millisecond pulsar (MSP) PSR J0437-4715 may produce the observed positrons. The pulsar distance and velocity are \(\approx\)156.79\(\pm\)0.25 pc and \(\sim\)100 km/s. It is the closest and brightest MSP, in a binary system with a white dwarf companion and an orbital period of 5.7 days. It is observed in the optical, far-ultraviolet (FUV), and X-ray bands and exhibits the greatest long-term rotational stability of any pulsar. The model assumes that the pulsar's position coincides with the direction of the local magnetic field and adjusts the parallel diffusion coefficient to match optical, FUV, and X-ray constraints on the flux of accelerated leptons from the nebula.

## 10 Cosmic ray antiprotons and 10 GeV excess

CR antiprotons in the range 2-12 GeV were observed for the first time during balloon flights [155, 156].
The following series of Antarctic flights by BESS [157], North American flights by the MASS91 [158], HEAT [159], and CAPRICE98 [160] instruments, and the space experiments PAMELA [161] and AMS-02 [62] extended the energy range and increased the accuracy of \(\bar{p}\) measurements. The \(\bar{p}\) spectrum is now measured up to \(\sim\)450 GV [62]. It features a low-energy rise caused by the kinematics of the process [162]. The ratio \(\bar{p}/p\) is approximately constant from 30-450 GV [62]. Following the publication of the AMS-02 \(\bar{p}\) data [133], several groups independently noticed an excess over conventional model predictions at around 10 GeV [163, 164, 165]; all three papers are marked as published on May 9-12, 2017. Two papers [164, 165] proposed an interpretation in terms of DM, while [163] pointed to increased systematics due to the high solar activity period during data taking or due to the production cross section uncertainties; see also [166, 167, 168, 169, 170]. At present, two hypotheses remain: (i) a DM contribution, and (ii) systematics due to the solar modulation and/or cross section uncertainties. The DM hypothesis is the more attractive one, but attempts are being made to improve on the cross sections. Interestingly, the same proposed DM candidate (\(m_{\chi}\approx 50-100\) GeV) can reproduce the 10 GeV antiproton excess, the \(\gamma\)-ray excess from the Galactic center, and the extended \(\gamma\)-ray emission from the 400-kpc-across halo of the Andromeda galaxy (M31) [171, 172] (see also references therein). Due to space limitations, I have to conclude my review with an incomplete list of papers that discuss astrophysical uncertainties associated with antiprotons [173, 174, 175, 176, 177, 178, 179], and the DM interpretation of the antiproton excess [180, 181, 182, 183]. ## 11 Concluding remarks - situation with production cross sections Unfortunately, one major remaining bottleneck is the accuracy of particle and especially isotopic production cross sections (see discussion in [48, 184]). Every time an unexpected spectral feature is detected, there is a chance of an error in the model predictions due to errors in the cross sections. The elimination of such errors is easier and much cheaper than building and successfully launching a major space mission like AMS-02, but this requires a dedicated community effort. _Partial support from NASA grants Nos. 80NSSC22K0477, 80NSSC22K0718, 80NSSC23K0169 is greatly acknowledged._
2305.11492
Results on the Non-Vanishing of Derivatives of L-Functions of Vector-Valued Modular Forms
We show a non-vanishing result for the averages of the derivatives of $L$-functions associated with the orthogonal basis of the space of vector-valued cusp forms of weight $k\in \frac12 \mathbb{Z}$ on the full group in the critical strip. We also show the existence of at least one basis element whose $L$-function does not vanish under certain conditions. As an application, we generalize our result to Kohnen's plus space and prove an analogous result for Jacobi forms.
Subong Lim, Wissam Raji
2023-05-19T07:45:34Z
http://arxiv.org/abs/2305.11492v1
# Results on the non-vanishing of derivatives of \(L\)-functions of vector-valued modular forms ###### Abstract. We show a non-vanishing result for the averages of the derivatives of \(L\)-functions associated with the orthogonal basis of the space of vector-valued cusp forms of weight \(k\in\frac{1}{2}\mathbb{Z}\) on the full group in the critical strip. We also show the existence of at least one basis element whose \(L\)-function does not vanish under certain conditions. As an application, we generalize our result to Kohnen's plus space and prove an analogous result for Jacobi forms. ## 1. Introduction The theory of \(L\)-functions plays a crucial role in both number theory and arithmetic geometry. \(L\)-functions exhibit natural connections with various mathematical subjects including number fields, automorphic forms, Artin representations, Shimura varieties, abelian varieties, and intersection theory. The central values of \(L\)-functions and their derivatives reveal important connections to the geometric and arithmetic properties of Shimura varieties, such as the Gross-Zagier formula, Colmez's conjecture, and the averaged Colmez formula. On the other hand, vector-valued modular forms are important generalizations of elliptic modular forms that arise naturally in the theory of Jacobi forms, Siegel modular forms, and Moonshine. They are also an important tool for tackling classical problems in the theory of modular forms: Selberg used these forms to estimate the Fourier coefficients of classical modular forms [11]. Borcherds in [3] and [4] used vector-valued modular forms associated with Weil representations to provide a description of the Fourier expansion of various theta liftings. Vector-valued modular forms also have notable applications in high-energy physics, mainly by providing a differential-equation method for constructing modular multiplets and by revealing the simple structure of modular-invariant mass models [6]. Other applications, concerning vector-valued modular forms of half-integer weight, provide a simple solution to the Riemann-Hilbert problem for representations of the modular group [2]. So it is only natural to study the \(L\)-functions of vector-valued modular forms and their properties as a buildup that aligns with the development of a Hecke theory for the space of vector-valued modular forms. In [9], we show that averages of \(L\)-functions associated with vector-valued cusp forms do not vanish when the average is taken over the orthogonal basis of the space of vector-valued cusp forms. To illustrate, we let \(\{f_{k,1},\ldots,f_{k,d_{k}}\}\) be an orthogonal basis of the space \(S_{k,\chi,\rho}\) of vector-valued cusp forms with Fourier coefficients \(b_{k,l,j}(n)\), where \(\chi\) is a multiplier system of weight \(k\in\frac{1}{2}\mathbb{Z}\) on \(\operatorname{SL}_{2}(\mathbb{Z})\) and \(\rho:\operatorname{SL}_{2}(\mathbb{Z})\to\operatorname{GL}_{m}(\mathbb{C})\) is an \(m\)-dimensional unitary complex representation. We also let \(t_{0}\in\mathbb{R},\epsilon>0\), and \(1\leq i\leq m\). Then, there exists a constant \(C(t_{0},\epsilon,i)>0\) such that for \(k>C(t_{0},\epsilon,i)\) the function \[\sum_{l=1}^{d_{k}}\frac{<L^{*}(f_{k,l},s),\mathbf{e}_{i}>}{(f_{k,l},f_{k,l})}b_{k,l,i}(n_{i,0})\] does not vanish at any point \(s=\sigma+it_{0}\) with \(\frac{k-1}{2}<\sigma<\frac{k}{2}-\epsilon\), where \(<L^{*}(f_{k,l},s),\mathbf{e}_{i}>\) denotes the \(i\)th component of \(L^{*}(f_{k,l},s)\).
Kohnen, Sengupta, and Weigel in [8] proved a non-vanishing result for the derivatives of \(L\)-functions in the critical strip for elliptic modular forms on the full group. In [10], the second author generalized their result to modular forms of half-integer weight on the plus space. In this paper, we show analogous results for the averages of the derivatives of \(L\)-functions for the orthogonal basis of the space of vector-valued cusp forms in the critical strip. In particular, given \(k\in\frac{1}{2}\mathbb{Z}\), \(\chi\) a multiplier system of weight \(k\) on \(\mathrm{SL}_{2}(\mathbb{Z})\), \(t_{0}\in\mathbb{R},\epsilon>0,1\leq i\leq m\), and \(n\) a positive integer, we show that there exists a constant \(C(t_{0},\epsilon,i,n)>0\) such that for \(k>C(t_{0},\epsilon,i,n)\) the function \[\sum_{l=1}^{g_{k}}\frac{b_{k,l,i}(n_{i,0})}{(f_{k,l},f_{k,l})}\frac{d^{n}}{ds^{n}}<L^{*}(f_{k,l},s),\mathbf{e}_{i}>\] does not vanish at any point \(s=\sigma+it\) with \(t=t_{0},\frac{k-1}{2}<\sigma<\frac{k}{2}-\epsilon\). The isomorphism between the space of Jacobi forms of weight \(k\) and index \(m\) on \(SL_{2}(\mathbb{Z})\) and the space of vector-valued cusp forms with a specific multiplier system and a given Weil representation depending on \(m\) leads to an analogous result for Jacobi forms. We also give a similar result for cusp forms in the plus space. ## 2. The Kernel Function In this section, we define the kernel function \(R_{k,s,i}\) and determine its Fourier expansion. Since the kernel function is a cusp form, it plays an important role in expressing the coefficients of a given cusp form in terms of \(L\)-functions when that cusp form is written with respect to the orthogonal basis. So, let \(\Gamma=\mathrm{SL}_{2}(\mathbb{Z})\), \(k\in\frac{1}{2}\mathbb{Z}\) and \(\chi\) a unitary multiplier system of weight \(k\) on \(\Gamma\), i.e. \(\chi:\Gamma\to\mathbb{C}\) satisfies the following conditions: 1. \(|\chi(\gamma)|=1\) for all \(\gamma\in\Gamma\). 2. \(\chi\) satisfies the consistency condition \[\chi(\gamma_{3})(c_{3}\tau+d_{3})^{k}=\chi(\gamma_{1})\chi(\gamma_{2})(c_{1}\gamma_{2}\tau+d_{1})^{k}(c_{2}\tau+d_{2})^{k},\] where \(\gamma_{3}=\gamma_{1}\gamma_{2}\) and \(\gamma_{i}=\left(\begin{smallmatrix}a_{i}&b_{i}\\ c_{i}&d_{i}\end{smallmatrix}\right)\in\Gamma\) for \(i=1,2,3\). Let \(m\) be a positive integer and \(\rho:\Gamma\to\mathrm{GL}(m,\mathbb{C})\) an \(m\)-dimensional unitary complex representation. Let \(\{\mathbf{e}_{1},\ldots,\mathbf{e}_{m}\}\) denote the standard basis of \(\mathbb{C}^{m}\). For a vector-valued function \(f=\sum_{j=1}^{m}f_{j}\mathbf{e}_{j}\) on \(\mathbb{H}\) and \(\gamma\in\Gamma\), define a slash operator by \[(f|_{k,\chi,\rho}\gamma)(\tau):=(c\tau+d)^{-k}\chi^{-1}(\gamma)\rho^{-1}(\gamma)f(\gamma\tau).\] The definition of vector-valued modular forms is given as follows. **Definition 2.1**.: _A vector-valued modular form of weight \(k\in\frac{1}{2}\mathbb{Z}\), multiplier system \(\chi\), and type \(\rho\) on \(\Gamma\) is a sum \(f=\sum_{j=1}^{m}f_{j}\mathbf{e}_{j}\) of functions holomorphic in \(\mathbb{H}\) satisfying the following conditions:_ 1. \(f|_{k,\chi,\rho}\gamma=f\) _for all_ \(\gamma\in\Gamma\)_._ 2.
_For each_ \(1\leq j\leq m\)_, each function_ \(f_{j}\) _has a Fourier expansion of the form_ \[f_{j}(\tau)=\sum_{n+\kappa_{j}\geq 0}a_{j}(n)e^{2\pi i(n+\kappa_{j})\tau}.\] _Here and throughout the paper,_ \(\kappa_{j}\) _is a certain real number with_ \(0\leq\kappa_{j}<1\)_._ The space of all vector-valued modular forms of weight \(k\), multiplier system \(\chi\), and type \(\rho\) on \(\Gamma\) is denoted by \(M_{k,\chi,\rho}\). There is a subspace \(S_{k,\chi,\rho}\) of vector-valued cusp forms, for which we require that each \(a_{j}(n)=0\) when \(n+\kappa_{j}\) is non-positive. For a vector-valued cusp form \(f(\tau)=\sum_{j=1}^{m}\sum_{n+\kappa_{j}>0}a_{j}(n)e^{2\pi i(n+\kappa_{j})\tau}\mathbf{e}_{j}\in S_{k,\chi,\rho}\), we see that \(a_{j}(n)=O(n^{k/2})\) for every \(1\leq j\leq m\) as \(n\to\infty\), by the same argument as for elliptic modular forms. Then, the vector-valued \(L\)-function defined by \[L(f,s):=\sum_{j=1}^{m}\sum_{n+\kappa_{j}>0}\frac{a_{j}(n)}{(n+\kappa_{j})^{s}}\mathbf{e}_{j}\] converges absolutely for \(\mathrm{Re}(s)\gg 0\). It has the integral representation \[\frac{\Gamma(s)}{(2\pi)^{s}}L(f,s)=\int_{0}^{\infty}f(iv)v^{s}\frac{dv}{v}.\] From this, we see that it has an analytic continuation to \(\mathbb{C}\) and a functional equation given by \[L^{*}(f,s)=i^{k}\chi(S)\rho(S)L^{*}(f,k-s),\] where \(L^{*}(f,s)=\frac{\Gamma(s)}{(2\pi)^{s}}L(f,s)\) and \(S=\left(\begin{smallmatrix}0&-1\\ 1&0\end{smallmatrix}\right)\). Let \(i\) be an integer with \(1\leq i\leq m\). Define \[p_{s,i}(\tau):=\tau^{-s}\mathbf{e}_{i}.\] For \(s\in\mathbb{C}\) with \(1<\mathrm{Re}(s)<k-1\), we define the kernel function by \[R_{k,s,i}:=\gamma_{k}(s)\sum_{\gamma\in\Gamma}p_{s,i}|_{k,\chi,\rho}\gamma,\] where \(\gamma_{k}(s):=\frac{1}{2}e^{\pi is/2}\Gamma(s)\Gamma(k-s)\). This series converges absolutely and uniformly whenever \(\tau=u+iv\) satisfies \(\epsilon\leq v\leq 1/\epsilon\) for a given \(\epsilon>0\) and \(s\) varies over a compact set. Moreover, it is a vector-valued cusp form in \(S_{k,\chi,\rho}\). We write \(<\cdot,\cdot>\) for the standard scalar product on \(\mathbb{C}^{m}\), i.e. \[\left\langle\sum_{j=1}^{m}\lambda_{j}\mathbf{e}_{j},\sum_{j=1}^{m}\mu_{j}\mathbf{e}_{j}\right\rangle=\sum_{j=1}^{m}\lambda_{j}\overline{\mu_{j}}.\] Then, for \(f,g\in M_{k,\chi,\rho}\), we define the Petersson scalar product of \(f\) and \(g\) by \[(f,g):=\int_{\mathcal{F}}<f(\tau),g(\tau)>v^{k}\frac{dudv}{v^{2}}\] if the integral converges, where \(\mathcal{F}\) is the standard fundamental domain for the action of \(\Gamma\) on \(\mathbb{H}\). Then, by [9, Lemma 3.1], we have \[(f,R_{k,\bar{s},i})=c_{k}<L^{*}(f,s),\mathbf{e}_{i}>, \tag{2.1}\] where \(c_{k}:=\frac{(-1)^{k/2}\pi(k-2)!}{2^{k-2}}\). We can also compute the Fourier expansion of \(R_{k,s,i}\).
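As a quick numerical sanity check of the integral representation above, consider the scalar case \(m=1\) with trivial character and \(\kappa=0\), taking \(f=\Delta\), the weight-12 cusp form whose coefficients are the standard Ramanujan \(\tau\) values. The minimal sketch below truncates both sides to ten coefficients, so they must agree to the working precision.

```python
# Check Gamma(s)/(2pi)^s * L(f,s) = \int_0^infty f(iv) v^s dv/v for f = Delta,
# with both sides truncated to the first ten Fourier coefficients.
import mpmath as mp

tau = {1: 1, 2: -24, 3: 252, 4: -1472, 5: 4830,
       6: -6048, 7: -16744, 8: 84480, 9: -113643, 10: -115920}

def delta(v):                          # Delta(iv), truncated Fourier expansion
    return mp.fsum(t * mp.exp(-2*mp.pi*n*v) for n, t in tau.items())

s = mp.mpf(8)                          # any s of large real part works here
series = mp.gamma(s)/(2*mp.pi)**s * mp.fsum(t/mp.mpf(n)**s for n, t in tau.items())
integral = mp.quad(lambda v: delta(v) * v**(s - 1), [0, mp.inf])
print(series, integral)                # the two values agree to high precision
```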
**Lemma 2.2**.: _[_9_, Lemma 3.2]_ _The function \(R_{k,s,i}\) has the Fourier expansion_ \[R_{k,s,i}(\tau)=\sum_{j=1}^{m}\sum_{n+\kappa_{j}>0}r_{k,s,i,j}(n)e^{2\pi i(n+\kappa_{j})\tau},\] _where \(r_{k,s,i,j}(n)\) is given by_ \[r_{k,s,i,j}(n) = \delta_{i,j}(2\pi)^{s}\Gamma(k-s)(n+\kappa_{i})^{s-1}\] \[+\chi^{-1}\left(\left(\begin{smallmatrix}0&-1\\ 1&0\end{smallmatrix}\right)\right)\rho^{-1}\left(\left(\begin{smallmatrix}0&-1\\ 1&0\end{smallmatrix}\right)\right)_{j,i}(-1)^{k/2}(2\pi)^{k-s}\Gamma(s)(n+\kappa_{j})^{k-s-1}\] \[+\frac{(-1)^{k/2}}{2}(2\pi)^{k}(n+\kappa_{j})^{k-1}\frac{\Gamma(s)\Gamma(k-s)}{\Gamma(k)}\sum_{\begin{subarray}{c}(c,d)\in\mathbb{Z}^{2}\\ (c,d)=1,ac>0\end{subarray}}c^{-k}\left(\frac{c}{a}\right)^{s}\] \[\times\bigg{(}e^{2\pi i(n+\kappa_{j})d/c}e^{\pi is}\chi^{-1}\left(\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\right)\rho^{-1}\left(\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\right)_{j,i}\,{}_{1}F_{1}(s,k;-2\pi in/(ac))\] \[+e^{-2\pi i(n+\kappa_{j})d/c}e^{-\pi is}\chi^{-1}\left(\left(\begin{smallmatrix}-a&b\\ c&-d\end{smallmatrix}\right)\right)\rho^{-1}\left(\left(\begin{smallmatrix}-a&b\\ c&-d\end{smallmatrix}\right)\right)_{j,i}\,{}_{1}F_{1}(s,k;2\pi in/(ac))\bigg{)},\] _where \({}_{1}F_{1}(\alpha,\beta;z)\) is Kummer's degenerate hypergeometric function._ ## 3. The Main Theorem In this section, we give the main theorem for the non-vanishing of an average of derivatives of \(L\)-functions in the critical strip. We also derive a corollary about the existence of at least one \(L\)-function whose derivative does not vanish. To do so, let \[n_{i,0}:=\begin{cases}1&\text{if }\kappa_{i}=0,\\ 0&\text{if }\kappa_{i}\neq 0.\end{cases}\] Then, we have the following theorem. **Theorem 3.1**.: _Let \(k\in\frac{1}{2}\mathbb{Z}\) and let \(\chi\) be a multiplier system of weight \(k\) on \(\operatorname{SL}_{2}(\mathbb{Z})\). Suppose that \(\{f_{k,1},\ldots,f_{k,g_{k}}\}\) is an orthogonal basis of \(S_{k,\chi,\rho}\) with Fourier expansions_ \[f_{k,l}(\tau)=\sum_{j=1}^{m}\sum_{n+\kappa_{j}>0}b_{k,l,j}(n)e^{2\pi i(n+\kappa_{j})\tau}\ (1\leq l\leq g_{k}),\] _where \(g_{k}:=\dim S_{k,\chi,\rho}\). Let \(t_{0}\in\mathbb{R},\epsilon>0,1\leq i\leq m\), and \(n\) a positive integer. Then, there exists a constant \(C(t_{0},\epsilon,i,n)>0\) such that for \(k>C(t_{0},\epsilon,i,n)\) the function_ \[\sum_{l=1}^{g_{k}}\frac{b_{k,l,i}(n_{i,0})}{(f_{k,l},f_{k,l})}\frac{d^{n}}{ds^{n}}<L^{*}(f_{k,l},s),\mathbf{e}_{i}> \tag{3.1}\] _does not vanish at any point \(s=\sigma+it\) with \(t=t_{0},\frac{k-1}{2}<\sigma<\frac{k}{2}-\epsilon\)._ Proof.: We follow the argument in the proof of Theorem 3.1 in [8]. For each \(1\leq i\leq m\), by (2.1), we have \[R_{k,s,i}=c_{k}\sum_{l=1}^{g_{k}}\frac{<L^{*}(f_{k,l},s),\mathbf{e}_{i}>}{(f_{k,l},f_{k,l})}f_{k,l}.
\tag{3.2}\] If we take the first Fourier coefficients of the \(i\)th component function on both sides of (3.2), then by Lemma 2.2 we have \[(2\pi)^{s}\Gamma(k-s)(n_{i,0}+\kappa_{i})^{s-1}\] \[+\chi^{-1}\left(\left(\begin{smallmatrix}0&-1\\ 1&0\end{smallmatrix}\right)\right)\rho^{-1}\left(\left(\begin{smallmatrix}0&-1\\ 1&0\end{smallmatrix}\right)\right)_{i,i}(-1)^{k/2}(2\pi)^{k-s}\Gamma(s)(n_{i,0}+\kappa_{i})^{k-s-1}\] \[+\frac{(-1)^{k/2}}{2}(2\pi)^{k}(n_{i,0}+\kappa_{i})^{k-1}\sum_{\begin{subarray}{c}(c,d)\in\mathbb{Z}^{2}\\ (c,d)=1,ac>0\end{subarray}}c^{-k}\left(\frac{c}{a}\right)^{s}\] \[\times\biggl{(}e^{2\pi i(n_{i,0}+\kappa_{i})d/c}e^{\pi is}\chi^{-1}\left(\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\right)\rho^{-1}\left(\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\right)_{i,i}\,{}_{1}f_{1}(s,k;-2\pi in_{i,0}/(ac))\] \[\quad+e^{-2\pi i(n_{i,0}+\kappa_{i})d/c}e^{-\pi is}\chi^{-1}\left(\left(\begin{smallmatrix}-a&b\\ c&-d\end{smallmatrix}\right)\right)\rho^{-1}\left(\left(\begin{smallmatrix}-a&b\\ c&-d\end{smallmatrix}\right)\right)_{i,i}\,{}_{1}f_{1}(s,k;2\pi in_{i,0}/(ac))\biggr{)}\] \[=c_{k}\sum_{l=1}^{g_{k}}\frac{<L^{*}(f_{k,l},s),\mathbf{e}_{i}>}{(f_{k,l},f_{k,l})}b_{k,l,i}(n_{i,0}), \tag{3.3}\] where \[{}_{1}f_{1}(\alpha,\beta;z):=\frac{\Gamma(\alpha)\Gamma(\beta-\alpha)}{\Gamma(\beta)}\ {}_{1}F_{1}(\alpha,\beta;z).\] We assume that (3.1) is zero. If we take the \(n\)th derivative with respect to \(s\) on both sides of (3.3), then we have \[\frac{d^{n}}{ds^{n}}\biggl{[}(2\pi)^{s}\Gamma(k-s)(n_{i,0}+\kappa_{i})^{s-1}\biggr{]}\] \[=-\frac{d^{n}}{ds^{n}}\biggl{[}\chi^{-1}\left(\left(\begin{smallmatrix}0&-1\\ 1&0\end{smallmatrix}\right)\right)\rho^{-1}\left(\left(\begin{smallmatrix}0&-1\\ 1&0\end{smallmatrix}\right)\right)_{i,i}(-1)^{k/2}(2\pi)^{k-s}\Gamma(s)(n_{i,0}+\kappa_{i})^{k-s-1}\biggr{]}\] \[-\frac{d^{n}}{ds^{n}}\biggl{[}\frac{(-1)^{k/2}}{2}(2\pi)^{k}(n_{i,0}+\kappa_{i})^{k-1}\sum_{\begin{subarray}{c}(c,d)\in\mathbb{Z}^{2}\\ (c,d)=1,ac>0\end{subarray}}c^{-k}\left(\frac{c}{a}\right)^{s}\] \[\times\bigg{(}e^{2\pi i(n_{i,0}+\kappa_{i})d/c}e^{\pi is}\chi^{-1}\left(\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\right)\rho^{-1}\left(\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\right)_{i,i}\,{}_{1}f_{1}(s,k;-2\pi in_{i,0}/(ac))\] \[\quad+e^{-2\pi i(n_{i,0}+\kappa_{i})d/c}e^{-\pi is}\chi^{-1}\left(\left(\begin{smallmatrix}-a&b\\ c&-d\end{smallmatrix}\right)\right)\rho^{-1}\left(\left(\begin{smallmatrix}-a&b\\ c&-d\end{smallmatrix}\right)\right)_{i,i}\,{}_{1}f_{1}(s,k;2\pi in_{i,0}/(ac))\bigg{)}\biggr{]}.
\tag{3.4}\] Then, the left-hand side of (3.4) is equal to \[\frac{1}{n_{i,0}+\kappa_{i}}\sum_{\nu=0}^{n}\binom{n}{\nu}\frac{d^{\nu}}{ds^{\nu}}[(2\pi(n_{i,0}+\kappa_{i}))^{s}]\frac{d^{n-\nu}}{ds^{n-\nu}}\Gamma(k-s)\] \[=\frac{(2\pi(n_{i,0}+\kappa_{i}))^{s}}{n_{i,0}+\kappa_{i}}\sum_{\nu=0}^{n}(-1)^{n-\nu}\binom{n}{\nu}(\log(2\pi(n_{i,0}+\kappa_{i})))^{\nu}\Gamma^{(n-\nu)}(k-s)\] \[=(2\pi)^{s}(n_{i,0}+\kappa_{i})^{s-1}(\log(2\pi(n_{i,0}+\kappa_{i})))^{n}\Gamma(k-s)\] \[\qquad+(2\pi)^{s}(n_{i,0}+\kappa_{i})^{s-1}\sum_{\nu=0}^{n-1}(-1)^{n-\nu}\binom{n}{\nu}(\log(2\pi(n_{i,0}+\kappa_{i})))^{\nu}\Gamma^{(n-\nu)}(k-s).\] Then, we have \[\frac{1}{(2\pi)^{s}(n_{i,0}+\kappa_{i})^{s-1}\Gamma(k-s)}\cdot\frac{d^{n}}{ds^{n}}\bigg{[}(2\pi)^{s}\Gamma(k-s)(n_{i,0}+\kappa_{i})^{s-1}\bigg{]}\] \[=(\log(2\pi(n_{i,0}+\kappa_{i})))^{n}+\sum_{\nu=0}^{n-1}(-1)^{n-\nu}\binom{n}{\nu}(\log(2\pi(n_{i,0}+\kappa_{i})))^{\nu}\frac{\Gamma^{(n-\nu)}(k-s)}{\Gamma(k-s)}.\] Let \(\psi(s):=\frac{\Gamma^{\prime}(s)}{\Gamma(s)}\). Then, one can see that \(\frac{\Gamma^{(n)}(s)}{\Gamma(s)}\) is a polynomial \(P(\psi,\psi^{(1)},\ldots,\psi^{(n-1)})\) with integral coefficients, and it contains the term \(\psi^{n}\), which is the highest power of \(\psi\) occurring in \(P\). It is known that \(\psi\) satisfies the asymptotic formulas \[\psi(s)\sim\log(s)-\frac{1}{2s}-\sum_{\nu=1}^{\infty}\frac{B_{2\nu}}{2\nu s^{2\nu}}\] and \[\psi^{(n)}(s)\sim(-1)^{n-1}\left(\frac{(n-1)!}{s^{n}}+\frac{n!}{2s^{n+1}}+\sum_{\nu=1}^{\infty}B_{2\nu}\frac{(2\nu+n-1)!}{(2\nu)!s^{2\nu+n}}\right)\] for \(s\to\infty\) in \(|\arg(s)|<\pi\), where \(B_{n}\) denotes the \(n\)th Bernoulli number (for example, see [1, 6.3.18 and 6.4.11]). Let \(s=\frac{k}{2}-\delta+it_{0}\) (\(\epsilon<\delta<\frac{1}{2}\)). Then the leading term of \(\frac{\Gamma^{(n-\nu)}(k-s)}{\Gamma(k-s)}\) for \(0\leq\nu\leq n-1\) is \((\log(\frac{k}{2}+\delta-it_{0}))^{n-\nu}\) as \(k\to\infty\), and \(\psi^{(n)}(s)=O\left(\frac{1}{|s|^{n}}\right)\) as \(|s|\to\infty\) in \(|\arg(s)|<\pi\) for \(n\in\mathbb{N}\). Therefore, we have \[\sum_{\nu=0}^{n-1}(-1)^{n-\nu}\binom{n}{\nu}(\log(2\pi(n_{i,0}+\kappa_{i})))^{\nu}\frac{\Gamma^{(n-\nu)}(k-s)}{\Gamma(k-s)}=Q\left(\log\left(\frac{k}{2}+\delta-it_{0}\right)\right)+o(1)\] as \(k\to\infty\), where \(Q\) is a polynomial of degree \(n\) whose leading coefficient is \((-1)^{n}\).
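The growth claim for \(\Gamma^{(n)}(s)/\Gamma(s)\) is easy to probe numerically. For instance, for \(n=3\) the polynomial is \(\frac{\Gamma^{(3)}(s)}{\Gamma(s)}=\psi(s)^{3}+3\psi(s)\psi^{\prime}(s)+\psi^{\prime\prime}(s)\), and the following minimal sketch (illustrative only, not part of the argument) exhibits the \((\log s)^{3}\) growth:

```python
# Illustrative check that Gamma'''(s)/Gamma(s) grows like (log s)^3,
# using the polynomial in psi and its derivatives stated above.
import mpmath as mp

def gamma3_over_gamma(s):
    p0, p1, p2 = mp.psi(0, s), mp.psi(1, s), mp.psi(2, s)
    return p0**3 + 3*p0*p1 + p2        # equals Gamma'''(s)/Gamma(s)

for s in [10, 100, 1000, 10000]:
    print(s, gamma3_over_gamma(s) / mp.log(s)**3)   # ratio tends to 1
```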
For the first term on the right-hand side of (3.4), we have \[\frac{d^{n}}{ds^{n}}\bigg{[}\chi^{-1}\left(\left(\begin{smallmatrix}0 &-1\\ 1&0\end{smallmatrix}\right)\right)\rho^{-1}\left(\left(\begin{smallmatrix}0&-1 \\ 1&0\end{smallmatrix}\right)\right)_{i,i}(-1)^{k/2}(2\pi)^{k-s}\Gamma(s)(n_{i,0}+ \kappa_{i})^{k-s-1}\bigg{]}\] \[=\chi^{-1}\left(\left(\begin{smallmatrix}0&-1\\ 1&0\end{smallmatrix}\right)\right)\rho^{-1}\left(\left(\begin{smallmatrix}0&-1 \\ 1&0\end{smallmatrix}\right)\right)_{i,i}(-1)^{k/2}\frac{(2\pi(n_{i,0}+\kappa_{i} ))^{k}}{n_{i,0}+\kappa_{i}}\sum_{\nu=0}^{n}\binom{n}{\nu}\frac{d^{\nu}}{ds^{ \nu}}[(2\pi(n_{i,0}+\kappa_{i}))^{-s}]\frac{d^{n-\nu}}{ds^{n-\nu}}\Gamma(s)\] \[=\chi^{-1}\left(\left(\begin{smallmatrix}0&-1\\ 1&0\end{smallmatrix}\right)\right)\rho^{-1}\left(\left(\begin{smallmatrix}0&-1 \\ 1&0\end{smallmatrix}\right)\right)_{i,i}(-1)^{k/2}\frac{(2\pi(n_{i,0}+\kappa_{i} ))^{k-s}}{n_{i,0}+\kappa_{i}}\] \[\qquad\times\sum_{\nu=0}^{n}(-1)^{\nu}\binom{n}{\nu}\log(2\pi(n_ {i,0}+\kappa_{i}))^{\nu}\Gamma^{(n-\nu)}(s).\] If we divide this by \((2\pi)^{s}(n_{i,0}+\kappa_{i})^{s-1}\Gamma(k-s)\), then we have \[\chi^{-1}\left(\left(\begin{smallmatrix}0&-1\\ 1&0\end{smallmatrix}\right)\right)\rho^{-1}\left(\left(\begin{smallmatrix}0&- 1\\ 1&0\end{smallmatrix}\right)\right)_{i,i}(-1)^{k/2}\frac{(2\pi(n_{i,0}+\kappa_{i} ))^{k-2s}}{(n_{i,0}+\kappa_{i})^{2}}\] \[\qquad\times\sum_{\nu=0}^{n}(-1)^{\nu}\binom{n}{\nu}\log(2\pi(n_ {i,0}+\kappa_{i}))^{\nu}\frac{\Gamma^{(n-\nu)}(s)}{\Gamma(s)}\cdot\frac{\Gamma (s)}{\Gamma(k-s)}. \tag{3.5}\] Let \(s=\frac{k}{2}-\delta+it_{0}\) (\(\epsilon<\delta<\frac{1}{2}\)). Then by [1, 6.1.23 and 6.1.47], we have \[\left|\frac{\Gamma(s)}{\Gamma(k-s)}\right|=\left|\frac{k}{2}+it_{0}\right|^{ -2\delta}\cdot\left|1+O\left(\frac{1}{|\frac{k}{2}+it_{0}|}\right)\right|,\] where the \(O\) constant is absolute, uniformly in \(\epsilon<\delta<\frac{1}{2}\). On the other hand, the highest order term in \(\frac{\Gamma^{(n-\nu)}(s)}{\Gamma(s)}\) is \(\left(\psi\left(\frac{k}{2}-\delta+it_{0}\right)\right)^{n-\nu}\). This behaves like \((\log(\frac{k}{2}-\delta+it_{0}))^{n-\nu}\) for \(0\leq\nu<n\) as \(k\to\infty\). Thus, we can see that all terms in the sum in (3.5) go to zero as \(k\to\infty\). 
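The Stirling-type ratio used here can likewise be checked numerically; in the sketch below the values of \(t_{0}\), \(\delta\), and \(k\) are arbitrary test choices.

```python
# Numerical probe of |Gamma(s)/Gamma(k-s)| ~ |k/2 + i t0|^(-2*delta)
# at s = k/2 - delta + i t0; parameter values are illustrative.
import mpmath as mp

t0, delta = mp.mpf(1), mp.mpf('0.3')
for k in [20, 80, 320, 1280]:
    s = k/mp.mpf(2) - delta + 1j*t0
    lhs = abs(mp.gamma(s) / mp.gamma(k - s))
    rhs = abs(k/mp.mpf(2) + 1j*t0)**(-2*delta)
    print(k, lhs/rhs)                  # ratio tends to 1 as k grows
```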
The second term on the right-hand side of (3.4) is equal to \[-\frac{(-1)^{k/2}}{2}(2\pi)^{k}(n_{i,0}+\kappa_{i})^{k-1}\sum_{\begin{subarray}{c}(c,d)\in\mathbb{Z}^{2}\\ (c,d)=1,ac>0\end{subarray}}c^{-k}\frac{d^{n}}{ds^{n}}\bigg{[}\left(\frac{c}{a}\right)^{s}\] \[\times\bigg{(}e^{2\pi i(n_{i,0}+\kappa_{i})d/c}e^{\pi is}\chi^{-1}\left(\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\right)\rho^{-1}\left(\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\right)_{i,i}\,{}_{1}f_{1}(s,k;-2\pi in_{i,0}/(ac))\] \[\quad+e^{-2\pi i(n_{i,0}+\kappa_{i})d/c}e^{-\pi is}\chi^{-1}\left(\left(\begin{smallmatrix}-a&b\\ c&-d\end{smallmatrix}\right)\right)\rho^{-1}\left(\left(\begin{smallmatrix}-a&b\\ c&-d\end{smallmatrix}\right)\right)_{i,i}\,{}_{1}f_{1}(s,k;2\pi in_{i,0}/(ac))\bigg{)}\bigg{]}\] \[=-\frac{(-1)^{k/2}}{2}(2\pi)^{k}(n_{i,0}+\kappa_{i})^{k-1}\sum_{\begin{subarray}{c}(c,d)\in\mathbb{Z}^{2}\\ (c,d)=1,ac>0\end{subarray}}c^{-k}\sum_{\nu=0}^{n}\binom{n}{\nu}\left(\frac{c}{a}\right)^{s}\left(\log\left(\frac{c}{a}\right)\right)^{\nu}\] \[\times\frac{d^{n-\nu}}{ds^{n-\nu}}\bigg{[}\bigg{(}e^{2\pi i(n_{i,0}+\kappa_{i})d/c}e^{\pi is}\chi^{-1}\left(\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\right)\rho^{-1}\left(\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\right)_{i,i}\,{}_{1}f_{1}(s,k;-2\pi in_{i,0}/(ac))\] \[\quad+e^{-2\pi i(n_{i,0}+\kappa_{i})d/c}e^{-\pi is}\chi^{-1}\left(\left(\begin{smallmatrix}-a&b\\ c&-d\end{smallmatrix}\right)\right)\rho^{-1}\left(\left(\begin{smallmatrix}-a&b\\ c&-d\end{smallmatrix}\right)\right)_{i,i}\,{}_{1}f_{1}(s,k;2\pi in_{i,0}/(ac))\bigg{)}\bigg{]}. \tag{3.6}\] In the above equation, the derivative in the last two lines is equal to \[\sum_{w=0}^{n-\nu}\binom{n-\nu}{w}\bigg{\{}e^{2\pi i(n_{i,0}+\kappa_{i})d/c}\chi^{-1}\left(\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\right)\rho^{-1}\left(\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\right)_{i,i}\frac{d^{w}}{ds^{w}}\big{[}e^{\pi is}\big{]}\frac{d^{n-\nu-w}}{ds^{n-\nu-w}}\big{[}{}_{1}f_{1}(s,k;-2\pi in_{i,0}/(ac))\big{]}\] \[\quad+e^{-2\pi i(n_{i,0}+\kappa_{i})d/c}\chi^{-1}\left(\left(\begin{smallmatrix}-a&b\\ c&-d\end{smallmatrix}\right)\right)\rho^{-1}\left(\left(\begin{smallmatrix}-a&b\\ c&-d\end{smallmatrix}\right)\right)_{i,i}\frac{d^{w}}{ds^{w}}\big{[}e^{-\pi is}\big{]}\frac{d^{n-\nu-w}}{ds^{n-\nu-w}}\big{[}{}_{1}f_{1}(s,k;2\pi in_{i,0}/(ac))\big{]}\bigg{\}}\] \[=\sum_{w=0}^{n-\nu}\binom{n-\nu}{w}\bigg{\{}(\pi i)^{w}e^{2\pi i(n_{i,0}+\kappa_{i})d/c}e^{\pi is}\chi^{-1}\left(\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\right)\rho^{-1}\left(\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\right)_{i,i}\frac{d^{n-\nu-w}}{ds^{n-\nu-w}}\big{[}{}_{1}f_{1}(s,k;-2\pi in_{i,0}/(ac))\big{]}\] \[\quad+(-\pi i)^{w}e^{-2\pi i(n_{i,0}+\kappa_{i})d/c}e^{-\pi is}\chi^{-1}\left(\left(\begin{smallmatrix}-a&b\\ c&-d\end{smallmatrix}\right)\right)\rho^{-1}\left(\left(\begin{smallmatrix}-a&b\\ c&-d\end{smallmatrix}\right)\right)_{i,i}\frac{d^{n-\nu-w}}{ds^{n-\nu-w}}\big{[}{}_{1}f_{1}(s,k;2\pi in_{i,0}/(ac))\big{]}\bigg{\}}.\] By [1, 13.2.1], for \(\mathrm{Re}(\beta)>\mathrm{Re}(\alpha)>0\), we have
\[{}_{1}f_{1}(\alpha,\beta;z)=\int_{0}^{1}e^{zu}u^{\alpha-1}(1-u)^{\beta-\alpha-1}du.\] Therefore, for any \(n\in\mathbb{Z}_{\geq 0}\), we obtain \[\frac{d^{n}}{ds^{n}}\left[{}_{1}f_{1}\left(s,k;\pm\frac{2\pi in_{i,0}}{ac}\right)\right]=\int_{0}^{1}e^{\pm\frac{2\pi in_{i,0}}{ac}u}\frac{d^{n}}{ds^{n}}\left[u^{s-1}(1-u)^{k-s-1}\right]du\] \[=\int_{0}^{1}e^{\pm\frac{2\pi in_{i,0}}{ac}u}\left(\sum_{j=0}^{n}(-1)^{n-j}\binom{n}{j}(\log(u))^{j}(\log(1-u))^{n-j}\right)u^{s-1}(1-u)^{k-s-1}du.\] Since \(\log(u)=o(u^{-\epsilon^{\prime}})\) for any \(\epsilon^{\prime}>0\) as \(u\to 0\), we see that \[\left|\frac{d^{n}}{ds^{n}}\left[{}_{1}f_{1}\left(s,k;\pm\frac{2\pi in_{i,0}}{ac}\right)\right]\right|\leq K_{n},\] where \(K_{n}\) is a constant depending only on \(n\). Let \(s=\frac{k}{2}-\delta+it_{0}\) (\(\epsilon<\delta<\frac{1}{2}\)). Then, the series in (3.6) is \[\ll\sum_{a=1}^{\infty}\sum_{c=1}^{\infty}a^{-\frac{k}{2}+\delta}c^{-\frac{k}{2}-\delta}\bigg{(}2\left|\log\left(\frac{c}{a}\right)\right|^{n}e^{\pi|t_{0}|}K_{0}+2e^{\pi|t_{0}|}\sum_{\nu=0}^{n-1}\binom{n}{\nu}\left|\log\left(\frac{c}{a}\right)\right|^{\nu}\sum_{w=0}^{n-\nu}\binom{n-\nu}{w}\left(\frac{\pi}{2}\right)^{w}K_{n-\nu-w}\bigg{)}.\] This can be estimated in terms of the Riemann zeta function and a positive constant factor \(B(t_{0},n)\) depending only on \(t_{0}\) and \(n\). If we divide the second term on the right-hand side of (3.4) by \((2\pi)^{s}(n_{i,0}+\kappa_{i})^{s-1}\Gamma(k-s)\), then the absolute value is \[\ll\frac{(2\pi(n_{i,0}+\kappa_{i}))^{\frac{k}{2}+\delta}}{\Gamma(\frac{k}{2}+\delta-it_{0})}B(t_{0},n),\] and this goes to \(0\) as \(k\to\infty\), uniformly in \(\delta\in(\epsilon,\frac{1}{2})\), by Stirling's formula. In conclusion, if we divide both sides of (3.4) by \((2\pi)^{s}(n_{i,0}+\kappa_{i})^{s-1}\Gamma(k-s)\), then the right-hand side goes to zero as \(k\to\infty\), but the absolute value of the left-hand side is \[\gg\left|\log\left(\frac{k}{2}+\delta-it_{0}\right)\right|^{n}\] as \(k\to\infty\). This is a contradiction. By using the functional equation of \(L^{*}(f,s)\) (\(f\in S_{k,\chi,\rho}\)), we obtain the following corollary. **Corollary 3.2**.: _Let \(k\), \(\chi\), and \(\{f_{k,1},\ldots,f_{k,g_{k}}\}\) be as in Theorem 3.1. Let \(t_{0}\in\mathbb{R},\epsilon>0\), and \(n\) a positive integer. Then, for \(k>C(t_{0},\epsilon,n)\) and any \(s=\sigma+it\) with \(t=t_{0}\) and \(\frac{k-1}{2}<\sigma<\frac{k}{2}-\epsilon\) or \(\frac{k}{2}+\epsilon<\sigma<\frac{k+1}{2}\), there exists \(f_{k,l}\) such that \(\frac{d^{n}}{ds^{n}}L^{*}(f_{k,l},s)\neq 0\)._ ## 4. The case of \(\Gamma_{0}(N)\) Now, we consider the case of an elliptic modular form of integral weight on the congruence subgroup \(\Gamma_{0}(N)\). By using Theorem 3.1, we can extend a result in [8] to the case of \(\Gamma_{0}(N)\). To illustrate, let \(N\) be a positive integer and let \(k\) be a positive even integer. Let \(\Gamma=\Gamma_{0}(N)\) and let \(S_{k}(\Gamma)\) be the space of cusp forms of weight \(k\) on \(\Gamma\). Let \(\{\gamma_{1},\ldots,\gamma_{m}\}\) be a set of representatives of \(\Gamma\setminus\operatorname{SL}_{2}(\mathbb{Z})\) with \(\gamma_{1}=I\).
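For concreteness, the permutation action of \(\operatorname{SL}_{2}(\mathbb{Z})\) on the cosets \(\Gamma_{0}(N)\setminus\operatorname{SL}_{2}(\mathbb{Z})\), which underlies the representation \(\rho\) constructed next, can be computed directly by identifying a coset with the bottom row \((c:d)\) of a representative, viewed as a point of \(\mathbb{P}^{1}(\mathbb{Z}/N)\). Below is a minimal sketch for \(N=2\); the helper names are ours, not the paper's.

```python
# Permutation action of SL_2(Z) on Gamma_0(N)\SL_2(Z) for N = 2,
# via the identification coset of (a b; c d)  <->  (c : d) in P^1(Z/N).
N = 2

def proj_point(c, d):
    # normalize (c : d) in P^1(Z/N) for N prime: scale the last
    # nonzero coordinate to 1
    c, d = c % N, d % N
    if d != 0:
        inv = pow(d, -1, N)
        return ((c * inv) % N, 1)
    return (1, 0)

cosets = sorted({proj_point(c, d) for c in range(N) for d in range(N)
                 if (c, d) != (0, 0)})
print(cosets)                        # three cosets for N = 2

def act(mat, pt):
    # right action: (c : d) -> (c : d) * mat, as a row vector
    a, b, c, d = mat
    return proj_point(pt[0]*a + pt[1]*c, pt[0]*b + pt[1]*d)

S = (0, -1, 1, 0)
T = (1, 1, 0, 1)
for g in (S, T):
    print([cosets.index(act(g, p)) for p in cosets])   # permutations of {0,1,2}
```

The printed index lists are exactly the permutation matrices \(\rho(S)\) and \(\rho(T)\) in the construction that follows.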
For \(f\in S_{k}(\Gamma)\), we define a vector-valued function \(\tilde{f}:\mathbb{H}\to\mathbb{C}^{m}\) by \(\tilde{f}=\sum_{j=1}^{m}f_{j}\mathbf{e}_{j}\) and \[f_{j}=f|_{k}\gamma_{j}\ (1\leq j\leq m),\] where \((f|_{k}\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right))(z):=(cz+d)^{-k}f(\gamma z)\). Then, \(\tilde{f}\) is a vector-valued modular form of weight \(k\) and trivial multiplier system with respect to \(\rho\) on \(\operatorname{SL}_{2}(\mathbb{Z})\), where \(\rho\) is a certain \(m\)-dimensional unitary complex representation such that \(\rho(\gamma)\) is a permutation matrix for each \(\gamma\in\operatorname{SL}_{2}(\mathbb{Z})\) and is the identity matrix if \(\gamma\in\Gamma\). Then, the map \(f\mapsto\tilde{f}\) induces an isomorphism between \(S_{k}(\Gamma)\) and \(S_{k,\rho}\), where \(S_{k,\rho}\) denotes the space of vector-valued cusp forms of weight \(k\) and trivial multiplier system with respect to \(\rho\) on \(\operatorname{SL}_{2}(\mathbb{Z})\). For \(\tilde{f},\tilde{g}\in S_{k,\rho}\), we define a Petersson inner product by \[(\tilde{f},\tilde{g}):=\int_{\mathcal{F}}<\tilde{f},\tilde{g}>y^{k}\frac{dxdy}{y^{2}}.\] Note that if \(f,g\in S_{k}(\Gamma)\) are orthogonal, then \(\tilde{f}\) and \(\tilde{g}\) are also orthogonal. **Corollary 4.1**.: _Let \(k\) be a positive even integer with \(k>2\). Let \(N\) and \(n\) be positive integers and \(\Gamma=\Gamma_{0}(N)\). Let \(\{f_{k,1},\ldots,f_{k,e_{k}}\}\) be an orthogonal basis of \(S_{k}(\Gamma)\). Let \(t_{0}\in\mathbb{R},\epsilon>0\). Then, there exists a constant \(C(t_{0},\epsilon,n)>0\) such that for \(k>C(t_{0},\epsilon,n)\) there exists a basis element \(f_{k,l}\in S_{k}(\Gamma)\) satisfying_ \[\frac{d^{n}}{ds^{n}}L^{*}(\widetilde{f_{k,l}},s)\neq 0\] _at any point \(s=\sigma+it_{0}\) with_ \[\frac{k-1}{2}<\sigma<\frac{k}{2}-\epsilon\ \ and\ \ \frac{k}{2}+\epsilon<\sigma<\frac{k+1}{2}.\] ## 5. The case of Jacobi forms We now consider the case of Jacobi forms. Let \(k\) be a positive even integer and \(m\) a positive integer. From now on, we use the notation \(\tau=u+iv\in\mathbb{H}\) and \(z=x+iy\in\mathbb{C}\). We review basic notions of Jacobi forms (for more details, see [5, Section 3.1] and [7, Section 5]). Let \(F\) be a complex-valued function on \(\mathbb{H}\times\mathbb{C}\). For \(\gamma=\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\in\mathrm{SL}_{2}(\mathbb{Z})\) and \(X=(\lambda,\mu)\in\mathbb{Z}^{2}\), we define \[(F|_{k,m}\gamma)(\tau,z):=(c\tau+d)^{-k}e^{-2\pi im\frac{cz^{2}}{c\tau+d}}F(\gamma(\tau,z))\] and \[(F|_{m}X)(\tau,z):=e^{2\pi im(\lambda^{2}\tau+2\lambda z)}F(\tau,z+\lambda\tau+\mu),\] where \(\gamma(\tau,z)=(\frac{a\tau+b}{c\tau+d},\frac{z}{c\tau+d})\). We now give the definition of a Jacobi form. **Definition 5.1**.: _A Jacobi form of weight \(k\) and index \(m\) on \(\mathrm{SL}_{2}(\mathbb{Z})\) is a holomorphic function \(F\) on \(\mathbb{H}\times\mathbb{C}\) satisfying_ 1. \(F|_{k,m}\gamma=F\) _for every_ \(\gamma\in\mathrm{SL}_{2}(\mathbb{Z})\)_,_ 2. \(F|_{m}X=F\) _for every_ \(X\in\mathbb{Z}^{2}\)_,_ 3. \(F\) _has a Fourier expansion of the form_ (5.1) \[F(\tau,z)=\sum_{\begin{subarray}{c}l,r\in\mathbb{Z}\\ 4ml-r^{2}\geq 0\end{subarray}}a(l,r)e^{2\pi il\tau}e^{2\pi irz}.\] We denote by \(J_{k,m}\) the space of all Jacobi forms of weight \(k\) and index \(m\) on \(\mathrm{SL}_{2}(\mathbb{Z})\). If a Jacobi form satisfies the condition that \(a(l,r)\neq 0\) only if \(4ml-r^{2}>0\), then it is called a Jacobi cusp form.
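The elliptic invariance in condition (2) is carried by the theta series \(\theta_{m,j}\) appearing in the theta decomposition below, which satisfy \(e^{2\pi im(\lambda^{2}\tau+2\lambda z)}\theta_{m,j}(\tau,z+\lambda\tau+\mu)=\theta_{m,j}(\tau,z)\). The following minimal numerical sketch checks this with truncated series; the values of \(m\), \(j\), \(\tau\), \(z\) are arbitrary test choices.

```python
# Truncated-series check that theta_{m,j} is invariant under the |_m
# action of X = (lambda, mu) in Z^2 from Definition 5.1(2).
import mpmath as mp

def theta(m, j, tau, z, N=60):
    # Fourier sum over r = j (mod 2m), truncated to |r| <= N
    return mp.fsum(mp.exp(2*mp.pi*1j*(r*r*tau/(4*m) + r*z))
                   for r in range(-N, N + 1) if (r - j) % (2*m) == 0)

m, j = 2, 1
tau, z = mp.mpc(0.1, 1.3), mp.mpc(0.2, 0.1)
lam, mu = 1, 1
lhs = mp.exp(2*mp.pi*1j*m*(lam*lam*tau + 2*lam*z)) * theta(m, j, tau, z + lam*tau + mu)
print(abs(lhs - theta(m, j, tau, z)))   # ~ 0, up to truncation error
```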
We denote by \(S_{k,m}\) the space of all Jacobi cusp forms of weight \(k\) and index \(m\) on \(\mathrm{SL}_{2}(\mathbb{Z})\). Let \(F\in S_{k,m}\) be a Jacobi cusp form with Fourier expansion (5.1). We define the partial \(L\)-functions of \(F\) by \[L(F,j,s):=\sum_{\begin{subarray}{c}n\in\mathbb{Z}\\ n+j^{2}\equiv 0\pmod{4m}\end{subarray}}\frac{a\left(\frac{n+j^{2}}{4m},j\right)}{\left(\frac{n}{4m}\right)^{s}}\] for \(1\leq j\leq 2m\). Moreover, \(F\) can be written as \[F(\tau,z)=\sum_{1\leq j\leq 2m}F_{j}(\tau)\theta_{m,j}(\tau,z) \tag{5.2}\] with uniquely determined holomorphic functions \(F_{j}:\mathbb{H}\to\mathbb{C}\), and the functions \(F_{j}\) (\(1\leq j\leq 2m\)) have the Fourier expansions \[F_{j}(\tau)=\sum_{\begin{subarray}{c}n\geq 0\\ n+j^{2}\equiv 0\pmod{4m}\end{subarray}}a\left(\frac{n+j^{2}}{4m},j\right)e^{2\pi in\tau/(4m)},\] where the theta series \(\theta_{m,j}\) is defined by \[\theta_{m,j}(\tau,z):=\sum_{\begin{subarray}{c}r\in\mathbb{Z}\\ r\equiv j\pmod{2m}\end{subarray}}e^{2\pi ir^{2}\tau/(4m)}e^{2\pi irz}\] for \(1\leq j\leq 2m\). We write \(\operatorname{Mp}_{2}(\mathbb{R})\) for the metaplectic group. The elements of \(\operatorname{Mp}_{2}(\mathbb{R})\) are pairs \((\gamma,\phi(\tau))\), where \(\gamma=\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\in\operatorname{SL}_{2}(\mathbb{R})\) and \(\phi\) denotes a holomorphic function on \(\mathbb{H}\) with \(\phi(\tau)^{2}=c\tau+d\). The map \[\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\mapsto\widehat{\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)}=\left(\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right),\sqrt{c\tau+d}\right)\] defines a locally isomorphic embedding of \(\operatorname{SL}_{2}(\mathbb{R})\) into \(\operatorname{Mp}_{2}(\mathbb{R})\). Let \(\operatorname{Mp}_{2}(\mathbb{Z})\) be the inverse image of \(\operatorname{SL}_{2}(\mathbb{Z})\) under the covering map \(\operatorname{Mp}_{2}(\mathbb{R})\to\operatorname{SL}_{2}(\mathbb{R})\). It is well-known that \(\operatorname{Mp}_{2}(\mathbb{Z})\) is generated by \(\widetilde{T}\) and \(\widetilde{S}\), the lifts of \(T=\left(\begin{smallmatrix}1&1\\ 0&1\end{smallmatrix}\right)\) and \(S=\left(\begin{smallmatrix}0&-1\\ 1&0\end{smallmatrix}\right)\). We define a \(2m\)-dimensional unitary complex representation \(\widetilde{\rho}_{m}\) of \(\operatorname{Mp}_{2}(\mathbb{Z})\) by \[\widetilde{\rho}_{m}(\widetilde{T})\mathbf{e}_{j}=e^{-2\pi ij^{2}/(4m)}\mathbf{e}_{j}\] and \[\widetilde{\rho}_{m}(\widetilde{S})\mathbf{e}_{j}=\frac{i^{\frac{1}{2}}}{\sqrt{2m}}\sum_{j^{\prime}=1}^{2m}e^{2\pi ijj^{\prime}/(2m)}\mathbf{e}_{j^{\prime}}.\] Let \(\chi\) be a multiplier system of weight \(\frac{1}{2}\) on \(\operatorname{SL}_{2}(\mathbb{Z})\). We define a map \(\rho_{m}:\operatorname{SL}_{2}(\mathbb{Z})\to\operatorname{GL}_{2m}(\mathbb{C})\) by \[\rho_{m}(\gamma)=\chi(\gamma)\widetilde{\rho}_{m}(\widetilde{\gamma})\] for \(\gamma\in\operatorname{SL}_{2}(\mathbb{Z})\). The map \(\rho_{m}\) gives a \(2m\)-dimensional unitary representation of \(\operatorname{SL}_{2}(\mathbb{Z})\). Let \(\{\mathbf{e}_{1},\dots,\mathbf{e}_{2m}\}\) denote the standard basis of \(\mathbb{C}^{2m}\). For \(F\in S_{k,m}\), we define a vector-valued function \(\tilde{F}:\mathbb{H}\to\mathbb{C}^{2m}\) by \(\tilde{F}=\sum_{j=1}^{2m}F_{j}\mathbf{e}_{j}\), where \(F_{j}\) is defined by the theta expansion in (5.2). Then, the map \(F\mapsto\tilde{F}\) induces an isomorphism between \(S_{k,m}\) and \(S_{k-\frac{1}{2},\tilde{\chi},\rho_{m}}\). Let \(L^{*}(F,j,s):=\frac{\Gamma(s)}{(2\pi)^{s}}L(F,j,s)\). Then, we have the following corollary.
**Corollary 5.2**.: _Let \(k\) be a positive even integer with \(k>2\). Let \(m\) and \(n\) be positive integers. Let \(\{F_{k,m,1},\dots,F_{k,m,d}\}\) be an orthogonal basis of \(S_{k,m}\). Let \(t_{0}\in\mathbb{R}\) and \(\epsilon>0\)._ 1. _Let_ \(j\) _be a positive integer with_ \(1\leq j\leq 2m\)_. Then, there exists a constant_ \(C(t_{0},\epsilon,j,n)>0\) _such that for any_ \(k>C(t_{0},\epsilon,j,n)\)_, and any_ \(s=\sigma+it_{0}\) _with_ \[\frac{2k-3}{4}<\sigma<\frac{2k-1}{4}-\epsilon,\] _there exists a basis element_ \(F_{k,m,l}\in S_{k,m}\) _such that_ \[\frac{d^{n}}{ds^{n}}L^{*}(F_{k,m,l},j,s)\neq 0.\] 2. _There exists a constant_ \(C(t_{0},\epsilon,n)>0\) _such that for any_ \(k>C(t_{0},\epsilon,n)\)_, and any_ \(s=\sigma+it_{0}\) _with_ \[\frac{2k-3}{4}<\sigma<\frac{2k-1}{4}-\epsilon\ \ \text{and}\ \ \frac{2k-1}{4}+\epsilon<\sigma<\frac{2k+1}{4},\] _there exist a basis element_ \(F_{k,m,l}\in S_{k,m}\) _and_ \(j\in\{1,\ldots,2m\}\) _such that_ \[\frac{d^{n}}{ds^{n}}L^{*}(F_{k,m,l},j,s)\neq 0.\] **Remark 5.3**.: _Note that \(\rho_{m}(-I)\) is not equal to the identity matrix in \(\operatorname{GL}_{2m}(\mathbb{C})\). Instead, we have_ \[\rho_{m}(-I)\mathbf{e}_{j}=i\mathbf{e}_{2m-j}.\] _By a similar argument, we can prove the same result as in Theorem 3.1 for the representation \(\rho_{m}\)._ ## 6. The case of Kohnen's plus space Let \(k\) be a positive even integer. By [7, Theorem 5.4], there is an isomorphism \(\phi\) between \(S_{k,1}\) and \(S_{k-\frac{1}{2}}^{+}\), where \(S_{k-\frac{1}{2}}^{+}\) denotes the space of cusp forms in the plus space of weight \(k-\frac{1}{2}\) on \(\Gamma_{0}(4)\). Moreover, this isomorphism is compatible with the Petersson scalar products. Let \(f\) be a cusp form in \(S_{k-\frac{1}{2}}^{+}\) with Fourier expansion \(f(\tau)=\sum_{n\equiv 0,3\pmod{4}}c(n)e^{2\pi in\tau}\). Then, the \(L\)-function of \(f\) is defined by \[L(f,s):=\sum_{\begin{subarray}{c}n>0\\ n\equiv 0,3\pmod{4}\end{subarray}}\frac{c(n)}{n^{s}}.\] For \(1\leq j\leq 2\), let \(c_{j}\) be defined by \[c_{j}(n):=\begin{cases}c(n)&\text{if }n\equiv-j^{2}\pmod{4},\\ 0&\text{otherwise}.\end{cases}\] Then, \(c(n)=c_{1}(n)+c_{2}(n)\) for all \(n\). With this, we consider the partial sums of \(L(f,s)\) given by \[L(f,j,s):=\sum_{\begin{subarray}{c}n>0\\ n\equiv 0,3\pmod{4}\end{subarray}}\frac{c_{j}(n)}{n^{s}}\] for \(1\leq j\leq 2\). Suppose that \(F\) is a Jacobi cusp form in \(S_{k,1}\). By the theta expansion in (5.2), we have a corresponding vector-valued modular form \((F_{1}(\tau),F_{2}(\tau))\). Then, the isomorphism \(\phi\) from \(S_{k,1}\) to \(S_{k-\frac{1}{2}}^{+}\) is given by \[\phi(F)=\sum_{j=1}^{2}F_{j}(4\tau).\] From this, we see that \[L(f,j,s)=\frac{1}{4^{s}}L(F,j,s).\] We have the following corollary regarding the partial sums of \(L(f,s)\) for \(f\in S_{k-\frac{1}{2}}^{+}\). **Corollary 6.1**.: _Let \(k\) be a positive even integer with \(k>2\). Let \(n\) be a positive integer. Let \(\{f_{k-\frac{1}{2},1},\dots,f_{k-\frac{1}{2},d}\}\) be an orthogonal basis of \(S^{+}_{k-\frac{1}{2}}\). Let \(t_{0}\in\mathbb{R}\) and \(\epsilon>0\)._ 1. _Let_ \(j\) _be a positive integer with_ \(1\leq j\leq 2\)_. Then, there exists a constant_ \(C(t_{0},\epsilon,j,n)>0\) _such that for any_ \(k>C(t_{0},\epsilon,j,n)\)_, and any_ \(s=\sigma+it_{0}\) _with_ \[\frac{2k-3}{4}<\sigma<\frac{2k-1}{4}-\epsilon,\] _there exists a basis element_ \(f_{k-\frac{1}{2},l}\in S^{+}_{k-\frac{1}{2}}\) _such that_ \[\frac{d^{n}}{ds^{n}}\left[4^{s}L^{*}(f_{k-\frac{1}{2},l},j,s)\right]\neq 0.\] 2.
_There exists a constant_ \(C(t_{0},\epsilon,n)>0\) _such that for any_ \(k>C(t_{0},\epsilon,n)\)_, and any_ \(s=\sigma+it_{0}\) _with_ \[\frac{2k-3}{4}<\sigma<\frac{2k-1}{4}-\epsilon\ \ \text{and}\ \ \frac{2k-1}{4}+\epsilon<\sigma<\frac{2k+1}{4},\] _there exist a basis element_ \(f_{k-\frac{1}{2},l}\in S^{+}_{k-\frac{1}{2}}\) _and_ \(j\in\{1,2\}\) _such that_ \[\frac{d^{n}}{ds^{n}}\left[4^{s}L^{*}(f_{k-\frac{1}{2},l},j,s)\right]\neq 0.\]
2306.08493
A Note on Twisted Moments of Dirichlet $L$-functions
In this paper, we establish an asymptotic formula for the twisted second moments of Dirichlet $L$-functions with one and two twists when averaged over all primitive Dirichlet characters of modulus $R$, where $R$ is a monic polynomial in $\mathbb{F}_q[T]$. The main result in this paper generalizes the work of Djankovi\'c [`The reciprocity law for the twisted second moment of Dirichlet $L$-functions over rational function fields', Bull. Aust. Math. Soc. 98 (2018), no. 3, 382--388].
J. C. Andrade, J. MacMillan
2023-06-14T13:15:23Z
http://arxiv.org/abs/2306.08493v1
# A note on twisted moments of Dirichlet \(L\)-functions ###### Abstract. In this paper, we establish an asymptotic formula for the twisted second moments of Dirichlet \(L\)-functions with one and two twists when averaged over all primitive Dirichlet characters of modulus \(R\), where \(R\) is a monic polynomial in \(\mathbb{F}_{q}[T]\). The main result in this paper generalizes the work of Djankovic ['The reciprocity law for the twisted second moment of Dirichlet \(L\)-functions over rational function fields', Bull. Aust. Math. Soc. **98** (2018), no. 3, 382-388]. Key words and phrases: twisted moments, Dirichlet \(L\)-functions, function fields 2010 Mathematics Subject Classification: Primary 11M38; Secondary 11M06, 11T06, 11T55 ## 1. Introduction It is well-known that the study of moments of the Riemann zeta-function and of \(L\)-functions is an important topic in analytic number theory. It can even be argued that a great part of the research in analytic number theory in the last century has been guided and motivated by this topic. Applications of moments of \(L\)-functions appear most notably in connection with the Lindelof hypothesis, but also in the study of proportions of zeros satisfying the Riemann hypothesis and of non-vanishing at the central point of families of \(L\)-functions. For some of these applications it is important to understand not only the moments of \(L\)-functions but also what are known as _twisted moments_. Let \(\chi\) be a Dirichlet character modulo \(p\), where \(p\) is a prime number. The problem is then to obtain a formula for \[\mathcal{S}(p,h):=\sideset{}{{}^{*}}{\sum}_{\chi(\bmod p)}\left|L\left(\frac{1}{2},\chi\right)\right|^{2}\chi(h), \tag{1.1}\] where \(h\) is a fixed prime number and the \(*\) indicates a summation over all primitive Dirichlet characters modulo \(p\). With this notation, Conrey [3, Theorem 10] proved the following. **Theorem 1.1** (Conrey [3]).: _For primes \(p,h\) with \(2\leq h<p\) we have that_ \[\mathcal{S}(p,h)=\frac{p^{1/2}}{h^{1/2}}\mathcal{S}(h,-p)+\frac{p}{h^{1/2}}\left(\log\frac{p}{h}+\gamma-\log(8\pi)\right)+\zeta\left(\frac{1}{2}\right)^{2}p^{1/2}+O\left(h+\log p+\frac{p^{1/2}}{h^{1/2}}\log p\right),\] _where \(\gamma\) is Euler's constant and \(\zeta\) is the Riemann zeta-function._ In [11], Young extended Conrey's result as follows. **Theorem 1.2** (Young [11]).: _For primes \(p,h\) with \(h<p^{1-\varepsilon}\), we have that_ \[\frac{p^{1/2}}{\varphi(p)}\mathcal{S}(p,h)-\frac{h^{1/2}}{\varphi(h)}\mathcal{S}(h,-p)=\frac{p^{1/2}}{h^{1/2}}\left(\log\frac{p}{h}+\gamma-\log(8\pi)\right)+\zeta\left(\frac{1}{2}\right)^{2}\left(1-2\frac{p^{1/2}}{\varphi(p)}(1-p^{-1/2})+2\frac{h^{1/2}}{\varphi(h)}(1-h^{-1/2})\right)+\mathcal{E}(p,h),\] _where \(\varphi(p)\) is Euler's totient function and_ \[\mathcal{E}(p,h)\ll hp^{-1-\varepsilon}+h^{-C},\] _for all fixed \(\varepsilon,C>0\)._ Advancing the study of twisted moments of Dirichlet \(L\)-functions, Bettin [2] showed that the error term \(\mathcal{E}(p,h)\) can be extended to a continuous function with respect to the real topology. In his work, Bettin extended the known reciprocity results for twisted moments by establishing an exact formula with shifts. Another related problem, which can be seen as a generalization of (1.1), is to obtain asymptotic formulas for the twisted moments of Dirichlet \(L\)-functions with _two_ twists.
In other words, the aim is to study \[M_{\pm}(h,k;p):=\frac{p^{1/2}}{\varphi(p)}\sideset{}{{}^{*}}{\sum}_{\chi(\bmod p )}\left|L\left(\frac{1}{2},\chi\right)\right|^{2}\chi(h)\overline{\chi}(k), \tag{1.2}\] where \(h,p\) and \(k\) are prime numbers. The quantity (1.2) was first studied by Selberg [8]. In [2], Bettin improved Selberg's result on the second moment of Dirichlet \(L\)-functions with two twists. **Theorem 1.3** (Bettin [2]).: _Let \(h,k\) and \(p\) be distinct prime numbers such that \(p\geq 4hk\). Then_ \[M_{\pm}(h,k;p)= \pm M_{\pm}(h,p;k)\pm M_{\pm}(k,p;h)\] \[+\frac{1}{2}\left(\frac{p}{hk}\right)^{1/2}\left(\log\frac{p}{hk} +\gamma-\log(8\pi)\mp\frac{\pi}{2}\right)+O(\log p).\] More recently, there have been some interesting developments on the study of twisted second moments of Dirichlet \(L\)-functions over rational function fields. Let \(q\) be a power of an odd prime number and \(\mathbb{A}=\mathbb{F}_{q}[T]\) the polynomials with coefficients in the finite field \(\mathbb{F}_{q}\). In this setting, Djankovic [4] proved the following. **Theorem 1.4** (Djankovic [4]).: _Let \(P,H\) be irreducible polynomials in \(\mathbb{F}_{q}[T]\) and_ \[\mathcal{S}(P,H):=\sideset{}{{}^{*}}{\sum}_{\chi(\bmod P)}\left|L\left(\frac {1}{2},\chi\right)\right|^{2}\chi(H).\] _If \(H\neq P\) and \(\deg(H)\leq\deg(P)\) then_ \[\frac{|P|^{1/2}}{\phi(P)} \mathcal{S}(P,H)-\frac{|H|^{1/2}}{\phi(H)}\mathcal{S}(H,-P)=\frac {|P|^{1/2}}{|H|^{1/2}}\left(\deg(P)-\deg(H)-\zeta_{\mathbb{A}}\left(\frac{1}{ 2}\right)^{2}\right)\] \[+\zeta_{\mathbb{A}}\left(\frac{1}{2}\right)^{2}\left(1-2\frac{|P |^{1/2}}{\phi(P)}(1-|P|^{-1/2})+2\frac{|H|^{1/2}}{\phi(H)}(1-|H|^{-1/2}) \right),\] _where \(L(s,\chi)\) is the Dirichlet \(L\)-function in function fields associated to the Dirichlet character \(\chi\) modulo \(P\), with \(\zeta_{\mathbb{A}}(s)\) being the zeta-function for \(\mathbb{F}_{q}[T]\), \(\phi(P)\) the Euler's totient function for polynomials and \(|P|=q^{\deg(P)}\) denotes the norm of a polynomial \(P\) in \(\mathbb{F}_{q}[T]\)._ In a recent work, Djankovic, Dokic and Lelas [5] have established a function field analogue of Bettin's result about twisted second moments of Dirichlet \(L\)-functions with two twists. If we let \(H,K\) and \(Q\) be monic irreducible polynomials in \(\mathbb{F}_{q}[T]\) and restrict the sum further to be over all even or odd Dirichlet characters modulo \(Q\) then the problem is to establish an asymptotic formula for \[\mathcal{S}^{\pm}(Q;H,K)=\frac{|Q|^{\frac{1}{2}}}{\phi^{\pm}(Q)}\sideset{}{{} ^{*}}{\sum}_{\chi(\bmod Q)}\left|L\left(\frac{1}{2},\chi\right)\right|^{2} \chi(H)\bar{\chi}(K), \tag{1.3}\] where \(\phi^{\pm}(Q)\) denotes the number of even or odd Dirichlet characters modulo \(Q\). Motivated by the methods of Bettin [2], Djankovic, Dokic and Lelas [5] established a triple reciprocity formula involving \(\mathcal{S}^{-}(Q;H,K)\), \(\mathcal{S}^{-}(H;K,-Q)\) and \(\mathcal{S}^{-}(K;H,-Q)\) and involving \(\mathcal{S}^{+}(Q;H,K)\), \(\mathcal{S}^{+}(H;K,Q)\) and \(\mathcal{S}^{+}(K;H,Q)\). In particular, they proved the following results. **Theorem 1.5** (Djankovic, Dokic and Lelas [5]).: _Let \(H\), \(K\) and \(Q\) be distinct monic irreducible polynomials in \(\mathbb{F}_{q}[T]\) such that \(\deg(H)+\deg(K)\leq\deg(Q)\). 
Then we have the following triple reciprocity formulas:_ \[\mathcal{S}^{-}(Q;H,K)=\mathcal{S}^{-}(H;K,-Q)+\mathcal{S}^{-}(K;H,-Q)+\frac{|Q|^{\frac{1}{2}}}{|HK|^{\frac{1}{2}}}\left(\deg(Q)-\deg(H)-\deg(K)\right)\] _and_ \[\mathcal{S}^{+}(Q;H,K)=\mathcal{S}^{+}(H;K,Q)+\mathcal{S}^{+}(K;H,Q)+\frac{|Q|^{\frac{1}{2}}}{|HK|^{\frac{1}{2}}}\left(\deg(Q)-\deg(H)-\deg(K)-\zeta_{\mathbb{A}}\left(\frac{1}{2}\right)^{2}(q-1)\right)-2\zeta_{\mathbb{A}}\left(\frac{1}{2}\right)^{2}\left(\frac{|Q|^{\frac{1}{2}}-1}{\phi^{+}(Q)}-\frac{|H|^{\frac{1}{2}}-1}{\phi^{+}(H)}-\frac{|K|^{\frac{1}{2}}-1}{\phi^{+}(K)}\right).\] The aim of this note is to extend the above results of Djankovic and of Djankovic, Dokic and Lelas. In their work they only consider Dirichlet characters modulo a monic irreducible polynomial, i.e., they only prove results for prime moduli. In this note we establish results for general moduli. In particular, we prove the following. **Theorem 1.6**.: _Let \(H\) and \(R\) be monic polynomials in \(\mathbb{F}_{q}[T]\) with \(\deg(H)<\deg(R)\). Then_ \[\frac{1}{\phi^{*}(R)}\underset{\chi(\text{mod }R)}{\sum^{*}}\left|L\left(\frac{1}{2},\chi\right)\right|^{2}\chi(H)=|H|^{\frac{1}{2}}\frac{\phi(R)}{|R|}\deg(HR)+O\left(|H|^{\frac{1}{2}}\log\omega(R)\right), \tag{1.4}\] _where \(\omega(R)\) is the number of distinct prime factors of \(R\), \(\phi^{*}(R)\) denotes the number of primitive Dirichlet characters modulo \(R\), and the \(*\) indicates a summation over all primitive Dirichlet characters modulo \(R\)._ And for the two twists we have the following. **Theorem 1.7**.: _Let \(H\), \(K\) and \(R\) be monic polynomials in \(\mathbb{F}_{q}[T]\) with \(\deg(H)+\deg(K)<\deg(R)\). Then_ \[\frac{1}{\phi^{*}(R)}\underset{\chi(\text{mod }R)}{\sum^{*}}\left|L\left(\frac{1}{2},\chi\right)\right|^{2}\chi(H)\bar{\chi}(K)=|HK|^{\frac{1}{2}}\frac{\phi(R)}{|R|}\deg(HKR)+O\left(|HK|^{\frac{1}{2}}\log\omega(R)\right). \tag{1.5}\] ## 2. A short overview of Dirichlet \(L\)-functions over function fields In this section, we give a short overview of Dirichlet \(L\)-functions in function fields; most of these facts are stated in [7]. Let \(\mathbb{F}_{q}\) denote a finite field with \(q\) elements, where \(q\) is a power of an odd prime, and let \(\mathbb{A}=\mathbb{F}_{q}[T]\) be its polynomial ring. Furthermore, we denote by \(\mathbb{A}^{+}\), \(\mathbb{A}^{+}_{n}\) and \(\mathbb{A}^{+}_{\leq n}\) the set of all monic polynomials in \(\mathbb{A}\), the set of all monic polynomials in \(\mathbb{A}\) of degree \(n\), and the set of all monic polynomials in \(\mathbb{A}\) of degree at most \(n\), respectively. For \(f\in\mathbb{A}\), the norm of \(f\), \(|f|\), is defined to be \(q^{\deg(f)}\), and \(\phi(f)\), \(\mu(f)\) and \(\omega(f)\) denote the Euler totient function for \(\mathbb{A}\), the Mobius function for \(\mathbb{A}\), and the number of distinct prime factors of \(f\), respectively. For \(\Re(s)>1\), the zeta function for \(\mathbb{A}\) is defined as \[\zeta_{\mathbb{A}}(s)=\sum_{f\in\mathbb{A}^{+}}\frac{1}{|f|^{s}}=\prod_{P}\left(1-\frac{1}{|P|^{s}}\right)^{-1}, \tag{2.1}\] where the product is over all monic irreducible polynomials in \(\mathbb{A}\). Since there are \(q^{n}\) monic polynomials of degree \(n\) in \(\mathbb{A}\), we have \[\zeta_{\mathbb{A}}(s)=\frac{1}{1-q^{1-s}}.\] **Definition 2.1**.: Let \(R\in\mathbb{A}^{+}\). Then a Dirichlet character modulo \(R\) is defined to be a function \(\chi:\mathbb{A}\to\mathbb{C}\) which satisfies the following properties: 1.
\(\chi(AB)=\chi(A)\chi(B),\) \(\forall A,B\in\mathbb{A}\), 2. \(\chi(A+BR)=\chi(A),\) \(\forall A,B\in\mathbb{A}\), 3. \(\chi(A)\neq 0\iff(A,R)=1\). A Dirichlet character \(\chi\) is said to be even if \(\chi(a)=1\) for all \(a\in\mathbb{F}_{q}^{*}\). Otherwise, we say that it is odd. **Definition 2.2**.: Let \(R\in\mathbb{A}^{+}\), \(S|R\) and \(\chi\) be a character of modulus \(R\). We say that \(S\) is an induced modulus of \(\chi\) if there exists a character \(\chi_{1}\) of modulus \(S\) such that \[\chi(A)=\begin{cases}\chi_{1}(A)&\text{if }(A,R)=1,\\ 0&\text{otherwise}.\end{cases}\] We say \(\chi\) is primitive if there is no induced modulus of norm strictly smaller than \(|R|\). Otherwise, \(\chi\) is said to be non-primitive. Let \(\phi^{*}(R)\) denote the number of primitive characters of modulus \(R\). **Definition 2.3**.: Let \(\chi\) be a Dirichlet character modulo \(R\). Then the Dirichlet \(L\)-function corresponding to \(\chi\) is defined by \[L(s,\chi):=\sum_{f\in\mathbb{A}^{+}}\frac{\chi(f)}{|f|^{s}}, \tag{2.2}\] which converges absolutely for \(\Re(s)>1\). To finish this section, we state some results about multiplicative functions in function fields which will be used throughout this paper. Taking Euler products, we see that for all \(s\in\mathbb{C}\) and all \(R\in\mathbb{A}\), we have \[\sum_{E|R}\frac{\mu(E)}{|E|^{s}}=\prod_{P|R}\left(1-\frac{1}{|P|^{s}}\right) \tag{2.3}\] and, differentiating (2.3), we see that for all \(s\in\mathbb{C}\backslash\{0\}\), we have \[\sum_{E|R}\frac{\mu(E)\deg(E)}{|E|^{s}}=-\left(\prod_{P|R}\left(1-\frac{1}{|P|^{s}}\right)\right)\left(\sum_{P|R}\frac{\deg(P)}{|P|^{s}-1}\right). \tag{2.4}\] **Lemma 2.4** ([1, Lemma 4.5]).: _Let \(R\in\mathbb{A}^{+}\). We have that_ \[\sum_{P|R}\frac{\deg(P)}{|P|-1}\ll\log\omega(R). \tag{2.5}\] **Lemma 2.5** ([9, Lemma A.2.3]).: _For \(\deg(R)>1\), we have_ \[\omega(R)\ll\frac{\log_{q}|R|}{\log_{q}\log_{q}|R|}, \tag{2.6}\] _where the implied constant is independent of \(q\)._ **Lemma 2.6**.: _We have_ \[2^{\omega(R)}=\sum_{E|R}|\mu(E)|. \tag{2.7}\] _Also, for any \(\epsilon>0\) we have_ \[2^{\omega(R)}\ll_{\epsilon}|R|^{\epsilon}. \tag{2.8}\] **Lemma 2.7** ([9, Lemma A.2.4]).: _For \(\deg(R)>q\) we have_ \[\phi(R)\gg\frac{|R|}{\log_{q}\log_{q}|R|}. \tag{2.9}\] **Lemma 2.8** ([9, Lemma A.2.5]).: _For \(\deg(R)>q\), we have_ \[\phi^{*}(R)\gg\frac{\phi(R)}{\log_{q}\log_{q}|R|}. \tag{2.10}\] **Lemma 2.9** ([1, Lemma 3.7]).: _Let \(R\in\mathbb{A}^{+}\) and \(A,B\in\mathbb{A}\). Then_ \[\sum_{\chi(\text{mod }R)}^{*}\chi(A)\bar{\chi}(B)=\begin{cases}\sum_{\begin{subarray}{c}EF=R\\ A\equiv B(\text{mod }F)\end{subarray}}\mu(E)\phi(F)&\text{if }(AB,R)=1,\\ 0&\text{otherwise}.\end{cases}\] As a corollary we have the following result. **Corollary 2.10** ([1, Corollary 3.8]).: _For all \(R\in\mathbb{A}^{+}\) we have that_ \[\phi^{*}(R)=\sum_{EF=R}\mu(E)\phi(F). \tag{2.11}\] ## 3. Preliminary Lemmas In this section, we state and prove results which will be needed to prove both Theorem 1.6 and Theorem 1.7. We start by stating the approximate functional equation for \(\left|L\left(\frac{1}{2},\chi\right)\right|^{2}\). **Lemma 3.1** ([6, Lemma 2.5]).: _Let \(\chi\) be a primitive Dirichlet character of modulus \(R\). Then we have_ \[\left|L\left(\frac{1}{2},\chi\right)\right|^{2}=2\sum_{\begin{subarray}{c}A,B\in\mathbb{A}^{+}\\ \deg(AB)<\deg(R)\end{subarray}}\frac{\chi(A)\bar{\chi}(B)}{|AB|^{\frac{1}{2}}}+O\left(|R|^{-\frac{1}{2}+\epsilon}\right). \tag{3.1}\] The next lemma will be used to obtain the main term of Theorem 1.6 and Theorem 1.7.
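Before that lemma, the basic counting fact underlying its proof, namely that there are exactly \(q^{n}\) monic polynomials of degree \(n\) and hence \(\sum_{L\in\mathbb{A}^{+}_{\leq y}}1/|L|=y+1\), can be checked by brute force. The following minimal sketch does this over \(\mathbb{F}_{3}[T]\); the helper names are ours, not the paper's.

```python
# Brute-force check over F_3[T]: #(monic polynomials of degree n) = q^n,
# and therefore sum_{deg L <= y} 1/|L| = y + 1 (used in Lemma 3.2 below).
from itertools import product

q = 3

def monic(deg):
    # monic polynomials of degree `deg`, as coefficient tuples (low -> high)
    return [c + (1,) for c in product(range(q), repeat=deg)]

y = 4
for n in range(y + 1):
    assert len(monic(n)) == q**n          # q^n monic polynomials of degree n

total = sum(q**(-n) * len(monic(n)) for n in range(y + 1))
print(total)                              # prints 5.0, i.e. y + 1
```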
**Lemma 3.2**.: _Let \(H\) and \(R\) be fixed monic polynomials in \(\mathbb{F}_{q}[T]\) with \(\deg(H)<\deg(R)\) and let \(x\) be a positive integer. If \(x\geq\deg(R)-\deg(H)\), then_ \[\sum_{\begin{subarray}{c}A\in\mathbb{A}^{+}_{\leq x}\\ (AH,R)=1\end{subarray}}\frac{1}{|A|}=|H|\frac{\phi(R)}{|R|}(x+\deg(H))+O\left(|H|\log\omega(R)\right). \tag{3.2}\] _Whereas if \(x<\deg(R)-\deg(H)\), then_ \[\sum_{\begin{subarray}{c}A\in\mathbb{A}^{+}_{\leq x}\\ (AH,R)=1\end{subarray}}\frac{1}{|A|}=|H|\frac{\phi(R)}{|R|}(x+\deg(H))+O\left(|H|\log\omega(R)\right)+O\left(2^{\omega(R)}q^{-x}(x+\deg(H))\right). \tag{3.3}\] Proof.: For all positive integers \(x\) we have \[\sum_{\begin{subarray}{c}A\in\mathbb{A}_{\leq x}^{+}\\ (AH,R)=1\end{subarray}}\frac{1}{|A|}=\sum_{A\in\mathbb{A}_{\leq x}^{+}}\frac{1}{|A|}\sum_{E|(AH,R)}\mu(E)=\sum_{A\in\mathbb{A}_{\leq x}^{+}}\frac{1}{|A|}\sum_{\begin{subarray}{c}E|AH\\ E|R\end{subarray}}\mu(E)=\sum_{E|R}\mu(E)\sum_{\begin{subarray}{c}A\in\mathbb{A}_{\leq x}^{+}\\ E|AH\end{subarray}}\frac{1}{|A|}. \tag{3.4}\] Since \(E|AH\), we have \(EL=AH\) for some \(L\in\mathbb{A}^{+}\) with \(\deg(L)=\deg(A)+\deg(H)-\deg(E)\leq x+\deg(H)-\deg(E)\). Thus \[\sum_{\begin{subarray}{c}A\in\mathbb{A}_{\leq x}^{+}\\ (AH,R)=1\end{subarray}}\frac{1}{|A|}=|H|\sum_{\begin{subarray}{c}E|R\\ \deg(E)\leq x+\deg(H)\end{subarray}}\frac{\mu(E)}{|E|}\sum_{\begin{subarray}{c}L\in\mathbb{A}^{+}\\ \deg(L)\leq x+\deg(H)-\deg(E)\end{subarray}}\frac{1}{|L|}.\] We know that, for a non-negative integer \(y\), \[\sum_{L\in\mathbb{A}_{\leq y}^{+}}\frac{1}{|L|}=\sum_{k=0}^{y}q^{-k}\sum_{L\in\mathbb{A}_{k}^{+}}1=\sum_{k=0}^{y}1=y+1,\] and so \[\sum_{\begin{subarray}{c}A\in\mathbb{A}_{\leq x}^{+}\\ (AH,R)=1\end{subarray}}\frac{1}{|A|}=|H|\sum_{\begin{subarray}{c}E|R\\ \deg(E)\leq x+\deg(H)\end{subarray}}\frac{\mu(E)}{|E|}(x+\deg(H)-\deg(E)+1)\] \[=|H|\sum_{E|R}\frac{\mu(E)}{|E|}(x+\deg(H)-\deg(E)+1)-|H|\sum_{\begin{subarray}{c}E|R\\ \deg(E)>x+\deg(H)\end{subarray}}\frac{\mu(E)}{|E|}(x+\deg(H)-\deg(E)+1). \tag{3.5}\] Using (2.3), (2.4) and Lemma 2.4 we have \[\sum_{E|R}\frac{\mu(E)}{|E|}(x+\deg(H)-\deg(E)+1)=\frac{\phi(R)}{|R|}(x+\deg(H))+O\left(\log\omega(R)\right). \tag{3.6}\] If \(x+\deg(H)\geq\deg(R)\), then there is no \(E|R\) with \(\deg(E)>x+\deg(H)\), and so the final term on the right-hand side of (3.5) is empty. Thus, for \(x+\deg(H)\geq\deg(R)\), \[\sum_{\begin{subarray}{c}E|R\\ \deg(E)>x+\deg(H)\end{subarray}}\frac{\mu(E)}{|E|}(x+\deg(H)-\deg(E)+1)=0. \tag{3.7}\] Whereas for \(x+\deg(H)<\deg(R)\), we have \[\sum_{\begin{subarray}{c}E|R\\ \deg(E)>x+\deg(H)\end{subarray}}\frac{\mu(E)}{|E|}(x+\deg(H)-\deg(E)+1)\ll\sum_{\begin{subarray}{c}E|R\\ \deg(E)>x+\deg(H)\end{subarray}}\frac{|\mu(E)|}{|E|}\deg(E)\ll\frac{x+\deg(H)}{q^{x+\deg(H)}}\sum_{\begin{subarray}{c}E|R\\ \deg(E)>x+\deg(H)\end{subarray}}|\mu(E)|\ll\frac{2^{\omega(R)}(x+\deg(H))}{q^{x+\deg(H)}},\] where the final inequality follows from Lemma 2.6. Combining the above completes the proof. The following lemmas will be used to bound the error terms of Theorem 1.6 and Theorem 1.7. **Lemma 3.3**.: _Let \(F\), \(H\) and \(R\) be fixed monic polynomials in \(\mathbb{F}_{q}[T]\), where \(F|R\), and let \(z<\deg(R)\). Then_ \[\sum_{\begin{subarray}{c}A,B\in\mathbb{A}^{+}\\ \deg(AB)=z\\ AH\equiv B(mod\;F)\\ AH\neq B\\ (ABH,R)=1\end{subarray}}\frac{1}{|AB|^{\frac{1}{2}}}\ll\frac{q^{\frac{z}{2}}(z+1)|H|}{|F|}.
Proof.: We consider three cases, \(\deg(AH)>\deg(B)\), \(\deg(AH)<\deg(B)\) and \(\deg(AH)=\deg(B)\) where \(AH\neq B\). If we first consider the case \(\deg(AH)>\deg(B)\) and suppose that \(\deg(A)=i\), then since \(AH\equiv B(\text{mod }F)\) and \(AH\neq B\) we have that \(AH=LF+B\) for some \(L\in\mathbb{A}^{+}\) with \(\deg(L)=i+\deg(H)-\deg(F)\) and \(\deg(B)=z-\deg(A)=z-i\). Thus, combining the above we have \[\sum_{\begin{subarray}{c}A,B\in\mathbb{A}^{+}\\ \deg(AB)=z\\ \deg(AH)>\deg(B)\\ AH\equiv B(\text{mod }F)\\ AH\neq B\\ (ABH,R)=1\end{subarray}}\frac{1}{|AB|^{\frac{1}{2}}}\leq q^{-\frac{z}{2}}\sum_{i=0}^{z}\sum_{\begin{subarray}{c}L\in\mathbb{A}^{+}\\ \deg(L)=i+\deg(H)-\deg(F)\end{subarray}}\;\sum_{\begin{subarray}{c}B\in\mathbb{A}^{+}\\ \deg(B)=z-i\end{subarray}}1\leq q^{-\frac{z}{2}}\sum_{i=0}^{z}\frac{q^{i}|H|}{|F|}\,q^{z-i}=\frac{q^{\frac{z}{2}}(z+1)|H|}{|F|}. \tag{3.9}\] Similarly, considering the case \(\deg(AH)<\deg(B)\) and using similar arguments seen previously we have \[\sum_{\begin{subarray}{c}A,B\in\mathbb{A}^{+}\\ \deg(AB)=z\\ \deg(B)>\deg(AH)\\ AH\equiv B(\text{mod }F)\\ AH\neq B\\ (ABH,R)=1\end{subarray}}\frac{1}{|AB|^{\frac{1}{2}}}\leq\frac{q^{\frac{z}{2}}(z+1)}{|F|}. \tag{3.10}\] Finally, if we consider the case where \(\deg(AH)=\deg(B)=i\), then \(2i=\deg(ABH)=z+\deg(H)\) and so \(\deg(B)=i=\frac{z+\deg(H)}{2}\). Furthermore since \(AH\equiv B(\text{mod F})\) and \(AH\neq B\), then \(AH=LF+B\) where \(L\in\mathbb{A}\) with \(\deg(L)<i-\deg(F)=\frac{z+\deg(H)}{2}-\deg(F)\). Thus combining the above we have \[\sum_{\begin{subarray}{c}A,B\in\mathbb{A}^{+}\\ \deg(AB)=z\\ \deg(AH)=\deg(B)\\ AH\equiv B(\text{mod }F)\\ AH\neq B\\ (ABH,R)=1\end{subarray}}\frac{1}{|AB|^{\frac{1}{2}}}\leq q^{-\frac{z}{2}}\sum_{\begin{subarray}{c}B\in\mathbb{A}^{+}\\ \deg(B)=\frac{z+\deg(H)}{2}\end{subarray}}\;\sum_{\begin{subarray}{c}L\in\mathbb{A}\\ \deg(L)<\frac{z+\deg(H)}{2}-\deg(F)\end{subarray}}1\leq q^{-\frac{z}{2}}\,q^{\frac{z+\deg(H)}{2}}\,\frac{q^{\frac{z+\deg(H)}{2}}}{|F|}=\frac{q^{\frac{z}{2}}|H|}{|F|}. \tag{3.11}\] Combining all the cases proves the result. **Lemma 3.4**.: _Let \(F\), \(H\), \(K\) and \(R\) be fixed monic polynomials in \(\mathbb{F}_{q}[T]\) where \(F|R\) and let \(z<\deg(R)\). Then_ \[\sum_{\begin{subarray}{c}A,B\in\mathbb{A}^{+}\\ \deg(AB)=z\\ AH\equiv BK(mod\,F)\\ AH\neq BK\\ (ABHK,R)=1\end{subarray}}\frac{1}{|AB|^{\frac{1}{2}}}\ll\frac{q^{\frac{z}{2}}(z+1)|HK|}{|F|}. \tag{3.12}\] Proof.: The proof is similar to the proof of Lemma 3.3 and [10, Lemma 6.4]. **Lemma 3.5**.: _For all \(R\in\mathbb{A}^{+}\) and \(\epsilon>0\) we have_ \[\frac{2^{\omega(R)}|R|^{\frac{1}{2}}\deg(R)}{\phi^{*}(R)}\ll_{\epsilon}|R|^{\epsilon-\frac{1}{2}}. \tag{3.13}\] Proof.: For \(\deg(R)\leq q\) we know, by [9, (A.2.3)], that \(\frac{\phi^{*}(R)}{|R|}\gg 1\). Thus for \(\deg(R)\leq q\) we have \[\frac{2^{\omega(R)}|R|^{\frac{1}{2}}\deg(R)}{\phi^{*}(R)}\ll\frac{2^{\omega(R)}\deg(R)}{|R|^{\frac{1}{2}}}\ll\frac{2^{\omega(R)}}{|R|^{\frac{1}{2}-\epsilon}}.\] From Lemma 2.6 we know that \(2^{\omega(R)}\ll|R|^{\epsilon}\), thus (3.13) holds for \(\deg(R)\leq q\). For \(\deg(R)>q\) we know by Lemma 2.7 and Lemma 2.8 that \[\phi^{*}(R)\gg\frac{\phi(R)}{\log_{q}\log_{q}|R|}\gg\frac{|R|}{(\log_{q}\log_{q}|R|)^{2}}.\] Thus if \(\deg(R)>q\), then \[\frac{2^{\omega(R)}|R|^{\frac{1}{2}}\deg(R)}{\phi^{*}(R)}\ll\frac{2^{\omega(R)}\deg(R)(\log_{q}\log_{q}|R|)^{2}}{|R|^{\frac{1}{2}}}\ll_{\epsilon}\frac{2^{\omega(R)}}{|R|^{\frac{1}{2}-\epsilon}}.\] Finally, from Lemma 2.6, we know that \(2^{\omega(R)}\ll|R|^{\epsilon}\), so (3.13) holds for \(\deg(R)>q\) as well, and this completes the proof. ## 4. Proof of Theorem 1.6 In this section, we use results stated previously to prove Theorem 1.6.
Proof of Theorem 1.6.: Using the approximate functional equation, Lemma 3.1, we have \[\frac{1}{\phi^{*}(R)}\sideset{}{{}^{*}}{\sum}_{\chi(\mathrm{mod}\ R)}\left|L\left(\frac{1}{2},\chi\right)\right|^{2}\chi(H)=\frac{2}{\phi^{*}(R)}\sideset{}{{}^{*}}{\sum}_{\chi(\mathrm{mod}\ R)}\sum_{\begin{subarray}{c}A,B\in\mathbb{A}^{+}\\ \deg(AB)<\deg(R)\end{subarray}}\frac{\chi(A)\bar{\chi}(B)\chi(H)}{|AB|^{\frac{1}{2}}}+O\left(|R|^{-\frac{1}{2}+\epsilon}\right). \tag{4.1}\] Using the orthogonality relation Lemma 2.9, we have \[\frac{2}{\phi^{*}(R)}\sideset{}{{}^{*}}{\sum}_{\chi(\mathrm{mod}\ R)}\sum_{\begin{subarray}{c}A,B\in\mathbb{A}^{+}\\ \deg(AB)<\deg(R)\end{subarray}}\frac{\chi(A)\bar{\chi}(B)\chi(H)}{|AB|^{\frac{1}{2}}}=\frac{2}{\phi^{*}(R)}\sum_{EF=R}\mu(E)\phi(F)\sum_{\begin{subarray}{c}A,B\in\mathbb{A}^{+}\\ \deg(AB)<\deg(R)\\ AH\equiv B(mod\,F)\\ (ABH,R)=1\end{subarray}}\frac{1}{|AB|^{\frac{1}{2}}}. \tag{4.2}\] For the second sum on the right-hand side of (4.2), we will consider the contribution of the diagonal, \(AH=B\), and the off-diagonal, \(AH\neq B\), terms separately. Thus we write \[\frac{2}{\phi^{*}(R)}\sum_{EF=R}\mu(E)\phi(F)\sum_{\begin{subarray}{c}A,B\in\mathbb{A}^{+}\\ \deg(AB)<\deg(R)\\ AH\equiv B(mod\,F)\\ (ABH,R)=1\end{subarray}}\frac{1}{|AB|^{\frac{1}{2}}}=\frac{2}{\phi^{*}(R)}\sum_{EF=R}\mu(E)\phi(F)\sum_{\begin{subarray}{c}A,B\in\mathbb{A}^{+}\\ \deg(AB)<\deg(R)\\ AH\equiv B(mod\,F)\\ AH=B\\ (ABH,R)=1\end{subarray}}\frac{1}{|AB|^{\frac{1}{2}}}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\frac{2}{\phi^{*}(R)}\sum_{EF=R}\mu(E)\phi(F)\sum_{\begin{subarray}{c}A,B\in\mathbb{A}^{+}\\ \deg(AB)<\deg(R)\\ AH\equiv B(mod\,F)\\ AH\neq B\\ (ABH,R)=1\end{subarray}}\frac{1}{|AB|^{\frac{1}{2}}}.\] Considering the contribution of the diagonal, \(AH=B\), the double sum over all \(A,B\in\mathbb{A}^{+}\) with \(\deg(AB)<\deg(R)\), \(AH=B\) and \((ABH,R)=1\) becomes a single sum over all \(A\in\mathbb{A}^{+}\) with \(\deg(A)<\frac{1}{2}(\deg(R)-\deg(H))\) and \((AH,R)=1\). Therefore using the arguments stated above and Corollary 2.10 we have \[\frac{2}{\phi^{*}(R)}\sum_{EF=R}\mu(E)\phi(F)\sum_{\begin{subarray}{c}A,B\in\mathbb{A}^{+}\\ \deg(AB)<\deg(R)\\ AH=B\\ (ABH,R)=1\end{subarray}}\frac{1}{|AB|^{\frac{1}{2}}}=\frac{2}{|H|^{\frac{1}{2}}}\sum_{\begin{subarray}{c}A\in\mathbb{A}^{+}\\ \deg(A)<\frac{\deg(R)-\deg(H)}{2}\\ (AH,R)=1\end{subarray}}\frac{1}{|A|}. \tag{4.3}\] Using Lemma 3.2 with \(x=\frac{\deg(R)-\deg(H)}{2}-1\) we have \[\frac{2}{|H|^{\frac{1}{2}}}\sum_{\begin{subarray}{c}A\in\mathbb{A}^{+}\\ \deg(A)<\frac{\deg(R)-\deg(H)}{2}\\ (AH,R)=1\end{subarray}}\frac{1}{|A|}=|H|^{\frac{1}{2}}\frac{\phi(R)}{|R|}(\deg(H)+\deg(R))+O\left(|H|^{\frac{1}{2}}\log\omega(R)\right). \tag{4.4}\] For the contribution of the off-diagonal terms we use Lemma 3.3 to give \[\sum_{\begin{subarray}{c}A,B\in\mathbb{A}^{+}\\ \deg(AB)<\deg(R)\\ AH\equiv B(mod\,F)\\ AH\neq B\\ (ABH,R)=1\end{subarray}}\frac{1}{|AB|^{\frac{1}{2}}}\ll\sum_{z=0}^{\deg(R)-1}\frac{|H|q^{\frac{z}{2}}(z+1)}{|F|}\ll\frac{|H||R|^{\frac{1}{2}}\deg(R)}{|F|}. \tag{4.5}\] Thus using (4.5) we have \[\frac{2}{\phi^{*}(R)}\sum_{EF=R}\mu(E)\phi(F)\sum_{\begin{subarray}{c}A,B\in\mathbb{A}^{+}\\ \deg(AB)<\deg(R)\\ AH\equiv B(mod\,F)\\ AH\neq B\\ (ABH,R)=1\end{subarray}}\frac{1}{|AB|^{\frac{1}{2}}}\ll\frac{|H||R|^{\frac{1}{2}}\deg(R)}{\phi^{*}(R)}\sum_{EF=R}|\mu(E)|\frac{\phi(F)}{|F|}. \tag{4.6}\]
Combining (4.6) with Lemma 2.6, Lemma 3.5 and the fact that \(\frac{\phi(R)}{|R|}\leq 1\) we have \[\frac{2}{\phi^{*}(R)}\sum_{EF=R}\mu(E)\phi(F)\sum_{\begin{subarray}{c}A,B\in\mathbb{A}^{+}\\ \deg(AB)<\deg(R)\\ AH\equiv B(mod\,F)\\ AH\neq B\\ (ABH,R)=1\end{subarray}}\frac{1}{|AB|^{\frac{1}{2}}}\ll\frac{2^{\omega(R)}|H||R|^{\frac{1}{2}}\deg(R)}{\phi^{*}(R)}\ll|H||R|^{\epsilon-\frac{1}{2}}. \tag{4.7}\] Since \(\deg(H)<\deg(R)\), there is some \(\epsilon>0\) such that \(\deg(H)\leq(1-2\epsilon)\deg(R)\). Thus \(|H|^{\frac{1}{2}}|R|^{\epsilon-\frac{1}{2}}=q^{\frac{1}{2}\deg(H)+\left(\epsilon-\frac{1}{2}\right)\deg(R)}\leq q^{\frac{1}{2}(1-2\epsilon)\deg(R)+\left(\epsilon-\frac{1}{2}\right)\deg(R)}=1\). Therefore combining the above with (4.7), we get \[\frac{2}{\phi^{*}(R)}\sum_{EF=R}\mu(E)\phi(F)\sum_{\begin{subarray}{c}A,B\in\mathbb{A}^{+}\\ \deg(AB)<\deg(R)\\ AH\equiv B(\text{mod }F)\\ AH\neq B\\ (ABH,R)=1\end{subarray}}\frac{1}{|AB|^{\frac{1}{2}}}\ll|H|^{\frac{1}{2}}. \tag{4.8}\] Combining the above completes the proof of Theorem 1.6. ## 5. Proof of Theorem 1.7 In this section we use similar methods to those seen in the proof of Theorem 1.6 to prove Theorem 1.7. Proof of Theorem 1.7.: Using the approximate functional equation, Lemma 3.1, we have \[\frac{1}{\phi^{*}(R)}{\sum_{\chi(\text{mod }R)}}^{*}\left|L\left(\frac{1}{2},\chi\right)\right|^{2}\chi(H)\bar{\chi}(K)\] \[= \frac{2}{\phi^{*}(R)}{\sum_{\chi(\text{mod }R)}}^{*}\sum_{\begin{subarray}{c}A,B\in\mathbb{A}^{+}\\ \deg(AB)<\deg(R)\end{subarray}}\frac{\chi(A)\bar{\chi}(B)\chi(H)\bar{\chi}(K)}{|AB|^{\frac{1}{2}}}+O\left(|R|^{-\frac{1}{2}+\epsilon}\right). \tag{5.1}\] Using the orthogonality relation Lemma 2.9, we have \[\frac{2}{\phi^{*}(R)}{\sum_{\chi(\text{mod }R)}}^{*}\sum_{\begin{subarray}{c}A,B\in\mathbb{A}^{+}\\ \deg(AB)<\deg(R)\end{subarray}}\frac{\chi(A)\bar{\chi}(B)\chi(H)\bar{\chi}(K)}{|AB|^{\frac{1}{2}}}=\frac{2}{\phi^{*}(R)}\sum_{EF=R}\mu(E)\phi(F)\sum_{\begin{subarray}{c}A,B\in\mathbb{A}^{+}\\ \deg(AB)<\deg(R)\\ AH\equiv BK(\text{mod }F)\\ (ABHK,R)=1\end{subarray}}\frac{1}{|AB|^{\frac{1}{2}}}. \tag{5.2}\] For the second sum on the right-hand side of (5.2) we will consider the contribution of the diagonal, \(AH=BK\), and off-diagonal, \(AH\neq BK\), terms separately. Thus we write \[\frac{2}{\phi^{*}(R)}\sum_{EF=R}\mu(E)\phi(F)\sum_{\begin{subarray}{c}A,B\in\mathbb{A}^{+}\\ \deg(AB)<\deg(R)\\ AH\equiv BK(\text{mod }F)\\ (ABHK,R)=1\end{subarray}}\frac{1}{|AB|^{\frac{1}{2}}}=\frac{2}{\phi^{*}(R)}\sum_{EF=R}\mu(E)\phi(F)\sum_{\begin{subarray}{c}A,B\in\mathbb{A}^{+}\\ \deg(AB)<\deg(R)\\ AH\equiv BK(\text{mod }F)\\ AH=BK\\ (ABHK,R)=1\end{subarray}}\frac{1}{|AB|^{\frac{1}{2}}}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\frac{2}{\phi^{*}(R)}\sum_{EF=R}\mu(E)\phi(F)\sum_{\begin{subarray}{c}A,B\in\mathbb{A}^{+}\\ \deg(AB)<\deg(R)\\ AH\equiv BK(\text{mod }F)\\ AH\neq BK\\ (ABHK,R)=1\end{subarray}}\frac{1}{|AB|^{\frac{1}{2}}}.\] Considering the contribution of the diagonal, \(AH=BK\), the double sum over all \(A,B\in\mathbb{A}^{+}\) with \(\deg(AB)<\deg(R)\), \(AH=BK\) and \((ABHK,R)=1\) becomes a single sum over \(A\in\mathbb{A}^{+}\) with \(\deg(A)<\frac{1}{2}(\deg(R)+\deg(K)-\deg(H))\) and \((AH,R)=1\). Therefore using the arguments stated above and Corollary 2.10 we have \[\frac{2}{\phi^{*}(R)}\sum_{EF=R}\mu(E)\phi(F)\sum_{\begin{subarray}{c}A,B\in\mathbb{A}^{+}\\ \deg(AB)<\deg(R)\\ AH=BK\\ (ABHK,R)=1\end{subarray}}\frac{1}{|AB|^{\frac{1}{2}}}=\frac{2|K|^{\frac{1}{2}}}{|H|^{\frac{1}{2}}}\sum_{\begin{subarray}{c}A\in\mathbb{A}^{+}\\ \deg(A)<\frac{\deg(R)+\deg(K)-\deg(H)}{2}\\ (AH,R)=1\end{subarray}}\frac{1}{|A|}. \tag{5.3}\]
Using Lemma 3.2 with \(x=\frac{1}{2}(\deg(R)+\deg(K)-\deg(H))-1\) we have \[\frac{2|K|^{\frac{1}{2}}}{|H|^{\frac{1}{2}}}\sum_{\begin{subarray}{c}A\in\mathbb{A}^{+}\\ \deg(A)<\frac{\deg(R)+\deg(K)-\deg(H)}{2}\\ (AH,R)=1\end{subarray}}\frac{1}{|A|}\] \[=|HK|^{\frac{1}{2}}\frac{\phi(R)}{|R|}(\deg(R)+\deg(H)+\deg(K))+O\left(|HK|^{\frac{1}{2}}\log\omega(R)\right).\] For the contribution of the off-diagonal terms we use Lemma 3.4 to give \[\sum_{\begin{subarray}{c}A,B\in\mathbb{A}^{+}\\ \deg(AB)<\deg(R)\\ AH\equiv BK(mod\;F)\\ AH\neq BK\\ (ABHK,R)=1\end{subarray}}\frac{1}{|AB|^{\frac{1}{2}}}=\sum_{z=0}^{\deg(R)-1}\sum_{\begin{subarray}{c}A,B\in\mathbb{A}^{+}\\ \deg(AB)=z\\ AH\equiv BK(mod\;F)\\ AH\neq BK\\ (ABHK,R)=1\end{subarray}}\frac{1}{|AB|^{\frac{1}{2}}}\] \[\ll\sum_{z=0}^{\deg(R)-1}\frac{|HK|q^{\frac{z}{2}}(z+1)}{|F|}\ll\frac{|HK||R|^{\frac{1}{2}}\deg(R)}{|F|}. \tag{5.4}\] Thus using (5.4) we have \[\frac{2}{\phi^{*}(R)}\sum_{EF=R}\mu(E)\phi(F)\sum_{\begin{subarray}{c}A,B\in\mathbb{A}^{+}\\ \deg(AB)<\deg(R)\\ AH\equiv BK(mod\;F)\\ AH\neq BK\\ (ABHK,R)=1\end{subarray}}\frac{1}{|AB|^{\frac{1}{2}}}\ll\frac{|HK||R|^{\frac{1}{2}}\deg(R)}{\phi^{*}(R)}\sum_{EF=R}|\mu(E)|\frac{\phi(F)}{|F|}. \tag{5.5}\] Combining (5.5) with Lemma 2.6, Lemma 3.5 and the fact that \(\frac{\phi(R)}{|R|}\leq 1\) we have \[\frac{2}{\phi^{*}(R)}\sum_{EF=R}\mu(E)\phi(F)\sum_{\begin{subarray}{c}A,B\in\mathbb{A}^{+}\\ \deg(AB)<\deg(R)\\ AH\equiv BK(mod\;F)\\ AH\neq BK\\ (ABHK,R)=1\end{subarray}}\frac{1}{|AB|^{\frac{1}{2}}}\ll\frac{2^{\omega(R)}|HK||R|^{\frac{1}{2}}\deg(R)}{\phi^{*}(R)}\ll|HK||R|^{\epsilon-\frac{1}{2}}. \tag{5.6}\] Since \(\deg(H)+\deg(K)<\deg(R)\), there is some \(\epsilon>0\) such that \(\deg(H)+\deg(K)\leq(1-2\epsilon)\deg(R)\). Thus \(|HK|^{\frac{1}{2}}|R|^{\epsilon-\frac{1}{2}}=q^{\frac{1}{2}(\deg(H)+\deg(K))+(\epsilon-\frac{1}{2})\deg(R)}\leq q^{\frac{1}{2}(1-2\epsilon)\deg(R)+(\epsilon-\frac{1}{2})\deg(R)}=1\). Thus, combining the above and (5.6) we have \[\frac{2}{\phi^{*}(R)}\sum_{EF=R}\mu(E)\phi(F)\sum_{\begin{subarray}{c}A,B\in\mathbb{A}^{+}\\ \deg(AB)<\deg(R)\\ AH\equiv BK(mod\;F)\\ AH\neq BK\\ (ABHK,R)=1\end{subarray}}\frac{1}{|AB|^{\frac{1}{2}}}\ll|HK|^{\frac{1}{2}}. \tag{5.7}\] Combining everything completes the proof of Theorem 1.7. **Acknowledgment:** The authors are grateful to the Leverhulme Trust (RPG-2017-320) for the support through the research project grant "Moments of \(L\)-functions in Function Fields and Random Matrix Theory". The authors also would like to thank Prof. Steve Gonek and Dr. Gihan Marasingha for helpful comments and suggestions on a previous version of this note.
2308.00563
PDBImages: A Command Line Tool for Automated Macromolecular Structure Visualization
Summary: PDBImages is an innovative, open-source Node.js package that harnesses the power of the popular macromolecule structure visualization software Mol*. Designed for use by the scientific community, PDBImages provides a means to generate high-quality images for PDB and AlphaFold DB models. Its unique ability to render and save images directly to files in a browserless mode sets it apart, offering users a streamlined, automated process for macromolecular structure visualization. Here, we detail the implementation of PDBImages, enumerating its diverse image types and elaborating on its user-friendly setup. This powerful tool opens a new gateway for researchers to visualize, analyse, and share their work, fostering a deeper understanding of bioinformatics. Availability and Implementation: PDBImages is available as an npm package from https://www.npmjs.com/package/pdb-images. The source code is available from https://github.com/PDBeurope/pdb-images. Contact: [email protected], [email protected]
Adam Midlik, Sreenath Nair, Stephen Anyango, Mandar Deshpande, David Sehnal, Mihaly Varadi, Sameer Velankar
2023-08-01T14:04:30Z
http://arxiv.org/abs/2308.00563v1
# PDBImages: A Command Line Tool for Automated Macromolecular Structure Visualization ###### Abstract **Summary:** PDBImages is an innovative, open-source Node.js package that harnesses the power of the popular macromolecule structure visualization software Mol*. Designed for use by the scientific community, PDBImages provides a means to generate high-quality images for PDB and AlphaFold DB models. Its unique ability to render and save images directly to files in a browserless mode sets it apart, offering users a streamlined, automated process for macromolecular structure visualization. Here, we detail the implementation of PDBImages, enumerating its diverse image types and elaborating on its user-friendly setup. This powerful tool opens a new gateway for researchers to visualize, analyse, and share their work, fostering a deeper understanding of bioinformatics. **Availability and Implementation:** PDBImages is available as an npm package from [https://www.npmjs.com/package/pdb-images](https://www.npmjs.com/package/pdb-images). The source code is available from [https://github.com/PDBeurope/pdb-images](https://github.com/PDBeurope/pdb-images). **Contact:** [email protected], [email protected] ## Introduction Visualization of macromolecular structure data holds high importance for researchers seeking to explore and understand the intricate world of biomolecules (O'Donoghue _et al._, 2010; Kozlíková _et al._, 2017; Olson, 2018). While programmatic analyses of atomic coordinates can reveal valuable insights into biological processes and disease mechanisms, the power of viewing structures in three dimensions cannot be overstated [12, 13]. It is akin to providing the scientific community with a microscope to delve deeper into the atomic realm and observe molecular interactions in unprecedented detail. A diverse range of tools currently exist for viewing macromolecular 3D structures, ranging from desktop applications such as PyMOL [15] and ChimeraX [16] to browser-based tools like JSmol [17] and Mol* [18]. Each of these tools offers unique advantages and perspectives. However, among these software suites, Mol* has carved out a niche as a powerful and popular tool for displaying macromolecule structure data, and has been adopted by the Protein Data Bank in Europe (PDBe) [1], the AlphaFold Protein Structure Database [15], UniProt (The UniProt Consortium, 2023), InterPro [19], Ensembl [14], and several other major data providers. Mol* also stands out for its ability to save a given view as a state file that can be reloaded to reproduce the same view. However, to create macromolecular images and/or state files the user must manually create the desired views (e.g. load the structure, apply colouring, zoom to their region of interest) and export them one by one. For creating a large number of images, this approach becomes extremely inefficient. To facilitate and automate image generation, we have developed PDBImages, a performant and reusable open-source software tool that leverages the capabilities of Mol* in conjunction with the PDBe API [1]. PDBImages aims at fully automated generation of high-quality images for PDB entries and AlphaFold DB models, with an easy-to-use command-line interface and scalability.
We utilize this tool internally to generate images displayed on the PDBe pages, but it can also be employed directly by the scientific community, facilitating the generation of molecular structure images for communication of scientific outcomes. This application note provides an overview of PDBImages, detailing its functionality and instructions for usage, enabling users to generate high-quality molecular structure images efficiently. ## Implementation PDBImages is a Node.js command-line application, written in TypeScript and building on the popular visualization library Mol* [18]. Its core functionality revolves around the ability to read atomic XYZ coordinates, construct predefined views of the macromolecular structures, and save these views as PNG images, Mol* state files, and caption files. To this end it employs Mol* in the browserless mode together with the _gl_ rendering library. The implemented functionality is accessible via a straightforward, user-friendly command-line interface (Figure 1). PDBImages can work in two modes: the default _pdb_ mode is suitable for PDB entries and custom structures, while the _alphafold_ mode is dedicated to AlphaFold predicted models. First, PDBImages reads the coordinate file of the processed structure, which may be a PDB entry, AlphaFold model, or a custom structure. The input structure file can be in the PDBx/mmCIF (.cif) or binary CIF (.bcif) format and can also be compressed with GZIP (.cif.gz, .bcif.gz). When the _--input_ option is not specified, the input file will be automatically retrieved from PDBe (Armstrong _et al._, 2020) or AlphaFold DB (Varadi _et al._, 2021), depending on the selected mode. PDBImages then renders individual images and saves them in the output directory. It provides nine distinct image types, each focused on a different aspect of the structure. Eight types apply in the _pdb_ mode and one in the _alphafold_ mode (see Table 1). Figure 1: Overview of the PDBImages tool. Image types can be selected using the _--type_ option, or by default, all applicable image types will be rendered. For each image, at least three files are saved: the rendered image itself (in PNG format, possibly in multiple resolutions), a JSON file with the image caption, and a Mol* state file (MOLJ format, which can be loaded into Mol* to reproduce the same view, for example by dragging the state file and dropping it in a browser window with Mol* Viewer). Some image types require additional input data; these are automatically fetched from the PDBe API (Armstrong _et al._, 2020) (see Table 2). As this feature is only relevant for PDB entries, for other structures (not deposited to PDB) it can be disabled using the _--no-api_ option. Orientation of the visualized structure plays a crucial role, as an improperly selected orientation can lead to the occlusion problem - the parts of the structure closer to the viewer will hinder the visualisation of the structure farther away (Heinrich _et al._, 2014). While this cannot be completely avoided, we minimize the chance of occlusion by calculating the principal component analysis (PCA) of the atomic coordinates and aligning the PCA axes to the screen axes, or "laying the structure flat against the screen". Additional rules ensure that the original orientation of the structure does not affect (flip) the resulting orientation. This handy new feature has already been integrated into the Mol* Viewer itself (_Orient Axes_ option under the _Reset_ button).
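To make the orientation step concrete, the following is a minimal Python sketch of the PCA-based alignment idea, not the actual Mol* implementation: it assumes the atomic coordinates arrive as an (N, 3) array and that the screen axes are the standard basis, and it does not reproduce the additional sign-disambiguation rules mentioned above.

```python
# A minimal sketch of PCA-based orientation (illustration only).
import numpy as np

def orient_axes(coords: np.ndarray) -> np.ndarray:
    """Rotate (N, 3) coordinates so the principal axes align with x, y, z."""
    centered = coords - coords.mean(axis=0)
    # Eigenvectors of the covariance matrix give the PCA axes;
    # np.linalg.eigh returns them in ascending order of variance.
    _, eigvecs = np.linalg.eigh(np.cov(centered.T))
    rotation = eigvecs[:, ::-1]  # widest spread -> screen x, and so on
    if np.linalg.det(rotation) < 0:
        rotation[:, 2] *= -1  # keep a proper rotation (no mirroring)
    return centered @ rotation

# Usage: flat = orient_axes(np.random.rand(100, 3))
```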
\begin{table} \begin{tabular}{l l l} \hline \hline **Mode** & **Image type** & **Description** \\ \hline pdb & entry & Show the complete deposited structure, coloured by chains and entities (chemically distinct molecules). For ensembles, show all models. \\ pdb & assembly & For each assembly listed in the mmCIF file, show the entire assembly, coloured by chains and entities. \\ pdb & entity & For each entity, show the preferred assembly with this entity highlighted (excluding the water entity). \\ pdb & domain & Highlight domain mappings from CATH (Sillitoe _et al._, 2021), SCOP (Andreeva _et al._, 2020), Pfam (Mistry _et al._, 2020) and Rfam (Kalvari _et al._, 2021) using SIFTS mappings (Dana _et al._, 2019). \\ pdb & ligand & For each distinct non-polymer entity (excluding water), show this entity and its surroundings. \\ pdb & modres & For each distinct type of modified residue in the structure, show the preferred assembly with all instances of this modified residue highlighted. \\ pdb & bfactor & Show the deposited structure in putty representation, colour-coded by B-factor data (only applies to X-ray structures). \\ pdb & validation & Show the deposited structure colour-coded by structure quality data. \\ alphafold & plddt & Show the predicted structure colour-coded by the pLDDT confidence measure (Jumper _et al._, 2021) (only applies to computationally predicted structures). \\ \hline \hline \end{tabular} \end{table} Table 1: Image types generated by PDBImages This optimal orientation is referred to as the front view. Some image types are additionally rendered in the side view and top view, with arrows in the left bottom corner indicating the PCA axes (this can be adjusted by the _--view_ and _--no-axes_ options). As the last step, PDBImages creates two summary files: the first, _[id]_filelist_, is a simple list of created images; the second, _[id].json_, also provides image captions and other metadata and has the images structured into sections by image type. Much more detailed instructions can be found in the PDBImages documentation ([https://github.com/PDBeurope/pdb-images#pdbimages](https://github.com/PDBeurope/pdb-images#pdbimages)). ## Availability PDBImages is released as an npm package ([https://www.npmjs.com/package/pdb-images](https://www.npmjs.com/package/pdb-images)) and can be installed using the npm package manager (requires Node.js 18 or higher). In this way, it can be used as a standalone command-line application but can also be easily incorporated into more complex workflows. The source code for PDBImages is publicly available under the Apache 2 licence from the PDBe GitHub repository at [https://github.com/PDBeurope/pdb-images](https://github.com/PDBeurope/pdb-images). We encourage contributions from the scientific community to further improve and expand the capabilities of PDBImages. PDBImages will run seamlessly and utilize the GPU for rendering on Linux, Mac, and Windows personal computers. For running in Linux environments without a running X server (like large computing infrastructures), we provide a Docker image using the X server emulator Xvfb; however, this will not utilize the GPU ([https://hub.docker.com/r/pdbegroup/pdb-images](https://hub.docker.com/r/pdbegroup/pdb-images)). PDBImages has been used to generate images displayed on the PDBe pages ([https://www.ebi.ac.uk/pdbe/](https://www.ebi.ac.uk/pdbe/)) since August 2023. For any PDB entry, all generated images can be downloaded in various resolutions (1600x1600, 800x800, 200x200, and 100x100 pixels, plus the Mol* state file).
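As a small illustration of how these published files can be retrieved programmatically, the hedged Python sketch below assembles and downloads an image URL following the base URL + filename + suffix composition described in the next paragraph; the concrete filename and suffix values mirror the 1tqn example given there, and everything else (function names, the chosen resolution) is for illustration only.

```python
# A sketch of retrieving a rendered image from the PDBe static archive.
import urllib.request

BASE = "https://www.ebi.ac.uk/pdbe/static/entry/"

def image_url(filename: str, suffix: str = "_image-800x800.png") -> str:
    # e.g. "1tqn_bfactor" -> .../1tqn_bfactor_image-800x800.png
    return f"{BASE}{filename}{suffix}"

def download(filename: str, suffix: str, out_path: str) -> None:
    urllib.request.urlretrieve(image_url(filename, suffix), out_path)

if __name__ == "__main__":
    download("1tqn_bfactor", "_image-800x800.png", "1tqn_bfactor.png")
```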
The list of available images for an entry can be obtained from the summary files ([https://www.ebi.ac.uk/pdbe/static/entry/](https://www.ebi.ac.uk/pdbe/static/entry/)_[id]_filelist_ or [https://www.ebi.ac.uk/pdbe/static/entry/](https://www.ebi.ac.uk/pdbe/static/entry/)_[id].json_, where _[id]_ stands for the PDB ID of interest). \begin{table} \begin{tabular}{l l} \hline \hline **Retrieved information** & **PDBe API endpoints** \\ \hline Entity names & /pdb/entry/molecules/{id} \\ Preferred assembly information & /pdb/entry/summary/{id} \\ Modified residues & /pdb/entry/modified_AA_or_NA/{id} \\ Domain mappings from SIFTS & /mappings/{id}, /nucleic_mappings/{id} \\ Validation data & /validation/residuewise_outlier_summary/entry/{id} \\ \hline \end{tabular} \end{table} Table 2: Optional data retrieved from the PDBe API The full URL of a specific file can then be obtained by combining the base URL, filename (retrieved from either of the summary files), and file suffix (retrieved from the JSON summary file), as demonstrated by this example: [https://www.ebi.ac.uk/pdbe/static/entry/](https://www.ebi.ac.uk/pdbe/static/entry/) + 1tqn_bfactor + _image-800x800.png = [https://www.ebi.ac.uk/pdbe/static/entry/1tqn_bfactor_image-800x800.png](https://www.ebi.ac.uk/pdbe/static/entry/1tqn_bfactor_image-800x800.png) Images for new and updated PDB entries are published simultaneously with the weekly PDB release. ## Acknowledgements We would like to thank Alexander S. Rose, Sebastian Bittrich, and Jesse Liang, who contributed the code in the core Mol\({}^{*}\) library allowing it to run in the browserless mode. ## Funding This work was supported by Biotechnology and Biological Sciences Research Council/National Science Foundation funding [BB/W017970/1, PI: S. Velankar; DBI-2129634, PI: S. K. Burley]; Wellcome Trust [218303/Z/19/Z to S. Velankar]; European Molecular Biology Laboratory - European Bioinformatics Institute; Czech Science Foundation [22-30571M to D. Sehnal]; and Ministry of Education, Youth and Sports of the Czech Republic [LM2023055 to D. Sehnal]. _Conflict of Interest:_ none declared.
2310.14984
My Lockdown Escape: Sparking Self-Empathy in the Context of the Covid-19 Pandemic
During the Covid-19 pandemic, research communities focused on collecting and understanding people's behaviours and feelings to study and tackle the pandemic's indirect effects. Although its consequences are slowly starting to fade away, such an interest is still alive. In this article, we propose a hybrid, gamified, story-driven data collection approach to spark self-empathy, hence resurfacing people's past feelings. The game is designed to include a physical board, decks of cards, and a digital application. As the player plays through the game, they customize and escape from their lockdown room by completing statements and answering a series of questions that define their story. The decoration of the lockdown room and the storytelling-driven approach are targeted at sparking people's emotions and self-empathy towards their past selves. Ultimately, the proposed approach was proven effective in sparking and collecting feelings, while a few improvements are still necessary.
Andrea Tocchetti, Silvia Maria Talenti, Marco Brambilla
2023-10-23T14:33:17Z
http://arxiv.org/abs/2310.14984v1
# My Lockdown Escape: Sparking Self-Empathy in the Context of the Covid-19 Pandemic ###### Abstract During the Covid-19 pandemic, research communities focused on collecting and understanding people's behaviours and feelings to study and tackle the pandemic's indirect effects. Although its consequences are slowly starting to fade away, such an interest is still alive. In this article, we propose a hybrid, gamified, story-driven data collection approach to spark self-empathy, hence resurfacing people's past feelings. The game is designed to include a physical board, decks of cards, and a digital application. As the player plays through the game, they customize and escape from their lockdown room by completing statements and answering a series of questions that define their story. The decoration of the lockdown room and the storytelling-driven approach are targeted at sparking people's emotions and self-empathy towards their past selves. Ultimately, the proposed approach was proven effective in sparking and collecting feelings, while a few improvements are still necessary. ## Introduction The recent Covid-19 pandemic and its consequences affected our lives unprecedentedly, changing our daily habits whilst disrupting our emotional and psychological health [13]. Among these consequences, the lockdown enforced by the local governments caused some of the most devastating ones, including depression, anxiety, and stress [14, 15]. Although the pandemic's consequences are slowly fading away, the research community's interest in understanding people's emotions over that period is still alive [16]. Researchers with different backgrounds have been shaping data collection processes and developing methodologies to involve people in sharing their feelings by designing various approaches combining gamification techniques with the most commonly employed survey approach [12, 13]. While these methods may still be effective regardless of time, people are slowly starting to forget how they felt. For this reason, collecting people's feelings requires designing approaches capable of sparking these emotions again. This article proposes a gamified approach to collect people's feelings during the Covid-19 pandemic named "My Lockdown Escape". The proposed methodology strives to make people empathise with their past selves - a concept we call _self-empathy_ - by combining different gamified design techniques. The proposed design follows a hybrid approach with both digital and physical elements. A digital application implementing a storytelling-driven activity supports an escape room-like experience involving decks of cards and a board. In this article, we strive to demonstrate the effectiveness of the proposed methodology in collecting people's feelings by sparking their empathy towards their past selves. Furthermore, we report on the collected data and discuss interesting insights into people's perspectives and feelings towards their experience during the pandemic. The remainder of this article is organized as follows. Chapter 2 describes empathy and gamification in the context of interest. Chapter 3 describes the design and implementation of the proposed method, focusing on the former. Chapter 4 reports on the structure of the experiments and the profiles of the participants. Chapter 5 discusses the results, user experience, and the analysis of the collected data. Chapter 6 summarises the work and provides some insights into future works and design improvements.
## Related Works & Background ### Empathy & Self-Empathy Empathy can be described as _the capability of a human to put themselves in someone else's shoes_[1]. An extensive definition characterises empathy as _an emotional response, dependent upon the interaction between trait capacities and state influences, whose resulting emotion is similar to one's perception (directly experienced or imagined) and understanding (cognitive empathy) of the stimulus emotion, with the recognition that the source of the emotion is not one's own_[10]. Regardless of the difference in complexity, these definitions imply a social relation between two people: the one who feels and expresses an emotion and the one who experiences the consequent emotional response. We stray from such a standard model of empathy, focusing on sparking and assessing people's empathy compared to their past selves rather than someone else, resulting in a so-called one-state model [1]. Such a change of perspective should drive the person towards a better understanding of the experienced emotions since they were the ones feeling them in the first place. The research field on empathy found fertile ground in computer science [22], resulting in the development and assessment of various approaches leveraging gameful design elements to drive empathy [13, 14, 15]. Such a combination of the physical and digital worlds makes it necessary to highlight a fundamental difference between the empathy experienced by humans through their peers and the one conveyed through digital technologies. The first is a human reaction sparked by our perception and understanding of the feelings of another human being through our senses. On the other hand, the second must leverage digital technologies' features to spark it. In particular, they rely on images and sounds, like photos [12], videos [13, 14], music [15], etc., to convey emotions, feelings, and perceptions since digital technologies lack the ability to convey them through the senses. Hence, it is necessary to design approaches capable of dealing with such a gap, driving people to empathise even when digital environments are employed. ### Gamification Gamification can be defined as the application of game design elements in non-game contexts to invoke gameful experiences and behavioural outcomes from users to support the value of the content they provide or create [16, 10]. Such an approach has been applied to several fields, _e.g._, medical and healthcare [12], policy-making [12, 13], educational [14] and many more, demonstrating the all-around applicability of such a paradigm. The effects of such approaches have been studied for a very long time by the research community. For example, classic gamification elements (_e.g._, avatars, animations, challenges, etc.) were proven to be effective in improving user attention and enjoyment [13], improving participation rate and reliability of the answers in data collection tasks [15], as well as improving user engagement and driving behaviours [1]. The Covid-19 outbreak's consequences heavily impacted people's emotional and psychological health, sparking research to analyse the pandemic's scale and impact [12, 13, 14, 15]. In that regard, researchers developed gamification strategies to achieve better coverage and user involvement whilst delivering an interesting and enjoyable experience [12, 13, 14]. 
While understanding people's emotional and psychological conditions has been fundamental to comprehending the impacts of the pandemic, some researchers focused on tackling its consequences, demonstrating the effectiveness of gamified approaches in motivating and enhancing students' learning [16, 1], approaching elderly people with healthcare initiatives [15], improving the population's awareness about disinformation [13, 14], etc. Among these gamified activities, some applied specific design elements to achieve their objectives. In particular, escape room-like experiences were proven effective in sparking cooperation [17] and motivation [18, 19, 20, 21] among participants, especially when applied in remote digital environments. On the other hand, digital storytelling (_e.g._, narratives, interactive stories, etc.) was shown to improve the application's appeal [15] and user engagement [16], and to raise emotions and spark imagination [14], as it engages the user on a personal level in novel or familiar experiences. Despite its demonstrated effectiveness, the research community still acknowledged the need to apply gamification carefully (_e.g._, avoid biasing the user with the narrative [15], or avoid using reward-based mechanisms in surveys [15]) to prevent undesired outcomes and/or behaviours. ## Design & Implementation "My Lockdown Escape" is a hybrid, gamified, story-driven activity designed to spark the players' emotions and stimulate self-empathy. The proposed activity can be divided into three main steps (represented in Figure 1): * **Player Creation**, _i.e._, collecting the player's personal data and customising their avatar, * **Lockdown Room Decoration**, _i.e._, decorating the lockdown room by placing the cards on the board and setting up the next step of the activity, * **Escape Room Gameplay**, _i.e._, escaping from the lockdown room by collecting the cards placed on the board. **Player Creation** - The player is asked to provide their personal data through the digital application. They provide a nickname to guarantee anonymous data collection, age, country, gender, ethnicity, and education level. They are also requested to create a physical avatar or pick a digital one. In the first case, they customize their avatar card, _i.e._, a card with a simple outline of a stylized person, by drawing on it using coloured markers. Then, such a card is uploaded into the system by taking a picture and placed in the corresponding slot on the board. Alternatively, the player can pick one of the pre-made avatars available on the digital application. If a digital avatar is chosen, the corresponding physical avatar card is positioned on the board. **Lockdown Room Decoration** - The player decorates their lockdown room by completing statements in the story narrated through the digital application. Each statement is assigned to a part of the story and a deck whose cards represent a possible phrase or word to complete the sentence. Decks are identified by colour and a unique name. Every card has a symbol printed on its front and a unique QR code and the name of the deck they belong to on its back. Examples of cards are depicted in Figure 3, on the left. At this stage, the part of the board to be used (represented in Figure 2) represents the lockdown room, _i.e._, an abstraction of a real room where the player experienced the pandemic. It has at least one dedicated card slot for each of the seven decks involved, namely _People_, _Picture_, _Floor_, _Lamp_, _Window_, _Bookcase 1_, and _Bookcase 2_, which represent the elements the player uses to describe their room.
For each statement, the player inspects and picks a card of their choice from the associated deck, scans the QR code on its back using their mobile phone through the application, and places it face-up in the corresponding board slot. Scanning the QR code stores the choice in the system. Whenever a statement is completed, the corresponding part of the story is updated and displayed alongside the next one. Such a process is repeated until a card is placed in all the slots associated with the seven involved decks. An example of a statement and the corresponding list of completions are provided below. _Statement: Our story begins in early 2020, and not long ago, the COVID-19 pandemic broke out in your country and the whole world. You are at home watching the news. The titles are scary and doubtful. Take a look around you, and you will see that you are surrounded by..._ _Possible Completions (i.e., Cards): Family, Parents, Friends, Strangers, No one, Roommates, and Animals_ Then, the player sets up the board to be used in the second part of the activity (represented in Figure 4 on the right). They shuffle the _Object_ deck and create three face-down piles to be placed on three dedicated slots on the board by evenly distributing the cards. Then, one randomly selected _Container_ card is placed atop each pile. While the _Object_ cards represent the different items the person could have interacted with during the pandemic, the _Container_ cards represent the furniture in which they are stashed. Figure 1: A high-level representation of the three steps of the game, namely Player Creation, Lockdown Room Decoration, and Escape Room Gameplay, and their sub-steps. Figure 3: Examples of cards from the decks involved in the Escape Room Gameplay step (_i.e._, _Container_ on the left and _Object_ on the right). **Escape Room Gameplay** - The player must now find three core _Object_ cards, _i.e._, the mask, hand sanitiser, and green pass cards, to escape their lockdown room. They must
In particular, it includes a picture of the player's avatar or their virtual representation, the textual description Figure 4: A representation of the part of the board to support the Escape Room Gameplay step. The three slots on top are dedicated to the three piles of cards that will be prepared from the _Object_ and _Container_ decks, while the cards uncovered by the player will be placed in the slot at the bottom. Figure 2: A representation of the part of the board to support the Lockdown Room Decoration step. Each slot has an associated deck name that represents the deck from which the card to be placed belongs. An avatar slot where the player can place their avatar card is also featured. In the considered setting, all decks have one slot each, besides the Floor deck that has two. of the room they decorated with the completed statements, and the items they uncovered with the corresponding questions' answers. "My Lockdown Escape" is designed following a hybrid setting, combining physical and digital assets. The physical assets (_i.e._, the cards and the board) were designed using digital tools. Then, they were printed on cardboard, cut, and coated with plastic. The digital asset (_i.e._, the web application) was developed abiding by the structure of a three-layer architecture. The front end was implemented using HTML, CSS, Javascript, and Thymeleaf. Furthermore, the Bootstrap toolkit was widely employed. The middle layer was developed using Java, Spring Boot, and the Model-View-Controller framework. The back end is managed through a relational database implemented using MySQL technology. Such an application was deployed on a web server to make it accessible to multiple players simultaneously. ## Experiments We evaluated the effectiveness of the proposed approach in a series of experiments with different objectives. The first experiment involved 21 students and researchers (9 women and 12 men) from an Italian university, mainly aged between 21 and 27 years old (26,7 years old on average), in a series of individual experiments in Milan. The second one involved 28 people (17 women and 11 men) from a variety of European organizations, mainly aged between 22 and 66 years old (28,4 years old on average), in an open experiment in Bruxelles. Whilst the first experiments was mainly aimed at collecting feedback about the approach and the user experience, the second one contributed to test the methodology in an open environment and collect feedback about possible improvements. The participants to both experiments were given an initial description of the application. Then, they performed the activity without receiving any suggestions. Each participant was required to bring their own mobile phone to play. Such a setting allowed the testing of the application on different mobile operative systems and web browsers. As previously described, the approach was mainly designed to collect the participants' feelings by sparking self-emptly in the context of the Covid-19 pandemic. The questions the participants answered and the statements they completed were aimed at collecting such data. Regarding the achievement of this objective, we recognize the nondeterministic nature of the data collection performed in the Escape Room Gameplay step. Indeed, the player may escape the room before answering to all the questions after they found the three _Object_ cards. 
We argue that such an event does not impact the assessment of our method as it only influences the amount of data collected in the dedicated part of the game. Moreover, it would be quite easy to reshape the rules of the game to have the player answering all the questions before escaping the room, _e.g._, by allowing the player to leave only after the three piles are empty. To assess the effectiveness of our methodology, each of the first experiment's participant was asked to answer a questionnaire including all the questions from the System Usability Scale (SUS) [1] (10 questions) to measure the system's usability [1], a set of questions to evaluate the overall approach (inspired by [10]) (5 questions), and a set of questions to evaluate the tool's effectiveness in sparking self-emptly (inspired by GEO [11]) and GUESS [12] (5 questions). The latter were custom-made since there are very few or no questionnaires addressing the assessment of self-empathy and hybrid approaches in the literature. A list of such questions is available in Appendix A. The questions' order in the questionnaire was randomized to prevent potential bias. The answers were modelled following a Likert Scale approach ranging from 1 ("Strongly Disagree") to 5 ("Strongly Agree"). ## Results ### Approach Assessment Ultimately, the first experiment yielded positive results and provided useful feedback to improve the approach. In particular, the application achieved a final SUS score of 75 which represents good usability compared to the average SUS score of 68 [10]. We achieved a score of 78% and 72% for the hybrid approach and empathy assessment, respectively, by averaging the numerical values on the corresponding Likert Scale of the answers. These scores provide preliminary evidence that the hybrid design is appreciated and the approach can spark empathy in most participants. Despite most participants deemed the experience to be enjoyable and engaging, from the feedback we received, the behaviours we observed, and the computed scores, we acknowledge there's still room for improvement. First, the game's instructions may benefit from small clarifications and extra details. In particular, in the Lockdown Room Decoration step, some participants were mislead to take their cards randomly instead of picking them. Such a misunderstanding also caused them to position their cards face-down on the board instead of face-up. Furthermore, when comparing the game steps, participants preferred the Lockdown Room Decoration step, stating that the Escape Room Gameplay may benefit from a small re-design due to the randomness in finding the cards to meet the escape condition. Additionally, we noticed that one of the most common behaviours was that most participants tended to leave the room as soon as they met the escape conditions. As previously discussed, even a slight change to the rules would allow to prevent such behaviour, finally improving the data collection. Regarding the latter objective, a few participants also stated that while the Lockdown Room Decoration step perfectly masked the data collection, they perceived it clearly in the Escape Room Gameplay step. Such feedback calls for improvements to better bind the approach with the data collection activity underneath. We argue that a better alignment between the cards and the associated questions would address such a drawback. 
### Data Collection In this section, some of the insights we derived from our analysis are reported, mainly focusing on some statements from the Lockdown Room Decoration step and some questions from the Escape Room Gameplay step. At first, the collected data confirmed an obvious trend, _i.e.,_ most participants (89%) think the pandemic negatively impacted the mental health of the population (as represented in Figure 8 on the left), while surprisingly revealing that a fair percentage (25%) of the participants was not influenced at all (as represented in Figure 8 on the right). Similar trends can be identified in other statements. For example, regarding the statement "The enforcement of the lockdown made me feel...", most of the participants completed it using the words "Frustrated" (39%) and "Anxious" (26%), hence highlighting the negative impact of the lockdown on their mental and emotional health. On the other hand, a few participants used positive (less than 10%) or neutral (less than 10%) feelings, showing that not everybody was negatively impacted by the lockdown. In the question "Think back at your lockdown experience. If it was a movie, what title would it have?", the titles that got chosen the most (more than 90%) are the ones sparking negative emotions (_e.g._, "The Never-ending Story", "A Quiet Place", "Home Alone", etc.), once again confirming the general trend of negativity associated with the lockdown. A similar trend was also identified in the questions "You encounter a friend while walking on the street. He does not keep a correct social distance. How does it make you feel?" and "A friend of yours calls you to hang out at their place with other friends. Would you go?". In particular, we identified a general trend of negativity towards interacting with other people, even when friends are involved. Indeed, most participants (69%) would not leave or would be afraid to leave their house to engage in social interactions, while most participants would feel "Anxious" (30%) or "Vulnerable" (30%) when approached by someone who does not maintain the correct social distance. Another interesting insight we observed is that the interests and priorities of the participants changed after the spread of the pandemic (Figure 6). In particular, most participants stated they were mainly interested in "Relationships" (40%) and "Career and Finance" (36%) before the pandemic. On the other hand, during the Covid-19 pandemic, their interests shifted towards "Mental Health" (30%), "Relationships" (28%) and "Family" (18%). Figure 6: Distribution of the personal priorities of respondents before and during the pandemic. We also performed a few analyses based on the participants' belonging to a specific group (_i.e.,_ students and non-students) for the last statement or gender (_i.e.,_ male or female) for the fourth question. In the first case (Figure 5), we analysed how students and non-students would behave when social interactions are involved, identifying a stronger aversion in students to engaging in social activities. Figure 5: Answers to the question "A friend of yours calls you to hang out at their place with other friends. Would you go?" divided by the participants' student status (_i.e.,_ student vs. non-student). On the other hand, we identified trends in the feelings experienced during the pandemic based on gender (Figure 7). Indeed, while women mostly felt "Frustration", men mostly felt "Boredom", and both groups equally experienced "Anxiety". Such a result highlights that gender could be of fundamental interest when
discerning the impacts and behaviours driven by the lockdown. Figure 7: Distribution of the answers to "The main emotion felt during the Covid-19 pandemic" by gender (only male and female are represented since no participant picked other options). ## Conclusions & Future Works This article described a hybrid, gamified, story-driven data collection approach to spark self-empathy in participants. As they play and build their own story, they are driven to self-empathise with their past selves and provide data to be analysed to understand their past behaviours and attitudes. Preliminary experiments validated the approach and highlighted the need for a few improvements. In future works, we plan to improve the proposed gamified approach by addressing the feedback we received and providing new decks of cards and room abstractions, allowing even more freedom and customizability of the gameplay and the data to be collected. Furthermore, we noticed that a small improvement could be factored into the game by shuffling the cards placed on the first part of the board with the _Object_ cards used to build the piles for the second part of the game. Such a change would improve the data collection while making the two parts of the game even more entwined. ## Appendix A: Questionnaire The following questions were employed to assess the validity of the approach in our experiments. * I found this hybrid method more engaging than digital-only methods. * I feel that this hybrid method is better than full-digital or full-physical. * The escape-room style helped me remember my lockdown experience. * The storytelling style helped me remember my lockdown experience. * I found the game boring. The following questions were employed to assess the capability of the approach to sparking self-empathy in our experiments. * The game helped me remember my lockdown experience. * I would describe myself as a pretty soft-hearted person. * When I think about sad past events of my life, I feel the same sadness. * I am often quite touched by things that I see happen. * The game helped me empathize with my past self. ## Acknowledgements and Credits Container (Figure 3 on the left) icon by Smashicons from www.flaticon.com Object (Figure 3 on the right) and Carpet (Figure 4) icon by Freepik from www.flaticon.com
2303.05769
Coupled Hénon Map, Part I: Topological Horseshoes and Uniform Hyperbolicity
We derive a sufficient condition for topological horseshoe and uniform hyperbolicity of a 4-dimensional symplectic map, which is introduced by coupling two 2-dimensional Hénon maps via linear terms. The coupled Hénon map thus constructed can be viewed as a simple map modeling the horseshoe in higher dimensions. We show that there are two different types of horseshoes, each of which is realized around different anti-integrable limits in the parameter regime.
Keisuke Fujioka, Ryota Kogawa, Jizhou Li, Akira Shudo
2023-03-10T08:10:14Z
http://arxiv.org/abs/2303.05769v1
# Coupled Hénon Map, Part I: Topological Horseshoes and Uniform Hyperbolicity ###### Abstract We derive a sufficient condition for topological horseshoe and uniform hyperbolicity of a 4-dimensional symplectic map, which is introduced by coupling two 2-dimensional Hénon maps via linear terms. The coupled Hénon map thus constructed can be viewed as a simple map modeling the horseshoe in higher dimensions. We show that there are two different types of horseshoes, each of which is realized around different anti-integrable limits in the parameter regime. ## 1 Introduction Horseshoe dynamics is known to be a source of chaos in dynamical systems. The most well-known and the simplest system modeling the horseshoe dynamics would be the Hénon map [1, 2], which is a 2-dimensional quadratic map defined on \(\mathbb{R}^{2}\). In the 2-dimensional plane, the horseshoe-shaped deformation is obtained by first stretching some initial domain in the unstable direction and then contracting it in the stable direction after folding back the stretched domain. Suppose that the horseshoe-shaped domain, in both forward and backward iterations, intersects the original domain with two distinct regions, each of which is completely penetrated without lateral overhang. In this case, we say that the dynamics exhibits _topological horseshoe_[3, 4]. When the topological horseshoe is realized, the intersection of the iterated domain with the original domain, which generates the two disjoint strips in the case of a once-fold dynamics, is always mapped into the previous intersections, meaning that the width of each strip gradually decreases in time. Furthermore, if the contraction in the domain of interest is exponentially fast, each strip will eventually shrink to a string. If this is also the case in the backward iteration, the strings formed along the stretching and contracting directions intersect to give a set of points. This then leads to a conjugacy between the original dynamics and a properly introduced symbolic dynamics. The so-called Conley-Moser theory concerns a sufficient condition to have the symbolic dynamics based on topological horseshoe and _uniform hyperbolicity_[5]. For the Hénon map, Devaney-Nitecki first developed such an argument and gave a sufficient condition such that the Hénon map exhibits topological horseshoe and uniform hyperbolicity as well [6]. Later, it was proved that the parameter locus satisfying uniform hyperbolicity can be extended to the situation where the first homoclinic tangency happens using the complex dynamics technique [7] and computer-assisted proof [8, 9]. There is another, even simpler approach to capturing the existence of chaos. Suppose that the system has a certain parameter whose limiting value kills the dynamical relation between successive time steps, resulting in an infinite sequence of numbers or symbols. Such a limit is called the _anti-integrable limit_[10, 11, 12]. Suppose there exists a suitable (discrete) Lagrangian. Then one can find a one-to-one correspondence between a sequence of numbers in the anti-integrable limit and an orbit generated by the actual dynamics whose parameter is close to the anti-integrable limit. The proof is based on the global implicit function theorem and the contraction mapping principle, and it can be easily generalized to a wide class of systems.
Moreover, since a close analogy exists between the orbits in dynamical systems and the equilibrium states of a class of variational problems in solid-state systems, one can relate the uniform hyperbolicity of the dynamics with the existence of phonon gap in the solid state problem [13]. The topic we would like to discuss in this article is the topological horseshoe and uniform hyperbolicity in higher dimensional symplectic maps. Among a variety of choices [14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30], we here take the coupled Henon map, which will be introduced below. As in the case of 2-dimensional polynomial maps [31], there is a derivation of normal forms of quadratic symplectic maps due to Moser [32], which provides a canonical model to be studied in detail [29, 30]. Indeed, it was shown in [30] that the normal form introduced by Moser can be decoupled into a pair of uncoupled quadratic maps under an appropriate choice of parameters, so our map should be a reduced version of the general normal form. An advantage of starting with the coupled Henon map to examine topological horseshoes and uniform hyperbolicity would be that one can find anti-integrable limits in the parameter space rather easily. As mentioned above, one would expect uniform hyperbolicity, and perhaps also topological horseshoe as well in the vicinity of anti-integrable limits [34, 38, 39, 40, 41]. There indeed exist some works in which topological horseshoe together with uniform hyperbolicity manifests in the region close to the anti-integrable limit [36, 37]. Here we provide a sufficient condition for topological horseshoe and uniform hyperbolicity for the coupled Henon map, using essentially the same strategy as Devaney-Nitecki [6]. In particular, we study topological horseshoe and uniform hyperbolicity around the two different anti-integrable limit, each of which is derived by taking certain parameter limits in the coupled Henon map. The first type can be shown to be conjugate with the symbolic dynamics with four symbols, while the second one is described by the full shift with two symbols. As will be briefly explained and thoroughly discussed in the following paper, their folding natures are different from each other. Especially the first type is so unique that it appears only in 4-dimensional space. The structure of the paper is as follows: Section 2 introduces our coupled Henon map, which is obtained by coupling a pair of 2-dimensional Henon maps, and has three parameters: the two nonlinearity parameters and the coupling strength. Then we show that two anti-integrable limits exist in the current form of the coupled Henon map. Section 3 gives the main results of this paper, providing a sufficient condition for topological horseshoe and uniform hyperbolicity around each anti-integrable limit. Section 4 presents the existence domains in which the non-wandering set is contained. This part corresponds to the proof of the first part of the main theorems. Section 5 gives a sufficient condition for uniform hyperbolicity of the coupled Henon map. To derive uniform hyperbolicity, we examine the cone field condition. In particular, we will use a sufficient condition for uniform hyperbolicity in higher dimensional settings, which have been introduced by Newhouse [42]. Section 6 is devoted to proving the main theorems. Section 7 summarizes the results and provides some outlooks. 
## 2 Coupled Henon map and anti-integrable limits ### Coupled Henon map The coupled Henon map is introduced as \[\left(\begin{array}{c}x_{n+1}\\ y_{n+1}\\ z_{n+1}\\ w_{n+1}\end{array}\right)=f\left(\begin{array}{c}x_{n}\\ y_{n}\\ z_{n}\\ w_{n}\end{array}\right)=\left(\begin{array}{c}a_{0}-x_{n}^{2}-z_{n}+c(x_{n}-y _{n})\\ a_{1}-y_{n}^{2}-w_{n}-c(x_{n}-y_{n})\\ x_{n}\\ y_{n}\end{array}\right), \tag{2.1}\] where \(c>0\) is assumed. The inverse map \(f^{-1}\) is \[\left(\begin{array}{c}x_{n-1}\\ y_{n-1}\\ z_{n-1}\\ w_{n-1}\end{array}\right)=f^{-1}\left(\begin{array}{c}x_{n}\\ y_{n}\\ z_{n}\\ w_{n}\end{array}\right)=\left(\begin{array}{c}z_{n}\\ w_{n}\\ a_{0}-z_{n}^{2}-x_{n}+c(z_{n}-w_{n})\\ a_{1}-w_{n}^{2}-y_{n}-c(z_{n}-w_{n})\end{array}\right). \tag{2.2}\] Here \(a_{0}\) and \(a_{1}\) are parameters that control the nonlinearity, and the parameter \(c\) gives the coupling strength between the two Henon maps [6]. For \(c>0\), the replacement of the variables as \((x,y,z,w)\rightarrow(z,w,x,y)\) transforms the map \(f\) into its inverse \(f^{-1}\). \[\left(\begin{array}{c}X\\ Y\\ Z\\ W\end{array}\right)=\frac{1}{2}\left(\begin{array}{c}x+y\\ x-y\\ z+w\\ z-w\end{array}\right), \tag{2.3}\] the form (2.1) can be written as \[\left(\begin{array}{c}X_{n+1}\\ Y_{n+1}\\ Z_{n+1}\\ W_{n+1}\end{array}\right)=F\left(\begin{array}{c}X_{n}\\ Y_{n}\\ Z_{n}\\ W_{n}\end{array}\right)=\left(\begin{array}{c}A_{0}-(X_{n}^{2}+Y_{n}^{2})-Z_{ n}\\ A_{1}-2X_{n}Y_{n}-W_{n}+2cY_{n}\\ X_{n}\\ Y_{n}\end{array}\right). \tag{2.4}\] where \[A_{0}=\frac{a_{0}+a_{1}}{2},\ \ \ \ A_{1}=\frac{a_{0}-a_{1}}{2}.\] The inverse map \(F^{-1}\) is also rewritten as \[\left(\begin{array}{c}X_{n-1}\\ Y_{n-1}\\ Z_{n-1}\\ W_{n-1}\end{array}\right)=F^{-1}\left(\begin{array}{c}X_{n}\\ Y_{n}\\ Z_{n}\\ W_{n}\end{array}\right)=\left(\begin{array}{c}Z_{n}\\ W_{n}\\ A_{0}-(Z_{n}^{2}+W_{n}^{2})-X_{n}\\ A_{1}-2Z_{n}W_{n}-Y_{n}+2cW_{n}\end{array}\right). \tag{2.5}\] ### Anti-integrable limits for the coupled Henon map Here we show that two types anti-integrable limits exist in the coupled Henon map. For simplicity, we consider the case with \(a=a_{0}=a_{1}\). Let us introduce new parameters \(\epsilon=\sqrt{1/a},u=\epsilon x\) and \(v=\epsilon y\) and rewrite the coupled Henon map (2.1) as \[\left\{\begin{array}{l}\epsilon u_{n+1}=1-(u_{n})^{2}-\epsilon u_{n-1}+c \epsilon(u_{n}-v_{n}),\\ \epsilon v_{n+1}=1-(v_{n})^{2}-\epsilon v_{n-1}-c\epsilon(u_{n}-v_{n}).\end{array}\right. \tag{2.6}\] (A) Anti-integrable limit with four symbols The first type of anti-integrable limit is given by letting \(a\to\infty\) with \(c\) being fixed. In this anti-integrable limit, the coupling between two Henon maps can be neglected and (2.6) tends to \[\left\{\begin{array}{l}0=1-(u_{n})^{2},\\ 0=1-(v_{n})^{2},\end{array}\right. \tag{2.7}\] which lead to \[\left\{\begin{array}{l}u_{n}=\pm 1,\\ v_{n}=\pm 1.\end{array}\right. \tag{2.8}\] The four solutions \((u_{n},v_{n})=(+1,+1),(+1,-1),(-1,+1),(-1,-1)\) provide symbols of the symbolic dynamics around this anti-integrable limit. (B) Anti-integrable limit with two symbols The second type of anti-integrable limit is given by letting \(a\to\infty\) with \(c/\sqrt{a}=const=\gamma\) being fixed. In this limit, the two Henon maps are strongly coupled and the relations (2.6) tend to \[\left\{\begin{array}{l}0=1-(u_{n})^{2}+\gamma(u_{n}-v_{n}),\\ 0=1-(v_{n})^{2}-\gamma(u_{n}-v_{n}),\end{array}\right. 
\tag{2.9}\] which lead to the four solutions in the form \((u_{n},v_{n})=(+1,+1),(\gamma-\sqrt{1-\gamma^{2}},\gamma+\sqrt{1-\gamma^{2}}),(\gamma+\sqrt{1-\gamma^{2}},\gamma-\sqrt{1-\gamma^{2}})\) and \((-1,-1)\). For \(1\leq|\gamma|\), the two solutions are complex, while for \(1>|\gamma|\) all the solutions are real. ## 3 Main theorems In this paper, we will give sufficient conditions for topological horseshoe and uniform hyperbolicity around the anti-integrable limits (A) and (B), respectively. **Theorem 3.1**.: _As for the anti-integrable limit of the case (A), the following holds. A-1) For \(-1\leq A_{0}\), the non-wandering set \(\Omega(f)\) satisfies_ \[\Omega(f)\subset V_{f}, \tag{3.10}\] _where_ \[V_{f}=\{(x,y,z,w)\,|\,|x|,|y|,|z|,|w|\leq r\}. \tag{3.11}\] _Here, \(r=2\sqrt{2}(1+\sqrt{1+A_{0}})\). A-2) If the parameters satisfy the following conditions, \(f\) shows topological horseshoe._ \[0<\frac{1}{4}c^{2}+a_{i}-(c+2)r, (i=0,1), \tag{3.12}\] \[0\leq r^{2}-2(c+1)r-a_{i}, (i=0,1). \tag{3.13}\] A-3) In addition to the conditions (3.12) and (3.13), if the parameters satisfy the following condition, \(\Omega(f)\) is uniformly hyperbolic._ \[4+c<\frac{-c+\sqrt{c^{2}+4(a_{i}-(c+2)r)}}{2},\ \ (i=0,1). \tag{3.14}\] **Theorem 3.2**.: _As for the anti-integrable limit of the case (B), the following holds. B-1) For \(-1\leq A_{0}\), the non-wandering set \(\Omega(f)\) satisfies_ \[\Omega(f)\subset V_{F}, \tag{3.15}\] _where_ \[V_{F}=\{(x,y,z,w)\,|\,\left|\frac{x+y}{2}\right|,\left|\frac{x-y}{2}\right|, \left|\frac{z+w}{2}\right|,\left|\frac{z-w}{2}\right|\leq R\}, \tag{3.16}\] _where \(R=1+\sqrt{1+A_{0}}\). B-2) If the parameters satisfy the following conditions, \(f\) shows topological horseshoe._ \[A_{1}\leq R<c, \tag{3.17}\] \[R<A_{0}-(W^{*})^{2}-R,\] (3.18) \[W^{*}\leq R. \tag{3.19}\] _Here, \(W^{*}=\max\biggl{(}\Big{|}\frac{2R-A_{1}}{2(c-R)}\Big{|}\), \(\Big{|}\frac{-2R-A_{1}}{2(c-R)}\Big{|}\biggr{)}\), and \(Z^{*}=\sqrt{A_{0}-(W^{*})^{2}-2R}\). B-3) In addition to the conditions (3.17), (3.18) and (3.19), if the parameters satisfy the following condition, \(\Omega(f)\) is uniformly hyperbolic._ \[4+c\leq Z^{*}-W^{*}. \tag{3.20}\] ## 4 Non-wandering set ### Some lemmas To prove topological horseshoe and uniformly hyperbolicity for the coupled Henon map we take a similar strategy similar to Devaney-Nitecki [6]. In the following, we prove some lemmas using the parameter: \[R=1+\sqrt{1+A_{0}}\in\mathbb{R}. \tag{4.1}\] **Lemma 4.1**.: \(R\) _satisfies the following_ \[R^{2}-2R-A_{0}=0. \tag{4.2}\] Proof.: Self-evident. **Lemma 4.2**.: _a) For any \(C\geq 0\), if \(|Z_{0}|\leq C\) is satisfied, the following holds:_ \[A_{0}-(Z_{1}^{2}+W_{1}^{2})-C\leq X_{1}\leq A_{0}-(Z_{1}^{2}+W_{1}^{2})+C. \tag{4.3}\] _In addition, if \(|X_{0}|\leq C\), then \(|Z_{1}|\leq C\) holds. b) For any \(C\geq 0\), if \(|X_{0}|\leq C\) is satisfied, the following holds:_ \[A_{0}-(X_{-1}^{2}+Y_{-1}^{2})-C\leq Z_{-1}\leq A_{0}-(X_{-1}^{2}+Y_{-1}^{2})+C. \tag{4.4}\] _In addition, if \(|Z_{0}|\leq C\), then \(|X_{-1}|\leq C\) holds._ Proof.: It is easy to check both of them. **Lemma 4.3**.: _a) If \(X_{0}\leq\min(-|Z_{0}|,-R)\), then \(X_{1}\leq X_{0}\) follows. The equality holds when \((X_{0},Y_{0},Z_{0})=(-R,0,-R)\). b) If \(-|Z_{0}|\leq X_{0}\) and \(Z_{0}\leq-R\) hold, then \(Z_{-1}\leq Z_{0}\) and \(|Z_{0}|\leq|Z_{-1}|\) follows. 
The equalities hold when \((X_{0},Z_{0},W_{0})=(-R,-R,0)\)._ Proof.: a) If \(X_{0}\leq\min(-|Z_{0}|,-R)\) holds, we find that \[X_{1}-X_{0} =A_{0}-(X_{0}^{2}+Y_{0}^{2})-Z_{0}-X_{0}\] \[\leq A_{0}-X_{0}^{2}-Z_{0}-X_{0}\] \[\leq A_{0}-X_{0}^{2}+|Z_{0}|-X_{0}\] \[\leq A_{0}-X_{0}^{2}-2X_{0}\] \[=A_{0}-(X_{0}+1)^{2}+1. \tag{4.5}\] Since \(X_{0}\leq-R\), we have a condition for \(X_{0}\) as \[X_{0}\leq-R=-1-\sqrt{1+A_{0}}\leq-1. \tag{4.6}\] Then, \(A_{0}-(X_{0}+1)^{2}+1\) takes the maximum value at \(X_{0}=-R\) (see Fig. 1). Thus, we have \[X_{1}-X_{0}\leq A_{0}-(-R)^{2}-2(-R)=0. \tag{4.7}\] Here we have used lemma 4.1. The equality holds when \((X_{0},Y_{0},Z_{0})=(-R,0,-R)\) is satisfied. b) Assuming \(-|Z_{0}|\leq X_{0}\) and \(Z_{0}\leq-R\), we find that \[Z_{-1}-Z_{0} =A_{0}-(Z_{0}^{2}+W_{0}^{2})-X_{0}-Z_{0}\] \[\leq A_{0}-Z_{0}^{2}-X_{0}-Z_{0}\] \[\leq A_{0}-Z_{0}^{2}+|Z_{0}|-Z_{0}\] \[=A_{0}-Z_{0}^{2}-2Z_{0}\] \[=A_{0}-(Z_{0}+1)^{2}+1. \tag{4.8}\] In the same way as above, since \[Z_{0}\leq-R\leq-1 \tag{4.9}\] holds, \(A_{0}-(Z_{0}+1)^{2}+1\) takes the maximum value at \(Z_{0}=-R\) (see Fig. 2). Hence, we have \[Z_{-1}-Z_{0}\leq A_{0}-(-R)^{2}-2(-R)=0. \tag{4.10}\] Since \(Z_{0}\leq-R\), \(|Z_{-1}|\geq|Z_{0}|\) also follows. The equality holds when \((X_{0},Z_{0},W_{0})=(-R,-R,0)\) holds. ### Decomposition of domains and transition rules In the following, we study the coupled Henon map in the case where \(R\) takes a real value. For this purpose, we introduce the following domains (see Fig. 3): \[N_{1} =\{(X,Y,Z,W)\,|\,X\leq\min(-|Z|,-R)\}, \tag{4.11}\] \[N_{2} =\{(X,Y,Z,W)\,|\,X\geq-R,\ |Z|\leq R\},\] (4.12) \[N_{3} =\{(X,Y,Z,W)\,|\,X\geq-|Z|,\ Z\geq R\},\] (4.13) \[N_{4} =\{(X,Y,Z,W)\,|\,X\geq-|Z|,\ Z\leq-R\}, \tag{4.14}\] Figure 2: Sketch of \(g=A_{0}-(Z_{0})^{2}-2Z_{0}\). **Proposition 4.4**.: _If \(A_{0}\geq-1\), the following holds: a) Under the iteration of \(F\), the coordinate \(X\) strictly decreases in \(N_{1}\) except for \((X,Z)=(-R,-R)\). b) \(F(N_{1})\subset N_{1}\). c) \(F(N_{2})\subset N_{1}\cup N_{2}\) and \(F(N_{3})\subset N_{1}\cup N_{2}\). d) Under the iteration of \(F^{-1}\), the coordinate \(Z\) strictly decreases in \(N_{4}\) except for \((X,Z)=(-R,-R)\). e) \(F^{-1}(N_{3})\subset N_{4}\) and \(F^{-1}(N_{4})\subset N_{4}\). f) \(F^{-1}(N_{2})\subset N_{2}\cup N_{3}\cup N_{4}\)._ Proof.: a) Self-evident from lemma 4.3 a). b) For \((X_{0},Y_{0},Z_{0},W_{0})\in N_{1}\), \(X_{1}\leq X_{0}\) follows from a). From Eq. (2.4), we have \(Z_{1}=X_{0}\leq-R<0\). So, we have \(-|Z_{1}|=X_{0}\), which leads to the inequality \(X_{1}\leq-|Z_{1}|\). In addition, \(X_{1}\leq X_{0}\leq-R\) follows from a). Combining these, \((X_{1},Y_{1},Z_{1},W_{1})\in N_{1}\) is satisfied, _i.e._, \(F(N_{1})\subset N_{1}\) holds. c) Using lemma 4.2 by setting \(C=R\), the region specified by \(|Z|\leq R\), which covers the domain \(N_{2}\), is mapped to the horseshoe-shaped domain (see Fig. 4): \[A_{0}-(Z_{1}^{2}+W_{1}^{2})-R\leq X_{1}\leq A_{0}-(Z_{1}^{2}+W_{1}^{2})+R. \tag{4.16}\] The right boundary is expressed as \[X_{1}=A_{0}-(Z_{1}^{2}+W_{1}^{2})+R. \tag{4.17}\] Since \(W_{1}\) is real, \(X_{1}\) is bounded as \[X_{1}\leq A_{0}-Z_{1}^{2}+R. \tag{4.18}\] The boundary of Eq. (4.18), namely, \[X_{1}=A_{0}-Z_{1}^{2}+R, \tag{4.19}\] Figure 3: Illustration of domains and their boundary lines in the \((X,Z)\)-plane. is shown by the red curve in Fig. 4. Using lemma 4.1, it is easy to check that the point \((X_{1},Z_{1})=(-R,\pm R)\) satisfies Eq. (4.19). 
Therefore, the horseshoe-shaped region, specified by Eq. (4.16), lies completely inside the left-hand side of the red curve expressed by Eq. (4.19), and the red curve passes through the conner points \((X_{1},Z_{1})=(-R,\pm R)\) of \(N_{2}\). Thus, \(F(N_{2})\subset N_{1}\cup N_{2}\) is concluded. It is also easy to show that the line \(Z=R\) is mapped to the leftmost curve shown in Fig. 4. As a result, \(F(N_{3})\subset N_{1}\cup N_{2}\) follows (see Fig. 5). d) Self-evident from lemma 4.3 b). e) Note that the domain \(N_{3}\) in \((X,Z)\)-plane is expressed as \[N_{3}=\{(X,Y,Z,W)\,|\,Z=-X+\gamma,\,\gamma\geq 0,\,Z\geq R\}, \tag{4.20}\] thus it is mapped by \(F^{-1}\) as \[F^{-1}(N_{3})=\{(X,Y,Z,W)\,|\,Z=A_{0}-(X^{2}+Y^{2})+X-\gamma,\,\gamma\geq 0, \,X\geq R\}. \tag{4.21}\] For \(\gamma\geq 0\), we find that \(Z=A_{0}-(X^{2}+Y^{2})+X-\gamma\leq A_{0}-X^{2}+X\leq-R\). Here we have used lemma 4.1 and \(X\geq R\) in the region \(F^{-1}(N_{3})\). Since the points in \(F^{-1}(N_{3})\) satisfy \(X\geq R\) and \(Z\leq-R\), \(F^{-1}(N_{3})\subset N_{4}\) holds. In a similar way, the domain \(N_{4}\) is expressed as \[N_{4}=\{(X,Y,Z,W)\,|\,Z=X-\gamma,\,Z\leq-R,\,\gamma\geq 0\}, \tag{4.22}\] Figure 4: The blue region shows the horseshoe region obtained by setting \(C=R\) in (4.3). The red curve is the parabola in Eq. (4.19). Figure 5: The iterated domains. The red curve represents the rightmost curves for the regions \(F(N_{1}),F(N_{2})\) and \(F(N_{3})\). thus it is mapped by \(F^{-1}\) as \[F^{-1}(N_{4})=\{(X,Y,Z,W)\,|\,Z=A_{0}-(X^{2}+Y^{2})-X-\gamma,\,\gamma\geq 0,\,X \leq-R\}. \tag{4.23}\] For \(\gamma\geq 0\), we find that \(Z-X=A_{0}-(X^{2}+Y^{2})-X-\gamma-X\leq A_{0}-X^{2}-2X\leq 0\). Here we have again used lemma 4.1 and \(X\leq-R\) in the region \(F^{-1}(N_{4})\). Since the points in \(F^{-1}(N_{4})\) satisfy \(X\leq-R\) and \(Z\leq X\), \(F^{-1}(N_{4})\subset N_{4}\) holds. f) It is easy to see that \(F^{-1}(N_{2})\) is contained in the region \(|X|\leq R\). Combining these facts with the definitions of \(N_{1},N_{2}\) and \(N_{3}\), one can show that \(F^{-1}(N_{2})\in N_{2}\cup N_{3}\cup N_{4}\) holds (see Fig. 6). ### Existence domain of the non-wandering set: proof of Main theorem 3.1 A-1) and Main theorem 3.2 B-1) In this section, based on Propositon 4.4, we specify the domain containing the non-wandering set \(\Omega(F)\). As illustrated in Fig. 7 the flow of dynamics, regardless of whether the flow from \(N_{4}\) to \(N_{2}\) exists or not, an orbit launched in the domain \(N_{3}\) does not return back to the vicinity of the initial point. Propositons 4.4 a) implies that the coordinate \(X\) of the points contained in \(N_{1}\) are strictly decreasing, and also 4.4 d) implies the coordinate \(Z\) of the points contained in \(N_{4}\) are strictly increasing, so they do not return back to the vicinity of the initial points as well. This argument holds for the backward iteration \(F^{-1}\). It follows that the points of the non-wandering set \(\Omega(F)\) do not exist in Figure 6: The inverse images of each domain. The same set of parameters is used as in Fig. 5. The blue curve represents the uppermost situation. the domains \(N_{1},N_{3}\) and \(N_{4}\), and thus the non-wandering set \(\Omega(F)\) should be contained in the domain \(N_{2}\). Since \(\Omega(F)\subset N_{2}\), the non-wandering set can be expressed as \(\Omega(F)=\Lambda\subset\bigcap_{k=-\infty}^{\infty}F^{k}(N_{2})\), thus \(\Omega(F)\subset F^{-1}(N_{2})\cap N_{2}\cap F(N_{2})\) holds. 
Note here that we do not know whether the non-wandering set is empty or not. In the following, we use this condition to further specify the existence domain of the non-wandering set. More specifically, we will provide a hypercube containing the region \(F^{-1}(N_{2})\cap N_{2}\cap F(N_{2})\). First, note that the non-wandering set \(\Omega(F)\) should be located in the region \(|Z|\leq R\), since \(\Omega(F)\subset F^{-1}(N_{2})\cap N_{2}\cap F(N_{2})\). From the mapping rule (2.5), \(|Z_{0}|\leq R\) immediately leads to \(|X_{-1}|\leq R\). Therefore, the condition \(\Omega(F)\subset F^{-1}(N_{2})\) implies that \(|X|\leq R\) must be satisfied for the points in \(\Omega(F)\) (see Fig. 6). Next, we recall (4.17), which tells us the maximum value of \(X\) in the region \(F(N_{2})\), that is, \[X_{1} = A_{0}-(Z_{1}^{2}+W_{1}^{2})+R \tag{4.24}\] \[\leq A_{0}-W_{1}^{2}+R.\] The condition \(|X_{1}|\leq R\), obtained above, leads to \[-R\leq A_{0}-W_{1}^{2}+R, \tag{4.25}\] which implies that \(|W_{1}|\leq R\) must be satisfied for the points in \(\Omega(F)\) (see Fig. 8(a)). Again, it follows immediately from the mapping rule (2.5) that \(|Y_{0}|\leq R\). As a result of these arguments, we can conclude that \[\Omega(F)\subset V_{F}=\{(X,Y,Z,W)\,|\,|X|,|Y|,|Z|,|W|\leq R\}. \tag{4.26}\] We then consider the hypercube \(V_{f}\) in the original coordinates \((x,y,z,w)\), which contains the region \(V_{F}\). The slice of \(V_{f}\) by \((x,y)\)-plane is illustrated in Fig. 9, and we have \[\Omega(f)\subset V_{f}=\{(x,y,z,w)\,|\,|x|,|y|,|z|,|w|\leq 2\sqrt{2}R\}. \tag{4.27}\] Figure 7: The flow of dynamics. The dashed line shows that the flow can exist, but its proof is not given here. The red and blue arrows indicate a monotonic shift to the left and upward, respectively, in each region. The proof of our Main theorems 3.1 A-1) and 3.2 B-1) is thus completed. ## 5 Sufficient condition for uniform hyperbolicity ### Cone field condition We introduce here the cone field condition [42], which leads to a sufficient condition for uniform hyperbolicity. **Definition 5.1**.: _Let \(\mathbb{E}_{1}\subset\mathbb{R}^{n}\) and \(\mathbb{E}_{2}\) be a proper subspace and its complementary subspace, respectively. i.e., \(\mathbb{R}^{n}=\mathbb{E}_{1}\oplus\mathbb{E}_{2}\). The standard unit cone determined by the Figure 8: The domains mapped by \(F\) and \(F^{-1}\). (a) The green curve shows the leftmost parabola for which \(N_{2}\cap F(N_{2})\neq\emptyset\). (b) The green curve shows the lowest parabola for which \(N_{2}\cap F^{-1}(N_{2})\neq\emptyset\). Figure 9: Domains containing the non-wandering set \(\Omega(F)\). subspaces \(\mathbb{E}_{1}\) and \(\mathbb{E}_{2}\) is given by the set,_ \[K(\mathbb{E}_{1},\mathbb{E}_{2})=\{\mathbf{v}=(\mathbf{v}_{1},\mathbf{v}_{2})\,|\mathbf{v}_{1}\in \mathbb{E}_{1},\mathbf{v}_{2}\in\mathbb{E}_{2},|\mathbf{v}_{2}|\leq|\mathbf{v}_{1}|\}. \tag{5.1}\] **Definition 5.2**.: _A cone in \(\mathbb{R}^{n}\) with core \(\mathbb{E}_{1}\), denoted by \(\mathcal{C}(\mathbb{E}_{1})\), is the image \(T(K(\mathbb{E}_{1},\mathbb{E}_{2}))\). Here \(T:\mathbb{R}^{n}\to\mathbb{R}^{n}\) is a linear automorphism such that \(T(\mathbb{E}_{1})=\mathbb{E}_{1}\). 
By a cone \(\mathcal{C}\) in \(\mathbb{R}^{n}\) we mean a set \(\mathcal{C}(\mathbb{E}_{1})\) for some proper subspace \(\mathbb{E}_{1}\) of \(\mathbb{R}^{n}\)._ **Definition 5.3**.: _A cone field \(\mathcal{C}=\{\mathcal{C}_{\mathbf{x}}\}\) on a manifold \(M\) is a collection of cones \(\mathcal{C}_{\mathbf{x}}\in T_{x}M\) for \(x\subset M\)._ **Definition 5.4**.: _For a given cone field \(\mathcal{C}=\{\mathcal{C}_{\mathbf{x}}\}_{\mathbf{x}\in M}\) and a diffeomorphism \(h\) defined on the manifold \(M\), let_ \[m_{\mathcal{C},\mathbf{x}}=m_{C,\mathbf{x}}(h) =\inf_{\mathbf{v}\in\mathcal{C}_{\mathbf{x}}\setminus\{0\}}\frac{|Dh_{ \mathbf{x}}(\mathbf{v})|}{|\mathbf{v}|}, \tag{5.2}\] \[m^{\prime}_{\mathcal{C},\mathbf{x}}=m^{\prime}_{C,\mathbf{x}}(h) =\inf_{\mathbf{v}\notin\mathcal{C}_{h(\mathbf{x})}}\frac{|Dh^{-1}_{h(\mathbf{ x})}(\mathbf{v})|}{|\mathbf{v}|}. \tag{5.3}\] _We call \(m_{\mathcal{C},\mathbf{x}}\) and \(m^{\prime}_{\mathcal{C},\mathbf{x}}\) the minimal expansion and minimal co-expansion of \(h\) on \(\mathcal{C}_{\mathbf{x}}\), respectively._ **Definition 5.5**.: _We say that \(h\) is expanding on the cone field \(\mathcal{C}\) if_ \[\inf_{\mathbf{x}\in\Lambda}m_{\mathcal{C},\mathbf{x}}(h)>1\ \Longleftrightarrow\ \inf_{x\in\Lambda}\inf_{\mathbf{v}\in\mathcal{C}_{\mathbf{x}}\setminus\{0\}}\frac{|Dh_ {\mathbf{x}}(\mathbf{v})|}{|\mathbf{v}|}>1. \tag{5.4}\] _Similarly, we say that \(h\) is co-expanding on the cone field \(\mathcal{C}\) if_ \[\inf_{\mathbf{x}\in\Lambda}m^{\prime}_{\mathcal{C},\mathbf{x}}(h)>1\ \Longleftrightarrow\ \sup_{x\in\Lambda}\sup_{\mathbf{u}\in Dh^{-1}_{h(\mathbf{x})}(C^{c}_{h(\mathbf{x})})}\frac{|Dh _{\mathbf{x}}(\mathbf{u})|}{|\mathbf{u}|}<1. \tag{5.5}\] **Definition 5.6**.: _We say that the cone field \(\mathcal{C}_{\mathbf{x}}\) has constant orbit core dimension on \(\Lambda\) if_ \[\dim\mathbb{E}_{\mathbf{x}}=\dim\mathbb{E}_{h(\mathbf{x})} \tag{5.6}\] _holds for all \(x\in\Lambda\). Here \(\mathbb{E}_{\mathbf{x}}\) and \(\mathbb{E}_{h(\mathbf{x})}\) are the cores of \(\mathcal{C}_{\mathbf{x}}\) and \(\mathcal{C}_{h(\mathbf{x})}\), respectively._ Based on these notions, Newhouse has derived a necessary and sufficient condition for uniform hyperbolicity. **Theorem 5.7** (Newhouse).: _A sufficient condition for \(\Lambda(h)\) to be uniformly hyperbolic is that there are an integer \(N>0\) and a cone field \(\mathcal{C}\) with constant orbit core dimension over \(\Lambda(h)\) such that \(h^{N}\) is both expanding and co-expanding on \(\mathcal{C}\)._ Here we can show the following. **Corollary 5.8**.: _If there exists a standard unit cone field \(\mathcal{C}_{\mathbf{x}}\) on \(\Lambda(h)\) with \(h\)-invariant cones, i.e., \(Dh(\mathbb{E}_{\mathbf{x}})=\mathbb{E}_{h(\mathbf{x})},\,\forall x\in\Lambda(h)\), such that \(h\) is both expanding and co-expanding, then \(\Lambda(h)\) is uniformly hyperbolic._ Proof.: Since \(\mathbb{E}_{\mathbf{x}}\) is invariant under \(h\), it has constant orbit core dimension. The fact that for any \(\mathbf{x}\in\Lambda(h)\)\(\lambda\leq m_{\mathcal{C},\mathbf{x}}\) and \(\lambda\leq m^{\prime}_{\mathcal{C},\mathbf{x}}\) imply that \(h\) is both expanding and co-expanding. Hence \(h\) is uniformly hyperbolic. Sufficient condition for uniform hyperbolicity: the case with four symbols in the anti-integrable limit We first derive a sufficient condition for the case whose anti-integrable limit has four symbols. 
The Jabcobian for the forward and backward iterations is respectively given by \[Jf =\left(\begin{array}{rrrr}-2x+c&-c&-1&0\\ -c&-2y+c&0&-1\\ 1&0&0&0\\ 0&1&0&0\\ \end{array}\right), \tag{5.7}\] \[Jf^{-1} =\left(\begin{array}{rrrr}0&0&1&0\\ 0&0&0&1\\ -1&0&-2z+c&-c\\ 0&-1&-c&-2w+c\\ \end{array}\right). \tag{5.8}\] The following lemma will be used in the subsequent argument. **Lemma 5.9**.: _Let_ \[G(x,y)=\left(\begin{array}{rr}-2x+c&-c\\ -c&-2y+c\\ \end{array}\right), \tag{5.9}\] _where \(x,y\in\mathbb{R}\) satisfy the condition \(2\lambda+2+c\leq|x|,|y|\). Then, for any vector \(\mathbf{w}_{0}=(\xi,\eta)^{t}\), the following holds:_ \[(2\lambda+2)|\mathbf{w}_{0}|\leq|\mathbf{w}_{1}|, \tag{5.10}\] _where \(\mathbf{w}_{1}=G(x,y)\mathbf{w}_{0}\)._ Proof.: In the case \(|\eta_{0}|\leq|\xi_{0}|\), we have \[|\mathbf{w}_{1}| \geq|\xi_{1}|\] \[=|(-2x+c)\xi_{0}-c\eta_{0}|\] \[\geq|-(2x-c)\xi_{0}|-|c\eta_{0}|\] \[=|(2x-c)||\xi_{0}|-c|\eta_{0}|\] \[\geq(2|x|-c)|\xi_{0}|-c|\eta_{0}|\] \[\geq 2(|x|-c)|\xi_{0}|\] \[=(|x|-c)(|\xi_{0}|+|\xi_{0}|)\] \[\geq(|x|-c)(|\xi_{0}|+|\eta_{0}|)\] \[\geq(|x|-c)|\mathbf{w}_{0}|\] \[\geq(2\lambda+2)|\mathbf{w}_{0}|.\] Similarly, for \(|\xi_{0}|<|\eta_{0}|\), \[|\mathbf{w}_{1}| \geq|\eta_{1}|\] \[=|-c\xi_{0}+(-2y+c)\eta_{0}|\] \[\geq|G(x,y)\mathbf{v}_{0}^{+}-\mathbf{v}_{0}^{-}|-|\mathbf{v}_{0}^{+}|\] \[\geq|G(x,y)\mathbf{v}_{0}^{+}|-|\mathbf{v}_{0}^{-}|-|\mathbf{v}_{0}^{+}|\] \[\geq(2\lambda+1)|\mathbf{v}_{0}^{+}|-|\mathbf{v}_{0}^{-}|\] \[\geq 2\lambda|\mathbf{v}_{0}^{+}|\] \[\geq\lambda|\mathbf{v}_{0}|.\] Similarly, for b), we have \[|\mathbf{v}_{-1}|=\left|\left(\begin{array}{c}\mathbf{v}_{0}^{-}\\ -\mathbf{v}_{0}^{+}+G(z,w)\mathbf{v}_{0}^{-}\end{array}\right)\right|\] \[\geq|G(z,w)\mathbf{v}_{0}^{-}-\mathbf{v}_{0}^{+}|-|\mathbf{v}_{0}^{-}|\] \[\geq|G(z,w)\mathbf{v}_{0}^{-}|-|\mathbf{v}_{0}^{+}|-|\mathbf{v}_{0}^{-}|\] \[\geq(2\lambda+1)|\mathbf{v}_{0}^{-}|-|\mathbf{v}_{0}^{+}|\] \[\geq 2\lambda|\mathbf{v}_{0}^{-}|\] \[\geq\lambda(|\mathbf{v}_{0}^{+}|+|\mathbf{v}_{0}^{-}|)\] \[\geq\lambda|\mathbf{v}_{0}|.\] Theorem 5.10 tells us that \(f\) is expanding and co-expanding. Combined with the Lemma 5.9, we finally find the following: **Corollary 5.11**.: _If all points in the non-wandering set \(\Omega(f)\), if not empty, satisfy the condition_ \[4+c\leq|x|,|y|,|z|,|w|, \tag{5.14}\] _then \(\Omega(f)\) is uniformly hyperbolic._ Sufficient condition for uniformly hyperbolicity: the case with two symbols in the anti-integrable limit Next, we consider a sufficient condition for the case where the anti-integrable limit has two symbols. The Jacobian after the transformation (2.3) is respectively given by \[Jf =\left(\begin{array}{cc}\widetilde{G}(x,y)&-I_{2}\\ I_{2}&O_{2}\end{array}\right), \tag{5.15}\] \[Jf^{-1} =\left(\begin{array}{cc}O_{2}&I_{2}\\ -I_{2}&\widetilde{G}(z,w)\end{array}\right). \tag{5.16}\] The following will be used in the following argument. **Lemma 5.12**.: _Let_ \[\widetilde{G}(X,Y)=\left(\begin{array}{cc}-2X&-2Y\\ -2Y&-2X+2c\end{array}\right), \tag{5.17}\] _where \(X,Y\in\mathbb{R}\) satisfy the condition \(2\lambda+2+c\leq|X|-|Y|\). 
Then, for any vector \(\mathbf{w}_{0}=(\xi,\eta)^{t}\), the following holds:_ \[(2\lambda+2)|\mathbf{w}_{0}|\leq|\mathbf{w}_{1}|, \tag{5.18}\] _where \(\mathbf{w}_{1}=G(x,y)\mathbf{w}_{0}\)._ Proof.: In the case \(|\eta_{0}|\leq|\xi_{0}|\), we have \[|\mathbf{w}_{1}| \geq|\xi_{1}|\] \[=|(-2X)\xi_{0}-2Y\eta_{0}|\] \[\geq|-2X\xi_{0}|-|2Y\eta_{0}|\] \[\geq 2(|X|-|Y|)|\xi_{0}|\] \[=(|X|-|Y|)(|\xi_{0}|+|\xi_{0}|)\] \[\geq(||X|-|Y|)(|\xi_{0}|+|\eta_{0}|)\] \[\geq(|X|-|Y|)|\mathbf{w}_{0}|\] \[\geq(2\lambda+2)|\mathbf{w}_{0}|.\] Similarly, for \(|\xi_{0}|<|\eta_{0}|\), \[|\mathbf{w}_{1}| \geq|\eta_{1}|\] \[=|-2Y\xi_{0}-2(X-c)\eta_{0}|\] \[\geq|-2(X-c)||\eta_{0}|-|-2Y||\xi_{0}|\] \[=2|(X-c)||\eta_{0}|-2|Y||\xi_{0}|\] \[\geq 2(|X|-c)|\eta_{0}|-2|Y||\xi_{0}|\] \[>2(|X|-|Y|-c)|\eta_{0}|\] \[=(|X|-|Y|-c)(|\eta_{0}|+|\eta_{0}|)\] \[>(|X|-|Y|-c)(|\xi_{0}|+|\eta_{0}|)\] \[\geq(|X|-|Y|-c)|\mathbf{w}_{0}|\] \[\geq(2\lambda+2)|\mathbf{w}_{0}|.\] Combining Theorem 5.10 with lemma 5.12, we find the following: **Corollary 5.13**.: _If all points in the non-wandering set \(\Omega(f)\), if not empty, satisfy the condition_ \[4+c\leq|X|-|Y|,|Z|-|W|, \tag{5.19}\] _then \(\Omega(F)\) and so \(\Omega(f)\) is uniformly hyperbolic._ ## 6 Proof of Main theorems ### The case with four symbols in the anti-integrable limit _Topological horseshoe:_ In this section, we provide a sufficient condition for topological horseshoe and uniform hyperbolicity for the case (A), _i.e._, the case around the anti-integrable limit with four symbols. First, we consider the situation in the original coordinate \((x,y,z,w)\). Using the relation \(f^{-1}(f(V_{f}))=V_{f}\), we find that the region \(f(V_{f})\) is expressed as \[\left\{\begin{array}{l}|z|\leq r,\\ |w|\leq r,\\ |a_{0}-z^{2}-x+c(z-w)|\leq r,\\ |a_{1}-w^{2}-y-c(z-w)|\leq r.\end{array}\right. \tag{6.1}\] We can re-express \(f(V_{f})\) as \[f(V_{f})= \{(x,y,z,w)\,|\,|z|\leq r,|w|\leq r,x=-z^{2}+cz+a_{0}+\alpha \tag{6.2}\] \[\mbox{where }|\alpha|\leq(c+1)r,y=-w^{2}+cw+a_{1}+\beta\mbox{ where }|\beta|\leq(c+1)r\}.\] In this new expression, we have got rid of \(w\)-dependence of \(x\) or \(z\), as well as the \(z\)-dependence of \(y\) or \(w\). Therefore the \((x,z)\)-plane is now decoupled from the \((y,w)\)-plane. It is therefore valid to consider parabolas in the \((x,z)\)-plane and the \((y,w)\)-plane, separately. Let \(\Gamma_{x}^{\rm max}\) be the parabola with the largest \(x\) (rightmost in Fig. 10), \(\Gamma_{x}^{\rm min}\) be the one with the smallest \(x\) (leftmost in Fig. 10): \[\Gamma_{x}^{\rm max}:x= \!\!\!-z^{2}+cz+a_{0}+(c+1)r, \tag{6.3}\] \[\Gamma_{x}^{\rm min}:x \!\!\!= \!\!\!-z^{2}+cz+a_{0}-(c+1)r. \tag{6.4}\] Furthermore, let \(S_{x}^{+}=\{(x,y,z,w)\in V\,|\,x=r\}\) and \(S_{x}^{-}=\{(x,y,z,w)\in V\,|\,x=-r\}\), respectively (see Fig. 10). For the 2-dimensional Henon map \(f\), the horseshoe condition is given by the requirement that \(f\cap f(V)\) is decomposed into two disjoint regions. Here the region \(V\) is a region that contains the non-wandering set \(\Omega(f)\). Here we apply the same condition for the \((x,z)\)- and \((y,w)\)-planes, respectively. First we consider the condition for the \((x,z)\)-plane. In order for the horseshoe condition to be satisfied in the \((x,z)\)-plane, as shown in Fig. 10, the following should hold: 1) \(\Gamma_{x}^{\rm min}\) intersects \(S_{x}^{+}\) at two points. 2) \(\Gamma_{x}^{\rm max}\) intersects \(S_{x}^{-}\) at two points. The first condition holds if \[\frac{1}{4}c^{2}+a_{0}-(c+1)r>r \tag{6.5}\] is satisfied. 
Since it is assumed that \(c>0\), the second condition is equivalent to the condition requiring that \(x(z=r)\leq-r\) and \(x(z=-r)\leq-r\). The former condition is written as \[-r^{2}+cr+a_{0}+(c+1)r\leq-r. \tag{6.6}\] The latter condition automatically holds if the former one is fulfilled. Figure 10: \(\Gamma_{x}\) in the \((x,z)\)-plane. The red and blue curves represent \(\Gamma_{x}^{\rm max}\) and \(\Gamma_{x}^{\rm min}\), respectively. The green regions show \(f(V_{f})\cap V_{f}\). The argument for the \((y,w)\)-plane is developed in the same way, again based on (6.1), which leads to the conditions \[\frac{1}{4}c^{2}+a_{1}-(c+1)r>r, \tag{6.7}\] \[-r^{2}+cr+a_{1}+(c+1)r\leq-r. \tag{6.8}\] Due to the symmetry, the inverse map \(f^{-1}\) is obtained by swapping \((x,y)\leftrightarrow(z,w)\) in the map \(f\), thus the same conditions follow for \(f^{-1}\). Thus, the conditions (6.5), (6.6), (6.7), and (6.8) lead to a topological horseshoe. The proof of Theorem 3.1 A-1) is done. #### Uniform hyperbolicity: Next, we consider a sufficient condition for uniform hyperbolicity. From section 5.2, to obtain uniform hyperbolicity it is sufficient to show that any point \((x,y,z,w)\in f(V_{f})\cap V_{f}\) satisfies the condition (5.14) since \(\Omega(f)\subset f(V_{f})\cap V_{f}\) holds. Suppose that \((x,y,z,w)\in f(V_{f})\cap V_{f}\), and the conditions (6.5) and (6.6) are satisfied. Let \(z_{-}^{*}\) and \(z_{+}^{*}\) be the \(z\) coordinates of the intersection points between \(\Gamma_{\min}\) and \(S_{x}^{+}\) where \(z_{-}^{*}\leq z_{+}^{*}\) is assumed (see Fig. 10). We can explicitly obtain as \[z_{\pm}^{*}=\frac{c\pm\sqrt{c^{2}+4(a_{0}-(c+2)r)}}{2}. \tag{6.9}\] If \(z_{-}^{*}<0\), the following holds: \[|z|\geq\,\min(|z_{-}^{*}|,|z_{+}^{*}|)=|z_{-}^{*}|=-z_{-}^{*}.\] Here \(c>0\) is used to show the first inequality. Hence, if the condition \[-z_{-}^{*}>4+c \tag{6.10}\] is satisfied, then \(|z|>4+c\) holds for all the points in \(f(V)\cap V\). Note that the condition (6.10) automatically ensures the condition \(z_{-}^{*}<0\) for \(c>0\). We can develop the same argument for the \((y,w)\)-plane, and find that the following is sufficient to ensure that the condition \(|w|>4+c\) holds for all the points within \(f(V)\cap V\): \[\frac{-c+\sqrt{c^{2}+4(a_{1}-(c+2)r)}}{2}>4+c. \tag{6.11}\] In a similar manner, the argument for the inverse map \(f^{-1}\) provides a sufficient condition to satisfy \(|x|,|y|>4+c\). Since the inverse map \(f^{-1}\) is given by swapping the variables as \((x,y)\leftrightarrow(z,w)\), the resulting conditions are the same as (6.10) and (6.11). Thus, in addition to the conditions (6.5), (6.6), (6.7), and (6.8) the conditions (6.10) and (6.11) lead to a sufficient condition for the non-wandering set \(\Omega(f)\) to be uniformly hyperbolic. The proof of Theorem 3.1 A-2) is complete. ### The case with two symbols in the anti-integrable limit Topological horseshoe:We first examine the existence of topological horseshoe. From the definition (3.16) of \(V_{F}\) and the mapping rule (2.4), we have \[F(V_{F})\cap V_{F}=\{(X,Y,Z,W)\,|\,|X|,|Y|,|Z|,|W|\leq R,\] \[X=-Z^{2}-W^{2}+A_{0}+s^{\prime},|s^{\prime}|\leq R,\] \[Y=A_{1}+2(c-Z)W+s,|s|\leq R\}. \tag{6.12}\] First, consider the projection of \(F(V_{F})\cap V_{F}\) onto the \((Y,W)\)-plane. 
Let \[\Gamma_{Y}:Y=A_{1}-2(c-Z)W+s, \tag{6.13}\] be a set of straight lines in the \((Y,W)\)-plane parametrized by \(Z\) and \(s\), where \(|s|\leq R\), and let \[\Gamma_{Y}^{\rm max}:Y=A_{1}+2(c-R)W-R, \tag{6.14}\] \[\Gamma_{Y}^{\rm min}:Y=A_{1}+2(c-R)W+R, \tag{6.15}\] be the upper and lower straight members of \(\Gamma_{Y}\). \(\Gamma_{Y}^{\rm max}\) is attained at \(Z=R\) and \(s=-R\), and \(\Gamma_{Y}^{\rm min}\) is attained at \(Z=R\) and \(s=R\) (see Fig 11(a)). Since \(c>R\) and \(|Z|\leq R\), we know that the slope of \(\Gamma_{Y}\) is always postive. Solving for \(W\), we get \[W=\frac{Y-A_{1}-s}{2(c-Z)}. \tag{6.16}\] The maximum and minimum values of \(W\), denoted by \(W_{\rm max}\) and \(W_{\rm min}\) respectively, are given as \[W_{\rm max}=\frac{2R-A_{1}}{2(c-R)},\quad\mbox{attained at $Y=R,Z=R$ and $s=-R$}, \tag{6.17}\] \[W_{\rm min}=\frac{-2R-A_{1}}{2(c-R)},\quad\mbox{attained at $Y=-R,Z=R$ and $s=R$}. \tag{6.18}\] Since we have imposed the condition (3.19), we see that the projection of \(F(V_{F})\) intersects \(V_{F}\) completely in the \(Y\)-direction, and the width of \(F(V_{F})\cap V_{F}\), as measured in the \(W\)-direction, is strictly less than \(2R\) (see Fig 11(a)). Next, consider the projection of \(F(V_{F})\cap V_{F}\) onto the \((X,Z)\)-plane. Let \[\Gamma_{X}:X=-Z^{2}-W^{2}+A_{0}+s^{\prime} \tag{6.19}\] be a family of parabolas in the \((X,Z)\)-plane parametrized by \(W\) and \(s^{\prime}\), where \(|W|\leq W^{*}\) and \(|s^{\prime}|\leq R\). Let \[\Gamma_{X}^{\rm max}:X=-Z^{2}+A_{0}+R, \tag{6.20}\] \[\Gamma_{X}^{\rm min}:X=-Z^{2}+A_{0}-(W^{*})^{2}-R, \tag{6.21}\] be the rightmost and leftmost members of \(\Gamma_{X}\). Note that \(\Gamma_{X}^{\rm max}\) is attained at \(W=0\) and \(s^{\prime}=R\), and \(\Gamma_{X}^{\rm min}\) at \(W=W^{*}\) and \(s^{\prime}=-R\) (see Fig. 11(b)). For \(\Gamma_{X}^{\rm max}\), notice that when \(Z=\pm R\), we have \[X=-R^{2}+A_{0}+R=-R. \tag{6.22}\] Therefore, \(\Gamma_{X}^{\rm max}\) intersects with boundary of \(V_{F}\) at its two corner points, namely, \(A=(-R,R)\) and \(B=(-R,-R)\) in Fig. 11(b). In the meantime, for \(\Gamma_{X}^{\rm min}\), we examine the location of its vertex, denoted by V in Fig. 11(b). The vertex is attained by setting \(Z=0\), which leads to \[X_{V}=A_{0}-(W^{*})^{2}-R.\] Since it is imposed in (3.18) that \[A_{0}-(W^{*})^{2}-R>R,\] we obtain \(X_{V}>R\), i.e., the vertex of \(\Gamma_{X}^{\rm min}\) is located on the right side of \((R,0)\), as illustrated in Fig. 11(b). As a result, the region in between \(\Gamma_{X}^{\rm max}\) and \(\Gamma_{X}^{\rm min}\) gives rise to a topological binary horseshoe in the \((X,Z)\)-plane. Thus, we know that the non-wandering set \(\Omega(F)\) is non-empty and is at least semi-conjugate to a full shift with two symbols. _Uniform hyperbolicity:_ Next, we will show uniform hyperbolicity on \(\Omega(F)\). From section 5.3, we already know a sufficient condition for uniform hyperbolicity in Corollary 5.13. Here we show that this is indeed the case for points in \(F(V_{F})\cap V_{F}\). Notice that for any point in \(F(V_{F})\cap V_{F}\), we have \[|Z|\geq Z^{*}, \tag{6.23}\] where \(Z^{*}\) is the \(Z\)-coordinate of the point \(C\) in Fig 11(b). Thus, \[|Z|-|W|\geq Z^{*}-|W|\geq Z^{*}-W^{*} \tag{6.24}\] holds. Since it is imposed in (3.20) that \(Z^{*}-W^{*}\geq 4+c\), we immediately obtain \[|Z|-|W|\geq 4+c. 
\tag{6.25}\] Due to the symmetry of the mapping equations, \(F^{-1}\) can be obtained from \(F\) by swapping \((X,Z)\) with \((Y,W)\), thus we obtain, \[|X|-|Y|\geq 4+c \tag{6.26}\] as well. The uniform hyperbolicity on \(\Omega(F)\) thus follows. Finally, we check that the parameters leading to the anti-integrable limit satisfy the sufficient condition obtained above for topological horseshoe and uniform hyperbolicity. The case (A) is given by taking the limit of \(a=a_{0}=a_{1}\to\infty\). This limit implies that \(r\to 2\sqrt{2a}\), so it turns out that the conditions in A-2) and A-3) in Theorem 3.1 hold. For the case (B), the anti-integrable limit is obtained by taking the limit of \(a=a_{0}=a_{1}\to\infty\) and \(\gamma\to\infty\) with \(c=\gamma\sqrt{a}\) being fixed. In this case, \(R\to\sqrt{a}\), \(W^{*}\to 0\) and \(Z^{*}=\sqrt{a}\) follow, and the conditions in B-2) and B-3) in Theorem 3.2 are Figure 12: For the anti-integrable limit with four symbols, the region satisfying the topological horseshoe is shown in light orange, and the region satisfying both topological horseshoe and uniform hyperbolicity is shown in orange. For the the anti-integrable limit with two symbols, the region satisfying topological horseshoe is shown in light blue, and the region satisfying both topological horseshoe and uniform hyperbolicity is shown in blue. \(a=a_{0}=a_{1}\) are taken. satisfied. Figure 12 illustrates the parameter regions in which topological horseshoe and uniform hyperbolicity hold. ## 7 Summary We have derived a sufficient condition for topological horseshoe and uniform hyperbolicity of the coupled Henon map around the anti-integrable limits. The coupled Henon map introduced here has at least two types of anti-integrable limits, which were obtained by taking appropriate limits on the nonlinear parameters \(a_{0}\), \(a_{1}\) and a coupling strength \(c\). The strategy of specifying the existence domain of the non-wandering set, and showing topological horseshoe and uniform hyperbolicity is a straightforward generalization of the approach taken in Ref. [6]. It is specific to higher dimensional maps to have different types of horseshoe, and it does not happen in 2-dimensional maps. In a subsequent paper [33], we will further introduce topologically different types of horseshoe that are impossible in two dimensions by studying a family of Henon-type mappings. Since the conditions obtained are sufficient ones, as in the case of the 2-dimensional Henon map [6], one can expect that the parameter domain with topological horseshoe and uniform hyperbolicity must be further extended, possibly to the situation where an analog of the first tangency happens [7, 43]. A plausible approach to this problem would be to use a computer-assisted proof developed in Refs. [8, 9]. Furthermore, it is interesting to investigate the transition between the two types of horseshoes found in the present work. Such a transition, if it exists, will induce a kind of bifurcation in higher dimensions. Another question to be addressed in the future is whether other types of horseshoes exist in the parameter space. We have studied here only in the symmetric situation \(a_{0}=a_{1}\), but it is by no means obvious whether the situation associated with three symbols appears or not. If this is the case, this also provides a new type of horseshoe, which appears only in higher dimensional maps. ## Acknowledgement J.L. and A.S. 
acknowledge financial support from Japan Society for the Promotion of Science (JSPS) through JSPS Postdoctoral Fellowship for Research in Japan (Standard). This work has been supported by JSPS KAKENHI Grant No. 17K05583, and also by JST, the establishment of university fellowships towards the creation of science technology innovation, Grant Number JPMJFS2139.
2305.01681
The dust enrichment of early galaxies in the JWST and ALMA era
Recent observations with the James Webb Space Telescope are yielding tantalizing hints of an early population of massive, bright galaxies at $z > 10$, with Atacama Large Millimeter Array (ALMA) observations indicating significant dust masses as early as $z\sim 7$. To understand the implications of these observations, we use the DELPHI semi-analytic model that jointly tracks the assembly of dark matter halos and their baryons, including the key processes of dust enrichment. Our model employs only two redshift- and mass-independent free parameters (the maximum star-formation efficiency and the fraction of supernova energy that couples to gas) that are tuned against all available galaxy data at $z \sim 5-9$ before it is used to make predictions up to $z \sim 20$. Our key results are: (i) the model under-predicts the observed ultraviolet luminosity function (UV LF) at $z > 12$; observations at $z>16$ lie close to, or even above, a "maximal" model where all available gas is turned into stars; (ii) UV selection would miss 34\% of the star formation rate density at $z \sim 5$, decreasing to 17\% by $z \sim 10$ for bright galaxies with $\rm{M_{UV}} < -19$; (iii) the dust mass ($M_d$) evolves with the stellar mass ($M_*$) and redshift as $\log(M_d) = 1.194\log(M_*) + 0.0975z - 5.433$; (iv) the dust temperature increases with stellar mass, ranging between $30-33$ K for $M_* \sim 10^{9-11}M_\odot$ galaxies at $z \sim 7$. Finally, we predict the far infrared LF at $z \sim 5-20$, testable with ALMA observations, and caution that spectroscopic redshifts and dust masses must be pinned down before invoking unphysical extrema in galaxy formation models.
Valentin Mauerhofer, Pratika Dayal
2023-05-02T18:00:03Z
http://arxiv.org/abs/2305.01681v2
# The dust enrichment and detectability of early galaxies in the JWST and ALMA era ###### Abstract Recent observations with the James Webb Space Telescope (JWST) are yielding tantalizing hints of an early population of massive, bright galaxies at \(z>10\), with Atacama Large Millimeter Array (ALMA) observations indicating significant dust masses in place as early as \(z\sim 7\). To understand the implications of these observations, we use the delphi semi-analytic model that jointly tracks the assembly of dark matter halos and their constituent baryons, including the key processes of dust enrichment. Our model employs only two redshift- and mass-independent free parameters that are tuned against all available galaxy data at \(z\sim 5-9\) before it is used to make predictions up to \(z\sim 20\). Our key results are: _(i)_ the model progressively under-predicts the observed ultraviolet luminosity function (UV LF) at \(z>12\); observations at \(z>16\) lie close to, or even above, a "maximal" model where all available gas is turned into stars; _(ii)_ UV selection would miss 34% of the star formation rate density at \(z\sim 5\), decreasing to 17% by \(z\sim 10\) for bright galaxies with \(\rm M_{UV}<-19\); _(iii)_ the dust mass (\(M_{d}\)) evolves with the stellar mass (\(M_{*}\)) and redshift as \(\log(M_{d})=1.194\log(M_{*})+0.0975z-5.433\); _(iv)_ the escape fraction of UV photons (\(f_{\rm esc}^{\rm UV}\)) decreases with increasing mass and star formation rate. At \(z\sim 7\), \(f_{\rm esc}^{\rm esc}\sim 0.8\) (0.1) for \(M_{*}\sim 10^{9}\) (\(10^{11}\))\(M_{\odot}\) galaxies; _(v)_ the dust temperature increases with stellar mass, ranging between \(30-33\) K for \(M_{*}\sim 10^{9-11}M_{\odot}\) galaxies at \(z\sim 7\). Finally, we predict the far infrared (FIR) LF at \(z\sim 5-20\), testable with ALMA observations, and caution that spectroscopic redshifts and dust masses must be pinned down before invoking unphysical extrema in galaxy formation models. keywords: galaxies : high-redshift, luminosity function, mass function, formation, evolution - ISM: dust, extinction ## 1 Introduction The first billion years after the Big Bang saw the emergence of the first galaxies, whose stellar populations created the first heavy elements and dust (for a review see e.g. Maiolino & Mannucci, 2019) as well as the first hydrogen-ionizing photons that started the process of cosmic reionization (for a review see e.g. Dayal & Ferrara, 2018). The emergence of these first systems and their large-scale effects remain key outstanding questions in our cosmic timeline. Over the past decade, tremendous efforts have been made to build a global picture of galaxy formation and evolution at high-redshifts, through a combination of multi-wavelength observations using facilities such as the Hubble Space Telescope (HST), the Very Large Telescope (VLT) and the Subaru Telescope to name a few (for reviews see e.g. Dunlop et al., 2013; Stark, 2016). More recently, the Atacama Large Millimetre Array (ALMA) has started providing unprecedented views of the dust content of early galaxies at redshifts \(z\sim 4.4-7.5\) through the ALMA Large Program to INvestigate C+ at Early Times (ALPINE; Dessauges-Zavadsky et al., 2020; Bethermin et al., 2020) and the ALMA Reionization Epoch Bright Line Emission Survey (REBELS; Bouwens et al., 2022; Inami et al., 2022). 
A key issue in determining the dust masses of early galaxies is that the observed far Infrared (FIR) continuum emission is characterized by key two quantities - the dust temperature (\(T_{d}\)) and the dust mass (\(M_{d}\)). Unless multi-band dust measurements are available (see e.g. Faisst et al., 2020; Bakx et al., 2021), these two quantities are degenerate, requiring an assumption on the dust temperature in order to infer the associated dust mass. Despite these caveats, a puzzle is the extremely high dust-to-stellar-mass ratios, ranging between \(0.012-3\%\), obtained for star forming galaxies with stellar masses \(M_{*}\sim 10^{8.3-10.5}\) M\({}_{\odot}\), at \(z\lower 2.15pt\hbox{$\;\buildrel>\over{\sim}\;$}7\)(e.g. Watson et al., 2015; Laporte et al., 2017; Hashimoto et al., 2019; Bakx et al., 2020; Reuter et al., 2020; Bouwens et al., 2022). Further, a key property of dust is its ability to absorb (non-ionizing) ultra-violet (UV) photons that are re-emitted in the FIR (see e.g. Dayal et al., 2010). ALMA REBELS observations have recently allowed such FIR luminosity functions (LFs) to be mapped out at \(z\sim 7\)(Barrufet et al., 2023). Furthermore, the James Webb Space Telescope (JWST) has recently started providing ground-breaking views of galaxy formation at \(z\sim 9-18\), allowing us to reach this last unknown territory of galaxy formation (Adams et al., 2023; Atek et al., 2023; Bouwens et al., 2023; Bradley et al., 2022; Naidu et al., 2022). This has led to estimates of the global UV LF up to \(z\sim 18\) although caution must be exerted when using the LF at \(z\lower 2.15pt\hbox{$\;\buildrel>\over{\sim}\;$}12\) where the redshift and nature of the sources remains debated (Adams et al., 2023; Naidu et al., 2022). Surprisingly, the UV LF seems to show almost no evolution at the bright end (\(\rm M_{UV}\lower 2.15pt\hbox{$\;\buildrel<\over{\sim}\;$}-22\)) at \(z\sim 4-13\)(e.g. Bowler et al., 2020; Harikane et al., 2020) showing a possible excess in number density when compared to a Schechter function (e.g. Bowler et al., 2015; Ono et al., 2018). This has led a number of explanations including a co-evolution of halo mass function and the dust content of galaxies (Ferrara et al., 2022), UV contribution from black-hole accretion powered active galactic nuclei (AGN; Ono et al., 2018; Piana et al., 2022; Pacucci et al., 2022), observational biases causing us to observe only exceptionally starbursting galaxies (Mirocha and Furlanetto, 2023) or an initial mass function (IMF) that evolves with redshift (Pacucci et al., 2022; Yung et al., 2023). Finally, the JWST has also allowed the stellar mass function (SMF) to be probed out to \(z\sim 10\)(e.g. Santini et al., 2023) despite caveats on the assumed star formation history that can lead to a significant variations in the inferred stellar mass (e.g. Topping et al., 2022). In view of these recent advances, a number of models have been used to explore the physical mechanisms of dust production and evolution as well as the effects of dust on early galaxy observables. The approaches adopted range from hydrodynamical simulations that model small-scales processes such as dust growth, dust destruction, grain size distribution and the geometry of dust and stars (e.g. Bekki, 2015; Aoyama et al., 2017; McKinnon et al., 2018; Trebitsch et al., 2023) to simulations that have been post-processed with dust models to compute the dust content and attenuation (e.g. 
Dayal et al., 2011; Mancini et al., 2015; Narayanan et al., 2018; Wilkins et al., 2018; Li et al., 2019; Ma et al., 2019; Graziani et al., 2020; Vogelsberger et al., 2020; Vijayan et al., 2023) to semi-analytic models (e.g. Popping et al., 2017; Vijayan et al., 2019; Triani et al., 2020; Dayal et al., 2022) and analytic formalisms (e.g. Ferrara et al., 2022). In this work we make use of the broad mass range and flexibility offered by the Delphi semi-analytic model (Dayal et al., 2014, 2022) to study the dust content of high-redshift galaxies, including the effect of dust on their visibility and its detectability in the FIR. A key strength of this model is that it only has two mass- and redshift-independent free parameters and is base-lined against all available data-sets at \(z\sim 5-9\) before its predictions are extended to even higher redshifts. Throughout this paper, we adopt a \(\Lambda\)CDM model with dark energy, dark matter and baryonic densities in units of the critical density as \(\Omega_{\Lambda}=0.691\), \(\Omega_{m}=0.308\) and \(\Omega_{b}=0.049\), respectively, a Hubble constant \(H_{0}=100\,h\,{\rm km\,s^{-1}\,Mpc^{-1}}\) with \(h=0.67\), spectral index \(n=0.96\) and normalisation \(\sigma_{8}=0.81\)(Planck Collaboration et al., 2016). Additionally, we use the stellar library BPASSv2.2.1 (Eldridge et al., 2008; Stanway et al., 2016). This library assumes a Kroupa IMF (Kroupa, 2001), with a slope of \(-1.3\) between 0.1 and 0.5 M\({}_{\odot}\) and of \(-2.35\) between 0.5 and 100 M\({}_{\odot}\). Finally, we use comoving units and magnitudes in the standard AB system (Oke and Gunn, 1983) throughout the paper. The paper is structured as follows: in Sec. 2 we detail the delphi model, including a description of the halo merger tree, the computation of star-formation, supernovae feedback, dust evolution and the associated luminosities. In Sec. 3 we present the results of our model in terms of UV observables, such as the LF and the cosmic UV density, as well as the mass-luminosity relation and the stellar mass function. In Sec. 4, we detail the derived dust properties of high-redshift galaxies, including the dust mass, dust temperature and UV escape fraction, along with analytical relations between those quantities and the stellar mass and star-formation rate. In Sec. 5 we discuss the observability of the infrared part of high-redshift galaxy spectra, and compare our results with far-infrared (FIR) LFs from the literature. Finally, we summarize and discuss our results in Sec. 6. ## 2 Theoretical model In this section, we briefly describe the theoretical model used to study the assembly of dark matter halos and their baryonic components at \(z\sim 4.5-20\); interested readers are referred to our previous papers (Dayal et al., 2014, 2022) for complete details. We start with a description of the merger tree (Sec. 2.1) before discussing the star formation prescription and the associated supernova (SN) feedback (Sec. 2.2), the dust enrichment of early galaxies (Sec. 2.3) and the resulting luminosities in both the UV and IR (Sec. 2.4). ### Halo merger tree and gas accretion Starting at \(z=4.5\) we build merger trees for 600 galaxies, up to \(z\sim 40\), uniformly distributed in terms of the halo mass (in log space) between \(\log(M_{h}/\rm M_{\odot})=8-14\) using the binary merger tree algorithm from Parkinson et al. (2008). 
We impose a mass resolution of \(10^{8}\rm M_{\odot}\) and use a constant redshift-step of 30 Myr for the merger tree so that all Type II SN (SNII) explode within a single redshift-step, preventing the need for delayed SN feedback. Each halo is assigned a number density by matching to the Sheth-Tormen (Sheth and Tormen, 1999) halo mass function (HMF) at \(z=4.5\) and this number density is propagated throughout its merger tree. We have confirmed that the resulting HMFs are in accord with the Sheth-Tormen HMFs at all higher redshifts, up to \(z\sim 20\). The first progenitors ("starting leaves") of any merger tree are assigned an initial gas mass that is linked to the halo mass through the cosmological ratio such that \(M_{\rm g}^{\rm i}=(\Omega_{b}/\Omega_{m})M_{h}\). At every further redshift-step, the total halo mass is determined by the sum of the dark matter mass brought in by mergers and smooth-accretion from the intergalactic medium (IGM). While we assume the accreted gas mass to be proportional to the accreted dark matter mass, the merged gas mass is determined by the gas mass left in the merging progenitors after star formation and the associated SNII feedback. ### Star formation and supernova feedback We start by computing the newly formed stellar mass in a given redshift-step as \[M_{*}(z)=f_{*}^{\rm eff}M_{\rm g}^{\rm i}(z), \tag{1}\] where \(f_{*}^{\rm eff}\) is the effective star formation efficiency and \(M_{\rm g}^{\rm i}\) is the (initial) gas mass at the start of the redshift-step. We assume this mass to have formed uniformly over \(t_{*}=30\) Myr to obtain the star formation rate (SFR) \(\psi=M_{*}(z)/t_{*}\). The \(f_{*}^{\rm eff}\) value for any halo is the minimum between the star formation efficiency that produces enough SNII energy to unbind the remainder of the gas (\(f_{*}^{\rm s}\)) and a maximum star formation efficiency parameter (\(f_{*}\)) i.e. \(f_{*}^{\rm eff}=\min(f_{*},f_{*}^{\rm s})\). While galaxies with \(f_{*}^{\rm eff}=f_{*}\) are efficient star-formers, those with \(f_{*}^{\rm eff}=f_{*}^{\rm s}\) comprise "feedback-limited" systems that can unbind all of their gas content due to SN feedback. To compute \(f_{*}^{\rm e}\), we start by calculating the energy \(E_{\rm ej}\) required to unbind the gas left after star formation \[E_{\rm ej}=(M_{\rm g}^{\rm i}-M_{*})v_{c}^{2}, \tag{2}\] where \(v_{c}\) is the halo rotational velocity. This is compared to the SNII energy \[E_{\rm SN}=f_{w}v_{s}^{2}M_{*}, \tag{3}\] where \(f_{w}\) is the fraction of SNII energy coupling to the gas and \(v_{s}^{2}=\nu E_{51}=747\,{\rm km\,s^{-1}}\). Here \(\nu=0.011\) is SNII rate for our chosen Kroupa IMF and we assume each SNII to produce \(E_{51}=10^{51}\)erg of energy. The parameter \(f_{*}^{\rm e}\) is the star-formation efficiency that would result in an equality between \(E_{\rm SN}\) and \(E_{\rm ej}\), i.e., \[f_{*}^{\rm ej}=\frac{v_{c}^{2}}{v_{c}^{2}+f_{w}v_{s}^{2}}. \tag{4}\] With this formalism, the ejected gas mass at any step can be calculated as \[M_{\rm ej}=\frac{E_{\rm SN}}{E_{\rm ej}}(M_{\rm g}^{\rm i}-M_{*})=\frac{f_{w} v_{s}^{2}}{v_{c}^{2}}M_{*}. \tag{5}\] We note that while \(f_{w}\) essentially determines the faint-end of the UV LF and the low-mass end of the SMF, \(f_{*}\) is crucial in determining the high-mass end of the SMF and the bright-end of the UV LF. However, the bright end of the UV LF is also shaped by the presence of dust as detailed in the next section. 
Simultaneously matching to the observed UV LF and SMF at \(z\sim 5-9\), including the impact of dust attenuation, requires \(f_{*}=15\%\) and \(f_{w}=6\%\) - these are the free parameter values used in the _fiducial_ model. ### Dust modeling We briefly describe our dust model here and interested readers are referred to Dayal et al. (2022) for complete details. We use a coupled set of equations to model the time-evolution of the gas-phase metal (\(M_{Z}\)) and dust masses (\(M_{d}\)), assuming perfect mixing of gas, metals and dust, such that \[\frac{{\rm d}M_{Z}}{{\rm d}t}=\dot{M}_{Z}^{\rm pro}-\dot{M}_{Z}^{\rm ge}-\dot {M}_{Z}^{\rm sat}-\dot{M}_{d}^{\rm gro}+\dot{M}_{d}^{\rm des} \tag{6}\] \[\frac{{\rm d}M_{d}}{{\rm d}t}=\dot{M}_{d}^{\rm pro}-\dot{M}_{d}^{\rm ge}-\dot{M }_{d}^{\rm sat}+\dot{M}_{d}^{\rm gro}-\dot{M}_{d}^{\rm des}. \tag{7}\] Starting with metals, the different terms represent the rates of metal production (\(\dot{M}_{Z}^{\rm pro}\)) for which we use the the mass- and metallicity-dependent stellar yields between \(1-50\) M\({}_{\odot}\)(Kobayashi et al., 2020), ejection in SNII-driven winds (\(\dot{M}_{Z}^{\rm ge}\)), astration into star formation (\(\dot{M}_{Z}^{\rm sat}\)), metals lost into dust growth in the interstellar medium (ISM; \(\dot{M}_{d}^{\rm gro}\)) and the metals returned to the ISM due to dust destruction (\(\dot{M}_{d}^{\rm des}\)). As for dust, we assume that it is mostly produced by SNII, with each SNII producing \(0.5\)M\({}_{\odot}\) of dust (Dayal et al., 2022), with asymptotic giant branch stars (AGBs) having a negligible contribution (e.g. Dayal et al., 2010; Lesniewska and Michalowski, 2019). The different terms represent the rates of dust production (\(\dot{M}_{d}^{\rm gro}\)) in SNII, dust destruction in SNII shocks (\(\dot{M}_{d}^{\rm des}\)), ejection in winds (\(\dot{M}_{d}^{\rm eq}\)), loss in astration (\(\dot{M}_{d}^{\rm sat}\)) and increase due to ISM grain growth. Assuming perfect mixing, the gas and metals lost in outflows and astration are proportional to the gas mass lost to these processes. Finally, we model ISM grain growth as (Dwek, 1998) \[\dot{M}_{d}^{\rm gro}=X_{c}\left(1-\frac{M_{d}}{M_{d}+M_{Z}}\right)\frac{M_{d} }{\tau_{\rm acc}}, \tag{8}\] where \(\tau_{\rm acc}=\tau_{0}(Z/Z_{\odot})^{-1}\) and \(X_{c}\) is the fraction of cold ISM gas where such grain growth can take place; we use a value of \(X_{c}=0.5\) based on high-resolution simulations of early galaxies (e.g. Pallottini et al., 2019). Finally, \(\tau_{0}\) is the dust accretion timescale and \(Z/Z_{\odot}\) is the gas-phase metallicity in solar units. Since \(\tau_{0}\) is relatively poorly known, and changing its value from 30 to 0.3 Myr only changes the dust mass by a factor two (Dayal et al., 2022), we adopt \(\tau_{0}=30\) Myr as our _fiducial_ dust grain-growth timescale. ### The emerging UV and IR luminosities We start by calculating the intrinsic luminosity (\(L_{\rm UV}^{\rm int}\)) at rest-frame 1500A assuming a continuous star-formation over the 30 Myr redshift-steps of the merger tree and using the stellar metallicity of each stellar population as inputs for the BPASS (v2.2.1) stellar population synthesis model (Eldridge et al., 2008; Stanway et al., 2016). We then calculate the dust-attenuated "observed" UV luminosity (\(L_{\rm UV}^{\rm obs}\)) as follows (see also Dayal et al., 2022): we assume carbonaceous/graphite dust with a single grain size of \(a=0.05\mu m\) and a density \(s=2.25{\rm g\,cm^{-3}}\)(Todini and Ferrara, 2001; Nozawa et al., 2003). 
### The emerging UV and IR luminosities

We start by calculating the intrinsic luminosity (\(L_{\rm UV}^{\rm int}\)) at rest-frame 1500\(\rm\AA\), assuming continuous star formation over the 30 Myr redshift-steps of the merger tree and using the stellar metallicity of each stellar population as inputs for the BPASS (v2.2.1) stellar population synthesis model (Eldridge et al., 2008; Stanway et al., 2016). We then calculate the dust-attenuated "observed" UV luminosity (\(L_{\rm UV}^{\rm obs}\)) as follows (see also Dayal et al., 2022): we assume carbonaceous/graphite dust with a single grain size of \(a=0.05\,\mu{\rm m}\) and a material density \(s=2.25\,{\rm g\,cm^{-3}}\) (Todini and Ferrara, 2001; Nozawa et al., 2003). We model the dust distribution as a sphere of radius \(r_{d}\) equal to the gas radius, which is calculated as \(r_{\rm gas}=4.5\,\lambda\,r_{\rm vir}\) (Ferrara et al., 2000). Here, \(r_{\rm vir}\) is the halo virial radius and the spin parameter is assumed to have an average value of \(\lambda=0.04\) (Dayal and Ferrara, 2018). Recent ALMA observations (Fujimoto et al., 2020; Fudamoto et al., 2022) have shown a gas radius that remains constant between \(z\sim 4-7\) for galaxies at a fixed UV luminosity. This is interpreted as gas occupying a larger fraction of the halo volume with increasing redshift. We include this effect by calculating the gas radius as

\[r_{d}=r_{\rm gas}=4.5\times 0.04\left(\frac{1+z}{7}\right)r_{\rm vir}. \tag{9}\]

This results in a constant radius for a fixed halo mass as a function of redshift. In this configuration, the optical depth of the dust is \(\tau_{d}=3M_{d}/(4\pi r_{d}^{2}as)\). The corresponding escape fraction of UV continuum photons is

\[f_{\rm esc}^{\rm UV}=\frac{1-e^{-\tau_{d}}}{\tau_{d}}. \tag{10}\]

The dust-attenuated UV luminosity is obtained by multiplying the intrinsic UV luminosity by this escape fraction:

\[L_{\rm UV}^{\rm obs}(\lambda)=f_{\rm esc}^{\rm UV}L_{\rm UV}^{\rm int}(\lambda). \tag{11}\]

Concerning the infrared emission, \(L_{\rm IR}\), we assume an energy balance between the non-ionizing UV radiation (rest-frame 912-4000\(\rm\AA\)) absorbed by dust and the subsequent infrared emission (see e.g. Dayal et al., 2010). To compute \(L_{\rm IR}\), we integrate the UV spectrum of each source over the wavelength range 912-4000\(\rm\AA\), which yields the total IR luminosity

\[L_{\rm IR}=(1-f_{\rm esc}^{\rm UV})\int_{912}^{4000}L_{\rm UV}^{\rm int}(\lambda){\rm d}\lambda. \tag{12}\]

Finally, the dust temperature, corresponding to the peak of the dust emission and assuming black-body emission, is computed as (Dayal et al., 2010):

\[T_{d}=6.73\left(\frac{L_{\rm IR}/L_{\odot}}{M_{d}/M_{\odot}}\right)^{1/6}{\rm K}. \tag{13}\]
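Under the assumptions above, the attenuation pipeline of Eqs. 9-13 reduces to a few lines. The following is a minimal sketch in Python (cgs units; the input luminosity is taken to be the band-integrated \(L_{\rm UV}^{\rm int}\) of Eq. 12, and the function name is an illustrative choice):

```python
import numpy as np

L_SUN, M_SUN = 3.828e33, 1.989e33  # erg/s, g

def dust_attenuation(M_d, r_vir, z, L_UV_int, a=0.05e-4, s=2.25):
    """Escape fraction, IR luminosity and dust temperature.

    M_d [g] (assumed > 0), r_vir [cm], L_UV_int [erg/s] integrated over
    912-4000 A; a [cm] and s [g/cm^3] are the grain radius and density.
    """
    # Eq. 9: dust/gas radius (constant with z for a fixed halo mass).
    r_d = 4.5 * 0.04 * ((1.0 + z) / 7.0) * r_vir
    # Optical depth and Eq. 10 escape fraction.
    tau_d = 3.0 * M_d / (4.0 * np.pi * r_d**2 * a * s)
    f_esc = (1.0 - np.exp(-tau_d)) / tau_d
    # Eqs. 12-13: energy balance and black-body peak temperature.
    L_IR = (1.0 - f_esc) * L_UV_int
    T_d = 6.73 * ((L_IR / L_SUN) / (M_d / M_SUN)) ** (1.0 / 6.0)
    return f_esc, L_IR, T_d
```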
## 3 The impact of dust on early galaxy observables

As a first step we show that our choice of model parameters reproduces the observed UV LF at \(z\sim 5-9\) before showing predictions up to \(z\sim 20\) in Sec. 3.1. We then show the redshift evolution of the cosmic UV luminosity density up to \(z\sim 20\), using different magnitude thresholds to compare with observations. This is followed by the relation between both the intrinsic and observed UV magnitudes and the stellar mass in Sec. 3.3, before we show our predictions of the stellar mass function and the corresponding stellar mass density at \(z\sim 5-20\) in Sec. 3.4.

### Redshift evolution of the UV LF

We begin by showing both the intrinsic and dust-attenuated UV LFs at \(z\sim 5-20\) in Fig. 1.

Figure 1: The UV LF at \(z\sim 5-20\), as marked in the panels. In each panel, the dashed and solid lines represent the intrinsic and dust-attenuated UV LFs from the theoretical model, respectively. Finally, the dotted lines at \(z\gtrsim 13\) represent the upper limit to the theoretical UV LF with \(f_{*}=1\) and no feedback. In each panel, points show a compilation of the data from a number of different observational works (including Atek et al. 2015; Bowler et al. 2017; Atek et al. 2018; Ishigaki et al. 2018; Oesch et al. 2018; Bouwens et al. 2021, 2022b, 2023a,b; Naidu et al. 2022b; Donnan et al. 2023; Harikane et al. 2022a, 2023a,b; McLeod et al. 2023), as marked.

We find that, within error bars, the intrinsic UV LF is in good accord with the observed data for \(\rm M_{UV}\gtrsim-21\) at all \(z\sim 5-12\). This indicates that SNII feedback determines the properties of these low- to intermediate-mass galaxies, with dust playing a sub-dominant role. At \(z\sim 6-8\), however, the theoretical UV LF does not show the slight flattening/downturn seen in the data for the faintest (lensed) sources with \(\rm M_{UV}\gtrsim-15\) (Atek et al., 2018; Bouwens et al., 2022). This could possibly be attributed to physical effects not considered here, such as the impact of reionization feedback reducing the gas masses (and therefore the star-forming capabilities) of such low-mass objects (e.g. Hutter et al., 2021), or to observational uncertainties such as those associated with lensing systematics (e.g. Atek et al., 2018). Interpreting the bright end of the UV LF is complicated by the fact that, in addition to dust attenuation, black-hole accretion-powered luminosity can have a significant impact on the LF at \(\rm M_{UV}\lesssim-21\) at \(z\sim 5-6\) (e.g. Ono et al., 2018; Kulkarni et al., 2019; Piana et al., 2022). For this reason, we limit our comparison to the observational UV LF from the star-forming galaxy sample (excluding AGN) at these redshifts (Harikane et al., 2022). We find that the impact of dust becomes relevant at \(\rm M_{UV}\lesssim-21\) at \(z\sim 5-10\), with dust attenuation playing a negligible role at \(z\gtrsim 12\) where extremely massive, dusty galaxies have not had time to form. Further, while the theoretical dust-attenuated UV LF is in agreement with all observations of the UV LF at \(z\sim 5-9\), it under-predicts the number density of the brightest galaxies (with \(\rm M_{UV}\sim-22.5\)) at \(z\sim 10-11\) (Donnan et al., 2023; McLeod et al., 2023). This could be explained by e.g. radiative pressure ejecting dust from such systems, which have high specific star formation rates (Ferrara et al., 2022), or by the dust radius being even larger. We also caution that our homogeneous dust distribution model misses crucial effects such as dust being clumped or spatially segregated from star-forming regions, as indicated by REBELS observations (e.g. Dayal et al., 2022; Inami et al., 2022), which could have implications for the UV-visibility of these early galaxies. At \(z\gtrsim 12\), while our UV LF matches the observations for \(\rm M_{UV}\gtrsim-20\), we under-predict the number density of brighter sources observed by a number of works (e.g. Naidu et al., 2022; Donnan et al., 2023; Bouwens et al., 2023a,b), an under-prediction that grows to around three orders of magnitude at \(z\sim 16-18\) (comparing to Harikane et al., 2023; Bouwens et al., 2023). Although spectroscopic confirmations are crucial in validating the high-redshift nature of these sources, theoretically such high number densities could be explained by these galaxies being extreme star-formers that lie significantly above the average star formation rate-halo mass relation (e.g. Harikane et al., 2022; Pacucci et al., 2022) or by a more top-heavy IMF compared to the generally-used Salpeter or Chabrier IMFs (e.g. Pacucci et al., 2022; Yung et al., 2023).
As a sanity check, we also calculate the "maximal" UV LF allowed by our model at \(z\gtrsim 12\), assuming no feedback and a star formation efficiency of \(f_{*}^{\rm eff}=1.0\). Although this extreme model lies above the observations at \(z\sim 12-13\) and matches the data at \(z\sim 16\) (from Harikane et al., 2023), it is still about 0.5 dex below the highest-redshift observations at \(z\sim 18\) (from Bouwens et al., 2023a,b). We however caution that spectroscopic confirmations are crucial to validate these ultra-high redshifts (e.g. Adams et al., 2023; Naidu et al., 2022; Arrabal Haro et al., 2023) before theoretical models are pushed to their extreme limits. Finally, for clearer visualization, we show the redshift evolution of the dust-attenuated UV LF between \(z\sim 5-20\) in Fig. 2.

Figure 2: The redshift evolution of the dust-attenuated UV LF from our model between \(z\sim 5-20\), as marked.

At the faint end (\(\rm M_{UV}\sim-13\)), the amplitude of the UV LF is almost constant between \(z\sim 5-13\) and shows the expected decline with increasing luminosity. For example, the number of systems with \(\rm M_{UV}\sim-18\) falls by about two orders of magnitude between \(z\sim 5\) and 13. As expected, we probe increasingly higher luminosities with decreasing redshift as more and more massive systems assemble, with the LF extending to \(\rm M_{UV}\sim-24.5\) (\(\sim-21\)) at \(z\sim 5\) (13). At \(z\sim 16\), the amplitude of the UV LF drops rapidly at all luminosities due to a combination of the evolution of the HMF and such low-mass halos being feedback-dominated. Finally, we note that despite the inclusion of dust, our theoretical UV LF does not show the "bright-end saturation" seen in observations at \(z\sim 5-13\) (e.g. Harikane et al., 2023; Bowler et al., 2020); a part of this could be attributed to the increasing contribution of AGN at the bright end with decreasing redshift.

### The intrinsic and dust-attenuated UV luminosity density

We now show the redshift evolution of the UV luminosity density (\(\rho_{\rm UV}\)), for both the intrinsic and dust-attenuated cases, obtained for a number of different magnitude thresholds, as shown in Fig. 3. To compare to observations, we convert the UV luminosity to a star formation rate (SFR) using a conversion factor of \(\kappa_{\rm UV}=\psi/L_{\rm UV}=1.15\times 10^{-28}\,{\rm M_{\odot}\,yr^{-1}/(erg\,s^{-1}\,Hz^{-1})}\) (Madau & Dickinson, 2014). Integrating over all galaxies, the intrinsic UV luminosity density decreases by about three orders of magnitude, from \(\rho_{\rm UV}\sim 10^{26.5}\) to \(10^{23.8}\,{\rm erg\,s^{-1}\,Hz^{-1}\,cMpc^{-3}}\), between \(z\sim 4.5\) and 18. At all redshifts, \(\rho_{\rm UV}\) is dominated by the contribution from low-mass, low-luminosity galaxies (\(\rm M_{UV}\gtrsim-13\)), with such sources making up 100% of \(\rho_{\rm UV}\) at \(z\gtrsim 12.5\). The \(\rho_{\rm UV}\) contribution of galaxies naturally decreases with increasing luminosity: at \(z\sim 15\), galaxies with \(\rm M_{UV}\gtrsim-15,-17\) and \(-19\) contribute 99, 93 and 70% of the UV luminosity density, which decreases to 93, 76 and 52% by \(z\sim 10\).
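The conversion between the UV luminosity density and the star formation rate density used here is a single multiplicative factor; a minimal sketch, where the printed example value is just the \(z\sim 4.5\) intrinsic density quoted above:

```python
KAPPA_UV = 1.15e-28  # Msun/yr per (erg/s/Hz); Madau & Dickinson (2014)

def sfr_density(rho_uv):
    """UV luminosity density [erg/s/Hz/cMpc^3] -> SFR density
    [Msun/yr/cMpc^3] via psi = kappa_UV * L_UV."""
    return KAPPA_UV * rho_uv

print(sfr_density(10**26.5))  # ~0.036 Msun/yr/cMpc^3
```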
As seen from the same figure, dust has an appreciable impact only at \(z\lesssim 6\) for the global population and at \(z\lesssim 10\) for bright sources with \(\rm M_{UV}\lesssim-19\). Our prediction of the luminosity density is in good agreement with observed data-sets up to \(z\sim 10\) when integrating down to a number of magnitude thresholds ranging between \(\rm M_{UV}\sim-17\) and \(-19\) (e.g. Donnan et al., 2023; McLeod et al., 2016; Bouwens et al., 2023b), as detailed in Fig. 3. At \(z>13\), we compare our results with available data from Bouwens et al. (2023b), who provide \(\rho_{\rm UV}\) values for their own recent JWST detections in addition to a compilation of public JWST data that they label "robust", "solid" and "possible". As seen from the same figure, the UV luminosity density from their data as well as the "robust" data-set lies about 0.5 dex above our predicted values at \(z\sim 13\), with the "solid" and "possible" data-sets being almost two orders of magnitude above our model values. Since these data-sets effectively show the same number density at \(z\sim 13-17\), by \(z\sim 17\) all of these observations lie orders of magnitude above the predicted luminosity density values. As a sanity check, we also compare these observations to our "maximal" model (no feedback, \(f_{*}^{\rm eff}=1\)). While we find such an upper limit to be in accord with their observations as well as the "robust" data-set, it is still lower than the value inferred from the tentative "possible" data-set. This again leads to the conclusion that spectroscopic confirmations are crucially required to validate the redshift and nature of these highest-redshift sources. In terms of dust attenuation effects, we find that, accounting for all galaxies, a star-formation rate density based on the UV would miss 17% of the actual star-formation rate density at \(z\sim 5\), decreasing to 2% at \(z\gtrsim 10\). However, considering only bright galaxies with \(\rm M_{UV}<-19\), UV selection would miss 34% at \(z\sim 5\), decreasing to 17% by \(z\sim 10\); this is in excellent accord with the \(30-60\%\) of the SFR being missed in the UV at \(z\sim 7\) due to dust attenuation, as inferred from ALMA REBELS results (Algera et al., 2023b). We further quantify the effects of dust in Fig. 4, where we show the population-averaged fraction of intrinsic UV light (\(\rho_{\rm UV}^{\rm obs}/\rho_{\rm UV}^{\rm int}\)) that can escape from galaxies, unattenuated by dust. As might be expected, this fraction decreases with decreasing redshift and increasing mass as the dust content builds up. For example, accounting for all galaxies, this fraction decreases from \(\sim 1\) at \(z\gtrsim 10\) to \(\sim 0.8\) by \(z\sim 5\). While the behavior for galaxies fainter than \(\rm M_{UV}\sim-15\) is quite similar to this trend, galaxies brighter than \(\rm M_{UV}\sim-17\) show lower escape fraction values at all \(z\). The brightest galaxies (with \(\rm M_{UV}\lesssim-19\)) are dust attenuated at all redshifts, and show escape fraction values that decrease from 0.8 at \(z\gtrsim 10\) to 0.65 by \(z\sim 5\). The UV photons absorbed by dust are re-emitted in the IR, whose detectability is discussed in Sec. 5.
Figure 3: The redshift evolution of the UV luminosity density between \(z\sim 5-20\). We also show the corresponding star formation rate density using a conversion factor between the star formation rate and UV luminosity of \(\kappa_{\rm UV}=1.15\times 10^{-28}\,{\rm M_{\odot}\,yr^{-1}/(erg\,s^{-1}\,Hz^{-1})}\) (Madau & Dickinson, 2014). As marked, the dashed and solid lines show model results for the intrinsic and dust-attenuated values of the UV luminosity. The different colors show results for the UV magnitude limits marked, so as to be able to compare to the observations shown using points. Finally, the solid gray line shows the results from our extreme model using \(f_{*}^{\rm eff}=100\%\) and a magnitude threshold of -19. The different points show observational data from Donnan et al. (2023, diamonds), who use a magnitude threshold of -17, from McLeod et al. (2016, squares), who use a magnitude threshold of -17.7, and from Bouwens et al. (2023b, triangles: red for fiducial, and orange, purple and olive-green for "robust", "solid" and "possible" literature detections, respectively), who use a magnitude threshold of -19.

### Redshift evolution of the stellar mass - UV luminosity relation

We now discuss the mass-to-light relation between the intrinsic and observed UV magnitudes and the total stellar mass at \(z\sim 5-16\), as shown in Fig. 5. For \(M_{\star}\gtrsim 10^{7}\,{\rm M}_{\odot}\) galaxies, \(\rm M_{UV}^{int}\) effectively scales with the stellar mass. This is because such galaxies reside in massive halos (with \(M_{h}\gtrsim 10^{9.5}\,{\rm M}_{\odot}\)) at all the redshifts considered and are therefore efficient star-formers with a fixed efficiency of \(f_{*}^{\rm eff}=f_{*}=0.15\). However, at a fixed \(\rm M_{UV}^{int}\) value, the associated stellar mass increases with decreasing redshift. This is because galaxies have lower gas fractions with decreasing redshift (due to more generations of feedback-limited progenitors; Dayal et al., 2014), which results in lower star formation rates. The redshift-dependent relation between the intrinsic magnitude and the stellar mass is well fit by:

\[\log(M_{\star}/{\rm M_{\odot}})=-0.4\,{\rm M_{UV}^{int}}+1.495-0.0797\,z. \tag{14}\]

From this relation, we see that \(\rm M_{UV}^{int}\sim-19\) corresponds to \(M_{\star}\sim 10^{8.7}\,{\rm M}_{\odot}\) at \(z\sim 5\), which drops by about an order of magnitude to \(M_{\star}\sim 10^{7.8}\,{\rm M}_{\odot}\) by \(z\sim 16\). We then discuss the \(\rm M_{UV}^{obs}-M_{\star}\) relation shown for \(z\sim 5-16\) in the right panel of the same figure. Given the low dust masses and associated dust attenuation of fainter systems, discussed in detail in Sec. 4, the \(\rm M_{UV}^{obs}-M_{\star}\) relation follows the intrinsic UV magnitude-stellar mass relation for \(\rm M_{UV}^{obs}\gtrsim-20\) at all the redshifts considered. However, as a result of the increasing dust attenuation with increasing mass, the \(\rm M_{UV}^{obs}-M_{\star}\) relation shows an upturn for brighter systems. For example, galaxies with \(M_{\star}\sim 10^{10}\,{\rm M}_{\odot}\) at \(z\sim 5\) show an observed magnitude of about -22.5, which is a magnitude fainter than the intrinsic magnitude. However, the most massive systems, with \(M_{\star}\sim 10^{12}\,{\rm M}_{\odot}\), show an observed magnitude (\(\sim-24\)) which is 2.5 magnitudes fainter than the intrinsic \(\rm M_{UV}\) value.
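A one-line implementation of Eq. 14, together with the sanity checks quoted above (a minimal sketch; the function name is an illustrative choice):

```python
def log_mstar_from_muv_int(M_UV_int, z):
    """Eq. 14: log10 stellar mass [Msun] from the intrinsic UV
    magnitude, valid over z ~ 5-16."""
    return -0.4 * M_UV_int + 1.495 - 0.0797 * z

# M_UV^int ~ -19 gives log(M*) ~ 8.7 at z ~ 5 and ~ 7.8 at z ~ 16.
print(log_mstar_from_muv_int(-19, 5), log_mstar_from_muv_int(-19, 16))
```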
The escape fraction of continuum photons is quantified in Sec. 4.2 and can be used to link the \(M_{\star}\) and \(\rm M_{UV}^{obs}\) values at any given redshift.

### The stellar mass function and stellar mass density at \(z\sim 5-20\)

We now present our predictions of the stellar mass function at \(z\sim 5-20\), as shown in Fig. 6. We compare our results at \(z\sim 5-10\) with a number of observational data-sets (from Duncan et al., 2014; Song et al., 2016; Bhatawdekar et al., 2019; Kikuchihara et al., 2020; Stefanon et al., 2021), which are in good agreement within error bars when re-normalised to a Kroupa IMF. By construction, the theoretical SMF is a good match to the data for \(M_{\star}\sim 10^{7-11.25}\,{\rm M}_{\odot}\) at \(z\sim 5-8\). Again, while the low-mass end is mostly determined by SNII feedback, the high-mass end is determined by the star formation efficiency (\(f_{*}^{\rm eff}=f_{*}\)) in these massive sources. At \(z\sim 10\), however, the theoretical SMF over-predicts the number density of \(M_{\star}\sim 10^{8.25-8.75}\,{\rm M}_{\odot}\) sources from Stefanon et al. (2021) by as much as an order of magnitude. This could be due to a number of reasons, including low number statistics or the assumption of a constant star formation history leading to an under-estimation of the stellar mass observationally (Topping et al., 2022). In the same figure, we also show the SMF predicted by our model at \(z\sim 12-20\). As seen, both the amplitude and the mass range of the SMF decrease with increasing redshift. For example, for \(M_{\star}\sim 10^{7}\,{\rm M}_{\odot}\), the number density falls by about 3.25 orders of magnitude between \(z\sim 12\) and 20, from \(10^{-2.2}\) to \(10^{-5.25}\,{\rm Mpc^{-3}\,dex^{-1}}\). Also, down to number densities of \(10^{-8.5}\,{\rm Mpc^{-3}\,dex^{-1}}\), the SMF extends to increasingly massive systems with decreasing redshift.

Figure 5: The total stellar mass as a function of the intrinsic (_left panel_) and dust-attenuated (_right panel_) absolute magnitude at \(z\sim 5-16\), as shown with the different shaded curves. The area of each curve represents the extent from the \(16^{\rm th}\) percentile to the \(84^{\rm th}\) percentile.

Figure 6: The redshift evolution of the SMF predicted by our model (solid line) at \(z\sim 5-20\), as marked. In each panel, we compare our model results with a compilation of observational results shown by points (including Duncan et al., 2014; Song et al., 2016; Bhatawdekar et al., 2019; Kikuchihara et al., 2020; Stefanon et al., 2021), as marked. All of the observational data-sets have been re-normalised to a Kroupa IMF. The bottom right panel shows the model predictions at \(z\sim 12-20\), where there is no observational data yet.
Observational estimates of the SMF at these redshifts will be crucial for constraining the SMD and baselining and validating theoretical models at these high-redshifts.

## 4 Dust properties in the first billion years

In this section we study the dust properties of early galaxies, including the dust-stellar mass relation (Sec. 4.1), the escape fraction of UV photons unattenuated by dust (Sec. 4.2) and the associated dust temperatures (Sec. 4.3).

### The relation between dust and stellar mass in the first billion years

We start by showing the relation between dust mass (\(M_{d}\)) and stellar mass in Fig. 8. Firstly, we find a linear relation linking \(M_{\star}\) and \(M_{d}\) at all \(z\sim 5-16\) (see also Sec. 3, Dayal et al., 2022). This is driven by the fact that the SNII dust production rate is proportional to the SFR, which scales with \(M_{\star}\) in our model. Further, all of the processes of astration, destruction and ejection also scale with the SFR (see Sec. 2.1 of Dayal et al., 2022), with ISM grain growth on a 30 Myr timescale only providing a small contribution to the total dust mass. Secondly, at a fixed stellar mass, galaxies show a dust mass that increases with increasing redshift. This can be explained by the fact that the halo rotational velocity (\(v_{c}\)) increases with increasing redshift for a given halo mass. This leads to an increase in \(f_{\star}^{\rm ej}\), which leads to a higher star-formation rate for feedback-limited galaxies (which form stars at an efficiency of \(f_{\star}^{\rm ej}\)), resulting in a higher dust mass. Furthermore, by combining Equations 1, 4 and 5, we see that an increased \(v_{c}\) leads to a decrease in the ejected gas mass, for both efficient star-formers and feedback-limited galaxies, resulting in both retaining a larger fraction of their gas and dust content. Indeed, for \(M_{\star}\gtrsim 10^{8.5}\,\rm M_{\odot}\), we find a linear relation linking \(M_{d}\) and \(M_{\star}\) at all \(z\sim 5-16\) such that

\[\log(M_{d})=1.194\log(M_{\star})+0.0975\,z-5.433. \tag{15}\]
For galaxies of \(M_{\star}\sim 10^{9.5}\,\rm M_{\odot}\), our model results in a dust mass of about \(10^{6.4}\) (\(10^{6.9}\)) \(\rm M_{\odot}\) and a dust-to-stellar mass ratio of about \(0.08\%\) (\(0.25\%\)) at \(z\sim 5\) (10). This increases to a dust mass of about \(10^{8.2}\) (\(10^{8.7}\)) \(\rm M_{\odot}\) and a dust-to-stellar mass ratio of about \(0.16\%\) (\(0.5\%\)) at \(z\sim 5\) (10) for massive galaxies with \(M_{\star}\sim 10^{11}\,\rm M_{\odot}\). We then compare our results to the dust masses inferred for \(M_{\star}\sim 10^{9-11}\,\rm M_{\odot}\) galaxies at \(z\sim 5\) and 7 from the ALMA ALPINE (Fudamoto et al., 2020) and REBELS (Bouwens et al., 2022) surveys, respectively. We note two key caveats involved in these observational data-sets: firstly, given that most of these sources are detected in a single ALMA band, a dust temperature has to be assumed in order to obtain a dust mass (see discussion in Sommovigo et al., 2022). Further, the star formation history used can significantly affect the inferred stellar masses (see e.g. Topping et al., 2022). Despite these caveats, within error bars our model results at \(z\sim 5\) are in good accord with the ALPINE results, except perhaps for two galaxies, the lowest-mass and highest-mass sources. Further, the REBELS sample finds a rather flat distribution of the dust masses as a function of the stellar mass (see Sec. 3, Dayal et al., 2022), as compared to the linear relation found by the theoretical model. Possible solutions could lie in the stellar masses being under-estimated under the assumption of a constant star formation history (Topping et al., 2022) or in higher dust temperatures that could push down the associated dust masses. Finally, we also show results from the fiducial model of Popping et al. (2017) at \(z\sim 7\). As shown, they find a dust mass that is larger than ours by a factor of about 6. This is due to the smaller dust growth timescale that they use, resulting in a dust mass dominated by dust growth, while dust growth has a sub-dominant impact in our model, as shown in Dayal et al. (2022).

Figure 8: The dust mass as a function of the total stellar mass for \(z\sim 5-16\), as marked; the area of each curve demarcates the extent from the \(16^{\rm th}\) percentile to the \(84^{\rm th}\) percentile. The orange line shows the median results from the fiducial model of Popping et al. (2017) at \(z\sim 7\). Finally, as marked, the different points show results from the ALMA REBELS (blue squares; Bouwens et al., 2022) and ALPINE surveys (pink pentagons; Fudamoto et al., 2020).

Figure 7: The redshift evolution of the stellar mass density between \(z\sim 5-19\) for different mass cuts, including all galaxies and integrating above \(M_{\star}\gtrsim 10^{8}\) and \(10^{9}\,M_{\odot}\), as marked. We compare our model results with the observationally inferred values (integrating down to a mass limit of \(10^{8}\,M_{\odot}\)) from Duncan et al. (2014, yellow squares), Song et al. (2016, purple circles), Bhatawdekar et al. (2019, green triangles), Kikuchihara et al. (2020, empty crosses), Stefanon et al. (2021, red pentagons) and the recent JWST results from Santini et al. (2023, blue circles), which have all been re-normalised to a Kroupa IMF.

### The evolution of the UV escape fraction

We now look at the relation between the fraction of UV photons that can escape a galaxy unattenuated by dust (\(f_{\rm esc}^{\rm UV}\)) and the stellar mass and SFR.
The UV escape fraction can also be interpreted as the ratio of the SFR observed in the UV (\(\psi_{\rm UV}\)) to the total intrinsic SFR (\(\psi\)), i.e. \(f_{\rm esc}^{\rm UV}=\psi_{\rm UV}/\psi\). We start by studying \(f_{\rm esc}^{\rm UV}\) as a function of the stellar mass in the left panel of Fig. 9. We find two key trends: at a given redshift, \(f_{\rm esc}^{\rm UV}\) decreases with increasing \(M_{*}\), given the increasing dust masses of more massive systems. For example, at \(z\sim 5\), \(f_{\rm esc}^{\rm UV}\) decreases from \(\sim 1\) for \(M_{*}\lesssim 10^{8.5}\,{\rm M}_{\odot}\) to \(\sim 0.05\) for \(M_{*}\sim 10^{12}\,{\rm M}_{\odot}\) systems. As we go to higher redshifts, the stellar mass range naturally narrows: for example, by \(z\sim 16\), the most massive systems only have \(M_{*}\sim 10^{8.5}\,{\rm M}_{\odot}\) and \(f_{\rm esc}^{\rm UV}\sim 0.65\). Secondly, as noted in the previous section, for a given stellar mass the dust mass increases slightly with increasing redshift. Further, galaxies of a given stellar mass are hosted in slightly lower-mass halos (i.e. with smaller virial radii) with increasing redshift. This, coupled with our assumption of the gas and dust radius being effectively constant with redshift for a fixed halo mass, results in a decrease in \(f_{\rm esc}^{\rm UV}\) with increasing redshift for a given stellar mass. Indeed, considering \(M_{*}\sim 10^{9.5}\,{\rm M}_{\odot}\), \(f_{\rm esc}^{\rm UV}\) decreases from \(\sim 0.8\) at \(z\sim 5\) to \(\sim 0.5\) by \(z\sim 10\). Our redshift-dependent relation between \(f_{\rm esc}^{\rm UV}\) and \(M_{*}\) at \(z\sim 5-16\) is quantified as:

\[f_{\rm esc}^{\rm UV}=\frac{1}{2}\Big{(}1-\tanh\big{[}\alpha(z)(\log(M_{*})-M_{0}(z))\big{]}\Big{)}, \tag{16}\]

where \(\alpha(z)=0.931+\exp\big{(}0.447z-7.842\big{)}\) and \(M_{0}(z)=10.739-0.124z\). We then also show \(f_{\rm esc}^{\rm UV}\) as a function of the total SFR in the right panel of the same figure. Interestingly, the \(f_{\rm esc}^{\rm UV}-\psi\) relation does not show any significant evolution with redshift over \(z\sim 5-16\). This is driven by the fact that both the dust mass and the SFR scale with the stellar mass in the same way as a function of redshift; that is, for a given stellar mass, the increase in dust mass with increasing redshift is matched by an increase in the SFR, resulting in a roughly constant \(f_{\rm esc}^{\rm UV}-\psi\) relation. We find \(f_{\rm esc}^{\rm UV}\sim 1\) for \(\psi\lesssim 1\,{\rm M}_{\odot}{\rm yr}^{-1}\), which decreases to \(\sim 0.1\) for \(\psi\sim 1000\,{\rm M}_{\odot}{\rm yr}^{-1}\). At \(z\sim 7\), we find \(f_{\rm esc}^{\rm UV}\sim 0.53-0.17\) for \(\psi\sim 30-300\,{\rm M}_{\odot}{\rm yr}^{-1}\), i.e. about \(47-83\%\) of the UV luminosity of such sources is suppressed due to dust attenuation. These values are in accord with those inferred from the REBELS survey (see also Dayal et al., 2022). The \(f_{\rm esc}^{\rm UV}-\psi\) relation can be quantified as:

\[f_{\rm esc}^{\rm UV}=\frac{1}{2}\Big{(}1-\tanh\big{[}A(z)(\log(\psi)-\psi_{0}(z))\big{]}\Big{)}, \tag{17}\]

where \(A(z)=0.899+\exp\big{(}0.411z-7.189\big{)}\) and \(\psi_{0}(z)=1.903-0.052z\). Finally, we also derive a relation between the observed UV magnitude and \(f_{\rm esc}^{\rm UV}\) such that

\[f_{\rm esc}^{\rm UV}=\frac{1}{2}\Big{(}1-\tanh\big{[}\xi(z)({\rm M}_{\rm UV}^{\rm obs}-{\rm M}_{\rm UV,0}(z))\big{]}\Big{)}, \tag{18}\]

where \(\xi(z)=-6.23\times 10^{-1}+1.58\times 10^{-3}\,z^{2}-7.52\times 10^{-6}\,z^{4}\) and \({\rm M}_{\rm UV,0}(z)=-22.103+0.09\,z\).
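The fitting functions of Eqs. 16-18 (and the dust-mass relation of Eq. 15) are straightforward to implement; a minimal sketch in Python, with the coefficient values exactly as quoted above:

```python
import numpy as np

def log_mdust(log_mstar, z):
    """Eq. 15: log10 dust mass [Msun] from log10 stellar mass."""
    return 1.194 * log_mstar + 0.0975 * z - 5.433

def f_esc_uv_mstar(log_mstar, z):
    """Eq. 16: UV escape fraction versus log10 stellar mass."""
    alpha = 0.931 + np.exp(0.447 * z - 7.842)
    M0 = 10.739 - 0.124 * z
    return 0.5 * (1.0 - np.tanh(alpha * (log_mstar - M0)))

def f_esc_uv_sfr(log_psi, z):
    """Eq. 17: UV escape fraction versus log10 SFR [Msun/yr]."""
    A = 0.899 + np.exp(0.411 * z - 7.189)
    psi0 = 1.903 - 0.052 * z
    return 0.5 * (1.0 - np.tanh(A * (log_psi - psi0)))

def f_esc_uv_muv(M_UV_obs, z):
    """Eq. 18: UV escape fraction versus observed UV magnitude."""
    xi = -6.23e-1 + 1.58e-3 * z**2 - 7.52e-6 * z**4
    M_UV0 = -22.103 + 0.09 * z
    return 0.5 * (1.0 - np.tanh(xi * (M_UV_obs - M_UV0)))

# Sanity check against the text: ~0.8 at z~5 and ~0.5 at z~10
# for a 10^9.5 Msun galaxy.
print(f_esc_uv_mstar(9.5, 5.0), f_esc_uv_mstar(9.5, 10.0))
```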
These fits allow a conversion between \(f_{\rm esc}^{\rm UV}\) and galaxy properties, including the intrinsic UV magnitude and dust mass, using Eqns. 14-15.

### The redshift evolution of dust temperatures

Next, we study the dust temperature (\(T_{d}\)), which is a measure of how intensely the dust is heated by UV radiation. We show the dust temperature as a function of stellar mass for \(z\sim 5-16\) in Fig. 10. As seen, we find that \(T_{d}\) increases with an increase in \(M_{*}\). This is to be expected given that, while the intrinsic UV luminosity scales with \(M_{*}\), more massive galaxies also show lower \(f_{\rm esc}^{\rm UV}\) values, allowing for more heating of the dust (as seen in Fig. 9). For example, at \(z\sim 5\), \(T_{d}\) increases from \(\sim 17\) to 31 K as \(M_{*}\) increases from \(10^{7}\) to \(10^{10}\,{\rm M}_{\odot}\). However, \(T_{d}\) saturates for \(M_{*}\gtrsim 10^{10}\,{\rm M}_{\odot}\); this is because of the saturation in \(f_{\rm esc}^{\rm UV}\) seen above in Sec. 4.2. We also find that at a fixed \(M_{*}\) value, \(T_{d}\) increases with increasing redshift. This is driven by the fact that galaxies of a given stellar mass have both a higher SFR and a smaller \(f_{\rm esc}^{\rm UV}\) value with increasing redshift. This results in a larger fraction of the UV photons being absorbed from intrinsically brighter galaxies, resulting in both higher IR luminosities and dust temperatures. For galaxies with \(M_{*}\sim 10^{9.5-11}\,{\rm M}_{\odot}\), we calculate values of \(T_{d}\sim 27-34\) K at \(z\sim 5-7\). These are lower than the average values of \(T_{d}\sim 48\pm 8\) K derived for the ALMA ALPINE sample (Sommovigo et al., 2022a) and the values of \(T_{d}\sim 47\pm 6-52\pm 11\) K derived for REBELS sources (Sommovigo et al., 2022b; Ferrara et al., 2022b), respectively. However, multi-band ALMA observations of three massive galaxies, with \(M_{*}\sim 10^{10}\,{\rm M}_{\odot}\), in the REBELS survey hint at lower dust temperatures of \(T_{d}\sim 30-35\) K (Algera et al., 2023a). These are in perfect agreement with the average value of about 33 K we predict for such sources. An outstanding issue, however, is that such low dust temperatures result in higher dust masses that are more compatible with an unphysical "maximal" dust model where each SNII is required to produce \(1\,{\rm M}_{\odot}\) of dust, dust is required to grow in the ISM on a timescale of 0.3 Myr and dust can neither be destroyed nor ejected (see Sec. 2.1 of Dayal et al., 2022). Multi-band ALMA detections of such high-redshift sources are urgently needed to obtain better constraints on their dust temperatures. In addition, our simplistic model misses a number of crucial effects such as the fact that dust is probably clumped in the ISM; indeed, concentrated clumps of dust around star-forming regions would have higher dust temperatures than the fully diffuse dust component calculated here.

Figure 9: Model results showing the escape fraction of non-ionizing UV photons (\(f_{\rm esc}^{\rm UV}\)) as a function of the total stellar mass (_left panel_) and the star-formation rate (_right panel_). The curves show the results for \(z\sim 5-16\), as marked, and demarcate the extent from the \(16^{\rm th}\) percentile to the \(84^{\rm th}\) percentile.
## 5 The Dust Detectability of Early Galaxies in the ALMA Era: the FIR LF at Extremely High Redshifts

Now that we have established that the model reproduces observables in the UV and have studied the dust enrichment and attenuation of early sources, we can study their dust emission. We start by discussing the relation between the FIR luminosity (\(L_{\rm FIR}\)) and stellar mass as shown in Fig. 11. We see that \(L_{\rm FIR}\) increases with stellar mass due to the higher star-formation rates of more massive galaxies and their lower \(f_{\rm esc}^{\rm UV}\) values, which lead to more UV photons being absorbed by dust and re-emitted in the infrared. For galaxies with \(M_{*}\sim 10^{9.5-11}\,{\rm M}_{\odot}\), our model yields \(L_{\rm FIR}\sim 10^{10.1-12.2}\,(10^{10.6-12.5})\,{\rm L}_{\odot}\) at \(z\sim 5\) (7). Further, as might be expected from the discussions above, for a given \(M_{*}\) value, \(L_{\rm FIR}\) increases with increasing redshift as a result of the larger dust masses and lower \(f_{\rm esc}^{\rm UV}\) values. Indeed, by \(z\sim 10\), galaxies with \(M_{*}\sim 10^{9.5}\,{\rm M}_{\odot}\) show FIR luminosity values as high as \(L_{\rm FIR}\sim 10^{11}\,{\rm L}_{\odot}\). We then show our resulting FIR LF at \(z\sim 5-20\) in Fig. 12. As might be expected, both the normalisation and the luminosity range of the FIR LF decrease with increasing redshift. For example, the number density of sources with \(L_{\rm FIR}\sim 10^{9}\,{\rm L}_{\odot}\) falls from \(10^{-3}\,{\rm cMpc^{-3}\,dex^{-1}}\) at \(z\sim 5\) to \(10^{-7.5}\,{\rm cMpc^{-3}\,dex^{-1}}\) by \(z\sim 18\). This is because the number density associated with a given stellar mass drops off with increasing redshift faster than the FIR luminosity increases. Further, at \(z\sim 5\), the FIR LF extends between \(10^{9-13.5}\,{\rm L}_{\odot}\), which decreases to \(10^{9-12.5}\,{\rm L}_{\odot}\) by \(z\sim 7\) and \(<10^{9.7}\,{\rm L}_{\odot}\) by \(z\sim 18\); there is effectively no IR LF at redshifts as high as \(z\sim 20\). Indeed, at \(z\sim 13\) the fiducial model yields slightly more than one galaxy per \({\rm cGpc^{3}}\) with \(L_{\rm FIR}\sim 10^{11}\,{\rm L}_{\odot}\), and about 10 galaxies per \({\rm cGpc^{3}}\) with \(L_{\rm FIR}\sim 10^{10}\,{\rm L}_{\odot}\) at \(z\sim 16\). Getting significant number statistics at these early epochs therefore poses a severe challenge in terms of the volumes that must be surveyed. We then compare our results to the FIR LFs inferred using ALMA data: this includes results at \(z\sim 4.5-6\) from the ALPINE survey (Gruppioni et al., 2020) and from the REBELS survey at \(z\sim 7\) (Barrufet et al., 2023). We start by noting that both these samples are based on low number statistics. Further, the \(z\sim 4.5-6\) data shows a number of puzzling aspects, such as the flatness of the FIR LF over the observed range of \(L_{\rm FIR}\sim 10^{11.25-12.5}\,{\rm L_{\odot}}\), and the volume density of dusty sources seems to show very little evolution at \(z\gtrsim 2.5-3\). This could arise from a number of reasons (see discussion in Gruppioni et al., 2020) including: (i) photometric redshift uncertainties that can induce Poissonian errors in the LF (i.e.
an uncertainty in the number of objects in each bin); (ii) the sources probed might be part of an over-density; considering them as unbiased, blindly detected sources would thereby lead to an overestimation of the LF; (iii) the fact that most of these sources are detected in a single ALMA band, which results in uncertainties in converting such monochromatic fluxes to total IR luminosities. The same issues also hold true for the FIR LF at \(z\sim 7\).

Figure 11: Model results showing the FIR dust emission (\(L_{\rm FIR}\)) as a function of the stellar mass for \(z\sim 5-16\), as marked; the area of each curve represents the extent from the \(16^{\rm th}\) percentile to the \(84^{\rm th}\) percentile of the distribution. See text in Sec. 2.4 for details on the calculation of \(L_{\rm FIR}\).

Figure 10: The dust temperature (\(T_{d}\)) as a function of the stellar mass for \(z\sim 5-16\), as marked; the area of each curve represents the extent from the \(16^{\rm th}\) percentile to the \(84^{\rm th}\) percentile of the distribution. See text in Sec. 2.4 for details on the calculation of the dust temperature.

Figure 12: The redshift evolution of the infrared LF at \(z\sim 5-20\), as marked in the panels. In each panel, the solid line represents the _fiducial_ model results, the dashed line shows results from the fiducial model assuming all UV luminosity is re-emitted in the FIR (i.e. \(f_{\rm esc}^{\rm UV}=0\)) and the dotted line shows the "maximal" model with \(f_{*}^{\rm eff}=1\) and \(f_{\rm esc}^{\rm UV}=0\). We compare our model results to observed FIR LFs at \(z\sim 5-6\) (Gruppioni et al., 2020, green dots) and at \(z\sim 7\) (Barrufet et al., 2023, red diamonds).

With these caveats in mind, we find that while the _fiducial_ theoretical FIR LF is in good agreement with the \(z\sim 4.5-6\) data in the faintest luminosity bins (\(L_{\rm FIR}\sim 10^{11.25-11.5}\,{\rm L_{\odot}}\)), it under-predicts the number density of brighter sources. The same situation arises when comparing to the data at \(z\sim 7\), where our _fiducial_ results lie below the observations by as much as an order of magnitude for the faintest sources. We therefore carry out a number of limiting calculations at all \(z\sim 5-20\): (i) in the first case, we use the _fiducial_ model for the UV luminosity but assume \(f_{\rm esc}^{\rm UV}=0\), i.e. all of the UV photons are converted into FIR luminosity; (ii) in the "maximal" case, we assume a star formation efficiency of 100%, i.e. \(f_{*}^{\rm eff}=1\), and \(f_{\rm esc}^{\rm UV}=0\); this yields the upper limit to the FIR LF at any redshift. We find that, within error bars, the observed FIR LF at \(z\sim 4.5-6\) is in accord with the "maximal" model for \(L_{\rm FIR}\sim 10^{11.75-12.25}\,{\rm L_{\odot}}\). Puzzlingly, however, the brightest observed data point lies above this maximal model. The situation is similar at \(z\sim 7\), where the observationally-inferred FIR LF is more compatible with the maximal model, at least for the brightest sources. It is therefore crucial to have spectroscopic confirmation of the redshifts of these sources, and preferably multi-band ALMA observations, to robustly pin down the FIR LF at high redshifts before we invoke unphysical extrema in galaxy formation models.

## 6 Conclusion and Discussions

In this work we track the dust enrichment of galaxies at \(z\sim 5-20\) using the delphi semi-analytic model for galaxy formation.
A key strength of this model is that it only invokes two mass- and redshift-independent free parameters to match observables at \(z\sim 5-9\), including the UV LF and SMF: these are the upper limit to the star-formation efficiency parameter (\(f_{*}=0.15\)) and the fraction of SNII energy coupling to the gas (\(f_{w}=0.06\)). This model is also baselined against dust mass estimates of early galaxies from recent ALMA observations at \(z\sim 5-7\). This model is used to study the impact of dust on global galaxy properties up to \(z\sim 20\), including the UV LF, the SMF and the UV luminosity (SFR) density. Additionally, we study the dust properties of early galaxies, including the dust-to-stellar mass relation, the escape fraction of UV photons unattenuated by dust and dust temperatures, before we make predictions for the dust visibility (through the FIR LF). Our key results are summarized as follows:

* By construction, our model matches the observed UV LF at \(z\sim 5-9\). While SNII feedback effectively shapes the faint-end of the UV LF, dust plays a key role in determining the bright-end (\({\rm M_{UV}}\lesssim-21\)) at \(z\sim 5-10\). Further, we find that dust has no appreciable impact on the visibility of early galaxies at \(z\gtrsim 12\).
* At \(z\sim 12-18\), the model significantly under-predicts both the observed UV LF and \(\rho_{\rm UV}\) (when comparing to e.g. Donnan et al. 2023; Bouwens et al. 2023b; Harikane et al. 2023b). Indeed, even a "maximal" model with no feedback and a 100% star formation efficiency does not produce the bright (\({\rm M_{UV}}\lesssim-20\)) galaxies observed at \(z\sim 16-18\) (Harikane et al. 2023b; Bouwens et al. 2023b) and under-predicts the observationally-inferred UV luminosity density. This necessitates spectroscopic confirmations of such early sources, although plausible solutions might also lie in such systems being extreme star-formers or having a top-heavy IMF.
* While the model matches the observed SMF and SMD up to \(z\sim 8\), it lies approximately 1 dex above the observed SMF (and hence the SMD) at \(z\sim 10\) (Stefanon et al. 2021). This might be attributed either to an incompleteness in the observational data-set or to an under-estimation of the stellar masses because of the assumption of a constant star-formation history.
* Given that SNII are the key dust factories, in our model the dust mass scales linearly with the stellar mass at \(z\sim 5-16\) such that \(\log(M_{d})=1.194\log(M_{*})+0.0975\,z-5.433\). As seen, for a given stellar mass, the dust mass shows an increase with redshift. This is due to the increasing star-formation rates and the decreasing ejected gas mass, for a fixed stellar mass, with increasing redshift.
* The UV escape fraction \(f_{\rm esc}^{\rm UV}\) decreases with stellar mass (or the star formation rate) due to the dustier nature of massive galaxies. For example, at \(z\sim 5\), \(f_{\rm esc}^{\rm UV}\) decreases from \(\sim 1\) for \(M_{*}\lesssim 10^{8.5}\,{\rm M_{\odot}}\) to \(\sim 0.05\) for \(M_{*}\sim 10^{12}\,{\rm M_{\odot}}\) systems. We also find that, given their larger dust masses, galaxies of a given stellar mass show decreasing \(f_{\rm esc}^{\rm UV}\) values with increasing redshift.
* We find that, accounting for all galaxies, a star-formation rate density based on the UV would miss 17% of the actual star-formation rate density at \(z\sim 5\), decreasing to 2% at \(z\gtrsim 10\). However, considering only bright galaxies with \({\rm M_{UV}}<-19\), UV selection would miss 34% at \(z\sim 5\), decreasing to 17% by \(z\sim 10\); this is in excellent accord with the \(30-60\%\) of the SFR being missed in the UV at \(z\sim 7\) due to dust attenuation, as inferred by ALMA REBELS (Algera et al. 2023b).
* Assuming equilibrium between the non-ionizing photons absorbed and re-radiated by dust, we find dust temperatures that increase both with stellar mass and with increasing redshift. At \(z\sim 7\), we find an average temperature of 33 K for galaxies with a stellar mass above \(10^{10}\,{\rm M_{\odot}}\), in good agreement with recent multi-band ALMA measurements (Algera et al. 2023a).
* Finally, we predict the FIR LF at \(z\sim 5-20\) and find our predictions to match the FIR LF inferred from ALMA ALPINE results at \(z\sim 5-6\) for \(L_{\rm FIR}\lesssim 10^{11-11.5}\,{\rm L_{\odot}}\). The model under-predicts the observed FIR LF at higher luminosities and at higher redshifts, e.g. when comparing to ALMA REBELS results (Barrufet et al. 2023). Even our maximal model, with \(f_{*}^{\rm eff}=1.0\) and \(f_{\rm esc}^{\rm UV}=0.0\), under-predicts the number density of the brightest objects, requiring further ALMA follow-up and spectroscopic confirmations for these rare sources.

Finally, we end with some caveats. Firstly, we find SNII to be the primary dust factories, with ISM grain growth in a homogeneous medium playing a minor role. However, including a multi-phase ISM, with cold clumps where grain growth could be more efficient, could help increase the contribution of the latter process to the total dust mass. Secondly, we assume dust to be homogeneously distributed in the ISM. However, as has been shown by recent ALMA observations (e.g. Inami et al. 2022), dust and star-forming regions can be spatially segregated, significantly affecting the dust optical depth experienced by UV photons. Thirdly, while we assume a Kroupa IMF throughout this work, the redshift evolution of the IMF remains an outstanding issue, for example becoming more top-heavy with decreasing metallicity (see e.g. Chon et al., 2021). This could have a significant impact on the inferred UV luminosities (e.g. Pacucci et al., 2022; Yung et al., 2023). Finally, we assume a constant star formation efficiency for massive systems, not accounting for observed galaxies lying significantly above the main sequence of star formation (Harikane et al., 2022; Pacucci et al., 2022). Forthcoming observations with JWST will be crucial in obtaining spectroscopic redshifts to validate the highest-redshift sources observed, with multi-band ALMA observations providing crucial constraints on the dust temperatures (and hence masses) of galaxies in the era of cosmic dawn.

## Acknowledgments

VM and PD acknowledge support from the NWO grant 016.VIDI.189.162 ("ODIN"). PD warmly thanks the European Commission's and University of Groningen's CO-FUND Rosalind Franklin program. The authors thank L. Barrufet, J. Kerutt, L. Sommovigo and M. Trebitsch for their helpful comments and insightful discussions.

## Data Availability

Data generated in this research will be shared on reasonable request to the corresponding author.
2306.09955
Training shallow ReLU networks on noisy data using hinge loss: when do we overfit and is it benign?
We study benign overfitting in two-layer ReLU networks trained using gradient descent and hinge loss on noisy data for binary classification. In particular, we consider linearly separable data for which a relatively small proportion of labels are corrupted or flipped. We identify conditions on the margin of the clean data that give rise to three distinct training outcomes: benign overfitting, in which zero loss is achieved and with high probability test data is classified correctly; overfitting, in which zero loss is achieved but test data is misclassified with probability lower bounded by a constant; and non-overfitting, in which clean points, but not corrupt points, achieve zero loss and again with high probability test data is classified correctly. Our analysis provides a fine-grained description of the dynamics of neurons throughout training and reveals two distinct phases: in the first phase clean points achieve close to zero loss, in the second phase clean points oscillate on the boundary of zero loss while corrupt points either converge towards zero loss or are eventually zeroed by the network. We prove these results using a combinatorial approach that involves bounding the number of clean versus corrupt updates across these phases of training.
Erin George, Michael Murray, William Swartworth, Deanna Needell
2023-06-16T16:40:04Z
http://arxiv.org/abs/2306.09955v2
# Training shallow ReLU networks on noisy data using hinge loss: when do we overfit and is it benign?

###### Abstract

We study benign overfitting in two-layer ReLU networks trained using gradient descent and hinge loss on noisy data for binary classification. In particular, we consider linearly separable data for which a relatively small proportion of labels are corrupted or flipped. We identify conditions on the margin of the clean data that give rise to three distinct training outcomes: benign overfitting, in which zero loss is achieved and with high probability test data is classified correctly; overfitting, in which zero loss is achieved but test data is misclassified with probability lower bounded by a constant; and non-overfitting, in which clean points, but not corrupt points, achieve zero loss and again with high probability test data is classified correctly. Our analysis provides a fine-grained description of the dynamics of neurons throughout training and reveals two distinct phases: in the first phase clean points achieve close to zero loss, in the second phase clean points oscillate on the boundary of zero loss while corrupt points either converge towards zero loss or are eventually zeroed by the network. We prove these results using a combinatorial approach that involves bounding the number of clean versus corrupt updates across these phases of training.

## 1 Introduction

Conventional machine learning wisdom suggests that the generalization error of a complex model will typically be worse than that of a simpler model when both are trained to interpolate data. Indeed, the bias-variance trade-off implies that although choosing a complex model is advantageous in terms of approximation error, it comes at the price of an increased risk of overfitting. The traditional solution to managing this trade-off is to use some form of regularization, allowing the optimizer to select a predictor from a rich class of functions while at the same time encouraging it to choose one that is simple. However, in recent years this perspective has been challenged by the observation that deep learning models, trained with minimal if any form of regularization, can almost perfectly interpolate noisy data with nominal cost to their generalization performance (Zhang et al., 2017; Belkin et al., 2018, 2019). This phenomenon is referred to as _benign overfitting_. Following these empirical observations, a line of research has emerged aiming to theoretically characterize the conditions under which various machine learning models, trained to zero loss on noisy data, obtain, at least asymptotically, optimal generalization error. To date, analyses in this regard have focused primarily on linear models, including linear regression (Bartlett et al., 2020; Muthukumar et al., 2020; Wu and Xu, 2020; Chatterji and Long, 2021; Zou et al., 2021; Hastie et al., 2022; Koehler et al., 2021; Wang et al., 2021; Chatterji and Long, 2022; Cao et al., 2021; Shamir, 2022), logistic regression (Chatterji and Long, 2021; Muthukumar et al., 2021; Wang et al., 2021b) and kernel regression (Belkin et al., 2018; Mei and Montanari, 2019; Liang and Rakhlin, 2020; Liang et al., 2019). With regards to understanding benign overfitting in neural networks, in the Neural Tangent Kernel (NTK) regime (Jacot et al., 2018) the prediction of a neural network is well approximated via kernel regression (Adlam and Pennington, 2020). However, this regime typically requires unrealistically large network width and fails to capture feature learning.
Indeed, despite being the initial source of inspiration for the subject, an understanding of when and how neural networks benignly overfit in the rich, feature learning regime remains elusive.

### Related Work and Contributions

In this work, we study benign overfitting in binary classification for two-layer ReLU networks, trained using gradient descent and hinge loss, on label corrupted, linearly separable data. We remark that a line of work (Brutzkus et al., 2018; Wang et al., 2019; Yang et al., 2021) studies the convergence of gradient descent in a similar setting on generic, linearly separable data without label corruptions. These works also require additional assumptions, notably leaky ReLU instead of ReLU, insertion of noise into the optimization algorithm or changes to the loss function. To the best of our knowledge, there are only two existing lines of work which study benign overfitting in neural networks outside of the kernel regime. First, concerning the most relevant line of prior work to our own, Frei et al. (2022) consider a smooth, leaky ReLU activation function, train the network using the logistic instead of the hinge loss and assume the data is drawn from a mixture of well-separated sub-Gaussian distributions. The key result of this work is that given a sufficient number of iterations of GD, then the network will interpolate the noisy training data while also achieving minimax optimal generalization error up to constants in the exponents. A concurrent work Xu and Gu (2023) extends this result to more general activation functions including ReLU, relaxes the assumptions on the noise distribution to being centered with bounded logarithmic Sobolev constant, and also improves the convergence rate. As highlighted in Xu and Gu (2023), the fact that ReLU is non-smooth and non-leaky significantly complicates the analysis of both the convergence and generalization. A second line of work (Cao et al., 2022; Kou et al., 2023) studies benign overfitting in two-layer convolutional as opposed to feedforward neural networks. Whereas here and in Frei et al. (2022); Xu and Gu (2023) each data point is modeled as the sum of a signal and noise component, in Cao et al. (2022); Kou et al. (2023) the signal and noise components lie in disjoint patches. The weight vector of each neuron is applied to both patches separately and a non-linearity, such as ReLU, is applied to the resulting pre-activation. In this setting, the authors prove interpolation of the noisy training data and derive conditions on the clean margin under which the network benignly vs non-benignly overfits. We emphasize that the data model studied in this work is very different to the setting we study here, and as a result we primarily restrict our comparison to that with Frei et al. (2022) and the concurrent work Xu and Gu (2023). We now summarize our contributions: in particular, under certain assumptions on the model hyperparameters, we prove conditions on the clean margin resulting in the three distinct training outcomes highlighted below. We remark that prior works focus primarily on characterizing benign overfitting.

1. **Benign overfitting:** Theorem 3.1 provides conditions under which the training loss converges to zero and bounds the generalization error, showing that it is asymptotically optimal. This result is analogous to those of Frei et al. (2022) and Xu and Gu (2023).
2.
**Non-benign overfitting:** Theorem 3.6 provides conditions under which the network achieves zero training loss while the generalization error is bounded below by a constant. Unlike Frei et al. (2022) and Xu and Gu (2023), this is not due to the non-separability of the data model but is instead a result of the neural network failing to learn the optimal classifier.
3. **No overfitting:** Theorem 3.8 provides conditions under which the network achieves zero training loss on points with uncorrupted label signs but nonzero loss on points with corrupted signs. Again the generalization error is bounded and shown to be asymptotically optimal.

To conclude this section we further remark that our proof techniques are quite different from those used in Frei et al. (2022); Xu and Gu (2023). This is due to the fact that we study the hinge instead of the logistic loss; we discuss the differences arising from this in detail in Section 3. In particular, we set up the problem in such a way that the convergence analysis reduces to a combinatorial problem which involves counting the number of activations of clean versus corrupt points during various stages of training. Our analysis further provides a detailed description of the dynamics of the network's neurons, thereby allowing us to understand how the network fits both the clean and corrupted data.

## 2 Preliminaries

### Data model

We consider a training sample of \(2n\) pairs of points and their labels \((\mathbf{x}_{i},y_{i})_{i=1}^{2n}\) where \((\mathbf{x}_{i},y_{i})\in\mathbb{R}^{d}\times\{-1,+1\}\). Furthermore, we identify two disjoint subsets \(\mathcal{S}_{T}\subset[2n]=\{1,\dots,2n\}\) and \(\mathcal{S}_{F}\subset[2n]\), \(\mathcal{S}_{T}\cup\mathcal{S}_{F}=[2n]\), which correspond to the clean and corrupt points in the sample respectively. The categorization of a point as clean or corrupted is determined by its label: for all \(i\in[2n]\) we assume \(y_{i}=\beta(i)(-1)^{i}\) where \(\beta(i)=-1\) iff \(i\in\mathcal{S}_{F}\) and \(\beta(i)=1\) otherwise. In addition, we assume \(|\mathcal{S}_{F}\cap[2n]_{e}|=|\mathcal{S}_{F}\cap[2n]_{o}|=k\) and \(|\mathcal{S}_{T}\cap[2n]_{e}|=|\mathcal{S}_{T}\cap[2n]_{o}|=n-k\), where \([2n]_{e}\subset[2n]\) and \([2n]_{o}\subset[2n]\) are the even and odd indices, respectively. We remark that this assumption simplifies the exposition of our results but is not integral to our analysis. Each data point has the form

\[\mathbf{x}_{i}=(-1)^{i}(\sqrt{\gamma}\mathbf{v}+\sqrt{1-\gamma}\beta(i)\mathbf{n}_{i}). \tag{1}\]

Here \(\mathbf{v}\in\mathbb{R}^{d}\) satisfies \(\|\mathbf{v}\|=1\); furthermore, we refer to \(\mathbf{v}\) as the signal vector, as the alignment of a clean point with \(\mathbf{v}\) determines its sign. Indeed, \(\operatorname{sign}(\langle\mathbf{x}_{i},\mathbf{v}\rangle)=(-1)^{i}=y_{i}\) for \(i\in\mathcal{S}_{T}\) whereas \(\operatorname{sign}(\langle\mathbf{x}_{i},\mathbf{v}\rangle)=-y_{i}\) for \(i\in\mathcal{S}_{F}\). Thus we may view the labels of corrupt points as flipped from their clean state. The vectors \((\mathbf{n}_{i})_{i=1}^{2n}\) are mutually independent and identically distributed (i.i.d.) random vectors drawn from the uniform distribution over \(\mathbb{S}^{d-1}\cap\operatorname{span}\{\mathbf{v}\}^{\perp}\), which we denote \(U(\mathbb{S}^{d-1}\cap\operatorname{span}\{\mathbf{v}\}^{\perp})\).
Clearly this distribution is symmetric, mean zero and for any \(\mathbf{n}\sim U(\mathbb{S}^{d-1}\cap\operatorname{span}\{\mathbf{v}\}^{\perp})\) it holds that \(\mathbf{n}\perp\mathbf{v}\) and \(\|\mathbf{n}\|=1\). We refer to these vectors as noise components due to the fact that they are independent of the labels of their respective points. The real, scalar quantity \(\gamma\in[0,1]\) controls the strength of the signal versus the noise and also defines the clean margin. Finally, at test time a clean label \(y\sim U(\{-1,1\})\) is sampled and the corresponding test data point has the form \[\mathbf{x}=y(\sqrt{\gamma}\mathbf{v}+\sqrt{1-\gamma}\mathbf{n}), \tag{2}\] where again \(\mathbf{n}\sim U(\mathbb{S}^{d-1}\cap\operatorname{span}\{\mathbf{v}\}^{\perp})\). The key idea we use to characterize the training dynamics is to reduce the analysis of the trajectory of each neuron to that of counting the number of clean versus corrupt updates to it. This combinatorial approach relies on each point having similar sized signal and noise components. In order to make our analysis as clear as possible, we select a data model which ensures the signal and noise components are consistent in size across all points. We emphasize that these assumptions are not strictly necessary and we believe analogous analyses could be conducted when the signal and noise components are instead appropriately bounded. In addition, and as discussed in more detail in Section 3.2, the orthogonality of the signal and noise components allows us to demonstrate non-benign overfitting even when a perfect classifier exists. ### Network architecture, optimization and initialization We consider a densely connected, single layer feed-forward neural network \(f:\mathbb{R}^{2m\times d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}\) with the following forward pass map, \[f(\mathbf{W},\mathbf{x})=\sum_{j=1}^{2m}(-1)^{j}\phi(\langle\mathbf{w}_{j},\mathbf{x}\rangle).\] Here \(\phi(z):=\max\{0,z\}\) denotes the ReLU activation function and \(\mathbf{w}_{j}\) the \(j\)-th row of the weight matrix \(\mathbf{W}\in\mathbb{R}^{2m\times d}\). The network weights are optimized using full batch gradient descent (GD) with step size \(\eta>0\) in order to minimize the hinge loss over a training sample \(((\mathbf{x}_{i},y_{i}))_{i=1}^{2n}\subset(\mathbb{R}^{d}\times\{-1,1\})^{2n}\) sampled as described in Section 2.1. After \(t^{\prime}\) iterations this optimization process generates a sequence of weight matrices \((\mathbf{W}^{(t)})_{t=0}^{t^{\prime}}\). For convenience, we overload our notation for the forward pass map of the network and let \(f(t,\mathbf{x}):=f(\mathbf{W}^{(t)},\mathbf{x})\). Furthermore, we denote the hinge loss on the \(i\)-th point at iteration \(t\) as \(\ell(t,i):=\max\{0,1-y_{i}f(t,\mathbf{x}_{i})\}\). The hinge loss over the entire training sample at iteration \(t\) is therefore \(L(t):=\sum_{i=1}^{2n}\ell(t,i)\). Let \(\mathcal{F}^{(t)}:=\{i\in[2n]:\ell(t,i)>0\}\) and \(\mathcal{A}_{j}^{(t)}:=\{i\in[2n]:\langle\mathbf{w}_{j}^{(t)},\mathbf{x}_{i}\rangle>0\}\) denote the sets of point indices that have nonzero loss and which activate the \(j\)th neuron at iteration \(t\) respectively.
With \[\frac{\partial\ell(t,i)}{\partial w_{jr}}=\begin{cases}0,&\langle\mathbf{w}_{j}^{(t)},\mathbf{x}_{i}\rangle\leq 0,\\ -(-1)^{j}y_{i}x_{ir},&\langle\mathbf{w}_{j}^{(t)},\mathbf{x}_{i}\rangle>0\end{cases}\] then the GD update rule1 for the neuron weights at iteration \(t\geq 0\) may be written as Footnote 1: Although the derivative of ReLU clearly does not exist at zero, we follow the routine procedure of defining an update rule that extends the gradient update to cover this event. \[\mathbf{w}_{j}^{(t+1)}=\mathbf{w}_{j}^{(t)}+(-1)^{j}\eta\sum_{l=1}^{2n}\mathbb{1}\,(l\in\mathcal{A}_{j}^{(t)}\cap\mathcal{F}^{(t)})y_{l}\mathbf{x}_{l}. \tag{3}\] In regard to the initialization of the network parameters, for convenience we assume each neuron's weight vector is drawn mutually i.i.d. uniform from the centered sphere with radius \(\lambda_{w}>0\). We remark that results analogous to the ones presented hold if the weights are instead initialized mutually i.i.d. as \(w_{jr}^{(0)}\sim\mathcal{N}(0,\sigma_{w}^{2})\) where \(\sigma_{w}^{2}\) is sufficiently small. ### Notation For indices \(i,j\in\mathbb{Z}_{\geq 1}\) we say \(i\sim j\) iff \(\left(-1\right)^{i}=\left(-1\right)^{j}\). We often refer to a data point or neuron by its index alone, e.g. "point \(i\)" refers to the \(i\)-th training point \(\left(\mathbf{x}_{i},y_{i}\right)\). For two iterations \(t_{0},t_{1}\) with \(t_{1}>t_{0}\) we define the following. 1. \(G_{j}(t_{0},t_{1}):=\sum_{i\in\mathcal{S}_{T}}\sum_{\tau=t_{0}}^{t_{1}-1}\mathbb{1}(i\in\mathcal{A}_{j}^{(\tau)}\cap\mathcal{F}^{(\tau)})\) is the number of clean updates applied to the \(j\)-th neuron between iterations \(t_{0}\) and \(t_{1}\). 2. \(B_{j}(t_{0},t_{1}):=\sum_{i\in\mathcal{S}_{F}}\sum_{\tau=t_{0}}^{t_{1}-1}\mathbb{1}(i\in\mathcal{A}_{j}^{(\tau)}\cap\mathcal{F}^{(\tau)})\) is the number of corrupt updates applied to the \(j\)-th neuron between iterations \(t_{0}\) and \(t_{1}\). 3. \(G(t_{0},t_{1}):=\sum_{j\in[2m]}G_{j}(t_{0},t_{1})\) and \(B(t_{0},t_{1}):=\sum_{j\in[2m]}B_{j}(t_{0},t_{1})\) are the total number of clean and corrupt updates applied to the entire network between iterations \(t_{0}\) and \(t_{1}\). 4. \(T(t_{0},t_{1}):=G(t_{0},t_{1})+B(t_{0},t_{1})\) is the total number of updates from all points applied to the entire network between iterations \(t_{0}\) and \(t_{1}\). We extend all these definitions to the case \(t_{0}=t_{1}\) by letting the empty sum be 0. Finally, we use \(C\geq 1\) and \(c\leq 1\) to denote generic, positive constants. A short code sketch of the data model and the update rule (3) is given below. ## 3 Results The main contributions of this work are Theorem 3.1, Theorem 3.6 and Theorem 3.8, which characterize how the margin of the clean data drives three different training regimes: namely benign overfitting, overfitting and non-overfitting respectively. We primarily distinguish between the three aforementioned training outcomes based on conditions on the signal strength \(\gamma\in[0,1]\). Assuming the corrupt points are the minority in the training sample, then heuristically we might expect the following behavior as \(\gamma\) varies: if \(n\gamma\gg 1\), then the signal dominates the noise during training, corrupted points are never fitted and the network generalizes well. If \(n\gamma\ll 1\), then all points are eventually fitted based on their noise component and the network generalizes poorly.
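To make the preceding setup concrete, the following is a minimal NumPy sketch of the data model of Section 2.1 together with the GD update rule (3). It is purely illustrative: the function names and hyperparameter choices are ours rather than taken from the paper's experiments (cf. Appendix F), and the 0-based indexing shifts the parity factor \((-1)^{i}\) relative to the text.

```python
import numpy as np

def sample_data(n, k, d, gamma, rng):
    """Draw 2n points from the model of Eq. (1): unit signal vector v,
    noise uniform on the unit sphere orthogonal to v, with k corrupted
    labels among the even indices and k among the odd indices."""
    beta = np.ones(2 * n)
    beta[rng.choice(np.arange(0, 2 * n, 2), size=k, replace=False)] = -1.0
    beta[rng.choice(np.arange(1, 2 * n, 2), size=k, replace=False)] = -1.0
    v = np.zeros(d)
    v[0] = 1.0
    noise = rng.standard_normal((2 * n, d))
    noise[:, 0] = 0.0                                       # project onto span{v}^perp ...
    noise /= np.linalg.norm(noise, axis=1, keepdims=True)   # ... and normalize
    parity = (-1.0) ** np.arange(2 * n)
    X = parity[:, None] * (np.sqrt(gamma) * v + np.sqrt(1 - gamma) * beta[:, None] * noise)
    y = beta * parity                                       # y_i = beta(i) * (-1)^i
    return X, y

def train(X, y, m, eta, lam_w, max_iters, rng):
    """Full-batch GD on the hinge loss for f(W, x) = sum_j (-1)^j relu(<w_j, x>)."""
    W = rng.standard_normal((2 * m, X.shape[1]))
    W *= lam_w / np.linalg.norm(W, axis=1, keepdims=True)   # radius-lam_w sphere init
    out_sign = (-1.0) ** np.arange(2 * m)
    for _ in range(max_iters):
        pre = X @ W.T                        # pre-activations <w_j, x_i>
        f = np.maximum(pre, 0.0) @ out_sign  # network outputs
        active = y * f < 1.0                 # indices with nonzero hinge loss, F^(t)
        if not active.any():                 # zero training loss: training terminates
            break
        mask = active[:, None] & (pre > 0.0)                         # i in A_j^(t) cap F^(t)
        W += eta * out_sign[:, None] * ((mask * y[:, None]).T @ X)   # update rule, Eq. (3)
    return W
```

One can then sweep \(\gamma\) at fixed \(n\), \(k\) and \(d\) in this sketch to probe the three regimes discussed below.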
Given this heuristic, we expect to observe benign overfitting when \(\gamma\) is small but not too small: in this regime the network learns the signal, thus ensuring it generalizes well, but corrupted points can still be fitted based on their noise component, thereby allowing training to zero loss. With each theorem we give a sketch of its proof: full proofs are contained in the Supplementary Materials, which also contain supporting numerical simulations in Appendix F. Throughout this section, and in order to establish a common setting in which to observe a variety of different behaviors, we make the following assumptions on the hyperparameters of the network and data. **Assumption 1**.: _For a sufficiently large constant \(C\geq 1\), failure probability \(\delta\in(0,1/2)\) and noise inner product bound \(\rho\in(0,1)\), let \(d\geq C\rho^{-2}\log(n/\delta)\), \(k\leq cn\), \(\lambda_{w}\leq cn\) and \(\eta\leq\xi\), where \(\xi\) depends on \(n\), \(m\), \(k\), \(\gamma\), and \(d\)._ We remark that the condition \(d\geq C\rho^{-2}\log(n/\delta)\) ensures the noise components are nearly orthogonal: in particular, \(\max_{i\neq l}|\langle\mathbf{n}_{i},\mathbf{n}_{l}\rangle|\leq c\rho\) with high probability for some positive constant \(c\). This near orthogonality condition on the noise terms is restrictive, but is a common assumption in the related works Frei et al. (2022); Xu & Gu (2023). We note that the value of \(\rho\) required for each of our results to hold varies. Likewise, the optimal constants \(c\) and \(C\) required in each case also vary and we will not concern ourselves with finding the tightest possible constants. While there are differences, the proofs of Theorems 3.1, 3.6 and 3.8 generally fit the following outline. 1. Use concentration to show with high probability the training data is nearly orthogonal and a certain initialization pattern is satisfied. 2. Characterize the activation pattern early in training before any point achieves zero loss. 3. Bound the activations at an iteration just before any training point achieves zero loss. 4. Based on bounds on the activations at a given iteration, derive an iteration-independent upper bound on the number of subsequent updates that can occur before convergence. At convergence all points either have zero loss or activate no neurons. We emphasize that our proof techniques are significantly different from those used in Frei et al. (2022); Xu & Gu (2023) due to the differences between the hinge and logistic loss. In particular, letting \(\sigma(z)\) denote the logistic loss, a key step in the proof of these prior works is showing at any iteration \(t\geq 0\) that the ratio \(\sigma^{\prime}(y_{i}f(t,\mathbf{x}_{i}))/\sigma^{\prime}(y_{l}f(t,\mathbf{x}_{l}))\) is upper bounded by a constant for all pairs of points \(i,l\) in the training sample. For the hinge loss this approach is not feasible: indeed, if at an iteration \(t\) some points achieve zero loss while others have not, then this ratio is unbounded. ### Benign overfitting The following theorem states the conditions, in particular on \(\gamma\), under which the network simultaneously achieves asymptotically optimal test error and achieves zero loss on both the clean and corrupted data after a finite number of iterations. A detailed proof of this Theorem along with the associated lemmas is provided in Appendix C.
**Theorem 3.1**.: _Let Assumption 1 hold and further assume \(n\geq C\log(1/\delta)\), \(m\geq C\log(n/\delta)\), \(\rho\leq c\gamma\) and \(C\sqrt{\log(n/\delta)/d}\leq\gamma\leq cn^{-1}\). Then there exists a sufficiently small step-size \(\eta\) such that with probability at least \(1-\delta\) over the randomness of the dataset and network initialization the following hold._ 1. _The training process terminates at an iteration_ \(\mathcal{T}_{\text{end}}\leq\frac{Cn}{\eta}\)_._ 2. _For all_ \(i\in[2n]\) _then_ \(\ell(\mathcal{T}_{\text{end}},\mathbf{x}_{i})=0\)_._ 3. _The generalization error satisfies_ \[\mathbb{P}(\operatorname{sgn}(f(\mathcal{T}_{\text{end}},\mathbf{x}))\neq y)\leq\exp\left(-cd\gamma^{2}\right).\] Proof sketch.: Recall the parameter \(\rho\) bounds the inner products of the noise components of the training data. Specifically, the conditions on \(d\) given in Assumption 1 ensure \(\max_{i\neq l}|\langle\mathbf{n}_{i},\mathbf{n}_{l}\rangle|\leq\frac{\rho}{1-\gamma}\) with high probability. We also identify the following sets of neurons for \(p\in\{-1,1\}\), \[\Gamma_{p} :=\left\{j\in[2m]\ :\ (-1)^{j}=p,\ G_{j}(0,1)(\gamma-\rho)-B_{j}(0,1)(\gamma+\rho)\geq\frac{2\lambda_{w}}{\eta}\right\},\] \[\Theta_{p} :=\left\{j\in\Gamma_{p}\ :\ G_{j}(0,1)(\gamma+\rho)-B_{j}(0,1)(\gamma-\rho)\leq 1-\gamma+\rho\right\}.\] These sets are useful in that neurons in \(\Gamma_{p}\) have predictable activation patterns during the early phase of training. Furthermore, if \(i\) is the index of a corrupted point which activates a neuron in \(\Theta_{y_{i}}\) at initialization, then this point will continue activating this neuron throughout the early phase of training. A concentration argument shows that \(\Gamma_{p}\) and \(\Theta_{p}\) are significant subsets of \([2m]_{p}\) with high probability. In summary, for benign overfitting we say we have a _good initialization_ if i) \(\max_{i\neq l}|\langle\mathbf{n}_{i},\mathbf{n}_{l}\rangle|\leq\frac{\rho}{1-\gamma}\), ii) for some small constant \(\alpha\in(0,1)\) then \(|\Gamma_{p}|\geq(1-\alpha)m\), and iii) for each \(i\in\mathcal{S}_{F}\) there exists a \(j\in[2m]\) such that \((-1)^{j}=y_{i}\) and \(i\in\mathcal{A}_{j}^{(0)}\). **Lemma 3.2**.: _Under the assumptions of Theorem 3.1 and assuming we have a good initialization, suppose at some iteration \(t_{0}\) the loss of every clean point is bounded above by \(a\in\mathbb{R}_{\geq 0}\), while the loss of every corrupted point is bounded above by \(b\in\mathbb{R}_{\geq 0}\). Then for all \(t\geq t_{0}\) the total number of clean and corrupt updates which occur after \(t_{0}\) are upper bounded as follows,_ \[G(t_{0},t) \leq Cn\left(\frac{a+bk\gamma}{\eta}\right), B(t_{0},t) \leq Ck\left(\frac{b+an\gamma}{\eta}\right).\] Because these upper bounds are independent of \(t\), we may conclude that training reaches a steady state after a finite number of iterations. In particular, this means every point either has zero loss or activates no neurons. To prove the network achieves zero loss we need only show that every training point activates at least one neuron after the last training update. This property is simple to prove for clean points: indeed, if \(i\in\mathcal{S}_{T}\) then \(i\) activates every neuron in \(\Gamma_{y_{i}}\) after the first iteration. An inductive argument then shows \(i\) activates a neuron in every subsequent iteration.
Showing that every corrupt point activates a neuron at the end of training is not as simple, and requires a more careful consideration of the training dynamics. To this end we say a neuron \(j\) is a _carrier_ of a training point \(i\) between iterations \(t_{0}\) and \(t\) if \(i\in\mathcal{A}_{j}^{(\tau)}\) for all \(\tau\in[t_{0},t]\). In order to prove the network fits the corrupt data we need to show each corrupt point \((\mathbf{x}_{i},y_{i})\) has a carrier neuron in \(\Theta_{y_{i}}\) throughout training. If too many clean points activate such a neuron, then it is possible it will eventually cease to carry any corrupt points, and if a corrupt point loses all of its carrier neurons then it cannot be fitted. We show this event cannot occur by studying the activation patterns of neurons in \(\Gamma:=\Gamma_{1}\cup\Gamma_{-1}\). **Lemma 3.3**.: _Let the assumptions of Theorem 3.1 hold and suppose we have a good initialization. Let \(j\in\Gamma\) and \(t>0\) be an iteration such that no point achieves zero loss at or before this iteration. For a point \(i\in\mathcal{S}_{T}\), then \(i\in\mathcal{A}_{j}^{(t)}\) iff \(i\sim j\). For a point \(i\in\mathcal{S}_{F}\) with \(i\not\sim j\), \(i\in\mathcal{A}_{j}^{(t)}\) iff \(i\in\mathcal{A}_{j}^{(1)}\)._ The next lemma bounds the activations just before any points achieve zero loss. **Lemma 3.4**.: _Under the assumptions of Theorem 3.1 and assuming we have a good initialization, there is an iteration \(\mathcal{T}_{1}\leq\frac{C}{\eta m[1+(\gamma+\rho)(n-k)]}\) before any point achieves zero loss where the following hold for a constant that varies from line to line._ 1. _For all_ \(p\in\{-1,1\}\)_,_ \(j\in\Gamma_{p}\)_,_ \(i\sim j\)_, and_ \(i\in\mathcal{S}_{T}\)_, then_ \(\langle\mathbf{w}_{j}^{(\mathcal{T}_{1})},\mathbf{x}_{i}\rangle\geq cm^{-1}\)_._ 2. _For all_ \(p\in\{-1,1\}\)_,_ \(j\in\Gamma_{p}\)_,_ \(i\not\sim j\)_, and_ \(i\in\mathcal{S}_{T}\)_, then_ \(\langle\mathbf{w}_{j}^{(\mathcal{T}_{1})},\mathbf{x}_{i}\rangle\leq-cn\gamma m^{-1}\)_._ 3. _For all_ \(i\in\mathcal{S}_{T}\)_, then_ \(\ell(\mathcal{T}_{1},\mathbf{x}_{i})\leq c\)_._ Due to the fact that clean points are the majority and all of them push the network in the same signal direction, immediately after \(\mathcal{T}_{1}\) the loss of clean points is small and clean points activate all neurons in the relevant \(\Gamma_{p}\) strongly. Furthermore, once the loss of a clean point is small it stays small. In subsequent iterations, if the number of corrupt updates since \(\mathcal{T}_{1}\) is also small, approximately \(C\varepsilon n\gamma/(\eta(\gamma+\rho))\), then each clean point will activate on all but an \(\varepsilon\) proportion of neurons in the relevant \(\Gamma_{p}\). As the hinge loss switches off the updates from a point once it reaches zero loss, eventually clean points do not participate in every iteration. Furthermore, when they do participate their updates are spread over a large proportion of the neurons. This ensures that most neurons in \(\Theta_{p}\) cannot receive too many clean updates in isolation, thereby ensuring carrier neurons continue to carry corrupted points throughout training. Lastly, the generalization result follows from the near orthogonality of the noise components of both the training and test data. Indeed, using the same concentration bound, a test point satisfies the same inner product noise condition as the training data with high probability.
**Lemma 3.5**.: _Consider a test label \(y\in\{-1,1\}\) and point \(\mathbf{x}:=y\sqrt{\gamma}\mathbf{v}+\sqrt{1-\gamma}\mathbf{n}\), where \(\mathbf{n}\sim U(\mathbb{S}^{d-1}\cap\operatorname{span}\{\mathbf{v}\}^{\perp})\) is drawn independently of the training sample. Assume the conditions of Theorem C.14 hold and that we have a good initialization. In addition, suppose that \(|\langle\mathbf{n},\mathbf{n}_{l}\rangle|<\frac{\rho}{1-\gamma}\) for all \(l\in[2n]\), then \(yf(\mathcal{T}_{\text{end}},\mathbf{x})>0\)._ ### Non-benign overfitting The next theorem states a negative overfitting result: for sufficiently small \(\gamma\) the network again achieves zero loss on both the clean and corrupted data after a finite number of iterations, but the probability of misclassification is bounded from below by a constant. A detailed proof of this Theorem along with the associated lemmas is provided in Appendix D. **Theorem 3.6**.: _Let Assumption 1 hold and further assume \(m\geq C\log(n/\delta)\), \(\rho\leq cn^{-1}\), \(\eta<1/(2mn)\) and \(\gamma\leq\frac{c}{\sqrt{md}}\). Then with probability at least \(1-\delta\) over the randomness of the dataset and network initialization the following hold._ 1. _The training process terminates at an iteration_ \(\mathcal{T}_{\text{end}}\leq\frac{Cn}{\eta}\)_._ 2. _For all_ \(i\in[2n]\) _then_ \(\ell(\mathcal{T}_{\text{end}},\mathbf{x}_{i})=0\)_._ 3. _The generalization error satisfies_ \[\mathbb{P}(\mathrm{sgn}(f(\mathcal{T}_{\text{end}},\mathbf{x}))\neq y)\geq\frac{1}{8}.\] We remark that the above result holds for \(n\geq 1\) and any \(k\). Indeed, in this regime the noise components dominate the training dynamics and we therefore expect the performance of the network on test points to be random. We re-emphasize that, unlike in the data model used by Frei et al. (2022) and Xu and Gu (2023), there does exist a classifier with perfect generalization error for arbitrarily small \(\gamma\). The significance of Theorem 3.6 is that under the data model considered GD results in a suboptimal classifier. Proof sketch.: Similar to the proof of Theorem 3.1, in the context of non-benign overfitting we say the initialization is "good" if \(\max_{i\neq l}|\langle\mathbf{n}_{i},\mathbf{n}_{l}\rangle|\leq\frac{\rho}{1-\gamma}\) and if each point in the training sample activates a neuron of the same sign. Under the conditions of Theorem 3.6 it can be shown that a good initialization in this context happens with high probability. **Lemma 3.7**.: _In addition to the conditions of Theorem 3.6, suppose we have a good initialization and that for some iteration \(t_{0}\) we have \(\ell(t_{0},\mathbf{x}_{i})\leq a\) for all \(i\in[2n]\). Then \(T(t_{0},t)\leq\frac{Cna}{\eta}\)._ As for the benign overfitting case, we need to show that each training point activates a neuron after the last training iteration. Under the assumptions on \(\gamma\) it can be shown that the loss of a point decreases during every iteration it participates in, regardless of the status and activations of other points in the training sample. All that remains is to lower bound the generalization error. To this end observe for a test point \((\mathbf{x},y)\) that \[y(f(\mathcal{T}_{\text{end}},\mathbf{x})-f(\mathcal{T}_{\text{end}},-\mathbf{x}))=\sum_{j=1}^{2m}y(-1)^{j}\langle\mathbf{w}_{j}^{(\mathcal{T}_{\text{end}})},\mathbf{x}\rangle,\] where we have used that \(\phi(a)-\phi(-a)=a\) for the ReLU activation. If the right-hand side of this equality is negative we can conclude that either \(\mathbf{x}\) or \(-\mathbf{x}\) is misclassified.
That this event is true with probability lower bounded by a constant in turn follows by appropriately upper bounding the norm of the network weights in the signal subspace, as well as lower bounding the norm of the network weights in the noise subspace. ### No-overfitting The following theorem illustrates that for \(\gamma\) larger than the upper bound required for benign overfitting, after convergence, which occurs in a finite number of iterations, only the clean points achieve zero loss. By contrast, the corrupt points cease to activate any neurons and are thus zeroed by the network. The network also achieves asymptotically optimal test error. A detailed proof of this Theorem along with the associated lemmas is provided in Appendix E. **Theorem 3.8**.: _Let Assumption 1 hold and further assume \(m\geq 2\), \(n\geq C\log\left(\frac{m}{\delta}\right)\), \(\rho\leq c\gamma\) and \(Cn^{-1}\leq\gamma\leq ck^{-1}\). Then there exists a sufficiently small step-size \(\eta\) such that with probability at least \(1-\delta\) over the randomness of the dataset and network initialization we have the following._ 1. _The training process terminates at an iteration_ \(\mathcal{T}_{\text{end}}\leq\frac{Cn}{\eta}\)_._ 2. _For all_ \(i\in\mathcal{S}_{T}\) _then_ \(\ell(\mathcal{T}_{\text{end}},\mathbf{x}_{i})=0\) _while_ \(\ell(\mathcal{T}_{\text{end}},\mathbf{x}_{i})=1\) _for all_ \(i\in\mathcal{S}_{F}\)_._ 3. _The generalization error satisfies_ \[\mathbb{P}(\operatorname{sgn}(f(\mathcal{T}_{\text{end}},\mathbf{x}))\neq y)\leq\exp\left(-cd\gamma^{2}\right).\] We remark that the upper bound on \(\gamma\) allows us to re-deploy the same proof technique used to prove convergence in the benign overfitting case, thereby ensuring the training process converges within a finite number of iterations. We conjecture this upper bound can be increased but leave such an analysis to future work. Proof sketch.: In the context of no-overfitting we identify a "good" initialization as one for which \(\max_{i\neq l}|\left\langle\mathbf{n}_{i},\mathbf{n}_{l}\right\rangle|\leq\frac{\rho}{1-\gamma}\) and \(\Gamma=\Gamma_{-1}\cup\Gamma_{+1}=[2m]\). Under the conditions of Theorem 3.8 it can be shown that a good initialization in this context occurs with high probability; furthermore, the resulting activation pattern early during training is simple to characterize. **Lemma 3.9**.: _Suppose that the conditions of Theorem 3.8 hold and that we have a good initialization. Consider an arbitrary \(j\in[2m]\) and iteration \(2\leq t\leq\mathcal{T}_{0}\) occurring before a point has achieved zero loss. Then \(i\in\mathcal{A}_{j}^{(t)}\) iff \(i\sim j\)._ Next we bound the activations of the training points just before \(\mathcal{T}_{0}\), the iteration at which any training points first achieve zero loss. In the following we use \(F_{1},F_{2}\) and \(F_{3}\) as placeholders for expressions depending on the data and model parameters. Here, for the sake of conveying the ideas in the proof, we do not write them in full and refer the reader to the Supplementary Material.
**Lemma 3.10**.: _Suppose that the conditions of Theorem 3.8 hold and that we have a good initialization. Then there is an iteration \(\mathcal{T}_{1}\) before any point achieves zero loss such that_ \[\langle\mathbf{w}_{j}^{(\mathcal{T}_{1})},\mathbf{x}_{i}\rangle \leq\frac{F_{1}}{m}\text{ if }i\in\mathcal{S}_{F},i\sim j,\] \[\langle\mathbf{w}_{j}^{(\mathcal{T}_{1})},\mathbf{x}_{i}\rangle \geq\frac{F_{2}}{m}\text{ if }i\in\mathcal{S}_{T},i\sim j,\] \[\langle\mathbf{w}_{j}^{(\mathcal{T}_{1})},\mathbf{x}_{i}\rangle \leq-\frac{F_{3}}{m}\text{ if }i\not\sim j.\] Next we seek to ensure the activation patterns remain mostly fixed: in particular, we show \(i\in\mathcal{A}_{j}^{(t)}\) if \(i\in\mathcal{S}_{T}\) and \(i\sim j\), while \(i\notin\mathcal{A}_{j}^{(t)}\) if \(i\not\sim j\). **Lemma 3.11**.: _Suppose that the conditions of Theorem 3.8 hold and that we have a good initialization. In addition, for \(a,b\in\mathbb{R}\) assume there is a time \(t_{0}\) such that \(\ell(t_{0},\mathbf{x}_{i})\leq a\) for all \(i\in\mathcal{S}_{T}\) and \(\phi(\langle\mathbf{w}_{j}^{(t_{0})},\mathbf{x}_{i}\rangle)\leq b\) for all \(i\in\mathcal{S}_{F}\) and \(i\sim j\). If \(i\in\mathcal{S}_{T}\), \(i\sim j\) implies \(i\in\mathcal{A}_{j}^{(\tau)}\) and \(i\not\sim j\) implies \(i\notin\mathcal{A}_{j}^{(\tau)}\) for all \(\tau\) satisfying \(t_{0}\leq\tau<t\), then_ \[B_{j}(t_{0},t)\leq\frac{Ck}{\eta}\left(b+\frac{a}{m}\right), \sum_{j\sim s}G_{j}(t_{0},t)\leq\frac{C(a+mb)}{\gamma\eta}.\] As before, this update bound is finite and iteration-independent; therefore GD converges provided the assumptions on the activation patterns are not violated. Furthermore, if these activation patterns do hold, then every clean point activates a neuron and no corrupt point activates a neuron of the same label sign. Therefore, under the assumption on the activation pattern, at convergence clean points achieve zero loss while corrupt points have non-zero loss, i.e., they activate no neurons. It therefore suffices to prove the condition on the activation pattern, which we show holds as long as \[\min\left\{\frac{F_{3}}{m},\frac{F_{2}}{m}\right\}\geq Ck(\gamma+\rho)\eta\left(\frac{F_{1}+1-F_{2}}{m}\right)\geq\eta(\gamma+\rho)B_{j}(t_{0},t).\] As \(C\) does not depend on the parameters, we can ensure this condition holds by letting \(k(\gamma+\rho)\) be sufficiently small. With \(\rho\leq cn^{-1}\), we show it suffices that \(\gamma<ck^{-1}\). Finally, the generalization result follows in a fashion almost identical to that used for Lemma 3.5. ### Comparison of results Finally, we compare the differing regimes of our results side-by-side with those of Frei et al. (2022); Xu & Gu (2023) in Table 1. We note that comparisons are not like-for-like: Frei et al. (2022) consider smooth, leaky ReLU and logistic loss; Xu & Gu (2023) a generalized family of activation functions, which includes ReLU, together with logistic loss; and this paper ReLU and hinge loss. Furthermore, in addition to differences in the noise distribution discussed in Section 2.1, Frei et al. (2022); Xu & Gu (2023) assume a data model where the norm of each data point is approximately proportional to \(\sqrt{d}\). We therefore re-scale their results in order to make comparison with this work in which all data points have unit norm. Taken together these results suggest, at least under the type of data model considered, that benign overfitting occurs for signal strengths proportional to \(1/n\).
Furthermore, our results also suggest that above \(1/n\) one might expect to see a transition to no-overfitting, while below \(1/\sqrt{nd}\) a transition to harmful overfitting. We remark that the latter is non-trivial in our setting as for all \(\gamma>0\) the classifier \(h(\mathbf{x})=\operatorname{sign}(\langle\mathbf{v},\mathbf{x}\rangle)\) always has perfect accuracy. We remark that all this analysis takes place in high dimensions; we leave an analysis in low dimensions to future work. \begin{table} \begin{tabular}{c c c c c c} \hline \hline & Frei et al. (2022) & Xu \& Gu (2023) & Theorem 3.1 & Theorem 3.6 & Theorem 3.8 \\ \hline \(n\geq C\cdot\) & \(\log\left(\frac{1}{\delta}\right)\) & \(\log\left(\frac{m}{\delta}\right)\) & \(\log\left(\frac{1}{\delta}\right)\) & \(1\) & \(\log\left(\frac{m}{\delta}\right)\) \\ \(m\geq C\cdot\) & \(1\) & \(\log\left(\frac{n}{\delta}\right)\) & \(\log\left(\frac{n}{\delta}\right)\) & \(\log\left(\frac{n}{\delta}\right)\) & \(1\) \\ \(\gamma\leq c\cdot\) & \(\frac{1}{n}\) & \(\frac{1}{n}\) & \(\frac{1}{n}\) & \(\frac{1}{\sqrt{nd}}\) & \(\frac{1}{k}\) \\ \(\gamma\geq C\cdot\) & \(\frac{1}{\sqrt{nd}}\) & \(\sqrt{\frac{\log(\frac{md}{nd})}{nd}}\) & \(\sqrt{\frac{\log(n/\delta)}{d}}\) & \(0\) & \(\frac{1}{n}\) \\ Result & Benign & Benign & Benign & Non-benign & No-overfit \\ \hline \hline \end{tabular} \end{table} Table 1: Across all results \(k\leq cn\), while \(d\geq Cn^{2}\log(n/\delta)\) for Frei et al. (2022), Xu \& Gu (2023) and Theorem 3.1. ## 4 Conclusion Developing a theoretical description of benign overfitting in neural networks is a highly nascent area, with mathematical results available only for very limited data models. Furthermore, the conditions describing the transitions between overfitting versus non-overfitting and benign versus non-benign even in these simplified settings are yet to be fully characterized. The goal of this work was to address this issue as well as explore the impact of using the hinge loss. In particular, and admittedly for a simple data model, we prove three different training outcomes, corresponding to overfitting, benign overfitting and no-overfitting, based on conditions on the margin of the clean data. Our analysis differs significantly from prior works due to the fact that the ratio of loss between different training points can be unbounded. The key limitation of this work is the restrictiveness of the data model: in particular, as in the prior works we use a near-orthogonal noise model and assume a rank-one signal; we also place additional conditions on the noise distribution. In addition to generalizing the signal and noise model as well as improving the bounds required for our results to hold, we believe the following themes are important areas for future research: first, relaxing the near-orthogonal noise condition; second, exploring data models beyond those which are linearly separable; and third, investigating the role and impact of depth. #### Acknowledgments EG, WS and DN were partially supported by NSF DMS 2011140 and NSF DMS 2108479. EG was also partially supported by NSF DGE 2034835.
2303.12215
In-source and in-trap formation of molecular ions in the actinide mass range at CERN-ISOLDE
The use of radioactive molecules for fundamental physics research is a developing interdisciplinary field limited dominantly by their scarce availability. In this work, radioactive molecular ion beams containing actinide nuclei extracted from uranium carbide targets are produced via the Isotope Separation On-Line technique at the CERN-ISOLDE facility. Two methods of molecular beam production are studied: extraction of molecular ion beams from the ion source, and formation of molecular ions from the mass-separated ion beam in a gas-filled radio-frequency quadrupole ion trap. Ion currents of U$^+$, UO$_{1-3}^+$, UC$_{1-3}^+$, UF$_{1-4}^+$, UF$_{1,2}$O$_{1,2}^+$ are reported. Metastable tantalum and uranium fluoride molecular ions are identified. Formation of UO$_{1-3}^+$, U(OH)$_{1-3}^+$, UC$_{1-3}^+$, UF$_{1,2}$O$_{1,2}^+$ from mass-separated beams of U$^+$, UF$_{1,2}^+$ with residual gas is observed in the ion trap. The effect of trapping time on molecular formation is presented.
M. Au, M. Athanasakis-Kaklamanakis, L. Nies, J. Ballof, R. Berger, K. Chrysalidis, P. Fischer, R. Heinke, J. Johnson, U. Köster, D. Leimbach, B. Marsh, M. Mougeot, J. Reilly, E. Reis, M. Schlaich, Ch. Schweiger, L. Schweikhard, S. Stegemann, J. Wessolek, F. Wienholtz, S. G. Wilkins, W. Wojtaczka, Ch. E. Düllmann, S. Rothe
2023-03-21T22:26:17Z
http://arxiv.org/abs/2303.12215v1
# In-source and in-trap formation of molecular ions in the actinide mass range at CERN-ISOLDE ###### Abstract The use of radioactive molecules for fundamental physics research is a developing interdisciplinary field limited dominantly by their scarce availability. In this work, radioactive molecular ion beams containing actinide nuclei extracted from uranium carbide targets are produced via the Isotope Separation On-Line technique at the CERN-ISOLDE facility. Two methods of molecular beam production are studied: extraction of molecular ion beams from the ion source, and formation of molecular ions from the mass-separated ion beam in a gas-filled radio-frequency quadrupole ion trap. Ion currents of U\({}^{+}\), UO\({}_{1-3}\)\({}^{+}\), UC\({}_{1-3}\)\({}^{+}\), UF\({}_{1-4}\)\({}^{+}\), UF\({}_{1,2}\)O\({}_{1,2}\)\({}^{+}\) are reported. Metastable tantalum and uranium fluoride molecular ions are identified. Formation of UO\({}_{1-3}\)\({}^{+}\), U(OH)\({}_{1-3}\)\({}^{+}\), UC\({}_{1-3}\)\({}^{+}\), UF\({}_{1,2}\)O\({}_{1,2}\)\({}^{+}\) from mass-separated beams of U\({}^{+}\), UF\({}_{1,2}\)\({}^{+}\) with residual gas is observed in the ion trap. The effect of trapping time on molecular formation is presented. + Footnote †: journal: Nucl. Inst. and Methods in Physics Research B ## 1 Introduction There is interdisciplinary interest in radioactive molecules bridging fields of molecular physics, atomic physics and nuclear physics, as well as physics beyond the standard model [1]. Experimental research possibilities with many radioactive molecules are currently constrained by their limited production. This is particularly the case for radioactive molecules containing an actinide element. Only actinides in the decay chains of primordial \({}^{232}\)Th and \({}^{235,238}\)U are available in macroscopic quantities in nature. All others must be produced artificially. The Isotope Separation On-Line (ISOL) method allows production of a wide range of radioactive nuclides across the nuclear chart through reactions induced by the impact of an accelerated particle beam hitting a thick target. The ISOLDE facility at CERN [2] uses 1.4-GeV protons accelerated by CERN's Proton Synchrotron Booster (PSB) and can employ a variety of target and ion source systems. Once created, the reaction products must diffuse out of the target material and effuse to the ion source, where they are ionized and extracted as a beam of charged particles. For refractory species, forming volatile compounds has been employed as a technique to improve extraction from the target by delivering the isotopes of interest as molecular ion beams [3; 4; 5; 6]. In specific cases, the formation of molecules can reduce the isobaric contamination remaining after mass separation. The production of actinide molecules could address the scarcity and purity problems limiting many experiments on actinide isotopes. In addition, they present promising cases themselves [1; 7; 8; 9]. ## 2 Method The ISOLDE facility was used to study actinide species produced from four porous micro-structured uranium carbide (UC\({}_{x}\)) target units: a previously-irradiated target coupled to a rhenium surface ion source; a previously-irradiated target coupled to a tungsten surface ion source; and two new targets coupled to Forced Electron Beam Induced Arc Discharge (FEBIAD) ion sources [10]. The ISOLDE Resonance Ionization Laser Ion Source (RILIS [11]) was used to resonantly ionize atomic U with the ionization scheme shown in Fig. 1.
Ion beams were extracted from the ion source using a 30-kV potential difference and separated by their mass-to-charge ratio in the separator magnet. Mass-separated ion beams were either sent to a MagneToF detector or cooled and bunched in the ISOLTRAP Radio-Frequency Quadrupole cooler-buncher (RFQ-cb) [12]. The bunched beam was sent to the Multi-Reflection Time-of-Flight Mass Spectrometer (MR-ToF MS) [13], where ions were separated based on their mass-to-charge ratios, including isobars, which were identified through ToF mass measurements. The experimental setup is shown schematically in Figure 1. ## 3 In-source molecular formation The target units with the tungsten surface ion source and the two FEBIAD type ion sources were equipped with calibrated leaks (1.3E-4, 3E-4 and 5.7E-5 mbar L s\({}^{-1}\)) through which carbon tetrafluoride (tetrafluoromethane, CF\({}_{4}\)) gas was injected as a reagent for fluoride molecule formation. Using the two different types of ion sources, surface-, electron-impact-, and non-resonantly laser-ionized molecules were observed. Experimental parameters are indicated in the captions of Figures 2 and 3. ### Non-resonant laser and plasma ionization UO\({}^{+}\) and UO\({}_{2}^{+}\) dominate the ion beam for oxidized targets. With first ionization potentials of 6.0313(6) eV and 6.128(3) eV for UO and UO\({}_{2}\), respectively [15], these species are observed with both surface and FEBIAD ion sources. With CF\({}_{4}\) injection, tungsten surface ionization, and 30 W of 532-nm laser light, UF\({}^{+}\) and UF\({}_{2}^{+}\) are the most intense uranium molecular ion beams. Surface-ionized UF\({}_{3}^{+}\) is detectable with a Faraday Cup; UF\({}_{4}^{+}\) is not observed (Fig. 3 a). Using a FEBIAD ion source, the UF\({}_{3}^{+}\) sideband is dominant and UF\({}_{4}^{+}\) is observed. Higher rates of U\({}^{+}\) likely result from the breakup of uranium molecules in the FEBIAD ion source before extraction as an ion beam. Sideband ratios depend strongly on the concentration of CF\({}_{4}\), favouring \(\mathrm{UF}_{2,3}^{+}\) with higher leak rates of CF\({}_{4}\). Bond dissociation energies of UO (7.856(135) eV), UO\({}_{2}\) (7.773(145) eV) [15] suggest that some dissociation of neutral and singly-charged oxides should occur within the FEBIAD ion source. ### Metastable molecular ions In mass spectrometry, the term 'metastable' is used to describe molecular ions possessing sufficient excess energy to fragment in the field-free region after leaving the ion source [16]. Upon fragmentation, the fragment ions retain a fraction of the kinetic energy of the extracted precursor ion. This causes fragment ions to pass through the mass separator magnetic field with an apparent mass \(m^{*}\) corresponding to [16] \[m^{*}=\frac{m_{f}^{2}}{m_{p}} \tag{1}\] where \(m_{f}\) represents the mass of the fragment ion and \(m_{p}\) represents the mass of the precursor metastable molecular ion. Fragment ions and their precursors were identified from the apparent mass and studied as a function of the target and surface ion source temperatures (Fig. 4). Increasing the ion source temperature significantly increased the fragment ion intensity, suggesting that at high temperatures, the molecules are more likely to have sufficient excess energy to reach the metastable states that fragment after extraction. Fragment molecules are indicated where observed in Fig. 3.
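As a numerical illustration of Eq. 1 (the numbers here are ours, using integer mass numbers in place of exact atomic masses), consider a metastable \({}^{238}\)U\({}^{19}\)F\({}_{4}\)\({}^{+}\) precursor with \(m_{p}\approx 314\) u that dissociates to UF\({}_{3}\)\({}^{+}\) with \(m_{f}\approx 295\) u after extraction: \[m^{*}=\frac{m_{f}^{2}}{m_{p}}\approx\frac{(295)^{2}}{314}\approx 277\,,\] so the fragment is transmitted by the separator magnet at an apparent mass of about 277 u, well below the true fragment mass of 295 u. This displacement is what allows fragment ions and their metastable precursors to be recognized in the mass-separated beam.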
## 4 In-trap molecular formation To study in-trap molecular formation, the ISOLTRAP RFQ-cb was employed with a buffer gas (here He at up to \(10^{-5}\) mbar measured within 1 m of the injection) to cool and bunch the ions. Mass-separated beams ionized using each of the studied ion sources were sent to the RFQ-cb for cooling and bunching. For studies of beam composition and in-trap molecular formation, a sample of the continuous ion beam was taken into the RFQ-cb. Ions were confined in the RFQ-cb for a trapping time during which interaction occurred between the ions, the buffer gas and residual gas contamination. The ion bunch was then ejected from the RFQ-cb and the arrival times of ions in each shot were measured with respect to the ejection time. Identification was performed with the MR-ToF MS using expected ToF values extracted from a calibration using \({}^{85,87}\)Rb\({}^{+}\), \({}^{133}\)Cs\({}^{+}\) from the ISOLTRAP offline ion source [17], and online \({}^{238}\)U\({}^{+}\) from ISOLDE. ToF spectra were accumulated over a number of shots as seen in Figures 5, 6, and 7. The atomic uranium resonant laser ionization scheme in the ion source affected the count rates of uranium molecules (e.g. UO\({}^{+}\) and UOH\({}^{+}\) in Fig. 5) observed from the RFQ-cb ion trap. Combined with the mass-separation step of the separator magnets, this indicates that the molecules are formed from ions in the mass-separated U\({}^{+}\) beam rather than in the target or ion source. For U and Ta, ratios depend on the trapping time as shown in Figures 6 and 7. In addition to atomic ions forming molecules, molecular ions mass-separated by ISOLDE (including UF\({}^{+}\), UF\({}_{2}^{+}\), TaF\({}^{+}\), TaF\({}_{2}^{+}\)) reacted with the residual gas or buffer-gas contaminants to form molecules by pickup of C, O, H and OH (Figs. 7, 8) and in some cases (UF\({}^{+}\), UF\({}_{2}^{+}\)) were observed with higher charge states (UF\({}^{2+}\), UF\({}_{2}^{2+}\)). To avoid detector saturation, attenuators were used to reduce the intensity of the ion beam injected into the RFQ-cb. This reduced absolute rates of in-trap formation and the formation efficiency relative to the ion beam intensity extracted from the ion source. Rates of molecular formation in the ion-trap represent ratios in the regime below space-charge limitations. Notably, the in-trap UO\({}_{x}\) formation showed an identical response to the storage time in the RFQ-cb before and after the addition of a liquid nitrogen cold trap to the buffer gas line, indicating that reaction products enter the ion trap through diffusion into the vacuum chamber rather than buffer gas injection. Figure 4: Rates of ion beams from a tungsten surface ion source recorded on a Faraday Cup after mass separation from the ISOLDE separator magnets shown in logarithmic scale. a), b), c): as a function of ion-source temperature for a target temperature of 1578 \({}^{\circ}\)C. d), e), f): rates as a function of target temperature for an ion source temperature of 2230 \({}^{\circ}\)C. c) and f): rates of fragment ions measured on the apparent mass corresponding to the indicated dissociation of metastable molecular ions. ## 5 Conclusions For previously irradiated or oxidized targets, UO\({}^{+}\) and UO\({}_{2}^{+}\) sidebands are on the order of \(10^{5}\,\mathrm{ions\,s^{-1}}\) at target temperatures above \(1600\,^{\circ}\)C.
Without further addition of oxygen, the intensity of these contaminants will decrease over time, with the UO\({}_{2}^{+}\) sideband depleting first, followed by the UO\({}^{+}\) sideband. At nominal target temperatures (\(2000\,^{\circ}\)C) and above, UC\({}^{+}\), UC\({}_{2}^{+}\) can similarly reach rates of \(10^{4}\) ions s\({}^{-1}\) or more. Fragments formed from the dissociation of metastable U and Ta fluorides arrive as additional non-isobaric contaminants in mass-separated beams. Fragment ions and their precursor ions can be identified using their apparent mass (Eq. 1) and anticipated for a given mass-to-charge ratio with the rates presented here. We present some representative rates of molecular ions extracted from different ion sources with oxidation, CF\({}_{4}\), and target temperatures noted in Figure 4 and Table 1, as well as some representative rates for formation of molecular ions in the RFQ-cb with trapping times noted in Table 2. Rates from the ion source depend very strongly on target and ion source temperature and CF\({}_{4}\) injection rate. Rates from the RFQ-cb depend on trapping time. Production of fluoride molecular ions from the ion source is achieved by adding CF\({}_{4}\). Since many actinide fluorides are stable at temperatures above \(1000\,^{\circ}\)C, in-source formation is a promising approach that can use parameters including target and ion source temperatures and fluorine partial pressure to control formation rates. To form molecules that may not be stable at high temperatures, molecular formation in the RFQ ion trap is presented as a possible approach. Formation of oxides, carbides, and hydroxides from the mass-separated atomic and molecular ion beams occurs in the ISOLTRAP RFQ-cb in the presence of residual gases. Trapping time is shown to be a parameter influencing the formation of molecules in the ion trap. The molecular formation reported here in the ISOLTRAP RFQ-cb may have implications for other RFQ-cb ion traps used in beam preparation (e.g. the ISOLDE cooler ISCOOL [2]), which require further investigation. These studies combined characterize the composition of beams heavier than the target material and provide information on the process of creating molecular actinide beams in targets and in ion traps. Figure 5: ToF spectrum of A=238 mass-separated ion beams after cooling and bunching in the ISOLTRAP RFQ-cb, then trapping for 2000 revolutions in the MR-ToF MS as calculated for \({}^{238}\)U. The status of the U resonant laser (on or off) is indicated. Vertical lines shown in the top panel indicate ToFs expected from offline calibrations. See text for further details. Figure 6: ToF spectra for various storage times (indicated on the left) of A=238 mass-separated ion beams from the tungsten surface ion source with RILIS for U after cooling and bunching in the ISOLTRAP RFQ-cb, then trapping for 2000 revolutions in the MR-ToF MS as calculated for \({}^{238}\)UO\({}^{+}\). Summed counts in each of the identified peaks are shown as a function of trapping time. See text for further details. Figure 7: ToF spectrum of A=181 mass-separated ion beams from the FEBIAD ion source after bunching in the ISOLTRAP RFQ-cb. Cooling time is indicated. Vertical lines in the top panel show ToFs expected from the offline calibration for single-pass operation of the MR-ToF MS.
## 6 Acknowledgements The authors gratefully acknowledge support from the ISOLDE operations team, the ISOLDE targets and ion sources team, and Simone Gilardoni. This project has received funding from the European Union's Horizon 2020 Research and Innovation Program (grant No. 861198 project 'LISA' MSC ITN). The authors acknowledge support from the German Federal Ministry of Education and Research (BMBF) for ISOLTRAP (grant No. 05P18HGCIA and 05P21HGCII). L.N. acknowledges support from the Wolfgang Gentner Program (grant No. 13E18CHA).
2307.08751
Pseudospectra of Holographic Quasinormal Modes
Quasinormal modes and frequencies are the eigenvectors and eigenvalues of a non-Hermitian differential operator. They hold crucial significance in the physics of black holes. The analysis of quasinormal modes of black holes in asymptotically Anti-de Sitter geometries plays also a key role in the study of strongly coupled quantum many-body systems via gauge/gravity duality. In contrast to normal Sturm-Liouville operators, the eigenvalues of non-Hermitian (and non-normal) operators generally exhibit instability under small perturbations. This research focuses on the stability analysis of quasinormal frequencies pertaining to asymptotically planar AdS black holes, employing pseudospectrum analysis. Specifically, we concentrate on the pseudospectra of scalar and transverse gauge fields, shedding light on their relevance within the framework of gauge/gravity duality.
Daniel Arean, David Garcia-Fariña, Karl Landsteiner
2023-07-17T18:00:14Z
http://arxiv.org/abs/2307.08751v3
# Pseudospectra of Holographic Quasinormal Modes ###### Abstract Quasinormal modes and frequencies are the eigenvectors and eigenvalues of a non-Hermitian differential operator. They hold crucial significance in the physics of black holes. The analysis of quasinormal modes of black holes in asymptotically Anti-de Sitter geometries plays also a key role in the study of strongly coupled quantum many-body systems via gauge/gravity duality. In contrast to normal Sturm-Liouville operators, the eigenvalues of non-Hermitian (and non-normal) operators generally exhibit instability under small perturbations. This research focuses on the stability analysis of quasinormal frequencies pertaining to asymptotically planar AdS black holes, employing pseudospectrum analysis. Specifically, we concentrate on the pseudospectra of scalar and transverse gauge fields, shedding light on their relevance within the framework of gauge/gravity duality.

* 1 Introduction
* 2 Pseudospectrum and Stability
  * 2.1 Matrix Pseudospectrum
    * 2.1.1 Norm Dependence in a Simple Example
* 3 Quasinormal Modes and Pseudospectra
  * 3.1 Construction of the Eigenvalue Problem
    * 3.1.1 Example: Real Scalar in SAdS\({}_{4+1}\)
  * 3.2 Choice of Norm
    * 3.2.1 The Nature of the Energy Norm
    * 3.2.2 Time Independence of the Operator Norm
* 4 Holographic Model
  * 4.1 Real Scalar
  * 4.2 Transverse Gauge Field
* 5 Numerical method
* 6 Results
  * 6.1 Real Scalar in SAdS\({}_{4+1}\)
  * 6.2 Transverse Gauge Field in SAdS\({}_{4+1}\)
* 7 Conclusions
* A Computation of \(L^{\dagger}\)
  * A.1 Real Scalar Field
  * A.2 Transverse Gauge Field
* B Discretization in the Chebyshev grid
  * B.1 Collocation Method
  * B.2 Construction of the \(G_{E}\) matrix
  * B.3 Interpolation Between Grids
* C Pseudospectra in the \(L^{2}\)-norm
* D Numerical Values of the QNFs

## 1 Introduction Quasinormal modes are the solutions of linear differential equations on an open domain. The openness is encoded in outgoing boundary conditions. Such boundary conditions arise naturally in many branches of physics from the study of black holes to optics. The quasinormal modes can be thought of as the eigenmodes of linear differential operators. Due to the open boundary conditions the differential operator is non-Hermitian and the eigenvalues are complex. In gravitational physics, quasinormal modes are of utmost importance in the theory of black holes. Famously, a black hole horizon acts as a perfectly absorbing membrane. Therefore, solving a wave equation in a black hole background leads naturally to outgoing boundary conditions. In asymptotically flat space, radiation is also outgoing at infinity. The quasinormal modes that arise in asymptotically flat spacetimes are supposed to describe the late time ringdown of perturbed black holes. They arise for example in black hole mergers and are of highest interest for gravitational wave physics [1; 2; 3; 4]. The importance of quasinormal modes in asymptotically Anti-de Sitter (AdS) spaces derives from gauge/gravity duality [5; 6; 7; 8; 9]. Famously, gauge/gravity duality is conjectured to describe certain strongly coupled quantum many-body systems. While to date no concrete physical system has been found that is indeed modelled in all detail by gauge/gravity duality, it has led to important insights into areas such as hydrodynamics and transport theory in the relativistic regime [10; 11; 12].
Some key results are the extremely low specific shear viscosity of holographic models of the quark gluon plasma [13] or phase transitions towards superconducting states [14; 15] as well as strongly coupled quantum critical phases [16]. Quasinormal modes are a key ingredient in this development as they can be understood as the poles of the retarded Green's functions of the strongly coupled quantum many-body system [17; 18; 19]. The shear viscosity can be read off from a quasinormal mode corresponding to momentum diffusion [20] and second order phase transitions arise as a quasinormal frequency moves into the upper half plane, thus indicating an instability of the ground state [21; 22]. In particular, the modern way of understanding relativistic hydrodynamics is very much influenced by the properties of quasinormal modes in asymptotically AdS black holes. Relativistic hydrodynamics can be understood as an effective field theory organised in a derivative expansion [23]. As an effective field theory it has a limited range of validity. This limit can be set as the wavelength at which the first non-hydrodynamic mode decays at a slower rate than the hydrodynamic mode [24]. More recently, it has been conjectured that the limit is set by the radius in the complex plane at which the lowest hydrodynamic quasinormal modes collide with the first non-hydrodynamic mode [25; 26; 27]. A question that so far has not been investigated in the context of gauge/gravity duality is the stability of the quasinormal frequencies under small perturbations of the background geometry. In asymptotically flat space such studies have a long history [28; 29] and recently the issue has been taken up in [30; 31; 32; 33; 34; 35; 36] (see also [37] for horizonless compact objects). The key new development of these recent studies is the application of the method of pseudospectra [38]. The eigenvalues of Hermitian operators are stable in a technical sense. In fact, this stability is what makes perturbation theory successful in quantum physics. On the other hand, the eigenvalues of non-Hermitian and non-normal operators generically are unstable. This means they can be displaced large distances by even small perturbations of the original operator. Here stability and smallness of a perturbation have precise meanings and we will give exact definitions later on. Concretely, in the case of quasinormal modes the instability appears as a consequence of the non-conservative nature of the system, associated with the eventual damping of the fluctuations as they fall into the black hole (and radiate away to infinity in asymptotically flat spacetime). AdS at large distances from the black hole horizon acts as a confining box but it is still true that fluctuations are damped as they fall into the black hole. Thus the damping of fluctuations in black hole geometries is independent of the asymptotics of the spacetime. It is then reasonable to expect that such quasinormal modes and frequencies are also inherently unstable as occurs in asymptotically flat space.
The instability of the quasinormal mode spectrum would indicate that the excitation spectrum of the dual quantum field theories is also unstable under perturbations. In turn, this might raise questions about the robustness of conclusions for quantum many-body physics drawn from the behavior of quasinormal modes. Let us add here that despite the theoretically well established instability of the quasinormal mode spectrum in the asymptotically flat case, extraction of overtones from gravitational wave signals has been reported in [39; 40; 41; 42]. To probe the stability of the quasinormal mode frequencies (QNFs) we follow the example of [30] and turn to pseudospectrum analysis [38]. Broadly speaking, the pseudospectrum provides insight into how much perturbations of a given size can displace the spectrum of an operator. Accordingly, once the physically motivated notion of size is defined, pseudospectra serve as a powerful tool to discuss spectral stability or lack thereof. In this work we initiate a systematic study of the stability of quasinormal frequencies in AdS. We focus on two very simple cases: a real scalar field and a transverse gauge field in a Schwarzschild AdS\({}_{4+1}\) black brane (SAdS\({}_{4+1}\)). By gauge/gravity duality these correspond to a scalar operator and a conserved \(R\)-current of the strongly coupled maximally supersymmetric \(\mathcal{N}=4\) Yang-Mills theory [5]. Our choice of fluctuations is motivated by two main reasons. Firstly, we consider the SAdS\({}_{4+1}\) background because it is the archetypical example for gauge/gravity duality. Secondly, we choose a real scalar and a transverse gauge field for their simplicity and their lack of a hydro mode. Hydro modes dominate the low energy spectrum of the dual QFT as their corresponding QNF goes to zero as the momentum goes to zero. Consequently, analyzing the stability of hydro modes is akin to studying the stability of the hydrodynamic description of the dual quantum field theory [43]. As the study of hydrodynamic modes involves additional technical complexities due to the presence of gauge symmetries and the corresponding constraints, we will leave this topic for a follow-up work. The organisation of this paper is as follows. Since the method of pseudospectra is relatively new and perhaps little known in the context of gauge/gravity duality, we spend the first few sections reviewing some of the underlying mathematics. Here we draw heavily from [38] and also from a previous work on asymptotically flat space [30]. In particular we define the pseudospectrum and review its most important properties in section 2. In section 3 we focus specifically on the application of pseudospectra to quasinormal modes. We point out the importance of choosing the proper coordinates. Following [44] we choose a coordinate slice which we call _regular_. These coordinates have the important property that the equal time slices are spacelike outside the horizon but coincide with infalling Eddington-Finkelstein coordinates at the horizon.1 We discuss the importance of the choice of norm and give a physical reason why the chosen energy norm is the relevant one in the holographic context. Indeed, the energy in regular coordinates is not conserved due to outflow on the horizon. Footnote 1: We note that infalling coordinates are very well suited to impose outgoing boundary conditions. In section 4 we construct the specific differential operators that describe the fluctuations of scalar and transverse gauge fields in an AdS black hole background.
We show explicitly that the non-Hermiticity is concentrated on the horizon. As we discuss in detail, this leads to a clear picture of the local origin of the non-Hermiticity and non-normality. In section 5 we first outline our numerical methods, which are based on pseudospectral methods [45, 46]. Next, we introduce the selective pseudospectrum, which tests the stability under random local potential perturbations. And finally, to further explore the aforementioned stability, we construct specific potentials to test some relevant regimes. Section 6 is the core of the paper and contains the results of our pseudospectrum calculations. We mostly concentrate on the results for the energy norm but, to illustrate the norm dependence, we also present the result for the \(L^{2}\) norm in appendix C. Indeed, it turns out that the pseudospectrum for both the scalar field and the transverse vector shows the typical open contour lines indicating instability. In section 7 we summarize our findings, present our conclusions and give an outlook on further studies, in particular the highly interesting case of hydrodynamic modes. Some of the more technical details on the calculation of the adjoint differential operators and the numerical methods are presented in appendices A, B and E. The numerical values of the quasinormal frequencies are presented in appendix D.

## 2 Pseudospectrum and Stability

In this section we collect relevant theorems and definitions of the pseudospectrum of linear operators. We next specialize them to finite dimensional operators (matrices) and illustrate some key notions through the example of a \(2\times 2\) matrix. For additional details, we refer the reader to [45]. Given a closed linear operator \(L\) acting on a Hilbert space \(\mathcal{H}\) with domain \(\mathcal{D}(L)\), its spectrum \(\sigma(L)\) is defined as the set of points in the complex plane, \(z\), where its resolvent:2 Footnote 2: Recall that a closed linear operator \(L\) is a linear operator such that if a sequence \(\{x_{n}\}\) in \(\mathcal{H}\) converges to \(x\), then the sequence \(\{Lx_{n}\}\) converges to \(y\), where \(Lx=y\). From now on, when talking about operators, we implicitly assume that they are both closed and linear. \[\mathcal{R}(z;L)=(L-z)^{-1}\,, \tag{2.1}\] does not exist as a bounded operator on \(\mathcal{H}\). Within \(\sigma(L)\), we define the eigenvalues \(\{\lambda_{i}\}\) through the eigenvalue equation: \[Lu_{i}=\lambda_{i}u_{i}\,, \tag{2.2}\] where \(u_{i}\in\mathcal{D}(L)\) is the eigenvector corresponding to the eigenvalue \(\lambda_{i}\). Note that with our definitions, the eigenvalues correspond to isolated points in the spectrum (regions of dimension 0). Nonetheless, it is important to stress that, in general, the spectrum also contains regions of higher dimension, such as branch cuts (regions of dimension 1). That said, [44] proved that in the case of Schwarzschild AdS black holes the spectrum of quasinormal frequencies indeed consists of isolated points. A particularly important property of eigenvalues is that as long as the operator is self-adjoint (or, in more physical terms, the system is conservative), the spectral theorem ensures that if we perturb the system with a bounded operator of size \(\varepsilon\), the eigenvalues of the perturbed operator cannot suffer a displacement greater than \(\varepsilon\) [47, 48].3 More generally, this property holds for any normal operator \(A\) satisfying \(\left[A,A^{\dagger}\right]=0\), with \(A^{\dagger}\) the adjoint of \(A\). 
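Before proceeding, it is instructive to check this stability statement numerically. The minimal sketch below is our own Python illustration (not part of the original analysis); it applies random bounded perturbations of norm \(\varepsilon\) to a normal and to a non-normal matrix with the same eigenvalues and records the largest eigenvalue displacement. The non-normal matrix is the one analyzed in section 2.1.1 below.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 1e-3                      # perturbation size ||V||_2 = eps

A_normal = np.diag([-1.0, -2.0])         # normal: protected by the spectral theorem
A_nonnormal = np.array([[-1.0, 0.0],     # non-normal, same eigenvalues {-1, -2}
                        [-50.0, -2.0]])

def max_displacement(A, trials=2000):
    """Largest eigenvalue displacement over random perturbations of 2-norm eps."""
    lam0 = np.sort_complex(np.linalg.eigvals(A))
    worst = 0.0
    for _ in range(trials):
        V = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
        V *= eps / np.linalg.norm(V, 2)              # rescale so ||V||_2 = eps
        lam = np.sort_complex(np.linalg.eigvals(A + V))
        worst = max(worst, np.max(np.abs(lam - lam0)))
    return worst

print("normal:     max displacement / eps =", max_displacement(A_normal) / eps)
print("non-normal: max displacement / eps =", max_displacement(A_nonnormal) / eps)
# The first ratio stays below 1; the second becomes of order the condition
# number kappa ~ 50 computed in section 2.1.1 below.
```

For the normal matrix the displacements saturate the bound \(\varepsilon\); for the non-normal one they can exceed it by orders of magnitude, which is precisely the phenomenon the pseudospectrum quantifies.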
Physically, this ensures that any statement made relying on eigenvalue analysis is guaranteed to be right up to corrections of the same size as the error implicitly made when assuming a particular model. Footnote 3: The notion of size here is quite nontrivial and we shall discuss it in greater detail later. In general, we will consider that the size is given by the norm of the operator, which is inherited from the function norm in \(\mathcal{H}\). However, many physical systems are inherently non-conservative. In these systems, normality of the relevant operators is not guaranteed and therefore eigenvalues are, in general, no longer protected by the spectral theorem; meaning that small perturbations could potentially alter the spectrum in a significant manner. As any description of a physical system is always a model, and consequently has some inherent error associated with it, one can conclude [38] that eigenvalue analysis is insufficient when dealing with non-conservative systems defined through non-self-adjoint (and potentially non-normal) operators, as eigenvalues may not be stable. In order to characterize the stability of eigenvalues we need to introduce the notion of \(\varepsilon\)-pseudospectrum, which can be defined in three mathematically equivalent ways [38]: **Def. 2.1** (Resolvent norm approach).: _Given a closed linear operator \(L\) acting on a Hilbert space \(\mathcal{H}\) with domain \(\mathcal{D}(L)\), and \(\varepsilon>0\), the \(\varepsilon\)-pseudospectrum \(\sigma_{\varepsilon}(L)\) is_ \[\sigma_{\varepsilon}(L)=\{z\in\mathbb{C}:\|\mathcal{R}(z;L)\|>1/\varepsilon\}\,, \tag{2.3}\] _with the convention \(\|\mathcal{R}(z;L)\|=\infty\) for \(z\in\sigma(L)\)._ **Def. 2.2** (Perturbative approach).: _Given a closed linear operator \(L\) acting on a Hilbert space \(\mathcal{H}\) with domain \(\mathcal{D}(L)\), and \(\varepsilon>0\), the \(\varepsilon\)-pseudospectrum \(\sigma_{\varepsilon}(L)\) is_ \[\sigma_{\varepsilon}(L)=\{z\in\mathbb{C},\exists V,\|V\|<\varepsilon:z\in\sigma(L+V)\}\,. \tag{2.4}\] **Def. 2.3** (Pseudoeigenvalue approach).: _Given a closed linear operator \(L\) acting on a Hilbert space \(\mathcal{H}\) with domain \(\mathcal{D}(L)\), and \(\varepsilon>0\), the \(\varepsilon\)-pseudospectrum \(\sigma_{\varepsilon}(L)\) is_ \[\sigma_{\varepsilon}(L)=\{z\in\mathbb{C},\exists u^{\varepsilon}\in\mathcal{D}(L):\|(L-z)u^{\varepsilon}\|<\varepsilon\|u^{\varepsilon}\|\}\,, \tag{2.5}\] _where \(u^{\varepsilon}\) is an \(\varepsilon\)-pseudoeigenvector with \(\varepsilon\)-pseudoeigenvalue \(z\)._ Note that, contrary to the spectrum, the pseudospectrum depends on the operator norm, as it needs a notion of what constitutes a large or small perturbation to quantify stability. **Def. 2.4** (Operator norm).: _Given a bounded linear operator \(V\) acting on a Hilbert space \(\mathcal{H}\) equipped with a function norm \(\|\cdot\|\), we define its norm \(\|V\|\) as:_ \[\|V\|=\max_{u\in\mathcal{H}}\frac{\|Vu\|}{\|u\|}\,. \tag{2.6}\] Definition 2.2 corresponds to the physical intuition we were seeking: the \(\varepsilon\)-pseudospectrum constitutes the maximal region containing all possible displacements of the eigenvalues under perturbations of size \(\varepsilon\). It is then quite natural to represent the pseudospectrum as a contour map indicating the boundaries of these regions for multiple values of \(\varepsilon\) (see _e.g._ figure 1 for one such map corresponding to a \(2\times 2\) matrix). 
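Such contour maps are straightforward to generate numerically. The sketch below is our own Python illustration (again using the matrix of section 2.1.1); it evaluates the resolvent norm of definition 2.1 on a grid of complex points, using that in the \(\ell^{2}\)-norm \(\|\mathcal{R}(z;A)\|\) is the reciprocal of the smallest singular value of \(A-z\), a relation derived in general form in section 2.1 below.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative non-normal matrix (revisited in section 2.1.1).
A = np.array([[-1.0, 0.0],
              [-50.0, -2.0]])

# Grid of complex points z around the eigenvalues {-1, -2}.
x = np.linspace(-4.0, 1.0, 301)
y = np.linspace(-2.5, 2.5, 301)
X, Y = np.meshgrid(x, y)

# In the l2-norm, ||R(z; A)|| = 1 / s_min(A - z), so the boundary of the
# eps-pseudospectrum (definition 2.1) is the level set s_min(A - z) = eps.
smin = np.empty_like(X)
for i in range(X.shape[0]):
    for j in range(X.shape[1]):
        z = X[i, j] + 1j * Y[i, j]
        smin[i, j] = np.linalg.svd(A - z * np.eye(2), compute_uv=False)[-1]

plt.pcolormesh(X, Y, np.log10(smin), shading="auto")
plt.colorbar(label=r"$\log_{10}\, s_{\min}(A-z)$")
plt.contour(X, Y, smin, levels=[1e-3, 1e-2, 1e-1], colors="w")  # sigma_eps boundaries
plt.plot([-1.0, -2.0], [0.0, 0.0], "r.")                        # eigenvalues
plt.show()
```

Open, extended contours for small \(\varepsilon\), as produced by this map, are the visual signature of spectral instability.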
On the other hand, definition 2.1, despite lacking such a clear physical interpretation, is significantly more powerful as it allows us to establish a few relevant theorems whose proofs can be found in chapter 4 of [38]: **Thm. 2.1**.: _Given a closed linear operator \(L\) acting on a Hilbert space \(\mathcal{H}\) with domain \(\mathcal{D}(L)\), the \(\varepsilon\)-pseudospectrum \(\sigma_{\varepsilon}(L)\) is a nonempty open set of \(\mathbb{C}\), and any bounded connected component of \(\sigma_{\varepsilon}(L)\) has a nonempty intersection with \(\sigma(L)\)._ **Thm. 2.2**.: _Given a closed linear operator \(L\) acting on a Hilbert space \(\mathcal{H}\) with domain \(\mathcal{D}(L)\), the pseudospectra are strictly nested supersets of the spectrum: \(\cap_{\varepsilon>0}\,\sigma_{\varepsilon}(L)=\sigma(L)\), and conversely, for any \(\delta>0\), \(\sigma_{\varepsilon+\delta}(L)\supseteq\sigma_{\varepsilon}(L)+\Delta_{\delta}\) with \(\Delta_{\delta}\) the open disk of radius \(\delta\)._ **Thm. 2.3**.: _Given a closed linear operator \(L\) acting on a Hilbert space \(\mathcal{H}\) with domain \(\mathcal{D}(L)\), \(z\in\mathbb{C}\) and \(\varepsilon>0\), we have \(\left\|\mathcal{R}(\overline{z};L^{\dagger})\right\|=\left\|\mathcal{R}(z;L)\right\|\), \(\sigma(L^{\dagger})=\overline{\sigma(L)}\) and \(\sigma_{\varepsilon}(L^{\dagger})=\overline{\sigma_{\varepsilon}(L)}\).4_ Footnote 4: Regarding notation, we denote the complex conjugate with a bar and complex conjugate transpose with an asterisk. Definition 2.3 hints at the numerical difficulties arising when obtaining eigenvalues and eigenvectors of non-normal operators. When operating at a precision of \(\varepsilon\) we are unable to distinguish between an \(\varepsilon\)-pseudoeigenvalue and a genuine eigenvalue, which, by virtue of definition 2.2, implies that we cannot obtain the spectrum of the desired operator but instead that of a perturbed one. Thus, in the absence of normality, one needs to be especially careful when employing numerical methods to prevent loss of predictivity. One practical consequence is that even numerical errors can lead to displacements and therefore it is often necessary in numerical approximations to compute quasinormal frequencies with (much) higher precision than the typical double precision floating-point machine numbers. Besides pseudospectra, the condition numbers \(\{\kappa_{i}\}\) are another useful tool to study the stability of eigenvalues. They quantify the effect of perturbations of size \(\varepsilon\) through the knowledge of the orthogonality between the eigenvectors of the unperturbed operator and its adjoint (often referred to as right- and left-eigenvectors, respectively). Heuristically, the condition numbers exploit the lack of orthogonality between left- and right-eigenvectors in non-normal operators to look for potential instabilities. Drawing from [38], we formalize this in the following definition and theorem: **Def. 
2.5** (Condition numbers).: _Given a closed linear operator \(L\) acting on a Hilbert space \(\mathcal{H}\) and its adjoint \(L^{\dagger}\), we define the condition number \(\kappa_{i}\) associated with the eigenvalue \(\lambda_{i}\) of \(L\) as:_ \[\kappa_{i}=\frac{\left\|v_{i}\right\|\left\|u_{i}\right\|}{\left|\left\langle v_{i},u_{i}\right\rangle\right|}\,, \tag{2.7}\] _where \(\left\langle\cdot,\cdot\right\rangle\) is the inner product associated with the norm \(\left\|\cdot\right\|\), \(u_{i}\) is the right-eigenvector satisfying \(Lu_{i}=\lambda_{i}u_{i}\) and \(v_{i}\) is the left-eigenvector satisfying \(L^{\dagger}v_{i}=\overline{\lambda_{i}}v_{i}\)._ **Thm. 2.4** (Stability and condition numbers).: _Given a closed linear operator \(L\) acting on a Hilbert space \(\mathcal{H}\), whose spectrum contains a set of eigenvalues \(\{\lambda_{i}\}\), and a bounded perturbation \(V\) with \(\left\|V\right\|=\varepsilon\), we have:_ \[\left|\lambda_{i}(\varepsilon)-\lambda_{i}\right|\leq\varepsilon\kappa_{i}\,, \tag{2.8}\] _where \(\{\lambda_{i}(\varepsilon)\}\) are the eigenvalues of the perturbed operator \(L(\varepsilon)=L+V\)._ Proof.: Let us consider a closed operator \(L\), its adjoint \(L^{\dagger}\) and \(u_{i}\), \(v_{i}\) such that: \[Lu_{i}=\lambda_{i}u_{i}\,,\qquad L^{\dagger}v_{i}=\overline{\lambda_{i}}v_{i}\,. \tag{2.9}\] Defining the perturbed operator \(L(\varepsilon)=L+V\) with \(\left\|V\right\|=\varepsilon\), the perturbed right-eigenvector \(u_{i}(\varepsilon)=u_{i}+\varepsilon\delta u_{i}+\mathcal{O}\left(\varepsilon^{2}\right)\) satisfies: \[L(\varepsilon)u_{i}(\varepsilon)=\lambda_{i}(\varepsilon)u_{i}(\varepsilon)\,, \tag{2.10}\] with \(\lambda_{i}(\varepsilon)\) the corresponding perturbed eigenvalue. Then, keeping only the leading order in \(\varepsilon\) we have: \[\lambda_{i}(\varepsilon) =\frac{\langle v_{i},L(\varepsilon)u_{i}(\varepsilon)\rangle}{\langle v_{i},u_{i}(\varepsilon)\rangle}=\frac{\langle v_{i},Lu_{i}\rangle+\langle v_{i},Vu_{i}\rangle+\varepsilon\,\langle v_{i},L\delta u_{i}\rangle}{\langle v_{i},u_{i}\rangle+\varepsilon\,\langle v_{i},\delta u_{i}\rangle}\] \[=\lambda_{i}+\varepsilon\frac{\left\langle L^{\dagger}v_{i},\delta u_{i}\right\rangle}{\langle v_{i},u_{i}\rangle}-\varepsilon\lambda_{i}\frac{\langle v_{i},\delta u_{i}\rangle}{\langle v_{i},u_{i}\rangle}+\frac{\langle v_{i},Vu_{i}\rangle}{\langle v_{i},u_{i}\rangle}\] \[=\lambda_{i}+\varepsilon\lambda_{i}\frac{\langle v_{i},\delta u_{i}\rangle}{\langle v_{i},u_{i}\rangle}-\varepsilon\lambda_{i}\frac{\langle v_{i},\delta u_{i}\rangle}{\langle v_{i},u_{i}\rangle}+\frac{\langle v_{i},Vu_{i}\rangle}{\langle v_{i},u_{i}\rangle}=\lambda_{i}+\frac{\langle v_{i},Vu_{i}\rangle}{\langle v_{i},u_{i}\rangle}\,, \tag{2.11}\] from where we recover the result we wanted to prove: \[|\lambda_{i}(\varepsilon)-\lambda_{i}|=\left|\frac{\langle v_{i},Vu_{i}\rangle}{\langle v_{i},u_{i}\rangle}\right|\leq\frac{\|v_{i}\|\,\|Vu_{i}\|}{|\langle v_{i},u_{i}\rangle|}\leq\frac{\|v_{i}\|\,\|V\|\,\|u_{i}\|}{|\langle v_{i},u_{i}\rangle|}=\varepsilon\kappa_{i}\,. \tag{2.12}\] For a normal operator all eigenvalues are stable and have condition number 1.5 Interestingly enough, even for non-normal operators some eigenvalues may have condition number 1 and thus be stable; these are referred to as normal eigenvalues. Footnote 5: For a normal operator, \(L\) and its adjoint \(L^{\dagger}\) can be simultaneously diagonalized and thus \(v_{i}=u_{i}\) and \(\langle v_{i},u_{i}\rangle=\|v_{i}\|\|u_{i}\|\). 
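Definition 2.5 translates directly into a few lines of code. The following minimal sketch, our own Python illustration for the matrix of section 2.1.1 in the \(\ell^{2}\)-norm (where the adjoint is the conjugate transpose), evaluates the condition numbers from the left- and right-eigenvectors:

```python
import numpy as np

A = np.array([[-1.0, 0.0],
              [-50.0, -2.0]])     # non-normal matrix of section 2.1.1

# Right eigenvectors: A u_i = lambda_i u_i.
lam, U = np.linalg.eig(A)
# Left eigenvectors: A^dagger v_i = conj(lambda_i) v_i; in the l2-norm
# the adjoint A^dagger is the conjugate transpose.
lam_adj, V = np.linalg.eig(A.conj().T)
# Pair each conj(lambda_i) with the matching lambda_i.
order = [int(np.argmin(np.abs(lam_adj.conj() - l))) for l in lam]
V = V[:, order]

# kappa_i = ||v_i|| ||u_i|| / |<v_i, u_i>|   (definition 2.5)
for i, l in enumerate(lam):
    u, v = U[:, i], V[:, i]
    kappa = np.linalg.norm(u) * np.linalg.norm(v) / abs(np.vdot(v, u))
    print(f"lambda = {l.real:+.0f}, kappa = {kappa:.2f}")   # eigenvalues are real here
# Both eigenvalues come out with kappa = sqrt(2501) ~ 50: by theorem 2.4,
# perturbations of size eps can displace them by up to ~50*eps.
```

Note that `np.vdot` conjugates its first argument, matching the convention of an inner product antilinear in the first slot.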
Lastly, we note that throughout this section we have assumed \(\varepsilon\) to be small, but we have yet to give a formal description of smallness. As indicated by definition 2.2, pseudospectrum analysis is, in spirit, equivalent to a problem of perturbation theory in Quantum Mechanics and thus we decide to inherit the latter's definition of smallness: **Def. 2.6** (Smallness).: _Given a closed linear operator and a perturbation of size \(\varepsilon\), we say that \(\varepsilon\) is small if it is negligible when compared to the minimum distance between disconnected regions of the spectrum._

### Matrix Pseudospectrum

Throughout this work, we will limit ourselves to studying pseudospectra of matrices arising from the discretization of differential operators. Consequently, in this subsection, we present some relevant results specialized to matrices. Note that all previous theorems and definitions still hold, since \(N\times N\) matrices are bounded linear operators acting on an \(N\)-dimensional Hilbert space comprised of vectors with \(N\) components. We begin by constructing a more practical definition of the matrix norm using definition 2.4: **Thm. 2.5** (Matrix norm).: _Given an \(N\)-dimensional Hilbert space \(\mathcal{V}\) equipped with a norm \(\|\cdot\|\) and an \(N\times N\) matrix \(M\); the norm of \(M\) is:_ \[\|M\|=\sqrt{\rho\,\left(M^{\dagger}M\right)}\,, \tag{2.13}\] _with \(\rho\) the spectral radius:_ \[\rho(A)=\max_{\lambda\in\sigma(A)}\left\{|\lambda|\right\}\,. \tag{2.14}\] Proof.: We begin by pointing out that, for matrices, all elements in the spectrum are eigenvalues. Then, by definition 2.4, we have: \[\|M\|=\max_{u\in\mathcal{V}}\frac{\|Mu\|}{\|u\|}=\max_{u\in\mathcal{V}}\frac{\sqrt{\left\langle Mu,Mu\right\rangle}}{\|u\|}=\max_{u\in\mathcal{V}}\frac{\sqrt{\left\langle u,M^{\dagger}Mu\right\rangle}}{\|u\|}=\max_{u\in\mathcal{V}}\frac{\sqrt{\left\langle M^{\dagger}Mu,u\right\rangle}}{\|u\|}\,, \tag{2.15}\] where \(\left\langle\cdot,\cdot\right\rangle\) is the inner product associated with the norm and \(M^{\dagger}\) the adjoint with respect to the aforementioned inner product. As \(M^{\dagger}M\) is non-negative definite and self-adjoint, we can find an orthonormal basis \(\{e_{i}\}\) satisfying \(M^{\dagger}Me_{i}=\lambda_{i}e_{i}\) with \(\lambda_{i}\) real positive eigenvalues. Choosing \(\lambda_{n}=\max\{\lambda_{i}\}\), we then have: \[\left\langle u,M^{\dagger}Mu\right\rangle=\sum_{i,j}\overline{u^{i}}u^{j}\left\langle e_{i},M^{\dagger}Me_{j}\right\rangle=\sum_{i}\overline{u^{i}}u^{i}\lambda_{i}\leq\sum_{i}\lambda_{n}\,\overline{u^{i}}u^{i}=\lambda_{n}\|u\|^{2}=|\lambda_{n}|\|u\|^{2}\,, \tag{2.16}\] which implies: \[\|M\|=\max_{u\in\mathcal{V}}\frac{\sqrt{\left\langle u,M^{\dagger}Mu\right\rangle}}{\|u\|}=\sqrt{|\lambda_{n}|}=\sqrt{\max_{\lambda\in\sigma\left(M^{\dagger}M\right)}\left\{|\lambda|\right\}}=\sqrt{\rho\left(M^{\dagger}M\right)}\,. \tag{2.17}\] Using this expression of the matrix norm, we can rewrite definition 2.1 in terms of the minimum generalized singular value. We formalize this statement in the following theorem: **Thm. 2.6**.: _Given an \(N\times N\) matrix \(M\) acting on an \(N\)-dimensional Hilbert space \(\mathcal{V}\) equipped with a norm \(\left\|\cdot\right\|\), the \(\varepsilon\)-pseudospectrum \(\sigma_{\varepsilon}(M)\) is_ \[\sigma_{\varepsilon}(M)=\{z\in\mathbb{C}:s_{\min}(M-z)<\varepsilon\}\,, \tag{2.18}\] _where \(s_{\min}\) is the smallest generalized singular value:_ \[s_{\min}(A)=\sqrt{\min_{\lambda\in\sigma(A^{\dagger}A)}|\lambda|}\,. \tag{2.19}\] 
Proof.: First, we note that given an invertible matrix \(A\), we have: \[\max_{\lambda\in\sigma(A^{-1})}\{|\lambda|\}=\left(\min_{\lambda\in\sigma(A)}\{|\lambda|\}\right)^{-1}\,, \tag{2.20}\] which follows trivially from the diagonalization of \(A\). Then, the norm of the inverse can be written as follows: \[\left\|A^{-1}\right\|=\left(\min_{\lambda\in\sigma(A^{\dagger}A)}\{|\lambda|\}\right)^{-1/2}=\left[s_{\min}(A)\right]^{-1}\,. \tag{2.21}\] Note that we can consistently define the norm of the inverse of a non-invertible matrix to be \(\infty\), thereby extending this proof to arbitrary matrices. Now, specializing to the resolvent, we have: \[\|\mathcal{R}(z;M)\|=[s_{\min}(M-z)]^{-1}\Longrightarrow\|\mathcal{R}(z;M)\|<1/\varepsilon\Leftrightarrow s_{\min}(M-z)>\varepsilon\,, \tag{2.22}\] which, applying definition 2.1, implies that the theorem holds. Having expressed the pseudospectrum in terms of the generalized singular values, we can now establish the following corollary of theorem 2.2: **Cor. 2.1**.: _Given an \(N\times N\) normal matrix \(M\) acting on an \(N\)-dimensional Hilbert space \(\mathcal{V}\) equipped with a norm \(\|\cdot\|\), its \(\varepsilon\)-pseudospectrum \(\sigma_{\varepsilon}(M)\) is_ \[\sigma_{\varepsilon}(M)=\sigma(M)+\Delta_{\varepsilon}\,, \tag{2.23}\] _with \(\Delta_{\varepsilon}\) the open disk of radius \(\varepsilon\)._ Proof.: Let us consider a normal matrix \(M\) and its adjoint \(M^{\dagger}\). As \(M\) and \(M^{\dagger}\) commute, they can be simultaneously diagonalized. Furthermore, theorem 2.3 implies that the eigenvalues of \(M\) and \(M^{\dagger}\) are complex conjugate to each other, _i.e._, \(\lambda_{i}^{\dagger}=\overline{\lambda_{i}}\) for \(\lambda_{i}\in\sigma(M)\) and \(\lambda_{i}^{\dagger}\in\sigma(M^{\dagger})\). Then, the eigenvalues of \((M-z)^{\dagger}(M-z)\) are given by: \[\sigma\left((M-z)^{\dagger}(M-z)\right)=\sigma\left(M^{\dagger}M-zM^{\dagger}-\overline{z}M+|z|^{2}\right)=\left\{|\lambda|^{2}-z\overline{\lambda}-\overline{z}\lambda+|z|^{2}\right\}=\left\{|z-\lambda|^{2}\right\}\,, \tag{2.24}\] where \(\{\lambda\}=\sigma(M)\); and, consequently: \[s_{\min}(M-z)=\min_{\lambda_{i}\in\sigma(M)}|z-\lambda_{i}|\,. \tag{2.25}\] Lastly, recalling theorem 2.6, we conclude that the \(\varepsilon\)-pseudospectrum is given by: \[\sigma_{\varepsilon}(M)=\{z\in\mathbb{C}:\exists\,\lambda_{i}\in\sigma(M),\,|z-\lambda_{i}|<\varepsilon\}=\sigma(M)+\Delta_{\varepsilon}\,. \tag{2.26}\] Up to this point, we have been totally generic and have not assumed any particular norm. However, when working with \(N\times N\) matrices it is particularly convenient to choose the \(\ell^{2}\)-norm on \(N\)-component vectors (\(\ell_{N}^{2}\)-norm), defined as: \[\|u\|_{2}=\sqrt{\sum_{a=1}^{N}|u^{a}|^{2}}\,. \tag{2.27}\] As we will discuss later, the \(\ell_{N}^{2}\)-norm allows for significant optimization of the pseudospectrum algorithm, and thus it is convenient to always employ it. However, we have a preferred norm, the one induced by the discretization of the original Hilbert space where the differential operator acts. Then, to exploit the advantages of the \(\ell_{N}^{2}\)-norm, we need to establish a connection between both norms.6 Footnote 6: Relating definitions in a generic norm to the \(\ell_{N}^{2}\)-norm is identical to performing a change of basis in the Hilbert space from a "coordinate" basis with nontrivial metric to an "orthogonal" basis with euclidean metric. 
**Thm. 2.7**.: _Given the \(\ell_{N}^{2}\)-norm \(\|\cdot\|_{2}\) and a generic \(G\)-norm \(\|\cdot\|_{G}\) such that:_ \[\left\langle v,u\right\rangle_{G}=G_{ij}\overline{v^{i}}u^{j}\,, \tag{2.28}\] _with \(G=F^{*}F\) a symmetric positive definite \(N\times N\) matrix:_ * _The \(\varepsilon\)-pseudospectrum of a matrix \(M\) in the \(G\)-norm \(\sigma_{\varepsilon}^{G}(M)\) satisfies_ \[\sigma_{\varepsilon}^{G}(M)=\sigma_{\varepsilon}^{\ell_{N}^{2}}\left(FMF^{-1}\right)\,,\] (2.29) _with \(\sigma_{\varepsilon}^{\ell_{N}^{2}}(M)\) the pseudospectrum in the \(\ell_{N}^{2}\)-norm._ * _The condition numbers of the eigenvalue \(\lambda_{i}\) of a matrix \(M\) in the \(G\)-norm \(\kappa_{i}^{G}\) satisfy:_ \[\kappa_{i}^{G}=\frac{\|\tilde{v}_{i}\|_{2}\|\tilde{u}_{i}\|_{2}}{\left|\,\left\langle\tilde{v}_{i},\tilde{u}_{i}\right\rangle_{2}\right|}\,,\] (2.30) _where \(\tilde{u}_{i}\) and \(\tilde{v}_{i}\) fulfill:_ \[FMF^{-1}\tilde{u}_{i}=\lambda_{i}\tilde{u}_{i}\,,\qquad\left(FMF^{-1}\right)^{*}\tilde{v}_{i}=\overline{\lambda_{i}}\tilde{v}_{i}\,.\] (2.31) Proof.: First, let us assume an \(N\)-dimensional Hilbert space \(\mathcal{V}\) equipped with the \(G\)-norm \(\|\cdot\|_{G}\) that induces the following inner product: \[\left\langle v,u\right\rangle_{G}=v^{*}Gu\,. \tag{2.32}\] For this to represent a well-defined inner product, the matrix \(G\) has to be symmetric and positive definite; thus it can always be expressed as \(G=F^{*}F\) using a Cholesky decomposition (for a proof see, for instance, ch. 4 of [49]).7 Footnote 7: In the language of differential geometry, performing a Cholesky decomposition is equivalent to choosing triangular vielbeins. For any metric, this can always be achieved by exploiting the internal O(\(N\)) symmetry of the "orthogonal" coordinates. * Considering now an \(N\times N\) matrix \(A\), we have: \[\|A\|_{G}^{2}=\max_{u\in\mathcal{V}}\frac{\left\langle Au,\,Au\right\rangle_{G}}{\left\langle u,u\right\rangle_{G}}=\max_{u\in\mathcal{V}}\frac{\left(Au\right)^{*}G\,Au}{u^{*}Gu}=\max_{u\in\mathcal{V}}\frac{u^{*}A^{*}F^{*}FAu}{u^{*}F^{*}Fu}=\max_{u\in\mathcal{V}}\frac{\left\langle FAu,\,FAu\right\rangle_{2}}{\left\langle Fu,\,Fu\right\rangle_{2}}=\max_{\tilde{u}\in\mathcal{V}}\frac{\left\langle FAF^{-1}\tilde{u},\,FAF^{-1}\tilde{u}\right\rangle_{2}}{\left\langle\tilde{u},\tilde{u}\right\rangle_{2}}=\left\|FAF^{-1}\right\|_{2}^{2}\,, \tag{2.33}\] where we have introduced \(\tilde{u}=Fu\). This relates the \(G\)-norm to the \(\ell_{N}^{2}\)-norm for any generic matrix. In particular, for the resolvent of a matrix \(M\): \[\left\|\mathcal{R}(z;M)\right\|_{G}=\left\|F\mathcal{R}(z;M)F^{-1}\right\|_{2}=\left\|F(M-z)^{-1}F^{-1}\right\|_{2}=\left\|(FMF^{-1}-z)^{-1}\right\|_{2}=\left\|\mathcal{R}(z;FMF^{-1})\right\|_{2}\,, \tag{2.34}\] and using definition 2.1 we trivially recover (2.29). 
* Now we assume an \(N\times N\) matrix \(M\) with left- and right-eigenvectors \(v_{i}\) and \(u_{i}\): \[Mu_{i}=\lambda_{i}u_{i}\,,\qquad M^{\dagger}v_{i}=\overline{\lambda_{i}}v_{i}\,.\] (2.35) The condition number \(\kappa_{i}^{G}\) corresponding to the eigenvalue \(\lambda_{i}\) is then given by: \[\kappa_{i}^{G}=\frac{\sqrt{v_{i}^{*}Gv_{i}}\sqrt{u_{i}^{*}Gu_{i}}}{\left|v_{i}^{*}Gu_{i}\right|}=\frac{\sqrt{(Fv_{i})^{*}Fv_{i}}\sqrt{(Fu_{i})^{*}Fu_{i}}}{\left|(Fv_{i})^{*}Fu_{i}\right|}=\frac{\|\tilde{v}_{i}\|_{2}\|\tilde{u}_{i}\|_{2}}{\left|\left\langle\tilde{v}_{i},\tilde{u}_{i}\right\rangle_{2}\right|}\,,\] (2.36) where we have introduced \(\tilde{u}_{i}=Fu_{i}\) and \(\tilde{v}_{i}=Fv_{i}\). Now, we note that \(M^{\dagger}\) in the \(G\)-norm is defined as: \[M^{\dagger}=\left(GMG^{-1}\right)^{*}=F^{-1}\left(FMF^{-1}\right)^{*}F\,,\] (2.37) which trivially follows from the definition of the adjoint: \[\left\langle M^{\dagger}g,f\right\rangle_{G}=\left\langle g,Mf\right\rangle_{G}\Rightarrow g^{*}\left(M^{\dagger}\right)^{*}Gf=g^{*}GMf\Rightarrow\left(M^{\dagger}\right)^{*}G=GM\,,\] (2.38) with \(f\) and \(g\) two arbitrary vectors. Then, we can rewrite (2.35) in terms of \(\tilde{u}_{i}\) and \(\tilde{v}_{i}\) as: \[MF^{-1}\tilde{u}_{i}=F^{-1}\lambda_{i}\tilde{u}_{i}\Rightarrow FMF^{-1}\tilde{u}_{i}=\lambda_{i}\tilde{u}_{i}\,,\] (2.39) \[F^{-1}\left(FMF^{-1}\right)^{*}FF^{-1}\tilde{v}_{i}=F^{-1}\overline{\lambda_{i}}\tilde{v}_{i}\Rightarrow\left(FMF^{-1}\right)^{*}\tilde{v}_{i}=\overline{\lambda_{i}}\tilde{v}_{i}\,,\] (2.40) thus concluding the proof. Lastly, let us address a nontrivial subtlety of the discretization process. Definition 2.2 of the \(\varepsilon\)-pseudospectrum assumes bounded perturbations, or, in more physical terms, potential perturbations. However, in a discretized setting all operators become bounded and one cannot distinguish the discretized version of an originally bounded operator from that of an originally unbounded one. Then, in many cases it will be more interesting to probe a particular type of perturbations using definition 2.2, either choosing a specific form of the perturbation matrix or generating random matrices satisfying some physically motivated constraints.

#### 2.1.1 Norm Dependence in a Simple Example

In order to make the formalism of this section a bit more transparent, we present the pseudospectrum of a \(2\times 2\) matrix \(A\) \[A=\begin{pmatrix}-1&0\\ -50&-2\end{pmatrix}\,, \tag{2.41}\] with eigenvalues \(\omega_{1}=-2\) and \(\omega_{2}=-1\). Furthermore, to make the norm dependence explicit, we present the analysis in two norms, the \(\ell_{2}^{2}\)-norm and the \(G\)-norm: \[\left\|u\right\|_{G}^{2}=G_{ij}\overline{u^{i}}u^{j}\,, \tag{2.42}\] with \(G\) \[G=\begin{pmatrix}2\cdot 10^{4}&50\\ 50&1\end{pmatrix}=\begin{pmatrix}100\sqrt{2}&0\\ \frac{1}{2\sqrt{2}}&\sqrt{\frac{7}{8}}\end{pmatrix}\begin{pmatrix}100\sqrt{2}&\frac{1}{2\sqrt{2}}\\ 0&\sqrt{\frac{7}{8}}\end{pmatrix}=F^{*}F\,. \tag{2.43}\] * \(\ell_{2}^{2}\)-norm: In this case, the adjoint of \(A\) is its complex conjugate transpose \[A^{\dagger}=\begin{pmatrix}-1&-50\\ 0&-2\end{pmatrix}\,,\] (2.44) and \(A\) is not normal: \[A^{\dagger}A-AA^{\dagger}=\begin{pmatrix}2500&50\\ 50&-2500\end{pmatrix}\neq 0\,.\] (2.45) Then, one could potentially find instabilities associated with non-normality. 
This suspicion is confirmed by the condition numbers \[\kappa_{1}=\sqrt{2501}\approx 50\,,\qquad\kappa_{2}=\sqrt{2501}\approx 50\,,\] (2.46) which differ greatly from \(1\), and by the pseudospectrum of figure 1(a), which is characterized by extended \(\varepsilon\)-pseudospectrum boundaries that eventually engulf both eigenvalues for small values of \(\varepsilon\). * \(G\)-norm: In this case, \(A\) is self-adjoint: \[A^{\dagger}=\left(GAG^{-1}\right)^{*}=\begin{pmatrix}-1&0\\ -50&-2\end{pmatrix}=A\,,\] (2.47) and consequently stable. Here, the condition numbers are \[\kappa_{1}=1\,,\qquad\kappa_{2}=1\,,\] (2.48) and the pseudospectrum is characterized by concentric circles of radius \(\varepsilon\) around the eigenvalues (figure 1(b)), both denoting stability. As expected, the choice of norm is nontrivial; the notion of stability depends on the definition of small perturbations. It is also interesting to remark that figures 1(a) and 1(b) show very common features of the pseudospectrum of non-normal and normal operators, respectively. In general, the \(\varepsilon\)-pseudospectrum of a normal operator is a set of disks of radius \(\varepsilon\) centered on the eigenvalues (as indicated by corollary 2.1); while for a non-normal operator it presents extended regions that may contain multiple eigenvalues.8 Footnote 8: Note that for normal operators the \(\varepsilon\)-pseudospectrum does engulf multiple eigenvalues if \(\varepsilon\) is large. As expected, this behaviour appears for \(\varepsilon\) larger than half the distance between the two eigenvalues.

Figure 1: Pseudospectrum of the \(A\) matrix (2.41) in two different norms. In both figures the red dots correspond to the eigenvalues, the white lines to boundaries of different \(\varepsilon\)-pseudospectra and the heat map to the logarithm of the inverse of the resolvent. In (a) the solid red and yellow lines represent the boundaries of \(\sigma(A)+\Delta_{0.04}\) and \(\sigma_{0.04}(A)\), respectively; note the great discrepancy associated with the non-normality. On the other hand, in (b) the dashed red line and the solid yellow line are superimposed and correspond to the boundaries of \(\sigma(A)+\Delta_{0.1}\) and \(\sigma_{0.1}(A)\), respectively. The solid black line in (b) represents the boundary of \(\sigma_{0.5}(A)\).

## 3 Quasinormal Modes and Pseudospectra

Quasinormal modes (QNMs) in AdS are solutions to the linearized equations of motion in a black hole background with well-defined quasinormal frequencies (QNFs), subject to outgoing boundary conditions on the horizon and normalizable boundary conditions on the AdS boundary [3].9 QNMs are small excitations of the black hole spacetime that eventually fall into the horizon and die out, provided the spacetime is stable.10 From the dual QFT perspective, they correspond to excitations over the thermal equilibrium that eventually thermalize. The decaying behaviour is reflected in the associated QNFs, which are complex as the system is inherently non-conservative. Footnote 9: Imposing normalizable boundary conditions corresponds to choosing source-less excitations in the dual QFT or, equivalently, to selecting solutions where the leading term on the AdS boundary vanishes. Note that in some cases the subleading mode can also be identified with the source; this corresponds to a different choice of quantization scheme in the dual QFT [50]. For simplicity, in the present work we do not consider that possibility. 
Footnote 10: Here, stability of the spacetime entails that the perturbation decays as opposed to growing exponentially. As argued in the previous section, the spectra of non-conservative systems are potentially unstable. One thus needs to complement the study of QNFs with pseudospectrum analysis. In order to achieve this, we need to cast the problem of finding QNFs as an eigenvalue equation and define a physically motivated norm. In this section we implement these two steps to determine the pseudospectrum of fluctuations in asymptotically AdS black hole geometries. Specifically, as a background we consider a static \((\mathrm{d}+1)\)-dimensional AdS planar black hole (black brane) spacetime, whose dual is a \(\mathrm{d}\)-dimensional strongly coupled QFT in thermal equilibrium at a temperature given by the Hawking temperature of the black brane. The corresponding metric in Poincaré coordinates is given by: \[ds^{2}=\frac{r^{2}}{l^{2}}\left(-f(r)dt^{2}+\delta_{ij}dx^{i}dx^{j}\right)+\frac{l^{2}}{r^{2}}\frac{dr^{2}}{f(r)}\,, \tag{3.1}\] where \(l\) is the AdS radius and \(f(r)\) is the blackening factor, whose largest zero we denote by \(r_{h}\). In these coordinates, the Hawking temperature \(T\) is given by: \[T=\frac{r_{h}^{2}}{4\pi l^{2}}\left.\frac{df(r)}{dr}\right|_{r=r_{h}}\,, \tag{3.2}\] and the AdS boundary and the horizon are located at \(r=\infty\) and \(r=r_{h}\), respectively.

### Construction of the Eigenvalue Problem

Our approach relies on utilizing regular coordinates, as introduced in [44], to express the linearized equations of motion as an eigenvalue problem while also implementing the outgoing boundary conditions as regularity conditions on the horizon.11 Footnote 11: Regular coordinates are the AdS equivalent of the hyperboloidal ones used in asymptotically flat spacetimes (see [51] and [52] for a recent overview). For the sake of completeness, in this subsection we present the approach in a general setting, only assuming a black brane background.12 In section 3.1.1, we specialize this discussion to a scalar in an AdS\({}_{4+1}\) Schwarzschild black brane (SAdS\({}_{4+1}\)) background, hoping to shed more light on the nature of the approach. In general, we consider linearized equations of motion of the form: \[\left[-\nabla_{M}\nabla^{M}+K^{M}\nabla_{M}+U\right]\Phi=0\,, \tag{3.3}\] with \(\Phi\) a multi-component field, \(K^{M}\) and \(U\) functions and \(\nabla_{M}\) the covariant derivative. Exploiting the symmetries of the black brane, we choose coordinates \(\{\ell,\mathbf{x},\mathbf{r}\}\) such that the linear operator of equation (3.3) only depends on \(\mathbf{r}\). Furthermore, in order to eliminate \(\mathbf{x}\) derivatives we express \(\Phi\) as a superposition of Fourier modes: \[\Phi(\ell,\mathbf{x},\mathbf{r})=\int\frac{d^{\mathrm{d}-1}k}{(2\pi)^{\mathrm{d}-1}}\Phi(\ell,\mathbf{k},\mathbf{r})e^{i\mathbf{k}\mathbf{x}}\,, \tag{3.4}\] and solve for \(\Phi(\ell,\mathbf{k},\mathbf{r})\) instead. We can then rewrite equation (3.3) as \[g^{\ell\,\ell}\partial_{\ell}^{2}\Phi(\ell,\mathbf{k},\mathbf{r})=F_{1}\left[\partial_{\mathbf{r}}^{2},\partial_{\mathbf{r}};\mathbf{k},\mathbf{r}\right]\Phi(\ell,\mathbf{k},\mathbf{r})+F_{2}\left[\partial_{\mathbf{r}};\mathbf{k},\mathbf{r}\right]\partial_{\ell}\Phi(\ell,\mathbf{k},\mathbf{r})\,, \tag{3.5}\] where \(F_{1}\left[\partial_{\mathbf{r}}^{2},\partial_{\mathbf{r}};\mathbf{k},\mathbf{r}\right]\) and \(F_{2}\left[\partial_{\mathbf{r}};\mathbf{k},\mathbf{r}\right]\) are differential operators on \(\mathbf{r}\) of second and first order, respectively. 
Then, as long as the \(\ell\)-hypersurfaces are not null (\(g^{\ell\,\ell}\neq 0\)), we can introduce the auxiliary variable \(\Psi=\partial_{\ell}\Phi\) and rewrite the equations of motion (3.5) as:13 Footnote 13: For notational convenience, we refer to the \(x^{M}=\) const hypersurface as the \(x^{M}\)-hypersurface. \[\partial_{\ell}\begin{pmatrix}\Phi(\ell,\mathbf{k},\mathbf{r})\\ \Psi(\ell,\mathbf{k},\mathbf{r})\end{pmatrix}=\begin{pmatrix}0&1\\ L_{1}\left[\partial_{\mathbf{r}}^{2},\partial_{\mathbf{r}};\mathbf{k},\mathbf{r}\right]&L_{2}\left[\partial_{\mathbf{r}};\mathbf{k},\mathbf{r}\right]\end{pmatrix}\begin{pmatrix}\Phi(\ell,\mathbf{k},\mathbf{r})\\ \Psi(\ell,\mathbf{k},\mathbf{r})\end{pmatrix}\,, \tag{3.6}\] where we have defined \(L_{i}=\left(g^{\ell\,\ell}\right)^{-1}F_{i}\). Recalling that a QNM \(\Phi(\ell,\mathbf{k},\mathbf{r})\) has a well-defined QNF \(\omega\) or, equivalently, that it is an eigenfunction of the Killing vector associated with stationarity, \(\mathbf{t}=\partial_{\ell}\): \[\mathbf{t}\begin{pmatrix}\Phi(\ell,\mathbf{k},\mathbf{r})\\ \Psi(\ell,\mathbf{k},\mathbf{r})\end{pmatrix}=\partial_{\ell}\begin{pmatrix}\Phi(\ell,\mathbf{k},\mathbf{r})\\ \Psi(\ell,\mathbf{k},\mathbf{r})\end{pmatrix}=-i\omega\begin{pmatrix}\Phi(\ell,\mathbf{k},\mathbf{r})\\ \Psi(\ell,\mathbf{k},\mathbf{r})\end{pmatrix}\,, \tag{3.7}\] we can cast expression (3.6) as a standard eigenvalue problem: \[\omega\begin{pmatrix}\Phi(\ell,\mathbf{k},\mathbf{r})\\ \Psi(\ell,\mathbf{k},\mathbf{r})\end{pmatrix}=i\begin{pmatrix}0&1\\ L_{1}\left[\partial_{\mathbf{r}}^{2},\partial_{\mathbf{r}};\mathbf{k},\mathbf{r}\right]&L_{2}\left[\partial_{\mathbf{r}};\mathbf{k},\mathbf{r}\right]\end{pmatrix}\begin{pmatrix}\Phi(\ell,\mathbf{k},\mathbf{r})\\ \Psi(\ell,\mathbf{k},\mathbf{r})\end{pmatrix}\,. \tag{3.8}\] Note that to write the equations of motion as an eigenvalue problem it is fundamental that the \(\ell\)-hypersurfaces are spacelike as opposed to null. Otherwise, we would obtain a generalized eigenvalue equation of the form: \[\omega F_{2}\left[\partial_{\mathbf{r}};\mathbf{k},\mathbf{r}\right]\Phi(\ell,\mathbf{k},\mathbf{r})=-iF_{1}\left[\partial_{\mathbf{r}}^{2},\partial_{\mathbf{r}};\mathbf{k},\mathbf{r}\right]\Phi(\ell,\mathbf{k},\mathbf{r})\,, \tag{3.9}\] instead of the standard eigenvalue problem discussed in section 2.14 Footnote 14: Technically, one can extend the formalism presented in section 2 to also hold for generalized eigenvalue problems of the form \((L-\lambda A)\,u=0\). However, the adequate notion of \(\varepsilon\)-pseudospectrum becomes unclear as one has to choose whether to perturb \(L\), \(A\) or both (see Chapter 45 of [38] and references therein for an extensive discussion). Regarding the boundary conditions, we seek to choose \(\{\ell,\mathbf{x},\mathbf{r}\}\) such that demanding regularity on the horizon is equivalent to imposing outgoing boundary conditions. This is typically achieved working with infalling Eddington-Finkelstein (IEF) coordinates \(\{u,\tilde{\mathbf{x}},\tilde{r}\}\), which, in terms of the Poincaré coordinates \(\{t,\mathbf{x},r\}\), are given by: \[u=t+\int\,\frac{dr}{f(r)}\,\left(\frac{l}{r}\right)^{2}\,,\qquad\tilde{\mathbf{x}}=\mathbf{x}\,,\qquad\tilde{r}=r\,. \tag{3.10}\] However, writing the metric (3.1) in IEF coordinates: \[ds^{2}=\frac{\tilde{r}^{2}}{l^{2}}\left(-f(\tilde{r})du^{2}+\delta_{ij}dx^{i}dx^{j}\right)+2dud\tilde{r}\,, \tag{3.11}\] we conclude that the \(u\)-hypersurfaces are null and thus IEF coordinates are ill-suited for pseudospectrum analysis. 
Nonetheless, we can define regular coordinates that resemble the IEF ones near the horizon while having spacelike \(\ell\)-hypersurfaces outside the horizon. More concretely, we construct regular coordinates such that the spacelike \(\ell\)-hypersurfaces match the null IEF \(u\)-hypersurfaces exactly on the horizon. A particularly simple choice is given by: \[\ell=u-\frac{l^{2}}{r_{h}}\left(1-\frac{r_{h}}{\tilde{r}}\right)=t-\frac{l^{2}}{r_{h}}\left(1-\frac{r_{h}}{r}\right)+\int\,\frac{dr}{f(r)}\,\left(\frac{l}{r}\right)^{2}\,,\qquad\mathbf{x}=\tilde{\mathbf{x}}=\mathbf{x}\,,\qquad\mathbf{r}=\tilde{r}=r\,, \tag{3.12}\] in terms of which the metric (3.1) is: \[ds^{2}=\frac{\mathbf{r}^{2}}{l^{2}}\left(-f(\mathbf{r})d\ell^{2}+\delta_{ij}dx^{i}dx^{j}\right)+2\left(1-f(\mathbf{r})\right)d\ell d\mathbf{r}+\frac{l^{2}}{\mathbf{r}^{2}}\left(2-f(\mathbf{r})\right)d\mathbf{r}^{2}\,. \tag{3.13}\] In practice, for most numerical applications it is more convenient to introduce a dimensionless compactified radial coordinate \(\rho\): \[\rho=1-\frac{r_{h}}{\mathbf{r}}\,, \tag{3.14}\] such that the horizon is mapped to \(\rho=0\) and the AdS boundary to \(\rho=1\). In terms of these compactified regular coordinates \(\{\ell,\mathbf{x},\rho\}\), the black brane metric (3.1) takes the form: \[ds^{2}=\frac{l^{2}}{z_{h}^{2}(1-\rho)^{2}}\left(-\mathpzc{f}(\rho)d\ell^{2}+\delta_{ij}dx^{i}dx^{j}+2\left(1-\mathpzc{f}(\rho)\right)z_{h}d\ell d\rho+\left(2-\mathpzc{f}(\rho)\right)z_{h}^{2}d\rho^{2}\right)\,, \tag{3.15}\] where \(\mathpzc{f}(\rho)=f\left(r\left(\rho\right)\right)\) and \(z_{h}=l^{2}/r_{h}\). It is worth stressing that the components of the Killing vector \(\mathbf{t}=\partial_{\ell}=\partial_{t}\) are invariant under the change of coordinates from regular to Poincaré coordinates. Consequently, the QNFs defined as the eigenvalues of \(\partial_{t}\) and \(\partial_{\ell}\) match. In gauge/gravity duality, this is actually important, as the holographic dictionary identifies the poles of the retarded propagators with the eigenvalues of \(\partial_{t}\).15 Footnote 15: Here, by the holographic dictionary we refer to the standard one formulated in Poincaré coordinates. Note that one can always formulate a new holographic dictionary in a different coordinate set and identify the QNFs with the poles of the retarded propagators. Before moving on to a concrete example we summarize the core principles behind the regular coordinates: Given an AdS\({}_{4+1}\) black brane spacetime whose metric in Poincaré coordinates is given by equation (3.1), a regular coordinate set \(\{\ell,\mathbf{x},\mathbf{r}\}\) is one that fulfils the following three requirements: the metric components are functions only of \(\mathbf{r}\), the \(\ell\)-hypersurfaces are spacelike outside the horizon, and those \(\ell\)-hypersurfaces match the IEF \(u\)-hypersurfaces on the horizon. With these coordinates, imposing outgoing boundary conditions corresponds to demanding regularity on the horizon. Moreover, equations of motion for the QNMs of the form (3.3) can be written as a standard eigenvalue problem.

#### 3.1.1 Example: Real Scalar in SAdS\({}_{4+1}\)

To better clarify the role played by the regular coordinates in the construction of the eigenvalue problem, here we specialize the discussion to a real scalar field \(\phi\) with action \[\mathcal{S}[\phi]=-\frac{1}{2}\int\,d^{4+1}x\sqrt{-g}\ \left(\partial_{M}\phi\partial^{M}\phi+m^{2}\phi^{2}\right)\,, \tag{3.16}\] in a SAdS\({}_{4+1}\) background. 
The corresponding linearized equations of motion are given by: \[\left[-\nabla_{M}\nabla^{M}+m^{2}\right]\phi=0\,. \tag{3.17}\] For better emphasis on the key aspects of our approach, we present this discussion in Poincaré, IEF and compactified regular coordinates. Poincaré Coordinates: the SAdS\({}_{4+1}\) metric in Poincaré coordinates is given by (3.1) with \(f(r)=1-\left(\frac{r_{h}}{r}\right)^{4}\) and \(\{i,j\}=1,2,3\). In terms of the Fourier modes \(\phi(t,\mathbf{k},r)\), the linearized equation of motion (3.17) for a QNM trivially produces the following eigenvalue problem: \[\omega^{2}\,\phi(t,\mathbf{k},r)=f(r)\left[\mathbf{k}^{2}+\frac{m^{2}r^{2}}{l^{2}}-\frac{\left(r^{5}f(r)\right)^{\prime}}{rl^{4}}\partial_{r}-\frac{r^{4}f(r)}{l^{4}}\partial_{r}^{2}\right]\phi(t,\mathbf{k},r)\,, \tag{3.18}\] where we have used that the QNM is an eigenfunction of \(\partial_{t}\) with QNF \(\omega\). In order to impose outgoing boundary conditions, we first analyze the behaviour near the horizon \(r=r_{h}\). Solving the equations of motion with a Frobenius series, we find: \[\phi(t,\mathbf{k},r)=e^{-i\omega t}\left\{c_{1}\exp\left[-i\frac{\omega l^{2}}{4r_{h}}\log(r-r_{h})\right]\left(1+...\right)+c_{2}\exp\left[i\frac{\omega l^{2}}{4r_{h}}\log(r-r_{h})\right]\left(1+...\right)\right\}\,. \tag{3.19}\] Then, as the outgoing solution corresponds to fixing \(c_{2}=0\), imposing outgoing boundary conditions corresponds to fixing a near-horizon behaviour: \[\phi(t,\mathbf{k},r\to r_{h})\approx c_{1}\exp\left[-i\omega\left(t+\frac{l^{2}}{4r_{h}}\log(r-r_{h})\right)\right]\;. \tag{3.20}\] Note that this choice for the outgoing solution matches our physical interpretation of a mode that falls into the horizon. Defining the tortoise coordinate \(r_{*}\) \[r_{*}=\frac{l^{2}}{2r_{h}}\left[\arctan\left(\frac{r}{r_{h}}\right)-\text{arctanh}\left(\frac{r_{h}}{r}\right)\right]=\frac{l^{2}}{4r_{h}}\log(r-r_{h})+\mathcal{O}\left((r-r_{h})^{0}\right)\;, \tag{3.21}\] the near-horizon behaviour can be written as two modes, one falling towards the brane and one exiting it: \[\phi(t,\mathbf{k},r\to r_{h})\approx c_{1}\exp\left[-i\omega\left(t+r_{*}\right)\right]+c_{2}\exp\left[-i\omega\left(t-r_{*}\right)\right]\;, \tag{3.22}\] and choosing a QNM that falls into the horizon corresponds to discarding the ingoing mode, _i.e._, taking \(c_{2}=0\). IEF Coordinates: the SAdS\({}_{4+1}\) metric in IEF coordinates is given by (3.11) with \(f(\tilde{r})=1-\left(\frac{r_{h}}{\tilde{r}}\right)^{4}\) and \(\{i,j\}=1,2,3\). The linearized equation of motion for a QNM (3.17) reads: \[\omega\left[\frac{3}{\tilde{r}}+2\partial_{\tilde{r}}\right]\phi(u,\mathbf{k},\tilde{r})=i\left[\frac{\mathbf{k}^{2}l^{2}}{\tilde{r}^{2}}+m^{2}-\frac{\left(\tilde{r}^{5}f(\tilde{r})\right)^{\prime}}{\tilde{r}^{3}l^{2}}\partial_{\tilde{r}}-\frac{\tilde{r}^{2}f(\tilde{r})}{l^{2}}\partial_{\tilde{r}}^{2}\right]\phi(u,\mathbf{k},\tilde{r})\;, \tag{3.23}\] and it cannot be cast as a standard eigenvalue problem due to the derivative term. The near-horizon solution is given by: \[\phi(u,\mathbf{k},\tilde{r})=e^{-i\omega u}\left\{c_{1}\left(1+...\right)+c_{2}\exp\left[i\frac{\omega l^{2}}{2r_{h}}\log(\tilde{r}-r_{h})\right]\left(1+...\right)\right\}\;, \tag{3.24}\] 
with the outgoing solution corresponding to fixing \(c_{2}=0\).16 Footnote 16: This follows from the Poincaré result noting that under the coordinate transformation (3.10) we have: \[e^{-i\omega u}=\exp\left[-i\omega\left(t+\frac{l^{2}}{4r_{h}}\log\left(r-r_{h}\right)\right)\right]+\mathcal{O}\left((r-r_{h})^{0}\right)\;.\] Consequently, imposing outgoing boundary conditions corresponds to demanding regularity on the horizon: \[\phi(u,\mathbf{k},\tilde{r}\to r_{h})\approx c_{1}e^{-i\omega u}\;. \tag{3.25}\] Compactified Regular Coordinates: the SAdS\({}_{4+1}\) metric in compactified regular coordinates \(\{\ell,\mathbf{x},\rho\}\) is given by (3.15) with \(\mathpzc{f}(\rho)=1-(1-\rho)^{4}\) and \(\{i,j\}=1,2,3\). Then, the linearized equation of motion for a QNM (3.17) is: \[\begin{split} z_{h}^{2}\omega^{2}\left[\mathpzc{f}(\rho)-2\right]\phi(\ell,\mathbf{k},\rho)=iz_{h}\omega\left[(1-\rho)^{3}\left(\frac{\mathpzc{f}(\rho)-1}{(1-\rho)^{3}}\right)^{\prime}+2\left(\mathpzc{f}(\rho)-1\right)\partial_{\rho}\right]\phi(\ell,\mathbf{k},\rho)\\ -\left[\frac{m^{2}l^{2}}{(1-\rho)^{2}}+z_{h}^{2}\mathbf{k}^{2}-(1-\rho)^{3}\left(\frac{\mathpzc{f}(\rho)}{(1-\rho)^{3}}\right)^{\prime}\partial_{\rho}-\mathpzc{f}(\rho)\partial_{\rho}^{2}\right]\phi(\ell,\mathbf{k},\rho)\,,\end{split} \tag{3.26}\] and, introducing the dimensionless variables \[\mathfrak{w}=z_{h}\omega=\frac{\omega}{\pi T}\,,\quad\mathfrak{q}=z_{h}\mathbf{k}=\frac{\mathbf{k}}{\pi T}\,, \tag{3.27}\] and the auxiliary field \(\psi(\ell,\mathbf{k},\rho)=z_{h}\partial_{\ell}\phi(\ell,\mathbf{k},\rho)\), equation (3.26) can be written as an eigenvalue problem:17 Footnote 17: One could cast the eigenvalue problem without introducing the dimensionless variables \(\mathfrak{w}\) and \(\mathfrak{q}\). However, it is convenient to use them as they allow us to remove all \(z_{h}\) dependence and thus reduce the parameter space. \[\mathfrak{w}\begin{pmatrix}\phi(\ell,\mathbf{k},\rho)\\ \psi(\ell,\mathbf{k},\rho)\end{pmatrix}=i\begin{pmatrix}0&1\\ L_{1}\left[\partial_{\rho}^{2},\partial_{\rho};\mathfrak{q},\rho\right]&L_{2}\left[\partial_{\rho};\mathfrak{q},\rho\right]\end{pmatrix}\begin{pmatrix}\phi(\ell,\mathbf{k},\rho)\\ \psi(\ell,\mathbf{k},\rho)\end{pmatrix}\,, \tag{3.28}\] where the differential operators \(L_{1}\) and \(L_{2}\) take the form: \[L_{1}\left[\partial_{\rho}^{2},\partial_{\rho};\mathfrak{q},\rho\right]=\left[\mathpzc{f}(\rho)-2\right]^{-1}\left[\frac{m^{2}l^{2}}{(1-\rho)^{2}}+\mathfrak{q}^{2}-(1-\rho)^{3}\left(\frac{\mathpzc{f}(\rho)}{(1-\rho)^{3}}\right)^{\prime}\partial_{\rho}-\mathpzc{f}(\rho)\partial_{\rho}^{2}\right]\,, \tag{3.29}\] \[L_{2}\left[\partial_{\rho};\mathfrak{q},\rho\right]=\left[\mathpzc{f}(\rho)-2\right]^{-1}\left[(1-\rho)^{3}\left(\frac{\mathpzc{f}(\rho)-1}{(1-\rho)^{3}}\right)^{\prime}+2\left(\mathpzc{f}(\rho)-1\right)\partial_{\rho}\right]\,. \tag{3.30}\] 
With regard to the near-horizon behaviour, we have: \[\phi(\ell,\mathbf{k},\rho)=e^{-i\omega\ell}\left\{c_{1}\left(1+...\right)+c_{2}\,\rho\exp\left[i\frac{z_{h}\omega}{2}\log(\rho)\right]\left(1+...\right)\right\}\,, \tag{3.31}\] with the outgoing solution corresponding to fixing \(c_{2}=0\).18 Footnote 18: Once again, this follows from the Poincaré result as under the coordinate transformation (3.12) we have: \[e^{-i\omega\ell}=\exp\left[-i\omega\left(t+\frac{l^{2}}{4r_{h}}\log\left(r-r_{h}\right)\right)\right]+\mathcal{O}\left((r-r_{h})^{0}\right)\,.\] Then, imposing outgoing boundary conditions corresponds to demanding regularity on the horizon: \[\phi(\ell,\mathbf{k},\rho\to 0)\approx c_{1}e^{-i\omega\ell}\,. \tag{3.32}\] Therefore, it is clear that the regular coordinates constitute a sort of middle ground between IEF and Poincaré coordinates. They exploit the regular behaviour on the horizon of the IEF coordinates while maintaining the spacelike character of the constant time hypersurfaces found in Poincaré coordinates. In fact, with our particular choice of regular coordinates, the \(\ell\)-hypersurfaces can be thought of as interpolating between \(u\)- and \(t\)-hypersurfaces (see figure 2).

Figure 2: Penrose diagram of SAdS\({}_{4+1}\). The AdS boundary is denoted by \(\mathcal{J}\), \(\mathcal{H}^{+}\) (\(\mathcal{H}^{-}\)) represents the future (past) horizon and \(i^{+}\) (\(i^{-}\)) denotes the future (past) time-like infinity. The dashed blue lines correspond to regular \(\ell\)-hypersurfaces and the red and cyan dashed lines represent Poincaré \(t\)- and IEF \(u\)-hypersurfaces, respectively.

### Choice of Norm

As discussed in section 2, pseudospectra are norm dependent. Moreover, we concluded that the spectrum of an operator may be unstable in one norm but not in a different one (as illustrated in figure 1), which shows the nontrivial role of the norm. Ideally, we want to find a physically motivated norm such that our physical notion of smallness matches the mathematical one. Implicitly, we have been assuming that the QNMs are fluctuations small enough not to affect the background, or equivalently, that their backreaction is negligible.19 Footnote 19: This is the underlying assumption behind studying their linearized equations of motion on a fixed background. Consequently, the physically motivated notion of size for the QNMs is their contribution to the energy-momentum tensor, which is directly related to their backreaction. Thus, the adequate norm is the energy norm \(\|\cdot\|_{E}\), introduced in [30; 53], which defines the norm of a QNM \(\Phi\) as its energy on a constant time hypersurface: \[\|\Phi\|_{E}^{2}=-\int_{\ell=\text{const}}\star J\left[\Phi\right]\,, \tag{3.33}\] where \(\star J\left[\Phi\right]\) is the Hodge dual of the current 1-form \[J\left[\Phi\right]=\mathfrak{t}^{M}T_{MN}\left[\Phi\right]dx^{N}\,, \tag{3.34}\] with \(T_{MN}\left[\Phi\right]\) the QNM's leading order contribution to the energy-momentum tensor. Therefore, the energy norm is generically given by:20 Footnote 20: Note that the leading order contribution of the fluctuations to the energy momentum tensor is quadratic. This occurs because the linear contribution vanishes as it is proportional to the equations of motion of the background. 
\[\left\|\Phi\right\|_{E}^{2}=\int_{\ell=\text{const}}d^{\text{d}}x\ \Phi^{*}(\ell,\mathbf{x},\mathbf{r})\left[\overleftarrow{\partial}_{M}H_{1}(\mathbf{r})\overrightarrow{\partial}^{M}+H_{M}(\mathbf{r})\overrightarrow{\partial}^{M}+\overleftarrow{\partial}^{M}H_{M}^{*}(\mathbf{r})+H_{2}(\mathbf{r})\right]\Phi(\ell,\mathbf{x},\mathbf{r})\,, \tag{3.35}\] where \(H_{1}(\mathbf{r}),H_{2}(\mathbf{r}),H_{M}(\mathbf{r})\) are functions respecting the black brane symmetries. Expression (3.35) can be further simplified by decomposing into Fourier modes and introducing the auxiliary field \(\Psi(\ell,\mathbf{k},\mathbf{r})=\partial_{\ell}\Phi(\ell,\mathbf{k},\mathbf{r})\), obtaining: \[\left\|\Phi\right\|_{E}^{2}=\int_{\ell=\text{const}}\frac{d^{\text{d}-1}k}{(2\pi)^{\text{d}-1}}\ d\mathbf{r}\ \left(\Phi^{*}(\ell,\mathbf{k},\mathbf{r})\ \Psi^{*}(\ell,\mathbf{k},\mathbf{r})\right)\mathcal{G}\left[\overleftarrow{\partial}_{\mathbf{r}},\overrightarrow{\partial}_{\mathbf{r}};\mathbf{k},\mathbf{r}\right]\begin{pmatrix}\Phi(\ell,\mathbf{k},\mathbf{r})\\ \Psi(\ell,\mathbf{k},\mathbf{r})\end{pmatrix}\,, \tag{3.36}\] where we have defined \[\mathcal{G}\left[\overleftarrow{\partial}_{\mathbf{r}},\overrightarrow{\partial}_{\mathbf{r}};\mathbf{k},\mathbf{r}\right]=\begin{pmatrix}\mathcal{G}_{11}\left[\overleftarrow{\partial}_{\mathbf{r}},\overrightarrow{\partial}_{\mathbf{r}};\mathbf{k},\mathbf{r}\right]&\mathcal{G}_{12}\left[\overleftarrow{\partial}_{\mathbf{r}};\mathbf{k},\mathbf{r}\right]\\ \mathcal{G}_{21}\left[\overrightarrow{\partial}_{\mathbf{r}};\mathbf{k},\mathbf{r}\right]&\mathcal{G}_{22}[\mathbf{r}]\end{pmatrix}\,, \tag{3.37}\] with \(\mathcal{G}_{ab}\) differential operators on \(\mathbf{r}\). As discussed in the previous section, we present the eigenvalue problem for the Fourier modes, _i.e._, we work out the stability of the operator acting on the subspace of QNMs with well-defined momentum \(\mathbf{k}\). Consequently, as we do not consider mixing between subspaces, we can drop the integral over \(\mathbf{k}\) and introduce a constant prefactor \(C\) instead. Furthermore, as we are mainly interested in the operator norm (which is independent of constant prefactors by virtue of definition 2.4), we can set \(C=1\) without loss of generality. With all these considerations, the relevant inner product can be written as: \[\left\langle\Phi_{1},\Phi_{2}\right\rangle_{E}=\int_{\ell=\text{const}}d\mathbf{r}\ \left(\Phi_{1}^{*}(\ell,\mathbf{k},\mathbf{r})\ \Psi_{1}^{*}(\ell,\mathbf{k},\mathbf{r})\right)\mathcal{G}\left[\overleftarrow{\partial}_{\mathbf{r}},\overrightarrow{\partial}_{\mathbf{r}};\mathbf{k},\mathbf{r}\right]\begin{pmatrix}\Phi_{2}(\ell,\mathbf{k},\mathbf{r})\\ \Psi_{2}(\ell,\mathbf{k},\mathbf{r})\end{pmatrix}\,. \tag{3.38}\] Lastly, note that one needs to explicitly check that the energy norm is positive definite; otherwise, it would not be a well-defined norm. Whether this condition is satisfied depends on the particular choice of Hilbert space. Consequently, one must select a function space that enforces the desired asymptotic behaviour for the QNMs while also ensuring that all functions residing in the aforementioned space have positive energy. In section 4 we discuss this topic in detail for our particular model.

#### 3.2.1 The Nature of the Energy Norm

In this subsection we address the coordinate dependence of the energy norm. Definition (3.33) depends on the selection of constant time hypersurfaces, and consequently, one could wonder if regular \(\ell\)-hypersurfaces actually represent a well-motivated choice. 
A physically well-defined energy should decay over time, as the fluctuations fall into the black brane. In order to showcase that the energy norm (3.33) satisfies this property, we consider the simple model introduced in subsection 3.1.1. In particular, to stress the importance of choosing regular \(\ell\)-hypersurfaces, we compute the time derivative of the energy for the real scalar field in Poincaré, IEF and compactified regular coordinates. Poincaré Coordinates: the energy of a QNM in Poincaré coordinates is given by \[E\left[\phi\right]=\frac{1}{2l^{5}}\int_{t=\text{const}}d^{3}\mathbf{x}\;dr\;r^{3}\;\left[m^{2}l^{2}\phi^{2}+\frac{l^{4}}{r^{2}}\;(\partial_{\mathbf{x}}\phi)^{2}+r^{2}f(r)\;(\partial_{r}\phi)^{2}+\frac{l^{4}}{r^{2}f(r)}\;(\partial_{t}\phi)^{2}\right]\;, \tag{3.39}\] where, for notational convenience, we omit the dependence of the field \(\phi\) on the coordinates. Then, using the conservation equation of the current (3.34): \[d\star J=0\,, \tag{3.40}\] we can conclude that the time derivative of the energy vanishes, \[\frac{d}{dt}E\left[\phi\right]=\frac{r_{h}^{5}}{l^{5}}f(r_{h})\int_{r=r_{h}}d^{3}\mathbf{x}\;\partial_{t}\phi\partial_{r}\phi=0\,, \tag{3.41}\] as it is proportional to the blackening factor evaluated at the horizon.21 A quasinormal mode, however, is an exponentially decaying solution and one would expect the energy to decay as well. The resolution to this puzzle is that in Poincaré coordinates a QNM is actually singular on the horizon and thus the energy is not well defined. We also note that the term proportional to the time derivatives in the energy is singular on the horizon. For these reasons the Poincaré coordinates are not suited to define a norm on which the pseudospectrum analysis could be based. Footnote 21: In general, we would also have a term arising from the AdS boundary. However, in all the coordinates studied in this section, this term vanishes due to the normalizable boundary conditions. Physically this behaviour was to be expected, as the AdS boundary behaves as a wall for the QNMs. IEF Coordinates: in IEF coordinates, the energy of a QNM is \[E\left[\phi\right]=\frac{1}{2l^{5}}\int_{u=\text{const}}d^{3}\mathbf{\tilde{x}}\;d\tilde{r}\;\;\tilde{r}^{3}\;\left[m^{2}l^{2}\phi^{2}+\frac{l^{4}}{\tilde{r}^{2}}\;(\partial_{\mathbf{x}}\phi)^{2}+\tilde{r}^{2}f(\tilde{r})\;(\partial_{\tilde{r}}\phi)^{2}\right]\;, \tag{3.42}\] and its time derivative is: \[\frac{d}{du}E\left[\phi\right]=-\frac{r_{h}^{5}}{l^{5}}\int_{\tilde{r}=r_{h}}d^{3}\mathbf{\tilde{x}}\;\;(\partial_{u}\phi)^{2}<0\,. \tag{3.43}\] Thus, we conclude that in IEF coordinates the decay rate of the energy is a function of the time derivative of the QNMs. Indeed, this was the picture we expected, proving that the IEF coordinates are well suited to define a physically motivated energy. However, as we already discussed, when dealing with the pseudospectrum IEF coordinates do not generate a standard eigenvalue problem; and thus, we discard them. 
Compactified Regular Coordinates: in compactified regular coordinates, the energy of a QNM associated with a real scalar with action (3.16) is \[E[\phi]=\frac{l^{3}}{2z_{h}^{3}}\int_{\ell=\mathrm{const}}d^{3}\boldsymbol{x}\ \frac{d\rho}{(1-\rho)^{3}}\left[\frac{m^{2}l^{2}}{z_{h}(1-\rho)^{2}}\phi^{2}+z_{h}\left(\partial_{\boldsymbol{x}}\phi\right)^{2}+\frac{\mathpzc{f}(\rho)}{z_{h}}\left(\partial_{\rho}\phi\right)^{2}+\cdots\right]\,, \tag{3.44}\] where the omitted terms involve the time derivative \(\partial_{\ell}\phi\). As anticipated, this energy is not conserved: it decreases in time due to the outflow through the horizon, so the regular \(\ell\)-hypersurfaces define a physically sensible energy while, at the same time, leading to a standard eigenvalue problem.

## 4 Scalar and Transverse Gauge Field in SAdS\({}_{4+1}\)

In this section we construct the explicit eigenvalue problems and energy norms for the fluctuations of a real scalar \(\phi\) and of the transverse components of a \(U(1)\) gauge field \(A_{M}\) in a SAdS\({}_{4+1}\) background with compactified regular coordinates \(\{\ell,\mathbf{x},\rho\}\) and metric (3.15). Their respective actions are given by: \[\mathcal{S}[\phi]=-\frac{1}{2}\int d^{4+1}x\ \sqrt{-g}\left(\partial_{M}\phi\partial^{M}\phi+m^{2}\phi^{2}\right)\,, \tag{4.1}\] \[\mathcal{S}[A]=-\frac{1}{4}\int d^{4+1}x\ \sqrt{-g}\left(F_{MN}F^{MN}\right)\,, \tag{4.2}\] where \(F_{MN}=\partial_{M}A_{N}-\partial_{N}A_{M}\) is the field strength tensor and \(m\) is the scalar mass, which we assume to be above the Breitenlohner-Freedman (BF) bound (\(m^{2}l^{2}>-4\)) [54]. This latter requirement ensures that the QNM is normalizable and that the dual QFT is unitary [6].

### Real Scalar

The eigenvalue problem for a real scalar field with action (4.1) has already been presented in equations (3.28)-(3.30) of example 3.1.1; we conclude that the relevant operator \(L\), whose stability we seek to study, is: \[L=i\begin{pmatrix}0&1\\ L_{1}\left[\partial_{\rho}^{2},\,\partial_{\rho};\,\mathfrak{q},\rho\right]&L_{2}\left[\partial_{\rho};\,\mathfrak{q},\rho\right]\end{pmatrix}\,. \tag{4.3}\] With respect to the boundary conditions, we note that, as indicated in section 3.1, the outgoing behaviour is automatically satisfied as long as we demand regularity on the horizon. On the other hand, normalizable boundary conditions have to be imposed manually, discarding the leading mode on the AdS boundary. In order to identify the aforementioned mode, we solve the equation of motion (3.17) near the AdS boundary using a Frobenius series, obtaining the following solution: \[\phi(\ell,\mathbf{k},\rho)=e^{-i\omega\ell}\left\{c_{-}(\rho-1)^{\Delta_{-}}(1+...)+c_{+}(\rho-1)^{\Delta_{+}}(1+...)\right\}\,, \tag{4.4}\] with \(\Delta_{\pm}=2\pm\sqrt{4+m^{2}l^{2}}\). Imposing normalizable boundary conditions corresponds to fixing \(c_{-}=0\). Regarding the energy norm, we have the following expression for the inner product:22 Footnote 22: We have eliminated all constant prefactors from the energy norm (4.5) as they play no role in the operator norm. \[\left\langle\phi_{1},\phi_{2}\right\rangle_{E}=\int_{\ell=\text{const}}d\rho\ \left(\overline{\phi_{1}}(\ell,\mathbf{k},\rho)\ \overline{\psi_{1}}(\ell,\mathbf{k},\rho)\right)\mathcal{G}\left[\overleftarrow{\partial}_{\rho},\,\overrightarrow{\partial}_{\rho};\,\mathfrak{q},\rho\right]\begin{pmatrix}\phi_{2}(\ell,\mathbf{k},\rho)\\ \psi_{2}(\ell,\mathbf{k},\rho)\end{pmatrix}\,, \tag{4.5}\] \[\mathcal{G}\left[\overleftarrow{\partial}_{\rho},\overrightarrow{\partial}_{\rho};\,\mathfrak{q},\rho\right]=\begin{pmatrix}\frac{m^{2}l^{2}}{(1-\rho)^{5}}+\frac{\mathfrak{q}^{2}}{(1-\rho)^{3}}+\overleftarrow{\partial}_{\rho}\frac{\mathpzc{f}(\rho)}{(1-\rho)^{3}}\overrightarrow{\partial}_{\rho}&0\\ 0&\frac{2-\mathpzc{f}(\rho)}{(1-\rho)^{3}}\end{pmatrix}\,, \tag{4.6}\] derived from the following energy-momentum tensor: \[T_{MN}=\nabla_{M}\phi\nabla_{N}\phi-\frac{1}{2}g_{MN}\left[\nabla_{S}\phi\nabla^{S}\phi+m^{2}\phi^{2}\right]\;. \tag{4.7}\] 
In order to ensure that the energy norm (4.5) is positive definite, while also enforcing the adequate asymptotic behaviour for the QNMs, we choose to work in the space of regular functions with the following behaviour on the AdS boundary: \[\phi(\ell,\mathbf{k},\rho\to 1)=A(\ell,\mathbf{k})\;(\rho-1)^{n}\;,\qquad n>2\;. \tag{4.8}\] With this choice, for masses above the BF bound, the leading mode in (4.4) is automatically discarded and the energy norm is positive definite [54]. It is worth pointing out that the chosen function space, besides being mathematically convenient, is also the physically relevant one, as it contains all possible asymptotic behaviours for scalar QNMs with masses above the BF bound. In view of the above, in order to select functions belonging to the chosen Hilbert space, it is convenient to work with the rescaled fields \[\begin{pmatrix}\tilde{\phi}\\ \tilde{\psi}\end{pmatrix}=(1-\rho)^{-2}\begin{pmatrix}\phi\\ \psi\end{pmatrix}\;, \tag{4.9}\] for which imposing the asymptotic behaviour (4.8) amounts to fixing Dirichlet boundary conditions on the AdS boundary. Lastly, note that, given the analytical expression of the adjoint of \(L\) with respect to the energy norm, \(L^{\dagger}\),23 Footnote 23: The explicit computation of \(L^{\dagger}\) can be found in appendix A.1. \[L^{\dagger}=L+\begin{pmatrix}0&0\\ 0&-2i\delta(\rho)\frac{\ell(\rho)-1}{\ell(\rho)-2}\end{pmatrix}\,, \tag{4.10}\] we can conclude that \(L\) is non-normal and that this non-normality is associated with the existence of a horizon. This matches our initial physical intuition: the system is non-conservative because the fluctuations eventually fall into the horizon and die out.
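Once a grid and differentiation matrices are in hand (anticipating section 5), the operator (4.3), with blocks given explicitly in (A.2)-(A.3), can be assembled block by block. The following Python sketch is illustrative only: the function and variable names are ours, \(\ell(\rho)\) and \(\ell'(\rho)\) are passed as generic callables fixed by the background, and the boundary point \(\rho=1\) is assumed to have been removed already:

```python
import numpy as np

def assemble_L(rho, D, D2, ell, dell, q2, m2l2):
    """Discretized first-order operator L of eq. (4.3) (blocks (A.2)-(A.3)).
    rho: grid points with the boundary rho = 1 already removed;
    D, D2: differentiation matrices restricted to those points;
    ell, dell: callables for ell(rho) and ell'(rho) (fixed by the background);
    q2 = q^2, m2l2 = m^2 l^2."""
    lr, dlr = ell(rho), dell(rho)
    pref = 1.0 / (lr - 2.0)
    # (1-rho)^3 [ ell/(1-rho)^3 ]' = ell' + 3 ell/(1-rho)
    L1 = (np.diag(pref * (m2l2 / (1.0 - rho) ** 2 + q2))
          - np.diag(pref * (dlr + 3.0 * lr / (1.0 - rho))) @ D
          - np.diag(pref * lr) @ D2)
    # (1-rho)^3 [ (ell-1)/(1-rho)^3 ]' = ell' + 3 (ell-1)/(1-rho)
    L2 = (np.diag(pref * (dlr + 3.0 * (lr - 1.0) / (1.0 - rho)))
          + 2.0 * np.diag(pref * (lr - 1.0)) @ D)
    n = len(rho)
    return 1j * np.block([[np.zeros((n, n)), np.eye(n)], [L1, L2]])
```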
### Transverse Gauge Field

The linearized equations of motion for a gauge field with action (4.2) are: \[\nabla^{M}F_{MN}=0\;. \tag{4.11}\] Decomposing into Fourier modes \(A_{\mu}(\ell,\mathbf{k},\rho)\) and assuming the momentum \(\mathbf{k}\) to be oriented in the \(x_{3}\) direction, the equations of motion decouple into two sectors: transverse \(\{A_{1},A_{2}\}\) and longitudinal \(\{A_{\ell},A_{3},A_{\rho}\}\). The members of the transverse channel transform as vectors under the unbroken \(O(2)\) symmetry, while the components of the longitudinal sector are invariants [43]. In the present work, we only study the transverse sector, _i.e._, we consider QNMs \(a(\ell,\mathbf{k},\rho)\) satisfying the following eigenvalue problem: \[\mathfrak{w}\begin{pmatrix}a(\ell,\mathbf{k},\rho)\\ \alpha(\ell,\mathbf{k},\rho)\end{pmatrix}=i\begin{pmatrix}0&1\\ L_{1}\left[\partial_{\rho}^{2},\partial_{\rho};\mathfrak{q},\rho\right]&L_{2}\left[\partial_{\rho};\mathfrak{q},\rho\right]\end{pmatrix}\begin{pmatrix}a(\ell,\mathbf{k},\rho)\\ \alpha(\ell,\mathbf{k},\rho)\end{pmatrix}\,, \tag{4.12}\] \[L_{1}\left[\partial_{\rho}^{2},\partial_{\rho};\mathfrak{q},\rho\right]=\left[\ell(\rho)-2\right]^{-1}\left[\mathfrak{q}^{2}-(1-\rho)\left(\frac{\ell(\rho)}{1-\rho}\right)^{\prime}\partial_{\rho}-\ell(\rho)\partial_{\rho}^{2}\right]\,, \tag{4.13}\] \[L_{2}\left[\partial_{\rho};\mathfrak{q},\rho\right]=\left[\ell(\rho)-2\right]^{-1}\left[(1-\rho)\left(\frac{\ell(\rho)-1}{(1-\rho)}\right)^{\prime}+2\left(\ell(\rho)-1\right)\partial_{\rho}\right]\,, \tag{4.14}\] where we used again the dimensionless variables \(\mathfrak{w}=z_{h}\omega\), \(\mathfrak{q}=z_{h}\mathbf{k}\) and introduced the auxiliary field \(\alpha(\ell,\mathbf{k},\rho)=z_{h}\partial_{\ell}a(\ell,\mathbf{k},\rho)=-i\mathfrak{w}a(\ell,\mathbf{k},\rho)\). It is important to stress that the transverse sector is gauge invariant, as we can eliminate the \(\{x_{1},x_{2}\}\) dependence of all functions given the symmetries of the problem. Explicitly, the gauge transformation of the transverse gauge field is: \[a\to a+d\chi=a\;, \tag{4.15}\] where, as indicated, we disregard any possible dependence of \(\chi\) on \(\{x_{1},x_{2}\}\). This ensures that the pseudospectrum and the condition numbers only probe stability under gauge-invariant perturbations. In fact, the gauge-invariant electric field is simply \(E_{1,2}=i\mathfrak{w}A_{1,2}\). As with the scalar, outgoing boundary conditions are automatically satisfied by demanding regularity on the horizon, while normalizable boundary conditions have to be imposed explicitly. Solving the equations of motion (4.11) with a Frobenius series, we obtain the following near-AdS boundary behaviour: \[a(\ell,\mathbf{k},\rho)=e^{-i\omega\ell}\left\{c_{-}(1+...)+c_{+}(1-\rho)^{2}(1+...)\right\}\,, \tag{4.16}\] and thus imposing normalizable boundary conditions corresponds to fixing \(c_{-}=0\). With regards to the energy norm, we recall that the energy-momentum tensor for a gauge field with action (4.2) is given by: \[T_{MN}=F_{MS}F_{N}^{\phantom{N}S}-\frac{1}{4}g_{MN}F_{SP}F^{SP}\,, \tag{4.17}\] which yields the following inner product: \[\left\langle a_{1},a_{2}\right\rangle_{E}=\int_{\ell=\text{const}}d\rho\ \left(\overline{a_{1}}(\ell,\mathbf{k},\rho)\ \overline{\alpha_{1}}(\ell,\mathbf{k},\rho)\right)\mathcal{G}\left[\overleftarrow{\partial}_{\rho},\overrightarrow{\partial}_{\rho};\mathfrak{q},\rho\right]\begin{pmatrix}a_{2}(\ell,\mathbf{k},\rho)\\ \alpha_{2}(\ell,\mathbf{k},\rho)\end{pmatrix}\,, \tag{4.18}\] \[\mathcal{G}\left[\overleftarrow{\partial}_{\rho},\overrightarrow{\partial}_{\rho};\mathfrak{q},\rho\right]=\begin{pmatrix}\frac{\mathfrak{q}^{2}}{1-\rho}+\overleftarrow{\partial}_{\rho}\frac{\ell(\rho)}{1-\rho}\overrightarrow{\partial}_{\rho}&0\\ 0&\frac{2-\ell(\rho)}{1-\rho}\end{pmatrix}\,. \tag{4.19}\] In this case, as we have no mass term, the energy norm is always positive definite. Thus, we consider the function space comprised of functions satisfying outgoing boundary conditions on the horizon with the following behaviour on the AdS boundary: \[a(\ell,\mathbf{k},\rho\to 1)=A(\ell,\mathbf{k})(\rho-1)^{n}\,,\qquad n>1\,, \tag{4.20}\] which ensures the desired asymptotic behaviour for the QNMs while also guaranteeing that the norm is non-divergent. Similarly to the case of the scalar, we find it more convenient to work with the rescaled fields: \[\begin{pmatrix}\tilde{a}\\ \tilde{\alpha}\end{pmatrix}=(1-\rho)^{-1}\begin{pmatrix}a\\ \alpha\end{pmatrix}\,, \tag{4.21}\] for which imposing the asymptotic behaviour (4.20) amounts to fixing Dirichlet boundary conditions on the AdS boundary. To conclude, we note that, identically to what we found for the scalar field, \(L\) is non-normal with respect to the energy norm. Its adjoint with respect to the aforementioned norm is given by \[L^{\dagger}=L+\begin{pmatrix}0&0\\ 0&-2i\delta(\rho)\frac{\ell(\rho)-1}{\ell(\rho)-2}\end{pmatrix}\,, \tag{4.22}\] which once again relates the non-normality to the existence of a horizon.24 Footnote 24: The explicit computation of \(L^{\dagger}\) can be found in appendix A.2.
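At the discrete level (again anticipating section 5), the rescalings (4.9) and (4.21) act as diagonal similarity transformations, and the Dirichlet condition is imposed by deleting the boundary row and column. A minimal sketch, under the simplifying assumption that one rescales the already-discretized operator rather than the continuum equations (in practice the latter is cleaner); names are illustrative:

```python
import numpy as np

def rescale_and_truncate(L_full, rho, p):
    """Field rescaling phi = (1-rho)^p phi_tilde (p = 2 scalar, p = 1 gauge)
    as a diagonal similarity transformation, with Dirichlet conditions imposed
    by deleting the AdS-boundary (rho = 1) rows and columns.  NB: schematic --
    in practice one rescales the continuum equations before discretizing."""
    n = len(rho)
    keep = np.flatnonzero(rho < 1.0)              # drop the boundary point
    idx = np.concatenate([keep, n + keep])        # same rows in both blocks
    Lt = L_full[np.ix_(idx, idx)]                 # truncate first ...
    S = np.diag(np.tile((1.0 - rho[keep]) ** p, 2))
    return np.linalg.solve(S, Lt @ S)             # ... then S^{-1} L S
```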
## 5 Numerical method

We approach the stability analysis numerically, discretizing the radial coordinate \(\rho\) in a Chebyshev grid with points: \[\rho_{j}=\frac{1}{2}\left[1-\cos\left(\frac{j\pi}{N}\right)\right]\,,\qquad j=0,1,...,N\,, \tag{5.1}\] which in turn allows us to discretize the differential operators using the corresponding Chebyshev differentiation matrices [45]. This choice is equivalent to approximating the QNMs by a series of Chebyshev polynomials \(T_{n}\) with \(n=\{0,1,...,N\}\). Regarding the energy norm, we proceed as indicated in [30] and construct a \(G_{E}\) matrix defined as the discretized version of the original energy norm. Labelling by \(u\) the rescaled scalar doublet \(\left(\tilde{\phi},\,\tilde{\psi}\right)^{T}\) and the rescaled gauge field doublet \(\left(\tilde{a},\,\tilde{\alpha}\right)^{T}\), we construct \(G_{E}\) such that: \[\lim_{N\to\infty}u_{N}^{*}G_{E}u_{N}=\left\langle u,u\right\rangle_{E}\,, \tag{5.2}\] where \(u_{N}\) is the vector arising from the discretization of \(u\).25 Footnote 25: There is an important subtlety in the construction of \(G_{E}\) that should be addressed. One needs to construct \(G_{E}\) on a grid of at least twice the size of the original one and, at the end, interpolate back. This ensures consistency with the discretization process; when working on a grid with \(N+1\) points the maximum resolution is given by polynomials of degree \(N+1\), and thus, for the discretized norm to be exact for polynomials of such degree, one needs to construct it on a grid with at least \(2(N+1)\) points. A detailed discussion on the construction of the \(G_{E}\) matrix can be found in appendix B.
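A minimal sketch of the grid (5.1) and the associated first-derivative matrix (the standard Chebyshev construction of [45], mapped from \([-1,1]\) to \(\rho\in[0,1]\)); the second-derivative matrix is then simply the square of the first:

```python
import numpy as np

def cheb_grid(N):
    """Grid (5.1) and first-derivative matrix: the standard Chebyshev
    differentiation matrix (cf. [45]) mapped from x in [-1,1] to rho in [0,1]."""
    j = np.arange(N + 1)
    x = np.cos(j * np.pi / N)                     # standard nodes on [-1, 1]
    c = np.where((j == 0) | (j == N), 2.0, 1.0) * (-1.0) ** j
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T + np.eye(N + 1)
    D = np.outer(c, 1.0 / c) / dX
    D -= np.diag(D.sum(axis=1))                   # negative-sum trick for the diagonal
    rho = (1.0 - x) / 2.0                         # rho_j = (1 - cos(j pi/N))/2
    return rho, -2.0 * D                          # chain rule: d/drho = -2 d/dx

rho, D = cheb_grid(120)                           # the grid size used in section 6
D2 = D @ D                                        # second-derivative matrix
```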
To conclude the discretization process, we need to numerically impose the function spaces introduced in the previous section. Note that, in any numerical method, we can only obtain regular solutions. Consequently, the discretization immediately selects the space of regular functions, and thus we only need to ensure the adequate behaviour on the AdS boundary. This corresponds to imposing Dirichlet boundary conditions for the rescaled scalar and gauge fields, which can be achieved by removing the rows and columns corresponding to the AdS boundary from all discretized operators, including the \(G_{E}\) matrix.26 Footnote 26: This is equivalent to reducing the space in which the matrices act to that of vectors vanishing on the AdS boundary.

With the original system fully discretized, we can proceed to study the spectral stability. We use _Wolfram Engine_ to compute condition numbers and pseudospectra as indicated in theorem 2.7. It is particularly convenient to rewrite the problem in terms of the matrix \(\ell^{2}\)-norm, as it reduces the computation of the pseudospectrum to obtaining the smallest eigenvalue of a Hermitian matrix, which we locate using Arnoldi iteration (see _e.g._ ch. 28 of [38]). With this procedure, we achieve \(\mathcal{O}\left(N^{2}\right)\) runtime for each point \(z\) in the complex plane where we compute the norm of the resolvent. In order to gain greater insight into the nature of the (in)stability, we also explore the selective pseudospectra associated with potential perturbations to the original equations of motion. Concretely, we consider perturbations to equations (3.17) and (4.11) of the form: \[\left[-\nabla_{M}\nabla^{M}+m^{2}+\frac{V(\rho)}{l^{2}}\right]\phi=0\,,\qquad\nabla^{M}F_{MN}-\frac{V(\rho)}{l^{2}}A_{N}=0\,, \tag{5.3}\] where, in order to preserve the asymptotic behaviour on the AdS boundary, we choose potentials vanishing on the aforementioned boundary, \(V(1)=0\). Note that the potential term added to the gauge field dynamics does, in general, break gauge invariance. Nonetheless, as a transverse gauge field \(A_{M}\) is gauge invariant, the chosen potential term preserves the gauge symmetry. The main difference between full and selective pseudospectra lies in the kind of stability each of them probes: the former explores the stability under generic bounded perturbations, while the latter only considers local perturbations. Physically, these local perturbations can be interpreted as effective interactions arising from small deviations from the perfect SAdS\({}_{4+1}\) background, and thus probing the stability under them is akin to determining the underlying model dependence of the system. The selective pseudospectrum is computed using definition 2.2 with randomly generated potential perturbations. This is complemented with the computation of the QNFs for the perturbed system with the following deterministic potentials: \[V_{1}(\rho)=A_{1}(1-\rho)\cos(2\pi\rho)\,, \tag{5.4a}\] \[V_{2}(\rho)=A_{2}(1-\rho)\cos(90\pi\rho)\,, \tag{5.4b}\] \[V_{3}(\rho)=A_{3}(1-\rho)\left\{1-\tanh\left[20\rho\right]\right\}\,, \tag{5.4c}\] \[V_{4}(\rho)=A_{4}(1-\rho)\left\{1-\tanh\left[20(1-\rho)\right]\right\}\,, \tag{5.4d}\] which shed light on a few interesting regimes. With \(V_{1}\) and \(V_{2}\), we probe the effect of long and short \(\rho\)-wavelength (wavelength in the \(\rho\) direction) perturbations, while with \(V_{3}\) and \(V_{4}\), we analyze the stability under localized perturbations near the horizon (IR of the dual QFT) and the boundary (UV of the dual QFT). Note that we have introduced normalization constants \(\{A_{i}\}\) to fix the magnitude of the perturbation. We plot these potentials in figure 3.
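A sketch of the potentials (5.4) and of how a local potential perturbation (5.3) enters the discretized problem: for the scalar, \(V\) simply shifts \(m^{2}l^{2}\to m^{2}l^{2}+V(\rho)\), so only the \(L_{1}\) block of (4.3) is modified. The normalization callable and all names are illustrative:

```python
import numpy as np

# Deterministic potentials of eq. (5.4), with unit amplitudes A_i = 1:
V1 = lambda r: (1 - r) * np.cos(2 * np.pi * r)            # long rho-wavelength
V2 = lambda r: (1 - r) * np.cos(90 * np.pi * r)           # short rho-wavelength
V3 = lambda r: (1 - r) * (1 - np.tanh(20 * r))            # localized near the horizon
V4 = lambda r: (1 - r) * (1 - np.tanh(20 * (1 - r)))      # localized near the boundary

def perturb_scalar(L, rho, ell, Vfun, eps, opnorm):
    """Perturbation (5.3) for the scalar: V shifts m^2 l^2 -> m^2 l^2 + V(rho),
    so only the lower-left (L1) block of (4.3) changes.  'opnorm' is a callable
    returning the operator norm used to fix the perturbation size to eps."""
    n = len(rho)                                  # boundary point assumed removed
    dL = np.zeros_like(L, dtype=complex)
    dL[n:, :n] = 1j * np.diag(Vfun(rho) / ((ell(rho) - 2.0) * (1 - rho) ** 2))
    return L + eps * dL / opnorm(dL)
```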
Interestingly, for the pseudospectrum (both full and selective), the grid establishes a cutoff for the \(\rho\)-wavelength of the perturbations. Thus, as a byproduct of the discretization, we are only truly sensitive to the stability under perturbations above the cutoff. Nonetheless, these grid effects are expected to be small; otherwise, the values of the QNFs would depend greatly on the grid and no numerical method would be reliable. Finally, to stress the nontrivial nature of the norm, we also consider the pseudospectrum in the \(L^{2}\)-norm \[\left\|u\right\|_{L^{2}}=\int\,d\rho\,\,u^{*}(\ell,\mathbf{k},\rho)u(\ell,\mathbf{k},\rho)\,. \tag{5.5}\] Although the physical relevance of this norm for the present problem is less clear, it might be informative to compare the general features of the pseudospectra in the energy norm to those in the \(L^{2}\)-norm. We direct the interested reader to Appendix C, where we present the pseudospectrum analysis in the \(L^{2}\)-norm.

Figure 3: Deterministic potentials (5.4) with \(A_{i}=1\). Recall that the horizon is at \(\rho=0\) and the boundary at \(\rho=1\).

## 6 Results

Here, we present the results of the analysis described in the previous sections. Our numerical simulations are performed on a grid of 120 points with a precision of 5\(\times\)MachinePrecision. It is important to note that we require large grids and high precision to ensure that the numerical procedure does not heavily affect our results. Physically, the discretization and numerical round-offs can be understood as perturbations to the original problem and, as we want to analyze the stability of the latter, we need to be especially careful and avoid undesired effects associated with them. As we are mainly interested in the stability of the three or four lowest-lying QNFs, in figure 4 we test the validity of our numerics by computing those QNFs in our setup and comparing them to the ones obtained using 10\(\times\)MachinePrecision on a grid of 400 points.28 Thus, we conclude that with our choice of parameters, we are ensuring that the effect of the numerics on the first four QNFs is smaller than \(10^{-50}\) in all cases. It is interesting to note that the need for such large grids and precisions is a good indicator of the underlying instability of the problem. Small perturbations associated with the numerics have effects much larger than their typical scale (see figure 5).

Figure 4: Convergence test for QNFs computed with 5\(\times\)MachinePrecision. The error is defined as \(\left|1-\frac{|\mathfrak{w}|}{|\mathfrak{w}_{\text{ref}}|}\right|\), with \(\mathfrak{w}_{\text{ref}}\) the reference value obtained with 10\(\times\)MachinePrecision on a grid of 400 points. The missing points are such that their error is below the numerical accuracy.

### Real Scalar in SAdS\({}_{4+1}\)

In figures 6 and 7, we present full and selective pseudospectra and the corresponding condition numbers in the energy norm for different values of \(m^{2}l^{2}\) and \(\mathfrak{q}=\sqrt{\mathfrak{q}^{2}}\). Recall that throughout this section we are working in units of \(z_{h}=(\pi\,T)^{-1}\). The \(\varepsilon\)-pseudospectra exhibit extended open regions denoting instability.29 As observed in asymptotically flat [30; 31] and de Sitter spacetimes [32], the instability increases the further away the QNFs are from the real axis, which, for the dual quantum field theory, implies that excitations are progressively unstable the more short-lived they are.
Footnote 29: Note that all QNFs are contained within some region of the \(\varepsilon\)-pseudospectra. However, due to limited resolution in the plots, not all regions of the \(\varepsilon\)-pseudospectrum are observable. For example, in the full pseudospectrum shown in figure 7(a), we cannot appreciate the \(10^{-7}\)-pseudospectrum around the first QNF observable in figure 6(a). The same limitation applies to the selective pseudospectra, where small regions are covered by the red dots representing the QNFs (_e.g._, the selective \(10^{-3}\)-pseudospectrum observed around the first QNF in figure 6(a) is covered by a red dot in figure 7(a)).

Interestingly, we find that one needs perturbations of size \(\sim 10^{-0.4}\approx 0.4\) to drive the QNFs to the upper half of the complex plane, as indicated by the full pseudospectra in figure 7. Noting that the typical distance between eigenvalues is \(\sim 2\), this implies that all backgrounds arising as a small deviation from SAdS\({}_{4+1}\) are stable, _i.e._, they do not have exponentially growing QNMs. For the dual QFT, this indicates that the ground state is stable despite the spectrum being unstable. Focusing now on the selective pseudospectrum, we note that its structure seems to resemble the structure of the full pseudospectrum. Nonetheless, the scales clearly do not match; the selective pseudospectrum shows significantly more stability. Hence, we can conclude that most of the instability arises from perturbations which cannot be interpreted as local potential perturbations. Regarding the mass and momentum dependence, stability decreases with mass and increases with momentum (see figures 8 and 9). Such behaviour can be understood in the context of the instability being related to the distance to the real axis: the imaginary part of the QNFs is reduced as mass (momentum) decreases (increases), and thus we observe more stability. For the QFT this implies that high-momentum fluctuations of operators with small scaling dimension are more stable.

Figure 5: Convergence test for QNFs computed with MachinePrecision. The error is defined as \(\left|1-\frac{|\mathfrak{w}|}{|\mathfrak{w}_{\text{ref}}|}\right|\), with \(\mathfrak{w}_{\text{ref}}\) the reference value obtained with \(10\times\)MachinePrecision on a grid of 400 points. The large error of the fourth QNF hints towards spectral instability.

Figure 6: Close-up of the scalar pseudospectrum in the energy norm around the first QNF for different values of \(\mathfrak{q}\) and \(m^{2}l^{2}\). The red dot corresponds to the QNF, the white lines represent the boundaries of various full \(\varepsilon\)-pseudospectra, and the dashed blue circle symbolizes a circle with a radius of \(10^{-1}\) centered on the QNF. The heat map corresponds to the logarithm in base 10 of the inverse of the resolvent, while the blue and cyan dots indicate selective \(\varepsilon\)-pseudospectra computed with random local potential perturbations of size \(10^{-1}\) and \(10^{-3}\), respectively. Remarkably, only in (a) do we observe instability of the first QNF under local potential perturbations.

Figure 7: Scalar pseudospectrum in the energy norm for different values of \(\mathfrak{q}\) and \(m^{2}l^{2}\). In the lower panels, we present selective and full pseudospectra. The red dots represent the (unperturbed) QNFs in the typical “Christmas Tree” configuration. The white lines denote the boundaries of different full \(\varepsilon\)-pseudospectra.
The heat map corresponds to the logarithm in base 10 of the inverse of the resolvent, while the blue, cyan, green, and yellow dots indicate different selective \(\varepsilon\)-pseudospectra computed with random local potential perturbations of size \(10^{-1}\), \(10^{-3}\), \(10^{-5}\), and \(10^{-7}\), respectively. In the upper panels, we represent the condition numbers. Most notably, for small values of \(\varepsilon\), the full \(\varepsilon\)-pseudospectra present open regions containing multiple QNFs, which signals spectral instability.

Figure 8: Mass dependence in the energy norm for the scalar fluctuations.

Figure 9: Momentum dependence in the energy norm for the scalar fluctuations.

We now focus on the first QNF (see figure 6), which has the smallest imaginary part and thus dominates the low-energy spectrum of the QFT. Remarkably, we find that for sufficiently large mass and small momentum, it is unstable under local potential perturbations, as the selective \(\varepsilon\)-pseudospectra extend beyond the circle of radius \(\varepsilon\) centered around the QNF. To further explore the nature of this instability, in figure 10 we analyze the effect of the deterministic potential perturbations (5.4) on the first QNF. We observe that the instability is only present for near-horizon perturbations, concluding that it is mainly associated with deformations of the IR of the dual QFT.

Figure 10: Effect on the first scalar QNF of the deterministic perturbations (5.4) with size \(\left\|V_{i}\right\|_{E}=10^{-1}\). The unperturbed QNF is shown in red, while the perturbed QNFs are depicted in blue. The dashed blue line represents the circle of radius \(10^{-1}\) centered on the unperturbed QNF. Remarkably, only for \(\mathfrak{q}=0\) do we observe instability under near-horizon perturbations (\(V_{3}\)).

Figure 11: Effect on the spectrum of the scalar of the deterministic perturbations (5.4) with size \(\left\|V_{i}\right\|_{E}=10^{-1}\). In the lower panels we present the spectra and in the upper ones the condition numbers for the lowest QNFs. The unperturbed QNFs are shown in red, while the perturbed ones are depicted in blue. The plotted region of the spectra is stable under long \(\rho\)-wavelength perturbations (\(V_{1}\)) and near-boundary perturbations (\(V_{4}\)). Note also that, as indicated by the condition numbers, the perturbed QNFs associated with \(V_{2}\) and \(V_{3}\) are more stable than the unperturbed QNFs.
We end this section by briefly addressing the results for the pseudospectrum of the scalar field in the \(L^{2}\)-norm. We refer the reader to Appendix C and here we simply point out that the pseudospectra are qualitatively similar to those in the energy norm. However, in the \(L^{2}\)-norm we find that the stability is enhanced under local potential perturbations and reduced under generic perturbations. ### Transverse Gauge Field in SAdS\({}_{4+1}\) Figure 12: Close-up of the transverse gauge field pseudospectrum in the energy norm around the first QNF for different values of \(\mathfrak{q}\). The red dot corresponds to the QNF, the white lines represent the boundaries of various full \(\varepsilon\)-pseudospectra, and the dashed blue circle symbolizes a circle with a radius of \(10^{-1}\) centered on the QNF. The heat map corresponds to the logarithm in base 10 of the inverse of the resolvent, while the blue dots indicate selective \(\varepsilon\)-pseudospectra computed with random local potential perturbations. In this section we present the pseudospectrum of the fluctuations of the transverse gauge field introduced in section 4.2. As we detail in the following, we observe the same generic features as for the real scalar. In particular we find the QNFs of the gauge field to be unstable. First, in figures 12 and 13 we display the pseudospectrum of the transverse gauge field. The presence of open contour lines around the QNFs indicates that they are unstable under generic perturbations. However, for the first QNF we have not found statistical evidence of its instability under random local perturbations (see the selective pseudospectra in figure 12). It is interesting to note that this is similar to what we found for the scalar field of mass \(m^{2}l^{2}=-3\) (see figure 6). From the conformal field theory point of view both of these fields correspond to operators of conformal dimension three. In order to further characterize the nature of the instability, in figure 14 we show the effect of the deterministic perturbations (100) on the QNF spectrum. At zero momentum all QNFs we study are unstable under these perturbations. In particular in figure 13 the first QNF is shown to be unstable under near-horizon perturbations. Moreover, from studying the evolution of the condition numbers shown in that figure, one can conclude that near-horizon Figure 13: Transverse gauge field pseudospectrum in the energy norm for different values of \(\mathfrak{q}\). In the lower panels, we present selective and full pseudospectra. The red dots represent the QNFs, and the white lines denote the boundaries of different full \(\varepsilon\)-pseudospectra. The heat map corresponds to the logarithm in base 10 of the inverse of the resolvent, while the blue, cyan, green, and yellow dots indicate different selective \(\varepsilon\)-pseudospectra computed with random local potential perturbations of size \(10^{-1}\), \(10^{-3}\), \(10^{-5}\), and \(10^{-7}\); respectively. In the upper panels, we represent the condition numbers. and short \(\rho\)-wavelength perturbations result in less unstable QNFs. As in the previous section it is also interesting to analyze the dependence of the stability Figure 14: Effect on the transverse gauge field spectrum of the deterministic perturbations (5.4) with size \(\left\|V_{i}\right\|_{E}=10^{-1}\). The unperturbed QNF is shown in red, while the perturbed QNFs are depicted in blue. 
In (a) and (b) the dashed blue line represents the circle of radius \(10^{-1}\) centered on the unperturbed QNF. In the upper panels of (c) and (d) we present the condition numbers for the lowest QNFs.

Therefore, in figure 15 we plot the momentum dependence of the QNFs and their condition numbers. The QNFs get closer to the real axis as momentum increases, and, as for the real scalar, we observe that their condition numbers decrease in the process (although staying above one for the range of momentum studied). Despite the qualitative similarities, it is worth noting that the actual values of the pseudospectra do not match the cases studied in the previous subsection. Specifically, we observe that the gauge field is more stable than the massless scalar and less stable than the scalar with mass \(m^{2}l^{2}=-3\). This agrees with the instability being related to the imaginary part of the QNFs. The gauge field QNFs are closer to the real axis than those of the massless scalar and further away than those of the scalar with mass \(m^{2}l^{2}=-3\). Finally, it is illustrative to analyze the pseudospectrum in the \(L^{2}\)-norm. We present our results in appendix C. Again, we observe that in the \(L^{2}\)-norm the stability is enhanced under local potential perturbations and reduced under generic perturbations.

Figure 15: Momentum dependence in the energy norm for the transverse gauge field.

## 7 Conclusions

In this work we have initiated the study of the spectral stability of quasinormal modes in asymptotically AdS black hole geometries. Gauge/gravity duality maps the frequencies of those modes (QNFs) to the spectrum of collective excitations of a strongly coupled quantum many-body system. Therefore, probing the spectral stability of QNFs is akin to probing the stability of the spectra of strongly coupled holographic field theories. Following previous works in asymptotically flat [30; 31] and de Sitter spacetimes [32], we probed spectral stability through pseudospectra and condition numbers. To construct the pseudospectra, we needed to choose a physically motivated norm and cast the equations of motion for the quasinormal modes in the form of an eigenvalue problem. Consequently, in section 3 we proposed a generic approach valid for static black brane backgrounds, a family of spacetimes very prominent in the literature. Our approach relied on two cornerstones. First, the regular coordinates, which enabled us to write the equations of motion as an eigenvalue problem while also translating outgoing boundary conditions to regularity conditions on the horizon. And second, the energy norm, which represents the physically relevant notion of size for the QNMs. After discussing the general approach, we proceeded to study the stability of a real scalar field and a transverse gauge field in SAdS\({}_{4+1}\). As previously observed in asymptotically flat [30; 31] and dS spacetimes [32], we found that the QNFs are unstable and that the instability is associated with the existence of a horizon. Moreover, we observed that QNFs are less unstable the closer they are to the real axis, a feature also present in the aforementioned spacetimes. Regarding the scalar, we found an increase in stability at high momenta and small masses. This implies that in the dual QFT the instability is milder for high-momentum fluctuations of operators with small mass dimension.
In order to further study the nature of the instability we probed the spectrum of QNFs with local potential perturbations, which could arise as effective interactions representing deviations from the exact SAdS\({}_{4+1}\) background. Remarkably, we inferred that only a small portion of the total instability is associated with these perturbations. Moreover, we concluded that small perturbations cannot drive the QNFs to the upper half of the complex plane. We also found that for small momenta, the first QNF is unstable, with the instability arising from localized perturbations near the horizon. Thus, small deformations of the IR can destabilize even the lowest QNF. With respect to the gauge field, we conclude that its pseudospectrum is qualitatively very similar to that of the scalar field. This is even more pronounced for the scalar of mass \(m^{2}l^{2}=-3\). In that case both the scalar and the transverse gauge field correspond to operators with the same conformal dimension. Actually, the resemblance between the pseudospectra of the scalar and the gauge field might be due to the fact that the spectra of both fields are very similar. Exploring this potential connection would indicate a degree of universality, which would be intriguing to investigate in future research. We have shown that AdS spacetimes also present spectral instability associated with the existence of an event horizon. This supports the physical intuition that the observed instability is a direct consequence of the non-conservative character of black holes and thus is independent of the asymptotic behaviour of the spacetime. With regard to gauge/gravity duality, our results imply that thermal excitation spectra of strongly coupled quantum field theories dual to SAdS\({}_{4+1}\) (or some small deviation from it) are unstable. This suggests that a mathematical model would not be able to accurately represent the actual spectra of the real physical system. That said, it is important to remark that the instability decreases with the decay width of the fluctuations: we expect the imprint of the aforementioned instability to be less pronounced for modes near the real axis. We want to stress the importance of this result; in a practical setting it implies that, at most, one can hope to capture the leading-order behaviour (dominated by the longest-lived excitation). On the other hand, it might not be possible to reliably model the subleading features of the decay, as the higher QNFs are increasingly unstable. The pseudospectral analysis in asymptotically AdS geometries has many applications, with potentially deep implications, in the realm of gauge/gravity duality. We list some of them in the following.

* Study the stability of hydrodynamic modes. As we have seen, the closer a QNF is to the real axis, the less unstable it becomes. In particular, that might indicate that hydrodynamic modes enjoy privileged stability properties in the pseudospectrum. On the other hand, if this effect is not sufficiently pronounced, it would be interesting to see whether small local perturbations at zero momentum could drive the hydro mode's QNF to the upper half plane, indicating instabilities of flow patterns. In fact, such instabilities have been argued to arise in Navier-Stokes equations and explain the early onset of turbulence [55].
* Study the stability of the collision between the hydro mode's QNF and the first non-hydro mode's QNF. This would shed new light on the validity of the hydrodynamic approximation.
Currently, the hydrodynamic expansion is postulated to be valid up to the energy where the aforementioned collision happens [25, 26, 27]; consequently, finding instability would indicate that the limit of validity itself is unstable. A similar comment applies to pole-skipping points. These are points at which the residue of a QNM as a pole of a holographic Green's function vanishes [24, 56]. Such points are of particular interest in relation to quantum chaos [57, 58, 59].
* Study the stability of the phase transition in the holographic superconductor model [15]. It would be interesting to see whether the critical temperature is stable under the generic perturbations captured by the pseudospectrum.
* Study the spectral stability of AdS Reissner-Nordstrom black branes. The pseudospectrum analysis of charged black branes might have important consequences for the dual description of quantum critical phases of matter [8, 9].
* Study the spectral stability of other solutions frequently used in holography. Beyond completing the current literature of QNFs in AdS spacetimes, it would be interesting to explore the existence of some degree of universality in the pseudospectrum. In particular, as spectra in AdS typically share the "Christmas Tree" structure observed in the current work (see, for instance, [22, 43]), this study would answer whether the overarching features observed here are fundamental to the structure of the spectrum or depend heavily on the particular setup.
* It has recently been suggested that quasinormal modes can serve as a signature for having successfully simulated (quantum) gravity on a (quantum) computer [60, 61]. In view of our results this idea suffers from similar problems as the idea of black hole spectroscopy from gravitational wave signals. If such (quantum) simulations can be realized, then the task of identifying the quasinormal modes will be subject to similar difficulties as for astrophysical black holes (see [62] for a relevant discussion of this issue).
* A different way of studying the poles of holographic Green's functions is to look for complex solutions for the momenta upon fixing the frequency to be a real number [63]. This is particularly important for the diffusive mode and has implications for causality [64] (see also the recent preprint [65]). It would be interesting to study the pseudospectra of these complex momentum modes.

## Acknowledgements

The work of D.A and K.L. is supported through the grants CEX2020-001007-S and PID2021-123017NB-100, PID2021-127726NB-I00 funded by MCIN/AEI/10.13039/501100011033 and by ERDF "A way of making Europe". The work of D.G.F. is supported by JAEIntroICU-2022-IFT-02.

## Appendix A Computation of \(L^{\dagger}\)

Here we present a detailed computation of the adjoint operator \(L^{\dagger}\) for the real scalar field (equation (4.10)) and the transverse gauge field (equation (4.22)).
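Before turning to the continuum computation, we note that at the discrete level the adjoint has a purely algebraic counterpart, which provides a useful numerical cross-check of the results below. A minimal sketch (assuming the discretized \(L\) and \(G_{E}\) of section 5 are available; function names are ours):

```python
import numpy as np

def adjoint_E(L, GE):
    """Adjoint with respect to <u, v>_E = u^* GE v:  L_dag = GE^{-1} L^H GE."""
    return np.linalg.solve(GE, L.conj().T @ GE)

def non_normality(L, GE):
    """Spectral norm of the commutator [L, L_dag]; it vanishes iff L is normal."""
    Ld = adjoint_E(L, GE)
    return np.linalg.norm(L @ Ld - Ld @ L, ord=2)
```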
### Real Scalar Field

For the real scalar field, the differential operator \(L\) is given by \[L=i\begin{pmatrix}0&1\\ L_{1}\left[\partial_{\rho}^{2},\partial_{\rho};\mathfrak{q},\rho\right]&L_{2}\left[\partial_{\rho};\mathfrak{q},\rho\right]\end{pmatrix}\,,\] (A.1) with \[L_{1}\left[\partial_{\rho}^{2},\partial_{\rho};\mathfrak{q},\rho\right]=\left[\ell(\rho)-2\right]^{-1}\left[\frac{m^{2}l^{2}}{(1-\rho)^{2}}+\mathfrak{q}^{2}-(1-\rho)^{3}\left(\frac{\ell(\rho)}{(1-\rho)^{3}}\right)^{\prime}\partial_{\rho}-\ell(\rho)\partial_{\rho}^{2}\right]\,,\] (A.2) \[L_{2}\left[\partial_{\rho};\mathfrak{q},\rho\right]=\left[\ell(\rho)-2\right]^{-1}\left[(1-\rho)^{3}\left(\frac{\ell(\rho)-1}{(1-\rho)^{3}}\right)^{\prime}+2\left(\ell(\rho)-1\right)\partial_{\rho}\right]\,,\] (A.3) and the inner product induced by the energy norm is: \[\left\langle\phi_{1},\phi_{2}\right\rangle_{E}=\int_{0}^{1}\frac{d\rho}{(1-\rho)^{3}}\left[\left(\frac{m^{2}l^{2}}{(1-\rho)^{2}}+\mathfrak{q}^{2}\right)\overline{\phi_{1}}(\ell,\mathbf{k},\rho)\phi_{2}(\ell,\mathbf{k},\rho)+\ell(\rho)\partial_{\rho}\overline{\phi_{1}}(\ell,\mathbf{k},\rho)\partial_{\rho}\phi_{2}(\ell,\mathbf{k},\rho)-\left(\ell(\rho)-2\right)\overline{\psi_{1}}(\ell,\mathbf{k},\rho)\psi_{2}(\ell,\mathbf{k},\rho)\right]\,.\] (A.4) For notational convenience, we drop the explicit dependence of all functions and denote \(\rho\) derivatives with a prime. Now, recalling the equality \[\left\langle L^{\dagger}\left[\phi_{1}\right],\phi_{2}\right\rangle_{E}=\left\langle\phi_{1},L\left[\phi_{2}\right]\right\rangle_{E}\,,\] (A.5) we can compute \(L^{\dagger}\) integrating by parts: \[\left\langle\phi_{1},L\left[\phi_{2}\right]\right\rangle_{E}=i\int_{0}^{1}\frac{d\rho}{(1-\rho)^{3}}\left[\left(\frac{m^{2}l^{2}}{(1-\rho)^{2}}+\mathfrak{q}^{2}\right)\overline{\phi_{1}}\psi_{2}+\overline{\phi_{1}}^{\prime}\ell\psi_{2}^{\prime}-\overline{\psi_{1}}\left\{\left(\frac{m^{2}l^{2}}{(1-\rho)^{2}}+\mathfrak{q}^{2}\right)\phi_{2}-(1-\rho)^{3}\left(\frac{\ell\phi_{2}^{\prime}}{(1-\rho)^{3}}\right)^{\prime}+(1-\rho)^{3}\left(\frac{(\ell-1)\psi_{2}}{(1-\rho)^{3}}\right)^{\prime}+(\ell-1)\,\psi_{2}^{\prime}\right\}\right]\] \[=-i\int_{0}^{1}\frac{d\rho}{(1-\rho)^{3}}\left[\left(\frac{m^{2}l^{2}}{(1-\rho)^{2}}+\mathfrak{q}^{2}\right)\phi_{2}\overline{\psi_{1}}+\phi_{2}^{\prime}\ell\overline{\psi_{1}}^{\prime}-\psi_{2}\left\{\left(\frac{m^{2}l^{2}}{(1-\rho)^{2}}+\mathfrak{q}^{2}\right)\overline{\phi_{1}}-(1-\rho)^{3}\left(\frac{\ell\overline{\phi_{1}}^{\prime}}{(1-\rho)^{3}}\right)^{\prime}+(1-\rho)^{3}\left(\frac{(\ell-1)\overline{\psi_{1}}}{(1-\rho)^{3}}\right)^{\prime}+(\ell-1)\,\overline{\psi_{1}}^{\prime}\right\}\right]\] \[\quad+i\frac{\ell\overline{\phi_{1}}^{\prime}\psi_{2}}{(1-\rho)^{3}}\Bigg{|}_{\rho=0}^{\rho=1}+i\frac{\ell\overline{\psi_{1}}\phi_{2}^{\prime}}{(1-\rho)^{3}}\Bigg{|}_{\rho=0}^{\rho=1}-2i\frac{(\ell-1)\overline{\psi_{1}}\psi_{2}}{(1-\rho)^{3}}\Bigg{|}_{\rho=0}^{\rho=1}\] \[=\left\langle L\left[\phi_{1}\right],\phi_{2}\right\rangle_{E}+i\frac{\ell\overline{\phi_{1}}^{\prime}\psi_{2}}{(1-\rho)^{3}}\Bigg{|}_{\rho=0}^{\rho=1}+i\frac{\ell\overline{\psi_{1}}\phi_{2}^{\prime}}{(1-\rho)^{3}}\Bigg{|}_{\rho=0}^{\rho=1}-2i\frac{(\ell-1)\overline{\psi_{1}}\psi_{2}}{(1-\rho)^{3}}\Bigg{|}_{\rho=0}^{\rho=1}\] \[=\left\langle L^{\dagger}\left[\phi_{1}\right],\phi_{2}\right\rangle_{E}\,.\] (A.6)
The boundary terms vanish for \(\rho=1\) as the chosen function space satisfies the boundary condition (4.8). However, at \(\rho=0\), we have \(\ell(0)=0\), and we get a non-zero contribution arising from the last term. Then, the final expression for (A.6) is given by: \[\left\langle\phi_{1},L\left[\phi_{2}\right]\right\rangle_{E}=\left\langle L\left[\phi_{1}\right],\phi_{2}\right\rangle_{E}-2i\frac{(\ell-1)\overline{\psi_{1}}\psi_{2}}{(1-\rho)^{3}}\Bigg{|}_{\rho=0}\] \[=\left\langle L\left[\phi_{1}\right],\phi_{2}\right\rangle_{E}-i\int_{0}^{1}\frac{d\rho}{(1-\rho)^{3}}(\ell-2)\left[2\delta(\rho)\frac{\ell(\rho)-1}{\ell(\rho)-2}\right]\overline{\psi_{1}}\psi_{2}\] \[=\left\langle L\left[\phi_{1}\right],\phi_{2}\right\rangle_{E}+\left\langle\delta L\left[\phi_{1}\right],\phi_{2}\right\rangle_{E}=\left\langle L^{\dagger}\left[\phi_{1}\right],\phi_{2}\right\rangle_{E}\,,\] (A.7) from where we recover the expression for the adjoint given in equation (4.10): \[L^{\dagger}=L+\delta L=L+\begin{pmatrix}0&0\\ 0&-2i\delta(\rho)\frac{\ell(\rho)-1}{\ell(\rho)-2}\end{pmatrix}\,.\] (A.8)
### Transverse Gauge Field

For the transverse gauge field, the differential operator \(L\) is given by \[L=i\begin{pmatrix}0&1\\ L_{1}\left[\partial_{\rho}^{2},\partial_{\rho};\mathfrak{q},\rho\right]&L_{2}\left[\partial_{\rho};\mathfrak{q},\rho\right]\end{pmatrix}\,,\] (A.9) with \[L_{1}\left[\partial_{\rho}^{2},\partial_{\rho};\mathfrak{q},\rho\right]=\left[\ell(\rho)-2\right]^{-1}\left[\mathfrak{q}^{2}-(1-\rho)\left(\frac{\ell(\rho)}{1-\rho}\right)^{\prime}\partial_{\rho}-\ell(\rho)\partial_{\rho}^{2}\right]\,,\] (A.10) \[L_{2}\left[\partial_{\rho};\mathfrak{q},\rho\right]=\left[\ell(\rho)-2\right]^{-1}\left[(1-\rho)\left(\frac{\ell(\rho)-1}{(1-\rho)}\right)^{\prime}+2\left(\ell(\rho)-1\right)\partial_{\rho}\right]\,,\] (A.11) and the inner product induced by the energy norm is: \[\left\langle a_{1},a_{2}\right\rangle_{E}=\int_{0}^{1}\frac{d\rho}{(1-\rho)}\bigg{[}\mathfrak{q}^{2}\overline{a_{1}}(\ell,\mathbf{k},\rho)a_{2}(\ell,\mathbf{k},\rho)+\ell(\rho)\partial_{\rho}\overline{a_{1}}(\ell,\mathbf{k},\rho)\partial_{\rho}a_{2}(\ell,\mathbf{k},\rho)-\left(\ell(\rho)-2\right)\overline{\alpha_{1}}(\ell,\mathbf{k},\rho)\alpha_{2}(\ell,\mathbf{k},\rho)\bigg{]}\,.\] (A.12) As with the scalar, we drop the explicit dependence of all functions, denote \(\rho\) derivatives with a prime, and compute \(L^{\dagger}\) integrating by parts: \[\left\langle a_{1},L\left[a_{2}\right]\right\rangle_{E}=i\int_{0}^{1}\frac{d\rho}{(1-\rho)}\Bigg{[}\mathfrak{q}^{2}\overline{a_{1}}\alpha_{2}+\overline{a_{1}}^{\prime}\ell\alpha_{2}^{\prime}-\overline{\alpha_{1}}\left\{\mathfrak{q}^{2}a_{2}-(1-\rho)\left(\frac{\ell a_{2}^{\prime}}{(1-\rho)}\right)^{\prime}+(1-\rho)\left(\frac{(\ell-1)\alpha_{2}}{(1-\rho)}\right)^{\prime}+(\ell-1)\,\alpha_{2}^{\prime}\right\}\Bigg{]}\] \[=-i\int_{0}^{1}\frac{d\rho}{(1-\rho)}\Bigg{[}\mathfrak{q}^{2}a_{2}\overline{\alpha_{1}}+a_{2}^{\prime}\ell\overline{\alpha_{1}}^{\prime}-\alpha_{2}\left\{\mathfrak{q}^{2}\overline{a_{1}}-(1-\rho)\left(\frac{\ell\overline{a_{1}}^{\prime}}{(1-\rho)}\right)^{\prime}+(1-\rho)\left(\frac{(\ell-1)\overline{\alpha_{1}}}{(1-\rho)}\right)^{\prime}+(\ell-1)\,\overline{\alpha_{1}}^{\prime}\right\}\Bigg{]}\] \[\quad+i\frac{\ell\overline{a_{1}}^{\prime}\alpha_{2}}{(1-\rho)}\Bigg{|}_{\rho=0}^{\rho=1}+i\frac{\ell\overline{\alpha_{1}}a_{2}^{\prime}}{(1-\rho)}\Bigg{|}_{\rho=0}^{\rho=1}-2i\frac{(\ell-1)\overline{\alpha_{1}}\alpha_{2}}{(1-\rho)}\Bigg{|}_{\rho=0}^{\rho=1}\] \[=\left\langle L\left[a_{1}\right],a_{2}\right\rangle_{E}+i\frac{\ell\overline{a_{1}}^{\prime}\alpha_{2}}{(1-\rho)}\Bigg{|}_{\rho=0}^{\rho=1}+i\frac{\ell\overline{\alpha_{1}}a_{2}^{\prime}}{(1-\rho)}\Bigg{|}_{\rho=0}^{\rho=1}-2i\frac{(\ell-1)\overline{\alpha_{1}}\alpha_{2}}{(1-\rho)}\Bigg{|}_{\rho=0}^{\rho=1}\] \[=\left\langle L^{\dagger}\left[a_{1}\right],a_{2}\right\rangle_{E}\,.\] (A.13) The boundary terms vanish for \(\rho=1\) as the chosen function space satisfies the boundary condition (4.20). Nonetheless, at \(\rho=0\), we have \(\ell(0)=0\), and we get a non-zero contribution arising from the last term. Then, the final expression for (A.13) is given by: \[\left\langle a_{1},L\left[a_{2}\right]\right\rangle_{E}=\left\langle L\left[a_{1}\right],a_{2}\right\rangle_{E}-i\int_{0}^{1}\frac{d\rho}{(1-\rho)}(\ell-2)\left[2\delta(\rho)\frac{\ell(\rho)-1}{\ell(\rho)-2}\right]\overline{\alpha_{1}}\alpha_{2}=\left\langle L\left[a_{1}\right],a_{2}\right\rangle_{E}+\left\langle\delta L\left[a_{1}\right],a_{2}\right\rangle_{E}=\left\langle L^{\dagger}\left[a_{1}\right],a_{2}\right\rangle_{E}\,,\] (A.14) from where we recover the expression for the adjoint given in equation (4.22): \[L^{\dagger}=L+\delta L=L+\begin{pmatrix}0&0\\ 0&-2i\delta(\rho)\frac{\ell(\rho)-1}{\ell(\rho)-2}\end{pmatrix}\,.\] (A.15)

## Appendix B Discretization in the Chebyshev grid

In this appendix we give some more details regarding the discretization process. This discussion is based on appendix C of [30].

### Collocation Method

Working in the Chebyshev grid (5.1) with \(N+1\) points is equivalent to approximating a function \(f(\rho)\) by a power series \[f(\rho)\approx\sum_{n=0}^{N}c_{n}T_{n}(2\rho-1)\,,\] (B.1) where \(T_{n}\) are the Chebyshev polynomials \[T_{n}(x)=\cos\left(n\arccos\left(x\right)\right)\,.\] (B.2) However, in the numerical method we do not work with the coefficients \(\{c_{n}\}\) but instead with the values of \(f(\rho)\) at the points of the grid (5.1).
In order to connect both descriptions, we note that in our grid we have the following orthogonality relation \[\int_{0}^{1}\frac{d\rho}{\sqrt{1-(2\rho-1)^{2}}}\ T_{m}(2\rho-1)T_{n}(2\rho-1)=\pi\int_{0}^{1}d\theta\ T_{m}\left(\cos(\theta\pi)\right)T_{n}\left(\cos(\theta\pi)\right)\approx\frac{\pi}{N}\sum_{j=0}^{N}\frac{T_{m}\left(2\rho_{j}-1\right)T_{n}\left(2\rho_{j}-1\right)}{1+\delta_{j0}+\delta_{jN}}\approx\frac{\pi}{2}\delta_{mn}\left(1+\delta_{n0}+\delta_{nN}\right)\,,\] (B.3) which allows us to approximate the coefficients \(\{c_{n}\}\) as \[c_{n}=\frac{2}{1+\delta_{n0}+\delta_{nN}}\int_{0}^{1}d\theta\ f\left(\frac{1}{2}-\frac{1}{2}\cos\left(\theta\pi\right)\right)T_{n}\left(\cos(\theta\pi)\right)\approx\frac{2/N}{1+\delta_{n0}+\delta_{nN}}\sum_{j=0}^{N}\frac{f\left(\rho_{j}\right)T_{n}\left(2\rho_{j}-1\right)}{1+\delta_{j0}+\delta_{jN}}\,.\] (B.4)

### Construction of the \(G_{E}\) matrix

Knowing how to express the functions in the grid as a sum of Chebyshev polynomials, we can now turn to discussing how to discretize a generic integral \[\int_{0}^{1}d\rho\ \overline{f}(\rho)g(\rho)\,.\] (B.5) In order to do so, we first note that for the Chebyshev polynomials we have \[\int_{0}^{1}d\rho\ T_{n}(2\rho-1)=\begin{cases}0&\quad\text{$n$ odd,}\\ \frac{1}{1-n^{2}}&\quad\text{$n$ even.}\end{cases}\] (B.6) Then, using expression (B.4), we can approximate the integral (B.5) by \[\int_{0}^{1}d\rho\ \overline{f}(\rho)g(\rho)\approx\sum_{\begin{subarray}{c}n=0\\ n\ \text{even}\end{subarray}}^{N}\frac{1}{1-n^{2}}\frac{2/N}{1+\delta_{n0}+\delta_{nN}}\sum_{j=0}^{N}\frac{\overline{f}\left(\rho_{j}\right)g\left(\rho_{j}\right)T_{n}\left(2\rho_{j}-1\right)}{1+\delta_{j0}+\delta_{jN}}=\sum_{i=0}^{N}\sum_{j=0}^{N}\mu_{ij}\overline{f}\left(\rho_{i}\right)g\left(\rho_{j}\right)\,,\] (B.7) where we have defined the weight matrix \(\mu\) as \[\mu_{ij}=\delta_{ij}\sum_{\begin{subarray}{c}n=0\\ n\ \text{even}\end{subarray}}^{N}\frac{1}{1-n^{2}}\,\frac{2/N}{1+\delta_{n0}+\delta_{nN}}\,\frac{T_{n}\left(2\rho_{j}-1\right)}{1+\delta_{j0}+\delta_{jN}}\,.\] (B.8)
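A sketch of the quadrature defined by (B.6)-(B.8), together with a simple consistency check (the function name is ours):

```python
import numpy as np

def quad_weights(N):
    """Quadrature weights, i.e. the diagonal of the matrix mu of eq. (B.8),
    on the Chebyshev grid (5.1)."""
    j = np.arange(N + 1)
    rho = 0.5 * (1.0 - np.cos(j * np.pi / N))
    edge = 1.0 + (j == 0) + (j == N)            # the (1 + delta_j0 + delta_jN) factor
    w = np.zeros(N + 1)
    for n in range(0, N + 1, 2):                # only even n contribute, cf. (B.6)
        cn = (2.0 / N) / (1.0 + (n == 0) + (n == N))
        Tn = np.cos(n * np.arccos(np.clip(2.0 * rho - 1.0, -1.0, 1.0)))
        w += cn / (1.0 - n ** 2) * Tn / edge
    return rho, w

# consistency check of the quadrature: the integral of rho^2 over [0, 1] is 1/3
rho, w = quad_weights(40)
print(np.dot(w, rho ** 2))                      # ~ 0.333333...
```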
With all this, we can now easily construct the matrix \(G_{E}\). Firstly, we factorize the operator \(\mathcal{G}\) introduced in equation (4.6) into a part acting on the left side and another acting on the right side: \[\mathcal{G}\left[\overleftarrow{\partial}_{\rho},\overrightarrow{\partial}_{\rho};\mathbf{k},\rho\right]=\begin{pmatrix}\tilde{\mathcal{H}}_{11}\left[\overleftarrow{\partial}_{\rho};\mathbf{k},\rho\right]&\tilde{\mathcal{H}}_{12}\left[\overleftarrow{\partial}_{\rho};\mathbf{k},\rho\right]\\ \tilde{\mathcal{H}}_{21}\left[\mathbf{k},\rho\right]&\tilde{\mathcal{H}}_{22}\left[\mathbf{k},\rho\right]\end{pmatrix}\begin{pmatrix}\mathcal{H}_{11}\left[\overrightarrow{\partial}_{\rho};\mathbf{k},\rho\right]&\mathcal{H}_{12}\left[\mathbf{k},\rho\right]\\ \mathcal{H}_{21}\left[\overrightarrow{\partial}_{\rho};\mathbf{k},\rho\right]&\mathcal{H}_{22}\left[\mathbf{k},\rho\right]\end{pmatrix}\,.\] (B.9) Secondly, we discretize the differential operators \(\tilde{\mathcal{H}}_{ab}\) and \(\mathcal{H}_{ab}\), obtaining the \((N+1)\times(N+1)\) matrices \(\tilde{H}_{ab}\) and \(H_{ab}\). And finally, we construct \(G_{E}\) as \[G_{E}=\begin{pmatrix}\tilde{H}_{11}&\tilde{H}_{12}\\ \tilde{H}_{21}&\tilde{H}_{22}\end{pmatrix}\begin{pmatrix}\mu&0\\ 0&\mu\end{pmatrix}\begin{pmatrix}H_{11}&H_{12}\\ H_{21}&H_{22}\end{pmatrix}\,.\] (B.10) Note that, as we wanted, with this definition we first act with \(\mathcal{G}\) on the vectors and then integrate (introducing the \(\mu\) matrix).

### Interpolation Between Grids

When integrating as discussed in the previous subsection, we lose information about the original functions. This can easily be seen from the fact that, despite needing \(2(N+1)\) coefficients to describe the approximants of the functions \(f(\rho)\) and \(g(\rho)\), we have constructed the integral (B.7) on a grid of \(N+1\) points. To minimize this effect on \(G_{E}\), we proceed to construct it on a grid with \(M+1\) points and then interpolate back to the original grid. To achieve this we need to construct the interpolation matrix \(\tilde{I}\) which connects both grids. Denoting by \(\varrho\) the points of the new grid, \(\tilde{I}\) is defined through the following equation \[\sum_{j=0}^{N}\tilde{I}_{ij}f(\rho_{j})=f(\varrho_{i})=\sum_{n=0}^{N}\frac{2/N}{1+\delta_{n0}+\delta_{nN}}\sum_{j=0}^{N}\frac{f\left(\rho_{j}\right)T_{n}\left(2\rho_{j}-1\right)}{1+\delta_{j0}+\delta_{jN}}T_{n}(2\varrho_{i}-1)\,,\] (B.11) concluding that \[\tilde{I}_{ij}=\sum_{n=0}^{N}\frac{2/N}{1+\delta_{n0}+\delta_{nN}}\frac{T_{n}(2\varrho_{i}-1)T_{n}\left(2\rho_{j}-1\right)}{1+\delta_{j0}+\delta_{jN}}\,.\] (B.12) Then, denoting by \(G_{E}^{(M)}\) the \(G_{E}\) matrix constructed on the grid with \(M+1\) points, our final expression for the \(G_{E}\) matrix on the original grid with \(N+1\) points is: \[G_{E}=\begin{pmatrix}\tilde{I}&0\\ 0&\tilde{I}\end{pmatrix}^{t}\ G_{E}^{(M)}\begin{pmatrix}\tilde{I}&0\\ 0&\tilde{I}\end{pmatrix}\,.\] (B.13)
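A sketch of the interpolation matrix (B.12); per (B.13), the norm matrix built on the fine grid is sandwiched between \(\tilde{I}^{t}\) and \(\tilde{I}\), block by block, to return to the working grid (the function name is ours):

```python
import numpy as np

def interp_matrix(rho, varrho):
    """Interpolation matrix of eq. (B.12): maps values on the Chebyshev grid
    'rho' (N+1 points) to values at the target points 'varrho'."""
    N = len(rho) - 1
    j = np.arange(N + 1)
    edge = 1.0 + (j == 0) + (j == N)
    Itilde = np.zeros((len(varrho), N + 1))
    for n in range(N + 1):
        cn = (2.0 / N) / (1.0 + (n == 0) + (n == N))
        Tn_src = np.cos(n * np.arccos(np.clip(2 * rho - 1, -1, 1)))
        Tn_tgt = np.cos(n * np.arccos(np.clip(2 * varrho - 1, -1, 1)))
        Itilde += cn * np.outer(Tn_tgt, Tn_src / edge)
    return Itilde
# Per (B.13): GE = Itilde.T @ GE_fine @ Itilde, with 'varrho' the fine grid
# of M+1 points on which GE_fine was constructed.
```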
## Appendix C Pseudospectra in the \(L^{2}\)-norm

In this appendix we present results for the pseudospectra of the same models as in the main text, now in the \(L^{2}\)-norm \[\|u\|_{L^{2}}=\int\,d\rho\ u^{*}(\ell,\mathbf{k},\rho)\,u(\ell,\mathbf{k},\rho)\,.\] (C.1) We shall first consider the real scalar in \(\mathrm{SAdS}_{4+1}\) introduced in section 4.1. In figures 16 and 17 we present the condition numbers and the full and selective pseudospectra in the \(L^{2}\)-norm. We still observe an increasing instability the further away the QNF is from the real axis. The pseudospectra are qualitatively very similar to those observed in the energy norm. However, the overall instability is larger in the \(L^{2}\)-norm (in the full pseudospectrum we observe larger regions for the same value of \(\varepsilon\)). Remarkably, the spectrum is significantly more stable under local potential perturbations. Next we move on to the transverse gauge field of section 4.2 and display the results for its pseudospectrum in the \(L^{2}\)-norm. In figure 18 we zoom in on the first QNF, while in figure 19 we show the pseudospectrum down to the position of the fourth QNF. We again observe open contour lines indicating instability under generic perturbations. As for the scalar field, the qualitative shape of the pseudospectra is very similar to what we found in the energy norm. However, we once again find that in the \(L^{2}\)-norm the spectrum is more unstable under generic perturbations and more stable under local potential perturbations. This becomes clear upon comparing the pseudospectra in figure 19 to those in figure 13 corresponding to the energy norm. We end this appendix by pointing out that, while qualitatively similar, the pseudospectra in the energy and \(L^{2}\)-norm are quantitatively different. Indeed, although the shape of the contour map for the same model is very similar in both norms, the quantitative value of the pseudospectrum, as illustrated by the color code, varies markedly from the energy to the \(L^{2}\)-norm. This observation further stresses the importance of properly defining a physically motivated norm: the discrepancy in the definition of size of the potential perturbations between the \(L^{2}\) and energy norms results in quantitatively different phenomenology.

Figure 16: Close-up of the scalar pseudospectrum in the \(L^{2}\)-norm around the first QNF for different values of \(\mathfrak{q}\) and \(m^{2}l^{2}\). The red dot corresponds to the QNF, the white lines represent the boundaries of various full \(\varepsilon\)-pseudospectra, and the dashed blue circle symbolizes a circle with a radius of \(10^{-1}\) centered on the QNF. The heat map corresponds to the logarithm in base 10 of the inverse of the resolvent. Here the selective pseudospectrum computed with random local potential perturbations of size \(10^{-1}\) is denoted by blue dots hidden behind the QNF.

Figure 17: Scalar pseudospectrum in the \(L^{2}\)-norm for different values of \(\mathfrak{q}\) and \(m^{2}l^{2}\). In the lower panels, we present selective and full pseudospectra. The red dots represent the QNFs, and the white lines denote the boundaries of different full \(\varepsilon\)-pseudospectra. The heat map corresponds to the logarithm in base 10 of the inverse of the resolvent, while the blue, cyan, green, and yellow dots indicate different selective \(\varepsilon\)-pseudospectra computed with random local potential perturbations of size \(10^{-1}\), \(10^{-3}\), \(10^{-5}\), and \(10^{-7}\), respectively. In the upper panels, we represent the condition numbers.

Figure 18: Close-up of the transverse gauge field pseudospectrum in the \(L^{2}\)-norm around the first QNF for different values of \(\mathfrak{q}\). The red dot corresponds to the QNF, the white lines represent the boundaries of various full \(\varepsilon\)-pseudospectra, and the dashed blue circle symbolizes a circle with a radius of \(10^{-1}\) centered on the QNF. The heat map corresponds to the logarithm in base 10 of the inverse of the resolvent. Here the selective pseudospectrum computed with random local potential perturbations of size \(10^{-1}\) is hidden by the QNF.

Figure 19: Transverse gauge field pseudospectrum in the \(L^{2}\)-norm for different values of \(\mathfrak{q}\). In the lower panels, we present selective and full pseudospectra. The red dots represent the QNFs, and the white lines denote the boundaries of different full \(\varepsilon\)-pseudospectra. The heat map corresponds to the logarithm in base 10 of the inverse of the resolvent, while the blue, cyan, green, and yellow dots indicate different selective \(\varepsilon\)-pseudospectra computed with random local potential perturbations of size \(10^{-1}\), \(10^{-3}\), \(10^{-5}\), and \(10^{-7}\), respectively. In the upper panels, we represent the condition numbers.
## Appendix D Numerical Values of the QNFs

In this appendix we provide the numerical values of the first 10 quasinormal frequencies for the models discussed in the main text. For purposes of presentation we limit the precision to 15 significant figures.

\begin{table} \begin{tabular}{|c|c|c|} \hline \(n\) & \(\mathrm{Re}(\varpi_{n})\) & \(\mathrm{Im}(\varpi_{n})\) \\ \hline \hline 1 & \(\pm 2.19881456585250\) & \(-1.75953462713300\) \\ \hline 2 & \(\pm 4.21189720328773\) & \(-3.77488823578666\) \\ \hline 3 & \(\pm 6.21554314901884\) & \(-5.77725701316514\) \\ \hline 4 & \(\pm 8.21716723825394\) & \(-7.77808021954300\) \\ \hline 5 & \(\pm 10.2180612360290\) & \(-9.77847388400658\) \\ \hline 6 & \(\pm 12.2186177048135\) & \(-11.7786974714808\) \\ \hline 7 & \(\pm 14.2189931785234\) & \(-13.7788389290514\) \\ \hline 8 & \(\pm 16.2192614139935\) & \(-15.7789352848867\) \\ \hline 9 & \(\pm 18.2194613734973\) & \(-17.7790045343160\) \\ \hline 10 & \(\pm 20.2196154334378\) & \(-19.7790563669305\) \\ \hline \end{tabular} \end{table} Table 3: Neutral scalar QNFs for \(m^{2}l^{2}=-3\) and \(\mathfrak{q}=0\).

\begin{table} \begin{tabular}{|c|c|c|} \hline \(n\) & \(\mathrm{Re}(\varpi_{n})\) & \(\mathrm{Im}(\varpi_{n})\) \\ \hline \hline 1 & \(\pm 10.6370087422563\) & \(-1.06356241400480\) \\ \hline 2 & \(\pm 11.6542293745917\) & \(-2.66476173979828\) \\ \hline 3 & \(\pm 12.8816366495948\) & \(-4.46855950305745\) \\ \hline 4 & \(\pm 14.2652641255372\) & \(-6.38140408881256\) \\ \hline 5 & \(\pm 15.7667370212845\) & \(-8.35383981758053\) \\ \hline 6 & \(\pm 17.3576290472434\) & \(-10.3587501137841\) \\ \hline 7 & \(\pm 19.0169202769993\) & \(-12.3810472715686\) \\ \hline 8 & \(\pm 20.7291315800900\) & \(-14.4122658126651\) \\ \hline 9 & \(\pm 22.4828389281553\) & \(-16.4476266560906\) \\ \hline 10 & \(\pm 24.2695453187766\) & \(-18.4844261072115\) \\ \hline \end{tabular} \end{table} Table 4: Neutral scalar QNFs for \(m^{2}l^{2}=-3\) and \(\mathfrak{q}=10\).
\begin{table} \begin{tabular}{|c|c|c|} \hline \(n\) & \(\mathrm{Re}(\varpi_{n})\) & \(\mathrm{Im}(\varpi_{n})\) \\ \hline \hline 1 & \(\pm 2.000000000000000\) & \(-2.00000000000000\) \\ \hline 2 & \(\pm 4.0000000000000000\) & \(-4.000000000000000\) \\ \hline 3 & \(\pm 6.000000000000000\) & \(-6.00000000000000\) \\ \hline 4 & \(\pm 8.000000000000000\) & \(-8.00000000000000\) \\ \hline 5 & \(\pm 10.00000000000000\) & \(-10.0000000000000\) \\ \hline 6 & \(\pm 12.00000000000000\) & \(-12.0000000000000\) \\ \hline 7 & \(\pm 14.00000000000000\) & \(-14.0000000000000\) \\ \hline 8 & \(\pm 16.00000000000000\) & \(-16.0000000000000\) \\ \hline 9 & \(\pm 18.00000000000000\) & \(-18.0000000000000\) \\ \hline 10 & \(\pm 20.0000000000000\) & \(-20.0000000000000\) \\ \hline \end{tabular} \end{table} Table 5: Transverse gauge field QNFs for \(\mathfrak{q}=0\). Pseudospectrum Algorithm In this appendix we give a more explicit description of the algorithm employed when computing the full pseudospectrum and condition numbers using theorem 2.7. For the sake of clarity, we do not present here the code but instead the following illustrative flowchart: \begin{tabular}{|c|c|} \hline \hline \(\mathbf{\hat{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{ }}}}}}}}}}}}}\) } & \(\mathbf{\hat{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ }}}}}}}}}}}}}}\) } } \\ \hline \(\mathbf{\hat{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ }}}}}}}}}}}}}}}}\) } \\ \(\mathbf{\hat{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{{ }}}}}}}}}}}}}}}}}\) } \\ \(\mathbf{\hat{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{{ }}}}}}}}}}}}}}}}}}}\) \\ \(\mathbf{\hat{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{{ }}}}}}}}}}}}}}}}}}}\) } \\ \(\mathbf{\hat{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ }}}}}}}}}}}}}}}}}}}}\) \\ \(\mathbf{\hat{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{ }}}}}}}}}}}}}}}}}}}}\) \\ \(\mathbf{\hat{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ }}}}}}}}}}}}}}}}}}}}\) \\ \(\mathbf{\hat{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ }}}}}}}}}}}}}}}}}}}}\) \\ 
\(\mathbf{\hat{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ }}}}}}}}}}}}}}}}}}}}\) \\ \(\mathbf{\hat{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ }}}}}}}}}}}}}}}}}}}}\) \\ \(\mathbf{\hat{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ }}}}}}}}}}}}}}}}}}}}\) \\ \(\mathbf{\hat{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ }}}}}}}}}}}}}}}}}}}}\) \\ \(\mathbf{\hat{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{ }}}}}}}}}}}}}}}}}}}}\) \\ \(\mathbf{\hat{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{}}}}}}}}}}}}}}}}}}}}}\) \\ \(\mathbf{\hat{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ }}}}}}}}}}}}}}}}}}}\) \\ \(\mathbf{\hat{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ }}}}}}}}}}}}}}}}}}}\) \\ \(\mathbf{\hat{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{}}}}}}}}}}}}}}}}}}}\) \\ \(\mathbf{\hat{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\}}}}}}}}}}}}}}}}}}}\) \\ \(\mathbf{\hat{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\}}}}}}}}}}}}}}}}}}\) \\ \(\mathbf{\hat{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\}}}}}}}}}}}}}}}}}\) \\ \(\mathbf{\hat{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{}}}}}}}}}}}}}}}}\) \\ \(\mathbf{\hat{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{}}}}}}}}}}}}}}}\) \\ \(\mathbf{\hat{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{}}}}}}}}}}}}}}\) \\ \(\mathbf{\hat{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{}}}}}}}}}}}}}}\) \\ 
\(\mathbf{\hat{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbfmathbfmathbfmathbfmathbfmathbf{\mathbfmathbfmathbfmathbf{\mathbfmathbf{\mathbfmathbfmathbf{\mathbfmathbfmathbfmathbf{ \mathbf{ \mathbf{ }}}}}}}}}\) \\ \(\mathbf{\hat{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbfmathbfmathbfmathbfmathbfmathbfmathbfmathbfmathbf{\mathbfmathbf{\mathbfmathbfmathbf{\mathbfmathbfmathbf{ \mathbfmathbf{\mathbfmathbfmathbf{ }}}}}}}\)} \\ \(\mathbf{\hat{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbfmathbfmathbfmathbfmathbfmathbfmathbfmathbfmathbfmathbf{\mathbfmathbfmathbf{\mathbfmathbfmathbf{\mathbfmathbfmathbf{\mathbfmathbfmathbfmathbfmathbf{\mathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbf{\mathbfmathbfmathbfmathbfmathbfmathbf{ \mathbfmathbfmathbfmathbfmathbfmathbfmathbf{ \mathbf
2305.11479
A new technique to measure noise parameters for global 21-cm experiments
Radiometer experiments to detect 21-cm Hydrogen line emission from the Cosmic Dawn and Epoch of Reionization rely upon precise absolute calibration. During calibration, noise generated by amplifiers within the radiometer receiver must be accounted for; however, it is difficult to measure as the noise power varies with source impedance. In this letter, we introduce a convenient method to measure the noise parameters of a receiver system, which is practical for low-frequency receivers used in global 21-cm experiments.
Danny C. Price, Cheuk-Yu Edward Tong, Adrian T. Sutinjo, Nipanjana Patra, Lincoln J. Greenhill
2023-05-19T07:14:00Z
http://arxiv.org/abs/2305.11479v1
# A new technique to measure noise parameters for global 21-cm experiments ###### Abstract Radiometer experiments to detect 21-cm Hydrogen line emission from the Cosmic Dawn and Epoch of Reionization rely upon precise absolute calibration. During calibration, noise generated by amplifiers within the radiometer receiver must be accounted for; however, it is difficult to measure as the noise power varies with source impedance. In this letter, we introduce a convenient method to measure the noise parameters of a receiver system, which is practical for low-frequency receivers used in global 21-cm experiments. ## 1 Introduction Numerous experiments (e.g. [1, 2, 3, 4, 5]) seek to detect the global 21-cm signal from the Cosmic Dawn with radiometers across 30-250 MHz. Such a detection requires an exquisitely calibrated radiometer and a well characterized antenna. Any spectral features introduced by the radiometer may obfuscate, or be mistaken for, the expected \(\sim\)100mK amplitude absorption feature. The EDGES experiment reported the presence of an apparent \(\sim\)500mK absorption feature in their calibrated data [1]; however, there are concerns that this is partly or fully due to unmodelled systematics that have introduced a spectral feature [6, 7, 8]. In tension with the EDGES result, the SARAS-3 experiment has recently reported a significant non-detection of the absorption feature in their calibrated spectra [3]. Gaining a better understanding and characterization of systematics within global 21-cm experiments will be critical for breaking this tension. One potential source of unwanted spectral features deserving more attention is the self-noise introduced by amplifiers within the radiometer. As the noise performance of a radiometer depends upon the impedance of the antenna it is connected to, a radiometer's noise figure will differ when deployed in the field as compared to laboratory measurements with a 50\(\Omega\) noise source. This difference is a source of error that must be accounted for. To date, most global experiments have used a method involving a long coaxial cable to determine the magnitude of the difference, following the approach employed in EDGES [9]. This method is primarily based upon Hu & Weinreb [10], but using the "noise wave" formulation of Meys [11]. Here, we introduce a new technique to characterize noise performance for global 21-cm experiments, based on the related "noise parameter" formulation commonly used in the microwave engineering community [12]. Our technique, based on the approach given [13], only requires a standard Vector Network Analyzer (VNA) and calibration kit, a short cable, and a calibrated noise source. We refer the reader to [13] for discussion of the mathematical and theoretical background. ## 2 Noise parameters The noise performance of a device-under-test (DUT) is commonly characterized by noise parameters: four real-valued terms from which noise characteristics can be derived for any input impedance. Following [13], the noise temperature \(T\) of a 2-port DUT connected to a source with reflection coefficient \(\Gamma_{s}\) can be expressed as: \[T(\Gamma_{s})=T_{\text{min}}+T_{0}N\frac{\left|\Gamma_{s}-\Gamma_{\text{opt}} \right|^{2}}{\left(1-\left|\Gamma_{s}\right|^{2}\right)\left|1+\Gamma_{\text {opt}}\right|^{2}} \tag{1}\] where the noise parameters are: * \(T_{\text{min}}\) is the minimum noise temperature. * \(\Gamma_{\text{opt}}\) is the optimum reflection coefficient. * \(N\) is the minimum noise ratio. 
Note that \(T_{0}=290\,\text{K}\), \(Z_{0}=50\,\Omega\), and \(\Gamma_{s}\) is complex valued. We may treat the magnitude \(\gamma_{\text{opt}}\) and phase \(\theta_{\text{opt}}\) of \(\Gamma_{s}\) as two real-valued noise parameters. The noise parameters of a DUT may be determined by making at least 4 measurements of \(T(\Gamma_{s})\) when different source impedances are connected. Following Lane's method [14], after the singularity removal detailed in [15], the four (real-valued) noise parameters can be found by casting the problem as a matrix equation: \[A\mathbf{x} =\mathbf{t}^{\prime} \tag{2}\] \[\mathbf{x} =[a,b,c,d]^{T}\] (3) \[\mathbf{t}^{\prime} =(1-\gamma_{s_{i}}^{2})\mathbf{t} \tag{4}\] Vector \(\mathbf{t}^{\prime}\) is formed from receiver measurements (entries in \(\mathbf{t}\)), matrix \(A\) is formed from \(\Gamma_{s}\) measurements, and we wish to find (4\(\times\)1) noise parameter vector \(\mathbf{x}\). The rows of matrix \(A\) depend upon the formulation used; here, we will follow the formulation detailed in [10, 15]: \[A_{i}=\left[1-\gamma_{i}^{2},1,\gamma_{t}cos\theta_{l},\gamma_{t}sin\theta_{l} \right]. \tag{5}\] and noise parameters are related to \(\mathbf{x}=[a,b,c,d]^{T}\) by: \[T_{\mathrm{min}} =a+\frac{b+\Delta}{2} \tag{6}\] \[N =\frac{\Delta}{4T_{0}}\] (7) \[\gamma_{\mathrm{opt}} =\sqrt{\frac{b-\Delta}{b+\Delta}}\] (8) \[\theta_{\mathrm{opt}} =tan^{-1}\left(\frac{-d}{-c}\right), \tag{9}\] where \(\Delta=\sqrt{b^{2}-c^{2}-d^{2}}\). Note that the minus signs in Equation 9 ensure the correct quadrant is returned. ### Approaches to source impedance selection Long cable approach:To date, the main approach used in 21-cm experiments is to connect a long length of coaxial cable to the receiver so that the phase of a reflected noise waves wrap rapidly with frequency [9, 10]. For 21-cm experiments, "long" lengths of 3-25 m have been used [9, 16]. The phase of an open cable's reflection coefficient is approximately \[\theta(f)=-4\pi l/v_{c}f, \tag{10}\] so as the cable length \(l\) increases, the phase will wrap faster, corresponding to circles around the edge of the Smith chart. If wraps are sufficiently fast--that is, faster than frequency variations of noise parameters--then well-spaced loci on the Smith chart can be selected to determine the underlying noise parameters. With this approach, a moving window of points with a least-squares fit can be used in lieu of the matrix method [10]. For the long-cable approach, higher frequency resolution requires longer cables. If the cable is too short, rapidly varying spectral structure will be missed. A length matched pair of open and shorted cables can be used so that the spacing on the Smith chart is constant across frequency. OSLC approach:As shown in [13], noise parameters can be determined using the "OSLC" approach: an open (\(\Gamma_{\mathrm{op}}\approx 1\)), short (\(\Gamma_{\mathrm{sh}}\approx-1\)) and load (\(\Gamma_{\mathrm{ld}}\approx 0\)) from a VNA calibration kit, and a shorted or open 1/8-wavelength length of cable (\(\Gamma_{\mathrm{cbl}}\approx\pm 1j\)). The OSLC approach can be used over a frequency range 0.2-1.8 \(f_{0}\), where \(f_{0}\) is the frequency for which the cable is 1/8 of a wavelength for the transmission mode. The loci of the OSLC points on a Smith chart form a "well spread impedance pattern", which has been shown to minimize measurement uncertainties [17]. For 21-cm experiments, a 1/8-wavelength cable at a frequency \(f_{0}=100\) MHz (i.e. 
\(\lambda_{0}=3\) m) would be suitable to cover 20-180 MHz. The cable would have a physical length \(l=\lambda_{0}v_{c}/8\), where \(v_{c}\) is the velocity factor of the cable; for common dielectrics, \(0.6<v_{c}<0.9\), so the required cable length is between 22.50-33.75 cm. Comparison of approaches:The OSLC approach uses a far shorter cable, and the frequency resolution of derived noise parameters depends only upon the receiver's channel bandwidth. Another advantage is that VNA calibration kits include physical models from which \(\Gamma_{s}\) can be derived with higher precision than VNA measurements (see [13, 18]). In contrast, the long-cable approach requires fewer source impedances to be connected, and thus fewer measurements, but much longer cables are needed if high frequency resolution solutions are required. ## 3 Applying the OSLC method Here, we present a procedure to measure the noise parameters of a radiometric receiver using the OSLC approach. This procedure is similar to [13], but has been modified for the case where the receiver itself is the DUT1. A summary of terms and measurements used in this section is provided in Table 1. Footnote 1: The main difference in derivation is that _transducer_ power gain must be used if the receiver is the DUT instead of _available_ power gain in eqn 35 of [13], if the DUT is connected in cascade to the receiver; see Chapter 11 of [19]. The OSLC method requires a calibrated noise source to generate "hot" (noise diode on) and "cold" (noise diode off) temperature references (\(T_{\mathrm{hot}}\) and \(T_{\mathrm{cold}}\)). The reflection coefficient must be measured for both states, and should satisfy \(\Gamma_{\mathrm{hot}}\approx\Gamma_{\mathrm{cold}}\). To extract noise parameters of a radiometer, we need to form the source reflection coefficient matrix \(A\), and measurement vector \(\mathbf{t^{\prime}}\). To form \(A\), VNA measurements (or physical models) of the four source impedances (open, short, load cable) are plugged into equation 5. The measurement vector \(\mathbf{t^{\prime}}\), requires that four power measurements are made with the radiometer: one for each source impedance. The power spectral density (PSD) measured by the radiometer when connected to a source with reflection coefficient \(\Gamma_{s}\) is given by \[P_{s}(f,\Gamma_{s})=D_{\mathrm{rx}}k_{B}\Delta fG_{\mathrm{rx}}(f,\Gamma_{s}) \left[T_{s}(f)+T_{\mathrm{n}}(f,\Gamma_{s})\right] \tag{11}\] where \(k_{B}\) is the Boltzmann constant and \(\Delta f\) is the noise equivalent bandwidth per channel. \(T_{n}\) is the receiver noise temperature when connected to \(\Gamma_{s}\). \(T_{s}\) is the source noise temperature, equal to the ambient temperature for passive components. \(G_{rx}(f,\Gamma_{s})\) is the transducer gain of the receiver; the factor \(D_{rx}\) represents to digital gain factors within the receiver (assumed to be linear). To calibrate, we define a scale factor \(\alpha\): \[\alpha=\frac{T_{\mathrm{hot}}-T_{\mathrm{cold}}}{P_{\mathrm{hot}}-P_{\mathrm{ cold}}}=\frac{1}{D_{\mathrm{rx}}k_{B}\Delta fG_{\mathrm{rx}}(f,\Gamma_{s})}, \tag{12}\] and similarly we define a mismatch factor \[M(\Gamma_{s})=\frac{G_{\mathrm{rx}}(f,\Gamma_{s})}{G_{\mathrm{rx}}(f,\Gamma_{ \mathrm{ns}})}=\left(1-\left|\Gamma_{\mathrm{ns}}\right|^{2}\right)\frac{\left| 1-\Gamma_{s}\Gamma_{\mathrm{rx}}\right|^{2}}{\left|1-\Gamma_{\mathrm{ns}} \Gamma_{\mathrm{rx}}\right|^{2}}. 
\tag{13}\] By doing so, we may form \(\mathbf{t}^{\prime}\) by: \[\mathbf{t}^{\prime}_{i}=\left(\alpha P_{s_{i}}M_{s_{i}}-\left(1-\left|\Gamma_ {s_{i}}\right|^{2}\right)T_{s_{i}}\right), \tag{14}\] The noise parameter vector may then be recovered via \[\mathbf{x}=A^{-1}\mathbf{t}, \tag{15}\] after which noise parameters \(T_{\mathrm{min}}\), \(R_{N}\), \(\gamma_{\mathrm{opt}}\) and \(\theta_{\mathrm{opt}}\) are recovered via equations 6-9. ## 4 Application to Hypereion We used the OSLC approach to measure the noise parameters of the prototype receiver for the HYPERION system [5]. The HYPREION system implements a two-channel, cross-correlation spectrometer; for simplicity, we only consider a single autocorrelation channel here. All required VNA and power spectra measurements (Table 1), were taken in a laboratory setting. HYPREION consists of a "frontend module", which performs initial signal conditioning, connected to the "backend module" and digital signal processor by 100 m of coaxial cable. When deployed, the frontend module will be connected to the antenna and located in the field, and the backend module and digital system will be located in an electromagnetically shielded room. We connected an open, short, load (from an Agilent 85052D calibration kit) and shorted coaxial cable to the HYPREION frontend module, and recorded power spectra in each state, across 30-120 MHz. A Keysight HP346B calibrated noise source was used to provide hot and cold reference states, and a Fieldfox N9915A VNA was used to measure reflection coefficients. Power spectra were generated using the HYPREION digital receiver, which is based on a 14-bit Signatek PX1500-2 digitizer. Extracted noise parameters are shown in Figure 1. ## 5 Discussion and conclusions Here, we have introduced a noise parameter measurement approach that can be applied to radiometer receivers used in 21-cm experiments. The approach presented here is a modified version of the approach detailed in [13], which provides extended details and discussion \begin{table} \begin{tabular}{c l} \hline \hline \(\Gamma_{\mathrm{rx}}\) & Ref. coefficient of radiometer receiver \\ \(\Gamma_{\mathrm{hot}}\) & Ref. coefficient of cal noise source (on). \\ \(\Gamma_{\mathrm{cold}}\) & Ref. coefficient of cal noise source (off). \\ \(\Gamma_{\mathrm{ns}}\) & Computed via \(\Gamma_{\mathrm{ns}}=(\Gamma_{\mathrm{on}}+\Gamma_{\mathrm{off}})/2\) \\ \(\Gamma_{\mathrm{op}}\) & Ref. coefficient of open standard \\ \(\Gamma_{\mathrm{sh}}\) & Ref. coefficient of short standard \\ \(\Gamma_{\mathrm{ld}}\) & Ref. coefficient of broadband load standard \\ \(\Gamma_{\mathrm{cbl}}\) & Ref. coefficient of 1/8-wavelength cable \\ \hline \(P_{\mathrm{hot}}\) & Recv. PSD when cal noise source (on) connected \\ \(P_{\mathrm{cold}}\) & Recv. PSD when cal noise source (off) connected \\ \(P_{\mathrm{op}}\) & Recv. PSD when open standard is connected \\ \(P_{\mathrm{sh}}\) & Recv. PSD when short standard is connected \\ \(P_{\mathrm{d}}\) & Recv. PSD when load standard is connected \\ \(P_{\mathrm{cbl}}\) & Recv. PSD when \(\lambda\)/8 cable is connected \\ \hline \(T_{\mathrm{cold}}\) & Ambient temperature (for ‘cold’ noise source (off)) \\ \(T_{\mathrm{hot}}\) & Noise source effective temperature (i.e. ENR) \\ \hline \end{tabular} \end{table} Table 1: Summary of reflection coefficients and power spectral density (PSD) measurements required for the OSLC method. Figure 1: Measured noise parameters for the HYPERION prototype system using the OSLC method. 
Points between 88–108 MHz have been flagged due to strong FM-band radio interference present in the data. of noise parameter measurement techniques. Many global 21-cm experiments switch between the antenna and a set of calibration references, and have existing methods to convert measured data into temperature (K). These internal references may be used in lieu of external open/short/load, as long as their reflection coefficients can be accurately modelled. Similarly, an experiment's existing calibration routines may be used in lieu of the procedure outlined in Section 3. The OSLC method is suitable for in-situ application (i.e. in the field) using a portable VNA. We suggest that future global 21-cm experiments should consider integrating OSLC source impedances within the radiometer.
2305.19025
Prediction theory in Hilbert Spaces: Operator-valued Szego Theory
In this paper, we extend some classical results of the Szego theory of orthogonal polynomials on the unit circle to the infinite-dimensional case.
Badr Missaoui, Nicholas H. Bingham
2023-05-30T13:30:15Z
http://arxiv.org/abs/2305.19025v1
# Prediction theory in Hilbert Spaces: ###### Abstract In this paper, we extend some classical results of the Szego theory of orthogonal polynomials on the unit circle to the infinite-dimensional case. Keywords: Moment problem, Orthogonal polynomials, Szego theory ###### Contents * 1 Introduction * 2 Kolmogorov Isomorphism Theorem * 3 Operator orthogonal polynomials * 4 The Verblunsky recursion * 5 Christoffel-Darboux Formulae * 6 Bernstein-Szego Approximation * 7 The Szego Theorem * 8 Complements ## 1 Introduction Operator orthogonal polynomials are a type of polynomial whose coefficients are operators rather than scalars. In the scalar case, the general theory of orthogonal polynomials stems from Chebyshev's work of 1858 on continued fractions (though these, and special cases of orthogonal polynomials, go back much earlier; for background here, see e.g. [9]). Modern developments include a series of papers by M. G. Krein in the 1950s and 1960s ([39] - [43]), and more recently by numerous authors; for references, see e.g. ([62],[6]). Operator-valued orthogonal polynomials are a generalization of matrix-valued orthogonal polynomials -- polynomials with matrix coefficients that are orthogonal with respect to a given matrix-valued measure ([7],[18],[49]). To define operator-valued orthogonal polynomials, let \(\mathcal{H}\) be a separable Hilbert space, \(\mathcal{L}(\mathcal{H})\) be the space of bounded linear operators on \(\mathcal{H}\), \(\mu\) be a positive, self-adjoint operator on \(\mathcal{H}\) and \(P_{n}(z)\), \(n=0,1,2,\dots\) a sequence of operator polynomials on the unit circle with degree \(n\). Write \(P_{n}(z)=\sum_{k=0}^{n}a_{k}z^{k}\) where \(a_{k}\in\mathcal{L}(\mathcal{H})\). We say that the polynomials \(P_{n}(z)\) are _orthogonal_ with respect to \(\mu\) if \[\langle P_{n}(z),P_{m}(z)\rangle_{\mu}=0,\quad n\neq m,\] with \(\langle\cdot,\cdot\rangle_{\mu}\) the inner product \(\langle f,g\rangle_{\mu}:=\int_{\mathbb{T}}f(z)^{*}d\mu g(z)\). Here \(\mathbb{T}\) denotes the unit circle in the complex plane and \(d\mu\) an operator-valued measure. The study of operator orthogonal polynomials is closely related to probability theory and the dilation theory of operators. The dilation theory of operators deals with the problem of extending a given operator to a larger space in a way that preserve certain properties. This theory was studied by Sz.-Nagy and Foias [64] and Nikolskii [51], and applied to the study of operator-valued functions by Gorniak and Weron ([31]) and Weron [67]. The results have been generalized to more general settings, such as propagators on semigroups, and their connection to the Aronszajn-Kolmogorov theorem (also known as the Kernel theorem) has been established by Masani [46]. In prediction theory, operator orthogonal polynomials are used to solve prediction problems in infinite-dimensional spaces (Makagon and Salehi [45] and Weron [67]). Mandrekar and Salehi also developed the theory of dilation and shift operators for nonstationary processes, leading to the concept of the _Kolmogorov decomposition_. This decomposition represents a positive definite kernel as the inner product of two sequences of operators, and enables the study of nonstationary processes using dilation and shift operators. In this paper, we use the moments of the orthogonality measure to represent operator orthogonal polynomials on the unit circle as certain _Schur complements_ (see e.g. [12]). 
This approach enables the easy derivation of classical recursion relations and the first operator versions of the kernel polynomials and Christoffel-Darboux formulas. Our approach is the extension to the operator case following the theory of matrix orthogonal polynomial in ([18], [49]). We mostly follow their notation and terminology. The paper is structured as follows: in section 2, we introduce Kolmogorov Isomorphism Theorem, and in section 3, we define the operator analog of matrix-valued polynomials on the unit circle; in section 4 - 6, we give the operator versions of Verblunsky recurrence relations, Christoffel-Darboux formulas, and Bernstein-Szego approximation. In section 7, we prove the operator version of the Szego Limit Theorem using the theory of orthogonal polynomials on the unit circle, and in section 8, we give some complements for future research. ## 2 Kolmogorov Isomorphism Theorem Throughout this paper, \(\mathcal{H}\) is a separable Hilbert space as above; we write \(\mathcal{L}(\mathcal{H},\mathcal{K})^{+}\) for the algebra of all positive bounded linear operators from \(\mathcal{H}\) into \(\mathcal{K}\); we abbreviate this to \(\mathcal{L}(\mathcal{H})^{+}\) when \(\mathcal{H}=\mathcal{K}\). Let \(\mathcal{B}(\mathbb{T})\) be the \(\sigma\)-algebra of Borel sets of \(\mathbb{T}\). A function \(\mu\) is a positive operator-valued measure defined by: **Definition 2.1**.: A positive operator-valued measure is a map \(\mu:\mathcal{B}(\mathbb{T})\to B(\mathcal{H})\) that satisfies the following properties: 1. \(\mu(\oslash)=0\), where \(\oslash\) is the empty set. 2. \(\mu\) is positive: \(\mu(\Delta)\geq 0\) for all \(\Delta\in\mathbb{T}\). 3. \(\mu\) is countably additive: if \(\Delta=\bigcup_{j=1}^{\infty}\Delta_{j}\) with \(\Delta_{i}\) disjoint, then \(\mu(\Delta)=\sum_{j=1}^{\infty}\mu(\Delta_{j})\). Let \(X\) be a stationary process over a locally compact group \(G\), that is, \[C(t,s)=\mathbb{E}(X_{s}^{*}X_{t})=C^{\prime}(t-s).\] By the classical spectral theorem (see, for example, Dunford and Schwartz [13, IV.10]), there exists a unique unitary operator \(U:t\mapsto U_{t}\) on \(G\) such that \[X_{t}=U_{t}X_{0}.\] This operator \(U\) is known as the _shift operator_. By Stone's theorem (see e.g. Stone ([65], VIII.2), Dunford and Schwartz ([25], X.2), Riesz and Nagy ([57] SSSS109, 137), Rudin ([59], Ch. 12), the shift operator has a spectral representation of the form \[U=\int_{\mathbb{T}}e^{i\lambda}E(d\lambda),\] with \(E\) an operator-valued function distribution on \(\mathbb{T}\), known as the _spectral measure_. Thus, \(X\) has the integral representation \[X_{t}=U^{t}X_{0}=\int_{\mathbb{T}}e^{it\lambda}E(d\lambda)X_{0}=\int_{\mathbb{ T}}e^{it\lambda}\xi(d\lambda),\] where \(\xi(\Delta)=E(\Delta)X_{0}\). In particular, if \(G=\mathbb{Z}\), then \[C_{n}=\left\langle X_{m+n},X_{m}\right\rangle=\int_{\mathbb{T}}e^{-in\lambda} \left\langle\xi(d\lambda),\xi(d\lambda)\right\rangle\int_{\mathbb{T}}e^{-in \lambda}\mu(d\lambda),\] where \(\mu\) represents _the spectral measure_ of the process \(X\). So by uniqueness of Fourier transforms, \[\mu(\Delta)=X_{0}^{*}E(\Delta)X_{0}.\] This result shows a strong connection between the spectral measure and dilation theory, and thus the existence of an isometry map, the _shift operator_ for stochastic processes. The isomorphism \[X_{n}\leftrightarrow e^{-in\cdot}I,\ n\in\mathbb{Z},\] with \(I\) the identity operator on \(\mathcal{H}\), is called _Kolomogorov Isomorphism Theorem_. 
This isomorphism translates the Kolomogorov-Wiener prediction problem into an approximation problem in \(L^{2}(\mu,\mathcal{H})\). As in [ManS], the space \(L^{2}(\mu,\mathcal{H})\) of operator-valued functions \(f:\mathbb{T}\to B(\mathcal{H})\) that are square-integrable with respect to the operator-valued measure \(\mu\), defined by \[L^{2}(\mu,\mathcal{H})=\left\{f\ |\ \int_{X}\mathrm{Trace}(f^{\dagger}(x)\mu(dx)f( x))<\infty\right\},\] is a Hilbert space with inner product \[\left\langle\left\langle f,g\right\rangle\right\rangle_{\mathrm{R}}=\int_{X}f^ {\dagger}(x)\mu(dx)g(x),\qquad\left\langle\left\langle f,g\right\rangle\right\rangle _{\mathrm{L}}=\int_{X}g(x)\mu(dx)f^{\dagger}(x).\] We assume throughout this paper that \(\mu\) is _absolutely continuous_. In this case, there exists a weakly measurable function \(M:\mathbb{T}\to\mathcal{H}\) with \(M(e^{it})\geq 0\) almost everywhere in \(\mathbb{T}\), such that \[d\mu(x)=M(x)dx.\] Consider now a family \(\mathbf{H}=\{\mathcal{H}_{n}\}_{n\in\mathbb{Z}}\) of Hilbert spaces. A map \(C\) on \(\mathbb{Z}\times\mathbb{Z}\) such that \(C_{i,j}\in\mathcal{L}(\mathcal{H}_{j},\mathcal{H}_{i})\) is called a _positive definite kernel_ if \[\sum_{i,j}\left\langle C_{i,j}h_{j},h_{i}\right\rangle\geq 0,\] for all sequences \(\{h_{n}\}_{n\in\mathbb{Z}}\) in \(\oplus_{n\in\mathbb{Z}}\mathcal{H}_{n}\) with finite support. The following theorem establishes a connection between this shift operator and positive definite sequences of operators; refer to [17] for more background. **Theorem 2.1**.: Let \(C\) be a positive definite Toeplitz kernel. Then there exists a Hilbert space \(\mathcal{K}\) and a map \(V\) such that \(V_{n}\in\mathcal{L}(\mathcal{H}_{n},\mathcal{K})\) and: * \(C_{i,j}=V_{i}^{\star}V_{j}\), for \(i,j\in\mathbb{Z}\), * \(\mathcal{K}=\bigvee_{n\in\mathbb{Z}}V^{n}\mathcal{H}_{n}\), with \(V^{i}=\prod_{j=0}^{i-1}U_{j}\). There is an important particular case where the family \(\mathbf{H}\) reduces to a single Hilbert space, _i.e._\(\mathcal{H}_{n}=\mathcal{H}\) for all \(n\in\mathbb{Z}\) and the positive definite kernel \(C\) has the property that \(C_{i,j}=T_{j-i}\) for a certain map \(T\) from \(\mathbb{Z}\) to \(\mathcal{L}(\mathcal{H})\). In this case, the kernel \(C\) is called a _positive definite Toeplitz kernel_. **Theorem 2.2**.: Let \(C\) be a positive definite Toeplitz kernel. Then there exist a Hilbert space, a unitary operator \(S\) in \(\mathcal{L}(\mathcal{K})\) and an operator in \(\mathcal{L}(\mathcal{H},\mathcal{K})\) such that * \(C_{i,j}=Q^{\star}S^{j-i}Q\), for \(i,j\in\mathbb{Z}\), * \(\mathcal{K}=\bigvee_{n\in\mathbb{Z}}S^{n}Q\mathcal{H}_{n}\), Moreover, \[\mathcal{K}=\bigvee_{n\geq 0}U^{n}(\mathcal{H}),\] where \(\bigvee\) denotes the linear span, and \(U\) is unique up to an isomorphism. The shift operator is defined by the successive powers of the unitary dilation operator \(U\). Note that Theorem 2.2 applies only to stationary processes. For nonstationary processes, the Naimark dilation can be generalized to the Kolomogorov decomposition, as stated in the following theorem. This Kolomogorov Decomposition (Theorem 2.1) extends the Kolomogorov Isomorphism Theorem (KIT) [38] to operator-valued non-stationary processes, and establishes a unique minimal dilation representation for the correlation function \(C\) of a stochastic process. 
This result, first established by Mandrekar and Salehi ([45] SS6) using the work of Wiener and Masani [68], supports the idea proposed by Masani [47] that there is a direct connection between dilation and shift operators. Dilation theory allows us to study a signal from the perspective of the dilation operator rather than the signal itself, rather as Fourier theory studies a signal from the spectral rather than temporal perspective. While Fourier theory is not typically applied to non-stationary signals, dilation theory does not require stationarity. If the signal is stationary, dilation theory leads to the Naimark dilation (Theorem 2.2). If the signal is non-stationary, it leads to the Kolmogorov decomposition (Theorem 2.1). ## 3 Operator orthogonal polynomials We define monic operator polynomials \(\Phi_{n}^{R}\), \(\Phi_{n}^{L}\) by applying the Gram-Schmidt orthogonalisation procedure to \(\{\mathbf{I},z\mathbf{I},\cdots\}\), that is, \(\Phi_{n}^{R}\) is the unique operator polynomial \(z^{n}\mathbf{I}+\) lower order with \[\big{\langle}\big{\langle}z^{k}\mathbf{I},\Phi_{n}^{R}\big{\rangle}\big{\rangle} _{\mathrm{R}}=0,\quad k=0,1,\cdots,n-1.\] It is written as \[\Phi_{n}^{R}(z)=\sum_{k=0}^{n}z^{k}A_{k},\] where \(A_{k}\in\mathcal{L}(\mathcal{H})\) and \(A_{n}=\mathbf{I}\). For an operator polynomial \(P_{n}\) of degree \(n\), we defined the _reversed polynomial_\(P_{n}^{*}\) of \(P_{n}\) by \[P_{n}^{*}(z)=z^{n}P_{n}(1/\bar{z})^{\dagger}.\] We have \[(P_{n}^{*})^{*}=P_{n},\] and for any \(\alpha\in\mathcal{L}(\mathcal{H})\), \[(\alpha P_{n})^{*}=P_{n}^{*}\alpha^{\dagger},\ \ (P_{n}\alpha)^{*}=\alpha^{ \dagger}P_{n}^{*}.\] **Lemma 3.1**.: We have \[\left\langle\left\langle f,g\right\rangle\right\rangle_{\mathrm{L}}=\left\langle \left\langle g,f\right\rangle\right\rangle_{\mathrm{L}}^{\dagger},\ \ \ \ \left\langle\left\langle f,g\right\rangle\right\rangle_{\mathrm{R}}=\left\langle \left\langle g,f\right\rangle\right\rangle_{\mathrm{R}}^{\dagger}\] \[\left\langle\left\langle f^{*},g^{*}\right\rangle\right\rangle_{\mathrm{L}}= \left\langle\left\langle g,f\right\rangle\right\rangle_{\mathrm{R}}^{\dagger}, \ \ \ \left\langle\left\langle f^{*},g^{*}\right\rangle\right\rangle_{\mathrm{R}}= \left\langle\left\langle g,f\right\rangle\right\rangle_{\mathrm{L}}^{\dagger}\] Proof.: The same steps of the proof of Lemma 3.1 in [18] applies here. The following lemma will be a very useful characterization of positive definite \(2\)\(\times\)\(2\) operator matrices. **Lemma 3.2**.: The following are equivalent: 1. The operator matrix \(\begin{pmatrix}A&B\\ B^{*}&C\end{pmatrix}\) is positive definite. 2. \(A>0\) and \(C-B^{*}A^{-1}B>0\). 3. \(C>0\) and \(A-B^{*}C^{-1}B>0\); here \(C-B^{*}A^{-1}B>0\) is called the _Schur complement_ of \(A\). Proof.: This follows immediately from the _Frobenius-Schur factorization,_ \[\begin{pmatrix}A&B\\ B^{*}&C\end{pmatrix}=\begin{pmatrix}I&0\\ B^{*}A^{-1}&I\end{pmatrix}\begin{pmatrix}A&0\\ 0&C-B^{*}A^{-1}B\end{pmatrix}\begin{pmatrix}I&A^{-1}B\\ 0&I\end{pmatrix}. \tag{1}\] In what follows, we provide an explicit operator expression for operator orthogonal polynomials on the unit circle in terms of the moments of the measure. 
Given an operator-valued positive measure on \(\mathbb{T}\), we define its _moments_ for \(k=-n,\cdots,n\) such that \[\mu_{k}=\frac{1}{2\pi}\int_{-\pi}^{\pi}e^{-ik\theta}\mu(d\theta)\ \ \text{and}\ \ \mu_{-k}=\mu_{k}^{*}.\] The right and left _Toeplitz operator matrices_\(T_{n}^{R}\) and \(T_{n}^{L}\) associated to \(\mu\) are \[T_{n}^{R}=\begin{pmatrix}\mu_{0}&\mu_{-1}&\cdots&\mu_{-n+1}\\ \mu_{1}&\mu_{0}&\cdots&\mu_{-n+2}\\ \vdots&\vdots&\ddots&\vdots\\ \mu_{n-1}&\mu_{n-2}&\cdots&\mu_{0}\end{pmatrix},T_{n}^{L}=\begin{pmatrix}\mu_ {0}&\mu_{1}&\cdots&\mu_{n-1}\\ \mu_{-1}&\mu_{0}&\cdots&\mu_{n-2}\\ \vdots&\vdots&\ddots&\vdots\\ \mu_{-n+1}&\mu_{-n+2}&\cdots&\mu_{0}\end{pmatrix}.\] We also have \[T_{n+1}^{R}=\begin{pmatrix}T_{n}^{R}&\nu_{n}\\ \nu_{n}^{*}&\mu_{0}\end{pmatrix}, T_{n+1}^{L}=\begin{pmatrix}T_{n}^{L}&\xi_{n}\\ \xi_{n}^{*}&\mu_{0}\end{pmatrix},\] where \[\nu_{n}=\begin{pmatrix}\mu_{-n}\\ \mu_{-n+1}\\ \vdots\\ \mu_{-1}\end{pmatrix}, \xi_{n}=\begin{pmatrix}\mu_{n}\\ \mu_{n-1}\\ \vdots\\ \mu_{1}\end{pmatrix},\] and we define the _Schur complements_ of \(\mu_{0}\) in \(T_{n+1}^{R}\) and \(T_{n+1}^{L}\) as \[\kappa_{n}^{R} = \mathrm{SC}(T_{n+1}^{R})=\mu_{0}-\nu_{n}^{*}T_{n}^{-R}\nu_{n},\] \[\kappa_{n}^{L} = \mathrm{SC}(T_{n+1}^{L})=\mu_{0}-\xi_{n}^{*}T_{n}^{-L}\xi_{n},\] where \(T_{n}^{-R,-L}=(T_{n}^{R,L})^{-1}\). **Remark.** Let \(M_{n}(\mathcal{L}(\mathcal{H}))\) be the set of \(n\times n\) complex matrices with operator-valued entries from \(\mathcal{L}(\mathcal{H})^{+}\). Since \(\mu\) is a positive operator-valued measure, the Toeplitz matrix \(T_{n}\in M_{n}(\mathcal{L}(\mathcal{H}))\) defined by \[(T_{n})_{ij}=T_{j-i}\qquad(0\leq i\leq j\leq n-1)\] is positive definite. The positivity of \(\mu\) implies that the scalar matrix \(\langle T_{j-i}\ x_{j},x_{i}\rangle\) is positive for all choices of vectors \(x_{1},\ldots,x_{n}\in\mathcal{L}(\mathcal{H})\). ## 4 The Verblunsky recursion In this section, we demonstrate that the right and left orthogonal polynomials follow the classical Verblunsky recurrence relations. In our approach, we will define the monic operator-valued polynomials in terms of the Schur complements. The notion of Schur complement (of the (1,1) entry \(\mu_{0}\), as above) (SC) is relatively simple but suprisingly strong. **Proposition 4.1**.: Monic polynomials such that \[\Phi_{n}^{R}(z) = \mathrm{SC}\left(\begin{array}{ccccc}\mu_{0}&\mu_{-1}&\cdots&\mu_ {-n+1}&\mu_{-n}\\ \mu_{1}&\mu_{0}&\cdots&\mu_{-n+2}&\mu_{-n+1}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ \mu_{n-1}&\mu_{n-2}&\cdots&\mu_{0}&\mu_{-1}\\ I&zI&\cdots&z^{n-1}I&z^{n}I\end{array}\right)\] \[= z^{n}I-[I\ \ zI\ \ \cdots\ \ z^{n}I]\ T_{n}^{-R}\left(\begin{array}{ c}\mu_{-n}\\ \mu_{-n+1}\\ \vdots\\ \mu_{-1}\end{array}\right),\] \[\Phi_{n}^{L}(z) = \mathrm{SC}\left(\begin{array}{ccccc}\mu_{0}&\mu_{-1}&\cdots& \mu_{n-1}&I\\ \mu_{1}&\mu_{0}&\cdots&\mu_{n-2}&zI\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ \mu_{n-1}&\mu_{n-2}&\cdots&\mu_{0}&z^{n-1}I\\ \mu_{-n}&\mu_{n-1}&\cdots&\mu_{-1}&z^{n}I\end{array}\right)\] \[= z^{n}I-[\mu_{-n}\ \ \mu_{-n+1}\ \ \cdots\ \ \mu_{-1}]\ T_{n}^{-L}\left(\begin{array}{ c}I\\ zI\\ \vdots\\ z^{n}I\end{array}\right)\] are orthogonal, i.e, for any \(k,j\geq 0\), \[\left\langle\left\langle\Phi_{k}^{R},\Phi_{j}^{R}\right\rangle\right\rangle_ {\mathrm{R}}=\delta_{kj}\kappa_{k}^{R},\qquad\left\langle\left\langle\Phi_{k}^ {L},\Phi_{j}^{L}\right\rangle\right\rangle_{\mathrm{L}}=\delta_{kj}\kappa_{k} ^{L}.\] Proof.: We prove just the first; the second is similar. 
For any \(0\leq m\leq n-1\) and \(z=e^{i\theta}\) \[\left\langle\left\langle z^{m}I,\Phi_{n}^{R}\right\rangle\right\rangle_{ \mathrm{R}} = \int_{-\pi}^{\pi}e^{-im\theta}\mu(\theta)(e^{in\theta}-[I\ \ e^{i\theta}\ \ \cdots\ \ e^{i\theta}]T_{n}^{-R}\nu_{n})d\theta\] \[= \mu_{m-n}-[\mu_{m}\ \ \cdots\ \ \mu_{m-n+1}]T_{n}^{-R}\nu_{n}\] \[= \mu_{m-n}-\mu_{m-n}=0.\] If \(m=n\), then \[\left\langle\left\langle\Phi_{n}^{R},\Phi_{n}^{R}\right\rangle \right\rangle_{\mathrm{R}} = \mu_{0}-[\mu_{n}\ \ \cdots\ \ \mu_{1}]T_{n}^{-R}\nu_{n}=\kappa_{R}^{n}.\] **Proposition 4.2**.: The monic orthogonal polynomials \(\Phi_{n}^{R}\) and \(\Phi_{n}^{L}\) obey the following recursion relations: \[\Phi_{n+1}^{R} = z\Phi_{n}^{R}+\Phi_{n}^{L,*}\Phi_{n+1}^{R}(0), \tag{2}\] \[\Phi_{n+1}^{L} = z\Phi_{n}^{L}+\Phi_{n+1}^{L}(0)\Phi_{n}^{R,*}. \tag{3}\] Proof.: For the first recursion, observe that for any \(1\leq i\leq n\), we have \[\left\langle\left\langle\Phi_{n+1}^{R}-z\Phi_{n}^{R},z^{i}I\right\rangle\right\rangle _{\mathrm{R}}=\left\langle\left\langle\Phi_{n}^{L,*},z^{i}I\right\rangle \right\rangle_{\mathrm{R}}=0,\] and so \(\Phi_{n+1}^{R}-z\Phi_{n}^{R}\) and \(\Phi_{n}^{L,*}\) are proportional. Setting \(z=0\) gives the constant of proportionality as \(\Phi_{n+1}^{R}(0)\). Similarly for the other claim. Define the normalized orthogonal polynomials by \[\varphi_{n}^{R}=\Phi_{n}^{R}\kappa_{n}^{-R/2}\quad\text{ and }\quad\varphi_{n}^{L}= \kappa_{n}^{-L/2}\Phi_{n}^{L}.\] One easily verifies \[\left\langle\left\langle\varphi_{n}^{R,L},\varphi_{n}^{R,L}\right\rangle \right\rangle_{\mathrm{R,L}}=\kappa_{n}^{-R,L/2}\left\langle\left\langle\Phi_ {n}^{R,L},\Phi_{n}^{R,L}\right\rangle\right\rangle_{\mathrm{R,L}}\kappa_{n}^{ -R,L/2}=I.\] Defining \[\rho_{n}^{R}=\kappa_{n+1}^{R/2}\kappa_{n}^{-R/2}\ \text{ and }\ \rho_{n}^{L}=\kappa_{n}^{L/2}\kappa_{n+1}^{-L/2},\] one can easily show \[z\varphi_{n}^{R}-\varphi_{n+1}^{R}\rho_{n}^{R} = \varphi_{n}^{L,*}(\alpha_{n}^{R})^{\dagger}, \tag{4}\] \[z\varphi_{n}^{L}-\rho_{n}^{L}\varphi_{n+1}^{L} = (\alpha_{n}^{L})^{\dagger}\varphi_{n}^{R,*}, \tag{5}\] where \[\alpha_{n}^{R} = -(\kappa_{n}^{-R/2})^{\dagger}\Phi_{n+1}^{R}(0)^{\dagger}\kappa_{ n}^{L/2}, \tag{6}\] \[\alpha_{n}^{L} = -\kappa_{n}^{R/2}\Phi_{n+1}^{L}(0)^{\dagger}(\kappa_{n}^{-L/2})^ {\dagger}. \tag{7}\] One also has \[\kappa_{n}^{-L}=(\rho_{n-1}^{L}\ldots\rho_{0}^{L})^{2}\ \text{ and }\ \kappa_{n}^{-R}=(\rho_{0}^{R}\ldots\rho_{n-1}^{R})^{2}.\] That is, \[\begin{pmatrix}\varphi_{n}^{L}(z)\\ \varphi_{n}^{R,*}(z)\end{pmatrix}=A^{L}(\alpha_{n},z)\begin{pmatrix}\varphi_{ n}^{L}(z)\\ \varphi_{n}^{R,*}(z)\end{pmatrix} \tag{8}\] where \[A^{L}(\alpha,z)=\begin{pmatrix}z(\rho^{L})^{-1}&-(\rho^{L})^{-1}\alpha^{ \dagger}\\ -z(\rho^{R})^{-1}\alpha&(\rho^{R})^{-1}\end{pmatrix}. \tag{9}\] **Proposition 4.3**.: 1. The operators \(\alpha_{n}^{R}\) and \(\alpha_{n}^{L}\) are equal. 2. \(\rho_{n}^{L}=(I-\alpha_{n}^{\dagger}\alpha_{n})^{1/2}\) and \(\rho_{n}^{R}=(I-\alpha_{n}\alpha_{n}^{\dagger})^{1/2}\). Proof.: We follow the steps of Damanik, Pushnitski and Simon [DamPS, Th 3.3]. 
1) Multiplying 5 by \(\varphi_{n}^{R,*}\) under the left inner product: \[(\alpha_{n}^{L})^{\dagger}\left\langle\left\langle\varphi_{n}^{R,*},\varphi_{n}^{R,*}\right\rangle\right\rangle_{\mathrm{L}} = \left\langle\left\langle\varphi_{n}^{R,*},z\varphi_{n}^{L}-\rho_ {n}^{L}\varphi_{n+1}^{L}\right\rangle\right\rangle_{\mathrm{L}}\] \[= \left\langle\left\langle\varphi_{n}^{R,*},z\phi_{n}^{L}\right\rangle \right\rangle_{\mathrm{L}}-\underbrace{\left\langle\left\langle\varphi_{n}^{R,* },\rho_{n}^{L}\varphi_{n+1}^{L}\right\rangle\right\rangle_{\mathrm{L}}}_{=0}\] \[= \left\langle\left\langle z\varphi_{n}^{R},\phi_{n}^{L,*}\right \rangle\right\rangle_{\mathrm{R}}^{\dagger}\] \[= \left\langle\left\langle\varphi_{n+1}^{R}\rho_{n}^{R}+{\varphi_{ n}^{L,*}}(\alpha_{n}^{R})^{\dagger},\varphi_{n}^{L,*}\right\rangle\right\rangle_{ \mathrm{R}}^{\dagger}\] \[= \underbrace{\left\langle\left\langle\varphi_{n+1}^{L,*}\rho_{n} ^{R},\varphi_{n}^{L,*}\right\rangle\right\rangle_{\mathrm{R}}^{\dagger}}_{=0}+ \left\langle\left\langle\varphi_{n}^{L,*}(\alpha_{n}^{R})^{\dagger},\varphi_{ n}^{L,*}\right\rangle\right\rangle_{\mathrm{R}}^{\dagger}\] \[= \left\langle\left\langle\varphi_{n}^{L,*},\varphi_{n}^{L,*}( \alpha_{n}^{R})^{\dagger}\right\rangle\right\rangle_{\mathrm{R}}\] \[= \left\langle\left\langle\varphi_{n}^{L,*},\varphi_{n}^{L,*}\right \rangle\right\rangle_{\mathrm{R}}(\alpha_{n}^{R})^{\dagger}\] \[= \underbrace{\left\langle\left\langle\varphi_{n}^{L},\varphi_{n}^ {L}\right\rangle\right\rangle_{\mathrm{L}}}_{=I}(\alpha_{n}^{R})^{\dagger}.\] 2) \[I = \left\langle\left\langle z\varphi_{n}^{L},z\varphi_{n}^{L}\right \rangle\right\rangle_{\mathrm{L}}\] \[= \left\langle\left\langle\rho_{n}^{L}\varphi_{n+1}^{L}+\alpha_{n}^ {\dagger}\varphi_{n}^{R,*},\rho_{n}^{L}\varphi_{n+1}^{L}+\alpha_{n}^{\dagger} \varphi_{n}^{R,*}\right\rangle\right\rangle_{\mathrm{L}}\] \[= \rho_{n}^{L}\left\langle\left\langle\varphi_{n+1}^{L},\varphi_{n +1}^{L}\right\rangle\right\rangle_{\mathrm{L}}+\alpha_{n}^{\dagger}\left\langle \left\langle\varphi_{n}^{R,*},\varphi_{n}^{R,*}\right\rangle\right\rangle_{ \mathrm{L}}\alpha_{n}\] \[= (\rho_{n}^{L})^{2}+\alpha_{n}^{\dagger}\alpha_{n}.\] ## 5 Christoffel-Darboux Formulae This section introduces operator-valued right and left kernel polynomials and uses the recurrence formulae to derive the Christoffel-Darboux formulae. Proposition 5.1 below is the Christoffel-Darboux identity, extending [18]. **Proposition 5.1**.: \[(1-\bar{w}z)\sum_{k=0}^{n}\varphi_{k}^{L}(w)^{\dagger}\varphi_{k}^{L} (z)= \varphi_{n}^{R,*}(w)^{\dagger}\varphi_{n}^{R,*}(z)-\bar{w}z\varphi_ {n}^{L}(w)^{\dagger}\varphi_{n}^{L}(z)\] (10) \[= \varphi_{n+1}^{R,*}(w)\varphi_{n+1}^{R,*}(w)^{\dagger}-\varphi_{n +1}^{L}(w)\varphi_{n+1}^{L}(z)^{\dagger}\] (11) \[(1-\bar{w}z)\sum_{k=0}^{n}\varphi_{k}^{R}(z)\varphi_{k}^{R}(w)^{\dagger}= \varphi_{n}^{L,*}(z)\varphi_{n}^{L,*}(w)^{\dagger}-\bar{w}z \varphi_{n}^{R}(z)\varphi_{n}^{R}(w)^{\dagger}\] (12) \[= \varphi_{n+1}^{L,*}(z)\varphi_{n+1}^{L,*}(w)^{\dagger}-\varphi_{n +1}^{R}(z)\varphi_{n+1}^{R}(w)^{\dagger}\] (13) Proof.: We can use the same arguments from [DamPS, Prop. 
3.6]: _(a)_ \[F_{n}^{L}(z)=\begin{pmatrix}\varphi_{n}^{L}(z)\\ \varphi_{n}^{R,*}(z)\end{pmatrix},\ \ J=\begin{pmatrix}I&0\\ 0&-I\end{pmatrix},\ \ \tilde{J}=\begin{pmatrix}\bar{w}zI&0\\ 0&-I\end{pmatrix}.\] Then \[F_{n+1}^{L}(z)=A^{L}(\alpha_{n},z)F_{n}^{L}(z)\] and \[A^{L}(\alpha_{n},w)^{\dagger}JA^{L}(\alpha_{n},z)=\begin{pmatrix}\bar{w}zI&0 \\ 0&-I\end{pmatrix}=\tilde{J}.\] Thus \[F_{n+1}^{L}(w)^{\dagger}JF_{n+1}^{L}(z)=F_{n}^{L}(w)^{\dagger}A^{L}(\alpha_{n},w)^{\dagger}JA^{L}(\alpha_{n},z)F_{n}^{L}(z)=F_{n}^{L}(w)^{\dagger}\tilde{J}F _{n}^{L}(z)\] and hence \[\varphi_{n+1}^{L}(w)^{\dagger}\varphi_{n+1}^{L}(z)-\varphi_{n+1}^{R,*}(w)^{ \dagger}\varphi_{n+1}^{R,*}(z)=\bar{w}z\varphi_{n}^{L}(w)^{\dagger}\varphi_{n }^{L}(z)-\varphi_{n}^{R,*}(w)^{\dagger}\varphi_{n}^{R,*}(z),\] which gives the second part of (a). Now, denote \[Q_{n}^{L}(z,w)=\varphi_{n+1}^{R,*}(w)^{\dagger}\varphi_{n+1}^{R,*}(z)-\varphi_ {n+1}^{L}(z)^{\dagger}\varphi_{n+1}^{L}(z).\] Then \[Q_{n+1}^{L}(z,w)-Q_{n}^{L}(z,w) = \varphi_{n}^{R,*}(w)^{\dagger}\varphi_{n}^{R,*}(z)-\bar{w}z \varphi_{n}^{L}(w)^{\dagger}\varphi_{n}^{L}(z)\] \[-\varphi_{n}^{R,*}(w)^{\dagger}\varphi_{n}^{R,*}(z)+\varphi_{n}^ {L}(w)^{\dagger}\varphi_{n}^{L}(z)\] \[= (1-\bar{w}z)\varphi_{n}^{L}(w)^{\dagger}\varphi_{n}^{L}(z).\] Summing over \(n\) the proof for (10) follows since \(Q_{-1}^{L}(z,w)=0\). For (12) the proof is similar to (10). **Proposition 5.2**.: Define the right and left kernel polynomials of degree \(n\) as \[K_{n}^{R}(x,y)=\sum_{k=0}^{n}\varphi_{k}^{R}(z)\varphi_{k}^{R}(w)^{\dagger}, \quad K_{n}^{L}(x,y)=\sum_{k=0}^{n}\varphi_{k}^{L}(x)^{\dagger}\varphi_{k}^{L} (y),\] Then, we have 1. \(K_{n}^{R}(x,y)=[I\ yI\ \cdots\ y^{n}I]\ T_{n+1}^{-R}\ \begin{bmatrix}I\\ x^{-1}I\\ \vdots\\ x^{-n}I\end{bmatrix},\) 2. \(K_{n}^{L}(x,y)=[I\ yI\ \cdots\ y^{-n}I]\ T_{n+1}^{-L}\ \begin{bmatrix}I\\ x^{1}I\\ \vdots\\ x^{n}I\end{bmatrix}.\) Proof.: We write \[K_{n}^{R}(x,y)=\underbrace{[Y\ y^{n}I]\ T_{n+1}^{-R}\ \begin{bmatrix}X^{*}\\ x^{-n}I\end{bmatrix}}_{R_{n}}\] with \(Y=[I\ yI\ \cdots\ y^{n-1}I]\) and \(X=[I\ xI\ \cdots\ x^{n-1}I]\). Writing \[T_{n+1}^{R}=\begin{pmatrix}T_{n}^{R}&\nu_{n}\\ \nu_{n}^{*}&\mu_{0}\end{pmatrix}\] and using (1), we have \[T_{n+1}^{-R}=\begin{pmatrix}A&\gamma\\ \gamma^{*}&\alpha\end{pmatrix},\] where \[A=T_{n}^{-R}+T_{n}^{-R}\nu_{n}\kappa_{n}^{-R}\nu_{n}^{*}T_{n}^{-R},\quad\gamma =-T_{n}^{-R}\nu_{n}\kappa_{n}^{-R},\quad\alpha=\kappa_{n}^{-R}.\] Using the equality \[[I\ zI\ \cdots\ z^{n-1}I]T_{n}^{-R}\nu_{n}\kappa_{n}^{-R/2}=z^{n}\kappa_{n}^{-R/ 2}-\varphi_{n}^{R}(z)\] and rewriting \(R_{n}\) as \[R_{n}=YAX^{*}+y^{n}\gamma^{*}X^{*}+Y\gamma x^{-n}+y^{n}\alpha x^{-n},\] it is straightforward to prove that \[R_{n}=R_{n-1}+\varphi_{n}^{R}(y)\varphi_{n}^{R}(x)^{\dagger}.\] Summing, the telescoping sum gives 1. as required. ## 6 Bernstein-Szego Approximation In this section, we obtain a formula for the Bernstein-Szego approximation of an operator-valued measure. Given a nontrivial operator-valued measure \(d\mu\), with Verblunsky coefficients \(\{\alpha_{n}\}_{n=0}^{\infty}\), we will identify the measures \(d\mu^{(n)}\) with \[\alpha_{j}(d\mu^{(n)})=\left\{\begin{array}{ll}\alpha_{j},&j\leq n\\ \mathbf{0},&j\geq n+1\end{array}\right.\] and \(\mu^{(n)}\to d\mu\) weakly. In many ways, the general proof of the strong Szego theorem will hinge on this approximation. As a preliminary, we need the following theorem: **Theorem 6.1**.: For \(z\in\mathbb{D}=\{z:|z|<1\}\), we have 1. 
For \(z\in\mathbb{T}\), all of \(\varphi_{n}^{R,*}(z)\), \(\varphi_{n}^{L,*}(z)\), \(\varphi_{n}^{R}(z)\), \(\varphi_{n}^{L}(z)\) are invertible. 2. \(\varphi_{n}^{R}(z)\), \(\varphi_{n}^{L}(z)\) have all zeros in \(\mathbb{D}\). 3. \(\varphi_{n}^{R,*}(z)\), \(\varphi_{n}^{L,*}(z)\) have all zeros in \(\mathbb{C}\backslash\bar{\mathbb{D}}\). 4. For any \(z\in\mathbb{T}\), \[\varphi_{n}^{R}(z)\varphi_{n}^{R}(z)^{\dagger}=\varphi_{n}^{L}(z)^{\dagger} \varphi_{n}^{L}(z).\] Proof.: For 1., assume that \(0<|w|\leq 1\). if there exists \(c\neq 0\) such that \(\varphi_{n}^{R}(w)^{\dagger}c=0\), we also have that \(\varphi_{n}^{L,*}(w)^{\dagger}c=0\). Then, propositions 5.2 and 5.1 lead to \[[I\ zI\ \cdots\ z^{n}I]\ T_{n+1}^{-R}\ \begin{bmatrix}c\\ w^{-1}c\\ \vdots\\ w^{-n}c\end{bmatrix}=0,\ \ \text{for}\ \ z\in\mathbb{C},\] which contradicts the invertibility of \(T_{n+1}^{-R}\), unless \(c=0\). This proves 1. for \(\varphi_{n}^{L,*}(z)\) and \(\varphi_{n}^{R}(z)\). If \(z=e^{i\theta}\), \(\varphi_{n}^{R,*}(z)=e^{-in\theta}\varphi_{n}^{R}(z)^{\dagger}\) and \(\varphi_{n}^{L}(z)=e^{in\theta}\varphi_{n}^{L,*}(z)^{\dagger}\) are also invertible. For 4., put \(z=w\) in equation (10) in Proposition 4. One has \(\varphi_{n}^{R,*}(z)^{\dagger}\varphi_{n}^{R,*}(z)=\varphi_{n}^{L}(z)^{ \dagger}\varphi_{n}^{L}(z)\), which can be rewritten as \[\varphi_{n}^{R}(z)\varphi_{n}^{R}(z)^{\dagger}=\varphi_{n}^{L}(z)^{\dagger} \varphi_{n}^{L}(z),\] as required. For 2., Let \(\varphi_{n}^{R}(z_{0})=0\) and define \(p_{n-1}\) such that \((z-z_{0})p_{n-1}=\varphi_{n}^{R}\). The polynomial \(p_{n-1}\) is of degree \(n-1\) which implies \(\left\langle\left\langle p_{n-1},\varphi_{n}^{R}\right\rangle\right\rangle_{ \mathrm{R}}=0\). So \[\left\langle\left\langle zp_{n-1},zp_{n-1}\right\rangle\right\rangle_{ \mathrm{R}}=\left\langle\left\langle p_{n-1},p_{n-1}\right\rangle\right\rangle_ {\mathrm{R}}=\left|z_{0}\right|^{2}\left\langle\left\langle p_{n-1},p_{n-1} \right\rangle\right\rangle_{\mathrm{R}}+\left\langle\left\langle\varphi_{n}^ {R},\varphi_{n}^{R}\right\rangle\right\rangle_{\mathrm{R}},\] or \[(1-|z_{0}|^{2})\left\langle\left\langle p_{n-1},p_{n-1}\right\rangle\right\rangle _{\mathrm{R}}=\left\langle\left\langle\varphi_{n}^{R},\varphi_{n}^{R}\right\rangle \right\rangle_{\mathrm{R}},\] from which we conclude \(|z_{0}|<1\), that is, the zeros of \(\varphi_{n}\) lie in \(\mathbb{D}\). Since \(\varphi_{n}^{R,*}(z_{0})=0\) if and only if \(\varphi_{n}^{R}(1/\bar{z}_{0})=0\), the zeros of \(\varphi_{n}^{R,*}\) lie in \(\mathbb{C}\backslash\bar{\mathbb{D}}\). The idea is that since the inverse in Theorem (6.1) exists, we can define the measure \(d\mu^{(n)}\) on \(\mathbb{T}\) by \[d\mu^{(n)}(\theta)=[\varphi_{n}^{R}(e^{i\theta})\varphi_{n}^{R}(e^{i\theta})^ {\dagger}]^{-1}\frac{d\theta}{2\pi}. \tag{14}\] Also, directly from the definition of the right orthogonal polynomials, we have the right Bernstein-Szego approximation to \(\mu\) \[d\mu^{(n)}(\theta)=[\varphi_{n}^{R,*}(e^{i\theta})^{\dagger}\varphi_{n}^{R,*}(e ^{i\theta})]^{-1}\frac{d\theta}{2\pi};\] the corresponding left Berstein-Szego approximation of \(\mu_{n}\) to \(\mu\) is \[d\mu^{(n)}(\theta)=[\varphi_{n}^{L,*}(e^{i\theta})\varphi_{n}^{L,*}(e^{i \theta})^{\dagger}]^{-1}\frac{d\theta}{2\pi}.\] **Theorem 6.2**.: The operator-valued measure \(d\mu^{(n)}\) is normalized and its right operator orthogonal polynomials for \(j=0,\cdots,n\) are \(\{\varphi_{j}^{R}\}_{j=0}^{n}\), and for \(j>n\), \[\varphi_{j}^{R}(z;d\mu^{(n)})=z^{j-n}\varphi_{n}^{R}(z;d\mu). 
\tag{15}\] The Verblunsky coefficients for \(d\mu^{(n)}\) are \[\alpha_{j}(d\mu^{(n)})=\left\{\begin{array}{ll}\alpha_{j}(d\mu),&j\leq n,\\ \mathbf{0},&j\geq n+1.\end{array}\right. \tag{16}\] Proof.: Following the steps of the proof in [DamPS], let \(\langle\langle\cdot,\cdot\rangle\rangle_{R}\) be the inner product associated with \(\mu^{(n)}\). By direct calculation, \[\langle\langle\varphi_{n}^{R},\varphi_{n}^{R}\rangle\rangle_{R}=\mathbf{1}, \tag{17}\] and for \(j=0,1,\cdots,n-1\), \[\langle\langle\varphi_{n}^{R},\varphi_{j}^{R}\rangle\rangle_{R}=0. \tag{18}\] So the family \(\{\varphi_{n}^{R}\}_{j=0}^{n}\) is an orthonormal basis with respect to the inner product \(\langle\langle\cdot,\cdot\rangle\rangle_{R}\). Also, \[\langle\langle e^{ij\theta},\varphi_{n}^{R}\rangle\rangle_{R} = \frac{1}{2\pi}\int_{0}^{2\pi}e^{ij\theta}(\varphi_{n}^{R}(e^{i \theta})^{\dagger})^{-1}d\theta\] \[= \frac{1}{2\pi}\oint e^{i(n-j-1)\theta}(\varphi_{n}^{R,*}(e^{i \theta}))^{-1}d\theta=0\] since \(n-k-1\geq 0\) and \(\varphi_{n}^{R,*}(e^{i\theta})^{-1}\) is analytic in \(\bar{\mathbb{D}}\) by Theorem (6.1). This proves \(\varphi_{n}^{R}\) is a OPUC for \(d\mu_{n}\) and (15) holds. By (6), if \(\varphi_{k+1}^{R}(0)=0\) then \(\alpha_{k}=0\), then by (15) \(\varphi_{n+j}^{R}(0;d\mu^{(n)})=0\), which implies (16). **Theorem 6.3**.: Let \(d\mu\) and \(d\nu\) be two nontrivial operator-valued measures on \(\mathbb{T}\) such that for \(N\), \[\varphi_{N}^{R,L}(z;d\mu)=\varphi_{N}^{R,L}(z;d\nu). \tag{19}\] Then \[\begin{array}{ll}(i)&\varphi_{j}^{R,L}(z;d\mu)=\varphi_{j}^{R,L}(z;d\nu)&j= 0,1,\cdots,N-1,\\ (ii)&\alpha_{j}(d\mu)=\alpha_{j}(d\nu)&j=0,1,\cdots,N-1,\\ (iii)&c_{j}(d\mu)=c_{j}(d\nu)&j=0,1,\cdots,N.\end{array} \tag{20}\] Proof.: The recurrence (2) can be written in the matrix form \[\begin{pmatrix}\varphi_{j+1}^{R}(z)\\ \varphi_{j+1}^{L,*}(z)\end{pmatrix}=A^{R}(\alpha_{j},z)\begin{pmatrix}\varphi _{j}^{R}(z)\\ \varphi_{j}^{L,*}(z)\end{pmatrix},\] where \[A^{R}(\alpha,z)=\begin{pmatrix}z\rho^{-R}&-z\alpha\rho^{-L}\\ -\alpha^{\dagger}\rho^{-R}&\rho^{-L},\end{pmatrix}\] and its inverse for \(z\neq 0\) \[A^{-R}(\alpha,z)=\begin{pmatrix}z^{-1}\rho^{-R}&\rho^{-R}\alpha\\ z^{-1}\rho^{-L}\alpha^{\dagger}&\rho^{-L},\end{pmatrix}\] which gives the inverse Verblunsky recurrence \[\varphi_{j}^{R}(z) = z^{-1}\varphi_{j+1}^{R}(z)\rho_{j}^{-R}+z^{-1}\varphi_{j+1}^{L,* }(z)\rho_{j}^{-L}\alpha_{j}^{\dagger}, \tag{21}\] \[\varphi_{j}^{L,*}(z) = \varphi_{j+1}^{R}(z)\rho_{j}^{-R}\alpha_{j}^{\dagger}+\varphi_{j+ 1}^{L,*}(z)\rho_{j}^{-L}. \tag{22}\] Then (19) implies \[\alpha_{N}(d\mu)=-\varphi_{N}^{\dagger}(z;d\mu)\kappa_{N}^{L/2}=-\varphi_{N}^{ \dagger}(z;d\nu)\kappa_{N}^{L/2}=\alpha_{N}(d\nu),\] and thus by the inverse recursions (21) and (22), we have by iteration that (i) and (ii) hold. (i) implies (iii) because \(\varphi_{j}(z;d\mu)\) and \(c_{1},\cdots,c_{j-1}\) determine \(c_{j}(d\mu)\) via \[\int\Phi_{j}^{R}(z)d\mu=0.\] **Theorem 6.4**.: \(d\mu^{(n)}\) is a probability measure on \(\mathbb{T}\) for which (16) holds. As \(n\to\infty\), \(d\mu^{(n)}\to d\mu\) weakly. Proof.: Our proof owes something to the scalar proof in [Sim3]. By using (iii) of Theorem (6.3), for \(j=0,1,\cdots,N\), we have \[c_{j}(d\mu^{(n)})=c_{j}(d\mu).\] This equation and its conjugate imply that for any Laurent polynomial \(f\) (i.e., a polynomial in \(z\) and \(z^{-1}\)), we have \[\lim_{N\to\infty}\int f(e^{i\theta})d\mu^{(n)}=\int f(e^{i\theta})d\mu, \tag{23}\] because the left-hand side is equal to the right-hand side for large enough \(N\). 
Since Laurent polynomials are dense in \(C(\mathbb{T})\), equation (23) holds for all \(f\in C(\mathbb{T})\). In other words, we have weak convergence. ## 7 The Szego Theorem In this section, our main goal is to prove the Szego theorem for operator orthogonal polynomials. We first provide a brief overview of the Szego theorem in the scalar case; see e.g. [62], [6]. The Szego theory for orthogonal polynomials was later extended to the matrix case; refer to ([7], [18]) for background and references. In the scalar case, write \(d\mu=w(\theta)\,d\theta/2\pi+d\mu_{s}\), let \((\alpha_{n})\) be the Verblunsky coefficients of \(\mu\), and let \(\sigma\) denote the one-step prediction error. Then: (i) \(\sigma>0\) iff the Szego condition \(\log w\in L_{1}\) holds, that is, \[\int\log w(\theta)\,d\theta>-\infty.\] (ii) \(\sigma>0\) iff \(\alpha\in\ell_{2}\). (iii) \(\sigma^{2}=\prod_{1}^{\infty}(1-|\alpha_{n}|^{2})\), so \(\sigma>0\) iff the product converges, i.e. iff \[\sum_{n}\lvert\alpha_{n}\rvert^{2}<\infty,\quad\text{i.e.}\quad\alpha\in\ell_{2};\] (iv) \(\sigma^{2}\) is the geometric mean \(G(\mu)\) of \(\mu\): for \(\sigma>0\), \[\sigma^{2}=\exp\bigl{(}\frac{1}{2\pi}\int\log w(\theta)d\theta\bigr{)}=:G(\mu)>0.\] Derevyagin, Holtz, Khrushchev, and Tyaglov [20] extended Szego's theorem to the finite-dimensional setting ([20], Thm.28, Thm.29). Using their notation, with det and \(\mathrm{tr}\) denoting the determinant and trace, respectively, the following limit holds: \[\lim_{n\to\infty}\frac{\det(T_{n})}{\det(T_{n-1})}=\det\prod_{0}^{\infty}(I-\alpha_{k}\alpha_{k}^{\dagger})=\exp\int\mathrm{tr}\ \log f(\theta)d\theta/2\pi\qquad(KSz)\] This is known as the Kolmogorov-Szego formula (see e.g. [6], §4). Szego's condition, which requires that the right-hand side of the equation be positive, holds if and only if \[\sum_{0}^{\infty}||\alpha_{k}^{\dagger}\alpha_{k}||<\infty\] (extending early work of Delsarte, Genin and Kamp [21]). The _product theorem for determinants_ in \((KSz)\) above is simple linear algebra in finitely many dimensions, and holds quite generally. As far as we know, the Szego limit theorem was generalized to the operator case by Gohberg and Kaashoek in 1992 (see e.g. [30]), using factorization arguments and Schur complement techniques. They applied the theorem to a class of positive block Toeplitz operators with Hilbert-Schmidt entries, using the second regularized and Perelson determinants in place of the usual determinant. This line of research was continued by Bottcher and Silbermann [15]. The self-adjointness requirement on the block Toeplitz matrices was removed and the smoothness condition required by Gohberg and Kaashoek [30] was relaxed. Under these circumstances, operator-valued versions of the Szego limit theorem were proved. Before proving our results in the operator case, let us review some definitions related to infinite determinants [61]. For an operator \(A\) that is trace class, the determinant \(\det(I-A)\) is well-defined and can be written as \[\det(I-A)=\prod_{j=1}^{\infty}(1-\lambda_{j}),\] where \((\lambda_{j})_{j=1}^{\infty}\) are the eigenvalues of \(A\). On the other hand, if \(A\) is a Hilbert-Schmidt operator, we define the second regularized determinant as \[\det_{2}(I-A)=\det[(I-A)e^{A}].\] This is based on the observation that \((I-A)e^{A}-I\) is trace class. It is worth noting that when \(A\) is trace class, we have \[\det_{2}(I-A)=\det(I-A)e^{tr(A)}.\] Moreover, for two Hilbert-Schmidt operators \(A\) and \(B\), we have \[\det_{2}(I-A)\mathrm{det}_{2}(I-B)=\det_{2}[(I-A)(I-B)]e^{tr(AB)}.
\tag{24}\] Let \(\mathcal{W}_{1}(\mathcal{H})\), the Wiener algebra over the trace class operators on \(\mathcal{H}\), be the set of all operator-valued functions \(G\) on \(\mathbb{T}\) of the form \[G(z)=\sum_{n=-\infty}^{\infty}z^{n}G_{n},\quad z\in\mathbb{T}, \tag{25}\] where \(G_{n}\) is a trace class operator on \(\mathcal{H}\) for each \(n\) and \[\sum_{n=-\infty}^{\infty}||G_{n}||_{1}<\infty. \tag{26}\] In the following theorem, we extend the Szego limit theorem of the theory of orthogonal polynomials to the operator setting. To proceed, we will assume \(\mu_{0}=I\). **Theorem 7.1**.: Let \(T_{n}^{R}\) be the operator Toeplitz matrix and \((\alpha_{n})_{n}\) be the Verblunsky coefficients of \(\mu\). Then \[\lim_{n\to\infty}\frac{\det_{2}(T_{n}^{R})}{\det_{2}(T_{n-1}^{R})}=\det\prod_{k=0}^{\infty}(I-\alpha_{k}\alpha_{k}^{\dagger}).\] Proof.: Using (24), it follows that \[\det_{2}(T_{n}^{R}) = \det_{2}(T_{n-1}^{R})\mathrm{det}_{2}(I-\nu_{n}^{*}T_{n}^{-R}\nu_{n})e^{\mathrm{tr}(\nu_{n}^{*}T_{n}^{-R}\nu_{n})} = \det_{2}(T_{n-1}^{R})\mathrm{det}(I-\nu_{n}^{*}T_{n}^{-R}\nu_{n}) = \det_{2}(T_{n-1}^{R})\mathrm{det}(\kappa_{n}^{-R})=\det_{2}(T_{n-1}^{R})\mathrm{det}(\rho_{0}^{R}\ldots\rho_{n-1}^{R})^{2}\] which leads to \[\det_{2}(T_{n}^{R})/\det_{2}(T_{n-1}^{R})=\det\prod_{k=0}^{n-1}(I-\alpha_{k}\alpha_{k}^{\dagger}).\] Since \(T_{n}^{R}\) is positive definite, the coefficients \(\alpha_{k}\) are Hilbert-Schmidt and strict contractions. This implies that the \(\alpha_{k}\alpha_{k}^{\dagger}\) are trace class operators and thus the \(\det(I-\alpha_{k}\alpha_{k}^{\dagger})\) are well-defined. Letting \(n\to\infty\), the statement follows. We follow Gohberg and Kaashoek [30] in making the assumption in the theorem below on the measure \(\mu^{(n)}\) defined above, needed to be able to use Wiener-algebra methods. **Theorem 7.2**.: Let \(\mu^{(n)}\) be the measure defined in (14) and assume that \[\mu^{(n)}(z)=\sum_{j=-\infty}^{\infty}z^{j}\mu_{n,j}\in I+\mathcal{W}_{1}(\mathcal{H}).\] Then \[\lim_{n\to\infty}\frac{\det_{2}(T_{n}^{R,L})}{\det_{2}(T_{n-1}^{R,L})}=\exp\frac{1}{2\pi}\int_{-\pi}^{\pi}\log\det(\mu(\theta))\,d\theta. \tag{27}\] Proof.: Write \[L=\begin{pmatrix}0&\cdots&0&I\\ 0&\cdots&I&0\\ \vdots&&&\vdots\\ I&0&\cdots&0\end{pmatrix},\qquad\nu_{n}=\begin{pmatrix}\mu_{-1}\\ \mu_{-2}\\ \vdots\\ \mu_{-n}\end{pmatrix}.\] Then, as \(T_{n}^{R}=LT_{n}^{L}L\), \[\Phi_{n}^{L,*}(z) = z^{n}\left(z^{-n}I-[I\ z^{-1}I\ \cdots\ z^{-n+1}I]\ T_{n}^{-L}\nu_{n}\right) = I-[I\ zI\ \cdots\ z^{n}I]LT_{n}^{-L}L\phi = I-[I\ zI\ \cdots\ z^{n}I]T_{n}^{-R}\phi = I+F_{n}(z),\] where \(F_{n}(z)=-[I\ zI\ \cdots\ z^{n}I]T_{n}^{-R}\phi\), and let \(F(z)=\sum_{n=1}^{\infty}z^{n}F_{n}\). Here we can easily prove that the operator polynomial \(F\) is trace class (since its coefficients are sums of products of two positive definite Hilbert-Schmidt operators). So \(\det\Phi_{n}^{L,*}(z)\) and \(\det(\Phi_{n}^{L,*})^{\dagger}(z)\) are well-defined. Also, using condition (26), we can conclude that the series on the right-hand side of (25) converges in the trace class norm. Consequently, if we define \(\mu(\cdot)=I-\tilde{\mu}(\cdot)\) with \(\tilde{\mu}\) belonging to the Wiener class \(\mathcal{W}_{1}(\mathcal{H})\), then \(\tilde{\mu}(z)\) is a trace class operator for all \(z\in\mathbb{T}\). This means that \(\det(\mu(z))\) is well-defined for each \(z\in\mathbb{T}\), and in particular, the expression \(\det(\mu(z))\) on the right-hand side of (27) is well-defined.
Since \(\mu^{(n)}=[\Phi_{n}^{L,*}\kappa_{n}^{-R}(\Phi_{n}^{L,*})^{\dagger}]^{-1}\), passing to the limit gives \[\kappa_{\infty}^{R}=(I+F(z))^{\dagger}\mu(z)(I+F(z)).\] It follows that \[\log\det\kappa_{\infty}^{R}=\Delta+\frac{1}{2\pi}\int_{-\pi}^{\pi}\log\det(\mu(\theta))d\theta+\bar{\Delta},\] where \[\Delta=\frac{1}{2\pi}\int_{-\pi}^{\pi}\log\det(I+F(e^{it}))dt,\qquad\bar{\Delta}=\frac{1}{2\pi}\int_{-\pi}^{\pi}\log\det(I+F(e^{it}))^{\dagger}dt.\] Now \(\det(I+F(\cdot))\) is analytic on \(|z|<1\), and \(\det\Phi_{n}^{L,*}\) is continuous and non-zero on \(|z|\leq 1\). Combining, \(\log\det(I+F(\cdot))\) is analytic on \(|z|<1\) and continuous on \(|z|\leq 1\). Now Cauchy's theorem gives \[\Delta=\frac{1}{2\pi}\int_{-\pi}^{\pi}\log\det(I+F(e^{it}))dt=0.\] Similarly, \(\bar{\Delta}=0\). Thus, we have the result. ## 8 Complements 1. _Gaussian Regression Formula (GRF)_ Much of what follows below is well illustrated by the _Gaussian Regression Formula (GRF)_. If \(X\sim N(\mu,\Sigma)\) is a Gaussian vector with mean vector \(\mu\) and covariance matrix \(\Sigma=(\sigma_{ij})\), write the inverse covariance matrix \(\Sigma^{-1}\) as \(K=(k_{ij})\), the _concentration matrix_. If \(X,\;\mu,\;\Sigma,\;K\) are partitioned conformably, \[X=\begin{pmatrix}X_{1}\\ X_{2}\end{pmatrix},\quad\mu=\begin{pmatrix}\mu_{1}\\ \mu_{2}\end{pmatrix},\quad\Sigma=\begin{pmatrix}\Sigma_{11}&\Sigma_{12}\\ \Sigma_{21}&\Sigma_{22}\end{pmatrix},\quad K=\begin{pmatrix}K_{11}&K_{12}\\ K_{21}&K_{22}\end{pmatrix},\] the conditional distribution of \(X_{2}\) given \(X_{1}=x_{1}\) is \[X_{2}\ |(X_{1}=x_{1})\sim N(\mu_{2}+\Sigma_{21}\Sigma_{11}^{-1}(x_{1}-\mu_{1}),\ \Sigma_{22}-\Sigma_{21}\Sigma_{11}^{-1}\Sigma_{12}),\qquad(GRF)\] when \(\Sigma_{11}^{-1}\) exists; see e.g. ([11], Th. 4.25). For the general case, using the (Moore-Penrose) generalized inverse when \(\Sigma\) is not invertible, see e.g. ([56], 8a.2.11). Somewhat more simply, \[X_{2}\ |(X_{1}=x_{1})\sim N(\mu_{2}-K_{22}^{-1}K_{21}(x_{1}-\mu_{1}),K_{22}^{-1}).\] Thus the regression of \(X_{2}\) on \(X_{1}\) is _linear_: \[\mathbb{E}[X_{2}\ |(X_{1}=x_{1})]=\mu_{2}+\Sigma_{21}\Sigma_{11}^{-1}(x_{1}-\mu_{1})=\mu_{2}-K_{22}^{-1}K_{21}(x_{1}-\mu_{1}).\] The GRF has a long and remarkable history. It originates in _Pearson's selection formula_ of 1911-12 (selection being the term then used for what we now call regression); see e.g. ([12], §6.2). The GRF has been extended to the infinite-dimensional case. See Hairer et al. ([33], Lemma 4.4). 2. _Schur complements_ We have met the Schur complement already, under the notation \(SC(.)\) (Issai Schur (1875-1941), in 1905; for background, see e.g. [69], [12]). If \[M=\begin{pmatrix}P&Q\\ R&S\end{pmatrix},\] the _Schur complement_ of \(P\) in \(M\) is \[M/P:=S-RP^{-1}Q.\] The conditional variance matrix in \((GRF)\) above is thus the Schur complement of \(\Sigma_{11}\) in \(\Sigma\), \(\Sigma/\Sigma_{11}\), also called the _partial covariance matrix_ of \(X_{2}\) given \(X_{1}\). For more on Schur complements in linear algebra, in the work of Schur, Aitken, Haynsworth and others, see ([12], §5.1). For closely related inversion formulae (of Banachiewicz, Duncan and Bartlett, and the Sherman-Morrison-Woodbury formula), see ([12], §5.2).
These stem from the (surprisingly complicated) formula for the inverse of a partitioned matrix: in the notation above, \[M^{-1}=\begin{pmatrix}P^{-1}+P^{-1}Q(M/P)^{-1}RP^{-1}&-P^{-1}Q(M/P)^{-1}\\ -(M/P)^{-1}RP^{-1}&(M/P)^{-1}\end{pmatrix},\] when the relevant inverses exist; see e.g. ([26], (0.8) p.4), [55], [66]. Schur complements were extended to the operator case by Dritschel and Rovnyak [22]. See also Hairer et al. ([33], Lemma 4.2). 3. _Regression; independence and conditional independence_ One sees (from the bivariate normal density and when it factorises) the classical result that for Gaussian vectors, two components \(X_{i},\ X_{j}\) are independent if and only if their correlation coefficient \(\sigma_{ij}=0\). We note in passing the much more recent complement to this, _Dempster's theorem_ of 1972 [19]: two components \(X_{i},X_{j}\) of \(X\) are conditionally independent given all the rest iff \(k_{ij}=0\). For, taking \(x_{1}\) for the random 2-vector \((X_{i},X_{j})\) and \(x_{2}\) for the remainder that we condition on, the GRF shows that the conditional density \(f_{1|2}\) of \(x_{1}|x_{2}\) has (conditional) covariance matrix \(K_{11}^{-1}\). So (repeating the argument just used) one has conditional independence iff \(K_{11}^{-1}\) is diagonal, i.e. (as \(K_{11}\) is \(2\times 2\)) \(K_{11}\) is diagonal, i.e. \(k_{ij}=0\) ([12], §6.3). For matrices \(A,\ B\), linear forms \(AX,\ BX\) are independent if and only if \(A\Sigma B^{T}=0\); see e.g. ([11], Th. 4.16). From this, one easily checks ([11], Th. 4.25, Proof) that \(X_{1}\) and \(X_{2}-\Sigma_{21}\Sigma_{11}^{-1}X_{1}\) are independent, or: \[X_{1}\ \text{and}\ X_{2}-\mathbb{E}[X_{2}\ |\ X_{1}]\ \text{are independent.}\] This result extends to the infinite-dimensional case: for a version for Gaussian measures on locally convex topological vector spaces, see Bogachev ([14], 3.10.1). 4. _Shorted operators_ The Schur complement occurs in a related context in the theory of _shorted operators_; see [4], [5], [16]. The term is suggested by the motivating physical context there, parallel connection of resistances in an electrical network. 5. _Fejer-Riesz theorem_ Recall the classical _Fejer-Riesz theorem_: a trigonometric polynomial \(Q(z)=\sum_{-n}^{n}Q_{i}z^{i}\) which is non-negative on the unit circle \(\mathbb{T}\) can be factored as \(Q(z)=|P(z)|^{2}\) on \(\mathbb{T}\) for an analytic trigonometric polynomial \(P(z)=\sum_{0}^{n}P_{i}z^{i}\) (see e.g. ([32], 1.12 p.20), ([62], 1.3.1 p.26)). This was extended to the operator-valued case by Dritschel and Rovnyak [22]: with \(P^{*}\) the adjoint of \(P\), \[Q(z)=P(z)^{*}P(z)\qquad(z\in\mathbb{T}).\] Their proof used operator-valued Schur complements ([22], §3). 6. _Riesz-Herglotz formula_ Analytic functions \(f:\mathbb{D}\to\mathbb{C}\) with \(Re\ f(z)\geq 0\ (z\in\mathbb{D})\) normalised by \(f(0)=1\) are called _Caratheodory functions_. The Riesz-Herglotz formula is the representation of Caratheodory functions as \[f(z)=\int_{\mathbb{T}}\,\left(\frac{e^{i\theta}+z}{e^{i\theta}-z}\right)\,d\mu(\theta),\] with \(\mu\) a probability measure on \(\mathbb{T}\) ([62], 1.1 p.3). Without the normalisation, one adds an \(iC\) term on the right with \(C\) real, and \(\mu\) becomes a finite positive measure; the closely related _Schur functions_ (analytic self-maps of the unit disc) correspond to Caratheodory functions via a Mobius transform. Operator versions of the Riesz-Herglotz formula are given by Dritschel and Rovnyak ([22]; § 3, Method of Schur complements). 7. _Operator form of Szego's theorem_ Rosenblum and Rovnyak ([58], §6.14) give an operator-valued version of Szego's theorem.
With \(F\) a non-negative operator-valued function satisfying a logarithmic integrability condition of Szego type, \(F\) factorises as \[F=G^{*}G,\] with \(G\) an operator-valued (Hardy-space) outer function on \(\mathbb{T}\). See also Nikolskii ([50], §4.8.8). Recall the scalar form of Szego's theorem, which involves the Szego function \(h(z)\), an outer function, the 'analytic square root' of the spectral density \(w\) (\(d\mu=wdm+d\mu_{s}\), with \(\log w\in L^{1}\), to use the usual notation here, as in [62], [6]). The scalar form of Szego's theorem is generalised by Dritschel and Rovnyak to the matrix case ([22], Th. 4.7) and the operator case ([22], Th. 4.5). For operator-valued spectral measures \(\mu\), see Gamboa, Nagel and Rouault [27]. There are many much more general related positivity results. See e.g. Helton and Putinar [34] for a survey of these, and applications to, e.g., optimization and control problems. 8. _Operator form of Nehari's theorem_ A _Hankel matrix_ is one whose elements are constant on the backward diagonals, that is, one of the form \((\alpha_{j+k})_{j,k=0}^{\infty}\), for some sequence \(\alpha=(\alpha_{j})_{0}^{\infty}\) of complex numbers. A _Hankel operator_ \(\Gamma:\ell^{2}\to\ell^{2}\) is obtained by extending the map \[a=(a_{j})_{0}^{\infty}\mapsto b=(b_{j})_{0}^{\infty},\qquad b_{k}:=\sum_{0}^{\infty}\alpha_{j+k}a_{j}\quad(k\geq 0)\] from the (densely defined) sequences \(a\) of finite support to \(\ell^{2}\). _Nehari's theorem_ characterises the bounded (i.e. continuous, for linear operators) Hankel operators on \(\ell^{2}\) as those for which the \(\alpha_{m}\) are the non-negative-index Fourier coefficients \(\hat{\psi}(m)\), \(m\geq 0\), of a bounded function \(\psi\in L^{\infty}(\mathbb{T})\). Then \(\|\Gamma\|\) is the infimum of \(\|\psi\|_{\infty}\) over all such \(\psi\). See Nikolskii ([50], §1.3, 1.4) for two proofs of Nehari's theorem, one by Riesz-Smirnov factorization, one by the stepwise extension method of Adamyan, Arov and Krein ('AAK', for brevity), below; see Peller [53] for a monograph treatment of Hankel operators. The operator case is due to Page [52] and AAK [2], 1970/71; for a more recent treatment see Geronimo and Woerdeman [29]. For links between Nehari theory and prediction theory (below), see Yukio Kasahara and the first author [35]. For the matrix case, see e.g. [36], [37]. 9. _The Nehari problem and rigidity_ The term Nehari problem is used in two different but related ways. For the first, we turn to a reformulation of Nehari's theorem in terms of \(H^{2}(\mathbb{T})\) (\(H^{2}\) for short), the Hardy class of functions of order 2 on the unit circle \(\mathbb{T}\). For \(\phi\in L^{2}(\mathbb{T})\) (\(L^{2}\) below), define the _Hankel operator_ \(H_{\phi}\) from \(H^{2}\) to \(H^{2}_{-}\), its orthogonal complement in \(L^{2}\), by \[H_{\phi}f:=P_{-}(\phi f)\qquad(f\in H^{2}),\] with \(P_{-}\) the projection from \(L^{2}\) to \(H^{2}_{-}\). Then \(\phi\) is called a _symbol_ of \(H_{\phi}\) (_a_ symbol, as symbols are far from unique). Then Nehari's theorem may be reformulated as the equivalence, for \(\phi\in L^{2}\), of: (i) the operator \(H_{\phi}\) is bounded on \(H^{2}\); (ii) \(\phi\) has the same negative Fourier coefficients \(\hat{\phi}_{m}\) (\(m<0\)) as those, \(\hat{\psi}_{m}\), of some function \(\psi\in L^{\infty}\); (iii) \(P_{-}\)\(\phi\in BMO\), the space of functions of bounded mean oscillation, and then \[\|H_{\phi}\|=\inf\ \{\|\psi\|_{L^{\infty}}:\hat{\psi}(m)=\hat{\phi}(m),\ m<0\}\] ([53], Th. 1.3).
So, \(H_{\phi}\) is bounded iff it has a bounded symbol, and as \[\|H_{\phi}\|=\inf\{\|\phi-f\|_{L^{\infty}}:f\in H^{\infty}\},\] the norm of \(H_{\phi}\) is the distance from \(\phi\) to \(H^{\infty}\). Hence the term _Nehari problem_ for that of approximating to a bounded function (on \(\mathbb{T}\)) by bounded analytic functions. See Peller [53] (details below), Garnett ([28], IV.4,5) (without reference to Nehari, incidentally). For the second, one needs the concept of _rigidity_ (the term is due to Sarason [60]; cf. Poltaratski and Sarason [54]). A non-zero function \(g\in H^{1}\) is _rigid_ (or _strongly outer_) if it is _determined by its argument_, i.e. by \(g/|g|\), to within a scale factor \(c>0\). For the Szego function \(h\) above, \(h^{2}\) is an outer function in \(H^{1}\), and \(h^{2}/|h^{2}|=h/\bar{h}\) is called the _phase factor_ of \(h\). The measure \(\mu\) is determined by its Fourier coefficients, but with rigidity the _negative half_ of these suffices. The second sense for 'the Nehari problem' is: given a sequence \(\gamma=(\gamma_{n})_{1}^{\infty}\) of complex numbers, find \(\phi\) in the unit ball of \(L^{\infty}\) with \[\gamma_{n}=\int_{\mathbb{T}}e^{in\theta}\phi\,dm\qquad(n=1,2,\cdots).\qquad(Neh)\] The case of non-uniqueness here - the _indeterminate case_ - is the probabilistically interesting one. In this case, when there is more than one (and so infinitely many) solution \(\phi\), \(\gamma\) is called a _Nehari sequence_ (below). 10. _The Adamjan-Arov-Krein (AAK) parametrization_ In the indeterminate case of the Nehari problem, the solutions were parametrised by Adamjan, Arov and Krein [1]. They are of bilinear (fractional linear, Mobius) type, parametrised by balls (the exceptional case of uniqueness occurs when the ball has radius \(0\)). For details see Peller [53]: §5.1 (Th. 1.13 p.166, scalar case), and for the matrix- and operator-valued cases, [2], and [53], §5.3 (Th. 3.5, p.177), §5.4 (Th. 4.16 p.212), §5.5 (Th. 5.8 p.225), §14.19 (Th. 19.1 p.628). 11. _Nehari sequences and rigidity_ In the indeterminate case, the AAK parametrisation describes the solution set as the outer functions \(h\in H^{2}\) such that (i) \(h\) has unit norm, (ii) \(h^{2}\) is rigid, (iii) \(h/\bar{h}\) solves the Nehari problem as in \((Neh)\), and one can take \(h(0)>0\). We now state the condition \[\mu_{s}=0,\qquad\log w\in L^{1},\qquad h^{2}\ \mbox{rigid}.\qquad(LM)\] Here \((LM)\) refers to work by Levinson and McKean [44] in continuous time, for which this is the discrete-time analogue; we call \((LM)\) the case of 'LM weights'. Under \((LM)\), \(\mu\) (and so \(h\)) is determined by its phase factor \(h/\bar{h}\). Restrict (as usual) to \(\mu\) non-trivial (having infinitely many support points). Then with \(P_{n}\) the space of polynomials of degree \(<n\) and \(\Phi_{n}\) the monic orthogonal polynomials on \(\mathbb{T}\) determined by \(\mu\) (see e.g. [61]), \((LM)\) holds if and only if \[z^{n}H_{-}^{2}(\mu)\cap H^{2}(\mu)=P_{n},\] equivalently, \[\Phi_{n}\ \perp z^{n}H_{-}^{2}(\mu)\cap H^{2}(\mu),\] for some (_equivalently, for all_) \(n=0,1,\cdots\) ([35], Th. 3.1). For results in the matrix case, see [36], [37]. The operator case remains open. 12. _The moment problem_ The indeterminate case of the Nehari problem suggests a comparison with the indeterminate case of the _moment problem_; both are strongly related to the theory of orthogonal polynomials. For the extensive background here, see e.g. Khrushchev [9]. 13.
_Prediction theory_ Our motivation here is probabilistic, prediction theory, in one, many or infinitely many dimensions [6], [7], [8], [13]. The theory is at its most complete in the stationary case. For the non-stationary case, see e.g. [3], [23].
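As a numerical complement, the scalar specialisation of the Szego recursion and of the Kolmogorov-Szego product discussed above can be checked in a few lines. This is only a hedged, illustrative sketch (scalar case, numpy assumed), not part of the operator theory developed in the text:

```python
import numpy as np

def szego_recursion(alphas):
    """Orthonormal OPUC phi_n from scalar Verblunsky coefficients.

    Polynomials are stored as coefficient arrays, lowest degree first:
    phi_{n+1}(z) = (z*phi_n(z) - conj(alpha_n)*phi_n^*(z)) / rho_n.
    """
    phi = np.array([1.0 + 0j])                        # phi_0 = 1
    for a in alphas:
        rho = np.sqrt(1.0 - abs(a) ** 2)
        phi_star = np.conj(phi)[::-1]                 # reversed polynomial phi_n^*
        phi = (np.concatenate(([0], phi))             # z * phi_n
               - np.conj(a) * np.concatenate((phi_star, [0]))) / rho
    return phi

alphas = [0.5, -0.3, 0.2j]
phi = szego_recursion(alphas)
# Scalar Kolmogorov-Szego check: the leading coefficient kappa_n of phi_n
# satisfies kappa_n**(-2) = prod_k (1 - |alpha_k|^2).
kappa = phi[-1].real
print(np.isclose(kappa ** (-2), np.prod([1 - abs(a) ** 2 for a in alphas])))
```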
2306.00024
Self-Verification Improves Few-Shot Clinical Information Extraction
Extracting patient information from unstructured text is a critical task in health decision-support and clinical research. Large language models (LLMs) have shown the potential to accelerate clinical curation via few-shot in-context learning, in contrast to supervised learning which requires much more costly human annotations. However, despite drastic advances in modern LLMs such as GPT-4, they still struggle with issues regarding accuracy and interpretability, especially in mission-critical domains such as health. Here, we explore a general mitigation framework using self-verification, which leverages the LLM to provide provenance for its own extraction and check its own outputs. This is made possible by the asymmetry between verification and generation, where the former is often much easier than the latter. Experimental results show that our method consistently improves accuracy for various LLMs in standard clinical information extraction tasks. Additionally, self-verification yields interpretations in the form of a short text span corresponding to each output, which makes it very efficient for human experts to audit the results, paving the way towards trustworthy extraction of clinical information in resource-constrained scenarios. To facilitate future research in this direction, we release our code and prompts.
Zelalem Gero, Chandan Singh, Hao Cheng, Tristan Naumann, Michel Galley, Jianfeng Gao, Hoifung Poon
2023-05-30T22:05:11Z
http://arxiv.org/abs/2306.00024v1
# Self-Verification Improves Few-Shot Clinical Information Extraction ###### Abstract Extracting patient information from unstructured text is a critical task in health decision-support and clinical research. Large language models (LLMs) have shown the potential to accelerate clinical curation via few-shot in-context learning, in contrast to supervised learning, which requires costly human annotations. However, despite drastic advances, modern LLMs such as GPT-4 still struggle with issues regarding accuracy and interpretability, especially in safety-critical domains such as health. We explore a general mitigation framework using self-verification, which leverages the LLM to provide provenance for its own extraction and check its own outputs. This framework is made possible by the asymmetry between verification and generation, where the former is often much easier than the latter. Experimental results show that our method consistently improves accuracy for various LLMs across standard clinical information extraction tasks. Additionally, self-verification yields interpretations in the form of a short text span corresponding to each output, which makes it efficient for human experts to audit the results, paving the way towards trustworthy extraction of clinical information in resource-constrained scenarios. To facilitate future research in this direction, we release our code and prompts.1 Footnote 1: All code is made available at github.com/microsoft/clinical-self-verification. ## 1 Introduction and related work Clinical information extraction plays a pivotal role in the analysis of medical records and enables healthcare practitioners to efficiently access and utilize patient data (Zweigenbaum et al., 2007; Wang et al., 2018). Few-shot learning approaches have emerged as a promising solution to tackle the scarcity of labeled training data in clinical information extraction tasks (Agrawal et al., 2022; Laursen et al., 2023). However, these methods continue to struggle with accuracy and interpretability, both critical concerns in the medical domain (Gutierrez et al., 2022). Here, we address these issues by using self-verification (SV) to improve few-shot clinical information extraction. SV builds on recent works that chain together large language model (LLM) calls to improve an LLM's performance (Wu et al., 2022; Wang et al., 2022; Chase, 2023). Intuitively, these chains succeed because an LLM may be able to perform individual steps in a task, _e.g_. evidence verification, more accurately than the LLM can perform an entire task, _e.g_. information extraction (Ma et al., 2023; Madaan et al., 2023; Zhang et al., 2023). Such chains have been successful in settings such as multi-hop question answering (Press et al., 2022), retrieval-augmented/tool-augmented question answering (Peng et al., 2023; Paranjape et al., 2023; Schick et al., 2023; Gao et al., 2023), and code execution (Jojic et al., 2023). Here, we analyze whether building such a chain can improve clinical information extraction. Fig. 1 shows the SV pipeline we build here. We broadly define self-verification as using multiple calls to the _same_ LLM to verify its output, and also to ground each element of its output in evidence. Our SV pipeline consists of four steps, each of which calls the same LLM with different prompts. First, the _Original extraction_ step queries the LLM directly for the desired information.
Next, the _Omission_ step finds missing elements in the output, the _Evidence_ step grounds each element in the output to a text span in the input, and the _Prune_ step removes inaccurate elements in the output. Taken together, we demonstrate that these steps improve the reliability of extracted information. Additionally, SV provides interpretable grounding for each output, in the form of a short text span in the input. Interpretability has taken many forms in NLP, including posthoc feature importance (Lundberg and Lee, 2017; Ribeiro et al., 2016), intrinsically interpretable models (Rudin, 2019; Singh et al., 2022), and visualizing model intermediates, _e.g_. attention (Wiegreffe and Pinter, 2019). The interpretable grounding we generate comes directly from an LLM, similar to recent works that use LLMs to generate explanations (Rajani et al., 2019; MacNeil et al., 2022; Singh et al., 2023) and ground those explanations in evidence (Rashkin et al., 2021; Gao et al., 2022). Experiments on various clinical information extraction tasks and various LLMs, including GPT-4 (OpenAI, 2023) and ChatGPT (Ouyang et al., 2022), show the efficacy of SV. In addition to improving accuracy, we find that the extracted interpretations match human judgements of relevant information, enabling auditing by a human and helping to build a path towards trustworthy extraction of clinical information in resource-constrained scenarios. ## 2 Methods and experimental setup ### Methods: Self-verification Fig. 1 shows the four different steps of the introduced SV pipeline. The pipeline takes in a raw text input, _e.g_. a clinical note, and outputs information in a pre-specified format, _e.g_. a bulleted list. It consists of four steps, each of which calls the same LLM with different prompts in order to refine and ground the original output. The original extraction step uses a task-specific prompt which instructs the model to output a variable-length bulleted list. In the toy example in Fig. 1, the goal is to identify the two diagnoses _Hypertension_ and _Right adrenal mass_, but the original extraction step finds only _Hypertension_. After the original LLM extraction, the Omission step finds missing elements in the output; in the Fig. 1 example it finds _Right adrenal mass_ and _Liver fibrosis_. For tasks with long inputs (mean input length greater than 2,000 characters), we repeat the omission step to find more potential missed elements (we repeat five times, and then continue repeating until the omission step stops finding new omissions). Next, the Evidence step grounds each element in the output to a text span in the input. The grounding in this step provides interpretations that can be inspected by a human. In the Fig. 1 example, we find quotes supporting the first two diagnoses, but the quote for _liver fibrosis_ shows that it was in fact _ruled out_, and is therefore an incorrect diagnosis. Finally, the Prune step uses the supplied evidence to remove inaccurate elements from the output. In Fig. 1 this results in removing _liver fibrosis_ to return the correct final list. Taken together, these steps help to extract accurate and interpretable information. We provide the exact prompts used in all steps in the Github repo; a schematic sketch of the four-step chain appears at the end of this paper. For the tasks with short inputs, we include 5 random data demonstrations in the original extraction prompt; otherwise all prompts are fixed across examples. ### Experimental setup Datasets. Table 1 gives the details of each task we study here. Each task requires extracting a variable-length list of elements.
In clinical trial arm extraction, these are names of different clinical trial arms, manually annotated from the EBM-NLP dataset (Nye et al., 2018). In the medication status extraction task, in addition to medication names the medication status must additionally be classified as _active_, _discontinued_, or _neither_. The text inputs for arm extraction / medication status extraction are relatively small (average length is 1,620 characters and 382 characters, respectively). In the case of MIMIC-III and MIMIC-IV (Johnson et al., 2016, 2021), we predict ICD-9 or ICD-10 codes (corresponding to diagnoses and procedures). We predict ICD codes using relevant sections from all types of clinical notes for MIMIC-III (average length: 5,200 words) but only discharge summaries for MIMIC-IV (average length: 1,400 words). The ICD codes are not directly present in the text input, and therefore the task requires translating the diagnoses to their relevant code. Figure 1: Overview of self-verification pipeline for clinical information extraction. Each step calls the same LLM with different prompts to refine the information from the previous steps. Below each step we show abbreviated outputs for extracting a list of assigned diagnoses from a sample clinical note. MIMIC data is preprocessed using a standard pipeline (see Appendix A.1) and we evaluate on a random subset of 250 inputs for each task. Models. We evaluate three different models: GPT-3.5 (Brown et al., 2020) text-davinci-003, ChatGPT (Ouyang et al., 2022) gpt-3.5-turbo, and GPT-4 (OpenAI, 2023) gpt-4-0314 (in chat mode), all accessed securely through the Azure OpenAI API. We set the sampling temperature for LLM decoding to 0.1. Evaluation. Extraction is evaluated via case-insensitive exact string matching, and we report the resulting macro F1 scores, recall, and precision (a minimal sketch of this scoring appears below). In some cases, this evaluation may underestimate actual performance as a result of the presence of acronyms or different names within the output; nevertheless, the relative performance of different models/methods should still be preserved. Following common practice, we restrict ICD code evaluation to the top 50 codes appearing in the dataset. ## 3 Results ### Self-verification improves prediction performance Table 2 shows the results for clinical extraction performance with and without self-verification. Across different models and tasks, SV consistently provides a performance improvement. The performance improvement is occasionally quite large (_e.g_. GPT-4 shows more than a 0.1 improvement in F1 for clinical trial arm extraction and more than a 0.3 improvement for medication status extraction), and the average F1 improvement across models and tasks is 0.056. We also compare to a baseline where we concatenate the prompts across different steps into a single large prompt which is then used to make a single LLM call for information extraction. We find that this large-prompt baseline performs slightly worse than the baseline reported in Table 2, which uses a straightforward prompt for extraction (see comparison details in Table A5). For tasks with short inputs, we find that GPT-3.5 performs best, even outperforming GPT-4, as has been seen in some recent works (_e.g_. Patil et al., 2023). For the MIMIC tasks with larger inputs, GPT-4 performs best. In fact, GPT-3.5 performs very poorly on ICD-code extraction, perhaps because the task requires not only extracting diagnoses from the input text but also knowing the mapping between diagnoses and ICD codes.
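The scoring described under Evaluation above can be made concrete with a short sketch. The following is a minimal, hedged illustration of case-insensitive exact-match precision/recall/F1 for a single example's predicted and gold lists; the macro-averaging over examples and the tie-breaking conventions are our assumptions, not taken from the paper:

```python
def prf1(predicted: list, gold: list) -> tuple:
    """Case-insensitive exact-match precision, recall, and F1 for one example."""
    pred = {p.strip().lower() for p in predicted}
    true = {g.strip().lower() for g in gold}
    tp = len(pred & true)                      # exact string matches
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(true) if true else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Macro scores would average these per-example values over the dataset.
```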
Table 3 contains ablations showing how the different self-verification modules affect the results. The _Omission_ step finds missing elements, which increases recall but at the cost of decreased precision. In contrast, the _Prune_ step (that incorporates the span from the _Evidence_ step) removes extraneous elements, thereby increasing precision. Together (_Full SV_), the steps achieve a balance which improves F1. For tasks with longer inputs (_e.g_. MIMIC-IV ICD-10), the _Omission_ step seems to provide more of the improvement in F1, likely because it is able to find evidence that was missed by a single extraction step. ### Self-verification yields interpretations Fig. 2 shows an example output from the self-verification pipeline for medication status (the underlying model is GPT-4). In the example, the pipeline correctly identifies each medication and its corresponding status. In addition, the pipeline supplies the span of text which serves as evidence for each returned medication (shown with highlighting). This highlighting enables efficient auditing by a human for each element. In a human-in-the-loop setting, a human could also see results/highlights for elements which were pruned, to quickly check for any mistakes. Table 4 evaluates the evidence spans provided by SV against human judgements collected in a prior work (Nye et al., 2018). Human reviewers annotated spans in the original text which correspond to interventions, which include clinical trial arms as a subset. Table 4 gives the fraction of generated evidence spans that overlap with a span provided by the human annotators. The fraction is quite large, _e.g_. 93% for GPT-4. At baseline, human annotators identify less than 3.7% of tokens as interventions, so these span overlap accuracies are much higher than expected by random chance. Figure 2: Example output and interpretation for medication status. For each element of the output list, our pipeline outputs the text span which contains evidence for that generated output (shown with highlighting). ## 4 Discussion Self-verification constitutes an important step towards unlocking the potential of LLMs in healthcare settings. As LLMs continue to generally improve in performance, clinical extraction with LLMs + SV seems likely to improve as well. One limitation of SV is that it incurs a high computational cost as multiple LLM calls are chained together; however, these costs may continue to decrease as models become more efficient (Dao et al., 2022). Another limitation is that LLMs and SV continue to be sensitive to prompts, increasing the need for methods to make LLMs more amenable to prompting (Ouyang et al., 2022; Scheurer et al., 2023) and to make finding strong prompts easier (Shin et al., 2020; Xu et al., 2023; Singh et al., 2022b). Finally, SV can be harnessed in a variety of ways to improve clinical NLP beyond what is studied here, _e.g_. for studying clinical decision rules (Kornblith et al., 2022), clinical decision support systems (Liu et al., 2023), or improving model distillation (Wu et al., 2023; Toma et al., 2023).
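To make the four-step chain of Fig. 1 concrete, here is a minimal, hedged sketch in Python. The prompt wordings, the `parse_bullets` helper, and the KEEP/DROP pruning convention are illustrative assumptions rather than the paper's released prompts; `llm` stands for any text-completion callable (e.g. a wrapper around the Azure OpenAI API):

```python
from typing import Callable, List

def parse_bullets(text: str) -> List[str]:
    # Hypothetical helper: split an LLM-returned bulleted list into clean items.
    return [ln.lstrip("-* ").strip() for ln in text.splitlines() if ln.strip()]

def self_verify(note: str, llm: Callable[[str], str]) -> List[str]:
    # 1. Original extraction: query the LLM directly for a bulleted list.
    items = parse_bullets(llm(f"List each diagnosis in this note:\n{note}"))
    # 2. Omission: ask the same LLM what the current list missed.
    items += parse_bullets(
        llm(f"Note:\n{note}\nCurrent list: {items}\nList any missed diagnoses:"))
    # 3. Evidence: ground each element in a text span quoted from the input.
    evidence = {x: llm(f"Quote the span of the note supporting '{x}':\n{note}")
                for x in items}
    # 4. Prune: drop elements whose evidence does not actually support them.
    return [x for x in items
            if "KEEP" in llm(f"Does this quote confirm the diagnosis '{x}'?\n"
                             f"Quote: {evidence[x]}\nAnswer KEEP or DROP.")]
```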
2303.05782
A Study of Pulsation properties of 57 Non-Blazhko effect ab-type RR Lyrae stars with homogeneous metallicities from the LAMOST-Kepler/K2 survey
Homogeneous metallicities and continuous high-precision light curves play key roles in studying the pulsation properties of RR Lyrae stars. By cross-matching with LAMOST DR6, we have determined 7 and 50 Non-Blazhko RRab stars in the Kepler and K2 fields, respectively, which have homogeneous metallicities determined from low-resolution spectra of the LAMOST-Kepler/K2 project. The Fourier Decomposition method is applied to the light curves of these stars provided by the Kepler space-based telescope to determine the fundamental pulsation periods and the pulsation parameters. The calculated amplitude ratios of R21, R31 and the phase differences of {\phi}21, {\phi}31 are consistent with the parameters of the RRab stars in both the Globular Clusters and the Large Magellanic Cloud. We find a linear relationship between the phase differences {\phi}21 and {\phi}31, which is in good agreement with the results in previous literature. As for the amplitudes, we find that the amplitude of the primary frequency A1 and the total amplitude Atot follow either a cubic or linear relationship. For the rise time RT, we do not find correlations with the period of the fundamental pulsation mode P1, or with Atot and {\phi}21. However, it might follow a linear relationship with R31. Based on the homogeneous metallicities, we have derived a new calibration formula for the relationship of period-{\phi}31-[Fe/H], which agrees well with the previous studies.
Peng Zong, Jian-Ning Fu, Jiaxin Wang, Tian-Qi Cang, HaoTian Wang, Xiao-Yu Ma, Weikai Zong
2023-03-10T08:33:55Z
http://arxiv.org/abs/2303.05782v1
# A Study of Pulsation properties of 57 Non-Blazhko effect ab-type RR Lyrae stars with homogeneous metallicities from the LAMOST-\(Kepler/K2\) survey ###### Abstract Homogeneous metallicities and continuous high-precision light curves play key roles in studying the pulsation properties of RR Lyrae stars. By cross-matching with LAMOST DR6, we have determined 7 and 50 Non-Blazhko RRab stars in the Kepler and K2 fields, respectively, which have homogeneous metallicities determined from low-resolution spectra of the LAMOST-\(Kepler/K2\) project. The Fourier Decomposition method is applied to the light curves of these stars provided by the \(Kepler\) space-based telescope to determine the fundamental pulsation periods and the pulsation parameters. The calculated amplitude ratios of \(R_{21},\,R_{31}\) and the phase differences of \(\phi_{21},\,\phi_{31}\) are consistent with the parameters of the RRab stars in both the Globular Clusters and the Large Magellanic Cloud. We find a linear relationship between the phase differences \(\phi_{21}\) and \(\phi_{31}\), which is in good agreement with the results in previous literature. As for the amplitudes, we find that the amplitude of the primary frequency A\({}_{1}\) and the total amplitude A\({}_{tot}\) follow either a cubic or linear relationship. For the rise time \(RT\), we do not find correlations with the period of the fundamental pulsation mode P\({}_{1}\), or with A\({}_{tot}\) and \(\phi_{21}\). However, it might follow a linear relationship with R\({}_{31}\). Based on the homogeneous metallicities, we have derived a new calibration formula for the relationship of period-\(\phi_{31}\)-[Fe/H], which agrees well with the previous studies. Variable stars, RR Lyraes ## 1 Introduction RR Lyrae stars (hereafter RRLs) are low-mass pulsating stars located at the intersection between the classical instability strip and the horizontal branch of the Hertzsprung-Russell diagram, with helium burning in the core (Aerts et al., 2010). Their pulsations are induced by the so-called \(\kappa\)-mechanism, operating in the hydrogen and helium partial ionization zones. Typical RRLs have pulsation periods of \(0.2-1\,\mathrm{d}\), amplitudes of \(0.3^{\mathrm{m}}-1^{\mathrm{m}}\), effective temperature \(T_{\mathrm{eff}}\) of 6100-7400 K, and spectral types of A2-F6 (Catelan and Smith, 2015). According to the pulsation modes, they can be divided into three types, i.e., the fundamental radial mode (type RRab), the first overtone (type RRc) or both modes simultaneously (type RRd) (Guggenberger et al., 2012; Moskalik et al., 2015). RRLs are widely used as tracers of stellar populations with ages older than 10 Gyr in the Milky Way (Walker, 1989; Mullen et al., 2021) and neighboring galaxies (Catelan and Smith, 2015; Plachy and Szabo, 2021). They are also commonly used as standard candles, benefiting from the relationship between M\({}_{V}\) and iron abundance (Sandage, 1993; Caputo, 1998; Nemec et al., 2013). However, the intrinsic and systematic errors of this relationship are still under debate (see, e.g., Caputo et al., 2000; Di Criscienzo et al., 2004; Cassisi et al., 2008; Marconi, 2012, 2009; Mullen et al., 2021). The metallicities of RRLs in those studies have been highlighted as a key ingredient for a reliable estimation of distance with the period-luminosity-metallicity relation.
But it is difficult to obtain accurate measurements of metal abundances for this type of star (For et al., 2011; Govea et al., 2014; Nemec et al., 2013; Sneden et al., 2017; Chadid et al., 2017; Magurno et al., 2019; Crestani et al., 2021; Gilligan et al., 2021). Arellano Ferro (2022) reported a homogeneous approach towards the calculation of mean M\({}_{V}\) and [Fe/H] with a sample of 37 globular clusters via the Fourier decomposition of their light curves. Another interesting astrophysical problem of RRLs is the Blazhko effect, the amplitude and phase modulation of the light curves on time-scales of tens to thousands of days (Blazko, 1907; Shapley, 1916). Some 30%-50% of RRLs exhibit Blazhko characteristics, but the physical origin of the effect has remained a mystery since its discovery (Benko et al., 2014). The investigations of RRLs have made great progress with the unprecedented, high-precision photometric data obtained by the \(Kepler\) and \(K2\) missions. Benko et al. (2010) investigated a sample of 29 RRLs using the \(Kepler\) photometry and found that almost half of the sample exhibited the Blazhko effect. The prototype star RR Lyr itself was studied in detail with the Q1-Q2 long cadence (LC) data of \(Kepler\) by Kolenberg et al. (2011), who found a multiplet structure at the main frequency and its harmonics up to the quintuplets. Nemec et al. (2011) carried out Fourier analysis of 19 non-Blazhko RRab stars with \(Kepler\) photometry, among which none of the stars showed the period-doubling effect seen in Blazhko stars. They also found that KIC 7021124 pulsates simultaneously in both the fundamental and second overtone modes. Based on follow-up high-resolution spectroscopic observations with CFHT and Keck-I, Nemec et al. (2013) determined the iron-to-hydrogen ratios, radial velocities, and atmospheric parameters for 41 RRLs in the \(Kepler\) field and thus gave a new relationship of Period-\(\phi_{31}\)-[Fe/H]. Ngeow (2022) adopted a set of homogeneous samples of fundamental mode RRLs in the \(Kepler\) field to investigate the performance of photometric metallicity. Compared with the roughly 50 RRLs in the prime \(Kepler\) field, more than 3000 RRLs have been proposed for observation in the \(K2\) campaigns. Although one might lose the chance of studying the Blazhko effect of RRLs merely with \(K2\) photometry, given the limited lengths of the time-series observations of the target stars, it is possible to carry out population studies (Molnar et al., 2015; Armstrong et al., 2016) and statistical investigations (Kovacs, 2018; Moskalik et al., 2021). Jurcsik and Kovacs (1996) revealed that the shapes of the optical light curves of RRab stars are related to their metal abundances. They derived a linear relation between the metal abundance and the period and low-order Fourier decomposition parameters of the V-band light curves of RRab stars, with the phase difference \(\phi_{31}=\phi_{3}\)-3\(\phi_{1}\). This relation was investigated by Smolec (2005) with the I-band light curves of RRab stars from the Optical Gravitational Lensing Experiment (Udalski et al., 1992). A calibration of the relation was carried out by ? using the data of the Palomar Transient Factory (Law et al., 2009) in R band. Nemec et al. (2013) extended the analysis of this relation using well-sampled light curves of RRab stars in the \(Kepler\) field (Koch et al., 2010). The calibration of the relation was given by Martinez-Vazquez et al.
(2016) using the RRab stars in globular clusters (GCs) and fields. Iorio and Belokurov (2021) obtained a new Period-\(\phi_{31}\)-[Fe/H] relation using G-band light curves provided by Gaia DR2 (Gaia Collaboration et al., 2018; Holl et al., 2018; Clementini et al., 2019). The recent study of Mullen et al. (2021) not only gave new relations adopting the light curves of the RRab stars of the ASAS-SN sample in V band, but also provided new calibrations for the stars observed in the W1 and W2 WISE bands. A similar study for RRc stars was also carried out by Mullen et al. (2022). Jurcsik and Juhasz (2022) studied RRab stars with quasi-identical-shape light curves but period differences as large as 0.05-0.21 d based on the Galactic bulge data of the OGLE-IV survey. They revealed that several of these stars show light curves very similar to that of the typical bulge RR Lyrae star by examining their Fourier parameters. However, to precisely characterize the relation of Period-\(\phi_{31}\)-[Fe/H] and the connections among the pulsation parameters of RRab stars, homogeneous spectra and light curves are required to derive the metal abundances and pulsation parameters, respectively. Fortunately, observations of the LAMOST-\(Kepler/K2\) project (LKS) (see, e.g., De Cat et al., 2015; Zong et al., 2018; Wang et al., 2020; Fu et al., 2020) have provided LAMOST spectra for a large number of \(Kepler/K2\) targets. In this study, we investigate the characteristics of the pulsation parameters of the non-Blazhko RRab stars based on the \(Kepler\) light curves and homogeneous metal abundances provided by the LAMOST-\(Kepler/K2\) project, and a new calibration of Period-\(\phi_{31}\)-[Fe/H] is presented. This paper is organized as follows: the target selection process is described in § 2. The Fourier decomposition analysis of light curves is presented in § 3. The analysis results and discussion are given in § 4 and § 5, respectively. Finally, we present conclusions of this work in § 6. ## 2 Target Selection The catalog of RRLs in the fields of \(K2\) is obtained by combining the catalogs of RRLs of all campaigns downloaded from EVEREST (Luger et al., 2016, 2018). We obtain the target pixel files (hereafter TPFs) of all 3413 candidate stars at the Mikulski Archive for Space Telescopes 1 (MAST; all the _K2_ data used in this paper can be found in MAST: 10.17909/T9K30X) using the catalogs. The light curves are then extracted from the TPFs with the LightKurve package (Vinicius et al., 2018; Barentsen et al., 2021). For each star, a series of apertures is tested on the TPFs in order to optimize the photometry precision. The extracted light curves are then detrended by fitting and subtracting either a second- or third-order polynomial to remove the long-term systematic errors. Finally, the corresponding fluxes are converted to magnitudes and shifted to the \(K\)p mean magnitude levels (a minimal sketch of this extraction procedure is given below). As an example, the images and light curves of the non-Blazhko RRab star EPIC 210830646 observed by \(K2\) are shown in Figures 1 and 2, respectively. Footnote 1: https://archive.stsci.edu/missions-and-data/Repler We identify the non-Blazhko and Blazhko RRLs among those candidates following the strictest and most convincing criterion, namely whether side peaks are present in the frequency spectra (Skarka et al., 2016). We searched for this feature with the software Period04 (Lenz and Breger, 2005), and identified 376 Blazhko and 594 non-Blazhko RRab stars, respectively.
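The extraction just described can be sketched in a few lines of Python. This is a hedged illustration only: it assumes the Lightkurve 2.x API, and the aperture threshold, polynomial order, and mean magnitude below are placeholder choices rather than the exact values adopted in this work:

```python
import numpy as np
import lightkurve as lk

# Download a target pixel file for the example star and build a light curve.
tpf = lk.search_targetpixelfile("EPIC 210830646", mission="K2").download()
aperture = tpf.create_threshold_mask(threshold=3)      # one candidate aperture
lc = tpf.to_lightcurve(aperture_mask=aperture).remove_nans()

# Detrend with a low-order polynomial to remove long-term systematics.
t, flux = lc.time.value, lc.flux.value
trend = np.polyval(np.polyfit(t, flux, 2), t)
flux_det = flux - trend + np.nanmedian(flux)

# Convert fluxes to magnitudes, shifted to an assumed Kp mean magnitude.
kp_mean = 12.0                                         # placeholder value
mag = kp_mean - 2.5 * np.log10(flux_det / np.nanmedian(flux_det))
```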
Then we cross-match those non-Blazhko RRab stars with the catalog of Liu et al. (2020), who derived the metal abundances of RRLs from the low-resolution spectra of LAMOST DR6, obtaining a total of 50 matched non-Blazhko RRab stars. We also cross-match the catalog of Liu et al. (2020) with a list of the \(Kepler\) non-Blazhko RRab stars from a previous study (Nemec et al., 2013), which yields 7 stars. As Liu et al. (2020) did not give the uncertainties of the metal abundances of the stars, we estimate the uncertainties using the method provided by Wang et al. (2020). The metal abundances of the 57 non-Blazhko RRab stars, with their corresponding uncertainties, are listed in the sixth column of Table 1. ## 3 Pulsation Analysis Frequency analysis with Fourier decomposition is useful to characterize the pulsations of RRLs (e.g., Simon and Teays, 1982; Sandage, 1993). For the non-Blazhko RRab stars from No.8 to No.57 in Table 1, frequency analyses of the light curves are carried out with the software Period04. The corresponding uncertainties of the frequencies and periods are determined according to the method proposed by Zong et al. (2021). After the frequencies are extracted from the Fourier amplitude spectra, the light curves are fitted with the following sine-series formula, \[m(t)=m_{0}+\sum_{i=1}^{n}A_{i}\mathrm{sin}(2\pi if_{0}(t-t_{0})+\phi_{i}) \tag{1}\] where \(n\) is the number of fitted orders, \(f_{0}\) the main frequency, \(t\) the observation time (Barycentric Julian Date: BJD-2454833.0) and \(t_{0}\) the time of the first minimum apparent magnitude of the light curves. The mean magnitude \(m_{0}\), amplitude A\({}_{i}\), and phase \(\phi_{i}\) values at a given \(i\)th order can then be determined. As Simon and Lee (1981) suggested, certain combinations of Fourier coefficients are directly related to physical parameters of pulsating stars. These coefficients are typically defined either as the phase differences \(\phi_{ij}\) (linear combinations of the phases) or as the amplitude ratios \(R_{ij}\), as follows, \[\phi_{ij}=j\phi_{i}-i\phi_{j} \tag{2}\] \[R_{ij}=\frac{A_{i}}{A_{j}} \tag{3}\] where i = 2 or 3 and j = 1 for the fundamental modes of RRLs, as suggested by Nemec et al. (2013) and Smolec et al. (2013). Note that \(\phi_{21}\) and \(\phi_{31}\) are corrected for integer multiples of \(\pi\) to meet the \(\phi_{21}<\pi\) and \(\pi<\phi_{31}<2\pi\) conditions. The parameters of the 5 \(Kepler\) RRab stars Nos.1 to 4 and No.7 of Table 1 are taken from Nemec et al. (2011). We also calculate the maximum (A\({}_{max}\)) and minimum (A\({}_{min}\)) light with their corresponding phases (\(\phi_{max}\) and \(\phi_{min}\)) of those stars in the \(Kepler\) and \(K2\) missions, by fitting a second- or third-degree polynomial around each peak and valley of the phase-folded light curves, which can be used to determine the rise time RT = \(\phi_{max}\) - \(\phi_{min}\) and the total amplitude A\({}_{\mathrm{tot}}\) = A\({}_{max}\) - A\({}_{min}\) for each star (a minimal numerical sketch of this decomposition follows Figure 1 below). The parameters of the stars KIC 9658012 and 9717032, Nos.5 and 6 in Table 1, are determined in this work. The uncertainties of the Fourier coefficients for the 57 stars are also estimated. Table 1 lists the properties of these non-Blazhko RRLs in the fields of the LAMOST-\(Kepler/K2\) project. Figure 1: The image of the non-Blazhko RRab star EPIC 210830646. (a) The target pixel file of the star; (b) the green polygon indicates the optimized aperture adopted on the star for photometry.
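The Fourier fit of Eq. (1) and the combinations in Eqs. (2)-(3) can be reproduced with a short least-squares sketch. This is an illustrative implementation under stated assumptions (linear fit at a fixed \(f_{0}\), times taken relative to \(t_{0}\), numpy only), not the Period04 procedure used by the authors:

```python
import numpy as np

def fourier_decompose(t, m, f0, n=6):
    """Least-squares fit of Eq. (1) for times t (relative to t0) and magnitudes m."""
    cols = [np.ones_like(t)]
    for i in range(1, n + 1):
        w = 2 * np.pi * i * f0 * t
        cols += [np.sin(w), np.cos(w)]          # A_i*sin(w + phi_i) expanded
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), m, rcond=None)
    a, b = coef[1::2], coef[2::2]
    return coef[0], np.hypot(a, b), np.arctan2(b, a)   # m0, A_i, phi_i

# Low-order combinations of Eqs. (2)-(3), shifted by multiples of pi
# into the quoted ranges:
# R21, R31 = A[1] / A[0], A[2] / A[0]
# phi21 = (phi[1] - 2 * phi[0]) % np.pi              # phi21 < pi
# phi31 = (phi[2] - 3 * phi[0]) % np.pi + np.pi      # pi < phi31 < 2*pi
```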
Figure 2: (a) The light curve of EPIC 210830646 extracted by LightKurve (Vinicius et al., 2018; Barentsen et al., 2021), (b) the phase-folded light curve in the fundamental period. ## 4 Analysis Results ### The properties of Fourier decomposition coefficients The Fourier coefficient A\({}_{1}\) is approximately proportional to the total amplitude A\({}_{tot}\), since it is the dominant component of A\({}_{tot}\). In order to illustrate the relationship between A\({}_{1}\) and A\({}_{tot}\), we fit the two coefficients with both a cubic and a linear equation, following Nemec et al. (2011), who found that the dependence between the two coefficients might be cubic using the 19 non-Blazhko RRab stars observed by \(Kepler\), and Skarka (2014), who analysed 176 non-Blazhko RRab stars from the ASAS and WASP surveys and found that the relation might be linear, respectively. The cubic fitting is as follows, \[\begin{split} A_{1}=&-0.28(1)\times A_{tot}^{3}-0.56(5)\times A_{tot}^{2}\\ &+0.66(2)\times A_{tot}-0.004(1)\end{split} \tag{4}\] while the linear fitting is as follows, \[A_{1}=0.3223(1)\times A_{tot}+0.0202(1) \tag{5}\] When the cubic curve (as shown in the top panel of Figure 3) is subtracted from the A\({}_{tot}\)-A\({}_{1}\) diagram, the residuals show no significant deviations from the average value of zero, with an rms of 0.0125 mag (as shown in the bottom panel of Figure 3). For the linear fitting, although the residuals show no significant variations (presented in the bottom panel of Figure 4) after subtracting the linear trend (presented in the top panel of Figure 4) from the A\({}_{tot}\)-A\({}_{1}\) diagram, the rms value is 0.013 mag, slightly larger than that of the cubic fitting. It is worth mentioning that the data points more than 3\(\sigma\) away are not considered in those two fittings, as shown in the top panels of the two figures. Figure 3: The relationship between A\({}_{tot}\) and A\({}_{1}\). The black and red dots are the RRLs in this study observed by \(Kepler\) and K2, respectively. The blue curve is the cubic fitting of the data points, as shown in the top panel, and the bottom panel shows the residuals. Figure 4: The relationship between A\({}_{tot}\) and A\({}_{1}\). The black and red dots are the RRLs in this study observed by \(Kepler\) and K2, respectively. The green line is the linear fitting of the data points, as shown in the top panel, and the bottom panel shows the residuals. Skarka (2014) and Nemec et al. (2011) investigated the relation between the two Fourier coefficients \(\phi_{21}\) and \(\phi_{31}\) of non-Blazhko RRab stars, and they both pointed out that the two coefficients of the stars follow a linear relation. In this work, we perform a linear fitting of the two Fourier coefficients \(\phi_{21}\) and \(\phi_{31}\), and the fitting equation is as follows, \[\phi_{21}=0.459(10)\times\phi_{31}-0.064(28) \tag{6}\] The fitting is shown in the top panel of Figure 5, and the residuals are plotted in the bottom panel of the figure, with an rms of 0.082 rad. The data points beyond 3\(\sigma\) away from the trend are not considered in the fitting. Figure 6 shows how the rise time (RT) correlates with the parameters of the light curves, including the fundamental period, the total amplitude \(A_{tot}\), the amplitude ratio \(R_{31}\) and the phase difference \(\phi_{21}\). Panel (a) shows the relation between RT and the main pulsation period. It can be seen that a longer period probably corresponds
to a higher rise time for most of the stars. This is the first time that this tendency has been noticed for \(Kepler\) and \(K2\) non-Blazhko RRab stars. The data points of the total amplitude A\({}_{tot}\) versus RT are scattered, as shown in panel (b). R\({}_{31}\) and RT might follow a roughly linear trend, shown in panel (c). The distribution of RT versus \(\phi_{21}\) is scattered, as shown in panel (d). ### Period-\(\phi_{31}\)-[Fe/H] A series of studies (Jurcsik and Kovacs, 1996; Kovacs and Jurcsik, 1996; Kovacs and Walker, 2001; Jurcsik et al., 2009; Nemec et al., 2013; Plachy et al., 2016; ?; Iorio and Belokurov, 2021; Mullen et al., 2021) have been conducted to investigate the relation of Period-\(\phi_{31}\)-[Fe/H]. In this study, we use the fundamental periods P and the phase differences \(\phi_{31}\) of the 54 non-Blazhko RRab stars, together with the metal abundances [Fe/H] of these stars provided by the LAMOST-\(Kepler/K2\) project, to give a new calibration of the relation. This relation can be fitted by the following equation, \[[{\rm Fe/H}]=a+b\times(P-\overline{P}_{0})+c\times(\phi_{31}-\overline{\phi}_{31}) \tag{7}\] where \(\overline{P}_{0}\) and \(\overline{\phi}_{31}\) are the mean values of the fundamental periods and the phase differences \(\phi_{31}\) of the 54 non-Blazhko stars studied in this work, respectively. We adopt the least-squares method implemented in the SciPy package (Virtanen et al., 2020) to estimate the best-fitting coefficients with their corresponding standard errors, which gives a = -3.650 (1), b = 0.848 (4) and c = -3.992 (1). Figure 7 illustrates the distribution of [Fe/H] in the \(\phi_{31}\) versus period plane of those RRLs. ## 5 Discussion ### The properties of Fourier decomposition coefficients In this study, we find that the standard errors of the cubic and linear fits for the relation between the amplitude of the primary frequency A\({}_{1}\) and the total amplitude A\({}_{tot}\) of the 54 non-Blazhko RRab stars are \(\sigma_{1}\) = 0.0125 and \(\sigma_{2}\) = 0.013, respectively, which means that they follow either a cubic or a linear relation. We do not find any relation of RT with the fundamental period, the total amplitude A\({}_{tot}\) or the phase difference \(\phi_{21}\) of the stars studied in this work. However, Skarka (2014) suggested that RT follows a linear relation with those parameters for non-Blazhko RRab stars but not for Blazhko RRab stars, which is not consistent with our result. This might be because the precision of the photometry from \(Kepler\) and \(K2\) is much higher than that of the photometric data in the previous study. As for the relation between \(\phi_{21}\) and \(\phi_{31}\) of the stars, we find that they follow a linear relation. But Skarka (2014) found that the dependence between the two coefficients \(\phi_{21}\) and \(\phi_{31}\) is not very strong. ### Comparing with RRab stars in globular clusters and LMC field As the Fourier decomposition coefficients of the 54 non-Blazhko RRab stars are derived from light curves observed in the white \(Kepler\) band, we convert them into \(V\) mag using formula (2) of Nemec et al. (2011). Figure 8 shows the correlations of the properties of the non-Blazhko RRab stars in this study with the 177 RRab stars located in several Galactic and LMC GCs (Kovacs and Walker, 2001). Panel (a) shows the logP-\(\phi_{31}\) diagram. The stars in the GCs with poor and intermediate metallicities clearly define two edges, respectively.
## 5 Discussion ### The properties of Fourier composition coefficients In this study, we find that the standard errors of the cubic fit and the linear fit for the relation between the amplitude of the primary frequency A\({}_{1}\) and the total amplitude A\({}_{tot}\) of the 54 non-Blazhko RRab stars are \(\sigma_{1}\) = 0.0125 and \(\sigma_{2}\) = 0.013, respectively, which means that the two amplitudes can be described almost equally well by either a cubic or a linear relation. We do not find any relation of RT with the fundamental period, total amplitude A\({}_{tot}\) or phase difference \(\phi_{21}\) of the stars studied in this work. However, Skarka (2014) suggested that RT follows a linear relation with those parameters for non-Blazhko RRab stars but not for Blazhko RRab stars, which is not consistent with our result. This might be because the precision of the photometry from \(Kepler\) and \(K2\) is much higher than that of the photometric data in the previous study. As for the relation between \(\phi_{21}\) and \(\phi_{31}\) of the stars, we find that they follow a linear relation, whereas Skarka (2014) found that the dependence between the two coefficients \(\phi_{21}\) and \(\phi_{31}\) is not very strong. ### Comparing with RRab stars in globular clusters and LMC field Since the Fourier decomposition coefficients of the 54 non-Blazhko RRab stars are derived from light curves observed in the \(Kepler\) white band, we convert them into the \(V\) band using formula (2) of Nemec et al. (2011). Figure 8 shows the correlations of properties of the non-Blazhko RRab stars in this study with the 177 RRab stars located in several Galactic and LMC GCs (Kovacs and Walker, 2001). Panel (a) shows the logP-\(\phi_{31}\) diagram. The stars in the GCs with poor and intermediate metallicities clearly define two edges. The metal-poor subsample consists of 19 stars, whose metallicities are in the range from -1.70 to -1.99 dex with an average value of -1.8 dex (Kovacs and Walker, 2001; Nemec et al., 2011). The other subsample contains the 39 intermediate-metallicity stars, whose metallicities are between -0.97 and -1.23 dex with an average value of -1.1 dex. It is clear that most non-Blazhko RRab stars in the LAMOST-\(Kepler/K2\) fields have metal abundances between -1.80 dex and -1.10 dex, except for two \(Kepler\) stars and five \(K2\) stars with metallicities higher than -1.1 dex. Panel (b) shows that the distribution of the Fourier coefficients R\({}_{21}\) and R\({}_{31}\) of the 54 non-Blazhko RRab stars in the \(Kepler/K2\) survey is similar to that of the RRab stars in GCs. For the stars studied in this work, we find that some of them with low \(R_{31}\) values have relatively high \(R_{21}\) values. We also find that most stars are in the upper right side of panel (b) and most of them have high metallicities. Panel (c) shows that the stars studied in this work do not differ from the stars in GCs. We also note that the metallicities might have no significant effect in this panel. Panel (d) shows the agreement between the phase parameters of the 54 non-Blazhko RRab stars and the globular cluster RRab stars, which supports that the phase parameters \(\phi_{21}^{s}\) and \(\phi_{31}^{s}\) might follow a linear relation. However, panel (d) also presents very little dependence on the metallicities. Since the cluster RRLs adopted in this work cover roughly one dex in the intermediate-metallicity regime, we collected the data of the reference stars from large-sample high-resolution spectroscopic surveys of RRLs (For et al., 2011; Govea et al., 2014; Nemec et al., 2013; Sneden et al., 2017; Chadid et al., 2017; Magurno et al., 2019; Crestani et al., 2021; Gilligan et al., 2021), with the metallicities of the RRLs ranging from -3.0 dex to solar or super-solar iron abundance based on those high-resolution spectra. We cross-match the catalogs of those studies with our data for the reference stars. We derived 7 common stars from the study of Nemec et al. (2013) but no common stars from the other literature (For et al., 2011; Govea et al., 2014; Sneden et al., 2017; Chadid et al., 2017; Magurno et al., 2019; Crestani et al., 2021; Gilligan et al., 2021), and those stars are all in the \(Kepler\) field. We notice that our database of metallicities of the RRLs studied in this work is a subsample of Liu et al. (2020), who collected the data of the reference stars with reliable metallicity estimates either from high-resolution spectroscopy, with metallicities ranging from -2.95 dex to -0.59 dex (Clementini et al., 1995; For et al., 2011; Kinman et al., 2012; Nemec et al., 2013; Govea et al., 2014; Pancino et al., 2015), or as member stars of globular clusters (Harris, 2010), with metallicities ranging from -2.37 dex to -1.29 dex. Figure 5: The relationship of \(\phi_{31}\) and \(\phi_{21}\). The black and red dots represent RRLs in this study in the \(Kepler\) and \(K2\) fields, respectively. The dark line is the linear fitting between the two coefficients, with an rms of 0.082 rad, as shown in the top panel. After subtracting this fit, the residuals are shown in the bottom panel. Figure 6: Rise time as a function of the parameters of the light curves. The non-Blazhko RRab stars in this study observed by _Kepler_ and \(K2\) are presented in red and black dots, respectively.
They finally obtained 47 stars in common, which formed their reference star sample. The metallicity scale adopted by them was the one established by Carretta et al. (2009), which was derived from the old metallicity scale (Zinn and West, 1984). They found that the values of the metallicities of RRLs estimated from the low-resolution spectra of LAMOST DR6 agree well with those of the compiled reference stars, with a negligible offset of -0.04 dex and a standard deviation of 0.22 dex. They also found that the dispersion is comparable to that yielded by multi-epoch observations. We also compare the Fourier decomposition coefficients R\({}_{21}\), R\({}_{31}\), \(\phi_{21}^{c}\) and \(\phi_{31}^{c}\) with those derived for the RRLs in the central regions of the LMC. The latter are determined from the OGLE Collection of Variable Stars by Soszynski et al. (2016) and transformed from \(I\) to \(V\) band using equations provided by Morgan et al. (1998). It is clear that the coefficients of the 54 non-Blazhko RRab stars determined in this work agree well with the coefficients of the RRab stars and differ from those of the other types of RRLs shown in Figure 9. ### Period-\(\phi_{31}\)-[Fe/H] We compare the metallicities [Fe/H] of the stars studied in this work calculated using Eq. 7 with those derived from the relationships documented in the literature (Jurcsik and Kovacs, 1996; Nemec et al., 2013; Martinez-Vazquez et al., 2016; Iorio and Belokurov, 2021; Mullen et al., 2021). For consistency, the metallicities of those investigations are converted to the often-used scale of Carretta et al. (2009) (hereafter C09). The result of this comparison is shown in Figure 10; the abscissa of each panel refers to the metallicities of the stars calculated using Eq. 7, and the ordinate refers to the metallicities derived with the relations in the literature. Panel (a) shows the metallicity comparison between ours and those of Jurcsik and Kovacs (1996) (hereafter JK96), whose relation was derived using photometric data of 81 field RRab stars in the \(V\) band with metallicities based on the high-dispersion spectroscopy scale of Jurcsik (1995). For consistency, we first convert the metallicities derived with their relation to the C09 scale using the formula provided by Kollath et al. (2011): [Fe/H]\({}_{C09}\) = 1.001[Fe/H]\({}_{JK96}\)-0.112. We then convert the Fourier coefficient \(\phi_{31}\) in the K\({}_{p}\) system to the V band with formula (2) of Nemec et al. (2011). We find that the two relations differ noticeably within the calibration range (-2.1 dex \(\leq\) [Fe/H] \(\leq\) 0.7 dex) of JK96 (red horizontal lines), particularly in the metal-rich regime. This might be because the photometric data of JK96 were collected from heterogeneous observations at various sites and either lacked phase coverage or had excessive noise, which could cause the Fourier fits of JK96 to fail. Nemec et al. (2013) (hereafter N13) derived a quadratic Period-\(\phi_{31}\)-[Fe/H] relation using 19 RRab stars in the \(Kepler\) field with accurate metallicity measurements. The metallicity comparison between ours and those of N13 is shown in panel (b). The Fourier coefficients \(\phi_{31}\) of the 57 stars are in the same \(K_{p}\) system as those of Nemec et al. (2013). Furthermore, the metallicities adopted by N13 and ours are on the same C09 scale.
Note that the scatter is obviously large at both the high- and low-metallicity ends of their range (-1.5 dex \(\leq\) [Fe/H] \(\leq\) 0.03 dex). Mullen et al. (2021) suggested that this might be caused by the higher-order term of the relationship given by N13, and that they had only one RRab star with [Fe/H] \(\leq\) -2.0 dex, resulting in a scarcity of calibrators in their sample at low [Fe/H]. Martinez-Vazquez et al. (2016) (hereafter MV16) gave a new calibration of Period-\(\phi_{31}\)-[Fe/H] based on a sample of 381 RRab stars in the GCs and 8 field RRab stars, in order to extend the metallicity range of their sample. The metallicities of their sample were on the C09 scale, and we also convert the \(K_{p}\) system \(\phi_{31}\) value to the V-band system using formula (2) of Nemec et al. (2011). The metallicity comparison between ours and those of MV16 shows an obvious scatter within the entire calibration range of MV16, as presented in panel (c). This might be because the sample of 381 RRab stars was binned by period: MV16 computed the mean period, \(\phi_{31}\) and V-band amplitude of each bin, which means that their calibration was based on average rather than individual properties. Iorio & Belokurov (2021) (hereafter IB21) derived a G-band Period-\(\phi_{31}\)-[Fe/H] relation based on the light curves of 84 RRab stars in Gaia DR2 with known spectroscopic metallicities. For this comparison, we first use formula (2) of Kollath et al. (2011) to convert the \(\phi_{31}\) value in the \(K_{p}\) system to the V-band system, then convert it to the G-band system using formula (6) of Clementini et al. (2016). Moreover, an additional \(\pi\) offset should be subtracted from \(\phi_{31}\) to set the coefficients on the same scale as IB21, as suggested by Mullen et al. (2021). However, the metallicity abundances adopted by IB21 were on the scale of Zinn & West (1984) (ZW). We convert the metallicity abundances of IB21 to the C09 scale using the formula [Fe/H]\({}_{C09}\) = 1.105[Fe/H]\({}_{ZW84}\) + 0.160. The metallicity comparison between ours and those of IB21 exhibits general agreement within the entire range of metallicity (-2.53 dex \(\leq\) [Fe/H] \(\leq\) 0.33 dex), with an rms of 0.123 dex, as shown in panel (d). However, Mullen et al. (2021) pointed out that the relation given by IB21 tends to overestimate the metallicity at the metal-poor end and underestimate it at the metal-rich end. Figure 8: Correlation of properties of LAMOST-\(Kepler/K2\) non-Blazhko RRab stars with the 177 RRab stars located in several Galactic and LMC GCs (small black dots). The cluster RRab stars presented here are taken from Kovacs & Walker (2001) and were adopted by Nemec et al. (2011). The green and yellow triangles are the stars in the \(Kepler\) and \(K2\) fields in this study, respectively. The 19 most metal-poor stars, whose metallicities are between -1.70 and -1.99 dex with a mean value of -1.80 dex, are marked by blue circles. The 39 most metal-rich stars, whose metallicities are in the range of -0.97 to -1.23 dex with a mean value of -1.10 dex, are marked by red circles. The blue and red lines in panel (a) are linear fittings for the metal-poor and intermediate-metallicity stars, respectively. Note that the superscripts 'S' and 'C' of the coefficients signify phase parameters computed with sine and cosine series, respectively.
Panel (e) shows the metallicity comparison between ours and those of Mullen et al. (2021) (hereafter M21), who used 1980 RRab stars with a metallicity range of -3.0 dex \(\leq\) [Fe/H] \(\leq\) 0.4 dex. We first convert the \(\phi_{31}\) value in the \(K_{p}\) system to the V-band system using formula (2) of Nemec et al. (2011). A small shift of 0.08 dex is considered in converting to the often-used C09 scale of [Fe/H], as suggested by M21. The result exhibits an obvious scatter between the two relations over the entire range of metallicities. This might be because the sample of RRab stars in M21 is significantly larger than ours. ## 6 Conclusions By cross-matching the target stars of the \(Kepler\) and K2 photometry with the spectroscopic observations in LAMOST DR6, we derive a sample of 57 non-Blazhko RRab stars. The pulsation periods of these stars are determined, and the Fourier decomposition method is applied to the light curves to derive the Fourier parameters, including \(R_{21}\), \(R_{31}\), \(\phi_{21}\), \(\phi_{31}\), A\({}_{1}\) and A\({}_{tot}\). We find that the amplitude ratios R\({}_{21}\) and R\({}_{31}\) and the phase differences \(\phi_{21}\) and \(\phi_{31}\) are consistent with those determined for RRab stars in globular clusters and the LMC. There is a linear relationship between the phase differences \(\phi_{21}\) and \(\phi_{31}\), which agrees well with those in the literature (Skarka, 2014). In terms of the amplitudes of the stars studied in this work, we suggest that the amplitudes of the primary frequencies A\({}_{1}\) and the total amplitudes A\({}_{tot}\) follow either a cubic or a linear pattern, which needs further investigation in the future. For the rise time \(RT\), we do not find any correlation with the fundamental pulsation period, A\({}_{tot}\) or \(\phi_{21}\). However, it might follow a linear relationship with R\({}_{31}\). Based on the homogeneous metallicities, we have derived a new calibration formula for the Period-\(\phi_{31}\)-[Fe/H] relationship, which agrees well with those in the previous studies documented in the literature (Jurcsik and Kovacs, 1996; Nemec et al., 2013; Martinez-Vazquez et al., 2016; Iorio and Belokurov, 2021; Mullen et al., 2021). We foresee a much larger catalog to come, as LAMOST continues to release both low-resolution and medium-resolution spectra for targets with \(Kepler\) and K2 photometry (Fu et al., 2020). Such a larger catalog will refine the calibration of the relationships between the different pulsation parameters. Those observational results might bring new constraints to the hydrodynamic models constructed for RR Lyrae stars in general. Figure 9: Comparison of Fourier coefficients of the non-Blazhko RRab stars in this study with those derived from the OGLE-IV LMC field RR Lyrae stars provided by Soszyński et al. (2016) and the known non-Blazhko RRab stars observed by _Kepler_ determined by Nemec et al. (2011). The red points represent RRab stars, the green points RRc stars. The blue pentagons are the stars observed by both _Kepler_ and LAMOST DR6. The dark pentagons are the stars observed by both \(K2\) and LAMOST DR6. ## Acknowledgements We acknowledge the support from the National Natural Science Foundation of China (NSFC) through grants 11833002, 12090040, 12090042, 12273002 and 12203010. W.Z. is supported by the Fundamental Research Funds for the Central Universities.
The Guoshoujing Telescope (the Large Sky Area Multi-object Fiber Spectroscopic Telescope, LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. The authors gratefully acknowledge the Kepler team and all who have contributed to making this mission possible. _Software:_ astropy (Astropy Collaboration et al., 2013; The Astropy Collaboration, 2018), LightKurve (Vinicius et al., 2018; Barentsen et al., 2021), Period04 (Lenz & Breger, 2005)
2303.14857
Paired comparisons for games of chance
We present a Bayesian rating system based on the method of paired comparisons. Our system is a flexible generalization of the well-known Glicko, and in particular can better accommodate games with significant elements of luck. Our system is currently in use in the online game Duelyst II, and in that setting outperforms Glicko2.
Alex Cowan
2023-03-27T00:02:16Z
http://arxiv.org/abs/2303.14857v1
# Paired comparisons for games of chance ###### Abstract. We present a Bayesian rating system based on the method of paired comparisons. Our system is a flexible generalization of the well-known Glicko, and in particular can better accommodate games with significant elements of luck. Our system is currently in use in the online game _Duelyst II_, and in that setting outperforms Glicko2. The author was supported by the Simons Foundation Collaboration Grant 550031. ###### Contents * 1 Introduction * 2 Model * 2.1 Match outcomes * 2.2 Knowledge of players * 2.3 Player growth * 3 Parameter choices * 3.1 The luck function \(\Lambda\) and its tails * 3.2 \(\nu_{0}\) for unknown players * 3.3 The kernel \(\kappa\) * 3.4 Summary * 4 Algorithms * 4.1 Naive algorithms * 4.2 FFT-based algorithms * 4.3 Laplace algorithms * 5 Performance in Duelyst II ## 1. Introduction _The method of paired comparisons_[4] is a framework for ranking many items by comparing them two at a time. Often the outcome of these comparisons is non-deterministic, only a small fraction of all possible pairs will be compared, and some pairs may be compared multiple times. In this paper we discuss the method of paired comparisons in the context of ranking the players of a symmetric competitive two-player game according to their _strength_, i.e. ability to win matches. We present a rating system which, based only on the outcomes of previously played matches, estimates how likely any player is to defeat any other player. There are several other systems designed for this purpose, such as Elo [6], Glicko2 [9], and TrueSkill [11, 15]. Our system is fundamentally a generalization of the well-known system Glicko [8] (but not Glicko2; see Remark 2.19). With this generalization, we can adapt our system to specific situations in a way that avoids certain assumptions and certain approximations which might not be appropriate in those settings. In particular, Glicko and many other systems use the _Bradley-Terry model_[27, 1] for estimating the winning chances of players whose strength is known exactly. This model tends to overestimate the winning chances of players much stronger than their opponents, and also reflects reality poorly when used in games with elements of luck; c.f. Example 2.9 and Section 3.1. Our system is in use in the online collectible card game Duelyst II, and there our system outperforms Glicko2, which was the rating system the game used previously. We discuss this in Section 5. The core part of our system is made up of three models: Model 2.1 is used to predict the outcome of matches, Model 2.11 is used to update the system's beliefs about players based on match outcomes, and Model 2.16 is used to account for changes in player strength between matches due to external factors. The system functions by choosing some prior for a player's strength to assign to unknown players, and then using Model 2.11 and Model 2.16 after each match. Models 2.1, 2.11, and 2.16 are presented in substantial generality, and to use our system one must choose values for three parameters: \(\Lambda\) in Model 2.1, a prior \(\nu_{0}\) for unknown players to use in Model 2.11, and \(\kappa\) in Model 2.16. In Section 3, we present the parameter choices we made for Duelyst II, and give a very succinct summary of the resulting system in Section 3.4. We think similar choices will be reasonable in many other situations. 
An essential component of Glicko2's popularity is that it is computationally feasible to use it in large-scale applications, such as on the major chess website Lichess [14]. We present two sets of algorithms in Section 4.2 and Section 4.3 for implementing our system. These algorithms are efficient enough to be practical for similar applications; Duelyst II's implementation can process roughly 170 matches per second per vCPU. We also highlight the algorithms in Section 4.1, which might help one in understanding the system, and Algorithm 4.5 and Algorithm 4.6 for taking convolutions of discrete distributions with Laplace distribution PDFs and CDFs, which might be of independent interest. Implementations of all of these algorithms are available on the author's GitHub [2]. ### Acknowledgments We thank Michael Snarski for many very helpful conversations, Oleg Maslennikov for integrating all aspects of our system into Duelyst II, and the Duelyst II community for their enthusiastic testing and thorough feedback. ## 2. Model Our system is made up of three models. Model 2.1 is used to predict the outcome of matches. Model 2.11 is used after every match to update the system's beliefs about the strengths of players. Model 2.16 is used after each of these updates, to account for changes in player strength between matches due to external factors. In this section, we present these models in general. To use our system in practice, one then has to choose values for the following parameters: * In Model 2.1, \(\Lambda\) * In Model 2.11, a prior \(\nu_{0}\) for unknown players * In Model 2.16, \(\kappa\) We give a variety of examples of parameter choices for each individual model to highlight relationships to existing systems: * The Bradley-Terry model [27, 1] in Example 2.5 and Example 2.6 * TrueSkill [11, 15] in Example 2.7 * FIDE [7] in Example 2.8 * Glicko [8] in Example 2.13 and Example 2.18 In Section 3, we give recommendations for parameter choices which we expect to be suitable for most applications, and which allow for a more concrete formulation of our system. ### Match outcomes The outcome of a match between two players \(A\) and \(B\) depends on both how well \(A\) and \(B\) perform in that match as well as the nature of the game that they are playing. In this section we present Model 2.1 for predicting match outcomes in a way that takes into consideration both of these factors. **Model 2.1**.: * _Let_ \(\mathcal{S}\) _be a complete separable metric space._ * _Let_ \(\mathcal{P}(\mathcal{S})\) _be the set of Borel probability measures on_ \(\mathcal{S}\)_._ * _Let_ \(\Lambda:\mathcal{S}^{2}\to[0,1]\subset\mathbb{R}\) _be measurable and such that_ \(\Lambda(x,y)=1-\Lambda(y,x)\) _for all_ \(x,y\in\mathcal{S}\)_._ * _For any_ \(\mu_{A}\) _and_ \(\mu_{B}\) _in_ \(\mathcal{P}(\mathcal{S})\)_, define_ \[L(\mu_{A},\mu_{B})\coloneqq\int_{\mathcal{S}^{2}}\Lambda(x,y)\,\mu_{A}(dx)\, \mu_{B}(dy).\] _Players \(A\) and \(B\) are modeled by the probability measures \(\mu_{A}\) and \(\mu_{B}\) respectively, and the average score of \(A\) playing against \(B\) is modeled by \(L(\mu_{A},\mu_{B})\)._ We call \(\Lambda\) a _luck function_. In Model 2.1, games are determined by the choice of \(\Lambda\), and players by their associated probability measures. The luck function \(\Lambda\) reflects the nature of the game being played, in particular how much luck influences the outcomes of matches, and should be chosen on a case-by-case basis.
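For illustration, when \(\mu_{A}\) and \(\mu_{B}\) are finitely supported the double integral defining \(L(\mu_{A},\mu_{B})\) reduces to a weighted double sum, as in the following minimal sketch; the function names are ours, and the Heaviside-style luck function is just one possible choice (cf. Example 2.4 below).

```python
import numpy as np

def heaviside_luck(x, y):
    # One illustrative luck function: the better performance wins outright,
    # with ties scored 1/2 (cf. Example 2.4).
    return np.where(x > y, 1.0, np.where(x < y, 0.0, 0.5))

def average_score(xs_a, ws_a, xs_b, ws_b, luck):
    """L(mu_A, mu_B) of Model 2.1 when mu_A and mu_B are finitely supported:
    the double integral becomes a weighted double sum."""
    xs_a, xs_b = np.asarray(xs_a, float), np.asarray(xs_b, float)
    w = np.outer(ws_a, ws_b)                  # w[j, k] = mu_A({x_j}) * mu_B({y_k})
    lam = luck(xs_a[:, None], xs_b[None, :])  # lam[j, k] = Lambda(x_j, y_k)
    return float(np.sum(w * lam))

# A performs at 1.0 or 3.0 with equal probability; B always performs at 2.0,
# so A's average score is 0.5:
print(average_score([1.0, 3.0], [0.5, 0.5], [2.0], [1.0], heaviside_luck))
```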
The probability measure associated to a player reflects how consistently that player performs from match to match. When two fixed players \(A\) and \(B\) play multiple matches against one another and multiple outcomes are observed, in many situations it is possible to explain this variation in outcome equally well as a consequence of inconsistent performances by the players (reflected in \(\mu_{A}\) and \(\mu_{B}\)), as a consequence of an element of luck inherent in the game (reflected in \(\Lambda\)), or by various combinations of these two factors. Example 2.5 and Example 2.6 show how the Bradley-Terry model [27, 1] can arise from either the perspective of the game involving luck or players performing inconsistently. **Definition 2.2**.: We take the _Heaviside function_ to be the function \(H:\mathbb{R}\to\mathbb{R}\) defined as \[H(x)=\begin{cases}0,&x<0\\ \frac{1}{2},&x=0\\ 1,&x>0.\end{cases}\] **Definition 2.3**.: For any \(x\in\mathcal{S}\), the _Dirac measure_\(\delta_{x}\in\mathcal{P}(\mathcal{S})\) is defined by \[\delta_{x}(U)=\begin{cases}0,&x\not\in U\\ 1,&x\in U\end{cases}\] for all Borel-measurable sets \(U\subseteq\mathcal{S}\). **Example 2.4**.: Take \(\mathcal{S}=\mathbb{R}\). If \[\Lambda(x,y)=H(x-y),\] where \(H\) is the Heaviside function as in Definition 2.2, then Model 2.1 can be interpreted as a game in which the player who performs better in any given match wins that match. Conversely, if \[\Lambda(x,y)=\frac{1}{2}\quad\text{for all }x,y\in\mathcal{S},\] then this can be interpreted as a game in which the outcome of every match is determined uniformly at random. If \(\mu_{A}\) is a Dirac measure as in Definition 2.3, i.e. \(\mu_{A}(\{x\})=1\) for some \(x\in\mathcal{S}\), this can be interpreted as the player \(A\) being perfectly consistent, performing with exactly the same strength every match. Conversely, if \(\mu_{A}\) is a probability measure with high variance, then this can be interpreted as the player \(A\) being very inconsistent in their performance. However, there is no analogue to the luck function \(\Lambda(x,y)=\frac{1}{2}\), essentially because there is no uniform probability distribution on \(\mathbb{R}\). We will see in Section 3.1 that this discrepancy is significant. \(\triangle\) **Example 2.5**.: The _Bradley-Terry model_[27, 1] is the special case of Model 2.1 where \(\mathcal{S}=\mathbb{R}_{>0}\), \[\Lambda(x,y)=\frac{x}{x+y},\] and \(\mu_{A}\) and \(\mu_{B}\) are Dirac measures. These choices can be interpreted as * the players \(A\) and \(B\) always perform at the same strength, and * it is possible for a match to be won by the player that played worse in that match. The Bradley-Terry model is often encountered under the reparameterizations \(x\mapsto c^{x}\)[9] or \(x\mapsto 10^{x/400}\)[7, 8]. **Example 2.6**.: "Linear models" in the sense described by David in [4, §1.3, §4] is the special case of Model 2.1 where \(\mathcal{S}=\mathbb{R}\) and \(\Lambda(x,y)=H(x-y)\). As discussed in [4], the Thurstone-Mosteller [23, 24, 16, 17, 18] and Bradley-Terry [27, 1] models are recovered by requiring that \(\mu_{A}\) and \(\mu_{B}\) be normal distributions or Gumbel distributions with scale parameter \(1\). In contrast with Example 2.5, these choices can be interpreted as * the players \(A\) and \(B\) perform at different strengths from one match to the next, and * every match is won by whichever player played best.
Note that both this example and Example 2.5 recover the Bradley-Terry model, but do so in different ways, and can be interpreted differently. **Example 2.7**.: The TrueSkill system [11, 15] uses the special case of Model 2.1 in which \(\mathcal{S}=\mathbb{R}\), \[\Lambda(x,y)=\begin{cases}1&x-y>\varepsilon\\ \frac{1}{2}&|x-y|\leq\varepsilon\\ 0&x-y<-\varepsilon\end{cases}\] for some \(\varepsilon>0\), and \(\mu_{A}\) and \(\mu_{B}\) are normal distributions. **Example 2.8**.: The rating system [7, §8] used by FIDE, the de facto governing body of chess, corresponds to Model 2.1 with \(\mathcal{S}=\mathbb{R}\), a choice of \(\Lambda\) which is piecewise constant but well-approximated by the modified Bradley-Terry model \[\Lambda(x,y)=\begin{cases}\frac{10}{11}&\text{if }x-y>400\\ \frac{1}{11}&\text{if }x-y<-400\\ \frac{1}{1+10^{\frac{y-x}{400}}}&\text{if }|x-y|\leq 400,\end{cases}\] and \(\mu_{A},\mu_{B}\) Dirac measures. **Example 2.9**.: Consider the game \(G_{\beta}\) in which a chess match is played with probability \(\beta\in(0,1)\), and a winner is chosen uniformly at random with probability \(1-\beta\). If the game of chess is perfectly modeled by the Bradley-Terry model as presented in Example 2.5, then the game \(G_{\beta}\) is best modeled by the choice of parameters \(\mathcal{S}=\mathbb{R}_{>0}\), \[\Lambda(x,y)=\beta\frac{x}{x+y}+(1-\beta)\frac{1}{2},\] and \(\mu_{A}\) and \(\mu_{B}\) Dirac measures. None of the previous examples are good models for games in which it is impossible to win with probability arbitrarily close to \(1\). For instance, they cannot properly model the fact that the world champion of chess would win a match of \(G_{\beta}\) against both the author and a rock with probability \(\frac{1+\beta}{2}\), and the author would also win against the rock with probability \(\frac{1+\beta}{2}\). These sorts of considerations also cause problems when using the Rasch model [20] for multiple-choice tests where guessing is possible [12]. **Example 2.10**.: Take \(\mathcal{S}=\mathbb{R}^{3}\) and \[\Lambda\big{(}(x_{1},x_{2},x_{3}),(y_{1},y_{2},y_{3})\big{)}=\begin{cases}1& \text{if }\#\{i\in\{1,2,3\}\,:\,x_{i}>y_{i}\}\geq 2\\ 0&\text{if }\#\{i\in\{1,2,3\}\,:\,y_{i}>x_{i}\}\geq 2\\ \frac{1}{2}&\text{otherwise.}\end{cases}\] For Dirac measures \(\mu_{A},\mu_{B},\mu_{C}\in\mathcal{P}(\mathcal{S})\) defined by \[\mu_{A}(\{(3,3,3)\})=1,\quad\mu_{B}(\{(4,4,1)\})=1,\quad\mu_{C}(\{(5,2,2)\})=1,\] we have \[L(\mu_{A},\mu_{B})=L(\mu_{B},\mu_{C})=L(\mu_{C},\mu_{A})=0.\] This sort of non-transitivity arises in human play and is also an obstacle for machine learning [3, 21, 26]. It is unclear how one would properly model this non-transitivity with the widely-used linear models that were presented in Example 2.6. \(\triangle\) Model 2.1 makes two assumptions about the function \(\Lambda\) which we use to simplify Model 2.11. However, there are situations in which these assumptions aren't appropriate. We discuss the assumptions below. In both cases, it seems straightforward conceptually to omit the assumption, but then one must repeat the work done in subsequent sections without using the associated simplifications. The first assumption is that \(\Lambda(x,y)=1-\Lambda(y,x)\). This is appropriate for symmetric games, but probably isn't for asymmetric games. For example, the game in which players flip a coin to determine who plays with white in a game of chess is symmetric, but the game of chess after having picked colours is not.
At the time of writing, there have been 3,988,065,350 matches of chess played on lichess.org, and in them white scored 52% [13]. To model this, one might instead fix two different \(\Lambda\)'s, each satisfying \(\Lambda(x,x)=0.5\pm 0.02\), and consider which player was playing with the white pieces to determine which one to use. The second assumption, that \(\Lambda(x,y)\in[0,1]\), builds on the first. There are situations where one is interested not in the probability of each player winning, but in other notions of score. For example, in poker "cash games" players can exchange money for chips at any time, and strive to win as many chips as possible. Thus, if \(A\) and \(B\) win \(a\) and \(b\) chips respectively while playing a prescribed number of hands against each other in a poker cash game, then it is most meaningful to consider the quantities \(a\) and \(b\) themselves, and not whether or not \(a>b\). This contrasts with games like Go, where the winner of the match is the player with the highest score, regardless of the magnitude of the scores or the difference between them. It seems more natural to model poker cash games with a choice of \(\Lambda\) which takes values outside of \([0,1]\). This also allows one to consider games which are not zero-sum. In two-player games, when one player wins, the other loses, but other notions of score needn't sum to zero. In the preceding poker example, one expects that \(a+b<0\), as casinos take a small portion of each pot. ### Knowledge of players Model 2.1 posits that the player \(A\) is determined by a Borel probability measure \(\mu_{A}\in\mathcal{P}(\mathcal{S})\). Our rating system will not know with certainty what the true underlying measure \(\mu_{A}\) is, and represents its current beliefs with a Borel probability measure \(\nu_{A}\in\mathcal{P}(\mathcal{P}(\mathcal{S}))\). The only observations our rating system will consider are match outcomes, and Model 2.11 gives the resulting posteriors. **Model 2.11**.: _Let \(\mathcal{S}\) and \(L\) be as in Model 2.1. For any \(\nu_{A}\) and \(\nu_{B}\) in \(\mathcal{P}(\mathcal{P}(\mathcal{S}))\), define \(\nu_{A,A>B}\) to be any element of \(\mathcal{P}(\mathcal{P}(\mathcal{S}))\) satisfying, for all Borel-measurable sets \(U\subseteq\mathcal{P}(\mathcal{S})\),_ \[\int_{U}\nu_{A,A>B}(d\mu)=\frac{\int_{U}\left[\int_{\mathcal{P}(\mathcal{S})}L (\mu,\mu^{\prime})\,\nu_{B}(d\mu^{\prime})\right]\nu_{A}(d\mu)}{\int_{ \mathcal{P}(\mathcal{S})}\left[\int_{\mathcal{P}(\mathcal{S})}L(\mu,\mu^{ \prime})\,\nu_{B}(d\mu^{\prime})\right]\nu_{A}(d\mu)}.\] _Define \(\nu_{A,B>A}\) as above but with all instances of \(L(\mu,\mu^{\prime})\) replaced by \(L(\mu^{\prime},\mu)\)._ _The prior distribution over probability measures associated to a player \(A\) is modeled by \(\nu_{A}\), and the posterior after observing a match in which \(A\) defeats \(B\) or \(B\) defeats \(A\) is modeled by \(\nu_{A,A>B}\) or \(\nu_{A,B>A}\)._ After updating the prior \(\nu_{A}\) to the posterior \(\nu_{A,A>B}\) or \(\nu_{A,B>A}\), that posterior will then be used as the prior when processing the next match \(A\) plays. In some games there are match outcomes such as draws which are neither wins nor losses. When one can reasonably model these outcomes by a real number \(\theta\), e.g.
\(\theta=\frac{1}{2}\) for draws, then we suggest taking the posterior, which we'll denote \(\nu_{A,\theta}\), to be any probability measure which satisfies \[\int_{U}\nu_{A,\theta}(d\mu)=\frac{\int_{U}\left[\int_{\mathcal{P}(\mathcal{S}) }L(\mu,\mu^{\prime})^{\theta}L(\mu^{\prime},\mu)^{1-\theta}\,\nu_{B}(d\mu^{ \prime})\right]\nu_{A}(d\mu)}{\int_{\mathcal{P}(\mathcal{S})}\left[\int_{ \mathcal{P}(\mathcal{S})}L(\mu,\mu^{\prime})^{\theta}L(\mu^{\prime},\mu)^{1- \theta}\,\nu_{B}(d\mu^{\prime})\right]\nu_{A}(d\mu)} \tag{1}\] for all Borel-measurable sets \(U\subseteq\mathcal{P}(\mathcal{S})\). Our reasoning for this is the same as the reasoning given in [8, §2]. This formula can also be viewed as encompassing the one given in Model 2.11 if one takes \(\theta=1\) if \(A>B\) and \(\theta=0\) if \(B>A\). It may be helpful in some cases to note that the assumptions in Model 2.1 imply that \(L(\mu^{\prime},\mu)=1-L(\mu,\mu^{\prime})\). One must choose what prior \(\nu_{0}\in\mathcal{P}(\mathcal{P}(\mathcal{S}))\) to assign to a player which is completely unknown to the system. Below, we give one type of prior which is a convenient choice for many applications. **Definition 2.12**.: For a given complete separable metric space \(\mathcal{S}\), we will call a Borel probability measure \(\nu\in\mathcal{P}(\mathcal{P}(\mathcal{S}))\)_Dirac-only_ if and only if, for all Borel-measurable \(U\subseteq\mathcal{P}(\mathcal{S})\), \[U\cap\big{\{}\delta_{x}\,:\,x\in\mathcal{S}\big{\}}=\emptyset\quad\implies \quad\nu(U)=0,\] where \(\delta_{x}\) is the Dirac measure as in Definition 2.3. Dirac-only priors are convenient for two reasons: 1. If \(\nu_{A}\) is Dirac-only, then the posteriors \(\nu_{A,A>B}\) and \(\nu_{A,B>A}\) from Model 2.11 are too. 2. If \(\nu\) is Dirac-only, then we can treat \(\nu\) as an element of \(\mathcal{P}(\mathcal{S})\) via \[\nu(U\subseteq\mathcal{S})\coloneqq\nu\big{(}\big{\{}\delta_{x}\,:\,x\in U \big{\}}\big{)}\,.\] **Example 2.13**.: This example gives a description of the Bayesian inference part of the widely-used Glicko system [8]. Take \(\mathcal{S}=\mathbb{R}\) and \(\Lambda\) to be the parameterization of the Bradley-Terry model [27, 1] given by \[\Lambda(x,y)=\frac{1}{1+10^{\frac{y-x}{400}}}.\] To an unknown player, assign a Dirac-only (Definition 2.12) prior \(\nu_{0}\) which, viewed as a probability measure on \(\mathbb{R}\), is a normal distribution. When updating according to Model 2.11, approximate the marginal likelihood \(\int_{\mathcal{P}(\mathcal{S})}L(\mu,\mu^{\prime})\,\nu_{B}(d\mu^{\prime})\) by a normal density with the same mode and second derivative at that mode; see [8, Appendix A] for details. A consequence of this approximation is that the posterior \(\nu_{A,\theta}\) (Eq. (1)) is again normal. \(\triangle\) **Example 2.14**.: Take \(\mathcal{S}=\mathbb{R}\) and \(\Lambda(x,y)=H(x-y)\), where \(H\) is the Heaviside function (Definition 2.2). Suppose Alice and Abi share the account \(A\) in an online game, and in any given match Alice plays with probability \(p\in(0,1)\) and Abi plays otherwise. When Alice plays she always performs with strength \(x_{1}\), and when Abi plays she always performs with strength \(x_{2}\). Suppose there is another player \(B\) which always performs with strength \(y\). In the case where our rating system is aware of all this information except for the value of \(p\), we can model the situation as follows. Let \(\mu_{p}\) be a probability measure satisfying \(\mu_{p}(\{x_{1}\})=p\) and \(\mu_{p}(\{x_{2}\})=1-p\).
A reasonable choice of prior \(\nu_{A}\) would be the measure which, for any Borel-measurable subset \(U\) of \([0,1]\), gives probability \(\lambda(U)\) to the set \(\{\mu_{p}\,:\,p\in U\}\), where \(\lambda\) is the Lebesgue measure. This corresponds to a uniform prior for \(p\). The prior \(\nu_{B}\) should be taken to be \(\delta_{\delta_{y}}\) (Definition 2.3). If \(x_{1}<y<x_{2}\), then observations of match outcomes between \(A\) and \(B\) are essentially samples of a Bernoulli random variable with unknown parameter \(p\), and the rating system is attempting to determine \(p\). Model 2.11 uses the usual Bayesian inference to estimate \(p\) and yields a beta distribution. If one considers only Dirac-only choices of \(\nu_{A}\), like in Example 2.13, then one cannot reasonably model the situation given in this example. If it is known that \(B\) always performs with strength \(y\) and that every match is won by the player who performed better, then, after a single observation of \(A>B\), the posterior \(\nu_{A,A>B}\) gives probability \(0\) to all \(\delta_{x}\) with \(x<y\). The posterior after observing both \(A>B\) and \(B>A\) would give probability \(0\) to all \(\delta_{x}\) with \(x\neq y\). If \(p\) is far from \(\frac{1}{2}\), then \(\delta_{y}\) would be a very poor guess for \(\mu_{A}\). If there is another player \(C\) who is known to always perform with strength \(z\) satisfying \(x_{1}<y<z<x_{2}\), then assuming \(\mu_{A}\) is a Dirac measure would cause the system to believe that the sequence of match outcomes \(A>B\), \(B>A\), and \(A>C\) could never occur, and in the proportion \(p(1-p)^{2}\) of cases in which it does occur, the system's behaviour would be undefined. **Example 2.15**.: In this example we restate Model 2.11 under the assumption that \(\nu_{A}\) and \(\nu_{B}\) are discrete distributions over discrete distributions. We use the following notation: * \((\alpha_{i})_{i}\) and \((\beta_{j})_{j}\) are integer-indexed sequences of non-negative real numbers which each sum to \(1\), as are \((p_{i,k})_{k}\) for each \(i\), and \((q_{j,\ell})_{\ell}\) for each \(j\). * \((x_{i,k})_{k}\) and \((y_{j,\ell})_{\ell}\) are integer-indexed sequences of elements of \(\mathcal{S}\). * \(\delta_{*}\) is the Dirac measure at \(*\) (Definition 2.3). * \(\propto\) means that values should be normalized so that they sum to \(1\). Write \[\nu_{A} =\sum_{i}\alpha_{i}\delta_{\mu_{A,i}}, \nu_{B} =\sum_{j}\beta_{j}\delta_{\mu_{B,j}},\] \[\mu_{A,i} =\sum_{k}p_{i,k}\delta_{x_{i,k}}, \mu_{B,j} =\sum_{\ell}q_{j,\ell}\delta_{y_{j,\ell}}.\] Then \[L(\mu_{A,i},\mu_{B,j})=\sum_{k,\ell}p_{i,k}q_{j,\ell}\Lambda(x_{i,k},y_{j,\ell})\] and \[\nu_{A,A>B}=\sum_{i}\hat{\alpha}_{i}\delta_{\mu_{A,i}}, \hat{\alpha}_{i}\propto\alpha_{i}\sum_{j}\beta_{j}\sum_{k,\ell}p_{i,k}q_{j, \ell}\Lambda(x_{i,k},y_{j,\ell}).\] ### Player growth In practice, the strength of a player \(A\) is likely to change over time for a variety of reasons. For example, \(A\) might read a book to learn a new chess opening, or acquire new cards in a collectible card game. Using Model 2.11 to process many of \(A\)'s matches can easily lead to situations where \(\nu_{A}\) gives probability nearly \(0\) to certain measures. If external factors then cause \(A\)'s strength to change, a purely Bayesian system might need to observe many match outcomes before it gives a non-negligible probability to the measure corresponding to \(A\)'s new strength. In this section, we present Model 2.16 for how a player's underlying measure can change between matches.
**Model 2.16**.: _Let \(\mathcal{S}\) be a complete separable metric space. Fix a function \(\kappa:\mathcal{P}(\mathcal{S})\to\mathcal{P}(\mathcal{P}(\mathcal{S}))\) and denote by \(\kappa_{\mu}\) the value of \(\kappa\) evaluated at \(\mu\). Given any \(\nu\in\mathcal{P}(\mathcal{P}(\mathcal{S}))\), define \(\tilde{\nu}\) to be any element of \(\mathcal{P}(\mathcal{P}(\mathcal{S}))\) satisfying, for all Borel-measurable sets \(U\subseteq\mathcal{P}(\mathcal{S})\),_ \[\int_{U}\tilde{\nu}(d\mu^{\prime})\coloneqq\int_{\mathcal{P}( \mathcal{S})}\int_{U}\kappa_{\mu}(d\mu^{\prime})\,\nu(d\mu).\] _The possibility of the strength of \(A\) changing between matches because of external factors is modeled by replacing the measure \(\nu_{A}\) Model 2.11 associates to them by \(\tilde{\nu}_{A}\)._ We call the function \(\kappa\) a _kernel_. **Example 2.17**.: If \(\kappa_{\mu}=\delta_{\mu}\) for all \(\mu\in\mathcal{P}(\mathcal{S})\), then \(\tilde{\nu}=\nu\) for all \(\nu\). Here \(\delta_{\mu}\) denotes the Dirac measure at \(\mu\) (Definition 2.3). **Example 2.18**.: Recall the description of the Glicko system given in Example 2.13. Let \(\varphi(\,\cdot\,|\,x,\sigma^{2})\) be the normal density on \(\mathbb{R}\) with mean \(x\) and variance \(\sigma^{2}\). As described in [8, §3.2], one step of the Glicko system is using Model 2.16 with \(\kappa_{\delta_{x}}\) the Dirac-only measure (Definition 2.12) satisfying \[\kappa_{\delta_{x}}(\{\delta_{y}\,:\,y\in U\})=\int_{U}\varphi(y\,|\,x,\sigma^{2 })\,dy\] for some fixed \(\sigma^{2}\) and all Borel-measurable \(U\subseteq\mathbb{R}\). As mentioned in Example 2.13, this formulation of Glicko only involves \(\nu\) which are Dirac-only, so the value \(\kappa_{\mu}\) when \(\mu\) isn't a Dirac measure is irrelevant. It's computationally convenient that if \(\nu\) and \(\kappa\) are normal in this sense, then \(\tilde{\nu}\) is again normal; c.f. [8, Eq. (7)]. **Remark 2.19**.: In some applications allowing \(\kappa\) to depend on the player can increase the accuracy of the model. The main innovation of Glicko2 [9] over Glicko is choosing such a dependence. However, this can incentivize players seeking to maximize their rating to intentionally lose games in some circumstances [5, §5.1]. Competitive players of Pokemon GO have done this and perceive the resulting rankings to be inaccurate [25]. ## 3. Parameter choices Our system is in use in the online collectible card game Duelyst II. To implement the system, we needed to choose values for the parameters listed at the beginning of Section 2: * In Model 2.1, a luck function \(\Lambda\) * In Model 2.11, a prior \(\nu_{0}\) for unknown players * In Model 2.16, a kernel \(\kappa\) In this section we present the choices we made and give a concrete description of the resulting system. Our system is succinctly summarized in Section 3.4. We expect that similar choices will be suitable for many applications, and discuss minor variations other situations might call for. ### The luck function \(\Lambda\) and its tails For Duelyst II, we take \(\mathcal{S}=\mathbb{R}\). There are situations like those discussed in Example 2.10 which call for different choices of \(\mathcal{S}\), but we didn't feel that this was necessary for our application. Our choice of \(\Lambda\) is \[\Lambda(x,y)=\frac{1-\beta}{2}+\frac{\beta}{1+\exp(y-x)} \tag{2}\] with \(\beta=0.8\). This is the \(\Lambda\) from Example 2.9, and we discuss it there.
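As a quick sanity check on (2) (our own illustration, not part of the system proper), the sketch below evaluates the chosen luck function: \(\Lambda(x,x)=\frac{1}{2}\), and with \(\beta=0.8\) the modeled winning chances are confined to \([0.1,0.9]\) no matter how large the strength gap.

```python
import math

def luck(x, y, beta=0.8):
    # Eq. (2): a mixture of pure chance (weight 1 - beta) and a logistic
    # comparison of the two performances (weight beta).
    return (1 - beta) / 2 + beta / (1 + math.exp(y - x))

print(luck(0.0, 0.0))   # 0.5: equal performances
print(luck(50.0, 0.0))  # ~0.9 = (1 + beta)/2: the cap for a huge favourite
print(luck(0.0, 50.0))  # ~0.1 = (1 - beta)/2: the floor for a huge underdog
```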
When displaying ratings in game, we first apply the transformation \[x\mapsto\frac{400}{\log 10}x+1500 \tag{3}\] so that we match the parameterization of the Bradley-Terry model used by FIDE [7] and Glicko [8]. Our \(\Lambda\) is a linear combination of a constant function and a sigmoid. We chose a logistic curve for the sigmoid, but we would guess that other choices, like a normal CDF as in the Thurstone-Mosteller model [23, 24, 16, 17, 18], would work just as well. In contrast, the constant term in \(\Lambda\) has a very large impact on the behaviour of the system. Figure 3.1 illustrates the effect of including a constant term in \(\Lambda\). We suppose \(B\)'s strength is always 2000 (i.e. \(\nu_{B}=\delta_{\delta_{2000}}\); c.f. Definition 2.3), \(A\)'s strength is an unknown real number (i.e. \(\nu_{A}\) is "Dirac-only" as in Definition 2.12), and the system's prior \(\nu_{A}\) for \(A\)'s strength is a normal distribution with mean \(m\) and variance \(\sigma^{2}\). If the match outcome \(A>B\) is observed, then the system uses Model 2.11 to update its belief about \(A\)'s strength. Let \(m^{\prime}\) be the mean of \(A\)'s updated distribution \(\nu_{A,A>B}\). Figure 3.1 plots the difference \(m^{\prime}-m\) as a function of \(m\) for \(\Lambda\) as in (2) and reparameterized according to (3), \(\beta=0.8\), \(0.99\), \(1\), and various \(\sigma\). Unpacking the definitions, one can write the quantity being plotted explicitly: \[m^{\prime}-m=\frac{\frac{1-\beta}{2}m+\int_{\mathbb{R}}\frac{\beta}{1+10^{ \frac{2000-x}{400}}}\exp\!\left(-\frac{(x-m)^{2}}{2\sigma^{2}}\right)\frac{x \,dx}{\sqrt{2\pi\sigma^{2}}}}{\frac{1-\beta}{2}+\int_{\mathbb{R}} \frac{\beta}{1+10^{\frac{2000-x}{400}}}\exp\!\left(-\frac{(x-m)^{2}}{2\sigma^ {2}}\right)\frac{dx}{\sqrt{2\pi\sigma^{2}}}}-m. \tag{4}\] When \(\beta<1\), one interpretation is that, like in Example 2.9, there is a nonzero chance that the winner of a match is decided uniformly at random. If \(B\) is vastly stronger than \(A\), then, when the match outcome \(A>B\) is observed, the only plausible explanation is that the outcome of the match was in fact decided by chance, and the posterior distribution \(\nu_{A,A>B}\) for \(A\) is very similar to their prior \(\nu_{A}\). In (4), this intuition is reflected in the value of the integrals being much smaller than the constant terms, because there is nearly no overlap between the factors \[\frac{\beta}{1+10^{\frac{2000-x}{400}}}\quad\text{and}\quad\exp\!\left(-\frac {(x-m)^{2}}{2\sigma^{2}}\right),\] which respectively come from the sigmoidal term in \(\Lambda\) (defined in (2)) and the prior \(\nu_{A}\). In contrast, when \(\beta=1\), the constant terms in the numerator and denominator of (4) vanish, and the difference in the means of \(\nu_{A}\) and \(\nu_{A,A>B}\) is the ratio of the two exponentially small integrals. Essentially, the system views the chance of \(A\) having strength comparable to \(B\)'s as vanishingly small, but also views the chance of observing the match outcome \(A>B\) as vanishingly small. Because \(\nu_{A}\) has the \(\exp(-x^{2})\) tails of a normal distribution, but \(\Lambda\) has the much heavier tail \(\exp(-x)\), almost all of the mass in the integrals comes from \(x\approx m\). A straightforward calculation shows that \[m^{\prime}-m\longrightarrow\frac{\sigma^{2}\log 10}{400}\] as \(m\longrightarrow-\infty\). 
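The quantity (4) is straightforward to evaluate numerically; the sketch below (our illustration) discretizes the prior \(\mathcal{N}(m,\sigma^{2})\) on a grid, applies the likelihood of a win against a fixed 2000-rated opponent as in Model 2.11, and returns the shift of the posterior mean. For \(\beta=1\) and \(m\) far below the opponent's rating it reproduces the \(\frac{\sigma^{2}\log 10}{400}\) limit derived above.

```python
import numpy as np

def mean_shift_after_win(m, sigma, beta, opp=2000.0):
    # Discretize the prior N(m, sigma^2) on a wide grid ...
    x = np.linspace(m - 12 * sigma, m + 12 * sigma, 24001)
    prior = np.exp(-(x - m) ** 2 / (2 * sigma**2))
    prior /= prior.sum()
    # ... multiply by the likelihood of beating the fixed opponent
    # (Eq. (2) reparameterized by (3)) and renormalize, as in Model 2.11 ...
    likelihood = (1 - beta) / 2 + beta / (1 + 10 ** ((opp - x) / 400))
    post = prior * likelihood
    post /= post.sum()
    # ... and return the change in the mean, i.e. the quantity (4).
    return float(np.sum(x * post)) - m

for beta in (0.8, 0.99, 1.0):
    print(beta, mean_shift_after_win(m=1400.0, sigma=100.0, beta=beta))
```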
The behaviour with \(\beta=1\) overall seems much less reasonable to us than when \(\beta<1\), since it leads to massive rating changes for \(A\) based on ratios of minuscule probabilities. We view these probabilities as being substantially smaller than the chance that something bizarre has happened that the \(\beta=1\) model is not equipped to consider, and believe that a model that better reflects reality should not view this match outcome as extremely strong evidence that \(A\) is tremendously underrated. Figure 3.1 shows that changing \(\beta\) from \(1\) to \(0.99\) changes the behaviour of the model much more than changing \(\beta\) from \(0.99\) to \(0.8\). We note that [10, Fig. 1] reports that four widely-used systems based on the Bradley-Terry model all overestimate the performance of very highly rated competitors, that Sonas reports observing the same phenomenon in the popular article [22] with a dataset of \(1.54\) million chess games from FIDE, and that FIDE truncates the tails of the \(\Lambda\) it uses; see Example 2.8. This is consistent with our qualitative assessment of Figure 3.1 above: the Bradley-Terry model, with exponential asymptotes at \(0\) and \(1\), overestimates the winning chances of a much higher rated player. The two players who have played the most matches in our Duelyst II dataset of the first 1,126,592 ranked matches played since the game's launch, who we call \(P\) and \(Q\), have played \(2162\) and \(2142\) matches respectively, and are ranked by our system to be at the \(95^{\text{th}}\) and \(99.95^{\text{th}}\) percentiles among all players having played at least one ranked match. Figure 3.2 shows the change in rating, i.e. mean of \(\nu_{P}\) or \(\nu_{Q}\), after each of their matches beyond the first \(100\) they played. The horizontal axis shows the rating difference at the time of the match between the player and their opponent. \(P\) is shown in red, and \(Q\) in blue. Points above the axis are wins and points below are losses, except for the \(10\) distinctly visible draws separate from the rest of the points. The shape of the clusters in Figure 3.2 is similar to what's shown in Figure 3.1, as expected. The qualitative observation that the blue points are noisier than the red points is explained by the fact that \(\text{Var}(\nu_{Q})\) varies between about \(52^{2}\) and \(62^{2}\) for the plotted matches, whereas \(\text{Var}(\nu_{P})\) is almost always between \(48^{2}\) and \(52^{2}\). ### \(\nu_{0}\) for unknown players Duelyst II's implementation considers only values of \(\nu\) which are Dirac-only (Definition 2.12). We can then view \(\nu\) as a probability distribution on \(\mathbb{R}\), with the interpretation that each player's strength could in principle be described by a single real number, and \(\nu\) describes the system's knowledge of said real number. The prior \(\nu_{0}\) we assign to an unknown player, viewed as a distribution on \(\mathbb{R}\), is \[\nu_{0}=\sum_{k=0}^{n}\rho(x_{k})\,\delta_{x_{k}} \tag{5}\] with \[x_{k}=-M+\frac{2M}{n}k,\qquad\rho(x_{k})\propto\varphi(x_{k}\,|\,0,\sigma_{0} ^{2}),\] \[n=1000,\qquad M=7,\qquad\sigma_{0}^{2}=0.7^{2}.\] Here \(\delta_{x_{k}}\) is a Dirac measure (Definition 2.3), \(\varphi(\,\cdot\,|\,x,\sigma_{0}^{2})\) is the normal density with mean \(x\) and variance \(\sigma_{0}^{2}\), and \(\propto\) indicates that the values of \(\rho(x_{k})\) are rescaled so that they sum to \(1\).
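Concretely, the prior (5) is a normal density sampled on a uniform grid and renormalized; a minimal construction (our illustration) is:

```python
import numpy as np

n, M, sigma0 = 1000, 7.0, 0.7

x = -M + (2 * M / n) * np.arange(n + 1)  # grid points x_0, ..., x_n
rho0 = np.exp(-x**2 / (2 * sigma0**2))   # N(0, 0.7^2) density, up to scale
rho0 /= rho0.sum()                       # normalize so the masses sum to 1
# rho0[k] is the prior mass assigned to strength x_k for a new player.
```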
It would be more natural to take \(\nu_{0}\) to be the normal distribution \(\mathcal{N}(0,0.7^{2})\), but we choose the discrete distribution (5) with finite support because it is straightforward to implement: we can encode (5) as the tuple \((\rho(x_{0}),\ldots,\rho(x_{1000}))\in\mathbb{R}^{1001}\), a list of \(1001\) real numbers. The measure (5) is an approximation to \(\mathcal{N}(0,0.7^{2})\) in the sense of weak convergence as \(n,M\longrightarrow\infty\). Figure 3.2. Change in rating after each match for the two Duelyst II players that have played the most matches at the time of writing, plotted against difference between their rating and their opponent’s. The first \(100\) matches from each player are excluded. We chose the values of \(n=1000\), \(M=7\), and \(\sigma_{0}^{2}=0.7^{2}\) by examining the system's performance on the dataset of the first 1,126,592 ranked matches played since Duelyst II's launch. We were primarily concerned with producing a reliable ranking for the game's strongest players. The resulting rankings were relatively insensitive to these choices. We found that, for this dataset, taking \(\sigma_{0}^{2}=1\) led to a small but non-negligible chance that players who happened to do very well in their first few matches would be ranked inappropriately highly, whereas taking \(\sigma_{0}^{2}=0.7^{2}\) seemingly did not. Taking smaller values of \(\sigma_{0}^{2}\) would require top players to play more matches to be accurately rated. The numbers \(M\) and \(n\) control how precisely the system can determine a player's strength, but larger values make the system more computationally expensive to use in practice. We chose \(M=7\) because no player had non-negligible mass outside \([-M,M]\), and \(n=1000\) because doubling this value did not meaningfully change the system's output for our dataset. ### The kernel \(\kappa\) For Duelyst II, we took \(\kappa\) to be \[\kappa_{\delta_{x}}=\sum_{k=0}^{n}K(x,x_{k})\,\delta_{x_{k}} \tag{6}\] with \[x_{k}=-M+\frac{2M}{n}k,\qquad K(x,x_{k})\propto\varphi(x\,|\,x_{k},\sigma_{ \kappa}^{2}),\] \[n=1000,\qquad M=7,\qquad\sigma_{\kappa}^{2}=0.03^{2},\] using the same notation as Eq. (5). Because all \(\nu\)'s that arise will be Dirac-only (Definition 2.12), the value of \(\kappa_{\mu}\) when \(\mu\) is not a Dirac measure is irrelevant. As was the case in Section 3.2, a more natural choice for \(\kappa\), viewed as a distribution on \(\mathbb{R}\), would be the normal distribution \(\mathcal{N}(x_{k},\sigma_{\kappa}^{2})\) (and in some applications it might be desirable to take \(\kappa\) to be, more generally, a mixture of normal distributions with the same mean), but, for computational convenience, we want \(\tilde{\nu}\) to give \(0\) mass to \(\mathbb{R}\backslash\{x_{0},\ldots,x_{n}\}\). ### Summary In Duelyst II, each player \(A\) is represented as a tuple of \(1001\) real numbers: \[\bigg{(}\rho_{A}(\tfrac{14}{1000}\cdot 0-7),\,\rho_{A}(\tfrac{14}{1000}\cdot 1 -7),\ldots,\,\rho_{A}(\tfrac{14}{1000}k-7),\ldots,\,\rho_{A}(7)\bigg{)}.\] New players are set such that \[\rho_{A}(x)\propto\frac{1}{\sqrt{2\pi\cdot 0.7^{2}}}\exp\!\left(-\frac{x^{2}}{ 2\cdot 0.7^{2}}\right).\] Here and later, \(\propto\) indicates that the values are then normalized so that they sum to \(1\).
After the match outcome \(A>B\) is observed, the system updates the values of \(\rho_{A}\) and \(\rho_{B}\) to \[\rho_{A,A>B}(x) \propto\rho_{A}(x)\sum_{k=0}^{1000}\rho_{B}\big{(}\tfrac{14}{1000}k-7\big{)}\left[0.1+\frac{0.8}{1+\exp\!\big{(}(\tfrac{14}{1000}k-7)-x\big{)}}\right],\] \[\rho_{B,A>B}(x) \propto\rho_{B}(x)\sum_{k=0}^{1000}\rho_{A}\big{(}\tfrac{14}{1000}k-7\big{)}\left[0.1+\frac{0.8}{1+\exp\!\big{(}x-(\tfrac{14}{1000}k-7)\big{)}}\right].\] We evaluate these expressions using the Fast Fourier Transform (FFT) as described in Section 4.2. The values of \(\rho_{A}\) and \(\rho_{B}\) are then replaced with the values of \(\rho_{A,A>B}\) and \(\rho_{B,A>B}\). Immediately after completing the step above, \(\rho_{A}\) is replaced with \(\tilde{\rho}_{A}\), defined by \[\tilde{\rho}_{A}(x)\propto\sum_{k=0}^{1000}\rho_{A}\big{(}\tfrac{14}{1000}k-7 \big{)}\,\frac{1}{\sqrt{2\pi\cdot 0.03^{2}}}\exp\!\left(-\frac{\big{(}x-(\tfrac{14}{1000}k-7) \big{)}^{2}}{2\cdot 0.03^{2}}\right),\] and \(\rho_{B}\) is replaced with \(\tilde{\rho}_{B}\) defined analogously. FFT is used to compute the values of \(\tilde{\rho}_{A}\) and \(\tilde{\rho}_{B}\). ## 4. Algorithms Our rating system operates as follows: * Assign a prior \(\nu_{0}\) to unknown players (see Model 2.11 and Section 3.2). * After every match: 1. update \(\nu_{A}\) and \(\nu_{B}\) to \(\nu_{A,A>B}\) and \(\nu_{B,A>B}\) using Model 2.11, 2. replace \(\nu_{A}\) and \(\nu_{B}\) with \(\tilde{\nu}_{A}\) and \(\tilde{\nu}_{B}\) using Model 2.16, where \(A\) and \(B\) are the winner and loser of that match respectively. We call step 1 above _match processing_, and step 2 _kernel processing_. In this section, we present three algorithms for each of these steps, for three special cases of parameter choices. In all three cases, we will assume that the playing strength of an arbitrary player \(A\) is an unknown fixed real number that is an element of a known finite set \(S_{A}\) (which can depend on \(A\)) of size at most \(n+1\) (which cannot depend on \(A\)). In Section 4.1, we give an algorithm that is fully general besides the assumptions above. This algorithm takes time \(\gg n^{2}\) for each of the match processing step from Model 2.11 and the kernel processing step from Model 2.16, whereas the other two algorithms we present take time \(\ll n^{1+\varepsilon}\) for these steps. This algorithm is useful because it is the simplest of the three. In Section 4.2, we give an algorithm based on the Fast Fourier Transform (FFT) that processes matches and kernels in time \(\ll n^{1+\varepsilon}\), but requires some mild additional assumptions. This is the algorithm that we think is most useful for applications, and that we used in Duelyst II. In Section 4.3, we give another algorithm that processes matches and kernels in time \(\ll n^{1+\varepsilon}\), but does not rely on FFT, and instead is completely elementary. It makes the somewhat restrictive assumption that the functions \(\Lambda\) and \(\kappa\) are essentially short linear combinations of CDFs or PDFs of Laplace distributions respectively, but omits assumptions on \(S_{A}\) that were necessary for the FFT-based algorithms. ### Naive algorithms Let \(\rho_{A}\) be such that \(\rho_{A}(x)=\nu_{A}(\{x\})\) for all \(x\). If the match \(A>B\) is observed, then \(A\)'s posterior distribution is given by \[\rho_{A,A>B}(x)\propto\rho_{A}(x)\sum_{x_{k}\in S_{B}}\rho_{B}(x_{k})\,\Lambda(x,x_{k}).
\tag{7}\] Evaluating \(\rho_{A,A>B}(x)\) for a specific \(x\) by summing the right hand side above directly takes time \(\mathcal{O}(n)\). The function \(\rho_{A}\) is supported on at most \(n\) points, so overall the posterior can be computed in time \(\mathcal{O}(n^{2})\). If \(B>A\) is observed instead, then the same formula can be used with \(\Lambda(x,x_{k})\) replaced by \(\Lambda(x_{k},x)=1-\Lambda(x,x_{k})\). **Example 4.1**.: Suppose \[\Lambda(x,y)=\frac{x}{x+y},\] \[\rho_{A}(2)=\frac{9}{20},\qquad\rho_{A}(5)=\frac{3}{20},\qquad \rho_{A}(13)=\frac{8}{20},\] \[\rho_{B}(3)=\frac{2}{11},\qquad\rho_{B}(7)=\frac{4}{11},\qquad \rho_{B}(11)=\frac{5}{11}.\] If \(A>B\) is observed, then \[\rho_{A,A>B}(2)\propto\rho_{A}(2)\sum_{x_{k}\in\{3,7,11\}}\rho_{B }(x_{k})\frac{2}{2+x_{k}}=\frac{719}{7150},\] \[\rho_{A,A>B}(5)\propto\frac{43}{704},\] \[\rho_{A,A>B}(13)\propto\frac{208}{825}.\] Normalizing gives \[\rho_{A,A>B}(2)=\frac{69024}{284005}\approx 0.24,\qquad\rho_{A,A>B}(5)=\frac{4192 5}{284005}\approx 0.15,\qquad\rho_{A,A>B}(13)=\frac{173056}{284005}\approx 0.61.\] Similarly, \[\rho_{B,A>B}(3)=\frac{74724}{284005}\approx 0.26,\qquad\rho_{B,A>B}(7)=\frac{10 5456}{284005}\approx 0.37,\qquad\rho_{B,A>B}(11)=\frac{103825}{284005}\approx 0.37.\] Note that we use \(\rho_{A}\), not \(\rho_{A,A>B}\), to update \(\rho_{B}\). Let \(K:\mathbb{R}^{2}\to\mathbb{R}\) be defined by \[\kappa_{\delta_{x}}=\sum_{x_{k}\in S_{A}}K(x,x_{k})\,\delta_{x_{k}}\] as in (6). Then \(\tilde{\rho}_{A}\) can be computed as \[\tilde{\rho}_{A}(x)\propto\sum_{x_{k}\in S_{A}}\rho_{A}(x_{k})\,K(x,x_{k}) \tag{8}\] for \(x\in S_{A}\). Like the case with match processing discussed above, computing \(\tilde{\rho}_{A}\) this way takes time \(\mathcal{O}(n^{2})\). **Example 4.2**.: Suppose \(S_{A}=\{1,2,\ldots,100\}\) and \[K(x,y)=\begin{cases}\frac{1}{3}&\text{if $x-y\in\{-1,0,1\}$}\\ 0&\text{otherwise,}\end{cases}\qquad\rho_{A}(n)=\begin{cases}\frac{1}{10}&\text{ if $n$ is a perfect square}\\ 0&\text{otherwise.}\end{cases}\] Then \[\tilde{\rho}_{A}(n)=\begin{cases}\frac{1}{28}&n\in U\\ 0&\text{otherwise,}\end{cases}\] where \[U =\big{\{}n\in\mathbb{Z}\cap[1,100]\,:\,\text{for some $\delta\in\{-1,0,1\}$, $n+\delta\in[1,100]$ and $\sqrt{n+\delta}\in\mathbb{Z}$}\big{\}}\] \[=\{1,2,3,4,5,8,9,10,15,16,17,24,25,26,35,36,37,48,49,50,63,64,65,80,81,82,99,100\}.\ \ \triangle\] ### FFT-based algorithms We write \(f(n)=\tilde{\mathcal{O}}(g(n))\) to mean \(f(n)=\mathcal{O}(g(n)n^{\varepsilon})\) for all \(\varepsilon>0\). Define \(\rho_{A},\rho_{B}\), and \(K\) as in the previous section. We will recognize (7) and (8) as convolutions, and then use the Fast Fourier Transform (FFT) to compute them in time \(\tilde{\mathcal{O}}(n)\). In this section, we assume the following: * \(S_{A}=S_{B}\), * \(S_{A}=\{x_{0},\ldots,x_{n}\}\) with \(x_{k}=k\Delta\) for some \(\Delta\in\mathbb{R}_{>0}\), * \(\Lambda(x,y)=\frac{1-\beta}{2}+\beta F(x-y)\) for some \(\beta\in[0,1]\) and \(F:\mathbb{R}\to[0,1]\) increasing with asymptotes at \(0\) and \(1\), * \(K(x,y)=G(x-y)\) for some \(G:\mathbb{R}\to\mathbb{R}\). We begin by presenting an algorithm for kernel processing. With the assumptions above, (8) can be written as \[\tilde{\rho}_{A}(k\Delta)=\sum_{j=0}^{n}\rho_{A}(j\Delta)\,G\big{(}(k-j)\Delta \big{)}.\] We recognize this as a discrete convolution: \(\tilde{\rho}_{A}=\rho_{A}*G\). 
All the values in the list \(\tilde{\rho}_{A}=\big{(}\tilde{\rho}_{A}(0),\tilde{\rho}_{A}(\Delta),\ldots,\tilde{\rho}_{A}(n\Delta)\big{)}\) can thus be computed in time \(\tilde{\mathcal{O}}(n)\) by using FFT. Our algorithm for computing the posterior (7) is similar, but involves some additional elementary manipulations. Define \[R(x)\coloneqq F(x)-H(x),\] where \(H\) is the Heaviside function (Definition 2.2). Our assumptions imply that \(R(x)\longrightarrow 0\) as \(|x|\longrightarrow\infty\), and that \[\rho_{A,A>B}(k\Delta) \propto\rho_{A}(k\Delta)\left(\frac{1-\beta}{2}+\beta\big{(}L_{R}(k\Delta)+L_{H}(k\Delta)\big{)}\right)\text{ and }\] \[\rho_{A,A<B}(k\Delta) \propto\rho_{A}(k\Delta)\left(\frac{1+\beta}{2}-\beta\big{(}L_{R}(k\Delta)+L_{H}(k\Delta)\big{)}\right),\] where \[L_{R}(k\Delta)\coloneqq\sum_{j=0}^{n}\rho_{B}(j\Delta)\,R\big{(}(k-j)\Delta\big{)}\quad\text{and}\quad L_{H}(k\Delta)\coloneqq\sum_{j=0}^{n}\rho_{B}(j\Delta)\,H\big{(}(k-j)\Delta\big{)}.\] The function \(L_{R}\) is a convolution: \(L_{R}=\rho_{B}*R\). We compute all values in the list \(L_{R}=\big{(}L_{R}(0),\ldots,L_{R}(n\Delta)\big{)}\) in time \(\tilde{\mathcal{O}}(n)\) using FFT. The following algorithm computes all values in the list \(L_{H}=\big{(}L_{H}(0),\ldots,L_{H}(n\Delta)\big{)}\) in time \(\mathcal{O}(n)\). **Algorithm 4.3**.: Compute \(L_{H}(k\Delta)\) for all \(k\in\{0,1,\ldots,n\}\). \(\Sigma\gets 0\) **for \(0\leq k\leq n\) do** \(\Sigma\leftarrow\Sigma+\frac{1}{2}\rho_{B}(k\Delta)\) \(L_{H}(k\Delta)\leftarrow\Sigma\) \(\Sigma\leftarrow\Sigma+\frac{1}{2}\rho_{B}(k\Delta)\) **end for** ### Laplace algorithms Let \(\rho_{A}\), \(\rho_{B}\), and \(K\) be as in Section 4.1. Define \[f(x\,|\,b)\coloneqq\frac{1}{2b}\exp\!\left(-\frac{|x|}{b}\right)\quad\text{and}\quad F(x\,|\,b)\coloneqq\begin{cases}\frac{1}{2}\exp\!\left(-\frac{|x|}{b}\right)&x<0\\ 1-\frac{1}{2}\exp\!\left(-\frac{|x|}{b}\right)&x\geq 0.\end{cases}\] These are respectively the PDF and CDF of a Laplace distribution. Assume * \(\Lambda(x,y)=\frac{1-\beta}{2}+\beta\sum_{j=1}^{\ell}p_{j}F(x-y\,|\,a_{j})\) for non-negative reals \(p_{j}\) which sum to \(1\), * \(K(x,y)=\sum_{j=1}^{\ell}q_{j}f(x-y\,|\,b_{j})\) for non-negative reals \(q_{j}\) which sum to \(1\). With these assumptions, (7) and (8) become \[\rho_{A,A>B}(x)\propto\rho_{A}(x)\left[\frac{1-\beta}{2}+\beta\sum_{j=1}^{\ell}p_{j}\sum_{x_{k}\in S_{B}}\rho_{B}(x_{k})F(x-x_{k}\,|\,a_{j})\right]\] and \[\tilde{\rho}_{A}(x)\propto\sum_{j=1}^{\ell}q_{j}\sum_{x_{k}\in S_{A}}\rho_{A}(x_{k})f(x-x_{k}\,|\,b_{j}).\] We evaluate the inner sums for all \(x\in S_{A}\) simultaneously in time \(\tilde{\mathcal{O}}(n)\) using Algorithm 4.6 and Algorithm 4.5 described below. Doing the remaining arithmetic in the usual way, we evaluate the right hand sides for all \(x\in S_{A}\) in time \(\tilde{\mathcal{O}}(\ell n)\). The final normalization can be done in time \(\mathcal{O}(n)\). The rest of this section explains Algorithm 4.6 and Algorithm 4.5. Fix \(\rho:\mathbb{R}\to\mathbb{R}\) non-negative, supported on \(x_{1},\ldots,x_{n}\), and such that its values sum to \(1\). Define \(Q:\mathbb{R}\to\mathbb{R}\) by \[Q(y)\coloneqq\sum_{k=1}^{n}\rho(x_{k})f(y-x_{k}\,|\,b).\] Algorithm 4.5 takes as input an arbitrary finite set of real numbers \(y_{1},\ldots,y_{m}\) and computes, in time \(\tilde{\mathcal{O}}(m+n)\), all of the \(m\) quantities \(Q(y_{1}),\ldots,Q(y_{m})\).
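Before deriving the fast two-pass scan, it is convenient to have a simple \(\mathcal{O}(mn)\) reference implementation of \(Q\) against which the fast algorithms can be tested. A Python sketch (the function names, sizes, and parameter values are ours, purely for illustration):

```python
import numpy as np

def laplace_pdf(x, b):
    """PDF f(x | b) of the Laplace distribution with scale b."""
    return np.exp(-np.abs(x) / b) / (2 * b)

def Q_reference(ys, xs, rho, b):
    """Direct O(m * n) evaluation of Q(y_i) = sum_k rho(x_k) f(y_i - x_k | b)."""
    diffs = ys[:, None] - xs[None, :]             # shape (m, n)
    return (rho[None, :] * laplace_pdf(diffs, b)).sum(axis=1)

xs = np.sort(np.random.randn(500))                # support points x_1, ..., x_n
rho = np.random.rand(500)
rho /= rho.sum()                                  # weights summing to 1
ys = np.linspace(-3.0, 3.0, 200)                  # query points y_1, ..., y_m
print(Q_reference(ys, xs, rho, b=0.1))
```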
Define \[Q_{L}(y)\coloneqq\sum_{x_{k}\leq y}\rho(x_{k})f(y-x_{k}\,|\,b)\quad\text{and}\quad Q_{R}(y)\coloneqq\sum_{x_{k}>y}\rho(x_{k})f(y-x_{k}\,|\,b).\] The following observation, which is immediate from the definitions, is the main idea underlying Algorithm 4.5 and Algorithm 4.6. **Observation 4.4**.: _If \(y\) and \(\Delta\) are such that \(\{x_{1},\ldots,x_{n}\}\cap[y,y+\Delta]=\emptyset\), then_ \[Q_{L}(y+\Delta)=e^{-\frac{\Delta}{b}}Q_{L}(y)\quad\text{and}\quad Q_{R}(y+\Delta)=e^{\frac{\Delta}{b}}Q_{R}(y).\] **Algorithm 4.5**.: Compute \(Q(y_{i})\) for all \(y_{i}\in\{y_{1},\ldots,y_{m}\}\). ``` \(U\leftarrow\{x_{1},\ldots,x_{n}\}\cup\{y_{1},\ldots,y_{m}\}\) Sort \(U\) from smallest to largest. \(L\gets 0\) \(z_{0}\gets U[0]\) for\(z\in U\)do \(\Delta\gets z-z_{0}\) \(L\gets e^{-\frac{\Delta}{b}}L+\frac{1}{2b}\rho(z)\) \(Q(z)\gets L\) \(z_{0}\gets z\) endfor Sort \(U\) from largest to smallest. \(R\gets 0\) for\(z\in U\)do \(\Delta\gets z_{0}-z\) \(R\gets e^{-\frac{\Delta}{b}}R\) \(Q(z)\gets Q(z)+R\) \(R\gets R+\frac{1}{2b}\rho(z)\) \(z_{0}\gets z\) endfor ``` It is possible to compute the contribution from \(Q_{R}(y_{i})\) during the first iteration over \(U\). However, doing so requires that the arithmetic be done using \(\gg\frac{1}{b}(\max U-\min U)\) bits of precision because \(Q_{R}(y+\Delta)\) grows exponentially in \(\Delta\). In almost all applications it will be the case that \(\frac{1}{b}(\max U-\min U)\gg m+n\), and the algorithm will not run in time \(\tilde{\mathcal{O}}(m+n)\). Define \(T:\mathbb{R}\to\mathbb{R}\) by \[T(y)\coloneqq\sum_{k=1}^{n}\rho(x_{k})F(y-x_{k}\,|\,b).\] \(T(y)\) can be decomposed into the three sums \[T(y)=\sum_{x_{k}\leq y}\rho(x_{k})-\sum_{x_{k}\leq y}\tfrac{1}{2}\rho(x_{k})\exp\!\left(\frac{x_{k}-y}{b}\right)+\sum_{x_{k}>y}\tfrac{1}{2}\rho(x_{k})\exp\!\left(\frac{y-x_{k}}{b}\right).\] With this decomposition and the ideas used to produce Algorithm 4.3 and Algorithm 4.5, we can construct Algorithm 4.6, which takes as input a set \(\{y_{1},\ldots,y_{m}\}\) and computes all of the corresponding values \(T(y_{i})\) in time \(\tilde{\mathcal{O}}(m+n)\). **Algorithm 4.6**.: Compute \(T(y_{i})\) for all \(y_{i}\in\{y_{1},\ldots,y_{m}\}\). ``` \(U\leftarrow\{x_{1},\ldots,x_{n}\}\cup\{y_{1},\ldots,y_{m}\}\) Sort \(U\) from smallest to largest. \(M\gets 0\) \(L\gets 0\) \(z_{0}\gets U[0]\) for\(z\in U\)do \(\Delta\gets z-z_{0}\) \(M\gets M+\rho(z)\) \(L\gets e^{-\frac{\Delta}{b}}L+\frac{1}{2}\rho(z)\) \(T(z)\gets M-L\) \(z_{0}\gets z\) endfor Sort \(U\) from largest to smallest. \(R\gets 0\) for\(z\in U\)do \(\Delta\gets z_{0}-z\) \(R\gets e^{-\frac{\Delta}{b}}R\) \(T(z)\gets T(z)+R\) \(R\gets R+\frac{1}{2}\rho(z)\) \(z_{0}\gets z\) endfor ``` ## 5. Performance in Duelyst II In this section, we compare the performance of Glicko2 with our system, as well as our system but with \(\beta=0.9\) instead of \(\beta=0.8\) in (2), on the dataset of the first 1,126,592 ranked matches played since Duelyst II's launch. Duelyst II used Glicko2 [9] to rate players previously, with parameters chosen to be the same as the ones used in the prequel Duelyst between 2016 and 2020: \(\tau=0.5\) and default rating 1500, RD 200, and volatility 0.06 [19]. Each of the three systems we analyze in this section processed the matches in our dataset in chronological order. For each match, each system estimated the probability \(p\) of the observed match outcome occurring. Our system estimated \(p\) using Model 2.1, and Glicko2 estimated \(p\) using [8, Eq. (16)].
For the matches in which both players had variance less than \(70^{2}\) after the reparameterization (3), we computed \(-\log p\), the _log loss_ of that match. The average log loss was 0.6625 for Glicko2, 0.6613 for \(\beta=0.8\), and 0.6559 for \(\beta=0.9\). The left image in Figure 5.1 plots the log loss of each match individually, coloured by system. The horizontal axis is the difference in the ratings of the two players. For each colour, the three distinct curves correspond to wins by the weaker player (top), draws (middle), and wins by the stronger player (bottom). Figure 5.1. Left: Log loss of each match by rating difference. Right: Proportion of the dataset's total log loss by rating difference, normalized to integrate to average log loss. The number of matches being plotted is large: 557,973 for Glicko2, 566,213 for \(\beta=0.8\), and 653,660 for \(\beta=0.9\), causing many of the points to overlap. The right image of Figure 5.1 quantifies the density of points in the left image by showing the relative contribution of each rating difference to the total log loss in the dataset. Let \[K(x,y)\coloneqq\frac{1}{\sqrt{2\pi\cdot 5^{2}}}\exp\biggl{(}-\frac{(x-y)^{2}}{2\cdot 5^{2}}\biggr{)}\] denote a Gaussian kernel with variance \(5^{2}\). The curve plotted on the right is proportional to \[\sum-\log p\cdot\frac{K(x,|r_{A}-r_{B}|)}{\int_{0}^{\infty}K(x,t)\,dt},\] where the sum is over matches in which both players have variance at most \(70^{2}\), the quantities \(r_{A}\) and \(r_{B}\) denote the means of the players (i.e. their ratings), \(p\) is the probability of the observed match outcome occurring as estimated by each system, and \(x\) is the variable for the horizontal axis. The proportionality constant is such that the plotted function integrates to the average log loss. While the value \(\beta=0.9\) had smaller total log loss than the value \(\beta=0.8\) that was actually implemented, our judgment was that \(\beta=0.8\) was the better choice for our purposes for two reasons. First, in our application, maximizing the probability of observing the empirical data was not our goal. For us, it was much more important to accurately rank the game's top players relative to each other. The choice \(\beta=0.8\) yields more stable and reliable rankings, which is very important in practice. Second, many players actively enjoy interacting with the in-game rating system; trying to maximize the number the game displays to them becomes one of their primary objectives. From the perspective of game design, harshly penalizing unlucky losses is remarkably frustrating.
2306.16646
Reverse Information Projections and Optimal E-statistics
Information projections have found important applications in probability theory, statistics, and related areas. In the field of hypothesis testing in particular, the reverse information projection (RIPr) has recently been shown to lead to growth-rate optimal (GRO) e-statistics for testing simple alternatives against composite null hypotheses. However, the RIPr as well as the GRO criterion are undefined whenever the infimum information divergence between the null and alternative is infinite. We show that in such scenarios, under some assumptions, there still exists a measure in the null that is closest to the alternative in a specific sense. Whenever the information divergence is finite, this measure coincides with the usual RIPr. It therefore gives a natural extension of the RIPr to certain cases where the latter was previously not defined. This extended notion of the RIPr is shown to lead to optimal e-statistics in a sense that is a novel, but natural, extension of the GRO criterion. We also give conditions under which the (extension of the) RIPr is a strict sub-probability measure, as well as conditions under which an approximation of the RIPr leads to approximate e-statistics. For this case we provide tight relations between the corresponding approximation rates.
Tyron Lardy, Peter Grünwald, Peter Harremoës
2023-06-29T03:11:23Z
http://arxiv.org/abs/2306.16646v3
# Universal Reverse Information Projections and Optimal \(\mathrm{E}\)-statistics ###### Abstract Information projections have found many important applications in probability theory, statistics, and related fields. In the field of hypothesis testing in particular, the reverse information projection (RIPr) has recently been shown to lead to so-called growth-rate optimal (GRO) \(e\)-statistics for testing simple alternatives against composite null hypotheses. However, the RIPr as well as the GRO criterion are only defined in cases where the infimum information divergence between the null and alternative is finite. Here, we show that under much weaker conditions there often still exists an element in the null that is 'closest' to the alternative: the universal reverse information projection. The universal reverse information projection and its non-universal counterpart coincide whenever the information divergence is finite, and the strictness of this generalization will be shown by an example. Furthermore, the universal RIPr leads to optimal \(e\)-statistics in a sense that is a novel, but natural, extension of the GRO criterion. Finally, we discuss conditions under which the universal RIPr is a strict sub-probability distribution, and conditions under which an approximation of the universal RIPr leads to approximate \(e\)-statistics. ## I Introduction We write \(D(\nu\|\lambda)\) for the information divergence (Kullback-Leibler divergence, [1, 2, 3]) between two finite measures \(\nu\) and \(\lambda\) given by \[D(\nu\|\lambda)=\begin{cases}\int_{\Omega}\ln\!\left(\frac{\mathrm{d}\nu}{\mathrm{d}\lambda}\right)\mathrm{d}\nu-(\nu(\Omega)-\lambda(\Omega)),&\text{if }\nu\ll\lambda;\\ \infty,&\text{else.}\end{cases}\] For probability measures the interpretation of \(D(\nu\|\lambda)\) is that it measures how much we gain by coding according to \(\nu\) rather than coding according to \(\lambda\) if data are distributed according to \(\nu\). Many problems in probability theory and statistics, such as conditioning and maximum likelihood estimation, can be cast as minimization in either or both arguments of the information divergence. In particular, this is the case within the recently established and now flourishing theory of hypothesis testing based on \(e\)-statistics that allows for optional continuation of experiments (see Section II-C) [4, 5]. That is, a kind of duality has been established between optimal \(e\)-statistics for testing a simple alternative \(P\) against a composite null hypothesis \(\mathcal{C}\) and reverse information projections [4]. Here, the reverse information projection (RIPr) of \(P\) on \(\mathcal{C}\) is -- if it exists -- a unique measure \(\hat{Q}\) such that every sequence \((Q_{n})_{n\in\mathbb{N}}\) in \(\mathcal{C}\) with \(D(P\|Q_{n})\to\inf_{Q\in\mathcal{C}}D(P\|Q)\) converges to \(\hat{Q}\) in a particular way [6, 7]. It has been shown that whenever \(\mathcal{C}\) is convex and \(D(P\|\mathcal{C}):=\inf_{Q\in\mathcal{C}}D(P\|Q)<\infty\), the RIPr \(\hat{Q}\) exists and the likelihood ratio between \(P\) and \(\hat{Q}\) is the optimal \(e\)-statistic for testing \(P\) against \(\mathcal{C}\). However, it is clear that the RIPr does not exist if the information divergence between \(P\) and \(\mathcal{C}\) is infinite, i.e. \(D(P\|\mathcal{C})=\infty\). This leaves a void in the theory of optimality of \(e\)-statistics.
In this work, we remedy this by realizing that even if all measures in \(\mathcal{C}\) are infinitely worse than \(P\) at describing data distributed according to \(P\) itself, there can still be a measure that performs best relative to the elements of \(\mathcal{C}\). To find such a measure, we consider the _description gain_ [8] given by \[D(P\|Q\rightsquigarrow Q^{\prime})=\int_{\Omega}\ln\!\left(\frac{\mathrm{d}Q^{\prime}}{\mathrm{d}Q}\right)\mathrm{d}P-(Q^{\prime}(\Omega)-Q(\Omega)) \tag{1}\] whenever this integral is well-defined. If the quantities involved are finite, then the description gain reduces to \[D(P\|Q\rightsquigarrow Q^{\prime})=D(P\|Q)-D(P\|Q^{\prime}). \tag{2}\] In analogy to the interpretation of information divergence for coding, description gain measures how much we gain by coding according to \(Q^{\prime}\) rather than \(Q\) if data are distributed according to \(P\). Furthermore, denote \[D(P\|Q\rightsquigarrow\mathcal{C}):=\sup_{Q^{\prime}\in\mathcal{C}}D(P\|Q\rightsquigarrow Q^{\prime}),\] where undefined values are counted as \(-\infty\) when taking the supremum. If there exists at least one \(Q^{*}\in\mathcal{C}\) such that \(P\ll Q^{*}\), then \(D(P\|Q\rightsquigarrow\mathcal{C})\) is a well-defined number in \([0,\infty]\) for any \(Q\in\mathcal{C}\). This quantity should be seen as the maximum description gain one can get by switching from \(Q\) to any other measure in \(\mathcal{C}\). Intuitively, if there is a best descriptor in \(\mathcal{C}\), nothing can be gained by switching away from it. Indeed, in Proposition 2 we show that \(\inf_{Q\in\mathcal{C}}D(P\|Q\rightsquigarrow\mathcal{C})\) is finite if and only if it is equal to zero. Furthermore, in Theorem 3 we show that -- under very mild conditions -- there exists a unique measure \(\hat{Q}\) such that every sequence \((Q_{n})_{n\in\mathbb{N}}\) in \(\mathcal{C}\) with \[D(P\|Q_{n}\rightsquigarrow\mathcal{C})\to 0\] converges to \(\hat{Q}\) in a specific way. We call \(\hat{Q}\) the universal RIPr, as it coincides with the RIPr whenever the information divergence is finite. Furthermore, in Theorem 5 we show that whenever the universal RIPr \(\hat{Q}\) exists, the likelihood ratio of \(P\) and \(\hat{Q}\) is an optimal \(e\)-statistic in a sense that can be seen as a strict generalization of previously known optimality criteria for \(e\)-statistics. Finally, Theorem 4 and Proposition 4 provide certain properties of the universal RIPr that give insights even in the finite information divergence setting. ## II Background ### _Preliminaries_ We work with a measurable space \((\Omega,\mathcal{F})\) and, unless specified otherwise, all measures will be defined on this space. Throughout, \(P\) will denote a finite measure and \(\mathcal{C}\) a set of finite measures, such that \(P\) and all \(Q\in\mathcal{C}\) have densities w.r.t. a common measure \(\mu\). These densities will be denoted with lowercase letters, i.e. \(p\) and \(q\) respectively. We will assume throughout that \(\mathcal{C}\) is \(\sigma\)-convex, i.e. closed under countable mixtures, though we will refer to this simply as 'convex'. Furthermore, we assume that there exists at least one \(Q^{*}\in\mathcal{C}\) such that \(P\ll Q^{*}\). On the one hand, this ensures that \(D(P\|Q\rightsquigarrow\mathcal{C})\) is a well-defined number in \([0,\infty]\) for any \(Q\in\mathcal{C}\).
On the other hand, it aligns with our philosophy when we turn to hypothesis testing, in which case \(P\) and all \(Q\in\mathcal{C}\) will be probability distributions and serve as the alternative and null hypothesis respectively. We will consider \(P\) mostly as a tool to gather evidence against \(\mathcal{C}\), so that it does not make sense to consider the case in which \(P\) puts mass on events that cannot occur according to the null, as the null hypothesis can be discredited in such scenarios regardless of how much mass \(P\) puts on the event. ### _The Reverse Information Projection_ As mentioned briefly above, the reverse information projection is the result of minimizing the information divergence between \(P\) and \(\mathcal{C}\). If \(\mathcal{C}\) is an exponential family, this problem is well understood [9], but we focus here on the case that \(\mathcal{C}\) is a general convex set. In this setting, the following theorem establishes existence and uniqueness of a limiting object for any sequence \((Q_{n})_{n\in\mathbb{N}}\) in \(\mathcal{C}\) such that \(D(P\|Q_{n})\to D(P\|\mathcal{C})\) whenever the latter is finite. This limit (i.e. \(\hat{Q}\) in the following) is called the reverse information projection of \(P\) on \(\mathcal{C}\). **Theorem 1** (Li [6], Definition 4.2 and Theorem 4.3).: _If \(P\) and all \(Q\in\mathcal{C}\) are probability distributions such that \(D(P\|\mathcal{C})<\infty\), then there exists a unique (potentially sub-) probability distribution \(\hat{Q}\) such that:_ 1. _We have that_ \(\ln q_{n}\to\ln\hat{q}\) _in_ \(L_{1}(P)\) _for all sequences_ \((Q_{n})_{n\in\mathbb{N}}\) _in_ \(\mathcal{C}\) _such that_ \(\lim_{n\to\infty}D(P\|Q_{n})=D(P\|\mathcal{C})\)_._ 2. \(\int_{\Omega}\ln\frac{\mathrm{d}P}{\mathrm{d}\hat{Q}}\,\mathrm{d}P=D(P\|\mathcal{C})\)_,_ 3. \(\int_{\Omega}\frac{\mathrm{d}P}{\mathrm{d}\hat{Q}}\,\mathrm{d}Q\leq 1\) _for all_ \(Q\in\mathcal{C}\)_._ ### _E-statistics and Growth Rate Optimality_ The \(e\)-value has recently emerged as a popular alternative to the \(p\)-value for hypothesis testing [5, 10, 11]. It can be thought of as a measure of statistical evidence that is intimately linked with numerous ideas, such as likelihood ratios, test martingales [12] and tests of randomness [13]. Formally, an \(e\)-value is defined as the value taken by an \(e\)-statistic, which is a random variable \(E:\Omega\to[0,\infty]\) that satisfies \(\int_{\Omega}E\,\mathrm{d}Q\leq 1\) for all \(Q\in\mathcal{C}\) [14]. The set of all \(e\)-statistics is denoted as \(\mathcal{E}_{\mathcal{C}}\). Large \(e\)-values constitute evidence against \(\mathcal{C}\) as null hypothesis, so that the null can be rejected when the computed \(e\)-value exceeds a certain threshold. For example, the test that rejects the null hypothesis when \(E\geq\nicefrac{{1}}{{\alpha}}\) has a type-I error guarantee of \(\alpha\) by a simple application of Markov's inequality: \(Q(E\geq\nicefrac{{1}}{{\alpha}})\leq\alpha\int_{\Omega}E\,\mathrm{d}Q\leq\alpha\). In general, the set \(\mathcal{E}_{\mathcal{C}}\) of \(e\)-statistics is quite large, and the above does not tell us _which_ \(e\)-statistic to pick. This question was studied in [4] and a log-optimality criterion coined GRO was introduced for the case that the interest is in gaining as much evidence as possible relative to an alternative hypothesis given by a single probability distribution \(P\). This criterion can be traced back to the Kelly betting criterion in [15] and is further discussed in [16].
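The Markov-inequality guarantee above is easy to see in simulation. In the following Python sketch the null, alternative, and threshold are illustrative choices of ours: for a simple null \(Q=\mathcal{N}(0,1)\) and alternative \(P=\mathcal{N}(1,1)\), the likelihood ratio \(E(x)=\exp(x-\nicefrac{{1}}{{2}})\) satisfies \(\int_{\Omega}E\,\mathrm{d}Q=1\) and is therefore an \(e\)-statistic.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.05

# Data drawn under the null Q = N(0, 1).
x = rng.normal(0.0, 1.0, size=1_000_000)
e = np.exp(x - 0.5)               # likelihood ratio dP/dQ for P = N(1, 1)

print(e.mean())                   # ~1: the defining property of an e-statistic
print((e >= 1 / alpha).mean())    # type-I error; <= alpha by Markov's inequality
```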
**Definition 1**.: If it exists, an \(e\)-statistic \(\hat{E}\in\mathcal{E}_{\mathcal{C}}\) is Growth-Rate Optimal (GRO) if it achieves \[\int_{\Omega}\ln\hat{E}\,\mathrm{d}P=\sup_{E\in\mathcal{E}_{\mathcal{C}}}\int_{\Omega}\ln E\,\mathrm{d}P.\] The following theorem establishes a duality between GRO \(e\)-statistics and reverse information projections. For a limited set of testing problems, it states that GRO \(e\)-statistics exist and are uniquely given by likelihood ratios. **Theorem 2** (Grunwald et al. [4], Theorem 1).: _If \(P\) and all \(Q\in\mathcal{C}\) are probability distributions such that \(D(P\|\mathcal{C})<\infty\), \(p(\omega)>0\) for all \(\omega\in\Omega\), and \(\hat{Q}\) is the RIPr of \(P\) on \(\mathcal{C}\), then \(\hat{E}=\frac{\mathrm{d}P}{\mathrm{d}\hat{Q}}\) is GRO with rate equal to \(D(P\|\mathcal{C})\), i.e._ \[\sup_{E\in\mathcal{E}_{\mathcal{C}}}\int_{\Omega}\ln E\,\mathrm{d}P=\int_{\Omega}\ln\hat{E}\,\mathrm{d}P=D(P\|\mathcal{C}).\] _Furthermore, for any GRO \(e\)-statistic \(\tilde{E}\), we have that \(\tilde{E}=\hat{E}\) holds \(P\)-almost surely._ ## III The Universal Reverse Information Projection In this section, we will prove a result analogous to Theorem 1 in a more general setting. Rather than convergence of the logarithm of densities in \(L_{1}(P)\), we consider convergence with respect to a metric on the set of measurable positive functions, i.e. \(\mathcal{M}(\Omega,\mathbb{R}_{>0})=\{f:\Omega\to\mathbb{R}_{>0}:f\text{ measurable}\}\). For \(f,f^{\prime}\in\mathcal{M}(\Omega,\mathbb{R}_{>0})\) we define \[m_{P}^{2}(f,f^{\prime}):=\frac{1}{2}\int_{\Omega}\ln\!\left(\frac{\overline{f}}{f}\right)+\ln\!\left(\frac{\overline{f}}{f^{\prime}}\right)\mathrm{d}P, \tag{3}\] where \(\overline{f}:=\nicefrac{{(f+f^{\prime})}}{{2}}\). This is a divergence that can be thought of as the averaged Bregman divergence associated with the convex function \(\gamma(x)=x-1-\ln(x)\). In [17], such divergences are studied in detail for general \(\gamma\). In particular, they show that the function \[m_{\gamma}^{2}(x,y)=\frac{1}{2}\gamma(x)+\frac{1}{2}\gamma(y)-\gamma\!\left(\frac{x+y}{2}\right)\] is the square of a metric if and only if \(\ln(\gamma^{\prime\prime}(x))^{\prime\prime}\geq 0\). In our case, \(\ln(\gamma^{\prime\prime}(x))^{\prime\prime}=2x^{-2}\), so this result holds. This can be used together with an application of the Minkowski inequality to show that the triangle inequality holds for the square root of the divergence (3), i.e. \(m_{P}\), on \(\mathcal{M}(\Omega,\mathbb{R}_{>0})\). It should also be clear that for \(f,g\in\mathcal{M}(\Omega,\mathbb{R}_{>0})\), if \(f=g\) everywhere, then \(m_{P}(f,g)=0\). Conversely, \(m_{P}(f,g)=0\) only implies that \(P(f\neq g)=0\). This prevents us from calling \(m_{P}\) a metric on \(\mathcal{M}(\Omega,\mathbb{R}_{>0})\), and we therefore define, analogous to \(\mathcal{L}^{p}\) and \(L^{p}\) spaces, \(M(\Omega,\mathbb{R}_{>0})\) as the set of equivalence classes of \(\mathcal{M}(\Omega,\mathbb{R}_{>0})\) under the relation '\(\sim\)' given by \(f\sim g\Leftrightarrow P(f\neq g)=0\). By the discussion above, \(m_{P}\) properly defines a metric on \(M(\Omega,\mathbb{R}_{>0})\). In the following we will often ignore this technicality and simply act as if \(m_{P}\) defines a metric on \(\mathcal{M}(\Omega,\mathbb{R}_{>0})\), since we are not interested in what happens on null sets of \(P\).
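Since \(m_{P}\) plays a central role in what follows, a quick numerical sanity check of the triangle inequality on a finite support may be reassuring; this Python sketch (with randomly generated positive functions and \(P\)-weights of our own choosing) is of course not part of the proof.

```python
import numpy as np

rng = np.random.default_rng(1)

def m_P(f, g, w):
    """m_P(f, g) for positive functions sampled on a finite support, P-weights w."""
    h = (f + g) / 2
    # Integrand log(h/f) + log(h/g) = 2*log(h / sqrt(f*g)) is nonnegative by AM-GM.
    return np.sqrt(0.5 * np.sum(w * (np.log(h / f) + np.log(h / g))))

w = rng.random(50)
w /= w.sum()                                   # P-weights on a 50-point support
f, g, k = (rng.random(50) + 0.1 for _ in range(3))
assert m_P(f, k, w) <= m_P(f, g, w) + m_P(g, k, w) + 1e-12
```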
**Proposition 1**.: _The metric space \((M(\Omega,\mathbb{R}_{>0}),m_{P})\) is complete._ Everything is now in place to state the main result. **Theorem 3**.: _If \(\inf_{Q\in\mathcal{C}}D(P\|Q\rightsquigarrow\mathcal{C})<\infty\), then there exists a measure \(\hat{Q}\) that satisfies the following for every sequence \((Q_{n})_{n\in\mathbb{N}}\) in \(\mathcal{C}\) s.t. \(D(P\|Q_{n}\rightsquigarrow\mathcal{C})\to\inf_{Q\in\mathcal{C}}D(P\|Q\rightsquigarrow\mathcal{C})\) for \(n\to\infty\):_ 1. \(q_{n}\to\hat{q}\) _in_ \(m_{P}\)_,_ 2. _If_ \(P^{\prime}\) _is a measure such that_ \(|\!\inf_{Q\in\mathcal{C}}D(P\|Q\rightsquigarrow P^{\prime})|<\infty\) _then_ \(\int_{\Omega}\ln\frac{\mathrm{d}P^{\prime}}{\mathrm{d}\hat{Q}}\,\mathrm{d}P=\lim_{n\to\infty}\int_{\Omega}\ln\frac{\mathrm{d}P^{\prime}}{\mathrm{d}Q_{n}}\,\mathrm{d}P\)_,_ 3. \(\int_{\Omega}\frac{\mathrm{d}P}{\mathrm{d}\hat{Q}}\,\mathrm{d}Q\,\leq\,P(\Omega)\,+\,Q(\Omega)\,-\liminf_{n\to\infty}Q_{n}(\Omega)\) _for any_ \(Q\in\mathcal{C}\)_._ Theorem 1 is a special case of Theorem 3 when \(P\) and all \(Q\in\mathcal{C}\) are probability distributions and \(D(P\|\mathcal{C})<\infty\). This follows because Equation (2) implies that minimizing \(D(P\|Q\rightsquigarrow Q^{\prime})\) over \(Q\) is equivalent to minimizing \(D(P\|Q)\). The measure \(\hat{Q}\) as in Theorem 3 therefore extends the notion of the reverse information projection of \(P\) on \(\mathcal{C}\). We call \(\hat{Q}\) the universal reverse information projection of \(P\) on \(\mathcal{C}\) ('generalized' has already been used for the RIPr whenever it is not attained by an element of \(\mathcal{C}\) [9]). However, the density of the measure \(\hat{Q}\) is only unique as an element of \(M(\Omega,\mathbb{R}_{>0})\), since convergence of the densities holds in \(m_{P}\). In the current work this causes no ambiguity, so that we simply refer to it as 'the' universal RIPr. Note that Theorem 3 (1) implies that if there exists a \(Q\in\mathcal{C}\) with \(D(P\|Q\rightsquigarrow\mathcal{C})=0\), then \(Q\) is the universal RIPr of \(P\) on \(\mathcal{C}\). This matches with the intuition that the maximum gain we can get from switching away from the 'best' code in \(\mathcal{C}\) should be equal to zero. The following result establishes this more formally, i.e. whenever \(\inf_{Q\in\mathcal{C}}D(P\|Q\rightsquigarrow\mathcal{C})<\infty\), it must actually be equal to zero. **Proposition 2**.: _The following conditions are equivalent:_ 1. _There exists a measure_ \(P^{\prime}\) _such that_ \(D(P\|P^{\prime}\rightsquigarrow\mathcal{C})\) _is finite._ 2. _There exists a measure_ \(Q\) _in_ \(\mathcal{C}\) _such that_ \(D(P\|Q\rightsquigarrow\mathcal{C})\) _is finite._ 3. _There exists a sequence of measures_ \(Q_{n}\in\mathcal{C}\) _such that_ \(D(P\|Q_{n}\rightsquigarrow\mathcal{C})\to 0\) _for_ \(n\to\infty\)_._ To show that the universal reverse information projection exists, it is therefore enough to prove that one of these equivalent conditions holds. Which condition is easiest to check will depend on the specific setting. We now provide one example. **Proposition 3**.: _Assume that \(\mathcal{C}\) is a convex set of probability measures that has finite minimax regret, with normalized maximum likelihood distribution \(P^{\prime}\in\mathcal{C}\).
Then for any probability measure \(P\) that is absolutely continuous with respect to \(P^{\prime}\), it holds that \(D(P\|P^{\prime}\rightsquigarrow\mathcal{C})<\infty\)._ One-dimensional exponential families with finite minimax regret have been classified in [18]. Proposition 3 implies that information projections exist whenever \(\mathcal{C}\) is the convex hull of finitely many distributions. **Example 1**.: Let \(\mathcal{C}\) be a singleton whose single element \(Q\) is given by the standard Gaussian and let \(P\) be the standard Cauchy distribution. Since the Cauchy distribution is more heavy-tailed than the Gaussian, we have that \(D(P\|\mathcal{C})=\infty\). However, since both distributions have full support, it follows that \[D(P\|Q\rightsquigarrow\mathcal{C})=D(P\|Q\rightsquigarrow Q)=0.\] By Theorem 3 (1), \(Q\) is therefore the universal reverse information projection of \(P\) on \(\mathcal{C}\). This example can be extended to composite \(\mathcal{C}\) by considering all mixtures of the Gaussian distributions \(\mathcal{N}(-1,1)\) and \(\mathcal{N}(1,1)\), i.e. with means \(\pm 1\) and variance \(1\). Proposition 3 guarantees the existence of a universal reverse information projection, although the information divergence is still infinite because a Cauchy distribution is more heavy-tailed than any finite mixture of Gaussian distributions. Symmetry implies that the universal reverse information projection must be equal to the uniform mixture of \(\mathcal{N}(-1,1)\) and \(\mathcal{N}(1,1)\). ### _Strict sub-probability measure_ We return now to the familiar setting where \(P\) is a probability distribution and \(\mathcal{C}\) a set of probability distributions (that is convex and closed under set-wise convergence). It is easy to verify that the RIPr \(\hat{Q}\) of \(P\) on \(\mathcal{C}\) is then a sub-probability measure. This follows because we know that there exists a sequence \((Q_{n})_{n\in\mathbb{N}}\) in \(\mathcal{C}\) such that \(q_{n}\) converges point-wise \(P\)-a.s. to \(\hat{q}\) and Fatou's Lemma tells us \[\int_{\Omega}\hat{q}\,\mathrm{d}\mu\ =\ \int_{\Omega}\liminf_{n\to\infty}q_{n}\,\mathrm{d}\mu\ \leq\ \liminf_{n\to\infty}\int_{\Omega}q_{n}\,\mathrm{d}\mu\ =\ 1.\] It is not clear a priori whether this can ever be a strict inequality. For example, if the sample space is finite, the set of probability measures is compact, so the limit of any sequence of probability measures (i.e. the reverse information projection) will also be a probability measure. The following example illustrates that this is not always the case for infinite sample spaces, and it can in fact already go wrong for a countable sample space with \(D(P\|\mathcal{C})<\infty\). **Example 2**.: Let \(\Omega=\mathbb{N}\) and \(\mathcal{F}=2^{\mathbb{N}}\). Furthermore, let \(P\) denote the probability measure \(\delta_{1}\) concentrated in the point \(i=1\) and \(\mathcal{C}\) the set of distributions \(Q\) satisfying \[\sum_{i=1}^{\infty}\frac{1}{i}q(i)=\frac{1}{2}.\] This set is defined by a linear constraint, so that \(\mathcal{C}\) is convex, and for any \(Q\in\mathcal{C}\), we have \[q(1)+\sum_{i=2}^{\infty}\frac{1}{i}q(i)=\sum_{i=1}^{\infty}\frac{1}{i}q(i)=\frac{1}{2},\] implying that \(q(1)\leq\sfrac{1}{2}\). It follows that \(D(P\|Q)=-\ln(q(1))\geq\ln(2)\). The sequence \(Q_{n}=\frac{n-2}{2n-2}\delta_{1}+\frac{n}{2n-2}\delta_{n}\) satisfies \(Q_{n}\in\mathcal{C}\) and \[D(P\|Q_{n})=\ln\frac{2n-2}{n-2}\to\ln(2).\] Consequently, it must hold that \(D(P\|\mathcal{C})=\ln(2)\).
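The arithmetic in Example 2 is easily checked numerically; the following Python sketch (loop values ours) verifies the linear constraint and the convergence of the divergence.

```python
import numpy as np

# Example 2: q_n(1) = (n-2)/(2n-2), q_n(n) = n/(2n-2), all other q_n(i) = 0.
for n in (10, 100, 10_000):
    q1, qn = (n - 2) / (2 * n - 2), n / (2 * n - 2)
    constraint = q1 / 1 + qn / n        # sum_i q_n(i)/i; equals 1/2 exactly
    divergence = -np.log(q1)            # D(P || Q_n) with P = delta_1
    print(n, constraint, divergence)    # divergence -> ln(2) ~ 0.693
```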
The sequence \(Q_{n}\) converges to the strict sub-probability measure \(\sfrac{1}{2}\delta_{1}\), which must therefore be the (universal) RIPr of \(P\) on \(\mathcal{C}\). A more general example, which can be seen as a template to create such situations, is given in Appendix C. The common theme is that \(\mathcal{C}\) is defined using only constraints of the form \(\sum g(i)q(i)=c\), where \(g\) is a positive function such that \(\lim_{n\to\infty}g(n)=0\). The following theorem shows that if we add a 'dominating' restriction to \(\mathcal{C}\), such constraints cannot be violated. **Theorem 4**.: _Take \(\Omega=\mathbb{N},\mathcal{F}=2^{\mathbb{N}}\), and let \(\mathcal{C}\) be a convex set of probability distributions. Suppose that for \(f_{0},f_{1}:\mathbb{N}\to\mathbb{R}_{>0}\), we have that \(\sum_{i}f_{0}(i)q(i)\leq\lambda_{0}\) and \(\sum_{i}f_{1}(i)q(i)=\lambda_{1}\) for all \(Q\in\mathcal{C}\). If \(Q_{n}\) denotes a sequence of measures in \(\mathcal{C}\) that converges point-wise to some distribution \(Q^{*}\), and \(f_{0}\) dominates \(f_{1}\) in the sense that_ \[\lim_{i\to\infty}\frac{f_{1}(i)}{f_{0}(i)}=0, \tag{4}\] _then_ \[\sum_{i}f_{1}(i)\cdot q^{*}(i)=\lambda_{1}. \tag{5}\] ## IV Optimal e-statistics In this section, we assume that \(P\) and all \(Q\in\mathcal{C}\) are probability distributions, and we are interested in the hypothesis test with \(P\) as alternative and \(\mathcal{C}\) as null. To this end, Theorem 3 shows that -- if it exists -- the likelihood ratio of \(P\) and its universal RIPr is an \(e\)-statistic. A natural question is whether the optimality of the universal RIPr in terms of describing data distributed according to \(P\) carries over to some sort of optimality of the \(e\)-statistic, similarly to the GRO criterion in the case that \(D(P\|\mathcal{C})<\infty\). It turns out that this is true in terms of a very intuitive extension of the GRO criterion. Completely analogously to the coding story, we simply have to change from absolute to pairwise comparisons. **Definition 2**.: For \(e\)-statistics \(E,E^{\prime}\in\mathcal{E}_{\mathcal{C}}\), we say that \(E\) is _stronger_ than \(E^{\prime}\) if the following integral is well-defined and non-negative, possibly infinite: \[\int_{\Omega}\ln\!\left(\frac{E}{E^{\prime}}\right)\mathrm{d}P, \tag{6}\] where we adhere to the conventions \(\ln(\sfrac{0}{c})=-\infty\) and \(\ln(\sfrac{c}{0})=\infty\) for all \(c\in\mathbb{R}_{>0}\). Furthermore, an \(e\)-statistic \(E^{*}\in\mathcal{E}_{\mathcal{C}}\) is the _strongest_ \(e\)-statistic if it is stronger than any other \(e\)-statistic \(E\in\mathcal{E}_{\mathcal{C}}\). Since we assumed that there exists a \(Q^{*}\in\mathcal{C}\) such that \(P\ll Q^{*}\), it follows that for any \(e\)-statistic \(E\) we must have \(P(E=\infty)=0\), which simplifies any subsequent analyses greatly. The optimality criterion in Definition 2 can be seen as a generalization of GRO, because if \(\int_{\Omega}\ln E\,\mathrm{d}P\) and \(\int_{\Omega}\ln E^{\prime}\,\mathrm{d}P\) are both finite, (6) can be written as the difference of these two integrals, thus recovering the original GRO criterion. Analogous to that case, we prove that whenever the universal RIPr exists, it leads to an optimal \(e\)-statistic. **Theorem 5**.: _Suppose that both \(P\) and all \(Q\in\mathcal{C}\) are probability distributions such that \(\inf_{Q\in\mathcal{C}}D(P\|Q\rightsquigarrow\mathcal{C})<\infty\).
If \(\hat{Q}\) denotes the universal RIPr of \(P\) on \(\mathcal{C}\), then \(\hat{E}=\nicefrac{{\mathrm{d}P}}{{\mathrm{d}\hat{Q}}}\) is the strongest \(e\)-statistic. Furthermore, for any other strongest \(e\)-statistic \(\tilde{E}\) we must have that \(\tilde{E}=\hat{E}\) holds \(P\)-a.s._ The above notion of optimality comes down to the simple idea that if one \(e\)-statistic \(E\) is stronger than another \(e\)-statistic \(E^{\prime}\), then a test based on \(E\) is more powerful than a test based on \(E^{\prime}\) in the sense that there is a higher probability of rejecting a false null hypothesis. We will formulate an asymptotic version of this idea. Suppose that we conduct the same experiment \(n\) times independently to test the veracity of the hypothesis \(\mathcal{C}\), resulting in outcomes \(\omega_{1},\ldots,\omega_{n}\). For two \(e\)-statistics \(E,E^{\prime}\in\mathcal{E}_{\mathcal{C}}\), the law of large numbers states that if \(P\) is true, it will almost surely hold that \[\frac{\prod_{i=1}^{n}E(\omega_{i})}{\prod_{i=1}^{n}E^{\prime}(\omega_{i})}=\exp\!\left(n\int_{\Omega}\ln\!\left(\frac{E}{E^{\prime}}\right)\mathrm{d}P+o(n)\right)\!.\] It follows that if the integral \(\int_{\Omega}\ln\!\left(\frac{E}{E^{\prime}}\right)\mathrm{d}P\) is positive then with high probability \(E\) will give more evidence against \(\mathcal{C}\) than \(E^{\prime}\) if the alternative is true, i.e. a test based on \(E\) will asymptotically have more power than a test based on \(E^{\prime}\). We now return to Example 1, where the GRO criterion is not able to distinguish between \(e\)-variables, but we are able to do so with Definition 2 and Theorem 5. **Example 1** (continued).: In the case that \(P\) is the standard Cauchy and \(\mathcal{C}=\{Q\}\), where \(Q\) is the standard Gaussian, it is straightforward to see that the likelihood ratio between \(P\) and \(Q\) is an \(e\)-statistic, i.e. \[\int_{\Omega}\frac{\mathrm{d}P}{\mathrm{d}Q}\,\mathrm{d}Q=\int_{\Omega}\,\mathrm{d}P=1.\] However, for the growth rate it holds that \[\int_{\Omega}\ln\!\left(\frac{\mathrm{d}P}{\mathrm{d}Q}\right)\mathrm{d}P=D(P\|Q)=\infty.\] The same argument can be used to show that for any \(0<c\leq 1\), we have an _e_-statistic given by \(c\mathrm{d}P/\mathrm{d}Q\), which still has infinite growth rate. The GRO criterion in Definition 1 is not able to tell which of these _e_-statistics is preferable. However, since \(Q\) is the universal RIPr of \(P\) on \(\mathcal{C}\), it follows from Theorem 5 that \(\nicefrac{{\mathrm{d}P}}{{\mathrm{d}Q}}\) is the strongest _e_-statistic, and in particular stronger than \(c\mathrm{d}P/\mathrm{d}Q\) for all \(0<c<1\). ### _Approximation_ So far, we have discussed the existence and properties of the universal reverse information projection of \(P\) on \(\mathcal{C}\). However, there will be many situations where it is infeasible to compute this exact projection, as it requires solving a complex minimization problem. For example, if \(\mathcal{C}\) is given by the convex hull of some parameterized family of distributions, then the universal reverse information projection might be an arbitrary mixture of elements of this family, and the minimization problem need not be convex in the parameters of the family. It will therefore often be useful to resort to algorithms that provide an approximation. Existing algorithms focus on the case that \(D(P\|\mathcal{C})<\infty\), such as those proposed by Li [6, Theorem 3.3] or Csiszar and Tusnady [19, Theorem 5].
However, the convergence guarantees that these algorithms provide are in terms of information divergence, i.e. if \(\hat{Q}_{i}\) is the approximation of the projection after \(i\) iterations, then \(D(P\|\hat{Q}_{i})\to D(P\|\mathcal{C})\). If we want to use such an approximation for hypothesis testing, this is not enough: we need that \(\nicefrac{{p}}{{\hat{q}_{i}}}\) gets closer and closer to being an _e_-statistic. The following proposition gives a condition under which this is true. **Proposition 4**.: _Assume that \(D(P\|\mathcal{C})<\infty\), fix \(Q,Q^{\prime}\in\mathcal{C}\), and let \(C_{q,q^{\prime}}>0\) be such that_ \[\int_{\Omega}\!\left(\frac{q^{\prime}}{q}\right)^{2}\mathrm{d}P=C_{q,q^{\prime}}. \tag{7}\] _Then if we set \(\delta:=D(P\|Q\rightsquigarrow\mathcal{C})\), we have_ \[\int_{\Omega}\frac{p}{q}\,\mathrm{d}Q^{\prime}=\int_{\Omega}\frac{q^{\prime}}{q}\,\mathrm{d}P\leq 1+4\max\!\left\{\left(C_{q,q^{\prime}}\delta\right)^{\nicefrac{{1}}{{2}}},\delta\right\}\!. \tag{8}\] _Moreover, if, for some \(K\geq 0\), \(D(P\|Q^{\prime}\rightsquigarrow\mathcal{C})\leq K\delta\) and there exists a constant \(c_{1}>0\) such that either \(\nicefrac{{q^{\prime}}}{{q}}\leq c_{1}\) \(P\)-almost surely or \(\nicefrac{{q}}{{q^{\prime}}}\leq c_{1}\) \(P\)-almost surely, then_ \[\int_{\Omega}\frac{p}{q}\,\mathrm{d}Q^{\prime}=\int_{\Omega}\frac{q^{\prime}}{q}\,\mathrm{d}P\leq 1+(4(2c_{1}\max\{1,K\})^{\nicefrac{{1}}{{2}}}+2)\delta. \tag{9}\] We conjecture that a similar result will be true in the more general setting when \(\inf_{Q\in\mathcal{C}}D(P\|Q\rightsquigarrow\mathcal{C})<\infty\) even when \(D(P\|\mathcal{C})=\infty\). Furthermore, while the conditions needed for (8) and (9) may seem very strong, the following example shows that in general such a condition is needed for the convergence to hold. **Example 3**.: Let \(\mathcal{Q}\) represent the family of geometric distributions on \(\Omega=\mathbb{N}_{0}\) and let \(\mathcal{C}=\text{conv}(\mathcal{Q})\). The elements of \(\mathcal{Q}\) are denoted by \(Q_{\theta}\) with density \(q_{\theta}(n)=\theta^{n}(1-\theta)\), where \(\theta\in[0,1)\) denotes the probability of failure. For simplicity, assume that \(P\in\mathcal{Q}\) so that the reverse information projection of \(P\) on \(\mathcal{C}\) is equal to \(P\). Take for example \(P=Q_{\nicefrac{{1}}{{2}}}\), then for any \(\theta,\theta^{\prime}\in[0,1)\) \[\int_{\Omega}\frac{q_{\theta^{\prime}}}{q_{\theta}}\,\mathrm{d}P =\sum_{n=0}^{\infty}\!\left(\frac{1}{2}\frac{\theta^{\prime}}{\theta}\right)^{n}\!\frac{1}{2}\frac{1-\theta^{\prime}}{1-\theta} \tag{10}\] \[=\begin{cases}\frac{1}{1-\frac{\theta^{\prime}}{2\theta}}\cdot\frac{1}{2}\frac{1-\theta^{\prime}}{1-\theta},&\text{ if }\theta^{\prime}<2\theta;\\ \infty,&\text{ otherwise};\end{cases} \tag{11}\] whereas \[D(P\|Q_{\theta}) =\sum_{n=0}^{\infty}\!\left(\frac{1}{2}\right)^{n+1}(-n\log(2\theta)-\log(2(1-\theta)))\] \[=\log\frac{\nicefrac{{1}}{{2}}}{1-\theta}+\log\frac{\nicefrac{{1}}{{2}}}{\theta}.\] Now consider a sequence \(\nicefrac{{1}}{{3}}<\theta_{1}<\theta_{2}<\theta_{3}\ldots\) that converges to \(\nicefrac{{1}}{{2}}\). Then by the above, \[D(P\|Q_{\theta_{i}})\to 0=D(P\|\mathcal{C}).\] We also see that for all \(i\) and all \(\theta^{\prime}\in[2\theta_{i},1)\), we have \[\int_{\Omega}q_{\theta^{\prime}}/q_{\theta_{i}}\mathrm{d}P=\infty,\] i.e. for all \(i\) we have \(\sup_{\theta^{\prime}\in[0,1)}\int_{\Omega}q_{\theta^{\prime}}/q_{\theta_{i}}\mathrm{d}P=\infty\). This shows that in general, a condition such as (7) is necessary.
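A numeric check of Example 3 may be instructive. The Python sketch below (parameter values ours) compares the truncated sum (10) against the closed form (11) for \(\theta^{\prime}<2\theta\), and illustrates the divergence for \(\theta^{\prime}\geq 2\theta\).

```python
import numpy as np

def integral(theta_p, theta, N=10_000):
    """Truncated version of (10): integral of q_{theta'}/q_theta dP, P = Q_{1/2}."""
    n = np.arange(N)
    with np.errstate(over="ignore"):
        terms = (theta_p / (2 * theta)) ** n
    return terms.sum() * 0.5 * (1 - theta_p) / (1 - theta)

theta = 0.4
print(integral(0.5, theta))   # finite, since 0.5 < 2*theta = 0.8
print(1 / (1 - 0.5 / (2 * theta)) * 0.5 * (1 - 0.5) / (1 - theta))  # closed form (11)
print(integral(0.9, theta))   # inf: the geometric sum diverges for theta' >= 2*theta
```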
## V Future and Related Work The results presented thus far suggest various avenues for further research of which we now discuss two. First, Theorem 3 is formulated for general measures so one may ask for an interpretation of the universal RIPr in the case that \(P\) and \(\mathcal{C}\) are not probability measures. If \(\Omega\) is finite and \(\lambda\) is a measure on \(\Omega\), then we may define a probability measure \(Po(\lambda)\) as the product measure \(Po(\lambda)=\bigotimes_{\omega\in\Omega}Po(\lambda(\omega))\), where \(Po(\lambda(\omega))\) denotes the Poisson distribution with mean \(\lambda(\omega)\). With this definition we get \[D(P\|Q\rightsquigarrow Q^{\prime})=D(Po(P)\|Po(Q)\rightsquigarrow Po(Q^{\prime})).\] Furthermore, it can be shown that if the universal RIPr \(\hat{Q}\) of \(P\) on \(\mathcal{C}\) exists and is an element of \(\mathcal{C}\), then \(Po(\hat{Q})\) is also the universal RIPr of \(Po(P)\) on the convex hull of \(\mathcal{C}^{\prime}:=\{Po(Q)|Q\in\mathcal{C}\}\). Consequently, the likelihood ratio \(\mathrm{d}Po(P)/\mathrm{d}Po(\hat{Q})\) can be thought of as an _e_-statistic for \(\mathcal{C}^{\prime}\). More work is needed to determine whether this interpretation has any applications and if it can be generalized to arbitrary \(\Omega\). Second, even if \(D(P\|\mathcal{C})=\infty\), the Renyi divergence \(D_{\alpha}(P\|Q)\) (see e.g. [20]) may be a well-defined non-negative real number for \(\alpha\in(0,1)\) and \(Q\in\mathcal{C}\). These Renyi divergences are jointly convex in \(P\) and \(Q\) [20] and for each \(0<\alpha<1\) one may define a reversed Renyi projection \(\hat{Q}_{\alpha}\) of \(P\) on \(\mathcal{C}\) [21]. If it exists, it can be shown that this distribution will satisfy \[\int_{\Omega}\!\left(\frac{\mathrm{d}P}{\mathrm{d}\hat{Q}_{\alpha}}\right)^{\alpha}\mathrm{d}Q\leq 1\] for all \(Q\in\mathcal{C}\), i.e. \(\left(\mathrm{d}P/\mathrm{d}\hat{Q}_{\alpha}\right)^{\alpha}\) is an _e_-statistic. We conjecture that the projections \(\hat{Q}_{\alpha}\) will converge to the universal RIPr for \(\alpha\) tending to 1, which might lead to further applications. Except for the present paragraphs, this article is near-identical to the paper that we submitted to ISIT 2023 under the same title. After acceptance of that paper, we discovered that there is some overlap between work concurrently done by Zhang et al. [22] and our results on the existence of optimal _e_-statistics. In particular, they show that if \(\mathcal{C}\) is a convex polytope, then there exists an _e_-statistic in the form of a likelihood ratio between two unspecified measures. Since a convex polytope contains the uniform mixture of its vertices, which can be shown to have finite information gain, this also follows from our Theorem 3. They furthermore show that if the alternative is also a convex polytope \(\mathcal{A}\), then at least one of their _e_-statistics in the form of a likelihood ratio satisfies \(\inf_{P\in\mathcal{A}}\int_{\Omega}\ln E\,\mathrm{d}P>0\). More work is needed to determine whether our construction leads to a similar result for composite alternatives. Finally, it should be noted that the techniques used to prove their results appear to be of a completely different nature than the ones used in this paper, as they rely mostly on classical results in convex geometry together with results on optimal transport.
2304.11988
Graph-theoretical optimization of fusion-based graph state generation
Graph states are versatile resources for various quantum information processing tasks, including measurement-based quantum computing and quantum repeaters. Although the type-II fusion gate enables all-optical generation of graph states by combining small graph states, its non-deterministic nature hinders the efficient generation of large graph states. In this work, we present a graph-theoretical strategy to effectively optimize fusion-based generation of any given graph state, along with a Python package OptGraphState. Our strategy comprises three stages: simplifying the target graph state, building a fusion network, and determining the order of fusions. Utilizing this proposed method, we evaluate the resource overheads of random graphs and various well-known graphs. Additionally, we investigate the success probability of graph state generation given a restricted number of available resource states. We expect that our strategy and software will assist researchers in developing and assessing experimentally viable schemes that use photonic graph states.
Seok-Hyung Lee, Hyunseok Jeong
2023-04-24T10:46:54Z
http://arxiv.org/abs/2304.11988v4
# Graph-theoretical optimization of fusion-based graph state generation ###### Abstract Graph states are versatile resources for various quantum information processing tasks, including measurement-based quantum computing and quantum repeaters. Although the type-II fusion gate enables all-optical generation of graph states by combining small graph states, its non-deterministic nature hinders the efficient generation of large graph states. In this work, we present a graph-theoretical strategy to effectively optimize fusion-based generation of any given graph state, along with a Python package _OptGraphState_. Our strategy comprises three stages: simplifying the target graph state, building a fusion network, and determining the order of fusions. Utilizing this proposed method, we evaluate the resource overheads of random graphs and various well-known graphs. We expect that our strategy and software will assist researchers in developing and assessing experimentally viable schemes that use photonic graph states. Graph states represent a family of multi-qubit states where qubits are entangled following a specific structure determined by an associated graph. Owing to their highly entangled nature [1], graph states find applications in various quantum information processing domains, such as measurement-based quantum computing (MBQC) [2; 3; 4; 5], fusion-based quantum computing (FBQC) [6], quantum error correction [7; 8], quantum secret sharing [9; 10], quantum repeaters [11; 12; 13; 14], and quantum metrology [15]. Photonic qubit-based graph states are particularly crucial in these applications, not only because photons are predominantly used in quantum communication but also because MBQC can circumvent the need for in-line near-deterministic multi-qubit gates that are challenging in photonic systems [16]. All-optical construction of photonic graph states commonly proceeds by merging multiple smaller graph states into a larger one using _fusion operations of types I and/or II_ [17]. The failures of these operations are heralded, presenting a significant advantage [18] over alternative methods such as the post-selected controlled-Z (CZ) gate [19; 20] and the post-selected fusion gate [17]. Among the two fusion types, we focus exclusively on type II. This is because, assuming photodetectors with negligible dark counts, a type-I fusion could potentially transform a photon loss into an undetectable computational error [21], whereas any photon loss occurring before or during a type-II fusion can be identified. The non-deterministic nature of fusions is a crucial consideration when investigating quantum tasks using photonic graph states. When employing dual-rail-encoded qubits (such as polarization qubits) and restricting the setup to linear-optical devices and photodetectors, the success probability of a type-II fusion is limited to 50% without ancillary resources [22]. Higher success probabilities can be achieved by utilizing ancillary photons [23; 24], encoded qubits [25; 26], redundant graph structures [27; 28; 21], or continuous-variable techniques [29; 30; 31; 32; 33; 34]. Through these methods, fault-tolerant linear-optical MBQC is theoretically possible. For instance, our recent research verified that high photon loss thresholds of around 8% under a uniform photon loss model can be attained by employing parity-encoded multiphoton qubits [35]. Despite these advancements, resource overhead remains a significant challenge for generating large-scale graph states.
Specifically, the number of required basic resource states (such as three-photon Greenberger-Horne-Zeilinger states) or optical elements like photodetectors increases exponentially as the number of fusions grows. Consequently, it is essential to carefully design a procedure for generating a desired graph state from basic resource states to minimize resource overhead as much as possible. While several prior studies [36, 37] have addressed this issue, they only considered specific graph families and type-I fusion. In our previous work [35], we proposed a partial solution for general graphs and type-II fusion using a fine-tuning strategy; however, there is still considerable scope for improvement. In this work, we introduce a graph-theoretical strategy to effectively identify a resource-efficient method for fusion-based generation of any given graph state, building upon and generalizing the strategies presented in Ref. [35]. A single trial of our strategy comprises three main stages: (i) simplifying the graph state through the process of _unraveling_, (ii) constructing a _fusion network_ (a graph that dictates the required fusions between basic resource states), and (iii) determining the order of fusions. A sufficient number of trials are repeated with randomness and the one with the smallest resource overhead is selected as the outcome of the strategy. Although our approach does not guarantee an optimal method, we provide evidence of its power and generality, making it suitable for studying various tasks involving graph states. Our strategy is implemented in an open-source Python package, _OptGraphState_, which is publicly available on GitHub: [https://github.com/seokhyung-lee/OptGraphState](https://github.com/seokhyung-lee/OptGraphState). This paper is structured as follows: In Sec. 1, we review the definitions and basic properties of graph states and type-II fusion. In Sec. 2, we describe our optimization strategy step by step with examples. In Sec. 3, we compute the resource overheads of various graphs using our strategy and numerically verify its effectiveness by comparing it with alternative strategies that lack certain stages of the original strategy. We conclude with final remarks in Sec. 4. ## 1 Preliminaries ### Graph states and their equivalence relation For a given graph \(G=(V,\,E)\) with a vertex set \(V\) and an edge set \(E\), a graph state \(\ket{G}_{V}\) on qubits placed at the vertices is defined as \[\ket{G}_{V}:=\prod_{\{v_{1},v_{2}\}\in E}\hat{U}_{\text{CZ}}(v_{1},v_{2})\bigotimes_{v\in V}\ket{+}_{v},\] where \(\hat{U}_{\text{CZ}}(v_{1},v_{2})\) is the controlled-Z (CZ) gate between the qubits at \(v_{1}\) and \(v_{2}\) and \(\ket{+}_{v}\) is the state \(\ket{0}+\ket{1}\) on \(v\). (We omit normalization coefficients throughout the paper unless necessary.) The graph state \(\ket{G}_{V}\) has the stabilizer group generated by \[\Bigg{\{}\hat{S}_{v}:=\hat{X}_{v}\prod_{u\in\text{adj}(v)}\hat{Z}_{u}\ \Bigg{|}\ v\in V\Bigg{\}},\] where \(\hat{X}_{v}\) and \(\hat{Z}_{v}\) are respectively the Pauli-X and Z operators on \(v\) and \(\text{adj}(v)\) is the set of the adjacent vertices of \(v\). Namely, \(\hat{S}_{v}\ket{G}_{V}=\ket{G}_{V}\) for every \(v\in V\). An important problem regarding graph states is whether two different graph states are equivalent under a unitary operation, especially under a sequence of single-qubit Clifford operations.
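For small graphs, the definitions above can be checked directly with dense linear algebra. The following Python sketch (our own illustration, unrelated to _OptGraphState_'s API) builds \(\ket{G}\) for a three-vertex path graph by applying CZ gates to \(\ket{+}^{\otimes 3}\) and verifies one stabilizer generator.

```python
import numpy as np

# |G> for the path graph 0-1-2: apply CZ on each edge of |+> tensored 3 times.
n, edges = 3, [(0, 1), (1, 2)]
state = np.ones(2**n) / np.sqrt(2**n)

# bits[i, q] = value of qubit q in computational-basis index i (qubit 0 leftmost)
bits = (np.arange(2**n)[:, None] >> np.arange(n - 1, -1, -1)[None, :]) & 1
for u, v in edges:
    state = state * (-1.0) ** (bits[:, u] * bits[:, v])  # CZ is diagonal

# Stabilizer generator S_1 = Z_0 X_1 Z_2 of the path graph: check S_1 |G> = |G>.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
S1 = np.kron(np.kron(Z, X), Z)
assert np.allclose(S1 @ state, state)
```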
For a graph \(G=(V,E)\) and a vertex \(v\in V\), we define a Clifford operator \[\hat{U}_{\text{LC}}(v):=e^{-i\frac{\pi}{4}\hat{X}_{v}}\prod_{u\in\text{adj}(v)}e^{i\frac{\pi}{4}\hat{Z}_{u}}. \tag{1}\] A _local complementation_ \(\text{LC}_{v}\) with respect to a vertex \(v\in V\) is defined as a graph operation that, for every pair of adjacent vertices of \(v\), connects them if they are disconnected and disconnects them if they are connected. As proved in Ref. [38] and visualized in Fig. 1, \(\hat{U}_{\text{LC}}(v)\) transforms \(G\) by a local complementation; namely, \[\hat{U}_{\text{LC}}(v)\ket{G}=\ket{\text{LC}_{v}(G)}.\] Figure 1: **Example of a local complementation (LC) and the corresponding single-qubit Clifford operations. \(\hat{R}_{X}\) and \(\hat{R}_{Z}\) indicate a \(\pi/2\) rotation around the \(x\)- and \(z\)-axis, respectively, in the Bloch sphere; namely, \(\hat{R}_{P}:=\exp\left[i(\pi/4)\hat{P}\right]\) for \(\hat{P}\in\left\{\hat{X},\hat{Z}\right\}\). For the presented five-qubit graph state, applying \(\hat{R}_{X}^{\dagger}\) to vertex \(v_{1}\) and \(\hat{R}_{Z}\) to each of its neighbors is equivalent to transforming the graph by an LC with respect to \(v_{1}\).** Furthermore, it is known that two graph states are equivalent under a sequence of single-qubit Clifford operations if and only if one of their corresponding graphs can be obtained by applying a sequence of local complementations on the other [38]. The following describes several well-known families of graph states, visualized in Fig. 2: **Star graph.** The \(m\)-vertex star graph \(G_{*}^{(m)}\) is a graph where one of the vertices (say, \(v_{\text{root}}\)) is connected with all the other vertices that are not connected with each other; see Fig. 2(a) for an example. The vertex \(v_{\text{root}}\) is called the _root_ vertex of \(G_{*}^{(m)}\) and the other vertices are called its _leaf_ vertices. Note that the graph state \(\ket{G_{*}^{(m)}}\) is equivalent to the \(m\)-qubit Greenberger-Horne-Zeilinger (GHZ) state \(\ket{\text{GHZ}_{m}}:=\ket{0}^{\otimes m}+\ket{1}^{\otimes m}\) and the graph state of the \(m\)-vertex complete graph \(G_{\text{cml}}^{(m)}\) (where all the vertices are connected) under single-qubit Clifford gates; namely, \[\ket{\text{GHZ}_{m}} =\left(\prod_{v\in V_{\text{leaf}}}\hat{H}_{v}\right)\ket{G_{*}^{(m)}},\] \[\ket{G_{\text{cml}}^{(m)}} =\hat{U}_{\text{LC}}(v_{\text{root}})\ket{G_{*}^{(m)}},\] where \(V_{\text{leaf}}\) is the set of the leaf vertices of \(G_{*}^{(m)}\) and \(\hat{H}_{v}\) is the Hadamard gate on the qubit at \(v\). Star graph states are often used as basic resource states of photonic MBQC [28, 21, 39, 35, 31] and FBQC [6]. **Cycle graph.** A cycle graph consists of vertices connected in a closed chain. In particular, the graph state for the six-vertex cycle graph, which is shown in Fig. 2(b), is used as a basic resource state of FBQC [6]. **Lattice graph.** The \((m_{x},m_{y})\)-lattice graph for integers \(m_{x},m_{y}\geq 1\) has a two-dimensional (2D) square lattice structure where the vertices are repeated \(m_{x}\) (\(m_{y}\)) times along the \(x\)-axis (\(y\)-axis). See Fig. 2(c) for an example. Lattice graph states are particularly useful for 2D MBQC [2, 3], which is universal but not fault-tolerant [4]. Any single-qubit rotation and the controlled-not gate can be implemented by measuring qubits of a lattice graph state in appropriate single-qubit bases.
Raussendorf-Harrington-Goyal (RHG) lattice. The \((L_{x},L_{y},L_{z})\)-RHG lattice graph is composed of unit cells stacked \(L_{x}\), \(L_{y}\), and \(L_{z}\) times along the \(x\)-, \(y\)-, and \(z\)-axis, respectively. Each unit cell is cube-shaped and consists of vertices located at the faces and edges of the cube, as visualized in Fig. 2(d). RHG lattices are utilized in fault-tolerant three-dimensional (3D) MBQC [4, 5]. Logical qubits encoded in a surface code can be embedded into the lattice, and logical operations and measurements can be performed using only single-qubit measurements and state injection. A specific operator on each unit cell serves as a parity-check operator, whose measurement outcome is used to detect and locate errors. Tree graph. A tree graph is a connected acyclic graph. We particularly define the \((b_{0},b_{1},b_{2},\cdots)\)-tree graph for positive integers \(b_{0},b_{1},b_{2},\cdots\) as a tree graph in which one vertex (designated as its _root_ vertex) has \(b_{0}\) neighbors, called first-generation _branches_, and each \(i\)th-generation branch (\(i\geq 1\)) has \(b_{i}+1\) neighbors, all of which are \((i+1)\)th-generation branches except for one. As an example, see Fig. 2(e) for the \((4,2,2)\)-tree graph. One important usage of tree graphs is counterfactual error correction: by attaching tree graph states to the qubits used for 2D MBQC, qubit loss of up to 50% can be tolerated [40]. Such a technique can also be employed in 3D MBQC to suppress the effects of failed entangling operations during the construction of an RHG lattice [21]. Repeater graph. The \(4m\)-vertex repeater graph (\(m\geq 1\)) consists of \(2m\) completely-connected vertices and another \(2m\) vertices that are respectively connected with them; see Fig. 2(f) for the case of \(m=3\). A repeater graph state can be used for all-optical quantum repeaters [13], which distribute entanglement over a long distance by recursive entanglement swapping. Here, \(m\) determines the number of Bell-state measurements (BSMs) required per trial of entanglement swapping, which succeeds if any one of these \(m\) BSMs succeeds. Figure 2: **Examples of various well-known families of graphs.** See Sec. 1.1 for their descriptions and usages. ### Type-II fusion operation The _type-II fusion operation_ (hereafter referred to simply as "fusion") is a two-qubit operation that consists of applying a Hadamard gate (\(\hat{H}\)) to one of the qubits, followed by a BSM, and finally discarding the qubits [17]. In other words, a fusion is a destructive measurement of the two Pauli operators \(\hat{X}\otimes\hat{Z}\) and \(\hat{Z}\otimes\hat{X}\) on a pair of qubits. By applying a fusion to an unconnected pair \((v_{1},v_{2})\) of vertices in a graph state, we can connect (disconnect) every adjacent vertex of \(v_{1}\) with every adjacent vertex of \(v_{2}\), up to several Pauli-Z operators, if they are unconnected (connected); see the example in Fig. 3(a). More formally, for two unconnected vertices \(v_{1}\) and \(v_{2}\) of a graph \(G\), \(F_{v_{1},v_{2}}\) is defined as the graph operation that, for every \(u_{1}\in\operatorname{adj}\left(v_{1}\right)\) and \(u_{2}\in\operatorname{adj}\left(v_{2}\right)\), connects (disconnects) \(u_{1}\) and \(u_{2}\) if they are unconnected (connected), and then deletes \(v_{1}\) and \(v_{2}\) from the graph.
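The two graph operations introduced so far, the local complementation \(\text{LC}_{v}\) and the fusion rule \(F_{v_{1},v_{2}}\), can be sketched directly on graphs, for instance with NetworkX. The function names below are ours, and for simplicity the fusion sketch assumes that \(v_{1}\) and \(v_{2}\) have disjoint neighborhoods:

```python
import networkx as nx

def local_complement(G, v):
    """LC_v: toggle the edge between every pair of neighbors of v."""
    H = G.copy()
    nbrs = list(G.neighbors(v))
    for i, a in enumerate(nbrs):
        for b in nbrs[i + 1:]:
            if H.has_edge(a, b):
                H.remove_edge(a, b)
            else:
                H.add_edge(a, b)
    return H

def fuse(G, v1, v2):
    """F_{v1,v2}: toggle edges between adj(v1) and adj(v2), then delete v1, v2.
    Assumes v1 and v2 are unconnected and share no neighbors."""
    assert not G.has_edge(v1, v2)
    H = G.copy()
    for u1 in G.neighbors(v1):
        for u2 in G.neighbors(v2):
            if H.has_edge(u1, u2):
                H.remove_edge(u1, u2)
            else:
                H.add_edge(u1, u2)
    H.remove_nodes_from([v1, v2])
    return H
```

As a quick sanity check, fusing the root of one three-vertex star graph with a leaf of another under this rule yields the four-vertex star graph (up to the Pauli-Z byproducts described below), in line with the star-graph decomposition used later in Sec. 2.2.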
When \(v_{1}\) and \(v_{2}\) undergo a fusion in which the Hadamard gate is applied to \(v_{1}\), the resulting state is \[\prod_{u_{1}\in\operatorname{adj}\left(v_{1}\right)}\hat{Z}_{u_{1}}^{\frac{1-m_{\text{sign}}}{2}}\prod_{u_{2}\in\operatorname{adj}\left(v_{2}\right)}\hat{Z}_{u_{2}}^{\frac{1-m_{\text{lett}}}{2}}\left|F_{v_{1},v_{2}}(G)\right\rangle,\] where \((m_{\text{sign}},m_{\text{lett}})\) is the outcome of the BSM. Here, we denote the Bell basis as \[\left|\phi_{\pm}\right\rangle:=\left|00\right\rangle\pm\left|11\right\rangle, \tag{2}\] \[\left|\psi_{\pm}\right\rangle:=\left|01\right\rangle\pm\left|10\right\rangle, \tag{3}\] and the outcome of a BSM as \((\pm 1,1)\) if \(\left|\phi_{\pm}\right\rangle\) is obtained and \((\pm 1,-1)\) if \(\left|\psi_{\pm}\right\rangle\) is obtained. Figure 3: **Type-II fusion operation.** **(a)** Example of merging two graph states by a type-II fusion, which is composed of a Hadamard gate (\(\hat{H}\)) and a Bell-state measurement (BSM). \(m_{\text{sign}}\) and \(m_{\text{lett}}\) respectively denote the sign and letter outcomes of the BSM. If \(m_{\text{sign}}\) or \(m_{\text{lett}}\) is \(-1\), several Pauli-Z gates need to be applied to the resulting state to obtain the graph state. **(b)** BSM scheme for single-photon polarization qubits with polarizing beam splitters (PBSs), \(90^{\circ}\) and \(45^{\circ}\) wave plates, and four (A–D) photodetectors. A PBS transmits (reflects) horizontally-polarized (vertically-polarized) photons. The scheme distinguishes \(|\psi_{\pm}\rangle\): the outcome is \(|\psi_{+}\rangle\) if A and C, or B and D, each detect a single photon, and \(|\psi_{-}\rangle\) if A and D, or B and C, each detect a single photon. Otherwise, the scheme fails. The two distinguishable Bell states can be selected by inserting or removing wave plates before the first PBS. For single-photon polarization qubits with the basis of horizontally and vertically polarized single-photon states (\(\left|\mathsf{H}\right\rangle\), \(\left|\mathsf{V}\right\rangle\)), the BSM can be done with linear optical devices and photodetectors [41], as visualized in Fig. 3(b). This BSM scheme can distinguish only two of the four Bell states; thus the fusion succeeds with probability \[p_{\text{succ}}(\eta)=\frac{(1-\eta)^{2}}{2}\] when each photon suffers loss with probability \(\eta\) and the input state is maximally mixed. See Ref. [35] for a discussion of how failed fusions affect the resulting graph state. ## 2 Strategy for identifying a method for graph state generation In this section, we present our main result: a graph-theoretical strategy to effectively identify a resource-efficient method for generating an arbitrary graph state via fusions.
Our basic resource state is the three-qubit star graph state \(\left|G_{*}^{(3)}\right\rangle\), whose graph consists of one root vertex connected to two leaf vertices. A single trial of the strategy proceeds through the three stages outlined in the introduction (Steps 1-3): unraveling the graph, constructing a fusion network, and determining the fusion order; the trial is repeated with randomness a sufficient number of times and the best outcome is kept (Step 4). Throughout the unraveling stage, we record the set \(\mathcal{F}\) of fusions that must be performed afterwards to restore the original graph state, together with the accumulated product \(\hat{U}_{\text{C}}\) of single-qubit Clifford operations; \(\mathcal{F}\) is initialized to the empty set and \(\hat{U}_{\text{C}}\) to the identity. The resource overhead \(Q\) of a generation scheme is quantified as the average number of \(\left|G_{*}^{(3)}\right\rangle\) states required to generate the graph state. ### Unraveling the graph #### 2.1.1 Unraveling bipartitely-complete subgraphs **Definition 2.1** (**Bipartitely-complete graph/subgraph**). A graph \(G=(V,E)\) is an \((n,m)\) _bipartitely-complete graph_ for integers \(n,m\geq 2\) if \(V\) can be split into two disjoint subsets \(V_{1}\) and \(V_{2}\) such that \(|V_{1}|=n\), \(|V_{2}|=m\), and each vertex of \(V_{1}\) is connected with each vertex of \(V_{2}\) (namely, \(\{v_{1},v_{2}\}\in E\) for any \(v_{1}\in V_{1}\) and \(v_{2}\in V_{2}\)). If a subgraph of a graph is an \((n,m)\) bipartitely-complete graph, it is called an \((n,m)\) _bipartitely-complete subgraph (BCS)_. Note that a bipartitely-complete graph allows edges between vertices in one part; thus it differs from a complete bipartite graph. The parameters \((n,m)\) of a bipartitely-complete graph may not be uniquely determined. If \(G\) has an \((n,m)\) BCS, its two parts can be disconnected by adding two new vertices \(v_{1}\) and \(v_{2}\), connected respectively with all the vertices of the first and of the second part, and adding the pair \((v_{1},v_{2})\) to \(\mathcal{F}\); this replaces the \(nm\) edges between the parts with \(n+m\) edges and one fusion. See Fig. 4 for an example. This process is called unraveling the BCS. Figure 4: **Example of an unraveling process of a bipartitely-complete graph.** The vertices of the two parts are colored in blue and yellow, respectively. The original graph state can be constructed by performing a fusion on \(v_{1}\) and \(v_{2}\) of the unraveled graph state. In our strategy, we repeat the cycle of finding non-overlapping BCSs (that do not share any vertices) via Algorithm 1 and unraveling them as above until no new BCSs are found. The worst-case time complexity of Algorithm 1 is \(\mathcal{O}(|V|d_{\text{max}}^{4})\), where \(d_{\text{max}}\) is the largest degree, i.e., the largest number of adjacent vertices of any single vertex. Note that the iterations in Algorithm 1 are done in random orders because the final unraveled graph may vary depending on the orders and we want to suppress any possible bias during iteration (Step 4 of the strategy). All the randomness appearing from now on exists for the same reason.
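Concretely, the BCS-unraveling step can be sketched as follows (our own helper acting in place on a NetworkX graph; the names `fusions` and `label` are ours):

```python
import networkx as nx

def unravel_bcs(G, part1, part2, fusions, label):
    """Unravel an (n, m) bipartitely-complete subgraph with parts part1, part2:
    remove the n*m crossing edges, attach one new vertex to each part, and
    record the pending fusion that later restores the original graph state."""
    v1, v2 = f"{label}_1", f"{label}_2"
    for a in part1:
        for b in part2:
            G.remove_edge(a, b)
    G.add_edges_from((v1, a) for a in part1)
    G.add_edges_from((v2, b) for b in part2)
    fusions.append((v1, v2))   # F_{v1,v2} reconnects part1 with part2
```

Performing the recorded fusion \(F_{v_{1},v_{2}}\) on the unraveled state reconnects every vertex of the first part with every vertex of the second part, restoring the original graph.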
#### 2.1.2 Unraveling cliques **Definition 2.2** (**Clique**). A _clique_ of a graph \(G\) is a subgraph of \(G\) in which every pair of vertices is connected. A clique is _maximal_ if it cannot be enlarged by adding a new vertex. If \(G\) contains a clique, the graph can be simplified by using a local complementation. For a maximal clique of size greater than two, the unraveling process is conducted as follows (see Fig. 5 for an example): 1. Define \(V_{\text{cl}}\) as the set of the vertices in the clique and \(V_{\text{no.outer}}\subseteq V_{\text{cl}}\) as the set of the vertices in the clique that are connected only with vertices in the clique. 2. If \(V_{\text{no.outer}}\) is not empty, select a vertex \(v_{0}\) randomly from \(V_{\text{no.outer}}\). 3. If \(V_{\text{no.outer}}\) is empty, select a vertex \(v_{0}\) randomly from \(V_{\text{cl}}\) and separate the vertices in \(V_{\text{adj.out}}:=\operatorname{adj}\left(v_{0}\right)\setminus V_{\text{cl}}\) from \(v_{0}\). This is done as follows: (a) Add two new vertices \(v_{1}\) and \(v_{2}\) and connect them. (b) For each vertex in \(V_{\text{adj.out}}\), connect it with \(v_{1}\) and disconnect it from \(v_{0}\). (c) Add \((v_{0},v_{2})\) to \(\mathcal{F}\). 4. Transform the graph by a local complementation with respect to \(v_{0}\) and update \(\hat{U}_{\text{C}}\leftarrow\hat{U}_{\text{LC}}(v_{0})\hat{U}_{\text{C}}\). Figure 5: **Example of an unraveling process of a maximal clique.** The process is done through three steps: (a) selecting a random vertex \(v_{0}\) in the clique, (b) separating \(v_{0}\) and the adjacent vertices of \(v_{0}\) outside the clique by adding two vertices \(v_{1}\) and \(v_{2}\), and (c) applying a local complementation with respect to \(v_{0}\). Vertices in the clique are colored in blue and the new vertices are colored in yellow. The original graph state can be constructed by performing single-qubit Clifford operations (\(\hat{R}_{P}:=\exp\left[i(\pi/4)\hat{P}\right]\) for \(\hat{P}\in\left\{\hat{X},\hat{Z}\right\}\)) and a fusion on the unraveled graph state in (c). In our strategy, we repeat the cycle of finding non-overlapping maximal cliques (that do not share any vertices) and unraveling them as above until no new cliques are found. Listing all maximal cliques of a graph is an important problem in graph theory and is known to take exponential time in the worst case [48]. However, there exist algorithms that list them in polynomial time per clique [49, 50]; thus the problem can be solved efficiently if the graph does not contain many cliques. Our Python package OptGraphState uses the Graph.maximal_cliques method from the Python package _python-igraph_ [51], which implements a more advanced algorithm from Ref. [52].
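The clique-unraveling procedure above can be sketched as follows. The code reuses `local_complement` from the sketch in Sec. 1.1; `fusions`, `clifford_log`, and `label` are our own bookkeeping names, and the `random.choice` calls mirror the random selections in steps 2 and 3:

```python
import random

def unravel_clique(G, clique, fusions, clifford_log, label):
    """Unravel a maximal clique via one local complementation (Sec. 2.1.2)."""
    clique = set(clique)
    no_outer = [v for v in clique if set(G.neighbors(v)) <= clique]
    if no_outer:
        v0 = random.choice(no_outer)
    else:
        v0 = random.choice(sorted(clique))
        v1, v2 = f"{label}_1", f"{label}_2"
        G.add_edge(v1, v2)
        for u in set(G.neighbors(v0)) - clique:
            G.add_edge(u, v1)        # move v0's outside neighbors to v1
            G.remove_edge(u, v0)
        fusions.append((v0, v2))     # pending fusion recorded in F
    clifford_log.append(("LC", v0))  # record U_LC(v0), accumulated into U_C
    return local_complement(G, v0)   # clears the edges inside the clique
```

Since the neighbors of \(v_{0}\) inside the clique are pairwise connected, the final local complementation toggles all of those internal edges off, which is exactly the simplification shown in Fig. 5(c).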
#### 2.1.3 Additional notes It is a subtle problem whether to unravel bipartitely-complete subgraphs or cliques first. We choose randomly, since we currently have no basis for judging which option is better. One may expect that BCSs and cliques are quite non-trivial and not very common. However, the smallest BCS and the smallest clique considered for unraveling are a four-vertex cycle and a three-vertex cycle (triangle), respectively. These are simple enough and appear in various graphs such as square and triangular grid graphs and RHG lattices. Moreover, large bipartitely-complete subgraphs may appear when converting a logically encoded graph state into a graph state of physical qubits. For instance, a three-qubit linear graph state with the \((n,m)\) parity encoding contains at most \((n,m)\) bipartitely-complete subgraphs at the physical level; see Sec. 3.3 and Ref. [35] for more details. ### Construction of fusion network After unraveling a graph \(G\), we obtain an unraveled graph \(G_{\text{unrv}}\) along with the information \(\left(\hat{U}_{\text{C}},\mathcal{F}\right)\) that identifies the operations needed to restore \(\left|G\right\rangle\) from \(\left|G_{\text{unrv}}\right\rangle\). In particular, the fusions specified by \(\mathcal{F}\) are called _external fusions_ to distinguish them from the fusions used to generate the unraveled graph state. We now deal with the problem of building a fusion network from this result. We first formally define fusion networks as follows: **Definition 2.3** (**Fusion network**). A graph \(\mathcal{N}_{f}=(N,L)\), whose vertices and edges are referred to as _nodes_ and _links_, is a _fusion network_ of a graph state \(\left|G\right\rangle\) for _root indicators_ \(\{r_{l,n}\in\{0,1\}\ |\ \forall l\in L,\ \forall n\in l\}\) if \(\left|G\right\rangle\) can be generated by the following process: 1. Prepare a state \(\left|G_{*}^{(3)}\right\rangle\) for each node \(n\). Let \(q_{\text{root}}^{(n)}\) denote its root qubit and \(Q_{\text{leaf}}^{(n)}\) denote the set of its leaf qubits. 2. For each link \(l=\{n_{1},n_{2}\}\), iterate the following: (a) Let \(q_{1}\) be \(q_{\text{root}}^{(n_{1})}\) if \(r_{l,n_{1}}=1\) and an arbitrary unmeasured qubit in \(Q_{\text{leaf}}^{(n_{1})}\) if \(r_{l,n_{1}}=0\). Define \(q_{2}\) analogously for \(n_{2}\). (b) Apply appropriate single-qubit Clifford operations on \(q_{1}\) and \(q_{2}\), if required. (c) Perform a fusion on \(q_{1}\) and \(q_{2}\). 3. Apply appropriate single-qubit Clifford operators on the remaining qubits, if required. We say that a link \(l=\{n_{1},n_{2}\}\) is of type _root-to-root_, _root-to-leaf_, or _leaf-to-leaf_ when both \(r_{l,n_{1}}\) and \(r_{l,n_{2}}\) are equal to \(1\), only one of them is equal to \(1\), or both of them are equal to \(0\), respectively. We now describe how to build a fusion network \(\mathcal{N}_{f}\) and the corresponding root indicators \(\{r_{l,n}\}\) from the unraveled graph \(G_{\text{unrv}}\) and the external fusions \(\mathcal{F}\). The main idea is to decompose the graph state into multiple star graph states, each of which is again decomposed into multiple \(\left|G_{*}^{(3)}\right\rangle\) states. An \(m\)-qubit star graph state \(\left|G_{*}^{(m)}\right\rangle\) can be constructed by conducting fusions on \(m-2\) copies of \(\left|G_{*}^{(3)}\right\rangle\), which leads to a fusion network with \(m-2\) nodes connected linearly by root-to-leaf links; see Fig. 6(a) for an example with \(m=5\). Figure 6: **Examples of the construction of fusion networks.** **(a)** A five-qubit star graph state \(\left|G_{*}^{(5)}\right\rangle\) and **(b)** a general graph state \(\left|G\right\rangle\) are considered. In **(a)**, \(\left|G_{*}^{(5)}\right\rangle\) is decomposed into three \(\left|G_{*}^{(3)}\right\rangle\) states, which leads to a three-node linear fusion network that forms one node group. The process varies depending on the selection of the seed node (marked as “S”), which determines the root vertex of \(G_{*}^{(5)}\) (marked as “R”). A leaf vertex \(v_{f}\) of the star graph can be any of the four vertices (1–4) after the decomposition. In **(b)**, \(\left|G\right\rangle\) is decomposed into multiple star graph states, where each of them is again decomposed into \(\left|G_{*}^{(3)}\right\rangle\) states and forms one node group in the fusion network. The line styles of the links in the fusion network indicate their origins: black solid lines for fusions inside a star graph, orange double lines for fusions between star graphs, and a blue dashed line for an external fusion.
Note that there is an ambiguity in positioning the root qubit (marked as “R”) of \(\left|G_{*}^{(m)}\right\rangle\), as depicted in Fig. 6(a) with (1) and (2). The node for the \(\left|G_{*}^{(3)}\right\rangle\) state containing the root qubit of \(\left|G_{*}^{(m)}\right\rangle\) is called the _seed node_ (marked as “S”) of the _node group_ that consists of these \(m-2\) nodes. A general graph state \(\left|G\right\rangle\) can be generated by conducting fusions on leaf qubits of multiple star graph states, where each star graph originates from a vertex in \(G\) with degree larger than one. Consequently, its fusion network can be constructed by connecting the fusion networks of the individual star graphs (each of which forms one node group) with leaf-to-leaf links. An example is illustrated in Fig. 6(b), where root-to-leaf links and leaf-to-leaf links are represented by black single lines and orange double lines, respectively. If an external fusion exists, it also creates a link (blue dashed line) between nodes of different star graphs, as shown in Fig. 6(b). Such a link may belong to any one of the three types, depending on the vertices involved in the external fusion. Note that external fusions always appear between different star graphs, considering the unraveling processes in Figs. 4 and 5. It is important to note that the above process contains two types of ambiguity for each star graph (which are resolved randomly in our strategy): which node in the node group to select as its seed node, and which node to attach each leaf vertex to. To illustrate the latter with the example of Fig. 6(a), the leaf vertex \(v_{f}\) in \(G_{*}^{(5)}\) can be any of the four vertices (1–4) after the decomposition. Such ambiguity matters if \(G_{*}^{(5)}\) appears during the decomposition of a larger graph state. In other words, if \(v_{f}\) participates in a fusion, the resulting fusion network may vary depending on this selection. For example, \(G_{*}^{(5)}\) appears in the decomposition of Fig. 6(b), and in this case, vertex 3 in (1) is selected to be \(v_{f}\); thus, the link for the fusion is connected to node \(n_{2}\). We lastly note that the single-qubit Clifford operators required in the process of generating a graph state can be identified from \(\hat{U}_{\text{C}}\), the product of the single-qubit Clifford operations obtained from the unraveling process, and the fusion outcomes.
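The following sketch is our own simplification of this construction for a graph without external fusions; it fixes the seed node to the first node of each chain and attaches external links greedily, whereas the full strategy randomizes both choices. It also assumes every connected component has a vertex of degree greater than one:

```python
import networkx as nx

def build_fusion_network(G):
    """Fusion network for |G> (no external fusions): one node group per vertex
    of degree > 1; degree-1 vertices become leaves of their neighbor's star."""
    net = nx.Graph()
    free, groups = {}, {}
    for v in G:
        d = G.degree(v)
        if d <= 1:
            continue
        chain = [f"{v}.{i}" for i in range(d - 1)]   # d - 1 copies of |G3>
        net.add_nodes_from(chain)
        net.add_edges_from(zip(chain, chain[1:]))    # root-to-leaf links
        for i, n in enumerate(chain):                # qubits left after internal links
            internal = 0 if len(chain) == 1 else (1 if i in (0, len(chain) - 1) else 2)
            free[n] = 3 - internal
        free[chain[0]] -= 1                          # chain[0] is the seed node; its
        groups[v] = chain                            # root qubit is the star's root
    for u, v in G.edges:
        if u in groups and v in groups:              # leaf-to-leaf link for this edge
            nu = next(n for n in groups[u] if free[n] > 0)
            nv = next(n for n in groups[v] if free[n] > 0)
            net.add_edge(nu, nv)
            free[nu] -= 1
            free[nv] -= 1
    return net
```

The `free` counters track how many of the three qubits of each \(\left|G_{*}^{(3)}\right\rangle\) remain available, so every group of a degree-\(d\) vertex exposes exactly \(d\) leaf qubits, matching the star \(\left|G_{*}^{(d+1)}\right\rangle\) it represents.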
### Determination of fusion order We now have one stage left: determining the order of fusions. Let us regard a fusion network as a weighted graph where each node indicates a group of entangled qubits and each link represents a fusion between these groups that needs to be done. The weight of each node \(w(n)\), which is initialized to 1, is defined as the resource overhead of generating the corresponding entangled state; namely, \(w(n)\) is the average number of \(\left|G_{*}^{(3)}\right\rangle\) states required to generate the state. With this setting, the action of a fusion can be treated as the _contraction_ of a link \(l\): the two endpoints \((n_{1},n_{2})\) of the link are merged into a new node \(n_{l}\), and the links connected to the original nodes are reconnected to \(n_{l}\). The weight of \(n_{l}\) is updated as \[w(n_{l}):=\begin{cases}[w(n_{1})+w(n_{2})]/p_{\text{succ}}&\text{if }n_{1}\neq n_{2},\\ w(n_{1})/p_{\text{succ}}&\text{if }n_{1}=n_{2},\end{cases} \tag{5}\] where \(p_{\text{succ}}\) is the fusion success probability. Hence, if the order of the fusions is given, the resource overhead \(Q\) of the entire process is obtained as the sum of the weights of the nodes remaining after contracting all the links in that order. Note that an intermediate fusion network during this process may have loops (links connecting a node to itself) or multi-links (multiple links connecting the same two nodes). For a fusion network \(\mathcal{N}_{\text{f}}=(N,L)\), the number of possible fusion orders is \(|L|!\); thus it is extremely inefficient to randomly sample fusion orders and pick the best one unless there are very few links. Instead, our strategy is based on the following two intuitions: 1. It is preferable to contract links with small weights first, where the weight of a link \(l\) is defined as \(w(n_{l})\) in Eq. (5). This is because, defining \(f(x,y):=(x+y)/p_{\text{succ}}\) for two numbers \(x\) and \(y\), \[f(f(w_{1},w_{2}),w_{3})<f(w_{1},f(w_{2},w_{3}))\] whenever \(w_{1}<w_{2}<w_{3}\). 2. Links that do not share endpoints can be contracted simultaneously, and it is preferable to contract links as parallelly as possible. For example, consider a four-node linear fusion network with node set \(\{n_{1},n_{2},n_{3},n_{4}\}\), where \(n_{i}\) and \(n_{i+1}\) are connected by a link \(l_{i,i+1}\) for each \(i\in\{1,2,3\}\). Provided that \(p_{\text{succ}}=1/2\), we obtain \(Q=16\) if \(l_{1,2}\) and \(l_{3,4}\) are first contracted in parallel, but \(Q=22\) if \(l_{2,3}\) is contracted first. Based on these intuitions, we introduce the _min-weight-maximum-matching-first_ method to determine the fusion order. In each round of the link contraction process, we first identify the set of links with the smallest weight and take the subgraph \(\mathcal{N}_{\text{min.wgt}}\) of the intermediate fusion network induced by these links. We then find a maximum matching of \(\mathcal{N}_{\text{min.wgt}}\), which is the largest set of links that do not share any nodes, and contract these links in parallel. By repeating this procedure until no links remain, we determine the fusion order and calculate the resource overhead \(Q\). We illustrate an example in Fig. 7. To compute a maximum matching, our software uses the max_weight_matching function from the Python package _NetworkX_ [53], which is based on the algorithm in Ref. [54] and runs in time \(\mathcal{O}(n^{3})\) in the number \(n\) of nodes.
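A compact sketch of this contraction loop (our own code, not the package implementation), assuming for simplicity that the intermediate network stays a simple graph without loops or multi-links:

```python
import networkx as nx

def resource_overhead(links, p_succ=0.5):
    """Contract all links in min-weight-maximum-matching-first order; return Q."""
    net = nx.Graph(links)
    w = {n: 1.0 for n in net}                      # each node starts as one |G3>

    def lw(u, v):                                  # link weight, Eq. (5)
        return (w[u] + w[v]) / p_succ

    while net.number_of_edges():
        w_min = min(lw(u, v) for u, v in net.edges)
        sub = net.edge_subgraph(
            [(u, v) for u, v in net.edges if lw(u, v) == w_min])
        for u, v in nx.max_weight_matching(sub, maxcardinality=True):
            w[u] = lw(u, v)                        # contract {u, v} into u
            del w[v]
            net = nx.contracted_nodes(net, u, v, self_loops=False)
    return sum(w[n] for n in net)

# Four-node linear network from intuition 2: parallel-first contraction gives 16.
assert resource_overhead([(1, 2), (2, 3), (3, 4)], p_succ=0.5) == 16.0
```

On the four-node linear example, the first round contracts \(l_{1,2}\) and \(l_{3,4}\) in parallel (both of weight 4), and the second round contracts the remaining link of weight 16, reproducing \(Q=16\) from the text.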
#### 2.3.1 Note on the average number of fusions One may want to quantify resource overheads by the average number of required fusion attempts instead of the average number of required \(\left|G_{*}^{(3)}\right\rangle\) states. In such a case, Eq. (5) should be modified to \[w(n_{l}):=\begin{cases}[w(n_{1})+w(n_{2})+1]/p_{\text{succ}}&\text{if $n_{1}\neq n_{2}$,}\\ [w(n_{1})+1]/p_{\text{succ}}&\text{if $n_{1}=n_{2}$,}\end{cases}\] and the weights of the nodes should be initialized to \(0\). All the other parts of the strategy remain the same. OptGraphState provides an option to use this alternative resource measure instead of \(Q\). ### Iteration Since the method introduced above involves randomness in several stages, it may produce a different outcome each time it is attempted. We therefore iterate the method a sufficient number of times and choose the best outcome. Our software uses an _adaptive_ method to determine the number of iterations: denoting by \(R(m)\) the process of iterating the strategy \(m\) times, we first perform \(R(m_{\text{init}})\) for a given integer \(m_{\text{init}}\geq 1\) and obtain \(Q_{\text{opt}}^{(1)}\), the minimal value of \(Q\) among the samples. We then perform \(R(2m_{\text{init}})\) and obtain \(Q_{\text{opt}}^{(2)}\). If \(Q_{\text{opt}}^{(1)}\leq Q_{\text{opt}}^{(2)}\), we stop the iteration and return \(Q_{\text{opt}}^{(1)}\). Otherwise, we perform \(R(4m_{\text{init}})\), obtain \(Q_{\text{opt}}^{(3)}\), and stop the iteration if \(Q_{\text{opt}}^{(2)}\leq Q_{\text{opt}}^{(3)}\), returning \(Q_{\text{opt}}^{(2)}\). If \(Q_{\text{opt}}^{(2)}>Q_{\text{opt}}^{(3)}\), we perform \(R(8m_{\text{init}})\), obtain \(Q_{\text{opt}}^{(4)}\), and so on. We refer to this method as the _adaptive iteration_ with the given value of \(m_{\text{init}}\). ## 3 Applications of the strategy In this section, we present numerical results obtained by applying our strategy to various graphs. We first analyze the distribution of resource overheads for random graphs, showing its tendency with respect to the numbers of vertices and edges. We then provide numerical evidence indicating that each stage of our strategy contributes significantly to lowering the resource overhead. Lastly, we show the calculated resource overheads of the various well-known graphs described in Sec. 1.1. Throughout the section, \(|V|\) and \(|E|\) denote the numbers of vertices and edges of a given graph, and \(|E|_{\text{max}}:=|V|(|V|-1)/2\) is the maximal possible number of edges for a given value of \(|V|\) (under the assumption that there are no loops and multi-edges). Figure 7: **Example of the determination of the fusion order with the min-weight-maximum-matching-first method.** We assume \(p_{\text{succ}}=1/2\). Each step shows an intermediate fusion network after contracting the links (orange bold lines) of the previous step. The numbers inside the nodes indicate their weights. The obtained resource overhead is \(Q=64\), which is the weight of the last remaining node. ### Analysis of random graphs To sample random graphs, we use the Erdős–Rényi model [55], in which all graphs with given fixed values of \(|V|\) and \(|E|\) have equal probability. Figure 8 visualizes the distributions of the resource overheads optimized by our strategy for various values of \(|V|\) and \(|E|\) when \(p_{\text{succ}}=0.5\) or \(0.75\). Here, we sample \(100\) random graphs for each combination \((p_{\text{succ}},|V|,|E|)\) and use the adaptive iteration method with \(m_{\text{init}}=600\). Several observations follow from the results: * \(Q_{\text{opt}}\) increases exponentially (or super-exponentially) as \(|V|\) grows when \(|E|/|E|_{\max}\) is fixed.
* For a fixed value of \(|V|\), \(Q_{\text{opt}}\) is maximal when \(|E|\approx 0.6|E|_{\max}\). \(Q_{\text{opt}}\) is inversely correlated with \(|E|\) for large values of \(|E|\), since bipartitely-complete subgraphs and cliques are more likely to appear when \(|E|\) is large. * The fusion scheme with \(p_{\text{succ}}=0.75\) may greatly reduce the order of \(Q_{\text{opt}}\) compared to the one with \(p_{\text{succ}}=0.5\), especially when \(|V|\) is large. Note that, to achieve \(p_{\text{succ}}=0.75\) with linear optics, we require an ancillary two-photon Bell state [23] or four ancillary unentangled photons [24] per fusion, as well as photon-number-resolving detectors that can discriminate at least four photons. On the other hand, the scheme with \(p_{\text{succ}}=0.5\) requires only on-off detectors and no ancillary photons. Figure 8: **Distribution of the optimized resource overhead \(Q_{\text{opt}}\) for random graphs.** Random graphs are sampled with fixed numbers of vertices (\(|V|\)) and edges (\(|E|\)) by the Erdős–Rényi model [55]. Two different fusion success rates are considered: \(p_{\text{succ}}\in\{0.5,0.75\}\). \(|E|_{\max}=|V|(|V|-1)/2\) is the maximal possible number of edges for the given vertex number. For each combination of \((p_{\text{succ}},|V|,|E|)\), we sample \(100\) random graphs and obtain the distribution of \(Q_{\text{opt}}\) through the adaptive iteration method with \(m_{\text{init}}=600\). The median of the distribution is indicated as a dot and its total range is shown as a shaded region. ### Performance analysis We now show that our strategy is indeed effective by comparing it with two “deficient” strategies, each missing a certain stage of the original “full” strategy. In detail, we consider the following two alternative strategies: * The strategy without the unraveling process, where the original graph is directly used for generating a fusion network. The other steps are the same as in the full strategy. * The strategy where the fusion order is selected randomly without using the min-weight-maximum-matching-first method. The other steps are the same as in the full strategy. In Fig. 9, the distributions of \(Q_{\text{opt}}\) optimized by these three strategies for random graphs are presented as box plots. Each box extends from the first quartile (Q1) to the third quartile (Q3) and the corresponding whisker covers the entire range of the values. The plots clearly show that the full strategy is significantly more powerful than the deficient ones, especially when there are many vertices and edges. In other words, each stage of the full strategy contributes to reducing the resource overhead. ### Applications to well-known graphs Lastly, we investigate the resource overheads of the graph states in Sec. 1.1, which are utilized in various quantum tasks such as MBQC, FBQC, quantum repeaters, and quantum error correction. Besides them, we also consider parity-encoded graph states, which are used as basic resource states of the parity-encoding-based topological quantum computing (PTQC) protocol in Ref. [35] and of FBQC in Ref. [6]. The \((n,m)\) _parity code_ (or _generalized Shor code_) [56] encodes a single logical qubit with the basis \[\bigg{\{}\Big{(}|0\rangle^{\otimes m}+|1\rangle^{\otimes m}\Big{)}^{\otimes n}\pm\Big{(}|0\rangle^{\otimes m}-|1\rangle^{\otimes m}\Big{)}^{\otimes n}\bigg{\}},\] where \(\{|0\rangle\,,|1\rangle\}\) is the physical-level basis.
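For concreteness, the logical basis above can be constructed numerically; the following is a small sketch with our own helper name (not part of OptGraphState):

```python
import numpy as np

def parity_code_basis(n, m):
    """Return the logical |0_L>, |1_L> of the (n, m) parity code as statevectors."""
    zero = np.zeros(2 ** m); zero[0] = 1.0        # |0>^{\otimes m}
    one = np.zeros(2 ** m); one[-1] = 1.0         # |1>^{\otimes m}
    plus_blk = zero + one                         # |0>^m + |1>^m (unnormalized)
    minus_blk = zero - one                        # |0>^m - |1>^m (unnormalized)
    bp, bm = plus_blk, minus_blk
    for _ in range(n - 1):
        bp = np.kron(bp, plus_blk)                # (|0>^m + |1>^m)^{\otimes n}
        bm = np.kron(bm, minus_blk)               # (|0>^m - |1>^m)^{\otimes n}
    logical0, logical1 = bp + bm, bp - bm
    return (logical0 / np.linalg.norm(logical0),
            logical1 / np.linalg.norm(logical1))

# Example: the (2, 2) parity code on four physical qubits.
zero_L, one_L = parity_code_basis(2, 2)
```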
An \((n,m)\) _parity-encoded graph state_ is a graph state in which the qubits on the vertices are encoded with the \((n,m)\) parity code. Such an encoded graph state can be rewritten as a graph state of physical-level qubits according to the rule in Fig. 10 [35]. We cover two types of logical-level graphs in the calculation: the 3-vertex star graph (for PTQC [35]) and the 6-vertex cycle graph (for FBQC [6]). In Table 1, we list the results of the resource analyses for these graph states, together with the basic information of the graphs. Additionally, in Appendix A, we present several detailed examples of the application of our strategy with visualization. Figure 9: **Comparison of the distributions of the optimized resource overhead \(Q_{\rm opt}\) for different optimization strategies.** Three strategies are considered: the strategy without unraveling (s1), the strategy with random selection of the fusion order (s2), and our full strategy. The subplots correspond to different values of \(|V|\in\{12,18,24\}\) and \(|E|/|E|_{\rm max}\in\{0.2,0.6\}\). For each setting, 100 graphs are sampled by the Erdős–Rényi model and 1200 iterations are done for each graph. (The adaptive iteration method is not used, for fair comparison.) The distribution of \(Q_{\rm opt}\) is visualized as a box plot, where the red line indicates the median, the box extends from the first quartile (Q1) to the third quartile (Q3), and the whisker covers the entire range of the values. Figure 10: **Rule for converting an \((n,m)\) parity-encoded graph state into a physical-level graph state.** A dashed box with a text “\(\times N\)” (for an integer \(N\)) indicates a bundle of recurrent subgraphs. Namely, the subgraph inside the box is repeated \(N\) times and, for each edge crossing the border of the box, an edge of the same format exists for every repeated subgraph. See Ref. [35] for more details. ## 4 Remarks Graph states are versatile resource states for various tasks in quantum computation and communication, such as measurement-based quantum computing (MBQC) [2, 4], fusion-based quantum computing (FBQC) [6], quantum error correction [7, 8], and quantum repeaters [11]. However, in optical systems, the non-deterministic nature of entangling operations hinders the generation of large-scale graph states; thus, the generation process should be carefully designed. In this work, we have described a graph-theoretical strategy to construct a resource-efficient method for generating an arbitrary graph state with the type-II fusion operation. Here, the resource overhead is quantified by the average number of basic resource states (three-qubit star graph states) required to generate the graph state without failed fusions. As outlined in Sec. 2, the strategy comprises multiple trials among which the best one is selected, where each trial contains three stages: unraveling the graph, constructing a fusion network, and determining the fusion order. In Sec. 3, we applied the strategy to various graph states and verified numerically that each stage of the strategy is indeed necessary to achieve high resource efficiency. We anticipate that our strategy and software will aid researchers in designing experimentally feasible approaches utilizing photonic graph states and in evaluating the practicality of their proposed schemes. For example, the basic resource states of MBQC and FBQC can be logically-encoded star or cycle graph states [35, 6].
Employing larger or more complex codes may improve the fault-tolerance of these schemes; however, generating such resource states could become a bottleneck in their implementation. \begin{table} \begin{tabular}{l r r r r r r r} \hline \hline \multirow{2}{*}{Graph} & \multirow{2}{*}{\(|V|\)} & \multirow{2}{*}{\(|E|\)} & \multirow{2}{*}{\(|E|/|E|_{\max}\)} & \multicolumn{2}{c}{\(p_{\mathrm{succ}}=0.5\)} & \multicolumn{2}{c}{\(p_{\mathrm{succ}}=0.75\)} \\ \cline{5-8} & & & & \(Q_{\mathrm{opt}}\) & \#Fusions & \(Q_{\mathrm{opt}}\) & \#Fusions \\ \hline 6-vertex star (\(G_{*}^{(6)}\)) & 6 & 5 & 0.33 & \(1.6\times 10^{1}\) & \(1.0\times 10^{1}\) & \(7.1\times 10^{0}\) & \(4.9\times 10^{0}\) \\ 12-vertex star (\(G_{*}^{(12)}\)) & 12 & 11 & 0.17 & \(1.1\times 10^{2}\) & \(7.4\times 10^{1}\) & \(2.7\times 10^{1}\) & \(2.1\times 10^{1}\) \\ 18-vertex star (\(G_{*}^{(18)}\)) & 18 & 17 & 0.11 & \(2.6\times 10^{2}\) & \(1.7\times 10^{2}\) & \(5.1\times 10^{1}\) & \(4.0\times 10^{1}\) \\ 24-vertex star (\(G_{*}^{(24)}\)) & 24 & 23 & 0.083 & \(5.4\times 10^{2}\) & \(3.6\times 10^{2}\) & \(8.2\times 10^{1}\) & \(6.5\times 10^{1}\) \\ \((3,3)\)-lattice & 9 & 12 & 0.33 & \(5.4\times 10^{2}\) & \(3.7\times 10^{2}\) & \(5.5\times 10^{1}\) & \(4.6\times 10^{1}\) \\ \((4,4)\)-lattice & 16 & 24 & 0.20 & \(7.7\times 10^{3}\) & \(5.2\times 10^{3}\) & \(2.4\times 10^{2}\) & \(2.0\times 10^{2}\) \\ \((5,5)\)-lattice & 25 & 40 & 0.13 & \(1.0\times 10^{5}\) & \(6.7\times 10^{4}\) & \(9.9\times 10^{2}\) & \(8.2\times 10^{2}\) \\ \((6,6)\)-lattice & 36 & 60 & 0.095 & \(7.9\times 10^{5}\) & \(5.3\times 10^{5}\) & \(2.8\times 10^{3}\) & \(2.3\times 10^{3}\) \\ \((1,1,1)\)-RHG lattice & 18 & 24 & 0.16 & \(1.9\times 10^{4}\) & \(1.3\times 10^{4}\) & \(3.9\times 10^{2}\) & \(3.4\times 10^{2}\) \\ \((2,2,2)\)-RHG lattice & 90 & 144 & 0.036 & \(2.8\times 10^{13}\) & \(1.8\times 10^{13}\) & \(8.0\times 10^{6}\) & \(6.5\times 10^{6}\) \\ \((2,2)\)-tree & 7 & 6 & 0.29 & \(2.8\times 10^{1}\) & \(1.8\times 10^{1}\) & \(1.0\times 10^{1}\) & \(7.3\times 10^{0}\) \\ \((2,2,2)\)-tree & 15 & 14 & 0.13 & \(2.1\times 10^{2}\) & \(1.4\times 10^{2}\) & \(4.0\times 10^{1}\) & \(3.1\times 10^{1}\) \\ \((2,2,2,2)\)-tree & 31 & 30 & 0.065 & \(1.6\times 10^{3}\) & \(1.1\times 10^{3}\) & \(1.4\times 10^{2}\) & \(1.1\times 10^{2}\) \\ \((3,3,3)\)-tree & 40 & 39 & 0.050 & \(1.7\times 10^{3}\) & \(1.2\times 10^{3}\) & \(1.8\times 10^{2}\) & \(1.5\times 10^{2}\) \\ \((4,4,4)\)-tree & 85 & 84 & 0.024 & \(1.2\times 10^{4}\) & \(7.8\times 10^{3}\) & \(6.1\times 10^{2}\) & \(4.9\times 10^{2}\) \\ \((8,2,2)\)-tree & 57 & 56 & 0.035 & \(1.6\times 10^{4}\) & \(1.0\times 10^{4}\) & \(4.7\times 10^{2}\) & \(3.8\times 10^{2}\) \\ Repeater graph with \(m=3\) & 12 & 21 & 0.32 & \(1.2\times 10^{2}\) & \(8.2\times 10^{1}\) & \(2.8\times 10^{1}\) & \(2.1\times 10^{1}\) \\ Repeater graph with \(m=4\) & 16 & 36 & 0.30 & \(2.1\times 10^{2}\) & \(1.4\times 10^{2}\) & \(4.3\times 10^{1}\) & \(3.3\times 10^{1}\) \\ Repeater graph with \(m=6\) & 24 & 78 & 0.28 & \(5.4\times 10^{2}\) & \(3.6\times 10^{2}\) & \(8.2\times 10^{1}\) & \(6.5\times 10^{1}\) \\ \((2,2)\) parity-encoded 3-star & 12 & 17 & 0.26 & \(1.2\times 10^{2}\) & \(8.2\times 10^{1}\) & \(2.8\times 10^{1}\) & \(2.1\times 10^{1}\) \\ \((3,3)\) parity-encoded 3-star & 27 & 48 & 0.14 & \(8.8\times 10^{2}\) & \(5.9\times 10^{2}\) & \(1.1\times 10^{2}\) & \(8.4\times 10^{1}\) \\ \((4,4)\) parity-encoded 3-star & 48 & 95 & 0.084 & \(2.4\times 10^{3}\) & \(1.6\times 10^{3}\) & \(2.3\times 10^{2}\) & \(1.9\times 10^{2}\) \\
\((5,5)\) parity-encoded 3-star & 75 & 158 & 0.057 & \(1.0\times 10^{4}\) & \(6.8\times 10^{3}\) & \(5.3\times 10^{2}\) & \(4.3\times 10^{2}\) \\ \((2,2)\) parity-encoded 6-cycle & 24 & 42 & 0.15 & \(1.3\times 10^{3}\) & \(8.5\times 10^{2}\) & \(1.2\times 10^{2}\) & \(9.9\times 10^{1}\) \\ \((3,3)\) parity-encoded 6-cycle & 54 & 114 & 0.080 & \(6.7\times 10^{3}\) & \(4.4\times 10^{3}\) & \(3.9\times 10^{2}\) & \(3.1\times 10^{2}\) \\ \((4,4)\) parity-encoded 6-cycle & 96 & 222 & 0.049 & \(2.1\times 10^{4}\) & \(1.4\times 10^{4}\) & \(8.8\times 10^{2}\) & \(7.1\times 10^{2}\) \\ \hline \hline \end{tabular} \end{table} Table 1: **Results of resource overhead analyses for various well-known graphs.** See Fig. 2 for the visualization of these graphs. \(|V|\) and \(|E|\) indicate the numbers of vertices and edges. \(|E|_{\mathrm{max}}=|V|(|V|-1)/2\) is the maximal possible number of edges. The optimized resource overheads \(Q_{\mathrm{opt}}\) and the corresponding average numbers of fusion attempts (#Fusions) are calculated for two fusion success rates: \(p_{\mathrm{succ}}\in\{0.5,0.75\}\). Our strategy can contribute to evaluating such a trade-off relation and identifying the most practical sweet spot. We lastly note several interesting unsolved problems related to our work: 1. **Generalization of unraveling.** For a given graph state \(\left|G\right\rangle\), how can we identify another graph state \(\left|G^{\prime}\right\rangle\) such that \(\left|G\right\rangle\) can be generated from \(\left|G^{\prime}\right\rangle\) using a combination of fusions, single-qubit Clifford (or general) operations, single-qubit measurements, and classical communication, resulting in a reduction of the overall resource overhead? This problem bears similarities to the equivalence problem of graph states [38, 57, 58], but fusions are included as allowable operations and resource overheads for fusion-based generation are considered. 2. **Lower bound of resource overhead.** Is it possible to find a (sufficiently tight) lower bound on the resource overhead \(Q\)? If such a lower bound can be computed, it would enable us to assess whether the resource overhead optimized by our strategy is indeed close to the true optimum. 3. **Behavior of \(Q_{\text{opt}}\) against \(\left|E\right|/\left|E\right|_{\text{max}}\).** In Fig. 8, \(Q_{\text{opt}}\) exhibits an intriguing behavior: it is maximized around \(\left|E\right|/\left|E\right|_{\text{max}}=0.6\) regardless of \(\left|V\right|\). Can this be explained analytically? Is \(Q_{\text{opt}}\) related to a specific property of the graph or graph state, such as the multipartite entanglement of the graph state [59]? ## Acknowledgement This work was supported by the National Research Foundation of Korea (NRF) grants funded by the Korean government (Grant Nos. NRF-2019M3E4A1080074, NRF-2023R1A2C1006115, NRF-2022M3E4A1076099, and 2022M3K4A1097117) via the Institute of Applied Physics at Seoul National University, and by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grants funded by the Korea government (MSIT) (IITP-2021-0-01059 and IITP-2023-2020-0-01606).
2309.00879
Towards Certified Probabilistic Robustness with High Accuracy
Adversarial examples pose a security threat to many critical systems built on neural networks (such as face recognition systems and self-driving cars). While many methods have been proposed to build robust models, how to build certifiably robust yet accurate neural network models remains an open problem. For example, adversarial training improves empirical robustness, but it does not provide certification of the model's robustness. On the other hand, certified training provides certified robustness but at the cost of a significant accuracy drop. In this work, we propose a novel approach that aims to achieve both high accuracy and certified probabilistic robustness. Our method has two parts, i.e., a probabilistic robust training method with an additional goal of minimizing variance in terms of divergence, and a runtime inference method for certified probabilistic robustness of the prediction. The latter enables efficient certification of the model's probabilistic robustness at runtime with statistical guarantees. This is supported by our training objective, which minimizes the variance of the model's predictions in a given vicinity, derived from a general definition of model robustness. Our approach works for a variety of perturbations and is reasonably efficient. Our experiments on multiple models trained on different datasets demonstrate that our approach significantly outperforms existing approaches in terms of both certification rate and accuracy.
Ruihan Zhang, Peixin Zhang, Jun Sun
2023-09-02T09:39:47Z
http://arxiv.org/abs/2309.00879v1
# Towards Certified Probabilistic Robustness with High Accuracy ###### Abstract Adversarial examples pose a security threat to many critical systems built on neural networks (such as face recognition systems and self-driving cars). While many methods have been proposed to build robust models, how to build certifiably robust yet accurate neural network models remains an open problem. For example, adversarial training improves empirical robustness, but it does not provide certification of the model's robustness. On the other hand, certified training provides certified robustness but at the cost of a significant accuracy drop. In this work, we propose a novel approach that aims to achieve both high accuracy and certified probabilistic robustness. Our method has two parts, i.e., a probabilistic robust training method with an additional goal of minimizing variance in terms of divergence, and a runtime inference method for certified probabilistic robustness of the prediction. The latter enables efficient certification of the model's probabilistic robustness at runtime with statistical guarantees. This is supported by our training objective, which minimizes the variance of the model's predictions in a given vicinity, derived from a general definition of model robustness. Our approach works for a variety of perturbations and is reasonably efficient. Our experiments on multiple models trained on different datasets demonstrate that our approach significantly outperforms existing approaches in terms of both certification rate and accuracy. ## 1 Introduction Neural networks have achieved remarkable success in various applications, including many security-critical systems such as self-driving cars [22] and face-recognition-based authentication systems [42]. Unfortunately, several security issues of neural networks have been discovered as well. Arguably the most notable one is the presence of adversarial examples. Adversarial examples are inputs that are carefully crafted by adding human-imperceptible perturbations to normal inputs to trigger wrong predictions [23]. Their existence poses a significant threat when neural networks are deployed in security-critical scenarios. For example, adversarial examples can mislead the road sign recognition systems of self-driving cars and cause accidents [22]. In other use cases, adversarial examples may allow unauthorized access in face-recognition-based authentication [42]. The increasing adoption of machine learning in security-sensitive domains raises concerns about the robustness of these models against adversarial examples [37]. To tackle the issue of adversarial examples, researchers have developed various mitigation methods. Two well-known categories are adversarial training [4, 58] and certified training [33, 43], both of which aim to improve the robustness of neural networks, i.e., to improve their prediction accuracy in the presence of adversarial examples whilst maintaining their accuracy on normal inputs if possible. Adversarial training works by training the neural network with a mixture of normal and adversarial examples. The latter may be generated either beforehand [32] or during the training (e.g., min-max training [63]). While empirical studies show that adversarial training often improves model robustness whilst maintaining model accuracy, it does not provide any guarantee of the model's robustness [64], which makes it less than ideal.
For instance, it has been shown that a model trained through adversarial training remains vulnerable to new threats such as adaptive adversarial attacks [27, 49]. Certified training methods aim to provide a certain guarantee of the robustness of the neural network. These methods typically incorporate robustness verification techniques [60] during training, i.e., they aim to find a valuation of the network parameters such that the model is provably robust with respect to the training samples. While they may be able to certify the model's robustness on some input samples, they often reduce the model's accuracy significantly [11]. Recent studies have shown that state-of-the-art certified defences can result in an accuracy drop of up to 70% on MNIST and 90% on CIFAR-10 [11]. This is unacceptable for many real-world applications. Therefore, there is a pressing need for a more effective and efficient approach that can achieve both high accuracy and certified robustness. An alternative to certified training is randomized smoothing [12], which certifies a certain form of robustness (e.g., against adversarial attacks within an \(L^{2}\)-norm ball) by systematically introducing noise during training. It, however, suffers from the same problem of significant accuracy loss. In this work, we introduce a method that certifies a model's probabilistic robustness whilst maintaining its accuracy. Our method is designed based on the belief that deterministic robustness (i.e., a model being 100% robust within a certain region) is often infeasible without seriously compromising accuracy, whereas probabilistic robustness (e.g., a model making the same prediction 99% of the time within a certain region) is often sufficient in practice. Our approach comprises two parts: a probabilistic robust training method that minimizes divergence variance, and a runtime inference method that certifies the model's robustness. In the training phase, our approach minimizes the variance of the model's predictions on similar inputs in order to improve robustness. Unlike methods that focus on one specific group of adversarial attacks (e.g., PGD-based adversarial training [63], which relies on the PGD attack [30]), our method improves the model's robustness without overfitting to specific adversarial attacks. Furthermore, our method can easily be applied to handle a variety of different perturbations, such as rotation and scaling. In the inference phase, our approach certifies the model's probabilistic robustness by considering a given input together with its peripheral region. We prove that the probabilistic certified robustness of a model can be derived from the accuracy of the model in the peripheral region. We evaluate our method by training models on multiple standard benchmark datasets and compare them with state-of-the-art robustness-improving methods, including adversarial training, certified training, and others. We compare our approach with 13 baseline approaches in terms of standard accuracy (i.e., accuracy on normal test data), adversarial accuracy (i.e., accuracy in the presence of adversarial attacks), certified robustness rate (i.e., the probability that the model's probabilistic robustness is successfully certified on a test sample), and certified robust accuracy (i.e., the probability of a test sample being both certified robust and correctly classified). Compared to the state-of-the-art adversarial training, we show that our method achieves a competitive or higher adversarial accuracy while sacrificing significantly less standard accuracy (i.e., up to 50% less).
More importantly, we are able to certify the model robustness with regard to most of the test inputs (i.e., up to 96.8% on MNIST and 92% on CIFAR-10). Compared to the state-of-the-art certified training, our method achieves a highly robust model whilst maintaining the model's accuracy, i.e., its standard accuracy is almost twice as high as that of certified training. Overall, the experiments show our method achieves a high level of certified robustness whilst maintaining the model accuracy. In summary, our contributions include the following. * A novel training algorithm that improves robustness whilst maintaining high accuracy. * An inference method which works with the above-mentioned training method to provide probabilistic certified robustness. * An extensive evaluation that shows that our method outperforms state-of-the-art adversarial training in terms of robustness and achieves a higher certification rate than all existing methods (while maintaining accuracy within 1% of that of normal training). ## 2 Background and Problem Definition In the following, we first briefly introduce relevant concepts, and then review existing methods for enhancing model robustness against adversarial attacks, including adversarial and certified training. Lastly, we define our research problem. ### Preliminary Neural Network ModelsIn standard supervised learning, a neural network model is a function that takes inputs from \(\mathcal{X}\) and produces outputs in \(\mathcal{Y}\), where \(\mathcal{X},\mathcal{Y}\) are sets of inputs and outputs, respectively. Suppose we have a hypothetical function \(\bar{h}:\mathcal{X}\to\mathcal{Y}\) that we want to approximate using a neural network model given as \(h:\mathcal{X}\to\mathcal{Y}\). For any input \(x\) in \(\mathcal{X}\), the neural network model \(h\) produces a prediction \(h(x)\). With ground-truth label \(\bar{h}(x)\), we can compare the deviation of \(h(x)\) from \(\bar{h}(x)\) using a loss function \(\ell(h,x,\bar{h}(x))\). The choice of the loss function depends on the specific problem and data, but common options include the cross-entropy loss for classification and the mean squared error loss for regression. In this work, we focus on neural classification models and leave other models (e.g., generative models) to future work. Hereafter, we write \(G_{x}\) to denote the ground truth for any \(x\in\mathcal{X}\). Criterion of Correct ClassificationHere, inputs are represented as normalized vectors in \(\mathcal{X}\). Confidence scores for all \(C\) classes are denoted by \(\mathcal{Y}\). Normalizing the confidence scores (e.g., using the softmax function) provides the probability of each class, with the predicted class being the one with the highest probability. The loss function \(\ell(h,x,G_{x})\) measures deviation from the given class \(G_{x}\), and a prediction is _incorrect_ if and only if the following is satisfied \[\exists\,c\in\{1,2,\dots,C\},\quad c\neq G_{x}\,\wedge\,\ell(h,x,c)<\ell(h,x,G_{x}) \tag{1}\] Adversarial ExamplesAdversarial examples pose a security threat to machine learning systems, as they can be maliciously crafted to exploit vulnerabilities in the learned models [48]. These perturbed inputs are often imperceptible to the human eye but can lead to incorrect predictions or classifications, thus compromising the reliability of the system [8]. The consequences of adversarial attacks can be severe, particularly in security-critical areas. 
As machine learning models are increasingly integrated into critical applications, such as autonomous vehicles, the risk of adversarial attacks has heightened, with potential consequences ranging from privacy breaches to catastrophic failures [36, 42]. For instance, in the realm of autonomous vehicles, adversarial perturbations could cause misinterpretation of traffic signs or other crucial elements in the environment, leading to accidents and endangering lives [17]. The existence of an adversarial example can be defined as the presence of two inputs that are nearly identical, but are assigned different classifications by the model. Formally, an adversarial example exists if and only if the following is satisfied. \[\exists\,x_{1},x_{2}\in\mathcal{X}.\;\;d(x_{1},x_{2})\leq\epsilon\;\wedge\; \arg\max h(x_{1})\neq\arg\max h(x_{2}) \tag{2}\] where \(d(x_{1},x_{2})\) denotes a distance measure between the two inputs, and this distance needs to be smaller than a threshold \(\epsilon\) to be considered imperceptibly different. Note that the distance function can be defined in a variety of different ways (e.g., the Euclidean distance or the degree of rotation). RobustnessThe robustness of a neural network model refers to its ability to maintain its prediction in the presence of small perturbations. Formally, if the input data \(x\) follows a certain distribution \(\mathcal{D}\), then \(\rho(h)\), the robustness of a model \(h\), is defined by the extent to which it maintains predictions in the presence of perturbations to the input data, as quantified by the following formula. \[P_{x_{1}\sim\mathcal{D}}\bigg{(}P_{x_{2}\sim\mathcal{U}(B(x_{1}))}\Big{(}\arg \max h(x_{1})\neq\arg\max h(x_{2})\Big{)}\leq\kappa\bigg{)} \tag{3}\] where the vicinity function \(B\) [29] is defined as follows. For any input \(x\in\mathcal{X}\), the vicinity \(B(x)\) is the local domain around (often centered at) \(x\), and \(B(x)\subset\mathcal{X}\). A vicinity \(B(x)\) is often defined to be some \(L^{p}\) ball around \(x\) (where \(p=0,1,2,\infty\)) [23], or domain-specific label-preserving transformations (e.g., tilting and zoom in/out) [3, 6]. Specifically, a vicinity is characterised by a distance function \(d\) and a predefined threshold \(\epsilon\) as follows: \(B(x_{1})=\{x_{2}\in\mathcal{X}\mid d(x_{1},x_{2})<\epsilon\}\). Common notions of distance include \[d(x_{1},x_{2}) =\left\|x_{1}-x_{2}\right\|_{p},\;\;\;\text{(additive, in the $L^{p}$ norm), or} \tag{4}\] \[d(x_{1},x_{2}) =\begin{cases}\left|\epsilon^{\prime}\right|,&\text{if $\;\;f_{\text{ transform}}(x_{1},\epsilon^{\prime})=x_{2}$},\\ \epsilon+1,&\text{otherwise}\end{cases}\] where the transform function mapping from \(\mathcal{X}\) to \(\mathcal{X}\) can be understood as a specific transformation (_e.g._, whether an image is rotated or horizontally shifted) and its parameters (\(\epsilon^{\prime}\), _e.g._, the _degree_ of rotation). Lastly, \(\kappa\) is a constant threshold within the range of \([0,1]\). When \(\kappa=0\) in Equation (3), it is commonly known as deterministic robustness [30, 25, 35]. Otherwise, it is commonly known as probabilistic robustness [25, 66]. We remark that it has been observed that completely eliminating adversarial examples (i.e., by having \(\kappa=0\)) is often too stringent to be practical, compared to the alternative of keeping the probability of undesirable events low (i.e., by having \(\kappa\) slightly above zero) [66]. 
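To make these definitions concrete, the following is a minimal sketch (not part of the paper's artifacts) of a Monte Carlo estimate of the inner probability of Equation (3) for a single input, using a uniform \(L^{\infty}\) vicinity; the model interface, \(\epsilon\), the sample counts, and the clamp to \([0,1]\) are illustrative assumptions.

```
import torch

@torch.no_grad()
def local_violation_rate(model, x, eps=8/255, n=1000, batch=200):
    """Estimate P_{x2 ~ U(B(x))}(argmax h(x2) != argmax h(x)) by uniform
    sampling in the L-inf ball of radius eps around x (Equation (3))."""
    model.eval()
    ref = model(x.unsqueeze(0)).argmax(dim=1)   # prediction on the nominal input
    diff = 0
    for i in range(0, n, batch):
        m = min(batch, n - i)
        noise = torch.empty((m,) + tuple(x.shape), device=x.device).uniform_(-eps, eps)
        # assumes inputs are normalized to [0, 1]
        preds = model((x + noise).clamp(0, 1)).argmax(dim=1)
        diff += (preds != ref).sum().item()
    return diff / n   # compare this estimate against kappa
```

Averaging the indicator of whether this per-input estimate falls below \(\kappa\) over inputs drawn from \(\mathcal{D}\) then yields an empirical version of \(\rho(h)\).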
### Robust Model Training To create robust models, various methods have been proposed, which have been reviewed in recent studies [25, 45]. State-of-the-art training methods can be broadly categorized into adversarial training [18], certified training [46], and a number of other approaches. Adversarial TrainingAdversarial training is a widely-used and effective method for improving a model's empirical robustness. While there are many variants of the method, the most notable formulation involves solving the following optimization problem. \[\min_{h}\;\;\text{E}_{x\sim\mathcal{D}}\left[\max_{t\in B(x)}\;\ell\big{(}h,t,G_{t}\big{)}\right] \tag{5}\] where \(\ell\) is a suitably-chosen loss function (e.g., the 0-1, cross-entropy, or squared loss). The idea is to approximate, during training, the worst loss that can be induced by a perturbation for each training sample and to optimize the parameters of model \(h\) to improve the estimated worst-case robustness (in addition to standard accuracy). A critical part of adversarial training is to search for adversarial inputs within the vicinity of the training samples. Goodfellow et al. introduce the fast gradient sign method (FGSM) to generate adversarial inputs [19]. Adversarial training with FGSM significantly improves a model's robustness against adversarial samples generated through FGSM. Various other adversarial attacking methods are adopted for adversarial training as well. Among them, Projected Gradient Descent (PGD [30]) based adversarial training is shown to be the most effective in various domains, including image classification and reinforcement learning. In the context of large-scale image classification tasks, an ensemble adversarial training method further improves robustness by utilizing adversarial examples generated from multiple pre-trained models [23]. Despite the advancements made in adversarial training over the years, improving model robustness remains an open problem. This is partly due to the challenge posed by the trade-off between standard accuracy and robustness [51]. To this end, TRADES is proposed to balance this trade-off with a regularization term based on the Kullback-Leibler (KL) divergence between the model's output on clean inputs and adversarial inputs [63]. This approach has achieved state-of-the-art performance on several benchmark datasets, including CIFAR-10. Nevertheless, a 15% accuracy drop is still observed. More importantly, a significant limitation of adversarial training is that it does not certify a model's robustness against adversarial attacks [5]. This lack of certification implies that the robustness of a model cannot be guaranteed, particularly as new and sophisticated adversarial attacking methods are being developed [2]. For instance, it has been shown that a model trained through adversarial training remains vulnerable to new threats such as adaptive adversarial attacks [27, 49]. This limitation highlights the need for techniques that can provide certified robustness, i.e., a guarantee that the model is robust no matter what adversarial attacks are conducted. 
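As background for the discussion above, here is a minimal sketch of the inner maximization of Equation (5) approximated with \(L^{\infty}\) PGD [30]; the radius, step size, and iteration count are illustrative, and a full adversarial training loop would feed the returned inputs back into the outer loss.

```
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Approximate max_{t in B(x)} loss(h, t, y) by iterated gradient-sign
    ascent, projecting back onto the L-inf ball of radius eps each step."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # projection onto B(x)
        x_adv = x_adv.clamp(0, 1)                  # keep inputs in the valid range
    return x_adv.detach()
```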
Certified TrainingCertified training aims to train models that are certified to be robust [52]. The idea is to soundly approximate the effect of any adversarial attack method and optimize the parameters of model \(h\) so that this effect is kept within a certain bound such that deterministic robustness is guaranteed [25]. The optimization problem is defined as follows. \[\min_{h}\ \mathrm{E}_{x\sim\mathcal{D}}\left[\sup_{t\in B(x),\ c\neq G_{t}}\big(\ell(h,t,G_{t})-\ell(h,t,c)\big)\right] \tag{6}\] To soundly approximate the effect of any adversarial attack method, existing certified training methods use neural network verification techniques to soundly approximate the worst loss that can be induced by any perturbation within the vicinity of each training sample. If the label remains the same in the presence of such worst loss, the model is certified to be robust with respect to the sample. Note that after years of development, many neural network verification techniques have been proposed, e.g., [5, 46, 65]. Certified training methods however suffer from multiple shortcomings. First, they are computationally expensive. Although there has been a lot of development in neural network verification techniques, it is perhaps fair to say that such methods are still limited to relatively small neural networks. Given that certified training requires verifying the neural network robustness against each and every training sample, certified training is limited to small neural networks as of now. Second, existing certified training methods often result in a significant drop in the model's clean accuracy, i.e., accuracy on clean, non-adversarial inputs [12, 39]. The best clean accuracy achieved by certified training is typically 70% of that from adversarial training on the CIFAR-10 dataset [43, 51]. Such a dramatic accuracy drop makes their application in real-world systems rare as of now. Lastly, existing certified training methods usually only work for robustness defined based on the \(L^{p}\) norms or, in rare cases, simple transformations such as image rotation [12]. 
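For context on how such sound approximation works, the following is a standard interval-bound-propagation step for a single affine layer in the spirit of IBP [43]; it is a textbook construction rather than the specific procedure of any method evaluated here.

```
import torch

def interval_linear(lower, upper, weight, bias):
    """Propagate the box [lower, upper] through y = x W^T + b exactly:
    track the center and radius of the box and use |W| for the radius."""
    mid = (upper + lower) / 2
    rad = (upper - lower) / 2
    mid_out = mid @ weight.t() + bias
    rad_out = rad @ weight.abs().t()
    return mid_out - rad_out, mid_out + rad_out

# Starting from [x - eps, x + eps] and propagating through every layer, a
# sample is certified if the lower bound of the true-class logit exceeds
# the upper bound of every other class's logit.
```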
### Problem Definition In this work, we aim to develop a method that achieves certified probabilistic robustness whilst maintaining high accuracy. Unlike deterministic robustness, which puts strict requirements on how a model should behave, probabilistic robustness relaxes the requirements by allowing a small number of exceptions within the vicinity of a sample to have different labels, which makes it much more achievable in practice. Furthermore, certified probabilistic robustness provides theoretical guarantees for the model performance when faced with adversarial inputs, which could be useful for system-level decision making. In practice, it is often sufficient to keep the probability of undesirable events from occurring low. However, achieving high levels of certified probabilistic robustness and accuracy simultaneously is challenging. The illustration in Figure 1 highlights the delicate balance between maximizing clean accuracy and certified robustness that is achieved by state-of-the-art approaches (refer to Section 4 for details of these approaches). This work aims to address this challenge and provide a solution that meets the following criteria: * Accuracy on clean data, i.e., the standard accuracy of the model evaluated using a test set that is disjoint from the training set; * Robustness under different attack scenarios, i.e., the accuracy of the model concerning adversarial samples generated using state-of-the-art adversarial attacking methods such as Auto Attack [14]; * Probabilistic certified robustness, i.e., the probability that a sample from the test distribution is certified to be robust; * Computational efficiency during both training and inference, to scale up to larger neural models and handle a large number of input cases efficiently; * Compatibility with existing architectures and frameworks, making it easy to integrate and extend to suit specific use cases and applications. Figure 1: Comparing various methods using standard accuracy and certified robustness rates on CIFAR-10. Each circle represents one recently proposed method. ## 3 Our Method Our method has two parts. The first is a training method which aims to improve probabilistic robustness (illustrated in Figure 2) by minimizing the variance across the perturbation space for each sample in the training set. The second is an inference method which aims to establish certified robust prediction for a given sample. ### Variance-Minimizing Training To obtain a model that is both accurate and robust, we minimize the variance among model outputs for inputs within the same vicinity, alongside implementing empirical risk minimization. This training can be formulated as a Pareto optimization problem whose objective is as follows \[\begin{split}\min_{h}&\quad\text{E}_{x\sim \mathcal{D}}\left[\text{E}_{t\sim\mathcal{U}(B(x))}[\ell(h,t,G_{t})]\right]\\ &\quad\text{E}_{x\sim\mathcal{D}}\left[\text{Var}_{t\sim\mathcal{U }(B(x))}[\ell(h,t,G_{t})]\right]\end{split} \tag{7}\] where the first term is essentially the objective of empirical risk minimization (ERM) [53], and the second term, the variance of individual losses, is our novel objective. Our goal is to minimize Objective (7) by optimizing the weight parameters of neural network model \(h\). The training algorithm to achieve this is presented in Algorithm 1, which outlines the specific operations involved. At each training step, we first sample a minibatch from the training data and, for each (nominal) input in the minibatch, we sample a fixed number of (perturbed) inputs in the vicinity of the nominal input. Then, we use the neural network to make a prediction on each sample. Next, we compute the individual loss for each sample against the label of the given input independently. We then calculate the mean and standard deviation of these individual loss terms. Finally, we use a weighted sum of the mean and standard deviation as the effective loss function to back-propagate gradients and update the parameters of the neural network with the provided learning rate. This completes the flow of optimization. The presented algorithm is illustrated with a stochastic gradient descent (SGD) optimizer but can be applied with other optimizers such as Adadelta [62]. ``` 1:Input: Training data \(\{(x_{i},G_{x_{i}})\mid i=1,2,\ldots,k\}\subset\mathcal{X}\times\mathcal{Y}\), \(L^{\infty}\) bound \(\varepsilon\), network architecture \(h_{\Theta}\) parametrized by \(\theta\), step size \(\eta\), sample size \(n\), batch size \(m\), and weighting factor \(\lambda\). 
2:Initialization: Standard random initialization of \(h_{\Theta}\) 3:Output: Robust network \(h_{\Theta}\) 4:repeat 5: Uniformly sample \(\{(t_{i},G_{t_{i}})\mid i=1,2,\ldots,m\}\), a minibatch of training data where \(m<k\) 6:for\(i=1,2,\ldots,m\)do 7: Draw \(\{\tau_{j}\mid j=1,2,\ldots,n\}\sim\mathcal{U}(t_{i}-\varepsilon,t_{i}+\varepsilon)\) where \(\mathcal{U}\) is a uniform distribution 8:for all\(j=1,2,\ldots,n\)do 9:\(u_{j}\leftarrow\ell_{\text{Cross-entropy}}(h_{\Theta}(\tau_{j}),G_{t_{i}})\) 10:endfor 11:\(\mu_{i}\leftarrow\sum_{j=1}^{n}u_{j}/n\) 12:\(\sigma_{i}\leftarrow\big(\sum_{a=1}^{n}\sum_{b=1}^{n}(u_{a}-u_{b})^{2}/(2n^{2})\big)^{1/2}\) 13:endfor 14:\(\theta\leftarrow\theta-\eta\sum_{i=1}^{m}\nabla_{\theta}[\mu_{i}+\lambda\, \sigma_{i}]/m\) 15:until convergence ``` **Algorithm 1** Variance-Minimizing Training In Algorithm 1, the loss function combines mean minimization and variance minimization, with a weighting factor \(\lambda\) determining the importance of each component. Note that we use the square root of the variance term, allowing a linear combination of the mean and standard deviation (SD) for the loss back-propagation. Intuitively, Objective (7) allows us to improve the model's robustness without depending on any specific adversarial attacking method. Instead, we improve model robustness by minimizing the spread (standard deviation) of the model's predictions alongside the traditional ERM method. Random sampling is adopted, and running an adversarial attack in each training step is avoided. In the ideal case, if, for a given \(x\in\mathcal{X}\) and model \(h\), a sample in the vicinity around \(x\) is correctly predicted by \(h\) and the predictions of _any pair_ of samples in this vicinity are the same, then \(h\) achieves deterministic robustness in that vicinity. In the more likely case, the variance of the loss within the vicinity of each training sample is minimized through training and, as a result, many of the samples within the vicinity may have the same (correct) prediction. Note that, unlike existing adversarial training methods which either rely on pre-computed adversarial examples [32] or adversarial examples generated during training [63] (often paying a high training cost), our training method is independent of specific attacking methods. Both terms in Objective (7) are crucial to improving the robustness of the model. Variance in the data represents the difference between individual observations. High variance means that the observations are scattered, while low variance means they are tightly clustered around the mean. Therefore, we believe that reducing the variance of the predictions can make the model more robust. Conversely, minimizing the mean alone (such as data augmentation [57]) leaves the outliers of the distribution to be unpredictable, which can lead to the existence of adversarial examples. 
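The update of Algorithm 1 can be written compactly in a framework such as PyTorch; the sketch below is illustrative, with the clamp to \([0,1]\), the batching layout, and the use of the (unbiased) sample standard deviation as implementation assumptions.

```
import torch
import torch.nn.functional as F

def variance_min_step(model, optimizer, x, y, eps=0.1, n=8, lam=1.0):
    """One step of Algorithm 1: draw n uniform perturbations per input and
    back-propagate mean + lam * SD of the per-perturbation losses."""
    b = x.size(0)
    noise = torch.empty((n,) + tuple(x.shape), device=x.device).uniform_(-eps, eps)
    xs = (x.unsqueeze(0) + noise).clamp(0, 1).flatten(0, 1)   # (n*b, ...)
    ys = y.repeat(n)                                          # label of the nominal input
    losses = F.cross_entropy(model(xs), ys, reduction="none").view(n, b)
    # mu_i + lam * sigma_i, averaged over the minibatch (std uses n-1 here)
    loss = (losses.mean(dim=0) + lam * losses.std(dim=0)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```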
Formally, we want to show why models with lower variance among nearby predictions are more robust. **Proposition 3.1**.: _If two distributions with the same mean have different variances, where the variance of one is less than the other, then for any quantile level \(q\) in the range \(0<q<1\), the upper bound of the \(q\)-th quantile of the distribution with the lower variance is less than the upper bound of the \(q\)-th quantile of the distribution with the higher variance._ Proof.: We start with Chebyshev's inequality. Chebyshev's inequality provides an upper bound on the tail probabilities of a random variable based on its variance. Let \(Z\) (integrable) be a random variable with finite expected value \(\mu\) and finite non-zero variance \(\sigma^{2}\). Then for any real number \(\lambda>0\), \[P\big{(}|Z-\mu|\geq\lambda\sigma\big{)}\leq\frac{1}{\lambda^{2}} \tag{8}\] which states that for any probability distribution, the proportion of data within \(\lambda\) standard deviations of the mean is at least \(1-1/\lambda^{2}\). Since the one-sided tail is no larger than the two-sided tail, we can further derive: \[\begin{split} P\big{(}Z-\mu\geq\lambda\sigma\big{)}&\leq P\big{(}|Z-\mu|\geq\lambda\sigma\big{)}\leq\frac{1}{\lambda^{2}}\\ P\big{(}Z\leq\mu+\lambda\sigma\big{)}&\geq 1-\frac{1}{ \lambda^{2}}\end{split} \tag{9}\] Let \(\lambda\sigma=z-\mu\) for any \(z>\mu\); substituting gives: \[P\big{(}Z\leq z\big{)}\geq 1-\frac{\sigma^{2}}{(z-\mu)^{2}} \tag{10}\] For any given \(z\), when the variance \(\sigma^{2}\) decreases, the lower bound on \(P\big{(}Z\leq z\big{)}\) increases. Hence, minimizing the variance essentially reduces the probability of examples far away from the mean. Although a higher \(\lambda\) may appear more desirable, as it penalizes the spread more heavily, the model may then be tuned to prioritize reducing the variance over the mean, as our experiments show. If we omit the spread term in the loss function, the model minimizes the expectation alone, similar to training with augmented data [57]. Omitting the expectation term is not recommended, as it can lead to a model which makes poor predictions for all samples. ### Inference and Certifying The second part of our approach is an inference algorithm which aims to provide certified probabilistic robustness when possible. According to Equation (3), to establish certified probabilistic robustness, we must show that there is a guaranteed upper bound on the probability of adversarial examples, i.e., some threshold \(\kappa\). Intuitively, we would like to know for sure that among all the samples within the vicinity around an input, at least \(1-\kappa\) of them are not adversarial examples. Our inference method aims to certify the robustness, i.e., while providing a prediction for an input, our inference method also offers certified probabilistic robustness as described above. To present this inference method, we first demonstrate our algorithm and then explain how it provides certified robustness and illustrate the difference between inference with certified robustness and vanilla inference. AlgorithmThe general idea of our inference method is captured below. For any \(x\in\mathcal{X}\) and model \(h\), \[h^{*}(x)\coloneqq(h*B)(x)\coloneqq\int_{\mathcal{X}}h(\tau)\,\mathbb{I}(\tau\in B(x))\,d\tau \tag{11}\] where the symbol \(*\) denotes convolution, the mathematical operation on two functions. A superscript \(*\) on \(h\) indicates that the model is based on the proposed inference instead of the ordinary inference. \(\mathbb{I}(\phi)\) is a function that returns \(1\) if \(\phi\) is satisfied and \(0\) otherwise. A more feasible step-by-step implementation of this idea is presented in Algorithm 2. ``` [Algorithm 2: certified inference via sequential sampling in the vicinity and left- and right-tailed exact binomial tests; the original listing was garbled in extraction] ``` Next, to determine the probability of correct predictions using the ordinary inference, we adopt an established method known as the exact binomial test [7]. Exact Binomial TestThe binomial test is a statistical procedure used to test a hypothesis about the population proportion of a binary variable based on a sample of observations. It can be used to determine whether the proportion of one level in a binary variable is less than, greater than, or not equal to a specific claimed value. To evaluate the hypothesis that the proportion of a certain class of predictions around an input is higher than a claimed value \(\kappa\), e.g., 10%, we conduct a binomial test using sample data. We use the following formula to calculate the probability of obtaining the observed number of occurrences of this class, or a more extreme one, if the true proportion equals 10%. \[P(Z\geq z\mid p=\kappa)=1-\sum_{i=0}^{z-1}\binom{n}{i}(\kappa)^{i}(1-\kappa)^ {n-i}\] where \(Z\) is the number of occurrences of this class in the sample; \(z\) is the observed number of occurrences of this class; \(n\) is the sample size; \(p\) is the claimed population proportion (in this case, 0.1); \(\binom{n}{i}\) is the binomial coefficient, which calculates the number of ways to choose \(i\) items from a set of \(n\) items. If the resulting probability is less than a pre-determined significance level (e.g., \(\alpha=0.05\)), we reject the null hypothesis that the proportion of occurrences is at most 10% and conclude that it is higher. Otherwise, we fail to reject the null hypothesis and conclude that there is not enough evidence to suggest that the proportion is higher than 10%. In Algorithm 2, we perform both left-tail (i.e., \(P(Z\leq z)\)) and right-tail binomial tests to ensure that we can either accept that the probability is greater than \(1-\kappa\) or less than \(1-\kappa\). This provides certainty as to whether the prediction on the test case is certified as robust or not. The level of statistical significance is determined by \(\alpha\). As \(\alpha\) decreases, the statistical significance increases, which means that the certification is less likely to result in a false positive. Additionally, those cases that are not certified as robust have a lower likelihood of being false negatives. Although \(\kappa\) and \(\alpha\) are typically selected within the range of \(10^{-1}\) to \(10^{-4}\), decreasing both values, i.e., \(\kappa\to 0\) and \(\alpha\to 0\), makes the certification more reliable. We use sequential sampling to obtain the required sample size at runtime, which has been proven optimal [54]. We stop collecting data once the probability of either the right or left tail crosses a predefined false positive rate. We make a decision based on which tail has crossed the threshold and certify the prediction as either robust or non-robust accordingly. The binomial test is described in detail in lines 7-21 of Algorithm 2. **Theorem 3.2**.: _Let \(x\) be a sample. If Algorithm 2 returns that \(x\) has certified robustness, i.e., \(p_{\text{right}}<\alpha\), then the probabilistic robustness of \(x\) is greater than \(1-\kappa\)._ 
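To make this decision rule concrete, here is a minimal, self-contained sketch of the sequential exact binomial test described above; placing the null hypothesis at \(p=1-\kappa\), the batch size, and the sampling cap are illustrative assumptions rather than the paper's exact implementation (which also accounts for the sequential stopping rule).

```
import math

def binom_tail(z, n, p, upper=True):
    """Exact binomial tail under H0: success probability p.
    upper=True gives P(Z >= z); upper=False gives P(Z <= z)."""
    rng = range(z, n + 1) if upper else range(0, z + 1)
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in rng)

def certify(sample_correct, kappa=1e-2, alpha=0.01, max_n=10_000, step=100):
    """sample_correct() draws one input from the vicinity B(x) and returns
    True iff the model's prediction on it matches the prediction on x."""
    z = n = 0
    while n < max_n:
        for _ in range(step):
            z += int(sample_correct())
            n += 1
        if binom_tail(z, n, 1 - kappa, upper=True) < alpha:
            return "certified robust"   # evidence that P(correct) > 1 - kappa
        if binom_tail(z, n, 1 - kappa, upper=False) < alpha:
            return "not robust"         # evidence that P(correct) < 1 - kappa
    return "undecided"
```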
## 4 Experiment In the following, we systematically evaluate our method by answering multiple research questions. ### Experimental Setting DatasetsExperiments are run on widely-used classification datasets: MNIST [24], SVHN [34], CIFAR-10 [21], and CIFAR-100 [21]. The details of these datasets for our study on the robustness of classification models [22, 30, 50] are presented in Table 1. In brief, the SVHN, CIFAR-10, and CIFAR-100 datasets consist of 32\(\times\)32 color images, while the MNIST dataset comprises 28\(\times\)28 grayscale images. The original training set of each dataset comprises a minimum of 50,000 samples, which are partitioned into training and validation sets following a ratio of 8:2. 
Model ArchitecturesWe adopt multiple model architectures to train the classifiers on the above-mentioned datasets. The details of these architectures are summarized in Table 2. These architectures have all been studied by existing robustness-improving methods, as shown in the _Works_ column. In short, the model size ranges from 378,562 parameters for the small CNN7 model to 11,689,512 parameters for the more complex ResNet-18 model. BaselinesIn the evaluation, we compare our method with eight baselines: 1) Empirical Risk Minimization (ERM) [53] is the standard training approach without any additional modifications; 2) Data Augmented training (DA) [44] trains the model with samples augmented by applying various transformations to improve generalization and robustness; 3) PGD-Training (PGDT) [30] aims to optimize the model's prediction error for the training samples and their surrounding neighborhoods, which are sampled by Projected Gradient Descent (PGD); 4) TRADES [63] attempts to minimize both the prediction error of the original samples and the prediction inconsistency between them and their neighborhoods; 5) MART [56] improves TRADES with a specific focus on misclassified samples; 6) Randomized Smoothing (RS) [12] provides certified robust accuracy by adding noise to inputs during training; 7) IBP [43] acquires a tractable upper bound for the worst-case perturbation and then provides a deterministic certificate of robustness; 8) PRL [40] is a probabilistic training method that aims to reduce the proportion of adversarial examples. The implementations of all these baselines are obtained from their respective original repositories. These baselines all aim at a robust and accurate model, although they originally pursue different metrics. PGDT [30], TRADES [63], and MART [56] are adversarial training methods that aim for empirical robustness. For certified training, e.g., IBP [43], the emphasis is placed on theoretical guarantees. Consequently, the primary objective is to optimize in a way such that no adversarial examples exist in the vicinity of each data point. This pursuit of adversarial robustness takes precedence over maintaining high accuracy levels, if necessary. PRL [40] aims to minimize the proportion of adversarial examples (based on training for probabilistic robustness). Their approach improves model robustness by maximizing the lower bound of the probability that the model's predictions are correct under a certain level of perturbation. Note that not all methods can be applied to all model architectures. Table 3 summarizes the compatibility between the methods and model architectures. It should be noted that our method, along with ERM, DA, and PRL, applies to all architectures. We systematically evaluate each method on each model architecture to find the best-suited architecture for each method and dataset; e.g., for TRADES [63], ResNet-18 is the best-matching architecture for SVHN and the basic ConvNet for MNIST, while CNN7 is not compatible (refer to Table 3). The most suitable architecture for each approach on different tasks is as follows: 1) For the MNIST dataset, all approaches except IBP can utilize the basic ConvNet architecture, while IBP adopts CNN7. 2) For the SVHN or CIFAR-10/100 datasets, all approaches except IBP or RS can use ResNet-18, while IBP utilizes Wide-ResNet-8 and RS adopts CifarResNet-110. In the following, we report the experimental results according to the most suited architecture. 
ReproducibilityWe provide our code implementation, trained models, and supplementary materials on our repository at [https://github.com/soumission-anonyme](https://github.com/soumission-anonyme). In our training, we use different optimization settings on different benchmarks to obtain the best performance. \begin{table} \begin{tabular}{l|c c c c} \hline \hline Task & MNIST & SVHN & CIFAR-10 & CIFAR-100 \\ \hline Training Images & 48,000 & 58,606 & 40,000 & 40,000 \\ Validation Images & 12,000 & 14,651 & 10,000 & 10,000 \\ Testing Images & 10,000 & 26,032 & 10,000 & 10,000 \\ Image size & 28\(\times\)28 & \multicolumn{3}{c}{\(32\times 32\)} \\ Color Channels & 1 & \multicolumn{3}{c}{3} \\ Classes & \multicolumn{3}{c}{10} & 100 \\ \hline \(L^{\infty}\) bound & 0.1 or 0.3 & \multicolumn{3}{c}{2/255 or 8/255} \\ Translation & \multicolumn{4}{c}{\(\pm\)0.3} \\ Rotation & \multicolumn{4}{c}{\(\pm\)35\({}^{\circ}\)} \\ Scaling Factor & \multicolumn{4}{c}{\(\pm\)0.3} \\ \hline \hline \end{tabular} \end{table} Table 1: Details on image classification datasets, and perturbation bounds for each task \begin{table} \begin{tabular}{l|c c} \hline \hline Model & \# Parameters & Works \\ \hline ResNet-18 [20, 63] & 11,689,512 & [40, 56, 63] \\ Wide-ResNet-8 [61] & 3,000,074 & [43] \\ CifarResNet-110 [20] & 1,730,474 & [12] \\ CNN7 & 378,562 & [43] \\ Basic ConvNet & 1,663,370 & [40, 56, 63] \\ \hline \hline \end{tabular} \end{table} Table 2: Details of model architectures \begin{table} \begin{tabular}{l|c c c c c} \hline \hline Approach & ResNet-18 & Wide-ResNet-8 & CifarResNet-110 & CNN7 & Basic ConvNet \\ \hline ERM & ✓ & ✓ & ✓ & ✓ & ✓ \\ DA & ✓ & ✓ & ✓ & ✓ & ✓ \\ PGDT & ✓ & ✓ & \(\times\) & \(\times\) & ✓ \\ TRADES & ✓ & ✓ & \(\times\) & \(\times\) & ✓ \\ MART & ✓ & ✓ & \(\times\) & \(\times\) & ✓ \\ RS & \(\times\) & \(\times\) & ✓ & \(\times\) & ✓ \\ IBP & \(\times\) & ✓ & \(\times\) & ✓ & ✓ \\ PRL & ✓ & ✓ & ✓ & ✓ & ✓ \\ Ours & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular} \end{table} Table 3: Compatibility between different methods and model architectures For example, we use the Adadelta optimizer [62] with a learning rate of 1.0 for 150 epochs to optimize the Basic ConvNet on MNIST. For the other three tasks, we use the SGD optimizer with an initial learning rate of 0.01 and weight decay of 3.5e-3. The learning rate for SGD is reduced by a factor of 10 at epochs 55, 75, and 90. Our experiments are conducted on a server with an x86_64 CPU featuring 8 cores running at 3.22GHz, 54.93GB of RAM, and an NVIDIA RTX 2080Ti GPU with 11.3 GB of memory. ### Research Questions and Answers We seek to answer the following research questions through our experiments. RQ1: Is our method effective in achieving robustness whilst maintaining accuracy?To answer this question, we evaluate the performance of our method and baseline methods using multiple metrics such as standard accuracy, certified robustness rate, and certified robust accuracy, as defined in Table 4. Note that for the baseline methods that do not inherently report probabilistic certified robustness, we run the exact binomial test to verify their certified robustness rate. The results are shown in Table 5. In terms of certified robust accuracy, it can be observed that our method has the highest certified robust accuracy on all four datasets, with an average value of 83.36%, which means that our method best strikes the balance between accuracy and robustness. 
In comparison, PRL is the best-performing baseline method, with an average certified robust accuracy of 81.87%. Furthermore, comparing the results on the different datasets, we observe that our method outperforms PRL more when the dataset is more complex. Additionally, the average certified robust accuracy of the best adversarial training method (i.e., PGDT) is 74.29%, which is 11.42% lower than ours. Randomized smoothing and IBP yield even lower results at 73.10% and 53.74% respectively, although they still outperform normal training (i.e., ERM) whose certified robust accuracy is only 20.44%. Referring to the provided definitions in Table 4, it is evident that achieving a high certified robust accuracy necessitates both a high standard accuracy and a high certified robustness rate. In the following, we compare our method and baseline methods based on these two metrics separately. In terms of standard accuracy, our method exhibits a reasonably small sacrifice in standard accuracy while achieving robustness, compared to most of the existing methods. On the CIFAR-10, SVHN, and MNIST datasets, our method has slightly decreased accuracy in comparison to ERM, with a maximum reduction of less than 0.7% and an average of 0.1%, closely approaching DA. On CIFAR-100, although there is a noticeable decrease in accuracy compared to ERM, our method still ranks as the second-best training approach, only surpassed by DA. In addition, adversarial training results in a minimum 8.35% drop in accuracy, certified training usually leads to over 40% accuracy drop, and randomized smoothing results in a 10.31% accuracy drop. These baselines sacrifice standard accuracy for their respective training objectives. For example, randomized smoothing introduces Gaussian noise during the training process to improve the model's robustness to perturbations. However, it can inadvertently push some of the original samples farther away from their true labels, leading to a reduction in accuracy. In terms of certified robustness rate, our method achieves the best performance, with an average value of 87.02%, which is 3.42% and 36.42% higher than probabilistically robust learning and certified training, respectively. The baselines with higher standard accuracy than ours, i.e., ERM and DA, have significantly lower certified robustness rates, with average rates of 22.32% and 65.86%, respectively. Beyond \(L^{p}\) transformations.While the existing robustness certification methods such as certified training and randomized smoothing primarily focus on \(L^{p}\) transformations of images [43], as presented in Table 5, we are also interested in the certified robustness on other transformations, such as translation, rotation, affine, and scaling. In our experiments, we randomly perturb the input within the given range of each transformation and report the corresponding certified robust accuracy. \begin{table} \begin{tabular}{p{56.9pt}|p{113.8pt}|p{113.8pt}} \hline \hline Metric & Formula & Meaning \\ \hline Standard Accuracy & \(\frac{1}{|\mathcal{S}|}\sum_{(x,G_{x})\in\mathcal{S}}\mathbb{I}\big(\arg\max h(x)=G_{x}\big)\) & The probability that the model’s prediction is correct for an input from the data distribution \(\mathcal{D}\). \\ \hline Certified Robustness Rate & \(\frac{1}{|\mathcal{S}|}\sum_{(x,G_{x})\in\mathcal{S}}\mathbb{I}\big(h(x)\text{ is certified robust}\big)\) & The probability that the model’s prediction has certified robustness, for an input from the data distribution \(\mathcal{D}\). \\ \hline Certified Robust Accuracy & \(\frac{1}{|\mathcal{S}|}\sum_{(x,G_{x})\in\mathcal{S}}\mathbb{I}\big(h(x)\text{ is certified robust}\wedge\arg\max h(x)=G_{x}\big)\) & The probability that the model’s prediction has certified robustness and this prediction is correct, for an input from the data distribution \(\mathcal{D}\). \\ \hline Defence Rate & \(\frac{1}{|\mathcal{S}|}\sum_{(x,G_{x})\in\mathcal{S}}\mathbb{I}\big(\arg\max h(A(x))=G_{x}\big)\) & The probability that the model’s prediction is correct when the input has been perturbed by adversarial attack \(A\), for an input from the data distribution \(\mathcal{D}\). \\ \hline \hline \multicolumn{3}{l}{\(\mathbb{I}(\phi)\) is a function that returns 1 if \(\phi\) is satisfied and 0 otherwise.} \\ \hline \hline \end{tabular} \end{table} Table 4: Effectiveness metrics. To evaluate a model \(h\) on any input data from some distribution \(\mathcal{D}\), we assume that a test set \(\mathcal{S}\) generalises the distribution, and \(|\mathcal{S}|\) is the number of testing samples. 
Similar to the previous experiments, we apply the exact binomial test to verify the robustness of the models obtained by different training algorithms with respect to non-\(L^{p}\)-norm transformations. The hyper-parameters \(\kappa\) and \(\alpha\) are set to \(10^{-2}\) and \(0.01\), respectively. We present the results on CIFAR-10 in Table 6, and similar results are obtained on the other datasets. It can be observed that our method consistently achieves the highest certified robust accuracy across all transformations, with all results above 93.49%. Combined with the results shown in Table 5, this shows that our method is the most robust training algorithm against different kinds of perturbations, including both \(L^{p}\) and non-\(L^{p}\) transformations. The second highest is DA, with all results above 92.23%. This is likely because rotation, translation, and scaling are frequently used in data augmentation. Remarkably, ERM achieves the third-highest certified robust accuracy, which can be attributed to the inherent robustness of convolutional layers to these non-\(L^{p}\) transformations. This robustness is due to their ability to capture and extract local patterns and spatial relationships in images through shared weights, local receptive fields, and spatial pooling operations [20]. In addition, PRL has slightly worse performance than our method, i.e., by 3.05%. RQ2: Is our method effective in defending against adversarial attacks? Our method achieves an impressive defence success rate of 82.77% on average, surpassing the best performance of adversarial training, i.e., TRADES, which only attains a rate of 56.76%. Apart from adversarial training, IBP achieves the highest defence success rate at 54.39% on average. It is worth noting that although PRL exhibits relatively high certified robust accuracy for both \(L^{p}\) and non-\(L^{p}\) perturbations, second only to our method, its resilience against adversarial attacks is significantly lower, with a defence success rate close to 0 on CIFAR-10 and CIFAR-100. Additionally, we evaluate our proposed method under 25 adversarial attacks on CIFAR-10 and compare its defence success rates with several baseline methods, as shown in Table 8. Row No Attack is the standard accuracy on the original testing set. It is evident that our approach outperforms all baseline methods across all adversarial attack algorithms. Except for the Pixle [38] attack, our method achieves a defence success rate of over 88% for all other attack methods. 
This is because the Pixle attack focuses on searching for adversarial examples using the \(L^{0}\)-norm, which is not the focus of our method. Moreover, baseline methods with better average defence success rates, i.e., PGDT, TRADES, and MART, exhibit a significant decrease in standard accuracy (more than 10%). PRL continues to show poor performance against these adversarial attacks, achieving a success rate of less than 5% in most of the cases (17/25). This is because the adversarial examples of a PRL model, although they account for only a small proportion (\(<\) 9.38% on CIFAR-10, SVHN, or MNIST), are relatively easy for attack algorithms to find. Overall, our method demonstrates robustness and effectiveness in defending against a wide range of adversarial attacks. _Answer to RQ2_: The proposed approach achieves the highest defence success rate (82.3%). Its high certified robustness indeed brings benefits in defending against various adversarial attacks including AutoAttack [14]. RQ3: How efficient is our approach?To answer this question, we measure the training and inference time of our method and the baseline approaches on MNIST. For inference, the time is collected on the whole testing set comprising 10,000 samples. The results are shown in Table 9. **Training time.** For training efficiency, we can observe that compared to methods designed to certify robustness, i.e., IBP and PRL, our method demonstrates significantly higher training efficiency, being 21.93 and 2.89 times faster, respectively. Our method has a similar training cost to data augmentation and adversarial training, with a total training time of around 10 thousand seconds. These findings indicate that our approach is highly efficient and practical for training deep neural networks \begin{table} \begin{tabular}{l|c c c c c c c c c} \hline \hline Attack & ERM & DA & PGDT & TRADES & MART & RS & IBP & PRL & Ours \\ \hline No Attack & **94.85** & 94.21 & 84.38 & 80.42 & 81.54 & 89.45 & 48.40 & 93.82 & 94.23 \\ TIFGSM [16] & 35.10 & 33.00 & 65.70 & 62.90 & 69.10 & 45.40 & 40.20 & 34.00 & **92.80** \\ MIFGSM [15] & 0.00 & 0.00 & 50.90 & 51.90 & 50.50 & 5.80 & 38.10 & 0.00 & **92.80** \\ DIFGSM [59] & 1.00 & 0.00 & 51.75 & 50.50 & 53.60 & 4.10 & 38.10 & 3.10 & **92.80** \\ VMIFGSM [55] & 0.00 & 0.00 & 51.10 & 50.90 & 51.90 & 4.10 & 38.10 & 0.00 & **93.90** \\ TPGD & 38.10 & 39.20 & 69.30 & 69.10 & 70.10 & 48.50 & 50.00 & 28.90 & **91.80** \\ FGSM [19] & 29.90 & 25.80 & 57.95 & 54.60 & 61.90 & 28.90 & 38.10 & 25.80 & **93.80** \\ RFGSM [50] & 0.00 & 0.00 & 49.15 & 50.40 & 48.50 & 3.70 & 38.10 & 0.00 & **90.00** \\ BIM [23] & 0.00 & 0.00 & 52.00 & 57.20 & 47.40 & 2.10 & 38.10 & 0.00 & **90.70** \\ FAB [13] & 1.00 & 2.10 & 43.00 & 46.40 & 40.20 & 5.30 & 38.10 & 4.10 & **90.10** \\ CW [9] & 0.00 & 0.00 & 32.20 & 35.10 & 29.90 & 1.00 & 40.20 & 1.00 & **92.90** \\ UPGD & 0.00 & 0.00 & 49.85 & 50.50 & 49.80 & 5.10 & 38.10 & 0.00 & **93.80** \\ FFGSM [58] & 19.60 & 23.70 & 60.55 & 55.70 & 66.00 & 33.00 & 42.30 & 29.00 & **92.80** \\ Jitter [41] & 11.30 & 12.40 & 48.15 & 47.40 & 49.50 & 34.00 & 39.20 & 24.70 & **90.70** \\ PGD & 0.00 & 0.00 & 57.40 & 54.60 & 60.80 & 7.20 & 40.20 & 0.00 & **91.80** \\ EOTPGD [28] & 0.00 & 0.00 & 50.10 & 50.30 & 50.50 & 3.00 & 38.10 & 0.00 & **90.70** \\ APGD [14] & 0.00 & 0.00 & 48.40 & 51.00 & 46.40 & 1.00 & 38.10 & 0.00 & **90.70** \\ NIFGSM [26] & 0.00 & 0.00 & 57.95 & 56.70 & 59.80 & 7.20 & 38.10 & 1.00 & **92.80** \\ SINIFGSM [26] & 4.10 & 1.00 & 59.00 & 56.70 & 61.90 & 23.70 & 38.10 & 12.40 & **93.70** \\ 
VNIFGSM [55] & 0.00 & 0.00 & 50.45 & 53.00 & 48.50 & 5.10 & 38.10 & 0.00 & **92.90** \\ APGDT [14] & 0.00 & 0.00 & 40.90 & 44.30 & 38.10 & 0.00 & 38.10 & 0.00 & **88.70** \\ Square [1] & 0.00 & 1.00 & 50.40 & 54.00 & 47.40 & 3.10 & 38.10 & 2.10 & **88.08** \\ Add Gaussian Noise & 25.80 & 43.30 & 79.10 & 78.40 & 80.40 & 74.20 & 42.30 & 45.40 & **87.60** \\ OnePixel [47] & 79.40 & 83.50 & 78.05 & 74.20 & 82.50 & 83.50 & 42.50 & 80.40 & **89.70** \\ Pixle [38] & 0.00 & 0.00 & 12.55 & 11.30 & 14.40 & 1.00 & 10.30 & 0.00 & **17.50** \\ PGDL2 & 1.00 & 0.00 & 35.80 & 36.10 & 36.10 & 5.20 & 36.10 & 0.00 & **92.90** \\ \hline \hline \multicolumn{10}{l}{\(L^{\infty}\) bound at 8/255. \(L^{2}\) bound at 10/255. For Gaussian noise, std=0.1. More detailed parameter settings follow} \\ \multicolumn{10}{l}{[https://adversarial-attacks-pytorch.readthedocs.io/en/latest/index.html](https://adversarial-attacks-pytorch.readthedocs.io/en/latest/index.html)} \\ \hline \end{tabular} \end{table} Table 8: Model defence success rates against adversarial attacks on standard benchmarks. Experiments are run on CIFAR-10 for all baselines and attacks. with robustness guarantees. **Inference time.** It is worth noting that the inherent inference process of ERM, DA, and adversarial training algorithms does not provide robustness certification. We thus focus on comparing our method with certified training algorithms, i.e., RS, IBP, and PRL. From Table 9, it can be observed that IBP takes the least inference time, as it only requires a single forward propagation on the input to obtain the predictions and certification results. In contrast, the other three methods provide certification by predicting a large number of samples around the input. IBP's inference efficiency, however, is achieved by trading off training time. Our inference can be considered reasonably efficient, as it has the same order of magnitude as PRL and is two orders of magnitude faster than RS. This is mainly attributed to our method using sequential sampling to reduce processing time. That is, sequential sampling allows for decisions to be made based on observed data at each step [54] and is known to reduce required sample sizes while maintaining statistical correctness due to its adaptability [31]. In comparison to fixed-size sampling, sequential sampling may lead to increased efficiency [10]. Our method converges efficiently on correctly predicting unperturbed samples, but convergence on perturbed samples is slightly delayed, as illustrated in Figure 3(c). _Answer to RQ3_: Our approach has a training cost similar to data augmentation and adversarial training, and is much more efficient than certified training, making it practical for training neural networks with robustness guarantees. **RQ4: How do the hyper-parameters impact the performance of our approach?** We carry out an ablation study to assess the effect of the hyper-parameters in our method. 
**Vicinity size \(\varepsilon\).** To investigate the impact of the vicinity size on certified robust accuracy, we evaluate the models with altered \(L^{\infty}\)-norm radius \(\epsilon\) on each dataset. \begin{table} \begin{tabular}{l|c c c c} \hline \hline Approach & \multicolumn{4}{c}{Certified Robust Accuracy} \\ \hline & CIFAR-100 & CIFAR-10 & SVHN & MNIST \\ ERM & 33.45 & 48.85 & 59.34 & 48.01 \\ DA & 54.43 & 83.50 & 84.79 & 81.23 \\ PGDT & 44.59 & 83.23 & 87.98 & 95.89 \\ TRADES & 58.86 & 80.57 & 82.45 & 95.39 \\ MART & 56.73 & 81.35 & 73.84 & 95.22 \\ RS & 53.93 & 88.98 & 86.03 & 90.48 \\ IBP & 33.45 & 54.41 & 67.34 & 97.74 \\ PRL & 53.99 & 91.74 & 91.97 & 98.99 \\ Ours & **57.27** & **93.58** & **92.85** & **97.15** \\ \hline \hline \end{tabular} \end{table} Table 10: The certified robust accuracy of different approaches on various datasets within a smaller vicinity. Figure 3: (a) & (b) Illustrations of loss convergence of the mean term and variance term in Objective (7). (c) Illustration of model performance in terms of certified robust accuracy, certified robustness rate as well as standard accuracy. The illustrated figures are from experiments on MNIST. The performance of ERM [53] is also shown for comparison. \begin{table} \begin{tabular}{l|c|c} \hline \hline Approach & Training time (sec) & Inference time (sec) \\ \hline ERM & \(4.9\times 10^{2}\) & 27 \\ DA & \(8.9\times 10^{3}\) & 27 \\ PGDT & \(9.9\times 10^{3}\) & 27 \\ TRADES & \(9.9\times 10^{3}\) & 27 \\ MART & \(9.9\times 10^{3}\) & 27 \\ RS & \(5.8\times 10^{4}\) & \(2.7\times 10^{5}\) \\ IBP & \(2.1\times 10^{5}\) & 27 \\ PRL & \(2.8\times 10^{4}\) & \(2.7\times 10^{3}\) \\ Ours & \(9.6\times 10^{3}\) & \(4.7\times 10^{3}\) \\ \hline \hline \end{tabular} \end{table} Table 9: Comparison of training and inference overhead on the MNIST dataset (300 training epochs). For a fair comparison, the training cost of all approaches is collected using a single NVIDIA RTX 2080 Ti GPU. Specifically, for MNIST, values of \(\epsilon\) are selected from \(\{0.1,0.3\}\), while for the other three datasets, its values are chosen from \(\{2/255,8/255\}\). The results are shown in Table 10. We observe a trade-off between certified robust accuracy and the usefulness of certification, i.e., decreasing the vicinity radius increases certified robust accuracy. Our approach achieves high certified robust accuracy (\(>85\%\)) within a reasonable range of the vicinity, and experiences a \(0.36\%\) increase when the radius is reduced to one third (on MNIST) and a \(2.98\%\) average increase when it is reduced to one quarter (on the other datasets). **Importance factor \(\lambda\).** We conduct an experiment on the impact of the importance factor \(\lambda\) on the performance of neural models, including standard accuracy, certified robust accuracy, and defence success rate against AutoAttack. The value of \(\lambda\) ranges from \(0\) to \(5\), with an interval of \(0.25\). Figure 4 presents the trend of changes in model performance for MNIST, which is representative of the other results. Note that a value of \(\lambda\) close to \(1\) yields the best performance. When the value of \(\lambda\) decreases, the contribution of the proposed variance-minimization term decreases as well. If \(\lambda\) is too small, i.e., close to \(0\), the training process becomes similar to data-augmented training with random perturbation, which prioritizes optimizing average losses, resulting in a drop in the certified robustness rate and thus decreasing the certified robust accuracy. 
On the other hand, if the loss function excessively emphasizes the variance term with a large value of \(\lambda\), it can lead to a decrease in standard accuracy and further impact the certified robust accuracy. Additionally, the defence success rate also decreases by about a quarter when varying \(\lambda\) from \(1\) to \(5\). **Percentage to certify \(\kappa\).** To investigate how the strictness of the certification requirement influences the certified robust accuracy, we vary the acceptable level \(\kappa\) and significance level \(\alpha\). The certified robust accuracy with regard to different acceptable levels and significance levels is presented in Table 11 and Table 12, respectively. Note that \(\kappa=0\) means conducting deterministic robustness certification on the model, which can only be achieved by IBP. The remaining baselines and our method can only provide probabilistic robustness certification results for the model. It can be observed that the variation of both the acceptable level \(\kappa\) and significance level \(\alpha\) does not have a significant impact on the certified robust accuracy, except for ERM and DA. Specifically, for our method, when \(\kappa\) changes from \(10^{-3}\) to \(10^{-1}\), the certified robust accuracy improves by only \(1.05\%\); no increase in certified robust accuracy is observed when \(\alpha\) varies from \(10^{-3}\) to \(5\times 10^{-2}\). ## 5 Conclusion We present an approach that improves the robustness of neural networks against adversarial examples. Our approach includes a training method that minimizes both the mean and variance of the loss in prediction and an inference method that provides probabilistic certified robustness. Through theoretical analysis, we have shown that minimizing the variance tightens the upper bound on the probability of adversarial examples (Proposition 3.1); empirically, this translates into over \(91\%\) certified robust accuracy. Our experimental results on standard benchmark datasets show that our method achieves a higher defence success rate and certification rate compared to the state-of-the-art while sacrificing less standard accuracy. \begin{table} \begin{tabular}{l|c|c|c} \hline \hline Approach & \(1-\alpha=0.95\) & \(1-\alpha=0.99\) & \(1-\alpha=0.999\) \\ \hline ERM & 2.55 & 1.25 & 1.25 \\ DA & 77.56 & 76.07 & 76.07 \\ PGDT & 82.90 & 82.90 & 82.90 \\ TRADES & 78.80 & 78.80 & 78.80 \\ MART & 72.21 & 72.21 & 72.21 \\ RS & 87.98 & 87.98 & 87.98 \\ IBP & 40.00 & 40.00 & 40.00 \\ PRL & 90.63 & 90.63 & 90.63 \\ Ours & 91.75 & 91.75 & 91.75 \\ \hline \hline \end{tabular} \end{table} Table 12: Comparison of the influence of different \(\alpha\) values on the certified robust accuracy on CIFAR-10. \begin{table} \begin{tabular}{l|c|c|c|c} \hline \hline Approach & \(\kappa=0\) (Deterministic) & \(\kappa=10^{-3}\) & \(\kappa=10^{-2}\) & \(\kappa=10^{-1}\) \\ \hline ERM & - & 1.25 & 1.25 & 25.09 \\ DA & - & 73.50 & 76.07 & 86.59 \\ PGDT & - & 82.82 & 82.90 & 82.95 \\ TRADES & - & 78.69 & 78.80 & 79.60 \\ MART & - & 71.42 & 72.21 & 73.43 \\ RS & - & 87.63 & 87.98 & 88.08 \\ IBP & 35.13 & 39.98 & 40.00 & 44.41 \\ PRL & - & 89.88 & 90.63 & 91.97 \\ Ours & - & 91.73 & 91.75 & 92.78 \\ \hline \hline \end{tabular} \end{table} Table 11: Comparison of the influence of different \(\kappa\) values on the certified robust accuracy on CIFAR-10. Figure 4: Adjusting hyperparameter \(\lambda\) makes the model converge with different performance. The experiment is run on MNIST with an \(L^{\infty}\) bound of \(0.3\).
2301.12487
Mitigating Adversarial Effects of False Data Injection Attacks in Power Grid
Deep Neural Networks have proven to be highly accurate at a variety of tasks in recent years. The benefits of Deep Neural Networks have also been embraced in power grids to detect False Data Injection Attacks (FDIA) while conducting critical tasks like state estimation. However, the vulnerabilities of DNNs along with the distinct infrastructure of cyber-physical systems (CPS) can favor the attackers in bypassing the detection mechanism. Moreover, the divergent nature of CPS engenders limitations to the conventional defense mechanisms against False Data Injection Attacks. In this paper, we propose a DNN framework with an additional layer that utilizes randomization to mitigate the adversarial effect by padding the inputs. The primary advantage of our method is that, when deployed to a DNN model, it has trivial impact on the model's performance even with larger padding sizes. We demonstrate the favorable outcome of the framework through simulation using the IEEE 14-bus, 30-bus, 118-bus and 300-bus systems. Furthermore, to justify the framework, we select attack techniques that generate subtle adversarial examples that can bypass the detection mechanism effortlessly.
Farhin Farhad Riya, Shahinul Hoque, Jinyuan Stella Sun, Jiangnan Li, Hairong Qi
2023-01-29T16:50:16Z
http://arxiv.org/abs/2301.12487v2
# Mitigating Adversarial Effects of False Data Injection Attacks in Power Grid ###### Abstract Deep Neural Networks have proven to be highly accurate at a variety of tasks in recent years. The benefits of Deep Neural Networks have also been embraced in power grids to detect False Data Injection Attacks (FDIA) while conducting critical tasks like state estimation. However, the vulnerabilities of DNNs along with the distinct infrastructure of cyber-physical systems (CPS) can favor the attackers in bypassing the detection mechanism. Moreover, the divergent nature of CPS engenders limitations to the conventional defense mechanisms against False Data Injection Attacks. In this paper, we propose a DNN framework with an additional layer that utilizes randomization to mitigate the adversarial effect by padding the inputs. The primary advantage of our method is that, when deployed to a DNN model, it has trivial impact on the model's performance even with larger padding sizes. We demonstrate the favorable outcome of the framework through simulation using the IEEE 14-bus, 30-bus, 118-bus and 300-bus systems. Furthermore, to justify the framework, we select attack techniques that generate subtle adversarial examples that can bypass the detection mechanism effortlessly. deep neural networks, false data injection attack, state estimation, power grid, cyber physical system. ## I Introduction Power grid is a complex system that connects several electric power sources to consumers through extensive power transmission and distribution networks. For ensuring the security of the electric power infrastructure, control and management systems of the power grid are essential. The North American Electrical Reliability Council's statistics indicate that 11 blackouts have occurred as a result of abnormalities in the cyber system of supervisory control and data acquisition (SCADA) [1]. State estimation is the method of determining unknown state variables in a power system based on the meter measurements received by the control center. Through the examination of the meter measurement data and power system models, state estimation is utilized in system monitoring to accurately determine the state of the power grid. Typically, the results of the state estimation are employed in contingency analysis, which is then used to control the components of the power grid. As illustrated in Figure 1, measurement data from sensors or meters, such as bus voltage, bus power flow, branch power flow, and load profiles, are typically transmitted to a control center (SCADA). The control center then analyzes the received measurement data, estimates the states of the power system, determines possible contingencies, and sends the appropriate control signals to the Remote Terminal Units (RTUs) to ensure the reliable operation of the power system. Cyber-physical systems are typically complex, with a variety of geographically dispersed data sources. The meters or sensors are deployed in the wild and collect data from the physical environment, as depicted in Figure 1. Due to this distributed nature of the system, it is difficult to guarantee the security of all the meters. Fig. 1: Structure of communication and control system of power grids. Moreover, the data gathered by the sensors rarely matches the ideal theoretically calculated data because of the presence of various perturbations in the environment, which cause measurement errors. 
Attackers could exploit this limitation by adding artfully crafted perturbations to the measurements of the compromised meters, which can bypass the anomaly detection system. Numerous studies have been conducted on various False Data Injection Attack (FDIA) scenarios, and some of the attacks [2][3][4] enable the attackers to generate subtle malicious false data that can bypass the detection system and make the estimator produce incorrect output. Various critical operations like Optimal Power Flow analysis, energy distribution and real-time pricing rely on the state estimation, and these operations can be disturbed by the attackers through successful false data injection attacks. Similarly, much research has also been conducted on developing corresponding detection strategies for different attack scenarios. Among the proposed defense techniques, Machine Learning models have become the most adopted techniques for anomaly detection in recent years; especially Deep Neural Networks have demonstrated higher performance in the related research evaluations. However, the vulnerabilities of DNN models against well-crafted perturbations can pose a critical issue, especially in CPS domains like the power grid, as these systems have constraints that differ from other domains like computer vision. For state estimation, the sensors used to accumulate data from the physical environment cause measurement errors, and well-crafted perturbations imitating these measurement errors can degrade the performance of well-trained DNN models. With this in mind, our research aims to provide a framework that can increase the robustness of DNN models and make the attack computationally expensive for the attackers. The key contributions in this paper can be summarized as follows: * We propose a general framework that can be easily adopted by the prevailing Machine Learning techniques which are utilized in FDIA detection. * The vast distributive nature of CPS requires infeasible labor to guarantee the security of the sensors. We highlight that our framework does not require any specific hardware configuration or re-deployment of the sensors. * We propose a defense method that mitigates the adversarial effect by reconstructing the input samples with multiple combinations of random padding. The multi-combination padding expands the number of data samples, which hinders the accuracy degradation as the padding size increases; this is crucial for test cases having a smaller number of meters. * Our proposed framework has negligible accuracy drop compared to the vanilla models, with negligible computational cost changes. * We validate the framework through simulation using the IEEE test systems, including the IEEE 14-bus, 30-bus, 118-bus and 300-bus cases. For every case the favorable outcomes justify the proposed mechanism. The rest of the paper is structured as follows. Section II discusses the related works in both the attack and defense fields. Essential background information and technical details are given in Section III. A subsection of Section IV discusses the approach of finding a suitable attack strategy that better justifies the defense mechanism with subtle perturbations. Moreover, Section IV gives the details on the proposed framework with the necessary explanation. In Section V the information about the utilized dataset, model architecture and evaluation results can be found. Section VI discusses future work and concludes the paper.
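Before turning to related work, it is worth making concrete why subtle false data can evade the estimator's own residual check. The sketch below shows the classic stealthy-FDIA construction — an attack vector lying in the column space of the measurement matrix leaves the residual unchanged — which underlies several of the attacks surveyed in the next section; all dimensions and values here are illustrative assumptions.

```python
# Sketch of the classic stealthy FDIA (attack vector a = H c): the residual of
# z + a equals the residual of z, so a residual-based detector is blind to it.
import numpy as np

def residual_norm(H, z):
    x_hat = np.linalg.lstsq(H, z, rcond=None)[0]   # least-squares state estimate
    return np.linalg.norm(z - H @ x_hat)

rng = np.random.default_rng(1)
H = rng.normal(size=(6, 3))                # toy 6-meter, 3-state system
z = H @ rng.normal(size=3) + 0.01 * rng.normal(size=6)

c = np.array([0.2, -0.1, 0.3])             # attacker's intended state offset
a = H @ c                                  # stealthy attack vector

print(np.isclose(residual_norm(H, z), residual_norm(H, z + a)))  # True
```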
## II Related works After 2009, when Y. Liu, et al. [5] revealed that measurements obtained from SCADA systems are susceptible to malicious false data injection (FDI) attacks, numerous attack and defense studies have been carried out to comprehend the potential of FDI attacks, because of the crucial role that state estimation performs in the power system. A wide range of attack studies concentrate on stealthy techniques that bypass the system's residual-based bad data detectors by using their knowledge of the topological Jacobian matrix. With the understanding of the topology of the system (sparse Jacobian matrix) many attack strategies have been built; among them, the proposed strategies of [6, 7] and [8] have demonstrated quality evaluation results. Even without full knowledge of the Jacobian matrix, FDI attacks can still be constructed, as demonstrated in [9][10]. The results of some attack studies like [11] have shown potential stability damage to the power system. Using a static security evaluation, C. Jiongcong et al. [12] examined the effects of FDIA and concluded that FDIA can lead operators to make incorrect decisions. [13] analyzes the effects of both random and structured bad data on state estimation. Data leaking and false injection attacks continue to be primary concerns for smart grid security, since the stability of the grid heavily depends on the data in the grid [5]. To defend against FDI attacks, various defense mechanisms have been studied which protect a few carefully selected measurements [14][15]. They provide both the optimal and the least complex algorithms for preserving the integrity of the data, and they employ physical experiments to support their claims. An adaptive cumulative sum approach was proposed by Huang et al. [15] to address the issue of quick detection of false data. Studies like [16] and [17] utilized PMU measurements which are synchronized to GPS signals. With the increasing popularity of ML models in numerous fields, different studies have been conducted in the power grid domain utilizing the advantages of ML models, not only for their quality performance but also for their easy implementation. Unlike the previously mentioned defenses, using machine learning models does not require any additional hardware devices or re-deployment of the sensors. Moreover, ML models do not require solving the complex time-domain equations relating to power flow, which makes the strategies computationally efficient. Because of the mentioned advantages, machine learning is one of the primary methods used to detect FDIA at the moment. In order to classify attacks, Ozay et al. [18] generate Gaussian-distributed attacks and employ both supervised and semi-supervised machine learning techniques. Similar to this, Esmalifalak et al. [19] develop an unsupervised-learning, case-based statistical anomaly detection method and a distributed SVM-based model for labeled data. [22] exploited Recurrent Neural Networks (RNN) to detect FDIA. In order to capture the dynamic behavior of the power system, a recurrent neural network with LSTM cells is employed in [20], and a convolutional neural network [21] is utilized to balance two input sources. Several studies have been conducted using Deep Neural Networks (DNN) to defend against FDIA [23]-[25]. Though DNN models have shown extremely good results among all the ML models, they are also vulnerable to different attacks. Szegedy et al. [26] uncovered the existence of DNN adversarial examples in 2013.
DNNs can be manipulated into generating false results by adding a modest, purposefully crafted perturbation to the inputs. Several adversarial attack techniques have been proposed: Goodfellow et al. presented the Fast Gradient Sign Method (FGSM), which generates perturbations using signed gradient values [27]. The Fast Gradient Method developed by Rozsa et al. utilizes the gradient values directly [28]. The iterative attack [29] and DeepFool [30] are two other well-known adversarial attack methods. To defend against those attacks, several methods were proposed, among which adversarial training [31], model distillation [32], adversarial detection [33] and input reconstruction [34] are the most used defense techniques. ## III Approach ### **Overview of Generating Adversarial Examples** In this section we discuss the attack strategies we adopted in our research to better justify our proposed framework. In this study we seek the attack strategy that adds subtle perturbations to the samples and generates well-crafted adversarial examples that degrade the model's performance. We assume that if a model without any defense mechanism can classify a crafted sample correctly, then either the model is robust itself or the attack is not strong enough to fool the model. Therefore, in this section we look for the attack that produces well-crafted samples that make the model generate wrong classifications. Our study includes the following three attack strategies: **Random Perturbation:** This attack strategy considers different levels of random noise intensity for generating the false data. The noise intensity ranges from 0 to 1. As the noise intensity is randomly selected, the false data generated by this attack is highly likely to contain very large noise, which makes the true and false data easily distinguishable by the detection model. **Universal Noise:** In this attack strategy a universal noise is selected for a certain set of data samples, chosen to maximize the model's misclassification. The universal noise is then used to create the crafted false data samples. **Iterative Gaussian Noise:** This attack strategy follows an iterative Gaussian distribution to generate the false data, and it creates the most subtle noise of the three attack techniques. Fig. 2 shows the manifold of the true and false data samples, which depicts that the samples have very close features that are not easily distinguishable, making the model misclassify with higher confidence. ### **Overview of the proposed framework** In FDIAs the attackers exploit gradient-based optimization techniques to iteratively produce the perturbations. Adversarial perturbations in FDIA detection are very likely to be distinct for each sample, since the perturbation produced by multi-step attacks typically has worse transferability. Therefore, a perturbation generated for a particular sample has little chance of working on a different sample and making the model misclassify with the same high confidence. We exploit this limitation and propose a framework that generates randomly padded samples, making it computationally infeasible for an attacker to generate adversarial examples. Inspired by the work of Jiangnan et al.
[36], which introduces a framework that reconstructs the input samples with random padding as a defense mechanism, making it infeasible for attackers to generate adversarial examples with multi-step iteration by guessing the random combination of a sample. Fig. 2: Manifold of normal and false data generated by the iterative Gaussian attack strategy. Fig. 3: Model accuracy for different attack strategies while various numbers of meters are compromised for bus case 14. Fig. 4: Model accuracy for different attack strategies while various numbers of meters are compromised for bus case 30. Our framework follows a similar construction, but we redesigned the input sample reconstruction procedure so that it does not compromise the performance of the model while handling large padding sizes. Large padding sizes are crucial when considering cases with fewer features in each meter measurement vector. ## IV Experiments In this section, we conduct experiments with IEEE test systems, namely the IEEE 14-bus, 30-bus, 118-bus, and 300-bus systems, to validate our proposed framework. ### **Dataset** For the simulation, we use a DC power flow model. We extract the configuration for the IEEE test systems from MATPOWER, a MATLAB tool for dealing with power flow problems [35]. Using the MATPOWER tool we construct the H matrix and then derive the meter measurements. Each measurement vector z contains the power flow measurement data of the branches, and the number of measurements in each z vector is denoted as m. As we consider different IEEE test systems, the value of m is different for each case. To obtain the dataset for FDIA detection we generate normal data and false data, representing the two classes required for the experiments, as FDIA detection is a binary classification problem. Our training dataset contains 40,000 samples for all the IEEE test cases, and 50% of the samples contain perturbations to poison the dataset. The normal data samples are labeled as 0 and the false data samples are labeled as 1. To justify our framework we run multiple experiments for a variety of test cases, and for each case we consider different numbers of compromised meters to compare the results. According to our findings, if around 4% of the total meters can be compromised, the accuracy of the detection model can be highly impacted. Table 1 represents the different scenarios we consider for evaluating our proposed work. Our deployable framework, which is compatible with any DNN model, shows favorable results on different IEEE bus test systems.
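As a rough illustration of the input-reconstruction idea described above, the following sketch pads each measurement vector at a random offset and produces several padded combinations per sample. The padding value, sizes, and aggregation strategy are assumptions for illustration, not the paper's exact procedure.

```python
# Minimal sketch of random multi-combination padding for FDIA-detection inputs.
# Zero-padding and the number of combinations per sample are toy assumptions.
import numpy as np

def random_pad(z, pad_size, rng):
    """Insert `pad_size` zeros at a random offset of measurement vector z."""
    offset = rng.integers(0, len(z) + 1)
    return np.concatenate([z[:offset], np.zeros(pad_size), z[offset:]])

def reconstruct(z, pad_size, n_combinations, rng):
    """Return several independently padded copies of one sample; at inference
    time the detector's decisions over the copies can be aggregated."""
    return np.stack([random_pad(z, pad_size, rng) for _ in range(n_combinations)])

rng = np.random.default_rng(42)
z = rng.normal(size=20)                    # one 20-meter measurement vector
batch = reconstruct(z, pad_size=8, n_combinations=5, rng=rng)
print(batch.shape)                         # (5, 28): five padded variants
```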
2306.06455
Scalable Rail Planning and Replanning with Soft Deadlines
The Flatland Challenge, which was first held in 2019 and reported in NeurIPS 2020, is designed to answer the question: How to efficiently manage dense traffic on complex rail networks? Considering the significance of punctuality in real-world railway network operation and the fact that fast passenger trains share the network with slow freight trains, Flatland version 3 introduces trains with different speeds and scheduling time windows. This paper introduces the Flatland 3 problem definitions and extends an award-winning MAPF-based software, which won the NeurIPS 2020 competition, to efficiently solve Flatland 3 problems. The resulting system won the Flatland 3 competition. We designed a new priority ordering for initial planning, a new neighbourhood selection strategy for efficient solution quality improvement with Multi-Agent Path Finding via Large Neighborhood Search (MAPF-LNS), and use MAPF-LNS for partially replanning the trains influenced by malfunctions.
Zhe Chen, Jiaoyang Li, Daniel Harabor, Peter J. Stuckey
2023-06-10T14:41:05Z
http://arxiv.org/abs/2306.06455v1
# Scalable Rail Planning and Replanning with Soft Deadlines ###### Abstract The Flatland Challenge, which was first held in 2019 and reported in NeurIPS 2020, is designed to answer the question: How to efficiently manage dense traffic on complex rail networks? Considering the significance of punctuality in real-world railway network operation and the fact that fast passenger trains share the network with slow freight trains, Flatland version 3 introduces trains with different speeds and scheduling time windows. This paper introduces the Flatland 3 problem definitions and extends an award-winning MAPF-based software, which won the NeurIPS 2020 competition, to efficiently solve Flatland 3 problems. The resulting system won the Flatland 3 competition. We designed a new priority ordering for initial planning, a new neighbourhood selection strategy for efficient solution quality improvement with Multi-Agent Path Finding via Large Neighborhood Search (MAPF-LNS), and use MAPF-LNS for partially replanning the trains influenced by malfunctions. ## Introduction The Flatland 3 Challenge is the third edition of this popular railway network operation competition. The competition is organized by AICrowd, SBB (Swiss federal railways), SNCF (French national railway company), and Deutsche Bahn (German national railway company), and aims to answer the question: "How to efficiently manage dense traffic on complex rail networks?". The challenge was first held in 2019, where most participants used planning or operations research methods to solve the problem. To encourage participants to use reinforcement learning approaches, the NeurIPS 2020 Flatland Challenge Laurent et al. (2021) added a separate reinforcement learning track (which ranks only reinforcement learning approaches). The competition format changed to tackle an unbounded number of instances of increasing difficulty in 8 hours, in anticipation that the computation speed and large problem size would be bottlenecks for non-reinforcement-learning approaches. However, the 2020 competition showed beyond doubt that planning-based approaches again dominated reinforcement learning approaches. The challenge simulates railway network operations on an idealized railway network, a grid-based map showing rail tracks and train stations, with a set of trains with start and target stations. Our task is to navigate trains to their target stations while complying with the rail transition rules and avoiding collisions. The Flatland 3 Challenge introduces more elements from real-world rail operations. One change reflects the fact that timing and punctuality are crucial for real-world railways. The challenge schedules a time window for each train: the earliest departure time and an expected arrival time. The other change reflects how fast passenger trains and slow freight trains share the same railway network in the real world. The challenge considers trains with different speed profiles, modelled by the minimal number of time steps needed to travel through a rail segment. A significant hidden challenge of the competition is to overcome the slow execution speed of the competition environment. In the NeurIPS 2020 Flatland challenge, the winning software used only \(30\%\) of the \(8\) hours for planning agents; the rest was spent executing the simulation environment. For Flatland 3, the total planning time is only \(5\%\) of the total time, meaning that over 2 hours of evaluation time, we only have about 7 minutes for planning.
In the challenge, reducing the makespan (the time the last train arrives) also helps reduce the environment execution time, since it executes fewer steps, but in Flatland 3, because of the earliest departure times, the overall makespan is hard to reduce by planning (it is strongly bounded by the latest-leaving trains). These considerations mean that a careful balance is required in how much time should be spent improving the plans. Any time-consuming optimisation leads to a reduction in the number of solved problems, which can cause significant score loss but limited per-instance score improvement. The challenge is highly related to the academic problem of Multi-Agent Path Finding (MAPF). MAPF defines a graph and a group of agents, where each agent has a start and target vertex, and we need to plan collision-free paths for all agents while minimizing an objective, e.g. the sum of individual costs. The problem is essential for a wide range of applications, including computer games Sigurdson et al. (2018); Li et al. (2020), automated warehousing Ma et al. (2017); Chen et al. (2021); Li et al. (2020), UAV traffic management Ho et al. (2019) and drone swarms Honig et al. (2018). Variants of the MAPF problem, such as MAPF with motion planning Cohen et al. (2019), MAPF with deadlines Ma et al. (2018) and MAPF with delay probabilities (Cap, Gregoire, and Frazzoli 2016; Chen et al. 2021b; Ma, Kumar, and Koenig 2017; Li et al. 2019; Wagner and Choset 2017; Atzmon et al. 2020) are also widely studied and closely related to the Flatland environment. In this paper, we introduce the definition of the Flatland 3 problem, illustrate how Flatland 3 differs from previous editions, and describe a MAPF-based software that efficiently plans and replans punctual paths for trains with different speeds, winning the Flatland 3 competition. ## Flatland 3 Environment ### Problem Definition Flatland 3 aims to solve the rail planning and replanning problem with soft deadlines based on a railway network represented by a \(w\times h\) grid map with \(n\) cities, where each traversable cell is associated with a rail type shown in Figure 2. The rail type determines how trains can traverse through the cell. Each city is a small area on the map and has an even number (minimum 2) of parallel rails, where one rail in the city contains a train arrival station, and the other rail contains a departure station. Figure 1 shows an example with 2 cities, where the red building indicates an arrival station and the departure station is the adjacent cell on the other rail. Time is discretized into timesteps from 0 to \(T_{max}=\lfloor 8(w+h+\frac{m}{n})\rfloor\). There is a set of \(m\) trains \(\{a_{1},a_{2},...,a_{m}\}\) in a problem. Each agent \(a_{i}\) has a start cell \(s_{i}\) (= a train station), an initial orientation \(d_{i}\), a target cell \(g_{i}\) (= another train station), a max speed counter \(C_{i}^{max}\in[1...4]\) (indicating the minimum timesteps needed to traverse through a cell), the earliest departure timestep \(EDT_{i}\), and an expected arrival timestep \(EAT_{i}\) (a soft deadline). Note that the speed counter \(C^{max}\) is an inverse of speed: \(C^{max}=1\) trains can move in every time step, while \(C^{max}=4\) trains can move at most once in 4 timesteps. Our task is to navigate as many trains as possible to their target cells and minimize total arrival delays for those that miss their EAT.
To be more specific, we want to maximize the _normalized reward_ (or _reward_ for short) defined as: \(1-\frac{\sum_{1\leq i\leq m}D_{i}}{mT_{max}}\in[0,1]\), where \(D_{i}=\max(ACT_{i}-EAT_{i},0)\) is how many timesteps \(a_{i}\) is delayed arriving at its goal \(g_{i}\), and \(ACT_{i}\) is the actual arrival time of agent \(a_{i}\). If \(a_{i}\) does not arrive at \(g_{i}\) before \(T_{max}\), \(ACT_{i}\) is estimated as \(T_{max}+distance(v_{i},g_{i})\) (\(v_{i}\) is the location of \(a_{i}\) at \(T_{max}\), or \(s_{i}\) if \(a_{i}\) has not entered the environment by \(T_{max}\)). We define _success rate_ as the percentage of trains that reach their target cells by timestep \(T_{max}\) and the _earliest arrival time_ \(T_{i}^{0}\) of train \(a_{i}\) as the earliest timestep when it can reach its target cell when ignoring collisions with other agents. Each agent is parked off the map at timestep 0 and leaves the map immediately when it arrives at its goal cell. An agent \(a_{i}\) appears in its start cell (entering the map) with its initial orientation and a speed counter \(C_{i}=0\) when receiving a move forward command in or after timestep \(EDT_{i}\). At each timestep, an agent only occupies one cell, and we navigate all agents giving each of them a command. When \(a_{i}\) is on the map, a move forward/left/right command will: * increase \(C_{i}\) by \(1\) if \(C_{i}<C_{i}^{max}\), * set \(C_{i}=0\) and move \(a_{i}\) to an adjacent cell following the transition rule of a rail type if \(C_{i}==C_{i}^{max}\), the move action does not collide with any other trains and \(a_{i}\) is not suffering from a malfunction, * keep \(C_{i}\) unchanged and stay in the current cell otherwise. Two actions collide iff two agents arrive at the same cell or two agents swap adjacent cell locations at the same timestep. A stop command leaves the agent's location and \(C\) unchanged. Malfunctions simulate delays by stopping a train at a random timestep for a random duration. The random timestep is generated by a Poisson process with a rate \(\lambda\). The random duration is uniformly selected from a range of positive integers, and the delay duration becomes known when the malfunction occurs. The value of \(\lambda\) and the range of the random delay duration are known to the planning code. The competition provides a software library written in Python to simulate the environment. For each instance, the simulation ends when all agents reach their target cells or the time reaches \(T_{max}\). Our solution is written as a C++ dynamic library and called by the Python simulator. As discussed in the introduction, this environment consumes the majority of the execution time. ### Competition Configuration The challenge evaluates participants' codes on 150 instances with a time limit of 2 hours. These instances are categorized into 15 difficulty levels and each level contains 10 distinct instances. The easiest level has \(30\times 30\) grid maps with 7 agents and 2 cities. The hardest level has \(158\times 158\) grid maps with 425 agents and 41 cities. ### Flatland Challenge and MAPF (Li et al. 2021b) showed that the Flatland Challenge has important differences from standard MAPF, but is closely related to MAPF variants. Figure 1: Example of flatland railway network. (Laurent et al. 2021) Figure 2: Flatland rail types: (a) straight, (b) curve, (c) simple switch, (d) diamond crossing, (e) single slip switch, (f) double slip switch, (g) tri-symmetrical switch, and (h) symmetrical switch. (Li et al. 2021b)
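As a concrete check of the normalized-reward definition from the Problem Definition above, here is a minimal sketch; inputs are plain lists, and estimating \(ACT_{i}\) for non-arrived trains via \(T_{max}+distance(v_{i},g_{i})\) is left to the caller, as in the text.

```python
# Sketch of the Flatland 3 normalized-reward bookkeeping defined above.
def normalized_reward(eat, act, t_max):
    """eat[i] / act[i]: expected / actual arrival timesteps of train i."""
    m = len(eat)
    delays = [max(a - e, 0) for e, a in zip(eat, act)]   # D_i, zero if punctual
    return 1.0 - sum(delays) / (m * t_max)

# Two trains, one punctual and one 30 timesteps late, horizon T_max = 480:
print(normalized_reward(eat=[100, 200], act=[100, 230], t_max=480))  # 0.96875
```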
In the standard MAPF definition, we navigate a team of agents from their start vertices to goal vertices on an undirected graph with minimum flow time and without collisions. At each timestep, agents can move to an adjacent vertex or wait at the current vertex. Unlike standard MAPF, the Flatland environment restricts train movement to rails; trains are parked off the map before entering the map and after reaching their targets; the maximum time \(T_{max}\) acts as a hard deadline for all trains; and trains break down randomly while moving. Flatland 3 further restricts trains' movement based on a speed profile, adds soft deadlines and departure times for each train, and uses the objective of minimizing arrival delays. ## NeurIPS 2020 Flatland Challenge Compared to the NeurIPS 2020 Flatland Challenge, the organizer of the competition introduced more elements from real-world railway operations. In the NeurIPS 2020 Flatland Challenge, all trains have the same speed \(C^{max}=1\), have no earliest departure timestep or expected arrival timestep, and the optimisation objective is similar to the sum of individual costs, e.g., the total number of steps taken by all trains to reach their targets. In addition, the 2020 challenge evaluates solutions over an infinite number of instances with increasing difficulty under an 8-hour runtime limit. Just as in Flatland 3, the 2020 challenge has an overall track, which takes all solutions into account, and a reinforcement learning track, which only considers reinforcement learning-based solutions. The winning solution of the overall track, a MAPF-based approach, solved 362 instances with a score of 297.507, and the largest instance contains 3,256 trains. The top solution from the reinforcement learning track spent 8 hours solving 336 instances with a score of 214.150, which is reached by the top MAPF-based solution after only 15 minutes. ### MAPF-based Flatland 2020 solution Li et al. (2021) developed a MAPF-based software, which incorporates many state-of-the-art MAPF techniques, for solving train planning and replanning problems on large-scale networks under uncertainty, and won the NeurIPS 2020 Flatland Challenge. Our solution for Flatland 3 is based on this work. Their basic solution uses _Prioritized Planning_ (PP) Silver (2005) to generate the initial solution and uses _Minimal Communication Policy_ (MCP) Ma et al. (2017) to handle malfunctions during execution. _MAPF via Large Neighborhood Search_ (MAPF-LNS) Li et al. (2021), _Partial Replanning_ (PR) and _Lazy Planning_ (LP) Li et al. (2021) further improved their solution. PP first sorts agents in a priority order, from high priority to low priority. It then uses _Safe Interval Path Finding_ (SIPP) Phillips and Likhachev (2011) to plan the shortest paths, while avoiding collisions with already planned paths, for each agent in the priority order. The Flatland Environment problems are well-formed (Ma et al. 2019), hence PP is guaranteed to find a solution if such a solution exists. Although PP computes solutions rapidly, its solution quality is far from optimal. MAPF-LNS further improves the solution of PP by repeating a Large Neighborhood Search process to improve the quality. It takes the solution of PP as input and repeatedly selects, destroys, and replans the paths of a subset of agents until an iteration limit is reached. Li et al. run 4 PP processes with different priority orders followed by 4 MAPF-LNS processes in parallel and select the best solution.
To balance the trade-off between solution quality and runtime, they collected training data offline and used _Simulated Annealing_ to determine the LNS iteration limits for instances of different sizes. MCP stops some trains to maintain the order in which each train visits shared cells to avoid potential deadlocks caused by malfunctions. But it sometimes stops trains unnecessarily. They designed a PR mechanism, which selects and replans the paths of agents that are influenced by malfunctioning agents, to overcome this issue. When there are thousands of agents to schedule, the runtime of PP with SIPP grows rapidly, as it has to plan paths that avoid collisions with an increasing number of existing paths. The LP scheme tackles this issue by only planning paths for some of the agents during the initial planning phase and planning the rest during the execution. It prevents pushing too many agents into the environment, thus avoiding severe traffic congestion, takes into account malfunctions that have already happened in later planning, ignores the paths of finished agents and significantly reduces the planning runtime. ## Rail Planning and Replanning with Soft Deadlines In this section, we introduce a modified and improved version of Li et al. (2021)'s solution which solves Flatland 3 problems efficiently. We evaluated our solution over 150 locally generated instances, in which we simulate the challenge benchmark based on the public challenge configuration, on a Nectar Cloud Server with an AMD Opteron 63xx CPU and 32 GB RAM. The source codes and evaluation instances will be made public upon publication. ### SIPP with Discrete Speed One advantage of SIPP is that it is capable of planning paths with motion constraints Ma et al. (2019). A search node \(n=\langle v,I,t\rangle\) in SIPP includes a current vertex \(v\), an obstacle-free time interval \(I=[l,u)\) of \(v\), and an earliest possible arrival time \(t\). SIPP expands \(n\) by generating successor nodes for all reachable time intervals of all vertexes reachable from \(n\), where \(n^{\prime}\) is a successor of \(n\), \(I^{\prime}\) is the interval of \(n^{\prime}\), \(I^{\prime}.l\leq I.u\), \(I^{\prime}.u>n.t+1\) and \(n^{\prime}.t=\max(n.t+1,I^{\prime}.l)\). The nature of discrete speed in Flatland 3 is a kind of motion constraint: an agent \(a_{i}\) must stay at a vertex for at least \(C_{i}^{max}\) timesteps before it traverses to the next vertex. To satisfy this constraint, we generate a successor node \(n^{\prime}\) iff \(I^{\prime}.u>n.t+C_{i}^{max}\). Then the earliest possible arrival time of \(n^{\prime}\) is \(n^{\prime}.t=\max(n.t+C_{i}^{max},I^{\prime}.l)\). Our basic Flatland 3 solution uses SIPP with discrete speed for PP, modifies the SIPP to only allow agents entering the map at or after the \(EDT\) of each agent, disables LP as there are at most 425 agents in any competition instance, disables PR to solve more problems within the time limit, and optimises the code quality. The LNS is modified to accept a replanned solution iff the total arrival delay is improved in each iteration, and the iteration limit is at most 50 for small and large instances and at most 500 for instances in the middle, as we observed locally that the instances in the middle perform worse than others without LNS. This solution solves 135 problems in 2 hours and gives a score of 123.966. But this is not enough to win the competition.
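A schematic sketch of the discrete-speed successor rule just described; the safe-interval and graph plumbing here are illustrative, not the competition code.

```python
# Schematic successor generation for SIPP with discrete speed: a successor is
# generated only if I'.l <= I.u and I'.u > n.t + C_max, with arrival time
# n'.t = max(n.t + C_max, I'.l). Data structures are toy assumptions.
from dataclasses import dataclass

@dataclass
class Node:
    vertex: int
    interval: tuple   # obstacle-free safe interval [l, u) at `vertex`
    t: int            # earliest possible arrival time

def successors(n, neighbor_intervals, c_max):
    """neighbor_intervals: list of (vertex, (l, u)) safe intervals adjacent to n."""
    out = []
    for v, (l, u) in neighbor_intervals:
        if l <= n.interval[1] and u > n.t + c_max:   # reachable under speed profile
            out.append(Node(v, (l, u), max(n.t + c_max, l)))
    return out

n = Node(vertex=0, interval=(0, 10), t=3)
print(successors(n, [(1, (0, 5)), (2, (9, 20))], c_max=4))
# only the (9, 20) interval survives; arrival time is max(3 + 4, 9) = 9
```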
Due to the increasing difficulty of the evaluation instances and the long environment execution time on large instances, solving one more instance or improving the average score over all solved instances becomes extremely difficult. Hence the improvements we discuss below, while they may seem tiny, are in fact significant. ### Slack Based Priority In the 2020 solution, PP sorts agents by train index, earliest arrival time, or start cells. Using parallel computing, PP computes with different priority orders at the same time and selects the best solution. In the Flatland 3 challenge, the introduction of soft deadlines and the new optimization objective makes the soft deadline an important factor for priority ordering. A basic idea would be to prioritize agents with the earliest soft deadline, but this can be misleading. Assume one agent \(a_{1}\) has a late \(EAT_{1}\), but also a late \(EDT_{1}\); the shortest path distance between its start and goal vertex might be equal to \(EAT_{1}-EDT_{1}\). Another agent \(a_{2}\), which must collide with \(a_{1}\), has \(EAT_{2}<EAT_{1}\), but \(EAT_{2}-EDT_{2}\) is far larger than the length of its shortest path. Clearly, giving \(a_{1}\) higher priority is a better choice, although it has a later \(EAT_{1}\). Considering the scenario above, we define \(slack_{i}=EAT_{i}-EDT_{i}-distance(s_{i},g_{i})\) of agent \(a_{i}\) as a better metric for a priority order. We define a new priority ordering based on \(slack\), tie-breaking by prioritizing fast agents over slow agents. We added this priority order to the parallel-PP approach of the 2020 solution and it solved 135 problems with a score of 124.227. ### Delay-based neighbourhood selection Neighbourhood selection, where the LNS selects a subset of agents for replanning, is the key for MAPF-LNS to improve solutions efficiently. The 2020 solution designed three neighbourhood selection strategies: (1) an agent-based strategy, which selects a train that is heavily delayed and other trains that cause the delay, (2) an intersection-based strategy, which selects trains that visit the same intersection, and (3) a start-based strategy, which selects trains with the same start cell. It uses adaptive LNS [14] to keep track of each strategy's relative improvement and chooses strategies randomly in proportion to their improvement. Here we propose a delay-based strategy that takes account of soft deadlines. We randomly select an agent from all agents that cannot arrive at their goal vertex before their expected arrival time and find other agents potentially blocking its way. Then we randomly prioritize the selected agents, replan their paths, and only accept their new paths if this results in lower total arrival delays. If we replace the start-based strategy (which is not relevant to Flatland 3, where trains have a scheduled departure time) in the adaptive LNS with the delay-based strategy, this improves the score to 124.352. If we disable adaptive LNS and use only delay-based neighbourhood selection we get a score of 124.432. We assume the relatively small iteration limit settings make it harder for adaptive LNS to learn the right balance of strategies.
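A minimal sketch of the slack-based ordering defined above, with fast agents breaking ties; the agent records and the distance function are illustrative.

```python
# Sketch of slack-based priority: smaller slack = EAT - EDT - dist(s, g) is
# planned first, and faster agents (smaller C_max) break ties.
def slack_order(agents, dist):
    """agents: list of dicts with keys eat, edt, s, g, c_max."""
    def key(a):
        slack = a["eat"] - a["edt"] - dist(a["s"], a["g"])
        return (slack, a["c_max"])        # low slack first; fast agents first
    return sorted(agents, key=key)

agents = [
    {"eat": 120, "edt": 10, "s": 0, "g": 1, "c_max": 1},   # lots of slack
    {"eat": 90,  "edt": 40, "s": 2, "g": 3, "c_max": 2},   # tight schedule
]
print(slack_order(agents, dist=lambda s, g: 40))  # the tight agent comes first
```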
### Partial Replanning using LNS PR fixes unnecessary waits caused by MCP when a new malfunction happens, but it also has the following drawbacks: (1) malfunctions happen on almost every timestep in a large instance, and running PR on every timestep slows down the execution process, (2) too many trains need to be replanned on large instances, and (3) it replans paths without the guidance of the reward; in other words, if we can only replan a small proportion of the affected agents, it does not know which to replan first. To overcome these issues, we use MAPF-LNS for PR, which focuses on agents causing arrival delays and optimises total arrival delays during the execution. LNS PR also overcomes the issue that malfunctions during execution make the optimisation of the initial plan less effective. Differently from PR in the 2020 solution, which is triggered by each new malfunction, we run LNS PR a fixed number of times \(r\) for each instance, and each run has \(p\) LNS iterations. In other words, we run LNS PR every \(\frac{T_{max}}{r}\) timesteps, rather than on every timestep where a malfunction occurs. \(r\) and \(p\) are two integer parameters we configure for submissions. In this manner, we can better balance the trade-off between solution quality and problem-solving speed, and the influence of all malfunctions happening before each run is addressed. By setting \(r=20\) and \(p=20\) we increase the score to 125.175. In comparison, if we trigger the standard PR with the same frequency, we get a score of 124.911. ## Conclusion The Flatland 3 challenge provides a chance to tackle a (simplified form of a) real-world problem. The challenge result shows that MAPF-based approaches remain far ahead of reinforcement learning for this problem. Our software efficiently plans and optimises paths for hundreds of agents in seconds, while satisfying the speed and time window constraints, and delivers high-quality plan execution under uncertainty. On the official leaderboard, our winning solution reached a score of \(135.47\) and solved \(145\) instances. The second team on the leaderboard reached a score of \(132.470\); we reached the same score in \(1\) hour \(50\) minutes (illustrating how a small difference in score actually represents a large change). The best reinforcement learning approach ended with a score of \(27.868\), which we reached in only \(3\) minutes. Interestingly, our score jumped to \(140.99\), solving all \(150\) instances, after the organizers re-ran our solution on a faster computer after the competition finished. We then beat the second team's score (on the same computer) in \(1\) hour and \(32\) minutes.
2307.11200
Photon-assisted Landau Zener transitions in a tunable driven Rabi dimer coupled to a micromechanical resonator
Employing the multiple Davydov D$_2$ Ansatz with the time-dependent variational principle, we have investigated photon-assisted Landau-Zener (LZ) transitions and qubit manipulation in a hybrid quantum electrodynamics device. Modelled as a Rabi dimer, the device comprises two interacting transmission-line resonators, each coupled to a qubit. The qubits, driven by independent harmonic fields, are further modulated by a micromechanical resonator mimicked by a phonon mode. The impacts of two independent driving fields on the qubit dynamics are carefully examined. The energy diagram of the system and the photon number mobilization on the resonators are analyzed to explain the behaviour of the LZ transitions and qubit dynamics while taking into account the influence of the single phonon mode. Results show that low phonon frequencies can alter the qubit dynamics, particularly in the absence of the driving fields, and a strong phonon coupling strength can significantly perturb the qubit dynamics thanks to a high influx of phonon energy. Notably, only the photon frequency affects the oscillation frequency of qubit polarization. This study unveils the imperative roles that photons and phonons play in the Rabi dimer model.
Daniel Melvin, Fulu Zheng, Kewei Sun, Zhengjie Tan, Yang Zhao
2023-07-20T19:24:39Z
http://arxiv.org/abs/2307.11200v1
Photon-assisted Landau Zener transitions in a tunable driven Rabi dimer coupled to a micromechanical resonator ###### Abstract Employing the multiple Davydov D\({}_{2}\) Ansatz with the time-dependent variational principle, we have investigated photon-assisted Landau-Zener (LZ) transitions and qubit manipulation in a hybrid quantum electrodynamics device. Modelled as a Rabi dimer, the device comprises two interacting transmission-line resonators, each coupled to a qubit. The qubits, driven by independent harmonic fields, are further modulated by a micromechanical resonator mimicked by a phonon mode. The impacts of two independent driving fields on the qubit dynamics are carefully examined. The energy diagram of the system and the photon number mobilization on the resonators are analyzed to explain the behaviour of the LZ transitions and qubit dynamics while taking into account the influence of the single phonon mode. Results show that low phonon frequencies can alter the qubit dynamics, particularly in the absence of the driving fields, and a strong phonon coupling strength can significantly perturb the qubit dynamics thanks to a high influx of phonon energy. Notably, only the photon frequency affects the oscillation frequency of qubit polarization. This study unveils the imperative roles that photons and phonons play in the Rabi dimer model. ## I Introduction The Landau-Zener-Stuckelberg-Majorana model [1; 2; 3; 4], also known as the Landau-Zener (LZ) model, describes a two-level system that is driven externally and has a time-dependent energy gap between its two diabatic states. As the energy separation switches signs, the diabatic states, which are the Hamiltonian eigenstates without tunneling, experience a level crossing. Transitions between the diabatic states are denoted as the LZ transitions. Conversely, the adiabatic states, which are the Hamiltonian eigenstates with tunneling, face an avoided crossing [5]. Characterizing both diabatic and adiabatic transitions, the LZ model has been widely adopted to investigate fundamental physical problems in various fields, such as atomic and molecular physics [6], condensed matter physics [7], quantum information [8], and quantum simulation [9]. The LZ model can be formulated within the framework of circuit quantum electrodynamics (QED) systems, where two-level objects are typically coupled to resonators [10; 11]. Representing advanced QED apparatuses, hybrid circuit QED devices [12; 13; 14; 15; 16] are among the most promising candidates for realizing quantum information processing [17; 18] and quantum computation [19; 20; 21]. Investigating the LZ dynamics on such hybrid circuit QED platforms not only benefits fundamental studies of light-matter interactions, but also facilitates developments in quantum computation. As an advancement over single-qubit QED models, a hybrid QED system containing two coupled resonators, each connected to one qubit, could be an ideal model to unveil underlying physical mechanisms from both experimental and theoretical perspectives. The exceptional capability of hybrid QED models to characterize many-body quantum dynamics [22; 23; 16; 24] has attracted great interest [25; 26; 27; 28; 29; 30]. Such a minimalistic QED lattice has been fabricated using transmission-line resonators and transmon qubits [31]. Applying the rotating wave approximation (RWA) to the qubit-resonator interaction, one can use a pair of the Jaynes-Cummings Hamiltonian [32] to model this system [33; 34; 35].
Nevertheless, the RWA may be invalid in the regime of ultra-strong qubit-resonator coupling [36; 37]. In contrast, the counter-rotating terms are considered in the Rabi model [38; 39]. Hwang _et al._ constructed a Rabi dimer model and studied the phase transition of photons in two coupled resonators [40]. The combined effects of qubit-photon coupling and photon tunnelling rate on the photon dynamics were reported. Recently, with the intention to mimic environmental modulations on the Rabi dimer, we have proposed to couple the qubits to micromechanical resonators and investigated the photon-qubit-phonon dynamics by explicitly treating all the degrees of freedom (DOFs) in the hybrid system [41; 42; 43]. The LZ transitions in such systems have also been studied by applying an external harmonic driving field to one of the qubits [41]. Nonetheless, the dependence of the qubit-photon dynamics on two independently controlled driving fields and phonon frequencies remains obscure. In this study, our objectives are to understand the impacts of driving field amplitude and phase on the dynamics of LZ transitions in a Rabi dimer and to elucidate the impacts of a common phonon mode on the qubit and the photon dynamics. The state of the hybrid system is expressed with the Davydov D\({}_{2}\) _Ansatz_ and derived using the time-dependent variational principle. By tuning the external driving field amplitudes and phases, the left and right qubit dynamics show distinct patterns of the LZ transitions. Additionally, the presence of the photon tunnelling allows the photons to hop between the left and right qubits. Thereafter, a low-frequency phonon mode and qubit-phonon coupling are introduced to emulate environmental effects in the Rabi dimer system. The remainder of this paper is structured as follows. In Sec. II we present our methodology, including the system Hamiltonian, the multi-D\({}_{2}\) _Ansatz_, and the time-dependent variational principle. In Sec. III, results and discussions will be given. Finally, conclusions are drawn in Sec. IV. ## II Methodology ### Hamiltonian of the hybrid QED device As illustrated in Fig. 1, the hybrid circuit QED system explored in the current work comprises two coupled transmission-line resonators, each interacting with a qubit, and the qubits are modulated via external driving fields and coupled to a micromechanical resonator. The two transmission-line resonators coupled with qubits are modelled by a Rabi dimer Hamiltonian (\(\hbar=1\)) \[H_{\mathrm{RD}}=H_{\mathrm{L}}^{\mathrm{Rabi}}+H_{\mathrm{R}}^{\mathrm{Rabi}}-J(a_{\mathrm{L}}^{\dagger}a_{\mathrm{R}}+a_{\mathrm{R}}^{\dagger}a_{\mathrm{L}}), \tag{1}\] where \(H_{\mathrm{L/R}}^{\mathrm{Rabi}}\) is a Rabi Hamiltonian describing the left (L) or right (R) resonator coupled to a qubit, and \(J\) refers to the tunnelling rate for photons hopping between the two resonators. Specifically, the Rabi Hamiltonian [44; 45; 46; 47] \[H_{i=\mathrm{L,R}}^{\mathrm{Rabi}}=\frac{F_{i}}{2}\cos(\Omega_{i}t+\Phi_{i})\sigma_{z}^{i}+\omega_{i}a_{i}^{\dagger}a_{i}-g_{i}(a_{i}^{\dagger}+a_{i})\sigma_{x}^{i}, \tag{2}\] presents a driven qubit coupled to a photon mode at frequency \(\omega_{i}\) with a strength of \(g_{i}\) in the \(i\)th (\(i=\mathrm{L,R}\)) resonator. The external harmonic driving field imposed on the \(i\)th qubit is characterized by an amplitude \(F_{i}\), a frequency \(\Omega_{i}\) and an initial phase \(\Phi_{i}\).
Here \(\sigma_{x}^{i}\) and \(\sigma_{z}^{i}\) are Pauli matrices and \(a_{i}\) (\(a_{i}^{\dagger}\)) is the annihilation (creation) operator of the \(i\)th photon mode. Throughout this work, it is assumed that the two photon modes have the same frequencies \(\omega_{\mathrm{L}}=\omega_{\mathrm{R}}=\omega_{r}\), and the qubit-photon coupling strengths in the two resonators are also the same, \(g_{\mathrm{L}}=g_{\mathrm{R}}=g\). The micromechanical resonator is modelled by a single phonon mode \[H_{\mathrm{ph}}=\omega_{\mathrm{ph}}b^{\dagger}b \tag{3}\] with a frequency of \(\omega_{\mathrm{ph}}\) and a creation (annihilation) operator \(b^{\dagger}\) (\(b\)). The interaction between the micromechanical resonator and the two qubits is then expressed as \[H_{\mathrm{ph-q}}=\alpha(b^{\dagger}+b)(\sigma_{z}^{\mathrm{L}}+\sigma_{z}^{\mathrm{R}}), \tag{4}\] where \(\alpha\) stands for the qubit-phonon coupling strength. Combining the Hamiltonians for the Rabi dimer, the phonon mode, and the qubit-phonon interaction yields the total Hamiltonian for the hybrid system \[H=H_{\mathrm{RD}}+H_{\mathrm{ph}}+H_{\mathrm{q-ph}}. \tag{5}\] Considering the low working temperatures of the modelled QED device, the micromechanical resonator in the system is thermally inactive and thus the temperature is set to zero in this work. Nevertheless, we still would like to comment that quantum dynamics at finite temperatures is also readily accessible within the undermentioned framework via the integration of methodologies such as the Monte Carlo importance sampling method [48], the thermal field method [49] and the method of displaced number states [50; 51]. ### The multiple Davydov D\({}_{2}\) _Ansatz_ To unveil the role of the phonon mode and driving fields in manipulating the photon and qubit states, we need a method which can explicitly depict all the DOFs for the qubits and the photon and phonon modes. Density matrix-based approaches are, therefore, inadequate for this purpose, as only the dynamics of the electronic DOFs are resolved after tracing out the bosonic DOFs. In contrast, the multiple Davydov D\({}_{2}\) (multi-D\({}_{2}\)) _Ansatz_ combined with the time-dependent variational principle treats all the DOFs on an equal footing and can capture the wave function propagation for all the DOFs [52; 53]. Figure 1: A sketch of the hybrid QED system under study. Photons can hop between two transmission-line resonators with a tunneling rate of \(J\). Driven by an external field, the left (right) qubit is coupled to the photon mode in the left (right) resonator with a coupling strength \(g\). Both left and right qubits are coupled to a common phonon mode with a coupling strength \(\alpha\). The multi-D\({}_{2}\) _Ansatz_ has shown outstanding performance in delivering an exact solution to the Schrodinger equation even for complex many-body problems when a high multiplicity is used [54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67]. Moreover, the approach has been successfully employed in simulating the quantum dynamics in similar hybrid QED systems with balanced numerical accuracy and efficiency.
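Before turning to the variational _Ansatz_, here is a minimal numerical sketch of how the total Hamiltonian (5) could be assembled, assuming the QuTiP library; the Fock-space truncations, parameter values, and the subsystem ordering are illustrative choices, not the paper's implementation.

```python
# Sketch (assuming QuTiP) of the driven Rabi dimer coupled to one phonon mode.
# Ordering of the composite space: qubit_L, qubit_R, photon_L, photon_R, phonon.
import numpy as np
from qutip import destroy, qeye, sigmax, sigmaz, tensor

Nph, Nb = 30, 10                      # photon / phonon truncations (assumed)
wr, wph, g, J, alpha = 10.0, 0.1, 0.3, 0.01, 0.02   # in units of omega_0

def op(which, A):
    """Embed a single-subsystem operator A at position `which`."""
    parts = [qeye(2), qeye(2), qeye(Nph), qeye(Nph), qeye(Nb)]
    parts[which] = A
    return tensor(parts)

szL, szR = op(0, sigmaz()), op(1, sigmaz())
sxL, sxR = op(0, sigmax()), op(1, sigmax())
aL, aR, b = op(2, destroy(Nph)), op(3, destroy(Nph)), op(4, destroy(Nb))

H0 = (wr * (aL.dag()*aL + aR.dag()*aR) + wph * b.dag()*b
      - g * (aL.dag() + aL) * sxL - g * (aR.dag() + aR) * sxR
      - J * (aL.dag()*aR + aR.dag()*aL)
      + alpha * (b.dag() + b) * (szL + szR))

# The harmonic drive of Eq. (2) enters as a time-dependent coefficient,
# e.g. for the left qubit in QuTiP's list format usable with sesolve:
FL, OmL, PhiL = 20.0, 0.05, 0.0
H = [H0, [szL / 2, lambda t, args: FL * np.cos(OmL * t + PhiL)]]
```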
In this work, the multi-D\({}_{2}\) _Ansatz_ \[|\mathrm{D}_{2}^{M}(t)\rangle = \sum_{n=1}^{M}\Big{[}A_{n}(t)|\uparrow\uparrow\rangle+B_{n}(t)|\uparrow\downarrow\rangle+C_{n}(t)|\downarrow\uparrow\rangle \tag{6}\] \[\qquad+D_{n}(t)|\downarrow\downarrow\rangle\Big{]}\bigotimes|\mu_{n}\rangle_{\mathrm{L}}|\nu_{n}\rangle_{\mathrm{R}}|\eta_{n}\rangle_{\mathrm{ph}},\] with a multiplicity \(M\) is adopted to study the coupled photon-qubit-phonon dynamics in the QED device. Here \(|\uparrow\downarrow\rangle=|\uparrow\rangle_{\mathrm{L}}\otimes|\downarrow\rangle_{\mathrm{R}}\) with \(\uparrow(\downarrow)\) representing the up (down) state of the qubits, and \(|\mu_{n}\rangle_{\mathrm{L}}\) (\(|\nu_{n}\rangle_{\mathrm{R}}\)) and \(|\eta_{n}\rangle_{\mathrm{ph}}\) are the coherent states for the left (right) photon mode and the phonon mode, respectively. The coherent states are \[|\mu_{n}\rangle_{\mathrm{L}} = \exp\Big{[}\mu_{n}(t)a_{\mathrm{L}}^{\dagger}-\mu_{n}^{*}(t)a_{\mathrm{L}}\Big{]}\,|0\rangle_{\mathrm{L}}, \tag{7}\] \[|\nu_{n}\rangle_{\mathrm{R}} = \exp\Big{[}\nu_{n}(t)a_{\mathrm{R}}^{\dagger}-\nu_{n}^{*}(t)a_{\mathrm{R}}\Big{]}\,|0\rangle_{\mathrm{R}}, \tag{8}\] \[|\eta_{n}\rangle_{\mathrm{ph}} = \exp\big{[}\eta_{n}(t)b^{\dagger}-\eta_{n}^{*}(t)b\big{]}\,|0\rangle_{\mathrm{ph}}, \tag{9}\] where \(|0\rangle_{\mathrm{L}(\mathrm{R})}\) and \(|0\rangle_{\mathrm{ph}}\) are the vacuum states of the left (right) photon mode and the phonon mode, respectively. The variational parameters \(A_{n}(t)\), \(B_{n}(t)\), \(C_{n}(t)\), and \(D_{n}(t)\) represent the probability amplitudes of the corresponding qubit states. In addition, \(\mu_{n}\) (\(\nu_{n}\)) and \(\eta_{n}\) are the displacements of the left (right) photon mode and the phonon mode, respectively. These time-dependent variational parameters can be evaluated via the time-dependent variational principle, producing the wave function propagating along time. ### The time-dependent variational principle The equations of motion for all the variational parameters are derived from \[\frac{d}{dt}\bigg{(}\frac{\partial\mathscr{L}}{\partial\dot{\alpha}_{n}^{*}}\bigg{)}-\frac{\partial\mathscr{L}}{\partial\alpha_{n}^{*}}=0, \tag{10}\] where \(\alpha_{n}\) represents one of the aforementioned variational parameters, and \(\mathscr{L}\) is the Lagrangian \[\mathscr{L}=\frac{i}{2}\langle\mathrm{D}_{2}^{M}(t)|\frac{\overrightarrow{\partial}}{\partial t}-\frac{\overleftarrow{\partial}}{\partial t}|\mathrm{D}_{2}^{M}(t)\rangle-\langle\mathrm{D}_{2}^{M}(t)|H|\mathrm{D}_{2}^{M}(t)\rangle. \tag{11}\] The derivation details can be found in Appendix A. ### Physical observables of interest As stated above, by combining the time-dependent variational principle with the multi-D\({}_{2}\) _Ansatz_, we can obtain the accurate time-dependent wave function of the entire system, which makes it straightforward to evaluate all the physical observables that characterize the coupled photon-qubit-phonon dynamics.
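As a concrete illustration of the bookkeeping behind the observables listed next, the following sketch evaluates the coherent-state overlaps (the Debye-Waller factor \(S_{ln}\) defined below) and the left photon number of Eq. (12); the array contents are hypothetical, one entry per coherent-state branch \(n\).

```python
# NumPy sketch of the Debye-Waller bookkeeping used in Eqs. (12)-(18).
import numpy as np

def coherent_overlap(f_l, f_n):
    """<f_l|f_n> = exp(f_l* f_n - |f_l|^2/2 - |f_n|^2/2) for one bosonic mode."""
    return np.exp(np.conj(f_l)*f_n - 0.5*abs(f_l)**2 - 0.5*abs(f_n)**2)

def photon_number_left(A, B, C, D, mu, nu, eta):
    """Evaluate N_L(t) of Eq. (12) from the variational parameters."""
    M, N = len(mu), 0.0
    for l in range(M):
        for n in range(M):
            S_ln = (coherent_overlap(mu[l], mu[n])      # left photon mode
                    * coherent_overlap(nu[l], nu[n])    # right photon mode
                    * coherent_overlap(eta[l], eta[n])) # phonon mode
            amp = (np.conj(A[l])*A[n] + np.conj(B[l])*B[n]
                   + np.conj(C[l])*C[n] + np.conj(D[l])*D[n])
            N += amp * np.conj(mu[l]) * mu[n] * S_ln
    return N.real

# Initial condition of the paper: 20 photons in the left resonator, M = 1.
print(photon_number_left(A=[0], B=[0], C=[0], D=[1],
                         mu=[np.sqrt(20)], nu=[0], eta=[0]))   # 20.0
```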
The photon dynamics is presented by the time evolution of the photon populations in the two resonators \[N_{\mathrm{L}}(t) = \langle\mathrm{D}_{2}^{M}(t)|a_{\mathrm{L}}^{\dagger}a_{\mathrm{L}}|\mathrm{D}_{2}^{M}(t)\rangle \tag{12}\] \[= \sum_{l,n}^{M}\Big{[}A_{l}^{*}(t)A_{n}(t)+B_{l}^{*}(t)B_{n}(t)+C_{l}^{*}(t)C_{n}(t)+D_{l}^{*}(t)D_{n}(t)\Big{]}\mu_{l}^{*}(t)\mu_{n}(t)S_{ln}(t),\] \[N_{\mathrm{R}}(t) = \langle\mathrm{D}_{2}^{M}(t)|a_{\mathrm{R}}^{\dagger}a_{\mathrm{R}}|\mathrm{D}_{2}^{M}(t)\rangle \tag{13}\] \[= \sum_{l,n}^{M}\Big{[}A_{l}^{*}(t)A_{n}(t)+B_{l}^{*}(t)B_{n}(t)+C_{l}^{*}(t)C_{n}(t)+D_{l}^{*}(t)D_{n}(t)\Big{]}\nu_{l}^{*}(t)\nu_{n}(t)S_{ln}(t),\] where \(S_{ln}(t)\) is the Debye-Waller factor \[S_{ln} = \exp\left[\mu_{l}^{*}(t)\mu_{n}(t)-\frac{1}{2}|\mu_{l}(t)|^{2}-\frac{1}{2}|\mu_{n}(t)|^{2}\right]\cdot\exp\left[\nu_{l}^{*}(t)\nu_{n}(t)-\frac{1}{2}|\nu_{l}(t)|^{2}-\frac{1}{2}|\nu_{n}(t)|^{2}\right]\cdot\exp\left[\eta_{l}^{*}(t)\eta_{n}(t)-\frac{1}{2}|\eta_{l}(t)|^{2}-\frac{1}{2}|\eta_{n}(t)|^{2}\right].\] With \(N_{\mathrm{L}}(t)\) and \(N_{\mathrm{R}}(t)\), the time evolution of the total photon number \(N(t)=N_{\mathrm{L}}(t)+N_{\mathrm{R}}(t)\) and the photon imbalance \(Z(t)=N_{\mathrm{L}}(t)-N_{\mathrm{R}}(t)\) can be directly obtained. In addition to the photon dynamics, the time evolution of the qubit states is recorded during the simulations by measuring the time evolution of the qubit polarizations \[\langle\sigma_{z}^{\mathrm{L}}(t)\rangle = \langle\mathrm{D}_{2}^{M}(t)|\sigma_{z}^{\mathrm{L}}|\mathrm{D}_{2}^{M}(t)\rangle \tag{14}\] \[= \sum_{l,n}^{M}\Big{[}A_{l}^{*}(t)A_{n}(t)+B_{l}^{*}(t)B_{n}(t)-C_{l}^{*}(t)C_{n}(t)-D_{l}^{*}(t)D_{n}(t)\Big{]}S_{ln}(t),\] \[\langle\sigma_{z}^{\mathrm{R}}(t)\rangle = \langle\mathrm{D}_{2}^{M}(t)|\sigma_{z}^{\mathrm{R}}|\mathrm{D}_{2}^{M}(t)\rangle \tag{15}\] \[= \sum_{l,n}^{M}\Big{[}A_{l}^{*}(t)A_{n}(t)-B_{l}^{*}(t)B_{n}(t)+C_{l}^{*}(t)C_{n}(t)-D_{l}^{*}(t)D_{n}(t)\Big{]}S_{ln}(t).\] The LZ transition probabilities, quantifying the qubit flipping probabilities from the down states to the up states, are expressed as \[P_{\mathrm{LZ}}^{\mathrm{L}}(t) = |\langle\uparrow_{\mathrm{L}}|\mathrm{D}_{2}^{M}(t)\rangle|^{2} \tag{16}\] \[= \sum_{l,n}^{M}\Big{[}A_{l}^{*}(t)A_{n}(t)+B_{l}^{*}(t)B_{n}(t)\Big{]}S_{ln}(t),\] \[P_{\mathrm{LZ}}^{\mathrm{R}}(t) = |\langle\uparrow_{\mathrm{R}}|\mathrm{D}_{2}^{M}(t)\rangle|^{2} \tag{17}\] \[= \sum_{l,n}^{M}\Big{[}A_{l}^{*}(t)A_{n}(t)+C_{l}^{*}(t)C_{n}(t)\Big{]}S_{ln}(t).\] Similar to the photon number, the phonon population can be monitored via \[N_{\mathrm{ph}}(t) = \langle\mathrm{D}_{2}^{M}(t)|b^{\dagger}b|\mathrm{D}_{2}^{M}(t)\rangle \tag{18}\] \[= \sum_{l,n}^{M}\Big{[}A_{l}^{*}(t)A_{n}(t)+B_{l}^{*}(t)B_{n}(t)+C_{l}^{*}(t)C_{n}(t)+D_{l}^{*}(t)D_{n}(t)\Big{]}\eta_{l}^{*}(t)\eta_{n}(t)S_{ln}(t),\] and the phonon energy \(E_{\mathrm{ph}}(t)=\omega_{\mathrm{ph}}N_{\mathrm{ph}}(t)\) is obtained in turn. ### Parameter configurations and initial conditions The photon-qubit coupling strength \(g\) and the photon hopping rate \(J\) significantly impact the qubit dynamics in our QED system. Hwang _et al._ found a critical photon tunnelling rate \(J_{c}=0.03\ \omega_{0}\) in a bare resonant Rabi dimer (the qubit bias is resonant with the photon frequency \(\omega_{r}\)) [40], above which the photons are delocalized over the two resonators regardless of the photon-qubit coupling strength.
Here \(\omega_{0}\) stands for the energy unit in this study. An ultra-strong photon-qubit coupling strength of \(g=0.3\ \omega_{0}\) is adopted in the current work. In order to unveil the effects of the photon population on the driven qubits, we consider two photon-tunnelling rates, \(J=0.01\ \omega_{0}\) and \(J=0.075\ \omega_{0}\), which lead to photon localization and delocalization, respectively, in a bare resonant Rabi dimer [40; 43]. The left and right photon modes have the same frequency \(\omega_{\mathrm{L}}=\omega_{\mathrm{R}}=\omega_{r}=10\ \omega_{0}\). The high photon frequency makes the relaxation time of each LZ transition shorter than the interval between two neighbouring LZ transitions [41]. Initially, twenty photons \(N_{\mathrm{L}}(0)=20\) are pumped into the left resonator, while the right resonator is in the vacuum state, with \(\mu_{1}(t=0)=\sqrt{20}\) and \(\mu_{n\neq 1}(t=0)=\nu_{n}(t=0)=0\). Both the left and right qubits are prepared in the down state with \(A_{n}(t=0)=B_{n}(t=0)=C_{n}(t=0)=D_{n\neq 1}(t=0)=0\) and \(D_{1}(t=0)=1\). The single phonon mode is initially in the vacuum state, \(\eta_{n}(t=0)=0\). Based on extensive convergence tests, a multiplicity of \(M=6\) is adopted in this study, which provides converged results with excellent efficiency. ## III Results and discussion Our study focuses on manipulating qubit states via adjusting the external driving fields and the coupling between the qubits and a micromechanical resonator (\(\alpha\)). In Sec. III.1, we examine the impact of the driving fields on the qubit-photon dynamics in a Rabi dimer. In Sec. III.2, the qubit-phonon coupling is activated, and its effect on the photon-qubit-phonon dynamics is examined. ### Manipulating LZ transitions in a Rabi dimer through tuning the external driving field In Hamiltonian (2), two independent harmonic driving fields are applied to the two qubits. This allows for direct control of the qubit states [68]. We consider the two driving fields having the same frequency (\(\Omega_{\mathrm{L}}=\Omega_{\mathrm{R}}=0.05\ \omega_{0}\)), but with varying amplitudes (\(F_{i}\)) and initial phases (\(\Phi_{i}\)). The influence of these driving fields on the dynamics of the qubit-photon system is analyzed in the subsequent two subsections. #### iii.1.1 Impact of driving field amplitude on LZ transitions In order to highlight the effects of the driving field amplitude on the coupled qubit-photon dynamics in the Rabi dimer, the two fields are first configured with an identical initial phase \(\Phi_{\mathrm{L}}=\Phi_{\mathrm{R}}=0\), but with varying amplitudes. In a bare resonant Rabi dimer (\(\alpha=0\)) given by Hamiltonian (1) with a photon tunnelling rate \(J=0.01\ \omega_{0}\) and a qubit-photon coupling strength \(g=0.3\ \omega_{0}\), the photons initially pumped into the left resonator will be localized in that resonator in the absence of the driving fields. Once the external fields are imposed on the qubits, the photon distribution is dramatically affected, as shown in Fig. 2, where the driving field amplitudes \(F_{\mathrm{L}}\in\{10\ \omega_{0},20\ \omega_{0}\}\) and \(F_{\mathrm{R}}\in\{10\ \omega_{0},15\ \omega_{0},20\ \omega_{0}\}\) are configured to show their effects on the qubit-photon dynamics. The results for \(F_{\mathrm{L}}=10\ \omega_{0}\) and \(20\ \omega_{0}\) are presented in the left (panels a, c, and e) and right (panels b, d, and f) columns of Fig. 2, respectively. The three panels in each column contain the qubit-photon dynamics for different amplitudes of the right driving field.
## III Results and discussion

Our study focuses on manipulating qubit states via adjusting the external driving fields and the coupling between the qubits and a micromechanical resonator. In Sec. III.1, we examine the impact of the driving fields on the qubit-photon dynamics in a Rabi dimer. In Sec. III.2, the qubit-phonon coupling is activated, and its effect on the photon-qubit-phonon dynamics is examined.

### Manipulating LZ transitions in a Rabi dimer through tuning the external driving field

In Hamiltonian (2), two independent harmonic driving fields are applied to the two qubits. This allows for direct control of the qubit states [68]. We consider two driving fields with the same frequency (\(\Omega_{\rm L}=\Omega_{\rm R}=0.05\ \omega_{0}\)) but with varying amplitudes (\(F_{i}\)) and initial phases (\(\Phi_{i}\)). The influence of these driving fields on the dynamics of the qubit-photon system is analyzed in the subsequent two subsections.

#### III.1.1 Impact of driving field amplitude on LZ transitions

In order to highlight the effects of the driving field amplitude on the coupled qubit-photon dynamics in the Rabi dimer, the two fields are first configured with an identical initial phase \(\Phi_{\rm L}=\Phi_{\rm R}=0\) but with varying amplitudes. In a bare resonant Rabi dimer (\(\lambda=0\)) given by Hamiltonian (1), with a photon tunnelling rate \(J=0.01\ \omega_{0}\) and a qubit-photon coupling strength \(g=0.3\ \omega_{0}\), the photons initially pumped into the left resonator remain localized there in the absence of the driving fields. Once the external fields are imposed on the qubits, the photon distribution is dramatically affected, as shown in Fig. 2, where driving field amplitudes of \(F_{\rm L}\in\{10\ \omega_{0},20\ \omega_{0}\}\) and \(F_{\rm R}\in\{10\ \omega_{0},15\ \omega_{0},20\ \omega_{0}\}\) are configured to show their effects on the qubit-photon dynamics. The results for \(F_{\rm L}=10\ \omega_{0}\) and \(20\ \omega_{0}\) are presented in the left (panels a, c, and e) and right (panels b, d, and f) columns of Fig. 2, respectively. The three panels in each column contain the qubit-photon dynamics for different amplitudes of the right driving field.

One can immediately find that the dynamics of the left (right) qubit are dominated by the left (right) driving field by comparing the results in each row (column) of Fig. 2. For instance, in Fig. 2 (a, c and e), the left driving field amplitude \(F_{\rm L}=10\ \omega_{0}\) leads to similar LZ transition patterns in the left qubit. There is a damped oscillation in \(P_{\rm LZ}^{\rm L}\) shortly after \(\omega_{0}t=0\), as the hybrid states \(|\downarrow,n\rangle\) and \(|\uparrow,n-1\rangle\) intersect right at the start (\(\omega_{0}t\approx 0\)) with \(F_{\rm L}=10\ \omega_{0}\) (see the energy diagram in Fig. 2(g)). Here, \(|\downarrow(\uparrow)\rangle\) is the left qubit down (up) state, and \(|n\rangle\) is the number state of the left photon mode. Both the photons initially localized in the left resonator and the vanishing energy gap between the above states at \(\omega_{0}t\approx 0\) facilitate the flipping of the left qubit. Then, as the above hybrid states separate in energy, \(P_{\rm LZ}^{\rm L}\) approaches 0.5 with weak oscillations. In comparison to the frequent flipping of the left qubit, the right qubit remains in its initial down state due to the low photon population in the right resonator at \(\omega_{0}t\approx 0\). With a driving amplitude of \(F_{\rm R}=10\ \omega_{0}\), there is no avoided crossing, as can be seen from the energy diagram in Fig. 2(g). Therefore, there are no sudden changes in the right qubit dynamics (see Fig. 2 (a) and (b)). Instead, the right qubit exhibits a gradual population transfer to its up state as two neighboring adiabatic states approach each other in energy. As the two states separate in energy, the right qubit repopulates the down state, resulting in the pulses in \(P_{\rm LZ}^{\rm R}\) shown in Fig. 2 (a) and (b). During the intersection between two adjacent adiabatic states, \(P_{\rm LZ}^{\rm R}\) is at the peak of the pulse, and \(P_{\rm LZ}^{\rm L}\) also shows pronounced oscillations, as highlighted by the black circles in Fig. 2. In contrast to the dynamics for \(F_{\rm L}=10\ \omega_{0}\), the left qubit driven by a field of \(F_{\rm L}=20\ \omega_{0}\) is localized in its down state from the beginning until it meets the first avoided crossing point, as illustrated in Fig. 2 (b), (d) and (f). The detuning between \(|\downarrow,n\rangle\) and \(|\uparrow,n-1\rangle\) at \(\omega_{0}t\approx 0\) hinders the flipping of the left qubit. With the assistance of sufficient photons in the left resonator, the left qubit can realize complete population inversion between its up and down states at the first few avoided crossings, leading to the square-waved patterns of \(P_{\rm LZ}^{\rm L}\) in Fig. 2 (b), (d) and (f). The hybrid qubit-photon subsystem in the left resonator evolves adiabatically through these avoided crossings along paths of \(|\downarrow,n+2\rangle\rightarrow|\uparrow,n+1\rangle\rightarrow|\downarrow,n\rangle\rightarrow|\uparrow,n-1\rangle\cdots\). The energy diagram in Fig. 2 (h) provides a straightforward way to analyze such transition paths composed of diabatic qubit-photon states. It is noteworthy that the sudden flipping of the left qubit between its up and down states with \(F_{\rm L}=20\ \omega_{0}\) is also known as adiabatic rapid passage (ARP) [69], which is typically adopted to achieve efficient population transfer in driven quantum systems [70]. As the name suggests, a successful ARP only occurs when three conditions are met.
Firstly, the process has to be adiabatic, i.e., the field-induced sweep of the detuning must be sufficiently slow compared to the period of the resonant Rabi oscillation. Secondly, the sweep should be more rapid than the excited-state relaxation. This condition is always fulfilled in this section, as the system is operating at zero temperature. Thirdly, at the endpoints (both the starting and ending points), the field-induced detuning must be far from resonance with the energy gap between the adiabatic states of the quantum system. The first condition for ARP requires that \(|\upsilon|/\Delta\ll\Delta\) [69; 70]. Here the sweep rate \(\upsilon\) is estimated using the time derivative of the driving field, \(\upsilon=-\Omega_{i}F_{i}\sin(\Omega_{i}t+\Phi_{i})/2\), at the avoided crossings, and \(\Delta=2g\sqrt{n+1}\) is approximately the energy gap between the adiabatic states [71; 72; 73]. For the hybrid qubit-photon subsystem in the left resonator with \(F_{\rm L}=20\ \omega_{0}\) (see Fig. 2 (b), (d) and (f)), this condition is satisfied at all avoided crossings, with \(|\upsilon|=\frac{\sqrt{3}}{4}\omega_{0}^{2}\) and \(\Delta^{2}=0.36(n+1)\omega_{0}^{2}\), as states \(|\downarrow,n\rangle\) with \(n\geq 1\) will be populated initially. However, this adiabaticity criterion cannot be fulfilled in the right subsystem due to insufficient initial photons, even if the driving field has an amplitude of \(F_{\rm R}=20\ \omega_{0}\) (see Fig. 2 (e) and (f)). Instead of a successful ARP with 100% of the population transferred to the up state, the right qubit exhibits a LZ transition at the first avoided crossing, with \(\sim 40\%\) of its population adiabatically flowing to the up state, as illustrated in Fig. 2 (e) and (f). The third condition for ARP ensures that the system is localized in an adiabatic state before and after the ARP. As aforementioned and illustrated in the right column of Fig. 2, the initial large detuning between \(|\downarrow,n\rangle\) and \(|\uparrow,n-1\rangle\) induced by \(F_{\rm L}=20\ \omega_{0}\) confines the left qubit to its initial down state before the first avoided crossing. Therefore, multiple ARPs can be realized on the left qubit with all three conditions satisfied. Tuning the driving field amplitude to \(F_{\rm L}=10\ \omega_{0}\) leads to close energies for the above adiabatic states (see Fig. 2 (g)) and to left-qubit delocalization at \(\omega_{0}t\approx 0\), hindering ARP on the left qubit.
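To make the adiabaticity criterion concrete, the short sketch below (our own illustration, using the parameter values quoted above) evaluates \(|\upsilon|/\Delta^{2}\) for a few photon numbers; the criterion requires this ratio to be well below one, which fails for \(n=0\) but holds for \(n\geq 1\):

```python
import numpy as np

omega0 = 1.0
Omega, F_L, g = 0.05 * omega0, 20.0 * omega0, 0.3 * omega0

# Sweep rate at the avoided crossings; for F_L = 20 w0 the text quotes
# |v| = sqrt(3)/4 w0^2 (from v = -Omega * F * sin(Omega t + Phi) / 2).
v = np.sqrt(3.0) / 4.0 * omega0**2

for n in (0, 1, 5, 20):                  # photon number in the resonator
    gap2 = (2.0 * g) ** 2 * (n + 1)      # Delta^2 = 0.36 (n+1) w0^2
    print(f"n = {n:2d}:  |v|/Delta^2 = {v / gap2:.2f}")
```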
In addition to changing the driving field amplitudes, the initial detuning between the adiabatic states and the resulting ARP can be manipulated by tuning the initial phases of the driving fields, as demonstrated in the following section.

#### III.1.2 Effects of the initial driving field phase on LZ transitions

Besides the driving field amplitude, the initial phase of the driving field is another critical parameter that directly impacts the energy diagram of the qubit-photon coupling and the system dynamics. In this section, a photon tunneling rate \(J=0.075\ \omega_{0}\) is configured, which allows the photons to hop between the two resonators even with a strong qubit-photon coupling of \(g=0.3\ \omega_{0}\). The amplitudes and frequencies of the driving fields are set to \(F_{\rm L}=F_{\rm R}=20\ \omega_{0}\) and \(\Omega_{\rm L}=\Omega_{\rm R}=0.05\ \omega_{0}\), respectively. The photons delocalized over the two resonators therefore ensure that the adiabaticity criterion for ARP is fulfilled. Tuning the initial phases of the driving fields then allows control over the system dynamics, as presented in Fig. 3 with several examples. With \(\Phi_{\rm L}=-\Phi_{\rm R}=\pi/2\), the two fields lead to avoided crossings at the same times for the left and right subsystems. Hence, the two qubits exhibit similar dynamics containing a sequence of ARPs, as shown in Fig. 3 (a). Noticeably, there are some high-frequency oscillations right before or after the ARPs occur, as highlighted by the circles in Fig. 3 (a). These oscillations appear at different time points for the two qubits. Comparing the qubit dynamics with the photon number evolution, one can find that these oscillations for a qubit only arise when the photon number in the corresponding resonator arrives at a local maximum. This phenomenon results from the stronger effective tunneling as the photon population is increased (see Eq. (2)), leading to pronounced oscillations in \(P_{\rm LZ}\). The synchronization of the two qubits shown in Fig. 3 (a) can be readily removed by parameterizing the two fields with different initial phases. For instance, presented in Fig. 3 (b) is the situation with \(\Phi_{\rm L}=-\Phi_{\rm R}=\pi/6\). Avoided crossings of the left subsystem do not occur at the same times as those of the right one. Therefore, the occurrence of the ARPs on the right qubit is delayed compared to the left qubit. It can be clearly seen from Fig. 3 (a) and (b) that diverse ARPs on the qubits can be achieved by adjusting the driving field phases. It can also happen that a specific initial phase configuration leads to neighboring adiabatic states having a small energy gap at \(\omega_{0}t=0\). Then the qubit will be immediately depolarized (\(P_{\rm LZ}\to 0.5\)) after several flips with the assistance of sufficient photons, such as the left qubit shown in Fig. 3 (c) with \(\Phi_{\rm L}=\Phi_{\rm R}=2\pi/3\). Therefore, no ARP will take place at any of the avoided crossings. The qubits then tend to change their states at the avoided crossings with the help of photons. This scenario is predominant in the dynamics of the right qubit. At the first avoided crossing, a LZ transition occurs with \(\sim 30\%\) of the population transferred to its up state due to the small photon number in the right resonator. As the second avoided crossing arrives, almost all the photons are accumulated in the right resonator, which completely pumps the right qubit to its up state. As discussed above, both the driving field amplitude and the initial phase greatly influence the coupled qubit-photon dynamics. Photon-assisted LZ transitions take place at the avoided crossings. Specifically, a sequence of ARPs can occur if the adiabaticity criterion is satisfied and the initial gap between the adjacent states \(|\downarrow,n\rangle\) and \(|\uparrow,n\pm 1\rangle\) is large. The state evolution paths of the hybrid qubit-photon system can be uncovered by analyzing the qubit-photon dynamics with the energy diagram.

### Combined field and phonon effects on the Rabi dimer

Beyond the driving field effects on the LZ transitions of the Rabi dimer, environmental modulations of the hybrid qubit-photon dynamics are further investigated in this subsection. As illustrated in Fig. 1, a phonon mode (\(H_{\rm ph}\)) is coupled to the qubits via the interaction Hamiltonian (\(H_{\rm q\text{-}ph}\)), producing the total Hamiltonian (\(H\)) in Eq. (5) for the composite system. In order to understand the phononic perturbations in the Rabi dimer system, prior knowledge of the coupled qubit-photon dynamics in the absence of the phonon mode is required.
In a resonant Rabi dimer system with \(g=0.3\ \omega_{0}\) and \(J=0.01\ \omega_{0}\), it has been discovered that the photons are trapped in the initial resonator, i.e., in the localized phase, due to the frequent qubit flipping [40; 43]. In contrast, high-frequency photons with \(\omega_{r}=10\ \omega_{0}\) will be delocalized over the two resonators, as qubit flipping rarely occurs [41]. In the presence of environmental modulation, Zheng _et al._ discovered that stronger qubit-phonon coupling causes \(P_{\rm LZ}\) to approach \(0.5\) in a shorter time [41]. It is also highlighted that the phonon mode population depends on its frequency and on the strength of its coupling to the qubits. The combined effects of the driving field and the phonon mode on the coupled phonon-qubit-photon dynamics are addressed here by tuning the phonon frequency and the qubit-phonon coupling strength. A parameter configuration of \(F_{\rm R}=0\), \(F_{\rm L}=20\ \omega_{0}\), \(\Phi_{\rm L}=0\), \(\Omega_{\rm L}=0.05\ \omega_{0}\), \(g=0.3\ \omega_{0}\) and \(J=0.01\ \omega_{0}\) is adopted in the calculations to follow unless otherwise specified. A driving field is only applied to the left qubit in order to distinguish the field effects from those of the phonons on the LZ transitions. A low-frequency phonon mode in the frequency range \(0.001\ \omega_{0}\leq\omega_{\rm ph}\leq 0.5\ \omega_{0}\) is adopted to mimic the experimental micromechanical resonator coupled to the qubits. As illustrated in Fig. 4, three qubit-phonon coupling strengths, \(\alpha/\omega_{0}=0.1,0.2\) and \(0.4\), are used to highlight the phonon mode effects on the system dynamics. With a weak qubit-phonon coupling strength of \(\alpha=0.1\ \omega_{0}\), the phonon effects on the Rabi dimer dynamics are negligible, and the qubit-photon dynamics shown in the upper row of Fig. 4 is very similar to the case in the absence of the phonons [41]. The photons are delocalized over the two resonators, and the right qubit is suppressed in its down state due to the large detuning between the right qubit and the photons. As time evolves, the phonon mode is gradually populated with energy from the left qubit, which effectively modulates the bias of the right qubit via the qubit-phonon coupling. If the phonon frequency is lower than \(0.01\ \omega_{0}\), noticeable phonon effects on the right qubit dynamics can be seen after \(\omega_{0}t\approx 200\), as a phonon mode with a lower frequency is easier to populate. In this regime, the right qubit attempts to flip to its up state with the help of the photons in the right resonator, which in turn traps the photons in the right resonator. The phonon mode also modulates the bias of the left qubit, but such a modulation is weak compared to the driving field. Therefore, the phonon mode effects on the left qubit dynamics are negligible with a weak qubit-phonon coupling strength of \(\alpha=0.1\ \omega_{0}\). The phonon effects on the left qubit dynamics are more pronounced with \(\alpha=0.2\ \omega_{0}\), as shown in the left panel of the middle row of Fig. 4. The energy flow from the left qubit to the phonon mode is accelerated by increasing \(\alpha\). Therefore, even the earlier ARPs in the left qubit for \(\omega_{0}t<100\) can feel the phonon modulation. With this stronger qubit-phonon coupling, more significant phonon effects on the right qubit dynamics are also seen from the earlier rise of \(P_{\rm LZ}^{\rm R}\) compared to the situation for \(\alpha=0.1\ \omega_{0}\).
The right qubit can flip to its up state and stay there for quite a long time if the phonon frequency is lower than \(0.01\ \omega_{0}\). The photon delocalization is also affected by the phonon-modulated qubit dynamics [43]. Increasing the qubit-phonon coupling strength further to \(\alpha=0.4\ \omega_{0}\) also spotlights the impacts of the phonon mode on the coupled qubit-photon dynamics in the Rabi dimer, as illustrated in the lower row of Fig. 4. Compared to the results for weaker \(\alpha\) shown in the top two rows of Fig. 4, the four sets of ARPs of the left qubit in \(\omega_{0}t<100\) are destroyed, as the phonon energy increases tremendously with \(\alpha=0.4\ \omega_{0}\). The key to understanding the significance of the phonon frequency in relation to the qubit dynamics is to examine the population of the single phonon mode. A phonon mode with a low frequency facilitates the growth of the phonon population. However, it requires a long time to accumulate sufficient energy to affect the qubit dynamics. Interacting with both the left and right qubits, the phonon mode bridges the energy flow between the two qubits. The left qubit distributes the energy from the driving field to both the phonon and the left photon mode via diagonal and off-diagonal coupling, respectively. The energy exchange between the left qubit and the left photon mode accompanies the qubit flipping. This mechanism can be clearly seen by comparing the dynamics of the left qubit and the photons. For instance, with \(\alpha=0.1\ \omega_{0}\) (cf. the first row of Fig. 4), the first four sets of ARPs of the left qubit occur with sudden changes in the photon imbalance. In contrast, qubit polarization (localization in one state) stimulates the energy flow between the qubit and the phonon. This phenomenon is predominant when the qubit-phonon coupling is strong, such as in the region highlighted by the dashed circles in the bottom row of Fig. 4 with \(\alpha=0.4\ \omega_{0}\). It is noteworthy that the LZ transition probability \(P_{\rm LZ}\) exhibits persistent high-frequency oscillations with small amplitudes regardless of the presence or absence of the phonon mode. In order to gain deeper insight into these oscillations, a Fourier transformation is employed to unveil the frequencies of these oscillations in \(P_{\rm LZ}\). Calculations with various frequencies for the photon and phonon modes are performed with \(\omega_{r}/\omega_{0}\in\{9,10,11,12\}\) and \(\omega_{\rm ph}/\omega_{0}\in\{0.05,0.09\}\). The oscillation frequencies of \(P_{\rm LZ}\) from these calculations are plotted versus the photon frequency in Fig. 5. It is clear that, for a given photon frequency \(\omega_{r}\), \(P_{\rm LZ}^{\rm L}\) and \(P_{\rm LZ}^{\rm R}\) have the same oscillation frequency, which is independent of the phonon frequency \(\omega_{\rm ph}\). In addition, the oscillation frequency is linearly proportional to the photon frequency. This indicates that the oscillation frequency of \(P_{\rm LZ}\) depends solely on the frequency of the photon mode and is, precisely speaking, equal to \(\omega_{r}/\pi\).
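As a sanity check of this frequency-extraction procedure, the dominant oscillation frequency of a \(P_{\rm LZ}(t)\) trace can be read off from its Fourier spectrum. The sketch below (our own illustration, not the production analysis code) recovers \(\omega_{r}/\pi\) from a synthetic trace oscillating at angular frequency \(2\omega_{r}\):

```python
import numpy as np

def dominant_frequency(p_lz, dt):
    """Dominant oscillation frequency (cycles per unit time) of a P_LZ trace."""
    spec = np.abs(np.fft.rfft(p_lz - np.mean(p_lz)))   # drop the DC component
    freqs = np.fft.rfftfreq(len(p_lz), d=dt)
    return freqs[np.argmax(spec)]

# A trace oscillating at angular frequency 2*omega_r has ordinary frequency
# omega_r/pi, the value reported in the text.
omega_r, dt = 10.0, 1e-3
t = np.arange(0.0, 50.0, dt)
p_lz = 0.5 + 0.01 * np.cos(2.0 * omega_r * t)
print(dominant_frequency(p_lz, dt), omega_r / np.pi)   # both ~ 3.18
```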
Figure 5: Oscillation frequency of the LZ transition probability \(P_{\rm LZ}\) versus the photon frequency. Calculations were performed with two different phonon frequencies (\(\omega_{\rm ph}=0.05\ \omega_{0}\) and \(0.09\ \omega_{0}\)). The dashed line is a linear fit.

Figure 4: Time evolution of the LZ transition probability \(P_{\rm LZ}\) of the left (column 1) and right (column 2) qubits, the photon imbalance (column 3) and the phonon energy (column 4). A phonon mode with a frequency \(\omega_{\rm ph}\) in \(0.001\ \omega_{0}<\omega_{\rm ph}<0.5\ \omega_{0}\) is coupled to the qubits. The qubit-phonon coupling strength is set to \(\alpha/\omega_{0}=0.1,0.2\) and \(0.4\), with the results for each \(\alpha\) collected in an individual row of the figure. The driving field is only applied to the left qubit, with \(F_{\rm L}=20\ \omega_{0}\) and \(\Phi_{\rm L}=0\). The photon frequency is \(\omega_{r}=10\ \omega_{0}\) for all cases.

## IV Conclusions

Employing the multi-D\({}_{2}\) _Ansatz_ to describe the coupled qubit-photon dynamics in a Rabi dimer containing driven qubits, we explore the modulation of the photon-assisted LZ transitions by tuning the amplitudes and the initial phases of the driving fields. These two parameters of the driving fields determine the energy diagram of a coupled qubit-photon monomer, which inherently dominates the coupled qubit-photon dynamics. It is found that photon-assisted ARPs can be achieved once the driving fields are parameterized to fulfill certain criteria, i.e., the existence of avoided crossings (a large enough driving field amplitude), the adiabaticity of the process (a strong enough qubit-photon coupling or a sufficient number of photons), and a large initial detuning between the neighbouring adiabatic states with opposite qubit polarizations. Therefore, the time points at which ARPs occur and the time stride between two adjacent ARPs can be precisely engineered by tuning the driving field amplitudes and initial phases. Beyond the impacts of the driving field amplitudes and initial phases, the effects of a micromechanical resonator (modelled as a low-frequency phonon mode coupled to the two qubits) on the dynamics of the hybrid QED system are further unveiled within the framework of this study. Only the left qubit is driven by a driving field in this part, in order to highlight the phonon effects on the system dynamics. A phonon mode with a lower frequency can be easily populated. However, it needs a longer time to accumulate sufficient energy to affect the qubits. Increasing the qubit-phonon coupling strength \(\alpha\) accelerates the energy flow from the driving field through the left qubit to the phonon mode. Thus the phonon effects on the coupled qubit-photon dynamics emerge earlier with a stronger \(\alpha\), and can even destroy the photon-assisted ARPs of the left qubit. In contrast to the phonon, which tends to confine the qubits in one state, the photons are off-diagonally coupled to the qubits and apt to flip the qubits. Therefore, the photon effects on the qubit dynamics can be clearly noticed from the high-frequency oscillations of the LZ transition probability \(P_{\text{LZ}}\) of the qubits. The frequencies of these oscillations are found to depend solely on the photon frequency. It is expected that the knowledge gained in this work on controllable ARPs with driving fields and a phonon mode can benefit quantum state preparation and engineering in quantum information and quantum computing with QED apparatus.

###### Acknowledgements.

Support for this project from the Nanyang Technological University under the Undergraduate Research Experience on CAmpus (URECA) programme and from the Singapore Ministry of Education Academic Research Fund (Grant No. RG87/20) is gratefully acknowledged. We also thank Yuejun Shen for the helpful discussion.
## Appendix A The time-dependent variational approach

The variational principle results in the following equations of motion for the variational parameters,

\[i\sum_{n=1}^{M}\left[\dot{A}_{n}+A_{n}\Theta_{ln}\right]\bar{S}_{ln}=\sum_{n=1}^{M}\Big\{A_{n}\big[\mathcal{G}_{\rm L}+\mathcal{G}_{\rm R}+\Xi_{ln}+2\lambda(\eta_{l}^{*}+\eta_{n})\big]-g\big[C_{n}(\mu_{l}^{*}+\mu_{n})+B_{n}(\nu_{l}^{*}+\nu_{n})\big]\Big\}\bar{S}_{ln}, \tag{A1}\]

where \(\Theta_{ln}=\mu_{l}^{*}\dot{\mu}_{n}+\nu_{l}^{*}\dot{\nu}_{n}+\eta_{l}^{*}\dot{\eta}_{n}\), \(\bar{S}_{ln}=\exp\left[\mu_{l}^{*}\mu_{n}+\nu_{l}^{*}\nu_{n}+\eta_{l}^{*}\eta_{n}\right]\), \(\mathcal{G}_{\rm L}=F_{\rm L}\cos(\Omega_{\rm L}t+\Phi_{\rm L})/2\), \(\mathcal{G}_{\rm R}=F_{\rm R}\cos(\Omega_{\rm R}t+\Phi_{\rm R})/2\), and \(\Xi_{ln}=\omega_{\rm L}\mu_{l}^{*}\mu_{n}+\omega_{\rm R}\nu_{l}^{*}\nu_{n}-J\left(\mu_{l}^{*}\nu_{n}+\mu_{n}\nu_{l}^{*}\right)+\omega_{\rm ph}\eta_{l}^{*}\eta_{n}\). Similarly,

\[i\sum_{n=1}^{M}\left[\dot{B}_{n}+B_{n}\Theta_{ln}\right]\bar{S}_{ln}=\sum_{n=1}^{M}\Big\{B_{n}\big(\mathcal{G}_{\rm L}-\mathcal{G}_{\rm R}+\Xi_{ln}\big)-g\big[D_{n}(\mu_{l}^{*}+\mu_{n})+A_{n}(\nu_{l}^{*}+\nu_{n})\big]\Big\}\bar{S}_{ln}, \tag{A2}\]

\[i\sum_{n=1}^{M}\left[\dot{C}_{n}+C_{n}\Theta_{ln}\right]\bar{S}_{ln}=\sum_{n=1}^{M}\Big\{C_{n}\big(-\mathcal{G}_{\rm L}+\mathcal{G}_{\rm R}+\Xi_{ln}\big)-g\big[A_{n}(\mu_{l}^{*}+\mu_{n})+D_{n}(\nu_{l}^{*}+\nu_{n})\big]\Big\}\bar{S}_{ln}, \tag{A3}\]

\[i\sum_{n=1}^{M}\left[\dot{D}_{n}+D_{n}\Theta_{ln}\right]\bar{S}_{ln}=\sum_{n=1}^{M}\Big\{D_{n}\big[-\mathcal{G}_{\rm L}-\mathcal{G}_{\rm R}+\Xi_{ln}-2\lambda(\eta_{l}^{*}+\eta_{n})\big]-g\big[B_{n}(\mu_{l}^{*}+\mu_{n})+C_{n}(\nu_{l}^{*}+\nu_{n})\big]\Big\}\bar{S}_{ln}. \tag{A4}\]

For the bosonic displacements it is convenient to introduce the shorthands \(\mathcal{P}_{ln}=A_{l}^{*}A_{n}+B_{l}^{*}B_{n}+C_{l}^{*}C_{n}+D_{l}^{*}D_{n}\), \(\dot{\mathcal{P}}_{ln}=A_{l}^{*}\dot{A}_{n}+B_{l}^{*}\dot{B}_{n}+C_{l}^{*}\dot{C}_{n}+D_{l}^{*}\dot{D}_{n}\), \(\mathcal{Q}_{ln}^{\rm L}=A_{l}^{*}A_{n}+B_{l}^{*}B_{n}-C_{l}^{*}C_{n}-D_{l}^{*}D_{n}\), \(\mathcal{Q}_{ln}^{\rm R}=A_{l}^{*}A_{n}-B_{l}^{*}B_{n}+C_{l}^{*}C_{n}-D_{l}^{*}D_{n}\), \(\mathcal{X}_{ln}=A_{l}^{*}C_{n}+B_{l}^{*}D_{n}+C_{l}^{*}A_{n}+D_{l}^{*}B_{n}\), \(\mathcal{Y}_{ln}=A_{l}^{*}B_{n}+B_{l}^{*}A_{n}+C_{l}^{*}D_{n}+D_{l}^{*}C_{n}\), and \(\mathcal{Z}_{ln}=A_{l}^{*}A_{n}-D_{l}^{*}D_{n}\). The equations of motion for \(\mu_{n}\), \(\nu_{n}\) and \(\eta_{n}\) then read

\[i\sum_{n=1}^{M}\Big[\dot{\mathcal{P}}_{ln}\,\mu_{n}+\mathcal{P}_{ln}\big(\dot{\mu}_{n}+\Theta_{ln}\,\mu_{n}\big)\Big]\bar{S}_{ln}=\sum_{n=1}^{M}\Big\{\mu_{n}\big[\mathcal{Q}_{ln}^{\rm L}\mathcal{G}_{\rm L}+\mathcal{Q}_{ln}^{\rm R}\mathcal{G}_{\rm R}+\mathcal{P}_{ln}\Xi_{ln}+2\lambda\mathcal{Z}_{ln}(\eta_{l}^{*}+\eta_{n})\big]+\mathcal{P}_{ln}\big(\omega_{\rm L}\mu_{n}-J\nu_{n}\big)-g\,\mathcal{X}_{ln}\big[1+\mu_{n}(\mu_{l}^{*}+\mu_{n})\big]-g\,\mathcal{Y}_{ln}\,\mu_{n}(\nu_{l}^{*}+\nu_{n})\Big\}\bar{S}_{ln}, \tag{A5}\]

\[i\sum_{n=1}^{M}\Big[\dot{\mathcal{P}}_{ln}\,\nu_{n}+\mathcal{P}_{ln}\big(\dot{\nu}_{n}+\Theta_{ln}\,\nu_{n}\big)\Big]\bar{S}_{ln}=\sum_{n=1}^{M}\Big\{\nu_{n}\big[\mathcal{Q}_{ln}^{\rm L}\mathcal{G}_{\rm L}+\mathcal{Q}_{ln}^{\rm R}\mathcal{G}_{\rm R}+\mathcal{P}_{ln}\Xi_{ln}+2\lambda\mathcal{Z}_{ln}(\eta_{l}^{*}+\eta_{n})\big]+\mathcal{P}_{ln}\big(\omega_{\rm R}\nu_{n}-J\mu_{n}\big)-g\,\mathcal{Y}_{ln}\big[1+\nu_{n}(\nu_{l}^{*}+\nu_{n})\big]-g\,\mathcal{X}_{ln}\,\nu_{n}(\mu_{l}^{*}+\mu_{n})\Big\}\bar{S}_{ln}, \tag{A6}\]

\[i\sum_{n=1}^{M}\Big[\dot{\mathcal{P}}_{ln}\,\eta_{n}+\mathcal{P}_{ln}\big(\dot{\eta}_{n}+\Theta_{ln}\,\eta_{n}\big)\Big]\bar{S}_{ln}=\sum_{n=1}^{M}\Big\{\eta_{n}\big[\mathcal{Q}_{ln}^{\rm L}\mathcal{G}_{\rm L}+\mathcal{Q}_{ln}^{\rm R}\mathcal{G}_{\rm R}+\mathcal{P}_{ln}\Xi_{ln}\big]+\mathcal{P}_{ln}\,\omega_{\rm ph}\eta_{n}+2\lambda\mathcal{Z}_{ln}\big[1+\eta_{n}(\eta_{l}^{*}+\eta_{n})\big]-g\,\mathcal{X}_{ln}\,\eta_{n}(\mu_{l}^{*}+\mu_{n})-g\,\mathcal{Y}_{ln}\,\eta_{n}(\nu_{l}^{*}+\nu_{n})\Big\}\bar{S}_{ln}. \tag{A7}\]

By numerically solving these linear equations at each time \(t\), one can accurately obtain \(\dot{A}_{n}\), \(\dot{B}_{n}\), \(\dot{C}_{n}\), \(\dot{D}_{n}\), \(\dot{\mu}_{n}\), \(\dot{\nu}_{n}\), and \(\dot{\eta}_{n}\). The fourth-order Runge-Kutta method is then adopted for the time evolution of the tunable Rabi dimer, including the time-dependent photon numbers, the phonon number, the qubit polarization, and the LZ transition probability.
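The time stepping itself is standard. Below is a minimal sketch of the fourth-order Runge-Kutta update (our own illustration); the hypothetical `deriv(t, y)` is assumed to pack all variational parameters into one complex vector `y` and to return the derivatives obtained by solving the linear system (A1)-(A7):

```python
import numpy as np

def rk4_step(deriv, y, t, dt):
    """One fourth-order Runge-Kutta step for the packed parameter vector y.

    deriv(t, y) is assumed to solve the linear system (A1)-(A7) for the time
    derivatives of A_n, B_n, C_n, D_n, mu_n, nu_n and eta_n.
    """
    k1 = deriv(t, y)
    k2 = deriv(t + 0.5 * dt, y + 0.5 * dt * k1)
    k3 = deriv(t + 0.5 * dt, y + 0.5 * dt * k2)
    k4 = deriv(t + dt, y + dt * k3)
    return y + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

# Smoke test with a trivial derivative, dy/dt = -i y:
y = rk4_step(lambda t, y: -1j * y, np.array([1.0 + 0j]), 0.0, 1e-3)
```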
2310.05862
Better Safe than Sorry: Pre-training CLIP against Targeted Data Poisoning and Backdoor Attacks
Contrastive Language-Image Pre-training (CLIP) on large image-caption datasets has achieved remarkable success in zero-shot classification and enabled transferability to new domains. However, CLIP is extremely more vulnerable to targeted data poisoning and backdoor attacks, compared to supervised learning. Perhaps surprisingly, poisoning 0.0001% of CLIP pre-training data is enough to make targeted data poisoning attacks successful. This is four orders of magnitude smaller than what is required to poison supervised models. Despite this vulnerability, existing methods are very limited in defending CLIP models during pre-training. In this work, we propose a strong defense, SAFECLIP, to safely pre-train CLIP against targeted data poisoning and backdoor attacks. SAFECLIP warms up the model by applying unimodal contrastive learning (CL) on image and text modalities separately. Then, it divides the data into safe and risky sets, by applying a Gaussian Mixture Model to the cosine similarity of image-caption pair representations. SAFECLIP pre-trains the model by applying the CLIP loss to the safe set and applying unimodal CL to image and text modalities of the risky set separately. By gradually increasing the size of the safe set during pre-training, SAFECLIP effectively breaks targeted data poisoning and backdoor attacks without harming the CLIP performance. Our extensive experiments on CC3M, Visual Genome, and MSCOCO demonstrate that SAFECLIP significantly reduces the success rate of targeted data poisoning attacks from 93.75% to 0% and that of various backdoor attacks from up to 100% to 0%, without harming CLIP's performance.
Wenhan Yang, Jingdong Gao, Baharan Mirzasoleiman
2023-10-05T19:42:03Z
http://arxiv.org/abs/2310.05862v2
# Better Safe than Sorry: Pre-training CLIP against Targeted Data Poisoning and Backdoor Attacks

###### Abstract

Contrastive Language-Image Pre-training (CLIP) on large image-caption datasets has achieved remarkable success in zero-shot classification and enabled transferability to new domains. However, CLIP is far more vulnerable to targeted data poisoning and backdoor attacks than supervised learning. Perhaps surprisingly, poisoning 0.0001% of CLIP pre-training data is enough to make targeted data poisoning attacks successful. This is four orders of magnitude smaller than what is required to poison supervised models. Despite this vulnerability, existing methods are very limited in defending CLIP models during pre-training. In this work, we propose a strong defense, SafeClip, to safely pre-train CLIP against targeted data poisoning and backdoor attacks. SafeClip warms up the model by applying unimodal contrastive learning (CL) on image and text modalities separately. Then, it carefully divides the data into safe and risky subsets. SafeClip trains on the risky data by applying unimodal CL to image and text modalities separately, and trains on the safe data using the CLIP loss. By gradually increasing the size of the safe subset during the training, SafeClip effectively breaks targeted data poisoning and backdoor attacks without harming the CLIP performance. Our extensive experiments show that SafeClip decreases the attack success rate of targeted data poisoning attacks from 93.75% to 0% and that of backdoor attacks from 100% to 0%, without harming the CLIP performance on various datasets.

## 1 Introduction

Pre-training large vision-language models on enormous amounts of paired image-caption data crawled from the internet has achieved remarkable success in zero-shot classification and robustness to distribution shift. CLIP learns image and text representations in a shared space by maximizing the agreement between the paired image-text representations, and minimizing the agreement between the unpaired ones. This alleviates the need for high-quality annotations and allows scaling up the pre-training data to millions (Radford et al., 2021) and billions of examples (Jia et al., 2021). Despite the superior performance, CLIP is extremely vulnerable to targeted data poisoning and backdoor attacks, where an adversary injects a subset of malicious examples into the training data to change the prediction of particular examples at test time. Perhaps surprisingly, poisoning only 0.0001% and 0.01% of the pre-training data is enough to make targeted data poisoning and backdoor attacks successful, respectively (Carlini & Terzis, 2021). Considering that the large pre-training data of CLIP is often crawled from the internet, such attacks are very easy to perform in practice. Despite this vulnerability, protecting CLIP against targeted data poisoning and backdoor attacks during pre-training has remained unaddressed. The only recently proposed method, RoCLIP, aims to disassociate the poisoned image-caption pairs during pre-training by matching the image representations with the nearest neighbors of their captions, and matching the caption representations with the nearest neighbors of their images (Yang & Mirzasoleiman, 2023). However, this method can only defend CLIP against a relatively small number of poisons.
Two other methods proposed to clean a _poisoned pre-trained_ CLIP, by fine-tuning on _clean_ data of the same scale as pre-training (Yang et al., 2023), or by fine-tuning on a _clean_ subset of the pre-training data using CL on image and text modalities (Bansal et al., 2023). The first method is clearly not applicable to pre-training, and the second one even increases the attack success rate if applied to _poisoned_ pre-training data, as we will confirm experimentally. Protecting CLIP against targeted data poisoning and backdoor attacks during pre-training is indeed very challenging. This is because training only once on the poisoned pairs can make the attack successful. In contrast, in the supervised setting the model must be trained on the poisoned data for several epochs before the attack succeeds (Biggio et al., 2012; Turner et al., 2019). Thus, to protect CLIP during pre-training, it is crucial to entirely exclude the poisoned examples from the pre-training pipeline. In this work, we propose the first effective defense, SafeClip, against _strong_ targeted data poisoning and backdoor attacks during CLIP pre-training. SafeClip first warms up the model by applying unimodal CL on the image and text modalities separately. This initializes the model in a way that poisoned image-caption representations have a low similarity initially. Then, it applies the CLIP loss with a _low learning rate_ to associate image-caption representations, while maintaining a low similarity for poisoned pairs. Subsequently, SafeClip divides the data into a small safe set and a large risky set based on the similarity of their image-caption representations. SafeClip pre-trains the model by applying the CLIP loss only to the safe set and applying CL to the image and text modalities of the risky set separately. The safe and risky sets are updated during the training, and the size of the safe set is gradually increased. In doing so, SafeClip effectively excludes the vast majority of the poisoned examples from the safe set and prevents the CLIP loss from associating their poisoned images and captions. This effectively breaks the attack. SafeClip ensures a superior performance by increasing the size of the safe set and by applying data augmentation to its examples during pre-training. We conduct extensive experiments on Conceptual Captions (CC) 1M to evaluate the effectiveness of SafeClip. We show that SafeClip effectively breaks state-of-the-art targeted data poisoning and backdoor attacks during pre-training, by decreasing the success rate of targeted data poisoning attacks from 93.75% to 0% and that of backdoor attacks from 54.3% to 0%, without harming the zero-shot and linear probe performance of CLIP on various datasets.

## 2 Related Work

**Unimodal Contrastive Learning (CL)** Unimodal contrastive learning is among the most successful methods for representation learning (Chen et al., 2020; Caron et al., 2020; Chen & He, 2021). CL maximizes the agreement between different augmented views of the same example (positive pairs) while minimizing the agreement between augmented views of different examples (negative pairs). A recent body of work aimed to further improve the performance of CL, by improving the consistency of the representations via a momentum encoder (He et al., 2020), eliminating the need for negative pairs (Grill et al., 2020), or removing redundancy between components of the representation vectors (Zbontar et al., 2021).
Most relevant to our work is NNCLR, which enriches the learned representations by keeping a memory bank of augmented representations and using the nearest neighbor of every example in the pool as its positive pair (Dwibedi et al., 2021).

**Contrastive Language-Image pre-training (CLIP)** Large vision-language models like CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021) achieved remarkable success by contrastive pre-training on 400M and 1B image-caption pairs crawled from the web. Recent work tried to improve the data efficiency and performance of CLIP. Specifically, DeCLIP (Li et al., 2021) uses SimSiam (Chen & He, 2021) and Masked Language Modeling (Devlin et al., 2018) to match the augmented views of the image representations and the augmented views of the text representations, to improve the data efficiency of CLIP. SLIP (Mu et al., 2022) improves the performance by including in-modal contrastive learning on images using SimCLR, which maximizes the agreement between different views of the same augmented image while minimizing the agreement between augmented views of different images. CyCLIP (Goel et al., 2022) emphasizes the importance of in-modal consistency, where the difference in similarities of image-image pairs should be close to that of the corresponding text-text pairs, and of cross-modal consistency, where the similarities of the mismatched image-text pairs should be close to each other.

**Targeted Data Poisoning and Backdoor Attacks on CLIP** CLIP is highly susceptible to various types of targeted data poisoning and backdoor attacks (Carlini and Terzis, 2021; Yang et al., 2023). Targeted data poisoning attacks (TDPA) aim to deceive the model into misclassifying a specific test example by modifying the captions of a small subset of the training data. Backdoor attacks (BA) involve embedding a backdoor trigger into a small subset of examples in the training data, with the goal of causing the model to misclassify any test images with the same trigger. A backdoor trigger can be either visible, like a distinguishable patch, or invisible, like patterned noise points or patterned image deformation (Chen et al., 2017; Gu et al., 2017; Nguyen and Tran, 2021). Adding a trigger to only \(0.01\%\) of the pre-training data can cause the model to misclassify the backdoored examples. TDPA is even more effective, requiring only 0.0001% of the data to be poisoned (Carlini and Terzis, 2021).

**Targeted Data Poisoning and Backdoor Defense on CLIP** Despite the vulnerability of CLIP to TDPA and BA, existing defense methods are very limited. RoCLIP (Yang and Mirzasoleiman, 2023) is the only proposed defense for protecting CLIP during pre-training. RoCLIP first augments image-caption pairs using techniques such as random cropping and color jittering. Subsequently, it matches each image with the nearest neighbor of its caption, and vice versa. These nearest neighbors are drawn from a representation pool, which is updated at the end of every epoch. However, RoCLIP is effective only against a limited range of poisons and fails to defend the model when trained on datasets with a poison rate higher than 0.0015%. Two recent works proposed data cleansing for fine-tuning CLIP, or cleaning a poisoned pre-trained CLIP during fine-tuning. Yang et al. (2023) proposed dropping examples that have a low image-caption similarity according to a clean pre-trained CLIP, to cleanse the fine-tuning data. This method requires a clean pre-trained model, and a proper threshold to filter the poisons without discarding a large amount of clean data.
This threshold varies for different attack types and is difficult to pre-compute. To clean a CLIP model poisoned with TDPA, Yang et al. (2023) proposed fine-tuning on a clean dataset of the same size as the pre-training data. Moreover, to clean a CLIP model poisoned with BA, Bansal et al. (2023) proposed CleanCLIP, which fine-tunes the model on a _clean_ subset of the pre-training data with the CLIP loss and a CL loss on the image and text modalities. The first method is clearly not applicable to pre-training, and the second one increases the attack success rate when applied to the poisoned data. This is because CL clusters the backdoored images and their captions, and the CLIP loss can then even better associate the backdoored images with the poisoned captions. In this work, we propose the first effective defense for protecting CLIP against strong TDPA (0.05%) and BA (0.05%) during pre-training, without compromising the model's performance.

## 3 Preliminary

### 3.1 Contrastive Language-Image Pre-training (CLIP)

Consider a dataset \(\mathcal{D}=\{(x_{i}^{\mathcal{I}},x_{i}^{\mathcal{T}})\}_{i=1}^{n}\) of \(n\) image-caption pairs, where \(x_{i}^{\mathcal{I}}\) and \(x_{i}^{\mathcal{T}}\) are the image and caption of the \(i^{th}\) pair. The CLIP architecture consists of an image encoder \(f_{I}\!:\!\mathcal{I}\rightarrow\mathbb{R}^{d}\) and a text encoder \(f_{T}\!:\!\mathcal{T}\rightarrow\mathbb{R}^{d}\) to encode images and captions. The encoded representations are projected into the same space and are normalized to have unit \(\ell_{2}\)-norm. We denote the resulting image and text representations by \(\mathbf{z}_{i}^{\mathcal{I}}\) and \(\mathbf{z}_{i}^{\mathcal{T}}\). To create the multi-modal interaction, the InfoNCE loss is applied to pull the projected representations of every image-caption pair together while pushing apart the projected representations of unpaired images and captions in the same mini-batch. Formally, for a mini-batch of \(N\) pairs, the CLIP loss is defined as:

\[\mathcal{L}_{\text{CLIP}}=-\frac{1}{2N}\sum_{j=1}^{N}\log\left[\frac{\exp\left(\left\langle\mathbf{z}_{j}^{\mathcal{I}},\mathbf{z}_{j}^{\mathcal{T}}\right\rangle/\tau\right)}{\sum_{k=1}^{N}\exp\left(\left\langle\mathbf{z}_{j}^{\mathcal{I}},\mathbf{z}_{k}^{\mathcal{T}}\right\rangle/\tau\right)}\right]-\frac{1}{2N}\sum_{k=1}^{N}\log\left[\frac{\exp\left(\left\langle\mathbf{z}_{k}^{\mathcal{I}},\mathbf{z}_{k}^{\mathcal{T}}\right\rangle/\tau\right)}{\sum_{j=1}^{N}\exp\left(\left\langle\mathbf{z}_{j}^{\mathcal{I}},\mathbf{z}_{k}^{\mathcal{T}}\right\rangle/\tau\right)}\right], \tag{1}\]

where \(\tau\) is a trainable temperature parameter and \(\left\langle\cdot,\cdot\right\rangle\) is the inner product between two representations. The performance of CLIP is evaluated with zero-shot or linear-probe classification, as we discuss next.
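For concreteness, Eq. (1) is a symmetric cross-entropy over the image-text similarity matrix. The following minimal PyTorch sketch (our own illustration, not the authors' released code) computes it for a batch of projected representations:

```python
import torch
import torch.nn.functional as F

def clip_loss(z_img, z_txt, tau=0.07):
    """Symmetric InfoNCE loss of Eq. (1) for N projected image-caption pairs."""
    z_img = F.normalize(z_img, dim=-1)          # unit l2-norm, as in the text
    z_txt = F.normalize(z_txt, dim=-1)
    logits = z_img @ z_txt.t() / tau            # (N, N) pairwise similarities
    labels = torch.arange(len(z_img), device=z_img.device)
    # First term: match each image to its caption (rows); second term: each
    # caption to its image (columns). Diagonal entries are the positive pairs.
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))
```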
**Zero-shot classification.** Zero-shot classification assesses the generalizability and transferability of the model to unseen tasks. It transforms the downstream labels into natural-language captions using the provided engineered prompt templates, such as "A photo of a {label}" (Radford et al., 2021). Then, it calculates the cosine similarity between the representations of a given image and each prompt, and predicts the label with the highest image-prompt similarity.

**Linear probe classification.** Linear probe classification refers to evaluating the representations extracted from the pre-trained image encoder by training a linear classifier on the downstream labeled data.

### 3.2 Targeted Data Poisoning and Backdoor Attacks

Targeted data poisoning and backdoor attacks poison CLIP by injecting a set of poisoned image-caption pairs into the pre-training data. Let \(\mathcal{D}_{p}=\{(x_{i}^{\mathcal{I}},x_{c}^{\mathcal{T}})|x_{i}^{\mathcal{I}}\in\mathcal{I}_{t},x_{c}^{\mathcal{T}}\in\mathcal{T}_{adv}\}\) be the injected poisoned pairs, where \(\mathcal{I}_{t}\) is the set of poisoned image(s) and \(\mathcal{T}_{adv}\) is the set of adversarial captions related to the adversarial label \(y_{adv}\). To construct the poisoned caption set, one can search the training dataset for all captions that contain the adversarial label and use these captions as the adversarial captions. Another approach is to use CLIP's set of 80 different prompt-engineered text descriptions (Radford et al., 2021) to construct captions for the adversarial label, and then either use a subset of them or repeat them as necessary. In our work, we construct \(\mathcal{T}_{adv}\) from the training dataset, which is consistent with the construction methods used in (Carlini and Terzis, 2021; Yang et al., 2023; Yang and Mirzasoleiman, 2023; Bansal et al., 2023).

**Targeted data poisoning attacks** aim to misclassify a particular test example, \(x_{i}^{\mathcal{I}}\), as \(y_{adv}\). Hence, \(\mathcal{D}_{p}=\{(x_{i}^{\mathcal{I}},x_{c}^{\mathcal{T}})|x_{c}^{\mathcal{T}}\in\mathcal{T}_{adv}\}\). **Backdoor attacks** introduce a trigger patch into a set of poisoned images. The goal is to misclassify any test example with the trigger patch, \(x_{i}^{\mathcal{I}}\oplus\text{patch}\), as \(y_{adv}\). Hence, \(\mathcal{D}_{p}=\{(x_{i}^{\mathcal{I}}\oplus\text{patch},x_{c}^{\mathcal{T}})|x_{i}^{\mathcal{I}}\in\mathcal{I},x_{c}^{\mathcal{T}}\in\mathcal{T}_{adv}\}\). In contrast to targeted data poisoning attacks, which target a particular test example, backdoor attacks inject _random_ images with the backdoor trigger, paired with the adversarial captions.

**Adversary Objective** The primary objective of the adversary is to manipulate the output representations of CLIP such that certain images are misclassified into adversarial categories instead of their true categories, while the other images are classified correctly.

**Adversary Capabilities** We assume that the adversary has limited control over the pre-training data and can inject a small number (\(\leq 0.05\%\) of the dataset size) of poisoned examples into the training dataset. The adversary also has knowledge of the model structure, the training algorithm, and the hyperparameters used by the victim, but cannot modify the training process directly.
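To illustrate the two threat models, the poisoned set \(\mathcal{D}_{p}\) can be assembled as follows (a hypothetical sketch of our own; `add_trigger` stands in for any patch-stamping function and is not part of the paper's code):

```python
import random

def make_tdpa_poisons(target_image, adv_captions, n_poisons=100):
    """TDPA: pair one target image with captions of the adversarial label."""
    return [(target_image, random.choice(adv_captions))
            for _ in range(n_poisons)]

def make_backdoor_poisons(images, adv_captions, add_trigger, n_poisons=200):
    """BA: stamp the trigger on random training images and pair each with a
    random adversarial caption."""
    return [(add_trigger(img), random.choice(adv_captions))
            for img in random.sample(images, n_poisons)]
```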
## 4 Method

Next, we motivate and present SafeClip for robust pre-training of CLIP against TDPA and BA.

### 4.1 Motivation

Targeted data poisoning and backdoor attacks can succeed extremely fast when pre-training CLIP models. For example, when pre-training on a dataset with a 0.01% poison rate, only 1 epoch of CLIP is enough to poison the model. Thus, to prevent the model from being poisoned, it is essential to filter out the majority of poisoned pairs _before_ the pre-training starts, and keep them out _throughout_ the pre-training.

Figure 1: Cosine similarity matrices between image-caption representations, obtained by CLIP and SafeClip. Training with the CLIP loss on the poisoned data associates the poisoned image-caption pairs. In contrast, SafeClip prevents association of the poisoned image-caption pairs by clustering images and captions in the same category via an in-modal CL loss before pre-training.

If the model avoids training on, or is exposed to only a limited amount of, the poisoned data, the representations of poisoned images and captions do not get close during pre-training, and the attack fails. However, as shown in Fig. 2, the poisoned pairs become inseparable from the clean pairs after 1 pre-training epoch. To filter out the poisoned pairs, SafeClip first warms up the model with a few epochs of unimodal CL on the image and text modalities separately. The unimodal CL clusters similar images and similar texts together. In doing so, it effectively pushes the poisoned image-caption representations apart, thus slowing down the attacks from taking effect. Then, SafeClip runs 1 epoch of CLIP with a _small learning rate_ and evaluates the cosine similarities of all examples. As shown in Fig. 2, after the initial warmup epochs, the poisoned pairs can be better separated from the clean pairs. Since poisoned pairs are less similar than clean pairs, SafeClip only trains with the CLIP loss on examples with high cosine similarity (safe data), while training on the other examples (risky data) with the unimodal CL loss. Throughout the pre-training, SafeClip gradually increases the amount of data used for training with the CLIP loss. Since the majority of the poisoned pairs are never trained on, SafeClip can successfully defend the model against strong targeted data poisoning and backdoor attacks. Fig. 2(c) shows that the poison ratio in the pre-training data remains low throughout pre-training with SafeClip, while more clean pairs are being used to pre-train with the CLIP loss. In summary, to prevent the model from being poisoned, SafeClip consists of three steps: (1) a few epochs of unimodal CL warmup; (2) one epoch of slow-paced CLIP warmup; (3) a combination of unimodal CL training and CLIP training with data updating. The effect of SafeClip on the image and text encoders is shown in Fig. 1. CLIP focuses on aligning the paired image-caption representations, which renders it easy to poison. On the other hand, SafeClip clusters images and captions in the same category. In doing so, it reduces the similarity of poisoned image-caption representations. This allows SafeClip to successfully defend against strong poisoning and backdoor attacks. The pseudocode of SafeClip is illustrated in Alg. 1. Next, we will discuss each step in more detail.

Figure 2: Distribution of image-caption cosine similarity after 1 epoch of pre-training with (a) CLIP and (b) SafeClip. The data to the right of the red line represents the portion of the (safe) data used for training with the CLIP loss. SafeClip almost exclusively applies the CLIP loss to clean data. (c) Fraction of remaining poisons throughout pre-training with SafeClip. After the warmup, the poison ratio drops from 0.08% to 0.018%, and gradually goes to 0% as the pre-training continues. This shows the effectiveness of SafeClip in filtering out the poisoned pairs and defending the model.

#### 4.1.1 Unimodal Self-supervised Warmup

SafeClip leverages unimodal CL on both image and text modalities separately. Since unimodal CL does not match poisoned images with captions, it does not risk poisoning the model. The unimodal CL clusters similar images and similar captions together. In doing so, poisoned pairs can be better separated from the clean pairs. Effectively, since the poisoned image(s) and adversarial captions are from different categories, poisoned images and adversarial captions cluster with their respective true representation clusters during the unimodal CL warmup, and move further away from each other in the representation space. For example, to poison an image of "cat" with a "plane" caption, the image needs to move closer to the "plane" text cluster and away from the "cat" image cluster in the representation space. The closer the image is to its true "cat" representation cluster at the beginning of training, the more challenging it becomes to poison the image. The same argument applies to captions.

**Nearest-Neighbors** When the poison rate is high, poisoned images, which are either identical images (TDPA) or images sharing the backdoor patch (BA), cluster tightly together in the representation space. This prevents them from getting close to the cluster of their true category. To avoid this issue and enrich the quality of the representations, we extend our unimodal CL training by using a nearest-neighbor pool to find positive pairs (Dwibedi et al., 2021).
Same argument applies to captions. **Nearest-Neighbors** When the poison rate is high, poisoned images, which are either identical images (TDPA) or images sharing the backdoor patch (BA) cluster tightly together in the representation space. This prevent them from getting close to the cluster of their true category. To avoid this issue and enrich the quality of the representations, we extend our unimodal CL training by using a nearest neighbor pool to find positive pairs (Dwibedi et al., 2021). That is, rather than directly matching Figure 2: Distribution of image-caption cosine similarity after 1 epoch of pre-training with (a) CLIP and (b) SafeClip. The data to the right of the red line represents the portion of the (safe) data used for training with CLIP loss. SafeClip almost exclusively applies the CLIP loss to clean data. (c) Fraction of remaining poisons throughout pre-training with SafeClip. After the warmup, the poison ratio drops from 0.08% to 0.018%, and gradually goes to 0% as the pre-training continues. This shows the effectiveness of SafeClip in filtering out the poisoned pairs and defending the model. differently augmented views of the same image or caption \((\mathbf{z}_{i},\mathbf{z}_{i}^{+})\), we maintain a pool of image and caption representations, and match each image or caption representation with their nearest neighbor in the pool rather than its augmented views. The pool is implemented as a queue, initially initialized with representations of random examples and is updated by including the representations of examples in the current mini-batch, while excluding the oldest representations in the queue. By exposing examples to more diverse positive pairs, SafeClip prevents clustering of poisoned images and adversarial captions, and can separate the poisoned pairs more effectively. We will explore the impact of using the nearest neighbor approach in our ablation study. The unimodal CL loss is defined as: \[\mathcal{L}_{\text{unimodal\_NN}}=-\log\Biggl{[}\frac{\exp\left(\left\langle \text{NN}(\mathbf{z}_{i},\mathcal{P}),\mathbf{z}_{i}^{+}\right\rangle/\tau\right)}{ \sum_{k=1}^{N}\exp\left(\left\langle\text{NN}(\mathbf{z}_{i},\mathcal{P}),\mathbf{z}_{ k}^{+}\right\rangle/\tau\right)}\Biggr{]} \tag{2}\] where \(\mathbf{z}_{i}\) is the output image/text representation and \(\mathbf{z}_{i}^{+}\) is the augmented view of the image/text representation, and \(\text{NN}(\mathbf{z}_{i},\mathcal{P})\) is the nearest neighbor operator defined as: \[NN(\mathbf{z}_{i})=\text{argmin}_{\mathbf{p}\in\mathcal{P}}\|\mathbf{z}_{i}-\mathbf{p}\|_{2}. \tag{3}\] **Slow-paced CLIP Warmup Epoch** Although unimodal CL brings similar images and captions closer in the image and text representation spaces, the images and their corresponding captions remain relatively distant from each other. Thus, to associate the image-caption representations and effectively distinguish between poisoned and clean pairs, one epoch of CLIP warmup becomes essential before the filtering step. Thus, following the unimodal CL warmup, we proceed with one additional epoch of training with the CLIP loss. Crucially, to mitigate the risk of poisoning, as the CLIP loss directly matches the image-caption pairs, including the potentially poisoned ones, we employ a lower learning rate. This slow-paced CLIP epoch helps prevent the model from learning poisoned image-caption pairs while enabling SafeClip to filter out the majority of poisoned pairs before pre-training. 
As shown in Fig 2, the warmup results in a significant separation between poisoned and clean pairs. In addition, as we will discuss in Sec. 5.2.1, only a few epochs of unimodal CL is sufficient. SafeClip applies \(r=5\) epochs of unimodal warmup. We will show that this is not sensitive to tuning and can apply to a wide range of poisons in Sec. 5.1. #### 4.1.2 Mixed training with data updating After the warm-up phase, we evaluate the cosine similarities of all the examples and divide them into the safe and risky sets. The top \(k\%\) of the data, characterized by high cosine similarity, is considered _safe_ data, while the remaining portion is deemed _risky_ data. SafeClip applies CLIP loss to the safe data, directly matching images with their captions. To ensure the trustworthiness of the safe data, we select a small value for \(k\) (e.g., \(k=15\)). On the other hand, instead of discarding the risky data, we continue training on it with unimodal CL. However, two concerns still remain: (1) Some poisoned pairs may still be in the safe data, (2) The model's performance may suffer as the CLIP loss is not applied to majority of the examples. To address these concerns, at the end of each epoch, we assess the cosine similarity of all examples and update \(k=k+i\), to chose a larger fraction of the data with highest cosine similarity as the new safe set, with \(i\) being a small number (e.g., \(i=1\)). To further boost the performance, we apply data augmentation to the examples in the safe set used in the CLIP loss. With the above update strategy, only a small number of poisoned pairs may temporarily enter the safe data and cannot poison the model. At the same time, more training on clean data with CLIP loss and on risky data with unimodal CL loss allows the model to learn better representations and better distinguish and discard the poisoned pairs during the training. Additionally, since we progressively increase the proportion of safe data during training, by the end of the training, the majority of the data will be part of the safe data and will be trained on with CLIP loss, thereby resolving the performance issue. To reduce the computational cost of updating the safe and risky sets, instead of calculating the cosine similarities of all examples at every epoch, we recompute all the similarities every \(m\) epochs (e.g. \(m=5\)). In other epochs, we only update the cosine similarity for \(s\%>k\%\) of examples with the highest similarities, and update the safe and risky set accordingly. The loss of the mixed training is defined as: \[\mathcal{L}_{\text{SAFECLIP}}(\mathcal{D})=\mathcal{L}_{\text{unimodal\_NN}}( \mathcal{D}_{\text{risky}})+\mathcal{L}_{\text{CLIP}}(\mathcal{D}_{\text{ safe\_aug}}). \tag{4}\] Note that, during mixed training, we still apply nearest-neighbors for unimodal CL. ## 5 Experiments In this section, we evaluate the effectiveness of SafeClip against strong TDPA and BA. We start by introducing the experimental setup. Then we present our main results. Finally, we conduct an ablation study on different components of SafeClip. **Training** We used an open-source implementation of CLIP as our base model. Similar to the setup in (Radford et al., 2021), we utilize a ResNet-50 as the image encoder and a transformer as the text encoder. Due to computational constraints and consistent with (Yang and Mirzasoleiman, 2023), we randomly selected 1 million image-caption pairs from the Conceptual Captions 3M (CC3M) dataset as our training dataset (Sharma et al., 2018). 
## 5 Experiments

In this section, we evaluate the effectiveness of SafeClip against strong TDPA and BA. We start by introducing the experimental setup. Then we present our main results. Finally, we conduct an ablation study on different components of SafeClip.

**Training** We used an open-source implementation of CLIP as our base model. Similar to the setup in (Radford et al., 2021), we utilize a ResNet-50 as the image encoder and a transformer as the text encoder. Due to computational constraints and consistent with (Yang and Mirzasoleiman, 2023), we randomly selected 1 million image-caption pairs from the Conceptual Captions 3M (CC3M) dataset as our training dataset (Sharma et al., 2018). In each experiment, the model is trained from scratch for 32 epochs with a batch size of 256, using the AdamW optimizer (Loshchilov and Hutter, 2017).

**Downstream Datasets** To evaluate the downstream performance of our model, we conduct linear probe and zero-shot classifications, as introduced in Sec. 3.1, on 10 widely used datasets (Radford et al., 2021; Li et al., 2021; Yang and Mirzasoleiman, 2023) listed in Table 4.

**Adversarial Attacks** To evaluate the effectiveness of our defense, we consider two different attack baselines: targeted data poisoning attacks (TDPA) (Carlini and Terzis, 2021) and backdoor attacks (BA) with visible patch triggers. For TDPAs, we randomly select 16 different images from the CC3M validation set as our target images. For each target image, we choose a random class from the ImageNet1K dataset (Deng et al., 2009), and construct an adversarial caption set related to the label as discussed in Sec. 3.2. For each target image, we generated 100 and 500 poisons. For BA, we randomly select 200 and 500 images from the CC3M pre-training data and apply the backdoor trigger. We use the same backdoor triggers as proposed by (Gu et al., 2017). We choose a random class from the ImageNet1K dataset (Deng et al., 2009) and construct the adversarial caption set related to the label as discussed in Sec. 3.2. Each backdoored image is paired with a random poisoned caption from the adversarial caption set. In our experiment, we used the class "mushroom".

**Defense Baselines** We consider two defense baselines against the TDPA and the BA. The first is the only existing pre-training defense, RoCLIP (Yang and Mirzasoleiman, 2023), which maintains a pool of representations and matches every image with the nearest neighbor of its caption in the pool, and vice versa. The second is an adaptation of CleanCLIP (Bansal et al., 2023) to pre-training, which applies the CL and CLIP losses to all the pre-training examples. We measure the effectiveness of attacks using the attack success rate. For TDPA, the Poison Success Rate (PSR) is the fraction of target images that are classified as the adversarial label. For BA, the Backdoor Success Rate (BSR) is the fraction of test images containing the backdoor triggers that are classified as the adversarial label.
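Both metrics are simple fractions over the attacked evaluation sets; a minimal sketch follows (the zero-shot classifier producing the predictions is assumed):

```python
def attack_success_rate(predicted_labels, adversarial_labels):
    """PSR/BSR as the fraction of successful attacks (a sketch).

    For PSR, predicted_labels are zero-shot predictions on the target
    images; for BSR, they are predictions on test images carrying the
    backdoor trigger. adversarial_labels holds the attacker's intended
    label for each evaluated image.
    """
    hits = sum(p == a for p, a in zip(predicted_labels, adversarial_labels))
    return hits / len(predicted_labels)

# Example: 1 of 16 target images hit -> PSR = 6.25%.
print(attack_success_rate([3] + [0] * 15, [3] * 16))  # 0.0625
```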
### Main Result

Here, we evaluate the performance of SafeClip against TDPA and BA. We compare SafeClip with baselines, based on both attack success rates and downstream performance.

**SafeClip defense** First, we evaluate the effectiveness of SafeClip in breaking TDPA and BA. Table 1 shows that for TDPA, SafeClip effectively reduces the PSR of CLIP from 93.75% to 0% and 6.25% for 100 and 500 poisons, respectively. For BA, it reduces the BSR of CLIP from 54.3% and 100% to 0%. This indicates that SafeClip manages to filter out the vast majority of the poisoned pairs during the warmup and throughout the training, and successfully defends CLIP during pre-training. On the other hand, CleanCLIP and RoCLIP exhibit poor defense performance against such strong attacks, and can even increase the attack success rate. RoCLIP fails because the nearest neighbor of the poisoned image and caption is more likely to be another poisoned image and caption when the number of poisons is overwhelming. Similarly, CleanCLIP fails as it applies the CLIP loss to all examples, including the poisoned pairs. Importantly, this confirms that the CL loss is not effective when applied together with the CLIP loss to the poisoned data. We see that CleanCLIP increases the BSR from 54% to 66.67% when 200 backdoored images exist in the pre-training data.

**SafeClip downstream performance** Next, we evaluate SafeClip in terms of the downstream performance, by reporting linear probe and zero-shot performance on 10 downstream datasets in Table 4. As shown in Table 1, SafeClip provides comparable or even superior performance compared to CLIP, which is due to the increasing size of the safe set and the use of data augmentation.

### Ablation Studies

Finally, we analyze the effect of SafeClip's different components on its robustness and performance.

**Slow-paced CLIP warmup** Although CLIP training on the full data exposes the model to the poisoned pairs, it is essential to train with the CLIP loss for 1 epoch at a _low_ learning rate in order to correlate the image and caption representations and better filter out the poisons. As shown in Table 3 (rows 1 and 4), without any CLIP training, there are 26.7% more poisons in the top 30% of the training data. At the same time, it is crucial not to train further with the CLIP loss on the full data before filtering out the poisoned pairs. As shown in row 5, applying 1 more CLIP epoch with a low learning rate to the poisoned data introduces 6% more poisons into the top 30%. In addition, it is important to keep the learning rate low during the CLIP warmup. As shown in Fig. 3, with a lower learning rate, 0.9% fewer poisons are found in the top 30% when the dataset has a 0.01% poison rate, and 3.3% fewer poisons are found in the top 30% when the dataset has a 0.05% poison rate.

**Ablation on nearest-neighbors** To study the effect of using nearest-neighbors in the unimodal CL warmup, we randomly select 16 different images as targets, and each target image is paired with 100 or 500 captions related to the adversarial labels. Fig. 3 shows that when the dataset is lightly poisoned, the effect of NN is not obvious, and the total poison rate in the top 30% of the data is similar with or without NN. But when the poison rate is high, NN contributes significantly to decreasing the cosine similarities of the poisoned pairs, marking a 7.3% drop in the total poison rate in the top 30% of the data.

#### 5.2.2 Ablation on mixed training

SafeClip's mixed training epochs gradually incorporate increasing amounts of data as safe, allowing them to be trained with the CLIP loss, while keeping poisoned pairs away throughout the training process. This is essential both for the high performance of SafeClip, despite initially being trained with a small amount of data, and for achieving a low attack success rate. Specifically, two design choices contribute significantly to our mixed training: (1) unimodal CL training on risky data, and (2) the small increment ratio \(i\). Here, we illustrate the necessity of each of these design choices. We run our experiments on 100K examples from CC3M. We randomly select 16 different images as our target images, and pair each with 30 adversarial captions. In total, 480 poisoned pairs are injected into the dataset.

**Unimodal CL on risky data** Similar to the unimodal warmup epochs discussed in Sec. 4.1.1, training with unimodal CL on risky data can improve SafeClip's defense effectiveness significantly. As shown in Fig. 3, unimodal CL on _risky_ data helps decrease the PSR by 12.5%.

**Small increment ratio** A larger increment ratio for updating the _safe_ data improves CLIP's performance, as it allows more data to be trained with the CLIP loss earlier. However, this can put the model at risk if any poisoned pairs enter the safe data.
As shown in Fig. 4, when we choose a larger \(i=5\), the PSR is increased by 37.5%.

## 6 Conclusion

We proposed SafeClip, an effective method for safely pre-training CLIP against targeted data poisoning and backdoor attacks. Using unimodal CL warmup and slow-paced CLIP warmup, SafeClip filters out the majority of the poisons before pre-training and continues to exclude them during pre-training with a mixed training strategy. Through extensive experiments, we demonstrated that SafeClip drops the attack success rate down to 0% for backdoor attacks and 6.25% for targeted data poisoning attacks with a poisoning ratio of 0.05%, while preserving the CLIP performance on various downstream datasets.
2306.09766
Ultrafast switching of topological invariants by light-driven strain
Reversible control of the topological invariants from nontrivial to trivial states has fundamental implications for quantum information processors and spintronics, by realizing an on/off switch for robust and dissipationless spin-current. Although mechanical strain is typically advantageous for such control of topological invariants, it is often accompanied by in-plane fractures and is not suited for high-speed, time-dependent operations. Here, we use ultrafast optical and THz spectroscopy to investigate topological phase transitions by light-driven strain in Bi$_2$Se$_3$, a material that requires substantial strain for $\mathrm{Z}_2$ switching. We show that Bi$_2$Se$_3$ experiences ultrafast switching from being a topological insulator with spin-momentum-locked surfaces to hybridized states and normal insulating phases at ambient conditions. Light-induced strong out-of-plane strain can suppress the surface-bulk coupling, enabling differentiation of surface and bulk conductance at room temperature, far above the Debye temperature. We illustrate various time-dependent sequences of transient hybridization, as well as the switching operation of topological invariants by adjusting the photoexcitation intensity. The abrupt alterations in both surface and bulk transport near the transition point allow for coherent conductance modulation at hyper-sound frequencies. Our findings regarding light-triggered ultrafast switching of topological invariants pave the way for high-speed topological switching and its associated applications.
Tae Gwan Park, Seungil Baek, Junho Park, Eui-Cheol Shin, Hong Ryeol Na, Eon-Taek Oh, Seung-Hyun Chun, Yong-Hyun Kim, Sunghun Lee, Fabian Rotermund
2023-06-16T10:54:11Z
http://arxiv.org/abs/2306.09766v1
# Ultrafast switching of topological invariants by light-driven strain

###### Abstract

Reversible control of the topological invariants from nontrivial to trivial states has fundamental implications for quantum information processors and spintronics, by realizing an on/off switch for robust and dissipationless spin-current. Although mechanical strain is typically advantageous for such control of topological invariants, it is often accompanied by in-plane fractures and is not suited for high-speed, time-dependent operations. Here, we use ultrafast optical and THz spectroscopy to investigate topological phase transitions by light-driven strain in Bi\({}_{2}\)Se\({}_{3}\), a material that requires substantial strain for Z\({}_{2}\) switching. We show that Bi\({}_{2}\)Se\({}_{3}\) experiences ultrafast switching from being a topological insulator with spin-momentum-locked surfaces to hybridized states and normal insulating phases at ambient conditions. Light-induced strong out-of-plane strain can suppress the surface-bulk coupling, enabling differentiation of surface and bulk conductance at room temperature, far above the Debye temperature. We illustrate various time-dependent sequences of transient hybridization, as well as the switching operation of topological invariants by adjusting the photoexcitation intensity. The abrupt alterations in both surface and bulk transport near the transition point allow for coherent conductance modulation at hypersound frequencies. Our findings regarding light-triggered ultrafast switching of topological invariants pave the way for high-speed topological switching and its associated applications.

Topological surface states (TSSs) are unique quantum states on topologically nontrivial insulators, induced by spin-orbit coupling [1; 2]. These symmetry-protected helical TSSs provide robust and dissipationless transport channels, making them ideal for applications in spintronics and quantum computing [3; 4]. Controlling the topological invariants allows for the implementation of a topological on/off switching device as a transistor. This constitutes a fundamental building block for future applications and the emergence of topology from trivial matter [5]. Although traditional methods like chemical substitution have successfully demonstrated topological phase transitions via modification of ionic interactions [6; 7; 8; 9], additional features such as reversibility and time-dependent operation are required for effective switching devices. Mechanical strain can induce topological phase transitions and can potentially serve this purpose [10; 11], but it restricts high-speed and time-dependent operations. Alternatively, light has been identified as a promising route to control topology on an ultrafast timescale using light-driven phonons [12; 13; 14; 15] and photocurrents [16; 17], as recently shown in Dirac and Weyl semimetals. However, this transition has not yet been observed in topological insulators (TIs) with a large bulk bandgap (E\({}_{g}\)), because they require substantial strain. Furthermore, transport dynamics during topological phase transitions, which are pivotal characteristics for topological switching applications, remain unexplored. Bi\({}_{2}\)Se\({}_{3}\) is a representative TI, which exhibits nontrivial Z\({}_{2}\) order and has a layered rhombohedral structure with a quintuple layer (QL) unit.
The large E\({}_{g}\) of about 0.3 eV in bulk states (BS) of Bi\({}_{2}\)Se\({}_{3}\) ensures room temperature topological applications [3; 4; 18]. However, topological phase transitions in Bi\({}_{2}\)Se\({}_{3}\) require a large strain above 5% along the out-of-plane direction [19], which is impossible to achieve with mechanical strain due to unavoidable in-plane fractures, formed around 1% longitudinal strain [20]. Previous studies of longitudinal strain and lattice vibrations by ultrafast coherent light motivate us to examine light-driven ultrafast Z\({}_{2}\) switching in Bi\({}_{2}\)Se\({}_{3}\) [21; 22; 23]. Here, we utilize ultrafast light pulses to selectively apply longitudinal strain as a means to induce the topological phase transition in Bi\({}_{2}\)Se\({}_{3}\). We find that light can induce a strain of about 7%, sufficient to alter topological invariants. The layered structure with van der Waals (vdW) bonding in Bi\({}_{2}\)Se\({}_{3}\) along the \(c\)-axis ensures durability even under such high strain. The lattice strain enables selective probing of TSS and BS charge transport at room temperature by suppressing the surface-bulk coupling, even far above the Debye temperature (\(\sim\)180 K) [24], where phonon scattering predominates. During light-driven topological switching, the transport lifetime in TSS suddenly reduces to approximately 50% due to light-induced hybridization. Simultaneously, the bulk conductance increases considerably during band parity inversion. We also demonstrate temporal sequences of light-driven Z\({}_{2}\) switching, transient hybridization, and coherent modulation of transport in ambient conditions. Our experimental scheme involves a sequence of three pulses: an optical pump, a near-infrared (NIR) probe, and a terahertz (THz) probe, as depicted in Fig. 1a. The optical pump with 1.5-eV photon energy produces photoinduced stress and strain within the Bi\({}_{2}\)Se\({}_{3}\) film by means of excited carriers. The resulting strain and simultaneous transport characteristics are monitored using NIR and THz probe pulses. The atomic displacement in Bi\({}_{2}\)Se\({}_{3}\) by photoinduced stress is perturbed with quasi-spherical symmetry due to its lateral isotropy. Given the experimental conditions--with large beam spots on the sample (denoted A, at the scale of micrometers and millimeters for NIR and THz measurements, respectively) compared to the optical penetration depth (\(\xi\sim\)20 nm) [23] for pump pulses (\(A\gg\xi\))--shear and quasi-shear stresses are effectively nullified. This is because the transverse displacement is orthogonal to the spherical symmetry [25]. Thus, the photoinduced strain waves propagate as longitudinal plane waves (see Fig. S1). The longitudinal strain (\(\eta_{33}\)) confined within the film forms standing waves and acts as an effective tensile strain when the film thickness (\(d\)) is reduced to the nanoscale [23]. This feature makes it notably different from mechanical strain, which inevitably results in in-plane elongation. In a layered structure, both the intra-QL thickness and the inter-QL distance are influenced by \(\eta_{33}\), as shown by the black and green springs in Fig. 1a. Due to the weak vdW bonding, density functional theory (DFT) calculations reveal that the inter-QL distance changes more easily than the intra-QL thickness (see Fig. S2). This aspect makes the crystal resilient against large expansion in the stacking direction.
This substantial inter-QL distance can expand the Coulomb gap and reduce the strength of the spin-orbit interaction [19], leading to the inversion of conduction/valence band parity, and thus to topological phase transitions and hybridized states as illustrated in Fig. 1a. The changes in the real part of the refractive index (\(\bar{n}\)) caused by deformation are recorded by NIR probe pulses. Since the sensitivity for measuring strain is mostly pronounced near the bandgap (i.e., \(\Delta_{\text{osc}}\sim d\bar{n}/d\omega\cdot\delta E_{g}\)) [26], the probe energy is selected as 0.92 eV (1350 nm in wavelength), which is close to the bandgap between the first bulk conduction band and the second valence band [27] (Fig. S3). As illustrated in Fig. 1b, the \(\Delta\)R/R\({}_{0}\) signal shows the transient strain \(\eta_{33}\) (i.e., expansion reduces the probe reflection), which is equivalent to out-of-plane interlayer vibrations [21, 23]. Simultaneously, THz pulses allow us to monitor electrical conductance under photoinduced strain. Figure 1c displays the real part of the THz sheet conductance (G\({}_{0}\)) of 22 QL Bi\({}_{2}\)Se\({}_{3}\), incorporating Drude-Lorentzian terms from the free carriers (Lorentzian center, \(\omega_{L}=0\)) and the optical phonon in the bulk (\(\omega_{L}\sim 2\) THz) [7].

Figure 1: **Experimental scheme and light-driven strain waves and photoconductance dynamics in Bi\({}_{2}\)Se\({}_{3}\).** **a,** Schematics of photoinduced topological phase transitions and their probing by time-resolved ultrafast optical and THz spectroscopy. The pump pulses generate the photocarrier and subsequent tensile strain, which induces the expansion of the Bi\({}_{2}\)Se\({}_{3}\) thin film, consisting of the intralayer thickness (black spring) and interlayer distance (green spring). The strong out-of-plane straining in Bi\({}_{2}\)Se\({}_{3}\) induces the inversion of conduction/valence parity near the Gamma point, leading to a topological phase transition from topological insulator (TI) to hybridized TI (HTI) and normal insulator (NI). **b,** \(\mathrm{F_{pump}}\)-dependent oscillatory signal measured in reflection changes at the NIR wavelength. The black lines are the fit results of the experimental data with damped oscillations. **c,** Real part of the THz conductance of 22 QL Bi\({}_{2}\)Se\({}_{3}\) at equilibrium obtained by THz time-domain spectroscopy. The black line represents the Drude fit. **d,** Temporal evolution of the photoexcited dynamics, measured with conductance changes in the THz probe (\(-\Delta\)E/E\({}_{0}\)) with \(\mathrm{F_{pump}}\) = 0.1 mJ/cm\({}^{2}\) and 2.3 mJ/cm\({}^{2}\). **e,** Real part of the THz conductance for 0.1 mJ/cm\({}^{2}\) and 2.3 mJ/cm\({}^{2}\) photoexcitation at selected time delays (\(t_{1}\) and \(t_{2}\)) with the equilibrium THz conductance (dashed black curves). The maximum in photoconductance at \(t_{1}\) implies the required time for relaxation of excited bulk carriers from the higher to the first conduction band. The \(t_{2}\) indicates the time of maximum expansion.

In the Drude model, the bases for TSS are the 2D spectral weight (\(D_{\rm TSS}=\omega_{p}^{2}d/4\pi^{2}\)) and the scattering rate (\(\gamma_{\rm TSS}=1/\tau_{\rm TSS}\)), where \(\omega_{p}\) and \(\tau_{\rm TSS}\) are the plasma frequency and transport lifetime, respectively.
The \(D_{\rm TSS}\) value derived from the fit is 138 THz\({}^{2}\cdot\)QL, corresponding to a sheet carrier density (\(n_{2D}=4\pi^{2}m^{*}\varepsilon_{0}D_{\rm TSS}/e^{2}\)) of \(1.78\times 10^{13}\) cm\({}^{-2}\) and a Fermi level (\(E_{F}=2\pi\hbar D_{\rm TSS}/15e^{2}\)) of 340 meV [27; 6; 28], where \(m^{*}=0.15m_{0}\) is the electron effective mass [29]. The measured Fermi level is slightly above the bulk bandgap, suggesting that the THz response primarily arises from TSS. Given that our experiments were conducted at room temperature, above the Debye temperature, phonon-mediated surface-bulk coupling results in a flat spectrum with \(\gamma_{\rm TSS}\sim 3.9\) THz. The calculated mobility (\(\mu=e\tau_{\rm TSS}/m^{*}\)) is 254 cm\({}^{2}\)/V\(\cdot\)s, indicating that transport properties are mainly influenced by phonon scattering. It is worth noting that separating the TSS response from the phonon-mediated surface-bulk coupling remains a major challenge at room temperature [30; 31], which is crucial for operating topological devices in an ambient environment. Figures 1d and 1e show the dynamics of photoconductance for pump fluence (\(\rm F_{pump}\)) of 0.1 and 2.3 mJ/cm\({}^{2}\) with THz spectra at selected times \(t_{1}\) (carrier injection) and \(t_{2}\) (carrier relaxation and lattice expansion). Given that the measured \(\rm G_{0}\) is inversely proportional to THz transmission, Fig. 1d depicts the THz conductance deviated from equilibrium, as derived from frequency integration (i.e., \(-\Delta{\rm E}/{\rm E}_{0}\propto\Delta{\rm G}_{0}\)). Therefore, an initial rise in \(-\Delta{\rm E}/{\rm E}_{0}\) at \(t_{1}\) corresponds to an increase in G\({}_{0}\) due to photocarrier injection in BS [27; 28]. The excited carrier relaxes quickly within 10 ps with the bulk relaxation time (\(\tau_{\rm bulk}=2\) ps) [32; 33], and the unrelaxed carriers at \(t_{2}\), estimated from the Drude fit, are negligible, constituting just 0.12% of excited carriers (\(n_{ex}\)), as shown in Fig. S4. Following carrier relaxation, a significant reduction in \(\Delta{\rm G}_{0}\) with \(\rm F_{pump}\) = 2.3 mJ/cm\({}^{2}\) is observed, accompanied by subsequent coherent modulation, which correlates with the period in NIR probing (Fig. 1b). This negative \(\Delta{\rm G}_{0}\) is noticeable above 0.4 mJ/cm\({}^{2}\) (see Fig. S5a). Given that long-lived carriers in TSS increase \(\Delta{\rm G}_{0}\)[33], the observed negative \(\Delta{\rm G}_{0}\) is reproduced by the increased \(\gamma_{\rm TSS}\). Note that the negative \(\Delta{\rm G}_{0}\) at \(t_{2}\) coincides with the time required for maximum lattice expansion as observed in \(\Delta{\rm R}/{\rm R}_{0}\), similar to a recent report on ultrafast X-ray experiments [22]. In addition, the damping time (\(\tau_{\rm damping}\)) of negative photoconductance at \(\sim\)300 ps (refer to Fig. 1d and S5b) matches the strain damping time (\(\sim\)300-500 ps) observed in ultrafast X-ray experiments, whereas thermal relaxation occurs on a much longer timescale of a few nanoseconds [22; 28]. As a result, the observed decrease in THz conductance is primarily influenced by photoinduced tensile strain. The temporal separation of lattice dynamics from ultrafast carrier dynamics allows us to study transport properties under strain. Figure 2a presents the \(\rm F_{pump}\)-dependent \(\rm\Delta G_{0}\) spectra at \(t_{2}\) by varying \(\rm F_{pump}\) with Drude fits. 
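The Drude-Lorentz form underlying these fits can be written compactly; the sketch below is our own parameterization (the overall prefactors and variable names are illustrative assumptions, not the paper's exact convention).

```python
def sheet_conductance(omega, D_tss, gamma_tss, S_ph, omega_ph, gamma_ph):
    """Complex THz sheet conductance: Drude term + bulk-phonon Lorentzian.

    omega:     angular frequency grid (rad/s)
    D_tss:     TSS spectral weight (Drude term, Lorentzian centered at 0)
    gamma_tss: TSS scattering rate, 1/tau_TSS
    S_ph, omega_ph, gamma_ph: strength, center (~2*pi*2 THz), and width
    of the bulk optical phonon. Prefactor conventions are assumed.
    """
    drude = D_tss / (gamma_tss - 1j * omega)
    phonon = -1j * omega * S_ph / (omega_ph**2 - omega**2 - 1j * gamma_ph * omega)
    return drude + phonon

# The real part of this model is fit to G0; the fitted D_TSS and
# gamma_TSS then give the sheet density, Fermi level, and mobility
# through the relations quoted above.
```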
During the fitting process, \(\Delta{\rm D}_{\rm TSS}\) maintains a linear dependence on \(\rm F_{pump}\), while the unrelaxed carriers are at a negligible level of about 0.12% of \(n_{ex}\). Below 0.8 mJ/cm\({}^{2}\), a slight decrease in conductance is fitted with an increase in \(\gamma_{\rm TSS}\) as well as a broadening and shift of the optical phonon frequency [27]. At 1.2 mJ/cm\({}^{2}\), the lower-energy tail in the spectra becomes flat, indicating a competing channel with increased conductivity, which becomes markedly visible at higher \(\rm F_{pump}\). The observed bipolar behavior in \(\rm\Delta G_{0}\) can be understood by considering the strain effects on both TSS and BS. Qualitatively, surface conduction vanishes during the topological phase transition, while bulk conduction turns metallic due to gap closing. To analyze this quantitatively, we compared experimental results with DFT calculations. Figure 2b displays the DFT-calculated electronic band structure with increasing tensile strain along the out-of-plane direction, which shows the \(\rm E_{g}\) closing and subsequent \(\rm Z_{2}\) switching. For TSS under varying \(\rm E_{g}\), the \(\gamma_{\rm TSS}\) should increase due to a finite size effect [34; 6; 35]. Since the penetration depth of the TSS wave function (\(\lambda_{\rm TSS}\sim\hbar v_{F}/\rm E_{g}\)) into bulk \(\rm Bi_{2}Se_{3}\) is about 2.5 nm for \(\rm E_{g}\) = 0.3 eV [35; 6], TSSs in 22 QL samples (thickness, \(d\approx 22\) nm) are initially isolated at the top and bottom surfaces with opposite spin chirality. However, the photoinduced tensile strain causes a reduction of \(\rm E_{g}\), leading to an increase in \(\lambda_{\rm TSS}\). When \(\lambda_{\rm TSS}\) is close to half of \(d\), the top and bottom TSS overlap, leading to hybridization, as shown in Fig. 2c. This hybridized TI (HTI) expands the phase space for carrier scattering, including 180\({}^{\circ}\) backscattering [36; 6; 7]. Figure 2d illustrates the transport characteristics in TSS as a function of \(\rm F_{pump}\) and \(\eta_{33}\). These characteristics were derived from the Drude analysis depicted in Fig. 2a. To compare with DFT calculations, we convert \(\rm F_{pump}\) into strain by simulating the photoinduced stress (\(\sigma_{33}\)), taking into account both electronic stress (\(\sigma_{e}\)) and thermal stress (\(\sigma_{t}\)) [37], as shown in Fig. S6. The negative sign of \(\sigma_{33}\) indicates expansion. The electronic stress (\(\sigma_{e}\)) dominates over thermal expansion (\(\sigma_{t}\)) by a factor of 5, consistent with previous observations in \(\rm Bi_{2}Te_{3}\) [38]. The total stress induced by laser pulses, calculated to be 4 GPa, and the corresponding strain of 7% can be derived by considering the bulk modulus [19]. As illustrated in Fig. 2d, \(\gamma_{\rm TSS}\) slightly increases at first, likely due to lattice heating and expansion. However, it undergoes a sudden surge beyond a 4% strain. In the DFT calculation with longitudinal strain at 4%, \(\rm E_{g}\) is reduced by 5 times to \(\sim\)70 meV (Figs. S7 and S8), which subsequently extends the penetration depth \(\lambda_{\rm TSS}\) to \(\sim\)13 nm. This extended \(\lambda_{\rm TSS}\), close to half of \(d\), is sufficient to trigger hybridization. Consequently, we observe a rapid drop in the transport lifetime in TSS to approximately 50% in the HTI state. This means that the surface electrons freeze out.
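The hybridization criterion above amounts to comparing \(\lambda_{\rm TSS}\) with half the film thickness; a small numerical sketch of that estimate follows, where the value of \(\hbar v_{F}\) is our assumption, chosen so that \(\lambda_{\rm TSS}\) matches the quoted scale at \(\rm E_{g}=0.3\) eV.

```python
HBAR_VF_EV_NM = 0.75   # assumed hbar*v_F (eV*nm): lambda ~ 2.5 nm at E_g = 0.3 eV

def tss_penetration_nm(E_g_eV):
    """lambda_TSS ~ hbar*v_F / E_g, the decay length of the TSS into the bulk."""
    return HBAR_VF_EV_NM / E_g_eV

# Unstrained film (E_g = 0.3 eV): lambda ~ 2.5 nm, far below d/2 = 11 nm,
# so the two surfaces of a 22 QL film stay isolated. At ~4% strain the gap
# shrinks to ~70 meV and lambda grows correspondingly toward the
# d/2 ~ 11 nm scale (the text quotes ~13 nm), triggering hybridization.
for E_g in (0.30, 0.07):
    print(E_g, tss_penetration_nm(E_g))
```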
The phase succeeding the HTI is anticipated to be a normal insulator (NI) with a topologically trivial state, which could be further identified by observing the strain-dependent BS conductance. The strain-dependent transport of the BS results in a topological metal phase, as the band parities converge toward inversion. Consequently, the additional channel with increased \(\rm\Delta G_{0}\) corresponds to the bulk conductance influenced by strain. Figure 2e illustrates the changes in the measured \(\rm\Delta G_{BS}\), compared with DFT calculations. The theoretical DC conductivity in BS, derived from the fully anisotropic deformation potential and ionized impurity contributions within the Boltzmann transport theory, is obtained at a measured carrier density of \(n_{0}=n_{2D}d\sim 10^{19}\) cm\({}^{-3}\) (Fig. S8). The experimental \(\rm\Delta G_{BS}\) is approximated by the DC values as \(\Delta\)G\({}_{\rm BS}\sim\) 1/\(\Delta\)\(\gamma\)\({}_{\rm BS}\), by fitting \(\Delta\)\(\gamma\)\({}_{\rm BS}\), since the contribution of \(\Delta n\) is negligible, at 3%-5% of \(n_{2D}\) (Fig. S4). The experimentally determined \(\Delta\)G\({}_{\rm BS}\) values correlate with those from the DFT calculation. The BS conductance slightly increases to around 4% under tensile strain due to phase-space filling by the reduced E\({}_{g}\). For larger strain, the BS undergoes a significant increase in conductance, indicating a metallic BS. Following this, the BS conductance (G\({}_{\rm BS}\)) again decreases due to the reopening of E\({}_{g}\) after the band parity inversion, signifying the topologically trivial state (Z\({}_{2}\) = 0) above 5.5% strain. This critical strain is similar to that observed in a 6 QL slab (Fig. S9). Furthermore, the experimentally achieved maximum strain of approximately 7% corresponds to the trivial insulator with an E\({}_{g}\) of \(\sim\)100 meV. With a 9% strain, \(\gamma\)\({}_{\rm BS}\) is greatly suppressed with a sufficient E\({}_{g}\) (\(\sim\)200 meV), though this is not experimentally shown due to optical damage.

Fig. 2: **Manipulation of TSS and BS transport across topological phase transition by light-driven tensile strain.** **a,** Differential THz conductance (\(\Delta\)G\({}_{0}\)) of Bi\({}_{2}\)Se\({}_{3}\) at various F\({}_{\rm pump}\) recorded at \(t_{2}\), where the lattice maximally expands. **b,** Orbital-decomposed band structure of Bi\({}_{2}\)Se\({}_{3}\) bulk from DFT calculation according to tensile strain. The orbital contributions from Bi-\(p_{z}\) and Se-\(p_{z}\) are marked with blue and orange color circles, where the size of the circle represents the magnitude of the contribution. Here, the K/4 and M/4 are the quarter of K\(-\Gamma\) and \(-\)M. **c,** Schematics of TSS hybridization and suppression of surface-bulk coupling by tensile strain. **d,** F\({}_{\rm pump}\)- and photoinduced strain-dependent scattering rate in TSS and \(-\Delta\)E/E\({}_{0}\) values at \(t_{2}\). The photoinduced strain is obtained by dividing the calculated \(\sigma_{33}\) by the bulk modulus (\(C_{33}\)). The TSS scattering rate (\(\gamma_{\rm TSS}\), inverse of the transport lifetime in TSS), obtained from the fit results in (**a**), suddenly increases at about 1.3 mJ/cm\({}^{2}\), corresponding to about 4% strain, which results in the slope changes in \(-\Delta\)E/E\({}_{0}\) obtained at \(t_{2}\) as indicated by the vertical dashed line. The vertical orange line denotes the transition point of the hybridized topological insulator (HTI) from the DFT prediction. The transition point of the NI is obtained from (**b**). Optical damage is observed above F\({}_{\rm pump}\) = 2.5 mJ/cm\({}^{2}\). **e,** Bulk conductance changes according to F\({}_{\rm pump}\) and strain, adopted from the data in (**a**) and DFT calculations. The experimental \(\Delta\)G\({}_{\rm BS}\) is obtained from the simple relation \(\Delta\)G\({}_{\rm BS}\sim 1/\Delta\)\(\gamma\)\({}_{\rm BS}\). The bulk conductance shows a significant increase across the topological phase transition at the same level of strain, as indicated by the red vertical dashed line. The error bars in (**d**) and (**e**) represent the standard deviation uncertainties of the fitting results.

Importantly, the light-driven strain can suppress the surface-bulk coupling by a synergetic combination of the expanded phase space of the HTI and the shrinking of the BS. This allows for the separation of charge transport in TSS and BS, which is essential for topological applications at room temperature. In addition, the strain-induced conductance dynamics indicate the potential to transiently convert the conducting edge with an insulating bulk (TI) into the opposite configuration (the insulating edge with conducting bulk), and even achieve an NI phase at ambient conditions with a moderate F\({}_{\text{pump}}\) and doping level. Following the transition, the expanded lattice damps with coherent interlayer vibrations as demonstrated in Figs. 1b, 1d, and 3a. The oscillation amplitude in \(\Delta\)R/R\({}_{0}\) monotonically increases with increasing F\({}_{\text{pump}}\), while the oscillations in \(-\Delta\)E/E\({}_{0}\) become visible only at high F\({}_{\text{pump}}\). Figure 3b presents the spectra of the measured interlayer vibrations, which are consistent with the interface mode (IM, 20 GHz) and breathing modes (BMs, 70 and 100 GHz) [21, 22, 23]. Despite this, only the IM drives the coherent modulation in THz conductance changes. The longitudinal strain waves are described by \(\eta_{33}(\omega)=\eta_{33,0}\epsilon\delta/(1+\omega^{2}\delta^{2})\), where \(\delta=v_{s}/\xi\). Here, \(v_{s}=2.4\) nm/ps represents the longitudinal sound velocity and \(\xi\) is approximately 20 nm [23]. The spectrum of strain waves encompasses the observed frequencies of interlayer vibrational modes, denoted with vertical lines. This indicates that the mode conversion efficiency between light-driven strain waves and eigenmodes in QL chains is higher for the IM than for the BMs. The F\({}_{\text{pump}}\)-dependent amplitude of \(\Delta\)R (A\({}_{\Delta\text{R}}\)), derived from Fig. 1b, exhibits a linear relationship (Fig. 3c). This is attributed to the proportional relationship between F\({}_{\text{pump}}\) and \(\eta_{33,0}\) (Fig. S6). In terms of the G\({}_{0}\) modulation (Fig. 3d), A\({}_{\Delta\text{E}}\) also shows a linear increase above a threshold pump fluence (F\({}_{\text{thres}}\sim 1.4\) mJ/cm\({}^{2}\)). It is noteworthy that the observed F\({}_{\text{thres}}\) corresponds with the F\({}_{\text{pump}}\) for the HTI transition (Fig. 2d). This suggests that the coherent modulation of transport can be realized near the transition point, where the conductance changes are sensitive to layer displacement.
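The roll-off of this strain spectrum can be checked numerically. In the sketch below we read \(\delta=v_{s}/\xi\) as a corner frequency so that the ratio \(\omega/\delta\) is dimensionless; this placement of \(\delta\), the prefactor, and all variable names are our own assumptions.

```python
import numpy as np

V_S = 2.4   # longitudinal sound velocity (nm/ps)
XI = 20.0   # optical penetration depth (nm)

def strain_spectrum(f_ghz, eta0=1.0):
    """Lorentzian spectrum of the photoinduced strain waves (a sketch).

    delta = v_s / xi ~ 0.12 rad/ps (~19 GHz) is treated as the corner
    frequency, so the spectrum rolls off above the IM frequency.
    """
    delta = V_S / XI                        # rad/ps
    omega = 2.0 * np.pi * f_ghz * 1e-3      # GHz -> rad/ps
    return eta0 / (1.0 + (omega / delta) ** 2)

# IM (20 GHz) sits near the peak, BMs (70, 100 GHz) on the tail:
for f in (20.0, 70.0, 100.0):
    print(f, round(strain_spectrum(f), 3))  # ~0.48, ~0.07, ~0.04
```

Under this reading, the higher conversion efficiency for the IM than for the BMs noted above follows directly from the spectral weight available at each eigenfrequency.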
Figure 3: **Coherent modulation of photoconductance in Bi\({}_{2}\)Se\({}_{3}\) by interlayer vibration.** **a,** F\({}_{\text{pump}}\)-dependent oscillatory signal measured with the THz probe (\(-\Delta\)E/E\({}_{0}\)). The black lines are the fit results of the experimental data with damped oscillations. **b,** Fast Fourier transform spectra of the oscillatory signal in the NIR probe from Fig. 1b and the THz probe from (**a**). In the NIR probe, three oscillation components can be assigned to interlayer vibrational modes, including the interfacial mode (IM, 20 GHz), 1st breathing mode (BM\({}_{1}\), 70 GHz), and 2nd breathing mode (BM\({}_{2}\), 100 GHz). In the THz probe, only the IM is involved in the modulation of transport characteristics in Bi\({}_{2}\)Se\({}_{3}\). The bottom panel indicates the simulated spectra of photoinduced strain waves (\(\eta_{33}\)). The strain spectra explain the amplitude difference between the observed vibrational modes. **c,** Measured oscillation amplitude in the NIR probe (A\({}_{\Delta\text{R}}\)) as a function of F\({}_{\text{pump}}\). The BM amplitude is produced by adding BM\({}_{1}\) and BM\({}_{2}\). The dashed line corresponds to the linear fit of the experimental data. The observed linear relation of F\({}_{\text{pump}}\) and A\({}_{\Delta\text{R}}\) supports the linearity of the strain amplitude with F\({}_{\text{pump}}\). **d,** Measured oscillation amplitude in the THz probe (A\({}_{\Delta\text{E}}\)) as a function of F\({}_{\text{pump}}\). The modulation with the IM frequency is invisible at low F\({}_{\text{pump}}\) (incoherent modulation). Above the threshold of F\({}_{\text{pump}}\sim 1.3\) mJ/cm\({}^{2}\), coherent modulation is observed and shows a linear dependence on F\({}_{\text{pump}}\). The threshold fluence is consistent with the F\({}_{\text{pump}}\) for HTI, as indicated by the vertical blue line. This means that the coherent modulation requires substantial changes in transport characteristics. The error bars in (**c**) and (**d**) represent the standard deviation uncertainties of the measured data.

This coherent modulation in transport is illustrated in Fig. 4a. Here, the \(\Delta\)G\({}_{0}\) spectra with F\({}_{\text{pump}}\) = 2.3 mJ/cm\({}^{2}\) are displayed at half periods of the oscillation. After photoexcitation and carrier relaxation at \(\Delta t\) = 15 ps, \(\Delta\)G\({}_{0}\) exhibits a minor decrease during strain development. From \(\Delta t\) = 28 ps (maximum expansion), the \(\Delta\)G\({}_{0}\) spectra manifest coherent oscillations that correspond to the lattice dynamics. Following \(\Delta t\) = 102 ps, the coherence dissipates and steadily recovers toward equilibrium with damping, as portrayed in Fig. 4b. The obtained \(\gamma_{\text{TSS}}\) dynamics suggest that the initial displacement vector of expansion transitions the topological phase toward the trivial insulator (Z\({}_{2}\) = 0). The subsequent displacement, acting as a restorative force, restores it to the nontrivial state (Z\({}_{2}\) = 1). The time-dependent \(\gamma_{\text{TSS}}\) displays coherent modulation while preserving the topological phase, characterized by the hybridized TSS, as depicted in the bottom panel of Fig. 4b. Since the \(n_{ex}\) after 10 ps is negligible, the \(-\Delta\)E/E\({}_{0}\) signal is dominated by \(\Delta\)\(\gamma_{\text{TSS}}\). This allows us to estimate the topological states based on \(-\Delta\)E/E\({}_{0}\) values derived from Fig. 2d. As a result, various time-dependent sequences of surface conduction and Z\({}_{2}\) invariants after photoexcitation with several F\({}_{\text{pump}}\) values can be obtained, as shown in Fig. 4c. For F\({}_{\text{pump}}\) = 1.5 mJ/cm\({}^{2}\), the strain approaches the hybridization point before the transition, generating a transient topological state with the hybridized TSS for around 15 ps.
This duration can be extended beyond 100 ps in conjunction with the occurrence of coherent modulation at a higher F\({}_{\text{pump}}\) of 1.8 mJ/cm\({}^{2}\). In the case of F\({}_{\text{pump}}\) = 2.1 mJ/cm\({}^{2}\), where the temporal sequence is comparable to F\({}_{\text{pump}}\) = 2.3 mJ/cm\({}^{2}\) as seen in Fig. 4b, the topological phase transitions to a topologically trivial state for approximately 10 ps. This is followed by the hybridized TSS for \(\sim\)100 ps, coinciding with the coherent modulation of transport properties. Thus, by merely adjusting F\({}_{\text{pump}}\) as a control parameter, we are able to achieve various temporal sequences of the transient HTI or Z\({}_{2}\) switching.

Figure 4: **Temporal sequences of topological-switching operation and interlayer vibration-assisted coherent modulation of TSS and BS transport.** **a,** Time-dependent differential THz conductance at F\({}_{\text{pump}}\) = 2.3 mJ/cm\({}^{2}\). \(\Delta t\) is selected as during the expansion (\(\Delta t\) = 15 ps), half periods of the IM vibrations (\(\Delta t\) = 28 ps, 50 ps, 76 ps), and full relaxation (\(\Delta t\) = 300 ps). The black curves are the Drude fits. **b,** Coherent modulation of the THz conductance and the obtained scattering rate in TSS from (**a**). The error bars represent the standard deviation uncertainties of the fitting. The black curve is the fit result with an exponential decay with a time constant of \(\tau_{\text{damping}}\sim\) 300 ps and damped oscillations with the IM frequency. The initial expansion and compression give \(\sim\)27% modulation in the transport lifetime in the TSS channel. The horizontal blue and red lines correspond to the scattering rates at the HTI and NI phases, adopted from Fig. 2d via the relation between the \(-\Delta\)E/E\({}_{0}\) value and \(\gamma_{\text{TSS}}\). The bottom panel shows the temporal sequence of topological state dynamics. **c,** F\({}_{\text{pump}}\)-dependent \(-\Delta\)E/E\({}_{0}\) signal and time-dependent sequence of phases obtained in the same way as in (**b**). **d** and **e,** QL-number-dependent modulation frequencies (**d**) and amplitudes (**e**) measured in photoconductance. The modulation with 33 GHz frequency in 17 QL can be achieved, whereas such modulation is not obtained in 13 QL. The reduction of F\({}_{\text{thres}}\) in 17 QL compared to that of 22 QL, as well as the absence of modulation in 13 QL, reflects the finite size effects and the mode conversion efficiency of photoinduced strain waves to QL-dependent eigenmodes in the linear chain model, as discussed in the main text. The error bars in (**d**) and (**e**) represent the standard deviation uncertainties of the measured data.

The switching operation observed can be attributed to the mode conversion (from light to strain waves) and the finite size effect. Thinner films display blue-shifted eigenmodes of interlayer vibrations [21, 22, 23], enabling a faster modulation for the 17 QL Bi\({}_{2}\)Se\({}_{3}\), as depicted in Fig. 4d. However, such modulation was not observed in the 13 QL, potentially due to a poor mode conversion, because the eigenmode frequency for 13 QL is 50 GHz [28], far from the peak of the spectrum of the photoinduced strain. To achieve faster modulation with the 13 QL, pump wavelength tuning might be necessary to blue-shift the strain spectrum. Furthermore, the finite size effect suggests that hybridization occurs at a relatively larger \(\text{E}_{\text{g}}\) in thinner films, as shown in Fig. 4.
The \(\text{F}_{\text{thres}}\) for the 17 QL is smaller than that of the 22 QL, approximately 1.0 mJ/cm\({}^{2}\), corresponding to a strain of around 3%. For a 3% tensile strain, \(\text{E}_{\text{g}}\) is 150 meV (Fig. S8) and \(\lambda_{\text{TSS}}\) extends to roughly 8 nm (half of the 17 QL thickness). It is expected that the top and bottom TSSs have already partially overlapped for roughly 10 QL or less [6; 39], explaining the lack of coherent modulation in the 13 QL. Similarly, a bulk crystal might not be suitable for the switching operation because the induced strain waves propagate without confinement and subsequent lattice expansion [25]. Therefore, the film thickness (above 20 QL) and pump wavelength play crucial roles in this switching operation. This methodology can be further developed by combining it with films at phase boundaries [6; 7] and heterointerfaces [40; 41], and by utilizing an ultrafast coherent control scheme [21; 42], to create ultrafast topological switching devices for a variety of applications.

###### Acknowledgements.

This work was supported by the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning, Korea (RS-2023-00208484, 2019M3D1A10783022, 2019R1A6A1A10073887, 2021R1F1A1050726, 2021R1A2C2004324) and the National Research Council of Science & Technology (NST) grant from the Korean Government (CAP18054-000).
2303.17386
Complementary Random Masking for RGB-Thermal Semantic Segmentation
RGB-thermal semantic segmentation is one potential solution to achieve reliable semantic scene understanding in adverse weather and lighting conditions. However, the previous studies mostly focus on designing a multi-modal fusion module without consideration of the nature of multi-modality inputs. Therefore, the networks easily become over-reliant on a single modality, making it difficult to learn complementary and meaningful representations for each modality. This paper proposes 1) a complementary random masking strategy of RGB-T images and 2) self-distillation loss between clean and masked input modalities. The proposed masking strategy prevents over-reliance on a single modality. It also improves the accuracy and robustness of the neural network by forcing the network to segment and classify objects even when one modality is partially available. Also, the proposed self-distillation loss encourages the network to extract complementary and meaningful representations from a single modality or complementary masked modalities. Based on the proposed method, we achieve state-of-the-art performance over three RGB-T semantic segmentation benchmarks. Our source code is available at https://github.com/UkcheolShin/CRM_RGBTSeg.
Ukcheol Shin, Kyunghyun Lee, In So Kweon, Jean Oh
2023-03-30T13:57:21Z
http://arxiv.org/abs/2303.17386v2
# Complementary Random Masking for RGB-Thermal Semantic Segmentation

###### Abstract

RGB-thermal semantic segmentation is one potential solution to achieve reliable semantic scene understanding in adverse weather and lighting conditions. However, the previous studies mostly focus on designing a multi-modal fusion module without consideration of the nature of multi-modality inputs. Therefore, the networks easily become over-reliant on a single modality, making it difficult to learn complementary and meaningful representations for each modality. This paper proposes 1) a complementary random masking strategy of RGB-T images and 2) self-distillation loss between clean and masked input modalities. The proposed masking strategy prevents over-reliance on a single modality. It also improves the accuracy and robustness of the neural network by forcing the network to segment and classify objects even when one modality is partially available. Also, the proposed self-distillation loss encourages the network to extract complementary and meaningful representations from a single modality or complementary masked modalities. We achieve state-of-the-art performance over three RGB-T semantic segmentation benchmarks. Our source code is available at [https://github.com/UkcheolShin/CRM_RGBTSeg](https://github.com/UkcheolShin/CRM_RGBTSeg).

## 1 Introduction

Robust and reliable semantic scene understanding is a crucial ability for autonomous driving to ensure the safe and reliable operation of autonomous vehicles. RGB-thermal semantic segmentation is one potential solution to achieve reliable semantic scene understanding in adverse weather and lighting conditions. For example, in foggy or low-light conditions, the RGB camera may struggle to capture objects in the scene due to reduced visibility, while the thermal camera can still detect the heat signatures of objects. Combining the information from both modalities enables reliable and accurate semantic segmentation in adverse scenarios. This has naturally led to recent active studies on semantic segmentation of RGB-thermal images [11, 36, 35, 37, 48, 17, 21, 47]. The primary research direction is designing a multi-modal fusion module [44, 46, 8, 21] that can effectively combine the information from both RGB and thermal modalities to improve the accuracy of semantic segmentation. However, without consideration of the nature of multi-modal inputs, the networks easily fall into a sub-optimal solution, where the network becomes over-reliant on a single modality, as shown in Fig. 2 and Tab. 1. In addition, this implies that the networks are susceptible to a wide range of fault cases, such as sensor disconnection, lens occlusion, and other input quality degeneration.

Figure 1: **Complementary random masking for RGB-thermal semantic segmentation. Our proposed method aims to learn meaningful and complementary representations from RGB and thermal images by using complementary masking of RGB-T inputs and ensuring consistency between augmented and original inputs. The proposed method leads to robust and reliable segmentation results in daylight, low-light, and modality-dropped scenarios.**

In this paper, we focus on learning complementary and meaningful representations from both RGB and thermal modalities to prevent the over-reliance problem on a single modality and eventually improve the accuracy of the segmentation model. For this purpose, our intuitive ideas are as follows and shown in Fig. 1:
1) We augment input RGB-T images with random masking to prevent the network from over-relying on one modality for the semantic segmentation task. 2) We enforce consistency between the prediction results of augmented and original images to encourage the network to extract meaningful representations even from partially occluded modalities or a single modality. Our contributions can be summarized as follows:

* We propose a complementary random masking strategy that randomly masks one modality and masks the other modality in a complementary manner to improve model robustness and accuracy.
* We propose a self-distillation loss between the prediction result from clean input modalities and the multiple prediction results from masked input modalities to learn complementary and non-local representations.
* Our proposed method achieves state-of-the-art results over three RGB-T benchmark datasets (_i.e_., MF [11], PST900 [35], and KP [14, 17] datasets).

## 2 Related Work

### RGB-Thermal Fusion Semantic Segmentation

Recently, thermal images have been widely adopted in various applications, such as detection [19, 45], tracking [20, 15], feature matching [18, 26], depth estimation [32, 31, 33], and SLAM [34, 16], to achieve high-level robustness against adverse weather and lighting conditions. RGB-thermal fusion semantic segmentation networks have also been proposed to overcome the limitations of RGB semantic segmentation networks [4, 13, 40, 6, 5], which are often vulnerable to extreme conditions, such as low-light, rainy, snowy, and sandy conditions. Most previous RGB-T fusion networks focused on designing a multi-modal fusion module that can effectively combine the information from both modalities to improve the accuracy of semantic segmentation. Specifically, they proposed various types of feature fusion modules, such as naive feature-level fusion [11, 36, 35], multi-scale feature fusion [36, 46, 37, 21], and attention-weighted fusion [44, 46, 8, 21]. However, if the nature of multi-modal inputs is not considered in the network training, the networks easily become over-reliant on a single modality. This can hinder the network from learning complementary and meaningful representations for each modality, which is necessary to accurately and robustly segment objects.

### RGB-Thermal Knowledge Distillation

Several studies [39, 17, 10] have investigated the potential of using knowledge distillation between RGB and thermal modalities to improve performance in various recognition applications. Specifically, Heatnet [39] utilizes knowledge distillation from daytime prediction results to nighttime to improve the performance of the RGB-T semantic segmentation network. MS-UDA [17] and CEKD [10] distill the knowledge of an RGB-T segmentation network to a thermal image segmentation network. In contrast to these previous works, this paper specifically focuses on knowledge distillation between clean and masked images for RGB-T semantic segmentation tasks.
\begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \multirow{2}{*}{Methods} & \multicolumn{1}{c|}{**RGB-T**} & \multicolumn{2}{c|}{**RGB drop**} & \multicolumn{2}{c}{**THR drop**} \\ \cline{2-6} & mIoU \(\uparrow\) & mIoU \(\uparrow\) & Diff \(\downarrow\) & mIoU \(\uparrow\) & Diff \(\downarrow\) \\ \hline \hline RTFNet [36] & 53.2 & 45.6 & -7.6 & 10.5 & -42.7 \\ CMXNet [21] & 58.0 & 44.7 & -13.3 & 39.2 & -18.8 \\ Ours & **61.2** & **53.1** & **-8.1** & **52.7** & **-8.5** \\ \hline \end{tabular} \end{table}

Table 1: **Quantitative comparisons of RGB-T segmentation on MF dataset [11] by input modality.** Previous RGB-T segmentation networks [36, 21] show a critical vulnerability to modality drop and over-reliance on a single modality, which can hinder the learning of complementary representations from multiple modalities.

Figure 2: **Input modality dependency comparison of RGB-T semantic segmentation networks.** Common multi-modal fusion approaches often result in a sub-optimal solution, where the neural network becomes over-reliant on a single modality, as shown in (e) and (f). On the other hand, our proposed method prevents the issue of over-reliance ((h) and (i)).

## 3 Method

### RGB-T Mask2Former

#### 3.1.1 Preliminaries for Mask Classification

Mask classification architecture [6, 5] is a universal image segmentation network capable of semantic, instance, and panoptic segmentation. The network groups input pixels into \(N\) segments by estimating \(N\) binary masks and \(N\) class labels. The network consists of three main modules: a _backbone_ that extracts low-resolution features from an image, a _pixel decoder_ that gradually upsamples these features to generate high-resolution per-pixel embeddings, and a _transformer decoder_ that estimates object queries based on the image features. The class prediction is estimated via MLP layers with the object queries. The binary mask predictions are obtained by decoding the per-pixel embeddings with object queries. Please refer to these papers [6, 5] for details.

#### 3.1.2 Mask2Former for RGB-T images

We adopted Mask2Former [5] for semantic segmentation as our baseline model and modified the model to take RGB and thermal images, as shown in Fig. 3. More specifically, we assigned a modality-wise backbone to each modality. After extracting modality-wise image features from RGB and thermal images, we aggregate the features with a simple winner-take-all strategy via a _max_ operation. This aggregation finds the most prominent feature from the RGB and thermal features across channel dimensions. We then normalized the aggregated feature map. At this stage, we can directly forward one modality feature to the decoder without aggregating the multi-modal features. After that, the aggregated multi-modal feature or single-modal feature is delivered to the pixel and transformer decoder to estimate \(N\) class and mask predictions. The final semantic segmentation mask can be obtained with a simple matrix multiplication of the class and mask predictions.

### Complementary Random Masking

Recently, masking strategies have been widely utilized in various language [9, 23, 2] and visual applications [1, 12, 41, 38, 43, 28] to learn meaningful representations. In particular, image masking is used to pre-train a large-capacity backbone model to learn general representations for various downstream tasks, such as recognition [12, 41], video applications [38], and 3D applications [28].
Differing from these works, which focus on learning general representations, we utilize a masking strategy to overcome the over-reliance problem of the RGB-T semantic segmentation task and to learn complementary and robust representations for each modality.

Figure 3: **Overall pipeline of complementary masking and self-distillation for RGB-thermal semantic segmentation**. Our proposed training framework consists of complementary random masking and self-distillation loss. We randomly mask the patchified RGB-thermal pair in a complementary manner that guarantees at least one modality is valid. After that, the network estimates each prediction result from the clean and masked RGB-thermal pairs. We enforce the network to predict the same class prediction results from the clean and masked RGB-thermal pairs. The proposed method resolves the over-reliance problem of RGB-T semantic segmentation networks and encourages the network to extract complementary and meaningful representations for robust and accurate semantic segmentation performance from RGB-T images.

More specifically, as shown in Fig. 2 and Tab. 1, common RGB-T segmentation networks easily rely on a single modality. Therefore, the network rarely extracts meaningful representations from the other modality to segment and classify objects. This makes the network vulnerable to a wide range of fault cases, such as sensor disconnection, lens occlusion, and other input quality degeneration. Also, it loses the chance to learn complementary and useful representations for each modality to segment and classify objects. Therefore, we push the network into a situation where one modality is partially unavailable but the missing information can be complemented by the other modality. For this purpose, we propose a complementary random masking method for RGB-T semantic segmentation.

**Complementary Patch Masking.** We use Swin-Transformer [24] as our backbone model. Therefore, each modality image is patchified into a set of non-overlapping small patches, which are fed into the transformer model for processing. Here, we randomly mask out the patches of one modality and replace the masked patches with learnable mask token vectors, following the convention of token masking [9, 1, 43]. The other modality's patches are masked out in the same manner by using the complementary mask. The complementary random masking process is defined as follows: \[\begin{split}\hat{X}_{rgb}&=M*X_{rgb}+\hat{M}*L_ {rgb},\\ \hat{X}_{thr}&=\hat{M}*X_{thr}+M*L_{thr},\end{split} \tag{1}\] where \(X_{input}\) is the tokenized input image, \(M\) is a random mask, \(\hat{M}\) is its complementary mask, defined as \(1-M\), and \(L_{input}\) is a learnable token vector.
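A minimal PyTorch sketch of Eq. (1) at the token level is shown below; the tensor shapes, the masking ratio, and all names are our own assumptions (in the full model the mask tokens would be `nn.Parameter` tensors).

```python
import torch

def complementary_mask(x_rgb, x_thr, tok_rgb, tok_thr, ratio=0.5):
    """Complementary random masking of tokenized RGB-T inputs (Eq. 1).

    x_rgb, x_thr: (B, N, D) patch tokens of each modality
    tok_rgb, tok_thr: (D,) learnable mask token vectors
    ratio: fraction of RGB patches that are masked (an assumed value)
    """
    B, N, _ = x_rgb.shape
    m = (torch.rand(B, N, 1, device=x_rgb.device) > ratio).float()  # M
    m_c = 1.0 - m                                                   # M_hat = 1 - M
    x_rgb_hat = m * x_rgb + m_c * tok_rgb    # keep RGB patches where M = 1
    x_thr_hat = m_c * x_thr + m * tok_thr    # thermal keeps the complement
    return x_rgb_hat, x_thr_hat
```

Because every patch location stays visible in exactly one modality, the scene is never fully occluded; the masked pair \((\hat{X}_{rgb},\hat{X}_{thr})\) is then fed to the self-distillation losses described next.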
The latter loss \(L_{SDN}\) aims to make the network extract robust representations from a partially masked single modality based on its non-local context rather than local features. For this purpose, we further enforce class prediction consistency between the clean RGB-T pair and a single masked modality (_i.e_., \(\hat{X}_{rgb}\) or \(\hat{X}_{thr}\)). The proposed self-distillation loss for non-local representations is defined as follows:

\[\begin{split}L_{SDN}&=L_{1}(z^{c}(X_{rgb},X_{thr}),z^{c}(\hat{X}_{rgb}))\\ &+L_{1}(z^{c}(X_{rgb},X_{thr}),z^{c}(\hat{X}_{thr}))\end{split}\tag{3}\]

### Supervised Loss

We utilize the same supervised loss function \(L_{sup}\) used in Mask2Former [5], which consists of a binary mask loss \(L_{mask}\) and a classification loss \(L_{cls}\). The supervised loss is defined as follows:

\[L_{sup}=L_{mask}+\lambda_{cls}L_{cls},\tag{4}\]

where the mask loss \(L_{mask}\) is a combination of binary cross-entropy loss and dice loss [27], defined as \(L_{mask}=\lambda_{ce}L_{ce}+\lambda_{dice}L_{dice}\).

**Modality-wise Supervision.** The current network architecture can estimate three types of prediction results according to the given input modalities (_i.e_., RGB image, thermal image, and RGB-thermal pair). We also empirically found that applying the supervised loss to each prediction result of the multiple input modalities performs better than a single supervised loss on the RGB-thermal pair. The modality-wise supervised loss is defined as follows:

\[\begin{split}L_{MWS}&=L_{sup}(z_{gt},z(X_{rgb},X_{thr}))\\ &+L_{sup}(z_{gt},z(X_{rgb}))+L_{sup}(z_{gt},z(X_{thr})),\end{split}\tag{5}\]

where \(z_{gt}\) denotes the ground-truth class \(z^{c}\) and binary mask \(z^{m}\), and \(z\) is the class and mask prediction from the given input modalities (RGB, thermal, or RGB-thermal). For the masked modalities, this loss only uses the first term (_i.e_., the RGB-T pair). The total loss is defined as follows:

\[L_{total}=L_{MWS}+L_{SDC}+L_{SDN}\tag{6}\]

## 4 Experiments

In this section, we first describe the datasets used to train and validate RGB-T segmentation networks, along with the implementation details and training settings. After that, we provide quantitative and qualitative comparisons over three RGB-T benchmark datasets. Lastly, we conduct ablation studies of our proposed method to validate the effects of each sub-component.

### RGB-T Semantic Segmentation Datasets

In this study, we employ three publicly available RGB-T datasets to train and evaluate the proposed method.

**MF dataset [11]** consists of 820 daytime and 749 nighttime RGB-thermal images of urban driving scenes with a resolution of 640\(\times\)480. The dataset provides semantic labels for nine classes, including one unlabeled class and eight classes of common objects.

**PST900 dataset [35]** provides 894 RGB-thermal images with a resolution of 1280\(\times\)720, taken in cave and subterranean environments for the DARPA Subterranean Challenge. The dataset contains annotated segmentation labels for five classes, including one background class (_i.e_., unlabeled) and four object classes.

**KP dataset [14]** is an RGB-T paired urban driving scene dataset, providing 95K video frames (62.5K for daytime and 32.5K for nighttime) with a resolution of 640\(\times\)512. Originally, the KAIST Multispectral Pedestrian Detection (KP) dataset provided detection bounding box labels only, but Kim _et al_. [17] provide annotated semantic segmentation labels for 503 daytime and 447 nighttime images.
The labels include 19 object classes, the same classes as in the Cityscapes [7] dataset. However, the dataset splits of Kim _et al_. [17] are not well suited to common RGB-T semantic segmentation network training. We divided the 950 training images into 499 for training, 140 for validation, and 311 for testing, with daytime and nighttime images appropriately distributed in each set. We provide the train/val/test splits that were used to train our network and the other networks on the KP dataset.

Table 2: **Quantitative comparisons for semantic segmentation of RGB-T images on the MF [11], PST900 [35], and KP [14] datasets.** We compared our proposed method with the previous state-of-the-art RGB-T semantic segmentation networks on the MF, PST900, and KP benchmarks. Our proposed method demonstrates superior performance on all benchmark datasets. The best and second-best performance in each block are highlighted in bold and underlined, respectively.

Figure 4: **Qualitative comparison for semantic segmentation of RGB-T images on the MF [11], PST900 [35], and KP [14] datasets.** The first two rows are qualitative comparisons on the MF dataset, the next two rows are PST900 dataset results, and the remaining rows are KP dataset results. The proposed method shows reliable and accurate segmentation results across all datasets, including daylight, low-light, noisy images, and harsh cave conditions.

### Implementation Details

**Mask2Former.** We employ the Swin Transformer [24] (tiny, small, and base) as our backbone model. We use the multi-scale deformable attention Transformer (MSDeformAttn) [49] as the pixel decoder. We adopt the same Transformer decoder as DETR [3]. The number of queries \(N\) is set to 100 by default.

**Training Settings.** Our proposed method is implemented with the PyTorch [29] and Detectron2 [42] libraries on a machine equipped with two NVIDIA RTX A6000 GPUs. The following training settings are commonly used for all datasets. We use the AdamW optimizer [25] and a poly [4] learning rate schedule with an initial learning rate of \(10^{-4}\). We train all segmentation networks with a batch size of 14 for 35K iterations. We utilize Swin Transformer models [24] pretrained on ImageNet-1K [30] (_i.e_., tiny (T), small (S), and base (B)) as backbone models. We apply random color jittering [22], random horizontal flipping, and random cropping to the RGB and thermal images as data augmentation. For the coefficients of the loss functions, we set \(\lambda_{cls}\) to 2.0 for predictions matched with a GT label and to 0.1 for "no object" (_i.e_., no match with any GT label), following [5]. Also, the coefficients \(\lambda_{ce}\) and \(\lambda_{dice}\) are set to 5.0.

### RGB-Thermal Semantic Segmentation

In this section, we compare our proposed method with the previous RGB-T semantic segmentation networks [11, 36, 35, 37, 48, 44, 46, 8, 21] on three benchmarks. We use mean Intersection-over-Union (mIoU) to evaluate the performance of semantic segmentation.

#### 4.3.1 Evaluation on MF Day-Night Dataset [11]

The quantitative and qualitative comparison results are shown in Tab. 2-(a) and Fig. 4. We trained the RGB-T Mask2Former [5], as described in Sec. 3, along with our proposed method. We also provide variants of the network with Swin-T, Swin-S, and Swin-B backbone models. Compared to the previous state-of-the-art method (_i.e_., CMXNet [21]), our approach leads to a 1.7% performance gain in the mIoU metric.
Furthermore, our methods (Swin-S and B) achieve the best or second-best performance in most IoU metrics over the nine classes.

#### 4.3.2 Evaluation on PST900 Dataset [35]

For the PST900 benchmark, our model (_i.e_., Swin-B) achieves a large performance improvement of 3.9% over the previous state-of-the-art result (_i.e_., GMNet [48]), as shown in Tab. 2-(b). Our method also provides precise and reliable segmentation results compared to previous methods, as shown in Fig. 4.

#### 4.3.3 Evaluation on KP Day-Night Dataset [14, 17]

The KP dataset has more numerous and diverse classes than the MF [11] and PST900 [35] datasets. The increased complexity of the dataset makes it more difficult to accurately segment the objects in the RGB-T images, requiring more advanced techniques and careful consideration of the multi-modal inputs. We trained the publicly available RGB-T semantic segmentation networks (_i.e_., MFNet [11], RTFNet [36], CMXNet [21]) on the KP dataset with their provided code bases. As shown in Tab. 2-(c), our method (_i.e_., Swin-B) achieves a 9.0% performance improvement over CMXNet [21]. Fig. 4 also shows that our method delivers precise and accurate segmentation in partially occluded, noisy, and cluttered environments. This implies that the proposed complementary random masking and self-distillation losses make the network learn to extract non-local and complementary representations from each modality, even in challenging conditions. We believe these results indicate that the higher the complexity of the segmentation task, the more our proposed method helps to achieve accurate and robust semantic segmentation from RGB and thermal images.

### Ablation Study

Table 3: **Ablation study of our proposed method on the MF dataset [11].** We conduct an ablation study of the proposed method and various complementary masking strategies. The Swin-S backbone is used for the ablation study.

#### 4.4.1 Analysis of Loss Functions

In this ablation study, we investigate the components of the proposed method, as shown in Tab. 3-(a). Baseline indicates an RGB-T Mask2Former model modified to take RGB and thermal image inputs, as described in Sec. 3. Our empirical finding demonstrates that the modality-wise supervision loss \(L_{MWS}\), which provides supervision for each prediction result from the multiple input modalities, yields a +0.8% performance gain compared to a single supervised loss for RGB-thermal pairs (_i.e_., Baseline). Also, applying complementary random masking \(CRM\) brings a +0.4% performance improvement by pushing the network to segment and classify objects even when partially occluded inputs are provided. The self-distillation losses for complementary and non-local representations (\(L_{SDC}\) and \(L_{SDN}\)) bring +0.3% and +0.2% improvements, respectively. \(L_{SDC}\) aims to make the network learn to extract complementary representations when one modality's information is missing. \(L_{SDN}\) aims to make each modality extract robust representations to segment objects based on their non-local context rather than local features. Lastly, when all components are combined, we get a +1.7% performance improvement compared to the Baseline model.

#### 4.4.2 Complementary Random Masking

We study various types of complementary masking strategies, as shown in Fig. 5 and Tab. 3-(b). Square masking randomly masks a square area with half the height and width of the image in a random position.
Patch masking randomly masks half of an image (_i.e_., a 0.5 ratio) with patches of different sizes (_e.g_., 8, 16, 32, 64). Generally, complementary random masking shows better performance than the Baseline model. However, the patch size and masking scheme matter for network performance. For example, complementary random square masking may hinder learning non-local representations from the \(L_{SDN}\) loss by masking out a wide contiguous area. Similarly, an overly small patch size is undesirable for learning complementary representations for each modality. Generally, random patches over a certain size show higher performance. Empirically, we found that a patch size of 32 shows the best performance.

## 5 Conclusion & Future Work

**Conclusion.** In this paper, we have proposed a complementary random masking strategy and self-distillation losses for robust and accurate RGB-Thermal semantic segmentation. The proposed masking strategy prevents over-reliance on a single modality. It also improves the accuracy and robustness of the neural network by forcing the network to segment and classify objects even when one modality is only partially available. Moreover, the proposed self-distillation losses encourage the network to extract complementary and meaningful representations by enforcing class prediction consistency between clean and masked RGB-thermal pairs. Based on the proposed method, we achieve state-of-the-art performance on three RGB-T semantic segmentation benchmarks.

**Future work: fusion module.** The proposed method focuses on the nature of the multi-modal inputs of the RGB-T semantic segmentation task rather than on effectively fusing the multi-modal information. For this reason, in this paper, we utilized a straightforward feature fusion method rather than modern fusion modules. However, many studies have proposed various types of fusion modules. Therefore, we plan to study effective multi-modal feature fusion methods together with the proposed method for further performance improvements.

**Future work: binary mask.** We also aim to investigate an effective distillation loss on the binary mask predictions between clean and masked RGB-thermal pairs. At the current stage, we did not find a proper formulation for binary mask prediction consistency. However, with properly designed loss functions between the masks, we believe binary mask consistency could help the network predict precise and accurate mask shapes under partial occlusion and cluttered conditions.

Figure 5: **Illustration of different complementary random masking strategies.** Square masking randomly masks a square area with half the height and width of the image in a random position. Patch masking randomly masks half of an image (_i.e_., a 0.5 ratio) with patches of different sizes (_e.g_., 8, 16, 32, 64).
2307.12709
A Dynamic Equivalent Energy Storage Model of Natural Gas Networks for Joint Optimal Dispatch of Electricity-Gas Systems
The development of energy conversion techniques enhances the coupling between the gas network and power system. However, challenges remain in the joint optimal dispatch of electricity-gas systems. The dynamic model of the gas network, described by partial differential equations, is complex and computationally demanding for power system operators. Furthermore, information privacy concerns and limited accessibility to detailed gas network models by power system operators necessitate quantifying the equivalent energy storage capacity of gas networks. This paper proposes a multi-port energy storage model with time-varying capacity to represent the dynamic gas state transformation and operational constraints in a compact and intuitive form. The model can be easily integrated into the optimal dispatch problem of the power system. Test cases demonstrate that the proposed model ensures feasible control strategies and significantly reduces the computational burden while maintaining high accuracy in the joint optimal dispatch of electricity-gas systems. In contrast, the existing static equivalent model fails to capture the full flexibility of the gas network and may yield infeasible results.
Siyuan Wang, Wenchuan Wu, Chenhui Lin, Binbin Chen
2023-07-24T11:44:28Z
http://arxiv.org/abs/2307.12709v1
# A Dynamic Equivalent Energy Storage Model of Natural Gas Networks for Joint Optimal Dispatch of Electricity-Gas Systems

###### Abstract

The development of energy conversion techniques enhances the coupling between the gas network and power system. However, challenges remain in the joint optimal dispatch of electricity-gas systems. The dynamic model of the gas network, described by partial differential equations, is complex and computationally demanding for power system operators. Furthermore, information privacy concerns and limited accessibility to detailed gas network models by power system operators necessitate quantifying the equivalent energy storage capacity of gas networks. This paper proposes a multi-port energy storage model with time-varying capacity to represent the dynamic gas state transformation and operational constraints in a compact and intuitive form. The model can be easily integrated into the optimal dispatch problem of the power system. Test cases demonstrate that the proposed model ensures feasible control strategies and significantly reduces the computational burden while maintaining high accuracy in the joint optimal dispatch of electricity-gas systems. In contrast, the existing static equivalent model fails to capture the full flexibility of the gas network and may yield infeasible results.

Integrated energy system, gas pipeline network, equivalent energy storage model, energy management

## Nomenclature

### _Variables_

* \(\rho\), \(v\), \(\pi\): density, velocity and pressure of gas
* \(\pi_{i,t}\), \(f_{i,t}\): gas pressure and mass flow rate at the \(i\)-th computation node at time \(t\)
* \(\mathbf{s}_{t}\): vector composed of the gas pressure and mass flow rate of all the uncontrollable computation nodes at time \(t\)
* \(\mathbf{u}_{t}\): vector composed of all the controllable gas pressures and injection mass flow rates of normal nodes at time \(t\)
* \(\pi_{n,t}^{\text{NODE}}\), \(f_{n,t}^{\text{IN}}\): gas pressure and mass flow injection at normal node \(n\) at time \(t\)
* \(\Delta\pi_{k,t}\): boosted gas pressure provided by the \(k\)-th compressor at time \(t\)
* \(f_{k,t}^{\text{GW}}\), \(\pi_{k,t}^{\text{GW}}\): output mass flow rate and gas pressure of the \(k\)-th gas well at time \(t\)
* \(f_{k,t}^{\text{GT}}\), \(p_{k,t}^{\text{GT}}\): input gas mass flow rate and output power of the \(k\)-th gas turbine at time \(t\)
* \(f_{k,t}^{\text{P2G}}\), \(p_{k,t}^{\text{P2G}}\): output gas mass flow rate and input power of the \(k\)-th power-to-gas unit at time \(t\)
* \(\mathbf{p}_{t}^{\text{DEV}}\): vector composed of all the active power of GTs and P2Gs at time \(t\)
* \(\boldsymbol{\pi}_{t}^{\text{CTRL}}\): vector of controllable pressure variables at time \(t\), composed of the pressure of all GWs and the boosted pressure provided by all the compressors
* \(\mathbf{f}_{t}^{\text{LD}}\): vector composed of the forecasted non-power gas load for all normal nodes
* \(\mathbf{z}_{t}^{\text{CTRL}}\): vector collecting all the variables, including the power of GTs and P2Gs and the controllable gas pressures, for the first \(t\) time slots
* \(\mathbf{p}^{\text{GAS}}\): vector composed of the injection power from the gas network \(p_{t}^{\text{GAS}}\) of all time slots
* \(\mathbf{P}^{\text{DEV}}\): matrix composed of all the active power of GTs and P2Gs of all time slots

### _Parameters_

* \(u\): speed of sound
* \(\lambda\): friction coefficient of the gas pipeline
* \(v_{b}\): base value of the gas velocity of the gas pipeline
* \(A\), \(D\), \(\lambda\), \(\alpha\): cross-sectional area, diameter, friction coefficient and inclination of the gas pipeline
* \(q_{L}\): lower heat value of natural gas
* \(\eta_{k}^{s}\): energy conversion efficiency of the \(k\)-th unit \(s\), \(s\in\{\text{GT},\text{P2G}\}\)
* \(\underline{f}_{k}^{s}\), \(\overline{f}_{k}^{s}\): lower and upper bounds of the gas mass flow rate injection of the \(k\)-th unit \(s\), \(s\in\{\text{GT},\text{P2G},\text{GW}\}\)
* \(\underline{\pi}_{k}^{s}\), \(\overline{\pi}_{k}^{s}\): lower and upper bounds of the gas pressure of the \(k\)-th unit \(s\), \(s\in\{\text{GT},\text{P2G},\text{GW}\}\)
* \(\underline{p}_{k}^{s}\), \(\overline{p}_{k}^{s}\): lower and upper bounds of the active power of the \(k\)-th unit \(s\), \(s\in\{\text{GT},\text{P2G}\}\)
* \(\Delta\underline{\pi}_{k}\), \(\Delta\overline{\pi}_{k}\): lower and upper bounds of the \(k\)-th compressor's pressure booster
* \(\underline{\pi}_{l}\), \(\overline{\pi}_{l}\): lower and upper bounds of the \(l\)-th pipeline's gas pressure
* \(\mathbf{W}_{t}\), \(\mathbf{w}_{t}\): parameters obtained by collecting the operational constraints for the first \(t\) time slots
* \(\mathbf{D}_{t}\): constant matrix used to select the elements of \(\mathbf{p}_{t}^{\text{DEV}}\) from the vector \(\mathbf{z}_{t}^{\text{CTRL}}\)
* \(\mathbf{E}_{t}\): parameter of the high-dimensional quadrant ellipsoid region \(\mathcal{E}_{t}\)
* \(T\): total number of time slots
* \(\underline{p}_{t}^{\text{GAS}}\), \(\overline{p}_{t}^{\text{GAS}}\): lower and upper power bounds of the equivalent storage model of the gas network at time \(t\)
* \(\underline{e}_{t}^{\text{GAS}}\), \(\overline{e}_{t}^{\text{GAS}}\): lower and upper energy bounds of the equivalent storage model of the gas network at time \(t\)
* \(\mathbf{A}^{\text{GAS}}\), \(\mathbf{b}^{\text{GAS}}\): parameters of the equivalent energy storage model with time-varying capability

### _Sets and Functions_

* \(\mathcal{L}\): index set of pipelines
* \(\mathcal{N}\): index set of normal nodes
* \(\mathcal{S}\): index set of all computation nodes
* \(\mathcal{S}_{n}\), \(\mathcal{S}_{l}\): index sets of variables at normal node \(n\) and pipeline \(l\)
* \(\mathcal{S}_{n}^{+}\), \(\mathcal{S}_{n}^{-}\): index sets of variables that flow into and out of normal node \(n\)
* \(\mathcal{S}_{k}^{+}\), \(\mathcal{S}_{k}^{-}\): indices of the computation nodes flowing into and out of compressor \(k\)
* \(\mathcal{D}^{s}\): index set of all the units \(s\) in the gas network, \(s\in\{\text{GT},\text{P2G},\text{GW},\text{COMP},\text{GEN}\}\)
* \(\mathcal{D}_{n}^{s}\): index set of all the units \(s\) connected to normal node \(n\), \(s\in\{\text{GT},\text{P2G},\text{GW}\}\)
* \(\mathcal{T}\): index set of all time slots
* \(\mathcal{E}_{t}\): power coupling region among different energy exchange interfaces at time \(t\)
* \(\Omega^{\text{GAS}}\): controllability feasible region of the gas network
* \(\mathcal{B}^{\text{GAS}}\): energy storage flexibility region of the gas network
* \(\mathcal{F}^{\text{GAS}}\): integrated multi-port energy storage model with time-varying capacity of the gas network
* \((\bullet)_{i}\): the \(i\)-th row of a matrix or the \(i\)-th element of a vector
* \(\operatorname{card}(\bullet)\): cardinality of a set

## I Introduction

### _Motivation_

Energy conversion techniques related to integrated energy systems have been well developed [1, 2], which couples the gas network and the power system more tightly [3]. Exploiting the inherent flexibility of gas networks can offer valuable opportunities to the power system, enabling it to provide energy storage services [4, 5] and maintain power balance. However, challenges remain for the power system in leveraging the flexibility of gas networks. Firstly, the gas network and the power grid are generally operated separately [6], each associated with separate entities of interest, thereby raising concerns about information confidentiality [7] between these two systems. Consequently, the power system operator is unable to directly utilize the intricate model of gas networks for cohesive coordination [8].
Furthermore, the gas network employs partial differential equations to describe its dynamic processes [9], imposing a substantial computational burden on the joint dispatch of the electricity-gas system. Additionally, the gradual and inert nature of the gas grid system poses difficulties in quantifying the energy storage capacity essential for power system operators.

In this paper, we propose a method to evaluate the equivalent energy storage model of gas networks. The slow dynamic process and operational constraints of the gas network are transformed into coupling power constraints and energy constraints among the active power of converters at the interface between the power system and the gas network, such as GTs and P2Gs. These constraints constitute the equivalent energy storage model of the gas network, which can be easily incorporated into the optimal dispatch model of the power system.

### _Literature review_

The flexibility or equivalent model of the gas network has been studied in several works. [10] and [11] used the energy hub as the energy conversion port and estimated its steady-state security region. The nodal operating envelope is used in [12] to represent the aggregated flexibility of the integrated energy units, taking into account multiple P2Gs and the operational constraints of gas networks. The work in [13] presented a comprehensive overview of the flexibility of distributed multi-energy systems and provided a general method to aggregate their flexibility. The inertia of gas and thermal systems is illustrated in [14] based on the corresponding dynamic models. The zonal linepack is introduced in [15] to quantify the gas flexibility and is applied in the optimal power flow model of the integrated gas and electrical system. In [16], an outer approximation with equality relaxation method is presented to speed up the integrated operation of multi-energy systems. A feasible region composed of a large number of predefined constraints of the natural gas-fired units is proposed in [17], and this region is then used to replace the natural gas network in the electricity-gas co-optimization problem to simplify the calculation.

The current solutions have facilitated the effective evaluation of the flexibility model of gas networks in certain aspects. However, the primary drawback of existing methods lies in their reliance on static models, ignoring the time-coupling constraints inherent in the gas network. Temporal coupling constraints pertain to the influence of preceding states on subsequent states of the gas system, ensuring the validity of the gas network's state-space equation. Consequently, two adjacent states cannot be considered independent entities. By considering temporal coupling constraints, we can achieve a more accurate depiction of the transient process of the gas network, thereby avoiding impractical solutions arising from sudden changes in the gas state and reducing errors in the optimal coordination of electricity-gas systems. Moreover, in order to fully account for the dynamic nature of the gas network, the current models that consider dynamic processes can only employ detailed network models, leading to a heavy computational burden in handling the dynamic gas state. It is necessary to substitute a simplified equivalent energy storage model for the detailed network model, thereby mitigating the substantial computational burden associated with computing the dynamic processes of the gas network.
### _Contribution and paper organization_

This paper presents a novel solution to evaluate the equivalent energy storage model of gas networks. Since the coupling interface between the power system and the gas network may involve several GTs and P2Gs, the gas network can be represented as a multi-port energy storage model with time-varying capacity. The regulation capability of GTs and P2Gs can be captured by a flexibility region, which is formulated from the state-space model of the natural gas network and its operational constraints. Then, the state-space model and operational constraints are represented in a reduced and more intuitive form. To the best of the authors' knowledge, the main contributions of our work are as follows:

(1) A multi-port energy storage model with time-varying capacity is proposed to quantify the flexibility of the gas network, which implicitly incorporates all operational constraints. Therefore, it can be easily incorporated into the optimal dispatch problem of the power system to preserve information privacy and reduce the computational burden.

(2) The inscribed high-dimensional quadrant ellipsoid algorithm is adopted to describe the coupling relationships among the active power of energy converters, such as GTs and P2Gs. It offers significant advantages in terms of both approximation accuracy and computational complexity.

(3) The high-dimensional polyhedron projection and bounds shrinking algorithm is developed to calculate the parameters of the equivalent energy storage model. The flexibility of gas networks can be fully exploited and the feasibility of the resulting control strategies can be guaranteed.

The remainder of this paper is organized as follows. Gas network models with gas sources and units are formulated in Section II. The joint dispatch model of the gas and power network is introduced in Section III. Section IV delineates the detailed process for calculating the equivalent energy storage model. Numerical tests are presented in Section V and conclusions are drawn in Section VI.

## II Formulation of the Gas Network and Units

The gas states along a pipeline obey the partial differential equations [18] as follows (supposing \(v(x,t)>0\)):

\[\frac{\partial\rho(x,t)}{\partial t}+\frac{\partial\rho(x,t)v(x,t)}{\partial x}=0 \tag{1}\]

\[\frac{\partial\rho(x,t)v(x,t)}{\partial t}+\frac{\partial\rho(x,t)v^{2}(x,t)}{\partial x}+\frac{\partial\pi(x,t)}{\partial x}+\frac{\lambda\rho(x,t)v^{2}(x,t)}{2D}+\rho(x,t)g\sin\alpha=0 \tag{2}\]

where (1) represents the principle of mass conservation in a natural gas pipeline, signifying that the net outflow rate of mass from an infinitesimal fluid volume is equivalent to the rate of mass decrement within the differential volume, and (2) represents the momentum conservation law. Its five terms correspond to the natural gas inertia, convective gas flow, dynamic gas flow pressure, hydraulic friction force, and force of gravity, respectively. \(\pi(x,t)\), \(\rho(x,t)\) and \(v(x,t)\) denote the gas pressure, density and velocity at position \(x\) at time \(t\), respectively; \(D\), \(\lambda\) and \(\alpha\) denote the cross-sectional diameter, friction coefficient and inclination of the gas pipeline, respectively.

Utilizing the first-order Taylor expansion, the quadratic term can be expanded at the base value of gas velocity \(v_{b}\) [19, 20]. The linearization error resulting from the Taylor expansion is considerably smaller than 1%, thus it can be disregarded.
\[v^{2}(x,t)\approx 2v(x,t)v_{b}-v_{b}^{2} \tag{3}\]

Besides, the second convective term in (2) can be approximately considered as 0 [21], and the inclination angle \(\alpha\) is also taken as 0. The gas pressure and mass flow rate can be expressed by the state variables as follows:

\[\pi(x,t)=u^{2}\rho(x,t) \tag{4}\]

\[f(x,t)=A\rho(x,t)v(x,t) \tag{5}\]

where \(A\) denotes the cross-sectional area of the gas pipeline. By substituting (3)-(5) into (1)-(2), the partial differential equations of the gas states can be expressed as follows:

\[A\frac{\partial\pi(x,t)}{\partial t}+u^{2}\frac{\partial f(x,t)}{\partial x}=0 \tag{6}\]

\[\frac{1}{A}\frac{\partial f(x,t)}{\partial t}+\frac{\partial\pi(x,t)}{\partial x}+\frac{\lambda v_{b}}{AD}f(x,t)-\frac{\lambda v_{b}^{2}}{2u^{2}D}\pi(x,t)=0 \tag{7}\]

To simplify the numerical calculation, the partial differential equations can be discretized in space and time. As shown in Fig. 1, there are two kinds of computation nodes: normal nodes and fictitious computation nodes [22]. The normal nodes are the terminal nodes of each gas pipeline, and the fictitious computation nodes are obtained by discrete segmentation of the gas pipelines.

Fig. 1: Computation nodes of gas pipeline network.

Then, the state-space model (6)-(7) can be described by the discrete difference equations as follows:

\[A\frac{\pi_{x,t}-\pi_{x,t-1}}{\Delta t}+u^{2}\frac{f_{x,t}-f_{x-1,t}}{\Delta x}=0 \tag{8}\]

\[\frac{f_{x,t}-f_{x,t-1}}{A\Delta t}+\frac{\pi_{x,t}-\pi_{x-1,t}}{\Delta x}+\frac{\lambda v_{b}}{AD}f_{x,t}-\frac{\lambda v_{b}^{2}}{2u^{2}D}\pi_{x,t}=0 \tag{9}\]

Besides, there are topology constraints of gas pipeline networks as follows:

\[\pi_{i,t}=\pi_{n,t}^{\text{NODE}},\ i\in\mathcal{S}_{n} \tag{10}\]

\[\sum_{i\in\mathcal{S}_{n}^{+}}f_{i,t}+f_{n,t}^{\text{IN}}=\sum_{j\in\mathcal{S}_{n}^{-}}f_{j,t} \tag{11}\]

where \(\pi_{n,t}^{\text{NODE}}\) denotes the gas pressure of normal node \(n\) at time \(t\); \(f_{n,t}^{\text{IN}}\) denotes the injection mass flow rate of normal node \(n\) at time \(t\); \(\mathcal{S}_{n}\) denotes the index set of variables at normal node \(n\); \(\mathcal{S}_{n}^{+}\) and \(\mathcal{S}_{n}^{-}\) denote the index sets of variables that flow into and out of normal node \(n\).

The discrete difference equations (8)-(9) and topology constraints (10)-(11) can be transformed into the compact matrix form [22] as follows:

\[\mathbf{s}_{t}=\mathbf{J}\mathbf{s}_{t-1}+\mathbf{H}\mathbf{u}_{t-1} \tag{12}\]

where \(\mathbf{s}_{t}\) is a vector composed of the gas pressure and mass flow rate of all the uncontrollable computation nodes at time \(t\); \(\mathbf{u}_{t}\) is a vector composed of all the controllable gas pressures and injection mass flow rates of normal nodes at time \(t\); \(\mathbf{J}\) and \(\mathbf{H}\) are constant matrices obtained based on (8)-(11).

For the consideration of security, the gas pressures of gas pipelines also have upper and lower bounds. For any \(i\in\mathcal{S}_{l}\),

\[\underline{\pi}_{l}\leq\pi_{i,t}\leq\overline{\pi}_{l} \tag{13}\]
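As a concrete illustration, the following minimal numpy sketch advances (8)-(9) by one time step for a single pipeline, marching node-by-node in space and solving a 2\(\times\)2 linear system for \((\pi_{x,t},f_{x,t})\) at each node; all parameter values are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Illustrative parameters: sound speed, cross-sectional area, diameter,
# friction coefficient, base gas velocity, space/time steps, segment count.
u, A, D, lam, vb = 340.0, 0.5, 0.8, 0.01, 10.0
dx, dt, nx = 1000.0, 60.0, 50

def step(pi_prev, f_prev, pi_in, f_in):
    """One implicit time step of the difference scheme (8)-(9).

    pi_prev, f_prev: pressure/flow at time t-1 for all nx+1 computation nodes.
    pi_in, f_in: boundary pressure and mass flow at the inlet node at time t.
    """
    pi, f = np.empty(nx + 1), np.empty(nx + 1)
    pi[0], f[0] = pi_in, f_in
    for x in range(1, nx + 1):
        # Eq. (8):  A*(pi - pi_prev)/dt + u^2*(f - f[x-1])/dx = 0
        # Eq. (9):  (f - f_prev)/(A*dt) + (pi - pi[x-1])/dx
        #           + lam*vb/(A*D)*f - lam*vb**2/(2*u**2*D)*pi = 0
        M = np.array([[A / dt, u**2 / dx],
                      [1.0 / dx - lam * vb**2 / (2 * u**2 * D),
                       1.0 / (A * dt) + lam * vb / (A * D)]])
        rhs = np.array([A / dt * pi_prev[x] + u**2 / dx * f[x - 1],
                        f_prev[x] / (A * dt) + pi[x - 1] / dx])
        pi[x], f[x] = np.linalg.solve(M, rhs)
    return pi, f
```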
The gas injection at the nodes can be described as the net injection of all the units. For any \(n\in\mathcal{N}\),

\[f_{n,t}^{\text{IN}}=\sum_{k\in\mathcal{D}_{n}^{\text{GW}}}f_{k,t}^{\text{GW}}+\sum_{k\in\mathcal{D}_{n}^{\text{P2G}}}f_{k,t}^{\text{P2G}}-\sum_{k\in\mathcal{D}_{n}^{\text{GT}}}f_{k,t}^{\text{GT}}-f_{n,t}^{\text{LD}} \tag{14}\]

where \(\mathcal{D}_{n}^{\text{GW}}\), \(\mathcal{D}_{n}^{\text{P2G}}\), and \(\mathcal{D}_{n}^{\text{GT}}\) denote the index sets of gas sources, P2Gs, and GTs directly linked to node \(n\), respectively; \(f_{n,t}^{\text{LD}}\) denotes the mass flow rate of the forecasted non-power gas load at node \(n\) at time \(t\).

As gas sources, the gas wells (GWs) provide natural gas for the gas network with bounded mass flow rates and gas pressures. For any \(k\in\mathcal{D}^{\text{GW}}\),

\[\underline{f}_{k}^{\text{GW}}\leq f_{k,t}^{\text{GW}}\leq\overline{f}_{k}^{\text{GW}} \tag{15a}\]

\[\underline{\pi}_{k}^{\text{GW}}\leq\pi_{k,t}^{\text{GW}}\leq\overline{\pi}_{k}^{\text{GW}} \tag{15b}\]

The compressors are used to boost the gas pressure in the gas pipelines, subject to maximum and minimum pressure booster limits. For any \(k\in\mathcal{D}^{\text{COMP}}\),

\[\Delta\underline{\pi}_{k}\leq\Delta\pi_{k,t}\leq\Delta\overline{\pi}_{k} \tag{16}\]

The gas turbines (GTs) consume natural gas and generate electricity with efficiency \(\eta_{k}^{\text{GT}}\). For any \(k\in\mathcal{D}^{\text{GT}}\),

\[p_{k,t}^{\text{GT}}=\eta_{k}^{\text{GT}}f_{k,t}^{\text{GT}}q_{L} \tag{17a}\]

\[\underline{p}_{k}^{\text{GT}}\leq p_{k,t}^{\text{GT}}\leq\overline{p}_{k}^{\text{GT}} \tag{17b}\]

\[\underline{f}_{k}^{\text{GT}}\leq f_{k,t}^{\text{GT}}\leq\overline{f}_{k}^{\text{GT}} \tag{17c}\]

where \(q_{L}\) denotes the lower heat value of natural gas.

The power-to-gas devices (P2Gs) consume electricity and generate natural gas with efficiency \(\eta_{k}^{\text{P2G}}\). For any \(k\in\mathcal{D}^{\text{P2G}}\),

\[f_{k,t}^{\text{P2G}}=\eta_{k}^{\text{P2G}}p_{k,t}^{\text{P2G}}/q_{L} \tag{18a}\]

\[\underline{p}_{k}^{\text{P2G}}\leq p_{k,t}^{\text{P2G}}\leq\overline{p}_{k}^{\text{P2G}} \tag{18b}\]

\[\underline{f}_{k}^{\text{P2G}}\leq f_{k,t}^{\text{P2G}}\leq\overline{f}_{k}^{\text{P2G}} \tag{18c}\]
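The following small Python sketch shows how the node balance (14) combines the unit models (17a) and (18a); the efficiency and heat-value figures are illustrative assumptions only.

```python
def node_injection(f_gw, p_gt, p_p2g, f_load, eta_gt=0.35, eta_p2g=0.6, qL=5.0e7):
    """Net gas injection at a node, Eq. (14), with GT/P2G conversions.

    f_gw: total gas-well output [kg/s]; p_gt, p_p2g: GT output / P2G input power [W];
    f_load: forecasted non-power gas load [kg/s].
    """
    f_gt = p_gt / (eta_gt * qL)     # gas consumed by turbines, inverted from (17a)
    f_p2g = eta_p2g * p_p2g / qL    # gas produced by P2G units, from (18a)
    return f_gw + f_p2g - f_gt - f_load
```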
Based on the definition of the control vector \(\mathbf{u}_{t}\) in (12), the models of GTs and P2Gs in (17a) and (18a), and the definition of the gas injection \(f_{n,t}^{\text{IN}}\) in (14), the state vector \(\mathbf{s}_{t}\) can be reformulated as an affine form of the state vector \(\mathbf{s}_{t-1}\), the power of GTs and P2Gs \(\mathbf{p}_{t}^{\text{DEV}}\), the controllable gas pressures \(\boldsymbol{\pi}_{t}^{\text{CTRL}}\) (including the gas pressure of GWs and the boosted gas pressure provided by compressors), and the forecasted gas consumption of non-controllable gas loads \(\mathbf{f}_{t}^{\text{LD}}\):

\[\mathbf{s}_{t}=\mathbf{B}_{1}\mathbf{s}_{t-1}+\mathbf{B}_{2}\mathbf{p}_{t}^{\text{DEV}}+\mathbf{B}_{3}\boldsymbol{\pi}_{t}^{\text{CTRL}}+\mathbf{B}_{4}\mathbf{f}_{t}^{\text{LD}} \tag{19}\]

where \(\mathbf{B}_{1}\sim\mathbf{B}_{4}\) are constant matrices; \(\mathbf{f}_{t}^{\text{LD}}\) is a vector composed of \(f_{n,t}^{\text{LD}}\) for all \(n\in\mathcal{N}\); \(\mathbf{p}_{t}^{\text{DEV}}\) denotes a vector composed of all the active power of GTs and P2Gs at time \(t\), that is,

\[\mathbf{p}_{t}^{\text{DEV}}\coloneqq\left[\cdots,p_{i,t}^{\text{GT}},\cdots,p_{j,t}^{\text{P2G}},\cdots\right]^{\top},\ i\in\mathcal{D}^{\text{GT}},\ j\in\mathcal{D}^{\text{P2G}} \tag{20}\]

with dimension \(n_{\text{DEV}}\coloneqq\operatorname{card}(\mathcal{D}^{\text{GT}})+\operatorname{card}(\mathcal{D}^{\text{P2G}})\); \(\boldsymbol{\pi}_{t}^{\text{CTRL}}\) denotes the vector of controllable pressure variables at time \(t\), composed of the pressure of all GWs and the boosted pressure provided by all the compressors, that is,

\[\boldsymbol{\pi}_{t}^{\text{CTRL}}\coloneqq\left[\cdots,\pi_{k,t}^{\text{GW}},\cdots,\Delta\pi_{c,t},\cdots\right]^{\top},\ k\in\mathcal{D}^{\text{GW}},\ c\in\mathcal{D}^{\text{COMP}} \tag{21}\]

Define the vector \(\mathbf{z}_{t}^{\text{CTRL}}\) that collects all the variables, including the active power of GTs and P2Gs and the controllable gas pressures, for the first \(t\) time slots, that is,

\[\mathbf{z}_{t}^{\text{CTRL}}\coloneqq\left[\cdots,(\mathbf{p}_{i}^{\text{DEV}})^{\top},(\boldsymbol{\pi}_{i}^{\text{CTRL}})^{\top},\cdots\right]^{\top},\ i=1,2,3,\cdots,t \tag{22}\]

To simplify the expression, all the operational constraints in (12)-(19) can be reformulated for the first \(t\) time slots in the compact matrix form (23)-(24). The constraints (15b), (16), (17b) and (18b) related to the control variables correspond directly to the parameters in the matrix \(\mathbf{W}_{t}\). The constraints (13) and (15a) related to the gas network state can be expressed in a linear form of the control variables according to the state-space equation (19).

\[\mathbf{W}_{t}\mathbf{z}_{t}^{\text{CTRL}}\leq\mathbf{w}_{t} \tag{23}\]

\[\mathbf{p}_{t}^{\text{DEV}}=\mathbf{D}_{t}\mathbf{z}_{t}^{\text{CTRL}} \tag{24}\]

where the matrix \(\mathbf{W}_{t}\) and vector \(\mathbf{w}_{t}\) are constant, obtained by collecting the constraints (12)-(19) for the first \(t\) time slots; \(\mathbf{D}_{t}\) is a constant matrix used to select the elements of \(\mathbf{p}_{t}^{\text{DEV}}\) from the vector \(\mathbf{z}_{t}^{\text{CTRL}}\) according to its definition.
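A minimal numpy sketch of rolling out the affine state-space model (19) is given below; the matrices, dimensions and input signals are random stand-ins chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_s, n_p, n_pi, n_f, T = 12, 4, 3, 5, 24     # illustrative dimensions
B1 = 0.98 * np.eye(n_s)                      # stand-in for B_1 (state transition)
B2 = 0.1 * rng.standard_normal((n_s, n_p))   # stand-in for B_2 (GT/P2G power)
B3 = 0.1 * rng.standard_normal((n_s, n_pi))  # stand-in for B_3 (controllable pressures)
B4 = 0.1 * rng.standard_normal((n_s, n_f))   # stand-in for B_4 (gas loads)

s = np.zeros((T + 1, n_s))                   # pressures/flows at uncontrollable nodes
for t in range(1, T + 1):
    p_dev = rng.uniform(0, 1, n_p)           # GT / P2G active power, cf. Eq. (20)
    pi_ctrl = rng.uniform(0, 1, n_pi)        # GW pressures and compressor boosts, cf. Eq. (21)
    f_ld = rng.uniform(0, 1, n_f)            # forecasted non-power gas loads
    s[t] = B1 @ s[t - 1] + B2 @ p_dev + B3 @ pi_ctrl + B4 @ f_ld   # Eq. (19)
```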
## III Joint Dispatch Model of the Gas and Power Network

The joint dispatch model of the gas and power network can be expressed as follows:

\[\min\ \sum_{t\in\mathcal{T}}\left(\sum_{i\in\mathcal{D}^{\text{GEN}}}C_{i}^{\text{GEN}}+\sum_{j\in\mathcal{D}^{\text{GT}}}C_{j}^{\text{GT}}\right)\cdot\Delta t \tag{25a}\]

\[s.t.\ \underline{p}_{k}^{\text{GEN}}\leq p_{k,t}^{\text{GEN}}\leq\overline{p}_{k}^{\text{GEN}} \tag{25b}\]

\[-r_{k}^{\text{GEN}}\Delta t\leq p_{k,t}^{\text{GEN}}-p_{k,t-1}^{\text{GEN}}\leq r_{k}^{\text{GEN}}\Delta t \tag{25c}\]

\[\mathbf{N}_{t}\begin{bmatrix}\mathbf{p}_{t}^{\text{DEV}}\\ \mathbf{p}_{t}^{\text{GEN}}\end{bmatrix}\leq\mathbf{n}_{t} \tag{25d}\]

\[\mathbf{W}_{T}\mathbf{z}_{T}^{\text{CTRL}}\leq\mathbf{w}_{T} \tag{25e}\]

\[\mathbf{p}_{t}^{\text{DEV}}=\mathbf{D}_{t}\mathbf{z}_{t}^{\text{CTRL}} \tag{25f}\]

where (25a) is the objective function of the optimal joint dispatch, minimizing the total fuel cost; (25b) and (25c) denote the active power constraints and ramp constraints of generators; (25d) is a concise constraint that implies the linear flow constraints of the power network, including flow constraints for branches and voltage constraints for buses; (25e) and (25f) are the constraints of the gas network. \(C_{i}^{\text{GEN}}\) denotes the fuel cost of the \(i\)-th coal-fired power unit, as shown in (26a), with \(a_{i}^{\text{GEN}}\), \(b_{i}^{\text{GEN}}\) and \(c_{i}^{\text{GEN}}\) as the cost parameters; \(C_{j}^{\text{GT}}\) denotes the gas consumption cost of the \(j\)-th GT unit, as shown in (26b), with \(\chi_{j}^{\text{GT}}\) as the cost of natural gas. The generation cost of renewable generators is considered to be 0.

\[C_{i}^{\text{GEN}}=a_{i}^{\text{GEN}}\left(p_{i,t}^{\text{GEN}}\right)^{2}+b_{i}^{\text{GEN}}p_{i,t}^{\text{GEN}}+c_{i}^{\text{GEN}} \tag{26a}\]

\[C_{j}^{\text{GT}}=\chi_{j}^{\text{GT}}f_{j,t}^{\text{GT}} \tag{26b}\]

In the above joint dispatch problem, the foremost computational burden stems from the gas network state constraint (25e). This arduous task arises from the intricate state equation depicted in (19), which necessitates a multitude of auxiliary variables to represent the states of the fictitious computation nodes. The calculation of each state variable needs to be performed point by point. Therefore, it becomes imperative to establish a simplified equivalent expression that encapsulates the gas network's flexibility, denoted \(\mathcal{F}^{\text{GAS}}\). By doing so, the computation process in (25) can be significantly expedited. Consequently, (25e) and (25f) are substituted, and the ensuing simplified expression is as follows:

\[\mathbf{P}^{\text{DEV}}\in\mathcal{F}^{\text{GAS}} \tag{27}\]

To establish the gas network constraints (25e), comprehensive information regarding the gas network and its devices is indispensable. Therefore, it would be imperative for the gas network operator to transmit the data of the gas network models to the power grid for dispatch purposes. Conversely, (27) presents a condensed model of flexibility. When participating in the joint dispatch, the gas network only needs to transmit the compressed flexibility model, thereby effectively protecting the data privacy of the gas network.

## IV Equivalent Energy Storage Model for the Gas Network

### _Overview_

Since the gas network has a large capacity for storing natural gas, it has the potential to serve as energy storage. In addition, the gas network exchanges energy with the power grid through GTs and P2Gs, corresponding to the discharging and charging processes of energy storage devices, respectively. Therefore, the gas network model can be reformulated as an equivalent multi-port energy storage model with time-varying capacity.

As shown in Fig. 2, the process of evaluating the equivalent model of the gas network is divided into two parts: the power coupling region of the energy conversion devices (GTs and P2Gs), and the energy storage flexibility region. In the first part, the power coupling region is calculated to describe the coupling relationship among the active power of the energy conversion devices in each time slot. In the second part, the energy storage flexibility region is formulated to describe the equivalent time-varying energy storage capability of the gas network.

Fig. 2: Schematic of the calculation process of the equivalent energy storage model.
### _Calculate the power coupling region of converters_

This subsection introduces the method to calculate the power coupling region among the different energy conversion devices in each time slot. A high-dimensional quadrant ellipsoid is used to approximate this region. For each time slot, we try to find a region in the space of the vector \(\mathbf{p}_{t}^{\text{DEV}}\) that is as large as possible, denoted \(\mathcal{E}_{t}\). All the points in this region should be feasible. That is to say, for any operation point \(\mathbf{p}_{t}^{\text{DEV}}\) inside \(\mathcal{E}_{t}\), there is a corresponding feasible control solution \(\mathbf{z}_{t}^{\text{CTRL},*}\) that can realize the operation point \(\mathbf{p}_{t}^{\text{DEV}}\) while meeting all the operational constraints of the gas network. Its corresponding mathematical expression is: for \(\forall\mathbf{p}_{t}^{\text{DEV}}\in\mathcal{E}_{t}\), \(\exists\mathbf{z}_{t}^{\text{CTRL},*}\) that makes (23) and (24) hold.

There are several existing approaches for obtaining the inscribed high-dimensional region, such as the inscribed hypercube [23], inscribed ellipsoid [24], and vertex enumeration [25] methods. However, both the inscribed hypercube and inscribed ellipsoid methods are excessively conservative, leading to significant approximation errors. On the other hand, the vertex enumeration method is only suitable for computing lower-dimensional inscribed models and imposes a heavy computational burden in this context.

We use a high-dimensional quadrant ellipsoid to approximate the feasible region of \(\mathbf{p}_{t}^{\text{DEV}}\). This method offers significant advantages in terms of both approximation accuracy and computational complexity. The schematic is shown in Fig. 3. Due to the operational constraints of the gas network, there may be a large number of potential coupling relationships among the active power of the energy conversion devices at time \(t\). It is impossible to list all these constraints one by one explicitly, because the number of constraints increases exponentially with the number of devices. An alternative method is to approximate the exact expression of \(\mathcal{E}_{t}\) with an inscribed high-dimensional quadrant ellipsoid. The ellipsoid is strictly inscribed in the original control feasible region, so that the feasibility of the resulting control strategies can be guaranteed. This high-dimensional quadrant ellipsoid can be expressed as follows:

\[\mathcal{E}_{t}\coloneqq\left\{\mathbf{p}_{t}^{\text{DEV}}\ \middle|\ \left\|\mathbf{E}_{t}^{-1}\left(\mathbf{p}_{t}^{\text{DEV}}-\mathbf{c}\right)\right\|_{2}\leq 1,\ \mathbf{p}_{t}^{\text{DEV}}\geq\mathbf{c}\right\} \tag{28}\]

where \(\mathbf{E}_{t}\) is a positive semidefinite matrix that determines the shape of the ellipsoid; \(\mathbf{c}\) is a vector that determines the center of the ellipsoid, composed of the minimum active power of each GT (\(\underline{p}_{k}^{\text{GT}}\)) and P2G (\(\underline{p}_{k}^{\text{P2G}}\)).
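Before describing how \(\mathbf{E}_{t}\) is computed, a minimal numpy sketch of the membership test implied by (28) is given below; the tolerance handling is our own assumption.

```python
import numpy as np

def in_power_coupling_region(p_dev, E_t, c, tol=1e-9):
    """Membership test for the quadrant ellipsoid (28):
    ||E_t^{-1} (p - c)||_2 <= 1 together with p >= c element-wise."""
    r = np.linalg.solve(E_t, p_dev - c)          # E_t^{-1} (p - c) without explicit inverse
    return np.linalg.norm(r) <= 1.0 + tol and np.all(p_dev >= c - tol)
```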
Since the minimum active power constraints of the GTs and P2Gs are already reflected in the definition of the ellipsoid in (28), these constraints can be removed from the original constraints (23), and the remaining constraints are rewritten as:

\[\tilde{\mathbf{W}}_{t}\mathbf{z}_{t}^{\text{CTRL}}\leq\tilde{\mathbf{w}}_{t} \tag{29}\]

To calculate the parameter \(\mathbf{E}_{t}\) of the maximum-volume ellipsoid \(\mathcal{E}_{t}\) inscribed in the original control feasible region, the problem can be transformed into the following semidefinite program [24]:

\[\begin{split}\max_{\mathbf{E}_{t},\mathbf{F},\bar{\mathbf{z}}}\ &\log\det\mathbf{E}_{t}\\ s.t.\ &\left\|\left(\tilde{\mathbf{W}}_{t}\mathbf{F}\right)_{i}\right\|_{2}+\left(\tilde{\mathbf{W}}_{t}\bar{\mathbf{z}}\right)_{i}\leq\left(\tilde{\mathbf{w}}_{t}\right)_{i},\ \forall i\\ &\mathbf{D}_{t}\mathbf{F}=\mathbf{E}_{t},\quad\mathbf{D}_{t}\bar{\mathbf{z}}=\mathbf{c},\quad\mathbf{E}_{t}\succeq 0\end{split} \tag{30}\]

where the auxiliary variables \(\mathbf{F}\) and \(\bar{\mathbf{z}}\) parameterize an affine map \(\mathbf{z}_{t}^{\text{CTRL}}=\mathbf{F}\mathbf{u}+\bar{\mathbf{z}}\), \(\|\mathbf{u}\|_{2}\leq 1\), from the unit ball into the control feasible region, whose projection through \(\mathbf{D}_{t}\) traces out the ellipsoid \(\mathcal{E}_{t}\).
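A minimal cvxpy sketch of the core maximum-volume inscribed ellipsoid program from [24] follows; it omits the projection coupling through \(\mathbf{D}_{t}\) in (30), and the small polytope used here is an illustrative stand-in for the reduced constraint set (29).

```python
import cvxpy as cp
import numpy as np

# Toy polytope {x : W x <= w} standing in for (29).
W = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0], [1.0, 1.0]])
w = np.array([1.0, 0.0, 1.0, 0.0, 1.5])

n = W.shape[1]
E = cp.Variable((n, n), symmetric=True)   # shape matrix of the ellipsoid
c = cp.Variable(n)                        # center of the ellipsoid
cons = [E >> 0]
for i in range(W.shape[0]):
    # sup over ||u|| <= 1 of W_i^T (E u + c) = ||E W_i|| + W_i^T c <= w_i
    cons.append(cp.norm(E @ W[i]) + W[i] @ c <= w[i])
prob = cp.Problem(cp.Maximize(cp.log_det(E)), cons)
prob.solve()
print("center:", c.value)
```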
That is, for \(\forall\boldsymbol{p}^{\text{GAS}}\in\mathcal{B}^{\text{GAS}}\), \(\exists z_{\tau}^{\text{CTRL}^{\text{CTL}^{\text{\text{\text{\text{\text{ \text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text no control actions in \(\Omega^{\text{GAS}}\) can realize a specific dispatch scenario in \(\mathcal{B}^{\text{GAS}}_{(\iota)}\), it means this scenario is infeasible, and the infeasible point is denoted as \(P^{\text{GAS}}_{(\iota)}\). Then, the optimal value of \(f^{{}^{*}}_{(\iota)}\) is \(-\infty\), because the maximum problem in (39) is infeasible. On the contrary, if all the scenarios inside the \(\mathcal{B}^{\text{GAS}}_{(\iota)}\) is feasible for the gas network, the optimal value \(f^{{}^{*}}_{(\iota)}\) equals 0. To solve the Stackelberg game (39), the inner problem can be transferred into its dual minimize problem. Then it will become a bilinear optimization problem. \[\min_{p^{\text{GAS}},\nu,\mu} \mathbf{v}^{\top}\mathbf{w}_{\top}\mathbf{v}_{\top}+\mathbf{\mu}^{\top}p^{\text{GAS}} \tag{40a}\] \[s.t. A^{\text{GAS}}\mathbf{p}^{\text{GAS}} \leq\mathbf{b}^{\text{GAS}}_{(\iota)}\] (40b) \[\mathbf{v}^{\top}\mathbf{W}_{\top}+\mathbf{\mu}^{\top}\mathbf{L} =\mathbf{0}\] (40c) \[\mathbf{v}\geq\mathbf{0} \tag{40d}\] where \(\mathbf{v}\) and \(\mathbf{\mu}\) are the dual variables of constraints (39c) and (39d), respectively. Subsequently, we can use the KKT condition of (40b) and introduce auxiliary binary variables to transform (40) into the following MILP problem: \[\min_{p^{\text{GAS}},\nu,\mu,\zeta,\iota} \mathbf{v}^{\top}\mathbf{w}_{\top}-\zeta^{\top}\mathbf{b}^{\text{GAS}}_{(\iota)} \tag{41a}\] \[s. t. \zeta^{\top}A^{\text{GAS}}+\mathbf{\mu}^{\top}=\mathbf{0}\] (41b) \[\mathbf{v}^{\top}\mathbf{W}_{\top}+\mathbf{\mu}^{\top}\mathbf{L}=\mathbf{0}\] (41c) \[\mathbf{v}\geq\mathbf{0}\] (41d) \[\mathbf{0}\leq\mathbf{b}^{\text{GAS}}_{(\iota)}-A^{\text{GAS}}\mathbf{p}^{ \text{GAS}}\leq M\left(\mathbf{I}-\mathbf{s}\right)\] (41e) \[\mathbf{0}\leq\zeta\leq M\mathbf{s} \tag{41f}\] where \(\zeta\) denotes the dual variable of (40b). \(\mathbf{s}\) is vector composed of binary variables and has the same dimension as \(\mathbf{b}^{\text{GAS}}_{(\iota)}\). \(\mathbf{I}\) denotes an all-one vector; \(M\) is a sufficiently large constant. (41e)-(41f) are the inequities equivalent to the complementary slackness condition of the KKT condition of (40b). Since the optimal value is always obtained in the extreme point of the feasible region, \(\mathbf{p}^{\text{GAS}}_{(\iota)}\) must be an extreme point of \(\mathcal{B}^{\text{GAS}}_{(\iota)}\). Then we can find out the active constraints and denote the index set of them as \(\mathcal{P}_{(\iota)}\), that is \[\left(\mathbf{A}^{\text{GAS}}\right)_{i}P^{\text{GAS}}_{(\iota)}= \left(\mathbf{b}^{\text{GAS}}_{(\iota)}\right)_{i},\forall i\in\mathcal{P}_{( \iota)} \tag{42}\] where \(\left(\mathbf{\cdot}\right)_{i}\) denotes the \(i\)-th row of a matrix or the \(i\)-th element of a vector. Then we can obtain a feasible point in the control feasible region that is closest to \(\mathbf{p}^{\text{GAS}}_{(\iota)}\) by solving the optimization problem (43). This feasible point must be on the boundary of the feasible region and denoted as \(\mathbf{p}^{\text{bd}}_{(\iota)}\). \[\mathbf{p}^{\text{bd}}_{(\iota)}= \operatorname*{arg\,min}_{p^{\text{GAS}}}\left\|\mathbf{p}^{\text{ GAS}}_{(\iota)}-\mathbf{p}^{\text{GAS}}\right\|_{2}^{2} \tag{43a}\] \[s.t. 
In the second stage, the flexibility region of the equivalent model \(\mathcal{B}_{(\iota)}^{\text{GAS}}\) shrinks by adjusting the parameters \(\mathbf{b}_{(\iota)}^{\text{GAS}}\) related to the active constraints in (42). A new group of parameters \(\mathbf{b}_{(\iota+1)}^{\text{GAS}}\) is calculated according to (44) as follows:

\[\mathbf{b}_{(\iota+1)}^{\text{GAS}}=\operatorname*{arg\,max}_{\mathbf{b}^{\text{GAS}},\mathbf{w}}\ \mathbf{I}_{\mathbf{h}_{\iota}}{}^{\top}\mathbf{b}^{\text{GAS}} \tag{44a}\]

\[s.t.\ \mathbf{b}^{\text{GAS}}\leq\mathbf{b}_{(\iota)}^{\text{GAS}} \tag{44b}\]

\[\mathbf{I}_{\mathbf{h}_{\iota}}{}^{\top}\mathbf{w}\geq T,\ \mathbf{w}\in\left\{0,1\right\}^{\operatorname{card}(\mathcal{P}_{(\iota)})} \tag{44c}\]

The two stages are repeated until all scenarios in \(\mathcal{B}_{(\iota)}^{\text{GAS}}\) are feasible. Define the matrix \(\mathbf{P}^{\text{DEV}}\) by collecting \(\mathbf{p}_{t}^{\text{DEV}}\) of all time slots, where the row number of \(\mathbf{P}^{\text{DEV}}\) represents the index of the device, and the column number represents the index of the time slot. According to the definitions of \(\mathbf{p}_{t}^{\text{DEV}}\) and \(\mathbf{p}^{\text{GAS}}\) in (20) and (32), they can be expressed by \(\mathbf{P}^{\text{DEV}}\) in a linear form:

\[\mathbf{p}_{t}^{\text{DEV}}=\mathbf{P}^{\text{DEV}}\boldsymbol{\delta}_{t},\ \forall t\in\mathcal{T} \tag{45a}\]

\[\mathbf{p}^{\text{GAS}}=\left(\boldsymbol{\eta}^{\top}\mathbf{P}^{\text{DEV}}\right)^{\top} \tag{45b}\]

where \(\boldsymbol{\delta}_{t}\) is a constant vector whose \(t\)-th element is 1 and whose other elements are all 0; the vector \(\boldsymbol{\eta}\) is defined in (32). Finally, the integrated flexibility model of the gas network is as follows:

\[\mathcal{F}^{\text{GAS}}\coloneqq\left\{\mathbf{P}^{\text{DEV}}\ \middle|\ \mathbf{P}^{\text{DEV}}\boldsymbol{\delta}_{t}\in\mathcal{E}_{t},\ \forall t\in\mathcal{T};\ \left(\boldsymbol{\eta}^{\top}\mathbf{P}^{\text{DEV}}\right)^{\top}\in\mathcal{B}^{\text{GAS}}\right\} \tag{46}\]

## V Numerical Test Cases

### _Case setup_

The case studies are carried out on an integrated system of the IEEE RTS96 One Area 24-bus power network [27] and the GasLib-134 [28] Greek gas network with some modifications. There are 4 GTs, 4 P2Gs, 5 compressors and 2 GWs in the gas network. All the parameters of the units and the topology of the gas network are listed in the supplemental file [29]. All the numerical cases are conducted on a laptop with an Intel Core i7-1165G7 CPU and 16 GB RAM. The programs are implemented in MATLAB R2020b, with GUROBI [30] as the solver and YALMIP [31] as the modeling tool for the optimization problems.

### _Result of the converters' power coupling region_

The power coupling region among the different energy exchange interfaces \(\mathcal{E}_{t}\) of each time slot is calculated. Since there are eight GTs and P2Gs in total, the region is an eight-dimensional quadrant ellipsoid. Although images of more than three dimensions cannot be drawn, the region can be visualized by selecting three variables at a time and drawing the projection of this eight-dimensional ellipsoid onto three-dimensional space. Fig. 4 shows the coupling region among the different energy exchange interfaces at 12:00. The region is presented as a group of three-dimensional ellipsoids, which are the projection results from the eight-dimensional space. In each graph, the constraint relationship among the three selected variables is visualized in a three-dimensional quadrant ellipsoid form.

To demonstrate the approximation accuracy of the aforementioned inscribed quadrant ellipsoid model, we generated 1000 operational scenarios of the gas network using Monte Carlo simulation. All generated scenarios are technically feasible, residing within the yellow region depicted in Fig. 3.
Due to the approximation error of the inscribed model, only some of the scenarios are encompassed within the quadrant ellipsoid, corresponding to the orange region in Fig. 3. The proportion of scenarios within the inscribed quadrant ellipsoid represents the coverage rate of the approximation model, reflecting the precision of the inscribed quadrant ellipsoid. The results indicate that the inscribed quadrant ellipsoid model exhibits a high coverage rate across all time slots. Hence, the quadrant ellipsoid effectively encompasses the majority of operational scenarios.

### _Result of the energy storage flexibility region_

The energy storage flexibility region \(\mathcal{B}^{\text{GAS}}\) is calculated based on the high-dimensional polyhedron projection and bounds shrinking algorithm, and it describes the time-varying capacity of the equivalent energy storage model. The upper and lower bounds of the aggregated power of the gas network are shown in Fig. 5, corresponding to the parameters \(\overline{p}_{t}^{\text{GAS}}\), \(\underline{p}_{t}^{\text{GAS}}\) in (35a). The upper and lower energy capacity bounds of the gas network are shown in Fig. 6, corresponding to the parameters \(\overline{e}_{t}^{\text{GAS}}\), \(\underline{e}_{t}^{\text{GAS}}\) in (35b).

Fig. 4: Projection of the high-dimensional quadrant ellipsoid onto 3-dimensional space (\(t=12{:}00\)).

Fig. 5: The upper and lower aggregated power bounds of the gas network.

In the equivalent energy storage model, the GTs act as discharging devices, while the P2Gs serve as charging devices. With the passage of time, both the maximum charging and discharging energy bounds gradually increase, causing a gradual expansion of the upper and lower energy bounds of the equivalent energy storage model. The upper bounds of power and energy are formed by the P2Gs. Since the capacities of the P2Gs are relatively small, their maximum active powers are seldom limited; therefore, the upper bounds are usually straight lines. On the contrary, the lower bounds are formed by the GTs, whose maximum active powers may be limited by the operational constraints of the gas network, such as the gas pressure constraints. By integrating the former results of \(\mathcal{E}_{t}\) and \(\mathcal{B}^{\text{GAS}}\), the integrated flexibility model of the gas network \(\mathcal{F}^{\text{GAS}}\) can finally be obtained.

### _Comparison of calculation efficiency and dispatch error_

The calculated equivalent model of the gas network can be easily incorporated into the optimal dispatch problem of the power system and reduces the computational burden. In this numerical case, we compare the calculation time and dispatch error of different joint dispatch methods. We randomly generate 100 scenarios with different load curves. Subsequently, for each scenario, we use the detailed gas network model (as the benchmark), the static gas network equivalent model proposed in [17], and our gas network equivalent model for the joint optimal dispatch of electricity-gas systems, respectively. For the benchmark method, the detailed model of the gas network is used, which is solved by the finite difference method (FDM); its results can be considered accurate. The comparisons of calculation time and dispatch error are listed in TABLE II. The results in TABLE II show that the proposed equivalent energy storage model can effectively reduce the calculation time of the optimal dispatch problem while maintaining high accuracy.
Furthermore, our gas network equivalent model can obtain much more accurate results than the static equivalent one. This is because our equivalent model is calculated using the gas network state-space model, rather than the steady-state equations. Therefore, it can fully exploit the flexibility of gas networks. In addition, the detailed parameters of the gas network are not explicitly used in the proposed calculation process, so the information privacy of gas networks is preserved. Subsequently, to verify the feasibility of the resulting control strategies of the equivalent models, we use Monte Carlo simulations to randomly generate 10,000 scenarios, which are solved using the static equivalent model in [17] and our model, respectively. Each sample represents the dispatch plan of all the units for each time slot. We check whether the dispatch plan of all units meets the gas network operation constraints. The proportion of infeasible scenarios is used as an indicator to measure the feasibility of the resulting control strategies of the different models, and the results are shown in TABLE III. The results show that our equivalent model can guarantee that the control strategies are feasible in all the scenarios, while the static model may yield infeasible results. This is because our model incorporates the temporal coupling constraints as well as the transient process of gas networks. Fig. 6: The upper and lower energy capacity bounds of the gas network. Fig. 7: Comparison of calculation time for different scale systems using two different methods. Fig. 8: Number of variables and constraints with the FDM as the scale of the gas network expands. To illustrate the scalability of our method, we scale up the gas networks and test the calculation time of each scale system. "1x" represents the original scale of the gas network, and "2x"-"5x" represent the gas networks with a scale of 2-5 times, respectively. The results are shown in Fig. 7 and Fig. 8. According to Fig. 7, the calculation time of the FDM increases rapidly with the expansion of the system scale, while the calculation time of our method is almost unchanged. Based on Fig. 8, it can be observed that the numbers of variables and constraints associated with the FDM are in the millions, and they increase linearly as the scale expands. However, the complexity of our method remains unaffected by the scale of the gas network: the numbers of variables and constraints remain constant at 5,760 and 7,104, respectively. ## VI Conclusions This work presented the detailed process for evaluating the equivalent energy storage model of the gas network. The state-space model of gas networks is reduced to a multi-port energy storage model with time-varying capacity, which implicitly incorporates the dynamic gas state transformation process and all operational constraints. This equivalent model can make full use of the flexibility of gas networks with guaranteed feasibility. From the perspective of the power system, the proposed method transforms the complex state-space model of the gas network into a simplified energy storage model. The reduced equivalent model can be easily incorporated into the optimal dispatch problem of the power system, and the information privacy of gas networks is preserved. Numerical tests demonstrate the superior efficacy of the equivalent energy storage model in mitigating the computational burden associated with the coordinated optimal dispatch of electricity-gas systems, while maintaining high accuracy.
In the joint optimal dispatch problem, our method yields a remarkable reduction in computation time, decreasing it from an average of 75s with the finite difference method to a mere 0.42s, thereby achieving a significant decrease of over two orders of magnitude. Moreover, our method exhibits an average dispatch error of only 0.25%. Furthermore, in comparison to existing static models that would yield an infeasible scenario rate of 8.37%, our model ensures the feasibility of all resulting control strategies.
2301.07934
Weak log-majorization between the geometric and Wasserstein means
There exist lots of distinct geometric means on the cone of positive definite Hermitian matrices such as the metric geometric mean, spectral geometric mean, log-Euclidean mean and Wasserstein mean. In this paper, we prove the log-majorization relation on the singular values of the product of given two positive definite matrices and their (metric and spectral) geometric means. We also establish the weak log-majorization between the spectra of two-variable Wasserstein mean and spectral geometric mean. In particular, we verify with certain condition on variables that two-variable Wasserstein mean converges decreasingly to the log-Euclidean mean with respect to the weak log-majorization.
Luyining Gan, Sejong Kim
2023-01-19T07:53:01Z
http://arxiv.org/abs/2301.07934v4
# Weak log-majorization between the geometric and Wasserstein means ###### Abstract. There exist lots of distinct geometric means on the cone of positive definite Hermitian matrices such as the metric geometric mean, spectral geometric mean, log-Euclidean mean and Wasserstein mean. In this paper, we prove the log-majorization relation on the singular values of the product of given positive definite matrices and their (metric and spectral) geometric means. We also establish the weak log-majorization between the spectra of the two-variable Wasserstein mean and spectral geometric mean. In particular, for a specific range of the parameter, the two-variable Wasserstein mean converges decreasingly to the log-Euclidean mean with respect to the weak log-majorization. _2020 Mathematics Subject Classification_ 15A42, 15B48, 47A64. _Key words and phrases._ Positive definite matrix, metric geometric mean, spectral geometric mean, Wasserstein mean, weak log-majorization. ## 1. Introduction Let \(\mathbb{C}_{m\times m}\) be the space of all \(m\times m\) complex matrices, and \(\mathbb{H}_{m}\) be the real vector space of \(m\times m\) Hermitian matrices. We denote as \(\mathbb{P}_{m}\subset\mathbb{H}_{m}\) the open convex cone of all \(m\times m\) positive definite matrices. Given \(A\in\mathbb{H}_{m}\), we use \(A\geq 0\) to indicate that \(A\) is positive semi-definite. For \(A,B\in\mathbb{H}_{m}\), the Löwner order \(A\geq B\) means \(A-B\geq 0\), that is, \(A-B\) is positive semi-definite. For \(X\in\mathbb{C}_{m\times m}\), the singular values of \(X\) are the eigenvalues of \(|X|:=(X^{*}X)^{1/2}\). Denote by \(s(X)\) the \(m\)-tuple of all singular values of \(X\in\mathbb{C}_{m\times m}\) in non-increasing order: \(s_{1}(X)\geq s_{2}(X)\geq\cdots\geq s_{m}(X)\geq 0\). We also denote by \(\lambda(X)\) the \(m\)-tuple of all eigenvalues of \(X\in\mathbb{H}_{m}\) with \(\lambda_{1}(X)\geq\lambda_{2}(X)\geq\cdots\geq\lambda_{m}(X)\). Let \(A,B\in\mathbb{P}_{m}\) and \(t\in\mathbb{R}\). The _metric geometric mean_ of \(A,B\) is a differentiable curve on \(\mathbb{P}_{m}\) defined by \[A\#_{t}B=A^{1/2}(A^{-1/2}BA^{-1/2})^{t}A^{1/2}.\] This notion was first introduced by Pusz and Woronowicz [21] for \(t=1/2\), simply denoted as \(A\#B=A\#_{1/2}B\). The weighted version was introduced later by Kubo and Ando [18]. As another notion of geometric mean on \(\mathbb{P}_{m}\), the _spectral geometric mean_ of \(A,B\) is a differentiable curve defined by \[A\natural_{t}B=(A^{-1}\#B)^{t}A(A^{-1}\#B)^{t},\] which was first proposed by Fiedler and Ptak [8] for the version of \(t=1/2\). We simply denote as \(A\natural B=A\natural_{1/2}B\). The weighted version was introduced later by Lee and Lim [19], and several of its properties have recently been established [9, 16] in the setting of positive invertible operators. Note that the metric and spectral geometric means can be considered as non-commutative versions of the geometric mean of positive scalars. Zou [23] provided their relationship in terms of the log-majorization on the singular values of the metric geometric mean and the multiplication of two matrices, that is, for any \(A,B\in\mathbb{P}_{m}\), \[s(A^{1/2}(A\#B)B^{1/2})\prec_{\log}s(AB).\] It is natural to consider such a relation for the weighted version and to extend it to the spectral geometric mean.
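Both means are easy to experiment with numerically. The following sketch — assuming NumPy/SciPy and random positive definite test matrices, and intended purely as an illustration rather than part of any argument — computes \(A\#_{t}B\) and \(A\natural_{t}B\) and checks the determinant identity \(\det(A\#_{t}B)=\det(A\natural_{t}B)=(\det A)^{1-t}(\det B)^{t}\) stated in Lemmas 2.1 and 2.2 below.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power as mpow

def metric_mean(A, B, t):
    """A #_t B = A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2}."""
    Ah, Aih = mpow(A, 0.5), mpow(A, -0.5)
    return Ah @ mpow(Aih @ B @ Aih, t) @ Ah

def spectral_mean(A, B, t):
    """A natural_t B = (A^{-1} # B)^t A (A^{-1} # B)^t."""
    Ct = mpow(metric_mean(np.linalg.inv(A), B, 0.5), t)
    return Ct @ A @ Ct

rng = np.random.default_rng(1)
X, Y = rng.standard_normal((2, 4, 4))
A, B = X @ X.T + np.eye(4), Y @ Y.T + np.eye(4)  # random positive definite

t = 0.3
target = np.linalg.det(A) ** (1 - t) * np.linalg.det(B) ** t
for M in (metric_mean(A, B, t), spectral_mean(A, B, t)):
    # both means satisfy det(M) = det(A)^{1-t} det(B)^t
    print(np.linalg.det(M).real, target)
```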
The _Wasserstein distance_ of \(A,B\in\mathbb{P}_{m}\) is the metric given by \[d_{W}(A,B)=\left[\operatorname{tr}\left(\frac{A+B}{2}\right)-\operatorname{ tr}(A^{1/2}BA^{1/2})^{1/2}\right]^{1/2}.\] This coincides with the Bures distance of density matrices in quantum information theory, and can be considered as a matrix version of the Hellinger distance for probability vectors. The Wasserstein mean of \(A_{1},\dots,A_{n}\in\mathbb{P}_{m}\) is the least squares mean for the Wasserstein distance, defined by \[\Omega(\omega;A_{1},\dots,A_{n})=\operatorname*{arg\,min}_{X\in\mathbb{P}_{m} }\sum_{j=1}^{n}w_{j}d_{W}^{2}(X,A_{j}),\] where \(\omega=(w_{1},\dots,w_{n})\) is a positive probability vector. In particular, when \(n=2\), replacing \(A_{1},A_{2}\) with \(A,B\) and \((w_{1},w_{2})\) with \((1-t,t)\), the explicit formula of two-variable _Wasserstein mean_ of \(A\) and \(B\) is given by \[A\diamond_{t}B:=\Omega((1-t,t);A,B)=(1-t)^{2}A+t^{2}B+t(1-t)[A(A^{-1}\#B)+(A^{- 1}\#B)A].\] The numerical computation and applications of the Wasserstein mean from both theoretical and computational aspects have been widely studied: see [2, 5, 6, 7, 14, 22] and references therein. Recently, the following (weak) log-majorization relations among matrix means have been shown [5, 10]: \[A\#_{t}B\prec_{\log}\exp((1-t)\log A+t\log B)\prec_{w\log}A\diamond_{t}B,\] \[A\#_{t}B\prec_{\log}\exp((1-t)\log A+t\log B)\prec_{\log}A\natural_{t}B.\] So it is an interesting problem to find such a relation between the spectral geometric and Wasserstein mean. In this paper, we organize the sections as follows. In Section 2, we recall known results for the metric geometric mean, spectral geometric mean and Wasserstein mean of positive definite matrices with log-majorization and fundamental properties. We then prove the log-majorization relation on the singular values of geometric means and the product of matrices in Section 3. The main goal is to establish the weak log-majorization relation between the spectral geometric mean and Wasserstein mean in Section 4 and the monotonicity of Wasserstein mean with respect to the weak log-majorization in Section 5. ## 2. Preliminaries on log-majorization and matrix means Let us recall the definition of log-majorization. Let \(x,y\) be two \(m\)-tuples of positive real numbers. Denote by \(x^{\downarrow},y^{\downarrow}\) the non-increasing order of elements of \(x,y\) respectively. We write \(x\prec_{w\log}y\) if \(x\) is _weakly log-majorized_ by \(y\), that is, \[\prod_{i=1}^{k}x_{i}^{\downarrow}\leq\prod_{i=1}^{k}y_{i}^{\downarrow},\quad k =1,2,\ldots,m. \tag{2.1}\] We say that \(x\) is _log-majorized_ by \(y\), denoted by \(x\prec_{\log}y\), if (2.1) is true for \(k=1,2,\ldots,m-1\) and equality holds for \(k=m\). For simplicity, we write \(A\prec_{(w)\log}B\) if \(\lambda(A)\prec_{(w)\log}\lambda(B)\) for \(A,B\in\mathbb{P}_{m}\). There has been a focus on the study of the (weak) log-majorization relations between different means on \(\mathbb{P}_{m}\). 
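In computations, the definition (2.1) translates directly into a simple test. The sketch below — an illustration only — implements both \(\prec_{w\log}\) and \(\prec_{\log}\), comparing the partial products of the decreasingly ordered entries additively in log space to avoid overflow.

```python
import numpy as np

def weakly_log_majorized(x, y, tol=1e-12):
    """x prec_{wlog} y as in (2.1): partial products of the decreasingly
    ordered entries of x never exceed those of y."""
    lx = np.sort(np.log(np.asarray(x, dtype=float)))[::-1]
    ly = np.sort(np.log(np.asarray(y, dtype=float)))[::-1]
    return bool(np.all(np.cumsum(lx) <= np.cumsum(ly) + tol))

def log_majorized(x, y, tol=1e-12):
    """x prec_{log} y: weak log-majorization plus equality of the full
    products (the case k = m)."""
    return (weakly_log_majorized(x, y, tol)
            and abs(np.sum(np.log(x)) - np.sum(np.log(y))) <= tol)

print(weakly_log_majorized([2.0, 1.0, 0.5], [4.0, 1.0, 0.5]))  # True
print(log_majorized([2.0, 1.0, 0.5], [4.0, 1.0, 0.5]))         # False: products differ
```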
We know from [5] the following (weak) log-majorization relation among the Cartan (Riemannian) mean \(\Lambda\), log-Euclidean mean \(L\) and Wasserstein mean \(\Omega\) of \(A_{1},\ldots,A_{n}\in\mathbb{P}_{m}\): \[\Lambda(\omega;A_{1},\ldots,A_{n})\prec_{\log}L(\omega;A_{1},\ldots,A_{n}) \prec_{w\log}\Omega(\omega;A_{1},\ldots,A_{n}), \tag{2.2}\] where \[\Lambda(\omega;A_{1},\ldots,A_{n}):=\operatorname*{arg\,min}_{X\in\mathbb{P}_{m}} \sum_{j=1}^{n}w_{j}d_{R}^{2}(X,A_{j})\] is the Cartan mean for the Riemannian trace metric \(d_{R}(A,B)=\|\log A^{-1/2}BA^{-1/2}\|_{2}\) and \[L(\omega;A_{1},\ldots,A_{n}):=\exp\left(\sum_{j=1}^{n}w_{j}\log A_{j}\right)\] is the log-Euclidean mean. In particular, replacing \(A_{1},A_{2}\) with \(A,B\) and \((w_{1},w_{2})\) with \((1-t,t)\) in (2.2) for \(n=2\), we obtain \[A\#_{t}B\prec_{\log}\exp((1-t)\log A+t\log B)\prec_{w\log}A\diamond_{t}B.\] Furthermore, the log-majorization between the log-Euclidean mean and spectral geometric mean has been shown in [11]: \[\exp((1-t)\log A+t\log B)\prec_{\log}A\natural_{t}B.\] So it is a natural question whether there exists a weak log-majorization relation between the spectral geometric mean and the Wasserstein mean. We collect some properties of the metric geometric mean [4], spectral geometric mean [16, 19], and Wasserstein mean [14, 15], which are useful to prove our main results. **Lemma 2.1**.: _Let \(A,B,C,D\in\mathbb{P}_{m}\) and let \(s,t,u\in[0,1]\). Then the following are satisfied._ 1. \(A\#_{t}B=B\#_{1-t}A\)_._ 2. \((A\#_{t}B)^{-1}=A^{-1}\#_{t}B^{-1}\)_._ 3. \(A\#_{t}B\leq C\#_{t}D\) _whenever_ \(A\leq C\) _and_ \(B\leq D\)_._ 4. \(M(A\#_{t}B)M^{*}=(MAM^{*})\#_{t}(MBM^{*})\) _for any non-singular matrix_ \(M\)_._ 5. \((aA)\#_{t}(bB)=a^{1-t}b^{t}(A\#_{t}B)\) _for any_ \(a,b>0\)_._ 6. \((A\#_{s}B)\#_{t}(A\#_{u}B)=A\#_{(1-t)s+tu}B\)_._ 7. \(\det(A\#_{t}B)=(\det A)^{1-t}(\det B)^{t}\)_._ **Lemma 2.2**.: _Let \(A,B\in\mathbb{P}_{m}\) and let \(s,t,u\in[0,1]\). Then the following are satisfied._ 1. \(A\natural_{t}B=B\natural_{1-t}A\)_._ 2. \((A\natural_{t}B)^{-1}=A^{-1}\natural_{t}B^{-1}\)_._ 3. \(U(A\natural_{t}B)U^{*}=(UAU^{*})\natural_{t}(UBU^{*})\) _for any unitary matrix_ \(U\)_._ 4. \((aA)\natural_{t}(bB)=a^{1-t}b^{t}(A\natural_{t}B)\) _for any_ \(a,b>0\)_._ 5. \((A\natural_{s}B)\natural_{t}(A\natural_{u}B)=A\natural_{(1-t)s+tu}B\)_._ 6. \(\det(A\natural_{t}B)=(\det A)^{1-t}(\det B)^{t}\)_._ **Lemma 2.3**.: _Let \(A,B\in\mathbb{P}_{m}\) and let \(s,t,u\in[0,1]\). Then the following are satisfied._ 1. \(A\diamond_{t}B=B\diamond_{1-t}A\)_._ 2. \((A\diamond_{t}B)^{-1}=A^{-1}\diamond_{t}B^{-1}\) _if and only if_ \(A=B\)_._ 3. \(U(A\diamond_{t}B)U^{*}=(UAU^{*})\diamond_{t}(UBU^{*})\) _for any unitary matrix_ \(U\)_._ 4. \((aA)\diamond_{t}(aB)=a(A\diamond_{t}B)\) _for any_ \(a>0\)_._ 5. \((A\diamond_{s}B)\diamond_{t}(A\diamond_{u}B)=A\diamond_{(1-t)s+tu}B\)_._ 6. \(\det(A\diamond_{t}B)\geq(\det A)^{1-t}(\det B)^{t}\)_._ ## 3. Log-majorization of geometric means Note that the metric geometric mean \(A\#_{t}B\) and spectral geometric mean \(A\natural_{t}B\) are non-commutative versions of the geometric mean of positive scalars. In other words, for commuting \(A,B\in\mathbb{P}_{m}\) both the metric and the spectral geometric mean become \(A^{1-t}B^{t}\). So it is an interesting problem to find the relationship between the non-commutative and commutative versions of the geometric mean. Zou [23] proved that for any \(A,B\in\mathbb{P}_{m}\), \[s(A^{1/2}(A\#B)B^{1/2})\prec_{\log}s(AB).
\tag{3.3}\] Lemos and Soares [20] provided another proof of (3.3), and more generally asked whether there exists the following log-majorization relation for \(A,B\in\mathbb{P}_{m}\) \[s(A^{t}(A\#_{t}B)B^{1-t})\prec_{\log}s(AB),\quad t\in[0,1]. \tag{3.4}\] This is still an open problem, but Ghabries et al. [12] established the following log-majorization results related to (3.4) **Theorem 3.1**.: _Let \(A,B\in\mathbb{P}_{m}\)._ 1. \(s(A^{t}(A\#_{t}B)B^{1-t})\prec_{\log}s(A^{\frac{3}{2}}BA^{-\frac{1}{2}})\quad \text{for}\quad\frac{1}{2}\leq t\leq 1\)_,_ 2. \(s(A^{t}(A\#_{t}B)B^{1-t})\prec_{\log}s(B^{\frac{3}{2}}AB^{-\frac{1}{2}})\quad \text{for}\quad 0\leq t\leq\frac{1}{2}\)_._ We prove alternative versions of (3.4) for the metric geometric and spectral geometric means. **Theorem 3.2**.: _Let \(A,B\in\mathbb{P}_{m}\). For any \(t\in[0,1]\),_ \[s(A^{t-\frac{1}{2}}(A\#_{t}B)B^{\frac{1}{2}-t})\prec_{\log}s(A^{1/2}B^{1/2}). \tag{3.5}\] Proof.: Note that (3.5) is equivalent to \[\lambda(A^{t-\frac{1}{2}}(A\#_{t}B)B^{1-2t}(A\#_{t}B)A^{t-\frac{1}{2}})^{1/2} \prec_{\log}\lambda(A^{1/2}BA^{1/2})^{1/2}. \tag{3.6}\] Since \((A^{t-\frac{1}{2}}(A\#_{t}B)B^{1-2t}(A\#_{t}B)A^{t-\frac{1}{2}})^{1/2}\) and \((A^{1/2}BA^{1/2})^{1/2}\) are both homogeneous from Lemma 2.1 (5), it is enough to show that \[A^{1/2}BA^{1/2}\leq I\qquad\text{ implies }\qquad A^{t-\frac{1}{2}}(A\#_{t}B)B^{1- 2t}(A\#_{t}B)A^{t-\frac{1}{2}}\leq I.\] Step 1. We first prove (3.6) for \(t\in[0,1/2]\). Assume that \(A^{1/2}BA^{1/2}\leq I\). Then \(B\leq A^{-1}\), and \(B^{1-2t}\leq A^{2t-1}\) by the Loewner-Heinz inequality since \(2t\in[0,1]\). \[A^{t-\frac{1}{2}}(A\#_{t}B)B^{1-2t}(A\#_{t}B)A^{t-\frac{1}{2}} \leq A^{t-\frac{1}{2}}(A\#_{t}B)A^{2t-1}(A\#_{t}B)A^{t-\frac{1}{2}}\] \[=\left(A^{t-\frac{1}{2}}(A\#_{t}B)A^{t-\frac{1}{2}}\right)^{2}.\] Since \(B\leq A^{-1}\), we obtain from Lemma 2.1 (3) \[A^{t-\frac{1}{2}}(A\#_{t}B)A^{t-\frac{1}{2}}\leq A^{t-\frac{1}{2}}A^{1-2t}A^{ t-\frac{1}{2}}\leq I.\] Therefore, \(A^{t-\frac{1}{2}}(A\#_{t}B)B^{1-2t}(A\#_{t}B)A^{t-\frac{1}{2}}\leq\left(A^{t- \frac{1}{2}}(A\#_{t}B)A^{t-\frac{1}{2}}\right)^{2}\leq I\). Moreover, by Lemma 2.1 (7) \[\det\left[A^{t-\frac{1}{2}}(A\#_{t}B)B^{1-2t}(A\#_{t}B)A^{t-\frac{1}{2}} \right]=\det(AB)=\det(A^{1/2}BA^{1/2}),\] and thus, (3.6) holds for \(t\in[0,1/2]\). Step 2. Let \(t\in[1/2,1]\). Since \(s(X)=s(X^{*})\) for any matrix \(X\in\mathbb{C}_{m\times m}\), we have from Lemma 2.1 (1) and Step 1 \[s(A^{t-\frac{1}{2}}(A\#_{t}B)B^{\frac{1}{2}-t})=s(B^{\frac{1}{2}- t}(B\#_{1-t}A)A^{t-\frac{1}{2}}) =s(B^{(1-t)-\frac{1}{2}}(B\#_{1-t}A)A^{\frac{1}{2}-(1-t)})\] \[\prec_{\log}s(B^{1/2}A^{1/2})=s(A^{1/2}B^{1/2}),\] which completes the proof. **Lemma 3.3**.: _Let \(A,B\in\mathbb{P}_{m}\). If \(A\leq I\) and \(B\leq I\), then \(A\natural B\leq I\)._ Proof.: Let \(A\leq I\) and \(B\leq I\). Since the square of \(A\natural B\) is similar to \(AB\) by [8], \[\lambda_{1}(A\natural B)=\lambda_{1}^{1/2}(AB)\leq\lambda_{1}^{1/2}(A)\lambda_{1 }^{1/2}(B)\leq 1,\] where the first inequality follows from \(\lambda_{1}(AB)\leq\lambda_{1}(A)\lambda_{1}(B)\) for \(A,B\in\mathbb{P}_{m}\). This implies that \(A\natural B\leq I\). **Theorem 3.4**.: _Let \(A,B\in\mathbb{P}_{m}\) with \(A\geq I\). Let \(1/2\leq t\leq 1\) and \(0\leq u\leq 1/2\). Then_ \[s(A^{-u}(A\natural_{t}B)B^{u})\prec_{w\log}s(A^{1/2}B^{1/2}). 
\tag{3.7}\] _In addition, if \(\det A=\det B\) or \(t=1/2,u=0\) then_ \[s(A^{-u}(A\natural_{t}B)B^{u})\prec_{\log}s(A^{1/2}B^{1/2}).\] Proof.: Note that (3.7) is equivalent to \[\lambda(A^{-u}(A\natural_{t}B)B^{2u}(A\natural_{t}B)A^{-u})^{1/2}\prec_{\log} \lambda(A^{1/2}BA^{1/2})^{1/2}.\] Since \((A^{-u}(A\natural_{t}B)B^{2u}(A\natural_{t}B)A^{-u})^{1/2}\) and \((A^{1/2}BA^{1/2})^{1/2}\) are both homogeneous from Lemma 2.2 (4), it is enough to show that \(A^{1/2}BA^{1/2}\leq I\) implies \(A^{-u}(A\natural_{t}B)B^{2u}(A\natural_{t}B)A^{-u}\leq I\). Let \(A,B\in\mathbb{P}_{m}\) with \(A\geq I\). Assume that \(A^{1/2}BA^{1/2}\leq I\). Then \(B\leq A^{-1}\), and \(B^{2u}\leq A^{-2u}\) by the Loewner-Heinz inequality because \(2u\in[0,1]\). So \[A^{-u}(A\natural_{t}B)B^{2u}(A\natural_{t}B)A^{-u}\leq A^{-u}(A\natural_{t}B) A^{-2u}(A\natural_{t}B)A^{-u}=(A^{-u}(A\natural_{t}B)A^{-u})^{2}.\] Set \(T:=\{t\in[\frac{1}{2},1]:A\natural_{t}B\leq I\}\). Since \(B\leq A^{-1}\) is equivalent to \(A\natural_{1/2}B\leq I\) from [16, Theorem 5], we have \(1/2\in T\), and \(1\in T\) because \(A\natural_{1}B=B\leq A^{-1}\leq I\). Assume that \(s,t\in T\). Then by Lemma 2.2 (5) and Lemma 3.3 \[A\natural_{\frac{s+t}{2}}B=(A\natural_{s}B)\natural_{1/2}(A\natural_{t}B)\leq I,\] so \(\frac{s+t}{2}\in T\). This yields that \(T\) contains all dyadic rational numbers in \([1/2,1]\), and by the density of dyadic rational numbers and the continuity of spectral geometric mean \(T=[1/2,1]\). Since \(A\natural_{t}B\leq I\) for \(1/2\leq t\leq 1\), we obtain \[A^{-u}(A\natural_{t}B)A^{-u}\leq A^{-2u}\leq I.\] Therefore, \(A^{-u}(A\natural_{t}B)B^{2u}(A\natural_{t}B)A^{-u}\leq(A^{-u}(A\natural_{t}B) A^{-u})^{2}\leq I\). Moreover, in order that \[\det(A^{-u}(A\natural_{t}B)B^{2u}(A\natural_{t}B)A^{-u})^{1/2}=(\det A)^{1-t- u}(\det B)^{t+u}=(\det A\det B)^{1/2},\] we have that \(\det(A^{-1}B)^{t+u-1/2}=1\). So \(\det A=\det B\), otherwise \(t+u=1/2\). Since \(1/2\leq t\leq 1\) and \(0\leq u\leq 1/2\), we obtain \(t=1/2\) and \(u=0\) if \(\det A\neq\det B\). **Corollary 3.5**.: _Let \(A,B\in\mathbb{P}_{m}\) with \(B\geq I\). Let \(0\leq t\leq 1/2\) and \(-1/2\leq u\leq 0\). Then_ \[s(A^{-u}(A\natural_{t}B)B^{u})\prec_{w\log}s(A^{1/2}B^{1/2}).\] Proof.: By Lemma 2.2 (1) and Theorem 3.4 with \(B\geq I\), \(1/2\leq 1-t\leq 1\) and \(0\leq-u\leq 1/2\) we obtain \[s(A^{-u}(A\natural_{t}B)B^{u}) =s(B^{u}(B\natural_{1-t}A)A^{-u})\] \[\prec_{w\log}s(B^{1/2}A^{1/2})=s(A^{1/2}B^{1/2}).\qed\] ## 4. Weakly log-majorization between two means Now we construct the weak log-majorization relation between the spectral geometric mean and the Wasserstein mean. There are several different expressions of the two-variable Wasserstein mean of positive invertible operators: see [15]. We use one of equivalent expressions as follows: \[A\diamond_{t}B=A^{-1/2}\left[(1-t)A+t(A^{1/2}BA^{1/2})^{1/2}\right]^{2}A^{-1/2}, \tag{4.8}\] where \(A,B\in\mathbb{P}_{m}\) and \(t\in[0,1]\). **Theorem 4.1**.: _Let \(A,B\in\mathbb{P}_{m}\) and \(t\in[0,1]\). Then_ \[A\natural_{t}B\prec_{w\log}A\diamond_{t}B. \tag{4.9}\] Proof.: For \(t=0\) and \(t=1\) it is obvious. We now consider the case \(t\in(0,\frac{1}{2}]\). Let \(C:=A^{-1}\#B\). Then taking congruence transformation by \(A^{1/2}\) yields that \(A^{1/2}CA^{1/2}=(A^{1/2}BA^{1/2})^{1/2}\). To prove (4.9), it suffices to show that if \(A\diamond_{t}B\leq I\), then \(A\natural_{t}B\leq I\) because both means are homogeneous from Lemma 2.2 (4) and Lemma 2.3 (4). Suppose that \(A\diamond_{t}B\leq I\). 
Then by (4.8) and the monotonicity of square root map, we obtain the following \[\left[(1-t)A+t(A^{1/2}BA^{1/2})^{1/2}\right]^{2} \leq A\] \[(1-t)A+t(A^{1/2}BA^{1/2})^{1/2} \leq A^{1/2}\] \[(1-t)A+tA^{1/2}CA^{1/2} \leq A^{1/2}\] \[(1-t)I+tC \leq A^{-1/2}\] \[C \leq \frac{1}{t}A^{-1/2}+\left(1-\frac{1}{t}\right)I.\] Then we consider the largest eigenvalue of \(A\natural_{t}B\). Since \(2t\in(0,1]\), it can be computed as \[\lambda_{1}(A\natural_{t}B)=\lambda_{1}(C^{t}AC^{t})=\lambda_{1}(A^{1/2}C^{2t }A^{1/2})\leq\lambda_{1}\left(\left(\frac{1}{t}A^{-1/2}+\left(1-\frac{1}{t} \right)I\right)^{2t}A\right). \tag{4.10}\] Since \(A\in\mathbb{P}_{m}\), there exist a unitary matrix \(U\) and a diagonal matrix \(D=\mbox{diag}\left(\lambda_{1},\ldots,\lambda_{m}\right)\) such that \(A=UDU^{*}\). Then \[\left(\frac{1}{t}A^{-1/2}+\left(1-\frac{1}{t}\right)I\right)^{2t}A\] \[= U\left(\frac{1}{t}D^{-1/2}+\left(1-\frac{1}{t}\right)I\right)^{ 2t}DU^{*}\] \[= U\left[\begin{matrix}\left(\frac{1}{t}\lambda_{1}^{-1/2}+\left( 1-\frac{1}{t}\right)\right)^{2t}\lambda_{1}&&\\ &\ddots&\\ &&\left(\frac{1}{t}\lambda_{m}^{-1/2}+\left(1-\frac{1}{t}\right)\right)^{2t} \lambda_{m}\end{matrix}\right]U^{*}.\] We claim that \(\left(\frac{1}{t}\lambda^{-1/2}+\left(1-\frac{1}{t}\right)\right)^{2t}\lambda\leq 1\) for all positive \(\lambda\). It is equivalent to prove that for all positive \(\lambda\) \[\frac{1}{t}\lambda^{\frac{1-t}{2t}}+\left(1-\frac{1}{t}\right)\lambda^{\frac{ 1}{2t}}\leq 1. \tag{4.11}\] Set \(f(\lambda):=\frac{1}{t}\lambda^{\frac{1-t}{2t}}+\left(1-\frac{1}{t}\right) \lambda^{\frac{1}{2t}}-1\). Let \(a=\frac{1}{2t}\). Then \(a\in[1,\infty)\) and \[f(\lambda)=2a\lambda^{a-\frac{1}{2}}+(1-2a)\lambda^{a}-1.\] Since \(f^{\prime}(\lambda)=2a(a-\frac{1}{2})\lambda^{a-\frac{3}{2}}+a(1-2a)\lambda^{a-1}= a(2a-1)\lambda^{a-1}(\lambda^{-1/2}-1)\), \(f(\lambda)\) attains the maximum value at \(\lambda=1\) for \(\lambda>0\). So (4.11) holds since \(f(\lambda)\leq f(1)=0\) for \(\lambda>0\). Thus by (4.10), \(\lambda_{1}(A\natural_{t}B)\leq 1\), that is, \(A\natural_{t}B\leq I\). For the case \(t\in[\frac{1}{2},1)\), note that \(1-t\in(0,\frac{1}{2}]\). Thus by Lemma 2.2 and Lemma 2.3 \[A\natural_{t}B=B\natural_{1-t}A\prec_{w\log}B\diamond_{1-t}A=A\diamond_{t}B.\] This completes the proof. **Remark 4.2**.: By Theorem 4.1 together with known results appeared in Section 2 we have the following relation among matrix means: \[A\#_{t}B\prec_{\log}\exp((1-t)\log A+t\log B)\prec_{\log}A\natural_{t}B\prec_{ w\log}A\diamond_{t}B.\] Furthermore, (4.9) cannot be a log-majorization because of the determinantal inequality of the Wasserstein mean in [14, Proposition 2.3]: \[\det\Omega(\omega;A_{1},\ldots,A_{n})\geq\prod_{j=1}^{n}(\det A_{j})^{w_{j}},\] and equality holds if and only if \(A_{1}=\cdots=A_{n}\), where \(A_{j}\in\mathbb{P}_{m}\) for all \(j=1,\ldots,n\). Also see Lemma 2.2 and Lemma 2.3 (5). ## 5. Weak log-majorization monotonicity of Wasserstein mean Let \(A,B\in\mathbb{P}_{m}\) and \(t\in[0,1]\). Here, the notations \(\nearrow_{\prec_{\log}}\) and \(\searrow_{\prec_{\log}}\) mean to converge increasingly and decreasingly with respect to \(\prec_{\log}\), respectively. 
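Before examining these limits, note that the relations established so far (Remark 4.2) are straightforward to spot-check numerically. The sketch below — purely an illustration on random positive definite matrices, assuming SciPy is available — verifies the chain \(A\#_{t}B\prec_{\log}\exp((1-t)\log A+t\log B)\prec_{\log}A\natural_{t}B\prec_{w\log}A\diamond_{t}B\) eigenvalue-wise.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power as mpow, logm, expm

def sharp(A, B, t):                       # metric geometric mean A #_t B
    Ah, Aih = mpow(A, 0.5), mpow(A, -0.5)
    return Ah @ mpow(Aih @ B @ Aih, t) @ Ah

def natural(A, B, t):                     # spectral geometric mean A natural_t B
    Ct = mpow(sharp(np.linalg.inv(A), B, 0.5), t)
    return Ct @ A @ Ct

def diamond(A, B, t):                     # Wasserstein mean via (4.8)
    Ah, Aih = mpow(A, 0.5), mpow(A, -0.5)
    M = (1 - t) * A + t * mpow(Ah @ B @ Ah, 0.5)
    return Aih @ M @ M @ Aih

def wlog_leq(X, Y, tol=1e-9):             # lambda(X) prec_{wlog} lambda(Y)?
    def dec_log_eigs(M):
        M = np.real(M)                    # discard numerical imaginary residue
        return np.sort(np.log(np.linalg.eigvalsh((M + M.T) / 2)))[::-1]
    return bool(np.all(np.cumsum(dec_log_eigs(X)) <= np.cumsum(dec_log_eigs(Y)) + tol))

rng = np.random.default_rng(2)
X, Y = rng.standard_normal((2, 5, 5))
A, B = X @ X.T + np.eye(5), Y @ Y.T + np.eye(5)
t = 0.4
LE = np.real(expm((1 - t) * logm(A) + t * logm(B)))   # log-Euclidean mean
print(wlog_leq(sharp(A, B, t), LE),                   # A #_t B  vs  LE
      wlog_leq(LE, natural(A, B, t)),                 # LE  vs  A natural_t B
      wlog_leq(natural(A, B, t), diamond(A, B, t)))   # Theorem 4.1
```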
Hiai and Petz [13] showed that the limit of the metric geometric mean for \(A,B\in\mathbb{P}_{m}\) is the log-Euclidean mean, when \(p\) goes to \(0\) \[\lim_{p\to 0}(A^{p}\#_{t}B^{p})^{1/p}=\exp((1-t)\log A+t\log B).\] Ando and Hiai [3] gave the monotonicity of the metric geometric mean with respect to the log-majorization relation as follows: \[(A^{p}\#_{t}B^{p})^{1/p}\prec_{\log}(A^{q}\#_{t}B^{q})^{1/q},\quad 0<q\leq p.\] It is straightforward to verify \[(A^{p}\#_{t}B^{p})^{1/p}\nearrow_{\prec_{\log}}\exp((1-t)\log A+t\log B)\quad \text{as}\quad p\searrow 0.\] Ahn, Kim and Lim [1] provided that the limit of the spectral geometric mean for \(A,B\in\mathbb{P}_{m}\) is the log-Euclidean mean, when \(p\) goes to \(0\) \[\lim_{p\to 0}(A^{p}\natural_{t}B^{p})^{1/p}=\exp((1-t)\log A+t\log B).\] Gan and Tam [11] proved the monotonicity of the spectral geometric mean with respect to the log-majorization: \[(A^{p}\natural_{t}B^{p})^{1/p}\searrow_{\prec_{\log}}\exp((1-t)\log A+t\log B )\quad\text{as}\quad p\searrow 0.\] So we consider whether there exists such a relation on the Wasserstein mean. We first give some results in support of our main result of this section, Theorem 5.4. **Lemma 5.1**.: _Let \(A,B\in\mathbb{P}_{m}\). If \(B\leq I\) then \(AB+BA\leq 2A\)._ Proof.: Note that \(AB+BA\leq 2A\) is equivalent to \[\frac{A^{1/2}BA^{-1/2}+A^{-1/2}BA^{1/2}}{2}\leq I.\] So we claim that \[\frac{A^{1/2}BA^{-1/2}+A^{-1/2}BA^{1/2}}{2}\prec_{w\log}B.\] Assume that \(B\leq I\). Since \(A^{1/2}BA^{-1/2}+A^{-1/2}BA^{1/2}\in\mathbb{H}_{m}\) has positive eigenvalues and \(s_{1}(AB)=s_{1}(BA)\), \[\lambda_{1}\left(\frac{A^{1/2}BA^{-1/2}+A^{-1/2}BA^{1/2}}{2}\right) =s_{1}\left(\frac{A^{1/2}BA^{-1/2}+A^{-1/2}BA^{1/2}}{2}\right)\] \[\leq\frac{s_{1}(A^{1/2}BA^{-1/2})+s_{1}(A^{-1/2}BA^{1/2})}{2}\] \[=s_{1}(B)=\lambda_{1}(B)\leq 1.\] This means that \(\frac{A^{1/2}BA^{-1/2}+A^{-1/2}BA^{1/2}}{2}\leq I\), which completes the proof. **Lemma 5.2**.: _Let \(A,B\in\mathbb{P}_{m}\). Then \(A\diamond_{t}B\leq I\) for any \(t\in(0,1)\) implies the following_ * \(A\leq B^{-1}\)_,_ * \(A\leq I\)_._ Proof.: Assume that \(A\diamond_{t}B\leq I\) for any \(t\in(0,1)\). For (i), by (4.9) \(A\natural_{t}B\leq I\), which is equivalent to \(A\leq B^{-1}\). See [15, Remark 3.5] and its references. For (ii), by the definition of the Wasserstein mean (4.8), \[A^{-1/2}[(1-t)A+t(A^{1/2}BA^{1/2})^{1/2}]^{2}A^{-1/2}\leq I.\] Taking congruence transformation by \(A^{1/2}\) and using the operator monotonicity of square root map yield \[(1-t)A+t(A^{1/2}BA^{1/2})^{1/2}\leq A^{1/2}.\] Taking congruence transformation by \(A^{-1/2}\) implies \[(1-t)I+tA^{-1}\#B\leq A^{-1/2}.\] By (i) \(A^{-1}\geq B\), and thus, by the weighted arithmetic-geometric mean inequality \[A^{-1/2}\geq(1-t)I+tA^{-1}\#B\geq(1-t)I+tB\geq I\#_{t}B=B^{t}.\] This implies that \(A^{-1/2}\geq B^{t_{k}}\) holds for a sequence \(t_{k}\in(0,1)\) converging to \(0\), and hence, \(A^{-1/2}\geq I\). That is, \(A\leq I\). **Theorem 5.3**.: _Let \(A,B\in\mathbb{P}_{m}\). Then_ \[\left(A\diamond_{t}B\right)^{2}\prec_{w\log}A^{2}\diamond_{t}B^{2} \tag{5.12}\] _for any \(t\in\left(0,\frac{\sqrt{\alpha_{B}}}{\sqrt{\alpha_{B}}+\sqrt{\beta_{B}}} \right]\cup\left[\frac{\sqrt{\beta_{A}}}{\sqrt{\alpha_{A}}+\sqrt{\beta_{A}}},1\right)\), where \(\alpha_{A}=\lambda_{m}(A),\beta_{A}=\lambda_{1}(A)\) and \(\alpha_{B}=\lambda_{m}(B),\beta_{B}=\lambda_{1}(B)\)._ Proof.: It is enough to show that \(A^{2}\diamond_{t}B^{2}\leq I\) implies \(A\diamond_{t}B\leq I\). 
Assume that \(A^{2}\diamond_{t}B^{2}\leq I\). Step 1. For the case \(\beta_{B}\leq 1\), equivalently \(B\leq I\), we easily see that \(A\diamond_{t}B\leq I\) from Lemma 5.2 (ii) and [17, Lemma 2.4]. Step 2. We prove that \(A\diamond_{t}B\leq I\) when \(B>I\). By Lemma 5.2 (i) \(A^{2}\leq B^{-2}\), and \(A\leq B^{-1}\) by the operator monotonicity of the square root map. Note that \[A\diamond_{t}B=B\diamond_{1-t}A=B^{-1/2}[tB+(1-t)(B^{1/2}AB^{1/2})^{1/2}]^{2}B^{-1/2},\] and \[\begin{array}{rl}[tB+(1-t)(B^{1/2}AB^{1/2})^{1/2}]^{2}&=t^{2}B^{2}+(1-t)^{2}B^{1/2}AB^{1/2}+t(1-t)[B(B^{1/2}AB^{1/2})^{1/2}+(B^{1/2}AB^{1/2})^{1/2}B]\\ &\leq t^{2}B^{2}+(1-t)^{2}I+2t(1-t)B=[tB+(1-t)I]^{2}.\end{array}\] The inequality holds from Lemma 5.1, since \(A\leq B^{-1}\) and \((B^{1/2}AB^{1/2})^{1/2}\leq I\). Then \[A\diamond_{t}B\leq B^{-1/2}[tB+(1-t)I]^{2}B^{-1/2}=t^{2}B+(1-t)^{2}B^{-1}+2t(1-t)I.\] All eigenvalues of \(t^{2}B+(1-t)^{2}B^{-1}+2t(1-t)I\) are of the form \[f(t):=\lambda t^{2}+\lambda^{-1}(1-t)^{2}+2t(1-t),\] where \(\lambda\in(1,\beta_{B}]\) denotes an arbitrary eigenvalue of \(B\). One can see that \(f\) is a quadratic function with the critical point \(t_{0}=\dfrac{1}{1-\lambda}\), so \(f(t)\leq 1\) for \[-\dfrac{1}{\sqrt{\lambda}-1}\leq t\leq\dfrac{1}{\sqrt{\lambda}+1}.\] Since \(\lambda\leq\beta_{B}\), we have \(\dfrac{1}{\sqrt{\beta_{B}}+1}\leq\dfrac{1}{\sqrt{\lambda}+1}\). Thus, \(A\diamond_{t}B\leq I\) for \(t\in\left(0,\dfrac{1}{1+\sqrt{\beta_{B}}}\right]\). Step 3. For the case \(\alpha_{B}\leq 1\leq\beta_{B}\), the preceding arguments in Step 2 yield that \[\left(\dfrac{1}{\alpha_{B}}A\right)\diamond_{t}\left(\dfrac{1}{\alpha_{B}}B\right)\leq I\] for \(t\in\left(0,\dfrac{1}{1+\sqrt{\beta_{B}/\alpha_{B}}}\right]\), because \(\dfrac{1}{\alpha_{B}}B\geq I\) and the maximum eigenvalue of \(\dfrac{1}{\alpha_{B}}B\) is \(\dfrac{\beta_{B}}{\alpha_{B}}\). By the homogeneous property of the two-variable Wasserstein mean, \[A\diamond_{t}B\leq\alpha_{B}I\leq I\] for \(t\in\left(0,\dfrac{\sqrt{\alpha_{B}}}{\sqrt{\alpha_{B}}+\sqrt{\beta_{B}}}\right]\). So (5.12) holds for \(t\in\left(0,\dfrac{\sqrt{\alpha_{B}}}{\sqrt{\alpha_{B}}+\sqrt{\beta_{B}}}\right]\). Step 4. For \(t\in\left[\dfrac{\sqrt{\beta_{A}}}{\sqrt{\alpha_{A}}+\sqrt{\beta_{A}}},1\right)\), we have by Step 3 and the symmetric property of the two-variable Wasserstein mean in Lemma 2.3 (1) \[(A\diamond_{t}B)^{2}=(B\diamond_{1-t}A)^{2}\prec_{w\log}B^{2}\diamond_{1-t}A^{2}=A^{2}\diamond_{t}B^{2},\] because \(1-t\in\left(0,\dfrac{\sqrt{\alpha_{A}}}{\sqrt{\alpha_{A}}+\sqrt{\beta_{A}}}\right]\). These four steps complete the proof. Now we are ready to prove that the spectrum of the Wasserstein mean decreases to the spectrum of the log-Euclidean mean with respect to weak log-majorization for a specific range of \(t\). **Theorem 5.4**.: _Let \(A,B\in\mathbb{P}_{m}\).
Then_ \[(A^{p}\diamond_{t}B^{p})^{1/p}\searrow_{\prec_{w\log}}\exp((1-t)\log A+t\log B)\quad\text{as}\quad p\searrow 0,\] _for any \(t\in\left(0,\frac{\sqrt[4]{\alpha_{B}}}{\sqrt[4]{\alpha_{B}}+\sqrt[4]{\beta_{B}}}\right]\cup\left[\frac{\sqrt[4]{\beta_{A}}}{\sqrt[4]{\alpha_{A}}+\sqrt[4]{\beta_{A}}},1\right)\), where \(\alpha_{A}=\lambda_{m}(A),\beta_{A}=\lambda_{1}(A)\) and \(\alpha_{B}=\lambda_{m}(B),\beta_{B}=\lambda_{1}(B)\)._ Proof.: Replacing \(A,B\) by \(A^{1/2},B^{1/2}\) in Theorem 5.3 yields \[\left(A^{1/2}\diamond_{t}B^{1/2}\right)^{2}\prec_{w\log}A\diamond_{t}B\] for \(t\in\left(0,\frac{\sqrt[4]{\alpha_{B}}}{\sqrt[4]{\alpha_{B}}+\sqrt[4]{\beta_{B}}}\right]\cup\left[\frac{\sqrt[4]{\beta_{A}}}{\sqrt[4]{\alpha_{A}}+\sqrt[4]{\beta_{A}}},1\right)\), since \(\lambda_{i}(A^{1/2})=\lambda_{i}^{1/2}(A)\) for all \(i\). Replacing \(A,B\) by \(A^{1/2},B^{1/2}\) again in the preceding inequality implies \[\left(A^{1/2^{2}}\diamond_{t}B^{1/2^{2}}\right)^{2^{2}}\prec_{w\log}\left(A^{1/2}\diamond_{t}B^{1/2}\right)^{2}\] for \(t\in\left(0,\frac{\sqrt[8]{\alpha_{B}}}{\sqrt[8]{\alpha_{B}}+\sqrt[8]{\beta_{B}}}\right]\cup\left[\frac{\sqrt[8]{\beta_{A}}}{\sqrt[8]{\alpha_{A}}+\sqrt[8]{\beta_{A}}},1\right)\), because \(A\prec_{w\log}B\) implies \(A^{2}\prec_{w\log}B^{2}\). Then \[\left(A^{1/2^{2}}\diamond_{t}B^{1/2^{2}}\right)^{2^{2}}\prec_{w\log}A\diamond_{t}B\] for \(t\in\left(0,\frac{\sqrt[4]{\alpha_{B}}}{\sqrt[4]{\alpha_{B}}+\sqrt[4]{\beta_{B}}}\right]\cup\left[\frac{\sqrt[4]{\beta_{A}}}{\sqrt[4]{\alpha_{A}}+\sqrt[4]{\beta_{A}}},1\right)\), since \[\frac{\sqrt[4]{\alpha_{B}}}{\sqrt[4]{\alpha_{B}}+\sqrt[4]{\beta_{B}}}\leq\frac{\sqrt[8]{\alpha_{B}}}{\sqrt[8]{\alpha_{B}}+\sqrt[8]{\beta_{B}}}\ \text{ and }\ \frac{\sqrt[4]{\beta_{A}}}{\sqrt[4]{\alpha_{A}}+\sqrt[4]{\beta_{A}}}\geq\frac{\sqrt[8]{\beta_{A}}}{\sqrt[8]{\alpha_{A}}+\sqrt[8]{\beta_{A}}}.\] By induction we obtain \[\left(A^{1/2^{k}}\diamond_{t}B^{1/2^{k}}\right)^{2^{k}}\prec_{w\log}A\diamond_{t}B.\] Since \(\left(A^{1/2^{k}}\diamond_{t}B^{1/2^{k}}\right)^{2^{k}}\to\exp((1-t)\log A+t\log B)\) as \(k\to\infty\) by [14, Corollary 4.4], we conclude the desired property. Note that the interval \[\left(0,\frac{\sqrt[4]{\alpha_{B}}}{\sqrt[4]{\alpha_{B}}+\sqrt[4]{\beta_{B}}}\right]\cup\left[\frac{\sqrt[4]{\beta_{A}}}{\sqrt[4]{\alpha_{A}}+\sqrt[4]{\beta_{A}}},1\right)\] for \(\alpha_{A}=\lambda_{m}(A),\beta_{A}=\lambda_{1}(A)\) and \(\alpha_{B}=\lambda_{m}(B),\beta_{B}=\lambda_{1}(B)\), which appeared in Theorem 5.4, does not contain \(1/2\). So it is natural to ask whether Theorem 5.4 holds for \(t\in(0,1)\), as for the metric geometric mean and spectral geometric mean. We conclude this paper with the following conjecture. **Conjecture 5.5**.: _Let \(A,B\in\mathbb{P}_{m}\) and \(t\in(0,1)\). Then_ \[\left(A^{p}\diamond_{t}B^{p}\right)^{1/p}\searrow_{\prec_{w\log}}\exp((1-t)\log A+t\log B)\quad\text{as}\quad p\searrow 0.\] ### Acknowledgement The work of L. Gan was partially supported by the AMS-Simons Travel Grant 2022-2024. The work of S. Kim was supported by the National Research Foundation of Korea grant funded by the Korea government (MSIT) (No. NRF-2022R1A2C4001306).
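Although the conjecture remains open, it can at least be explored numerically. The sketch below — a numerical experiment on random positive definite matrices, proving nothing — evaluates \(\left(A^{p}\diamond_{t}B^{p}\right)^{1/p}\) along \(p=1,1/2,1/4,\ldots\) at \(t=1/2\), a value not covered by Theorem 5.4 in general, and tests the predicted weak log-majorization between consecutive terms.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power as mpow

def diamond(A, B, t):                     # two-variable Wasserstein mean (4.8)
    Ah, Aih = mpow(A, 0.5), mpow(A, -0.5)
    M = (1 - t) * A + t * mpow(Ah @ B @ Ah, 0.5)
    return Aih @ M @ M @ Aih

def wlog_leq(X, Y, tol=1e-9):
    def dec_log_eigs(M):
        M = np.real(M)                    # discard numerical imaginary residue
        return np.sort(np.log(np.linalg.eigvalsh((M + M.T) / 2)))[::-1]
    return bool(np.all(np.cumsum(dec_log_eigs(X)) <= np.cumsum(dec_log_eigs(Y)) + tol))

rng = np.random.default_rng(3)
X, Y = rng.standard_normal((2, 4, 4))
A, B = X @ X.T + np.eye(4), Y @ Y.T + np.eye(4)
t = 0.5

terms = [mpow(diamond(mpow(A, p), mpow(B, p), t), 1.0 / p)
         for p in (1.0, 0.5, 0.25, 0.125)]
# Conjecture 5.5 predicts each term weakly log-majorizes the next one.
print([wlog_leq(later, earlier) for earlier, later in zip(terms, terms[1:])])
```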
2305.14501
Forward and hybrid path-integral methods in photoelectron holography: sub-barrier corrections, initial sampling and momentum mapping
We construct two strong-field path integral methods with full Coulomb distortion, in which the quantum pathways are mimicked by interfering electron orbits: the rate-based CQSFA (R-CQSFA) and the hybrid forward-boundary CQSFA (H-CQSFA). The methods have the same starting point as the standard Coulomb quantum-orbit strong-field approximation (CQSFA), but their implementation does not require pre-knowledge of the orbits' dynamics. These methods are applied to ultrafast photoelectron holography. In the rate-based method, electron orbits are forward propagated and we derive a non-adiabatic ionization rate from the CQSFA, which includes sub-barrier Coulomb corrections and is used to weight the initial orbit ensemble. In the H-CQSFA, the initial ensemble provides initial guesses for a subsequent boundary problem and serves to include or exclude specific momentum regions, but the ionization probabilities associated with individual trajectories are computed from sub-barrier complex integrals. We perform comparisons with the standard CQSFA and \textit{ab-initio} methods, which show that the standard, purely boundary-type implementation of the CQSFA leaves out whole sets of trajectories. We show that the sub-barrier Coulomb corrections broaden the resulting photoelectron momentum distributions (PMDs) and improve the agreement of the R-CQSFA with the H-CQSFA and other approaches. We probe different initial sampling distributions, uniform and otherwise, and their influence on the PMDs. We find that initial biased sampling emphasizes rescattering ridges and interference patterns in high-energy ranges, while an initial uniform sampling guarantees accurate modeling of the holographic patterns near the ionization threshold or polarization axis. Our results are explained using the initial to final momentum mapping for different types of interfering trajectories.
L. Cruz Rodriguez, T. Rook, B. B. Augstein, A. S. Maxwell, C. Figueira de Morisson Faria
2023-05-23T20:11:07Z
http://arxiv.org/abs/2305.14501v2
# Forward and hybrid path-integral methods in photoelectron holography: sub-barrier corrections, initial sampling and momentum mapping ###### Abstract We construct a strong-field path integral method with full Coulomb distortion, in which electron orbits are forward propagated, and contrast the results with those from a hybrid forward-boundary method. These methods are applied to ultrafast photoelectron holography. In the forward method, we derive a non-adiabatic ionization rate from the Coulomb quantum-orbit strong-field approximation (CQSFA), which includes sub-barrier Coulomb corrections and is used to weight the initial orbit ensemble. In the hybrid forward-boundary CQSFA (H-CQSFA), we probe different initial sampling distributions, uniform and otherwise, and their influence on photoelectron momentum distributions (PMDs). We show that the sub-barrier Coulomb corrections broaden the resulting PMDs and improve the agreement of the rate-based method with the H-CQSFA and _ab-initio_ methods. Furthermore, in the hybrid approach, initial biased sampling emphasizes rescattering ridges and interferences in high-energy ranges, while an initial uniform sampling guarantees accurate modeling of the holographic patterns near the ionization threshold or polarization axis. Our results are explained using the initial to final momentum mapping for different types of interfering trajectories. ## I Introduction Ultrafast photoelectron holography is an important application of strong-field ionization and brings together phase information, high currents and subfemtosecond resolution [1; 2]. For those reasons, its potential for attosecond imaging of matter has been widely explored over the past decade. Examples of holographic patterns are the fan-shaped fringes that form near the ionization threshold [3; 4; 5], the spider-like structure that occurs near the polarization axis [1; 6; 7; 8], a fishbone structure with fringes nearly perpendicular to the polarization axis [9; 10; 11], and the interference carpets [12; 13] and spiral-like fringes [14; 15] that arise for scattering angles perpendicular to the laser-field polarization. Throughout, an underlying theme is how to retrieve different types of quantum interference from the holographic patterns, bearing in mind that the phase difference between interfering pathways provides information about the target. This question has been central to a variety of orbit-based approaches, which draw upon the physical description of strong-field phenomena as the result of laser-induced collisions of an electron with its parent ion [16]. They range from early models, in which phase differences have been incorporated into classical trajectories [6; 9], and well-established methods such as the strong-field approximation (SFA) [17], to path-integral methods [18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32]. In its standard form, the SFA approximates the continuum by field-dressed plane waves and is constructed like a Born-type series, in which acts of rescattering take place at the origin and are incorporated in higher-order terms (for seminal papers see, e.g., Refs. [33; 34] and for reviews see, e.g., Refs. [35; 36; 37; 38]). This allows a clear distinction between 'direct' orbits, which reach the detector without further interaction with the core, and 'rescattered' orbits.
Typically, the boundaries between 'direct' and 'rescattered' orbits, as well as the classical constraints associated with acts of rescattering, can be clearly seen in the resulting photoelectron momentum distributions (PMDs). They manifest themselves as sudden drops in the photoelectron yield and as classical ridges, which can be traced back to different sets of rescattering orbits. For a detailed discussion of these ridges in connection with the fork-type structure experimentally observed in Ref. [39], see Ref. [40]. Nonetheless, many studies have revealed that models with a Coulomb-free continuum do not suffice to describe the wealth of holographic structures or explain how they form. For instance, the fan-shaped fringes require the interference of orbits which reach the detector directly with orbits which are lightly deflected by the long potential tail [27; 28]. Similarly, the spider results from the quantum interference of two types of field-dressed Kepler hyperbolae, which have no SFA counterpart [28]. The hyperbolae interfere with orbits that go around the core, leading to the spiral [15]. All this shows that the interplay between the external laser field and the binding potential is essential for modeling photoelectron holography accurately and for interpreting the observed patterns using orbit-based methods. For a systematic study of the differences introduced by the Coulomb potential in the ionization times, see Ref. [41]. In other words, one must consider a Coulomb-distorted continuum even for relatively simple, one-electron systems. For that reason, over the past decade there has been an upsurge in approaches beyond the SFA, which are orbit-based and yet retain quantum interference and tunneling (for a detailed account, see our review article [2]). Among them are the trajectory-based Coulomb SFA (TC-SFA) [42], the Quantum Trajectory Monte Carlo (QTMC) [18; 19; 20; 21; 22; 23; 24], the Semiclassical Two-Step Model (SCTM) [26; 32], and the Coulomb Quantum-Orbit Strong Field Approximation (CQSFA) [27; 28; 29; 30; 31; 43]. Recently, there have also been efforts to retrieve this information directly from experiments [44], or from _ab-initio_ computations [45], using filtering techniques. In this context, path-integral methods are very powerful, as they account for the laser field and the binding potential on equal footing. Most path-integral methods launch a huge ensemble of classical trajectories, propagating them in the continuum to find their final momenta and binning them to evaluate the transition amplitude. Although this procedure renders the methods applicable to a wide range of field shapes and potentials, they may require a large number of orbits, typically \(10^{8}\)-\(10^{9}\), for the PMDs to converge. Furthermore, in a forward approach, it may be difficult to identify specific types of interfering trajectories. An exception is the CQSFA, which solves a boundary problem relying on physical intuition, the orbit classifications in [42], and the SFA orbits as initial guesses. This formulation avoids a large ensemble of trajectories, making it straightforward to disentangle specific holographic structures.
These distinctive characteristics enabled important breakthroughs by (a) showing unambiguously how well-known holographic structures such as the fan and the spider form [27; 28], (b) ruling out misunderstandings associated with interference carpets, which were brought about by the SFA [15], (c) revealing a multitude of other holographic structures, some of which have been observed experimentally [30], and (d) allowing the study of multipath holographic interference [46]. However, this huge predictive power comes at a cost: the solving algorithm is less adaptable and requires some pre-knowledge of the orbits' dynamics. This means it may leave out whole sets of orbits and require substantial changes should the external field be modified. So far, the CQSFA has been mainly implemented for linearly polarized monochromatic fields, but recent studies for bichromatic fields [47] or elliptical polarization [48] have revealed some of these challenges. Recently, a semiclassical "hybrid" approach was developed that used a forward shooting method and a clustering algorithm to solve the inverse boundary problem [49]. A very recent implementation of the CQSFA also used this hybrid approach for the case of short \(\sin^{2}\) pulses [50]. Furthermore, incorporating sub-barrier dynamics in a path-integral framework is not a trivial matter. One possibility is to split the problem into sub-barrier and continuum propagation and, using contour integrals in the complex plane, to compute the sub-barrier corrections to the transition amplitude [29; 36; 51; 52; 28]. Alternatively, one may avoid a complex problem altogether and construct ionization rates, which will weight the initial distributions of orbits. This rate-based procedure is a key element in forward propagation methods such as the SCTM [26], the QTMC [18; 23], or recently developed hybrid approaches using clustering algorithms to solve the inverse problem [49]. Both strategies have advantages and shortcomings. Solving the integral under the barrier in the complex plane makes the approach more robust with regard to the initial conditions, but requires dealing with singularities and non-trivial limits [31; 36; 53; 54; 55]. On the other hand, rate-based methods are critically dependent on the initial conditions. The standard implementations [18; 26; 32] utilize adiabatic ionization rates (the Ammosov-Delone-Krainov (ADK) model) [56; 57], which are only applicable in the limit of very small Keldysh parameters \(\gamma\ll 1\). However, in typical experimental conditions, non-adiabatic effects must be considered. Such effects have been studied in Refs. [23] and [58] by examining the sub-cycle dynamics and obtaining non-adiabatic instantaneous rates. In particular, within the non-adiabatic theory of Ref. [23], those rates led to a broader longitudinal momentum distribution at the detector and an accurate prediction of the cutoff in the momentum distribution of rescattered electrons, in agreement with the numerical solution of the time-dependent Schrödinger equation (TDSE). This work proposes two novel methods developed from the initial CQSFA formulation. The first method constructs a non-adiabatic ionization rate which includes Coulomb corrections from the CQSFA transition amplitude, using its sub-barrier contributions.
The second approach is a hybrid forward-boundary CQSFA implementation (originally developed in [50]), in which, instead of using pre-assumed dynamics for the existing orbits, one launches a large ensemble of Coulomb-distorted trajectories, which are subsequently binned in order to solve a boundary problem. Both methods are then applied to photoelectron holography, starting from a proof of concept and assessing what specific momentum regions and holographic patterns are probed, depending on the initial sampling. For the rate-based method, the sampling affects the initial trajectory weighting, while for the hybrid forward-boundary CQSFA it influences the action associated with the first part of the contour. We also illustrate the subtleties involved in orbit classification, which helps to clarify possible sources of confusion in forward approaches. This article is organized as follows. In Sec. II, we provide the general expressions for the CQSFA transition amplitude and the standard orbit classification. Subsequently, in Sec. III, these expressions are employed as a starting point for the rate-based method (Sec. III.1) and the hybrid forward-boundary CQSFA (Sec. III.2). In Sec. IV, these methods are used to compute PMDs, and, after establishing a good agreement with ab-initio methods, we focus on single-orbit distributions and different initial sampling regions (Secs. IV.1 and IV.2, respectively). These results are interpreted with the help of the initial-to-final momentum mapping presented in Sec. V. Finally, in Sec. VI we state our main conclusions. ## II Background and general expressions The transition amplitude for the ionization of a single electron from the ground state of a hydrogen atom, \(|\psi_{0}(t_{0})\rangle\), to a continuum state with final momentum \(\mathbf{p}_{f}\), \(\left|\psi_{\mathbf{p}_{f}}(t)\right\rangle\), is \(\left\langle\psi_{\mathbf{p}_{f}}(t)|U(t,t_{0})|\psi_{0}\right\rangle\). The time evolution operator here is of the form \[U(t,t_{0})=\mathcal{T}\exp{\left[-i\int_{t_{0}}^{t}H(t^{\prime})dt^{\prime}\right]}, \tag{1}\] where the Hamiltonian in the argument of this time-ordered exponential can be written as \[H(t)=H_{a}+H_{I}(t). \tag{2}\] This consists of the field-free part \[H_{a}=\frac{\hat{\mathbf{p}}^{2}}{2}+V(\hat{\mathbf{r}}), \tag{3}\] and a gauge-dependent interaction term \(H_{I}(t)\). In the case of the length gauge, which is used throughout, this has the form \(H_{I}(t)=\hat{\mathbf{r}}\cdot\mathbf{E}(t)\). The atomic potential is \(V(\mathbf{r})=-1/|\mathbf{r}|\). However, we have used the potential \[V(\mathbf{r})=-1/\sqrt{\mathbf{r}^{2}+a^{2}} \tag{4}\] with a very small parameter \(a\) (of the order of \(10^{-6}\)) to soften the Coulomb singularity for practical purposes. Atomic units are used throughout unless otherwise stated. ### CQSFA transition amplitude Using the integral form of the time-evolution operator, the transition amplitude can be written as \[M(\mathbf{p}_{f})=-i\lim_{t\to\infty}\int_{-\infty}^{t}dt^{\prime}\left\langle\psi_{\mathbf{p}_{f}}(t)\left|U(t,t^{\prime})H_{I}(t^{\prime})e^{iI_{p}t^{\prime}}\right|\psi_{0}\right\rangle, \tag{5}\] where \(I_{p}\) is the ionization potential, and \(U(t,t^{\prime})\) is the time evolution operator associated with the full Hamiltonian (2), given by Eq. (1).
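For orientation, the basic ingredients of Eqs. (2)-(5) are simple to set up numerically. A minimal sketch in atomic units follows; the laser parameters are illustrative placeholders (roughly an 800 nm field at \(10^{14}\,\mathrm{W/cm^{2}}\)), not necessarily those used in this paper's figures.

```python
import numpy as np

# Hedged setup sketch in atomic units for a hydrogen-like target.
I_p = 0.5                     # hydrogen 1s ionization potential
omega = 0.057                 # ~800 nm driving frequency (assumed value)
E0 = 0.0534                   # ~1e14 W/cm^2 peak field (assumed value)
A0 = E0 / omega
a = 1e-6                      # soft-core parameter of Eq. (4)

def E_field(t):
    """Linearly polarized monochromatic field, E(t) = E0 sin(wt) z-hat."""
    return E0 * np.sin(omega * t)

def A_field(t):
    """Corresponding vector potential, A(t) = (E0/w) cos(wt) z-hat."""
    return A0 * np.cos(omega * t)

def V(r):
    """Soft-core Coulomb potential of Eq. (4)."""
    return -1.0 / np.sqrt(np.dot(r, r) + a * a)

print(E_field(np.pi / (2 * omega)), V(np.array([5.0, 0.0])))
```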
In the CQSFA, the transition amplitude is written as a phase-space path integral by a time slicing method [59; 25], \[M(\mathbf{p}_{f})=-i\lim_{t\to\infty}\int_{-\infty}^{t}dt^{\prime}\int d\mathbf{\tilde{p}}_{0}\int_{\tilde{\mathbf{p}}_{0}}^{\tilde{\mathbf{p}}_{f}(t)}\mathcal{D}^{\prime}\mathbf{\tilde{p}}\int\frac{\mathcal{D}\mathbf{r}}{(2\pi)^{3}}\,e^{iS(\mathbf{\tilde{p}},\mathbf{r},t,t^{\prime})}\left\langle\mathbf{\tilde{p}}_{0}\right|H_{I}(t^{\prime})\left|\psi_{0}\right\rangle. \tag{6}\] Eq. (6) represents an integral over all possible paths beginning at the core and with a fixed asymptotic momentum. Here, \(\mathcal{D}^{\prime}\tilde{\mathbf{p}}\) and \(\mathcal{D}\mathbf{r}\) give the integration measures for the path integrals, and the prime indicates a restriction. The tildes over the initial and intermediate momenta designate field dressing, i.e., \(\mathbf{\tilde{p}}_{0}=\mathbf{p}_{0}+\mathbf{A}(t^{\prime})\) and \(\mathbf{\tilde{p}}=\mathbf{p}+\mathbf{A}(\tau)\), with \(t^{\prime}\leq\tau\leq t\). The semi-classical action derived from the time slicing is \[S(\mathbf{\tilde{p}},\mathbf{r},t,t^{\prime})=I_{p}t^{\prime}-\int_{t^{\prime}}^{t}[\dot{\mathbf{p}}(\tau)\cdot\mathbf{r}(\tau)+H(\mathbf{r}(\tau),\mathbf{p}(\tau),\tau)]d\tau, \tag{7}\] with the Hamiltonian \[H(\mathbf{r}(\tau),\mathbf{p}(\tau),\tau)=\frac{1}{2}\left[\mathbf{p}(\tau)+\mathbf{A}(\tau)\right]^{2}+V(\mathbf{r}(\tau)). \tag{8}\] Next, Eq. (7) is calculated using saddle-point methods, which will require computing complex integrals. The specific contour used here considerably simplifies the problem and is along two straight lines. The first starts at \(t^{\prime}=t^{\prime}_{r}+it^{\prime}_{i}\) and extends vertically down to the real axis at \(t^{\prime}_{r}=\mathrm{Re}(t^{\prime})\). The second is along the real axis from \(\mathrm{Re}(t^{\prime})\) to \(t\). This choice of contour has been widely employed in the literature [51; 52; 60; 61], and allows us to neatly separate the action into two distinct parts, as shown in Eq. (9). The parts represent the two distinct physical aspects of the problem: tunneling and continuum propagation. \[S(\mathbf{\tilde{p}},\mathbf{r},t,t^{\prime})=S^{\mathrm{tun}}(\mathbf{\tilde{p}},\mathbf{r},t^{\prime}_{r},t^{\prime})+S^{\mathrm{prop}}(\mathbf{\tilde{p}},\mathbf{r},t,t^{\prime}_{r}). \tag{9}\] The contribution from under the barrier is \[S^{\mathrm{tun}}(\mathbf{\tilde{p}},\mathbf{r},t^{\prime}_{r},t^{\prime})=I_{p}(it^{\prime}_{i})-\frac{1}{2}\int_{t^{\prime}}^{t^{\prime}_{r}}(\mathbf{p}_{0}+\mathbf{A}(\tau))^{2}d\tau-\int_{t^{\prime}}^{t^{\prime}_{r}}V(\mathbf{r}_{0}(\tau))d\tau, \tag{10}\] which follows the tunnel trajectory \[\mathbf{r}_{0}(\tau)=\int_{t^{\prime}}^{\tau}(\mathbf{p}_{0}+\mathbf{A}(\tau^{\prime}))d\tau^{\prime}. \tag{11}\] In Eqs. (10) and (11), the under-the-barrier momentum has been taken as constant, which is a reasonable approximation given the assumption that the sub-barrier dynamics happens practically instantaneously, with \(\mathrm{Re}[t^{\prime}]\) kept constant. The contribution from the continuum propagation is \[S^{\mathrm{prop}}(\mathbf{\tilde{p}},\mathbf{r},t,t^{\prime}_{r})=I_{p}t^{\prime}_{r}-\frac{1}{2}\int_{t^{\prime}_{r}}^{t}(\mathbf{p}(\tau)+\mathbf{A}(\tau))^{2}d\tau-2\int_{t^{\prime}_{r}}^{t}V(\mathbf{r}(\tau))d\tau, \tag{12}\] where the factor of 2 before the potential integral is due to the fact that, for a Coulomb potential, \(\mathbf{r}\cdot\mathbf{\dot{p}}=V(\mathbf{r})\) [25; 26; 28].
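To illustrate the sub-barrier part concretely, the sketch below evaluates \(\mathrm{Im}\,S^{\mathrm{tun}}\) of Eq. (10) along the vertical contour for a saddle at a field crest, where the SFA ionization time is known in closed form, and converts it into a trajectory weight \(e^{-2\mathrm{Im}\,S^{\mathrm{tun}}}\) (of the form used in Eq. (26) below). The field parameters are illustrative assumptions, and the Coulomb integral of Eq. (10) is deliberately left out: it diverges at the tunnel entrance and requires the regularization procedures cited above.

```python
import numpy as np

# Hedged sketch: Im S^tun along the contour tau = t'_r + i s, s in [0, t'_i],
# for a field crest (t'_r = pi/(2 omega), p_0z = 0). Coulomb term omitted
# (it needs regularization at the origin). Parameters are assumed values.
I_p, omega, E0 = 0.5, 0.057, 0.0534
A0 = E0 / omega

def im_S_tun(p0x, n=4000):
    # Saddle-point equation (14) at the crest reduces to
    # A0^2 sinh^2(omega t'_i) = 2 I_p + p0x^2, solvable in closed form.
    ti = np.arcsinh(np.sqrt(2.0 * I_p + p0x**2) / A0) / omega
    s = np.linspace(0.0, ti, n)               # imaginary-time contour parameter
    tau = np.pi / (2.0 * omega) + 1j * s
    kin = 0.5 * (p0x**2 + (A0 * np.cos(omega * tau))**2)
    # S^tun = i I_p t'_i - int_{t'}^{t'_r} kin dtau; with dtau = i ds and s
    # running from t'_i down to 0, that second term equals +i int_0^{t'_i} kin ds.
    integral = np.sum(0.5 * (kin[1:] + kin[:-1])) * (s[1] - s[0])  # trapezoid rule
    return (1j * I_p * ti + 1j * integral).imag

for p0x in (0.0, 0.3, 0.6):
    w = np.exp(-2.0 * im_S_tun(p0x))          # weight, cf. Eq. (26)
    print(f"p0x = {p0x:.1f}:  Im S_tun = {im_S_tun(p0x):.3f},  W = {w:.2e}")
```

As expected of a tunneling weight, the output decreases steeply with increasing transverse momentum \(p_{0x}\).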
Another important quantity is the tunnel exit \(z_{0}\), which, roughly speaking, gives the point in space at which the electron reaches the continuum. Under the assumption that the tunnel exit is real and directed along the polarization axis, we can state that \[z_{0}=\text{Re}[r_{0\parallel}(t^{\prime}_{r})], \tag{13}\] where \(\mathbf{r}_{0}(t^{\prime}_{r})\) is the tunnel trajectory (11) for \(\tau=t^{\prime}_{r}\) and the subscript indicates its component along the driving-field polarization direction. The tunnel exit is used to solve the CQSFA boundary problem and in the orbits' classification. One should note that the reality of Eq. (13) is an approximation, which leads to real trajectories in the continuum. More rigorous formulations of Coulomb-distorted strong-field approaches in which this assumption is relaxed exist, but the differences observed in the PMDs are subtle. Furthermore, complex trajectories in the continuum require dealing with branch cuts upon rescattering, which is not without difficulties [55; 31]. ### Saddle-point equations The evaluation of the integral in Eq. (6) using the saddle-point approximation requires looking for solutions for \(t^{\prime}\), \(\mathbf{r}(\tau)\) and \(\mathbf{p}(\tau)\) such that the action is stationary. This yields the following saddle-point equation for the ionization time, \[[\mathbf{p}(t^{\prime})+\mathbf{A}(t^{\prime})]^{2}=-2I_{p}, \tag{14}\] while the continuum trajectories must solve the system of ODEs \[\dot{\mathbf{r}}(\tau)=\mathbf{p}(\tau)+\mathbf{A}(\tau), \tag{15}\] \[\dot{\mathbf{p}}(\tau)=-\nabla_{r}V(\mathbf{r}(\tau)), \tag{16}\] for the intermediate momentum and position. This leads to the CQSFA transition amplitude \[M(\mathbf{p}_{f})\propto-i\lim_{t\to\infty}\sum_{s}\bigg\{\det\bigg[\frac{\partial\mathbf{p}_{s}(t)}{\partial\mathbf{p}_{s}(t^{\prime}_{s})}\bigg]\bigg\}^{-1/2}\mathcal{C}(t^{\prime}_{s})\,e^{iS(\mathbf{p}_{s},\mathbf{r}_{s},t,t^{\prime}_{s})}, \tag{17}\] where \(t^{\prime}_{s}\), \(\mathbf{r}_{s}\) and \(\mathbf{p}_{s}\) are given by Eqs. (14), (15) and (16), respectively, and the sum is over the distinct saddle-point trajectories which have final momentum \(\mathbf{p}_{f}\). The prefactor \(\det[\partial\mathbf{p}_{s}(t)/\partial\mathbf{p}_{s}(t^{\prime}_{s})]\) comes from the quadratic fluctuations around the saddle points and \[\mathcal{C}(t^{\prime}_{s})=\sqrt{\frac{2\pi i}{\partial^{2}S(\mathbf{\tilde{p}}_{s},\mathbf{r}_{s},t,t^{\prime}_{s})/\partial t^{\prime 2}_{s}}}\langle\mathbf{p}+\mathbf{A}(t^{\prime}_{s})|H_{I}(t^{\prime}_{s})|\psi_{0}\rangle \tag{18}\] is the same as the SFA prefactor. The stability factor \([\partial\mathbf{p}_{s}(t)/\partial\mathbf{p}_{s}(t^{\prime}_{s})]\) arises due to a Legendre transformation via the \(\mathbf{p}_{0}\) integral in Eq. (6); full details may be found in Ref. [50]. Eq. (17) is derived under the assumption that the saddle points remain well separated and that the stability factor does not pass through zero in a so-called focal point for the entirety of the domain considered. If this is not the case, then some trajectories can pass through focal points, which accumulate additional Maslov phase, or asymptotic expansions must be constructed that account for groups of saddles collectively (for an example, see Ref. [62]). A second assumption is that the physics of the system is fully captured by considering a 2-dimensional model.
This can lead to problems since, by reducing the dimension of the system, one can "hide" focal points that would have led to an accumulation of Maslov phase in the full-dimensional system. In such reduced-dimensionality models, this is known as a Gouy phase anomaly [49]. Hence, different trajectories may have different relative phases than those described by Eq. (17), which will alter their interference. For example, in [46] the "legs" of the spider are found to be shifted perpendicular to the polarization direction to more closely align with experiments. In the original CQSFA, with predetermined trajectory types, it is relatively straightforward to identify focal points and incorporate Gouy phases by hand, because the dynamics of the trajectories are known. However, for semi-classical models employing a broad range of trajectories, such as forward-propagating or hybrid methods, this requires a more involved computation, which is beyond the scope of this work. An implementation of Maslov phases was presented in [49], and also in a recent, more general implementation of the CQSFA in Ref. [50], where an explicit recipe was given. Here, we consider that the dynamics are restricted to the \(xz\) plane, with the laser field polarized along \(z\). ### Orbit classification In its standard form, the CQSFA restricts the sets of trajectories considered to include exactly one of a specific predefined type, based on an understanding of the initial-to-final momentum mapping. Typically, for a linearly polarized monochromatic field, four distinct types of trajectories are solved for at each grid point. These types of trajectories were first introduced in [42] and are classified according to the product \(\Pi_{z}=z_{0}p_{fz}\) of the tunnel exit and the final momentum component parallel to the driving-field polarization, and the product \(\Pi_{x}=p_{0x}p_{fx}\) of the initial and final momentum components perpendicular to the driving-field polarization. A positive product \(\Pi_{z}\) means that the direction of the tunnel exit and that of the detector coincide, which is the case for orbits of types 1 and 4. In contrast, a negative \(\Pi_{z}\) implies that the electron left on the opposite side with respect to the detector, which occurs for orbits 2 and 3. The product \(\Pi_{x}\) provides insight into how the transverse momentum component has changed during the electron propagation. If \(\Pi_{x}<0\), the electron has been deflected in such a way by the central potential that its final and initial momentum components \(p_{fx}\) and \(p_{0x}\) have opposite signs. This behavior is triggered by the presence of the Coulomb potential and is observed for orbits 3 and 4, while for the remaining orbits \(\Pi_{x}>0\). The standard CQSFA makes significant assumptions about these orbits' dynamics in order to solve the boundary problem. Type 1 orbits are expected to leave the atom and reach the detector directly, orbits 2 and 3 are assumed to behave like field-dressed Kepler hyperbolae, and orbit 4 is predicted to exhibit a slingshot-type behavior, leaving from the same side as the detector and going around the ion. For clarity, a summary of the conditions upon \(\Pi_{x}\) and \(\Pi_{z}\), together with the expected dynamics, is provided in Table 1; a sign-based sketch of this bookkeeping is given below. For some trajectories which are known not to interact strongly with the core, such as orbit 1, the SFA solution can be used as an initial guess and then, by an incremental increase of the Coulomb strength, perturbed towards the genuine semiclassical trajectory.
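The sign conditions of Table 1 amount to simple bookkeeping once \(z_{0}\), \(p_{0x}\) and the final momentum are known. A minimal sketch follows; it assigns labels only and, as emphasized above, cannot by itself certify that the expected dynamics actually occur.

```python
def classify_orbit(z0, p0x, pfz, pfx):
    """Standard CQSFA orbit label from the signs of Pi_z = z0*p_fz and
    Pi_x = p_0x*p_fx (Table 1). Pure bookkeeping: the actual trajectory
    dynamics must be inspected separately."""
    pi_z = z0 * pfz
    pi_x = p0x * pfx
    if pi_z > 0:
        return 1 if pi_x > 0 else 4
    return 2 if pi_x > 0 else 3

# Example: exit on the detector side, transverse momentum sign flipped -> orbit 4.
print(classify_orbit(z0=-8.0, p0x=0.2, pfz=-0.5, pfx=-0.1))
```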
For some trajectories which are known not to interact strongly with the core, such as those of type 1, the SFA solution can be used as an initial guess and then, by an incremental increase of the Coulomb strength, perturbed towards the genuine semiclassical trajectory. After this, nearby trajectories are found by considering previously solved trajectories on adjacent grid points as initial guesses. There is some subtlety to this step, as the exact prescription for which initial guesses should be used to solve which grid points will influence the types of trajectories found. In this case, for a sufficiently small ring around \(\mathbf{p}_{f}=0\), no caustics, which would halt the progress of the solver, are encountered. The requirement of such a prescription reduces the versatility of this method. It is necessary to alter it for even the simplest deviations from a monochromatic field, and it will fail when required to find different types of trajectories or for more general field types. For examples of how the standard classification in Table 1 must be altered for bichromatic and elliptically polarized fields, see our previous publications [47] and [48], respectively. Furthermore, in Ref. [30] it was first evidenced that other distinct orbit types may satisfy the conditions highlighted in the second and third columns of Table 1, so one should not presuppose that the standard behavior will always hold. Nonetheless, we have observed that it is not possible for an orbit to fail to satisfy one of the classifications in Table 1 unless it remains bound. Thus, these conditions, although exhaustive, do not provide a unique identification for the trajectories that should be included in the computation of the transition amplitude, and a more flexible approach is called for. The issue is that the classifications do not always remain true to the spirit of the dynamics highlighted in Ref. [42] and used in the standard, boundary version of the CQSFA.

## III Forward and Hybrid Methods

Below, we construct two alternative approaches that use the CQSFA formulation while allowing for more flexible solutions. The first starts from the CQSFA transition amplitude (17), but uses the sub-barrier part of the CQSFA to build a non-adiabatic ionization rate with Coulomb corrections. Subsequently, we solve the forward problem by launching an ensemble of trajectories whose initial conditions are sampled from that rate. The second is a hybrid forward-boundary CQSFA method, which starts by launching a large set of Coulomb-distorted trajectories. These trajectories are then used as guesses for the boundary problem, instead of relying on pre-existing assumptions about the contributing orbits. Throughout, unless otherwise stated, we consider a linearly polarized monochromatic field \[\mathbf{E}(t)=E_{0}\sin(\omega t)\hat{z}. \tag{19}\] This leads to a vector potential \[\mathbf{A}(t)=A_{0}\cos(\omega t)\hat{z}=2\sqrt{U_{p}}\cos(\omega t)\hat{z}, \tag{20}\] where \(\hat{z}\) is the polarization direction of the electric field, and \(U_{p}\) is the ponderomotive energy. This field choice ensures that the classification in Table 1 holds and facilitates comparison with the standard, boundary-type CQSFA.

### Rate-based forward method

In our rate-based forward approach, we will evaluate the distribution at the detector by using a "shooting method" [23; 26; 32] to launch a large ensemble of trajectories. Afterwards, we will bin the final momenta of the trajectories and coherently add the ones lying within the same bin centered at \(\mathbf{p}_{f}\) in momentum space. The ionization probability \(\mathcal{P}(\mathbf{p}_{f})\) is then \[\mathcal{P}(\mathbf{p}_{f})=\left|\sum_{j=1}^{n_{b}}\mathcal{C}(t_{js}^{\prime })e^{iS_{j}(\mathbf{\tilde{p}}_{s},\mathbf{r}_{s},t,t_{s}^{\prime})}\right|^{2}.
\tag{21}\] The sum is carried out over all the trajectories \(n_{b}\) arriving at a given bin. The determinant in the pre-exponential factor in (17), usually called the stability factor, cannot be correctly included in our "shooting method". As previously pointed out in [32; 49], the sampling of the trajectories yields a weight that is \(1/\det\), instead of the \(1/\sqrt{\det}\) predicted here. The term \(\mathcal{C}(t_{s}^{\prime})\) will be included subsequently in our calculations when computing PMDs. However, as our system is ionized from a \(1s\) state, we expect it to have a minor influence on the final momentum distribution. This influence will be significant for bound states with angular dependence, but this issue will not be addressed in the present work. For a discussion, see Ref. [63].

\begin{table} \begin{tabular}{c c c l} \hline Orbit & \(\Pi_{z}\) & \(\Pi_{x}\) & Behavior \\ \hline \hline 1 & + & + & Direct \\ 2 & - & + & Hyperbola \\ 3 & - & - & Hyperbola \\ 4 & + & - & Rescattered \\ \hline \end{tabular} \end{table} Table 1: Orbit classification used in the standard, boundary-type CQSFA for monochromatic linearly polarized fields. The labelling 1 to 4 classifies the orbit with two different conditions, the sign of \(\Pi_{z}=z_{0}p_{fz}\) and of \(\Pi_{x}=p_{0x}p_{fx}\). The behavior in the fourth column indicates the expected dynamics of the specific types of orbits.

The action in Eq. (21) can be split into its real and imaginary parts, \[S_{j}(\mathbf{\tilde{p}}_{s},\mathbf{r}_{s},t,t^{\prime}_{s})=\operatorname{Re }S_{j}(\mathbf{\tilde{p}}_{s},\mathbf{r}_{s},t,t^{\prime}_{s})+i\operatorname{ Im}S_{j}(\mathbf{\tilde{p}}_{s},\mathbf{r}_{s},t^{\prime}_{r},t^{\prime}_{s}). \tag{22}\] The imaginary contribution comes only from the complex integral under the barrier, since \(S^{\text{prop}}(\mathbf{\tilde{p}}_{s},\mathbf{r}_{s},t,t^{\prime}_{s})\) (12) is real due to the approximation made upon the tunnel exit. Hence, \[\operatorname{Im}S_{j}(\mathbf{\tilde{p}}_{s},\mathbf{r}_{s},t^{\prime}_{r},t ^{\prime}_{s})=\operatorname{Im}S^{\text{tun}}(\mathbf{\tilde{p}}_{s}, \mathbf{r}_{s},t^{\prime}_{r},t^{\prime}_{s}), \tag{23}\] where \(S^{\text{tun}}\) is given in Eq. (10). Then, we can write the ionization probability (21), with exponential accuracy, as \[\mathcal{P}(\mathbf{p}_{f})\approx\bigg{|}\sum_{j=1}^{n_{b}}\sqrt{W_{j}( \mathbf{\tilde{p}}_{s},\mathbf{r}_{s},t^{\prime}_{r},t^{\prime}_{s})}e^{i \operatorname{Re}S_{j}(\mathbf{\tilde{p}}_{s},\mathbf{r}_{s},t,t^{\prime}_{s})} \bigg{|}^{2}, \tag{24}\] with \[\operatorname{Re}S_{j}(\mathbf{\tilde{p}}_{s},\mathbf{r}_{s},t,t^{\prime}_{s} )=S^{\text{prop}}+\operatorname{Re}S^{\text{tun}}, \tag{25}\] where \[W_{j}(\mathbf{\tilde{p}}_{s},\mathbf{r}_{s},t^{\prime}_{r},t^{\prime}_{s})=e^ {-2\operatorname{Im}S_{j}(\mathbf{\tilde{p}}_{s},\mathbf{r}_{s},t^{\prime}_{ r},t^{\prime}_{s})} \tag{26}\] is the instantaneous ionization rate [64; 56], determined by the dynamics under the barrier. The above equation is similar to those encountered in the implementations of the QTMC [23] and the SCTM [26; 32] to find the total ionization probability after binning the final momentum distribution. The main difference between these models and our present implementation will be the derivation of a Coulomb-corrected non-adiabatic ionization rate based on the CQSFA approach, as described in the next section.

#### III.1.1 Non-adiabatic ionization rate

The derivation of the ionization rate for the field, given by Eqs.
(19) and (20), will follow the procedure described in Ref. [23]. As mentioned before, we restrict ourselves to a 2-dimensional problem with the initial canonical momentum \(\mathbf{p_{0}}=(p_{0z},p_{0x})\). Solving the saddle-point equation (14), expressing the complex time as \(t^{\prime}_{s}=t^{\prime}_{r}+it^{\prime}_{i}\), and separating into real and imaginary parts, we obtain the longitudinal canonical momentum at the tunnel exit, \[p_{0z}=-\frac{E_{0}}{\omega}\cos(\omega t^{\prime}_{r})\cosh(\omega t^{\prime}_{i}), \tag{27}\] and the real part of the tunnel exit (13) explicitly, \[z_{0}=\frac{E_{0}}{\omega^{2}}\sin(\omega t^{\prime}_{r})\bigg{(}1-\sqrt{1+\gamma^{2}(t^{\prime}_{r},p_{0x})}\bigg{)}, \tag{28}\] with the effective Keldysh parameter given by [23] \[\gamma(t^{\prime}_{r},p_{0x})=\frac{\omega\sqrt{p_{0x}^{2}+2I_{p}}}{|E(t^{\prime}_{r})|},\] which depends on the amplitude of the electric field at the ionization time and on the initial transverse momentum. Furthermore, we can find the relation between the real and imaginary components of \(t^{\prime}_{s}\), \[\sinh(\omega t^{\prime}_{i})=\gamma(t^{\prime}_{r},p_{0x}). \tag{29}\] We have expressed both the initial longitudinal momentum and the tunnel exit as functions of the transverse momentum and the ionization time; hence, only \(t^{\prime}_{r}\) and \(p_{0x}\) remain as independent variables. Next, we will obtain an analytical expression for the instantaneous ionization rate \(W(t^{\prime}_{r},p_{0x})\). Using the above relations and our particular choice of the electric field together with Eq. (10), Eq. (23) gives \[\operatorname{Im}S(t^{\prime}_{r},p_{0x})=\frac{E_{0}^{2}}{2\omega^{3}}\bigg{[}\bigg{(}\cos^{2}(\omega t^{\prime}_{r})+\gamma^{2}(t^{\prime}_{r},p_{0x})+\frac{1}{2}\bigg{)}\sinh^{-1}\gamma(t^{\prime}_{r},p_{0x})-\frac{\gamma(t^{\prime}_{r},p_{0x})}{2}\bigg{(}2\cos^{2}(\omega t^{\prime}_{r})+1\bigg{)}\sqrt{1+\gamma(t^{\prime}_{r},p_{0x})^{2}}\bigg{]}-\operatorname{Im}\int_{t^{\prime}}^{t^{\prime}_{r}}V(\mathbf{r}_{0}(\tau))\,d\tau. \tag{30}\] To derive an analytical expression for \[I_{V_{0}}=\int_{t^{\prime}}^{t^{\prime}_{r}}V(\mathbf{r}_{0}(\tau))\,d\tau, \tag{31}\] we use the long-wavelength approximation, as in [28], to expand the tunneling trajectory \(\mathbf{r}_{0}\) (11) around the imaginary component, so that \[\mathbf{r}_{0}(\tau)=(\tau_{i}-t^{\prime}_{i})\bigg{[}i[\mathbf{p}_{0}+\mathbf{A}(t^{\prime}_{r})]-\frac{1}{2}\dot{\mathbf{A}}(t^{\prime}_{r})(\tau_{i}+t^{\prime}_{i})\bigg{]}, \tag{32}\] where \(\tau_{i}=\operatorname{Im}[\tau]\). Finally, we obtain the regularized expression as computed in [29], \[I_{V_{0}}(t^{\prime}_{r},p_{0x})=\frac{i}{\sqrt{\chi^{2}-p_{0x}^{2}}}\,\ln\bigg{[}\frac{2t^{\prime}_{i}(\chi^{2}-p_{0x}^{2})}{\chi\eta-p_{0x}^{2}+\sqrt{\eta^{2}-p_{0x}^{2}}\sqrt{\chi^{2}-p_{0x}^{2}}}\bigg{]}, \tag{33}\] with \[\eta =i(p_{0z}+A(t^{\prime}_{r}))-\frac{1}{2}t^{\prime}_{i}\dot{A}(t^{\prime}_{r}), \tag{34}\] \[\chi =i(p_{0z}+A(t^{\prime}_{r}))-t^{\prime}_{i}\dot{A}(t^{\prime}_{r}), \tag{35}\] where \(p_{0z}\) and \(t^{\prime}_{i}\) are given by Eqs. (27) and (29). Combining Eq. (26) and Eq.
(30), we can express the Coulomb-corrected non-adiabatic ionization rate \(W_{C}(t^{\prime}_{r},p_{0x})\) as \[W_{C}(t^{\prime}_{r},p_{0x})=e^{-2\operatorname{Im}S(t^{\prime}_{r},p_{0x})}=W_ {V_{0}}(t^{\prime}_{r},p_{0x})W_{0}(t^{\prime}_{r},p_{0x}), \tag{36}\] where \(W_{V_{0}}(t^{\prime}_{r},p_{0x})\) is the Coulomb contribution to the rate, given by \[W_{V_{0}}(t^{\prime}_{r},p_{0x})=e^{-2\operatorname{Im}I_{V_{0}}(t^{\prime}_{r},p_{0x})}=\bigg{|}\bigg{[}\frac{2t^{\prime}_{i}(\chi^{2}-p_{0x}^{2})}{\chi\eta-p_{0x}^{2}+\sqrt{\eta^{2}-p_{0x}^{2}}\sqrt{\chi^{2}-p_{0x}^{2}}}\bigg{]}^{\frac{1}{\sqrt{\chi^{2}-p_{0x}^{2}}}}\bigg{|}^{2}, \tag{37}\] and \[W_{0}(t^{\prime}_{r},p_{0x})=\exp\biggl{[}-\frac{E_{0}^{2}}{\omega^{3}}\biggl{(}\biggl{(}\cos^{2}(\omega t^{\prime}_{r})+\gamma^{2}(t^{\prime}_{r},p_{0x})+\frac{1}{2}\biggr{)}\sinh^{-1}\gamma(t^{\prime}_{r},p_{0x})-\frac{1}{2}\gamma(t^{\prime}_{r},p_{0x})\bigl{[}2\cos^{2}(\omega t^{\prime}_{r})+1\bigr{]}\sqrt{1+\gamma(t^{\prime}_{r},p_{0x})^{2}}\biggr{)}\biggr{]} \tag{38}\] is the non-adiabatic rate without Coulomb correction, which differs from the one derived in [23] only because of the particular choice of the electric field. Furthermore, the real part of the action along the first part of the contour reads \[\operatorname{Re}S^{\text{tun}}=\frac{E_{0}}{\omega^{2}}p_{0z}\sin(\omega t^{\prime}_{r})(\cosh(\omega t^{\prime}_{i})-1)+\frac{E_{0}^{2}}{8\omega^{3}}\sin(2\omega t^{\prime}_{r})(\cosh(2\omega t^{\prime}_{i})-1)-\operatorname{Re}I_{V_{0}}, \tag{39}\] with \(I_{V_{0}}\) given by Eq. (33) in the long-wavelength limit. This will be added to the continuum action according to Eq. (25). Our method is similar to those presented in [23; 26; 32]. Our main contribution is the Coulomb sub-barrier correction included in the ionization rate, based on the CQSFA transition amplitude. We have also added the additional phase coming from the real part of the integral under the barrier (39). This contribution from the Coulomb sub-barrier integral has been shown before [51] to be responsible for shifting the positions of the holographic fringes, leading to a better agreement with TDSE calculations. Studies of such contributions and analytical approximations are also given in our previous paper [29]. In our rate-based method, we use accept-reject sampling to obtain the initial transverse momentum \(p_{0x}\) and ionization time \(t^{\prime}_{r}\) from the distribution \(\sqrt{W_{C}(t^{\prime}_{r},p_{0x})}\) (36). By doing this, we require fewer trajectories to resolve the interference patterns, as we sample from the regions where the ionization probability is relevant. Furthermore, it allows us to evaluate the photoelectron momentum distributions at the detector simply by adding the contributions of the phase in the continuum within the different bins in momentum space as \[\mathcal{P}(\mathbf{p}_{f})\approx\left|\sum_{j=1}^{n_{b}}e^{i\operatorname{ Re}S_{j}(\tilde{\mathbf{p}}_{s},\mathbf{r}_{s},t,t^{\prime}_{s})}\right|^{2}. \tag{40}\] To avoid contributions from ATI rings, we restrict the sampling of the ionization time to one field cycle. Once we have the initial transverse momenta and ionization times, we obtain the longitudinal momenta and tunnel-exit positions from Eq. (27) and Eq. (28). This gives the complete set of initial conditions to propagate the classical trajectories in the continuum, defined by Eqs. (15) and (16); a minimal sketch of this sampling step is given below.
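The following sketch illustrates the accept-reject step just described. For brevity, it samples from the uncorrected rate \(W_{0}\) of Eq. (38); in the full method one would pass the Coulomb-corrected \(W_{C}\) of Eq. (36) instead. All names, grid resolutions and the small regularizer on the field amplitude are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
E0, w, Ip = 0.0653, 0.057, 0.5                    # a.u., as in Fig. 1

def gamma_eff(tr, p0x):
    """Effective Keldysh parameter; small regularizer avoids division by zero."""
    return w * np.sqrt(p0x**2 + 2 * Ip) / (np.abs(E0 * np.sin(w * tr)) + 1e-12)

def W0(tr, p0x):
    """Non-adiabatic rate without Coulomb correction, Eq. (38)."""
    g, c2 = gamma_eff(tr, p0x), np.cos(w * tr)**2
    return np.exp(-(E0**2 / w**3) * ((c2 + g**2 + 0.5) * np.arcsinh(g)
                  - 0.5 * g * (2 * c2 + 1) * np.sqrt(1 + g**2)))

def sample(n, W, T=2 * np.pi / w, px_max=1.5):
    """Accept-reject sampling of (t'_r, p0x) from sqrt(W) over one field cycle."""
    tg = np.linspace(0.0, T, 400)[:, None]        # crude grid for the envelope
    pg = np.linspace(-px_max, px_max, 400)[None, :]
    wmax = np.sqrt(W(tg, pg)).max()
    out = []
    while len(out) < n:
        tr, p0x = rng.uniform(0.0, T), rng.uniform(-px_max, px_max)
        if rng.uniform(0.0, wmax) < np.sqrt(W(tr, p0x)):
            g = gamma_eff(tr, p0x)
            ti = np.arcsinh(g) / w                              # Eq. (29)
            p0z = -(E0 / w) * np.cos(w * tr) * np.cosh(w * ti)  # Eq. (27)
            z0 = (E0 / w**2) * np.sin(w * tr) * (1 - np.sqrt(1 + g**2))  # Eq. (28)
            out.append((tr, p0x, p0z, z0))
    return np.array(out)

ics = sample(1000, W0)    # initial conditions (t'_r, p0x, p0z, z0) for propagation
```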
To integrate the equations of motion numerically, we use an adaptive step-size fourth-order Runge-Kutta integrator [65]. We use a trapezoidal pulse, with two ramp-on, four constant-amplitude, and two ramp-off cycles. After the external field is switched off, the electron moves only under the influence of the Coulomb potential. If, after the pulse is switched off, the electron energy is negative, that is, \(E<0\), we interpret this as the electron being captured in a Rydberg state. Therefore, it will not contribute to the PMD. If, on the other hand, \(E>0\) at the end of the pulse, the electron is freed and will reach the continuum. The asymptotic momentum \(\mathbf{p}_{a}\) of free electrons is calculated from the momentum \(\mathbf{p}(T_{f})\) and position \(\mathbf{r}(T_{f})\) at the end of the laser pulse (\(t=T_{f}\)) using Kepler's solutions [66], \[\mathbf{p}_{a}=p\frac{p(\mathbf{L}\times\mathbf{a})-\mathbf{a}}{1+p^{2}L^{2}}, \tag{41}\] where \(\mathbf{L}=\mathbf{r}(T_{f})\times\mathbf{p}(T_{f})\), and \(\mathbf{a}=\mathbf{p}(T_{f})\times\mathbf{L}-\mathbf{r}(T_{f})/r(T_{f})\). Then, we evaluate the photoelectron momentum distribution by binning the final asymptotic momenta of the ionized electrons and adding them coherently within each bin using Eq. (40).

#### III.1.2 Analysis of the Coulomb sub-barrier correction

In this section, we study how the initial momentum distribution changes due to the influence of the Coulomb sub-barrier correction included in the rate. Figure 1 shows the density plot of the ionization rates, \(W_{0}\) and \(W_{C}\), as a function of the initial transverse momentum and the laser phase \(\omega t^{\prime}_{r}\). The latter can be mapped onto the initial longitudinal momentum \(p_{0z}\) using Eq. (27), so that broader (narrower) ranges of laser phases correspond to broader (narrower) ranges of \(p_{0z}\). After including the Coulomb potential, we observe an overall increase of the distribution by several orders of magnitude. This is expected from previous work on Coulomb-corrected rates [64] and Coulomb-distorted approaches such as the Coulomb-corrected SFA [42], the CQSFA [25] and the Volkov-eikonal approximation [67; 68]; for a review, see [36]. Furthermore, there is a significant broadening of the transverse momentum distribution, and a narrowing of the range of ionization phases \(\omega t^{\prime}_{r}\) around the times \(\omega t^{\prime}_{r}=(2n+1)\pi/2=(2n+1)\omega T/4\), where \(T\) is the field cycle. These times correspond to the maxima and minima of the field and, for the monochromatic wave in this work, also give \(A(t^{\prime}_{r})=0\). For the parameters in Fig. 1, we quantify in Table 2 the widths of the initial-momentum and tunnel-exit distributions, together with the average tunnel exit. The table shows that the Coulomb corrections cause an increase (decrease) in the transverse (longitudinal) momentum width, while the widths associated with the tunnel exit decrease much less. The average tunnel exit also decreases if the Coulomb correction is incorporated, which is a consequence of it restricting \(\omega t^{\prime}_{r}\) to ranges associated with larger field amplitudes. Intuitively, the changes introduced by the sub-barrier Coulomb correction shown in Fig. 1 may be understood as follows. The Coulomb potential pulls the electron back, so the electron needs more energy to overcome the barrier. Therefore, a larger field amplitude will be necessary for it to escape than if the corrections were absent.
This will narrow the ranges of \(\omega t^{\prime}_{r}\) around the peak-field times. As the parallel momentum is close to zero for electrons freed close to the peak of the field, this translates into higher transverse momenta. Similar Coulomb shifts in the momentum distribution have been demonstrated before [69] and described as wavepacket deceleration. In terms of the orbit classification provided in Sec. II.3, we expect that the orbits most influenced by the aforementioned broadening will be orbits 1 and 2. An electron along orbit 1 is expected to reach the detector without further interaction or deflection, so that there will be a minimal escape velocity [25; 28]. This velocity can be achieved either by increasing \(p_{0x}\) or \(p_{0z}\). As orbit 2 is expected to be a field-dressed hyperbola along which the electron will be deflected but will not undergo hard scattering, the initial transverse momentum must be non-vanishing and relatively large in order for the electron to escape. In this specific regime, the Coulomb potential will contribute to the electron's escape [25; 28], which is reflected in an enhancement of the ionization probability. Orbits 3 and 4 have more restricted initial conditions and are expected to behave like hybrid (orbit 3) or rescattered (orbit 4) orbits. Thus, the Coulomb potential will have a stronger influence on the continuum propagation than on the sub-barrier dynamics. This issue will be discussed in more detail in Sec. V. The distribution \(W_{C}(t^{\prime}_{r},p_{0x})\) also exhibits two distinct peaks around \(p_{0x}\approx\pm 0.5\) a.u. along the peak-field times \(\omega t^{\prime}_{r}=(2n+1)\pi/2=(2n+1)\omega T/4\), which will be investigated in more detail in Fig. 2. In the upper panel of the figure, we plot \(W_{0}(t^{\prime}_{r},p_{0x})\) and \(W_{C}(t^{\prime}_{r},p_{0x})\) for \(\omega t^{\prime}_{r}=\pi/2\). The Coulomb-corrected distribution exhibits two additional peaks, which come from the branch points of the function \(1/\sqrt{r_{0}^{2}}\) and the branch cuts originating from them. The components of the tunneling trajectory \(\mathbf{r}_{0}\) are \[r_{0z} =(\tau_{i}-t^{\prime}_{i})\bigg{[}i[p_{0z}+A(t^{\prime}_{r})]- \frac{1}{2}\dot{A}(t^{\prime}_{r})(\tau_{i}+t^{\prime}_{i})\bigg{]}, \tag{42}\] \[r_{0x} =i(\tau_{i}-t^{\prime}_{i})p_{0x}, \tag{43}\]

\begin{table} \begin{tabular}{l c c c c} \hline Rate & \(\sigma_{p0x}\) & \(\sigma_{p0z}\) & \(\sigma_{z_{0}}\) & \(\langle z_{0}\rangle\) \\ \hline \(W_{C}\) & 0.28 & 0.45 & 0.59 & 7.16 \\ \hline \(W_{0}\) & 0.25 & 0.52 & 0.60 & 7.22 \\ \hline \end{tabular} \end{table} Table 2: Widths of the initial-momentum and tunnel-exit distributions, together with the average tunnel exit, obtained from \(W_{C}\) and \(W_{0}\) using the parameters in Fig. 1.

Figure 1: Transverse momentum distribution at the tunnel exit as a function of the laser phase \(\omega t^{\prime}_{r}/(2\pi)\) from \(W_{0}(t^{\prime}_{r},p_{0x})\) (upper panel) and \(W_{C}(t^{\prime}_{r},p_{0x})\) (lower panel). For these plots, we use a hydrogen atom (\(I_{p}=0.5\) a.u.) in a laser field of intensity \(I=1.5\times 10^{14}\) W/cm\({}^{2}\) and wavelength \(\lambda=800\) nm. The thick yellow line indicates the laser-field amplitude (up to a scaling factor), and the dashed white lines were drawn at phases \(\omega t^{\prime}_{r}=\pi/3\) and \(\omega t^{\prime}_{r}=\pi/2\). The distributions \(W_{0}\) and \(W_{C}\) are computed in arbitrary units.
so that \[\mathbf{r}_{0}^{2}=\operatorname{Re}(r_{0z})^{2}+\operatorname{Re}(r _{0x})^{2}-(\operatorname{Im}(r_{0z})^{2}+\operatorname{Im}(r_{0x})^{2})\\ +2i\operatorname{Re}(r_{0x})\operatorname{Im}(r_{0x})+2i \operatorname{Re}(r_{0z})\operatorname{Im}(r_{0z}). \tag{44}\] From Eq. (43) we know that \(r_{0x}\) is always imaginary. Therefore, the second and fourth terms in Eq. (44) vanish. The branch cuts are located at the points where \(\operatorname{Im}\mathbf{r}_{0}^{2}=0\) and \(\operatorname{Re}\mathbf{r}_{0}^{2}<0\). From (44) we see that this is satisfied when \(p_{0z}=0\) and \(\omega t_{r}^{\prime}=\pi/2\mod\pi\). These peaks are absent in the lower panel of Fig. 2, for which the ionization phase \(\omega t_{r}^{\prime}=\pi/3\) is far from the field extrema. These sub-barrier branching points are of minor relevance for the present paper, as they occur in an initial momentum range far from that of interest. For discussions of branch cuts see [53; 54; 55; 70; 31].

### Hybrid forward-boundary CQSFA

Next, we will briefly discuss how the CQSFA has been modified in order to make it less reliant on presupposing the orbits' dynamics. The CQSFA is formulated and solved as a boundary value problem. The aim is to find sets of semiclassical trajectories whose final momenta coincide precisely with the points of a grid of final momenta. Prior to solving this problem, one needs appropriate guesses for the orbits. In general, to construct a procedure for finding trajectories, one requires a detailed understanding of the initial-to-final momentum map. This can be achieved either by using physical intuition and some pre-knowledge of the dynamics, or by direct forward propagation of a great number of trajectories. However, the forward propagation itself provides a vast set of initial guesses, because the trajectories which have \(\mathbf{p}_{f}\) close to a grid point can be readily used as starting points. This approach leads to a significantly more robust method, which is not influenced by one's preconceptions about the types of relevant trajectories. Similarly to the rate-based forward method described above, the first step of the hybrid forward-boundary CQSFA (H-CQSFA) is to numerically calculate the classical trajectories for a large range of initial conditions. For each value of the initial momentum, there will be a range of associated saddle-point solutions, derived from (14), which correspond to the various ionization times in the semiclassical picture. For each pair of initial momentum and ionization time, the tunnel exit can be evaluated from (28). As in the previous model, the triple \((\mathbf{p}_{0},z_{0},t^{\prime})\) suffices as initial conditions for the classical continuum trajectories defined by the ODEs (15) and (16), which are solved numerically for the duration of the field, taken to span four cycles up to the time \(T_{f}\). In order to avoid ATI rings, which may mask the holographic patterns, we consider ionization times \(t^{\prime}\) within a single field cycle. Following the pulse, the final momenta are calculated analytically as the asymptotic momenta of the relevant Kepler hyperbola (41). Each grid point of final momenta can be associated with a bin that has the grid point at its midpoint. The trajectories whose final momenta fall into each bin are then processed.
Trajectories with initial momenta that are not sufficiently separated, meaning that one could continuously interpolate between them without the intermediate final momenta leaving the bin, are considered to be duplicates and are ignored. Subsequently, the remaining trajectories are used as initial guesses for the trajectories whose asymptotic momenta coincide exactly with the final-momentum grid point. The bins are defined arbitrarily, or rather, they are chosen to cover the range of final momenta we would like to show in the photoelectron momentum distribution. The final momenta from the propagation may then fall into one of these bins. It is then determined, based on the content of the bin, whether a trajectory will be "refined" so that it lands right at the center of the bin and can be used to calculate the transition amplitude there. During the refinement process, the initial momenta are no longer confined by the sampling, and will therefore go wherever they need to so that the trajectory lands at the center of the bin. Fig. 3 provides a schematic representation of the binning process. Only one of the points starting in the red region of the initial momentum plane on the left will actually contribute to the probability amplitude at the grid point at the center of the red bin on the right. The rest are considered to be duplicates and should be discarded.

Figure 2: Instantaneous transverse momentum distributions at the tunnel exit for \(\omega t_{r}^{\prime}=\pi/2\) (upper panel) and \(\omega t_{r}^{\prime}=\pi/3\) (lower panel) using the same field parameters as in Figure 1. The upper panel corresponds to a peak-field time, while for the lower panel, we have taken an ionization phase such that the distribution \(W_{0}(t_{r}^{\prime},p_{0x})\) is touched tangentially near its width (see the dashed lines in Fig. 1 for details). For a better comparison, we have normalized the distributions so that they have the same peak value at the times for which the field amplitude is maximal (upper panel).

Having established significantly more valid semiclassical trajectories than the four used in the previous approach, the action and prefactors from Eqs. (10), (12) and (17) are calculated numerically, such that the amplitudes can be superposed to calculate the probability at a given final momentum. Unless otherwise stated, we consider an initial Gaussian momentum distribution \[\mathcal{P}(p_{0z},p_{0x})=\frac{1}{2\pi\sigma_{p0x}\sigma_{p0z}}\exp\Biggl{\{} -\frac{p_{0z}^{2}}{2\sigma_{p0z}^{2}}\Biggr{\}}\exp\Biggl{\{}-\frac{p_{0x}^{2} }{2\sigma_{p0x}^{2}}\Biggr{\}}, \tag{45}\] for the H-CQSFA, whose widths \(\sigma_{p0x}\), \(\sigma_{p0z}\) were chosen to match those of the rate \(W_{C}\) given in Table 2. Sampling the initial momentum distribution within this approach is more a matter of including or excluding specific trajectories, as the imaginary part of the action gives the initial weighting. Nonetheless, care must be taken: if there is a trajectory which was not sampled and whose initial momentum does not lie sufficiently close to any of the sampled momenta, it will not be found by the refinement process. Further analysis of the effect of sampling from different distributions will be provided in Sec. IV.2, by using initial Gaussian distributions of different widths and the same number of grid points. A minimal sketch of the binning and de-duplication step is given below.
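The sketch below assumes trajectories are given as arrays of initial momenta `p0` and final asymptotic momenta `pf`, and uses a simple distance tolerance between initial momenta within a bin as a proxy for the interpolation criterion described above. The retained entries would then seed the boundary (refinement) solver; all names and tolerances are illustrative.

```python
import numpy as np

def bin_and_deduplicate(p0, pf, grid_z, grid_x, tol=1e-2):
    """Assign final momenta to grid bins; keep well-separated initial momenta
    per bin as guesses for the boundary problem."""
    dz, dx = grid_z[1] - grid_z[0], grid_x[1] - grid_x[0]
    iz = np.floor((pf[:, 0] - grid_z[0]) / dz + 0.5).astype(int)
    ix = np.floor((pf[:, 1] - grid_x[0]) / dx + 0.5).astype(int)
    guesses = {}                             # bin index -> list of initial momenta
    for k in range(len(pf)):
        if not (0 <= iz[k] < len(grid_z) and 0 <= ix[k] < len(grid_x)):
            continue                         # final momentum outside the PMD window
        kept = guesses.setdefault((iz[k], ix[k]), [])
        # duplicates: initial momenta too close to an already-kept guess
        if all(np.linalg.norm(p0[k] - q) > tol for q in kept):
            kept.append(p0[k])
    return guesses

# Example with synthetic data: 10^4 trajectories on a 200 x 200 momentum grid.
rng = np.random.default_rng(2)
p0 = rng.normal(scale=0.4, size=(10_000, 2))
pf = p0 + 0.1 * rng.normal(size=(10_000, 2))   # stand-in for the propagation step
grid = np.linspace(-2.0, 2.0, 200)
seeds = bin_and_deduplicate(p0, pf, grid, grid)
```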
## IV Photoelectron momentum distributions

Next, we focus on the photoelectron momentum distributions computed with the methods in the previous sections. For simplicity, unless necessary, we will use the notations \(p_{x}\) and \(p_{z}\) for the final momentum components \(p_{fx}\) and \(p_{fz}\), respectively. The number of trajectories launched in the rate-based and hybrid methods will vary between \(10^{7}\) and \(2\times 10^{8}\), depending on the features of interest. Fully converged spectra are obtained for \(2\times 10^{8}\) trajectories, with only minor changes in the PMDs observed for more than \(10^{8}\) orbits. This is in agreement with the parameter range reported in [32]. In Fig. 4, we compare PMDs calculated with the rate-based method [panel (a)], the H-CQSFA [panel (b)], the CQSFA [panel (c)] and the outcome of an _ab-initio_ computation [panel (d)], provided by the freely available TDSE solver Qprop [71]. In order to concentrate on the holographic patterns, we consider a single cycle of the field, but add different unit cells incoherently to avoid the arbitrariness associated with the ionization interval having a finite start and endpoint. This is performed by considering different offset phases in Eq. (19), that is, setting \(\omega t\rightarrow\omega t+\phi\) and adding the resulting PMDs incoherently (for details see [44; 46]). Arbitrary endpoints may lead to asymmetries in the resulting PMDs. For Qprop, we considered a single-cycle pulse but performed an incoherent CEP average in order to eliminate asymmetries. If more than one cycle is added coherently, prominent above-threshold ionization (ATI) rings stemming from inter-cycle interference are obtained. These rings are well known in the literature [2; 37] and have been shown in our previous publications [27; 28; 30; 31]. They are not of interest in the present work. A hybrid coherent-incoherent sum was used in [47] for monochromatic and bichromatic fields. Overall, the agreement of the orbit-based PMDs with the TDSE is reasonably good. All panels in Fig. 4 show the key holographic structures, such as the fan, the spider and the carpets, a high-energy ridge associated with rescattering, and a caustic whose apex is located on the \(p_{x}\) axis. However, there are differences. For instance, the rate-based method and the H-CQSFA [Figs. 4(a) and (b)] exhibit secondary ridges at lower energies, and a richer interference structure in the carpet close to the \(p_{x}\) axis, which are absent in the standard CQSFA [Fig. 4(c)]. A richer structure for the carpet is also observed in the TDSE computation [Fig. 4(d)], although secondary ridges are harder to identify. We also see that the H-CQSFA exhibits additional fringes, which follow the rescattering ridges. These fringes are present in the TDSE results, but not in the rate-based method or the standard CQSFA computations. Furthermore, the signal in the outcome of the rate-based method seems to decay much faster away from the \(p_{z}\) axis than for the remaining orbit-based PMDs, which leads to a slightly suppressed interference carpet and a weaker signal close to the rescattering ridges.
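The incoherent unit-cell average described above reduces to a few lines; `compute_pmd` is a placeholder for any of the single-cycle methods discussed in this work, evaluated with the offset phase \(\phi\) of Eq. (19).

```python
import numpy as np

def incoherent_cell_average(compute_pmd, n_cells=8):
    """Average single-cell PMDs over offset phases phi (wt -> wt + phi in Eq. 19)."""
    phases = np.linspace(0.0, 2.0 * np.pi, n_cells, endpoint=False)
    return sum(compute_pmd(phi) for phi in phases) / n_cells
```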
The discrepancies encountered above require a more detailed assessment of several issues pertinent to specific methods, such as how the specific modeling of the sub-barrier dynamics and ionization influences the resulting distributions, how rigorously one may rely upon the orbit classification provided in Table 1, and how different initial sampling suppresses or enhances particular features.

Figure 3: Schematic representation of the binning process employed in the hybrid forward-boundary CQSFA. The red square on the left-hand side represents the initial guess trajectories in a specific region, launched and mapped into a final momentum region. Subsequently, the trajectory highlighted by the black point in that region will be used to solve the boundary problem and refine the initial conditions. The remaining trajectories in the bin will be discarded.

In the results that follow, we restrict the ionization times to a single field cycle to avoid the presence of ATI rings. This facilitates the study of holographic patterns. For simplicity, we consider a fixed unit cell determined by the electric field (19), so that the ionization times start when \(E(t)=0\).

### Sub-barrier dynamics and single-orbit contributions

The first question we address in this section is how the results from the forward rate-based method improve after including the Coulomb correction in the ionization rate, comparing them with the results from the H-CQSFA. These results are presented in Fig. 5, for the rate-based method (left column) and the H-CQSFA (right column). For the former, the presence or absence of sub-barrier Coulomb corrections means that we consider either the ionization rate \(W_{C}\) [Eq. (36)] or the rate \(W_{0}\) [Eq. (38)], respectively. In the H-CQSFA calculations, the Coulomb sub-barrier correction is included by integrating Eq. (31) in the complex plane, instead of being incorporated in a rate. Nonetheless, this integral can be switched on and off. The upper and lower rows of Fig. 5 display the PMDs without and with sub-barrier Coulomb corrections, respectively. For the rate-based method, we see how, when using \(W_{C}\) [Fig. 5(c)], the PMDs broaden along the perpendicular axis (for comparison, see Fig. 5(a), computed with the rate \(W_{0}\)). This is expected from the Coulomb correction's effect on the initial transverse momentum distribution (see Fig. 1). The widening along the \(p_{x}\) axis happens throughout, but mainly affects the fan and the spider. In particular, the fan extends beyond \(p_{x}=\pm 0.5\) a.u., around the \(p_{z}=0\) axis. This happens because the fan results from the interference of direct orbits with field-dressed hyperbolae, and the spider stems from the interference of field-dressed hyperbolae (orbits 2 and 3), whose initial perpendicular momenta may be large. Therefore, a broader initial transverse momentum distribution will extend the fan to higher momentum regions. In contrast, the other structures, such as the rescattering ridges and the carpet-type pattern near the perpendicular momentum axis, result from orbits whose initial momenta are more localized near the polarization axis. This makes them less sensitive to the sub-barrier Coulomb correction. More details will be provided in Sec. V, in which the initial-to-final momentum mapping will be analyzed for specific orbits. The H-CQSFA outcome, plotted in the right column of Fig.
5, shows a similar effect: the Coulomb correction broadens the PMDs in the direction perpendicular to the driving-field polarization, and this broadening is particularly noticeable in the fan [see Fig. 5(d)]. The patterns encountered are also similar, but in the rate-based method, the features near the caustic and the carpet near the perpendicular momentum axis are more suppressed. Furthermore, the slope of the spider legs is different. In the hybrid CQSFA approach, they are mostly parallel to the polarization axis, while in the forward method, they bend slightly upwards. This can be clearly appreciated by noting how one of the spider legs crosses the red line in the left-column plots, while it runs almost tangentially to it in the right-column plots. Altogether, these results highlight the importance of including the Coulomb potential in the sub-barrier dynamics, either as a correction to rate-based methods or as an extra phase in the semiclassical action for the CQSFA.

Figure 4: Photoelectron momentum distributions computed for hydrogen (\(I_{p}=0.5\) a.u.) in a field of intensity \(I=1.5\times 10^{14}\) W/cm\({}^{2}\) and wavelength \(\lambda=800\) nm, using the rate-based method [panel (a)], the H-CQSFA [panel (b)], the CQSFA [panel (c)] and the Schrödinger solver Qprop [panel (d)]. For the orbit-based methods, we look at a single cycle, incoherently averaging over different unit cells according to [44; 46], while for Qprop we use a CEP-averaged one-cycle pulse. In the orbit-based methods, we use a total of \(10^{8}\) orbits for each unit cell. The Qprop outcome was plotted over more orders of magnitude, as the rescattering ridges were strongly suppressed for the scale used in the remaining panels.

Several rescattering ridges are visible in all cases, but the signal is noticeably stronger in the H-CQSFA. Next, we look at single-orbit distributions to assess the influence of the sub-barrier corrections in the rate. Furthermore, we investigate how far-reaching the classification introduced in [42] is in forward or hybrid forward-boundary methods. Single-orbit distributions were previously studied within the pure boundary CQSFA approach [28], and, in particular, the effect of the Coulomb integral under the barrier \(I_{V_{0}}\) was addressed in [29]. In Fig. 6 we display the single-orbit distributions obtained with the rate-based forward method without and with sub-barrier Coulomb corrections (first and second columns from the left, respectively), compared to the hybrid CQSFA (third column from the left) and the standard boundary CQSFA (right column). The orbits are classified according to the conditions in Table 1. The distributions show that, in the rate-based and hybrid methods, the conditions upon the tunnel exit and the transverse momentum component are insufficient to enforce the dynamics typically associated with orbits 1, 2, 3 and 4. This is evidenced by rescattering ridges and features not necessarily associated with 'direct' orbits, such as caustics, being present in the contributions of orbits 1 and 2. Examples are the primary ridge associated with rescattering at energy \(10U_{p}\), which is seen very clearly for orbit 1 [Figs. 6(a) to (c)] and is slightly less intense for orbit 2 [Figs. 6(e) to (g)], as well as secondary ridges at lower photoelectron energies for the distributions stemming from orbits 2 and 3 [Figs. 6(e) to (g) and (i) to (k)]. The number and intensity of those secondary ridges vary according to the method employed, being more prevalent in the rate-based approach.
There are up to four secondary ridges for the distributions associated with orbit 2 in both the rate-based and hybrid methods [Figs. 6(e) to (g)]. For orbit 3, four secondary ridges are only visible for the rate-based approach [Figs. 6(i) and (j)], while for the hybrid CQSFA there is only a low-energy ridge [Fig. 6(k)]. For the rate-based method, the primary rescattering ridge is also observed for orbit 3 [see Figs. 6(i) and (j)], while it is absent for the H-CQSFA [see Fig. 6(k)]. Furthermore, the contributions of orbit 2 also exhibit a caustic near the perpendicular momentum axis, which traditionally is associated with orbit 3 [28] [see Figs. 6(e) to (g)]. In contrast, the standard boundary CQSFA, displayed in the fourth column from the left, leads to markedly different single-orbit distributions. Those obtained using orbits 1 and 2, plotted in Figs. 6(d) and (h), exhibit no ridges or caustics, are located near the ionization threshold, and correspond to the central maxima obtained with the rate-based method. Furthermore, in the CQSFA, there are no secondary ridges in the contributions from orbit 3, as shown in Fig. 6(l). In all cases, there is a high-energy, primary rescattering ridge in the contributions from orbit 4, as shown in the last row of the figure. This is expected from the standard definition of this orbit, which is assumed to be rescattered from the start. Thus, while a pure boundary problem with pre-selected behaviors allows us to define the dynamics associated with orbits 1-4 neatly, for forward and hybrid approaches there are other orbits that satisfy the conditions in Table 1. The secondary ridges observed at lower photoelectron energies are associated with longer orbits, which return only after at least 1.5 cycles but nonetheless satisfy the conditions imposed upon orbits 2 and 3. This implies that care must be taken with the standard CQSFA orbit classification if forward or hybrid methods are used. Another noteworthy feature is that, for the rate-based approaches, the contributions of orbit 3 around the polarization axis are stronger than those of the CQSFA computations. Let us remark that the overall shape of the orbit 3 distribution within the CQSFA framework is only obtained when the stability factor \(\det[\partial\mathbf{p}_{s}(t)/\partial\mathbf{p}_{s}(t^{\prime}_{s})]\) given in Eq. (17) is incorporated [28]. As discussed in Sec. III.1, the direct propagation method implicitly takes this into consideration, but with the wrong weight \(1/|\det|\) instead of the \(1/\sqrt{|\det|}\) employed in both CQSFA calculations; this is the cause of the enhanced contribution around the polarization axis. Finally, we observe interference patterns in the single-orbit distributions of orbits 3 and 4 from the rate-based method and the hybrid CQSFA. There are fringes following the caustic whose extrema are located near \((p_{z},p_{x})=(0,1.3\) a.u.) for orbit 3, clearly visible in panels (i) to (k), and annular fringes following the low-energy ridge in Fig. 6(k).

Figure 5: Photoelectron momentum distributions (PMDs) computed using ionization times restricted to a single cycle, for the same field and atomic parameters as in Fig. 4 and a unit cell defined by Eq. (19). The left column has been computed with the rate-based forward method using the non-adiabatic ionization rate \(W_{0}\) [panel (a)] and the Coulomb-corrected ionization rate \(W_{C}\) [panel (c)].
The right column has been calculated with the hybrid forward-boundary CQSFA removing the Coulomb potential from the tunneling part of the action [panel (b)] and with the full action [panel (d)]. For both methods, we have launched an initial ensemble of \(N=2\times 10^{8}\) trajectories. The red lines serve as a guide for the spider and the fan.

Further annular interference patterns are also present for orbit 4, following the primary ridge [see Figs. 6(m) to (o)]. For the H-CQSFA, these patterns are equally strong throughout, while for the rate-based method, they are more prominent at the low-energy end of the primary rescattering ridge. This is evidence of different types of orbits being gathered under the umbrella of a single orbit, with the respective contributions being coherently superimposed. Overall, there is a better agreement of the single-orbit distributions from the rate-based method with the CQSFA when employing the Coulomb-corrected rate. This is more noticeable for the contributions of orbits 1 and 2, which are elongated along the \(p_{x}\) axis and narrowed along the polarization axis. For orbit 2, we observe a suppression at \(p_{z}=0\) and two bright spots, both in the rate-based approach employing \(W_{C}\) and in the boundary CQSFA calculations [see Figs. 6(f) and (h)] [29]. These spots are associated with the branching points that occur for the sub-barrier corrections at the field extrema (see Sec. III.1.2). They are more pronounced for the rate-based approach, as it uses an analytical approximation that overestimates the contribution from the first part of the contour [29]. The peaks are washed out in the H-CQSFA, as a consequence of the contributions of several types of orbits falling under the classification of orbit 2. Different coherent superpositions of orbits lead to distinct holographic patterns. For instance, the interference of orbits 1 and 2 leads to the fan, that of 2 and 3 to the spider, and the interference of orbits 3 and 4 to the spiral. These structures have been analyzed in detail in previous publications using the standard, boundary-type CQSFA [28; 29; 30; 46; 47; 14]. Therefore, we will keep these discussions as brief as possible in the present article, without resorting to specific figures. Sub-barrier Coulomb corrections will also lead to shifts in the interference fringes, but this has been analyzed elsewhere and would detract from the main objective of this work (for discussions, see, e.g., [51]).

### Initial sampling and holographic patterns

One important difference between the rate-based method and the H-CQSFA not addressed so far is related to how the initial conditions are sampled.

Figure 6: Single-orbit distributions for orbits 1, 2, 3 and 4, plotted in the first, second, third and fourth rows, respectively, following the classification provided in Table 1. The distributions were computed for the same parameters as in Fig. 5 using the forward method with the non-adiabatic ionization rate \(W_{0}\) (first column from the left), the forward method with the Coulomb-corrected non-adiabatic ionization rate \(W_{C}\) (second column from the left), the hybrid CQSFA with full action (third column from the left), and the standard pure boundary CQSFA (right column). We used \(N=2\times 10^{8}\) initial trajectories in both the rate-based method and the hybrid CQSFA.

In the rate-based method, the initial transverse momentum and ionization time are sampled either from \(W_{0}\) or \(W_{C}\).
This has the advantage that most of the sampled trajectories are located in the higher-probability region. For the H-CQSFA, so far, we have considered an initial Gaussian distribution whose parameters mimic those in the rate \(W_{C}\). This section aims to assess more systematically the impact of sampling from different initial conditions on the different holographic patterns. In the rate-based method, sampling from an arbitrary distribution will require correcting the weight of the trajectories at the detector, while, in the H-CQSFA framework, keeping a fixed number of points implies that we let in or leave out certain regions of the initial momentum space. As a first test, in Fig. 7, we plot single-orbit distributions computed with the H-CQSFA, in which the initial conditions were sampled from a single uniform grid. The distributions associated with orbits 1 and 2, shown in Figs. 7(a) and (b), resemble those obtained with the boundary CQSFA, displayed in Figs. 6(d) and (h). There are no rescattering ridges or caustics, which leads to the conclusion that the orbits causing such features are not being sampled. This is in striking contrast to the results shown in Figs. 6(c) and (g), obtained with an initial Gaussian sampling, which show these structures. The secondary ridge for orbit 3, displayed in Fig. 7(c), is also less defined than that obtained with the Gaussian sampling [Fig. 6(k)], and the annular interference structures associated with orbit 4 [see Fig. 7(d)] are only present near the perpendicular momentum axis, while for the previous sampling they are visible throughout [see Fig. 6(o)]. This implies that a narrower initial distribution probes the momentum region associated with rescattering in more detail. This shows that care must be taken when sampling, as the initial momenta lie in different regions and may be densely clustered; thus, a single uniform sampling distribution may not include all relevant initial momentum points, or will sample dense clusters inefficiently. It is therefore important to investigate alternative sampling distributions, such as a Gaussian, or alternatively to use adaptive sampling as in [50]. Next, we will sample the initial transverse and longitudinal momenta from a Gaussian distribution as defined in Eq. (45), whose widths \(\sigma_{p0x}\), \(\sigma_{p0z}\) will be altered to form a narrow (\(\mathcal{G}_{n}\)), a medium (\(\mathcal{G}_{m}\)) and a broad (\(\mathcal{G}_{b}\)) Gaussian. The standard deviation of the narrow Gaussian along the transverse direction is equal to that of \(W_{0}\); for \(\mathcal{G}_{m}\), it was chosen close to the standard deviation of \(W_{0}\) along the polarization axis; and for \(\mathcal{G}_{b}\), an arbitrary value above the widths predicted by the rates was taken. For simplicity, the width was set equal along each direction. The number of trajectories is kept constant at \(10^{8}\) to better illustrate the issue. The values for \(\sigma_{p0x}\), \(\sigma_{p0z}\) are given in Table 3; for clarity, we also added the widths of the distributions \(W_{0}\) and \(W_{C}\). Weighted initial sampling in the H-CQSFA may potentially increase efficiency. However, care must be taken because, depending on the sampling, certain types of trajectories may be included or left out, and this will have an effect on the overall patterns observed.
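For concreteness, the following minimal sketch draws initial momenta from the Gaussian of Eq. (45) with the widths of Table 3; all names are illustrative. In the rate-based variant, samples drawn this way would additionally have their weights corrected at the detector, as discussed next.

```python
import numpy as np

rng = np.random.default_rng(3)

def gaussian_initial_momenta(n, sigma_p0z, sigma_p0x):
    """Sample n initial momenta (p0z, p0x) from the Gaussian of Eq. (45)."""
    p0z = rng.normal(0.0, sigma_p0z, size=n)
    p0x = rng.normal(0.0, sigma_p0x, size=n)
    return np.stack([p0z, p0x], axis=1)

# Widths (sigma_p0z, sigma_p0x) in a.u., taken from Table 3.
widths = {"G_b": (0.75, 0.75), "G_m": (0.5, 0.5), "G_n": (0.25, 0.25)}
ensembles = {k: gaussian_initial_momenta(10**5, *v) for k, v in widths.items()}
```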
On the other hand, in the rate-based method, sampling from an arbitrary distribution will require correcting the weight of the trajectories at the detector when calculating the transition probability, by combining Eqs. (24) and (36). In Fig. 8, the photoelectron momentum distributions obtained from the different initial distributions are shown in order of decreasing width from top to bottom. In the left column, we show the results from the H-CQSFA, while in the right column we show the results from the rate-based method. Fig. 8(a) shows that, when using the broad Gaussian \(\mathcal{G}_{b}\), the high-energy ends of the spider, located around parallel momenta \(p_{z}=\pm 1\) a.u. and perpendicular momenta \(|p_{x}|<0.5\) a.u., are elongated compared to the plots with the narrower Gaussians \(\mathcal{G}_{m}\) and \(\mathcal{G}_{n}\), for example in Fig. 8(e). Additionally, the fan, which is the interference structure near the threshold \((p_{z},p_{x})=(0,0)\) for low parallel momentum and up to around 0.5 a.u. in perpendicular momentum, is wider in Fig. 8(a) compared to Fig. 8(c). As the Gaussian distributions narrow, the spider shortens, losing its high-energy ends, and the fan becomes less spread. In contrast, the rescattering ridges and the interference rings in higher momentum regions become better defined. For instance, Fig. 8(e), computed for the narrow Gaussian, exhibits truncated spider-like fringes and a somewhat blurred fan, but at least two distinct rescattering ridges and circular interference fringes following the ridges.

Figure 7: Single-orbit photoelectron momentum distributions calculated with the H-CQSFA for the same parameters as in Fig. 6, but considering a uniform initial sampling of \(N=2\times 10^{8}\) trajectories. Panels (a), (b), (c) and (d) refer to the contributions from orbits 1, 2, 3 and 4, respectively.

\begin{table} \begin{tabular}{c c c} \hline Distribution & \(\sigma_{p0x}\) (a.u.) & \(\sigma_{p0z}\) (a.u.) \\ \hline \(W_{C}\) & 0.28 & 0.45 \\ \hline \(W_{0}\) & 0.25 & 0.52 \\ \hline \(\mathcal{G}_{b}\) & 0.75 & 0.75 \\ \hline \(\mathcal{G}_{m}\) & 0.5 & 0.5 \\ \hline \(\mathcal{G}_{n}\) & 0.25 & 0.25 \\ \hline \end{tabular} \end{table} Table 3: Widths \(\sigma_{p0x}\) and \(\sigma_{p0z}\) of the initial momentum distributions obtained from \(W_{C}\) and \(W_{0}\), compared with the widths of the arbitrary Gaussian distributions.

There is also a marked improvement in contrast for the carpet-like structure forming near the \(p_{x}\) axis and close to the caustic around the \(p_{x}\) axis. Additional interference structures with the shape of that caustic, which have been identified in the single-orbit distributions for orbit 3, are also visible regardless of the initial sampling taken. The reason a more focused sampling around the core leads to these clearer rings is a better sampling of rescattering trajectories, which often start close to the core. This is explored in more detail in the next section. These observations compare favorably to the results already presented in Fig. 5, where the Coulomb-corrected rate-based model is investigated. There, it was concluded that the Coulomb correction during tunneling leads to a broader transverse momentum distribution, which allows a better probing of the fan. An essential difference from the behaviors observed in Fig. 5 is noticed in the legs of the spider.
When using the different ionization rates to sample the initial conditions, the width of the momentum distribution along the polarization axis was in both cases above the width selected for the narrow Gaussian, as shown in Table 3. Therefore, the high-energy ends of the spider were not appreciably affected. Overall, the plots in the right column of Fig. 8 show effects similar to those obtained from the hybrid CQSFA. Finally, we must highlight that, in the bottom panels, the amplitude of the PMD is significantly lower than in the middle and top panels.

## V Momentum mapping

The features discussed in the previous section can be understood in greater depth by looking at the initial-to-final momentum mapping for specific sets of orbits. Here, we will address the question of what momentum ranges \(\mathbf{p}_{0}\) at the tunnel exit lead to specific final momenta \(\mathbf{p}\). These studies are necessary because the joint influence of the driving field and the binding potential will modify the electron momenta during the continuum propagation. Moreover, they will shed light on the initial momenta leading to specific holographic features, and on why a certain sampling highlights particular momentum ranges and structures. We will perform this mapping for orbits 1 to 4 according to the classification in Table 1, and compare our results with the single-orbit distributions in Sec. IV.1. Because we have observed, using the single-orbit PMDs, that both for the forward and the hybrid methods there are unexpected features associated with rescattered orbits, such as ridges for orbits 1, 2 and 3, we will classify the dynamics further using the tunnel exit \(z_{0}\) and the Bohr radius \(r_{0}\) as parameters. This classification was first employed in [30] within the context of the boundary CQSFA and uses the distance of closest approach \(r_{c}\) of an electron along a specific orbit. If, during the continuum propagation, \(r_{c}\) lies within the region \(r_{0}<r_{c}<|z_{0}|\), where \(|z_{0}|\) is the radial distance from the origin determined by the absolute value of the tunnel exit, we assume that the electron has undergone a soft collision. This means that the residual potential was able to deflect the electron and bring it closer to the core than the tunnel exit, but that the electron has not reached a region in which the potential is dominant. If the distance of closest approach \(r_{c}\) is smaller than the Bohr radius \(r_{0}\), we consider that the electron has undergone a hard collision with the core. Within this classification, a direct orbit implies \(r_{c}>|z_{0}|\) throughout. Unless otherwise stated, we will focus on a comparison between the rate-based method and the standard CQSFA. The conclusions drawn for the rate-based scenario are also valid for the H-CQSFA. Fig. 9 shows the initial [panels (a) and (c)] and final [panels (b) and (d)] momenta for orbit 1, following the classification given in Table 1. The first row shows the results obtained within the standard boundary CQSFA approach, while the second row shows the ones from the rate-based forward method. With both methods, the initial momenta of orbit 1 exhibit a flame-shaped structure, which is filled in for the final momenta. However, the forward method renders some structure inside the flame, as well as rescattering ridges, that are not present in its boundary CQSFA counterpart. Further insight into these additional structures is achieved by applying our spatial filter to the results from the forward method, as shown in Fig. 10.
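The spatial filter described above is a simple three-way test on the distance of closest approach; a minimal sketch follows, assuming \(r_{c}\) has been recorded during the propagation (all in a.u., with the Bohr radius \(r_{0}=1\) for hydrogen).

```python
def collision_type(rc, z0, r0=1.0):
    """Classify an orbit by its distance of closest approach rc, given the
    tunnel exit z0 and the Bohr radius r0 (spatial filter of Ref. [30])."""
    if rc > abs(z0):
        return "direct"    # never came closer to the core than the tunnel exit
    if rc < r0:
        return "hard"      # probed the region where the potential dominates
    return "soft"          # deflected, with r0 < rc < |z0|
```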
Figure 8: PMDs after sampling from Gaussian functions with the different widths \(\mathcal{G}_{b}\) (top row), \(\mathcal{G}_{m}\) (center row) and \(\mathcal{G}_{n}\) (bottom row), as given in Table 3. The left panels were computed with the hybrid forward-boundary CQSFA, and the right panels with the forward method, using the same initial Gaussian distributions and correcting the weight at the detector. The figure uses an initial ensemble of \(N=10^{8}\) trajectories.

The upper row of Fig. 10 shows the orbits that reach the detector without getting closer than the tunnel exit to the core. These are expected to be direct orbits, which escape the Coulomb attraction and reach the detector. The initial momenta of these orbits exhibit a flame-shaped structure resembling that obtained with the boundary CQSFA computations displayed in Fig. 9. This means that, to reach the detector with vanishing momentum \(\mathbf{p}=0\), the electron must escape with a non-vanishing momentum \(\mathbf{p}_{0}\). This is clear, as the Coulomb force pulls the electron back to the core, so that an electron with vanishing initial momentum would otherwise be trapped. The corresponding momentum distribution at the detector, plotted in Fig. 10(b), is centered at \((p_{z},p_{x})=(0,0)\) and, as expected, does not exhibit rescattering ridges. The Coulomb attraction has also closed the flame-type gap, as it affects the orbits with low initial velocity the most and those with high initial velocity only marginally. Let us also note that there is some inner structure inside the flame in the initial momentum distribution of the direct orbits obtained with the forward method. These are rare events, and a closer look at the dynamics of these orbits has revealed that they are slightly deflected by the core. However, their distance of closest approach is always above the tunnel exit, and they do not lead to rescattering ridges. In the middle panels of Fig. 10, we plot the momentum mapping for orbit 1 deflected by the core with soft collisions. The initial momenta of these orbits [Fig. 10(c)] occupy a much more restricted area around the perpendicular momentum axis and lead to a slightly broader final momentum map, with a v-shaped structure near \((p_{z},p_{x})=(0,1\) a.u.) [see Fig. 10(d)]. This structure is also present in the single-orbit PMD. Finally, in the bottom panels, we show the mapping for the hard-colliding orbits. Their initial momenta, displayed in Fig. 10(e), are located in well-defined regions close to the polarization axis, namely a central island similar to that associated with the softly scattered orbits, surrounded by two much more densely populated islands around larger parallel momenta. The central island leads to the ridge of energy \(10U_{p}\). In this case, the initial momentum of the particle is so small that the only escape route is to obtain enough kinetic energy via a collision with the core. The peripheral islands lead to a ridge of much lower energy, associated with a longer return. These ridges are present in the corresponding single-orbit distribution [Figs. 6(a) and (b)]. One should also note that the initial transverse momenta of the orbits inside the flame in Fig. 10(a) lie around \(p_{0x}=0.25\) a.u., while the initial momenta of the soft- and hard-colliding orbits lie below this region. This suggests some kind of cutoff value for the initial transverse momentum, meaning that if the orbits start with lower momenta, they will be more influenced by the core.
Figure 11 shows the initial to final momentum mapping for orbit 1 from the hybrid CQSFA approach, without applying the spatial filtering. The initial conditions were sampled from the broad \(\mathcal{G}_{b}\) [Fig. 11(a)] and narrow \(\mathcal{G}_{n}\) [Fig. 11(c)] Gaussians given in Table 3. For the narrower Gaussian, the initial momentum distribution resembles that obtained with the rate-based approach. This similarity persists in the final momentum distributions, which exhibit a well-defined high-energy ridge such as in Fig. 10(f) and a structure around the perpendicular momentum axis resembling that in Fig. 10(e). The secondary, low-energy ridge present in Fig. 10(f) is largely absent, as the narrow Gaussian distribution does not cover sufficient initial momenta in the peripheral islands. We can also observe how the density of points in the different momentum regions changes: when using the broad Gaussian, the points uniformly cover all the momentum space shown in the plot, while when the width is reduced, they are denser around vanishing momenta. These findings are in agreement with those in Fig. 8, which show that, for a fixed number of orbits, narrower initial distributions allow a better probing of the rescattered trajectories, thus making the associated holographic patterns better resolved. Figure 11: Initial and final momentum of orbit 1, left and right panels, respectively, from a hybrid CQSFA computation using \(N=1\times 10^{7}\) initial orbits. The initial distribution is sampled from the Gaussian distributions \(\mathcal{G}_{b}\) (a) and \(\mathcal{G}_{n}\) (c), whose widths are given in Table 3. In Fig. 12, we display the initial (left column) and final (right column) momenta of orbit 2. The top row shows the results obtained within the standard boundary CQSFA framework. The middle and bottom rows show the results from the rate-based method using the spatial filter. The CQSFA momenta, depicted in Fig. 12(a), exhibit the behavior expected from field-dressed Kepler hyperbolae starting from the "wrong side" with regard to the detector. At the time of ionization, these orbits need a non-vanishing transverse momentum component to be able to escape with at most a soft collision with the core. For that reason, the bulk of the electron trajectories that are classified as orbit 2 have prominent contributions in the large transverse momentum region. This becomes relevant in rate-based or hybrid methods using initially biased distributions, as shown in Fig. 8: if the initial Gaussian distribution is too narrow, the fan may be compromised. In panels (c) and (d) we show the superposition of direct (orange) and soft-colliding (purple) orbits according to our classification. The non-colliding orbits 2 are released with non-vanishing transverse momenta and are mainly located along the transverse momentum axis.
Their momentum at the detector forms a well-defined structure elongated along the transverse momentum axis, as well as a small region along the polarization axis with almost vanishing transverse final momenta. The superposition shown in Fig. 12(c) resembles the initial distribution of orbit 2 obtained within the boundary CQSFA approach, except for some structures around vanishing momentum, which are not present in the pure boundary CQSFA calculations. This resemblance is expected, as the vast majority of such orbits are laser-dressed hyperbolae. Furthermore, the type-2 orbits classified as "direct" are deflected, but their distance of closest approach is never smaller than that defined by the tunnel exit. Thus, the deflection is not picked up by the spatial filter. On the other hand, if we consider the hard-collision condition, the initial momentum distribution [Fig. 12(e)] will be similar to its orbit 1 counterpart. The final momentum distribution [Fig. 12(f)] will result in a well-resolved secondary ridge and a caustic. These structures are defined by the central island. The peripheral islands give rise to the structure near the perpendicular momentum axis \(p_{x}\) and to remnants of ridges at higher energies. A noteworthy difference between the initial momenta of the hard-colliding orbits 1 and 2 is observed in the population of the two islands of larger momentum along the polarization axis. Those from orbit 1 are densely populated and lead to a well-defined rescattering ridge, while those from orbit 2 are sparsely populated. Figure 12: Initial and final momentum of orbit 2, left and right columns, respectively. Panels (a) and (b) show the mapping from a boundary CQSFA calculation. The middle and bottom panels show the mapping from the rate-based method using \(N=10^{7}\) initial orbits. The second row shows non-colliding (orange) and soft-colliding orbits (purple). The third row displays the hard-colliding ones. Fig. 13 shows the mapping for orbit 3. As in the previous figures, in the top panels we show the mapping from a boundary CQSFA calculation. The middle panels show direct (orange) and soft-colliding (purple) orbits, and the bottom panels show the hard-colliding ones, after applying the spatial filter to the results from the rate-based method. An overall feature is that the transverse momentum component changes sign during the electron propagation, so that negative values of \(p_{0x}\) map into positive values of \(p_{x}\) and vice versa. Therefore, even if the spatial filter classifies some orbits 3 as direct, they must still be deflected, so that the binding potential changes the transverse momentum in such a way that the conditions in Table 1 hold. Interestingly, the initial maps for the boundary CQSFA and for the direct and soft-colliding orbits, shown in Figs. 13(a) and (c), occupy a much more restricted transverse momentum region than those for the standard orbits 1 and 2, plotted in Figs. 9(a) and 12(a), respectively. Nonetheless, the initial parallel momentum component \(p_{0z}\) extends up to relatively large values.
This may result in some holographic structures involving orbit 3 being truncated if the initial distribution is taken too narrow (see the example of the spider discussed in Sec. IV.2 for the Gaussian \(\mathcal{G}_{n}\)). The final momentum mapping obtained for the CQSFA [Fig. 13(b)] is delimited by a caustic whose apex is located around \((p_{z},p_{x})=(0,1.3\) a.u.) up to perpendicular momenta near \(p_{x}=\pm 0.5\) a.u., but occupies a larger region closer to the field-polarization axis. This region has been shown in our previous publication [30], and the nature of the orbits changes beyond the caustic. Next, we discuss what happens for the rate-based method using the aforementioned spatial filters [Figs. 13(c) to (f)]. The initial momentum maps for the direct and soft-recolliding orbits, displayed in Fig. 13(c), resemble the CQSFA outcome, while that plotted in Fig. 13(e) exhibits a central island and two peripheral islands, all of which are close to the field polarization axis. A comparison of the CQSFA and the rate-based method shows that the direct and soft-recolliding orbits do not contribute to the caustic [see Figs. 13(c) and (d)]. Instead, in both Figs. 13(a) and (e) there exists an arch-shaped structure near the origin which unites the two peripheral islands. This structure leads to a caustic in both Figs. 13(b) and (f), but not in Fig. 13(d). On the other hand, because the direct and soft-colliding orbits 3 are present in the boundary-type CQSFA, they fill the structure inside the caustic in Fig. 13(b), while there are gaps below the caustic in Fig. 13(f). That this region is filled can also be seen by inspecting the final momentum map in Fig. 13(d), resulting from the initial momenta given in Fig. 13(c). The central island in Fig. 13(e) leads to a rescattering ridge. Both the direct and soft-colliding orbits exhibit a gap at non-zero transverse momentum. The final momentum distribution of the hard-colliding orbits [Fig. 13(f)] resembles the orbit 2 counterpart [Fig. 12(f)], except for the empty areas extending along \(p_{z}=\pm 1\) a.u. This can be understood by comparing the initial momenta of both orbits: the higher-energy ends of the two islands along the polarization axis are less populated for the hard-rescattered orbit 3 [Fig. 13(e)] than for orbit 2 [Fig. 12(e)]. Nonetheless, the initial and final momentum maps for the hard-colliding orbits 3 resemble their counterparts for orbit 2 [see Figs. 12(e) and (f) for comparison], with the difference that the initial momenta lie in the opposite half-plane. Figure 13: Initial and final momentum of orbit 3, left and right panels, respectively. Panels (a) and (b) show the mapping from a boundary CQSFA calculation. The middle and bottom panels show the mapping from the rate-based method using \(N=10^{7}\) initial orbits. The second row shows non-colliding (orange) and soft-colliding orbits (purple). The third row displays the hard-colliding ones. Finally, in Fig. 14, we plot the initial to final momentum mapping of orbit 4. The top panels stem from a boundary CQSFA calculation, the middle panels show the non-colliding (orange) and soft-colliding (purple) orbits, and the bottom panels show the hard-colliding orbits 4, after applying the filter to the forward method results. Overall, the initial momenta of these orbits occupy much more restricted momentum regions of low transverse momenta, in the half-plane opposite to that of the final momenta. This shows that orbit 4 starts close to the field polarization axis. The standard CQSFA outcome shows an initial momentum region localized in the vicinity of the origin \((p_{0z},p_{0x})=(0,0)\) [Fig. 14(a)], which leads to a large final momentum region whose boundary is the high-energy rescattering ridge [Fig. 14(b)].
In contrast, the mapping from the rate-based method includes other types of orbit 4. The initial momenta of the non-colliding orbits within our classification [Fig. 14(c)] occupy a small region located around \(p_{0x}=-0.2\) a.u. and \(-0.5\) a.u. \(<p_{0z}<0.5\) a.u., reaching the detector with small momentum values located along the transverse momentum axis. The initial momenta of the soft-colliding orbits are also localized within the same region along the parallel axis (\(-0.5\) a.u. \(<p_{0z}<0.5\) a.u.), but extend further in the perpendicular direction. The initial momentum distribution of the hard-colliding orbits resembles the distributions encountered for all the other hard-colliding orbits, that is, a central island and two islands of higher parallel initial momentum. The final momentum distribution exhibits a high-energy rescattering ridge, typically expected for these orbits, as shown in the final momentum distribution from the boundary CQSFA calculation. Comparing the initial momenta displayed in Fig. 14(a) and Fig. 14(e), we observe that only a small region around vanishing initial momentum is obtained with the boundary CQSFA calculations, and this is the one already leading to the high-energy rescattering ridge. The orbits missing in the boundary CQSFA are the reason behind the absence of many holographic patterns that are observed for the rate-based and hybrid approaches and are associated solely with orbit 4, such as the annular interference structures following the primary rescattering ridge. Interestingly, there is a gap around vanishing initial transverse momentum for all orbits except orbit 4. We see this both with the rate-based method and with the boundary CQSFA approach. Also, we observe clear high-energy rescattering ridges both for hard-scattered orbits 1 and for hard-scattered orbits 4. Looking at the dynamics of two of the orbits leading to this structure, which end up classified as orbits 1 and 4, reveals that both start with small transverse momenta of opposite signs, and that the orbit classified as orbit 1 undergoes its hard collision with the core later on, being driven by the field for longer before colliding. Overall, the comparison between the mappings obtained with both methods shows that the forward methods allow for several types of orbits 1, 2, 3 and 4, which are left out when restrictions upon the orbits' dynamics are made, as is the case in the boundary CQSFA calculations. Furthermore, the previous analysis allows us to understand why the effect of the Coulomb correction added to the ionization rate was more noticeable on the single-orbit PMDs of orbits 1 and 2. From the momentum mapping, we can see how the initial momenta of the non-colliding orbit 1, the one that most resembles the expected behavior of direct orbits, and of the soft- and non-colliding orbits 2 extend to higher values along the transverse momentum axis, being more sensitive to changes in the width of the distribution in this direction. This also explains our observations regarding the extension of the fan, which originates from the interference of direct orbits and field-dressed hyperbolae. Figure 14: Initial and final momentum of orbit 4, left and right panels, respectively. Panels (a) and (b) show the mapping from a boundary CQSFA calculation. The middle and bottom panels show the mapping from the rate-based method using \(N=10^{7}\) initial orbits. The second row shows non-colliding (orange) and soft-colliding orbits (purple). The third row displays the hard-colliding ones.
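To make the forward propagation underlying these maps concrete, the classical skeleton of the rate-based method can be sketched as follows. This is a minimal illustration under our own simplifying assumptions (a two-dimensional hydrogen-like model, a \(\sin^{2}\) pulse, a fixed tunnel exit, and illustrative field parameters), not the implementation used here: it launches an electron at the tunnel exit, integrates Newton's equations in the combined laser and Coulomb fields, and records the initial-to-final momentum pair. The semiclassical phases and weights, trapped trajectories, and the analytic mapping of the final Kepler hyperbola onto its asymptotic momentum are all omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters in atomic units (our assumptions, not the paper's).
E0, OMEGA, N_CYC = 0.0534, 0.057, 4
T_PULSE = 2.0 * np.pi * N_CYC / OMEGA

def field(t):
    """Linearly polarized field along z with a sin^2 envelope."""
    if t < 0.0 or t > T_PULSE:
        return 0.0
    return E0 * np.sin(np.pi * t / T_PULSE) ** 2 * np.cos(OMEGA * t)

def rhs(t, y):
    """Newton's equations for y = (z, x, pz, px) in the laser plus -1/r."""
    z, x, pz, px = y
    r3 = max((z * z + x * x) ** 1.5, 1e-9)  # soften the Coulomb singularity
    return [pz, px, -field(t) - z / r3, -x / r3]

def initial_to_final(t0, p0x, z0=-8.0, t_extra=2000.0):
    """Launch the electron at the tunnel exit z0 with transverse momentum
    p0x and vanishing parallel momentum; return the (p0, p) pair. The long
    propagation after the pulse is a crude stand-in for the analytic Kepler
    asymptote used in the actual method."""
    y0 = [z0, 1e-4, 0.0, p0x]  # tiny offset avoids the strict on-axis case
    sol = solve_ivp(rhs, (t0, T_PULSE + t_extra), y0, rtol=1e-8, atol=1e-10)
    _, _, pz, px = sol.y[:, -1]
    return (0.0, p0x), (pz, px)

# A toy ensemble of ionization times and transverse momenta.
rng = np.random.default_rng(1)
pairs = [initial_to_final(t0, p0x)
         for t0, p0x in zip(rng.uniform(0.0, T_PULSE, 200),
                            rng.normal(0.0, 0.3, 200))]
```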
## VI Conclusions In the present work, we develop forward and hybrid methods that are orbit-based and Coulomb-distorted, and that incorporate tunneling and quantum interference, and we apply them to ultrafast photoelectron holography. These methods are more versatile than the Coulomb quantum-orbit strong-field approximation (CQSFA) in its original form, as they do not rely on any prior knowledge or assumptions about the dynamics of the contributing orbits. Although they use elements of the CQSFA, they start by launching an ensemble of Coulomb-distorted trajectories that are either used as guesses for a boundary problem, or propagated forward to their asymptotic momenta. The former strategy is employed in a hybrid CQSFA (H-CQSFA) and the latter in a forward rate-based method. In contrast, the CQSFA in its original form is implemented as a boundary problem in which specific assumptions upon the orbits are made, the standard, Coulomb-free SFA is used as a first approximation, and the binding potential is introduced incrementally. The methods agree well with the TDSE computations and with each other, and additional features are encountered, in comparison with the standard CQSFA, such as annular interference patterns and additional rescattering ridges. The main holographic structures, such as the fan, the spider and the spiral, are present in all cases. A key finding is that both proposed methods are strongly dependent on the initial conditions at the tunnel exit, which are either incorporated as ionization rates or as sub-barrier contributions to the semiclassical action. The ionization rate employed in the present article was constructed using the sub-barrier contributions of the CQSFA in the low-frequency approximation. Therefore, besides being non-adiabatic, similar to the one encountered in [23], it exhibits Coulomb corrections. Our results prove the relevance of including the latter in the rate, as this achieves a better agreement with the hybrid CQSFA calculations. In a previous publication, sub-barrier corrections were used to construct analytic approximations for the CQSFA [29]. However, this was done in a boundary-type problem instead of being used in a rate. In our rate-based method, once the trajectories are sampled from a given ionization probability, they carry this weight along the propagation, and it contributes to the final PMDs. The hybrid CQSFA also launches a set of initial conditions, but, as the boundary problem is subsequently solved, sampling from different distributions does not affect the weight of the trajectories at the detector. However, it can impact the stability of the solver or, for a fixed number of trajectories, it can change how well the different holographic patterns are resolved. Examples of this influence have been provided in Sec. IV. Therein, it was shown that the sub-barrier Coulomb corrections broaden (narrow) the initial electron-momentum distribution in the direction perpendicular (parallel) to the laser-field polarization. This will influence holographic patterns such as the fan, the spider and the carpet (see Sec. IV.1).
Furthermore, by choosing an arbitrarily narrow or broad initial sampling, one may emphasize or leave out groups of orbits and their contributions to specific holographic structures. For instance, a narrow Gaussian will produce better-resolved rescattering ridges and interference features in that region, but will result in a coarser fan and a truncated spider. These outcomes are related to the orbits whose interference leads to the structures: those released in the continuum near the polarization axis will be favored by a narrower initial sampling, while those freed with large initial transverse momenta will be probed better if the initial conditions are sampled broadly. In addition, care must be taken with pre-assuming the orbits' dynamics. The conditions upon the tunnel exit and momentum components used in their classification and given in Table 1 are insufficient to guarantee the behavior first stated in [42] and used in the original, boundary-type CQSFA [25; 27; 28]. The underlying assumptions that orbit 1 goes directly to the detector, that orbits 2 and 3 are laser-dressed hyperbolae, and that orbit 4 goes around the core before reaching the detector leave out whole classes of orbits, which behave differently but still fulfill the conditions in Table 1. Although these orbits do not influence standard holographic structures, such as the fan, the spider or the spiral, they lead to additional rescattering ridges, low-energy structures and caustics, which appear both in momentum mappings and in single-orbit distributions. Furthermore, their interference may lead to additional holographic structures. Examples are the interference of different types of orbit 4, which can be associated with pairs of short and long rescattered orbits that exist in the Coulomb-free strong-field approximation [62; 72; 33], fork-like structures and secondary ridges [39; 40], and also additional types of orbits in the spiral. These latter orbits have been identified recently as multi-pass trajectories [73]. Closer scrutiny also calls into question the nature of the orbits. In the boundary problems solved by us so far, orbits 1 and 2 are direct or at most lightly deflected by the potential, orbit 3 is a hybrid, and orbit 4 is rescattered. However, single-orbit distributions have revealed rescattering ridges for orbits 1, 2 and 3 as well. For that reason, we have employed the spatial filter from our previous publication [30] to sort the orbits launched by the forward and hybrid approaches into "direct", "deflected" and "hard scattered". Comparing the orbits' distance of closest approach \(r_{c}\) with the tunnel exit \(|z_{0}|\) and the Bohr radius \(r_{0}\), we have called direct all those trajectories for which \(r_{c}\geq|z_{0}|\), soft scattered those for which \(r_{0}<r_{c}<|z_{0}|\), and hard scattered those for which \(r_{c}\) is smaller than or equal to the Bohr radius. These assumptions have resulted in all types of CQSFA orbits, from 1 to 4, having direct, soft-scattered, and hard-scattered subsets. Direct orbits 1, and soft-scattered orbits 2 and 3, correspond broadly to those obtained with the standard CQSFA. Orbit 4 in the standard CQSFA corresponds to a subset of the hard-scattered orbits 4 for the forward and hybrid methods. We have explored these types of orbits in detail in initial to final momentum maps, which revealed several noteworthy features.
First, all hard-scattered orbits lead to three islands along the polarization axis in the \(p_{0x}p_{0z}\) plane for the initial momenta: a central island near the origin and two peripheral islands centered at non-vanishing parallel momenta. For the final momenta, the central island leads to rescattering ridges, whose energy depends on the orbit in question, and the peripheral islands lead to caustics and low-energy structures. The other sets of orbits behave in a less universal way regarding the initial conditions, but there are similarities. In general, direct orbits fill the whole final momentum grid, while soft-scattered orbits fill momentum regions in the vicinity of \(p_{z}=0\), extending to relatively large perpendicular final momenta. Particularly similar are the momentum mappings of the soft-scattered orbits 1 and 4, which start from a central island and fill a momentum region near the perpendicular momentum axis. One should note, however, that spatial filtering using the distance to the core brings some degree of arbitrariness. For instance, there are deflected orbits that are classified as "direct" because their perihelion is larger than or equal to the distance defined by the tunnel exit. This happens to orbits 2, 3 and, in particular, 4. For orbit 1, there is also a subset of orbits that start inside the flame-shaped gap and are deflected but not detected by the spatial filter. Still, it is remarkable how neat the outcome of the filtering is, in general. Other types of criteria have been used in the literature to separate soft from hard collisions, such as, for instance, the scattering angle [74]. Finally, the method developed in this work is flexible enough to be applied to photoelectron holography in tailored fields. Additionally, it enables an in-depth study of caustics, ridges and low-energy structures in a fully Coulomb-distorted framework incorporating tunneling and quantum interference, and it may be extended to scenarios with more than one active electron. ###### Acknowledgements. We thank C. Hofmann, N. Shvetsov-Shilovsky and S. Brennecke for useful discussions. This work was funded by grants No. EP/J019143/1 and EP/T517793/1 from the UK Engineering and Physical Sciences Research Council (EPSRC). A.S.M. acknowledges funding support from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie Grant Agreement SSFI No. 887153.
2305.02363
Entity Tracking in Language Models
Keeping track of how states of entities change as a text or dialog unfolds is a key prerequisite to discourse understanding. Yet, there have been few systematic investigations into the ability of large language models (LLMs) to track discourse entities. In this work, we present a task probing to what extent a language model can infer the final state of an entity given an English description of the initial state and a series of state-changing operations. We use this task to first investigate whether Flan-T5, GPT-3 and GPT-3.5 can track the state of entities, and find that only GPT-3.5 models, which have been pretrained on large amounts of code, exhibit this ability. We then investigate whether smaller models pretrained primarily on text can learn to track entities, through finetuning T5 on several training/evaluation splits. While performance degrades for more complex splits, we find that even when evaluated on a different set of entities from training or longer operation sequences, a finetuned model can perform non-trivial entity tracking. Taken together, these results suggest that language models can learn to track entities but pretraining on text corpora alone does not make this capacity surface.
Najoung Kim, Sebastian Schuster
2023-05-03T18:01:13Z
http://arxiv.org/abs/2305.02363v2
# Entity Tracking in Language Models ###### Abstract Keeping track of how states and relations of entities change as a text or dialog unfolds is a key prerequisite to discourse understanding. Despite this fact, there have been few systematic investigations into the ability of large language models (LLMs) to track discourse entities. In this work, we present a task to probe to what extent a language model can infer the final state of an entity given an English description of the initial state and a series of state-changing operations. We use this task to first investigate whether Flan-T5, GPT-3 and GPT-3.5 can track the state of entities, and find that only GPT-3.5 models, which have been pretrained on large amounts of code, exhibit this ability. We then investigate whether smaller models pretrained primarily on text can learn to track entities, through finetuning T5 on several training/evaluation splits. While performance degrades for more complex splits, we find that even for splits with almost no lexical overlap between training and evaluation, a finetuned model can often perform non-trivial entity tracking. Taken together, these results suggest that language models can learn to track entities but pretraining on large text corpora alone does not make this capacity surface. ## 1 Introduction A key prerequisite to long-context understanding and generating coherent text is the ability to accurately represent entities as the discourse unfolds (Karttunen, 1976; Groenendijk and Stokhof, 1991; Heim, 2002; Nieuwland and Van Berkum, 2006; Kamp et al., 2011, _i.a._). For example, consider the following example in the context of a recipe: (1) Put the eggs, sugar, flour, and baking powder in a bowl and mix to form a light batter. Make sure that the final batter does not contain any lumps of flour or sugar. In order to understand this instruction, several distinct abilities are necessary: **New discourse entity recognition:** recognizing when new discourse entities are introduced. E.g., _a bowl_ introduces a new discourse entity but _the final batter_ or _any lumps of..._ does not. **Coreference resolution:** associating referring expressions with the right discourse entities. E.g., _a light batter_ and _the final batter_ refer to the same entity. **Discourse entity tracking:** tracking the state changes made to each discourse entity. E.g., _the eggs_ are put into _the bowl_ and _mixed_ with the other ingredients. There exist many datasets that aim to evaluate these abilities (e.g., Walker et al., 2006; Pradhan et al., 2012; Rahman and Ng, 2012; Weston et al., 2015; Chen et al., 2018; Bamman et al., 2020; Uryupina et al., 2020) and many NLP models that aim to solve these tasks (e.g., Haghighi and Klein, 2010; Lee et al., 2011; Hill et al., 2016; Henaff et al., 2017; Ji et al., 2017; Lee et al., 2017; Bosselut et al., 2018; Gupta and Durrett, 2019a,b; Aina et al., 2019; Toshniwal et al., 2020; Wu et al., 2020). In the context of large language models (LLMs), Tenney et al. (2019), Clark et al. (2019), and Sorodoc et al. (2020) found that representations of LSTMs and Transformer-based models such as BERT (Devlin et al., 2019) do capture coreference relations. Loáiciga et al. (2022) and Schuster and Linzen (2022) found that pretrained models are able to detect whether noun phrases introduce discourse entities, albeit not fully systematically. Figure 1: A sketch of our entity tracking task.
The question of whether LLMs _can track the state of discourse entities_, however, has only been indirectly evaluated so far. Toshniwal et al. (2022) showed that GPT-2 (Radford et al., 2019) can learn to predict valid chess moves based on a compact, nonlinguistic description of previous moves. While this suggests that LLMs such as GPT-2 can learn the rules of chess and can learn to track the states in the game, it does not tell us whether LLMs track the entity states in natural language discourses. Li et al. (2021) evaluated whether the state of an entity can be decoded from LLM representations, which most directly aims to evaluate entity tracking abilities. Using a probing classifier, they found that the state of an entity can be decoded from T5 and BERT with high accuracy. However, as we show in a reanalysis of their results (Section 2), their results do not provide definitive evidence for entity tracking. Hence, whether LLMs can track entities during the processing of natural language discourse remains an open question. **Contributions** This work attempts to answer this question by developing a task targeted towards evaluating a language model's ability to track state changes of discourse entities (illustrated in Figure 1). We use this novel task to evaluate GPT-3 (Brown et al., 2020), GPT-3.5, and Flan-T5 (Chung et al., 2022) models without any additional finetuning. We find that only models in the GPT-3.5 series, which have been trained on both text and code, are able to perform non-trivial entity tracking. We then show that a smaller language model (T5: Raffel et al. 2020) can learn to perform non-trivial entity tracking, demonstrating the capacity to generalize to state descriptions with little lexical overlap. Our results suggest that language models can learn to track entities but pretraining on text corpora alone does not make this capacity surface. Footnote 1: [https://beta.openai.com/docs/model-index-for-researchers/models-referred-to-as-gpt-3-5](https://beta.openai.com/docs/model-index-for-researchers/models-referred-to-as-gpt-3-5) ## 2 Reanalysis of results from Li et al. (2021) We start by examining Li et al. (2021), the work most relevant to ours. They adapted two existing datasets, Alchemy (Long et al., 2016) and TextWorld (Côté et al., 2019), to test a model's ability to track state changes of an entity. The input to the model is a text description of the initial world state followed by state-changing instructions. Based on this description, the model is expected to identify the correct final state of each entity. For example, for Alchemy, the model receives formulaic descriptions of 7 beakers filled with different amounts of colored liquids, followed by instructions that manipulate the contents of the beakers, such as pouring the liquid from one beaker into another or draining the liquids from a beaker. Given an input like (2), the model is expected to recognize that the first beaker has 4 units of brown liquid, the second beaker has 2 units of red liquid, and the third beaker is empty. (2) _The first beaker has 1 green, the second beaker has 2 red, the third beaker has 3 red. Pour the last red beaker into beaker 1. Mix._ Using such descriptions, Li et al. (2021) found that a probing classifier that takes as input the encoding of these descriptions from T5 or BART (Lewis et al., 2020) is able to correctly predict the state of 75-76% of the entities, suggesting some degree of success on entity tracking.
However, this conclusion becomes questionable when the datasets and the results are scrutinized further. Specifically, we conducted a fine-grained analysis of the success cases of the Alchemy experiment. In this experiment, the state of each beaker was probed after each state-changing instruction. Because each instruction targets at most two beakers (e.g., _pour X into Y_) and there are 7 beakers in total, the dataset contains comparatively few cases that probe a beaker that actually underwent a change. Indeed, 62.7% of all probed beaker states were identical to the initial state, meaning that a simple baseline that always predicts the initial state already achieves 62.7% accuracy (this is also noted by Li et al.). A second potential shortcut was the high rate of empty final states (32.4%). For these cases, the initial state can often be entirely disregarded, due to the presence of an emptying instruction such as _Drain the fourth beaker_: this instruction alone is sufficient to predict the fourth beaker's final state independent of its initial state. Therefore, such examples are also not well suited to fully assess entity tracking. Given the high prevalence of these two trivial scenarios (87.6% in total), only 12.4% of the datapoints can be considered as truly assessing state changes unfolding over a discourse context. If the accuracy is computed on the trivial and non-trivial cases separately, the probing classifier achieves 86.8% accuracy on trivial cases but only 3.1% accuracy on non-trivial cases, showing that most of the reported success derives from the trivial cases. In summary, our reanalysis suggests that the results of Li et al. (2021) do not provide conclusive evidence for non-trivial state tracking abilities in language models.3 However, it remains unclear whether this is due to issues with the setup or a true lack of entity tracking capacity. To this end, we propose a new behavioral evaluation. Footnote 3: Li et al. (2021) also presented two other sets of experiments. See Appendix A for details on how the other experiments exhibit similar issues. ## 3 Task Design and Dataset ### Desiderata The ability to track entities should be largely _independent of specific linguistic forms_. For a model that can properly track entities, it should not matter whether one talks about beakers or recipes, or which specific syntactic constructions are used. This makes entity tracking an interesting ability to evaluate in the context of assessing whether and how meaning is represented, since at least classic language models are only trained on forms (Bender and Koller, 2020). At the same time, this independence of form and entity states poses a challenge in evaluation design, since one needs to ensure that the training data does not allow the model to predict the state of entities from individual lexical items or phrases (such as the word _drain_ in the Alchemy dataset, as discussed in Section 2). Furthermore, language models pretrained on large text corpora may have learned common states of entities; for instance, that eggs often end up in a bowl. For these reasons, any task that evaluates entity tracking abilities should conform to the following four desiderata: 1. The probed states of entities should not follow similar distributional patterns to those that are likely to be present in the pretraining data (see also Linzen, 2020). 2. Individual words or phrases should not by themselves predict the state of an entity without considering the remaining discourse. 3.
If any data is used for demonstration, finetuning, or training of probing classifiers, the training and evaluation data should have little lexical overlap. 4. If any data is used for demonstration, finetuning, or training, it should not be possible to solve the task by filling in slots of a fixed template. These properties cannot be guaranteed with naturalistic datasets such as recipes (Kiddon et al., 2015), science texts (Dalvi et al., 2019), or the Alchemy and TextWorld datasets, which have been previously used to evaluate entity tracking abilities. We therefore programmatically generated a dataset for which these properties hold. ### Dataset We take inspiration from the evaluation setup of Li et al. (2021) in designing our data. Our dataset consists of text descriptions of a particular state of the world followed by a sequence of changes. The worlds contain boxes that can be filled with objects. The objects can be placed inside a box, taken out of a box, or moved from one box to another. We define a world \(\mathcal{W}\) as \(\mathcal{W}=(O,n,m,e)\), where \(O\) is a set of objects, \(n\) is the number of boxes, \(m\) is the maximum number of objects one box can contain, and \(e\) is the expected number of objects in each box in the initial world states. For our dataset, we used \(n=7\), \(m=3\), \(e=2\), and a set of nouns denoting items that can plausibly fit inside a box (e.g., _book, rock, brain_; \(|O|=100\)), selected from a list of words with frequency greater than 27 in the British National Corpus (BNC; Leech et al. 2001). A dataset consists of multiple distinct _scenarios_. A scenario consists of an initial state and a set of operations applied to this initial state. We fixed the number of operations (NumOps) in each scenario to 12. We randomly sampled 2200 scenarios, where the initial state and the 12 operations were both randomly sampled. The sampling process is designed such that only operations that are valid given the current world state can be sampled. The initial state and the operations were converted into naturalistic descriptions using predefined templates. We selected the task of moving objects across boxes because this is a domain where the lexical content of the entities does not offer cues for predicting the outcome of state changes (Desideratum 1). To satisfy Desideratum 2, we did not include an operation that empties a box, which would allow the previous history of operations to be disregarded. For Desideratum 3, we considered experiments where the state and operation descriptions differ entirely between demonstration/finetuning and evaluation examples (see Table 2). Finally, we computed a "signature" of every initial state that indicates the number of objects contained in each box.4 Using this signature, we then made sure that there were no two examples with identical initial descriptions modulo the object identities where one of them appeared in the training split and the other one in the evaluation split. This prevents models from solving this task by filling in slots (Desideratum 4). Additionally, compared to the Alchemy setup, our setup has the benefit of requiring fewer additional reasoning abilities. The beaker scenario requires the model to be able to count and do simple arithmetic (e.g., inferring that adding one unit of liquid to a beaker with two units of liquid results in a beaker with three units of liquid). Moreover, some of the operations in Alchemy require knowledge about how colors are combined (e.g., inferring that mixing red and green liquids results in a brown liquid). The boxes domain removes these requirements. Footnote 4: For example, the signature of an initial state in which the first box contains two objects and the rest contain one object each would be 2111111.
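For concreteness, a minimal generator implementing this sampling procedure might look as follows. This is our own sketch rather than the released code: the object list is truncated, removed objects are simply discarded, and the signature-based deduplication across splits is omitted, but it respects the constraint that only operations valid in the current world state are sampled.

```python
import random

# A handful of object nouns for illustration; the paper samples from a set
# of |O| = 100 BNC nouns and tunes the initial fill to e = 2 objects per box.
OBJECTS = ["book", "rock", "brain", "apple", "bottle", "coat", "map",
           "ring", "shoe", "clock", "glass", "stone"]
N_BOXES, MAX_PER_BOX, NUM_OPS = 7, 3, 12

def sample_scenario(rng):
    pool = rng.sample(OBJECTS, len(OBJECTS))          # unused object names
    boxes = [[pool.pop() for _ in range(rng.randint(0, MAX_PER_BOX)) if pool]
             for _ in range(N_BOXES)]
    ops = []
    while len(ops) < NUM_OPS:
        if not pool and not any(boxes):               # nothing left to act on
            break
        kind = rng.choice(["put", "remove", "move"])
        src, dst = rng.randrange(N_BOXES), rng.randrange(N_BOXES)
        if kind == "put" and pool and len(boxes[dst]) < MAX_PER_BOX:
            boxes[dst].append(pool.pop())
            ops.append(f"Put the {boxes[dst][-1]} into Box {dst + 1}.")
        elif kind == "remove" and boxes[src]:
            obj = boxes[src].pop()                    # discarded in this sketch
            ops.append(f"Remove the {obj} from Box {src + 1}.")
        elif (kind == "move" and src != dst and boxes[src]
              and len(boxes[dst]) < MAX_PER_BOX):
            obj = boxes[src].pop()
            boxes[dst].append(obj)
            ops.append(f"Move the {obj} from Box {src + 1} to Box {dst + 1}.")
    return boxes, ops  # the final box contents serve as gold answers

final_state, operations = sample_scenario(random.Random(0))
```

Rendering the initial state and each sampled operation through templates such as those in Table 2 then yields the naturalistic descriptions used in the task.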
### Task We define the entity tracking task as follows. Given a natural language description of the initial state of the world followed by 0-12 descriptions of operations that have been performed, the content of each box at the end of the description must be correctly identified. To this end, we created an example for each box after each operation. This corresponds to \(n\times(\text{NumOps}+1)\) questions per scenario (i.e., 91 questions in our dataset). Each question is formulated in the style of a cloze test, a format that language models are typically trained on. That is, the input string describes the initial state followed by a sequence of operations, followed in turn by _Box n contains_. The expected output is the correct set of objects in Box \(n\) based on the prefix description. See Appendix B for an example. ## 4 Experiment 1: In-context Demonstration In the first set of experiments, we used a setup in which the models are provided a small number of in-context demonstrations of the entity tracking task. This provides a way to probe the model without providing substantial supervision from which the task could be learned, as well as guiding the model to output the final state in a consistent format that can then be automatically assessed. ### Models We used models that have been shown to support task solving from in-context demonstrations provided as part of the prompt. Specifically, we use GPT-3 175B (davinci: Brown et al., 2020), the most recent GPT-3.5 (text-davinci-003), and Flan-T5 (base and XL: Chung et al., 2022). Footnote 5: [https://beta.openai.com/docs/models/gpt-3](https://beta.openai.com/docs/models/gpt-3) The little information that OpenAI has revealed about their models suggests that davinci is an autoregressive language model primarily trained on text corpora. text-davinci-003 was trained on the language modeling objective on a mix of text and code, and was additionally trained with human feedback using reinforcement learning. Flan-T5 is based on T5, a sequence-to-sequence model trained on a denoising objective, which has been further instruction-finetuned on a battery of tasks. This has been shown to promote better responses to instructions with and without in-context demonstrations (Chung et al., 2022). See Table 1 for a summary of the models. We compared these models against a baseline computed by randomly outputting 0-3 objects from the set of objects that appeared in the same clauses as the box in question. Note that this baseline is stronger than a fully random baseline that selects outputs from all mentioned objects. Footnote 6: [https://beta.openai.com/docs/model-index-for-researchers](https://beta.openai.com/docs/model-index-for-researchers) We evaluated the GPT models through the OpenAI API and the Flan-T5 models using the HuggingFace library (Wolf et al., 2020). See Appendix C for details about the implementation.
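To give a flavor of how such cloze queries can be issued, a minimal sketch using the completions endpoint that was current at the time and a HuggingFace Flan-T5 checkpoint is shown below. The prompt text and decoding settings here are our own simplifications; the actual prompts additionally contain a task description and two demonstrations (see Appendix D).

```python
import openai
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

prompt = ("Box 1 contains the book and the map, Box 2 contains the ring. "
          "Move the map from Box 1 to Box 2. Box 2 contains")

# GPT-3 / GPT-3.5 via the (legacy) OpenAI completions endpoint.
openai.api_key = "..."  # set your own key
out = openai.Completion.create(model="text-davinci-003", prompt=prompt,
                               max_tokens=32, temperature=0)
gpt_answer = out["choices"][0]["text"]

# Flan-T5 via the HuggingFace transformers library.
tok = AutoTokenizer.from_pretrained("google/flan-t5-xl")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-xl")
input_ids = tok(prompt, return_tensors="pt").input_ids
flan_answer = tok.decode(model.generate(input_ids, max_new_tokens=32)[0],
                         skip_special_tokens=True)
```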
\begin{table} \begin{tabular}{l l l l} **Model** & **Size** & **Code?** & **Additional Training** \\ \hline GPT-3 davinci & 175B & ✗ & — \\ GPT-3 davinci-instruct-beta & 175B & ✗ & Human demonstrations (finetuning) \\ GPT-3 text-davinci-001 & 175B & ✗ & Human demonstrations (finetuning) \\ \hline GPT-3.5 code-davinci-002 & 175B & ✓ & — \\ GPT-3.5 text-davinci-002 & 175B & ✓ & Human demonstrations (finetuning) \\ GPT-3.5 text-davinci-003 & 175B & ✓ & Human feedback (reinforcement learning) \\ \hline Flan-T5 base & 250M & ✗ & Instruction finetuning \\ Flan-T5 XL & 3B & ✗ & Instruction finetuning \\ \end{tabular} \end{table} Table 1: Summary of the models used for the in-context demonstration experiments. ### Prompting and Demonstrations Our prompts consist of: (a) a general description of the task, (b) two examples of the task to demonstrate the expected format, (c) an initial state description followed by a series of operations, and (d) an incomplete sentence _Box N contains_ ____ to be completed by the model (see Appendix D for full prompts). In order to reduce the inference cost, we used demonstrations that output the state of all boxes at once. However, in early experiments, Flan-T5 frequently output only the state of the first box, even when the in-context demonstrations contained descriptions of all box states. For this reason, for Flan-T5, we probed each box individually. Footnote 7: To verify that this difference in the task format does not underestimate the accuracy of the GPT models, we also conducted an experiment in which we prompted GPT to output the state of only one box at a time. We found that, contrary to the Flan-T5 models, the accuracy of GPT was _lower_ when we prompted it to output individual boxes than when it output the contents of all boxes, so we can rule out that this difference in the task format underestimates GPT's performance. **Demonstration/Test Mismatch for Form-meaning Disentanglement (AltForms)** As discussed in Sections 3.1 and 3.2, we additionally experimented with a setup in which the demonstration and test examples are mismatched in the form of the description of the world state and operations. Under this setup, the models were evaluated on descriptions where the names of the objects, the description of the initial state, and the phrasing of the core operations are all different from those in the demonstration examples (see Table 2). Except for the determiner "the" and the preposition "into", the two sets share no words (although subwords may be shared depending on tokenization). ### Evaluation We estimated the entity tracking capacity of the models by computing the accuracy of predicting the contents of each box after each operation. Given that we rely on arbitrary-length cloze completion to predict the contents, we had to score unconstrained generations. While we only considered instances as correct where the generated output mentions all objects (and no additional objects) in a given box, our evaluation setup did give some leeway to the exact form of the response. That is, we allowed the objects to appear in any order, the object mentions could be separated by commas or _and_, and we only considered the nouns in the object mentions, so it did not matter whether the model outputs a complete noun phrase or only the bare noun.
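A scoring routine implementing this lenient matching could look like the sketch below (our own approximation; the exact normalization rules may differ in detail, and multiword object names would need extra care). A prediction counts as correct only if the set of nouns it mentions coincides exactly with the gold contents of the box.

```python
import re

STOPWORDS = {"the", "a", "an", "and", "is", "empty", "nothing"}

def noun_set(text):
    """Reduce an object listing such as 'the map, the ring and the book'
    to an order-insensitive set of bare nouns."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return frozenset(t for t in tokens if t not in STOPWORDS)

def is_correct(generated, gold_objects):
    # Correct iff all gold objects, and no others, are mentioned.
    return noun_set(generated) == frozenset(gold_objects)

assert is_correct("the ring and the map", ["map", "ring"])
assert not is_correct("the ring, the map and the book", ["map", "ring"])
assert is_correct("nothing", [])  # an empty box
```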
The task becomes intrinsically harder as more operations are applied to a specific box, since the initial state description needs to be combined sequentially with all subsequent operations. Further, similarly to our observation in Section 2, every operation changes the state of at most two boxes. This implies that the number of datapoints corresponding to fewer operations is much greater than the number of datapoints for more operations. For these reasons, we report accuracy as a function of the number of operations affecting a particular box rather than reporting aggregate accuracy, and show all results with 95% confidence intervals. ### Results Figure 2 shows the prediction accuracy for different numbers of operations that acted upon a box (e.g., 3: three operations changed the content of the box after the initial state). The left panel shows the instances where the probed state differed from the initial state; the right panel shows the instances where the probed state was the same as the initial state. As the left panel shows, only GPT-3.5 text-davinci-003 consistently outperformed the (strong) random baseline. While, not surprisingly, the accuracy of this model also decreases as the number of operations increases, the model still correctly predicted all contents of a box after 7 operations in more than 25% of the cases. The Flan-T5 models, on the other hand, primarily output the initial state description and seem to ignore the operations, as indicated by the consistently high accuracy on predicting the state when the probed state equals the initial state (right panel), as well as the consistently low accuracy on cases where the final state deviates from the initial state (left panel). GPT-3 davinci also successfully repeated the initial state but, as seen in the decreasing curve in the right panel, seemed to be distracted by a larger number of intervening operations. Figure 2: Accuracy on state prediction after \(n\) operations that affect a specific box. Left: predictions for boxes whose content differs from the initial state. Right: predictions for boxes whose content is the same as in the initial state. Error bars show 95% CIs. **Form-meaning Disentanglement** We additionally evaluated text-davinci-003, the only model that exhibited a non-trivial entity tracking capacity in the first set of results, under the AltForms setup, where the demonstration examples have low lexical overlap with the test examples. Figure 3 shows the prediction accuracy of text-davinci-003 on a representative subsample of our data. The blue line represents the performance when the descriptions in the demonstration and the test examples are disjoint, as described in Section 4.2. As the comparison to the original results (red line) shows, the lexically disjoint demonstrations did lead to a small drop in performance when there were more than two operations acting upon a box. Nevertheless, text-davinci-003 was able to predict the correct state of entities in many cases, further adding support for its non-trivial entity tracking capacity. Footnote 8: For each number of operations \(n\) affecting a box, we sampled 100 states with at least one example with \(n\) operations. ### Discussion Our results show that among the models we evaluated, only GPT-3.5 text-davinci-003 exhibits non-trivial entity tracking behavior. While its performance does decrease as the number of operations increases, the model still produced many accurate predictions even after six or seven relevant operations.
Furthermore, through the experiment where the description of the initial state and operations differed in form between the demonstration and test examples, we also ruled out the possibility that the two demonstration examples are teaching the model this task, or that the model is relying on superficial slot-filling heuristics. Therefore, we conclude that text-davinci-003 does have some capacity to track discourse entities. On the other hand, entity tracking behavior did not surface in GPT-3 davinci (likely of similar size to GPT-3.5 text-davinci-003), a model pretrained primarily on text corpora with the next-word prediction objective. This was also true for denoising models that have been finetuned on many tasks combined with instructions and demonstrations: the Flan-T5 models also showed near-zero accuracy on non-trivial examples. These results show that there exists a language model that can perform entity tracking to some degree, but that this capacity does not necessarily surface in all sufficiently large models trained on large corpora. Which factors, then, are responsible for this difference? Given that davinci and text-davinci-003 differ along at least two dimensions (text-davinci-003 is based on a model that was trained on code, and it was trained with additional human feedback (Ouyang et al., 2022); see Table 1), our initial results do not shed light on what exactly contributes to this difference. \begin{table} \begin{tabular}{c c c} **Operation** & **Base** & **AltForm** \\ \hline Move & _Move the car from Box 1 to Box 3._ & _Pick up the furby in Container A and place it into Container C._ \\ Remove & _Remove the car from Box 1._ & _Take the furby out of Container A._ \\ Put & _Put the car into Box 1._ & _Place the furby inside Container A._ \\ \hline \end{tabular} \end{table} Table 2: Different phrasings of the state-changing operations under the AltForms evaluation setup. Figure 4: Accuracy on state prediction for different GPT-3 models. Solid lines denote models trained on code and text, and dotted lines denote models mainly trained on text. Figure 3: Entity tracking accuracy of text-davinci-003 with low lexical overlap between demonstration and test examples (AltForms). We therefore conducted a follow-up experiment in which we compared a range of GPT-3 and GPT-3.5 models to identify a factor that contributes to the stark difference between davinci and text-davinci-003. Footnote 9: To limit inference costs, we used the same subsample of data as in the AltForms experiment. **Training on Code Encourages Entity Tracking Behavior** As Table 1 shows, two key dimensions of variation across models are additional training on human feedback and pretraining on code. If additional training on human feedback imbues language models with the ability to track entities, all models except for GPT-3 davinci and GPT-3.5 code-davinci-002 should be able to track entities. If, on the other hand, pretraining on code leads to better entity tracking, we expect all GPT-3.5 models to outperform GPT-3 on our task. As Figure 4 shows, GPT-3.5 models that have been trained on code systematically outperformed GPT-3 models, including code-davinci-002, which was not trained on human feedback. This suggests that a substantial representation of code in the pretraining data is beneficial for a language model's entity tracking capacity to surface. A further question that our results so far do not answer is to what extent model size matters and whether models at the scale of Flan-T5 can also exhibit non-trivial entity tracking behavior.
Since there exist no smaller models that have been trained with the same objective and training data as the GPT-3.5 models, we explore this question through finetuning experiments with T5. ## 5 Experiment 2: Finetuning We investigated whether smaller models at the scale of T5 can _learn_ to track entity states through a series of experiments in which we provide supervised training to the models. ### Train/test splits As discussed in Section 3.1, one challenge of evaluating entity tracking abilities is distinguishing this capacity from simple heuristics such as template slot-filling. We therefore designed various types of training/evaluation mismatches that block several possible shortcuts, as described below. **Base Split** In the base split, we used the same format for training and evaluation examples. All initial states differed across training and evaluation to block simple slot-filling heuristics, as discussed in Section 3.2. **NumOps Split** The NumOps split restricts the maximum number of operations within a single example in the training set to 2, but includes up to 12 operations in the evaluation set. This split is intended to test whether a finetuned model is able to generalize to longer sequences of operations than it has seen during finetuning. **Vocab Split** The vocab split tests whether objects that are not part of the set of objects used during training can also be adequately tracked. We compiled a list of comparatively infrequent object names (e.g., _pomelo, furby, Flav-R-Straw_; not in the BNC) and sampled the training and test sets using two completely disjoint sets of object names. The training set used the infrequent object list and the test set used the original object list. **AltForms Split** This split is identical in design to the disjoint demonstration/test setting described in Section 4.2, aiming to test whether the model learns to associate specific words/phrases with the operations or whether finetuning leads to more generalizable entity tracking behavior. We also created another split that combines the properties of this split with the NumOps split. ### Models We evaluated T5-base, the best-performing model in Li et al. (2021), by finetuning it on each of the dataset splits described above. See Appendix C for details about the implementation. As an additional baseline, we compared against a T5 model with randomly initialized parameters. ### Results and Discussion **Pretrained T5 can Learn to Perform Entity Tracking** As shown in Figure 5 (left), finetuning T5 leads to near-perfect accuracy on the base split. This suggests that the model is capable of learning this task. Training a randomly initialized T5 did not yield the same result: the accuracy of a model trained from random weights is considerably lower, due to the model almost exclusively predicting that a box is empty. These two results suggest that pretraining is crucial for the model to be able to learn this task. Furthermore, the model's entity tracking capacity is robust to novel object names at test time, with only minor degradation in accuracy (Figure 5, middle). Training only on operation sequences with a maximum length of 2 (NumOps split) leads to a larger degradation in performance, but even for longer operation sequences the model is able to infer the correct final state in more than 45% of the cases. Finally, the model performance does degrade substantially when the training examples have low lexical overlap with the test examples (Figure 5, right). Nevertheless, the model predicts many non-trivial examples correctly, with \(\sim\)50% tracking accuracy after one or more operations. The performance degradation was compounded if we trained only on up to two operations (blue line), but the performance remained above chance. These results suggest that finetuning on an entity tracking task does lead to entity tracking abilities that generalize to many challenging scenarios.
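For reference, the finetuning setup of this section follows the standard sequence-to-sequence recipe; a minimal sketch with the HuggingFace Trainer is given below. The hyperparameters are illustrative placeholders rather than the settings of Appendix C, and train_data is assumed to be a datasets.Dataset with prompt/target columns built from one of the splits above.

```python
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

tok = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

def encode(ex):
    # 'prompt': initial state + operations + "Box n contains";
    # 'target': the gold contents of box n.
    enc = tok(ex["prompt"], truncation=True, max_length=512)
    enc["labels"] = tok(ex["target"], truncation=True,
                        max_length=32)["input_ids"]
    return enc

train = train_data.map(encode, remove_columns=train_data.column_names)
args = Seq2SeqTrainingArguments(output_dir="t5-boxes", learning_rate=1e-4,
                                per_device_train_batch_size=32,
                                num_train_epochs=3)
Seq2SeqTrainer(model=model, args=args, train_dataset=train,
               data_collator=DataCollatorForSeq2Seq(tok, model=model),
               tokenizer=tok).train()
```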
## 6 General Discussion We set out to investigate whether pretrained language models exhibit entity tracking behavior. We developed a task that allowed us to evaluate whether language models can predict the state of an entity based on an initial state description and operations that act upon it. In the first set of experiments, we found that GPT-3 davinci, a vanilla pretrained language model, and Flan-T5, an instruction-finetuned language model, completely fail at this task and simply repeat the initial state description. The GPT-3.5 models (the core difference from the aforementioned models being the presence of code in the pretraining corpora), on the other hand, exhibited non-trivial entity tracking behavior. Even after many operations affecting an entity state, they consistently performed above a strong random baseline. In the second set of experiments, we showed that this behavior can also be learned by much smaller models such as T5. When we finetune the model on this task, it is also able to perform entity tracking on examples that differ along several dimensions from the training data. Taken together, our results provide strong evidence that (a) vanilla language models that have mainly been trained on text data do not exhibit entity tracking abilities out of the box, (b) pretraining on code and text data considerably improves this ability, and (c) finetuning on this task can make this behavior surface also in smaller models that have primarily been trained on text data. What are the reasons behind the efficacy of training on both code and text? For producing executable and correct code, keeping track of the states of variables is important. Therefore, to speculate, this kind of pretraining data may provide a stronger signal for the model to track entities compared to pure text data. It could also be that, as speculated by Potts (2020) and Merrill et al. (2021), _i.a._, pretraining on code provides additional grounding, which improves models' semantic and pragmatic abilities. The present results also highlight the importance of transparency in documenting the factors involved in the pretraining procedure. Our results add support to the claim of Bender and Koller (2020) that there are aspects of understanding, such as entity state tracking, that are inherently challenging for vanilla LMs. At the same time, our results also suggest that additional signal in the form of code, which is common in the latest versions of LMs, does make state tracking ability surface. This highlights that arguments that may be valid for vanilla LMs are no longer necessarily valid for newer models that are also referred to as LMs. On top of these empirical results, we laid out several principles that should be followed when evaluating state tracking abilities. Apart from these specific principles, we make the more general point that in assessing abilities related to meaning, one needs to consider potential strategies that the model could use to solve the task and make sure that the test examples do not mimic the distributional patterns of the training data.
Figure 5: Results for finetuned T5 models.

Furthermore, any supervision provided for evaluating the model should not allow the model to learn associations between specific forms and meaning. Only then can we properly assess meaning-related capacities of LMs. ## Limitations One limitation of this work is that we are only considering behavioral data, which makes it difficult to establish a fully causal link between entity tracking capacities and high performance on our task. Entity tracking is a high-level linguistic behavior and many other capacities are necessary for achieving high accuracy on our task. Therefore, we cannot rule out that differences in some other capacity, such as interpreting sentences compositionally (see Bogin 2022 and Bogin et al. 2022 for evidence that GPT-3 and GPT-3.5 models differ in their compositional generalization behavior), are the main driver for the differences in behavior we see across models. A further limitation of our setup is that it requires short-term memory capacities that exceed the memory capacities of most, if not all, humans. That is, if we presented humans with the same input as the model, we would not expect them to be able to keep track of the contents of all 7 boxes due to memory limitations. Therefore we are potentially expecting models to do super-human entity tracking, a setup that has been criticized for model evaluations of other linguistic abilities (Lampinen, 2022). We nevertheless believe that our task is justified given the architecture of the evaluated models. Transformer-based models can look back to any token in the entire input sequence within their context window, so a proper comparison between humans and models would be to present humans with the full description in written form and let them re-read the description after being prompted to state the contents of a box. While we did not formally evaluate whether humans have this ability on a larger population, we personally did not have any trouble tracking the contents of boxes when we had access to the written description. Relatedly, we designed our task such that the entire description fits within the context window of pretrained language models. However, as we mentioned in the introduction, entity tracking is an important ability for understanding long contexts and, given the limited context window, our results do not apply to texts whose length exceeds a model's context window, and likely different model architectures will be necessary to perform proper entity tracking for longer texts. Further, while we found that the GPT-3.5 models as well as the finetuned T5 models can track entities in our task with higher accuracy than a strong random baseline, our results also indicate that this behavior is not very stable once several operations act on an entity. Our results should therefore not be taken as justification for using these models for critical applications where much higher accuracy is needed. Lastly, we only evaluated English models in this work. Given that we showed that even without high lexical overlap between the training and evaluation examples, models can keep track of entities to some extent, it seems likely that our results also apply to other languages. However, whether this is actually the case remains an open question. 
## Acknowledgements We thank Jacob Andreas, Ellie Pavlick, Allyson Ettinger, Tal Linzen, and the members of the NYU Computation and Psycholinguistics lab for discussions, and Belinda Li for sharing model outputs and details about their data preparation procedures and experiments. This research was conducted in part through the NYU IT High Performance Computing resources, services, and staff expertise, and it was supported by the NSF under Grant #2030859 to the Computing Research Association for the CIFellows Project and the European Research Council (ERC) under the European Union's Horizon 2020 Research and Innovation Program (Grant Agreement #948878). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation nor the Computing Research Association.
2307.03604
Cascading Failures in the Global Financial System: A Dynamical Model
In this paper, we propose a dynamical model to capture cascading failures among interconnected organizations in the global financial system. Failures can take the form of bankruptcies, defaults, and other insolvencies. The network that underpins the financial interdependencies between different organizations constitutes the backbone of the financial system. A failure in one or more of these organizations can lead to the propagation of the financial collapse onto other organizations in a domino effect. Paramount importance is therefore given to the mitigation of these failures. Motivated by the relevance of this problem and recent prominent events connected to it, we develop a framework that allows us to investigate under what conditions organizations remain healthy or are involved in the propagation of the failures in the network. The contribution of this paper is the following: i) we develop a dynamical model that describes the equity values of financial organizations and their evolution over time given an initial condition; ii) we characterize the equilibria for this model by proving the existence and uniqueness of these equilibria, and by providing an explicit expression for them; and iii) we provide a computational method via sign-space iteration to analyze the propagation of failures and the attractive equilibrium point.
Leonardo Stella, Dario Bauso, Franco Blanchini, Patrizio Colaneri
2023-07-07T13:47:16Z
http://arxiv.org/abs/2307.03604v2
# Cascading Failures in the Global Financial System: A Dynamical Model ###### Abstract In this paper, we propose a dynamical model to capture cascading failures among interconnected organizations in the global financial system. Failures can take the form of bankruptcies, defaults, and other insolvencies. The network that underpins the financial interdependencies between different organizations constitutes the backbone of the financial system. A failure in one or more of these organizations can lead to the propagation of the financial collapse onto other organizations in a domino effect. Paramount importance is therefore given to the mitigation of these failures. Motivated by the relevance of this problem and recent prominent events connected to it, we develop a framework that allows us to investigate under what conditions organizations remain healthy or are involved in the propagation of the failures in the network. The contribution of this paper is the following: i) we develop a dynamical model that describes the equity values of financial organizations and their evolution over time given an initial condition; ii) we characterize the equilibria for this model by proving the existence and uniqueness of these equilibria, and by providing an explicit expression for them; and iii) we provide a computational method via sign-space iteration to analyze the propagation of failures and the attractive equilibrium point. keywords: Systemic risk; Financial network; Financial contagion; Stability analysis. ## 1 Introduction In the wake of recent events concerning the collapses of Silicon Valley Bank and Credit Suisse (CS), the focus of this paper is to investigate the propagation of failures in financial systems. The current global financial system is the result of a large number of financial interdependencies among governments, banks, firms, smaller and larger companies, private citizens, etc. In the same spirit as the related literature, we use the term organization in a broad sense, including all these entities and individuals. These organizations hold each other's shares, debts and obligations in variable proportions. As a result, when a failure occurs, it can propagate through the network of interdependencies, bringing other organizations to bankruptcy. Indeed, cascading defaults and failures account for one of the highest risks for the global financial system, leaving aside those institutions that are considered too big to fail, e.g., central banks. A slightly less recent example, but an equally prominent one, can be found in the interventions put together by the European Commission to save Greece and Spain from default following the historic quote "whatever it takes" by ECB President Mario Draghi (July 23rd, 2012) [11]. In this paper, we study the role of cascading failures among organizations linked through a network of financial interdependencies in the global financial system. Our aim is to develop a model that describes the risks associated with the propagation of failures in the network as well as the design of effective responses to mitigate the impact of financial contagion. Indeed, in the proposed model we highlight three relevant aspects: i) the interdependencies in a financial system through cross-holdings of shares or other liabilities; ii) the market price of assets owned by each organization; and iii) a failure cost incurred by each organization. 
Indeed, when the value of a financial organization falls below a failure threshold, additional losses propagate through the network, leading to a cascade of failures. _Related Works._ The first structural framework to study the propagation of shocks in inter-bank lending was originally proposed in a pioneering work by Eisenberg and Noe in 2001 [10]. The main contribution of that work is the introduction of a model that captures the contagion from individual organizations to other organizations in an interbank lending network: a shock to an individual node propagates through the network instantaneously, bringing the system to a new equilibrium that represents an agreed set of mutual payments. Subsequently, there has been a substantial body of work analyzing and generalizing this framework. For example, the authors in [8] and [2] studied the way in which the structure of network graphs, such as hubs, sparsity, and asymmetry, influences the shock propagation and the magnitude of the aggregate fluctuation. Their study provides insights on the optimal structure for inter-bank lending networks. Their model can accommodate a variety of settings. For production networks, the model represents the input-output relationship and determines the output equilibrium [2], whereas for financial systems, it calculates the clearing loan repayments, involving the systemic risk of default cascade [10]. Later, the preliminary research proposed by Eisenberg and Noe was extended in several directions. A body of literature dating back to the work by Elsinger [12] and then followed by Elliott _et al._ [11], Rogers and Veraart [19], and Glasserman and Young [15] considered bankruptcy costs and their impact on the financial system. As a consequence of these costs, financial organizations can in turn fail and drag other organizations to bankruptcy. Simultaneously, cross-holdings were considered by Elsinger [12], Elliott _et al._ [11], Fischer [14] and Karl and Fischer [17]. An important aspect in many of these works is that cross-holdings inflate the value of the financial system and thus the net value of each organization needs to be adjusted by a factor that preserves the real value in the system [5]. The work by Weber and Weske considers both these aspects and integrates them into a system that is able to capture fire sales as well [20]. In particular, the work by Elliott _et al._ highlighted the fact that in the current highly interconnected financial system, where banks and other institutions are linked via a network of mutual liabilities, a financial shock in one or a few nodes of the network may hinder the ability of these nodes to fulfill their obligations towards other nodes, and therefore provoke default [11]. A recent work by Birge [4] investigates an inverse optimisation approach based on the decisions from national debt cross-holdings to address the propagation and extent of failures in the network. However, the common assumption that all payments are simultaneous is quite unrealistic. For this reason, several recent works, e.g., see [3; 6; 9; 16], propose time-dynamic extensions of this model. 
The work by Calafiore _et al._ considers the problem of reducing the financial contagion by introducing targeted interventions that can mitigate the cascaded failure effects. They consider a multi-step dynamic model of clearing payments and introduce an external control term that represents corrective cash injections made by a ruling authority [7]. Similarly, a case study on the Korean financial system is proposed by Ahn and Kim, where the authors study interventions in the form of liquidity injections into the financial system under economic shocks [1]. Finally, a recent work by Ramirez _et al._ investigated a stochastic discrete-time model where the mean and covariance error are studied with a focus on the steady-state solution [18]. _Contribution_. The contribution of this work is threefold. Firstly, we introduce the formulation of a dynamical model for cascading failures in financial systems. This model is novel with respect to the literature as it allows us to consider several aspects that were not treated before. These aspects include uncertainty in the form of initial conditions that are, in general, not at an equilibrium. The second contribution of this paper is the stability analysis of the equilibrium points of the proposed system. In particular, we show the existence of these equilibria, their uniqueness, and provide an explicit expression for them. Lastly, we provide a computational method via sign-space iteration that allows us to compute the attractive equilibrium point for given initial conditions. The paper is organized as follows. First, we introduce the notation. In Section 2, we develop the networked model. In Section 3, we investigate the existence, uniqueness and stability of the equilibrium points of our system. In Section 4, we illustrate the computational algorithm. Finally, in Section 5, we discuss concluding remarks and future directions. **Notation**. The symbols \(\mathbb{0}_{n}\) and \(\mathbb{1}_{n}\) denote the \(n\)-dimensional column vector with all entries equal to \(0\) and to \(1\), respectively. The identity matrix of order \(n\) is denoted by \(I_{n}\). Let \(J^{[k]}:=\operatorname{diag}(1-2\phi^{[k]})\), where the vector \(\phi^{[k]}\) represents the integer \(k\) in binary representation; we denote the generic orthant \(k\) by \(\mathcal{X}^{k}\), namely, \(\mathcal{X}^{k}:=\{x\in\mathbb{R}^{n}\,|\,J^{[k]}x\geq 0\}\). Given a generic vector \(V\in\mathbb{R}^{n}\), let the operator \(y=\phi(V)\) be such that the \(i\)th component satisfies \(y_{i}=1\) if \(V_{i}<0\) and \(y_{i}=0\) otherwise. The notation \(V\geq 0\) for a generic vector \(V\) or \(M\geq 0\) for a generic matrix \(M\) is to be intended elementwise. A square real matrix \(M\in\mathbb{R}^{n\times n}\) is said to be _Metzler_ if its off-diagonal entries are nonnegative, namely, \(M_{i,j}\geq 0\), \(i\neq j\). Every Metzler matrix \(M\) has a real dominant eigenvalue \(\lambda_{F}(M)\), which is referred to as the _Frobenius eigenvalue_. The corresponding left and right vectors associated with \(\lambda_{F}(M)\) are referred to as left and right _Frobenius eigenvectors_, respectively. A square real matrix \(M\) is said to be _Hurwitz_ if all its eigenvalues lie in the open left half plane. A square matrix is said to be _Schur_ if all its entries are real and its eigenvalues have absolute value less than one [13]. ## 2 Problem Formulation In this section, we introduce the model of a networked financial system, where a number of organizations are linked through financial interdependencies. 
To this aim, we consider a set of organizations \(N=\{1,\ldots,n\}\). Each organization \(i\in N\) is described by an equity value \(V_{i}\in\mathbb{R}\), which represents the total value of its shares. Organizations can invest in primitive assets, namely, mechanisms that generate income in the form of a net flow of cash over time. We consider a set of primitive assets \(M=\{1,\ldots,m\}\). We denote the market price of asset \(k\) by \(p_{k}\) and the share of the value of asset \(k\) held by organization \(i\) by \(D_{ik}\geq 0\). Each organization can also hold shares of other organizations; for any pair of organizations \(i,j\in N\), let \(C_{ij}\geq 0\) be the fraction of organization \(j\) owned by organization \(i\). The equity values of organizations can be determined by the following discrete-time dynamical model: \[V(t+1)=CV(t)+Dp-B\phi(V(t)-\underline{V}), \tag{1}\] where \(t\in\mathbb{Z}^{+}\); \(C\) is a nonnegative and nonsingular matrix with \(C_{ii}=0\) and \(\mathbb{1}_{n}^{\top}C<\mathbb{1}_{n}^{\top}\), which means that the equity value of each organization held by other organizations cannot exceed the equity value of the organization itself; \(D\) is a positive matrix; \(p\) a nonnull nonnegative vector; \(B=\mathsf{diag}(\beta)\) a nonnegative diagonal matrix with entries \(\beta_{i}>0\), \(i=1,2,\ldots,n\); \(\underline{V}\) is the vector of threshold values \(\underline{V}_{i}\) below which organization \(i\) incurs a failure cost \(\beta_{i}\); and \(\phi(V-\underline{V})\) is the vector of indicator functions taking value \(1\) if \(V_{i}<\underline{V}_{i}\) and \(0\) if \(V_{i}\geq\underline{V}_{i}\). The first term in (1) takes into account the cross-holdings, the second term describes the primitive assets held by each organization and the last term accounts for the discontinuous drop imposed by the cost of failure. ## 3 Characterization of the Equilibria In this section, we study the equilibria of system (1). From the condition that \(\mathbb{0}_{n}\leq B\phi(V(t)-\underline{V})\leq\beta\), and recalling that \(C\) is nonnegative, we derive the following preliminary result. **Theorem 1**.: _\(V(t)\geq 0\), \(\forall t\geq 0\), for every \(V(0)\geq\mathbb{0}_{n}\), if and only if_ \[Dp-\beta\geq 0. \tag{2}\] \(\square\) Under condition (2), system (1) is a positive nonlinear switched system since the vector \(\phi(V(t)-\underline{V})\) can take a finite number of values \(\phi^{[k]}\), with \(k=0,1,2,\cdots,2^{n}-1\). For instance, with \(n=3\) we have: \[\phi^{[0]}=\mathbb{0}_{n},\ \phi^{[1]}=\left[\begin{array}{c}0\\ 0\\ 1\end{array}\right],\ \phi^{[2]}=\left[\begin{array}{c}0\\ 1\\ 0\end{array}\right],\ \phi^{[3]}=\left[\begin{array}{c}0\\ 1\\ 1\end{array}\right],\] \[\phi^{[4]}=\left[\begin{array}{c}1\\ 0\\ 0\end{array}\right],\ \phi^{[5]}=\left[\begin{array}{c}1\\ 0\\ 1\end{array}\right],\ \phi^{[6]}=\left[\begin{array}{c}1\\ 1\\ 0\end{array}\right],\ \phi^{[7]}=\mathbb{1}_{n}.\] As such, system (1) may possess at most \(2^{n}\) equilibria in total. The equilibrium in orthant \(k\), denoted by \(\overline{V}^{[k]}\) and characterized by the index \(k\), is given by \[\overline{V}^{[k]}=(I_{n}-C)^{-1}(Dp-B\phi^{[k]}),\quad\text{s.t.}\quad\phi(\overline{V}^{[k]}-\underline{V})=\phi^{[k]}. \tag{3}\] Note that \(V=0\) cannot be an equilibrium of the system since \(Dp>0\) and that, if (2) holds, \(\overline{V}^{[k]}>0\). In the \(k\)th orthant the difference \(Y^{[k]}(t)=V(t)-\overline{V}^{[k]}\) follows the autonomous dynamics \[Y^{[k]}(t+1)=CY^{[k]}(t). \tag{4}\] 
Since \(C\) is nonnegative with \(\mathbb{1}_{n}^{\top}C<\mathbb{1}_{n}^{\top}\), it turns out that \(C\) is Schur stable. Therefore, the following theorem can be stated. **Theorem 2**.: _Any equilibrium \(\overline{V}^{[k]}\) which is in the interior of the \(k\)th orthant \(\mathcal{X}^{k}\) is locally asymptotically stable. \(\square\)_ _Remark_. Note that there could be equilibria on the discontinuity points, but these are fragile (unstable) and are not considered. **Example 1**.: _Consider system (1) with \(n=20\) organizations and \(m=10\) assets. The initial condition \(V(0)\) is set to be random in \([0,30]\). Let \(C\) be set to random values in \([0,0.01]\) such that \(C_{ii}=0\) and \(\mathbb{1}_{n}^{\top}C<\mathbb{1}_{n}^{\top}\). Finally, let_ \[D=0.05\ \mathbb{1}_{20}\mathbb{1}_{10}^{\top},\quad p=10\ \mathbb{1}_{10},\] \[\beta=\mathbb{1}_{20},\quad\underline{V}=10\ \mathbb{1}_{20}.\] _It is straightforward to see that \(Dp-\beta=4\ \mathbb{1}_{20}\geq 0\). Therefore, in accordance with Theorem 1, the values of all companies remain positive, namely, \(V(t)\geq 0,\forall t\geq 0\). Figure 1 depicts this scenario. Figure 1 (left) shows the time evolution of the system, where the dashed red line represents the threshold. Figure 1 (right) shows the network topology in the first four instants, where companies are indicated by coloured nodes and edges indicate the cross-holdings between companies: the companies whose values are above the threshold are indicated in green, and below the threshold in blue._ We now turn our attention to the existence and uniqueness of the equilibrium points in orthants \(0\) and \(2^{n}-1\), which we henceforth refer to as _positive_ and _negative_ equilibrium points, respectively. To this aim, consider: \[\begin{array}{c}V(t+1)=CV(t)+Dp-B\phi(V(t)-\underline{V}),\\ x(t)=V(t)-\underline{V}.\end{array}\] The above system can be rewritten as \[\begin{array}{c}x(t+1)=Cx(t)+r-B\phi(x(t)),\\ r:=(C-I_{n})\underline{V}+Dp.\end{array} \tag{5}\] Figure 1: Example 1: since condition (2) is satisfied, \(V(t)\geq 0,\forall t\geq 0\) (left); network topology in the first four time instants (right). The above is a monotone system since \(\phi(y)\geq\phi(x)\) if \(y\leq x\). We can now prove the following theorem. **Theorem 3**.: _Consider system (5). In each open orthant \(\mathcal{X}^{k}\), there exists at most one equilibrium. Furthermore, the following points hold true:_ 1. _There exists an equilibrium point_ \(\bar{x}\geq 0\) _if and only if_ \((I_{n}-C)^{-1}r\geq 0\)_._ 2. _If_ \((I_{n}-C)^{-1}(r-\beta)\geq 0\)_, then there exists an equilibrium point_ \(\bar{x}\geq 0\) _and it is the unique equilibrium._ 3. _There exists an equilibrium point_ \(\bar{x}<0\) _if and only if_ \((I_{n}-C)^{-1}(r-\beta)<0\)_._ 4. _If_ \((I_{n}-C)^{-1}r<0\)_, then there exists an equilibrium point_ \(\bar{x}<0\) _and it is the unique equilibrium._ Proof.: First, let us prove the first statement, namely, that if an equilibrium exists in orthant \(k\), it is unique. Let \[\bar{x}^{[k]}=(I_{n}-C)^{-1}(r-B\phi^{[k]})\in\mathcal{X}^{k}\] be the generic equilibrium point in the \(k\)th orthant. By contradiction, let us assume that a second equilibrium point exists in the same orthant. It is straightforward to see that the calculation with the same \(\phi^{[k]}\) would produce the same equilibrium point. Let us now prove the rest point by point. 1. Let \((I_{n}-C)^{-1}r\geq 0\); then \(\bar{x}=(I_{n}-C)^{-1}r\geq 0\in\mathcal{X}^{0}\). 
Vice versa, assume that there exists a generic equilibrium \(\bar{x}\geq 0\); then \(\bar{x}\in\mathcal{X}^{0}\). Therefore, \(\phi(\bar{x})=0\) and \((I_{n}-C)^{-1}r\geq 0\). 2. Let \((I_{n}-C)^{-1}(r-\beta)\geq 0\); then \((I_{n}-C)^{-1}r\geq(I_{n}-C)^{-1}\beta\geq 0\). It follows from the first point that there exists an equilibrium \(\bar{x}\geq 0\). Moreover, assume there exists an equilibrium \(\bar{x}^{[k]}\) in orthant \(\mathcal{X}^{k}\), i.e., \(\bar{x}^{[k]}=(I_{n}-C)^{-1}(r-B\phi(\bar{x}^{[k]}))\geq(I_{n}-C)^{-1}(r-\beta)\geq 0\). Then, the unique equilibrium is in orthant \(\mathcal{X}^{0}\). 3. Let \((I_{n}-C)^{-1}(r-\beta)<0\); then \(\bar{x}=(I_{n}-C)^{-1}(r-\beta)<0\in\mathcal{X}^{2^{n}-1}\). Vice versa, assume that there exists a generic equilibrium \(\bar{x}<0\); then \(\bar{x}\in\mathcal{X}^{2^{n}-1}\). Therefore, \(\bar{x}=(I_{n}-C)^{-1}(r-\beta)<0\). 4. Let \((I_{n}-C)^{-1}(r-\beta)\leq(I_{n}-C)^{-1}r<0\); then from point 3, there exists an equilibrium \(\bar{x}<0\). Moreover, assume there exists an equilibrium \(\bar{x}^{[k]}\) in orthant \(\mathcal{X}^{k}\), i.e., \(\bar{x}^{[k]}=(I_{n}-C)^{-1}(r-B\phi(\bar{x}^{[k]}))\leq(I_{n}-C)^{-1}r<0\). Then, the unique equilibrium is in orthant \(\mathcal{X}^{2^{n}-1}\). This concludes our proof. **Example 2**.: _Consider system (5) with \(n=20\) organizations and \(m=10\) assets. The initial condition \(x(0)\) is set to be random in \([0,30]\). Let \(C\) be set to random values in \([0,0.01]\) such that \(C_{ii}=0\) and \(\mathbb{1}_{n}^{\top}C<\mathbb{1}_{n}^{\top}\). We provide two sets of simulations. Table 1 includes all the other parameters for each simulation._ _In the first set of simulations, the positive equilibrium, namely, \(\bar{x}\geq 0\), exists and is unique. This is in accordance with condition 1 and condition 2 of Theorem 3. This can be seen in Fig. 2 (top-left). Similarly, in the second set of simulations, since the third and last conditions of Theorem 3 hold true, the negative equilibrium point, i.e., \(\bar{x}<0\), exists and is unique. Figure 2 (bottom-left) shows the second set of simulations. Figure 2 (right) shows the network topology in the first and third instant for each set of simulations. Colours have the usual meaning as before, with the addition of red, which represents companies whose value is below 0._ \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Simulation & \(D\) & \(p\) & \(\beta\) & \(\underline{V}\) \\ \hline \hline I & \(0.06\ \mathbb{1}_{20}\mathbb{1}_{10}^{\top}\) & \(10\ \mathbb{1}_{10}\) & \(\mathbb{1}_{20}\) & \(10\ \mathbb{1}_{20}\) \\ \hline II & \(0.03\ \mathbb{1}_{20}\mathbb{1}_{10}^{\top}\) & \(10\ \mathbb{1}_{10}\) & \(\mathbb{1}_{20}\) & \(10\ \mathbb{1}_{20}\) \\ \hline \end{tabular} \end{table} Table 1: Set of parameters for each simulation. Figure 2: Example 2: the equilibrium point \(\bar{x}\geq 0\) exists and is unique as condition 2 of Theorem 3 holds true (top); similarly, since the 4th condition of Theorem 3 holds true, the equilibrium point \(\bar{x}<0\) exists and is unique (bottom). 
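As an aside, simulations in the spirit of Examples 1 and 2 are straightforward to reproduce. The following minimal sketch (an illustration added here, not the authors' code; it assumes only NumPy and uses the Example 1 parameters, with variable names of our choosing) iterates the equity dynamics (1):

```python
# Minimal sketch: simulate the equity dynamics (1) with Example 1 parameters.
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 10                        # organizations, assets

C = rng.uniform(0.0, 0.01, (n, n))   # cross-holdings: C_ii = 0, 1^T C < 1^T
np.fill_diagonal(C, 0.0)

D = 0.05 * np.ones((n, m))           # shares of primitive assets
p = 10.0 * np.ones(m)                # asset prices
beta = np.ones(n)                    # failure costs (diagonal of B)
V_under = 10.0 * np.ones(n)          # failure thresholds (underline V)

def phi(x):
    # Failure indicator: 1 where x_i < 0, 0 otherwise.
    return (x < 0.0).astype(float)

assert np.all(D @ p - beta >= 0)     # condition (2) of Theorem 1

V = rng.uniform(0.0, 30.0, n)        # random initial equity values
for _ in range(100):
    V = C @ V + D @ p - beta * phi(V - V_under)

print("organizations below threshold:", int(phi(V - V_under).sum()))
```

Since \(C\) is Schur stable, the iteration settles on the equilibrium (3) of the orthant that the trajectory ends up in, consistently with Theorem 2.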
Now, we provide a sufficient condition that guarantees that no equilibrium point in the negative orthant exists, with respect to a subgraph of the cross-holdings matrix \(C\). **Proposition 1**.: _Given a square principal submatrix of \(C\), denoted by \(\tilde{C}\), if the following holds:_ \[\underline{V}_{i}<\frac{(Dp-\beta)_{i}}{1-\lambda_{F}(\tilde{C})},\quad\forall i, \tag{6}\] _then there does not exist the negative equilibrium point, i.e., at least one organization remains healthy._ Proof.: Assume that equation (6) holds true. Since \(\lambda_{F}(\tilde{C})\leq\lambda_{F}(C)\) [13], then \[\underline{V}_{i}<\frac{(Dp-\beta)_{i}}{1-\lambda_{F}(C)},\quad\forall i.\] Let \(x>0\) be the left Frobenius eigenvector of \(C\), i.e., \(x^{\top}C=\lambda_{F}(C)x^{\top}\). Then, \[x^{\top}\underline{V}<\frac{x^{\top}(Dp-\beta)}{1-\lambda_{F}(C)}=x^{\top}(I_{n}-C)^{-1}(Dp-\beta),\] so that, being \(r=Dp+(C-I_{n})\underline{V}\), we have: \[\begin{array}{l}x^{\top}\underline{V}<x^{\top}(I_{n}-C)^{-1}(r-(C-I_{n})\underline{V}-\beta),\\ x^{\top}\underline{V}<x^{\top}\underline{V}+x^{\top}(I_{n}-C)^{-1}(r-\beta),\\ x^{\top}(I_{n}-C)^{-1}(r-\beta)>0.\end{array}\] The above implies \((I_{n}-C)^{-1}(r-\beta)\not<0\). From point 3 of Theorem 3, no equilibrium \(\bar{x}<0\) exists. \(\blacksquare\) _Remark_. Condition (6) provides a relation among three main elements of the original system: the thresholds, the underlying topology and the external assets. Since it is desirable that the system does not converge to the negative equilibrium point, by choosing thresholds \(\underline{V}\) that satisfy this condition we rule out the existence of the negative equilibrium and thus ensure that at least one company remains healthy. ## 4 Sign-space Iteration In this section, we analyze the behavior of the trajectories of financial organizations that are below and above the threshold. To this end, let us rewrite system (5) in a more compact way as: \[x(t+1)=Cx(t)+\Psi(x(t)), \tag{7}\] where \(\Psi(x):=r-B\phi(x)\) and, in particular, with a slight abuse of notation, the following \[\Psi(x)=\Psi(\mathrm{sign}(x)),\quad\Psi_{k}\in\{\psi_{k}^{-},\psi_{k}^{+}\}\] depends on the sign of \(x\), where \(\psi_{k}^{-}=r_{k}-\beta_{k}\) and \(\psi_{k}^{+}=r_{k}\) can both take positive and negative values. Here, the \(\mathrm{sign}(x)\) function is defined componentwise as: \[\mathrm{sign}(x):=1-2\phi(x)=\left\{\begin{array}{ll}+1,&\mbox{if }x\geq 0\\ -1,&\mbox{if }x<0.\end{array}\right.\] Let \(P=(I-C)^{-1}\). Then, an explicit expression for a candidate equilibrium is given by \[x=P\Psi(x),\] for \(\psi_{k}\in\{\psi_{k}^{-},\psi_{k}^{+}\}\). There are \(2^{n}\) such candidates. Let \(\sigma(k)\) denote the sign vector at iteration \(k\), with components in \(\{-,+\}\), and define the iteration \[\sigma(k+1)=\mathrm{sign}\left[P\Psi(\sigma(k))\right], \tag{8}\] and consider a fixed point of this iteration (if any) \[\bar{\sigma}=\mathrm{sign}\left[P\Psi(\bar{\sigma})\right]. \tag{9}\] The vector \(x=P\Psi(\sigma)\) is a rest point if and only if \(\sigma=\mathrm{sign}(x)\) satisfies (9). In other words, equation (9) characterises all the rest points and finding such rest points is equivalent to finding fixed points of the sign iteration. The next result follows immediately from the monotone nature of our system, which builds on the condition that \(\Psi(y)\geq\Psi(x)\) if \(y\geq x\). **Lemma 1**.: _Iteration (8) is monotone: if \(\sigma^{A}(0)\leq\sigma^{B}(0)\) are initial sign vectors, then the corresponding iterations satisfy \(\sigma^{A}(k)\leq\sigma^{B}(k)\)._ To compute the worst case rest point we initialize \(\sigma(0)=[--\dots--]^{\top}\). If \(\sigma(1)\) has all \(-\) signs we have a rest point (all organizations fail). Conversely, let us assume there are \(+\) signs. These are nodes that cannot fail. 
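This procedure lends itself to a direct implementation. The following minimal sketch (again an illustration added here, under the same NumPy assumptions as the previous snippet, whose `n`, `C`, `D`, `p`, `beta` and `V_under` it reuses) computes the worst-case and best-case rest points:

```python
# Sign-space iteration (8): sigma(k+1) = sign[P Psi(sigma(k))].
import numpy as np

P = np.linalg.inv(np.eye(n) - C)
r = (C - np.eye(n)) @ V_under + D @ p

def Psi(sigma):
    # psi_k^- = r_k - beta_k on failed (-) components, psi_k^+ = r_k otherwise.
    return r - beta * (sigma < 0)

def sign_iteration(sigma):
    # From the extreme initializations the iteration is monotone (Lemma 1)
    # and reaches a fixed point of (9) in at most n steps.
    for _ in range(n + 1):
        new = np.where(P @ Psi(sigma) >= 0, 1.0, -1.0)
        if np.array_equal(new, sigma):
            break
        sigma = new
    return sigma

sigma_W = sign_iteration(-np.ones(n))          # worst-case sign pattern
sigma_B = sign_iteration(np.ones(n))           # best-case sign pattern
x_W, x_B = P @ Psi(sigma_W), P @ Psi(sigma_B)  # rest points x = P Psi(sigma)
```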
By Lemma 1, the \(+\) entries obtained at any iteration persist in all subsequent iterations, so the sequence \(\sigma(k)\) is nondecreasing and converges in at most \(n\) steps to a limit sign vector \(\sigma^{W}\). Symmetrically, initializing with \(\sigma(0)=[++\dots++]^{\top}\) produces a nonincreasing sequence converging to a limit \(\sigma^{B}\), and every fixed point of (9) lies between \(\sigma^{W}\) and \(\sigma^{B}\). Let \(\Psi^{-}:=r-\beta\) and \(\Psi^{+}:=r\) denote the extreme values of \(\Psi\). Consider the trajectory of system (5) starting from the negative candidate equilibrium: \[x^{W}(t),\ x^{W}(0)=P\Psi^{-}.\] Since \(\Psi(x)\geq\Psi^{-}\) for every \(x\), we have \[x^{W}(1)=Cx^{W}(0)+\Psi(x^{W}(0))\geq Cx^{W}(0)+\Psi^{-}=x^{W}(0).\] 
Then, recursively, by monotonicity, we have \[x^{W}(t+1)\geq x^{W}(t).\] Therefore \(x^{W}(t)\) converges to an equilibrium \(\bar{x}^{W}\) from below. Conversely, consider the trajectory of system (5) starting from the positive candidate equilibrium: \[x^{B}(t),\ x^{B}(0)=P\Psi^{+}.\] This sequence is monotonically nonincreasing and, symmetrically to the above, \(x^{B}(t)\) converges to an equilibrium \(\bar{x}^{B}\) from above. Necessarily, these equilibria are related to the bounds \(\sigma^{W}\) and \(\sigma^{B}\) introduced before; then we have \[\bar{x}^{W}\geq P\Psi(\sigma^{W}),\qquad\bar{x}^{B}\leq P\Psi(\sigma^{B}),\] since these are conditions that hold true for all equilibria. In fact, the inequalities are satisfied with equality. Indeed the initial conditions satisfy \[x^{W}(0)=P\Psi^{-}\leq\bar{x}^{W},\quad x^{B}(0)=P\Psi^{+}\geq\bar{x}^{B},\] so \(x^{W}(t)\) cannot become greater than \(\bar{x}^{W}\) and \(x^{B}(t)\) cannot become smaller than \(\bar{x}^{B}\). _Remark_.: Equilibria \(\bar{x}^{W}\) and \(\bar{x}^{B}\) are attractors w.r.t. the initial conditions in orthants \(2^{n}-1\) and \(0\), respectively. As is clear from the previous derivation, we further remark that there are components whose \(+\) or \(-\) sign is fixed from initialization. In particular, the indices where \[(Pr)_{i}<0\] holds true are \(-\) in all iterations. Vice versa, the indices where \[(P(r-\beta))_{i}>0\] holds true are \(+\) in all iterations. **Theorem 4**.: _Consider system (7) and let \(\bar{x}\) be an equilibrium. Let \(i=1,\ldots,n\)._ * _Case 1. Let_ \((Pr)_{i}<0\)_. Then,_ \(\bar{x}_{i}<0\)_._ * _Case 2. Let_ \((P(r-\beta))_{i}\geq 0\)_. Then,_ \(\bar{x}_{i}\geq 0\)_._ Proof.: Any equilibrium satisfies \(\bar{x}=P(r-B\phi(\bar{x}))\), i.e., componentwise, \[\bar{x}_{i}=\sum_{j}P_{ij}r_{j}-\sum_{j:\,\bar{x}_{j}<0}P_{ij}\beta_{j}.\] The proof addresses the above two points one by one. * **Case 1**. The quantity \[\bar{x}_{i}=\sum_{j}P_{ij}r_{j}-\sum_{j:\,\bar{x}_{j}<0}P_{ij}\beta_{j}\] is always negative, as the first sum is negative by assumption and the subtracted quantity is nonnegative. * **Case 2**. The quantity \[\bar{x}_{i}=\sum_{j}P_{ij}(r_{j}-\beta_{j})+\sum_{j:\,\bar{x}_{j}\geq 0}P_{ij}\beta_{j}\] is always nonnegative, as the first sum is nonnegative by assumption and the second sum is also nonnegative. This concludes our proof. _Remark_.: The convergence of the trajectory to a specific configuration of signs means that there exist no oscillations for the dynamical system in the corresponding orthant and the market values converge to the equilibrium point in that orthant. A direct consequence of Theorem 4 is the following result, which provides a bound on the number of failed organizations (and saved ones). 
**Corollary 1**.: _The number of failed organizations \(n_{F}\) is such that \(\mathbb{1}_{n}^{\top}\phi\big{(}(I_{n}-C)^{-1}r\big{)}\leq n_{F}\leq\mathbb{1}_{n}^{\top}\phi\big{(}(I_{n}-C)^{-1}(r-\beta)\big{)}\). _ Proof.: From Theorem 3, let \(\bar{x}^{\max}=(I_{n}-C)^{-1}r\) and \(\bar{x}^{\min}=(I_{n}-C)^{-1}(r-\beta)\); for a generic equilibrium \(\bar{x}\), it holds that \(\bar{x}^{\min}\leq\bar{x}\leq\bar{x}^{\max}\). Since \(\mathbb{1}_{n}^{\top}\phi(\bar{x})=n_{F}\), the number of failed organizations obeys the stated inequality, equivalent to \[\mathbb{1}_{n}^{\top}\phi\big{(}-\underline{V}+(I_{n}-C)^{-1}Dp\big{)}\leq n_{F}\leq\mathbb{1}_{n}^{\top}\phi\big{(}-\underline{V}+(I_{n}-C)^{-1}(Dp-\beta)\big{)}.\] This concludes our proof. **Example 3**.: _Before concluding the paper, we provide one last example in the spirit of [4, 11]. We now consider system (1) with \(n=9\) organizations and \(m=9\) assets. In particular, our analysis involves the cross-holdings among nine countries, i.e., France (FR), Germany (DE), Greece (GR), Italy (IT), Japan (JP), Portugal (PT), Spain (ES), United Kingdom (GB) and USA (US)._ _The matrix of cross-holdings \(C\) is summarised in Table 2. We assume that \(D=I_{9}\), and \(p\) is proportional to the countries' GDP as shown in Table 3. The initial condition is set to be \(V(0)=[15.2838,19.9137,0.9863,9.0642,28.3350,0.7829,8.8020,12.1361,59.8130]^{\top}\). We set \(\beta=0.5\ \mathbb{1}_{9}\) and \(\underline{V}=10\ \mathbb{1}_{9}\)._ _We show the behaviour of the nine countries and their convergence to \(\overline{V}\geq 0\). This is in accordance with Theorem 1. Figure 3 shows this scenario._ \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline & FR & DE & GR & IT & JP & PT & ES & GB & US \\ \hline \hline FR & 0 &.03 &.01 &.07 &.01 &.04 &.04 &.05 &.04 \\ \hline DE &.04 & 0 &.06 &.03 &.00 &.05 &.04 &.09 &.04 \\ \hline GR &.00 &.00 & 0 &.00 &.00 &.00 &.00 &.00 &.00 \\ \hline IT &.01 &.03 &.00 & 0 &.00 &.01 &.02 &.01 &.00 \\ \hline JP &.04 &.02 &.00 &.02 & 0 &.01 &.01 &.06 &.10 \\ \hline PT &.00 &.00 &.00 &.00 &.00 & 0 &.00 &.00 &.00 \\ \hline ES &.01 &.02 &.01 &.02 &.00 &.15 & 0 &.09 &.02 \\ \hline GB &.03 &.02 &.01 &.01 &.01 &.02 &.01 & 0 &.04 \\ \hline US &.04 &.02 &.01 &.02 &.02 &.02 &.02 &.09 & 0 \\ \hline \end{tabular} \end{table} Table 2: Table providing the values of the matrix of cross-holdings \(C\), adapted from [4]. ## 5 Conclusions In this paper, we study the propagation of failures in financial systems. Specifically, we have developed a dynamical system to capture financial failures and uncertainty in the form of an initial condition not at the equilibrium. We have carried out the stability analysis of this system by studying the existence and uniqueness of the equilibrium points. We have also shown how to compute these equilibria explicitly. Finally, we have proposed a computational method via sign-space iteration that can be used to analyze the behavior of the corresponding system of interconnected organizations. Future works include: i) the characterization of the invariance of each orthant of the \(2^{n}\) space and of the equilibria in each orthant, ii) the study of the worst-case scenario where all organizations fail and the conditions to prevent it, and iii) asset investments as feedback control design. 
## Acknowledgments DB has been supported by the SMiLES Research Project, part of the Research Programme Sustainable Living Labs, which is co-financed by the Dutch Research Council (NWO), the Ministry of Infrastructure and Water Management, The Netherlands, the Taskforce for Applied Research (SIA), The Netherlands and the Top Sector Logistics, The Netherlands. FB and PC have been partially supported by the Italian grant PRIN 2017 "Monitoring and Control Underpinning the Energy-Aware Factory of the Future: Novel Methodologies and Industrial Validation" (ID 2017YKKXYXJ). FB has been supported by the European Union - NextGenerationEU.
2304.05725
Topological aspects of quasi *-algebras with sufficiently many *-representations
Quasi *-algebras possessing a sufficient family $\mathcal{M}$ of invariant positive sesquilinear forms carry several topologies related to $\mathcal{M}$ which make every *-representation continuous. This leads to define the class of locally convex quasi GA*-algebras whose main feature consists in the fact that the family of their bounded elements, with respect to the family $\mathcal{M}$, is a dense C*-algebra.
Giorgia Bellomonte, Camillo Trapani
2023-04-12T09:30:53Z
http://arxiv.org/abs/2304.05725v1
# Topological aspects of quasi *-algebras with sufficiently many *-representations ###### Abstract. Quasi *-algebras possessing a sufficient family \(\mathcal{M}\) of invariant positive sesquilinear forms carry several topologies related to \(\mathcal{M}\) which make every *-representation continuous. This leads to define the class of locally convex quasi GA*-algebras whose main feature consists in the fact that the family of their bounded elements, with respect to the family \(\mathcal{M}\), is a dense C*-algebra. Key words and phrases: invariant positive sesquilinear form, *-representation, locally convex quasi *-algebra 2020 Mathematics Subject Classification: Primary 46K05, 46K10, 47L60, 47A07; Secondary 08A55 ## 1. Introduction Locally convex quasi *-algebras \((\mathfrak{A}[\tau],\mathfrak{A}_{0})\) often arise when taking the completion \(\mathfrak{A}:=\widetilde{\mathfrak{A}_{0}}[\tau]\) of a locally convex *-algebra \(\mathfrak{A}_{0}[\tau]\) with separately (but not jointly) continuous multiplication (this was, in fact, the case considered at an early stage of the theory, concerning applications in quantum physics). Concrete examples are provided by families of operators acting in rigged Hilbert spaces or by certain families of unbounded operators acting on a common domain \(\mathcal{D}\) of a Hilbert space \(\mathcal{H}\). For a synthesis of the theory and of its applications we refer to [6]. The study of this structure and the analysis performed also in [1, 4, 5, 7, 9] made it clear that the most regular situation occurs when the locally convex quasi *-algebra \((\mathfrak{A}[\tau],\mathfrak{A}_{0})\) under consideration possesses a _sufficiently rich_ family \(\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\) of invariant positive sesquilinear forms on \(\mathfrak{A}\times\mathfrak{A}\) (see below for definitions); they allow a GNS construction similar to that defined by a positive linear functional on a *-algebra \(\mathfrak{A}_{0}\). The basic idea from which this paper moves is to consider a quasi *-algebra \((\mathfrak{A},\mathfrak{A}_{0})\) where one can introduce a locally convex topology by means of the set of sesquilinear forms \(\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\) itself. In the best circumstances we expect a behavior analogous to that of a *-algebra \(\mathfrak{B}_{0}\) whose topology can be defined via families of C*-seminorms: \[p_{M}(x)=\sup_{\omega\in M;\omega(\mathbf{e})=1}\omega(x^{*}x)^{1/2}\] where \(M\) is a convenient set of positive linear functionals on \(\mathfrak{B}_{0}\) [10]. For this reason, we start from a purely algebraic setup, i.e., \((\mathfrak{A},\mathfrak{A}_{{}_{0}})\) is a quasi *-algebra and we suppose that it has a sufficiently large \(\mathcal{I}_{\mathfrak{A}_0}(\mathfrak{A})\) (in the sense that, for some convenient subset \(\mathcal{M}\subset\mathcal{I}_{\mathfrak{A}_0}(\mathfrak{A})\) and for every \(a\in\mathfrak{A}\), \(a\neq 0\), there exists \(\varphi\in\mathcal{M}\) such that \(\varphi(a,a)>0\)). Starting from this set \(\mathcal{M}\), we undertake the construction of locally convex topologies on \(\mathfrak{A}\), selecting in particular those under which each (sufficiently regular) *-representation is continuous. This analysis leads to the selection of a class of locally convex quasi *-algebras \((\mathfrak{A}[\tau],\mathfrak{A}_{{}_{0}})\) (called locally convex quasi GA*-algebras) whose bounded elements constitute a C*-algebra. The paper is organized as follows. 
In Section 2 some preliminary notions on quasi *-algebras, their topologies and their representations are summarized. In Section 3 we introduce the order defined by a family \(\mathcal{M}\subset\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\) whose related wedge becomes a cone when the family \(\mathcal{M}\) is sufficiently rich. In Section 4, given \(\mathcal{M}\subset\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\), we introduce two notions of bounded elements: those bounded with respect to a family \(\mathcal{M}\) and those related to the order defined by \(\mathcal{M}\). These two notions turn out to be equivalent and every *-representation produces a bounded operator when acting on a bounded element. In Section 5 the topologies generated by a family \(\mathcal{M}\subset\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\) are investigated and in Section 6 we finally introduce locally convex quasi GA*-algebras and study some properties of them. Locally convex quasi GA*-algebras are characterized by the fact that their topology is equivalent to that generated by some \(\mathcal{M}\subset\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\) as in Section 5. ## 2. Basic definitions and facts We begin with some preliminaries; we refer to [6] for details. A _quasi *-algebra_ \((\mathfrak{A},\mathfrak{A}_{{}_{0}})\) is a pair consisting of a vector space \(\mathfrak{A}\) and a *-algebra \(\mathfrak{A}_{{}_{0}}\) contained in \(\mathfrak{A}\) as a subspace and such that 1. \(\mathfrak{A}\) carries an involution \(a\mapsto a^{*}\) extending the involution of \(\mathfrak{A}_{{}_{0}}\); 2. \(\mathfrak{A}\) is a bimodule over \(\mathfrak{A}_{0}\) and the module multiplications extend the multiplication of \(\mathfrak{A}_{{}_{0}}\). In particular, the following associative laws hold: \[(xa)y=x(ay);\ \ a(xy)=(ax)y,\ \ \ \forall\ a\in\mathfrak{A},\ x,y\in\mathfrak{A}_{{}_{0}};\] 3. \((ax)^{*}=x^{*}a^{*}\), for every \(a\in\mathfrak{A}\) and \(x\in\mathfrak{A}_{{}_{0}}\). The _identity_ of \((\mathfrak{A},\mathfrak{A}_{{}_{0}})\), if any, is a necessarily unique element \(\mathsf{e}\in\mathfrak{A}_{{}_{0}}\), such that \(a\mathsf{e}=a=\mathsf{e}a\), for all \(a\in\mathfrak{A}\). We will always suppose that \[ax=0,\ \forall x\in\mathfrak{A}_{{}_{0}}\Rightarrow a=0\] \[ax=0,\ \forall a\in\mathfrak{A}\Rightarrow x=0.\] These two conditions are clearly satisfied if \((\mathfrak{A},\mathfrak{A}_{{}_{0}})\) has an identity \(\mathsf{e}\). **Definition 2.1**.: A quasi *-algebra \((\mathfrak{A},\mathfrak{A}_{{}_{0}})\) is said to be _locally convex_ if \(\mathfrak{A}\) is a locally convex vector space, with a topology \(\tau\) enjoying the following properties 1. \(x\mapsto x^{*},\ \ x\in\mathfrak{A}_{{}_{0}}\), is continuous; 2. for every \(a\in\mathfrak{A}\), the maps \(x\mapsto ax\) and \(x\mapsto xa\), from \(\mathfrak{A}_{{}_{0}}\) into \(\mathfrak{A}\), \(x\in\mathfrak{A}_{{}_{0}}\), are continuous; 3. \(\overline{\mathfrak{A}_{{}_{0}}}^{\,\tau}=\mathfrak{A}\); i.e., \(\mathfrak{A}_{{}_{0}}\) is dense in \(\mathfrak{A}[\tau]\). In particular, if \(\tau\) is a norm topology, with norm \(\|\cdot\|\), and 1. \(\|a^{*}\|=\|a\|,\ \forall a\in\mathfrak{A}\), then \((\mathfrak{A}[\|\cdot\|],\mathfrak{A}_{{}_{0}})\) is called a _normed quasi *-algebra_ and a _Banach quasi *-algebra_ if the normed vector space \(\mathfrak{A}[\|\cdot\|]\) is complete. Let \(\mathcal{D}\) be a dense vector subspace of a Hilbert space \(\mathcal{H}\). 
Let us consider the following families of linear operators acting on \(\mathcal{D}\): \[\mathcal{L}^{\dagger}(\mathcal{D},\mathcal{H}) =\{X\ \text{closable},D(X)=\mathcal{D};\ D(X^{*})\supset\mathcal{D}\}\] \[\mathcal{L}^{\dagger}(\mathcal{D}) =\{X\in\mathcal{L}^{\dagger}(\mathcal{D},\mathcal{H}):X\mathcal{D}\subset\mathcal{D};\ X^{*}\mathcal{D}\subset\mathcal{D}\}\] \[\mathcal{L}^{\dagger}(\mathcal{D})_{b} =\{Y\in\mathcal{L}^{\dagger}(\mathcal{D});\ \overline{Y}\ \text{bounded}\},\] where \(\overline{Y}\) denotes the closure of \(Y\). The involution in \(\mathcal{L}^{\dagger}(\mathcal{D},\mathcal{H})\) is defined by \(X^{\dagger}:=X^{*}\upharpoonright\mathcal{D}\), the restriction of \(X^{*}\), the adjoint of \(X\), to \(\mathcal{D}\). The set \(\mathcal{L}^{\dagger}(\mathcal{D})\) is a *-algebra; more precisely, it is the maximal O*-algebra on \(\mathcal{D}\) (for the theories of O*-algebras and *-representations we refer to [8]). Furthermore, \(\mathcal{L}^{\dagger}(\mathcal{D},\mathcal{H})\) is also a _partial *-algebra_ [2] with respect to the following operations: the usual sum \(X_{1}+X_{2}\), the scalar multiplication \(\lambda X\), the involution \(X\mapsto X^{\dagger}:=X^{*}\upharpoonright\mathcal{D}\) and the _(weak)_ partial multiplication \[X_{1}\,\Box\,X_{2}=X_{1}^{\dagger\,*}X_{2}, \tag{2.1}\] defined whenever \(X_{2}\) is a weak right multiplier of \(X_{1}\) (we shall write \(X_{2}\in R^{\mathrm{w}}(X_{1})\) or \(X_{1}\in L^{\mathrm{w}}(X_{2})\)), that is, whenever \(X_{2}\mathcal{D}\subset\mathcal{D}(X_{1}^{\dagger\,*})\) and \(X_{1}^{*}\mathcal{D}\subset\mathcal{D}(X_{2}^{*})\). The following topologies on \(\mathcal{L}^{\dagger}(\mathcal{D},\mathcal{H})\) will be used in this paper. The _weak topology_ \(\mathfrak{t}_{w}\) on \(\mathcal{L}^{\dagger}(\mathcal{D},\mathcal{H})\) is defined by the seminorms \[r_{\xi,\eta}(X)=|\langle X\xi|\eta\rangle|,\quad X\in\mathcal{L}^{\dagger}(\mathcal{D},\mathcal{H}),\,\xi,\eta\in\mathcal{D}.\] The _strong topology_ \(\mathfrak{t}_{s}\) on \(\mathcal{L}^{\dagger}(\mathcal{D},\mathcal{H})\) is defined by the seminorms \[p_{\xi}(X)=\|X\xi\|,\quad X\in\mathcal{L}^{\dagger}(\mathcal{D},\mathcal{H}),\,\xi\in\mathcal{D}.\] The _strong* topology_ \(\mathfrak{t}_{s^{*}}\) on \(\mathcal{L}^{\dagger}(\mathcal{D},\mathcal{H})\) is usually defined by the seminorms \[p_{\xi}^{*}(X)=\max\{\|X\xi\|,\|X^{\dagger}\xi\|\},\,\xi\in\mathcal{D}.\] Then, \((\mathcal{L}^{\dagger}(\mathcal{D},\mathcal{H})[\mathfrak{t}_{s^{*}}],\mathcal{L}^{\dagger}(\mathcal{D})_{b})\) is a complete locally convex quasi *-algebra [6, Section 6.1]. Let us denote by \(t_{\dagger}\) the graph topology on \(\mathcal{D}\) defined by the set of seminorms \[\xi\in\mathcal{D}\to\|X\xi\|;\;X\in\mathcal{L}^{\dagger}(\mathcal{D},\mathcal{H}).\] The family of all bounded subsets of \(\mathcal{D}[t_{\dagger}]\) is denoted by \(\mathfrak{B}\). 
We will indicate by \(\mathfrak{t}_{u}\), \(\mathfrak{t}^{u}\) and \(\mathfrak{t}_{*}^{u}\), respectively, the _uniform_ topologies defined by the following families of seminorms: for \(\mathfrak{t}_{u}:p_{\mathcal{B}}(X)=\sup_{\xi,\eta\in\mathcal{B}}|\langle X\xi|\eta\rangle|,\quad\mathcal{B}\in\mathfrak{B}\); for \(\mathfrak{t}^{u}:p^{\mathcal{B}}(X)=\sup_{\xi\in\mathcal{B}}\|X\xi\|,\quad\mathcal{B}\in\mathfrak{B}\); for \(\mathfrak{t}_{*}^{u}:p_{*}^{\mathcal{B}}(X)=\max\{p^{\mathcal{B}}(X),p^{\mathcal{B}}(X^{\dagger})\},\quad\mathcal{B}\in\mathfrak{B}\). It is easy to see that \(\mathfrak{t}_{u}\preceq\mathfrak{t}^{u}\preceq\mathfrak{t}_{*}^{u}\): \[p_{\mathcal{B}}(X)\leq\gamma_{\mathcal{B}}\,p^{\mathcal{B}}(X)\leq\gamma_{\mathcal{B}}p_{*}^{\mathcal{B}}(X),\quad\forall X\in\mathcal{L}^{\dagger}(\mathcal{D},\mathcal{H});\] moreover, \[p_{\mathcal{B}}(X^{\dagger}\,\Box\,X)=p^{\mathcal{B}}(X)^{2}\text{ whenever }X^{\dagger}\,\Box\,X\text{ is well-defined.}\] As shown in [2, Proposition 4.2.3], \(\mathcal{L}^{\dagger}(\mathcal{D},\mathcal{H})[\mathfrak{t}_{*}^{u}]\) is complete. **Definition 2.2**.: Let \((\mathfrak{A},\mathfrak{A}_{{}_{0}})\) be a quasi *-algebra and \(\mathcal{D}_{\pi}\) a dense domain in a certain Hilbert space \(\mathcal{H}_{\pi}\). A linear map \(\pi\) from \(\mathfrak{A}\) into \(\mathcal{L}^{\dagger}(\mathcal{D}_{\pi},\mathcal{H}_{\pi})\) is called a *-_representation_ of \((\mathfrak{A},\mathfrak{A}_{{}_{0}})\), if the following properties are fulfilled: * \(\pi(a^{*})=\pi(a)^{\dagger},\quad\forall\ a\in\mathfrak{A}\); * for \(a\in\mathfrak{A}\) and \(x\in\mathfrak{A}_{{}_{0}}\), \(\pi(a)\,\Box\,\pi(x)\) is well-defined and \(\pi(a)\,\Box\,\pi(x)=\pi(ax)\). If \((\mathfrak{A},\mathfrak{A}_{{}_{0}})\) has a unit \(\mathsf{e}\in\mathfrak{A}_{{}_{0}}\), we assume that for every *-representation \(\pi\) of \((\mathfrak{A},\mathfrak{A}_{{}_{0}})\), \(\pi(\mathsf{e})=\mathbb{I}_{\mathcal{D}_{\pi}}\), the identity operator on the space \(\mathcal{D}_{\pi}\). If \(\pi_{o}:=\pi\upharpoonright\mathfrak{A}_{{}_{0}}\) is a *-representation of the *-algebra \(\mathfrak{A}_{{}_{0}}\) into \(\mathcal{L}^{\dagger}(\mathcal{D}_{\pi})\) we say that \(\pi\) is a _qu*-representation_ of \((\mathfrak{A},\mathfrak{A}_{{}_{0}})\). A *-representation \(\pi\) is called _bounded_ if \(\pi(a)\) is a bounded operator in \(\mathcal{D}_{\pi}\), for every \(a\in\mathfrak{A}\). Let \((\mathfrak{A},\mathfrak{A}_{{}_{0}})\) be a quasi *-algebra. We denote by \(\mathcal{Q}_{\mathfrak{A}_{{}_{0}}}(\mathfrak{A})\) the set of all sesquilinear forms on \(\mathfrak{A}\times\mathfrak{A}\), such that 1. \(\varphi\) is positive, i.e., \(\varphi(a,a)\geq 0,\quad\forall\ a\in\mathfrak{A}\); 2. \(\varphi(ax,y)=\varphi(x,a^{*}y),\quad\forall\ a\in\mathfrak{A},\ x,y\in\mathfrak{A}_{{}_{0}}\). For every \(\varphi\in\mathcal{Q}_{\mathfrak{A}_{0}}(\mathfrak{A})\), the set \[N_{\varphi}:=\big{\{}a\in\mathfrak{A}:\varphi(a,a)=0\big{\}}=\big{\{}a\in\mathfrak{A}:\varphi(a,b)=0,\ \forall\ b\in\mathfrak{A}\big{\}}\] is a subspace of \(\mathfrak{A}\). Let \(\lambda_{\varphi}:\mathfrak{A}\to\mathfrak{A}/N_{\varphi}\) be the usual quotient map and for each \(a\in\mathfrak{A}\), let \(\lambda_{\varphi}(a)\) be the corresponding coset of \(\mathfrak{A}/N_{\varphi}\), which contains \(a\). 
An inner product \(\langle\cdot|\cdot\rangle\) is then defined on \(\lambda_{\varphi}(\mathfrak{A})=\mathfrak{A}/N_{\varphi}\) by \[\langle\lambda_{\varphi}(a)|\lambda_{\varphi}(b)\rangle:=\varphi(a,b),\quad\forall\ a,b\in\mathfrak{A}.\] Denote by \(\mathcal{H}_{\varphi}\) the Hilbert space obtained by the completion of the pre-Hilbert space \(\lambda_{\varphi}(\mathfrak{A})\). **Definition 2.3**.: We denote by \(\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\) the subset of forms \(\varphi\in\mathcal{Q}_{\mathfrak{A}_{0}}(\mathfrak{A})\) for which \(\lambda_{\varphi}(\mathfrak{A}_{{}_{0}})\) is dense in \(\mathcal{H}_{\varphi}\). Elements of \(\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\) are also called _invariant positive sesquilinear forms_ or briefly _ips-forms_. Moreover, if \((\mathfrak{A}[\tau],\mathfrak{A}_{{}_{0}})\) is a locally convex quasi *-algebra, we denote by \(\mathcal{P}^{\tau}_{\mathfrak{A}_{0}}(\mathfrak{A})\) the family of elements \(\varphi\) of \(\mathcal{Q}_{\mathfrak{A}_{0}}(\mathfrak{A})\) that are jointly \(\tau\)-continuous; i.e., there exists a continuous seminorm \(p_{\sigma}\) such that \[|\varphi(a,b)|\leq p_{\sigma}(a)p_{\sigma}(b),\quad\forall a,b\in\mathfrak{A}.\] The sesquilinear forms of \(\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\) allow building up a GNS-representation [6]. Indeed, **Proposition 2.4**.: _Let \((\mathfrak{A},\mathfrak{A}_{{}_{0}})\) be a quasi *-algebra with unit \(\mathsf{e}\) and \(\varphi\) a sesquilinear form on \(\mathfrak{A}\times\mathfrak{A}\). The following statements are equivalent:_ 1. \(\varphi\in\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\)_._ 2. _There exist a Hilbert space_ \(\mathcal{H}_{\varphi}\)_, a dense domain_ \(\mathcal{D}_{\varphi}\) _of the Hilbert space_ \(\mathcal{H}_{\varphi}\) _and a closed cyclic *-representation_ \(\pi_{\varphi}\) _in_ \(\mathcal{L}^{\dagger}(\mathcal{D}_{\varphi},\mathcal{H}_{\varphi})\)_, with cyclic vector_ \(\xi_{\varphi}\) (in the sense that \(\pi_{\varphi}(\mathfrak{A}_{{}_{0}})\xi_{\varphi}\) is dense in \(\mathcal{H}_{\varphi}\))_, such that_ \[\varphi(a,b)=\langle\pi_{\varphi}(a)\xi_{\varphi}|\pi_{\varphi}(b)\xi_{\varphi}\rangle,\quad\forall\ a,b\in\mathfrak{A}.\] **Remark 2.5**.: The *-representation \(\pi_{\varphi}\) is in fact obtained by taking the closure of the *-representation \(\pi_{\varphi}^{\circ}\) defined on \(\lambda_{\varphi}(\mathfrak{A}_{{}_{0}})\) by \[\pi_{\varphi}^{\circ}(a)\lambda_{\varphi}(x)=\lambda_{\varphi}(ax)\quad a\in\mathfrak{A},x\in\mathfrak{A}_{{}_{0}}.\] If \(\mathcal{M}\subset\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\) is _rich enough_ (in the sense that, for every \(a\in\mathfrak{A}\), \(a\neq 0\), there exists \(\varphi\in\mathcal{M}\) such that \(\varphi(a,a)>0\)) then we can introduce a partial multiplication as in [6, Definition 3.1.30]. Indeed, in this case, we say that the _weak_ multiplication, \(a\,\Box\,b,\ a,b\in\mathfrak{A}\), is well-defined if there exists \(c\in\mathfrak{A}\), such that \[\varphi(bx,a^{*}y)=\varphi(cx,y),\quad\forall\ x,y\in\mathfrak{A}_{0}\ \ \mbox{and}\ \ \varphi\in\mathcal{M}. \tag{2.2}\] In this case, we put \(a\,\Box\,b:=c\). With these definitions, we conclude that ([3, Proposition 4.4]) \(\mathfrak{A}\) is also a partial *-algebra with respect to the weak multiplication \(\Box\). **Remark 2.6**.: The uniqueness of \(c=a\,\Box\,b\) is guaranteed by Proposition 3.3 below. Clearly this multiplication depends on the family \(\mathcal{M}\). ## 3. 
As discussed extensively in [6], the notion of bounded element of a locally convex quasi *-algebra proves to be important for undertaking a spectral analysis in this structure. We propose here two different approaches, similar to those developed in [3] but without the continuity assumptions made therein. Before proceeding, we introduce some notions needed in what follows. In particular, in analogy to [10],

**Definition 3.1**.: A subset \(\mathcal{M}\subset\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\) is said to be

**balanced:** if \(\varphi\in\mathcal{M}\) implies \(\varphi^{x}\in\mathcal{M}\), for every \(x\in\mathfrak{A}_{0}\), where \(\varphi^{x}(a,b):=\varphi(ax,bx)\) for all \(a,b\in\mathfrak{A}\);

**sufficient:** if it is balanced and if, for every \(a\in\mathfrak{A}\setminus\{0\}\), there exists \(\varphi\in\mathcal{M}\) such that \(\varphi(a,a)>0\).

**Remark 3.2**.: If \((\mathfrak{A}[\tau],\mathfrak{A}_{0})\) is a locally convex quasi *-algebra, then

* \(\mathcal{P}^{\tau}_{\mathfrak{A}_{0}}(\mathfrak{A})\subset\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\);
* \(\mathcal{P}^{\tau}_{\mathfrak{A}_{0}}(\mathfrak{A})\) is balanced.

The following proposition allows us to deal with the notion of sufficiency in other, equivalent ways.

**Proposition 3.3**.: _Let \((\mathfrak{A},\mathfrak{A}_{0})\) be a quasi *-algebra, \(\mathcal{M}\) a subset of \(\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\) and \(a\in\mathfrak{A}\). Then the following are equivalent:_

_i) \(\varphi(ax,x)=0\), for all \(\varphi\in\mathcal{M}\), \(x\in\mathfrak{A}_{0}\);_

_ii) \(\varphi(ax,y)=0\), for all \(\varphi\in\mathcal{M}\), \(x,y\in\mathfrak{A}_{0}\);_

_iii) \(\varphi(ax,ax)=0\), for all \(\varphi\in\mathcal{M}\), \(x\in\mathfrak{A}_{0}\)._

_If \((\mathfrak{A},\mathfrak{A}_{0})\) has unit \(\mathsf{e}\) and \(\mathcal{M}\) is balanced, then the previous statements are equivalent to_

_iv) \(\varphi(a,a)=0\), for every \(\varphi\in\mathcal{M}\)._

In the case of a locally convex quasi *-algebra \((\mathfrak{A}[\tau],\mathfrak{A}_{0})\), positive elements have been defined in [7] as the members of the closure \(\mathfrak{A}^{+}:=\overline{\mathfrak{A}_{0}^{+}}^{\tau}\), where

\[\mathfrak{A}_{0}^{+}:=\left\{\sum_{k=1}^{n}x_{k}^{*}x_{k},\,x_{k}\in\mathfrak{A}_{0},\,n\in\mathbb{N}\right\}.\]

Here, as we have anticipated, we will start from a quasi *-algebra without a topology and we will introduce the notion of positive element via a family \(\mathcal{M}\) of forms of \(\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\).

**Definition 3.4**.: Let \((\mathfrak{A},\mathfrak{A}_{0})\) be a quasi *-algebra. We call \(\mathcal{M}\)-_positive_ an element \(a\in\mathfrak{A}\) such that

\[\varphi(ax,x)\geq 0,\quad\forall\varphi\in\mathcal{M},\forall x\in\mathfrak{A}_{0}.\]

We put

\[\mathcal{K}_{{}_{\mathcal{M}}}:=\{a\in\mathfrak{A}:\,\varphi(ax,x)\geq 0,\,\,\forall\varphi\in\mathcal{M},\forall x\in\mathfrak{A}_{0}\}.\]

If \(\mathcal{M}=\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\) we denote the corresponding set by \(\mathcal{K}_{{}_{\mathcal{I}}}\).

**Lemma 3.5**.: _Let \((\mathfrak{A},\mathfrak{A}_{0})\) be a quasi *-algebra with a sufficient \(\mathcal{M}\subset\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\). If \(a\) is \(\mathcal{M}\)-positive, then \(a=a^{*}\)._

Proof.: The conclusion is a consequence of Proposition 3.3.
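The step from i) to ii) in Proposition 3.3 rests on a polarization argument, which we make explicit. Assuming, as in the GNS construction, that each \(\varphi\) is linear in its first variable, for fixed \(a\in\mathfrak{A}\) the map \((x,y)\mapsto\varphi(ax,y)\) is sesquilinear on \(\mathfrak{A}_{0}\times\mathfrak{A}_{0}\), hence

\[\varphi(ax,y)=\frac{1}{4}\sum_{k=0}^{3}i^{k}\,\varphi\big(a(x+i^{k}y),\,x+i^{k}y\big),\quad\forall\ x,y\in\mathfrak{A}_{0},\]

and the right-hand side vanishes whenever i) holds.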
The set \(\mathcal{K}_{{}_{\mathcal{M}}}\) is a \(qm\)-admissible wedge; that is, \(a+b\in\mathcal{K}_{{}_{\mathcal{M}}}\), \(\lambda a\in\mathcal{K}_{{}_{\mathcal{M}}}\) and \(x^{*}ax\in\mathcal{K}_{{}_{\mathcal{M}}}\) for all \(a,b\in\mathcal{K}_{{}_{\mathcal{M}}}\), \(x\in\mathfrak{A}_{0}\) and \(\lambda\geq 0\). If, moreover, \(\mathfrak{A}\) has a unit \(\mathsf{e}\), then \(\mathsf{e}\in\mathcal{K}_{{}_{\mathcal{M}}}\). As usual, one can define an order on the _real_ vector space \(\mathfrak{A}_{h}=\big{\{}a\in\mathfrak{A}:a=a^{*}\big{\}}\) by

\[a\leq_{{}_{\mathcal{M}}}b\ \ \Leftrightarrow\ \ b-a\in\mathcal{K}_{{}_{\mathcal{M}}},\quad a,b\in\mathfrak{A}_{h}.\]

**Proposition 3.6**.: _Let \((\mathfrak{A},\mathfrak{A}_{0})\) be a quasi *-algebra with unit \(\mathsf{e}\) and let \(\mathcal{M}\) be a balanced subset of \(\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\). Then the following are equivalent:_

_i) \(\mathcal{M}\) is sufficient;_

_ii) \(\mathcal{K}_{{}_{\mathcal{M}}}\cap(-\mathcal{K}_{{}_{\mathcal{M}}})=\{0\}\)._

Proof.: \(i)\Rightarrow ii)\) Let \(a\in\mathcal{K}_{{}_{\mathcal{M}}}\cap(-\mathcal{K}_{{}_{\mathcal{M}}})\). Then \(\varphi(ax,x)=0\) for every \(\varphi\in\mathcal{M}\) and every \(x\in\mathfrak{A}_{0}\); hence, by Proposition 3.3 and the sufficiency of \(\mathcal{M}\), we get \(a=0\).

\(ii)\Rightarrow i)\) Let us suppose, by contradiction, that there exists \(a\in\mathfrak{A}\), \(a\neq 0\), such that \(\varphi(a,a)=0\) for every \(\varphi\in\mathcal{M}\). Then, again by Proposition 3.3, it follows that \(\varphi(ax,x)=0\) for every \(x\in\mathfrak{A}_{0}\) and every \(\varphi\in\mathcal{M}\); this means that \(a\in\mathcal{K}_{{}_{\mathcal{M}}}\cap(-\mathcal{K}_{{}_{\mathcal{M}}})=\{0\}\), a contradiction.

## 4. Bounded and order bounded elements

**Definition 4.1**.: Let \((\mathfrak{A},\mathfrak{A}_{0})\) be a quasi *-algebra with \(\mathcal{M}\subset\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\) sufficient. We say that an element \(a\in\mathfrak{A}\) is \(\mathcal{M}\)-_bounded_ if there exists \(\gamma_{a}=\gamma_{a,\mathcal{M}}>0\) such that

\[|\varphi(ax,y)|\leq\gamma_{a}\varphi(x,x)^{1/2}\varphi(y,y)^{1/2},\quad\forall\varphi\in\mathcal{M};\,\forall x,y\in\mathfrak{A}_{0}.\]

If \(a\) is \(\mathcal{M}\)-bounded, we put

\[\|a\|_{\mathrm{b}}^{\mathcal{M}}=\inf\{\gamma_{a}>0:|\varphi(ax,y)|\leq\gamma_{a}\varphi(x,x)^{1/2}\varphi(y,y)^{1/2},\ \varphi\in\mathcal{M},\ x,y\in\mathfrak{A}_{0}\}.\]

**Remark 4.2**.: For future use, we notice that, in general and regardless of the \(\mathcal{M}\)-boundedness of \(a\in\mathfrak{A}\), the following equalities hold:

\[\Lambda_{a}:=\inf\{\gamma_{a}>0:|\varphi(ax,y)|\leq\gamma_{a}\varphi(x,x)^{1/2}\varphi(y,y)^{1/2},\,\varphi\in\mathcal{M},\,x,y\in\mathfrak{A}_{0}\}\]
\[=\sup\{|\varphi(ax,y)|;\varphi\in\mathcal{M},x,y\in\mathfrak{A}_{0},\varphi(x,x)=\varphi(y,y)=1\}\]
\[=\sup\{\|\pi_{\varphi}(a)\|;\ \varphi\in\mathcal{M}\}.\]

Moreover, if \(a=a^{*}\),

\[\Lambda_{a}=\sup\{|\varphi(az,z)|;\varphi\in\mathcal{M},z\in\mathfrak{A}_{0},\varphi(z,z)=1\}.\]

The value of \(\Lambda_{a}\) is finite if and only if \(a\) is \(\mathcal{M}\)-bounded; by the definition itself, \(\|a\|_{\mathrm{b}}^{\mathcal{M}}=\Lambda_{a}\).

**Lemma 4.3**.: _Let \(a,b\in\mathfrak{A}\) be \(\mathcal{M}\)-bounded. Then_

1. \(a^{*}\) _is_ \(\mathcal{M}\)_-bounded too, and_ \(\|a^{*}\|_{\mathrm{b}}^{\mathcal{M}}=\|a\|_{\mathrm{b}}^{\mathcal{M}}\)_;_
2. \(a+b\) _is_ \(\mathcal{M}\)_-bounded and_ \(\|a+b\|_{\mathrm{b}}^{\mathcal{M}}\leq\|a\|_{\mathrm{b}}^{\mathcal{M}}+\|b\|_{\mathrm{b}}^{\mathcal{M}}\)_;_
3. \(\alpha a\) _is_ \(\mathcal{M}\)_-bounded,_ \(\forall\alpha\in\mathbb{C}\)_;_
4. _if_ \(a\circ b\) _is well-defined, the product_ \(a\circ b\) _is_ \(\mathcal{M}\)_-bounded and_ \(\|a\circ b\|_{\mathrm{b}}^{\mathcal{M}}\leq\|a\|_{\mathrm{b}}^{\mathcal{M}}\|b\|_{\mathrm{b}}^{\mathcal{M}}\)_._

As in [3, Proposition 4.18] one can prove

**Proposition 4.4**.: _Let \(a,b\) be \(\mathcal{M}\)-bounded elements of \(\mathfrak{A}\) and let \(\varphi\in\mathcal{M}\). Then, if \(a\circ b\) is well-defined, \(\pi_{\varphi}(a)\Box\pi_{\varphi}(b)\) is also well-defined and \(\pi_{\varphi}(a\circ b)=\pi_{\varphi}(a)\Box\pi_{\varphi}(b)\)._

**Remark 4.5**.: Remark 4.2 and Proposition 4.4 imply that if \(a\) is \(\mathcal{M}\)-bounded and \(a^{*}\circ a\) is well-defined, then \(\|a^{*}\circ a\|_{\mathrm{b}}^{\mathcal{M}}=(\|a\|_{\mathrm{b}}^{\mathcal{M}})^{2}\).

The notion of \(\mathcal{M}\)-positive element can be used to give a formally different definition of bounded element. Let \(a\in\mathfrak{A}\) and put \(\Re(a)=\frac{1}{2}(a+a^{*})\), \(\Im(a)=\frac{1}{2i}(a-a^{*})\). Then both \(\Re(a),\Im(a)\in\mathfrak{A}_{h}\) and \(a=\Re(a)+i\Im(a)\).

**Definition 4.6**.: Let \((\mathfrak{A},\mathfrak{A}_{0})\) be a quasi *-algebra. The element \(a\in\mathfrak{A}\) is said to be \(\mathcal{K}_{{}_{\mathcal{M}}}\)-_bounded_ if there exists \(\gamma\geq 0\) such that

\[\left\{\begin{array}{l}\pm\varphi(\Re(a)x,x)\leq\gamma\varphi(x,x)\\ \pm\varphi(\Im(a)x,x)\leq\gamma\varphi(x,x)\end{array}\right.,\ \forall\varphi\in\mathcal{M},x\in\mathfrak{A}_{0}. \tag{4.1}\]

If \((\mathfrak{A},\mathfrak{A}_{0})\) is unital, then we can rewrite (4.1), more synthetically, as

\[\pm\Re(a)\leq_{{}_{\mathcal{M}}}\gamma\mathsf{e},\qquad\pm\Im(a)\leq_{{}_{\mathcal{M}}}\gamma\mathsf{e}.\]

We denote by \(\mathfrak{A}_{{}_{\mathrm{b}}}(\mathcal{K}_{{}_{\mathcal{M}}})\) the set of all \(\mathcal{K}_{{}_{\mathcal{M}}}\)-bounded elements of \(\mathfrak{A}\). As in [7], the following result holds true:

**Proposition 4.7**.: _The pair \((\mathfrak{A}_{{}_{\mathrm{b}}}(\mathcal{K}_{{}_{\mathcal{M}}}),\mathfrak{A}_{{}_{\mathrm{b}}}(\mathcal{K}_{{}_{\mathcal{M}}})\bigcap\mathfrak{A}_{0})\) is a quasi *-algebra; hence, in particular,_

1. \(\alpha a+\beta b\in\mathfrak{A}_{{}_{\mathrm{b}}}(\mathcal{K}_{{}_{\mathcal{M}}})\) _for any_ \(\alpha,\beta\in\mathbb{C}\) _and_ \(a,b\in\mathfrak{A}_{{}_{\mathrm{b}}}(\mathcal{K}_{{}_{\mathcal{M}}})\)_;_
2. \(a\in\mathfrak{A}_{{}_{\mathrm{b}}}(\mathcal{K}_{{}_{\mathcal{M}}})\Leftrightarrow a^{*}\in\mathfrak{A}_{{}_{\mathrm{b}}}(\mathcal{K}_{{}_{\mathcal{M}}})\)_;_
3. \(a\in\mathfrak{A}_{{}_{\mathrm{b}}}(\mathcal{K}_{{}_{\mathcal{M}}})\)_,_ \(x\in\mathfrak{A}_{{}_{\mathrm{b}}}(\mathcal{K}_{{}_{\mathcal{M}}})\bigcap\mathfrak{A}_{0}\Rightarrow xa\in\mathfrak{A}_{{}_{\mathrm{b}}}(\mathcal{K}_{{}_{\mathcal{M}}})\)_;_
4. \(x\in\mathfrak{A}_{{}_{\mathrm{b}}}(\mathcal{K}_{{}_{\mathcal{M}}})\bigcap\mathfrak{A}_{0}\Leftrightarrow xx^{*}\in\mathfrak{A}_{{}_{\mathrm{b}}}(\mathcal{K}_{{}_{\mathcal{M}}})\bigcap\mathfrak{A}_{0}\)_._

_In particular, if \((\mathfrak{A},\mathfrak{A}_{0})\) has a unit, then also \((\mathfrak{A}_{{}_{\mathrm{b}}}(\mathcal{K}_{{}_{\mathcal{M}}}),\mathfrak{A}_{{}_{\mathrm{b}}}(\mathcal{K}_{{}_{\mathcal{M}}})\bigcap\mathfrak{A}_{0})\) has a unit._

**Theorem 4.8**.: _Let \((\mathfrak{A},\mathfrak{A}_{0})\) be a quasi *-algebra with sufficient \(\mathcal{M}\subset\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\). Then the following are equivalent:_

1. \(a\in\mathfrak{A}_{{}_{\mathrm{b}}}(\mathcal{K}_{{}_{\mathcal{M}}})\)_;_
2. \(a\) _is_ \(\mathcal{M}\)_-bounded; i.e., (Definition 4.1) there exists_ \(\gamma_{a}>0\) _such that_
\[|\varphi(ax,y)|\leq\gamma_{a}\varphi(x,x)^{1/2}\varphi(y,y)^{1/2},\,\forall x,y\in\mathfrak{A}_{0},\]
_for every_ \(\varphi\in\mathcal{M}\)_;_
3. _there exists_ \(\gamma^{\prime}_{a}>0\) _such that_
\[\varphi(ax,ax)\leq\gamma^{\prime}_{a}\varphi(x,x),\,\forall x\in\mathfrak{A}_{0},\]
_for every_ \(\varphi\in\mathcal{M}\)_;_
4. _there exists_ \(\gamma^{\prime\prime}_{a}>0\) _such that_
\[|\varphi(ax,x)|\leq\gamma^{\prime\prime}_{a}\varphi(x,x),\,\forall x\in\mathfrak{A}_{0},\]
_for every_ \(\varphi\in\mathcal{M}\)_._

Proof.: We prove the statements for symmetric elements; the general case then follows by applying them to the symmetric parts in the decomposition \(a=\Re(a)+i\Im(a)\).

\(i)\Rightarrow ii)\) It is clear that if \(a=a^{*}\in\mathfrak{A}_{{}_{\mathrm{b}}}(\mathcal{K}_{{}_{\mathcal{M}}})\), then there exists \(\gamma>0\) such that

\[|\varphi(ax,x)|\leq\gamma\varphi(x,x),\,\forall\varphi\in\mathcal{M},\forall x\in\mathfrak{A}_{0},\]

hence

\[\sup\{|\varphi(ax,x)|;\ \varphi\in\mathcal{M},x\in\mathfrak{A}_{0},\varphi(x,x)=1\}<\infty.\]

From Remark 4.2 it follows that \(\|a\|_{\mathrm{b}}^{\mathcal{M}}<\infty\), i.e., \(a\) is \(\mathcal{M}\)-bounded.

\(ii)\Rightarrow iii)\) Assume that \(a\in\mathfrak{A}\) is \(\mathcal{M}\)-bounded. If \(\varphi\in\mathcal{M}\), denote by \(\pi_{\varphi}\) the corresponding GNS representation. Then,

\[|\langle\pi_{\varphi}(a)\lambda_{\varphi}(x)|\lambda_{\varphi}(y)\rangle|=|\varphi(ax,y)|\leq\|a\|_{\mathrm{b}}^{\mathcal{M}}\varphi(x,x)^{1/2}\varphi(y,y)^{1/2}=\|a\|_{\mathrm{b}}^{\mathcal{M}}\|\lambda_{\varphi}(x)\|\,\|\lambda_{\varphi}(y)\|,\quad\forall x,y\in\mathfrak{A}_{0}.\]

This implies that, for every \(\varphi\in\mathcal{M}\), the operator \(\pi_{\varphi}(a)\) is bounded and \(\|\pi_{\varphi}(a)\|\leq\|a\|_{\mathrm{b}}^{\mathcal{M}}\). Hence,

\[\varphi(ax,ax)^{1/2}=\|\pi_{\varphi}(a)\lambda_{\varphi}(x)\|\leq\|a\|_{\mathrm{b}}^{\mathcal{M}}\|\lambda_{\varphi}(x)\|=\|a\|_{\mathrm{b}}^{\mathcal{M}}\varphi(x,x)^{1/2},\quad\forall x\in\mathfrak{A}_{0}. \tag{4.2}\]

\(iii)\Rightarrow iv)\) Suppose that \(a\) satisfies \(iii)\). Let \(\varphi\in\mathcal{M}\) and \(x\in\mathfrak{A}_{0}\). Then

\[|\varphi(ax,x)|\leq\varphi(ax,ax)^{1/2}\varphi(x,x)^{1/2}\leq{\gamma^{\prime}_{a}}^{1/2}\varphi(x,x).\]

\(iv)\Rightarrow i)\) It is straightforward.
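Concerning the reduction to symmetric elements used in the proof above, note that the decomposition behaves as claimed: since the involution is antilinear,

\[\Re(a)^{*}=\tfrac{1}{2}(a^{*}+a)=\Re(a),\qquad\Im(a)^{*}=-\tfrac{1}{2i}(a^{*}-a)=\Im(a),\]

so \(\Re(a),\Im(a)\in\mathfrak{A}_{h}\) and \(a=\Re(a)+i\Im(a)\); moreover, by Lemma 4.3, \(a\) is \(\mathcal{M}\)-bounded if and only if both \(\Re(a)\) and \(\Im(a)\) are.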
**Remark 4.9**.: By the previous theorem we also deduce the following equalities for the norm of an \(\mathcal{M}\)-bounded element \(a\) (see also Remark 4.2):

\[\|a\|_{\mathrm{b}}^{\mathcal{M}}=\inf\{\gamma>0:\varphi(ax,ax)\leq\gamma^{2}\varphi(x,x),\,\varphi\in\mathcal{M},\,x\in\mathfrak{A}_{0}\}=\sup\{\varphi(ax,ax)^{1/2}:\varphi\in\mathcal{M},\,x\in\mathfrak{A}_{0},\,\varphi(x,x)=1\}.\]

In view of Theorem 4.8, we adopt the notation \(\mathfrak{A}_{\mathrm{b}}^{\mathcal{M}}\) for the set of either \(\mathcal{M}\)-bounded or \(\mathcal{K}_{{}_{\mathcal{M}}}\)-bounded elements; i.e., we put \(\mathfrak{A}_{\mathrm{b}}^{\mathcal{M}}=\mathfrak{A}_{{}_{\mathrm{b}}}(\mathcal{K}_{{}_{\mathcal{M}}})\).

**Definition 4.10**.: Let \((\mathfrak{A},\mathfrak{A}_{0})\) be a quasi *-algebra and let \(\mathcal{M}\subset\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\). We say that a *-representation \(\pi\) of \(\mathfrak{A}\) is \(\mathcal{M}\)-_regular_ if, for every \(\xi\in\mathcal{D}_{\pi}\), the vector form \(\varphi_{\xi}\) defined by

\[\varphi_{\xi}(a,b):=\langle\pi(a)\xi|\pi(b)\xi\rangle,\quad a,b\in\mathfrak{A}, \tag{4.3}\]

is a form in \(\mathcal{M}\). In particular, if \(\mathcal{M}=\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\), \(\pi\) is said to be _regular_ [6]. We denote by \(\mathsf{Rep}^{r,\mathcal{M}}(\mathfrak{A},\mathfrak{A}_{0})\) the set of \(\mathcal{M}\)-regular *-representations of \((\mathfrak{A},\mathfrak{A}_{0})\). If \(\mathcal{M}=\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\) we denote it simply by \(\mathsf{Rep}^{r}(\mathfrak{A},\mathfrak{A}_{0})\).

**Remark 4.11**.: If \(\mathcal{M}\subset\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\) is balanced, then for every \(\varphi\in\mathcal{M}\) the *-representation \(\pi_{\varphi}^{\circ}\) is \(\mathcal{M}\)-regular; see [6, Proposition 2.4.16].

**Proposition 4.12**.: _Let \((\mathfrak{A},\mathfrak{A}_{0})\) be a quasi *-algebra with unit \(\mathsf{e}\) and let \(a\in\mathfrak{A}\)._

_If \(\pi(a)\geq 0\) for every *-representation \(\pi\) of \((\mathfrak{A},\mathfrak{A}_{0})\), then \(a\in\mathcal{K}_{{}_{\mathcal{I}}}\). Conversely, if \(\mathcal{M}\subset\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\) and \(a\in\mathcal{K}_{{}_{\mathcal{M}}}\), then \(\pi(a)\geq 0\) for every \(\mathcal{M}\)-regular *-representation \(\pi\) of \((\mathfrak{A},\mathfrak{A}_{0})\)._

Proof.: If \(\pi(a)\geq 0\) for every *-representation \(\pi\) of \((\mathfrak{A},\mathfrak{A}_{0})\), then for every \(\varphi\in\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\) and every \(x\in\mathfrak{A}_{0}\),

\[\varphi(ax,x)=\langle\pi_{\varphi}(a)\lambda_{\varphi}(x)|\lambda_{\varphi}(x)\rangle\geq 0.\]

Hence, \(a\in\mathcal{K}_{{}_{\mathcal{I}}}\). Conversely, let \(\pi\) be an \(\mathcal{M}\)-regular *-representation. Then, for every \(\xi\in\mathcal{D}_{\pi}\), the vector form \(\varphi_{\xi}(a,b)=\langle\pi(a)\xi|\pi(b)\xi\rangle\), with \(a,b\in\mathfrak{A}\), belongs to \(\mathcal{M}\). Thus, from \(a\in\mathcal{K}_{{}_{\mathcal{M}}}\), it follows that \(\varphi_{\xi}(a,\mathsf{e})=\langle\pi(a)\xi|\xi\rangle\geq 0\), for every \(\xi\in\mathcal{D}_{\pi}\). Hence \(\pi(a)\geq 0\).
**Remark 4.13**.: The first implication is also true if we consider only the GNS representations constructed from the forms \(\varphi\in\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\); if \(\varphi\in\mathcal{M}\subset\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\) and \(\pi_{\varphi}(a)\geq 0\) for every such \(\varphi\), then \(a\in\mathcal{K}_{{}_{\mathcal{M}}}\).

**Proposition 4.14**.: _Let \((\mathfrak{A},\mathfrak{A}_{0})\) be a quasi *-algebra with unit \(\mathsf{e}\) and with sufficient \(\mathcal{M}\subset\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\). Then,_

1. _if_ \(a\in\mathfrak{A}_{\mathrm{b}}^{\mathcal{M}}\)_, then_ \(\pi(a)\) _is a bounded operator for every_ \(\pi\in\mathsf{Rep}^{r,\mathcal{M}}(\mathfrak{A},\mathfrak{A}_{0})\) _and_ \(\|\pi(a)\|\leq\|a\|_{\mathrm{b}}^{\mathcal{M}}\)_;_
2. _if_ \(\pi(a)\) _is a bounded operator for every_ \(\pi\in\mathsf{Rep}^{r,\mathcal{M}}(\mathfrak{A},\mathfrak{A}_{0})\) _and_ \(\sup\{\|\pi(a)\|;\pi\in\mathsf{Rep}^{r,\mathcal{M}}(\mathfrak{A},\mathfrak{A}_{0})\}<\infty\)_, then_ \(a\in\mathfrak{A}_{\mathrm{b}}^{\mathcal{M}}\)_._

Proof.: \((i)\) By Theorem 4.8, \(a\in\mathfrak{A}_{\mathrm{b}}^{\mathcal{M}}\) implies \(\varphi(ax,ax)^{1/2}\leq\|a\|_{\mathrm{b}}^{\mathcal{M}}\varphi(x,x)^{1/2}\), \(\forall\varphi\in\mathcal{M},\,x\in\mathfrak{A}_{0}\). If \(\pi\) is \(\mathcal{M}\)-regular, for every \(\xi\in\mathcal{D}_{\pi}\) we have \(\varphi_{\xi}\in\mathcal{M}\), where \(\varphi_{\xi}(a,b)=\langle\pi(a)\xi|\pi(b)\xi\rangle\). Then,

\[\|\pi(ax)\xi\|=\varphi_{\xi}(ax,ax)^{1/2}\leq\|a\|_{\mathrm{b}}^{\mathcal{M}}\varphi_{\xi}(x,x)^{1/2}=\|a\|_{\mathrm{b}}^{\mathcal{M}}\|\pi(x)\xi\|.\]

The quasi *-algebra is assumed to be unital, with \(\pi(\mathsf{e})=\mathbb{I}_{\mathcal{D}_{\pi}}\). Then

\[\|\pi(a)\xi\|\leq\|a\|_{\mathrm{b}}^{\mathcal{M}}\|\xi\|,\quad\forall\xi\in\mathcal{D}_{\pi}.\]

Hence, \(\|\pi(a)\|\leq\|a\|_{\mathrm{b}}^{\mathcal{M}}\).

\((ii)\) Put \(\gamma:=\sup\{\|\pi(a)\|;\pi\in\mathsf{Rep}^{r,\mathcal{M}}(\mathfrak{A},\mathfrak{A}_{0})\}\). By hypothesis,

\[\|\pi_{\varphi}^{\circ}(a)\lambda_{\varphi}(x)\|\leq\gamma\|\lambda_{\varphi}(x)\|,\quad\forall\varphi\in\mathcal{M},x\in\mathfrak{A}_{0},\]

i.e.,

\[\varphi(ax,ax)\leq\gamma^{2}\varphi(x,x),\quad\forall\varphi\in\mathcal{M},x\in\mathfrak{A}_{0},\]

and by Theorem 4.8 this is equivalent to saying that \(a\in\mathfrak{A}_{\mathrm{b}}^{\mathcal{M}}\).

**Remark 4.15**.: Let \(\varphi\in\mathcal{M}\) and denote, as before, by \(\pi_{\varphi}\) the corresponding GNS representation. If \(a\in\mathfrak{A}_{\mathrm{b}}^{\mathcal{M}}\), then \(\pi_{\varphi}(a)\) is a bounded operator. Indeed, for every \(x\in\mathfrak{A}_{0}\),

\[\|\pi_{\varphi}(a)\lambda_{\varphi}(x)\|^{2}=\varphi(ax,ax)\leq(\|a\|_{\mathrm{b}}^{\mathcal{M}})^{2}\varphi(x,x)=(\|a\|_{\mathrm{b}}^{\mathcal{M}})^{2}\|\lambda_{\varphi}(x)\|^{2},\]

regardless of whether \(\pi_{\varphi}\) is \(\mathcal{M}\)-regular or not.

The following additional condition will be used:

* **(C)** if \(a,b\in\mathfrak{A}_{\mathrm{b}}^{\mathcal{M}}\), then there exists a unique \(c\in\mathfrak{A}\) such that \(\pi_{\varphi}(a)\Box\pi_{\varphi}(b)=\pi_{\varphi}(c)\), for every \(\varphi\in\mathcal{M}\).

**Theorem 4.16**.: _Let \((\mathfrak{A},\mathfrak{A}_{0})\) be a quasi *-algebra. Let \(\mathcal{M}\subset\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\) be sufficient and assume that condition (C) holds._
_Then, \(\mathfrak{A}_{\mathrm{b}}^{\mathcal{M}}\) is a normed *-algebra with the weak multiplication \(\circ\) and the norm \(\|\cdot\|_{\mathrm{b}}^{\mathcal{M}}\)._

Proof.: As we have seen so far, \(\mathfrak{A}_{\mathrm{b}}^{\mathcal{M}}\) is a normed space such that if \(a\in\mathfrak{A}_{\mathrm{b}}^{\mathcal{M}}\), then \(a^{*}\) is \(\mathcal{M}\)-bounded with \(\|a^{*}\|_{\mathrm{b}}^{\mathcal{M}}=\|a\|_{\mathrm{b}}^{\mathcal{M}}\) and, whenever \(a\circ b\) is well-defined, the product \(a\circ b\) is \(\mathcal{M}\)-bounded and \(\|a\circ b\|_{\mathrm{b}}^{\mathcal{M}}\leq\|a\|_{\mathrm{b}}^{\mathcal{M}}\|b\|_{\mathrm{b}}^{\mathcal{M}}\). Now, if \(\varphi\in\mathcal{M}\) and \(a,b\in\mathfrak{A}_{\mathrm{b}}^{\mathcal{M}}\), the operator \(\pi_{\varphi}(a)\Box\pi_{\varphi}(b)\) is well-defined since, by Remark 4.15, \(\pi_{\varphi}(a)\) and \(\pi_{\varphi}(b)\) are bounded operators; of course, \(\pi_{\varphi}(a)\Box\pi_{\varphi}(b)\) is also bounded. Thus, by (C), there exists a unique \(c\in\mathfrak{A}_{\mathrm{b}}^{\mathcal{M}}\) such that \(\pi_{\varphi}(a)\Box\pi_{\varphi}(b)=\pi_{\varphi}(c)\), for every \(\varphi\in\mathcal{M}\). Hence, for all \(\varphi\in\mathcal{M}\) and \(x,y\in\mathfrak{A}_{0}\), we have

\[\varphi(bx,a^{*}y)=\langle\pi_{\varphi}(b)\lambda_{\varphi}(x)|\pi_{\varphi}(a^{*})\lambda_{\varphi}(y)\rangle=\langle\pi_{\varphi}(a)\Box\pi_{\varphi}(b)\lambda_{\varphi}(x)|\lambda_{\varphi}(y)\rangle=\langle\pi_{\varphi}(c)\lambda_{\varphi}(x)|\lambda_{\varphi}(y)\rangle=\varphi(cx,y).\]

Thus \(c=a\circ b\) is well-defined and is \(\mathcal{M}\)-bounded by Lemma 4.3. This completes the proof.

**Proposition 4.17**.: _Let \((\mathfrak{A},\mathfrak{A}_{0})\) be a quasi *-algebra with unit \(\mathsf{e}\). Let \(\mathcal{M}\subset\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\) be balanced and denote by \(R^{\mathcal{M}}(\mathfrak{A})\) the intersection of the kernels of all \(\mathcal{M}\)-regular *-representations of \(\mathfrak{A}\) on some Hilbert space. Then_

\[R^{\mathcal{M}}(\mathfrak{A})=\{a\in\mathfrak{A}\,|\,\varphi(a,a)=0,\,\forall\varphi\in\mathcal{M}\}.\]

Proof.: For every \(\varphi\in\mathcal{M}\), the GNS representation \(\pi_{\varphi}^{\circ}\) is \(\mathcal{M}\)-regular; then, if \(a\in R^{\mathcal{M}}(\mathfrak{A})\), we have \(\pi_{\varphi}^{\circ}(a)=0\), hence \(\varphi(a,a)=\|\pi_{\varphi}^{\circ}(a)\xi_{\varphi}\|^{2}=0\).

Conversely, if \(a\in\mathfrak{A}\) is such that \(\varphi(a,a)=0\), \(\forall\varphi\in\mathcal{M}\), then, since \(\mathcal{M}\) is balanced, \(\varphi^{x}(a,a)=0\) for all \(x\in\mathfrak{A}_{0}\) and all \(\varphi\in\mathcal{M}\); hence

\[\varphi^{x}(a,a)=\varphi(ax,ax)=\|\pi_{\varphi}^{\circ}(a)\lambda_{\varphi}(x)\|^{2}=0,\]

and by the density of \(\lambda_{\varphi}(\mathfrak{A}_{0})\) in \(\mathcal{H}_{\varphi}\) this implies that \(\pi_{\varphi}^{\circ}(a)=0\). Now, let \(\pi\) be an \(\mathcal{M}\)-regular *-representation of \(\mathfrak{A}\); then for every \(\xi\in\mathcal{D}_{\pi}\) the form \(\varphi_{\xi}\in\mathcal{M}\) and, by what we have seen before, \(\pi_{\varphi_{\xi}}^{\circ}(a)=0\). For every \(\xi\in\mathcal{D}_{\pi}\) there exists a cyclic vector \(\eta\) for the GNS representation \(\pi_{\varphi_{\xi}}^{\circ}\) such that

\[\|\pi(a)\xi\|^{2}=\|\pi_{\varphi_{\xi}}^{\circ}(a)\eta\|^{2}=0;\]

this implies that \(\pi(a)=0\).
By the arbitrariness of the \(\mathcal{M}\)-regular *-representation \(\pi\) of \(\mathfrak{A}\), it follows that \(a\in R^{\mathcal{M}}(\mathfrak{A})\). This concludes the proof.

**Remark 4.18**.: The set \(R^{\mathcal{M}}(\mathfrak{A})\) is clearly a sort of *-radical; however, its nature here is purely algebraic.

## 5. Topologies defined by families of sesquilinear forms

The properties we have discussed in the previous section are all of a purely algebraic nature; but families of sesquilinear forms of \(\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\) can be used to define, in a rather natural way, topologies on \((\mathfrak{A},\mathfrak{A}_{0})\). Our next goal is in fact to define on \(\mathfrak{A}\) topologies that mimic the uniform topologies of families of operators.

Throughout this section we will suppose that \((\mathfrak{A},\mathfrak{A}_{0})\) is a quasi *-algebra with unit \(\mathsf{e}\) and that \(\mathcal{M}\subset\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\) is a sufficient set of forms. Then \(\mathcal{M}\) defines the topologies \(\tau_{w}^{\mathcal{M}},\tau_{s}^{\mathcal{M}},\tau_{s^{*}}^{\mathcal{M}}\) generated, respectively, by the following families of seminorms:

\[\tau_{w}^{\mathcal{M}}:\ a\mapsto|\varphi(ax,y)|,\quad a\in\mathfrak{A},\ \varphi\in\mathcal{M},\ x,y\in\mathfrak{A}_{0};\]
\[\tau_{s}^{\mathcal{M}}:\ a\mapsto\varphi(a,a)^{1/2},\quad a\in\mathfrak{A},\ \varphi\in\mathcal{M};\]
\[\tau_{s^{*}}^{\mathcal{M}}:\ a\mapsto\max\big{\{}\varphi(a,a)^{1/2},\varphi(a^{*},a^{*})^{1/2}\big{\}},\quad a\in\mathfrak{A},\ \varphi\in\mathcal{M}.\]

**Definition 5.1**.: Let \(\mathcal{F}\) be a subset of \(\mathcal{M}\). We say that \(\mathcal{F}\) is _bounded_ if

\[\sup_{\varphi\in\mathcal{F}}\varphi(a,a)<\infty,\quad\forall a\in\mathfrak{A}.\]

The family \(\mathfrak{F}\) of bounded subsets of forms in \(\mathcal{M}\) has the following properties:

(a) \(\bigcap\limits_{n\in\mathbb{N}}\mathcal{F}_{n}\in\mathfrak{F},\quad\mathcal{F}_{n}\in\mathfrak{F};\)

(b) \(\mathcal{F}\cup\mathcal{G}\in\mathfrak{F},\quad\mathcal{F},\mathcal{G}\in\mathfrak{F}.\)

If \(\mathcal{F}\in\mathfrak{F}\), we put

\[p^{\mathcal{F}}(a):=\sup_{\varphi\in\mathcal{F}}\varphi(a,a)^{1/2},\quad a\in\mathfrak{A}.\]

**Lemma 5.2**.: _Let \(\mathcal{F}\in\mathfrak{F}\). Then,_

_(a) \(p^{\mathcal{F}}\) is a seminorm on \(\mathfrak{A}\);_

_(b) the set \(\mathcal{F}^{x}=\{\varphi^{x},\varphi\in\mathcal{F}\}\), \(x\in\mathfrak{A}_{0}\), is bounded;_

_(c) for every \(x\in\mathfrak{A}_{0}\),_
\[p^{\mathcal{F}}(ax)=p^{\mathcal{F}^{x}}(a),\quad\forall a\in\mathfrak{A}.\]

Proof.: As for (c), we have

\[p^{\mathcal{F}}(ax)=\sup_{\varphi\in\mathcal{F}}\varphi(ax,ax)^{1/2}=\sup_{\varphi\in\mathcal{F}}\varphi^{x}(a,a)^{1/2}=\sup_{\psi\in\mathcal{F}^{x}}\psi(a,a)^{1/2}=p^{\mathcal{F}^{x}}(a).\]

Since \(\mathcal{M}\) is sufficient, \(\{p^{\mathcal{F}};\mathcal{F}\in\mathfrak{F}\}\) is a separating family of seminorms; thus it defines on \(\mathfrak{A}\) a Hausdorff locally convex topology, which we denote by \(\tau^{\mathfrak{F}}\).
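An elementary comparison with the topologies introduced at the beginning of this section is worth recording here: every singleton \(\{\varphi\}\), \(\varphi\in\mathcal{M}\), is a bounded subset, and

\[p^{\{\varphi\}}(a)=\varphi(a,a)^{1/2},\quad a\in\mathfrak{A},\]

so each seminorm defining \(\tau_{s}^{\mathcal{M}}\) occurs among the \(p^{\mathcal{F}}\). Hence \(\tau_{s}^{\mathcal{M}}\preceq\tau^{\mathfrak{F}}\) and, likewise, \(\tau_{s^{*}}^{\mathcal{M}}\preceq\tau_{*}^{\mathfrak{F}}\) for the topology \(\tau_{*}^{\mathfrak{F}}\) introduced below; this is the comparison used in the proof of Proposition 5.6.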
Let us assume that \((\mathfrak{A},\mathfrak{A}_{0})\) has a unit \(\mathsf{e}\). If \(\mathcal{F}\in\mathfrak{F}\), we define

\[p_{\mathcal{F}}(a):=\sup_{\varphi\in\mathcal{F}}|\varphi(a,\mathsf{e})|,\quad a\in\mathfrak{A}.\]

Then

\[p_{\mathcal{F}}(a)\leq\gamma_{\mathcal{F}}p^{\mathcal{F}}(a),\quad\forall a\in\mathfrak{A},\]

with \(\gamma_{\mathcal{F}}=\sup_{\varphi\in\mathcal{F}}\varphi(\mathsf{e},\mathsf{e})^{1/2}\), and the following hold:

\[p_{\mathcal{F}}(a^{*})=p_{\mathcal{F}}(a),\quad\forall a\in\mathfrak{A};\]
\[p_{\mathcal{F}}(ax)\leq p^{\mathcal{F}}(x)p^{\mathcal{F}}(a^{*}),\quad\forall a\in\mathfrak{A},x\in\mathfrak{A}_{0};\]
\[p_{\mathcal{F}}(a^{*}\circ a)=p^{\mathcal{F}}(a)^{2},\;\forall a\in\mathfrak{A}\text{ such that }a^{*}\circ a\text{ is well-defined}\]

(a computation for the last equality is given below). By \(\tau_{\mathfrak{F}}\) we will denote the locally convex topology on \(\mathfrak{A}\) generated by the family of seminorms \(\{p_{\mathcal{F}};\mathcal{F}\in\mathfrak{F}\}\) (to simplify notation, we do not mention explicitly the dependence on \(\mathcal{M}\)). Note that \(\tau_{\mathfrak{F}}\) need not be Hausdorff, in general.

**Remark 5.3**.: We notice that if \(a^{*}\circ a\) is well-defined and \(a^{*}\circ a=0\), then \(p^{\mathcal{F}}(a)=0\) for every bounded set \(\mathcal{F}\in\mathfrak{F}\) and therefore \(a=0\).

**Proposition 5.4**.: _Let \(\mathcal{M}\) be sufficient and suppose that \(\tau_{\mathfrak{F}}=\tau^{\mathfrak{F}}\). Then \(\mathfrak{A}[\tau^{\mathfrak{F}}]\) is a locally convex space with the following properties:_

1. _the involution_ \(a\mapsto a^{*}\) _is continuous;_
2. _for every bounded set_ \(\mathcal{F}\in\mathfrak{F}\) _there exists a bounded set_ \(\mathcal{G}\in\mathfrak{F}\) _such that_
\[p_{\mathcal{F}}(ax)\leq p^{\mathcal{G}}(a)p^{\mathcal{G}}(x),\quad\forall a\in\mathfrak{A},x\in\mathfrak{A}_{0},\]
_which implies that the left- and right multiplications are jointly continuous._

_In particular, if \(\mathfrak{A}_{0}\) is \(\tau^{\mathfrak{F}}\)-dense in \(\mathfrak{A}\), then \((\mathfrak{A}[\tau^{\mathfrak{F}}],\mathfrak{A}_{0})\) is a locally convex quasi *-algebra._

In general, the involution \(a\mapsto a^{*}\) is not continuous for \(\tau^{\mathfrak{F}}\). To circumvent this problem, we define the topology \(\tau_{*}^{\mathfrak{F}}\) generated by the family of seminorms

\[p_{*}^{\mathcal{F}}(a)=\max\left\{p^{\mathcal{F}}(a),p^{\mathcal{F}}(a^{*})\right\},\quad a\in\mathfrak{A},\,\mathcal{F}\in\mathfrak{F}.\]

Clearly \(\tau_{\mathfrak{F}}\preceq\tau^{\mathfrak{F}}\preceq\tau_{*}^{\mathfrak{F}}\) and, if \(\tau_{\mathfrak{F}}=\tau^{\mathfrak{F}}\), then \(\tau_{\mathfrak{F}}=\tau^{\mathfrak{F}}=\tau_{*}^{\mathfrak{F}}\).

Let \((\mathfrak{A},\mathfrak{A}_{0})\) be a quasi *-algebra and suppose that the set \(\mathcal{M}\) is sufficient. It is clear that every \(\varphi\in\mathcal{M}\) is automatically continuous for \(\tau_{s}^{\mathcal{M}}\) and for any finer topology, such as \(\tau_{s^{\star}}^{\mathcal{M}}\), \(\tau_{*}^{\mathfrak{F}}\) or \(\tau^{\mathfrak{F}}\). Our next goal is to investigate the properties of \((\mathfrak{A},\mathfrak{A}_{0})\) when \(\mathfrak{A}\) is endowed with one of the topologies defined by the family \(\mathcal{M}\) above. We could wonder whether \((\mathfrak{A}[\tau_{*}^{\mathfrak{F}}],\mathfrak{A}_{0})\) is a locally convex quasi *-algebra.
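The computation announced above is immediate from the definition (2.2) of the weak multiplication: if \(a^{*}\circ a\) is well-defined, then \(\varphi(ax,ay)=\varphi((a^{*}\circ a)x,y)\) for all \(x,y\in\mathfrak{A}_{0}\) and \(\varphi\in\mathcal{M}\); taking \(x=y=\mathsf{e}\) gives \(\varphi(a^{*}\circ a,\mathsf{e})=\varphi(a,a)\geq 0\), whence

\[p_{\mathcal{F}}(a^{*}\circ a)=\sup_{\varphi\in\mathcal{F}}\varphi(a,a)=p^{\mathcal{F}}(a)^{2}.\]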
The left- and right multiplications by an element \(x\in\mathfrak{A}_{0}\) are continuous if we make an additional assumption: let us suppose that for every \(x\in\mathfrak{A}_{0}\) there exists \(\gamma_{x}>0\) such that

\[\varphi(xa,xa)\leq\gamma_{x}\varphi(a,a),\quad\forall\varphi\in\mathcal{M},\forall a\in\mathfrak{A}. \tag{5.1}\]

By (5.1) it follows that every \(x\in\mathfrak{A}_{0}\) is \(\mathcal{M}\)-bounded and, for every bounded subset \(\mathcal{F}\subset\mathcal{M}\),

\[p^{\mathcal{F}}(xa)\leq\gamma_{x}p^{\mathcal{F}}(a),\quad\forall a\in\mathfrak{A}.\]

This inequality, together with (c) of Lemma 5.2 and the continuity of the involution, implies that, for every \(x\in\mathfrak{A}_{0}\), the maps \(a\mapsto ax\) and \(a\mapsto xa\) are \(\tau_{*}^{\mathfrak{F}}\)-continuous.

The *-algebra \(\mathfrak{A}_{0}\) is not \(\tau_{*}^{\mathfrak{F}}\)-dense in \(\mathfrak{A}\) in general; hence, in order to get a locally convex quasi *-algebra with topology \(\tau_{*}^{\mathfrak{F}}\), we could take as \(\mathfrak{A}\) the completion \(\widetilde{\mathfrak{A}_{0}}^{\tau_{*}^{\mathfrak{F}}}\). Now we prove the following

**Lemma 5.5**.: _Let \((\mathfrak{A},\mathfrak{A}_{0})\) be a quasi *-algebra. Assume that \(\mathcal{M}\) is sufficient and directed upward w.r.t. the order_

\[\varphi\leq\psi\ \Leftrightarrow\ \varphi(a,a)\leq\psi(a,a),\quad\forall a\in\mathfrak{A}.\]

_Then \(\mathfrak{A}_{0}\) is dense in \(\mathfrak{A}[\tau_{s}^{\mathcal{M}}]\)._

Proof.: We begin by proving that, given \(a\in\mathfrak{A}\), we can find a net \(\{x_{\alpha}\}\subset\mathfrak{A}_{0}\) such that \(\varphi(x_{\alpha}-a,x_{\alpha}-a)\to 0\) for every \(\varphi\in\mathcal{M}\). Again we put \(\varphi[a]:=\varphi(a,a)\), \(\varphi\in\mathcal{M}\), \(a\in\mathfrak{A}\). Since \(\varphi\in\mathcal{M}\), \(\lambda_{\varphi}(\mathfrak{A}_{0})\) is dense in \(\mathcal{H}_{\varphi}\) (with \(\mathcal{H}_{\varphi}\) defined as in Proposition 2.4). This implies that, for every \(a\in\mathfrak{A}\), there exists a sequence \(\{x_{n}^{\varphi}\}\subset\mathfrak{A}_{0}\) such that \(\lambda_{\varphi}(x_{n}^{\varphi}-a)\to 0\) or, equivalently, \(\varphi[x_{n}^{\varphi}-a]\to 0\). Then

\[\forall n\in\mathbb{N},\quad\exists n_{\varphi}\in\mathbb{N}:\ \varphi[x_{n_{\varphi}}^{\varphi}-a]<\frac{1}{n}.\]

If \(\varphi,\psi\in\mathcal{M}\), we define \((\varphi,n_{\varphi})\leq(\psi,n_{\psi})\) if \(\varphi\leq\psi\) and \(n_{\varphi}\leq n_{\psi}\). Since \(\mathcal{M}\) is directed, \(\{(\varphi,n_{\varphi})\}\) is directed and \(\{x_{(\varphi,n_{\varphi})}\}\) is a net, with \(x_{(\varphi,n_{\varphi})}:=x_{n_{\varphi}}^{\varphi}\). We prove that, for every \(\psi\in\mathcal{M}\), \(\psi[x_{n_{\varphi}}^{\varphi}-a]\to 0\). Indeed, let \(\epsilon>0\) and \(n\in\mathbb{N}\) be such that \(\frac{1}{n}<\epsilon\). Then, if \((\varphi,n_{\varphi})\geq(\psi,n_{\psi})\),

\[\psi[x_{n_{\varphi}}^{\varphi}-a]\leq\varphi[x_{n_{\varphi}}^{\varphi}-a]<\frac{1}{n}<\epsilon.\]

This proves that \(\mathfrak{A}_{0}\) is dense in \(\mathfrak{A}[\tau_{s}^{\mathcal{M}}]\).

The representation \(\pi_{\varphi}^{\circ}\) is \(\tau^{\mathfrak{F}}\)-continuous. Indeed, if \(\mathcal{F}\) is any bounded subset of \(\mathcal{M}\) containing \(\varphi\),

\[\|\pi_{\varphi}^{\circ}(a)\lambda_{\varphi}(x)\|=\varphi(ax,ax)^{1/2}\leq p^{\mathcal{F}}(ax)\leq p^{\mathcal{F}^{x}}(a),\quad\forall a\in\mathfrak{A};\,x\in\mathfrak{A}_{0},\]

as in Lemma 5.2.
**Proposition 5.6**.: _Let \((\mathfrak{A},\mathfrak{A}_{0})\) be a quasi *-algebra with sufficient \(\mathcal{M}\subset\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\). If \(\mathfrak{A}\) is \(\tau_{s^{*}}^{\mathcal{M}}\)-complete, then \(\mathfrak{A}\) is also \(\tau_{*}^{\mathfrak{F}}\)-complete._

Proof.: Let \(\{a_{\alpha}\}\) be a \(\tau_{*}^{\mathfrak{F}}\)-Cauchy net. Since \(\tau_{s^{*}}^{\mathcal{M}}\preceq\tau_{*}^{\mathfrak{F}}\), the net is also \(\tau_{s^{*}}^{\mathcal{M}}\)-Cauchy; hence, by completeness, there exists \(a\in\mathfrak{A}\) such that \(a=\tau_{s^{*}}^{\mathcal{M}}\)-\(\lim_{\alpha}a_{\alpha}\). From the Cauchy condition, for every \(\epsilon>0\) and every bounded set \(\mathcal{F}\) there exists \(\overline{\alpha}\) such that

\[\max\{\varphi(a_{\alpha}-a_{\alpha^{\prime}},a_{\alpha}-a_{\alpha^{\prime}}),\varphi(a_{\alpha}^{*}-a_{\alpha^{\prime}}^{*},a_{\alpha}^{*}-a_{\alpha^{\prime}}^{*})\}<\epsilon,\;\forall\varphi\in\mathcal{F},\ \alpha,\alpha^{\prime}>\overline{\alpha}.\]

Then, taking the limit over \(\alpha^{\prime}\),

\[\max\{\varphi(a_{\alpha}-a,a_{\alpha}-a),\varphi(a_{\alpha}^{*}-a^{*},a_{\alpha}^{*}-a^{*})\}\leq\epsilon,\;\forall\varphi\in\mathcal{F},\ \alpha>\overline{\alpha}.\]

Therefore, \(\mathfrak{A}\) is \(\tau_{*}^{\mathfrak{F}}\)-complete.

**Theorem 5.7**.: _Let \(\mathcal{M}\) be sufficient and let property \((C)\) hold too. If \(\mathfrak{A}\) is \(\tau_{s^{*}}^{\mathcal{M}}\)-complete, then \(\mathfrak{A}_{\mathrm{b}}^{\mathcal{M}}\) is a C*-algebra with the weak multiplication \(\circ\) and the norm \(\|\cdot\|_{\mathrm{b}}^{\mathcal{M}}\)._

Proof.: By Theorem 4.16 and Remark 4.5, we only need to prove the completeness of \(\mathfrak{A}_{\mathrm{b}}^{\mathcal{M}}\). Let \(\{a_{n}\}\subset\mathfrak{A}_{\mathrm{b}}^{\mathcal{M}}\) be a Cauchy sequence with respect to the norm \(\|\cdot\|_{\mathrm{b}}^{\mathcal{M}}\). Then \(\{a_{n}^{*}\}\) is \(\|\cdot\|_{\mathrm{b}}^{\mathcal{M}}\)-Cauchy too. By (4.2),

\[\varphi((a_{n}-a_{m})x,(a_{n}-a_{m})x)\leq(\|a_{n}-a_{m}\|_{\mathrm{b}}^{\mathcal{M}})^{2}\varphi(x,x),\,\forall\varphi\in\mathcal{M},\forall x\in\mathfrak{A}_{0},\]

and

\[\varphi((a_{n}^{*}-a_{m}^{*})x,(a_{n}^{*}-a_{m}^{*})x)\leq(\|a_{n}^{*}-a_{m}^{*}\|_{\mathrm{b}}^{\mathcal{M}})^{2}\varphi(x,x),\,\forall\varphi\in\mathcal{M},\forall x\in\mathfrak{A}_{0}.\]

Therefore both \(\varphi((a_{n}-a_{m})x,(a_{n}-a_{m})x)\to 0\) and \(\varphi((a_{n}^{*}-a_{m}^{*})x,(a_{n}^{*}-a_{m}^{*})x)\to 0\), as \(n,m\to\infty\). This is in particular true when \(x=\mathsf{e}\); hence \(\{a_{n}\}\) is also Cauchy with respect to \(\tau_{s^{\star}}^{\mathcal{M}}\) and, since \(\mathfrak{A}\) is \(\tau_{s^{\star}}^{\mathcal{M}}\)-complete, there exists \(a\in\mathfrak{A}\) such that \(a_{n}\stackrel{{\tau_{s^{\star}}^{\mathcal{M}}}}{{\to}}a\). The limit \(a\) belongs to \(\mathfrak{A}_{\mathrm{b}}^{\mathcal{M}}\); indeed, for every \(\varphi\in\mathcal{M}\) and \(x\in\mathfrak{A}_{0}\):

\[|\varphi(ax,x)|^{2}\leq\varphi(ax,ax)\varphi(x,x)=\varphi(x,x)\limsup_{n\to\infty}\varphi(a_{n}x,a_{n}x)\leq\sup_{n\in\mathbb{N}}(\|a_{n}\|_{\mathrm{b}}^{\mathcal{M}})^{2}\,\varphi(x,x)^{2}.\]

Since \(\{a_{n}\}\) is Cauchy w.r.t. the norm \(\|\cdot\|_{\mathrm{b}}^{\mathcal{M}}\), for every \(\epsilon>0\) there exists \(n_{\epsilon}\in\mathbb{N}\) such that \(\|a_{n}-a_{m}\|_{\mathrm{b}}^{\mathcal{M}}<\epsilon^{1/2}\) for all \(n,m>n_{\epsilon}\). This implies that \(\varphi((a_{n}-a_{m})x,(a_{n}-a_{m})x)<\epsilon\varphi(x,x)\), \(\forall\varphi\in\mathcal{M},\forall x\in\mathfrak{A}_{0}\), for all \(n,m>n_{\epsilon}\).
Then, if we fix \(n>n_{\epsilon}\) and let \(m\to\infty\), we obtain \(\varphi((a_{n}-a)x,(a_{n}-a)x)\leq\epsilon\varphi(x,x),\forall\varphi\in\mathcal{M},\forall x\in\mathfrak{A}_{0}\). This implies that \(\mathfrak{A}_{\mathrm{b}}^{\mathcal{M}}\) is complete w.r.t. the norm \(\|\cdot\|_{\mathrm{b}}^{\mathcal{M}}\).

To conclude, let us suppose that \(\mathcal{M}\subset\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\) is balanced. We pose the question: _under what conditions is \(\mathcal{M}\) also sufficient?_ Let us consider a locally convex quasi *-algebra \((\mathfrak{A}[\tau],\mathfrak{A}_{0})\) and choose \(\mathcal{M}=\mathcal{P}_{\mathfrak{A}_{0}}^{\tau}(\mathfrak{A})\). This set is certainly balanced, but it is not necessarily sufficient. This property can be characterized (by negation) by the following

**Proposition 5.8**.: _Let \((\mathfrak{A}[\tau],\mathfrak{A}_{0})\) be a locally convex quasi *-algebra with unit \(\mathsf{e}\). For an element \(a\in\mathfrak{A}\) the following statements are equivalent:_

1. \(a\in\operatorname{Ker}\pi\) _for every strongly-continuous (i.e.,_ \(\mathfrak{t}_{s}\)_-continuous) qu*-representation_ \(\pi\) _of_ \((\mathfrak{A}[\tau],\mathfrak{A}_{0})\)_;_
2. \(\varphi(a,a)=0\)_, for every_ \(\varphi\in\mathcal{P}_{\mathfrak{A}_{0}}^{\tau}(\mathfrak{A})\)_;_
3. \(p^{\mathcal{F}}(a)=0\)_, for every bounded subset_ \(\mathcal{F}\) _of_ \(\mathcal{P}_{\mathfrak{A}_{0}}^{\tau}(\mathfrak{A})\)_._

## 6. Locally convex quasi GA*-algebras

The discussion of the previous sections suggests the following definition (which strengthens an analogous one for partial *-algebras [3, Definition 4.26]).

**Definition 6.1**.: Let \((\mathfrak{A},\mathfrak{A}_{0})\) be a quasi *-algebra. Let \(\mathcal{M}\) be a family of forms of \(\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\). We say that \(\mathcal{M}\) is _strongly well-behaved_ if

\((\mathsf{wb}_{1})\) \(\mathcal{M}\) is sufficient;

\((\mathsf{wb}_{2})\) every \(x\in\mathfrak{A}_{0}\) is \(\mathcal{M}\)-bounded;

\((\mathsf{wb}_{3})\) condition (C) holds;

\((\mathsf{wb}_{4})\) \(\mathfrak{A}\) is \(\tau_{*}^{\mathfrak{F}}\)-complete.

**Definition 6.2**.: Let \((\mathfrak{A}[\tau],\mathfrak{A}_{0})\) be a locally convex quasi *-algebra. We say that \((\mathfrak{A}[\tau],\mathfrak{A}_{0})\) is a _locally convex quasi GA*-algebra_ if there exists \(\mathcal{M}\subset\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\) which is strongly well-behaved and the topologies \(\tau\) and \(\tau_{*}^{\mathfrak{F}}\) are equivalent (in symbols, \(\tau\approx\tau_{*}^{\mathfrak{F}}\)).

**Example 6.3**.: Let us consider the quasi *-algebra \((\mathcal{L}^{\dagger}(\mathcal{D},\mathcal{H}),\mathcal{L}^{\dagger}(\mathcal{D})_{b})\) of Section 2. Assume that \(\mathcal{L}^{\dagger}(\mathcal{D},\mathcal{H})\) is endowed with the topology \(\mathfrak{t}_{*}^{u}\) and denote by \(\mathcal{L}^{\dagger}(\mathcal{D},\mathcal{H})_{u}\) the \(\mathfrak{t}_{*}^{u}\)-closure of \(\mathcal{L}^{\dagger}(\mathcal{D})_{b}\) in \(\mathcal{L}^{\dagger}(\mathcal{D},\mathcal{H})\); then \((\mathcal{L}^{\dagger}(\mathcal{D},\mathcal{H})_{u}[\mathfrak{t}_{*}^{u}],\mathcal{L}^{\dagger}(\mathcal{D})_{b})\) is a locally convex quasi *-algebra. Let us take as \(\mathcal{M}\) the space consisting of the restrictions to \(\mathcal{L}^{\dagger}(\mathcal{D},\mathcal{H})_{u}\) of the \(\mathfrak{t}_{*}^{u}\)-continuous ips-forms on \((\mathcal{L}^{\dagger}(\mathcal{D},\mathcal{H}),\mathcal{L}^{\dagger}(\mathcal{D})_{b})\).
We will see that \(\mathcal{M}\) is strongly well-behaved and \(\mathfrak{t}_{*}^{u}\approx\tau_{*}^{\mathfrak{F}}\): this makes \((\mathcal{L}^{\dagger}(\mathcal{D},\mathcal{H})_{u}[\mathfrak{t}_{*}^{u}],\mathcal{L}^{\dagger}(\mathcal{D})_{b})\) a locally convex quasi GA*-algebra. Due to the \(\mathfrak{t}_{*}^{u}\)-density of \(\mathcal{L}^{\dagger}(\mathcal{D})_{b}\) in \(\mathcal{L}^{\dagger}(\mathcal{D},\mathcal{H})\), we can identify \(\mathcal{M}\) with the space of all \(\mathfrak{t}_{*}^{u}\)-continuous ips-forms on \((\mathcal{L}^{\dagger}(\mathcal{D},\mathcal{H})_{u},\mathcal{L}^{\dagger}(\mathcal{D})_{b})\). This implies [3, Theorem 3.10] that every \(\psi\in\mathcal{M}\) can be written as

\[\psi(A,B)=\sum_{i=1}^{n}\langle A\xi_{i}|B\xi_{i}\rangle,\quad A,B\in\mathcal{L}^{\dagger}(\mathcal{D},\mathcal{H})_{u},\]

for some vectors \(\xi_{1},\dots,\xi_{n}\in\mathcal{D}\). Hence, the set of \(\mathcal{M}\)-bounded elements coincides with the set \(\mathcal{L}^{\dagger}(\mathcal{D},\mathcal{H})_{b}\) of all bounded operators of \(\mathcal{L}^{\dagger}(\mathcal{D},\mathcal{H})\), which can be identified with the C*-algebra \(\mathcal{B}(\mathcal{H})\) of all bounded operators in \(\mathcal{H}\). These facts allow us to conclude easily that \(\mathcal{M}\) is strongly well-behaved. In particular, we notice that \((\mathsf{wb}_{3})\) holds since, if \(A,B\in\mathcal{L}^{\dagger}(\mathcal{D},\mathcal{H})_{b}\), then the multiplication \(\circ\) (see (2.2)) is well-defined and coincides with the weak multiplication \(\Box\) of operators (see (2.1)): \(A\circ B=A\Box B\) is certainly well-defined; then, if \(\varphi\in\mathcal{M}\), we have \(\pi_{\varphi}(A)\Box\pi_{\varphi}(B)=\pi_{\varphi}(A\Box B)=\pi_{\varphi}(A\circ B)\), by definition of *-representation.

**Example 6.4**.: Let \(K\) denote a compact subset of the real line with \(m(K)>0\), where \(m\) denotes the Lebesgue measure. Then the pair \((L^{p}(K,m),C(K))\), where \(C(K)\) denotes the C*-algebra of continuous functions on \(K\), is a Banach quasi *-algebra. Let \(\mathcal{M}\) be the space of all jointly continuous ips-forms on \((L^{p}(K,m),C(K))\). Then, as shown in [6, Example 3.1.44], if \(p\geq 2\), \(\mathcal{M}\) can be completely described by functions of \(L^{s}(K,m)\), where \(s=\frac{p}{p-2}\) (with the convention \(\frac{1}{0}=\infty\)), in the following sense: \(\varphi\in\mathcal{M}\Leftrightarrow\exists\,w\in L^{s}(K,m)\), \(w\geq 0\), such that

\[\varphi(f,g)=\int_{K}f\overline{g}\,w\,dm,\quad\forall f,g\in L^{p}(K,m).\]

For this reason we identify \(\mathcal{M}\) with \(L^{s}(K,m)\). With this in mind,

1. a subset \(\mathcal{F}\) of \(\mathcal{M}\) is bounded if and only if it is contained in a ball centered at \(0\) in \(L^{s}(K,m)\);
2. the topology \(\tau^{\mathfrak{F}}\) (which equals \(\tau_{*}^{\mathfrak{F}}\), in this case) is normed and the norm coincides with \(\|\cdot\|_{p}\), since
\[\sup_{\|w\|_{s}=1}\int_{K}|f|^{2}w\,dm=\||f|^{2}\|_{p/2}=\|f\|_{p}^{2};\]
3. the topology \(\tau_{\mathfrak{F}}\) is also a norm topology and the norm coincides with \(\|\cdot\|_{p/2}\);
4. the set of \(\mathcal{M}\)-bounded elements is the C*-algebra \(L^{\infty}(K,m)\).

In conclusion, \((L^{p}(K,m),L^{\infty}(K,m))\) is a Banach quasi GA*-algebra.

**Example 6.5**.: The space \(L^{p}_{\rm loc}(\mathbb{R},m)\) of all (classes of) measurable functions on \(\mathbb{R}\) such that the restriction \(f_{|K}\) of \(f\) to \(K\) is in \(L^{p}(K,m)\), for every compact subset \(K\subset\mathbb{R}\), behaves similarly to the case discussed in Example 6.4.
The main difference consists, of course, in the fact that we will not deal with norm topologies. More precisely, let us consider the pair \((L^{p}_{\rm loc}(\mathbb{R},m),C_{b}(\mathbb{R}))\) (where \(C_{b}(\mathbb{R})\) denotes the continuous bounded functions on \(\mathbb{R}\)), which is, as it is easy to check, a quasi *-algebra. The natural topology \(\tau_{p}\) of \(L^{p}_{\rm loc}(\mathbb{R},m)\) is then defined as the inductive limit of the norm topologies of the spaces \(L^{p}(K)\), where \(K\) runs over the family of compact subsets of \(\mathbb{R}\). Let \(\mathcal{M}\) denote the space of all ips-forms on \((L^{p}_{\rm loc}(\mathbb{R},m),C_{b}(\mathbb{R}))\) whose restriction to \(L^{p}(K,m)\) is continuous for every compact subset \(K\subset\mathbb{R}\). Then, if \(p\geq 2\), one can easily prove that \(\mathcal{M}\) can be described by functions of \(L^{s}_{\rm loc}(\mathbb{R},m)\) where, as before, \(s=\frac{p}{p-2}\) (again, \(\frac{1}{0}=\infty\)). It is easily seen that \(\mathcal{M}\) is strongly well-behaved. In this case, the set of \(\mathcal{M}\)-bounded elements is the C*-algebra \(L^{\infty}(\mathbb{R},m)\). The pair \((L^{p}_{\rm loc}(\mathbb{R},m),L^{\infty}(\mathbb{R},m))\) is a locally convex quasi GA*-algebra.

The following theorem motivates, in our opinion, the attention devoted to locally convex quasi GA*-algebras.

**Theorem 6.6**.: _Let \((\mathfrak{A}[\tau],\mathfrak{A}_{0})\) be a locally convex quasi GA*-algebra with unit and a strongly well-behaved \(\mathcal{M}\subset\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\). Then:_

_(a) every \(\varphi\in\mathcal{M}\) is jointly \(\tau\)-continuous;_

_(b) every \(\mathcal{M}\)-regular *-representation of \((\mathfrak{A}[\tau],\mathfrak{A}_{0})\) is \((\tau,\mathfrak{t}_{s^{*}})\)-continuous;_

_(c) the set \(\mathfrak{A}_{\rm b}^{\mathcal{M}}\) of bounded elements is a C*-algebra with respect to the norm \(\|\cdot\|_{\rm b}^{\mathcal{M}}\)._

Proof.: (a): Each \(\varphi\) is \(\tau_{*}^{\mathfrak{F}}\)-continuous by the very construction of \(\tau_{*}^{\mathfrak{F}}\); the statement then follows from the assumption \(\tau\approx\tau_{*}^{\mathfrak{F}}\).

(b): This follows from (a). Indeed, if \(\pi\) is \(\mathcal{M}\)-regular, then for every \(\xi\in\mathcal{D}_{\pi}\) the sesquilinear form \(\varphi_{\xi}\) (see (4.3)) is in \(\mathcal{M}\); then it is \(\tau_{*}^{\mathfrak{F}}\)-continuous. Hence there exists \(\mathcal{F}\in\mathfrak{F}\) such that

\[|\langle\pi(a)\xi|\pi(b)\xi\rangle|\leq p_{*}^{\mathcal{F}}(a)p_{*}^{\mathcal{F}}(b),\]

hence

\[\|\pi(a)\xi\|\leq p_{*}^{\mathcal{F}}(a)\quad\text{and}\quad\|\pi(a^{*})\xi\|\leq p_{*}^{\mathcal{F}}(a^{*})=p_{*}^{\mathcal{F}}(a),\quad\forall a\in\mathfrak{A};\]

then for every \(\xi\in\mathcal{D}_{\pi}\) there exists \(\mathcal{F}\in\mathfrak{F}\) such that

\[p_{\xi}^{*}(\pi(a))=\max\{\|\pi(a)\xi\|,\|\pi(a)^{\dagger}\xi\|\}\leq p_{*}^{\mathcal{F}}(a).\]

(c): We have only to prove the completeness of the set \(\mathfrak{A}_{\rm b}^{\mathcal{M}}\) with respect to the norm \(\|\cdot\|_{\rm b}^{\mathcal{M}}\). Let \(\{a_{n}\}\subset\mathfrak{A}_{\rm b}^{\mathcal{M}}\) be a \(\|\cdot\|_{\rm b}^{\mathcal{M}}\)-Cauchy sequence; then for every \(\epsilon>0\) there exists \(n_{\epsilon}\in\mathbb{N}\) such that, for all \(n,m\geq n_{\epsilon}\), both \(\|a_{n}-a_{m}\|_{\rm b}^{\mathcal{M}}<\epsilon\) and \(\|a_{n}^{*}-a_{m}^{*}\|_{\rm b}^{\mathcal{M}}<\epsilon\).
Since \(\{a_{n}\}\subset\mathfrak{A}_{\mathrm{b}}^{\mathcal{M}}\), for every \(\varphi\in\mathcal{M}\) and every \(x\in\mathfrak{A}_{0}\) we have

\[\varphi((a_{n}-a_{m})x,(a_{n}-a_{m})x)\leq(\|a_{n}-a_{m}\|_{\mathrm{b}}^{\mathcal{M}})^{2}\varphi(x,x),\quad\forall n,m\in\mathbb{N},\]

and

\[\varphi((a_{n}^{*}-a_{m}^{*})x,(a_{n}^{*}-a_{m}^{*})x)\leq(\|a_{n}^{*}-a_{m}^{*}\|_{\mathrm{b}}^{\mathcal{M}})^{2}\varphi(x,x),\quad\forall n,m\in\mathbb{N};\]

hence, if \(\mathcal{F}\in\mathfrak{F}\):

\[\sup_{\varphi\in\mathcal{F}}\varphi((a_{n}-a_{m})x,(a_{n}-a_{m})x)^{1/2}\leq\|a_{n}-a_{m}\|_{\mathrm{b}}^{\mathcal{M}}\sup_{\varphi\in\mathcal{F}}\varphi(x,x)^{1/2},\quad\forall n,m\in\mathbb{N},\]

and

\[\sup_{\varphi\in\mathcal{F}}\varphi((a_{n}^{*}-a_{m}^{*})x,(a_{n}^{*}-a_{m}^{*})x)^{1/2}\leq\|a_{n}^{*}-a_{m}^{*}\|_{\mathrm{b}}^{\mathcal{M}}\sup_{\varphi\in\mathcal{F}}\varphi(x,x)^{1/2},\quad\forall n,m\in\mathbb{N}.\]

By the previous inequalities, for every \(\mathcal{F}\in\mathfrak{F}\) we get

\[p_{*}^{\mathcal{F}}(a_{n}-a_{m})=\max\left\{p^{\mathcal{F}}(a_{n}-a_{m}),p^{\mathcal{F}}((a_{n}-a_{m})^{*})\right\}<\epsilon\,p^{\mathcal{F}}(\mathsf{e}),\quad\forall n,m\geq n_{\epsilon}.\]

Then \(\{a_{n}\}\) is a \(\tau_{*}^{\mathfrak{F}}\)-Cauchy sequence. Since \(\mathfrak{A}\) is \(\tau_{*}^{\mathfrak{F}}\)-complete, there exists \(a\in\mathfrak{A}\) such that \(a_{n}\stackrel{{\tau_{*}^{\mathfrak{F}}}}{{\to}}a\). The limit \(a\) is \(\mathcal{M}\)-bounded; indeed, if \(\varphi\in\mathcal{M}\) and \(x\in\mathfrak{A}_{0}\), we have

\[\varphi(ax,ax)=\lim_{n\to\infty}\varphi(a_{n}x,a_{n}x)\leq\limsup_{n\to\infty}\left(\|a_{n}\|_{\mathrm{b}}^{\mathcal{M}}\right)^{2}\varphi(x,x).\]

The sequence \(\{\|a_{n}\|_{\mathrm{b}}^{\mathcal{M}}\}\) is Cauchy too, hence bounded; therefore \(a\) is \(\mathcal{M}\)-bounded. It remains to prove that \(\|a_{n}-a\|_{\mathrm{b}}^{\mathcal{M}}\to 0\) as \(n\to\infty\). For every \(\epsilon>0\), let \(n,m>n_{\epsilon}\); then \(\|a_{n}-a_{m}\|_{\mathrm{b}}^{\mathcal{M}}<\epsilon\). Letting \(m\to\infty\), we obtain

\[\|a_{n}-a\|_{\mathrm{b}}^{\mathcal{M}}\leq\liminf_{m\to\infty}\|a_{n}-a_{m}\|_{\mathrm{b}}^{\mathcal{M}}\leq\epsilon.\]

## Conclusion

In this paper we have constructed some topologies on a quasi *-algebra \((\mathfrak{A},\mathfrak{A}_{0})\) starting from a sufficiently rich family of sesquilinear forms that behave regularly. This study led us to introduce a new class of locally convex quasi *-algebras, which we have named GA* since their definition closely recalls that of A*-algebras. Several questions, however, remain open. We mention some of them.

(a) When does a (locally convex) quasi *-algebra \((\mathfrak{A},\mathfrak{A}_{0})\) possess a sufficient family \(\mathcal{M}\) of sesquilinear forms of \(\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\)?

(b) Under what conditions is a locally convex quasi *-algebra \((\mathfrak{A}[\tau],\mathfrak{A}_{0})\) a locally convex quasi GA*-algebra? We already know that there exist Banach quasi *-algebras \((\mathfrak{A}[\|\cdot\|],\mathfrak{A}_{0})\) for which the set of continuous elements of \(\mathcal{I}_{\mathfrak{A}_{0}}(\mathfrak{A})\) reduces to \(\{0\}\) [6, Example 3.1.29], whereas the sesquilinear forms of a strongly well-behaved family \(\mathcal{M}\) of ips-forms are automatically continuous in a locally convex quasi GA*-algebra. Hence, in general, the two notions do not coincide.
(c) Under which conditions is it possible to lighten the definition of a _strongly well-behaved_ family of ips-forms (Definition 6.1) by removing \((\mathsf{wb}_{3})\) and/or \((\mathsf{wb}_{4})\)?

We hope to discuss these problems in a future paper.

**Acknowledgements:** This work has been done within the activities of the Gruppo UMI Teoria dell'Approssimazione e Applicazioni and of the GNAMPA of the INdAM.
2301.09919
Opportunities and Challenges in Neural Dialog Tutoring
Designing dialog tutors has been challenging as it involves modeling the diverse and complex pedagogical strategies employed by human tutors. Although there have been significant recent advances in neural conversational systems using large language models (LLMs) and growth in available dialog corpora, dialog tutoring has largely remained unaffected by these advances. In this paper, we rigorously analyze various generative language models on two dialog tutoring datasets for language learning using automatic and human evaluations to understand the new opportunities brought by these advances as well as the challenges we must overcome to build models that would be usable in real educational settings. We find that although current approaches can model tutoring in constrained learning scenarios when the number of concepts to be taught and possible teacher strategies are small, they perform poorly in less constrained scenarios. Our human quality evaluation shows that both models and ground-truth annotations exhibit low performance in terms of equitable tutoring, which measures learning opportunities for students and how engaging the dialog is. To understand the behavior of our models in a real tutoring setting, we conduct a user study using expert annotators and find a significantly large number of model reasoning errors in 45% of conversations. Finally, we connect our findings to outline future work.
Jakub Macina, Nico Daheim, Lingzhi Wang, Tanmay Sinha, Manu Kapur, Iryna Gurevych, Mrinmaya Sachan
2023-01-24T11:00:17Z
http://arxiv.org/abs/2301.09919v2
# Opportunities and Challenges in Neural Dialog Tutoring

###### Abstract

Designing dialog tutors has been challenging as it involves modeling the diverse and complex pedagogical strategies employed by human tutors. Although there have been significant recent advances in neural conversational systems using large language models (LLMs) and growth in available dialog corpora, dialog tutoring has largely remained unaffected by these advances. In this paper, we rigorously analyze various generative language models on two dialog tutoring datasets for language learning using automatic and human evaluations to understand the new opportunities brought by these advances as well as the challenges we must overcome to build models that would be usable in real educational settings. We find that although current approaches can model tutoring in constrained learning scenarios when the number of concepts to be taught and possible teacher strategies are small, they perform poorly in less constrained scenarios. Our human quality evaluation shows that both models and ground-truth annotations exhibit low performance in terms of equitable tutoring, which measures learning opportunities for students and how engaging the dialog is. To understand the behavior of our models in a real tutoring setting, we conduct a user study using expert annotators and find a significantly large number of model reasoning errors in 45% of conversations. Finally, we connect our findings to outline future work.

## 1 Introduction

The goal of dialog tutoring research is to build systems that can tutor students using natural language conversation (Wollny et al., 2021). For several decades, learning scientists have been studying the features of domain-specific dialog tutoring systems that engender learning in students (Chi et al., 1994; Graesser et al., 1995; Moore et al., 2004; Litman et al., 2006; Graesser, 2016; Ruan et al., 2019) and have established strong learning gains that are even comparable to human tutoring in specific domains (Nye et al., 2014). However, these systems require extensive authoring of materials by teachers (MacLellan and Koedinger, 2020) and therefore cannot fully utilize the scalability of online learning.

Building dialog tutors is technically challenging, as tutoring dialogs typically exhibit properties that are absent in other forms of dialog. Tutoring dialogs are often _long_, enabling students to be exposed to the concepts in a way that they can use them in the future (Chi and Wylie, 2014), and _grounded_ in the learning scenarios (Graesser et al., 2009). Finally, good dialog tutors are engaging and create opportunities to learn, providing students space to seek and provide explanations and to self-reflect (Chi and Wylie, 2014; Reiser, 2004).

The growing success of deep neural network based language generators in other dialog settings (Adiwardana et al., 2020; Roller et al., 2021) suggests new possibilities in dialog tutoring that could scale beyond domain-specific approaches. However, despite their promise, advances in neural generative models have seen little adoption in dialog tutoring. In this paper, we contribute a comprehensive study of the applicability of neural generative models in tutoring. We formally introduce the dialog tutoring task and analyze existing tutoring datasets (§2).
Then, we describe several generative and retrieval-based models for dialog tutoring (§3) and benchmark them on two open-access dialog tutoring datasets for language learning: _CIMA_ (Stasaski et al., 2020), a crowdsourced role-played dataset for learning prepositional phrases in Italian, and the Teacher-Student Chatroom Corpus (_TSCC_; Caines et al., 2020), a one-to-one English tutoring dataset from an online chatroom (§5.1). We evaluate our models on various automatic metrics (§4.2) as well as two human evaluation studies: an evaluation of the quality of the generated response with respect to various measures of goodness (§6.1), as well as a more realistic user study with a learning interface (§6.2).

Overall, while we find that pretrained models improve over simpler baselines in terms of automatic metrics, our subsequent human evaluation reveals several shortcomings that ought to be addressed before these models can be adopted in the real world. We find that while neural generative models can model more constrained learning settings well, they struggle when the learning goal is more open-ended. Specifically, these models are unable to understand and reason about student solutions and misconceptions, and thus are unable to use effective pedagogical strategies. We find that the field of dialog tutoring is significantly limited by the quantity and quality of available datasets: the available datasets are both too small and not rich enough to capture the nuances of the dialog tutoring problem. Our analysis also reveals the inadequacy of automatic evaluation metrics for capturing tutoring quality. Not only are the existing metrics unable to capture faithfulness to the learning material and the student dialog history, but they also cannot capture moves of good human tutors that allow learners the space for reflection, explanation, follow-ups, and real engagement in the process of learning. Based on our findings, we end with an outline of potential avenues of future research (§7). We hope that our paper will bring attention to this under-explored natural language processing application with the potential for significant social good.

## 2 The Dialog Tutoring Task

Dialog tutoring can be described as a multi-turn interaction between two interlocutors, where one performs the role of a _teacher_ seeking to teach the other interlocutor, who acts in the role of a _student_. We can then describe a dialog tutoring session formally as a sequence of turns \(\mathcal{H}=(u_{1},\dots,u_{|\mathcal{H}|})\) that are taken by either of the interlocutors. Each turn \(u_{t}\in\mathcal{V}^{*}\) is a finite sequence of tokens from a vocabulary \(\mathcal{V}\). Further, each turn \(u_{t}\) can be associated with a set of dialog acts \(\mathbf{a}_{t}\subseteq\mathcal{A}\) that indicates the action taken by the interlocutor in the corresponding turn. The dialog act is a key aspect of dialog tutoring, as it can refer to the teaching strategy employed by the tutor. These may include strategies such as _providing a hint_ or _seeking a clarification_ (see Appendix A for more details). The set of dialog acts \(\mathcal{A}\) is usually fixed according to a predefined taxonomy and may be split into two subsets \(\mathcal{A}=\mathcal{A}_{\text{teacher}}\cup\mathcal{A}_{\text{student}}\), each corresponding to the teacher and student role.

Each dialog session \(\mathcal{H}\) may also be accompanied by some _grounding_ information \(K\), which grounds the response in relevant information and may refer to the teaching material that needs to be taught to the student. This information \(K\) may come in various formats, including images and videos. However, we restrict ourselves to using only text-based grounding in this work, such that \(K\in\mathcal{K}\subseteq\mathcal{V}^{*}\) is again a sequence of tokens from the common vocabulary \(\mathcal{V}\), and \(\mathcal{K}\) is used to describe the set of possible groundings (e.g., a textbook with a set of chapters). In Section 3 we derive different methods to model the role of the teacher, to which we restrict this work.

### Existing tutoring datasets

To our knowledge, only three conversational tutoring datasets are openly available: _CIMA_ (Stasaski et al., 2020) is a crowd-sourced dataset where annotators were asked to role-play students and teachers by working through an exercise on translating a prepositional phrase from English to Italian, given an image and a shared set of concepts. _TSCC_ (Caines et al., 2020) uses real teachers leading one-on-one tutoring sessions in English language learning, thus creating a more open-ended scenario. Finally, _TalkMoves_ (Suresh et al., 2022) is a collection of scraped classroom transcripts of K-12 mathematics lesson videos that contain challenging, multi-party interactions.

Figure 1: Examples of tutoring conversations from both datasets. The (image) grounding is shown in the second row and dialog acts in brackets indicate the pedagogical strategy.
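To make the formalization in Section 2 concrete, the sketch below shows one possible in-code representation of a tutoring session. This is an illustrative assumption, not the authors' implementation; in particular, the dialog act labels and the grounding string are hypothetical, loosely in the spirit of the CIMA example in Figure 1.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Turn:
    """One turn u_t, taken by either interlocutor."""
    speaker: str                # "teacher" or "student"
    tokens: list[str]           # finite token sequence over the vocabulary V
    dialog_acts: list[str] = field(default_factory=list)  # subset of A_teacher or A_student

@dataclass
class TutoringSession:
    """A dialog tutoring session H = (u_1, ..., u_|H|) with optional text grounding K."""
    turns: list[Turn]
    grounding: Optional[str] = None

    def history(self, t: int) -> list[Turn]:
        """The dialog history H_{<t} available when predicting turn t+1."""
        return self.turns[:t]

# A hypothetical CIMA-style session (all labels are illustrative):
session = TutoringSession(
    turns=[
        Turn("student", "How do I say the ball is inside the box ?".split(), ["question"]),
        Turn("teacher", "Remember that inside is dentro .".split(), ["hint"]),
    ],
    grounding="box is scatola. inside is dentro.",
)
```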
Each dialog session \(\mathcal{H}\) may also be accompanied by some _grounding_ information \(K\), which grounds the response in relevant information and may refer to the teaching material that needs to be taught to the student. This information \(K\) may come in various formats, including images and videos. However, we restrict ourselves to using only text-based grounding in this work, such that \(K\in\mathcal{K}\subseteq\mathcal{V}^{*}\) is again a sequence of tokens from the common vocabulary \(\mathcal{V}\), and \(\mathcal{K}\) is used to describe the set of possible groundings (e.g., a textbook with a set of chapters). In Section 3 we derive different methods for modeling the role of the teacher, to which we restrict this work.

### Existing tutoring datasets

To our knowledge, only three conversational tutoring datasets are openly available: _CIMA_ (Stasaski et al., 2020) is a crowdsourced dataset, where annotators were asked to role-play students and teachers by working through an exercise on translating a prepositional phrase from English to Italian, given an image and a shared set of concepts. _TSCC_ (Caines et al., 2020) uses real teachers leading one-on-one language tutoring sessions in English language learning, thus creating a more open-ended scenario. Finally, _TalkMoves_ (Suresh et al., 2022) is a collection of scraped classroom transcripts of K-12 mathematics lesson videos that contain challenging, multi-party interactions.

Figure 1: Examples of tutoring conversations from both datasets. The (image) grounding is shown in the second row and dialog acts in brackets indicate the pedagogical strategy.

The scarcity of tutoring datasets stands in contrast to other dialog scenarios, where plenty of datasets have been proposed and studied. For example, task-oriented dialog has been studied in domains like reservations (Wen et al., 2017; Budzianowski et al., 2018; Kim et al., 2020) or public service information (Feng et al., 2020). On the other hand, chit-chat or open-domain dialog has been studied on movies (Zhou et al., 2018), Wikipedia knowledge (Dinan et al., 2019), agent persona (Dinan et al., 2020), knowledge graphs (Moon et al., 2019), and open-ended settings (Komeili et al., 2022). Furthermore, we note the following limitations and characteristics of tutoring datasets, also in comparison to other dialog domains: 1) low pedagogical quality (CIMA), 2) limited teaching strategies (all), 3) exclusive focus on classroom settings (TalkMoves), 4) small dataset size (all), 5) significantly larger context sizes (TSCC), and 6) harder readability according to the Flesch score (TSCC). We provide more evidence in Table 1, which shows a comparison of dialog tutoring datasets with widely-used task-oriented and open-domain datasets.

### Related work on generative dialog models

Similarly, while the advent of large pretrained models has sparked ample research on generative models for dialog (Bao et al., 2021; Peng et al., 2021; Roller et al., 2021; Shuster et al., 2022; Cohen et al., 2022), this has not carried over to research on tutoring systems, where existing solutions are predominantly rule-based and do not generate open-ended responses. For example, the authors of CIMA define heuristics to select responses (Stasaski et al., 2020). Pretrained transformers in general have only very recently been studied in this setting, however only for dialog act classification (Suresh et al., 2022) and to study the pedagogical ability of existing large pretrained models (Tack and Piech, 2022).
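To make the notation of §2 concrete before turning to models, the session structure can be sketched as a small data container. This is a minimal sketch; all class and field names are our own illustration and do not come from any of the datasets' code.

```python
from dataclasses import dataclass, field

@dataclass
class Turn:                                          # Python 3.9+ for list[...] syntax
    speaker: str                                     # "teacher" or "student"
    text: str                                        # the utterance u_t in V*
    acts: list[str] = field(default_factory=list)    # dialog acts a_t, e.g. ["Hint"]

@dataclass
class TutoringSession:
    turns: list[Turn]        # the history H = (u_1, ..., u_|H|)
    grounding: str = ""      # text-based grounding K, e.g. concept triples

session = TutoringSession(
    turns=[Turn("teacher", "How would you say 'in front of'?", ["Question"]),
           Turn("student", "davanti a?")],
    grounding="in front of is davanti a",
)
```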
## 3 Dialog Tutoring Models

After introducing the dialog tutoring task, this section highlights the models we evaluate on the task. We note that our aim is an analysis of existing models. We explore turn-level models that can generate a teacher response \(\mathbf{y}\coloneqq u_{t+1}\) given a tutoring session \(\mathcal{H}=(u_{1},\ldots,u_{|\mathcal{H}|})\). During training, we obtain the dialog history by teacher forcing, i.e., we take the ground-truth dialog history. Furthermore, we do not model the problem of retrieving grounding information but rather assume it as given.

**Generative Model** In order to study whether generative models can capture a _given_ teaching strategy, we first derive a model that assumes the ground-truth dialog act sequence \(\mathbf{a}=\{\mathbf{a}_{1},\ldots,\mathbf{a}_{|\mathcal{H}|}\}\) to be given as an input. Then, given dialog history \(\mathcal{H}_{<t}=\{u_{1},\ldots,u_{t}\}\), grounding information \(K\) and \(\mathbf{a}_{t+1}\subseteq\mathcal{A}_{\text{teacher}}\), the set of dialog acts relevant at timestep \(t+1\), the teacher response \(\mathbf{y}\) is generated according to a locally-normalized language generation model. In the case that no grounding information \(K\) is given, the dependency on \(K\) may be dropped.

\[\mathbf{y}^{\star}=\underset{\mathbf{y}\in\mathcal{V}^{*}}{\operatorname{argmax}}\,p\left(\mathbf{y}\mid\mathbf{a}_{t+1},\mathcal{H}_{<t},K\right)=\underset{\mathbf{y}\in\mathcal{V}^{*}}{\operatorname{argmax}}\prod_{i=1}^{|\mathbf{y}|}p\left(y_{i}\mid\mathbf{y}_{<i},\mathbf{a}_{t+1},\mathcal{H}_{<t},K\right)\tag{1}\]

We separate the turns in the dialog by special \(\langle\text{teacher}\rangle\) and \(\langle\text{student}\rangle\) tags and prepend the dialog act as a special token, followed by a special \(\langle\text{knowledge}\rangle\) token and the grounding information \(K\), as the input to the encoder. In CIMA we encode the triples defining the grounding information in a simple natural language format, where we separate the English and Italian words for an object, color, and preposition as well as the whole phrase by the word "is", for example as "blue is blu" in Figure 1. Further, we add the grammar rules separated by a special token. We study different models to parametrize \(p\); they are described in Section 4.

Finally, we use the version of **CTRL** (Keskar et al., 2019) presented by Rashkin et al. (2021). The aim of the model is to improve the faithfulness of grounded response generation models, a significant problem in neural language generation (Roller et al., 2021) which holds high importance in the field of tutoring, where one trusts a teacher to present correct information. The model is augmented by a sequence of control tokens that are intended to steer the generations towards desirable properties. We use the _lexical overlap_ and _entailment_ tokens, which we obtain as follows. In training, the lexical overlap is measured on a token level between the ground-truth response and the grounding. Then, three equally sized buckets are created indicating low, medium, and high overlap, each marked by a control token. Entailment is determined by an MNLI model and again a corresponding token is added. At test time, we always use the tokens that encourage the desirable properties, in this case high lexical overlap and entailment. Finally, using a sequence of control tokens \(\mathbf{c}\), the model from Equation 1 becomes:

\[p\left(\mathbf{y}\mid\mathbf{a}_{t+1},\mathbf{c},\mathcal{H}_{<t},K\right) \tag{2}\]
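The input serialization and beam-search decoding described above are straightforward to wire up. Below is a minimal sketch using the HuggingFace transformers library; the special-token strings and the toy input are our own illustration, and an off-the-shelf, unfinetuned checkpoint will of course not produce tutoring-quality output.

```python
# pip install transformers torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

# Illustrative special tokens for the dialog act, grounding, and roles.
tok.add_special_tokens({"additional_special_tokens":
                        ["<hint>", "<knowledge>", "<teacher>", "<student>"]})
model.resize_token_embeddings(len(tok))

# Serialize (a_{t+1}, K, H_{<t}) into a single encoder input.
source = ("<hint> <knowledge> blue is blu . box is scatola "
          "<teacher> How would you say 'the blue box'? "
          "<student> la scatola blu?")
ids = tok(source, return_tensors="pt").input_ids
out = model.generate(ids, num_beams=10, max_new_tokens=40)  # beam size 10, as in Section 4
print(tok.decode(out[0], skip_special_tokens=True))
```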
**Joint Model** In order to study how well current neural models can decide on a reasonable teaching strategy and perform in real-world scenarios, we also implement a model that first decides the dialog act \(\mathbf{a}_{t+1}\in\mathcal{A}_{\text{teacher}}\) (instead of assuming the ground-truth dialog act) and then uses it to generate a response \(\mathbf{y}=u_{t+1}\). We use a simple model that again takes the grounding and dialog context as input but now generates the concatenation of dialog act and response in one utterance, akin to SOLOIST (Peng et al., 2021). Thus, for a given \(\tilde{\mathbf{y}}\coloneqq\mathbf{a}_{t+1}\circ\mathbf{y}\) with act sequence \(\mathbf{a}_{t+1}\) of length \(N\) and response \(\mathbf{y}\) of length \(T\), the model is

\[p\left(\tilde{\mathbf{y}}\mid K,\mathcal{H}_{<t}\right)=\prod_{i=1}^{T+N}p\left(\tilde{y}_{i}\mid\tilde{\mathbf{y}}_{<i},K,\mathcal{H}_{<t}\right) \tag{3}\]

In training, we use teacher forcing and prepend \(\mathbf{a}_{t+1}\) to \(\mathbf{y}\) to obtain the label sequence. At test time, the model performs a beam search over the dialog act sequence and response jointly.

**Retrieval-based Model** Since generative models are known to produce erroneous outputs that are factually incorrect and potentially inappropriate (Ji et al., 2022), we also experiment with a retrieval-based model that selects responses from the training corpus at test time. As opposed to previous work on the topic (e.g., Stasaski et al. (2020)), we do not employ a rule-based model but rather a learned retrieval model that does not require handcrafting elaborate and possibly brittle rules. Therefore, we use the **Bi-Encoder** architecture (Mazare et al., 2018; Dinan et al., 2019), where a dialog context encoder \(\text{enc}_{\mathcal{H};\theta}\) and a response encoder \(\text{enc}_{\mathbf{y};\theta}\) encode the context \(\mathcal{H}_{<t}\) and possible responses \(\mathbf{y}\) into fixed-size vectors of the same dimension \(n\). In our experiments, the weights \(\theta\) of both encoders are shared. The model is trained using contrastive learning. Suppose we are given a training pair \((\mathcal{H},\hat{\mathbf{y}})\) from a training dataset \(\mathcal{D}\) that we use for teacher forcing. We then train the model by sampling a negative response \(\bar{\mathbf{y}}\) from the set of responses in \(\mathcal{D}\) and using the triplet loss criterion, which for a metric function \(d:\mathbb{R}^{n}\times\mathbb{R}^{n}\rightarrow\mathbb{R}\) is defined as:

\[\mathcal{L}(\theta;\mathcal{H},\hat{\mathbf{y}},\bar{\mathbf{y}})=\left[m+d(\text{enc}_{\mathcal{H};\theta}(\mathcal{H}),\text{enc}_{\mathbf{y};\theta}(\hat{\mathbf{y}}))-d(\text{enc}_{\mathcal{H};\theta}(\mathcal{H}),\text{enc}_{\mathbf{y};\theta}(\bar{\mathbf{y}}))\right]_{+}, \tag{4}\]

where \(m\) is a margin hyperparameter and \(d\) is the Euclidean distance in our experiments. Further, we use stratified sampling on CIMA so as not to select negatives from the same preposition, color, or object, which might be false negatives. At test time, given a dialog context \(\mathcal{H}_{<t}\), we choose a response \(\mathbf{y}^{\star}\) from the training set \(\mathcal{D}\) by maximum inner product search using the decision rule

\[\mathbf{y}^{\star}=\operatorname*{argmax}_{\mathbf{y}\in\mathcal{D}}\;\text{enc}_{\mathcal{H};\theta}(\mathcal{H}_{<t})^{T}\,\text{enc}_{\mathbf{y};\theta}(\mathbf{y}). \tag{5}\]
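A minimal PyTorch sketch of this training objective and the retrieval rule in Equation 5 follows; the toy bag-of-words encoder stands in for the shared RoBERTa encoder, and all sizes and the random token batches are placeholders.

```python
import torch
import torch.nn as nn

VOCAB, DIM = 5000, 128

class BowEncoder(nn.Module):
    """Toy shared encoder (mean-pooled embeddings), standing in for RoBERTa."""
    def __init__(self):
        super().__init__()
        self.emb = nn.EmbeddingBag(VOCAB, DIM)   # default mode="mean"
    def forward(self, token_ids):                # (batch, seq_len) -> (batch, DIM)
        return self.emb(token_ids)

enc = BowEncoder()                               # weights shared by both encoders
loss_fn = nn.TripletMarginLoss(margin=1.0, p=2)  # hinged Euclidean loss of Eq. (4)
opt = torch.optim.Adam(enc.parameters(), lr=1e-3)

ctx = torch.randint(VOCAB, (8, 32))   # dialog contexts H_{<t}
pos = torch.randint(VOCAB, (8, 32))   # ground-truth responses
neg = torch.randint(VOCAB, (8, 32))   # stratified negative samples

opt.zero_grad()
loss_fn(enc(ctx), enc(pos), enc(neg)).backward()
opt.step()

# Inference (Eq. 5): maximum inner product search over a candidate pool.
with torch.no_grad():
    cands = enc(torch.randint(VOCAB, (100, 32)))
    best = (enc(ctx[:1]) @ cands.T).argmax(dim=-1)
```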
## 4 Experiments

We use the following models for parameterizing \(p\) in Equations 1-3: a **sequence-to-sequence** model (Sutskever et al., 2014) with a copy mechanism (Gu et al., 2016) trained from scratch, and a wide range of pretrained Transformers, namely **BART** (Lewis et al., 2020), **DialoGPT** (Zhang et al., 2020), **T5** (Raffel et al., 2020) and its multilingual version **mT5** (Xue et al., 2021).

\begin{table} \begin{tabular}{|l|c c c c c c c c|} \hline Dataset & Train samples & \#DA & Tgt. length & Src. length & \#prev. turns & corpus-div. & Flesch score & \(\text{F1}(\hat{\mathbf{y}},K)\) \\ \hline \hline CIMA & 2,715 & 5 & 14.71 & 9.70 & 4.55 & 0.149 & 84.64 & 0.196 \\ TSCC & 5,845 & 23 & 16.09 & 11.72 & 68.28 & 0.327 & 73.00 & - \\ \hline MultiWoZ 2.1 & 56,781 & 34\({}^{\star}\) & 19.86 & 14.49 & 7.86 & 0.069 & 90.90 & - \\ Schema-Guided Dialog & 164,982 & 10\({}^{\star}\) & 14.30 & 10.36 & 11.38 & 0.049 & 95.37 & - \\ DSTC9 & 19,184 & - & 21.61 & 11.65 & 11.70 & 0.050 & 81.85 & 0.47 \\ \hline Personachat & 127,162 & - & 12.26 & 11.65 & 6.51 & 0.162 & 91.80 & 0.10 \\ FaithDial & 18,357 & - & 21.72 & 17.33 & 4.54 & 0.274 & 83.28 & 0.47 \\ CMU\_DoG & 81,468 & - & 14.49 & 18.23 & 18.73 & 0.178 & 79.54 & 0.02 \\ \hline \end{tabular} \end{table} Table 1: Dialogue dataset statistics. Target length and source length are in avg. number of tokens (BART tokenizer). # prev. turns is the average for each teacher response. Corpus-div. is the n-gram entropy averaged over uni- to four-grams. Horizontal lines separate tutoring, task-oriented, and open-domain dialog. \(\star\)We only count system dialog acts.

BART and T5 are pretrained encoder-decoder models that were trained on denoising and text-to-text tasks, respectively. mT5 is based on T5 but is multilingual, which might help with the code-switched utterances in CIMA. Lastly, DialoGPT is an autoregressive language model based on GPT-2 (Radford et al., 2019) that was pretrained on a large dialog dataset obtained from Reddit. With this, we intend to study whether large-scale dialog-specific pretraining can aid in training educational tutors as well.

**Implementation Details** We implement our experiments using the Huggingface transformers library and finetune the checkpoints provided as part of it for all Transformer-based models. For these models, we use an initial learning rate of \(3.25\times 10^{-5}\), 500 warmup steps, and linear learning rate decay. We train the models using a batch size of 8 and evaluate on the validation sets after each epoch. In the end, we select the best model to be the one that has minimal loss on the validation set. The sequence-to-sequence baseline is trained from scratch using an initial learning rate of \(0.001\) for 25,000 steps using the Adam optimizer and a dropout rate of \(0.1\). We use beam search with a beam size of 10 to generate model responses.

### Dataset splits

Since there are no official dataset splits for CIMA and TSCC, we split both datasets randomly into training, validation, and test sets. We provide the exact split of the dataset in an accompanying code repository. For CIMA, we use all samples with fewer than three annotated tutor responses for training.
The other conversations are split randomly into equally-sized validation and test sets, which results in 2715/300/300 samples each. For TSCC, we split randomly along the conversations to obtain 82/10/11 training, validation, and test conversations, respectively.

### Evaluation metrics

To evaluate our models, we use the BLEU implementation provided by the sacrebleu package (sBLEU) (Post, 2018) to measure lexical overlap between the generated and the ground-truth response. Furthermore, we use BERT F1 (BERTScore) to measure their semantic similarity. Lastly, for CIMA we also calculate \(Q^{2}\) (Honovich et al., 2021), which measures the factual consistency of the response \(\mathbf{y}\) with the grounding information \(K\) by employing question-answering based matching. Both BERTScore and \(Q^{2}\) have shown strong correlation with human judgements on factual consistency (Honovich et al., 2022).

## 5 Results

In this section, we summarize our main findings in terms of automatic evaluation. First, we give an overview of the performance of the different models that we train on CIMA and TSCC in Section 5.1. Then, we assess their ability to stay faithful to teaching strategies (Section 5.2) and study how grounding annotations can influence the faithfulness of neural dialog tutors (Section 5.3), before studying their scaling behavior with dataset size and complexity (Section 5.4) and their generalization capabilities (Section 5.5). We then finish with an assessment of using education-specific data for pretraining (Section 5.6).

\begin{table} \begin{tabular}{l|c c c|c c} \hline \hline & \multicolumn{3}{c}{**CIMA**} & \multicolumn{2}{c}{**TSCC**} \\ Model & sBLEU / BLEU-1 (\(\uparrow\)) & BERT F1 (\(\uparrow\)) & \(Q^{2}\) (\(\uparrow\)) & sBLEU / BLEU-1 (\(\uparrow\)) & BERT F1 (\(\uparrow\)) \\ \hline Rule-based (Stasaski et al., 2020)\({}^{*}\) & 0.34/- & 0.45 & - & - & - \\ LSTM (Stasaski et al., 2020)\({}^{*}\) & 0.31/- & 0.53 & - & - & - \\ \hline Seq2seq & 2.89 / 28.0 & 0.676 & 0.372 & - & - \\ DialoGPT & 4.12 / 35.6 & 0.697 & 0.571 & 0.63 / 18.5 & 0.661 \\ Bi-Encoder (RoBERTa-base) & 5.89 / 23.9 & 0.690 & 0.344 & 1.367 / 8.8 & 0.638 \\ CTRL (BART-base) & 5.99 / 42.5 & 0.702 & 0.673 & - & - \\ t5-small & 7.36 / 34.0 & 0.672 & 0.676 & 2.72 / 12.1 & 0.646 \\ BART-large & 8.61 / 38.7 & 0.715 & 0.673 & 1.85 / 13.7 & 0.658 \\ BART-base & 9.58 / **42.5** & **0.726** & **0.680** & **2.67 / 18.6** & **0.670** \\ mt5-small & **11.26** / 41.0 & 0.700 & 0.624 & 1.80 / 14.9 & 0.653 \\ \hline BART-base\({}^{\dagger}\) & 5.61 / 41.03 & 0.707 & 0.642 & 1.90 / 15.4 & 0.659 \\ BART-large\({}^{\ddagger}\) & 5.65 / 42.67 & 0.694 & 0.607 & 1.74 / 15.1 & 0.660 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of models on CIMA and TSCC. We note that the strong sacrebleu differences are caused by the brevity penalty (all generative models generate sequences that are too short). \({}^{\dagger}\), \({}^{\ddagger}\): use the predicted dialog act label; all others use the ground-truth act. \({}^{*}\): numbers taken from Stasaski et al. (2020); these may not be comparable as there is no standard split of CIMA.

### Comparison of different models

Table 2 shows the key results from our experiments. First, all automatic metrics are _significantly higher_ on CIMA, which indicates that the models can fit CIMA much better than TSCC, with which current approaches still struggle. We further analyse this finding in Section 5.2 and show that this is because TSCC has _richer teaching strategies which are harder to model_.
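The lexical and semantic metrics used above come from standard packages; a minimal sketch on toy strings is below (\(Q^{2}\) requires a separate question-answering pipeline and is omitted here).

```python
# pip install sacrebleu bert-score
import sacrebleu
from bert_score import score

hyps = ["The box is in front of the chair."]
refs = ["The blue box is in front of the chair."]

bleu = sacrebleu.corpus_bleu(hyps, [refs])   # corpus-level sBLEU
P, R, F1 = score(hyps, refs, lang="en")      # BERT F1 (downloads a scoring model)
print(f"sBLEU={bleu.score:.2f}  BERT-F1={F1.mean().item():.3f}")
```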
Our comparison also suggests that finetuning large pretrained _Transformer models generally gives better results than the rule-based and LSTM models_ reported by Stasaski et al. (2020), as well as our implemented retrieval and sequence-to-sequence baselines. This illustrates the potential of LLMs for dialog tutoring. We also see a significant difference among different LLMs. Dialog-specific pretraining of DialoGPT does not help and gives worse results than BART and T5, primarily because the model tends to generate short and generic responses more often. Multilingual pretraining in mT5 improves over T5 only in some metrics, notably in BLEU and BERT F1 on CIMA but not in terms of \(Q^{2}\). Similarly, adding control tokens to BART does not improve \(Q^{2}\) or other automatic metrics. Surprisingly, using very large models actually degrades performance in our experiments. Finally, the last two rows show results obtained with our joint model, which does not use the ground-truth dialog act but predicts it together with the response sequence and still provides reasonable performance.

### How well can generative models capture teaching strategies?

We study this question first by evaluating the dialog act prediction accuracy of our joint model. We find that it is _significantly lower_ on TSCC (\(21.8\)) than on CIMA (\(71.2\)) for BART-base, which indicates significant room for improvement. Notably, the joint model tends to predict more frequently occurring dialog acts, which results in fewer follow-up questions and "Other"--the least frequent act in the data--never being predicted in CIMA. The distribution of dialog acts in the ground-truth annotations and in the predictions of a BART-base joint model is shown in Figure 2. Then, we evaluate how well different models can stick to a given ground-truth dialog act by predicting the dialog act of the _generated response_ with a BART-base model trained to predict the ground-truth dialog act sequence based on the ground-truth response. The results are shown in Table 3. Notably, _BART-base performs better than the ground-truth annotations_. The CTRL model, on the other hand, has worse performance since the control tokens do not respect tutoring principles (e.g., lexical overlap with the grounding discourages follow-up questions in favor of just giving hints).

### Does grounding in learning concepts help?

Prior work has shown that grounding responses in relevant data can improve their quality, especially in terms of faithfulness (Shuster et al., 2021). We intend to validate this for dialog tutoring by studying three models with different inputs on CIMA. The first model is not provided grounding information, whereas the second and third are grounded in learning concepts (cf. Equation 1), with one using only the (preposition, object, color) triples and the other making use of additional grammar rules. The results with these models are shown in Table 4 and suggest that _grounding responses in relevant knowledge helps the model to produce better and more faithful responses_.

\begin{table} \begin{tabular}{|l|l l l|} \hline Model & sBLEU (\(\uparrow\)) & BERT F1 (\(\uparrow\)) & \(Q^{2}\) (\(\uparrow\)) \\ \hline \hline BART-base & 6.69 / 38.6 & 0.718 & 0.571 \\ + triples & 9.20 / **45.3** & **0.730** & 0.642 \\ + grammar rules & **9.58** / 42.5 & 0.726 & **0.680** \\ \hline \end{tabular} \end{table} Table 4: Comparison of models with different inputs on CIMA. Triples are made up of preposition, object, and color translations. Grammar rules are a textual description of a learning concept.
\begin{table} \begin{tabular}{|l|l l l l l|} \hline & \multicolumn{5}{c|}{Method} \\ \hline & GT & BART\({}_{\text{base}}\) & BART\({}_{\text{large}}\) & CTRL & Retrieval \\ \hline DA F1 & 78.3 & **81.0** & 70.1 & 63.0 & 43.1 \\ \hline \end{tabular} \end{table} Table 3: F1 score of the dialog act classification based on the generated responses of our models.

Figure 2: Distribution of predicted and ground-truth dialog acts on CIMA.

### How do models scale with more data?

Due to the limited availability of high-quality pedagogical datasets and the time-consuming process of authoring new materials (MacLellan and Koedinger, 2020), it is important to understand how quickly generative models can generalize to new settings. Thus, we assess how well the models can learn tutoring in low-resource scenarios. We construct a study where we randomly sample subsets of the CIMA training set and test the performance of the various models. We can see from Figure 3(a) that with more training data, the faithfulness of responses appears to improve and is not saturated before we reach the full training set. This supports the intuition that _additional training data might improve the performance further_. Similarly, we study how well our model can deal with an increase in complexity with respect to learning concepts at similar training data sizes. Therefore, we construct different training datasets with 735 samples and a varying number of concepts each time. We begin by taking samples concerned with the concept "in front of the" and evaluate exclusively on it, gradually adding new concepts. Figure 3(b) suggests that \(Q^{2}\) drops sharply at four concepts. BLEU, on the other hand, increases, and this might be due to the metric encouraging generic utterances that, for example, repeat a grammar rule.

### Can models generalize to new concepts?

As students progress and gain new knowledge, it might be a desirable property of dialog tutoring models to be able to handle new concepts that suit this increase in prior knowledge. Hence, we study how well our CIMA model can generalize to new concepts that it has not seen in training, for example, a new preposition. For this analysis, we create a set-up where we first train the model on all of the training data and evaluate on the subset of samples for each preposition separately. We then compare this number to a model that is not trained on the corresponding concept it is evaluated on, creating a zero-shot set-up which we carry out for a grounded and an ungrounded response generation model. As measured by \(Q^{2}\) (cf. Table 5), this model can indeed _generalize to new concepts well, albeit with performance degradation_. Furthermore, _grounding information improves generalization_, as it defines the learning concept (in this case the preposition) and how it is used. Without this information, we observe that the model generates generic responses more often.

### Does education-specific pre-training help?

As educational data are widely available on the internet, we next study how education-specific pre-training affects results. In Table 6, we show results obtained by finetuning a BART-base model directly on CIMA versus first pretraining it on tutoring dialogs from TSCC or on non-tutoring dialogs from MultiWoZ 2.1 (Eric et al., 2020), Personachat (Zhang et al., 2018), CMU DoG (Zhou et al., 2018), DSTC9 (Kim et al., 2020) and Topicalchat (Gopalakrishnan et al., 2019).
In both cases, we only see _minor improvements_, which may be explained by the different dataset settings and the lack of a unified dialog act taxonomy.

\begin{table} \begin{tabular}{|l|c c c|} \hline Method & sBLEU & BERT F1 & \(Q^{2}\) \\ \hline BART-base & 6.69 / 38.6 & 0.718 & 0.571 \\ + Ed. data & **7.31 / 41.4** & **0.727** & 0.577 \\ + Non-Ed. data & 6.60 / 39.4 & 0.721 & **0.583** \\ \hline \end{tabular} \end{table} Table 6: Influence of pretraining on educational and non-educational data. Please note that no grounding information is used in this setting.

Figure 3: Performance of BART-base on CIMA as a function of: (a) training data size, uniformly sampled from the training data; (b) the number of concepts, where only the specific number of concepts is retained and all others are excluded.

\begin{table} \begin{tabular}{|l|l|c|c|c|} \hline Concept & \#Samples & full data & zero-shot & zero-shot \\ & & & & without grounding \\ \hline & train/test & \(Q^{2}\) & \(Q^{2}\) & \(Q^{2}\) \\ is behind the & 549/90 & 0.698 & 0.603 & 0.533 \\ is in front of the & 735/84 & 0.616 & 0.512 & 0.500 \\ is next to the & 547/51 & 0.497 & 0.539 & 0.483 \\ is on top of the & 224/30 & 0.683 & 0.578 & 0.567 \\ is under the & 270/24 & 0.854 & 0.646 & 0.625 \\ is inside of the & 390/21 & 0.579 & 0.643 & 0.190 \\ \hline all concepts & 2715 / 300 & 0.644 & 0.570 & 0.502 \\ \hline \end{tabular} \end{table} Table 5: Performance of a grounded BART-base model by learning concept. Full data uses the entire training data and zero-shot removes the concept of the row from the training data.

## 6 Human Evaluation

We further evaluate the previously assessed models with human judgments, first by obtaining quality estimates according to different criteria and second by conducting a simulation study, where expert annotators are asked to provide novel rewritings of existing conversations and to categorize errors made by the model.

### Quality of the generated responses

We perform a human quality evaluation of the generated responses for four models: retrieval (Bi-Encoder), BART-base, BART-base\({}_{\text{CTRL}}\), and the joint model (BART-base). A randomly chosen subset of the CIMA test set conversations was annotated by 4 annotators (with one annotator speaking C1-level Italian). All annotators labeled 60 examples in total, of which 20 overlapped. To further assess the quality of the training data for the models, we also annotated ground-truth responses on a small sample of 20 examples. We evaluate the following criteria on a 3-point Likert scale (disagree to completely agree) and outline our findings in the following, as shown in Figure 4.

**Fluency** _"The response is grammatically correct and fluent."_ We find that all models have very high fluency scores.

**Coherence** _"The response naturally follows up on the previous utterance and context and has no logical conflicts with the context or DA label."_ We find that all generative models are able to produce coherent responses, but not the retrieval model.

**Correctness** _"The response is factually correct and respects the learning concepts being taught."_ All models score comparably to ground-truth responses on the constrained CIMA dataset. It is noteworthy, however, that a response may be correct in itself but not coherent with the context or the grounding (often the case for the retrieval model), and this could explain the discrepancy between correctness and our automatic \(Q^{2}\) scores.
**Equitable tutoring** _"The response gives a learning opportunity to the student by providing space for reflection or explanation, pointing to a follow-up challenge, or engaging the student in other ways."_ Here we find significant deficiencies not only for our evaluated models but notably also for the annotated ground-truth responses (gt). This might explain the insufficiencies in the model responses, as they reflect this distributional behavior of the training data. We think that future dataset collections should take better care of this property and resort to more expert annotators as opposed to crowdsourcing. Furthermore, Table 7 shows that our automatic metrics correlate poorly with human judgements.

### User study with a learning interface

Lastly, we seek to study how well dialog tutoring models can perform in a realistic setting with questions obtained from real users (containing out-of-distribution samples) rather than from the fixed dataset. Therefore, we randomly sampled conversations from the CIMA test set. We asked two C1-level expert Italian speakers to 1) rephrase these conversations using a conversational dialog interface and 2) assign erroneous model responses to pre-defined error categories. The interface used in the qualitative evaluation is shown in Figure 6. We obtain all model responses from the BART-base model that first predicts the dialog act and then the response. Error categories adopted from previous work (Bommasani et al., 2021) describe the ideal behavior of tutoring models as simulating the behavior of good human teachers along two dimensions:

**Understanding** _"Being able to understand and reason about student solutions, misconceptions, and learning concepts."_ We find that of the 20 modified conversations, 45% exhibit _Understanding errors_, such as an incorrect solution assessment or incorrect translations.

\begin{table} \begin{tabular}{|l|c c|} \hline Quality Attribute & sBLEU & BERTScore \\ \hline Fluency & 0.14 & 0.12 \\ Coherence & 0.17 & 0.26 \\ Correctness & 0.06 & 0.15 \\ Equitable Tutoring & 0.08 & 0.16 \\ \hline \end{tabular} \end{table} Table 7: Pearson correlation coefficients between the human judgements on our quality criteria and automatic metrics.

Figure 4: Comparison of models on four criteria (reporting \(M\)) in the human quality evaluation. We observed high \(SD\) for the coherence and equitable tutoring metrics.

**Pedagogy** _"Being able to use effective pedagogy to instruct students."_ We find that 10% of the responses exhibit _Pedagogical errors_, for example telling the correct solution directly without offering any engagement point to the student. 50% of the conversations were labeled good by the annotators. Examples of the conversations are available in Table 8.

## 7 Discussion: Towards More Equitable and Faithful Tutoring Systems

In this section, we outline directions of research that we think can be important steps towards more equitable and faithful tutoring models. Namely, we first address the small scale and quality of current tutoring datasets and cast doubt on crowdsourcing data quality checks. Then, we suggest ways of improving the underperformance in both equitable tutoring and teaching strategy prediction identified in current generative models under these constraints by drawing from the learning sciences literature. Finally, we outline desiderata for more reliable dialog evaluation of neural tutoring models.
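The metric-human correlations in Table 7 are straightforward to reproduce for one's own annotations; a minimal sketch with made-up paired scores (the numbers below are purely illustrative):

```python
from scipy.stats import pearsonr

# Hypothetical paired scores for eight responses.
human_coherence = [3, 2, 3, 1, 2, 3, 1, 2]                    # 3-point Likert ratings
sbleu_scores    = [12.1, 4.3, 9.8, 2.0, 6.5, 11.0, 1.2, 5.1]  # per-response sBLEU

r, p = pearsonr(human_coherence, sbleu_scores)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```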
**Datasets** Based on the analysis in §2.1 and Table 1, we think that the community would benefit from a dataset that lies between CIMA and TSCC in terms of its difficulty. Moreover, the low equitable tutoring scores of CIMA's ground-truth responses indicate that crowdsourcing with untrained annotators can lead to low pedagogical quality. A similar observation was made in the human evaluation of the TSCC dataset (Tack and Piech, 2022). Finally, we encourage the establishment of better dialog act taxonomies that are backed by learning sciences research. As outlined in §5.6 and in He et al. (2022), a unified taxonomy may also strongly aid transfer learning.

**Models** So far, dialog tutoring models have only covered limited domain-specific settings linked to a particular activity, such as learning Italian prepositions or solving math word problems. We argue that the community could benefit from working on problems common to learning in general, for example tracking problem-solving states and modeling pedagogies used by teachers. Here, knowledge tracing (Corbett and Anderson, 1994), the problem of estimating students' skill mastery level, could be used for tracking problem-solving states and increasing the coherence of dialog tutoring conversations and dialog act selection performance, which would contribute to better modeling of global teaching strategies. Furthermore, validated instruction quality coding schemes (Michaels et al., 2010; Hennessy et al., 2016) used by classroom teachers can be computationally modeled (Demszky et al., 2021; Ganesh et al., 2021) and incorporated into models. We also think that recently proposed constrained decoding approaches that can balance between multiple criteria (Qin et al., 2022) hold great promise in improving faithfulness in complex tutoring dialogs. Finally, as data collection is labor-intensive in expert domains, we see great potential in few-shot learning methods, such as prompt-based methods (Schick and Schutze, 2022).

**Evaluation** Our experiments highlight the insufficiencies of current automatic dialog evaluation metrics, as both BLEU and BERTScore show comparatively low correlation with the human judgements we collected in §6.1. This is in line with previous research (Mehri and Eskenazi, 2020; Mehri et al., 2022) and shows the necessity not only of better automatic evaluation metrics but also of verification based on human judgements or user studies that incorporate criteria relevant to tutoring (e.g., equitable tutoring outcomes). Metrics that incorporate task success, which have been used in task-oriented dialog systems (Budzianowski et al., 2018), are a promising direction of future research for automatic evaluation.

## 8 Conclusion

In this work, we reflected on the state of research in dialog tutoring and explored the potential of neural generative models in this domain. We found some promising initial results with these models in comparison to rule- or retrieval-based methods. However, we also established limitations of currently available benchmarks and evaluation criteria. Furthermore, we showed that there are a number of challenges that need to be addressed before neural generative models of text can be deployed as intelligent tutoring systems on a larger scale, such as controllability and being able to model a sound pedagogical strategy. Based on these findings, we outlined potential avenues for future research.

### Limitations

A key limitation of our work is the use of only two available tutoring datasets.
Despite the limited number of datasets available in this domain, using the TalkMoves dataset (Suresh et al., 2022) could help further generalize our findings. This remains an avenue for future work. Based on prior work, we focused on the specific conversational goal of dialog tutors, which is providing learning aid for students' skill development and more opportunities to learn. While this is the most widespread type (Wollny et al., 2021), it does not cover all the goals of human tutors, and other aspects could be important, for example rapport-building or mentoring on the meta-cognitive level. We acknowledge this both as a prerequisite for our work and at the same time as a limitation. For further discussion we refer the reader to Appendices B and C. Finally, our user study could be further extended with more participants. In the future, we plan a more comprehensive study with real language learners using an end-to-end dialog tutoring system.

## Ethics Statement

We do not foresee any significant harm directly as a result of our work. Having said that, we must understand that automatic tutoring is a high-stakes setting that can pose significant harm if appropriate care is not taken before the deployment of these systems. Issues of bias and lack of trust, as well as other ethical issues such as privacy concerns, must be considered. Considering learners only as data points within a neural dialog tutoring context may prevent us from seeing the societal and socioeconomic barriers that they may be up against, thereby running the risk of not only failing to help relevant learner subgroups but also sometimes giving additional privileges to those who use these systems.

## 9 Acknowledgements

This project was made possible by an ETH AI Center Doctoral Fellowship to Jakub Macina, with partial support from the Asuera Stiftung and the ETH Zurich Foundation, and has received funding from the German Federal Ministry of Education and Research and the Hessian Ministry of Higher Education, Research, Science and the Arts within their joint support of the National Research Center for Applied Cybersecurity ATHENE. We thank the group members and our reviewers for their valuable feedback.
2304.12259
Imaging 3D Chemistry at 1 nm Resolution with Fused Multi-Modal Electron Tomography
Measuring the three-dimensional (3D) distribution of chemistry in nanoscale matter is a longstanding challenge for metrological science. The inelastic scattering events required for 3D chemical imaging are too rare, requiring high beam exposure that destroys the specimen before an experiment completes. Even larger doses are required to achieve high resolution. Thus, chemical mapping in 3D has been unachievable except at lower resolution with the most radiation-hard materials. Here, high-resolution 3D chemical imaging is achieved near or below one nanometer resolution in a Au-Fe$_3$O$_4$ metamaterial, Co$_3$O$_4$ - Mn$_3$O$_4$ core-shell nanocrystals, and ZnS-Cu$_{0.64}$S$_{0.36}$ nanomaterial using fused multi-modal electron tomography. Multi-modal data fusion enables high-resolution chemical tomography often with 99\% less dose by linking information encoded within both elastic (HAADF) and inelastic (EDX / EELS) signals. Now sub-nanometer 3D resolution of chemistry is measurable for a broad class of geometrically and compositionally complex materials.
Jonathan Schwartz, Zichao Wendy Di, Yi Jiang, Jason Manassa, Jacob Pietryga, Yiwen Qian, Min Gee Cho, Jonathan L. Rowell, Huihuo Zheng, Richard D. Robinson, Junsi Gu, Alexey Kirilin, Steve Rozeveld, Peter Ercius, Jeffrey A. Fessler, Ting Xu, Mary Scott, Robert Hovden
2023-04-24T16:56:16Z
http://arxiv.org/abs/2304.12259v2
# Imaging 3D Chemistry at 1 nm Resolution with Fused Multi-Modal Electron Tomography

###### Abstract

Measuring the three-dimensional (3D) distribution of chemistry in nanoscale matter is a longstanding challenge for metrological science. The inelastic scattering events required for 3D chemical imaging are too rare, requiring high beam exposure that destroys the specimen before an experiment completes. Even larger doses are required to achieve high resolution. Thus, chemical mapping in 3D has been unachievable except at lower resolution with the most radiation-hard materials. Here, high-resolution 3D chemical imaging is achieved near or below one nanometer resolution in a Au-Fe\({}_{3}\)O\({}_{4}\) metamaterial, Co\({}_{3}\)O\({}_{4}\) - Mn\({}_{3}\)O\({}_{4}\) core-shell nanocrystals, and ZnS-Cu\({}_{0.64}\)S\({}_{0.36}\) nanomaterial using fused multi-modal electron tomography. Multi-modal data fusion enables high-resolution chemical tomography often with 99% less dose by linking information encoded within both elastic (HAADF) and inelastic (EDX / EELS) signals. Now sub-nanometer 3D resolution of chemistry is measurable for a broad class of geometrically and compositionally complex materials.

## Introduction

Knowing the complete chemical arrangement of matter in all dimensions is fundamental to engineering novel nanomaterials [1]. Although electron tomography provides comprehensive 3D structure at resolutions below 1 nm using elastic scattering signals [2; 3; 4], chemical tomography obtained from inelastic scattering remains largely out of reach. Several demonstrations of chemical tomography using electron energy loss or energy-dispersive x-ray spectroscopy (EELS / EDX) accompanied the introduction of scanning transmission electron microscope (STEM) tomography and provide a milestone for 3D imaging [5; 6; 7; 8]. However, chemical tomography from core-excitation spectroscopy demands high electron doses that almost always exceed the specimen limits (e.g., \(>10^{7}\) e/A\({}^{2}\)) [9; 10; 11]. If attempting chemical tomography, researchers must sacrifice resolution by collecting few specimen projections (e.g., 5-10) and constraining the total dose (e.g., \(<10^{6}\) e/A\({}^{2}\)). Consequently, 3D resolution is penalized by undersampling and noisy chemical maps [12]. Therefore, a paradigm shift is necessary for high-resolution chemical tomography.

We show that achieving high-resolution 3D chemistry at lower dose requires fusing both elastic and inelastic scattering signals. Typically these detector signals are analyzed separately and correlated [13; 14; 15]. However, correlative imaging disregards the shared but also complementary information between structure and chemistry and misses opportunities to recover useful information [16]. Data fusion, popularized in satellite imaging, goes further than correlation by linking separate signal modalities to reconstruct new information and improve measurement accuracy [17; 18; 19]. Recent developments in multi-modal data fusion opened new opportunities for high-resolution chemical imaging by substantially reducing the dose requirements for acquiring an atomic-resolution map [20]. In alignment with the principles of fused multi-modal electron microscopy, we extend its algorithmic framework into the third dimension.
Here we introduce fused multi-modal electron tomography, which offers high signal-to-noise ratio (SNR) and high-resolution recovery of material chemistry in 3D by linking information encoded within both elastic (HAADF) and inelastic (EDX / EELS) scattering signals. Multi-modal electron tomography reconstructs the volumetric chemical structure of specimens by solving a 3-term inverse problem that fuses signals from multiple detectors. This framework enables new sampling strategies that minimize dose by measuring a high number of HAADF projections alongside far fewer chemical projections--dose reductions of one-hundred fold are readily achieved. Although the chemical structure is severely underdetermined, fusing the two modalities fills in missing information, notably improving resolution and reconstruction quality. Our approach demonstrates that researchers can measure 3D chemistry at 1 nm resolution using electron doses as low as \(10^{4}\) e/A\({}^{2}\) and as few as 9 spectroscopic maps while remaining consistent with the original measurements. Multi-modal tomography is validated across multiple material systems, including Au-Fe\({}_{3}\)O\({}_{4}\) superlattice clusters, core-shell Co\({}_{3}\)O\({}_{4}\)-Mn\({}_{3}\)O\({}_{4}\) nanocrystals [21], ZnS-Cu\({}_{0.64}\)S\({}_{0.36}\) heterostructures [22], Cu-SiC nanoparticles, and a range of simulated specimens. By fusing modalities, chemical tomography is now possible at sub-nanometer resolution for a wider class of material systems.

## Results

### Principles of Fused Multi-Modal Electron Tomography

High-resolution 3D chemical imaging is achieved using the multi-modal electron tomography framework illustrated in Fig. 1a for a binary Au-Fe\({}_{3}\)O\({}_{4}\) nanoparticle superlattice within a carbon-based matrix. In multi-modal electron tomography, projections of the specimen structure are measured with a HAADF detector and the specimen chemistry is extracted from spectroscopy (EELS or EDX). These two detector modalities are fused during the reconstruction process to provide the complete 3D chemical distribution of the specimen at high resolution and SNR. Figure 1b shows the 3D reconstruction of each individual chemistry: larger Fe nanoparticles (10.2 \(\pm\) 1.1 nm, blue) and smaller Au nanoparticles (3.9 \(\pm\) 0.4 nm, orange). Both chemistries are visualized simultaneously in Fig. 1c to show the self-organization of the chemical superlattice. The light-element carbon matrix is shown in Supplemental Figure 1.

In multi-modal tomography, the number of structural HAADF projections usually exceeds the number of chemical projections. In this first demonstration, only 9 chemical maps (\(\Delta\theta=15^{\circ}\)) are measured from the Fe-L\({}_{2,3}\) and Au-M\({}_{4,5}\) core-excitation edges in an EELS spectrum, whereas 47 HAADF images (\(\Delta\theta=3^{\circ}\)) are collected over a \(\pm 70^{\circ}\) specimen tilt range. Linking both modalities in the reconstruction enables a clear distinction between Fe\({}_{3}\)O\({}_{4}\) and Au nanoparticles at high resolution from just a few EELS maps and a total electron dose of \(\sim 5\times 10^{5}\) e/A\({}^{2}\)--roughly two orders of magnitude lower total electron dose than an equivalent conventional approach.
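This interleaved sampling strategy is simple to write down; a small sketch of the tilt schedule follows (the exact angular endpoints are our assumption, chosen only to reproduce the stated projection counts):

```python
import numpy as np

# Dense HAADF tilts with sparse, interleaved chemical (EELS) maps.
haadf_tilts = np.arange(-69, 70, 3)    # 47 HAADF projections, 3 degrees apart
eels_tilts = np.arange(-60, 61, 15)    # 9 EELS maps, 15 degrees apart

assert len(haadf_tilts) == 47 and len(eels_tilts) == 9
```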
Fused multi-modal electron tomography reconstructs three-dimensional chemical models by solving an optimization problem seeking a solution that strongly agrees with (1) the high-SNR HAADF modality and (2) the chemically sensitive spectroscopic modality (EELS and/or EDX), and that (3) encourages sparsity in the gradient domain, producing solutions with reduced spatial variation. The overall optimization function is as follows:

\[\underset{\mathbf{x}_{i}\geq 0}{\arg\min}\;\frac{\lambda_{1}}{2}\Big{\|}\mathbf{A}_{h}\sum_{i}(Z_{i}\mathbf{x}_{i})^{\gamma}-\mathbf{b}_{H}\Big{\|}_{2}^{2}+\lambda_{2}\sum_{i}\Big{(}\mathbf{1}^{T}\mathbf{A}_{c}\mathbf{x}_{i}-\mathbf{b}_{i}^{T}\log(\mathbf{A}_{c}\mathbf{x}_{i}+\varepsilon)\Big{)}+\lambda_{3}\sum_{i}\|\mathbf{x}_{i}\|_{\mathrm{TV}},\tag{1}\]

where \(\mathbf{x}_{i}\) is the reconstructed 3D chemical distribution for element \(i\), \(\mathbf{b}_{i}\) are the measured 2D chemical maps for element \(i\), \(\mathbf{b}_{H}\) are the measured HAADF micrographs, \(\mathbf{A}_{h}\) and \(\mathbf{A}_{c}\) are forward projection operators for the HAADF and chemical modalities, the \(\lambda\) are regularization parameters, \(\varepsilon\) herein prevents \(\log(0)\) issues but can also account for background, the \(\log\) is applied element-wise to its argument, superscript \(T\) denotes vector transpose, and \(\mathbf{1}\) denotes the vector of \(N_{\mathrm{chem}}^{\mathrm{proj}}n_{y}n_{i}\) ones, where \(n_{y}\) is the number of pixels, \(n_{i}\) is the number of elements present, and \(N_{\mathrm{chem}}^{\mathrm{proj}}\) is the number of projections for the chemical modality. Pseudo-code for the numerical implementation is provided in the Supplemental Materials.

Fig. 1: **Nanoscale recovery of Au-Fe\({}_{3}\)O\({}_{4}\) nanoparticle superlattice.** **a** Schematic highlighting the linked HAADF and EELS modalities for chemical tomography. HAADF projection images are collected at every tilt increment while core-loss EELS spectra are sparsely acquired every few tilts. **b** The fused multi-modal reconstruction for the specimen's Fe L\({}_{2,3}\) (turquoise), O-K (turquoise), and gold M\({}_{4,5}\) edge (yellow). **c** Chemical overlay of the superlattice nanoparticles over the entire 115 nm field of view. Scale cubes, 5 nm\({}^{3}\).

The three terms in Equation 1 define our fused multi-modal framework, designed to surpass traditional limits for chemical tomography. First, we assume a forward model where the simultaneous HAADF signal is a linear combination of the reconstructed 3D elemental distributions (\(\mathbf{x}_{i}^{\gamma}\) where \(\gamma\in[1.4,\,2]\)). The incoherent linear imaging approximation for elastic scattering scales with atomic number as \(Z_{i}^{\gamma}\), where experimentally \(\gamma\) is typically around 1.7 [23, 24, 25]. This \(\gamma\) is bounded between 4/3, as described by Lenz-Wentzel expressions for electrons passing through a screened coulombic potential, and 2 for Rutherford scattering from bare nuclear potentials [26, 27]. Second, we ensure the recovered 3D distributions maintain a high degree of data fidelity with the initial measurements by using the log-likelihood for spectroscopic measurements dominated by low-count Poisson statistics [19; 28]. In a higher count regime, this term can be substituted with a least-squares discrepancy (\(\|\mathbf{Ax}-\mathbf{b}\|_{2}^{2}\)) [29].
Lastly, we include channel-wise isotropic total variation (TV) regularization to enforce a sparse gradient magnitude, which reduces noise by promoting image smoothness while preserving sharp features [30]. This sparsity constraint, popularized by the field of compressed sensing (CS), is a powerful yet modest prior for recovering structured data [31; 32]. When solving Equation 1, each of these three terms should be weighted appropriately by determining coefficients (\(\lambda\)) that balance their contributions. Ultimately, optimization of all three terms is necessary for accurate recovery (Supplementary Fig. 21). The improvement in reconstruction quality with fused multi-modal chemical tomography (Fig. 2i) is dramatic when compared to traditional chemical tomography (Fig. 2c).

## 3D Chemistry at High-Resolution, Low-Dose

In tomography, 3D resolution is described by the Crowther criterion, which states that resolution is limited by the object size and the number of specimen projections measured [33]--higher resolution requires more projections [34]. For traditional chemical tomography, few chemical projections are collected and the Crowther relation devastates resolution in 3D. This limitation arises from the high-dose requirements of chemical mapping (i.e., EDX, EELS), where only a few projections can be collected before radiation damage alters the specimen structure. Figure 2 shows how specimen projections from each modality are superimposed as planes of information in Fourier space. Chemical tomography is sparsely sampled in Fourier space (Fig. 2a), which results in a tomographic reconstruction containing artifacts and low SNR (Fig. 2b,c). Despite the poor quality, traditional chemical tomography tracks the chemical distribution, and the Mn shell (orange) can be seen surrounding the Co core (blue-green). In contrast, elastically scattered electrons collected by the HAADF detector provide high signals at lower doses and allow many projections to be collected--in practice, HAADF sampling is five to ten times more finely spaced than chemical sampling (Fig. 2d) [25]. The dose required for a single HAADF projection is 10\({}^{2}\)-10\({}^{3}\) times lower than for a chemical projection acquired using core-energy-loss spectroscopy. Thus, it is favorable to acquire more HAADF images and achieve higher resolution. Although HAADF tomography permits high-resolution and high-SNR reconstructions of structure, it lacks chemical specificity. This is seen in Figure 2e,f, where the structure is well defined with low noise but the Co and Mn regions are not identifiable. Exploiting shared information in both modalities, multi-modal tomography achieves a chemical resolution in 3D comparable to high-resolution HAADF reconstructions.

Figure 2: **Nanoscale recovery of Co\({}_{3}\)O\({}_{4}\)-Mn\({}_{3}\)O\({}_{4}\) core-shell nanoparticles.** **a-c** Raw EELS reconstruction for the Co (blue-green) and Mn (orange) L\({}_{2,3}\) core-loss edges. **d-f** The HAADF tomogram of a Co\({}_{3}\)O\({}_{4}\)-Mn\({}_{3}\)O\({}_{4}\) nanoparticle tracks the structure of the specimen but fails to describe the material's chemistry in 3D. **g-i** The fused multi-modal reconstruction. Scale cubes, 25 nm\({}^{3}\). **a,d,g** Representation in Fourier space of the projections used to reconstruct the tomograms. **j** Fused multi-modal tomogram of a single Co\({}_{3}\)O\({}_{4}\)-Mn\({}_{3}\)O\({}_{4}\) nanoparticle. Scale cube, 10 nm\({}^{3}\). **k** A line profile showing the average intensity across the diameter of the particle.
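To make the three-term objective of Equation 1 concrete, here is a schematic 1D toy sketch in NumPy/SciPy. The problem sizes, scalings, smoothed-TV surrogate, and the generic bound-constrained L-BFGS solver are all our assumptions; the authors provide pseudo-code for their actual solver in their Supplemental Materials.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_pix, n_meas, gamma = 32, 24, 1.7
Z = np.array([26.0, 79.0]) / 79.0           # "atomic numbers", rescaled for conditioning
A_h = rng.random((n_meas, n_pix)) / n_pix   # toy HAADF projector A_h
A_c = 200.0 * A_h                           # toy chemical projector A_c (scale sets counts)

x_true = rng.random((2, n_pix))             # two elemental maps x_i
b_H = A_h @ ((Z[:, None] * x_true) ** gamma).sum(axis=0)                 # HAADF data
b_chem = np.stack([rng.poisson(A_c @ xi) for xi in x_true]).astype(float)  # noisy maps
lam1, lam2, lam3, eps, delta = 10.0, 1.0, 1e-3, 1e-9, 1e-6

def cost(x_flat):
    x = np.clip(x_flat.reshape(2, n_pix), 0.0, None)   # enforce x_i >= 0
    # Term 1: HAADF data fidelity on sum_i (Z_i x_i)^gamma.
    haadf = 0.5 * lam1 * np.sum(
        (A_h @ ((Z[:, None] * x) ** gamma).sum(axis=0) - b_H) ** 2)
    # Term 2: Poisson log-likelihood of the chemical maps.
    poisson = lam2 * sum((A_c @ xi).sum() - bi @ np.log(A_c @ xi + eps)
                         for xi, bi in zip(x, b_chem))
    # Term 3: smoothed (channel-wise) total variation.
    tv = lam3 * sum(np.sqrt(np.diff(xi) ** 2 + delta).sum() for xi in x)
    return haadf + poisson + tv

res = minimize(cost, np.full(2 * n_pix, 0.5), method="L-BFGS-B",
               bounds=[(0.0, None)] * (2 * n_pix), options={"maxiter": 300})
x_rec = res.x.reshape(2, n_pix)
```

The nonnegativity bounds play the role of the constraint \(\mathbf{x}_{i}\geq 0\) in Equation 1, and the smoothed absolute-difference sum stands in for the TV term.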
Although few chemical measurements pose a severely underdetermined problem, fusing with the HAADF modality fills in the missing chemical information. This is reflected in Figure 2g, where many HAADF projections (e.g., 50-180) are measured while far fewer chemical projections (e.g., 5-15) are intermittently measured. In this reconstruction, 9 EELS maps and 45 HAADF projections (50-200 mrad detector inner and outer semi-angles) were collected over a \(\pm 60^{\circ}\) tilt range using a 2.4 A probe with a 24.3 nm depth of focus (300 keV acceleration voltage, 10 mrad convergence angle). High-resolution 3D chemistry is visible in the core-shell Co\({}_{3}\)O\({}_{4}\)-Mn\({}_{3}\)O\({}_{4}\) multi-modal tomograms in Figure 2h,i.

Fused multi-modal electron tomography provides unique insight for studying heterostructured nanocrystals with unprecedented geometries. In the case of Co\({}_{3}\)O\({}_{4}\) - Mn\({}_{3}\)O\({}_{4}\) nanocrystals, the manganese oxide shell is divided into several ordered grains that grow on each surface plane of the cobalt oxide nanocube core [21]. However, the core and shell interface can vary per plane, driven by the growth interplay between strain and surface energy, resulting in the formation of grain boundaries [35]. The complete 3D distribution of Co and Mn at the surface and interface is difficult to discern with 2D projected EELS maps or HAADF reconstructions. Fortunately, the fused chemical distributions reveal the surface coverage of the shell grains, and cross-sections quantify the shell thickness and interface chemistry. To further demonstrate, fused multi-modal EELS tomography was used to discern between ZnS and Cu\({}_{0.64}\)S\({}_{0.36}\) phases (Supplementary Fig. 27) in a heterostructured nanocrystal [22] and EDX tomography to identify Cu nanoparticles embedded in SiC catalysts (Supplementary Fig. 27).

Data fusion eliminates noticeable noise in the final 3D chemical reconstruction without a loss of resolution. This noise reduction accompanies a dose reduction of roughly one-hundred fold. Linking the chemical projections to the high-SNR HAADF signals dose-efficiently boosts the chemical specificity. To illustrate, in Figure 2, matching the resolution of fused multi-modal chemical tomography using traditional methods would require 45 EELS maps--a five-fold dose increase. However, the SNR of each chemical projection would still fall short (Supplementary Fig. 27), requiring roughly twenty times additional dose. In total, multi-modal chemical tomography performs well at one-hundredth the dose requirement of traditional methods.

### Sub-nanometer Chemical Resolution in 3D

3D resolution of the chemical distribution in the Au-Fe\({}_{3}\)O\({}_{4}\) nanoparticle superlattice (Fig. 3a) is demonstrated at or below 1 nm using multi-modal tomography. The achieved resolution is quantified in real and reciprocal space. In real space, the resolution limit is verified by visually inspecting a single 3 nm Au nanoparticle (Fig. 3d). The edge sharpness between the reconstructed nanoparticle and vacuum is visibly less than 1 nm. From line profiles, the half-pitch resolution is 0.8 nm \(\times\) 0.8 nm \(\times\) 1.1 nm along the \(x\), \(y\), and \(z\) directions, respectively. Along the optimal directions (\(x\), \(y\)) the resolution is comparable to the Nyquist frequency (8.05 A). The real-space resolution is consistent with reciprocal-space estimates of the cutoff frequency at which the signal drops to the noise floor [1].
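The paper does not spell out how edge sharpness is extracted from the line profiles; a common convention fits an error function to the edge and reports its rise distance. A sketch on synthetic data follows (the 25%-75% rise criterion is our assumption):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def edge(x, x0, sigma, a, c):
    # Error-function model of a blurred edge between particle and vacuum.
    return a * erf((x - x0) / (np.sqrt(2.0) * sigma)) + c

x = np.linspace(-3, 3, 61)  # position along the line profile (nm)
rng = np.random.default_rng(1)
profile = edge(x, 0.0, 0.35, 1.0, 1.0) + 0.02 * rng.normal(size=x.size)

params, _ = curve_fit(edge, x, profile, p0=(0.0, 0.5, 1.0, 1.0))
width_25_75 = 1.349 * params[1]  # 25%-75% rise of a Gaussian-blurred edge
print(f"edge width ~ {width_25_75:.2f} nm")
```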
Fig. 3: **Resolution analysis of Au-Fe\({}_{3}\)O\({}_{4}\) superlattice nanoparticles.** **a** Fused EELS tomograms of Au-Fe\({}_{3}\)O\({}_{4}\) nanoparticles. Scale cube, 2 nm\({}^{3}\). **b** Power spectral density of the Fe reconstruction along the principal axial directions shown on the right. Scale bar, 0.5 nm\({}^{-1}\). **c** Power spectral density profiles for the \(k_{x}\)-\(k_{y}\) and \(k_{x}\)-\(k_{z}\) directions. **d** Line profiles of a 2.5 nm Au nanoparticle give a resolution of 0.8 nm, 0.8 nm, and 1.1 nm along the \(x\), \(y\), and \(z\) directions.

Figure 3b highlights power spectral density variations projected on three orthogonal planes. The measured power spectral density along the \(k_{x}\)-\(k_{y}\) and \(k_{x}\)-\(k_{z}\) directions shows information transfer roughly occurring at 0.99 nm and 1.02 nm, respectively (Fig. 3c). These directions conservatively represent the 3D resolution as an average of the high-resolution and low-resolution (\(z\)-axis) directions. This 3D chemical resolution nearly matches the 3D HAADF resolution (1.00 nm, 1.01 nm; Supplementary Fig. 3). For fused multi-modal chemical tomography, the HAADF 3D resolution provides a new upper bound on the highest obtainable 3D chemical resolution. A reduction of resolution along the \(z\)-axis is expected from the incomplete tilt range, which creates a missing wedge of information in Fourier space [36]. Here, we observe approximately a 25% reduction in resolution along the missing-wedge direction of the multi-modal chemical reconstruction.

### Influence of Sampling

Electron tomography simulations show a 3-5 fold improvement in the normalized root-mean-square error (\(\langle\mathrm{NRMSE}\rangle\)), averaged across all elements, when multi-modal tomography is used over conventional chemical tomography. In Figure 4, synthetic gold-decorated CoO / CuO nanocubes inspired by real experimental data [37] provide a ground-truth comparison to assess the accuracy of fused multi-modal tomography. Simulated projection images are generated from a simple linear incoherent imaging model of the 3D chemical composition with added Poisson noise (see Methods). The specimen tilt range is limited to \(\pm 70^{\circ}\) to better match typical experimental conditions. The advantages of multi-modal tomography are clearly visible in the 2D slices (Fig. 4b) taken from the 3D reconstructions obtained by conventional chemical tomography (\(\langle\mathrm{NRMSE}\rangle=1.301\)) and fused multi-modal tomography (\(\langle\mathrm{NRMSE}\rangle=0.33\)). For all chemistries (Au, O, Cu, Co), fused multi-modal tomography is more consistent with the ground truth, with higher resolution and reduced noise.

For any number of chemical projections acquired, we see a notable reduction in NRMSE when HAADF projections are integrated into the chemical reconstruction. Figure 4 shows the improved fused multi-modal reconstruction accuracy across a wide range of HAADF and chemical projections for the gold-decorated CoO / CuO nanocubes. The reconstruction error (average NRMSE) across most of the multi-modal parameter space is less than 0.40, compared to values around 1.2 for conventional tomography. Pixel values on the diagram (Fig. 4a) represent the average NRMSE across all of the elements. This NRMSE map shows that data fusion benefits strongly from increasing the available HAADF information.
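For reference, the error metric mapped in Fig. 4a can be written compactly; the paper does not state its exact normalization, so the sketch below assumes the common convention of dividing by the norm of the ground truth:

```python
import numpy as np

def nrmse(x_rec, x_true):
    # Normalized root-mean-square error of one elemental reconstruction.
    return np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)

def mean_nrmse(recs, truths):
    # <NRMSE>: average over the reconstructed elements (e.g., Au, O, Cu, Co).
    return float(np.mean([nrmse(r, t) for r, t in zip(recs, truths)]))
```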
It requires substantially less dose to increase the number of HAADF projections (i.e., moving vertically on the map) compared to increasing the number of chemical projections (i.e., moving horizontally on the map). Conventional chemical tomography does not use HAADF projections (bottom row, Fig. 4a), resulting in an average reconstruction error larger than anywhere in the multi-modal regime. In practice, fused multi-modal tomography is performed in the regime with equal or more HAADF projections than chemical projections (i.e., the top-left triangle). Multi-modal tomography also performs well when the chemical projections exceed the number of HAADF projections; however, this is not practical since HAADF signals can be acquired simultaneously with EDX and EELS. Similar trends are observed in a second large-scale simulation performed on a synthetic composite structure composed of transition-metal CoO nanoparticles embedded in a NiO support (Supplementary Fig. 3).

Fig. 4: **Estimating Sampling Requirements for Accurate Recovery with Synthetic CoO/CuO Nanocubes.** **a** An NRMSE map representing the reconstruction error as a function of the number of HAADF and chemical tilts. Brighter pixels denote reconstructions that deviate further from the ground truth. **b** Visualization of three points corresponding to conventional chemical tomography (reconstruction without the HAADF modality) and low- or high-dose fused multi-modal electron tomography. **c** The 3D models used for generating synthetic chemical and ADF projections. Scale bar, 75 nm.

## Discussion

While this paper highlights the advantages of fused multi-modal electron tomography, the technique is not a simple black-box solution. Step sizes for convergence and weights on the terms in the cost function (Eq. 1) must be reasonably selected. Standard spectroscopic pre-processing methods become ever more critical in combination with multi-modal fusion. Improper background subtraction of EELS spectra [38], or overlapping characteristic X-ray peaks that normally cause inaccurate stoichiometric quantification, also reduce the accuracy of fused multi-modal tomography. Thick specimens with dimensions that far exceed the mean free path of the electron can produce inversion contrast that will cause electron tomography to fail [39]--also causing failure for multi-modal electron tomography (Supplementary Fig. ??). As shown for 2D fused multi-modal electron microscopy [20], fused multi-modal tomography works best when elements have discernible contributions to the HAADF contrast and all chemical elements have been imaged. Multi-modal tomography leverages compressed sensing (e.g., TV min.), which assumes incoherence (i.e., a high level of dissimilarity) between the sensing and sparsifying transforms [40, 41, 42]--although this assumption typically holds, as demonstrated for the datasets presented herein.

## Conclusion

In summary, we present fused multi-modal electron tomography, which enables chemically sensitive 3D reconstruction of matter with nanometer resolution at high SNR. Researchers no longer must choose between measuring 3D structure without chemical detail or characterizing chemistry along a single viewing direction. By linking signals from elastic (HAADF) and inelastic (EDX / EELS) scattering processes, the traditional dose limits of chemical tomography are substantially overcome. In some cases, a one-hundred-fold reduction in dose is estimated.
To demonstrate, the complete volumetric density of each chemistry was mapped in several systems, including Au-Fe\({}_{3}\)O\({}_{4}\), Co\({}_{3}\)O\({}_{4}\)-Mn\({}_{3}\)O\({}_{4}\), ZnS-Cu\({}_{0.64}\)S\({}_{0.36}\), and Cu-SiC nanomaterials. In both synthetic and experimental datasets, fused multi-modal electron tomography shows substantial advantages in the accuracy of 3D chemical imaging. This approach enables chemical tomography of a wide range of previously inaccessible materials with moderate radiation sensitivity. Fused multi-modal electron tomography opens up new understanding of geometrically and compositionally complex materials. Here, fused multi-modal tomography used commonly available STEM detectors (HAADF, EDX, and EELS); however, this approach can be extended to other modalities in development--including pixel-array detectors [43], annular bright field [44], ptychography [45], low-loss EELS [46], etc. One can imagine a future wherein all scattered and emitted signals in an electron microscope are collected and fused for maximally efficient characterization of matter in all dimensions.

## Methods

### Specimen Synthesis and Preparation

#### Au-Fe\({}_{3}\)O\({}_{4}\) Superlattice Nanoparticles

Syntheses of 3.9 nm Au NPs [47] and 10.2 nm Fe\({}_{3}\)O\({}_{4}\) NPs [48] were carried out under a nitrogen atmosphere using standard Schlenk line techniques according to literature methods. Polystyrene-based ligands were attached to the NP surface through a ligand exchange process as reported before [49]. Thiol-terminated polystyrene (PS-SH) was used as the polymeric ligand for the Au NPs and was synthesized using Reversible Addition-Fragmentation chain Transfer (RAFT) polymerization, then end-functionalized by aminolysis. Amine-terminated polystyrene was used as the polymeric ligand for the Fe\({}_{3}\)O\({}_{4}\) NPs and was synthesized using atom transfer radical polymerization and end-group modification [50]. A binary superlattice of Au and Fe\({}_{3}\)O\({}_{4}\) NPs was prepared by nanoparticle co-crystallization at the water-air interface. A toluene solution containing the two types of NPs at a concentration ratio of 2:1 was drop-cast onto the water surface in a Teflon well and slowly dried overnight. The binary nanoparticle film was transferred onto a 200-mesh carbon TEM substrate and further dried in a vacuum oven for 6 hours to remove residual solvent.

#### Co\({}_{3}\)O\({}_{4}\) Nanocubes

A mixture of 0.37 g of cobalt(II) perchlorate (Aldrich) and 2.7 g of oleylamine (Acros) in 15 mL of 1-octanol (Aldrich) was heated to 120 \({}^{\circ}\)C under air and aged for 2 h. During the heating, 0.7 mL of distilled water was added before the temperature reached 120 \({}^{\circ}\)C. After the reaction, an excess amount of acetone and ethanol was added, and the Co\({}_{3}\)O\({}_{4}\) nanocubes were retrieved by centrifugation.

#### Core-Shell Co\({}_{3}\)O\({}_{4}\)-Mn\({}_{3}\)O\({}_{4}\) Nanoparticles

An organic/aqueous suspension was prepared by adding 0.080 g of Co\({}_{3}\)O\({}_{4}\) nanocubes to a mixture of oleylamine (5 mmol), oleic acid (0.5 mmol), formic acid (3.15 mmol, Aldrich), and 15 mL of xylenes (Aldrich). The as-prepared suspension was heated to 40 \({}^{\circ}\)C under air and aged for three hours with magnetic stirring. Then, 0.7 mL of a 0.7 M aqueous solution of manganese(II) chloride tetrahydrate was rapidly injected into the suspension at 90 \({}^{\circ}\)C and aged for 1.5 h under air. After the reaction, the nanocrystals were washed with hexane/ethanol and retrieved by centrifugation.
The final product was prepared with three iterations of this process.

#### ZnS - Cu\({}_{0.64}\)S\({}_{0.36}\) Nanocrystals

Synthesis of the ZnS - Cu\({}_{0.64}\)S\({}_{0.36}\) heterostructured NPs was performed as described in the literature using typical air- and water-free synthetic techniques [22]. Cu\({}_{1.81}\)S (roxbyite) nanocrystals are synthesized by first dissolving CuCl\({}_{2}\cdot\)2H\({}_{2}\)O in oleylamine (OLAM) at 200 \({}^{\circ}\)C after thoroughly degassing the solution at high temperature. Tert-butyl disulfide is then injected at 180 \({}^{\circ}\)C and the reaction continues at this temperature for 40 minutes. After cooling to room temperature, the NPs are washed with hexanes and acetone, then dried in a vacuum desiccator. The roxbyite NPs are then subjected to cation exchange as described previously [22]. Briefly, ZnCl\({}_{2}\) and OLAM are degassed at high temperature and then heated at 180 \({}^{\circ}\)C for 30 minutes to make a concentrated solution of Zn\({}^{2+}\) for cation exchange. After cooling the Zn\({}^{2+}\) solution to 100 \({}^{\circ}\)C, an aliquot of the solution is mixed with toluene and the temperature is adjusted to 50 \({}^{\circ}\)C. The synthesized roxbyite NPs are dissolved in tri-octyl phosphine and then injected into the Zn\({}^{2+}\) solution and allowed to react for 30 minutes before quenching the reaction with cold acetone.

#### Cu-SiC Catalyst

The Cu/SiC catalyst was prepared on a commercial SiC support purchased from Shanghai Yao Tian Nano Material Co., Ltd. following previously described methods [51]. The catalyst was prepared by incipient wetness impregnation using a Cu(NO\({}_{3}\))\({}_{2}\cdot\)3H\({}_{2}\)O aqueous solution (0.35 g/mL) with 5 wt% Cu loading, followed by calcination in air at 350 \({}^{\circ}\)C for 2 h.

#### Acrylic C-TiO\({}_{2}\) Nanoparticles

The C-TiO\({}_{2}\) sample was prepared by blending commercial TiO\({}_{2}\) particles (purchased from Chemours) with an emulsion polymer latex. Before conducting the chemical imaging at room temperature, the blend was pretreated under the electron beam in a Thermo Fisher T12 TEM at -80 \({}^{\circ}\)C to promote cross-linking in the latex and preserve its morphology above the glass transition temperature.

### Electron Tomography Acquisition

Simultaneously acquired HAADF and EELS tilt series for the Au-Fe\({}_{3}\)O\({}_{4}\) specimen were collected on a Talos F200X G2 (Thermo Fisher) operated at 200 keV with a probe current of 115 pA, a probe semi-angle of roughly 10.5 mrad, and an inner collection semi-angle of 50 mrad. The HAADF projections were collected from -60\({}^{\circ}\) to +60\({}^{\circ}\) with a 3\({}^{\circ}\) angular increment using a Model 2021 Fischione Analytical Tomography Holder. At each tilt angle, a STEM image was recorded with a 24 \(\mu\)s dwell time per pixel and a pixel size of 6.4 Å. Simultaneously acquired HAADF signals and EELS spectra were recorded with a 15\({}^{\circ}\) angular increment and a dwell time of 3 ms, for a total electron dose of \(4.9\times 10^{5}\,e/\text{\AA}^{2}\) (\(1.72\times 10^{4}\,e/\text{\AA}^{2}\) and \(4.73\times 10^{5}\,e/\text{\AA}^{2}\) for the HAADF and EELS modalities, respectively). Refer to Supplementary Figs. 11 and 12 to view the raw tilt series.
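As a sanity check on the quoted doses, the per-modality numbers follow directly from the probe current, dwell time, pixel size, and number of projections. A minimal sketch using the Au-Fe\({}_{3}\)O\({}_{4}\) HAADF parameters above (the projection count of 41 follows from the \(\pm\)60\({}^{\circ}\) range in 3\({}^{\circ}\) steps):

```python
# Electron dose implied by the Au-Fe3O4 HAADF acquisition parameters above
E_CHARGE = 1.602e-19     # coulombs per electron

current_A = 115e-12      # probe current (115 pA)
dwell_s   = 24e-6        # per-pixel dwell time (24 us)
pixel_A2  = 6.4 ** 2     # pixel area in square angstroms
n_proj    = 41           # -60 to +60 degrees in 3 degree steps

electrons_per_pixel = current_A * dwell_s / E_CHARGE   # ~1.7e4 electrons
dose_per_image = electrons_per_pixel / pixel_A2        # ~420 e/A^2 per image
total_haadf_dose = n_proj * dose_per_image             # ~1.7e4 e/A^2

print(f"HAADF series dose: {total_haadf_dose:.2e} e/A^2")
```

The result (\(\sim 1.7\times 10^{4}\,e/\text{\AA}^{2}\)) is consistent with the HAADF dose quoted above; the spectroscopy share follows the same arithmetic with the EELS dwell time and tilt sampling.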
Simultaneously acquired HAADF and EELS tilt series for the Co\({}_{3}\)O\({}_{4}\) - Mn\({}_{3}\)O\({}_{4}\) specimen were collected on a double aberration-corrected modified FEI Titan 80-300 microscope (the TEAM I instrument at the National Center for Electron Microscopy within Lawrence Berkeley National Laboratory) operated at 300 keV with a probe current of 115 pA and a semi-angle of roughly 10 mrad. This microscope is equipped with a Gatan K3 detector and Continuum spectrometer. The HAADF projections were recorded from -60\({}^{\circ}\) to +60\({}^{\circ}\) with a 3\({}^{\circ}\) angular increment using a Hummingbird Scientific concentric Tomography Holder. At each tilt angle, a STEM image was recorded with a 24 \(\mu\)s dwell time per pixel and a pixel size of 7.79 Å. Simultaneously acquired HAADF signals and EELS spectra were recorded with a 15\({}^{\circ}\) angular increment and a dwell time of 0.677 ms, for a total electron dose of \(8.37\times 10^{4}\,e/\text{\AA}^{2}\) (\(1.16\times 10^{4}\,e/\text{\AA}^{2}\) and \(7.21\times 10^{4}\,e/\text{\AA}^{2}\) for the HAADF and EELS modalities, respectively). Refer to Supplementary Figs. 11 and 12 to view the raw tilt series.

Simultaneously acquired HAADF and EDX tilt series for the Cu-SiC specimen were collected on a Talos F200X G2 (Thermo Fisher) operated at 200 keV with a probe current of 250 pA, a probe semi-angle of roughly 10.5 mrad, and a collection angle of 44-200 mrad. The HAADF projections were collected from -75\({}^{\circ}\) to +70\({}^{\circ}\) with a 3\({}^{\circ}\) angular increment. At each tilt angle, a STEM image was recorded with a 20 \(\mu\)s dwell time per pixel and a pixel size of 1.4679 nm. Simultaneously acquired HAADF signals and EDX spectra were recorded with a 15\({}^{\circ}\) angular increment and a dwell time of 20 \(\mu\)s for 25 frames, for a total electron dose of \(4.33\times 10^{4}\,e/\text{\AA}^{2}\) (\(7.1\times 10^{3}\,e/\text{\AA}^{2}\) and \(3.62\times 10^{4}\,e/\text{\AA}^{2}\) for the HAADF and EDX modalities, respectively). The initial chemical distributions were generated from EDX maps using commercial Velox software that produced initial net count estimates (atomic percent estimates are also suitable).

### Multi-Modal Tilt Series Alignment

The EELS signals were obtained by integrating over the core-loss edges after background subtraction. The background EELS spectra were modeled using a linear combination of power laws implemented using the open-source Cornell Spectrum Imager software [9]. Before tilt series alignment, the spectrum images were drift-corrected after acquisition assuming a time-dependent linear drift model, as illustrated in Supplementary Fig. 13. The survey image, which is taken with an identical dwell time as the HAADF tilts, is used as the reference. Iterative image registration between the chemical and HAADF signals seeks an optimal translation and affine transformation. Following registration, the background of each projection was removed: the mean grey level in the outer regions was calculated for each projection and subtracted. In this way, the signal contribution of the carbon film could be eliminated. For the alignment of the tilt series, a coarse alignment is performed with either the center of mass (CoM) or cross-correlation method [52]. CoM works best when the total projected volume is fixed across specimen tilts (i.e., the object is isolated) [53]. In cases where these requirements are not met (e.g.,
fields of view where multiple particles are visible, as demonstrated with the Au - Fe\({}_{3}\)O\({}_{4}\) nanoparticles), cross-correlation should be considered. Fine alignment is performed with a custom-written projection matching method [54] on the HAADF modality. The measured translational shifts are subsequently applied to the corresponding tilts where simultaneously acquired chemical maps were collected.

### Fused Multi-Modal Tomography Recovery

Here, fused multi-modal electron microscopy is framed as an inverse problem expressed in the following form: \(\hat{\mathbf{x}}=\,\operatorname*{arg\,min}_{\mathbf{x}\geq 0}\lambda_{1}\Psi_{1}(\mathbf{x})+\lambda_{2}\Psi_{2}(\mathbf{x})+\lambda_{3}\mathrm{TV}(\mathbf{x})\), where \(\hat{\mathbf{x}}\) is the final reconstruction and the three terms are described in the main manuscript (Eq. 1). When implementing an algorithm to solve this problem, we concatenate the multi-element spectral variable (\(\mathbf{x}\)) as 2D matrices: \(\mathbf{x}\in\,\mathbb{R}^{n_{y}\cdot n_{y}\cdot n_{i}\times n_{x}}\), where \(n_{i}\) denotes the total number of reconstructed elements, \(n_{x},n_{y}\) represent the number of pixels in the \(x\) and \(y\) directions, and \(\mathbf{x}_{i},\mathbf{b}_{i}\) are the reconstruction and chemical maps for element \(i\) (\(\mathbf{x}_{i}\in\,\mathbb{R}^{n_{y}\cdot n_{y}\times n_{x}}\) and \(\mathbf{b}_{i}\in\,\mathbb{R}^{n_{y}\cdot N_{\text{chem}}^{\text{proj}}\times n_{x}}\)). Here the axis of rotation is along the \(x\)-direction (\(n_{x}\)). The optimization problem is solved by a combination of gradient descent and total variation regularization. We minimize this cost function by iteratively descending along the negative gradient directions for the first two terms and subsequently evaluating the isotropic TV proximal operator to denoise the chemical volumes [55]. The gradients of the first two terms are:

\[\nabla_{\mathbf{x}}\Psi_{1}(\mathbf{x}) =\gamma\,\text{diag}\big{(}\mathbf{x}^{\gamma-1}\big{)}\mathbf{\Sigma}^{T}\mathbf{A}_{h}^{T}\Big{(}\mathbf{A}_{h}(\mathbf{\Sigma}\mathbf{x}^{\gamma})-\mathbf{b}_{H}\Big{)} \tag{2}\]
\[\nabla_{\mathbf{x}_{i}}\Psi_{2}(\mathbf{x}_{i}) =\mathbf{A}_{c}^{T}\Big{(}(\mathbf{A}_{c}\mathbf{x}_{i}-\mathbf{b}_{i})\oslash(\mathbf{A}_{c}\mathbf{x}_{i}+\varepsilon)\Big{)}, \tag{3}\]

where \(\oslash\) denotes point-wise division, \(\mathbf{b}_{H}\in\mathbb{R}^{n_{y}\cdot N_{\text{HAADF}}^{\text{proj}}\times n_{x}}\) are the HAADF measurements, and \(\mathbf{A}_{h}\in\mathbb{R}^{n_{y}\cdot N_{\text{HAADF}}^{\text{proj}}\times n_{y}\cdot n_{y}}\) and \(\mathbf{A}_{c}\in\mathbb{R}^{n_{y}\cdot N_{\text{chem}}^{\text{proj}}\times n_{y}\cdot n_{y}}\) are forward projection matrices operating on the HAADF and chemical modalities, respectively. Here, the first term in the cost function, relating the elastic and inelastic modalities, has been equivalently re-written as \(\Psi_{1}~{}=~{}\frac{1}{2}\big{\|}\mathbf{A}_{h}(\mathbf{\Sigma}\mathbf{x}^{\gamma})-\mathbf{b}_{H}\big{\|}_{2}^{2}\), where \(\mathbf{\Sigma}\in\mathbb{R}^{n_{y}\cdot n_{y}\times n_{y}\cdot n_{y}\cdot n_{i}}\) and \(\mathbf{\Sigma}\mathbf{x}\) expresses the summation of all chemistries as a matrix-vector multiplication. Evaluating the TV proximal operator is itself another iterative algorithm. In addition, we impose a non-negativity constraint since negative concentrations are unphysical. We initialize the first iterate with reconstructions computed purely from the raw measured data (\(\mathbf{x}_{i}^{0}=\arg\min\Psi_{2}\)).
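A minimal NumPy sketch of one descent iteration is given below. The dense arrays `A_h` and `A_c` stand in for the sparse projection operators used in practice (tomo_TV), `tv_denoise` is a placeholder for the isotropic TV proximal step, and the step size and weights are illustrative assumptions rather than the values used for the results above.

```python
import numpy as np

def fused_step(x, Z, A_h, A_c, b_H, b, gamma=1.7, eps=1e-3,
               lam1=1e-4, lam2=1.0, step=1e-3):
    """One gradient step on lam1*Psi_1 + lam2*Psi_2 with non-negativity.

    x: (n_elem, n_vox) per-element reconstructions (voxels flattened)
    Z: (n_elem,) atomic numbers; b_H: HAADF data; b: (n_elem, n_meas) chem data
    """
    zx = Z[:, None] * x
    s = (zx ** gamma).sum(axis=0)              # Z-weighted sum over elements

    # Gradient of Psi_1 = 0.5 * ||A_h s - b_H||^2, chained through x (Eq. 2)
    back = A_h.T @ (A_h @ s - b_H)             # (n_vox,)
    g1 = gamma * zx ** (gamma - 1) * Z[:, None] * back

    # Gradient of the Poisson term Psi_2, with point-wise division (Eq. 3)
    fwd = A_c @ x.T                            # (n_meas, n_elem)
    g2 = ((fwd - b.T) / (fwd + eps)).T @ A_c   # (n_elem, n_vox)

    x = np.maximum(x - step * (lam1 * g1 + lam2 * g2), 0.0)
    return tv_denoise(x)                       # placeholder TV proximal step
```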
This initialization is an ideal starting point as it is a local minimizer of \(\Psi_{2}\). Smooth and asymptotic decay of all three terms in Eq. 1 is an indicator of a reliable reconstruction. The final 3D HAADF and multi-modal chemical volumes were rendered using the Tomviz platform (tomviz.org) [56].

### Multi-Modal Simulations and Bayesian Hyperparameter Optimization

To demonstrate the functionality of our fused multi-modal electron tomography algorithm, we created a multi-channel phantom specimen inspired by an experimental system. The phantom consists of four channels, which we attribute to the crystal stoichiometry of CuO, CoO, and Au (Fig. 4c), with a volume size of \(256^{3}\). The HAADF intensity is proportional to \(\sum_{i}(Z_{i}x_{i})^{\gamma}\), where \(x_{i}\) reflects the element's stoichiometry. To produce chemical maps with realistic noise characteristics, we set the background (vacuum) to roughly 15% of the maximum intensity and subsequently applied Poisson noise to meet the desired SNR. For a Poisson-limited signal, each synthetic image has an SNR of \(\mu_{s}^{2}/\sigma_{N}^{2}\), where \(\mu_{s}\) is the mean signal and \(\sigma_{N}^{2}\) is the variance of the noise [34]. In the case of Figure 4, the SNRs of the Co, Cu, O, Au, and HAADF modalities were 1.92, 2.89, 2.69, 1.96, and 2208.67, respectively. Prior to measuring the NRMSE of the reconstructed volumes, the chemical distributions were normalized to zero mean and unit standard deviation. The NRMSE expresses a normalized measure of agreement between the reconstructed (\(\mathbf{x}\)) and ground truth (\(\mathbf{y}\)) volumes: \(\sqrt{\frac{\sum_{i,j,k}(\mathbf{y}_{i,j,k}-\mathbf{x}_{i,j,k})^{2}}{\sum_{i,j,k}(\mathbf{y}_{i,j,k})^{2}}}\). While the HAADF SNR may be high, we found the NRMSE reliably converges when the SNR is above 50 (Supplementary Fig. 3).

Determining optimal regularization parameters for the phase diagram (Fig. 4a) is computationally expensive because of their variability across sampling conditions. While grid search could find the best parameters by exhaustively exploring all candidate values, the computation time would be prohibitive: each map would take approximately 125 days to complete on a single GPU. We efficiently explored the parameter space with Bayesian optimization (BO)--a machine learning framework known for optimizing expensive unknown objective functions with minimal evaluations [57, 58]. It works by building a probabilistic model of the objective function with Gaussian process (GP) regression. The GP not only estimates the function of interest but also provides uncertainty estimates to guide future predictions. BO takes past evaluations into account when determining future hyperparameter selections via an acquisition function [59]. For our simulations, we carried out BO with GP regression in Python with the Scikit-Optimize library (scikit-optimize.github.io/stable), using the Matern kernel and the GP Hedge acquisition strategy [60]. By exploiting BO with GP, we are able to provide an atlas of balanced hyperparameters for Eq. 1 for the CoNiO and CoCuO synthetic datasets (Supplementary Figs. 3-4). The estimated parameter landscape is smooth and continuous with a clear global optimum. Asynchronous parallel BO on supercomputing resources allowed us to efficiently run several reconstructions simultaneously on a single node.
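A sketch of this hyperparameter search with Scikit-Optimize is shown below. The objective wraps a hypothetical `run_fused_reconstruction` helper that reconstructs the phantom with the given weights; the search ranges are illustrative assumptions.

```python
import numpy as np
from skopt import gp_minimize
from skopt.space import Real

def nrmse(x, y):
    """Normalized RMSE between a reconstruction x and its ground truth y."""
    return np.sqrt(np.sum((y - x) ** 2) / np.sum(y ** 2))

def objective(weights):
    lam1, lam2, lam_tv = weights
    recons = run_fused_reconstruction(lam1, lam2, lam_tv)  # hypothetical helper
    return float(np.mean([nrmse(r, g) for r, g in zip(recons, ground_truth)]))

result = gp_minimize(
    objective,                                          # average NRMSE
    dimensions=[Real(1e-6, 1e-1, prior="log-uniform"),  # lambda_1
                Real(1e-2, 1e1, prior="log-uniform"),   # lambda_2
                Real(1e-4, 1e0, prior="log-uniform")],  # lambda_TV
    acq_func="gp_hedge",   # GP Hedge acquisition strategy
    n_calls=50, random_state=0)
print(result.x, result.fun)   # best weights and their average NRMSE
```

Scikit-Optimize's Gaussian-process surrogate uses a Matern kernel by default, matching the setup described above.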
This form of asynchronous parallelism resulted in a several-fold computational speed-up, as multiple GPUs received unique experimental parameters (e.g., SNR or sampling) to reconstruct concurrently. Specifically, the computation time to generate an NRMSE map was reduced by 99.8%--taking less than a day (18 hours) to complete. In total, 3,452 GPU hours were used to complete these simulations: 1,078 hours on Summit (OLCF) and 1,078 hours on ThetaGPU (ALCF) for the phase diagrams (Fig. 4 and Supplementary Fig. 3). An additional 1,296 GPU hours on Summit were used to produce the SNR plots (Supplementary Fig. 3).

### Code Availability

All of the multi-modal electron tomography reconstruction and iterative alignment codes are available at github.com/jtschwar/tomo_TV and github.com/jtschwar/projection_refinement. A sample Jupyter notebook outlining the fused multi-modal reconstruction on the Cu-SiC and Au-Fe\({}_{3}\)O\({}_{4}\) material systems will be available in the tomo_TV repository.

## Data Availability

The raw and aligned Au-Fe\({}_{3}\)O\({}_{4}\), Co\({}_{3}\)O\({}_{4}\)-Mn\({}_{3}\)O\({}_{4}\), and Cu-SiC tilt series will be available in a Zenodo repository.
2308.03081
Using Overlapping Methods to Counter Adversaries in Community Detection
When dealing with large graphs, community detection is a useful data triage tool that can identify subsets of the network that a data analyst should investigate. In an adversarial scenario, the graph may be manipulated to avoid scrutiny of certain nodes by the analyst. Robustness to such behavior is an important consideration for data analysts in high-stakes scenarios such as cyber defense and counterterrorism. In this paper, we evaluate the use of overlapping community detection methods in the presence of adversarial attacks aimed at lowering the priority of a specific vertex. We formulate the data analyst's choice as a Stackelberg game in which the analyst chooses a community detection method and the attacker chooses an attack strategy in response. Applying various attacks from the literature to seven real network datasets, we find that, when the attacker has a sufficient budget, overlapping community detection methods outperform non-overlapping methods, often overwhelmingly so. This is the case when the attacker can only add edges that connect to the target and when the capability is added to add edges between neighbors of the target. We also analyze the tradeoff between robustness in the presence of an attack and performance when there is no attack. Our extensible analytic framework enables network data analysts to take these considerations into account and incorporate new attacks and community detection methods as they are developed.
Benjamin A. Miller, Kevin Chan, Tina Eliassi-Rad
2023-08-06T10:23:32Z
http://arxiv.org/abs/2308.03081v1
# Using Overlapping Methods to Counter Adversaries in Community Detection

###### Abstract

When dealing with large graphs, community detection is a useful data triage tool that can identify subsets of the network that a data analyst should investigate. In an adversarial scenario, the graph may be manipulated to avoid scrutiny of certain nodes by the analyst. Robustness to such behavior is an important consideration for data analysts in high-stakes scenarios such as cyber defense and counterterrorism. In this paper, we evaluate the use of overlapping community detection methods in the presence of adversarial attacks aimed at lowering the priority of a specific vertex. We formulate the data analyst's choice as a Stackelberg game in which the analyst chooses a community detection method and the attacker chooses an attack strategy in response. Applying various attacks from the literature to seven real network datasets, we find that, when the attacker has a sufficient budget, overlapping community detection methods outperform non-overlapping methods, often overwhelmingly so. This is the case when the attacker can only add edges that connect to the target and when the capability is added to add edges between neighbors of the target. We also analyze the tradeoff between robustness in the presence of an attack and performance when there is no attack. Our extensible analytic framework enables network data analysts to take these considerations into account and incorporate new attacks and community detection methods as they are developed.

## 1 Introduction

Community detection is an important analytic tool in large graph analysis. When graphs get very large, community detection provides a coarser-grained view of the graph than considering the nodes alone, and can greatly assist network analysts in identifying interesting portions of the graph that require deeper inspection. This makes community detection useful for data triage. In some applications, we may have data that has been manipulated by an adversary to point the data analyst in the wrong direction. An infected node in a computer network, for example, may want to avoid being grouped together with other infected nodes, given the risk of having all nodes discovered based on one initial cue to the analyst. An adversary would, in this context, want to spread the relevant nodes across communities so they are not concentrated within any particular highly connected subgraph. This degrades the utility of community detection, rendering it ineffective for the user.

The adversary's goal becomes more complicated, however, if the analyst applies overlapping community detection [36]. In this case, nodes may be assigned to multiple communities, and joining a new community may be insufficient to disassociate from a community that may attract attention. The ability to keep a node within an interesting community has the potential to provide robustness to the analyst when attempting to uncover subnetworks of interest.

In this paper, we evaluate the use of several overlapping community detection methods in the presence of a targeted adversarial attack. We follow the formulation of Kegelmeyer et al. [17] in which nodes have a measure of interestingness, and the target node wants to avoid being connected to a more interesting community. We formulate the problem as a Stackelberg game in which the defender leads by choosing one of several community detection algorithms and the attacker follows by choosing an attack in response.
We find that overlapping community detection methods significantly outperform non-overlapping methods when measured by the target node's position in the ordered list of communities. This is true even when the adversary's capability is expanded to include the ability to introduce edges between neighbors.

### Scope and Contributions

This paper considers the case in which there is a single target node and the adversary's goal is to cause the network analyst to deprioritize it. The attacker is able to add edges between the target node and other nodes to which it is not currently connected. (We also consider a more capable attacker that is able to create edges between pairs of its neighbors.) As in the work of Kegelmeyer et al. [17], each node has a "temperature" value denoting its level of interest. Nodes in some cases have attributes, represented by binary vectors. In cases where the nodes come from various classes, we use this information to assign temperatures. The analyst and the attacker both have full visibility of the nodes, edges, and attributes (including temperature), but not their classes. The attacker is able to query the community detection method used by the analyst and obtain the resulting communities, but does not necessarily know the underlying algorithm. The graph consists of a fixed set of nodes, and edges are only manipulated by the attacker, i.e., we are not considering a dynamic graph with a varying topology.

The paper's contributions are as follows:

* We adapt several community detection attacks for use in the current context, in which a single vertex wants to evade its initial community and the community detection method may yield overlapping communities.
* We formulate an additional attack that allows the adversary to attempt to create a new community by forming connections between new neighbors (those obtained during the attack) and old ones (neighbors of the target before the attack).
* We model the defender's optimization as a Stackelberg game with the defender as the leader and the attacker as the follower.
* We demonstrate on seven real network datasets that overlapping community detection is much more robust to adversarial manipulation than non-overlapping community detection, including the case of the more capable adversary that can introduce connections among neighbors.
* We analyze the tradeoff between robustness to attack and performance when no attack is present.

### Paper Organization

The remainder of this paper is organized as follows. Section 2 briefly summarizes related work on attacks against community detection. Section 3 provides details on the problem model and formulates the Stackelberg game in which the attacker chooses an attack with respect to the analyst's chosen community detection method. This section also summarizes the detection methods and attacks that we consider in our experiments. In Section 4, we present the results of the Stackelberg game on seven real network datasets, and in Section 5 we summarize our findings and outline future work.

## 2 Related Work

As attacking machine learning algorithms on graphs has become an active area of research [42, 37, 16, 24], interest in attacking unsupervised learning applications such as community detection has increased. The objectives in the various studies, however, vary considerably. Nagaraja considers a case where an adversary is attempting to de-anonymize a communication network based on community structure, and nodes within the network create new communications to avoid detection [25].
A heuristic-based edge rewiring algorithm to hide a specific community is proposed in [32]. Fionda and Pirro introduce the concept of "safeness"--a measure of how concentrated a node's neighbors are in the node's community--and propose algorithms to reduce the modularity and safeness of a target community by adding and removing edges [10]. Another potentially important aspect is the "persistence" of a community, i.e., the extent to which a community is discovered across multiple attempts at community detection, which is exploited to attack community detection in [18]. Li et al. propose a framework that includes building a generator trained to create similar graphs while obscuring the target community [20]. In one paper, the goal is to destroy the community structure so that the communities identified by various algorithms have low modularity or normalized mutual information; this is achieved using a genetic algorithm [7]. Other work considers attacks against a vertex classifier, evaluated with respect to the output of community detection algorithms [9].

Other work considers individual nodes that do not want to be part of a given community, which is the focus of this paper. Kegelmeyer et al. consider such a scenario, in which a subset of nodes have a "temperature" indicating how interesting they are, and the adversary's goal is that the target node not be grouped with interesting ("hot") nodes [17]. Chen et al. consider a genetic algorithm that handles target nodes as well as target communities or the overall community structure of the graph, with the fitness function varying depending on the target [8]. While most work has focused on targets that do not want to be included within a community, there is also the issue of identifying non-community members who attempt to join a community [15].

In addition to attacks against community detection specifically, there has been recent related work focused on attacking graph embeddings [4, 6]. While there are numerous ways to embed vertices into real-valued space--including features based on the nodes' neighborhoods [14] and random-walk-based methods [13]--methods based on random walks in particular have implications for community detection: when there is community structure in a graph, nodes in the same community are located near each other in the embedded space [5, 31]. Thus, we consider attacks against embeddings as a potential attack against community detection, as we discuss in Section 3.2.

Several papers suggest applying their proposed community detection attacks to overlapping methods, but few have provided any results in this context. One recent paper considers an attack based on a modified degree centrality to make a target node part of precisely one community when applying an overlapping community detection algorithm [23]. We consider this, and other attacks that focus on a single node, in a modified context in which edges can only be added.

We formulate the analyst's optimization as a Stackelberg game, a technique that has been applied in other adversarial graph analysis contexts as well. Recent examples include centrality ranking [34, 33], link prediction methods [40], critical infrastructure defense [22], wireless communication [21], and privacy preservation [39]. Our present work extends this literature to the area of adversarial community detection.

## 3 Problem Model

We follow the problem model defined in [17].
We are given a graph \(G=(V,E)\), where \(V\) is a set of vertices and \(E\) is a set of edges. Each vertex has a feature called a "temperature," which we observe as either "hot," "cold," or "unknown." The temperature is quantified by a function \(T:V\rightarrow\{-1,0,1\}\) such that

\[T(v)=\begin{cases}-1&\text{$v$ is cold}\\ 0&\text{$v$'s temperature is unknown}\\ 1&\text{$v$ is hot}\end{cases}\]

In this scenario, an analyst is determining which vertices in the network require deeper analysis. Rather than analyze all hot vertices (the interesting nodes), whose number may be overwhelmingly large, the analyst breaks the graph into \(k\) communities \(C_{i}\subset V\) such that \(\bigcup_{i=1}^{k}C_{i}=V\). If communities are disjoint (non-overlapping), then \(C_{i}\cap C_{j}=\emptyset\) for \(i\neq j\). Communities are then ranked according to their average temperature, i.e., community \(C_{i}\)'s score is

\[T(C_{i})=\frac{1}{|C_{i}|}\sum_{v\in C_{i}}T(v). \tag{1}\]

The set of all communities is denoted by \(\mathcal{C}=\{C_{i}|1\leq i\leq k\}\). The analyst's goal is to find communities that warrant attention, prioritizing the nodes in hot communities.

We consider the case where there is a vertex attempting to evade the analyst's attention. The vertex is able to create new links (edges), but not delete existing ones. The objective of such an adversarial vertex is to lower the temperature of the hottest community to which it is assigned. That is, the vertex \(v\in V\) adds edges to \(E\) with the objective of minimizing \(T(C_{i})\) from (1), where \(v\in C_{i}\). We refer to this objective as \(T_{\text{comm}}(v)\), the _community temperature_ of \(v\). If successful, the node will be placed into a cold community and avoid further scrutiny.

It is possible, however, to move to a cold community and still be identified. If the adversary's actions result in _all_ communities being relatively cold, for example, being in the highest-temperature community would still result in the adversary being found by the analyst. Thus, we consider an additional metric that accounts for this possibility: the _rank_ of \(v\) with respect to the communities \(\mathcal{C}\), denoted by \(r\) and defined as the number of nodes in the union of communities with temperature at least as high as \(T_{\text{comm}}(v)\):

\[r(v):=\left|\bigcup_{C^{\prime}\in\{C\in\mathcal{C}|T(C)\geq T_{\text{comm}}(v)\}}C^{\prime}\right|. \tag{2}\]

The rank represents the number of nodes the analyst must consider in order to find \(v\). The analyst minimizes the rank of an evading target node using a Stackelberg game formulation.

### Stackelberg Game

We formulate the counter-adversarial community detection problem as a Stackelberg game, where the analyst is the leader and the attacker is the follower. The move each player makes is to select a technique: the analyst chooses a community detection method, then the attacker chooses an attack. The attacker \(v\) will choose whichever attack yields the highest rank (i.e., the lowest priority); that is, it will choose the attack \(E^{\prime}\), a set of edges that do not exist in the initial graph, to solve

\[\hat{E}_{a}= \operatorname*{arg\,max}_{E^{\prime}\subset\{\{v,u\}|u\in V\}\setminus E}r_{\mathcal{C}}(v) \tag{3}\] \[\text{s.t.}\ |E^{\prime}|\leq b\] (4) \[G_{a}= (V,E\cup E^{\prime})\] (5) \[\mathcal{C}= f_{c}(G_{a}), \tag{6}\]

where \(f_{c}(\cdot)\) is the analyst's chosen community detection method. Each player is fully aware of the other's capability and of the methods (attacks and defenses) that can be used.
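For concreteness, a minimal sketch of the analyst's scoring in Eqs. (1) and (2) is shown below, assuming communities are represented as sets of nodes and `temp` maps each node to \(-1\), \(0\), or \(1\):

```python
def community_temperature(C, temp):
    """Average temperature of community C (Eq. 1)."""
    return sum(temp[v] for v in C) / len(C)

def rank(v, communities, temp):
    """Number of nodes the analyst inspects before reaching v (Eq. 2)."""
    # Community temperature of v: its hottest community (overlapping
    # methods may place v in several communities)
    t_comm = max(community_temperature(C, temp) for C in communities if v in C)
    # Union of all communities at least as hot as v's hottest community
    covered = set()
    for C in communities:
        if community_temperature(C, temp) >= t_comm:
            covered |= set(C)
    return len(covered)
```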
The adversary considers each attack strategy listed in Section 3.3 and measures the rank for all attack sizes from \(0\) to \(b\). Of all the attacks, the one that yields the highest rank is chosen. For a given attack strategy \(f_{a}\), denote this procedure by

\[E^{\prime}=f_{a}(v,G,f_{c},b). \tag{7}\]

When choosing the best attack for a given target, the adversary maximizes the rank across all strategies, solving

\[\hat{E}^{\prime}= \operatorname*{arg\,max}_{f_{a}\in A}r_{\mathcal{C}_{f_{a}}}(v) \tag{8}\] \[\text{s.t.}\ E^{\prime}_{f_{a}}= f_{a}(v,G,f_{c},b)\] (9) \[G_{f_{a}}= (V,E\cup E^{\prime}_{f_{a}})\] (10) \[\mathcal{C}_{f_{a}}= f_{c}(G_{f_{a}}), \tag{11}\]

where \(A\) is the set of attack strategies. The result of this procedure is denoted by

\[E_{\text{max}}=f_{\text{max}}(v,G,f_{c},b,A). \tag{12}\]

The analyst's goal is to be as robust as possible to such an attack, which we define as _minimizing_ the target's rank afterward. The expected rank is the analyst's cost function, and the optimization takes place over all available community detection methods. For each community detection method and each potential target, the analyst evaluates the worst-case rank over all possible attack strategies. Given a set of candidate targets \(V_{t}\), the community detection method selection process is formalized as follows:

\[\hat{f}_{c}= \operatorname*{arg\,min}_{f_{c}\in D}\frac{1}{|V_{t}|}\sum_{v\in V_{t}}r_{\mathcal{C}_{f_{c},v}}(v) \tag{13}\] \[\text{s.t.}\ \mathcal{C}_{f_{c},v}= f_{c}(G^{\prime}_{f_{c},v})\] (14) \[G^{\prime}_{f_{c},v}= (V,E\cup E^{\prime}_{f_{c},v})\] (15) \[E^{\prime}_{f_{c},v}= f_{\max}(v,G,f_{c},b,A). \tag{16}\]

Here \(D\) is the set of community detection methods available to the analyst, and the objective is the expected rank of the target node, assuming each candidate target is equally likely. Note that the attacker takes the community detection method into account when performing the attack, so the defender must consider all attack-defense pairings when performing the optimization.

### Community Detection Methods

Within the Stackelberg game, the analyst (leader) chooses among six community detection methods. The non-overlapping methods make use of the _modularity_ metric [26], i.e.,

\[Q:=\frac{1}{2|E|}\sum_{v\in V}\sum_{u\in V}\left[\mathbb{I}(u\leftrightarrow v)-\frac{1}{2|E|}k_{u}k_{v}\right]\mathbb{I}(C(u)=C(v)), \tag{17}\]

where \(k_{v}\) is the degree (number of connections) of vertex \(v\), \(C(v)\) is the community of \(v\) (i.e., \(C(v)=i\) if \(v\in C_{i}\)), and \(\mathbb{I}\) is the indicator function, which resolves to \(1\) if its argument is true and to \(0\) otherwise. The notation \(u\leftrightarrow v\) evaluates to true only if \(u\) and \(v\) share an edge and to false otherwise. Modularity measures the difference between the observed number of edges within communities and the expected number of such edges if they were randomly rewired.

The following community detection methods are used in the experiments.

* _Louvain_ (LV): A greedy algorithm to maximize modularity (or another quality metric) [3]. Starting with each node in its own community, iteratively move nodes to join communities of their neighbors if doing so increases partition quality. Once no increase in quality is possible, create a new network where each community from the previous step is a node, and edges from the original graph become multi-edges (or self-loops when the nodes are in the same community). Apply the same procedure to the new graph.
Continue until there is no change in partition quality.
* _Leiden_ (LD): Follow a similar procedure to the Louvain algorithm, but with a "refinement" step before aggregation that ensures all communities are well connected [30].
* _Clique Percolation_ (CP): Create a new graph where each node is a \(k\)-clique from the original graph. Two nodes share an edge if the corresponding cliques from the original graph share \(k-1\) nodes. Communities are defined by the connected components of the new graph. We use the implementation from Reid et al. in our experiments [28].
* _Hierarchical Link Clustering_ (HLC): For each pair of edges that share a node, compute the edge similarity as the Jaccard coefficient of the neighborhoods of the connected nodes, i.e., the similarity of edges \(e_{ik}\) and \(e_{jk}\) is \[\frac{|N(i)\cap N(j)|}{|N(i)\cup N(j)|},\] where \(N(i)\) is the neighborhood of node \(i\), which includes \(i\). Perform hierarchical clustering based on this similarity metric; communities are determined by the resulting clusters [1].
* _Union of Maximum Spanning Trees Method_ (UMST): Compute the union of all maximum spanning trees [27], using the Jaccard coefficient of the nodes' neighborhoods as edge weights. Create a community around each node consisting of the triangles in the node's neighborhood within the UMST, then merge communities with substantial overlap [2].
* _Neural Overlapping Community Detection_ (NOCD): Train a graph neural network (GNN) that outputs the parameters of a Bernoulli-Poisson model [41], where the probability of an edge existing between nodes \(i\) and \(j\) is given by \[\Pr(i\leftrightarrow j)=1-\exp\left(-\mathbf{x}_{i}^{\top}\mathbf{x}_{j}\right).\] Here \(\mathbf{x}_{i}\) is a vector indicating community membership and is the output of the GNN [29].

The analyst's goal is to choose a method that will perform best in the presence of an adversarial attack, i.e., one under which the adversary remains among the hottest communities and has a relatively small rank.

### Attacks

We assume the adversary has a budget \(b\) denoting the number of new links that can be created. The adversary may choose any of the following attacks.

* _Cold and Lonely_ (C&L): First connect to cold nodes, then unknown nodes, then hot nodes. Within a temperature, order nodes in increasing order of degree (i.e., connect to nodes with few connections first, many connections later) [17].
* _Stable Structure_ (SS): Run community detection several times (which may give different results each time). If two nodes are in the same community every time, they are part of a "stable structure." Find all stable structures and order them in increasing order of temperature. Connect to nodes in each stable structure in this order (in random order within a stable structure). Finally, connect to the remaining nodes (those in no stable structure) in increasing order of temperature, breaking ties randomly [17]. Note that when using an overlapping community detection method, the stable structures may also overlap.
* _Embedding Attack_ (Emb): This attack was developed to attack node embeddings, which can be used to attack community detection [4].
The attack aims to modify the edge set to _maximize_ the loss that the node embedding algorithm is trying to minimize, i.e., to solve \[E^{*}=\operatorname*{arg\,max}_{\hat{E}}\mathcal{L}(V,\hat{E},Z^{*})\quad Z^{*}=\operatorname*{arg\,min}_{Z}\mathcal{L}(V,\hat{E},Z)\] (18) subject to \(|\hat{E}\cup E|-|\hat{E}\cap E|\leq\Delta E\), where \(Z:V\rightarrow\mathbb{R}^{d}\) is the \(d\)-dimensional embedding being learned and \(\Delta E\) is the number of edges that can be added or removed by the adversary. The authors use a random-walk-based embedding, where \(\|Z(u)-Z(v)\|\) is made smaller the more frequently random walks starting at \(u\) reach \(v\) or vice versa. We consider a version of this attack where no edges are removed and edges are only added if they connect the target to new neighbors.
* _Evolutionary Perturbation Attack_ (EPA): A genetic algorithm with various modes of operation, attacking overall community detection performance, targeting specific communities for disruption, or targeting a specific node [8]. As in the work of Kegelmeyer et al., the mode in which a specific node is targeted only considers new edges connected to the target. The "genes" are sets of edges to add, and the fitness function is the ratio of the target's degree before the attack to its degree after the attack. (The fitness is zero if the attack is not successful in moving the target from its initial community.) Genes are selected for subsequent rounds by roulette sampling with probability proportional to their fitness. Genes (attacks) are combined by maintaining their common edges and randomly selecting edges not common to both. Finally, genes mutate by adding new edges to the attack with probability proportional to the pre-attack distance between their endpoints. The user specifies the number of reproduction rounds and the rates of combination and mutation.
* _Based Importance Hiding_ (BIH): An attack specifically designed for overlapping community detection, in a context different from ours [23]. The goal of this method is to take a node that is initially part of several communities and remove it from all but one. It chooses edges to add or remove based on "degree importance," which the authors define with respect to a target node \(v\) and a community \(C\) as \[I(v,C):=\frac{\left(\sum_{u\in N_{C}^{v}}|N_{C}^{v}\cap N_{C}^{u}|\right)(\deg(v)-1)}{\deg(v)},\] where \(N_{C}^{v}\) is the set of the neighbors of \(v\) in community \(C\). High-importance edges are added to the community in which the target wants to remain, and removed between the target and communities from which it wants to disassociate. We consider a version of this attack that only adds edges and attempts to join the community with which the target has the most non-neighbors (i.e., the most nodes to which a new connection can be established). After connecting to all nodes in the chosen community, the attacker selects another community, continuing until the budget is depleted.
* _Modularity-Based Attack_ (Mod): A baseline method that creates a new edge from the target node to the community that, if the node were to move to that community, would yield the greatest modularity.
* _Stable Structure-Introduce Neighbors_ (SS-Nbr): We consider one attack that expands the adversary's capability, adding the capacity to create new edges between neighbors.
This attack follows the same procedure as SS, but after the target connects to each new neighbor, the new neighbor is also connected to the target's initial set of neighbors (i.e., the neighbors it had in the original graph). This creates the possibility of a new community between the target's new and old neighbors that may dissolve the importance of the target's original community.

Since the attacker moves second, the community detection method is fixed, and the attack optimization takes place with respect to \(\hat{f}_{c}\) in (13). Some strategies (i.e., C&L, Emb, and Mod) do not consider the specific community detection method while generating perturbations; the attacker only uses the specified method to identify the perturbation that increases the target's rank to the greatest value. The other attacks explicitly use the chosen community detection method when generating perturbations, taking into account the community structure presented to the analyst as they determine which edges to add, in addition to determining which perturbation to use after incrementing up to the attack budget.

## 4 Experiments

### Datasets

We use seven datasets commonly used in the adversarial graph analysis literature:

* _CiteSeer_: A network of 3312 scientific publications put into 6 classes based on subject area, with 4732 links representing citations. Each node has a 3703-dimensional binary attribute vector, where each entry represents the presence or absence of a word in the paper.
* _Cora_: Another citation network, consisting of 2708 machine learning papers labeled with one of seven categories. The citation network consists of 5429 citations, and each node has a 1433-dimensional binary attribute vector, indicating word presence as with CiteSeer.
* _American College Football_ (football): A network of 115 nodes representing US college football teams, with 1231 edges indicating which teams played each other during the Fall 2000 season [12]. Each node has a label indicating the conference (out of 12 possible) to which the team belongs.
* _Western US Power Grid_ (grid): Includes 4941 nodes in the electrical power grid of the western United States, with 6594 edges representing power lines between them [35].
* _Network Science Coauthorship_ (netsci): A network of 379 network scientists (in the largest connected component) with 914 edges representing coauthorship of articles [26].
* _Eu-Email core_ (email): An email network of a large European research institution, with 1005 nodes representing users and 25571 directed edges denoting which users emailed others [38, 19]. Nodes are labeled with one of 42 departments.
* _Abu Sayyaf Group_ (ASG): A network of the Abu Sayyaf Group, a violent non-state Islamist group operating in the Philippines [11]. Each node is a member of ASG, and two nodes are linked if the members both participated in at least one of 105 kidnapping events between 1991 and 2011. The largest connected component in this graph has 207 nodes and 2550 edges.

Links to the datasets are available in Appendix A.

### Target Selection and Temperature Assignment

We select 10 targets from each dataset. To identify these nodes, we compute the stable structures of the graph using 20 trials with the Louvain method (a sketch of this computation follows below). We then consider all stable structures that are _homogeneous_, i.e., in which all nodes within the structure have the same ground-truth label. For networks without labels (i.e., grid, netsci, and ASG), any stable structure can be used, and the label is taken to be membership in that stable structure.
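A sketch of the stable-structure computation referenced above, using networkx and the python-louvain package:

```python
import networkx as nx
import community as community_louvain  # python-louvain package

def stable_structures(G, n_trials=20):
    """Groups of nodes placed in the same community in every Louvain run."""
    partitions = [community_louvain.best_partition(G) for _ in range(n_trials)]
    # Two nodes share a community in every trial exactly when their
    # per-trial community-label tuples are identical
    groups = {}
    for v in G.nodes():
        signature = tuple(p[v] for p in partitions)
        groups.setdefault(signature, set()).add(v)
    return [g for g in groups.values() if len(g) > 1]
```

Note this sketch assumes a non-overlapping base method, as in our target selection; with an overlapping method, the co-membership test must instead be applied pairwise.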
Among the nodes with the same label as the target, temperatures are assigned with probabilities \(\Pr(\text{hot})=0.3\), \(\Pr(\text{cold})=0.1\), and \(\Pr(\text{unknown})=0.6\). For any other label, temperatures are assigned with the probabilities of hot and cold reversed.

### Results

We show highlights of the experimental results in Figure 1. The performance of the stable structure attack on the football and email datasets is fairly typical across experiments: the attacks are rather effective against the Louvain and Leiden methods, and somewhat effective against UMST. The attacks are much less effective against CP, with the exception of the email data, where the target's community begins with low temperature and low priority. NOCD, which can use additional side information to make inferences about community structure, also tends to be robust in the face of the attacks. HLC, for the most part, retains a high temperature and a relatively low rank (high priority) for the target's community. Plots for all datasets are included in Appendix B.

We observed one exception to HLC's robustness to attack, which is included in the figure: when the attacker uses our variant of BIH, it is effective against HLC on the email data. Investigating this phenomenon further, we noted that BIH tends to break up the target's community and add some of its new neighbors, slightly diluting the concentration of the original community members and reducing the temperature. BIH's focus on the amount of neighborhood overlap seems to work to its advantage in this relatively low-modularity network.

There is also one example in which the attack fails regardless of the number of perturbations, typically yielding a result that is counterproductive to the attacker: the Cora dataset with the modularity-based attack. In this case, the target tends to move to a new community, but one with the same label, and thus the same temperature distribution, which does not improve the target's position. Emb performs much better, likely because nodes from other classes tend to be farther away in the embedding space than those with the same class. While BIH also does not consider temperature, it focuses on lack of overlap between communities rather than modularity maximization, so it is more likely to cause a target to move to a community with a different modal label, and thus a lower temperature.

We present the attacker's average best rank across all dataset/attack/community detection combinations in Table 1. Given the user's community detection method, the attacker chooses the attack that minimizes the priority of the target node (i.e., maximizes its rank). Considering the result of the attacker's choice across community detection methods, the user selects the method that results in the highest priority for the target. The top two methods for each dataset are identified in the table.

A few things stand out in the table. First, stable structure is usually the best choice for the attacker, followed by C&L. Even in cases where the temperature assignment is driven entirely by community structure, these methods that use the temperature information tend to outperform the other methods. Even when restricted to attack strategies that do not consider temperature, however, the overlapping methods almost always outperform LV and LD. (The exception is CP on email, where the attacker's best option is to add no edges, as shown in Figure 1.)
When choosing the community detection method that minimizes the attacker's average rank (i.e., makes the target higher priority for the analyst), the two non-overlapping methods always perform worst. HLC is always present in the top two, with the other being either CP or NOCD. UMST is consistently superior to the non-overlapping methods, but typically underperforms with respect to the other overlapping methods. Looking more deeply into the results, we note that UMST is more likely than other overlapping methods to put a node into a single community, which may hinder its performance in this particular task.

Figure 1: Highlights of attacks applied to various community detection methods. Results are shown in terms of the maximum average temperature across the target node's communities (left column) and the target node's rank in the ordering by community temperature (right column), as the number of edge additions increases from 0 to 50. Higher target node rank is better for the attacker; worse for the analyst. Typical performance is shown in the case of football and email attacked with SS (first and second row, respectively), where we see a substantial change using non-overlapping methods and a more gradual, or even negligible, change using the overlapping methods. Exceptional cases, where the BIH attack is effective against HLC and where the Mod attack does not help the attacker, are shown in the third and fourth rows, respectively. In the typical case, overlapping methods yield less margin for improvement for the attacker, and will be preferred by the analyst.

\begin{table}
\begin{tabular}{c c c c c c c} \hline
dataset & CD & C\&L & SS & Emb & Mod & BIH \\ \hline
football & LV & \(49\pm 4\) & \(\mathbf{105\pm 1}\) & \(46\pm 5\) & \(81\pm 2\) & \(82\pm 2\) \\
football & LD & \(48\pm 4\) & \(\mathbf{105\pm 1}\) & \(39\pm 4\) & \(70\pm 3\) & \(78\pm 3\) \\
football & CP\({}^{2}\) & \(24\pm 3\) & \(\mathbf{33\pm 2}\) & \(18\pm 1\) & \(23\pm 2\) & \(29\pm 5\) \\
football & HLC\({}^{1}\) & \(23\pm 2\) & \(\mathbf{29\pm 2}\) & \(20\pm 2\) & \(26\pm 2\) & \(25\pm 2\) \\
football & UMST & \(53\pm 4\) & \(72\pm 5\) & \(49\pm 6\) & \(65\pm 5\) & \(\mathbf{85\pm 3}\) \\
football & NOCD & \(\mathbf{43\pm 2}\) & \(\mathbf{43\pm 3}\) & \(36\pm 2\) & \(31\pm 2\) & \(31\pm 1\) \\ \hline
netsci & LV & \(166\pm 17\) & \(\mathbf{367\pm 5}\) & \(193\pm 15\) & \(218\pm 7\) & \(232\pm 8\) \\
netsci & LD & \(209\pm 16\) & \(\mathbf{361\pm 10}\) & \(187\pm 22\) & \(224\pm 16\) & \(226\pm 9\) \\
netsci & CP\({}^{2}\) & \(46\pm 5\) & \(\mathbf{52\pm 6}\) & \(41\pm 5\) & \(40\pm 6\) & \(37\pm 5\) \\
netsci & HLC\({}^{1}\) & \(\mathbf{49\pm 5}\) & \(41\pm 4\) & \(41\pm 5\) & \(48\pm 5\) & \(46\pm 3\) \\
netsci & UMST & \(87\pm 11\) & \(\mathbf{168\pm 21}\) & \(111\pm 17\) & \(75\pm 12\) & \(115\pm 17\) \\
netsci & NOCD & \(\mathbf{172\pm 15}\) & \(156\pm 13\) & \(87\pm 4\) & \(104\pm 8\) & \(116\pm 8\) \\ \hline
email & LV & \(\mathbf{986\pm 0}\) & \(902\pm 22\) & \(249\pm 33\) & \(613\pm 44\) & \(659\pm 18\) \\
email & LD & \(\mathbf{978\pm 5}\) & \(949\pm 11\) & \(149\pm 24\) & \(642\pm 30\) & \(603\pm 17\) \\
email & CP & \(750\pm 53\) & \(758\pm 46\) & \(742\pm 61\) & \(767\pm 38\) & \(\mathbf{774\pm 30}\) \\
email & HLC\({}^{1}\) & \(157\pm 19\) & \(197\pm 48\) & \(183\pm 33\) & \(183\pm 24\) & \(\mathbf{284\pm 29}\) \\
email & UMST & \(551\pm 73\) & \(521\pm 99\) & \(393\pm 61\) & \(545\pm 31\) & \(\mathbf{607\pm 21}\) \\
email & NOCD\({}^{2}\) & \(\mathbf{438\pm 36}\) & \(403\pm 50\) & \(251\pm 36\) & \(309\pm 29\) & \(284\pm 50\) \\ \hline
Cora & LV & \(\mathbf{2367\pm 159}\) & \(2104\pm 127\) & \(1988\pm 177\) & \(280\pm 19\) & \(1974\pm 81\) \\
Cora & LD & \(\mathbf{2600\pm 118}\) & \(2129\pm 158\) & \(1867\pm 173\) & \(229\pm 17\) & \(2122\pm 42\) \\
Cora & CP\({}^{1}\) & \(256\pm 118\) & \(\mathbf{277\pm 117}\) & \(257\pm 118\) & \(252\pm 118\) & \(254\pm 118\) \\
Cora & HLC\({}^{2}\) & \(394\pm 102\) & \(\mathbf{426\pm 105}\) & \(319\pm 80\) & \(308\pm 77\) & \(328\pm 53\) \\
Cora & UMST & \(1608\pm 220\) & \(\mathbf{2221\pm 129}\) & \(1119\pm 128\) & \(538\pm 112\) & \(1274\pm 142\) \\
Cora & NOCD & \(\mathbf{1233\pm 73}\) & \(1221\pm 58\) & \(1120\pm 91\) & \(795\pm 36\) & \(1053\pm 44\) \\ \hline
CiteSeer & LV & \(1116\pm 72\) & \(\mathbf{1607\pm 33}\) & \(686\pm 92\) & \(980\pm 74\) & \(1074\pm 63\) \\
CiteSeer & LD & \(1438\pm 60\) & \(\mathbf{1603\pm 38}\) & \(658\pm 89\) & \(990\pm 91\) & \(1053\pm 83\) \\
CiteSeer & CP\({}^{1}\) & \(331\pm 80\) & \(326\pm 80\) & \(\mathbf{345\pm 78}\) & \(314\pm 84\) & \(315\pm 83\) \\
CiteSeer & HLC\({}^{2}\) & \(\mathbf{404\pm 58}\) & \(329\pm 40\) & \(343\pm 27\) & \(314\pm 35\) & \(342\pm 42\) \\
CiteSeer & UMST & \(729\pm 77\) & \(\mathbf{1328\pm 92}\) & \(435\pm 41\) & \(566\pm 55\) & \(713\pm 95\) \\
CiteSeer & NOCD & \(496\pm 27\) & \(667\pm 63\) & \(463\pm 17\) & \(595\pm 41\) & \(\mathbf{807\pm 62}\) \\ \hline
grid & LV & \(1512\pm 431\) & \(\mathbf{3678\pm 351}\) & \(418\pm 116\) & \(2158\pm 270\) & \(2545\pm 153\) \\
grid & LD & \(2598\pm 336\) & \(\mathbf{4001\pm 448}\) & \(497\pm 156\) & \(2180\pm 251\) & \(2593\pm 160\) \\
grid & CP\({}^{2}\) & \(\mathbf{448\pm 250}\) & \(444\pm 250\) & \(444\pm 250\) & \(434\pm 253\) & \(417\pm 254\) \\
grid & HLC\({}^{1}\) & \(\mathbf{374\pm 26}\) & \(364\pm 37\) & \(318\pm 34\) & \(324\pm 30\) & \(369\pm 22\) \\
grid & UMST & \(2334\pm 546\) & \(\mathbf{3160\pm 430}\) & \(855\pm 87\) & \(1015\pm 118\) & \(1172\pm 174\) \\
grid & NOCD & \(993\pm 57\) & \(1220\pm 104\) & \(892\pm 92\) & \(\mathbf{1525\pm 138}\) & \(1414\pm 147\) \\ \hline
ASG & LV & \(121\pm 15\) & \(\mathbf{188\pm 5}\) & \(157\pm 5\) & \(154\pm 4\) & \(153\pm 4\) \\
ASG & LD & \(129\pm 14\) & \(\mathbf{192\pm 4}\) & \(149\pm 2\) & \(155\pm 4\) & \(150\pm 2\) \\
ASG & CP\({}^{1}\) & \(\mathbf{35\pm 3}\) & \(30\pm 4\) & \(29\pm 2\) & \(32\pm 2\) & \(32\pm 2\) \\
ASG & HLC\({}^{2}\) & \(\mathbf{51\pm 12}\) & & & & \\ \hline
\end{tabular}
\end{table}
Table 1: Average best rank of the target node after attack (mean ± standard error) for each dataset and community detection (CD) method. Bold indicates the attacker's best strategy in each row; superscripts mark the analyst's top two CD methods per dataset.

We also tested the EPA method. Each gene is an attack (a set of edges to add), and we seed the population with attacks created by other methods. The fitness function used is the rank of the target node after the attack is performed. (The fitness function is computed with respect to the analyst's community detection method.) We use a population of 100 and run for 10 generations. While this frequently results in the best attack, it is typically within one standard error of the second best, and is time consuming to compute. We therefore omit these results for brevity, as similar performance is always possible with one of the less computationally expensive attacks, and including EPA never impacts the defender's selection of a community detection method.

In the Stackelberg game, the adversary knows the specific target and will select an attack strategy according to that specific node, not the average performance. Average target rank after attack in this scenario is plotted in Figure 2. While the specific aggregated values differ, the top two performers for the analyst remain the same.
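The two-level selection just described can be read directly off a table of post-attack ranks. Below is a minimal sketch, using the football rows for LV and HLC from Table 1 as sample data; the data structures are hypothetical.

```python
# rank[cd][attack] -> average rank of the target after `attack` is
# applied against community detection method `cd` (cf. Table 1).
rank = {
    "LV":  {"C&L": 49, "SS": 105, "Emb": 46, "Mod": 81, "BIH": 82},
    "HLC": {"C&L": 23, "SS": 29,  "Emb": 20, "Mod": 26, "BIH": 25},
    # ... remaining methods elided ...
}

# Attacker's best response: for a fixed CD method, pick the attack
# that maximizes the target's rank (i.e., minimizes its priority).
best_response = {cd: max(attacks.values()) for cd, attacks in rank.items()}

# Analyst's choice: the CD method whose post-attack rank is lowest,
# keeping the target as high-priority as possible.
analyst_choice = min(best_response, key=best_response.get)
print(analyst_choice, best_response[analyst_choice])  # -> HLC 29
```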
We also illustrate performance when we expand the adversary's capability and allow SS-Nbr as an attack strategy, keeping the budget at 51 edges. While this strategy usually substantially benefits the attacker when the analyst uses CP or HLC, it has a smaller effect on NOCD, resulting in that method being in the top two for the analyst in additional cases (it overtakes CP in the football and ASG datasets and HLC for CiteSeer). Investigating this matter, we noted that NOCD tends to create relatively few communities (as few as 6 for CiteSeer to as many as 42 for email), while on larger graphs CP and HLC identify hundreds (297 for CP, 739 for HLC). CP and HLC prioritize detecting many small communities, while NOCD identifies fewer larger ones. This propensity to have larger communities may make it more difficult for the target to disassociate from its initial community, despite creating many new triangles with SS-Nbr.

If there is uncertainty regarding whether the target will attack, the analyst must consider this when selecting a community detection method. This alters the defender's optimization formula to be

\[f_{c}=\operatorname*{arg\,min}_{f_{c}\in D}\frac{1}{|V_{t}|}\sum_{v\in V_{t}}\Big{[}p_{A}\cdot r_{\mathcal{C}^{1}_{f_{c},v}}(v)+(1-p_{A})\,r_{\mathcal{C}^{0}_{f_{c}}}(v)\Big{]} \tag{19}\]
\[\text{s.t.}\;\mathcal{C}^{0}_{f_{c}}=f_{c}(G) \tag{20}\]
\[\mathcal{C}^{1}_{f_{c},v}=f_{c}(G^{\prime}_{f_{c},v}) \tag{21}\]
\[G^{\prime}_{f_{c},v}=(V,E\cup E^{\prime}_{f_{c},v}) \tag{22}\]
\[E^{\prime}_{f_{c},v}=f_{\max}(v,G,f_{c},b,A), \tag{23}\]

where \(p_{A}\) is the probability of attack. Results taking this consideration into account are shown in Figure 3. The figure includes SS-Nbr as a potential attack strategy. When \(p_{A}=0\), non-overlapping methods perform best in four of seven datasets, but HLC outperforms these methods for any attack probability greater than about 0.057. When there is no attack, HLC's tendency to identify many small communities often elevates smaller hot clusters above those that contain the target, reducing the target's rank at very low probabilities. In some cases, we see a drawback to NOCD's use of fewer communities: at low probability of attack, it often results in lower rank of the target, sometimes substantially so. Its robustness to all attacks considered, however, results in a smaller increase in expected rank than the non-overlapping methods as the probability of attack increases.

## 5 Conclusions

This paper provides an evaluation of overlapping community detection methods as a data triage tool in the presence of adversarial activity. The target node is able to add edges to avoid being placed in a community that will receive greater scrutiny. Since overlapping community detection methods may leave a node in its original community while also placing it in a new one, this has the potential to increase robustness against such an attacker. We formulate the problem as a Stackelberg game in which a data analyst chooses a community detection method and the attacker chooses an attack strategy in response. In our results applying various attacks from the literature to seven real network datasets, we show that overlapping methods do indeed provide a more robust ability to identify the target node, measured by its position in the prioritized list of nodes. This remains the case when the target node is given the capacity to create new connections between its neighbors, though this does improve performance for the attacker.
As new attacks and community detection methods are proposed, these can be incorporated into the analytical framework we propose to provide data analysts with the most robust possible community analysis, and a quantification of the tradeoffs between the methods at their disposal.

Figure 2: Normalized target rank after attack, where the attacker chooses the strategy that maximizes rank. Bar heights are averages over 10 targets; error bars are standard errors. Higher rank is better for the attacker; the defender will choose the method that yields the lowest rank. Cases where the attacker is given the capability to perform SS-Nbr are shown in grey above the colored bars, which show results when this capability is not available. While the specific method changes depending on capability, in all cases, the defender will choose an overlapping community detection method, and the introduction of the SS-Nbr capability makes NOCD a more attractive option in more cases.

## Acknowledgements

The authors wish to thank Christopher L. Smith at MIT Lincoln Laboratory. The idea to consider overlapping community detection methods in this context arose from a conversation between him and the first author.
2306.04242
4D Millimeter-Wave Radar in Autonomous Driving: A Survey
The 4D millimeter-wave (mmWave) radar, proficient in measuring the range, azimuth, elevation, and velocity of targets, has attracted considerable interest within the autonomous driving community. This is attributed to its robustness in extreme environments and the velocity and elevation measurement capabilities. However, despite the rapid advancement in research related to its sensing theory and application, there is a conspicuous absence of comprehensive surveys on the subject of 4D mmWave radar. In an effort to bridge this gap and stimulate future research, this paper presents an exhaustive survey on the utilization of 4D mmWave radar in autonomous driving. Initially, the paper provides reviews on the theoretical background and progress of 4D mmWave radars, encompassing aspects such as the signal processing workflow, resolution improvement approaches, and extrinsic calibration process. Learning-based radar data quality improvement methods are presented thereafter. Then, this paper introduces relevant datasets and application algorithms in autonomous driving perception, localization and mapping tasks. Finally, this paper concludes by forecasting future trends in the realm of 4D mmWave radar in autonomous driving. To the best of our knowledge, this is the first survey specifically dedicated to the 4D mmWave radar in autonomous driving.
Zeyu Han, Jiahao Wang, Zikun Xu, Shuocheng Yang, Lei He, Shaobing Xu, Jianqiang Wang, Keqiang Li
2023-06-07T08:33:00Z
http://arxiv.org/abs/2306.04242v4
# 4D Millimeter-Wave Radar in Autonomous Driving: A Survey ###### Abstract The 4D millimeter-wave (mmWave) radar, capable of measuring the range, azimuth, elevation, and velocity of targets, has attracted considerable interest in the autonomous driving community. This is attributed to its robustness in extreme environments and outstanding velocity and elevation measurement capabilities. However, despite the rapid development of research related to its sensing theory and application, there is a notable lack of surveys on the topic of 4D mmWave radar. To address this gap and foster future research in this area, this paper presents a comprehensive survey on the use of 4D mmWave radar in autonomous driving. Reviews on the theoretical background and progress of 4D mmWave radars are presented first, including the signal processing flow, resolution improvement approaches, extrinsic calibration process, and point cloud generation methods. Then it introduces related datasets and application algorithms in autonomous driving perception, localization, and mapping tasks. Finally, this paper concludes by predicting future trends in the field of 4D mmWave radar. To the best of our knowledge, this is the first survey specifically for the 4D mmWave radar. ## I Introduction Autonomous driving technology, which aims to provide safe, convenient and comfortable transportation experiences, is going through rapid development. To realize high-level autonomous driving, the capabilities of environment perception, localization, and mapping are crucial. Therefore, the sensors on autonomous vehicles, such as cameras, LiDARs, and radars, as well as their algorithms, are attracting increasing research interest. Among the various sensors, mmWave radars, benefiting from their recognized advantages of small size, low cost, all-weather operation, velocity-measuring ability, and high range resolution [1], have always been widely used for autonomous driving. However, traditional mmWave radars, also known as 3D mmWave radars, exhibit weak performance in measuring the elevation of targets, and their data typically include only range, azimuth, and velocity information. Additionally, 3D mmWave radars suffer from clutter, noise, and low resolution, particularly in the angular dimension, which further limits their applicability to complex perception tasks. The recent advancement of multiple-input multiple-output (MIMO) antenna technology has improved elevation resolution, leading to the emergence of 4D mmWave radar. As the name suggests, 4D mmWave radar can measure four types of target information: range, azimuth, elevation, and velocity. The 4D mmWave radar not only serves as an improved version of mmWave radar, but also introduces numerous significant research topics. The raw data size of 4D mmWave radars is much larger than that of traditional ones, which poses challenges in signal processing and data generation, not to mention the clutter and noise. The sparsity and noise of 4D mmWave radar point clouds generated in the existing signal processing flow are more severe than those of LiDAR point clouds, necessitating the careful design of perception, localization and mapping algorithms that account for the 4D mmWave radar's inherent characteristics. Researchers have conducted a number of surveys on the theory and application of mmWave radars. In recent years, Bilik et al. [2] review the challenges of mmWave radar in autonomous vehicles and its future trends. Venon et al.
[3] comprehensively summarize the theory and existing perception algorithms of mmWave radar in autonomous driving, while Harlow et al. [4] focus on mmWave radar applications in robotics for their survey. It is evident that most reviews are centered on 3D mmWave radars. Despite the revolutionary rise of the 4D mmWave radar and its associated algorithms, there have been few specialized surveys. To address this gap, this paper presents a thorough review of 4D mmWave radar in autonomous driving. The main contributions of this work can be summarized as follows:
* To the best of our knowledge, this is the first survey concentrating on 4D mmWave radar in autonomous driving.
* Given the uniqueness of the 4D mmWave radar, this survey specifically introduces its theoretical background and signal processing pipeline.
* This paper provides an exhaustive survey of 4D mmWave radar application algorithms in autonomous driving, covering research on perception, localization and mapping.

The remainder of this paper is organized as follows: Section II introduces the basic theory of 4D mmWave radars, including the signal processing flow, data formats, and the methods for improving resolution. Section III outlines some extrinsic calibration algorithms. Section IV summarizes some learning-based methods for generating point clouds. Section V lists available 4D mmWave radar datasets for researchers' convenience. Section VI reviews 4D mmWave radar perception applications, categorized into 4D-radar-only methods and multi-modal methods. 4D mmWave radar applications in localization and mapping are presented in Section VII in the categories of odometry, relocalization, and simultaneous localization and mapping (SLAM). Section VIII discusses future trends of 4D mmWave radar in autonomous driving, and Section IX draws the conclusion. ## II Background of 4D mmWave Radars For researchers focusing on autonomous driving, fundamental knowledge about 4D mmWave radars may often be somewhat overlooked. This section briefly reviews the basic theory and resolution-improving approaches of 4D mmWave radars as the foundation for the following sections. ### _Signal Processing Flow_ The traditional signal processing flow and corresponding data formats of 4D mmWave radars are shown in Fig. 1. In step 1, millimeter waves are transmitted from transmission (TX) antennas. After reaching surrounding targets, waves are reflected to reception (RX) antennas. The waveform of most existing 4D mmWave radars is the Frequency Modulated Continuous Wave (FMCW), which offers superior resolution compared to other waveforms. In every working cycle (i.e. chirp) of the transmission antennas of FMCW radars, the frequency of the signal increases linearly with a starting frequency \(f_{c}\), a bandwidth \(B\), a frequency slope \(S\), and a time duration \(T_{c}\). By measuring the frequency of the received signal, the range \(r\) of the target can be calculated as follows: \[r=\frac{ct}{2},\quad t=\frac{\Delta f}{S}, \tag{1}\] where \(t\) is the time interval between transmission and reception, \(c\) is the speed of light, and \(\Delta f\) is the frequency difference between the transmitted and received signals. Meanwhile, one frame of an FMCW radar contains \(N_{c}\) chirps and has a time duration \(T_{f}\). To avoid interference between adjacent chirps, the transmitted and received signals are considered within one chirp, so the maximum unambiguous detection range of 4D mmWave radars is restricted by \(T_{c}\).
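As a quick numeric illustration of Eq. (1), the following is a minimal Python sketch; the function names are ours, and only the chirp duration used in the example is taken from the text:

```python
C = 3e8  # speed of light (m/s)

def target_range(delta_f_hz: float, slope_hz_per_s: float) -> float:
    """Range from the measured beat frequency, Eq. (1): r = c*t/2, t = df/S."""
    t = delta_f_hz / slope_hz_per_s  # round-trip delay
    return C * t / 2.0

def max_unambiguous_range(t_chirp_s: float) -> float:
    """The echo must return within one chirp, so r_max = c * T_c / 2."""
    return C * t_chirp_s / 2.0

print(max_unambiguous_range(0.33e-6))  # ~49.5 m for T_c = 0.33 us
```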
Taking an AWR1843 from Texas Instruments as an instance, its \(T_{c}=0.33\mu\)s, accordingly its maximum unambiguous range is 50m. Under the assumption that the range of a target in one frame is constant, the radial relative velocity \(v\) of the target can be calculated by the Doppler effect as follows: \[v=\frac{\Delta fc}{2f_{c}},\quad\Delta f=\frac{\Delta\varphi}{2\pi T_{c}}, \tag{2}\] where the first equation is the Doppler effect formula, and \(\Delta f\) and \(\Delta\varphi\) are the frequency and phase drift between the received signals of two adjacent chirps, respectively. It is evident that the range and Doppler resolutions depend on \(f_{c},T_{c},N_{c}\), etc., for which readers can refer to [3] for details. To estimate the direction-of-arrival (DOA) of the target, a multiple-input multiple-output (MIMO) antenna design is typically applied in mmWave FMCW radars. The \(n\) TX antennas and \(m\) RX antennas constitute \(n\times m\) virtual TX-RX pairs. To separate transmit signals, different TX antennas should transmit orthogonal signals. By comparing the phase drift between different TX-RX pairs, distance differences between different pairs to the same target can be calculated. Further considering the positional relationship along TX and RX antennas, the DOA of the target can be obtained. The signals of each pair are mixed by a mixer at step 2 and converted by an Analog-to-Digital Converter (ADC) at step 3 to obtain raw ADC data. It should be noted that the coordinates of the pair matrix in Fig. 1 represent sample timestamps within a chirp and a frame, respectively, and the value in each cell corresponds to reflection intensity. Then, at step 4, a 2D Fast Fourier Transformation (FFT) is conducted along the range and Doppler dimensions to generate the Range-Doppler map, the coordinates of which are range and velocity.

Fig. 1: The traditional signal processing flow and corresponding data formats of 4D mmWave radars [5][6][7]

Subsequently, there are two mainstream signal processing flows. The former one is to first conduct FFT along different TX-RX pairs to induce azimuth and elevation information (step 5a), acquiring a 4D range-azimuth-elevation-Doppler tensor, while for 3D mmWave radars, the result is a 3D range-azimuth-Doppler tensor. At step 6a, the constant false alarm rate (CFAR) algorithm [8] is usually applied in the four dimensions to filter the tensor by the intensity of every cell and obtain real targets in the form of a point cloud for downstream tasks [9]. In contrast, the latter signal processing flow first filters RD maps to generate target cells, also using the CFAR algorithm (step 5b); then a digital beamforming (DBF) method is employed at step 6b to restore angular information and generate a radar point cloud [6]. ### _Methods to Improve Resolution_ As mentioned above, the most crucial ability of 4D mmWave radars is to measure the elevation of targets, which amounts to improving the elevation resolution. Specific methods can be divided into hardware and software levels as follows: #### Ii-B1 Hardware At the hardware level, increasing the number of TX-RX pairs or the aperture of antennas are two primary ways to improve resolution, including: * Cascading: simply cascading several standard mmWave radar chips [10] can increase TX-RX pairs, thus improving angular resolution. For instance, a 12TX-16RX (192 pairs) 4D mmWave radar can be formed by cascading four standard 3TX-4RX (12 pairs) radar chips.
It is the most straightforward approach, but the size and power dissipation are also increased. * Chip integration: integrating more antennas on a chip is another promising technique [11]. It has the potential to replace cascading, but the disturbance between antennas remains an unsolved problem. * Meta-material: apertures constructed by meta-material can significantly increase the angular resolution while controlling the size [12], but such methods are not yet mature enough to be widely applied. #### Ii-B2 Software By virtually realizing hardware improvement or optimizing signal processing algorithms along the processing flow, radar resolution can be improved at the software level. * Virtual aperture imaging: inspired by the traditional synthetic aperture radar (SAR), some researchers try to virtually expand the aperture of antennas through software design, thus enhancing the angular resolution [13]. This method has a remarkable effect on angular resolution improvement but usually needs the help of cascading to reduce noise. * Super-resolution: super-resolution can be achieved by replacing traditional methods like FFT in the signal processing flow with innovative algorithms [14], even learning-based algorithms [15]. However, it also requires deeper research for practical application. ## III Extrinsic Calibration Radar point clouds are relatively sparse, and spectrum data is not sufficiently intuitive. Due to multi-path effects and clutter interference, noise is also considerable, posing challenges for calibration. For 4D mmWave radar, the higher resolution alleviates this issue, but there is still a lack of sufficiently robust online calibration methods. Following traditional calibration methods of 3D mmWave radars, retro-reflectors are commonly used to improve calibration accuracy. By carefully placing several retro-reflectors in specific locations, analyzing sensing results of the 4D mmWave radar, and comparing them with LiDAR and camera data, the extrinsic parameters can be calibrated [16]. Instead of successively calibrating each sensor pair, Domhof et al. calibrate all sensors together directly with respect to the mobile robot's body and achieve a median rotation error of only 0.02\({}^{\circ}\) [17]. However, the practicability of retro-reflectors in real scenarios is limited. In recent years, some researchers have designed calibration methods for 4D mmWave radars that do not require specially placed retro-reflectors. Instead, radar motion measurement is utilized to conduct calibration for radars [18] or radar-camera pairs [19]. These methods are convenient, but their verification in extreme weather conditions remains to be carried out. ## IV Learning-based Radar Point Cloud Generation One primary reason for the sparsity of 4D radar point clouds is the substantial information loss caused by CFAR. To address this problem, an increasing number of learning-based methods have been proposed to replace CFAR, and work directly with RD maps or the 4D tensor to improve the quality of 4D radar point clouds and the performance of downstream autonomous driving tasks, such as perception and localization. Related works, as well as datasets, perception, localization and mapping algorithms of 4D mmWave radars that will be illustrated in the following, are uniformly shown on a timeline in Fig. 2. Generally, CFAR is an optimal detection algorithm if cells are independent and identically distributed.
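For intuition about the CFAR step, here is a minimal sketch of a classic 1D cell-averaging CFAR detector. This is an illustrative variant only, not the algorithm of [8]; the cited works apply CFAR in up to four dimensions, and all parameters below are placeholders.

```python
import numpy as np

def ca_cfar_1d(power, n_train=8, n_guard=2, scale=4.0):
    """Cell-averaging CFAR along one dimension: a cell is declared a
    detection if its power exceeds `scale` times the mean power of the
    surrounding training cells (guard cells excluded)."""
    n = len(power)
    half = n_train // 2
    detections = []
    for i in range(half + n_guard, n - half - n_guard):
        train = np.r_[power[i - n_guard - half:i - n_guard],
                      power[i + n_guard + 1:i + n_guard + 1 + half]]
        if power[i] > scale * train.mean():
            detections.append(i)
    return detections
```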
However, as the targets in the real world usually have different shapes and occupy multiple cells, the CFAR-type methods will lead to masking effects, reducing the resolution of point clouds and suffering from information loss. Brodeski et al. [20] first apply a CNN to RD maps for detection and localization of multiple objects, which is called the DRD (deep radar detection) net. They formulate the target detection in RD maps as a segmentation task and adopt a model structure similar to 2D-U-Net [21]. Facing the lack of well-annotated datasets, specifically for radar RD maps, they refer to the radar calibration process, arrange retro-reflectors in an anechoic chamber to collect corresponding data, and map them back to RD maps as labels. Experiments show that DRD net can operate in real-time (\(\sim\)20ms for inference) and outperform classic methods in detection accuracy. However, the labeling challenge for RD maps still exists since the data collected in the anechoic chamber differ from those collected in the real-world driving scene, which is more challenging with the multi-path reflections, interference, attenuation, etc. To address this challenge, Cheng et al. [6][22] use LiDAR point clouds as the supervision and successively design network structures based on U-Net [23] and GAN [24]. In complex roadway scenes, the 4D radar point clouds generated by [22] not only contain fewer clutter points but also provide denser point clouds of real targets compared to the classical CFAR detectors. ## V Datasets Currently available datasets with 4D mmWave radars are introduced in this section, and they are summarized in Table I. Public datasets are crucial for 4D mmWave radar-based algorithms as they provide a platform for developing and comparing different algorithms and stimulate related research. Therefore, we curated published datasets up to the time of writing, hoping that these datasets will facilitate the research of 4D radar-based algorithms. Public 4D radar data are fairly rare; Astyx HiRes 2019 is the first such dataset [25]. The data provided for free consist of 500 synchronized frames (radar, LiDAR, camera) containing about 3,000 precisely annotated 3D objects. It can be seen that the amount of data in this dataset is relatively small. ColoRadar is a dataset dedicated to research on localization and mapping, containing approximately 2 hours of data from radar, LiDAR, and pose ground truth [9]. It provides three levels of processing for radar data: raw ADC data, 3D range-azimuth-elevation tensors derived by compressing the Doppler dimension of 4D radar tensors, and radar point clouds. This dataset collects data in several unique environments, both indoors and outdoors, providing a diverse range of sensor data. The VoD dataset is a new multi-sensor automotive dataset for multi-class 3D object detection, which consists of calibrated and synchronized LiDAR, camera, and radar data [26]. It contains 8693 frames of data acquired in complex urban traffic, with 123106 3D bounding box annotations of both moving and static objects, as well as tracking IDs for each annotated object, which are useful for tracking. Similarly, the Tj4DRadSet dataset contains 44 consecutive sequences with a total of 7757 synchronized frames, well-labeled using 3D bounding boxes and trajectory IDs [16]. Furthermore, it covers much richer and more challenging driving scenario clips (e.g., urban roads, highways, industrial parks).
To the best of our knowledge, K-Radar is currently the largest dataset based on 4D mmWave radar, collecting 35k frames under various weather conditions (e.g., sunny, foggy, rainy, snowy) [27]. In addition to 4D radar data, K-Radar provides high-resolution LiDAR point clouds, surround RGB images from four stereo cameras, and RTK-GPS and IMU data from the ego vehicle. It is worth mentioning that K-Radar is currently the only dataset that provides 4D radar tensors. In order to facilitate experiments on various neural network structures, K-Radar also provides a visual program to modularize the neural network training code. Although 4D mmWave radar is receiving more and more attention from the academic community, and more and more datasets have been released, compared with vision or LiDAR, the amount of data is still not large enough.

Fig. 2: Timeline of 4D mmWave radar point cloud generation, datasets, perception, localization and mapping algorithms

Fig. 3: Overview of the radar signal processing chain with [22]

## VI Perception Applications Currently, the point cloud density of 4D mmWave radar has already reached a level comparable to that of low-beam LiDAR, and 4D mmWave radar exhibits superior robustness under low visibility and adverse weather conditions. Therefore, researchers have been attempting to transfer LiDAR point cloud processing models to 4D mmWave radar for target detection, scene flow prediction, and other tasks. Furthermore, as described in Section IV, pre-CFAR radar data contain richer information, prompting some researchers to work directly with RD maps or 4D tensors, bypassing point cloud generation tasks. Related work can be further divided into those relying solely on 4D radar or on multi-modal sensor fusion. ### _4D-Radar-only Methods_ Naturally, most related 4D-radar-only methods are derived from LiDAR-based methods. However, due to the sparsity and noise characteristics of mmWave radar, specific network designs are still required. #### Vi-A1 3D detection As for the 3D object detection task, according to the difference between model architectures, perception methods can be divided into CNN-based and Transformer-based. Palffy et al. [26] first apply PointPillars to 4D radar point clouds for 3D detection of multi-class road users. The performance is improved by temporal integration and by introducing additional features, such as elevation and Doppler velocity. However, the result of the proposed method (mAP 47.0) is still far inferior to the LiDAR detector on 64-beam LiDAR (mAP 62.1). RPFA-Net [28] achieves progress by introducing a radar pillar features attention (PFA) module. It utilizes self-attention instead of the simplified PointNet [29] to extract the global context feature from pillars, which aims to effectively capture the information over a long distance and improve heading angle estimation accuracy. As a set operator, the attention-based Transformer is inherently well suited to processing such point sets, which exhibit permutation invariance. Therefore, to cope with the sparse and noisy data of 4D mmWave radar, Tan et al. [30] propose a 3D object-detection framework based on multi-frame point clouds. They obtain the ego vehicle velocity and compensated velocity information from point clouds first, then accumulate nearby frames to the last frame. In addition to directly processing perception tasks on radar point clouds, some studies have turned their attention to RD maps or 4D tensors, aiming to utilize more underlying hidden information.
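Returning to the multi-frame accumulation scheme above, the following is a minimal sketch of Doppler compensation and frame accumulation. It is an illustration only, not the implementation of [30]; the array layout, the sign convention for radial velocity, and the pose format are all assumptions.

```python
import numpy as np

def compensate_doppler(points: np.ndarray, v_ego: np.ndarray) -> np.ndarray:
    """points: (N, 4) array of [x, y, z, radial_velocity] in the radar frame.
    Assuming positive radial velocity means moving away from the radar, a
    static target measures v_r = -ray . v_ego, so adding the projection of
    the ego velocity onto the unit ray yields the compensated velocity."""
    xyz = points[:, :3]
    rays = xyz / np.linalg.norm(xyz, axis=1, keepdims=True)
    out = points.copy()
    out[:, 3] = points[:, 3] + rays @ v_ego
    return out

def accumulate(frames, poses):
    """Transform each past frame by its 4x4 pose (assumed to map that
    frame's coordinates into the last frame) and stack all points."""
    merged = []
    for pts, T in zip(frames, poses):
        homo = np.c_[pts[:, :3], np.ones(len(pts))]
        pts = pts.copy()
        pts[:, :3] = (homo @ T.T)[:, :3]
        merged.append(pts)
    return np.vstack(merged)
```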
The K-Radar dataset [27] proposes a 3D object detection baseline that directly consumes 4D tensors as input and verifies that the elevation information of the 4D tensor is essential for 3D object detection. The proposed model also demonstrates the robustness of 4D tensor-based perception, especially under adverse weather conditions. #### Vi-A2 Scene flow estimation Scene flow estimation aims to calculate a 3D motion vector field that represents the motion of both static and dynamic elements within the environment concerning the ego agent. While research has traditionally relied on sensing modalities such as cameras or LiDAR for scene flow estimation, there are also approaches that utilize 4D mmWave radar data to accomplish this task. Representatively, Ding et al. [31] propose a novel approach to 4D radar-based scene flow estimation via cross-modal learning, motivated by the co-located sensing redundancy in modern autonomous vehicles. Such redundancy implicitly provides various forms of supervision cues to the radar scene flow estimation, which can effectively solve the difficulty in labeling radar point clouds. Specifically, this work introduces a multitask model architecture for the cross-modal learning problem. Extensive experiments show the state-of-the-art performance of this method and demonstrate the effectiveness of cross-modal supervised learning to infer more accurate 4D mmWave radar scene flow. ### _Fusion Methods_ Considering that 4D mmWave radar can already provide point cloud information, some scholars have fused it with the camera or LiDAR for target detection, hoping to improve the accuracy and robustness of the perception model. Generally, there are three fusion levels for different modalities: data level, feature level, and decision level. Existing research primarily focuses on feature-level fusion. As for 4DRV (4D Radar and Vision) fusion, 4D mmWave radar can provide high-precision depth and velocity information in a low-cost way, compensating for the shortcomings of cameras and thereby improving the accuracy of 3D detection. In recent studies, 4D mmWave radar signals are usually transformed into 2D image-like features so that they can be practically deployed together with camera images. Representatively, Meyer et al. [32] apply a network based on [33], originally developed for camera-LiDAR fusion, to the fusion of 4D mmWave radar and camera. In order to make up for the data format difference, they discard the Doppler information and only retain the position information and the reflection intensity of 4D mmWave radar point clouds. The point clouds of each frame are used to generate a BEV image and then 3D proposals. Surprisingly, the precision of the fusion network is higher when radar data is used instead of LiDAR, and it reaches an average precision of 61% AP on the Astyx dataset [25]. The authors argue that the reason may be that the LiDAR sensor has only 16 beams, but further studies are still required. A subsequent study is performed by Cui et al. [34] with a newly added self-supervised model adaptation block [35], which dynamically adapts the fusion of different modalities according to the object properties. Besides, a front view map is generated from the radar point clouds together with the BEV image. The presented method outperforms the former study [32] by up to 9.5% in 3D AP.
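The kind of BEV rasterization used to make radar point clouds image-like can be sketched as follows. This is a minimal illustration, not the network input format of [32]; the grid extents and cell size are arbitrary placeholders.

```python
import numpy as np

def radar_to_bev(points, x_range=(0.0, 70.0), y_range=(-35.0, 35.0), cell=0.25):
    """Rasterize radar points [x, y, z, intensity] into a bird's-eye-view
    intensity image; each grid cell keeps the maximum reflection intensity."""
    h = int((x_range[1] - x_range[0]) / cell)
    w = int((y_range[1] - y_range[0]) / cell)
    bev = np.zeros((h, w), dtype=np.float32)
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    keep = (ix >= 0) & (ix < h) & (iy >= 0) & (iy < w)
    np.maximum.at(bev, (ix[keep], iy[keep]), points[keep, 3])
    return bev
```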
The front view map can make better use of the elevation information provided by the 4D mmWave radar, and it is easier to fuse with the monocular camera feature, balancing between detection accuracy and computational efficiency. Despite the above advantages of 4DRV fusion, the visual-based branch may still struggle to work facing aggressive lighting changes or adverse weather conditions, which in turn affects the overall performance of the model. Thus, Wang et al. [36] first explore the advantages of 4DRL (4D Radar and LiDAR) fusion with InterFusion, an interaction-based fusion framework. They design an InterRAL (Interaction of Radar and LiDAR) module and update pillars from two modalities to enhance feature expression. Ablation experiments are carried out to prove its effectiveness. In a follow-up study, Wang et al. [37] propose the \(M^{2}\)-Fusion network that integrates an interaction-based multi-modal fusion method termed IMMF and a center-based multi-scale fusion method termed CMSF. Evaluated using the Astyx dataset [25], it outperforms mainstream LiDAR-based object detection methods significantly. While LiDAR can accurately detect objects at close range, 4D mmWave radar offers a longer detection range due to its penetrability, so 4DRL fusion has the potential to be a reliable technical solution with low cost and high quality. ## VII Localization and Mapping Applications In severe environments where satellite positioning information is inaccurate, or high-definition maps are unavailable, localization and mapping by perception sensors are necessary. Some relevant research is carried out using the emerging 4D mmWave radar. As radar point clouds are much lighter than tensors, and LiDAR methods can be transferred to them with minor adjustments, there is hardly any localization and mapping research based on radar tensors. ### _Odometry_ Radar odometry estimation is the core of localization, and is also a key component of SLAM. There is a fair amount of related research on 4D mmWave radars. However, due to the sparsity and noise of radars, odometry is mostly generated with the help of the Inertial Measurement Unit (IMU). Doer and Trommer make plenty of contributions to this topic using an Unmanned Aerial Vehicle (UAV). In [38], they estimate the height of the UAV by the barometer, then utilize a Random Sample Consensus (RANSAC) based least squares to estimate ego velocity leveraging the Doppler information of the radar point clouds. The IMU data are finally fused to construct Extended Kalman Filter (EKF) based radar inertial odometry. On this basis, they consider Manhattan world assumptions, which assume planes in the environment are orthogonal to each other, and achieve accuracy comparable to state-of-the-art visual-inertial odometry [39]. This contribution is then extended to multiple radars and shows satisfactory performance under degraded visual conditions with very little computational resource requirement [40][41]. Besides, they also research the fusion of 4D mmWave radars with visual and thermal information to further enhance the result [42]. The main downside of these works is that the EKF-based algorithm may face difficulty coping with large-scale environments since the odometry drift will increase. And in most works, Manhattan world assumptions may restrict the applicability in severe outdoor environments. The EKF framework is also applied by Michalczyk et al. [43].
Instead of directly fusing IMU with ego velocity estimated by radar point clouds, they realize 3D point matching across sparse, noisy radar scans to measure the displacement between radar scans. The estimation of the 3D trajectory of the UAV reaches a drift of 3.32% of the total traveled distance. Learning-based radar odometry estimation is also explored. Lu et al. [44] extract the features of radar point clouds and IMU by CNN and RNN encoders, respectively. Then a two-stage cross-modal attention layer is designed to fuse these features, and an RNN is used to model the long-term dynamics. The output of the whole network is the 6-DOF odometry, which achieves a 0.8m Absolute Trajectory Error (ATE). The performance demonstrates further upgrades with assistance from the RGB camera or depth camera. ### _Relocalization_ Relocalization depends on high-accuracy online mapping, and is significant when using high-definition maps or detecting loop closure in SLAM. Considering the sparsity and noisiness of 4D mmWave radar point clouds, Cheng et al. [45] make use of Doppler velocity to remove moving objects, and then enhance the point clouds by merging multiple consecutive scans. Inspired by the famous PointNet, a multi-layer perceptron (MLP) based network is employed to increase the dimension of each point from 4 to 1024 for pointwise feature extraction. By comparing extracted features of the current scan and the global map, relocalization can be realized.

Fig. 4: Overview of the \(M^{2}\) 4DRL fusion model [37]

### _SLAM_ Since the above odometry and relocalization are indispensable for SLAM, research on 4D mmWave radar SLAM has emerged quite recently. Zhuang et al. [46] develop a whole-process SLAM for 4D mmWave radar point clouds based on iterative EKF. To avoid the sparsity caused by RANSAC-like methods, they conduct ego-velocity estimation by an iterative reweighted least squares. The weight of each radar point also reflects its dynamics, and thus can help remove moving objects. The unconventional distribution-to-distribution matching between scan and submap further decreases the influence of sparsity. The effects are shown in Fig. 5. Zhuge [47] applies the pose graph to construct a SLAM system adapted from a LiDAR SLAM method named hdl_graph_slam [48]. Some impressive experiments are conducted under the influence of smoke and rain to prove the robustness of 4D mmWave radar in extreme environments. ## VIII Future Trends The 4D mmWave radar has the potential to bring about profound changes to autonomous vehicles. Nonetheless, it is far from mature at the moment. The future trends of 4D mmWave radar may rely mainly on the following areas. #### Vi-1 Point cloud enhancement As the most commonly used data format, 4D mmWave radar point clouds are of evidently lower quality than the data of other sensors. The data quality of point clouds is severely impacted by the characteristics of radars, such as the multi-path effect. Furthermore, there is an urgent need to reduce the information loss during the signal processing flow, particularly by replacing CFAR with precisely designed learning-based methods. Learning-based methods for DOA estimation, instead of DBF methods, can also be explored for super-resolution angle estimation. #### Vi-2 Application algorithms redesign In addition to improving 4D mmWave radar point clouds, the application algorithms after signal processing are another issue that cannot be neglected.
To date, many application algorithms for 4D mmWave radars have been adapted from corresponding LiDAR algorithms. The specialties of 4D mmWave radars, such as velocity measuring ability and adaptive capability in extreme environments, should be further explored by future research. For perception tasks, multi-modal fusion is undoubtedly the future development direction. However, it remains to be explored whether the robustness of 4D radar in extreme weather conditions will be weakened by the integration of other sensor branches. For 4D mmWave radar localization and mapping, the sensor fusion with LiDARs and cameras has a great deal of room for discovery. #### Vi-3 Pre-point cloud data utilizing As for the unique data formats along the 4D mmWave radar signal processing flow, such as raw ADC data, RD maps, and 4D tensors, utilizing these data to perform perception, localization and mapping is an interesting but almost untouched topic. Learning-based models that take advantage of ample information from these data while keeping acceptable real-time performance may be a research hot spot. #### Vi-4 Dataset enriching As in all other data-driven research areas, the datasets of 4D mmWave radars play a significant role in the related studies. Existing datasets containing 4D mmWave radars are relatively rare. Data formats and scenario richness are two main fields waiting for expansion. ## IX Conclusion This paper offers a comprehensive overview of 4D mmWave radar in autonomous driving. It successively covers the signal processing theory, datasets, extrinsic calibration methods, learning-based radar point cloud generation algorithms, applications in perception, localization and mapping, as well as future trends. Research on 4D mmWave radars in the field of autonomous driving is still in progress and holds great potential for future advancements.
2306.13058
Checking Refinement of Asynchronous Programs against Context-Free Specifications
In the language-theoretic approach to refinement verification, we check that the language of traces of an implementation all belong to the language of a specification. We consider the refinement verification problem for asynchronous programs against specifications given by a Dyck language. We show that this problem is EXPSPACE-complete -- the same complexity as that of language emptiness and for refinement verification against a regular specification. Our algorithm uses several technical ingredients. First, we show that checking if the coverability language of a succinctly described vector addition system with states (VASS) is contained in a Dyck language is EXPSPACE-complete. Second, in the more technical part of the proof, we define an ordering on words and show a downward closure construction that allows replacing the (context-free) language of each task in an asynchronous program by a regular language. Unlike downward closure operations usually considered in infinite-state verification, our ordering is not a well-quasi-ordering, and we have to construct the regular language ab initio. Once the tasks can be replaced, we show a reduction to an appropriate VASS and use our first ingredient. In addition to the inherent theoretical interest, refinement verification with Dyck specifications captures common practical resource usage patterns based on reference counting, for which few algorithmic techniques were known.
Pascal Baumann, Moses Ganardi, Rupak Majumdar, Ramanathan S. Thinniyam, Georg Zetzsche
2023-06-22T17:24:37Z
http://arxiv.org/abs/2306.13058v1
# Checking Refinement of Asynchronous Programs against Context-Free Specifications ###### Abstract In the language-theoretic approach to refinement verification, we check that the language of traces of an implementation all belong to the language of a specification. We consider the refinement verification problem for asynchronous programs against specifications given by a Dyck language. We show that this problem is \(\mathsf{EXPSPACE}\)-complete--the same complexity as that of language emptiness and for refinement verification against a regular specification. Our algorithm uses several technical ingredients. First, we show that checking if the coverability language of a succinctly described vector addition system with states (VASS) is contained in a Dyck language is \(\mathsf{EXPSPACE}\)-complete. Second, in the more technical part of the proof, we define an ordering on words and show a downward closure construction that allows replacing the (context-free) language of each task in an asynchronous program by a regular language. Unlike downward closure operations usually considered in infinite-state verification, our ordering is not a well-quasi-ordering, and we have to construct the regular language ab initio. Once the tasks can be replaced, we show a reduction to an appropriate VASS and use our first ingredient. In addition to the inherent theoretical interest, refinement verification with Dyck specifications captures common practical resource usage patterns based on reference counting, for which few algorithmic techniques were known. Asynchronous programs, VASS, Dyck languages, Language inclusion, Refinement verification
In an asynchronous program, a cooperative scheduler iteratively picks a previously spawned task and executes it atomically to completion. Asynchronous programs occur in many software systems with stringent correctness requirements. At the same time, they form a robustly decidable class of infinite-state systems closely aligned with other concurrency models. Thus, algorithmic verification of asynchronous programs has received a lot of attention from both theoretical and applied perspectives [26, 13, 10, 8, 9, 15, 11, 16, 21].

We work in the language-theoretic setting, where we treat asynchronous programs as generators of languages, and reduce verification questions to decision problems on these languages. Thus, an execution of a task yields a word over the alphabet of its events and task names. An execution of the asynchronous program concatenates the words of executing tasks and further ensures that any task executing in the concatenation was spawned before and not already executed. The trace of an execution projects the word to the alphabet of events and the language of the program is the set of all traces. With this view, reachability or safety verification questions reduce to language emptiness, and refinement verification reduces to language inclusion of a program in a given specification language over the alphabet of events.

We consider the language inclusion problem for asynchronous programs when the specification language is given by a Dyck language. Our main result shows that this problem is \(\mathsf{EXPSPACE}\)-complete. The language emptiness problem for asynchronous programs, as well as language inclusion in a regular language, are already \(\mathsf{EXPSPACE}\)-complete [10]. Thus, there is no increase in complexity even when the specifications are Dyck languages. However, as we shall see below, our proof of membership in \(\mathsf{EXPSPACE}\) requires several new ingredients.

In addition to the inherent language-theoretic interest, the problem is motivated by the practical "design pattern" of reference counting and barrier synchronization in concurrent event-driven programs. In this pattern, each global shared resource maintains a counter of _how many_ processes have access to it. Before working with the shared resource, a task acquires access to the resource by incrementing a counter (the reference count). Later, a possibly different task can release the resource by decrementing the reference count.
When the count is zero, the system can garbage collect the resource. For example, device drivers in the kernel maintain such reference counts, and there are known bugs arising out of incorrect handling of reference counts [22]. Here is a small snippet that shows the pattern in asynchronous code:

\begin{tabular}{l l} start : & \(\{\;t\;:=\;\texttt{inc}();\;\texttt{if}\;(t)\;\texttt{spawn}(\texttt{work});\}\) \\ \end{tabular}

Treating each call to inc as an opening bracket and each matching call to dec as a closing bracket, correct reference counting corresponds to inclusion of the program's traces in a Dyck language. Since the problem subsumes language emptiness, we inherit \(\mathsf{EXPSPACE}\)-hardness. Let us therefore focus on the challenges in obtaining an \(\mathsf{EXPSPACE}\) upper bound.

The \(\mathsf{EXPSPACE}\) algorithm for language emptiness proceeds as follows (see [10, 21]). First, we can ignore the alphabet of events and only consider words over the alphabet of task names. Second, we notice that (non-)emptiness is preserved if we "lose" some spawns along an execution; this allows us to replace the language of each task by its downward closure. By general results about well-quasi orderings, the downward closure is a regular language which, moreover, has a succinct representation. Thus, we can reduce the language emptiness problem to checking (coverability) language emptiness of an associated vector addition system with states (VASS). This problem can be solved in \(\mathsf{EXPSPACE}\), by a result of Rackoff [23].

Unfortunately, this outline is not sufficient in our setting. First, unlike for language emptiness or regular language inclusion, we cannot simply replace tasks with their downward closures (w.r.t. the subword ordering). While we can drop spawns as before, dropping letters from the event alphabet does not preserve membership in a Dyck language. Second, even if each handler is regular, we are left with checking if a VASS language is contained in a Dyck language. We provide new constructions to handle these challenges. Our starting point is the characterization of inclusion in Dyck languages [24]: A language \(L\) is not included in a Dyck language if and only if there is a word \(w\in L\) with either an _offset violation_ (number of open brackets does not match the number of closed brackets), a _dip violation_ (some prefix with more closed brackets than open ones), or a _mismatch violation_ (an open bracket of one kind matched with a closed bracket of a different kind).

Checking VASS Language Inclusion

Our first technical construction shows how to check language inclusion of a VASS coverability language in a Dyck language in \(\mathsf{EXPSPACE}\). (In a _coverability language_, acceptance is defined by reaching a final control state.) In fact, our result carries over when the control states of the VASS are _succinctly represented_, for example by using transducers and binary encodings of numbers.
We first check that the VASS language is _offset-uniform_, that is, every word in the language has exactly the same offset (difference between open brackets and closed brackets), and that this offset is actually zero. (If this condition is not true, there is already an offset violation.) We show that the offset of every prefix of a word in any offset-uniform VASS language is bounded by a doubly exponential number, and therefore, this number can be tracked by adding doubly exponentially bounded counters (as in Lipton's construction [19]) in the VASS itself. Moreover, we can reduce the checking of dip or mismatch violations to finding a _marked Dyck factor_: an infix of the form \(\#w\bar{\#}\) for a Dyck word \(w\). Finally, for offset-uniform VASS, finding a marked Dyck factor reduces to coverability in succinctly represented VASS, which can be checked in EXPSPACE [2]. Offset uniformity is important: finding a marked Dyck factor in an arbitrary VASS language is equivalent to VASS reachability, which is Ackermann-complete [7, 17]. In fact, checking whether a given VASS language is included in the set of _prefixes_ of the one-letter Dyck language is already equivalent to VASS reachability (see the long version of the paper for a proof). A consequence of our result is that given a VASS coverability language \(K\) and a _reachability language_ (i.e. acceptance requires all counters to be zero in the end) \(L\) of a _deterministic_ VASS, deciding whether \(K\subseteq L\) is EXPSPACE-complete. This is in contrast (but not in contradiction) to recent Ackermann-completeness results for settings where both \(K\) and \(L\) are drawn from subclasses of VASS coverability languages [6].

Downward Closure of Tasks

Next, we move to asynchronous programs. We define a composite ordering on words that is a combination of two different orderings: the subword ordering for task names, and the syntactic preorder on the events projected to a single set \(\{x,\bar{x}\}\) of Dyck letters. In our case, the latter means a word \(u\) is less than \(v\) iff they both have the same offset, but \(v\) has at most the dip of \(u\). The composite order is defined so as to preserve the existence of marked Dyck factors. In contrast to the subword ordering, this (composite) ordering is not a well-quasi-ordering (since, e.g., \(\bar{x}x,\,\bar{x}\bar{x}xx,\,\bar{x}\bar{x}\bar{x}xxx,\ldots\) forms an infinite descending chain). Nevertheless, our most difficult technical construction shows that for any context-free language (satisfying an assumption, which we call tame-pumping) there exists a regular language with the same downward closure in this ordering. The case of general context-free languages reduces to this special case since the presence of a non-tame pump immediately results in a Dyck-violation and can easily be detected in PSPACE. For the tame-pumping grammars, a succinct description of the corresponding automaton can be computed in PSPACE. This key observation allows us to replace the context-free languages of tasks with regular sets, and thereby reduce the problem to checking VASS language inclusion.

Related Work

Language inclusion in Dyck languages is a well-studied problem. For example, inclusion in a Dyck language can be checked in polynomial time for context-free languages [27] or for ranges of two-copy tree-to-string transducers [20]. Our work extends the recent result that the language noninclusion problem for context-bounded multi-pushdown systems in Dyck languages is NP-complete [1].
Our result is complementary to that of [1]: their model considers a _fixed_ number of threads but allows the threads to be interrupted and context-switched a fixed number of times. In contrast, we allow dynamic spawning of threads but assume each thread is atomically run to completion. A natural open question is whether our results continue to hold if threads can be interrupted up to a fixed number of times. Inclusion problems have recently also been studied when both input languages are given as VASS coverability languages [6]. Since in our setting, the supposedly larger language is always a Dyck language (which is not a VASS coverability language), those results are orthogonal.

## 2 Language-Theoretic Preliminaries

**General Definitions.** We assume familiarity with basic language theory, see the textbook [14] for more details. For an alphabet \(\Sigma\subseteq\Theta\), let \(\pi_{\Sigma}\colon\Theta^{*}\to\Sigma^{*}\) denote the projection onto \(\Sigma^{*}\). In other words, for \(w\in\Theta^{*}\), the word \(\pi_{\Sigma}(w)\) is obtained from \(w\) by deleting every occurrence of a letter in \(\Theta\setminus\Sigma\). If \(\Sigma\) contains few elements, e.g. \(\Sigma=\{x,y\}\), then instead of writing \(\pi_{\{x,y\}}\) we also write \(\pi_{x,y}\), leaving out the set brackets. We write \(|w|_{\Sigma}\) for the number of occurrences of letters \(x\in\Sigma\) in \(w\), and similarly \(|w|_{x}\) if \(\Sigma=\{x\}\).

**Context-Free Languages.** A _context-free grammar_ (CFG) \(\mathcal{G}=(N,\Theta,P,S)\) consists of an alphabet of _nonterminals_ \(N\), an alphabet of _terminals_ \(\Theta\) with \(N\cap\Theta=\emptyset\), a finite set of _productions_ \(P\subseteq N\times(N\cup\Theta)^{*}\), and the start symbol \(S\in N\). We usually write \(A\to v\) to denote a production \((A,v)\in P\). The size of the CFG \(\mathcal{G}\) is defined as \(|\mathcal{G}|=\sum_{A\to v\in P}(|v|+1)\). We denote the _derivation relation_ by \(\Rightarrow_{\mathcal{G}}\) and its reflexive, transitive closure by \(\Rightarrow^{*}_{\mathcal{G}}\). We drop the subscript \(\mathcal{G}\) if it is clear from the context. We also use _derivation trees_ labelled by \(N\cup\Theta\) for derivations of the form \(A\Rightarrow^{*}w\) for some \(A\in N\). Here we start with the root labelled by \(A\), and whenever we apply a production \(B\to v\) with \(v=a_{1}\ldots a_{n}\), we add \(n\) children labelled by \(a_{1},\ldots,a_{n}\) (in that order from left to right) to a leaf labelled by \(B\). A _pump_ is a derivation of the form \(A\Rightarrow^{*}uAv\) for some nonterminal \(A\). A derivation tree which is pumpfree, i.e., in which no path contains multiple occurrences of the same nonterminal, is referred to as a _skeleton_. We will often see an arbitrary derivation tree as one which is obtained by inserting pumps into a skeleton. The _language_ \(L(\mathcal{G},A)\) of \(\mathcal{G}\) starting from nonterminal \(A\in N\) contains all words \(w\in\Theta^{*}\) such that there exists a derivation \(A\Rightarrow^{*}_{\mathcal{G}}w\). The language of \(\mathcal{G}\) is \(L(\mathcal{G})=L(\mathcal{G},S)\). A _context-free language_ (\(\mathsf{CFL}\)) \(L\) is a language for which there exists a \(\mathsf{CFG}\) \(\mathcal{G}\) with \(L=L(\mathcal{G})\). A \(\mathsf{CFG}\) \(\mathcal{G}=(N,\Theta,P,S)\) is said to be in _Chomsky normal form_ if all of its productions have one of the forms \(A\to BC\), \(A\to a\), or \(S\to\varepsilon\), where \(B,C\in N\setminus\{S\}\), \(a\in\Theta\), and the last form only occurs if \(\varepsilon\in L(\mathcal{G})\).
It is well known that every \(\mathsf{CFG}\) can be transformed in polynomial time into one in Chomsky normal form with the same language. An _extended_ context-free grammar (\(\mathsf{ECFG}\)) \(\mathcal{G}=(N,\Theta,P,S)\) is a \(\mathsf{CFG}\) which may additionally have productions of the form \(A\to\Gamma^{*}\in P\) for some alphabet \(\Gamma\subseteq\Theta\). Productions of this form induce derivations \(uAs\Rightarrow_{\mathcal{G}}uvs\), where \(u,s\in(N\cup\Theta)^{*}\) and \(v\in\Gamma^{*}\). Chomsky normal form for \(\mathsf{ECFG}\) is defined as for \(\mathsf{CFG}\), but also allows productions of the form \(A\to\Gamma^{*}\). An \(\mathsf{ECFG}\) can still be transformed into Chomsky normal form using the same algorithm as for a \(\mathsf{CFG}\), treating expressions \(\Gamma^{*}\) like single terminal symbols. Since the extended productions can be simulated by conventional \(\mathsf{CFG}\) productions, the language of an \(\mathsf{ECFG}\) is still a \(\mathsf{CFL}\).

**Dyck Language.** Let \(X\) be an alphabet and let \(\bar{X}=\{\bar{x}\mid x\in X\}\) be a disjoint copy of \(X\). The _Dyck language (over \(X\))_ \(\mathsf{Dyck}_{X}\subseteq(X\cup\bar{X})^{*}\) is defined by the following context-free grammar:

\[S\to\varepsilon\mid SS\mid xS\bar{x}\quad\text{for }x\in X.\]

Let \(\Theta\supseteq X\cup\bar{X}\) be an alphabet. For \(w\in\Theta^{*}\) we define \(\mathsf{offset}(w)=|w|_{X}-|w|_{\bar{X}}\). A language \(L\subseteq\Theta^{*}\) is called _offset-uniform_ if for any \(u,v\in L\), we have \(\mathsf{offset}(u)=\mathsf{offset}(v)\). The _dip_ of \(w\in\Theta^{*}\) is defined as \(\mathsf{dip}(w)=\max\{-\mathsf{offset}(u)\mid u\text{ is a prefix of }w\}\). We define \(e(w)=(\mathsf{dip}(w),\mathsf{offset}(w))\). Observe that for \(w\in(X\cup\bar{X})^{*}\) with \(|X|=1\) we have \(w\in\mathsf{Dyck}_{X}\) if and only if \(e(w)=(0,0)\). A language \(L\subseteq(X\cup\bar{X})^{*}\) is _not_ included in \(\mathsf{Dyck}_{X}\) if and only if there exists a word \(w\in L\) that satisfies one of the following violation conditions [24]:

1. an _offset violation_ (OV), where \(\mathsf{offset}(w)\neq 0\),
2. a _dip violation_ (DV), where \(\mathsf{dip}(w)>0\), i.e., there is a prefix \(u\) of \(w\) with \(\mathsf{offset}(u)<0\), or
3. a _mismatch violation_ (MV), where there exists a pair \(x,\bar{y}\) (for some \(x\neq y\)) of _mismatched_ letters in \(w\), i.e., \(w\) contains an infix \(xv\bar{y}\) where \(e(v)=(0,0)\).

For example, \(w_{1}=x\bar{x}\bar{x}x\) has a dip violation due to the prefix \(u=x\bar{x}\bar{x}\); \(w_{2}=xx\bar{x}\) has an offset violation and \(w_{3}=xx\bar{x}\bar{y}\) has a mismatch violation.

## 3 Asynchronous Programs

An _asynchronous program_ [10], henceforth simply called a _program_, is a tuple \(\mathscr{P}=(Q,\,\Sigma,\,\Gamma,\,\mathcal{G},\,\Delta,\,q_{0},\,q_{f},\,\gamma_{0})\), where \(Q\) is a finite set of _global states_, \(\Sigma\) is an alphabet of _event letters_, \(\Gamma\) is an alphabet of _handler names_ with \(\Sigma\cap\Gamma=\emptyset\), \(\mathcal{G}\) is a \(\mathsf{CFG}\) over the terminal symbols \(\Sigma\cup\Gamma\), \(\Delta\) is a finite set of transition rules (described below), \(q_{0}\in Q\) is the _initial state_, \(q_{f}\in Q\) is the _final state_, and \(\gamma_{0}\) is the _initial handler_. Transition rules in \(\Delta\) are of the form \(q\xhookrightarrow{a,A}q^{\prime}\), where \(q,q^{\prime}\in Q\) are global states, \(a\in\Gamma\) is a handler name, and \(A\) is a nonterminal symbol in \(\mathcal{G}\).
Let \(\mathbb{M}[S]\) denote the set of all multisets of elements from the set \(S\). A _configuration_ \((q,\mathbf{m})\in Q\times\mathbb{M}[\Gamma]\) of \(\mathscr{P}\) consists of a global state \(q\) and a multiset \(\mathbf{m}:\Gamma\to\mathbb{N}\) of pending handler instances. The _initial_ configuration of \(\mathscr{P}\) is \(c_{0}=(q_{0},\llbracket\gamma_{0}\rrbracket)\), where \(\llbracket\gamma_{0}\rrbracket\) denotes the singleton multiset containing \(\gamma_{0}\). A configuration is considered _final_ if its global state is \(q_{f}\). The rules in \(\Delta\) induce a transition relation on configurations of \(\mathscr{P}\): We have \((q,\mathbf{m})\xrightarrow{w}(q^{\prime},\mathbf{m}^{\prime})\) iff there is a rule \(q\xhookrightarrow{a,A}q^{\prime}\in\Delta\) and a word \(u\in L(\mathcal{G},A)\) such that \(\pi_{\Sigma}(u)=w\) and \(\mathbf{m}^{\prime}=(\mathbf{m}\ominus\llbracket a\rrbracket)\oplus\mathsf{Parikh}(\pi_{\Gamma}(u))\). Here, \(\mathbf{m}^{\prime\prime}=\mathbf{m}\oplus\mathbf{m}^{\prime}\) is the multiset which satisfies \(\mathbf{m}^{\prime\prime}(a)=\mathbf{m}(a)+\mathbf{m}^{\prime}(a)\) for each \(a\in\Gamma\); similarly, \(\mathbf{m}^{\prime\prime}=\mathbf{m}\ominus\mathbf{m}^{\prime}\) is the multiset which satisfies \(\mathbf{m}^{\prime\prime}(a)=\mathbf{m}(a)-\mathbf{m}^{\prime}(a)\) for each \(a\in\Gamma\), with the implicit assumption that \(\mathbf{m}(a)\geq\mathbf{m}^{\prime}(a)\). Here, \(\mathsf{Parikh}(w):\Gamma\to\mathbb{N}\) is the _Parikh image_ of \(w\) that maps each handler in \(\Gamma\) to its number of occurrences in \(w\). Note that the transition is feasible only if \(\mathbf{m}\) contains at least one instance of the handler \(a\).

Intuitively, a program consists of a set of atomic event handlers that communicate over a shared global state \(Q\). Each handler is a piece of sequential code that generates a word over a set of events \(\Sigma\) and, in addition, posts new instances of handlers from \(\Gamma\). A configuration \((q,\mathbf{m})\) represents the current value of the shared state \(q\) and a task buffer \(\mathbf{m}\) containing the posted, but not yet executed, handlers. At each step, a scheduler non-deterministically picks and removes a handler from the multiset of posted handlers and "runs" it. Running a handler changes the global state and produces a sequence of events over \(\Sigma\) as well as a multiset of newly posted handlers. The newly posted handlers are added to the task buffer.

We consider asynchronous programs as generators of words over the set of events. A _run_ of \(\mathscr{P}\) is a finite sequence of configurations \(c_{0}=(q_{0},\llbracket\gamma_{0}\rrbracket)\xrightarrow{w_{1}}c_{1}\xrightarrow{w_{2}}\ldots\xrightarrow{w_{\ell}}c_{\ell}\). It is an _accepting_ run if it ends in a final configuration. The _language_ of \(\mathscr{P}\) is defined as

\[L(\mathscr{P})=\{w\in\Sigma^{*}\ |\ w=w_{1}\cdots w_{\ell},\ \text{there is an accepting run}\ c_{0}\xrightarrow{w_{1}}\ldots\xrightarrow{w_{\ell}}c_{\ell}\}.\]

The size of the program \(\mathscr{P}\) is defined as \(|\mathscr{P}|=|Q|+|\mathcal{G}|+|\Delta|\), i.e., the combined size of states, grammar, and transitions. The _Dyck inclusion problem_ for programs asks, given a program \(\mathscr{P}\) over a set \((X\cup\bar{X})\) of events, whether every word in \(L(\mathscr{P})\) belongs to the Dyck language \(\mathsf{Dyck}_{X}\). We show the following main result.
[Main Theorem] Given a program \(\mathscr{P}\) with \(L(\mathscr{P})\subseteq(X\cup\bar{X})^{*}\), deciding if \(L(\mathscr{P})\subseteq\mathsf{Dyck}_{X}\) is \(\mathsf{EXPSPACE}\)-complete.

\(\mathsf{EXPSPACE}\)-hardness follows easily from the following result on language emptiness (by simply adding a loop with a letter \(\bar{x}\in\bar{X}\) at the final state). Therefore, the rest of the paper focuses on the \(\mathsf{EXPSPACE}\) upper bound.

[Theorem 6.2, Ganty and Majumdar [10]] Given a program \(\mathscr{P}\), checking if \(L(\mathscr{P})=\emptyset\) is \(\mathsf{EXPSPACE}\)-complete.

A nonterminal \(B\) in the grammar \(\mathcal{G}\) of a program \(\mathscr{P}\) is called _useful_ if there exists a run \(\rho\) of \(\mathscr{P}\) reaching \(q_{f}\) in which there exists a derivation tree containing \(B\). More precisely, there are two successive configurations \((q,\mathbf{m})\xrightarrow{w}(q^{\prime},\mathbf{m}^{\prime})\) in \(\rho\) such that there is a rule \(q\xhookrightarrow{a,A}q^{\prime}\) and a word \(u\in L(\mathcal{G},A)\) with \(\pi_{\Sigma}(u)=w\), \(\mathbf{m}^{\prime}=(\mathbf{m}\ominus\llbracket a\rrbracket)\oplus\mathsf{Parikh}(\pi_{\Gamma}(u))\), and \(B\) occurs in some derivation tree with root \(A\) and yield \(u\). There is a simple reduction from checking if a nonterminal is useful to checking language emptiness (see the full version), so we can check if a nonterminal is useful also in \(\mathsf{EXPSPACE}\). Therefore, in the following, we shall assume that all nonterminals are useful.

## 4 Checking Dyck Inclusion for VASS Coverability Languages

As a first technical construction, we show how to check Dyck inclusion for (succinctly defined) VASS languages. We shall reduce the problem for programs to this case.

### Models: VASS and Succinct Versions

**Vector Addition Systems with States.** A _vector addition system with states_ (VASS) is a tuple \(\mathcal{V}=(Q,\Sigma,I,E,q_{0},q_{f})\) where \(Q\) is a finite set of _states_, \(\Sigma\) is a finite alphabet of _input letters_, \(I\) is a finite set of _counters_, \(q_{0}\in Q\) is the _initial state_, \(q_{f}\in Q\) is the _final state_, and \(E\) is a finite set of _edges_ of the form \(q\xrightarrow{x,\delta}q^{\prime}\), where \(q,q^{\prime}\in Q\), \(x\in\Sigma\cup\{\varepsilon\}\), and \(\delta\in\{-1,0,1\}^{I}\). (Footnote 2: A more general definition of VASS would allow each transition to add an arbitrary vector over the integers. We instead restrict ourselves to the set \(\{-1,0,1\}\), since this suffices for our purposes, and the \(\mathsf{EXPSPACE}\)-hardness result by Lipton [19] already holds for VASS of this form.)

A _configuration_ of \(\mathcal{V}\) is a pair \((q,\mathbf{u})\in Q\times\mathbb{M}[I]\). The elements of \(\mathbb{M}[I]\) and \(\{-1,0,1\}^{I}\) can also be seen as vectors of length \(|I|\) over \(\mathbb{N}\) and \(\{-1,0,1\}\), respectively, and we sometimes denote them as such. The edges in \(E\) induce a transition relation on configurations: there is a transition \((q,\mathbf{u})\xrightarrow{x}(q^{\prime},\mathbf{u}^{\prime})\) if there is an edge \(q\xrightarrow{x,\delta}q^{\prime}\) in \(E\) such that \(\mathbf{u}^{\prime}(i)=\mathbf{u}(i)+\delta(i)\geq 0\) for all \(i\in I\). A _run_ of the VASS is a finite sequence of configurations \(c_{0}\xrightarrow{x_{1}}c_{1}\xrightarrow{x_{2}}\cdots\xrightarrow{x_{\ell}}c_{\ell}\) where \(c_{0}=(q_{0},\mathbf{0})\).
A run is said to reach a state \(q\in Q\) if the last configuration in the run is of the form \((q,\mathbf{m})\) for some multiset \(\mathbf{m}\). An _accepting_ run is a run whose final configuration has state \(q_{f}\). The _(coverability) language_ of \(\mathcal{V}\) is defined as

\[L(\mathcal{V})=\{w\in\Sigma^{*}\mid\text{there exists a run }(q_{0},\mathbf{0})=c_{0}\xrightarrow{x_{1}}\ldots\xrightarrow{x_{\ell}}c_{\ell}=(q_{f},\mathbf{u})\text{ with }w=x_{1}\cdots x_{\ell}\}.\]

The size of the VASS \(\mathcal{V}\) is defined as \(|\mathcal{V}|=|I|\cdot|E|\).

**Models with Succinct Control.** In this paper we need various models with _doubly succinct_ control, i.e., models with doubly exponentially many states. Informally speaking, a machine with finite control \(\mathcal{B}\), e.g. an NFA or a VASS, is doubly succinct if its set of control states is \(\Lambda^{M}\) where \(M\in\mathbb{N}\) is an exponential number given in binary encoding, and \(\Lambda\) is a finite alphabet. The initial and final state of \(\mathcal{B}\) are the states \(0^{M}\) and \(1^{M}\) for some letters \(0,1\in\Lambda\). Finally, the transitions of \(\mathcal{B}\) are given by _finite-state transducers_ \(\mathcal{T}\), i.e., asynchronous multitape automata recognizing relations \(R\subseteq(\Lambda^{M})^{k}\). For example, a _doubly succinct_ NFA (dsNFA in short) contains binary transducers \(\mathcal{T}_{a}\) for each \(a\in\Sigma\cup\{\varepsilon\}\) where \(\Sigma\) is the input alphabet, and \(\mathcal{B}\) contains a transition \(p\xrightarrow{x}q\) if and only if \((p,q)\) is accepted by \(\mathcal{T}_{x}\). A _doubly succinct VASS_ (dsVASS, for short) contains binary transducers \(\mathcal{T}_{x,i},\mathcal{T}_{x,\bar{i}},\mathcal{T}_{x,\varepsilon}\) for each \(x\in\Sigma\cup\{\varepsilon\}\) and \(i\in I\), where \(I\) is the set of counters. A state pair \((p,q)\) accepted by \(\mathcal{T}_{x,i}\) specifies a transition \(p\xrightarrow{x,\mathbf{e}_{i}}q\) in \(\mathcal{B}\), where \(\mathbf{e}_{i}\) only increments counter \(i\) and leaves other counters the same. Similarly, \(\mathcal{T}_{x,\bar{i}}\) and \(\mathcal{T}_{x,\varepsilon}\) specify decrementing transitions and transitions without counter updates. Later we will also use _(singly) succinct_ ECFGs, which are extended context-free grammars whose set of nonterminals is \(\Lambda^{M}\) where \(M\) is a unary encoded number. The set of productions is given in a suitable fashion by transducers. Let us remark that the precise definition of (doubly) succinct automata or grammars is not important for our paper, e.g. one could also use circuits instead of transducers to specify the transitions/productions.

### Checking Dyck Inclusion for dsVASS

We prove our first technical contribution: an \(\mathsf{EXPSPACE}\) procedure to check non-inclusion of a VASS language in a Dyck language. This involves checking if one of (OV), (DV), or (MV) occurs. We begin by showing how these violations can be detected for a (non-succinct) VASS. To this end, first we show that offset-uniformity of a VASS language implies a doubly exponential bound on the offset values for prefixes of accepted words (Theorem 4.1). Given an alphabet \(X\) and a number \(k\in\mathbb{N}\), we define the language

\[\mathscr{B}(X,k)=\{w\in(X\cup\bar{X})^{*}\mid\text{for every prefix }v\text{ of }w\colon|\mathsf{offset}(v)|\leq k\}.\]

[Theorem 4.1] Let \(\mathcal{V}\) be a VASS with \(L(\mathcal{V})\subseteq(X\cup\bar{X})^{*}\).
If \(L(\mathcal{V})\) is offset-uniform, then \(L(\mathcal{V})\subseteq\mathscr{B}(X,2^{2^{p(|\mathcal{V}|)}})\) for some polynomial function \(p\).

Proof. Let \(\mathcal{V}=(Q,X\cup\bar{X},I,E,q_{0},q_{f})\) be a VASS where \(L(\mathcal{V})\neq\emptyset\) is offset-uniform. The unique offset of \(L(\mathcal{V})\) is bounded doubly exponentially in \(|\mathcal{V}|\) since \(L(\mathcal{V})\) contains some word that is at most doubly exponentially long, a fact that follows from Rackoff's bound on covering runs [23]. Let \(C\subseteq Q\times\mathbb{M}[I]\) be the set of configurations that are reachable from \((q_{0},\mathbf{0})\) and from which the final state can be reached. Observe that for any configuration \(c\in C\) the language \(L(c)=\{w\in(X\cup\bar{X})^{*}\mid\exists\mathbf{u}\colon c\xrightarrow{w}(q_{f},\mathbf{u})\}\) is also offset-uniform since \(L(c)\subseteq\{w\in(X\cup\bar{X})^{*}\mid vw\in L(\mathcal{V})\}\) where \(v\in(X\cup\bar{X})^{*}\) is any word with \((q_{0},\mathbf{0})\xrightarrow{v}c\). Define the function \(f\colon C\to\mathbb{Z}\) where \(f(c)\) is the unique offset of the words in \(L(c)\). It remains to show that \(|f(c)|\) is bounded doubly exponentially for all \(c\in C\).

Let \(M\) be the set of all configurations from which the final state can be reached (hence \(C\subseteq M\)). Consider the following order on VASS configurations \(Q\times\mathbb{M}[I]\): \((q,\mathbf{u})\leq(q^{\prime},\mathbf{u}^{\prime})\) iff \(q=q^{\prime}\) and \(\mathbf{u}(i)\leq\mathbf{u}^{\prime}(i)\) for each \(i\in I\). The cardinality of the set \(\min(M)\) of minimal elements in \(M\) with respect to this order is bounded doubly exponentially in the size of \(\mathcal{V}\). This follows directly from the fact that Rackoff's doubly exponential bound [23] on the length of a covering run does not depend on the start configuration (but only the size of the VASS and the final configuration). An explicit bound for \(|\min(M)|\) is given in [4, Theorem 2]. Observe that if \(c_{1}\in M\) and \(c_{2}\in C\) with \(c_{1}\leq c_{2}\) then \(L(c_{1})\subseteq L(c_{2})\) and therefore \(L(c_{1})\) is also offset-uniform, having the same offset as \(L(c_{2})\). Hence, if for two configurations \(c_{1},c_{2}\in C\) there exists a configuration \(c\in M\) with \(c\leq c_{1}\) and \(c\leq c_{2}\), then \(f(c_{1})=f(c_{2})\). Since for every \(c_{2}\in C\) there exists \(c_{1}\in\min(M)\) with \(c_{1}\leq c_{2}\), the function \(f\) can only assume doubly exponentially many values on \(C\).

Finally, we claim that \(f(C)\subseteq\mathbb{Z}\) is an interval containing \(0\), which proves that the norms of elements in \(f(C)\) are bounded by the number of different values, i.e., doubly exponentially. Since we assumed \(L(\mathcal{V})\neq\emptyset\), some final configuration \((q_{f},\mathbf{u})\in C\) is reachable from \((q_{0},\mathbf{0})\), and therefore \(0\in f(C)\) since \(\varepsilon\in L((q_{f},\mathbf{u}))\). Consider the configuration graph \(\mathcal{C}\) of \(\mathcal{V}\) restricted to \(C\). For any edge \(c_{1}\to c_{2}\) in \(\mathcal{C}\) we have \(|f(c_{1})-f(c_{2})|\leq 1\) since VASS transitions consume at most one input symbol. Moreover, the underlying undirected graph of \(\mathcal{C}\) is connected since any configuration in \(C\) is reachable from \((q_{0},\mathbf{0})\in C\). Therefore \(f(C)\) is an interval, which concludes the proof.
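To make offset-uniformity concrete, here is a brute-force Python sketch (an illustration only; none of the algorithms in this paper enumerate words) that builds short words in the coverability language of a small VASS and checks that they all share offset \(0\). The VASS encodes the offset-uniform language \(\{(x\bar{x})^{m}(y\bar{y})^{n}\mid m\geq n\}\) discussed in the remark that follows; the state names and the encoding of \(\bar{x},\bar{y}\) as `'X'`, `'Y'` are our own.

```python
from collections import deque

# One-counter VASS for {(x xbar)^m (y ybar)^n | m >= n}: each xX loop adds 1,
# each yY loop subtracts 1 (blocked at 0), acceptance by reaching state 'f'.
EDGES = {  # state -> list of (letter or '' for epsilon, counter delta, next state)
    'q0': [('x', +1, 'q1'), ('', 0, 'q2')],
    'q1': [('X', 0, 'q0')],
    'q2': [('y', -1, 'q3'), ('', 0, 'f')],
    'q3': [('Y', 0, 'q2')],
    'f':  [],
}

def words_up_to(length, edges, start='q0', final='f'):
    """Enumerate accepted words of bounded length by BFS over configurations."""
    accepted, seen = set(), set()
    queue = deque([(start, 0, '')])  # (state, counter value, word read so far)
    while queue:
        state, counter, word = queue.popleft()
        if (state, counter, word) in seen:
            continue
        seen.add((state, counter, word))
        if state == final:
            accepted.add(word)
        for letter, delta, nxt in edges[state]:
            if counter + delta >= 0 and len(word + letter) <= length:
                queue.append((nxt, counter + delta, word + letter))
    return accepted

def offset(word):
    return sum(+1 if c in 'xy' else -1 for c in word)

words = words_up_to(8, EDGES)
assert words and {offset(w) for w in words} == {0}  # offset-uniform, offset 0
```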
Note that although \(\mathscr{B}(X,k)\) is a regular language for each \(X\) and \(k\), Theorem 4.1 does not imply that every offset-uniform VASS language is regular. For example, the VASS language \(\{(x\bar{x})^{m}(y\bar{y})^{n}\mid m\geq n\}\) is offset-uniform, but it is not regular. This is because Theorem 4.1 only bounds the offsets of prefixes of the input words, while the VASS's own counters might be unbounded.

The main consequence of Theorem 4.1 is that in a VASS we can track the offset using a doubly succinct control state. Thus, we have the following corollary.

[Corollary 4.2] The following problems can be decided in \(\mathsf{EXPSPACE}\): Given a VASS or dsVASS \(\mathcal{V}\), does \(\mathsf{offset}(w)=0\) hold for all \(w\in L(\mathcal{V})\)?

Proof. First assume \(\mathcal{V}\) is a VASS. We show that the problem can be reduced to the intersection non-emptiness problem for a VASS and a doubly succinct NFA, i.e., given a VASS \(\mathcal{V}\) and a doubly succinct NFA \(\mathcal{A}\), is the intersection \(L(\mathcal{V})\cap L(\mathcal{A})\) nonempty? One can construct in polynomial time a doubly succinct VASS for \(L(\mathcal{V})\cap L(\mathcal{A})\), as a product construction between \(\mathcal{V}\) and \(\mathcal{A}\). Since the emptiness problem for dsVASS is in \(\mathsf{EXPSPACE}\) ([2, Theorem 5.1]), we can also decide emptiness of \(L(\mathcal{V})\cap L(\mathcal{A})\) in \(\mathsf{EXPSPACE}\).

Define the number \(M=2^{2^{p(|\mathcal{V}|)}}\) where \(p\) is the polynomial from Theorem 4.1. Let \(K_{0}=\{w\in(X\cup\bar{X})^{*}\mid\mathsf{offset}(w)=0\}\). According to Theorem 4.1, we have \(L(\mathcal{V})\subseteq K_{0}\) if and only if \(L(\mathcal{V})\subseteq K_{0}\cap\mathscr{B}(X,M)\). By the remarks above, it suffices to construct a doubly succinct NFA for the complement of \(K_{0}\cap\mathscr{B}(X,M)\). The following doubly succinct deterministic finite automaton \(\mathcal{A}\) recognizes \(K_{0}\cap\mathscr{B}(X,M)\): Given an input word over \(X\cup\bar{X}\), the automaton tracks the current offset in the interval \([-M,M]\), stored in the control state as a binary encoding of length \(\log M=2^{p(|\mathcal{V}|)}\) together with a bit indicating the sign. If the absolute value of the offset exceeds \(M\), the automaton moves to a rejecting sink state. The state representing offset \(0\) is the initial and the only final state. Finally, we complement \(\mathcal{A}\) to obtain a doubly succinct NFA \(\bar{\mathcal{A}}\), with a unique final state, for the complement of \(K_{0}\cap\mathscr{B}(X,M)\).

Now assume \(\mathcal{V}\) is a dsVASS. Using Lipton's construction simulating doubly exponential counter values [19], we can construct a (conventional) VASS \(\mathcal{V}^{\prime}\), of size polynomial in \(|\mathcal{V}|\), with the same language (similar to [2, Theorem 5.1]). We can now apply the above construction.

Next, we check for (DV) or (MV), assuming offset uniformity. We will reduce both kinds of violations to the problem of searching for _marked Dyck factors_. A word of the form \(u\#v\bar{\#}w\) with \(u,v,w\in\{x,\bar{x}\}^{*}\) and \(v\in\mathsf{Dyck}_{x}\) is said to contain a _marked Dyck factor_. Intuitively, if a (DV) occurs in a word \(w\), there is a first time that the offset reaches \(-1\).
Placing a \(\bar{\#}\) at the place where this happens, and a \(\#\) right at the beginning, we have a word of the form \(\#u\bar{\#}v\) where \(u\in\mathsf{Dyck}_{x}\). Similarly for (MV), we replace two letters \(y\in X\) and \(\bar{z}\in\bar{X}\) with \(y\neq z\) by \(\#\) and \(\bar{\#}\), respectively, and look for a word \(u\#v\bar{\#}w\), where \(v\in\mathsf{Dyck}_{x}\).

[Proposition 4.3] The following problems can be decided in \(\mathsf{EXPSPACE}\): Given an offset-uniform VASS or dsVASS \(\mathcal{V}\), does \(L(\mathcal{V})\) contain a marked Dyck factor?

Proof. As in Corollary 4.2, given a dsVASS, we can convert to a polynomial-sized VASS with the same language and apply the following algorithm. We again reduce to the intersection nonemptiness problem between a VASS and a doubly succinct NFA, and use the fact that nonemptiness of dsVASS is in \(\mathsf{EXPSPACE}\) [2, Theorem 5.1]. As above, define the number \(M=2^{2^{p(|\mathcal{V}|)}}\) where \(p\) is the polynomial from Theorem 4.1. The automaton keeps track of the offset and also verifies that the input has the correct format \(u\#v\bar{\#}w\) where \(u,v,w\in\{x,\bar{x}\}^{*}\). Furthermore, upon reaching \(\#\) it starts tracking the current offset and verifies that (i) the offset stays nonnegative, (ii) the offset never exceeds \(2M\), and (iii) the offset is zero when reaching \(\bar{\#}\). If \(L(\mathcal{V})\) intersects \(L(\mathcal{A})\), then clearly \(\mathcal{V}\) is a positive instance of the problem. Conversely, assume that \(L(\mathcal{V})\) contains a word \(u\#v\bar{\#}w\) with \(v\in\mathsf{Dyck}_{x}\). By offset-uniformity of \(\mathcal{V}\) and by Theorem 4.1, each prefix \(v^{\prime}\) of \(v\) satisfies \(\mathsf{offset}(v^{\prime})=\mathsf{offset}(uv^{\prime})-\mathsf{offset}(u)\leq M-(-M)=2M\). Therefore \(u\#v\bar{\#}w\in L(\mathcal{A})\).

Let us put everything together. Let \(\rho\colon(X\cup\bar{X})^{*}\to\{x,\bar{x}\}^{*}\) be the morphism that replaces all letters from \(X\) (resp., \(\bar{X}\)) by the letter \(x\) (resp., \(\bar{x}\)). Given a dsVASS \(\mathcal{V}\) over \(X\cup\bar{X}\) we can construct in polynomial time three dsVASS \(\mathcal{V}_{\mathsf{o}},\mathcal{V}_{\mathsf{d}},\mathcal{V}_{\mathsf{m}}\) where

\[L(\mathcal{V}_{\mathsf{o}}) =\rho(L(\mathcal{V})),\]
\[L(\mathcal{V}_{\mathsf{d}}) =\{\#\rho(v)\bar{\#}\rho(\bar{y}w)\mid v\bar{y}w\in L(\mathcal{V})\text{ for some }v,w\in(X\cup\bar{X})^{*},\,y\in X\},\]
\[L(\mathcal{V}_{\mathsf{m}}) =\{\rho(u)\#\rho(v)\bar{\#}\rho(w)\mid uyv\bar{z}w\in L(\mathcal{V})\text{ for some }u,v,w\in(X\cup\bar{X})^{*},\,y\neq z\in X\}.\]

Observe that \(L(\mathcal{V})\subseteq\mathsf{Dyck}_{X}\) if and only if \(L(\mathcal{V}_{\mathsf{o}})\) has uniform offset \(0\) and \(L(\mathcal{V}_{\mathsf{d}})\) and \(L(\mathcal{V}_{\mathsf{m}})\) do not contain marked Dyck factors. Hence, to decide whether \(L(\mathcal{V})\subseteq\mathsf{Dyck}_{X}\) we first test that \(L(\mathcal{V}_{\mathsf{o}})\) has uniform offset \(0\), using Corollary 4.2, rejecting if not. Otherwise, we can apply Proposition 4.3 to test whether \(L(\mathcal{V}_{\mathsf{d}})\) or \(L(\mathcal{V}_{\mathsf{m}})\) contain marked Dyck factors. If one of the tests is positive, we know \(L(\mathcal{V})\not\subseteq\mathsf{Dyck}_{X}\), otherwise \(L(\mathcal{V})\subseteq\mathsf{Dyck}_{X}\).
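The word-level side of this characterization is easy to make executable. The following Python sketch (our own illustration; the decision procedure above never inspects individual words) classifies a single word according to (OV), (DV), and (MV), writing the closing letter \(\bar{x}\) as the uppercase of its opening partner.

```python
# 'matching' maps each open letter to its closing partner, e.g. {'x': 'X'}.

def e(word, opens, closes):
    """Return (dip(w), offset(w)) of word w, as defined in Section 2."""
    offset, dip = 0, 0
    for a in word:
        if a in opens:
            offset += 1
        elif a in closes:
            offset -= 1
        dip = max(dip, -offset)
    return dip, offset

def violations(word, matching):
    opens, closes = set(matching), set(matching.values())
    dip, offset = e(word, opens, closes)
    found = set()
    if offset != 0:
        found.add("offset")          # (OV)
    if dip > 0:
        found.add("dip")             # (DV)
    # (MV): an infix  a v b  with a open, b closing a *different* letter,
    # and e(v) = (0, 0); checked naively over all letter pairs
    for i, a in enumerate(word):
        for j in range(i + 1, len(word)):
            b = word[j]
            if a in opens and b in closes and b != matching[a] \
               and e(word[i + 1:j], opens, closes) == (0, 0):
                found.add("mismatch")
    return found

# The examples from Section 2: w1, w2, w3 (with 'X', 'Y' for xbar, ybar)
assert violations("xXXx", {'x': 'X', 'y': 'Y'}) == {"dip"}
assert violations("xxX",  {'x': 'X', 'y': 'Y'}) == {"offset"}
assert violations("xxXY", {'x': 'X', 'y': 'Y'}) == {"mismatch"}
```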
[Theorem 4.4] Given a dsVASS \(\mathcal{V}\) over the alphabet \(X\cup\bar{X}\), checking whether \(L(\mathcal{V})\subseteq\mathsf{Dyck}_{X}\) is \(\mathsf{EXPSPACE}\)-complete.

Let us remark that Theorem 4.4 can also be phrased slightly more generally. Above, we have defined the language of a VASS to be the set of input words for which a final state is reached. Such languages are also called _coverability languages_. Another well-studied notion is the _reachability language_ of a VASS, which consists of those words for which a configuration \((q_{f},\mathbf{0})\) is reached. Moreover, a VASS is _deterministic_ if for each input letter \(x\) and each state \(q\), there is at most one \(x\)-labeled transition starting in \(q\) (and there are no \(\varepsilon\)-transitions). We can now phrase Theorem 4.4 as follows: Given a VASS coverability language \(K\) and a reachability language \(L\) of a deterministic VASS, it is \(\mathsf{EXPSPACE}\)-complete to decide whether \(K\subseteq L\). This is in contrast to inclusion problems where \(K\) is drawn from a subclass of the coverability languages: This quickly leads to Ackermann-completeness [6]. In fact, even if we replace \(\mathsf{Dyck}_{X}\) in Theorem 4.4 with the set of prefixes of \(\mathsf{Dyck}_{\{x\}}\), the problem becomes Ackermann-complete (see the full version of this work).

## 5 Checking Dyck Inclusion for Programs

We now describe our algorithm for checking inclusion in \(\mathsf{Dyck}_{X}\) for programs. Our argument is similar to the case of dsVASS: we first construct three auxiliary programs \(\mathscr{P}_{\mathsf{o}}\), \(\mathscr{P}_{\mathsf{d}}\), and \(\mathscr{P}_{\mathsf{m}}\), and then we use them to detect each type of violation in the original program. We construct the program \(\mathscr{P}_{\mathsf{o}}\) for checking offset violations by projecting the Dyck letters to the one-dimensional Dyck alphabet \(\{x,\bar{x}\}\). The programs \(\mathscr{P}_{\mathsf{d}}\) and \(\mathscr{P}_{\mathsf{m}}\) are constructed by first placing two markers like for VASS, and then projecting to \(\{x,\bar{x}\}\). As in the algorithm for VASS, we check whether \(L(\mathscr{P}_{\mathsf{o}})\) has uniform offset \(0\), and whether \(L(\mathscr{P}_{\mathsf{d}})\) and \(L(\mathscr{P}_{\mathsf{m}})\) contain marked Dyck factors. For these checks, we convert the three programs into dsVASS \(\mathcal{V}_{\mathsf{o}}\), \(\mathcal{V}_{\mathsf{d}}\), and \(\mathcal{V}_{\mathsf{m}}\), respectively, in such a way that violations are preserved.

To be more precise, this conversion from programs to dsVASS will preserve the _downward closure_ with respect to a specific order that we define below. The global downward closure procedure is obtained by composing a local downward closure procedure applied to each task. On the task level, the order \(\sqsubseteq\) is a combination of the subword order on the handler names in \(\Gamma\) and the syntactic order of \(\mathsf{Dyck}_{X}\) over the event letters. The core technical result is a transformation from context-free grammars into dsNFA which preserves the downward closure with respect to \(\sqsubseteq\). One key aspect of our downward closure construction is an important condition on the pumps that appear in the context-free grammar.
A context-free grammar \(\mathcal{G}\) is _tame-pumping_ if for every pump \(A\Rightarrow^{*}uAv\), we have \(\mathsf{offset}(u)\geq 0\) and \(\mathsf{offset}(v)=-\mathsf{offset}(u)\). A derivation \(A\Rightarrow^{*}uAv\) is called an _increasing pump_ if \(\mathsf{offset}(u)>0\), otherwise it is called a _zero pump_. An asynchronous program is _tame-pumping_ if its grammar is tame-pumping.

Note that while our definition of a tame-pumping grammar is syntactic, it actually only depends on the generated language, assuming every nonterminal occurs in a derivation: In that case, a grammar is tame-pumping if and only if (i) the set of offsets and (ii) the set of dips of words in its language are both finite. The following lemma summarizes some properties of tame-pumping and why it is useful for our algorithm. The proof can be found in the full version.

[Lemma 5.2]

1. We can check in \(\mathsf{coNP}\) whether a given context-free grammar over \(\{x,\bar{x}\}\) is tame-pumping. Furthermore, given a nonterminal \(A_{0}\), we can check in \(\mathsf{NP}\) whether \(A_{0}\) has a zero pump (resp., increasing pump).
2. There exists a polynomial \(p\) such that, if \(\mathcal{G}\) is tame-pumping, then for every nonterminal \(A\) of \(\mathcal{G}\) and every \(w\in L(\mathcal{G},A)\) we have \(\mathsf{dip}(w)\leq 2^{p(|\mathcal{G}|)}\).
3. If \(\mathscr{P}\) is not tame-pumping, then \(L(\mathscr{P})\not\subseteq\mathsf{Dyck}_{X}\).

Thus, if \(\mathscr{P}\) is not tame-pumping, the refinement checking algorithm rejects immediately. From now on, we assume that \(\mathscr{P}\) is tame-pumping.

### Combining the subword order and the syntactic order

Suppose \(\Gamma\) is an alphabet and let \(\Theta=\Gamma\cup\{x,\bar{x}\}\). Define \(\bar{a}=a\) for \(a\in\Gamma\). By \(\preccurlyeq\), we denote the _subword ordering_ on \(\Gamma^{*}\), i.e. \(u\preccurlyeq v\) if and only if \(u\) can be obtained from \(v\) by deleting some letters. Formally, there exist words \(u_{1},\ldots,u_{n},v_{0},\ldots,v_{n}\in\Gamma^{*}\) such that \(u=u_{1}\cdots u_{n}\) and \(v=v_{0}u_{1}v_{1}\cdots u_{n}v_{n}\). For \(u,v\in\{x,\bar{x}\}^{*}\), we write \(u\trianglelefteq v\) if \(\mathsf{offset}(u)=\mathsf{offset}(v)\) and \(\mathsf{dip}(u)\geq\mathsf{dip}(v)\). In fact, \(\trianglelefteq\) is the _syntactic order_ with respect to the Dyck language, i.e. if \(u\trianglelefteq v\) and \(rus\in\mathsf{Dyck}_{x}\) then \(rvs\in\mathsf{Dyck}_{x}\) for all \(r,s\). We define the ordering \(\sqsubseteq^{\prime}\) on \(\Theta^{*}\) by \(z_{1}\sqsubseteq^{\prime}z_{2}\) if and only if \(\pi_{x,\bar{x}}(z_{1})\trianglelefteq\pi_{x,\bar{x}}(z_{2})\), and \(\pi_{\Gamma}(z_{1})\preccurlyeq\pi_{\Gamma}(z_{2})\). For example, \(a\bar{x}xc\sqsubseteq^{\prime}xabc\bar{x}\) because \(ac\) is a subword of \(abc\), and both \(\bar{x}x\) and \(x\bar{x}\) have offset \(0\), but \(\bar{x}x\) has a larger dip.

Let \(\#,\bar{\#}\) be two fresh letters, called _markers_. The set of _marked words_ is defined as

\[\mathscr{M}=\Theta^{*}\{\varepsilon,\#\}\Theta^{*}\{\varepsilon,\bar{\#}\}\Theta^{*}.\]

A marked word should be viewed as an infix of a larger word \(u\#v\bar{\#}w\). The set of _admissible_ marked words, denoted by \(\mathscr{A}\), consists of those words \(z\in\mathscr{M}\) which are an infix of a word \(u\#v\bar{\#}w\) where \(v\in\mathsf{Dyck}_{x}\). For example, a marked word \(u\#v\) is admissible if \(v\) is a prefix of a Dyck word. On the set of admissible marked words, we define an ordering \(\sqsubseteq\).
To do so, we first define for each marked word \(z\in\mathscr{M}\) two words \(\mathsf{inside}(z)\) and \(\mathsf{outside}(z)\) in \(\Theta^{*}\) as follows: Let \(u,v,w\in\Theta^{*}\) such that either \(z=v\), \(z=u\#v\), \(z=v\bar{\#}w\), or \(z=u\#v\bar{\#}w\). Then we define \(\mathsf{inside}(z)=v\) and \(\mathsf{outside}(z)=uw\) (here, \(u=\varepsilon\) if it is not part of \(z\), same for \(w\)). Given two admissible marked words \(z_{1},z_{2}\in\mathscr{A}\) we define \(z_{1}\sqsubseteq z_{2}\) if and only if \(z_{1}\) and \(z_{2}\) contain the same markers, and \(\mathsf{inside}(z_{1})\sqsubseteq^{\prime}\mathsf{inside}(z_{2})\), and \(\mathsf{outside}(z_{1})\sqsubseteq^{\prime}\mathsf{outside}(z_{2})\). For example, \(a\bar{x}xc\#a\sqsubseteq xabc\bar{x}\#ab\) because \(a\bar{x}xc\sqsubseteq^{\prime}xabc\bar{x}\) and \(a\sqsubseteq^{\prime}ab\). For a language \(L\subseteq\mathscr{M}\) we denote by \(L{\downarrow}\) the downward closure of \(L\) within \(\mathscr{A}\) with respect to the ordering \(\sqsubseteq\). Thus, we define:

\[L{\downarrow}=\{u\in\mathscr{A}\mid\exists v\in L\cap\mathscr{A}\colon u\sqsubseteq v\}.\]

[Theorem 5.3] Given a tame-pumping \(\mathsf{CFG}\) \(\mathcal{G}\), we can compute in polynomial space a doubly succinct NFA \(\mathcal{A}\) such that \(L(\mathcal{A}){\downarrow}=L(\mathcal{G}){\downarrow}\) and \(|\mathcal{A}|\) is polynomially bounded in \(|\mathcal{G}|\).

We explain how to prove Theorem 5.3 in Section 6. Let us make a few remarks. While downward closed sets with respect to the subword ordering are always regular, this does not hold for \(\sqsubseteq\). Consider the language \(L=(ax)^{*}\) where \(a\in\Gamma\) is a handler name and \(x\in X\) is an event letter. Then \(L{\downarrow}\) consists of all words \(w\in\{a,x,\bar{x}\}^{*}\) where \(|w|_{a}\leq|w|_{x}-|w|_{\bar{x}}\), which is not a regular language. Furthermore, the automaton in Theorem 5.3 may indeed require doubly exponentially many states. For example, given a number \(n\), consider the language \(L=\{u\bar{x}^{2^{n}}\#x^{2^{n}}\bar{u}\mid u\in\{ax,bx\}^{*}\}\) where \(\Gamma=\{a,b\}\) is the set of handler names and \(X=\{x\}\). Here we define \(\overline{a_{1}a_{2}\cdots a_{n}}=\bar{a}_{n}\cdots\bar{a}_{2}\bar{a}_{1}\) for a word \(a_{1}\cdots a_{n}\in\{a,b,x\}^{*}\) where \(\bar{a}=a\) and \(\bar{b}=b\). This is generated by a tame-pumping context-free grammar of size linear in \(n\). However, for any \(\mathcal{A}\) with \(L(\mathcal{A}){\downarrow}=L{\downarrow}\), projecting to just \(a\) and \(b\) yields the language \(K=\{uu^{\mathsf{rev}}\mid u\in\{a,b\}^{*},\ |u|\leq 2^{n}\}{\downarrow}\), for which an NFA requires at least \(2^{2^{n}}\) states.

Finally, note that the restriction to admissible words is crucial: If we defined the ordering \(\sqsubseteq\) on all words of \(\mathscr{M}\), then for the tame-pumping language \(L=\{x^{n}\#\bar{x}^{n}\mid n\in\mathbb{N}\}\), the downward closure would not be regular, because an NFA would be unable to preserve the unbounded offset at the separator \(\#\). A key observation in this work is that in combination with tame pumping, admissibility guarantees that the offset at the borders \(\#\) and \(\bar{\#}\) is bounded (see Lemma 6.7), which enables a finite automaton to preserve it.
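The definitions of \(\trianglelefteq\) and \(\sqsubseteq^{\prime}\) are easily made executable. The following Python sketch (our own illustration; the encoding of \(\bar{x}\) as `'X'` is an assumption) checks \(z_{1}\sqsubseteq^{\prime}z_{2}\) and reproduces the example \(a\bar{x}xc\sqsubseteq^{\prime}xabc\bar{x}\).

```python
def is_subword(u, v):
    """u is a subword of v: u can be obtained from v by deleting letters."""
    it = iter(v)
    return all(c in it for c in u)  # 'in' consumes the iterator left to right

def dip_offset(w):
    """(dip, offset) of the projection of w onto the Dyck letters {'x', 'X'}."""
    offset, dip = 0, 0
    for c in w:
        if c == 'x':
            offset += 1
        elif c == 'X':
            offset -= 1
        dip = max(dip, -offset)
    return dip, offset

def leq_prime(z1, z2):
    """z1 <=' z2: the Dyck projections are related by the syntactic order
    (same offset, dip(z1) >= dip(z2)) and the handler projections by the
    subword order."""
    d1, o1 = dip_offset(z1)
    d2, o2 = dip_offset(z2)
    handlers1 = [c for c in z1 if c not in 'xX']
    handlers2 = [c for c in z2 if c not in 'xX']
    return o1 == o2 and d1 >= d2 and is_subword(handlers1, handlers2)

# The example from the text: a xbar x c <=' x a b c xbar
assert leq_prime("aXxc", "xabcX")
```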
Given a tame-pumping asynchronous program \(\mathscr{P}\), we can now compute a dsVASS \(\mathcal{V}\) with the same downward closure: Its counters are the handler names \(a\in\Gamma\) in \(\mathscr{P}\). For each nonterminal \(A\) we apply Theorem 5.3 to \(\mathcal{G}_{A}\), which is the grammar of \(\mathscr{P}\) with start symbol \(A\), and obtain a dsNFA \(\mathcal{B}_{A}\). We replace each transition \(q\xhookrightarrow{a,A}q^{\prime}\) by the following gadget: First, it decrements the counter for the handler name \(a\). Next, the gadget simulates the dsNFA \(\mathcal{B}_{A}\) where handlers \(b\in\Gamma\) are interpreted as counter increments. Finally, when reaching the final state of \(\mathcal{B}_{A}\) we can non-deterministically switch to \(q^{\prime}\).

[Corollary 5.4] Given an asynchronous program \(\mathscr{P}\) with tame-pumping, we can compute in polynomial space a doubly succinct VASS \(\mathcal{V}\) such that \(L(\mathscr{P}){\downarrow}=L(\mathcal{V}){\downarrow}\) and \(|\mathcal{V}|\) is polynomially bounded in \(|\mathscr{P}|\).

The details of the proof are given in the full version.

### The algorithm

We are now ready to explain the whole algorithm. Given an asynchronous program \(\mathscr{P}=(Q,X\cup\bar{X},\Gamma,\mathcal{G},\Delta,q_{0},q_{f},\gamma_{0})\), we want to check if \(L(\mathscr{P})\subseteq\mathsf{Dyck}_{X}\). Recall that, wlog, we can assume all nonterminals are useful, meaning every nonterminal is involved in some accepting run. The algorithm is presented in Algorithm 1 (a sketch of its flow is given below). As a first step, the algorithm verifies that \(\mathscr{P}\) is tame-pumping using Lemma 5.2. Next we construct the following auxiliary asynchronous programs \(\mathscr{P}_{\mathsf{o}}\), \(\mathscr{P}_{\mathsf{d}}\), \(\mathscr{P}_{\mathsf{m}}\) to detect offset, dip, and mismatch violations in \(L(\mathscr{P})\). Let \(\rho\colon(X\cup\bar{X})^{*}\to\{x,\bar{x}\}^{*}\) be the morphism which replaces all letters in \(X\) by the unique letter \(x\) and all letters in \(\bar{X}\) by the unique letter \(\bar{x}\). The programs \(\mathscr{P}_{\mathsf{o}}\), \(\mathscr{P}_{\mathsf{d}}\), \(\mathscr{P}_{\mathsf{m}}\) recognize the following languages over the alphabet \(\{x,\bar{x},\#,\bar{\#}\}\):

\[L(\mathscr{P}_{\mathsf{o}}) =\{\rho(w)\mid w\in L(\mathscr{P})\}, \tag{1}\]
\[L(\mathscr{P}_{\mathsf{d}}) =\{\#\rho(v)\bar{\#}\rho(\bar{y}w)\mid v\bar{y}w\in L(\mathscr{P})\text{ for some }v,w\in(X\cup\bar{X})^{*},\,y\in X\},\]
\[L(\mathscr{P}_{\mathsf{m}}) =\{\rho(u)\#\rho(v)\bar{\#}\rho(w)\mid uyv\bar{z}w\in L(\mathscr{P})\text{ for some }u,v,w\in(X\cup\bar{X})^{*},\,y\neq z\in X\}.\]

In fact, if the original asynchronous program \(\mathscr{P}\) is tame-pumping, we can ensure that \(\mathscr{P}_{\mathsf{o}}\), \(\mathscr{P}_{\mathsf{d}}\), \(\mathscr{P}_{\mathsf{m}}\) are also tame-pumping (see the full version for details). It remains to verify whether \(L(\mathscr{P}_{\mathsf{o}})\) has uniform offset \(0\), and \(L(\mathscr{P}_{\mathsf{d}})\) and \(L(\mathscr{P}_{\mathsf{m}})\) do not contain marked Dyck factors. By Corollary 5.4 we can compute for each \(\mathsf{x}\in\{\mathsf{o},\mathsf{d},\mathsf{m}\}\) a dsVASS \(\mathcal{V}_{\mathsf{x}}\) with \(L(\mathcal{V}_{\mathsf{x}}){\downarrow}=L(\mathscr{P}_{\mathsf{x}}){\downarrow}\). Since the ordering \(\sqsubseteq\) preserves offsets and the existence of marked Dyck factors, it suffices to perform these checks on \(\mathcal{V}_{\mathsf{o}}\), \(\mathcal{V}_{\mathsf{d}}\), and \(\mathcal{V}_{\mathsf{m}}\), using Corollary 4.2 and Proposition 4.3. If one of the tests is positive, we conclude \(L(\mathscr{P})\not\subseteq\mathsf{Dyck}_{X}\); otherwise \(L(\mathscr{P})\subseteq\mathsf{Dyck}_{X}\).
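Since the figure for Algorithm 1 is referenced but not reproduced in this version, the following Python-style pseudocode sketches its overall flow. Every helper name is a placeholder for the corresponding construction described in the text, not an actual API.

```python
def dyck_inclusion(P):                           # decide L(P) subseteq Dyck_X
    if not is_tame_pumping(P):                   # Lemma 5.2 (1), coNP check
        return False                             # non-tame pump => violation, Lemma 5.2 (3)
    P_o, P_d, P_m = build_auxiliary_programs(P)  # the languages in (1)
    V_o = to_dsVASS(P_o)                         # Corollary 5.4: same downward
    V_d = to_dsVASS(P_d)                         # closure w.r.t. the order
    V_m = to_dsVASS(P_m)
    if not has_uniform_offset_zero(V_o):         # Corollary 4.2, EXPSPACE
        return False                             # offset violation (OV)
    if contains_marked_dyck_factor(V_d):         # Proposition 4.3, EXPSPACE
        return False                             # dip violation (DV)
    if contains_marked_dyck_factor(V_m):         # Proposition 4.3, EXPSPACE
        return False                             # mismatch violation (MV)
    return True
```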
## 6 Computing Downward Closures and the Proof of Theorem 5.3

It remains to show how the automaton \(\mathcal{A}\) for the downward closure in Theorem 5.3 is constructed. As a warm-up, let us illustrate how to construct from a context-free grammar \(\mathcal{G}\) an NFA \(\mathcal{A}\) for the subword closure of \(L(\mathcal{G})\), cf. [5]. Here, _subword closure_ refers to the downward closure with respect to the subword ordering \(\preccurlyeq\). Notice that this is a special case of Theorem 5.3, namely where \(L(\mathcal{G})\subseteq\Gamma^{*}\). The basic idea is that every derivation tree of \(\mathcal{G}\) can be obtained by inserting pumps into a _skeleton_, a derivation tree without vertical repetitions of nonterminals. The skeleton can be guessed by an (exponentially large) automaton \(\mathcal{A}\) and the effects of pumps are abstracted as follows: For each nonterminal \(A\) one can compute the subalphabets \(\Gamma_{A,\mathsf{L}},\Gamma_{A,\mathsf{R}}\subseteq\Gamma\) containing all letters occurring on the left side \(u\) and the right side \(v\) of a pump \(A\Rightarrow^{*}uAv\) (see the sketch below). Instead of inserting pumps, the automaton for the subword closure inserts arbitrary words \(u^{\prime}\in\Gamma^{*}_{A,\mathsf{L}}\) and \(v^{\prime}\in\Gamma^{*}_{A,\mathsf{R}}\) on the left or right side of \(A\), respectively. This is sufficient because for any word \(w\), the subword closure of the language \(w^{*}\) contains exactly those words that consist only of letters present in \(w\).

The difficulty in proving Theorem 5.3 is to preserve, not only the subword closure, but also the downward closure with respect to the syntactic order \(\unlhd\) on the letters \(\{x,\bar{x}\}\). To do so, we need to distinguish between two types of pumps. Consider the derivation tree for a marked word \(z=u\#v\bar{\#}w\), depicted left in Figure 1. Observe that removing one of the three pumps in blue does not change the offset of \(\mathsf{inside}(z)=v\) or \(\mathsf{outside}(z)=uw\), because \(\mathcal{G}\) is tame-pumping. Such pumps, which are completely contained in \(\mathsf{inside}(z)\) or \(\mathsf{outside}(z)\), will be called _undivided_. However, one needs to be more careful when removing _divided_ pumps, e.g., the red pump in the second derivation tree of Figure 1. Removing the red pump decreases the offset of \(\mathsf{outside}(z)\), while increasing the offset of \(\mathsf{inside}(z)\) by the same amount. We will proceed in two transformations, which preserve the downward closure w.r.t. \(\sqsubseteq\). In the first transformation we obtain a grammar whose derivation trees do not contain any undivided pumps. In the second step we additionally eliminate divided pumps.
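Before turning to these transformations, here is a small Python sketch of the warm-up construction above: it computes the pump alphabets \(\Gamma_{A,\mathsf{L}}\) and \(\Gamma_{A,\mathsf{R}}\) for a grammar in Chomsky normal form. The encoding of productions as pairs and the fixpoint formulation are our own; the real construction additionally guesses the skeleton.

```python
# Productions are encoded as (A, rhs), where rhs is a length-1 terminal
# string or a pair (B, C) of nonterminals.

def pump_alphabets(productions, nonterminals):
    # letters[B]: terminals derivable from B (least fixpoint)
    letters = {A: set() for A in nonterminals}
    changed = True
    while changed:
        changed = False
        for A, rhs in productions:
            new = {rhs} if isinstance(rhs, str) else letters[rhs[0]] | letters[rhs[1]]
            if not new <= letters[A]:
                letters[A] |= new
                changed = True
    # child edges of derivation trees, remembering the sibling and its side
    edges = []  # (parent, child, sibling, side the sibling is on: 'L'/'R')
    for A, rhs in productions:
        if not isinstance(rhs, str):
            B, C = rhs
            edges.append((A, B, C, 'R'))  # descend into B; C sits to the right
            edges.append((A, C, B, 'L'))  # descend into C; B sits to the left
    # reach[A]: nonterminals reachable from A via child edges (reflexive)
    reach = {A: {A} for A in nonterminals}
    changed = True
    while changed:
        changed = False
        for P, Q, _, _ in edges:
            for A in nonterminals:
                if P in reach[A] and Q not in reach[A]:
                    reach[A].add(Q)
                    changed = True
    gamma_L = {A: set() for A in nonterminals}
    gamma_R = {A: set() for A in nonterminals}
    for P, Q, sibling, side in edges:
        for A in nonterminals:
            if P in reach[A] and A in reach[Q]:  # edge lies on an A-to-A cycle
                (gamma_L if side == 'L' else gamma_R)[A] |= letters[sibling]
    return gamma_L, gamma_R

# Example: S -> A S | S A | a and A -> a, so S has pumps S =>* aS and S =>* Sa
prods = [('S', ('A', 'S')), ('S', ('S', 'A')), ('S', 'a'), ('A', 'a')]
gL, gR = pump_alphabets(prods, ['S', 'A'])
assert gL['S'] == {'a'} and gR['S'] == {'a'} and gL['A'] == set()
```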
### Abstracting undivided pumps

Recall that \(\mathscr{M}=\Theta^{*}\{\varepsilon,\#\}\Theta^{*}\{\varepsilon,\bar{\#}\}\Theta^{*}\) where \(\Theta=\Gamma\cup\{x,\bar{x}\}\). In the following we only consider _uniformly marked_ grammars \(\mathcal{G}\), that is, we assume \(L(\mathcal{G})\) is contained in one of the subsets \(\Theta^{*}\#\Theta^{*}\bar{\#}\Theta^{*}\), \(\Theta^{*}\#\Theta^{*}\), \(\Theta^{*}\bar{\#}\Theta^{*}\), or \(\Theta^{*}\). This is not a restriction since we can split the given grammar \(\mathcal{G}\) into four individual grammars, covering the four types of marked words, and treat them separately. This allows us to partition the set of nonterminals \(N\) into \(N_{\#\bar{\#}}\cup N_{\#}\cup N_{\bar{\#}}\cup N_{0}\) where \(N_{\#\bar{\#}}\)-nonterminals only produce marked words in \(\Theta^{*}\#\Theta^{*}\bar{\#}\Theta^{*}\), \(N_{\#}\)-nonterminals only produce marked words in \(\Theta^{*}\#\Theta^{*}\), etc. A pump \(A\Rightarrow^{*}uAv\) is _undivided_ if \(A\in N_{\#\bar{\#}}\cup N_{0}\), and _divided_ otherwise. Our first goal will be to eliminate undivided pumps. A derivation tree without undivided pumps may still contain exponentially large subtrees below \(N_{0}\)-nonterminals. Such subtrees will also be "flattened" in this step, see the first transformation step in Figure 1.

[Definition 6.1] A context-free grammar \(\mathcal{G}=(N,\Theta\cup\{\#,\bar{\#}\},P,S)\) is _almost-pumpfree_ iff

1. (C1) \(\mathcal{G}\) does not have undivided pumps, and
2. (C2) for all productions \(A\to\alpha\) with \(A\in N_{0}\) either \(\alpha=a\in\Theta\) or \(\alpha=(\Gamma^{\prime})^{*}\) for some \(\Gamma^{\prime}\subseteq\Gamma\).

We will now explain how to turn any uniformly marked \(\mathsf{CFG}\) into an almost-pumpfree one. The resulting (extended) grammar will be exponentially large but can be represented succinctly. Recall that a _succinct ECFG_ (\(\mathsf{sECFG}\)) is an extended context-free grammar \(\mathcal{G}\) whose nonterminals are polynomially long strings and whose productions are given by finite-state transducers. For example, one of the transducers accepts the finite relation of all triples \((A,B,C)\) such that there exists a production \(A\to BC\). Productions either adhere to Chomsky normal form or have the form \(A\to B\). The latter enables us to simulate \(\mathsf{PSPACE}\)-computations in the grammar without side effects, see Lemma 6.5 below.

[Proposition 6.2] Given a uniformly marked tame-pumping \(\mathsf{CFG}\) \(\mathcal{G}\), one can compute in polynomial space a tame-pumping almost-pumpfree \(\mathsf{sECFG}\) \(\mathcal{G}^{\prime}\) such that \(L(\mathcal{G}){\downarrow}=L(\mathcal{G}^{\prime}){\downarrow}\) and \(|\mathcal{G}^{\prime}|\) is polynomially bounded in \(|\mathcal{G}|\).

To prove Proposition 6.2, we first need some auxiliary results, which are mainly concerned with computing the minimal dips and letter occurrences within undivided pumps of a grammar \(\mathcal{G}\). Recall that for the subword closure we computed for each nonterminal \(A\) the subalphabets \(\Gamma_{A,\mathsf{L}}\) and \(\Gamma_{A,\mathsf{R}}\), and inserted arbitrary words over \(\Gamma_{A,\mathsf{L}}\) and \(\Gamma_{A,\mathsf{R}}\) left and right of the nonterminal \(A\). For the refined order \(\sqsubseteq\) we may only use a letter \(a\in\Gamma\) after simulating the minimal dip which is required to produce the letter \(a\). For a word \(w\in\Theta^{*}\) we define the set \(\psi(w)\) of all pairs \((n,m)\in\mathbb{N}^{2}\) such that \(n\geq\mathsf{dip}(w)\) and \(m=n+\mathsf{offset}(w)\).
In other words, \(\psi(w)\) is the reachability relation induced by \(w\), interpreted as counter instructions. Recall that _Presburger arithmetic_ is the first-order theory of \((\mathbb{N},+,<,0,1)\). As an auxiliary step, we will compute existential Presburger formulas capturing the relation \(\psi(u)\times\psi(v)\) for all pumps \(A\Rightarrow^{*}uAv\) of a nonterminal \(A\). In the following lemma, when we say that we can _compute a formula for a relation \(R\subseteq\mathbb{N}^{k}\) in polynomial space_, we mean that there is a non-deterministic polynomial-space algorithm, where each non-deterministic branch computes a polynomial-size formula for a relation \(R_{i}\) such that if \(R_{1},\ldots,R_{n}\) are the relations of all the branches, then \(R=\bigcup_{i=1}^{n}R_{i}\). Here we tacitly use the fact that \(\mathsf{NPSPACE}=\mathsf{PSPACE}\) [25].

[Lemma 6.3] Given an offset-uniform \(\mathsf{CFG}\) \(\mathcal{G}\) with \(L(\mathcal{G})\subseteq\Theta^{*}\$\Theta^{*}\), where \(\$\in\Theta\), we can compute in polynomial space an existential Presburger formula for the relation

\[\bigcup_{u\$v\in L(\mathcal{G})}\psi(u)\times\psi(v)\subseteq\mathbb{N}^{4}.\]

Proof sketch. The result of Lemma 6.3 was already proved in [1, Proposition 3.8], under the additional assumption that the given context-free grammar \(\mathcal{G}\) for \(L\) is _annotated_ (they even show that in this case the formula can be computed in \(\mathsf{NP}\)). We call \(\mathcal{G}\) annotated if for every nonterminal \(A\) the minimal dip that can be achieved by a word in \(L(\mathcal{G},A)\) is given as an input, denoted by \(\mathsf{mindip}(A)\). Hence, it remains to show how to compute the annotation of an offset-uniform grammar in \(\mathsf{PSPACE}\), which is possible using a simple saturation algorithm. For each nonterminal \(A\), the algorithm stores a number \(D(A)\) satisfying \(D(A)\geq\mathsf{mindip}(A)\). Initially, \(D(A)\) is set to an upper bound for \(\mathsf{mindip}(A)\), which by Lemma 5.2 (2) can be chosen to be exponentially large in \(|\mathcal{G}|\). In each round the function \(D\) is updated as follows: For each production \(A\to BC\) we set \(D(A)\) to the minimum of \(D(A)\) and \(\max\{D(B),D(C)-\mathsf{offset}(B)\}\), where \(\mathsf{offset}(B)\) is the unique offset of \(L(\mathcal{G},B)\). Clearly, the algorithm can be implemented in polynomial space since the numbers are bounded exponentially. Termination of the algorithm is guaranteed since the numbers \(D(A)\) are non-increasing.

With Lemma 6.3 in hand, we can now prove the following lemma, which allows us to check whether pumps with certain letter occurrences exist for certain minimal dips.

[Lemma 6.4] Given a tame-pumping \(\mathsf{CFG}\) \(\mathcal{G}\) such that \(L(\mathcal{G})\subseteq\mathscr{M}\), a nonterminal \(A\) in \(\mathcal{G}\), a letter \(a\in\Gamma\) and two numbers \(d_{\mathsf{L}},d_{\mathsf{R}}\in\mathbb{N}\), we can decide in \(\mathsf{PSPACE}\) if there exists a derivation \(A\Rightarrow^{*}uAv\) such that \(u\) contains the letter \(a\) (or symmetrically, whether \(v\) contains the letter \(a\)), \(\mathsf{dip}(u)\leq d_{\mathsf{L}}\), and \(\mathsf{dip}(v)\leq d_{\mathsf{R}}\).
Furthermore, we can also decide in \(\mathsf{PSPACE}\) whether a derivation with the above properties exists that also satisfies \(\mathsf{offset}(u)>0\).

Proof sketch. We first construct the \(\mathsf{CFG}\) \(\mathcal{G}_{A}\) for the language of pumps of the nonterminal \(A\), meaning for \(L(\mathcal{G}_{A})=\{u\$v\mid A\Rightarrow^{*}uAv\}\). Then we intersect with the regular language \(\Theta^{*}a\Theta^{*}\$\Theta^{*}\), and apply Lemma 6.3 to the resulting grammar. This is possible, because tame-pumping implies that the grammar for the pumps has a uniform offset of zero. We can modify the resulting Presburger formula from Lemma 6.3 to check for the required dips, and modify it further to check for the positive offset for \(u\). Finally, we use the fact that testing satisfiability of an existential Presburger formula is in \(\mathsf{NP}\) [3].

Now we are almost ready to prove Proposition 6.2. The last thing we need is for an \(\mathsf{sECFG}\) to perform \(\mathsf{PSPACE}\)-computations on paths in its derivation trees:

[Lemma 6.5] An \(\mathsf{sECFG}\) can simulate \(\mathsf{PSPACE}\)-computations on exponentially long paths in its derivation trees.

This is because the nonterminals are polynomially long strings and can therefore act as polynomial-space Turing tape configurations. Moreover, the transducers of the \(\mathsf{sECFG}\) can easily be constructed to enforce the step relation of a Turing machine. If we apply this enforcement to productions of the form \(A\to B\), then the path that simulates the \(\mathsf{PSPACE}\)-computation will not even have any additional side paths until after the computation is complete. Thus, only the result of the computation will affect the derived word. Since grammars and transducers are non-deterministic (and \(\mathsf{NPSPACE}=\mathsf{PSPACE}\)), we can even implement non-determinism and guessing within such computations.

We are ready to present a proof sketch of Proposition 6.2. The main idea is that \(\mathcal{G}^{\prime}\) simulates derivation trees of \(\mathcal{G}\) by keeping track of at most polynomially many nodes, and abstracting away pumps via the previous auxiliary results. If a nonterminal \(A\) of \(\mathcal{G}\) does not belong to \(N_{0}\) (i.e., it produces a marker), then \(\mathcal{G}^{\prime}\) guesses a production \(A\to BC\) to apply. If \(A\) furthermore belongs to \(N_{\#\bar{\#}}\), then \(\mathcal{G}^{\prime}\) also guesses a pump to apply in the form of a \(4\)-tuple consisting of two dip values \(d_{\mathsf{L}},d_{\mathsf{R}}\in\mathbb{N}\) and two alphabets \(\Gamma_{\mathsf{L}},\Gamma_{\mathsf{R}}\subseteq\Gamma\). Guessing and storing the dip values is possible in \(\mathsf{PSPACE}\), since they are exponentially bounded by Lemma 5.2 (2). For each \(a\in\Gamma_{\mathsf{L}}\), Lemma 6.4 is used on input \(A,a,d_{\mathsf{L}},d_{\mathsf{R}}\) to check in \(\mathsf{PSPACE}\) whether a matching pump exists. A symmetric version of Lemma 6.4 is also used for each \(a\in\Gamma_{\mathsf{R}}\). Then, if all checks succeed, \(\mathcal{G}^{\prime}\) simulates the pump as \(A\to\bar{x}^{d_{\mathsf{L}}}x^{d_{\mathsf{L}}}\Gamma_{\mathsf{L}}^{*}BC\bar{x}^{d_{\mathsf{R}}}x^{d_{\mathsf{R}}}\Gamma_{\mathsf{R}}^{*}\). This simulation clearly preserves minimal dips and handler names, whereas by tame-pumping the combined offset of a pump is zero anyway, and therefore need not be computed.
If a nonterminal \(A\) belongs to \(N_{0}\), then \(\mathcal{G}^{\prime}\) abstracts away its entire subtree. To this end it generates a pumpfree subtree on-the-fly using depth-first search, which is possible in \(\mathsf{PSPACE}\) since without pumps the tree has polynomial height. During this process pumps are simulated using the same strategy as before. We also need to ensure that nonterminals of \(\mathcal{G}^{\prime}\) in \(N_{0}\) only have productions that allow for a single leaf node below them. To this end \(\mathcal{G}^{\prime}\) only ever derives letters and alphabets \(\Gamma^{\prime*}\) one at a time.

Consider the up to two _main paths_ in a derivation tree of \(\mathcal{G}^{\prime}\), by which we mean the paths leading from the root to a marker. Whenever \(\mathcal{G}^{\prime}\) simulates a pump as \(A\to u^{\prime}Av^{\prime}\) in the above process, it extends the main path by \(|u^{\prime}v^{\prime}|\) and in each step only derives a single nonterminal from \(N_{0}\) to the left or right. When \(\mathcal{G}^{\prime}\) abstracts an entire subtree of a nonterminal in \(N_{0}\), then this subtree is also produced to the left or right of the main path, without leaving said path. Additionally, whenever \(\mathcal{G}^{\prime}\) simulates a pump of some \(A\), then \(\mathcal{G}^{\prime}\) assumes that this pump is the combination of all pumps that occur in the original derivation tree for that instance of \(A\). Thus, below such a pump, it remembers, in polynomial space, that \(A\) is not allowed to occur anymore. Finally, whenever \(\mathcal{G}^{\prime}\) checks by Lemma 6.4 that a pump exists with \(\mathsf{offset}(u)>0\), then this is a so-called increasing pump, and it can be repeated to achieve an infix with arbitrarily high offset. Thus, dip values below this pump cannot make up for this offset and therefore will no longer be simulated.

### Abstracting divided pumps

We have now removed all the undivided pumps and are left with derivation trees as in the middle picture of Figure 1. In this subsection, we will show the following:

[Lemma 6.6] Given a tame-pumping almost-pumpfree \(\mathsf{sECFG}\) \(\mathcal{G}\) with \(L(\mathcal{G})\subseteq\mathscr{M}\), one can construct in polynomial space a \(\mathsf{dsNFA}\) \(\mathcal{B}\) such that \(L(\mathcal{B}){\downarrow}=L(\mathcal{G}){\downarrow}\) and \(|\mathcal{B}|\) is polynomially bounded in \(|\mathcal{G}|\).

We give a proof sketch here; the details can be found in the full version of the paper. Our starting point in the proof of Lemma 6.6 is the following key observation: The offsets which occur during the production of any _admissible_ marked word \(w\) which contains exactly one marker are bounded. This allows us to keep track of the offset precisely, which is necessary for us to solve the marked Dyck factor (MDF) problem. For a node \(t\) in a derivation tree \(T\), let \(w(t)\) denote the word derived by the subtree rooted at \(t\) and let \(u(t)=\mathsf{inside}(w(t))\), \(v(t)=\mathsf{outside}(w(t))\).

[Lemma 6.7] There exists a polynomial \(p\) such that for any uniformly marked, tame-pumping, almost-pumpfree \(\mathsf{sECFG}\) \(\mathcal{G}\) the following holds. Let \(T\) be a derivation tree of \(\mathcal{G}\) which produces an admissible marked word containing \(\#\) or \(\bar{\#}\), but not both. Then, for every node \(t\) of \(T\), we have \(|\mathsf{offset}(u(t))|,|\mathsf{offset}(v(t))|\leq 2^{p(|\mathcal{G}|)}\).

Proof. We consider the case when the word derived is of the form \(u\#v\), the case for \(v\bar{\#}w\) being symmetric.
Our derivation tree \(T\) has a skeleton \(T^{\prime}\) into which pumps are inserted to form \(T\). This means \(u\#v=u^{\prime}_{k}\hat{u}_{k}\cdots u^{\prime}_{1}\hat{u}_{1}u^{\prime}_{0}\#v^{\prime}_{0}\hat{v}_{1}v^{\prime}_{1}\cdots\hat{v}_{k}v^{\prime}_{k}\), where \(u^{\prime}_{k}\cdots u^{\prime}_{0}\#v^{\prime}_{0}\cdots v^{\prime}_{k}\) is the word generated by \(T^{\prime}\) and each pair \((\hat{u}_{i},\hat{v}_{i})\) is derived using a pump. Then we have

\[
\mathsf{offset}(u)=\underbrace{\mathsf{offset}(u^{\prime}_{k}\cdots u^{\prime}_{0})}_{=:U_{0}}+\underbrace{\sum_{i=1}^{k}\mathsf{offset}(\hat{u}_{i})}_{=:U_{1}},\qquad
\mathsf{offset}(v)=\underbrace{\mathsf{offset}(v^{\prime}_{0}\cdots v^{\prime}_{k})}_{=:V_{0}}+\underbrace{\sum_{i=1}^{k}\mathsf{offset}(\hat{v}_{i})}_{=:V_{1}}.
\]

We claim that each of the numbers \(|U_{0}|,|U_{1}|,|V_{0}|,|V_{1}|\) is bounded by \(n(\mathcal{G})\), the number of nonterminals of \(\mathcal{G}\). This clearly implies the lemma: since \(\mathcal{G}\) is a succinct grammar, it has at most exponentially many nonterminals in the size of its description.

We begin with \(U_{0},V_{0}\). The tree \(T^{\prime}\) contains each nonterminal of \(\mathcal{G}\) at most once, and by property (C2) in Definition 6.1, we know that the subtree under each nonterminal in \(T^{\prime}\) not containing \(\#\) has offset \(-1\), \(0\), or \(1\). Thus, \(|U_{0}|,|V_{0}|\leq n(\mathcal{G})\). The bound on \(|U_{1}|,|V_{1}|\) is due to admissibility of \(u\#v\): it yields \(V_{0}+V_{1}=\mathsf{offset}(v)\geq 0\) and thus \(V_{1}\geq-V_{0}\). Moreover, by tame-pumping, we know that \(\mathsf{offset}(\hat{v}_{i})\leq 0\) for each \(i\in[1,k]\), and thus \(V_{1}\leq 0\). Together, we obtain \(V_{1}\in[-V_{0},0]\). Finally, tame-pumping also implies \(\mathsf{offset}(\hat{u}_{i})=-\mathsf{offset}(\hat{v}_{i})\) for each \(i\in[1,k]\) and hence \(U_{1}=-V_{1}\).

Note that the bound only holds under the condition of admissibility. An easy counterexample is the tame-pumping language \(L=\{x^{n}\#\bar{x}^{n}\mid n\in\mathbb{N}\}\).

The \(\mathsf{dsNFA}\) \(\mathcal{B}\) of Lemma 6.6 can now be constructed in three steps as follows:

Step I: Tracking counter effects. We first observe that since \(\mathcal{G}\) is almost-pumpfree, its pumps \(A\Rightarrow^{*}uAv\) can be simulated by a transducer that traverses the derivation tree bottom-up. Thus, we can construct a _singly_ succinct finite-state transducer \(\mathcal{T}_{A}\) with size polynomial in \(|\mathcal{G}|\) that captures all pumps \(A\Rightarrow^{*}uAv\). To be precise, \(\mathcal{T}_{A}\) accepts exactly those pairs \((u,v)\) for which \(A\Rightarrow^{*}u^{\mathsf{rev}}Av\). The transducer \(\mathcal{T}_{A}\) has one state for each nonterminal of \(\mathcal{G}\). Since \(\mathcal{B}\) will need to preserve offset and dip, we need to expand \(\mathcal{T}_{A}\) to track them as well. Here, it is crucial that we only need to do this for \(A\in N_{\#}\cup N_{\bar{\#}}\) and pumps \(A\Rightarrow^{*}uAv\) that are used to derive an admissible word. Lemma 6.7 tells us that in such a pump, the absolute values of the offsets and dips of \(u\) and \(v\) are bounded by \(2^{q(|\mathcal{G}|)}\) for some polynomial \(q\). Thus, we can modify \(\mathcal{T}_{A}\) so as to track the dip and offset of the two words it reads.
Therefore, for each \(A\in N_{\#}\cup N_{\bar{\#}}\) and each quadruple \(\mathbf{x}=(d_{\mathsf{L}},\delta_{\mathsf{L}},d_{\mathsf{R}},\delta_{\mathsf{R}})\) of numbers with absolute value at most \(2^{q(|\mathcal{G}|)}\), we can construct in \(\mathsf{PSPACE}\) a transducer \(\mathcal{T}_{A,\mathbf{x}}\) with

\[(u,v)\text{ is accepted by }\mathcal{T}_{A,\mathbf{x}}\quad\text{iff}\quad A\Rightarrow^{*}u^{\mathsf{rev}}Av,\ e(u^{\mathsf{rev}})=(d_{\mathsf{L}},\delta_{\mathsf{L}}),\text{ and }e(v)=(d_{\mathsf{R}},\delta_{\mathsf{R}}).\]

Moreover, \(\mathcal{T}_{A,\mathbf{x}}\) is singly succinct, polynomial-size, and can be computed in \(\mathsf{PSPACE}\). Observe that by Lemma 6.7, if a pump \(A\Rightarrow^{*}uAv\) is used in a derivation of an admissible word, then for some quadruple \(\mathbf{x}\), the pair \((u^{\mathsf{rev}},v)\) is accepted by \(\mathcal{T}_{A,\mathbf{x}}\).

Step II: Skeleton runs. The automaton \(\mathcal{B}\) has to read words from left to right, rather than two factors in parallel as \(\mathcal{T}_{A}\) and \(\mathcal{T}_{A,\mathbf{x}}\) do. To this end, it will guess a run of \(\mathcal{T}_{A,\mathbf{x}}\) without state repetitions; such a run is called a _skeleton run_. For a fixed skeleton run \(\rho\), the set of words read in each component of \(\mathcal{T}_{A,\mathbf{x}}\) is of the shape \(\Gamma_{0}^{*}\{a_{1},\varepsilon\}\Gamma_{1}^{*}\cdots\{a_{k},\varepsilon\}\Gamma_{k}^{*}\), where each \(a_{i}\) is read in a single step of \(\rho\) and \(\Gamma_{i}\) is the set of letters from \(\Gamma\) seen in cycles in a state visited in \(\rho\). Sets of this shape are called _ideals_ [12]. The ideal for the left (right) component is called the _left_ (_right_) _ideal_ of the skeleton run. Note that since \(\mathcal{T}_{A,\mathbf{x}}\) has exponentially many states, the skeleton run is at most exponentially long.

Step III: Putting it together. The \(\mathsf{dsNFA}\) \(\mathcal{B}\) guesses and verifies an exponential-size skeleton \(T\) of the \(\mathsf{sECFG}\) \(\mathcal{G}\). Moreover, for each node \(t\) that is above \(\#\) or \(\bar{\#}\) (but not both), it guesses a quadruple \(\mathbf{x}=(d_{\mathsf{L}},\delta_{\mathsf{L}},d_{\mathsf{R}},\delta_{\mathsf{R}})\) with \(d_{\mathsf{L}},d_{\mathsf{R}}\in[0,2^{q(|\mathcal{G}|)}]\), \(\delta_{\mathsf{L}},\delta_{\mathsf{R}}\in[-2^{q(|\mathcal{G}|)},2^{q(|\mathcal{G}|)}]\), and a skeleton run \(\rho_{t}\) of the transducer \(\mathcal{T}_{A,\mathbf{x}}\), where \(A\) is \(t\)'s label. The automaton \(\mathcal{B}\) then traverses the skeleton \(T\) in order (node, left subtree, right subtree, node), meaning each inner node is visited exactly twice. Whenever \(\mathcal{B}\) visits a node \(t\) as above, it produces an arbitrary word from an ideal of \(\rho_{t}\): for the first (resp. second) visit of \(t\), it uses the left (resp. right) ideal of \(\rho_{t}\). Moreover, in addition to the word from the left ideal, \(\mathcal{B}\) outputs a string \(w\in\{x,\bar{x}\}^{*}\) with \(e(w)=(d_{\mathsf{L}},\delta_{\mathsf{L}})\), where \(\mathbf{x}=(d_{\mathsf{L}},\delta_{\mathsf{L}},d_{\mathsf{R}},\delta_{\mathsf{R}})\) is the quadruple guessed for \(t\) (and similarly for the right ideal). This way, it preserves offset and dip at the separators \(\#\) and \(\bar{\#}\).
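As a side note, membership in an ideal \(\Gamma_{0}^{*}\{a_{1},\varepsilon\}\Gamma_{1}^{*}\cdots\{a_{k},\varepsilon\}\Gamma_{k}^{*}\) from Step II can be tested with an ordinary regular expression. The sketch below is our own illustration (the list encoding of the components is an assumption, not from the paper):

```python
import re

def ideal_matcher(components):
    """Compile a regex for an ideal G0* {a1,eps} G1* ... {ak,eps} Gk*.

    `components` alternates between alphabets (sets of letters, each
    repeatable any number of times) and optional single letters.
    """
    parts = []
    for j, comp in enumerate(components):
        if j % 2 == 0:  # an alphabet Gamma_j, repeated freely
            if comp:
                parts.append('[' + re.escape(''.join(sorted(comp))) + ']*')
        else:  # an optional single letter a_j
            parts.append(re.escape(comp) + '?')
    return re.compile('^' + ''.join(parts) + '$')

# Example: the ideal {a,b}* {c,eps} {a}* contains 'abac' and 'aa', but not 'cab'.
matcher = ideal_matcher([{'a', 'b'}, 'c', {'a'}])
assert matcher.match('abac') and matcher.match('aa')
assert not matcher.match('cab')
```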
Since the skeleton \(T\) has exponentially many nodes (in \(|\mathcal{G}|\)) and each skeleton run \(\rho_{t}\) requires exponentially many bits, the total number of bits that \(\mathcal{B}\) has to keep in memory is also bounded by an exponential in \(|\mathcal{G}|\).
2304.05043
ZnO-based scintillating bolometers: New prospects to study double beta decay of $^{64}$Zn
The first detailed study on the performance of a ZnO-based cryogenic scintillating bolometer as a detector to search for rare processes in zinc isotopes was performed. A 7.2 g ZnO low-temperature detector, containing more than 80\% of zinc in its mass, exhibits good energy resolution of baseline noise, 1.0--2.7 keV FWHM, at various working temperatures, resulting in a low-energy threshold for the experiment, 2.0--6.0 keV. The light yield for $\beta$/$\gamma$ events was measured as 1.5(3) keV/MeV, while it varies for $\alpha$ particles in the range of 0.2--3.0 keV/MeV. The detector demonstrates an effective identification of the $\beta$/$\gamma$ events from $\alpha$ events using time properties of only the heat signals. The radiopurity of the ZnO crystal was evaluated using Inductively Coupled Plasma Mass Spectrometry, an ultra-low-background High Purity Ge $\gamma$-spectrometer, and bolometric measurements. Only limits were set, at the level of $\mathcal{O}$(1--100) mBq/kg, on activities of $^{40}$K, $^{137}$Cs and daughter nuclides from the U/Th natural decay chains. The total internal $\alpha$-activity was calculated to be 22(2) mBq/kg, with a major contribution caused by 6(1) mBq/kg of $^{232}$Th and 12(2) mBq/kg of $^{234}$U. Limits on double beta decay (DBD) processes in $^{64}$Zn and $^{70}$Zn isotopes were set at the level of $\mathcal{O}(10^{17}$--$10^{18})$ yr for various decay modes, profiting from 271 h of background data acquired in the above-ground lab. This study shows a good potential for ZnO-based scintillating bolometers to search for DBD processes of Zn isotopes, especially in $^{64}$Zn, with the most prominent spectral features at $\sim$10--20 keV, like the two neutrino double electron capture. A 10 kg-scale experiment can reach an experimental sensitivity at the level of $\mathcal{O}(10^{24})$ yr.
A. Armatol, B. Broerman, L. Dumoulin, A. Giuliani, H. Khalife, M. Laubenstein, P. Loaiza, P. de Marcillac, S. Marnieros, S. S. Nagorny, S. Nisi, C. Nones, E. Olivieri, L. Pagnanini, S. Pirro, D. V. Poda, J. -A. Scarpaci, A. S. Zolotarova
2023-04-11T08:02:12Z
http://arxiv.org/abs/2304.05043v1
# ZnO-based scintillating bolometers: New prospects to study double beta decay of \({}^{64}\)Zn

###### Abstract

The first detailed study on the performance of a ZnO-based cryogenic scintillating bolometer as a detector to search for rare processes in zinc isotopes was performed. A 7.2 g ZnO low-temperature detector, containing more than 80% of zinc in its mass, exhibits good energy resolution of baseline noise, 1.0-2.7 keV FWHM, at various working temperatures, resulting in a low-energy threshold for the experiment, 2.0-6.0 keV. The light yield for \(\beta\)/\(\gamma\) events was measured as 1.5(3) keV/MeV, while it varies for \(\alpha\) particles in the range of 0.2-3.0 keV/MeV. The detector demonstrates an effective identification of the \(\beta\)/\(\gamma\) events from \(\alpha\) events using time properties of only the heat signals. The radiopurity of the ZnO crystal was evaluated using Inductively Coupled Plasma Mass Spectrometry, an ultra-low-background High Purity Ge \(\gamma\)-spectrometer, and bolometric measurements. Only limits were set, at the level of \(\mathcal{O}\)(1-100) mBq/kg, on activities of \({}^{40}\)K, \({}^{137}\)Cs and daughter nuclides from the U/Th natural decay chains. The total internal \(\alpha\)-activity was calculated to be 22(2) mBq/kg, with a major contribution caused by 6(1) mBq/kg of \({}^{232}\)Th and 12(2) mBq/kg of \({}^{234}\)U. Limits on double beta decay (DBD) processes in \({}^{64}\)Zn and \({}^{70}\)Zn isotopes were set at the level of \(\mathcal{O}\)(10\({}^{17}\)-10\({}^{18}\)) yr for various decay modes, profiting from 271 h of background data acquired in the above-ground lab. This study shows a good potential for ZnO-based scintillating bolometers to search for DBD processes of Zn isotopes, especially in \({}^{64}\)Zn, with the most prominent spectral features at \(\sim\)10-20 keV, like the two neutrino double electron capture. A 10 kg-scale experiment can reach an experimental sensitivity at the level of \(\mathcal{O}\)(10\({}^{24}\)) yr.

Cryogenic detectors, Hybrid detectors, Scintillators, scintillation and light emission processes (solid, gas and liquid scintillators), Calorimeters, Double-beta decay detectors, Particle identification methods, Photon detectors for UV, visible and IR photons (solid-state), X-ray detectors, Materials for solid-state detectors

## 1 Introduction

The observation of neutrino flavor oscillations has provided evidence of the non-degenerate masses of the neutrinos and motivated a worldwide experimental effort to measure the absolute neutrino mass and the actual scheme of the neutrino mass ordering [1]. Neutrinoless double beta decay (0\(\nu\)-DBD) is the only practical means of determining the nature of the neutrino (Dirac or Majorana) and one of the most sensitive probes of its absolute mass [2; 3]. If observed, it would imply lepton number violation and be a direct probe of physics beyond the Standard Model. The search for 0\(\nu\)-DBD with different target isotopes is not only identified as a recommendation of the APPEC committee [4], but also provides crucial input for the theoretical modeling of nuclear matrix elements.
The recent increase of new results in the field of DBD studies is due to the significant development of various detector techniques (scintillators [5; 6; 7], semiconductor detectors [8; 9], bolometers [10; 11], scintillating bolometers [12; 13; 14], time-projection chambers [15; 16; 17; 18], and tracking calorimeters [19; 20]), the establishment of highly effective deep material purification, and, last but not least, the development of high-quality crystals with embedded and highly-enriched isotopes of interest (e.g. CaF\({}_{2}\), high purity \({}^{76}\)Ge, TeO\({}_{2}\), Zn\({}^{82}\)Se, \({}^{106}\)CdWO\({}_{4}\), \({}^{116}\)CdWO\({}_{4}\), Li\({}_{2}\)\({}^{100}\)MoO\({}_{4}\), Ca\({}^{100}\)MoO\({}_{4}\), Zn\({}^{100}\)MoO\({}_{4}\)) [21; 22; 23]. These advances, however, have led to experimental efforts being focused on a short list of DBD-active isotopes, such as \({}^{48}\)Ca, \({}^{76}\)Ge, \({}^{82}\)Se, \({}^{100}\)Mo, \({}^{116}\)Cd, \({}^{130}\)Te, \({}^{150}\)Nd and \({}^{136}\)Xe [2; 24; 25; 26]. Other isotopes are less studied for reasons specific to each individual isotope. Further details on techniques in the search for rare decays can be found in [2; 21; 22; 28; 29].

Zinc contains two potentially DBD-active natural isotopes, namely \({}^{64}\)Zn and \({}^{70}\)Zn, whose properties are listed in table 1. \({}^{64}\)Zn is one of a few DBD-active nuclei to have a natural abundance high enough (\(\sim\)48%) to allow its use in a large-scale experiment without expensive isotopic enrichment. Furthermore, the relatively high transition energy of \({}^{64}\)Zn (\(Q_{2\beta}\) = 1096 keV) makes both the double electron capture (\(2\varepsilon\)) and the electron capture with positron emission (\(\varepsilon\beta^{+}\)) channels of DBD energetically allowed [30]. There is a strong motivation to search for these processes, as it could clarify the contribution of the right-handed current admixture in weak interactions [31, 32].

Despite several detector materials and techniques currently available to study DBD processes in Zn isotopes, all of them have some drawbacks. A survey of these techniques follows and is given in table 2, in order of increasing zinc mass fraction. ZnWO\({}_{4}\) scintillating crystals contain zinc at the level of 21% in mass. This naturally radiopure scintillating material was proposed in 2005 as a detector for DBD studies of Zn and W isotopes and for dark matter particle searches [35]. Later, the technology of growing large-volume crystals was well developed [36, 37], and its radiopurity was further improved through a multi-stage crystallization process [38]. The best limits on DBD processes in Zn isotopes achieved in low-background long-term measurements with ZnWO\({}_{4}\) scintillating crystals are of \(\mathcal{O}\)(10\({}^{19}\)-10\({}^{20}\)) yr. It should be emphasized that both the relatively poor energy resolution (about 9% for 662 keV \(\gamma\) quanta) and the low Zn mass fraction cause a reduced experimental sensitivity. Despite the excellent performance of ZnWO\({}_{4}\) also as Transition Edge Sensor (TES) carriers [39], developed and used in the R&D program of the CRESST experiment, and its high radiopurity [40], there are no reported long-term low-background cryogenic measurements with a large-volume ZnWO\({}_{4}\) crystal acting as a scintillating bolometer. However, excellent results with a 1 cm\({}^{3}\) ZnWO\({}_{4}\) scintillating bolometer have recently been achieved [41].
As has been shown with many other crystals [42], the simultaneous recording of a scintillation pulse and a phonon signal allows for effective particle identification, leading to a significant background reduction. The bolometric technique also typically provides an excellent energy resolution (\(\sim\)0.1%) in the phonon channel [43]. Both features lead to an enhancement of the experimental sensitivity over purely scintillating ZnWO\({}_{4}\).

\begin{table}
\begin{tabular}{c c c c} \hline Transition & Energy release & Isotopic abundance & Decay modes \\ & \(Q_{2\beta}\) [keV] [33] & [\%] [34] & \\ \hline \({}^{64}\)Zn \(\rightarrow\)\({}^{64}\)Ni & 1095.7(0.7) & 49.17(75) & \(2\varepsilon\), \(\varepsilon\beta^{+}\) \\ \({}^{70}\)Zn \(\rightarrow\)\({}^{70}\)Ge & 998.5(2.2) & 0.61(10) & \(2\beta^{-}\) \\ \hline \end{tabular}
\end{table} Table 1: Transition, energy release, isotopic abundance, and decay modes for potentially DBD-active natural zinc isotopes.

Another Zn-containing scintillating crystal, namely Li\({}_{2}\)Zn\({}_{2}\)(MoO\({}_{4}\))\({}_{3}\), was established in 2009 and tested as a potential target material for 0\(\nu\)-DBD searches of \({}^{100}\)Mo (\(Q_{2\beta}\) = 3034 keV) acting as a scintillating bolometer [45]. Containing Zn at the level of 21% in mass, this crystal could also be used to search for DBD processes occurring in Zn isotopes. However, the very poor yield of scintillation light observed in the first cryogenic test makes it impossible to achieve all the typical features of the scintillating bolometer technique through the simultaneous recording of scintillation light and phonon signals. Moreover, only small-volume crystals (less than 0.2 kg in mass) could be produced at that time. The combination of technological issues during the crystal growth and its poor performance as a scintillating bolometer ruled out further studies with this compound.

A CdZnTe (CZT) semiconductor compound, containing 21% of Zn in mass, is a very promising radiation detector due to its excellent energy resolution (FWHM = 1% for 662 keV \(\gamma\) quanta). However, it also contains the DBD-active \({}^{116}\)Cd (\(Q_{2\beta^{-}}=2814\) keV) and \({}^{130}\)Te (\(Q_{2\beta^{-}}=2529\) keV) isotopes, with larger transition energies and shorter half-lives, which are responsible for an irreducible background for DBD processes of Zn isotopes. Moreover, the long-lived beta-active \({}^{113}\)Cd isotope (\(Q_{\beta}=323.8\) keV, \(T_{1/2}=8\times 10^{15}\) yr [33, 51]), present in natural Cd at the level of 12%, is the main background component at low energies, below \(\sim\)325 keV, preventing the study of DBD processes in Zn isotopes with signatures at low energies. From the technological point of view, CZT crystals larger than \(2\times 2\times 2\) cm are not available yet. The best limits achieved in the framework of the COBRA experiment with CZT semiconductors regarding DBD processes of Zn isotopes are of \(\mathcal{O}(10^{18})\) yr. From 2010, ZnMoO\({}_{4}\) crystals were considered as the most promising target material for 0\(\nu\)-DBD studies of \({}^{100}\)Mo, due to their excellent performance as scintillating bolometers in terms of energy resolution in the phonon channel and their high radiopurity [47, 52, 53].
Moreover, the possibility to perform particle identification only through the signal properties in the phonon channel [47] was demonstrated with ZnMoO\({}_{4}\) for the first time among all crystals previously tested as cryogenic bolometers. This is a useful feature for future large-scale experiments, since it makes it possible to minimize the total number of electronics channels, acquiring only the heat channel while not compromising the experimental sensitivity. Thanks to an extensive R&D program, radiopure crystals up to 2 kg in mass were produced both from natural molybdenum [54] and enriched in \({}^{100}\)Mo [55]. A ZnMoO\({}_{4}\) compound contains approximately 29% of zinc; however, the presence of another DBD-active isotope, i.e. \({}^{100}\)Mo, with a shorter half-life value and a larger decay energy, makes it difficult to study DBD processes in Zn.

\begin{table}
\begin{tabular}{l c c c c} \hline Target & Fraction of & Detector & Major & Reference \\ material & Zn [wt \%] & type & drawback(s) & \\ \hline ZnWO\({}_{4}\) & 21 & Scintillator & Poor FWHM & [44] \\ \cline{3-5} & & Scint. Bol. & N. D. & [39] \\ \hline Li\({}_{2}\)Zn\({}_{2}\)(MoO\({}_{4}\))\({}_{3}\) & 21 & Scint. Bol. & Interference with DBD of \({}^{100}\)Mo; Low radiopurity & [45] \\ \hline CdZnTe & 21 & Semiconductor & Interference with DBD of \({}^{116}\)Cd, \({}^{130}\)Te; Low radiopurity & [46] \\ \hline ZnMoO\({}_{4}\) & 29 & Scint. Bol. & Interference with DBD of \({}^{100}\)Mo; Low light yield & [47] \\ \hline ZnSe & 44 & Scint. Bol. & Interference with DBD of \({}^{82}\)Se & [48] \\ \hline ZnO & 80 & Scint. Bol. & T.B.A. & This work \\ \hline Zn metal & 100 & HPGe & Poor detection efficiency & [49] \\ \cline{3-5} & & Superconductor & R\&D, T.B.A. & [50] \\ \hline \end{tabular}
\end{table} Table 2: Survey of Zn-containing detector techniques. Listed with the target material is the fraction of Zn by weight, the detector type, and the major drawbacks of the method. (N.D. = not determined, T.B.A. = to be analysed, Scint. Bol. = scintillating bolometer, HPGe = High Purity Ge \(\gamma\)-spectrometer.)

Recently, experimental data acquired with Zn\({}^{82}\)Se scintillating bolometers (\(\sim\)45% Zn by mass) within the CUPID-0 experiment were analyzed to set new limits on several modes of DBD processes in \({}^{64}\)Zn and \({}^{70}\)Zn isotopes, up to \(\mathcal{O}(10^{21}-10^{22})\) yr [48]. Both types of ZnSe crystals, produced from natural Se and enriched in \({}^{82}\)Se, worked well as scintillating bolometers and exhibited decent performance (FWHM \(\approx 30\) keV in the phonon channel at 3 MeV), excellent pulse-shape discrimination ability and high radiopurity [56, 57]. These new limits were obtained from data collected with 22 enriched crystals and one natural crystal, with 9.18 kg of ZnSe active mass, for a total collected exposure of 11.34 kg\(\times\)yr. The experimental sensitivity was limited by the presence of a high amount of the DBD-active \({}^{82}\)Se isotope (\(Q_{2\beta^{-}}=2998\) keV), responsible for the major background contribution, and by the limited acquisition time. It should be emphasized that, from a technological point of view, ZnSe crystal growth is a very complicated process and not well established yet.
To date, ZnSe crystals with dimensions up to \(\varnothing 45\times 55\) mm can be produced by the High-Pressure Bridgman-Stockbarger technique, with a yield of "ready-to-use" crystals of less than 50% [58]. Therefore, the use of ZnSe crystals in further studies of DBD processes in Zn isotopes is unfavorable.

Complementary studies were performed with 10 kg of highly purified Zn metal measured on the ultra-low-background (ULB) HPGe \(\gamma\)-spectrometer at the Laboratori Nazionali del Gran Sasso of the INFN (LNGS, Italy) over 828 h [49]. Through the optimization of the Zn sample geometry and the high radiopurity of the Zn metal, the highest limits on some modes of DBD processes were established, at the level of \(\mathcal{O}(10^{21})\) yr. At the same time, the experimental sensitivity cannot be significantly improved further, due to the limited detection efficiency of this "source \(\neq\) detector" approach; there is also a limit on the effective sample mass that can be placed around an HPGe detector without deteriorating the detection efficiency. Moreover, with only the emitted \(\gamma\)'s being detected, this technique is unable to distinguish between the \(0\nu\) and \(2\nu\) modes of \(\varepsilon\beta^{+}\) decay of \({}^{64}\)Zn and cannot be used for studies of \(2\beta^{-}\) decay of \({}^{70}\)Zn. It should be noted that measurements with Zn metal samples on HPGe detectors were one of the first techniques used to study DBD processes in Zn [59, 60]. The low efficiency of the passive-source technique can potentially be overcome by using metallic zinc as a superconducting absorber with a TES phonon readout; this innovative low-threshold detector technology is under development by the Ricochet collaboration for the detection of coherent elastic neutrino-nucleus scattering [50, 61].

As one can see, none of the target materials and experimental approaches listed above is optimal to search for DBD processes in Zn isotopes. To reach the highest sensitivity, a detector should fulfill several requirements:

1. possess a high Zn content;
2. the elements embedded in its chemical formula should be light in mass;
3. the chemical formula should be free from other DBD-active elements;
4. possess a high radiopurity;
5. have a well-developed technology for detector production;
6. work as a cryogenic scintillating bolometer.

We propose to use a ZnO-based scintillating bolometer, with the highest zinc mass fraction (more than 80%), to search for DBD processes in Zn isotopes. ZnO crystals have never been used before as radiation detectors; they are typically utilized as piezoelectric crystals and as wafers for powerful LEDs in the blue and UV spectral ranges, and also find use in gas sensors, varistors, and generators of surface acoustic waves [62]. Oxygen, the only remaining element in the chemical formula, is not a DBD-active element and will not contribute to the detector background. Here we present results on the first cryogenic test of a ZnO-based scintillating bolometer, its production, the evaluation of its performance and radiopurity, and studies of its prospects for DBD searches.

## 2 Crystal production

ZnO (zincite) single crystals belong to the wurtzite structural type, with a hexagonal cell of space symmetry group P6\({}_{3}\)mc and lattice constants \(a=3.250\) Å, \(c=5.207\) Å. The crystals demonstrate a very strong anisotropy of their physical properties.
Zincite crystals are considered to be a heavily doped, weakly compensated semiconductor material of the A\({}^{2}\)B\({}^{6}\) type, in which intrinsic defects (interstitial zinc) and, to a lesser extent, oxygen vacancies serve as donors (n-type conductivity). Some other characteristics of ZnO crystals are as follows [63]: the melting point is 1975\({}^{\circ}\)C, the density is 5.67 g/cm\({}^{3}\), and the band gap is 3.44 eV (1.6 K) and 3.37 eV (300 K).

The ZnO crystal used in the present study was grown by the hydrothermal method [64], the most efficient technique for large batches of crystals with similar properties. To produce large-volume ZnO single crystals, this method has been adapted to autoclaves with 80-mm-diameter containers, which allows up to 20 large single crystals (\(50\times 50\times 12\) mm each) to be grown simultaneously. Special attention was paid to the preparation of the starting charge and seed materials. A high-purity ZnO powder was pressed into tablets and annealed at 1000\({}^{\circ}\)C under an air atmosphere. These cylindrical tablets were then used as the starting charge. Seeding plates were cut from previously-grown ZnO single crystals oriented along the [0001] crystallographic direction. The seed plates were then polished and etched prior to the growth process. The ZnO crystal growth was carried out by the direct temperature drop in alkaline solutions (KOH + LiOH + NH\({}_{4}\)OH) at a crystallization temperature of approximately 330-360\({}^{\circ}\)C. The temperature drop between the crystallization and dissolution zones was approximately 8-15\({}^{\circ}\)C, at a pressure of 30-40 MPa (more details can be found in [64]). The growth rate in the [0001] direction was about 0.12 mm/day. A small ZnO sample (\(10.6\times 11.0\times 11.0\) mm) used in this study was cut from a large crystal (\(50\times 50\times 12\) mm) produced following this technique. Two opposite faces were then optically polished, while the lateral surface was ground to enhance the light output.

The concentration of the most common radioactive elements in the ZnO crystal was measured by Inductively Coupled Plasma Mass Spectrometry analysis (ICP-MS, Agilent Technologies model 7500a) at LNGS. The analysis was performed in a semiquantitative mode; the instrument was calibrated with a single standard solution containing 10 ppb of Li, Y, Ce and Tl. The uncertainty is approximately 25% of the given concentration values, listed in table 3. Although produced from raw materials and reagents of 99.98% chemical purity grade, which are potentially contaminated with natural radioactive nuclides, the ZnO crystal exhibits a rather high chemical purity with respect to U/Th and K content; this could be a result of effective impurity segregation during the crystal growth process.

## 3 HPGe measurements

In order to improve the sensitivity to possible radioactive contaminants, the ZnO sample was measured for 1107 h with an ULB HPGe \(\gamma\)-spectrometer in the STELLA (SubTerranean Low Level Assay) facility at LNGS [65]. Thanks to the deep underground location of the STELLA facility (corresponding to more than 3600 m water equivalent of overburden), the muon flux is reduced by a factor of 10\({}^{6}\). The ULB HPGe detector has a volume of 468 cm\({}^{3}\) and an energy resolution of 1.8 keV at 1332 keV. The passive shield of the detector consists of low-radioactivity lead (\(\sim\)25 cm), copper (\(\sim\)5 cm), and ancient lead (\(\sim\)2 cm) on the inner part of the copper shield.
The set-up is sealed in an air-tight stainless steel box continuously flushed with high-purity nitrogen gas to avoid the presence of residual environmental radon. The measured energy spectra of the background (with no sample) and of the ZnO crystal sample are shown in figure 1, normalized to the acquisition time of the ZnO sample. The normalized spectra are almost indistinguishable over the wide energy range, except for the low-energy region (below 100 keV), where some excess of counts is observed for the ZnO crystal sample. The detection efficiencies were obtained using Monte-Carlo (MC) simulations in the MaGe framework [66], based on GEANT4. We found no evidence of \(\gamma\) peaks that could be definitely ascribed to decays of natural radionuclides in the ZnO sample. Therefore, we set only limits on the specific activities, using the Feldman-Cousins method [67]. The results are presented in table 4; upper limits on activities of U/Th radionuclides, as well as of \({}^{40}\)K from natural radioactivity and \({}^{137}\)Cs of anthropogenic origin, are set at the level of \(\mathcal{O}\)(1-100) mBq/kg. The established limit on the activity of \({}^{40}\)K in the ZnO sample, less than 220 mBq/kg, corresponds to an upper limit on the natural potassium contamination in the crystal of less than 7 ppm. This value agrees with, and slightly improves on, the limit on the potassium concentration obtained in the ICP-MS measurements, less than 10 ppm (see section 2).

\begin{table}
\begin{tabular}{l l l l} \hline Element & Concentration & \multicolumn{2}{c}{Specific activity} \\ & [ppb] & [mBq/kg] & Nuclide \\ \hline K & \(\leq\) 10000 & \(\leq\) 310 & \({}^{40}\)K \\ Th & \(\leq\) 1 & \(\leq\) 4 & \({}^{232}\)Th \\ U & \(\leq\) 1 & \(\leq\) 12 & \({}^{238}\)U \\ \hline \end{tabular}
\end{table} Table 3: The concentration of the most common natural radioactive elements (and their activity) in the ZnO crystal, measured by an ICP-MS instrument. The uncertainty is approximately 25% of the given concentration values.

## 4 Low-temperature measurements

A composite bolometric detector, aiming at measuring simultaneously the heat and scintillation of the ZnO sample at millikelvin temperatures, was made of two modules with the mounted ZnO crystal and Ge wafer, shown in figure 2. The ZnO sample was equipped with a neutron-transmutation-doped (NTD) Ge thermistor [68] (with dimensions of \(2.0\times 1.5\times 0.3\) mm), a heat-to-voltage transducer, to register the temperature variation induced by interacting particles in the detector media.

\begin{table}
\begin{tabular}{l l l} \hline Chain & Radionuclide & Activity [mBq/kg] \\ \hline & \({}^{40}\)K & \(\leq\) 220 \\ & \({}^{137}\)Cs & \(\leq\) 5 \\ \hline \({}^{232}\)Th & \({}^{228}\)Ra & \(\leq\) 16 \\ & \({}^{228}\)Th & \(\leq\) 23 \\ \hline \({}^{235}\)U & \({}^{235}\)U & \(\leq\) 64 \\ \hline & \({}^{234}\)Th & \(\leq\) 470 \\ \({}^{238}\)U & \({}^{234m}\)Pa & \(\leq\) 340 \\ & \({}^{226}\)Ra & \(\leq\) 12 \\ \hline \end{tabular}
\end{table} Table 4: Internal radioactive contamination of the 7.2 g ZnO crystal sample measured over 1107 h using the ULB HPGe detector at LNGS. The upper limits are given at 95% C.L.

Figure 1: Energy spectra of the background (black) and of the 7.2 g ZnO crystal sample (red), measured using the ULB HPGe detector at the underground laboratory (LNGS) over 1798 h and 1107 h, respectively. Spectra are normalized to the acquisition time of the ZnO sample measurement. The normalized spectra are almost indistinguishable.
The NTD Ge sensor was glued directly onto the crystal surface using a two-component epoxy glue (Araldite®). In addition, a P-doped silicon heater [69] was glued onto the same crystal face, aiming at periodically injecting a constant power (mimicking monoenergetic particle signals, used for the off-line stabilization of fluctuations of the detector thermal response [70]). Ultrasonic wire-bonding was used to connect the golden pads of the NTD Ge thermistor to the gold-plated-on-Kapton contacts glued on the copper housing with two Au wires (\(\varnothing 25~{}\upmu\)m), providing both thermal and electrical links, while the heater was bonded with two Al wires (\(\varnothing 25~{}\upmu\)m). The crystal was mounted on a copper plate using PTFE (polytetrafluoroethylene) clamps and brass screws. The entire internal side of the bottom copper plate of the detector module and the lateral side of the Cu housing were covered with a reflective film (Vikuiti™) to improve light collection on a photodetector.

A bolometric light detector (LD) was made of a high-purity Ge wafer (\(\varnothing 44\times 0.17\) mm), equipped with a small NTD Ge sensor (\(2.2\times 0.8\times 0.6\) mm), which was mounted with the help of Al\({}_{2}\)O\({}_{3}\) balls (\(\varnothing 1.5\) mm) and polypropylene supporting elements in a Cu housing. In order to decrease the reflectivity of the wafer, it was coated with a 70-nm SiO layer [71]. Moreover, several concentric Al electrodes (100-nm-thick, 3.8-mm-pitch) have been deposited on one side of the Ge wafer to exploit heat signal amplification in the presence of an electric field, known as the Neganov-Trofimov-Luke (NTL) effect [72, 73]. More details on the construction, operation, and characterization of this type of bolometric LD can be found in [74] (the detector used in the present work is labeled as NTLLD2 there).

We operated the ZnO scintillating bolometer in the above-ground cryogenic laboratory of the IJCLab (Orsay, France), using a pulse-tube-based \({}^{3}\)He/\({}^{4}\)He dilution refrigerator [75]. The assembled detector module was installed in the cryostat on a copper plate mechanically decoupled from the mixing chamber by three springs to reduce vibrations induced by the pulse-tube of the cryostat.

Figure 2: A bolometric light detector (Left), based on a Ge disk with a size of \(\varnothing 44\times 0.17\) mm and instrumented with a \(\sim\)5 mg NTD Ge thermistor, and a ZnO scintillating bolometer (Right), based on a \(10.6\times 11\times 11\) mm crystal and equipped with a \(\sim\)5 mg NTD Ge and with a Si:P heater. Both absorbers are mounted on the Cu holder using either PTFE supporting elements (ZnO) or Al\({}_{2}\)O\({}_{3}\) balls and plastic clamps. The Ge LD was then placed on top of the ZnO detector assembly.

The outer vacuum chamber of the cryostat is surrounded by 10-cm-thick lead to suppress the environmental \(\gamma\) background. Such shielding also helps to mitigate the pile-up problem of large-volume (tens of cm\({}^{3}\)) thermal detectors with a typical response of NTD-based sensors in the millisecond-second range. Taking into account that we used rather small ZnO (1.3 cm\({}^{3}\)) and Ge (0.3 cm\({}^{3}\)) absorbers, we do not expect a significant impact from pile-up on the bolometric characterization of the composite detector. We realized two cryogenic runs (labeled as Run I and II), both at a temperature of 15 mK stabilized on the detector plate.
In the first run, a \({}^{238}\)U/\({}^{234}\)U \(\alpha\) source was placed inside the ZnO detector housing, while the second cryogenic run was performed without the calibration source. In addition, two TeO\({}_{2}\) bolometers of 4 cm\({}^{3}\) each faced the other side of the Ge LD in Run I, extending the possibilities of the LD calibration, as detailed below. The detector channels were read out using room-temperature low-noise electronics based on DC-coupled voltage-sensitive amplifiers [76]. The NTDs were biased with \(\sim\)1-5 nA currents through load resistances of a few G\(\Omega\). The NTD resistances were measured in the range of hundreds of k\(\Omega\) to a few M\(\Omega\), depending on the choice of working points. In Run I, the working points were set aiming at maximizing the detectors' voltage signal per unit of energy; the optimization was done by scanning signals induced by the heater (for ZnO) and an LED (for the Ge LD) as a function of the NTD currents. In Run II, we biased the detectors more strongly to reduce their sensitivity to fluctuations/instabilities of the thermal bath temperature and to mitigate non-linearity in the detectors' response. The continuous stream data were acquired by a 16-bit ADC (National Instruments NI USB-6218 BNC) with a 5 kHz sampling rate; a Bessel cut-off frequency was set at 675 Hz. The total duration of the detectors' low-temperature characterization was about two and three weeks in Runs I and II, respectively.

## 5 ZnO scintillating bolometer performance

The detector performance was evaluated by analysis of the acquired data, processed using an optimum filter technique [77] realized with the help of a MATLAB-based application. A half-second window was chosen for the data processing of both channels (LD and ZnO), as a good compromise between the time response of the detectors, their counting rates, and the window length needed for a proper investigation of the low-frequency noise. A signal template of the ZnO bolometer was made by summing up signals with energies of \(\sim\)0.2-2.0 MeV, while signals with an order of magnitude lower energies, mainly muon-induced events, were used for the construction of the LD average pulse. The noise template of both detectors is based on 10000 individual noise waveforms of the corresponding channels. Events were triggered in the filtered data with energies above a threshold of 5 \(\times\) RMS noise. Then, for each triggered event, we use the signal maximum at the filter output as an energy estimate and we compute several parameters to describe the pulse shape. The parameters relevant for the present study are the following:

* a) Rise time: the rising part of a signal, from 10% to 90% of the signal amplitude;
* b) Decay time: the descending part of a signal, from 90% to 30% of the signal amplitude;
* c) Correlation: a Pearson's linear correlation coefficient between a triggered signal and an average pulse, both after the filtering;
* d) PSD parameter: defined as the ratio of a fitted amplitude (template vs. particle signal) to a filtered amplitude (the primary energy estimate); more details about this parameter can be found in [78].

Pulse-shape parameters such as the Rise time and Decay time, listed in table 5, show that both bolometers have a comparatively fast response for NTD-instrumented thermal detectors. This observation can be explained by the small size of the absorbers and of their thermistors, and it shows that ZnO, in addition to the widely-used Ge, has good material properties for low-temperature applications. As expected, the stronger NTD bias in Run II resulted in a faster time response than that of Run I.
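For illustration, here is a minimal sketch of how the Rise time and Decay time parameters defined above could be computed from a filtered, baseline-subtracted waveform. Only the 5 kHz sampling rate and the 10%/90% and 90%/30% thresholds are taken from the text; the function name and the use of numpy are our own assumptions, not the analysis code of the experiment.

```python
import numpy as np

SAMPLING_RATE = 5000.0  # Hz, as quoted for the acquisition system

def rise_and_decay_times(pulse):
    """Rise time: leading edge from 10% to 90% of the signal amplitude.
    Decay time: trailing edge from 90% to 30% of the amplitude.
    `pulse` is a baseline-subtracted waveform; returns times in ms."""
    amp = pulse.max()
    peak = int(pulse.argmax())
    lead, tail = pulse[:peak + 1], pulse[peak:]
    # last samples below each threshold on the leading edge
    r10 = np.flatnonzero(lead < 0.1 * amp)[-1]
    r90 = np.flatnonzero(lead < 0.9 * amp)[-1]
    # first samples below each threshold on the trailing edge
    d90 = peak + np.flatnonzero(tail < 0.9 * amp)[0]
    d30 = peak + np.flatnonzero(tail < 0.3 * amp)[0]
    ms_per_sample = 1000.0 / SAMPLING_RATE
    return (r90 - r10) * ms_per_sample, (d30 - d90) * ms_per_sample
```

This sketch assumes the waveform actually starts below 10% of the amplitude and decays below 30% within the processing window.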
Amplitude distributions of the ZnO-detected events in both Runs contain several \(\gamma\) peaks of \({}^{214}\)Pb and \({}^{214}\)Bi from environmental radioactivity, as illustrated in figure 3. This allows us to calibrate the ZnO bolometer and thus to characterize its performance in terms of the voltage signal amplitude per unit of deposited energy (in \(\upmu\)V/keV) and the energy resolution of the noise baseline at the filter output (in keV FWHM), both reported in table 5. The ZnO bolometer's sensitivity in Run I was about 0.3 \(\upmu\)V/keV, which is a comparatively high value for macrobolometers. Despite the higher working temperature of the NTD in Run II, the ZnO bolometric signal is found to be still notable, around 0.09 \(\upmu\)V/keV. The ZnO baseline noise in Run I, 1 keV FWHM, is among the best ever achieved with single-crystal-based bolometers of a few cm\({}^{3}\) or larger tested in this pulse-tube-operated cryostat [79, 80, 81, 82]. In Run II, the ZnO bolometer is characterized by a factor of 3 higher noise level (2.7 keV FWHM), in correlation with the drop of the sensitivity. The difference in the baseline noise is not crucial for the ZnO bolometer energy resolution, which is found to be relatively good, especially when compared to room-temperature scintillating detectors.

A calibration of the Ge LD in both Runs is done by using the distribution of cosmic-ray muons passing through the wafer, which is well described by a Landau distribution whose maximum corresponds to the most probable muon-induced energy release. According to GEANT4-based simulations, we expect the most probable energy deposition in a 0.17-mm-thick Ge wafer to be 100 keV [74]. Taking into account the lack of knowledge of the precise thickness of the used Ge wafer, and keeping in mind an observed spread of \(\pm\)0.02 mm [79], the uncertainty of this calibration method in the present conditions is expected to be around 10%. However, we were able to improve the precision of this calibration by exploiting an alternative calibration, available in Run I, to tune the value of the most probable energy of muons. The alternative calibration is realized thanks to the presence of the two TeO\({}_{2}\) crystals, facing the LD on the opposite side from the ZnO sample, which can act as Te X-ray sources when exposed to radiation [83]. Indeed, we clearly see a Te X-ray K\({}_{\alpha}\)/K\({}_{\beta}\) doublet in addition to the muon distribution, as illustrated in figure 4. Thanks to this observation, we calibrated the most probable muon-induced energy as 93 keV and used this value for the LD energy scale determination. Consequently, we found a rather high LD sensitivity of about 3 \(\upmu\)V/keV in Run I, which was then reduced by a factor of 3 in Run II. Despite the significant difference in sensitivity, the LD noise was measured to be at a similar level of 0.2-0.3 keV FWHM. By applying a 60 V bias on the Al electrode, a signal amplification by a factor of 12 is achieved, while the noise reduction factor is approximately 7, decreasing the LD noise to around 40 eV FWHM.

Taking into account the results of the ZnO sample \(\gamma\) screening (limits on \({}^{228}\)Th/\({}^{226}\)Ra activities are at the level of ten(s) of mBq/kg, see table 4), the total \(\alpha\) activity of Th/U radionuclides in the studied crystal is expected to be low (i.e. ten(s) of events per day).
Thus, in Run I we decided to use an external \(\alpha\) source (\({}^{238}\)U/\({}^{234}\)U, with a smeared energy profile [84]) in order to investigate the response of the ZnO bolometer to \(\alpha\) particles in comparison to electron recoil signals of similar energies induced by muons. Among the above-listed pulse-shape parameters, only the Decay time shows a small difference between the types of particles, while the other parameters clearly show separate populations. An example of the efficient particle identification achieved using a rather simple pulse-shape parameter, such as the Rise time, is illustrated in figure 5; a similar efficiency is obtained with the PSD parameter, while the Correlation appears to be less powerful.

Thanks to the use of the photodetector, we can explore the simultaneous detection of the particle-induced energy release in the ZnO bolometer and of the scintillation detected by the LD as a viable tool for particle identification [42, 43]. Coincidences between the ZnO and LD channels were established using the ZnO-detected events as a trigger and taking into account the tiny difference in the rising part of the signals of the two detectors. Often, the detected scintillation is given relative to the corresponding heat energy, in keV/MeV, represented by the Light-to-Heat ratio. An example of the distribution of the Light-to-Heat parameter versus the energy of the detected events in a dataset of Run I (the same events shown in figure 5) is illustrated in figure 6. The distribution of \(\alpha\) events exhibits enhanced signals detected by the LD, leaking into the band of \(\beta/\gamma/\upmu\)-induced events and even further. Such a detector response to \(\alpha\) particles is unusual. We would explain this effect by the irradiation of the damaged crystal surface by \(\alpha\) particles. Indeed, all sides of the ZnO sample, apart from the side facing the LD, were roughened with 400-grit sandpaper to enhance the light collection through diffuse reflection on the crystal surfaces. This surface treatment damages the surface down to a certain depth, which is typically estimated to be three times the dimensions of the abrasive particulates, i.e. 75 \(\upmu\)m in our case.

\begin{table}
\begin{tabular}{l|c c|c c} \hline Detector & \multicolumn{2}{c|}{ZnO} & \multicolumn{2}{c}{Ge LD} \\ Bolometric measurements & Run I & Run II & Run I & Run II \\ \hline NTD working resistance [M\(\Omega\)] & 2.5 & 0.42 & 4.2 & 0.54 \\ Rise time [ms] & 2.45 & 0.84 & 1.59 (1.30\({}^{*}\)) & 0.95 \\ Decay time [ms] & 15.3 & 5.0 & 10.4 (10.8\({}^{*}\)) & 5.5 \\ Signal amplitude per unit of deposited energy [\(\upmu\)V/keV] & 0.33 & 0.09 & 3.2 (38\({}^{*}\)) & 0.99 \\ Energy resolution [keV FWHM] of baseline noise & 1.0 & 2.7 & 0.29 (0.04\({}^{*}\)) & 0.24 \\ Energy resolution [keV FWHM] at 352 keV & 4.9 & 8.8 & - & - \\ \hline \end{tabular}
\end{table} Table 5: Performance of the ZnO and Ge channels of the composite bolometric detector operated at 15 mK. The difference in the working points of the bolometers in Runs I and II (represented by the NTD resistances) is responsible for the different detector performance. Results achieved with the LD in the NTL amplification mode (60 V electrode bias) are labeled with *.

Figure 4: Energy spectrum of events detected by the bolometric Ge LD over 24 h of Run I measurements in the above-ground cryogenic set-up. The spectrum exhibits Te X-ray K\({}_{\alpha}\) and K\({}_{\beta}\) peaks (fluorescence induced in the two neighboring TeO\({}_{2}\) crystals by natural radioactivity) and a cosmic-ray induced muon bump.
Figure 3: Energy spectrum of events registered with the 7.2 g ZnO bolometer in 96-h-long low-temperature measurements (Run I) in a cryostat at sea level. The \(\gamma\) peaks of \({}^{214}\)Pb and \({}^{214}\)Bi observed in the spectrum originate from environmental radioactivity, while the low-energy region is dominated by beta decays of \({}^{234}\)Th, present in the decay chain of the \({}^{238}\)U/\({}^{234}\)U \(\alpha\) source used to irradiate the ZnO detector.

Therefore, \(\alpha\) particles with energies less than 4.5 MeV would be fully stopped in the damaged layer. From the material-properties point of view, this damaged layer could be enriched with low-lying charge traps, which could then be populated by charge carriers during the handling of the ZnO sample under sunlight. During an \(\alpha\)-particle interaction, a large number of such trapped charge carriers could be released around the \(\alpha\)-particle track, and their subsequent transfer to luminescence centers would enhance the scintillation light emission. A similar involvement of the crystal defect structure in the light yield enhancement was observed with ZnSe-based scintillating bolometers, where the typically observed light yield for \(\alpha\) particles is significantly higher than that for electrons or \(\gamma\) quanta, and hence the quenching factor for \(\alpha\) particles is larger than 1 [85, 86]. Background measurements, i.e. without an externally located calibration \(\alpha\) source, demonstrated in figure 7, support the hypothesis that the defect structure of the ZnO bulk material plays a major role in the observed light yield enhancement for internal \(\alpha\) particles too. The total internal \(\alpha\) activity evaluated in the 3.0-7.5 MeV energy interval is 22(2) mBq/kg, calculated taking into account the selection efficiency of the "pure" \(\alpha\) events through the pulse-shape analysis (90%). Moreover, in the energy spectrum collected in the background run, i.e. without an external calibration \(\alpha\) source, there are two prominent event distributions that could be ascribed to an internal contamination of the ZnO crystal by 6(1) mBq/kg of \({}^{232}\)Th and 12(2) mBq/kg of \({}^{234}\)U. To evaluate the internal \(\alpha\) contamination of the ZnO sample more precisely, a longer background run is required; in addition, a proper correction of the light yield enhancement for \(\alpha\) particles should be taken into consideration in order to significantly improve the energy resolution, in a similar manner as was done in [87].

Figure 5: Distribution of the Rise time parameter for different types of events (black: \(\gamma\), \(\beta\), \(\upmu\)'s; red: \(\alpha\)'s) detected by the ZnO scintillating bolometer in Run I (96 h of measurements). The energy scale is calibrated using \(\alpha\) particles (keV alpha equivalent, keV\({}_{\rm ae}\)). The detector shows pulse-shape discrimination capability using heat signals, as demonstrated by the separation of \(\alpha\) particles (mostly originating from the source) from \(\gamma\), \(\beta\), and muon-induced events.

Figure 6: Scatter plot of the Light-to-Heat parameter versus energy of particles detected by the ZnO scintillating bolometer in Run I (96 h of measurements). The \(\alpha\) particles (red distribution) are selected from \(\gamma\), \(\beta\), and \(\mu\)'s (black distribution) using the Rise time parameter.

Figure 7: Two-dimensional histogram showing the distribution of the Light-to-Heat parameter versus energy of particles registered by the ZnO scintillating bolometer in 271 h of measurements (Run II). The \(\alpha\) particles (red distribution) are selected from \(\gamma\), \(\beta\), and \(\mu\)'s (black distribution) using the PSD parameter.
Nevertheless, from the current data set, one can conclude that the secular equilibrium in the natural decay chains is broken, which could be evidence of a very effective segregation effect occurring during the crystal growth.

## 6 Searches for double beta decay processes in zinc

In this study we do not possess the low-background conditions and large exposure required for high-sensitivity searches for DBD processes (i.e. with currently leading half-life sensitivities of \(\mathcal{O}(10^{21}\)-\(10^{26})\) yr, depending on the DBD channel and isotope of interest). This is due to the above-ground location of this experiment (i.e. significant cosmic-ray background), the modest external shielding around the cryostat (i.e. significant environmental \(\gamma\)-ray background), the absence of special care on the radiopurity of the cryostat and the detector components, the less than 10 g mass of the ZnO sample, and the short duration of the measurements (hundreds of hours). Despite the aforementioned constraints, we can use the present measurements to show the prospects of ZnO bolometers for DBD search applications, particularly thanks to the low energy threshold and the good energy resolution demonstrated in this study.

First of all, we performed MC simulations of DBD processes in the ZnO bolometer aiming at obtaining their response functions. We used the GEANT4-based application Simourg [88] to construct a simplified geometry of the detector and to run MC simulations using the kinematics of \(10^{6}\) initial particles emitted in DBD transitions, provided by the DECAY0 event generator [89]. The GEANT4 10.2 (patch 02) libraries and the Livermore low-energy electromagnetic physics list have been used to run the Simourg program. The energy dependence of the ZnO energy resolution has been approximated using several \(\gamma\) peaks found in the ZnO background spectrum. The resulting MC-simulated energy distributions expected for the different DBD processes in Zn isotopes embedded in the ZnO bolometer are illustrated in figure 8.

As seen in figure 8, all simulated response functions have at least one peak-like feature which might help to recognize a DBD signal of interest over a flat background, except for the featureless, continuous distribution of \({}^{70}\)Zn \(2\nu 2\beta^{-}\) decay events. We used these features to optimize the region of interest (ROI) in terms of the DBD signal containment efficiency and the measured background (e.g. using the ratio of the detection efficiency to the square root of the number of counts in the ROI). We found that the optimal ROIs to search for DBD processes in \({}^{64}\)Zn are in the energy interval of 10-100 keV, in particular \(\sim\)10-20 keV for \({}^{64}\)Zn double-electron capture (a more probable process compared to the \(\varepsilon\beta^{+}\) decay modes thanks to the higher available energy; one can also compare the phase space factors [90]). Such a comparatively low energy threshold is routinely demonstrated by macro-bolometers, including the ZnO low-temperature detector of the present work, where 2 and 6 keV energy thresholds (defined as \(5\times\) the RMS baseline noise) were used in the data processing of Runs I and II, respectively.
In the case of \({}^{70}\)Zn \(0\nu 2\beta^{-}\) decay, a peak (i.e. the ROI) is expected at 1 MeV, while for the \(2\nu\) mode the highest counting rate would be at \(\sim\)1/3 of \(Q_{2\beta}\). However, for the \(2\nu\) mode we used a ROI corresponding to a higher-energy part of the spectrum, \(\sim\)0.4-0.7 MeV, which has an order of magnitude lower background than at the maximum of the \(2\nu 2\beta^{-}\) distribution. The ROIs chosen in the present study and the corresponding detection efficiencies are listed in table 6. In order to estimate the sensitivity of the experiment to the different DBD processes of interest, the half-life limits were calculated using the following formula:

\[\lim T_{1/2}=\ln 2\cdot N\cdot\eta_{PSD}\cdot\eta\cdot t/\lim S, \tag{10}\]

where \(N\) is the number of nuclei of interest (\(2.55\times 10^{22}\) nuclei of the \({}^{64}\)Zn isotope and \(3.16\times 10^{20}\) nuclei of \({}^{70}\)Zn), \(\eta_{PSD}\) is the event selection efficiency, \(\eta\) is the DBD signal detection efficiency, \(t\) is the time of measurements (271 h), and \(\lim S\) is the number of counts excluded at a given confidence level. The event selection efficiency (including also the trigger efficiency) has been studied using a pulse injection method (i.e. pulse templates with different amplitudes were randomly injected into the data stream and processed in the same way as the physics data to evaluate the selection efficiency at a given energy). To derive \(\lim S\), the ZnO sample energy spectrum was fitted in the ROIs with a simple model containing the GEANT4-simulated response functions of the ZnO bolometer to the various DBD processes and a background component. Using the results of the fits and the Feldman-Cousins approach [67], we obtained \(\lim S\) values and calculated the corresponding half-life limits. For instance, the least-squares fit in the 15-60 keV (30-90 keV) interval, \(\chi^{2}/\mathrm{n.d.f.}\) = 73.4/46 = 1.59 (71.2/61 = 1.17), returns the area of the \(2\nu 2\)K (\(2\nu\varepsilon\beta^{+}\)) decay effect as \(425\pm 72\) counts (\(3711\pm 605\) counts). Consequently, we calculated a Feldman-Cousins limit on the excluded events, \(\lim S\) = 543 counts (4703 counts), and a corresponding half-life limit, \(\lim T_{1/2}=8.8\times 10^{17}\) yr (\(1.0\times 10^{17}\) yr) at 90% C.L. The results for all searched-for DBD modes are summarized in table 6. The different distributions of DBD events, excluded at the current level of sensitivity, are illustrated in figure 9. To estimate the ultimate sensitivity of the measurement in a dedicated energy interval, we used the so-called "1\(\sigma\) approach", in which the statistical uncertainties of the number of events registered in a ROI are taken as \(\lim S\) (we took 1.64\(\sigma\) to comply with 90% C.L.). For instance, 25552 counts (27985 counts) were found in the energy range of 15-60 keV (30-90 keV) of the ZnO energy spectrum, which results in \(\lim S\) = 262 counts (274 counts).

Figure 8: Energy spectra of GEANT4-based MC simulations of DBD processes in the operated ZnO bolometer: 2\(\nu\) and 0\(\nu\) modes of double-electron capture (a) and electron capture with positron emission (b) in \({}^{64}\)Zn, and double-beta decay of \({}^{70}\)Zn (c). Each distribution corresponds to 10\({}^{6}\) decays; the energy dependence of the ZnO bolometer energy resolution has been taken into account.
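As a numerical cross-check of eq. (10) and of the "1\(\sigma\) approach", the following sketch (our own; the function and variable names are hypothetical) reproduces the \(2\nu 2\)K and \(2\nu\varepsilon\beta^{+}\) ultimate-sensitivity values quoted in the next paragraph and in table 6:

```python
import math

HOURS_PER_YEAR = 24 * 365.25  # = 8766

def half_life_limit(n_nuclei, eta_psd, eta, t_hours, lim_s):
    # eq. (10): lim T_1/2 = ln2 * N * eta_PSD * eta * t / lim S
    return math.log(2) * n_nuclei * eta_psd * eta * (t_hours / HOURS_PER_YEAR) / lim_s

N_ZN64 = 2.55e22   # number of 64Zn nuclei in the 7.2 g ZnO crystal
T_HOURS = 271      # duration of the background measurement

# "1-sigma approach": lim S = 1.64 * sqrt(counts in the ROI)
lim_s_2n2k = 1.64 * math.sqrt(25552)   # ~262 counts in 15-60 keV
lim_s_2neb = 1.64 * math.sqrt(27985)   # ~274 counts in 30-90 keV

print(half_life_limit(N_ZN64, 0.88, 0.960, T_HOURS, lim_s_2n2k))  # ~1.8e18 yr
print(half_life_limit(N_ZN64, 0.90, 0.424, T_HOURS, lim_s_2neb))  # ~7.6e17 yr
```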
Using these \(\lim S\) values together with the number \(N\) of \({}^{64}\)Zn nuclei, the efficiencies \(\eta\) of the \(2\nu 2\)K (\(2\nu\varepsilon\beta^{+}\)) processes, \(\eta_{PSD}\) = 0.88 (0.90) in the considered energy intervals, and the time \(t\) of data taking, one obtains \(\lim T_{1/2}=1.8\times 10^{18}\) yr (\(7.6\times 10^{17}\) yr). The results of this method are also presented in table 6. It should be noted that in the case of \(2\nu 2\beta^{-}\) decay of \({}^{70}\)Zn, we conservatively ascribe all events in the ROI to the continuum signature of the process searched for (see table 6).

## 7 Discussion

The achieved sensitivity in terms of half-life limits for most DBD modes of the \({}^{64}\)Zn and \({}^{70}\)Zn isotopes, \(\mathcal{O}\)(10\({}^{17}\)-10\({}^{18}\)) yr, is still several orders of magnitude lower than the most stringent limits for these nuclei, which range from 10\({}^{21}\) to 10\({}^{22}\) yr. The significantly higher exposures of these leading experiments (by a factor of \(10^{3}-10^{4}\)), as well as their underground location and well-shielded low-background set-ups, were the major factors behind such enhanced sensitivity, as shown in table 7. It should also be stressed that only a dozen nuclei among the potentially \(2\varepsilon\), \(\varepsilon\beta^{+}\), \(2\beta^{+}\) active isotopes have been investigated at a sensitivity level of around 10\({}^{21}\) yr or higher, such as \({}^{36}\)Ar, \({}^{40}\)Ca, \({}^{58}\)Ni, \({}^{64}\)Zn, \({}^{78}\)Kr, \({}^{96}\)Ru, \({}^{106}\)Cd, \({}^{112}\)Sn, \({}^{124}\)Xe, \({}^{130}\)Ba, \({}^{132}\)Ba; see [94] and references therein. The current limitation of the experimental sensitivity in these studies is a consequence of a few factors: a) the lower energy releases in \(2\varepsilon\), \(\varepsilon\beta^{+}\), \(2\beta^{+}\) processes in comparison with \(2\beta^{-}\) decay, which complicates background suppression; b) the higher expected \(T_{1/2}\) values, as a result of the low decay energies and phase space factors; and c) the lower natural isotopic abundances of these isotopes (typically less than 1%, with only a few exceptions).

\begin{table}
\begin{tabular}{c c c c c c}
\hline \hline
\multirow{2}{*}{Nuclide} & DBD & ROI & DBD containment & Ultimate & Half-life \\
 & process & [keV] & efficiency, \(\eta\) & sensitivity [yr] & limit [yr] \\
\hline
\multirow{4}{*}{\({}^{64}\)Zn} & \(2\nu 2\)K & 15-60 & 0.960 & 1.8\(\times 10^{18}\) & 8.8\(\times 10^{17}\) \\
 & \(0\nu 2\varepsilon\) & 15-60 & 0.826 & 1.5\(\times 10^{18}\) & 7.5\(\times 10^{17}\) \\
 & \(2\nu\varepsilon\beta^{+}\) & 30-90 & 0.424 & 7.6\(\times 10^{17}\) & 1.0\(\times 10^{17}\) \\
 & \(0\nu\varepsilon\beta^{+}\) & 40-90 & 0.646 & 1.3\(\times 10^{18}\) & 3.6\(\times 10^{18}\) \\
\hline
\multirow{2}{*}{\({}^{70}\)Zn} & \(2\nu 2\beta^{-}\) & 430-730 & 0.240 & 1.3\(\times 10^{14}\) & 3.5\(\times 10^{14}\) \\
 & \(0\nu 2\beta^{-}\) & 900-1100 & 0.914 & 6.1\(\times 10^{16}\) & 1.7\(\times 10^{17}\) \\
\hline \hline
\end{tabular}
\end{table}
Table 6: Regions of interest (ROI) used to search for various DBD processes in Zn isotopes, the corresponding DBD signal containment efficiencies, the ultimate sensitivities, and the half-life limits obtained by applying the two methods to the data of the ZnO bolometer (7.2 g of mass, operated over 271 h). The ultimate sensitivities and half-life limits are given at the 90% confidence level. In the case of \(2\nu 2\beta^{-}\) decay of \({}^{70}\)Zn, we conservatively ascribe all events in the ROI to the continuum signature of the process searched for.
Moreover, the half-life limits obtained here, as well as the most stringent existing limits, are both well below the existing theoretical predictions [95, 96] for zinc isotopes. According to these predictions, the observation of \(2\nu 2\beta^{-}\) decay of \({}^{70}\)Zn is expected at \(\mathcal{O}(10^{23}\)-\(10^{24})\) yr, \(2\nu 2\varepsilon\) of \({}^{64}\)Zn at \(\mathcal{O}(10^{25}\)-\(10^{26})\) yr, and the \(2\nu\varepsilon\beta^{+}\) decay mode of \({}^{64}\)Zn at \(\mathcal{O}(10^{31}\)-\(10^{35})\) yr. Hence, the most promising DBD processes in natural zinc isotopes to be searched for at the current level of experimental sensitivity are \(2\nu 2K\) and \(0\nu 2\varepsilon\) of \({}^{64}\)Zn, along with the \(2\nu 2\beta^{-}\) and \(0\nu 2\beta^{-}\) decay modes of \({}^{70}\)Zn in case of its enrichment. Therefore, one needs a clear strategy for enhancing the experimental sensitivity of a Zn-based detector. Fortunately, in the case of zinc isotopes, further improvement of the experimental sensitivity can be achieved rather easily by increasing the ZnO detector mass, by producing crystals from high-purity ZnO raw materials (no less than 99.999%), and by performing cryogenic measurements in a well-shielded cryostat at an underground laboratory.

Figure 9: Fragment of the energy spectrum collected with the 7.2 g ZnO bolometer in a 271-h-long measurement at sea level. The most intensive \(\gamma\) peaks of environmental radioactivity present in the spectrum are labeled. Distributions of different DBD processes in Zn isotopes, excluded at the 90% C.L., are shown in the main plot (\({}^{70}\)Zn) and in the two insets (both correspond to \({}^{64}\)Zn). For clarity of the inset, we do not present the \(0\nu 2\varepsilon\) decay mode, which has an energy distribution similar to that of \(2\nu 2\)K below 20 keV.

For instance, for an experiment with 5.1 kg of ZnO crystals, which is equivalent to the zinc mass in the CUPID-0 detector (i.e. 4.1 kg of Zn), measured for one year, the experimental sensitivity would increase by a factor of about 150 with respect to the current experiment with the 7.2 g ZnO crystal.
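This factor can be cross-checked with a back-of-the-envelope sketch, under the assumption (ours, not a statement taken from the original analysis) that in the background-dominated regime the half-life sensitivity grows as the square root of the exposure:

```python
import math

m_now_kg, t_now_d = 7.2e-3, 271.0 / 24.0   # present ZnO bolometer: 7.2 g over 271 h
m_new_kg, t_new_d = 5.1, 365.25            # 5.1 kg of ZnO measured for one year

# sensitivity gain ~ sqrt of the exposure ratio in a background-dominated search
gain = math.sqrt((m_new_kg * t_new_d) / (m_now_kg * t_now_d))
print(round(gain))  # -> 151, i.e. the factor of about 150 quoted above
```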
\begin{table}
\begin{tabular}{c c c c c c}
\hline \hline
Nuclide & DBD process & \(\lim T_{1/2}\) [yr] & Exposure (kg\(\times\)d) & Detector type & Ref. \\
\hline
\({}^{64}\)Zn & \(2\nu 2\)K & \(1\times 10^{19}\) & 193 & ZnWO\({}_{4}\) scint. det. & [44] \\
 & & \(9\times 10^{17}\) & 0.08 & ZnO scint. bol. & This work \\
 & & \(6\times 10^{16}\) & 0.14 & CdZnTe semicond. & [92] \\
 & & \(8\times 10^{15}\) & 0.19 & Zn + prop. chamber & [91] \\
\cline{2-6}
 & \(0\nu 2\varepsilon\) & \(3\times 10^{21}\) & 348 & HP Zn + HPGe & [49] \\
 & & \(3\times 10^{20}\) & 193 & ZnWO\({}_{4}\) scint. det. & [44] \\
 & & \(7\times 10^{18}\) & 7.85 & CdZnTe semicond. & [93] \\
 & & \(8\times 10^{17}\) & 0.08 & ZnO scint. bol. & This work \\
 & & \(7\times 10^{17}\) & 0.08 & ZnWO\({}_{4}\) scint. det. & [35] \\
\cline{2-6}
 & \(2\nu\varepsilon\beta^{+}\) & \(3\times 10^{21}\) & 348 & HP Zn + HPGe & [49] \\
 & & \(9\times 10^{20}\) & 193 & ZnWO\({}_{4}\) scint. det. & [44] \\
 & & \(1\times 10^{20}\) & 7.13 & Zn + HPGe & [60] \\
 & & \(4\times 10^{18}\) & 0.08 & ZnWO\({}_{4}\) scint. det. & [35] \\
 & & \(2\times 10^{18}\)* & 1.53 & Zn + HPGe & [59] \\
 & & \(1\times 10^{17}\) & 0.08 & ZnO scint. bol. & This work \\
\cline{2-6}
 & \(0\nu\varepsilon\beta^{+}\) & \(1\times 10^{22}\) & 4142 & ZnSe scint. bol. & [48] \\
 & & \(9\times 10^{20}\) & 193 & ZnWO\({}_{4}\) scint. det. & [44] \\
 & & \(1\times 10^{20}\) & 7.13 & Zn + HPGe & [60] \\
 & & \(4\times 10^{18}\) & 0.08 & ZnO scint. bol. & This work \\
 & & \(2\times 10^{18}\)* & 1.53 & Zn + HPGe & [59] \\
 & & \(1\times 10^{18}\) & 7.85 & CdZnTe semicond. & [93] \\
 & & \(3\times 10^{16}\) & 0.14 & CdZnTe semicond. & [92] \\
\hline
\({}^{70}\)Zn & \(2\nu 2\beta^{-}\) & \(4\times 10^{18}\) & 193 & ZnWO\({}_{4}\) scint. det. & [44] \\
 & & \(2\times 10^{18}\) & 4142 & ZnSe scint. bol. & [48] \\
 & & \(1\times 10^{18}\) & 7.85 & CdZnTe semicond. & [92] \\
 & & \(3\times 10^{16}\) & 0.14 & CdZnTe semicond. & [92] \\
 & & \(1\times 10^{16}\) & 0.08 & ZnWO\({}_{4}\) scint. det. & [35] \\
 & & \(4\times 10^{14}\) & 0.08 & ZnO scint. bol. & This work \\
\cline{2-6}
 & \(0\nu 2\beta^{-}\) & \(2\times 10^{21}\) & 4142 & ZnSe scint. bol. & [48] \\
 & & \(3\times 10^{19}\) & 193 & ZnWO\({}_{4}\) scint. det. & [44] \\
 & & \(7\times 10^{17}\) & 0.08 & ZnWO\({}_{4}\) scint. det. & [35] \\
 & & \(2\times 10^{17}\) & 0.08 & ZnO scint. bol. & This work \\
 & & \(1\times 10^{16}\) & 0.53 & CdZnTe semicond. & [92] \\
\hline \hline
\end{tabular}
\end{table}
Table 7: Sensitivities, \(\lim T_{1/2}\), of different experiments searching for various DBD processes in Zn isotopes. The \(\lim T_{1/2}\) results of the present search with the ZnO scintillating bolometer and of the other quoted experiments are given at 90% C.L., except for a few labeled with (*), which are at 68% C.L. All quoted previous experiments, except studies [91] and [59], were realized in underground laboratories and in dedicated low-background facilities.

Further, if this detector were placed in an underground facility (e.g. LNGS or Modane) providing effective shielding against cosmic-ray-induced events, the background rate would be reduced by a factor of \(10^{4}\) [97]. Thus, in total, one could expect an enhancement of the experimental sensitivity by a factor of about \(10^{6}\), corresponding to half-life limits of \(\mathcal{O}(10^{23}\)-\(10^{24})\) yr. Such experimental sensitivity would be rather competitive even in comparison with current experiments in the DBD field, while being achieved with natural zinc. Moreover, there is a real opportunity to further increase the experimental sensitivity (by a factor of 2) by using zinc enriched in the \({}^{64}\)Zn isotope. A water solution of ZnO and a Zn-containing organic compound (Zn-acetate, Zn(CH\({}_{3}\)COO)\({}_{2}\times\)H\({}_{2}\)O) are widely used in the nuclear industry as effective reagents to prevent bacteria cultivation in the water environment of the active zone of water-water reactors and to reduce corrosion of the construction materials [98].
Zinc used for this application should be depleted in the \({}^{64}\)Zn isotope, since this isotope has the highest neutron-capture cross-section among all natural zinc isotopes, leading to notable activation under the high neutron flux in the active zone. Therefore, significant quantities of \({}^{64}\)Zn remain after the depletion of zinc isotopes for nuclear industry applications, and this material is considered a by-product. The cost of highly enriched (more than 99.9%) \({}^{64}\)Zn is \(\mathcal{O}(10)\) kEuro/kg, compared with the typical price of enriched isotopes used in the DBD field (\(\mathcal{O}(100)\) kEuro/kg), which makes it affordable for the implementation of a 10 kg-scale experiment. One more possibility to enhance the experimental sensitivity achievable with a ZnO-based scintillating bolometer is related to the fact that the ZnO crystal is a semiconductor material. Therefore, one can exploit the phonon-signal amplification in the presence of an electric field, i.e. the Neganov-Trofimov-Luke (NTL) effect, as earlier adopted for Ge light detectors [74]. This phonon-signal amplification increases the signal amplitude, which improves the baseline energy resolution and significantly lowers the low-energy detection threshold. For instance, the typical energy resolution of cryogenic light detectors, \(\mathcal{O}(300)\) eV FWHM [99], can be improved to \(\mathcal{O}(50)\) eV using NTL amplification [74], as also shown in the present work. The improved energy resolution would help to minimize the background contribution to the region of interest, leading to an enhancement of the experimental sensitivity by a factor of 5. All these parameters are vital for studies of DBD processes with a low energy release in the outgoing channel, such as \(2\nu 2K\) in \({}^{64}\)Zn. An important consideration that should also be taken into account during the production and processing of ZnO crystals for such an experiment, and during the final installation, is that the raw ZnO material and the produced ZnO crystals should be transported only by land, to avoid cosmogenic activation and the production of the long-lived \({}^{65}\)Zn isotope, which would reduce the experimental sensitivity. ## 8 Conclusion In this work we report a detailed study of the performance of a ZnO-based scintillating bolometer as a detector to search for rare processes in zinc isotopes. For the first time, a 7.2 g ZnO crystal, containing more than 80% of zinc by mass, was successfully operated as a low-temperature detector over 271 h of background measurements in a pulse-tube dilution refrigerator at a surface lab. The ZnO-based detector exhibits an excellent baseline-noise energy resolution of 1.0-2.7 keV FWHM at various working temperatures, resulting in a low energy threshold of the experiment, i.e. 2.0-6.0 keV. This also makes it feasible to study rare processes with low energy releases, like the \(2\nu 2K\) decay mode of \({}^{64}\)Zn. The light yield for \(\beta/\gamma\) events was determined to be 1.5(3) keV/MeV, while for \(\alpha\) particles it varies in the range of 0.2-3.0 keV/MeV, most probably due to the contribution of the defect structure of the bulk material to the light-yield enhancement effect.
The ZnO-based detector shows a powerful pulse-shape discrimination capability using the time properties of the heat signals alone (namely, the rise time parameter), demonstrating a full separation of \(\alpha\) particles from the \(\beta/\gamma\) events of environmental radioactivity and from muon-induced events. The ZnO crystal was found to be rather radiopure with respect to daughter nuclides from the U/Th natural decay chains, as well as \({}^{40}\)K of natural radioactivity and \({}^{137}\)Cs of anthropogenic origin, with only limits on their activities set at the level of \(\mathcal{O}\)(1-100) mBq/kg in measurements with an ultra-low-background HPGe \(\gamma\)-spectrometer. A total internal \(\alpha\)-activity of 22(2) mBq/kg in the 3.0-7.5 MeV energy interval was derived by analysing the "pure" \(\alpha\) events selected from the Run II data set (background measurements). Two prominent \(\alpha\)-event distributions in the background spectrum could be ascribed to internal contamination of the crystal by 6(1) mBq/kg of \({}^{232}\)Th and 12(1) mBq/kg of \({}^{234}\)U. A further, more precise evaluation of the internal \(\alpha\) contamination of the ZnO crystal is required. At the same time, being produced from raw materials of low chemical purity grade (99.98%), ZnO demonstrates that its radiopurity can feasibly be improved by utilizing high-purity chemicals and raw materials. Taking into account the excellent performance of the ZnO crystal as a scintillating bolometer and profiting from 271 h of acquired background data (without \(\gamma\) and \(\alpha\) calibration sources), limits on DBD processes in the \({}^{64}\)Zn and \({}^{70}\)Zn isotopes were set at the level of \(\mathcal{O}\)(10\({}^{17}\)-10\({}^{18}\)) yr for various decay modes. The achieved sensitivity was analyzed and compared with all previous low-background long-term experiments searching for DBD processes in zinc isotopes. To summarize, there is a good potential for ZnO-based scintillating bolometers to search for DBD processes of Zn isotopes, especially the DBD modes in \({}^{64}\)Zn with the most prominent spectral features at \(\sim\)10-20 keV. Observation of this process would require a "source = detector" technology with a low energy threshold, good energy resolution, and low background, all of which are achievable with ZnO-based scintillating bolometers. Together with the high natural isotopic abundance and the potential for further isotopic enrichment, the sensitivity to DBD processes in \({}^{64}\)Zn with a 10 kg-scale experiment is at the level of \(\mathcal{O}\)(10\({}^{24}\)) yr. Furthermore, a ZnO-based cryogenic experiment would be complementary to the current DBD searches, allowing for studies of the \(2\varepsilon\), \(\varepsilon\beta^{+}\), and \(2\beta^{-}\) decay modes of natural zinc isotopes at the \(\mathcal{O}\)(10\({}^{24}\)) yr level and beyond. ## 9 Acknowledgements BB is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC). SSN is supported by the Arthur B. McDonald Canadian Astroparticle Physics Research Institute. We also appreciate the fruitful collaboration with Dr. Vladimir Lutin and are grateful for the provided ZnO sample used in these studies. The cryostat used for the low-temperature tests described here, installed at ICLab (Orsay, France), was donated by the Dipartimento di Scienza e Alta Tecnologia of the Insubria University (Como, Italy).
The bolometric measurements were realized within the BINGO project, which has received funds from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 865844 - BINGO).
2302.08831
Symmetry of the Hyperfine and Quadrupole Interactions of Boron Vacancies in a Hexagonal Boron Nitride
The concept of optically addressable spin states of deep level defects in wide band gap materials is successfully applied for the development of quantum technologies. Recently discovered negatively charged boron vacancy defects (VB) in hexagonal boron nitride (hBN) potentially allow a transfer of this concept onto atomic thin layers due to the van der Waals nature of the defect host. Here, we experimentally explore all terms of the VB spin Hamiltonian reflecting interactions with the three nearest nitrogen atoms by means of conventional electron spin resonance and high frequency (94 GHz) electron-nuclear double resonance. We establish symmetry, anisotropy, and principal values of the corresponding hyperfine interaction (HFI) and nuclear quadrupole interaction (NQI). The HFI can be expressed in the axially symmetric form as Aperp = 45.5 MHz and Apar = 87 MHz, while the NQI is characterized by quadrupole coupling constant Cq = 1.96 MHz with slight rhombicity parameter n = (Pxx - Pyy)/Pzz = -0.070. Utilizing a conventional approach based on a linear combination of atomic orbitals and HFI values measured here, we reveal that almost all spin density (84 %) of the VB electron spin is localized on the three nearest nitrogen atoms. Our findings serve as valuable spectroscopic data and direct experimental demonstration of the VB spin localization in a single two dimensional BN layer.
Irina N. Gracheva, Fadis F. Murzakhanov, Georgy V. Mamin, Margarita A. Sadovnikova, Bulat F. Gabbasov, Evgeniy N. Mokhov, Marat R. Gafurov
2023-02-17T12:04:02Z
http://arxiv.org/abs/2302.08831v1
Symmetry of the Hyperfine and Quadrupole Interactions of Boron Vacancies in a Hexagonal Boron Nitride ###### Abstract The concept of optically addressable spin states of deep-level defects in wide band gap materials is successfully applied for the development of quantum technologies. Recently discovered negatively charged boron vacancy defects (V\({}_{\text{B}}^{-}\)) in hexagonal boron nitride (hBN) potentially allow a transfer of this concept onto atomic-thin layers due to the van der Waals nature of the defect host. Here, we experimentally explore all terms of the V\({}_{\text{B}}^{-}\) spin Hamiltonian reflecting interactions with the three nearest nitrogen atoms by means of conventional electron spin resonance and high-frequency (94 GHz) electron\(-\)nuclear double resonance. We establish the symmetry, anisotropy, and principal values of the corresponding hyperfine interaction (HFI) and nuclear quadrupole interaction (NQI). The HFI can be expressed in the axially symmetric form as \(A_{\perp}=\) 45.5 \(\pm\) 0.9 MHz and \(A_{\parallel}=\) 87 \(\pm\) 0.5 MHz, while the NQI is characterized by the quadrupole coupling constant \(C_{\text{q}}=\) 1.96 \(\pm\) 0.05 MHz with a slight rhombicity parameter \(\eta=(P_{xx}-P_{yy})/P_{zz}=-\)0.070 \(\pm\) 0.005. Utilizing a conventional approach based on a linear combination of atomic orbitals and the HFI values measured here, we reveal that almost all of the spin density (\(\approx\)84%) of the V\({}_{\text{B}}^{-}\) electron spin is localized on the three nearest nitrogen atoms. Our findings serve as valuable spectroscopic data and a direct experimental demonstration of the V\({}_{\text{B}}^{-}\) spin localization in a single two-dimensional BN layer. ## 1 Introduction Optically polarized electron spins of defects in solids serve as a valuable platform for the development of advanced quantum technologies [1, 2, 3] and are widely used as a tool to probe unconventional many-body quantum physics in condensed matter [6, 7]. The development of solid-state technologies based on defects with a significant electron\(-\)nuclear interaction has been underway for several decades and has been implemented in diverse semiconductor crystal structures [8]. The main solid-state platforms explored in this respect are diamond with negatively charged nitrogen vacancy defects (NV\({}^{-}\)) and silicon carbide (SiC) with NV\({}^{-}\) and vacancy-related defects [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]. Quantum magnetometers [13, 2, 5], nanoscale magnetic resonance imaging (MRI) [13], sensors of thermal and electrical fields [35], and masers operated at room temperature [14, 15] have been proposed and developed exploiting the advantages of these defects. The main approach to their use in the above-mentioned applications is that the high-spin state (\(S\geq 1\)) of the defect is already split in zero magnetic field and can be initialized, manipulated, and subsequently read out by optical or radio-frequency means, utilizing the principles of electron spin resonance (ESR) and optically detected magnetic resonance (ODMR) [16]. However, both diamond and SiC are three-dimensional (3D) crystals, which hampers advanced nanoscale fabrication processing of these materials.
A distinctly new platform for the potential realization of the above-mentioned scenarios has been recognized only recently through the demonstration of optically addressable spin states of defects in two-dimensional van der Waals (vdW) materials, namely, hexagonal boron nitride (hBN) [17, 18, 19, 20, 21]. hBN is formed by 2D atomic layers of sp\({}^{2}\)-hybridized nitrogen and boron atoms that are coupled through weak vdW interactions. Its ultrawide band gap (\(E_{\text{g}}\approx\) 6 eV) [22, 23] naturally ensures the existence of deep-level defects with optical transitions well below the band gap [24]. Together with well-developed techniques allowing fine tuning of the number of atomically thin layers (down to a single layer), this makes such a layered material particularly interesting for nanoscale quantum technologies [23, 24, 25] and as a monolayer single-photon quantum emitter [26]. Particularly, exploiting the 2D nature of hBN, defects emitting single photons have been successfully isolated in a single monoatomic BN layer [25], and the search for defects in hBN from the perspective of their applicability for quantum sensing and qubits has been launched [7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32]. Among the large variety of defects possessing ODMR, only one defect, in the form of the negatively charged boron vacancy (V\({}_{\rm B}^{-}\)), has been shown to be controllably and reproducibly generated in the hBN host [30, 31, 32, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31], and its microscopic origin is well understood and established by means of rigorous ESR spectroscopy and calculations based on density functional theory [32, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32]. The structure of the V\({}_{\rm B}^{-}\) defect is schematically shown in Figure 1a as a missing boron atom with three equivalent nitrogen atoms in the nearest environment of the boron vacancy [36]. Based on the results of these experiments, we complete the study of the electron\(-\)nuclear interactions of the V\({}_{\rm B}^{-}\) electron spin with the nearest nitrogen shell and are able to demonstrate that the V\({}_{\rm B}^{-}\) spin is localized in a single two-dimensional layer. ## 2 Materials and Methods The sample used in this study is a commercially produced hexagonal boron nitride single crystal (HQ Graphene) with dimensions of 1 mm \(\times\) 1 mm \(\times\) 0.15 mm. To create boron vacancies, the sample was irradiated with 2 MeV electrons to a total fluence of 6 \(\times\) 10\({}^{18}\) cm\({}^{-2}\). The hBN crystal was mounted in the resonator system of the spectrometer using a special nonmagnetic sample holder. The ESR spectra were measured at \(T\) = 35 K on a standard Bruker ESR spectrometer (ESP-300) in the X-band frequency range (\(\nu_{\rm mw}\approx\) 9.6 GHz), equipped with an Oxford Instruments liquid-helium flow cryostat for low-temperature measurements. The ENDOR spectra were measured at \(T\) = 25 K in the W-band frequency range (\(\nu_{\rm mw}=\) 94 GHz) on a Bruker Elexsys E680 spectrometer. The spectra were recorded using a Mims pulse sequence: \(\pi/2\)\(-\)\(\tau-\pi/2\)\(-\) radiofrequency \(\pi\) pulse \(-\)\(\pi/2\)\(-\)\(\tau\)\(-\) electron spin echo (ESE), where \(\pi/2\) = 40 ns, \(\tau\) = 260 ns, and \(\pi_{\rm rf}\) = 72 \(\mu\)s. All ESR and ENDOR spectra were measured under optical excitation of the sample with a 532 nm laser through an optical fiber.
The EasySpin software package was used to simulate the obtained ESR spectra [37]. ## 3 Results and Discussion To refine the structure of the center and to study the anisotropy of the electron\(-\)nuclear interactions, we first measured the stationary X-band ESR spectra of the hBN sample under \(\lambda\) = 532 nm optical excitation with the static magnetic field **B** oriented parallel (**B**\(\|c\)) and perpendicular (**B**\(\perp c\)) to the hexagonal \(c\) axis. The results are shown in Figure 2a. The doublet of lines labeled with vertical arrows originates from the allowed magnetic dipole transitions (\(m_{\rm S}=0\leftrightarrow m_{\rm S}=+1\) and \(m_{\rm S}=0\leftrightarrow m_{\rm S}=-1\)) between the Zeeman-split triplet states. The splitting between these lines is \(\Delta B\approx 257\) mT = 2\(D/\gamma_{\rm e}\), where \(D\approx\) 3.6 GHz and \(\gamma_{\rm e}\) = 28 GHz/T is the electron gyromagnetic ratio. These values are the spectroscopic fingerprints of the \(S\) = 1 V\({}_{\rm B}^{-}\) defect [33, 31, 18]. The optically induced predominant population of the \(m_{\rm S}\) = 0 spin sublevel is readily seen through the phase reversal of the fine-structure lines, as schematically depicted in the right inset in Figure 2a. The HFI structure, shown on an enlarged scale for the \(m_{\rm S}=0\to m_{\rm S}=+1\) ESR line in the left inset in Figure 2a, is due to the interaction of the electron spin of the defect with the nuclear spins of the nearest nitrogen atoms (\({}^{14}\)N, nuclear spin \(I\) = 1, natural abundance 99.63%). To describe the ESR spectra, the following spin Hamiltonian is used: \[H=g\mu_{\rm B}\mathbf{B}\mathbf{S}+D\left(S_{z}^{2}-\frac{S(S+1)}{3}\right)+\sum_{k=1}^{3}\mathbf{S}\mathbf{A}_{k}\mathbf{I}_{k} \tag{1}\] Here, the first term reflects the electron Zeeman interaction with an isotropic \(g\)-factor, and the second term describes the ZFS, with the principal \(z\)-axis of the \(D\)-tensor coinciding with the \(c\) axis of the crystal. The third term describes the hyperfine interaction of the V\({}_{\rm B}^{-}\) electron spin with the three (\(k\) = 1, 2, 3) \({}^{14}\)N nuclear spins nearest to the vacancy, using the axial \(A\)-tensor with the principal \(z\)-axis directed along the nitrogen dangling bond.

Figure 1: (a) Structure of the hBN lattice and schematic representation of the V\({}_{\rm B}^{-}\) defect, where green balls represent boron atoms, blue balls represent nitrogen atoms, and yellow marks the boron vacancy center; (b) V\({}_{\rm B}^{-}\) center energy-level diagram in the absence of an external static magnetic field and the scheme of the optical pumping cycle of the ground-state (GS) \(m_{\rm S}\) = 0 spin sublevel. Excitation (green) transfers the system into the excited state (ES). Radiative recombination (purple) and spin-dependent nonradiative intersystem-crossing decay to the GS via the metastable state (MS) (dashed lines). \(D\) denotes the ZFS.

As shown in Figure 2c, these directions are labeled as \(A_{\rm zz}\) for each nitrogen nucleus. **S** and **I** are the operators of the total electron and nuclear spins (\(S=1\), \(I=1\)), \(g\) is the Landé factor, \(\mu_{\rm B}\) is the Bohr magneton, and **B** is the static magnetic field. In the **B**\(\|c\) orientation, all three nitrogen atoms are equivalent; thus, one expects to observe a seven-line hyperfine structure, since the number of hyperfine lines is given by \(2nI+1=7\), where \(n=3\) is the number of equivalent nuclei with \(I=1\).
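The counting behind the seven-line pattern can be illustrated with a short Python sketch (ours, not from the paper): the multiplet of \(n\) equivalent nuclei is the \(n\)-fold convolution of a flat \((2I+1)\)-line pattern, which for \(n=3\) and \(I=1\) also yields the characteristic 1:3:6:7:6:3:1 intensity ratios:

```python
import numpy as np

def multiplet(n_nuclei: int, nuclear_spin: float) -> np.ndarray:
    """Hyperfine intensity pattern of n equivalent nuclei of spin I."""
    lines = np.ones(int(2 * nuclear_spin + 1))  # one entry per m_I projection
    pattern = np.array([1.0])
    for _ in range(n_nuclei):
        pattern = np.convolve(pattern, lines)   # add one nucleus at a time
    return pattern

print(multiplet(3, 1))  # [1. 3. 6. 7. 6. 3. 1.] -> 2*n*I + 1 = 7 lines

# Fine-structure check: doublet separation 2D/gamma_e for D = 3.6 GHz
print(2 * 3.6 / 28)     # GHz / (GHz/T) -> ~0.257 T = 257 mT, as in Figure 2a
```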
This seven-line HF structure, together with its simulation using eq 1, is shown in the left inset in Figure 2a. The extracted value of the hyperfine coupling constant, \(A_{\rm zz}=47\pm 1\) MHz, corresponds to the values previously measured in ESR/ODMR experiments [18, 28] and is in good agreement with the value calculated by density functional theory (DFT) [28, 33]. To determine the anisotropy of the hyperfine interaction, we then measured the angular dependence of the hyperfine splitting in the 2D plane in the **B**\(\perp c\) configuration by rotating the **B** vector in the (0001) plane by an angle \(\varphi\), as depicted in Figure 2c. We are interested in the region with a pronounced hyperfine structure in the magnetic field range of 380\(-\)410 mT. The ESR spectra for three different angles \(\varphi\) are shown in Figure 2b. A satisfactory description of the spectra is obtained for the values of the spin-Hamiltonian parameters indicated in Table 1 (first row). The corresponding simulated spectra are shown in gray. Since the stationary ESR technique does not allow one to determine the parameters of the quadrupole interaction, we carried out ENDOR experiments. The analysis of the ENDOR spectra also makes it possible to refine the values of the hyperfine interaction tensor. Like the ESR spectra, all ENDOR spectra were collected under \(\lambda=532\) nm optical excitation and for different orientations of the static magnetic field vector **B** relative to the \(c\)-axis. Figure 3a,b shows the measured ENDOR spectra for **B**\(\perp c\) and Figure 3c for **B**\(\|c\). For a correct description of the ENDOR spectra, two additional terms have to be added to the spin Hamiltonian (1). The first is the nuclear Zeeman term \(Z_{\rm n}=-g_{\rm n}\mu_{\rm n}\mathbf{B}\mathbf{I}\), where \(g_{\rm n}\) is the nuclear \(g\)-factor of \({}^{14}\)N and \(\mu_{\rm n}\) is the nuclear magneton. The second is the quadrupole term \(\mathbf{I}\mathbf{P}\mathbf{I}\), with \(P=\frac{3eQ_{\rm N}V_{zz}}{4I(2I-1)}\), describing the nuclear quadrupole interaction (NQI). The latter is related to the interaction of the electric field gradient \(V_{ij}\) with the nuclear electric quadrupole moment \(Q_{\rm N}\) and is described by the quadrupole interaction tensor \(P_{ij}\). Based on the symmetry of the system, it is logical to assume that the direction of the \(z\)-axis of the \(P\)-tensor for each nucleus coincides with the direction of the \(z\)-axis of the hyperfine interaction tensor (see Figure 2c). The results of the ENDOR experiments are shown in Figure 3. Dashed lines are simulations of the ENDOR spectra with the parameters listed in Table 2. The asymmetry coefficient was taken to be \(\eta=(P_{xx}-P_{yy})/P_{zz}=-0.070\), with \(C_{\rm q}=1.96\) MHz, where \(C_{\rm q}\) is the quadrupole coupling constant defined as \(C_{\rm q}=eQ_{\rm N}V_{zz}\). The hyperfine interaction tensor values were slightly corrected to \(A_{xx}\approx A_{yy}=A_{\perp}=45.5\) MHz and \(A_{zz}=A_{\parallel}=87\) MHz. In the **B**\(\perp c\) orientation, the in-plane deviation of **B** from the dangling bond is, according to the simulation, \(7.7^{\circ}\). It should be noted that the \(C_{\rm q}\) value determined here is in good agreement with \(C_{\rm q}=2.11\) MHz derived previously from electron spin echo envelope modulation (ESEEM) experiments [28].
The slight difference of 150 kHz between these two values is due to the fact that in the current experiments the asymmetry of the NQI is revealed and thus taken into account. The presence of additional lines in the ENDOR spectra (Figure 3a) near the Larmor frequency (\(\nu_{\rm Larmor}\approx 10.2\) MHz) of the nitrogen \({}^{14}\)N nucleus can be caused by two reasons: (i) signals from the second nitrogen coordination sphere and (ii) signals from the nuclei of the neighboring layers below and above. Since such hyperfine interactions with distant (remote) nuclei are weak, their ENDOR signals are absent in Figure 3b. The corresponding weak electron\(-\)nuclear interactions manifest themselves only through the quadrupole splitting at spin transitions with \(m_{\rm S}=0\). A general description of the ENDOR frequencies arising from the quadrupole splitting can be obtained from the expression \(\Delta f(\varphi)\approx\frac{3}{4}C_{\rm q}(3\cos^{2}\varphi-1)\), taking into account all rotational transformations in Euler angles (see Figure 2c). Here, \(\varphi\) is the angle between the external magnetic field \(\mathbf{B}\) and the nitrogen dangling bond, and \(C_{\rm q}=1.96\) MHz. This is why we observe the ENDOR signal separately from each nitrogen nucleus, labeled in Figure 3 as N\({}_{1}\), N\({}_{2}\), and N\({}_{3}\); i.e., in this case, each HFI value depends on the angle as \(A_{\rm iso}+T(3\cos^{2}\varphi-1)\), where \(A_{\rm iso}\) and \(T\) are the isotropic and anisotropic parts of the HFI, respectively. Up to this point, we have shown that the \(A\)-tensor has axial symmetry about the local \(z\)-axis, which corresponds to the direction of the dangling bonds (the \(z\)-axis of the \(A\)-tensor makes an angle of \(90^{\circ}\) with the \(z\)-axis of the \(D\)-tensor). It is also seen that the anisotropy has an axial symmetry of the 6th order, which corresponds to the three nitrogen atoms closest to the vacancy being perfectly coordinated at the vertices of a regular triangle. Using the refined values of the hyperfine interaction constants from the ENDOR data, \(A_{\perp}=45.5\) MHz and \(A_{\parallel}=87\) MHz, it is possible to determine the isotropic (\(A_{\rm iso}\)) and anisotropic (\(T\)) parts of the HFI: \(A_{\rm iso}=\frac{2A_{\perp}+A_{\parallel}}{3}\approx 59.3\) MHz and \(T=\frac{A_{\parallel}-A_{\perp}}{3}\approx 13.8\) MHz. The isotropic part of the HFI, induced by the Fermi contact interaction, is a measure of the spin density at the nitrogen nucleus (2s character of the wave function). The anisotropic part, induced by the dipole\(-\)dipole interaction, corresponds to the spin density concentrated on the nitrogen 2p orbital. Both parts of the HFI are represented as follows: \[A_{\rm iso}=\frac{8\pi}{3}g\mu_{\rm B}g_{\rm N}\mu_{\rm N}\eta_{i}^{2}c_{2s}^{2}|\psi_{2s}(0)|^{2}\] \[T=\frac{2}{5}g\mu_{\rm B}g_{\rm N}\mu_{\rm N}\eta_{i}^{2}c_{2p}^{2}\langle r^{-3}\rangle_{2p} \tag{2}\] Here, \(\eta_{i}^{2}\) is the electron spin density, \(\psi_{2s}(0)\) is the 2s wave function at the site of the nitrogen nucleus, and \(\langle r^{-3}\rangle_{2p}\) denotes averaging over the 2p electronic wave function.
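A short numerical sketch of this bookkeeping is given below, anticipating the LCAO estimate that follows. The angular checks follow directly from the axial form of the \(A\)-tensor; the atomic reference values \(a_{0}\) and \(b_{0}\) for \({}^{14}\)N (100% of the spin in the 2s and 2p orbitals, respectively) are assumptions taken from standard Morton\(-\)Preston-type tabulations and are not quoted in this paper:

```python
import math

A_PERP, A_PAR = 45.5, 87.0          # MHz, refined ENDOR values
A_ISO = (2 * A_PERP + A_PAR) / 3    # ~59.3 MHz
T = (A_PAR - A_PERP) / 3            # ~13.8 MHz

# axial A(phi) = A_iso + T*(3cos^2(phi) - 1): A_par at 0 deg, A_perp at 90 deg
for phi_deg in (0, 90):
    phi = math.radians(phi_deg)
    print(phi_deg, round(A_ISO + T * (3 * math.cos(phi) ** 2 - 1), 1))  # 87.0, 45.5

# LCAO spin density per nitrogen; a0, b0 are assumed atomic values for 14N
A0_2S, B0_2P = 1811.0, 55.5         # MHz (Morton-Preston-type tables)
eta2 = A_ISO / A0_2S + T / B0_2P    # 2s + 2p contributions
print(f"per N: {eta2:.2f}, three N: {3 * eta2:.2f}")  # ~0.28 and ~0.85 (close
# to the ~84% quoted in the text, which rounds 28% per nitrogen)
```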
Representing the wave function of the unpaired electron as a linear combination of atomic orbitals, \(\Psi=\sum_{i}\eta_{i}\psi_{i}\) with \(\psi_{i}=c_{2s}\psi_{2s}+c_{2p}\psi_{2p}\), where \(\psi_{2s}\) and \(\psi_{2p}\) correspond to the 2s and 2p orbitals and \(c_{2s}\) and \(c_{2p}\) are the corresponding coefficients, it is possible to calculate the electron spin density \(\eta_{i}^{2}\) [38, 39] localized on the nitrogen atom through the known values of \(\psi_{2s}(0)\) and \(\langle r^{-3}\rangle_{2p}\) for nitrogen. In this case, the normalization condition \(c_{2s}^{2}+c_{2p}^{2}=1\) is taken into account, and it is assumed that the atomic orbitals of nitrogen in hBN do not differ much from those previously calculated for a free nitrogen atom [40]. Thus, the calculated spin density on one nitrogen atom is \(\eta_{i}^{2}=28\%\). Since there are three equivalent nitrogen atoms in the nearest environment of the boron vacancy, the spin density concentrated on them is \(\approx\)84%. The extraction of the hyperfine Fermi contact term and the calculation of the electron spin density within the monolayer play an important role in the coherent properties of the boron vacancy [56].

Figure 3: Electron\(-\)nuclear interaction spectra (green) revealed by ENDOR, together with the corresponding simulations (blue). (a) Nuclear magnetic resonance (NMR) transitions in the \({}^{14}\)N Larmor frequency region corresponding to the nuclear spin flips in the \(m_{\rm S}=0\) manifold. Three pairs of lines, labeled as N\({}_{1}\), N\({}_{2}\), and N\({}_{3}\), reflect the NQI with three nonequivalent nitrogen nuclei; (b) NMR transitions related to the three nitrogen nuclei N\({}_{1}\), N\({}_{2}\), and N\({}_{3}\) that are nonequivalent relative to the external magnetic field; (c) ENDOR spectrum in the \(\mathbf{B}\|c\) orientation, at which all three nitrogen nuclei are equivalent, i.e., with the same values of the hyperfine coupling constant \(A_{\rm iso}\). Dashed blue lines are calculated ENDOR transitions using eq 1 with the additional NQI and nuclear Zeeman terms.

## 4 Conclusions

In this work, the features of the electron\(-\)nuclear interactions of the boron vacancy V\({}_{\rm B}^{-}\) with the surrounding equivalent nitrogen nuclei were investigated by stationary electron paramagnetic resonance (9.6 GHz) and the high-frequency (94 GHz) pulsed ENDOR technique. In addition to the \(g\)-factor and the fine-structure splitting (\(D\) = 3.6 GHz), the detailed study of the angular dependence of the ESR spectra made it possible to determine all the components of the hyperfine interaction tensor (\(A_{\perp}=45.5\pm 0.9\) MHz and \(A_{\parallel}=87\pm 0.5\) MHz) and therefore to establish its axial symmetry. The established HFI values serve as an estimate of the V\({}_{\rm B}^{-}\) electron spin density localization in the two-dimensional plane of sp\({}^{2}\)-hybridized BN atoms. Given the isotropic (\(A_{\rm iso}\)) and anisotropic (\(T\)) contributions to the HFI, we can conclude that nearly all of the V\({}_{\rm B}^{-}\) spin density (\(\approx\)84%) is localized on the three nitrogen atoms in the 2D BN layer. The latter means that the number of layers in a single crystal will not have a significant effect on the ground-state spin properties of the center, so that it might be possible to obtain a monoatomic hBN layer with optically addressable, structurally protected spin sublevels of color centers.
\begin{table}
\begin{tabular}{c c c c c c c c}
\hline
 & \multicolumn{3}{c}{Hyperfine interaction (MHz)} & \multicolumn{4}{c}{Quadrupole interaction (MHz)} \\
\cline{2-4} \cline{5-8}
 & \(A_{xx}\) & \(A_{yy}\) & \(A_{zz}\) & \(P_{xx}\) & \(P_{yy}\) & \(P_{zz}\) & \(C_{\rm q}=eQ_{\rm N}V_{zz}\) \\
\hline
\({}^{14}\)N & 45.5 \(\pm\) 0.9 & 45.5 \(\pm\) 0.9 & 87 \(\pm\) 0.5 & \(-\)0.52 \(\pm\) 0.02 & \(-\)0.46 \(\pm\) 0.02 & 0.98 \(\pm\) 0.03 & 1.96 \(\pm\) 0.05 \\
\hline
\end{tabular}
\end{table}
Table 2: Values of the hyperfine and quadrupole interaction tensors determined by simulating the ENDOR spectra (the principal \(z\)-axes of the \(A\)- and \(P\)-tensors are directed along the nitrogen dangling bonds).

The ENDOR method, applied at two different canonical orientations, allowed us to determine the value of the quadrupole interaction, \(C_{\rm q}\) = 1.96 \(\pm\) 0.05 MHz, with an estimate of the asymmetry parameter \(\eta\) = \(-\)0.070 \(\pm\) 0.005. Despite the fact that the fine-structure tensor is directed along the crystallographic \(c\) axis, the quadrupole interaction has its maximum value along the dangling bond. The relatively small value of the asymmetry parameter \(\eta\) indicates the nearly axial symmetry of the quadrupole interaction. Hence, in this work, we explored all terms of the V\({}_{\rm B}^{-}\) spin Hamiltonian, together with their symmetry, reflecting the interactions with the three nearest nitrogen atoms.

## 5 Author Information

**Corresponding Author**

Irina N. Gracheva - _Institute of Physics, Kazan Federal University, Kazan 420008, Russia_; orcid.org/0000-0003-3715-8170; Email: [email protected]

**Authors**

Fadis F. Murzakhanov - _Institute of Physics, Kazan Federal University, Kazan 420008, Russia_; orcid.org/0000-0001-7601-6314

Georgy V. Mamin - _Institute of Physics, Kazan Federal University, Kazan 420008, Russia_; orcid.org/0000-0002-7852-917X

Margarita A. Sadovnikova - _Institute of Physics, Kazan Federal University, Kazan 420008, Russia_; orcid.org/0000-0002-5255-9020

Bulat F. Gabbasov - _Institute of Physics, Kazan Federal University, Kazan 420008, Russia_; orcid.org/0000-0003-1359-3717

Evgeniy N. Mokhov - _Ioffe Institute, St. Petersburg 194021, Russia_

Marat R. Gafurov - _Institute of Physics, Kazan Federal University, Kazan 420008, Russia_; orcid.org/0000-0002-2179-2823

Complete contact information is available at: [https://pubs.acs.org/10.1021/acs.jpcc.2c08716](https://pubs.acs.org/10.1021/acs.jpcc.2c08716)

**Author Contributions**

The manuscript was written through contributions of all authors.

**Funding**

This research was funded by the RSF grant No. 20-72-10068.

**Notes**

The authors declare no competing financial interest.

## 6 Acknowledgments

The authors would like to thank the Russian Science Foundation (grant no. 20-72-10068) for supporting the study. Irina N. Gracheva, Fadis F. Murzakhanov, and Georgy V. Mamin thank Victor Soltamov for participating in the discussion and interpretation of the results.

## 7 Abbreviations

hBN, hexagonal boron nitride; ESR, electron spin resonance; ENDOR, electron\(-\)nuclear double resonance; ODMR, optically detected magnetic resonance; ZFS, zero-field splitting; HFI, hyperfine interaction; ESE, electron spin echo; NMR, nuclear magnetic resonance; NQI, nuclear quadrupole interaction