2303.10161 | A matching principle for power transfer in Stochastic Thermodynamics | Olga Movilla Miangolarra, Amirhossein Taghvaei, Tryphon T. Georgiou | 2023-03-17T17:45:26Z | http://arxiv.org/abs/2303.10161v2

# A matching principle for power transfer in Stochastic Thermodynamics
###### Abstract
Gradients in temperature and particle concentration fuel many processes in the physical and biological world. In the present work we study a thermodynamic engine powered by anisotropic thermal excitation (that may be due to e.g., a temperature gradient), and draw parallels with the well-known principle of impedance matching in circuit theory, where for maximal power transfer, the load voltage needs to be half of that of the supplying power source. We maximize power output of the thermodynamic engine at steady-state and show that the optimal reactive force is precisely half of that supplied by the anisotropy.
## I Introduction
Imagine a windmill, with blades at an angle with respect to wind velocity, and the wind coming straight at it. The windmill draws power for rotational speeds \(\omega\in(0,\omega_{\max})\); clearly, if it is not rotating no power is drawn, and if it rotates too fast in the direction of the wind, power is delivered instead of drawn. At what angular velocity is the power drawn maximal? This depends on the geometry of the blades and is not the subject of our paper. However, it is intuitively clear that the "sweet spot" is somewhere in the middle, where the product of torque applied by the wind times the angular velocity, hence power, is maximal. This helps highlight a general principle.
In some detail, and in order to draw parallels later on, assume a first-order approximation \(\tau_{S}-\omega R\) for the torque supplied by the wind. The dynamics of the angular velocity of the windmill obey \(J\dot{\omega}=\tau_{S}-\omega R-\tau_{L}\), where \(\tau_{L}\) represents the torque of the load. At steady-state,
\[\omega=(\tau_{S}-\tau_{L})/R \tag{1}\]
and the power drawn is \(P=\omega\tau_{L}\). Clearly, the power is positive for \(\omega\in(0,\omega_{\max}=\tau_{S}/R)\), and is maximal for \(\omega^{*}=\omega_{\max}/2\) with an optimal load torque
\[\tau_{L}^{*}=\tau_{S}/2.\]
Thus, as expected, the "sweet spot" is in the middle.
This same principle is often referred to as impedance matching in circuit theory. We briefly point to a textbook example. Consider a voltage source \(V_{S}\) with internal resistance \(R_{S}\) (that includes that of the transmission line) and a load with resistance \(R_{L}\) as in Figure 1. Then, the current is
\[i=(V_{S}-V_{L})/R_{S},\]
where \(V_{L}=iR_{L}\), and the power drawn is \(P=iV_{L}\). The power is maximal when the load resistance matches that of the source, \(R_{L}=R_{S}\), equivalently, it is half of the total \(R_{L}=(R_{S}+R_{L})/2\). Viewing the load as reacting by producing voltage \(V_{L}\), the power is maximal (irrespective of \(R_{S}\)) when
\[V_{L}^{*}=V_{S}/2.\]
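The following short numerical sketch (with arbitrary illustrative values for \(\tau_{S}\), \(R\), \(V_{S}\) and \(R_{S}\); these numbers are assumptions made for illustration only) confirms the two computations above: the delivered power is a concave quadratic in the load variable and peaks when the load takes half of the source value.

```
import numpy as np

# Windmill: omega = (tau_S - tau_L)/R, delivered power P = omega * tau_L
tau_S, R = 10.0, 2.0
tau_L = np.linspace(0.0, tau_S, 1001)
P_wind = (tau_S - tau_L) / R * tau_L
print("optimal load torque :", tau_L[np.argmax(P_wind)], "(tau_S/2 =", tau_S / 2, ")")

# Circuit: i = (V_S - V_L)/R_S, delivered power P = i * V_L
V_S, R_S = 5.0, 1.0
V_L = np.linspace(0.0, V_S, 1001)
P_circ = (V_S - V_L) / R_S * V_L
print("optimal load voltage:", V_L[np.argmax(P_circ)], "(V_S/2 =", V_S / 2, ")")
```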
The purpose of the present work is to point to a similar principle in nonequilibrium thermodynamics. We consider a thermodynamic ensemble of particles subject to anisotropic fluctuations along different degrees of freedom, and we study the problem of maximizing power output at a _nonequilibrium steady-state_ (NESS). The NESS can be pictured as a whirlwind, with the thermal anisotropy constituting the power source that sustains the circulatory steady-state current. The optimal load, effected by externally actuated non-conservative forces, turns out to be half of that supplied by the anisotropy (cf. (17)).
In Section II we present a canonical example of a system subject to anisotropic temperatures. Then, Sections III and IV develop the matching principle in the case where the control potential is quadratic, and in general, respectively.
## II The Brownian Gyrator
We consider overdamped Brownian particles with two degrees of freedom in an anisotropic heat bath, and subject to a time-varying quadratic potential. The location \(X_{t}\in\mathbb{R}^{2}\) of the particles obeys the Langevin dynamics
\[dX_{t}=\frac{1}{\gamma}\left(f(t,X_{t})-\nabla U(t,X_{t})\right)dt+\sqrt{ \frac{2k_{B}T}{\gamma}}dB_{t}, \tag{2}\]
with the force term consisting of a non-conservative component \(f(t,X_{t})\) and the gradient of a potential
\[U(t,X_{t})=\frac{1}{2}X_{t}^{*}K(t)X_{t},\]
with \(K(t)\) a symmetric \(2\times 2\) matrix. Throughout, \(k_{B}\) denotes the Boltzmann constant, \(\gamma\) the friction coefficient, \(B_{t}\) denotes a 2-dimensional standard Brownian motion (with mean zero and covariance the identity) and \(T=\mathrm{diag}(T_{1},\,T_{2})\) a diagonal matrix with entries the temperature of thermal excitation along each of the two degrees of freedom.

Fig. 1: Impedance matching example.
The probability density function of the state \(X_{t}\) in (2), denoted by1\(\rho(t,x)\), with \(x\in\mathbb{R}^{2}\), constitutes the _state of the thermodynamic system_ and represents the ensemble of particles; it satisfies the Fokker-Planck equation
Footnote 1: We use the notation \(x\) for a vector in \(\mathbb{R}^{2}\), and \(X_{t}\) for the vector-valued stochastic process of position.
\[\partial_{t}\rho+\nabla\cdot(\rho v)=0, \tag{3}\]
with
\[v=(\mathbf{f}_{S}-\mathbf{f}_{L})/\gamma,\]
for the _source force_ \(\mathbf{f}_{S}:=-\nabla U-k_{B}T\nabla\log(\rho)\) and the _load force_ \(\mathbf{f}_{L}:=-f\). The non-conservative term \(\mathbf{f}_{L}=-f\), when present, is divergence-free with respect to \(\rho\), in that \(\nabla\cdot(\mathbf{f}_{L}\rho)=0\).
When the initial state is Gaussian with mean zero and covariance \(\Sigma_{0}\), denoted \(\mathcal{N}(0,\Sigma_{0})\), and \(f\) is a linear function of \(X_{t}\), then \(\rho\) remains Gaussian over time with mean zero and covariance that satisfies the Lyapunov equation
\[\gamma\dot{\Sigma}(t)=-K(t)\Sigma(t)-\Sigma(t)K(t)+2k_{B}T, \tag{4}\]
that in this case constitutes the system dynamics. The non-conservative force is of the form \(f=\Omega\Sigma^{-1}(t)X_{t}\), with \(\Omega\) a skew-symmetric matrix; it plays no role in (4) (i.e., it cancels out) due to the skew symmetry \(\Omega+\Omega^{\prime}=0\).
At any point in time, the total energy of the system is \(E=\int U\rho\,dxdy\). The system exchanges energy with the environment either through the external forcing (work differential in (5)), or through heat transfer to and from the two thermal baths (heat differential in (6)). Specifically, the first of these contributions has two terms, one due to changes in the potential and another due to the non-conservative force,
\[dW=\int\frac{\partial U}{\partial t}\rho\,dxdydt-\mathbb{E}[\mathbf{f}_{L}^{ \prime}\circ dX_{t}], \tag{5}\]
and represents work2. On the other hand, the heat uptake from the thermal baths is
Footnote 2: Here, \(d\) indicates an imperfect differential, where its integral depends on the chosen path and not only on the end points.
\[d\,Q=d\,Q_{1}+d\,Q_{2}=-\iint U\nabla\cdot(v\rho)\,dxdydt+\mathbb{E}[ \mathbf{f}_{L}^{\prime}\circ dX_{t}], \tag{6}\]
which can be split into the contributions coming from the different heat baths, \(d\,Q_{1}\) and \(d\,Q_{2}\). Combining (5) and (6) we have the first law of thermodynamics,
\[dE=d\,W+d\,Q, \tag{7}\]
where the differential of internal energy is the sum of the two contributions.
### Steady-state analysis
Let us assume momentarily that \(\mathbf{f}_{L}=-f=0\) and that the potential remains constant, with \(K(t)=K_{c}\) symmetric and positive definite. Then, the system reaches a steady-state distribution; this is Gaussian \(\mathcal{N}(0,\Sigma_{\mathrm{ss}})\) with (steady-state) covariance satisfying the algebraic Lyapunov equation
\[K_{c}\Sigma_{\mathrm{ss}}+\Sigma_{\mathrm{ss}}K_{c}=2k_{B}T. \tag{8}\]
The solution to (8) is unique and can be expressed in integral form as
\[\Sigma_{\mathrm{ss}}=2k_{B}\int_{0}^{\infty}e^{-\tau K_{c}}Te^{-\tau K_{c}}d \tau=2k_{B}\mathcal{L}_{K_{c}}(T), \tag{9}\]
where, for future reference, we define the linear operator
\[X\mapsto\mathcal{L}_{A}(X):=\int_{0}^{\infty}e^{-\tau A}Xe^{-\tau A}d\tau\]
that depends on the positive definite matrix \(A\).
The source force becomes \(\mathbf{f}_{S}=-K_{c}x+k_{B}T\Sigma_{\mathrm{ss}}^{-1}x\), and hence the velocity field can be expressed as
\[v(x)=\frac{\mathbf{f}_{S}}{\gamma}=-\frac{1}{2\gamma}(K_{c}\Sigma_{\mathrm{ss} }-\Sigma_{\mathrm{ss}}K_{c})\Sigma_{\mathrm{ss}}^{-1}x, \tag{10}\]
where we have used the algebraic Lyapunov equation (8). Even if the system is at steady-state, the probability current \(v\rho\) does not need to vanish, in general. A non-vanishing probability current induces a heat flow between the thermal baths, with \(d\,Q_{1}=-d\,Q_{2}\). When it vanishes (\(v\rho=0\)), a condition known as _detailed balance_ in the physics literature, the steady-state is an _equilibrium state_.
It is seen that in order to ensure detailed balance, \(K_{c}\) and \(\Sigma_{\mathrm{ss}}\) must commute. In view of (8), also (9), this only happens when \(T\) and \(K_{c}\) commute; i.e., this happens when \(K_{c}\) is diagonal, since \(T\) is already diagonal. In that case, \(\Sigma_{\mathrm{ss}}=k_{B}TK_{c}^{-1}\) is also diagonal and results in zero probability current according to (10). Moreover, in that case, heat cannot transfer between the degrees of freedom. On the other hand, when \(K_{c}\) and \(T\) do not commute, detailed balance breaks down and a non-vanishing (stationary) probability current materializes leading to a NESS with non-vanishing heat transfer between the heat baths.
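The steady-state relations (8)-(10) are easy to check numerically. The sketch below assumes \(k_{B}=\gamma=1\) and arbitrary illustrative choices of \(K_{c}\) and \(T\) (non-commuting, so that detailed balance fails); it solves the algebraic Lyapunov equation (8) with a standard solver, compares the result with the integral representation (9), and evaluates the velocity-field matrix in (10).

```
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, expm

kB, gamma = 1.0, 1.0
Kc = np.array([[2.0, 0.5],
               [0.5, 1.0]])            # symmetric, positive definite (illustrative)
T = np.diag([1.0, 3.0])                # anisotropic temperatures (illustrative)

# Algebraic Lyapunov equation (8): K_c Sigma + Sigma K_c = 2 k_B T
Sigma_ss = solve_continuous_lyapunov(Kc, 2 * kB * T)

# Check against the integral form (9) by trapezoidal quadrature of L_{K_c}(T).
dtau, t_max = 0.01, 40.0
taus = np.arange(0.0, t_max + dtau, dtau)
vals = np.array([expm(-t * Kc) @ T @ expm(-t * Kc) for t in taus])
Sigma_quad = 2 * kB * (vals.sum(axis=0) - 0.5 * (vals[0] + vals[-1])) * dtau
print("max deviation between (8) and (9):", np.abs(Sigma_ss - Sigma_quad).max())

# Velocity field (10): v(x) = A x with A = -(K_c Sigma - Sigma K_c) Sigma^{-1} / (2 gamma)
A = -(Kc @ Sigma_ss - Sigma_ss @ Kc) @ np.linalg.inv(Sigma_ss) / (2 * gamma)
print("detailed balance broken (A != 0)?", not np.allclose(A, 0))
```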
The system described here is known as the Brownian Gyrator. Since its conception by Filliger and Reimann in 2007 [1], it has been thoroughly studied, mostly at steady-state. Several works focused on the curl-carrying probability current that mediates a heat transfer between heat baths and the torque that it generates [1, 2, 3, 4, 5]. Other works have studied optimal transitioning between states [6], maximum work extraction through periodic variation of the potential function [7], the role of information flow [8, 9], the effect of external forces [10], strong coupling limits [11], the control relevance of anharmonic potentials [12], gyration for underdamped mesoscopic systems [13], the relevance of non-Markovian noise [14] and the use of active reservoirs [15].
Experimental realizations of (2) have been based on several different physical embodiments [5, 4, 2, 16],
[17]. In particular, [5, 17] are based on colloidal particles suspended in a viscous medium while the anisotropy in stochastic excitation is induced through an electromagnetic field. Another embodiment is based on the electric circuit of Figure 2. Here, the two degrees of freedom are the charges in the capacitors \(C_{1}\) and \(C_{2}\), and the two resistors, in contact with heat baths of different temperatures, generate Johnson-Nyquist fluctuating currents [16, 2, 4].
Specifically, the equations of motion of the system in Figure 2 can be written as
\[CdV_{t}=-\frac{1}{R}V_{t}dt+\sqrt{\frac{2k_{B}T}{R}}dB_{t},\]
where \(V_{t}=[V_{1}(t),\ V_{2}(t)]^{\prime}\) are voltages across the two capacitors, \(C\) is the matrix of capacitances
\[C=\left[\begin{array}{cc}C_{1}+C_{c}&-C_{c}\\ -C_{c}&C_{2}+C_{c}\end{array}\right],\]
and \(\sqrt{2k_{B}T/R}dB_{t}\) models the Johnson-Nyquist noise [18] at the two resistors for temperatures \(T_{1}\) and \(T_{2}\) (\(T=\text{diag}(T_{1},T_{2})\), as before). The state of this electrical-thermal system can alternatively be described in terms of the charges \(q_{1}(t)\) and \(q_{2}(t)\) at capacitances \(C_{1}\) and \(C_{2}\), since the vector of charges is \(q_{t}=CV_{t}\). Then, the charges satisfy
\[dq_{t}=-\frac{1}{R}C^{-1}q_{t}dt+\sqrt{\frac{2k_{B}T}{R}}dB_{t}. \tag{11}\]
Let \(U(q)=\frac{1}{2}q^{\prime}C^{-1}q\) be the (potential) energy stored in the system of capacitances \((C_{1},C_{2},C_{c})\). The first term in the right-hand side of (11) is precisely the negative gradient \(-\nabla U(q_{t})\,dt/R\). Hence, equation (11) represents a two-dimensional overdamped Langevin system (2) with non-conservative forces absent, and \(R\) playing the role of the friction coefficient. It follows that the distribution of charges \(\rho(t,q_{t})\) satisfies a Fokker-Planck equation (3), with velocity field \(v\) being replaced by a corresponding current field (see equation (18)).
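As a sanity check of (11), the following sketch simulates the charge dynamics with the Euler-Maruyama scheme, assuming \(k_{B}=1\) and arbitrary illustrative values for \(R\), the capacitances and the temperatures, and compares the empirical stationary covariance of the charges with the steady-state Lyapunov equation obtained from (8) with \(K_{c}\) replaced by \(C^{-1}\), i.e. \(C^{-1}\Sigma+\Sigma C^{-1}=2k_{B}T\).

```
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
kB, R = 1.0, 1.0
C1, C2, Cc = 1.0, 2.0, 0.5                      # illustrative capacitances
C = np.array([[C1 + Cc, -Cc],
              [-Cc, C2 + Cc]])
Cinv = np.linalg.inv(C)
T = np.diag([1.0, 3.0])                         # illustrative temperatures
noise_amp = np.sqrt(2 * kB * np.diag(T) / R)    # per-coordinate noise strength

dt, n_steps = 2e-3, 500_000
q = np.zeros(2)
samples = np.zeros((n_steps, 2))
for k in range(n_steps):
    # Euler-Maruyama step of dq = -(1/R) C^{-1} q dt + sqrt(2 k_B T / R) dB
    q += -(Cinv @ q) / R * dt + noise_amp * np.sqrt(dt) * rng.standard_normal(2)
    samples[k] = q

Sigma_emp = np.cov(samples[n_steps // 2:].T)    # discard the transient half
Sigma_th = solve_continuous_lyapunov(Cinv, 2 * kB * T)
print("empirical covariance:\n", Sigma_emp)
print("Lyapunov prediction :\n", Sigma_th)
```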
## III Maximum power extraction from the Brownian Gyrator
In the present section, we take the next natural step and consider work extraction from the resulting circulating current. We explore the coupling of the natural gyrating motion with an external non-conservative actuation, for the purpose of extracting work that can be supplied by the anisotropy of the temperature field. Earlier studies on maximizing work output [19, 17, 20] were particular to linear two-dimensional systems. Our contribution lies in a general approach that applies to higher dimensions and nonlinear forces (see Section IV), while bringing to light the aforementioned matching principle, that the velocity ensuring maximal power transfer is precisely at the midpoint of the power producing range of velocity values.
Let us picture the steady-state of the Brownian gyrator (the system described by (2) subject to a fixed quadratic potential with \(f=0\)) as a vortex of swirling particles around the origin, which originates from the non-zero probability current at steady-state. Now imagine yourself sailing around the origin, propelled by the wind of particles, going in circles at some velocity slower than the particles, yet non-zero, slowed-down by external forces applied to the boat for the purpose of extracting work. The work extracted would correspond to the force applied on the sails times the displaced distance, in a way that is analogous to the windmill example discussed in the introduction.
Indeed, one can use non-conservative actuation to implement such an interaction. Let \(K(t)=K_{c}\) be a constant positive definite matrix. Then, the system reaches a steady-state with covariance \(\Sigma_{ss}\), which, together with \(K_{c}\), satisfies (8). Let us consider \(f=\Omega\Sigma_{ss}^{-1}X_{t}\), which represents the force on the sail and, due to the skew-symmetry of \(\Omega\), does not alter the stationary distribution of the state of the system, but does affect the mean velocity \(v=(\mathbf{f}_{S}-\mathbf{f}_{L})/\gamma\). Here, the source and load forces are given by
\[\mathbf{f}_{S} =-K_{c}X_{t}+k_{B}T\Sigma_{ss}^{-1}X_{t} \tag{12a}\] \[\mathbf{f}_{L} =-\Omega\Sigma_{ss}^{-1}X_{t}. \tag{12b}\]
The work output is then
\[-dW =\mathbb{E}[\mathbf{f}_{L}^{\prime}\circ dX_{t}]\] \[=-\mathbb{E}\big{[}X_{t}^{\prime}\Sigma_{ss}^{-1}\Omega^{\prime}dX_{t}+k_{B}\text{Tr}[\Omega\Sigma_{ss}^{-1}T]dt/\gamma\big{]} \tag{13a}\] \[=\frac{1}{\gamma}\mathbb{E}\big{[}-X_{t}^{\prime}\Sigma_{ss}^{-1}\Omega^{\prime}(-K_{c}+\Omega\Sigma_{ss}^{-1}+k_{B}T\Sigma_{ss}^{-1})X_{t}\big{]}dt\] \[=\mathbb{E}\big{[}\mathbf{f}_{L}^{\prime}(\mathbf{f}_{S}-\mathbf{f}_{L})/\gamma\big{]}dt, \tag{13b}\]
where for the second equality we have changed from the Stratonovich integral to Ito by adding the Ito term \(k_{B}\text{Tr}\left[\Omega\Sigma_{ss}^{-1}T\right]dt/\gamma\) (with \(\text{Tr}\) denoting the trace), and for the third line we have used (2) and the fact that \(\text{Tr}[\Omega\Sigma_{ss}^{-1}T]=\mathbb{E}\left[\text{Tr}[\Omega\Sigma_{ss}^ {-1}X_{t}X_{t}^{\prime}\Sigma_{ss}^{-1}T]\right]\). Continuing on, from (13a) we obtain that
\[P=-dW/dt=-\frac{1}{\gamma}\text{Tr}\left[\Omega\Sigma_{ss}^{-1}\Omega^{\prime} +k_{B}\Omega\Sigma_{ss}^{-1}T\right],\]
where we have used (2) and the fact that the trace of the product of a symmetric and a skew-symmetric matrix is zero.
Thus, the problem to maximize power \(P\) is equivalent to the static problem
\[\min\text{Tr}\left[\Omega\Sigma_{ss}^{-1}\Omega^{\prime}+k_{B}\Omega\Sigma_{ss }^{-1}T\right],\]
where the minimum is taken over skew-symmetric \(\Omega\)'s. The first-order necessary condition for optimality is found by computing the first-order variation of the cost and setting it to zero, that is,

Fig. 2: Electrical embodiment of the Brownian gyrator.
\[\mathrm{Tr}\left[\Delta_{\Omega}M\right]=0,\ \ \ \mathrm{with}\ \ \ M=2\Omega \Sigma_{ss}^{-1}+k_{B}T\Sigma_{ss}^{-1}, \tag{14}\]
where \(\Delta_{\Omega}\) is any skew-symmetric matrix. Therefore, the first-order necessary condition for optimality (14) implies that \(M\) must be symmetric. It follows that an optimal choice3\(\Omega^{*}\) must satisfy
Footnote 3: Throughout, we superscribe \(*\) to denote an optimal solution.
\[\Sigma_{ss}^{-1}\Omega^{*}+\Omega^{*}\Sigma_{ss}^{-1}=\frac{k_{B}}{2}\left( \Sigma_{ss}^{-1}T-T\Sigma_{ss}^{-1}\right). \tag{15}\]
This equation has a unique solution which is skew-symmetric and can be written as
\[\Omega^{*}=\frac{k_{B}}{2}\mathcal{L}_{\Sigma_{ss}^{-1}}(\Sigma_{ss}^{-1}T-T \Sigma_{ss}^{-1}).\]
Note that when \(\Sigma_{ss}\) and \(T\) commute, no work can be extracted (as one would expect), since in this case \(\Omega^{*}=0\).
A way to interpret this solution is by looking back at the Fokker-Planck equation (3). Clearly, at steady state, where \(\nabla\cdot\rho v=0\), \(v=(\mathbf{f}_{S}-\mathbf{f}_{L})/\gamma\) is divergence-free, and since \(\mathbf{f}_{L}\) is divergence-free, so is \(\mathbf{f}_{S}\). From (12a) and (14) we have that
\[\mathbf{f}_{S}=-(K_{c}-M)X_{t}-2\Omega^{*}\Sigma^{-1}X_{t},\]
and since it is divergence-free, \(K_{c}-M=0\). Therefore, \(\mathbf{f}_{S}=-2\Omega^{*}\Sigma^{-1}X_{t}\), and in view of (12b) we have the following proposition.
**Proposition 1**: _The maximum steady-state power output through non-conservative forcing for the linear system in (2),_
\[\max_{\mathbf{f}_{L}}\mathbb{E}[\mathbf{f}_{L}^{\prime}(\mathbf{f}_{S}- \mathbf{f}_{L})/\gamma], \tag{16}\]
_is obtained for_
\[\mathbf{f}_{L}^{*}=\mathbf{f}_{S}/2. \tag{17}\]
**Remark 1**: _The maximum power output can be computed as follows,_
\[P^{*} =-\frac{1}{\gamma}\mathrm{Tr}[\Omega^{*}\Sigma_{ss}^{-1}\Omega^{ *^{\prime}}+k_{B}\Omega^{*}\Sigma_{ss}^{-1}T]\] \[=-\frac{1}{2\gamma}\mathrm{Tr}[\Omega^{*^{\prime}}(\Omega^{*} \Sigma_{ss}^{-1}+\Sigma_{ss}^{-1}\Omega^{*})+2k_{B}\Omega^{*}\Sigma_{ss}^{-1}T]\] \[=\frac{k_{B}}{4\gamma}\mathrm{Tr}[(T\Sigma_{ss}^{-1}-\Sigma_{ss}^ {-1}T)\Omega^{*}]\] \[=\frac{k_{B}}{2\gamma}\mathrm{Tr}[T\Sigma_{ss}^{-1}\Omega^{*}],\]
_where for the third equality we have used (15). It is seen that the maximal power is proportional to the anti-symmetric part of \(T\Sigma_{ss}^{-1}\), which constitutes our power source. Moreover, this expression for \(P^{*}\) can be written explicitly in terms of the parameters of our system: first solve for the steady-state covariance \(\Sigma_{ss}\) in (8) (which is fairly simple for a two-dimensional problem), then use it to find \(\Omega^{*}\) and finally to write \(P^{*}\). Explicit derivations in a particular two-dimensional case can be found in [19, 20]._
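The optimality condition (15), the matching identity \(\mathbf{f}_{L}^{*}=\mathbf{f}_{S}/2\) and the power expression of Remark 1 can all be verified numerically. The sketch below assumes \(k_{B}=\gamma=1\) and an illustrative choice of \(K_{c}\) and \(T\); equation (15) is a Sylvester equation and is solved with a standard solver, and the maximal power is compared with a brute-force scan over the one-parameter family of \(2\times 2\) skew-symmetric matrices.

```
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_sylvester

kB, gamma = 1.0, 1.0
Kc = np.array([[2.0, 0.5],
               [0.5, 1.0]])                     # illustrative stiffness
T = np.diag([1.0, 3.0])                         # illustrative temperatures

Sigma = solve_continuous_lyapunov(Kc, 2 * kB * T)          # equation (8)
Sinv = np.linalg.inv(Sigma)

# Optimality condition (15): Sinv Omega + Omega Sinv = (kB/2)(Sinv T - T Sinv)
Omega_opt = solve_sylvester(Sinv, Sinv, 0.5 * kB * (Sinv @ T - T @ Sinv))
print("Omega* skew-symmetric?", np.allclose(Omega_opt, -Omega_opt.T))

# Matching: f_S = -K_c x + kB T Sinv x should equal -2 Omega* Sinv x
print("f_L* = f_S/2?", np.allclose(-Kc + kB * T @ Sinv, -2 * Omega_opt @ Sinv))

# Power as a function of the single free parameter of a 2x2 skew matrix
def power(w):
    Om = np.array([[0.0, w], [-w, 0.0]])
    return -np.trace(Om @ Sinv @ Om.T + kB * Om @ Sinv @ T) / gamma

ws = np.linspace(-1.0, 1.0, 16001)
Ps = np.array([power(w) for w in ws])
print("scanned optimum w :", ws[np.argmax(Ps)], " vs Omega*[0,1]:", Omega_opt[0, 1])
print("scanned max power :", Ps.max(),
      " vs (kB/2/gamma) Tr[T Sinv Omega*]:",
      0.5 * kB / gamma * np.trace(T @ Sinv @ Omega_opt))
```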
Let us now return to a circuit theoretic embodiment. In order to generate a non-conservative force, a non-reciprocal capacitance is needed. To this end, we consider the circuit depicted in Figure 3, where we have introduced such a general (two-port) capacitance, with law
\[i_{\mathrm{nr}}=\begin{bmatrix}i_{\mathrm{nr},1}\\ i_{\mathrm{nr},2}\end{bmatrix}=-C_{\mathrm{nr}}\frac{dV_{t}}{dt},\]
and a capacitance matrix \(C_{\mathrm{nr}}\) that is not necessarily symmetric (i.e., possibly non-reciprocal [21]). The state of the system is the vector of charges \(q_{t}\), as before, and satisfies (11) with \(\hat{C}=C+C_{\mathrm{nr}}\) replacing \(C\). The "force" \(-\hat{\mathcal{C}}^{-1}q_{t}\) can be split into conservative and non-conservative components,
\[-\hat{\mathcal{C}}^{-1}q_{t}=-(\hat{\mathcal{C}}^{-1})^{\prime}C\hat{\mathcal{C }}^{-1}q_{t}-(\hat{\mathcal{C}}^{-1})^{\prime}C_{\mathrm{nr}}^{\prime}\hat{ \mathcal{C}}^{-1}q_{t},\]
with the non-conservative part \((\hat{\mathcal{C}}^{-1})^{\prime}C_{\mathrm{nr}}^{\prime}\hat{\mathcal{C}}^{-1} q_{t}=\Omega\Sigma_{ss}^{-1}q_{t}\).
As noted, the probability density of the system of charges follows the Fokker-Planck equation (3), with velocity field \(v\) replaced by a current field (function of a vector4\(q\)),
Footnote 4: Following our earlier convention, \(q_{t}\) represents the vector-valued stochastic process of charges, whereas \(q\) represents a vector in \(\mathbb{R}^{2}\).
\[\mathbf{i}=(\mathbf{V}_{S}-\mathbf{V}_{L})/R, \tag{18}\]
with
\[\mathbf{V}_{S} =-(\hat{\mathcal{C}}^{-1})^{\prime}C\hat{\mathcal{C}}^{-1}q+k_{B} T\Sigma_{ss}^{-1}q,\] \[\mathbf{V}_{L} =-(\hat{\mathcal{C}}^{-1})^{\prime}C_{\mathrm{nr}}^{\prime}\hat{ \mathcal{C}}^{-1}q,\]
and \(q\in\mathbb{R}^{2}\).
The thermodynamic definition of power is consistent with the circuit theoretic viewpoint where \(P=\mathbb{E}[V_{t}^{\prime}\circ i_{\mathrm{nr}}]\), for \(V_{t}=[V_{1}(t)\ V_{2}(t)]^{\prime}=\hat{\mathcal{C}}^{-1}q_{t}\). Specifically,
\[Pdt =\mathbb{E}[V_{t}^{\prime}\circ i_{\mathrm{nr}}dt]=-\mathbb{E}[V_{ t}^{\prime}\circ C_{\mathrm{nr}}dV_{t}]\] \[=-\mathbb{E}[q_{t}^{\prime}(\hat{\mathcal{C}}^{-1})^{\prime}\circ C _{\mathrm{nr}}\hat{\mathcal{C}}^{-1}dq_{t}]\] \[=\mathbb{E}[\mathbf{V}_{L}^{\prime}\circ dq_{t}],\]
which, in analogy with (13b), gives
\[-dW=\mathbb{E}[\mathbf{V}_{L}^{\prime}(\mathbf{V}_{S}-\mathbf{V}_{L})/R]dt.\]
Thus, maximum power is similarly obtained by a load voltage that halves the source voltage fueled by the anisotropy in temperature of the two Johnson-Nyquist resistors, i.e.
\[\mathbf{V}_{L}^{*}=\mathbf{V}_{S}/2,\]

akin to the standard impedance matching problem.

Fig. 3: Circuit realization of a steady-state Brownian gyrator engine.
**Remark 2**: _A circuit such as the one depicted in Figure 3 can be realized through controlled feedback [21]. However, this requires external energy input which is unaccounted for, constituting the main disadvantage when attempting to extract work from a system through non-conservative forcing._
## IV A general setting
The previous results apply more generally to a linear system with \(n\) degrees of freedom. We can do even better and consider an \(n\)-dimensional general case where equation (2) holds with \(X_{t}\in\mathbb{R}^{n}\), \(T\) a diagonal \(n\times n\) matrix with the temperatures of the ambient heat baths as entries, \(B_{t}\) an \(n\)-dimensional standard Brownian motion, and a general non-linear force \(f-\nabla U\) acting on the degrees of freedom. The probability density function \(\rho(t,x)\) of the process \(X_{t}\) satisfies the Fokker-Planck equation (3), with
\[\mathbf{f}_{S} =-\nabla U-k_{B}T\nabla\log(\rho)\] \[\mathbf{f}_{L} =-f,\]
where the latter is divergence-free with respect to \(\rho\) as before, i.e., \(\nabla\cdot(\mathbf{f}_{L}\rho)=0\).
Let \(\rho_{ss}\) be an admissible steady-state, i.e. such that there exists a unique time-independent potential \(U_{c}\) that renders \(\rho_{ss}\) stationary. Specifically, the potential and \(\rho_{ss}\) must satisfy
\[\nabla\cdot((\mathbf{f}_{S}-\mathbf{f}_{L})\rho_{ss}/\gamma)=0, \tag{19}\]
with
\[\mathbf{f}_{S}=-\nabla U_{c}-k_{B}T\nabla\log(\rho_{ss}).\]
This is equivalent to imposing
\[\nabla\cdot(\rho_{ss}\nabla U_{c})=-\nabla\cdot(\rho_{ss}k_{B}T\nabla\log(\rho _{ss})). \tag{20}\]
It is seen that \(U_{c}\) is specified by the gradient part of \(k_{B}T\nabla\log(\rho_{ss})\), in that \(U_{c}\) must satisfy the Poisson equation (20). It is assumed that (20) has a unique solution5 for all admissible \(\rho_{ss}\).
Footnote 5: See [22] for sufficient conditions on \(\rho_{ss}\) for uniqueness of solution.
If the potential is fixed, the work output due to the non-conservative forcing is given by
\[-dW= \mathbb{E}[\mathbf{f}_{L}^{\prime}\circ dX_{t}]\] \[= \frac{1}{\gamma}\int\Big{(}\mathbf{f}_{L}^{\prime}(-\nabla U_{c}+ f)+k_{B}\mathrm{Tr}\left[\mathrm{Jac}(\mathbf{f}_{L})T\right]\Big{)}\rho_{ss}dxdt\] \[= \mathbb{E}[\mathbf{f}_{L}^{\prime}(\mathbf{f}_{S}-\mathbf{f}_{L} )/\gamma]dt,\]
where we used the Ito rule for the second equality, and denoted by \(\mathrm{Jac}(\mathbf{f}_{L})\) the Jacobian of \(\mathbf{f}_{L}\). For the third equality we used integration by parts.
The optimal \(\mathbf{f}_{L}\) that maximizes the power output must be such that the first order variation of \(P\),
\[\int(\delta_{\mathbf{f}_{L}})^{\prime}(\mathbf{f}_{S}-2\mathbf{f}_{L})\rho_{ss }dx/\gamma,\]
vanishes for all admissible \(\delta_{\mathbf{f}_{L}}\). Given that \(\delta_{\mathbf{f}_{L}}\) must be a divergence-free field with respect to \(\rho_{ss}\), we obtain that \(\mathbf{f}_{S}-2\mathbf{f}_{L}\) must be of gradient form, from orthogonality. Since both \(\mathbf{f}_{L}\) and \(\mathbf{f}_{S}\) must be divergence-free, the first by construction and the second due to steady-state (19), we obtain that the optimal \(\mathbf{f}_{L}\) must reduce the divergence-free swirling motion of the particles (induced by \(\mathbf{f}_{S}\)) by half, as is stated in the following proposition.
**Proposition 2**: _The maximum steady-state power output through non-conservative forcing for the (non-linear) system (2) with \(X_{t}\in\mathbb{R}^{n}\),_
\[\max_{\mathbf{f}_{L}}\,\mathbb{E}[\mathbf{f}_{L}^{\prime}(\mathbf{f}_{S}- \mathbf{f}_{L})/\gamma], \tag{21}\]
_is obtained by_
\[\mathbf{f}_{L}^{*}=\mathbf{f}_{S}/2. \tag{22}\]
**Remark 3**: _The optimal mean velocity field is then_
\[v^{*}=\mathbf{f}_{L}^{*}/\gamma.\]
Upon using the orthogonality between \(\nabla U_{c}\) and \(\mathbf{f}_{L}^{*}\), the maximum power can be written as
\[P^{*}=-\gamma\int(v^{*})^{\prime}v^{*}\rho_{ss}dx-k_{B}\int(v^{*})^{\prime}T \nabla\log(\rho_{ss})\rho_{ss}dx.\]
Note that this expression is nothing but the heat rate (decomposed into dissipative and quasi-static heat rates) of the system with mean velocity \(v^{*}\)[23], as one would expect from the first law (7), since \(dE=0\) implies \(-dW=d\,Q\).
## V Conclusions
The purpose of this work has been to present a matching principle for maximal power extraction in diverse systems, ranging from microscopic thermodynamic heat engines to windmills and electric circuits. This principle holds in general under the assumption that the source has a linear response (e.g., voltage-current, force-velocity, etc.). This is precisely the case underlying the well-known impedance matching principle for power transfer in circuits, and the same principle is extrapolated to stochastic systems where the anisotropy in thermal fluctuations constitutes the source of energy, as explained in the body of the paper. Interest in this principle stems from the significance of power harvesting mechanisms in the physical and biological world. Thus, it appears of great importance to investigate whether naturally occurring analogues, such as those driving bacterial flagella [24, 25], have evolved to display some form of optimality reminiscent of the matching principle that we discussed herein.
We would like to remark that impedance matching in classical network theory extends to dynamical loads [26, 27]. Similarly, instances where the power source may not be Markov and where the power generated is consumed by a higher-order dynamical component, are relevant in the context of thermodynamics. Finally, a thermodynamic counterpart of maximal power transfer [28, 29] is of interest, where suitably designed coupling may facilitate impedance matching between given thermodynamic components.
## Acknowledgments
The research was supported in part by the AFOSR under grant FA9550-20-1-0029, and ARO under W911NF-22-1-0292. O.M.M. was supported by "la Caixa" Foundation (ID 100010434) with code LCF/BQ/AA20/11820047.
2308.12404 | Balanced Submodular Flows | Alpár Jüttner, Eszter Szabó | 2023-08-23T19:57:34Z | http://arxiv.org/abs/2308.12404v2
# Balanced Submodular Flows
Alpár Jüttner and Eszter Szabó
Department of Operations Research, Eötvös Loránd University, Budapest, Hungary.
Email: [email protected], [email protected].
**Abstract:** This paper examines the Balanced Submodular Flow Problem, that is the problem of finding a feasible submodular flow minimizing the difference between the flow values along the edges. A min-max formula is given to the problem and an algorithm is presented to solve it using \(\mathcal{O}(m^{2})\) submodular function minimizations. Then, these results are extended to the weighted version of the problem. Finally, the Balanced Integer Submodular Flow Problem is discussed.
**Keywords: submodular flow, balanced optimization, strongly polynomial algorithm**
## 1 Introduction
Balanced optimization problems aim to find a most equitable distribution of resources. Several problems have been analysed in the literature such as the Balanced Spanning Tree Problem studied by Camerini, Maffioli, Martello, and Toth [2], by Longshu Wu [12] and by Ficker, Spieksma, and Woeginger [6] or the Balanced Assignment Problem by Martello, Pulleyblank, Toth, and de Werra [13]. Duin and Volgenant [3] proposed a generic scheme for minimum deviation problems, that is also usable for solving certain balanced optimization problems including both the above ones. Ahuja proposed a parametric simplex method for the general Balanced Linear Programming Problem [1].
The Balanced Network Flow Problem introduced by Scutella [16] aims to find a feasible network flow minimizing the difference between the maximum and minimum flow values along the edges, i.e. \(\max_{e\in A}f(e)-\min_{e\in A}f(e)\). Scutella presented an algorithm using Newton's approach that performs \(\mathcal{O}(n^{2}\log^{3}(n))\) max-flow computations, where \(n\) is the number of nodes. The weighted version of the problem has also been solved by Scutella and Klinz [11].
This paper examines an extension of the problem above to submodular flows, where the goal is to find a feasible submodular flow minimizing the difference between flow values along the edges. A strongly polynomial algorithm is given for solving the Balanced Submodular Flow Problem with \(\mathcal{O}(m^{2})\) submodular function minimizations, where \(m\) is the number of edges in the input graph. Then, Section 4 examines the Weighted Balanced Submodular Flow problem, that is the problem of finding a feasible submodular flow minimizing the difference between the maximum and minimum weighted flow values along the edges. We show that an optimal solution can be found by solving \(\mathcal{O}(n^{4}m^{6}\log^{6}(m))\) submodular function minimization problems. In Section 5, the Balanced Integral Submodular Flow Problem is introduced and an optimal integral solution is determined using the fractional optimum.
## 2 Preliminaries
**Definition 2.1**.: _For an underlying set \(V\), let \(\mathcal{P}(V)\) denote the power set of \(V\), i. e. the set of all subsets of \(V\). The set functions \(b,p:\mathcal{P}(V)\to\mathbb{R}\) are called submodular or supermodular if_
\[b(X)+b(Y)\geq b(X\cup Y)+b(X\cap Y) \tag{1}\]
_or_
\[p(X)+p(Y)\leq p(X\cup Y)+p(X\cap Y) \tag{2}\]
_holds for all subsets \(X,Y\subseteq V\), respectively. A function is called modular if it is both sub- and supermodular._
**Theorem 2.2** (Orlin [14]).: _Assuming that the value of a submodular function \(b\) can be computed for any \(X\subseteq V\) in time \(T\), then the value of \(\min\{b(X):X\subseteq V\}\) can be computed in time \(\mathcal{O}(n^{5}T+n^{6})\)._
Whenever a submodular function minimization is used as a subroutine, its running time will be denoted by \(\Upsilon\) for the sake of simplicity.
**Definition 2.3**.: _For a directed graph \(G=(V,A)\) and a subset of vertices \(X\subseteq V\), let \(\tilde{\varrho}(X)\) and \(\tilde{\delta}(X)\) denote the set of edges entering \(X\) and leaving \(X\), respectively. For a vector \(x\in\mathbb{R}^{A}\), let_
\[\varrho_{x}(X):=\sum_{e\in\tilde{\varrho}(X)}x(e),\quad\delta_{x}(X):=\sum_{e \in\tilde{\delta}(X)}x(e)\quad\text{and}\quad\partial_{x}(X):=\varrho_{x}(X)- \delta_{x}(X). \tag{3}\]
_Furthermore, let \(\varrho(X)\), \(\delta(X)\) and \(\partial(X)\) denote the number of edges entering \(X\), leaving \(X\), and their difference, respectively._
It is straightforward to check that \(\varrho_{x}(X)\) and \(\delta_{x}(X)\) are submodular functions for any nonnegative vector \(x\). If \(l,u\in\mathbb{R}^{A}\) and \(l\leq u\), then \(\varrho_{u}(X)-\delta_{l}(X)\) is submodular and \(\varrho_{l}(X)-\delta_{u}(X)\) is supermodular.
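As a small illustration of Definition 2.3 (on an arbitrary random digraph with a nonnegative edge vector \(x\), both of which are assumptions made for this sketch only), the following Python snippet checks the submodularity of \(\varrho_{x}\) and \(\delta_{x}\) by brute force.

```
import random
from itertools import combinations

random.seed(0)
V = list(range(5))
A = [(u, v) for u in V for v in V if u != v and random.random() < 0.4]
x = {e: random.uniform(0.0, 2.0) for e in A}        # nonnegative edge vector

def rho_x(X):    # total weight entering X
    return sum(x[(u, v)] for (u, v) in A if u not in X and v in X)

def delta_x(X):  # total weight leaving X
    return sum(x[(u, v)] for (u, v) in A if u in X and v not in X)

def is_submodular(f):
    subsets = [set(c) for r in range(len(V) + 1) for c in combinations(V, r)]
    return all(f(X) + f(Y) >= f(X | Y) + f(X & Y) - 1e-9
               for X in subsets for Y in subsets)

print("rho_x submodular  :", is_submodular(rho_x))
print("delta_x submodular:", is_submodular(delta_x))
```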
**Definition 2.4**.: _Let us be given a directed graph \(G=(V,A)\) and a submodular function \(b:\mathcal{P}(V)\to\mathbb{R}\). A vector \(x\in\mathbb{R}^{A}\) is called a submodular flow if_
\[\varrho_{x}(X)-\delta_{x}(X)\leq b(X) \tag{4}\]
_holds for each \(X\subseteq V\)._
_For vectors \(l,u\in\mathbb{R}^{A}\), a submodular flow \(x\) is called \((l,u)\)-bounded if \(l\leq x\leq u\)._
**Theorem 2.5**.: _For lower and upper bounds \(l,u\in\mathbb{R}^{A}\), there exists an \((l,u)\)-bounded submodular flow if and only if \(l\leq u\) and_
\[\varrho_{l}(X)-\delta_{u}(X)\leq b(X) \tag{5}\]
_holds for each \(X\subseteq V\)._
**Theorem 2.6** (Frank [7]).: _Assume that \(b:\mathcal{P}(V)\to\mathbb{R}\) is submodular and that the value of \(b(X)\) can be computed in time \(T\) for any \(X\subseteq V\). Then an \((l,u)\)-bounded submodular flow -- or a set \(X\) violating (5) if such a flow does not exist -- can be found in time \(\mathcal{O}(n^{5}T)\)._
For estimating the running time of the proposed algorithms, we will also use the following known results.
**Lemma 2.7** (Goemans [15]).: _Let \(b\in\mathbb{R}^{p}\) be a real vector and let \(x_{1},x_{2},\ldots,x_{q}\in\{-1,0,+1\}^{p}\) be vectors such that_
\[0<bx_{i+1}\leq\frac{1}{2}bx_{i}\quad\forall i\in\{1,2,\ldots,q-1\}\]
_Then \(q=\mathcal{O}(p\log(p))\)._
**Theorem 2.8** (Erdos-Szekeres [5]).: _Given integer numbers \(r,s\) and a sequence of distinct real numbers with length at least \((r-1)(s-1)+1\), there exists a monotonically increasing subsequence of length \(r\) or a monotonically decreasing subsequence of length \(s\)._
_In the case when \(r=s\), every sequence of length \(n\) contains a monotonically increasing or decreasing subsequence of length at least \(\sqrt{n}\)._
**Theorem 2.9** (Goemans, Gupta and Jaillet [8]).: _Let \(b\) be a submodular function on \(V\), \(|V|=n\), let \(a\in\mathbb{R}^{V}\) be a vector with \(a(S):=\sum_{v\in S}a(v)\), and assume that \(b(\emptyset)\geq 0\). Then the value of_
\[\delta^{*}:=\min\left\{\delta\geq 0:\min_{S\subseteq V}\left(b(S)+\delta a(S) \right)\geq 0\right\} \tag{6}\]
_can be computed by performing \(\mathcal{O}(n\log^{2}n)\) submodular function minimization over the set \(V\), i. e. in time \(\mathcal{O}(n^{2}\Upsilon)\)._
A family \(R\subseteq\mathcal{P}(V)\) is called a ring family if it is closed under taking unions and intersections. For a given family of subsets \(X_{1},X_{2},\ldots,X_{k}\) of \(V\), let \(R(X_{1},X_{2},\ldots X_{k})\) denote the ring family generated by \(X_{1},X_{2},\ldots,X_{k}\).
**Lemma 2.10** (Goemans et al. [8]).: _Let \(b:\mathcal{P}(V)\to\mathbb{R}\) be a submodular function with \(b_{\min}=\min_{S\subseteq V}b(S)\leq 0\). Consider a sequence of distinct sets \(X_{1},X_{2},\ldots,X_{q}\) such that \(b(X_{1})=b_{\min},b(X_{2})>-2b_{\min}\), and \(b(X_{i})\geq 4b(X_{i-1})\) for \(3\leq i\leq q\). Then_
\[b(X_{q})>\max\{b(X):X\in R(X_{1},\ldots,X_{q-1})\},\]
_therefore \(X_{q}\notin R(X_{1},\ldots,X_{q-1})\)._
**Lemma 2.11** (Goemans et al. [8]).: _Let \(b:\mathcal{P}(V)\to\mathbb{R}\) be a submodular function and \(\mathcal{T}\subseteq\mathcal{P}(V)\), where \(|V|=n\). Assume that \(b(S)\leq M\) for all \(S\in\mathcal{T}\). Let \(X\subseteq V\) be a subset, such that_
\[b(X)>\frac{n^{2}}{4}(M-b_{\min})\]
_Then \(X\notin R(\mathcal{T})\)._
**Lemma 2.12** (Goemans et al. [8]).: _Given a sequence of subsets \(X_{1},X_{2}\ldots X_{k}\) of \(V\), define \(L_{i}=R(X_{1},\ldots,X_{i})\) for \(1\leq i\leq k\), where \(|V|=n\). Assume that \(X_{i}\notin L_{i-1}\) for all \(i>1\). Then_
\[k\leq\binom{n+1}{2}+1\]
**Theorem 2.13** (Goemans et al. [8]).: _Let \(b:\mathcal{P}(V)\to\mathbb{R}\) be a submodular function with \(b_{\min}:=\min_{S\subseteq V}b(S)\leq 0\). Consider a sequence of distinct sets \(T_{1},T_{2},\ldots,T_{q}\) such that \(b(T_{1})=b_{\min}\), \(b(T_{2})>-2b_{\min}\) and \(b(T_{i})\geq 4b(T_{i-1})\) for \(3\leq i\leq q\). Then \(q\leq\binom{n+1}{2}+1\)._
## 3 Balanced Submodular Flows
**Definition 3.1**.: _The spread1 of a vector \(x\in\mathbb{R}^{A}\) is the value_

Footnote 1: This quantity appears under other names in the literature.

\[\sigma(x):=\max_{a\in A}x(a)-\min_{a\in A}x(a).\]
The _Balanced Submodular Flow Problem_ is to find a submodular flow of minimum spread.
**Problem 3.2**.: _Let us be given a directed graph \(G=(V,A)\) and a submodular function \(b:\mathcal{P}(V)\to\mathbb{Z}\). Find_

\[\sigma^{*}:=\min\left\{\sigma(x):x\in\mathbb{R}^{A},\ \varrho_{x}(X)-\delta_{x}(X)\leq b(X)\quad\forall X\subseteq V\right\} \tag{7}\]
_along with a minimizing vector \(x^{*}\)._
For the sake of simplicity, we assume that there is no isolated vertex in the graph \(G\) and there exists an unbounded submodular flow, i.e. \(b(U)\geq 0\) holds for every subset \(U\subseteq V\) with \(\varrho(U)=0\) and \(\delta(U)=0\).
The following section presents a dual characterization of the value of the minimum spread and then strongly polynomial algorithms to compute this value will be given. For technical reasons, the cases of Eulerian and non-Eulerian graphs are treated separately.
### Eulerian Graphs
Throughout this section, a graph \(G\) is called _Eulerian_ if \(\partial(v)=0\) holds for all \(v\in V\). Note that \(G\) is not required to be connected.
Clearly, if \(G\) is Eulerian and \(x\in\mathbb{R}^{A}\) is a submodular flow, then \(x+\alpha\mathbb{1}\) is also a submodular flow for any \(\alpha\in\mathbb{R}\). Therefore the Balanced Submodular Flow Problem reduces to the problem of finding the minimum value \(\sigma^{*}\) for which there exists a bounded submodular flow \(0\leq x^{*}\leq\sigma^{*}\mathbb{1}\). Applying Theorem 2.5, \(\sigma^{*}\) is the smallest nonnegative value for which \(b(X)+\sigma^{*}\delta(X)\geq 0\) holds for all \(X\subseteq V\). In other words, we are looking for the root of the function
\[f(\sigma):=\min\{b(X)+\sigma\delta(X):X\subseteq V\} \tag{8}\]
or \(\sigma^{*}=0\) if \(f(0)\geq 0\). This immediately gives the following dual characterization of the minimum spread submodular flows.
**Theorem 3.3**.: _Assume that \(G\) is Eulerian. Then_
\[\sigma^{*}=\max\left\{\frac{-b(X)}{\delta(X)}:X\subseteq V,\delta(X)>0\right\} \tag{9}\]
_or \(\sigma^{*}=0\) if the value of this maximum is negative._
Therefore, the problem reduces to a fractional optimization problem, the optimum of which can be calculated using the standard Discrete Newton Method [15] outlined in Algorithm 1. It is straightforward to see that \(\delta(X_{i})\) strictly decreases in every iteration and a standard argument shows that the final set \(X_{i}\) indeed maximizes the value \(\frac{-b(X)}{\delta(X)}\). Thus we get the following
**Theorem 3.4**.: _The value \(\sigma^{*}\) of the minimum spread and the corresponding dual \(X\) in Theorem 3.3 are determined by Algorithm 1 within \(\mathcal{O}(m)\) iterations. \(\Box\)_
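Since the listing of Algorithm 1 is not reproduced here, the following sketch only illustrates the discrete Newton iteration described above on a tiny Eulerian instance; the brute-force argmin over all subsets stands in for a submodular function minimization oracle, and the 3-cycle together with the modular (hence submodular) function \(b\) below are illustrative assumptions.

```
from itertools import combinations

V = [1, 2, 3]
A = [(1, 2), (2, 3), (3, 1)]                    # an Eulerian digraph
w = {1: -1.0, 2: 2.0, 3: -1.0}

def b(X):                                       # modular => submodular
    return sum(w[v] for v in X)

def delta(X):                                   # number of edges leaving X
    return sum(1 for (u, v) in A if u in X and v not in X)

def all_subsets(ground):
    return [frozenset(c) for r in range(len(ground) + 1)
            for c in combinations(ground, r)]

def min_spread_eulerian():
    sigma = 0.0
    while True:
        # brute-force stand-in for a submodular function minimization oracle
        X = min(all_subsets(V), key=lambda S: b(S) + sigma * delta(S))
        if b(X) + sigma * delta(X) >= -1e-12:
            return sigma                        # f(sigma) >= 0: sigma is optimal
        # (the infeasible case delta(X) = 0, b(X) < 0 is not handled here)
        sigma = -b(X) / delta(X)                # discrete Newton step

print("minimum spread sigma* =", min_spread_eulerian())   # 2.0 on this instance
```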
### Non-Eulerian Graphs
Now, let us consider the Balanced Submodular Flow Problem in non-Eulerian graphs. Let \(G=(V,A)\) be a directed graph assuming that there exists a node \(v\in V\) such that \(\partial(v)\neq 0\). First, we analyze a simpler algorithm and then improve this algorithm. Furthermore, a dual characterization of the minimum spread submodular flows is also given. More precisely, the following theorem will be proved.
**Theorem 3.5**.: _Assume that \(G\) is not Eulerian. Then_
\[\sigma^{*}=\max_{X,Y\subseteq V}\left\{\sigma(X,Y):\partial(X)\geq 0,\partial(Y) <0,\varrho(X)+\delta(X)>0\right\}, \tag{10}\]
_where_
\[\sigma(X,Y):=\frac{b(X)\partial(Y)-b(Y)\partial(X)}{\delta(Y)\partial(X)- \delta(X)\partial(Y)} \tag{11}\]
_or \(\sigma^{*}=0\) if the value of this maximum is negative._
Note that the condition \(\varrho(X)+\delta(X)>0\) in (10) is equivalent to \(\delta(Y)\partial(X)-\delta(X)\partial(Y)\neq 0\), assuming that \(\partial(X)\geq 0,\partial(Y)<0\) hold for \(X,Y\). Therefore, the expression in (10) is well-defined.
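Before turning to the proof, the min-max formula of Theorem 3.5 can be checked by brute force on small instances. The sketch below computes the minimum spread as a linear program (enumerating all \(2^{|V|}\) subset constraints, so it is only meant for very small \(V\)) and compares it with the maximum of \(\sigma(X,Y)\) over admissible pairs; the digraph and the modular (hence submodular) function \(b\) are illustrative assumptions.

```
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

V = [1, 2, 3]
A = [(1, 2), (1, 3), (2, 3)]                    # a non-Eulerian digraph
w = {1: 3.0, 2: -2.0, 3: 1.0}

def b(X):       return sum(w[v] for v in X)
def rho(X):     return sum(1 for (u, v) in A if u not in X and v in X)
def delta(X):   return sum(1 for (u, v) in A if u in X and v not in X)
def partial(X): return rho(X) - delta(X)

subsets = [frozenset(c) for r in range(len(V) + 1) for c in combinations(V, r)]

# Primal: minimize u - l subject to l <= x <= u and partial_x(X) <= b(X).
m = len(A)
c = np.zeros(m + 2); c[m], c[m + 1] = 1.0, -1.0          # variables: (x, u, l)
A_ub, b_ub = [], []
for X in subsets:
    row = np.zeros(m + 2)
    for j, (s, t) in enumerate(A):
        row[j] = (t in X and s not in X) - (s in X and t not in X)
    A_ub.append(row); b_ub.append(b(X))
for j in range(m):                                       # x_j <= u and l <= x_j
    r1 = np.zeros(m + 2); r1[j], r1[m] = 1.0, -1.0
    r2 = np.zeros(m + 2); r2[j], r2[m + 1] = -1.0, 1.0
    A_ub += [r1, r2]; b_ub += [0.0, 0.0]
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * (m + 2))
print("LP minimum spread:", res.fun)

# Dual: maximize sigma(X, Y) over partial(X) >= 0 > partial(Y), rho(X)+delta(X) > 0.
best = 0.0
for X in subsets:
    if partial(X) < 0 or rho(X) + delta(X) == 0:
        continue
    for Y in subsets:
        if partial(Y) >= 0:
            continue
        num = b(X) * partial(Y) - b(Y) * partial(X)
        den = delta(Y) * partial(X) - delta(X) * partial(Y)
        best = max(best, num / den)
print("max sigma(X, Y)  :", best)    # the two printed values should coincide
```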
In order to prove Theorem 3.5, we first show that the expression above constitutes a lower bound of \(\sigma^{*}\) for any pairs of sets \(X\) and \(Y\).
**Lemma 3.6**.: _Let \(X,Y\subseteq V\) such that \(\partial(X)\geq 0\), \(\partial(Y)<0\) and \(\varrho(X)+\delta(X)>0\). Then_
\[\sigma^{*}\geq\sigma(X,Y). \tag{12}\]
Proof.: By the definition of \(x^{*}\), there exists a real value \(\kappa\) such that \(\kappa 1\leq x^{*}\leq(\kappa+\sigma^{*})\mathbb{1}\). Using Theorem 2.5 with \(l:=\kappa\mathbb{1}\) and \(u:=(\kappa+\sigma^{*})\mathbb{1}\) we get
\[\kappa\partial(X)-\sigma^{*}\delta(X)\leq b(X) \tag{13}\]
and
\[\kappa\partial(Y)-\sigma^{*}\delta(Y)\leq b(Y). \tag{14}\]
From these, we get
\[b(X)\partial(Y)+\sigma^{*}\delta(X)\partial(Y)\leq\kappa\partial(X)\partial(Y )\leq b(Y)\partial(X)+\sigma^{*}\delta(Y)\partial(X), \tag{15}\]
therefore
\[\sigma^{*}\big{(}\delta(Y)\partial(X)-\delta(X)\partial(Y)\big{)}\geq b(X)\partial (Y)-b(Y)\partial(X). \tag{16}\]
Since \(\delta(Y)\partial(X)-\delta(X)\partial(Y)>0\), the inequality above is equivalent to
\[\sigma^{*}\geq\frac{b(X)\partial(Y)-b(Y)\partial(X)}{\delta(Y)\partial(X)- \delta(X)\partial(Y)}=\sigma(X,Y). \tag{17}\]
Proof of Theorem 3.5.: Let
\[\kappa(X,Y):=\frac{b(X)\delta(Y)-b(Y)\delta(X)}{\delta(Y)\partial(X)-\delta(X )\partial(Y)} \tag{18}\]
and let \(X\) and \(Y\) be a pair of sets maximizing (10) and among them the one maximizing \(\kappa^{*}=\kappa(X,Y)\). We show that there exists a bounded submodular flow such that \(\kappa^{*}1\leq x\leq(\kappa^{*}+\sigma^{*})1\).
Suppose to the contrary, assume that such a flow does not exist. Then, Theorem 2.5 states that a set \(C\subseteq V\) with the property
\[\kappa^{*}\partial(C)-\sigma^{*}\delta(C)>b(C)\]
must exist. Note that \(\varrho(X)+\delta(X)>0\), since otherwise \(\kappa(X,Y)=0\) and therefore \(b(C)<0\) would hold, thus the problem would be infeasible.
If \(\partial(C)<0\) and \(\partial(X)>0\), we get
\[\big{(}b(X)\partial(Y)-b(Y)\partial(X)\big{)}\big{(}\delta(C) \partial(X)-\delta(X)\partial(C)\big{)}\] \[= b(X)\partial(Y)\delta(C)\partial(X)-b(X)\partial(Y)\delta(X) \partial(C)-\] \[b(Y)\partial(X)\delta(C)\partial(X)+b(Y)\partial(X)\delta(X) \partial(C)\] \[= -b(X)\partial(Y)\delta(X)\partial(C)+\partial(X)\big{(}b(X) \partial(Y)\delta(C)-\] \[b(Y)\delta(C)\partial(X)+b(Y)\delta(X)\partial(C)\big{)}\] \[= -b(X)\partial(Y)\delta(X)\partial(C)+\partial(X)\big{(}\delta(C) \big{(}b(X)\partial(Y)-\] \[b(Y)\partial(X)\big{)}+\partial(C)\big{(}b(Y)\delta(X)\big{)} \big{)}\] \[= \partial(X)\big{(}\delta(C)\big{(}b(X)\partial(Y)-b(Y)\partial(X )\big{)}-\partial(C)\big{(}b(X)\delta(Y)-b(Y)\delta(X)\big{)}\big{)}\] \[-b(X)\partial(Y)\delta(X)\partial(C)+\partial(X)\partial(C)b(X) \delta(Y)\] \[= \big{(}\delta(Y)\partial(X)-\delta(X)\partial(Y)\big{)}\partial (X)\big{(}\delta(C)\sigma-\partial(C)\kappa\big{)}-\] \[b(X)\partial(Y)\delta(X)\partial(C)+\partial(X)\partial(C)b(X) \delta(Y)\] \[< \big{(}\delta(Y)\partial(X)-\delta(X)\partial(Y)\big{)}\partial (X)(-b(C))\] \[-b(X)\partial(Y)\delta(X)\partial(C)+\partial(X)\partial(C)b(X) \delta(Y)\] \[= \big{(}\delta(Y)\partial(X)-\delta(X)\partial(Y)\big{)}\big{(}b( X)\partial(C)-\partial(X)b(C)\big{)},\]
therefore
\[\sigma(X,Y)<\sigma(X,C),\]
contradicting the maximizing property of \(X\) and \(Y\).
By the same token, if \(\partial(C)\geq 0\) we get that
\[\sigma(X,Y)<\sigma(C,Y).\]
Finally, if \(\partial(C)<0\) and \(\partial(X)=0\), we get
\[\kappa(X,Y)<\kappa(X,C)\]
in the same way, which finishes the proof.
### Algorithm for non-Eulerian Graphs
```
1:  Choose \(X_{1},Y_{1}\subseteq V\) such that \(\partial(X_{1})\geq 0\) and \(\partial(Y_{1})<0\).
2:  \(i:=1\)
3:  repeat
4:      \(\sigma_{i}:=\sigma(X_{i},Y_{i})\)
5:      \(\kappa_{i}:=\kappa(X_{i},Y_{i})\)
6:      \(C_{i}\in\arg\min\{b(X)+\sigma_{i}\delta(X)-\kappa_{i}\partial(X):X\subseteq V\}\)
7:      if \(b(C_{i})+\sigma_{i}\delta(C_{i})-\kappa_{i}\partial(C_{i})\geq 0\) then
8:          RETURN \(\kappa_{i},\sigma_{i},X_{i},Y_{i}\)
9:      else if \(\varrho(C_{i})+\delta(C_{i})=0\) then
10:         RETURN "INFEASIBLE"
11:     else if \(\partial(C_{i})\geq 0\) then
12:         \(X_{i+1}:=C_{i}\)
13:         \(Y_{i+1}:=Y_{i}\)
14:     else
15:         \(X_{i+1}:=X_{i}\)
16:         \(Y_{i+1}:=C_{i}\)
17:     \(i\longleftarrow i+1\)
```
**Algorithm 2** Minimum spread calculation in non-Eulerian graphs
The proof of Theorem 3.5 immediately provides the algorithm outlined in Algorithm 2 for finding an optimal pair of sets \(X\) and \(Y\).
The initial sets \(X_{1}\) and \(Y_{1}\) can be chosen as single node sets satisfying conditions in line 1.
Computing \(C_{i}\) in line 6 involves the minimization of the submodular function \(b(X)+\sigma_{i}\delta(X)-\kappa_{i}\partial(X)\). If the algorithm terminates at line 8, then the last value of \(\sigma_{i}\) is optimal and \(X_{i},Y_{i}\) are maximizing sets in Theorem 3.5.
Note that \(\partial(X_{i})\geq 0\) and \(\partial(Y_{i})<0\) hold in every iteration, therefore the denominators of \(\sigma_{i}\) and \(\kappa_{i}\) are not zero.
The algorithm is clearly finite because the value \(\sigma_{i}\) is monotonically increasing throughout the execution, and either the value of \(\sigma_{i}\) or \(\kappa_{i}\) strictly increases in each iteration.
In the following, we will show that not only is Algorithm 2 finite, but in fact it runs in strongly polynomial time. Let us call two sets \(Z_{1},Z_{2}\subseteq V\) _equivalent_ if \(\delta(Z_{1})=\delta(Z_{2})\), \(\varrho(Z_{1})=\varrho(Z_{2})\) and \(b(Z_{1})=b(Z_{2})\).
**Lemma 3.7**.: _The algorithm computes at most \(\left(m+1\right)^{2}\) non-equivalent sets._
Proof.: The lemma is proved by showing that if \(C_{1}\) and \(C_{2}\) are non-equivalent sets found by the algorithm and \(\delta(C_{1})=\delta(C_{2})\) then \(\varrho(C_{1})\neq\varrho(C_{2})\).
If \(b(C_{1})=b(C_{2})\), then \(\varrho(C_{1})\neq\varrho(C_{2})\), because they are non-equivalent. Therefore, we may assume that \(b(C_{1})\neq b(C_{2})\).
First, let us consider the case when \(\delta(C_{1})=\delta(C_{2})=0\). Then \(\varrho(C_{1})\neq 0\) and \(\varrho(C_{2})\neq 0\), otherwise the problem would be infeasible. Thus, the expression in line 6 can be rewritten as
\[\arg\min\{b(X)+\sigma_{i}\delta(X)-\kappa_{i}\partial(X)\}=\arg\min\bigg\{\varrho(X)\left(\frac{b(X)}{\varrho(X)}-\kappa_{i}\right)\bigg\}. \tag{19}\]
By re-indexing we may assume that \(\frac{b(C_{1})}{\varrho(C_{1})}>\frac{b(C_{2})}{\varrho(C_{2})}\). (If these two values are equal then the algorithm will only find the set with the greater \(\varrho\) value.) Since \(C_{1}\) minimizes the right hand side of (19) for \(\kappa_{1}\), we have \(\varrho(C_{1})<\varrho(C_{2})\).
Now assume that \(\delta(C_{1})=\delta(C_{2})>0\) and \(b(C_{1})>b(C_{2})\). Then the expression in line 6 can be rewritten as
\[\arg\min\{b(X)+\sigma_{1}\delta(X)-\kappa_{1}\partial(X)\}=\arg\max\bigg\{\delta(X)\left(\frac{\kappa_{1}\partial(X)-b(X)}{\delta(X)}-\sigma_{1}\right)\bigg\}. \tag{20}\]
Since \(\delta(C_{1})=\delta(C_{2})\) and \(C_{1}\) maximizes the right hand side of (20) for \(\kappa_{1},\sigma_{1}\), we have
\[\frac{\kappa_{1}\partial(C_{1})-b(C_{1})}{\delta(C_{1})}\geq\frac{\kappa_{1} \partial(C_{2})-b(C_{2})}{\delta(C_{2})},\]
from which we get that \(\frac{\partial(C_{1})}{\delta(C_{1})}>\frac{\partial(C_{2})}{\delta(C_{2})}\). Thus \(\varrho(C_{1})\neq\varrho(C_{2})\), which finishes the proof.
**Lemma 3.8**.: _Assume that \(X_{i}\) and \(X_{j}\) are equivalent for some \(i\neq j\). Then \(Y_{i}\) and \(Y_{j}\) are not equivalent._
Proof.: If \(X_{i}\) is equivalent to \(X_{j}\) and \(Y_{i}\) is equivalent to \(Y_{j}\), then \(\sigma_{i}=\sigma_{j}\) and \(\kappa_{i}=\kappa_{j}\) hold. But -- as we have seen in the proof of Theorem 3.5 -- the \(\sigma_{i}\) is monotonically increasing, and either \(\sigma_{i}\) or \(\kappa_{i}\) strictly increases at each iteration.
Combining Lemma 3.7 and 3.8, we immediately get the following upper bound on the number of iterations.
**Theorem 3.9**.: _Algorithm 2 terminates after at most \(\mathcal{O}(m^{4})\) iterations. _
The running time of each single iteration is dominated by the submodular function minimization, therefore we get the following.
**Theorem 3.10**.: _Algorithm 2 computes \(\sigma^{*}\) along with the maximizing sets \(X\) and \(Y\) in \(\mathcal{O}(m^{4}\Upsilon)\) time. _
### Improved Algorithm for Non-Eulerian Graphs
This section presents an \(\mathcal{O}(m^{2}\Upsilon)\) time algorithm for finding the minimum spread and the corresponding dual pair of sets \(X\) and \(Y\).
The improved procedure is outlined in Algorithm 3. The main difference compared to Algorithm 2 is that the algorithm revisits the previously found sets at each iteration, ensuring that every set can be found at most once. Therefore, the number of iterations is equal to the number of non-equivalent sets found by the algorithm.
For this we maintain sequences of previously calculated sets \(X_{1},X_{2},\ldots,X_{i}\) and \(Y_{1},Y_{2},\ldots,Y_{j}\) such that
\[\frac{\partial(Y_{1})}{\delta(Y_{1})}<\frac{\partial(Y_{2})}{\delta(Y_{2})}< \cdots<\frac{\partial(Y_{j})}{\delta(Y_{j})}<\frac{\partial(X_{i})}{\delta(X_ {i})}<\cdots<\frac{\partial(X_{2})}{\delta(X_{2})}<\frac{\partial(X_{1})}{ \delta(X_{1})} \tag{21}\]
At each iteration, the unnecessary sets are deleted. Note that -- by Theorem 2.5 -- if there exists a set \(Z\) with \(\delta(Z)=0\) and \(\varrho(Z)\neq 0\) and \(\kappa>\frac{b(Z)}{\varrho(Z)}\), then there exists no feasible submodular flow with lower bound \(\kappa 1\). Analogously, if \(\delta(Z)>0\) and \(\sigma<\frac{\kappa\partial(Z)-b(Z)}{\delta(Z)}\), then a \((\kappa 1,(\kappa+\sigma)1)\)-bounded submodular flow cannot exist. Therefore, if the value of a flow is at least \(\kappa\) on each edge, then the spread of the flow is at least \(\frac{\kappa\partial(Z)-b(Z)}{\delta(Z)}\).
Therefore a set \(X_{i}\) is to be considered unnecessary if for any \(\kappa^{\prime}\) there exists a set \(Z\in\{Y_{1},Y_{2},\ldots,Y_{j},X_{i-1},\ldots,X_{1}\}\) such that
\[\frac{\kappa^{\prime}\partial(X_{i})-b(X_{i})}{\delta(X_{i})}\leq\frac{\kappa ^{\prime}\partial(Z)-b(Z)}{\delta(Z)}.\]
It means that for any \(\kappa^{\prime}\), there exists another found set \(Z\) which defines a lower bound for the spread at least as large as the one obtained by \(X_{i}\). Due to the deleting process at the end of every iteration, the inequalities in (21) hold true.
It is clear that if \(Z,Z^{\prime}\in\{Y_{1},Y_{2},\ldots,Y_{j},X_{i},X_{i-1},\ldots,X_{1}\}\) are two different sets and \(\frac{\partial(Z)}{\delta(Z)}<\frac{\partial(Z^{\prime})}{\delta(Z^{\prime})}\) then the following holds true:
\[\frac{\kappa^{\prime}\partial(Z)-b(Z)}{\delta(Z)} <\frac{\kappa^{\prime}\partial(Z^{\prime})-b(Z^{\prime})}{ \delta(Z^{\prime})} \text{if }\kappa^{\prime}<\kappa(Z,Z^{\prime})\] \[\frac{\kappa^{\prime}\partial(Z)-b(Z)}{\delta(Z)} =\frac{\kappa^{\prime}\partial(Z^{\prime})-b(Z^{\prime})}{ \delta(Z^{\prime})} \text{if }\kappa^{\prime}=\kappa(Z,Z^{\prime})\] \[\frac{\kappa^{\prime}\partial(Z)-b(Z)}{\delta(Z)} >\frac{\kappa^{\prime}\partial(Z^{\prime})-b(Z^{\prime})}{ \delta(Z^{\prime})} \text{if }\kappa^{\prime}>\kappa(Z,Z^{\prime})\]
At the beginning of iteration \(i\), every set in \(\{Y_{1},Y_{2},\ldots,Y_{j},X_{i},X_{i-1},\ldots,X_{1}\}\) was necessary, thus
\[\kappa(Y_{1},Y_{2})<\kappa(Y_{2},Y_{3})<\cdots<\kappa(Y_{j},X_{i})<\cdots<\kappa(X_{2},X_{1}).\]
Therefore, the following properties hold true in every iteration.
1. If \(\kappa<\kappa(Y_{1},Y_{2})\) and \(Z\in\{Y_{2},\ldots,Y_{j},X_{i},X_{i-1},\ldots,X_{1}\}\), then \[\frac{\kappa\partial(Y_{1})-b(Y_{1})}{\delta(Y_{1})}>\frac{\kappa\partial(Z)- b(Z)}{\delta(Z)}.\]
2. If \(\kappa(Y_{j^{\prime}-1},Y_{j^{\prime}})<\kappa<\kappa(Y_{j^{\prime}},Y_{j^{\prime }+1})\) and \(Z\in\{Y_{1},Y_{2},\ldots,Y_{j},X_{i},X_{i-1},\ldots,X_{1}\}\), \(Z\neq Y_{j^{\prime}}\), then \[\frac{\kappa\partial(Y_{j^{\prime}})-b(Y_{j^{\prime}})}{\delta(Y_{j^{\prime}})} >\frac{\kappa\partial(Z)-b(Z)}{\delta(Z)}.\]
3. If \(\kappa(Y_{j-1},Y_{j})<\kappa<\kappa(Y_{j},X_{i})\) and \(Z\in\{Y_{1},Y_{2},\ldots Y_{j-1},X_{i},X_{i-1},\ldots,X_{1}\}\), then \[\frac{\kappa\partial(Y_{j})-b(Y_{j})}{\delta(Y_{j})}>\frac{\kappa\partial(Z)- b(Z)}{\delta(Z)}.\]
Similar properties hold for any \(X_{i^{\prime}}\in\{X_{i},X_{i-1},\ldots,X_{1}\}\).
If \(\delta(C)=0\), then \(\kappa_{i}>\frac{b(C)}{\varrho(C)}\). In this case (line (11)) every set in \(\{X_{i},X_{i-1}\ldots X_{1}\}\) is unnecessary. At the beginning of this iteration, \(Y_{j^{\prime}}\) defines the largest lower bound for the spread if and only if \(\kappa(Y_{j^{\prime}-1},Y_{j^{\prime}})\leq\kappa\leq\kappa(Y_{j^{\prime}},Y_ {j^{\prime}+1})\). Therefore, \(Y_{j^{\prime}}\) is no longer necessary if and only if \(\frac{b(C)}{\varrho(C)}<\kappa(Y_{j^{\prime}-1},Y_{j^{\prime}})\). This is equivalent to the condition \(\sigma(C,Y_{j})\leq\sigma(C,Y_{j-1})\).
If \(\delta(C)\neq 0\), we have that
\[\frac{\kappa\partial(Y_{j})-b(Y_{j})}{\delta(Y_{j})}=\sigma(X_{i},Y_{j})< \frac{\kappa\partial(C)-b(C)}{\delta(C)}.\]
Therefore, \(X_{i}\) or \(Y_{j}\) are unnecessary whenever \(\frac{\partial(X_{i})}{\delta(X_{i})}\leq\frac{\partial(C)}{\delta(C)}\) or \(\frac{\partial(Y_{j})}{\delta(Y_{j})}\geq\frac{\partial(C)}{\delta(C)}\) hold, respectively. Since \(X_{i}\) was necessary at the end of the previous iteration, then
\[\frac{\kappa\partial(X_{i})-b(X_{i})}{\delta(X_{i})}\geq\frac{\kappa\partial( X)-b(X)}{\delta(X)}\quad\forall X\in\{X_{i-1},\ldots,X_{1}\},\]
holds for any \(\kappa(Y_{j},X_{i})\leq\kappa\leq\kappa(X_{i},X_{i-1})\), which means that \(X_{i}\) is unnecessary if and only if
\[\frac{\kappa\partial(X_{i})-b(X_{i})}{\delta(X_{i})}\leq\frac{\kappa\partial( C)-b(C)}{\delta(C)}\quad\forall\kappa\in[\kappa(Y_{j},X_{i}),\kappa(X_{i},X_{i-1})] \tag{22}\]
Since \(\frac{\partial(X_{i})}{\delta(X_{i})}>\frac{\partial(C)}{\delta(C)}\) (otherwise \(X_{i}\) would have already been deleted), it is enough to check if expression (22) holds at the end of the interval, i. e. for \(\kappa=\kappa(X_{i},X_{i-1})\). Thus, inequality (22) reduces to \(\kappa(C,X_{i})\geq\kappa(C,X_{i+1})\). The decision of whether or not \(Y_{j}\) is necessary can be made in the same way. Note that if \(\frac{\partial(X_{1})}{\delta(X_{1})}>\frac{\partial(C)}{\delta(C)}\), then \(X_{1}\) will not be deleted. Analogously, if \(\frac{\partial(Y_{1})}{\delta(Y_{1})}<\frac{\partial(C)}{\delta(C)}\), then \(Y_{1}\) will not be deleted in the current iteration.
After deleting all the unnecessary sets, the algorithm adds the new set \(C\) to the appropriate subsequence. Note that \(C\) is always necessary in the iteration where it is computed and it is not equivalent to any \(Z\in\{Y_{1},Y_{2},\ldots,Y_{j},X_{i},X_{i-1},\ldots,X_{1}\}\). In addition, a deleted (i. e. unnecessary) set cannot become necessary again. Therefore, the algorithm will find every set at most once, and delete it at most once.
To sum up, the algorithm finishes after \(\mathcal{O}(m^{2})\) iterations, therefore we get the following theorem.
**Theorem 3.11**.: _Algorithm 3 solves the Balanced Submodular Flow Problem in \(\mathcal{O}(m^{2}\Upsilon)\) time. \(\square\)_
## 4 Weighted Balanced Submodular Flows
This section introduces the weighted version of the problem and extends the dual characterization to this case. Then, a strongly polynomial algorithm is presented to determine the minimum weighted spread with a positive weight function.
**Definition 4.1**.: _Given a weight function \(c:A\to\mathbb{R}^{+}\) on the edges, the weighted spread of a vector \(x\in\mathbb{R}^{A}\) is the value_
\[\sigma_{c}(x):=\max_{a\in A}c(a)x(a)-\min_{a\in A}c(a)x(a)\]
The _Weighted Balanced Submodular Flow Problem_ is defined as follows.
**Problem 4.2**.: _Given a directed graph \(G=(V,A)\), a weight function \(c:A\to\mathbb{R}^{+}\) and a submodular function \(b:\mathcal{P}(V)\to\mathbb{Z}\), find_
\[\sigma^{*}:=\min\big{\{}\sigma_{c}(x):x\in\mathbb{R}^{A},\varrho_{x}(X)- \delta_{x}(X)\leq b(X)\quad\forall X\subseteq V\big{\}} \tag{23}\]
_along with a minimizing vector \(x\)._
Let us introduce the following notations.
\[\tilde{c}(e) :=\frac{1}{c(e)}\] \[\varrho_{\tilde{c}}(X) :=\sum_{e\in\varrho(X)}\tilde{c}(e)\] \[\delta_{\tilde{c}}(X) :=\sum_{e\in\delta(X)}\tilde{c}(e)\] \[\partial_{\tilde{c}}(X) :=\varrho_{\tilde{c}}(X)-\delta_{\tilde{c}}(X)\]
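For concreteness, the reweighted quantities above can be evaluated by direct enumeration. The following minimal Python sketch does this for a toy digraph; the particular edge list, the weights and the dictionary-based encoding are illustrative assumptions and not part of the algorithmic development.

```python
from fractions import Fraction

# Toy digraph on V = {0, 1, 2}: edges (tail, head) with positive weights c(e)
edges = [(0, 1), (1, 2), (2, 0), (0, 2)]
c = {e: Fraction(w) for e, w in zip(edges, (1, 2, 1, 3))}
c_tilde = {e: 1 / c[e] for e in edges}              # tilde{c}(e) = 1 / c(e)

def rho_tilde(X):
    """varrho_{tilde c}(X): tilde{c}-weight of the edges entering X."""
    return sum(c_tilde[(u, v)] for (u, v) in edges if u not in X and v in X)

def delta_tilde(X):
    """delta_{tilde c}(X): tilde{c}-weight of the edges leaving X."""
    return sum(c_tilde[(u, v)] for (u, v) in edges if u in X and v not in X)

def partial_tilde(X):
    """partial_{tilde c}(X) = varrho_{tilde c}(X) - delta_{tilde c}(X)."""
    return rho_tilde(X) - delta_tilde(X)

print(rho_tilde({0, 1}), delta_tilde({0, 1}), partial_tilde({0, 1}))  # 1, 5/6, 1/6
```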
**Definition 4.3**.: _For an arbitrary real value \(\kappa\in\mathbb{R}\), let \(s(\kappa)\) denote the minimal value \(\sigma\) for which there exists a \((\kappa\tilde{c},(\kappa+\sigma)\tilde{c})\)-bounded submodular flow \(x\)._
Clearly,
\[\sigma^{*}=\min_{\kappa}s(\kappa)\]
The following theorem gives a dual characterization of Problem 4.2; we then show that the optimal spread can be computed in strongly polynomial time.
**Theorem 4.4**.: _Assume that there exists a set \(X\subseteq V\) such that \(\partial_{\tilde{c}}(X)\neq 0\). Then_
\[\sigma^{*}=\max\Bigg{\{}\frac{-b(X)\partial_{\tilde{c}}(Y)+b(Y)\partial_{ \tilde{c}}(X)}{\delta_{\tilde{c}}(X)\partial_{\tilde{c}}(Y)-\delta_{\tilde{c}} (Y)\partial_{\tilde{c}}(X)}:\partial_{\tilde{c}}(X)\geq 0,\partial_{\tilde{c}}(Y)<0 \Bigg{\}}, \tag{24}\]
_or \(\sigma^{*}=0\) if the maximum above is negative._
_If \(\partial_{\tilde{c}}(X)=0\) holds for all \(X\subseteq V\), then \(s(\kappa)\) is a constant function. In this case_
\[\sigma^{*}=\max_{X\subseteq V}\Bigg\{\frac{-b(X)}{\delta_{\tilde{c}}(X)}:b(X)<0\Bigg\}, \tag{25}\]
_or \(\sigma^{*}=0\) if \(b(X)\geq 0\) holds for all \(X\subseteq V\)._
The proof of Theorem 4.4 follows the same pattern as that of Theorem 3.5 and the details are left to the reader.
The following two sections present a strongly polynomial algorithm, first to compute the value of \(s(\kappa)\) and then to find its minimum, thereby solving the weighted version of the Balanced Submodular Flow Problem. The latter algorithm also provides a corresponding dual solution as described in Theorem 4.4.
### Computing the Value of \(s(\kappa)\)
In this section, the following theorem is proved.
**Theorem 4.5**.: _The value of \(s(\kappa)\) can be computed with \(n^{2}+\mathcal{O}(m\log^{2}(m))\) submodular function minimizations._
By Theorem 2.5, we get that
**Claim 4.6**.: \[s(\kappa)=\min\left\{\sigma\geq 0:\min_{X\subseteq V}\left(b(X)-\kappa\partial_{\tilde{c}}(X)+\sigma\delta_{\tilde{c}}(X)\right)\geq 0\right\}\] (26)
Note that \(b(X)-\kappa\partial_{\tilde{c}}(X)\) is a submodular function in \(X\), therefore Theorem 4.5 is clearly a special case of the following extension of Theorem 2.9 proven below.
**Theorem 4.7**.: _Let \(b\) be a submodular function over the set \(V\), \(|V|=n\), and assume that \(b(\emptyset)\geq 0\). In addition, let \(f:\mathcal{P}(V)\rightarrow\mathcal{P}(A)\) and \(a\in\mathbb{R}^{A}\), such that \(b^{\prime}(S)=a(f(S))=\sum_{e\in f(S)}a(e)\) is also submodular, and let \(|A|=m\). Then the value of_
\[\delta^{*}:=\min\left\{\delta\geq 0:\min_{S\subseteq V}\left(b(S)+\delta a(f(S ))\right)\geq 0\right\} \tag{27}\]
_can be computed by performing \(n^{2}+\mathcal{O}(m\log^{2}m)\) submodular function minimizations over the set \(V\), i.e. in time \(\mathcal{O}((n^{2}+m\log^{2}m)\Upsilon)\)._
Proof.: Note that \(\delta^{*}\) is the smallest root of the piecewise linear concave function
\[h(\delta):=\min_{S\subseteq V}\left(b(S)+\delta a(f(S))\right).\]
Therefore \(\delta^{*}\) can be computed using the well-known Discrete Newton Method outlined in Algorithm 4. In the following, we show that the number of iterations taken by the algorithm is at most \(n^{2}+\mathcal{O}(m\log^{2}m)\).
```
1:\(\delta_{1}=0\), \(i=0\)
2:repeat
3:\(i=i+1\)
4:\(h_{i}=\min\{b(X)+\delta_{i}a(f(X))\}\)
5:\(X_{i}\in\arg\min\{b(X)+\delta_{i}a(f(X))\}\)
6: if \(h_{i}=0\) then return \(\delta_{i},X_{i}\)
7:\(a_{i}:=a(f(X_{i}))\)
8: if \(a_{i}=0\) then return \(\textsc{Undefined}\), \(X_{i}\)
9:\(\delta_{i+1}=\frac{-b(X_{i})}{a_{i}}\)
```
**Algorithm 4** Discrete Newton Method
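For illustration, the following Python sketch mirrors Algorithm 4 on a toy instance. The brute-force enumeration of all subsets stands in for a polynomial-time submodular function minimization oracle, exact rational arithmetic keeps the test \(h_{i}=0\) meaningful, and the particular functions \(b\) and \(a\circ f\) in the usage line are made-up examples.

```python
from fractions import Fraction
from itertools import chain, combinations

def all_subsets(V):
    return (frozenset(S) for S in
            chain.from_iterable(combinations(V, r) for r in range(len(V) + 1)))

def discrete_newton(b, af, V):
    """Smallest delta >= 0 with min_S ( b(S) + delta * a(f(S)) ) >= 0, as in Algorithm 4.
    b and af are callables on frozensets; enumeration replaces the SFM oracle."""
    delta = Fraction(0)                                      # line 1
    while True:                                              # line 2
        h, X = min(((b(S) + delta * af(S), S) for S in all_subsets(V)),
                   key=lambda t: t[0])                       # lines 4-5
        if h >= 0:                                           # line 6 (h == 0 at the root)
            return delta, X
        a = af(X)                                            # line 7
        if a == 0:                                           # line 8: no root exists
            return None, X
        delta = Fraction(-b(X)) / Fraction(a)                # line 9: Newton step

# Usage with b(S) = 4 - |S|^2 (submodular, b(empty) >= 0) and a(f(S)) = |S|
print(discrete_newton(lambda S: Fraction(4 - len(S) ** 2),
                      lambda S: Fraction(len(S)), (0, 1, 2)))   # (5/3, {0, 1, 2})
```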
We use the following generic properties of the Discrete Newton Method ([15]).
**Claim 4.8**.:
1. \(h_{1}<h_{2}<\dots<h_{t}=0\)
2. \(\delta_{1}<\delta_{2}<\dots<\delta_{t}\)
3. \(a_{1}>a_{2}>\cdots>a_{t}\geq 0\)
**Claim 4.9**.: \[\frac{h_{i+1}}{h_{i}}+\frac{a_{i+1}}{a_{i}}\leq 1\]
Let \(I\) denote the set of indices for which \(\frac{a_{i+1}}{a_{i}}\leq\frac{2}{3}\). Then -- as a direct consequence of Lemma 2.7 -- we get that
\[|I|=\Big{|}\left\{i:\frac{a_{i+1}}{a_{i}}\leq\frac{2}{3}\right\}\Big{|}= \mathcal{O}(m\log m).\]
Let \(J\) be the set of the remaining indices, i. e.
\[J=\left\{i:\frac{a_{i+1}}{a_{i}}>\frac{2}{3}\right\}.\]
Due to Claim 4.9, \(\frac{h_{i+1}}{h_{i}}<\frac{1}{3}\) holds for any \(i\in J\).
Let us consider the smallest partition \(J=J_{1}\cup J_{2}\cup\cdots\cup J_{q}\) such that \(J_{l}\) is an interval for each \(l=1,\ldots,q\), i.e. \(J_{l}=[s_{l},t_{l}]\) and \(t_{l}+1<s_{l+1}\). Let \(t^{\prime}_{l}:=t_{l}-\lceil\log(n^{2}/4)\rceil\) and \(J^{\prime}_{l}:=[s_{l},t^{\prime}_{l}]\). (If \(t^{\prime}_{l}<s_{l}\), then let \(J^{\prime}_{l}:=\emptyset\).) Let \(J^{\prime}:=J^{\prime}_{1}\cup J^{\prime}_{2}\cup\cdots\cup J^{\prime}_{q}\).
**Lemma 4.10**.: _For each \(j\in J^{\prime}\), we have_
\[X_{j}\notin R(X_{k}:k\in J^{\prime},\ j<k).\]
Proof.: On the one hand, let \(s_{i}\leq l<t^{\prime}_{i}\) for some \(1\leq i\leq q\). Then the sequence \(X_{l},X_{l+1},\ldots,X_{t^{\prime}_{i}}\) with the submodular function \(b^{\prime}(X):=b(X)+\delta_{t_{i}}a(f(X))\) satisfies the conditions of Lemma 2.10, therefore \(X_{l}\notin R(X_{l+1},X_{l+2},\ldots,X_{t^{\prime}_{i}})\).
On the other hand, if \(i<q\), then let \(F:=J^{\prime}_{i+1}\cup\cdots\cup J^{\prime}_{q}\). Clearly, \(b^{\prime}(X_{j})<0\) for each \(j\in F\). Lemma 2.11 with \(M=0\) implies2 that \(b^{\prime}(X)\leq-\frac{n^{2}}{4}b^{\prime}(X_{t_{i}})\) holds for each \(X\in R(X_{k}:k\in F)\). Due to Theorem 2.13, \(X_{l}\in R(F)\) only if \(t^{\prime}_{i}<l\leq t_{i}\). Therefore if \(s_{i}\leq l\leq t^{\prime}_{i}\), then \(X_{l}\notin R(X_{k}:k\in J^{\prime},\ l<k)\), finishing the proof.
Footnote 2: Note that \(b^{\prime}(X_{t_{i}})=\min\{b^{\prime}(X):X\subseteq V\}\).
The sequence \((X_{j}:j\in J^{\prime})\) meets the conditions of Lemma 2.12, thus \(|J^{\prime}|\leq\binom{n+1}{2}+1\).
Finally the total number of iterations taken by Algorithm 4 is
\[\big{|}I\big{|}+\big{|}J\big{|} \leq\big{|}I\big{|}+\big{|}J^{\prime}\big{|}+2\log(n)|I|\] \[\leq O(m\log m)+\binom{n+1}{2}+1+2\log(n)O(m\log m)\] \[=O(n^{2}+m\log^{2}m),\]
finishing the proof of Theorem 4.5.
Note that when Algorithm 4 is used to compute the value of \(s(\kappa_{0})\) by finding the smallest root of the function \(h(\sigma):=\min_{X\subseteq V}\left(b(X)-\kappa_{0}\partial_{\tilde{c}}(X)+\sigma\delta_{\tilde{c}}(X)\right)\), and the algorithm terminates at line 8, then the provided set \(X\subseteq V\) will have the properties \(\delta_{\tilde{c}}(X)=0\) and \(\kappa_{0}>\frac{b(X)}{\partial_{\tilde{c}}(X)}\). Therefore, no submodular flow with lower bound \(\kappa_{0}\tilde{c}\) may exist.
With this information in hand, one can compute the largest value \(\kappa_{\max}\) for which \(s(\kappa_{\max})\) is defined (i. e. there exists a submodular flow with lower bound \(\kappa_{\max}\tilde{c}\)) as follows.
By Theorem 2.5,
\[\kappa_{\max}=\max\bigg\{\kappa:\min\{b(X)-\kappa\varrho_{\tilde{c}}(X):X\subseteq V,\ \delta_{\tilde{c}}(X)=0\}\geq 0\bigg\}.\]
Therefore, this value can again be computed by applying the Discrete Newton Method to find the largest root of the concave function
\[h(\kappa):=\min\{b(X)-\kappa\varrho_{\tilde{c}}(X):X\subseteq V,\ \delta_{\tilde{c}}(X)=0\}\]
with initial value \(\kappa_{0}\).
By the same token as in the proof of Theorem 4.5, the value of \(\kappa_{\max}\) can be found in \(n^{2}+\mathcal{O}(m\log^{2}(m))\) iterations.
### Minimizing \(s(\kappa)\)
**Claim 4.11**.: _The function \(s(\kappa)\) is convex._
Proof.: For any \(\kappa_{1},\kappa_{2}\in\mathbb{R}\), let \(x_{1}\) and \(x_{2}\) be submodular flows such that \(\kappa_{i}\mathbbm{1}\leq x_{i}\leq(\kappa_{i}+s(\kappa_{i}))\mathbbm{1}\), and let \(0\leq\lambda\leq 1\). Then \(x^{\prime}:=\lambda x_{1}+(1-\lambda)x_{2}\) is also a submodular flow and \([\lambda\kappa_{1}+(1-\lambda)\kappa_{2}]\mathbbm{1}\leq x^{\prime}\leq[ \lambda(\kappa_{1}+s(\kappa_{1}))+(1-\lambda)(\kappa_{2}+s(\kappa_{2}))] \mathbbm{1}\), therefore
\[s(\lambda\kappa_{1}+(1-\lambda)\kappa_{2})\leq\lambda s(\kappa_{1})+(1- \lambda)s(\kappa_{2}). \tag{28}\]
Since \(s(\kappa)\) is a convex function, the Handler-Zang method -- detailed in Appendix A -- is applicable to find its minimum. It will be shown that this actually yields a strongly polynomial algorithm: first an efficient way to find an appropriate initial interval is given, and then a strongly polynomial bound on the number of iterations is provided.
#### 4.2.1 Finding the Initial Interval
In order to find the initial interval, let us start with an arbitrary value \(\kappa_{0}\leq\kappa_{\max}\) and iterate the usual Newton steps on the function \(s(\kappa)\) until one of the following cases occurs (a short code sketch is given after the list):
1. a root of \(s(\kappa)\) is found. In this case an optimal solution of spread \(0\) to the original problem has been found.
2. a value with a subgradient \(0\) is found. In this case the minimum of \(s(\kappa)\) has also been reached.
3. a value with a subgradient of the opposite sign is found. Then \([\kappa_{i-1},\kappa_{i}]\) (or \([\kappa_{i},\kappa_{i-1}]\) if \(\kappa_{i}<\kappa_{i-1}\)) is an appropriate initial interval.
4. \(\kappa_{i}>\kappa_{\max}\). Then \([\kappa_{i-1},\kappa_{\max}]\) is an appropriate initial interval.
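A compact Python rendering of this search is sketched below; the oracle `s_and_subgrad`, returning \(s(\kappa)\) together with a subgradient, is assumed to be available (e.g. by brute force, as in the sketch given for (29) further below), and exact rational arithmetic is assumed so that the comparisons with \(0\) are meaningful.

```python
def initial_interval(s_and_subgrad, kappa0, kappa_max):
    """Newton steps on s(kappa), stopping according to the four cases above."""
    kappa_prev, g_prev = None, None
    kappa = kappa0
    while True:
        val, g = s_and_subgrad(kappa)
        if val == 0:
            return "optimum", kappa                    # case 1: root of s found
        if g == 0:
            return "optimum", kappa                    # case 2: zero subgradient
        if g_prev is not None and g * g_prev < 0:
            lo, hi = sorted((kappa_prev, kappa))
            return "interval", (lo, hi)                # case 3: subgradient changed sign
        kappa_next = kappa - val / g                   # Newton step on s(kappa)
        if kappa_next > kappa_max:
            return "interval", (kappa, kappa_max)      # case 4: step beyond kappa_max
        kappa_prev, g_prev = kappa, g
        kappa = kappa_next
```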
#### 4.2.2 Upper Bound on the Number of Iterations
**Theorem 4.12**.: _The minimum of \(s(\kappa)\) can be computed using the Handler-Zang method with \(\mathcal{O}(n^{4}m^{6}\log^{6}(m))\) iterations._
Note that during the execution of the algorithm, a set \(X\) with \(\delta_{\tilde{c}}(X)=0\) can never be relevant when computing (26). Therefore it can be rewritten as follows.
\[s(\kappa)=\max\bigg{\{}\kappa\frac{\partial_{\tilde{c}}(X)}{\delta_{\tilde{c}}(X )}-\frac{b(X)}{\delta_{\tilde{c}}(X)}:\delta_{\tilde{c}}(X)>0\bigg{\}}. \tag{29}\]
Thus, for any given value of \(\kappa\) and set \(X\) maximizing Equation (29), the value \(\frac{\partial_{\tilde{c}}(X)}{\delta_{\tilde{c}}(X)}\) is a subgradient of \(s(\kappa)\).
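For small instances, \(s(\kappa)\) and the subgradient above can be evaluated directly from (29) by enumerating all vertex subsets. The Python sketch below assumes callables for \(b\) and for the reweighted cut functions (for instance the toy helpers shown earlier); it is meant only as a brute-force illustration.

```python
from fractions import Fraction
from itertools import chain, combinations

def s_and_subgrad(kappa, b, rho_t, delta_t, V):
    """Evaluate (29) by enumeration: s(kappa) together with a subgradient of s at kappa."""
    best_val, best_grad = Fraction(0), Fraction(0)     # if the max in (29) is <= 0, s = 0
    for S in chain.from_iterable(combinations(V, r) for r in range(len(V) + 1)):
        X = frozenset(S)
        d = delta_t(X)
        if d > 0:
            partial = rho_t(X) - d
            val = (kappa * partial - b(X)) / d
            if val > best_val:
                best_val, best_grad = val, Fraction(partial) / Fraction(d)
    return best_val, best_grad
```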
Now, let us assume that \(\sigma^{*}\) is already known, and we are only looking for the value of \(\kappa^{*}\) for which \(s(\kappa^{*})=\sigma^{*}\), i. e. we are looking for a root of the function \(\overline{s}(\kappa):=s(\kappa)-\sigma^{*}\). This could also be computed by the Discrete Newton Method outlined in Algorithm 5.
An important observation is the following. Let \(a_{1}^{\prime}<a_{2}^{\prime}<\cdots<a_{k}^{\prime}=\kappa^{*}\) be all the _distinct_ values computed by the Handler-Zang method during the minimization of \(s(\kappa)\) and let \(a_{1}^{\prime}=\kappa_{0}<\kappa_{1}<\cdots<\kappa_{l}=\kappa^{*}\) be the values computed by the Discrete Newton Method for finding the root of \(\overline{s}(\kappa)\) with the initial value \(\kappa_{0}=a_{1}\). Then \(\kappa_{i}\leq a_{i}^{\prime}\) holds for all \(i=1,\ldots,k\), consequently \(k<l\). The same holds for the sequence \(b_{1}^{\prime}>b_{2}^{\prime}>\cdots>b_{r}^{\prime}=\kappa^{*}\).
The Handler-Zang method updates either \(a_{i}\) or \(b_{i}\) in every iteration, therefore the number of its iterations is at most twice the number of iterations taken by the Discrete Newton Method to find the root of \(\overline{s}(\kappa)\). An upper bound on the latter quantity is given below.
**Theorem 4.13**.: _The root of \(\overline{s}(\kappa)\) can be computed using the Discrete Newton Method with \(\mathcal{O}(n^{4}m^{6}\log^{6}(m))\) iterations._
```
1:\(\kappa_{1}=a_{1}\), \(i=1\)
2:\(\sigma_{i}=s(\kappa_{1})-\sigma^{*}\)
3: while \(\sigma_{i}>0\) do
4: \(i=i+1\)
5: \(\kappa_{i}=\frac{b_{i-1}+\sigma^{*}\delta_{i-1}}{\partial_{i-1}}\)
6: \(\sigma_{i}=s(\kappa_{i})-\sigma^{*}\)
7:\(A_{i}=\arg\max_{A\subseteq V}\frac{\kappa_{i}\partial_{\tilde{c}}(A)-b(A)}{ \delta_{\tilde{c}}(A)}\)
8:\(b_{i}=b(A_{i}),\quad\partial_{i}=\partial_{\tilde{c}}(A_{i}),\quad\delta_{i} =\delta_{\tilde{c}}(A_{i}),\quad s_{i}=\frac{\partial_{i}}{\delta_{i}}\)
```
**Algorithm 5** Discrete Newton Method for \(\overline{s}(\kappa)\)
**Claim 4.14**.: \[\frac{\sigma_{i+1}}{\sigma_{i}}+\frac{s_{i+1}}{s_{i}}\leq 1\] (30)
An upper bound on the number of iterations is given below. For this purpose, the indices are divided into parts, depending on the value of \(\frac{s_{i+1}}{s_{i}}\). Note that it is between \(0\) and \(1\), since \(|s_{i}|\) is decreasing.
**Lemma 4.15**.: _There are \(\mathcal{O}(m^{4}\log^{4}(m))\) iterations such that \(\frac{s_{i+1}}{s_{i}}\leq\frac{2}{3}\)._
Proof.: Let \(i_{1},i_{2},\ldots,i_{k}\) be the indices for which \(\frac{s_{i_{j}+1}}{s_{i_{j}}}\leq\frac{2}{3}\) holds and consider the sequence \(\delta_{i_{1}},\delta_{i_{2}},\ldots,\delta_{i_{k}}\). By Theorem 2.8, there exists a monotonic subsequence \(\delta_{i_{1}^{\prime}},\delta_{i_{2}^{\prime}},\ldots,\delta_{i_{k^{\prime}}^ {\prime}}\) of length \(k^{\prime}=\left\lfloor\sqrt{k}\right\rfloor\).
Let us assume that all the subgradients computed by the Newton method are positive i. e. \(\partial_{i}>0\). (The case of negative subgradients is proved the same way.)
Case 1. \(\delta_{i^{\prime}_{1}}\geq\delta_{i^{\prime}_{2}}\geq\cdots\geq\delta_{i^{\prime}_{k^{\prime}}}\). Then
\[\partial_{i^{\prime}_{j+1}}=s_{i^{\prime}_{j+1}}\delta_{i^{\prime}_{j+1}}\leq s _{i^{\prime}_{j+1}}\delta_{i^{\prime}_{j}}\leq\frac{2}{3}s_{i^{\prime}_{j}} \delta_{i^{\prime}_{j}}=\frac{2}{3}\partial_{i^{\prime}_{j}}\]
therefore
\[\partial_{i^{\prime}_{j+2}}\leq\frac{2}{3}\partial_{i^{\prime}_{j+1}}\leq \frac{4}{9}\partial_{i^{\prime}_{j}}\leq\frac{1}{2}\partial_{i^{\prime}_{j}}\]
holds for all \(j=1,\ldots,k^{\prime}-2\). Observe that with
\[b_{i} =\tilde{c}(e_{i})\quad 1\leq i\leq|E|\] \[x_{i} =\begin{cases}1&e_{i}\in\varrho(A_{i})\\ -1&e_{i}\in\delta(A_{i})\\ 0&\text{otherwise}\end{cases}\]
the values of \(\partial_{i}\) are of the form required by Lemma 2.7. Therefore we get \(k^{\prime}=\mathcal{O}(m\log(m))\).
Case 2. \(\delta_{i^{\prime}_{1}}\leq\delta_{i^{\prime}_{2}}\leq\cdots\leq\delta_{i^{\prime}_{k^{\prime}}}\). Similarly to Case 1, one can prove that there are \(\mathcal{O}(m\log(m))\) indices such that \(\frac{4}{3}\delta_{i^{\prime}_{j^{\prime}}}\leq\delta_{i^{\prime}_{j^{\prime}+1}}\).
Now let us consider a subsequence of consecutive indices \(j_{1},j_{2},\ldots,j_{h}\) such that \(\frac{4}{3}\delta_{j_{t}}>\delta_{j_{t+1}}\). Now we get that
\[\frac{8}{9}\partial_{j_{t}}\geq\partial_{j_{t+1}}\]
holds for all \(1\leq t\leq h-1\). By Lemma 2.7, we get that the length of such a subsequence is \(\mathcal{O}(m\log(m))\). Therefore the total length \(k^{\prime}\) of the whole sequence is \(\mathcal{O}(m\log(m)\cdot m\log(m))=\mathcal{O}(m^{2}\log^{2}(m))\). Since \(k^{\prime}=\lfloor\sqrt{k}\rfloor\), this gives \(k=\mathcal{O}(m^{4}\log^{4}(m))\), proving the lemma.
Now let us consider a sequence of consecutive iterations \(i,\ldots,i+l\) such that \(\frac{s_{j+1}}{s_{j}}\geq\frac{2}{3}\) for \(j=i,\ldots,i+l\). Note that by Claim 4.14, \(\frac{\sigma_{j+1}}{\sigma_{j}}\leq\frac{1}{3}\) also holds for these indices.
**Claim 4.16**.: _For any \(j=i,\ldots,i+l-1\), we have_
\[\kappa_{j+1}-\kappa_{i+l}\leq\frac{1}{2}(\kappa_{j}-\kappa_{i+l}). \tag{31}\]
Proof.: The claim can easily be proved by the fact that
\[\kappa_{j^{\prime}}-\kappa_{j^{\prime}+1}=\frac{\sigma_{j^{\prime}}}{s_{j^{ \prime}}}=\frac{\sigma_{j^{\prime}-1}}{s_{j^{\prime}-1}}\cdot\frac{\sigma_{j^{ \prime}}}{\sigma_{j^{\prime}-1}}\cdot\frac{s_{j^{\prime}-1}}{s_{j^{\prime}}} \leq\frac{\sigma_{j^{\prime}-1}}{s_{j^{\prime}-1}}\cdot\frac{1}{3}\cdot\frac{3 }{2}=\frac{1}{2}(\kappa_{j^{\prime}-1}-\kappa_{j^{\prime}})\]
holds for each \(j^{\prime}=i,\ldots,i+l-1\).
**Claim 4.17**.: _We have_ \[F_{j}\geq 2F_{j+2}\quad\forall\,i\leq j\leq i+l-2,\]
_where_
\[F_{j}:=\sigma_{i+l}-(\sigma_{j}-(\kappa_{j}-\kappa_{i+l})s_{j}).\]
Proof.: Note that
\[\sigma_{j}-(\kappa_{j}-\kappa_{j+1})s_{j}=\sigma_{j}-\frac{\sigma_{j}}{s_{j}}s_{j} =0,\]
\[\sigma_{j}\geq\sigma_{i+l}\geq 0,\]
and
\[s_{j}\geq s_{j+1}\]
hold for all \(j=i,\ldots,i+l\). Then the claim is proved as follows:
\[F_{j}= \sigma_{i+l}-(\sigma_{j}-(\kappa_{j}-\kappa_{i+l})s_{j})=\sigma_{ i+l}-\sigma_{j}+(\kappa_{j}-\kappa_{i+l})s_{j}\] \[= \sigma_{i+l}-(\sigma_{j}-(\kappa_{j}-\kappa_{j+1})s_{j})+(\kappa_ {j+1}-\kappa_{i+l})s_{j}\geq(\kappa_{j+1}-\kappa_{i+l})s_{j}\] \[\geq 2(\kappa_{j+2}-\kappa_{i+l})s_{j}\geq 2(\kappa_{j+2}-\kappa_{i+l} )s_{j+2}\] \[\geq 2(\sigma_{i+l}-\sigma_{j+2})+2(\kappa_{j+2}-\kappa_{i+l})s_{j+2}\] \[= 2(\sigma_{i+l}-\sigma_{j+2}+(\kappa_{j+2}-\kappa_{i+l})s_{j+2}) =2F_{j+2}.\]
**Claim 4.18**.: _We have_
\[F_{j}=\frac{\sigma_{i+l}\delta_{j}+b_{j}-\kappa_{i+l}\partial_{j}}{\delta_{j}}. \tag{32}\]
_Furthermore, the expression_
\[f(X):=\sigma_{i+l}\delta_{\tilde{c}}(X)+b(X)-\kappa_{i+l}\partial_{\tilde{c}}(X)\]
_appearing in the numerator in (32) is a submodular function._
Proof.: \[F_{j}= \sigma_{i+l}-(\sigma_{j}-(\kappa_{j}-\kappa_{i+l})s_{j})=\sigma_{ i+l}-\frac{\partial_{j}}{\delta_{j}}\kappa_{j}+\frac{b_{j}}{\delta_{j}}+ \kappa_{j}s_{j}-\kappa_{i+l}s_{j}\] \[= \sigma_{i+l}+\frac{b_{j}}{\delta_{j}}-\kappa_{i+l}(\frac{\partial _{j}}{\delta_{j}})=\frac{\sigma_{i+l}\delta_{j}+b_{j}-\kappa_{i+l}\partial_{j}} {\delta_{j}}\]
and
\[f(X)+f(Y)= \sigma_{i+l}\delta_{\tilde{c}}(X)+b(X)-\kappa_{i+l}\partial_{\tilde{c}}(X)+\sigma_{i+l}\delta_{\tilde{c}}(Y)+b(Y)-\kappa_{i+l}\partial_{\tilde{c}}(Y)\] \[= \sigma_{i+l}(\delta_{\tilde{c}}(X)+\delta_{\tilde{c}}(Y))+(b(X)+b(Y))-\kappa_{i+l}(\partial_{\tilde{c}}(X)+\partial_{\tilde{c}}(Y))\] \[\geq \sigma_{i+l}(\delta_{\tilde{c}}(X\cup Y)+\delta_{\tilde{c}}(X\cap Y))+(b(X\cup Y)+b(X\cap Y))\] \[-\kappa_{i+l}(\partial_{\tilde{c}}(X\cup Y)+\partial_{\tilde{c}}(X\cap Y))\] \[= \sigma_{i+l}\delta_{\tilde{c}}(X\cup Y)+b(X\cup Y)-\kappa_{i+l}\partial_{\tilde{c}}(X\cup Y)\] \[+\sigma_{i+l}\delta_{\tilde{c}}(X\cap Y)+b(X\cap Y)-\kappa_{i+l}\partial_{\tilde{c}}(X\cap Y)\] \[= f(X\cup Y)+f(X\cap Y).\]
**Theorem 4.19**.: _Let \(i,i+1,\ldots,i+l\) be consecutive indices such that \(\frac{\sigma_{j+1}}{\sigma_{j}}\leq\frac{1}{3}\). Then \(l\) is at most \(\mathcal{O}(n^{4}m^{2}\log^{2}(m))\)._
Proof.: The same technique is applicable here as in Lemma 4.15. Considering the sequence of \(\delta_{i},\delta_{i+1},\ldots,\delta_{i+l}\) and using Theorem 2.8, a monotonic subsequence
\[\delta_{j_{1}},\delta_{j_{2}},\ldots,\delta_{j_{l^{\prime}}}\]
of length \(\lfloor\sqrt{l}\rfloor\) must exist.
Case 1. \(\delta_{j_{1}}\geq\delta_{j_{2}}\geq\cdots\geq\delta_{j_{l^{\prime}}}\). In this case \(\delta_{j_{k}}\geq\delta_{j_{k+2}}\), and
\[\frac{f(A_{j_{k}})}{\delta_{j_{k}}}=F_{j_{k}}\geq 2F_{j_{k+2}}=2\frac{f(A_{j_{k+2}})}{\delta_{j_{k+2}}},\]
therefore
\[f(A_{j_{k}})\geq 2f(A_{j_{k+2}})\]
holds for all \(t\leq k\leq t+r-2\).
Then we can use Theorem 2.13 with \(T_{0}=A_{j_{t+r}},T_{1}=A_{j_{t+r-4}},\ldots,T_{k}=A_{j_{t+r-4k}}\). The conditions of theorem 2.13 hold true, firstly because
\[f(A_{j_{k}})= \sigma_{j_{t+r}}\delta_{j_{k}}+b_{j_{k}}-\kappa_{j_{t+r}}\partial _{j_{k}}=\delta_{j_{k}}\big{(}\sigma_{j_{t+r}}-(\sigma_{j_{k}}-(\kappa_{j_{k}} -\kappa_{j_{t+r}})s_{j_{k}})\big{)}\] \[> \delta_{j_{k}}\big{(}\sigma_{j_{t+r}}-(\sigma_{j_{k}}+(\kappa_{j _{k}}-\kappa_{j_{k+1}})s_{j_{k}})\big{)}\geq\delta_{j_{k}}\big{(}\sigma_{j_{t+ r}}-0\big{)}>0\]
hold for all \(t\leq k\leq t+r-2\), and
\[f(T_{0})= f(A_{j_{t+r}})=\sigma_{j_{t+r}}\delta_{j_{t+r}}+b_{j_{t+r}}- \kappa_{j_{t+r}}\partial_{j_{t+r}}\] \[= -b_{j_{t+r}}+\kappa_{j_{t+r}}\partial_{j_{t+r}}+b_{j_{t+r}}- \kappa_{j_{t+r}}\partial_{j_{t+r}}=0\]
holds for \(k=t+r\), therefore \(f(T_{0})=f_{\min}\) and \(f(T_{1})=f(A_{j_{t+r-4}})>0=-2f_{\min}\).
Secondly, because
\[f(T_{k})=f(A_{j_{t+r-4k}})\geq 4f(A_{j_{t+r-4k+4}})=4f(A_{j_{t+r-4(k-1)}})=4f(T_ {k-1})\]
Due to Theorem 2.13, we have \(r\) is at most \(4\binom{n+1}{2}+1=\mathcal{O}(n^{2})\).
In conclusion, the length \(\sqrt{l}\) of the monotone subsequence is \(\mathcal{O}(n^{2})\), which means that the statement of the theorem holds true in this case.
Case 2. \(\delta_{j_{1}}\leq\delta_{j_{2}}\leq\cdots\leq\delta_{j_{l^{\prime}}}\). First, let \(j_{t_{1}},j_{t_{2}},\ldots,j_{t_{q}}\) denote the indices such that \(\sqrt[4]{2}\delta_{j_{t}}\leq\delta_{j_{t+1}}\). Then \(\delta_{j_{t_{k-4}}}\leq\frac{1}{2}\delta_{j_{t_{k}}}\) holds for all \(5\leq k\leq q\). Since \(\delta_{j}\) is linear, using Lemma 2.7, it follows that \(q\) is at most \(\mathcal{O}(m\log(m))\).
On the other hand, let \(j_{t},j_{t+1},\ldots,j_{t+r}\) be a subsequence of _consecutive_ indices such that \(\sqrt[4]{2}\delta_{j_{t}}>\delta_{j_{t+1}}\). Recall that \(\delta_{j_{k}}\geq\frac{1}{\sqrt{2}}\delta_{j_{k+2}}\) and
\[\frac{f(A_{j_{k}})}{\delta_{j_{k}}}=F_{j_{k}}\geq 2F_{j_{k+2}}=2\frac{f(A_{j_{k+2}})}{\delta_{j_{k+2}}},\]
therefore
\[f(A_{j_{k}})\geq\sqrt{2}f(A_{j_{k+2}})\]
hold for all \(t\leq k\leq t+r-2\).
Then we can use Theorem 2.13 with \(T_{0}=A_{j_{t+r}},T_{1}=A_{j_{t+r-8}},\ldots,T_{k}=A_{j_{t+r-8k}}\) similarly to the first case and we get that \(r\) is at most \(8\binom{n+1}{2}+1=\mathcal{O}(n^{2})\).
In conclusion, the length \(\sqrt{l}\) of the increasing subsequence is at most \(\mathcal{O}(n^{2}m\log(m))\), which proves the theorem.
Finally, it follows from Lemma 4.15 and Theorem 4.19 that the number of iterations of the usual Discrete Newton Method is strongly polynomial. Thus, the minimum of \(s(\kappa)\) can be computed with \(\mathcal{O}(n^{4}m^{8}\log^{6}(m))\) submodular function minimizations. In summary, the following theorem is proved and an algorithm for finding \(\sigma^{*}\) is obtained.
**Theorem 4.20**.: _The value of \(\sigma^{*}\) along with maximizing sets \(X,Y\subseteq V\) can be computed in \(\mathcal{O}(n^{4}m^{6}\log^{6}(m)\cdot(n^{2}+m\log^{2}(m)))\) submodular function minimization problems._
## 5 Balanced Integral Submodular Flows
A simple example of a graph having two nodes and two parallel edges shows that the minimum spread solution cannot always be chosen to be integral, even in the case of simple network flows with an integer supply vector. This section shows how an integral flow of minimum (unweighted) spread can be found.
From now on, let us assume that the submodular function \(b\) is integral.
Just as in the weighted case, let \(s(\kappa)\) be defined as the minimum value \(\sigma\) for which there exists a \((\kappa 1,(\kappa+\sigma)1)\)-bounded submodular flow \(x\). Note that \(s(\kappa)\) can be computed by solving \(\mathcal{O}(|A|)\) supermodular function minimization problems for any given \(\kappa\). Clearly,
\[\sigma^{*}=\min_{\kappa}s(\kappa)\]
Note that the function \(s(\kappa)\) is convex and piecewise linear.
**Definition 5.1**.: _For an arbitrary integer \(\kappa\in\mathbb{Z}\), let \(s_{I}(\kappa)\) denote the minimum value \(\sigma\) for which there exists an \((\kappa 1,(\kappa+\sigma)1)\)-bounded integral submodular flow \(x\in\mathbb{Z}^{A}\)._
**Claim 5.2**.: _For any \(\kappa\in\mathbb{Z}\), we have \(s_{I}(\kappa)=\lceil s(\kappa)\rceil\)._
Proof.: Clearly, \(s_{I}(\kappa)\geq s(\kappa)\). On the other hand, the definition of \(s(\kappa)\) implies the existence of a \((\kappa 1,(\kappa+s(\kappa))\,1)\)-bounded submodular flow \(x\). This flow is also bounded by the integer vectors \(\kappa 1\) and \((\kappa+\lceil s(\kappa)\rceil)\,1\), therefore -- because of the integrality of the submodular flow polyhedron [4] -- an integer submodular flow must also exist between these bounds.
The claim above and the convexity of \(s(\kappa)\) immediately give the following.
**Claim 5.3**.: _The spread \(\sigma_{I}^{*}\) of the balanced integral submodular flow can be computed as_
\[\sigma_{I}^{*}=\min\left\{s_{I}(\lfloor\kappa^{*}\rfloor),s_{I}(\lceil\kappa^{*}\rceil)\right\}=\min\left\{\lceil s(\lfloor\kappa^{*}\rfloor)\rceil,\lceil s(\lceil\kappa^{*}\rceil)\rceil\right\},\]
_where \(\kappa^{*}\) denotes a minimizer of \(s(\kappa)\)._
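Algorithmically, Claim 5.3 is a rounding step: assuming oracles for \(s(\kappa)\) and for a (possibly fractional) minimizer \(\kappa^{*}\) are already available, the integral optimum is obtained as in the following small Python sketch.

```python
import math

def integral_spread(s, kappa_star):
    """sigma*_I = min of ceil(s(kappa)) over the two integers surrounding kappa_star,
    using s_I(kappa) = ceil(s(kappa)) and the convexity of s."""
    return min(math.ceil(s(k)) for k in {math.floor(kappa_star), math.ceil(kappa_star)})
```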
## 6 Conclusions
In this paper, we extended the Balanced Network Flow Problem examined by Scutella and Klinz [16, 11] to submodular flows. A min-max formula and a strongly polynomial algorithm were presented for the problem as well as for its weighted version. Finally, a strongly polynomial algorithm for the Balanced Integral Submodular Flow Problem was given.
### Acknowledgement
The work was supported by the Lendulet Programme of the Hungarian Academy of Sciences - grant number LP2021-1/2021 and by the Hungarian National Research, Development and Innovation Office - NKFIH, grant number FK128673.
The authors would like to acknowledge the valuable suggestions of Andras Frank and Kristof Berczi. |
# Spinning particles and background fields

Eugenia Boffo (arXiv:2304.12909)
###### Abstract:
Through their respective sigma models, a bosonic string and a superstring can be coupled to (super)gravity fields. These are subsequently forced to satisfy the correct classical equations of motion, as a consequence of quantization of the string. There are indications that particle models with extended supersymmetry can replicate this behavior. The bosonic sector of supergravity, comprising the metric, the Kalb-Ramond 2-form and the dilaton scalar field, was already shown to derive from Becchi-Rouet-Stora-Tyutin quantization of the \(N=4\) spinning particle [13]. Expanding on these results, here we discuss how to retrieve other Supergravity fields in the background.
###### Contents
* 1 Introduction
  * 1.1 Spinning particle
    * 1.1.1 Spinors
    * 1.1.2 Forms
* 2 Backgrounds
  * 2.1 BRST and twists by backgrounds
  * 2.2 Spin field and twistors
  * 2.3 R-R backgrounds
* 3 Conclusions and outlook
## 1 Introduction
The purpose of this short note is to streamline some of the results on _Ramond-Ramond backgrounds for the spinning particle_ contained in [12]. These results were presented by the author in a talk at the Corfu Summer Institute, during the workshop "Noncommutative and generalized geometry in string theory, gauge theory and related physical models". Emphasis is put on a representation with twistor-like variables and the need of a spin field for the study of deformations by backgrounds is justified. To begin with, I will first review \(N=1\) spinning particles.
### Spinning particle
Spinning particles [2] are particles with an intrinsic spin degree of freedom. For our purposes we will focus on relativistic and massless spinning particles, but generalizations are possible (see for example [3] for the massive case). Supersymmetry hides behind the model and is responsible for the spin degree of freedom as we shall show. A reader familiar with string theory will recognize that the spinning particle is reminiscent of a superstring in the RNS formulation [4][5], when the 1-dimensional string collapses to a point-like object. To write down an action functional for the spinning particle we need to consider maps from the supermanifold \(\mathbb{R}^{1|1}\) (_source space_) to a _target space_\((M,\eta)\), which we take to be a 4-dimensional metric manifold, Lorentzian for concreteness, with the Minkowski metric \(\eta=\text{diag}(+---)\). The superscript of \(\mathbb{R}^{1|1}\) counts and distinguishes the number of even and odd directions, where the latter are parametrized by Grassmann coordinates. So the function sheaf is locally isomorphic to \(C^{\infty}(U)\otimes\Lambda^{\bullet}V\) (here \(V\) is a 1-dimensional vector space and \(U\) is an open subset of \(\mathbb{R}\)) or equivalently \(C^{\infty}(\mathbb{R})\oplus C^{\infty}(\mathbb{R},V)\). The \(\mu\)-th component of an element in \(\text{Maps}(\mathbb{R}^{1|1},M)\) is thus:
\[X^{\mu}(\tau,\xi)=x^{\mu}(\tau)+\mathrm{i}\xi\psi^{\mu}(\tau)\]
and plays the role of a coordinate for \(M\), but it will not be the only datum entering the model. In fact, it is convenient to enforce _reparametrization invariance_ on the model. In the bosonic setting, reparametrization invariance is assured by an einbein \(e(\tau)\). Hence we need a 1-form that generalizes \(e(\tau)\) to \(\mathbb{R}^{1|1}\):
\[E(\tau,\xi)=e(\tau)+2\mathrm{i}\xi\chi(\tau)\in\Omega^{1}(\mathbb{R})\oplus \Omega^{1}(\mathbb{R},V).\]
Besides the natural vector fields \(\partial_{\tau}\) and \(\partial_{\xi}\), another useful derivation that we wish to consider here is the superderivative \(D\):
\[D=\mathrm{i}\xi\partial_{\tau}+\partial_{\xi}\,\]
whose anticommutator closes on \(\partial_{\tau}\). After all this preparation, we can assemble these ingredients into the action for the relativistic massless spinning particle, manifestly invariant under reparametrizations of the (super)line:
\[S=-\int_{\mathbb{R}^{1|1}}\mathrm{d}\tau\mathrm{d}\xi\ \frac{\mathrm{i}}{2E}DX^{\mu}\partial_{\tau}X_{\mu} \tag{1}\]
This action functional bears evident similarities with its "bosonic" counterpart. Invariance under reparametrizations is simply the request that the Lie derivative of the Lagrangian w.r.t. a supervector field \(Y(\tau,\xi)D\) is zero modulo boundary terms. On the fields, this corresponds to \(\delta_{Y}E=\mathcal{L}_{Y}E=Y(DE(\tau,\xi))+(DY(\tau,\xi))E,\ \delta_{Y}X=Y(DX(\tau,\xi))\).
Expanding to linear order in \(\xi\) and subsequently performing the Berezinian integral, one obtains:
\[S=\int_{\mathbb{R}^{1}}\mathrm{d}\tau\ \frac{1}{2e}\left(\dot{x}^{\mu}\dot{x}_{ \mu}+\mathrm{i}\psi^{\mu}\dot{\psi}_{\mu}\right)-\frac{1}{e^{2}}\mathrm{i} \chi\psi^{\mu}\dot{x}_{\mu} \tag{2}\]
This system is constrained. Through a Legendre transformation, the constraints can be conveniently emphasized. Thus with the Lagrangian \(L\) in the _first order formulation_ (\(L=p\cdot\dot{x}-H(p,x,\tau)\)):
\[L=p\cdot\dot{x}+\frac{\mathrm{i}}{2}\psi\cdot\dot{\psi}-\frac{e}{2}p^{2}- \mathrm{i}\chi\psi\cdot p,\qquad\dot{x}^{\mu}=\left(ep_{\nu}+\frac{\mathrm{i} }{e}\chi\psi_{\nu}\right)\eta^{\mu\nu}\,, \tag{3}\]
the constraints can be immediately read off: The Lagrange multiplier \(\chi\) implements transversality of the bosonic momentum and the fermionic coordinate, while the einbein \(e\) tells us that the Casimir element for translations must be zero, which is physically a zero mass condition. The infinitesimal supersymmetry transformations with local, parity odd parameter \(\alpha(\tau)\):
\[\delta_{\alpha}x^{\mu}=\mathrm{i}\,\alpha\psi^{\mu}\,,\quad\delta_{\alpha} \psi^{\mu}=-\alpha\eta^{\mu\nu}p_{\nu}\,,\quad\delta_{\alpha}e=2\mathrm{i}\, \alpha\chi\,,\quad\delta_{\alpha}\chi=\dot{\alpha}\,,\quad\delta_{\alpha}p_{ \mu}=0\,,\]
leave the action functional invariant modulo a boundary term, \(\delta_{\alpha}S=\int\mathrm{d}\tau\frac{\mathrm{i}}{2}\frac{\mathrm{d}}{ \mathrm{d}\tau}(\alpha\psi^{\mu}p_{\mu})\). From the Lagrangian in (3), the graded Poisson brackets are also immediate:
\[\{x^{\mu},p_{\nu}\}=\delta_{\nu}^{\mu},\quad\{\psi^{\mu},\psi^{\nu}\}_{+}=2 \eta^{\mu\nu}.\]
Given these brackets, evidently the constraints respect a superalgebra (associated to the superdiffeomorphism algebra)
\[H:=p^{2},\quad q:=\psi\cdot p,\qquad\{q,q\}=2H\,. \tag{4}\]
Canonical quantization of the system is straightforward at this stage. Turning the Poisson brackets into commutators of operators (\(\hbar=1\)), \(p\) then acts by derivative action \(p=-\mathrm{i}\frac{\partial}{\partial x}\), \(x\) by
multiplicative action and \(\psi\) may have an action on two different modules at our disposal: the Spin module and the ring of forms. Let us explain how. The discussion that follows can also be found in [2], one of the pioneering works on the subject.
#### 1.1.1 Spinors
Due to the Clifford algebra satisfied by the \(\psi\)'s, a natural module for the algebra is the module of spinors \(S\). A 4-dimensional complex Dirac spinor \(u(x)=(u_{\alpha}(x),u^{\dot{\alpha}}(x))^{T}\) is an irreducible representation of the Clifford algebra (though it is not irreducible for the Spin group, for which 2-dimensional Weyl spinors are). Then acting with the "transversality" constraint \(q\) yields:
\[qu(x)=-\mathrm{i}\begin{pmatrix}\Gamma^{\mu}_{\ \alpha\dot{\beta}}\,\partial_{\mu}u^{\dot{\beta}}(x)\\ \Gamma^{\mu\,\dot{\alpha}\beta}\,\partial_{\mu}u_{\beta}(x)\end{pmatrix}=-\mathrm{i}\,\not{\partial}u=0.\]
This is the Dirac equation and \(q\) was the Dirac operator all along. Furthermore the condition \(p^{2}=-\square=0\) is automatically satisfied when the spinor already belongs to the kernel of the Dirac operator.
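As a quick consistency check of the algebra underlying \(q\), the following numpy sketch verifies the Clifford relation for an explicit set of Gamma matrices; the Weyl-basis conventions used below are an assumption made purely for illustration.

```python
import numpy as np

# Pauli matrices; sigma^mu = (1, sigma^i), sigma-bar^mu = (1, -sigma^i)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [np.eye(2), s1, s2, s3]
sigma_bar = [np.eye(2), -s1, -s2, -s3]
eta = np.diag([1.0, -1.0, -1.0, -1.0])        # eta = diag(+ - - -)

def Gamma(mu):
    """4x4 Dirac matrix in the Weyl basis: off-diagonal sigma / sigma-bar blocks."""
    G = np.zeros((4, 4), dtype=complex)
    G[:2, 2:] = sigma[mu]
    G[2:, :2] = sigma_bar[mu]
    return G

# Verify {Gamma^mu, Gamma^nu} = 2 eta^{mu nu} * Id, the algebra represented by the psi's
for mu in range(4):
    for nu in range(4):
        anti = Gamma(mu) @ Gamma(nu) + Gamma(nu) @ Gamma(mu)
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
print("Clifford relations verified")
```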
#### 1.1.2 Forms
In the following part, our metric manifold \((M,\eta)\) will be locally identified with \(\mathbb{R}^{1,3}\). The Clifford algebra \(\mathrm{Cliff}(\mathbb{R}^{1,3})\), with symmetric pairing of signature \((1,3)\), as a vector space is isomorphic to:
\[\mathrm{Cliff}(\mathbb{R}^{1,3})\cong\Lambda^{0}\mathbb{R}^{1,3}\oplus\Lambda ^{1}\mathbb{R}^{1,3}\oplus\Lambda^{2}\mathbb{R}^{1,3}\oplus\Lambda^{3}\mathbb{ R}^{1,3}\oplus\Lambda^{4}\mathbb{R}^{1,3}.\]
Given a basis \(e_{i}\) for \(\mathbb{R}^{1,3}\) and its dual \(\mathcal{E}^{i}\), the Clifford algebra can thus be represented on \(\Lambda^{\bullet}\mathbb{R}^{1,3}\ni\varpi\) by the wedge product operation and the contraction:
\[e_{i}\wedge\varpi+\iota_{\mathcal{E}^{i}}\varpi. \tag{5}\]
Analogously we may take the vector space of sections \(\Gamma(\Lambda^{\bullet}T^{*}\mathbb{R}^{1,3})\cong\Gamma(\Lambda^{\bullet}T^{ *}M)\) as our representation space. Then on a form \(\omega^{k}\) of form degree \(0\leq k\leq 4\), the action (5) extends as the operation of wedging by \(\mathrm{d}x^{\mu}\) plus contracting with \(\partial_{\mu}\). Immediately, \(q=\psi^{\mu}p_{\mu}\) is recognized to act on \(\omega^{k}\) as:
\[q\omega^{k}=-\mathrm{i}(\mathrm{d}x^{\mu}\wedge+\eta^{\mu\nu}\iota_{\partial_ {\nu}})\partial_{\mu}\,\omega(x)_{\alpha_{1}\dots\alpha_{k}}\mathrm{d}x^{ \alpha_{1}}\wedge\dots\wedge\mathrm{d}x^{\alpha_{k}}=-\mathrm{i}(\mathrm{d}+ \mathrm{d}^{\dagger})\omega^{k}\,, \tag{6}\]
With \(\mathrm{d}^{\dagger}=*\mathrm{d}*\) being the codifferential, and \(*\) the Hodge star operator. Hence \(q\) is exactly the "square root" of the Laplace-Beltrami operator \(p^{2}=-\square=-(\mathrm{d}\mathrm{d}^{\dagger}+\mathrm{d}^{\dagger}\mathrm{d})\) as it should be. Thus at each form degree, the conditions are:
\[\begin{cases}\mathrm{d}^{\dagger}\omega^{1}=0=\mathrm{d}^{\dagger}\omega^{0}\\ \mathrm{d}\omega^{k}+\mathrm{d}^{\dagger}\omega^{k+2}=0\quad\text{for }k=0,1,2\\ \mathrm{d}\omega^{3}=0=\mathrm{d}\omega^{4}\end{cases} \tag{7}\]
while \(\square\omega^{k}=0\) follows from the equations above. Recall that our metric is pseudo-Riemannian. Contrary to the Riemannian case, there is no Hodge decomposition of a form into harmonic, exact and co-exact parts in such a setting. What holds instead is a decomposition for a form \(\omega^{k}\) with
_spacelike compact support_1, in case it is _closed and co-closed_. Then \(\omega^{k}=\mathrm{d}\omega^{k-1}+\mathrm{d}^{\dagger}\omega^{k+1}\), with \(\omega^{k-1},\omega^{k+1}\) spacelike compact forms satisfying \(\mathrm{d}\omega^{k+1}=0\) and \(\mathrm{d}^{\dagger}\omega^{k-1}=0\). The interested reader can consult [16]. However this is not an option here, because our differential equations (7) are mixing the form degrees. An alternative representation on holomorphic forms, though in an Euclidean target space, is discussed in the Appendix.
Footnote 1: A set \(X\) is spacelike compact if it is contained in the causal influence set of a compact set Y.
_Notice that even if in target space we set a flat Minkowski metric, all flat metrics (associated to a flat connection) are admissible. Indeed, the metric enters only in the definition of the codifferential. We will come back to this remark towards the end of section 2.1._
## 2 Backgrounds
The spinning particle model of the previous section describes a fermion or a set of forms after first quantization, and does not give any relevant information on the _flat_ backgrounds, which can be chosen to one's liking. On the contrary, a bosonic string fixes its target space to be the bosonic NS-NS sector of Supergravity, as a consequence of lack of conformal anomaly [6]. Furthermore, fully-fledged Supergravity is implied by consistency conditions for the quantization of a superstring [7], in Berkovits' pure spinor formulation [8]. With similar methods applied to the RNS superstring [1], the bosonic NS-NS supergravity eom's were shown to follow. A common feature of these two works was to deploy Becchi-Rouet-Stora-Tyutin quantization as their chosen quantization prescription.
BRST is essentially cohomological, so a differential operator is present and can be easily twisted with suitable wanna-be background fields. Then nilpotency, even just on a restricted set of functions (for the classical case) or Hilbert space states (for the quantum case), must be checked. It yields conditions on the background fields, thus giving indications on which of these fields can couple to the particle/string and what equations of motion they must satisfy. Motivated by the results of [13] on NS-NS supergravity as an outcome of BRST of the \(N=4\) spinning particle, _our purpose is to investigate Ramond-Ramond fields in the background of a spinning particle_, which must be suitably rearranged for this endeavor. **Ramond-Ramond fields** (or fluxes) are basically a set of \(n\)-forms (with \(n\) just even or just odd according to which string of type II is considered), related by Hodge duality and subjected to d-closure. These two properties consequently imply that the forms must have zero divergence. Given the state of affairs explained in section 1.1.2, these conditions will not be hard to get from a spinning particle, given its deep ties with the D'Alembertian and the Dirac operator.
Let us first briefly review what BRST entails and then apply it to the spinning particle.
### BRST and twists by backgrounds
**Becchi-Rouet-Stora-Tyutin quantization**[9] was developed in order to perform the path integral quantization of gauge theories, which are notoriously difficult to tackle with older methods because of the gauge fields (boundary states). To study functions on the singular symplectic quotient by the gauge symmetries, the idea is to introduce fictitious Grassmann coordinates so to form a _resolution_ of the quotient. These coordinates correspond to the symmetry generators in \(\mathfrak{g}\) but with
shifted parity (the _antighosts_) as well as those of the dual \(\mathfrak{g}^{*}\), again upon a shift in parity (the _ghosts_). In this paper, the generators of \(\mathfrak{g}\) are related to (4). The following step is to "embed" the resolution into a doubly graded chain complex: \(\mathcal{C}=\sum_{p,q}\mathcal{C}^{p,q}=\sum_{p,q}\mathcal{C}^{\infty}(M) \otimes\Lambda^{p}\mathfrak{g}^{*}[1]\otimes\Lambda^{q}\mathfrak{g}[1]\) (the number \(1\) in square brackets refers precisely to the change in degree and thus parity). The differential operator acts by sending elements of fixed \(p-q\) to \(p-q+1\), hence it can increase the ghost number \(p\) as well as decrease the antighost number \(q\). The whole point of the construction is to achieve an isomorphism in cohomology: \(H^{\bullet}(\mathcal{C})\cong H^{\bullet}(\mathfrak{g})\otimes C^{\infty}(M/ G)\). At ghost degree \(0\), one recovers the "physical" states while the gauge symmetries are placed at ghost degree \(-1\).
In the present situation, to \(H\) and \(q\) given in (4), one assigns respectively \(b,\beta\in\mathfrak{g}[1]\) and the only non-trivial bracket happens to be \([\beta,\beta]=b\). For the coalgebra then one has \(\gamma,c\) such that \([\gamma,\beta]=1=\{b,c\}\). Our chosen _polarization_ is that \(\beta\) and \(b\) act with derivative action on a constant state, so that \(\gamma\) and \(c\) act like creation operators and generate the (ghost part of the) representation. Basically the representation of forms \(\omega^{n}\) discussed in section 1.1.2 is extended with \(\mathbb{C}\llbracket\gamma,c\rrbracket\):
\[\sum_{k}\sum_{n=0}^{2}\gamma^{k}\omega_{k}^{n}+c\gamma^{k}\omega_{k}^{n}\,. \tag{8}\]
The differential operator is a ghost degree \(1\) object, built with the constraints and a piece depending on the algebra to ensure nilpotency:
\[Q=-c\square+\gamma(\mathrm{d}+\mathrm{d}^{\dagger})+\gamma^{2}\partial_{c}. \tag{9}\]
In \(\ker Q\) one gets
\[(\mathrm{d}+\mathrm{d}^{\dagger})\omega_{k}^{*}+\omega_{k-1}^{ \bullet}=0,\] \[-\square\omega_{k}^{\bullet}+(\mathrm{d}+\mathrm{d}^{\dagger}) \omega_{k-1}^{\bullet}=0\]
The cohomology shows that for \(k\geq 1\) one of the two complexes of forms can be eliminated in favor of the other. So this is "off-shell". It should be noticed, however, that at \(k=0\), which corresponds to ghost number \(0\), one finds \(\mathrm{d}\omega^{n-1}+\mathrm{d}^{\dagger}\omega^{n+1}=0\) as obtained with the quantization method in the previous section.
We already claimed, at the end of the previous section, that the backgrounds can support only flat connections. BRST, because of its cohomological nature, is a great playground to test this claim. A straightforward way to analyze backgrounds is by a twist of the de Rham differential in (9). It should also be noticed that only the forms which appear in the Hilbert space can be used for the twist. Indeed the reason is that the BRST operator itself induces a homological vector field on the space of fields which can depend on the latter ones. A \(1\)-form is certainly present in our space of states (8), so we can do the following:
\[\mathrm{d}\leadsto\mathrm{d}+A\wedge=:\mathrm{d}_{A},\]
and its adjoint \(\mathrm{d}_{A}^{\dagger}\) is obtained by decorating it with the Hodge star on the left and the right. Recall that if \(\varrho\) is a \(p\)-form, in \(4\) dimensions \(**\varrho=-(-1)^{p(4-p)}\varrho\). In turn the differential must be, in first
approximation:
\[Q_{A}= cH_{A}+\gamma\left(\mathrm{d}_{A}+\mathrm{d}_{A}^{\dagger}\right)+ \gamma^{2}\partial_{c}\] \[H_{A}:= -\square+*(\mathrm{d}^{\dagger}A)\wedge*+(\mathrm{d}^{\dagger}A) \wedge+A\wedge*A\wedge**A\wedge*A\wedge\] \[+([\mathrm{d}_{A},\mathrm{d}_{A}])\wedge**([\mathrm{d}_{A}, \mathrm{d}_{A}])\wedge**.\]
Here \(H_{A}\) is fixed by \(\{\mathrm{d}_{A}+\mathrm{d}_{A}^{\dagger},\mathrm{d}_{A}+\mathrm{d}_{A}^{ \dagger}\}=H_{A}\). To ensure nilpotency, the quantum commutator of operators between \(H_{A}\) and \(\mathrm{d}_{A}+\mathrm{d}_{A}^{\dagger}\) must vanish. Things will sensibly change depending on whether we consider an abelian or non-abelian theory. For the former, we obtain \(\mathrm{d}^{\dagger}A=0\) as well as \(\mathrm{d}A=0\) (remarkably, \([\mathrm{d}_{A},H_{A}]=0\) does not require further conditions so the \(A^{2}\) term in \(H_{A}\) does not have to be zero). Hence \(A\) is de Rham closed and co-closed.
Instead if we wanted to consider a non-abelian 1-form, its field strength \([\mathrm{d}_{A},\mathrm{d}_{A}]_{*}=2\mathrm{d}A+[A,A]\) should again be set to zero, while \(\mathrm{d}_{A}^{\dagger}A=0\) is just \(\mathrm{d}^{\dagger}A=0\) because \(A^{\mu\,a}A_{\mu}^{b}f_{ab}{}^{c}=0\), for \(f_{ab}{}^{c}\) the structure constants. In both cases, the conditions would be compatible with thinking of \(A\) as a Ramond-Ramond 1-form field strength! However a complete match with the Ramond-Ramond fluxes requires also all the other admissible forms in higher degree. Unfortunately, in this setting it is not clear how to discuss deformations by a 2-form. That is why, inspired by [11] and [10], we resort to a twistorial description.
### Spin field and twistors
Given that we are considering a supersymmetric worldline with a 4-dimensional target space, the Spin group has two irreps distinguishable by their chirality2. Hence one may consider introducing two pairs of _conjugated_ twistor-like variables, \((\theta^{\alpha},\lambda_{\beta})\) and \((\bar{\theta}_{\dot{\alpha}},\tilde{\lambda}^{\dot{\beta}})\), and could explore the consequences of assigning even parity to one pair, odd parity to the other: \([\theta^{\alpha},\lambda_{\beta}]=\delta^{\alpha}_{\beta}\) and \(\{\bar{\theta}_{\dot{\alpha}},\tilde{\lambda}^{\dot{\beta}}\}=\delta^{\dot{\beta}}_{\dot{\alpha}}\). An educated guess for the realization of the Gamma matrices could then be:
Footnote 2: The chirality is the eigenvalue for the chirality operator:
\[\frac{\mathrm{id}+\psi_{4}}{2},\]
where \(\psi_{4}:=\psi^{0}\psi^{1}\psi^{2}\psi^{3}\). Since it is also a projector, its eigenvalues are only \(\pm 1\).
\[\psi^{\mu}:=\theta^{\alpha}\sigma^{\mu}_{\alpha\,\dot{\alpha}}\tilde{\lambda} ^{\dot{\alpha}}+\bar{\theta}_{\dot{\alpha}}\tilde{\sigma}^{\mu\,\dot{\alpha} \alpha}\lambda_{\alpha}. \tag{10}\]
Now, if \(\theta,\bar{\theta}\) create states out of a ground state, in the Fock space \(F\) we would find bitwistors. These are nothing but forms in \(\Omega^{\bullet}(M)\) written in twistorial notation. The action of \(\psi^{\mu}\) extends from that on a single particle Hilbert space by Leibniz rule. However after some algebraic manipulations one would immediately recognize that on the Fock space,
\[\{\psi^{\mu},\psi^{\nu}\}\]
equals \(2\eta^{\mu\nu}\) only on the spinors, while on the bispinors/bitwistors equals \((2+2)\eta^{\mu\nu}\) because of Leibniz rule applied to the tensor representation. Abandoning reality and resorting to complex pairs \(\sigma^{a}=\frac{1}{\sqrt{2}}(\sigma^{0}+\mathrm{i}\sigma^{1}),\;\sigma^{b}:=\frac{\mathrm{i}}{\sqrt{2}}(\sigma^{2}+\sigma^{3})\) and \(\tilde{\sigma}^{a}=\frac{1}{\sqrt{2}}(\sigma^{0}-\mathrm{i}\sigma^{1}),\;\tilde{\sigma}^{b}=\frac{\mathrm{i}}{\sqrt{2}}(\sigma^{2}-\sigma^{3})\) does not help either.
Following our work [12], we will now enclose all the odd Grassmann parity into a \(\mathbb{Z}_{2}\) degree shifting operator \(\uparrow\), so that now \(\tilde{\theta},\tilde{\lambda}\) are turned into Grassmann even spinors:
\[[\lambda_{\alpha},\theta^{\beta}]=\delta^{\beta}_{\alpha},\quad[\tilde{\lambda} ^{\dot{\alpha}},\tilde{\theta}_{\dot{\beta}}]=\delta^{\dot{\alpha}}_{\dot{ \beta}}\,,\]
The degree shifting operator, in combination with the twistor-like objects, can be thought of as a _spin field_ for a spinning particle on the worldline. In string theory, a spin field is usually inserted at the endpoint of a branch cut to turn a Neveu-Schwarz state into a Ramond one. Since there are no branch cuts for functions on a line, as opposed to what can happen with complex maps from a conformal plane, this is the best that we can actually do to mimic this on the line.
Furthermore, we choose \(\lambda,\tilde{\lambda}\) to commute with \(\uparrow\), while their conjugated variables are non-commuting with \(\uparrow\). With appropriate insertions of \(\uparrow\), our representation space \(F^{\prime}\), limited to spinors and bispinors, contains specific polynomials in \(\theta,\tilde{\theta}\):
\[F^{\prime}=\left\langle \nu_{\alpha}\theta^{\alpha},\quad\left(B_{\alpha\beta}+\varphi\epsilon_{\alpha\beta}\right)\left|\alpha\beta\right\rangle,\quad A_{\alpha}{}^{\dot{\alpha}}\left|\alpha\dot{\alpha}\right\rangle\right\rangle \tag{11}\] \[\left|\alpha\beta\right\rangle:=\frac{1}{2}\left(\theta^{\alpha}(\uparrow\theta^{\beta}-\theta^{\beta}\uparrow)+\theta^{\alpha}(\theta^{\beta}-\uparrow\theta^{\beta}\uparrow)\right)\,,\] \[\left|\alpha\dot{\alpha}\right\rangle:=\frac{1}{2}\left(\theta^{\alpha}(\uparrow\tilde{\theta}^{\dot{\alpha}}-\tilde{\theta}^{\dot{\alpha}}\uparrow)+\theta^{\alpha}(\tilde{\theta}^{\dot{\alpha}}-\uparrow\tilde{\theta}^{\dot{\alpha}}\uparrow)\right)\,.\]
With \(\tilde{F}^{\prime}\) we will denote the chiral counterpart to \(F^{\prime}\), where all the \(\theta\)'s are traded for \(\tilde{\theta}\) and vice versa. With these choices, the anticommutator of the Gamma matrices (10), now realized as \(\psi^{\mu}\uparrow\), yields:
\[\left\{\psi^{\mu}\uparrow,\psi^{\nu}\uparrow\right\}=2\eta^{\mu\nu}\left( \theta^{\alpha}\lambda_{\alpha}+\tilde{\theta}_{\dot{\alpha}}\tilde{\lambda} ^{\dot{\alpha}}\right)+f_{1}(\tilde{\theta})\lambda\lambda+f_{2}(\theta) \tilde{\lambda}\tilde{\lambda}+f_{3}(\theta,\tilde{\theta})\tilde{\lambda} \lambda\,, \tag{12}\]
but one can be easily convinced that the last three terms in (12) drop after evaluating the above expression on any state of \(F^{\prime}\) and \(\tilde{F}^{\prime}\). Then \((\theta^{\alpha}\lambda_{\alpha}+\tilde{\theta}_{\dot{\alpha}}\tilde{\lambda }^{\dot{\alpha}})\) just fixes the representation space to be invariant under rotations in the space of spinors with the same chirality. In the end, one has managed to retrieve:
\[\left\{\psi^{\mu}\uparrow,\psi^{\nu}\uparrow\right\}\big|_{F^{\prime},\tilde{F}^{\prime}}=2\eta^{\mu\nu}\,.\]
As customary in BRST, we should now extend these Weyl and anti-Weyl spinors and bispinors to a representation of the ghost algebra; however, for our current investigations we can just focus on the ghost degree zero states. These are in \(\ker Q\) if
\[QF^{\prime}=\gamma\psi^{\mu}(-\mathrm{i}\partial_{\mu})F^{\prime}=0 \tag{13}\] \[\begin{cases}\not{\partial}\nu=0\\ \mathrm{d}A=0=\mathrm{d}^{\dagger}A,\\ \mathrm{i}\mathrm{d}^{\dagger}B+\mathrm{d}\varphi-\frac{1}{2}*\mathrm{d}B=0.\end{cases} \tag{14}\]
Hence we recover once more the Weyl equation for our Weyl spinor \(\nu\). Regarding the remaining equations in (14), they follow from the _Fierz identities_ with the Pauli matrices3, and naturally separate according to the form degree, but also into imaginary and real part. Hence at ghost degree zero we find odd degree forms which are closed and co-closed, \(\mathrm{d}A=0=\mathrm{d}^{\dagger}A\). Then if one assumes
that \(B\) has real values, we can require it to be divergence-free. To solve for the real part of the last equation in (14) we can finally use a duality condition. Indeed for our present discussion it is rather crucial to halve the number of degrees of freedom, by making \(p\)-forms Hodge dual to \((4-p)\)-forms (or self-dual in the case of the 2-form). Duality and either closure or co-closure imply the remaining differential equation: for instance, taking for concreteness \(A^{(1)}=*A^{(3)}\) and \(\mathrm{d}A^{(1)}=0=\mathrm{d}A^{(3)}\), then it is guaranteed that they have zero divergence. Coming back to the real part of the bottom equation in (14), if we impose self-duality in spacetime, \(B=*B\), then \(\mathrm{d}\varphi=0\) separately. Hence \(\varphi\), \(A\equiv A^{(1)}\) and \(B\) can be interpreted as Ramond-Ramond field strengths. Certainly a field strength of form degree 0 (like \(\varphi\)) is not meaningful in the realm of de Rham cohomology (it would require K-theory, but this is beyond the scope of this note). As done in string theory and supergravity, we will avoid any complication caused by \(\varphi\) by taking it to be constant.
At higher ghost numbers the theory is still off-shell as before. Notably the equations (14) do not require the Hamiltonian constraint (or zero mass constraint), so we can actually drop it altogether, and consider only the chiral supercharge
\[\mathbf{q}=\tilde{\theta}_{\dot{\alpha}}\tilde{\sigma}^{\mu\ \dot{\alpha}\beta}\lambda_{\beta}\,p_{\mu}\uparrow. \tag{15}\]
Basically \(H^{0}_{\mathcal{Q}}(M,\mathbb{R})=H^{0}_{\mathbf{q}}(M,\mathbb{R})\). Twistings of this operator are easy to handle.
### R-R backgrounds
We are now all set and can embark on the study of deformations by background fields. The focus will be on the chiral supercharge \(\mathbf{q}=\tilde{\theta}\not{p}\lambda\uparrow\). Such an operator has an action on \(F^{\prime}\) (11) while \(\tilde{F}^{\prime}\) is annihilated by \(\mathbf{q}\). The deformations that we are keen on studying are:
\[\delta\mathbf{q}_{\tilde{B}}:=\tilde{\theta}_{\dot{\alpha}}\tilde{B}^{\dot{\alpha}\dot{\beta}}\tilde{\lambda}_{\dot{\beta}}\uparrow,\qquad\delta\mathbf{q}_{A}:=\tilde{\theta}_{\dot{\alpha}}\tilde{A}^{\dot{\alpha}}{}_{\beta}\lambda^{\beta}\uparrow,\]
with \(\tilde{B}\) and \(\tilde{A}\) being a priori just a 2-form and a 1-form written in twistorial notation. The possibility of studying "covariant Dirac operators" with forms in different form degree is a remarkable feature of the twistorial description.
Now,
\[\{\mathbf{q}+\delta\mathbf{q}_{A},\mathbf{q}+\delta\mathbf{q}_{A}\}=2\{ \mathbf{q},\delta\mathbf{q}_{A}\}+\{\delta\mathbf{q}_{A},\delta\mathbf{q}_{A}\}\]
is going to be automatically nilpotent on every state in \(F^{\prime}\) and its chiral counterpart. This is a simple consequence of having two annihilators (\(\lambda_{\alpha}\lambda_{\beta}=\partial_{\theta^{\alpha}}\partial_{\theta^{\beta}}\)) on the right, and it is a radical departure from the behavior observed so far. Should a larger space of states not be annihilated by them, the condition for nilpotency is just \(\mathrm{d}A=0\).
Concerning \(\delta\mathbf{q}_{\tilde{B}}\) things get quite intriguing. While the linear deformation \(\{\delta\mathbf{q}_{\tilde{B}},\mathbf{q}\}\) is zero on the locus of the equations of motion for the fields in \(F^{\prime}\) and \(\tilde{F}^{\prime}\), nilpotency cannot be achieved because:
\[\{\delta\mathbf{q}_{\tilde{B}},\delta\mathbf{q}_{\tilde{B}}\}\propto\tilde{ \theta}\left(*\tilde{B}\wedge*\tilde{B}+\tilde{B}\wedge\tilde{B}+\tilde{B} \circ g^{-1}\tilde{B}\right)\tilde{\lambda} \tag{16}\]
as seen by thoroughly using (23). Since it depends linearly on \(\tilde{\lambda}\), (16) still has a non-zero action on \(\tilde{F}^{\prime}\). We cannot ignore \(\tilde{F}^{\prime}\) and simply project our theory onto the chiral half \(F^{\prime}\), because in that case it would not contain \(\tilde{B}\). Hence we would be forced to set \(\tilde{B}=0\). Nevertheless it is interesting to
treat this as a small deformation. Expanding the field \(A\) of the Hilbert space in a small parameter \(A=A_{(0)}+sA_{(1)}+\dots\), in the kernel of \(\mathbf{q}+\delta\mathbf{q}_{\tilde{B}}\) at first order in \(s\) we find:
\[\mathrm{d}^{\dagger}A_{(1)} =-*\left(\tilde{B}\wedge*\tilde{B}_{(0)}\right) \tag{17}\] \[\mathrm{d}A_{(1)} =\tilde{B}\circ g^{-1}\circ\tilde{B}_{(0)} \tag{18}\]
We deem these equations to stem from a BF-type [14] or Chern-Simons-type theory.
## 3 Conclusions and outlook
In this short note, after reviewing the \(N=1\) spinning particle and especially its quantization into spinors and forms, we studied Ramond-Ramond fields in the background. The construction relies on twistor-like objects and on BRST quantization. In the BRST operator, the supercharge (associated to supersymmetry invariance) can be twisted by 1- and 2-forms in the twistorial notation. Nilpotency of the newly defined BRST operator does not have to hold tout-court, but can be verified on the states in the Fock space of the quantized spinning particle. Thanks to a spin field for the worldline, we could show that a _dynamical_ 1-form is admitted: there are no obstructions for a covariant derivative constructed with it. The equations of motion for the 1-form are an independent piece of information which does not follow from nilpotency. Instead a deformation by the 2-form did not lead to a nilpotent operator on the Fock space. Nevertheless, when this finite deformation is turned into an infinitesimal one we obtain some intriguing differential equations.
In light of the analysis presented here, which partly summarizes [12], the \(N=1\) spinning particle in the twistorial description does yield some sensible results about Ramond-Ramond fluxes in the background. However we believe that spinning particles with enhanced supersymmetry should improve these outcomes and lead to the full set of R-R fields and their equations. Considering models with explicit supersymmetry in the target space, e.g. the _superembedding model_ [15], should noticeably help to get the fermionic fields too.
## Appendix A Fierz identities
For \(\eta^{\mu\nu}=\mathrm{diag}(1,-1,-1,-1)\),
\[(\sigma^{\mu\nu})_{\alpha}{}^{\beta} =\frac{\mathrm{i}}{4}\left(\sigma^{\mu}_{\alpha\dot{\gamma}} \bar{\sigma}^{\nu\,\dot{\gamma}\beta}-\sigma^{\nu}_{\alpha\dot{\gamma}}\bar{ \sigma}^{\mu\,\dot{\gamma}\beta}\right) \tag{19}\] \[\bar{\sigma}^{\rho\,\dot{\alpha}\alpha}\sigma^{\mu\nu}_{\alpha\beta} =\begin{cases}\frac{\mathrm{i}}{2}(\eta^{\rho\nu}\bar{\sigma}^{ \mu}-\eta^{\rho\mu}\bar{\sigma}^{\nu})^{\dot{\alpha}}{}_{\beta}-\frac{1}{2} \epsilon^{\rho\mu\nu\delta}\bar{\sigma}^{\alpha\dot{\gamma}}_{\delta}\epsilon _{\gamma\beta}&\text{if }\rho=j\\ \frac{\mathrm{i}}{2}(\eta^{\rho\nu}\bar{\sigma}^{\mu}+\eta^{\rho\mu}\bar{ \sigma}^{\nu})^{\dot{\alpha}}{}_{\beta}-\frac{1}{2}\epsilon^{\rho\mu\nu\delta} \bar{\sigma}^{\dot{\alpha}\dot{\gamma}}_{\delta}\epsilon_{\gamma\beta}&\text{ if }\rho=0\end{cases}\] (20) \[\delta^{\dot{\alpha}}_{\beta}(\sigma^{\mu})_{\alpha\dot{\alpha}}( \bar{\sigma}^{\nu})^{\beta\beta} =\eta^{\mu\nu}\delta^{\beta}_{\alpha}-2\mathrm{i}[\sigma^{\mu\nu} ]^{\beta}{}_{\alpha}\] (21) \[(\bar{\sigma}^{\mu\nu}\bar{\sigma}^{\rho\sigma})^{\dot{\alpha}\beta} =\frac{1}{4}\left(\eta^{\mu\sigma}\eta^{\rho\nu}-\eta^{\mu\rho} \eta^{\nu\sigma}\right)\epsilon^{\dot{\alpha}\dot{\beta}}+\frac{\mathrm{i}}{4} \epsilon^{\mu\nu\rho\sigma}\epsilon^{\dot{\alpha}\dot{\beta}}\] (22) \[+\frac{\mathrm{i}}{2}\left(\eta^{\sigma\nu}\bar{\sigma}^{\mu\rho} -\eta^{\nu\rho}\bar{\sigma}^{\mu\sigma}+\eta^{\mu\rho}\bar{\sigma}^{\nu\sigma }-\eta^{\mu\sigma}\bar{\sigma}^{\nu\rho}\right)^{\dot{\alpha}\beta}, \tag{23}\]
## Appendix B Alternative representation on holomorphic forms
To build such a representation, let us first swap the pseudo-Riemannian target space for a Riemannian one, with Euclidean metric. Many complications are avoided when the metric is positive definite. Then one can construct two complex conjugate pairs of \(\psi\)'s:
\[\Psi^{p}=\frac{1}{\sqrt{2}}\left(\psi^{2p}+\mathrm{i}\psi^{2p+1}\right),\quad \bar{\Psi}^{p}=\frac{1}{\sqrt{2}}\left(\psi^{2p}-\mathrm{i}\psi^{2p+1}\right), \quad p=0,1 \tag{24}\]
This linear redefinition is a homomorphism of the Clifford algebra, yielding the new brackets:
\[\{\Psi^{k},\bar{\Psi}^{j}\}=2\delta^{kj},\quad\{\Psi^{i},\Psi^{k}\}=0=\{\bar{ \Psi}^{j},\bar{\Psi}^{i}\}\;. \tag{25}\]
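As a quick cross-check of (25), one may realize the two modes by ordinary fermionic creation and annihilation operators, \(\Psi^{k}\cong\sqrt{2}\,a_{k}^{\dagger}\), \(\bar{\Psi}^{k}\cong\sqrt{2}\,a_{k}\), and verify the anticommutators numerically. The short sketch below uses a Jordan-Wigner construction of our own choosing (it is not part of the original analysis) purely as a sanity check:

```python
import numpy as np

# One-mode fermionic annihilation operator in the basis (|0>, |1>).
a = np.array([[0.0, 1.0], [0.0, 0.0]])
sz = np.diag([1.0, -1.0])
I2 = np.eye(2)

# Jordan-Wigner: the sigma_z string keeps the two modes mutually anticommuting.
a1 = np.kron(a, I2)
a2 = np.kron(sz, a)

Psi = [np.sqrt(2) * a1.T, np.sqrt(2) * a2.T]   # Psi^k ~ sqrt(2) a_k^dagger
Psibar = [np.sqrt(2) * a1, np.sqrt(2) * a2]    # bar-Psi^k ~ sqrt(2) a_k

anti = lambda A, B: A @ B + B @ A
for k in range(2):
    for j in range(2):
        assert np.allclose(anti(Psi[k], Psibar[j]), 2.0 * (k == j) * np.eye(4))
        assert np.allclose(anti(Psi[k], Psi[j]), 0.0)
        assert np.allclose(anti(Psibar[k], Psibar[j]), 0.0)
print("brackets (25) verified on a 4-dimensional Fock space")
```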
Considering (24) themselves as a representation, \(\Psi^{0},\Psi^{1}\) transform in the fundamental of \(\mathfrak{su}(2)\), while \(\bar{\Psi}^{0},\bar{\Psi}^{1}\) in the antifundamental. Then a module for \(\mathfrak{so}(4)\) inside \(\mathrm{Spin}(4)\), constructed from the lowest weight state (or ground state in physics), consists of \(\mathfrak{su}(2)\) totally antisymmetric tensors due to (25). Thus the states in the Hilbert space are:
\[\Phi(x)\ket{0},\quad C_{k}(x)\Psi^{k}\ket{0},\quad B_{01}(x)(\Psi^{0}\Psi^{1} -\Psi^{1}\Psi^{0})\ket{0}.\]
So \(\Phi\) is the ground state singlet, \(C_{k}(x)\Psi^{k}\) is the 2 (vectorial representation), and the last state is the singlet in \(2\otimes 2=3\oplus 1\). One should then impose the dynamics, determined by the constraints \(p^{2}=0\) and \(\psi^{\mu}p_{\mu}=0\). For this purpose, a convenient realization of the algebra (25) is given by the following assignment:
\[\Psi^{i}\cong\mathrm{d}z^{i}\wedge,\quad\bar{\Psi}^{i}\cong h^{\tilde{I}k} \iota_{\partial_{z^{k}}}\;. \tag{26}\]
Indeed the exterior product with a holomorphic differential and the contraction by a holomorphic vector field, together with the hermitian metric \(h^{\tilde{I}k}=(h^{-1})^{\tilde{I}k}=\delta^{\tilde{I}k}\), replicate (25). We are thus locally seeing the 4-dimensional manifold \(\mathbb{R}^{4}\) as a 2-dimensional almost complex manifold \(U\). Upon this identification, the momenta in the cotangent space must also be arranged in complex pairs:
\[p_{\mu}\rightarrow(p_{z^{0}},p_{z^{1}})=\left(\frac{1}{\sqrt{2}}(p_{0}- \mathrm{i}p_{1}),\frac{1}{\sqrt{2}}(p_{2}-\mathrm{i}p_{3})\right)\]
In turn, the Dirac operator \(\psi^{\mu}p_{\mu}\), expressed in terms of \(\Psi,\bar{\Psi}\) and the complex momenta, and letting \(p_{z}\rightarrow-\mathrm{i}\partial_{z}\), is now explicitly realized as:
\[\psi^{\mu}p_{\mu}\mapsto-\mathrm{i}\left(\mathrm{d}z^{k}\partial_{z^{k}}+h^{\tilde{I}k}\iota_{\partial_{z^{k}}}\partial_{\bar{z}^{\tilde{I}}}\right)\equiv-\mathrm{i}\left(\partial+\partial^{\dagger}\right), \tag{27}\]
with \(\partial\) and \(\partial^{\dagger}\) the Dolbeault differential and codifferential. The Laplacian, in turn, is now given by the holomorphic part of the complex Laplacian:
\[p^{2}\mapsto-\Delta_{\partial}\]
On the states, the Dirac operator constraint tells us:
\[\partial C(z+\bar{z})=0,\qquad\partial^{\dagger}C(z+\bar{z})=0,\qquad\partial ^{\dagger}B(z+\bar{z})+\partial\Phi(z+\bar{z})=0, \tag{28}\]
which the reader can immediately recognize to be already in the kernel of the holomorphic Laplacian. Hence the 1-form is \(\partial\)-closed and divergence-less, while the remaining equation relates the holomorphic divergence of the 2-form to the Dolbeault differential of the function.
## Acknowledgments
I would like to thank the organizers of the workshop "Noncommutative and generalized geometry in string theory, gauge theory and related physical models" for the opportunity to present my results and for a successful workshop. A huge thank you to the local organizers for the relaxed atmosphere and for putting up an incredible program rich in cultural and sport activities.
I have benefited from discussions with Mauro Mantegazza, Svatopluk Krysl and Ondra Hulik, to whom I am very grateful. I am thankful to Ivo Sachs for working together on the paper which prompted this short article. Financial support from GACR grant EXPRO 19-28268X and from a Riemann Fellowship is acknowledged.
|
2307.05925 | A Tractable Statistical Representation of IFTR Fading with Applications | The recently introduced independent fluctuating two-ray (IFTR) fading model,
consisting of two specular components fluctuating independently plus a diffuse
component, has proven to provide an excellent fit to different wireless
environments, including the millimeter-wave band. However, the original
formulations of the probability density function (PDF) and cumulative
distribution function (CDF) of this model are not applicable to all possible
values of its defining parameters, and are given in terms of multifold
generalized hypergeometric functions, which prevents their widespread use for
the derivation of performance metric expressions. In this paper we present a
new formulation of the IFTR model as a countable mixture of Gamma distributions
which greatly facilitates the performance evaluation for this model in terms of
the metrics already known for the much simpler and widely used Nakagami-m
fading. Additionally, a closed-form expression is presented for the generalized
moment generating function (GMGF), which permits to readily obtain all the
moments of the distribution of the model, as well as several relevant
performance metrics. Based on these new derivations, the IFTR model is
evaluated for the average channel capacity, the outage probability with and
without co-channel interference, and the bit error rate (BER), which are
verified by Monte Carlo simulations. | Maryam Olyaee, Hadi Hashemi, Juan M. Romero-Jerez | 2023-07-12T05:45:27Z | http://arxiv.org/abs/2307.05925v1 | # A Tractable Statistical Representation of IFTR
###### Abstract
The recently introduced independent fluctuating two-ray (IFTR) fading model, consisting of two specular components fluctuating independently plus a diffuse component, has proven to provide an excellent fit to different wireless environments, including the millimeter-wave band. However, the original formulations of the probability density function (PDF) and cumulative distribution function (CDF) of this model are not applicable to all possible values of its defining parameters, and are given in terms of multifold generalized hypergeometric functions, which prevents their widespread use for the derivation of performance metric expressions. In this paper we present a new formulation of the IFTR model as a countable mixture of Gamma distributions which greatly facilitates the performance evaluation for this model in terms of the metrics already known for the much simpler and widely used Nakagami-\(m\) fading. Additionally, a closed-form expression is presented for the generalized moment generating function (GMGF), which permits to readily obtain all the moments of the distribution of the model, as well as several relevant performance metrics. Based on these new derivations, the IFTR model is evaluated for the average channel capacity, the outage probability with and without co-channel interference, and the bit error rate (BER), which are verified by Monte Carlo simulations.
Multipath fading, Independent Fluctuating Two-Ray (IFTR), Gamma Distribution, Generalized Moment Generating Function, Outage Probability, Co-channel Interference
## I Introduction
Due to the spectrum shortage in future generation wireless networks, higher frequency bands are being considered in the standards in order to cover users' demands. Thus, the use of millimeter-wave and terahertz bands has emerged in the context of 5G/6G cellular networks [1]. In many wireless scenarios, channel multipath fading is an essential propagation effect to be considered due to its potential detrimental impact on performance. Therefore, accurate characterization of wireless channel fading at those higher frequencies has become a relevant research topic, and much effort is being made in this area [2, 3, 4].
Recently, the independent fluctuating two-ray (IFTR) channel model [5] has been presented to characterize multipath propagation; it includes several well-known distributions, namely Rayleigh, Rician, Hoyt (Nakagami-\(q\)), Rician Shadowed, and Nakagami-\(m\), as special or limiting cases. The IFTR model consists of two dominant (specular) waves plus a diffuse component, due to the aggregation of multiple low-power scattered waves, modeled as a complex Gaussian random variable (RV), where the specular components are assumed to fluctuate independently following Nakagami-\(m\) fading. This model is related to the fluctuating two-ray (FTR) fading model except that in the latter the two specular components are assumed to be fully correlated and fluctuate simultaneously. The FTR model was introduced in [6] and was later reformulated in [7, 8] and, more recently, in [9], and has been studied abundantly for different wireless environments, mostly in the context of millimeter-wave communications, and considering many different performance metrics (see for example [9] and the references therein). In spite of the apparent similarity in the formal definition of the FTR and IFTR fading models, there are major differences between them, both in terms of the fitting results to experimental measurements and in the involved mathematical derivations. On the one hand, the IFTR fading model has been shown to provide a (sometimes remarkably) better fit than FTR fading (as well as other generalized fading models such as \(\kappa\)-\(\mu\) shadowed [10] and two-wave with diffuse power -TWDP- [9]) to experimental data in very different environments, including line-of-sight (LOS) millimeter-wave, land-mobile satellites (LMS), and underwater acoustic communications (UAC) [5]. On the other hand, the independence of the two specular components in the IFTR model imposes new mathematical challenges, as a two-fold nested integration now always appears in its statistical characterization.
Although both the probability density function (PDF) and cumulative distribution function (CDF) of the IFTR model were presented in [5], their use is rather limited for two reasons: on the one hand, they are not completely general, as they require assuming one of the model parameters \(m_{1}\) or \(m_{2}\) to be an integer, while these parameters can take any arbitrary positive real value in realistic propagation scenarios; on the other hand, the known PDF and CDF are given in terms of a generalized hypergeometric function, which is actually a multifold infinite summation and is therefore very difficult to manipulate to obtain analytical expressions for most performance metrics in wireless communication systems.
In this paper, we solve the aforementioned issues by deriving a new statistical characterization of the IFTR fading model that holds for arbitrary positive values of \(m_{1},m_{2}\) and is easy to manipulate. Additionally, we expand the known results for
the precise characterization of the model and apply them for the performance analysis of wireless systems. Specifically, the key contributions of this paper are:
* A new formulation is presented for the PDF and CDF of the instantaneous SNR of IFTR fading in terms of an infinite countable mixture of Gamma distributions for arbitrary values of the channel parameters \(m_{1}\) and \(m_{2}\), where the weights of the elements of the mixture are given in closed-form. The resulting infinite series are demonstrated to be convergent and are precisely truncated and evaluated using the Kolmogorov-Smirnov goodness-of-fit test.
* The generalized moment generating function (GMGF) of the IFTR fading model is obtained for the first time, which for many relevant cases can be written in closed-form, allowing all the moments of the distribution to be obtained. In spite of the model generality and statistical complexity, this function permits obtaining closed-form expressions for different relevant performance metrics including, for example, secrecy capacity outage, outage probability under interference and energy detection probability.
* The new and expanded statistical characterization of IFTR fading is used for its performance evaluation in terms of the average capacity, outage probability with and without interference and average bit error rate (BER) for different modulations. The effects of the model parameter values are evaluated numerically and verified by simulation.
The rest of this paper is organized as follows: The channel model is presented in Section II. Then, in Section III, the new representation of the IFTR fading is presented, as well as, for the first time, to the authors' knowledge, an expression of the GMGF, which for many relevant cases can be written in closed-form. Several performance metrics, including the average capacity, the outage probability, and the BER in IFTR fading are analyzed in Section IV. Simulation and numerical results are given in Section V. Finally, the paper is concluded in Section VI.
## II Preliminary definitions and channel model
**Definition 1**: _A RV \(X\) following a Gamma distribution with shape parameter \(\lambda\) and scale parameter \(\nu\) will be denoted as \(X\sim\mathcal{G}(\lambda,\nu)\), and its PDF and CDF will be given, respectively, by_
\[f^{\mathcal{G}}(x;\lambda,\nu) =\frac{x^{\lambda-1}}{\Gamma(\lambda)\nu^{\lambda}}e^{-\frac{x}{\nu}}, \tag{1}\] \[F^{\mathcal{G}}(x;\lambda,\nu) =\frac{1}{\Gamma(\lambda)}\gamma\left(\lambda,\frac{x}{\nu}\right), \tag{2}\]
_where \(\gamma(\cdot,\cdot)\) is the incomplete Gamma function [11, eq. (8.350.1)]._
**Remark 1**: _The SNR \(\gamma_{\mathcal{K}}\) (or, equivalently, the received power) in a Nakagami-\(m\) fading with mean \(\bar{\gamma}_{\mathcal{K}}\) and fading severity parameter \(m\) follows a Gamma distribution with shape parameter \(m\) and scale parameter \(\bar{\gamma}_{\mathcal{K}}/m\), i.e., \(\gamma_{\mathcal{K}}\sim\mathcal{G}(m,\bar{\gamma}_{\mathcal{K}}/m)\)._
The IFTR fading model is composed of two specular waves, whose amplitude fluctuate according to independent Nakagami-\(m\) fading, plus an undetermined number of scattered low-amplitude waves (the diffuse component) which, by virtue of the central limit theorem, are jointly represented by a complex Gaussian RV. Let \(\zeta_{i}\sim\mathcal{G}(m_{i},1/m_{i})\), with \(i\in\{1,2\}\), then the complex base-band representation of the IFTR fading model can be expressed as
\[V_{r}=\sqrt{\zeta_{1}}V_{1}e^{j\phi_{1}}+\sqrt{\zeta_{2}}V_{2}e^{j\phi_{2}}+X +jY, \tag{3}\]
where \(V_{i}\) is the average amplitude of the \(i\)-th specular component, \(\phi_{i}\) is a uniformly distributed RV in \([0,2\pi)\) representing its phase, and \(X+jY\) models the diffuse component with \(X,Y\sim\mathcal{N}(0,\sigma^{2})\).
In addition to the fading severity parameters of the specular components, \(m_{1}\) and \(m_{2}\), the IFTR model will be determined by the following physically-motivated parameters:
\[K =\frac{V_{1}^{2}+V_{2}^{2}}{2\sigma^{2}}, \tag{4}\] \[\Delta =\frac{2V_{1}V_{2}}{V_{1}^{2}+V_{2}^{2}}, \tag{5}\]
where \(K\) represents the ratio of the average power of the dominant components to the power of the diffuse component and \(\Delta\in[0,1]\) provides a measure of the specular components similarity, so that \(\Delta=1\) implies \(V_{1}=V_{2}\). Without loss of generality we will assume \(V_{1}\geq V_{2}\), and therefore \(\Delta=0\) implies \(V_{2}=0\), i.e., only the first specular component, if any, is received. For the sake of compactness in subsequent expressions, we will also define the following ancillary parameters, given in terms of \(K\) and \(\Delta\):
\[K_{1} \triangleq\frac{V_{1}^{2}}{2\sigma^{2}}=K\frac{1+\sqrt{1-\Delta^{ 2}}}{2}, \tag{6}\] \[K_{2} \triangleq\frac{V_{2}^{2}}{2\sigma^{2}}=K\frac{1-\sqrt{1-\Delta^{ 2}}}{2}. \tag{7}\]
The IFTR model is very versatile and includes different classical and generalized fading models as particular cases by an appropriate selection of the parameters. Thus, for \(m_{1},m_{2}\rightarrow\infty\) the fluctuations of the specular components tend to disappear and the IFTR model collapses to the TWDP one [12]. If, in addition, we let \(\Delta=0\), the Rice model is obtained. For finite values of \(m_{1}\), \(\Delta=0\) yields the Rician Shadowed model [13], which was shown in [14] to contain the Hoyt (Nakagami-\(q\)) model for \(m_{1}=0.5\), with \(q=\left(\sqrt{1+2K}\right)^{-1}\). The Rayleigh fading model can be obtained as a particularization of either the aforementioned Rice or Hoyt models for \(K=0\), and also for \(m_{1}=1\) and \(\Delta=0\). If there is only one specular component and the diffuse component is absent (\(\Delta=0\), \(K\rightarrow\infty\)), the IFTR model collapses to the Nakagami-\(m\) model.
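For illustration, the instantaneous SNR of the model can be simulated directly from the definition (3) together with (4)-(7). The following minimal sketch (our code, not part of the original derivations; \(E_{s}/N_{0}=1\) is assumed so that \(\gamma=|V_{r}|^{2}\)) is the kind of Monte Carlo generator that can later be used to verify the analytical results:

```python
import numpy as np

def sample_iftr_snr(n, gamma_bar, m1, m2, K, Delta, rng=None):
    """Draw n samples of the IFTR instantaneous SNR from the model (3)-(7).

    E_s/N_0 = 1 is assumed, and the diffuse power is scaled so that the
    mean SNR equals gamma_bar (an assumption of this sketch).
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma2 = gamma_bar / (2.0 * (1.0 + K))            # 2*sigma^2 = gamma_bar/(1+K)
    K1 = K * (1.0 + np.sqrt(1.0 - Delta**2)) / 2.0    # eq. (6)
    K2 = K * (1.0 - np.sqrt(1.0 - Delta**2)) / 2.0    # eq. (7)
    V1 = np.sqrt(2.0 * sigma2 * K1)
    V2 = np.sqrt(2.0 * sigma2 * K2)
    zeta1 = rng.gamma(shape=m1, scale=1.0 / m1, size=n)   # unit-mean Gamma fluctuations
    zeta2 = rng.gamma(shape=m2, scale=1.0 / m2, size=n)
    phi1 = rng.uniform(0.0, 2.0 * np.pi, size=n)
    phi2 = rng.uniform(0.0, 2.0 * np.pi, size=n)
    diffuse = (rng.normal(0.0, np.sqrt(sigma2), size=n)
               + 1j * rng.normal(0.0, np.sqrt(sigma2), size=n))
    Vr = (np.sqrt(zeta1) * V1 * np.exp(1j * phi1)
          + np.sqrt(zeta2) * V2 * np.exp(1j * phi2) + diffuse)
    return np.abs(Vr) ** 2

# quick check: the empirical mean should be close to gamma_bar
snr = sample_iftr_snr(200_000, gamma_bar=1.0, m1=5.3, m2=2.1, K=10.0, Delta=0.5)
print(snr.mean())   # ~1.0
```

The empirical mean of the returned samples approaches \(\bar{\gamma}\) because \(E\{\zeta_{i}\}=1\) and the phases are uniformly distributed.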
## III New representation of the IFTR fading model
In this paper, we present a new statistical characterization of the SNR of a signal undergoing IFTR fading which, denoting by \(E_{s}\) the symbol energy density and \(N_{0}\) the power spectral density, is defined as \(\gamma\triangleq\left(E_{s}/N_{0}\right)\left|V_{r}\right|^{2}\).
**Definition 2**: _A RV \(\gamma\) following an IFTR distribution with parameters \(m_{1}\), \(m_{2}\), \(K\), \(\Delta\) and mean \(\overline{\gamma}\) will be denoted by \(\gamma\sim\mathcal{IFTR}(\overline{\gamma},m_{1},m_{2},K,\Delta)\), and its PDF and CDF will be denoted, respectively, by \(f_{\gamma}^{\rm IFTR}(\cdot)\) and \(F_{\gamma}^{\rm IFTR}(\cdot)\)._
Following the same spirit as in [15] for TWDP and in [7, 8] for FTR fading, we now show that the PDF and CDF of the SNR of a RV following an IFTR distribution can be expressed as infinite countable mixtures of the corresponding functions for the Gamma distribution. Additionally, we show how this result can be applied to readily obtain any metric, defined by averaging over the channel realizations, for the IFTR model, from such metric for the much simpler and widely used Nakagami-\(m\) fading.
### _PDF and CDF of IFTR fading_
**Lemma 1**: _Let \(\gamma\sim\mathcal{IFTR}(\overline{\gamma},m_{1},m_{2},K,\Delta)\), then, its PDF and CDF can be expressed, respectively, as_
\[f_{\gamma}^{\rm IFTR}\left(x\right) =\sum_{j=0}^{\infty}A_{j}f^{\mathcal{G}}\left(x;j+1,\frac{\bar{ \gamma}}{1+K}\right), \tag{8}\] \[F_{\gamma}^{\rm IFTR}\left(x\right) =\sum_{j=0}^{\infty}A_{j}F^{\mathcal{G}}\left(x;j+1,\frac{\bar{ \gamma}}{1+K}\right), \tag{9}\]
_where \(f^{\mathcal{G}}\) and \(F^{\mathcal{G}}\) are, respectively, the PDF and CDF of the Gamma distribution given in (1) and (2), and coefficients \(A_{j}\) are given in (11) in terms of the channel parameters and the regularized Gauss hypergeometric function1, which is defined as_
Footnote 1: The regularized Gauss hypergeometric function can be calculated in terms of the _standard_ Gauss hypergeometric function as \({}_{2}\tilde{F}_{1}\left(a,b;c;z\right)={}_{2}F_{1}\left(a,b;c;z\right)/\Gamma(c)\) when \(c\notin\left\{0,-1,-2,\ldots\right\}\); however, the corresponding parameter \(c\) in the coefficients \(A_{j}\) in (11) can indeed be a non-positive integer for some values of the index \(j\), and therefore \({}_{2}\tilde{F}_{1}\) has to be calculated using (10). Nevertheless, the regularized Gauss hypergeometric function is built into the Mathematica software.

\[{}_{2}\tilde{F}_{1}\left(a,b;c;z\right)=\sum_{k=0}^{\infty}\frac{\left(a\right)_{k}\left(b\right)_{k}}{\Gamma\left(c+k\right)}\frac{z^{k}}{k!}, \tag{10}\]
_where \((a)_{k}\triangleq\Gamma(a+k)/\Gamma(a)\) is the Pochhammer symbol._
See Appendix A.
Note that, in contrast to the PDF and CDF expressions given in [5], (8) and (9) are valid for arbitrary values of \(m_{1}\) and \(m_{2}\), and therefore this is also true for all the performance metrics derived from them.
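In practice, (8) and (9) are evaluated by truncating the series to a finite number of terms. A minimal sketch of this evaluation is given below, assuming the coefficients \(A_{j}\) of (11), which depend only on \(m_{1}\), \(m_{2}\), \(K\) and \(\Delta\), have been computed beforehand and are passed in as an array:

```python
import numpy as np
from scipy.stats import gamma as gamma_dist

def iftr_pdf_cdf(x, A, gamma_bar, K):
    """Evaluate the truncated mixtures (8)-(9).

    `A` holds the first J coefficients A_0, ..., A_{J-1} of (11), which are
    assumed to have been computed separately.
    """
    x = np.asarray(x, dtype=float)
    scale = gamma_bar / (1.0 + K)
    pdf = np.zeros_like(x)
    cdf = np.zeros_like(x)
    for j, Aj in enumerate(A):
        pdf += Aj * gamma_dist.pdf(x, a=j + 1, scale=scale)
        cdf += Aj * gamma_dist.cdf(x, a=j + 1, scale=scale)
    return pdf, cdf
```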
**Remark 2**: _By noting that the \(j\)-th term in (8) is proportional to \((x/\bar{\gamma})^{j}\), the PDF and CDF in IFTR fading in the high SNR regime (i.e., as \(\bar{\gamma}\to\infty\)) can be approximated by only maintaining the first term in the infinite summations, yielding_
\[f_{\gamma}^{\rm IFTR}\left(x\right) \approx A_{0}\frac{1+K}{\bar{\gamma}}e^{-x(1+K)/\bar{\gamma}},\quad\bar{\gamma}\gg x, \tag{12}\] \[F_{\gamma}^{\rm IFTR}\left(x\right) \approx A_{0}\left(1-e^{-x(1+K)/\bar{\gamma}}\right),\quad\bar{\gamma}\gg x \tag{13}\]
_with_
\[A_{0}= \frac{m_{1}^{m_{1}}m_{2}^{m_{2}}}{\left(K_{1}+m_{1}\right)^{m_{1 }}\left(K_{2}+m_{2}\right)^{m_{2}}}\] \[\times\ _{2}F_{1}\left(m_{1},m_{2};1;\frac{K^{2}\Delta^{2}}{4\left(K_{1 }+m_{1}\right)\left(K_{2}+m_{2}\right)}\right). \tag{14}\]
**Corollary 1**: _Let \(h(\gamma)\) be a performance metric (or statistical function) depending on the instantaneous SNR, and let \(X_{\mathcal{K}}(\bar{\gamma}_{\mathcal{K}},m)\) be the metric (or function) obtained by averaging over an interval of the PDF of the SNR for Nakagami-\(m\) fading with mean \(\bar{\gamma}_{\mathcal{K}}\) and fading severity \(m\), i.e.,_
\[X^{\mathcal{K}}(\bar{\gamma}_{\mathcal{K}},m)=\int_{a}^{b}h(x)f^{\mathcal{G}}( x;m,\bar{\gamma}_{\mathcal{K}}/m)dx, \tag{15}\]
_where \(0\leq a\leq b<\infty\). Then, the average performance metric for IFTR fading can be calculated as_
\[X^{\rm IFTR}\left(\bar{\gamma},m_{1},m_{2},K,\Delta\right)\] \[\qquad\qquad=\sum_{j=0}^{\infty}A_{j}X^{\mathcal{K}}\left(\frac{ \bar{\gamma}}{1+K}(j+1),j+1\right), \tag{16}\]
_where \(A_{j}\) are the IFTR coefficients defined in (11)._
The average metric in IFTR fading channel is calculated as
\[X^{\rm IFTR}(\bar{\gamma},m_{1},m_{2},K,\Delta)=\int_{a}^{b}h(x)f_{\gamma}^{ \rm IFTR}\left(x\right)dx. \tag{17}\]
By plugging (8) into (17) we can write
\[X^{\rm IFTR}\left(\bar{\gamma},m_{1},m_{2},K,\Delta\right)\] \[\quad=\int_{a}^{b}h\left(x\right)\left[\sum_{j=0}^{\infty}A_{j}f^{ \mathcal{G}}\left(x;j+1,\frac{\bar{\gamma}}{1+K}\right)\right]dx\] \[\quad=\sum_{j=0}^{\infty}A_{j}\int_{a}^{b}h\left(x\right)f^{ \mathcal{G}}\left(x;j+1,\frac{\bar{\gamma}}{1+K}\right)dx. \tag{18}\]
Comparing the integral of the resulting expression with (15) and identifying \(j+1=m\) and \(\frac{\bar{\gamma}}{1+K}=\frac{\bar{\gamma}_{\mathcal{K}}}{m}\), (16) is obtained.
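The corollary translates directly into a simple computational recipe: any routine returning an average metric under Nakagami-\(m\) fading can be reused for IFTR fading through a weighted sum over the mixture. A hedged sketch (function and variable names are ours):

```python
import numpy as np
from scipy.stats import gamma as gamma_dist

def iftr_metric_from_nakagami(nakagami_metric, A, gamma_bar, K):
    """Corollary 1: lift an average metric known for Nakagami-m fading to IFTR.

    `nakagami_metric(gbar_K, m)` must return the metric averaged over
    Nakagami-m fading with mean SNR gbar_K and severity m, cf. (15)-(16).
    `A` holds the truncated coefficients A_j of (11).
    """
    return sum(Aj * nakagami_metric(gamma_bar * (j + 1) / (1.0 + K), j + 1)
               for j, Aj in enumerate(A))

# example metric: Nakagami-m outage probability at threshold gamma_th = 1,
# i.e. the Gamma CDF of Remark 1; the IFTR outage then follows from the corollary.
def nakagami_outage(gbar_K, m, gamma_th=1.0):
    return gamma_dist.cdf(gamma_th, a=m, scale=gbar_K / m)

# p_out_iftr = iftr_metric_from_nakagami(nakagami_outage, A, gamma_bar=1.0, K=10.0)
```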
### _Series convergence and Kolmogorov-Smirnov goodness-of-fit statistical test_
The series expression of the PDF given in (8) is obtained by averaging the convergent series expression for TWDP fading, given in (65), over the fluctuations of the specular components, as explained in Appendix A. The weights of the Gamma PDF's in the TWDP series are positive [15] and therefore the interchange of integration and infinite summation in (66) can be carried out by virtue of Tonelli's theorem [16], which has the following consequences:
(i) The series in the right hand side of (8) converges to the PDF of the IFTR fading model \(f_{\gamma}^{\rm IFTR}(x)\ \forall x\in[0,\infty)\).
(ii) The calculated coefficients \(A_{j}\) are positive for all \(j\).
Moreover, the performance metrics in communication systems (e.g., BER, channel capacity, outage probability, etc.) are typically non-negative functions, which, together with (ii), permits invoking Tonelli's theorem again, thus allowing the interchange of integration and infinite summation in (18), yielding two additional consequences:
(iii) The series in the right hand side of (16) converges to the average metric in IFTR fading \(X^{\rm IFTR}(\bar{\gamma},m_{1},m_{2},K,\Delta)\).
(iv) Considering \(h(\gamma)=1\) in \([0,\infty)\) in Corollary 1 yields \(\sum_{j=0}^{\infty}A_{j}=1\). Additionally, considering \(h(\gamma)=1\) in \([0,x)\) in
Corollary 1 provides a formal justification for obtaining (9) by integrating (8) term by term.
The infinite series used in the statistical characterization of IFTR fading must be truncated for numerical computation. We now provide the Kolmogorov-Smirnov (KS) goodness-of-fit statistical test, which permits checking how close a truncated series is to the exact value. The KS test statistic is given by [17]
\[T_{KS}=\max_{x}\left|\hat{F}_{\gamma}^{\rm IFTR}(x)-F_{\gamma}^{\rm IFTR}(x)\right|, \tag{38}\]
where \(F_{\gamma}^{\rm IFTR}(x)\) is the exact value of the CDF and \(\hat{F}_{\gamma}^{\rm IFTR}(x)\) is the approximation of the CDF when the series is truncated to \(J\) terms.
Table I reports the KS test for different channel parameters when the truncated series have 20, 30, or 40 terms. It can be seen that the accuracy reaches an acceptable level when the first 40 terms of the series are computed, so the numerical calculations of all the series in this work will consider 40 terms.
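A practical way to apply (38) is to compare the truncated CDF (9) against an empirical CDF built from Monte Carlo samples of the model (for instance, those produced by the sampler sketched in Section II). The following sketch illustrates this; it is our illustration and not necessarily the exact procedure used to generate Table I:

```python
import numpy as np
from scipy.stats import gamma as gamma_dist

def empirical_ks(snr_samples, A, gamma_bar, K):
    """KS gap (38) between the J-term CDF (9) and a Monte Carlo empirical CDF.

    `snr_samples` can come from the IFTR sampler sketched in Section II;
    `A` holds the truncated coefficients A_j of (11).
    """
    x = np.sort(np.asarray(snr_samples, dtype=float))
    ecdf = np.arange(1, x.size + 1) / x.size
    scale = gamma_bar / (1.0 + K)
    model = sum(Aj * gamma_dist.cdf(x, a=j + 1, scale=scale)
                for j, Aj in enumerate(A))
    return np.max(np.abs(ecdf - model))
```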
Figs. 1 and 2 show the PDF of the SNR for different IFTR channel parameters obtained from (8) assuming 40 terms in the truncated series computation. Fig. 1 is plotted for \(\Delta=0.1,0.9\) and for both integer and non-integer values of \(m_{1}\) and \(m_{2}\), while Fig. 2 shows the PDF for \(K=5,15\). The numerical results are verified by Monte-Carlo simulation, showing an excellent agreement in all cases. Fig. 3 illustrates the CDF of the SNR in IFTR fading computed from (9) for different values of \(K\), \(\Delta\) and \(m_{1},m_{2}\).
### _GMGF and moments of the IFTR model_
**Definition 3**: _Let \(n>0\), and let \(X\) be a continuous non-negative RV with PDF \(f_{X}(\cdot)\). The GMGF of \(X\) is defined as_
\[\phi_{X}^{(n)}\left(s\right)\triangleq E\left\{X^{n}e^{Xs}\right\}=\int_{0}^{ \infty}x^{n}e^{xs}f_{X}\left(x\right)dx, \tag{39}\]
_where \(E\left\{\cdot\right\}\) denotes the expectation operator. The moment generating function (MGF) is defined as \(\phi_{X}\left(s\right)\triangleq E\left\{e^{Xs}\right\}=\phi_{X}^{(0)}\left(s\right)\), and it is therefore a particular case of the GMGF. Note that for \(n\in\mathbb{N}\), the GMGF coincides with the \(n\)-th order derivative of the MGF. Also, the \(n\)-th order moment of \(X\) is obtained as \(\mu_{X}^{n}\triangleq E\left\{X^{n}\right\}=\phi_{X}^{(n)}\left(0\right)\)._
The GMGF finds application in different communication
Fig. 1: PDF of the SNR under IFTR fading for different channel parameters \(m_{1},m_{2}\) and \(\Delta\). Simulation confirmation results are displayed as circular markers. \(K=10\). \(\bar{\gamma}=1\).
Fig. 2: PDF of the SNR under IFTR fading for different channel parameters \(m_{1},m_{2}\) and \(K\). Simulation confirmation results are displayed as circular markers. \(\Delta=0.5\). \(\bar{\gamma}=1\).
theory areas, including energy detection, outage probability under co-channel interference, physical layer security or BER analysis. In most cases it suffices to consider \(n\in\mathbb{N}\), which usually results in closed-form expressions for the GMGF, as is the case for IFTR fading, as we show below. However, there are situations, such as composite Inverse Gamma (IG) shadowing/fading modeling [18], where the more general case of arbitrary \(n>0\) needs to be considered. In the following Lemma we derive expressions for the GMGF of the IFTR fading model for both cases.
**Lemma 2**: _Let \(\gamma\sim\mathcal{IFTR}(\overline{\gamma},m_{1},m_{2},K,\Delta)\), then, its GMGF can be expressed as follows:_
_(i) General case (\(n\in\mathbb{R}^{+}\)):_
\[\phi_{\gamma}^{(n)}(s)=\sum_{j=0}^{\infty}A_{j}\phi_{G}^{(n)}\left(s,j+1, \frac{\bar{\gamma}}{1+K}\right), \tag{40}\]
_where \(A_{j}\) is defined in (11) and \(\phi_{G}^{(n)}\) is the GMGF of a RV \(G\sim\mathcal{G}(\lambda,\nu)\), which is given by_
\[\phi_{G}^{(n)}(s,\lambda,\nu)=\frac{\Gamma(n+\lambda)\left(\frac{1}{\nu}-s \right)^{-(n+\lambda)}}{\Gamma(\lambda)\nu^{\lambda}}. \tag{41}\]
_(ii) Case \(n\in\mathbb{N}\): A closed-form expression is given in (42)._
Case (i): This result is obtained by applying Corollary 1 to the GMGF of the SNR in Nakagami-\(m\) fading given in [18, Table II].
Case (ii): See Appendix B.
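For the general case (i), the series (40)-(41) is straightforward to evaluate numerically. A minimal sketch follows, assuming a precomputed array of coefficients \(A_{j}\) and \(s<(1+K)/\bar{\gamma}\) so that the GMGF is finite:

```python
import numpy as np
from scipy.special import gammaln

def gmgf_gamma(s, n, lam, nu):
    """GMGF of a Gamma(lam, nu) variate, eq. (41); requires s < 1/nu."""
    return np.exp(gammaln(n + lam) - gammaln(lam)
                  - lam * np.log(nu) - (n + lam) * np.log(1.0 / nu - s))

def gmgf_iftr(s, n, A, gamma_bar, K):
    """Series form (40) of the IFTR GMGF, truncated to the coefficients in `A`."""
    nu = gamma_bar / (1.0 + K)
    return sum(Aj * gmgf_gamma(s, n, j + 1, nu) for j, Aj in enumerate(A))
```

Setting \(s=0\) returns the truncated moments (43), and \(n=0\) gives the MGF.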
**Lemma 3**: _Let \(\gamma\sim\mathcal{IFTR}(\overline{\gamma},m_{1},m_{2},K,\Delta)\), then its \(n\)-th order moment can be expressed as follows:_
_(i) General case (\(n\in\mathbb{R}^{+}\)):_
\[\mu_{\gamma}^{n}=\sum_{j=0}^{\infty}A_{j}\frac{\Gamma(n+j+1)\bar{\gamma}^{n}} {\Gamma(j+1)(1+K)^{n}}. \tag{43}\]
_(ii) Case \(n\in\mathbb{N}\): A closed-form expression is given now by_
\[\mu_{\gamma}^{n} =\left(\frac{\overline{\gamma}}{1+K}\right)^{n}\sum_{q=0}^{n} \binom{n}{q}\frac{n!}{q!}\sum_{r=0}^{q}\binom{q}{r}\] \[\times\sum_{p=0}^{q-r}\binom{q-r}{p}K_{1}^{p}K_{2}^{q-r-p}\sum_{ l=0}^{r}\binom{r}{l}\left(\frac{K\Delta}{2}\right)^{2l}\] \[\times\frac{\Gamma\left(m_{1}+l+p\right)}{\Gamma\left(m_{1} \right)m_{1}^{l+p}}\frac{\Gamma\left(m_{2}+q-l-p\right)}{\Gamma\left(m_{2} \right)m_{2}^{q-l-p}}\delta_{2l,r}. \tag{44}\]
_where \(\delta_{2l,r}\) is the kronecker delta function._
These results follow by considering \(s=0\) in the GMGF expressions. In case (ii), the following equality has been taken into account to obtain (44):
\[\lim_{s\to 0}s^{n-m}\cdot_{2}\tilde{F}_{1}\left(a,b;n-m+1;A\cdot s^{2}\right)= \delta_{n,m}, \tag{45}\]
which holds for any \(n,m\in\mathbb{N}\), where the cases \(n>m\) and \(n=m\) are trivial, and the case \(n<m\) results from the fact that the Gamma function has simple poles at the non-positive integers, and therefore from (10) and given \(p\in\mathbb{N}\cup\{0\}\) we can write
\[_{2}\tilde{F}_{1}\left(a,b;-p;z\right)=\sum_{k=p+1}^{\infty}\frac{\left(a \right)_{k}\left(b\right)_{k}}{\Gamma\left(-p+k\right)}\frac{z^{k}}{k!}. \tag{46}\]
From the expression of the moments for \(n\in\mathbb{N}\) given in (44), the amount of fading (AoF) for IFTR fading can be obtained in closed form. The AoF captures the severity, in terms of the variability, of the fading channel as a function of the parameters of the model and is defined as the variance of the SNR normalized by its squared mean, so that \(\text{AoF}\triangleq E\{(\gamma-\bar{\gamma})^{2}\}/\bar{\gamma}^{2}=E\{\gamma^{2}\}/\bar{\gamma}^{2}-1\).
**Corollary 2**: _Let \(\gamma\sim\mathcal{IFTR}(\overline{\gamma},m_{1},m_{2},K,\Delta)\), then, its AoF can be written as_
\[\text{AoF}=\frac{1}{\left(1+K\right)^{2}}\left[1+2K+\frac{\left(K\Delta\right) ^{2}}{2}+\frac{K_{1}^{2}}{m_{1}}+\frac{K_{2}^{2}}{m_{2}}\right]. \tag{47}\]
This result is obtained by particularizing the moments in (44) to the definition of the AoF.
The IFTR fading model tends to the TWDP one for \(m_{1},m_{2}\rightarrow\infty\). As a check, it must be noted that for such condition the expression given in (47) tends to the AoF given in [19, eq. (34)] for TWDP fading.
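Equation (47) is easy to cross-check against simulation. A short sketch of ours computes the closed-form AoF, to be compared with the sample variance over squared sample mean of Monte Carlo SNR draws:

```python
import numpy as np

def aof_closed_form(m1, m2, K, Delta):
    """Amount of fading, eq. (47)."""
    K1 = K * (1.0 + np.sqrt(1.0 - Delta**2)) / 2.0
    K2 = K * (1.0 - np.sqrt(1.0 - Delta**2)) / 2.0
    return (1.0 + 2.0 * K + (K * Delta) ** 2 / 2.0
            + K1**2 / m1 + K2**2 / m2) / (1.0 + K) ** 2

# cross-check against Monte Carlo samples (e.g. from the sampler in Section II):
# snr = sample_iftr_snr(10**6, 1.0, m1, m2, K, Delta)
# print(snr.var() / snr.mean()**2, aof_closed_form(m1, m2, K, Delta))
```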
## IV Performance analysis
By using the derived statistical characterization of the IFTR fading model, the performance of different wireless communication systems undergoing this fading distribution can be calculated. In the following, the channel capacity, the outage probability in an interference-limited multi-antenna receiver and the symbol error rate have been obtained for IFTR fading.
### _Average channel capacity_
The average capacity per unit bandwidth for IFTR fading is given by
\[C=\int_{0}^{\infty}\log_{2}(1+x)f_{\gamma}^{\text{IFTR}}(x)dx. \tag{48}\]
Fig. 3: CDF of the SNR under IFTR fading for different channel parameters \(m_{1},m_{2}\), \(K\), and \(\Delta\). Simulation confirmation results are displayed as circular markers.
A direct application of Corollary 1, using the average channel capacity expression for Nakagami-\(m\) fading channels [20, eq. (23)], provides the following closed-form expression:
\[C=\sum_{j=0}^{\infty}\frac{A_{j}e^{\frac{1+K}{\bar{\gamma}}}}{\ln(2)}\sum_{k=0}^{j}\left(\frac{1+K}{\bar{\gamma}}\right)^{k}\Gamma\left(-k,\frac{1+K}{\bar{\gamma}}\right), \tag{49}\]
where \(A_{j}\) is given in eq. (11) and \(\Gamma(.,.)\) is the upper incomplete gamma function, which can be computed, when the first parameter is a negative integer, as [11, eq. (8.352.3)]
\[\Gamma(-n,x)=\frac{(-1)^{n}}{\Gamma(n+1)}\left[\sum_{r=0}^{n-1}\frac{\Gamma(n-r)}{(-x)^{n-r}e^{x}}-Ei(-x)\right], \tag{50}\]
where \(Ei(\cdot)\) is the exponential integral function [11, eq. (8.211.1)].
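As a numerical cross-check of (49)-(50), the capacity (48) can also be integrated directly against the truncated mixture PDF (8). A hedged sketch, again assuming precomputed coefficients \(A_{j}\):

```python
import numpy as np
from scipy.stats import gamma as gamma_dist
from scipy.integrate import quad

def capacity_iftr(A, gamma_bar, K):
    """Numerical evaluation of (48) using the truncated mixture (8).

    Intended only as a cross-check of the closed form (49)-(50);
    `A` holds the truncated coefficients A_j of (11).
    """
    scale = gamma_bar / (1.0 + K)

    def pdf(x):
        return sum(Aj * gamma_dist.pdf(x, a=j + 1, scale=scale)
                   for j, Aj in enumerate(A))

    val, _ = quad(lambda x: np.log2(1.0 + x) * pdf(x), 0.0, np.inf, limit=200)
    return val
```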
### _Outage probability in interference-limited multi-antenna receiver_
The outage probability, i.e., the probability that the received SNR is below a threshold \(\gamma_{th}\), under IFTR fading is given by
\[P_{out}=Pr(\gamma<\gamma_{th})=F_{\gamma}^{\text{IFTR}}(\gamma_{th}). \tag{51}\]
On the other hand, in the presence of co-channel interference (CCI) of total received power \(I\), considering negligible background noise and denoting as \(W\) the received power from the desired user, which is assumed to experience IFTR fading, the outage probability is defined as
\[\hat{P}_{\text{out}}=P\left(\frac{W}{I}<R_{th}\right), \tag{52}\]
where \(R_{th}\) denotes the signal-to-interference ratio (SIR) threshold.
We further assume \(N\) receive antennas performing maximal ratio combining (MRC) and \(L\) independent and identically distributed (i.i.d.) Rayleigh interferers with average power \(P_{I}\). In this scenario, the outage probability is given by [21, eq. (15)]
\[\hat{P}_{\text{out}}=\sum_{k=0}^{L-1}\left(\frac{1}{R_{th}P_{I}}\right)^{k} \sum_{\mathcal{U}}\prod_{i=1}^{N}\frac{1}{u_{i}!}\phi_{W_{i}}^{(u_{i})}\left(- \frac{1}{R_{th}P_{I}}\right), \tag{53}\]
where \(\mathcal{U}\) is a set of \(N\)-tuples such that \(\mathcal{U}=\{(u_{1}...u_{N}),u_{i}\in\mathbb{N},\ \sum_{i=1}^{N}u_{i}=k\}\), and \(\phi_{W_{i}}^{(u_{i})}(s)\) is computed using (42), as \(u_{i}\in\mathbb{N}\), by simply considering the relation \(W_{i}=\frac{\gamma_{i}}{E_{s}/N_{0}}\), thereby providing a closed-form expression for the outage probability.
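A simple Monte Carlo counterpart of (52)-(53), useful to validate the analytical expression, draws the \(N\) MRC branch powers from the IFTR sampler of Section II and the \(L\) Rayleigh interferer powers from exponential distributions. A sketch under the assumption of i.i.d. branches (our code, relying on the `sample_iftr_snr` helper sketched earlier):

```python
import numpy as np

def outage_cci_mc(n_trials, N, L, P_I, R_th, gamma_bar, m1, m2, K, Delta, rng=None):
    """Monte Carlo estimate of (52) for an N-branch MRC receiver with L
    i.i.d. Rayleigh interferers of average power P_I."""
    rng = np.random.default_rng() if rng is None else rng
    W = np.zeros(n_trials)
    for _ in range(N):
        # per-branch desired-user power, drawn with the IFTR sampler of Section II
        W += sample_iftr_snr(n_trials, gamma_bar, m1, m2, K, Delta, rng)
    # Rayleigh interferers: received powers are exponential with mean P_I
    I = rng.exponential(scale=P_I, size=(n_trials, L)).sum(axis=1)
    return np.mean(W / I < R_th)
```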
### _Exact and approximated average BER_
The average symbol error rate in a telecommunication system is one of the main parameters for measuring the quality of communication. In this section, we calculate this metric for the IFTR fading channel. The conditional BER in an AWGN channel for some relevant modulations with coherent detection can be written as [22]
\[P_{e}(x)=\sum_{r=1}^{R}\alpha_{r}Q(\sqrt{\beta_{r}x}). \tag{54}\]
The average BER is calculated by averaging over all possible channel realizations. From the result in [23, eq. (5.18)] for Nakagami-\(m\) fading, by virtue of Corollary 1, the average BER in IFTR fading can be written, after some manipulation, as
\[\bar{P}_{e}= \sum_{r=1}^{R}\frac{\alpha_{r}}{2}\sum_{j=0}^{\infty}A_{j}\left[1 -\sqrt{\frac{\beta_{r}\overline{\gamma}}{2\left(1+K\right)+\beta_{r} \overline{\gamma}}}\sum_{k=0}^{j}\binom{2k}{k}\right.\] \[\left.\times\left(\frac{1-\frac{\beta_{r}\overline{\gamma}}{2 \left(1+K\right)+\beta_{r}\overline{\gamma}}}{4}\right)^{k}\right]. \tag{55}\]
In the high SNR regime (\(\bar{\gamma}\rightarrow\infty\)), the average BER can be simplified by maintaining only the first term in the infinite summation, as stated in Remark 2, yielding
\[\bar{P}_{e}\approx\sum_{r=1}^{R}\frac{\alpha_{r}}{2}A_{0}\left[1-\sqrt{\frac{ \beta_{r}\overline{\gamma}}{2\left(1+K\right)+\beta_{r}\overline{\gamma}}} \right],\ \bar{\gamma}\rightarrow\infty. \tag{56}\]
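The asymptote (56) only requires \(A_{0}\) from (14), which, since its lower parameter is \(c=1\), can be evaluated with the standard Gauss hypergeometric function. A sketch for BPSK (\(R=1\), \(\alpha_{1}=1\), \(\beta_{1}=2\)); the commented lines indicate how a Monte Carlo reference could be obtained with the sampler of Section II:

```python
import numpy as np
from scipy.special import hyp2f1

def ber_bpsk_asymptotic(gamma_bar, m1, m2, K, Delta):
    """High-SNR BPSK BER, eq. (56), using A_0 from (14); gamma_bar in linear scale."""
    K1 = K * (1.0 + np.sqrt(1.0 - Delta**2)) / 2.0
    K2 = K * (1.0 - np.sqrt(1.0 - Delta**2)) / 2.0
    z = (K * Delta) ** 2 / (4.0 * (K1 + m1) * (K2 + m2))
    # regularized 2F1 equals the standard 2F1 here because c = 1
    A0 = (m1**m1 * m2**m2 / ((K1 + m1) ** m1 * (K2 + m2) ** m2)) * hyp2f1(m1, m2, 1.0, z)
    return 0.5 * A0 * (1.0 - np.sqrt(2.0 * gamma_bar / (2.0 * (1.0 + K) + 2.0 * gamma_bar)))

# Monte Carlo reference: average of Q(sqrt(2*gamma)) over IFTR samples, e.g.
# from scipy.stats import norm
# snr = sample_iftr_snr(10**6, gamma_bar, m1, m2, K, Delta)
# print(norm.sf(np.sqrt(2.0 * snr)).mean(), ber_bpsk_asymptotic(gamma_bar, m1, m2, K, Delta))
```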
## V Numerical and simulation results
This section presents figures illustrating the performance of IFTR fading channels. The obtained numerical results have been validated by Monte Carlo simulations where \(10^{7}\) random realizations of the IFTR distribution have been computed. Based on Table I, numerical results involving infinite series have been calculated truncating to 40 terms, as it provides a satisfactory accuracy for all the considered cases. In all the presented figures we have assumed \(\bar{\gamma}=1\).
The average capacity for IFTR fading is presented in Fig. 4 for different values of the channel parameters \(\{m_{1},m_{2},K,\Delta\}\). The presented numerical results have been obtained from (49). It can be seen that a higher capacity is obtained for high \(K\). On the other hand, a high value of \(\Delta\) (close to 1) yields lower capacity due to the increased
probability that the specular components cancel each other, which increases the channel variability.
Fig. 5 shows the outage probability (\(P_{out}\)) computed from (51) versus the average SNR (\(\bar{\gamma}\)) for different channel model parameters. It can be observed that decreasing \(\Delta\) from \(0.9\) to \(0.5\), increasing \(K\) from 5 to 15, and increasing \(m_{1}\) and \(m_{2}\) yields a better performance (lower outage probability), as these changes give rise to a reduced fading severity.
In Fig. 6 the same values of the IFTR model parameters as in Fig. 5 are considered, although now a multiantenna receiver is assumed under CCI for the outage probability. The same effect as in Fig. 5 is observed when the channel parameters are modified, but the amount of variation in the outage probability is affected by the presence of CCI and the use of MRC reception. For example, for \(\bar{\gamma}=\overline{SIR}=10\) dB, the outage probability under CCI, \(\hat{P}_{out}=2\times 10^{-4}\), is lower than \(P_{out}=5\times 10^{-3}\) due to the MRC diversity gain when the values of the parameters are \(K=15\), \(\Delta=0.5\), \(m_{1}=15\) and \(m_{2}=7.5\).
Fig. 7 shows the outage probability with CCI for different system parameters vs. the SIR threshold. The numerical results of the outage probability from (53) are plotted for different number of antennas \(N=1,2,3\) and average interference power \(P_{I}=1,2\) when \(L=1\). It can be seen that as the number of received antennas increases, the outage probability decreases, and the diversity gain increases. Also, for a given SIR threshold, the outage probability is higher for larger average interference power, as expected. Monte-Carlo simulations show an excellent match to the numerical results.
Finally, Fig. 8 shows the exact and asymptotic BER vs. the average SNR in IFTR fading for BPSK modulation (\(R=1\), \(\alpha_{1}=1\), \(\beta_{1}=2\)). The figure shows this performance metric for different channel parameters \(\Delta=0.1,0.5,0.9\) when the fluctuation parameters \(m_{1}=15.7\) and \(m_{2}=5.1\) are non-integer. Again, increasing \(\Delta\) results in higher channel variability, causing a detrimental impact on performance, i.e., a higher average BER. It is worth mentioning that when the average SNR is above 20 dB, the asymptotic curves, which are much simpler to compute, yield very good approximate results, and above 30 dB the exact and asymptotic results are indistinguishable in all the presented cases.
## VI Conclusion
In this paper, a new formulation in series form has been derived for the PDF and CDF of the IFTR fading model. The convergence of the obtained series has been demonstrated, and the series have been truncated for numerical computation with their accuracy assessed through the Kolmogorov-Smirnov goodness-of-fit test. We show that, by leveraging any average performance metric already known for the much simpler Nakagami-\(m\) fading model, such a metric can be readily obtained for IFTR fading.
Fig. 4: Numerical and simulation results for the average capacity vs. average SNR in dB for different channel parameter values (\(\bar{\gamma}=1\)). Simulation confirmation results are displayed as circular markers.
Fig. 5: Numerical and simulation results for the outage probability vs. \(\bar{\gamma}\) with \(\gamma_{th}=0\) dB. Simulation confirmation results are displayed as circular markers.
Fig. 6: Numerical and simulation results for the outage probability for IFTR fading with CCI for different channel parameters values with \(N=2\), \(L=1\), \(P_{I}=1\) and \(R_{th}=0\) dB. Simulation confirmation results are displayed as circular markers.
Additionally, the GMGF of IFTR fading has been obtained which, for most cases of interest, can be expressed in closed-form, thus opening the door to circumvent the model mathematical complexity and obtain several relevant performance metrics also in closed-form, as well as the moments of the distribution and the amount of fading. Finally, the new and expanded statistical characterization of the IFTR fading model has been exemplified, showing and discussing numerical results for the average capacity, the outage probability with and without interference and the BER for BPSK modulation, which have been verified by Monte Carlo simulations.
## Acknowledgment
The authors would like to express their gratitude to F. Javier Lopez-Martinez for his insightful comments during the development of this work.
## Appendix A Proof of Lemma 1
Let us consider the fading model defined in (3) conditioned to the particular realizations of the RVs \(\zeta_{1}=u_{1}\), \(\zeta_{2}=u_{2}\). Thus, we can write
\[V_{r}|_{u_{1},u_{2}}=\sqrt{u_{1}}V_{1}e^{j\phi_{1}}+\sqrt{u_{2}}V_{2}e^{j\phi_{2}}+X+jY, \tag{57}\]
which corresponds to the TWDP fading model with specular components amplitudes \(\sqrt{u_{1}}V_{1}\) and \(\sqrt{u_{2}}V_{2}\) and parameters
\[K_{u_{1},u_{2}}=\frac{u_{1}V_{1}^{2}+u_{2}V_{2}^{2}}{2\sigma^{2 }}=u_{1}K_{1}+u_{2}K_{2}, \tag{58}\] \[\Delta_{u_{1},u_{2}}=\frac{2\sqrt{u_{1}u_{2}}V_{1}V_{2}}{u_{1}V_ {1}^{2}+u_{2}V_{2}^{2}}, \tag{59}\]
which satisfy
\[K_{u_{1},u_{2}}\Delta_{u_{1},u_{2}}=\sqrt{u_{1}u_{2}}\frac{V_{1}V_{2}}{\sigma ^{2}}=\sqrt{u_{1}u_{2}}K\Delta. \tag{60}\]
The conditional average SNR for the model definition given in (57) will be
\[\bar{\gamma}_{u_{1},u_{2}}=\frac{E_{s}}{N_{0}}\left(u_{1}V_{1}^{2}+u_{2}V_{2} ^{2}+2\sigma^{2}\right)=\frac{E_{s}}{N_{0}}2\sigma^{2}\left(1+K_{u_{1},u_{2} }\right). \tag{61}\]
On the other hand, by averaging over all possible realizations of the unit-mean RVs \(\zeta_{1}\), \(\zeta_{2}\), the unconditional average SNR will be
\[\bar{\gamma}=E\{\bar{\gamma}_{u_{1},u_{2}}\}=\frac{E_{s}}{N_{0}}(V_{1}^{2}+V_ {2}^{2}+2\sigma^{2})=\frac{E_{s}}{N_{0}}2\sigma^{2}(1+K), \tag{62}\]
and therefore, equating (61) and (62), we can write
\[\frac{1+K_{u_{1},u_{2}}}{\bar{\gamma}_{u_{1},u_{2}}}=\frac{1}{\left(E_{s}/N_{ 0}\right)2\sigma^{2}}=\frac{1+K}{\bar{\gamma}}, \tag{63}\]
From the PDF of the received power of the TWDP fading model given in [15] as a mixture of Gamma distributions, the PDF of the conditional SNR of the model defined in (57) can be written as
\[f_{\gamma_{u_{1},u_{2}}}^{\text{TWDP}}(x)=e^{-K_{u_{1},u_{2}}}\sum_{j=0}^{\infty}\frac{K_{u_{1},u_{2}}^{j}}{j!}f^{\mathcal{G}}\left(x;j+1,\frac{\bar{\gamma}_{u_{1},u_{2}}}{1+K_{u_{1},u_{2}}}\right)\] \[\times\sum_{k=0}^{j}\binom{j}{k}\bigg{(}\frac{\Delta_{u_{1},u_{2}}}{2}\bigg{)}^{k}\sum_{l=0}^{k}\binom{k}{l}I_{2l-k}\left(-K_{u_{1},u_{2}}\Delta_{u_{1},u_{2}}\right), \tag{64}\]
which, from (58)-(63), can be rewritten as
\[f_{\gamma_{u_{1},u_{2}}}^{\text{TWDP}}(x)=e^{-u_{1}K_{1}-u_{2}K_{2}}\sum_{j=0}^{\infty}\frac{1}{j!}f^{\mathcal{G}}\left(x;j+1,\frac{\bar{\gamma}}{1+K}\right)\] \[\times\sum_{k=0}^{j}\binom{j}{k}\sum_{q=0}^{j-k}\binom{j-k}{q}(u_{1}K_{1})^{q}(u_{2}K_{2})^{j-k-q}\] \[\times\left(\frac{\sqrt{u_{1}u_{2}}K\Delta}{2}\right)^{k}\sum_{l=0}^{k}\binom{k}{l}I_{2l-k}\left(-\sqrt{u_{1}u_{2}}K\Delta\right), \tag{65}\]
Fig. 8: Numerical and simulation results for the average BER vs. average SNR in dB for BPSK with channel parameters \(m_{1}=15.7\), \(m_{2}=5.1\) and \(K=10\).
Fig. 7: Numerical and simulation results for the outage probability for IFTR fading with CCI considering different values of \(N=1,2,3\) and interference average power \(P_{I}=1,2\) watts. Channel parameters are \(m_{1}=3\), \(m_{2}=30\), \(K=5\) and \(\Delta=0.5\).
The PDF of the SNR of the IFTR model can be obtained by averaging (65) over all possible realizations of the RVs \(\zeta_{1}\) and \(\zeta_{2}\), i.e.
\[f_{\gamma}^{\text{IFTR}}(x)=\int_{0}^{\infty}\int_{0}^{\infty}f_{ \gamma_{u_{1},u_{2}}}^{\text{TWDP}}(x)f_{\zeta_{1}}\left(u_{1}\right)f_{\zeta_{2 }}\left(u_{2}\right)du_{1}du_{2}, \tag{66}\]
where
\[f_{\zeta_{i}}\left(u_{i}\right)=\frac{m_{i}^{m_{i}}u_{i}^{m_{i}-1}}{\Gamma \left(m_{i}\right)}e^{-m_{i}u_{i}},\qquad i=1,2. \tag{67}\]
The double integral in (66) can be solved in closed-form by iteratively integrating with respect to variables \(u_{1}\) and \(u_{2}\). Thus, after changing the order of integration and summation, we can write
\[f_{\gamma}^{\text{IFTR}}(x) =\sum_{j=0}^{\infty}f^{\mathcal{G}}\left(x;j+1,\frac{\overline{ \gamma}}{1+K}\right)\] \[\times\sum_{k=0}^{j}\binom{j}{k}\sum_{q=0}^{j-k}\binom{j-k}{q} \frac{K_{1}^{q}K_{2}^{j-k-q}}{j!}\] \[\times\left(\frac{K\Delta}{2}\right)^{k}\sum_{l=0}^{k}\binom{k}{l }\frac{m_{1}^{m_{1}}}{\Gamma(m_{1})}\frac{m_{2}^{m_{2}}}{\Gamma(m_{2})}\mathcal{ H}_{1}, \tag{68}\]
where we have defined
\[\mathcal{H}_{1}\triangleq\int_{0}^{\infty}u_{2}^{m_{2}+j-k/2-q-1 }e^{-\left(m_{2}+K_{2}\right)u_{2}}\mathcal{I}_{1}(u_{2})du_{2}, \tag{69}\] \[\mathcal{I}_{1}(u_{2})\triangleq\int_{0}^{\infty}u_{1}^{m_{1}+q +k/2-1}e^{-\left(m_{1}+K_{1}\right)u_{1}}\] \[\qquad\qquad\times I_{2l-k}\left(-\sqrt{u_{1}u_{2}}K\Delta\right) du_{1}. \tag{70}\]
We now consider the following equality from [11, 6.643.2] and [11, 9.220.2]:
\[\mathcal{J} =\int_{0}^{\infty}t^{\mu-1/2}e^{-pt}I_{2\nu}\left(2\beta\sqrt{t} \right)dt\] \[=\frac{\Gamma(\mu+\nu+\frac{1}{2})\beta^{2\nu}}{p^{\nu+\mu+\frac{ 1}{2}}}\,{}_{1}\tilde{F}_{1}\bigg{(}\mu+\nu+\frac{1}{2},2\nu+1,\frac{\beta^{2} }{p}\bigg{)}, \tag{71}\]
where \({}_{1}\tilde{F}_{1}\) is the regularized Kummer hypergeometric function, and from which (70) can be written in closed-form as
\[\mathcal{I}_{1}(u_{2}) =\left(-1\right)^{k}\frac{\Gamma\left(m_{1}+q+l\right)}{\left(m_ {1}+K_{1}\right)^{m_{1}+q+l}}\left(\frac{K\Delta}{2}\right)^{2l-k}u_{2}^{l-k/2}\] \[\times_{1}\tilde{F}_{1}\left(m_{1}+q+l;2l-k+1;\frac{u_{2}K^{2} \Delta^{2}}{4\left(m_{1}+K_{1}\right)}\right). \tag{72}\]
Introducing (72) into (69) and solving the integral with the help of [11, eq. (7.621.4)] we can write
\[\mathcal{H}_{1}=\left(-1\right)^{k}\left(\frac{K\Delta}{2}\right)^{2l-k}\frac{\Gamma\left(m_{1}+q+l\right)}{\left(m_{1}+K_{1}\right)^{m_{1}+q+l}}\] \[\times\frac{\Gamma(m_{2}+j-k-q+l)}{\left(m_{2}+K_{2}\right)^{m_{2}+j-k-q+l}}\,{}_{2}\tilde{F}_{1}\bigg{(}m_{1}+q+l,\] \[m_{2}+j-k-q+l;2l-k+1;\frac{K^{2}\Delta^{2}}{4\left(m_{1}+K_{1}\right)\left(m_{2}+K_{2}\right)}\bigg{)}\,, \tag{73}\]
which, together with (68), yields the desired result in (8) for the PDF of the SNR of the IFTR fading model. On the other hand, the CDF in (9) is obtained by a simple integration of (8) (see additional comments on this in Section III-B).
## Appendix B Proof of Lemma 2: Case (ii)
As in Appendix A, we consider an IFTR model conditioned to the particular realizations of the RVs \(\zeta_{1}=u_{1}\), \(\zeta_{2}=u_{2}\), which yields a TWDP model with specular components amplitudes \(\sqrt{u}_{1}V_{1}\) and \(\sqrt{u}_{2}V_{2}\), parameters \(K_{u_{1},u_{2}}\) and \(\Delta_{u_{1},u_{2}}\) given, respectively, by (58) and (59), and conditional mean \(\bar{\gamma}_{u_{1},u_{2}}\), given in (61). The GMGF for the TWDP model for \(n\in\mathbb{N}\) can be obtained from [24, eq. (24) for \(\mu=1\)] as
\[\phi_{\gamma_{u_{1},u_{2}}}^{(n)}\left(s\right)=\bar{\gamma}_{u_{1},u_{2}}^{n}\,n!\,e^{\frac{K_{u_{1},u_{2}}\bar{\gamma}_{u_{1},u_{2}}s}{1+K_{u_{1},u_{2}}-\bar{\gamma}_{u_{1},u_{2}}s}}\sum_{q=0}^{n}\binom{n}{q}\frac{K_{u_{1},u_{2}}^{q}}{q!}\] \[\times\frac{\left(1+K_{u_{1},u_{2}}\right)^{q+1}}{\left(1+K_{u_{1},u_{2}}-\bar{\gamma}_{u_{1},u_{2}}s\right)^{q+n+1}}\sum_{r=0}^{q}\binom{q}{r}\left(\frac{\Delta_{u_{1},u_{2}}}{2}\right)^{r}\] \[\times\sum_{l=0}^{r}\binom{r}{l}I_{2l-r}\left(\frac{K_{u_{1},u_{2}}\Delta_{u_{1},u_{2}}\bar{\gamma}_{u_{1},u_{2}}s}{1+K_{u_{1},u_{2}}-\bar{\gamma}_{u_{1},u_{2}}s}\right), \tag{74}\]
which can be written, by using the relations (58)-(63), as
\[\phi_{\gamma_{u_{1},u_{2}}}^{(n)}(s)=\bar{\gamma}^{n}n!\,e^{\frac{\bar{\gamma}s}{1+K-\bar{\gamma}s}\left(u_{1}K_{1}+u_{2}K_{2}\right)}\sum_{q=0}^{n}\binom{n}{q}\frac{1}{q!}\frac{\left(1+K\right)^{q+1}}{\left(1+K-\bar{\gamma}s\right)^{q+n+1}}\] \[\times\sum_{r=0}^{q}\binom{q}{r}\left(\frac{\sqrt{u_{1}u_{2}}K\Delta}{2}\right)^{r}\sum_{p=0}^{q-r}\binom{q-r}{p}\left(u_{1}K_{1}\right)^{p}\left(u_{2}K_{2}\right)^{q-r-p}\] \[\times\sum_{l=0}^{r}\binom{r}{l}I_{2l-r}\left(\sqrt{u_{1}u_{2}}\frac{K\Delta\bar{\gamma}s}{1+K-\bar{\gamma}s}\right). \tag{75}\]
The GMGF of IFTR fading is obtained by averaging (75) over all possible realizations of \(\zeta_{1},\zeta_{2}\) as
\[\phi_{\gamma}^{(n)}\left(s\right)=\int_{0}^{\infty}\int_{0}^{\infty }\phi_{\gamma_{u_{1},u_{2}}}^{(n)}\left(s\right)f_{\zeta_{1}}(u_{1})f_{\zeta_ {2}}(u_{2})du_{1}du_{2}. \tag{76}\]
Introducing (75) into (76) we can write
\[\phi_{\gamma}^{(n)}(s)=\bar{\gamma}^{n}n!\frac{m_{1}^{m_{1}}}{\Gamma(m_{1})}\frac{m_{2}^{m_{2}}}{\Gamma(m_{2})}\sum_{q=0}^{n}\binom{n}{q}\frac{1}{q!}\frac{\left(1+K\right)^{q+1}}{\left(1+K-\bar{\gamma}s\right)^{q+n+1}}\] \[\times\sum_{r=0}^{q}\binom{q}{r}\left(\frac{K\Delta}{2}\right)^{r}\sum_{p=0}^{q-r}\binom{q-r}{p}K_{1}^{p}K_{2}^{q-r-p}\sum_{l=0}^{r}\binom{r}{l}\mathcal{H}_{2}, \tag{77}\]
where we have defined
\[\mathcal{H}_{2}\triangleq\int_{0}^{\infty}u_{2}^{m_{2}+q-r/2-p-1}e^{-\left(m_{2}-\frac{K_{2}\bar{\gamma}s}{1+K-\bar{\gamma}s}\right)u_{2}}\mathcal{I}_{2}(u_{2})du_{2}, \tag{78}\] \[\mathcal{I}_{2}(u_{2})\triangleq\int_{0}^{\infty}u_{1}^{m_{1}+p+r/2-1}e^{-\left(m_{1}-\frac{K_{1}\bar{\gamma}s}{1+K-\bar{\gamma}s}\right)u_{1}}\] \[\qquad\qquad\times I_{2l-r}\left(\sqrt{u_{1}u_{2}}\frac{K\Delta\bar{\gamma}s}{1+K-\bar{\gamma}s}\right)du_{1}. \tag{79}\]
Note that \(\mathcal{H}_{2}\) and \(\mathcal{I}_{2}\) are actually the same integrals as \(\mathcal{H}_{1}\) and \(\mathcal{I}_{1}\) defined, respectively, in (69) and (70), although for different coefficients, which are now in some cases rational functions of \(s\). Therefore, following the same procedure as in (69)-(73), a closed-form expression can be found for \(\mathcal{H}_{2}\) as given in (80), which together with (77) yields (42).
|
2303.06703 | Near-field imaging of domain switching in in-operando VO$_{2}$ devices | Experimental insight in the nanoscale dynamics underlying switching in novel
memristive devices is limited owing to the scarcity of techniques that can
probe the electronic structure of these devices. Scattering scanning near-field
optical microscopy is a relatively novel approach to probe the optical response
of materials with a spatial resolution well below the diffraction limit. We use
this non-invasive tool to demonstrate that it provides detailed information on
the origin and memory behaviour of ultra-thin films of vanadium dioxide.
Simultaneously recorded $I(V)$ characteristics and near-field maps show that
discontinuities in the I(V) characteristics arise from the sudden switching of
insulating domains to metallic domains. At the threshold voltage, the domains
form a continuous current path. The metallic domains persist once the bias
voltage is removed, but narrow monoclinic regions appear around the domain
boundaries. The key advantage of our approach is that it provides detailed
information on the electronic structure at length scales raging from tens of
nanometers up to tens of microns and is easily applied under \textit{in
operando} conditions. | Sergio Salvía Fernández, Xing Gao, Silvia Cassanelli, Stephan Bron, Hans Hilgenkamp, Erik van Heumen | 2023-03-12T16:50:29Z | http://arxiv.org/abs/2303.06703v1 | # Near-field imaging of domain switching in in-operando VO\({}_{2}\) devices
###### Abstract
Experimental insight in the nanoscale dynamics underlying switching in novel memristive devices is limited owing to the scarcity of techniques that can probe the electronic structure of these devices. Scattering scanning near-field optical microscopy is a relatively novel approach to probe the optical response of materials with a spatial resolution well below the diffraction limit. We use this non-invasive tool to demonstrate that it provides detailed information on the origin and memory behaviour of ultra-thin films of vanadium dioxide. Simultaneously recorded \(I(V)\) characteristics and near-field maps show that discontinuities in the I(V) characteristics arise from the sudden switching of insulating domains to metallic domains. At the threshold voltage, the domains form a continuous current path. The metallic domains persist once the bias voltage is removed, but narrow monoclinic regions appear around the domain boundaries. The key advantage of our approach is that it provides detailed information on the electronic structure at length scales raging from tens of nanometers up to tens of microns and is easily applied under _in operando_ conditions.
pacs: 74.25.-b, 74.25.+d

Scattering scanning near field optical microscopy (sSNOM) is a versatile, non-invasive probe of the dielectric and plasmonic response of correlated and 2D materials. Based on tapping-mode atomic force microscopy (AFM), an sSNOM can map out optical contrast on a scale of a few tens of nanometers at any wavelength due to the interaction between a polarizable AFM cantilever, an incoming laser and the sample under study. Some of the more recent enhancements of the sSNOM implementation include enabling broadband spectroscopy [1; 2], ultra-fast pump-probe microscopy [3; 4; 5] and THz sSNOM [6]. An equally important endeavour has been improving the ability to manipulate material parameters in-situ. Examples include manipulating the optical response of a material through changing the carrier concentration [7; 8] or exploring phase transitions with variable temperature microscopy [9; 10]. Invariably, adding new capabilities to the sSNOM repertoire has led to new insights in the electronic properties of materials on nanometer length scales [11; 12].
In this article we map out the near-field optical response _in operando_ on a memristive device as it is driven through switching events. The device is based on a vanadium dioxide thin film, which features a reversible insulator to metal transition. Vanadium dioxide is a poster child for the application of complex oxide materials in a variety of room-temperature, energy-saving applications [13; 14]. As a platform for a memristor device, VO\({}_{2}\) has been investigated in a variety of different forms [15; 16; 17; 18; 19; 20; 21]. Depending on the details, switching can be as fast as a few hundred femtoseconds [5; 22]. Particular interest has gone to fully strained films [23; 24; 25]. These ultra-thin films are fully in the rutile phase in both the insulating and metallic states, which is expected to lead to enhanced durability of devices.
Several studies have investigated whether the current driven transition arises from electric field effects [16; 25; 26; 27; 28] or from Joule heating [15; 29; 30; 31]. It appears that both effects are present to some degree and that the balance depends on detailed properties of the structures studied (film thickness, substrate etcetera). In-operando probing of device switching has provided insight in the parameters determining the switching speed [19], in the role played by film thickness and crack formation [25] and in the role played by nanoscale domains in unstrained films [16].
Scattering near-field optical microscopy has proven to be a valuable tool for the study of VO\({}_{2}\) crystals and films [32; 33; 34; 35; 36; 37], but to the best of our knowledge has not been applied to the study of in-operando devices or fully strained thin films. In this article we use it to compare the insulator to metal transition (IMT) both as a function of temperature and of applied electric field, with 20 nm spatial resolution. We investigate the heating induced by the CO\({}_{2}\) laser and its effect on the onset of the IMT. We use the sSNOM contrast to elucidate the mesoscopic changes in the electronic structure as a current is applied. Combined with simultaneously measured \(I(V)\) characteristics, this
enables us to pinpoint the origin of irregular steps in \(I(V)\) traces.
In our experiments, we combine scattering scanning near-field optical microscopy (sSNOM) with transport measurements. Strained VO\({}_{2}\) films of 13 nm thickness are grown on a single crystal TiO\({}_{2}\) substrate using pulsed laser deposition [18]. Subsequently, sets of eight gold contacts are created using photolithography (Fig. 1a). Each set consists of two rows of four contacts that are separated by 3 \(\mu\)m, while the rows are separated by 12 \(\mu\)m. This layout allows us to determine the two-point resistance from \(V_{S}/I_{S}\) by using pairs of contacts or a four-point resistance, \(R\,=\,V_{R}/I_{R}\), by employing the four contacts on a row. The smaller spacing between contacts on a row is less ideal for the sSNOM experiments, as only a small region between the contacts can be probed. We therefore use two-point \(I_{S}(V_{S})\) measurements between a contact in the top and bottom row to simultaneously record the resistance, while scanning the near-field response over a large 15 \(\mu\)m by 10 \(\mu\)m area. We choose this area to be in between the contacts (yellow square in Fig. 1a) over which a voltage \(V_{S}\) is applied, so that the current flows approximately perpendicular to the scanning direction. We use the gold contacts as reference for the near-field maps, enabling quantitative comparisons between experiments and eliminating the effect of small changes in the CO\({}_{2}\) laser intensity. All sSNOM data is displayed on a color scale labelled as S\({}_{2}\)/S\({}_{2,Au}\). In other words, the value of the second harmonic of the demodulated near-field signal in every pixel is divided by the average value of the corresponding signal measured on a nearby gold contact. Such gold reference images are recorded frequently, e.g. after changing temperature or voltage. AFM topographies are collected as well, showing an atomically smooth surface over the entire field of view with only a few domain lines running across it (white, diagonal lines in Fig. 1a). The near-field optical response is measured at a wavelength \(\lambda\,=\,10.7\,\mu\)m (116 meV). At this wavelength large optical contrast is expected between metallic and insulating regions [32]. X-ray diffraction measurements on the as-deposited VO\({}_{2}\) film indicate that the film was principally in rutile phase. The associated strain is accommodated by monoclinic-phase planar defects representing the domain lines mentioned above. As observed previously, films with thickness below 35 nm are almost completely in the rutile phase [24; 25]. The insulator-to-metal transition (IMT) in our films is therefore purely electronic in nature. Figure 1b shows that the resistance changes by almost three orders of magnitude with transition temperatures of \(T_{C}\) = 289 K and \(T_{H}\) = 295 K, recorded during cooling (C) and heating (H), respectively. The width of the transition is approximately 2 K, with a hysteresis of 6 K.
The application of a voltage while in the insulating state can also trigger the IMT [18; 25; 29; 30; 38], as shown in Fig. 1c. Starting from a temperature below the IMT (293 K in Fig. 1c), the system is in a high-resistance state until a threshold voltage is reached. Beyond this threshold the current increases hundredfold, signalling the formation of a metallic state. We will call this the forming cycle. In subsequent \(I(V)\) cycles the metallic state persists, as evidenced by the distinctly different cycles 2 and 3 in Fig. 1c. In these subsequent cycles small steps in current at particular voltages are also present, which are nearly reproducible from cycle to cycle. The insulating state can be restored by cooling below \(T_{C}\).
One of the interesting aspects of VO\({}_{2}\) is that, besides dc electric fields, the IMT can be induced through other external knobs such as strain [34] and ultrafast light pulses [28; 39]. This also plays an important role in our experiments as demonstrated in Fig. 1d. The sSNOM data is taken at T = 292 K close to \(T_{H}\) in the insulating state.
Figure 1: (a) Experimental geometry. The yellow square indicates the relation between AFM topography (on the right) and the contacts. (b): Resistance (\(R\,=\,V_{R}/I_{R}\)) recorded during a heating and cooling cycle. Also indicated are the two-point resistances \(V_{S}/I_{S}\), measured during the sSNOM experiments (yellow squares). A small shift in temperature is due to laser-induced heating. Red solid circles indicate temperatures where we investigated heating due to the CO\({}_{2}\) laser, see panel (e). (c): \(I_{S}(V_{S})\) characteristic measured at 293.2 K. In the first cycle the resistance is high until a threshold voltage is reached at 1.7 V. The \(I_{S}(V_{S})\) characteristic then follows a different voltage dependence until the sample is reset by cooling well below the IMT. (d): sSNOM image recorded at 292 K during a heating cycle. The arrow indicates the sudden appearance of a metallic domain induced by laser heating. (e): change in local temperature determined from the corresponding change in resistance when the CO\({}_{2}\) laser is focused on the sample. See the text for more details.
Indeed, a large part of the field of view (FOV) has a low near-field amplitude, indicative of the insulating state [32]. In the middle of the scan we observe that the CO\({}_{2}\) laser used in the experiment is also capable of inducing the IMT. This can be seen by the sudden switching of a domain (indicated by a yellow arrow). Our experimental geometry allows us to determine the laser-induced heating by effectively using the induced change in resistivity as a thermometer.
For this, we start by cooling the sample to 283 K and then slowly heat the film to a selected temperature. With the AFM engaged, we turn the CO\({}_{2}\) laser 'on' and 'off' with a flip-mounted mirror. The effective temperature increase in the film can now be estimated by monitoring the change in resistivity measured in the dark and illuminated states. To convert resistivity to temperature readings, we invert the resistivity data of Fig. 1b at the temperatures shown by dark-red solid circles. Figure 1e shows the resulting \(\Delta T\) traces as a function of time. The laser increases the temperature of the film by as much as 0.6 K at the lowest measured starting temperature. The magnitude of \(\Delta T\) decreases with increasing starting temperature. This is likely related to the changes in absorption coefficient, specific heat and thermal conductivity that take place near the IMT. We notice that close to \(T_{H}\) the resistivity does not return to the original value, resulting in a finite \(\Delta T\) also in the 'off' state. This can be related to the switching of a large domain to the metallic state induced by the laser, similar to what is observed in Fig. 1d. As we will follow the IMT using sSNOM and would like to exclude sudden switching resulting from laser heating, we carry out all subsequent experiments while cooling from the metallic state. For every measurement, we set the temperature and record the two-point resistance between contacts at the top and bottom of the image. The \(V_{S}/I_{S}\) values obtained in this way are shown in Fig. 1c as yellow squares. Note that they cannot be directly compared to the four-point resistance (solid lines) since the two-point resistance probes a different area of the surface.
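A minimal sketch of this resistance-to-temperature conversion is given below: assuming a monotonic branch of the calibration curve of Fig. 1b, the resistances measured with the laser off and on are mapped back to temperatures by interpolation, and their difference gives \(\Delta T\). The variable names and the example calibration values are hypothetical.

```python
import numpy as np

def laser_delta_T(R_dark, R_light, T_cal, R_cal):
    """Estimate the laser-induced heating from the change in resistance.

    T_cal, R_cal : calibration arrays (temperature, resistance) taken on a
                   monotonically decreasing branch of the R(T) curve
    R_dark, R_light : resistances measured with the CO2 laser off / on
    """
    # np.interp expects increasing x-values; resistance decreases with
    # temperature here, so interpolate against the reversed arrays.
    T_dark = np.interp(R_dark, R_cal[::-1], T_cal[::-1])
    T_light = np.interp(R_light, R_cal[::-1], T_cal[::-1])
    return T_light - T_dark

# Hypothetical calibration branch approaching the IMT from below:
T_cal = np.array([283.0, 285.0, 287.0, 289.0, 291.0, 293.0])
R_cal = np.array([9.0e5, 7.5e5, 6.0e5, 4.2e5, 2.5e5, 1.1e5])
print(laser_delta_T(R_dark=6.0e5, R_light=5.3e5, T_cal=T_cal, R_cal=R_cal))
```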
Figure 2 shows the main experimental result of this paper: sSNOM images recorded as function of temperature (top row) and as function of applied voltage \(V_{S}\) (bottom row). We first focus on the evolution of the near-field signal, recorded while cooling down from 343 K. Initially, the near-field response is fully metallic, but near the phase transition, some domains start to become insulating. As has been observed previously [25], thin films often consist of domains that can have slightly different critical temperatures. In films with a thickness below 15 nm, the domains can become very large and even occupy the whole film.
The 294 K panel shows an almost completely metallic film (corresponding to yellow in our color scale), with thin lines (dark) that follow the white domain lines observed in the topography of Fig. 1a. We note again that these lines are thought to be VO\({}_{2}\) in a monoclinic phase. At 288.5 K meandering, insulating filaments emerge that creep in from the domain boundary. These filaments permeate the domain as temperature is further decreased. One novel aspect compared to previous experiments [32; 33; 34; 37; 40; 41] is that our films consist of large single-crystalline domains that are fully strained and thus already in the rutile phase. The transition within each domain is therefore purely electronic, and this is reflected in only weak pinning of the nucleation sites of the
Figure 2: Top row: Gold referenced near-field scattering amplitude \(S_{2}/S_{2,Au}\) maps recorded at selected temperatures while cooling from the metallic state. Bottom row: \(S_{2}/S_{2,Au}\) recorded at various voltages V\({}_{\rm S}\) during a forming \(I(V)\) cycle (at 293.2 K). The threshold voltage for this measurement occurred at 1.25 V. The panel labelled with 0.0* V was recorded immediately after turning off the applied voltage. Blue squares in the two right-most panels indicate the same domain.
filaments. As the experiment is repeated, the formation of filaments starts from different points on the boundary.
In the bottom row panels of Fig. 2 we contrast the temperature dependence of the near-field signal with the voltage dependence of the signal. As a starting point, we first cool the film to 283 K and subsequently heat it to 293.2 K, about two degrees below \(T_{H}\). Without applying any voltage, we observe that some domains are already metallic (panel labelled 0.0 V). Comparing the \(V_{S}\) = 0 V data with the data presented in the top row of Fig. 2, we note that the formation of the metallic state proceeds in reverse compared to the formation of the insulating state. One can still see insulating filaments within the domain that are absorbed in the metallic matrix as the voltage is increased. As we increase the voltage from \(V_{S}\) = 0 V to \(V_{S}\) = 0.8 V, we observe two small steps in the current that flows between the contacts. This change in current appears to correspond to the switching of two complete domains: one in the middle of our field of view and one on the far left side of it. Time-domain reflectivity experiments suggest that the formation of filaments within a domain can take place on microsecond time scales [19], which is too fast for our scanning method to register, and we consequently only see the end state in which the domain is fully metallic.
While performing the sSNOM measurements, we noticed that a large step in current occurs at \(V_{S}\) = 1.15 V, a lower threshold voltage than that of the step seen in Fig. 1c. This offset in threshold voltage is due to the additional heating induced by the CO\({}_{2}\) laser, as can be verified by repeating the experiment with the laser in the 'off' state. In the \(V_{S}\) = 1.3 V data presented in Fig. 2 we see that all domains in our field of view are fully in the metallic state, with the exception of a few narrow domains in the top right corner. We also note that the average intensity of the domains increases and that insulating filaments within a domain have disappeared.
Unexpectedly, only a small change is observed when the voltage is set to zero, \(V_{S}\) = 0* V. The most notable change is that the lines between domains become more prominent and increase in width. However, the average sSNOM intensity within domains hardly decreases. We monitored the persistence of this state over time using a small probing current (1 \(\mu\)A). We observed no significant change in the resistivity over a 17-hour period in this way. We have therefore clearly identified the source of the forming effect in the first cycle: starting from a mostly insulating 'low' state (0.0 V), metallic domains form upon application of a voltage (0.8 V). As the voltage increases, more domains become conducting and the device changes to a 'high' state when all domains between the contacts are in the metallic state (1.2 - 1.3 V). As the device is turned off, the interior of the domains remains metallic while the domains are isolated from each other by narrow insulating regions. When we go through a second cycle, these insulating regions will again become metallic, giving rise to the small steps seen in cycles 2 and 3 in Fig. 1c.
A more quantitative picture of the changes underlying the \(I(V)\) characteristics can be obtained by plotting the intensity variation within a field of view in a histogram representation. Fig. 3a shows the histograms for the (extended) series of measurements presented in the bottom row of Fig. 2. At \(V_{S}\) = 0 V, most of the domains are insulating, and this is represented by a large, sharp peak in the histogram with average signal intensity S\({}_{2}\)/S\({}_{2,Au}\) \(\approx\) 0.02. A broader peak centred around S\({}_{2}\)/S\({}_{2,Au}\) \(\approx\) 0.25 can also be seen and corresponds to the metallic domains. As \(V_{S}\) is increased, some weight shifts from the S\({}_{2}\)/S\({}_{2,Au}\) \(\approx\) 0.02 peak to S\({}_{2}\)/S\({}_{2,Au}\) \(\approx\) 0.25 until the threshold voltage \(V_{S}\) = 1.15 V is reached. As can be seen from the histogram, at \(V_{S}\) = 1.2 V the maximum around S\({}_{2}\)/S\({}_{2,Au}\) \(\approx\) 0.25 suddenly shifts to higher signal values (S\({}_{2}\)/S\({}_{2,Au}\) = 0.35) and the sharp peak at S\({}_{2}\)/S\({}_{2,Au}\) = 0.02 is strongly reduced. Finally, at \(V_{S}\) = 1.3 V the remaining insulating regions around the domain lines turn metallic and the S\({}_{2}\)/S\({}_{2,Au}\) \(\approx\) 0.02 peak disappears. When the voltage is set to \(V_{S}\) = 0, a small signal at S\({}_{2}\)/S\({}_{2,Au}\) \(\approx\) 0.02 reappears, signalling a small contribution from the insulating phase.
The mesoscopic transport behaviour of the film can be largely predicted by the changes taking place in the histograms. Figure 3b compares the relative change in insulating area at a particular voltage with respect to the insulating area at \(V_{S}\) = 0 V (i.e. \(A_{ins.}(V_{S})/A_{ins.}(V_{S}\,=\,0)\)) with the relative change in resistance (\(R(V_{S})/R(V_{S}\,=\,0)\)). To obtain the relative change in insulating area, \(A_{ins.}(V_{S})\), we integrate the histograms of panel 3a up to S\({}_{2}\)/S\({}_{2,Au}\) = 0.1. As Fig. 3b shows, the changes in resistance are accompanied by distinct changes in the total area of insulating domains. Small differences (for example, the step in resistance around \(V_{S}\) = 0.8 V) are due to the fact that some domains that contribute to the total current path fall outside our field of view.
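The insulating-area fraction entering this comparison can be obtained directly from the gold-referenced maps; a minimal sketch, assuming each map is available as a NumPy array and using the threshold S\({}_{2}\)/S\({}_{2,Au}\) = 0.1 quoted above, is given below (function and variable names are ours).

```python
import numpy as np

def insulating_fraction(s2_map, threshold=0.1):
    """Fraction of pixels with gold-referenced signal below the insulating
    threshold, i.e. the integral of the histogram up to S2/S2_Au = 0.1."""
    values = np.ravel(s2_map)
    return np.count_nonzero(values < threshold) / values.size

def relative_insulating_area(maps_by_voltage):
    """A_ins(V_S)/A_ins(V_S = 0) for a dict {voltage: S2/S2_Au map};
    assumes an entry for 0.0 V is present."""
    a0 = insulating_fraction(maps_by_voltage[0.0])
    return {v: insulating_fraction(m) / a0 for v, m in maps_by_voltage.items()}
```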
Structural effects have little influence on the mesoscopic properties of the film, as demonstrated in Fig. 3c. Here we show a partial overlay of the near-field signal (right half) on the topography (left half) under the application of \(V_{S}\) = 5 V at 292.5 K. At such large voltages, which were applied only in our final measurements, the film is sometimes seen to partly detach from the underlying substrate. Despite the 8 nm height variations between some domains, we observed only a weak impact on the resistance or the near-field signal. The dominant effect underlying the memristive device in our experiments is therefore consistent with Joule heating [15; 29; 30; 31]. In contrast to earlier work [15], we find that the current flow between contacts selectively heats domains above their transition temperature. This points to small variations of \(T_{H}\) in individual domains, as has been observed previously [25]. Although we cannot completely exclude electric field effects, we note that reversing the
sign of the voltage resulted in the same overall behaviour of the current and near-field signal.
It has been pointed out previously that strain release in thicker films will lead to domain formation and that domain boundaries can become monoclinic or will have lower strain as compared to the larger domain areas [25]. Our direct measurement of the local optical response shows that these regions are quite sharp (Fig. 3d). Above the IMT, the data taken at 294 K shows large metallic domains separated by domain lines from which a narrow insulating region extends. The spatial resolution in our experiments is approximately 20 nm, enabling us to determine that the insulating regions along the domain lines are 100 - 300 nm wide. Interestingly, the physical domain lines observed in topography are much sharper than this, as Figs. 1a and 3c demonstrate. As temperature is increased, these regions eventually turn metallic as well. In areas of the film with a large domain line density, the intermediate area has a distinctly higher critical temperature, as can be clearly seen in the 294 K data presented in Fig. 2 (top right corner). Similarly, the small, reproducible steps in cycle 2 and cycle 3 (Fig. 3d) are a result of the switching of insulating domain lines separating the large metallic domains. To conclude, we note that the near-field signal near the insulating strips is slightly lower, perhaps indicating weak effects of reduced strain within the domain itself.
To summarise, nanoscale near-field imaging of _in operando_ devices is a feasible and promising new tool to probe the connection between microscopic electronic properties and mesoscopic transport. When applied to rutile VO\({}_{2}\) devices, it shows that a memory effect is created by persistent switching of large domains. Our 20 nm spatial resolution allows us to demonstrate that the \(I(V)\) characteristics can be accurately explained by the behaviour of 100 - 300 nm wide insulating strips that are connected to domain boundaries in the film.
|
2310.09508 | Findability: A Novel Measure of Information Accessibility | The overwhelming volume of data generated and indexed by search engines poses
a significant challenge in retrieving documents from the index efficiently and
effectively. Even with a well-crafted query, several relevant documents often
get buried among a multitude of competing documents, resulting in reduced
accessibility or `findability' of the desired document. Consequently, it is
crucial to develop a robust methodology for assessing this dimension of
Information Retrieval (IR) system performance. While previous studies have
focused on measuring document accessibility disregarding user queries and
document relevance, there exists no metric to quantify the findability of a
document within a given IR system without resorting to manual labor. This paper
aims to address this gap by defining and deriving a metric to evaluate the
findability of documents as perceived by end-users. Through experiments, we
demonstrate the varying impact of different retrieval models and collections on
the findability of documents. Furthermore, we establish the findability measure
as an independent metric distinct from retrievability, an accessibility measure
introduced in prior literature. | Aman Sinha, Priyanshu Raj Mall, Dwaipayan Roy | 2023-10-14T06:32:06Z | http://arxiv.org/abs/2310.09508v1 | # Findability: A Novel Measure of Information Accessibility
###### Abstract.
The overwhelming volume of data generated and indexed by search engines poses a significant challenge in retrieving documents from the index efficiently and effectively. Even with a well-crafted query, several relevant documents often get buried among a multitude of competing documents, resulting in reduced accessibility or "findability" of the desired document. Consequently, it is crucial to develop a robust methodology for assessing this dimension of Information Retrieval (IR) system performance. While previous studies have focused on measuring document accessibility disregarding user queries and document relevance, there exists no metric to quantify the findability of a document within a given IR system without resorting to manual labor. This paper aims to address this gap by defining and deriving a metric to evaluate the findability of documents as perceived by end-users. Through experiments, we demonstrate the varying impact of different retrieval models and collections on the findability of documents. Furthermore, we establish the findability measure as an independent metric distinct from retrievability, an accessibility measure introduced in prior literature.
Footnote †: _To appear in the proceedings of CIKM '23_, October 21–25, 2023, Birmingham, UK
The retrievability measure focuses exclusively on the system-oriented aspect of document retrieval, neglecting user intent during a search. Further, the retrievability measure provides only a general approximation of the likelihood of a document being retrieved, regardless of which query is posed to the IR system. This approximation merely attempts to gauge the accessibility of documents within the collection facilitated by the IR system, disregarding the user's perspective. When users interact with a search engine, they express their specific information need by submitting a query, reflecting their intent and purpose for utilizing the IR system. Therefore, to accurately estimate document accessibility within the collection, it is crucial to account for both the user and their query together with the IR system during the search process.
The ability to access relevant documents in response to a user's query lies at the heart of document findability from the user's standpoint. For instance, presenting a document related to "Java" in response to a user's search for "Python" does not contribute to the findability of "Java" because the document lacks relevance to the query posed. In this context, the findability of a document refers to its capacity to be located solely for queries whose intent is satisfied or addressed by that particular document. In other words, a document is considered _found_ when the user encounters it in the search results for the very query they entered into the IR system in order to find that document. A similar approach was followed in (Krishnan et al., 2017), where the authors introduced _Page Hunt_, a game specifically designed to collect web search log data. The game involves presenting participants with webpages and tasking them with locating those pages using the provided search interface.
Previous studies (Bradbury et al., 2016; Bradbury et al., 2016) have introduced retrievability measures to estimate the ease of document retrieval using search engines. Additionally, other researchers (Bradbury et al., 2016; Bradbury et al., 2016; Bradbury et al., 2016) have explored notions akin to findability from the perspective of browsing and navigability, evaluating how easily users can navigate websites. The authors of (Bradbury et al., 2016) argue that successful validation of findability measures could enable the development of tools to assist Information Architects in analyzing websites, offering insights into the findability and utilization of content, as well as identifying features (such as terms, links, etc.) that contribute to the ease or difficulty of locating specific pages.
## 3. Findability - A Measure of Accessibility
Consider an IR system that employs model \(R\) to retrieve relevant documents in response to user queries from a document collection \(D\). Let \(Q_{d}\) represent the set of all possible queries for which document \(d\) (where \(d\in D\)) is deemed relevant. We refer to these queries as "relevant queries" specifically for that particular document. The _findability_ of a document is then defined as the expectation of the likelihood of a user finding that document for every query \(q\) (\(q\in Q_{d}\)). Mathematically, the findability measure is formulated as:
\[f(d)=\frac{1}{|Q_{d}|}\sum_{q\in Q_{d}}\xi(p_{dq},c) \tag{1}\]
In Equation 1, \(p_{dq}\) is the rank of document \(d\) in the search result against query \(q\). The function \(\xi(p_{dq},c)\) is a generalized convenience function that captures users' willingness to explore the search results up to rank \(p_{dq}\), while \(c\) denotes the threshold rank at which users cease examining the ranked list.
The function \(\xi(p_{dq},c)\) is subject to two boundary constraints. The first constraint ensures that when document \(d\) appears at the top rank (\(p_{dq}=1\)), it represents the most convenient and optimal scenario for the user; \(\xi(1,c)\) is set to 1 to reflect this favorable situation. On the other hand, users typically do not continue indefinitely exploring the ranked list of search results; they stop investigating (denoted by \(c\) in Equation 1) at some point. In the worst-case scenario, if a document appears in the search results after the user has stopped investigating, it means the user does not find that document. Thus, for ranks \(p_{dq}\) greater than \(c\), we set \(\xi(p_{dq},c)=0\). With these two constraints, the function \(\xi(.)\) is bounded within the range of \([0,1]\), making it a suitable measure for interpretation.
To define the convenience function \(\xi(.)\), we employ the concept of Click Through Rate (CTR): the percentage of clicks that a document at a certain rank receives out of the total clicks users make on documents in the ranked result list. The CTR of a search engine can thus be taken as a proxy for users' willingness, or the effort it takes, to investigate a certain rank in the results. Notably, analyses conducted by Semrush Inc. and Backlinko\({}^{2}\) based on 4 million Google search results offer valuable insights into CTR for top ranks in the context of web searches. These findings indicate that a mere 0.63% of users click on results beyond rank position 10, suggesting that a majority of users discontinue exploring the ranked list after a certain rank threshold. These observations on user CTR lead us to propose the following two forms for the convenience function \(\xi(p_{dq},c)\) in the context of the findability measure (a short code sketch illustrating the computation follows the two definitions below):
Footnote 2: [https://backlinko.com/google-ctr-stats](https://backlinko.com/google-ctr-stats)
* **Exponential decay of Convenience:** Considering an exponential decay with a decay rate of approximately one-third, the convenience function can be defined as: \[\xi(p_{dq},c)=\begin{cases}e^{-(p_{dq}-1)/3}&\text{if }p_{dq}\leq c\\ 0&\text{if }p_{dq}>c\end{cases}\] (2)
* **Inverse law of Convenience:** An alternative approach to incorporate the decaying effect is by considering the inverse of the document rank, which can be expressed as follows: \[\xi(p_{dq},c)=\begin{cases}\frac{1}{p_{dq}}&\text{if }p_{dq}\leq c\\ 0&\text{if }p_{dq}>c\end{cases}\] (3)
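As a concrete illustration, the following minimal Python sketch evaluates the findability of a single document via Eq. (1), given the ranks at which it appears for its relevant queries; both convenience functions, Eq. (2) and Eq. (3), are included. The function and variable names are ours and purely illustrative.

```python
import math

def convenience(rank, c=100, law="inverse"):
    """xi(p_dq, c): the user's convenience of finding a document at `rank`."""
    if rank > c:                          # user stops inspecting after rank c
        return 0.0
    if law == "inverse":                  # Eq. (3): inverse law of convenience
        return 1.0 / rank
    return math.exp(-(rank - 1) / 3.0)    # Eq. (2): exponential decay

def findability(ranks, c=100, law="inverse"):
    """f(d) of Eq. (1): average convenience over the relevant query set Q_d.

    ranks : one entry per relevant query q in Q_d, holding the rank p_dq of
            document d in the result list (use a value > c if d is not
            retrieved within the inspected part of the ranking).
    """
    if not ranks:
        return 0.0
    return sum(convenience(r, c, law) for r in ranks) / len(ranks)

# A document retrieved at ranks 1, 3 and 120 for its three relevant queries:
print(findability([1, 3, 120], c=100))    # (1 + 1/3 + 0) / 3 ≈ 0.444
```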
### Estimating Document Findability
In order to estimate _findability_ scores of documents in an operational setting, one crucial requirement is the creation of a relevant query set for every document. Ideally, we would require a comprehensive list of all the queries for which a document could be considered relevant. However, generating such an exhaustive list manually would be an impractically labor-intensive task, even for a moderately-sized collection of documents: human experts would need to read the documents and formulate search queries that are deemed relevant to the respective documents. To avoid this manual effort, a known-item query generation strategy can be employed as a proxy, automatically generating a smaller but representative sample of relevant queries for each document.
### Mean Findability and Findability Bias
When evaluating document findability within a fixed collection, two aspects of access provided by different retrieval models can be assessed. Firstly, the _mean of findability_ scores for all documents in the collection reflects the overall effectiveness of the retrieval model in retrieving the correct document at the top. This provides a measure of the aggregate performance of the retrieval model in delivering relevant documents. Secondly, the Gini coefficient (Gini, 2017) can be employed to quantify the disparity of access imposed by the retrieval model across the collection. In this context, the Gini coefficient represents the imbalance of access among the documents within the collection itself. It captures the extent to which certain documents are favored over others in terms of findability. By considering both the mean findability, denoted as \(\langle f\rangle\), and the findability bias, represented by the Gini coefficient \(G\), a comprehensive assessment of document findability offered by a retrieval model can be obtained. This dual approach provides a holistic understanding of how effectively and fairly the retrieval model enables access to the documents in the collection.
The applicability of Gini coefficient as a measure of inequality in the context of accessibility in IR has been employed in earlier studies (Beng et al., 2017; Wang et al., 2018). This measure, borrowed from economics and social sciences, provides a quantitative way to assess the level of inequality in access to information. Lorenz curve (Loraz, 2010; Wang et al., 2018), the graphical representation of Gini coefficient is used to visualize the deviation of wealth distribution from equality. In the context of findability, the Gini coefficient can be computed as follows:
\[G=\frac{\sum_{i=1}^{N}(2i-N-1)\cdot f(d_{i})}{N\sum_{j=1}^{N}f(d_{j})} \tag{4}\]
where \(f(d_{i})\) and \(f(d_{j})\) are the findability scores of the \(i^{th}\) and \(j^{th}\) documents when documents are ordered in ascending order of their findability scores, and \(N\) is the total number of documents in the collection. The Gini coefficient \(G\) ranges from 0 to 1, where \(G=0\) means no bias (i.e., an equal distribution, which implies \(f(d)\) is equal for all documents) and \(G=1\) indicates maximum bias (i.e., all documents have \(f(d)=0\) except one document), implying that only one document consistently appears at the top ranks for its relevant queries, while the rest of the documents are never found within the top ranks and remain hidden among other documents (that take up the top \(c\) positions).
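A minimal sketch of the computation in Eq. (4), assuming the findability scores of the collection are available as a list or array, could look as follows (names are illustrative).

```python
import numpy as np

def gini(findability_scores):
    """Gini coefficient G of Eq. (4) over a collection's findability scores."""
    f = np.sort(np.asarray(findability_scores, dtype=float))  # ascending order
    n = f.size
    i = np.arange(1, n + 1)                                    # ranks 1..N
    return float(np.sum((2 * i - n - 1) * f) / (n * np.sum(f)))

# Equal scores give G = 0; a single findable document pushes G towards 1.
print(gini([0.5, 0.5, 0.5, 0.5]))   # 0.0
print(gini([0.0, 0.0, 0.0, 1.0]))   # 0.75 for N = 4, approaching 1 as N grows
```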
## 4. Empirical Analysis
In this section, we present an experiment that showcases a practical use-case scenario for the findability measure. We evaluate the Mean Findability and Findability Bias of three standard retrieval models across three different benchmark collections to determine which retrieval model offers the best accessibility in diverse collection types. This evaluation allows us to assess the effectiveness of each retrieval model in retrieving the correct documents at the top ranks and the degree of bias in document access within each collection. Further, we investigate the relationship between findability and retrievability (Beng et al., 2017). By comparing findability scores with retrievability scores, we uncover that findability scores are independent and distinct from retrievability scores.
### Datasets and Retrieval Models
We evaluate the proposed findability metric using three benchmark datasets: TREC Robust, WT10g, and MS-Marco passage. The statistics of the datasets are presented in Table 1. These datasets are commonly used in information retrieval research and provide diverse document collections for our analysis. In this study, we investigate the findability of documents using three different retrieval models, particularly BM25 (Krizhevsky et al., 2014; Krizhevsky et al., 2015), LM-Dir (Zhu et al., 2015), and DFR-PL2 (Beng et al., 2017).
### Query Generation Method
As discussed in Section 3.1, the estimation of findability requires a set of relevant queries for each document in the collection. In this study, we employ a known-item query generation method, which was introduced in a previous work (Beng et al., 2017). This method has demonstrated its effectiveness in generating query scores that are comparable to manually generated known-item queries. Following the _popular+discriminative_ selection strategy method (Beng et al., 2017), we generate a set of known-item search queries with the average query length \(k\) and mixing parameter \(\lambda\) (as defined in (Beng et al., 2017)) set to 4 and 0 respectively.
To ensure a comprehensive assessment, the generated number of relevant queries is set to 10% of the total number of distinct terms in the document with an upper cap of 50 to maintain computational tractability.
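Our query sets follow the generation procedure of the cited work; purely as an illustration of the idea (not the exact algorithm of that work), the sketch below samples query terms for one document with a weight that mixes within-document popularity and collection-level discriminativeness, controlled by a mixing parameter. All names and the specific scoring are our own simplifications.

```python
import random
from collections import Counter

def known_item_queries(doc_terms, coll_doc_freq, n_queries,
                       query_len=4, lam=0.0, seed=0):
    """Generate simple known-item queries for one document (illustrative only).

    doc_terms     : list of terms occurring in the document
    coll_doc_freq : dict term -> number of collection documents containing it
    lam           : mixes within-document popularity with collection-level
                    discriminativeness of a term
    """
    rng = random.Random(seed)
    tf = Counter(doc_terms)

    def weight(term):
        popular = tf[term] / len(doc_terms)                       # frequent in d
        discriminative = 1.0 / (1 + coll_doc_freq.get(term, 0))   # rare overall
        return lam * popular + (1 - lam) * discriminative

    terms = list(tf)
    weights = [weight(t) for t in terms]
    return [" ".join(rng.choices(terms, weights=weights, k=query_len))
            for _ in range(n_queries)]
```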
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Dataset** & **\# documents** & **Collection type** & **\# terms** & **\# queries** \\ \hline
**TREC Robust** & 528,155 & News & 1,502,031 & 10,230,070 \\
**WT10g** & 1,692,096 & Web & 9,674,707 & 26,041,327 \\
**MS MARCO passage** & 8,841,823 & Web excerpts & 1,410,558 & 19,839,452 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Statistics of datasets used for experimentation.
\begin{table}
\begin{tabular}{l l c c c} \hline \hline & & **Robust04** & **WT10g** & **MS MARCO** \\ \hline \multirow{2}{*}{**LM-Dir**} & \(G\) & 0.1587 & 0.2847 & 0.3774 \\ & \(\langle f\rangle\) & 0.6327 & 0.5209 & 0.5173 \\ \hline \multirow{2}{*}{**BM25**} & \(G\) & 0.1456 & 0.2503 & 0.3116 \\ & \(\langle f\rangle\) & 0.6640 & 0.5985 & 0.5895 \\ \hline \multirow{2}{*}{**DFR-PL2**} & \(G\) & 0.1424 & 0.2497 & 0.3007 \\ & \(\langle f\rangle\) & 0.6672 & 0.6133 & 0.5888 \\ \hline \hline \end{tabular}
\end{table}
Table 2. Gini coefficient \(G\) and Mean Findability \(\langle f\rangle\) of Findability \(f(d)\) for each retrieval model and collection.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c|}{Self query set} & \multicolumn{2}{c}{Known-item query set} \\ \hline & **Pearson’s r** & **Kendall’s \(\tau\)** & **Pearson’s r** & **Kendall’s \(\tau\)** \\ \hline
**Robust04** & -0.0944 & -0.0518 & -0.1292 & -0.1053 \\ \hline
**WT10g** & -0.0088 & 0.0084 & -0.0256 & -0.0287 \\ \hline
**MS MARCO** & 0.0115 & 0.0307 & 0.0388 & 0.0269 \\ \hline \hline \end{tabular}
\end{table}
Table 3. Correlation between Findability \(f(d)\) and Retrievability \(r(d)\) when both \(f(d)\) and \(r(d)\) are estimated using query sets generated from their respective datasets (Self query set) as well as the known-item search query set (Known-item query set); all correlations are statistically significant with \(p<0.05\).
### Experimentation
The findability measure, defined in Equation 1, includes a parameter \(c\) that represents the maximum rank tolerance of users. While the optimal value of \(c\) can vary depending on the specific task and user preferences, we have chosen a value of \(c=100\) for this particular study\({}^{3}\). Based on our initial study of CTR data, we found that the inverse law of convenience provides a better fit. Therefore, we have opted to utilize the inverse law form (Equation 3) for estimating the findability scores.
Footnote 3: We experimented with \(c\) from 10 to 100 varied in steps of 10; considering the space limitation, we are reporting the results for \(c=100\) in this paper.
We report the Gini coefficient \(G\) and the mean findability \(\langle f\rangle\) of the collections in Table 2. From the table, we can observe that as the Gini coefficient increases, there is a noticeable decrease in mean findability. This implies that across the three examined retrieval models, improving the overall findability of documents results in a concurrent decrease in findability bias. Consequently, retrieval models that increase the findability of documents tend to improve accessibility to the collection as a whole by enhancing the findability of the majority of documents. The inequality is graphically presented as a Lorenz curve in Figure 1, where the retrievability curve obtained with the BM25 model is also shown in red to highlight its disparity with the findability bias.
Moreover, it is observed that mean findability decreases and findability bias increases with collection size. This aligns with our intuition that a larger number of documents results in heightened competition among them for higher rankings, thereby diminishing findability. Additionally, it seems that the bias imposed by the retrieval model on the collection becomes more pronounced as the collection size increases.
Among the three retrieval models, LM-Dir yields lower findability for the collection's documents compared to the other two models. While BM25 and PL2 exhibit similar mean findability, the Gini coefficient for PL2 is lower, indicating that PL2 performs better overall in terms of the findability aspect of accessibility-based model performance.
### Comparing with Retrievability
The distinction between retrievability and findability may initially appear subtle. Still, it is essential to acknowledge the substantial conceptual distinctions between them that emerge when considering user behavior and user-centric factors. To clarify the distinction, we perform a retrievability analysis using the BM25 retrieval model for all three collections, utilizing a standard query set commonly used for computing retrievability scores, as mentioned in the original work (Bordes et al., 2017). Further, we compute Pearson's and Kendall's rank correlation coefficients. As presented in Table 3, the obtained correlations reveal an almost negligible association between the two measures. Even when utilizing the same queries for retrievability analysis as those employed for findability evaluation (known-item queries), Table 3's third column reveals a persistent lack of relationship between findability and retrievability. These correlation results serve to establish findability as an independent measure of accessibility that was not previously encompassed by the retrievability measure.
The findability score provides a uniform interpretation and a constant range, akin to a coefficient, unlike retrievability which encounters comparability challenges across diverse studies due to variations in query sets. Moreover, findability measure is well-suited for analyzing the findability of individual documents, whereas a single retrievability score alone may not adequately represent retrievability of a document without additional context or information.
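Given per-document findability and retrievability scores aligned over the same collection, the reported correlation coefficients can be computed with standard SciPy routines; the sketch below (with illustrative names and toy data) shows the computation.

```python
from scipy.stats import pearsonr, kendalltau

def accessibility_correlations(findability_scores, retrievability_scores):
    """Pearson's r and Kendall's tau between document-aligned findability
    and retrievability scores, together with their p-values."""
    r_p, p_p = pearsonr(findability_scores, retrievability_scores)
    tau, p_tau = kendalltau(findability_scores, retrievability_scores)
    return {"pearson": (r_p, p_p), "kendall": (tau, p_tau)}

# Illustrative usage with toy scores for five documents:
print(accessibility_correlations([0.9, 0.4, 0.7, 0.1, 0.5],
                                 [0.2, 0.8, 0.3, 0.6, 0.4]))
```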
## 5. Conclusion and Future Work
This paper introduces a novel metric called 'findability' for measuring document accessibility. Our study demonstrates that all three retrieval models exhibit comparable behavior in regard to findability. Furthermore, we compare findability with retrievability, another existing metric for document accessibility. Future research will investigate the use of the findability measure in fine-tuning IR system parameters in situations where relevance judgments are unavailable. Additionally, exploring the correlation between improved overall findability and enhanced user experience is an area of interest for further investigation.
_Acknowledgement:_ We would like to thank the anonymous reviewer for their valuable and encouraging feedback.
Figure 1. Lorenz Curves for three retrieval models for each of the three collections. Note that, some of the curves are closely situated, potentially making them indistinguishable in the plots, particularly for the Robust04 and WT10g collections. To highlight the contrast between retrievability with findability, the scaled curves for retrievability are also presented in red. |
2310.11347 | Diagonalizing Bose Gases in the Gross-Pitaevskii Regime and Beyond | We present a novel approach to the Bogoliubov theory of dilute Bose gases,
allowing for an elementary derivation of the celebrated Lee-Huang-Yang formula
in the Gross-Pitaevskii regime. Furthermore, we identify the low lying
excitation spectrum beyond the Gross-Pitaevskii scaling, extending a recent
result [3] to significantly more singular scaling regimes. Finally, we provide
an upper bound on the ground state energy in the Gross-Pitaevskii regime that
captures the correct expected order of magnitude beyond the Lee-Huang-Yang
formula. | Morris Brooks | 2023-10-17T15:31:51Z | http://arxiv.org/abs/2310.11347v3 | # Diagonalizing Bose Gases in the Gross-Pitaevskii Regime and Beyond
###### Abstract
We present a novel approach to the Bogoliubov theory of dilute Bose gases, allowing for an elementary derivation of the celebrated Lee-Huang-Yang formula in the Gross-Pitaevskii regime. Furthermore, we identify the low lying excitation spectrum beyond the Gross-Pitaevskii scaling, extending a recent result [3] to significantly more singular scaling regimes. Finally, we provide an upper bound on the ground state energy in the Gross-Pitaevskii regime that captures the correct expected order of magnitude beyond the Lee-Huang-Yang formula.
## 1 Introduction
Given a non-negative function with compact support \(V\in L^{3}(\mathbb{R}^{3})\) let us introduce the rescaled two-particle interaction \(V_{L}(x-y):=L^{2}V(L(x-y))\), where \(L>0\). In the following we will study the many-particle Hamilton operator \(H_{N,\kappa}\) acting on the \(N\)-particle Hilbert space \(L^{2}_{\rm sym}\big{(}\Lambda^{N}\big{)}\), with \(\Lambda:=\big{[}-\frac{1}{2},\frac{1}{2}\big{]}^{3}\) being the three-dimensional torus, given by
\[H_{N,\kappa}: =\sum_{i=1}^{N}(-\Delta)_{x_{i}}+\sum_{1\leq i<j\leq N}V_{N^{1- \kappa}}(x_{i}-x_{j})\] \[=\sum_{k\in 2\pi\mathbb{Z}^{3}}|k|^{2}a_{k}^{\dagger}a_{k}+ \frac{1}{2}\sum_{jk,mn\in 2\pi\mathbb{Z}^{3}}\left(V_{N^{1-\kappa}}\right)_{ jk,mn}a_{k}^{\dagger}a_{j}^{\dagger}a_{m}a_{n}, \tag{1}\]
where \(a_{k}\) and \(a_{k}^{\dagger}\) are the standard annihilation and creation operators on the bosonic Fock space \(\mathcal{F}:=\bigoplus_{n=0}^{\infty}L^{2}_{\rm sym}(\Lambda^{n})\) corresponding to the modes \(e^{ikx}\) for \(k\in 2\pi\mathbb{Z}^{3}\), \(x_{i}-x_{j}\) refers to the distance between two particles on the torus \(\Lambda\) and \(0\leq\kappa\leq\frac{2}{3}\) is an additional scaling parameter. If not indicated otherwise, we will always assume that indices run in the set \(2\pi\mathbb{Z}^{3}\), which we will usually neglect in our notation, and we write \(k\neq 0\) in case the index runs in the set \(2\pi\mathbb{Z}^{3}\setminus\{0\}\).
The study of the low energy properties, such as the ground state energy \(E_{N,\kappa}\), of dilute Bose gases described by the Hamilton operator \(H_{N,\kappa}\) has a long standing history in the mathematics, as well the physics, literature. A rigorous derivation of the leading order
\[E_{N,\kappa}=4\pi\mathfrak{a}N^{1+\kappa}+o_{N\to\infty}\big{(}N^{1+\kappa} \big{)}\,, \tag{2}\]
with the scattering length \(\mathfrak{a}\) defined as \(8\pi\mathfrak{a}:=\left\langle\sqrt{V},\frac{1}{1+\sqrt{V}\frac{1}{-2\Delta}\sqrt {V}}\sqrt{V}\right\rangle_{L^{2}(\mathbb{R}^{3})}=\int_{\mathbb{R}^{3}}V(x) \mathrm{d}x-\left\langle V,\frac{1}{-2\Delta+V(x)}V\right\rangle_{L^{2}( \mathbb{R}^{3})}\) according to [13, 11], has been achieved in [20] for the thermodynamic limit in the regime of small densities \(\rho\), where \(\kappa\) is determined by the density \(\rho\) via \(\rho=N^{3\kappa-2}\), and later for the Gross-Pitaevskii regime \(\kappa:=0\) in [19]. More recently the subleading contribution to the asymptotics in Eq. (2), famously conjectured to be the Lee-Huang-Yang term emerging from the underlying Bogoliubov theory [2, 15], has been identified in the Gross-Pitaevskii regime in [1] and in the thermodynamic limit in [8, 9]. Following these landmark results, a lot of effort has been made to streamline and generalize the methods and results, see for example [3, 12, 7, 4, 10].
In this work we want to combine the approach in [8, 9, 7] with the one in [1, 12], with the goal of finding an algebraic method, such as the one in [8, 9, 7], with a clean representation of important cancellations comparable to the one in [12]. Furthermore we will follow an alternative approach when it comes to the scattering length, which will be introduced via the Feshbach-Schur map discussed in Section 2. The following three hand-selected Theorem 1, Theorem 2 and Theorem 3 should be seen as an advertisement for the general method and its robustness, highlighting its various advantages. Starting with Theorem 1, we recover the, at this point, well known results in [1, 12], following, what we would call, a short and elementary proof. All the relevant cancellations within the proof are exclusively discussed in Section 2.
**Theorem 1**.: _Let \(E_{N}\) be the ground state energy of the operator \(H_{N}:=H_{N,0}\). Then we have_
\[E_{N}=4\pi\mathfrak{a}_{N}(N-1)+\frac{1}{2}\sum_{k\neq 0}\left\{\sqrt{|k|^{4}+16 \pi\mathfrak{a}|k|^{2}}-|k|^{2}-8\pi\mathfrak{a}+\frac{(8\pi\mathfrak{a})^{2} }{2|k|^{2}}\right\}+o_{N\to\infty}(1), \tag{3}\]
_where \(\mathfrak{a}_{N}\) is the box scattering length defined in Eq. (9)._
While Eq. (3) claims an identity, our proof will focus on the lower bound, as the upper bound is discussed in detail in Theorem 3. Besides commenting on the length and complexity of the proof of Theorem 1, we also want to emphasise that our proofs of Theorem 1 and Theorem 2, and especially of Theorem 3, do not require any cut-off parameters. However, it can be useful to introduce such a parameter anyway, as it shortens the proof of Theorem 1 and allows us to slightly extend the range of possible values of \(\kappa\) in the case of Theorem 2; to be precise, we can verify Theorem 2 without any cut-off parameter for \(0\leq\kappa<\frac{1}{11}\). Furthermore we want to point out that the error terms in the proof of the lower bound in Theorem 1 can be controlled by the particle number operator alone, and do not require the kinetic energy.
In the subsequent Theorem 2 we extend the results in [3] concerning the excitation spectrum of the operator \(H_{N,\kappa}\) for \(0\leq\kappa<\kappa_{0}\), where \(\kappa_{0}\) is of the order of magnitude \(10^{-3}\), to the significantly larger range \(0\leq\kappa<\frac{1}{8}\). We do however want to emphasise that our proof of Theorem 2 relies on strong a priori estimates provided by recent results in [6], and later in a slightly different setting by [4, 10].
**Theorem 2**.: _Let \(E_{N,\kappa}\) be the ground state energy of \(H_{N,\kappa}\), \(0\leq\kappa<\frac{1}{8}\) and assume that \(V\) is
radially symmetric decreasing. With the definition \(\tau:=\frac{5}{2}\big{(}\frac{1}{8}-\kappa\big{)}>0\), \(E_{N,\kappa}\) is given by_
\[4\pi\mathfrak{a}_{N^{1-\kappa}}N^{\kappa}(N-1)+\frac{1}{2}\underset{k\neq 0}{ \sum}\bigg{\{}\sqrt{|k|^{4}+16\pi\mathfrak{a}N^{\kappa}|k|^{2}}-|k|^{2}-8\pi \mathfrak{a}N^{\kappa}+\frac{(8\pi\mathfrak{a}N^{\kappa})^{2}}{2|k|^{2}}\bigg{\}} +O_{N\to\infty}\big{(}N^{-\tau}\big{)}\,. \tag{4}\]
_Let furthermore \(E_{N,\kappa}^{(d)}\) denote the \(d\)-th eigenvalue, in increasing order, and let us enumerate the set \(\Big{\{}\underset{k\neq 0}{\sum}n_{k}\sqrt{|k|^{4}+16\pi\mathfrak{a}N^{\kappa}|k|^{2}} :n_{k}\in\mathbb{N}_{0}\Big{\}}\) in increasing order by \(\{\lambda_{N,\kappa}^{(1)},\lambda_{N,\kappa}^{(2)},\dots\}\). Then_
\[E_{N,\kappa}^{(d)}-E_{N,\kappa}=\lambda_{N,\kappa}^{(d)}+O_{N\to\infty}\big{(} N^{-\tau}\big{)}\,.\]
Again, the proof of Theorem 2 focuses on the lower bound, as the upper bound is discussed in a more general setting in the subsequent Theorem 3. We want to point out here, that even though our algebraic manipulations do not make use of unitary operations, they are equally well suited to provide lower bounds and upper bounds on the ground state energy as well as the excitation spectrum. Furthermore we want to note that our resolution of the spectrum up to the order \(N^{-\tau}\) is sharp enough to see the non-linear contribution of \(|k|^{4}\) in the excitations \(\sqrt{|k|^{4}+16\pi\mathfrak{a}N^{\kappa}|k|^{2}}\) for the slightly smaller range \(0\leq\kappa<\frac{5}{48}\).
In our final Theorem 3, we provide sufficient upper bounds on the excitation energy \(E_{N,\kappa}^{(d)}\) in order to conclude the proof of Theorem 1 and Theorem 2. Furthermore, we derive an improved, and novel, upper bound on the ground state energy \(E_{N}\) in the Gross-Pitaevskii regime, which gives a correction of the order \(\frac{\log N}{N}\). We want to emphasise that the scale \(\frac{\log N}{N}\) is indeed the expected order of magnitude for the next term in the asymptotic expansion in Eq. (3), see [22], and we believe it is crucial at this point that our method does not rely on cut-off techniques in momentum space.
**Theorem 3**.: _In the Gross-Pitaevskii regime \(\kappa:=0\), we have the upper bound_
\[E_{N}\leq 4\pi\mathfrak{a}_{N}(N-1)+\frac{1}{2}\underset{k\neq 0}{\sum} \bigg{\{}\sqrt{|k|^{4}+16\pi\mathfrak{a}|k|^{2}}-|k|^{2}-8\pi\mathfrak{a}+ \frac{(8\pi\mathfrak{a})^{2}}{2|k|^{2}}\bigg{\}}+C\frac{\log N}{N} \tag{5}\]
_for a suitable \(C>0\). Furthermore for \(\kappa<\frac{2}{13}\) and \(d\in\mathbb{N}\), \(E_{N,\kappa}^{(d)}\) is bounded from above by_
\[4\pi\mathfrak{a}_{N}N^{\kappa}(N-1)+\frac{1}{2}\underset{k\neq 0}{\sum} \bigg{\{}\sqrt{|k|^{4}+16\pi\mathfrak{a}N^{\kappa}|k|^{2}}-|k|^{2}-8\pi \mathfrak{a}N^{\kappa}+\frac{(8\pi\mathfrak{a}N^{\kappa})^{2}}{2|k|^{2}} \bigg{\}}+\lambda_{N,\kappa}^{(d)}+CN^{\frac{13\kappa}{4}-\frac{1}{2}}. \tag{6}\]
## 2 The two-body Problem
Before we come to the (approximate) diagonalization of the many-body operator \(H_{N,\kappa}\), which will be the basis of verifying Theorem 1, Theorem 2 and Theorem 3, we will first investigate the corresponding problem for the two body Hamiltonian \(H:=-\Delta_{2}+V_{L}(x-y)\) defined on \(L^{2}(\Lambda^{2})\) with \(\Delta_{2}:=\Delta_{x}+\Delta_{y}\). The goal of this Section is to find a transformation \(S:L^{2}(\Lambda^{2})\longrightarrow L^{2}(\Lambda^{2})\), which removes the correlations of \(H\) between low energy momentum
pairs \(\left(k_{1},k_{2}\right)\in\mathcal{L}\subseteq\left(2\pi\mathbb{Z}\right)^{6}\) and high energy momentum pairs \(\mathcal{H}:=\left(2\pi\mathbb{Z}\right)^{6}\setminus\mathcal{L}\). We will specify the set \(\mathcal{L}\) later this section, for now let \(\pi_{\mathcal{L}}\) and \(\pi_{\mathcal{H}}=1-\pi_{\mathcal{L}}\) denote the corresponding projections in \(L^{2}(\Lambda^{2})\). With \(\pi_{\mathcal{L}}\) and \(\pi_{\mathcal{H}}\) at hand, \(H\) can be represented as a block matrix
\[H=\begin{bmatrix}\pi_{\mathcal{L}}(-\Delta_{2}+V_{L})\pi_{\mathcal{L}}&\pi_{ \mathcal{L}}V_{L}\pi_{\mathcal{H}}\\ \pi_{\mathcal{H}}V_{L}\pi_{\mathcal{L}}&\pi_{\mathcal{H}}(-\Delta_{2}+V_{L}) \pi_{\mathcal{H}}\end{bmatrix}.\]
It is now an elementary task in linear algebra to remove the correlation terms \(\pi_{\mathcal{H}}V_{L}\pi_{\mathcal{L}}\) and \(\pi_{\mathcal{L}}V_{L}\pi_{\mathcal{H}}\), and therefore bringing the operator \(H\) in a block diagonal form. For this purpose let us define the Feshbach-Schur map \(S:=1-RV_{L}\pi_{\mathcal{L}}=\begin{bmatrix}1&0\\ -RV_{L}\pi_{\mathcal{L}}&1\end{bmatrix}\), where \(R\) is the pseudo-inverse of \(\pi_{\mathcal{H}}(-\Delta_{2}+V_{L})\pi_{\mathcal{H}}\), and compute
\[S^{\dagger}HS=\begin{bmatrix}\pi_{\mathcal{L}}\left(-\Delta_{2}+V_{L}-V_{L}RV _{L}\right)\pi_{\mathcal{L}}&0\\ 0&\pi_{\mathcal{H}}(-\Delta_{2}+V_{L})\pi_{\mathcal{H}}\end{bmatrix}. \tag{7}\]
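For the reader's convenience, Eq. (7) is the standard \(2\times 2\) block computation: writing \(A:=\pi_{\mathcal{L}}(-\Delta_{2}+V_{L})\pi_{\mathcal{L}}\), \(B:=\pi_{\mathcal{H}}V_{L}\pi_{\mathcal{L}}\) and \(D:=\pi_{\mathcal{H}}(-\Delta_{2}+V_{L})\pi_{\mathcal{H}}\), and using that \(R\) inverts \(D\) on the range of \(\pi_{\mathcal{H}}\), one checks directly that
\[S^{\dagger}HS=\begin{bmatrix}1&-B^{\dagger}R\\ 0&1\end{bmatrix}\begin{bmatrix}A&B^{\dagger}\\ B&D\end{bmatrix}\begin{bmatrix}1&0\\ -RB&1\end{bmatrix}=\begin{bmatrix}A-B^{\dagger}RB&0\\ 0&D\end{bmatrix},\]
with \(B^{\dagger}RB=\pi_{\mathcal{L}}V_{L}RV_{L}\pi_{\mathcal{L}}\), which is precisely the upper left block in Eq. (7). This verification is only included here for completeness and uses the notation introduced above.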
Consequently \(H=T^{\dagger}\left(-\Delta_{2}+\widetilde{V}_{L}\right)T\) where \(T:=1+RV_{L}\pi_{\mathcal{L}}=\begin{bmatrix}1&0\\ RV_{L}\pi_{\mathcal{L}}&1\end{bmatrix}\) is the inverse of the Feshbach-Schur map \(S\) and the renormalized potential is defined as
\[\widetilde{V}_{L}:=\begin{bmatrix}\pi_{\mathcal{L}}\left(V_{L}-V_{L}RV_{L} \right)\pi_{\mathcal{L}}&0\\ 0&\pi_{\mathcal{H}}V_{L}\pi_{\mathcal{H}}\end{bmatrix}. \tag{8}\]
In the following we will always use the concrete choice \(\mathcal{L}:=\bigcup_{|k|<K}\{(k,0),(0,k)\}\), for a given \(0<K\leq\infty\). At this point we want to emphasise that in the case \(K=\infty\), the definitions of \(T\) and \(\widetilde{V}_{L}\) do not include any cut-off parameter in momentum space.
We can now relate the renormalized potential \(V_{L}-V_{L}RV_{L}\) with the scattering properties of the potential \(V(x)\). Following [12], let us first define the box scattering length
\[\mathfrak{a}_{L}:=\frac{L}{8\pi}\Big{(}V_{L}-V_{L}RV_{L}\Big{)}_{00,00}=\frac{ L}{8\pi}\left(\int V_{L}(x)\mathrm{d}x-\langle V_{L},RV_{L}\rangle\right), \tag{9}\]
which is independent of the choice of \(K\). It has been shown in [11] that the box scattering length satisfies \(|\mathfrak{a}_{L}-\mathfrak{a}|\lesssim L^{-1}\), where \(\mathfrak{a}\) is the scattering length of \(V(x)\) introduced in Section 1. Furthermore, also the other matrix entries of the renormalized potential \(V_{L}-V_{L}RV_{L}\) appearing in Eq. (8) can be related to the scattering length as we demonstrate in Lemma 1.
**Lemma 1**.: _There exists a constant \(C>0\) such that for \(k_{i}\in 2\pi\mathbb{Z}^{3}\) satisfying \(k_{1}+k_{2}=k_{3}+k_{4}\)_
\[\left|L\Big{(}V_{L}-V_{L}RV_{L}\Big{)}_{k_{1}k_{2},k_{3}k_{4}}-8\pi\mathfrak{ a}\right|\leq CL^{-1}\!\left(1+\sum_{i=1}^{4}|k_{i}|\!\right). \tag{10}\]
_Furthermore \(\left|L\Big{(}V_{L}-V_{L}RV_{L}\Big{)}_{k_{1}k_{2},k_{3}k_{4}}\right|\leq \frac{C}{1+L^{-1}|k_{3}-k_{1}|}\leq C\) uniformly in \(k_{i}\). Both estimates are uniform in the parameter \(K\) introduced below Eq. (8)._
The proof of Lemma 1, which can be found in the Appendix A, is based on the results in [11], however we want to emphasise that for small potentials \(V\) one can simplify the proof and avoid consulting [11] by making use of the Neumann series of the resolvent \(R\).
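To spell out the last remark: for sufficiently small \(V\), denoting by \(R_{0}\) the pseudo-inverse of \(\pi_{\mathcal{H}}(-\Delta_{2})\pi_{\mathcal{H}}\), the resolvent \(R\) can be expanded in a Neumann series, which turns the renormalized potential into the familiar Born series
\[R=\sum_{n=0}^{\infty}(-1)^{n}R_{0}\big{(}\pi_{\mathcal{H}}V_{L}\pi_{\mathcal{H}}R_{0}\big{)}^{n},\qquad V_{L}-V_{L}RV_{L}=V_{L}-V_{L}R_{0}V_{L}+V_{L}R_{0}V_{L}R_{0}V_{L}-\dots,\]
whose individual terms give direct access to the matrix elements appearing in Lemma 1. We stress that this expansion is only a sketch valid under a smallness assumption on \(V\); the general case is treated in [11].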
## 3 The many-body Diagonalization
In this section we want to lift the diagonalization procedure for the two-particle Hamiltonian derived in Section 2 to a diagonalization of the many-body Hamiltonian \(H_{N,\kappa}\). This will be done by the introduction of a many-body counterpart to the inverse Feshbach-Schur map \(T:L^{2}(\Lambda^{2})\longrightarrow L^{2}(\Lambda^{2})\) defined below Eq. (7), which is realized by the new sets of variables
\[c_{k}: =a_{k}+\sum_{j,mn}\left(T-1\right)_{jk,mn}a_{j}^{\dagger}a_{m}a_{ n}, \tag{11}\] \[\psi_{jk}: =a_{j}a_{k}+\sum_{mn}\left(T-1\right)_{jk,mn}a_{m}a_{n}. \tag{12}\]
In order to illustrate the intimate relation between these variables and the underlying transformation \(T\), we compute, using the canonical commutation relations (CCR) \(\left[a_{j},a_{k}^{\dagger}\right]=\delta_{j,k}\),
\[\sum_{k}|k|^{2}c_{k}^{\dagger}c_{k}+\frac{1}{2}\sum_{jk,mn}\left(\widetilde{V}_{N^{1-\kappa}}\right)_{jk,mn}\psi_{jk}^{\dagger}\psi_{mn}=H_{N,\kappa}+\mathcal{R}, \tag{13}\]
where \(\mathcal{R}:=\sum_{jk,mn;j^{\prime},m^{\prime}n^{\prime}}|k|^{2}\overline{(T- 1)_{j^{\prime}k,m^{\prime}n^{\prime}}}(T-1)_{jk,mn}a_{n^{\prime}}^{\dagger}a_ {m^{\prime}}^{\dagger}a_{j^{\prime}}a_{m}a_{n}\) and \(\widetilde{V}_{N^{1-\kappa}}\) is the renormalized potential defined in Eq. (8). Using the coefficients
\[w_{k}:=N(T-1)_{(-k)k,00}=\frac{N}{2|k|^{2}}\Big{(}V_{N^{1-\kappa}}-V_{N^{1- \kappa}}RV_{N^{1-\kappa}}\Big{)}_{(-k)k,00}\quad\mathop{\simeq}_{N\to\infty} \quad\frac{4\pi\mathfrak{a}N^{\kappa}}{|k|^{2}}, \tag{14}\]
the residuum \(\mathcal{R}\) can further be decomposed into a single-particle operator \(\sum_{k\neq 0}|k|^{2}w_{k}^{2}a_{k}^{\dagger}a_{k}\) and an error term \(\mathcal{E}_{1}:=\mathcal{R}-\sum_{k\neq 0}|k|^{2}w_{k}^{2}a_{k}^{\dagger}a_{k}\), which is small given the assumption that the number of excited particles \(\mathcal{N}:=\sum_{k\neq 0}a_{k}^{\dagger}a_{k}\) is small compared to the total number of particles \(N\), as we confirm later this Section. Hence the computation of the many-body operator in Eq. (13) essentially reduces to the computation of the two-body operator
\[T^{\dagger}\left(-\Delta_{2}+\widetilde{V}_{N^{1-\kappa}}\right)T=-\Delta_{2} +V_{N^{1-\kappa}},\]
which has been carried out in Section 2, see the comment below Eq. (7). Combining what we have so far, we can represent \(H_{N,\kappa}\) in terms of the new variables \(c_{k}\) and \(\psi_{jk}\) as
\[H_{N,\kappa}=\sum_{k}|k|^{2}c_{k}^{\dagger}c_{k}+\frac{1}{2}\sum_{jk,mn}\left( \widetilde{V}_{N^{1-\kappa}}\right)_{jk,mn}\psi_{jk}^{\dagger}\psi_{mn}- \mathcal{R}.\]
The advantage of this representation compared to Eq. (1) is that the renormalized potential \(\widetilde{V}_{N^{1-\kappa}}\) has the block diagonal structure \(\widetilde{V}_{N^{1-\kappa}}=\pi_{\mathcal{L}}(V_{N^{1-\kappa}}-V_{N^{1- \kappa}}RV_{N^{1-\kappa}})\pi_{\mathcal{L}}+\pi_{\mathcal{H}}V_{N^{1-\kappa}} \pi_{\mathcal{H}}\),
see the definition of \(\widetilde{V}_{N^{1-\kappa}}\) in Eq. (8). Using the positivity of \(V_{N^{1-\kappa}}\) therefore yields
\[H_{N,\kappa} \geq\sum_{k}|k|^{2}c_{k}^{\dagger}c_{k}\!+\!\frac{1}{2}\sum_{jk,mn} \Big{(}\pi_{\mathcal{L}}(V_{N^{1-\kappa}}-V_{N^{1-\kappa}}RV_{N^{1-\kappa}})\pi _{\mathcal{L}}\Big{)}_{jk,mn}\psi_{jk}^{\dagger}\psi_{mn}-\mathcal{R}\] \[=\sum_{k}|k|^{2}c_{k}^{\dagger}c_{k}\!+\!\frac{1}{2}\sum_{jk,mn} \Big{(}\pi_{\mathcal{L}}(V_{N^{1-\kappa}}-V_{N^{1-\kappa}}RV_{N^{1-\kappa}})\pi _{\mathcal{L}}\Big{)}_{jk,mn}a_{k}^{\dagger}a_{j}^{\dagger}a_{m}a_{n}-\mathcal{R}\] \[=\sum_{k}|k|^{2}c_{k}^{\dagger}c_{k}+\frac{\lambda_{0}}{4}a_{0}^ {2\dagger}a_{0}^{2}+\sum_{0<|k|<K}\lambda_{k}a_{0}^{\dagger}a_{0}\,a_{k}^{ \dagger}a_{k}-\mathcal{R}, \tag{15}\]
with \(\lambda_{k}:=\Big{(}V_{N^{1-\kappa}}-V_{N^{1-\kappa}}RV_{N^{1-\kappa}}\Big{)}_{k0,k0}+\Big{(}V_{N^{1-\kappa}}-V_{N^{1-\kappa}}RV_{N^{1-\kappa}}\Big{)}_{0k,k0}\underset{N\to\infty}{\simeq}\frac{16\pi\mathfrak{a}N^{\kappa}}{N}\), where we have used that \(\psi_{mn}=a_{m}a_{n}\) in case at least one of the indices \(m\) or \(n\) is zero. We further observe that \(a_{0}^{\dagger}a_{0}=N-\mathcal{N}\) and \(a_{0}^{2\dagger}a_{0}^{2}=N(N-1)-(2N-1)\mathcal{N}+\mathcal{N}^{2}\) hold on the \(N\)-particle Hilbert space \(L^{2}_{\mathrm{sym}}\big{(}\Lambda^{N}\big{)}\), where \(\mathcal{N}\) is defined below Eq. (14), which yields
\[\frac{\lambda_{0}}{4}a_{0}^{2\dagger}a_{0}^{2}+\sum_{0<|k|<K}\lambda_{k}a_{0}^ {\dagger}a_{0}\,a_{k}^{\dagger}a_{k}=4\pi\mathfrak{a}_{N^{1-\kappa}}N^{\kappa} (N-1)+\sum_{0<|k|<K}N\lambda_{k}\,a_{k}^{\dagger}a_{k}-\frac{N\lambda_{0}}{2} \mathcal{N}-\mathcal{E}_{2}\]
with \(\mathcal{E}_{2}:=\sum_{0<|k|<K}\lambda_{k}\mathcal{N}a_{k}^{\dagger}a_{k}- \frac{\lambda_{0}}{4}\left(\mathcal{N}^{2}+\mathcal{N}\right)\), where we used \(N\frac{\lambda_{0}}{4}=4\pi\mathfrak{a}_{N^{1-\kappa}}N^{\kappa}\) by the definition of \(\mathfrak{a}_{N^{1-\kappa}}\) in Eq. (9). Utilizing \(\mathcal{E}_{1}\) introduced below Eq. (14), we therefore obtain
\[H_{N,\kappa}-4\pi\mathfrak{a}_{N^{1-\kappa}}N^{\kappa}(N-1)\geq\sum_{k}|k|^{2} c_{k}^{\dagger}c_{k}+\sum_{k\neq 0}\mu_{k}a_{k}^{\dagger}a_{k}-\mathcal{E}_{1}- \mathcal{E}_{2} \tag{16}\]
with \(\mu_{k}:=\mathds{1}(|k|<K)N\lambda_{k}-\frac{N\lambda_{0}}{2}-|k|^{2}w_{k}^{2} \underset{N,K\to\infty}{\simeq}8\pi\mathfrak{a}N^{\kappa}-\frac{(4\pi\mathfrak{ a}N^{\kappa})^{2}}{|k|^{2}}\). In the following we want to apply the theory of Bogoliubov operators in order to analyse the operator on the right hand side of Eq. (16). However the operators \(c_{k}\) do not satisfy the CCR, not even in an approximate sense, and they do not leave \(L^{2}_{\mathrm{sym}}\big{(}\Lambda^{N}\big{)}\) invariant. It is well understood at this point, that the second issue can be avoided by multiplying \(c_{k}\) with an appropriate phase factor \(a_{0}^{\dagger}\frac{1}{\sqrt{a_{0}a_{0}^{\dagger}}}\), see for example [21, 5, 16]. Regarding the first issue we define new variables
\[d_{k}:=a_{0}^{\dagger}\frac{1}{\sqrt{a_{0}a_{0}^{\dagger}}}c_{k}-w_{k}a_{-k}^{ \dagger}\frac{1}{\sqrt{a_{0}a_{0}^{\dagger}}}a_{0}, \tag{17}\]
as well as the increment \(\delta_{k}:=a_{0}^{\dagger}\frac{1}{\sqrt{a_{0}a_{0}^{\dagger}}}a_{k}-d_{k}\). As we will see in Lemma 2, the increments can be regarded as being small and therefore the operators \(d_{k}\) do satisfy approximate CCR. With \(\delta_{k}\) at hand, we can now rewrite \(a_{k}^{\dagger}a_{k}=a_{k}^{\dagger}\frac{1}{\sqrt{a_{0}a_{0}^{\dagger}}}a_{0}a _{0}^{\dagger}\frac{1}{\sqrt{a_{0}a_{0}^{\dagger}}}a_{k}=\left(d_{k}+\delta_{k} \right)^{\dagger}\left(d_{k}+\delta_{k}\right)\) and
\[c_{k}^{\dagger}c_{k}\!=\!c_{k}^{\dagger}\frac{1}{\sqrt{a_{0}a_{0}^{\dagger}}}a_ {0}a_{0}^{\dagger}\frac{1}{\sqrt{a_{0}a_{0}^{\dagger}}}c_{k}\!=\!\left(d_{k}\!+ \!w_{k}d_{-k}^{\dagger}\!+\!w_{k}\delta_{-k}^{\dagger}\right)^{\dagger}\!\left( d_{k}\!+\!w_{k}d_{-k}^{\dagger}\!+\!w_{k}\delta_{-k}^{\dagger}\right).\]
Defining the coefficients \(A_{k}:=|k|^{2}+|k|^{2}w_{k}^{2}+\mu_{k}-\epsilon\), \(B_{k}:=2|k|^{2}w_{k}\) and \(C_{k}:=2|k|^{2}w_{k}^{2}\), we consequently obtain for \(\epsilon\geq 0\) the identity
\[\sum_{k}|k|^{2}c_{k}^{\dagger}c_{k}+\sum_{k\neq 0}\mu_{k}a_{k}^{ \dagger}a_{k}-\epsilon\mathcal{N}=\sum_{k}|k|^{2}\,\Big{(}d_{k}+w_{k}d_{-k}^{ \dagger}\Big{)}^{\dagger}\Big{(}d_{k}+w_{k}d_{-k}^{\dagger}\Big{)}+\sum_{k\neq 0 }(\mu_{k}-\epsilon)d_{k}^{\dagger}d_{k}-\mathcal{E}_{3}\] \[=\frac{1}{2}\sum_{k\neq 0}\Biggl{\{}A_{k}\,\Big{(}d_{k}^{ \dagger}d_{k}+d_{-k}^{\dagger}d_{-k}\Big{)}+B_{k}\,\Big{(}d_{k}d_{-k}+d_{-k}^ {\dagger}d_{k}^{\dagger}\Big{)}+C_{k}\frac{[d_{k},d_{k}^{\dagger}]+[d_{-k},d_ {-k}^{\dagger}]}{2}\Biggr{\}}-\mathcal{E}_{3}, \tag{18}\]
with
\[\mathcal{E}_{3}:=\sum_{k\neq 0}(\epsilon-\mu_{k})\Bigl{(}\delta_{k}^{ \dagger}\delta_{k}+d_{k}^{\dagger}\delta_{k}+\text{H.c.}\Bigr{)}-\sum_{k}|k|^{ 2}\Bigl{(}w_{k}^{2}\delta_{k}\delta_{k}^{\dagger}+w_{k}\,\Big{(}d_{k}+w_{k}d_ {-k}^{\dagger}\Big{)}\,\delta_{-k}^{\dagger}+\text{H.c.}\Bigr{)}\,. \tag{19}\]
By Lemma 1 we have \(A_{k}\underset{N,K\to\infty}{\simeq}|k|^{2}+8\pi\mathfrak{a}N^{\kappa}-\epsilon\), as well as \(B_{k}\underset{N\to\infty}{\simeq}8\pi\mathfrak{a}N^{\kappa}\) and \(C_{k}\underset{N\to\infty}{\simeq}\frac{\left(8\pi\mathfrak{a}N^{\kappa}\right)^{2}}{2|k|^{2}}\), and furthermore \(A_{k}\geq B_{k}\) for \(N\geq N_{0}\), \(K\geq N^{\frac{\kappa}{2}}K_{0}\) and \(\epsilon<\epsilon_{0}\). Consequently we can apply Bogoliubov's Lemma, see [18, 14], which yields
\[\frac{1}{2}\sum_{k\neq 0}\left\{A_{k}\,\Big{(}d_{k}^{\dagger}d_{k}+d_{-k}^{\dagger}d_{-k}\Big{)}+B_{k}\,\Big{(}d_{k}d_{-k}+d_{-k}^{\dagger}d_{k}^{\dagger}\Big{)}+C_{k}\frac{[d_{k},d_{k}^{\dagger}]+[d_{-k},d_{-k}^{\dagger}]}{2}\right\}\] \[\qquad\qquad\geq\frac{1}{2}\sum_{k\neq 0}\left\{\sqrt{A_{k}^{2}-B_{k}^{2}}-A_{k}+C_{k}\right\}\frac{[d_{k},d_{k}^{\dagger}]+[d_{-k},d_{-k}^{\dagger}]}{2}. \tag{20}\]
Defining \(\mathcal{E}_{4}:=\frac{1}{2}\sum_{k\neq 0}\left\{\sqrt{A_{k}^{2}-B_{k}^ {2}}-A_{k}+C_{k}\right\}\left(1-\frac{[d_{k},d_{k}^{\dagger}]+[d_{-k},d_{-k}^{ \dagger}]}{2}\right)\), and combining the estimate in Eq. (16) with the identity in Eq. (18) and the lower bound in Eq. (20), yields
\[H_{N,\kappa}-4\pi\mathfrak{a}_{N^{1-\kappa}}N^{\kappa}(N-1)- \frac{1}{2}\sum_{k\neq 0}\left\{\sqrt{A_{k}^{2}-B_{k}^{2}}-A_{k}+C_{k}\right\} \geq\epsilon\mathcal{N}-\sum_{i=1}^{4}\mathcal{E}_{i}, \tag{21}\]
for \(\epsilon,N\) and \(K\) satisfying the restrictions stated above Eq. (20). We want to emphasise that the constant on the left hand side of Eq. (21) contains the correct leading order energy \(4\pi\mathfrak{a}_{N^{1-\kappa}}N^{\kappa}(N-1)\), and in the limit \(N,K\to\infty\) and \(\epsilon\to 0\), it contains the sub-leading Lee-Huang-Yang correction \(\frac{1}{2}\sum_{k\neq 0}\Bigl{\{}\sqrt{|k|^{4}+16\pi\mathfrak{a}N^{\kappa}|k|^{2}}-|k|^{ 2}-8\pi\mathfrak{a}N^{\kappa}+\frac{\left(8\pi\mathfrak{a}N^{\kappa}\right)^{ 2}}{2|k|^{2}}\Bigr{\}}\) as well, which follows from the asymptotic behaviour of the coefficients \(A_{k}\), \(B_{k}\) and \(C_{k}\) stated below Eq. (19).
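To illustrate the last point, inserting the limiting values \(A_{k}\to|k|^{2}+8\pi\mathfrak{a}N^{\kappa}\), \(B_{k}\to 8\pi\mathfrak{a}N^{\kappa}\) and \(C_{k}\to\frac{\left(8\pi\mathfrak{a}N^{\kappa}\right)^{2}}{2|k|^{2}}\) into \(\frac{1}{2}\big{\{}\sqrt{A_{k}^{2}-B_{k}^{2}}-A_{k}+C_{k}\big{\}}\) indeed reproduces the summand of the Lee-Huang-Yang correction, since

\[A_{k}^{2}-B_{k}^{2}\to\big{(}|k|^{2}+8\pi\mathfrak{a}N^{\kappa}\big{)}^{2}-\big{(}8\pi\mathfrak{a}N^{\kappa}\big{)}^{2}=|k|^{4}+16\pi\mathfrak{a}N^{\kappa}|k|^{2}.\]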
### Proof of Theorem 1
In order to verify Theorem 1, we need sufficient bounds on the error terms \(\mathcal{E}_{i}\). For this purpose, we first derive estimates on the increment \(\delta_{k}\) in the subsequent Lemma 2, which we will use in Lemma 3 in order to control \(\mathcal{E}_{i}\). For the rest of this subsection we will always assume that we are in the Gross-Pitaevskii Regime, i.e. we assume \(\kappa=0\). Furthermore recall that we are working on the \(N\)-particle Hilbert space \(L^{2}_{\text{sym}}\bigl{(}\Lambda^{N}\bigr{)}\) and the claimed estimates hold restricted to this space.
**Lemma 2**.: _There exists a \(C>0\) such that \(\sum_{k\neq 0}\left(\delta_{k}^{\dagger}\delta_{k}+\delta_{k}\delta_{k}^{ \dagger}\right)\leq\frac{C}{N-\mathcal{N}}(\mathcal{N}+1)^{2}\)._
Proof.: Let us define \(h(x):=\frac{1}{\sqrt{1+\frac{1}{N}-x}}-\sqrt{1-x}\) as well as the coefficients
\[f_{\ell,k}:=\Big{(}(T-1)_{(\ell-k)k,\ell 0}+(T-1)_{(\ell-k)k,0\ell} \Big{)} \tag{22}\] \[=\frac{1}{|k|^{2}\!+\!|\ell\!-\!k|^{2}}\left\{\Big{(}V_{N^{1-\kappa }}\!-\!V_{N^{1-\kappa}}RV_{N^{1-\kappa}}\Big{)}_{(\ell-k)k,\ell 0}\!+\!\Big{(}V_{N^{1-\kappa}}\!-\!V_{N^{1-\kappa}}RV_{N^{1- \kappa}}\Big{)}_{(\ell-k)k,0\ell}\right\},\]
in case \(\ell\neq 0\), and \(f_{\ell,k}:=0\) otherwise. Then we can write \(\delta_{k}=-\delta_{k}^{\prime}+\delta_{k}^{\prime\prime}\) with \(\delta_{k}^{\prime}:=\sqrt{a_{0}^{\dagger}a_{0}}\sum_{|\ell|<K}f_{\ell,k}\,a_ {\ell-k}^{\dagger}a_{\ell}\) and \(\delta_{k}^{\prime\prime}:=w_{k}\left(\frac{1}{\sqrt{a_{0}a_{0}^{\dagger}}}a_ {0}-\frac{\sqrt{a_{0}^{\dagger}a_{0}a_{0}}}{N}\right)a_{-k}^{\dagger}=w_{k}a_{ -k}^{\dagger}\frac{a_{0}}{\sqrt{N}}h\big{(}\frac{\mathcal{N}}{N}\big{)}\). Note that \(|h(x)|\lesssim\frac{x+1}{\sqrt{1-x}}\), \(|w_{k}|\lesssim\frac{1}{|k|^{2}}\) and \(\left(\frac{a_{0}}{\sqrt{N}}\right)^{\dagger}\frac{a_{0}}{\sqrt{N}}\leq 1\). Consequently
\[\sum_{k\neq 0}(\delta_{k}^{\prime\prime})^{\dagger}\delta_{k}^{\prime \prime}\lesssim\sum_{k\neq 0}h\bigg{(}\frac{\mathcal{N}}{N}\bigg{)}\frac{a_{-k}a_{- k}^{\dagger}}{|k|^{4}}h\bigg{(}\frac{\mathcal{N}}{N}\bigg{)}\!\lesssim\!h\bigg{(} \frac{\mathcal{N}}{N}\bigg{)}^{2}\!(\mathcal{N}\!+\!1)\!\lesssim\!\frac{( \mathcal{N}+1)^{3}}{N(N\!-\!\mathcal{N})},\]
where we have used \(\sum_{k\neq 0}\frac{1}{|k|^{4}}<\infty\). Similarly the same estimate can be shown for \(\sum_{k\neq 0}\delta_{k}^{\prime\prime}(\delta_{k}^{\prime\prime})^{\dagger}\). In order to estimate \(\delta_{k}^{\prime}\), note that \(|f_{\ell,k}|\leq\frac{D}{N|k|^{2}}\) for a suitable constant \(D\) by Lemma 1. In combination with \(\sqrt{a_{0}^{\dagger}a_{0}}\leq\sqrt{N}\) this immediately yields the bounds \(\pm\mathfrak{Re}\delta_{k}^{\prime}\leq\frac{D}{\sqrt{N}|k|^{2}}\mathcal{N}\) and \(\pm\mathfrak{Im}\delta_{k}^{\prime}\leq\frac{D}{\sqrt{N}|k|^{2}}\mathcal{N}\). Since \(\mathcal{N}\) commutes with both \(\mathfrak{Re}\delta_{k}^{\prime}\) and \(\mathfrak{Im}\delta_{k}^{\prime}\) we can square these inequalities and obtain \(\left(\delta_{k}^{\prime}\right)^{\dagger}\delta_{k}^{\prime}+\delta_{k}^{ \prime}\left(\delta_{k}^{\prime}\right)^{\dagger}\leq 4\left(\mathfrak{Re} \delta_{k}^{\prime}\right)^{2}+4\left(\mathfrak{Im}\delta_{k}^{\prime} \right)^{2}\leq\frac{8D^{2}}{N|k|^{4}}\mathcal{N}^{2}\).
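Let us also record the elementary fact behind the squaring step used above: if \(X\) and \(Y\) are commuting self-adjoint operators with \(\pm X\leq Y\), then

\[Y^{2}-X^{2}=(Y-X)(Y+X)\geq 0,\]

since the two factors on the right hand side are commuting non-negative operators.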
In the following let \(\mathcal{F}_{M}^{+}\) denote the spectral subspace \(\mathcal{N}\leq M\), and let us define for an operator \(X\) the restricted operator \(X\Big{|}_{\mathcal{F}_{M}^{+}}\) as \(X\Big{|}_{\mathcal{F}_{M}^{+}}:=\pi_{\mathcal{F}_{M}^{+}}X\pi_{\mathcal{F}_{M}^ {+}}\), where \(\pi_{\mathcal{F}_{M}^{+}}\) is the orthogonal projection onto \(\mathcal{F}_{M}^{+}\).
**Lemma 3**.: _For all \(K<\infty\) and \(0<r<1\), there exists a constant \(C_{K,r}>0\) such that_
\[\pm\mathcal{E}_{i}\Big{|}_{\mathcal{F}_{M}^{+}}\leq C_{K,r}\sqrt{\frac{M+1}{N}} (\mathcal{N}+1).\]
_for all \(M\leq\min\{rN,N-1\}\) and \(i\in\{1,2,3,4\}\)._
Proof.: Using \(|\lambda_{k}|\lesssim N^{-1}\), see Lemma 1, we have \(\pm\mathcal{E}_{2}\leq CN^{-1}(\mathcal{N}+1)^{2}\), which concludes the case \(i=2\). Regarding the case \(i=3\), we have by Cauchy-Schwarz and Lemma 2
\[\pm\mathcal{E}_{3}\leq\epsilon\sum_{k\neq 0}\Big{\{}a_{k}^{\dagger}a_{k}+c_{k}^{ \dagger}c_{k}+d_{k}^{\dagger}d_{k}+(d_{k}+w_{k}d_{-k}^{\dagger})^{\dagger}(d_{k }+w_{k}d_{-k}^{\dagger})\Big{\}}+\epsilon^{-1}\frac{C}{N-\mathcal{N}}(\mathcal{ N}+1)^{2}\]
for any \(\epsilon>0\). Since \(d_{k}=\frac{1}{\sqrt{a_{0}^{\dagger}a_{0}}}a_{0}^{\dagger}a_{k}-\delta_{k}\) we have \(\sum_{k\neq 0}d_{k}^{\dagger}d_{k}\lesssim\mathcal{N}\) again by Lemma 2. Furthermore \(\sum_{k\neq 0}c_{k}^{\dagger}c_{k}=\sum_{k\neq 0}\left(d_{k}+w_{k}a_{-k}^{ \dagger}a_{0}\frac{1}{\sqrt{a_{0}^{\dagger}a_{0}}}\right)^{\dagger}\left(d_{k }+w_{k}a_{-k}^{\dagger}a_{0}\frac{1}{\sqrt{a_{0}^{\dagger}a_{0}}}\right)\lesssim \mathcal{N}+\frac{1}{N}(\mathcal{N}+1)^{2}\).
Here we have also used that \(\sum_{k\neq 0}w_{k}^{2}a_{k}a_{k}^{\dagger}\lesssim\mathcal{N}+1\). Similarly \(\sum_{k\neq 0}(d_{k}+w_{k}d_{-k}^{\dagger})^{\dagger}(d_{k}+w_{k}d_{-k}^{\dagger})\lesssim\mathcal{N}+1\). Hence
\[\pm\mathcal{E}_{3}\Big{|}_{\mathcal{F}_{M}^{+}}\lesssim\epsilon(\mathcal{N}+1) +\epsilon^{-1}\frac{M+1}{N-M}(\mathcal{N}+1),\]
which concludes the proof of the case \(i=3\) for the optimal choice \(\epsilon:=\sqrt{\frac{M+1}{N-M}}\). Regarding the case \(i=4\), note that \(\left[a_{0}^{\dagger}\frac{1}{\sqrt{a_{0}a_{0}^{\dagger}}}a_{k},a_{k}^{\dagger}\frac{1}{\sqrt{a_{0}a_{0}^{\dagger}}}a_{0}\right]\Big{|}_{\mathcal{F}_{N-1}^{+}}=1|_{\mathcal{F}_{N-1}^{+}}\), and therefore, after writing \(d_{k}=a_{0}^{\dagger}\frac{1}{\sqrt{a_{0}a_{0}^{\dagger}}}a_{k}-\delta_{k}\), every term in \(\mathcal{E}_{4}|_{\mathcal{F}_{N-1}^{+}}\) contains at least one factor \(\delta_{\pm k}\) or \(\delta_{\pm k}^{\dagger}\).
Multiplying out the commutators and estimating the resulting products in the same way as we did in the case \(i=3\), concludes the proof of the case \(i=4\). Regarding the final case \(i=1\), let us write \(\mathcal{E}_{1}=\frac{1}{2}\sum_{(\ell,\ell^{\prime})\in A_{K}}\left(\mathcal{G}^{\ell,\ell^{\prime}}+\mathrm{H.c.}\right)+\sum_{k\neq 0}|k|^{2}w_{k}^{2}N^{-2}\left(a_{0}^{2\dagger}a_{0}^{2}-1\right)a_{k}^{\dagger}a_{k}\) with \(A_{K}:=\{(\ell,\ell^{\prime})\neq 0:|\ell|,|\ell^{\prime}|<K\}\) and \(\mathcal{G}^{\ell,\ell^{\prime}}:=\sum_{k\neq 0}|k|^{2}\overline{f_{\ell,k}}f_{\ell^{\prime},k}\,a_{\ell^{\prime}-k}^{\dagger}a_{\ell}^{\dagger}a_{0}^{\dagger}a_{\ell^{\prime}}a_{0}a_{\ell-k}\), where \(f_{\ell,k}\) is defined in Eq. (22). Note that \(\left||k|^{2}\overline{f_{\ell,k}}f_{\ell^{\prime},k}\right|\lesssim N^{-2}\), see for example the proof of Lemma 2. Furthermore we have for \((\ell,\ell^{\prime})\neq(0,0)\) the estimate \(\left\|\left(a_{\ell^{\prime}}^{\dagger}a_{0}^{\dagger}a_{\ell}a_{0}\right)\big{|}_{\mathcal{F}_{M}^{+}}\right\|\lesssim\sqrt{M+1}(N+1)^{\frac{3}{2}}\). Consequently \(\pm\left(\mathcal{G}^{\ell,\ell^{\prime}}+\mathrm{H.c.}\right)\big{|}_{\mathcal{F}_{M}^{+}}\lesssim\sqrt{\frac{M+1}{N}}\mathcal{N}\), which concludes the proof, since the set \(A_{K}\) is finite and \(\pm\sum_{k\neq 0}|k|^{2}w_{k}^{2}N^{-2}\left(a_{0}^{2\dagger}a_{0}^{2}-1\right)a_{k}^{\dagger}a_{k}\lesssim\frac{M+1}{N}\mathcal{N}\).
With Lemma 3 at hand, we are now in a position to verify Theorem 1. In the following let \(\Psi_{N}\) be the ground state of the operator \(H_{N}\), \(f,g:\mathbb{R}\longrightarrow[0,1]\) smooth functions satisfying \(f^{2}+g^{2}=1\) and \(f(x)=1\) for \(x\leq\frac{1}{2}\) as well as \(f(x)=0\) for \(x\geq 1\). Then we define for \(0<\rho<1\) the truncated states \(\Theta_{N}:=\frac{f\left(\frac{\mathcal{N}}{\rho N}\right)\Psi_{N}}{\left\|f\left(\frac{\mathcal{N}}{\rho N}\right)\Psi_{N}\right\|}\). Note that the ground state \(\Psi_{N}\) satisfies complete Bose-Einstein condensation \(\frac{1}{N}\left\langle\Psi_{N},\mathcal{N}\Psi_{N}\right\rangle\underset{N\rightarrow\infty}{\longrightarrow}0\) according to the well-known results in [17]. Consequently we obtain by the IMS inequality, which can for example be found in [12, Proposition 20], and which alternatively follows from the methods presented in our more general Lemma 11,
\[\left\langle\Theta_{N},H_{N}\Theta_{N}\right\rangle\leq E_{N}+\frac{N}{\rho N- 2\left\langle\Psi_{N},\mathcal{N}\Psi_{N}\right\rangle}\frac{C^{\prime}}{N} \leq E_{N}+\frac{C}{N}\]
for suitable \(C^{\prime},C\). Since \(\Theta_{N}\) is an element of \(\mathcal{F}_{\rho N}^{+}\), we have by Lemma 3
\[\left\langle\Theta_{N},\left(\epsilon\mathcal{N}-\sum_{i=1}^{4}\mathcal{E}_{i} \right)\Theta_{N}\right\rangle\geq\left(\epsilon-4C_{K,\frac{1}{2}}\sqrt{\rho+ \frac{1}{N}}\right)\left\langle\Theta_{N},\mathcal{N}\Theta_{N}\right\rangle-4C _{K,\frac{1}{2}}\sqrt{\rho+\frac{1}{N}}\]
for \(\epsilon>0\), \(K<\infty\) and \(0<\rho<\frac{1}{2}\). Assuming \(\rho<\left(\frac{\epsilon}{4C_{K,\frac{1}{2}}}\right)^{2}\), as well as \(N\) large enough, \(\epsilon<1\) and \(K\geq K_{0}\) large enough such that Eq. (21) holds, we obtain the lower bound
\[E_{N}-4\pi\mathfrak{a}_{N}(N-1)\geq\frac{1}{2}\sum_{k\neq 0}\left\{\sqrt{A_{k}^{2}-B_ {k}^{2}}-A_{k}+C_{k}\right\}-\frac{C}{N}-4C_{K,\frac{1}{2}}\sqrt{\rho+\frac{1} {N}}.\]
Since \(|\sqrt{A_{k}^{2}-B_{k}^{2}}-A_{k}+C_{k}|\lesssim\frac{1}{|k|^{4}}\) by Lemma 1, we conclude using dominated convergence
\[\lim_{\epsilon\to 0,K\to\infty}\lim_{\rho\to 0}\lim_{N\to\infty} \left[\sum_{k\neq 0}\left\{\sqrt{A_{k}^{2}-B_{k}^{2}}-A_{k}+C_{k}\right\}- \frac{C}{N}-4C_{K,\frac{1}{2}}\sqrt{\rho+\frac{1}{N}}\right]\] \[=\sum_{k\neq 0}\left\{\sqrt{|k|^{4}+16\pi\mathfrak{a}|k|^{2}}-|k|^{2 }-8\pi\mathfrak{a}+\frac{(8\pi\mathfrak{a})^{2}}{2|k|^{2}}\right\}.\]
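The summand of the limiting sum on the right hand side decays like \(1/|k|^{4}\), so the lattice sum converges absolutely. The following small numerical sketch (an illustration only, not part of the proof) evaluates its truncations; the scattering length \(\mathfrak{a}=1\) and the truncation radii are arbitrary illustrative choices.

```python
import numpy as np

# Numerical sketch (illustration only): truncated evaluation of the
# Lee-Huang-Yang lattice sum over k in 2*pi*Z^3 \ {0}.  Its summand decays
# like 1/|k|^4, so the truncated values stabilise quickly.
# The scattering length a = 1 is an arbitrary illustrative choice.

def lhy_summand(k2, a=1.0):
    """sqrt(|k|^4 + 16*pi*a*|k|^2) - |k|^2 - 8*pi*a + (8*pi*a)^2 / (2*|k|^2)."""
    return (np.sqrt(k2**2 + 16 * np.pi * a * k2) - k2 - 8 * np.pi * a
            + (8 * np.pi * a)**2 / (2 * k2))

def truncated_sum(R, a=1.0):
    """Sum over lattice points k = 2*pi*n with n in Z^3 and 0 < |n| <= R."""
    rng = np.arange(-R, R + 1)
    nx, ny, nz = np.meshgrid(rng, rng, rng, indexing="ij")
    n2 = (nx**2 + ny**2 + nz**2).astype(float)
    n2 = n2[(n2 > 0) & (n2 <= R**2)]
    k2 = (2 * np.pi)**2 * n2
    return lhy_summand(k2, a).sum()

for R in (5, 10, 20, 40):
    print(R, truncated_sum(R))
```

The printed values stabilise as the truncation radius grows, reflecting the \(1/|k|^{4}\) decay of the summand.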
## 4 Beyond the Gross-Pitaevskii Regime
In this section we will explain how to treat the case \(\kappa>0\). For this purpose we use the following algebraic version of Bogoliubov's Lemma, see for example [18, 21],
\[\frac{1}{2}\sum_{k\neq 0}\left\{A_{k}\left(d_{k}^{\dagger}d_{k}+d_{-k}^{\dagger}d_{-k}\right)+B_{k}\left(d_{k}d_{-k}+d_{-k}^{\dagger}d_{k}^{\dagger}\right)+C_{k}\frac{[d_{k},d_{k}^{\dagger}]+[d_{-k},d_{-k}^{\dagger}]}{2}\right\}\] \[=\sum_{k\neq 0}e_{k}\left(\gamma_{k}d_{k}\!+\!\nu_{k}d_{-k}^{\dagger}\right)^{\dagger}\left(\gamma_{k}d_{k}\!+\!\nu_{k}d_{-k}^{\dagger}\right)\!+\!\frac{1}{2}\sum_{k\neq 0}\left\{\sqrt{A_{k}^{2}\!-\!B_{k}^{2}}\!-\!A_{k}+C_{k}\right\}\frac{[d_{k},d_{k}^{\dagger}]\!+\![d_{-k},d_{-k}^{\dagger}]}{2}\]
with \(e_{k}:=\sqrt{(A_{k})^{2}-(B_{k})^{2}}\), \(\gamma_{k}:=\frac{1}{\sqrt{1-\alpha_{k}^{2}}}\) and \(\nu_{k}:=\frac{\alpha_{k}}{\sqrt{1-\alpha_{k}^{2}}}\) where we define the coefficients \(\alpha_{k}:=\frac{1}{B_{k}}\left(A_{k}-\sqrt{A_{k}^{2}-B_{k}^{2}}\right)\). Note that by Lemma 1, it is easy to see that \(0\leq\nu_{k}\leq\gamma_{k}\lesssim\frac{N^{\frac{\kappa}{4}}}{\sqrt{|k|}}\) for \(|k|\lesssim N^{\frac{\kappa}{2}}\) and \(|\nu_{k}|\lesssim\frac{N^{\kappa}}{|k|^{2}}\) globally. Setting \(\epsilon:=0\) and using the estimate in Eq. (16) together with the identity in Eq. (18), we therefore obtain
\[H_{N,\kappa} -4\pi\mathfrak{a}_{N^{1-\kappa}}N^{\kappa}(N-1)-\frac{1}{2}\sum_ {k\neq 0}\left\{\sqrt{A_{k}^{2}-B_{k}^{2}}-A_{k}+C_{k}\right\}\] \[\geq\sum_{k\neq 0}e_{k}\left(\gamma_{k}d_{k}+\nu_{k}d_{-k}^{ \dagger}\right)^{\dagger}\left(\gamma_{k}d_{k}+\nu_{k}d_{-k}^{\dagger}\right)- \sum_{i=1}^{4}\mathcal{E}_{i}. \tag{23}\]
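Let us briefly record the algebraic relations behind the version of Bogoliubov's Lemma stated above: a direct computation from the definitions of \(e_{k}\), \(\alpha_{k}\), \(\gamma_{k}\) and \(\nu_{k}\) gives

\[e_{k}\big{(}\gamma_{k}^{2}+\nu_{k}^{2}\big{)}=A_{k},\qquad 2e_{k}\gamma_{k}\nu_{k}=B_{k},\qquad e_{k}\nu_{k}^{2}=\tfrac{1}{2}\big{(}A_{k}-e_{k}\big{)},\]

which is exactly what is needed to match the quadratic terms on both sides of that identity, the remaining contributions being the commutator terms.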
### Control of the Error Terms \(\mathcal{E}_{i}\)
In order to obtain a useful representation of the error terms \(\mathcal{E}_{1}\) and \(\mathcal{E}_{3}\), recall the definition \(f_{\ell,k}\) from Eq. (22) and let us introduce \(\Lambda_{k\ell,k^{\prime}\ell^{\prime}}:=\delta_{k+\ell=k^{\prime}+\ell^{ \prime}}|k-\ell^{\prime}|^{2}N\overline{f_{\ell^{\prime},\ell^{\prime}-k}}f_{ \ell,\ell-k^{\prime}}\) as well as \(\Upsilon_{\ell,k}^{(1)}:=2N^{\frac{1}{2}}|k|^{2}\overline{f_{\ell,-k}}w_{k},\; \Upsilon_{\ell,k}^{(2)}:=-N^{\frac{1}{2}}|k|^{2}\overline{w_{k}}f_{\ell,-k}\) and \(\Upsilon_{\ell,k}^{(3)}:=-N^{\frac{3}{2}}\left(\mathds{1}(|k|<K)\lambda_{k}- \frac{\lambda_{0}}{2}\right)f_{\ell+k,k}\). Furthermore we define \(O_{1}:=N^{-\frac{3}{2}}a_{0}^{\dagger}a_{0}^{2}\) as well as \(O_{2}:=O_{3}:=\frac{1}{\sqrt{a_{0}a_{0}^{\dagger}}}\frac{a_{0}}{\sqrt{N}}\sqrt {a_{0}a_{0}^{\dagger}}\). Using the
decomposition \(\delta_{k}=-\delta^{\prime}_{k}+\delta^{\prime\prime}_{k}\) from the proof of Lemma 2, we can then write
\[\mathcal{E}_{1} =\sum_{k\ell,k^{\prime}\ell^{\prime}}\Lambda_{k\ell,k^{\prime}\ell^ {\prime}}\,a^{\dagger}_{\ell^{\prime}}a^{\dagger}_{k^{\prime}}\frac{a^{ \dagger}_{0}a_{0}}{N}a_{k}a_{\ell}+\left(\sum_{\ell,k}\Upsilon^{(1)}_{\ell,k}O _{1}\,a^{\dagger}_{k}a^{\dagger}_{\ell}a_{k+\ell}+\text{H.c.}\right) \tag{24}\] \[\qquad+\sum_{k\neq 0}|k|^{2}w_{k}^{2}N^{-2}\left(a^{2\dagger}_{0}a ^{2}_{0}-1\right)a^{\dagger}_{k}a_{k},\] \[\mathcal{E}_{3} =\left(\sum_{\ell,k}\Upsilon^{(2)}_{\ell,k}O_{2}\,a^{\dagger}_{k} a^{\dagger}_{\ell}a_{k+\ell}+\text{H.c.}\right)+\left(\sum_{\ell,k}\Upsilon^{(3)}_{ \ell,k}O_{3}\,a^{\dagger}_{k}a^{\dagger}_{\ell}a_{k+\ell}+\text{H.c.}\right)\] (25) \[+\sum_{k}|k|^{2}w_{k}^{2}\big{(}[\delta_{k},d^{\dagger}_{k}]+[ \delta_{k},d^{\dagger}_{k}]+[\delta_{k},\delta^{\dagger}_{k}]\big{)}+\sum_{k} |k|^{2}w_{k}\left(\delta^{\dagger}_{k}\delta^{\dagger}_{-k}+\delta_{-k}\delta_ {k}\right)\] \[-\sum_{k\neq 0}\left(a^{\dagger}_{k}\frac{1}{\sqrt{a_{0}a^{ \dagger}_{0}}}a_{0}\left(N\left(\mathds{1}(|k|<K)\lambda_{k}-\frac{\lambda_{0}} {2}\right)\delta^{\prime\prime}_{k}+|k|^{2}w_{k}(\delta^{\prime\prime}_{-k})^ {\dagger}\right)+\text{H.c.}\right).\]
This Subsection is devoted to the derivation of suitable bounds on the most prominent error contributions \(\sum_{k\ell,k^{\prime}\ell^{\prime}}\Lambda_{k\ell,k^{\prime}\ell^{\prime}}\,a ^{\dagger}_{\ell^{\prime}}a^{\dagger}_{k^{\prime}}\frac{a^{\dagger}_{0}a_{0}} {N}a_{k}a_{\ell}\) and \(\sum_{\ell,k}\Upsilon^{(i)}_{\ell,k}O_{i}\,a^{\dagger}_{k}a^{\dagger}_{\ell}a_ {k+\ell}\) in Lemma 4 and Lemma 5. The residual error terms are then taken care of in Appendix B. Let us at this point introduce the new operators \(b_{k}:=\gamma_{k}a^{\dagger}_{0}\frac{1}{\sqrt{a_{0}a^{\dagger}_{0}}}a_{k}+ \nu_{k}a^{\dagger}_{-k}\frac{1}{\sqrt{a_{0}a^{\dagger}_{0}}}a_{0}\) and the particle number operator in these new variables \(\widetilde{\mathcal{N}}:=\sum_{k\neq 0}b^{\dagger}_{k}b_{k}\), which notably satisfy the CCR on \(\mathcal{F}^{+}_{N-1}\), i.e. \(\left[b_{k},b^{\dagger}_{\ell}\right]\Psi=\delta_{k,\ell}\Psi\) for \(\Psi\in\mathcal{F}^{+}_{N-1}\). Let us furthermore introduce \(\mathcal{F}^{\leq}_{M_{0}}\) as the spectral subspace \(\sum_{0<|k|<K}a^{\dagger}_{k}a_{k}\leq M_{0}\).
**Lemma 4**.: _There exists a constant \(C>0\) such that we have for \(0<\delta<\frac{1}{2}\)_
\[\pm\!\!\sum_{k\ell,k^{\prime}\ell^{\prime}}\Lambda_{k\ell,k^{\prime}\ell^{\prime}}\,a^{\dagger}_{\ell^{\prime}}a^{\dagger}_{k^{\prime}}\frac{a^{\dagger}_{0}a_{0}}{N}a_{k}a_{\ell}\bigg{|}_{\mathcal{F}^{\leq}_{M_{0}}}\!\!\leq\!C\left(N^{\frac{5\kappa}{2}+2\delta}\frac{M_{0}\!+\!1}{N}+N^{\frac{7\kappa}{2}+2\delta-1}\right)\!\left(\sum_{k}|k|^{1-2\delta}b^{\dagger}_{k}b_{k}\Big{|}_{\mathcal{F}^{\leq}_{M_{0}}}\!+\!1\right).\]
Proof.: Using \(a_{k}=\frac{1}{\sqrt{a_{0}a^{\dagger}_{0}}}a_{0}\gamma_{k}b_{k}+\frac{1}{\sqrt{ a_{0}a^{\dagger}_{0}}}a_{0}\nu_{-k}b^{\dagger}_{-k}\) and the fact that \(\Lambda\geq 0\), yields
\[0 \leq\sum_{k\ell,k^{\prime}\ell^{\prime}}\Lambda_{k\ell,k^{\prime}\ell^{\prime}}\,a^{\dagger}_{\ell^{\prime}}a^{\dagger}_{k^{\prime}}\frac{a^{\dagger}_{0}a_{0}}{N}a_{k}a_{\ell}\leq\sum_{k\ell,k^{\prime}\ell^{\prime}}\Lambda_{k\ell,k^{\prime}\ell^{\prime}}\,a^{\dagger}_{\ell^{\prime}}a^{\dagger}_{k^{\prime}}a_{k}a_{\ell}\] \[\leq\sum_{k\ell,k^{\prime}\ell^{\prime}}\Lambda_{k\ell,k^{\prime}\ell^{\prime}}\,a^{\dagger}_{\ell^{\prime}}\left(\gamma_{k^{\prime}}b_{k^{\prime}}+\nu_{-k^{\prime}}b^{\dagger}_{-k^{\prime}}\right)^{\dagger}\left(\gamma_{k}b_{k}+\nu_{-k}b^{\dagger}_{-k}\right)a_{\ell}\] \[\leq 2\sum_{k\ell,k^{\prime}\ell^{\prime}}\gamma_{k^{\prime}}\gamma_{k}\Lambda_{k\ell,k^{\prime}\ell^{\prime}}\,a^{\dagger}_{\ell^{\prime}}b^{\dagger}_{k^{\prime}}b_{k}a_{\ell}+2\sum_{k\ell,k^{\prime}\ell^{\prime}}\nu_{-k^{\prime}}\nu_{-k}\Lambda_{k\ell,k^{\prime}\ell^{\prime}}\,a^{\dagger}_{\ell^{\prime}}b_{-k^{\prime}}b_{-k}^{\dagger}a_{\ell}\] \[=2\sum_{k\ell,k^{\prime}\ell^{\prime}}\gamma_{k^{\prime}}\gamma_{k}\Lambda_{k\ell,k^{\prime}\ell^{\prime}}\,a^{\dagger}_{\ell^{\prime}}b^{\dagger}_{k^{\prime}}b_{k}a_{\ell}+2\sum_{k\ell,k^{\prime}\ell^{\prime}}\nu_{-k^{\prime}}\nu_{-k}\Lambda_{k\ell,k^{\prime}\ell^{\prime}}\,a^{\dagger}_{\ell^{\prime}}b_{-k}^{\dagger}b_{-k^{\prime}}a_{\ell}+\sum_{\ell}\sigma_{\ell}a^{\dagger}_{\ell}a_{\ell}.\]
with \(\sigma_{\ell}:=2\sum_{k}\Lambda_{k\ell,k\ell}\nu_{-k}^{2}\). Note that \(\Lambda_{k\ell,k^{\prime}\ell^{\prime}}\,a^{\dagger}_{\ell^{\prime}}b^{\dagger}_{-k}b_{-k^{\prime}}a_{\ell}=\Lambda_{k\ell,k^{\prime}\ell^{\prime}}\,b^{\dagger}_{-k}a^{\dagger}_{\ell^{\prime}}a_{\ell}b_{-k^{\prime}}\), since \([b_{-k^{\prime}},a_{\ell}]=0\) for \(\ell\neq k^{\prime}\) and \(\Lambda_{kk^{\prime},k^{\prime}\ell^{\prime}}=0\). Defining \(\widetilde{\Lambda}_{k\ell,k^{\prime}\ell^{\prime}}:=\big{(}4\gamma_{k^{\prime}}\gamma_{k}\Lambda_{k\ell,k^{\prime}\ell^{\prime}}+4\nu_{k^{\prime}}\nu_{k}\Lambda_{(-k^{\prime})\ell,-k\ell^{\prime}}\big{)}|k|^{\delta-\frac{1}{2}}|k^{\prime}|^{\delta-\frac{1}{2}}\)
and using again the positivity of \(\Lambda\), we can bound \(\sum_{k\ell,k^{\prime}\ell^{\prime}}\Lambda_{k\ell,k^{\prime}\ell^{\prime}}\,a_{ \ell^{\prime}}^{\dagger}a_{k^{\prime}}^{\dagger}a_{k}a_{\ell}\) from above by
\[\sum_{k\ell,k^{\prime}\ell^{\prime}}\widetilde{\Lambda}_{k\ell,k^{\prime}\ell^{\prime}}\Big{(}|k^{\prime}|^{\frac{1}{2}-\delta}b_{k^{\prime}}\Big{)}^{\dagger}a_{\ell^{\prime}}^{\dagger}a_{\ell}\Big{(}|k|^{\frac{1}{2}-\delta}b_{k}\Big{)}+\sum_{\ell}\sigma_{\ell}a_{\ell}^{\dagger}a_{\ell}+\sum_{k,k^{\prime}}4\gamma_{k^{\prime}}\gamma_{k}\Lambda_{k(-k),k^{\prime}(-k^{\prime})}[b_{k^{\prime}},a_{-k^{\prime}}]^{\dagger}[b_{k},a_{-k}], \tag{26}\]
where \(\delta>0\). In order to investigate the first term in Eq. (26), let us first derive a sufficient bound on the operator norm of \(\widetilde{\Lambda}\). For this purpose let us define the weight function \(p(k):=|k|^{\delta-\frac{1}{2}}\) and observe that \(\widetilde{\Lambda}\) satisfies the weighted Schur test
\[\sum_{k}p(k)\left|\widetilde{\Lambda}_{k(k^{\prime}+\ell^{\prime}-k),k^{\prime }\ell^{\prime}}\right|\leq Cp(k^{\prime})N^{\frac{3\kappa}{2}}\sup_{\ell^{ \prime}}\sum_{k}|k|^{2\delta-1}\left|f_{\ell^{\prime},\ell^{\prime}-k}\right|, \tag{27}\]
for a suitable constant \(C\) and all \(\ell^{\prime}\) and \(k^{\prime}\), where we have used that \(|\gamma_{k}|,|\nu_{k}|\lesssim N^{\frac{\kappa}{4}}\) and
\[|\ell^{\prime}-k|^{2}N|f_{\ell,\ell-k^{\prime}}|\leq N|(V_{N^{1-\kappa}}-V_{N^ {1-\kappa}}RV_{N^{1-\kappa}})_{k^{\prime}(\ell^{\prime}-k^{\prime}),\ell^{ \prime}0}|\lesssim N^{\kappa}\]
by Lemma 1. As a consequence of the weighted Schur Test in Eq. (27) we have the bound
\[\|\widetilde{\Lambda}\|\lesssim N^{\frac{3\kappa}{2}}\sup_{\ell^{ \prime}}\sum_{k}|k|^{2\delta-1}|f_{\ell^{\prime},\ell^{\prime}-k}|\leq N^{ \frac{3\kappa}{2}}\sup_{\ell^{\prime}}\sum_{k}|k|^{2\delta-3}\left|\!\left<V_{ N^{1-\kappa}}\!-\!V_{N^{1-\kappa}}RV_{N^{1-\kappa}}\right>_{k(\ell^{\prime}-k),\ell^{ \prime}0}\right| \tag{28}\] \[\lesssim N^{\frac{5\kappa}{2}-1}\sup_{\ell^{\prime}}\sum_{k\neq 0 }\frac{|k|^{2\delta}}{|k|^{3}(1+N^{\kappa-1}|k-\ell^{\prime}|)}\lesssim N^{ \frac{5\kappa}{2}-1}\int_{0}^{\infty}\frac{|k|^{2\delta}}{|k|(1+N^{\kappa-1}|k |)}\,\mathrm{d}k\lesssim N^{\frac{5\kappa}{2}+2\delta(1-\kappa)-1},\]
where we have applied the uniform bound in Lemma 1. Therefore \(\|\widetilde{\Lambda}\|\lesssim N^{\frac{5\kappa}{2}+2\delta-1}\) which implies the estimate
\[\sum_{k\ell,k^{\prime}\ell^{\prime}}\widetilde{\Lambda}_{k\ell,k^{\prime}\ell^{\prime}}\Big{(}|k^{\prime}|^{\frac{1}{2}-\delta}b_{k^{\prime}}\Big{)}^{\dagger}a_{\ell^{\prime}}^{\dagger}a_{\ell}\Big{(}|k|^{\frac{1}{2}-\delta}b_{k}\Big{)}\Big{|}_{\mathcal{F}_{M_{0}}^{\leq}}\lesssim N^{\frac{5\kappa}{2}+2\delta}\frac{M_{0}+1}{N}\sum_{k}|k|^{1-2\delta}b_{k}^{\dagger}b_{k}\Big{|}_{\mathcal{F}_{M_{0}}^{\leq}}.\]
Regarding the third term in Eq. (26), we are going to use the upper bound \(|\nu_{k}|\lesssim N^{\kappa}|k|^{-2}\) and \(|\gamma_{k}|\lesssim N^{\frac{\kappa}{4}}\), as well as Eq. (28), which yields
\[\left\|\sum_{k,k^{\prime}}4\gamma_{k^{\prime}}\gamma_{k}\Lambda_{ k(-k),k^{\prime}(-k^{\prime})}[b_{k^{\prime}},a_{-k^{\prime}}]^{\dagger}[b_{k},a_{-k}] \right\|\lesssim\sum_{k,k^{\prime}}4\gamma_{k^{\prime}}\gamma_{k}|\Lambda_{k(-k ),k^{\prime}(-k^{\prime})}||\nu_{k}||\nu_{k^{\prime}}| \tag{29}\] \[\qquad\lesssim N^{\frac{5\kappa}{2}}\sum_{k\neq 0}|k|^{-(3+2 \delta)}\sum_{k^{\prime}\neq 0}|k^{\prime}|^{2\delta-3}\left|(V_{N^{1-\kappa}}-V_{N^{1- \kappa}}RV_{N^{1-\kappa}})_{(k-k^{\prime})^{\prime}k^{\prime},k0}\right| \lesssim N^{2\delta+\frac{7\kappa}{2}-1}\]
for \(\delta<\frac{1}{2}\). Regarding the second term in Eq. (26), note that we have \(0\leq\sigma_{\ell}\lesssim\mathds{1}(|\ell|<K)N^{\frac{5\kappa}{2}-1}\log N\), and consequently we have the estimate \(0\leq\sum_{\ell}\sigma_{\ell}a_{\ell}^{\dagger}a_{\ell}\Big{|}_{\mathcal{F}_{M_{0}}^{\leq}}\lesssim N^{\frac{5\kappa}{2}}\log N\frac{M_{0}}{N}\).
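For the reader's convenience we recall a standard form of the weighted Schur test that yields the operator norm bound used above (and invoked again in the proof of Lemma 7): if \(M\) is a Hermitian matrix and \(p\) a positive weight such that

\[\sum_{j}p(j)\,|M_{ij}|\leq C\,p(i)\quad\text{for all }i,\]

then \(\|M\|\leq C\). In the application above the weight is \(p(k)=|k|^{\delta-\frac{1}{2}}\), and \(\widetilde{\Lambda}\) inherits the Hermiticity of \(\Lambda\).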
**Lemma 5**.: _For \(r<1\) and \(\kappa\leq 1\), there exists a \(C>0\), such that for \(i\in\{1,2,3\}\)_
\[\left.\pm\left(\sum_{\ell,k}\Upsilon_{\ell,k}^{(i)}O_{i}\,a_{k}^{\dagger}a_{\ell}^{\dagger}a_{k+\ell}+\mathrm{H.c.}\right)\right|_{\mathcal{F}_{M_{0}}^{\leq}\cap\mathcal{F}_{rN}^{+}}\leq CN^{\frac{5\kappa}{2}}\sqrt{\frac{M_{0}+1}{N}+N^{\frac{3\kappa}{2}-1}}\left(\widetilde{\mathcal{N}}\Big{|}_{\mathcal{F}_{M_{0}}^{\leq}\cap\mathcal{F}_{rN}^{+}}+1\right).\]
Proof.: Using \(a_{k}=\frac{1}{\sqrt{a_{0}a_{0}^{\dagger}}}a_{0}\gamma_{k}b_{k}+\frac{1}{\sqrt{a_{ 0}a_{0}^{\dagger}}}a_{0}\nu_{-k}b_{-k}^{\dagger}\), we can rewrite
\[\left(\sum_{\ell,k}\Upsilon_{\ell,k}^{(i)}O_{i}\,a_{k}^{\dagger}a_{ \ell}^{\dagger}a_{k+\ell}\!+\!\text{H.c.}\right)\!=\!\left(\sum_{\ell,k}X_{ \ell,k}^{(i)}\widetilde{O}_{i}b_{k}^{\dagger}a_{\ell}^{\dagger}b_{k+\ell}\!+ \!\text{H.c.}\right)\!+\!\left(\sum_{\ell,k}Y_{\ell,k}^{(i)}\widetilde{O}_{i} a_{\ell}^{\dagger}b_{k+\ell}b_{-k}\!+\!\text{H.c.}\right),\]
where we define \(X_{\ell,k}^{(i)}:=\gamma_{k}\gamma_{\ell+k}\Upsilon_{\ell,k}^{(i)}+\nu_{k}\nu_ {k+\ell}\Upsilon_{\ell,-(k+\ell)}^{(i)}\), \(Y_{\ell,k}^{(i)}:=\big{(}\nu_{k}\gamma_{k+\ell}\!+\!\nu_{k+\ell}\gamma_{k} \big{)}\Upsilon_{\ell,k}^{(i)}\) and \(\widetilde{O}_{i}:=O_{i}\) in the case \(i\in\{1,2\}\), and \(X^{(3)}:=\nu_{-k}\gamma_{k+\ell}\overline{\Upsilon_{-k,k+\ell}^{(3)}}+\nu_{k+ \ell}\gamma_{-k}\overline{\Upsilon_{k+\ell,-k}^{(3)}}\), \(Y_{\ell,k}^{(3)}:=\big{(}\gamma_{-k}\gamma_{k+\ell}+\nu_{-k}\nu_{k+\ell}\big{)} \overline{\Upsilon_{-k,\ell+k}^{(3)}}\) and \(\widetilde{O}_{3}:=O_{3}^{\dagger}\left(\frac{1}{\sqrt{a_{0}a_{0}^{\dagger}}} a_{0}\right)^{2}\). Using \((T-1)_{(\ell-k)k,\ell 0}=\frac{1}{|k|^{2}+|\ell-k|^{2}}\Big{(}V_{N^{1-\kappa}}-V_{N^{1- \kappa}}RV_{N^{1-\kappa}}\Big{)}_{(\ell-k)k,\ell 0}\), the bounds from Lemma 1 and the simple observation that \(|\gamma_{k}|,|\nu_{k}|\lesssim N^{\frac{\kappa}{4}}\), yields \(|X_{\ell,k}|,|Y_{\ell,k}|\lesssim N^{\frac{5\kappa-1}{2}}\frac{1}{|\ell|^{2}+| k|^{2}}\). Consequently
\[\left(\sum_{\ell,k}X_{\ell,k}^{(i)}\widetilde{O}_{i}b_{k}^{\dagger}a_{\ell}^{ \dagger}b_{k+\ell}+\text{H.c.}\right)\leq\epsilon\sum_{k,\ell}\frac{b_{k+\ell }^{\dagger}b_{k+\ell}}{(|\ell|^{2}+|k|^{2})^{2}}+\epsilon^{-1}N^{5\kappa-1} \sum_{k}\widetilde{O}_{i}b_{k}^{\dagger}\left(\sum_{0<\ell<K}a_{\ell}^{\dagger }a_{\ell}\right)b_{k}\widetilde{O}_{i}^{\dagger}, \tag{30}\]
where we have used that \(X_{\ell,k}=0\) in case \(|\ell|\geq K\). Note at this point that \([b_{k},\widetilde{O}_{i}^{\dagger}]=\gamma_{k}\big{[}a_{0}^{\dagger}\frac{1}{ \sqrt{a_{0}a_{0}^{\dagger}}};\widetilde{O}_{i}^{\dagger}\big{]}a_{k}+\nu_{k} \big{[}\frac{1}{\sqrt{a_{0}a_{0}^{\dagger}}}a_{0},\widetilde{O}_{i}^{\dagger} \big{]}a_{-k}^{\dagger}\) and \(\big{[}a_{0}^{\dagger}\frac{1}{\sqrt{a_{0}a_{0}^{\dagger}}},\widetilde{O}_{i}^{ \dagger}\big{]}^{\dagger}\big{[}a_{0}^{\dagger}\frac{1}{\sqrt{a_{0}a_{0}^{ \dagger}}},\widetilde{O}_{i}^{\dagger}\big{]}\big{|}_{\mathcal{F}_{rN}^{+}} \lesssim\frac{1}{N^{2}}\) as well as \(\big{[}\frac{1}{\sqrt{a_{0}a_{0}^{\dagger}}}a_{0},\widetilde{O}_{i}^{\dagger} \big{]}^{\dagger}\big{[}\frac{1}{\sqrt{a_{0}a_{0}^{\dagger}}}a_{0},\widetilde{O }_{i}^{\dagger}\big{|}_{\mathcal{F}_{rN}^{+}}\lesssim\frac{1}{N}\) and \(\widetilde{O}_{i}\widetilde{O}_{i}^{\dagger}\big{|}_{\mathcal{F}_{rN}^{+}} \lesssim 1\). Consequently by Lemma 10
\[\widetilde{O}_{i}\widetilde{\mathcal{N}}\widetilde{O}_{i}^{\dagger} \big{|}_{\mathcal{F}_{rN}^{+}}\lesssim \widetilde{\mathcal{N}}\Big{|}_{\mathcal{F}_{rN}^{+}}\!+\!\frac{1}{N^{2}} \!\sum_{k\neq 0}\gamma_{k}^{2}a_{k}^{\dagger}a_{k}\!+\!\frac{1}{N^{2}}\!\sum_{k \neq 0}|\nu_{k}|^{2}a_{k}a_{k}^{\dagger}\!\lesssim\!\big{(}1\!+\!N^{2\kappa-2} \!\big{)}\!\bigg{(}\widetilde{\mathcal{N}}\Big{|}_{\mathcal{F}_{rN}^{+}}\!+\!1 \bigg{)}\!\lesssim\!\widetilde{\mathcal{N}}\Big{|}_{\mathcal{F}_{rN}^{+}}\!+\!1.\]
Using \(\sum_{k,\ell}\frac{1}{(|\ell|^{2}+|k|^{2})^{2}}b_{k+\ell}^{\dagger}b_{k+\ell} \lesssim\widetilde{\mathcal{N}}\) and that \(b_{k}\) and \(b_{k}^{\dagger}\) map \(\mathcal{F}_{M_{0}}^{\leq}\) into \(\mathcal{F}_{M_{0}+1}^{\leq}\), we obtain by Eq. (30) that
\[\left(\sum_{\ell,k}X_{\ell,k}^{(i)}\widetilde{O}_{i}b_{k}^{\dagger}a_{ \ell}^{\dagger}b_{k+\ell}+\text{H.c.}\right)\bigg{|}_{\mathcal{F}_{M_{0}}^{\leq}} \lesssim\left(\epsilon\widetilde{\mathcal{N}}+\epsilon^{-1}N^{5\kappa}\frac{M_{0} +1}{N}\widetilde{O}_{i}\widetilde{\mathcal{N}}\widetilde{O}_{i}^{\dagger}\right) \bigg{|}_{\mathcal{F}_{M_{0}}^{\leq}}\] \[\lesssim\left(\epsilon+\epsilon^{-1}N^{5\kappa}\frac{M_{0}+1}{N} \right)\left(\widetilde{\mathcal{N}}\bigg{|}_{\mathcal{F}_{M_{0}}^{\leq}}+1 \right)=2N^{\frac{5\kappa}{2}}\sqrt{\frac{M_{0}+1}{N}}\left(\widetilde{ \mathcal{N}}\bigg{|}_{\mathcal{F}_{M_{0}}^{\leq}}+1\right)\]
for an optimal choice of \(\epsilon\). Regarding the \(Y\)-contributions, we are going to estimate
\[\left(\sum_{\ell,k}Y_{\ell,k}^{(i)}\widetilde{O}_{i}a_{\ell}^{\dagger}b_{k+\ell}b_ {-k}+\text{H.c.}\right)\lesssim\epsilon\,\widetilde{\mathcal{N}}+\epsilon^{-1} \sum_{k,\ell,\ell^{\prime}}\overline{Y_{\ell^{\prime},k}}Y_{\ell,k}\,\widetilde{O}_{i} a_{\ell}^{\dagger}b_{\ell+k}b_{\ell^{\prime}+k}^{\dagger}a_{\ell^{\prime}} \widetilde{O}_{i}^{\dagger}.\]
Defining the operator \(G\) via its coefficients \(G_{\ell k,\ell^{\prime}k^{\prime}}:=\delta_{\ell+k=\ell^{\prime}+k^{\prime}} \overline{Y_{\ell^{\prime},k-\ell^{\prime}}}Y_{\ell,k-\ell}\) and using the fact that \(b_{k}\) satisfies the CCR on \(\mathcal{F}_{N-1}^{+}\) as well as \(a_{\ell}\mathcal{F}_{N-1}^{+}\subset\mathcal{F}_{N-2}^{+}\), we furthermore obtain
\[\sum_{k,\ell,\ell^{\prime}}\overline{Y_{\ell^{\prime},k}}Y_{\ell,k}\,\widetilde{O}_{i}a_{\ell}^{\dagger}b_{\ell+k}b_{\ell^{\prime}+k}^{\dagger}a_{\ell^{\prime}}\widetilde{O}_{i}^{\dagger}=\sum_{k\ell,k^{\prime}\ell^{\prime}}G_{\ell k,\ell^{\prime}k^{\prime}}\,\widetilde{O}_{i}a_{\ell}^{\dagger}b_{k}^{\dagger}b_{k^{\prime}}a_{\ell^{\prime}}\widetilde{O}_{i}^{\dagger}+\sum_{k,\ell}\overline{Y_{\ell,k}}Y_{\ell,k}\,a_{\ell}^{\dagger}\widetilde{O}_{i}\widetilde{O}_{i}^{\dagger}a_{\ell}.\]
Note that \(\sum_{k,\ell}\overline{Y_{\ell,k}}Y_{\ell,k}\,a_{\ell}^{\dagger}\widetilde{O}_{i} \widetilde{O}_{i}^{\dagger}a_{\ell}\lesssim\sum_{k,\ell}\overline{Y_{\ell,k}}Y _{\ell,k}\,a_{\ell}^{\dagger}a_{\ell}\lesssim N^{5\kappa-1}\mathcal{N} \lesssim N^{5\kappa}N^{\frac{3\kappa}{2}-1}\left(\widetilde{\mathcal{N}}+1\right)\) by Lemma 10. Using furthermore the bound on the operator norm \(\|G\|\lesssim N^{5\kappa-1}\) yields
\[\pm\sum_{k\ell,k^{\prime}\ell^{\prime}}G_{\ell k,\ell^{\prime}k^{\prime}}\widetilde{O}_{i}a_{\ell}^{\dagger}b_{k}^{\dagger}b_{k^{\prime}}a_{\ell^{\prime}}\widetilde{O}_{i}^{\dagger}\lesssim N^{5\kappa-1}\sum_{0<|\ell|<K,k}\widetilde{O}_{i}a_{\ell}^{\dagger}b_{k}^{\dagger}b_{k}a_{\ell}\widetilde{O}_{i}^{\dagger}\] \[\lesssim N^{5\kappa-1}\sum_{k}\widetilde{O}_{i}b_{k}^{\dagger}\left(\sum_{0<|\ell|<K}a_{\ell}^{\dagger}a_{\ell}\right)b_{k}\widetilde{O}_{i}^{\dagger}+N^{5\kappa-1}\sum_{k}\widetilde{O}_{i}[b_{k},a_{-k}]^{\dagger}[b_{k},a_{-k}]\widetilde{O}_{i}^{\dagger}.\]
Note that \(\sum_{k}\widetilde{O}_{i}[b_{k},a_{-k}]^{\dagger}[b_{k},a_{-k}]\widetilde{O}_ {i}^{\dagger}\Big{|}_{\mathcal{F}_{rN}^{+}}\lesssim\sum_{k}|\nu_{k}|^{2} \widetilde{O}_{i}\widetilde{O}_{i}^{\dagger}\Big{|}_{\mathcal{F}_{rN}^{+}} \lesssim\sum_{k}|\nu_{k}|^{2}\lesssim N^{\frac{3\kappa}{2}}\), and hence
\[\pm\sum_{k\ell,k^{\prime}\ell^{\prime}}G_{\ell k,\ell^{\prime}k^{\prime}}\widetilde{O}_{i}a_{\ell}^{\dagger}b_{k}^{\dagger}b_{k^{\prime}}a_{\ell^{\prime}}\widetilde{O}_{i}^{\dagger}\Big{|}_{\mathcal{F}_{M_{0}}^{\leq}\cap\mathcal{F}_{rN}^{+}}\lesssim N^{5\kappa}\left(\frac{M_{0}+1}{N}\widetilde{O}_{i}\widetilde{\mathcal{N}}\widetilde{O}_{i}^{\dagger}\Big{|}_{\mathcal{F}_{M_{0}}^{\leq}\cap\mathcal{F}_{rN}^{+}}+N^{\frac{3\kappa}{2}-1}\right)\] \[\lesssim N^{5\kappa}\left(\frac{M_{0}+1}{N}+N^{\frac{3\kappa}{2}-1}\right)\left(\widetilde{\mathcal{N}}\Big{|}_{\mathcal{F}_{M_{0}}^{\leq}\cap\mathcal{F}_{rN}^{+}}+1\right).\]
Using the optimal choice \(\epsilon:=N^{\frac{5\kappa}{2}}\sqrt{\frac{M_{0}+1}{N}+N^{\frac{3\kappa}{2}-1}}\) concludes the proof.
At this point we want to emphasise that our error estimates in Lemma 4 and Lemma 5 can only produce good bounds in case we apply them to states \(\Psi\in\mathcal{F}_{M}^{\leq}\) with a suitably small number of particles \(M\). As we will demonstrate in Appendix C, we can always restrict our attention to such states.
### Proof of Theorem 2
In the following let \(0<\kappa<\frac{1}{8}\) and \(d\in\mathbb{N}\), and let us introduce the concrete choices \(\lambda_{0}:=\frac{3}{8}\), \(\lambda:=\frac{21}{32}\) and \(K:=N^{\frac{5}{16}+\frac{\kappa}{2}}\). Furthermore let us for now assume that \(E_{N,\kappa}^{(d)}\leq E_{N,\kappa}+CN^{\frac{5}{2}}\) and let \(\mathcal{V}_{d}\subseteq\mathcal{F}_{N^{\lambda_{0}}}^{\leq}\cap\mathcal{F}_{N^{\lambda}}^{+}\) be a \(d\)-dimensional subspace, such that for \(\tau:=\frac{5}{2}\big{(}\frac{1}{8}-\kappa\big{)}\)
\[E_{N,\kappa}^{(d)}\geq\sup_{\Psi\in\mathcal{V}_{d}:\|\Psi\|=1}\left\langle\Psi, H_{N,\kappa}\Psi\right\rangle-CN^{-\tau}, \tag{31}\]
which exists according to Lemma 11. We want to emphasise at this point that Lemma 11 depends crucially on the a priori estimates derived in Appendix C using the recent results in [6]. For any \(\Psi\) in \(\mathcal{V}_{d}\) with \(\|\Psi\|=1\) we clearly have \(\left\langle\Psi,\mathcal{E}_{2}\Psi\right\rangle\leq\left\langle\Psi,\sum_{0<|k|<K}\lambda_{k}\mathcal{N}a_{k}^{\dagger}a_{k}\Psi\right\rangle\lesssim N^{\kappa+\lambda_{0}-1}\left\langle\Psi,\mathcal{N}\Psi\right\rangle\lesssim N^{\frac{5\kappa}{2}+\lambda_{0}-1}\left\langle\Psi,\left(\widetilde{\mathcal{N}}+1\right)\Psi\right\rangle\), where we used Lemma 10 and \(\widetilde{\mathcal{N}}\) is defined above Lemma 4. Again by Lemma 10, we have \(|\left\langle\Psi,\sum_{k\neq 0}|k|^{2}w_{k}^{2}N^{-2}\left(a_{0}^{2\dagger}a_{0}^{2}-1\right)a_{k}^{\dagger}a_{k}\Psi\right\rangle|\lesssim N^{2\kappa-1}\left\langle\Psi,\mathcal{N}(\mathcal{N}+1)\,\Psi\right\rangle\lesssim N^{\frac{5\kappa}{2}+\lambda-1}\langle\Psi,\left(\widetilde{\mathcal{N}}+1\right)\Psi\rangle\), which appears in the decomposition of \(\mathcal{E}_{1}\) in Eq. (24). Combining this with the representation of \(\mathcal{E}_{1}\) and \(\mathcal{E}_{3}\) from Eq. (24) and Eq. (25), the estimates from Lemmata 4, 5 and 9, as well as Eq. (52) and Eq. (53), we obtain a corresponding bound on \(\big{|}\big{\langle}\Psi,\sum_{i=1}^{4}\mathcal{E}_{i}\Psi\big{\rangle}\big{|}\).
Further \(\left\langle\Psi,\left(\sum_{k}|k|^{1-2\delta}b_{k}^{\dagger}b_{k}\!+\!1\right) \Psi\right\rangle\lesssim N^{-\frac{\kappa}{2}}\left\langle\Psi,\sum_{k\neq 0}e_{k} \left(\gamma_{k}d_{k}\!+\!\nu_{k}d_{-k}^{\dagger}\right)^{\dagger}\left(\gamma_ {k}d_{k}\!+\!\nu_{k}d_{-k}^{\dagger}\right)\Psi\right\rangle+1\), by Eq. (51). Combining what we have so far with the lower bound in Eq. (23) yields for a suitable constant \(C\)
\[\left\langle\Psi,H_{N,\kappa}\Psi\right\rangle\geq 4\pi\mathfrak{a}_{N^{1- \kappa}}N^{\kappa}(N-1)+\frac{1}{2}\sum_{k\neq 0}\left\{\sqrt{A_{k}^{2}-B_{k}^{2}}- A_{k}+C_{k}\right\}-CN^{-\tau}\]
\[+\left(1-CN^{-\tau}N^{-\frac{\kappa}{2}}\right)\left\langle\Psi,\sum_{k\neq 0 }e_{k}\left(\gamma_{k}d_{k}\!+\!\nu_{k}d_{-k}^{\dagger}\right)^{\dagger}\left( \gamma_{k}d_{k}\!+\!\nu_{k}d_{-k}^{\dagger}\right)\Psi\right\rangle. \tag{32}\]
In order to compare \(\frac{1}{2}\sum_{k\neq 0}\left\{\sqrt{|k|^{4}+16\pi\mathfrak{a}N^{\kappa}|k|^{2}}-|k| ^{2}-8\pi\mathfrak{a}N^{\kappa}+\frac{(8\pi\mathfrak{a}N^{\kappa})^{2}}{2|k|^ {2}}\right\}\) with the expression \(\frac{1}{2}\sum_{k\neq 0}\left\{\sqrt{A_{k}^{2}-B_{k}}-A_{k}+C_{k}\right\}\), note that \(||k|^{2}+8\pi\mathfrak{a}N^{\kappa}-2A_{k}|\lesssim N^{2\kappa-1}(1\!+\!|k|)+N^ {\kappa}\mathds{1}(|k|>K)\) and \(|8\pi\mathfrak{a}N^{\kappa}-2B_{k}|\lesssim N^{2\kappa-1}(1+|k|)\) by Lemma 1, and therefore
\[\left|\sum_{k\neq 0}\left\{\sqrt{A_{k}^{2}-B_{k}^{2}}-A_{k}+C_{k} \right\}-\sum_{k\neq 0}\left\{\sqrt{|k|^{4}+16\pi\mathfrak{a}N^{\kappa}|k|^{2}}- |k|^{2}-8\pi\mathfrak{a}N^{\kappa}+\frac{(8\pi\mathfrak{a}N^{\kappa})^{2}}{2| k|^{2}}\right\}\right|\] \[\qquad\qquad\lesssim\frac{N^{4\kappa-1}}{N^{\frac{\kappa}{2}}} \log N+\sum_{|k|>K}\frac{N^{3\kappa}}{|k|^{4}}\lesssim N^{\frac{7\kappa}{2}-1} \log N+\frac{N^{3\kappa}}{K}\lesssim N^{-\tau}, \tag{33}\]
Using \(e_{k}\geq 0\) in Eq. (32) and the state \(\Psi_{0}\) which spans the space \(\mathcal{V}_{1}\), we can consequently verify the first statement Eq. (4)
\[E_{N,\kappa}^{(1)}\!\geq\!\!\left\langle\Psi_{0},H_{N,\kappa} \Psi_{0}\right\rangle\!-\!C_{1}N^{-\tau}\!\!\geq\!4\pi\mathfrak{a}_{N^{1- \kappa}}N^{\kappa}(N\!-\!1)\!+\!\frac{1}{2}\sum_{k\neq 0}\!\left\{\!\sqrt{A_{k}^{2}\!-\! B_{k}^{2}}\!-\!A_{k}\!+\!C_{k}\!\right\}\!-\!C_{2}N^{-\tau}\] \[\!\geq\!4\pi\mathfrak{a}_{N^{1-\kappa}}N^{\kappa}(N\!-\!1)\!+\! \frac{1}{2}\sum_{k\neq 0}\left\{\sqrt{|k|^{4}+16\pi\mathfrak{a}N^{\kappa}|k|^{2}}-|k| ^{2}-8\pi\mathfrak{a}N^{\kappa}+\frac{(8\pi\mathfrak{a}N^{\kappa})^{2}}{2|k|^ {2}}\right\}\!-\!C_{3}N^{-\tau}\!,\]
where \(C_{1},C_{2},C_{3}>0\) are suitable constants.
In order to verify the second statement of Theorem 2, recall the definition of \(\lambda_{N,\kappa}^{(d)}\) in Theorem 2 and let us define \(\widetilde{e}_{k}:=\min\{e_{k},\lambda_{N,\kappa}^{(d)}+1\}\). By Eq. (32) we obtain for a suitable constant \(C>0\) the lower bound
\[\left\langle\Psi,H_{N,\kappa}\Psi\right\rangle\geq 4\pi\mathfrak{a}_{N^{1- \kappa}}N^{\kappa}(N-1)+\frac{1}{2}\sum_{k\neq 0}\left\{\sqrt{A_{k}^{2}-B_{k}^{2}}- A_{k}+C_{k}\right\}-CN^{-\tau} \tag{34}\] \[\qquad+\left(1-CN^{-\tau}N^{-\frac{\kappa}{2}}\right)\left\langle \Psi,\sum_{k\neq 0}\widetilde{e}_{k}\left(\gamma_{k}d_{k}\!+\!\nu_{k}d_{-k}^{\dagger} \right)^{\dagger}\left(\gamma_{k}d_{k}\!+\!\nu_{k}d_{-k}^{\dagger}\right)\Psi \right\rangle.\]
Since \(|\widetilde{e}_{k}|\lesssim N^{\frac{\kappa}{2}}\) uniformly in \(k\), we furthermore have by Eq. (50) for \(\Psi\in\mathcal{V}_{d}\)
\[\left\langle\Psi,\left\{\sum_{k\neq 0}\widetilde{e}_{k}b_{k}^{ \dagger}b_{k}-\sum_{k\neq 0}\widetilde{e}_{k}\left(\gamma_{k}d_{k}\!+\!\nu_{k}d_{-k}^{ \dagger}\right)^{\dagger}\left(\gamma_{k}d_{k}\!+\!\nu_{k}d_{-k}^{\dagger} \right)\right\}\Psi\right\rangle\] \[\qquad\lesssim\left(N^{\frac{5\kappa+\lambda_{0}-1}{2}}+N^{2 \kappa+\lambda-1}\right)\left(\left\langle\Psi,\widetilde{\mathcal{N}}\Psi\right \rangle+1\right)\lesssim N^{-\tau}\left(\left\langle\Psi,\widetilde{\mathcal{N}} \Psi\right\rangle+1\right),\]
where the operators \(b_{k}\) are introduced above Lemma 4. In combination with Eq. (34), Eq. (31) and Eq. (33), we therefore obtain for a suitable constant \(C>0\)
\[E_{N,\kappa}^{(d)}\geq 4\pi\mathfrak{a}_{N^{1-\kappa}}N^{\kappa}(N-1)+ \frac{1}{2}\sum_{k\neq 0}\left\{\sqrt{|k|^{4}+16\pi\mathfrak{a}N^{\kappa}|k|^{2}}-|k| ^{2}-8\pi\mathfrak{a}N^{\kappa}+\frac{(8\pi\mathfrak{a}N^{\kappa})^{2}}{2|k|^ {2}}\right\}\] \[\quad+\big{(}1-CN^{-\tau}N^{-\frac{\kappa}{2}}\big{)}\sup_{ \Psi\in\mathcal{V}_{d}:\|\Psi\|=1}\left\langle\Psi,\sum_{k\neq 0}\widetilde{e}_{k} b_{k}^{\dagger}b_{k}\Psi\right\rangle-CN^{-\tau}.\]
Following the work [5], respectively by making use of the excitation map \(U_{N}\) introduced in [16], we note that the operators \(b_{k}\) can be extended from \(\mathcal{F}_{N-1}^{+}\) to operators satisfying the CCR, meaning that there exists a Hilbert space extension \(\mathcal{F}_{N-1}^{+}\subseteq\mathfrak{h}\) and operators \(\mathfrak{b}_{k}\) defined on \(\mathfrak{h}\), such that the family \(\{\mathfrak{b}_{k}:k\in 2\pi\mathbb{Z}^{3}\setminus\{0\}\}\) is unitarily equivalent to the standard annihilation operators and \(\mathfrak{b}_{k}\Psi=b_{k}\Psi\) for \(\Psi\in\mathcal{F}_{N-1}^{+}\). Denoting the \(d\)-th eigenvalue of \(\sum_{k\neq 0}\widetilde{e}_{k}\mathfrak{b}_{k}^{\dagger}\mathfrak{b}_{k}\) as \(\widetilde{\lambda}_{N,\kappa}^{(d)}\) and making use of the fact that \(\mathcal{V}_{d}\subseteq\mathcal{F}_{N-1}^{+}\) for \(N\) large enough, we obtain by the min-max principle
\[\sup_{\Psi\in\mathcal{V}_{d}:\|\Psi\|=1}\left\langle\Psi,\sum_{k\neq 0} \widetilde{e}_{k}b_{k}^{\dagger}b_{k}\Psi\right\rangle=\sup_{\Psi\in\mathcal{V }_{d}:\|\Psi\|=1}\left\langle\Psi,\sum_{k\neq 0}\widetilde{e}_{k} \mathfrak{b}_{k}^{\dagger}\mathfrak{b}_{k}\Psi\right\rangle\geq\widetilde{ \lambda}_{N,\kappa}^{(d)}. \tag{35}\]
Since the operators \(\mathfrak{b}_{k}\) are unitarily equivalent to standard annihilation operators, we know that the eigenvalues \(\widetilde{\lambda}_{N,\kappa}^{(d)}\) are an enumeration of \(\left\{\sum_{k\neq 0}n_{k}\widetilde{e}_{k}:n_{k}\in\mathbb{N}_{0}\right\}\) in increasing order. Using \(||k|^{2}+8\pi\mathfrak{a}N^{\kappa}-2A_{k}|\lesssim N^{2\kappa-1}(1+|k|)\) for \(|k|<K\) and \(|8\pi\mathfrak{a}N^{\kappa}-2B_{k}|\lesssim N^{2\kappa-1}(1+|k|)\), we obtain \(|e_{k}-\sqrt{|k|^{4}+16\pi\mathfrak{a}N^{\kappa}|k|^{2}}|\lesssim N^{-\tau}\) for fixed \(k\), and consequently \(|\widetilde{\lambda}_{N,\kappa}^{(d)}-\lambda_{N,\kappa}^{(d)}|\lesssim N^{-\tau}\). Using that \(|\lambda_{N,\kappa}^{(d)}|\lesssim N^{\frac{\kappa}{2}}\), we obtain for suitable \(C,C^{\prime}>0\)
\[E_{N,\kappa}^{(d)}-4\pi\mathfrak{a}_{N^{1-\kappa}}N^{\kappa}(N-1 )-\frac{1}{2}\sum_{k\neq 0}\left\{\sqrt{|k|^{4}+16\pi\mathfrak{a}N^{\kappa}|k|^{2}}-|k |^{2}-8\pi\mathfrak{a}N^{\kappa}+\frac{(8\pi\mathfrak{a}N^{\kappa})^{2}}{2|k| ^{2}}\right\}\\ \geq\left(1-C^{\prime}N^{-\tau}N^{-\frac{\kappa}{2}}\right) \lambda_{N,\kappa}^{(d)}-C^{\prime}N^{-\tau}\geq\lambda_{N,\kappa}^{(d)}-CN^ {-\tau}.\]
Together with Theorem 3, this concludes the proof of Theorem 2.
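As an aside, the enumeration used above — the eigenvalues \(\widetilde{\lambda}_{N,\kappa}^{(d)}\) list the elements of \(\left\{\sum_{k\neq 0}n_{k}\widetilde{e}_{k}:n_{k}\in\mathbb{N}_{0}\right\}\) in increasing order — can be made concrete by a small brute-force computation. The following sketch is an illustration only: it uses the limiting dispersion \(\sqrt{|k|^{4}+16\pi\mathfrak{a}|k|^{2}}\) with the arbitrary choices \(\mathfrak{a}=1\), a finite lattice cutoff and at most three excitations in total.

```python
import numpy as np
from itertools import product

a = 1.0  # scattering length, arbitrary illustrative choice

# single-excitation energies e(k) = sqrt(|k|^4 + 16*pi*a*|k|^2), k = 2*pi*n
energies = []
for n in product(range(-3, 4), repeat=3):
    k2 = (2 * np.pi)**2 * float(n[0]**2 + n[1]**2 + n[2]**2)
    if k2 > 0:
        energies.append(np.sqrt(k2**2 + 16 * np.pi * a * k2))
energies.sort()

# brute-force sums with at most three excitations among the cheapest modes
values = {0.0}
for _ in range(3):
    values |= {v + e for v in values for e in energies[:20]}
print(sorted(values)[:6])  # lowest few values of sum_k n_k e_k
```

The lowest value \(0\) corresponds to the configuration with \(n_{k}=0\) for all \(k\), i.e. to the ground state of the quadratic operator.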
## 5 Trial States and their Energy
In this Section we want to verify the upper bound on \(E_{N,\kappa}\) in Theorem 3. For this purpose let us use the concrete choice \(K:=\infty\) for the cut-off parameter and let us keep the term \(\frac{1}{2}\sum_{jk,mn}\left(\pi_{\mathcal{H}}V_{N^{1-\kappa}}\pi_{\mathcal{H} }\right)_{jk,mn}\psi_{jk}^{\dagger}\psi_{mn}=\frac{1}{2}\sum_{jk,mn\neq 0} \left(V_{N^{1-\kappa}}\right)_{jk,mn}\psi_{jk}^{\dagger}\psi_{mn}\), which we have bounded from below by \(0\) in Eq. (15), yielding the identity
\[H_{N,\kappa}^{\prime}\!=\!\sum_{k\neq 0}e_{k}\left(\gamma_{k}d_{k}\!+\!\nu_{k}d_{ -k}^{\dagger}\right)^{\!\dagger}\!\left(\gamma_{k}d_{k}\!+\!\nu_{k}d_{-k}^{ \dagger}\right)\!+\!\frac{1}{2}\sum_{jk,mn\neq 0}\left(V_{N^{1-\kappa}}\right)_{jk,mn} \psi_{jk}^{\dagger}\psi_{mn}\!-\!\sum_{i=1}^{4}\mathcal{E}_{i}, \tag{36}\]
with \(H_{N,\kappa}^{\prime}:=H_{N,\kappa}-4\pi\mathfrak{a}_{N^{1-\kappa}}N^{\kappa}(N -1)-\frac{1}{2}\sum_{k\neq 0}\left\{\sqrt{A_{k}^{2}-B_{k}^{2}}-A_{k}+C_{k}\right\}\). In order to obtain from this representation of \(H_{N,\kappa}\) an upper bound on its eigenvalues \(E_{N,\kappa}^{(d)}\), we need to
find trial states \(\Theta_{d}\) which simultaneously annihilate the variables \(\gamma_{k}d_{k}+\nu_{k}d_{-k}^{\dagger}\) for \(k\neq 0\) and \(\psi_{jk}\) for \(j,k\neq 0\), at least in an approximate sense. This will be carried out in the following two Subsections 5.1 and 5.2, using the operators \(\mathfrak{b}_{k}\) introduced above Eq. (35) as well as \(\mathfrak{a}_{k}:=\gamma_{k}\mathfrak{b}_{k}-\nu_{k}\mathfrak{b}_{-k}^{\dagger}\), which is an extension of \(a_{0}^{\dagger}\frac{1}{\sqrt{a_{0}a_{0}^{\dagger}}}a_{k}\) from \(\mathcal{F}_{N-1}^{+}\) to \(\mathfrak{h}\).
### Annihilation of \(\gamma_{k}d_{k}+\nu_{k}d_{-k}^{\dagger}\)
We start by expressing \(\gamma_{k}d_{k}+\nu_{k}d_{-k}^{\dagger}\) in terms of the operators \(\mathfrak{b}_{k}\) as
\[\gamma_{k}d_{k}+\nu_{k}d_{-k}^{\dagger}=\mathfrak{b}_{k}-\gamma_{k}\delta_{k} -\nu_{k}\delta_{-k}^{\dagger}=\mathfrak{b}_{k}+\sum_{p}\sqrt{N}f_{p,k} \mathfrak{b}_{p-k}^{\dagger}\mathfrak{a}_{p}+\epsilon_{k}, \tag{37}\]
with
\[\epsilon_{k}=-\nu_{k}\delta_{-k}^{\dagger}-\gamma_{k}\delta_{k}^{\prime\prime }+\sum_{p}\left(\gamma_{k}\gamma_{p-k}\sqrt{a_{0}^{\dagger}a_{0}}-\sqrt{N} \right)f_{p,k}\mathfrak{a}_{p-k}^{\dagger}\mathfrak{a}_{p}+\sum_{p}\gamma_{k} \nu_{p-k}\sqrt{a_{0}^{\dagger}a_{0}}f_{p,k}\mathfrak{b}_{k-p}\mathfrak{a}_{p}.\]
In the following Lemma 6 we show that the contribution coming from the \(\epsilon_{k}\) can be regarded as a small error.
**Lemma 6**.: _There exists a constant \(C>0\), such that we have for \(\kappa<1\) the estimate_
\[\sum_{k}e_{k}\,\epsilon_{k}^{\dagger}\epsilon_{k}\leq CN^{6\kappa-1}\Big{(} \widetilde{\mathcal{N}}+1\Big{)}^{2}\,.\]
Proof.: Due to the positivity of \(e_{k}\) we can estimate the square separately for each term in the definition of \(\epsilon_{k}\). Starting with \(-\nu_{k}\delta_{-k}^{\dagger}\) we obtain by a slight adaptation of Eq. (54)
\[\sum_{k\neq 0}e_{k}\left(\nu_{k}\delta_{-k}^{\dagger}\right)^{\dagger}\nu_{k} \delta_{-k}^{\dagger}\lesssim\left(\sum_{k\neq 0}e_{k}\nu_{k}^{2}|k|^{-4} \right)N^{5\kappa-1}\Big{(}\widetilde{\mathcal{N}}+1\Big{)}^{3}\lesssim N^{6 \kappa-1}\Big{(}\widetilde{\mathcal{N}}+1\Big{)}^{3}\,.\]
By a slight modification of the upper bound on \(\left(\delta_{k}^{\prime\prime}\right)^{\dagger}\delta_{k}^{\prime\prime}\) obtained in Lemma 10, we have
\[\sum_{k\neq 0}e_{k}\left(\delta_{k}^{\prime\prime}\right)^{\dagger}\delta_{k}^{ \prime\prime}\lesssim\frac{\sum_{k\neq 0}e_{k}\gamma_{k}^{2}w_{k}^{2}}{N}N^{ \frac{1\kappa}{2}-1}\Big{(}\widetilde{\mathcal{N}}+1\Big{)}\lesssim N^{\frac{ 1\kappa}{2}-1}\Big{(}\widetilde{\mathcal{N}}+1\Big{)}\,.\]
Regarding \(X_{k}:=\sum_{p}\left(\gamma_{k}\gamma_{p-k}\sqrt{a_{0}^{\dagger}a_{0}}-\sqrt{ N}\right)f_{p,k}\mathfrak{a}_{p-k}^{\dagger}\mathfrak{a}_{p}\), note that \(\left|\gamma_{k}\gamma_{p-k}\sqrt{a_{0}^{\dagger}a_{0}}-\sqrt{N}\right|^{2} \lesssim\frac{1}{N}\left(\frac{N}{N}+\frac{N^{\kappa+\delta}}{|k|}+\frac{N^{ \kappa+\delta}}{|p-k|}\right)\) for \(\delta>0\), and therefore \(\sum_{k\neq 0}e_{k}X_{k}^{\dagger}X_{k}\lesssim N^{5\kappa-1}\Big{(}\widetilde{ \mathcal{N}}+1\Big{)}^{3}\) for \(\delta\) small enough, where we have used \(\mathcal{N}^{m}\lesssim N^{\frac{3m\kappa}{2}}\left(\widetilde{\mathcal{N}}+1 \right)^{m}\). The final term in the definition of \(\epsilon_{k}\) can be estimated in the same way.
In order to cancel the term \(\sum_{p}\sqrt{N}f_{p,k}\mathfrak{b}_{p-k}^{\dagger}\mathfrak{a}_{p}\) in Eq. (37), let us introduce the operator \(G:=\frac{1}{2}\sum_{p,k}\sqrt{N}f_{p,k}\mathfrak{b}_{k}^{\dagger}\mathfrak{b}_ {p-k}^{\dagger}\mathfrak{a}_{p}\), which allows us to write
\[\Big{(}\gamma_{k}d_{k}+\nu_{k}d_{-k}^{\dagger}\Big{)}(1\!-\!G)=\mathfrak{b}_{k }-G\mathfrak{b}_{k}+\epsilon_{k}(1\!-\!G)\,. \tag{38}\]
**Corollary 1**.: _Let \(\mathbb{K}:=\sum_{k}e_{k}\mathbf{b}_{k}^{\dagger}\mathbf{b}_{k}+\sum_{jk,mn\neq 0 }\left(V_{N^{1-\kappa}}\right)_{jk,mn}\mathbf{b}_{k}^{\dagger}\mathbf{b}_{j}^{ \dagger}\mathbf{b}_{m}\mathbf{b}_{n}\). There exists a \(C>0\), such that_
\[\pm\sum_{k}e_{k}\left(\left(\gamma_{k}d_{k}+\nu_{k}d_{-k}^{\dagger} \right)\left(1\!-\!G\right)-\mathbf{b}_{k}\right)^{\dagger}\left(\left(\gamma_ {k}d_{k}+\nu_{k}d_{-k}^{\dagger}\right)\left(1\!-\!G\right)-\mathbf{b}_{k}\right) \tag{39}\] \[\leq CN^{\frac{11\kappa}{2}-1}\Big{(}\widetilde{\mathcal{N}}+1 \Big{)}^{2}(\mathbb{K}+1)\,.\]
Proof.: First of all note that we have by Lemma 6
\[\sum_{k\neq 0} e_{k}(1\!-\!G)^{\dagger}\epsilon_{k}^{\dagger}\epsilon_{k}(1\!-\!G) \lesssim N^{6\kappa-1}\!(1\!-\!G)^{\dagger}\left(\widetilde{\mathcal{N}}^{2} +1\right)\left(1\!-\!G\right)\] \[\lesssim N^{6\kappa-1}\left(\widetilde{\mathcal{N}}+1\right)^{2} \lesssim N^{\frac{11\kappa}{2}-1}\left(\widetilde{\mathcal{N}}+1\right)( \mathbb{K}+1)\,.\]
Furthermore we have that \(G^{\dagger}G\lesssim N^{2\kappa-1}(\mathcal{N}+1)^{2}\lesssim N^{5\kappa-1} \Big{(}\widetilde{\mathcal{N}}+1\Big{)}^{2}\), and consequently \(\sum_{k\neq 0}e_{k}\mathbf{b}_{k}^{\dagger}G^{\dagger}G\mathbf{b}_{k} \lesssim N^{5\kappa-1}\Big{(}\widetilde{\mathcal{N}}+1\Big{)}^{2}(\mathbb{K}+1)\). This concludes the proof by Eq. (38).
### Annihilation of \(\psi_{jk}\)
Using the definition of \(f_{j,k}\) in Eq. (22) and introducing \(\chi:=\left(a_{0}^{\dagger}\frac{1}{\sqrt{a_{0}a_{0}^{\dagger}}}\right)^{2}\), we can rewrite
\[\chi\psi_{jk}=\mathbf{b}_{j}\mathbf{b}_{k}+\sqrt{N}f_{k+j,k}\mathbf{a}_{k+j}+ \epsilon_{jk} \tag{40}\]
on the space \(\mathcal{F}_{N-1}^{+}\) for \(j,k\neq 0\), with
\[\epsilon_{jk}: =\delta_{j+k=0}(w_{k}-2\gamma_{k}\nu_{k})+(\gamma_{j}\gamma_{k}- 1)\mathbf{b}_{j}\mathbf{b}_{k}-\nu_{j}\gamma_{k}\mathbf{b}_{-j}^{\dagger} \mathbf{b}_{k}-\nu_{k}\gamma_{j}\mathbf{b}_{-k}^{\dagger}\mathbf{b}_{j}+\nu_{ k}\nu_{j}\mathbf{b}_{-k}^{\dagger}\mathbf{b}_{j}^{\dagger}\] \[\quad\quad+\delta_{j+k=0}w_{k}\left[\chi\frac{a_{0}^{2}}{N}-1 \right]+\sum_{p\neq 0}\left(\sqrt{a_{0}^{\dagger}a_{0}-1}-\sqrt{N}\right)f_{p,k} \mathbf{a}_{k}.\]
As the following Lemma 7 demonstrates, the operators \(\epsilon_{jk}\) can be regarded as being small.
**Lemma 7**.: _We have the estimate \(\sum_{jk,mn\neq 0}\left(V_{N^{1-\kappa}}\right)_{jk,mn}\epsilon_{jk}^{\dagger} \epsilon_{mn}\lesssim N^{4\kappa-1}\Big{(}\widetilde{\mathcal{N}}^{2}+1\Big{)}\)._
Proof.: Due to the sign of \(V_{N^{1-\kappa}}\) it is enough to verify the statement for each term in the definition of \(\epsilon_{jk}\) individually. Regarding the term \(\delta_{j+k=0}(w_{k}-2\gamma_{k}\nu_{k})\) note that \(\left|w_{k}-2\gamma_{k}\nu_{k}\right|\lesssim\frac{N^{\kappa}}{\left|k\right|^ {2}}\mathds{1}\big{(}\left|k\right|\leq N^{\frac{\kappa}{2}}\big{)}+\frac{N^{2 \kappa}}{\left|k\right|^{4}}\mathds{1}\big{(}\left|k\right|>N^{\frac{\kappa}{2} }\big{)}\), and consequently
\[\sum_{jk,mn\neq 0}\left(V_{N^{1-\kappa}}\right)_{(-k)k,(-j)j}\left|w_{k}-2 \gamma_{k}\nu_{k}\right|\left|w_{j}-2\gamma_{j}\nu_{j}\right|\lesssim N^{4 \kappa-1}.\]
For the next term \((\gamma_{j}\gamma_{k}-1)\mathbf{b}_{j}\mathbf{b}_{k}\), note that the operator \(\Theta_{jk,mn}:=\left(V_{N^{1-\kappa}}\right)_{jk,mn}\left(\gamma_{j}\gamma_{k} -1\right)(\gamma_{m}\gamma_{n}-1)\) has an operator norm bounded by \(CN^{\kappa-1}\sup_{T}\sum_{p}(\gamma_{p}\gamma_{p+T}-1)^{2}\), which follows from the weighted Schur test, see for example the proof of Lemma 4. Since
\(\sum_{p}(\gamma_{p}\gamma_{p+T}-1)^{2}\lesssim N^{\frac{3\kappa}{2}}\), uniformly in \(T\), we obtain \(\sum_{jk,mn\neq 0}\left(\Theta\right)_{jk,mn}\mathfrak{b}_{k}^{\dagger}\mathfrak{b}_{j}^{\dagger}\mathfrak{b}_{m}\mathfrak{b}_{n}\lesssim N^{\frac{5\kappa}{2}-1}\left(\widetilde{\mathcal{N}}+1\right)^{2}\). Coming to the next term \(\nu_{j}\gamma_{k}\mathfrak{b}_{-j}^{\dagger}\mathfrak{b}_{k}\) we first notice that
\[\sum_{jk,mn\neq 0}\left(V_{N^{1-\kappa}}\right)_{jk,mn}\!\left(\nu_{j}\gamma_{k}\mathfrak{b}_{-j}^{\dagger}\mathfrak{b}_{k}\right)^{\!\dagger}\left(\nu_{m}\gamma_{n}\mathfrak{b}_{-m}^{\dagger}\mathfrak{b}_{n}\right)\lesssim N^{4\kappa-1}\Big{(}\widetilde{\mathcal{N}}^{2}+1\Big{)}\]
for \(0<\delta<2\). Since \(\sum_{j}|j|^{2+\delta}\nu_{j}^{2}\lesssim N^{1+\kappa+\delta}\), this concludes the argument. Finally we expand the term \(\sum_{p,q}\sqrt{N}f_{p,q}\mathfrak{b}_{q}^{\dagger}\mathfrak{b}_{p-q}^{\dagger} \mathfrak{b}_{p}\mathfrak{b}_{k}\mathfrak{a}_{p}=\sum_{p,q}\sqrt{N}\gamma_{p}f _{p,q}\mathfrak{b}_{q}^{\dagger}\mathfrak{b}_{p-q}^{\dagger}\mathfrak{b}_{j} \mathfrak{b}_{k}\mathfrak{b}_{p}-\sum_{p,q}\sqrt{N}\nu_{p}f_{p,q}\mathfrak{b}_ {q}^{\dagger}\mathfrak{b}_{p-q}^{\dagger}\mathfrak{b}_{j}\mathfrak{b}_{k} \mathfrak{b}_{p}^{\dagger}\) and illustrate how to proceed by analysing \(\sum_{p,q}\sqrt{N}\gamma_{p}f_{p,q}\mathfrak{b}_{q}^{\dagger}\mathfrak{b}_{p- q}^{\dagger}\mathfrak{b}_{j}\mathfrak{b}_{k}\mathfrak{b}_{p}\). For this purpose note that the operator valued matrix \(\Theta_{p,p^{\prime}}:=\left(\sum_{q^{\prime}}\sqrt{N}\gamma_{p^{\prime}}f_{p ^{\prime},q^{\prime}}\mathfrak{b}_{q^{\prime}}^{\dagger}\mathfrak{b}_{p^{ \prime}-q^{\prime}}^{\dagger}\right)^{\dagger}\left(\sum_{q}\sqrt{N}\gamma_{p }f_{p,q}\mathfrak{b}_{q}^{\dagger}\mathfrak{b}_{p-q}^{\dagger}\right)\) satisfies \(\Theta\lesssim N^{\frac{5\kappa}{2}-1}\Big{(}\widetilde{\mathcal{N}}+1\Big{)}^ {2}\), and therefore
\[\sum_{jk,mn\neq 0}\left(V_{N^{1-\kappa}}\right)_{jk,mn}\left( \sum_{p,q}\sqrt{N}f_{p,q}\mathfrak{b}_{q}^{\dagger}\mathfrak{b}_{p-q}^{ \dagger}\mathfrak{b}_{j}\mathfrak{b}_{k}\mathfrak{a}_{p}\right)^{\dagger} \left(\sum_{p,q}\sqrt{N}f_{p,q}\mathfrak{b}_{q}^{\dagger}\mathfrak{b}_{p-q}^{ \dagger}\mathfrak{b}_{m}\mathfrak{b}_{n}\mathfrak{a}_{p}\right)\] \[\lesssim N^{\frac{5\kappa}{2}-1}\sum_{jk,mn\neq 0}\left(V_{N^{1- \kappa}}\right)_{jk,mn}\mathfrak{b}_{k}^{\dagger}\mathfrak{b}_{j}^{\dagger} \Big{(}\widetilde{\mathcal{N}}+1\Big{)}^{2}\mathfrak{b}_{m}\mathfrak{b}_{n} \lesssim N^{\frac{5\kappa}{2}-1}\Big{(}\widetilde{\mathcal{N}}+1\Big{)}^{2} \Bigg{(}\sum_{k}|k|^{2}\mathfrak{b}_{k}^{\dagger}\mathfrak{b}_{k}+1\Bigg{)}\,.\]
**Corollary 2**.: _There exists a constant \(C>0\) such that for \(\kappa<\frac{2}{11}\)_
\[\pm\!\!\!\sum_{jk,mn\neq 0}\!\!\!\left(V_{N^{1-\kappa}}\right)_{jk,mn}\Big{(}\!\!\left(1\!-\!G\right)^{\dagger}\!\psi_{jk}^{\dagger}-\mathfrak{b}_{k}^{\dagger}\mathfrak{b}_{j}^{\dagger}\Big{)}\!\left(\psi_{mn}\!\left(1\!-\!G\right)-\mathfrak{b}_{m}\mathfrak{b}_{n}\right)\!\leq\!\!CN^{\frac{11\kappa}{2}-1}\Big{(}\widetilde{\mathcal{N}}\!+\!1\Big{)}^{4}\!\left(\mathbb{K}\!+\!1\right). \tag{43}\]
Proof.: Note that \(\sum_{jk,mn\neq 0}\left(V_{N^{1-\kappa}}\right)_{jk,mn}\left(1\!-\!G\right)^{ \dagger}\epsilon_{jk}^{\dagger}\epsilon_{mn}\!\left(1\!-\!G\right)\lesssim N^{ \frac{7\kappa}{2}-1}\!\!\left(1\!-\!G\right)^{\dagger}\Big{(}\widetilde{ \mathcal{N}}^{2}+1\Big{)}\left(1\!-\!G\right)\lesssim N^{\frac{7\kappa}{2}-1} \left(\widetilde{\mathcal{N}}^{5}\!+\!1\right)\) by Lemma 7. In combination with Lemma 8 this concludes the proof.
### Proof of Theorem 3
Regarding the verification of Theorem 3, let us define for \(\delta>0\) the trial states
\[\Psi_{d}:=Z_{d}^{-1}(1\!-\!G)\mathds{1}\left(\mathcal{N}\leq N^{\frac{3\kappa}{2}+\delta}\right)\Gamma_{d},\]
where \(\Gamma_{d}\) is the \(d\)-th eigenvector of the operator \(\sum_{k}e_{k}\mathfrak{b}_{k}^{\dagger}\mathfrak{b}_{k}\), i.e. \(\Gamma_{d}\) is the eigenvector of \(\sum_{k}e_{k}\mathfrak{b}_{k}^{\dagger}\mathfrak{b}_{k}\) corresponding to the eigenvalue \(\widetilde{\lambda}_{N,\kappa}^{(d)}\) defined above Eq. (35), and \(Z_{d}\) is a normalization constant. Note that \(\Psi_{d}\in\mathfrak{h}\) is an element of \(\mathcal{F}_{N-1}^{+}\subset L_{\text{sym}}^{2}\big{(}\Lambda^{N}\big{)}\) for \(N\) large enough, and therefore an appropriate trial state. In order to compute the energy of \(\Psi_{d}\), we are going to combine the results from Subsections 5.1 and 5.2, as well as the results in Section 4, which yields the following Corollary 3.
**Corollary 3**.: _Let \(\kappa<\frac{2}{13}\). Then there exists a \(C>0\), such that_
\[\pm\Big{(}\!\left(1\!-\!G\right)^{\dagger}\!H_{N,\kappa}^{\prime} \!\left(1\!-\!G\right)-\mathbb{K}\Big{)}\leq CN^{\frac{11\kappa}{4}-\frac{1}{2} }\Big{(}\widetilde{\mathcal{N}}+1\Big{)}^{6}\Big{(}\mathbb{K}+1\Big{)}. \tag{44}\]
_Furthermore \(\langle\Psi_{d},H_{N,\kappa}^{\prime}\Psi_{d}\rangle\leq\widetilde{\lambda}_{N,\kappa}^{(d)}+CN^{\frac{13\kappa}{4}-\frac{1}{2}}\) and \(\langle\Psi_{1},H_{N,0}^{\prime}\Psi_{1}\rangle\leq C\frac{\log N}{N}\)._
Proof.: In order to verify Eq. (44), we make use of Eq. (36) and obtain by Corollary 1 and Corollary 2
\[\pm\left(\left(1\!-\!G\right)^{\dagger}\left(H^{\prime}_{N,\kappa}+\sum_{i=1}^{4}\mathcal{E}_{i}\right)\left(1\!-\!G\right)-\mathbb{K}\right)\leq CN^{\frac{11\kappa}{4}-\frac{1}{2}}\Big{(}\widetilde{\mathcal{N}}+1\Big{)}^{6}\Big{(}\mathbb{K}+1\Big{)}.\]
In order to control the \(\mathcal{E}_{i}\) terms, note that we have by a simplification of the methods in Subsection 4.1 that \(\pm\left(\sum_{i=1}^{4}\mathcal{E}_{i}-\mathcal{E}_{\log}-\mathcal{E}_{\rm odd}\right)\lesssim CN^{6\kappa-1}\Big{(}\widetilde{\mathcal{N}}+1\Big{)}^{4}\), where we define the operators \(\mathcal{E}_{\log}:=\sum_{k\ell,k^{\prime}\ell^{\prime}}\Lambda_{k\ell,k^{\prime}\ell^{\prime}}\,a_{k^{\prime}}^{\dagger}a_{\ell^{\prime}}^{\dagger}\frac{a_{0}^{\dagger}a_{0}}{N}a_{k}a_{\ell}\) and \(\mathcal{E}_{\rm odd}:=\sum_{i=1}^{3}\left(\sum_{\ell,k}\Upsilon_{\ell,k}^{(i)}O_{1}\,a_{k}^{\dagger}a_{\ell}^{\dagger}a_{k+\ell}+{\rm H.c.}\right)\). Therefore \(\pm(1\!-\!G)^{\dagger}\left(\sum_{i=1}^{4}\mathcal{E}_{i}-\mathcal{E}_{\log}-\mathcal{E}_{\rm odd}\right)(1\!-\!G)\lesssim N^{6\kappa-1}\Big{(}\widetilde{\mathcal{N}}+1\Big{)}^{6}\lesssim N^{\frac{11\kappa}{2}-1}\Big{(}\widetilde{\mathcal{N}}+1\Big{)}^{6}\Big{(}\mathbb{K}+1\Big{)}\). Regarding the terms \(\mathcal{E}_{\log}\) and \(\mathcal{E}_{\rm odd}\), we obtain by a slight modification of Lemmas 4 and 5 that \(\pm\mathcal{E}_{\log}\lesssim N^{4\kappa+2\delta-1}\left(\widetilde{\mathcal{N}}+1\right)\left(\sum_{k}|k|^{1-2\delta}\mathfrak{b}_{k}^{\dagger}\mathfrak{b}_{k}+1\right)\) and \(\pm\mathcal{E}_{\rm odd}\lesssim N^{\frac{13\kappa}{4}-\frac{1}{2}}\left(\widetilde{\mathcal{N}}+1\right)^{2}\lesssim N^{\frac{11\kappa}{4}-\frac{1}{2}}\left(\widetilde{\mathcal{N}}+1\right)\!\left(\mathbb{K}+1\right)\). Since we have the estimate \((1\!-\!G)^{\dagger}\left(\sum_{k}|k|^{1-2\delta}\mathfrak{b}_{k}^{\dagger}\mathfrak{b}_{k}+1\right)(1\!-\!G)\lesssim\left(1+N^{\frac{3\kappa}{2}-1}\right)\left(\widetilde{\mathcal{N}}+1\right)^{2}(\mathbb{K}+1)\), this concludes the proof of Eq. (44). Applying Eq. (44), we consequently obtain for suitable constants \(C^{\prime}\) and \(C\)
\[\langle\Psi_{d},H^{\prime}_{N,\kappa}\Psi_{d}\rangle=Z_{d}^{-2} \left\langle 1\!\left(\mathcal{N}\leq N^{\frac{3\kappa}{2}+\delta}\right) \Gamma_{d},(1\!-\!G)^{\dagger}H^{\prime}_{N,\kappa}(1\!-\!G)\mathds{1}\Big{(} \mathcal{N}\leq N^{\frac{3\kappa}{2}+\delta}\Big{)}\,\Gamma_{d}\right\rangle\] \[\quad\leq Z_{d}^{-2}\left(\left\langle 1\!\left(\mathcal{N}\leq N^{ \frac{3\kappa}{2}+\delta}\right)\Gamma_{d},\mathbb{K}1\!\left(\mathcal{N}\leq N ^{\frac{3\kappa}{2}+\delta}\right)\Gamma_{d}\right\rangle+C^{\prime}N^{\frac {13\kappa}{4}-\frac{1}{2}}\right) \tag{45}\] \[\quad\leq\langle\Gamma_{d},\mathbb{K}\Gamma_{d}\rangle+CN^{\frac{ 13\kappa}{4}-\frac{1}{2}}. \tag{46}\]
In order to see the inequalities in Eq. (45) and Eq. (46), note that \(\langle\Gamma_{d},\mathcal{N}^{m}\Gamma_{d}\rangle\leq C_{m}N^{\frac{3\kappa m }{2}}\) for \(m\in\mathbb{N}\) and suitable \(C_{m}\), which implies that \(|Z_{d}-1|\leq C_{\lambda}^{\prime}N^{-\lambda}\) for any \(\lambda>0\), and similarly \(\left\langle 1\!\left(\mathcal{N}\leq N^{\frac{3\kappa}{2}+\delta}\right)\Gamma_{d}, \left(\widetilde{\mathcal{N}}^{n}\mathbb{K}\right)\mathds{1}\Big{(}\mathcal{N }\leq N^{\frac{3\kappa}{2}+\delta}\Big{)}\,\Gamma_{d}\right\rangle\leq \langle\Gamma_{d},\left(\widetilde{\mathcal{N}}^{n}\mathbb{K}\right)\Gamma_{d} \rangle+C_{\lambda,n}N^{-\lambda}\) for \(\lambda>0\), which concludes the argument since \(\langle\Gamma_{d},\mathbb{K}\Gamma_{d}\rangle\lesssim N^{\frac{\kappa}{2}}\). Furthermore we want to note that \(\Gamma_{d}\) as the \(d\)-th eigenvector of the harmonic oscillator \(\sum_{k}e_{k}\mathfrak{b}_{k}^{\dagger}\mathfrak{b}_{k}\), has a finite number of excitations, i.e. there exists a finite set \(I_{d}\subset 2\pi\mathbb{Z}^{3}\setminus\{0\}\) and a constant \(C_{d}\), such that \(\mathfrak{b}_{k}\Gamma_{d}=0\) for \(k\notin I\) and \(\|\mathfrak{b}_{j}\mathfrak{b}_{k}\Gamma_{d}\|\leq C\). Consequently
\[\pm\left(\langle\Gamma_{d},\mathbb{K}\Gamma_{d}\rangle-\widetilde{\lambda}_{N, \kappa}^{(d)}\right)=\pm\sum_{jk,mn}\left(V_{N^{1-\kappa}}\right)_{jk,mn} \left\langle\mathfrak{b}_{j}\mathfrak{b}_{k}\Gamma_{d},\mathfrak{b}_{m} \mathfrak{b}_{n}\Gamma_{d}\right\rangle\leq C_{d}^{2}|I_{d}|^{4}\|\widehat{V} \|_{\infty}N^{\kappa-1},\]
which concludes the proof of \(\langle\Psi_{d},H^{\prime}_{N,\kappa}\Psi_{d}\rangle\leq\widetilde{\lambda}_{N, \kappa}^{(d)}+CN^{\frac{13\kappa}{4}-\frac{1}{2}}\). Regarding the proof of the final statement, let \(\kappa:=0\) for the rest of the proof and note that \(\mathfrak{b}_{k}\Gamma_{1}=0\) for all \(k\in 2\pi\mathbb{Z}^{3}\setminus\{0\}\), and therefore we have by Eq. (38) and Eq. (42) that
\[\left(\gamma_{k}d_{k}\!+\!\nu_{k}d_{-k}^{\dagger}\right)\Psi_{1} \!=\!Z_{1}^{-1}\left[\left(\gamma_{k}d_{k}\!+\!\nu_{k}d_{-k}^{ \dagger}\right)(1\!-\!G)\!-\!\mathfrak{b}_{k}\right]\mathds{1}\!\left(\mathcal{N }\leq N^{\delta}\right)\Gamma_{1}\!-\!Z_{1}^{-1}\mathfrak{b}_{k}\mathds{1}\! \left(\mathcal{N}>N^{\delta}\right)\Gamma_{1}\] \[\psi_{jk}\Psi_{1} =Z_{1}^{-1}\left[\psi_{jk}(1\!-\!G)-\mathfrak{b}_{j}\mathfrak{b}_{ k}\right]\mathds{1}\!\left(\mathcal{N}\leq N^{\delta}\right)\Gamma_{1}\!-\!Z_{1}^{-1} \mathfrak{b}_{j}\mathfrak{b}_{k}\mathds{1}\!\left(\mathcal{N}>N^{\delta}\right) \Gamma_{1}.\]
Using the fact that \(\|\mathfrak{b}_{k}\mathds{1}\big{(}\mathcal{N}>N^{\delta}\big{)}\,\Gamma_{1}\|^{2} \leq C_{\lambda}N^{-\lambda}\) and \(\|\mathfrak{b}_{j}\mathfrak{b}_{k}\mathds{1}\big{(}\mathcal{N}>N^{\delta}\big{)} \,\Gamma_{1}\|\leq C_{\lambda}N^{-\lambda}\) for any \(\lambda>0\), see the argument below Eq. (46), we obtain by Eq. (36) in combination with Corollary 1 and Corollary 2 that
\[\left\langle\Psi_{1},\left(H^{\prime}_{N,0}+\sum_{i=1}^{4}\mathcal{E}_{i} \right)\Psi_{1}\right\rangle\lesssim Z_{1}^{-2}N^{-1}\lesssim N^{-1}.\]
Proceeding as earlier in this proof, we have \(|\left\langle\Psi_{1},\left(\sum_{i=1}^{4}\mathcal{E}_{i}-\mathcal{E}_{\log}-\mathcal{E}_{\text{odd}}\right)\Psi_{1}\right\rangle|\lesssim N^{-1}\). In order to estimate the \(\mathcal{E}_{\text{odd}}\) term, note that \(\Gamma_{1}\) is a quasi-free state with respect to \(\mathfrak{b}_{k}\), respectively \(\mathfrak{a}_{k}\), and \(\mathcal{E}_{\text{odd}}\) contains only odd powers in \(\left\{a_{k}:k\in 2\pi\mathbb{Z}^{3}\setminus\left\{0\right\}\right\}\). Therefore \(e^{i\pi\mathcal{N}}\mathds{1}\big{(}\mathcal{N}\leq N^{\delta}\big{)}\,\Gamma_{1}=\mathds{1}\big{(}\mathcal{N}\leq N^{\delta}\big{)}\,e^{i\pi\sum_{k\neq 0}\mathfrak{a}_{k}^{\dagger}\mathfrak{a}_{k}}\Gamma_{1}=\mathds{1}\big{(}\mathcal{N}\leq N^{\delta}\big{)}\,\Gamma_{1}\) is invariant under the transformation \(e^{i\pi\mathcal{N}}\), while \(e^{-i\pi\mathcal{N}}\mathcal{E}_{\text{odd}}e^{i\pi\mathcal{N}}=-\mathcal{E}_{\text{odd}}\), and therefore \(\left\langle\mathds{1}\big{(}\mathcal{N}\leq N^{\delta}\big{)}\,\Gamma_{1},\mathcal{E}_{\text{odd}}\mathds{1}\big{(}\mathcal{N}\leq N^{\delta}\big{)}\,\Gamma_{1}\right\rangle=0\), and the same holds for \(G^{\dagger}\mathcal{E}_{\text{odd}}G\). Hence
\[\left\langle\Psi_{1},\mathcal{E}_{\text{odd}}\Psi_{1}\right\rangle=-2Z_{1}^{-2 }\mathfrak{Re}\left\langle\mathds{1}\big{(}\mathcal{N}\leq N^{\delta}\big{)} \,\Gamma_{1},\mathcal{E}_{\text{odd}}G\mathds{1}\big{(}\mathcal{N}\leq N^{ \delta}\big{)}\,\Gamma_{1}\right\rangle.\]
A rough, but easy, estimate shows that \(G^{\dagger}G\lesssim N^{-1}\left(\widetilde{\mathcal{N}}+1\right)^{3}\) and \(\mathcal{E}_{\text{odd}}^{2}\lesssim N^{-1}\left(\widetilde{\mathcal{N}}+1 \right)^{3}\), and hence \(\pm\left(\mathcal{E}_{\text{odd}}G+G^{\dagger}\mathcal{E}_{\text{odd}} \right)\lesssim N^{-1}\left(\widetilde{\mathcal{N}}+1\right)^{3}\). Therefore \(\left\langle\Psi_{1},\mathcal{E}_{\text{odd}}\Psi_{1}\right\rangle\lesssim N^ {-1}\). Regarding the term \(\mathcal{E}_{\log}\), we obtain by a simplified version of the argument in Lemma 4
\[\pm\mathcal{E}_{\log}\lesssim N^{-1}\left(\widetilde{\mathcal{N}}+1\right) \!\left(\sum_{k}|k|^{2}\mathfrak{b}_{k}^{\dagger}\mathfrak{b}_{k}+1\right)+T _{N},\]
where \(T_{N}:=\left\|\sum_{k,k^{\prime}}4\gamma_{k^{\prime}}\gamma_{k}\Lambda_{k(-k), k^{\prime}(-k^{\prime})}[b_{k^{\prime}},a_{-k^{\prime}}]^{\dagger}[b_{k},a_{-k}]\right\|\), see Eq. (29). First of all observe
\[\left\langle\Psi_{1},N^{-1}\!\left(\widetilde{\mathcal{N}}\!+\!1\right)\!\! \left(\!\sum_{k}|k|^{2}\mathfrak{b}_{k}^{\dagger}\mathfrak{b}_{k}\!+\!1\! \right)\Psi_{1}\!\right\rangle\!\lesssim\!\left\langle\Gamma_{1},N^{-1}\! \left(\!\widetilde{\mathcal{N}}\!+\!1\right)\!\!\left(\!\sum_{k}|k|^{2} \mathfrak{b}_{k}^{\dagger}\mathfrak{b}_{k}\!+\!1\!\right)\Gamma_{1}\right\rangle \!\lesssim\!N^{-1}.\]
Regarding the operator norm \(T_{N}\) we note that \(|\nu_{k}|\lesssim\frac{1}{|k|^{2}\left(1+\frac{|k|}{N}\right)}\) as a consequence of Lemma 1 and proceed similar as in Eq. (29) with the estimate
\[T_{N} \lesssim\frac{1}{N}\sum_{k,k^{\prime}\neq 0}\frac{|\nu_{k}||\nu_{k^{ \prime}}|}{|k|^{2}+|k^{\prime}|^{2}}\lesssim\int_{|k|>1}\frac{1}{|k|^{2}\left(1 +\frac{|k|}{N}\right)}\left(\int_{\mathbb{R}^{3}}\frac{1}{|k|^{2}+|k^{\prime}|^ {2}}\frac{1}{|k^{\prime}|^{2}}\mathrm{d}k^{\prime}\right)\mathrm{d}k\] \[=\mu\int_{|k|>1}\frac{1}{|k|^{3}\left(1+\frac{|k|}{N}\right)} \mathrm{d}k=4\pi\mu\int_{1}^{\infty}\frac{1}{r\left(1+\frac{r}{N}\right)} \mathrm{d}r=4\pi\mu\log N,\]
with the finite constant \(\mu:=\int_{\mathbb{R}^{3}}\frac{1}{1+|k^{\prime}|^{2}}\frac{1}{|k^{\prime}|^{2} }\mathrm{d}k^{\prime}\).
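For the reader's convenience, the two elementary facts used in the last chain are the scaling substitution \(k^{\prime}=|k|u\), which gives \(\int_{\mathbb{R}^{3}}\frac{1}{(|k|^{2}+|k^{\prime}|^{2})|k^{\prime}|^{2}}\mathrm{d}k^{\prime}=\frac{\mu}{|k|}\) with \(\mu=4\pi\int_{0}^{\infty}\frac{\mathrm{d}r}{1+r^{2}}=2\pi^{2}\), and the elementary integral \(\int_{1}^{\infty}\frac{\mathrm{d}r}{r\left(1+\frac{r}{N}\right)}=\log(N+1)\lesssim\log N\).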
Applying Corollary 3 we immediately obtain the upper bounds
\[\left\langle\Psi_{d},H_{N,\kappa}\Psi_{d}\right\rangle \leq 4\pi\mathfrak{a}_{N^{1-\kappa}}N^{\kappa}(N-1)+\frac{1}{2}\sum_{k\neq 0}\left\{\sqrt{A_{k}^{2}-B_{k}^{2}}-A_{k}+C_{k}\right\}+\widetilde{\lambda}_{N,\kappa}^{(d)}+CN^{\frac{13\kappa}{4}-\frac{1}{2}},\] \[\left\langle\Psi_{1},H_{N,0}\Psi_{1}\right\rangle \leq 4\pi\mathfrak{a}_{N^{1-\kappa}}N^{\kappa}(N-1)+\frac{1}{2}\sum_{k\neq 0}\left\{\sqrt{A_{k}^{2}-B_{k}^{2}}-A_{k}+C_{k}\right\}+C\frac{\log N}{N},\]
which concludes the proof of Theorem 3 since we have by Eq. (33) that
\[\left|\sum_{k\neq 0}\left\{\sqrt{A_{k}^{2}-B_{k}^{2}}-A_{k}+C_{k}\right\}-\sum_{k \neq 0}\left\{\sqrt{|k|^{4}+16\pi\mathfrak{a}N^{\kappa}|k|^{2}}+|k|^{2}+8\pi \mathfrak{a}N^{\kappa}-\frac{(8\pi\mathfrak{a}N^{\kappa})^{2}}{2|k|^{2}} \right\}\right|\]
is of order \(N^{\frac{7\kappa}{2}-1}\log N\) in the case \(K:=\infty\).
## Appendix A Coefficients of the renormalized Potential
Proof of Lemma 1.: Since \(\left|L\Big{(}V_{L}-V_{L}RV_{L}\Big{)}_{00,00}-8\pi\mathfrak{a}\right|\lesssim L ^{-1}\), see [11], Eq. (10) is equivalent to verifying \(\left|L\Big{(}V_{L}-V_{L}RV_{L}\Big{)}_{k_{1}k_{2},k_{3}k_{4}}-L\Big{(}V_{L}- V_{L}RV_{L}\Big{)}_{00,00}\right|\lesssim L^{-1}\sum_{i=1}^{4}|k_{i}|\). In order to do this, let us define the function \(W_{k_{1},k_{2}}:=e^{ik_{1}x}e^{ik_{2}y}V_{L}-e^{i(k_{1}+k_{2})x}V_{L}\) which has the Fourier transform \(\widehat{W}_{k_{1},k_{2}}(\ell_{1},\ell_{2})=\widehat{V}_{L}(\ell_{1}-k_{1}, \ell_{2}-k_{2})-\widehat{V}_{L}(\ell_{1}-k_{1}-k_{2},\ell_{2})\). Using the representation \(W_{k_{1},k_{2}}=\int_{0}^{1}W_{k_{1},k_{2},r}\mathrm{d}r\) with \(\widehat{W}_{k_{1},k_{2},r}(\ell_{1},\ell_{2}):=\frac{\mathrm{d}}{\mathrm{d}r} \widehat{V}_{L}(\ell_{1}-k_{1}-rk_{2},\ell_{2}-(1-r)k_{2})\), and the fact that \(0\leq R\leq\frac{1}{-\Delta_{2}}\) we have
\[L\left\langle W_{k_{1},k_{2}},RW_{k_{1},k_{2}}\right\rangle\leq \int_{0}^{1}L\left\langle W_{k_{1},k_{2},r},RW_{k_{1},k_{2},r}\right\rangle \mathrm{d}r\leq\int_{0}^{1}L\left\langle W_{k_{1},k_{2},r},\frac{1}{-\Delta_{ 2}}W_{k_{1},k_{2},r}\right\rangle\mathrm{d}r. \tag{47}\]
Using the fact that \(\left|\widehat{W}_{k_{1},k_{2},r}(P)\right|\leq|k_{2}|L^{-2}\left|\nabla \widehat{V}\left(L^{-1}P\right)\right|\), together with the assumption that \(\nabla\widehat{V}\) is bounded and in \(L^{1}\), we obtain \(L\left\langle W_{k_{1},k_{2}},RW_{k_{1},k_{2}}\right\rangle\lesssim L^{-2}|k_ {2}|^{2}\). Observe that
\[L\Big{(}V_{L}RV_{L}\Big{)}_{k_{1}k_{2},k_{3}k_{4}}=L\left\langle e ^{ik_{3}x}e^{ik_{4}y}V_{L},Re^{ik_{1}x}e^{ik_{2}y}V_{L}\right\rangle=L\Big{(}V_ {L}RV_{L}\Big{)}_{k0,k0}\] \[\qquad+L\left\langle e^{ik_{3}x}e^{ik_{4}y}V_{L},RW_{k_{1},k_{2}} \right\rangle+L\overline{\langle e^{ikx}V_{L},RW_{k_{3},k_{4}}\rangle},\]
with \(k:=k_{1}+k_{2}\). From our results on \(W_{k_{1},k_{2}}\) it is clear that the last two terms are of order \(L^{-1}\), and hence it will be enough to study \(L\Big{(}V_{L}RV_{L}\Big{)}_{k0,k0}\). Note that the estimate on the absolute value \(\left|L\Big{(}V_{L}-V_{L}RV_{L}\Big{)}_{k_{1}k_{2},k_{3}k_{4}}\right|\leq\frac {C}{1+L^{-1}|k_{3}-k_{1}|}\) follows from a similar argument as in Eq. (47) together with the assumption that \(\widehat{V}\) and \(k\widehat{V}(k)\) are bounded and in \(L^{1}\).
In order to verify that \(\left|L\Big{(}V_{L}RV_{L}\Big{)}_{k0,k0}-8\pi\mathfrak{a}\right|\) is of order \(L^{-1}|k|\), we observe that we have \(L|\Big{(}V_{L}RV_{L}\Big{)}_{k0,k0}-\left\langle V_{L}^{*},R_{k}V_{L}^{*} \right\rangle|\lesssim L^{-1}\) by the methods in [11], where \(V_{L}^{*}\) is the interaction considered as a function on the full space \(\mathbb{R}^{3}\) and \(R_{k}\) is the inverse of the operator \(\left(\frac{1}{i}\nabla\right)^{2}+\left(\frac{1}{i}\nabla+k\right)^{2}+V_{L}^ {*}\) acting on \(L^{2}(\mathbb{R}^{3})\), which describes the action of \(e^{-ikx}(-\Delta_{2}+V_{L}(x-y))e^{ikx}\)
on translation invariant states. Therefore it is enough to estimate
\[\langle V_{L}^{*},R_{k}V_{L}^{*}\rangle-\langle V_{L}^{*},R_{0}V_{L}^ {*}\rangle=\frac{1}{2}\langle V_{L}^{*},\left(R_{k}-R_{0}\right)V_{L}^{*}\rangle +\frac{1}{2}\left\langle V_{L}^{*},\left(R_{-k}-R_{0}\right)V_{L}^{*}\right\rangle \tag{48}\] \[=\!\int_{0}^{1}\!\left\langle V_{L}^{*},R_{sk}\left(\left(\frac{1 }{i}\nabla+sk\right)\cdot k\right)R_{sk}V_{L}^{*}\right\rangle\mathrm{d}s\!- \!\!\int_{0}^{1}\!\left\langle V_{L}^{*},R_{-sk}\left(\left(\frac{1}{i}\nabla- sk\right)\cdot k\right)R_{-sk}V_{L}^{*}\right\rangle\mathrm{d}s\] \[=\int_{-1}^{1}\int_{0}^{t}\!\left\langle V_{L}^{*},R_{tk}\left(|k |^{2}+2\left(\left(\frac{1}{i}\nabla+tk\right)\cdot k\right)R_{tk}\left(\left( \frac{1}{i}\nabla+tk\right)\cdot k\right)\right)R_{tk}V_{L}^{*}\right\rangle \mathrm{d}t\mathrm{d}s,\]
where we have used the reflection symmetry \(k\mapsto-k\) in the first identity and Duhamel's formula in the second and third line. Let us define the vectors \(\psi_{\xi}:=R_{\xi}V_{L}^{*}\) and note that \(|\widehat{\psi_{\xi}}(p)|\leq\frac{1}{L(|p+\xi|^{2}+|p|^{2})}\). Using \(\left(\left(\frac{1}{i}\nabla+tk\right)\cdot k\right)R_{tk}\left(\left(\frac{1 }{i}\nabla+tk\right)\cdot k\right)\lesssim 1\) we therefore obtain
\[|\langle V_{L}^{*},R_{k}V_{L}^{*}\rangle-\langle V_{L}^{*},R_{0}V_ {L}^{*}\rangle|\lesssim L^{-2}\int_{-1}^{1}\int_{0}^{t}\int_{\mathbb{R}^{3}} \frac{|k|^{2}}{(|p+tk|^{2}+|p|^{2})^{2}}\,\mathrm{d}p\,\mathrm{d}t\,\mathrm{d}s\] \[\qquad\leq L^{-2}\int_{\mathbb{R}^{3}}\int_{-1}^{1}\int_{0}^{t} \frac{|k|^{2}+2|(p+tk)\!\cdot\!k|^{2}}{(|p+tk|^{2}+|p|^{2})^{2}}\,\mathrm{d}t \,\mathrm{d}s\,\mathrm{d}p=L^{-2}\int_{\mathbb{R}^{3}}\!w_{k}(p)\,\mathrm{d}p,\]
with \(w_{k}(p):=\frac{1}{|p+k|^{2}+|p|^{2}}+\frac{1}{|p-k|^{2}+|p|^{2}}-\frac{1}{|p|^{2}}\), where the last identity follows by deriving the integral representation in Eq. (48) for the non-interacting case \(V:=0\). Making use of the estimate \(|w_{k}(p)|\lesssim\frac{|k|^{2}}{(|p|^{2}+|p+k|^{2})|p|^{2}}\) then yields \(\int_{\mathbb{R}^{3}}\!w_{k}(p)\,\mathrm{d}p\lesssim|k|^{2}\int\frac{1}{(|p|^{2}+|p+k|^{2})|p|^{2}}\,\mathrm{d}p\lesssim|k|\).
## Appendix B Additional Error Estimates
In this Section we will discuss estimates which will allow us, together with the results in Subsection 4.1, to control the error term \(\sum_{i=1}^{4}\mathcal{E}_{i}\).
**Lemma 9**.: _Let \(0<r<1\). Then there exists a constant \(C>0\) such that_
\[\left.\pm\mathcal{E}_{4}\right|_{\mathcal{F}_{M}^{+}}\leq CN^{\frac{11\kappa} {2}-1}\left(\widetilde{\mathcal{N}}\Big{|}_{\mathcal{F}_{M}^{+}}+1\right) \tag{49}\]
_for all \(M\leq\min\{rN,N-1\}\). Furthermore_
\[\left.\pm\sum_{k}|k|^{2}w_{k}^{2}\big{(}[\delta_{k},d_{k}^{\dagger}]+[d_{k},\delta_{k}^{\dagger}]+[\delta_{k},\delta_{k}^{\dagger}]\big{)}\right|_{\mathcal{F}_{M}^{+}}\leq CN^{\frac{11\kappa}{2}-1}\left(\widetilde{\mathcal{N}}\Big{|}_{\mathcal{F}_{M}^{+}}+1\right).\]
Proof.: Recall the representation of the error term \(\mathcal{E}_{4}|_{\mathcal{F}_{N-1}^{+}}\) from Lemma 3
\[\frac{1}{2}\sum_{k\neq 0}\left\{A_{k}-\sqrt{A_{k}^{2}-B_{k}^{2}}-C_{k}\right\} \left(\left[\delta_{k},a_{k}^{\dagger}\frac{1}{\sqrt{a_{0}a_{0}^{\dagger}}}a_{0 }\right]+\left[a_{0}^{\dagger}\frac{1}{\sqrt{a_{0}a_{0}^{\dagger}}}a_{k},\delta_ {k}^{\dagger}\right]+\left[\delta_{k},\delta_{k}^{\dagger}\right]\right)\Bigg{|}_ {\mathcal{F}_{N-1}^{+}}\]
and compute \(\left[\delta_{k},a_{k}^{\dagger}\frac{1}{\sqrt{a_{0}a_{0}^{\dagger}}}a_{0}\right]+ \left[a_{0}^{\dagger}\frac{1}{\sqrt{a_{0}a_{0}^{\dagger}}}a_{k},\delta_{k}^{ \dagger}\right]=(T_{1}+T_{2}+\text{H.c.})\) with
\[T_{1}: =\frac{a_{0}}{\sqrt{N}}\left(\frac{1}{\sqrt{a_{0}^{\dagger}a_{0}} }\sqrt{a_{0}a_{0}^{\dagger}}-\frac{1}{\sqrt{a_{0}a_{0}^{\dagger}}}\sqrt{a_{0}^ {\dagger}a_{0}}\right)\frac{a_{0}}{\sqrt{N}}w_{k}a_{-k}^{\dagger}a_{k}^{ \dagger},\] \[T_{2}: =\frac{1}{\sqrt{a_{0}a_{0}^{\dagger}}}\left(\sqrt{a_{0}^{\dagger }a_{0}}\!-\!\sqrt{a_{0}a_{0}^{\dagger}}\right)\!\frac{a_{0}a_{k}^{\dagger}}{N }\sum_{|\ell|<K}N\left((T\!-\!1)_{(\ell-k)k,\ell 0}\!+\!(T-1)_{(\ell-k)k,0\ell} \right)a_{\ell-k}^{\dagger}a_{\ell}.\]
Using that \(\frac{a_{0}}{\sqrt{N}}\left(\frac{1}{\sqrt{a_{0}^{\dagger}a_{0}}}\sqrt{a_{0}a_ {0}^{\dagger}}-\frac{1}{\sqrt{a_{0}a_{0}^{\dagger}}}\sqrt{a_{0}^{\dagger}a_{0} }\right)\frac{a_{0}}{\sqrt{N}}\) and \(\frac{1}{\sqrt{a_{0}a_{0}^{\dagger}}}\left(\sqrt{a_{0}^{\dagger}a_{0}}\!-\! \sqrt{a_{0}a_{0}^{\dagger}}\right)\!\frac{a_{0}a_{k}^{\dagger}}{N}\) is bounded by \(\frac{C_{r}}{N}\) on \(\mathcal{F}_{rN}^{+}\), we immediately obtain \((T_{1}+T_{1}^{\dagger})|_{\mathcal{F}_{rN}^{+}}\lesssim N^{\kappa-1}(\mathcal{ N}+1)\) from \(\sum_{k\neq 0}w_{k}^{2}\lesssim N^{\kappa}\) and following the proof of Lemma 2 we furthermore obtain \((T_{2}+T_{2}^{\dagger})|_{\mathcal{F}_{rN}^{+}}\lesssim N^{\kappa-1}\mathcal{N}\). In a similar fashion, one arrives at the estimate \(\left[\delta_{k},\delta_{k}^{\dagger}\right]\big{|}_{\mathcal{F}_{rN}^{+}} \lesssim N^{2\kappa-1}(\mathcal{N}+1)\). This concludes the proof of Eq. (49), since \(\left|A_{k}-\sqrt{A_{k}^{2}-B_{k}^{2}}-C_{k}\right|\lesssim N^{2\kappa}\) by Lemma 1 and therefore
\[\pm\mathcal{E}_{4}\Big{|}_{\mathcal{F}_{M}^{+}}\lesssim N^{4\kappa-1}\mathcal{ N}\Big{|}_{\mathcal{F}_{M}^{+}}\lesssim N^{\frac{11\kappa}{2}-1}\left(\widetilde{ \mathcal{N}}\Big{|}_{\mathcal{F}_{M}^{+}}+1\right),\]
where we have used Lemma 10. The second part of the Lemma can be verified analogously.
**Lemma 10**.: _Let \(0<\lambda_{0}<1-4\kappa\) and \(\frac{5\kappa}{2}<\lambda<1-\frac{3\kappa}{2}\). Then \(\mathcal{N}\Big{|}_{\mathcal{F}_{N-1}^{+}}\lesssim N^{\frac{\kappa}{2}}\widetilde{\mathcal{N}}\Big{|}_{\mathcal{F}_{N-1}^{+}}+N^{\frac{3\kappa}{2}}\) and \(\mathcal{N}(\mathcal{N}+1)\Big{|}_{\mathcal{F}_{N\lambda}^{+}}\lesssim 2N^{\frac{\kappa}{2}+\lambda}\left(\widetilde{\mathcal{N}}\Big{|}_{\mathcal{F}_{N\lambda}^{+}}+1\right)\), as well as_
\[b_{k}^{\dagger}b_{k}\Big{|}_{\mathcal{F}_{N^{\lambda}0}^{\leq} \cap\mathcal{F}_{N^{\lambda}}^{+}}-\left(\gamma_{k}d_{k}+\nu_{k}d_{k}^{ \dagger}\right)^{\dagger}\left(\gamma_{k}d_{k}+\nu_{k}d_{k}^{\dagger}\right) \Big{|}_{\mathcal{F}_{N^{\lambda}0}^{\leq}\cap\mathcal{F}_{N^{\lambda}}^{+}} \tag{50}\] \[\qquad\lesssim\left(N^{2\kappa+\frac{\lambda_{0}}{2}-\frac{1}{2}}+ N^{\frac{3\kappa}{2}+\lambda-1}\right)\left(b_{k}^{\dagger}b_{k}+\frac{\widetilde{ \mathcal{N}}+1}{|k|^{4}}\right)\Big{|}_{\mathcal{F}_{N^{\lambda}0}^{\leq}\cap \mathcal{F}_{N^{\lambda}}^{+}},\]
_and there exists a constant \(K_{0}\) such that for \(K\geq K_{0}N^{\frac{\kappa}{2}}\) and \(\delta>0\)_
\[\sum_{k}|k|^{1-\delta}b_{k}^{\dagger}b_{k}\Big{|}_{\mathcal{F}_{N\lambda_{0}}^{ \leq}\cap\mathcal{F}_{N^{\lambda}}^{+}}\lesssim N^{-\frac{\kappa}{2}}\sum_{k \neq 0}e_{k}\left(\gamma_{k}d_{k}+\nu_{k}d_{k}^{\dagger}\right)^{\dagger}\left( \gamma_{k}d_{k}+\nu_{k}d_{k}^{\dagger}\right)\Big{|}_{\mathcal{F}_{N\lambda_{0}}^{ \leq}\cap\mathcal{F}_{N^{\lambda}}^{+}}+1. \tag{51}\]
_Furthermore we have the estimates_
\[\pm\sum_{k}|k|^{2}w_{k}\left(\delta_{k}^{\dagger}\delta_{-k}^{\dagger}+ \delta_{-k}\delta_{k}\right)\Big{|}_{\mathcal{F}_{N\lambda\lambda}^{\leq}\cap \mathcal{F}_{N\lambda}^{+}}\lesssim\left(N^{\frac{9\kappa}{2}+\lambda_{0}-1}+N^ {\frac{7\kappa}{2}+2\lambda-2}\right)\left(\widetilde{\mathcal{N}}\Big{|}_{ \mathcal{F}_{N\lambda\lambda}^{\leq}\cap\mathcal{F}_{N\lambda}^{+}}+1\right), \tag{52}\] \[\pm\sum_{k\neq 0}\left(a_{k}^{\dagger}\frac{1}{\sqrt{a_{0}a_{0}^{ \dagger}}}a_{0}\left(N\left(\mathds{1}(|k|<K)\lambda_{k}-\frac{\lambda_{0}}{2} \right)\delta_{k}^{\prime\prime}+|k|^{2}w_{k}(\delta_{-k}^{\prime\prime})^{ \dagger}\right)+\mathrm{H.c.}\right)\bigg{|}_{\mathcal{F}_{N\lambda\theta}^{ \leq}\cap\mathcal{F}_{N\lambda}^{+}}\] \[\lesssim N^{\frac{5\kappa}{2}+\lambda-1}\left(\widetilde{ \mathcal{N}}\Big{|}_{\mathcal{F}_{N\lambda\theta}^{\leq}\cap\mathcal{F}_{N \lambda}^{+}}+1\right). \tag{53}\]
Proof.: Using \(a_{k}=\frac{1}{\sqrt{a_{0}a_{0}^{\dagger}}}a_{0}\gamma_{k}b_{k}+\frac{1}{ \sqrt{a_{0}a_{0}^{\dagger}}}a_{0}\nu_{-k}b_{-k}^{\dagger}\), we can rewrite
\[\mathcal{N}\Big{|}_{\mathcal{F}_{N-1}^{+}}=\sum_{k\neq 0}\left(\gamma_{k}b_{k}+ \nu_{-k}b_{-k}^{\dagger}\right)^{\dagger}\left(\gamma_{k}b_{k}+\nu_{-k}b_{-k}^ {\dagger}\right)\Big{|}_{\mathcal{F}_{N-1}^{+}}\lesssim\sum_{k\neq 0}\left( \gamma_{k}^{2}+\nu_{k}^{2}\right)b_{k}^{\dagger}b_{k}\Big{|}_{\mathcal{F}_{N-1 }^{+}}+\sum_{k\neq 0}\nu_{k}^{2}.\]
Since \(\sum_{k\neq 0}\nu_{k}^{2}\lesssim N^{\frac{3\kappa}{2}}\) and \(|\nu_{k}|\leq|\gamma_{k}|\lesssim N^{\frac{\kappa}{4}}\), we obtain \(\mathcal{N}\Big{|}_{\mathcal{F}_{N-1}^{+}}\lesssim N^{\frac{\kappa}{2}} \widetilde{\mathcal{N}}\Big{|}_{\mathcal{F}_{N-1}^{+}}+N^{\frac{3\kappa}{2}}\). In order to confirm the other statements of the Lemma, let us first verify
\[\left\{(\delta_{k})^{\dagger}\delta_{k}+\delta_{k}(\delta_{k})^{ \dagger}\right\}\Big{|}_{\mathcal{F}_{N\lambda\theta}^{\leq}\cap\mathcal{F}_ {N\lambda}^{+}}\lesssim|k|^{-4}N^{\frac{5\kappa}{2}}\left(N^{\kappa+\lambda_{ 0}-1}+N^{2\lambda-2}\right)\left(\widetilde{\mathcal{N}}\Big{|}_{\mathcal{F}_ {N\lambda\theta}^{\leq}\cap\mathcal{F}_{N\lambda}^{+}}+1\right). \tag{54}\]
For this purpose, we follow the proof of Lemma 2 by writing \(\delta_{k}=-\delta_{k}^{\prime}+\delta_{k}^{\prime\prime}\) and estimating
\[\left\{(\delta_{k}^{\prime\prime})^{\dagger}\delta_{k}^{\prime \prime}+\delta_{k}^{\prime\prime}(\delta_{k}^{\prime\prime})^{\dagger}\right\} \Big{|}_{\mathcal{F}_{N\lambda}^{+}}\lesssim N^{2\kappa-2}|k|^{-4}\left( \mathcal{N}+1\right)^{3}\lesssim N^{2\kappa-2}|k|^{-4}\Big{(}\sum_{m,n,p\neq 0}a_{m}^{ \dagger}a_{n}^{\dagger}a_{p}^{\dagger}a_{p}a_{n}a_{m}+1\Big{)}.\]
By expressing \(a_{p}\) in terms of \(b_{p}\) and \(b_{-p}^{\dagger}\), we furthermore obtain
\[\sum_{m,n,p\neq 0}a_{m}^{\dagger}a_{n}^{\dagger}a_{n}^{\dagger}a_{p}^{ \dagger}a_{p}a_{n}a_{m}\Big{|}_{\mathcal{F}_{N\lambda}^{+}}\!\!=\!\!\sum_{m,n,p \neq 0}(\gamma_{p}^{2}\!+\!\nu_{p}^{2})a_{m}^{\dagger}a_{n}^{\dagger}b_{p}^{ \dagger}b_{p}a_{n}a_{m}\Big{|}_{\mathcal{F}_{N\lambda}^{+}}\!\!+\!\!\sum_{m,n,p \neq 0}\nu_{p}^{2}a_{m}^{\dagger}a_{n}^{\dagger}a_{n}a_{m}\Big{|}_{\mathcal{F}_{N \lambda}^{+}} \tag{55}\] \[\lesssim\sum_{m,n,p\neq 0}(\gamma_{p}^{2}\!+\!\nu_{p}^{2})b_{p}^{ \dagger}a_{m}^{\dagger}a_{n}^{\dagger}a_{n}a_{m}b_{p}\Big{|}_{\mathcal{F}_{N-1 }^{+}}\!\!+\!\sum_{m,n,p\neq 0}(\gamma_{p}^{2}\!+\!\nu_{p}^{2})[a_{n}a_{m},b_{p}]^{ \dagger}[a_{n}a_{m},b_{p}]\Big{|}_{\mathcal{F}_{N\lambda}^{+}}\] \[\lesssim N^{\frac{\kappa}{2}+2\lambda}\widetilde{\mathcal{N}} \Big{|}_{\mathcal{F}_{N\lambda}^{+}}+N^{\frac{7\kappa}{2}}\left(\widetilde{ \mathcal{N}}\Big{|}_{\mathcal{F}_{N\lambda}^{+}}+1\right)+N^{3\kappa+\lambda} \left(\widetilde{\mathcal{N}}\Big{|}_{\mathcal{F}_{N\lambda}^{+}}+1\right).\]
Since we assume \(\lambda>\frac{5\kappa}{2}\), we obtain \(\left\{(\delta_{k}^{\prime\prime})^{\dagger}\delta_{k}^{\prime\prime}+\delta_{k}^{\prime\prime}(\delta_{k}^{\prime\prime})^{\dagger}\right\}\Big{|}_{\mathcal{F}_{N\lambda}^{+}}\lesssim|k|^{-4}N^{\frac{5\kappa}{2}+2\lambda-2}\left(\widetilde{\mathcal{N}}+1\right)\). Let
us furthermore estimate \(\big{\{}(\delta_{k}^{\prime})^{\dagger}\delta_{k}^{\prime}+\delta_{k}^{\prime}( \delta_{k}^{\prime})^{\dagger}\big{\}}\,\Big{|}_{\mathcal{F}_{N\lambda_{0}}^{ \leq}\cap\mathcal{F}_{N\lambda}^{+}}\) from above by
\[|k|^{-4}N^{2\kappa-1}\left(\sum_{0<|\ell|,|\ell^{\prime}|<K}a_{\ell^{\prime}-k}^{\dagger}a_{\ell}^{\dagger}a_{\ell}a_{\ell^{\prime}-k}+\sum_{p\neq 0}a_{p}^{\dagger}a_{p}\right)\Big{|}_{\mathcal{F}_{N\lambda_{0}}^{\leq}\cap\mathcal{F}_{N\lambda}^{+}}\] \[\lesssim|k|^{-4}N^{2\kappa-1}\left(N^{\lambda_{0}}\sum_{p\neq 0}a_{p}^{\dagger}a_{p}+N^{\frac{3\kappa}{2}}\left(\widetilde{\mathcal{N}}+1\right)\right)\Big{|}_{\mathcal{F}_{N\lambda_{0}}^{\leq}\cap\mathcal{F}_{N\lambda}^{+}}\] \[\lesssim|k|^{-4}N^{\frac{7\kappa}{2}+\lambda_{0}-1}\left(\widetilde{\mathcal{N}}\Big{|}_{\mathcal{F}_{N\lambda_{0}}^{\leq}\cap\mathcal{F}_{N\lambda}^{+}}+1\right).\]
Combining the estimates on \(\delta_{k}^{\prime}\) and \(\delta_{k}^{\prime\prime}\) therefore yields Eq. (54). In order to verify Eq. (51) note that \(b_{k}-\gamma_{k}d_{k}-\nu_{k}d_{-k}^{\dagger}=\gamma_{k}\delta_{k}+\nu_{k} \delta_{-k}^{\dagger}\). Using Eq. (54), as well as the observation that \(|k|^{1-\delta}\lesssim N^{-\frac{\kappa}{2}}e_{k}\) for \(K\geq K_{0}N^{\frac{\kappa}{2}}\) and \(K_{0}\) large enough, yields
\[\sum_{k}|k|^{1-\delta}b_{k}^{\dagger}b_{k}\Big{|}_{\mathcal{F}_{N \lambda_{0}}^{\leq}\cap\mathcal{F}_{N\lambda}^{+}} \lesssim N^{-\frac{\kappa}{2}}\sum_{k\neq 0}e_{k}\left(\gamma_{k}d_{k}+\nu_{k}d_{ k}^{\dagger}\right)^{\dagger}\left(\gamma_{k}d_{k}+\nu_{k}d_{k}^{\dagger} \right)\Big{|}_{\mathcal{F}_{N\lambda_{0}}^{\leq}\cap\mathcal{F}_{N\lambda}^{+ }}\] \[\qquad\qquad\qquad+N^{\frac{\kappa}{2}}\sum_{k}|k|^{1-\delta}( \delta_{k}^{\dagger}\delta_{k}+\delta_{k}\delta_{k}^{\dagger})\Big{|}_{ \mathcal{F}_{N\lambda_{0}}^{\leq}\cap\mathcal{F}_{N\lambda}^{+}}\] \[\lesssim N^{-\frac{\kappa}{2}}\sum_{k\neq 0}e_{k}\Big{(}\gamma_{k}d_{k} +\nu_{k}d_{k}^{\dagger}\Big{)}^{\dagger}\Big{(}\gamma_{k}d_{k}+\nu_{k}d_{k}^{ \dagger}\Big{)}\Big{|}_{\mathcal{F}_{N\lambda_{0}}^{\leq}\cap\mathcal{F}_{N \lambda}^{+}}+N^{3\kappa}\big{(}N^{\kappa+\lambda_{0}-1}+N^{2\lambda-2}\big{)} \!\left(\widetilde{\mathcal{N}}\Big{|}_{\mathcal{F}_{N\lambda_{0}}^{\leq}\cap \mathcal{F}_{N\lambda}^{+}}\!\!+\!1\right).\]
By our assumption \(\lambda_{0}<1-4\kappa\) and \(\lambda<1-\frac{3}{2}\kappa\) we have
\[\sum_{k}|k|^{1-\delta}b_{k}^{\dagger}b_{k} \lesssim\big{(}1-N^{3\kappa}\big{(}N^{\kappa+\lambda_{0}-1}\!+\! N^{2\lambda-2}\big{)}\big{)}\sum_{k}|k|^{1-\delta}b_{k}^{\dagger}b_{k}\] \[\leq\sum_{k}|k|^{1-\delta}b_{k}^{\dagger}b_{k}-N^{3\kappa}\big{(}N ^{\kappa+\lambda_{0}-1}\!+\!N^{2\lambda-2}\big{)}\,\widetilde{\mathcal{N}},\]
which concludes the proof of Eq. (51). Note that Eq. (50) can be verified similarly. Furthermore Eq. (52) follows immediately from Eq. (54), using the fact that \(|k|^{2}|w_{k}|\lesssim N^{\kappa}\). Finally when it comes to Eq. (53), let us estimate in a similar fashion to Lemma 2
\[\pm\sum_{k\neq 0}\left(a_{k}^{\dagger}\frac{1}{\sqrt{a_{0}a_{0}^{\dagger}}}a_{0 }\left(N\left(\mathds{1}(|k|<K)\lambda_{k}\!-\!\frac{\lambda_{0}}{2}\right) \delta_{k}^{\prime\prime}\!+\!|k|^{2}w_{k}(\delta_{-k}^{\prime\prime})^{ \dagger}\right)\!+\!\text{H.c.}\right)\lesssim N^{2\kappa-1}\mathcal{N}( \mathcal{N}\!+\!1).\]
Following the argument in Eq. (55), we further obtain
\[\mathcal{N}(\mathcal{N}+1)\Big{|}_{\mathcal{F}_{N\lambda}^{+}} =\sum_{p,q}a_{p}^{\dagger}a_{q}^{\dagger}a_{q}a_{p}\Big{|}_{ \mathcal{F}_{N\lambda}^{+}}+\sum_{p}a_{p}^{\dagger}a_{p}\Big{|}_{\mathcal{F}_{ N\lambda}^{+}}\lesssim\left(N^{\frac{\kappa}{2}+\lambda}+N^{2\kappa}\right) \left(\widetilde{\mathcal{N}}\Big{|}_{\mathcal{F}_{N\lambda}^{+}}+1\right)\] \[\leq 2N^{\frac{\kappa}{2}+\lambda}\left(\widetilde{\mathcal{N}} \Big{|}_{\mathcal{F}_{N\lambda}^{+}}+1\right).\]
### A priori Condensation
In this subsection, we will use the strong estimates on the expected number of excited particles \(\langle\Psi,\mathcal{N}\Psi\rangle\) derived in [6], in order to obtain the a priori bounds necessary for our proof of Theorem 2. By [6, Theorem 1.2] we have the a priori estimate \(\langle\Psi,\mathcal{N}\Psi\rangle\leq CN^{\frac{5\kappa}{2}}\) for states \(\Psi\) satisfying \(\langle\Psi,H_{N,\kappa}\Psi\rangle-4\pi\mathfrak{a}N^{1+\kappa}\lesssim N^{\frac{5\kappa}{2}}\). Making use of the fact that the Lee-Huang-Yang correction \(\frac{1}{2}{\sum_{k\neq 0}}\Big{\{}\sqrt{|k|^{4}+16\pi\mathfrak{a}N^{\kappa}|k|^{2}}-|k|^{2}-8\pi\mathfrak{a}N^{\kappa}+\frac{(8\pi\mathfrak{a}N^{\kappa})^{2}}{2|k|^{2}}\Big{\}}\) is of the order \(N^{\frac{5\kappa}{2}}\), this yields
\[\langle\Psi_{d},\mathcal{N}\Psi_{d}\rangle\leq CN^{\frac{5\kappa}{2}} \tag{56}\]
for any eigenstate \(\Psi_{d}\) corresponding to an eigenvalue \(E_{N,\kappa}^{(d)}\) with \(E_{N,\kappa}^{(d)}-E_{N,\kappa}\lesssim N^{\frac{5\kappa}{2}}\). Note that the Eq. (56) only gives us an estimate on the expected number of particles. The following Lemma 11 however tells us that we can lift the control on the expected number of particles to a control on the number of particles in a spectral sense, i.e. we will argue that we can restrict our attention to states in the spectral subspace \(\mathcal{F}_{\lambda_{0}}^{\leq}\cap\mathcal{F}_{\lambda}^{+}\) without changing the energy significantly. Here we follow the methods presented in [7].
**Lemma 11**.: _Let \(E_{N,\kappa}^{(d)}\) satisfy \(E_{N,\kappa}^{(d)}\leq E_{N,\kappa}+CN^{\frac{\kappa}{2}}\) for a constant \(C>0\) and \(K\leq N^{1+\kappa}\), and assume \(\lambda_{0}>\frac{5\kappa}{2}\) as well as \(\lambda>3\kappa\). Then there exists a \(d\) dimensional subspace \(\mathcal{V}_{d}\subseteq\mathcal{F}_{\lambda_{0}}^{\leq}\cap\mathcal{F}_{ \lambda}^{+}\), such that for all elements \(\Psi\in\mathcal{V}_{d}\) with \(\|\Psi\|=1\)_
\[\langle\Psi,H_{N,\kappa}\Psi\rangle\leq E_{N,\kappa}^{(d)}+N^{-2\lambda_{0}} \Big{(}N^{\frac{9\kappa}{2}}+KN^{2\kappa}\Big{)}+N^{3\kappa-\lambda_{0}}+N^{ -2\lambda}N^{1+\kappa}+N^{3\kappa-\lambda}. \tag{57}\]
Proof.: Let \(\mathcal{W}_{d}\) be the space spanned by the first \(d\) eigenfunctions of \(H_{N,\kappa}\), \(f,g:\mathbb{R}\longrightarrow[0,1]\) smooth functions satisfying \(f^{2}+g^{2}=1\) and \(f(x)=1\) for \(x\leq\frac{1}{2}\) as well as \(f(x)=0\) for \(x\geq 1\) and let \(\mathcal{N}_{*}:=\sum_{0<|k|<K}a_{k}^{\dagger}a_{k}\). With this at hand we define the space \(\widetilde{\mathcal{V}}_{d}:=f\big{(}\frac{\mathcal{N}_{*}}{N^{\lambda_{0}}} \big{)}\mathcal{W}_{d}\), which clearly satisfies \(\widetilde{\mathcal{V}}_{d}\subseteq\mathcal{F}_{\lambda_{0}}^{\leq}\). Note that by our assumption \(\lambda_{0}>\frac{5\kappa}{2}\) and Eq. (56), we have
\[\left\|f\bigg{(}\frac{\mathcal{N}_{*}}{N^{\lambda_{0}}}\bigg{)}\,\Psi\right\| ^{2}\geq\Big{(}1-CN^{\frac{5\kappa}{2}-\lambda_{0}}\Big{)}\,\|\Psi\|^{2}>0 \tag{58}\]
for any \(\Psi\in\mathcal{W}_{d}\setminus\{0\}\), a suitable constant \(C\) and \(N\) large enough. Hence it is clear that \(\widetilde{\mathcal{V}}_{d}\) is \(d\) dimensional again. Using the IMS identity \(f\Big{(}\frac{\mathcal{N}_{*}}{N^{\lambda_{0}}}\Big{)}\,H_{N,\kappa}f\Big{(}\frac{\mathcal{N}_{*}}{N^{\lambda_{0}}}\Big{)}+g\Big{(}\frac{\mathcal{N}_{*}}{N^{\lambda_{0}}}\Big{)}\,H_{N,\kappa}g\Big{(}\frac{\mathcal{N}_{*}}{N^{\lambda_{0}}}\Big{)}=H_{N,\kappa}+\mathcal{E}\) with
\[\mathcal{E}:=\frac{1}{2}\left[f\bigg{(}\frac{\mathcal{N}_{*}}{N^{\lambda_{0}}}\bigg{)}\,,\bigg{[}H_{N,\kappa},f\bigg{(}\frac{\mathcal{N}_{*}}{N^{\lambda_{0}}}\bigg{)}\bigg{]}\right]+\frac{1}{2}\left[g\bigg{(}\frac{\mathcal{N}_{*}}{N^{\lambda_{0}}}\bigg{)}\,,\bigg{[}H_{N,\kappa},g\bigg{(}\frac{\mathcal{N}_{*}}{N^{\lambda_{0}}}\bigg{)}\bigg{]}\right],\]
see for example [16, 7], we obtain furthermore for a generic state \(\Theta=\frac{f\big{(}\frac{\mathcal{N}_{*}}{N^{\lambda_{0}}}\big{)}\Psi}{\|f \big{(}\frac{\mathcal{N}_{*}}{N^{\lambda_{0}}}\big{)}\Psi\|}\in\widetilde{V}_ {d}\), where \(\Psi\) is a state in \(\mathcal{W}_{d}\), and a suitable constant \(C>0\) the estimate
\[\langle\Theta,H_{N,\kappa}\Theta\rangle \leq\left\|f\bigg{(}\frac{\mathcal{N}_{*}}{N^{\lambda_{0}}} \bigg{)}\,\Psi\right\|^{-2}\bigg{(}E_{N,\kappa}^{(d)}-\langle g\bigg{(}\frac{ \mathcal{N}_{*}}{N^{\lambda_{0}}}\bigg{)}\,\Psi,H_{N,\kappa}g\bigg{(}\frac{ \mathcal{N}_{*}}{N^{\lambda_{0}}}\bigg{)}\,\Psi\rangle+\langle\Psi,\mathcal{E }\Psi\rangle\bigg{)}\] \[\leq E_{N,\kappa}^{(d)}+C\,\langle\Psi,\mathcal{E}\Psi\rangle+ CN^{3\kappa-\lambda_{0}},\]
where we have used the lower bound \(\left\langle g\left(\frac{\mathcal{N}_{*}}{N^{\lambda_{0}}}\right)\Psi,H_{N,\kappa}g\left(\frac{\mathcal{N}_{*}}{N^{\lambda_{0}}}\right)\Psi\right\rangle\geq\left(1-\left\|f\left(\frac{\mathcal{N}_{*}}{N^{\lambda_{0}}}\right)\Psi\right\|^{2}\right)E_{N,\kappa}\geq\left(1-\left\|f\left(\frac{\mathcal{N}_{*}}{N^{\lambda_{0}}}\right)\Psi\right\|^{2}\right)E_{N,\kappa}^{(d)}-CN^{3\kappa-\lambda_{0}}\), which follows from our assumptions on \(E_{N,\kappa}^{(d)}\) together with Eq. (58). In order to estimate \(\langle\Psi,\mathcal{E}\Psi\rangle\), let us define \(\pi_{0}\) as the projection onto the zero mode, \(\pi_{1}\) as the projection onto the modes \(\{k\in 2\pi\mathbb{Z}^{3}:|k|<K\}\) and \(\pi_{2}\) as the projection onto the modes \(\{k\in 2\pi\mathbb{Z}^{3}:|k|>K\}\), and rewrite \(\mathcal{E}\) as
\[\mathcal{E}=\frac{1}{4N^{2\lambda_{0}}}\sum_{I\in\{0,1,2\}^{4}}\sum_{jk,mn}( \pi_{I_{1}}\pi_{I_{2}}V_{N^{1-\kappa}}\pi_{I_{3}}\pi_{I_{4}})_{jk,mn}a_{k}^{ \dagger}a_{j}^{\dagger}X_{I}a_{m}a_{n},\]
with \(X_{I}:=N^{2\lambda_{0}}\left[f\left(\frac{\mathcal{N}_{*}+\#_{I_{1},I_{2}}}{N^{\lambda_{0}}}\right)-f\left(\frac{\mathcal{N}_{*}+\#_{I_{3},I_{4}}}{N^{\lambda_{0}}}\right)\right]^{2}+N^{2\lambda_{0}}\left[g\left(\frac{\mathcal{N}_{*}+\#_{I_{1},I_{2}}}{N^{\lambda_{0}}}\right)-g\left(\frac{\mathcal{N}_{*}+\#_{I_{3},I_{4}}}{N^{\lambda_{0}}}\right)\right]^{2}\), where \(\#_{i,j}\) counts how many of the indices \(i,j\) are equal to \(1\). Before we start with the term-by-term analysis, let us introduce the variables \(\widetilde{c}\) and \(\widetilde{\psi}\) as the ones defined in Eq. (11) and Eq. (12) with the concrete choice of the cut-off parameter \(0<K_{0}<2\pi\). Note that in this case, we obtain in analogy to Eq. (16) for a suitable \(C>0\)
\[H_{N,\kappa}-4\pi\mathfrak{a}_{N^{1-\kappa}}N^{\kappa}(N-1)\geq\mathbb{P}-CN^ {\kappa}\mathcal{N},\]
where \(\mathbb{P}:=\sum_{k}|k|^{2}\widehat{c}_{k}^{\dagger}\widetilde{c}_{k}+\frac{1 }{2}\sum_{jk,mn\neq 0}\left(V_{N^{1-\kappa}}\right)_{jk,mn}\widetilde{\psi}_{jk}^{ \dagger}\widetilde{\psi}_{mn}\), which especially implies the upper bound \(\langle\Psi,\mathbb{P}\Psi\rangle\lesssim N^{\frac{7\kappa}{2}}\) for \(\Psi\in\mathcal{W}_{d}\). Starting with the case \(I_{1}=I_{2}=0\) and \(I_{3}=I_{4}=1\), we identify \(\sum_{|k|<K}(V_{N^{1-\kappa}})_{00,(-k)k}\left(a_{0}^{\dagger}\right)^{2}X_{I }a_{-k}a_{k}+\text{H.c.}\) as
\[\left(\sum_{|k|<K}(V_{N^{1-\kappa}})_{00,(-k)k}\left(a_{0}^{ \dagger}\right)^{2}X_{I}a_{-k}\widetilde{c}_{k}-N^{-1}\sum_{|k|<K}(V_{N^{1- \kappa}})_{00,(-k)k}\left(a_{0}^{\dagger}\right)^{2}a_{0}^{2}X_{I}w_{k}a_{-k}a_ {-k}^{\dagger}\right)+\text{H.c.}\] \[\lesssim N^{\kappa}\mathbb{P}+N^{\kappa}\mathcal{N}+N^{2\kappa} \mathcal{N}+KN^{2\kappa},\]
which gives an contribution of at most order \(N^{\frac{9\kappa}{2}}+KN^{2\kappa}\) when evaluated against \(\Psi\). Since we clearly only have to consider \(I\) for which \(\#_{I_{1},I_{2}}\neq\#_{I_{3},I_{4}}\), the only relevant cases left are the ones where both \(I_{1},I_{2}\) and \(I_{3},I_{4}\) contain at least one non-zero index, and at least one of these index pairs contains the index \(1\). W.l.o.g., let us assume \(I_{1}=1\). In this case we identify
\[\sum_{jk,mn}(\pi_{I_{1}}\pi_{I_{2}}V_{N^{1-\kappa}}\pi_{I_{3}}\pi _{I_{4}})_{jk,mn}a_{k}^{\dagger}a_{j}^{\dagger}X_{I}a_{m}a_{n}=\sum_{jk,mn}( \pi_{I_{1}}\pi_{I_{2}}V_{N^{1-\kappa}}\pi_{I_{3}}\pi_{I_{4}})_{jk,mn}a_{k}^{ \dagger}a_{j}^{\dagger}X_{I}\widetilde{\psi}_{mn}\] \[-N^{-1}a_{0}^{2}\sum_{jk,m}(\pi_{I_{1}}\pi_{I_{2}}V_{N^{1-\kappa}} \pi_{I_{3}}\pi_{I_{4}})_{jk,m(-m)}a_{k}^{\dagger}a_{j}^{\dagger}X_{I}w_{m}. \tag{59}\]
Note that the second term on the right hand side of Eq. (59) can be treated in the same way as the case \(I_{1}=I_{2}=0\) and \(I_{3}=I_{4}=1\). Regarding the first term on the right hand side of Eq. (59) we estimate
\[\sum_{jk,mn}(\pi_{I_{1}}\pi_{I_{2}}V_{N^{1-\kappa}}\pi_{I_{3}}\pi _{I_{4}})_{jk,mn}a_{k}^{\dagger}a_{j}^{\dagger}X_{I}\widetilde{\psi}_{mn}+ \text{H.c.}\lesssim\sum_{jk,mn}(\pi_{I_{1}}\pi_{I_{2}}V_{N^{1-\kappa}}\pi_{I_{1} }\pi_{I_{2}})_{jk,mn}a_{k}^{\dagger}a_{j}^{\dagger}a_{m}a_{n}+\mathbb{P}\] \[\lesssim\mathbb{P}+N^{-2}\left(a_{0}^{\dagger}\right)^{2}a_{0}^{2 }\sum_{|j|,|m|<K}(V_{N^{1-\kappa}})_{j(-j),m(-m)}w_{j}w_{m}\lesssim\mathbb{P}+ K^{2}N^{\kappa-1},\]
which is of order \(N^{\frac{7\kappa}{2}}+KN^{2\kappa}\) due to our assumption \(K\leq N^{1+\kappa}\). Therefore
\[\left\langle\Theta,H_{N,\kappa}\Theta\right\rangle\leq E_{N,\kappa}^{(d)}+N^{-2 \lambda_{0}}\Big{(}N^{\frac{9\kappa}{2}}+KN^{2\kappa}\Big{)}+N^{3\kappa- \lambda_{0}}\]
for any \(\Theta\in\widetilde{\mathcal{V}}_{d}\) with \(\|\Theta\|=1\). Note that states in \(\Theta\in\widetilde{\mathcal{V}}_{d}\) still satisfy \(\left\langle\Theta,\mathcal{N}\Theta\right\rangle\lesssim N^{\frac{5\kappa}{ 2}}\), and let us further define \(\mathcal{V}_{d}:=f\big{(}\frac{\mathcal{N}}{N^{\lambda}}\big{)}\widetilde{ \mathcal{V}}_{d}\). Clearly \(\mathcal{V}_{d}\subseteq\mathcal{F}_{\lambda_{0}}^{\leq}\cap\mathcal{F}_{ \lambda}^{+}\) and similar to before we see that \(\mathcal{V}_{d}\) is indeed \(d\) dimensional. Making again use of the IMS identity, and performing similar estimates then yields for generic states \(\Theta=\frac{f\big{(}\frac{\mathcal{N}}{N^{\lambda}}\big{)}\Psi}{\|f\big{(} \frac{\mathcal{N}}{N^{\lambda}}\big{)}\|}\in\mathcal{V}_{d}\), where \(\Psi\in\widetilde{\mathcal{V}}_{d}\) with \(\|\Psi\|=1\), and a suitable constant \(C>0\)
\[\left\langle\Theta,H_{N,\kappa}\Theta\right\rangle\leq E_{N,\kappa}^{(d)}+N^{ -2\lambda_{0}}\Big{(}N^{\frac{9\kappa}{2}}+KN^{2\kappa}\Big{)}+N^{3\kappa- \lambda_{0}}+N^{3\kappa-\lambda}+C\left\langle\Psi,\mathcal{E}^{\prime}\Psi \right\rangle,\]
with \(\mathcal{E}^{\prime}:=\frac{1}{4N^{2\lambda}}\left(\sum_{k,\ell\neq 0}\widehat{V_{N, \kappa}}(k)a_{k}^{\dagger}a_{\ell-k}^{\dagger}X_{0}a_{\ell}a_{0}+\text{H.c.} \right)+\frac{1}{4N^{2\lambda}}\left(\sum_{k\neq 0}\widehat{V_{N,\kappa}}(k)a_{k}^{ \dagger}a_{-k}^{\dagger}X_{1}a_{0}^{2}+\text{H.c.}\right)\) and \(X_{i}:=N^{2\lambda}\left[f\big{(}\frac{\mathcal{N}+2}{N^{\lambda}}\big{)}-f \big{(}\frac{\mathcal{N}+i}{N^{\lambda}}\big{)}\right]^{2}+N^{2\lambda}\left[ g\big{(}\frac{\mathcal{N}+2}{N^{\lambda}}\big{)}-g\big{(}\frac{\mathcal{N}+i}{N^{ \lambda}}\big{)}\right]^{2}\). Using the fact that we have \(\mathcal{E}^{\prime}\lesssim N^{-2\lambda}\left(H_{N,\kappa}+N^{1+\kappa}\right)\) concludes the proof.
## Acknowledgments
Funding from the ERC Advanced Grant ERC-AdG CLaQS, grant agreement n. 834782, is gratefully acknowledged.
|
2303.16510 | Infeasible Deterministic, Stochastic, and Variance-Reduction Algorithms
for Optimization under Orthogonality Constraints | Orthogonality constraints naturally appear in many machine learning problems,
from Principal Components Analysis to robust neural network training. They are
usually solved using Riemannian optimization algorithms, which minimize the
objective function while enforcing the constraint. However, enforcing the
orthogonality constraint can be the most time-consuming operation in such
algorithms. Recently, Ablin & Peyr\'e (2022) proposed the Landing algorithm, a
method with cheap iterations that does not enforce the orthogonality constraint
but is attracted towards the manifold in a smooth manner. In this article, we
provide new practical and theoretical developments for the landing algorithm.
First, the method is extended to the Stiefel manifold, the set of rectangular
orthogonal matrices. We also consider stochastic and variance reduction
algorithms when the cost function is an average of many functions. We
demonstrate that all these methods have the same rate of convergence as their
Riemannian counterparts that exactly enforce the constraint. Finally, our
experiments demonstrate the promise of our approach to an array of
machine-learning problems that involve orthogonality constraints. | Pierre Ablin, Simon Vary, Bin Gao, P. -A. Absil | 2023-03-29T07:36:54Z | http://arxiv.org/abs/2303.16510v1 | Infeasible Deterministic, Stochastic, and Variance-Reduction Algorithms for Optimization under Orthogonality Constraints
###### Abstract
Orthogonality constraints naturally appear in many machine learning problems, from Principal Components Analysis to robust neural network training. They are usually solved using Riemannian optimization algorithms, which minimize the objective function while enforcing the constraint. However, enforcing the orthogonality constraint can be the most time-consuming operation in such algorithms. Recently, Ablin and Peyre (2022) proposed the Landing algorithm, a method with cheap iterations that does not enforce the orthogonality constraint but is attracted towards the manifold in a smooth manner. In this article, we provide new practical and theoretical developments for the landing algorithm. First, the method is extended to the Stiefel manifold, the set of rectangular orthogonal matrices. We also consider stochastic and variance reduction algorithms when the cost function is an average of many functions. We demonstrate that all these methods have the same rate of convergence as their Riemannian counterparts that exactly enforce the constraint. Finally, our experiments demonstrate the promise of our approach to an array of machine-learning problems that involve orthogonality constraints.
Orthogonal manifold, Stiefel manifold, stochastic optimization, variance reduction
## 1 Introduction
Letting \(f\) be a function from \(\mathbb{R}^{n\times p}\) to \(\mathbb{R}\), we consider the problem of optimizing \(f\) with orthogonality constraints:
\[\min_{X\in\mathbb{R}^{n\times p}}f(X),\qquad\text{s.t.}\quad X\in\text{St}(p,n )=\{X\in\mathbb{R}^{n\times p}|\;\;X^{\top}X=I_{p}\}, \tag{1}\]
where \(\text{St}(p,n)\) is _the Stiefel manifold_. Such a problem appears naturally in many machine learning problems as a way to control dissimilarity between learned features, e.g. in principal component analysis (PCA) (Hotelling, 1933), independent component analysis (Hyvarinen, 1999; Ablin et al., 2018), canonical correlation analysis (Hotelling, 1936), and more recently,
for the training of neural networks to improve their stability (Arjovsky et al., 2015; Zhang et al., 2021; Wang et al., 2020) and robustness against adversarial attacks (Cisse et al., 2017; Li et al., 2019, 2019; Singla and Feizi, 2021).
Riemannian optimization techniques are based on the observation that the orthogonality constraints in (1) define a smooth matrix manifold (Absil et al., 2008; Boumal, 2023) called the Stiefel manifold. The smooth geometry of the manifold constraint allows for the extension of many optimization techniques from the Euclidean space to the manifold setting, including second-order methods (Absil et al., 2007), stochastic gradient descent (Bonnabel, 2013), and variance-reduced methods (Zhang et al., 2016; Tripuraneni et al., 2018; Zhou et al., 2019).
A crucial part of Riemannian optimization methods is the use of _retraction_(Edelman et al., 1998; Absil and Malick, 2012), which is a projection map on the manifold preserving the first-order information, and ensures the individual iterates remain on the manifold constraint. Computing retractions with the orthogonality constraints involves linear algebra operations, such as matrix exponentiation, inverse, or QR decomposition. In some applications, e.g., when evaluating the gradient is relatively cheap, computing the retraction is the dominant cost of the optimization method. In these cases, unlike in Euclidean optimization, ensuring that the iterates move on the manifold can be much more costly than computing the gradient direction. Additionally, the need to perform retractions, and more generally, to take into account the curvature of the manifold, causes challenges in developing accelerated techniques in Riemannian optimization that the community has just started to overcome (Becigneul and Ganea, 2018; Alimisis et al., 2021; Criscitiello and Boumal, 2022). As a result, practitioners in deep learning sometimes rely on the use of adding a squared penalty term and minimize \(f(X)+\lambda\mathcal{N}(X)\) with \(\mathcal{N}(X)=\|X^{\top}X-I_{p}\|^{2}/4\) in the Frobenius norm, which does not perfectly enforce the constraint.
Unlike Riemannian techniques, where the constraint is exactly enforced in every iteration, and the squared penalty method, where the optimum of the problem is not exactly on the manifold, we employ a method that is in between the two. Motivated by the previous work of Ablin and Peyre (2022) for the orthogonal matrix manifold, we design an algorithm that does not enforce the constraint exactly in every iteration but instead controls the maximum distance to the constraint employing inexpensive matrix multiplication. Finally, the iterates _land_ on the manifold exactly at convergence, to a critical point of the problem (1).
The following subsection provides a brief overview of prior work on optimization techniques for solving (1). The rest of the paper is organized as follows:
* In Section 2 we extend the landing algorithm to \(\mathrm{St}(p,n)\); this way, the algorithm is no longer restricted to square matrices. By bounding the step-sizes we guarantee that we remain in \(\mathrm{St}(p,n)^{\varepsilon}\). We also introduce a new merit function.
* In Subsection 3.1, thanks to the new merit function, we greatly improve the theoretical results of Ablin and Peyre (2022): we demonstrate that the basic landing method with constant step size achieves \(\inf_{k\leq K}\|\mathrm{grad}f(X_{k})\|^{2}=O(K^{-1})\),
* In Subsection 3.2, we introduce a stochastic algorithm dubbed the stochastic landing algorithm. We show that with a step size decaying in \(K^{-\frac{1}{2}}\), it achieves \(\inf_{k\leq K}\|\mathrm{grad}f(X_{k})\|^{2}=O(K^{-\frac{1}{2}})\).
* In Subsection 3.3, we extend the SAGA algorithm and show that the SAGA landing algorithm achieves \(\inf_{k\leq K}\|\mathrm{grad}f(X_{k})\|^{2}=O(K^{-1})\). We recover each time the same convergence rate and sample complexity as the classical Riemannian counterpart of the methods--that uses retractions.
* In Section 4, we numerically demonstrate the merits of the method when computing retractions is a bottleneck of classical Riemannian methods.
Regarding the convergence speed results, the reader should be aware that we use the _square norm_ of the gradient as a criterion, while some readers might be more familiar with results stated without a square.1
Footnote 1: When this work was ready to be made public, the preprint (Schechtman et al., 2023) appeared that pursues similar goals. It addresses the more general problem of equality-constrained optimization, it uses a different merit function than the one we introduce in Section 2.2 and it does not consider variance reduction as we do in Section 3.3. The numerical experiments also differ considerably, as it only considers a Procrustes problem, while we experiment with deep neural networks.
Notation. The norm of a matrix \(\|M\|\) denotes the Frobenius norm, i.e. the vectorized \(\ell_{2}\) norm. We denote by \(I_{p}\) the \(p\times p\) identity matrix and by \(\mathrm{St}(p,n)\) the Stiefel manifold, which is the set of \(n\times p\) matrices such that \(X^{\top}X=I_{p}\). The tangent space of the Stiefel manifold at \(X\in\mathrm{St}(p,n)\) is denoted by \(\mathrm{T}_{X}\mathrm{St}(p,n)\). The projection on the set of \(p\times p\) skew-symmetric matrices denoted \(\mathrm{skew}_{p}\subset\mathbb{R}^{p\times p}\) is \(\mathrm{skew}(M)=\frac{1}{2}(M-M^{\top})\) and on the set of symmetric matrices is \(\mathrm{sym}(M)=\frac{1}{2}(M+M^{\top})\). The exponential of a matrix is \(\exp(M)\). The gradient of a function \(f:\mathbb{R}^{n\times p}\to\mathbb{R}\) is denoted by \(\nabla f\) and we define its Riemannian gradient as \(\mathrm{grad}f(X)=\mathrm{skew}(\nabla f(X)X^{\top})X\) for all \(X\in\mathbb{R}^{n\times p}\), as explained in detail in Section 2. We say that a function is \(L\)-smooth if it is differentiable and its gradient is \(L\)-Lipschitz. The constant \(L\) is then the smoothness constant of \(f\).
### Prior Work on Optimization with Orthogonality Constraints
Equation (1) is an optimization problem over a matrix manifold. In the literature, we find two main approaches to solving this problem, reviewed next.
#### 1.1.1 Trivializations
This approach proposed by Lezcano-Casado and Martinez-Rubio (2019); Lezcano-Casado (2019) consists in finding a differentiable surjective function \(\phi:E\to\mathrm{St}(p,n)\) where \(E\) is a Euclidean space, and to solve
\[\min_{A\in E}f(\phi(A))\enspace. \tag{2}\]
The main advantage of this method is that it turns a Riemannian optimization problem on \(\mathrm{St}(p,n)\) into an optimization on a Euclidean space, for which we can apply all existing Euclidean methods, such as gradient descent, stochastic gradient descent, or Adam. This is especially convenient in deep learning, where most standard optimizers are not meant to handle Riemannian constraints. However, this method has two major drawbacks. First, it can drastically change the optimization landscape. Second, the gradient of the function that is optimized is, following the chain rule, \(\nabla(f\circ\phi)=\left(\frac{\partial\phi}{\partial A}\right)^{\top}\nabla f\circ\phi\), and the Jacobian-vector product can be very expensive to compute.
To give a concrete example, we consider the parametrization \(\phi\) used by Singla and Feizi (2021): \(\phi(A)=\exp(A)\begin{pmatrix}I_{p}\\ 0\end{pmatrix}\), where \(A\in\text{skew}_{n}\), with \(\text{skew}_{n}\) the set of \(n\times n\) skew symmetric matrices. This corresponds to the first \(p\) columns of \(\exp(A)\). We see that computing this trivialization requires computing the exponential of a potentially large \(n\times n\) matrix. Furthermore, when computing the gradient of \(f\circ\phi\), one needs to compute the Jacobian-vector product with a matrix exponential, which requires computing a larger \(2n\times 2n\) matrix exponential (Lezcano-Casado and Martinez-Rubio, 2019): this becomes prohibitively costly when \(n\) is large.
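To illustrate the computational profile of this parametrization, the following is a minimal NumPy/SciPy sketch of the forward map \(\phi\) only; the problem sizes are arbitrary placeholders, and the backward pass, which requires a \(2n\times 2n\) exponential, is not shown.

```python
import numpy as np
from scipy.linalg import expm

def phi(A, p):
    """Forward map of the trivialization: the first p columns of exp(A), with A skew-symmetric."""
    n = A.shape[0]
    E = np.zeros((n, p))
    E[:p, :p] = np.eye(p)
    return expm(A) @ E  # needs the full n x n matrix exponential, O(n^3), even though the output is n x p

rng = np.random.default_rng(0)
n, p = 500, 10
M = rng.standard_normal((n, n))
A = (M - M.T) / 2                             # a point of the Euclidean parameter space skew_n
X = phi(A, p)
print(np.linalg.norm(X.T @ X - np.eye(p)))    # numerically zero: X lies on St(p, n)
```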
#### 1.1.2 Riemannian optimization
This approach consists in extending the classical Euclidean methods such as gradient descent or stochastic gradient descent to the Riemannian setting. For instance, consider Euclidean gradient descent to minimize \(f\) in the Euclidean setting, which iterates \(X_{k+1}=X_{k}-\eta\nabla f(X_{k})\) where \(\eta>0\) is a step size. There are two ingredients to transform it into a Riemannian method. First, the Euclidean gradient \(\nabla f(X)\) is replaced by the Riemannian gradient \(\text{grad}f(X)\), which is the projection of \(\nabla f(X)\) onto the tangent space of \(\text{St}(p,n)\) at \(X\). Second, the subtraction is replaced by a retraction \(\mathcal{R}\), which allows moving while staying on the manifold. We obtain the iterations of the Riemannian gradient descent:
\[X_{k+1}=\mathcal{R}(X_{k},-\eta\text{grad}f(X_{k}))\enspace. \tag{3}\]
In the case of \(\text{St}(p,n)\)(Edelman et al., 1998), the tangent space at \(X\) is the set
\[\text{T}_{X}\text{St}(p,n)=\{X\Omega+X_{\perp}K:\Omega\in\text{skew}_{p},K\in\mathbb{R}^{(n-p)\times p}\}=\{WX:W\in\text{skew}_{n}\} \tag{4}\]
and the Riemannian gradient with respect to the canonical metric (Edelman et al., 1998) is
\[\text{grad}f(X)=\text{skew}(\nabla f(X)X^{\top})X \tag{5}\]
where \(\text{skew}(M)=\frac{1}{2}(M-M^{\top})\) is the skew-symmetric part of a matrix. In (5), for convenience, we have omitted a factor of 2; compare with (Gao et al., 2022, § 3). This omission is equivalent to magnifying the canonical metric by a factor of 2. From a computational point of view, computing this gradient is cheap: it can be rewritten, for \(X\in\text{St}(p,n)\), as \(\text{grad}f(X)=\frac{1}{2}(\nabla f(X)-X\nabla f(X)^{\top}X)\), which can be computed with one matrix-matrix product between matrices of sizes \(p\times n\) and \(n\times p\), and one between matrices of sizes \(n\times p\) and \(p\times p\).
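As an illustration, here is a minimal NumPy sketch of this computation; the PCA-type cost \(f(X)=-\frac{1}{2}\operatorname{tr}(X^{\top}AX)\) and the problem sizes are placeholders chosen only for the example.

```python
import numpy as np

def riemannian_grad(X, G):
    """grad f(X) = skew(G X^T) X = (G - X G^T X) / 2 when X^T X = I_p, with G = nabla f(X)."""
    return 0.5 * (G - X @ (G.T @ X))  # one p x p product, then one n x p product

rng = np.random.default_rng(0)
n, p = 1000, 5
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                                   # symmetric matrix for a PCA-type cost
X, _ = np.linalg.qr(rng.standard_normal((n, p)))    # a point on St(p, n)
G = -A @ X                                          # Euclidean gradient of f(X) = -0.5 tr(X^T A X)
R = riemannian_grad(X, G)
# Tangency check: X^T R must be skew-symmetric for R in T_X St(p, n).
print(np.linalg.norm(X.T @ R + (X.T @ R).T))        # numerically zero
```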
A retraction \(\mathcal{R}(X,Z)=Y\) is a mapping that takes as inputs \(X\in\text{St}(p,n)\) and \(Z\in\text{T}_{X}\text{St}(p,n)\), and outputs \(Y\in\text{St}(p,n)\), such that it "goes in the direction of \(Z\) at the first order", i.e., we have \(Y=X+Z+o(\|Z\|)\). It defines a way to move on the manifold \(\text{St}(p,n)\). We give several examples, where we write \(Z=X\Omega+X_{\perp}K=WX\) in view of (4).
1. _Exponential retraction_: \[\mathcal{R}(X,Z)=\begin{pmatrix}X&Z\end{pmatrix}\exp\begin{pmatrix}\Omega&-Z ^{\top}Z\\ I_{p}&\Omega\end{pmatrix}\begin{pmatrix}I_{p}\\ 0\end{pmatrix}\exp(-\Omega)\enspace.\] (6) This is the exponential map--that follows geodesics--for the canonical metric on the manifold. The most expensive steps to compute it are a matrix exponentiation of a
matrix of size \(2p\times 2p\) and a matrix-matrix product between matrices of size \(n\times 2p\) and \(2p\times 2p\).
2. _Cayley retraction_: \[\mathcal{R}(X,Z)=(I_{n}-\frac{W}{2})^{-1}(I_{n}+\frac{W}{2})X\enspace.\] Though the inverse does exist for any \(W\in\text{skew}_{n}\), it requires solving a linear system that dominates the cost.
3. _Projection retraction_: \[\mathcal{R}(X,Z)=\text{Proj}(X+Z),\enspace\text{with Proj}(Y)=Y(Y^{\top}Y)^{- \frac{1}{2}}\enspace.\] (7) Computing this retraction requires computing the inverse-square root of a matrix, which is also a costly linear algebra operation.
These operations allow us to implement Riemannian gradient descent. Many variants have then been proposed to accelerate convergence, among which trust-region methods (Absil et al., 2007) and Nesterov acceleration (Ahn and Sra, 2020).
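To make iteration (3) concrete, here is a small self-contained NumPy sketch of Riemannian gradient descent with the projection retraction (7), on the same illustrative PCA-type cost as above; the step size and problem sizes are arbitrary choices.

```python
import numpy as np

def proj_retraction(Y):
    """Projection retraction: returns Y (Y^T Y)^{-1/2}, computed via a p x p eigendecomposition."""
    s, U = np.linalg.eigh(Y.T @ Y)
    return Y @ (U / np.sqrt(s)) @ U.T

def riemannian_gd(A, p, eta=0.1, n_iters=200, seed=0):
    """Riemannian gradient descent for f(X) = -0.5 * trace(X^T A X) over St(p, n)."""
    rng = np.random.default_rng(seed)
    X, _ = np.linalg.qr(rng.standard_normal((A.shape[0], p)))
    for _ in range(n_iters):
        G = -A @ X                             # Euclidean gradient
        rgrad = 0.5 * (G - X @ (G.T @ X))      # Riemannian gradient (5)
        X = proj_retraction(X - eta * rgrad)   # retraction step, Eqs. (3) and (7)
    return X

rng = np.random.default_rng(1)
n, p = 300, 4
A = rng.standard_normal((n, n))
A = (A + A.T) / 2
A /= np.linalg.norm(A, 2)                      # normalize so that the gradient scale is O(1)
X = riemannian_gd(A, p)
print(np.linalg.norm(X.T @ X - np.eye(p)))     # stays on the manifold up to round-off
```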
In most machine learning applications, the function \(f\) corresponds to empirical risk minimization, and so it has a sum structure. It can be written as:
\[f(X)=\frac{1}{N}\sum_{i=1}^{N}f_{i}(X)\enspace, \tag{8}\]
where \(N\) is the number of samples and each \(f_{i}\) is a "simple" function. In the Euclidean case, stochastic gradient descent (SGD) (Robbins and Monro, 1951) is the algorithm of choice to minimize such a function. At each iteration, it takes a random function \(f_{i}\) and goes in the direction opposite to its gradient. Similarly, in the Riemannian case, we can implement Riemannian-SGD (Bonnabel, 2013) by iterating
\[X_{k+1}=\mathcal{R}(X_{k},-\eta_{k}\text{grad}f_{i}(X_{k})),\enspace\text{ where }i\sim\mathcal{U}[1,N] \tag{9}\]
with \(i\sim\mathcal{U}[1,N]\) meaning that index \(i\) is drawn from the discrete uniform distribution between \(1\) and \(N\) at each iteration. This method only requires one sample at each iteration; hence it scales gracefully with \(N\). However, its convergence is quite slow and typically requires diminishing step sizes.
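A corresponding sketch of the Riemannian SGD iteration (9), for the empirical-risk example \(f_{i}(X)=-\frac{1}{2}\|a_{i}^{\top}X\|^{2}\) built from data vectors \(a_{i}\) (again a PCA-type cost); the decaying step-size schedule below is one common choice, not one prescribed by the text.

```python
import numpy as np

def proj_retraction(Y):
    s, U = np.linalg.eigh(Y.T @ Y)
    return Y @ (U / np.sqrt(s)) @ U.T

def riemannian_sgd(data, p, eta0=0.5, n_iters=5000, seed=0):
    """Riemannian SGD (9) for f(X) = (1/N) sum_i -0.5 * ||a_i^T X||^2 over St(p, n)."""
    rng = np.random.default_rng(seed)
    N, n = data.shape
    X, _ = np.linalg.qr(rng.standard_normal((n, p)))
    for k in range(n_iters):
        a = data[rng.integers(N)]                         # draw one sample, i ~ U[1, N]
        G = -np.outer(a, a @ X)                           # Euclidean gradient of f_i at X
        rgrad = 0.5 * (G - X @ (G.T @ X))
        X = proj_retraction(X - eta0 / np.sqrt(k + 1) * rgrad)
    return X

rng = np.random.default_rng(2)
N, n = 1000, 50
data = rng.standard_normal((N, n)) / np.sqrt(n)           # samples scaled so the gradients are O(1)
X = riemannian_sgd(data, p=3)
print(np.linalg.norm(X.T @ X - np.eye(3)))                # the retraction keeps X on St(p, n)
```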
Variance reduction techniques (Johnson and Zhang, 2013; Schmidt et al., 2017; Defazio et al., 2014) are classical ways to mitigate this problem: they yield algorithms that are stochastic (i.e., use only one sample at a time), provably converge with a constant step size, and enjoy a faster rate of convergence than SGD.
## 2 Geometry of the Landing Field and its Merit Function
For \(\lambda>0\) and \(X\in\mathbb{R}^{n\times p}\), we define the landing field as
\[\Lambda(X)=\text{grad}f(X)+\lambda X(X^{\top}X-I_{p}), \tag{10}\]
where \(\mathrm{grad}f(X)=\mathrm{skew}(\nabla f(X)X^{\top})X\). We define
\[\mathcal{N}(X)=\frac{1}{4}\|X^{\top}X-I_{p}\|^{2} \tag{11}\]
where \(\|\cdot\|\) is the Frobenius norm so that the second term in (10) is \(\lambda\nabla\mathcal{N}(X)\). Here, \(\mathrm{grad}f(X)\) denotes the extension to all \(X\in\mathbb{R}^{n\times p}\) of formula (5). It thus coincides with the Riemannian gradient when \(X\in\mathrm{St}(p,n)\). This extension has several attractive properties. First, for all full-rank \(X\in\mathbb{R}^{n\times p}\), \(\mathrm{grad}f(X)\) can still be seen as a Riemannian gradient of \(f\) on the manifold \(\{Y\in\mathbb{R}^{n\times p}\mid Y^{\top}Y=X^{\top}X\}\) with respect to a canonical-type metric, as shown in (Gao et al., 2022, Proposition 4). Second, this formula allows having orthogonality between the two terms of the field \(\Lambda\), for any \(X\). Indeed, we have \(\langle\mathrm{grad}f(X),X(X^{\top}X-I_{p})\rangle=\langle\mathrm{skew}( \nabla f(X)X^{\top}),X(X^{\top}X-I_{p})X^{\top}\rangle\), which cancels since it is the scalar product between a skew-symmetric and a symmetric matrix.
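A minimal NumPy sketch of the landing field (10) is given below (function names are ours); it is valid for any \(X\in\mathbb{R}^{n\times p}\), and the products are ordered so that the \(n\times n\) matrix \(\mathrm{skew}(\nabla f(X)X^{\top})\) is never formed explicitly.

```python
import numpy as np

def landing_field(X, G, lam):
    """Landing field (10): Lambda(X) = skew(G X^T) X + lam * X (X^T X - I_p).

    X : any (n, p) matrix, G : Euclidean gradient nabla f(X), lam : lambda > 0.
    """
    p = X.shape[1]
    # skew(G X^T) X = 0.5 * (G (X^T X) - X (G^T X)), avoiding the n x n product
    riem = 0.5 * (G @ (X.T @ X) - X @ (G.T @ X))
    penalty = X @ (X.T @ X - np.eye(p))          # nabla N(X)
    return riem + lam * penalty
```

The two terms combined here are orthogonal for every \(X\), which can be checked numerically by taking their trace inner product.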
The intuition behind the landing field is fairly simple: the component \(\mathrm{grad}f(X)\) is here to optimize the function, while the term \(X(X^{\top}X-I_{p})\) attracts \(X\) towards \(\mathrm{St}(p,n)\). More formally, since these two terms are orthogonal, the field cancels if and only if both terms cancel. The fact that \(X(X^{\top}X-I_{p})=0\) gives, assuming \(X\) injective, \(X^{\top}X=I_{p}\), hence \(X\in\mathrm{St}(p,n)\). Then, the fact that \(\mathrm{grad}f(X)=0\) combined with \(X\in\mathrm{St}(p,n)\) shows that \(X\) is a first-order critical point of the function \(f\) on the manifold. This reasoning is qualitative: the next part formalizes this geometrical intuition.
### Geometrical Results and Intuitions
In the remainder of the paper, we will always consider algorithms whose iterates stay close to the manifold \(\mathrm{St}(p,n)\). We measure this closeness with the function \(\mathcal{N}(X)\) (introduced in (11)), and define the _safe region_ as
\[\mathrm{St}(p,n)^{\varepsilon}=\{X\in\mathbb{R}^{n\times p}|\;\;\mathcal{N}(X )\leq\frac{1}{4}\varepsilon^{2}\}=\{X\in\mathbb{R}^{n\times p}|\;\;\|X^{\top}X -I_{p}\|\leq\varepsilon\} \tag{12}\]
for some \(\varepsilon\) between \(0\) and \(1\). A critical part of our work is to ensure that the iterates of our algorithms remain in \(\mathrm{St}(p,n)^{\varepsilon}\), which in turn guarantees the following bounds on the singular values of \(X\).
**Lemma 1**: _For all \(X\in\mathrm{St}(p,n)^{\varepsilon}\), the singular values of \(X\) are between \(\sqrt{1-\varepsilon}\) and \(\sqrt{1+\varepsilon}\)._
Note that when \(\varepsilon=0\), the singular values of \(X\) are all equal to \(1\), thus making the columns of the matrix orthogonal and ensuring that \(X\in\mathrm{St}(p,n)\).
As noted before, a critical feature of the landing field is the orthogonality of the two components, which holds between \(\nabla\mathcal{N}(X)\) and any direction \(AX\) with a skew-symmetric matrix \(A\). In order for the results to generalize to the stochastic and variance reduction setting, in the rest of this section we consider a more general form of the landing field of the form
\[F(X,A)=AX+\lambda\nabla\mathcal{N}(X), \tag{13}\]
where \(A\) is an \(n\times n\) skew-symmetric matrix. The formula of the landing field in (10) is recovered by taking \(A=\mathrm{skew}(\nabla f(X)X^{\top})\) in the above equation (13), that is to say \(\Lambda(X)=F(X,\mathrm{skew}(\nabla f(X)X^{\top}))\).
Fields of the form (13) will play a key role in our analysis, exhibiting interesting geometrical properties which stem from the orthogonality of the two terms: \(AX\) and \(\nabla\mathcal{N}(X)=X(X^{\top}X-I_{p})\) are orthogonal. Figure 1 illustrates the geometry of the problem and of the field \(F\). We have the following inequalities:
**Proposition 2**: _For all \(X\in\mathrm{St}(p,n)^{\varepsilon}\) and \(A\in\mathrm{skew}_{n}\), the norm of the field (13) admits the following bounds:_
\[\|AX\|^{2}+4\lambda^{2}(1-\varepsilon)\mathcal{N}(X)\leq\|F(X,A)\|^{2}\leq\|AX \|^{2}+4\lambda^{2}(1+\varepsilon)\mathcal{N}(X)\]
The orthogonality of the two terms also ensures that going in the direction of this field \(F(X,A)\) will allow us to remain in the safe region \(\mathrm{St}(p,n)^{\varepsilon}\) as long as the step size is small enough.
**Lemma 3** (Safe step size): _Let \(X\in\mathrm{St}(p,n)^{\varepsilon}\), \(A\in\mathrm{skew}_{n}\), and consider the update \(\tilde{X}=X-\eta F(X,A)\), where \(\eta>0\) is a step size and \(F(X,A)\) is the field (13). Define \(g=\|F(X,A)\|\) and \(d=\|X^{\top}X-I_{p}\|\). If the step size satisfies_
\[\eta\leq\eta(X):=\min\left\{\frac{\lambda d(1-d)+\sqrt{\lambda^{2}d^{2}(1-d)^{ 2}+g^{2}(\varepsilon-d)}}{g^{2}},\frac{1}{2\lambda}\right\}, \tag{14}\]
_then the next iterate \(\tilde{X}\) remains in \(\mathrm{St}(p,n)^{\varepsilon}\)._
This lemma is of critical importance, both from a practical and theoretical point of view. In practice, at each iteration of our algorithms introduced later, we always compute the safe step (14), and if the safe step is smaller than the prescribed step size, we use the safe step instead. Note that the formula for the safe step only involves quantities that are readily available when the field \(F\) has been computed: computing \(\eta(X)\) at each iteration does not add a significant computational overhead.
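A direct transcription of (14) into Python might look as follows (names ours); it assumes \(X\in\mathrm{St}(p,n)^{\varepsilon}\), i.e. \(d\leq\varepsilon\), and uses the Frobenius norm throughout.

```python
import numpy as np

def safe_step(X, F, lam, eps):
    """Safe step size eta(X) from (14) for the already-computed field F = F(X, A)."""
    p = X.shape[1]
    g = np.linalg.norm(F)                        # g = ||F(X, A)||
    d = np.linalg.norm(X.T @ X - np.eye(p))      # d = ||X^T X - I_p||
    if g == 0.0:
        return 1.0 / (2.0 * lam)
    alpha = lam * d * (1.0 - d)
    eta1 = (alpha + np.sqrt(alpha ** 2 + g ** 2 * max(eps - d, 0.0))) / g ** 2
    return min(eta1, 1.0 / (2.0 * lam))
```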
We can furthermore lower bound the safe step in (14) by a quantity that does not depend on \(X\):
Figure 1: Illustration of the geometry of the field \(F\). Note the orthogonality of the two components.
**Lemma 4** (Non-disappearing safe step size): _Assuming that \(\|AX\|_{F}\leq\tilde{a}\), we have that the upper-bound in Lemma 3 is lower-bounded by_
\[\eta(X)\geq\eta^{*}(\tilde{a},\varepsilon,\lambda):=\min\left\{\frac{\lambda(1- \varepsilon)\varepsilon}{\tilde{a}^{2}+\lambda^{2}(1+\varepsilon)\varepsilon^ {2}},\,\sqrt{\frac{\varepsilon}{2\tilde{a}^{2}}},\,\frac{1}{2\lambda}\right\}. \tag{15}\]
This lemma serves to prove that the iterates of our algorithms all stay in the safe region provided that the step size is small enough. The following result provides a formal statement:
**Proposition 5**: _Consider a sequence of iterates \(X_{k}\) defined by recursion, starting from \(X_{0}\in\mathrm{St}(p,n)^{\varepsilon}\). We assume that there is a family of maps \(\mathcal{A}_{k}(X_{0},\ldots,X_{k})=A_{k}\in\mathrm{skew}_{n}\) such that \(X_{k+1}=X_{k}-\eta_{k}F(X_{k},A_{k})\) where \(\eta_{k}>0\) is a step size. In addition, we assume that there is a constant \(\tilde{a}>0\) such that for all \(X_{0},\ldots,X_{k}\in\mathrm{St}(p,n)^{\varepsilon}\), we have \(\|\mathcal{A}_{k}(X_{0},\ldots,X_{k})X_{k}\|\leq\tilde{a}\). Then, if all \(\eta_{k}\) are such that \(\eta_{k}\leq\eta^{*}(\tilde{a},\varepsilon,\lambda)\) with \(\eta^{*}\) defined in Lemma 4, we have that all iterates satisfy \(X_{k}\in\mathrm{St}(p,n)^{\varepsilon}\)._
This proposition shows that an algorithm that follows a field of the form (13) with sufficiently small steps will stay within the safe region \(\mathrm{St}(p,n)^{\varepsilon}\). The definition of the maps \(\mathcal{A}_{k}\) is slightly cumbersome as it depends on all the past iterates, but it is needed to handle the variance reduction algorithm that we study later. This result is central to this article since all of the subsequent algorithms considered in Section 3 produce sequences that satisfy the hypothesis of Proposition 5.
### A Merit Function
The next proposition defines a smooth merit function for the landing field \(\Lambda(X)\) defined in (10). The existence of such a merit function is central for a simple analysis of the landing algorithm and its different extensions. We consider
\[\mathcal{L}(X)=f(X)+h(X)+\mu\mathcal{N}(X), \tag{16}\]
where \(h(X)=-\frac{1}{2}\langle\mathrm{sym}(X^{\top}\nabla f(X)),X^{\top}X-I_{p}\rangle\) and \(\mu>0\), which is suitably chosen in the following result.
**Proposition 6** (Merit function bound): _Let \(\mathcal{L}(X)\) be the merit function defined in (16). For all \(X\in\mathrm{St}(p,n)^{\varepsilon}\) we have with \(\nu=\lambda\mu\):_
\[\langle\nabla\mathcal{L}(X),\,\Lambda(X)\rangle\geq\frac{1}{2}\|\mathrm{grad }f(X)\|^{2}+\nu\mathcal{N}(X), \tag{17}\]
_for the choice of_
\[\mu\geq\frac{2}{3-4\varepsilon}\left(L(1-\varepsilon)+3s+\hat{L}^{2}\frac{1+ \varepsilon}{\lambda}\right)\ \,, \tag{18}\]
_where \(s=\sup_{X\in\mathrm{St}(p,n)^{\varepsilon}}\|\mathrm{sym}(X^{\top}\nabla f(X))\|\), \(L>0\) is the smoothness constant of \(f\) over \(\mathrm{St}(p,n)^{\varepsilon}\), \(\hat{L}=\max(L,L^{\prime})\) with \(L^{\prime}=\max_{X\in\mathrm{St}(p,n)^{\varepsilon}}\|\nabla f(X)\|\), and \(\varepsilon<\frac{3}{4}\)._
This demonstrates that \(\mathcal{L}\) is in fact a merit function (or a _Lyapunov_ function). Indeed, the landing direction is an ascent direction for the merit function, since \(\langle\nabla\mathcal{L}(X),\Lambda(X)\rangle\geq 0\). We can then combine this proposition with Proposition 2 to get the following result:
**Proposition 7**: _Under the same conditions as in Proposition 6, defining \(\rho=\min(\frac{1}{2},\frac{\nu}{4\lambda^{2}(1+\varepsilon)})\), we have for \(X\in\mathrm{St}(p,n)^{\varepsilon}\):_
\[\langle\Lambda(X),\nabla\mathcal{L}(X)\rangle\geq\rho\|\Lambda(X)\|^{2}.\]
We now turn to the intuition behind the merit function. The merit function \(\mathcal{L}\) is composed of three terms. The terms \(f(X)\) and \(\mu\mathcal{N}(X)\) are easy to interpret: the first one controls optimality, while the second controls the distance to the manifold. The term \(h(X)\) might be mysterious at first. Its role is best understood when \(X\) is on the manifold. Indeed, for \(X\in\mathrm{St}(p,n)\) we get
\[\nabla h(X)=-X\mathrm{sym}(X^{\top}\nabla f(X))\enspace.\]
This vector is, in fact, the opposite of the projection of \(\nabla f(X)\) on the normal space to \(\mathrm{St}(p,n)\) at \(X\). Hence, if \(X\in\mathrm{St}(p,n)\) then \(\mathcal{L}(X)=f(X)\) and \(\nabla\mathcal{L}(X)\) is a vector in the tangent space \(\mathrm{T}_{X}\mathrm{St}(p,n)\) which is aligned with \(\mathrm{grad}f(X)\). Note that Fletcher's penalty function (Fletcher, 1970) is similar to the merit function \(\mathcal{L}(X)\) while the \(h(X)\) term is determined by the solution of the least squares problem. Notice that this merit function \(\mathcal{L}(X)\) is the same as that of Gao et al. (2019) where \(\mathcal{L}\) is constructed from the augmented Lagrangian function and \(h(X)\) serves as a multiplier term since the multiplier of orthogonality constraints has a closed-form solution, \(\mathrm{sym}(X^{\top}\nabla f(X))\), at any first-order stationary point. The main difference between our two works is that Gao et al. (2019) then solves the optimization problem by taking steps in a direction that approximates \(-\nabla\mathcal{L}(X)\), but that does not satisfy the orthogonality hypothesis and hence is not guaranteed to converge for any value of \(\lambda>0\) (see also the discussion in Appendix B in (Ablin and Peyre, 2022)). As a sum of smooth terms, the merit function is also smooth:
**Proposition 8** (Smoothness of the merit function): _The merit function \(\mathcal{L}\) is \(L_{g}\)-smooth on \(\mathrm{St}(p,n)^{\varepsilon}\), with \(L_{g}=L+\lambda L_{\mathcal{N}}\) where \(L\) is the smoothness constant of \(f(X)+h(X)\) and \(L_{\mathcal{N}}\) is that of \(\mathcal{N}\), upper bounded for instance by \(2+3\varepsilon\)._
Schechtman et al. (2023) consider instead the non-smooth merit function \(\mathcal{L}^{\prime}(X)=f(X)+M\|X^{\top}X-I_{p}\|\). Our merit function decreases faster in the direction normal to the manifold, which is why the term \(h(X)\) is introduced to tame the contribution of \(f\) in that direction. The smoothness of our merit function renders the subsequent analysis of the algorithms particularly simple since the smoothness lemma applied on \(\mathcal{L}\) directly gives a descent lemma on the merit function. This forms the basic tool to analyze the landing method and its variants.
## 3 A Family of Landing Algorithms, and their Convergence Properties
In this section, we consider a family of methods all derived from a base algorithm, the landing gradient descent algorithm. All of our algorithms follow directions of the form (13).
### Landing Gradient Descent: the Base Algorithm
This algorithm produces a sequence of iterates \(X_{k}\in\mathbb{R}^{n\times p}\) by iterating
\[X_{k+1}=X_{k}-\eta\Lambda(X_{k}) \tag{19}\]
where \(\eta>0\) is a step size. Note that this method falls into the hypothesis of Proposition 5 with the simple maps \(\mathcal{A}_{k}(X_{0},\ldots,X_{k})=\mathrm{skew}(\nabla f(X_{k})X_{k}^{\top})\), so we can just take \(\tilde{a}=\sup_{X\in\mathrm{St}(p,n)^{\varepsilon}}\|\mathrm{grad}f(X)\|\) to get a safe step size \(\eta^{*}\) that guarantees that the iterates of the landing algorithm stay in \(\mathrm{St}(p,n)^{\varepsilon}\).
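Putting the pieces together, a schematic sketch of the resulting loop with the safe-step guard could read as follows; it reuses the `landing_field` and `safe_step` sketches above and treats `euclid_grad` (a callable returning \(\nabla f(X)\)) as given.

```python
def landing_gd(X0, euclid_grad, lam, eps, eta, n_iters):
    """Landing gradient descent (19), capped by the safe step of Lemma 3."""
    X = X0.copy()
    for _ in range(n_iters):
        F = landing_field(X, euclid_grad(X), lam)
        step = min(eta, safe_step(X, F, lam, eps))  # keep X in St(p, n)^eps
        X = X - step * F
    return X
```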
We will start with the analysis of this method, where we find that it achieves a rate of convergence of \(\frac{1}{K}\): we have \(\frac{1}{K}\sum_{k=0}^{K}\|\mathrm{grad}f(X_{k})\|^{2}=O(\frac{1}{K})\) and \(\frac{1}{K}\sum_{k=0}^{K}\mathcal{N}(X_{k})=O(\frac{1}{K})\). We, therefore, obtain the same properties as classical Riemannian gradient descent, with a reduced cost per iteration, but with different constants.
Consider the iteration (19) starting from \(X_{0}\in\mathrm{St}(p,n)^{\varepsilon}\). Define \(\tilde{a}=\sup_{X\in\mathrm{St}(p,n)^{\varepsilon}}\|\mathrm{skew}(\nabla f(X)X^{\top})X\|_{F}\), and let \(\eta^{*}\) be the safe step size chosen from Lemma 4. Let \(\mathcal{L}^{*}\) be a lower bound of the merit function \(\mathcal{L}\) on \(\mathrm{St}(p,n)^{\varepsilon}\). Then, for \(\eta\leq\min(\frac{1}{2L_{g}},\frac{\nu}{4\lambda^{2}L_{g}(1+\varepsilon)},\eta^{*})\), we have
\[\frac{1}{K}\sum_{k=1}^{K}\|\mathrm{grad}f(X_{k})\|^{2}\leq\frac{4(\mathcal{L}( X_{0})-\mathcal{L}^{*})}{\eta K}\quad\text{and}\quad\frac{1}{K}\sum_{k=1}^{K} \mathcal{N}(X_{k})\leq\frac{2(\mathcal{L}(X_{0})-\mathcal{L}^{*})}{\eta\nu K}.\]
This result demonstrates convergence to the stationary points of \(f\) on the manifold at a rate \(\frac{1}{K}\), just like classical Riemannian gradient descent (Boumal et al., 2019). Some readers might be more familiar with statements "without squares", which in the current case are that \(\inf_{k\leq K}\|\mathrm{grad}f(X_{k})\|=O(\frac{1}{\sqrt{K}})\) and \(\inf_{k\leq K}\|X_{k}^{\top}X_{k}-I_{p}\|=O(\frac{1}{\sqrt{K}})\). This result is an important improvement over that of Ablin and Peyre (2022) since we do not require decreasing step sizes to get convergence and obtain a much better convergence rate.
### Landing Stochastic Gradient Descent: Large Scale Orthogonal Optimization
We now consider the case where the function \(f\) is the average of \(N\) functions:
\[f(X)=\frac{1}{N}\sum_{i=1}^{N}f_{i}(X)\]
We can define the landing field associated with each \(f_{i}\) by
\[\Lambda_{i}(X)=\mathrm{grad}f_{i}(X)+\lambda X(X^{\top}X-I_{p})\]
This way, we have
\[\Lambda(X)=\frac{1}{N}\sum_{i=1}^{N}\Lambda_{i}(X)\]
and if we take an index \(i\) uniformly at random between \(1\) and \(N\) we have
\[\mathbb{E}_{i}[\Lambda_{i}(X)]=\Lambda(X).\]
In other words, the direction \(\Lambda_{i}\) is an unbiased estimator of the landing field \(\Lambda\). We consider the landing stochastic gradient descent (Landing-SGD) algorithm, which at iteration \(k\) samples a random index \(i_{k}\) uniformly between \(1\) and \(N\) and iterates
\[X_{k+1}=X_{k}-\eta_{k}\Lambda_{i_{k}}(X_{k}) \tag{20}\]
where \((\eta_{k})\) is a sequence of step sizes. As is customary in the analysis of stochastic optimization algorithms, we posit a bound on the variance of \(\Lambda_{i}\):
**Assumption 10**: _There exists \(B>0\) such that for all \(X\in\mathrm{St}(p,n)^{\varepsilon}\), we have \(\frac{1}{N}\sum_{i=1}^{N}\|\Lambda_{i}(X)-\Lambda(X)\|^{2}\leq B\)._
This allows us to derive a simple recursive bound on the iterates using the smoothness inequality:
**Proposition 11**: _Assume that \(\eta_{k}\leq\min(\frac{1}{2L_{g}},\frac{\nu}{4\lambda^{2}L_{g}(1+\varepsilon )},\eta^{*})\) where \(\eta^{*}\) is the global safe step size from Lemma 4. Then,_
\[\mathbb{E}_{i_{k}}[\mathcal{L}(X_{k+1})]\leq\mathcal{L}(X_{k})-\frac{\eta_{k}} {4}\|\mathrm{grad}f(X_{k})\|^{2}-\frac{\eta_{k}\nu}{2}\mathcal{N}(X_{k})+\frac {L_{g}B\eta_{k}^{2}}{2},\]
_where the expectation is taken with respect to the random variable \(i_{k}\)._
We can now get convergence rates of the stochastic landing algorithm with decreasing step sizes.
**Proposition 12**: _Assume that the step size is \(\eta_{k}=\eta_{0}\times(1+k)^{-\frac{1}{2}}\) with \(\eta_{0}=\min(\frac{1}{2L_{g}},\frac{\nu}{4\lambda^{2}L_{g}(1+\varepsilon)}, \eta^{*})\), with \(\eta^{*}\) chosen as the safe step size. Then, we have_
\[\inf_{k\leq K}\mathbb{E}[\|\mathrm{grad}f(X_{k})\|^{2}]=O\left(\frac{\log(K)} {\sqrt{K}}\right)\quad\text{and}\quad\inf_{k\leq K}\mathbb{E}[\|\mathcal{N}(X_ {k})\|^{2}]=O\left(\frac{\log(K)}{\sqrt{K}}\right)\enspace.\]
This shows that our method with decreasing step size has the same convergence rate as Riemannian stochastic gradient descent with decreasing step size. With constant step size, we get the following proposition:
**Proposition 13**: _Assume that the step size is fixed to \(\eta=\eta_{0}\times(1+K)^{-\frac{1}{2}}\) with \(\eta_{0}=\min(\frac{1}{2L_{g}},\frac{\nu}{4\lambda^{2}L_{g}(1+\varepsilon)}, \eta^{*})\). Then, we have_
\[\inf_{k\leq K}\mathbb{E}[\|\mathrm{grad}f(X_{k})\|^{2}]=O\left(\frac{1}{\sqrt {K}}\right)\quad\text{and}\quad\inf_{k\leq K}\mathbb{E}[\|\mathcal{N}(X_{k})\| ^{2}]=O\left(\frac{1}{\sqrt{K}}\right)\enspace.\]
Sample complexityThe sample complexity of the algorithm is readily obtained from the bound: in order to find an \(\varepsilon-\)critical point of the problem and get both \(\inf_{k\leq K}\|\mathrm{grad}f(X_{k})\|^{2}\leq\varepsilon\) and \(\inf_{k\leq K}\mathcal{N}(X_{k})\leq\varepsilon\), we need \(O(\varepsilon^{-2})\) iterations. The \(O\) here only hides constants of the problem like conditioning of \(f\) and hyperparameter \(\lambda\), but this quantity is independent of the number of samples \(N\). This matches the classical sample complexity results obtained with SGD in the Euclidean and Riemannian non-convex settings (Bonnabel, 2013).
### Landing SAGA: Variance Reduction for Faster Convergence
In this section, we are in the same finite-sum setting as in Section 3.2. As is customary in classical optimization, SGD suffers from the high variance of its gradient estimator, which leads to sub-optimal rates of convergence. A classical strategy to overcome this issue consists in using variance reduction algorithms, which build an estimator of the gradient
whose variance goes to \(0\) as training progresses. Such algorithms have also been proposed in a Riemannian setting, but like most other methods, they also require retractions (Zhang et al., 2016).
We propose a retraction-free variance-reduction algorithm that is a cross-over between the celebrated SAGA algorithm and the landing algorithm, which we call the Landing-SAGA algorithm. The algorithm keeps a memory of the last gradient seen at each sample, \(\Phi^{1}_{k},\ldots,\Phi^{N}_{k}\), where each \(\Phi^{i}_{k}\in\mathbb{R}^{n\times p}\). At iteration \(k\), we sample uniformly at random an index \(i_{k}\) between \(1\) and \(N\), and compute the direction \(\Lambda^{i_{k}}_{k}=\mathrm{grad}f_{i_{k}}(X_{k})-\mathrm{skew}(\Phi^{i_{k}}_{k}X_{k}^{\top})X_{k}+\frac{1}{N}\sum_{j=1}^{N}\mathrm{skew}(\Phi^{j}_{k}X_{k}^{\top})X_{k}+\lambda X_{k}(X_{k}^{\top}X_{k}-I_{p})\). Then, we update the memory corresponding to sample \(i_{k}\) by doing \(\Phi^{i_{k}}_{k+1}=\nabla f_{i_{k}}(X_{k})\), and \(\Phi^{j}_{k+1}=\Phi^{j}_{k}\) for all \(j\neq i_{k}\). Then we move in the direction
\[X_{k+1}=X_{k}-\eta\Lambda^{i_{k}}_{k}.\]
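A schematic Python sketch of one Landing-SAGA step is given below (names ours); for readability the average over the memory is recomputed naively, whereas an actual implementation would keep a running mean of the \(\Phi^{j}\) (the map \(\Phi\mapsto\mathrm{skew}(\Phi X^{\top})X\) is linear in \(\Phi\)).

```python
import numpy as np

def skew_apply(G, X):
    """skew(G X^T) X, computed without forming the n x n matrix."""
    return 0.5 * (G @ (X.T @ X) - X @ (G.T @ X))

def landing_saga_step(X, i, euclid_grads, memory, lam, eta):
    """One Landing-SAGA update; memory[j] stores the last Euclidean gradient of f_j."""
    N, p = len(memory), X.shape[1]
    G_i = euclid_grads[i](X)                      # fresh gradient of the sampled f_i
    avg = sum(skew_apply(Phi, X) for Phi in memory) / N
    direction = (skew_apply(G_i, X) - skew_apply(memory[i], X) + avg
                 + lam * X @ (X.T @ X - np.eye(p)))
    memory[i] = G_i                               # update the memory of sample i
    return X - eta * direction
```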
It is important to note that the variance reduction is only applied on the "Riemannian" part \(\mathrm{grad}f_{i}(X)\). The other term in \(X(X^{\top}X-I_{p})\) is treated as usual. Like in the classical SAGA algorithm, we have the unbiasedness property:
\[\mathbb{E}_{i}[\Lambda^{i}_{k}]=\Lambda(X_{k})\]
This means that on average, the direction we take is the landing field, computed over the whole dataset. The gist of this method is that we can have fine control on \(\mathbb{E}_{i}[\|\Lambda^{k}_{i}\|^{2}]\). Indeed, letting \(D^{i}_{k}=\mathrm{grad}f_{i}(X_{k})-\mathrm{skew}(\Phi^{i}_{k}X_{k}^{\top})X_ {k}\), we have
\[\mathbb{E}[\|\Lambda^{k}_{i}\|^{2}] =\|\Lambda(X_{k})\|^{2}+\mathbb{E}[\|D^{i}_{k}-\mathbb{E}[D^{i}_{ k}]\|^{2}] \tag{21}\] \[\leq\|\Lambda(X_{k})\|^{2}+\mathbb{E}[\|D^{i}_{k}\|^{2}], \tag{22}\]
and \(D^{i}_{k}\) can also be controlled since
\[\|D^{i}_{k}\| =\|\mathrm{skew}((\nabla f_{i}(X_{k})-\Phi^{i}_{k})X_{k}^{\top})X_{k}\| \tag{23}\] \[\leq(1+\varepsilon)\|\nabla f_{i}(X_{k})-\Phi^{i}_{k}\|\] (24) \[\leq(1+\varepsilon)L_{f}\|X_{k}-X^{i}_{k}\| \tag{25}\]
where \(X^{i}_{k}\) is the last iterate that was chosen for the index \(i\) (so \(X^{i}_{k}\) is such that \(\nabla f_{i}(X^{i}_{k})=\Phi^{i}_{k}\)). We, therefore, recover that we need to control the distance from the memory \(X^{i}_{k}\) to the current point \(X_{k}\), as is customary in the analysis of SAGA.
We have the following convergence theorem, which is obtained by combining the merit function and the proof technique of Reddi et al. (2016):
**Proposition 14**: _Define \(\rho\) as in Proposition 7, \(L_{g}\) as in Proposition 8 and \(L_{f}\) the smoothness constant of \(f\) on \(\mathrm{St}(p,n)^{\varepsilon}\). Assume that the step size is such that_
\[\eta\leq\min\left(\eta^{*},\frac{\rho}{L_{g}},\frac{1}{\sqrt{8N(1+\varepsilon )}L_{f}},\left(\frac{\rho}{4N(4N+2)L_{g}L_{f}^{2}(1+\varepsilon)}\right)^{1/3} \right)\.\]
_Then, we have_
\[\inf_{k\leq K}\mathbb{E}[\|\mathrm{grad}f(X_{k})\|^{2}]=O\left(\frac{1}{\eta K }\right)\quad\text{and}\quad\inf_{k\leq K}\mathbb{E}[\|\mathcal{N}(X_{k})\|^{ 2}]=O\left(\frac{1}{\eta K}\right)\.\]
As in classical optimization, using the variance reduction of SAGA recovers a \(\frac{1}{K}\) rate with a stochastic algorithm.
Sample complexityWhen the number of samples \(N\) is large, the last term in the above "min" for the choice of step size is the smallest; hence the step size scales as \(N^{-2/3}\). This shows that to get to a \(\varepsilon-\)critical point such that \(\|\text{grad}f(X)\|^{2}\leq\varepsilon\), we need \(O(N^{\frac{2}{3}}\varepsilon^{-1})\) iterations. This matches the sample complexity of classical Euclidean SAGA in the non-convex setting (Reddi et al., 2016).
### Comparison to Penalty Methods
It is common practice in deep learning applications that orthogonality is imposed by adding an \(\ell_{2}\) regularization term and minimizing
\[f(X)+\lambda\mathcal{N}(X),\]
for example in (Balestriero et al., 2018; Xie et al., 2017; Bansal et al., 2018). This method leads to a small computational overhead compared to the simple unconstrained minimization of \(f\), and it allows the use of standard optimization algorithms tailored for deep learning. However, it provides no guarantee that the orthogonality will be satisfied. Generally, there are two possible outcomes based on the choice of \(\lambda>0\). If \(\lambda\) is small, then the final point is far from orthogonality, defeating the purpose of the regularization. If \(\lambda\) is too large, then optimization becomes too hard, as the problem becomes ill-conditioned: its smoothness constant grows with \(\lambda\) while its strong convexity constant does not, since \(\mathcal{N}(X)\) is not strongly convex (indeed, it is flat in the direction tangent to \(\text{St}(p,n)\)).
In order to have a more formal statement than the above intuition, we consider the simple case of the PCA, that is when \(f\) is a quadratic: \(f(X)=-\frac{1}{2}\|AX\|^{2}\) with \(A\in\mathbb{R}^{N\times n}\):
\[\min_{X\in\mathbb{R}^{n\times p}}-\frac{1}{2}\|AX\|^{2}+\frac{\lambda}{4}\|X^ {\top}X-I_{p}\|^{2} \tag{26}\]
**Proposition 15**: _The optimal solution \(X_{*}\in\mathbb{R}^{n\times p}\) of the principal component analysis in (26) with the \(\ell_{2}\) squared penalty regularization parameter \(\lambda>0\), has the distance \(\|X_{*}^{\top}X_{*}-I_{p}\|=\|A^{\top}A\|/\lambda\)._
## 4 Experiments
We numerically compare the landing method against the two main alternatives, the Riemannian gradient descent with QR retraction and the Euclidean gradient descent with added \(\ell_{2}\) squared penalty norm, with stochastic gradients. All the tests are implemented in PyTorch and performed using a single GPU.
### Online PCA
We test the methods' performance on the online principal component analysis (PCA) problem
\[\min_{X\in\mathbb{R}^{n\times p}}-\frac{1}{2}\left\|AX\right\|_{F}^{2},\quad \text{s.t.}\quad X\in\text{St}(p,n), \tag{27}\]
where \(A\in\mathbb{R}^{N\times n}\) is a synthetically generated data matrix with \(N=15\,000\) being the number of samples each with dimension \(n=5000\). The columns of \(A\) are independently sampled from the normal distribution \(\mathcal{N}(0,UU^{\top}+\sigma I_{n})\), where \(\sigma=0.1\) and \(U\in\mathbb{R}^{n\times p}\) is sampled from the Stiefel manifold with the uniform Haar distribution.
We compare the landing stochastic gradient method with the classical Riemannian gradient descent and with the "regularized" method which minimizes \(f(X)+\lambda\mathcal{N}(X)\), where \(\lambda\) is now a regularization hyperparameter, using standard SGD. Figure 2 shows the convergence of the objective and the distance to the constraint against the computation time of the three methods using stochastic gradients with a batch size of \(128\) and a fixed stepsize, which decreases after \(30\) epochs. The training loss is computed as \(f(X_{k})-f(X_{*})\), where \(X_{*}\) is the matrix of \(p\) right singular vectors of \(A\). We see that in both cases of \(p=200\) and \(p=1000\) the landing is able to reach a lower objective value faster compared to the Riemannian gradient descent, however, at the cost of not being precisely on the constraint but with \(\mathcal{N}(X)\leq 10^{-6}\). The distance is further decreased after \(30\) epochs as the fixed stepsize of the iterations is decreased as well.
Euclidean gradient descent with \(\ell_{2}\) regularization performs poorly with both choices of regularizer \(\lambda\) ("Reg." in the figure). In the first case, when \(\lambda=10^{2}\) is too small, the distance remains large, and as a result, the training loss becomes negative since we are comparing against \(X_{*}\), which is on the constraint. In the second case of the large penalty term, when \(\lambda=10^{4}\), the iterates remain relatively close to the constraint, but the convergence rate is very slow. These experimental findings are in line with the theory explained in Section 3.4.

Figure 2: Experiments with online PCA.
In general, we see that the landing method outperforms Riemannian gradient descent in cases when the computational cost of the retraction is more expensive relative to computing the gradient. This occurs especially in cases when \(p\) is large or the batch size of the stochastic gradient is small, as can be seen also in the additional experiments for \(p=100\) and \(p=500\) shown in Figure 4 in Appendix A.
### Neural networks with orthogonality constraints
We further test the methods for training neural networks whose weights are constrained to the Stiefel manifold. Orthogonality of weights plays a prominent role in deep learning, for example in the recurrent neural networks to prevent the problem of vanishing/exploding gradients (Arjovsky et al., 2015), or in orthogonal convolutional neural networks that impose kernel orthogonality to improve the stability of models (Bansal et al., 2018; Wang et al., 2020).
We perform the test using two standard models VGG16 (Simonyan and Zisserman, 2014) and Resnet18 (He and Sun, 2016) while constraining the kernels of all convolution layers to be on the Stiefel manifold. We reshape the convolutional kernels to the size \(n_{\text{out}}\times n_{\text{in}}n_{\text{x}}n_{\text{y}}\), where \(n_{\text{in}},n_{\text{out}}\) is the number of input and output channels respectively and \(n_{\text{x}},n_{\text{y}}\) is the filter size. In the case when the reshaping results in a wide instead of a tall matrix, we impose the orthogonality on its transposition. We train the models using Riemannian gradient descent, Euclidean gradient descent with \(\ell_{2}\) regularization, and the landing method, with batch size of 128 samples for 150 epochs, and with a fixed step size that decreases as \(\eta=\eta/10\) every 50 epochs. We repeat each training 5 times for different random seed.
Figure 3 shows the convergence of the test accuracy and the sum of distances to the constraints against the computation time, with the light shaded areas showing minimum and maximum values of the 5 runs. The figure shows the landing is a strict improvement over the Euclidean gradient descent with the added \(\ell_{2}\) regularization, which, for the choice of \(\lambda=1\), achieves a good test accuracy, but at the cost of the poor distance to the constraint of \(10^{-3}\), and for the choice of \(\lambda=10^{3}\) converges to the solution that has similar distance as the landing of \(10^{-8}\), but has poor test accuracy around 55%. In comparison, training the models with the Riemannian gradient descent with QR retractions, achieves the lowest distance to the constraint, but also takes longer to reach the test accuracy of roughly 90%.
We also compared with the trivialization approach (Lezcano-Casado, 2019) using the Geotorch library, however this approach is not readily suitable for optimization over large Stiefel manifolds. See the experiments in Figure 5 in the Appendix A, which takes over 7 hours, i.e. approximately 14 times as long as the other methods, to reach the test accuracy of around 90% with VGG16.
## 5 Conclusion

We have considered optimization problems in which the decision variables take the form of a rectangular matrix constrained to be orthonormal. The iterative method is infeasible in the sense that orthogonality is not enforced at the iterates. We have obtained a computable bound on the step size ensuring that the next iterate stays in a safe region. This safe step size, along with the smooth merit function (16), has allowed for a streamlined complexity analysis in Section 3, both for the deterministic and stochastic cases. The various numerical experiments have illustrated the value of the proposed approach.
## Acknowledgments
The authors thank Gabriel Peyre for fruitful discussions. This work was supported by the Fonds de la Recherche Scientifique - FNRS and the Fonds Wetenschappelijk Onderzoek - Vlaanderen under EOS Project no 30468160, and by the Fonds de la Recherche Scientifique - FNRS under Grant no T.0001.23. Simon Vary is a beneficiary of the FSR Incoming Post-doctoral Fellowship.
Figure 3: Experiments with orthogonal convolutions. |
2310.16428 | Similarity-driven and Task-driven Models for Diversity of Opinion in
Crowdsourcing Markets | The recent boom in crowdsourcing has opened up a new avenue for utilizing
human intelligence in the realm of data analysis. This innovative approach
provides a powerful means for connecting online workers to tasks that cannot
effectively be done solely by machines or conducted by professional experts due
to cost constraints. Within the field of social science, four elements are
required to construct a sound crowd - Diversity of Opinion, Independence,
Decentralization and Aggregation. However, while the other three components
have already been investigated and implemented in existing crowdsourcing
platforms, 'Diversity of Opinion' has not been functionally enabled yet. From a
computational point of view, constructing a wise crowd necessitates
quantitatively modeling and taking diversity into account. There are usually
two paradigms in a crowdsourcing marketplace for worker selection: building a
crowd to wait for tasks to come and selecting workers for a given task. We
propose similarity-driven and task-driven models for both paradigms. Also, we
develop efficient and effective algorithms for recruiting a limited number of
workers with optimal diversity in both models. To validate our solutions, we
conduct extensive experiments using both synthetic datasets and real data sets. | Chen Jason Zhang, Yunrui Liu, Pengcheng Zeng, Ting Wu, Lei Chen, Pan Hui, Fei Hao | 2023-10-25T07:50:57Z | http://arxiv.org/abs/2310.16428v2 | # Similarity-driven and Task-driven Models for Diversity of Opinion in Crowdsourcing Markets
###### Abstract
The recent boom in crowdsourcing has opened up a new avenue for utilizing human intelligence in the realm of data analysis. This innovative approach provides a powerful means for connecting online workers to tasks that cannot effectively be done solely by machines or conducted by professional experts due to cost constraints. Within the field of social science, four elements are required to construct a sound crowd - Diversity of Opinion, Independence, Decentralization and Aggregation. However, while the other three components have already been investigated and implemented in existing crowdsourcing platforms, 'Diversity of Opinion' has not been functionally enabled yet.
From a computational point of view, constructing a wise crowd necessitates quantitatively modeling and taking diversity into account. There are usually two paradigms in a crowdsourcing marketplace for worker selection: building a crowd to wait for tasks to come and selecting workers for a given task. We propose similarity-driven and task-driven models for both paradigms. Also, we develop efficient and effective algorithms for recruiting a limited number of workers with optimal diversity in both models. To validate our solutions, we conduct extensive experiments using both synthetic datasets and real data sets.
_Keywords--_ Crowdsourcing; Diversity of Opinion
## 1 Introduction
Recently, with the emergence of crowdsourcing platforms, such as Amazon Mechanical Turk [3] and CrowdFlower [4], more and more applications are utilizing human intelligence in processing various tasks that are either too difficult to be solved only by computers alone or too expensive to employ experts to perform. For example, data gathering can be done implicitly, through crowdsourced sensing and on-line behaviour collection, or explicitly, by sending targeted information requests to the crowd. Given another example from an analytical perspective, human input can be used to
address computationally difficult tasks such as entity resolution [34], schema matching [35] and the like. In recent years, the higher-efficiency crowdsourcing models like CDB [40] and SPR [44] were proposed. They were able to further expand the scope and frequency of use of crowdsourcing.
Though humankind is intelligent, meanwhile, they are also erroneous and greedy, which makes the quality of crowdsourcing results quite questionable. Therefore, it is important to select the "right" workers to build a wise crowd to guarantee the quality. Then one crucial question to address is "What are the elements of a wise crowd?". Fortunately, this question has been thoroughly studied in the field of social science and many detailed answers have been given. One of the most recognized answers, from [31] with over 5,000 citations, points out that four elements are essential to form a wise crowd, which are:
* _Diversity of opinion_: Each person should have private information even if it's just an eccentric interpretation of the known facts.
* _Independence_: People's opinions aren't determined by the opinions of those around them.
* _Decentralization_: People are able to specialize and draw on local knowledge.
* _Aggregation_: Some mechanism exists for turning private judgements into a collective decision.
Therefore, in order to construct a wise crowd, we need to make sure that the constructed crowd satisfies the above four elements. From the perspective of crowdsourcing systems, _independence_ and _decentralization_ are easy to achieve, by providing a free and independent channel for each individual worker, that is, a means to enable each worker to answer questions based on personal specialism without being aware of other workers. Existing crowdsourcing platforms, such as AMT and CrowdFlower, work precisely in this way. Concerning _aggregation_, various mechanisms have been proposed already, such as majority voting [10], to achieve a target overall reliability. However, to the best of our knowledge, how to ensure the _diversity of opinion_ in constructing a wise crowd has not been studied from algorithmic perspectives before. Thus, in this paper, we address the algorithmic optimizations towards the _diversity of opinion_ for crowd construction.
### When Diversity Trumps Ability
The effect of diversity differs depending on the corresponding crowdsourced tasks, as pointed out in [23]. In particular, for _problem-solving tasks_, diversity is the essential factor affecting the performance of a crowd, and it is even much more important than the average ability of individuals. This phenomenon was discovered and verified in [24], and is referred to as the 'Diversity Trumps Ability Theorem', which makes the observation that diverse groups of problem solvers - groups of people with diverse tools - consistently outperformed groups of the best and the brightest. People with high abilities are often trained in the same institutions, tend to possess similar perspectives and apply similar problem-solving techniques, or heuristics. Many problems do not succumb to a single heuristic, or even a set of similar ones. This is why a diverse crowd functions better than a few experts. Intuitively, if two groups are formed, one random (and therefore diverse) and one consisting of the best individual performers, the first group almost always does better.
This theorem ends up indirectly providing convincing arguments as to why - under certain conditions - citizens may outperform elected officials and experts [23].
### Two Basic Models for Diversity of Opinion
From a computational perspective, in order to build a wise crowd, we are interested in quantitatively modeling the diversity, and take it into consideration for constructing a crowd. In a crowdsourcing marketplace, we usually encounter two basic paradigms for worker selection: building a crowd that will wait for tasks to come or selecting workers for a given task. We propose models for both of the paradigms.
#### 1.2.1 Similarity-driven Model (S-Model)
When there is no explicit query, we resort to the pairwise similarity of workers to model the diversity of opinion. In particular, we model the similarity of a pair of workers as a similarity score value (high value indicates high similarity), and use the negative value of average pairwise similarity to quantify the overall diversity. Intuitively, the lower the average similarity, the higher the diversity.
S-Model can be applied to crowdsourcing scenarios which do not have explicit queries when constructing a crowd and require quick responses when a query arrives. For example, diners may comment on a restaurant through Foursquare [1], whereas iPhone users may post ratings of the applications that they have downloaded from the Apple Store. Such data is highly valuable for product creators (usually a company): ratings and reviews have a significant impact on sales, and companies can analyze rating and review trends to adjust overall marketing strategies, improve customer service, fine-tune merchandising, and so on. However, in current web-based commenting systems, product creators must passively wait for reviewers to visit the commenting systems to provide their comments and ratings. Hence, product creators may have to wait a long time to receive a satisfactory number of reviews. These drawbacks with existing commenting systems motivate the quest for effective methods to actively invite a group of reviewers prior to the arrival of the query.
#### 1.2.2 Task-driven Model (T-Model)
Another common scenario is that a requester has a specific query, and enlists workers to join the crowd to answer it. In such a paradigm, we are able to analyze the diversity of workers according
\begin{table}
\begin{tabular}{l l} \hline Notation & Description \\ \hline \(w\) (\(w_{i}\)) & a crowdsourcing worker \\ \(Sim(w_{i},w_{j})\) & the pairwise similarity between \(w_{i}\) and \(w_{j}\) \\ \(Div(C)\) & the diversity of a crowd \(C\) of workers \\ \(\theta_{1}\) (\(\theta_{0}\)) & the demanded number of workers to be enlisted with positive (negative) opinions \\ \(t_{i}\) & the opinion of worker \(w_{i}\) \\ \(Pr(t=1)\) (\(Pr(t=0)\)) & the probability of a worker holding a positive (negative) opinion about the given task \\ \(N\) & the set of candidate workers to be selected \\ \(k\) & the number of workers to be enlisted \\ \(S\) & the set of workers to be selected, \(|S|=k\) \\ \(\theta_{2}\) & \(\theta_{2}=k-\theta_{0}\) \\ \(\tau(S)\) & the probability that at least \(\theta_{1}\) (\(\theta_{0}\)) workers in \(S\) hold positive (negative) opinions \\ \(T\) & \(T=\sum_{t\in S}t\), following a Poisson Binomial distribution \\ \hline \end{tabular}
\end{table}
Table 1: MEANINGS OF SYMBOLS USED
to the content of the query. Regarding the given query, we model the opinion of each worker as a probability ranging from 0 to 1, which indicates opinions from negative to positive, respectively. To guarantee the desirable diversity of opinion, we allow a user to set up the demand on the number of workers with positive (negative) opinions. Therefore, the optimization issue is to maximize the probability that the user's demand is satisfied.
The T-model captures the essence of diversity for a wide class of crowdsourcing scenarios. A typical example application, which was initiated and is currently operated by the US government [2], is an _online petitioning system_ enabling participants to propose, discuss and sign political petitions. To determine whether a petition is significant enough to get a response from the White House, the current mechanism is simply a threshold of the number of signatures (currently 100,000), indicating the number of people who support the petition. However, to analyze a particular petition fairly, it would be more constructive if opinions from both the proposition and the opposition are taken into consideration. So guided by the T-model, the government may actively collect online comments on both sides of the petition, which is more constructive for further governmental processing.
### Challenges and Contributions
As _diversity_ is a loosely defined concept, the first main challenge is quantitatively measuring the diversity among candidate workers. Another main challenge to be addressed is to design effective and efficient algorithms for worker selection with the consideration of the diversity of opinions. To address these two challenges, the conference paper [53] proposed effective measures to estimate the diversity of the crowd under two common scenarios, S-Model and T-Model, respectively, and propose effective approximation algorithms for crowd selection. To summarize, the conference paper has made the following contributions:
* 1. In Section 2, the conference paper studied the crowd selection problem under the S-model, and proposed an efficient (1 + \(\epsilon\)) approximation algorithm for finding a crowd with the highest diversity.
* 2. In Section 3, the conference paper studied the crowd selection problem under the T-model, proved its NP-hardness, and provided a solution based on distribution approximations.
* 3. In Section 4, the conference paper presented our experimental evaluation of the performances of the T-model and S-model, and conducted a case study to exhibit the goodness of crowds selected by our proposed models.
* 4. In Sections 5 and 6, the conference paper discussed related works and concluded the paper.
Compared with the conference version, this paper has the following new contributions:
* 1. In our previous conference paper, we proposed two methods to optimize the process of worker selection under the task-driven model towards the diversity of opinion in crowdsourcing markets. These methods include Poisson approximation and binomial approximation. Neither of them is very accurate or fast, since they approximate the objective function twice. In this paper, we introduced two new methods that perform much better. The first one is normal approximation with the simulated annealing algorithm (see Section 3.4), which only approximates the objective function once, and is much faster than the previous two methods.
The second one is the method DFT-CF with the simulated annealing algorithm (see Section 3.5), which is an exact method and is more accurate than the previous three methods.
* 2. We conducted a set of experiments on the T-model with synthetic data for the two newly introduced methods (see Section 4.2.1). The results demonstrate that these two new methods have better performance than the two methods proposed in our previous paper (see Fig.7 and Fig.8).
* 3. We also analyzed the real data for the two newly introduced methods (see Section 4.2.2). The two new methods demonstrate very stable and outstanding performance (see Fig.9). The detailed explanations for this phenomenon are given at the end of Section 4.2.
## 2 Similarity-Driven Model
In this section, we formally introduce the model, and propose efficient algorithms to enlist workers.
### Model and Definitions
We first need to design a computational model to depict the crowd diversity for the worker selection problem. Under the similarity-driven model, each pair of workers is associated with a value which describes their pairwise similarity. We aim to select \(k\) workers out of \(n\) candidates, such that the average pairwise distance is maximized (i.e. the average similarity is minimized).
We formally present the model with the following definitions.
**Definition 1** (Pairwise Similarity): _For a given set of potential crowdsourcing workers \(W\), the diversity of any two workers is computed by a pairwise similarity function \(Sim(w_{i},w_{j})\) where \(w_{i},w_{j}\in W\)._
**Definition 2** (Crowd Diversity): _Given a crowd of workers \(C=\{w_{1},w_{2},...,w_{|C|}\}\), a pairwise similarity function \(Sim(.)\), the diversity of the crowd is defined as the negative value averaged pairwise similarity, that is,_
\[Div(C)=\frac{-\sum_{w_{i},w_{j}\in C\wedge i\neq j}Sim(w_{i},w_{j})}{|C|} \tag{1}\]
**Remark 1**: _For the sake of generality, we consider \(Sim(.)\) here as an abstract function, which measures the similarity between two workers. In the appendix, we list a number of popular methods to quantify \(Sim(.)\). Aside from these measurements, we can also plug in any reasonable diversity measurements. In our model, users may also design appropriate similarity functions depending on the data structure and application requirements._
Essentially, we are interested in finding a subset of candidate workers with the maximal diversity, using the cardinality constraint. We formally define this optimization problem as follows.
**Problem 1** (Diversity Maximization): _For a given set of potential crowdsourcing workers \(W\), each worker \(w_{i}\in W\), an integer \(k\), we aim to find a subset \(C\subseteq W\) such that \(|C|=k\) and \(Div(C)\) is maximized, that is,_
\[\operatorname*{arg\,max}_{C\subseteq W,|C|=k}Div(C) \tag{2}\]
**Example 1**: _Fig.1 illustrates an example with 6 workers and their pairwise similarity values. We aim to select three of them to maximize the crowd diversity. All the possible selections, together with their associated crowd diversities, can be enumerated._
_Clearly, the optimal selection is \(<A,D,E>\), with the highest diversity -0.433._
### NP-hardness
Unfortunately, the diversity maximization problem under S-Model is NP-hard, as stated in the following theorem.
**Theorem 1**: _The diversity maximization problem is NP-hard._
**Proof 1**: _First, we reduce the diversity maximization problem to a subset version: relaxing the constraint from \(|S|=k\) to \(|S|\leq k\). The reduction is correct because, if a polynomial algorithm A solves the problem with \(|S|=k\), then we can solve the subset version by calling A k times, setting \(|S|=1,2,...,k\)._

_Next, we construct a special case of the diversity maximization problem, namely the crowd selection problem, and establish the NP-hardness of the diversity maximization problem by proving that the crowd selection problem is NP-hard. With a trivial reduction, the crowd selection problem becomes an nth-order Knapsack Problem (nOKP) according to Formula 6. Following the proof by H. Kellerer et al. in [19], we prove the hardness of nOKP._

Figure 1: Find 3 workers with highest diversity
_An nth-order Knapsack Problem(nOKP) is a Knapsack problem whose objective function has the form as follows:_
\[optimize\sum_{i_{1}\in n}\sum_{i_{2}\in n}\cdots\sum_{i_{n}\in n}V[i_{1},i_{2}, \cdots,i_{n}]\cdot x_{1}x_{2}\cdot\cdots x_{n} \tag{3}\]
_where \(V[i_{1},i_{2},\cdots,i_{n}]\) is an n-dimensional vector indicating the profit achieved if objects \([i_{1},i_{2},\cdots,i_{n}]\) are concurrently selected. Given an instance of a traditional KP, we can construct an nOKP instance by defining the profit n-dimensional vector as \(V[i,i,\cdots,i]=p_{i}\) and \(V[otherwise]=0\) for all \(i\), where \(p_{i}\) is the profit in a traditional KP. The weight vector and objective value remain the same. \(\Box\)_
### Approximation Algorithm
In the previous section, we show that the diversity maximization problem is NP-hard. Therefore, we are interested in developing fast approximation algorithms.
Now we revisit the optimization function defined in Definition 2: \(Div(C)=\frac{-\sum_{w_{i}.w_{j}\in C\wedge i\neq j}Sim(w_{i},w_{j})}{|C|}\), in which \(|C|\) is a fixed value, indicating the number of workers to be selected. Hence, the goal is actually to maximize \(-\sum_{w_{i}.w_{j}\in C\wedge i\neq j}Sim(w_{i},w_{j})\), which we use \(Sum(C)\) to denote. As a result, we have
\[Sum(C)=-\sum_{w_{i}.w_{j}\in C\wedge i\neq j}Sim(w_{i},w_{j}) \tag{4}\]
Then, the optimization is equivalently transformed as
\[\operatorname*{arg\,max}_{C\subseteq W,|C|=k}Sum(C) \tag{5}\]
Furthermore, we discover that the optimization function \(Sum(.)\) is a submodular function of the set of candidate workers \(W\).
A function \(f\)_is_ submodular if
\[f(A\cup\{a_{1}\})+f(A\cup\{a_{2}\})\geq f(A\cup\{a_{1},a_{2}\})+f(A) \tag{6}\]
for any \(A\) and \(a_{1},a_{2}\notin A\). Submodularity implies the property of diminishing marginal returns. Intuitively, in our problem, this says that adding a new worker would lead to an enhanced improvement if there were less workers already in the crowd. The problem of selecting a k-element subset maximizing a sub-modular function can be approximated with a performance guarantee of \((1-1/e)\), by iteratively selecting the best element given the ones selected so far.
With Theorem 2, we indicate that function \(Sum(.)\) is submodular.
**Theorem 2**: _For an arbitrary instance of the diversity maximization problem, the resulting optimization function \(Sum(.)\) is submodular._
**Proof 2**: _In order to establish this result, we need to prove that \(\forall C,w_{0},w_{1}\), we have_
\[Sum(C\cup\{w_{0}\})+Sum(C\cup\{w_{1}\})\geq Sum(C\cup\{w_{0},w_{1}\})+Sum(C) \tag{7}\]
_where \(C\subseteq W\), \(w_{0},w_{1}\in W-C\). By Definition 2, we express the left-hand-side and right-hand-side as follows_
\[\begin{split} LHS=&-\sum_{w_{i},w_{j}\in C\wedge i \neq j}Sim(w_{i},w_{j})-\sum_{w\in C}Sim(w,w_{0})\\ &-\sum_{w_{i},w_{j}\in C\wedge i\neq j}Sim(w_{i},w_{j})-\sum_{w \in C}Sim(w,w_{1})\\ RHS=&-\sum_{w_{i},w_{j}\in C\wedge i\neq j}Sim(w_{i},w_{j})-\sum_{w\in C}Sim(w,w_{0})\\ &-\sum_{w_{i},w_{j}\in C\wedge i\neq j}Sim(w_{i},w_{j})-\sum_{w \in C}Sim(w,w_{1})\quad-Sim(w_{0},w_{1})\end{split} \tag{9}\]
_Therefore, we have:_
\[LHS-RHS=Sim(w_{0},w_{1})\geq 0 \tag{10}\]
_which completes the proof._ \(\Box\)
Facilitated by Theorem 2, our first main result is that the optimal solution for _diversity maximization_ can be efficiently approximated within a factor of \((1-1/e-\epsilon)\)[7]. Here \(e\) is the base of the natural logarithm and \(\epsilon\) is an arbitrarily small positive real number. Thus, this is a performance guarantee of roughly \(1-1/e\approx 63\%\).
Algorithm 1 lists the detailed steps of this approximation algorithm. This algorithm, which achieves the performance guarantee, is a natural greedy hill-climbing strategy related to the approach considered in [7]. Thus the main content of this result is the analysis framework needed for obtaining a provable performance guarantee, and the fairly surprising fact that hill-climbing is always within a factor of at least \(63\%\) of the optimal for this problem.
```
Input : a set of candidate workers \(W\), an integer \(k\)
Output : a crowd \(C\subseteq W\) with \(|C|=k\) such that \(Div(C)\) is maximized
\(C\leftarrow\{w_{0},w_{1}\}\);
while \(|C|<k\) do
    \(w_{x}\leftarrow\operatorname*{arg\,max}_{w_{x}\in W\setminus C}Div(C\cup\{w_{x}\})\);
    \(C\gets C\cup\{w_{x}\}\);
end while
return \(C\);
```
**Algorithm 1** Diversity Maximization
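A minimal Python implementation of this greedy strategy could look as follows (function names are ours); `sim` is assumed to be a symmetric matrix holding \(Sim(w_{i},w_{j})\), and each unordered pair is counted once in \(Div(C)\).

```python
def crowd_diversity(sim, C):
    """Div(C) of Definition 2: negative averaged pairwise similarity."""
    total = sum(sim[C[a]][C[b]] for a in range(len(C)) for b in range(a + 1, len(C)))
    return -total / len(C)

def greedy_diverse_crowd(sim, k):
    """Greedy hill-climbing of Algorithm 1."""
    n = len(sim)
    C = []
    while len(C) < k:
        remaining = [w for w in range(n) if w not in C]
        best = max(remaining, key=lambda w: crowd_diversity(sim, C + [w]))
        C.append(best)
    return C
```

For instance, `greedy_diverse_crowd(sim, 3)` returns the indices of three workers whose selection the greedy rule deems most diverse.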
## 3 Task-Driven Model
Under the task-driven model, each worker is associated with a probability, describing his/her opinion about the given task. We aim to select \(k\) workers out of \(n\) candidates, such that the numbers of positive and negative workers satisfy a user's demand.
We formally define the optimization problem and related important notations in this section.
**Definition 3** (Worker Opinion): _A crowdsourcing worker \(w_{i}\) is associated with an opinion \(t_{i}\) about the given task, which is a Bernoulli random variable. We denote the probability \(Pr(t_{i}=1)=1-Pr(t_{i}=0)\), where \(Pr(t_{i}=1)\) (\(Pr(t_{i}=0)\)) is the probability of \(w_{i}\) having a positive (negative) opinion about the task. We assume that the opinions of all the workers are independent._
There are two possible ways to obtain the probabilities for the workers. Firstly, when a crowdsourcing platform is implemented on a public online community (e.g. social networks, online forums), we can analyze the historical data and profile information of a given user. Any of the current techniques can be used as a plug-in for our system to detect relevance of a worker to a subject of interest. Secondly, before selecting a worker to participate in a crowd, we may simply ask individual workers for their opinions towards the given subject. On common crowdsourcing platforms, such questions can be designed as so-called _Qualification Tests_, which are prerequisites for workers to answer any questions thereafter.
### Crowd Selection with T-Model
Now we illustrate how to optimize the process of worker selection under T-model. Before providing the formal definition, we introduce the rationale of the optimization. Since each worker's opinion is probabilistic, the total number of workers with positive (negative) opinions is also a probabilistic distribution. We assume that we have the user's demand of the number of workers with positive (negative) opinions, and the optimization is to select the best subset of workers such that the user's demand is satisfied.
As follows, we define the optimization problem under T-model.
**Definition 4** (K-Best Workers Selection): _Given a set of \(|N|\) workers \(w_{1},w_{2},\cdots,w_{|N|}\) with opinions \(N=\{t_{1},t_{2},\cdots,t_{|N|}\}\). Let \(\theta_{1}\) and \(\theta_{0}\) be the user's demand on the numbers of workers being supportive or opposing with respect to the given task, respectively. We aim to select \(k\) workers, so that the probability of the user's demand being fulfilled is maximized. To ensure this probability is positive for any \(k\geq 1\), we assume \(\theta_{0}+\theta_{1}\leq k\). Formally, let \(S\) be a subset of \(N\), and let \(\tau\) be the probability that at least \(\theta_{1}\) (\(\theta_{0}\)) workers in \(S\) support (oppose) the given task,_
\[\tau(S)=Pr\{\sum_{t\in S}t\geq\theta_{1}\wedge\sum_{t\in S}(1-t)\geq\theta_{0}\} \tag{11}\]
_we have the optimization problem as follows:_
\[S:=\operatorname*{arg\,max}_{|S|=k}\tau(S) \tag{12}\]
By taking a closer look at Formula 11, we have \(\sum_{t\in S}t+\sum_{t\in S}(1-t)=k\). For the sake of presentation, we denote \(T=\sum_{t\in S}t,\theta_{2}=k-\theta_{0}\). Then, Formula 11 can be rewritten as
\[\tau(S)=Pr(\theta_{1}\leq T\leq\theta_{2}) =\sum_{i=\theta_{1}}^{\theta_{2}}Pr(T=i) \tag{13}\]
Since each worker's opinion can be treated as a random variable following a Bernoulli distribution, \(T\) follows a standard Poisson Binomial distribution (PBD). Therefore, by adopting the probability mass function
(pmf) of PBD, we have
\[\tau(S)=\sum_{i=\theta_{1}}^{\theta_{2}}\sum_{A\in F_{i}}\prod_{t_{\alpha}\in A} Pr(t_{\alpha}=1)\prod_{t_{\beta}\in A^{c}}Pr(t_{\beta}=0) \tag{14}\]
where \(F_{i}\) is the set of all size-\(i\) subsets of \(S\), and \(A^{c}=S\setminus A\).
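For concreteness, a minimal Python sketch is given below (the function and variable names are ours, for illustration only). Rather than enumerating all subsets as in Formula 14, it evaluates \(\tau(S)\) for a given subset exactly through the standard \(O(k^{2})\) dynamic-programming recurrence for the Poisson Binomial pmf.

```python
def tau(probs, theta1, theta2):
    """Exact Pr(theta1 <= T <= theta2) where T is Poisson-binomially
    distributed with success probabilities `probs` (the selected subset S)."""
    pmf = [1.0]  # pmf[j] = Pr(T = j) over the workers processed so far
    for p in probs:
        new = [0.0] * (len(pmf) + 1)
        for j, q in enumerate(pmf):
            new[j] += q * (1.0 - p)   # this worker holds a negative opinion
            new[j + 1] += q * p       # this worker holds a positive opinion
        pmf = new
    return sum(pmf[theta1:theta2 + 1])

# a small hypothetical subset of four workers with theta1 = 1, theta2 = 3
print(tau([0.2, 0.4, 0.6, 0.9], 1, 3))
```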
**Example 2**: _We illustrate the optimization problem with a concrete example shown in Fig.2. Assume we have a set of six candidate workers \(A\)-\(F\), with positive-opinion probabilities 0.2, 0.3, 0.4, 0.6, 0.8 and 0.9, respectively. We further assume that a user wants to select 4 of them, such that at least one has a positive opinion and at least one has a negative opinion. Hence, we have \(\theta_{1}=1,\theta_{0}=1,k=4\), and thus \(\theta_{2}=4-1=3\). There are \(C_{6}^{4}=15\) possible combinations in total, each of which induces a PBD. We present all the possible size-4 combinations and compute \(\tau(S)\) for each of them. Fig.3 illustrates the PBD of the number of workers with positive opinions, and indicates the range of probabilities we aim to maximize._
_One can see that \(\langle A,C,D,F\rangle\) is the optimal choice, since it maximizes the probability that the user's demand is satisfied._

\begin{tabular}{|c|c|c|c|} \hline Crowd & \(\tau(S)\) & Crowd & \(\tau(S)\) \\ \hline A,B,C,D & 0.7616 & A,B,C,E & 0.7272 \\ \hline A,B,C,F & 0.8992 & A,B,D,E & 0.4832 \\ \hline A,B,D,F & 0.7152 & A,B,E,F & 0.6784 \\ \hline A,C,D,E & 0.7884 & **A,C,D,F** & **0.9224** \\ \hline A,C,E,F & 0.9108 & A,D,E,F & 0.7448 \\ \hline B,C,D,E & 0.6188 & B,C,D,F & 0.8568 \\ \hline B,C,E,F & 0.8356 & B,D,E,F & 0.5736 \\ \hline C,D,E,F & 0.8732 & & \\ \hline \end{tabular}
Figure 2: Find 4 workers including 1 supporter and 1 objector
### Method with Poisson Approximation
To select the exact optimal combination of \(k\) workers, we have to enumerate all \(O(n^{k})\) PBDs, and output the one with the highest \(\tau(S)\). However, this naive method leads to very high computational cost. In this subsection, we approximate each PBD by a Poisson distribution, and conduct the selection among the approximated Poisson distributions. By accepting the bounded imprecision introduced by the approximation, we significantly improve the efficiency.
A Poisson binomial distribution can be well approximated by a Poisson distribution. Then, we consider \(T\) approximately following a Poisson distribution, with parameter \(\lambda=\sum_{t\in S}Pr(t=1)\).
Then we have
\[Pr(\theta_{1}\leq T\leq\theta_{2})\approx F_{P}(\theta_{2},\lambda)-F_{P}( \theta_{1},\lambda) \tag{15}\]
where \(F_{P}\) is the cumulative mass function (CMF) of the Poisson distribution. As a result, we find \(S^{\prime}\) to maximize
\[G_{P}(\lambda):=F_{P}(\theta_{2},\lambda)-F_{P}(\theta_{1},\lambda) \tag{16}\]
and return \(S^{\prime}\) as the approximate answer. In the remainder of this subsection, we first analyze the monotonicity of \(G_{P}(\lambda)\), and then provide two algorithmic solutions.
#### 3.2.1 Monotonicity Analysis
In the following, we first analyze the monotonicity of \(G_{P}(\lambda)\). We discover that \(G_{P}(\lambda)\) has a nice monotonic property, which is algorithmically useful. This discovery is concluded with the following theorem.
**Theorem 3**: _Considering \(\lambda\) as a continuous independent variable with range \((0,k)\), \(G_{P}(\lambda)\) monotonically increases on \([0,(\frac{\theta_{2}!}{\theta_{1}!})^{\frac{1}{\theta_{2}-\theta_{1}}}]\) and decreases on \([(\frac{\theta_{2}!}{\theta_{1}!})^{\frac{1}{\theta_{2}-\theta_{1}}},k]\)._
**Proof 3**: _First, we expand \(F_{P}\), the CMF of Poisson distribution, and rewrite \(G_{P}(\lambda)\) as_
\[G_{P}(\lambda)=e^{-\lambda}\sum_{i=0}^{\theta_{2}}\frac{\lambda^{i}}{i!}-e^{- \lambda}\sum_{j=0}^{\theta_{1}}\frac{\lambda^{j}}{j!}\ \ =\sum_{i=\theta_{1}+1}^{\theta_{2}}\frac{e^{-\lambda}\lambda^{i}}{i!} \tag{17}\]
Figure 3: The Poisson-Binomial Distribution
_Then, we take the partial derivative of \(G_{P}(\lambda)\) w.r.t \(\lambda\):_
\[\begin{split}\frac{\partial G_{P}(\lambda)}{\partial\lambda}&=\sum_{i=\theta_{1}+1}^{\theta_{2}}\frac{\partial}{\partial\lambda}\frac{e^{-\lambda}\lambda^{i}}{i!}=\sum_{i=\theta_{1}+1}^{\theta_{2}}\frac{e^{-\lambda}(i\lambda^{i-1}-\lambda^{i})}{i!}\\ &=e^{-\lambda}\sum_{i=\theta_{1}+1}^{\theta_{2}}\frac{i\lambda^{i-1}-\lambda^{i}}{i!}=e^{-\lambda}\sum_{i=\theta_{1}+1}^{\theta_{2}}\left\{\frac{\lambda^{i-1}}{(i-1)!}-\frac{\lambda^{i}}{i!}\right\}\\ &=e^{-\lambda}\left\{\sum_{i=\theta_{1}+1}^{\theta_{2}}\frac{\lambda^{i-1}}{(i-1)!}-\sum_{i=\theta_{1}+1}^{\theta_{2}}\frac{\lambda^{i}}{i!}\right\}=e^{-\lambda}\left\{\frac{\lambda^{\theta_{1}}}{\theta_{1}!}-\frac{\lambda^{\theta_{2}}}{\theta_{2}!}\right\}\\ &=e^{-\lambda}\lambda^{\theta_{1}}\left\{\frac{1}{\theta_{1}!}-\frac{\lambda^{\theta_{2}-\theta_{1}}}{\theta_{2}!}\right\}\end{split} \tag{18}\]
_To analyze the monotonicity of \(G_{P}(\lambda)\), we solve the inequality \(\frac{\partial G_{P}(\lambda)}{\partial\lambda}>0\) for \(\lambda\). Note that, in eq. 18, we have \(e^{-\lambda}\lambda^{\theta_{1}}>0\), and \(\theta_{2}>\theta_{1}\), so_
\[\frac{\partial G_{P}(\lambda)}{\partial\lambda}=e^{-\lambda}\lambda^{\theta_{ 1}}\{\frac{1}{\theta_{1}!}-\frac{\lambda^{\theta_{2}-\theta_{1}}}{\theta_{2}!} \}>0\Leftrightarrow\lambda^{\theta_{2}-\theta_{1}}<\frac{\theta_{2}!}{\theta _{1}!}\Leftrightarrow\lambda<(\frac{\theta_{2}!}{\theta_{1}!})^{\frac{1}{ \theta_{2}-\theta_{1}}} \tag{19}\]
_Similarly, we have \(\frac{\partial G_{P}(\lambda)}{\partial\lambda}<0\Leftrightarrow\lambda>( \frac{\theta_{2}!}{\theta_{1}!})^{\frac{1}{\theta_{2}-\theta_{1}}}\), which completes the proof. _
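As a numerical sanity check of Theorem 3 (assuming SciPy is available; the thresholds below are arbitrary), the following sketch evaluates \(G_{P}(\lambda)\) on a grid and compares the empirical maximizer with \((\frac{\theta_{2}!}{\theta_{1}!})^{\frac{1}{\theta_{2}-\theta_{1}}}\).

```python
import numpy as np
from math import factorial
from scipy.stats import poisson

theta1, theta2 = 3, 7                     # arbitrary demand thresholds for the check
lam = np.linspace(1e-3, 20.0, 20000)
G_P = poisson.cdf(theta2, lam) - poisson.cdf(theta1, lam)

lam_star = lam[np.argmax(G_P)]
omega_P = (factorial(theta2) / factorial(theta1)) ** (1.0 / (theta2 - theta1))
print(lam_star, omega_P)                  # the two values should nearly coincide
```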
#### 3.2.2 Transformation to Exact k-item Knapsack Problem (E-kKP)
Based on the discovered monotonicity property, we show that maximizing \(G_{P}(\lambda)\) can be reduced to the classical Exact k-item Knapsack Problem (E-kKP), as shown by the following theorem.
**Theorem 4**: _By considering each PBD approximately as a Poisson distribution, the k-best workers selection problem can be solved by any algorithm for the Exact k-item Knapsack Problem (E-kKP)._
**Proof 4**: _Facilitated with theorem 3, our optimization is revised to select \(S\) such that \(\lambda=\sum_{t\in S}Pr(t=1)\) approaches \((\frac{\theta_{2}!}{\theta_{1}!})^{\frac{1}{\theta_{2}-\theta_{1}}}\), which is a constant number. Furthermore, we have \(\lambda=\sum_{t\in S}Pr(t=1)\), then by defining_
\[\Omega_{P}:=(\frac{\theta_{2}!}{\theta_{1}!})^{\frac{1}{\theta_{2}-\theta_{1}}} \tag{20}\]
_our optimization is further revised as selecting \(S\) such that \(\sum_{t\in S}Pr(t=1)\) approaches \(\Omega_{P}\). Despite having the nice property of monotonicity, \(G_{P}(\lambda)\) may not be symmetric, and \(\lambda=\sum_{t\in S}Pr(t=1)\) is a discrete variable. This indicates that we need to find \(\lambda_{l}\) and \(\lambda_{r}\), which achieve the maxima of \(G_{P}\) on \([0,(\frac{\theta_{2}!}{\theta_{1}!})^{\frac{1}{\theta_{2}-\theta_{1}}}]\) and \([(\frac{\theta_{2}!}{\theta_{1}!})^{\frac{1}{\theta_{2}-\theta_{1}}},k]\), respectively. Then we choose between them by comparing \(G_{P}(\lambda_{l})\) and \(G_{P}(\lambda_{r})\). Consequently, we aim to find two size-k subsets \(S_{l}\) and \(S_{r}\) of the given \(N\), such that \(\sum_{t\in S_{l}}Pr(t=1)\) is the largest sum no larger than \(\Omega_{P}\), and \(\sum_{t\in S_{r}}Pr(t=1)\) is the smallest sum no smaller than \(\Omega_{P}\). Algorithmically, finding \(S_{r}\) is the same as finding \(S_{l}\): finding \(S_{r}\) is equivalent to finding \(N-S_{r}\), of size \(|N|-k\), such that \(\sum_{t\in N-S_{r}}Pr(t=1)\) is the largest sum no larger than \(\sum_{t\in N}Pr(t=1)-\Omega_{P}\). Therefore, the remaining optimization problem is: find \(S_{l}\), a size-k subset of \(N\), maximizing the sum of values in \(S_{l}\) without exceeding \(\Omega_{P}\). This is a typical E-kKP problem._
It is known that E-kKP can be solved by
(1) a backtracking approach with \(O(|N|^{k}/k!)\) time;
(2) dynamic programming with \(O(\gamma|N|)\);
(3) 1/2-approximation algorithm by linear programming with \(O(|N|)\).
These three algorithms are proposed in [11]. For showing how to adopt these algorithms, we only demonstrate (1), that is, the backtracking algorithm with Algorithm 2. The other two algorithms are analogous.
With Algorithm 2, we find \(S_{l}\) and \(S_{r}\) by \(Bt(k,\Omega_{P},N)\) and \(N-Bt(|N|-k,\sum_{t\in N}Pr(t=1)-\Omega_{P},N)\), respectively. Noting \(\lambda_{l}=\sum_{t\in S_{l}}Pr(t=1)\) and \(\lambda_{r}=\sum_{t\in S_{r}}Pr(t=1)\), we set the output \(S^{\prime}=S_{l}\) as the final result if \(G_{P}(\lambda_{l})>G_{P}(\lambda_{r})\); otherwise \(S^{\prime}=S_{r}\) is returned.
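To make the use of the backtracking routine concrete, a simplified Python sketch of a \(Bt(k,\Omega,N)\)-style procedure is shown below. It is our own illustration of the exact k-item knapsack search with the pruning condition discussed later in Section 4.2.2, not a transcription of Algorithm 2.

```python
def bt(k, omega, probs):
    """Illustrative backtracking for the exact k-item knapsack step:
    choose k indices whose probability sum is as large as possible
    without exceeding omega. Exhaustive with pruning; O(C(|N|, k)) worst case."""
    best_sum, best_subset = -1.0, None

    def rec(start, chosen, total):
        nonlocal best_sum, best_subset
        if total > omega:                      # prune: budget exceeded
            return
        if len(chosen) == k:
            if total > best_sum:
                best_sum, best_subset = total, list(chosen)
            return
        for i in range(start, len(probs)):
            chosen.append(i)
            rec(i + 1, chosen, total + probs[i])
            chosen.pop()

    rec(0, [], 0.0)
    return best_subset

# S_l is obtained as bt(k, Omega_P, probs); S_r via the complement, as described above.
```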
### Method with Binomial Approximation
It is known that Binomial approximation is also an effective method to deal with the high complexity of the Poisson Binomial distribution. Similar to the Poisson approximation, we have
\[Pr(\theta_{1}\leq T\leq\theta_{2})\approx F_{B}(\theta_{2};n,p)-F_{B}(\theta _{1};n,p) \tag{21}\]
where \(F_{B}\) is the CMF of Binomial Distribution with parameter \(n=k\) and \(p=\frac{\sum_{t\in S}Pr(t=1)}{k}\). Then, the optimization is to maximize:
\[G_{B}(P):=F_{B}(\theta_{2};n,p)-F_{B}(\theta_{1};n,p) \tag{22}\]
Note that \(n\) is a fixed parameter since \(k\) is a constant in the K-best workers selection problem. Therefore, we can only adjust \(p\) through different selections of \(S\). Analogous to the Poisson approximation in Section 3.2, we first analyze the monotonicity, and then discuss the algorithm.
**Monotonicity Analysis**:
With theorem 5, we show that \(G_{B}(p)\) also has a useful monotonic feature, which is similar to the Poisson approximation.
**Theorem 5**: _Considering \(p\) as a continuous independent variable with range \((0,1)\), \(G_{B}(p)\) monotonically increases on \([0,\frac{1}{1+(\frac{(n-\theta_{2})C_{n}^{\theta_{2}}}{(n-\theta_{1})C_{n}^{\theta_{1}}})^{\frac{1}{\theta_{2}-\theta_{1}}}}]\) and decreases on \([\frac{1}{1+(\frac{(n-\theta_{2})C_{n}^{\theta_{2}}}{(n-\theta_{1})C_{n}^{\theta_{1}}})^{\frac{1}{\theta_{2}-\theta_{1}}}},1]\)._
**Proof 5**: _The CMF of a Binomial distribution, \(F_{B}\), can be represented in terms of the regularized incomplete beta function:_
\[F_{B}(\theta;n,p)=(n-\theta)C_{n}^{\theta}\int_{0}^{1-p}t^{n-\theta-1}(1-t)^{ \theta}dt \tag{23}\]
_Facilitated with Formula 23, we compute the partial derivative of \(G_{B}(p)\) w.r.t \(p\):_
\[\begin{split}\frac{\partial G_{B}(p)}{\partial p}& =(n-\theta_{2})C_{n}^{\theta_{2}}\frac{\partial\int_{0}^{1-p}t^{n- \theta_{2}-1}(1-t)^{\theta_{2}}dt}{\partial p}-(n-\theta_{1})C_{n}^{\theta_{1} }\frac{\partial\int_{0}^{1-p}t^{n-\theta_{1}-1}(1-t)^{\theta_{1}}dt}{\partial p }\\ &=(n-\theta_{2})C_{n}^{\theta_{2}}\{-(1-p)^{n-\theta_{2}-1}p^{ \theta_{2}}\}-(n-\theta_{1})C_{n}^{\theta_{1}}\{-(1-p)^{n-\theta_{1}-1}p^{ \theta_{1}}\}\\ &=p^{\theta_{1}}(1-p)^{n-\theta_{2}-1}\{(n-\theta_{1})C_{n}^{ \theta_{1}}(1-p)^{\theta_{2}-\theta_{1}}-(n-\theta_{2})C_{n}^{\theta_{2}}p^{ \theta_{2}-\theta_{1}}\}\end{split} \tag{24}\]
_Then, by solving equations \(\frac{\partial G_{B}(p)}{\partial p}\geq 0\) and \(\frac{\partial G_{B}(p)}{\partial p}\leq 0\), we have results \(p\leq\frac{1}{1+(\frac{(n-\theta_{2})C_{n}^{\theta_{2}}}{(n-\theta_{1})C_{n}^{ \theta_{1}}})^{\frac{1}{\theta_{2}-\theta_{1}}}}\) and \(p\geq\frac{1}{1+(\frac{(n-\theta_{2})C_{n}^{\theta_{2}}}{(n-\theta_{1})C_{n}^{ \theta_{1}}})^{\frac{1}{\theta_{2}-\theta_{1}}}}\), respectively, which completes the proof. \(\square\)_
**Algorithms**
Algorithm 2 (and other algorithms for E-kKP problem) can be reused for finding the approximate solution based on binomial approximation. Specifically, we define
\[\Omega_{B}:=\frac{k}{1+(\frac{(n-\theta_{2})C_{n}^{\theta_{2}}}{(n-\theta_{1} )C_{n}^{\theta_{1}}})^{\frac{1}{\theta_{2}-\theta_{1}}}} \tag{25}\]
and the solution subset is either \(S_{l}^{\prime}=Bt(k,\Omega_{B},N)\) or \(S_{r}^{\prime}=N-Bt(|N|-k,\sum_{t\in N}Pr(t=1)-\Omega_{B},N)\). Here, let \(p_{l}=\frac{1}{k}\sum_{t\in S_{l}^{\prime}}Pr(t=1)\) and \(p_{r}=\frac{1}{k}\sum_{t\in S_{r}^{\prime}}Pr(t=1)\); we then return \(S_{l}^{\prime}\) as the result if \(G_{B}(p_{l})>G_{B}(p_{r})\), and \(S_{r}^{\prime}\) otherwise.
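For illustration, the binomial-approximation objective of eq. (22) and the target \(\Omega_{B}\) of eq. (25) might be sketched as follows (assuming SciPy; the function names are ours):

```python
from math import comb
from scipy.stats import binom

def G_B(S_probs, k, theta1, theta2):
    """Binomial-approximation objective of eq. (22): n = k, p = mean probability."""
    p = sum(S_probs) / k
    return binom.cdf(theta2, k, p) - binom.cdf(theta1, k, p)

def omega_B(k, theta1, theta2):
    """Target probability sum of eq. (25) at which G_B peaks."""
    ratio = ((k - theta2) * comb(k, theta2)) / ((k - theta1) * comb(k, theta1))
    return k / (1.0 + ratio ** (1.0 / (theta2 - theta1)))
```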
### Method with Normal Approximation
It is well known that both the Poisson distribution and the Binomial distribution can be approximated by a normal distribution. In previous sections, we have already discussed methods using Poisson and Binomial distributions to approximate the Poisson Binomial distribution. Now we further use a normal distribution to approximate the Poisson Binomial distribution. We assume that \(T\) approximately follows a normal distribution with expectation:
\[\mu_{S}=\sum_{t\in S}Pr(t=1) \tag{26}\]
and the standard deviation:
\[\sigma_{S}=\sqrt{\sum_{t\in S}\left(Pr(t=1)(1-Pr(t=1))\right)} \tag{27}\]
Then, Formula 11 can be approximated as:
\[\tau(S)=Pr(\theta_{1}\leq T\leq\theta_{2})\quad\approx F_{N}(\theta_{2}+0.5;\mu_{S },\sigma_{S})-F_{N}(\theta_{1}-0.5;\mu_{S},\sigma_{S}) \tag{28}\]
where \(F_{N}\) is the CDF of Normal distribution with parameters \(\mu_{S}\) and \(\sigma_{S}\). Here we add a continuity correction according to [37], considering that the Poisson Binomial Distribution is discrete while the normal distribution is continuous. Now, we find \(S^{\prime}\) to maximize:
\[G_{N}(\mu_{S},\sigma_{S}):=F_{N}(\theta_{2}+0.5;\mu_{S},\sigma_{S})-F_{N}( \theta_{1}-0.5;\mu_{S},\sigma_{S}) \tag{29}\]
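A minimal sketch of this objective (assuming SciPy is available) is given below; `S_probs` denotes the positive-opinion probabilities of the currently selected workers.

```python
from math import sqrt
from scipy.stats import norm

def G_N(S_probs, theta1, theta2):
    """Normal approximation of tau(S) with continuity correction (eqs. 28-29)."""
    mu = sum(S_probs)
    sigma = sqrt(sum(p * (1.0 - p) for p in S_probs))
    return norm.cdf(theta2 + 0.5, mu, sigma) - norm.cdf(theta1 - 0.5, mu, sigma)
```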
Unlike the Poisson and Binomial approximations, \(G_{N}(\mu_{S},\sigma_{S})\) has no such monotonicity property, and it contains two independent parameters (\(\mu_{S}\) and \(\sigma_{S}\)) that must be controlled. Therefore, it is not suitable to use only one parameter (\(\mu_{S}\) or \(\sigma_{S}\)) to find the best group with the aforementioned knapsack method (Algorithm 2). If we were to use the two parameters simultaneously, the time cost of the normal approximation would be much higher than that of the Poisson or Binomial approximation. Thus, it is necessary to choose an algorithm that takes advantage of the fast computation of \(G_{N}(\mu_{S},\sigma_{S})\) while keeping the overall time cost low.
The simulated annealing algorithm [38] is a good choice to deal with the issue mentioned above. It is a stochastic optimization algorithm based on the Monte-Carlo iterative solution strategy, inspired by the similarity between the annealing process of solid matter in physics and general combinatorial optimization problems. The algorithm starts from a relatively high initial temperature and, as the temperature parameter decreases, combines random search with a probabilistic acceptance of worse solutions; this allows it to probabilistically escape local optima and eventually tend toward the global optimum of the objective function in the solution space.
We demonstrate the algorithm in Algorithm 3, where \(T_{ini}\), \(T_{end}\), \(r\) and \(c\) represent the initial temperature, the terminal temperature, repeating times and the ratio of chilling-down, respectively. Similar to the original algorithm which improves the results by adding random displacement to the atom at each iteration, our algorithm improves the results by randomly replacing a small number (\(k_{1}\)) of elements in \(S\) with \(k_{1}\) elements from its complementary set \(R\) as shown in Algorithm 3.
The computation of \(\tau(S)\) is the main factor affecting the time complexity of the algorithm. We use the normal approximation to calculate \(\tau(S)\) (eq. 28). Supposing the time complexity of calculating \(\tau(S)\) is \(O(k)\), the total time complexity of this algorithm is \(O(kr\log_{1/c}(\frac{T_{ini}}{T_{end}}))\). It depends on the hyper-parameters \(T_{ini}\), \(T_{end}\), \(r\) and \(c\), all of which are chosen by the user and have no connection to \(k\) and \(|N|\). Users can balance the time complexity and the accuracy they want by selecting suitable parameters. In the experiments of Section 4.2.1, we set the parameters to \(T_{ini}=1\), \(T_{end}=0.0001\), \(r=1000\), \(c=0.9\).
Compared to the aforementioned Poisson approximation and Binomial approximation that approximate \(\tau(S)\) twice, the normal approximation with the simulated annealing algorithm only approximates \(\tau(S)\) once. Thus, we believe the latter can improve the results, which is demonstrated by the experimental evaluation in Section 4.2.
### Method with Discret Fourier Transform and Character Function (DFT-CF)
The DFT-CF method is an exact method of computing the probability mass function [37, 38]. Here, it is based on two-dimensional Discret Fourier Transform (DFT) of the Character Function (CF)
```
Input : k, N = {t_0, t_1, ..., t_{|N|-1}}
Output: a size-k subset of N

 1  Initialize T = T_ini, r, c
 2  S <- a set of k random elements of N
 3  R <- N - S
 4  while T > T_end do
 5      for t <- 0 to r do
 6          k1 <- random(1, floor(min(k/2, (|N|-k)/2)))
 7          R1 <- a set of k1 random elements of R
 8          S1 <- a set of (k - k1) random elements of S
 9          S1 <- S1 ∪ R1
10          dG_N <- G_N(mu_{S1}, sigma_{S1}) - G_N(mu_S, sigma_S)
11          if dG_N > 0 then
12              S <- S1;  R <- N - S1
13          else
14              accept S1 with probability exp(dG_N / T)
15              if S1 is accepted then S <- S1;  R <- N - S1
16          end if
17      end for
18      T <- T * c
19  end while
20  return S
```
**Algorithm 3**Simulated Annealing Algorithm
of the Poisson Binomial Distribution.
The Character Function of the Poisson Binomial variable \(T=\sum_{j}t_{j}\) is given by \(\phi(t)=E(e^{itT})\). According to the definition of mathematical expectation, we have:
\[\phi(t)=\sum_{T_{0}=0}^{k}Pr(T=T_{0})e^{itT_{0}}. \tag{30}\]
On the other hand, we assume that \(t_{j}\)'s are mutually independent, so
\[\phi(t)=\prod_{j=1}^{k}E(e^{itt_{j}})=\prod_{j=1}^{k}(1+Pr(t_{j}=1)(e^{it}-1)) \tag{31}\]
Let \(t=\omega l\), where \(l\) is \(0...k\), \(\omega=2\pi/(k+1)\), by combining equations (30) and (31), we have:
\[\frac{1}{k+1}\sum_{T_{0}=0}^{k}Pr(T=T_{0})e^{i\omega lT_{0}}=\frac{\prod_{j=1 }^{k}\left(1+Pr(t_{j}=1)(e^{i\omega l}-1)\right)}{k+1} \tag{32}\]
Application of DFT to both sides of eq. (32) leads to the pmf of T as follows (see details in [38]):
\[Pr(T=T_{0})=\frac{1}{k+1}\sum_{l=0}^{k}\left(\prod_{j=1}^{k}\left((1-p_{j})+p _{j}e^{i\omega l}\right)e^{-i\omega lT_{0}}\right) \tag{33}\]
Thus, we have:
\[\tau(S)=Pr(\theta_{1}\leq T\leq\theta_{2})=\frac{1}{k+1}\sum_{T_{0}=\theta_{1 }}^{\theta_{2}}\sum_{l=0}^{k}\left(\prod_{j=1}^{k}\left((1-p_{j})+p_{j}e^{i \omega l}\right)e^{-i\omega lT_{0}}\right) \tag{34}\]
Now we can still utilize the simulated annealing algorithm by replacing the \(G_{N}(\mu_{S},\sigma_{S})\) in Algorithm 3 with the exact \(\tau(S)\) in eq. (34).
The time complexity of calculating \(Pr(T=T_{0})\) is \(O(k^{2})\), where \(k\) is the size of \(S\). Thus, it is straightforward to get the time complexity of the method DFT-CF (see \(\tau(S)\) in eq. 34) is \(O(k^{2}(\theta_{2}-\theta_{1}))\). Compared to the normal approximation in Section 3.4, DFT-CF needs not to approximate \(\tau(S)\). Therefore, the results by DFT-CF tend to be more accurate. The experimental evaluation in Section 4.2 demonstrate this.
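For reference, a NumPy sketch of the DFT-CF computation of eqs. (33)-(34) might look as follows (our own implementation sketch, not the authors' code):

```python
import numpy as np

def pbd_pmf_dft(probs):
    """Exact Poisson-binomial pmf via the characteristic function, eq. (33)."""
    p = np.asarray(probs, dtype=float)
    k = len(p)
    omega = 2.0 * np.pi / (k + 1)
    l = np.arange(k + 1)                                   # frequencies 0..k
    # phi(omega*l) = prod_j (1 - p_j + p_j * exp(i*omega*l)), one value per l
    phi = (1.0 - p[:, None] + p[:, None] * np.exp(1j * omega * l)).prod(axis=0)
    T0 = np.arange(k + 1)[:, None]
    pmf = (phi[None, :] * np.exp(-1j * omega * l * T0)).sum(axis=1).real / (k + 1)
    return np.clip(pmf, 0.0, 1.0)

def tau_dft(probs, theta1, theta2):
    """tau(S) of eq. (34) from the exact pmf."""
    return float(pbd_pmf_dft(probs)[theta1:theta2 + 1].sum())
```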
## 4 Experimental Evaluation
In this section, we present our experimental evaluation of the performances of T-model and S-model, as well as an experimental study of the crowd selection problem, namely finding the optimal set of workers with a given budget. The goal of our experiments is twofold: first, we study the effect of different parameters for the proposed algorithms; second, we compare the two proposed algorithms with a baseline algorithm, that is, selecting the workers randomly. In order to explore the various settings of parameter values in our methods, we have used synthetic data for the testing. In addition, we verify the effectiveness of our methods on data from the Foursquare [1], a very popular social network. Specifically, we used the Foursquare API to gather sample data of the existing venues
and the tips posted on them. In particular, for each collected venue, the crawler collects all its tips and the identities of the users who posted them. Our crawler ran from March 15th to May 19th, collecting data from 69,423 users. Additionally, to evaluate the practicability of the proposed models, we conducted a case study on Amazon Mechanical Turk (AMT).
All the experiments are conducted on a server equipped with Intel(R) Core(TM)i7 3.40GHz PC and 16GB memory, running on Microsoft Windows 7.
### Experiments on S-model
We first conducted evaluation on S-model. In particular, we compared the proposed greedy algorithm, namely _greedy_, with two alternative methods- (1) _exact_: a brute-force algorithm, which computes the exact optimal solution; (2) _random_: the workers are selected randomly. Due to the high computational cost for the exact algorithm, we only generate a small data set with 30 workers. Each pair of workers is assigned a similarity ranging from -1 to 0 (so \(Div(C)>0\)), following two different distributions - Uniform and Normal.
**Effectiveness**: We generated 100 such data sets, and reported their average performance in Fig.4. Note the x-axis denotes the budget number of workers to be enlisted, and y-axis indicates the diversity of the selected crowd.
It is straightforward to interpret our findings: from the experimental results, we can see that _greedy_ well approximates the performance of _exact_. This is consistent with our theoretical analysis that _greedy_ provides an approximation guarantee of 63%, as shown in Section 2.3. In addition, _greedy_ outperforms _random_ for all distributions. We also find that the diversity grows with increasing \(k\) for all three algorithms, which confirms the fact that large crowds tend to have high diversity. Another interesting finding is that, compared with _random_, the advantages of _greedy_ are more evident under Normal distributions than under Uniform distributions. This is because values drawn from a Normal distribution concentrate around the mean, so _random_ is very likely to select values close to the mean, which leads to low diversity.
Figure 4: Effectiveness of Methods for S-model with Various Distributions
On the real data set, the exact algorithm cannot be performed due to its factorial time cost. So we only plotted the performance of _random_ and _greedy_, as demonstrated in Fig.6. The result is basically consistent with the synthetic data.
**Efficiency**: In this subsection, we empirically examine the time-efficiency of the proposed algorithm for S-model. In particular, we compare the greedy algorithm (Algorithm 2) with the exact algorithm (brute-force enumeration). As shown in Fig.5, the exact algorithm (denoted by _exact_) entails exponential computation time, while the greedy algorithm (_greedy_) is much more efficient than _exact_. Please note that we stop _exact_ after it has run for over 500 seconds.
Figure 5: Efficiency of Methods for S-model with Various Distributions
Figure 6: Effectiveness of S-model on Foursquare (Real) data
### Experiments on T-model
#### 4.2.1 Synthetic Data
In this subsection, we present a series of experimental results on synthetic data. To simulate individual opinions without bias, we produced synthetic datasets following two different distributions - normal and uniform - each with varying mean and variance values. The characteristics of K-Best selection are investigated with both the Poisson approximation and the Binomial approximation, and the characteristics of the simulated annealing algorithm are investigated with both the normal approximation and the exact DFT-CF method. We then evaluate the efficiency and effectiveness of these methods.
The synthetic dataset is generated as follows: we generated 100 data sets, each including 30 candidate workers. The number of candidate workers is small because we want to use a brute-force algorithm to traverse the searching space, and find the absolute optimal solution. Then, we can evaluate how far the proposed approximation algorithm is from this optimum. The setting of parameters is: \(k=10,\theta_{1}=3,\theta_{0}=3\), \(k=15,\theta_{1}=5,\theta_{0}=5\) and \(k=20,\theta_{1}=6,\theta_{0}=6\).
The effectiveness results are reported in Fig.7. In each subfigure of Fig.7, the x-axis indicates the index of the 100 data sets, and the y-axis denotes the value of \(\tau(S)\), which is the function we try to maximize. The methods with Poisson and binomial approximations are named 'poisson' and 'binomial', and the methods with normal approximation and exact DFT-CF are named 'normal' and 'DFT-CF', respectively. To better illustrate the advantage of the proposed methods, we also compare them with a baseline method that randomly selects workers, denoted by 'random'. From the experimental results, we can see that the performance of 'random' can be arbitrarily bad, while 'poisson' and 'binomial' have similar performance, 'normal' and 'DFT-CF' perform better, and they all well approximate the optimum. In addition, we present the comparison of efficiency in Fig.8. One can see that the approximation techniques are much more efficient than computing the exact solutions. Moreover, we observe that 'poisson' and 'binomial' have similar efficiency, the method 'normal' performs best, and 'DFT-CF' ranks second.
#### 4.2.2 Real Data
In this subsection, we evaluated the proposed methods on real data sets from Foursquare. In particular, we select 10000 active workers (i.e. Foursquare users) from all the data collected. We evaluate sentiment of all the historical comments for each worker, and use average opinion sentiment value for this experiment. With this large data set, we examine the performance of the proposed algorithms with different settings of \(\theta_{0}\), \(\theta_{1}\) and \(k\). In Fig.9, we use x-axis to denote the value of \(k\), whereas \(\theta_{0}\) and \(\theta_{1}\) are set to be different portions of \(k\).
First, we can observe that the proposed approximation-based methods significantly outperform the random baseline. In particular, the advantage of our proposals is evident when \(\theta_{0}\) and \(\theta_{1}\) are far from \(k/2\), as in figures 9(a), 9(d) and 9(h). Comparatively, when they are close to \(k/2\), the performance of the random baseline improves, but remains worse than our proposals. This phenomenon can be explained by the Central Limit Theorem [29] - the sum of 0-1 random variables (i.e., a Poisson Binomial Distribution) is approximately normally distributed, and the random baseline is more likely to pick workers with probability close to the mean. So when the user's demand is also close to the mean, the random baseline has better performance. When the
Figure 8: Efficiency of Methods for T-model with Various Distributions
Figure 7: Effectiveness of Methods with Poisson and Binomial Approximations
user's demand is far from the mean, randomly selecting workers is very unlikely to satisfy it.
Overall, our proposals demonstrate very stable and outstanding performance. Moreover, when \(k\) is fairly large, the user's demand can be almost 100% guaranteed. When \(k\) is small, both the normal approximation and the exact DFT-CF method perform better than the Poisson approximation and the binomial approximation. There are two main reasons for this phenomenon: (1) Both the Poisson and Binomial approximations with the backtracking algorithm approximate \(\tau(S)\) twice, while the normal approximation with the simulated annealing algorithm approximates \(\tau(S)\) only once, and there is no approximation at all for the exact DFT-CF method with the simulated annealing algorithm; (2) the order of worker selection affects the results of the backtracking algorithm, since in Algorithm 2 the condition \(\sum_{i=1}^{k}Pr(t_{i}=1)>\Omega\) in line 4 determines whether the currently selected worker combination can be recursively extended to a qualified \(S\), and this is affected by the order of the data. Specifically, for different orderings of the same set of workers, when the first \(k\) items are large enough, \(\sum_{i=1}^{k}Pr(t_{i}=1)>\Omega\) is more likely to be judged True, causing the current worker combination and all \(S\) combinations containing it to be pruned. This reduces the number of \(S\) combinations traversed by the algorithm, thereby decreasing the chance of finding the optimal solution. Conversely, if the sum of the first \(k\) items is too small, the number of \(S\) combinations traversed by the algorithm is the same as full enumeration, which hurts the overall efficiency of the algorithm. However, the order of the workers does not affect the result of the simulated annealing algorithm, since that algorithm selects the workers randomly.
### Case Study
We conducted a case study to exhibit the _goodness_ of crowds selected by our proposed models. In particular, we ask the crowds to produce pairwise comparisons for a number of restaurants. One thing worth noting is that the goodness of a crowdsourced result for restaurants is not _absolute_. Nevertheless, in order to present a fairly objective evaluation, we carefully select 40 pairs of restaurants, such that each of them is consistently ranked by three different third-party systems, namely _Yelp!_ ([http://www.yelp.com/](http://www.yelp.com/) ), _Michelin_ ([http://www.michelin.com/](http://www.michelin.com/)), as well as _OpenRice_ ([http://www.openrice.com/](http://www.openrice.com/)). The pairwise comparisons agreed by all the systems are assumed to be the ground truth.
We publish questions on Amazon Mechanical Turk (AMT), which is a widely used crowdsourcing marketplace. Each question contains two restaurants, and requires a worker to provide comments (at least 200 words) on each restaurant and decide which one is better. We accept 100 workers for each question. We apply the S-model and T-model on the data obtained from AMT, and select a subset of workers out of the 100 for each pair of restaurants. Specifically, we adopt the distance function detailed in section 7.1.3 (part of Appendix) for S-model; and use the sentiment analysis tool from Natural Language Toolkit (NLTK [5]) for the T-model. To aggregate crowdsourced answers, we use the majority as the crowd's result. Moreover, for comparison, we randomly select the same number of workers, denoted by _rand_.
The size of the selected subset of workers is set to 11, 21,..., 51, and the proposed models consistently outperform _rand_. Due to the page limit, we demonstrate the precision and recall when the size is 21. In Fig.10, we use _rand_, _t-model_ and _s-model_ to denote the results for random selection, t-model and s-model, respectively. From the experimental results, we can see that the proposed models achieve fairly high precision and recall (70%+). Besides, we observe that _rand_ has quite low
Figure 9: Testing on real data
precision and recall, which indicate that the diversity of opinion is very important for constructing a crowd.
## 5 Related Work
### Crowd-based Queries
The recent development of crowdsourcing brings us a new opportunity to engage human intelligence in the process of answering queries (see [13] for a survey). Crowdsourcing provides a new problem-solving paradigm [8, 21], which has been blended into several research communities. In particular, crowdsourcing-based data management techniques have attracted much attention in the database and data mining communities recently. From a practical viewpoint, [15] proposed and developed a query processing system using microtask-based crowdsourcing to answer queries. Moreover, in [26], a declarative query model is proposed to cooperate with standard relational database operators. In addition, from a theoretical viewpoint, many fundamental queries have been extensively studied, including filtering [25], max [17], sorting [22], join [22, 23], etc. Recently, some researchers have considered new models with lower cost and higher efficiency than traditional ones. For example, [44] proposed novel frameworks called SPR and SPR+ to address crowdsourced top-k queries while minimizing the total monetary cost, and [40] proposed a crowd-powered database system called 'CDB' that pursues three optimization goals: lower monetary cost, lower latency, and higher quality. Besides, crowdsourcing-based solutions for many complex tasks have been developed, such as categorization based on graph search [27], clustering [16], entity resolution [32, 34, 42], analysis over social media [10], tagging in social networks [12], search engines [46], trip planning [18], pattern mining [6], data management [45], and services such as decision-making services [43] and service-oriented business [41].
Figure 10: Precision and Recall on Case Study over AMT
### Team Formation
Another related problem in the field of data mining is the _Team Formation Problem_[20]. Previous Team Formation studies, which do not take diversity into consideration, focus on satisfying the specific skill requirements of given tasks using skills possessed by different candidate experts. Normally, the cost of choosing an expert is also defined, e.g., influence on personal relationships, communication cost, etc. Aside from using explicit graph constraints, some attempts to solve the team formation problem are based on communication activities [9, 14]. In recent years, more and more methods and solutions have been proposed. For example, [51], [49], [50] and [47] respectively proposed different models or algorithms from the perspectives of member collaboration, communication costs among experts, integrating worker agency, and team leaders. [48] also proposed a Decision Support System to assist multiple team formation in the context of software development, which is planned to be used in industry.
The difference between the Team Formation problem and ours is twofold. First, Team Formation mainly considers individual capabilities, while we consider the crowd as a whole - the most capable workers may not make a wise crowd [31]. Second, we focus on the diversity of opinions of the crowd, which has not been addressed in the Team Formation problem.
### Diversity of Opinions in Social Science
The importance of diversity of opinions for crowdsourcing is already well studied in the field of social science. In particular, [23] is known as one of the most representative books in the field. It highlights the importance of cognitive diversity for collective problem-solving (where diversity trumps ability); it takes a complex subject, moves beyond metaphor, mysticism and politics, and places the claims of diversity's benefits on a solid intellectual foundation.
To the best of our knowledge, this is the first algorithmic study on how to construct a wise crowd with consideration of the diversity of opinions.
## 6 Conclusion and Future Work
In this paper, we study how to construct a wise crowd with consideration of the diversity of opinions. In particular, two basic paradigms for worker selection are addressed - building a crowd waiting for tasks to come, and selecting workers for a given task. Accordingly, we propose the Similarity-driven Model (S-Model) and the Task-driven Model (T-Model) for these two paradigms. Under both models, we propose efficient and effective algorithms to enlist workers under a budget constraint. We have verified the solutions with extensive experiments on both synthetic and real data sets. The experimental studies demonstrate that the proposals are robust to varying parameters, and significantly outperform the baselines.
There are many further research directions to explore. One immediate future direction is how to consider the different influence of workers for the diversity of opinions. The influence may diminish the range of opinions, and polarize people's opinions making group feedback less reliable in guiding decision-makers. Influencers tend to improve people's confidence, but this so-called 'confidence effect' will boost an individual's confidence, while at the same time, decrease their accuracy. Another interesting dimension is to differentiate the cost for recruiting different workers,
then the problem is to minimize the total cost while fulfilling the requirement of diversity. Besides, we are interested in designing better similarity/distance functions for our S-model.
|
2308.14075 | FaceCoresetNet: Differentiable Coresets for Face Set Recognition | In set-based face recognition, we aim to compute the most discriminative
descriptor from an unbounded set of images and videos showing a single person.
A discriminative descriptor balances two policies when aggregating information
from a given set. The first is a quality-based policy: emphasizing high-quality
and down-weighting low-quality images. The second is a diversity-based policy:
emphasizing unique images in the set and down-weighting multiple occurrences of
similar images as found in video clips which can overwhelm the set
representation. This work frames face-set representation as a differentiable
coreset selection problem. Our model learns how to select a small coreset of
the input set that balances quality and diversity policies using a learned
metric parameterized by the face quality, optimized end-to-end. The selection
process is a differentiable farthest-point sampling (FPS) realized by
approximating the non-differentiable Argmax operation with differentiable
sampling from the Gumbel-Softmax distribution of distances. The small coreset
is later used as queries in a self and cross-attention architecture to enrich
the descriptor with information from the whole set. Our model is
order-invariant and linear in the input set size. We set a new SOTA to set face
verification on the IJB-B and IJB-C datasets. Our code is publicly available. | Gil Shapira, Yosi Keller | 2023-08-27T11:38:42Z | http://arxiv.org/abs/2308.14075v2 | # FaceCoresetNet: Differentiable Coresets for Face Set Recognition
###### Abstract
In set-based face recognition, we aim to compute the most discriminative descriptor from an unbounded set of images and videos showing a single person. A discriminative descriptor balances two policies when aggregating information from a given set. The first is a quality-based policy: emphasizing high-quality and down-weighting low-quality images. The second is a diversity-based policy: emphasizing unique images in the set and down-weighting multiple occurrences of similar images as found in video clips which can overwhelm the set representation. This work frames face-set representation as a differentiable coreset selection problem. Our model learns how to select a small coreset of the input set that balances quality and diversity policies using a learned metric parameterized by the face quality, optimized end-to-end. The selection process is a differentiable farthest-point sampling (FPS) realized by approximating the non-differentiable Argmax operation with differentiable sampling from the Gumbel-Softmax distribution of distances. The small coreset is later used as queries in a self and cross-attention architecture to enrich the descriptor with information from the whole set. Our model is order-invariant and linear in the input set size. We set a new SOTA for set face verification on the IJB-B and IJB-C datasets. Our code is publicly available 1.
Footnote 1: [https://github.com/ligaripash/FaceCoresetNet](https://github.com/ligaripash/FaceCoresetNet)
## Introduction
Face Set Recognition (FSR) is a variant of Face Recognition (FR) that involves recognizing an individual's identity based on a _template_, that is an unbounded and unordered set of images and video clips depicting a particular face instead of a single photograph. A template is shown in the Fig. 1. Given the added data, FSR is more accurate compared to FR. The increased accuracy and proliferation of face media available on the Internet and public cameras make FSR an attractive solution for 'in the wild' situations, such as in unconstrained surveillance and Homeland Security scenarios [12] where image quality is lacking due to low resolution, extreme pose, illumination, occlusions, and recognition of a single image is difficult.
FSR consists of verification, which means confirming the identity of an input query template (also referred to as a probe) and identification, which entails searching a database of registered subjects (known as a gallery) to identify an unknown probe person. FSR schemes encode the images in a template into a fixed-size vector descriptor to allow computationally efficient similarity search. As the templates consist of an arbitrary number of images and video frames, the image aggregation process has to be efficient with respect to the template's size. Furthermore, since the template is an unordered set of images, the aggregation should be permutation invariant. Additionally, the aggregation should prioritize informative images while downplaying uninformative, low-quality (blurry, poorly lighted) images. Finally, the process of feature fusion should factor in "face burstiness". If a template includes a burst of similar frames as in video clips, the burst should not overpower the template descriptor.
Some aggregation schemes apply average pooling [1, 13] to the set of features. However, this method presents a significant drawback. All images within the set are weighted equally, regardless of their quality. To address this issue, [10, 11, 12] suggest facial features whose magnitude is proportional to image quality.
Figure 1: Template coreset selection. in the round blue rectangle, an input face set with 12 images containing a video clip sample (\(I_{1}\) to \(I_{6}\)) and six photographs of the same person (\(I_{7}\) to \(I_{12}\)). The coreset selection process balances quality and diversity. The final coreset in the red round rectangle contains four high-quality images with diverse poses and expressions.
To improve the aggregation, other works model intra-set relationships using reinforcement learning [11], recurrent models [12], or self-attention [13, 14, 15]. In contrast to intra-set modeling, face burstiness has received only limited attention in the literature [22].
While Recurrent methods [1, 13] are effective in sequence modeling, they are unsuitable for modeling orderless sets as they are not permutation invariant. On the other hand, Multihead Self Attention (MSA) [23, 14] is an effective permutation-invariant approach to model self-attention among template features, but it becomes impractical for large templates since it is quadratic in the template size.
In this work, we propose a feature aggregation approach, denoted _FaceCoresetNet_, that selects a small coreset of the face template, which we dub the _Core-Template_, with balanced quality and diversity. Given \(N\) data points, a coreset is a smaller subset of \(K\ll N\) sampled points that approximates the original dataset with respect to a given metric. It is often used to speed up training or reduce the memory and storage requirements of large datasets [16, 1, 17].
We select the Core-Template using a greedy differentiable farthest point sampling (FPS) process with a learned metric parameterized by the quality of the face images. As computing the farthest point requires an Argmax operation, which is not differentiable, we suggest a novel differentiable FPS by replacing the Argmax operation with sampling from the differentiable Gumbel-Softmax [13] distribution of quality-aware distances. Unlike conventional FPS, which starts with a point selected at random, our selection process starts with the highest-quality feature; hence the selection is permutation invariant. An example of a face template and the selected Core-Template is depicted in Fig. 1. As far as we know, this is the first differentiable coreset approach that can be fully optimized by backpropagation. Importantly, our differentiable Core-Template selection reduces the unbounded template size \(N\) to a fixed size \(K\), facilitating the overall linear algorithm complexity.
To enhance the Core-Template representation, we compute self-attention amongst the Core-Template features. To enrich the representation with information from the full template missing from the Core-Template, we compute cross-attention between the Core-Template and the full template, using the Core-Template features as queries and the full template as Key-Values. By using the Core-Template features as queries for self-attention and cross-attention, we compute attention in linear time, unlike the typical quadratic time complexity in other works modeling intra-template relations, such as in the well-written paper by [14].
As a final step, we sum and normalize the enriched Core-Template features to get the final feature representation for the template. The final template representative feature is used to compute the vanilla FR adaptive margin loss [14] to optimize the model, which is simple and effective. The Core-Template selection process and attention computation are linear in the template size, facilitating an overall linear time complexity of FaceCoresetNet. The proposed scheme is shown to achieve SOTA accuracy on the IJB-B [16] and IJB-C [15] FSR benchmarks.
In particular, we propose the following contributions:
* We propose a novel differentiable, permutation-invariant coreset selection for FSR, with a differentiable FPS strategy based on a quality-aware metric to balance quality and diversity. The coreset selection and metric parameters are part of the trained model and optimized end-to-end. As far as we know, this is the first differentiable coreset selection method, with potential value to other fields.
* We model intra-template relations using the selected coreset as queries in self and cross-attention architecture to extract information from the whole set.
* As far as we know, our algorithm is the first to compute intra-template relations in linear time.
* We use a vanilla FR AdaFace loss with no additional loss terms for a simple design.
* We establish a new SoTA for face verification on IJB-B and IJB-C datasets
## Related Work
A simple method to combine features involves computing the mean of a collection of features [1, 13]. A limitation of this elementary technique is that inferior-quality images can dominate the combined descriptor, diminishing its discriminatory capacity. During training with margin-based softmax, superior-quality faces indirectly produce features with greater magnitude [13, 14]. By summing the features, high-quality images are inherently emphasized without needing explicit image quality evaluation. Both CFAN [13] and QAN [14] employ a self-attention mechanism to learn image weighting through an explicit quality assessment [13, 14]. Nevertheless, these approaches share a common drawback: they do not consider the relationships within the set during the weight computation stage.
To model intra-set relationships, the nonlocal neural network and RSA [14, 15] utilize intermediate feature maps \(U_{i}\) of size \(C_{m}\times H\times W\) during aggregation. This approach leverages the rich and complementary information feature maps provide and considers spatial relationships to refine them. However, the main limitation is the intensive computation required for attention calculation. Specifically, for a set of N feature maps, an attention module requires an affinity map of size \(\left(N\times H\times W\right)^{2}\). CAFace, a Cluster and Aggregate network [14], generates an affinity map of size \(N^{2}\), resulting in improved computation efficiency for attention calculation. Using our differentiable selection of coresets, this work reduces the feature fusion complexity to \(O(N)\).
Set burstiness suppression network [22], tackles the face burstiness, a problem mostly overlooked by FSR research, by quantizing the face set feature Gram matrix, which has a size of \(N^{2}\). In the main FSR datasets IJB-B, IJB-C, each image in each template is labeled by its media source (either a video clip or a singular image). MagFace and AdaFace [14, 15] workaround the face burstiness problem by using these labels to fuse each media source separately and then sum the per-media features to produce the final fused feature. In this method, each media source contributes equally to the final descriptor, regardless of size, mitigating the burstiness problem. In our work, we solve face burstiness in linear time without resorting to media labels, which may not be available, and still achieve superior performance.
DAC [16] proposes an RL-based quality estimator, while MARN [12] is an RNN-based quality estimator. However, these methods are not agnostic to input order, making them unsuitable for modeling orderless sets. In contrast, our method is permutation invariant, and well suited for set modeling.
A long-standing practice in fundamental approaches such as kNN and kMeans [13] or mixture models [10] is the use of core subsets to approximate the structure of an available set. These subsets enable finding approximate solutions at considerably reduced cost [1]. Recently, coreset-based methods have made their way into Deep Learning techniques, such as network pruning [15], active learning [14], and increasing the effective data coverage of mini-batches for improved GAN training [15] or representation learning [16]. The latter two have achieved success through a greedy coreset selection mechanism.
As far as we know, all previous coreset selection methods are non-differentiable. In contrast, in our proposed method, the coreset selection is integrated into our PyTorch model and trained end-to-end. Quality-based distance metric parameters are optimized during training, allowing an optimal balance between quality and diversity in the selected coreset.
To implement our differentiable Core-Template selection, we use the Gumbel-Softmax [15] technique, which enables the smoothing of discrete categorical distributions, making them suitable for backpropagation. It employs the Gumbel-Max reparametrization trick, which efficiently draws samples from a Categorical distribution. Let \(z\) be the categorical variable with class probabilities \(\pi_{1},\pi_{2}\ldots\pi_{k}\). The trick asserts that sampling from the categorical distribution is equivalent to sampling from the following expression:
\(z=\mathrm{one\_hot}\left(\operatorname*{arg\,max}_{i}\,[g_{i}+\log\pi_{i}]\right)\)
Here, \(\log\pi_{i}\) denotes the logits of the categorical distribution, and \(g_{i}\) follows a Gumbel(0,1) distribution. To avoid the non-differentiability introduced by the Argmax operation, it is replaced with a Softmax function:
\(y_{i}=\frac{\exp((\log\pi_{i}+g_{i})/\tau)}{\sum_{j=1}^{k}\exp((\log\pi_{j}+g_{j})/\tau)}\) for \(i=1,\ldots,k\).
The temperature parameter (\(\tau\)) controls the spreading of the distribution. As \(\tau\) approaches zero, the Gumbel-Softmax method converges to the discrete categorical distribution. During training, a \(\tau\) value of 1 is typically used, while during inference, \(\tau\) approaches zero (\(\tau\to 0^{+}\)). [16] employ the Gumbel-Softmax for a differentiable sampling of point clouds, facilitating tasks such as point cloud segmentation and classification.
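As a minimal PyTorch illustration (toy sizes and variable names are our own), differentiable selection of a single element according to a score vector can be written with `torch.nn.functional.gumbel_softmax`:

```python
import torch
import torch.nn.functional as F

# Toy example: differentiably pick one of N features according to scores
# (e.g., quality-aware distances used as logits).
N, C = 12, 512
features = torch.randn(N, C, requires_grad=True)
scores = torch.randn(N, requires_grad=True)

one_hot = F.gumbel_softmax(scores, tau=1.0, hard=True)  # straight-through one-hot sample
selected = one_hot @ features                           # (C,); gradients reach both inputs
# At inference, a near-zero temperature makes the sample effectively an argmax:
one_hot_eval = F.gumbel_softmax(scores, tau=1e-10, hard=True)
```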
Multi-head self-attention (MSA) [17] is a commonly used function representing intra-set and inter-set relationships through an affinity map. It is a critical
Figure 2: FaceCoresetNet Architecture
component of transformer architectures, which have surpassed CNNs in various visual tasks [1, 13, 14]. In principle, each MSA output value is a weighted average of other values controlled by the queries and keys affinity. The MSA time complexity is \(O(N^{2})\); hence computing intra-set attention on a large template is prohibitive. To reduce the computation time, we compute intra-set attention on a fixed-size coreset, and cross-attention between the coreset and the template in \(O(N)\) time.
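For intuition, a toy PyTorch sketch of coreset-as-query cross-attention is shown below; the dimensions and the use of `nn.MultiheadAttention` are our own illustration, not necessarily the paper's exact architecture.

```python
import torch
import torch.nn as nn

K, N, C = 4, 100, 512                 # toy sizes: coreset K, template N, channels C
coreset = torch.randn(1, K, C)        # queries
template = torch.randn(1, N, C)       # keys / values

mha = nn.MultiheadAttention(embed_dim=C, num_heads=8, batch_first=True)
# Cross-attention with K queries against N keys costs O(K * N), i.e., linear in N.
enriched, _ = mha(query=coreset, key=template, value=template)   # (1, K, C)
```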
## Method
Let \(\mathbf{T}=\{\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{N}\}\) be a template of \(N\) facial images of the same identity. The goal is to produce a discriminative single compact feature vector \(\mathbf{f}\) from \(\mathbf{T}\). To compute \(\mathbf{f}\), we employ an _Extract_, _Select_, _Attend and Aggregate_ paradigm. To _Extract_ (fig. 2 left) a compact feature \(\mathbf{f}_{i}\) from each image \(\mathbf{x}_{i}\), we use a pre-trained frozen single-image FR model \(E:\mathbf{x}_{i}\rightarrow\mathbf{f}_{i}\) following [13, 14]. The set of features produced is \(\mathbf{F}=\{\mathbf{f}_{1},\mathbf{f}_{2},\ldots,\mathbf{f}_{N}\}\) or, in matrix notation, \(\mathbf{F}\in\mathbb{R}^{N\times N_{c}}\), where \(N_{c}\) is the number of channels in the compact feature. During training, we create a batch of \(B\) templates of \(B\) different individuals. Each template is of size \(N\), where \(N_{min}\leq N\leq N_{max}\), i.e., \(\mathbf{F}\in\mathbb{R}^{B\times N\times N_{c}}\). The template size \(N\) is randomly selected for each mini-batch.
### Core Template Selection
To reduce the template feature size \(\mathbf{F}\) from unbounded size \(N\), we _Select_ (Fig. 2 green block) a core template denoted by \(\mathbf{CT}\) of fixed size \(K\). Our core template selection process strives to select high-quality images with diverse appearances to address the burstiness problem. As established in [12], \(||\mathbf{f}_{i}||\) is a proxy for \(\mathbf{x}_{i}\)'s image quality. Our core template selection starts by selecting the feature \(\mathbf{f}_{i}\) with the largest norm in \(\mathbf{F}\), the template's highest quality feature. Following the farthest point sampling [1] paradigm, in each selection iteration, we choose the farthest feature in \(\mathbf{F}\backslash\mathbf{CT}\) from the set of features already selected in \(\mathbf{CT}\). The farthest point in our setting should correlate to a high-quality image with a diverse appearance relative to the features already selected for the core template. Cosine distance between two features \(\mathbf{f}_{i}\) and \(\mathbf{f}_{j}\) is defined as:
\[d_{c}(\mathbf{f}_{i},\mathbf{f}_{j})\coloneqq 1-\frac{\mathbf{f}_{i}\cdot\mathbf{f}_{j}}{||\mathbf{f}_{i}||\,||\mathbf{f}_{j}||} \tag{1}\]
Large cosine distance between features correlates with large appearance variations (face pose, expression, age, etc.) between the corresponding images. Selecting the farthest feature using this metric encourages core template appearance diversification, but ignores image quality, resulting in the selection of diverse but low-quality images. To account for image quality, we scale the cosine distance by the feature norm exponentiated by a learned parameter \(\gamma\) to balance between quality and diversity:
\[d_{q}(\mathbf{f}_{i},\mathbf{f}_{j};\gamma)\coloneqq d_{c}(\mathbf{f}_{i},\mathbf{f}_{j})|| \mathbf{f}_{j}||^{\gamma} \tag{2}\]
Here, \(\mathbf{f}_{i}\in\mathbf{CT}\) and \(\mathbf{f}_{j}\in\mathbf{F}\setminus\mathbf{CT}\) and \(d_{q}\) is a non-symmetric distance function modulated by image \(\mathbf{x}_{j}\)'s quality. \(\gamma\) is a trained parameter that reflects the optimal importance of the image quality. If \(\gamma\) is 0, there is no weight to quality and \(d_{q}\) is reduced to \(d_{c}\). If \(\gamma>>1\), the feature quality dominates, and we have \(d_{q}(\mathbf{f}_{i},\mathbf{f}_{j};\gamma)\approx||\mathbf{f}_{j}||^{\gamma}\) as illustrated in Fig. 4. The effect of using our quality aware distance function vs. regular cosine distance is illustrated in Fig. 3. In Fig. 3A, we have the template \(\mathbf{F}=\{\mathbf{f}_{1},\mathbf{f}_{2},\mathbf{f}_{3}\}\) and \(\mathbf{f}_{1}\) already in the Core Template (\(\mathbf{CT}=\{\mathbf{f}_{1}\}\)), marked by the red dashed ellipse, and we want to choose the next feature for the core template out of \(\mathbf{F}\setminus\{\mathbf{f}_{1}\}=\{\mathbf{f}_{2},\mathbf{f}_{3}\}\) (marked by the blue dashed ellipse). Using the regular cosine distance, the feature \(\mathbf{f}_{3}\) of low-quality image 3 is farther from \(\mathbf{f}_{1}\) than high-quality image 2 due to its larger pose difference. As a result, the low quality \(\mathbf{f}_{3}\) feature is selected for CT (\(\mathbf{CT}=\{\mathbf{f}_{1},\mathbf{f}_{3}\}\)), reducing the discriminative power of \(\mathbf{CT}\). In Fig. 3B, using our quality aware metric Eq. 2, the distance \(d_{c}(\mathbf{f}_{1},\mathbf{f}_{2})\) is increased by \(\mathbf{f}_{2}\)'s larger quality measure, and the distance \(d_{c}(\mathbf{f}_{1},\mathbf{f}_{3})\) is reduced by \(\mathbf{f}_{3}\) lower quality, reversing the distance order and selecting the high quality \(\mathbf{f}_{2}\) for the core template (\(\mathbf{CT}=\{\mathbf{f}_{1},\mathbf{f}_{2}\}\))
The Core-Template selection algorithm extends the FPS algorithm with important modifications. Instead of relying
Figure 4: The core template selection is influenced by the value of \(\gamma\). In this example, the template size \(N=5\) and \(K=3\). In the top row, when \(\gamma\) is large, the selection process prioritizes image quality over diversity. As a result, three similar high-quality images are chosen, marked with red rectangles. Conversely, in the bottom row, with \(\gamma=0\), diversity precedes image quality, leading to a selection that emphasizes a diverse range of images.
Figure 3: Cosine distance (A) vs. quality-aware distance (B) for core template selection. The distance from \(f_{1}\) to \(f_{2}\) increases in B relative to A, as \(f_{2}\) has high quality. The distance from \(f_{1}\) to \(f_{3}\) is decreased in B, as \(f_{3}\) has low quality.
on the non-differentiable argmax operation to choose the feature in \(\mathbf{F}\) farthest from \(\mathbf{CT}\), we adopt a differentiable approach by sampling from the Gumbel-Softmax distribution of distances [10]. This enables us to leverage backpropagation during training. Additionally, we introduce a quality-based distance function that incorporates a learned parameter \(\gamma\), allowing the algorithm to achieve an optimal balance between quality and diversity in the selected Core-Template features. Furthermore, unlike the conventional FPS method that randomly selects the first feature, we prioritize the feature with the highest norm (highest quality) to be inserted first into the Core-Template \(\mathbf{CT}\). This selection criterion ensures permutation invariance and enhances the algorithm's overall performance. The pseudocode for our core template selection algorithm is in Algorithm 1.
```
import torch

def core_template_selection(F, K, F_norms, gamma, train=True):
    """
    Differentiable core-template selection (quality-aware farthest point sampling).

    Input:
        F:       normalized feature data, [B, N, C]
        K:       the intended core template (CT) size (scalar)
        F_norms: norms of the features in F (quality proxy), [B, N]
        gamma:   learned quality-vs-diversity balance parameter (scalar)
    Return:
        Core template CT with size K, [B, K, C]
    """
    temperature = 1.0 if train else 1e-10
    # Soft-select the feature with the highest quality (max norm) as the first CT member
    one_hot_max = torch.nn.functional.gumbel_softmax(F_norms, tau=temperature, hard=True)
    # Extract the feature with maximum norm
    CT = one_hot_max.unsqueeze(1) @ F                           # [B, 1, C]
    # After extraction, compute the quality-aware distance from CT to F
    d_CT_to_F = quality_aware_dist(CT, F_norms, F, gamma)       # [B, N]
    for _ in range(K - 1):
        one_hot_max = torch.nn.functional.gumbel_softmax(d_CT_to_F, tau=temperature, hard=True)
        # Extract the next (farthest, quality-weighted) feature to add to CT
        new_f = one_hot_max.unsqueeze(1) @ F                    # [B, 1, C]
        # Update each feature's distance to the enlarged CT (min over CT members)
        d_new_f_to_F = quality_aware_dist(new_f, F_norms, F, gamma)
        d_CT_to_F = torch.minimum(d_CT_to_F, d_new_f_to_F)
        CT = torch.cat([CT, new_f], dim=1)
    return CT
```
The selected core-template features are then enhanced with multi-head attention (MHA): self-attention within \(\mathbf{CT}\) allows the core template to exchange information among its features (Fig. 2).
Inspired by recent developments in detection transformers such as Efficient-DETR [22], which demonstrate the superiority of leveraging image features as queries instead of relying on learned queries for decoding as in the original transformer [23] and DETR [20], we leverage the core template features as transformer decoder queries that interact with key values taken from the full template features in \(\mathbf{F}\). This allows the core template features to extract additional information from the full template that is missing from the core template.
**Norm encoding**. Intuitively, explicit image-quality information can aid information extraction in multi-head attention settings, facilitating the suppression of low-quality features and the highlighting of high-quality ones. To incorporate this quality information, we encode the feature norms, used as a quality proxy, with the standard sinusoidal conversion established in [23], and add them to the original features. Our experiments validate the positive impact of norm encoding on the model's overall accuracy.
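For illustration, a minimal sketch of the norm encoding follows: the scalar feature norms are mapped to sinusoidal embeddings (in the style of [23]) and added to the template features. The exact frequency scaling and the assumption of an even encoding dimension are ours.

```
import math
import torch

def norm_encoding(norms, dim):
    """Sinusoidal encoding of per-image feature norms (the quality proxy),
    added to the corresponding template features.  norms: [N]; dim: feature
    dimension C (assumed even here)."""
    half = dim // 2
    freqs = torch.exp(-math.log(10000.0) * torch.arange(half, dtype=torch.float32) / half)
    angles = norms[:, None] * freqs[None, :]                        # [N, dim/2]
    return torch.cat([torch.sin(angles), torch.cos(angles)], -1)    # [N, dim]
```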
Importantly, our attention process interpolates between template features using self and cross-attention realized with MHA. We avoid the Feed-Forward block in the transformer encoder and decoder blocks, which extrapolates the template features and deteriorates performance.
The complexity of the encoder is \(O(K^{2})\), which is constant since \(K\) is fixed. The complexity of the decoder is \(O(NK)\), so the total complexity of the attention block is \(O(N)\) in the template size.
**Aggregate.** To represent the entire template with one feature \(\mathbf{f}\), we combine the \(K\) features in the enhanced core template by adding them up and normalizing the result to have a unit length.
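A minimal sketch of the aggregation step, assuming the enhanced core template is a [B, K, C] tensor:

```
import torch.nn.functional as nnf

def aggregate(enhanced_ct):
    """Sum the K enhanced core-template features and L2-normalize the result
    to obtain the single template descriptor f.  enhanced_ct: [B, K, C]."""
    f = enhanced_ct.sum(dim=1)          # [B, C]
    return nnf.normalize(f, dim=-1)     # unit-length template feature
```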
### Training and Loss
After the aggregation phase, we obtain a single vector \(\mathbf{f}\) representing the whole template. From this point on, the problem is reduced from SFR to FR, facilitating the use of SoTA FR loss functions. In this work, we choose the AdaFace [20] loss for its superior accuracy. AdaFace computes a class margin adaptively based on the image quality of the face, approximated by the face feature norm. In our case, the margin is computed from the _template_ quality instead of the image quality (see Fig. 2, right-hand side). Using a simple margin-based loss for template feature fusion yields a simpler implementation compared with the elaborate loss terms found in [20] and others.
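To make the idea concrete, the sketch below applies a norm-adaptive margin to the fused template feature, with the margin scaled by the template-quality proxy. This is a simplified illustration in the spirit of AdaFace, not the exact published loss; all names and constants are illustrative assumptions.

```
import torch
import torch.nn.functional as nnf

def template_margin_loss(f, labels, class_weights, m=0.4, h=0.33, s=64.0):
    """Simplified norm-adaptive margin loss on the fused template feature f.

    f:             [B, C] fused template features before normalization
    labels:        [B]    ground-truth identity indices (long tensor)
    class_weights: [num_ids, C] classifier prototypes
    """
    quality = f.norm(dim=-1)                                           # template quality proxy
    q_hat = ((quality - quality.mean()) / (quality.std() + 1e-6) * h).clamp(-1.0, 1.0)

    cos = nnf.normalize(f, dim=-1) @ nnf.normalize(class_weights, dim=-1).T   # [B, num_ids]
    target_cos = cos.gather(1, labels[:, None]).squeeze(1)                    # [B]

    margin = 0.5 * m * (1.0 + q_hat)        # higher template quality -> larger margin
    logits = cos.clone()
    logits.scatter_(1, labels[:, None], (target_cos - margin)[:, None])
    return nnf.cross_entropy(s * logits, labels)
```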
Large modern FR datasets like WebFace [22] contain millions of identities but only a limited number of images per identity (WebFace4M contains about 20 images per identity on average) and no video clips. On the other hand, SFR benchmarks like IJB-B and IJB-C contain rich templates with a diverse combination of still images and video clips for each identity. To bridge this gap, we augment the training set online and simulate IJB-B/IJB-C-style templates. To compensate for the lack of video clips, we simulate a video clip by randomly sampling an image and applying various geometric transformations (translations, rotations, etc.) to create multiple "video frames". Each training template comprises a random collection of still photos and simulated video clips, with photometric noise randomly added to each image.
## Experiments
To allow for a valid comparison with the current SOTA, we strictly follow the experimental protocol of CAFace [20] (NeurIPS'22). Our training dataset is WebFace4M [22], which contains 4.2 million facial images from 205,990 identities. We use CAFace's pre-trained face recognition model, \(E\), an IResnet-101, trained with AdaFace loss on the entire WebFace4M dataset. To train the FaceCoresetNet, we use CAFace's subset of 813,482 images from 10,000 identities. As the creators of VGG-2 [1] and MS1MV2 [23, 24] have withdrawn them due to privacy and other issues, we do not use them in our experiments. We train FaceCoresetNet for **2** epochs until convergence, compared with CAFace's **10**-epoch training schedule.
We test on the IJB-B [20] and IJB-C [20] datasets as they are intended for template-based face recognition. IJB-B comprises 1,845 individuals and includes 11,754 images, 55,025 frames, and 7,011 videos. Each template in the dataset consists of a diverse collection of still images and video frames from various sources. These images and videos were gathered from the Internet and are considered unconstrained, as they exhibit considerable variation in pose, lighting conditions, and image quality. Following CAFace, we focus our experiments on face verification, where IJB-B offers 10,270 genuine template pair comparisons and 8,000,000 impostor comparisons, facilitating valid measurements of very small FAR values on the TAR@FAR ROC curve. IJB-C extends IJB-B by adding 1,661 new subjects, with increased emphasis on occlusion and diversity.
### Ablation and Analysis
To demonstrate the effectiveness of each component in FaceCoresetNet, we conduct a series of experiments by ablating individual components. Initially, we turned off all components, resulting in a configuration where the template feature was simply the average feature computed from all the templates' features. The obtained TAR@FPR=1e-6 for the IJB-B dataset was 38.45 (Table 1). Subsequently, we enabled the differential core-template selection. We computed the final template descriptor by averaging the pooled features within the core template. Surprisingly, by effectively selecting 3 features, the accuracy increased significantly to 48.01. We hypothesize that our core template's high qual
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \# & Differential Core-Template Selection & Self Att & Cross Att & Norm Encoding & IJB-B: TAR@FAR=1e-6 \\ \hline
1 & \(\times\) & \(\times\) & \(\times\) & \(\times\) & 38.45 \\
2 & ✓ & \(\times\) & \(\times\) & \(\times\) & 48.01 \\
3 & ✓ & ✓ & \(\times\) & \(\times\) & 49.24 \\
4 & ✓ & ✓ & ✓ & \(\times\) & 51.71 \\
5 & ✓ & ✓ & ✓ & ✓ & 52.56 \\ \hline \hline \end{tabular}
\end{table}
Table 1: FaceCoresetNet Ablations
ity and diversity reduce the amount of noise present in the original template, which may explain the observed improvement in accuracy. Furthermore, we integrated self-attention into the core-template features, boosting accuracy to 49.24. Additionally, by employing cross-attention between the core and full templates, the accuracy increased to 51.71. Finally, with the incorporation of Norm encoding, the accuracy reached 52.56.
**Effect of Core-Template size**. To determine the optimal core-template size, we searched a range of sizes from 1 to 6. The outcomes of this search are summarized in Table 1 of the supplementary materials. Notably, the ideal coreset size was identified as 3, and this optimal value remained constant throughout all our subsequent experiments.
\(\gamma\) **computation policy**. Our model optimizes the \(\gamma\) value in the quality-aware distance. To showcase the effectiveness of this optimization, we conducted three experiments. In the first, we set \(\gamma\) to a constant value of 10, prioritizing quality over diversity. In the second, we set \(\gamma\) to 0, which reduces the quality-aware distance to the vanilla cosine distance, prioritizing diversity over quality. In the last experiment, we optimize \(\gamma\) end-to-end. The results in Table 3 demonstrate the effectiveness of our approach.
**Comparison with SoTA methods**. In Table 2, we compare our results with the state-of-the-art (SOTA) in face verification, focusing on the IJB-B and IJB-C datasets. Our method achieves the highest accuracy in the low false acceptance rate regime, TAR@FAR=1e-6, for both IJB-B and IJB-C, outperforming others significantly. Additionally, we obtain the second-best results for TAR@FAR=1e-5 while maintaining the advantage of the lowest (**linear**) compute time.
**Computation Complexity**. To validate our algorithm's claimed linear time complexity, we employ (fvcore 2023) to calculate the FLOPS in our FaceCoresetNet and compare it against CAFace (Kim et al., 2022) using the authors' code. The FLOPS measurements for different template sizes are illustrated in Fig. 6. The results demonstrate that our algorithm indeed operates in linear time, while CAFace exhibits quadratic time complexity.
## Conclusion
In this work, we employ a method to fuse face template features by selecting a small, fixed-size core-template that balances quality and diversity. This selection uses a differentiable farthest-point sampling with a quality-aware distance metric, realized by sampling from the Gumbel-Softmax distribution of quality-aware distances.
The selected core template features are further enhanced using MHA self-attention. These enhanced features are then used as queries for MHA cross-attention between the core template and the entire template, enabling the extraction of additional information that may be missing from the core template.
Our proposed model achieves state-of-the-art accuracy on IJB-B and IJB-C datasets, all while maintaining reduced linear time complexity and simple design. FaceCoresetNet is showcased within the context of face set recognition, yet its applicability may extend to other domains requiring efficient feature fusion based on a differentiable selection with a learned metric. We hope to see an extension of this work to other domains in future research.
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline \multirow{2}{*}{Method} & Complexity & \multicolumn{3}{c}{IJB-B 1:1 Verification TAR} & \multicolumn{3}{c}{IJB-C 1:1 Verification TAR} \\ & (\(N\): Template Size) & FAR=1e-4 & FAR=1e-5 & FAR=1e-6 & FAR=1e-4 & FAR=1e-5 & FAR=1e-6 \\ \hline AdaFace (Kim, Jain, and Liu, 2022) & NA & 94.84 & 90.86 & 38.45 & 96.42 & 94.47 & 87.90 \\ RSA (Liu et al., 2019) & \(O(N^{2})\) & 95.00 & 91.22 & - & 96.49 & 94.58 & - \\ SD + VBA (Wang, Zhao, and Wu, 2022) & \(O(N^{2})\) & 95.38 & 90.89 & 48.96 & 96.65 & 94.85 & 90.02 \\ CAFace (Kim et al., 2022) & \(O(N^{2})\) & 95.78 & 92.78 & 47.21 & 97.30 & 95.96 & 90.56 \\
**FaceCoresetNet** & \(\mathbf{O(N)}\) & 95.28 & 91.44 & 52.56 & 96.96 & 95.21 & **91.12** \\ \hline \hline \end{tabular}
\end{table}
Table 2: A performance comparison of recent methods on IJB-B (Whitelam et al., 2017) and IJB-C (Maze et al., 2018) datasets. The results are color-coded, with green representing the best, blue the second best, and red the third-best outcomes. FaceCoresetNet achieves the best results on TPR@FPR=1e-6 for both IJB-B and IJB-C while being the most efficient.
\begin{table}
\begin{tabular}{l c} \hline \hline \multirow{2}{*}{\(\gamma\) policy} & IJB-C: \\ & TAR@FAR=1e-6 \\ \hline Fixed 0 (Diversity priority) & 90.84 \\ Fixed 10 (Quality priority) & 89.69 \\ Trained & **91.12** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Training \(\gamma\) for optimal balance between quality and diversity yields best results, compared with fixed \(\gamma\) values
Figure 6: Computation complexity comparison with SoTA. FaceCoresetNet is linear in the template size, while CAFace is quadratic. |
2305.15618 | Debias Coarsely, Sample Conditionally: Statistical Downscaling through
Optimal Transport and Probabilistic Diffusion Models | We introduce a two-stage probabilistic framework for statistical downscaling
using unpaired data. Statistical downscaling seeks a probabilistic map to
transform low-resolution data from a biased coarse-grained numerical scheme to
high-resolution data that is consistent with a high-fidelity scheme. Our
framework tackles the problem by composing two transformations: (i) a debiasing
step via an optimal transport map, and (ii) an upsampling step achieved by a
probabilistic diffusion model with a posteriori conditional sampling. This
approach characterizes a conditional distribution without needing paired data,
and faithfully recovers relevant physical statistics from biased samples. We
demonstrate the utility of the proposed approach on one- and two-dimensional
fluid flow problems, which are representative of the core difficulties present
in numerical simulations of weather and climate. Our method produces realistic
high-resolution outputs from low-resolution inputs, by upsampling resolutions
of 8x and 16x. Moreover, our procedure correctly matches the statistics of
physical quantities, even when the low-frequency content of the inputs and
outputs do not match, a crucial but difficult-to-satisfy assumption needed by
current state-of-the-art alternatives. Code for this work is available at:
https://github.com/google-research/swirl-dynamics/tree/main/swirl_dynamics/projects/probabilistic_diffusion. | Zhong Yi Wan, Ricardo Baptista, Yi-fan Chen, John Anderson, Anudhyan Boral, Fei Sha, Leonardo Zepeda-Núñez | 2023-05-24T23:40:23Z | http://arxiv.org/abs/2305.15618v2 | # Debias Coarsely, Sample Conditionally:
###### Abstract
We introduce a two-stage probabilistic framework for statistical downscaling between unpaired data. Statistical downscaling seeks a probabilistic map to transform low-resolution data from a (possibly biased) coarse-grained numerical scheme to high-resolution data that is consistent with a high-fidelity scheme. Our framework tackles the problem by tandeming two transformations: a debiasing step that is performed by an optimal transport map, and an upsampling step that is achieved by a probabilistic diffusion model with _a posteriori_ conditional sampling. This approach characterizes a conditional distribution without the need for paired data, and faithfully recovers relevant physical statistics from biased samples. We demonstrate the utility of the proposed approach on one- and two-dimensional fluid flow problems, which are representative of the core difficulties present in numerical simulations of weather and climate. Our method produces realistic high-resolution outputs from low-resolution inputs, by upsampling resolutions of \(8\times\) and \(16\times\). Moreover, our procedure correctly matches the statistics of physical quantities, even when the low-frequency content of the inputs and outputs do not match, a crucial but difficult-to-satisfy assumption needed by current state-of-the-art alternatives.
## 1 Introduction
Statistical downscaling is crucial to understanding and correlating simulations of complex dynamical systems at multiple resolutions. For example, in climate modeling, the computational complexity of general circulation models (GCMs) [4] grows rapidly with resolution. This severely limits the resolution of long-running climate simulations. Consequentially, accurate predictions (as in forecasting localized, regional and short-term weather conditions) need to be downscaled from coarser lower-resolution models' outputs. This is a challenging task: coarser models do not resolve small-scale dynamics, thus creating bias [15; 63; 78]. They also lack the necessary physical details (for instance, regional weather depends heavily on local topography) to be of practical use for regional or local climate impact studies [31; 34], such as the prediction or risk assessment of extreme flooding [33; 41], heat waves [53], or wildfires [1].
At the most abstract level, statistical downscaling [75; 76] learns a map from low- to high-resolution data. However, it has several unique challenges. First, unlike supervised machine learning (ML), there is _no natural pairing of samples_ from the low-resolution model (such as climate models [22]) with samples from higher-resolution ones (such as weather models that assimilate observations [37]). Even in simplified cases of idealized fluids problems, it is hard to align the simulations, due to the chaotic behavior of the models: two simulations with very close initial conditions will diverge rapidly. Several recent studies in climate sciences have relied on synthetically generated paired datasets. The synthesis process, however, requires accessing both low- and high-resolution models and either (re)running costly high-resolution models while respecting the physical quantities in the low-resolution simulations [24; 40] or (re)running low-resolution models with additional terms nudging the outputs towards high-resolution trajectories [12]. In short, requiring data in correspondence for training severely limits the potential applicability of supervised ML methodologies, despite their promising results [35; 36; 54; 60].
Second, unlike the setting of (image) super-resolution [25], in which an ML model learns the (pseudo) inverse of a downsampling operator, downscaling additionally needs to correct the bias. This difference is depicted in Fig. 1(a). Super-resolution can be recast as frequency extrapolation [10], in which the model reconstructs high-frequency content, while matching the low-frequency content of a low-resolution input. However, the restriction of the target high-resolution data may not match the distribution of the low-resolution data in Fourier space [44]. Therefore, debiasing is necessary to correct the Fourier spectrum of the low-resolution input to render it admissible for the target distribution (moving solid red to solid blue lines with the dashed blue extrapolation in Fig. 1). Debiasing allows us to bypass the crucial but difficult-to-satisfy requirement that the low-frequency statistics of the low- and high-resolution datasets need to match.
Given these two difficulties, statistical downscaling should be more naturally framed as matching two probability distributions linked by a unknown map; such a map emerges from both distributions representing the same underlying physical system, albeit with different characterizations of the system's statistics at multiple spatial and temporal resolutions. The core challenge is then: _how do we structure the downscaling map so that the (probabilistic) matching can effectively remediate the bias introduced by the coarser, i.e., the low-resolution, data distribution?_
Thus, the main idea behind our work is to introduce a debiasing step so that the debiased (yet, still coarser) distribution is closer to the target distribution of the high-resolution data. This step results in an intermediate representation for the data that preserves the correct statistics needed in the follow-up step of upsampling to yield the high-resolution distribution. In contrast to recent works on distribution matching for unpaired image-to-image translation [80] and climate modeling [30], the additional structure our work imposes on learning the mapping prevents the bias in the low-resolution data from polluting the upsampling step. We review those approaches in §2 and compare to them in §4.
Concretely, we propose a new probabilistic formulation for the downscaling problem that handles _unpaired data_ directly, based on a factorization of the unknown map linking both low- and high-resolution distributions. This factorization is depicted in Fig. 1(b). By appropriately restricting the maps in the factorization, we rewrite the downscaling map as the composition of two procedures: a debiasing step performed using an optimal transport map [20], which _couples the data distributions_ and corrects the biases of the low-resolution snapshots; followed by an upsampling step performed using conditional probabilistic diffusion models, which have produced state-of-the-art results for image synthesis and flow construction [5; 47; 65; 67].
We showcase the performance of our framework on idealized fluids problems that preserve the core difficulty of atmospheric flows. We show that our framework is able to generate realistic snapshots that are faithful to the physical statistics, while outperforming several baselines.
## 2 Related work
The most direct approach to upsampling low-resolution data is to learn a low- to high-resolution mapping via paired data when it is possible to collect such data. For complex dynamical systems, several methods carefully manipulate high- and low-resolution models, either by nudging or by enforcing boundary conditions, to produce paired data without introducing spectral biases [12; 24]. Alternatively, if one has strong prior knowledge about the process of downsampling, optimization methods can solve an inverse problem to directly estimate the high-resolution data, leveraging prior assumptions such as sparsity in compressive sensing [9; 10] or translation invariance [39].
In our setting, there is no straightforward way to obtain paired data due to the nature of the problem (i.e., turbulent flows, with characteristically different statistics across a large span of spatio-temporal scales). In the weather/climate community (see [73] for an extensive overview), prior knowledge can be exploited to downscale specific variables [75]. One of the most predominant methods of this type is bias-correction spatial disaggregation (BCSD), which combines traditional spline interpolation with a quantile matching bias correction [51], and linear models [38]. Recently, several studies have used ML to downscale physical quantities such as precipitation [72], but without quantifying the prediction uncertainty. Yet, a generally applicable method to downscale arbitrary variables is lacking.
Another difficulty is to remove the bias in the low resolution data. This is an instance of domain adaptation, a topic popularly studied in computer vision. Recent work has used generative models such as GANs and diffusion models to bridge the gap between two domains [5; 7; 13; 52; 54; 56; 62; 68; 77; 79]. A popular domain alignment method that was used in [30] for downscaling weather data is AlignFlow [32]. This approach learns normalizing flows for source and target data of the same dimension, and uses their common latent space to move across domains. The advantage of those methods is that they do not require training data from two domains in correspondence. Many of those approaches are related to optimal transport (OT), a rigorous mathematical framework for learning maps between two domains without paired data [74]. Recent computational advances in OT for discrete (i.e., empirical) measures [20; 58] have resulted in a wide set of methods for domain adaptation [19; 29]. Despite their empirical success with careful choices of regularization, their use alone for high-dimensional images has remained limited [55].
Our work uses diffusion models to perform upsampling after a debiasing step implemented with OT. We avoid common issues from GANs [69] and flow-based methods [49], which include over-smoothing, mode collapse and large model footprints [23; 47]. Also, due to the debiasing map, which matches the low-frequency content in distribution (see Fig. 1(a)), we do not need to explicitly impose that the low-frequency power spectra of the two datasets match like some competing methods do [5]. Compared to formulations that perform upsampling and debiasing simultaneously [5; 72], our
Figure 1: (a) Upsampling (super-resolution) as frequency extrapolation in the Fourier domain. The model extrapolates low-frequency content to higher-frequencies (dashed blue). The debiasing map corrects the biased low-frequency content (solid red). (b) Diagram of the proposed framework where \(\mathcal{X}\) is the space of high-resolution data, \(\mathcal{Y}\) is the space of low-resolution data, \(C\) is an _unknown nonlinear_ map linking \(\mathcal{X}\) and \(\mathcal{Y}\), \(C^{\prime}\) is a _known linear_ downsampling map, \(\mathcal{Y}^{\prime}\) is an intermediate (low-resolution) space induced by the image of \(C^{\prime}\), and \(T\) is a invertible debiasing map such that \(C\) can be factorized as \(T^{-1}\circ C^{\prime}\). The conditional probabilities \(p(x|C^{\prime}x=y^{\prime})\) are used for the probabilistic upsampling procedure.
framework performs these two tasks separately, by only training (and independently validating) a single probabilistic diffusion model for the high-resolution data once. This allows us to quickly assess different modeling choices, such as the linear downsampling map, by combining the diffusion model with different debiasing maps. Lastly, in comparison to other two-stage approaches [5; 30], debiasing is conducted at low-resolutions, which is less expensive as it is performed on a much smaller space, and more efficient as it is not hampered from spurious biases introduced by interpolation techniques.
## 3 Methodology
**Setup.** We consider two spaces: the high-fidelity, high-resolution space \(\mathcal{X}=\mathbb{R}^{d}\) and the low-fidelity, low-resolution space \(\mathcal{Y}=\mathbb{R}^{d^{\prime}}\), where we suppose that \(d>d^{\prime}\). We model the elements \(X\in\mathcal{X}\) and \(Y\in\mathcal{Y}\) as random variables with marginal distributions \(\mu_{X}\) and \(\mu_{Y}\), respectively. In addition, we suppose there is a statistical model relating the \(X\) and \(Y\) variables via \(C\colon\mathcal{X}\to\mathcal{Y}\), an unknown and possibly nonlinear downsampling map. See Fig. 1(b) for a diagram.
Given an observed realization \(\bar{y}\in\mathcal{Y}\), which we refer to as a _snapshot_, we formulate downscaling as the problem of sampling from the conditional probability distribution \(p(x|E_{\bar{y}})\) for the event \(E_{\bar{y}}:=\{x\in\mathcal{X}\,|\,C(x)=\bar{y}\}\), which we denote by \(p(x|C(x)=\bar{y})\). Our objective is to sample this distribution given only access to marginal samples of \(X\) and \(Y\).
**Main idea.** In general, downscaling is an ill-posed problem given that the joint distribution of \(X\) and \(Y\) is not prescribed by a known statistical model. Therefore, we seek an approximation to \(C\) so that the statistical properties of \(X\) are preserved given samples of \(\mu_{Y}\). In particular, such a map should satisfy \(C_{\sharp}\mu_{X}=\mu_{Y}\), where \(C_{\sharp}\mu_{X}\) denotes the push-forward measure of \(\mu_{X}\) through \(C\).
In this work, we impose a structured ansatz to approximate \(C\). Specifically, we _factorize_ the map \(C\) as the composition of a known and linear _downsampling map_ \(C^{\prime}\) and an invertible _debiasing map_ \(T\):
\[C=T^{-1}\circ C^{\prime},\quad\text{such that}\quad(T^{-1}\circ C^{\prime})_{ \sharp}\mu_{X}=\mu_{Y}, \tag{1}\]
or, alternatively, \(C^{\prime}_{\sharp}\mu_{X}=T_{\sharp}\mu_{Y}\). This factorization decouples and explicitly addresses two entangled goals in downscaling: debiasing and upsampling. We discuss the advantage of such factorization, after sketching how \(C^{\prime}\) and \(T\) are implemented.
The range of the downsampling map \(C^{\prime}\colon\mathcal{X}\to\mathcal{Y}^{\prime}\) defines an _intermediate_ space \(\mathcal{Y}^{\prime}=\mathbb{R}^{d^{\prime}}\) of high-fidelity low-resolution samples with measure \(\mu_{Y^{\prime}}\). Moreover, the joint space \(\mathcal{X}\times\mathcal{Y}^{\prime}\) is built by projecting samples of \(X\) into \(\mathcal{Y}^{\prime}\), i.e., \((x,y^{\prime})=(x,C^{\prime}x)\in\mathcal{X}\times\mathcal{Y}^{\prime}\); see Fig. 1(b). Using these spaces, we decompose the domain adaptation problem into the following three sub-problems:
1. _High-resolution prior_: Estimate the marginal density \(p(x)\);
2. _Conditional modeling_: For the joint variables \(X\times Y^{\prime}\), approximate \(p(x|C^{\prime}x=y^{\prime})\);
3. _Debiasing_: Compute a transport map such that \(T_{\sharp}\mu_{Y}=C^{\prime}_{\sharp}\mu_{X}\).
For the first sub-problem, we train an _unconditional_ model to approximate \(\mu_{X}\), or \(p(x)\), as explained in §3.1. For the second sub-problem, we leverage the prior model and \(y^{\prime}\in\mathcal{Y}^{\prime}\) to build a model for _a posteriori_ conditional sampling of \(p(x|C^{\prime}x=y^{\prime})\), which allows us to upsample snapshots from \(\mathcal{Y}^{\prime}\) to \(\mathcal{X}\), as explained in §3.2. For the third sub-problem, we use domain adaptation to shift the resulting model from the source domain \(\mathcal{X}\times\mathcal{Y}^{\prime}\) to the target domain \(\mathcal{X}\times\mathcal{Y}\), for which there is no labeled data. For such a task, we build a transport map \(T:\mathcal{Y}\to\mathcal{Y}^{\prime}\) satisfying the condition that \(T_{\sharp}\mu_{Y}=\mu_{Y^{\prime}}=C^{\prime}_{\sharp}\mu_{X}\). This map is found by solving an optimal transport problem, which we explain in §3.3.
Lastly, we merge the solutions to the sub-problems to arrive at our core downscaling methodology, which is summarized in Alg. 1. In particular, given a low-fidelity and low-resolution sample \(\overline{y}\), we use the optimal transport map \(T\) to project the sample to the high-fidelity space \(\overline{y}^{\prime}=T(\overline{y})\) and use the conditional model to sample \(p(x|C^{\prime}x=\overline{y}^{\prime})\). The resulting samples are contained in the high-fidelity and high-resolution space.
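A minimal sketch of this two-stage procedure (Alg. 1), with the trained components passed in as opaque callables, is given below; the names are placeholders for the models described in the following subsections.

```
def downscale(y_bar, ot_map, cond_diffusion_sampler, num_samples=1):
    """Two-stage downscaling: debias with the OT map T, then upsample by
    conditional diffusion sampling."""
    y_prime = ot_map(y_bar)                    # debias: y' = T(y_bar), still low resolution
    # draw samples from p(x | C'x = y') with the conditionally guided diffusion model
    return [cond_diffusion_sampler(y_prime) for _ in range(num_samples)]
```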
The factorization in Eq. (1) has several advantages. We do not require a cycle-consistency type of loss [32; 80]: the consistency condition is automatically enforced by Eq. (1) and the conditional sampling. By using a linear downsampling map \(C^{\prime}\), it is trivial to create the intermediate space \(\mathcal{Y}^{\prime}\), while rendering the conditional sampling tractable: conditional sampling with a nonlinear map is
often more expensive and it requires more involved tuning [16; 17]. The factorization also allows us to compute the debiasing map in a considerably lower dimensional space, which conveniently requires less data to cover the full distribution, and fewer iterations to find the optimal map [20].
### High-resolution prior
To approximate the prior of the high-resolution snapshots we use a probabilistic diffusion model, which is known to avoid several drawbacks of other generative models used for super-resolution [47], while providing greater flexibility for _a posteriori_ conditioning [16].
Intuitively, diffusion-based generative models involve iteratively transforming samples from an initial noise distribution \(p_{T}\) into ones from the target data distribution \(p_{0}=p_{\text{data}}\). Noise is removed sequentially such that samples follow a family of marginal distributions \(p_{t}(x_{t};\sigma_{t})\) for decreasing diffusion times \(t\) and noise levels \(\sigma_{t}\). Conveniently, such distributions are given by a forward noising process that is described by the stochastic differential equation (SDE) [42; 67]
\[dx_{t}=f(x_{t},t)dt+g(x_{t},t)dW_{t}, \tag{2}\]
with drift \(f\), diffusion coefficient \(g\), and the standard Wiener process \(W_{t}\). Following [42], we set
\[f(x_{t},t)=f(t)x_{t}:=\frac{\dot{s}_{t}}{s_{t}}x_{t},\qquad\text{and}\qquad g (x_{t},t)=g(t):=s_{t}\sqrt{2\dot{\sigma}_{t}\sigma_{t}}. \tag{3}\]
Solving the SDE in Eq. (2) forward in time with an initial condition \(x_{0}\) leads to the Gaussian perturbation kernel \(p(x_{t}|x_{0})=\mathcal{N}(x_{t};s_{t}x_{0},s_{t}^{2}\sigma_{t}^{2}\mathbf{I})\). Integrating the kernel over the data distribution \(p_{0}(x_{0})=p_{\text{data}}\), we obtain the marginal distribution \(p_{t}(x_{t})\) at any \(t\). As such, one may prescribe the profiles of \(s_{t}\) and \(\sigma_{t}\) so that \(p_{0}=p_{\text{data}}\) (with \(s_{0}=1,\sigma_{0}=0\)), and more importantly
\[p_{T}(x_{T})\approx\mathcal{N}(x_{T};0,s_{T}^{2}\sigma_{T}^{2}\mathbf{I}), \tag{4}\]
i.e., the distribution at the terminal time \(T\) becomes indistinguishable from an isotropic, zero-mean Gaussian. To sample from \(p_{\text{data}}\), we utilize the fact that the reverse-time SDE
\[dx_{t}=\big{[}f(t)x_{t}-g(t)^{2}\nabla_{x_{t}}\log p_{t}(x_{t})\big{]}dt+g(t) dW_{t}, \tag{5}\]
has the same marginals as Eq. (2). Thus, by solving Eq. (5) backwards using Eq. (4) as the final condition at time \(T\), we obtain samples from \(p_{\text{data}}\) at \(t=0\).
Therefore, the problem is reduced to estimating the _score function_\(\nabla_{x_{t}}\log p_{t}(x_{t})\) resulting from \(p_{\text{data}}\) and the prescribed diffusion schedule \((s_{t},\sigma_{t})\). We adopt the denoising formulation in [42] and learn a neural network \(D_{\theta}(x_{0}+\varepsilon_{t},\sigma_{t})\), where \(\theta\) denotes the network parameters. The learning seeks to minimize the \(L_{2}\)-error in predicting the true sample \(x_{0}\) given a noise level \(\sigma_{t}\) and the sample noised with \(\varepsilon_{t}=\sigma_{t}\varepsilon\) where \(\varepsilon\) is drawn from a standard Gaussian. The score can then be readily obtained from the denoiser \(D_{\theta}\) via the asymptotic relation (i.e., Tweedie's formula [28])
\[\nabla_{x_{t}}\log p_{t}(x_{t})\approx\frac{D_{\theta}(\hat{x}_{t},\sigma_{t })-\hat{x}_{t}}{s_{t}\sigma_{t}^{2}},\qquad\hat{x}_{t}=x_{t}/s_{t}. \tag{6}\]
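For concreteness, the score estimate of Eq. (6) can be written as a small helper around the trained denoiser; the function names are illustrative.

```
def score_from_denoiser(denoiser, x_t, s_t, sigma_t):
    """Score estimate of Eq. (6) (Tweedie's formula), where denoiser(x_hat, sigma)
    approximates E[x_0 | x_0 + sigma * eps = x_hat]; works with numpy or jax arrays."""
    x_hat = x_t / s_t
    return (denoiser(x_hat, sigma_t) - x_hat) / (s_t * sigma_t ** 2)
```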
### A posteriori conditioning via post-processed denoiser
We seek to super-resolve a low-resolution snapshot \(\bar{y}^{\prime}\in\mathcal{Y}^{\prime}\) to a high-resolution one by leveraging the high-resolution prior modeled by the diffusion model introduced above. Abstractly, our goal is to sample from \(p(x_{0}|E^{\prime}_{\bar{y}^{\prime}})\), where \(E^{\prime}_{\bar{y}^{\prime}}=\{x_{0}:C^{\prime}x_{0}\!=\!\bar{y}^{\prime}\}\). This may be approximated by modifying the learned denoiser \(D_{\theta}\) at _inference time_ (see Appendix A for more details):
\[\tilde{D}_{\theta}(\hat{x}_{t},\sigma_{t})=(C^{\prime})^{\dagger}\bar{y}^{ \prime}+(I-VV^{T})\left[D_{\theta}(\hat{x}_{t},\sigma_{t})-\alpha\nabla_{\hat {x}_{t}}\|C^{\prime}D_{\theta}(\hat{x}_{t},\sigma_{t})-\bar{y}^{\prime}\|^{2 }\right], \tag{7}\]
where \((C^{\prime})^{\dagger}=V\Sigma^{-1}U^{T}\) is the pseudo-inverse of \(C^{\prime}\) based on its singular value decomposition (SVD) \(C^{\prime}=U\Sigma V^{T}\), and \(\alpha\) is a hyperparameter that is empirically tuned. The \(\tilde{D}_{\theta}\) defined in Eq. (7) directly replaces \(D_{\theta}\) in Eq. (6) to construct a conditional score function \(\nabla_{x_{t}}\log p_{t}(x_{t}|E^{\prime}_{\bar{y}^{\prime}})\) that facilitates the sampling of \(p(x_{0}|E^{\prime}_{\bar{y}^{\prime}})\) using the reverse-time SDE in Eq. (5).
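A JAX sketch of the post-processed denoiser in Eq. (7) follows, assuming the linear downsampling map and its SVD factors are available as dense arrays acting on flattened snapshots, and that the denoiser is differentiable through JAX; this is an illustration, not the exact implementation.

```
import jax
import jax.numpy as jnp

def conditioned_denoiser(denoiser, x_hat, sigma, y_bar, C, C_pinv, VVt, alpha):
    """Post-processed denoiser of Eq. (7).  C is the linear downsampling matrix,
    C_pinv = V Sigma^{-1} U^T its pseudo-inverse, and VVt = V V^T the projector
    onto the row space of C; all dense here purely for clarity."""
    def data_misfit(x):
        return jnp.sum((C @ denoiser(x, sigma) - y_bar) ** 2)

    guided = denoiser(x_hat, sigma) - alpha * jax.grad(data_misfit)(x_hat)
    return C_pinv @ y_bar + guided - VVt @ guided
```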
### Debiasing via optimal transport
In order to upsample a biased low-resolution sample \(\overline{y}\in\mathcal{Y}\), we first seek a mapping \(T\) such that \(\overline{y}^{\prime}=T(\overline{y})\in\mathcal{Y}^{\prime}\) is a representative sample from the distribution of unbiased low-resolution data. Among the infinitely many maps that satisfy this condition, the framework of optimal transport (OT) selects a map by minimizing an integrated transportation distance based on the cost function \(c\colon\mathcal{Y}\times\mathcal{Y}^{\prime}\to\mathbb{R}^{+}\). The function \(c(y,y^{\prime})\) defines the cost of moving one unit of probability mass from \(y^{\prime}\) to \(y\). By treating \(Y,Y^{\prime}\) as random variables on \(\mathcal{Y},\mathcal{Y}^{\prime}\) with measures \(\mu_{Y},\mu_{Y^{\prime}}\), respectively, the OT map is given by the solution to the Monge problem
\[\min_{T}\left\{\int c(y,T(y))d\mu_{Y}(y):T_{\sharp}\mu_{Y}=\mu_{Y^{\prime}} \right\}. \tag{8}\]
In practice, directly solving the Monge problem is hard and may not even admit a solution [74]. One common relaxation of Eq. (8) is to seek a joint distribution, known as a coupling or transport plan, which relates the underlying random variables [74]. A valid plan is a probability measure \(\gamma\) on \(\mathcal{Y}\times\mathcal{Y}^{\prime}\) with marginals \(\mu_{Y}\) and \(\mu_{Y^{\prime}}\). To efficiently estimate the plan when the \(c\) is the quadratic cost (i.e., \(c(y,y^{\prime})=\frac{1}{2}\|y-y^{\prime}\|^{2}\)), we solve the entropy regularized problem
\[\inf_{\gamma\in\Pi(\mu_{Y},\mu_{Y^{\prime}})}\int\frac{1}{2}\|y-y^{\prime}\|^{2}d\gamma(y,y^{\prime})+\epsilon D_{\text{KL}}(\gamma||\mu_{Y}\otimes\mu_{Y^{\prime}}), \tag{9}\]
where \(D_{\text{KL}}\) denotes the KL divergence and \(\epsilon>0\) is a small regularization parameter, using Sinkhorn's algorithm [20], which leverages the structure of the optimal plan to solve Eq. (9) with small runtime complexity [3]. The solution to Eq. (9) is the transport plan \(\gamma_{\epsilon}\in\Pi(\mu_{Y},\mu_{Y^{\prime}})\) given by
\[\gamma_{\epsilon}(y,y^{\prime})=\exp\left((f_{\epsilon}(y)+g_{\epsilon}(y^{ \prime})-\frac{1}{2}\|y-y^{\prime}\|^{2})/\epsilon\right)d\mu_{Y}(y)d\mu_{Y^{ \prime}}(y^{\prime}), \tag{10}\]
in terms of potential functions \(f_{\epsilon},g_{\epsilon}\) that are chosen to satisfy the marginal constraints. After finding these potentials, we can approximate the transport map using the barycentric projection \(T_{\gamma}(y)=\mathbb{E}_{\gamma}[Y^{\prime}|Y=y]\), for a plan \(\gamma\in\Pi(\mu_{Y},\mu_{Y^{\prime}})\)[2]. For the plan in Eq. (10), the map is given by
\[T_{\gamma_{\epsilon}}(y)=\frac{\int y^{\prime}e^{(g_{\epsilon}(y^{\prime})-\frac{1}{2}\|y-y^{\prime}\|^{2})/\epsilon}d\mu_{Y^{\prime}}(y^{\prime})}{\int e^{(g_{\epsilon}(y^{\prime})-\frac{1}{2}\|y-y^{\prime}\|^{2})/\epsilon}d\mu_{Y^{\prime}}(y^{\prime})}. \tag{11}\]
In this work, we estimate the potential functions \(f_{\epsilon},g_{\epsilon}\) from samples, i.e., empirical approximations of the measures \(\mu_{Y},\mu_{Y^{\prime}}\). Plugging in the estimated potentials in Eq. (11) defines an approximate transport map to push forward samples of \(\mu_{Y}\) to \(\mu_{Y^{\prime}}\). More details on the estimation of the OT map are provided in Appendix H.
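For illustration, a plain-NumPy sketch of the entropic OT solve and the barycentric projection of Eq. (11) on empirical samples follows; the experiments described below use the ott-jax implementation instead, and the iteration count and \(\epsilon\) here are placeholders.

```
import numpy as np
from scipy.special import logsumexp, softmax

def sinkhorn_barycentric_map(Y, Yp, eps=1e-3, iters=2000):
    """Entropic OT between empirical samples of mu_Y (biased LR) and mu_Y' (HFLR),
    followed by the barycentric projection of Eq. (11).
    Y: [n, d'], Yp: [m, d'].  Returns a callable mapping a new y to a debiased y'."""
    n, m = len(Y), len(Yp)
    C = 0.5 * ((Y[:, None, :] - Yp[None, :, :]) ** 2).sum(-1)   # quadratic cost, [n, m]
    log_a, log_b = -np.log(n) * np.ones(n), -np.log(m) * np.ones(m)
    f, g = np.zeros(n), np.zeros(m)
    for _ in range(iters):                                      # log-domain Sinkhorn updates
        f = eps * (log_a - logsumexp((g[None, :] - C) / eps, axis=1))
        g = eps * (log_b - logsumexp((f[:, None] - C) / eps, axis=0))

    def transport(y):
        c = 0.5 * ((y[None, :] - Yp) ** 2).sum(-1)              # cost from y to each y'
        w = softmax((g - c) / eps)                              # weights of Eq. (11)
        return w @ Yp

    return transport
```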
A core advantage of this methodology is that it provides us with the flexibility of changing the cost function \(c\) in Eq. (8), and embed it with structural biases that one wishes to preserve in the push-forward distribution. Such direction is left for future work.
## 4 Numerical experiments
### Data and setup
We showcase the efficacy and performance of the proposed approach on one- and two-dimensional fluid flow problems that are representative of the core difficulties present in numerical simulations of weather and climate. We consider the one-dimensional Kuramoto-Sivashinski (KS) equation and the two-dimensional Navier-Stokes (NS) equation under Kolmogorov forcing (details in Appendix F) in periodic domains. The low-fidelity (LF), low-resolution (LR) data (\(\mathcal{Y}\) in Fig. 1(b)) is generated using a finite volume discretization in space [46] and a fractional discretization in time, while the high-fidelity
(HF), high-resolution (HR) data (\(\mathcal{X}\) in Fig. 1(b)) is simulated using a spectral discretization in space with an implicit-explicit scheme in time. Both schemes are implemented with jax-cfd and its finite-volume and spectral toolboxes [27, 43] respectively. After generating the HF data in HR, we run the LF solver using a spatial discretization that is \(8\times\) coarser (in each dimension) with permissible time steps. For NS, we additionally create a \(16\times\) coarser LFLR dataset by further downsampling by a factor of two the \(8\times\) LFLR data. See Appendix F for further details.
For both systems, the datasets consist of long trajectories generated with random initial conditions2, which are sufficiently downsampled in time to ensure that consecutive samples are decorrelated. We stress once more that even when the grids and time stamps of both methods are aligned, there is _no pointwise correspondence_ between elements of \(\mathcal{X}\) and \(\mathcal{Y}\). This arises from the different modeling biases inherent to the LF and HF solvers, which inevitably disrupt any short-term correspondence over the long time horizon in a strongly nonlinear dynamical setting.
Footnote 2: The presence of global attractors in both systems renders the exact initial conditions unimportant. It also guarantees sufficient coverage of the target distributions sampling from long trajectories.
Finally, we create the intermediate space \(\mathcal{Y}^{\prime}\) in Fig. 1(b) by downsampling the HFHR data with a simple selection mask3 (i.e., the map \(C^{\prime}\)). This creates the new HFLR dataset \(\mathcal{Y}^{\prime}\) with the same resolution as \(\mathcal{Y}\), but with the low-frequency bias structure of \(\mathcal{X}\) induced by the pushforward of \(C^{\prime}\).
Footnote 3: It is worth noting that careful consideration should be given to the choice of \(C^{\prime}\) to avoid introducing aliasing, as this can potentially make the downscaling task more challenging.
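A minimal sketch of such a selection-mask downsampling map \(C^{\prime}\) follows; the offsets and the absence of anti-aliasing are illustrative choices (cf. footnote 3).

```
import numpy as np

def selection_mask_downsample(x, factor=8):
    """Linear downsampling map C': keep every `factor`-th grid point of a
    high-resolution snapshot (1D for KS, 2D for NS)."""
    return x[::factor] if x.ndim == 1 else x[::factor, ::factor]
```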
**Baselines and definitions.** We define the following ablating variants of our proposed method
* Unconditional diffusion sampling (_UncondDfn_).
* Diffusion sampling conditioned on LFLR data without OT correction (_Raw cDfn_).
* [_Main_] Diffusion sampling conditioned on OT-corrected (HFLR) data (_OT+cDfn_).
We additionally consider the following baselines to benchmark our method:
* Tricubic interpolation approximating HR target using local third-order splines (_triCub_).
* Vision transformer (ViT) [26] based deterministic super-resolution model (_ViT_).
* CycleGAN, which is adapted from [80] to enable learning transformations between spaces of different dimensions (_cycGAN_).
The first two baselines require paired data and, therefore, learn the upsampling map \(\mathcal{Y}^{\prime}\rightarrow\mathcal{X}\) (i.e., HFLR to HFHR) as factorized benchmarks. The last baseline presents an end-to-end alternative and is trained directly on unpaired LFLR and HFHR samples. Further information about the implemented baselines can be found in Appendix D.
**OT training.** To learn the transport map in Eq. (11), we solve the entropic OT problem in Eq. (9) with \(\epsilon=0.001\) using a Sinkhorn [20] iteration with Anderson acceleration and parallel updates. We use \(90,000\) i.i.d. samples of \(Y\in\mathcal{Y}\) and \(Y^{\prime}\in\mathcal{Y}^{\prime}\), and perform \(5000\) iterations. Implementations are based on the \(\tt{ott\text{-}jax}\) library [21].
**Denoiser training and conditional sampling.** The denoiser \(D_{\theta}\) is parametrized with a standard U-Net architecture similar to the one used in [61]. We additionally incorporate the preconditioning technique proposed in [42]. For \(s_{t}\) and \(\sigma_{t}\) schedules, we employ the variance-preserving (VP) scheme originally introduced in [67]. Furthermore, we adopt a data augmentation procedure to increase the effective training data size by taking advantage of the translation symmetries in the studied systems.
Samples are generated by solving the reverse-time SDE based on the post-processed denoiser \(\tilde{D}_{\theta}\) using the Euler-Maruyama scheme with exponential time steps, i.e., \(\{t_{i}\}\) is set such that \(\sigma(t_{i})=\sigma_{\text{max}}(\sigma_{\text{min}}/\sigma_{\text{max}})^{i/N}\) for \(i=\{0,...,N\}\). The number of steps used, \(N\), varies between systems and downscaling factors. More details regarding denoiser training and sampling are included in Appendix B.
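For concreteness, a minimal Euler-Maruyama sketch of the reverse-time integration of Eq. (5) follows; constructing the decreasing time grid from the exponential \(\sigma\) schedule and the VP-specific \(f,g\) profiles is assumed to be done separately.

```
import numpy as np

def em_reverse_sde_sample(score_fn, f, g, x_T, t_steps, rng=None):
    """Euler-Maruyama integration of the reverse-time SDE (Eq. 5) from t = T down
    to t ~ 0.  `t_steps` is a decreasing array of diffusion times; score_fn(x, t)
    is the (conditional) score built from the post-processed denoiser."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = x_T
    for t_cur, t_next in zip(t_steps[:-1], t_steps[1:]):
        dt = t_next - t_cur                                   # negative step
        drift = f(t_cur) * x - g(t_cur) ** 2 * score_fn(x, t_cur)
        x = x + drift * dt + g(t_cur) * np.sqrt(-dt) * rng.standard_normal(x.shape)
    return x
```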
**Metrics.** To quantitatively assess the quality of the resulting snapshots we compare three physical and statistical properties of the snapshots: (i) the energy spectrum, which measures the energy in each Fourier mode and thereby providing insights into the similarity between the generated and reference samples, (ii) a spatial covariance metric, which characterizes the spatial correlations within the snapshots, and (iii) the KL-divergence of the kernel estimation (KDE-KLD) for each point, which serves as a measure for the local structures. We present (i) below and leave the latter two described in Appendix C as they are commonly used in probabilistic modeling.
The energy spectrum is defined4 as
Footnote 4: This definition is applied to each sample and averaged to obtain the metric (same for MELR below).
\[E(k)=\sum_{|\underline{k}|=k}|\hat{u}(\underline{k})|^{2}=\sum_{|\underline{k}| =k}\left|\sum_{i}u(x_{i})\exp(-j2\pi\underline{k}\cdot x_{i}/L)\right|^{2} \tag{12}\]
where \(u\) is a snapshot system state, and \(k\) is the magnitude of the wave-number (wave-vector in 2D) \(\underline{k}\). To assess the overall consistency of the spectrum between the generated and reference samples using a single scalar measure, we consider the mean energy log ratio (MELR):
\[\text{MELR}=\sum_{k}w_{k}\left|\log\left(E_{\text{pred}}(k)/E_{\text{ref}}(k) \right)\right|, \tag{13}\]
where \(w_{k}\) represents the weight assigned to each \(k\). We further define \(w_{k}^{\text{unweighted}}=1/\text{card}(k)\) and \(w_{k}^{\text{weighted}}=E_{\text{ref}}(k)/\sum_{k}E_{\text{ref}}(k)\). The latter skews more towards high-energy/low-frequency modes.
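A minimal NumPy sketch of the spectrum and MELR computations for a 2D periodic field follows; the binning conventions are illustrative.

```
import numpy as np

def energy_spectrum(u):
    """Isotropic energy spectrum E(k) of Eq. (12) for a 2D periodic field u."""
    uhat = np.fft.fft2(u)
    kx = np.fft.fftfreq(u.shape[0]) * u.shape[0]
    ky = np.fft.fftfreq(u.shape[1]) * u.shape[1]
    k = np.rint(np.hypot(*np.meshgrid(kx, ky, indexing="ij"))).astype(int)
    E = np.zeros(k.max() + 1)
    np.add.at(E, k, np.abs(uhat) ** 2)        # bin |u_hat|^2 by wavenumber magnitude
    return E

def melr(E_pred, E_ref, weighted=True):
    """Mean energy log ratio of Eq. (13); bins with zero reference energy are skipped."""
    mask = E_ref > 0
    ratio = np.abs(np.log(E_pred[mask] / E_ref[mask]))
    w = E_ref[mask] / E_ref[mask].sum() if weighted else np.full(mask.sum(), 1.0 / mask.sum())
    return float((w * ratio).sum())
```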
In Appendix E, we present additional ablation studies that demonstrate the importance of OT correction in the factorized benchmarks.
**Comparison vs. factorized alternatives.** Fig. 3 displays NS samples generated by all benchmarked methods. Qualitatively, our method is able to provide highly realistic small-scale features. In comparison, we observe that _triCub_ expectedly yields the lowest quality results; the deterministic _ViT_ produces samples with color shift and excessive smoothing, especially at \(16\times\) downscaling factor.
Quantitatively, our method outperforms all competitors in terms of MELR and KLD metrics in the NS tasks, while demonstrating consistently good performance in both \(8\times\) and \(16\times\) downscaling, despite the lack of recognizable features in the uncorrected LR data (Fig. 3(a) bottom) in the latter case. Other baselines, on the other hand, experience a significant performance drop. This showcases the value of having an unconditional prior to rely on when the conditioning provides limited information.
**Comparison vs. end-to-end downscaling.** Although the _cycGAN_ baseline is capable of generating high-quality samples at \(8\times\) downscaling (albeit with some smoothing) reflecting competitive metrics, we encountered persistent stability issues during training, particularly in the \(16\times\) downscaling case.
**Diffusion samples exhibit ample variability.** Due to the probabilistic nature of our approach, we can observe from Table 2 that the OT-conditioned diffusion model provides some variability in the downscaling task, which increases when the downscaling factor increases. This variability provides a measure of uncertainty quantification in the generated snapshots as a result of the consistent formulation of our approach on probability spaces.
\begin{table}
\begin{tabular}{|l|c c c c c|c c c c c|} \cline{2-11} \multicolumn{1}{c|}{} & \multicolumn{5}{c|}{NS \(8\times\) downsample} & \multicolumn{5}{c|}{NS \(16\times\) downsample} \\ \hline \multirow{2}{*}{**Metric**} & cyc & Raw & OT+ & OT+ & OT+ & cyc & Raw & OT+ & OT+ & OT+ \\ & GAN & cDfn & triCub & ViT & cDfn & GAN & cDfn & triCub & ViT & cDfn \\ \hline Constraint RMSE \(\downarrow\) & - & 0.001 & 0 & 1.52 & 0.001 & - & 0.001 & 0 & 0.72 & 0.001 \\ Sample Variability \(\uparrow\) & 0 & 0.27 & 0 & 0 & 0.36 & 0 & 1.07 & 0 & 0 & 1.56 \\ MELR (unweighted) \(\downarrow\) & 0.08 & 0.79 & 0.52 & 0.38 & **0.06** & 1.14 & 0.54 & 0.55 & 1.38 & **0.05** \\ MELR (weighted) \(\downarrow\) & 0.05 & 0.37 & 0.06 & 0.18 & **0.02** & 0.28 & 0.30 & 0.13 & 0.09 & **0.02** \\ KDE-KLD \(\downarrow\) & 1.62 & 73.16 & 1.46 & 1.72 & **1.40** & 2.05 & 93.87 & 7.30 & 1.67 & **0.83** \\ \hline \end{tabular}
\end{table}
Table 2: Evaluation of conditional sampling for NS.
Figure 3: Example showing the vorticity field of samples debiased and super-resolved using different techniques at \(8\times\) (top row) and \(16\times\) (bottom row) downscaling factors. From left to right: (a) LR snapshot produced by the low-fidelity solver (input \(\bar{y}\) of Alg. 1), (b) OT-corrected snapshot (\(\bar{y}^{\prime}\) in line 1 of Alg. 1), (c) downscaled snapshot with cycle-GAN directly from LR snapshot, (d) tricubic interpolation of the OT-corrected snapshot, (e) deterministic upsample of the OT-corrected snapshot, (f) diffusion sample conditioned on the OT-corrected snapshot (output \(\bar{x}\) in Alg. 1), and (g) two true HR samples in the training data with the closest Euclidean distance to the OT-corrected generated sample. The \(16\times\) source is the same as the \(8\times\) source but further downsampled by a factor of two. OT maps are computed independently between resolutions.
## 5 Conclusion
We introduced a two-stage probabilistic framework for the statistical downscaling problem. The framework performs a debiasing step to correct the low-frequency statistics, followed by a upsampling step using a conditional diffusion model. We demonstrate that when applied to idealized physical fluids, our method provides high-resolution samples whose statistics are physically correct, even when there is a mismatch in the low-frequency energy spectra between the low- and high-resolution data distributions. We have shown that our method is competitive and outperforms several commonly used alternative methods.
Future work will consider fine-tuning transport maps by adapting the map to the goal of conditional sampling, and introducing physically-motivated cost functions in the debiasing map. Moreover, we will address current limitations of the methodology, such as the high-computational complexity of learning OT-maps that scales quadratically with the size of the training set, and investigate the model's robustness to added noise in the collected samples as is found in weather/climate datasets. The code and data used in this work will be released after review.
## Broader impact
Statistical downscaling is important to weather/climate modeling. In this work, we propose a new method for improving the accuracy of high-resolution weather/climate forecasts (on which risk assessments would be made) obtained from low-resolution climate modeling. Weather/climate research and other scientific communities in computational fluid dynamics will benefit from this work for its potential to reduce computational costs. We do not believe this research will disadvantage anyone.
|
2307.03603 | Measurement of the low-energy antitriton inelastic cross section | In this Letter, the first measurement of the inelastic cross section for
antitriton$-$nucleus interactions is reported, covering the momentum range of
$0.8 \leq p < 2.4$ GeV/$c$. The measurement is carried out using data recorded
with the ALICE detector in pp and Pb$-$Pb collisions at a centre-of-mass energy
per nucleon of 13 TeV and 5.02 TeV, respectively. The detector material serves
as an absorber for antitriton nuclei. The raw yield of (anti)triton nuclei
measured with the ALICE apparatus is compared to the results from detailed
ALICE simulations based on the GEANT4 toolkit for the propagation of
(anti)particles through matter, allowing one to quantify the inelastic
interaction probability in the detector material. This analysis complements the
measurement of the inelastic cross section of antinuclei up to $A=3$ carried
out by the ALICE Collaboration, and demonstrates the feasibility of the study
of the isospin dependence of inelastic interaction cross section with the
analysis techniques presented in this Letter. | ALICE Collaboration | 2023-07-07T13:45:09Z | http://arxiv.org/abs/2307.03603v2 | # Measurement of the low-energy antitriton inelastic cross section
###### Abstract
In this Letter, the first measurement of the inelastic cross section for antitriton-nucleus interactions is reported, covering the momentum range of \(0.8\leq p<2.4\) GeV\(/c\). The measurement is carried out using data recorded with the ALICE detector in pp and Pb-Pb collisions at a centre-of-mass energy per nucleon of 13 TeV and 5.02 TeV, respectively. The detector material serves as an absorber for antitriton nuclei. The raw yield of (anti)triton nuclei measured with the ALICE apparatus is compared to the results from detailed ALICE simulations based on the Geant4 toolkit for the propagation of (anti)particles through matter, allowing one to quantify the inelastic interaction probability in the detector material. This analysis complements the measurement of the inelastic cross section of antinuclei up to \(A=3\) carried out by the ALICE Collaboration, and demonstrates the feasibility of the study of the isospin dependence of inelastic interaction cross section with the analysis techniques presented in this Letter.
(c) 2023 CERN for the benefit of the ALICE Collaboration.
Reproduction of this article or parts of it is allowed as specified in the CC-BY-4.0 license.
## 1 Introduction
The properties of light antinuclei (such as antideuteron \(\overline{\mathrm{d}}\), antihelium \({}^{3}\overline{\mathrm{He}}\) and antitriton \({}^{3}\overline{\mathrm{H}}\)) have been a subject of various studies at accelerators in the last few decades. These objects, composed of antiprotons and antineutrons, are stable in vacuum, but annihilate when coming into contact with normal matter. So far, light antinuclei have only been observed in high-energy particle collisions at various fixed-target and collider experiments, from the AGS [1, 2, 3, 4], to the SPS [5], RHIC [6, 7, 8, 9, 10, 11], and the LHC [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28]. Their production mechanism has been actively studied in the context of matter-antimatter asymmetry and of understanding the formation of composite nuclear objects [29, 30, 31, 32, 23, 24, 33]. However, until very recently, the studies of their inelastic interactions with matter have been difficult, since obtaining an isolated beam of light antinuclei with precisely determined momentum comprises formidable challenges. In fact, only the antideuteron inelastic cross section had been measured at total momenta of 13.3 GeV/\(c\)[34] and 25 GeV/\(c\)[35].
Recently, new measurements of the inelastic cross sections of antinuclei with matter at low momenta (\(p\leq 8\) GeV/\(c\)) were reported by the ALICE Collaboration, both for \(\overline{\mathrm{d}}\)[36] and \({}^{3}\overline{\mathrm{He}}\)[37]. While these measurements were mainly motivated by their impact on astrophysical dark-matter searches, they also allow a more detailed look into the physics of low-energy antinucleus-nucleus interactions, which can be used to improve Monte Carlo (MC) simulations of particle physics detectors and to better understand the properties of these composite objects at low kinetic energies. More accurate MC simulations of inelastic interactions with the detector material also allow one to improve the measurements of the production yields of antinuclei [21, 23, 38]. However, one aspect which so far has not been accessible with these measurements is the isospin dependence of the inelastic cross section of composite antinuclei. This effect is difficult to evaluate since the lightest isospin-partner antinuclei are \({}^{3}\overline{\mathrm{He}}\) and \({}^{3}\overline{\mathrm{H}}\), which are only produced in limited quantities in high-energy hadronic collisions at accelerators [21].
In this Letter the first measurement of the antitriton-nucleus inelastic interaction cross section \(\sigma_{\rm inel}(^{3}\overline{\rm H})\) is reported, covering the momentum range \(0.8\leq p<2.4\) GeV/\(c\).
which is provided either by the TOF itself or by the T0 detectors, two arrays of Cherenkov counters located at forward rapidities [46]. Together with the momentum of the particle obtained from the track curvature, the time-of-flight measurement allows one to determine the particle's mass. Finally, a stainless steel space-frame supporting structure and the Transition Radiation Detector (TRD) are located between the TPC and the TOF; while neither the space-frame nor the TRD is used directly in this analysis, they significantly contribute to the total material budget in which antinuclei can interact inelastically [36, 37].
The results presented in this Letter are based on the data collected during the 2016, 2017, and 2018 LHC operation with proton beams at a centre-of-mass energy \(\sqrt{s}\) = 13 TeV with the high-multiplicity (HM) trigger, and the 2018 Pb-Pb campaign at \(\sqrt{s_{\rm NN}}\) = 5.02 TeV with the minimum bias (MB), central (0-10%) or semi-central (30-50%) event triggers. The central event trigger corresponds to the selection of 10% of all inelastic events with the highest signal amplitude in the V0 detectors (0-10% centrality [47]), whereas semi-central trigger is tuned to select events within 30-50% centrality range. Collision events are selected by using the information from the V0 detectors [48], which consist of two plastic scintillator arrays located on both sides of the interaction point at forward and backward pseudorapidities. The V0 detectors are also used to reject other sources of tracks such as beam-gas interactions and interactions within the beampipe. For the MB event trigger, coincident signals in both V0 scintillators are required to be synchronous with the beam crossing time defined by the LHC clock. In order to trigger on events with high charged-particle multiplicities, the total signal amplitude measured in the V0 detector is additionally required to exceed a given threshold for a given event activity. This selects the 0.17% of events with the highest multiplicity in the V0 detectors in pp collisions. An additional cut is made on the z position of the primary vertex (\(|Vtx_{z}|<10\) cm).
In total, about \(10^{9}\) events from the pp data sample are selected for further analysis, while for the Pb-Pb dataset about \(230\times 10^{6}\) events are analysed.
## 3 Data analysis
Once produced in the initial collisions between protons or heavy ions, antitriton nuclei traverse the detector, with some of them interacting inelastically with the detector material. In a traditional fixed-target experiment, the measurement of a cross section implies the availability of a beam of the particle of interest, which traverses the target in order to determine the loss of particles. Due to the unfeasibility of isolating a beam of low-energy antinuclei, the measurement presented in this work relies on ratios that are sensitive to this loss without depending on the absolute number of produced antinuclei. Two methods have been applied to evaluate the inelastic cross section of antitritons, \(\sigma_{\rm inel}(\overline{{}^{3}\rm H})\), similarly to what has been done in Refs. [36, 37] for the \({}^{3}\overline{\rm He}\) and \(\overline{\rm d}\) inelastic cross section measurements.
The first method, namely the antibaryon-to-baryon method, was used to analyse the high-multiplicity pp collisions and is based on the comparison of measured \({}^{3}\overline{\rm H}\) and \({}^{3}\rm H\) yields. Due to the extra inelastic processes which occur as particles traverse the detector, less \({}^{3}\overline{\rm H}\) is detected than its matter counterpart. This loss is quantified by comparison with detailed MC simulations of the ALICE detector. This method makes use of the fact that the relative amounts of matter and antimatter produced at LHC energies are almost equal [49, 50], and for \({}^{3}\overline{\rm H}\) and \({}^{3}\rm H\) can be calculated from the antiproton-to-proton ratio [51, 52] to be 0.994 \({}^{+0.006}_{-0.045}\). Thus, the antiparticle-to-particle ratio is sensitive to the loss of antinuclei. This method allows one to extend the measurement of \({}^{3}\overline{\rm H}\) up to \(p_{\rm primary}=2.4\) GeV/\(c\), but is challenging for application in Pb-Pb collisions due to copious \({}^{3}\rm H\) background from spallation processes [19].
The second method, applied in the analysis of Pb-Pb data, relies on the measurement of the amount of raw \({}^{3}\overline{\rm H}\) reconstructed in the TOF detector and of those reconstructed in the TPC. This is the TOF-to-TPC method employed in Ref. [37]. In this case, the number of \({}^{3}\overline{\rm H}\) nuclei tagged in the TPC and measured in the TOF is compared to that of \({}^{3}\overline{\rm H}\) reconstructed in the TPC, which allows one to quantify the inelastic processes occurring in the material between the TPC and the TOF detectors. This is an almost direct
analogy of a fixed target experiment, as the particles identified in the TPC act as the beam, while the ones reaching the TOF represent the particles surviving after the target. In order to extract the inelastic cross section while avoiding any bias due to finite detector acceptance and tracking efficiencies, data are compared to MC simulations that reproduce the conditions of the data taking. This method is applicable only in the momentum range in which \({}^{3}\overline{\rm H}\) can be clearly identified in both TPC and TOF detectors (\(0.9<p_{\rm primary}<1.5\) GeV/\(c\)) and is used in the analysis of Pb-Pb data due to larger yield of produced \({}^{3}\overline{\rm H}\) nuclei compared to pp collisions.
For both methods, the experimental results are compared with the corresponding ratios evaluated by means of a full-scale MC Geant4 simulation with varied \(\sigma_{\rm inel}(^{3}\overline{\rm H})\) cross sections, as described in section 3.4. In both cases, using ratios instead of individual particle yields allows us to extract the antitriton inelastic cross section independently from its production cross section. Further details on the data analysis are described in the following sections.
### Track selection and particle identification
(Anti)\({}^{3}\)H candidates are selected from a sample of charged-particle tracks reconstructed in the ITS and TPC in the pseudorapidity range \(|\eta|<0.8\) (0.75) for the Pb-Pb (pp) data sample, and at midrapidity with \(|\eta|<0.5\) for both samples. Several track quality criteria are applied, such as a minimum number of clusters in the TPC of at least 100 out of a maximum of 159, and at least 2 in the ITS, with at least one cluster located in any of the two innermost ITS layers. Furthermore, the number of TPC crossed rows is constrained to be more than 70. A good quality of the track fit is achieved by requiring the \(\chi^{2}\) per TPC reconstructed point to be less than 2.5 (4) for the Pb-Pb (pp) data sample. These cuts are stricter in the Pb-Pb data sample due to the higher track density environment. The number of TPC clusters used in the calculation of the specific energy loss (\({\rm d}E/{\rm d}x\)) is required to be larger than 100 (55) to ensure a good \({\rm d}E/{\rm d}x\) resolution in the Pb-Pb (pp) dataset. The contribution from secondary tracks in the pp data sample is reduced by selecting a maximum distance of closest approach (DCA) to the primary vertex in the transverse plane (\({\rm DCA}_{\rm xy}\)) and in the longitudinal direction (\({\rm DCA}_{\rm z}\)) lower than 0.1 cm.
The (anti)triton candidate tracks are identified using the information on the specific energy loss \({\rm d}E/{\rm d}x\) in the TPC gas volume, which is complemented by the information on the time-of-flight measurement in the TOF detector. The \(n\sigma^{\rm TPC}\) variable represents the PID response in the TPC expressed in terms of the deviation between the measured and expected \({\rm d}E/{\rm d}x\), normalised by the detector resolution \(\sigma\). The expected \({\rm d}E/{\rm d}x\) is computed with a parameterised Bethe-Bloch formula [42]. For the antibaryon-to-baryon method, (anti)tritons are selected in the TPC by applying the selection criterion \(|n\sigma^{\rm TPC}|<3.0\), once the requirement on the squared mass hypothesis of the (anti)tritons \(|{\rm m}^{2}_{\rm TOF}-{\rm m}^{2}_{{}^{3}{\rm H}}|<2\) (GeV/\(c^{2}\))\({}^{2}\) measured with the TOF detector is fulfilled. Candidate tracks which do not reach the TOF are not considered in the analysis. This selection is sufficient to obtain a purity close to 100% for (anti)tritons in the full momentum range explored in this analysis (\(1.3<p<2.4\) GeV/\(c\)).
For the TOF-to-TPC method, \({}^{3}\overline{\rm H}\) candidates need to be reconstructed in both TPC and TOF detectors in the same momentum intervals, hence the analysis is restricted to the momentum range of \(0.9<p<1.5\) GeV/\(c\). Antitriton candidates are selected in the TPC by applying the selection of \(-2.0<n\sigma^{\rm TPC}<3.5\), and then matched to TOF hits. The asymmetric selection interval allows one to suppress the contamination due to other particle species, misidentified as \({}^{3}\overline{\rm H}\) in the TPC, with lower \({\rm d}E/{\rm d}x\) signal. Any residual contamination in the \(n\sigma^{\rm TPC}\) distributions is fitted with an exponential function and subtracted. Such a contribution is negligible at low momentum and amounts to < 1% in the momentum interval \(1.3<p<1.5\) GeV/\(c\). Similarly to the \(n\sigma^{\rm TPC}\), the \(n\sigma^{\rm TOF}\) variable represents the deviation between the measured and expected time-of-flight of (anti)tritons, normalised by the resolution of the time-of-flight measurement. In the TOF, \({}^{3}\overline{\rm H}\) nuclei are selected in the range of \(-3.0<n\sigma^{\rm TOF}<4.0\). The reason for an asymmetric interval is the presence of an exponential tail towards higher values of \(n\sigma^{\rm TOF}\), which reflects the TOF detector time response [45].
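As an illustration of the selections described above, the following minimal sketch applies the asymmetric \(n\sigma\) intervals to arrays of candidate-track quantities. The array names and data layout are hypothetical and only serve to make the cut logic explicit; the actual analysis is carried out within the ALICE software framework.

```python
import numpy as np

def select_antitriton_candidates(nsigma_tpc, nsigma_tof, p):
    """Apply the asymmetric TPC/TOF PID selections quoted in the text
    (TOF-to-TPC method, 0.9 < p < 1.5 GeV/c). Inputs are NumPy arrays of
    per-track quantities; returns a boolean mask of selected tracks."""
    in_momentum_range = (p > 0.9) & (p < 1.5)
    tpc_cut = (nsigma_tpc > -2.0) & (nsigma_tpc < 3.5)   # suppress lower-dE/dx species
    tof_cut = (nsigma_tof > -3.0) & (nsigma_tof < 4.0)   # asymmetric: exponential TOF tail
    return in_momentum_range & tpc_cut & tof_cut

# Toy example with random numbers, only to show the call signature
rng = np.random.default_rng(0)
mask = select_antitriton_candidates(rng.normal(0.0, 1.0, 1000),
                                    rng.normal(0.0, 1.0, 1000),
                                    rng.uniform(0.8, 1.6, 1000))
print(mask.sum(), "candidates selected")
```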
Particularly for this analysis, a special reconstruction of the Pb-Pb collisions was used, in order to minimise the sensitivity to elastic scattering in the material. In the TOF, tracks are only associated with a hit cluster if the cluster is within a certain distance from the extrapolated position of the track reconstructed in the TPC. This maximum distance was increased in this reconstruction, from 3 cm to 10 cm, with the latter being the usual window for pp collisions.
### Corrections
The sample of \({}^{3}\)H candidates reconstructed as described above includes both the nuclei produced in the initial collisions and those produced in the spallation processes occurring when particles traverse the beam pipe and the detector material. Since this process produces nuclei by spallation from other heavier nuclei, it cannot produce antinuclei, but only secondary nuclei. The antibaryon-to-baryon ratio method employed in pp collisions is sensitive to these secondary nuclei, since it compares the yields of antitriton to those of triton in order to measure the inelastic cross section. Therefore, this effect needs to be corrected for. In order to distinguish between tritons originating from the initial collision and those originating from spallation processes, their different DCAs to the primary vertex in the transverse plane are used. While primary nuclei have DCA distributions peaked around zero, secondary nuclei show much broader DCA distributions which are flat over most of the studied DCA range. The different DCA distributions for the two types of nuclei are studied with templates from detailed MC simulations, and the relative contributions are thereby obtained, as shown in detail in Ref. [19]. The uncertainty from this correction is estimated to be below 5%.
A second correction is applied to account for any energy loss which occurs before the inelastic interactions, due to ionisation and rescattering processes. The energy loss has to be estimated differently for the two approaches, as what needs to be corrected in the antibaryon-to-baryon method is the energy loss between the primary vertex and the point where the \({}^{3}\overline{\mathrm{H}}\) inelastic interaction occurs, while for the TOF-to-TPC method it is the energy loss between the TPC and the interaction point. For the antibaryon-to-baryon method, the momentum of the tracks is estimated by measuring their curvature, and it is then propagated towards the primary vertex accounting for the measured energy loss, yielding the momentum at the primary vertex, \(p_{\mathrm{primary}}\).
Due to continuous energy-loss effects in the detector material, the inelastic interaction of the antinuclei with the detector material happens at a momentum \(p\), which is lower than the momentum \(p_{\mathrm{primary}}\) reconstructed at the primary collision vertex. The corresponding effect is taken into account utilising MC simulations in which one has precise information about both momenta for each (anti)particle. In the analysis of pp collisions using the antibaryon-to-baryon method, the average values of the \(p/p_{\mathrm{primary}}\) distributions in each analysed \(p_{\mathrm{primary}}\) interval are used to account for the energy loss. The root mean square (RMS) of these distributions is used to determine the uncertainty of the momentum \(p\), which is propagated to the uncertainty of the measured cross section. For the analysis of the Pb-Pb data sample using the TOF-to-TPC method, the MC information on the momenta of daughter tracks originating from the \({}^{3}\overline{\mathrm{H}}\) annihilation is used to estimate the momentum of the particle at annihilation. This is compared to \(p_{\mathrm{primary}}\) to estimate the magnitude of this effect and the resulting uncertainty. The uncertainty on the inelastic cross section from the continuous energy loss is evaluated to be less than 2% for the antibaryon-to-baryon method and less than 2.5% for the TOF-to-TPC method.
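The momentum correction described above can be summarized by the following sketch, which assumes one has MC arrays of the momentum at the primary vertex and at the interaction point; the variable names are illustrative and not part of the actual analysis code.

```python
import numpy as np

def momentum_shift_table(p_primary_mc, p_interaction_mc, bin_edges):
    """For each p_primary interval, return the mean and RMS of the MC
    p/p_primary distribution, i.e. the average momentum shift between the
    primary vertex and the point of the inelastic interaction."""
    p_primary_mc = np.asarray(p_primary_mc, dtype=float)
    ratio = np.asarray(p_interaction_mc, dtype=float) / p_primary_mc
    idx = np.digitize(p_primary_mc, bin_edges) - 1
    means = np.array([ratio[idx == i].mean() for i in range(len(bin_edges) - 1)])
    rms = np.array([ratio[idx == i].std() for i in range(len(bin_edges) - 1)])
    return means, rms

# The cross section in a given interval is then quoted at p = <p/p_primary> * p_primary,
# with the RMS propagated into the uncertainty of the measured cross section.
```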
### Systematic uncertainties
Several sources of systematic uncertainties have been considered, depending on the method used for the inelastic cross section extraction. The total uncertainty is obtained as the quadratic sum of the individual contributions, assuming uncorrelated contributions.
The first source of systematic uncertainty, investigated for both methods, is due to the track selection criteria. The criteria have been varied 100 times, both in data and MC, using random uniform distributions around the nominal values. In the case of the antibaryon-to-baryon method, the relative systematic uncertainty is given by the RMS divided by the mean value of the distributions of the \({}^{3}\overline{\rm H}\)-to-\({}^{3}\)H ratios in each momentum interval. For the TOF-to-TPC ratio method, the systematic uncertainty is evaluated as half the maximum difference between the resulting inelastic cross sections. Variations consistent with statistical fluctuations are rejected from the trials used to estimate the systematic uncertainties according to the Barlow criterion [53]. This contribution is 13% for the TOF-to-TPC method, flat in momentum, while it is rejected by the Barlow test in the other method, using a cutoff value of 2\(\sigma_{\rm Barlow}\). The difference in the systematic uncertainty due to track selection criteria between the two methods arises because, in the antibaryon-to-baryon method, the effect of the track selection applies similarly to both the baryon and antibaryon tracks, and thus cancels out to a large extent.
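For reference, a common implementation of the Barlow test used to reject cut variations consistent with statistical fluctuations is sketched below; the exact convention, including the 2\(\sigma_{\rm Barlow}\) cutoff quoted above, follows Ref. [53], and the numbers in the example call are illustrative only.

```python
import numpy as np

def barlow_significant(x_var, err_var, x_def, err_def, n_sigma=2.0):
    """One common form of the Barlow test: a cut variation is kept as a
    systematic only if its deviation from the default result exceeds
    n_sigma times the uncorrelated part of the statistical uncertainty."""
    denom = np.sqrt(abs(err_var**2 - err_def**2))
    return denom > 0 and abs(x_var - x_def) / denom > n_sigma

# Trials failing this test are treated as statistical fluctuations and are
# excluded from the spread used to quote the systematic uncertainty.
print(barlow_significant(x_var=1.15, err_var=0.10, x_def=1.00, err_def=0.08))
```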
For the antibaryon-to-baryon ratio method, the systematic uncertainty due to the effect of secondary nuclei from spallation on the ratio is considered. This correction is described in section 3.2. Variations of the binning and of the fit range (from \(\pm 1\) cm to \(\pm 2\) cm around the primary vertex) have been performed to evaluate this uncertainty, which is \(\sim\)6% in the lowest momentum bin and \(<\) 1% at higher momenta. For the antibaryon-to-baryon method [36], three additional uncertainties are included: i) the uncertainty on the primordial antimatter-to-matter ratio produced in collisions, ii) the uncertainty due to the elastic cross section of \({}^{3}\)H and iii) the one due to the inelastic cross section of \({}^{3}\)H. All three are considered as global uncertainties for the antibaryon-to-baryon method. These uncertainties are 4.5%, 1%, and 2.3% in the whole momentum interval, respectively.
For the TOF-to-TPC ratio method, additional sources of systematic uncertainties are related to the description of the material budget in the MC simulations, the PID, and the momentum correction evaluation. The material budget of ALICE is varied by \(\pm\)4.5% in MC simulations, and the deviations of the final results from the default case are considered as an uncertainty. The value of 4.5% corresponds to the current uncertainty on the material obtained by photon conversion measurements [42] and by the studies of the TOF-to-TPC matching efficiency with pions [54]. The uncertainty related to the PID has been evaluated as the difference between the raw yields obtained by subtracting the background contribution from the \(n\sigma^{\rm TPC}\) distributions when varying the function used to fit the contamination (Gaussian, double Gaussian, Gaussian + 2 exponential functions). The contribution coming from the difference between the bin-counting method and the integral of the signal function is added in quadrature to the previous one. The deviation of the final results from the default case divided by \(\sqrt{12}\) is considered as uncertainty.
The systematic uncertainty due to the energy loss correction is evaluated as described in section 3.2, and amounts to \(\sim\)2% (\(\sim\) 5%) in the analysis of Pb-Pb (pp) collisions.
### Monte Carlo simulations
The experimental observables, shown in Fig. 1, are compared with the corresponding ratios from MC simulations based on Geant4. The parameterisation of the inelastic cross section in Geant4 is based on Glauber calculations [55, 56], which geometrically scale the inelastic antiproton-proton cross sections to heavier systems. This parameterisation neglects the effect of Coulomb interactions, which are negligible for \(p>1\) GeV\(/c\). The analysis described above is repeated using several MC simulations with varied values of the antitriton inelastic cross section. In the case of the antibaryon-to-baryon method, three MC simulations are used, with \(\sigma_{\rm inel}(\overline{{}^{3}\rm H})\) multiplied by factors of 0.75, 1, and 4. These factors were chosen for historical reasons. In the case of the TOF-to-TPC method, three MC simulations are used, with \(\sigma_{\rm inel}(\overline{{}^{3}\rm H})\) multiplied by factors of 0.5 and 1.5 in addition to the default one. The corresponding MC ratios for the two cases are shown as coloured bands in Fig. 1 and are used as references for the experimental ratios to obtain the \({}^{3}\overline{\rm H}\) inelastic cross section as described in the following section.
### Inelastic cross section determination
The determination of the inelastic cross section requires precise knowledge of the ALICE detector material budget. Detailed studies of the detector material have been carried out in previous works by the ALICE Collaboration [36, 37]. These studies allow the determination of the effective target material. For the analysis based on the antibaryon-to-baryon method, the inelastic interactions can occur in the ITS, TPC, TRD, and TOF detectors, hence the average material corresponds to \(\langle Z\rangle=14.8\) and \(\langle A\rangle=31.8\). For the TOF-to-TPC method, instead, the average material is the TRD and space frame system, corresponding to \(\langle Z\rangle=16.1\) and \(\langle A\rangle=34.7\). Using the same procedure described in Refs. [36, 37], the MC ratios obtained with the different multiplicative factors applied to \(\sigma_{\rm inel}(^{3}\overline{\rm H})\) are fitted in each momentum interval using a Lambert-Beer function:
\[R=\exp(-{\rm a}\sigma)\times{\rm b}, \tag{1}\]
where \(R\) refers to the MC ratios obtained with either of the two methods, a and b are the fit parameters, and \(\sigma\) refers to the scaling factor of the \(\sigma_{\rm inel}(^{3}\overline{\rm H})\). The experimental ratios are therefore projected on the fit function and a scaling factor for the Geant4 parameterisation is obtained for each momentum interval. This scaling factor shows by how much the inelastic cross section needs to be changed in the Geant4 MC implementation with respect to default to reach the same value for the observable (antitriton-to-triton ratio for pp, TPC-to-TOF ratio for Pb-Pb) as in the data. Finally, by multiplying the scaling factor in each interval by the default inelastic cross section implemented in Geant4, the inelastic cross section is obtained. Since only cross sections on integer atomic numbers can be obtained in Geant4, the inelastic cross section on the nearest available element was scaled according to the Glauber model [55] as described in equation 2
\[\sigma({\rm A}_{\rm ALICE})=\sigma({\rm A}_{\rm GEANT4})\times\frac{1.34\times {\rm A}_{\rm GEANT4}^{0.21}+1.51\times{\rm A}_{\rm GEANT4}^{-1/3}}{1.34\times {\rm A}_{\rm ALICE}^{0.21}+1.51\times{\rm A}_{\rm ALICE}^{-1/3}} \tag{2}\]
where \({\rm A}_{\rm ALICE}\) is the average atomic number seen by antinuclei as they travel through the ALICE detector and \({\rm A}_{\rm GEANT4}\) is the atomic number of the closest available element in Geant4. The resulting \(\sigma_{\rm inel}(^{3}\overline{\rm H})\) is shown in the left panel of Fig. 2, for the two analysed data samples. For the ease of comparison,
Figure 1: Left: raw primary \({}^{3}\overline{\rm H}/^{3}\rm H\) ratio as a function of the momentum \(p_{\rm primary}\) in pp collisions at \(\sqrt{s}=13\) TeV. Right: ratio between the raw number of \({}^{3}\overline{\rm H}\) candidates reconstructed in the TOF and the raw number of \({}^{3}\overline{\rm H}\) reconstructed in the TPC in Pb–Pb collisions at \(\sqrt{s_{\rm NN}}\) = 5.02 TeV as a function of the momentum \(p_{\rm primary}\). In both panels, data are shown in black, the statistical and systematic uncertainties are shown as vertical bars and boxes, respectively. The results from ALICE MC simulations based on Geant4 are shown as coloured bands, the different colours referring to the different inelastic cross sections implemented in the simulations, with default \(\sigma_{\rm inel}\) described in Ref. [56]. The width of the MC band represents the statistical uncertainty of the simulation.
the results from the antibaryon-to-baryon analysis have been scaled to the same average material as the results from the TOF-to-TPC analysis, also according to equation 2, and \(\sigma\)(Geant4) is shown as well. It can be seen that the measurement using the antibaryon-to-baryon method agrees with \(\sigma\)(Geant4) at the 1\(\sigma\) level, and that from the TOF-to-TPC ratio method at the 2\(\sigma\) level. The inelastic cross sections are shown as a function of the momentum \(p\) at which the inelastic interaction occurs.
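A minimal sketch of the extraction procedure defined by Equations 1 and 2 is given below: the Lambert-Beer form is fitted to the MC ratios obtained with scaled \(\sigma_{\rm inel}(^{3}\overline{\rm H})\) (the fit is done in log space here for simplicity), inverted at the measured ratio to obtain the scaling factor, and the result is rescaled to the average ALICE material. All numerical values in the example call are illustrative only and are not the measured ones.

```python
import numpy as np

def extract_scaling_factor(mc_scales, mc_ratios, data_ratio):
    """Fit Eq. (1), R = b * exp(-a * scale), to the MC points (done in log
    space for simplicity) and invert it at the measured ratio."""
    slope, intercept = np.polyfit(mc_scales, np.log(mc_ratios), 1)
    a, b = -slope, np.exp(intercept)
    return np.log(b / data_ratio) / a

def glauber_rescale(sigma_geant4, A_geant4, A_alice):
    """Eq. (2): rescale the Geant4 cross section from the nearest available
    element (mass number A_geant4) to the average ALICE material (A_alice)."""
    f = lambda A: 1.34 * A**0.21 + 1.51 * A**(-1.0 / 3.0)
    return sigma_geant4 * f(A_geant4) / f(A_alice)

# Illustrative numbers only (three MC scale factors, a toy measured ratio,
# a toy default Geant4 cross section of 1.2 b, and A values for the TOF-to-TPC case)
scale = extract_scaling_factor([0.5, 1.0, 1.5], [0.60, 0.55, 0.50], 0.53)
sigma_alice = glauber_rescale(sigma_geant4=scale * 1.2, A_geant4=28, A_alice=34.7)
print(f"scaling factor = {scale:.2f}, sigma(A_ALICE) = {sigma_alice:.2f} b")
```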
Comparing the inelastic cross sections for \({}^{3}\overline{\rm H}\) and \({}^{3}\overline{\rm He}\) shows that the two measurements are consistent within uncertainties (see right panel of Fig. 2). There are no quantitative predictions for the expected difference between the two isospin partners; however, a naive approach would be to consider their size as the dominant factor. The measured charge radii of the two nuclei are 1.96 fm for \({}^{3}\)He [57, 58] and 1.8 fm for \({}^{3}\)H [58]. While no independent measurement exists for the antinuclei counterparts, the antinuclei are assumed to have the same size as their matter counterparts by CPT symmetry. This would suggest that the inelastic cross section of \({}^{3}\overline{\rm He}\) should be roughly 20% larger than the one of \({}^{3}\overline{\rm H}\). The current measurement is not sensitive enough to distinguish differences of that order and the cross sections of \({}^{3}\overline{\rm H}\) and \({}^{3}\overline{\rm He}\) are compatible within uncertainties. The larger size of the data samples expected in the upcoming Run 3 and Run 4 campaigns of the LHC will allow a significant improvement over the current measurement.
## 4 Summary
The results presented in this Letter represent the first measurement of the antitriton inelastic cross section, and complete the set of measurements of antinucleus-matter inelastic cross sections up to \(A=3\) of the projectile antinucleus carried out by ALICE. The results have been found, within sizeable uncertainties, to be consistent with the parameterisation used in the Geant4 toolkit and with existing \({}^{3}\overline{\rm He}\) measurements from Ref. [37]. They demonstrate the feasibility of studying the isospin dependence of inelastic interactions of composite antinuclei such as \({}^{3}\overline{\rm H}\) and \({}^{3}\overline{\rm He}\). Future studies of the substantially larger data samples collected during the LHC Run 3 and Run 4 campaigns (starting from 2022) with the upgraded ALICE apparatus will markedly improve on the measurements presented in this Letter. They will allow for a more precise comparison between \({}^{3}\overline{\rm H}\) and \({}^{3}\overline{\rm He}\) results and for better calculations of the inelastic cross sections of antinuclei in the currently used models, also improving our understanding of antinucleus-matter inelastic interactions at low energies.
Figure 2: Left: inelastic interaction cross section for antitritons with the two analysis methods on an average material element of the ALICE detector as a function of the momentum \(p\) at which the interaction occurs. Dashed black lines represent the default Geant4 parameterisations for antitritons. Right: comparison between the results reported in the left panel and the results for \({}^{3}\overline{\rm He}\) inelastic cross section measured in Pb–Pb collisions at \(\sqrt{s_{\rm NN}}\) = 5.02 TeV from Ref. [37]. Note that \({}^{3}\overline{\rm He}\) nuclei can be clearly identified in both TPC and TOF detectors over a much wider momentum range than \({}^{3}\overline{\rm H}\). In both panels, boxes show the statistical and systematic uncertainties summed in quadrature.
## Acknowledgements
The ALICE Collaboration would like to thank all its engineers and technicians for their invaluable contributions to the construction of the experiment and the CERN accelerator teams for the outstanding performance of the LHC complex. The ALICE Collaboration gratefully acknowledges the resources and support provided by all Grid centres and the Worldwide LHC Computing Grid (WLCG) collaboration. The ALICE Collaboration acknowledges the following funding agencies for their support in building and running the ALICE detector: A. I. Alikhanyan National Science Laboratory (Yerevan Physics Institute) Foundation (ANSL), State Committee of Science and World Federation of Scientists (WFS), Armenia; Austrian Academy of Sciences, Austrian Science Fund (FWF): [M 2467-N36] and Nationalstiftung fur Forschung, Technologie und Entwicklung, Austria; Ministry of Communications and High Technologies, National Nuclear Research Center, Azerbaijan; Conselho Nacional de Desenvolvimento Cientifico e Tecnologico (CNPq), Financiadora de Estudos e Projetos (Finep), Fundacao de Amparo a Pesquisa do Estado de Sao Paulo (FAPESP) and Universidade Federal do Rio Grande do Sul (UFRGS), Brazil; Bulgarian Ministry of Education and Science, within the National Roadmap for Research Infrastructures 2020?207 (object CERN), Bulgaria; Ministry of Education of China (MOEC), Ministry of Science & Technology of China (MSTC) and National Natural Science Foundation of China (NSFC), China; Ministry of Science and Education and Croatian Science Foundation, Croatia; Centro de Aplicaciones Tecnologicas y Desarrollo Nuclear (CEADEN), Cubaenergia, Cuba; Ministry of Education, Youth and Sports of the Czech Republic, Czech Republic; The Danish Council for Independent Research | Natural Sciences, the VILLUM FONDEN and Danish National Research Foundation (DNRF), Denmark; Helsinki Institute of Physics (HIP), Finland; Commissariat a l'Energie Atomique (CEA) and Institut National de Physique Nucleaire et de Physique des Particules (IN2P3) and Centre National de la Recherche Scientifique (CNRS), France; Bundesministerium fur Bildung und Forschung (BMBF) and GSI Helmholtzzentrum fur Schwerionenforschung GmbH, Germany; General Secretariat for Research and Technology, Ministry of Education, Research and Religions, Greece; National Research, Development and Innovation Office, Hungary; Department of Atomic Energy Government of India (DAE), Department of Science and Technology, Government of India (DST), University Grants Commission, Government of India (UGC) and Council of Scientific and Industrial Research (CSIR), India; National Research and Innovation Agency - BRIN, Indonesia; Istituto Nazionale di Fisica Nucleare (INFN), Italy; Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT) and Japan Society for the Promotion of Science (JSPS) KAKENHI, Japan; Consejo Nacional de Ciencia (CONACYT) y Tecnologia, through Fondo de Cooperacion Internacional en Ciencia y Tecnologia (FONCICYT) and Direccion General de Asuntos del Personal Academico (DGAPA), Mexico; Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO), Netherlands; The Research Council of Norway, Norway; Commission on Science and Technology for Sustainable Development in the South (COMSATS), Pakistan; Pontificia Universidad Catolica del Peru, Peru; Ministry of Education and Science, National Science Centre and WUT ID-UB, Poland; Korea Institute of Science and Technology Information and National Research Foundation of Korea (NRF), Republic of Korea; Ministry of Education 
and Scientific Research, Institute of Atomic Physics, Ministry of Research and Innovation and Institute of Atomic Physics and University Politehnica of Bucharest, Romania; Ministry of Education, Science, Research and Sport of the Slovak Republic, Slovakia; National Research Foundation of South Africa, South Africa; Swedish Research Council (VR) and Knut & Alice Wallenberg Foundation (KAW), Sweden; European Organization for Nuclear Research, Switzerland; Suranaree University of Technology (SUT), National Science and Technology Development Agency (NSTDA), Thailand Science Research and Innovation (TSRI) and National Science, Research and Innovation Fund (NSRF), Thailand; Turkish Energy, Nuclear and Mineral Research Agency (TENMAK), Turkey; National Academy of Sciences of Ukraine, Ukraine; Science and Technology Facilities Council (STFC), United Kingdom; National Science Foundation of the United States of America (NSF) and United States Department of Energy, Office of Nuclear Physics (DOE NP), United States of America. In addition, individual groups or members have received support from: Eu
ropean Research Council, Strong 2020 - Horizon 2020 (grant nos. 950692, 824093), European Union; Academy of Finland (Center of Excellence in Quark Matter) (grant nos. 346327, 346328), Finland.
|
2303.00825 | Gas-Particle Dynamics in High-Speed Flows | High-speed disperse multiphase flows are present in numerous environmental
and engineering applications with complex interactions between turbulence,
shock waves, and particles. Compared to its incompressible counterpart,
compressible two-phase flows introduce new scales of motion that challenge
simulations and experiments. This review focuses on gas-particle interactions
spanning subsonic to supersonic flow conditions. An overview of existing Mach
number-dependent drag laws is presented, with origins from 18th-century cannon
firings, and new insights from particle-resolved numerical simulations. The
equations of motion and phenomenology for a single particle are first reviewed.
Multi-particle systems spanning dusty gases to dense suspensions are then
discussed from numerical and experimental perspectives. | Jesse Capecelatro, Justin Wagner | 2023-03-01T21:18:51Z | http://arxiv.org/abs/2303.00825v2 | # Gas-Particle Dynamics in High-Speed Flows
###### Abstract
High-speed (compressible) disperse multiphase flows often involve complex interactions between turbulence, shock waves, and particles. Examples can be found in numerous environmental and engineering applications, from volcanic eruptions to jet-induced cratering during spacecraft landings. This article reviews the fluid dynamics associated with gas-particle flows under such extreme conditions. Compared to its incompressible counterpart, compressible two-phase flows introduce new scales of motion that pose significant modeling challenges. An overview of existing Mach number-dependent drag laws is presented, with origins from 18th-century cannon firings, and new insights from particle-resolved numerical simulations. The equations of motion and phenomenology for a single particle are first reviewed. Multi-particle systems spanning dusty gases to dense suspensions are then discussed from both numerical and experimental perspectives.
## 1 Introduction
High-speed flows made up of solid particles or liquid droplets can be found across a broad range of engineering and scientific disciplines. These include naturally occurring processes, such as supernovas (Inoue et al., 2009) and volcanic eruptions (Lube et al., 2020), and human-caused flows, such as coal dust explosions (Griffith, 1978), shock wave lithotripsy (Lingeman et al., 2009), combustion/detonation (Zhang et al., 2001), and plume-surface interactions during space exploration (Capecelatro, 2022). While the last several decades have seen significant advancements in understanding and modeling _incompressible_ particle-laden flows (Crowe et al., 1996, Balachandar & Eaton, 2010, Fox, 2012, Tenneti & Subramaniam, 2014, Brandt & Coletti, 2022), much less attention has been paid to particle-laden _compressible_ flows. This article presents a review and perspectives on this topic. We focus on flows characterized by finite Mach numbers and high Reynolds numbers containing dilute to dense concentrations of rigid particles.
Compared to incompressible flows, the examples listed above introduce physical processes taking place over a much wider range of scales. As an illustrative example, consider a grain of sand with diameter \(d_{p}=100\) um and density \(\rho_{p}=3000\) kg/m\({}^{3}\) in air. The particle response time due to drag is \(\tau_{p}=\rho_{p}d_{p}^{2}/(18\mu_{g})\approx 0.1\) s, where \(\mu_{g}\) is the gas viscosity. Meanwhile, the characteristic acoustic timescale (time for a disturbance traveling at the speed of sound \(c\) to pass over the particle) is \(\tau_{a}=d_{p}/c\approx 0.3\)\(\upmu\)s, almost 6 orders of magnitude smaller! This discrepancy in time scales adds significant challenges to numerical predictions and experimental diagnostics. In addition, unsteady force contributions that are typically negligible for incompressible gas-solid flows (e.g., added mass and Basset history) can be important owing to the large acceleration difference between the phases. It is therefore apparent that a theoretical/phenomenological description of incompressible particle-laden flows is far from complete when gas-phase compressibility is important.
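These estimates are simple enough to verify directly; the short sketch below reproduces them using nominal values for the viscosity and speed of sound of air, which are assumed here rather than specified in the text.

```python
# Scale-separation estimate for a 100-micron sand grain in air
# (nominal gas properties assumed).
rho_p = 3000.0       # particle density [kg/m^3]
d_p   = 100e-6       # particle diameter [m]
mu_g  = 1.8e-5       # dynamic viscosity of air [Pa s]
c     = 343.0        # speed of sound in air [m/s]

tau_p = rho_p * d_p**2 / (18.0 * mu_g)   # Stokes drag response time
tau_a = d_p / c                          # acoustic time over the particle

print(f"tau_p = {tau_p:.3e} s, tau_a = {tau_a:.3e} s, ratio = {tau_p/tau_a:.1e}")
# -> roughly 0.1 s versus 0.3 microseconds: nearly six orders of magnitude apart
```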
**Figure 1** shows an approximate regime diagram highlighting fluid-particle interactions from the single particle limit to dense suspensions and Mach numbers ranging from incompressible to supersonic. The terms subcritical and supercritical are used to denote Mach numbers below and above the critical value where supersonic flow around a particle first occurs (\(\text{Ma}\approx 0.6\) for an isolated particle). The present review is focused on gas-particle flows, where the particle-to-fluid density ratio is \(\rho_{p}/\rho_{g}\gg 1\), and the gas phase is in the continuum regime, characterized by Knudsen numbers \(\text{Kn}\ll 1\). For an ideal gas, the Knudsen number can be defined in terms of the Reynolds number and Mach number according to
\[\text{Kn}=\sqrt{\frac{\pi\gamma}{2}}\frac{\text{Ma}}{\text{Re}}, \tag{1}\]
where \(\gamma\) is the ratio of specific heats of the gas phase. The Reynolds number and Mach number are defined in terms of the relative velocity between the phases, i.e., \(\text{Re}=\rho_{g}|\mathbf{u}_{g}-\mathbf{v}_{p}|d_{p}/\mu_{g}\) and \(\text{Ma}=|\mathbf{u}_{g}-\mathbf{v}_{p}|/c\), where \(\mathbf{u}_{g}\) and \(\mathbf{v}_{p}\) are the fluid and particle velocities, respectively. The flow is categorized as continuum when \(\text{Kn}<10^{-2}\). When \(\text{Kn}\) is higher, the collision rate between gas molecules and the surface of particles becomes insufficient to satisfy the no-slip condition. At \(\text{Kn}\) values between \(10^{-2}\) and \(10\), the flow exhibits a small departure from no-slip (slip regime). For \(\text{Kn}\) values greater than \(10\), collisions between gas molecules and particles are frequent, while inter-molecule interactions are rare (free molecular flow). Thus, continuum flows at finite Mach numbers are associated with large Reynolds numbers. Two-phase rarefied flows and hypersonic flows, while rich in physics and present in many important applications, are not discussed here.
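Equation 1 and the regime boundaries quoted above can be wrapped into a small helper, sketched below for an ideal diatomic gas (\(\gamma=1.4\) is assumed for the example).

```python
import math

def knudsen(Ma, Re, gamma=1.4):
    """Knudsen number from Eq. (1) for an ideal gas."""
    return math.sqrt(math.pi * gamma / 2.0) * Ma / Re

def gas_regime(Kn):
    """Classify the gas-phase regime using the thresholds quoted in the text."""
    if Kn < 1e-2:
        return "continuum"
    elif Kn < 10.0:
        return "slip"
    else:
        return "free molecular"

# Example: Ma = 0.8, Re = 500 gives Kn ~ 2.4e-3, i.e. the continuum regime
Kn = knudsen(0.8, 500.0)
print(f"Kn = {Kn:.2e}: {gas_regime(Kn)}")
```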
Particle-laden compressible flows have traditionally been studied in the context of dusty gases (Marble, 1970), where the two-phase mixture is modeled as a single gas with averaged properties (valid in the limit of zero particle volume). When a shock wave passes through a dusty gas, its thickness and change in pressure differ greatly from a shock passing through an unladen gas (Carrier, 1958). This is markedly different from incompressible flows, in which particles have negligible effect on the carrier phase at sufficiently low volume fractions (i.e., \(\Phi_{v}<10^{-5}\), termed one-way coupling (Elghobashi, 1994)). Beyond the dusty gas regime, finite size particles are capable of modifying the carrier-phase turbulence (termed two-way coupling), and at moderate Mach numbers they are also capable of modifying the pressure field, resulting in acoustic or sound modulation (Crighton & Williams, 1969; Krothapalli et al., 2003; Buchta et al., 2019). At even higher volume fractions, collisions between particles and fluid-mediated wakes drive particle dynamics (termed four-way coupling). When the flow is compressible at these volume fractions, the interstitial space between particles can create a nozzling effect that results in choked flow (Theofanous et al., 2018).
This review provides an up-to-date account on the current understanding and modeling capabilities of high-speed particle-laden flows, and concludes with a brief perspective on future research directions. We begin with the equations of motion and phenomenology for a single particle. We then focus on multi-particle systems spanning dusty gases to dense suspensions from both a numerical and experimental perspective, including multiphase aeroacoustics, turbulence induced during shock-particle interactions, and drag at finite volume fraction and Mach number.
Figure 1: Different regimes characterizing fluid-particle interactions in high Reynolds number flows. The present review is primarily concerned with gas-particle flows (\(\rho_{p}/\rho_{g}\gg 1\)) in the continuum regime (\(\mathrm{Ma/Re}\ll 1\)). One-way coupling is applicable to incompressible and subsonic (shock-free) flows at low volume fractions. At higher Mach numbers but still low volume fractions, particles are capable of modifying shock structures. At higher volume fractions and low Mach numbers, momentum exchange between the phases is capable of enhancing or attenuating gas-phase turbulence. Dense suspensions in high Mach number flows correspond to explosive dispersal of particles with strong shock-particle-turbulence interactions.
## 2 Particle Equation of Motion
The fluid force acting on a particle is typically decomposed into separate contributions: the quasi-steady drag force \({\bf F}_{qs}\), the undisturbed flow force \({\bf F}_{un}\) (sometimes denoted the pressure-gradient or Archimedes force), the inviscid unsteady force \({\bf F}_{iu}\) (often referred to as added mass, see the sidebar titled Added Mass in a Compressible Fluid), and the viscous-unsteady force \({\bf F}_{vu}\) (Basset history), expressed as
\[m_{p}\,\frac{{\rm d}{\bf v}_{p}}{{\rm d}t}={\bf F}_{qs}+{\bf F}_{un}+{\bf F}_{ iu}+{\bf F}_{vu}, \tag{2}\]
where \(m_{p}\) is the particle mass. Maxey & Riley (1983), Gatignol (1983) derived expressions for each term in the context of a spherical particle moving through an incompressible fluid at low Reynolds numbers (Stokes flow), which are widely employed in numerical simulations.
## ADDED MASS IN A COMPRESSIBLE FLUID
The term added (or virtual) mass refers to the enhanced inertia of an object caused by the surrounding volume of fluid moving with it. This yields an inviscid unsteady force that can be expressed in terms of a response kernel, \(K_{iu}(\tau;{\rm Ma})\), used to weight the history of the particle's acceleration, as shown in Equation 5. In an incompressible flow, acoustics propagate infinitely fast and the kernel reduces to a Dirac delta function, \(K_{iu}(\tau;{\rm Ma}=0)=\delta(\tau)/2\), allowing it to be written as the product of the relative acceleration and an added-mass coefficient (\(C_{m}\)). For a spherical particle in an incompressible flow, integration of the kernel yields \(C_{m}=0.5\). In a compressible flow, the kernel decays over a short but finite time that depends on the particle's shape and Mach number. Consequently, the force no longer takes the form of a constant mass multiplied by the instantaneous acceleration. Because of this, Miles (1951) and Longhorn (1952), as later reiterated by Parmar et al. (2008), emphasized that reference to this force as a 'virtual' or 'added' mass is only applicable to incompressible flows.
Parmar et al. (2011, 2012) extended the equations to viscous compressible flows, where the separate force contributions are given by
\[{\bf F}_{qs}=\frac{1}{2}C_{D}\rho_{g}\left({\bf u}_{g}-{\bf v}_{ p}\right)|{\bf u}_{g}-{\bf v}_{p}|A_{p}, \tag{3}\] \[{\bf F}_{un}=V_{p}\rho_{g}\frac{{\rm Du}_{g}}{{\rm D}t},\] (4) \[{\bf F}_{iu}=V_{p}\int_{-\infty}^{t}K_{iu}\left(t-\chi;{\rm Ma} \right)\left(\frac{{\rm D}(\rho_{g}{\bf u}_{g})}{{\rm D}t}-\frac{{\rm d}(\rho _{g}{\bf v}_{p})}{{\rm d}t}\right)_{t=\chi}{\rm d}\chi,\] (5) \[{\bf F}_{vu}=\frac{3}{2}d_{p}^{2}\sqrt{\pi\rho_{g}\mu_{g}}\int_{- \infty}^{t}K_{vu}\left(t-\chi;{\rm Re},{\rm Ma}\right)\left(\frac{{\rm D}(\rho _{g}{\bf u}_{g})}{{\rm D}t}-\frac{{\rm d}(\rho_{g}{\bf v}_{p})}{{\rm d}t} \right)_{t=\chi}{\rm d}\chi, \tag{6}\]
where \(A_{p}\) is the frontal area of the particle, \(V_{p}\) is its volume, and \(K_{iu}\) and \(K_{vu}\) are the inviscid and viscous-unsteady force kernels, respectively. Although Equations 3-6 were derived for a single particle in the limit \({\rm Re}\to 0\) and \({\rm Ma}\to 0\), they provide a framework for empirical extensions to more realistic flow conditions. For example, the quasi-steady drag force includes a drag coefficient \(C_{D}({\rm Re},{\rm Ma},\Phi_{v})\) typically modeled using an empirical correlation based on far-field flow properties. However, correlations valid for finite Mach numbers _and_ volume fractions are only starting to become available. In addition, properly reconstructing
far-field quantities at the particle location in coarse-grained simulations remains an active area of research (e.g., Horwitz & Mani 2016), especially for flows involving shocks where these quantities may be discontinuous (see Jacobs & Don 2009). In the following sections we summarize existing models for quasi-steady drag, the inviscid unsteady force, their origins, and extensions to multi-particle systems.
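To make the structure of Equations 2-4 concrete, the sketch below advances a particle using only the quasi-steady and undisturbed-flow contributions, with the drag correlation \(C_{D}(\mathrm{Re},\mathrm{Ma})\) left as a user-supplied function. This is a minimal illustration rather than a complete point-particle model: the unsteady terms in Equations 5-6 are omitted, and the numerical values in the example call are arbitrary.

```python
import numpy as np

def particle_rhs(v_p, u_g, rho_g, mu_g, c, d_p, rho_p, Du_g_Dt, drag_coeff):
    """Right-hand side of Eq. (2) keeping only F_qs and F_un (Eqs. 3-4).
    `drag_coeff` is any correlation C_D(Re, Ma) supplied by the caller."""
    V_p = np.pi * d_p**3 / 6.0
    A_p = np.pi * d_p**2 / 4.0
    m_p = rho_p * V_p
    u_rel = u_g - v_p
    speed = np.linalg.norm(u_rel)
    Re = rho_g * speed * d_p / mu_g
    Ma = speed / c
    F_qs = 0.5 * drag_coeff(Re, Ma) * rho_g * u_rel * speed * A_p
    F_un = V_p * rho_g * Du_g_Dt
    return (F_qs + F_un) / m_p   # particle acceleration dv_p/dt

# Acceleration of a 100-micron particle in a Ma ~ 0.3 crossflow, with a
# constant C_D used purely for illustration:
a = particle_rhs(v_p=np.zeros(3), u_g=np.array([100.0, 0.0, 0.0]),
                 rho_g=1.2, mu_g=1.8e-5, c=343.0, d_p=100e-6, rho_p=3000.0,
                 Du_g_Dt=np.zeros(3), drag_coeff=lambda Re, Ma: 0.5)
```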
## 3 Single-Particle Interactions
In this section, we review the state-of-the-art in modeling the forces acting on an _isolated_ particle, that is, a particle sufficiently far from any walls or neighboring particles. Such an example is provided in **Figure 2**. We then move on to extensions to multi-particle systems.
### Quasi-Steady Drag
Standard models for \(C_{D}\) are typically formulated for flows that reach a quasi-steady state after the relative velocity between the phases is established. A culmination of experimental and semiempirical studies from the 20th century yielded reliable estimates of \(C_{D}\) for a sphere in incompressible flows up to \(\mathrm{Re}=10^{5}\) (e.g., Oseen 1910, Goldstein 1929, Schiller & Naumann 1933, Clift & Gauvin 1971). **Figure 3** shows general trends of the drag coefficient as a function of Reynolds number. The correlation of Clift & Gauvin (1971) is among the most comprehensive for incompressible flows. The drag force in compressible flows is complicated by the emergence of expansion fans and shock waves. For subcritical Mach numbers (\(\mathrm{Ma}\lessapprox 0.6\)), the drag coefficient is only weakly affected by compressibility effects due to the absence of shocks. For supercritical but still subsonic Mach numbers, \(C_{D}\) increases sharply with Mach number due to the enhanced pressure caused by the presence of a weak shock. For supersonic Mach numbers, a bow shock is formed, with a stand-off distance that decreases with increasing Mach number, leading to a large increase in \(C_{D}\). Schlieren imaging of a sphere in free flight from recent experiments by Nagata et al. (2020a) is shown in **Figure 2**, highlighting these different regimes.
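For reference, a commonly quoted form of the Clift & Gauvin (1971) correlation is sketched below; the coefficients are reproduced from the widely cited form of the fit, and the original reference should be consulted for definitive values.

```python
def clift_gauvin_cd(Re):
    """Incompressible (Ma = 0) drag coefficient of a sphere, in the form
    commonly attributed to Clift & Gauvin (1971), valid up to Re ~ 3e5."""
    return 24.0 / Re * (1.0 + 0.15 * Re**0.687) + 0.42 / (1.0 + 4.25e4 * Re**-1.16)

# Sanity checks: the Stokes limit (Re << 1) recovers ~24/Re, while at
# Re ~ 1e4 the value plateaus near the Newton regime, C_D ~ 0.4-0.5.
for Re in (0.1, 1.0, 100.0, 1e4):
    print(Re, round(clift_gauvin_cd(Re), 3))
```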
Using free-flight measurements from aeroballistic ranges, Henderson (1976) developed an Re- and Ma-dependent drag correlation for subsonic and supersonic flows, and linearly interpolated the data for transonic conditions. It includes non-continuum effects when the mean-free path of the gas phase approaches the particle diameter (i.e., \(\mathrm{Kn}>0\)). Loth (2008) later developed a drag coefficient by separating the flow into a rarefaction-dominated
Figure 2: Schlieren visualization of flow past a sphere with (_a_) \(\mathrm{Ma}=0.9\), (_b_) \(\mathrm{Ma}=1.21\), and (_c_) \(\mathrm{Ma}=1.39\). Adapted from Nagata et al. (2020a) with permission.
regime for \(\mathrm{Re}<45\) (shaded red in **Figure 3**) and a compression-dominated regime when \(\mathrm{Re}>45\) (shaded yellow in **Figure 3**). At the boundary between the two regimes, it was suggested that \(C_{D}\approx 1.63\), independent of Ma and Kn. Parmar et al. (2010) assessed the models of Henderson (1976) and Loth (2008) using data collected by Bailey and Starr (1976). Both models were found to yield significant errors near the transonic regime. They proposed an improved model for \(\mathrm{Ma}\leq 1.75\) by decomposing \(C_{D}\) into three correlations for subcritical (\(\mathrm{Ma}<0.6\)), intermediate, and supersonic regimes. As described in Clift et al. (2005), much of the data used for constructing Mach-number-dependent drag laws are unreliable due to high levels of freestream turbulence, interference by supports, and wall effects. In addition, many of these correlations are formulated from data that can be traced back to the 18th century (see the sidebar titled Origins of Modern-Day Drag Laws).
With the advent of high-performance computing, particle-resolved direct numerical simulations (PR-DNS) are beginning to shed new light on this topic. Loth et al. (2021) combined PR-DNS of Nagata et al. (2020b) with rarefied-gas simulations and an expanded experimental dataset to refine \(C_{D}\). It was shown to be approximately twice as accurate as the correlations of Loth (2008) and Parmar et al. (2010) at moderate Mach numbers, and showed improvement over Loth's original model at \(\mathrm{Ma}>2\). The resulting drag coefficient is shown in **Figure 3**. It can be seen that the compression-dominated region (\(\mathrm{Re}>60\)) yields an increase in \(C_{D}\) as Ma increases, whereas in the rarefaction-dominated regime \(C_{D}\) decreases as Ma increases.
Figure 3: Drag coefficient of a sphere highlighting the rarefaction-dominated regime (shaded red), compression-dominated regime (shaded yellow), and multi-particle continuum regime (shaded green). Incompressible flow (\(\mathrm{Ma}=0\)) past an isolated particle (Clift and Gauvin, 1971) (–). Free-molecular flow (\(\mathrm{Ma}\gg 0\) and \(\mathrm{Kn}\gg 0\)) past an isolated particle (\(\bullet\)\(\bullet\)). Single-particle correlation of Loth et al. (2021) (\(--\)). Multi-particle correlation of Osnes et al. (2023) (\(\cdots\)). Adapted from Loth et al. (2021) with permission.
## Origins of modern-day drag laws
Book II of Isaac Newton's _Principia_ was one of the earliest works to estimate the drag of a sphere, demonstrating the force is proportional to the square of the object's speed through the fluid \(U\), its cross-sectional area \(A\), and the density of the carrier fluid \(\rho\); \(F_{d}=\rho AU^{2}C_{D}/2\). His early experiments showed the drag coefficient of a sphere to be \(C_{D}\approx 0.5\) at low speeds. Half a century later, British mathematician Benjamin Robins provided the first measurements of drag on a sphere traveling at high speeds following his invention of the ballistic pendulum. Using round shots fired from guns, drag was found to scale as \(U^{3}\), with values of \(C_{D}\) as much as three times greater than Newton's original estimate (Howard 1742).
Further progress was not made until more than a century later when Francis Bashforth conducted experiments of artillery round shots (cannon fire) for the British Army. (Bashforth with his former college classmate John Couch Adams later went on to develop the Adams-Bashforth method (Bashforth & Adams 1883), a class of multi-step methods commonly used today for numerical time integration.) Bashforth's invention of the ballistic chronograph in 1864 allowed for up to 10 velocity measurements per shot, providing reliable estimates of acceleration over a wide range of speeds (Bashforth 1870). Cannon firings operate in a fortuitous flow regime for studying drag. Typical velocities span \(100-700\) m/s, corresponding to Mach numbers ranging from 0.3 to 2 in air, where \(C_{D}\) changes sharply due to compressibility effects. Further, diameters range from \(50-200\) mm, corresponding to Reynolds numbers near the critical value (\(\mathrm{Re}\approx 2\times 10^{5}\)). The drag force was found to scale according to \(F_{d}\propto U^{2}\) at moderate (subsonic) velocities, consistent with Newton. At higher velocities (subsonic to moderately supersonic) he showed \(F_{d}\propto U^{3}\), consistent with Robins, and at even higher velocities it is again proportional to \(U^{2}\)(Gilman 1905).
More than a century later, Miller & Bailey (1979) compiled available data for drag on a sphere at moderate to high Mach numbers and found Bashforth's measurements to be among the most accurate. This data, combined with more recent free-flight measurements from aeroballistic ranges, continues to be used in the development of modern-day drag laws (Clift et al. 2005).
### Unsteady Forces
Experiments involving the passage of a shock wave over a sphere have provided measurements of unsteady drag coefficients in the presence of strong acceleration (Britan et al. 1995, Tanno et al. 2003, Sun et al. 2005, Bredin & Skews 2007, Skews et al. 2007). **Figure 4_a_** depicts a planar shock wave interacting with a stationary sphere in a shock tube as a function of non-dimensional time \(\tau_{s}=2tu_{s}/d_{p}\), where \(u_{s}\) is the shock speed. Experimental observations and numerical simulations have shown that the peak drag coefficient occurs just before the shock reaches the sphere equator with values as much as one order of magnitude larger than the steady counterpart (Sun et al. 2005, Bredin & Skews 2007,
Skews et al. 2007). Shortly after (\(\tau_{s}\approx 2\)), the shock wave is diffracted on the downstream side of the particle, resulting in the generation of high pressure that temporarily reduces drag. For weak shocks with subcritical post-shock Mach numbers (Ma \(<0.6\)), this can lead to a period of negative \(C_{D}\)(Osnes & Vartdal 2022). At sufficiently low Reynolds numbers, the negative contribution from pressure drag is counteracted by viscous forces, preventing \(C_{D}\) from becoming negative (Sun et al. 2005). The high pressure region behind the particle then expands and develops into a wake, at which point the flow transitions to a quasi-steady flow state. Note that the relative contributions of steady and unsteady forces obtained from experiments of shock-particle interactions have been debated (see the sidebar titled Perceived Unsteady Effects from Shock Tube Experiments).
## Perceived Unsteady Effects from Shock Tube Experiments
Several pioneering multiphase shock tube experiments (Igra & Takayama 1993, Suzuki et al. 2005, Jourdan et al. 2007) have reported elevated drag coefficients for small spheres (approximately 1 mm) in high \(\rho_{p}/\rho_{g}\) flows compared to the'standard' incompressible drag law of Clift & Gauvin (1971). These studies attributed the increase in drag to unsteady acceleration of the spheres. According to Parmar et al. (2009), however, such unsteady effects should dissipate rapidly for small spheres and therefore have negligible effect on long time drag measurements. Drag coefficient data at transonic speeds in more recent shock tube experiments (Wagner et al. 2012a) and a comparison to compressible drag models of Loth (2008) and Parmar et al. (2010) suggested the elevated drag coefficients in previous shock tube studies are likely related to pronounced compressibility effects not captured by earlier drag correlations such as that of Henderson (1976). These same experiments also noted increased drag even for a few widely spaced particles near transonic Mach numbers, which might also explain elevated drag measured in previous shock tube studies. Additionally, careful characterization of particle size is critical (Bordoloi et al. 2017, Maxon et al. 2021).
Figure 4: Interaction of a normal shock with a sphere. (_a_) Shock tube experiment at a shock Mach number \(M_{s}=1.22\). (_b_) Unsteady drag prediction from a numerical model. Panel \(a\) adapted from Tanno et al. (2003). Panel \(b\) adapted from Parmar et al. (2009).
Unsteady contributions to the particle equation of motion are often neglected when the particle-to-fluid density ratio (\(\rho_{p}/\rho_{g}\)) is large. As shown in **Table 1**, the ratio of the inviscid unsteady forces (\(\mathbf{F}_{iu}\) and \(\mathbf{F}_{un}\)) to the quasi-steady drag \(\mathbf{F}_{qs}\) in Equation 2 scales like \((\rho_{p}/\rho_{g}+C_{m})^{-1}\) when a particle accelerates in a quiescent fluid, with \((\rho_{p}/\rho_{g}+C_{m})^{-1/2}\) scaling for the viscous unsteady force, \(\mathbf{F}_{vu}\). Thus, for gas flows laden with liquid droplets or solid particles, these contributions are indeed small. In contrast, Taylor (1928) and Magnaudet et al. (1995) demonstrated that the relative contributions of the unsteady forces are _independent_ of density ratio when particles are placed in non-uniform flows. Bagchi & Balachandar (2002) established the conditions under which non-uniformities in the ambient flow are important. As summarized in **Table 1**, the order of magnitude of the inviscid unsteady forces compared to quasi-steady drag is \(\mathrm{Re}\,d_{p}/\mathcal{L}\), where \(\mathcal{L}\) is a characteristic length scale of the flow. Similarly, \(|\mathbf{F}_{vu}|/|\mathbf{F}_{qs}|\propto\sqrt{\mathrm{Re}\,d_{p}/\mathcal{L}}\). Thus, unsteady forces arising due to fluid acceleration are independent of density ratio and become important when \(\mathrm{Re}\,d_{p}/\mathcal{L}\geq\mathcal{O}(1)\). From this scaling it is apparent that unsteady forces are important during shock-particle interactions, where \(\mathrm{Re}\gg 1\) and the characteristic length scale corresponds to the shock thickness (\(\mathcal{L}\ll d_{p}\)), as seen in **Figure 4_b_**.
As shown in Equation 5, the inviscid unsteady force involves a kernel \(K_{iu}\) that depends on prior history and local Mach number. Longhorn (1952) derived a closed-form expression for \(K_{iu}\) for a sphere in an incompressible flow, given by \(K_{iu}(\tau;\mathrm{Ma}=0)=\exp(-\tau)\cos(\tau)\). Applying a Mach number expansion using the compressible form of the Bernoulli equation, Parmar et al. (2008) extended this to flows at \(\mathrm{Ma}<0.6\). The added-mass coefficient was found to scale like \(C_{m}\propto 1+\mathrm{Ma}^{2}+\mathcal{O}\left(\mathrm{Ma}^{4}\right)\). At a Mach number of 0.5, the added-mass coefficient was found to be \(C_{m}(\mathrm{Ma}=0.5)\approx 1\), approximately twice as large as the value for a sphere in an incompressible flow.
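For illustration, the sketch below evaluates Longhorn's incompressible kernel \(K_{iu}(\tau)=e^{-\tau}\cos(\tau)\) together with a quartic Mach-number fit of the form \(C_{m}=0.5\,(1+a_{1}\mathrm{Ma}^{2}+a_{2}\mathrm{Ma}^{4})\) for the added-mass coefficient. The coefficients \(a_{1}=1.8\) and \(a_{2}=7.6\) are assumed values that should be checked against Parmar et al. (2008); they are consistent with the scaling quoted above and with \(C_{m}(\mathrm{Ma}=0.5)\approx 1\), roughly twice the incompressible value of 0.5.

```python
import numpy as np

def K_iu_incompressible(tau):
    """Inviscid unsteady (history) kernel for a sphere in incompressible flow,
    K_iu(tau; Ma=0) = exp(-tau) * cos(tau) (Longhorn 1952)."""
    return np.exp(-tau) * np.cos(tau)

def added_mass_coefficient(Ma, a1=1.8, a2=7.6):
    """Mach-dependent added-mass coefficient, C_m = 0.5*(1 + a1*Ma^2 + a2*Ma^4).
    The fit coefficients a1, a2 are assumptions to be verified against
    Parmar et al. (2008); intended only for subcritical Ma < 0.6."""
    return 0.5 * (1.0 + a1 * Ma**2 + a2 * Ma**4)

print(K_iu_incompressible(np.linspace(0.0, 5.0, 6)))   # kernel decays within a few acoustic times
print(added_mass_coefficient(0.0), added_mass_coefficient(0.5))  # 0.5 and ~0.96
```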
Building on this, Parmar et al. (2009) derived a model capable of capturing unsteady shock loading applicable to subcritical cases where the post-shock Mach number is less than 0.6 (see **Figure 4_b_**). The model includes the pressure gradient force (\(\mathbf{F}_{un}\)) and history term (\(\mathbf{F}_{iu}\)) while neglecting \(\mathbf{F}_{vu}\), which was deemed inconsequential. Consistent with previous experimental data and numerical simulations, the model shows that the peak drag coefficient of an isolated particle interacting with a shock wave can be an order of magnitude larger than the value from the quasi-steady drag force for a wide range of Mach numbers and Reynolds numbers. These unsteady effects are active when the shock passes over the particle, but have little contribution to long-term particle motion if \(\rho_{p}/\rho_{g}\) is large.
All of the modeling efforts discussed above consider flows interacting with an isolated particle. The flow through assemblies of particles, and corresponding forces that arise, can deviate significantly from the situation of a single particle. At present, accurate models validated at finite values of Reynolds number, Mach number, and volume fraction are lacking. The following sections provide an overview of numerical and experimental campaigns focused on compressible flows in multi-particle systems.
| | **Inviscid unsteady** | **Viscous unsteady** |
| --- | --- | --- |
| **Accelerating fluid** | \(\mathrm{Re}\frac{d_{p}}{\mathcal{L}}\) | \(\sqrt{\mathrm{Re}\frac{d_{p}}{\mathcal{L}}}\) |
| **Accelerating particle** | \(\frac{1}{\rho_{p}/\rho_{g}+C_{m}}\) | \(\frac{1}{\sqrt{\rho_{p}/\rho_{g}+C_{m}}}\) |

Table 1: Relative importance of the inviscid and viscous unsteady forces compared to quasi-steady drag. Adapted from Bagchi & Balachandar (2002).
## 4 Equations of Motion for Multi-Particle Systems
For sufficiently large volume fractions, particle-particle interactions can have an order-one effect on the dynamical evolution of the two-phase flow. In the context of incompressible flows, when \(\Phi_{v}\gtrapprox 10^{-3}\) interparticle collisions and neighbor-induced hydrodynamic interactions (i.e., long-range interactions arising from boundary layers around neighboring particles) directly modify the drag force and carrier-phase turbulence. In high-speed flows, the emergence of bow shocks around individual particles and reflected shocks from neighboring particles complicates this picture. **Figure 5** shows PR-DNS of a planar shock interacting with a cloud of finite size particles, highlighting the emergence of shocklets and pseudo-turbulence. Even shortly after the shock passes over the curtain, appreciable size segregation can be observed, with smaller particles traveling further downstream. This is consistent with the scaling of the unsteady forces given in **Table 1**.
The mass loading, defined by the ratio of the specific masses of the particle and fluid phases \(\Phi_{m}=\rho_{p}\Phi_{v}/[\rho_{g}(1-\Phi_{v})]\), characterizes the extent to which interphase coupling is important. When \(\Phi_{m}\ll 1\), the effect of particles on the background flow is negligible. As \(\Phi_{m}\) increases, mass, momentum, and heat transfer from the particles to the fluid become increasingly more important. In this section, we first consider the equations governing flows made up of small, non-interacting particles in the limit \(\Phi_{v}\to 0\) and \(\Phi_{m}>0\) (dusty gas regime) then discuss the equations of motion for flows containing finite size particles.
Figure 5: PR-DNS of a \(\text{Ma}_{s}=1.66\) shock interacting with a particle curtain with an initial volume fraction \(\Phi_{v}=0.21\). (_a_) Bidisperse distribution of particles after the shock traverses the curtain. (_b_) Numerical schlieren at an early time when the shock is still within the curtain. (_c_) Contour of local Mach number shortly after the shock passes through the curtain. Courtesy of A Sridhar.
### Dusty Gas Regime
The dusty gas approach assumes the carrier fluid contains many small non-interacting particles (\(\Phi_{v}\approx 0\)), but the density ratio is sufficiently high such that \(\Phi_{m}>0\). If the timescales associated with interphase exchange are small compared to the characteristic time of the flow, then equilibrium of temperature and velocity can be assumed between the two phases. Under these assumptions, the mixture behaves like a single gas with modified properties, greatly reducing the number of transport equations that need to be solved. The mixture density is \(\rho_{g}(1+\Phi_{m})\), and the ratio of specific heats in the mixture (denoted with an asterisk) is given by (Marble 1970)
\[\frac{\gamma^{*}}{\gamma}=\frac{C_{p}+\Phi_{m}C_{p,p}}{C_{p}+\gamma\Phi_{m}C_{ p,p}}, \tag{7}\]
where \(C_{p}\) and \(C_{p,p}\) are the specific heats of the fluid and particle material, respectively. Consequently, the presence of particles modifies the effective sound speed according to
\[\frac{c^{*}}{c}=\sqrt{\frac{\gamma^{*}}{\gamma}\,\frac{1}{1+\Phi_{m}}}. \tag{8}\]
These relations are often employed when modeling large-scale geophysical phenomena, such as pyroclastic density currents (Sulpizio et al. 2014) and volcanic eruptions (Carcano et al. 2014, Valentine & Sweeney 2018), and shock waves in the interstellar medium (Draine & McKee 1993).
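A minimal sketch of these mixture relations (Equations 7 and 8), together with the mass-loading definition given above, is shown below. The fluid and particle properties in the example (air with \(\gamma=1.4\) and \(C_{p}\approx 1005\) J kg\(^{-1}\) K\(^{-1}\), glass-like particles with \(C_{p,p}\approx 840\) J kg\(^{-1}\) K\(^{-1}\)) are illustrative assumptions.

```python
def mass_loading(rho_p, rho_g, Phi_v):
    """Mass loading Phi_m = rho_p * Phi_v / (rho_g * (1 - Phi_v))."""
    return rho_p * Phi_v / (rho_g * (1.0 - Phi_v))

def mixture_gamma(gamma, C_p, C_pp, Phi_m):
    """Equilibrium ratio of specific heats of the dusty gas (Equation 7)."""
    return gamma * (C_p + Phi_m * C_pp) / (C_p + gamma * Phi_m * C_pp)

def sound_speed_ratio(gamma, C_p, C_pp, Phi_m):
    """Effective sound speed c*/c of the dusty gas (Equation 8)."""
    return ((mixture_gamma(gamma, C_p, C_pp, Phi_m) / gamma) / (1.0 + Phi_m)) ** 0.5

# Assumed example: air laden with glass-like particles at a volume fraction of 0.1%
Phi_m = mass_loading(rho_p=2500.0, rho_g=1.2, Phi_v=1e-3)   # ~2.1: dilute in volume, heavy in mass
print(Phi_m, sound_speed_ratio(gamma=1.4, C_p=1005.0, C_pp=840.0, Phi_m=Phi_m))  # c*/c ~ 0.5
```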
A defining feature of dusty gases is their ability to modify shock structures in the carrier phase, despite the low volume fraction. When a shock wave propagates through a dusty gas, the thickness of the wave, the pressure change across the shock, and other flow quantities differ greatly compared to when a shock passes through a dust-free gas (Carrier 1958, Miura & Glass 1982). Particles remove momentum and energy from the gas, causing the strength of the shock to decay. At sufficiently high mass loading, the shock decays to a Mach wave and eventually becomes fully dispersed (Miura & Glass 1982). However, particles of finite size delay the time it takes the two phases to reach an equilibrium. Non-negligible slip velocities between the phases give rise to turbulence modulation (for sufficiently viscous flows) and the emergence of bow shocks (for sufficiently high Mach numbers), as depicted in **Figure 5**. Thus, for many applications, the dusty gas approximation is not valid and instead transport equations need to be solved for each phase separately.
### Volume-Averaged Equations
In the dusty gas approach, only the momentum and energy equations of the gas phase are solved, along with a transport equation for the particle density. Extensions to two-phase flows at moderate and high volume fractions require some type of averaging of the governing equations. Anderson & Jackson (1967) applied a spatial filter to the Navier-Stokes equations to arrive at a set of equations for each phase that can be solved at a scale larger than the size of the particle. This is analogous to large-eddy simulation (LES) of single-phase flows, and similarly results in unclosed terms that require models. Unlike in single-phase LES, the filtering procedure omits the volume occupied by particles, resulting in sub-filtered (or subgrid-scale) contributions that account for the presence of particles. Such averaging can be applied to both phases, resulting in Eulerian-based two-fluid models (Drew 1983), or combined with a Lagrangian description of the particle phase, the so-called Eulerian-Lagrangian approach (Patankar & Joseph 2001, Capecelatro & Desjardins 2013).
The formulation of Anderson & Jackson (1967) has recently been extended to compressible flows (Shallcross et al., 2020), which results in additional unclosed terms. Because carrier-phase velocity fluctuations may originate at the particle scale, the residual stress that arises from averaging the non-linear convective term differs significantly from the Reynolds stress appearing in classical single-phase turbulence. This term may even exist in laminar flows due to unresolved particle wakes, and is therefore termed a _pseudo_ turbulent Reynolds stress (PTRS) (Mehrabadi et al., 2015). Recent work has shown that PTRS can contribute to a significant portion of the total kinetic energy during shock-particle interactions (Hosseinzadeh-Nik et al., 2018; Sen et al., 2018; Mehta et al., 2019; Osnes et al., 2019; Shallcross et al., 2020; Khalloufi & Capecelatro, 2023) and plays an important role in satisfying overall conservation (Fox et al., 2020). Algebraic models for the PTRS in incompressible (Mehrabadi et al., 2015) and compressible (Osnes et al., 2019) flows, and transport equations for compressible flows (Shallcross et al., 2020) have been proposed, but such models are still in their infancy. Future modeling efforts must take care when distinguishing between velocity fluctuations originating from large-scale turbulent motion (intrinsic turbulence) and those induced by particles (pseudo turbulence).
### Ill-Posedness
The compressible two-fluid equations are known to become ill-posed due to lack of hyperbolicity when two-way coupling is accounted for (Lhuillier et al., 2013). Depending on the treatment of the unclosed terms introduced by the averaging process, the equations can yield imaginary characteristics that lead to unphysical instabilities. This has been shown to give rise to spurious volume fraction disturbances during shock-particle interactions (Theofanous et al., 2018). Since the 1970s, numerous attempts have been made to address the lack of hyperbolicity, namely through treatment of the interfacial pressure (Stuhmiller, 1977) or adding ad-hoc forces to stabilize the solution (Lhuillier et al., 2013). This was recently resolved by Fox (2019), who derived a hyperbolic two-fluid model starting from the Boltzmann-Enskog kinetic equations, wherein the fluxes and source terms have unambiguous definitions. The following year, Fox et al. (2020) extended the model to arbitrary density ratios by including the added mass of the fluid on the particle in addition to fluid-mediated interactions in the particle pressure tensor. Given an accurate and consistent set of models for the particle-scale dynamics (e.g., drag and added mass), such a framework shows promise for enabling large-scale simulations of high-speed two-phase flows.
### Energy Transfer
**Figure 6** shows how energy is transferred in compressible gas-particle flows according to the volume-averaged equations of motion. Particles exchange mean kinetic energy with the carrier phase through drag and added mass. This large-scale motion in the gas phase is then transferred to pseudo-turbulent kinetic energy (PTKE; \(k_{g}\)) by the random arrangement of particles and non-linear interactions between wakes. A portion of the PTKE generates random uncorrelated motion in the particle phase (termed granular temperature; \(\Theta_{p}\)), and the remainder is dissipated to heat via viscous dissipation, \(\varepsilon_{\rm pt}\). Fluid dilatation (compression) also contributes to turbulent kinetic energy (Pantano & Sarkar, 2002; Khalloufi & Capecelatro, 2023), while particle compression acts as a source of granular temperature (Capecelatro et al., 2015). Granular temperature generates collisions between particles, and if the particles are inelastic (coefficient of restitution \({\rm e}<1\)), the energy associated with collisions
is dissipated to heat. In the formulation by Fox et al. (2020), the particle pressure tensor has a fluid-mediated contribution from added mass that is purely mechanical. This ensures hyperbolicity and implies that \(\Theta_{p}\) has a compression source term that depends only on the 'thermodynamic' pressure (Houim and Oran, 2016). Without this property, \(\Theta_{p}\) can take on negative values when the thermodynamic pressure is null.
## 5 Multi-Particle Interactions
Gas-particle interactions in flows containing many particles are discussed in this section. Canonical flows that capture complex gas-particle dynamics in compressible flows are limited. Insights gleaned from PR-DNS on the drag force exerted by compressible flows on assemblies of particles are first reviewed. The effects of interphase exchange on flow-generated sound are then discussed. We then summarize existing experimental configurations that isolate shock-particle interactions. Such flows are particularly challenging to simulate numerically, as they require methods that can simultaneously capture shock structures and the disperse phase. As already discussed, subgrid-scale models under these flow conditions are far less developed compared to flows in the absence of shocks. Experiments are challenging owing to the wide range of temporal scales and reduced optical access caused by the particles. We focus on dilute suspensions of particles in compressible jets and dense suspensions of particles in shock tubes. Such experiments are useful for validating numerical models, and understanding the intricate dynamics shared by a broader class of compressible gas-particle flows.
### Drag at Finite Mach Number and Volume Fraction
The drag force in systems containing many interacting finite size particles differs greatly from that of an isolated particle. This has been well documented for incompressible flows (e.g., see Tenneti and Subramaniam, 2014, and references therein). Macak et al. (2021) developed regime maps for subsonic flow in dense gas-particle systems. Using theoretical arguments, they showed that the drag forces arising in assemblies of particles result in compressible effects at Mach numbers well below the typical incompressible criterion used
Figure 6: Energy diagram for compressible gas-particle flows. In this description, the total gas-phase energy is given by \(E_{g}=\frac{1}{2}u_{g}^{2}+k_{g}+e_{g}\) and the total particle-phase energy is \(E_{p}=\frac{1}{2}u_{p}^{2}+\frac{3}{2}\Theta_{p}+e_{p}\), where \(e_{g}\) and \(e_{p}\) are the internal energies of the gas phase and particles, respectively. For a closed system, all energy flows to the lower-right corner. Courtesy of RO Fox.
for an isolated sphere (i.e., \(\mathrm{Ma}<0.3\)). With recent progress in numerical methods and growing computational resources, PR-DNS are starting to provide quantitative measures of the drag force exerted by compressible flows in dense suspensions. The green shaded region in **Figure 3** corresponds to expected values for the (steady) drag coefficient at finite volume fraction and Mach number in the continuum regime.
PR-DNS of shock waves passing through random distributions of spherical particles reveal significant particle-to-particle variation in drag (Mehta et al. 2019b, Osnes & Vartdal 2021, 2022). Particles quickly dissipate energy from the shock, causing the peak unsteady drag force to decrease with downstream distance. Mehta et al. (2019b) showed that for inviscid flows, the formation of shocklets and bow shocks around the particles results in non-zero drag long after the shock passes. Interestingly, the mean drag force acting on the suspension was found to closely match the force acting on an isolated particle. In contrast, the mean drag force on a suspension of particles exerted by a viscous fluid increases with increasing volume fraction, with values of \(C_{D}\) significantly higher in particle clouds compared to values reported in single-particle studies (Osnes et al. 2019).
Most PR-DNS of compressible gas-particle flows consider shock-particle interactions, which introduce challenges in developing drag correlations due to the lack of statistical stationarity and homogeneity. Khalloufi & Capecelatro (2023) reported the first simulations of homogeneous compressible flows past random arrays of particles ranging from subsonic to supersonic free-stream Mach numbers. The magnitude of the drag force was found to increase with volume fraction, consistent with findings from incompressible flows. Increasing the volume fraction was also found to reduce the critical Mach number that demarcates the transonic regime. An 'effective' Mach number was proposed to capture the increase in compressibility effects on drag. Osnes et al. (2023) expanded the data set of Khalloufi & Capecelatro (2023) to a wider range of Reynolds numbers and proposed models for quasi-steady drag, quasi-steady drag variation, and transverse (lift) forces, representing the most comprehensive drag law for spheres to date. The correlations reduce to well-established force models in the incompressible limit of zero Mach number and in the isolated particle limit of zero volume fraction. The corresponding drag coefficient for \(\Phi_{v}=0.4\) at \(\mathrm{Ma}=0.8\) is shown in **Figure 3**.
It is important to note that to date, there have been few (if any) comparisons of the forces measured from PR-DNS with experiments. This requires carefully designed experiments that isolate the effect of drag on particle motion, and diagnostics capable of measuring the gas-phase velocity in the vicinity of particles. Canonical experimental configurations of shock-particle interactions will be summarized in Sections 5.3 and 5.4.
### Particle-Turbulence-Acoustics Interactions
Turbulent compressible flows are capable of radiating acoustic waves that in some cases can generate significant (often undesired) sound pressure levels (SPL) (e.g., Tam 1995). In the presence of particles or droplets, interphase coupling is capable of amplifying or attenuating SPL. For example, water injection has been observed experimentally to reduce near-field sound levels from high-speed jets by 2-6 dB using water at 5-16% of the mass of the gas jet (Krothapalli et al. 2003) and by as much as 12 dB near rocket engine exhausts with \(\Phi_{m}>1\) (Henderson 2010).
The precise mechanisms responsible for these observed changes in SPL are complex. The acoustic analogy introduced by Lighthill (1952) is widely used to analyze sound generated
from turbulent flows. Crighton & Ffowcs Williams (1969) extended Lighthill's acoustic analogy to quantify the effect small air bubbles have on the sound generated from turbulent flow in water. Effects from the disperse phase are represented by a volume distribution of monopoles and dipoles. The theory predicts that sound levels increase with \(\Phi_{m}\), and this increase is significant when \(\Phi_{m}>1\). This is in contrast to experimental observations of reduced sound during water injection into high-speed jets of air. Buchta et al. (2019) suggest that this implies that interphase coupling in these configurations not only reduces the baseline turbulence-generated sound, but also serves to counter any additional sources of sound created by the particles themselves.
To better understand the effects of interphase coupling on flow-generated sound, the averaged equations of motion can be rearranged to arrive at a transport equation for the gas-phase pressure, given by (Buchta et al., 2019)
\[\frac{\partial p_{g}}{\partial t}+\mathbf{u}_{g}\cdot\nabla p_{g}=-\gamma p_{g}\nabla\cdot\mathbf{u}_{g}-\mathcal{D}+\frac{\gamma-1}{1-\Phi_{v}}\left(\mathbf{v}_{p}-\mathbf{u}_{g}\right)\cdot\mathbf{F}_{d}+\gamma p_{g}\frac{\mathrm{D}\ln\Phi_{v}}{\mathrm{D}t}, \tag{9}\]
where terms involving molecular transport effects are combined into \(\mathcal{D}\), and \(\mathbf{F}_{d}\) is the drag force containing the fluid contributions on the right-hand side of Equation 2. Compared to the pressure transport equation for single-phase flows (e.g., Pantano & Sarkar, 2002), the last two terms on the right-hand side represent new contributions due to particles. The last term accounts for volume displacement effects through changes in \(\Phi_{v}\), representing a \(pDV\) work term due to particles entering or leaving a control volume (Houim & Oran, 2016). The term involving \(\mathbf{F}_{d}\) accounts for work due to drag and is only active when the slip velocity between the phases is non-zero.
Buchta et al. (2019) performed numerical simulations of particle-laden, high-speed shear layers to quantify the effect of inertial particles on turbulence and near-field pressure intensity. Interphase coupling was observed to have a broadband effect on the turbulence and pressure fields, reducing the turbulence by over 70% for \(\Phi_{m}=10\) (see **Figure 7**). Volume displacement and drag coupling were found to have competing effects on the local pressure
Figure 7: Results from numerical simulations of compressible mixing layers seeded with particles of Stokes number \(\mathrm{St}=1\) at different convective Mach numbers (\(M\)). (_a_) Sound pressure level changes at a distance above the mixing layer. (_b_) Reduction in turbulence levels at the centerline of the mixing layer. Reproduced with permission from Buchta et al. (2019).
fluctuations. For subsonic flow, the sound level increases with particle loading, consistent with low-Mach number multiphase aeroacoustic theory. As shown in **Figure 7**, the SPL increased despite a marked decrease in turbulent kinetic energy (TKE). The increase in SPL was found to be due to changes in fluid dilatation by the particles. In contrast, the sound levels were found to decrease with increasing mass loading in supersonic flows, largely due to the substantial decrease in TKE. While the field of aeroacoustics has received much attention in the past half-century, multiphase aeroacoustics remains largely unexplored. These results suggest that the introduction of a disperse phase may provide alternative sound-reducing mechanisms compared to traditional passive and active control methods.
### Shock-Particle Interactions in Jets
Experimental studies on particle-laden compressible jets date back to the 1960s (Bailey et al. 1961, Hoglund 1962, Marble 1963, Lewis Jr & Carlson 1964, Jarvinen & Draper 1967). As shown in **Figure 8**, particles tend to modify the shock structures within the jet. In an unladen underexpanded jet, the location of the Mach disk is strongly a function of the ratio of the total (tank) pressure to the ambient pressure, or nozzle pressure ratio (NPR). A common correlation for the location of the Mach disk is \(L_{\rm MD}/D_{e}=\sqrt{\rm NPR/2.4}\), where \(D_{e}\) is the nozzle diameter (Crist et al. 1966). This expression was obtained semi-analytically and has since been validated over a wide range of pressure ratios (Franquet et al. 2015). Lewis Jr & Carlson (1964) developed a mass loading-dependent correction based on an empirical fit to experimental data of micron sized alumina particles in high-speed jets, given by \(f(\Phi_{m},{\rm Ma}_{e})=(1+0.197{\rm Ma}_{e}^{1.45}\Phi_{m}^{0.65})^{-1}\), where \({\rm Ma}_{e}\) is the Mach number at the nozzle exit. For a supersonic jet with \({\rm Ma}_{e}=3\), this predicts up to a 30% shift in \(L_{\rm MD}\) when \(\Phi_{m}\approx 0.3\), consistent with earlier experimental findings. Even larger changes would be predicted at higher exit Mach numbers. Thus, even for relatively dilute systems, particles are capable of significantly modifying the carrier phase, and the coupling becomes more pronounced at higher Mach numbers.
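The two correlations quoted above can be combined as in the sketch below, which reproduces the \(\sim 30\%\) shift mentioned in the text for \(\mathrm{Ma}_{e}=3\) and \(\Phi_{m}\approx 0.3\). Treating the laden Mach disk location as the unladen value multiplied by \(f(\Phi_{m},\mathrm{Ma}_{e})\) is an interpretive assumption about how the correction is applied; the NPR and nozzle diameter used below are those of the Sommerfeld (1994) case shown in **Figure 8**.

```python
def mach_disk_location(NPR, D_e):
    """Unladen Mach disk location, L_MD = D_e * sqrt(NPR / 2.4) (Crist et al. 1966)."""
    return D_e * (NPR / 2.4) ** 0.5

def loading_correction(Phi_m, Ma_e):
    """Mass-loading correction of Lewis Jr & Carlson (1964),
    f = (1 + 0.197 * Ma_e**1.45 * Phi_m**0.65)**-1."""
    return 1.0 / (1.0 + 0.197 * Ma_e**1.45 * Phi_m**0.65)

D_e = 3e-3                                     # nozzle diameter [m]
L_MD = mach_disk_location(NPR=29.8, D_e=D_e)   # ~3.5 nozzle diameters for the unladen jet
f = loading_correction(Phi_m=0.3, Ma_e=3.0)    # ~0.69, i.e. a ~30% upstream shift
print(L_MD / D_e, f, f * L_MD / D_e)           # assumes the factor multiplies L_MD directly
```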
More recently, Sommerfeld (1994) performed experiments and numerical simulations of particle-laden underexpanded jets at \({\rm NPR}=30\). **Figure 8** shows shadowgraph images of the jet with increasing values of \(\Phi_{m}\). The shift in the Mach disk location was attributed to strong momentum coupling between the phases due to the large relative velocity between the gas and particles. The shift in \(L_{\rm MD}\) measured in these experiments significantly exceeds that predicted by the correlation of Lewis Jr & Carlson (1964), with smaller particles resulting in a larger shift. This was attributed to smaller particles influencing a larger portion of the jet due to their increased spreading rate. Eulerian-Lagrangian simulations were capable of predicting a shift in the Mach disk location, but not to the same degree as was seen in the experiments. Further work is needed to better understand the exact mechanisms responsible for the modifications to the shock structures, and to expose limitations of numerical models. In general, the best way to improve fundamental understanding of shock-particle interactions is numerical simulation concurrent with advanced experimental methods (see the sidebar titled Experimental Diagnostics for High-Speed Multiphase Flow).
### Particle Curtain Interactions in a Shock Tube
Several experimental campaigns have sought to study dense gas-solid flows using shock tubes. The pioneering work of Rogue et al. (1998) studied shock-particle interactions at nearly packed volume fractions of \(\Phi_{v}\approx 0.6\) by placing a bed of spheres on a thin diaphragm in a vertical shock tube.
## Experimental Diagnostics for High-Speed Multiphase Flow
The last two decades have seen rapid advances in high-speed imaging technology. Digital cameras with high spatial resolution, MHz repetition rates, and long record times are becoming common, allowing image-based particle tracking in supersonic flows. Additionally, the advent of high-repetition-rate, high-power illumination sources such as pulse-burst lasers has enabled time-resolved particle image velocimetry (TR-PIV) to characterize the gas-phase velocity (Beresh, 2021). In such experiments, care must be taken to separate the typically slower particles of interest from the PIV seed (DeMauro et al., 2017). With the continuous development of sophisticated algorithms, digital in-line holography (DIH) can now resolve 3D particle velocity and size distributions in shock tubes (Chen et al., 2018) and in underexpanded jets (Buchmann et al., 2012). Researchers have resorted to X-ray techniques in dense multiphase flows opaque to visible light. For instance, single-shot flash X-ray has measured particle volume fraction in dense clouds (Wagner et al., 2015), whereas time-resolved proton radiography has successfully tracked explosively dispersed particle beds (Hughes et al., 2021). Combining 3D tracking algorithms with X-ray imaging is promising for optically opaque flows but is only in its infancy, with demonstrations limited to creeping flow (Makiharju et al., 2022).
Later, researchers targeted shock-particle interactions in dense regimes using gravity-fed particle curtains (e.g., Wagner et al., 2012, Theofanous et al., 2016, DeMauro et al., 2017, Theofanous et al., 2018, DeMauro et al., 2019, Daniel & Wagner, 2022). An example is shown in **Figure 9**, which includes schlieren imaging of the particle curtain (having initial thickness \(\delta_{0}\)) surrounded by contours of streamwise gas-phase velocity measured with TR-PIV. The incident shock first travels through the PIV window and schlieren fields (**Figure 9a**). Transmitted and reflected shocks are observed in **Figure 9_b_**
Figure 8: Experimental visualizations of underexpanded jets. (_a_) Schlieren of an unladen jet with \(\text{NPR}=4.76\) and \(D_{e}=2\) mm. (_b_) Same jet laden with 100 μm particles. (_c_)–(_f_): Shadowgraph of an underexpanded jet with \(\text{NPR}=29.8\), \(D_{e}=3\) mm, and \(d_{p}=45\) μm for different mass loadings: (_c_) \(\Phi_{m}=0\); (_d_) \(\Phi_{m}=0.11\); (_e_) \(\Phi_{m}=0.24\); (_f_) \(\Phi_{m}=0.35\); (_g_) \(\Phi_{m}=0.64\); and (_h_) \(\Phi_{m}=1.07\). Panels _a_–_b_ courtesy of JS Rubio. Panels _c_–_f_ adapted from Sommerfeld (1994).
following impingement on the curtain resulting in a pressure gradient across the curtain. Control volume analysis using TR-PIV and concurrent pressure measurements suggest this pressure differential to be the largest contributor to unsteady drag of the particle curtain (DeMauro et al., 2017). At later times (**Figure 9_c_**), the reflected and transmitted shocks propagate upstream and downstream, respectively, before substantial spread of the curtain is seen in **Figure 9_d_**. Local regions of high velocity are observed within the curtain, which are likely correlated to changes in porosity that act effectively as a nozzle to accelerate the flow (e.g., see **Figure 5**).
The particle curtain experiments performed to date span a wide range of Ma\({}_{s}\), \(\rho_{p}\), \(\Phi_{v}\), and initial curtain thickness \(\delta_{0}\). The dependence of the non-dimensional curtain spread \(\delta/\delta_{0}\) on these parameters is shown in **Figure 10_a_**. Not surprisingly, particles spread faster as Ma\({}_{s}\) increases and \(\rho_{p}\) decreases. The experiments of Theofanous et al. (2016) with larger \(\delta_{0}\) show a markedly lower spreading rate than the rest. Also evident is the faster spread of the curtain with increasing \(\Phi_{v}\). To collapse the spread, Theofanous et al. (2016) used dimensional analysis to normalize time as a function of the theoretical reflected shock pressure if the curtain were treated as a solid wall. Expanding upon this, DeMauro et al. (2019) and Daniel & Wagner (2022) used a force-balance approach to suggest the following scaling relationships:
\[\frac{x}{\delta_{0}}=\left(\frac{\sqrt{\Delta P}t}{\sqrt{\rho_{p}}\delta_{0}} \right)^{2}, \tag{10}\]
and
\[\frac{x}{\delta_{0}}\propto\ \left(\Phi_{v}^{0.25}\sqrt{\frac{\rho_{0}}{\rho_{p} }}\frac{U_{\mathrm{ind}}t}{\delta_{0}}\right)^{2}={t^{\star}}^{2}, \tag{11}\]
Figure 9: Interaction of a Mach number Ma\({}_{s}=1.22\) normal shock with a particle curtain having initial volume fraction \(\Phi_{v}=0.09\) and initial thickness \(\delta_{0}\) = 3.0 mm. Streamwise velocity \(u\) contours normalized by post-shock incident shock velocity \(U_{\mathrm{ind}}\). Normalized time \(t^{\star}\) given by Equation 11. At \(t^{\star}\) = (_a_) \(-0.018\) (\(-40\) μs), (_b_) 0.067 (147 μs), (_c_) 0.128 (280 μs), and (_d_) 0.287 (627 μs). Adapted from DeMauro et al. (2017).
where \(\Delta P\) is the pressure difference across the curtain, \(\rho_{0}\) is the initial gas density, and \(U_{\rm ind}\) is the theoretical velocity induced by the incident shock. Daniel & Wagner (2022) demonstrated tight collapse of the particle curtain spread using Equation 10 and the pressures measured across the curtain over a range of volume fractions. Alternatively, the time scaling in Equation 11 is derived by treating the particle curtain as a porous screen and incorporates only initial parameters known _a priori_. As shown in **Figure 10_b_**, this scaling results in effective collapse of the curtain spread over nearly an order of magnitude range in \(U_{\rm ind}\) (\(110-1170\) m/s), \(\rho_{p}\) (\(2.4-17.1\) g/cm\({}^{3}\)) and \(\delta_{0}\) (\(1.8-33.5\) mm). Notably, the particle diameter is not included in the scaling, although it also varies by an order of magnitude (\(0.1-1\) mm). This suggests that the unsteady dynamics of the shock-induced dispersal are dominated by properties within the curtain as it expands, and that \(\delta_{0}\) may be a more important length scale than \(d_{p}\) for the cases investigated thus far.
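A short sketch of these scalings is given below. The inputs roughly match the case of **Figure 9** (\(\mathrm{Ma}_{s}=1.22\), \(\Phi_{v}=0.09\), \(\delta_{0}=3\) mm), while the induced velocity, gas density, and particle density are assumed representative values (air and glass spheres), so the recovered \(t^{\star}\approx 0.067\) at \(t=147\) μs should be read as an order-of-magnitude check rather than an exact reproduction.

```python
def spread_eq10(t, dP, rho_p, delta_0):
    """Curtain spread from Equation 10, x/delta_0 = (sqrt(dP)*t / (sqrt(rho_p)*delta_0))**2."""
    return (dP**0.5 * t / (rho_p**0.5 * delta_0)) ** 2

def t_star(t, Phi_v, rho_0, rho_p, U_ind, delta_0):
    """Normalized time from Equation 11 (porous-screen force balance);
    the curtain spread then scales as x/delta_0 ~ t_star**2."""
    return Phi_v**0.25 * (rho_0 / rho_p) ** 0.5 * U_ind * t / delta_0

# Assumed representative values for the Figure 9 case: air (rho_0 ~ 1.2 kg/m^3),
# glass spheres (rho_p ~ 2500 kg/m^3), post-shock induced velocity U_ind ~ 114 m/s
print(t_star(t=147e-6, Phi_v=0.09, rho_0=1.2, rho_p=2500.0,
             U_ind=114.0, delta_0=3.0e-3))     # ~0.067, cf. Figure 9b
```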
Despite progress, several questions on shock-particle curtain interactions remain. The parameter space should be expanded to include high-density particles combined with strong shocks. Moreover, as the particle cloud expands and becomes more dilute, the dense scaling will no longer hold. For instance, Theofanous et al. (2018) report good agreement with standard drag laws when the initial volume fraction within the curtain is less than \(1\%\). Experiments with initial volume fractions of a few percent would shed light on this dense-to-dilute transition. Additionally, all experiments to date have been performed with monodisperse particles. Experiments and numerical simulations of bidisperse and polydisperse suspensions would provide insight into shock-induced size segregation (e.g., **Figure 5**).
Multiphase shock tubes represent one of the few experimental configurations commonly
Figure 10: Comparison of historical particle curtain spread data. (_a_) Non-dimensional spread \(\delta/\delta_{0}\) versus time, and (_b_) \(\delta/\delta_{0}\) versus \(t^{*}\) (Equation 11). Glass particle trajectories are shown in blue, tungsten in green, and steel in red. Adapted from Wagner et al. (2023).
used for validating compressible gas-particle flow models. Ling et al. (2012) performed Eulerian-Lagrangian simulations using the force models of Parmar et al. (2009) augmented with volume fraction corrections. They report overall good agreement in the curtain spreading rate and location of reflected/transmitted shocks, but demonstrate that the results depend strongly on the nature of the models employed (namely drag and collisions). Discrepancies between simulations and experiments were attributed to experimental uncertainty in the initial particle distribution and boundary layers at the walls of the shock tube. Houim and Oran (2016) showed good agreement against shock tube experiments of Rogue et al. (1998) and Wagner et al. (2012b) using a compressible formulation of the two-fluid model. Their results were also found to be sensitive to how the edge of the curtain was initialized. Using a compressible Eulerian-Lagrangian formulation, Shallcross and Capecelatro (2018) showed that the filter size used for interphase exchange and coefficient of restitution appearing in the collision model have significant effects on the curtain spreading rates. Nili et al. (2021) performed uncertainty quantification of shock-curtain interactions using a one-dimensional Eulerian-Lagrangian framework. They showed that improvements to the quasi-steady drag force are needed to reduce uncertainty in the upstream particle front, while improvements to quasi-steady, pressure gradient, and added-mass forces in addition to the collision model are needed to reduce errors in long-term predictions of the downstream particle front position. These studies point to a need for experimental measurements with improved uncertainty quantification and further parametric studies as new models come online.
Even on modern computers, particle-resolved simulations that run long enough to observe noticeable spreading of the curtain (\(t^{*}\gtrapprox 0.2\)) are still out of reach. To ensure numerical stability in an explicit discretization of the compressible Navier-Stokes equations, the simulation time step must be \(\Delta t<\Delta x/\max|\mathbf{u}_{g}+c|\), where the grid spacing used in PR-DNS is \(\Delta x\ll d_{p}\). As an example, consider the simulation shown in **Figure 5**. The parameters closely match the multiphase shock tube experiment of Wagner et al. (2012b). Uniform grid spacing is employed with 40 points across the diameter of the smallest particles, resulting in \(\Delta t=\mathcal{O}(10^{-9}\ \mathrm{s})\), 6 orders of magnitude smaller than the time span in **Figure 10**. Recent advances in immersed boundary methods using adaptive mesh refinement (Mehta et al., 2022) show promise in addressing some of these issues. Combining such techniques with multirate time integration (e.g., Mikida et al., 2019) or all-Mach number solvers (e.g., Kuhn and Desjardins, 2021) could aid in reconciling the timescale discrepancy.
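To illustrate the stiffness of this time-scale constraint, the sketch below evaluates the acoustic CFL limit for a grid with 40 points across the particle diameter. The particle size, peak signal speed \(\max|\mathbf{u}_{g}+c|\), and target physical time are assumed representative values, not parameters taken from the simulation in **Figure 5**.

```python
def prdns_time_step(d_p, points_per_diameter=40, max_signal_speed=800.0, CFL=1.0):
    """Explicit acoustic stability limit dt < CFL * dx / max|u_g + c|,
    with uniform grid spacing dx = d_p / points_per_diameter."""
    dx = d_p / points_per_diameter
    return CFL * dx / max_signal_speed

dt = prdns_time_step(d_p=100e-6)   # assumed ~100 micron particles -> dt ~ 3e-9 s
t_phys = 5e-4                      # assumed physical time to reach t* ~ 0.2 (hundreds of microseconds)
print(dt, t_phys / dt)             # O(1e-9 s) step size; O(1e5) time steps at this step size
```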
## Summary Points
1. Compared to incompressible flows, particle-laden compressible flows span a much wider range of scales. In addition, unsteady forces that are typically negligible in low-speed gas-particle flows (e.g., added mass and Basset history) can have leading order effects in flows with large gas-phase acceleration, especially during shock-particle interactions. Validated models for such interactions exist when the postshock Mach number is subcritical.
2. For a given free-stream Mach number, gas-phase compressibility becomes increasingly more important as the volume fraction increases.
3. Existing drag laws for particles at finite Mach number have a surprising history, rooted in 18th- and 19th-century cannon firings.
4. Compressible two-fluid models are known to become ill-posed due to lack of hyperbolicity when two-way coupling is accounted for. After nearly 40 years of attempts
to remedy this, a fully hyperbolic two-fluid model was recently formulated (Fox 2019, Fox et al. 2020).
5. Particle-resolved simulations are shedding new light on drag and turbulence when shock waves interact with assemblies of particles.
6. Advances in high-speed measurement diagnostics and novel experimental configurations over a wide range of volume fractions have provided insight into unsteady forces associated with shock-particle interactions, motivating a multitude of modeling efforts.
## Future issues
1. Improved numerical methods are needed to properly account for particles in the vicinity of shocks when the grid spacing is larger than the particle diameter.
2. Separate (but compatible) models for intrinsic turbulence (turbulence that follows the classical energy cascade) and pseudo-turbulence (turbulence generated by particles at small scales) are needed.
3. Careful experiments resolving particle motion and the surrounding fluid are needed to validate numerical simulations. To resolve the flow field, such experiments will likely use larger particles, challenging current limitations in DNS.
4. Particle size segregation during shock-driven expansion, such as explosive dispersal, is poorly understood. Numerical simulations and experiments of shocks interacting with bidisperse and polydisperse mixtures and nonspherical particles will provide important insights.
5. Multiphase instabilities observed in strongly accelerating flows (e.g. Rodriguez et al. 2013, McFarland et al. 2016, Frost 2018, Osnes et al. 2018, Koneru et al. 2020, Ouellet et al. 2021) remain poorly understood and need to be unraveled.
6. Flow through a collection of particles results in a distribution of drag forces with significant particle-to-particle variation. Various models have recently been proposed to capture drag force variation in incompressible flows (Akiki et al. 2017, Esteghamatian et al. 2018, Lattanzi et al. 2022) and recently compressible flows (Osnes et al. 2023). Future research should incorporate and assess the importance of such models in coarse-grained simulations.
7. Validated models for interphase heat and mass transfer that account for reacting particles at finite Ma and \(\Phi_{v}\) are needed. Recent progress in this area can be found in Ling et al. (2016), Houim & Oran (2016).
8. At hypersonic speeds (e.g., Ma \(>6\)), temperature effects become important, and ionization of the carrier phase might need to be accounted for. Recent advances have been made in heat flux and drag modeling under these conditions (e.g., Singh & Schwartzentruber 2016, 2017), including the transition from rarefied to continuum flows in dense particle suspensions (Vijayan & Levin 2022). However, detailed experiments are required to validate and identify the limitations of these models.
## Disclosure Statement
The authors are not aware of any affiliations, memberships, funding, or financial holdings that might be perceived as affecting the objectivity of this review.
## Acknowledgments
J.C. acknowledges the support from the National Aeronautics and Space Administration (grant no. 80NSSC20K1868 and 80NSSC20K0295). He is also grateful for contributions from current and former students and postdocs, including Dr. Gregory Shallcross, Dr. Mehdi Khalloufi, Meet Patel, and Archana Sridhar. J.W. is grateful for support from the Laboratory Directed Research and Development Program (LDRD). He thanks Steven Beresh, Sean Kearney, Edward DeMauro, Kyle Daniel, and Daniel Guildenbecher for critical input and insightful discussions.
Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government.
|
2310.09087 | A physics-infused Immersed Boundary Method using online sequential Data
Assimilation | A physics-infused strategy relying on the Ensemble Kalman Filter (EnKF) is
here used to augment the accuracy of a continuous Immersed Boundary Method
(IBM). The latter is a classical penalty method accounting for the presence of
the immersed body via a volume source term which is included in the
Navier-Stokes equations. The model coefficients of the penalization method,
which are usually selected by the user, are optimized here using an EnKF
data-driven strategy. The parametric inference is governed by the physical
knowledge of local and global features of the flow, such as the no-slip
condition and the shear stress at the wall. The C++ library CONES (Coupling
OpenFOAM with Numerical EnvironmentS) developed by the team is used to perform
an online investigation, coupling on-the-fly data from synthetic sensors with
results from an ensemble of coarse-grained numerical simulations. The analysis
is performed for a classical test case, namely the turbulent channel flow with
$Re_\tau = 550$. The comparison of the results with a high-fidelity Direct
Numerical Simulation (DNS) shows that the data-driven procedure exhibits
remarkable accuracy despite the relatively low grid resolution of the ensemble
members. | Miguel M. Valero, Marcello Meldi | 2023-10-13T13:14:56Z | http://arxiv.org/abs/2310.09087v1 | # A physics-infused Immersed Boundary Method using online sequential Data Assimilation
###### Abstract
A physics-infused strategy relying on the Ensemble Kalman Filter (EnKF) is here used to augment the accuracy of a continuous Immersed Boundary Method (IBM). The latter is a classical penalty method accounting for the presence of the immersed body via a volume source term which is included in the Navier-Stokes equations. The model coefficients of the penalization method, which are usually selected by the user, are optimized here using an EnKF data-driven strategy. The parametric inference is governed by the physical knowledge of local and global features of the flow, such as the no-slip condition and the shear stress at the wall. The C++ library CONES (Coupling OpenFOAM with Numerical EnvironmentS) developed by the team is used to perform an online investigation, coupling on-the-fly data from synthetic sensors with results from an ensemble of coarse-grained numerical simulations. The analysis is performed for a classical test case, namely the turbulent channel flow with \(Re_{\tau}=550\). The comparison of the results with a high-fidelity Direct Numerical Simulation (DNS) shows that the data-driven procedure exhibits remarkable accuracy despite the relatively low grid resolution of the ensemble members.
IBM, DA, EnKF, CONES, wall turbulence
## 1 Introduction
The development of reliable, efficient, and cost-effective numerical tools for the accurate prediction of multi-physics problems is a timely key challenge in Computational Fluid Dynamics (CFD). In fact, the development of predictive tools to investigate flow configurations, including several complex aspects (turbulence, compressibility effects, fluid-structure interaction, transport of active and passive scalars), must take into account requirements exhibiting goal rivalry. Future paradigms in the numerical development of flow solvers strive for increased accuracy of the predictive computational strategies while reducing the computational resources required. This latter criterion is not only associated with a global reduction of the cost and the accessibility of methods on different computational machines and architectures but also with the decrease of the carbon footprint related to calculations, in order to comply with progressively more strict regulations. Therefore, efficient numerical modelling in fluid mechanics is essential for progress in fundamental studies as well as industrial applications.
Among the flow problems previously introduced, the accurate numerical representation of near-wall flow features for bodies immersed in turbulent flows needs key advancement. The prediction of numerous flow features of unstationary flows, such as aerodynamic forces, is driven by the precise representation of localized near-wall dynamics. This aspect is particularly relevant and, at the same time, challenging for the flow prediction around complex geometries. In this case, classical body-fitted approaches may have to deal with high deformation of the mesh elements, possibly leading to poor numerical prediction. Additionally, the simulation of moving bodies may require prohibitively expensive mesh updates. In the last decades, several numerical strategies have been proposed to handle these two problematic aspects. Among these proposals, the Immersed Boundary Method (IBM) (Peskin, 1972, 2002; Mittal & Iaccarino, 2005; Kim & Choi, 2019; Verzicco, 2022) has emerged as one of the most popular approaches. The IBM includes a spectrum of tools that operate on non-body-fitted grids i.e. the mesh is not tailored around the shape of the immersed body, which is used as an internal boundary condition for classical body-fitted approaches. For IBM, numerical strategies are developed instead to mimic the presence of the body. The techniques, which rely, for example, on the addition of source terms in the dynamic equation or on imposing discrete values at the centre of grid elements for some physical quantities investigated, are usually distinguished in continuous and discrete methods. Despite the significant differences between the numerous approaches presented in the literature, state-of-the-art methods can account for body movement and deformation with reduced computational cost. However, the complete resolution of near-wall features, in particular for high Reynolds regimes, is a challenging issue in IBM. In fact, owing to the regularity of the grid and the difficulties in applying arbitrary local directional stretching, a larger number of mesh elements is usually required in IBM to obtain similar accuracy of body-fitted tools (Verzicco, 2022).
Another difficulty for the numerical representation of near-wall turbulence, for both IBM and body-fitted methods, is linked to the performance of turbulence/subgrid-scale modelling in the proximity of the wall (Pope, 2000; Sagaut, 2005; Wilcox, 2006). Computational resources required for complete flow resolution at the wall may be prohibitive, and, at the same time, wall modelling can show a lack of accuracy in regions exhibiting flow recirculation or strong pressure gradients. In the last decades, several approaches coming from the Estimation Theory (Simon, 2006) have been employed to complete turbulence closures and/or their associated wall functions with the aim of obtaining generalized predictive models. Among these approaches, data-driven approaches for Uncertainty Quantification and Propagation (UQ-UP) (Xiao & Cinnella, 2019), Data Assimilation (DA) (Asch _et al._, 2016) and Machine Learning (ML) (Duraisamy _et al._, 2019) have been extensively used to improve the accuracy of turbulence closures (Gorle & Iaccarino, 2013; Edeling _et al._, 2014; Margheri _et al._, 2014; Tracey _et al._, 2015; Wu _et al._, 2018; Srinivasan _et al._, 2019; Volpiani _et al._, 2021) and subgrid-scale models (Meldi _et al._, 2011; Vollant _et al._, 2017; Meldi, 2018; Chandramouli _et al._, 2020; Lozano-Duran & Bae, 2020; Mons _et al._, 2021; Moldovan _et al._, 2022). These works have identified theoretical and practical issues and proposed efficient parametric/structural corrections to such models. However, most of these analyses also indicate that a universal predictive model is arguably not achievable because of the high sensitivity to the physical features of the turbulent flows as well as to the numerous numerical details of the simulation process. In addition, data-driven methods require the availability and manipulation of data sets to perform their parametric optimization. The information required from such databases can become prohibitively large when _data hungry_ techniques such as deep learning are used. Considering that the optimized turbulence closures via data-driven techniques do not grant universal predictive features, one may wonder if running a classical high-accuracy
simulation may be computationally less expensive and more accurate in a large number of cases.
In the present work, a _physics infused_ data-driven strategy based on Data Assimilation is proposed to improve the accuracy of a classical IBM, namely the penalization model (Angot _et al._, 1999). This analysis aims to exploit available physical knowledge of the flow instead of blindly feeding the algorithms with available data. In this scenario, the physical information available about the flow configuration is mapped in a data set including local and global features, and it is made available over synthetic sensors, which are appositely created and placed. This information is used to enhance the performance of the model. Whenever possible, exploiting physical information is preferable to the usage of Big Data. First, the loading and manipulation of databases are alleviated, significantly reducing the computational costs. Second, a larger degree of flexibility about the number and positioning of the sensors can be easily achieved. This last aspect can be beneficial to enhance the rate of convergence of the underlying optimization processes, mitigating key problems such as over-constraining. In addition, the inclusion of physical criteria and information is not exclusive, and it can be easily integrated with available databases for the analysis of complex cases.
The test case chosen for the present investigation is the turbulent plane channel flow for \(Re_{\tau}\approx 550\). This test case, which has been extensively studied in the literature (Pope, 2000), is driven by the shear effects at the wall. In this framework, the physical knowledge of the near-wall dynamics observed for this test case will be infused into the DA algorithm to optimize the IBM. In particular, the Ensemble Kalman Filter (EnKF) (Evensen, 2009; Asch _et al._, 2016) will be used to infer the parametric behaviour of the penalization method mimicking the near-wall flow dynamics. As three-dimensional, scale-resolving simulations will be performed within the EnKF, the C++ platform CONES recently developed by the team (Villanueva _et al._, 2023) will be used to perform the DA strategy _on-the-fly_. This means that the integration of the physical information will be performed while an ensemble of numerical simulations is running and producing instantaneous flow fields, without the need to stop and restart the calculations to perform the optimization process.
The article is structured as follows. In §2, the numerical equations are presented, together with the formulation of the IBMs used and the DA algorithm. In §3, the test case is introduced and discussed. Preliminary results are presented, and the setup of the DA experiment is detailed. In §4, the main results are discussed. The sensitivity of the data-driven method to changes in its hyperparametric description, as well as the underlying CFD model, is investigated. In §5, the conclusions are drawn, and future perspectives are discussed.
## 2 Numerical Ingredients
Since the original work by Peskin (1972), the term IBM has been used to include a large spectrum of methods that simulate immersed (or embedded) boundaries in viscous flows on grids that do not conform to these boundaries. A qualitative representation is shown in figure 1. The usage of Cartesian grids permits formulating efficient spectral, finite differences, or finite volume approximations of the partial derivative equations used to represent the physical system. In the present work, we will focus on IBM strategies which rely on the usage of source terms to account for the presence of the immersed body.
### Dynamic equations and numerical solver
An incompressible flow for a Newtonian fluid can be described by the classical form of the Navier-Stokes equations:
\[\boldsymbol{\nabla\cdot u} =0 \tag{1}\] \[\frac{\partial\boldsymbol{u}}{\partial t}+\boldsymbol{\nabla\cdot(u \boldsymbol{u})} =-\nabla p+\nu\nabla^{2}\boldsymbol{u}+\boldsymbol{f}_{P} \tag{2}\]
\(\boldsymbol{u}\), \(p\) and \(\nu\) are the velocity field, the pressure field (normalized over the density \(\rho\)), and the kinematic viscosity, respectively. The term \(\boldsymbol{f}_{P}\) represents a normalized volume force, which could be, for example, the IBM source term used to account for the presence of the immersed body.
The complete numerical resolution of all active dynamic scales governing the evolution of turbulence via Direct Numerical Simulation (DNS) requires prohibitive computational resources for _Re_ values commonly observed in realistic industrial flows. One popular approach used to mitigate this problem and reduce the computational resources required is the Large Eddy Simulation (LES) (Sagaut, 2005), which relies on explicit/implicit filtering of the dynamic equations. This procedure excludes the direct calculation of small eddies, whose dynamic effects are modelled. This subgrid-scale closure usually relies on asymptotic theories based on the hypothesis of universal behaviour of the small eddies (i.e. they are statistically independent of the macroscopic features of the flow). Numerical discretization for LES is obtained starting from the filtered equations resolved for the reduced-order variables (\(\overline{\cdot}\)):
\[\boldsymbol{\nabla\cdot\tilde{u}} =0 \tag{3}\] \[\frac{\partial\boldsymbol{\tilde{u}}}{\partial t}+\boldsymbol{ \nabla\cdot(\tilde{u}\tilde{u})} =-\nabla\tilde{p}+\nu\nabla^{2}\boldsymbol{\tilde{u}}-\boldsymbol{ \nabla\cdot\tau}_{SGS}+\boldsymbol{\tilde{f}}_{P} \tag{4}\]
\(\boldsymbol{\tau}_{SGS}\) is the subgrid stress tensor and its components are defined as \(\tau_{SGS,ij}=\widetilde{u_{i}u_{j}}-\tilde{u}_{i}\tilde{u}_{j}\), where \(i,j=x,y,z\) corresponds to the spatial directions of the frame of reference. The tensor \(\boldsymbol{\tau}_{SGS}\) must be modelled to close the problem. Among the different proposals in the literature (Sagaut, 2005), the _SubGrid-Scale_ (SGS) model proposed by Smagorinsky (1963) is integrated into most of the CFD numerical solvers. This model is based on the Boussinesq turbulent viscosity hypothesis and assumes that _Re_ is sufficiently high so
Figure 1: Qualitative representation of a Cartesian grid used for IBM calculations. \(\Omega_{f}\) and \(\Omega_{b}\) represent the fluid and body domains, respectively. The blue area \(\Sigma_{b}\) represents the body interface.
that scale separation and turbulence equilibrium are satisfied (Katopodes, 2018). Hence, to close the problem, the Boussinesq hypothesis relies on an SGS viscosity \(\nu_{SGS}\):
\[\tau_{SGS,ij} = \frac{1}{3}\delta_{ij}\tau_{SGS,kk}-2\nu_{SGS}\tilde{S}_{ij} \tag{5}\] \[\nu_{SGS} = \left(C_{s}\Delta\right)^{2}\left(2\tilde{S}_{ij}\tilde{S}_{ij}\right)^{1/2} \tag{6}\] \[\tilde{S}_{ij} = \frac{1}{2}\left(\frac{\partial\tilde{u}_{i}}{\partial x_{j}}+\frac{\partial\tilde{u}_{j}}{\partial x_{i}}\right) \tag{7}\]
\(C_{s}\in\) [0.1, 0.2] is the so-called Smagorinsky coefficient, \(\Delta\) is the filter width, and \(\tilde{S}_{ij}\) is the resolved rate-of-strain tensor. In this work, both DNS and LES will be used.
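A minimal sketch of the Smagorinsky closure defined by equations 6-7 is given below; the sampled velocity-gradient tensor and the filter width are arbitrary illustrative values, and \(C_{s}=0.17\) is simply one value within the quoted range.

```python
import numpy as np

def smagorinsky_nu_sgs(grad_u, delta, C_s=0.17):
    """SGS viscosity nu_SGS = (C_s * delta)^2 * (2 * S_ij S_ij)^(1/2) (equation 6),
    with S_ij = 0.5 * (du_i/dx_j + du_j/dx_i) the resolved strain rate (equation 7)."""
    S = 0.5 * (grad_u + grad_u.T)
    return (C_s * delta) ** 2 * np.sqrt(2.0 * np.sum(S * S))

# Arbitrary illustrative resolved velocity gradient [1/s] (trace-free) and filter width [m]
grad_u = np.array([[100.0,  50.0,   0.0],
                   [  0.0, -80.0,  20.0],
                   [ 10.0,   0.0, -20.0]])
print(smagorinsky_nu_sgs(grad_u, delta=1e-3))
```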
Numerical simulations are performed using the C++ finite-volume open-source software _OpenFOAM_. CFD solvers available on this platform, which are based on Finite Volume discretization, have been extensively used by the scientific community in recent years for both academic studies and industrial applications (Tabor & Baba-Ahmadi, 2010; Meldi _et al._, 2012; Selma _et al._, 2014; Constant _et al._, 2017). The calculations have been performed using second-order centred schemes for spatial discretization, and a second-order backward scheme has been selected for the time discretization. The numerical system is resolved using a _Pressure-Implicit with Splitting of Operators_ (PISO) algorithm (Issa, 1986; Ferziger & Peric, 1996; Versteeg & Malalasekera, 2007; Greenshields & Weller, 2022), which is detailed in §A, adding the velocity-dependent forcing term in the momentum equations. The simulations performed include a set of DNS (i.e., numerical resolution of the dynamic system in equations 1 - 2) as well as two LES (equations 3 - 4), which have been closed using the Smagorinsky model. For the latter, two proposals available in the code for the calculation of the filter width \(\Delta\) (see equation 6) have been selected:
(i) The _cube-root volume delta_\(\Delta_{c}^{CRV}\) method ties the value of \(\Delta\) to the geometric local features of the mesh element. The expression used is \(\Delta_{c}^{CRV}=a\left(V_{c}\right)^{1/3}\), where \(V_{c}\) is the volume of the mesh element \(c\), and \(a\) is a parameter to be defined by the user. In this analysis, \(a=1\).
(ii) The _van Driest damping function_ \(D=1-\exp\left(-y^{+}/b\right)\) is used to improve the accuracy of the SGS closure in the near-wall region. \(y\) represents the wall-normal direction, and \(y^{+}=y/\delta_{\nu}\) is the wall distance non-dimensionalized by the viscous wall unit \(\delta_{\nu}=\nu/u_{\tau}\). The friction velocity \(u_{\tau}=\sqrt{\tau_{w}/\rho}\) is determined via the calculated shear stress at the wall \(\tau_{w}\). In this case, the filter width is \(\Delta_{c}^{vD}=\min\left(D\kappa y/d,\Delta_{c}^{CRV}\right)\). The values of the empirical coefficients are \(b=26\), \(\kappa=0.41\) (_von Karman_ constant), and \(d=0.158\). In practice, \(\Delta\) behaves here as a _cube-root volume delta_ model far from the wall, while it exhibits significantly smaller values in the proximity of the body surface. Both filter-width definitions are illustrated in the sketch below.
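The following Python functions are a minimal illustration of the _cube-root volume delta_ and _van Driest_ options described above; variable names are illustrative and the actual implementation in the code may differ.

```python
import numpy as np

def delta_crv(cell_volume, a=1.0):
    """Cube-root volume filter width, Delta_CRV = a * V_c^(1/3)."""
    return a * cell_volume ** (1.0 / 3.0)

def delta_van_driest(y, cell_volume, u_tau, nu, a=1.0, b=26.0, kappa=0.41, d=0.158):
    """Van Driest-damped filter width, min(D * kappa * y / d, Delta_CRV)."""
    y_plus = y * u_tau / nu              # wall distance in viscous units
    damping = 1.0 - np.exp(-y_plus / b)  # D = 1 - exp(-y+/b)
    return min(damping * kappa * y / d, delta_crv(cell_volume, a))
```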
### Immersed Boundary Method
As discussed in §1, the Immersed Boundary Method (IBM) encompasses a very wide range of approaches that target the accurate representation of immersed bodies using non-body-fitted grids. In the present study, two proposals relying on source terms included in the dynamic equations are introduced and used in the numerical simulations.
The first approach considered is the classical penalization method proposed by Angot _et al._ (1999). This continuous method employs Darcy's law to obtain a closed expression for the source term \(\boldsymbol{f}_{P}\) in the body interface \(\Sigma_{b}\) and the solid region \(\Omega_{b}\). To do so, a
spatially dependent tensor \(\textbf{{D}}(\textbf{{x}},t)\) must be defined. The resulting penalty term is modelled as:
\[\textbf{{f}}_{P}=\left\{\begin{array}{ll}\textbf{0}&\textrm{if }\textbf{{x}}\in \Omega_{f}\\ -\nu\textbf{{D}}\left(\textbf{{u}}-\textbf{{u}}_{\textbf{{ib}}}\right)& \textrm{if }\textbf{{x}}\in(\Sigma_{b}\cup\Omega_{b})\end{array}\right. \tag{8}\]
\(\textbf{{u}}_{\textbf{{ib}}}(\textbf{{x}},t)\) is a target velocity representing the immersed body's physical behaviour. If the body surface is not moving, then \(\textbf{{u}}_{\textbf{{ib}}}=\textbf{0}\). The components \(D_{ij}\) of the tensor \(D\) are usually controlled to obtain the best compromise between accuracy and stability of the numerical solver (Verzicco, 2022). The choice performed is particularly important in the proximity of the grid transition between the flow region \(\Omega_{f}\) and the interface region \(\Sigma_{b}\).
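A minimal sketch of the penalty term of equation 8 on a Cartesian grid is reported below, assuming an isotropic tensor \(\textbf{{D}}\) and a boolean mask identifying the interface and solid regions; the coefficient value and the array layout are illustrative only.

```python
import numpy as np

def penalization_force(u, u_ib, mask_solid, nu, d_coeff=1.0e5):
    """Darcy-type penalty term of equation 8.

    u          : (3, nx, ny, nz) velocity field
    u_ib       : (3, nx, ny, nz) target (body) velocity, zero for a fixed wall
    mask_solid : (nx, ny, nz) boolean, True inside Sigma_b or Omega_b
    d_coeff    : scalar coefficient for an isotropic tensor D = d_coeff * I
    """
    f_p = np.zeros_like(u)
    # The force is non-zero only in the interface and solid regions
    f_p[:, mask_solid] = -nu * d_coeff * (u[:, mask_solid] - u_ib[:, mask_solid])
    return f_p
```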
The second IBM approach considered in this study is a discrete penalty method already validated in _OpenFOAM_ by Constant _et al._ (2017). The method is based on the work by Uhlmann (2005) and Pinelli _et al._ (2010), but it is improved by including a _Reproducing Kernel Particle Method_ (RKPM) (Liu _et al._, 1995). A complete description of this method is reported in §B. This discrete IBM relies on two complementary physical spaces. The _Eulerian_ domain is described by the grid used for calculation, while the _Lagrangian_ Markers are discrete points representing the immersed body's surface. The method determines the source term via a two-step procedure:
(i) In the _interpolation step_, the physical variables describing the flow, which are calculated on the Eulerian mesh, are interpolated on the Lagrangian Markers. The source term is then calculated on the Lagrangian space. Its structural form is very similar to the one seen for the penalization method in equation 8:
\[\textbf{{F}}_{P}=a_{s}\left(\textbf{{U}}^{\star}-\textbf{{U}}_{\textbf{{ib}}}\right) \tag{9}\]
Capital letters are used to indicate the variables in the Lagrangian space. \(\textbf{{U}}^{\star}\) is the velocity field interpolated from the Eulerian grid while \(a_{s}\) is a coefficient resulting from the discretization procedure. \(\textbf{{U}}_{\textbf{{ib}}}\) is homologous to \(\textbf{{u}}_{\textbf{{ib}}}\) in the Lagrangian framework.
(ii) After the source term is calculated on the Lagrangian space, it is projected on the Eulerian grid during the _spreading step_. This procedure allows obtaining a closed expression for the source term to be integrated within the computational solver. A simplified sketch of the two steps is given below.
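As a simplified illustration of this two-step procedure, the following one-dimensional Python sketch interpolates an Eulerian field onto Lagrangian markers and spreads a marker force back to the grid. A simple hat kernel is used here instead of the RKPM-corrected kernel of the actual method, and the volume weighting applied in a real solver is omitted.

```python
import numpy as np

def hat_kernel(r):
    """Linear (hat) kernel; the actual method uses an RKPM-corrected kernel."""
    r = np.abs(r)
    return np.where(r < 1.0, 1.0 - r, 0.0)

def interpolate(u_euler, x_euler, x_markers, dx):
    """Interpolation step: Eulerian velocity -> Lagrangian markers."""
    w = hat_kernel((x_markers[:, None] - x_euler[None, :]) / dx)  # (n_markers, n_cells)
    w /= w.sum(axis=1, keepdims=True)                             # normalize the weights
    return w @ u_euler

def spread(f_markers, x_euler, x_markers, dx):
    """Spreading step: Lagrangian force -> Eulerian grid (transpose operation)."""
    w = hat_kernel((x_euler[:, None] - x_markers[None, :]) / dx)  # (n_cells, n_markers)
    return w @ f_markers  # marker-volume weighting omitted in this sketch

# 1-D example: three markers between the nodes of a uniform grid
x_euler = np.linspace(0.0, 1.0, 11)
x_markers = np.array([0.23, 0.51, 0.78])
u_euler = np.sin(2.0 * np.pi * x_euler)
u_star = interpolate(u_euler, x_euler, x_markers, dx=0.1)  # U* at the markers
f_p = spread(-u_star, x_euler, x_markers, dx=0.1)          # drive marker velocity to zero
```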
The main difference between the two algorithms presented is that the penalization method is strictly local, i.e. the forcing depends exclusively on the flow field predicted in the corresponding mesh element. On the other hand, the discrete method relies on an interpolation stencil to communicate between the Eulerian grid and the Lagrangian Markers. The user can select the size in terms of grid elements for this structure. Larger interpolation stencils connect a larger number of mesh elements with each Lagrangian Marker. This improves the stability of the numerical algorithms as well as the smoothness of the solution. However, larger computational stencils are also responsible for higher computational requirements. In addition, the size of the computational stencil (for the discrete method) and the width of the interface region \(\Sigma_{b}\)/value of the coefficients \(D_{ij}\) (for the penalization model) can be responsible for diffused interfaces, which can affect the precision of the numerical results.
### Sequential Data Assimilation: The Ensemble Kalman Filter
Data Assimilation (DA) techniques (Daley, 1991; Asch _et al._, 2016) have emerged in recent times as a powerful tool to enhance the reliability of numerical simulations in Fluid Dynamics. These approaches enable the integration of high-accuracy observations (DNS or experiments, for example) into reduced-order numerical models to obtain improved predictions of the underlying physical system. DA approaches are usually
grouped into two main families, namely variational methods and sequential methods. The former, which includes well-known methods such as 3DVar and 4DVar, resolves the DA problem as an optimization task in which initial conditions and/or dynamic models are determined to minimize a prescribed cost function. These methods are extremely accurate, but they may not converge for the analysis of multi-scale time-evolving physical processes (Sirkes & Tziperman, 1997). Applications in fluid mechanics mainly deal with stationary configurations (Artana _et al._, 2014; Foures _et al._, 2014; Mons & Marquet, 2021). On the other hand, sequential methods mainly rely on Bayesian techniques to resolve the DA problem. One of the most powerful methods in sequential DA is the Ensemble Kalman Filter (EnKF) (Evensen, 2009; Katzfuss _et al._, 2016). This technique has proved to be efficient in reconstructing turbulent flows in recent years, for both stationary and unsteady configurations (Labahn _et al._, 2019; Zhang _et al._, 2020; Mons _et al._, 2021; Moldovan _et al._, 2022).
The EnKF, which is the strategy selected for the present research work, is an advanced tool based on the Kalman Filter (KF) (Kalman, 1960). The KF assumes that a physical state \(\mathbf{u}\) can be estimated from a linear discrete _model_ \(\mathbf{M}\) and available _observations_ \(\mathbf{y}\). A common assumption, in particular in fluid mechanics applications, is that the model \(\mathbf{M}\) can create a quasi-continuous map of the physical phenomena investigated, but it provides a lower-fidelity representation. On the other hand, the observation \(\mathbf{y}\) is more accurate, but it is sparse in space and time. Both sources of information are affected by uncertainties, which are referred to as \(v\) and \(w\) for the model and the observation, respectively. These uncertainties are usually included in the system as random processes, and they can be approximated through unbiased Gaussian distributions: \(v=\mathcal{N}(0,\mathbf{\mathcal{O}})\) and \(w=\mathcal{N}(0,\mathbf{R})\). \(\mathbf{\mathcal{O}}\) and \(\mathbf{R}\) are time-dependent matrices representing the covariance matrices of the model and observation errors, respectively. Under this hypothesis, the process \(\mathbf{u}\) can be accurately described by the two lowest-order statistical moments (mean and variance) of its probability density function (pdf). This implies that the state estimation process performed by the DA approach is governed by the error covariance matrix \(\mathbf{\mathcal{P}}=\mathbb{E}[(\mathbf{u}-\mathbb{E}[\mathbf{u}])(\mathbf{u}-\mathbb{E}[\mathbf{u}])^{T}]\). The assimilation scheme is composed of two essential steps:
1. A _forecast_ (\(f\)) step where the physical state \(\mathbf{u}\) and its error covariance matrix \(\mathbf{\mathcal{P}}\) are advanced in time using the model. Considering a time advancement from the time step \(k-1\) to \(k\), this step is described by the following equations: \[\mathbf{u}_{k}^{f} =\mathbf{M}_{k:k-1}\mathbf{u}_{k-1}\] (10) \[\mathbf{\mathcal{P}}_{k}^{f} =\mathbf{M}_{k:k-1}\mathbf{\mathcal{P}}_{k-1}\mathbf{M}_{k:k-1}^{T}+\mathbf{ \mathcal{O}}_{k}\] (11)
2. An _analysis_ (\(a\)) phase where the physical state and the model are updated by the DA procedure. The analysis step is performed only if observation \(\mathbf{y}\) is available at the time step \(k\). This update is obtained as: \[\mathbf{u}_{k}^{a} =\mathbf{u}_{k}^{f}+\mathbf{\mathcal{K}}_{k}\left(\mathbf{y}_{k}-\mathbf{H}_{k:k-1} \mathbf{u}_{k}^{f}\right)\] (12) \[\mathbf{\mathcal{P}}_{k}^{a} =\left(\mathbf{I}-\mathbf{\mathcal{K}}_{k}\mathbf{H}_{k:k-1}\right)\mathbf{\mathcal{ P}}_{k}^{f}\] (13)
\(\mathbf{H}_{k:k-1}\) is a mathematical operator which projects the state predicted by the model into the space of the observations. The term \(\mathbf{y}_{k}-\mathbf{H}_{k:k-1}\mathbf{u}_{k}^{f}\) is referred to as _innovation_, and it measures the discrepancy between model and observation. \(\mathbf{\mathcal{K}}_{k}\) is the _Kalman gain_, which is a matrix computing the optimal correlation between the system's state prediction and
the observed data. It is obtained by minimizing the updated error covariance matrix \(\mathbf{\mathcal{P}}_{k}^{a}\). The KF in its classical formulation is limited to linear models and requires the explicit time advancement of the error covariance matrix, which is intractable for the large number of degrees of freedom typical of CFD applications. The EnKF overcomes these limitations by approximating the statistics of the state with an ensemble of \(m\) realizations, each of which is advanced in time by the (possibly non-linear) model, \(\mathbf{u}_{i,k}^{f}=\mathcal{M}\,\mathbf{u}_{i,k-1}\) for \(i=1,2,...,m\). The error covariance matrix \(\mathbf{\mathcal{P}}_{k}^{f}\) is approximated via the anomaly matrix \(\mathbf{\mathcal{X}}_{k}^{f}\), whose columns are the normalized deviations of each member from the ensemble mean \(\overline{\mathbf{u}}_{k}^{f}=\sum_{i}^{m}\mathbf{u}_{i,k}^{f}/m\):

\[\left[\mathbf{\mathcal{X}}_{k}^{f}\right]_{i}=\frac{\mathbf{u}_{i,k}^{f}-\overline{\mathbf{u}}_{k}^{f}}{\sqrt{m-1}} \tag{17}\]

so that \(\mathbf{\mathcal{P}}_{k}^{f}\approx\mathbf{\mathcal{X}}_{k}^{f}\left(\mathbf{\mathcal{X}}_{k}^{f}\right)^{T}\). In addition to the ensemble of \(m\) realizations of the
numerical system, an ensemble of \(m\) perturbed observations is also obtained via a perturbation of the observation vector \(\mathbf{y}_{k}\), whose size is \([n_o,1]\). This procedure allows obtaining an observation matrix \(\mathbf{Y}_{k}\) of size \([n_o,m]\), whose \(m\) columns are defined as \(\mathbf{Y}_{i,k}=\mathbf{y}_{k}+\mathbf{e}_{i},\,i=1,2,...,m\). The random noise \(\mathbf{e}_{i}\) is described by a Gaussian probability function \(\mathbf{e}_{i}\sim\mathcal{N}(0,\mathbf{R}_{k})\). As for the KF, \(\mathbf{R}_{k}\) represents the observation covariance matrix.
Similarly, the EnKF requires the use of a (possibly non-linear) sampling operator \(\mathcal{H}\) to project the model state onto the locations of the observations. Analogously to the system's state \(\mathbf{u}_{k}^{f}\) in equation 17, the projected ensemble \(\mathcal{H}(\mathbf{u}_{k}^{f})\) can be decomposed into a matrix \(\mathbf{\mathcal{S}}_{k}^{f}\) where each column represents a normalized anomaly, and the mean is defined as \(\overline{\mathcal{H}(\mathbf{u}_{k}^{f})}=\sum_{i}^{m}\mathcal{H}(\mathbf{u}_{i,k}^{f})/m\):
\[\left[\mathbf{\mathcal{S}}_{k}^{f}\right]_{i}=\frac{\mathcal{H}(\mathbf{u}_{i,k}^{f}) -\overline{\mathcal{H}(\mathbf{u}_{k}^{f})}}{\sqrt{m-1}} \tag{19}\]
For an infinite ensemble size, the sample covariance of the perturbations converges to the prescribed matrix, \(\lim_{m\to+\infty}\mathbb{E}[(\mathbf{e}-\mathbb{E}[\mathbf{e}])(\mathbf{e}-\mathbb{E}[\mathbf{e}])^{T}]=\mathbf{R}\), and the Kalman gain matrix \(\mathbf{\mathcal{K}}_{k}\), describing the optimal correlation between the state and the observations, simplifies to (Hoteit _et al._, 2015; Carrassi _et al._, 2018):
\[\mathbf{\mathcal{K}}_{k}=\mathbf{\mathcal{X}}_{k}^{f}\left(\mathbf{\mathcal{S}}_{k}^{f} \right)^{T}\left[\mathbf{\mathcal{S}}_{k}^{f}\left(\mathbf{\mathcal{S}}_{k}^{f}\right) ^{T}+\mathbf{R}_{k}\right]^{-1} \tag{20}\]
All the elements presented constitute the essential ingredients to calculate the updated system's state \(\mathbf{u}_{k}^{a}\), which is then used for the time advancement of the numerical model \(\mathcal{M}\) in the following iteration \(k+1\):
\[\mathbf{u}_{k}^{a}=\mathbf{u}_{k}^{f}+\mathbf{\mathcal{K}}_{k}\left(\mathbf{Y}_{k}-\mathcal{H }(\mathbf{u}_{k}^{f})\right) \tag{21}\]
The step-by-step implementation of the EnKF is detailed in algorithm 1. The following state-of-the-art modifications are also included in the classical EnKF formulation:
* The EnKF is naturally used to update the system's physical state, but it can also be used to update free coefficients \(\theta\) governing the model \(\mathcal{M}\). In the present work, this task is performed via an _extended state_ approach (Asch _et al._, 2016). In practice, the parameters are included in the state vector \(\mathbf{u}_{i}^{\prime}=[\mathbf{u}_{i}\,\theta_{i}]^{T}\) for each realisation, and they are updated with the physical field using equation 21.
* Some _deterministic inflation_ (Asch _et al._, 2016) is included to increase the variability of the state predicted by the EnKF: \[\mathbf{u}_{i,k}^{a}=\overline{\mathbf{u}}_{k}^{a}+\lambda_{k}\left(\mathbf{u}_{i,k}^{a}-\overline{\mathbf{u}}_{k}^{a}\right)\] (22) where \(\lambda>1\). This procedure usually improves the accuracy of the updated system's state \(\mathbf{u}_{k}^{a}\), since \(\mathbf{\mathcal{P}}_{k}\) may be underestimated due to the sampling errors deriving from the use of a limited number of members \(m\). This procedure may also prevent a premature collapse of the ensemble, leading to the divergence of the filter.
* _Covariance localization_ of the Kalman gain \(\mathbf{\mathcal{K}}_{k}\) is used to gradually set to zero the correlation between the observations and the system's state with increasing distance. This choice controls the emergence of undesired spurious fluctuations due to the underestimation of \(\mathbf{\mathcal{P}}_{k}\). Covariance localization is usually performed by premultiplying \(\mathbf{\mathcal{K}}_{k}\) with a user-defined matrix \(\mathbf{\mathcal{L}}_{k}\). This matrix, for which \([L]_{ij}\in[0,1]\), is defined using an exponential decay in space so that the update of the model state tends to zero when distant observations are considered. If the grid used by the model and the location of the
sensors do not change in time, \(\boldsymbol{L}\) is also time-independent. This is the case for the present investigation.
```
Input: \(\mathcal{M}\), \(\boldsymbol{R}\), \(\boldsymbol{y}_{k}\), and a prior/initial state system \(\boldsymbol{u}_{i,0}\), where usually \(\boldsymbol{u}_{i,0}\sim\mathcal{N}(\overline{\boldsymbol{u}}_{0},\sigma^{2})\)
for \(k=1,2,...,K\) do
    for \(i=1,2,...,m\) do
        1. Advancement in time of the state vectors: \(\boldsymbol{u}_{i,k}^{f}=\mathcal{M}\,\boldsymbol{u}_{i,k-1}\)
        2. Generation of an observation matrix from the confidence level given to the observation data: \(\boldsymbol{Y}_{i,k}=\boldsymbol{y}_{k}+\boldsymbol{e}_{i}\), with \(\boldsymbol{e}_{i}\sim\mathcal{N}(0,\boldsymbol{R})\)
        3. Estimation of the ensemble means (system state and projection matrix): \(\overline{\boldsymbol{u}}_{k}^{f}=\frac{1}{m}\sum_{i=1}^{m}\boldsymbol{u}_{i,k}^{f}\), \(\overline{\mathcal{H}(\boldsymbol{u}_{k}^{f})}=\frac{1}{m}\sum_{i=1}^{m}\mathcal{H}(\boldsymbol{u}_{i,k}^{f})\)
        4. Computation of the anomaly matrices (system state and projection matrix): \(\boldsymbol{X}_{k}=\frac{\boldsymbol{u}_{i,k}-\overline{\boldsymbol{u}}_{k}}{\sqrt{m-1}}\), \(\boldsymbol{S}_{k}=\frac{\mathcal{H}(\boldsymbol{u}_{i,k}^{f})-\overline{\mathcal{H}(\boldsymbol{u}_{k}^{f})}}{\sqrt{m-1}}\)
        5. Calculation of the Kalman gain (with covariance localization): \(\boldsymbol{K}_{k}=\boldsymbol{L}\odot\boldsymbol{X}_{k}^{f}(\boldsymbol{S}_{k})^{T}\left[\boldsymbol{S}_{k}(\boldsymbol{S}_{k})^{T}+\boldsymbol{R}\right]^{-1}\)
        6. Update of the state matrix: \(\boldsymbol{u}_{i,k}^{a}=\boldsymbol{u}_{i,k}^{f}+\boldsymbol{K}_{k}\left(\boldsymbol{Y}_{i,k}-\mathcal{H}(\boldsymbol{u}_{i,k}^{f})\right)\)
        7. Application of (deterministic) covariance inflation: \(\boldsymbol{u}_{i,k}^{a}=\overline{\boldsymbol{u}}_{k}^{a}+\lambda_{k}\left(\boldsymbol{u}_{i,k}^{a}-\overline{\boldsymbol{u}}_{k}^{a}\right)\)
    end for
end for
```
**Algorithm 1** Scheme of the EnKF used in the present study.
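For reference, a minimal NumPy sketch of the analysis phase (steps 3 to 7 of algorithm 1) is reported below. It assumes that the projection \(\mathcal{H}\) has already been applied to each ensemble member, and the variable names are illustrative rather than taken from the actual implementation.

```python
import numpy as np

def enkf_analysis(U_f, Y, HU_f, R, L=None, lam=1.0):
    """One EnKF analysis step (steps 3 to 7 of algorithm 1).

    U_f  : (n, m) forecast ensemble, one state vector per column
    Y    : (n_o, m) perturbed observations, Y[:, i] = y + e_i
    HU_f : (n_o, m) forecast ensemble projected onto the observation space
    R    : (n_o, n_o) observation covariance matrix
    L    : optional (n, n_o) localization matrix (element-wise product with the gain)
    lam  : deterministic inflation coefficient (>= 1)
    """
    m = U_f.shape[1]
    u_mean = U_f.mean(axis=1, keepdims=True)
    hu_mean = HU_f.mean(axis=1, keepdims=True)
    X = (U_f - u_mean) / np.sqrt(m - 1)        # state anomaly matrix
    S = (HU_f - hu_mean) / np.sqrt(m - 1)      # projected anomaly matrix
    K = X @ S.T @ np.linalg.inv(S @ S.T + R)   # Kalman gain, equation 20
    if L is not None:
        K = L * K                              # covariance localization
    U_a = U_f + K @ (Y - HU_f)                 # update, equation 21
    ua_mean = U_a.mean(axis=1, keepdims=True)
    return ua_mean + lam * (U_a - ua_mean)     # deterministic inflation, equation 22
```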
## 3 Test case: turbulent plane channel flow, \(\boldsymbol{Re_{\tau}=550}\)
The turbulent plane channel flow is a fundamental benchmark test case in CFD, and open databases are available online, providing high-accuracy DNS data for reference. On the one hand, the simple geometry of this test case allows the exclusion of a number of physically complex aspects from the optimization process, such as, for example, the separation of the boundary layer. On the other hand, because of the velocity fluctuations observed in the proximity of the wall, traditional IBMs usually fail to provide an accurate flow prediction. For all of these reasons, this test case is a suitable candidate for an unambiguous assessment of IBM augmentation via DA.
The friction Reynolds number considered is \(\mathit{Re_{\tau}=u_{\tau}h/\nu=550}\). The friction velocity was previously introduced, and it is defined as \(u_{\tau}=\sqrt{\tau_{w}/\rho}\) where \(\tau_{w}\) is the wall shear stress, while \(h\) is the half-height of the channel. The coordinate system is set so that \(x\) is the streamwise direction, \(y\) is the wall-normal direction, and \(z\) is the spanwise direction. The two walls are positioned at \(y=0,\,2h\), respectively. Periodic boundary conditions are applied in the streamwise and spanwise directions, and the mass flow rate is conserved in time using a source term for the dynamic equations already implemented in _OpenFOAM_. Results are usually presented in non-dimensional form (suffix \(+\)) using the friction velocity \(u_{\tau}\) and the viscous wall unit \(\delta_{\nu}=\nu/u_{\tau}\) for normalization. A second non-dimensional form used in the
present analysis (suffix \(\star\)) relies on \(u_{\tau}\) calculated by a reference simulation (R-DNS-BF) introduced in §3.1. In addition, statistical moments are also averaged over the two directions \(x\) and \(z\), for which statistical homogeneity is observed.
### Reference simulation and preliminary results
A database of classical numerical simulations has been performed to obtain preliminary results. Details are reported in the first six rows of table 1. The physical domain is discretized using a uniform distribution of the elements in the \(x\) and in the \(z\) direction. A geometric distribution is used in the vertical direction, increasing the size of the grid while moving away from the wall. Therefore, \(\Delta y^{\star}_{min}\) represents the grid size at the wall, and \(\Delta y^{\star}_{max}\) is the size at the centre of the channel. This analysis has been performed to obtain a suitable initial _prior_ state to be optimized with DA. The database includes several simulations obtained with different techniques:
* One body-fitted DNS (R-DNS-BF), which represents the reference simulation that will be used to validate the results. Statistical moments for this simulation have been obtained for 300 advective times \(t_{A}=h/U_{c}\), where \(U_{c}\) is the averaged velocity at the centerline for \(y=h\). The averaged velocity profile and the components of the Reynolds stress tensor compare well with results by del Alamo & Jimenez (2003) and Hoyas & Jimenez (2008) for similar \(\mbox{{Re}}_{\tau}\) as shown in figure 2. The minor discrepancies observed could be due to a small difference in \(u_{\tau}\) (around 2.55%), to the rate of convergence of the statistical moments, to the different discretisation strategies or to the different distribution of the mesh elements in the wall-normal direction.
* One body-fitted DNS (DNS-BF), which is run on a smaller domain and uses a lower grid resolution when compared with the simulation R-DNS-BF. Details of the grid are provided in table 1. One can see that the resolution in the \(x\) and \(z\) directions is coarsened by a factor of 4. In the vertical direction, the number of mesh elements is also reduced by a factor of 4, and the ratio coefficient of the geometric distribution of the elements is different. This last choice has been performed to obtain a similar resolution in the wall region when compared with the refined grid used for the simulation R-DNS-BF.
* Two LES, which are run on the same grid as DNS-BF. The first simulation, LES-BF, is performed using Smagorinsky's model as subgrid-scale closure. The second one, LES-VD-BF, includes _van Driest_'s correction at the wall for the subgrid-scale model.
* Two IBM runs, which are performed on a grid very similar to the one used for DNS-BF shown in figure 3. The differences emerge in the proximity of the wall. For the IBMs, the wall mesh element has its centre in \(y=0\) (or \(y=2h\) at the top), and three additional layers of mesh elements of the same size are included in the solid region, considering one of them in the interface \(\Sigma_{b}\). These layers are placed to ensure the numerical stability of the algorithm. The two calculations are referred to as DNS-IBM-CF for continuous forcing (penalization) and DNS-IBM-DF for discrete forcing. For both simulations, the bottom and top walls are not moving, so \(\mbox{{u}}_{\mbox{{ib}}}=\mbox{{0}}\) and \(\mbox{{U}}_{\mbox{{ib}}}=\mbox{{0}}\), respectively. The volume force \(\mbox{{f}}_{P}\) is non-zero at the interface region \(\Sigma_{b}\) (light blue region in figure 3_(b)_), which consists of the three closest mesh elements in the \(y\) direction to the immersed walls (\(y=0\), \(2h\)). The interface region \(\Sigma_{b}\) has been chosen to be the same for the two methods in order to provide a rigorous comparison. The Lagrangian Markers for the discrete method are positioned at the centre of each mesh element for \(y=0\), and the computational stencil is made by \(3\times 3\times 3\) elements. For the penalized method, \(\mbox{{f}}_{P}\) is also included in the solid domain \(\Omega_{b}\), which is in this case represented by a layer of two cells in the \(y\) direction (grey in figure 3_(b)_).
Results obtained with the simulations of the database are now compared. The aim of the present analysis is to evaluate the accuracy of the calculations performed with the
coarse grids, as well as to assess the efficacy of SGS modelling and IBM in this case. Comparisons will be performed against the simulation R-DNS-BF, which is considered to be the _true_ physical state.
First, the predicted friction velocity \(u_{\tau}\) is analyzed. The values obtained by the different simulations, which are reported in table 1, show that most of the simulations significantly under-predict \(u_{\tau}\). The only exception is represented by the simulation LES-BF, i.e. the LES calculation for which the classical Smagorinsky model is used. In this case, the discrepancy with the reference is around \(7.29\%\). However, one may argue that this apparently acceptable prediction is actually the result of compensation between different error sources, such as explicit filtering, the Smagorinsky model and the discretization schemes. Applications of LES to the turbulent plane channel exhibit
Figure 3: _(a)_ General view of the grid employed for the plane channel with IBM. _(b)_ Cartesian grid employed when using the IBM for the analysis of the turbulent plane channel. \(\Omega_{f}\) and \(\Omega_{b}\) represent the fluid and body domains, respectively (the latter displayed in grey). The blue area \(\Sigma_{b}\) (three cell layers in the \(y\) direction) constitutes the body interface.
Figure 2: Comparison of the main statistical moments of the velocity field. Results are shown for () the simulation R-DNS-BF and () the database by del Alamo & Jiménez (2003).
very high sensitivity to these interactions (Meyers & Sagaut, 2007). The lack of global accuracy for the simulation LES-BF is shown by the analysis of the physical flow fields.
Figure 4 shows the evolution of the main statistical quantities of the flow, which include the mean streamwise velocity \(\langle U\rangle^{+}=\langle u_{x}\rangle/u_{\tau}\) and the components of the Reynolds stress tensor \(\langle u_{i}^{\prime}u_{j}^{\prime}\rangle^{+}=\langle u_{i}^{\prime}u_{j}^{\prime}\rangle/u_{\tau}^{2}\), for the body-fitted simulations. Significant discrepancies are overall observed in figure 4_(a)_ and _(b)_ for the normalized mean streamwise velocity \(\langle U\rangle^{+}\). The simulation LES-BF complies well with the DNS data far from the wall, thanks to the predicted value for \(u_{\tau}\). However, the velocity profile in the proximity of the wall is erroneous in this case. Simulations DNS-BF and LES-VD-BF perform better close to the wall for \(y^{+}<10\) and \(y^{+}<30\), respectively. However, the accuracy is significantly degraded in the outer layer, owing to the under-prediction of \(u_{\tau}\). High discrepancies are also observed for the components of the resolved Reynolds stress tensor, which are shown in figure 4_(c)_ to _(f)_. One can see that the LES do not provide an accurate prediction for the position of the peak, which is predicted significantly farther from the wall for all the components. On the other hand, the simulation DNS-BF provides an accurate estimation of this feature. Differences are observed for the predicted magnitude of the different components. One can see in figure 4_(c)_ how both the LES and the calculation DNS-BF significantly over-predict the component \(\langle u_{x}^{\prime}u_{x}^{\prime}\rangle^{+}\). A general under-prediction is instead observed for the components \(\langle u_{y}^{\prime}u_{y}^{\prime}\rangle^{+}\) and \(\langle u_{z}^{\prime}u_{z}^{\prime}\rangle^{+}\). A global reduction of the magnitude would be expected, considering that the lack of grid resolution is expected to dampen the velocity fluctuations. However, it also affects the accuracy in the prediction of their spatial gradients, which govern the dissipation of each component of the Reynolds stress tensor. Therefore, the complex results observed are mainly due to the choices performed in terms of mesh resolution. The component \(\langle u_{x}^{\prime}u_{y}^{\prime}\rangle^{+}\) shown in figure 4_(f)_, which is tied to turbulence production effects, is the one for which the smallest discrepancy is globally observed. One can see that in every case, the magnitude of the components of the Reynolds stress tensor is higher for the DNS-BF calculation when compared with the two LES. The reason is associated with the dissipative effect of the Smagorinsky model used to close the equations.
The simulations run with classical IBM using penalization (DNS-IBM-CF) and discrete
Figure 4: Comparison of the main statistical moments of the velocity field. Results are shown for simulations (—) DNS-BF, (- - - -) LES-BF, (\(\cdots\)) LES-VD-BF, and (—) R-DNS-BF.
(DNS-IBM-DF) approaches are now investigated. Comparisons are performed using the reference run R-DNS-BF as well as the body-fitted calculation DNS-BF, whose grid is almost identical to the one used in the IBM runs. For the continuous IBM method, the tensor \(\boldsymbol{\mathsf{D}}=D\,\boldsymbol{I}\), where \(D=10^{5}\,m^{-2}\) for the mesh elements in \(\Sigma_{b}\cup\Omega_{b}\) and \(D=0\) elsewhere. The analysis considers the mean velocity as well as the components of the Reynolds stress tensor, as previously done for the body-fitted simulations. Results are shown in figure 5. One can see that the physical quantities calculated by both IBM strategies are similar, and they are close to findings obtained with the coarse-grained simulation DNS-BF. Looking in detail at the near-wall behaviour of the velocity profile in figure 5_(b)_, one can see that simulation DNS-IBM-CF is not able to successfully impose \(\boldsymbol{u_{ib}}=\boldsymbol{0}\) at the wall, nor to provide an accurate estimation of the velocity gradient close to the wall.
In summary, the present findings indicate that all the preliminary simulations fail to provide an accurate estimation of the physical flow features when a coarse grid is used. One can also add that body-fitted LES provided unreliable results due to the non-linear interactions between different error sources. This uncertainty is arguably going to be magnified by the interactions between subgrid-scale modelling and IBM. For this reason, the DA analyses will be performed using CFD runs without SGS models. This decision removes one source of complexity from the DA study, namely the interaction between the data-driven IBM model derived by DA and the SGS model itself.
### Data Assimilation experiment
A DA strategy is proposed here to infuse physical knowledge within a reduced-order IBM numerical solver. The aim of the analysis is i) to improve the accuracy of such a solver and ii) to do so with a limited increase in computational resources required. Discussion in §2.3 indicated how the DA methods rely on a _model_, providing a quasi-continuous representation of the physical variables in the domain of investigation, as well as some local _observation_. These elements are now discussed in detail.
The _model_ here considered is the numerical solver and test case used for the preliminary simulation DNS-IBM-CF. Such a model was not able to provide accurate estimations of
Figure 5: Comparison of the main statistical moments of the velocity field. Results are shown for simulations (——) DNS-BF, (\(\cdots\):) DNS-IBM-CF, (**–––**) DNS-IBM-DF, and (**––**) R-DNS-BF.
the statistical moments. In addition, a no-slip condition at the wall was not obtained. The number of ensemble realizations, which allows us to explore the spaces associated with the parametric description and the physical solution, is set to \(m=40\). The value chosen for \(m\) is based on recommendations provided in the literature from analyses combining the EnKF with CFD solvers (Mons _et al._, 2021; Moldovan _et al._, 2021). The data-driven strategy will be used to dynamically enhance the physical state at each analysis phase as well as to infer the local value of the diagonal components of the tensor \(\boldsymbol{D}\). Therefore, the full parametric space to be optimized consists of the three components \(D_{xx}\), \(D_{yy}\) and \(D_{zz}\) for \(128\times 64\times 64=524\,288\) grid elements, for a total of \(1\,572\,864\) degrees of freedom to be determined. The following strategies are applied to reduce the complexity of the problem:
(i) Coefficients for mesh elements \(\in\Omega_{f}\) are automatically set to zero. This implies that, out of the 64 layers in the \(y\) direction, only 5 at the bottom and 5 at the top are considered. In addition, statistical symmetry around the plane \(y=h\) is used to consider only 5 layers. While symmetry is not observed for the instantaneous flow, viscous phenomena in the proximity of the wall reduce the intensity of velocity fluctuations, mitigating the effect of this approximation.
(ii) Similarly, statistical invariance due to homogeneity in the \(x\) and \(z\) directions is used to neglect the dependency of the coefficients on those directions, i.e. \(D_{ii}(\mathbf{x},t)=D_{ii}(y,t)\) for \(i=x,y,z\).
Thus, the optimization task is reduced to a space of 15 degrees of freedom, which is the determination of 3 constants for 5 grid layers in the \(y\) direction. A second essential aspect is the _prior_ which is used as the initial condition for the model. The prior, which includes the physical condition imposed at the initial time as well as the free parameters prescribed, plays an essential role in the rate of convergence towards the optimized solution by DA. In this case, the initial physical state imposed for each ensemble member is obtained from interpolation of the simulation R-DNS-BF for \(t=0\). In addition, the velocity field for the mesh elements in the subviscous layer has been set to \(\mathbf{u}=[u_{\tau}y/\delta_{\nu},\,0,\,0]\). The values for the components \(D_{ii}\) used in the simulation DNS-IBM-CF are considered to be the prior state for the parametric description. For each simulation of the ensemble, these values are perturbed using a Gaussian distribution \(D_{ii}=\mathcal{N}(10^{5},2.5\cdot 10^{7})\,m^{-2}\) for each of the five grid layers in the \(y\) direction where the source term is non-zero. These conservative choices for physical state and parametric description have been performed to unambiguously identify the sensitivity of the solution to the parametric variation, speeding up the optimization procedure by EnKF.
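A minimal sketch of the generation of the prior parameter ensemble is reported below; the second argument of \(\mathcal{N}(\cdot,\cdot)\) is interpreted here as the variance, and the layout of the extended-state parameter vector is illustrative.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

m = 40            # ensemble members
n_layers = 5      # wall-normal layers where the source term is non-zero
components = ("Dxx", "Dyy", "Dzz")

mean, var = 1.0e5, 2.5e7   # prior N(mean, var) for each D_ii coefficient [m^-2]

# theta[i] is the 15-parameter vector appended to the state of ensemble member i
theta = rng.normal(loc=mean, scale=np.sqrt(var),
                   size=(m, n_layers * len(components)))
```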
The _observation_ used in the present analysis is now described. The _physics-infused_
\begin{table}
\begin{tabular}{l l l l l l l l l l} \hline & \(N_{x}\times N_{y}\times N_{z}\) & \(\Delta x^{*}\) & \(\Delta z^{*}\) & \(\Delta y^{*}_{\min}\) & \(\Delta y^{*}_{\max}\) & \(L_{x}/h\) & \(L_{z}/h\) & \(u_{\tau}\) & \(\Delta t\,(t_{A})\) \\ R-DNS-BF & \(1\,024\times 256\times 512\) & 9.9 & 6.6 & 1 & 11.2 & 6\(\pi\) & 2\(\pi\) & 0.0480 & 0.004 \\ DNS-BF & \(128\times 58\times 64\) & 39.5 & 26.3 & 1.08 & 52 & 3\(\pi\) & \(\pi\) & 0.0428 & 0.02 \\ LES-BF & \(128\times 58\times 64\) & 39.5 & 26.3 & 1.08 & 52 & 3\(\pi\) & \(\pi\) & 0.0515 & 0.02 \\ LES-VD-BF & \(128\times 58\times 64\) & 39.5 & 26.3 & 1.08 & 52 & 3\(\pi\) & \(\pi\) & 0.0434 & 0.02 \\ DNS-IBM-CF & \(128\times 64\times 64\) & 39.5 & 26.3 & 1.3 & 52 & 3\(\pi\) & \(\pi\) & 0.0375 & 0.02 \\ DNS-IBM-DF & \(128\times 64\times 64\) & 39.5 & 26.3 & 1.3 & 52 & 3\(\pi\) & \(\pi\) & 0.0397 & 0.02 \\ DNS-IBM-DA & \(128\times 64\times 64\) & 39.5 & 26.3 & 1.3 & 52 & 3\(\pi\) & \(\pi\) & 0.0487 & 0.02 \\ \end{tabular}
\end{table}
Table 1: Grid resolution, domain size and time step \(\Delta t\) used for the generation of the database of simulations.
strategy proposed utilises physical knowledge of the flow in the form of data to be integrated into the data-driven method. The information employed deals with the physical behaviour of the flow in the proximity of the wall when the Reynolds number \(Re_{\tau}\) is known. Thus, the no-slip condition is first applied on a number of sensors, which are distributed over the physical domain for \(y=0\) and \(y=2h\). A qualitative representation is shown in figure 6. A total of \(4\,096\) sensors are used, for which the condition \(u_{x}=0\) is imposed as surrogate observation. No constraint is imposed for \(u_{y}\) and \(u_{z}\). This choice complies with the intrinsic limitations of the EnKF, for which a matrix inversion of the size of the observed data [\(n_{o}\), \(n_{o}\)] must be performed (Asch _et al._, 2016). Also, thanks to the solenoidal constraint enforced by the resolved numerical schemes, the inferred field for \(u_{x}\) also affects the other velocity components without the need to over-constrain the system to be optimized. A second piece of physical information is infused. Once the value of \(\mbox{{Re}}_{\tau}\) is known, \(u_{\tau}\) is also fixed. This implies that the mean velocity gradient at the wall is known considering its relation with \(\tau_{w}\) and \(u_{\tau}\):
\[\tau_{w}=\rho\nu\,\left(\frac{\partial\langle u_{x}\rangle}{\partial y}\right) =\rho u_{\tau}^{2}=\frac{1}{2}\rho C_{f}U_{c}^{2} \tag{1}\]
In this case, the data infused is the friction coefficient \(C_{f}\) calculated using the friction velocity \(u_{\tau}\) and the mean streamwise centerline velocity \(U_{c}\) obtained via the simulation R-DNS-BF. The confidence in the physics-infused information is now discussed. First of all, the uncertainty affecting each sensor is assumed to be statistically independent, so that the matrix \(\boldsymbol{R}\) in equation 20 can reasonably be approximated by a diagonal matrix. The standard deviation has been set to \(0.05\) for the rows associated with sensors observing the velocity at the wall. For the sensor measuring the friction coefficient, the level of confidence is set to \(0.05\,C_{f}\). This value is very close to the variance of the time evolution of \(C_{f}(t)\) observed for the simulation R-DNS-BF.
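To make the observation set-up explicit, the following sketch assembles the surrogate observation vector (wall sensors plus the friction coefficient) and the diagonal of \(\boldsymbol{R}\) with the confidence levels quoted above; the centerline velocity used here is a placeholder, not the value of R-DNS-BF.

```python
import numpy as np

n_wall_sensors = 4096          # sensors imposing u_x = 0 at y = 0 and y = 2h
u_tau_ref = 0.048              # friction velocity of R-DNS-BF (table 1)
u_c = 1.0                      # hypothetical centerline velocity, for illustration only
cf_ref = 2.0 * (u_tau_ref / u_c) ** 2   # friction coefficient from the wall shear-stress relation

# Observation vector: u_x = 0 at every wall sensor, plus the friction coefficient
y = np.concatenate([np.zeros(n_wall_sensors), [cf_ref]])

# R is diagonal since sensor errors are assumed independent; only the variances are stored
sigma = np.concatenate([np.full(n_wall_sensors, 0.05), [0.05 * cf_ref]])
r_diag = sigma ** 2
```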
The DA procedure combining the _model_ and the _observation_ is now detailed. The sequential features of the EnKF are fully exploited, which means that the DA state estimation and optimization are performed on the fly during the run. For simulations of this size, non-intrusive approaches are too expensive in terms of computational resources required. In fact, simulations of the ensemble must be interrupted before every analysis phase and restarted afterwards. This operation time, which sums up the writing and reading of files for the physical solution, may increase the computational costs by several orders of magnitude when compared with the forecast of the solution. For this reason, the numerical simulations of the ensemble are coupled online with the DA code using the application _Coupling OpenFOAM with Numerical EnvironmentS_ (CONES), recently developed by the research group (Villanueva _et al._, 2023). CONES parallelizes the problem using an open-source coupler called CWIPI (Refloch _et al._, 2011), which identifies two modules (simulations in OpenFOAM and the library with the EnKF algorithm) and establishes HPC communications among them according to the user's specifications. CPU cores assigned to the OpenFOAM module are occupied when carrying out the forecast step and become inactive during the analysis phases, whereas those assigned to the correction step work in the opposite way. Thus, this algorithm completely excludes the costly operations of interrupting/restarting the simulations of the ensemble. In this work, a DA analysis phase is performed every six time steps of the numerical forecast. Considering that the time step \(\Delta t=0.02t_{A}\), this implies that data are assimilated every \(0.12t_{A}\). This relatively high frequency of assimilation with respect to the characteristic physical scales of the flow naturally excludes the risk of lack of convergence of the DA procedure (Meldi, 2018). As detailed in §A, the pressure is updated from the system's state estimated by
the EnKF by means of a Poisson equation. This additional step ensures the conservation properties of the discretized Navier-Stokes equations. A visualization of the EnKF procedure is shown in figure 7.
Finally, covariance inflation and localization are discussed. These state-of-the-art procedures aim to improve the accuracy and robustness of the EnKF, as well as reduce the associated computational costs (Asch _et al._, 2016). For the former, deterministic inflation is applied by using equation 22 with a constant \(\lambda=1.05\) during initial evolution phases for \(t\in[0,5\,t_{A}]\). This choice increases the variability of the ensemble and prevents the collapse of the parametric description of the system during the first analysis phases. Localization is applied taking into account that observation/physical constraints are located in the proximity of the wall. A matrix \(\boldsymbol{L}\) premultiplying the Kalman Gain in equation 20 is generated where all the coefficients are zero with the exception of those referring to the mesh elements located in the subviscous layer (\(y^{+}<5\)) and outside the physical space. These coefficients are determined using a decaying exponential function (\(L_{ij}=e^{-\Delta r_{ij}^{2}/\eta}\)), where \(\Delta r_{ij}\) is the distance between the sensor providing observation \(i\) and mesh element \(j\). The parameter \(\eta\) is tuned to avoid discontinuities in the DA state estimation update moving from the subviscous layer to the outer wall regions. In addition, the EnKF update is clipped to the near-wall region, and only the velocity field \(\boldsymbol{u}\) of the mesh elements located here is updated. This corresponds to a total number of \(57\,344\) mesh elements (seven cell layers in the wall-normal direction), which makes \(n=172\,032\) degrees of freedom without accounting for the fifteen parameters of the tensor \(\boldsymbol{D}\).
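A minimal sketch of the construction of the localization matrix is given below; the coordinates, the decay parameter \(\eta\) and the clipping mask are illustrative inputs, not the values used in the actual runs.

```python
import numpy as np

def localization_matrix(x_cells, x_sensors, eta, active_cells):
    """Exponential-decay localization, L_ij = exp(-|r_ij|^2 / eta).

    x_cells      : (n, 3) coordinates of the updated state entries (cell centres)
    x_sensors    : (n_o, 3) coordinates of the sensors providing observation
    eta          : decay parameter, tuned to avoid discontinuities in the update
    active_cells : (n,) boolean, True for cells allowed to be updated (near-wall region)
    """
    dr2 = np.sum((x_cells[:, None, :] - x_sensors[None, :, :]) ** 2, axis=-1)
    L = np.exp(-dr2 / eta)
    L[~active_cells, :] = 0.0  # no update far from the wall
    return L                   # same shape as the Kalman gain, applied element-wise
```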
## 4 Results
The physics-infused procedure developed relies on on-the-fly state estimation and parametric optimization of an ensemble of numerical simulations. These runs predict the instantaneous turbulent physical field. Results obtained by this procedure are first compared with the reference database presented in §3.1. A sensitivity analysis of the hyperparameters governing the performance of the numerical model and of the DA strategy is also performed.
Figure 6: Position at the bottom wall of the sensors providing constraints for the streamwise direction (\(u_{x}=0\)) when applying the EnKF.
### Comparison of DA results with reference simulations
It is recalled that the DA procedure based on the EnKF is used to optimize the parametric behaviour of an advanced IBM penalization model, as well as to perform an update of the flow field. These operations are performed sequentially, operating directly on the predicted instantaneous flow field.
First of all, data in table 1 seem to indicate that the infusion of the local no-slip constraint and the global knowledge of the friction coefficient \(C_{f}\) are able to significantly improve the prediction of the friction velocity \(u_{\tau}\). In fact, classical IBM calculations (DNS-IBM-DF) significantly under-predict this quantity, with a discrepancy of around 17%. The physics-infused procedure obtains an overprediction of around 1.5%, i.e. a discrepancy almost 12 times smaller than that of the classical simulations. Therefore, one can expect that the global accuracy of the flow is significantly increased, considering the essential role of \(u_{\tau}\) in the establishment of the physical features observed for this test case. A first qualitative comparison of the flow features is shown in figure 8 for the isocontours calculated via the Q-criterion. One can see that, despite the lack of grid resolution, the DA simulation (DNS-IBM-DA) appears to be able to capture fine structures that exhibit a better agreement with the reference DNS (R-DNS-BF). This result appears to imply that the instantaneous flow update performed by the EnKF provides a significant improvement in accuracy when compared with the classical IBM approaches, which are run on grids of identical resolution.
Similar conclusions can be drawn comparing the statistical moments obtained by the
Figure 7: Scheme of CONES application for sequential DA techniques.
simulations, which are shown in figure 9. The normalized mean velocity profile \(\langle U\rangle^{+}\) shown in figure 9_(a)_ and _(b)_ is significantly closer to the DNS reference both in the inner and the outer layers. In addition, one can see that the no-slip condition at the wall is well obtained by the DA simulation, within the confidence level prescribed for the observation, since \(\langle U\rangle_{y=0,2h}^{+}=0.097\). For the DNS-IBM-CF, \(\langle U\rangle_{y=0,2h}^{+}=0.557\), which corresponds to an improvement of \(83\%\). Improvements using the DA method can also be observed for the Reynolds stress tensor components, in particular when compared with the continuous IBM model. These improvements, however, vary depending on the component. High accuracy is obtained for the prediction of \(\langle u_{x}^{\prime}u_{y}^{\prime}\rangle^{+}\), which is shown in figure 9_(f)_, while a small degradation is observed for \(\langle u_{z}^{\prime}u_{z}^{\prime}\rangle^{+}\) in figure 9_(e)_.
The analysis is completed with the comparison of the time spectra shown in figure 10. The spectra are calculated by applying the 1-D discrete _Fast Fourier Transform_ [\(FFT\); Press _et al._ (2017)] to the time series of the fluctuating velocity \(\mathbf{u}^{\prime}(t)\) sampled at the locations \(y^{\star}=30,\,56\):
\[FFT(k)=\sum_{n=0}^{N-1}\mathbf{u}^{\prime}(t_{n})\,e^{-2\pi j\frac{kn}{N}} \tag{1}\]
\(N=7\,500\) is the total number of samples. The sampling time is \(\Delta t=0.04\,t_{A}\) over a period \(T=100\,t_{A}\). Using Taylor's hypothesis for frozen turbulence (Taylor, 1938), results in figure 10 are shown for wavenumbers \(\kappa=2\pi f/U_{c}\), where \(f\) is the temporal frequency
Figure 8: Isocontours of Q-criterion calculated for \(y/h=0.18\) (\(y^{\star}\approx 100\)). Results are shown for simulations (a) DNS-IBM-CF, (b) DNS-IBM-DF, (c) DNS-IBM-DA, and (d) R-DNS-BF.
of the \(FFT\) transform. In addition, \(\kappa\) is non-dimensionalized with respect to the kinematic viscosity \(\nu\) and the friction velocity \(u_{\tau}\) so that \(\kappa^{+}=\kappa\nu/u_{\tau}\). To smooth out the curves, a first-order Butterworth low-pass filter (Butterworth, 1930) is used to eliminate the undesired noise. One can see that the coarse-grid simulations (DNS-IBM-CF and DNS-BF) produce similar spectra, and the fluctuation energy starts to decay for relatively low wavenumbers. This observation is related to two concurrent phenomena, namely the lack of grid resolution and the poor accuracy in estimating the wall shear stress. On the other hand, DA results get significantly closer to the reference DNS results, indicating that the accurate prediction of \(u_{\tau}\) provides a beneficial effect on the representation of the fluctuating velocity field. One could also observe that the gap between the reference DNS and the DA run is around half an octave, i.e. proportional to the difference in resolution of the grids used for calculations. This confirms that the use of the EnKF was able to efficiently compensate for the modelling error and that the discrepancies observed are intrinsically tied to the grid resolution.
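The post-processing described above can be sketched as follows; the power spectrum is used in place of the raw transform, and the sampling parameters, normalization and cutoff frequency are illustrative choices rather than the values of the actual analysis.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def velocity_spectrum(u_prime, dt, u_c, u_tau, nu, cutoff=0.1):
    """Time spectrum of a velocity signal, mapped to wavenumbers via Taylor's hypothesis.

    u_prime : (N,) fluctuating velocity sampled every dt
    u_c     : mean convection (centerline) velocity
    cutoff  : normalized cutoff of the first-order Butterworth smoothing filter
    """
    n_samples = u_prime.size
    spectrum = np.abs(np.fft.rfft(u_prime)) ** 2 / n_samples  # one-sided power spectrum
    f = np.fft.rfftfreq(n_samples, d=dt)                      # temporal frequencies
    kappa_plus = (2.0 * np.pi * f / u_c) * nu / u_tau         # kappa+ = (2 pi f / U_c) nu / u_tau
    b, a = butter(N=1, Wn=cutoff)                             # first-order Butterworth low-pass
    return kappa_plus, filtfilt(b, a, spectrum)

# Example with a synthetic signal of N = 7500 samples spaced by dt = 0.04 t_A
rng = np.random.default_rng(1)
u_prime = rng.standard_normal(7500)
kappa_plus, E = velocity_spectrum(u_prime, dt=0.04, u_c=1.0, u_tau=0.048, nu=1.0e-4)
```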
### Sensitivity of the physics-infused strategy to mesh refinement and placement of sensors
The analysis of the results performed in §4.1 highlighted how the infusion of physical information in the numerical process can improve the quality of the results even when simulations are performed using coarse grids. In the present section, the sensitivity of the DA strategy is tested against variations in the key elements that constitute the global methodology. More precisely, two aspects are considered. The first one is represented by variations in the grid resolution for the model. The second one consists of a different distribution and density of the sensors providing observation. To this purpose, four additional DA runs are performed varying the indicated parameters. Features of such simulations are reported in table 2. The simulation DNS-IBM-DA from the previous section is now taken as the reference for the rest of the data-driven simulations. As one can see, case 1 uses the same mesh as the reference DA run, while cases 2, 3, and 4 are performed using a modified grid. In these cases, the resolution in the streamwise
Figure 9: Comparison of the main statistical moments of the velocity field. Results are shown for simulations () DNS-IBM-DA, () DNS-BF, () DNS-IBM-CF, () DNS-IBM-DF, and () R-DNS-BF.
direction \(x\) and in the spanwise direction \(z\) is twice as coarse, while a higher resolution is employed in the normal direction \(y\). For the latter, a smaller expansion ratio between consecutive mesh elements \(r_{y}=\Delta y_{i}/\Delta y_{i-1}\) is used. This parameter, which is equal to \(r_{y}=1.171\) for the grid employed in the DNS-IBM-DA run, is reduced to \(r_{y}=1.060\) in these studies. This new grid is composed of \(N=262\,144\) elements. The second aspect that is considered is the number of sensors employed to locally inform the model to respect the constraint \(u_{x}=0\) for \(y=0\). One can see in figure 11 that different densities of the sensors, as well as different distributions, have been analysed. In particular, one can see that probes have been positioned both at the centres of the mesh elements and at their interfaces. In addition, in case 3, the problematic aspect of multiple sensors within one mesh element has been investigated.
The prior state for the new DA runs has been chosen using the same criteria previously presented for the DNS-IBM-DA case, and the optimization targets the values of the diagonal elements of the tensor \(D\). The prior state for the parameters is, for these cases, the solution obtained by the DNS-IBM-DA run. This choice, which has been performed to obtain a faster and more robust convergence of the optimisation procedure, also allowed for a lower deterministic inflation level to be applied. In this case, a constant value of \(\lambda=1.01\) is chosen for all parameters throughout the time range \(t\in[0,300\,t_{A}]\). The level has been selected empirically to obtain the best trade-off between fast convergence and stability of the optimization procedure.
Figure 12 illustrates the main statistical moments of the flow. One can see that the choice of a different grid affects the prediction of the physical quantities investigated, while a weak sensitivity to the position and number of sensors is observed. In particular, results for case 1 are very similar to those obtained by the run DNS-IBM-DA, despite the smaller number of sensors which are located at the cell edges. A quantification of the differences between these two runs is provided by the error in the prediction of the friction coefficient, calculated as:
\[\Delta_{\overline{C_{f}}}=\frac{|\langle C_{f,\text{DA}}\rangle-\langle C_{f, \text{R-DNS-BF}}\rangle|}{\langle C_{f,\text{R-DNS-BF}}\rangle} \tag{10}\]
Figure 10: \(FFT\) calculated by sampling the fluctuating velocity field \(\mathbf{u}^{\prime}\) located at (first row) \(y^{*}\approx 30\) and (second row) \(y^{*}\approx 56\). Results are shown for the simulations () DNS-IBM-DA, () DNS-IBM-CF, () DNS-BF, and () R-DNS-BF.
The results, which are reported in table 2, indicate a discrepancy of around 3% for the reference DA run and 1% for case 1. The analysis is now extended to the three DA runs performed over the modified grid. Results obtained for the statistical moments of the velocity field in figure 12 indicate a degradation of the performance of the DA algorithm. This observation is expected, because the IBM model is, in this case, less accurate owing to the lower grid resolution. However, it has been verified that the DA runs obtained using this grid are noticeably better than classical simulations on the same mesh, confirming the improvement in accuracy of the DA strategy whichever numerical model is used for the forecast. One can see that results from cases 2 and 4 are almost identical, also in terms of measured discrepancy \(\Delta_{\overline{C_{f}}}\). On the other hand, results for case 3 are less precise, with an increase of \(\Delta_{\overline{C_{f}}}\) when compared to the other two DA realizations on the same grid. A direct conclusion that can be drawn from this analysis is that, while features of the numerical model used for DA are important, the number and position of the sensors also play a role in the global accuracy of the results. More precisely, when sensors are positioned too close to one another, the uncertainties affecting the measurements can be responsible for convergence issues in the optimization process. In addition, the hypothesis of uncorrelated uncertainty of the observation (i.e. \(\boldsymbol{R}\) being a diagonal matrix) becomes questionable when sensors are close. On the other hand, if sensors are far from one another in terms of the characteristic scales of the flow, the global lack of information may preclude a satisfactory state estimation. It must also be stressed that in this case, only one piece of information, namely the streamwise velocity, has been observed for each sensor. The global picture is arguably going to be more complex when multiple physical quantities are provided at each location.
The analysis of second-order statistical moments in figure 12_(c)_ to _(f)_ shows that no differences are observed among the simulations with the same grid. This may imply that the behaviour of such statistics is less sensitive to variations of the friction coefficient/friction velocity in the range here investigated. One can also see that, even if every DA run improves the performance of the underlying IBM, only the realizations with the first grid refinement (DNS-IBM-DA and case 1) are actually able to comply with the indicated level of confidence (5%) for \(\overline{C_{f}}\).
Finally, the optimized values of the tensor \(\boldsymbol{D}\) are shown in figure 13. Here, results are selected for the most accurate realization on each grid, i.e. cases 1 and 2. First of all, the parameter distributions obtained for the two meshes are very similar. This observation confirms the robustness of the optimization procedure. In addition, for each component, the coefficients for the mesh elements of the interface \(\Sigma_{b}\) play a predominant role when compared to the ones in the solid region \(\Omega_{b}\). This result is in agreement with the properties
\begin{table}
\begin{tabular}{l l l l l l l l l l} \hline & \(N_{x}\times N_{y}\times N_{z}\) & \(\Delta x^{*}\) & \(\Delta z^{*}\) & \(\Delta y^{*}_{min}\) & \(\Delta y^{*}_{max}\) & \(L_{x}/h\) & \(L_{z}/h\) & sensors (\(u_{x}=0\)) & \(\Delta_{\overline{C_{f}}}\) \\ R-DNS-BF & \(1\,024\times 256\times 512\) & 9.9 & 6.6 & 1 & 11.2 & \(6\pi\) & \(2\pi\) & \(-\) & \(-\) \\ DNS-IBM-DA & \(128\times 64\times 64\) & 39.5 & 26.3 & 1.3 & 52 & \(3\pi\) & \(\pi\) & \(4\,096\) & \(0.033\) \\ case 1 & \(128\times 64\times 64\) & 39.5 & 26.3 & 1.3 & 52 & \(3\pi\) & \(\pi\) & \(1\,024\) & \(0.012\) \\ case 2 & \(64\times 128\times 32\) & 79.0 & 52.7 & 1 & 30 & \(3\pi\) & \(\pi\) & \(1\,024\) & \(0.149\) \\ case 3 & \(64\times 128\times 32\) & 79.0 & 52.7 & 1 & 30 & \(3\pi\) & \(\pi\) & \(4\,096\) & \(0.195\) \\ case 4 & \(64\times 128\times 32\) & 79.0 & 52.7 & 1 & 30 & \(3\pi\) & \(\pi\) & \(256\) & \(0.154\) \\ \end{tabular}
\end{table}
Table 2: Summary of the DA runs performed for the sensitivity analysis.
of continuity and conservation that one can find in Roma's function (see equation B.3) used in the discrete IBM. However, two important points need discussion:
\(\bullet\) One would expect the order of magnitude of the parameters linked to \(D_{yy}\) to be similar to those characterizing \(D_{zz}\) and, at the same time, to be significantly smaller than the coefficients defining \(D_{xx}\). The results of the DA optimization indicate that the values for \(D_{yy}\) are, on the other hand, similar to those for \(D_{xx}\). This result could be associated with the improvement in the estimation of the wall shear stress.
\(\bullet\) In the interface region \(\Sigma_{b}\), the coefficients for \(D_{xx}\) exhibit their maximum for \(y=0\). For the coefficients for \(D_{yy}\) and \(D_{zz}\), a minimum is obtained there instead. The observation provided for \(y=0\) is in the form of streamwise velocity, therefore it is logical to expect a locally higher coefficient for \(D_{xx}\). The values obtained in the neighbouring mesh elements for \(D_{yy}\) and \(D_{zz}\) are therefore optimized to complement the constraint \(u_{x}=0\) with the physical condition imposed for \(u_{\tau}\).
### Comparison of computational resources required
Results presented in SS4 have shown how the physics-infused procedure relying on the EnKF provides a significant improvement in the predictive capabilities of a classical penalization IBM model. In this section, the computational resources \(CC\) required to perform each of the simulations of the database in tables 1 and 2 are discussed. Since the simulations have not been run on the same machine, an exact comparison of the computational resources used cannot be performed; the elements of discussion provided nonetheless aim to obtain insights about the efficiency of the data-infused procedure, i.e. the gain in accuracy versus the increase in costs. To this purpose, the resources required to perform the simulations are normalized over \(CC_{\text{DNS-BF}}\), which is the total cost of running the simulation DNS-BF (\(CC^{\star}=CC/CC_{\text{DNS-BF}}\)). It is recalled that all numerical simulations are run with the same numerical schemes and grid (with the exception of the simulation R-DNS-BF and the data-driven cases 2, 3 and 4). Therefore, differences observed in computational terms are not associated with the numerical solver.
Figure 11: Location of the sensors where \(u_{x}=0\) is infused during the DA procedure. Snapshots are shown for \(y=0\).
The two LES calculations, LES-BF and LES-BF-VD, exhibit very similar requirements when compared with the coarse-grained DNS-BF: \(CC^{\star}=0.995\) and \(CC^{\star}=0.986\), respectively. The only difference in this case is represented by the calculation of the contribution of the Smagorinsky model. Considering that this SGS closure does not require the resolution of additional equations, the variation in computational resources is negligible. Similar considerations can be made for the simulation DNS-IBM-CF, where \(CC^{\star}=1.027\), since the calculation of the penalization term is straightforward in this case and does not require a significant amount of supplementary resources. The costs for these first three simulations stay within a couple of percentage points of the reference simulation, which can be attributed to numerical uncertainty in the calculation process. On the other hand, the value of the parameter \(CC^{\star}=1.172\) for simulation DNS-IBM-DF is noticeably higher. This is due to the communication and execution of the interpolation and spreading steps, which are performed by two different libraries. While this algorithmic cost can be improved with efficient coding, it still represents a non-negligible increase in the computational costs since, additionally, \(0.74\) scalar hours are required when launching the simulation to load the stencils of the \(16\,384\) Lagrangian markers (one Lagrangian marker for each Eulerian cell located at \(y=0,2h\)).
Figure 12: Comparison of the main statistical moments of the velocity field for the data-driven cases. Results are shown for simulations () DNS-IBM-DA, () case 1, () case 2, () case 3, () case 4, and () R-DNS-BF.
Figure 13: Coefficients of tensor \(\boldsymbol{D}\) for the simulations () case 1 and () case 2. Similar trends are observed in cases with identical grid refinement.
Now, the simulation R-DNS-BF is considered. For this simulation, one needs to take into account that the physical domain is \(2\) times larger in the streamwise \(x\) and spanwise \(z\) directions, and the grid resolution is \(4\) times finer in every direction, which implies that the total number of mesh elements is \(256\) times larger than for the grid used for the other calculations. In addition, the time step is also five times smaller, which means that the computational resources required are roughly \(10^{4}\) times those of the simulation DNS-BF. A practical estimation provides a result of \(CC^{\star}\approx 12\,000\).
Finally, the data-driven runs are considered. In this case, the computational costs are normalised by the ensemble number \(m+1\) (one additional CPU core is employed to perform the analysis phases). It is noticed from Algorithm 1 that the cost associated with the matrix operations depends on the number of observations employed \(n_{o}\), the number of ensembles \(m\), the number of parameters \(\theta\), and the number of degrees of freedom in our system \(n\). In this work, \(m\) and \(\theta\) are constant for every DA realisation. Therefore, the computational costs related to the data-driven procedures can be expressed as \(CC^{\star}=f(n,n_{o})\). In figure 14(_a_), the costs of the five DA runs analyzed in SS4.2 are plotted against the two variables \((n^{\star},n_{o}^{\star})\), non-dimensionalized by their maximum values (\(n^{max}=172\,032\) and \(n_{o}^{max}=4\,096\)). One can see that the five results for \(CC^{\star}\) from the available DA realizations are well fitted by a log-level regression model. This is assessed by estimating the coefficient of determination \(R^{2}\). For a dependent variable \(\mathbf{z}\) whose predicted values are \(\widehat{z}_{i}\), its definition is \(R^{2}=\sum_{i}(\widehat{z}_{i}-\overline{z})^{2}/\sum_{i}(z_{i}-\overline{z})^{2}\), which represents the proportion of the total variation of the \(z_{i}\) values around the average \(\overline{z}\) that is explained by the regression equation. For the predicted \(\widehat{CC}^{\star}\), the following equation is derived with an \(R^{2}=99.902\%\):
\[\ln\left(\frac{\widehat{CC}^{\star}}{m+1}\right)=1.651+1.448\,n^{\star}+0.511 \,n_{o}^{\star} \tag{10}\]
It is observed that, in the empirical relation expressed in equation 10, \(n^{\star}\) is more important than \(n_{o}^{\star}\) in the range of investigation of the present study. The data-driven runs are now sorted by increasing computational cost. Case 4 provides a \(CC^{\star}/(m+1)=8.797\), case 2 is \(9.284\), case 3 is \(13.708\), case 1 is \(24.797\), and DNS-IBM-DA is \(37.598\). This result clearly indicates that the hyperparameters driving the DA technique are critical both in terms of accuracy and of computational cost. Further investigations should focus on determining these coefficients to optimize the algorithm, possibly in combination with Machine Learning (ML) techniques. Besides, as already expressed in SS4.2, one can see that more observations do not necessarily imply better accuracy. To quantify such a statement, the parameter \(\Delta_{\overline{C_{f}}}\) from table 2 is also approximated by a log-level regression model and shown in figure 14_(b)_. In this case, \(R^{2}=99.269\%\) for the range covered in these studies. The resulting empirical formula is:
\[\widehat{\Delta_{\overline{C_{f}}}}=0.219-0.223\,n^{\star}+0.041\,n_{o}^{\star} \tag{11}\]
The positive coefficient of \(n_{o}^{\star}\) in equation 11 is an indication of the potential degradation of the accuracy of the DA technique due to overconstraining, such as for the DA run case 3 in the present work.
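For completeness, the log-level fit of equation 10 can be reproduced with a short least-squares script. The sketch below is illustrative only: the \(CC^{\star}/(m+1)\) and \(n_{o}^{\star}\) values are taken from the text and table 2, whereas the \(n^{\star}\) values are assumed placeholders, since the normalisation of \(n\) used in figure 14 is not reproduced here.

```python
import numpy as np

# CC*/(m+1) values quoted above for the five DA runs
cc_star = np.array([37.598, 24.797, 9.284, 13.708, 8.797])   # DNS-IBM-DA, case 1, 2, 3, 4
# n_o* from table 2, normalised by n_o^max = 4096
no_star = np.array([4096, 1024, 1024, 4096, 256]) / 4096.0
# n* values are placeholders: the exact normalisation of n is not reproduced here
n_star = np.array([1.0, 1.0, 0.5, 0.5, 0.5])

def log_level_fit(y, predictors):
    """Least-squares fit of ln(y) = a0 + a1*x1 + ... and its R^2
    (same definition of R^2 as in the text)."""
    X = np.column_stack([np.ones_like(y)] + list(predictors))
    z = np.log(y)
    coeffs, *_ = np.linalg.lstsq(X, z, rcond=None)
    z_hat = X @ coeffs
    r2 = np.sum((z_hat - z.mean()) ** 2) / np.sum((z - z.mean()) ** 2)
    return coeffs, r2

coeffs, r2 = log_level_fit(cc_star, [n_star, no_star])
# coeffs approaches (1.651, 1.448, 0.511) of equation 10 only if the true n* values are used
print(coeffs, r2)
```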
## 5 Conclusions
A physics-infused strategy has been developed to improve the accuracy of a classical Immersed Boundary Method (IBM) tool, namely the penalization method. Physical information about the flow condition, no-slip condition and the stress at the wall, has been introduced within an online data-driven technique based on the Ensemble Kalman Filter (EnKF). This sequential tool has been used to update the physical state of the flow, as well as to optimize the parametric description of the penalization IBM integrated into the dynamic equations. The analysis of the statistical quantities of the flow, which include \(u_{\tau}\), the mean streamwise velocity \(\langle u_{x}\rangle\) and the components of the Reynolds stress tensor, indicate that the DA procedure is successful in accurately reproducing the flow when compared with a high-accuracy DNS. In addition, the analysis of the instantaneous features, in the form of isocontours of the Q-criterion, shows that a similar organization of the structures is obtained despite the different grid resolution. The sensitivity of the physics-infused procedures to changes in the model and the observation has then been analyzed. For the former, a grid with a different resolution in each space direction has been selected. For the latter, the position and number of sensors have been modified. The results indicate a mild sensitivity to these key elements, which is nonetheless lower than the one exhibited for classical simulations with the same mesh changes. Therefore, the procedure shows good characteristics of robustness. In addition, the current version of the online DA procedure is extremely competitive in terms of accuracy vs computational costs required, and it can be further improved using advanced EnKF strategies such as multiple local EnKF with extended localization, as performed in environmental studies (Asch _et al._, 2016).
This study opens the perspective for including physical knowledge for numerous flow configurations within a data-driven formalism. The important point here is that the technique proposed at least partially bypasses the need to produce, store, and manipulate big databases, providing a reduction of computational resources needed for these means. Future applications currently explored by the team deal with the training of Machine Learning (ML) tools to replicate the dynamic effects of IBM models.
The present research work has been developed in the framework of the project ANR
JCJC-2021 IWP-IBM-DA. Computational resources to perform a part of the database of simulations have been obtained through the project EDARI A0122A01741 on the IRENE supercomputer (TGCC).
Figure 14: Real (red dots) and predicted (grey plane) values for _(a)_ computational costs \(CC^{\star}\) and _(b)_ the relative error of the friction coefficient \(\Delta_{\overline{C_{f}}}\) for the range covered in these studies.
## Appendix A
The data-driven algorithm used to augment the IBM relies on numerical solvers based on the PISO algorithm (Issa 1986). The velocity and pressure field are iteratively updated to comply with the momentum equation and the Poisson equation. When the penalization volume force is velocity-dependent \(\mathbf{f}_{P}(\mathbf{u}_{t,j})\), it may be solved implicitly in equation 2. Let us consider the time \(t\) and the time step advancement \(\Delta t\). Considering \(j\in[1,J]\) as a single iteration from the PISO loop, the following steps are performed:
1. Resolution of the momentum equation, from where \(\mathbf{u}_{t,j}\) is obtained. \[\mathbf{\mathsf{\Lambda}}\,\mathbf{u}_{t,j}-\mathbf{b}-\mathbf{f}_{P}(\mathbf{u}_{t,j})=-\nabla p_{ t-\Delta t}\] (11)
2. Estimation of the pressure \(p_{t,j+1}\) through the Poisson equation. \(A^{\prime}\) is a scalar field calculated from \(\mathbf{\mathsf{\Lambda}}\) and \(\mathbf{f}_{P}\), whereas \(\mathbf{\mathsf{\Gamma}}^{\prime}(\mathbf{u}_{t,j})\) is a tensor field containing the discretized form of all the terms on the left side of equation 11. \[\mathbf{\nabla\cdot\frac{1}{A^{\prime}}}\nabla p_{t,j+1}=\mathbf{\nabla\cdot\left( \frac{\mathbf{\mathsf{\Gamma}}^{\prime}(\mathbf{u}_{t,j})}{A^{\prime}}\right)}\] (12)
3. Update of the velocity field \(\mathbf{u}_{t,j+1}\) to satisfy the zero-divergence condition. \[\mathbf{u}_{t,j+1}=\frac{\mathbf{\mathsf{\Gamma}}^{\prime}(\mathbf{u}_{t,j})}{A^{\prime}} -\frac{1}{A^{\prime}}\nabla p_{t,j+1}\] (13)
Equations 12 and 13 are solved iteratively until reaching convergence. In the case of the presence of high-fidelity data at time \(t\), when \(j=J\), \(\mathbf{u}_{t,J}=\mathbf{u}_{k}^{f}\), which is the forecast velocity field at the \(k^{th}\) analysis phase for the EnKF. Together with the coefficients \(\theta_{k}^{f}\) of the tensor \(\mathbf{\mathsf{\Omega}}\), it constitutes the forecast system's state. The EnKF updates them to estimate \((\mathbf{u}_{k}^{a},\theta_{k}^{a})\). This state, however, does not necessarily respect the Navier-Stokes equations, as the velocity field is updated while the pressure field is the one obtained in the forecast. In order to obtain a consistent final solution, an additional update of the pressure is performed via an additional resolution of the Poisson equation.
1. The volume force term \((\mathbf{f}_{P})_{k}^{a}\) is updated by using the new coefficients \(\theta_{k}^{a}\) from the tensor \(\mathbf{\mathsf{\Omega}}\): \[(\mathbf{f}_{P})_{k}^{a}=-\nu\mathbf{\mathsf{\Omega}}(\theta_{k}^{a})\,\mathbf{u}_{k}^{a}\] (14)
2. A Poisson equation is solved to update the pressure \(p_{k}^{a}\). The complete flow field and the updated parameters \(\theta_{k}^{a}\) will be used to solve the equation 11 in \(t+\Delta t\). \[\mathbf{\nabla}^{2}p_{k}^{a}=-\mathbf{\nabla\cdot(\mathbf{u}_{k}^{a}\mathbf{\nabla u}_{k}^{a })+\mathbf{\nabla\cdot(f_{P})_{k}^{a}}}\] (15)
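A minimal sketch of this two-step consistency update (equations 14 and 15) is given below, assuming the analysis fields are available as collocated NumPy arrays and that a Poisson solver consistent with the flow solver's boundary conditions is provided elsewhere; the per-cell diagonal form used for \(\mathbf{\mathsf{\Omega}}(\theta_{k}^{a})\) is a simplification introduced purely to keep the example short.

```python
import numpy as np

def divergence(field, spacing):
    """Divergence of a vector field stored as (3, Nx, Ny, Nz)."""
    return sum(np.gradient(field[i], spacing, axis=i) for i in range(3))

def post_analysis_update(u_a, omega_theta_a, nu, spacing, poisson_solver):
    """Consistency step after the EnKF analysis (equations 14-15).

    u_a           : analysis velocity field, shape (3, Nx, Ny, Nz)
    omega_theta_a : penalization tensor evaluated with the analysed parameters,
                    simplified here to a per-cell diagonal, shape (3, Nx, Ny, Nz)
    poisson_solver: callable solving  laplacian(p) = rhs  with the solver's BCs
    """
    # Eq. (14): updated penalization force with the analysed parameters
    f_p = -nu * omega_theta_a * u_a
    # convective term (u . grad) u, component-wise
    conv = np.zeros_like(u_a)
    for i in range(3):
        conv[i] = sum(u_a[j] * np.gradient(u_a[i], spacing, axis=j) for j in range(3))
    # Eq. (15): right-hand side of the Poisson equation for the analysis pressure
    rhs = -divergence(conv, spacing) + divergence(f_p, spacing)
    return poisson_solver(rhs), f_p
```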
## Appendix B
The complete algorithm for the reference discrete IBM is described below. For each time \(t\), the following steps are carried out:
(i) Estimation of an initial velocity \(\mathbf{u}_{t,j^{*}}\) following the expression (11) for the momentum predictor without a source term.
\[\boldsymbol{A}\,\boldsymbol{u}_{t,j^{*}}-\boldsymbol{b}=-\nabla p_{t-\Delta t} \tag{22}\]
(ii) Interpolation \((I)\) of the velocity \(\boldsymbol{u}_{t,j^{*}}\) from the element \(i\) of a subspace \(D_{s}\) (Eulerian mesh) to the Lagrangian marker \(s\), with \(s\in[1,N_{s}]\).
\[I[\boldsymbol{u}_{t,j^{*}}]_{s}=[\,\boldsymbol{U}^{*}\,]\,(s)=\sum_{i\in D_{s}}(\boldsymbol{u}_{t,j^{*}})_{i}\,\delta_{d}\,(\boldsymbol{x}_{i}-\mathcal{X}_{s})\,\Delta\boldsymbol{x} \tag{23}\]
\(\boldsymbol{x}_{i}\) and \(\mathcal{X}_{s}\) refer to the position in the Eulerian and Lagrangian frameworks, respectively. \(\Delta\boldsymbol{x}\) describes the Eulerian quadrature (\(\Delta\boldsymbol{x}=\Delta x\Delta y\Delta z\) in the Cartesian frame of coordinates) and the interpolation kernel \(\delta_{d}\) is a discretized delta function based on the Euclidean distance \(r=(\boldsymbol{x}_{i}-\mathcal{X}_{s})/d\) proposed by Roma _et al._ (1999):
\[\delta_{d}=\begin{cases}\dfrac{1}{3}\big{(}1+\sqrt{-3r^{2}+1}\big{)}&0\leq r \leq 0.5\\ \dfrac{1}{6}\Big{(}5-3r-\sqrt{-3(1-r)^{2}+1}\Big{)}&0.5\leq r\leq 1.5\\ 0&otherwise\end{cases} \tag{24}\]
The kernel is multiplied with a prescribed function \(\mathcal{G}\) to account for the directional stretching of the mesh elements (a short numerical sketch of this kernel and of the interpolation step is given at the end of this appendix).
(iii) Calculation of \(\boldsymbol{F}_{P}\) on the \(N_{s}\) Lagrangian markers by employing the equation 10 already presented in SS2.2, taking as target values over the \(N_{s}\) Lagrangian markers the velocities prescribed at the wall (no-slip condition). The resulting force is then spread back from the Lagrangian markers to the Eulerian mesh, and the pressure and velocity fields are updated in analogy with equations 12 and 13:
\[\mathbf{\nabla\cdot}\frac{1}{A}\nabla p_{t,j+1} =\mathbf{\nabla\cdot}\left(\frac{\mathbf{T}(\mathbf{u}_{t,j})}{A}\right) \tag{10}\] \[\mathbf{u}_{t,j+1} =\frac{\mathbf{T}(\mathbf{u}_{t,j})}{A}-\frac{1}{A}\nabla p_{t,j+1} \tag{11}\]
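As announced in step (ii), the following short sketch illustrates the Roma kernel (24) and the interpolation step (23) onto a set of Lagrangian markers. It assumes a uniform Cartesian mesh of size \(d\) (so the stretching function \(\mathcal{G}\) is omitted), and the \(1/d^{3}\) normalisation of the three-dimensional kernel is one possible choice, not taken from the text.

```python
import numpy as np

def roma_delta(r):
    """1D Roma et al. (1999) kernel, equation (24); r is |x - X| / d."""
    r = np.abs(r)
    out = np.zeros_like(r)
    near = r <= 0.5
    mid = (r > 0.5) & (r <= 1.5)
    out[near] = (1.0 + np.sqrt(-3.0 * r[near] ** 2 + 1.0)) / 3.0
    out[mid] = (5.0 - 3.0 * r[mid] - np.sqrt(-3.0 * (1.0 - r[mid]) ** 2 + 1.0)) / 6.0
    return out

def interpolate_to_markers(u, cell_centers, markers, d, cell_volume):
    """Interpolation step (23): Eulerian velocity u (N_cells, 3) sampled at
    cell_centers (N_cells, 3) onto the Lagrangian markers (N_s, 3).

    The kernel is evaluated on the Euclidean distance r = |x_i - X_s| / d, as
    in the text above; cell_volume plays the role of the Eulerian quadrature."""
    U = np.zeros((markers.shape[0], 3))
    for s, X in enumerate(markers):
        r = np.linalg.norm(cell_centers - X, axis=1) / d
        w = roma_delta(r) / d ** 3
        U[s] = np.sum(u * (w * cell_volume)[:, None], axis=0)
    return U
```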
|
2306.14056 | Stable Yaw Estimation of Boats from the Viewpoint of UAVs and USVs | Yaw estimation of boats from the viewpoint of unmanned aerial vehicles (UAVs)
and unmanned surface vehicles (USVs) or boats is a crucial task in various
applications such as 3D scene rendering, trajectory prediction, and navigation.
However, the lack of literature on yaw estimation of objects from the viewpoint
of UAVs has motivated us to address this domain. In this paper, we propose a
method based on HyperPosePDF for predicting the orientation of boats in the 6D
space. For that, we use existing datasets, such as PASCAL3D+ and our own
datasets, SeaDronesSee-3D and BOArienT, which we annotated manually. We extend
HyperPosePDF to work in video-based scenarios, such that it yields robust
orientation predictions across time. Naively applying HyperPosePDF on video
data yields single-point predictions, resulting in far-off predictions and
often incorrect symmetric orientations due to unseen or visually different
data. To alleviate this issue, we propose aggregating the probability
distributions of pose predictions, resulting in significantly improved
performance, as shown in our experimental evaluation. Our proposed method could
significantly benefit downstream tasks in marine robotics. | Benjamin Kiefer, Timon Höfer, Andreas Zell | 2023-06-24T20:47:37Z | http://arxiv.org/abs/2306.14056v1 | # Stable Yaw Estimation of Boats from the Viewpoint of UAVs and USVs
###### Abstract
Yaw estimation of boats from the viewpoint of unmanned aerial vehicles (UAVs) and unmanned surface vehicles (USVs) or boats is a crucial task in various applications such as 3D scene rendering, trajectory prediction, and navigation. However, the lack of literature on yaw estimation of objects from the viewpoint of UAVs has motivated us to address this domain. In this paper, we propose a method based on HyperPosePDF for predicting the orientation of boats in the 6D space. For that, we use existing datasets, such as PASCAL3D+ and our own datasets, SeaDronesSee-3D and BOArienT, which we annotated manually. We extend HyperPosePDF to work in video-based scenarios, such that it yields robust orientation predictions across time. Naively applying HyperPosePDF on video data yields single-point predictions, resulting in far-off predictions and often incorrect symmetric orientations due to unseen or visually different data. To alleviate this issue, we propose aggregating the probability distributions of pose predictions, resulting in significantly improved performance, as shown in our experimental evaluation. Our proposed method could significantly benefit downstream tasks in marine robotics.
## I Introduction
Yaw estimation of objects from the viewpoint of unmanned aerial vehicles (UAVs) and unmanned surface vehicles (USVs) or boats is an essential task in various applications such as 3D scene rendering, trajectory prediction, and navigation. Accurate pose estimation is crucial for safe and efficient operations in the marine environment, where the ability to locate and track objects such as boats and ships is essential for collision avoidance, search and rescue, and marine surveillance. Furthermore, it is vital to have robust yaw predictions in augmented reality applications, to better aid a human operator.
AIS (automatic identification system) data only helps for boats that emit these signals. Smaller boats do not send AIS data. Furthermore, radar is expensive and only provides a very coarse position of boats. It requires a correct set-up of the radar and is harder to interpret for non-experts. Computer vision-based orientation prediction on the other hand offers a cheap and direct method.
Furthermore, there is a lack of literature on heading estimation of objects from the viewpoint of UAVs and USVs. In particular, predicting the orientation of objects far away from the camera is a challenging task due to the inherent uncertainty in the visual data. Methods based on 3D bounding box detection rely on precise box labels and are inherently error-prone for distant objects [1].
In this paper, we propose a method based on HyperPosePDF [2] for predicting the orientation of boats in the 6D space. HyperPosePDF is a recent method that models the uncertainty of predictions and has shown promising results in the field of 6D pose estimation. We train this method on existing datasets, such as PASCAL3D+, and on our own datasets, called SeaDronesSee-3D and BOArienT, which we manually annotate with bounding boxes and pose information for evaluation purposes.
To speed up the bounding box annotation, we develop an annotation tool based on the recently published "Segment Anything" method [3]. We make this tool together with the data publicly available.
We extend HyperPosePDF to work in video-based scenarios, where the prediction of the orientation of objects across time is essential. Naively applying HyperPosePDF on video data yields single-point predictions, often resulting in far-off predictions and incorrect symmetric orientations due to unseen or visually different data. Therefore, we propose aggregating the probability distributions of pose predictions over time, resulting in significantly improved performance, as shown in our experimental evaluation.
Furthermore, naively predicting the yaw of boats based on analyzing their trajectory in 3D space does not work for standing or slowly moving boats. Moreover, formulating yaw prediction in this way is error-prone due to an ill-posed 2D \(\longleftrightarrow\) 3D projection, which is not reliable in heading estimation as we will see in subsequent sections.
Lastly, we demonstrate a full pipeline with detection and tracking of objects and subsequent orientation prediction for a downstream synthetic rendering of a scene. Our proposed method could significantly benefit downstream tasks in marine robotics.
Fig. 1: Ignoring the temporal domain results in false, near-symmetric orientation prediction of a boat from frame 70 (top) to frame 71 (middle). Tracking the probability distributions alleviates this problem (bottom).
In summary, our contributions are as follows:
* We pose the novel problem of predicting the heading of boats via purely vision-based methods.
* We propose a novel method to aggregate pose predictions by tracking the probability distributions to capture uncertainties due to symmetries and ambiguous appearances.
* We create a new dataset BOArienT, a benchmark featuring 30 FPS manually annotated video, featuring precise object detection and pose labels. Furthermore, we annotate parts of SeaDronesSee-MOT with pose data, which we call SeaDronesSee-3D.
* We show in multiple experiments on diverse benchmarks the utility of our method. Lastly, we demonstrate the utility of our method on a full pipeline with detection and tracking to synthetically render a scene.
* We make code, data, and adapted labeling tools publicly available on www.macvi.org.
## II Related Work
Pose estimation of common or close industrial objects has been explored in several methods [4, 5, 6]. Analyzing the static images, they split the task into two stages - object detection and subsequent 6D pose estimation of the predicted bounding boxes. However, their focus is on close objects that are dominant in the image plane. On the other hand, we focus on yaw estimation of boats that are distant and occasionally hardly visible. This makes an accurate yaw estimation hard as many plausible predictions exist. Several works explored how to model the uncertainty of pose predictions [2, 7]. They output probability distributions over many different poses, effectively capturing the symmetries inherent in the poorly visible objects. While they only experiment with common objects in static scenes, we aim to build on top of their methods to predict stable yaw predictions across time.
Recent years have shown a great influx of works in maritime computer vision [8, 9, 10, 11]. Most works focus on the detection or tracking of objects from the viewpoint of UAVs, USVs or boats. There is a large corpus of works on simulation and trajectory prediction [12, 13]. However, these methods only operate on map data as opposed to image/video data.
Likewise, the general UAV-/USV-based research focuses on object detection and tracking, and anomaly detection [14, 15, 16, 17, 9, 18], but neglects the yaw estimation aspect.
## III 3D Geometry Prerequisites
There are three principal axes in any boat, called longitudinal, transverse and vertical axes. Figure 2 shows the rotations around these. These are absolute orientations, i.e. while our method outputs an orientation estimation, it is relative to our camera view. Therefore, we may obtain the absolute orientation using an onboard magnetometer or dual GNSS solutions.
We note that we focus on the case of zero roll and pitch angle, i. e. only the yaw is predicted.
For downstream tasks, such as trajectory prediction for collision avoidance but also for rendering synthetic scenes visually smooth and stable, we need to map our predictions to 3D space. For that, we compute 3D object coordinates relative to the UAV, and then use these to obtain actual world coordinates via passive geolocation.
For the relative object coordinates, we consider a mathematical perspective projection camera model since this resembles the common use case for cameras on UAVs and USVs. We assume our camera to look down at a certain angle, which may be a variable gimbal or static camera. A gimbal balances a potential UAV roll angle so that we assume there to be a zero camera roll angle. If there is no gimbal in the USV case, we apply a CV-based roll correction by levelling the horizon line using the IMU roll angle.
Using the relative coordinates of an object (\(x\)- and \(y\)-ground distances to UAV), we compute its GPS coordinates based on the UAV's GPS coordinates as follows. Given the camera heading angle \(\theta\), we compute the rotation matrix and rotate the relative coordinates of an object to obtain
\[\begin{bmatrix}x_{r}\\ y_{r}\\ 1\end{bmatrix}=\begin{bmatrix}\cos(\theta)&-\sin(\theta)&0\\ \sin(\theta)&\cos(\theta)&0\\ 0&0&1\end{bmatrix}\begin{bmatrix}x\\ y\\ 1\end{bmatrix}. \tag{1}\]
Finally, we map the relative coordinates to GPS coordinates via
\[la^{object} =la+\frac{y_{r}}{r}\frac{180}{\pi}, \tag{2}\] \[lo^{object} =lo+\frac{x_{r}}{r}\frac{180}{\pi}\frac{1}{\cos(la~{}\pi/180)}, \tag{3}\] where \(r\) denotes the Earth radius.
Fig. 3: Orientation relative to the camera. At \(0^{\circ}\), the boat’s nose is facing directly us. Note that we did not include the roll and pitch angles.
Fig. 2: Rotation around the longitudinal, transverse and vertical axes, i.e. roll, pitch, and yaw [19].
We refer the reader to [20] for a more comprehensive derivation of the 2D \(\longleftrightarrow\) 3D projection. Concretely, we would like to note that the projection may be especially critical for a distant object in the USV scenario as here, we encounter a very acute viewing angle. Small errors in pixel space result in large distance errors in world space. It is an open problem how to correctly project distant objects into world space. For our purposes, we are mostly concerned with obtaining correct heading estimations for close detections that may ultimately pose an immediate threat. For distant objects, we mostly care about stable heading predictions over time.
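A minimal sketch of the relative-to-GPS mapping of equations (1)-(3) is given below. The heading convention (clockwise from north) and the WGS-84 Earth radius value are assumptions for illustration; the rest follows the equations directly.

```python
import math

EARTH_RADIUS_M = 6378137.0  # WGS-84 equatorial radius (assumed value)

def relative_to_gps(x, y, heading_deg, lat_deg, lon_deg, r=EARTH_RADIUS_M):
    """Map relative ground offsets (metres) of a detection to GPS coordinates.

    x, y        : object offsets relative to the UAV in the camera frame (metres)
    heading_deg : camera heading angle theta (degrees, clockwise from north assumed)
    lat_deg, lon_deg : UAV position
    """
    theta = math.radians(heading_deg)
    # Eq. (1): rotate the relative offsets into the world (north/east) frame
    x_r = math.cos(theta) * x - math.sin(theta) * y
    y_r = math.sin(theta) * x + math.cos(theta) * y
    # Eqs. (2)-(3): small-offset conversion from metres to degrees
    lat_obj = lat_deg + (y_r / r) * 180.0 / math.pi
    lon_obj = lon_deg + (x_r / r) * 180.0 / math.pi / math.cos(math.radians(lat_deg))
    return lat_obj, lon_obj
```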
## IV Data Collection and Labeling
Because of the lack of available datasets for yaw estimation, we capture and annotate our own. For the UAV scenario, we leverage the already existing SeaDronesSee-MOT [8] dataset, which comes with bounding boxes and instance ids for boats. Furthermore, we annotate the 6D pose of boats from various sample scenes by adapting the annotation tool provided in [21]. Figure 5 shows an example scene where a boat is labelled from the viewpoint of a UAV. We leverage the provided metadata from the UAV to automatically infer coarse pitch and roll angles relative to the camera. Herein, we assume the world pitch and roll angle of boats to be zero, such that we only need to annotate the heading direction. For that, we manually provide a coarse heading and, upon selecting anchor points from the CAD model and their correspondences in the real objects, we optimize for the precise 6D pose using their optimization procedure [21]. For annotation efficiency, we only annotate every 10th frame and interpolate the pose in between.
For the USV scenario, we capture our own data from the viewpoint of a fixed camera installed on a small motorboat. We use the ZED2 camera1 with integrated IMU to infer the orientation at which we look at the scene. As before, we may also infer a coarse estimation of the roll and pitch angle for subsequent finer annotating via pose optimization. Before that, we annotate the scenes with bounding boxes using our tool, which we built on top of SAM (Segment Anything Model [3], see Fig. 6). We leveraged their largest ViT-H (636M parameters) model and built a user interface and labeling logic around it, such that objects can be assigned their bounding boxes by just clicking on them. Analogous to before, we annotate every 10th frame and interpolate in between. Table I shows a timing comparison between conventional labeling tools and our method. Every method was required to yield precise bounding boxes as rated by human experts. We repeated this experiment with five experts knowledgeable in the field of object detection. Each experiment lasted for half an hour. Our method clearly outperforms the others by 8.7 FPM. We hypothesize that fatigue symptoms occur later because annotating with a single click already covers the entire object. In contrast, when setting bounding boxes, precise outlining of the object is required, which becomes more exhausting over time. While this effort can be reduced by tracking, there are often errors
in tracking objects that are far away, requiring the annotator to stay alert and relabel bounding boxes.
Fig. 4: Left: Example orientation of a boat taken from 50m of altitude and looking down with a pitch angle of 40\({}^{\circ}\). The highlighted boat has a yaw angle of 280\({}^{\circ}\) relative to our viewpoint. Since the UAV's heading is 170\({}^{\circ}\) (close to true south), we know that the boat has an absolute heading of 260\({}^{\circ}\) (close to true west). Right: CAD overlays on a frame of one of the videos we took. Note the very small objects in the left part of the frame.
Fig. 5: Example view of the pose labeling tool. First, we align the view coarsely in steps of 5\({}^{\circ}\), then we put the visible anchor points (see Figure 3) in the image plane. These are used to obtain a better pose label. We optimize the orientation to match these anchors (see [7]).
We want to note that this method can fail in scenarios of low contrast or very distant objects. In this case, one has to resort to standard bounding box detection. Moreover, it requires a GPU to process ViT-H (in our case an RTX 2080Ti). Furthermore, a more exhaustive study on the benefits of segmentation-based labeling needs to be done to obtain a more comprehensive overview. In particular, a more comprehensive experiment considering object number, size, shape and movement needs to be done. We release both (adapted) annotation tools for further studies.
## V Method: Aggregating Probability Distributions over Time
Our approach is based on HyperPosePDF [2]. For an input image \(x\in\mathcal{X}\), it aims to obtain a conditional probability distribution \(p(\cdot|x)\colon\mathbf{SO}(3)\mapsto\mathbb{R}^{+}\), representing the distribution of the inherited pose of an object in the image \(x\). For that, we train a vision backbone network, e.g. a ResNet, to predict the weights of a second network. While the vision network acts as a hypernetwork, the architecture of the second network is inspired by an implicit neural representation. The implicit neural representation acts on the rotation manifold and outputs, for each pose, the corresponding probability of it being the underlying rotation of the object present in the image. Hence, it acts as an approximation of the probability distribution \(p(R|x)\), normalized over \(\mathbf{SO}(3)\). During training, we maximize \(p(R|x)\) by providing pairs of inputs \(x\) and corresponding ground truth \(R\). To make a single pose prediction, we solve \(\arg\max_{R\in\mathbf{SO}(3)}f(x,R)\) with gradient ascent, projecting the values back onto the manifold after each step. To predict a full probability distribution, we evaluate \(p(R_{i}|x)\) over the \(\mathbf{SO}(3)\) equivolumetric partition \(R_{i}\). This model can estimate complex distributions on the manifold without prior knowledge of each object's symmetries, and appropriate patterns expressing symmetries and uncertainty emerge naturally in the model's outputs. This is, indeed, the most general way to conduct pose estimation. Specifically, in our scenario, where the predicted pose feeds into trajectory prediction, it is possible to include the uncertainty information of the pose to improve the performance.
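To illustrate the single-frame prediction step described above, the sketch below evaluates the predicted density over a set of candidate rotations and keeps the argmax. The `prob_fn` interface is hypothetical (it stands in for the implicit network produced by the hypernetwork for one image), random rotations replace the equivolumetric SO(3) partition, and the gradient-ascent refinement is omitted for brevity.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def predict_single_pose(prob_fn, n_candidates=20000):
    """Single-frame point estimate: evaluate the predicted density over a set
    of candidate rotations and keep the argmax.

    prob_fn : callable mapping rotation matrices (N, 3, 3) to unnormalised
              densities (N,); hypothetical stand-in for f(x, R) of one image.
    """
    candidates = Rotation.random(n_candidates).as_matrix()
    densities = np.asarray(prob_fn(candidates))
    return candidates[int(np.argmax(densities))]
```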
The posterior distribution
\[P(R_{k+1}|Z_{k}), \tag{4}\]
based on the observations \(Z_{k}\) for time steps \(\{1,\dots,k\}\) can be approximated by
\[P(R_{k+1}|Z_{k})\approx P(R_{k}|Z_{k})+\Delta_{\text{pose}}, \tag{5}\]
where \(\Delta_{\text{pose}}\) is defined as a weighted running average
\[\Delta_{\text{pose}}=\frac{1}{k}\sum_{l=0}^{k-1}\omega_{l}\Big{(}P(R_{l+1}|Z_{ l+1})-P(R_{l}|Z_{l})\Big{)}. \tag{6}\]
For \(l<k+1\), the probabilities \(P(R_{l}|Z_{l})\) are known and approximated by the HyperPosePDF network. Therefore, the calculation of the pose at a future time point is deterministic. The weights \(\omega_{l}\) fulfill \(\sum_{l}\omega_{l}=1\). To reduce the effect of earlier pose transitions, which have a lesser influence on the current pose movement, it is plausible to simply set the initial weights to \(0\) and average the remaining ones over a smaller time interval, chosen such that \(0<t<k\):
\[\omega_{l}=\begin{cases}0&\text{for }l<t,\\ \frac{1}{k-t}&\text{for }l\geq t.\end{cases} \tag{7}\]
This is especially helpful when we try to predict the movement of a boat that is in the middle of a turn manoeuvre and the respective trajectory resembles a curve. Furthermore, this allows us to detect false pose predictions in the case that the pose prediction in the next time step differs too much from the previous path. E.g., in the case of nearly symmetric boats, we experienced the appearance of \(180^{\circ}\) mispredictions, which can now be easily excluded.
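A compact sketch of this aggregation is given below, assuming the single-frame distributions are available as arrays over the same SO(3) grid. The final clipping and renormalisation step is an addition made here to keep the output a valid distribution and is not part of equations 5-7.

```python
import numpy as np

def aggregate_distribution(history, t=1):
    """Temporal aggregation of per-frame pose distributions (equations 5-7).

    history : list of k+1 arrays P_l of shape (N,), the per-frame probability
              of each of the N grid rotations (single-frame network output).
    t       : first index receiving non-zero weight, with 0 < t < k.
    Returns the approximated posterior P(R_{k+1} | Z_k) on the same grid.
    """
    k = len(history) - 1
    diffs = [history[l + 1] - history[l] for l in range(k)]       # pose transitions
    weights = np.zeros(k)
    weights[t:] = 1.0 / (k - t)                                   # equation (7)
    delta_pose = sum(w * d for w, d in zip(weights, diffs)) / k   # equation (6)
    posterior = history[-1] + delta_pose                          # equation (5)
    # clipping and renormalisation (added here) keep the result a valid distribution
    posterior = np.clip(posterior, 0.0, None)
    return posterior / posterior.sum()
```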
## VI Experiments
First, we conduct experiments on the single-image Pascal3D+ set to illustrate the performance and expressiveness of HyperPosePDF. Similar to [2], we choose a pretrained ResNet-101 backbone for our vision module. Then, we train the model to predict the weights of a one-layer network with a width of 256. Using the Adam optimizer, we evaluate our model after 150k iterations using a batch size of 16. A learning rate of \(10^{-5}\) is used for the first 1000 iterations, and then a cosine decay is applied. We choose a time horizon window of \(k=3\) for our experiments.
We report the performance of the category _boat_ using the two commonly used metrics _accuracy at 30\({}^{\circ}\) (Acc)_ and _median error in degrees (ME)_. Table II shows that this method is on par with SOTA methods (ImplicitPDF [7]).
Naively applying HyperPosePDF on video data yields single-point predictions (i.e. orientations) at each time step.
Fig. 6: Faster bounding box annotations by means of “Segment anything” [3]. We leverage this method to accelerate 2D bounding box labeling. A user just needs to click on the object, the corresponding bounding box will be set and saved automatically.
However, uncertainty due to unseen data yields far-off predictions, often resulting in wrong symmetric orientations. For example, compare to Figure 1.
We evaluate on SeaDronesSee3D and BOArienT, where we manually annotated the orientations. Table II shows that our method yields higher accuracy at a maximum of \(30^{\circ}\) error tolerance as well as a lower median angle error. Figure 1 shows an example sequence of SeaDronesSee3D where the single-image predictor mispredicts the orientation by 180\({}^{\circ}\) due to the slightly symmetric shape of the boat.
To test our approach in a complete pipeline, we employ a state-of-the-art multi-object tracker and apply the yaw estimator on the predicted bounding boxes. For the UAV scenario, we train on SeaDronesSee-MOT, and for the boat scenario, we take a pre-trained tracker on COCO. We report the performance of the trackers on SeaDronesSee3D and BOArienT in Table III.
Now, we apply the yaw estimator on top of the predicted bounding boxes with associated ids. Whenever a new tracklet starts, we initialize a new running mean of the probability distribution. We only measure the orientation estimation performance on objects that have successfully been detected.
Table IV shows that our method still outperforms the single-image approach since the multi-object tracker is quite robust already (only a few ID switches degrade our method to effectively become a single-image method at these time points). Remarkably, we can even improve the point prediction over the naive mode running mean method, which simply applies a running mean on the modes of the distributions. We note that this is on top of the higher expressiveness coming from our probability distributions: we may incorporate the uncertainty of heading estimations in downstream tasks, such as trajectory prediction, collision avoidance, or visualization in augmented reality applications.
Finally, we compare our heading estimation approach with a naive trajectory-based approach. For every detection in every frame, we map its box center location to 3D via a perspective projection camera model [20] and capture the trajectory in world coordinates. We predict the next trajectory point by a constant velocity model derived from the previous three time steps. We take the resulting heading to be the final prediction of this baseline. If an object is lost, we need to re-initialize the heading, which is a critical shortcoming of this approach. Furthermore, Table IV shows that the trajectory-based method fails on both scenarios due to stationary boats
and a challenging and noisy \(2\text{D}\rightarrow\text{3D}\) projection.
Fig. 7: Sample boat heading probability distribution predictions (given in radians). Ground truth values are 4.8, 0.4, 6.0, 4.0. Almost all the captured distributions are uni-modal and the best single-point estimator would yield a fairly close prediction of the orientation. The first image already provides a glance at the benefits of capturing the uncertainty. It is not clear whether the sailboat is sailing at an angle of 270\({}^{\circ}\) or slightly less.
Figure 8 shows the predicted positions and headings in BOArienT coming from our method and from this baseline via \(2\text{D}\rightarrow\text{3D}\) projection. Because some boats are stationary, the heading information for the baseline is incorrect. Furthermore, the heading information from slowly driving boats is very noisy as the underlying \(2\text{D}\longleftrightarrow 3\text{D}\) projection is error-prone. Single-image predictions are better, but the smallness of the objects makes these predictions also very noisy.
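For reference, the trajectory-based baseline reduces to a few lines. The sketch below assumes already-projected world-plane positions and an angle convention measured from the \(+y\) axis towards \(+x\) (an assumption), and returns no heading for stationary boats, which is exactly the failure mode discussed above.

```python
import numpy as np

def trajectory_heading(world_xy, min_speed=1e-3):
    """Naive trajectory-based heading baseline used for comparison.

    world_xy : array (T, 2) of projected world-plane positions of one tracked
               boat, obtained from the 2D -> 3D projection of the box centers.
    Uses a constant-velocity estimate from the last three displacements and
    returns the heading in degrees, or None when the boat is (nearly)
    stationary and the heading is therefore undefined.
    """
    pts = np.asarray(world_xy, dtype=float)
    if len(pts) < 4:
        return None
    velocity = np.diff(pts[-4:], axis=0).mean(axis=0)
    if np.linalg.norm(velocity) < min_speed:
        return None
    return float(np.degrees(np.arctan2(velocity[0], velocity[1])) % 360.0)
```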
## VII Conclusion and Discussion
In this paper, we addressed the novel problem of predicting the yaw of boats from the viewpoint of unmanned aerial vehicles (UAVs) and unmanned surface vehicles (USVs) or boats. We proposed a method based on HyperPosePDF, which models the uncertainty of predictions and yields robust orientation predictions across time in video-based scenarios. To demonstrate the utility of our method, we created two new datasets, SeaDronesSee-3D and BOArienT, manually annotated with bounding boxes and pose information, and made them publicly available. Our experimental evaluation showed that our method significantly improves performance compared to naive single-point predictions. Our proposed method has potential applications in marine robotics, including 3D scene rendering, trajectory prediction, and navigation.
|
2306.04245 | Cold CAS Ion Trap -- 22 pole trap with ring electrodes for
astrochemistry | The enhancement of a cryogenic radio frequency 22 pole trap instrument by the
addition of ring electrodes is presented in detail. The ring electrodes tightly
surround the poles and only a fraction of the applied electric potential
penetrates to the trap axis, facilitating the fine control of slow cold ions. A
precise computational model, describing the effective mechanical potential
created by the applied static and rf fields, governing the ion behaviour, is
employed to demonstrate and understand the operation of our setup. The use of
ring electrodes for improved extraction of cold stored ions is shown. Variable
trapping potentials, placed on one ring electrode, can be used to control the
evaporation of only those $\text{H}^+$ ions from the trap, whose kinetic energy
exceeds the barrier. This ring electrode trapping opens new possibilities to
study processes of minimal kinetic energy release, e. g. spin exchange. We
propose a robust modified method for the determination of temperature dependent
ion molecule reaction rates, resistant to effects caused by neutral gas
freezing and demonstrate it on the reaction of $\text{CO}^+$/$\text{CO}_2^+$
with $\text{H}_2$/$\text{D}_2$. Finally, the use of a supercontinuum laser for
quick localisation of spectroscopic bands is examined on the $\text{N}_2^+$
Meinel system. | Pavol Jusko, Miguel Jiménez-Redondo, Paola Caselli | 2023-06-07T08:36:07Z | http://arxiv.org/abs/2306.04245v1 | # Cold CAS Ion Trap - 22 pole trap with ring electrodes for astrochemistry
###### Abstract
The enhancement of a cryogenic radio frequency 22 pole trap instrument by the addition of ring electrodes is presented in detail. The ring electrodes tightly surround the poles and only a fraction of the applied electric potential penetrates to the trap axis, facilitating the fine control of slow cold ions. A precise computational model, describing the effective mechanical potential created by the applied static and rf fields, governing the ion behaviour, is employed to demonstrate and understand the operation of our setup. The use of ring electrodes for improved extraction of cold stored ions is shown. Variable trapping potentials, placed on one ring electrode, can be used to control the evaporation of only those H\({}^{+}\) ions from the trap, whose kinetic energy exceeds the barrier. This ring electrode trapping opens new possibilities to study processes of minimal kinetic energy release, e. g. spin exchange. We propose a robust modified method for the determination of temperature dependent ion molecule reaction rates, resistant to effects caused by neutral gas freezing and demonstrate it on the reaction of CO\({}^{+}\)/CO\({}_{2}\)\({}^{+}\) with H\({}_{2}\)/D\({}_{2}\). Finally, the use of a supercontinuum laser for quick localisation of spectroscopic bands is examined on the N\({}_{2}\)\({}^{+}\) Meinel system.
cryogenic ion trap; effective potential; ring electrodes; ion-molecule reaction rates; electronic spectroscopy
## 1 Introduction
The 22 pole radio frequency (rf) ion trap was pioneered by Dieter Gerlich more than 30 years ago [1] as a successor to lower order multipoles, i. e., quadrupoles and octopoles [2]. \(2n\) circular rods (poles) of diameter \(d\), easy to manufacture and position, are placed on a circle with inscribed radius \(r_{0}\). These 3 parameters (\(2n\), \(r_{0}\) and \(d\)) are in a first approximation chosen so that the rods' circular surface best approximates the curvature of the hyperbolic potential. For \(d=1\) mm and \(r_{0}=5\) mm, the magic number "22" is born: \(2n=22\). The use of non-ideal poles and geometries, greatly facilitating the physical manufacturing of the device, necessarily leads to perturbations in the ideal field. Fortunately, these perturbations are mostly not relevant for particle trapping. This fact, together with the lack of need for bulky and expensive magnets, led to the wide adoption of radio frequency traps with many different geometries, from simple linear multipoles, e. g., quadrupole [3; 4; 5], hexapole [6], octopole [7; 8], 16 pole [9], 22 pole [10; 11; 12; 13; 14; 15; 16; 17; 18], ring electrode traps (stacked ring electrodes) [19; 20], quadrupole ion traps (3 dimensional quadrupole, QIT) [21; 22; 23] to geometries where thin wire is used to approximate the desired electrode shape [24; 25].
The main advantage of the 22 pole trap geometry is provided by the high number of poles, leading to an almost field free region and sharp steep barriers on the sides of the trap approximating a square box potential. The high number of poles also absorbs some small manufacturing imperfections as demonstrated by ions still remaining trapped in a 22 pole trap exhibiting 10 potential minima instead of an ideally predicted flat minimum along the axis [26] (for symmetry breaking in multipoles see [27]). In reality, even the electrostatic potentials at the ends of the axially symmetric trap (used to reflect the ions in the axial direction) do affect the potential in the middle of the trap [3; 28], creating an off-axis potential minimum. The effect of these divergences from the ideal potentials become more and more pronounced as the ion temperature (its kinetic energy) is decreased to cryogenic temperatures (ca. 10 K or 1 meV and lower) using an active form of cooling. In this work, we exclusively focus on buffer gas cooling, i. e., the use of collisions of ions and a cold neutral gas (usually He). The neutrals are thermalised to the temperature of the inside wall of the trap, as in molecular regime, the mean free path is greater than the dimensions of the trap, and are not affected by the rf fields. The ions, on the contrary, can gain kinetic energy from the rf, which can lead to situations where further decrease of the trap body temperature, i. e., of the
neutral gas temperature, does not lead to a decrease of the ion temperature, an effect often described as parasitic heating (\(T_{\mathrm{ion}}>T_{\mathrm{gas}}\)).
The temperature of the ions in the trap can be determined experimentally using chemical probing [29], using doppler broadening [30; 31; 32], from the rovibrational band profile [33; 17], from hot bands in the electronic spectra [34], using Time-of-Flight (ToF) [35], or evaporative ion losses [36]. Numerical simulations of kinetic ion temperature were used to understand the discrepancy between the ion temperature and the trap temperature in a 22 pole trap [37] and to compare the heating effect in a 16 pole and 16 wire pole [38]. The heating effects may also be taken advantage of, e. g., a QIT where a low externally induced resonant frequency excitation is used to eject specific \(m/q\) ions based on their secular motion [39].
The potential inside a multipole can also be influenced by surrounding the multipole with a ring electrode. Due to the shielding provided by the multipole rods themselves, only a fraction of the applied voltage penetrates to the multipole axis [1; 28]. Ring electrodes have been used in guided ion beam (GIB) experiments [40; 41], as well as in an octopole trap [8]. Ring electrodes on a 22 pole trap were studied in Dieter Gerlich's group using a rough numerical model (finite difference methods) and the barrier height was calibrated using partial reflection of ions stored in a packet (pulse) on the ring electrode [42]. Furthermore, Richthofen et al. [43] used ring electrodes to trap the ions and resonantly excite specific \(m/q\) species. The addition of ring electrodes provides finer access to the trap volume, e. g., to the trap center, which is only slightly affected by the input/output electrodes, opens the possibility to form barriers inside the trap, i. e., two (or more) separate traps adjacent to each other, as well as an option to compensate small patch electric fields. Further, the fine tuning of the potential can be used to influence the ions depending on the energy, e. g., leak out of ions with excess kinetic energy. The degree of control achieved with this configuration, where ring electrodes act only by penetration, is hard to replicate with a ring electrode trap (e. g. [19]). In the latter case, a very precise determination of the electric potential at the electrode surface would be required due to the direct exposure of the ring electrodes to the trap volume.
Ion traps, as demonstrated by all the aforementioned setups, are an extraordinarily versatile tool. Their application ranges from fundamental physics through physical chemistry up to molecular biology. Of particular interest for us is their application to
laboratory astrochemistry.
Interstellar clouds are cold (\(10-100\) K) and tenuous (\(10^{2}-10^{5}\) H\({}_{2}\) molecules per cm\({}^{3}\)) and, in less dense regions, are being impinged by UV photons from the interstellar radiation field [44]. Despite these harsh conditions, interstellar clouds are filled with molecules [45] thanks to the active chemistry initiated by ion-molecule reactions [46, 47]. Measuring the rates of ion-molecule reactions is crucial for astrochemical models (e. g. [48, 49]), which are used to predict and interpret observations of interstellar molecules, important diagnostic tools for the chemical and physical composition of interstellar clouds, where stars like our Sun and planets like our Earth form. The low particle number densities and cold environments to simulate make ion traps an excellent tool for laboratory studies of these astrochemical processes.
In this paper we present a new experimental cryogenic 22 pole rf trap setup Cold CAS Ion Trap (CCIT) at The Center for Astrochemical Studies (CAS) at the Max Planck Institute for Extraterrestrial Physics, which is specifically designed to take advantage of the coupling of ring electrodes to a multipole trap, and its application to the study of astrochemical processes such as those occurring in interstellar clouds.
## 2 Experimental
### Experimental setup
The block diagram of the experimental setup can be seen in Fig. 1 panel (a). The ions follow a path from the top left to bottom right of the diagram accomplished exclusively through the use of electric fields. All the measurements proceed in a cyclic manner with periods in multiples of 1 s. Synchronisation is provided by computer pre-generated signals with a resolution of 1 \(\mu\)s.
The Gerlich type Storage Ion Source (SIS) [1] is used to produce ions. In this type of source, electron bombardment of a precursor gas continuously leaked in the source produces ions inside a cavity of stacked electrodes with an applied rf field facilitating the ion storage. The ions are accumulated in the source and are typically extracted only for tens of ms per cycle, thus the storage function effectively increases the ion yield. Moreover, the produced ions can undergo reactive collisions with neutrals present in the precursor gas, allowing us to form ions not directly created by electron bombardment, as well as non-reactive collisions leading to the internal relaxation of the ions.
Figure 1: Panel (a): Schematic view of the setup. SIS – Storage Ion Source, 1.QP – source quadrupole, B. – electrostatic bender, 22 pt – 22 pole trap, 2. QP – product quadrupole, K. - Daly Knob (conversion dynode), CEM – channel electron multiplier. Panel (b): 3D model of the 22 pole trap. Panel (c): effective mechanical potential inside the trap for a singly charged particle of mass \(4\;m/q\), \(V_{0}=50\) V, \(f_{0}=19\) MHz, \(\mathrm{SE}=1\) V, \(\mathrm{SA}=0\) V, \(\mathrm{R}_{4}=100\) V, the rest of the ring electrodes are at 0 V.
We typically produce a wide variety of ions in the SIS and use a source quadrupole (1. QP) to filter only the mass of the ion of interest. After the 1. QP the ions are refocused and bent to the trap axis in the electrostatic bender (B.).
The 22 pole trap (22 pt) (see Fig. 1 panel (b)) is made out of 1 mm stainless steel rods on an inscribed radius of 5 mm. The rods are held in sapphire rings and every other is electrically connected to either of the outputs of the rf generator (RF\({}_{1}\) or RF\({}_{2}\)) respectively at both ends. Input (SE) and output (SA) electrodes are inserted into the trap, ring electrodes R\({}_{1}\)-R\({}_{5}\) surround the trap rods. The trap is mounted on top of a RDK-101E cryocooler (Sumitomo), and an additional heating element HTR-50 (LakeShore) allows us to regulate the temperature down to 4 K. Silicon diodes DT-670C-CU (LakeShore) are used to measure the temperature. The neutral gas can be administered to the trap through two independent lines, allowing either continuous injection or a short pulse using a custom piezo-element actuated valve located inside the vacuum chamber close to the thermal shield of the trap unit. The pressure in the 22 pole trap vacuum vessel is measured by a Bayard-Alpert ionization gauge AIG17G (Arun Microelectronics) which is calibrated by a capacitance manometer CMR 375 (Pfeiffer), directly connected to the trap volume and held at room temperature. Thermal transpiration is taken into account.
The ions leaving the trap are mass selected in the product quadrupole (2. QP), comprising a TSQ 7000 hyperquad (Finnigan MAT) driven by QMH 400-5 (Pfeiffer) electronics customised for the particular capacitive load of the QP rod system, and subsequently detected in the corresponding TSQ 7000 type detector. The system consists of a conversion dynode (knob - K.) held at 15 kV producing secondary particles (cations, anions, electrons, neutrals) upon incoming ion impact. In the positive ion mode, the dynode is held at \(-15\) kV and the secondary particles are anions and electrons. When in negative ion mode, the dynode is held at \(+15\) kV and the secondary particles are positive ions. Secondary particles are detected in a channel electron multiplier (CEM, Photonis 5903 or DeTech 2312), rather than converted to photons as in a typical Daly type detector [50]. The CEM is used in counting mode and the output signal is discriminated using a model 6908 (Phillips Scientific) discriminator, or directly in the multi-channel scaler (MCS) PMS-400A (Becker & Hickl).
A standard experiment sequence consists of ion production/ion trap filling, storage with exposition (ions exposed to photons, neutral reactant, etc.), and product analysis/detection. The cryostat operates with a 1 s period and the experiment is in phase with the cryostat. Additionally, ring electrodes, 22 pole rf amplitude, and MCS can be controlled during the cycle in order to acquire particular data.
### Potentials in a trap with rings
Numerical simulations of the potential generated inside the trap are a useful tool for experimental optimization. They have been employed for this purpose by several groups (see for instance [28, 51, 52, 38, 26]), either through the use of commercial software like SIMION [53], or open source libraries. Here, we use the boundary element method (BEM), implemented through the Python library bempp-cl [54, 55], to calculate the electrostatic potentials generated by the different trap electrodes. BEM calculations avoid the need for a spatial grid in which the solution is calculated, allowing instead the evaluation of the potential at any point in space, and the use of a Python library facilitates the analysis of the computed results and the application of the calculated potential in further simulations. The solution of the Laplace equation is computed using the fast multipole method [56], which simplifies long distance interactions between discrete elements lowering the memory requirements for the simulation. The boundary conditions are imposed on a 3D mesh generated from a 3D CAD model of the 22 pole trap using the Salome platform [57].
Given a general electric potential inside the trap with rf and static components
\[\Phi=\Phi_{\textit{rf}}\cos{(\Omega t)}+\Phi_{s}\, \tag{1}\]
where \(\Omega=2\pi f_{0}\) is the rf angular frequency, the average force acting on an ion with mass \(m\) and charge \(q\) can be derived from the effective mechanical potential [1]
\[V^{*}=\frac{q^{2}E_{\textit{rf}}^{2}}{4m\Omega^{2}}+q\Phi_{s}\, \tag{2}\]
where \(\mathbf{E}_{\textit{rf}}\) is the amplitude of the rf electric field, obtained as
\[\mathbf{E}_{\textit{rf}}=-\nabla\Phi_{\textit{rf}}. \tag{3}\]
This approach requires a separation of the ion motion into a slow drift term, controlled by the effective potential, and a rapid oscillatory motion due to the rf field. To test the validity of this approximation, the adiabaticity parameter \(\eta\) can be defined as
\[\eta=\frac{2q\left|\nabla E_{\textit{rf}}\right|}{m\Omega^{2}}\;. \tag{4}\]
A value of \(\eta<0.3\) generally guarantees that the approximation holds.
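As a practical illustration of Equations (2)-(4), the following minimal sketch evaluates the effective potential and the adiabaticity parameter on a regular grid, assuming the rf and static potentials computed by the BEM solver have already been sampled into arrays; the placeholder fields, grid spacing and ion parameters below are illustrative stand-ins, not the actual trap model.

```python
import numpy as np

# Placeholder potentials standing in for the BEM-evaluated solution (in V) on a cubic grid
n = 64
phi_rf = np.random.default_rng(0).normal(size=(n, n, n))   # rf amplitude potential (illustrative)
phi_s = np.zeros((n, n, n))                                # static potential (illustrative)
h = 1e-4                                                   # grid spacing in m (illustrative)

q = 1.602176634e-19          # elementary charge, C
m = 4 * 1.66053906660e-27    # singly charged ion of mass 4 m/q, kg
omega = 2 * np.pi * 19e6     # rf angular frequency for f0 = 19 MHz

# E_rf = -grad(phi_rf), evaluated by central differences (eq. 3)
Ex, Ey, Ez = np.gradient(-phi_rf, h)
E2 = Ex**2 + Ey**2 + Ez**2

# Effective mechanical potential (eq. 2), expressed in eV
V_eff = q * E2 / (4 * m * omega**2) + phi_s

# Adiabaticity parameter (eq. 4) from the gradient of |E_rf|
gx, gy, gz = np.gradient(np.sqrt(E2), h)
eta = 2 * q * np.sqrt(gx**2 + gy**2 + gz**2) / (m * omega**2)
```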
An analytical solution exists for the case of an ideal multipole of order \(n\). The effective potential and adiabaticity parameter in plane polar coordinates \(\left(r,\varphi\right)\) then take the form
\[V^{*}=\frac{n^{2}q^{2}V_{0}^{2}}{4m\Omega^{2}r_{0}^{2}}\left(\frac{r}{r_{0}} \right)^{2n-2}+qU_{0}\left(\frac{r}{r_{0}}\right)^{n}\cos\left(n\varphi\right) \tag{5}\]
\[\eta=2n\left(n-1\right)\frac{qV_{0}}{m\Omega^{2}r_{0}^{2}}\left(\frac{r}{r_{0} }\right)^{n-2}\;, \tag{6}\]
where \(\Phi_{0}=U_{0}-V_{0}\cos\left(\Omega t\right)\) is the potential applied to the electrodes, and \(r_{0}\) is the inscribed radius.
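For orientation, these ideal-multipole expressions can be evaluated directly. The short sketch below does so for the 22 pole (\(n=11\)) with the parameters of Fig. 1(c) (\(V_{0}=50\) V, \(f_{0}=19\) MHz, \(r_{0}=5\) mm, a singly charged ion of mass \(4\;m/q\)); the choice \(U_{0}=0\) is an assumption made here so that the static angular term drops out.

```python
import numpy as np

q, amu = 1.602176634e-19, 1.66053906660e-27
n_pole, V0, U0 = 11, 50.0, 0.0           # 22 pole order, rf amplitude (V), assumed static rod voltage (V)
m, r0 = 4 * amu, 5e-3                    # ion mass (kg), inscribed radius (m)
omega = 2 * np.pi * 19e6                 # rf angular frequency

r = np.linspace(0, 0.99 * r0, 500)
# eq. (5) with U0 = 0, expressed in eV
V_star = n_pole**2 * q * V0**2 / (4 * m * omega**2 * r0**2) * (r / r0)**(2 * n_pole - 2)
# eq. (6)
eta = 2 * n_pole * (n_pole - 1) * q * V0 / (m * omega**2 * r0**2) * (r / r0)**(n_pole - 2)

i = np.searchsorted(r, 0.8 * r0)
print(f"at r = 0.8 r0:  V* = {V_star[i]*1e3:.0f} meV,  eta = {eta[i]:.2f}")
```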
The computational method described above has been used to obtain the effective potential inside the trap for different experimental configurations, such as the one depicted in panel (c) of Fig. 1. In this particular case, the rf field is generated with the 22 poles and the axial trapping of the ions is performed by the end electrode at the entrance of the trap, SE, and the fourth ring electrode R\({}_{4}\). Since ring electrodes only change the trap potential by penetration, the 100 V voltage applied to the electrode only results in a barrier of tens of meV inside the trap (see below).
A detailed look at the radial profile of the potential inside the trap is included in Fig. 2. The potential generated by just the R\({}_{4}\) electrode at different positions, as well as the effective potential and adiabaticity parameters, are shown. The analytical values from eqs. 5 and 6 are also plotted for comparison. It can be seen that the radial profile of the R\({}_{4}\) potential changes depending on the position \(z\) along the axis of the trap. At the position of the ring electrode (\(z=7.25\) mm), the potential barrier has its minimum at the axis, and gradually increases with the radius. At \(z=10.5\) mm (between the fourth and fifth ring electrodes), the shape changes and now the potential is almost constant from the axis up to \(r\sim 3\) mm, with a slight drop between 3 and 4 mm. In both cases, differences in the potential towards a rod or the gap between them are only significant for \(r>4\) mm, with both potentials quickly diverging towards 0 and \(\sim 100\) eV respectively. The potential generated by R\({}_{4}\) quickly decays away from the ring electrode, and at \(z=-5.05\) mm, where the minimum potential from Fig. 1 (c) is found, its value is \(\sim 1\) meV at the axis, decreasing towards the poles. Because of this radial shape, which is similar to the one generated by the end electrodes [28], the calculated effective potential for the complete configuration has its minimum \(\sim 3\) mm off the axis and towards the rods. The calculated effective potential matches the analytical expression from eq. 5 closer to the rods, where the effect of the multipole field surpasses that of the end and ring electrodes. A similar result is obtained for the adiabaticity parameter from eq. 6.
Figure 2: Radial profile of different potentials in the trap. In red and blue, potential generated by the R\({}_{4}\) ring electrode at different \(z\) positions along the axis of the trap, towards the center of the rod (red) and towards the middle of the gap between rods (blue); cf. top left inset. Cyan line for \(z=-5.05\) mm represents the overlap of blue and red lines. Black dashed-dotted line: effective potential at \(z=-5.05\) mm for the trap configuration described in Fig. 1. Arrow marks the minimum of the total potential at \(r=2.9\) mm. Solid orange lines: analytical expressions for the effective potential and adiabaticity parameter for the corresponding ideal multipole. Dashed orange lines: calculated adiabaticity parameter towards the center of the rod and the middle of the gap between the rods (shown only for \(r>0.8\cdot r_{0}\)).

The combination of potentials generated by ring and end electrodes should be carefully undertaken, since they can interfere with one another, as noted by Fanghanel et al. [28]. Panel (a) of Fig. 3 shows the axial profile of the electrostatic potentials generated by the R\({}_{4}\) and R\({}_{5}\) ring electrodes and the output electrode SA. As can be noted, the potentials of R\({}_{5}\) and SA clearly overlap with each other, so combinations of R\({}_{4}\) and SA should be preferred. A useful configuration for ion energy sampling (see the Discussion, section 4) is to set a potential barrier with R\({}_{4}\) and extract the ions that go over it using a negative potential on SA. The axial profile of the combined potential generated for that configuration is shown in Fig. 3 (b), normalized to the potential applied to the R\({}_{4}\) electrode. The curves for the different voltages do not overlap, which means that the influence of SA distorts the potential barrier somewhat, so that its height is no longer proportional to the applied voltage. Above 40 V, the curves start to get closer as the contribution of SA becomes relatively smaller, and eventually converge to a corresponding penetration of \(\sim 8\cdot 10^{-4}\).
### Applications of ring electrodes
The ring electrodes offer fine tuning of the ion conditions inside the ion trap. Applying the right potential, the position of the ion cloud can be influenced or ions can be directly trapped using only ring electrodes [43]. The ion cloud can also be pushed out of the trap while emptying the trap as shown in Fig. 4.

Figure 3: Panel (a): Axial profile of the electrostatic potentials generated by the two ring electrodes closest to the trap exit, R\({}_{4}\) and R\({}_{5}\), when set to 100 V, and the output electrode SA set to 1 V (see Fig. 1). Panel (b): Axial profile of the relative electrostatic potentials generated by the R\({}_{4}\) ring electrode with respect to the applied voltage, when the output electrode is set to an extraction voltage of SA = \(-1\) V.
In some experimental conditions (trap temperature \(<10\) K, high neutral number densities) it may become difficult to empty the trap using only the end electrodes SE, SA. We believe this is mainly caused by the fact that the induced emptying axial potential (dark blue in Fig. 4) decreases monotonically but very slowly, due to shielding of the SA electrode by the trap rods themselves. The emptying potential can be made an order of magnitude steeper simply by applying a potential a few tens of volts lower (in the case of cations) on every consecutive ring electrode (see divider in Fig. 1), effectively overcoming possible patch potentials on the trap rods as well as potentials present due to manufacturing imperfections.
Figure 4: Trapping and emptying of the trap equipped with ring electrodes. Top panel: axial profile of electrostatic potential produced by the electrodes. Red colour represents trapping potential, dark blue colour represents emptying potential with no ring electrodes. The use of ring electrodes and the divider (see Fig. 1) creates a uniform decreasing potential (light blue). Bottom panel: Effect of the divider on the extraction of mass \(44\;m/q\) ions from the trap held at \(10\) K.
## 3 Results
### H\({}^{+}\) evaporation from the trap
The low potential barrier created by a ring electrode can be used to examine processes in which the energy of the trapped ions plays a significant role. In this experiment we intentionally keep the barrier formed by ring electrode R\({}_{5}\) as low as possible in order to quantify the escape of H\({}^{+}\) ions in different conditions. H\({}^{+}\) ions are first trapped and cooled down using an initial He pulse and the SE, SA electrodes. Subsequently, after the He buffer gas is pumped away, the SA barrier is removed and only the R\({}_{5}\) electrode is able to reflect the ions back into the trap. Ions that are not reflected pass over the barrier and are detected (see Fig. 5(a)). This "evaporation" process is exponential over several orders of magnitude and can be represented by a single number, the escape rate \(r\). We plot the escape rate \(r\) for various number densities, neutral gases (H\({}_{2}\), He) and R\({}_{5}\) electrode potentials in Fig. 5(b). It is immediately clear that \(r\) is linear in number density, implying a binary process that can be characterised by an apparent collision rate \(k_{\mathrm{a}}\) (Fig. 5(c)).
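A minimal sketch of this two-step evaluation — an exponential fit of a single decay curve to obtain the escape rate \(r\), followed by a linear fit of \(r\) against number density to obtain the apparent collision rate \(k_{\mathrm{a}}\) — is given below; the arrays are placeholder values standing in for the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Step 1: escape rate r from a single decay curve (placeholder data)
t = np.linspace(0.0, 2.0, 30)                      # storage time, s
counts = 800.0 * np.exp(-1.3 * t) + rng.normal(0, 5, t.size)
decay = lambda t, N0, r: N0 * np.exp(-r * t)
(N0_fit, r_fit), _ = curve_fit(decay, t, counts, p0=(counts[0], 1.0))

# Step 2: apparent collision rate k_a as the slope of r versus number density
n_dens = np.array([2e10, 5e10, 1e11, 2e11])        # cm^-3, placeholder values
r_meas = np.array([1.0, 2.6, 5.2, 10.4])           # s^-1, placeholder values
k_a, intercept = np.polyfit(n_dens, r_meas, 1)

print(f"r = {r_fit:.2f} s^-1, k_a = {k_a:.1e} cm^3 s^-1")
```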
Comparing the \(k_{\mathrm{a}}=5.2\cdot 10^{-11}\,\mathrm{cm}^{3}\,\mathrm{s}^{-1}\) (at R\({}_{5}=10\) V) to the Langevin collisional rate \(k_{\mathrm{L}}=2.6\cdot 10^{-9}\,\mathrm{cm}^{3}\,\mathrm{s}^{-1}\) for H\({}^{+}\)+H\({}_{2}\) reveals that only approximately 2 % of collisions lead to evaporation (for H\({}^{+}+\)He, \(k_{\mathrm{L}}=1.2\cdot 10^{-9}\,\mathrm{cm}^{3}\,\mathrm{s}^{-1}\), implying ca. 4 % evaporation). The \(k_{\mathrm{a}}\) decreases faster for H\({}_{2}\) than for He with increasing R\({}_{5}\) voltage (Fig. 5(c)), i. e., collisions with H\({}_{2}\) are clearly different from collisions with He. On top of the apparent difference in the target mass (factor of 2), one has to also consider that the collision with He is completely non-reactive, as is the case for a collision with para-H\({}_{2}\). Since normal room temperature H\({}_{2}\) is used, the corresponding 3:1 ortho to para ratio in H\({}_{2}\) is assumed. In the case of a collision with ortho-H\({}_{2}\), the spin conversion reaction
\[\mathrm{H}^{+}+\mathrm{H}_{2}(o)\rightarrow\mathrm{H}^{+}+\mathrm{H}_{2}(p) \tag{7}\]
occurs with a predicted reaction rate \(k_{\mathrm{o-p}}\approx 1-2\cdot 10^{-10}\,\mathrm{cm}^{3}\,\mathrm{s}^{-1}\)[58, 59, 60, 61] around 10 K and releases 14.7 meV [62] of kinetic energy, which is mostly carried away by the lighter H\({}^{+}\) ion (\(\approx 10\) meV) due to conservation of momentum. This additional energy greatly exceeds the thermal energy (\(<2\) meV at 17 K) available in non-reactive collisions, which is why we initially expected H\({}_{2}\) to be responsible for a stronger escape effect; this expectation is contradicted by the observed behaviour.

To our knowledge, this is the first attempt to characterise reaction 7, in which the reactants and products do not change their \(m/q\) ratios and the reaction leads only to a minimal energy release. Our initial assumption, that the H\({}^{+}\) ion would be less affected by collisions with neutral He atoms than with H\({}_{2}\) molecules, where the reaction is possible, was found to be false (see Fig. 5(b)). We presume that, within the current experimental framework, reaction 7 can only be studied by using highly enriched para-H\({}_{2}\) for the reference measurement and attributing the reaction to the difference with respect to the measurement with normal-H\({}_{2}\) (containing 3:1 ortho:para H\({}_{2}\)), rather than by using normal-H\({}_{2}\) directly.
Figure 5: Evaporation of H\({}^{+}\) from the trap over R\({}_{5}\) electrode as a function of electrode potential, number density and neutral gas. Inset (a): evaporation (escape rate \(r\)) from the 22 pole trap as seen with the MCS. Panel (b): escape rate \(r\) as a function of neutral number density for H\({}_{2}\), He gas (gray arrow marks the point corresponding to inset (a)). Panel (c): Apparent collisional rate \(k_{\mathrm{a}}\) as a function of the potential on the R\({}_{5}\) electrode.
### Reaction rates
All ion traps, irrespective of being based on rf or magnetic fields, offer considerable trapping times, making them preferred tools for ion-neutral interaction studies, from fast near-Langevin processes, typically \(10^{-9}\) cm\({}^{3}\) s\({}^{-1}\), to slow radiative attachment, where the reaction rate coefficient is of the order of \(10^{-16}\) cm\({}^{3}\) s\({}^{-1}\) [63].
The reaction rate coefficient determination procedure, illustrated on the reaction
\[\mathrm{CO_{2}}^{+}+\mathrm{H_{2}}\rightarrow\mathrm{HCO_{2}}^{+}+\mathrm{H} \tag{8}\]
can be seen on the inset of Fig. 6. The ion trap is filled with the reactant ion \(\mathrm{CO_{2}}^{+}\), while a known pressure of neutral \(\mathrm{H_{2}}\) is maintained inside the trap. The product of the reaction, \(\mathrm{HCO_{2}}^{+}\), accumulates in the trap as the number of reactant \(\mathrm{CO_{2}}^{+}\) ions decreases (neutral H product can not be stored/detected).
This binary process can be described as an exponential loss of the primary ion \(\mathrm{CO_{2}}^{+}\). Every point in the inset of Fig. 6 was recorded 10 times (see the error bars); the "ion signal" (abscissa) is always reported in numbers per filling throughout the whole document. The reaction rate coefficient at this given temperature can simply be determined by dividing the least squares (LS) fitted decay rate (inverse time constant) by the neutral number density. Usually, this procedure is repeated over decreasing temperature, as the cryostat is being cooled down. The whole process is usually repeated in order to bin or average the measured reaction rate coefficients and arrive at a more robust final result.
We want to report a different approach, usually only used to test the order of reaction, where the reaction rate is measured in a wide range of number densities of the neutral reactant at every temperature (Fig. 6). The slope of this dependence is the reaction rate coefficient \(k\). The solution of this overdetermined system is best found using regression analysis in the form of a least squares fit. Contrary to the inset case, where the ordinate value (time) has negligible uncertainty, the pressure (or number density, etc.) can have considerable scatter, therefore, we investigated the use of LS methods taking both errors into consideration [64]. Fig. 6 compares ordinary least squares (dashed line; number "N"; ignores the pressure uncertainty) and total least squares (full line; number "T"). Although the difference in the fitted slope \(k\) is within the fit uncertainty even for the \(T=26\) K measurement data set, where the pressure uncertainty is relatively high, we recommend the use of total LS for \(k\) determination. The method will provide more consistent results, especially in the case where the pressure uncertainty is not even over the measurement range.
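As an illustration of the two regression variants compared in Fig. 6, the sketch below contrasts an ordinary least squares slope (pressure uncertainty ignored) with a total least squares slope obtained via orthogonal distance regression; the data arrays and error estimates are placeholders, not measured values.

```python
import numpy as np
from scipy import odr

n_dens = np.array([0.5, 1.0, 2.0, 4.0, 8.0]) * 1e11    # number density, cm^-3 (placeholder)
rate = np.array([0.35, 0.72, 1.38, 2.90, 5.60])        # fitted loss rate, s^-1 (placeholder)
s_n = 0.15 * n_dens                                    # assumed density uncertainty
s_rate = 0.05 * rate                                   # assumed rate uncertainty

# "N": ordinary least squares, density uncertainty ignored
k_ols = np.polyfit(n_dens, rate, 1)[0]

# "T": total least squares via orthogonal distance regression
linear = odr.Model(lambda B, x: B[0] * x + B[1])
data = odr.RealData(n_dens, rate, sx=s_n, sy=s_rate)
out = odr.ODR(data, linear, beta0=[k_ols, 0.0]).run()
k_tls, k_tls_err = out.beta[0], out.sd_beta[0]

print(f"k_N = {k_ols:.2e},  k_T = {k_tls:.2e} +/- {k_tls_err:.0e} cm^3 s^-1")
```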
The temperature dependent reaction rate coefficient of the reaction
\[\mathrm{CO}^{+}+\mathrm{H}_{2}\rightarrow\mathrm{HCO}^{+}+\mathrm{H} \tag{9}\]
and reaction 8 is shown in Fig. 7.
We confirm the increase of the reaction rate with decreasing temperature for the reaction of \(\mathrm{CO_{2}}^{+}+\mathrm{H}_{2}\) (and \(\mathrm{D}_{2}\)), while providing more data points with smaller statistical errors, which rather show a leveling-off at cryogenic temperatures below 40 K, in contrast with the results of Borodi et al. [67]. We confirm that the reaction rate of \(\mathrm{CO}^{+}+\mathrm{H}_{2}\) (and \(\mathrm{D}_{2}\)) is mostly constant from 15 to 250 K, with an essentially Langevin reaction rate coefficient (contrary to Richthofen et al. [43], who report the reaction rate coefficient with \(\mathrm{D}_{2}\) 15 % higher, though still within their reported error margin).
Figure 6: Reaction rate of \(\mathrm{CO_{2}}^{+}+\mathrm{H}_{2}\rightarrow\mathrm{HCO_{2}}^{+}+\mathrm{H}\) as a function of \(\mathrm{H}_{2}\) pressure at trap temperatures of 26 and 40 K. Inset: Number of ions in the trap as a function of storage time. Loss rate in absence of reactants is negligible at time scales of hundreds of ms. Grey arrow points to the data point, which is represented by the inset. N, T – normal LS, total LS respectively (see text).

The reported error bars always correspond to the standard deviation of the data fit. The main source of overall error is the density calibration uncertainty, which we estimate to be in the range \(20-30\) %. At the same time, the error of the relative difference between the measured rate coefficients is not affected by the density calibration uncertainty, and should be close to the statistical one. We assume the ion temperature is very close to the temperature of the neutrals for \(T>15\) K and for the trap parameters employed, with low rf amplitudes and \(\eta\ll 0.1\) (eq. 6) [31, 32].
### Action spectroscopy
As in any other ion trapping experiment, the number density of ions in our setup is considerably lower than \(10^{5}\) cm\({}^{-3}\), enabling only the use of action-type spectroscopy methods, where a "change" of the studied medium, rather than the absorbed/emitted photons, is observed. Action schemes ranging from laser induced charge transfer (LICT) [68] to pre-dissociation of weakly bound clusters, pioneered in the 1980s [69], are possible.
We report the use of bright supercontinuum laser for overview action spectroscopy of N\({}_{2}\)\({}^{+}\), where charge transfer reaction
\[\mathrm{N_{2}}^{+}+\mathrm{Ar}\rightarrow\mathrm{Ar}^{+}+\mathrm{N_{2}}\quad \Delta H=0.18\ \mathrm{eV} \tag{10}\]
proceeds predominantly for excited N\({}_{2}\)\({}^{+}\) states, because of the 0.18 eV endothermicity.
Figure 7: Reaction rate coefficient of CO\({}^{+}\) and CO\({}_{2}\)\({}^{+}\) with H\({}_{2}\) and D\({}_{2}\) as a function of temperature. Langevin rate coefficient shown on the left (dash; L.). Previous results are reported using open symbols and dashed lines (CO\({}^{+}\)[65, 66, 43] and CO\({}_{2}\)\({}^{+}\)[67, 65]).
The light source is a SuperK FIANIUM FIU-15 with a high resolution bandpass filter LLTF CONTRAST (NKT Photonics), with a spectral range of 400 - 1000 nm (channel spectral bandwidth \(<2.5\) nm FWHM). A similar laser system has been used previously in He tagging overview action spectroscopy of PAHs [70], where the many spectral features of a complex molecule guarantee a dense spectrum. This is not the case for a diatomic N\({}_{2}\)\({}^{+}\) molecule with only the lowest rotational states populated in our temperature range, implying a sparse spectrum. Fig. 8 shows the recorded spectra, with a better S/N ratio at higher trap temperature (\(T=150\) K) and the signal diminishing below \(T=90\) K. We expect this behaviour to be directly related to the temperature dependent N\({}_{2}\)\({}^{+}\) state population and the decreasing Doppler line broadening (lower kinetic velocities).
Unfortunately, neither pre-dissociation spectroscopy (He tagging [72]) nor laser induced inhibition of cluster growth (LIICG) [73] attempts were successful.
For illustration, spectral power density inside our trap was close to 1 mW nm\({}^{-1}\), whereas Schlemmer et al. [68] achieved around 30 W nm\({}^{-1}\). The primary advantage of using supercontinuum sources in overview action spectroscopy is the high scanning speed, with the ability to acquire the presented spectral features in tens of minutes.
Figure 8: Electronic spectrum of N2+ cation Meinel system [71] (gray line – band origins) recorded using VIS-LICT (laser induced charge transfer) scheme [68]. In red, measurements performed at 150 K. In magenta, at 90 K.
## 4 Discussions
We have presented a marginally improved method to investigate the temperature dependence of reaction rate coefficients in section 3.2. The fundamental source of errors and experimental difficulties remains the control and determination of the neutral gas pressure (number density) present inside the trap. The interaction of the neutral species with the trap walls is particle as well as temperature dependent, most commonly observed as a disappearance of the molecules, i. e., "freezing". On the one hand, to minimize this effect, we constructed our trap with walls just around the rods, minimising the trap volume/surface area (see Fig. 1(b)). On the other hand, the trap construction required the use of electrically insulating material (ceramics) with unknown adsorption surface properties. In any case, the application of the method shown in Fig. 6 minimises the space for errors, as any discrepancy in the linearity of the data, not only as a function of neutral number density but inherently also as a function of trap temperature and deposition time (since measurements take several hours), is immediately seen and its cause can be investigated.
The use of ring electrodes to improve the extraction efficiency from the trap has been demonstrated in Fig. 4 and has been found more and more useful as the mass of the ion increases (over \(20\;m/q\); \(\mathrm{H^{+}}\), \(\mathrm{He^{+}}\) are virtually unaffected) at temperatures \(T<50\;\mathrm{K}\) and higher neutral number densities (\(10^{12}\;\mathrm{cm^{-3}}\)). It is intrinsically difficult to investigate the extraction efficiency in our setup, since the measured quantity is always a combination of extraction, focusing effects of all the ion optics and the second quadrupole, and the dead time of the detector. Still, all these elements have to be optimised, on a case-by-case basis, for the best extraction efficiency in order to increase the amount of acquired data per unit time. Nevertheless, the improved extraction in the case of the \(\mathrm{CO_{2}}^{+}+\mathrm{H_{2}}\) experiment, as shown in Fig. 4 (bottom panel), sped up data acquisition by more than an order of magnitude and undeniably reinforced the validity of the data by decreasing the dependence of the extraction efficiency on temperature. It is also important to note that experiments where the trap temperature is not varied (e. g., virtually all the spectroscopy experiments) do not suffer from extraction efficiency issues by design, as it stays constant in the time frame of the signal acquisition.
The evaporation, as described in section 3.1, depends on the neutral number density but also on the type of the neutral particle. Potentially, this fact may be further exploited as a sensitive number density (pressure) gauge and used to calibrate the number density in the trap, without the need for a calibration reaction, e. g. the calibration of a neutral molecular beam passing through the trap [67].
On top of the evaporation, more generally, ring electrodes may also be used to analyse the ions in the trap, by configuring the field in such a way that only ions fulfilling predetermined conditions (e. g., a minimum kinetic energy) will pass the barrier. In this way the barrier height may be calibrated using ion beams of defined energy [43]. Another possible application is the recently introduced "leak-out spectroscopy" scheme, where only ions affected by light can leak through the barrier [74]. The potential barrier may also be used to detect ions undergoing a collision (see section 3.1) or to characterise the energy distribution of ions inside the trap.
## 5 Conclusion
We present a 22 pole trap instrument equipped with five ring electrodes specifically designed to study ion-molecule interactions relevant for astrochemistry. Low temperature interaction with neutrals other than He or H\({}_{2}\) (or generally at temperatures below 15 K) remains challenging due to cryo-pumping ("freeze out"). We outline a method to determine the temperature dependent reaction rate coefficient resilient to errors arising therefrom. We present a spectroscopic overview approach suitable for fast localisation of spectroscopic features based on a supercontinuum laser. We introduce a trap evaporation scheme usable to characterise processes involving spin change, releasing only minimal kinetic energy, while maintaining \(m/q\) composition of trapped particles. Further experimental and theoretical work is necessary in order to quantify the reaction rate of the process from the evaporation rate(s).
We present a detailed computational model used to obtain the effective potentials inside the trap as a function of the voltages applied to all the relevant electrodes and explain how to obtain the desired field configuration for injection, trapping, evaporation, and improved extraction from the trap. The most important conclusion is the fact that the contributions from all the trap electrodes to the effective potential have to be considered simultaneously, i. e., one cannot consider only the influence of ring electrode(s) while neglecting the end electrode(s), and that the barrier height has to be related to the real (non-zero) potential minima inside the trap. We specifically abstain from any quantitative characterisation of ion energy as a function of the potential applied to any electrode and rather focus on relative measurements only (change of H\({}^{+}\) evaporation rate). The quantitative analysis and associated model, requiring dynamical ion trajectory simulations, will be published separately.
The ability to store ions unobstructed for long times emphasizes the usefulness of cryogenic rf ion traps for the characterisation of ions through spectroscopy and studies of ion-molecule interactions using reaction kinetics.
## Acknowledgements
The completion of this work is dedicated to Prof. Dieter Gerlich, who started to build the setup with us in 2019 and continued the works until he passed away in the fall of 2020 (removed from the author list as requested by the editor on 2023-2-8). The authors gratefully acknowledge the work of the electrical and mechanical workshops and engineering departments of the Max Planck Institute for Extraterrestrial Physics. We thank Prof. Stephan Schlemmer for helpful discussions.
## Disclosure statement
The authors report there are no competing interests to declare.
## Additional information
The raw experimental data and corresponding post-processing scripts can be downloaded at [https://doi.org/10.5281/zenodo.7410333](https://doi.org/10.5281/zenodo.7410333).
## Funding
This work was supported by the Max Planck Society.
|
2308.03086 | Probing the Anisotropy and Non-Gaussianity in the Redshift Space through
the Conditional Moments of the First Derivative | Focusing on the redshift space observations with plane-parallel approximation
and relying on the rotational dependency of the general definition of excursion
sets, we introduce the so-called conditional moments of the first derivative
($cmd$) measures for the smoothed matter density field in three dimensions. We
derive the perturbative expansion of $cmd$ for the real space and redshift
space where peculiar velocity disturbs the galaxies' observed locations. Our
criteria can successfully recognize the contribution of linear Kaiser and
Finger-of-God effects. Our results demonstrate that the $cmd$ measure has
significant sensitivity for pristine constraining the redshift space distortion
parameter $\beta=f/b$ and interestingly, the associated normalized quantity in
the Gaussian linear Kaiser limit has only $\beta$ dependency. Implementation of
the synthetic anisotropic Gaussian field approves the consistency between the
theoretical and numerical results. Including the first-order contribution of
non-Gaussianity perturbatively in the $cmd$ criterion implies that the N-body
simulations for the Quijote suite in the redshift space have been mildly skewed
with a higher value for the threshold greater than zero. The non-Gaussianity
for the perpendicular direction to the line of sight in the redshift space for
smoothing scales $R\gtrsim 20$ Mpc h$^{-1}$ is almost the same as the real
space. In contrast, the non-Gaussianity along the line of sight direction in
redshift space is magnified. The Fisher forecasts indicate an almost
significant enhancement in constraining the cosmological parameters,
$\Omega_m$, $\sigma_8$, and $n_s$ when using $cmd+cr$ jointly. | M. H. Jalali Kanafi, S. M. S. Movahed | 2023-08-06T10:50:13Z | http://arxiv.org/abs/2308.03086v2 | Probing the anisotropy and non-Gaussianity in redshift space through the derivative of excursion set moments
###### Abstract
Focusing on the redshift space observations with plane-parallel approximation and relying on the rotational dependency of the general definition of excursion sets, we introduce the so-called conditional moments of the first derivative (\(cmd\)) measures for the smoothed matter density field in three dimensions. We derive the perturbative expansion of \(cmd\) for real space and redshift space, where peculiar velocity disturbs the galaxies' observed locations. Our criteria can successfully recognize the contribution of the linear Kaiser and Finger of God effects. Our results demonstrate that the \(cmd\) measure has significant sensitivity for pristine constraining of the redshift space distortion parameter \(\beta=f/b\) and, interestingly, the associated normalized quantity in the Gaussian linear Kaiser limit depends only on \(\beta\). Implementation on a synthetic anisotropic Gaussian field confirms the consistency between the theoretical and numerical results. Including the first-order contribution of non-Gaussianity perturbatively in the \(cmd\) criterion implies that the N-body simulations of the Quijote suite in redshift space have been mildly skewed, with a higher value for thresholds greater than zero. The non-Gaussianity for the direction perpendicular to the line of sight in redshift space for smoothing scales \(R\gtrsim 20\) Mpc/h is almost the same as in real space, while the non-Gaussianity along the line of sight direction in redshift space is magnified.
methods: data analysis – methods: numerical – methods: statistical – large-scale structures.
## 1 Introduction
In the era of high-precision cosmology, considerable attention should be paid to constructing robust measures for extracting information from random cosmological fields as accurately as possible, particularly from the large-scale structures of the matter distribution in the Universe (Peebles, 2020; Bernardeau et al., 2002). On the other hand, discrepancies between what we observe through various surveys and their theoretical counterparts essentially persuade researchers to include the stochastic notion (Kaiser, 1984; Bardeen et al., 1986; Bernardeau et al., 2002; Matsubara, 2003; Codis et al., 2013; Matsubara, 2020). It is supposed that on sufficiently large scales the distribution of galaxies in real space is homogeneous and isotropic, while such an assumption is no longer satisfied in redshift space, where the position of structures is plotted as a function of redshift rather than their distances. Dealing with the imposed anisotropy requires designing proper methods which are sensitive to both the existence of a preferred direction and the non-Gaussianity generated by different mechanisms.
The observed redshifts of galaxies, which mainly originate from the Hubble flow, are also disturbed by their peculiar velocities along the line of sight. In the presence of peculiar velocities, which are mostly produced by inhomogeneities known as overdensities and underdensities in the local Universe, a difference exists between galaxies' actual locations and their observed locations as determined by their redshifts. This phenomenon is known as redshift space distortions (RSD). The Finger of God (FoG) effect (Jackson, 1972; Peebles, 2020) and the linear Kaiser effect (Kaiser, 1987) are the components of RSD dominating on small and large scales, respectively. The elongation of clusters along the line of sight caused by the random motion of galaxies within virialized clusters on small scales is the so-called FoG, while the linear Kaiser effect refers to the distortion of the clustering of galaxies on large scales due to the coherent motion into the overdense regions of the density field, which squashes the shape of clusters in redshift space along the line of sight direction (Sargent & Turner, 1977; Hamilton, 1992, 1998). Although RSD makes the interpretation of observational data more challenging, it provides an opportunity to extract statistical information to constrain the associated cosmological parameters (Hamilton, 1992, 1998; Bernardeau et al., 2002; Weinberg et al., 2013).
In recent years, much research has focused on the analysis of RSD from different points of view. As an illustration: the correlation between redshift distortions and the cosmic mass distribution motivates utilizing RSD to assess the linear growth rate of density fluctuations (Hamaus et al., 2022; Panotopoulos and Rincon, 2021); trying to break the degeneracy between various modified gravity models and General Relativity in the presence of massive neutrinos in the context of the standard model of cosmology (Wright et al., 2019); the joint analysis of the Alcock-Paczynski effect and RSD to probe the cosmic expansion (Song et al., 2015); combining RSD with weak lensing and baryon acoustic oscillations to improve the observational constraints on the cosmological parameters (Eriksen and Gaztanaga, 2015); quantifying the RSD spectrum (Bharadwaj et al., 2020; Mazumdar et al., 2020, 2023); and examining the primordial non-Gaussianity via RSD (Tellarini et al., 2016).
Owing to the important role of large scale structures and the corresponding observational catalogs, there have been many attempts incorporating the geometrical and topological virtues of diverse relevant fields, such as the genus statistic (Gott et al., 1986; Hamilton et al., 1986), contour statistics including 1-, 2- and 3-Dimensional features (Ryden, 1988; Ryden et al., 1989), and Minkowski functionals, consisting of \(D+1\) scalar quantities which describe the morphology of isodensity contours of a \(D\)-dimensional field (Mecke et al., 1994; Schmalzing and Buchert, 1997) (see also (Kerscher et al., 1997; Sahni et al., 1998; Hikage et al., 2006; Einasto et al., 2011; Liu et al., 2020, 2022; Matsubara, 2003; Pogosyan et al., 2009; Gay et al., 2012; Codis et al., 2013; Matsubara and Kuriki, 2021; Matsubara et al., 2022, and references therein)).
The central assumptions in many cosmological studies are homogeneity, isotropy, and Gaussianity, the latter motivated by the central limit theorem. In real data sets, not only is a violation of Gaussianity expected, but anisotropy can also emerge for different reasons, ranging from initial conditions and phase transitions to the non-linearity of the evolution (Planck Collaboration et al., 2014, 2016; Renaux-Petel, 2015; Planck Collaboration et al., 2016; Hou et al., 2009; Springel et al., 2006; Bernardeau et al., 2002; Planck Collaboration et al., 2014; Vafaei Sadr and Movahed, 2021). Subsequently, to explore the large scale structures in redshift space as the counterpart of real space, many powerful statistical measures have been considered, concentrating on non-Gaussianity and anisotropy (Matsubara, 1996; Codis et al., 2013; Appleby et al., 2018, 2019, 2023).
Motivated by the need to examine the anisotropy, asymmetry and non-Gaussianity induced in many cosmological random fields simultaneously, we pursue the mainstream of theoretical measure construction to explore anisotropy and non-Gaussianity and to quantify the statistical features of a generic field, such as a density field with \(z\)-anisotropic behavior in the plane-parallel approximation. When anisotropy and non-Gaussianity are of interest, we advocate utilizing measures specifically designed to reveal anisotropy rather than measures such as Minkowski Functionals and contouring analysis, which are not in principle directional tools; although they can recognize anisotropy and non-Gaussianity, they generally carry the imprint of directional averaging and may give spurious results.
The novelties and advantages of our approach are as follows:
(1) We will provide a comprehensive mathematical description of the so-called conditional moments of the first derivative (\(cmd\)) of the excursion set and calculate the theoretical prediction of this statistic for a \(3\)-Dimensional isotropic and asymmetric Gaussian field as a function of threshold using a probabilistic framework. We will also take into account the first order correction due to mild non-Gaussianity in the context of a perturbative approach. Our notable measure is able to recognize preferred and generally anisotropic directions, as well as non-Gaussianity, for any generic field in 2- and 3-Dimensions in different disciplines (Li et al., 2013; Ghasemi Nezhadhaghighi et al., 2017; Klatt et al., 2022; Kapfer et al., 2010; Schroder-Turk et al., 2013).

(2) The anisotropy imprinted by the linear Kaiser effect will be examined by our introduced measure in the plane-parallel approximation. Also, incorporating the Gaussian and Lorentzian phenomenological models of the FoG effect, the correction to the linear Kaiser limit will be carried out. To make our analysis more complete, we will compare the sensitivity of this statistic to the redshift space distortions parameter with that of other well-known measures such as crossing statistics and Minkowski tensors.

(3) Using the N-body simulations provided by the Quijote suite, the capability of the \(cmd\) statistics will be verified and we will elucidate the non-Gaussianity of the matter density field in redshift space, especially along the line of sight, by the \(cmd\) up to \(\mathcal{O}(\sigma_{0}^{2})\), perturbatively.
The rest of this paper is organized as follows: Section 2 will be assigned to a brief review of the notion of RSD and the relationship between the density field in the redshift and real spaces. In Section 3, we will present a mathematical description of our new measure to capture the preferred direction in the context of a probabilistic framework. The perturbative expansion of the theoretical prediction for the \(cmd\) in the mildly non-Gaussian regime is also given in this section. Section 4 will be devoted to the characterization of RSD, including the linear Kaiser and FoG effects, using the geometrical measures. The implementation of geometrical measures on our synthetic data sets and also on the N-body simulations by the Quijote team will be presented in section 5. The last section will be focused on the summary and concluding remarks.
## 2 Redshift space distortions

In this section, for the sake of clarity, we first briefly review the relationship between a typical cosmological stochastic field in the redshifted Universe and the corresponding quantity in real space. Owing to the peculiar velocity field, the observed position of an object in redshift space (\(\mathbf{s}\)) differs from its real space position, \(\mathbf{r}\), and the relation between them is given by:
\[\mathbf{s}=\mathbf{r}+\frac{\mathbf{v}(\mathbf{r})\cdot\hat{\mathbf{n}}}{H}\hat{\mathbf{n}}, \tag{1}\]
where \(\mathbf{v}(\mathbf{r})\) represents the peculiar velocity, \(\hat{\mathbf{n}}\) is the line of sight direction and \(H\) is the Hubble parameter. Equation (1) leads to distortions in an observed cosmological stochastic field, particularly the observed density field in redshift space. To linear order, due to the so-called linear Kaiser effect, the distorted density contrast field in redshift space is related to the density contrast field in real space for a given wavenumber, \(\mathbf{k}\), by (Kaiser, 1987):
\[\tilde{\delta}^{(s)}(\mathbf{k})=(1+\beta\mu^{2})\ \tilde{\delta}^{(r)}(\mathbf{k})\, \ \ \ \ \ \beta\equiv f/b \tag{2}\]
The (\(\sim\)) symbol is reserved for quantities in Fourier space throughout this paper. The \(\diamond\) is replaced by (\(s\)) and (\(r\)) for redshift and real spaces, respectively. Also \(\mu\equiv\hat{\mathbf{k}}\cdot\hat{\mathbf{n}}\), \(f\) is the linear growth rate of the density contrast and \(b\) is the linear bias factor. Equation (2) holds for the matter and biased tracer (e.g. galaxy) density fields; for the matter case, we have \(b=1\).
Beside to the linear Kaiser effect, there are non-linear effects such as the non-linear Kaiser effect and the FoG effect leading to the distortions of the density field in redshift space with different manners. Therefore, taking into account the nonlinear effects, the Equation (2) can be written in the general form as:
\[\tilde{\delta}^{(s)}(\mathbf{k})=\tilde{O}_{s}(\mu,k\mu)\tilde{\delta}^{(r)}(\bm {k}). \tag{3}\]
in which the operator \(\tilde{O}_{s}\) can be decomposed into the product of the linear Kaiser part (\(\tilde{O}_{\rm lin}\)) and the non-linear part (\(\tilde{O}_{\rm nl}\)) as below:
\[\tilde{O}_{s}(\mu,k\mu) = \tilde{O}_{\rm lin}(\mu,k\mu)\times\tilde{O}_{\rm nl}(\mu,k\mu) \tag{4}\] \[= (1+\beta\mu^{2})\tilde{O}_{\rm nl}(\mu,k\mu).\]
Accordingly, the power spectrum in the redshift space and in the real space have the following relation:
\[P^{(s)}(\mathbf{k})=(1+\beta\mu^{2})^{2}\big{|}\tilde{O}_{\rm nl}(\mu,k\mu)\big{|} ^{2}P^{(r)}(\mathbf{k}), \tag{5}\]
Equations (3) and (5) demonstrate that the Fourier transform of the redshift space density field, as well as the power spectrum, depends on the direction of the wavenumber relative to the line of sight. In other words, the density field in redshift space is anisotropic and there is an alignment along the line of sight. In this case, we expect that a proper directional statistical measure is capable of distinguishing the line of sight direction from the perpendicular directions. It turns out that the mentioned difference should depend on the amount of anisotropy produced by redshift space distortions and also on the sensitivity of the considered directional statistics. For an isotropic density field, there is no difference between the various directions. As mentioned in the introduction, any conceivable measure used to extract reliable cosmological results should take into account such generated anisotropy, which is inevitable in the astrophysical context. For this purpose, we will rely on the probabilistic framework in the next section to construct new directional statistical measures and evaluate their capabilities for the desired applications.
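To make the construction above concrete, the following sketch applies the linear Kaiser factor of Equation (2) to a density cube on an FFT grid, in the plane-parallel approximation with the \(z\)-axis as the line of sight; the Gaussian random cube, box size and fiducial \(\beta\) are illustrative placeholders rather than the simulation data analysed later.

```python
import numpy as np

def kaiser_distort(delta_r, beta=0.48, box=1000.0):
    """Apply the linear Kaiser factor (1 + beta*mu^2) to a real-space density cube,
    with the z-axis taken as the line of sight (plane-parallel approximation)."""
    n = delta_r.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=box / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    mu2 = np.where(k2 > 0, kz**2 / np.where(k2 > 0, k2, 1.0), 0.0)
    delta_k = np.fft.fftn(delta_r)
    return np.real(np.fft.ifftn((1 + beta * mu2) * delta_k))

# Illustrative use on a Gaussian random cube standing in for the real-space field:
rng = np.random.default_rng(0)
delta_s = kaiser_distort(rng.normal(size=(64, 64, 64)))
```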
## 3 Probabilistic Framework
Suppose that \(\delta_{R}^{(r,s)}\) denotes the density field contrast in the real and redshift spaces and it is already smoothed by a smoothing window function, \(W_{R}\), in the Fourier space as:
\[\tilde{\delta}_{R}^{(r,s)}(\mathbf{k})=\tilde{W}(kR)\tilde{\delta}^{(r,s)}(\mathbf{k}), \tag{6}\]
We define a so-called set for the mentioned smoothed density field in 3-Dimensions, including the field itself and its first derivatives, as \(\mathcal{A}^{(r,s)}\equiv\Big{\{}\delta^{(r,s)},\delta_{,x}^{(r,s)},\delta_{,y}^{(r,s)},\delta_{,z}^{(r,s)}\Big{\}}\); for simplicity, we have omitted the subscript for the smoothing scale denoted by \(R\) and, hereafter, the superscript \((r,s)\) of \(\mathcal{A}\) is dropped. In addition, \(\delta_{,i}^{(r,s)}\equiv\partial_{i}\delta^{(r,s)}\) and \(i\) runs over \(x,y,z\), representing the axes of the Cartesian coordinate system.
### JPDF of Random Field
The general form of joint probability density function (JPDF) of the set \(\mathcal{A}\) including \(4\) elements for the redshift space and real space, separately can be expressed by (Matsubara, 2003):
\[\begin{split}\mathcal{P}(\mathcal{A})&=\exp\bigg{[} \sum_{j=3}^{\infty}\frac{(-1)^{j}}{j!}\bigg{(}\sum_{\mu_{1}=1}^{N=4}...\sum_{ \mu_{j}=1}^{N=4}\mathcal{K}_{\mu_{1},\mu_{2},...,\mu_{j}}^{(j)}\\ &\times\frac{\partial^{j}}{\partial\mathcal{A}_{\mu_{1}}... \partial\mathcal{A}_{\mu_{j}}}\bigg{)}\bigg{]}\mathcal{P}_{G}(\mathcal{A}) \end{split} \tag{7}\]
where \(\mathcal{K}_{\mu_{1},\mu_{2},...,\mu_{n}}^{(n)}\equiv\left\langle\mathcal{A} _{\mu_{1}}\mathcal{A}_{\mu_{2}}...\mathcal{A}_{\mu_{n}}\right\rangle_{c}\) represents cumulant and \(\mathcal{P}_{G}(\mathcal{A})\) is the multivariate Gaussian JPDF of the \(\mathcal{A}\) and it is given by:
\[\mathcal{P}_{G}(\mathcal{A})=\frac{\exp\left(-\frac{1}{2}\mathcal{A}^{T}\cdot\left[\mathcal{K}^{(2)}\right]^{-1}\cdot\mathcal{A}\right)}{(2\pi)^{2}\ \sqrt{\det\mathcal{K}^{(2)}}} \tag{8}\]
where \(\mathcal{K}^{(2)}\equiv\left\langle\mathcal{A}\otimes\mathcal{A}\right\rangle_{c}\) is the \(4\times 4\) covariance matrix of \(\mathcal{A}\), known as the second cumulant, and \(\left\langle\right\rangle_{c}\) denotes the connected moment. The matrix form of \(\mathcal{K}^{(2)}\) can be expressed as:
\[\mathcal{K}^{(2)}=\begin{pmatrix}\left\langle\delta^{(r,s)}\delta^{(r,s)}\right\rangle_{c}&\left\langle\delta^{(r,s)}\delta^{(r,s)}_{,x}\right\rangle_{c}&\left\langle\delta^{(r,s)}\delta^{(r,s)}_{,y}\right\rangle_{c}&\left\langle\delta^{(r,s)}\delta^{(r,s)}_{,z}\right\rangle_{c}\\ \left\langle\delta^{(r,s)}_{,x}\delta^{(r,s)}\right\rangle_{c}&\left\langle\delta^{(r,s)}_{,x}\delta^{(r,s)}_{,x}\right\rangle_{c}&\left\langle\delta^{(r,s)}_{,x}\delta^{(r,s)}_{,y}\right\rangle_{c}&\left\langle\delta^{(r,s)}_{,x}\delta^{(r,s)}_{,z}\right\rangle_{c}\\ \left\langle\delta^{(r,s)}_{,y}\delta^{(r,s)}\right\rangle_{c}&\left\langle\delta^{(r,s)}_{,y}\delta^{(r,s)}_{,x}\right\rangle_{c}&\left\langle\delta^{(r,s)}_{,y}\delta^{(r,s)}_{,y}\right\rangle_{c}&\left\langle\delta^{(r,s)}_{,y}\delta^{(r,s)}_{,z}\right\rangle_{c}\\ \left\langle\delta^{(r,s)}_{,z}\delta^{(r,s)}\right\rangle_{c}&\left\langle\delta^{(r,s)}_{,z}\delta^{(r,s)}_{,x}\right\rangle_{c}&\left\langle\delta^{(r,s)}_{,z}\delta^{(r,s)}_{,y}\right\rangle_{c}&\left\langle\delta^{(r,s)}_{,z}\delta^{(r,s)}_{,z}\right\rangle_{c}\end{pmatrix} \tag{9}\]
Using the notation \(i,j\in\{x,y,z\}\), the various components in the \(\mathcal{K}^{(2)}\) becomes:
\[\left\langle\delta^{(r,s)}\delta^{(r,s)}\right\rangle_{c}= \left(\sigma^{(r,s)}_{0}\right)^{2}\] \[\left\langle\delta^{(r,s)}\delta^{(r,s)}_{,i}\right\rangle_{c}= 0\] \[\left\langle\delta^{(r,s)}_{,i}\delta^{(r,s)}_{,j}\right\rangle_{c}= \left(\sigma^{(r,s)}_{1i}\right)^{2}\delta_{ij} \tag{10}\]
where \(\delta_{ij}\) is the Kronecker delta function. In the above Equation, \(\sigma^{2}_{m}\) denotes the \(m\)-th order spectral index and, in terms of the power spectrum of the density field smoothed on the scale \(R\) with a given window function, it reads as:
\[\left(\sigma^{(r,s)}_{m}(R)\right)^{2}=\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}|\bm {k}|^{2m}P^{(r,s)}(\mathbf{k})\;\tilde{W}^{2}(kR) \tag{11}\]
and the spectral index for derivative is:
\[\left(\sigma^{(r,s)}_{1i}(R)\right)^{2} \equiv\left\langle\left(\delta^{(r,s)}_{,i}\right)^{2}\right\rangle _{c}\] \[=\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}k_{i}^{2}P^{(r,s)}(\mathbf{k})\; \tilde{W}^{2}(kR) \tag{12}\]
Accordingly, we have:
\[\left(\sigma^{(r,s)}_{1}\right)^{2} =\left\langle\left(\delta^{(r,s)}_{,x}\right)^{2}+\left(\delta^{ (r,s)}_{,y}\right)^{2}+\left(\delta^{(r,s)}_{,z}\right)^{2}\right\rangle_{c} \tag{13}\] \[=\left(\sigma^{(r,s)}_{1x}\right)^{2}+\left(\sigma^{(r,s)}_{1y} \right)^{2}+\left(\sigma^{(r,s)}_{1z}\right)^{2},\]
and for the isotropic 3-Dimensional field in the real space, we obtain:
\[\sigma^{(r)}_{1x}=\sigma^{(r)}_{1y}=\sigma^{(r)}_{1z}=\frac{\sigma^{(r)}_{1}}{ \sqrt{3}} \tag{14}\]
The observable quantity of any statistical measure, \(\mathcal{F}(\mathcal{A})\), depending on the \(\mathcal{A}\), can be expressed by the following expectation value:
\[\left\langle\mathcal{F}(\mathcal{A})\right\rangle =\int d\mathcal{A}\;\mathcal{P}(\mathcal{A})\;\mathcal{F}(\mathcal{ A})\] \[=\left\langle\exp\bigg{[}\sum_{j=3}^{\infty}\frac{1}{j!}\bigg{(} \sum_{\mu_{1}}^{4}...\sum_{\mu_{j}}^{4}\mathcal{K}^{(j)}_{\mu_{1},\mu_{2},...,\mu_{j}} \tag{15}\] \[\times\frac{\partial^{j}}{\partial\mathcal{A}_{\mu_{1}}... \partial\mathcal{A}_{\mu_{j}}}\bigg{)}\bigg{]}\mathcal{F}(\mathcal{A})\right\rangle _{G}\]
where \(\left\langle X\right\rangle_{G}\equiv\int d\mathcal{A}\;\mathcal{P}_{G}( \mathcal{A})X\). Therefore, in the presence of non-Gaussianity, one can obtain the statistical expectation value of \(\mathcal{F}(A)\) in terms of Gaussian integrations based on perturbative formalism.
### The \(cmd\) statistical measures
For a 3-Dimensional density field with total volume \(V\), we define the excursion set \(Q_{v}\) as the set of all field points which satisfy the condition \(\delta^{(r,s)}\geq\vartheta\sigma^{(r,s)}_{0}\). The boundary of this excursion set, denoted by \(\partial Q_{v}\), characterizes the isodensity contours of the density field at threshold \(\vartheta\). The redshift space distortions affect the isodensity contours of cosmological density fields; more precisely, the scalar and tensorial forms of the Minkowski Functionals have been used to examine the anisotropy properties of redshift space and also the distortions parameter (Matsubara & Yokoyama, 1996; Codis et al., 2013; Appleby et al., 2018, 2019, 2023). The genus and contour crossing in various dimensions have been examined in redshift space (Matsubara, 1996; Codis et al., 2013). Interestingly, those statistics evaluated on one- and two-Dimensional slices depend on the anisotropy due to peculiar velocities in redshift space. Consequently, we argue that other criteria, similar to the well-known measures introduced for the characterization of morphology, may have the potential for anisotropy evaluation in cosmological stochastic fields. After the introduction of the so-called level crossing as a powerful tool for quantifying a typical stochastic time series by S. O. Rice (Rice, 1944, 1945), generalized forms of that measure, including the up-, down- and conditional crossing statistics, have been utilized as complementary methods for diverse applications (Bardeen et al., 1986; Bond & Efstathiou, 1987; Ryden, 1988; Ryden et al., 1989; Matsubara, 1996; Brill, 2000; Matsubara, 2003; Sadegh Movahed & Khosravi, 2011; Ghasemi Nezhadhaghighi et al., 2017). Particularly, the contour crossing statistic corresponds to the mean number of intersections between the isodensity contours of the density field at threshold \(\vartheta\), \(\partial Q_{v}\), and a straight line in a specific direction (Ryden, 1988). The crossing statistic (\(cr\)) can be expressed in a compact form as (Ryden, 1988; Matsubara, 1996, 2003; Codis et al., 2013):
\[N^{(r,s)}_{cr}(\vartheta,i)=\left\langle\delta_{D}\left(\delta^{(r,s)}-\vartheta \sigma^{(r,s)}_{0}\right)\;\left|\delta^{(r,s)}_{,i}\right|\;\right\rangle \tag{16}\]
where \(i\) represents the direction of a straight line. Using Equation (15), for a 3-Dimensional field, we have:
\[\left\langle N_{cr}^{(r,s)}(\vartheta,i)\right\rangle_{G}=\frac{\sigma_{1i}^{(r,s )}}{\pi\sigma_{0}^{(r,s)}}\ e^{-\vartheta^{2}/2}, \tag{17}\]
Inspired by the crossing statistic, we introduce the so-called _conditional moments of the first derivative (cmd)_. This modification in the weight of the first partial derivative enables us to capture the footprint of anisotropy, e.g., due to RSD. Selecting the isodensity contours at/above a given threshold, \(\vartheta\), and fixing a direction, \(i\), the \(n\)th moment of the first derivative along that direction is computed for the selected pixels. The mathematical description of the _cmd_ criterion is as follows:
\[N_{cmd}^{(r,s)}(\vartheta,i;n) \equiv \left\langle\delta_{D}\left(\delta^{(r,s)}-\vartheta\sigma_{0}^{(r,s)}\right)\left(\delta_{,i}^{(r,s)}\right)^{n}\right\rangle \tag{18}\] \[= \frac{1}{V}\int_{V}\delta_{D}\left(\delta^{(r,s)}-\vartheta\sigma_{0}^{(r,s)}\right)\ \left(\delta_{,i}^{(r,s)}\right)^{n}dV\] \[= \frac{1}{V}\int_{\partial Q_{v}}\frac{\left(\delta_{,i}^{(r,s)}\right)^{n}}{\left|\mathbf{\nabla}\delta^{(r,s)}\right|}\ dA\]
where we have utilized the volume-to-surface integral transformation (Schmalzing & Buchert, 1997). Using the probabilistic framework presented in subsection 3.1, the expected value of \(N_{cmd}^{(r,s)}\) for a 3-Dimensional Gaussian density field is obtained as follows:
\[\left\langle N_{cmd}^{(r,s)}(\vartheta,i;n)\right\rangle_{G}=2^{\frac{n-3}{2} }[1+\cos(n\pi)]\Gamma\left(\frac{n+1}{2}\right)\times\frac{\left(\sigma_{1i}^ {(r,s)}\right)^{n}}{\pi\sigma_{0}^{(r,s)}}\ e^{-\vartheta^{2}/2} \tag{19}\]
where \(\Gamma(\cdot)\) is the Gamma function. Equation (19) implies that only even values of \(n\) survive in the Gaussian regime, and all odd moments of the first derivative are identically zero. To mitigate the numerical error, the lowest power adopted for the RSD analysis in the context of the \(cmd\) measure is \(n=2\) throughout this paper.
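As a numerical sanity check of Equation (19), the sketch below estimates \(N_{cmd}\) for \(n=2\) from a synthetic Gaussian random cube by approximating the Dirac delta with a narrow threshold bin, and compares the result with the Gaussian prediction; the field, smoothing scale and bin width are illustrative choices, not those used in the later analysis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)
delta = gaussian_filter(rng.normal(size=(128, 128, 128)), sigma=4, mode="wrap")
delta -= delta.mean()
delta /= delta.std()
dz = np.gradient(delta, axis=2)                  # first derivative along the chosen direction

sigma0, sigma1z = delta.std(), dz.std()
theta, half_width = 1.0, 0.05                    # threshold and Dirac-delta bin half-width
mask = np.abs(delta - theta * sigma0) < half_width * sigma0
N_cmd = (dz[mask] ** 2).sum() / delta.size / (2 * half_width * sigma0)

# Gaussian prediction of eq. (19) for n = 2
N_cmd_gauss = sigma1z**2 / (np.sqrt(2 * np.pi) * sigma0) * np.exp(-theta**2 / 2)
print(f"measured {N_cmd:.4f} vs Gaussian prediction {N_cmd_gauss:.4f}")
```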
### Perturbative Formalism
In the previous subsection, we introduced our new measure, and in principle according to Equation (15), we can derive the perturbative form of the \(N_{cmd}^{(r,s)}\) for 3-Dimensional density field in the mildly non-Gaussian regime. To this end, we expand the Equation (15) for a typical observable quantity up to \(\mathcal{O}(\sigma_{0}^{2})\) as:
\[\left\langle\mathcal{F}\right\rangle=\langle\mathcal{F}\rangle_{G}+\frac{\sigma_{0}}{3!}\sum_{\mu_{1},\mu_{2},\mu_{3}}\mathcal{K}_{\mu_{1},\mu_{2},\mu_{3}}^{(3)}\left\langle\frac{\partial^{3}\mathcal{F}}{\partial\mathcal{A}_{\mu_{1}}\partial\mathcal{A}_{\mu_{2}}\partial\mathcal{A}_{\mu_{3}}}\right\rangle_{G}+\mathcal{O}(\sigma_{0}^{2}) \tag{20}\]
Subsequently, the weakly non-Gaussian form of \(N_{cmd}^{(r,s)}\), up to the \(\mathcal{O}(\sigma_{0}^{2})\), becomes:
\[\left\langle N_{cmd}^{(r,s)}(\vartheta,i)\right\rangle_{NG}=\frac{\left( \sigma_{1i}^{(r,s)}\right)^{2}}{\sqrt{2\pi}\ \sigma_{0}^{(r,s)}}\ e^{-\vartheta^{2}/2}\ \bigg{\{}1\ +\ \bigg{[}\frac{1}{6}\ S_{0}^{(r,s)}H_{3}(\vartheta)\ +\ S_{1i}^{(r,s)}\bigg{]}\sigma_{0}^{(r,s)}\ +\ \mathcal{O} \bigg{(}(\sigma_{0}^{(r,s)})^{2}\bigg{)}\ \bigg{\}}, \tag{21}\]
where \(H_{n}(\vartheta)\) represent the probabilists' Hermite polynomials and we have used the following definitions:
\[S_{0}^{(r,s)}\equiv\frac{\left\langle\ \left(\delta^{(r,s)}\right)^{3}\ \right\rangle_{c}}{\left(\sigma_{0}^{(r,s)}\right)^{4}}, \tag{22}\]
\[S_{1i}^{(r,s)}\equiv\ \frac{\left\langle\ \delta^{(r,s)}\left(\delta_{,i}^{(r,s)} \right)^{2}\ \right\rangle_{c}}{\sigma_{0}^{(r,s)}\ \left(\sigma_{1i}^{(r,s)}\right)^{2}}, \tag{23}\]
Having Equation (21), we can predict the \(\left\langle N_{cmd}\right\rangle\) for a given field considering the corresponding spectral indices.
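In practice, the skewness parameters defined in Equations (22) and (23) can be estimated directly from a smoothed, zero-mean density contrast cube by simple voxel averages. The helper below is an illustrative estimator (with a hypothetical function name and grid-unit derivatives), not the pipeline used for the Quijote analysis.

```python
import numpy as np

def skewness_params(delta, axis=2):
    """Estimate S_0 (eq. 22) and S_1i (eq. 23) from a zero-mean density contrast cube.
    `axis` selects the derivative direction i; derivatives are taken in grid units
    (the spacing cancels in the ratio defining S_1i)."""
    d_i = np.gradient(delta, axis=axis)
    sigma0 = delta.std()
    sigma1i_sq = (d_i**2).mean()
    S0 = (delta**3).mean() / sigma0**4
    S1i = (delta * d_i**2).mean() / (sigma0 * sigma1i_sq)   # <delta> = 0, so this is the connected moment
    return S0, S1i
```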
## 4 Implementation on the Redshift space
In this section, we consider the linear Kaiser and FoG effects as the sources of anisotropy in the density field and evaluate the imprint of these effects on the measures introduced in the previous section. Throughout this paper, we use the plane-parallel approximation and, without loss of generality, consider the \(z\)-axis of the Cartesian coordinate system as the line of sight direction. In this approximation, there is no statistical difference between the directions perpendicular to \(\hat{z}\) (e.g. \(\hat{x}\) and \(\hat{y}\)), and we use the notation \(\hat{\mathcal{I}}\) to denote these directions.
### The \(cmd\) and \(cr\) measures in the Linear Kaiser limit
In the linear Kaiser limit, Equation (5) reduces to (Kaiser, 1987):
\[P^{(s)}(\mathbf{k})=(1+\beta\mu^{2})^{2}P^{(r)}(\mathbf{k}) \tag{24}\]
Using Equations (11), (12) and (24), one can obtain:
\[\left(\sigma_{0}^{(s)}\right)^{2} = C_{0}\left(\sigma_{0}^{(r)}\right)^{2},\] \[\left(\sigma_{1}^{(s)}\right)^{2} = C_{0}\left(\sigma_{1}^{(r)}\right)^{2}, \tag{25}\]
and
\[\left(\sigma_{\mathcal{I}}^{(s)}\right)^{2} = \left(\frac{C_{0}-C_{1}}{2}\right)\left(\sigma_{1}^{(r)}\right)^{ 2},\] \[\left(\sigma_{1z}^{(s)}\right)^{2} = C_{1}\left(\sigma_{1}^{(r)}\right)^{2}, \tag{26}\]
where
\[C_{n}(\beta)\equiv\frac{1}{2}\int_{-1}^{1}\mu^{2n}\;(1+\beta\mu^{2})^{2}\;d\mu\;, \tag{27}\]
Consequently, in the linear Kaiser limit, the \(cr\) and \(cmd\) statistics for a 3-Dimensional Gaussian field in redshift space for \(\hat{z}\) and \(\hat{\mathcal{I}}\) directions become:
\[N_{cr}^{(s)}(\vartheta,\hat{\mathcal{I}})=\frac{\sigma_{1}^{(r)}}{\sqrt{2} \pi\sigma_{0}^{(r)}}\;\sqrt{1-\frac{C_{1}}{C_{0}}}\;e^{-\vartheta^{2}/2}, \tag{28}\]
\[N_{cr}^{(s)}(\vartheta,\hat{z})=\frac{\sigma_{1}^{(r)}}{\pi\sigma_{0}^{(r)}} \;\sqrt{\frac{C_{1}}{C_{0}}}\;e^{-\vartheta^{2}/2}, \tag{29}\]
Also
\[N_{cmd}^{(s)}(\vartheta,\hat{\mathcal{I}})=\frac{\left(\sigma_{1}^{(r)} \right)^{2}}{\sqrt{2\pi}\;\sigma_{0}^{(r)}}\;\frac{C_{0}-C_{1}}{2\;\sqrt{C_{ 0}}}\;e^{-\vartheta^{2}/2}, \tag{30}\]
\[N_{cmd}^{(s)}(\vartheta,\hat{z})=\frac{\left(\sigma_{1}^{(r)}\right)^{2}}{ \sqrt{2\pi}\;\sigma_{0}^{(r)}}\;\frac{C_{1}}{\sqrt{C_{0}}}\;e^{-\vartheta^{2} /2}, \tag{31}\]
Note that the r.h.s. of the above equations has been expressed in terms of the real space spectral indices. For further analysis, we define the following normalized quantity for direction \(i\):
\[n_{\diamond}^{(r,s)}(\vartheta,i)\equiv\] \[\frac{N_{\diamond}^{(r,s)}(\vartheta,i)}{\int_{-\infty}^{\infty} d\vartheta\left[N_{\diamond}^{(r,s)}(\vartheta,i)+N_{\diamond}^{(r,s)}( \vartheta,j)+N_{\diamond}^{(r,s)}(\vartheta,k)\right]}, \tag{32}\]
where \(\{i,j,k\}\in[\hat{x},\hat{y},\hat{z}]\) and \(i\neq j\neq k\). The \(\diamond\) is replaced by \(cr\) and \(cmd\). Interestingly, the isotropic Gaussian limit of Equation (32) reduces to:
\[n_{cr}^{(r)}(\vartheta)=n_{cmd}^{(r)}(\vartheta)=\frac{1}{3\sqrt{2\pi}}\;e^ {-\vartheta^{2}/2} \tag{33}\]
Equation (33) reveals that, in the isotropic Gaussian limit, the normalized quantities are independent of the spectral indices and therefore of the properties of the power spectrum. For a given field, any departure from Equation (33) can be considered as a signature of anisotropy and/or non-Gaussianity. For the redshift space, the normalized quantities can be derived as:
\[n_{cr}^{(s)}(\vartheta,\hat{\mathcal{I}})=\frac{1}{\sqrt{2\pi}}\;\frac{1}{2+ \sqrt{\frac{2C_{1}}{C_{0}-C_{1}}}}\;e^{-\vartheta^{2}/2}, \tag{34}\]
\[n_{cr}^{(s)}(\vartheta,\hat{z})=\frac{1}{\sqrt{2\pi}}\;\frac{1}{1+\sqrt{2}\sqrt{\frac{C_{0}}{C_{1}}-1}}\;e^{-\vartheta^{2}/2}, \tag{35}\]
\[n_{cmd}^{(s)}(\vartheta,\hat{\mathcal{I}})=\frac{1}{2\sqrt{2\pi}}\;\left(1- \frac{C_{1}}{C_{0}}\right)\;e^{-\vartheta^{2}/2}, \tag{36}\]
\[n_{cmd}^{(s)}(\vartheta,\hat{z})=\frac{1}{\sqrt{2\pi}}\;\left(\frac{C_{1}}{C _{0}}\right)\;e^{-\vartheta^{2}/2}, \tag{37}\]
These have no explicit dependence on the spectral indices. Thus, in the Gaussian limit, the normalized quantities depend only on the threshold, \(\vartheta\), and on the redshift space parameter, \(\beta\), through \(C_{0}\) and \(C_{1}\). From Equation (27), we find:
\[C_{0}(\beta)=1+\frac{2\beta}{3}+\frac{\beta^{2}}{5}\;,\quad C_{1}(\beta)=\frac{ 1}{3}+\frac{2\beta}{5}+\frac{\beta^{2}}{7} \tag{38}\]
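A short numerical sketch of Equations (33)-(38) is given below; it only uses quantities defined above, with \(\beta=0.48\) chosen to match the fiducial value adopted later in the text:

```python
import numpy as np

def C0(beta):
    return 1.0 + 2.0 * beta / 3.0 + beta**2 / 5.0        # Eq. (38)

def C1(beta):
    return 1.0 / 3.0 + 2.0 * beta / 5.0 + beta**2 / 7.0  # Eq. (38)

def n_cmd(theta, beta, along_los):
    """Normalized cmd measure in the Gaussian, linear Kaiser limit, Eqs. (36)-(37)."""
    ratio = C1(beta) / C0(beta)
    amplitude = ratio if along_los else 0.5 * (1.0 - ratio)
    return amplitude * np.exp(-theta**2 / 2.0) / np.sqrt(2.0 * np.pi)

beta = 0.48
print(n_cmd(0.0, beta, along_los=True))                  # line-of-sight direction
print(n_cmd(0.0, beta, along_los=False))                 # perpendicular directions
print(n_cmd(0.0, 0.0, True), 1.0 / (3.0 * np.sqrt(2.0 * np.pi)))  # isotropic limit, Eq. (33)
```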
In the limit \(\beta\to 0\), Equations (34)-(37) reduce to the isotropic limit presented in Equation (33). In this limit, the normalized \(cr\) and \(cmd\) measures are identical. Therefore, for \(\beta\neq 0\), the amount of deviation from the isotropic limit can be considered as a signature for determining the sensitivity of the \(cr\) and \(cmd\) statistics to RSD. In Fig. 1, we plot the analytical predictions of the normalized \(cr\) and \(cmd\) quantities as a function of threshold, \(\vartheta\), for a typical anisotropic Gaussian matter field in redshift space in the presence of the linear Kaiser effect, adopting \(\beta=0.48\) as a fiducial value. The black solid line illustrates the normalized \(cr\) and \(cmd\) in the isotropic limit (Equation (33)). The green dashed line corresponds to \(n_{cr}\) for the line of sight direction, while the purple dashed-dotted line is for a direction perpendicular to the line of sight. The linear Kaiser effect squeezes the isodensity contours along the line of sight. As a result, according to the definition of the \(cr\) statistics, we expect the value of \(n_{cr}\) to be higher for the line of sight direction compared to the \(\hat{\mathcal{I}}\) directions. From the analytical form of the \(cmd\) criterion (Equations (36) and (37)), this imprint of the linear Kaiser effect is magnified, making it a more robust measure compared to the common crossing statistics. Subsequently, the difference between \(n_{cmd}(\hat{\mathcal{I}})\) (blue dotted line) and \(n_{cmd}(\hat{z})\) (red loosely dashed line) is higher than the corresponding value in the context of the \(cr\) measure for a fixed value of \(\beta\). The lower panel of Fig. 1 shows the difference \(\Delta n_{\circ}^{(s)}(\vartheta)\equiv n_{\circ}^{(s)}(\vartheta,\hat{z})-n_{\circ}^{(s)}(\vartheta,\hat{\mathcal{I}})\).
To complete our discussion regarding the capability of the \(cr\) and \(cmd\) measures to place observational constraints on the RSD parameter, we follow the approach carried out by Appleby et al. (2019) in the context of Minkowski tensors. We introduce the following quantities in terms of the \(cr\) and \(cmd\) criteria:
\[\Theta_{cr}(\vartheta) \equiv \frac{N_{cr}^{(s)}(\vartheta,\hat{\mathcal{I}})}{N_{cr}^{(s)}( \vartheta,\hat{z})},\] \[\Theta_{cmd}(\vartheta) \equiv \frac{N_{cmd}^{(s)}(\vartheta,\hat{\mathcal{I}})}{N_{cmd}^{(s )}(\vartheta,\hat{z})}. \tag{39}\]
It turns out that for the Gaussian and linear Kaiser limit, we have:
\[\Theta_{cr} = \sqrt{\frac{C_{0}(\beta)-C_{1}(\beta)}{2C_{1}(\beta)}},\] \[\Theta_{cmd} = \frac{C_{0}(\beta)-C_{1}(\beta)}{2C_{1}(\beta)}. \tag{40}\]
We use the notation \(\Theta_{\circ}\) with \(\circ\in\{cr,cmd,MT1,MT2\}\). Here \(MT1\) and \(MT2\) are associated with the type one and type two rank-2 Minkowski tensors as defined by Appleby et al. (2019). Based on the theoretical predictions of \(\Theta_{\circ}\), we can determine the accuracy with which the value of the parameter \(\beta\) can be constrained. This accuracy depends on the statistical uncertainty associated with \(\Theta_{\circ}\), which can be evaluated using the Fisher forecast approach. We rely on the posterior probability function, \(\mathcal{P}_{\circ}(\beta|\Theta_{\circ})\), given by:
\[\mathcal{P}_{\circ}(\beta|\Theta_{\circ}) = \langle\delta_{D}(\beta-\Phi_{\circ}^{-1}(\Theta_{\circ}))\rangle\] \[= \int d\Theta_{\circ}^{\prime}\mathcal{P}(\Theta_{\circ}^{\prime} )\delta_{D}(\Theta_{\circ}-\Theta_{\circ}^{\prime})\left|\mathcal{J}\right|_{ \Theta_{\circ}^{\prime}=\Phi_{\circ}(\beta)} \tag{41}\]
where \(\mathcal{J}\) is the Jacobian evaluated at \(\Theta_{\circ}^{\prime}=\Phi_{\circ}(\beta)\), and \(\Phi_{\circ}(\beta)\) is given by Equation (40). Finally, for a given confidence interval (C.L.), the error bar on \(\beta\) is given by:
\[\mathrm{C.L.}=\int_{\beta_{\mathrm{fiducial}}-\sigma_{\beta}^{(-)}}^{\beta_{ \mathrm{fiducial}}+\sigma_{\beta}^{(+)}}d\beta\ \mathcal{P}_{\circ}(\beta|\Theta_{\circ}) \tag{42}\]
or, equivalently, based on first-order error propagation, we obtain the relative error on the redshift space parameter as:
\[\sigma_{\beta}^{2}=\left(\frac{\partial\ln\Theta_{\circ}}{\partial\ln\beta} \right)^{-2}\sigma_{\circ}^{2} \tag{43}\]
where \(\sigma_{\beta}\) and \(\sigma_{\circ}\) represent the fractional uncertainties on \(\beta\) and \(\Theta_{\circ}\), respectively. In Fig. 2, we plot \(\sigma_{\beta}\) in terms of \(\sigma_{\circ}\) for the fiducial value \(\beta_{\mathrm{fiducial}}=0.48\) in the Gaussian limit. We also consider \(\sigma_{\circ}=0.01\) as the comparison baseline, shown by the black vertical solid line in this figure. Assuming a one percent relative error on the statistical measures, as already achieved by current galaxy catalogs, the \(cmd\) criterion yields higher accuracy than all the other statistics, including \(cr\) and the rank-2 Minkowski tensors.
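The error propagation of Equation (43) can be made concrete with the short sketch below, which differentiates the closed forms of Equation (40) numerically. The fiducial \(\beta\) and the one percent relative error match the values quoted above, while the implementation details (finite-difference step, log-log derivative) are our own illustrative choices:

```python
import numpy as np

def C0(beta): return 1.0 + 2.0 * beta / 3.0 + beta**2 / 5.0
def C1(beta): return 1.0 / 3.0 + 2.0 * beta / 5.0 + beta**2 / 7.0

def theta_cmd(beta):
    """Theta_cmd of Eq. (40); Theta_cr is its square root."""
    return (C0(beta) - C1(beta)) / (2.0 * C1(beta))

def sigma_beta(beta, sigma_obs, observable, eps=1e-5):
    """Fractional error on beta from Eq. (43), using a numerical log-log derivative."""
    dlnT_dlnb = (np.log(observable(beta * (1 + eps))) -
                 np.log(observable(beta * (1 - eps)))) / (2.0 * eps)
    return sigma_obs / abs(dlnT_dlnb)

beta_fid, sigma_obs = 0.48, 0.01
print("sigma_beta (cmd):", sigma_beta(beta_fid, sigma_obs, theta_cmd))
print("sigma_beta (cr): ", sigma_beta(beta_fid, sigma_obs, lambda b: np.sqrt(theta_cmd(b))))
```

Since \(\Theta_{cr}=\Theta_{cmd}^{1/2}\) in Equation (40), its logarithmic derivative is half as large, so the propagated error on \(\beta\) from \(cr\) is twice that from \(cmd\), consistent with the ordering displayed in Fig. 2.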
Figure 1: The theoretical prediction for the normalized \(cr\) and \(cmd\) measures as a function of threshold for the fiducial value \(\beta_{\mathrm{fiducial}}=0.48\) in the linear Kaiser limit. The “\(\ast\)” symbol is replaced by \(\hat{\mathcal{I}}\) and \(\hat{z}\) for directions perpendicular and parallel to the line of sight, respectively. The \(\diamond\) symbol is reserved for the \(cr\) and \(cmd\) statistics. The black solid line is for \(\beta=0\), which shows the isotropic limit. The lower panel depicts the difference \(\Delta n_{\circ}^{(s)}(\vartheta)\equiv n_{\circ}^{(s)}(\vartheta,\hat{z})-n_{\circ}^{(s)}(\vartheta,\hat{\mathcal{I}})\), demonstrating that \(\Delta n_{cmd}^{(s)}\) is higher than \(\Delta n_{cr}^{(s)}\).
Figure 2: Error propagation on \(\beta\) for the various criteria discussed in the text. Assuming a one percent relative observational error on \(\sigma_{\circ}\), the \(cmd\) measure yields a lower relative error on the redshift space parameter than the other criteria examined in this paper.
### Finger of God impact on the \(cr\) and \(cmd\) measures
Thus far, we have applied the \(cr\) and \(cmd\) statistics to the redshift space density field in the presence of the linear Kaiser effect. In this subsection, we take into account the FoG phenomenon in addition to the linear Kaiser effect as the sources of anisotropy of the density field in redshift space, and characterize their impacts on our statistical measures.
Several phenomenological models exist in the literature for the FoG effect, which describes the elongation of clusters along the line of sight on small scales. Here we consider the Gaussian (Peacock & Dodds, 1994) and Lorentzian (Percival et al., 2004; Ballinger et al., 1996) FoG models, which are respectively given by the following equations:
\[\tilde{O}_{\rm FoG}^{\rm Gauss}(k\mu,\sigma_{u})=e^{-\frac{1}{2}\sigma_{u}^{2 }k^{2}\mu^{2}} \tag{44}\]
and
\[\tilde{O}_{\rm FoG}^{\rm Lorentz}(k\mu,\sigma_{u})=\frac{1}{1+\frac{1}{2} \sigma_{u}^{2}k^{2}\mu^{2}}, \tag{45}\]
where \(\sigma_{u}\) is the one-dimensional velocity dispersion. More precisely, to incorporate the linear Kaiser and FoG effects together, the non-linear part of Equation (4) can be replaced by Equation (44) or Equation (45). It is worth noting that the spectral indices given by Equations (11) and (12) are modified by the corresponding correction of the power spectrum, which is constructed by plugging Equation (44) or Equation (45) into Equation (5).
To go further, we define \(\xi_{\circ}\equiv\Theta_{\circ}^{-1}-1\) and \(\circ\in\{cr,cmd\}\). The Gaussian limits of \(\xi_{cr}\) and \(\xi_{cmd}\) are obtained as:
\[\xi_{cr}=\frac{\sigma_{1z}^{(s)}}{\sigma_{1\mathcal{I}}^{(s)}}-1,\qquad\xi_{cmd}=\left(\frac{\sigma_{1z}^{(s)}}{\sigma_{1\mathcal{I}}^{(s)}}\right)^{2}-1, \tag{46}\]
Therefore, in this case, the \(\xi_{\circ}\) statistics depend on the following quantities: the parameter \(\beta\), the FoG model, the one-dimensional velocity dispersion, \(\sigma_{u}\), the smoothing kernel, \(\tilde{W}\), the smoothing scale, \(R\), and the real space power spectrum, \(P^{(r)}(\mathbf{k})\). The \(\xi_{\circ}\) can be computed numerically for a desired cosmological field and, assuming the Gaussian model, can be considered as a new model-dependent observational measure to constrain the associated cosmological parameters.
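A minimal numerical sketch of this computation is shown below. It assumes the standard definitions of the smoothed spectral moments, \(\sigma_{0}^{2}\propto\int d^{3}k\,P^{(s)}(k,\mu)\tilde{W}^{2}(kR)\) and \(\sigma_{1i}^{2}\propto\int d^{3}k\,k_{i}^{2}P^{(s)}(k,\mu)\tilde{W}^{2}(kR)\), together with the damped Kaiser form \(P^{(s)}=(1+\beta\mu^{2})^{2}\tilde{O}_{\rm FoG}P^{(r)}\); the toy power spectrum is a placeholder rather than the CAMB output used in the text, and constant prefactors are dropped since they cancel in the ratios of Equation (46):

```python
import numpy as np

def toy_pk(k):
    """Placeholder real-space power spectrum (arbitrary shape and normalization)."""
    return k / (1.0 + (k / 0.02)**3)

def o_fog(kmu, sigma_u, model="gauss"):
    """FoG damping factors of Eqs. (44)-(45)."""
    x = 0.5 * (sigma_u * kmu)**2
    return np.exp(-x) if model == "gauss" else 1.0 / (1.0 + x)

def xi_measures(R, beta=0.526, sigma_u=4.0, model="gauss", nk=400, nmu=201):
    k = np.logspace(-3, 0.5, nk)
    mu = np.linspace(-1.0, 1.0, nmu)
    K, MU = np.meshgrid(k, mu, indexing="ij")
    Ps = (1.0 + beta * MU**2)**2 * o_fog(K * MU, sigma_u, model) * toy_pk(K)
    integrand = K**2 * Ps * np.exp(-(K * R)**2)          # k^2 from d^3k; Gaussian kernel squared
    s1z2 = np.trapz(np.trapz((K * MU)**2 * integrand, mu, axis=1), k)
    s1I2 = np.trapz(np.trapz(0.5 * K**2 * (1.0 - MU**2) * integrand, mu, axis=1), k)
    ratio = np.sqrt(s1z2 / s1I2)                          # sigma_1z / sigma_1I
    return ratio - 1.0, ratio**2 - 1.0                    # xi_cr, xi_cmd of Eq. (46)

for R in (5.0, 10.0, 20.0, 40.0):
    print(R, xi_measures(R))
```

With these assumptions, the sign change of \(\xi_{\circ}\) with increasing \(R\) reproduces the qualitative competition between the FoG and linear Kaiser effects discussed below.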
To compare the FoG contribution with the linear Kaiser effect, we adopt a Gaussian smoothing kernel given by \(\tilde{W}(kR)=\exp(-(kR)^{2}/2)\). We also use the CAMB software (Lewis et al., 2000) with the fiducial values \(\{\,\Omega_{\Lambda}=0.69179,\,\Omega_{c}h^{2}=0.11865,\,\Omega_{b}h^{2}=0.022307,\,\Omega_{\nu}h^{2}=0.000638,\,h=0.6778,\,n_{s}=0.9672\,\}\), which are in agreement with the flat \(\Lambda\)CDM \(Planck\) cosmological parameters
Figure 3: Left panel: \(\xi_{\circ}\) as a function of \(R\) for the phenomenological Gaussian model of FoG, for \(\sigma_{u}=4\)Mpc/h and \(\sigma_{u}=5\)Mpc/h. There is a trade-off between the imprints of the FoG and linear Kaiser effects for different smoothing scales, due to their opposite behaviors at small and large scales, respectively. At the so-called \(R_{*}\), whose value depends on the cosmological parameters, the directional dependency of \(cr\) and \(cmd\) is negligible. The lower part of the left panel illustrates the difference of \(\xi_{\circ}\) for the two velocity dispersions. Right panel: the comparison between the two phenomenological models for FoG, namely the Lorentzian and the Gaussian models, in the context of the \(cr\) and \(cmd\) statistical measures. The corresponding lower panel depicts the difference between \(\xi_{\circ}\) for the Gaussian and the Lorentzian cases.
to compute the matter power spectrum (Planck Collaboration et al., 2020, 2020). Adopting the linear bias \(b=1\), we obtain \(\beta=0.526\) for these cosmological parameters.
In the left panel of Fig. 3, we illustrate \(\xi_{\circ}\) as a function of the smoothing scale when the phenomenological Gaussian model is adopted for the FoG effect. The scale dependency of \(\xi_{\circ}\) is clearly due to the FoG and, interestingly, we find that at a smoothing scale denoted by \(R_{\star}\), the \(\xi_{\circ}\) crosses zero. This means that at such a scale the directional dependency of the \(cr\) and \(cmd\) measures is diminished, due to the competition between the linear Kaiser and FoG effects, which act in opposite ways. In other words, the FoG and the linear Kaiser effects lead to the stretching and squeezing of the iso-density contours along the line of sight, respectively. On a specific scale (\(R_{\star}\)), the linear Kaiser effect and the FoG effect cancel each other out, and \(\xi_{cr}\) and \(\xi_{cmd}\) reach zero. On scales smaller than \(R_{\star}\), the FoG effect is dominant, and both \(\xi_{cr}\) and \(\xi_{cmd}\) have negative values, while for scales larger than \(R_{\star}\), the linear Kaiser effect becomes significant and both \(\xi_{cr}\) and \(\xi_{cmd}\) take positive values. In addition, by increasing the velocity dispersion, the range over which the FoG dominates grows, in agreement with the analytical modeling of FoG. In the lower part of the left panel, we plot \(\Delta\xi_{\circ}\equiv\xi_{\circ}(\sigma_{u}=5{\rm{Mpc}}/{\rm{h}})-\xi_{\circ}(\sigma_{u}=4{\rm{Mpc}}/{\rm{h}})\), and our results confirm that \(\xi_{cmd}\) is higher than \(\xi_{cr}\) for the two fixed values of \(\sigma_{u}\).
In the right panel of Fig. 3, the \(\xi_{\circ}\) for the Gaussian and Lorentzian models of FoG are compared. Larger smoothing scales diminish the contribution of higher \(k\), so that the two models converge to each other. We point out that the \(cr\) measure is less sensitive than the \(cmd\) criterion to non-linearity in redshift space. Therefore, the capability of \(cmd\) in distinguishing different FoG models is higher than that of the \(cr\) statistics. To quantify this, we also compute \(\Delta\xi_{\circ}\equiv\xi_{\circ}(\mathrm{Lorentzian\ FoG})-\xi_{\circ}(\mathrm{Gaussian\ FoG})\); the bottom part of the right panel shows that, for small smoothing scales, \(\xi_{cmd}\) has a higher dependency on the FoG model than \(\xi_{cr}\).
## 5 Application on Mock data
In this section, we numerically extract the \(cr\) and \(cmd\) statistical measures for simulated anisotropic density fields and compare our results with the theoretical predictions obtained in the previous section. The two following approaches are considered: first, we simulate a Gaussian random field according to the computed matter power spectrum consistent with the flat \(\Lambda\)CDM model; secondly, we rely on the N-body simulations known as the Quijote simulations (Villaescusa-Navarro et al., 2020).
### Gaussian synthetic field
We consider the linear Kaiser effect as a source of anisotropy and therefore generate an anisotropic Gaussian field. To this end, using the linear matter power spectrum determined by CAMB, we generate an isotropic Gaussian density field, \(\delta^{(r)}\), sampled on a cubical lattice with total volume \(V=(1{\rm{Gpc}}/{\rm{h}})^{3}\), consisting of \(N_{pix}=512^{3}\) pixels. Applying the Fourier transform to the simulated isotropic density field, we construct an anisotropic field according to the following transformation:
\[\tilde{\delta}^{(s)}(\mathbf{k})=\left(1+\beta\frac{k_{z}^{2}}{k^{2}}\right) \tilde{\delta}^{(r)}(\mathbf{k}), \tag{47}\]
which yields the redshift space density field in Fourier space. We then smooth \(\tilde{\delta}^{(s)}\) with a Gaussian kernel of scale \(R=20\) Mpc/h. We generate \(N_{sim}=100\) realizations of isotropic and anisotropic Gaussian fields, and then apply the aforementioned numerical methods to the simulated density fields to obtain the \(cr\) and \(cmd\) measures as a function of threshold. For each realization, we extract these statistics in the threshold range \(\vartheta\in[-4.0,4.0]\), and then take the ensemble average. Increasing the number of realizations had no significant effect on our ensemble average. Fig. 4 presents the \(cr\) and \(cmd\) as a function of \(\vartheta\) for directions perpendicular and parallel to the line of sight in redshift space and for real
Figure 4: Upper panel: The crossing statistics as a function of \(\vartheta\) for theoretical predictions (solid lines) and corresponding numerical reconstructions (symbols). Lower panel: \(cmd\) measure versus threshold. The solid lines correspond to theoretical predictions, while the symbols indicate the results given by numerical simulations. Here we took \(R=20\)Mpc/h.
space. The upper panel corresponds to \(cr\), while the lower panel shows the \(cmd\). The solid lines are associated with theoretical predictions and the symbols with the corresponding numerical results. The error bars represent the \(1\sigma\) level of confidence, demonstrating the good consistency between the numerical and theoretical results. However, due to the presence of the first derivative of the smoothed field in the \(cmd\) measure, we expect somewhat larger error bars compared to \(cr\).
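To make the pipeline of this subsection concrete, a reduced-size sketch is given below: it generates an isotropic Gaussian field from a placeholder spectrum, imposes the linear Kaiser anisotropy of Equation (47), smooths with a Gaussian kernel, and estimates \(N_{cmd}(\vartheta,\hat{z};2)\) by binning in threshold. The grid size, spectrum and normalization are illustrative only and much smaller than the \(512^{3}\) production runs described above:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, beta, R = 128, 1000.0, 0.48, 20.0            # grid, box [Mpc/h], RSD parameter, smoothing

k1 = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = 1.0                                   # avoid division by zero at k = 0

pk = np.sqrt(k2) / (1.0 + (np.sqrt(k2) / 0.02)**3)  # placeholder P(k), arbitrary normalization
delta_k = np.fft.fftn(rng.normal(size=(N, N, N))) * np.sqrt(pk)
delta_k[0, 0, 0] = 0.0                              # enforce zero mean
delta_k *= (1.0 + beta * kz**2 / k2)                # Eq. (47): linear Kaiser anisotropy
delta_k *= np.exp(-0.5 * k2 * R**2)                 # Gaussian smoothing kernel W(kR)

delta = np.fft.ifftn(delta_k).real
grad_z = np.fft.ifftn(1j * kz * delta_k).real       # d(delta)/dz via FFT
sigma0, npix = delta.std(), delta.size

bins = np.linspace(-4.0, 4.0, 33)
theta_pix, g2 = delta.ravel() / sigma0, grad_z.ravel()**2
ncmd_z = np.zeros(len(bins) - 1)
for i in range(len(bins) - 1):
    sel = (theta_pix >= bins[i]) & (theta_pix < bins[i + 1])
    ncmd_z[i] = g2[sel].sum() / npix / (sigma0 * (bins[1] - bins[0]))

# Bin just below theta = 0 versus the Gaussian prediction (sigma_1z)^2 / (sqrt(2 pi) sigma_0):
print(ncmd_z[15], grad_z.std()**2 / (np.sqrt(2.0 * np.pi) * sigma0))
```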
In the rest of this subsection, motivated by the goal of introducing a proper measure for placing observational constraints on \(\beta\), we define the following weighted summation to marginalize over the threshold bins:
\[\bar{\xi}_{\diamond}=\frac{\sum_{\vartheta_{i}=\vartheta_{\min}}^{\vartheta_{\max}}\omega_{\diamond}(\vartheta_{i})\,\xi_{\diamond}(\vartheta_{i})}{\sum_{\vartheta_{i}=\vartheta_{\min}}^{\vartheta_{\max}}\omega_{\diamond}(\vartheta_{i})}, \tag{48}\]
where the weight is defined in terms of the statistical error as \(\omega_{\diamond}(\vartheta_{i})\equiv\sigma_{\xi_{\diamond}}^{-1}(\vartheta_{i})\). In Fig. 5, we plot the \(\bar{\xi}_{\diamond}\) statistics as a function of \(\beta\), extracted from the theoretical (solid lines) and computational (symbols) approaches. The steeper slope of the \(cmd\) measure with respect to \(\beta\), compared to the \(cr\) statistics, reveals a greater robustness in discriminating different values of \(\beta\).
### N-body simulations
To examine the non-Gaussian impact on the conditional moments of the derivative, we use the three-dimensional large scale structure produced by the publicly available N-body simulations of the Quijote suite (Villaescusa-Navarro et al., 2020). Each ensemble extracted from the Quijote simulations has the following properties: \(N_{\rm particles}=512^{3}\), a box size of \(V=(1{\rm Gpc/h})^{3}\), and fiducial cosmological parameters based on flat \(\Lambda\)CDM, including \(\Omega_{m}=0.3175\), \(\Omega_{b}=0.049\), \(h=0.6711\), \(n_{s}=0.9624\) and \(\sigma_{8}=0.834\) (see Villaescusa-Navarro et al. (2020) for more details). To construct the proper density field for our numerical pipeline, we use Pylians (Villaescusa-Navarro, 2018): at redshift \(z=0\), exploiting the cloud-in-cell (CIC) routine for mass assignment, the density field contrast, \(\delta(\mathbf{r})\), is retrieved. Finally, convolving \(\delta(\mathbf{r})\) with a Gaussian window function characterized by a smoothing scale, \(R\), the matter density contrast is constructed in real space, \(\delta_{R}^{(r)}(\mathbf{r})\). To create the corresponding field in redshift space in the plane-parallel approximation, \(\delta_{R}^{(s)}(\mathbf{r})\), we again use Pylians, which in principle implements Equation (1).
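For readers without access to Pylians, the cloud-in-cell step can be illustrated with the generic NumPy sketch below; it is a stand-in for, not a reproduction of, the Pylians routine actually used, and the random positions are placeholders for the Quijote particle data:

```python
import numpy as np

def cic_density_contrast(pos, ngrid, boxsize):
    """Cloud-in-cell mass assignment: particle positions -> density contrast delta(r)."""
    cell = boxsize / ngrid
    grid = np.zeros((ngrid, ngrid, ngrid))
    x = pos / cell
    i0 = np.floor(x).astype(int)
    frac = x - i0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                # CIC weights: (1 - frac) for the home cell, frac for the neighbor cell
                w = (np.abs(1 - dx - frac[:, 0]) *
                     np.abs(1 - dy - frac[:, 1]) *
                     np.abs(1 - dz - frac[:, 2]))
                idx = (i0 + np.array([dx, dy, dz])) % ngrid
                np.add.at(grid, (idx[:, 0], idx[:, 1], idx[:, 2]), w)
    return grid / grid.mean() - 1.0

pos = np.random.default_rng(1).uniform(0.0, 1000.0, size=(100000, 3))  # placeholder particles
delta = cic_density_contrast(pos, ngrid=64, boxsize=1000.0)
print(delta.mean(), delta.std())   # mean ~ 0 by construction
```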
Fig. 6 depicts the \(N_{cmd}^{(r,s)}\) versus threshold for the Quijote simulations. Upper left panel corresponds to \(N_{cmd}^{(r,s)}\) in real and redshift spaces for \(\hat{\mathcal{I}}\in[\hat{x},\hat{y}]\) and \(\hat{z}\) directions taking \(R=40\)Mpc/h. As we expect, there are no significant deviations between various directions in real space, while in the redshift space, the \(N_{cmd}^{(s)}(\vartheta,\hat{x})=N_{cmd}^{(s)}(\vartheta,\hat{y})\neq N_{cmd }^{(s)}(\vartheta,\hat{z})\). The upper right panel indicates the \(N_{cmd}^{(r)}(\vartheta,\hat{z})\) for the Gaussian prediction (green dashed-dot line) while the red dashed line indicates the theoretical non-Gaussian prediction for \(cmd\) (Equation (21)). The filled black circle symbols correspond to the numerical analysis including their \(1\sigma\) level of confidence. The numerical result is mildly skewed and it is tilted to the higher thresholds yielding the non-Gaussian behavior. The lower left panel illustrates the \(N_{cmd}^{(s)}(\vartheta,\hat{z})\) for Gaussian model (green dashed-dot line), non-Gaussian model up to the \(\mathcal{O}(\sigma_{0}^{2})\) (red dashed line) and filled circle symbols correspond to the numerical results for \(R=40\)Mpc/h. The lower right panel shows the results for \(N_{cmd}^{(s)}(\vartheta,\hat{x})\) which is perpendicular to the line of sight direction.
The difference between the numerical computation of \(N_{cmd}^{(r,s)}\) and the corresponding theoretical Gaussian prediction is depicted in Fig. 7. The dashed lines show the deviation of the perturbative non-Gaussian theory from the Gaussian prediction, while the symbols show the same quantities computed numerically from the simulations. In the upper left panel, we depict the difference between the numerical results and the theoretical Gaussian predictions, together with the deviation of the theoretical non-Gaussian model from the Gaussian form, for real space. In the upper right panel, the green filled circles, blue triangles, and red rectangles indicate the difference between \(N_{cmd}^{(s)}(\vartheta,\hat{z})\), \(N_{cmd}^{(s)}(\vartheta,\hat{x})\) and \(N_{cmd}^{(r)}(\vartheta,\hat{z})\) computed numerically from the N-body simulations and the associated Gaussian models, respectively. For this part, we adopt a smoothing scale of \(R=40\) Mpc/h. In the lower panels, we display the same quantities as in the upper panels but for \(R=20\) Mpc/h (lower left panel) and \(R=30\) Mpc/h (lower right panel). Our results confirm that the deviation from Gaussianity perpendicular to the line of sight in redshift space is almost the same as in real space. It is worth noting that, to compute the corresponding theoretical results, we adopt the spectral indices numerically from the simulations. The discrepancy between the theory and the numerical results extracted from the N-body simulations is explained by the fact that, for smaller smoothing scales,
Figure 5: The \(\bar{\xi}_{\diamond}\) as a function of \(\beta\). The filled circle symbols correspond to the numerical results for the \(cr\), while the filled triangles represent the numerical results for the \(cmd\). The solid lines indicate the corresponding theoretical predictions. Here we took \(R=20\)Mpc/h.
\(\sigma_{0}\) takes larger values; consequently, to obtain more precise consistency, we would have to take into account higher-order terms in the perturbative expansion of Equation (20), achieving a more accurate formula for \(\left\langle N_{cmd}^{(r,s)}(\vartheta,i)\right\rangle_{NG}\) (Equation (21)).
## 6 Summary and Conclusions
The redshift space distortions caused by linear and non-linear effects lead to anisotropy in the density field in redshift space. To characterize the mentioned anisotropy, as well as non-Gaussianity, we have developed a geometrical measure which is quite sensitive to the anisotropic distribution of density fields.
In this work, inspired by the contour crossing (\(cr\)) statistic, we have introduced the so-called conditional moments of derivative (\(cmd\)) criteria, which capture the preferred direction and are sensitive to the induced anisotropy as well as the non-Gaussianity in the underlying cosmological stochastic field. Using a probabilistic framework, we have analytically calculated the theoretical expectation value of the \(cmd\) measure as a function of threshold (\(\vartheta\)) for isotropic and anisotropic Gaussian density fields in terms of the associated spectral indices. For weakly non-Gaussian fields, we have perturbatively extended our analysis up to the \(\mathcal{O}(\sigma_{0}^{2})\) contribution due to general non-Gaussianity in real and redshift spaces. In addition, to compare the \(cmd\) and \(cr\) statistics, we have carried out similar computations for the \(cr\) as well.
Assuming Gaussianity and incorporating the linear Kaiser effect as the source of anisotropy in the redshift space density field, we have compared the sensitivity of the \(cr\) and \(cmd\) statistics to the redshift space parameter (\(\beta\)). The normalized quantity \(n_{\diamond}\), which depends on direction, threshold,
Figure 6: The \(N_{cmd}^{(r,s)}[\mathrm{Mpc}/\mathrm{h}]^{-2}\) versus threshold for the Quijote simulations. Upper left panel: The expectation value of conditional moment of the first derivative in real and redshift spaces for both \(\mathcal{I}\in[\hat{x},\hat{y}]\) and \(\hat{z}\) directions adopting \(R=40\)Mpc/h. Upper right panel: The \(N_{cmd}^{(r)}(\hat{z})\) for the Gaussian prediction considering corresponding spectral indices (green dashed-dot line) while the red dashed line indicates the theoretical non-Gaussian prediction for \(cmd\) (Equation (21)). The filled black circle symbols correspond to the numerical analysis including their \(1\sigma\) level of confidence. The lower panels are the same as upper right panel just for redshift space in \(\hat{z}\) (left) and \(\hat{x}\) (right) directions.
and the \(\beta\) parameter, has been introduced, and our results demonstrate that \(n_{cmd}^{(s)}\) is more sensitive than \(n_{cr}^{(s)}\) to the anisotropy, as depicted in the lower panel of Fig. 1, particularly for intermediate thresholds. According to the error propagation approach, and assuming a one percent relative error on \(\varTheta_{\circ}\), the \(cmd\) measure places tighter constraints than the other statistics in both the Gaussian (Fig. 2) and non-Gaussian regimes.
To make our evaluation more complete, we defined \(\xi_{\circ}\) (Equation (46)) and examined its smoothing scale dependency for various values of the relevant parameters, in order to treat the influence of FoG and compare it with the linear Kaiser effect. This quantity clearly identifies the range of scales where the contribution of the FoG or the linear Kaiser effect becomes dominant (Fig. 3). For high enough values of \(R\), it also asymptotically approaches a fixed value set by \(\beta\).
The numerical implementation on synthetic data has also supported the good consistency between the numerical results and the theoretical predictions of the \(cr\) and \(cmd\) measures for the Gaussian field (Fig. 4). To make the observational constraints more robust, we also defined a weighted parameter, \(\bar{\xi}_{\diamond}\) (Equation (48)), and for various values of \(\beta\) in the simulations, we showed that \(\bar{\xi}_{cmd}\) has a stronger \(\beta\)-dependency (Fig. 5).
Finally, we have implemented our methodology on the N-body simulations made publicly available by the Quijote suite. The \(N_{cmd}^{(r,s)}\) as a function of \(\vartheta\) shows a deviation from the Gaussian theory along the line of sight in both real and redshift spaces, with the numerical results being higher (lower) than the Gaussian prediction for \(\vartheta\gtrsim 0\) (\(\vartheta\lesssim 0\)) (Fig. 6). To further quantify the non-Gaussianity in the N-body simulations provided by Quijote, we found that the amount of non-Gaussianity in the context of \(N_{cmd}\) for directions perpendicular to the line of sight in redshift space is
Figure 7: The \(N_{cmd}^{(r,s)}-\langle N_{cmd}^{(r,s)}\rangle_{G}\) with dimension \([\mathrm{Mpc}/\mathrm{h}]^{-2}\) versus threshold for the Quijote simulations. The upper left panel shows the difference between \(N_{cmd}^{(r)}(\vartheta,\hat{z})\) and the corresponding theoretical Gaussian prediction for \(R=40\) Mpc/h. The red dashed line is the theoretical prediction for the deviation from the Gaussian field up to second-order expansion, while the filled black circle symbols show the numerical deviation from the Gaussian theory. The same quantities, but for redshift space, are plotted in the upper right panel. Notice that, for the redshift space, the results for the \(\hat{x}\) direction are also given. The lower panels are the same as the upper panels but with \(R=20\) Mpc/h (lower left) and \(R=30\) Mpc/h (lower right).
almost the same as that along the line of sight in real space (Fig. 7). This means that, to mitigate the non-Gaussianity produced by RSD and to examine non-Gaussianity due to other mechanisms such as primordial ones, we should carry out the analysis in the plane perpendicular to the line of sight in redshift space. The peculiar velocity magnifies the non-Gaussianity along the line of sight in redshift space, which is well captured by the \(cmd\) measure.
Going further, we suggest the following tasks as complementary subjects under the banner of excursion sets and RSD, which are left for future study. Although the plane-parallel approximation provides a reliable way of accounting for the peculiar velocity along the line of sight in the observed distribution of galaxies, spherical redshift distortions would be interesting to pursue for a more accurate evaluation. We also focused on the matter density field; it would be useful to consider galaxy catalogs and other real data sets instead, in which case the \(cmd\) measure opens new room to evaluate the bias factor. In addition, various models of primordial non-Gaussianity can be evaluated.
Generally, the existence of a preferred direction in cosmology, and on various scales, has remained a topic under debate; to this end, our measures can provide a pristine framework. A scaling-window analysis and a suitable modification of the \(cmd\) criteria would, hopefully, make it even more capable, by scanning over the underlying field to capture the scale and location dependency of the directional behavior (Li et al., 2013; Ghasemi Nezhadhaghighi et al., 2017; Klatt et al., 2022; Kapfer et al., 2010; Schroder-Turk et al., 2013b).
The authors are very grateful to Ali Haghighatgoo for his extremely useful comments on different parts of this paper. We also thank Ravi K. Sheth for constructive discussions. SMSM appreciates the hospitality of the HECAP section of ICTP, where a part of this research was completed. We also thank the Quijote team for sharing its simulated data sets and providing extensive instruction on how to utilize the data.
|
2303.17381 | Exact dynamics of two holes in two-leg antiferromagnetic ladders | We study the motion of holes in a mixed-dimensional setup of an
antiferromagnetic ladder, featuring nearest neighbor hopping $t$ along the
ladders and Ising-type spin interactions along, $J_\parallel$, and across,
$J_\perp$, the ladder. We determine exact solutions for the low-energy one- and
two-hole eigenstates. The presence of the trans-leg spin coupling, $J_\perp$,
leads to a linear confining potential between the holes. As a result, holes on
separate legs feature a super-linear binding energy scaling as $(J_\perp /
t)^{2/3}$ in the strongly correlated regime of $J_\perp,J_\parallel \leq t$.
This behavior is linked to an emergent length scale $\lambda \propto
(t/J_\perp)^{1/3}$, stemming from the linear confining potential, and which
describes how the size of the two-hole molecular state diverges for
$J_\perp,J_\parallel \ll t$. On the contrary, holes on the same leg unbind at
sufficiently low spin couplings. This is a consequence of the altered
short-range boundary condition for holes on the same leg, yielding an effective
Pauli repulsion between them, limiting their kinetic energy and making binding
unfavorable. Finally, we determine the exact nonequilibrium quench dynamics
following the sudden immersion of initially localized nearest neighbor holes.
The dynamics is characterized by a crossover from an initial ballistic quantum
walk to an aperiodic oscillatory motion around a finite average distance
between the holes due to the confining potential between them. In the strongly
correlated regime of low spin couplings, $J_\perp, J_\parallel \leq t$, we find
this asymptotic distance to diverge as $t / J_\perp$, showing a much stronger
scaling than the eigenstates. The predicted results should be amenable to
state-of-the-art quantum simulation experiments using currently implemented
experimental techniques. | K. Knakkergaard Nielsen | 2023-03-30T13:48:01Z | http://arxiv.org/abs/2303.17381v2 | # Exact dynamics of two holes in two-leg antiferromagnetic ladders
###### Abstract
We study the motion of holes in a mixed-dimensional setup of an antiferromagnetic ladder, featuring nearest neighbor hopping \(t\) along the ladders and Ising-type spin interactions along, \(J_{\parallel}\), and across, \(J_{\perp}\), the ladder. We determine exact solutions for the low-energy one- and two-hole eigenstates. The presence of the trans-leg spin coupling, \(J_{\perp}\), leads to a linear confining potential between the holes. As a result, holes on separate legs feature a super-linear binding energy scaling as \((J_{\perp}/t)^{2/3}\) in the strongly correlated regime of \(J_{\perp},J_{\parallel}\leq t\). This behavior is linked to an emergent length scale \(\lambda\,\propto\,(t/J_{\perp})^{1/3}\), stemming from the linear confining potential, and which describes how the size of the two-hole molecular state diverges for \(J_{\perp},J_{\parallel}\ll t\). On the contrary, holes on the same leg unbind at sufficiently low spin couplings. This is a consequence of the altered short-range boundary condition for holes on the same leg, yielding an effective Pauli repulsion between them, limiting their kinetic energy and making binding unfavorable. Finally, we determine the exact nonequilibrium quench dynamics following the sudden immersion of initially localized nearest neighbor holes. The dynamics is characterized by a crossover from an initial ballistic quantum walk to an aperiodic oscillatory motion around a finite average distance between the holes due to the confining potential between them. In the strongly correlated regime of low spin couplings, \(J_{\perp},J_{\parallel}\leq t\), we find this asymptotic distance to diverge as \(t/J_{\perp}\), showing a much stronger scaling than the eigenstates. The predicted results should be amenable to state-of-the-art quantum simulation experiments using currently implemented experimental techniques.
## I Introduction
Quantum simulation experiments have matured to the level, where they push our understanding of many-body quantum dynamics and inspire new approximate theoretical tools [1; 2; 3; 4; 5] that allow us to explore the complex spatial structures arising in e.g. Fermi-Hubbard systems [6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22]. A major driver for this line of research is to better understand the microscopic origins of high-temperature superconductivity [23], which basic phenomenology may be explained by the interaction and ensuing pairing of dopants in Fermi-Hubbard systems [24; 25; 26]. Recent experiments [27] have for the first time successfully demonstrated that cold-atom simulators can achieve and probe the formation of such pairs. Whereas these experiments were still limited to rather small system sizes, they have shown a great promise in how we can understand these mechanisms from the bottom-up perspective, and the approximate theoretical description of this situation [28] suggests that the system supports a strong binding of the dopants, in contrast to the usual scenario in two-dimensional square lattice geometries [29; 30; 31; 32; 33; 34; 35].
Inspired by this development, we analyze a situation, in which we may gain _exact_ results for the binding and nonequilibrium dynamics of dopants in a mixed-dimensional Fermi-Hubbard system [Figs. 1(a)-1(b)]. The main theoretical difficulty in previous studies [28] has been to fully describe isotropic spin couplings, coming with the complication of an underlying order-disorder phase transition as the trans-leg spin coupling is increased [36; 37; 38; 39; 40; 22]. However, by restricting the spin interactions to the Ising type, the underlying spin lattice always supports a perfectly Neel-ordered ground state. Based on this simplification, we find that we can describe the low-energy single- and two-hole eigenstates exactly in this case, whether they be on the same or separate legs [Figs. 1(a)-1(b)]. Furthermore, using the precise insights into the two-hole eigenstates, we calculate the exact quench dynamics following the sudden creation of two holes as nearest neighbors. Here, Figs. 1(c)-1(d) show the result of holes on separate legs. We find that the dynamics can be divided into two characteristic regimes. First, the holes perform independent quantum walks, meaning that they blow apart ballistically. Second, as the holes diverge from each other, a confining string of overturned spins forms between them. Eventually, the holes are slowed down by this confinement and aperiodic oscillations in the strings, and thereby in the inter-hole distance, take place around a well-defined long-time average.
A major challenge in previous experiments with doped Fermi-Hubbard systems [21; 14] has been to reach the strongly correlated regime, which is interesting both from the perspective of the physics of the cuprate materials supporting high-temperature superconductivity [23], and for understanding many-body phenomena outside the realm of perturbation theory. We note that this system is a natural experimental candidate for that, because the effective coupling strength between the holes is \(4t/J_{\perp}\). Consequently, the crossover timescale from the quantum walk to the string oscillation behavior in Fig. 1(d) changes from a perturbative \((t/J_{\perp})^{2}\) scaling to a strongly correlated scaling of \(t/J_{\perp}\) already for \(J_{\perp}\lesssim 4t\). Importantly, the crossover time is still quite moderate in terms of hopping times \(1/t\), and remains below \(3/t\) for \(J_{\perp}>t\), which should make it possible to experimentally observe the departure from the quantum walk. While the mixed-dimensional property of this model has already been achieved experimentally [27], the ability to tune the spin interactions to the Ising limit can be facilitated by Rydberg-dressed atoms [41; 42; 43; 44; 45; 46], polar molecules [47], and trapped ions [48]. This setup is, therefore, within reach for modern quantum simulation experiments.
The paper is organized as follows. In Sec. II, we set up the mixed-dimensional \(t\)-\(J\) model. In Sec. III, we determine the low-energy single- and two-hole eigenstates. In Sec. IV,
we study the nonequilibrium quench dynamics of two holes, before we conclude in Sec. V. Throughout the paper, we set the reduced Planck constant, \(\hbar\), and the lattice spacing to \(1\).
## II Model
We consider a system of spin-\(1/2\) particles placed along a two-leg ladder, described by a mixed-dimensional \(t\)-\(J\) model with Hamiltonian \(\hat{H}=\hat{H}_{t}+\hat{H}_{J}\) [Fig. 1(a)]. The spins \(\sigma=\uparrow,\downarrow\) can hop to nearest neighbors along the legs \(\mu=1,2\),
\[\hat{H}_{t}=-t\sum_{j,\sigma,\mu}\left[\hat{c}_{j,\mu,\sigma}^{\dagger}\tilde {c}_{j+1,\mu,\sigma}+\hat{c}_{j+1,\mu,\sigma}^{\dagger}\tilde{c}_{j,\mu, \sigma}\right], \tag{1}\]
under the constraint that there is at most a single spin on each site. This is enforced by the modified particle operator \(\tilde{c}_{j,\mu,\sigma}=\hat{c}_{j,\mu,\sigma}(1-\hat{n}_{\mu,j})\), with \(\hat{n}_{\mu,j}=\sum_{\sigma}\hat{c}_{j,\mu,\sigma}^{\dagger}\hat{c}_{j,\mu,\sigma}\) the local density operator. The nearest neighbor antiferromagnetic spin-spin coupling is assumed to be fully polarized in the \(z\)-direction
\[\hat{H}_{J}= J_{\parallel}\sum_{j,\mu}\left[\hat{S}_{j,\mu}^{(z)}\hat{S}_{j+1,\mu}^{(z)}-\frac{\hat{n}_{j,\mu}\hat{n}_{j+1,\mu}}{4}\right]\] \[+ J_{\perp}\sum_{j}\left[\hat{S}_{j,1}^{(z)}\hat{S}_{j,2}^{(z)}-\frac{\hat{n}_{j,1}\hat{n}_{j,2}}{4}\right], \tag{2}\]
with \(J_{\perp},J_{\parallel}>0\). Such mixed-dimensional models [49; 50] have recently been proposed to yield strong binding of holes through an emergent confining string potential of overturned spins [28], and was recently implemented successfully in the case of fully symmetric spin couplings [27]. The polarized Ising-type interaction explored here, enables us to derive exact results for low-energy single- and two-hole eigenstates, as well as the full nonequilibrium dynamics of two initially localized holes.
## III Low-energy eigenstates
In the absence of holes, the polarized AFM coupling in Eq. (2) results in a perfect Neel ordered state, \(|\mathrm{AF}\rangle\), for any values of \(J_{\parallel},J_{\perp}>0\). For periodic boundary conditions of \(N\) spins along each of the two legs, this results in the ground state energy
\[E_{0}=-N\frac{J_{\parallel}+J_{\perp}}{2}, \tag{3}\]
owing to a nearest neighbor spin bond energy of \(J_{\parallel}/2\) (\(J_{\perp}/2\)) along (across) the ladder. This should be contrasted to the case of isotropic spin couplings, in which case there is a quantum phase transition between Neel order along the ladder and spin singlet formation along the rungs as \(J_{\perp}/J_{\parallel}\) is increased [36; 37; 38; 39; 40; 22; 23; 36]. Utilizing this simplification, we can find the single-hole and two-hole ground states. To have a more efficient description, we employ a Holstein-Primakoff transformation and describe the system in terms of holes, \(\hat{h}\), and bosonic spin excitations \(\hat{s}\). The latter operators are defined with respect to the antiferromagnetic ground state \(\hat{s}\,|\mathrm{AF}\rangle=0\). The hopping Hamiltonian then reads [51]
\[\hat{H}_{t}=t\sum_{j,\mu}\Bigl{[}\hat{h}_{j,\mu}^{\dagger}F(\hat{ h}_{j,\mu},\hat{s}_{j,\mu})F(\hat{h}_{j+1,\mu},\hat{s}_{j+1,\mu})\hat{h}_{j+1, \mu}\hat{s}_{j,\mu}\] \[+\hat{s}_{j+1,\mu}^{\dagger}\hat{h}_{j,\mu}^{\dagger}F(\hat{h}_{j,\mu},\hat{s}_{j,\mu})F(\hat{h}_{j+1,\mu},\hat{s}_{j+1,\mu})\hat{h}_{j+1,\mu} \Bigr{]}+\mathrm{H.c.} \tag{4}\]
Here, \(F(\hat{h},\hat{s})=\sqrt{1-\hat{h}^{\dagger}\hat{h}-\hat{s}^{\dagger}\hat{s}}\) ensures that there is at most a single spin excitation and a single hole on each site. The spin-coupling Hamiltonian likewise becomes
\[\hat{H}_{J}=-J_{\parallel}\sum_{j,\mu} \Bigl{[}\left(\frac{1}{2}-\hat{s}_{j,\mu}^{\dagger}\hat{s}_{j,\mu} \right)\left(\frac{1}{2}-\hat{s}_{j+1,\mu}^{\dagger}\hat{s}_{j+1,\mu}\right)+ \frac{1}{4}\Bigr{]}\] \[\times\bigl{[}1-\hat{h}_{j,\mu}^{\dagger}\hat{h}_{j,\mu}\bigr{]}[ 1-\hat{h}_{j+1,\mu}^{\dagger}\hat{h}_{j+1,\mu}\bigr{]}\] \[-J_{\perp}\sum_{j} \Bigl{[}\left(\frac{1}{2}-\hat{s}_{j,1}^{\dagger}\hat{s}_{j,1} \right)\left(\frac{1}{2}-\hat{s}_{j,2}^{\dagger}\hat{s}_{j,2}\right)+\frac{1}{ 4}\Bigr{]}\] \[\times\bigl{[}1-\hat{h}_{j,1}^{\dagger}\hat{h}_{j,1}\bigr{]}[1- \hat{h}_{j,2}^{\dagger}\hat{h}_{j,2}\bigr{]}. \tag{5}\]
Figure 1: Mixed-dimensional \(t\)–\(J\) model featuring spin-\(1/2\) particles on a two-leg ladder geometry with two holes on separate legs (a) or on the same leg (b). The spins can hop to nearest neighbor vacant sites along the ladder with amplitude \(-t\), and have nearest neighbor Ising interactions \(J_{\parallel},J_{\perp}\) along and across the ladder, respectively. (c) Average distance between two holes, which initially sit at nearest neighbor sites, versus time \(\tau\). At short times, the holes blow apart ballistically as described by a quantum walk (red line), after which they oscillate around a well-defined long-time average. This is shown for \(J_{\perp}/t=0.2,1,3\) from top to bottom. (d) Corresponding dynamical regimes: quantum walk at short times (red region) and confining string oscillations (blue region) on long timescales. The crossover scales as \((t/J_{\perp})^{2}\) and \(t/J_{\perp}\) in the weak (\(J_{\perp}\gg t\), dashed line) and strong (\(J_{\perp}\ll t\), long-dashed line) correlation regime. The lines in (c) are colored to match the dynamical regimes in (d).
We emphasize that the spins can be either fermions or hard-core _bosons_. In fact, if the holes sit on separate legs, they are distinguishable by which leg they move in. If they move along the same leg, they are equivalently distinguishable by which one is to the left and which one is to the right. As a result, the statistics never come into play in what follows, only the hard-core constraint and the one-dimensional nature of the motion. The results, therefore, apply equally well to fermionic and hard-core bosonic spins, as one might expect from the general duality of fermions and impenetrable bosons in one dimension [52].
### Single-hole eigenstates
Central to the analysis of a single hole is the insight that a single hole doped into the two-leg antiferromagnetic Ising ladder is localized. Due to inversion symmetry, the low-energy eigenstates may then be written as (assuming that the hole resides in leg 1)
\[|\Psi_{1}\rangle=\Big[C^{(0)}\hat{h}_{0,1}^{\dagger}+\frac{C^{(1)}}{\sqrt{2}}\big(\hat{h}_{-1,1}^{\dagger}+\hat{h}_{+1,1}^{\dagger}\big)\hat{s}_{0,1}^{\dagger}+\dots\Big]|\mathrm{AF}\rangle\] \[=\Big[C^{(0)}\hat{h}_{0,1}^{\dagger}+\sum_{d=1}^{N/2}\frac{C^{(d)}}{\sqrt{2}}\Big(\hat{h}_{-d,1}^{\dagger}\prod_{j=-(d-1)}^{0}\hat{s}_{j,1}^{\dagger}+\hat{h}_{d,1}^{\dagger}\prod_{j=0}^{d-1}\hat{s}_{j,1}^{\dagger}\Big)\Big]|\mathrm{AF}\rangle\,. \tag{6}\]
This describes that for hole positions \(d\) sites away from the central site, with total amplitude \(C^{(d)}\), a resulting string of overturned spins of length \(d\) appears. Taking the energy of a single _stationary_ hole, \(E_{0}+J_{\parallel}+J_{\perp}/2\) as reference, and utilizing the Schrodinger equation, \(\hat{H}\left|\Psi_{1}\right\rangle=E_{1}\left|\Psi_{1}\right\rangle\), we obtain the equations of motion
\[E_{1}C^{(0)} =\sqrt{2}t\cdot C^{(1)}\] \[E_{1}C^{(1)} =V_{1}(1)\cdot C^{(1)}+\sqrt{2}t\cdot C^{(0)}+t\cdot C^{(2)}\] \[E_{1}C^{(d)} =V_{1}(d)\cdot C^{(d)}+t\cdot C^{(d-1)}+t\cdot C^{(d+1)}. \tag{7}\]
The lower equation applies for \(d\geq 2\). The motion of the hole \(d\) sites away leaves behind a single frustrated spin bond in leg \(1\), as well as \(d\) frustrated spin bonds across the ladder. This results in the linear string potential
\[V_{1}(d)=\frac{J_{\parallel}}{2}+d\cdot\frac{J_{\perp}}{2}, \tag{8}\]
confining the hole around to its origin. The obtained equations of motion are identical in form to the recently obtained exact results in general Bethe lattices [53]. Utilizing the same techniques for solving the equations of motion in Eq. (7), we propose a recursive structure of the amplitudes
\[C^{(d+1)}=tf_{1}^{(d+1)}(E_{1})C^{(d)}, \tag{9}\]
for \(d\geq 1\). Inserting this into Eq. (7), we obtain the self-consistency equation
\[f_{1}(E)=\frac{1}{E-t^{2}f_{1}\left(E-J_{\perp}/2\right)}, \tag{10}\]
for \(d\geq 1\). Here, we have defined \(f_{1}^{(d)}(E)=f_{1}(E-V_{1}(d))\) for a yet to be determined \(d\)-independent function \(f_{1}\). The self-consistency equation (10) can, finally, be used to find a closed-form expression of \(f_{1}(E)\) in terms of Bessel functions of the first kind, \(J_{\nu}(x)\),
\[f_{1}(E)=-\frac{1}{t}\frac{J_{\Omega(E)}\left(\frac{4t}{J_{\perp}}\right)}{J_ {\Omega(E)-1}\left(\frac{4t}{J_{\perp}}\right)}, \tag{11}\]
with \(\Omega(E)=-2E/J_{\perp}\), similar to the results in Refs. [53; 54; 55]. Inserting \(f_{1}^{(2)}(E_{1})=f_{1}(E_{1}-V_{1}(2))\) in the equation for \(d=1\) in Eq. (7) hereby yields \(C^{(1)}=\sqrt{2}t\cdot f_{1}(E_{1}-V_{1}(1))C^{(0)}\). Inserting this into the equation for \(d=0\) in Eq. (7) then finally results in the equation for the single-hole energy,
\[E_{1}=\Sigma_{1}(E_{1})=2t^{2}\cdot f_{1}\left(E_{1}-\frac{J_{\perp}+J_{ \parallel}}{2}\right), \tag{12}\]
hereby defining the single-hole self-energy \(\Sigma_{1}(E)\). Equation (12) actually supports a discrete series of eigenstates similar to a single hole in a Bethe lattice [53]. Here, however, our main focus is on the ground state as this is important to decipher whether two holes will bind or not.
The recursive structure of the amplitudes along with the result in Eq. (11), thus, allows us to construct the full many-body eigenstate with
\[C^{(d)} =\sqrt{2}C^{(0)}\cdot t^{d}\prod_{j=1}^{d}f_{1}^{(j)}(E_{1})\] \[=(-1)^{d}\sqrt{2Z_{1}}\cdot\frac{J_{\Omega(E_{1}-V_{1}(d))}\left( \frac{4t}{J_{\perp}}\right)}{J_{\Omega(E_{1}-V_{1}(1))-1}\left(\frac{4t}{J_{ \perp}}\right)}. \tag{13}\]
Here, \(Z_{1}=[1-\partial_{E}\Sigma_{1}(E)|_{E=E_{1}}]^{-1}\) is the (quasiparticle) residue of the single-hole Green's function \([E-\Sigma_{1}(E)]^{-1}\) at the single-hole energy \(E_{1}\). The result \(C^{(0)}=\sqrt{Z_{1}}\) is derived
Figure 2: (a) The hole is localized around a particular site along the ladder (top). As the hole moves, spins align in its wake, generating more and more spin frustrations (shaded blue and red background). (b) Resulting lowest single-hole energy for indicated values of intraleg spin coupling \(J_{\parallel}\). For \(J_{\perp}=0.1t\), we also show the strong coupling result (Eq. (14), red dashed line) valid for \(J_{\perp},J_{\parallel}\ll 1\).
by normalizing the wave function, \(1=\langle\Psi_{1}|\Psi_{1}\rangle\).
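The ground-state energy defined by Eqs. (11) and (12) is straightforward to evaluate numerically. The sketch below is our own illustration (units of \(t=1\)): it scans for the lowest root of \(E=\Sigma_{1}(E)\) using the Bessel-function form of \(f_{1}\), and compares with the strong-coupling Airy asymptotics quoted just below, where \(a^{(0)}\simeq 1.0188\):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import jv

def f1(E, Jperp, t=1.0):
    """f_1(E) of Eq. (11), with Omega(E) = -2E/Jperp."""
    om = -2.0 * E / Jperp
    x = 4.0 * t / Jperp
    return -jv(om, x) / (t * jv(om - 1.0, x))

def single_hole_energy(Jperp, Jpar, t=1.0, Emin=-3.0, dE=1e-3):
    """Lowest solution of Eq. (12), found by scanning upward for the first sign change."""
    g = lambda E: E - 2.0 * t**2 * f1(E - 0.5 * (Jperp + Jpar), Jperp, t)
    E = Emin
    while g(E) * g(E + dE) > 0.0:
        E += dE
    return brentq(g, E, E + dE)

Jpar = 0.01
for Jperp in (0.1, 0.5, 1.0):
    E1 = single_hole_energy(Jperp, Jpar)
    E1_airy = -2.0 + 1.0188 * (Jperp / 2.0)**(2.0 / 3.0) + Jpar / 2.0   # strong-coupling estimate
    print(Jperp, round(E1, 4), round(E1_airy, 4))
```

As expected, the agreement between the exact root and the Airy estimate improves as \(J_{\perp}\) and \(J_{\parallel}\) are lowered.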
At strong coupling, \(J_{\perp},J_{\parallel}\ll t\), the hole spreads out more and more, resulting in a continuum limit. This yields the asymptotic single-hole energy
\[E_{1}\rightarrow-2t+ta^{(0)}\left(\frac{J_{\perp}}{2t}\right)^{2/3}+\frac{J_{ \parallel}}{2}, \tag{14}\]
with \(-a^{(0)}\simeq-1.02\) the first zero of the derivative of the Airy function, \(\mathrm{Ai}^{\prime}(x)\). In Fig. 2(b), we plot the single hole energy as a function \(J_{\perp}\) for a few indicated values of \(J_{\parallel}\). We see good agreement between Eq. (14) and the full solution of Eq. (12) for \(J_{\parallel}=0.01t\) and \(J_{\perp}\ll t\).
### Two-hole eigenstates
We now focus on the low-energy two-hole eigenstates. We both consider holes moving on separate legs [Fig. 3], as well as holes moving along the same leg [Fig. 4]. For holes traveling on separate legs, the breaking of spin bonds within a leg can be completely avoided by starting from a configuration of spins in which the spins to the right of the holes is moved by one lattice site to the right. Hence, if the holes move alongside each other, the perfect Neel order is retained and no spin bonds are broken. The appropriate two-hole eigenstates are, therefore, delocalized along the ladder. We, thus, define the states
\[|\Psi_{2\perp}(k,d)\rangle = \frac{1}{\sqrt{N}}\sum_{j}\mathrm{e}^{ikj}\mathrm{e}^{ikd/2}\hat {h}^{\dagger}_{j,1}\hat{h}^{\dagger}_{j+d,2} \tag{15}\] \[\times\prod_{l>j}\hat{s}^{\dagger}_{l,1}\prod_{m>j+d}\hat{s}^{ \dagger}_{m,2}|\mathrm{AF}\rangle\,,\]
for a linear distance \(d\) between the two holes. Here, we assume \(N\) sites in each leg and periodic boundary conditions. In this manner, \(d>0\) (\(d<0\)) indicates that the hole in leg 2 has moved \(|d|\) sites to the right (left). The appearance of the string operator, \(\prod_{l>j}\hat{s}^{\dagger}_{l,1}\prod_{m>j+d}\hat{s}^{\dagger}_{m,2}\), is due to the shift of the underlying AFM order by one lattice site to the right at \(j\) and \(j+d\). These states have crystal momentum \(k\in(-\pi,\pi]\), as translating the holes and spin strings \(\hat{h}^{\dagger}_{j,1}\hat{h}^{\dagger}_{j+d,2}\prod_{l>j}\hat{s}^{\dagger}_ {l,1}\prod_{m>j+d}\hat{s}^{\dagger}_{m,2}|\mathrm{AF}\rangle\) by one lattice site to the right results in an additional phase of \(\mathrm{e}^{-ik}\). As no spin frustration within a leg occurs for this configuration of holes, the resulting low-energy eigenstates are independent of the intra-leg spin coupling \(J_{\parallel}\).
For holes moving along the same leg, the most favorable configuration of the spins is now obtained by taking out two adjacent spins. Once again, if the holes move alongside each other, no spin bonds are broken. The delocalized states for a
Figure 3: (a) Two holes on separate legs are delocalized with frustrated spin bonds (shaded red and blue) between the holes. (b) Two-hole spectral function for indicated values of \(J_{\perp}\) as a function of the crystal momentum \(k\). Because of inversion symmetry, \(A_{2\perp}(-k,\omega)=A_{2\perp}(+k,\omega)\). In blue is shown \(\pm 4t\cos(k/2)\) for reference. The spectrum of states of the form in Eq. (15) is independent the intra-leg spin coupling \(J_{\parallel}\).
Figure 4: (a) Like holes on separate legs, two holes on the same leg feature delocalized two-hole eigenstates and frustrated spin bonds (shaded red and blue) between the holes. (b) Two-hole spectral function for indicated values of \(J_{\perp}\) and \(J_{\parallel}\) as a function of the crystal momentum \(k\). In blue is shown \(\pm 4t\cos(k/2)\) for reference. The spectrum, here, depends on both the trans- (\(J_{\perp}\)) and intra-leg (\(J_{\parallel}\)) spin couplings. For \(J_{\perp}\to 0\) (right), a quasiparticle band appears below the continuum, when \(J_{\parallel}\geq 4t\cos(k/2)\). For \(J_{\parallel}=2t\), this corresponds to \(k\geq 2\pi/3\).
distance \(d\) between the holes in this case, therefore, becomes
\[|\Psi_{2\parallel}(k,d)\rangle=\frac{1}{\sqrt{N}}\sum_{j}\mathrm{e}^{ikj}\mathrm{e}^{ikd/2}\hat{h}_{j,1}^{\dagger}\hat{h}_{j+d,1}^{\dagger}\prod_{l=j+1}^{j+d-1}\hat{s}_{l,1}^{\dagger}\,|\mathrm{AF}\rangle\,, \tag{16}\]
still with \(\Omega(E)=-2E/J_{\perp}\). Insertion in the equation of motion for \(d=0\) in Eq. (18) and for \(d=1\) in Eq. (19) reveals the equations for the two-hole energies
\[E_{2\perp}^{(n)}(k)=-\frac{J_{\perp}}{2}+8t^{2}\text{cos}^{2} \bigg{(}\frac{k}{2}\bigg{)}f_{2}(k,E_{2}^{(n)}(k)),\] \[E_{2\parallel}^{(n)}(k)=-\frac{J_{\parallel}}{2}+4t^{2}\text{cos} ^{2}\bigg{(}\frac{k}{2}\bigg{)}f_{2}(k,E_{2}^{(n)}(k)-J_{\perp}/2). \tag{24}\]
As for the single-hole case, we can use the recursion relations in Eq. (21) along with Eq. (23) to explicitly write the coefficient of the many-body wave function. For holes on separate legs, we get
\[C_{\perp}^{(n)}(k,d)=C_{\perp}^{(n)}(k,0)\bigg{[}2t\cos\bigg{(} \frac{k}{2}\bigg{)}\bigg{]}^{|d|}\prod_{j=1}^{|d|}f_{2\perp}^{(j)}(k,E_{2 \perp}^{(n)}(k))\] \[=(-1)^{d}\sqrt{Z_{2\perp}^{(n)}(k)}\frac{J_{\Omega(E_{2\perp}^{( n)}(k))+|d|-1}\Big{(}\frac{8t\cos(k/2)}{J_{\perp}}\Big{)}}{J_{\Omega(E_{2 \perp}^{(n)}(k))-1}\Big{(}\frac{8t\cos(k/2)}{J_{\perp}}\Big{)}}. \tag{25}\]
for \(|d|\geq 1\), using \(\Omega(E-V_{2\perp}(d))=\Omega(E)+|d|-1\). For holes on the same leg, we similarly get
\[C_{\parallel}^{(n)}(k,d)=C_{\parallel}^{(n)}(k,1)\bigg{[}2t\cos \bigg{(}\frac{k}{2}\bigg{)}\bigg{]}^{d-1}\prod_{j=2}^{d}f_{2\parallel}^{(j)}(k,E_{2\parallel}^{(n)}(k))\] \[=(-1)^{d-1}\sqrt{Z_{2\parallel}^{(n)}(k)}\frac{J_{\Omega(E_{2 \parallel}^{(n)}(k))+d-1}\Big{(}\frac{8t\cos(k/2)}{J_{\perp}}\Big{)}}{J_{ \Omega(E_{2\parallel}^{(n)}(k))}\Big{(}\frac{8t\cos(k/2)}{J_{\perp}}\Big{)}}. \tag{26}\]
for \(d\geq 2\). Analogous to the single-hole case, \(Z_{2s}^{(n)}(k)=\big[1-\partial_{\omega}\Sigma_{2s}(k,\omega)\big|_{\omega=E_{2s}^{(n)}(k)}\big]^{-1}\) is the residue of the two-hole Green's function
\[G_{2\perp}(k,\omega)=\frac{1}{\omega+i\eta+J_{\perp}/2-\Sigma_{ 2\perp}(k,\omega+i\eta)},\] \[G_{2\parallel}(k,\omega)=\frac{1}{\omega+i\eta+J_{\parallel}/2- \Sigma_{2\parallel}(k,\omega+i\eta)}, \tag{27}\]
for \(\eta=0^{+}\), \(\Sigma_{2\perp}(k,\omega+i\eta)=8t^{2}\cos^{2}(k/2)\cdot f_{2}(k,\omega+i\eta)\), and \(\Sigma_{2\parallel}(k,\omega+i\eta)=4t^{2}\cos^{2}(k/2)\cdot f_{2}(k,\omega+i\eta-J_{\perp}/2)\). The poles of \(G_{2s}\), hereby, determine the spectra for states of the forms in Eqs. (15) and (16). These are all states that have a nonzero overlap with finding holes at adjacent sites with no frustrated spin bonds. Importantly, this subfamily of states contains the two-body states with the lowest energy. The spectral functions, \(A_{2s}(k,\omega)=-2\text{Im}G_{2s}(k,\omega)\), are shown in Figs. 3 and 4 for a few indicated values of the spin couplings. From here, the discrete bands, \(E_{2s}^{(n)}(k)\), with \(n=0,1,2,\dots\), are now apparent. In the limit of \(J_{\perp}/t\to 0^{+}\), a continuum of states forms between \(\pm 4t\cos(k/2)\). Below this continuum of states, holes traveling on the same leg [Fig. 4(b)] feature a well-defined quasiparticle state if \(J_{\parallel}>4t\cos(k/2)\), in which case Eq. (24) may be solved to yield
\[E_{2\parallel}^{(0)}(k)=-\frac{J_{\parallel}}{2}-\frac{8t^{2}\cos^{2}(k/2)}{J_ {\parallel}}, \tag{28}\]
ending up at \(-J_{\parallel}/2\) at the Brillouin zone boundary, \(k=\pi\). For \(J_{\parallel}>4t\) [bottom right in Fig. 4(b)], this state appears for any \(k\) and a full quasiparticle band remains even for \(J_{\perp}\to 0\). For \(0<J_{\parallel}<4t\), on the other hand, a quasiparticle state appears only for crystal momenta close enough to the boundary of the Brillouin zone [top right in Fig. 4(b)]. We note that at \(k=\pi\) for general \(J_{\perp}>0\), there seems to be an equal spacing of the bands. In fact, inspecting Eqs. (18) and (19) we see that the two hopping pathways destructively interfere here, giving a vanishing total hopping amplitude, \(2t\cos(\pi/2)=0\). The states are, therefore, completely immobile and their energies are, consequently, determined by the string potentials
\[E_{2\perp}^{(n)}(\pi)=(n-1)\frac{J_{\perp}}{2},\] \[E_{2\parallel}^{(0)}(\pi)=-\frac{J_{\parallel}}{2},E_{2\parallel} ^{(n)}(\pi)=n\frac{J_{\perp}}{2}, \tag{29}\]
where \(n\geq 0\) (\(n\geq 1\)) in the upper (lower) line. This gives a spacing of \(J_{\perp}/2\) at \(k=\pi\). We note that the overall structure of the spectra in Fig. 3 is similar to the isotropic spin coupling case in the regime of \(J_{\perp}\gg J_{\parallel}\), where the underlying spin lattice resides in a disordered regime of spin-singlets on each rung [28].
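To make these limits concrete, the short Python sketch below (an illustration only, not the code used for the figures; the parameter values are arbitrary choices matching the regime of Fig. 4) evaluates the \(J_{\perp}\to 0\) quasiparticle band of Eq. (28) against the continuum edge \(-4t\cos(k/2)\) and lists the equally spaced immobile-state energies of Eq. (29) at \(k=\pi\).

```python
# Illustrative only: analytic expressions quoted above, evaluated numerically.
import numpy as np

t = 1.0          # hopping amplitude (energy unit)
J_par = 2.0      # intra-leg spin coupling J_parallel, as in Fig. 4
J_perp = 0.5     # trans-leg spin coupling, enters only Eq. (29) here

k = np.linspace(0.0, np.pi, 7)
continuum_edge = -4.0 * t * np.cos(k / 2.0)

# Eq. (28): quasiparticle band, meaningful only where J_par > 4t*cos(k/2)
E_qp = -J_par / 2.0 - 8.0 * t**2 * np.cos(k / 2.0)**2 / J_par
bound = J_par > 4.0 * t * np.cos(k / 2.0)

for ki, Ei, edge, ok in zip(k, E_qp, continuum_edge, bound):
    tag = "below continuum" if ok else "merged with continuum"
    print(f"k = {ki:5.2f}:  E_qp = {Ei:7.3f},  edge = {edge:7.3f}  ({tag})")

# Eq. (29): at k = pi the hopping interferes destructively; the levels are set by
# the string potential alone and are spaced by J_perp/2.
n = np.arange(0, 5)
print("E_2perp(pi):", (n - 1) * J_perp / 2.0)
print("E_2par(pi): ", np.concatenate(([-J_par / 2.0], n[1:] * J_perp / 2.0)))
```

For \(J_{\parallel}=2t\), this reproduces the statement in the caption of Fig. 4 that the bound state only exists for \(k\gtrsim 2\pi/3\).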
Finally, for the lowest energy two-hole state, \(k=0\) and \(n=0\), we find that the energies at strong coupling, \(J_{\perp},J_{\parallel}\ll t\), behave as \(E_{2\perp}^{(0)}(0)=-4t(1-a^{(0)}(J_{\perp}/4t)^{2/3}/2)-J_{\perp}/2\) and \(E_{2\parallel}^{(0)}(0)=-4t(1-a^{(1)}(J_{\perp}/4t)^{2/3}/2)-J_{\perp}/2\). Here,
Figure 5: (a) Trans-leg binding energy, \(E_{b\perp}=2E_{1}-E_{2\perp}(0)\), versus the trans-leg spin coupling \(J_{\perp}/t\) for several indicated values of the intra-leg spin coupling \(J_{\parallel}\). For \(J_{\perp},J_{\parallel}\ll t\), \(E_{b\perp}\) follows a \(\left(J_{\perp}/t\right)^{2/3}\) power law behavior [light red dashed line, Eq. (30)]. For \(J_{\perp}\gg t\), \(E_{b\perp}\) approaches \(J_{\perp}/2\) [black dashed line]. (b) Intra-leg binding energy, \(E_{b\parallel}=2E_{1}-E_{2\parallel}(0)\), versus \(J_{\parallel}\) for several values of \(J_{\perp}\). \(E_{b\parallel}\) approaches \(J_{\parallel}/2\) for \(J_{\parallel}\gg t\) [black dashed line]. (c) Intra-leg binding energy versus \(J_{\perp}\) instead. \(E_{b\parallel}\) approaches \(J_{\parallel}/2\) for \(J_{\perp}\gg t\) as well [dashed lines].
\(-a^{(0)}\simeq-1.02\) is once again the first zero of the derivative of the Airy function [see Appendix A for details], while \(-a^{(1)}\simeq-2.34\) is the first zero of the Airy function itself. Together with the single-hole energy in Eq. (14), this leads to the asymptotic binding energies, \(E_{bs}=2E_{1}-E_{2s}^{(n=0)}(k=0)\)
\[E_{b\perp}\to t\cdot a^{(0)}(2-2^{1/3})\left(\frac{J_{\perp}}{2t} \right)^{2/3}+\frac{J_{\perp}}{2}+J_{\parallel},\] \[E_{b\parallel}\to t\cdot(2a^{(0)}-2^{1/3}a^{(1)})\left(\frac{J_{ \perp}}{2t}\right)^{2/3}+\frac{J_{\perp}}{2}+J_{\parallel}. \tag{30}\]
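As a quick numerical check of Eq. (30), the following minimal sketch evaluates the two asymptotic binding energies using the Airy constants \(a^{(0)}\) and \(a^{(1)}\) obtained from SciPy; the couplings are arbitrarily chosen example values, not a reproduction of the data in Fig. 5.

```python
# Minimal sketch of the strong-coupling binding energies, Eq. (30).
import numpy as np
from scipy.special import ai_zeros

ai_z, aip_z, _, _ = ai_zeros(1)   # first zeros of Ai and of Ai'
a1 = -ai_z[0]                     # a^(1) ~ 2.33811 (first zero of Ai)
a0 = -aip_z[0]                    # a^(0) ~ 1.01879 (first zero of Ai')

def Eb_perp(J_perp, J_par, t=1.0):
    """Asymptotic trans-leg binding energy of Eq. (30), for J_perp, J_par << t."""
    return t * a0 * (2.0 - 2.0**(1.0/3.0)) * (J_perp / (2.0*t))**(2.0/3.0) + J_perp/2.0 + J_par

def Eb_par(J_perp, J_par, t=1.0):
    """Asymptotic intra-leg binding energy of Eq. (30); note the negative prefactor."""
    return t * (2.0*a0 - 2.0**(1.0/3.0) * a1) * (J_perp / (2.0*t))**(2.0/3.0) + J_perp/2.0 + J_par

print(f"a0 = {a0:.5f}, a1 = {a1:.5f}, 2a0 - 2^(1/3) a1 = {2*a0 - 2**(1/3)*a1:+.3f}")
for Jp in (0.01, 0.05, 0.1):
    print(f"J_perp = {Jp:4.2f}t:  Eb_perp = {Eb_perp(Jp, 0.0):+.4f}t,  Eb_par = {Eb_par(Jp, 0.0):+.4f}t")
```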
In Fig. 5(a), we plot the binding energy as a function of \(J_{\perp}/t\) for a few indicated values of \(J_{\parallel}\) in the case of holes on separate legs. The functional form of the binding energy in the upper line of Eq. (30) is anticipated to remain true in the case of isotropic spin-couplings [28]. Together with the behavior of the binding energy for holes on the same leg [Figs. 5(b)-5(c)], this lends new insights into when holes can bind strongly or not. In general, the two holes are confined by the string of overturned spins between them. This results in the dominant energy-scaling of \((J_{\perp}/t)^{2/3}\), and leads to a strong binding mechanism for holes on separate legs. For holes on the same leg, however, the prefactor in front of this term is negative, \(2a^{(0)}-2^{1/3}a^{(1)}\simeq-0.90<0\), so that the two holes actually energetically prefer to unbind. Similar to recent cold-atom experiments with isotropic spin couplings [27], this difference can be understood from a Pauli repulsion effect. In fact, the hard-core constraint means that the boundary condition at \(d=0\) is altered from being soft for holes on separate legs to exactly zero for holes on the same leg. This results in the different prefactors of \(a^{(0)}\simeq 1.02\) and \(a^{(1)}\simeq 2.34\) in the two cases, which will become apparent when we investigate the spatial distribution of the holes below. We note, however, that already for moderate values of the intra-leg spin-coupling \(J_{\parallel}\), this unbinding is overcome, and the binding energy eventually approaches \(J_{\parallel}/2\) for \(J_{\parallel}\gg t\). In fact, in the extreme limit of \(J_{\perp}/t\to 0^{+}\), Eq. (28) in combination with \(E_{2\parallel}^{(0)}(0)=-4t\) for \(J_{\parallel}<4t\) results in the positive binding energy
\[E_{b\parallel} =J_{\parallel}+4t-\sqrt{(4t)^{2}+J_{\parallel}^{2}},\,J_{ \parallel}<4t,\] \[E_{b\parallel} =\frac{3J_{\parallel}}{2}+\frac{8t^{2}}{J_{\parallel}}-\sqrt{(4 t)^{2}+J_{\parallel}^{2}},\,J_{\parallel}\geq 4t, \tag{31}\]
shown with a black line in Fig. 5(b). Here, we use \(E_{2\parallel}(0)=-4t\) for \(J_{\parallel}<4t\) and Eq. (28) for \(J_{\parallel}\geq 4t\), as well as \(E_{1}=J_{\parallel}/2-2t\sqrt{1+(J_{\parallel}/4t)^{2}}\) obtained by solving Eqs. (10) and (12) for \(J_{\perp}\to 0^{+}\). Hence, in this limit the binding energy interpolates between two linear behaviors in the intra-leg spin coupling, from an initial \(J_{\parallel}\) to a \(J_{\parallel}/2\) behavior. This illustrates that two holes on the same leg bind _unless_ both the trans- and intra-leg spin couplings are small. Furthermore, we stress once again that these results, including the unbinding mechanism for holes on the same leg, ensue regardless of the statistics of the spins and depend only on the hard-core constraint, as one should also expect in a system with one-dimensional motion [52].
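The \(J_{\perp}\to 0^{+}\) limit of Eq. (31) is simple enough to evaluate directly. The sketch below (illustrative, with \(t=1\)) combines \(E_{1}\) with the appropriate branch of \(E_{2\parallel}^{(0)}(0)\) and confirms that the two expressions join continuously at \(J_{\parallel}=4t\).

```python
# Illustrative evaluation of Eq. (31) in the J_perp -> 0+ limit.
import numpy as np

def Eb_par_Jperp0(J_par, t=1.0):
    E1 = J_par / 2.0 - 2.0 * t * np.sqrt(1.0 + (J_par / (4.0 * t))**2)
    if J_par < 4.0 * t:
        E2 = -4.0 * t                              # bottom of the two-hole continuum
    else:
        E2 = -J_par / 2.0 - 8.0 * t**2 / J_par     # quasiparticle energy, Eq. (28) at k = 0
    return 2.0 * E1 - E2

for J_par in (1.0, 2.0, 3.999, 4.001, 6.0, 10.0):
    print(f"J_par = {J_par:6.3f}t:  Eb_par = {Eb_par_Jperp0(J_par):7.4f}t")
# Both branches give Eb_par = (8 - 4*sqrt(2)) t ~ 2.343 t at J_par = 4t.
```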
In this manner, we have given a detailed account of the low-energy behavior for both intra- and trans-leg configurations. Holes on separate legs always bind with a super-linear scaling of \(t\cdot(J_{\perp}/t)^{2/3}\) for \(J_{\perp},J_{\parallel}\ll t\). For holes on the same leg, however, the hard-core constraint results in an energy cost proportional to \(t\cdot(J_{\perp}/t)^{2/3}\) for low \(J_{\perp},J_{\parallel}\) and leads to unbinding in this regime. However, for higher values of either spin coupling the holes will, once again, bind.
Whereas a determination of the two-hole binding energy is direct proof of their ability to bind, it is simultaneously notoriously difficult to measure in modern quantum simulation experiments with ultracold atoms in synthetic lattices, such as optical lattices and Rydberg arrays. The simple reason is that the required spectroscopy entails single atom detection in e.g. time of flight or rather advanced band-mapping techniques [17; 56]. On the other hand, the combination of the lattice experiments and the development of quantum gas microscopy has enabled the direct and precise measurement of spatial correlations, and has successfully been employed to measure antiferromagnetic correlations in Fermi-Hubbard systems [7; 9], as well as characterizing the spatial properties [14] and formation dynamics of magnetic polarons [21] in such systems. For
Figure 6: (a) Trans-leg \(g_{\perp}^{(2)}\) correlation versus the relative distance \(d\) for holes on separate legs and indicated values of the trans-leg spin coupling \(J_{\perp}\). In dark red and blue is shown the continuum limit result valid for \(J_{\perp}/t\ll 1\) [Eq. (33)]. (b) Intra-leg \(g_{\parallel}^{(2)}\) correlation function versus \(d\) for holes on the same leg for intra-leg spin coupling \(J_{\parallel}=0.2t\) and indicated values of \(J_{\perp}\). (c) Average distances \(\left\langle\left|d\right|\right\rangle_{s}=\sum_{d}|d|g_{s}^{(2)}(0,d)/N\) between the two holes for the ground state at \(k=0\) as a function of \(J_{\perp}/t\) on a log-log plot for the intra- (\(s=\parallel\), red lines) and trans-leg (\(s=\perp\), blue line) configurations of the holes. The intra-leg case is shown for several indicated values of \(J_{\parallel}\). We also show the strong-correlation scaling \((t/J_{\perp})^{1/3}\) (black short dashes), as well as the weak-correlation results \(\propto(t/J_{\perp})^{2}\) (black long dashes).
two holes, the two-point hole-hole correlators
\[g_{\perp}^{(2)}(k,d) =\frac{\langle\hat{h}_{1,j}^{\dagger}\hat{h}_{1,j}\hat{h}_{2,j+d}^{ \dagger}\hat{h}_{2,j+d}\rangle_{k}}{\langle\hat{h}_{1,j}^{\dagger}\hat{h}_{1,j} \rangle_{k}\langle\hat{h}_{2,j+d}^{\dagger}\hat{h}_{2,j+d}\rangle_{k}}=N|C_{ \perp}^{(0)}(k,d)|^{2},\] \[g_{\parallel}^{(2)}(k,d) =\frac{\langle\hat{h}_{1,j}^{\dagger}\hat{h}_{1,j}\hat{h}_{1,j+d} ^{\dagger}\hat{h}_{1,j+d}\rangle_{k}}{\langle\hat{h}_{1,j}^{\dagger}\hat{h}_{ 1,j}\rangle_{k}\langle\hat{h}_{1,j+d}^{\dagger}\hat{h}_{1,j+d}\rangle_{k}}=N|C_ {\parallel}^{(0)}(k,d)|^{2}, \tag{32}\]
provide such a spatial probe of their binding, as was also recently used in experiments [27]. In Eq. (32), the average is taken for the states \(|\Psi_{2s}^{(0)}(k)\rangle\) with \(s=\perp,\parallel\) in Eq. (17) residing in the lowest band \(E_{2s}^{(0)}(k)\). We utilize that the amplitude \(C_{s}^{(0)}(k,d)\) gives the probability to observe the holes at distance \(d\), \(|C_{s}^{(0)}(k,d)|^{2}\). Therefore, the numerator simply gives \(|C_{s}^{(0)}(k,d)|^{2}/N\), whereas the uniform spreading of the holes means that \(\langle\hat{h}_{\mu,j}^{\dagger}\hat{h}_{\mu,j}\rangle_{k}=1/N\), for both legs \(\mu=1,2\). In Figs. 6(a)-6(b), we plot these correlators as a function of \(d\) for several values of \(J_{\perp}\). For lower values of \(J_{\perp}/t\), the holes separate more and more from each other as one expects for a higher mobility. We note that already for \(J_{\perp}=3t\), the probability of finding the holes as nearest neighbors has dropped to around \(50\%\). For \(J_{\perp}/t\ll 1\) - and \(J_{\parallel}/t\ll 1\) in the intra-leg case - the relative wave functions of the holes, \(C_{s}^{(n)}(k,d)\), can be mapped to a continuum model. In this limit, they fulfill a continuous one-dimensional Schrodinger equation with a mass scaling as \(t\) and a linear potential scaling with \(J_{\perp}\)[57; 5; 3; 5]. As a result, the relative wave functions takes on the form of Airy functions [see Appendix A], resulting in
\[\frac{g_{\perp}^{(2)}(k,d)}{N} \to A_{0}^{2}\lambda(k)[\text{Ai}(\lambda(k)|d|-a^{(0)})]^{2},\] \[\frac{g_{\parallel}^{(2)}(k,d)}{N} \to A_{1}^{2}\lambda(k)[\text{Ai}(\lambda(k)d-a^{(1)})]^{2}, \tag{33}\]
with the effective inverse length scale \(\lambda(k)=[J_{\perp}/(4t\cos(k/2))]^{1/3}\), and the normalization constants \(A_{j}\). We compare to these continuum results and see remarkably good agreement away from \(d=0\) even for relatively large values of \(J_{\perp}\). Additionally, we show the average distance between the holes \(\langle|d|\rangle_{sk}=\sum_{d}|d|g_{s}^{(2)}(k,d)/N\) as a function of \(J_{\perp}/t\) in Fig. 6(c) for the ground state at \(k=0\). This reveals the strong-correlation scaling
\[\langle d\rangle_{\parallel k}=a^{(1)}\cdot\langle|d|\rangle_{\perp k}=\frac{ 2a^{(1)}}{3\lambda(k)}\propto\left(\frac{t}{J_{\perp}}\right)^{1/3}, \tag{34}\]
for \(J_{\perp}/t\ll 1\). Figure 6(c) shows that this asymptotic form is already accurate for \(J_{\perp}/t\leq 1\). We attribute this to the fact that the effective interaction strength for two holes is \(4t/J_{\perp}\), rather than just \(t/J_{\perp}\). For weak correlations, we similarly get \(\langle d\rangle_{\parallel k}-1=\langle|d|\rangle_{\perp k}/2=1/\lambda^{6}(k)\propto(t/J_{\perp})^{2}\), becoming accurate for \(J_{\perp}/t\geq 10\) in Fig. 6(c). Importantly, we emphasize that for holes on the same leg, the hard-core constraint \(g_{\parallel}^{(2)}(k,d=0)=0\) results in a different boundary condition in the continuum limit. This change in the boundary condition alters the relative wave function from the form \(\text{Ai}(\lambda(k)|d|-a^{(0)})\) to \(\text{Ai}(\lambda(k)d-a^{(1)})\), and changes the prefactor of the \((J_{\perp}/t)^{2/3}\) term in the two-hole energy from \(a^{(0)}\simeq 1.02\) to \(a^{(1)}\simeq 2.34\). This also results in a significant qualitative change in the relative spatial distribution of the holes. For holes on separate legs, the holes are always most likely to be found as nearest neighbors, whereas this is not true at all for holes on the same leg. This leads to more distant holes in the intra-leg configuration and explains the extra factor of \(a^{(1)}\simeq 2.34\) in \(\langle d\rangle_{\parallel k}\). In Fig. 6, we focus on the ground state behavior, i.e. \(k=0\). We note, however, that as the momentum approaches the edge of the Brillouin zone, the correlator compresses more and more and eventually the holes only sit next to each other: \(g_{\perp}^{(2)}(k=\pi,0)=N\) and \(g_{\parallel}^{(2)}(k=\pi,1)=N\).
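The continuum-limit correlators of Eq. (33) are easy to tabulate numerically. In the sketch below the relative distributions are normalized by a direct sum over \(d\) instead of using the constants \(A_{j}\), and the resulting mean distances are compared with the \(2a/(3\lambda)\) scaling behind Eq. (34); the couplings are example values only.

```python
# Illustrative tabulation of Eq. (33) and of the mean hole-hole distance.
import numpy as np
from scipy.special import airy, ai_zeros

ai_z, aip_z, _, _ = ai_zeros(1)
a0, a1 = -aip_z[0], -ai_z[0]          # a^(0) ~ 1.019, a^(1) ~ 2.338

def relative_distribution(J_perp, t=1.0, k=0.0, dmax=400):
    lam = (J_perp / (4.0 * t * np.cos(k / 2.0)))**(1.0 / 3.0)
    d = np.arange(-dmax, dmax + 1)
    P_perp = airy(lam * np.abs(d) - a0)[0]**2                  # separate legs
    P_par = np.where(d >= 1, airy(lam * d - a1)[0]**2, 0.0)    # same leg, d >= 1
    return d, P_perp / P_perp.sum(), P_par / P_par.sum(), lam

for Jp in (0.1, 0.5, 1.0):
    d, Pperp, Ppar, lam = relative_distribution(Jp)
    print(f"J_perp = {Jp:3.1f}t:  <|d|>_perp = {np.sum(np.abs(d)*Pperp):5.2f} "
          f"(2a0/3lam = {2*a0/(3*lam):5.2f}),  <d>_par = {np.sum(d*Ppar):5.2f} "
          f"(2a1/3lam = {2*a1/(3*lam):5.2f})")
```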
Equation (33) reveals that for holes on separate legs, the correlator at \(d=0\) scales with the inverse length scale \(\lambda(k)\propto(J_{\perp}/t)^{1/3}\). Since the binding energy scales with \(t(J_{\perp}/t)^{2/3}\), we get that \(E_{b\perp}/J_{\perp}\propto 1/g^{(2)}(0,0)\) at strong coupling. More precisely,
\[\frac{E_{b\perp}}{J_{\perp}}=\frac{c_{\perp}}{g_{\perp}^{(2)}(0,0)/N}+\frac{1}{ 2}+\frac{J_{\parallel}}{J_{\perp}}, \tag{35}\]
with \(c_{\perp}=2^{-4/3}(1-2^{-2/3})\) for \(J_{\perp},J_{\parallel}\ll t\). This is very valuable for quantum simulation experiments, as it provides an indirect probe of the binding energy. In fact, in Ref. [27] an approximate relation at finite temperatures between the binding energy and the two-point correlator was used in this regard. For the configuration with two holes on the same leg, Eq. (33) similarly gives \(g_{\parallel}^{(2)}(k,1)\propto\lambda^{3}(k)\). The asymptotic
Figure 7: (a) Trans-leg binding energy in units of \(J_{\perp}\) versus \(g_{\perp}^{(2)}\) at \(k=0\) and for adjacent holes, \(d=0\), for indicated values of the intra-leg spin coupling \(J_{\parallel}\). This is compared to the asymptotic behavior in Eq. (35) for \(J_{\parallel}=0.1t\) (light red dashed line), as well as the weak coupling binding energy (black long dashed line). We observe a monotonically decreasing behavior of \(E_{b}/J_{\perp}\) for increasing \(g_{\perp}^{(2)}\). (b) Intra-leg binding energy versus \(g_{\parallel}^{(2)}\) at \(k=0\) and adjacent holes, \(d=1\), for the same values of \(J_{\parallel}\), and also compared to the asymptotic behavior [Eq. (36)]. For small \(J_{\parallel}\), the binding energy falls off very quickly with increasing \(g_{\parallel}^{(2)}\).
relation between the binding energy and the correlation function, therefore, now takes on the form
\[\frac{E_{b\parallel}}{J_{\perp}}=\frac{c_{\parallel}}{[g_{\parallel}^{(2)}(0,1)/N]^{1/3}}+\frac{1}{2}+\frac{J_{\parallel}}{J_{\perp}}, \tag{36}\]
with \(c_{\parallel}=2^{-1/3}(a^{(0)}-2^{-2/3}a^{(1)})\). This asymptotic relationship indicates that \(g_{\parallel}^{(2)}(0,1)\) must be much smaller to observe an impact on the binding energy. To explore these behaviors further, we plot the binding energy versus \(g^{(2)}\) in Fig. 7. For holes on separate legs, this reveals a monotonic relation between the binding energy and the \(g^{(2)}\) correlator for nearest neighbor holes for any value of \(J_{\parallel}\), which indeed enables experiments to infer a binding energy from a measured \(g^{(2)}\) function. In the case of holes on the same leg, however, the applicability of this approach may depend quite crucially on the value of \(J_{\parallel}\). In fact, for \(J_{\parallel}\ll t\), we see that only at extremely low values of \(g_{\parallel}^{(2)}(0,1)\) does the binding energy start to change significantly, which will naturally make it much harder to extract the binding energy in this way.
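In practice, Eqs. (35) and (36) can be read as simple conversion formulas from a measured nearest-neighbor correlator to a binding energy. The short sketch below does exactly that; the \(g^{(2)}\) inputs are made-up numbers standing in for experimental values.

```python
# Sketch: converting a measured g2 value into Eb via Eqs. (35) and (36).
from scipy.special import ai_zeros

ai_z, aip_z, _, _ = ai_zeros(1)
a0, a1 = -aip_z[0], -ai_z[0]
c_perp = 2.0**(-4.0/3.0) * (1.0 - 2.0**(-2.0/3.0))        # prefactor in Eq. (35)
c_par = 2.0**(-1.0/3.0) * (a0 - 2.0**(-2.0/3.0) * a1)     # prefactor in Eq. (36)

def Eb_over_Jperp(g2_over_N, J_par_over_Jperp, same_leg=False):
    if same_leg:
        return c_par / g2_over_N**(1.0/3.0) + 0.5 + J_par_over_Jperp   # Eq. (36)
    return c_perp / g2_over_N + 0.5 + J_par_over_Jperp                 # Eq. (35)

# hypothetical nearest-neighbor values g2(0, d=0)/N and g2(0, d=1)/N:
print("separate legs:", Eb_over_Jperp(0.3, 0.1))
print("same leg:     ", Eb_over_Jperp(0.3, 0.1, same_leg=True))
```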
## IV Nonequilibrium dynamics
In this section, we investigate the nonequilibrium dynamics of two initially localized holes. Such a quench experiment is a natural choice for quantum simulation experiments, and has recently been considered for the motion of a hole in a Fermi-Hubbard background [21], where the crossover dynamics was observed from an initial free ballistic motion of the hole, through signatures of string oscillations, and finally to the ballistic motion of magnetic polaron quasiparticles at long times [59; 5]. In the current setup, we investigate the situation where the holes are localized and start out as nearest neighbors, described by the wave functions for the separate-legs (\(\perp\)) and same-leg (\(\parallel\)) configurations
\[|\Psi_{2\perp}(\tau=0)\rangle =\hat{h}_{0,1}^{\dagger}\hat{h}_{0,2}^{\dagger}\prod_{l>0}\hat{s }_{l,1}^{\dagger}\prod_{m>0}\hat{s}_{m,2}^{\dagger}|\mathrm{AF}\rangle\] \[=\frac{1}{\sqrt{N}}\sum_{k}|\Psi_{2\perp}(k,0)\rangle\,,\] \[|\Psi_{2\parallel}(\tau=0)\rangle =\hat{h}_{0,1}^{\dagger}\hat{h}_{1,1}^{\dagger}|\mathrm{AF} \rangle=\frac{1}{\sqrt{N}}\sum_{k}|\Psi_{2\parallel}(k,1)\rangle\,, \tag{37}\]
using \(\tau\) as the variable for time to distinguish it from the hopping amplitude \(t\). In the second line, as well as the last expression of the third line, we utilize that the initial state is the superposition of all the crystal momentum states in Eqs. (15) and (16) for \(d=0\) and \(d=1\), respectively. To determine the full dynamics, we calculate the overlap of the initial states with the two-hole eigenstates in Eq. (17)
\[\langle\Psi_{2\perp}^{(n)}(k)|\Psi_{2\perp}(\tau=0)\rangle =\frac{C_{\perp}^{(n)}(k,0)}{\sqrt{N}},\] \[\langle\Psi_{2\parallel}^{(n)}(k)|\Psi_{2\parallel}(\tau=0)\rangle =\frac{C_{\parallel}^{(n)}(k,1)}{\sqrt{N}}\mathrm{e}^{-ik/2}. \tag{38}\]
Since the eigenstates are delocalized over the entire lattice, there is an overall factor of \(1/\sqrt{N}\), whereas the factors of \(C_{\perp}^{(n)}(k,0)=\sqrt{Z_{2\perp}^{(n)}(k)}\) and \(C_{\parallel}^{(n)}(k,1)=\sqrt{Z_{2\parallel}^{(n)}(k)}\) are the amplitudes for finding the holes as nearest neighbors in the states \(|\Psi_{2\perp}^{(n)}(k)\rangle\) and \(|\Psi_{2\parallel}^{(n)}(k)\rangle\), respectively. See also Eqs. (25) and (26). We note that it is crucial that states in all the bands \(n\) have an overlap with the initial state and must be taken into account. The nonequilibrium wave functions are then
\[|\Psi_{2\perp}(\tau)\rangle =\sum_{k,n}\mathrm{e}^{-iH\tau}\,|\Psi_{2\perp}^{(n)}(k)\rangle \,\langle\Psi_{2\perp}^{(n)}(k)|\Psi_{2\perp}(\tau=0)\rangle\] \[= \frac{1}{\sqrt{N}}\sum_{k,n}C_{\perp}^{(n)}(k,0)\mathrm{e}^{-iE_ {2\perp}^{(n)}(k)\tau}\,|\Psi_{2\perp}^{(n)}(k)\rangle, \tag{39}\]
Figure 8: Temporal evolution of the probability to find the two holes at distances \(d=0\) (a), \(d=1\) (b), \(d=2\) (c) for holes on separate legs of the ladder, and of finding the holes at distances \(d=1\) (d), \(d=2\) (e), and \(d=3\) (f) for holes on the same leg. This is shown for indicated values of the trans- (\(J_{\perp}\)) and intra-leg (\(J_{\parallel}\)) spin couplings and compared to the quantum walk for holes on separate legs (black lines) and on the same leg (grey lines). We also show the long-time average probability distributions \(\bar{P}_{\perp}(d),\bar{P}_{\parallel}(d)\) in dashed lines. (g)-(h) \(\bar{P}_{\perp}(d),\bar{P}_{\parallel}(d)\) compared to the ground state probability distributions, \(P_{\perp}^{0}(d)=|C_{\perp}^{(0)}(k=0,d)|^{2}\) and \(P_{\parallel}^{0}(d)=|C_{\parallel}^{(0)}(k=0,d)|^{2}\), for holes on separate legs (g) and on the same leg (h).
for the separate-legs configuration, and
\[|\Psi_{2\parallel}(\tau)\rangle = \sum_{k,n}\mathrm{e}^{-iH\tau}\,|\Psi_{2\parallel}^{(n)}(k)\rangle\,\langle\Psi_{2\parallel}^{(n)}(k)|\Psi_{2\parallel}(\tau=0)\rangle \tag{40}\] \[= \frac{1}{\sqrt{N}}\!\sum_{k,n}\mathrm{e}^{-ik}C_{\parallel}^{(n)}\!(k,1)\mathrm{e}^{-iE_{2\parallel}^{(n)}(k)\tau}|\Psi_{2\parallel}^{(n)}(k)\rangle\]
for holes on the same leg. To describe the two-hole dynamics more concisely, we use Eqs. (39) and (40) to compute the probability of finding the holes at a distance \(d\) as a function of time
\[P_{\perp}(d,\tau) = \frac{1}{N}\!\sum_{k}\Big{|}\sum_{n}C_{\perp}^{(n)}(k,0)C_{\perp }^{(n)}(k,d)\mathrm{e}^{-iE_{2\perp}^{(n)}(k)\tau}\Big{|}^{2},\] \[P_{\parallel}(d,\tau) = \frac{1}{N}\!\sum_{k}\Big{|}\sum_{n}C_{\parallel}^{(n)}(k,1)C_{ \parallel}^{(n)}(k,d)\mathrm{e}^{-iE_{2\parallel}^{(n)}(k)\tau}\Big{|}^{2}, \tag{41}\]
describing the relative wave function versus time. Figures 8(a)-8(f) show the dynamics of these probability distributions for several indicated distances, \(d\). At short times, the holes initially blow apart _ballistically_ as described by the quantum walks
\[P_{\perp}^{(\mathrm{q.w.})}(d,\tau) = \frac{1}{N}\!\sum_{k}\cos(kd)\Big{|}\frac{1}{N}\!\sum_{p}\mathrm{ e}^{+i(\varepsilon_{p}-\varepsilon_{p+k})\tau}\Big{|}^{2},\] \[P_{\parallel}^{(\mathrm{q.w.})}(d,\tau) = \frac{2}{N}\!\sum_{k}\cos(k)\Bigg{[}\cos(kd)\Big{|}\frac{1}{N}\! \sum_{p}\mathrm{e}^{+i(\varepsilon_{p}-\varepsilon_{p+k})\tau}\Big{|}^{2} \tag{42}\] \[\qquad\qquad-\Big{|}\frac{1}{N}\!\sum_{p}\mathrm{e}^{ip}\mathrm{ e}^{+i(\varepsilon_{p}-\varepsilon_{p+k})\tau}\Big{|}^{2}\Bigg{]},\]
derived in Appendix B. For holes on the same leg, lower line in Eq. (42), the hard-core property of the holes constrains their motion and slightly alters it from the quantum walk of independent holes on separate legs. On longer timescales, the distribution of the holes is seen to oscillate around the time-averaged distributions
\[\bar{P}_{\perp}(d) = \lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}d\tau\ P_{\perp}(d,\tau)\] \[= \frac{1}{N}\sum_{k,n}|C_{\perp}^{(n)}(k,0)|^{2}|C_{\perp}^{(n)}(k,d)|^{2},\] \[\bar{P}_{\parallel}(d) = \lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}d\tau\ P_{\parallel}(d,\tau) \tag{43}\] \[= \frac{1}{N}\sum_{k,n}|C_{\parallel}^{(n)}(k,1)|^{2}|C_{\parallel} ^{(n)}(k,d)|^{2},\]
which denotes the steady state approximately reached on long timescales. We note, however, that because the two-hole spectra in Figs. 3 and 4 consist of a discrete set of coherent peaks for any nonzero value of the trans-leg spin coupling \(J_{\perp}\), the motion will generally be highly aperiodic and never settle at its long-time average. As a result, the system does not fully equilibrate. The time average does, however, still give the characteristic distribution of the holes at long times. To understand this further, in Figs. 8(g)-8(h), we compare it to the probability distribution for the two holes in the ground state. We observe that the behavior of the steady state is markedly different from the ground state. First and foremost, the state will dynamically extend over much larger length scales than its ground state counterpart. This is challenging for a cold-atom simulation experiment, and may hinder the observation of the long-time dynamics. However, we stress that the dynamics starts to deviate from the quantum walk already after a few hopping timescales \(1/t\).
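A compact way to reproduce this behavior numerically for holes on separate legs is sketched below: for each \(k\), the relative motion defined by Eq. (10) of Appendix A is a tridiagonal matrix, with hopping of magnitude \(2t\cos(k/2)\) between neighboring distances and string potential \((J_{\perp}/2)(|d|-1)\), whose eigenvectors and eigenvalues take the roles of \(C_{\perp}^{(n)}(k,d)\) and \(E_{2\perp}^{(n)}(k)\) in Eq. (41). This is our own illustrative reconstruction with arbitrary parameters, not the production code behind Fig. 8, and the cutoffs must be converged as emphasized at the end of this section.

```python
# Illustrative sketch of P_perp(d, tau), Eq. (41), for the quench of Eq. (37).
import numpy as np

t, J_perp = 1.0, 0.5
N_k, D = 60, 40                      # k-points and cutoff |d| <= D (convergence to be checked)
d = np.arange(-D, D + 1)
ks = 2.0 * np.pi * np.arange(N_k) / N_k - np.pi

def P_perp(tau):
    P = np.zeros(d.size)
    for k in ks:
        hop = -2.0 * t * np.cos(k / 2.0)          # hopping in the (-1)^d gauge of Appendix A
        H = np.diag(0.5 * J_perp * (np.abs(d) - 1.0)) \
            + hop * (np.eye(d.size, k=1) + np.eye(d.size, k=-1))
        E, C = np.linalg.eigh(H)                  # columns of C correspond to C^(n)(k, d)
        amp = (C * C[d == 0, :]) @ np.exp(-1j * E * tau)   # sum over bands n in Eq. (41)
        P += np.abs(amp)**2 / N_k
    return P

for tau in (0.0, 1.0, 4.0):
    P = P_perp(tau)
    print(f"tau = {tau:3.1f}/t:  sum_d P = {P.sum():.3f},  <|d|> = {np.sum(np.abs(d)*P):.3f}")
```

The total probability remains unity at all times, which provides a basic sanity check of the band summation in Eq. (41).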
To investigate this more quantitatively using a simple experimental probe, we compute the average distances
\[\left\langle\left|d\right|\right\rangle_{\perp}(\tau) = \sum_{d}|d|P_{\perp}(d,\tau),\] \[\left\langle d\right\rangle_{\parallel}(\tau) = \sum_{d}dP_{\parallel}(d,\tau), \tag{44}\]
as a function of time. Two exemplary results are shown in Figs. 9(a)-9(b). For times \(\tau<2/t\), holes on the same leg will depart slightly slower than holes on separate legs, simply because there are more configurations available for holes on separate legs in the very first hop. After that, the hard-core constraint leads to faster divergent motion for holes on the same leg, but the motion remains a ballistic quantum walk. When the holes cross their long-time average, \(\overline{\left\langle\left|d\right|\right\rangle_{\perp},\left\langle d \right\rangle_{\parallel}}\), the motion starts to deviate significantly from the initial ballistic behavior and instead crosses over to oscillations around \(\overline{\left\langle\left|d\right|\right\rangle_{\perp},\left\langle d \right\rangle_{\parallel}}\). We use
Figure 9: (a)–(b) Mean distance versus time for indicated intra- and trans-leg spin couplings for holes on separate legs (blue lines) and the same leg (red lines). The black and grey dashed lines show the quantum walk for holes on separate legs and the same leg, respectively. At long times, the holes oscillate around well-defined mean distances \(\overline{\left\langle\left|d\right|\right\rangle_{\perp},\left\langle d \right\rangle_{\parallel}}\) (long-dashed lines), shown in (c) as a function of the trans-leg spin coupling \(J_{\perp}\), for indicated values of the intra-leg spin coupling \(J_{\parallel}\). For weak correlations, the time-averaged mean distances scale as \((t/J_{\perp})^{2}\) as the eigenstates in Fig. 6, while the scaling in the strong correlation limit is changed from \((t/J_{\perp})^{1/3}\) for the eigenstates to \(t/J_{\perp}\) for the dynamics.
this to define the dynamical regimes in Fig. 1(d). In fact, the interhole distance in the separate-legs configuration quickly evolves linearly in time, \(\left\langle\left|d\right|\right\rangle_{\perp}^{\left(\text{q.w.}\right)}=13/8\,(t\cdot\tau)\). We, therefore, simply define the crossover timescale in Fig. 1(d) as the time \(\tau\) at which the quantum-walk result \(\left\langle\left|d\right|\right\rangle_{\perp}^{\left(\text{q.w.}\right)}\simeq 13/8\,(t\cdot\tau)\) reaches the long-time average of \(\left\langle\left|d\right|\right\rangle_{\perp}\). We, hereby, note that the crossover from the quantum walk to the string oscillation regime for, say, \(J_{\perp}=3t\) happens already around \(\tau\simeq 1/t\). This should significantly help in observing at least the onset of the oscillation regime in a cold-atom simulation [21].
Finally, in Fig. 9(c), we compute the long-time average distances as a function of the trans-leg spin coupling, \(J_{\perp}\), for indicated values of the intra-leg spin coupling, \(J_{\parallel}\), in the case of holes on the same leg. For weak correlations, \(J_{\perp}\gg t\), we find that this distance is the same as for the ground states in Fig. 6(c) and, thus, vanishes as \((t/J)^{2}\). For strong correlations, \(J_{\perp}\ll t\), however, we see that the distance between the holes reaches a universal \(t/J\)-scaling for \(J_{\parallel}\ll t\). This same scaling was found for the motion of a single hole in antiferromagnetic Bethe lattice structures [53], and shows a remarkable qualitative difference to the equilibrium situation, where the eigenstates support only a much weaker \((t/J)^{1/3}\) scaling, Fig. 6(c). This quantifies the qualitative picture drawn from Figs. 8(g)-8(h) that the quenched holes, already for intermediate values of \(J_{\perp}\sim t\), spread out much more than one would expect from the spatial size of the ground state.
For the computation of the dynamics, we increase the system size \(N\) and the number of included bands \(n_{\max}\) until the results have converged. As a rule of thumb, this is achieved when the system size is a few times larger than the mean distance between the holes. For the most strongly correlated case of \(J_{\perp}=0.1t\), we go up to \(N=600\) sites and \(n_{\max}=88\) bands. Utilizing the inversion symmetry of the system, we, hereby, need to resolve the energy and residue of \(N/2\cdot n_{\max}=26400\) states. This emphasizes that we need a very thorough understanding of the eigenspectrum to be able to simulate the quench dynamics in this manner.
## V Conclusions and Outlook
Inspired by the recent experimental realization of hole pairing in a cold-atom quantum simulator [27] of a mixed-dimensional \(t\)-\(J\) model [28], we have investigated a simplified setup of Ising spin interactions. This allowed us to determine the _exact_ low-energy single- and two-hole eigenstates. We used this to rigorously show that two holes on separate legs bind strongly to each other in the strongly correlated regime of \(J_{\perp},J_{\parallel}\ll t\), in that it features a _superlinear_ binding energy: \(E_{b}\propto(J_{\perp}/t)^{2/3}\).
Furthermore, we used this exact description to rigorously account for the nonequilibrium quench dynamics following two initially localized holes at adjacent sites. Similar dynamics has previously been investigated for a single hole in a square lattice geometry [21], whose analysis provided evidence of emergent dynamical regimes, describing the crossover from a quantum walk on short timescales to string oscillations at intermediate timescales and finally the ballistic motion of magnetic polaron quasiparticles at long times [59; 5]. In the present mixed-dimensional setup, we found a similar dichotomy of the dynamics for _two_ holes with two major differences. First, the holes are confined to each other, such that their distance remains finite. Second, the string oscillations in the present scenario have an infinite lifetime, and, therefore, persist indefinitely, hindering the long-time equilibration of the system.
These results pave the way for a precise comparison with state-of-the-art cold-atom quantum simulation experiments. There are three essential ingredients that make the system interesting from this perspective. First, our mixed-dimensional model may be implemented both with fermions and hard-core bosons. Second, the effective interaction strength of \(4t/J_{\perp}\) means that the experiments can more easily access a strongly correlated regime already for \(J_{\perp}\lesssim 4t\). Third, this is particularly relevant for the quench dynamics, where the crossover from the quantum walk to the string oscillations already happens around times of \(\tau\simeq 1/t\) in this intermediate parameter regime. We, therefore, believe that it should be possible to experimentally access the crossover from the quantum walk to the confinement-induced oscillations.
Furthermore, such experiments also naturally lead to two interesting roads ahead. First, by systematically increasing the number of legs in the ladder, one can carefully analyze if the system supports the formation of stripes [60; 61; 62; 63] inherent to the phenomenology of high-temperature superconductors. Such inquiries were investigated in Ref. [50] using quantum Monte Carlo calculations, in the special case where the trans- and intra-leg spin interactions are equal. We speculate that our methodology may lend exact insights into this scenario at zero temperature. Second, we believe that it is possible to use the present methodology also at nonzero temperatures. This would require a thorough analysis of the eigenstates as more and more spins are flipped. This would enable the exact determination of the nonequilibrium dynamics of holes at finite temperatures, and could be used to answer whether the holes will deconfine [64] from each other as a result of thermal spin fluctuations.
###### Acknowledgements.
The author thanks Marton Kanasz-Nagy and J. Ignacio Cirac for valuable discussions. This Article was supported by the Carlsberg Foundation through a Carlsberg Internationalisation Fellowship.
## Appendix A Continuum limit for two holes
In this appendix, we derive the two-hole energy in the limit \(J_{\perp}/t\ll 1\). The derivation is very similar to the recent results in Bethe lattice structures [53].
We initially analyze the situation for holes on separate legs, starting from the equations of motion in Eq. (18). Using
\(\psi^{(n)}(k,d)=(-1)^{d}C^{(n)}(k,d)\), we obtain
\[\frac{E^{(n)}_{2\perp}(k)+4t\cos(k/2)+J_{\perp}/2}{2t\cos(k/2)}\psi^ {(n)}(k,d)=\] \[\frac{J_{\perp}}{4t\cos(k/2)}|d|\psi^{(n)}(k,d)\] \[-\left[\psi^{(n)}(k,d-1)-2\psi^{(n)}(k,d)+\psi^{(n)}(k,d+1) \right], \tag{10}\]
valid for any \(k\neq\pm\pi\). We then rescale lengths according to \(d=x/\lambda\), and define \(\phi^{(n)}(k,x)=\psi^{(n)}(k,x/\lambda)/\sqrt{\lambda}\). Inserting this in Eq. (10), we get
\[a^{(n)}\phi^{(n)}(k,x)=\frac{J_{\perp}}{4t\cos(k/2)}\frac{|x|}{ \lambda^{3}}\phi^{(n)}(k,x)\] \[-\frac{\phi^{(n)}(k,x-\lambda)-2\phi^{(n)}(k,x)+\phi^{(n)}(k,x+ \lambda)}{\lambda^{2}}, \tag{11}\]
with \(a^{(n)}=(E^{(n)}_{2\perp}(k)+4t\cos(k/2)+J_{\perp}/2)/(2t\cos(k/2)\cdot\lambda^{2})\). To remove the dependency on \(J_{\perp}/t\), we set \(\lambda^{3}(k)=J_{\perp}/(4t\cos(k/2))\). In the limit of \(\lambda\propto(J_{\perp}/t)^{1/3}\to 0^{+}\), the second line of Eq. (11) simply becomes the second derivative of \(\phi\). Hence, we are left with the differential equation
\[a^{(n)}\phi^{(n)}(k,x)=|x|\phi^{(n)}(k,x)-\frac{d^{2}\phi^{(n)}(k,x)}{dx^{2}}, \tag{12}\]
where the wave function is subject to the normalization condition \(\int_{-\infty}^{+\infty}dx|\phi(k,x)|^{2}=1\). Hence, we effectively have a single particle in one dimension subject to a linear potential in this limit. Rearranging yields
\[\frac{d^{2}f(y)}{dy^{2}}=yf(y), \tag{13}\]
where \(y=|x|-a^{(n)}\), and \(f(y)=\phi^{(n)}(k,y+a^{(n)})\). Hence, \(y\geq-a^{(n)}\) is required here. The solutions to Eq. (13) are the Airy functions \(\mathrm{Ai}\), \(\mathrm{Bi}\), such that \(f(y)=A\cdot\mathrm{Ai}(y)+B\cdot\mathrm{Bi}(y)\). Normalization of the wave function then dictates that \(B=0\), i.e. \(\phi^{(n)}(k,|x|)=A\cdot\mathrm{Ai}(|x|-a)\). Since the potential is even in \(x\), we may choose eigenfunctions that are either even or odd. For even functions, the derivative of \(\phi\) at \(x=0\) is
\[\frac{d\phi^{(n)}(k,x)}{dx}\Big{|}_{x=0^{\pm}}=\pm A\frac{d\mathrm{Ai}(y)}{dy }\Big{|}_{y=-a^{(n)}}. \tag{14}\]
Since the potential is continuous everywhere, so must the derivative be. This, in particular, holds at \(x=0\), and, therefore, \(-a^{(n)}\) must be a zero of the derivative of the Airy function, \(\mathrm{Ai}^{\prime}(-a^{(n)})=0\). This defines one set of eigenfunctions with the lowest eigenvalues given by \(a_{e}=1.01879..,3.24819..,4.82010..,\ldots\).
For odd functions, we need \(\phi^{(n)}(k,x)=A\cdot\mathrm{sgn}(x)\mathrm{Ai}(|x|-a^{(n)})\) to vanish at \(x=0\). Hence, \(-a^{(n)}\) must be a zero of the Airy function itself, \(\mathrm{Ai}(-a^{(n)})=0\). This defines another set of eigenvalues: \(a_{o}=2.33811..,4.08795..,5.52056..,\ldots\). As one can expect, we get an alternating pattern of even and odd eigenstates. The asymptotic energies are, hereby, given by
\[E^{(n)}_{2\perp}(k)=-4t\cos\left(\frac{k}{2}\right)\left[1-\frac{a^{(n)}}{2}\lambda^{2}(k)\right]-\frac{J_{\perp}}{2}, \tag{15}\]
with \(\lambda(k)=[J_{\perp}/(4t\cos(k/2))]^{1/3}\), and where \(a^{(2m)}=a_{e}^{(m)}\) and \(a^{(2m+1)}=a_{o}^{(m)}\) for even \(n=2m\) and odd \(n=2m+1\) eigenstates, respectively. The asymptotic eigenstates for holes on separate legs are, hereby,
\[\psi^{(n)}(k,d)=A\cdot(\mathrm{sgn}(d))^{n}\sqrt{\lambda(k)}\cdot\mathrm{Ai} (\lambda(k)|d|-a^{(n)}), \tag{16}\]
with normalization constants \(A_{n}\). The full derivation, here, carries over to two holes on the same leg. However, in this case the hard-core constraint of the holes means that the wave function must vanish at \(d=0\). Consequently, only the odd asymptotic wave functions, \(\psi^{(2n+1)}(k,d)\), from above are allowed in this case, and hence
\[E^{(n)}_{2\parallel}(k)=-4t\cos\left(\frac{k}{2}\right)\left[1-\frac{a^{(2n+1)}}{2}\lambda^{2}(k)\right]-\frac{J_{\perp}}{2}. \tag{17}\]
The lowest two-hole eigenstates are at vanishing total momentum, \(k=0\), and for \(a^{(0)}=1.01879..\) and \(a^{(1)}=2.33811..\) for holes on separate legs and the same legs, respectively,
\[E^{(0)}_{2\perp}(0) =-4t\left[1-\frac{a^{(0)}}{2}\left(\frac{J_{\perp}}{4t}\right)^{2 /3}\right]-\frac{J_{\perp}}{2},\] \[E^{(0)}_{2\parallel}(0) =-4t\left[1-\frac{a^{(1)}}{2}\left(\frac{J_{\perp}}{4t}\right)^{2 /3}\right]-\frac{J_{\perp}}{2}. \tag{18}\]
We note that for a fixed \(J_{\perp}/t\), Eqs. (15) and (16) will break down as one approaches \(k=\pm\pi\). Finally, a very similar calculation shows that asymptotically, the single-hole energy is
\[E^{(n)}_{1}=-2t\left[1-\frac{a^{(n)}}{2}\left(\frac{J_{\perp}}{2t}\right)^{2 /3}\right]+\frac{J_{\parallel}}{2}. \tag{19}\]
Equations (19) and (18) give the asymptotic binding energy in Eq. (30).
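The continuum limit derived above can be checked directly against the lattice problem: diagonalizing the discrete relative equation of motion, Eq. (10), at \(k=0\) and converting the lowest eigenvalues to the dimensionless combination \(a=(E+4t+J_{\perp}/2)/(2t\lambda^{2})\) should recover the interleaved zeros of \(\mathrm{Ai}^{\prime}\) and \(\mathrm{Ai}\) as \(J_{\perp}/t\to 0^{+}\). The sketch below is a minimal illustration of this check, with an arbitrary small \(J_{\perp}\) and a finite cutoff, and is not part of the analysis in the main text.

```python
# Consistency check of the continuum limit against the discrete problem, Eq. (10).
import numpy as np
from scipy.special import ai_zeros

t, J_perp, D = 1.0, 0.02, 300
lam = (J_perp / (4.0 * t))**(1.0 / 3.0)

d = np.arange(-D, D + 1)
H = np.diag(0.5 * J_perp * (np.abs(d) - 1.0)) \
    - 2.0 * t * (np.eye(d.size, k=1) + np.eye(d.size, k=-1))
E = np.linalg.eigvalsh(H)[:6]
a_numeric = (E + 4.0 * t + 0.5 * J_perp) / (2.0 * t * lam**2)

ai_z, aip_z, _, _ = ai_zeros(3)
a_airy = np.sort(np.concatenate((-aip_z, -ai_z)))   # 1.019, 2.338, 3.248, 4.088, ...
for n, (num, ref) in enumerate(zip(a_numeric, a_airy)):
    print(f"n = {n}:  lattice a = {num:7.4f},  Airy value = {ref:7.4f}")
```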
## Appendix B Quantum walks of two holes
In this appendix, we derive the probability distributions in Eq. (42) describing the distance between two non-interacting particles performing quantum walks either in the separate-legs or same-leg configuration.
The hopping Hamiltonian for identical particles may simply be written as
\[\hat{H}_{t}=-t\sum_{j,\mu}\left[\hat{c}^{\dagger}_{j,\mu}\hat{c}_{j+1,\mu}+\hat{c }^{\dagger}_{j+1,\mu}\hat{c}_{j,\mu}\right]. \tag{20}\]
To easily enforce the hard-core constraint in the case of particles on the same leg, we use fermionic commutation relations \(\{\hat{c}_{j,\mu},\hat{c}^{\dagger}_{l,\nu}\}=\delta_{j,l}\delta_{\mu,\nu}\). The Hamiltonian is diagonalized by Fourier transforming to crystal momentum states,
\[A_{\perp}(j,d,\tau) =\langle 0|\,\hat{c}_{j,1}\hat{c}_{j+d,2}\,|\Psi_{\perp}(\tau)\rangle= \frac{1}{N^{2}}\sum_{\begin{subarray}{c}k_{1},k_{2}\\ q_{1},q_{2}\end{subarray}}\mathrm{e}^{i(k_{2}+q_{2})j+q_{2}d}\mathrm{e}^{-i( \varepsilon_{k1}+\varepsilon_{q1})\tau}\langle 0|\,\hat{c}_{q_{2},2}\hat{c}_{k_{2},1} \hat{c}_{k_{1},1}^{\dagger}\hat{c}_{q_{1},2}^{\dagger}\,|0\rangle\,,\] \[A_{\parallel}(j,d,\tau) =\langle 0|\,\hat{c}_{j,1}\hat{c}_{j+d,1}\,|\Psi_{\parallel}(\tau)\rangle =\frac{1}{N^{2}}\sum_{\begin{subarray}{c}k_{1},k_{2}\\ q_{1},q_{2}\end{subarray}}\mathrm{e}^{i(k_{2}+q_{2})j+q_{2}d}\mathrm{e}^{-iq_{ 1}}\mathrm{e}^{-i(\varepsilon_{k1}+\varepsilon_{q1})\tau}\langle 0|\,\hat{c}_{q_{2},1} \hat{c}_{k_{2},1}\hat{c}_{k_{1},1}^{\dagger}\hat{c}_{q_{1},1}^{\dagger}\,|0 \rangle\,, \tag{40}\]
because the particles on separate legs only have a single nonzero matrix element \(\langle 0|\,\hat{c}_{q_{2},2}\hat{c}_{k_{2},1}\hat{c}_{k_{1},1}^{\dagger}\hat{c}_{q_{1},2}^{\dagger}\,|0\rangle=\delta_{k_{1},k_{2}}\delta_{q_{1},q_{2}}\), whereas particles on the same leg also feature an exchange term \(\langle 0|\,\hat{c}_{q_{2},1}\hat{c}_{k_{2},1}\hat{c}_{k_{1},1}^{\dagger}\hat{c}_{q_{1},1}^{\dagger}\,|0\rangle=\delta_{k_{1},k_{2}}\delta_{q_{1},q_{2}}-\delta_{k_{1},q_{2}}\delta_{q_{1},k_{2}}\). As a result, the amplitudes simplify to
\[A_{\perp}(j,d,\tau) =\frac{1}{N^{2}}{\sum_{k,q}}\mathrm{e}^{iqd}\mathrm{e}^{i(k+q)j} \mathrm{e}^{-i(\varepsilon_{k}+\varepsilon_{q})\tau},\] \[A_{\parallel}(j,d,\tau) =\frac{1}{N^{2}}{\sum_{k,q}}\big{[}\mathrm{e}^{iqd}-\mathrm{e}^{ ikd}\big{]}\mathrm{e}^{i(k+q)j}\mathrm{e}^{-iq}\mathrm{e}^{-i(\varepsilon_{k}+ \varepsilon_{q})\tau}. \tag{41}\]
From here, we may, then, calculate the probabilities to find the holes a distance \(d\) apart. Since we are not interested in the absolute position of the holes, \(j\), we get
\[P_{\perp}^{\mathrm{a.w.}}(d,\tau)=\sum_{j}|A_{\perp}(j,d,\tau)|^{2}\] \[=\frac{1}{N^{4}}{\sum_{\begin{subarray}{c}j,k_{1},k_{2}\\ q_{1},q_{2}\end{subarray}}}\mathrm{e}^{i(q_{1}-q_{2})d}\mathrm{e}^{i(k_{1}-k_{2 }+q_{1}-q_{2})j}\mathrm{e}^{-i(\varepsilon_{k1}+\varepsilon_{q1}-\varepsilon_{ k2}-\varepsilon_{q2})\tau}\] \[=\frac{1}{N^{3}}{\sum_{k,q,p}}\mathrm{e}^{ipd}\mathrm{e}^{i( \varepsilon_{k+p}+\varepsilon_{q-p}-\varepsilon_{k}-\varepsilon_{q})\tau}\] \[=\frac{1}{N^{3}}{\sum_{k,q,p}}\mathrm{e}^{ipd}\mathrm{e}^{i( \varepsilon_{k+p}-\varepsilon_{k})\tau}\mathrm{e}^{-i(\varepsilon_{q+p}- \varepsilon_{q})\tau}\] \[=\frac{1}{N}{\sum_{p}}\mathrm{e}^{ipd}\Big{|}\frac{1}{N}\sum_{k} \mathrm{e}^{i(\varepsilon_{k}-\varepsilon_{k+p})\tau}\Big{|}^{2} \tag{42}\]
Combining the summands at \(p\) and \(-p\) then results in the top line of Eq. (42), describing the probability distribution for the distance between the holes on separate legs. A completely analogous calculation derives the bottom line of Eq. (42) from the bottom line of Eq. (41) for two holes on the same leg.
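For completeness, the top line of Eq. (42) can be evaluated numerically in a few lines. The sketch below assumes the free single-hole dispersion \(\varepsilon_{p}=-2t\cos(p)\) (an assumption consistent with free nearest-neighbor hopping along a leg, not spelled out explicitly above) and checks that the distribution stays normalized while spreading ballistically.

```python
# Numerical sketch of the free quantum walk, top line of Eq. (42).
import numpy as np

t, N = 1.0, 400
k = 2.0 * np.pi * np.arange(N) / N
eps = -2.0 * t * np.cos(k)           # assumed single-hole dispersion

def dist_qw_perp(tau, dmax=30):
    # |F(k, tau)|^2 with F(k, tau) = (1/N) sum_p exp(+i(eps_p - eps_{p+k}) tau)
    F2 = np.array([np.abs(np.mean(np.exp(1j * (eps - np.roll(eps, -m)) * tau)))**2
                   for m in range(N)])
    d = np.arange(-dmax, dmax + 1)
    P = np.array([np.mean(np.cos(k * di) * F2) for di in d])
    return d, P

for tau in (0.0, 0.5, 1.0, 2.0):
    d, P = dist_qw_perp(tau)
    print(f"tau = {tau:3.1f}/t:  sum_d P = {P.sum():.3f},  <|d|> = {np.sum(np.abs(d)*P):.3f}")
```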
|
2307.05272 | Atomic-Scale Insights into Damage Produced by Swift Heavy Ions in
Polyethylene | We describe the formation of swift heavy ion tracks in polyethylene (PE) by
combining the Monte Carlo code TREKIS, which models electronic excitation in
nanometric proximity of the ion trajectory, with the molecular dynamics
simulating a response of the atomic system to the perturbation. The model
predicts circular tracks in amorphous PE but elliptical ones in crystalline PE
caused by preferential propagation of excitation along polymer chains during
the cooling stage. The obtained track sizes and shapes agree well with the
high-resolution microscopy of tracks in PE. The velocity effect in PE is shown:
the track parameters differ for ions with the same energy losses but different
velocities. | P. Babaev, F. Akhmetov, S. Gorbunov, N. Medvedev, R. Rymzhanov, R. Voronkov, A. E. Volkov | 2023-07-11T14:05:25Z | http://arxiv.org/abs/2307.05272v2 | # Atomic-Scale Insights into Damage Produced by Swift
###### Abstract
We describe the formation of swift heavy ion tracks in polyethylene (PE) by combining the Monte Carlo code TREKIS, which models electronic excitation in nanometric proximity of the ion trajectory, with the molecular dynamics simulating a response of the atomic system to the perturbation. The model predicts circular tracks in amorphous PE but elliptical ones in crystalline PE caused by preferential propagation of excitation along polymer chains during the cooling stage. The obtained track sizes and shapes agree well with the high-resolution microscopy of tracks in PE. The velocity effect in PE is shown: the track parameters differ for ions with the same energy losses but different velocities.
\({}^{1}\)P.N. Lebedev Physical Institute of the Russian Academy of Sciences, Leninskij pr., 53,119991 Moscow, Russia
\({}^{2}\)Industrial Focus Group XUV Optics, MESA+ Institute for Nanotechnology, University of Twente, Drienerlolaan 5, 7522 NB Enschede, The Netherlands
\({}^{3}\)Institute of Physics, Czech Academy of Sciences, Na Slovance 1999/2, 182 21 Prague 8, Czech Republic
\({}^{4}\)Institute of Plasma Physics, Czech Academy of Sciences, Za Slovankou 3, 182 00 Prague 8, Czech Republic
\({}^{5}\)Flerov Laboratory of Nuclear Research, Joint Institute for Nuclear Research, Dubna, Russia
\({}^{6}\)Institute of Nuclear Physics, Almaty, Kazakhstan
\({}^{*}\) Corresponding author: [email protected]
Keywords: swift heavy ion, polyethylene, Monte-Carlo TREKIS, molecular dynamics, AIREBO-M, velocity effect.
## 1 Introduction
Swift heavy ions (SHI, \(E>1\) MeV/a.m.u., \(M\geq 6\) a.m.u.) lose their energy mainly via excitation of the electronic system of a target in the nanometer proximity of their trajectories [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 82, 84, 86, 89, 91, 85, 87, 89, 92, 93, 94, 95, 96, 97, 98, 99, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 41, 42, 43, 45, 46, 47, 48, 49, 51, 52, 54, 55, 56, 57, 58, 59, 61, 62, 63, 65, 66, 67, 68, 69, 71, 73, 75, 76, 77, 78, 79, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 10, 11, 12, 14, 15, 16, 17, 18, 19, 21, 22, 23, 24, 25, 26, 27, 28, 29, 31, 32, 33, 35, 36, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 52, 53, 54, 55, 56, 57, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 72, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 92, 93, 94, 95, 96, 97, 98, 99, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 34, 35, 36, 37, 38, 39, 40, 41, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 56, 57, 58, 59, 62, 63, 64, 65, 66, 67, 68, 69, 72, 73, 74, 75, 76, 78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 89, 93, 94, 95, 96, 97, 98, 99, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 34, 35, 36, 37, 38, 39, 41, 42, 43, 45, 46, 47, 48, 49, 53, 54, 55, 56, 57, 58, 59, 63, 64, 65, 66, 67, 68, 69, 73, 74, 75, 76, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 94, 95, 96, 97, 98, 99, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 73, 74, 75, 76, 77, 78, 79, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 43, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 56, 57, 59, 60, 61, 62, 63, 64, 65, 67, 68, 69, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 21, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 56, 57, 58, 59, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 73, 74, 75, 76, 78, 79, 82, 83, 84, 85, 86, 87, 89, 90, 91, 92, 93, 94, 95, 96, 97, 99, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 73, 74, 75, 76, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 8, 89, 93, 94, 95, 96, 9
3]. The passage of an SHI is characterized by a high energy release followed by drastic and fast (\(\sim\)100 ps) structure transformations in a target leading to the formation of ion tracks with extremely high length/radius ratio (\(\sim\)100 \(\upmu\)m/10 nm) [3, 4].
SHI irradiation triggers effects changing the physical and chemical properties of organic polymers: primary bond cleavage leads to chain scissions and cross-linking; low mass fragments (e. g. H\({}_{2}\), C\({}_{x}\)H\({}_{y}\), C\({}_{x}\)O\({}_{x}\) in widely used polymers) tend to leave the track reducing its density; unsaturated bonds, free radicals and small molecules such as alcohols and carboxylic acids are formed [5, 6]. Excited long organic molecular chains favor the formation of isomers and cross-linking of polymer chains, resulting in a wide variety of polymer structures. Due to the low thermal conductivity of polymers, cooling of the initially heated region with a diameter \(\leq\) 10 nm takes over 100 ps [2], which may induce effects that are absent in inorganic materials.
Technological advances [1] motivate the development of various experimental methods to study damage in latent tracks in polymers. Transmission electron microscopy enables observation of the size and shape of SHI tracks [6, 7]. Spectroscopic techniques such as infrared, ultraviolet, visible, and Raman spectroscopy investigate physical and chemical changes in the irradiated polymers [8, 9]. Recently, diffraction techniques have been used to investigate the velocity effect in polyethylene terephthalate: dependence of the damaged track size on the SHI velocity at the same linear energy transfer (LET or \(S_{e}\)) to the electronic system [10].
Analytical models can describe track parameters in polymers by fitting them to the experimental data. It is possible to determine the profile of changes in the electron density in a track and its characteristic dimensions by fitting SAXS curves [10]. Describing chemical species in a cooled track, a simple model in Ref. [11] translates the radial distribution of the released energy into the radial concentrations of transient types of low-mass fragments, and further into radial concentrations of the final fragments, cross-linked structures, and intact macromolecules.
Modelling is also widely used in investigations of the SHI effects in polymers. Monte Carlo (MC) simulation of a response of the electronic system in polymeric material to the excitation by an SHI impact was reported in Ref. [12]. The approach describes the spatial distribution of ionization events in a track, reproducing inhomogeneity of the radial ionization density and generation of a large number of low-energy electrons. Refs. [13, 14, 15] used the scattering cross sections extracted from the experimental optical data for calculations of the electronic inelastic mean free paths, the stopping power, and the radial distributions of the energy deposited in a material by secondary electrons in some polymers.
Progress in the development of the reactive force fields allowed the tracking of atomic trajectories and bond rearrangements in polymers with atomic precision [16, 17, 18]. Refs. [19, 20, 21] combined the thermal spike model with coarse-grained molecular dynamics (MD), treating monomers as separate simulation blocks (the united-atom approach). This approach allowed to estimate crater formation and sputtering from the surface of polymers irradiated with SHIs. Alternatively, a detailed approach in Ref. [22] models polymers irradiated with SHIs combining the thermal spike model adapted to treat polymers with the modified charge-implicit ReaxFF reactive force field in molecular dynamics, treating each atom as a distinct simulation block (the all-atom approach). The approach reproduced the track structure and revealed chemical transformations during track formation. Computational models describing the etching of SHI tracks are also being actively developed [3, 23, 24].
The application of hybrid approaches is based on the fundamental separation of the track formation kinetics into a sequence of well-separated stages allowing to construct multiscale models for its description [3, 25]. An SHI passes the interatomic distance within \(\sim\)10\({}^{\text{-3}}\) fs, causing extreme electronic excitation. Relaxation of the excitation, accompanied by energy transfer to the lattice, lasts \(\sim\)50-100 fs. It is followed by the kinetics of the heated atomic ensemble up to track cooling by about \(\sim\) 1 ns after the ion passage.
Recently, an approach quantitatively describing all coupled stages of SHI track formation in inorganic dielectrics was developed [26, 27, 28]. It is based on the application of MC code TREKIS-3 modeling excitation of the electronic and atomic systems followed by molecular dynamics (LAMMPS) tracing atomic trajectories [29]. The scattering cross sections used in the MC module account for collective responses of the electronic and atomic systems to the excitation in the framework of the loss function (the imaginary part of the inverse complex dielectric function (CDF)) formalism [30, 31].
To describe SHI track formation in PE, we applied the same methodology [26, 27, 28], using the reactive AIREBO-M force field for hydrocarbons [18] in the MD simulations. Using polyethylene as an example, we demonstrate that this approach is a reliable tool for modeling SHI tracks in polymers, which reproduces the track sizes detected in the experiments without _a posteriori_ fitting parameters. The obtained spatial distributions of broken bonds and small-mass fragments can be used in models of chemical activation of SHI-irradiated polymers and their etching. We also point out the velocity effect: different PE damage by ions with the same LET but different energies.
## 2 Model
We use the combined approach consisting of Monte Carlo modeling the response of the electronic system to an SHI passage and molecular dynamics tracing the atomic system evolution.
The asymptotic trajectory event-by-event MC code TREKIS-3 describes the excitation of the electronic ensemble and calculates the energy transferred to the atomic system [32, 33, 34]. TREKIS-3 models the following processes: 1) SHI passage through a target, resulting in the creation of primary electrons and holes; 2) scattering of primary and all secondary electrons on target atoms and electrons; 3) Auger decays of core holes, resulting in the generation of secondary electrons; 4) radiative decays of the core holes with subsequent photon emission and photoabsorption exciting new electrons and holes; 5) transport of valence holes and their interaction with atoms of the target [30, 31].
Within the first-order Born approximation, the inelastic scattering cross section of an incident particle (an SHI, an electron, or a valence hole) is expressed in terms of the loss function that takes into account the collective response of a target to excitation [35, 36, 37]. The loss function approximated with a set of oscillatory functions can be reconstructed from the optical coefficients using Ritchie and Howie's algorithm [36]. We use the optical coefficients for polycrystalline PE from Ref.[30], because no data were found for crystalline and amorphous PE separately. Both the used coefficients of the loss function and its graph for PE can be found in [30] (Table A7 and Figure A6, respectively) and in [38]. However, the application of the phonon part of the loss function restored from the optical coefficients provides unrealistically large elastic mean free paths of electrons (up to \(\sim\)1 \(\upmu\)m). Thus, we describe the elastic scattering of electrons and valence holes with the Rutherford cross sections with the modified Molier screening parameter [39].
The MC procedure is iterated 10\({}^{3}\) times to achieve reliable statistics. We obtain the temporal dependencies of the radial distributions of the electron density and energy, the distribution of holes in the valence band and various atomic shells, and the energy transferred to the atomic system of the target. As in Refs. [40, 41], in addition to the energy received by atoms due to the scattering of electrons and valence holes, we assume an instantaneous deposition of the potential energy of valence holes (electron-hole pairs) to the lattice at 100 fs after the SHI passage. This energy transfer approximates the effects of atomic acceleration caused by a transient nonthermal modification of the interatomic potential [42, 43].
The calculated radial distribution of the energy transferred to the atomic system of the target is then used as input data for the classical MD code LAMMPS [44], which describes atomic
response causing damage in the proximity of the projectile trajectory. We set the initial velocities of atoms in coaxial cylindrical layers using the distribution of transferred energy calculated with the MC assuming Gaussian-like dispersion of the kinetic energy and the uniform distribution of the atomic momenta within each layer [40, 41]. Interactions between atoms in PE are calculated using the AIREBO-M force field [18].
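As a schematic illustration of this step (our simplified sketch, not the actual TREKIS-3/LAMMPS interface, and with placeholder numbers throughout), the kinetic energy prescribed for each coaxial cylindrical shell can be distributed by giving the atoms of the shell random velocity directions and a common speed chosen such that the shell's kinetic energy matches the deposited energy; the Gaussian-like spread of kinetic energies used in the model is omitted here for brevity.

```python
# Schematic velocity initialization from a radial energy profile (toy numbers).
import numpy as np

rng = np.random.default_rng(0)
natoms = 10000
pos = rng.uniform(0.0, 50.0, size=(natoms, 3))     # placeholder coordinates (Angstrom)
mass = np.full(natoms, 13.0)                       # placeholder masses
axis_xy = np.array([25.0, 25.0])                   # ion trajectory along z through the center

r = np.linalg.norm(pos[:, :2] - axis_xy, axis=1)
edges = np.arange(0.0, 31.0, 1.0)                  # 1-Angstrom-wide coaxial shells
shell = np.digitize(r, edges) - 1
dE_shell = 100.0 * np.exp(-edges[:-1] / 3.0)       # placeholder deposited energy per shell

direction = rng.normal(size=(natoms, 3))
direction /= np.linalg.norm(direction, axis=1, keepdims=True)
speed = np.zeros(natoms)
for s in range(len(edges) - 1):
    idx = np.where(shell == s)[0]
    if idx.size:
        ke_unit = 0.5 * np.sum(mass[idx])          # kinetic energy if all speeds were 1
        speed[idx] = np.sqrt(dE_shell[s] / ke_unit)
vel = direction * speed[:, None]
print("total kinetic energy:", 0.5 * np.sum(mass * np.sum(vel**2, axis=1)))
```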
Although experimental samples usually have a polycrystalline or mixed polycrystalline-amorphous structure, we use fully crystalline and fully amorphous supercells to demonstrate damage characteristics in them.
In the MD simulation, the orthorhombic crystalline supercell of 197\(\times\)197\(\times\)50 A\({}^{3}\) with 259200 atoms is used, with the unit cell parameters taken from Ref. [45]. The usual width of a PE crystalline grain (the so-called _lamella_) is about 200 A [46]; therefore, the simulations capture damage across its entire width. Each polymer chain in the crystalline supercell contains 156 (CH\({}_{2}\)) polymer units (468 atoms). The crystalline sample in our simulation was prepared using the Moltemplate tool [47] and has a density of about 1 g/cm\({}^{3}\).
For simulations of the amorphous PE, the supercell has dimensions of 204\(\times\)190\(\times\)50 A\({}^{3}\) and contains 207168 atoms. Chains in the amorphous supercell have 1,328 atoms (442 carbons and 886 hydrogens), built from CH\({}_{2}\) units and terminated with CH\({}_{3}\) groups at their ends, with the unit cell parameters from Ref. [45]. The amorphous sample, prepared using the EMC tool [48], has a density of about 0.82 g/cm\({}^{3}\).
Both the crystalline and amorphous simulated densities are close to the experimental ones from Ref. [49] (1.003 g/cm\({}^{3}\) and 0.85 g/cm\({}^{3}\), respectively). Examples of the supercell structures are shown in Fig. 1.
Periodic boundary conditions are used in all directions. The boundaries of the supercell along the directions perpendicular to the SHI trajectory are cooled by the Berendsen thermostat to 300 K with a characteristic time of 0.1 ps [50]. The supercell is simulated until its average temperature drops below 350 K, after which no further structural changes are expected. Refs. [30, 31] describe the calculations in more detail. The results of the simulations are visualized with the help of the OVITO software [51].
The selection of the U, Xe, and Zn ions for the simulations is based on the availability of experimental data for comparison [52].
## 3 Results and Discussion
### 3.1 Structure changes: amorphous vs. crystalline PE
Figs. 2 and 3 show snapshots of the amorphous and crystalline supercells before and after the impact of 1470 MeV Xe and 700 MeV U ions, respectively, together with the corresponding partial radial pair distribution functions (RPDF). The changes in the C-C RPDF characterize the cleavage of polymer chains and the appearance of new structures; the C-H RPDF shows the detachment of hydrogen atoms from parent carbon chains; the H-H RPDF shows the appearance of molecular hydrogen (the 0.75 A peak). Areas of damaged and less dense material can be seen in the supercell snapshots.
The decrease of the major peaks in C-C, C-H, and H-H RPDFs and the appearance of the minor peaks in C-H and H-H RPDFs indicate severely damaged structures with bond rearrangements and formations of new molecular species. No voids were observed in the track in the simulation, which agrees with the experiment [52].
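For reference, the partial radial pair distribution functions discussed above can be estimated with a short histogram-based routine such as the one below. This is a minimal sketch assuming an orthorhombic periodic cell and a brute-force pair loop; the RPDFs in Figs. 2 and 3 were prepared with OVITO [51].

```python
import numpy as np

def partial_rpdf(pos_a, pos_b, box, r_max=6.0, nbins=300):
    """Partial radial pair distribution function g_ab(r) with periodic boundaries.

    pos_a, pos_b : (Na, 3) and (Nb, 3) positions of the two species (Angstrom)
    box          : (3,) orthorhombic cell lengths (Angstrom)
    Returns bin centres r and g_ab(r).
    """
    box = np.asarray(box, dtype=float)
    # Full pairwise distance matrix (fine for a sketch; use a cell list for 2.6e5 atoms).
    d = pos_a[:, None, :] - pos_b[None, :, :]
    d -= box * np.round(d / box)                 # minimum-image convention
    dist = np.linalg.norm(d, axis=-1).ravel()
    dist = dist[(dist > 1e-6) & (dist < r_max)]  # drop self-pairs

    hist, edges = np.histogram(dist, bins=nbins, range=(0.0, r_max))
    r = 0.5 * (edges[1:] + edges[:-1])
    shell_vol = 4.0 * np.pi * r ** 2 * (edges[1] - edges[0])
    rho_b = len(pos_b) / box.prod()              # number density of species b
    g = hist / (len(pos_a) * rho_b * shell_vol)
    return r, g
```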
Figure 1: Crystalline **(top)** and amorphous **(bottom)** supercells with their magnified structures. Red balls represent carbon atoms, and blue balls are hydrogen.
Figure 2: MD supercell snapshots (left panels) and RPDFs (right panels) prepared by OVITO [51]: (a) pristine amorphous cell of PE; (b) amorphous one irradiated with Xe
1470 MeV; (c) pristine crystalline PE cell; (d) crystalline one irradiated with Xe 1470 MeV.
Figure 3: MD supercell snapshots (left panels) and RPDFs (right panels) prepared by OVITO [51]. (a) Pristine amorphous cell of PE; (b) amorphous one irradiated with U at
the Bragg peak (700 MeV); (c) pristine crystalline PE cell; (d) crystalline one irradiated with uranium U at the Bragg peak.
Figure 4 presents the temporal evolution of the distribution of unsaturated carbon atoms in the amorphous and crystalline polyethylene supercells after the 1470 MeV Xe ion impact. We define an unsaturated carbon atom as an atom with dangling bonds (and/or double or triple bonds) and thus fewer than 4 neighbors within the cut-off radius of interaction (2 A for C-C and 1.8 A for C-H interactions in the AIREBO-M potential [53]). The spatial distribution of the initial damage (at 1 ps) is cylindrically symmetric in both amorphous and crystalline PE. The distribution of unsaturated carbons in the amorphous sample (Fig. 4a) remains circular, while relaxation of this damaged region in the crystalline supercell (Fig. 4b) later causes enhanced structure transformations along the PE chains (see also the three-dimensional distributions in Supplementary Materials, Figs. S1, S2).
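In practice, this definition amounts to counting neighbours within the species-dependent cut-offs and flagging carbons with fewer than four of them. A brute-force sketch of this neighbour-counting criterion is given below (O(N\({}^{2}\)) and therefore only illustrative; production analyses would rely on neighbour lists).

```python
import numpy as np

def unsaturated_carbons(pos, species, box, r_cc=2.0, r_ch=1.8):
    """Return indices of carbon atoms with fewer than 4 neighbours.

    pos     : (N, 3) positions in Angstrom
    species : (N,) array of 'C' / 'H' labels
    box     : (3,) orthorhombic cell lengths; r_cc, r_ch are cut-offs in Angstrom
    """
    box = np.asarray(box, dtype=float)
    is_c = (species == 'C')
    flagged = []
    for i in np.where(is_c)[0]:
        d = pos - pos[i]
        d -= box * np.round(d / box)            # periodic minimum image
        dist = np.linalg.norm(d, axis=1)
        cutoff = np.where(is_c, r_cc, r_ch)     # C-C vs. C-H cut-off per neighbour
        n_neigh = np.count_nonzero((dist > 1e-6) & (dist < cutoff))
        if n_neigh < 4:
            flagged.append(i)
    return np.asarray(flagged, dtype=int)
```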
Analysis of the number of unsaturated carbons shows that the increase in the number of bond breaks significantly slows down by about 25 picoseconds in both the amorphous and crystalline cases. During further cooling of the track, the damaged area containing these defects expands without a notable increase in their total number. Figure 4 (see also Figures S3-S6 in Supplementary Materials) shows that the number of unsaturated carbon atoms and their positions hardly change after 250 ps in the amorphous sample and after 100 ps in the crystalline one. Cooling down to 350 K takes \(\sim\)1 ns, which is an order of magnitude longer than in inorganic insulators [2].
Figure 5 compares the regions of unsaturated carbons with the area of the decreased density and the experimentally measured track (observed as the region stained with a chemical colorant around the SHI's trajectory [52]).
Figure 4: Spatio-temporal distribution of unsaturated carbons at different times after irradiation with 1470 MeV \({}^{129}\)Xe ion in **(a)** amorphous and **(b)** crystalline PE. The projection of the cell perpendicular to the direction of the SHI trajectory is shown.
As mentioned above, the calculated damage in the amorphous supercell has a nearly circular shape due to the lack of specific chain directions and the radial energy deposition. The experimentally measured tracks in the amorphous samples irradiated with Xe ions also have a circular shape, with an average diameter of 62.8 \(\pm\) 20.4 A [52]. In contrast, the damage in the crystalline supercell (Fig. 5b) has an elongated shape along the PE chains, similar to that observed in the experiment, with a width of 40-60 A and a length of about 20 nm [52].
In Figure 6, the track sizes calculated for U and Xe ions at 11.4 MeV/u irradiation are compared with the experimental data [52]. The track size decreases significantly with the atomic number of the SHI in our simulation, in agreement with the observation in [52]. In the case of Zn, the simulation shows no dangling bonds. Although the Zn ion track diameters are reported in [52] to be in the range of 24-58 A, they are poorly visible in TEM (see Fig. 4 in [52]). In addition, the authors of [52] noted that the Zn tracks are unstable under electron-beam irradiation in TEM and partially recover during observation, making a direct comparison with the simulation impossible. The authors of [52] also noted that the exact mechanism of adhesion of the colorant molecules (applied _post-mortem_) to PE in a damaged track is unknown. It was assumed that colorant molecules attach to areas of high damage or higher free volume, enabling visualization of the produced damage. There is a decrease in the material density in the proximity of the Zn trajectory in our simulation (see Figure S7 in Supplementary Materials). Our results suggest that the applied colorant can attach to sites of reduced density even in the absence of unsaturated carbons.

Figure 5: Calculated spatial distributions of unsaturated carbons and relative densities in **(a)** amorphous and **(b)** crystalline PE after 1470 MeV \({}^{129}\)Xe impact. The black circle in **(a)** shows the average experimental track diameter (62.8 \(\pm\) 20.4 Å) [52]; two black arcs in **(b)** present the minimal registered track width – 40 Å [52]. White crosses mark the SHI impact points.
More generally, a detailed quantitative comparison of the simulation with the experiment requires the development of models describing the chemical kinetics of colorant-PE interaction, which is beyond the scope of the present work.
Figure 6: Unsaturated carbons after irradiation with the ions (a) U and (b) Xe, with the energy of 11.4 MeV/a.m.u. The solid circles indicate the lower and the upper diameters of the track, according to Ref. [52]. Crosses mark SHI’s penetration points. The projection of the cell perpendicular to the direction of the SHI trajectory is shown.
### 3.2 Polymer chain damage and fragment distributions
Figure 7 visualizes only the intact chains, highlighting the most damaged areas, whose sizes and shapes coincide with those observed in the experiment.
Figure 8 illustrates the mass spectrum of fragments produced after the cooling. The SHI impact mainly produces hydrogen dimers and unbroken chains, with a small contribution of various molecular species, including free radicals. In amorphous PE, fragments with masses between molecular hydrogen and intact chains are present (see inset in Fig. 8a). The mass distribution in crystalline PE differs from that in the amorphous one - fragments are compactly grouped near the two main peaks corresponding to molecular hydrogen and intact chains. Their relative quantity is much lower than in amorphous PE (see inset in Fig. 8b). Additionally, some of the fragments cross-linked during relaxation, resulting in a small number of chains longer than the initial ones. 3D figures showing various molecular fragments can be found in Supplementary Materials (Figures S8, S9).
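Such mass spectra can be obtained by grouping atoms into bonded clusters (connected components of a distance-based bond graph) and collecting the cluster masses. The sketch below illustrates the idea with SciPy; the C-C and C-H cut-offs follow the unsaturated-carbon analysis above, while the H-H cut-off of 1.0 A is an assumption made here for illustration, and the exact fragment-identification procedure of the production analysis is not reproduced.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

MASS = {'C': 12.011, 'H': 1.008}  # a.m.u.

def fragment_masses(pos, species, box, r_cc=2.0, r_ch=1.8, r_hh=1.0):
    """Cluster atoms into molecular fragments and return the fragment masses."""
    box = np.asarray(box, dtype=float)
    n = len(pos)
    rows, cols = [], []
    for i in range(n):                       # brute-force bond search (sketch only)
        d = pos - pos[i]
        d -= box * np.round(d / box)
        dist = np.linalg.norm(d, axis=1)
        for j in np.where((dist > 1e-6) & (dist < r_cc))[0]:
            pair = {species[i], species[j]}
            cut = r_cc if pair == {'C'} else (r_hh if pair == {'H'} else r_ch)
            if dist[j] < cut:
                rows.append(i)
                cols.append(j)
    adj = coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
    n_frag, labels = connected_components(adj, directed=False)
    mass = np.array([MASS[s] for s in species])
    return np.array([mass[labels == k].sum() for k in range(n_frag)])
```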
Figure 7: Supercells showing **(a)** undamaged chains in amorphous PE and **(b)** undamaged and longer cross-linked chains in crystalline PE irradiated with 1470 MeV \({}^{129}\)Xe.
The spatial distributions of low-mass fragments (under 1000 a.m.u.) are similar to those of the unsaturated carbons and visible damage (cf. Figs. 4, 5, 9, 10). The distribution of small fragments is circular in the amorphous case and elongated in the crystal (Fig. 9). The same trend is seen for the fragments of higher masses (Fig. 10).
Figure 8: Mass distribution of fragments in a track in **a)** amorphous PE and **b)** crystalline lamella after the passage of Xe ion with energy 1470 MeV (the insets show the full range of masses). The percentages present depicted fragments’ proportions corresponding to individual peaks; carbon and hydrogen atoms are shown in black and gray, respectively.
Figure 9: Distribution of fragments with masses \(<1000\) a.m.u. after 1470 MeV Xe ion passage in **(a)** amorphous and **(b)** crystalline PE. The darker the color, the heavier the fragments.

Figure 10: Distribution of fragments with masses from 1000 to 4000 a.m.u. after 1470 MeV Xe ion passage in **(a)** amorphous and **(b)** crystalline PE. The darker the color, the heavier the fragments.
### 3.3 Velocity effect in PE
Due to the peaked shape of the Bragg curve of swift heavy ion energy losses, the same LET value can be achieved at two different SHI energies, on the left and right shoulders of the curve (see Figure 11). These energies can differ by an order of magnitude, resulting in very different spectra of fast electrons excited by the ions [27]. The different spectra result in different radial distributions of the energy deposited to the target lattice, causing different damage. This velocity effect is well known in inorganic materials [1, 3] and has recently been shown in polyethylene terephthalate [10].
We chose two energies 100 MeV and 5500 MeV for \({}^{238}\)U ions, producing the same LET of 10.9 keV/nm (70% of the Bragg peak value). It appears that the slower SHI causes a higher density of bond breaks, confirming the existence of the velocity effect in amorphous and crystalline PE samples (see Fig. 11).
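Selecting the two ion energies that share a given LET amounts to intersecting a horizontal line with the two shoulders of the calculated Bragg curve. A small interpolation helper of the following kind, operating on a tabulated LET(E) curve such as the TREKIS-3 output in Fig. 11a, can be used for this purpose; it is an illustrative sketch rather than the code used in this work.

```python
import numpy as np

def same_let_energies(energy, let, target_let):
    """Find the energies on the two shoulders of a Bragg curve sharing the same LET.

    energy : (N,) ion energies (MeV), monotonically increasing
    let    : (N,) tabulated LET values (keV/nm) at those energies
    Returns (E_left, E_right) around the Bragg-peak maximum, or None if the
    requested LET exceeds the peak value.
    """
    peak = int(np.argmax(let))
    if target_let > let[peak]:
        return None
    # Left shoulder: LET rises with energy; right shoulder: LET falls with energy.
    e_left = np.interp(target_let, let[:peak + 1], energy[:peak + 1])
    e_right = np.interp(target_let, let[peak:][::-1], energy[peak:][::-1])
    return e_left, e_right
```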
The skew of the spatial distribution of the unsaturated carbons to the left is evident in Fig. 11b, c. The simulations showed that the distribution of the unsaturated atoms can lean to the left or to the right, stay straight, or shift within the supercell as a whole, depending on fluctuations in the initial conditions of the MD calculations (see Figures S10, S11 in Supplementary Materials).
## 5 Conclusions
The Monte Carlo code TREKIS-3 combined with the molecular dynamics code LAMMPS enabled us to describe excitation and structural changes in amorphous and crystalline polyethylene irradiated with swift heavy ions without _a posteriori_ fitting parameters. The evolution of the atomic structure, showing damaged chemical bonds, is followed up to the time when no further structural changes occur (\(\sim\)1 ns). In amorphous polyethylene, initial bond ruptures form a dense circular core that expands radially during cooling, finally forming a nearly circular track by \(\sim\)250 ps. In contrast, in the crystalline sample, an elliptical region of damaged bonds stretched along the polymer chains evolves from the initially cylindrical excitation over tens of picoseconds. Our simulations agree with the experimentally measured track sizes and also reproduce the difference in track shapes between the crystalline and amorphous samples, validating the model.
The obtained radial pair distribution functions indicate significant damage in the vicinity of the SHI track. Analysis of the mass spectra of the simulated sample shows the presence of a large amount of molecular hydrogen and low-mass fragments in the track. No voids were found in tracks in bulk PE, in agreement with the experiment [52].
We also demonstrated the velocity effect in PE: ions with the same linear energy transfer but different velocities (the left and the right shoulders of the Bragg curve) cause different damage in the material. The lower velocity ion causes more damage over a larger area. This effect has been shown experimentally for PET polymer [10].
Figure 11: **(a)** LET curve for \({}^{238}\)U calculated with TREKIS-3. Energies 100 MeV and 5500 MeV producing the same LET = 11 keV/nm are marked with the horizontal dotted line. **(b-e)** Snapshots of unsaturated carbon atoms after simulation with TREKIS+MD of \({}^{238}\)U passage in crystalline PE specimen with energies 100 and 5500 MeV **(b, c)** and \({}^{238}\)U passage in amorphous PE with energies 100 and 5500 MeV **(d, e)**.
The results were obtained without fitting parameters in the simulations, demonstrating the applicability to polymers of a combined model tracing both the SHI-induced electronic and atomic kinetics.
## Acknowledgments
The authors are grateful to Michael V. Sorokin for helpful discussions. PB, SG, RV, and AEV gratefully acknowledge financial support from the Russian Science Foundation (grant No. 22-22-00676). NM thanks the financial support from the Czech Ministry of Education, Youth, and Sports (grants No. LTT17015, LM2018114, and No. EF16_013/0001552). This work has been carried out using computing resources of the federal collective usage center Complex for Simulation and Data Processing for Mega-science Facilities at NRC "Kurchatov Institute", [http://ckp.nrcki.ru](http://ckp.nrcki.ru).
|
2307.15061 | The RoboDepth Challenge: Methods and Advancements Towards Robust Depth
Estimation | Accurate depth estimation under out-of-distribution (OoD) scenarios, such as
adverse weather conditions, sensor failure, and noise contamination, is
desirable for safety-critical applications. Existing depth estimation systems,
however, suffer inevitably from real-world corruptions and perturbations and
are struggled to provide reliable depth predictions under such cases. In this
paper, we summarize the winning solutions from the RoboDepth Challenge -- an
academic competition designed to facilitate and advance robust OoD depth
estimation. This challenge was developed based on the newly established KITTI-C
and NYUDepth2-C benchmarks. We hosted two stand-alone tracks, with an emphasis
on robust self-supervised and robust fully-supervised depth estimation,
respectively. Out of more than two hundred participants, nine unique and
top-performing solutions have appeared, with novel designs ranging from the
following aspects: spatial- and frequency-domain augmentations, masked image
modeling, image restoration and super-resolution, adversarial training,
diffusion-based noise suppression, vision-language pre-training, learned model
ensembling, and hierarchical feature enhancement. Extensive experimental
analyses along with insightful observations are drawn to better understand the
rationale behind each design. We hope this challenge could lay a solid
foundation for future research on robust and reliable depth estimation and
beyond. The datasets, competition toolkit, workshop recordings, and source code
from the winning teams are publicly available on the challenge website. | Lingdong Kong, Yaru Niu, Shaoyuan Xie, Hanjiang Hu, Lai Xing Ng, Benoit R. Cottereau, Liangjun Zhang, Hesheng Wang, Wei Tsang Ooi, Ruijie Zhu, Ziyang Song, Li Liu, Tianzhu Zhang, Jun Yu, Mohan Jing, Pengwei Li, Xiaohua Qi, Cheng Jin, Yingfeng Chen, Jie Hou, Jie Zhang, Zhen Kan, Qiang Ling, Liang Peng, Minglei Li, Di Xu, Changpeng Yang, Yuanqi Yao, Gang Wu, Jian Kuai, Xianming Liu, Junjun Jiang, Jiamian Huang, Baojun Li, Jiale Chen, Shuang Zhang, Sun Ao, Zhenyu Li, Runze Chen, Haiyong Luo, Fang Zhao, Jingze Yu | 2023-07-27T17:59:56Z | http://arxiv.org/abs/2307.15061v2 | # The RoboDepth Challenge: Methods and Advancements Towards Robust Depth Estimation
###### Abstract
Accurate depth estimation under out-of-distribution (OoD) scenarios, such as adverse weather conditions, sensor failure, and noise contamination, is desirable for safety-critical applications. Existing depth estimation systems, however, suffer inevitably from real-world corruptions and perturbations and struggle to provide reliable depth predictions in such cases. In this paper, we summarize the winning solutions from the RoboDepth Challenge - an academic competition designed to facilitate and advance robust OoD depth estimation. This challenge was developed based on the newly established KITTI-C and NYUDepth2-C benchmarks. We hosted two stand-alone tracks, with an emphasis on robust self-supervised and robust fully-supervised depth estimation, respectively. Out of more than two hundred participants, nine unique and top-performing solutions have appeared, with novel designs ranging from the following aspects: spatial- and frequency-domain augmentations, masked image modeling, image restoration and super-resolution, adversarial training, diffusion-based noise suppression, vision-language pre-training, learned model ensembling, and hierarchical feature enhancement. Extensive experimental analyses along with insightful observations are drawn to better understand the rationale behind each design. We hope this challenge could lay a solid foundation for future research on robust and reliable depth estimation and beyond. The datasets, competition toolkit, workshop recordings, and source code from the winning teams are publicly available on the challenge website1.
Footnote 1: The RoboDepth Challenge: [https://robodepth.github.io](https://robodepth.github.io).
## 1 Introduction
The robustness of a learning-based visual perception system is among the most important factors that practitioners pursue [105]. In the context of depth estimation, the robustness of a depth prediction algorithm is often identified with its ability to maintain satisfactory performance under perturbation and degradation. Indeed, since most depth estimation systems aim to estimate structural information from real-world scenes [94; 27; 14], it is inevitable for them to deal with unseen data that are distribution-shifted from those seen during training.
Data distribution shifts take various forms, such as adversarial attacks [13; 19; 111] and common corruptions [32; 28; 43]. While the former aims to trick learning-based models by providing deceptive input, the latter cases - which are caused by noises, blurs, illumination changes, perspective transformations, _etc_. - are more likely to occur in practice. Recently, the RoboDepth benchmark [51] established the first comprehensive study on the out-of-distribution (OoD) robustness of monocular depth estimation models under common corruptions. Specifically, a total of eighteen corruption types are defined, covering three main categories: 1) adverse weather and lighting conditions, 2) motion and sensor failure, and 3) noises during data processing. Following this taxonomy, two robustness probing datasets are constructed by simulating realistic data corruptions on images from the KITTI [27] and NYU Depth V2 [94] datasets, respectively. More than forty depth estimation models are benchmarked and analyzed. The results show that existing depth estimation algorithms, albeit achieving promising performance on "clean" benchmarks, are vulnerable to common corruptions. This study also showcases the importance of considering both in-distribution and OoD scenarios, especially for safety-critical applications.
The RoboDepth Challenge has been successfully hosted at the 40th IEEE International Conference on Robotics and Automation (ICRA 2023) in London, UK. This academic competition aims to facilitate and advance robust monocular depth estimation under OoD corruptions and perturbations. Specifically, based on the newly established _KITTI-C_ and _NYUDepth2-C_ benchmarks [51], this competition provides a venue for researchers from both industry and academia to explore novel ideas on: 1) designing network structures that are robust against OoD corruptions, 2) proposing operations and techniques that improve the generalizability of existing depth estimation algorithms, and 3) rethinking potentially detrimental components arising from data corruptions in depth estimation scenarios. We formed two stand-alone tracks: one focused on robust self-supervised depth estimation from outdoor scenes and another focused on robust fully-supervised depth estimation from indoor scenes. The evaluation servers of these two tracks were built upon the CodaLab platform [80].

Figure 1: The RoboDepth Challenge adopts the eighteen data corruption types from three main categories defined in the RoboDepth benchmark [51]. Examples shown are from the _KITTI-C_ dataset.
* All participants must follow the exact same data configuration when training and evaluating their depth estimation algorithms. The use of public or private datasets other than those specified for model training is prohibited.
* Since the theme of this challenge is to probe the out-of-distribution robustness of depth estimation models, any use of the eighteen corruption types designed in the RoboDepth benchmark [51] is strictly prohibited, including any atomic operation that is comprising any one of the mentioned corruptions.
* To ensure the above rules are followed, each participant was requested to submit the code with reproducible results; the code was for examination purposes only and we manually verified the training and evaluation of each participant's model.
We are glad to have more than two hundred teams registered on the challenge servers. Among them, \(66\) teams made a total of \(1137\) valid submissions; \(684\) attempts are from the first track, while the remaining \(453\) attempts are from the second track. More detailed statistics are included in Section 3. In this report, we present solutions from nine teams that have achieved top performance in this challenge. Our participants proposed novel network structures and pre-processing and post-processing techniques, ranging from the following topics:
* _Spatial- and frequency-domain augmentations_: Observing that the common data corruptions like blurs and noises contain distinct representations in both spatial and frequency domains [58, 7], new data augmentation techniques are proposed to enhance the feature learning.
* _Masked image modeling_: The masking-based image reconstruction approach [31] exhibits potential for improving OoD robustness; this simple operation encourages the model to learn more robust representations by decoding masked signals from remaining ones.
* _Image restoration and super-resolution_: The off-the-shelf restoration and super-resolution networks [125, 65, 9] can be leveraged to handle degradation during the test time, such as noise contamination, illumination changes, and image compression.
* _Adversarial training_: The joint adversarial objectives [74] between the depth estimation and a noise generator facilitate robust feature learning; such an approach also maintains the performance on in-distribution scenarios while tackling OoD cases.
* _Diffusion-based noise suppression_: The denoising capability of diffusion is naturally suitable for handling OoD situations [90]; direct use of the denoising step in the pre-trained diffusion model could help suppress the noises introduced by different data corruptions.
* _Vision-language pre-training_: Leveraging the pre-trained text features [133] and aligning them to the extracted image features via an adapter is popular among recent studies and is proven helpful to improve the performance of various visual perception tasks [11, 10].
* _Learned model ensembling_: The fusion among multiple models is commonly used in academic competitions; an efficient, proper, and simple model ensembling strategy often combines the advantages of different models and largely improves the performance.
* _Hierarchical feature enhancement_: Designing network architectures that are robust against common corruptions is of great value; it has been constantly verified that the CNN-Transformer hybrid structures [131, 128] are superior in handling OoD corruptions.
The remainder of this paper is organized as follows: Section 2 reviews recent advancements in depth estimation and out-of-distribution perception and summarizes relevant challenges and competitions. Section 3 elaborates on the key statistics, public resources, and terms and conditions of this challenge. Section 4 provides the notable results from our participants that are better than the baselines. The detailed solutions of top-performing teams from the first track and the second track of this challenge are presented in Section 5 and Section 6, respectively. Section 7 draws concluding remarks and points out some future directions. Section 8 and Section 9 are acknowledgments and appendix.
Related Work
### Depth Estimation
As opposed to some 3D perception tasks that rely on the LiDAR sensor, _e.g._ LiDAR segmentation [1; 4; 49; 47; 69] and 3D object detection [55; 116; 121; 48], monocular depth estimation aims to predict 3D structural information from a single image, which is a more affordable solution in existing perception systems. Based on the source of supervision signals, this task can be further categorized into supervised [94; 2], self-supervised [27; 29], and semi-supervised [53; 38] depth estimation. Ever since the seminal works [21; 26; 135; 30; 56] on this topic, a diverse range of ideas has been proposed, including new designs on network architectures [85; 131; 128; 39; 63; 64], optimization functions [129; 12; 112; 92], internal feature constraints [115; 134; 77; 114], semantic-aided learning [104; 40; 62], geometry constraint [106; 99], mixing-source of depth supervisions [86; 98; 60], and unsupervised model pre-training [5; 79]. Following the conventional "training-testing" paradigm, current depth estimation methods are often trained and tested on datasets within similar distributions, while neglecting the natural corruptions that commonly occur in real-world situations. This challenge aims to fill this gap: we introduce the first academic competition for robust out-of-distribution (OoD) depth estimation under corruptions. By shedding light on this new perspective of depth estimation, we hope this challenge could enlighten follow-up research in designing novel network architectures and techniques that improve the reliability of depth estimation systems to meet safety-critical requirements.
### Out-of-Distribution Perception
The ability to be generalized across unseen domains and scenarios is crucial for a learning-based system [105]. To pursue superior OoD performance under commonly occurring data corruptions, various benchmarks affiliated with different perception tasks have been established. ImageNet-C [32] represented the first attempt at OoD image classification; the proposed corruption types, such as blurs, illumination changes, perspective transformations, and noise contamination, have been widely adopted by following works in OoD dataset construction. Michaelis _et al._[75] built the large-scale Robust Detection Benchmark upon PASCAL VOC [23], COCO [66], and Cityscapes [14] for OoD object detection. Subsequent works adopt a similar paradigm in benchmarking and analyzing OoD semantic segmentation [41], video classification [120], pose estimation [103], point cloud perception [89; 88], LiDAR perception [50; 48; 24; 68], bird's eye view perception [110; 109], and robot navigation [6]. All the above works have incorporated task-specific corruptions that mimic real-world situations, facilitating the development of robust algorithms for their corresponding tasks. To achieve a similar goal, in this challenge, we resort to the newly-established _KITTI-C_ and _NYUDepth2-C_ benchmarks [51] to construct our OoD depth estimation datasets. We form two stand-alone tracks, with an emphasis on robust self-supervised and robust fully-supervised depth estimation, respectively, to encourage novel designs for robust and reliable OoD depth estimation.
### Relevant Competitions
It is worth mentioning that several previous depth estimation competitions have been successfully held to facilitate their related research areas. The Robust Vision Challenge (RVC) [126] aimed to explore cross-domain visual perception across different scene understanding tasks, including reconstruction, optical flow estimation, semantic segmentation, single image depth prediction, _etc_. The Dense Depth for Autonomous Driving (DDAD) Challenge [25] targeted long-range and dense depth estimation from diverse urban conditions. The Mobile AI Challenge [36] focused on real-time depth estimation on smartphones and IoT platforms. The SeasonDepth Depth Prediction Challenge [34] was specialized for estimating accurate depth information of scenes under different illumination and season conditions. The Monocular Depth Estimation Challenge (MDEC) [96; 97] attracted broad attention from researchers and was tailored to tackle monocular depth estimation from complex natural environments, such as forests and fields. The Argoverse Stereo Competition [52] encouraged real-time stereo depth estimation under self-driving scenarios. The NTIRE 2023 Challenge on HR Depth from Images of Specular and Transparent Surfaces [84] mainly aimed at handling depth estimation of non-Lambertian surfaces characterizing specular and transparent materials. Different from previous pursuits, our RoboDepth Challenge is tailored to facilitate robust OoD depth estimation against real-world corruptions. A total of eighteen corruption types are considered, ranging from
adverse weather conditions, sensor failure, and noise contamination. We believe this research topic is of great importance to the practical deployment of depth estimation algorithms, especially for safety-critical applications.
## 3 Challenge Summary
### Overall Statistics
This is the first edition of the RoboDepth Challenge. The official evaluation servers2 of this competition were launched on 01 January 2023. During the five-month period of competition, \(226\) teams registered on our servers; among them, \(66\) teams attempted to make submissions. Finally, we received \(1137\) valid submissions and selected six winning teams (three teams for each track) and three innovation prize awardees. The detailed information of the winning teams and innovation prize awardees is shown in Table 1 and Table 2, respectively.
Footnote 2: We built servers on CodaLab. More details of this platform are at [https://codalab.lisn.upsaclay.fr](https://codalab.lisn.upsaclay.fr).
### Track # 1: Robust Self-Supervised Depth Estimation
**Evaluation Server**. The first track of the RoboDepth Challenge was hosted at [https://codalab.lisn.upsaclay.fr/competitions/9418](https://codalab.lisn.upsaclay.fr/competitions/9418). The participants were requested to submit their depth disparity maps to our server for evaluation. Such depth predictions were expected to be generated by a learning-based model, in a self-supervised learning manner, trained on the official _training_ split of the KITTI dataset [27].
**Statistics**. In the first track of the RoboDepth Challenge, a total number of \(137\) teams registered at our evaluation server. We received \(684\) valid submissions during the competition period.
Figure 3: The submission and scoring statistics for the two tracks in the RoboDepth Challenge.
Figure 2: We successfully hosted the RoboDepth Challenge at ICRA 2023.
The top-three best-performing teams are OpenSpaceAI, USTC-IAT-United, and YYQ. Additionally, we selected the teams Ensemble and Scent-Depth as the innovation prize awardees of this track.
### Track # 2: Robust Supervised Depth Estimation
**Evaluation Server**. The second track of the RoboDepth Challenge was hosted at [https://codalab.lisn.upsaclay.fr/competitions/9821](https://codalab.lisn.upsaclay.fr/competitions/9821). The participants were requested to submit their depth disparity maps to our server for evaluation. Such depth predictions were expected to be generated by a learning-based model, in a fully-supervised learning manner, trained on the official _training_ split of the NYUDepth V2 dataset [94].
**Statistics**. In the second track of the RoboDepth Challenge, a total number of \(89\) teams registered at our evaluation server. We received \(453\) valid submissions during the competition period. The top-three best-performing teams are USTCxNetEaseFuxi, OpenSpaceAI, and GANCV. Additionally, we selected the team AIIA-RDepth as the innovation prize awardee of this track.
### The RoboDepth Workshop
We hosted the online workshop at ICRA 2023 on 02 June 2023 after the competition was officially concluded. Six winning teams and three innovation prize awardees attended and presented their approaches.
The video recordings of this workshop are publicly available at [https://www.youtube.com/watch?v=mYhdTGiIGCY&list=PLxxr1fcfh-qBGZ6x_e1AT2_YnAxiHIKtkB](https://www.youtube.com/watch?v=mYhdTGiIGCY&list=PLxxr1fcfh-qBGZ6x_e1AT2_YnAxiHIKtkB).
The slides used can be downloaded from [https://ldkong.com/talks/icra23_robodepth.pdf](https://ldkong.com/talks/icra23_robodepth.pdf).
### Terms and Conditions
The RoboDepth Challenge is made freely available to academic and non-academic entities for non-commercial purposes such as research, teaching, scientific publications, or personal experimentation. Permission is granted to use the related public resources given that the participants agree:
* That the data in this challenge comes "AS IS", without express or implied warranty. Although every effort has been made to ensure accuracy, the challenge organizing team is not responsible for any errors or omissions.
* That the participants may not use the data in this challenge or any derivative work for commercial purposes as, for example, licensing or selling the data, or using the data with the purpose to procure a commercial gain.
* That the participants include a reference to RoboDepth (including the benchmark data and the specially generated data for this academic challenge) in any work that makes use of the benchmark. For research papers, please cite our preferred publications as listed on our webpage and GitHub repository.
## 4 Challenge Results
### Evaluation Metrics
In the RoboDepth Challenge, the two most conventional metrics were adopted: 1) error rate, including Abs Rel, Sq Rel, RMSE, and log RMSE; and 2) accuracy, including \(\delta_{1}\), \(\delta_{2}\), and \(\delta_{3}\).
**Error Rate**. The Relative Absolute Error (Abs Rel) measures the relative difference between the pixel-wise ground-truth (gt) and the prediction values (pred) in a depth prediction map \(D\), as calculated by the following equation:
\[\text{Abs Rel}=\frac{1}{|D|}\sum_{pred\in D}\frac{|gt-pred|}{gt}. \tag{1}\]
The Relative Square Error (Sq Rel) measures the relative square difference between gt and pred as follows:
\[\text{Sq Rel}=\frac{1}{|D|}\sum_{pred\in D}\frac{|gt-pred|^{2}}{gt}. \tag{2}\]
RMSE denotes the Root Mean Square Error (in meters) of a scene (image), which can be calculated as \(\sqrt{\frac{1}{|D|}\sum|gt-pred|^{2}}\); log RMSE is its log-space counterpart, _i.e._, \(\sqrt{\frac{1}{|D|}\sum|\log(gt)-\log(pred)|^{2}}\).
**Accuracy**. The \(\delta\) metric is the depth estimation accuracy given the threshold:
\[\delta_{t}=\frac{1}{|D|}|\{\ \text{pred}\in D|\max{(\frac{gt}{pred},\frac{pred}{ gt})}<1.25^{t}\}|\times 100\%\, \tag{3}\]
\begin{table}
\begin{tabular}{l|l|l} \hline
**Rank** & **\#1: Robust Self-Supervised MDE** & **\#2: Robust Supervised MDE** \\ \hline \hline \multirow{7}{*}{1st Place} & **Team Name** & **Team Name** \\ & OpenSpaceAI & USTCxNetEaseFuxi \\ \cline{2-3} & **Team Members** & **Team Members** \\ & Ruijie Zhu\({}^{1}\), Ziyang Song\({}^{1}\), Li Liu\({}^{1}\), & Jun Yu\({}^{1}\), Mohan Jing\({}^{1}\), Pengwei Li\({}^{1}\), \\ & Tianzhu Zhang\({}^{1,2}\) & Xiaohua Qi\({}^{1}\), Cheng Jin\({}^{2}\), Yingfeng \\ & Chen\({}^{2}\), Jie Hou\({}^{2}\) \\ \cline{2-3} & **Affiliations** & **Affiliations** \\ & \({}^{1}\)University of Science and Technology of China, \({}^{2}\)Deep Space Exploration Lab & \({}^{1}\)University of Science and Technology of China, \({}^{2}\)NetEase Fuxi \\ \cline{2-3} & **Contact \(\approx\)** & **Contact \(\approx\)** \\ & [email protected] & USTC\_IAT\[email protected] \\ \hline \hline \multirow{7}{*}{2nd Place} & **Team Name** & **Team Name** \\ & USTC-IAT-United & OpenSpaceAI \\ \cline{2-3} & **Team Members** & **Team Members** \\ & Jun Yu\({}^{1}\), Xiaohua Qi\({}^{1}\), Jie Zhang\({}^{2}\), & Li Liu\({}^{1}\), Ruijie Zhu\({}^{1}\), Ziyang Song\({}^{1}\), \\ & Mohan Jing\({}^{1}\), Pengwei Li\({}^{1}\), Zhen & Tianzhu Zhang\({}^{1,2}\) \\ & Kan\({}^{1}\), Qiang Ling\({}^{1}\), Liang Peng\({}^{3}\), & Minglei Li\({}^{3}\), Di Xu\({}^{3}\), Changpeng Yang\({}^{3}\) \\ \cline{2-3} & **Affiliations** & **Affiliations** \\ & \({}^{1}\)University of Science and Technology of China, \({}^{2}\)Central South University, \({}^{3}\)Huawei Cloud Computing Technology Co., Ltd & \({}^{1}\)University of Science and Technology of China, \({}^{2}\)Deep Space Exploration Lab \\ \cline{2-3} & **Contact \(\approx\)** & **Contact \(\approx\)** \\ & USTC\_IAT\[email protected] & liu\[email protected] \\ \hline \hline \multirow{7}{*}{3rd Place} & **Team Name** & **Team Name** \\ & YYQ & GANCV \\ \cline{2-3} & **Team Members** & **Team Members** \\ & Yuanqi Yao\({}^{1}\), Gang Wu\({}^{1}\), Jian Kaui\({}^{1}\), & Jiamian Huang\({}^{1}\), Baojun Li\({}^{1}\) \\ & Xianming Liu\({}^{1}\), Junjun Jiang\({}^{1}\) & \\ \cline{2-3} & **Affiliations** & **Affiliations** \\ & \({}^{1}\)Harbin Institute of Technology & \({}^{1}\)Individual Researcher \\ \cline{2-3} & **Contact \(\approx\)** & **Contact \(\approx\)** \\ & [email protected] & [email protected] \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of the top-performing teams in each track of the RoboDepth Challenge.
where \(\delta_{1}=\delta<1.25,\delta_{2}=\delta<1.25^{2},\delta_{3}=\delta<1.25^{3}\) are the three conventionally used accuracy scores among prior works [30; 61].
Following the seminal work MonoDepth2 [30], the Abs Rel metric was selected as the major indicator to compare among submissions in the first track of the RoboDepth Challenge.
Based on the Monocular-Depth-Estimation-Toolbox3, the \(\delta_{1}\) score was used to rank different submissions in the second track of the RoboDepth Challenge.
Footnote 3: [https://github.com/zhyever/Monocular-Depth-Estimation-Toolbox](https://github.com/zhyever/Monocular-Depth-Estimation-Toolbox).
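For completeness, the metrics above can be computed per image as in the NumPy sketch below, matching the standard definitions; `gt` and `pred` are assumed to be already masked to valid pixels (and, for the self-supervised track, median-scaled following common practice).

```python
import numpy as np

def depth_metrics(gt, pred):
    """Standard depth-estimation metrics over arrays of valid (masked) pixels."""
    thresh = np.maximum(gt / pred, pred / gt)
    d1 = (thresh < 1.25).mean()
    d2 = (thresh < 1.25 ** 2).mean()
    d3 = (thresh < 1.25 ** 3).mean()

    abs_rel = np.mean(np.abs(gt - pred) / gt)
    sq_rel = np.mean((gt - pred) ** 2 / gt)
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    rmse_log = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))
    return dict(abs_rel=abs_rel, sq_rel=sq_rel, rmse=rmse, rmse_log=rmse_log,
                d1=d1, d2=d2, d3=d3)
```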
### Track # 1 Results
In the first track of the RoboDepth Challenge, we received \(684\) valid submissions. The top-performing teams in this track include OpenSpaceAI, USTC-IAT-United, and YYQ. The shortlisted submissions are shown in Table 3; the complete results can be found on our evaluation server.
Specifically, the team OpenSpaceAI achieved a Abs Rel score of \(0.121\), which is \(0.100\) higher than the baseline MonoDepth2 [30]. They also ranked first on the log RMSE, \(\delta_{1}\), and \(\delta_{3}\) metrics. Other top-ranked submissions are from: the team USTC-IAT-United (Abs Rel\(=0.123\), \(\delta_{1}=0.861\)), team YYQ (Abs Rel\(=0.123\), \(\delta_{1}=0.848\)), team zs_dlut (Abs Rel\(=0.124\), \(\delta_{1}=0.852\)), and team UMCV (Abs Rel\(=0.124\), \(\delta_{1}=0.847\)). We refer readers to the solutions presented in Section 5 for additional comparative and ablation results and more detailed analyses.
### Track # 2 Results
In the second track of the RoboDepth Challenge, we received \(453\) valid submissions. The top-performing teams in this track include USTCxNetEaseFuxi, OpenSpaceAI, and GANCV. The shortlisted submissions are shown in Table 4; the complete results can be found on our evaluation server.
Specifically, the team USTCxNetEaseFuxi achieved a \(\delta_{1}\) score of \(0.940\), which is \(0.285\) higher than the baseline DepthFormer-SwinT [63]. They also ranked first on the Abs Rel and log RMSE metrics. Other top-ranked submissions are from: the team OpenSpaceAI (Abs Rel\(=0.095\), \(\delta_{1}=0.928\)), team GANCV (Abs Rel\(=0.104\), \(\delta_{1}=0.898\)), team shinonomei (Abs Rel\(=0.123\), \(\delta_{1}=0.861\)), and team YYQ (Abs Rel\(=0.125\), \(\delta_{1}=0.851\)). We refer readers to the solutions presented in Section 6 for additional comparative and ablation results and more detailed analyses.
\begin{table}
\begin{tabular}{p{113.8pt}|p{113.8pt}|p{113.8pt}} \hline
**Team 1** & **Team 2** & **Team 3** \\ \hline \hline
**Team Name** & **Team Name** & **Team Name** \\ Ensemble & Scent-Depth & AIIA-RDepth \\ \hline
**Track** & **Track** & **Track** \\ \#1: Robust Self-Supervised MDE & \#1: Robust Self-Supervised MDE & \#2: Robust Supervised MDE \\ \hline
**Team Members** & **Team Members** & **Team Members** \\ Jiale Chen\({}^{1}\), Shuang Zhang\({}^{1}\) & Runze Chen\({}^{1,2}\), Haiyong Luo\({}^{1}\), Fang Zhao\({}^{2}\), Jingze Yu\({}^{1,2}\) & Sun Ao\({}^{1}\), Gang Wu\({}^{1}\), Zhenyu Li\({}^{1}\), Xianming Liu\({}^{1}\), Junjun Jiang\({}^{1}\) \\ \hline
**Affiliations** & **Affiliations** & **Affiliations** \\ \({}^{1}\)Tsinghua University & \({}^{1}\)Beijing University of Posts and Telecommunications, & \({}^{1}\)Harbin Institute of Technology \\ & \({}^{2}\)Institute of Computing Technology, Chinese Academy of Sciences & \\ \hline
**Contact**\(\Xi\) & **Contact**\(\Xi\) & **Contact**\(\Xi\) \\ [email protected] & [email protected] & sunao\[email protected] \\ \hline \end{tabular}
\end{table}
Table 2: Summary of innovation prize awardees (across two tracks) in the RoboDepth Challenge.
## 5 Winning Solutions from Track # 1
### The \(1\)st Place Solution: OpenSpaceAI
**Authors:** Ruijie Zhu, Ziyang Song, Li Liu, and Tianzhu Zhang.
**Summary** - Though existing self-supervised monocular depth estimation methods achieved high accuracy on standard benchmarks, few works focused on their OoD generalizability under real-world corruptions. The OpenSpaceAI team proposes IRUDepth to improve the robustness and uncertainty estimation of depth estimation systems. It takes a CNN-Transformer hybrid architecture as the baseline and applies simple yet effective data augmentation chains to enforce consistent depth predictions under diverse corruption scenarios.
#### 5.1.1 Overview
Depth estimation is a fundamental task in 3D vision with vital applications, such as autonomous driving [93], augmented reality [123], virtual reality [59], and 3D reconstruction [119]. Though many specialized depth sensors, _e.g._ LiDAR and Time-of-Flight (ToF) cameras, can generate accurate raw depth data, they have certain limitations compared to the learning-based monocular depth estimation systems, such as higher hardware cost and limited usage scenarios.
To meet the high requirement of the challenging OoD depth estimation, we propose IRUDepth, a novel framework that focuses on improving the robustness and uncertainty of current self-supervised monocular depth estimation systems. Following MonoViT [131], we use MPViT [57] as the depth encoder, which is a CNN-Transformer hybrid architecture that fuses multi-scale image features. We use PoseNet [108] to jointly optimize the camera parameters and predicted depth maps.
To improve the robustness of the self-supervised monocular depth estimation model under OoD situations, we design an image augmentation module and a triplet loss function motivated by AugMix [33]. For the image augmentation module, we utilize stochastic and diverse augmentations to generate random augmented pairs for the input images.
\begin{table}
\begin{tabular}{c|c|c c c c|c c c} \hline \hline
**\#** & **Team Name** & **Abs Rel \(\downarrow\)** & **Sq Rel \(\downarrow\)** & **RMSE \(\downarrow\)** & **log RMSE \(\downarrow\)** & \(\delta<1.25^{\uparrow}\) & \(\delta<1.25^{\uparrow}\) & \(\delta<1.25^{3}\uparrow\) \\ \hline \hline
1 & OpenSpaceAI & \(\mathbf{0.121}\) & \(0.919\) & \(4.981\) & \(\mathbf{0.200}\) & \(\mathbf{0.861}\) & \(0.953\) & \(\mathbf{0.980}\) \\
2 & USTC-IAT-United & \(\underline{0.123}\) & \(0.932\) & \(\mathbf{4.873}\) & \(0.202\) & \(\mathbf{0.861}\) & \(\mathbf{0.954}\) & \(\underline{0.979}\) \\
3 & YYQ & \(\underline{0.123}\) & \(0.885\) & \(4.983\) & \(\underline{0.201}\) & \(0.848\) & \(0.950\) & \(\underline{0.979}\) \\ \hline
4 & zs\_dlut & \(0.124\) & \(0.899\) & \(4.938\) & \(0.203\) & \(0.852\) & \(0.950\) & \(\underline{0.979}\) \\
5 & UMCV & \(0.124\) & \(\mathbf{0.845}\) & \(\underline{4.883}\) & \(0.202\) & \(0.847\) & \(0.950\) & \(\mathbf{0.980}\) \\
6 & THU\_ES & \(0.124\) & \(0.892\) & \(4.928\) & \(0.203\) & \(0.851\) & \(0.951\) & \(\mathbf{0.980}\) \\
7 & THU\_Chen & \(0.125\) & \(\underline{0.865}\) & \(4.924\) & \(0.203\) & \(0.846\) & \(0.950\) & \(\mathbf{0.980}\) \\
8 & seesee & \(0.126\) & \(0.900\) & \(4.979\) & \(0.206\) & \(0.857\) & \(0.952\) & \(0.978\) \\
9 & namemeane & \(0.126\) & \(0.994\) & \(4.950\) & \(0.204\) & \(0.860\) & \(0.953\) & \(\underline{0.979}\) \\
10 & USTCxNetEaseFuxi & \(0.129\) & \(0.973\) & \(5.100\) & \(0.208\) & \(0.846\) & \(0.948\) & \(0.978\) \\
11 & Tutu & \(0.131\) & \(0.972\) & \(5.085\) & \(0.207\) & \(0.835\) & \(0.946\) & \(\underline{0.979}\) \\
12 & Cai & \(0.133\) & \(1.017\) & \(5.282\) & \(0.214\) & \(0.837\) & \(0.945\) & \(\underline{0.976}\) \\
13 & Suzzally & \(0.133\) & \(1.023\) & \(5.285\) & \(0.215\) & \(0.835\) & \(0.943\) & \(\underline{0.976}\) \\
14 & waterch & \(0.137\) & \(0.904\) & \(5.276\) & \(0.214\) & \(0.813\) & \(0.941\) & \(\underline{0.979}\) \\
15 & hus99 & \(0.139\) & \(1.057\) & \(5.302\) & \(0.220\) & \(0.826\) & \(0.939\) & \(\underline{0.975}\) \\
16 & panzer & \(0.141\) & \(0.953\) & \(5.429\) & \(0.221\) & \(0.804\) & \(0.936\) & \(\underline{0.976}\) \\
17 & lyle & \(0.142\) & \(0.981\) & \(5.590\) & \(0.225\) & \(0.806\) & \(0.936\) & \(\underline{0.974}\) \\
18 & SHSCUMT & \(0.142\) & \(1.064\) & \(5.155\) & \(0.215\) & \(0.821\) & \(0.943\) & \(\underline{0.977}\) \\
19 & hanhenegong & \(0.142\) & \(1.064\) & \(5.155\) & \(0.215\) & \(0.821\) & \(0.943\) & \(\underline{0.977}\) \\
20 & king & \(0.160\) & \(1.230\) & \(5.927\) & \(0.244\) & \(0.769\) & \(0.921\) & \(\underline{0.966}\) \\
21 & xujianyao & \(0.172\) & \(1.340\) & \(6.177\) & \(0.258\) & \(0.743\) & \(0.910\) & \(\underline{0.963}\) \\
22 & Wenhui.Wei & \(0.172\) & \(1.340\) & \(6.177\) & \(0.258\) & \(0.743\) & \(0.910\) & \(\underline{0.963}\) \\
23 & jerryxu & \(0.192\) & \(1.594\) & \(6.506\) & \(0.279\) & \(0.709\) & \(0.895\) & \(\underline{0.956}\) \\ \hline - & MonoDepth [30] & \(0.221\) & \(1.988\) & \(7.117\) & \(0.312\) & \(0.654\) & \(0.859\) & \(\underline{0.938}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Leaderboard of Track # 1 (robust self-supervised depth estimation) in the RoboDepth Challenge. The **best** and **second** best scores of each metric are highlighted in **bold** and **underline**, respectively. Only entries better than the baseline are included in this table. In Track # 1, MonoDepth2 [30] was adopted as the baseline. See our evaluation server for the complete results.
After predicting the corresponding depth maps, a triplet loss is applied to constrain the Jensen-Shannon divergence between the predicted depth of the clean image and its augmented version.
The proposed IRUDepth ranks first in the first track of the RoboDepth Challenge. Extensive experimental results on the KITTI-C benchmark also demonstrate that IRUDepth significantly outperforms state-of-the-art methods and exhibits satisfactory OoD robustness.
#### 5.1.2 Technical Approach
Given an RGB image \(I_{t}\in\mathbb{R}^{H\times W\times 3}\), the IRUDepth framework aims to predict its corresponding depth map \(D_{t}\in\mathbb{R}^{H\times W}\). To improve the robustness and uncertainty estimation, we use random image augmentation inspired by AugMix [33] to generate two augmented views of an image. Then both clean and augmented images are used as the input to train the depth network.
Following MonoViT [131], we use the MPViT module [57] as the encoder of the depth network to extract the local and global context from images. This module is a multi-path CNN-Transformer hybrid architecture. With a disparity head, we generate pixel-aligned depth maps for all input images. The triplet loss is proposed to encourage consistency between the predicted depth maps of the clean and augmented images.
During training, to obtain supervisory signals from adjacent frames, we feed the adjacent image frames \(I^{\prime}_{t}=\{I_{t-1},I_{t+1}\}\) into the pose estimation network together with \(I_{t}\) to estimate the relative camera pose \(T_{t\to t^{\prime}}\). The synthesized counterpart to \(I_{t}\) is then generated by:

\[I_{t^{\prime}\to t}=I_{t^{\prime}}\langle\texttt{proj}(D_{t},T_{t\to t^{\prime}},K)\rangle\, \tag{4}\]
where the \(\texttt{proj}(\cdot)\) operator returns the 2D coordinates when reprojecting the point cloud generated using \(D_{t}\) onto \(I_{t^{\prime}}\); \(K\) denotes the camera parameter matrix.
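In code, this view-synthesis step amounts to back-projecting the predicted depth into a point cloud, transforming it with the predicted relative pose, projecting it onto the source view, and bilinearly sampling colors. The PyTorch sketch below condenses the standard MonoDepth2-style implementation of Eq. (4); it assumes a batch size of 1 and a single pinhole intrinsic matrix \(K\), and omits the multi-scale handling, so it should be read as an illustration rather than the exact IRUDepth code.

```python
import torch
import torch.nn.functional as F

def synthesize_view(img_src, depth_t, T_t_to_src, K):
    """Warp the source frame I_{t'} into the target view using depth D_t and pose.

    img_src    : (1, 3, H, W) adjacent frame I_{t'}
    depth_t    : (1, 1, H, W) predicted depth D_t of the target frame
    T_t_to_src : (4, 4) relative camera pose T_{t->t'}
    K          : (3, 3) camera intrinsics
    """
    _, _, H, W = depth_t.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing='ij')
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).float().reshape(3, -1)

    cam = torch.linalg.inv(K) @ pix * depth_t.reshape(1, -1)   # back-projection
    cam = torch.cat([cam, torch.ones(1, H * W)], dim=0)        # homogeneous coords
    cam_src = (T_t_to_src @ cam)[:3]                           # move to source frame
    proj = K @ cam_src
    uv = proj[:2] / proj[2].clamp(min=1e-6)                    # the proj(.) of Eq. (4)

    # Normalize pixel coordinates to [-1, 1] for the <.> bilinear sampling operator.
    grid = torch.stack([2 * uv[0] / (W - 1) - 1, 2 * uv[1] / (H - 1) - 1], dim=-1)
    return F.grid_sample(img_src, grid.reshape(1, H, W, 2),
                         padding_mode='border', align_corners=True)
```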
**Augmentation Module**. We believe a proper data augmentation technique can significantly improve the generalizability of monocular depth estimation models. Various data augmentation methods [16, 124, 127] have been proposed to enhance the robustness of the model during training. Recently, adding adversarial losses during training has been shown to be an effective way of improving model robustness [74]. Training with these approaches, however, often greatly increases the training time and GPU memory consumption. It is thus desirable to design a cost-effective augmentation that can be easily plugged into the training pipeline to balance model performance and training consumption. In particular, IRUDepth combines operations from AutoAugment [15] and AugMix [33] as the main components of our augmentation chain.
**Augmentation Protocol**. To ensure the designed augmentations are disjoint with simulated evaluation data, we exclude operations that constitute or are similar to the \(18\) corruption types in KITTI-C.
\begin{table}
\begin{tabular}{c|c|c c c c|c c c} \hline \hline
**\#** & **Team Name** & **Abs Rel \(\downarrow\)** & **Sq Rel \(\downarrow\)** & **RMSE \(\downarrow\)** & **log RMSE \(\downarrow\)** & \(\delta<1.25\uparrow\) & \(\delta<1.25^{2}\uparrow\) & \(\delta<1.25^{3}\uparrow\) \\ \hline \hline
1 & USTCxNetEaseFuxi & **0.088** & 0.046 & 0.347 & **0.115** & **0.940** & 0.985 & 0.996 \\
2 & OpenSpaceAI & 0.095 & **0.045** & **0.341** & 0.117 & 0.928 & **0.990** & **0.998** \\
3 & GANCV & 0.104 & 0.060 & 0.391 & 0.131 & 0.898 & 0.982 & 0.995 \\ \hline
4 & AIIA-RDepth & 0.123 & 0.080 & 0.450 & 0.153 & 0.861 & 0.975 & 0.993 \\
5 & YYQ & 0.125 & 0.085 & 0.470 & 0.159 & 0.851 & 0.970 & 0.989 \\
6 & Hyq & 0.124 & 0.089 & 0.474 & 0.158 & 0.851 & 0.967 & 0.990 \\
7 & DepthSquad & 0.137 & 0.085 & 0.462 & 0.158 & 0.845 & 0.976 & 0.996 \\
8 & kinda & 0.146 & 0.095 & 0.480 & 0.165 & 0.831 & 0.973 & 0.993 \\
9 & dx3 & 0.131 & 0.095 & 0.507 & 0.170 & 0.825 & 0.963 & 0.989 \\
10 & uhtw & 0.150 & 0.100 & 0.492 & 0.168 & 0.822 & 0.973 & 0.993 \\
11 & myungwowo & 0.147 & 0.099 & 0.496 & 0.168 & 0.820 & 0.972 & 0.994 \\
12 & kamiz\_t & 0.134 & 0.100 & 0.528 & 0.176 & 0.815 & 0.959 & 0.986 \\
13 & didever & 0.137 & 0.100 & 0.517 & 0.175 & 0.808 & 0.962 & 0.988 \\
14 & THUZS & 0.156 & 0.117 & 0.555 & 0.190 & 0.785 & 0.953 & 0.988 \\
15 & ffnhau88 & 0.163 & 0.129 & 0.579 & 0.193 & 0.767 & 0.952 & 0.986 \\
16 & wallong & 0.198 & 0.167 & 0.624 & 0.222 & 0.710 & 0.927 & 0.981 \\ \hline - & DepthFormer [63] & 0.190 & 0.179 & 0.717 & 0.248 & 0.655 & 0.898 & 0.970 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Leaderboard of Track # 2 (robust supervised depth estimation) in the RoboDepth Challenge. The **best** and _second_ best scores of each metric are highlighted in **bold** and underline, respectively. Only entries better than the baseline are included in this table. In Track # 2, DepthFormer-SwinT [63] was adopted as the baseline. See our evaluation server for the complete results.
Specifically, we remove the _'contrast'_, _'color'_, _'brightness'_, _'sharpness'_, and _'cutout'_ operations from the original augmentation types in [15, 33]. Also, to avoid any potential overlap with the KITTI-C testing set, we do not use any image noising or image blurring operations.
**Augmentation Chain**. We randomly sample \(k=3\) augmentation chains to combine different augmentation operations. Following AugMix [33], we mix the resulting images from these augmentation chains via element-wise convex combinations. In particular, we sample convex coefficients from a Dirichlet distribution for the first-stage mixing of the augmentation chains. Next, we use a second-stage mixing weight sampled from a Beta distribution to blend the clean and the augmented images. In this way, we can obtain final images generated by an arbitrary combination of data augmentation operations with random mixing weights. We use such images in the training phase of IRUDepth.
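A minimal sketch of this two-stage mixing is given below; `augment_fn` is a placeholder for one randomly chosen operation from the corruption-disjoint pool described above, and the chain depth and Dirichlet/Beta parameters are illustrative defaults rather than the exact values used in IRUDepth.

```python
import numpy as np
import torch

def augmix_style_mix(image, augment_fn, k=3, depth=2, alpha=1.0):
    """Two-stage mixing used in the augmentation module (a sketch).

    image:      (3, H, W) tensor in [0, 1]
    augment_fn: callable applying one randomly chosen, corruption-disjoint op
    """
    # Stage 1: convex combination of k augmentation chains (Dirichlet weights).
    w = np.float32(np.random.dirichlet([alpha] * k))
    mixed = torch.zeros_like(image)
    for i in range(k):
        aug = image.clone()
        for _ in range(np.random.randint(1, depth + 1)):
            aug = augment_fn(aug)          # e.g. ops adapted from AutoAugment / AugMix
        mixed += w[i] * aug

    # Stage 2: blend the clean image with the mixed result (Beta weight).
    m = np.float32(np.random.beta(alpha, alpha))
    return (1.0 - m) * image + m * mixed
```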
**Loss Function**. Following MonoDepth2 [30], we minimize the photometric reprojection error \(L_{p}\). This loss can be calculated as follows:
\[L_{p}=\min_{t^{\prime}}\mathsf{pe}(I_{t},I_{t^{\prime}\to t})\;, \tag{5}\]
\[\mathsf{pe}(I_{a},I_{b})=\frac{\alpha}{2}(1-\mathsf{SSIM}(I_{a},I_{b}))+(1-\alpha)||I_{a}-I_{b}||_{1}\;. \tag{6}\]
Here we set \(\alpha=0.85\). Additionally, as in [29], we apply the following smoothness loss:
\[L_{s}=|\partial_{x}d_{t}^{*}|e^{-|\partial_{x}I_{t}|}+|\partial_{y}d_{t}^{*}| e^{-|\partial_{y}I_{t}|}\;, \tag{7}\]
where \(d_{t}^{*}=d_{t}/\bar{d}_{t}\) is the normalized inverse depth as proposed in [102].
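The two objectives can be sketched in PyTorch as follows; `ssim_fn` is assumed to return a per-pixel SSIM map, and the helper names are ours rather than taken from the released code.

```python
import torch

def photometric_error(pred, target, ssim_fn, alpha=0.85):
    """Per-pixel pe(I_a, I_b) from Eq. (6): weighted SSIM + L1 term."""
    l1 = (pred - target).abs().mean(1, keepdim=True)
    ssim_map = ssim_fn(pred, target).mean(1, keepdim=True)   # per-pixel SSIM in [0, 1]
    return alpha / 2 * (1 - ssim_map) + (1 - alpha) * l1

def reprojection_loss(target, warped_list, ssim_fn):
    """L_p from Eq. (5): per-pixel minimum over the warped source frames t'."""
    errors = torch.cat([photometric_error(w, target, ssim_fn) for w in warped_list], dim=1)
    return errors.min(dim=1, keepdim=True)[0].mean()

def smoothness_loss(disp, img):
    """Edge-aware smoothness L_s from Eq. (7) on mean-normalized disparity."""
    disp = disp / (disp.mean(dim=(2, 3), keepdim=True) + 1e-7)
    dx_d = (disp[:, :, :, :-1] - disp[:, :, :, 1:]).abs()
    dy_d = (disp[:, :, :-1, :] - disp[:, :, 1:, :]).abs()
    dx_i = (img[:, :, :, :-1] - img[:, :, :, 1:]).abs().mean(1, keepdim=True)
    dy_i = (img[:, :, :-1, :] - img[:, :, 1:, :]).abs().mean(1, keepdim=True)
    return (dx_d * torch.exp(-dx_i)).mean() + (dy_d * torch.exp(-dy_i)).mean()
```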
To constrain the consistency between the predicted depth maps of the clean and augmented images, we apply the Jensen-Shannon divergence consistency loss used in [33]. This loss encourages smoother neural network responses. First, we average the three depth predictions to obtain the mixed depth center:
\[D_{mix}=\frac{1}{3}(D_{t}+D_{t}^{aug1}+D_{t}^{aug2})\;, \tag{8}\]
where \(D_{t}\), \(D_{t}^{aug1}\), and \(D_{t}^{aug2}\) are the depth maps of the clean and the two augmented images, respectively. Next, we compute the triplet loss listed as follows:
\[L_{mix}=\frac{1}{3}\big{(}\mathtt{KL}(D_{t}||D_{mix})+\mathtt{KL}(D_{t}^{aug 1}||D_{mix})+\mathtt{KL}(D_{t}^{aug2}||D_{mix})\big{)}\;, \tag{9}\]
where the KL divergence (KL) is used to measure the degree of difference between two depth distributions. Note that we use the mixed depth instead of the depth map of the clean image in KL, which performs better in our experiments. As in [42, 33], the triplet loss function in the form of the Jensen-Shannon divergence encourages models to be stable, consistent, and insensitive across input images from diverse scenarios.

Figure 4: Overview of the IRUDepth framework designed for robust self-supervised depth estimation.
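One way to realize the consistency loss of Eqs. (8)-(9) is sketched below; normalizing each depth map into a spatial distribution before taking the KL terms is our assumption to keep the divergence well defined, and variable names are illustrative.

```python
import torch

def depth_consistency_loss(d_clean, d_aug1, d_aug2, eps=1e-7):
    """Triplet consistency loss of Eqs. (8)-(9) (a sketch)."""
    def to_dist(d):
        d = d.flatten(1)                       # (B, H*W)
        return d / (d.sum(dim=1, keepdim=True) + eps)

    p_clean, p_a1, p_a2 = map(to_dist, (d_clean, d_aug1, d_aug2))
    p_mix = (p_clean + p_a1 + p_a2) / 3.0      # mixed depth center, Eq. (8)

    def kl(p, q):
        return (p * ((p + eps) / (q + eps)).log()).sum(dim=1).mean()

    return (kl(p_clean, p_mix) + kl(p_a1, p_mix) + kl(p_a2, p_mix)) / 3.0
```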
Finally, during training, the total loss sums up the above three losses computed from outputs at scales \(s\in\{1,\frac{1}{2},\frac{1}{4},\frac{1}{8}\}\), in the following form:
\[L_{total}=\frac{1}{N}\sum_{s=1}^{N}(\alpha L_{p}+\beta L_{s}+\gamma L_{mix})\, \tag{10}\]
where \(\alpha\), \(\beta\), and \(\gamma\) are the loss coefficients and \(N\) is the number of output scales.
#### 5.1.3 Experimental Analysis
**Implementation Details**. The IRUDepth framework is implemented using PyTorch. Four NVIDIA RTX 3090 GPUs are used for model training, each with a batch size of \(6\). We take MPViT [57] as the backbone, which is pre-trained on ImageNet-1K and further fine-tuned on low-resolution (\(640\times 192\)) images from the KITTI dataset [27], following the splits and data processing in [20, 135]. The overall framework is optimized end-to-end with the AdamW optimizer [72] for \(30\) epochs. The learning rate of the pose network and depth decoder is initially set to \(1\mathrm{e}{-4}\), while the initial learning rate of the MPViT module is set to \(1\mathrm{e}{-5}\). Both learning rates decay by a factor of \(10\) for the final \(5\) epochs.
**Comparative Study**. The benchmark takes the depth maps of clean images as the ground truth for evaluation. Table 5 compares our model's performance with that of other methods on the RoboDepth competition leaderboard. Our method ranks first on the leaderboard and outperforms other methods across four depth evaluation metrics. Figure 5 shows some challenging examples from the RoboDepth benchmark. Even for severely corrupted images, our approach predicts accurate and consistent depth maps, which demonstrates the strong robustness of the proposed IRUDepth.
**Ablation Study**. To further validate the effectiveness of the proposed data augmentation module and the triplet loss, we report some ablation results in Table 6. We assess the impact of different model backbones, image augmentation techniques, and loss functions. As shown in this table, the proposed data augmentation and triplet loss function play an important role in improving the model's performance under OoD corruptions.
#### 5.1.4 Solution Summary
In this work, we proposed IRUDepth, a method that aims to improve the robustness and uncertainty estimation of self-supervised monocular depth estimation.
\begin{table}
\begin{tabular}{l|c c c c|c c c} \hline \hline
**Method** & **Abs Rel \(\downarrow\)** & **Sq Rel \(\downarrow\)** & **RMSE \(\downarrow\)** & **log RMSE \(\downarrow\)** & \(\delta<1.25\uparrow\) & \(\delta<1.25^{2}\uparrow\) & \(\delta<1.25^{3}\uparrow\) \\ \hline \hline MPViT-S & \(0.172\) & \(1.340\) & \(6.177\) & \(0.258\) & \(0.743\) & \(0.910\) & \(0.963\) \\ MPViT-S + Aug + \(L_{mix}\) & \(0.123\) & \(0.946\) & \(5.011\) & \(0.203\) & \(0.855\) & \(0.950\) & \(0.979\) \\ MPViT-B & \(0.170\) & \(1.212\) & \(5.816\) & \(0.319\) & \(0.753\) & \(0.912\) & \(0.961\) \\ MPViT-B + Aug & \(0.146\) & \(1.166\) & \(5.549\) & \(0.226\) & \(0.806\) & \(0.936\) & \(0.974\) \\ \hline MPViT-B + Aug + \(L_{mix}\) & \(\mathbf{0.121}\) & \(\mathbf{0.919}\) & \(\mathbf{4.981}\) & \(\mathbf{0.200}\) & \(\mathbf{0.861}\) & \(\mathbf{0.953}\) & \(\mathbf{0.980}\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: Ablation results of IRUDepth on the RoboDepth competition leaderboard. Notations: Aug denotes the proposed image augmentations; \(L_{mix}\) denotes the proposed triplet loss. For methods only with Aug, we use augmented images instead of clean images as the input. The **best** and **second** best scores of each metric are highlighted in **bold** and underline, respectively.
\begin{table}
\begin{tabular}{l|c c c c|c c c} \hline \hline
**Method** & **Abs Rel \(\downarrow\)** & **Sq Rel \(\downarrow\)** & **RMSE \(\downarrow\)** & **log RMSE \(\downarrow\)** & \(\delta<1.25\uparrow\) & \(\delta<1.25^{2}\uparrow\) & \(\delta<1.25^{3}\uparrow\) \\ \hline \hline Ensemble & \(0.124\) & \(0.899\) & \(4.938\) & \(0.203\) & \(0.852\) & \(0.950\) & \(0.979\) \\ UMCV & \(0.124\) & \(\mathbf{0.845}\) & \(4.883\) & \(0.202\) & \(0.847\) & \(0.950\) & \(\mathbf{0.980}\) \\ YYQ & \(0.123\) & \(0.885\) & \(4.983\) & \(0.201\) & \(0.848\) & \(0.950\) & \(0.979\) \\ USTC-IAT-United & \(0.123\) & \(0.932\) & \(\mathbf{4.873}\) & \(0.202\) & \(\mathbf{0.861}\) & \(\mathbf{0.954}\) & \(0.979\) \\ \hline
**IRUDepth (Ours)** & \(\mathbf{0.121}\) & \(0.919\) & \(4.981\) & \(\mathbf{0.200}\) & \(\mathbf{0.861}\) & \(0.953\) & \(\mathbf{0.980}\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Quantitative results on the RoboDepth competition leaderboard (Track # 1). The **best** and second best scores of each metric are highlighted in **bold** and underline, respectively.
With the novel image augmentation and the proposed triplet loss function, IRUDepth achieved better generalization performance than state-of-the-art methods for self-supervised depth estimation on the KITTI-C dataset. Moreover, our IRUDepth ranked first in the first track of the RoboDepth Challenge, which demonstrates its superior robustness under different kinds of OoD situations.
### The 2nd Place Solution: USTC-IAT-United
**Authors:** Jun Yu, Xiaohua Qi, Jie Zhang, Mohan Jing, Pengwei Li, Zhen Kan, Qiang Ling, Liang Peng, Minglei Li, Di Xu, and Changpeng Yang.
**Summary -** Although current self-supervised depth estimation methods have achieved satisfactory results on "clean" data, their performance often degrades when encountering corrupted or unseen data, which frequently occur in the real world. To address these limitations, the USTC-IAT-United team proposes a solution that includes an MAE mixing augmentation during training and an image restoration module during testing. Both comparative and ablation results verify the effectiveness and superiority of the proposed techniques in handling various types of corruptions that a depth estimation system encounters in practice.
#### 5.2.1 Overview
Self-supervised depth estimation aims to estimate the depth map of a given image without the need for explicit supervision. This task is of great importance in computer vision and robotics, as it enables machines to perceive the 3D structure of the environment without the need for expensive depth sensors. Various self-supervised learning methods have been proposed to achieve this task, such as monocular, stereo, and multi-view depth estimation. These methods leverage the geometric and photometric constraints among multiple views of the same scene to learn depth representation.
In recent years, deep learning techniques have been widely adopted in self-supervised depth estimation tasks. Garg _et al._[26] reformulated depth estimation into a view synthesis problem and proposed a photometric loss across stereo pairs to enforce view consistency. Godard _et al._[29] proposed to leverage differentiable bilinear interpolation [37], virtual stereo prediction, and an SSIM \(+\) L1 reconstruction loss to better encourage the left-right consistency. Utilizing solely supervision signals from monocular video sequences, SfM-Learner [135] relaxed the stereo constraint by replacing the known stereo transform with a pose estimation network. These techniques have shown promising results in various visual perception applications, such as autonomous driving [27], augmented reality [123], and robotics [76].

Figure 5: Qualitative results of IRUDepth in the RoboDepth benchmark under different corruptions.
Despite the significant role that monocular and stereo depth estimation play in real-world visual perception systems and the remarkable achievements that have been made, current deep learning-based self-supervised monocular depth estimation models are mostly trained and tested on "clean" datasets, neglecting OoD scenarios. Common corruptions, however, often occur in practical scenes, which are crucial for the safety of applications such as autonomous driving and robot navigation. In response to this concern, recent research has focused on developing robust self-supervised depth estimation models that can handle OoD scenarios. The challenge of artifacts arising from dynamic objects has been addressed by integrating uncertainty estimation [46, 82, 118], motion masks [101], optical flow [73], or the minimum reconstruction loss. Simultaneously, to enhance robustness against unreliable photometric appearance, strategies such as feature-based reconstructions [95] and proxy-depth supervision [46] have been introduced.
In recent advancements of network architecture design, several techniques have been incorporated, such as 3D packing and unpacking blocks, positional encoding [131], sub-pixel convolution for depth super-resolution [81], progressive skip connections, and self-attention decoders [39]. Moreover, some researchers have proposed to use synthetic data to augment the training dataset and improve the model's generalization ability to OoD scenarios. For example, domain randomization techniques are used to generate diverse synthetic data with various levels of perturbations, which can help the model learn to handle different types of corruption.
In this work, to address this challenging task, we propose a solution with novel designs covering two aspects: 1) an augmented training process and 2) a more stable testing pipeline. For the former, we resort to masked autoencoders (MAE) [31] and image mixing techniques to enhance the representation learning of self-supervised depth estimation models. For the latter, we explore off-the-shelf image restoration networks to obtain images with better visual cues at test time. Through comparative and ablation experiments, we demonstrate the effectiveness and satisfactory performance of the proposed techniques under challenging OoD scenarios.
#### 5.2.2 Technical Approach
Our proposed solution consists of the following major components: 1) the training pipeline, 2) the test pipeline, 3) an MAE mixing operation, and 4) an image restoration module.
**Training Pipeline**. Our model training pipeline, as shown in Figure 6, is composed of four main parts: 1) the input of image sequences or image pairs, 2) an MAE reconstruction with image mixing, 3) a data augmentation module, and 4) the overall model training objectives. The input of our model follows MonoDepth2 [30], where image sequences are used for single-view training and left-right image pairs are used for stereo training. The MAE mixing operation will be elaborated on later. We also adopt the data augmentation settings of MonoDepth2 [30] during training. We use the MonoViT [131] architecture as our backbone model, which consists of MPViT [57] encoder blocks and a self-attention decoder. The training is conducted in a self-supervised manner on the KITTI depth estimation dataset [27], using the photometric loss [30] for both monocular frames and stereo pairs, as well as a proxy depth regression objective. The regularization is achieved by incorporating edge-aware disparity smoothness [29] and depth gradient consistency with respect to the proxy labels.
Figure 6: Illustration of the training pipeline in our proposed robust depth estimation solution.
**Testing Pipeline**. The proposed solution consists of five components in the testing phase, as shown in Figure 7: 1) the input image for inference, 2) an image restoration module, 3) model inference, 4) a test-time augmentation (TTA) technique, and 5) the final depth prediction result. The specific process is as follows: we use a single image as the input for inference, which is then enhanced with the image restoration module to be described later. The restored image is then fed into the depth estimation model for feature extraction and prediction. Finally, a TTA approach based on MonoDepth2 [30] is applied as a post-processing technique to produce the final result.
**MAE Reconstruction**. The masking-based image reconstruction method aims to reconstruct masked regions in an image by minimizing the mean absolute error between the original input and its reconstruction. Mathematically, given an image \(x\) and its reconstruction \(\hat{x}\), the MAE reconstruction process can be formulated as follows:
\[\hat{x}=\arg\min_{\tilde{x}}\frac{1}{n}\sum_{i=1}^{n}\left|x_{i}-\tilde{x}_{i}\right|\,, \tag{11}\]
where \(n\) is the number of pixels in the image, and \(x_{i}\) and \(\tilde{x}_{i}\) represent the \(i\)-th pixel of the original image and its reconstruction, respectively.
MAE is a type of network that can be used for unsupervised learning of visual features, and it is particularly well-suited for learning from large-scale datasets as it can be trained efficiently on distributed computing systems. The basic idea of MAE is to learn a compressed representation of an image by encoding it into a lower-dimensional space and then decoding it back to its original size. Unlike traditional autoencoders, MAE masks a large portion of the image patches, encodes only the visible patches with a Vision Transformer encoder, and reconstructs the masked regions with a lightweight decoder, which keeps pre-training efficient.
The MAE reconstruction process not only preserves semantic information similar to the original image but also introduces blurriness and distortion, making it a suitable method for enhancing robustness under various OoD corruptions. In this challenge, we directly load a pre-trained MAE model [31] for image reconstruction of the input image \(x\). Specifically, the pre-trained model \(f\) can be represented as a function that maps the input image \(x\) to its reconstructed image \(\hat{x}\), _i.e._, \(\hat{x}=f(x)\).
**Image Mixing**. Blending different images is a commonly-used data augmentation technique. It can be used to generate new training samples by mixing two or more images together. The basic idea is to combine the content of two or more images in a way that preserves the semantic information while introducing some degree of variability. This can help the model learn to be more robust to changes in the input data and improve its generalization performance.
One common approach for image mixing is to conduct a weighted sum of the pixel values from different input images. Given two images \(I_{A}\) and \(I_{B}\), we can generate a mixed image \(I_{C}\) as follows:
\[I_{C}=(1-\alpha)I_{A}+\alpha I_{B}\, \tag{12}\]
where \(\alpha\) is a mixing coefficient that controls the degree of influence of each of the two images. For example, when \(\alpha=0.5\), the resulting image is an equal blend of the two inputs. When \(\alpha\) is closer to \(0\) or \(1\), the resulting image is more similar to one of these two candidate input images.
Figure 7: Illustration of the testing pipeline in our proposed robust depth estimation solution.
To introduce a certain degree of randomness and diversity into the mixing process, we can use different values of \(\alpha\) for each pair of images. This can further increase the variability of the generated samples and improve the model's ability to handle different types of input data. Image mixing has been shown to be an effective data augmentation technique for various computer vision tasks, including image classification, object detection, and semantic segmentation.
**MAE Mixing**. Different from the aforementioned image mixing, our MAE mixing operation refers to the mixing of the MAE-reconstructed image and the original image. This mixing process can be mathematically described as follows:
\[x_{mix}=(1-\alpha)x+\alpha\hat{x}\, \tag{13}\]
where \(x\) and \(\hat{x}\) represent the original image and the MAE-reconstructed image, respectively, and \(\alpha\) is a hyperparameter representing the mixing ratio.
By combining the reconstructed images with the original ones, the diversity of the training data can be greatly enriched, thereby enhancing the robustness of the depth estimation model. Without the need for altering the supervision signal, we achieve such mixing and control its degree using weighted image interpolation, as described earlier. The resulting mixed image \(x_{mix}\) can be used as the input to the depth estimation model, thereby increasing the diversity of the training data and improving the model's ability to generalize to unseen data.
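A minimal sketch of the MAE mixing step is shown below, assuming a wrapper around the pre-trained MAE that directly returns the reconstructed image; variable names are illustrative.

```python
import torch

@torch.no_grad()
def mae_mix(x, mae_reconstruct, alpha=0.3):
    """MAE mixing of Eq. (13): blend a clean image with its MAE reconstruction.

    x:               (B, 3, H, W) clean training images
    mae_reconstruct: wrapper around the pre-trained MAE returning the reconstruction of x
    alpha:           mixing ratio (0.3 gave the best result in our ablations)
    """
    x_rec = mae_reconstruct(x)
    return (1.0 - alpha) * x + alpha * x_rec
```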
**Image Restoration**. The goal of image restoration is to recover a blurred or noisy image without changing its size and content. To perform such a restoration, we use an efficient image restoration network called Restormer [125]. This model is based on the Transformer backbone to restore damaged images. In this challenge, we did not further fine-tune the network but directly loaded the pre-trained Restormer [125] checkpoint to restore the corrupted images.
As shown in Figure 7, before feeding the test images into the depth estimation model, we perform image restoration to enhance the image quality. Specifically, we first restore the damaged images using the Restormer network, which is pre-trained on various restoration tasks including 'image de-raining', 'single-image motion de-blurring', 'defocus de-blurring', and 'image de-noising'. After the restoration process, we use the restored images as the input of our depth estimation model for further processing. Mathematically, the restoration process can be formulated as follows:
\[\hat{I}=\texttt{Restormer}(I)\, \tag{14}\]
where \(I\) denotes the input image of the image restoration network and \(\hat{I}\) denotes the restored image. Subsequently, the depth estimation process can be formulated as follows:
\[D=\texttt{DepthEstimate}(\hat{I})\, \tag{15}\]
where \(D\) denotes the estimated depth map. Figure 8 to Figure 11 provide representative results of various types of corrupted images and their restored versions from Restormer [125].
Specifically, Figure 8 displays the restoration results of images degraded by _'snow'_, while Figure 9 shows the restoration results of images degraded by _'motion blur'_. Figure 10 and Figure 11 present the restoration results of images degraded by _'defocus blur'_ and by _'noises'_, respectively. In each figure, the left-hand-side images represent the inputs that are degraded by different kinds of real-world corruptions, while the right-hand-side images are the restored outputs. The results demonstrate the effectiveness of the Restormer network in restoring images degraded by various types of distortions.

Figure 8: Visualizing the effectiveness of using the Restormer network for snow removal.

Figure 9: Visualizing the effectiveness of using the Restormer network for motion deblurring.

Figure 10: Visualizing the effectiveness of using the Restormer network for defocus deblurring.
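Putting the pieces together, the test-time pipeline of Eqs. (14)-(15) can be sketched as follows; the simple flip-and-average step stands in for the MonoDepth2-style post-processing, and the callable names are placeholders rather than real APIs.

```python
import torch

@torch.no_grad()
def robust_inference(image, restorer, depth_net, flip_tta=True):
    """Restore the corrupted input (Eq. 14), then estimate depth (Eq. 15),
    optionally with a horizontal-flip test-time augmentation."""
    restored = restorer(image)                        # restored image
    disp = depth_net(restored)                        # depth/disparity prediction
    if flip_tta:
        disp_flip = depth_net(torch.flip(restored, dims=[3]))
        disp = 0.5 * (disp + torch.flip(disp_flip, dims=[3]))
    return disp
```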
#### 5.2.3 Experimental Analysis
**Implementation Details**. We use the standard Eigen split [20] of the KITTI depth estimation dataset [27] as our training dataset and train our models with the hyperparameters specified in the MonoDepth2 paper [30]. We then fine-tune the pre-trained models using our MAE mixing data augmentation, with the starting learning rate set to one-fifth of the original learning rate.
**Baselines**. We evaluated the robustness of multiple self-supervised depth estimation models on the corrupted dataset and identified four models with superior depth performance under OoD scenarios: CADepth [115], MonoDepth2 [30], Lite-Mono [128], and MonoViT [131]. Their depth estimation results are shown in Table 7. We can observe from this table that MonoViT [131] achieves the best OoD depth estimation performance when trained with the Mono+Stereo modality and with an input resolution of \(640\times 192\). Therefore, all subsequent experiments are conducted with this configuration.
**MAE Mixing**. We conducted experiments to investigate the impact of the mixing ratio hyperparameter \(\alpha\) in our MAE mixing data augmentation. The ablation results are shown in Table 8. Specifically, we varied the mixing ratio \(\alpha\) between the original image \(x\) and the MAE-reconstructed image \(\hat{x}\), where the mixed image is given by Eq. 13. As can be seen from the results, the mixing ratio \(\alpha\) should be carefully selected for our MAE mixing data augmentation, as excessively high or low values of \(\alpha\) can negatively impact the model's performance.
Our experimental results indicate that a mixing ratio of \(\alpha=0.3\) achieves the optimal performance on the OoD testing set and better enhances the model's generalization ability. This suggests that a balanced mixture of the original image and the MAE-reconstructed image is beneficial for the model's representation learning. On the other hand, excessively high values of \(\alpha\) can lead to overfitting issues, where the model becomes too specialized to the training data and performs poorly on new data. Conversely, excessively low values of \(\alpha\) may not provide enough variation in the augmented data, leading to underfitting and poor performance. One possible reason for the sensitivity of the MAE mixing method to the mixing ratio hyperparameter \(\alpha\) is the use of image restoration techniques. The restoration algorithm may introduce artifacts or distortions in the reconstructed image, which can affect the performance of the MAE mixing method. Furthermore, the restoration algorithm only operates on the testing set, while the MAE mixing data augmentation is used during training, making it necessary to carefully tune the hyperparameter \(\alpha\).

Figure 11: Visualizing the effectiveness of using the Restormer network for de-noising.
To address this issue, an end-to-end training approach can be explored in future work. This would involve jointly training the restoration algorithm and the downstream task model, allowing for better integration of the restoration and augmentation processes. By incorporating the restoration algorithm into the training process, the sensitivity of the MAE mixing method to the mixing ratio hyperparameter \(\alpha\) can potentially be reduced, leading to improved performance and generalization ability.
**Image Restoration**. In the final stage of our experiments, we applied the image restoration process described in previous sections to the testing images before depth inference. This resulted in an improved absolute relative error (in terms of the Abs Rel score) of \(0.123\). The image restoration process helps to reduce the negative impact of artifacts and distortions in corrupted images, leading to more accurate predictions by the depth estimation model. By incorporating this step into the testing pipeline, we are able to achieve better performance over the baselines. Furthermore, the use of image restoration techniques can also improve the generalization ability of the depth estimation model, as it helps to reduce the impact of variations and imperfections across a wide range of test images.
#### 5.2.4 Solution Summary
In this work, we have explored various strategies to address the challenging task of OoD self-supervised monocular depth estimation. We first demonstrated that CNN-Transformer hybrid networks exhibit better robustness than plain CNN-based ones. We designed and employed an efficient data augmentation method - MAE mixing - which serves as a strong enhancement for depth estimation. Additionally, we have shown that the image restoration network can effectively handle common distortions at test time, such as blur, noise, rain, and snow, and can significantly improve depth prediction scores. Ultimately, our solution achieved an absolute relative error of \(0.123\) and ranked second in the first track of the RoboDepth Challenge.
### The 3rd Place Solution: YYQ
**Authors:** Yuanqi Yao, Gang Wu, Jian Kuai, Xianming Liu, and Junjun Jiang.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \hline
**Method** & **Ref** & **Input Modality** & **Input Resolution** & **Abs Rel\(\downarrow\)** & \(\Delta\) \\ \hline \hline
\multirow{4}{*}{MonoDepth2} & \multirow{4}{*}{[30]} & Mono & \(640\times 192\) & \(0.149\) & \(+0.000\) \\
 & & Stereo & \(640\times 192\) & \(0.153\) & \(+0.004\) \\
 & & Mono+Stereo & \(640\times 192\) & \(0.146\) & \(-0.003\) \\
 & & Mono & \(1024\times 320\) & \(0.153\) & \(+0.004\) \\
 & & Stereo & \(1024\times 320\) & \(0.154\) & \(+0.005\) \\
 & & Mono+Stereo & \(1024\times 320\) & \(0.240\) & \(+0.091\) \\ \hline
\multirow{4}{*}{CADepth} & \multirow{4}{*}{[115]} & Mono & \(640\times 192\) & \(0.149\) & \(+0.000\) \\
 & & Mono & \(1024\times 320\) & \(0.151\) & \(+0.002\) \\
 & & Mono & \(1280\times 384\) & \(0.157\) & \(+0.008\) \\
 & & Mono+Stereo & \(640\times 192\) & \(0.147\) & \(-0.002\) \\
 & & Mono+Stereo & \(1024\times 320\) & \(0.143\) & \(-0.006\) \\ \hline
\multirow{4}{*}{Lite-Mono-L} & \multirow{4}{*}{[128]} & Mono & \(1024\times 320\) & \(0.148\) & \(-0.001\) \\ \cline{2-6}
 & & Mono & \(640\times 192\) & \(0.143\) & \(-0.006\) \\ \cline{1-1}
\multirow{4}{*}{MonoViT} & \multirow{4}{*}{[131]} & Mono+Stereo & \(640\times 192\) & \(0.134\) & \(-0.015\) \\
 & & Mono & \(1024\times 320\) & \(0.149\) & \(+0.000\) \\
 & & Mono+Stereo & \(1024\times 320\) & \(0.138\) & \(-0.011\) \\ \cline{1-1}
 & & Mono & \(1280\times 384\) & \(0.147\) & \(-0.002\) \\ \hline
\end{tabular}
\end{table}
Table 7: The performance of multiple models trained on the standard Eigen split of the KITTI dataset.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \hline Mixing Ratio & \(\alpha=0.1\) & \(\alpha=0.3\) & \(\alpha=0.5\) & \(\alpha=0.7\) & \(\alpha=0.9\) \\ \hline
**Abs Rel\(\downarrow\)** & \(0.125\) & \(\mathbf{0.123}\) & \(0.128\) & \(0.132\) & \(0.137\) \\ \hline \hline \end{tabular}
\end{table}
Table 8: Ablation results of the MAE mixing ratio \(\alpha\) on the RoboDepth competition leaderboard (Track # 1). The **best** and **second** best scores of each metric are highlighted in **bold** and underline, respectively.
**Summary** - The YYQ team proposes to enhance the OoD robustness of self-supervised depth estimation models via joint adversarial training. Adversarial samples are introduced during training to reduce the sensitivity of depth prediction models to minimal perturbations in the corrupted input data. This approach also ensures the depth estimation models maintain their performance on the in-distribution scenarios while being more robust to different types of data corruptions. Extensive ablation results showcase the effectiveness of the proposed approach.
#### 5.3.1 Overview
Self-supervised depth estimation has emerged as a crucial technique in visual perception tasks, enabling the inference of depth information from 2D images without the use of expensive 3D sensors. However, like conventional depth estimation algorithms, self-supervised depth estimation models trained on "clean" datasets often lack robustness and generalization ability when faced with naturally corrupted data. This issue is particularly relevant in real-world scenarios where it is often difficult to ensure that the input data at test time matches the ideal image distribution of the training dataset. Additionally, adversarial attacks can also lead to incorrect depth estimation results, posing safety hazards in applications such as autonomous driving.
To address the above challenges, we propose a method for enhancing the robustness of existing self-supervised depth estimation models via adversarial training. Specifically, adversarial samples are introduced during training to force the depth estimation model to process modified inputs that aim to deceive the discriminator model. By doing so, we can reduce the sensitivity of the self-supervised depth estimation model to minimal perturbations in the input data, ensuring that the model can be trained on a "clean" dataset while maintaining a certain degree of robustness to common types of corruptions in the real world.
We believe that our approach will play a significant role in future vision perception applications, providing more reliable depth estimation algorithms for various fields, including autonomous driving [27], augmented reality [123], and robot navigation [6]. Furthermore, our approach also provides a new aspect to improve the robustness of learning-based models in other self-supervised learning tasks with no extra cost. Experimental results in the RoboDepth competition leaderboard demonstrate that our proposed method can improve the depth estimation scores over existing models by \(23\%\) on average while still maintaining their original performance on the "clean" KITTI dataset [27]. These results verify the effectiveness and practicality of our proposed approach.
#### 5.3.2 Technical Approach
Our approach can be divided into two main parts as shown in Figure 12. In the first part, we propose a constrained adversarial training method for self-supervised depth estimation, which allows us to jointly train the depth estimation model and the adversarial noise generator. The adversarial noise generator is designed to produce spatially uncorrelated noises with adversarial properties to counter a specified depth estimation network. The second part is a model ensemble, where we improve the robustness of individual models by fusing models trained with different settings and of different sizes.
**Joint Adversarial Training**. In the joint adversarial training stage, we use a simple method to jointly train an adversarial noise generator and the depth estimation model, making it useful to enhance the robustness of any existing self-supervised depth estimation model.
Specifically, we first initialize an adversarial noise generator for adding adversarial noise to the depth estimation model, and then jointly train the depth estimation model with the adversarial noise generator. This encourages the trained depth estimation model to be robust to adversarial noise perturbations. In actual implementations, we use the common reprojection loss in self-supervised depth estimation as the supervision loss for optimizing the adversarial noise generator.
To facilitate robustness during feature learning, we now train the depth estimation model \(f_{\theta}\) to minimize the risk under adversarial noise distributions jointly with the noise generator as follows:
\[\min_{\theta}\max_{\phi}\mathbb{E}_{x,y\sim D}\mathbb{E}_{\delta}\left[ \mathcal{L}\left(f_{\theta}\left(\mathrm{clip}_{\epsilon}(\mathbf{x}+\mathbf{\delta} )\right),y\right)\right]\;, \tag{16}\]
where the perturbation \(\mathbf{\delta}\) is produced by the noise generator \(g_{\phi}\), \(\mathbf{x}+\mathbf{\delta}\in[0,1]^{N}\), and \(\|\mathbf{\delta}\|_{2}=\epsilon\). Here \(\mathcal{L}\) represents the photometric reprojection error \(L_{p}\) in MonoDepth2 [30], which can be formulated as follows:
\[L_{p}=\sum_{t^{\prime}}pe\left(I_{t},I_{t^{\prime}\to t}\right). \tag{17}\]
The noise generator \(g_{\phi}\) consists of four \(1\times 1\) convolutional layers that use Rectified Linear Unit (ReLU) activations and include a residual connection that connects the input directly to the output. To ensure accurate depth estimation on "clean" KITTI images, we adopt a strategy that samples mini-batches comprising \(50\%\) "clean" data and \(50\%\) perturbed data. Out of the perturbed data, we use the current state of the noise generator to perturb \(30\%\) of images from this source, while the remaining \(20\%\) is augmented with samples from previous distributions selected randomly. To facilitate this process, we save the noise generator's states at regular intervals.
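The following sketch illustrates the generator and one min-max update of Eq. (16); the hidden width, the \(\epsilon\) budget, and the `reproj_loss` callable are our assumptions, and the 50%/30%/20% batch composition described above is omitted for brevity.

```python
import torch
import torch.nn as nn

class NoiseGenerator(nn.Module):
    """Adversarial noise generator: four 1x1 convolutions with ReLU activations.
    The perturbed image is formed by a residual connection from the input to
    the output, i.e. x_adv = clip(x + delta). Hidden width and eps are illustrative."""

    def __init__(self, channels=3, hidden=32, eps=1.0):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(channels, hidden, 1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 1),
        )

    def forward(self, x):
        delta = self.net(x)
        # Rescale so each sample's perturbation has L2 norm eps, then keep pixels valid.
        norm = delta.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-7
        delta = self.eps * delta / norm
        return torch.clamp(x + delta, 0.0, 1.0)


def joint_training_step(depth_model, generator, opt_depth, opt_gen, batch, reproj_loss):
    """One min-max update of Eq. (16): the generator ascends on the reprojection
    loss, then the depth model descends on it using freshly perturbed inputs."""
    # Maximize L_p w.r.t. the generator (gradient ascent = minimizing -L_p).
    x_adv = generator(batch["image"])
    loss_gen = -reproj_loss(depth_model, {**batch, "image": x_adv})
    opt_gen.zero_grad()
    loss_gen.backward()
    opt_gen.step()

    # Minimize L_p w.r.t. the depth model on the perturbed (detached) images.
    x_adv = generator(batch["image"]).detach()
    loss_depth = reproj_loss(depth_model, {**batch, "image": x_adv})
    opt_depth.zero_grad()
    loss_depth.backward()
    opt_depth.step()
    return loss_depth.item()
```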
The overall framework of our approach is shown in Figure 12. The network architecture we adopted remained consistent with MonoViT [131] except for the adversarial network. Firstly, we use "clean" multi-frame images as the input to the adversarial noise generator to obtain adversarial multi-frame images. Next, we feed the adversarial and "clean" images with a certain proportion into the encoder, decoder, and pose network without changing the original model architecture. We use the image reprojection loss as a constraint for optimizing the corresponding adversarial noise generator.
**Model Ensemble**. To further enhance the robustness of individual depth estimation models, we use a model ensemble strategy separately on both the small and base variants of MonoViT [131], _i.e._ MonoViT-S and MonoViT-B. Specifically, we verify the performance of MonoViT-S with and without a model ensemble, as well as MonoViT-B, which are improved by \(3\%\) and \(6\%\), respectively. Finally, considering that different model sizes could also affect the model's representation learning by focusing on different features, we ensemble the MonoViT-S and MonoViT-B models to achieve the best possible performance in our final submission.
#### 5.3.3 Experimental Analysis
**Implementation Details**. We implement our proposed approach using PyTorch. The MonoViT-S model is trained on a single NVIDIA GeForce RTX 3090 GPU, while the MonoViT-B model is trained on a single NVIDIA A100 GPU. During training, only images from the training split of the KITTI depth estimation dataset [27] are used.
Figure 12: The overall architecture of our framework. We employ a subset of multi-frame images for adversarial training, which incorporates both “clean” and adversarial images into the encoder, decoder, and pose network, without altering the model structure. The image reprojection loss serves as a constraint for the corresponding adversarial noise generator, providing a simple yet effective way to enhance the self-supervised depth estimation model’s robustness.
**Comparative Study**. As shown in Table 9, Table 10, and Table 11, our proposed approach improves the performance of existing self-supervised depth estimation models by \(23\%\) on average under corrupted scenarios, while still maintaining good performance on the "clean" testing dataset.
**Joint Adversarial Training**. Table 9 shows the evaluation results of the proposed joint adversarial training. It can be seen that such a training enhancement approach significantly improves the robustness of existing depth estimation models under OoD corruptions. The results from Table 10 further validate that our method not only brings a positive impact on OoD settings but also maintains excellent performance on the "clean" testing set. We believe this advantage ensures the accurate estimation of depth information for images in any scenario.
**Model Ensemble**. We evaluate the performance of MonoViT-S and MonoViT-B with and without model ensemble and show the results in Table 11. We observe that such a simple model fusion strategy introduces depth prediction improvements of \(3\%\) and \(6\%\), respectively. Furthermore, given that different model sizes could cause a model to focus on different features, we combined MonoViT-S and MonoViT-B through ensemble learning to achieve the best possible performance. This validates the effectiveness of the model ensemble in improving the robustness of depth estimation models.
#### 5.3.4 Solution Summary
In this work, we proposed a joint adversarial training approach along with a model ensemble strategy for enhancing the robustness of self-supervised depth estimation models. The adversarial samples introduced during training help reduce the sensitivity of the model to minimal perturbations in the input data, thereby improving the model performance on corrupted scenarios while still maintaining its original performance on "clean" datasets. Built upon the strong MonoViT baselines, our approaches achieved promising depth estimation results in this challenging competition. Our team ranked third in the first track of the RoboDepth Challenge.
### The Innovation Prize Solution: Ensemble
**Authors:** Jiale Chen and Shuang Zhang.
**Summary -** Observing distinct behaviors of OoD corruptions in the frequency domain, the Ensemble team proposes two stand-alone models for robust depth estimation. The main idea is to improve the OoD generalizability of depth estimation models from two aspects: normalization and augmentation.
\begin{table}
\begin{tabular}{l|c c c c|c c c} \hline \hline
**Method** & **Abs Rel** \(\downarrow\) & **Sq Rel** \(\downarrow\) & **RMSE** \(\downarrow\) & **log RMSE** \(\downarrow\) & \(\delta<1.25\uparrow\) & \(\delta<1.25^{2}\uparrow\) & \(\delta<1.25^{3}\uparrow\) \\ \hline \hline MonoViT-S & \multirow{2}{*}{\(0.104\)} & \multirow{2}{*}{\(0.747\)} & \multirow{2}{*}{\(4.461\)} & \multirow{2}{*}{\(0.177\)} & \multirow{2}{*}{\(0.897\)} & \multirow{2}{*}{\(\mathbf{0.966}\)} & \multirow{2}{*}{\(0.983\)} \\ + Adversarial Training & & & & & & & \\ \hline MonoViT-B & \multirow{2}{*}{\(0.100\)} & \multirow{2}{*}{\(0.747\)} & \multirow{2}{*}{\(4.427\)} & \multirow{2}{*}{\(0.176\)} & \multirow{2}{*}{\(0.901\)} & \multirow{2}{*}{\(\mathbf{0.966}\)} & \multirow{2}{*}{\(\mathbf{0.984}\)} \\ (Baseline) & & & & & & & \\ \hline MonoViT-B & \multirow{2}{*}{\(\mathbf{0.099}\)} & \multirow{2}{*}{\(\mathbf{0.725}\)} & \multirow{2}{*}{\(\mathbf{4.356}\)} & \multirow{2}{*}{\(\mathbf{0.175}\)} & \multirow{2}{*}{\(\mathbf{0.902}\)} & \multirow{2}{*}{\(\mathbf{0.966}\)} & \multirow{2}{*}{\(\mathbf{0.984}\)} \\ + Adversarial Training & & & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 10: Quantitative results of the baseline and our proposed joint adversarial training approach on the testing set of the KITTI dataset [27]. The **best** and **second best** scores of each metric are highlighted in **bold** and underline, respectively.
\begin{table}
\begin{tabular}{l|c c c c|c c c} \hline \hline
**Method** & **Abs Rel** \(\downarrow\) & **Sq Rel** \(\downarrow\) & **RMSE** \(\downarrow\) & **log RMSE** \(\downarrow\) & \(\delta<1.25\uparrow\) & \(\delta<1.25^{2}\uparrow\) & \(\delta<1.25^{3}\uparrow\) \\ \hline \hline MonoViT-S & \multirow{2}{*}{\(0.160\)} & \multirow{2}{*}{\(1.238\)} & \multirow{2}{*}{\(5.935\)} & \multirow{2}{*}{\(0.245\)} & \multirow{2}{*}{\(0.768\)} & \multirow{2}{*}{\(0.920\)} & \multirow{2}{*}{\(0.967\)} \\ (Baseline) & & & & & & & \\ \hline MonoViT-S & \multirow{2}{*}{\(0.135\)} & \multirow{2}{*}{\(1.066\)} & \multirow{2}{*}{\(\mathbf{5.258}\)} & \multirow{2}{*}{\(0.215\)} & \multirow{2}{*}{\(0.829\)} & \multirow{2}{*}{\(0.942\)} & \multirow{2}{*}{\(\mathbf{0.976}\)} \\ + Adversarial Training & & & & & & & \\ \hline MonoViT-B & \multirow{2}{*}{\(\mathbf{0.130}\)} & \multirow{2}{*}{\(\mathbf{1.027}\)} & \multirow{2}{*}{\(5.281\)} & \multirow{2}{*}{\(\mathbf{0.213}\)} & \multirow{2}{*}{\(\mathbf{0.839}\)} & \multirow{2}{*}{\(\mathbf{0.945}\)} & \multirow{2}{*}{\(0.975\)} \\ + Adversarial Training & & & & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 9: Quantitative results of the baseline and our proposed joint adversarial training approach on the RoboDepth competition leaderboard (Track # 1). The **best** and **second best** scores of each metric are highlighted in **bold** and **underline**, respectively.
To incorporate this, amplitude-phase recombination and feature interaction modules are proposed. The effectiveness of each model has been verified and analyzed. A further combination of both models contributes to an enhanced depth estimation robustness.
#### 5.4.1 Overview
Performing self-supervised depth estimation under common corruptions and sensor failure is of great value in practical applications. In this work, we propose two model variants built respectively upon MonoViT [131] and Lite-Mono [128] and improve their robustness to tackle OoD scenarios. We further propose a simple yet effective model ensemble approach to achieve better performance on the challenging OoD depth estimation benchmark. It is worth noting that our method is the only one trained without an extra pre-trained model; we also do not use any image pre-processing or post-processing operations in this competition.
#### 5.4.2 Technical Approach
We contribute two stand-alone solutions for robust self-supervised depth estimation: Model-I and Model-II. The first model adopts MonoViT [131] as the backbone and is enhanced with a better normalization technique and an amplitude-phase recombination augmentation. The second model, on the other hand, is built upon Lite-Mono [128] and has been integrated with a double-path architecture for better feature extraction, a median-normalization for better OoD generalization, and a channel perturbation for stronger augmentation.
**Normalization**. For Model-I, we change the normalization mechanism of the conventional CNN layers for robustness enhancement. The original network uses batch normalization (BN), which includes parameters containing information related to the batch dimension. Such in-distribution parameters, however, can have an impact on OoD testing. Inspired by AdaIN [35], which performs style transfer by controlling the normalization process at the channel level of the feature maps, we replace batch normalization with instance normalization (IN). IN normalizes each channel of each instance individually, which yields more stable domain statistics for OoD testing.
**Amplitude-Phase Recombination**. We employ the amplitude-phase recombination (APR) [8] as data augmentation for enhancing the model's robustness. Figure 13 provides representative examples of this APR operation on the KITTI dataset [27]. By exchanging the magnitude and phase spectra between different style images and performing the inverse Fourier transform, we discover that the phase spectra contain more shape information, while the magnitude spectra contain more texture and style information. We utilize this single-image transformation and magnitude-phase exchange to construct APR samples during training.
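A compact sketch of the amplitude-phase exchange is given below; it keeps the phase (shape) of one image and the amplitude (texture/style) of another, which is the recombination used to construct APR samples. Tensor layout and clamping are our assumptions.

```python
import torch

def amplitude_phase_recombination(img_a, img_b):
    """APR sketch: keep the phase (shape) of img_a and swap in the amplitude
    (texture/style) of img_b, then invert the FFT to obtain the new sample.

    img_a, img_b: (B, C, H, W) tensors in [0, 1].
    """
    fft_a = torch.fft.fft2(img_a, dim=(-2, -1))
    fft_b = torch.fft.fft2(img_b, dim=(-2, -1))
    amplitude_b = torch.abs(fft_b)
    phase_a = torch.angle(fft_a)
    recombined = torch.polar(amplitude_b, phase_a)     # amplitude * exp(i * phase)
    return torch.fft.ifft2(recombined, dim=(-2, -1)).real.clamp(0.0, 1.0)
```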
**Lite Backbone**. For Model-II, we aim to utilize a lightweight model for robust depth estimation. Backbones with fewer parameters have lower capacity and weaker fitting capabilities. However, they may exhibit greater robustness and perform better in handling unknown data distributions.
\begin{table}
\begin{tabular}{l|c c c c|c c c} \hline \hline
**Method** & **Abs Rel \(\downarrow\)** & **Sq Rel \(\downarrow\)** & **RMSE \(\downarrow\)** & **log RMSE \(\downarrow\)** & \(\delta<1.25\uparrow\) & \(\delta<1.25^{2}\uparrow\) & \(\delta<1.25^{3}\uparrow\) \\ \hline \hline \multicolumn{8}{c}{MonoViT-S} \\ \hline + AT & \(0.135\) & \(1.066\) & \(5.258\) & \(0.215\) & \(0.829\) & \(0.942\) & \(0.976\) \\ + AT + Ensemble & \(0.127\) & \(0.942\) & \(5.043\) & \(0.205\) & \(0.844\) & \(0.948\) & \(0.979\) \\ \hline \hline \multicolumn{8}{c}{MonoViT-B} \\ \hline + AT & \(0.130\) & \(1.027\) & \(5.281\) & \(0.213\) & \(0.839\) & \(0.945\) & \(0.975\) \\ + AT + Ensemble & \(0.126\) & \(0.917\) & \(5.115\) & \(0.206\) & \(0.842\) & \(0.948\) & \(0.978\) \\ \hline \hline \multicolumn{8}{c}{MonoViT-S + MonoViT-B} \\ \hline + AT + Ensemble & \(\mathbf{0.123}\) & \(\mathbf{0.885}\) & \(\mathbf{4.983}\) & \(\mathbf{0.201}\) & \(\mathbf{0.848}\) & \(\mathbf{0.950}\) & \(\mathbf{0.979}\) \\ \hline \hline \end{tabular}
\end{table}
Table 11: Quantitative results of the baseline and the model ensemble strategy on the RoboDepth competition leaderboard (Track # 1). Here AT denotes models trained with the proposed joint adversarial training approach. The **best** and **second best** scores of each metric are highlighted in **bold** and **underline**, respectively.
Our second model variant selects Lite-Mono [128] as the basic backbone, and we make further changes to it to improve the overall robustness.
**Double-Path Architecture**. CNNs have exhibited a heightened sensitivity towards local information, whereas visual Transformers demonstrate a greater aptitude for capturing global information. It is widely observed that various types of corruptions manifest significant dissimilarities in their frequency domain distributions. Consequently, a deliberate selection has been made to adopt a double-path architecture whereby distinct CNN and Transformer pathways are employed to extract features independently, followed by a subsequent feature aggregation step. Figure 14 (a) provides an example of the dual-path structure used in our network.
**Median-Normalization for OoD Generalization**. In our framework, we propose a simple median-normalization method to facilitate better OoD generalizability. The feature map from the CNN layer is first divided into \(4\times 4\) patches, and the median value of each patch is selected for computing the mean and variance values of the channel.
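A possible implementation of this median-based statistic is sketched below, assuming feature maps whose spatial size is divisible by the patch size; it is a sketch of the idea rather than the exact module used in Model-II.

```python
import torch

def median_normalize(feat, patch=4, eps=1e-5):
    """Median-normalization sketch: per-channel mean/variance are computed from
    the medians of non-overlapping patches rather than from raw activations,
    which makes the statistics less sensitive to corrupted outliers.

    feat: (B, C, H, W) feature map with H and W divisible by `patch`
    """
    B, C, H, W = feat.shape
    patches = feat.view(B, C, H // patch, patch, W // patch, patch)
    patches = patches.permute(0, 1, 2, 4, 3, 5).reshape(B, C, -1, patch * patch)
    med = patches.median(dim=-1).values                   # (B, C, num_patches)
    mean = med.mean(dim=-1, keepdim=True).unsqueeze(-1)   # (B, C, 1, 1)
    var = med.var(dim=-1, keepdim=True).unsqueeze(-1)     # (B, C, 1, 1)
    return (feat - mean) / torch.sqrt(var + eps)
```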
**Domain & Style Perturbation in Channel**. For CNNs, the mean and variance of each channel represent domain and style information. Following DSU [58], in the training process, we resample the mean and variance of the feature maps' channels outputted by the CNN. This allows the depth estimation model to utilize different domain and style distributions during training.
Figure 14: Illustrative examples of the two main components in Model-II. (a) The double-path architecture. (b) The feature interaction module from semantics to texture.
Figure 13: Illustrative examples of amplitude-phase exchange and recombination.
**Feature Interaction from Semantics to Texture**. For a pyramid-shaped network architecture, shallow features contain more texture information, while deep features contain more semantic information. In the case of OoD corruptions, shallow texture features are often heavily affected, while deep semantic features exhibit higher robustness degrees. Therefore, we propose modules for aggregating information from semantics to texture before feeding the features into the depth decoder. Figure 14 (b) provides an example of our feature interaction modules. The high-level feature map is upsampled bilinearly and concatenated with the low-level one. Channel attention from CBAM [107] and \(1\times 1\) convolution is adopted for fusion and channel squeeze.
**Model Ensemble**. As will be discussed in the following section, Model-II shows more stable performance than Model-I. To leverage the advantages of both models, we propose a simple yet effective model ensemble approach. The final depth prediction \(D_{\text{final}}\) is the aggregation of the predictions (\(D_{1}\) and \(D_{2}\)) from both models, with fusion coefficients \(\alpha\), \(\beta\), and threshold \(\eta\). Specifically, the ensemble adopts the following formulation:
\[D_{\text{final}}=\begin{cases}\frac{1}{\alpha\frac{1}{D_{1}}+\beta\frac{1}{D_{2}}},&\text{where }\left|\frac{1}{D_{1}}-\frac{1}{D_{2}}\right|/\frac{1}{D_{2}}<\eta\\ D_{2},&\text{where }\left|\frac{1}{D_{1}}-\frac{1}{D_{2}}\right|/\frac{1}{D_{2}}\geq\eta\end{cases}. \tag{18}\]
**Training Loss**. In addition to the conventional monocular self-supervised losses used in MonoDepth2 [30], our overall framework is trained with the proposed APR loss. The APR loss measures the L-1 distance between the disparities estimated from the raw image (\(D\)) and the augmented image (\(D_{APR}\)) as follows:
\[\mathcal{L}_{APR}=\left\|\frac{1}{D}-\frac{1}{D_{APR}}\right\|_{1}. \tag{19}\]
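The fusion rule of Eq. (18) and the APR loss of Eq. (19) can be sketched as follows; tensor shapes are assumed to match, and the APR loss is averaged over pixels in this sketch.

```python
import torch

def ensemble_depth(d1, d2, alpha=2.0 / 3.0, beta=1.0 / 3.0, eta=0.45):
    """Depth fusion of Eq. (18): average the two predictions in disparity space
    where they agree, and fall back to Model-II (d2) where they disagree."""
    inv1, inv2 = 1.0 / d1, 1.0 / d2
    rel_diff = (inv1 - inv2).abs() / inv2
    fused = 1.0 / (alpha * inv1 + beta * inv2)
    return torch.where(rel_diff < eta, fused, d2)


def apr_loss(depth_raw, depth_apr):
    """APR consistency loss of Eq. (19): L1 distance between the disparities
    predicted from the raw image and from the APR-augmented image."""
    return (1.0 / depth_raw - 1.0 / depth_apr).abs().mean()
```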
#### 5.4.3 Experimental Analysis
**Implementation Details**. Our model is trained on four NVIDIA Tesla V100 GPUs. The AdamW optimizer [72] is adopted and the learning rate is set to 1e-4. Model-I is fine-tuned for \(40\) epochs with the pre-trained weights from MonoViT [131]. Model-II is trained without APR loss for \(83\) epochs and a further \(244\) epochs with APR loss. The parameters of model ensemble are set as \(\alpha=\frac{2}{3}\), \(\beta=\frac{1}{3}\), and \(\eta=0.45\), respectively.
**Main Results**. Table 12 shows the comparative and ablation results of Model-I, Model-II, and the fusion between them. For Model-I, we observe that the amplitude-phase recombination operation and statistical normalization help improve the depth estimation performance over the baseline MonoViT [131]. For Model-II, we can see that the double-path feature interaction and median-normalization modules are conducive to enhancing Lite-Mono [128] under OoD scenarios. Finally, an ensemble of both Model-I and Model-II brings a significantly positive impact on the robustness of self-supervised depth estimation models.
\begin{table}
\begin{tabular}{l|c c c c|c c c} \hline \hline
**Method** & **Abs Rel\(\downarrow\)** & **Sq Rel\(\downarrow\)** & **RMSE \(\downarrow\)** & **log RMSE \(\downarrow\)** & \(\delta<1.25\)\(\uparrow\) & \(\delta<1.25^{2}\)\(\uparrow\) & \(\delta<1.25^{3}\)\(\uparrow\) \\ \hline \hline \multicolumn{8}{c}{Model-I} \\ \hline MonoViT & \(0.172\) & \(1.340\) & \(6.177\) & \(0.258\) & \(0.743\) & \(0.910\) & \(0.963\) \\ + APR & \(0.140\) & \(1.216\) & \(5.448\) & \(0.221\) & \(0.830\) & \(0.939\) & \(0.974\) \\ + APR + BN \(\rightarrow\) IN & \(0.129\) & \(1.007\) & \(5.066\) & \(0.208\) & \(0.849\) & \(0.948\) & \(0.977\) \\ \hline \hline \multicolumn{8}{c}{Model-II} \\ \hline \hline Lite-Mono & \(0.199\) & \(1.642\) & \(6.937\) & \(0.293\) & \(0.681\) & \(0.880\) & \(0.948\) \\ Lite-Mono-8m & \(0.196\) & \(1.569\) & \(6.708\) & \(0.287\) & \(0.684\) & \(0.884\) & \(0.952\) \\ + Interact + Perturb & \(0.133\) & \(0.942\) & \(5.115\) & \(0.212\) & \(0.832\) & \(0.944\) & \(0.978\) \\ \hline \hline \multicolumn{8}{c}{Model-I \& Model-II} \\ \hline
**Ensemble** & **0.124** & **0.871** & **4.904** & **0.202** & **0.851** & **0.951** & **0.980** \\ \hline \hline \end{tabular}
\end{table}
Table 12: Quantitative results of the baselines and our proposed approaches on the RoboDepth competition leaderboard (Track # 1). The **best** and **second** best scores of each metric are highlighted in **bold** and underline, respectively.
#### 5.4.4 Solution Summary
In this work, we proposed two stand-alone models for robustness enhancement: Model-I adopted an amplitude-phase recombination operation and instance normalization for noise suppression; Model-II is equipped with a dual-path architecture with median-normalization, channel perturbation, and feature interaction for OoD generalization enhancement. As a result, our team achieved the innovation prize in the first track of the RoboDepth Challenge.
### The Innovation Prize Solution: Scent-Depth
**Authors:** Runze Chen, Haiyong Luo, Fang Zhao, and Jingze Yu.
**Summary -** The lack of structural awareness in existing depth estimation systems can lead to significant performance degradation when faced with OoD situations. The Scent-Depth team resorts to structural knowledge distillation to tackle this challenge. A novel graph-based knowledge distillation framework is built, which is able to transfer structural knowledge from a large-scale semantic model to a monocular depth estimation model. Together with an ensemble of the semantic and depth models, the robustness of depth estimation is largely enhanced.
#### 5.5.1 Overview
Single-image depth estimation, also known as monocular depth estimation, is a popular area of research in computer vision due to its diverse range of applications in robotics, augmented reality, and autonomous driving [17; 76; 130]. Despite considerable efforts made, accurately estimating the depth of objects from a single 2D image remains a challenging task due to the inherent ill-posed nature of this problem [21]. Models that rely solely on pixel-level features struggle to capture the critical structural information of objects in a scene, which negatively impacts their performance in complex and noisy real-world environments. This lack of structural awareness can lead to significant performance degradation when faced with external disturbances such as occlusion, adverse weather, equipment malfunction, and varying lighting conditions. Therefore, effectively integrating structural information into the model becomes a crucial aspect of enhancing its depth estimation performance in various practical scenarios.
Recovering the 3D structure of a scene from just a single 2D image is difficult. However, researchers have developed unsupervised learning methods that leverage image reconstruction and view synthesis. Through the use of a warping-based view synthesis technique with either monocular image sequences or stereo pairs, the model can learn fundamental 3D object structural features, providing an elegant solution for monocular depth estimation. This approach has been previously described in academic literature [135; 29]. Recent research has shown that the combination of Vision Transformers (ViT) [18] and convolutional features can significantly enhance the modeling capacity for long-range structural features [128; 131]. The fusion approach leverages the strengths of both ViT [18] and convolutional features in effectively capturing structural information from images. By incorporating both features, the model can leverage the benefits of the long-range attention mechanism and convolutional features' ability to extract local features. This approach has shown promising results in improving the accuracy of monocular depth estimation models in complex and noisy real-world environments.
Currently, large-scale vision models demonstrate impressive generalization capabilities [45], enabling the effective extraction of scene structural information in various visual contexts. Transferring scene structural knowledge through knowledge distillation from these vision models possesses significant research value. Building upon the RoboDepth Challenge, we aim to design a robust single-image depth estimation method based on knowledge distillation. Specifically, we have leveraged the ample scene structural knowledge provided by large-scale vision models to overcome the limitations of prior techniques. By incorporating these insights, our approach enhances robustness to OoD situations and improves overall performance in practical scenarios.
#### 5.5.2 Technical Approach
**Task Formulation**. The main objective of monocular depth estimation is to develop a learning-based model capable of accurately estimating the corresponding depth \(\hat{D}_{t}\) from a monocular image frame \(I_{t}\), within the context of a monocular image sequence \(I=\{...,I_{t}\in\mathbb{R}^{W\times H},...\}\) with camera intrinsic determined by \(K\). However, the challenge lies in obtaining the ground truth depth measurement \(D_{t}\), which is both difficult and expensive to acquire.
To overcome this, we rely on unsupervised learning methods, which require our monocular depth estimation approach to leverage additional scene structural information in a single image to obtain more accurate results. In monocular depth estimation, our ultimate goal is to synthesize the view \(I_{t^{\prime}\to t}\) by using the estimated relative pose \(\hat{T}_{t\to t^{\prime}}\) and the estimated depth map \(\hat{D}_{t}\) with respect to the source frame \(I_{t^{\prime}}\) and the target frame \(I_{t}\). This synthesis operation can be expressed as follows:
\[I_{t^{\prime}\to t}=I_{t^{\prime}}<\texttt{proj}(\hat{D}_{t},\hat{T}_{t\to t^{ \prime}},K)>\, \tag{20}\]
where \(\texttt{proj}(\cdot)\) projects the depth \(\hat{D}_{t}\) into the image plane of \(I_{t^{\prime}}\) to obtain two-dimensional sampling positions, and \(<\cdot>\) denotes the sampling operation; \(I_{t^{\prime}\to t}\) is thus the approximation of \(I_{t}\) obtained by warping \(I_{t^{\prime}}\). The crux of monocular depth estimation is depth structure consistency: we need to leverage the consistency of depth structure between adjacent frames to accomplish the view synthesis task. To achieve this, we refer to [135, 132] and utilize \(\mathcal{L}_{p}\) to impose constraints on the quality of re-projected views. This learning objective is defined as follows:
\[\mathcal{L}_{p}^{u,v}(I_{t},I_{t^{\prime}\to t})=\frac{\alpha}{2}(1- \texttt{ssim}(I_{t},I_{t^{\prime}\to t}))+(1-\alpha)||I_{t}-I_{t^{\prime}\to t }||_{1}\, \tag{21}\]
\[\mathcal{L}_{p}=\sum_{\mu}\mathcal{L}_{p}^{u,v}(I_{t},I_{t^{\prime}\to t})\, \tag{22}\]
where \(\texttt{ssim}(\cdot)\) computes the structural similarity index measure (SSIM) between \(I_{t}\) and \(I_{t^{\prime}\to t}\), \(\mu\) is the auto-mask of dynamic pixels [30], and \(\mathcal{L}_{p}\) aggregates the per-pixel photometric error \(\mathcal{L}_{p}^{u,v}\) over all pixels \((u,v)\) selected by \(\mu\).
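For concreteness, the following PyTorch sketch illustrates the view synthesis of Eq. (20) and the per-pixel reprojection loss of Eq. (21). It is a minimal illustration rather than the exact implementation used here: the pinhole back-projection and projection steps, the \(3\times 3\) SSIM window, and the weight \(\alpha=0.85\) are common choices assumed for the sketch.

```python
import torch
import torch.nn.functional as F

def synthesize_view(img_src, depth_tgt, T_tgt2src, K):
    """View synthesis of Eq. (20): warp the source frame I_{t'} into the
    target view using the predicted depth and relative pose.
    img_src: (B, 3, H, W), depth_tgt: (B, 1, H, W),
    T_tgt2src: (B, 4, 4), K: (B, 3, 3)."""
    B, _, H, W = depth_tgt.shape
    device = depth_tgt.device
    # Homogeneous pixel grid of the target frame, shape (B, 3, H*W).
    ys, xs = torch.meshgrid(torch.arange(H, device=device),
                            torch.arange(W, device=device), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0)
    pix = pix.float().view(1, 3, -1).expand(B, -1, -1)
    # Back-project to 3D with the predicted depth, then move to the source view.
    cam = torch.linalg.inv(K) @ pix * depth_tgt.view(B, 1, -1)
    cam = torch.cat([cam, torch.ones(B, 1, H * W, device=device)], dim=1)
    proj = K @ (T_tgt2src @ cam)[:, :3]
    uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)
    # Normalize coordinates to [-1, 1] and bilinearly sample the source image.
    grid = torch.stack([2.0 * uv[:, 0] / (W - 1) - 1.0,
                        2.0 * uv[:, 1] / (H - 1) - 1.0], dim=-1).view(B, H, W, 2)
    return F.grid_sample(img_src, grid, padding_mode="border", align_corners=True)

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified per-pixel SSIM using a 3x3 average-pooling window."""
    mu_x, mu_y = F.avg_pool2d(x, 3, 1, 1), F.avg_pool2d(y, 3, 1, 1)
    sig_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    sig_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    sig_xy = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sig_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sig_x + sig_y + c2)
    return (num / den).clamp(0, 1)

def photometric_loss(I_t, I_warp, alpha=0.85):
    """Per-pixel reprojection error of Eq. (21); the auto-mask of Eq. (22)
    then selects which of these per-pixel values are summed."""
    return alpha * 0.5 * (1.0 - ssim(I_t, I_warp)) + (1.0 - alpha) * (I_t - I_warp).abs()
```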
Textures on object surfaces can vary greatly and are often not directly related to their three-dimensional structure. As a result, local textures within images have limited correlation with overall scene structure, and our depth estimation model must instead focus on higher-level, global structural features. To overcome this, we adopt the method proposed in [87] to model local texture independence by utilizing an edge-aware smoothness loss, denoted as follows:
\[\mathcal{L}_{e}=\sum||\mathbf{e}^{-\nabla I_{t}}\cdot\nabla\hat{D}_{t}||\, \tag{23}\]
where \(\nabla\) denotes the spatial derivative. By incorporating \(\mathcal{L}_{e}\), our model can better learn and utilize the overall scene structural information, irrespective of local texture variations in the object surfaces.
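A compact sketch of this edge-aware smoothness term is shown below; the first-order finite differences and the channel-mean image gradient are standard choices assumed here rather than details specified in the text.

```python
import torch

def edge_aware_smoothness(depth, img):
    """Edge-aware smoothness of Eq. (23): depth gradients are down-weighted
    wherever the image itself has strong gradients (likely object edges)."""
    grad_d_x = (depth[:, :, :, :-1] - depth[:, :, :, 1:]).abs()
    grad_d_y = (depth[:, :, :-1, :] - depth[:, :, 1:, :]).abs()
    grad_i_x = (img[:, :, :, :-1] - img[:, :, :, 1:]).abs().mean(1, keepdim=True)
    grad_i_y = (img[:, :, :-1, :] - img[:, :, 1:, :]).abs().mean(1, keepdim=True)
    return (grad_d_x * torch.exp(-grad_i_x)).mean() + \
           (grad_d_y * torch.exp(-grad_i_y)).mean()
```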
**Structural Knowledge Distillation**. The visual scene structural information is vital for a wide range of visual tasks. However, feature representations of different models for distinct tasks exhibit a certain degree of structural correlation in different channels, which is not necessarily one-to-one due to task specificity. We define \(A(E,F)\) as the correlation between feature channels of \(E\) and \(F\), where \(\texttt{vec}\) flattens a 2D matrix into a 1D vector as follows:
\[A(E,F)=\frac{|\texttt{vec}(E)\cdot\texttt{vec}(F)^{\mathbf{T}}|}{||\texttt{ vec}(E)||_{2}\cdot||\texttt{vec}(F)^{\mathbf{T}}||_{2}}. \tag{24}\]
Here, \(A(E,F)\) represents the edge adjacency matrix for state transitions from \(E\) to \(F\) in the graph space, where all \(C\) channels are the nodes.
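The sketch below computes Eq. (24) under one plausible reading: \(\texttt{vec}\) is applied per channel, so \(E\) and \(F\) become \(C\times L\) matrices and \(A\) is a \(C\times C\) channel-affinity (adjacency) matrix.

```python
import torch

def channel_adjacency(E, F, eps=1e-8):
    """Channel-wise affinity of Eq. (24), assuming vec() flattens each channel
    so that both inputs become (C, L) matrices and A is a C x C matrix."""
    E = E.flatten(1)                                   # (C, L)
    F = F.flatten(1)                                   # (C, L)
    num = (E @ F.t()).abs()                            # |vec(E) . vec(F)^T|
    den = E.norm(dim=1, keepdim=True) * F.norm(dim=1, keepdim=True).t() + eps
    return num / den                                   # entries in [0, 1]
```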
To leverage this correlation between features, we propose a structure distillation loss \(\mathcal{S}\) based on isomorphic graph convolutions. We use graph isomorphic networks based on convolution operations to extract features from \(F\) and \(E\), resulting in \(F^{\prime}\) and \(E^{\prime}\), respectively. We then calculate the cosine distance between \(E\) and \(E^{\prime}\), as well as between \(F\) and \(F^{\prime}\), and include these calculations in \(\mathcal{S}\) as:
\[F^{\prime}=\texttt{gin}(\theta^{E\to F},F,A(F,E))\, \tag{25}\]
\[E^{\prime}=\texttt{gin}(\theta^{F\to E},E,A(E,F))\, \tag{26}\]
\[\mathcal{S}(E,F)=\texttt{cosdist}(E,E^{\prime})+\texttt{cosdist}(F,F^{\prime})\, \tag{27}\]
where \(\mathtt{gin}(\cdot)\) represents the graph isomorphic network function and \(\theta\) refers to the parameters of the graph isomorphic network. This approach aggregates structured information across different tasks, which enables the transfer of such structural information to aid depth estimation.
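A minimal sketch of Eqs. (25)-(27) is given below. It assumes that \(E\) and \(F\) have already been aligned to the same \((C, L)\) shape and uses a generic GIN-style layer in place of \(\mathtt{gin}(\cdot)\); the layer design and MLP width are illustrative assumptions rather than the exact architecture used here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as Fn

class SimpleGIN(nn.Module):
    """A generic GIN-style layer standing in for gin() in Eqs. (25)-(26):
    each channel (node) aggregates its neighbours through the adjacency A
    and is then transformed by a small MLP."""
    def __init__(self, dim):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, X, A):
        # X: (C, L) node features; A: (C, C) adjacency between the C channels.
        return self.mlp((1.0 + self.eps) * X + A @ X)

def cosdist(X, Y):
    """Mean cosine distance between matching node features."""
    return (1.0 - Fn.cosine_similarity(X, Y, dim=-1)).mean()

def structure_distill_loss(E, F, A_EF, A_FE, gin_e2f, gin_f2e):
    """Structure distillation loss S(E, F) of Eq. (27). E and F are assumed to
    be aligned (C, L) feature matrices; A_EF = A(E, F) and A_FE = A(F, E)."""
    F_prime = gin_e2f(F, A_FE)   # Eq. (25)
    E_prime = gin_f2e(E, A_EF)   # Eq. (26)
    return cosdist(E, E_prime) + cosdist(F, F_prime)
```

With aligned features \(E, F\in\mathbb{R}^{C\times L}\), the two graph networks would be instantiated as `SimpleGIN(L)` and trained jointly with the depth model.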
Semantic objects in a scene carry crucial structural information; the depth of a semantic object in an image exhibits a degree of continuity. To extract image \(I_{t}\)'s encoding, we use separate depth and semantic expert encoders to obtain \(F_{t}^{(d)}\) and \(E_{t}^{(s)}\), respectively. The depth feature \(F_{t}^{(d)}\in\mathbb{R}^{C^{\prime}\times W^{\prime}\times H^{\prime}}\) and the semantic feature \(E_{t}^{(s)}\in\mathbb{R}^{C\times L}\) of frame \(t\) exhibit a structural correlation that demonstrates graph-like characteristics in the feature embedding.
We align the depth feature \(F_{t}^{(d)}\) with the semantic feature \(E_{t}^{(s)}\) using the alignment function \(\mathtt{align}(\cdot)\), which satisfies \(E_{t}^{(s)}=\mathtt{align}(F_{t}^{(d)})\). To ensure consistent node feature dimensions before constructing the graph structure, we implement the alignment mapping \(\mathtt{align}(\cdot)\) using bilinear interpolation and convolution layers.
To distill the feature structural information of a powerful expert semantic model to the deep depth estimation model, we employ the structure graph distillation loss \(\mathcal{L}_{g}\), which links the structural correlation between semantic embedding and depth embedding as follows:
\[\mathcal{L}_{g}=\mathcal{S}(F_{t}^{(d)},E_{t}^{(s)}). \tag{28}\]
It is worth noting that \(\mathcal{L}_{g}\) enables the cross-domain distillation of semantic structural information from the semantic expert model to the depth estimation model.
**Total Loss**. We propose a method to train a monocular depth model using semantic and structural correlation of visual scenarios. To achieve this goal, we incorporate the idea of knowledge distillation into the design of training constraints for monocular depth estimation. The overall training objective is defined as follows:
\[\min_{\theta}\mathcal{L}=\lambda_{p}\mathcal{L}_{p}+\lambda_{e}\mathcal{L}_{e} +\lambda_{g}\mathcal{L}_{g}\, \tag{29}\]
where \(\{\lambda_{p},\lambda_{e},\lambda_{g}\}\) are loss weights that balance the various constraints during training. We use these weights to determine the level of importance assigned to each constraint.
**Model Ensembling**. To further improve the overall robustness, we use different single-stage monocular depth estimation backbones to train multiple models, resulting in different model configurations \(C\) and corresponding depth estimations \(D_{t}^{(C)}\). We then employ a model ensembling approach that combines the predictions of the individual models with equal weight, leveraging their diversity to improve the overall robustness of the depth estimation. Specifically, we combine these depth maps using equal weights and obtain the ensemble depth map \(D_{t}\) as follows:
\[D_{t}=\frac{1}{N}\sum\frac{D_{t}^{(C)}}{\mathtt{median}(D_{t}^{(C)})}\, \tag{30}\]
where \(N\) denotes the total number of configurations, and the \(\mathtt{median}(\cdot)\) calculates the median value of each depth map \(D_{t}^{(C)}\).
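A direct NumPy sketch of Eq. (30) is given below; the per-model median normalization compensates for scale differences between configurations before averaging.

```python
import numpy as np

def ensemble_depth(depth_maps):
    """Median-normalized equal-weight ensemble of Eq. (30).

    depth_maps: list of (H, W) arrays, one per model configuration. Each
    prediction is divided by its own median before averaging, so models with
    different absolute scales contribute comparably."""
    normed = [d / np.median(d) for d in depth_maps]
    return np.mean(normed, axis=0)
```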
#### 5.5.3 Experimental Analysis
**Implementation Details**. We use the publicly accessible KITTI dataset [27] as the training dataset. This dataset consists of images with a resolution of \(1242\times 375\). We down-sample each image to \(640\times 192\) during training. To avoid introducing any corruptions as data augmentation during the training process, we use only raw images from KITTI [27] for training. During training, we use a batch size of 8 and train the model on two NVIDIA V100 GPUs. We employ a cosine annealing learning rate adjustment strategy with a period of \(100\) iterations, setting the minimum and maximum learning rates to \(1\)e-\(5\) and \(1\)e-\(4\), respectively.
**Comparative & Ablation Study**. We adopt MonoDepth2 [30], DPT [85], and MonoViT [131] as the baseline models in our experiments and compare them with our own models trained in different ways. The quantitative results of different methods are shown in Table 13. As can be seen from the comparative results, our knowledge distillation strategy can significantly improve the depth estimation
performance over the original MonoDepth2 [30] and MonoViT [131] under corrupted scenes. Our approach also enables MonoViT [131] to surpass the large-scale depth estimation model DPT [85] in the challenging OoD scenarios. Additionally, the model ensembling strategy further improves depth estimation, with all metrics achieving higher scores than the corresponding single-model setting in our robustness evaluation.
#### 5.5.4 Solution Summary
In this work, we explored robust single-image depth estimation and proposed a novel knowledge distillation approach based on graph convolutional networks. We integrated information from different visual tasks for robustness enhancement and showed that the proposed knowledge distillation strategy can effectively improve the performance of MonoDepth2 and MonoViT, surpassing that of DPT in corrupted scenes. Additionally, the fusion of multiple models further improved the depth estimation results. Our team achieved the innovation prize in the first track of the RoboDepth Challenge.
## 6 Winning Solutions from Track # 2
### The 1st Place Solution: USTCxNetEaseFuxi
**Authors:** Jun Yu, Mohan Jing, Pengwei Li, Xiaohua Qi, Cheng Jin, Yingfeng Chen, and Jie Hou.
**Summary** - Most existing depth estimation models are trained solely on "clean" data, thereby lacking resilience against real-world interference. To address this limitation, the USTCxNetEaseFuxi team incorporates CutFlip and MAEMix as augmentations to enhance the model's generalization capabilities during training. Additionally, appropriate inpainting methods, such as image restoration and super-resolution, are selected and tailored to handle specific types of corruptions during testing. Furthermore, a new classification-based fusion approach is proposed to leverage advantages from different backbones for robustness enhancement.
#### 6.1.1 Overview
To fulfill the needs of real-world perception tasks such as robotic vision and autonomous driving, significant progress has been made in the field of image depth estimation in recent years. Many high-quality datasets have been constructed using high-performance sensing elements, such as depth cameras for depth imaging [94] and LiDAR sensors for 3D perception [27].
However, the current learning-based depth estimation paradigm may be overly idealized. Most existing models are trained and tested on clean datasets, without considering the fact that image acquisition often happens in real-world scenes. Even high-performance sensing devices are often affected by factors such as varying lighting conditions, lens jittering, and noise perturbations. These factors can disrupt the contour information of objects in the image and interfere with the determination of relative depth. Traditional methods such as filtering cannot effectively eliminate this interference, and existing models often lack sufficient robustness to overcome these problems.
In this work, to pursue robust depth estimation against corruptions, we propose an effective solution with specific contributions as follows. Firstly, we conducted experiments on various high-performance
\begin{table}
\begin{tabular}{l|c c c c|c c c} \hline \hline
**Method** & **Abs Rel\(\downarrow\)** & **Sq Rel\(\downarrow\)** & **RMSE \(\downarrow\)** & **log RMSE\(\downarrow\)** & \(\delta<1.25\uparrow\) & \(\delta<1.25^{2}\uparrow\) & \(\delta<1.25^{3}\uparrow\) \\ \hline \hline MonoDepth2 [30] & \(0.221\) & \(1.988\) & \(7.117\) & \(0.312\) & \(0.654\) & \(0.859\) & \(0.938\) \\ DPT [85] & \(0.151\) & \(1.073\) & \(5.988\) & \(0.237\) & \(0.782\) & \(0.928\) & \(0.970\) \\ \hline Ours (MonoDepth2) & \(0.156\) & \(1.185\) & \(5.587\) & \(0.235\) & \(0.787\) & \(0.932\) & \(0.973\) \\ Ours (MonoDepth2-E) & \(0.151\) & \(1.058\) & \(5.359\) & \(0.226\) & \(0.794\) & \(0.935\) & \(0.976\) \\ Ours (MonoViT) & \(0.148\) & \(1.030\) & \(5.582\) & \(0.230\) & \(0.790\) & \(0.930\) & \(0.974\) \\ Ours (MonoViT-E) & \(\mathbf{0.137}\) & \(\mathbf{0.904}\) & \(\mathbf{5.276}\) & \(\mathbf{0.214}\) & \(\mathbf{0.813}\) & \(\mathbf{0.941}\) & \(\mathbf{0.979}\) \\ \hline \hline \end{tabular}
\end{table}
Table 13: Quantitative results of the baselines and our proposed approaches on the RoboDepth competition leaderboard (Track # 1). The **best** and **second best** scores of each metric are highlighted in **bold** and underline, respectively.
models to compare their robustness and ultimately selected those with the best robustness currently available [133, 3, 77, 70]. Secondly, we sought a group of non-pixel-level data augmentation methods that do not simulate real-world noise. Next, we chose new and effective image restoration methods for reconstructing corrupted images [125]. Finally, our proposed approach achieved first place in the second track of the RoboDepth Challenge, which demonstrates the effectiveness of our designs.
#### 6.1.2 Technical Approach
Our proposed solution consists of three main components: 1) the overall framework, 2) training augmentations, and 3) post-processing techniques. We first elaborate on the overall architecture of our solution and then describe the two data augmentation techniques involved. Finally, we introduce the post-processing component, which includes a test-time data processing method and a model fusion approach.
**Robust Depth Estimation Framework**. Our overall model architecture is depicted in Figure 15. We select four existing depth estimation models that have demonstrated outstanding performance on the NYU Depth V2 [94] dataset for comparative experiments. Among them, AiT [77] exhibited the highest robustness and was found to be highly suitable for tackling this challenging task. MIM-Depth [112] also exhibited relatively superior performance. Consequently, these two models were chosen for subsequent experiments. VPD [133] focused on learning advanced semantic information about the scene during training, but lacked fine-grained capability in handling corrupted images. On the other hand, Zoe [3] emphasized learning relative depth, but the presence of certain corruption types, such as zoom blur, inevitably resulted in significant errors.
During training, we employed two data augmentation techniques: CutFlip and MAEMix. CutFlip involves horizontally cutting the input image and swapping the upper and lower halves. MAEMix, on the other hand, entails simple image blending between the original image and the reconstructed image using a mask. In the testing phase, we applied denoising, deblurring, and super-resolution techniques to the corrupted images to enhance their quality and mitigate the effects brought by noises and interference.
**CutFlip**. We adopt a simple CutFlip operation during training to enhance data diversity. We split the input image into upper and lower parts along a horizontal line and then swap the two parts. This process helps to weaken the correlation between the depth and the vertical position of the
Figure 15: Overview of the proposed robust depth estimation solution. Our design consists of three main components: 1) a training and inference framework; 2) a data augmentation combo; and 3) a test-time augmentation module.
image. The probability of applying CutFlip is set to \(0.5\) and the vertical splitting position is randomly sampled, allowing the model to adapt well to various types of training data.
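A simple sketch of the CutFlip operation, applied jointly to the image and its ground-truth depth map, might look as follows; the range of allowed split positions is an illustrative assumption.

```python
import random
import numpy as np

def cutflip(image, depth, p=0.5):
    """CutFlip sketch: with probability p, split the image (and its depth map)
    into upper and lower parts at a randomly sampled height and swap the two
    parts, weakening the correlation between depth and vertical position."""
    if random.random() > p:
        return image, depth
    h = image.shape[0]
    cut = random.randint(int(0.2 * h), int(0.8 * h))   # random split position
    image = np.concatenate([image[cut:], image[:cut]], axis=0)
    depth = np.concatenate([depth[cut:], depth[:cut]], axis=0)
    return image, depth
```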
**MAEMix**. MAE-based data processing can serve as a powerful data augmentation technique [31]. The idea of MAE is simple: masking out random patches of the input image and reconstructing the masked regions from the remaining visual cues. Empirically, masking out a large portion of the input image (e.g., \(75\%\)) forms a meaningful self-supervised learning task. Strictly speaking, MAE is a kind of denoising autoencoder (DAE): the input signal is corrupted and the network learns to reconstruct the original, undamaged signal, which is a form of representation learning. The encoder and decoder of MAE are asymmetric: the encoder maps the input to a latent representation, while the decoder reconstructs the original signal from this latent representation.
The reconstructed image will have a decrease in clarity compared to the original image, and the image content will also undergo certain changes, which to some extent aligns with our idea of enhancing the model's robustness. An effective approach is to mix the reconstructed image with the original image, thereby transferring the disturbance introduced by MAE reconstruction to the original image. This process helps to incorporate the variations and distortions captured by MAE into the original input, resulting in enhanced feature learning and improved overall robustness of the depth estimation model.
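A minimal sketch of the MAEMix idea is shown below; `mae_reconstruct` stands for inference with a pre-trained masked autoencoder, and the mixing ratio and application probability are illustrative assumptions.

```python
import random

def maemix(image, mae_reconstruct, mix_ratio=0.5, p=0.5):
    """MAEMix sketch: reconstruct the image with a pre-trained masked
    autoencoder and blend the reconstruction back into the original, so that
    the mild distortions introduced by MAE act as a data augmentation."""
    if random.random() > p:
        return image
    recon = mae_reconstruct(image)            # same shape and dtype as `image`
    return (mix_ratio * image + (1.0 - mix_ratio) * recon).astype(image.dtype)
```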
**Post-Processing**. For the test time augmentation, our research focus lies on image restoration operations. The testing set comprises heavily interfered and damaged images. Observing the test set, it was found that noises and blurs accounted for a significant proportion of corruptions, while weather-related corruptions were rarely seen, with only a small number of images showing corruption effects similar to fog. Indeed, as the NYU Depth V2 dataset [94] is mainly constructed for indoor scenes, such indoor environments are rarely affected by adverse weather conditions in practical situations. Hence we focus on noise corruptions and blur corruptions during the post-processing.
Before performing image reconstruction, we pre-classified the test set, categorizing different noises and blurs into pre-defined categories; the remaining images, mainly affected by image-compression and color corruptions, were grouped together into another category.
For images with various types of noises and blurs, we utilized Restormer [125] for restoration. Restormer [125] has achieved state-of-the-art results in multiple image restoration tasks, including image de-raining, single-image motion de-blurring, defocus de-blurring (single-image and dual-pixel data), and image de-noising, outperforming networks such as SwinIR [65] and IPT [9].
On the other hand, for other images, we employed SwinIR [65] for super-resolution processing. SwinIR [65] has exhibited excellent performance in dealing with image compression and corruption, which can significantly improve image quality. However, color destruction, due to its inherent difficulty in recovery, can only receive a small amount of improvement.
Furthermore, our attempts to utilize MAE-based image reconstruction during inference yielded unsatisfactory results. Similarly, we conducted multi-scale testing using super-resolution techniques, but the outcomes were sub-optimal. We speculate that the underwhelming performance of the MAE reconstruction and super-resolution techniques may be attributed to their
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline
**Method** & **Ref** & **Data Augmentation** & \(\delta<1.25\uparrow\) & \(\Delta\) \\ \hline \hline \end{tabular}
\end{table}
Table 14: Quantitative results of the baselines and different data augmentation techniques on the RoboDepth competition leaderboard (Track # 2).
reliance on algorithmic assumptions to generate image features rather than capturing genuine content; the discrepancy between these assumptions and the real content likely contributed to the sub-optimal results.
#### 6.1.3 Experimental Analysis
**Baselines**. We selected four state-of-the-art depth estimation models [133; 3; 77; 70] in our experiments. The inference results of these models on the RoboDepth Challenge testing set are presented in Table 14. Among all models tested, AiT [77] and MIM-Depth [112] demonstrated relatively superior performance, making them highly suitable for handling the challenging OoD depth estimation task. AiT [77] adopts masked modeling techniques similar to those used in natural language processing, and hence exhibits a certain degree of robustness. MIM-Depth [112] balances global and local attention well, making it perform more evenly across the entire image.
**Data Augmentation Techniques**. The results of our ablation experiments on the proposed MAEMix and CutFlip are presented in Table 14. We observe that the optimal data augmentation combinations differ for AiT [77] and MIM-Depth [112]. After applying data augmentation, AiT [77] achieved a depth estimation performance (in terms of the \(\delta_{1}\) score) of \(90.3\%\), while MIM-Depth [112] reached a \(\delta_{1}\) accuracy of \(91.5\%\).
**Post-Processing Techniques**. We found that the post-processing for interference reduction using image restoration and super-resolution techniques [125; 65] led to an additional improvement in performance. As can be seen from Table 15, these two post-processing operations help improve the depth estimation accuracy (in terms of the \(\delta_{1}\) score) with scores reaching \(92.20\%\) and \(92.87\%\) for AiT [77] and MIM-Depth [112], respectively.
**Model Ensemble Techniques**. Finally, we perform model fusion on the previously obtained results from AiT [77] and MIM-Depth [112] using the following strategies: 1) Weighted Average Ensemble, and 2) Classification Ensemble approaches. As can be seen from Table 16, the former ensemble strategy achieved a final \(\delta_{1}\) score of \(93.3\%\) while the latter one yielded an accuracy of \(94.0\%\), which is \(0.037\) higher than the baseline.
#### 6.1.4 Solution Summary
In this work, we have demonstrated through extensive experiments that incorporating the CutFlip and MAEMix data augmentation techniques during the training process brings a positive effect on enhancing the depth estimation model's robustness. Additionally, the simple and effective inpainting approaches, _i.e._ Restormer and SwinIR, directly improve the depth prediction results under OoD corruptions. Our classification-based model fusion method fully considers the advantages of different
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline
**Method** & **Ref** & **Model Ensemble** & \(\delta<1.25\uparrow\) & \(\Delta\) \\ \hline \hline AiT-P & [77] & None & \(0.903\) & \(+0.000\) \\ \hline AiT-P + MIM-Depth & [112] & Weighted Average Ensemble & \(0.933\) & \(+0.030\) \\ \cline{2-4} & & Classification Ensemble & \(\mathbf{0.940}\) & \(+0.037\) \\ \hline \hline \end{tabular}
\end{table}
Table 16: Quantitative results of the baselines and different model ensemble techniques on the RoboDepth competition leaderboard (Track # 2). The **best** and _second_ best scores of each metric are highlighted in **bold** and underline, respectively.
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline
**Method** & **Ref** & **Post-Processing** & \(\delta<1.25\uparrow\) & \(\Delta\) \\ \hline \hline \multirow{3}{*}{AiT-P} & \multirow{3}{*}{[77]} & None & \(0.903\) & \(+0.000\) \\ & & Restormer & \(0.921\) & \(+0.018\) \\ & & Restormer + SwinIR & \(0.922\) & \(+0.019\) \\ \hline \multirow{3}{*}{SwinV2-L 1K-MIM-Depth} & \multirow{3}{*}{[70]} & None & \(0.887\) & \(+0.000\) \\ & & Restormer & \(0.924\) & \(+0.037\) \\ \cline{1-1} & & Restormer + SwinIR & \(\mathbf{0.929}\) & \(+0.042\) \\ \hline \hline \end{tabular}
\end{table}
Table 15: Quantitative results of the baselines and different post-processing techniques on the RoboDepth competition leaderboard (Track # 2). The **best** and _second_ best scores of each metric are highlighted in **bold** and _underline_, respectively.
depth estimation models and achieves better results than traditional fusion approaches, which helped us achieve satisfactory results in the competition. Our team ranked first in the second track of the RoboDepth Challenge.
### The 2nd Place Solution: OpenSpaceAI
**Authors:** Li Liu, Ruijie Zhu, Ziyang Song, and Tianzhu Zhang.
**Summary** - The OpenSpaceAI team proposes a Robust Diffusion model for Depth estimation (RDDepth) to address single-image depth estimation on OoD datasets. RDDepth uses VPD as the baseline to exploit the denoising capability of the diffusion model, which is naturally suited to this problem. Additionally, the high-level scene priors provided by the text-to-image diffusion model are leveraged for robust predictions. Furthermore, the AugMix data augmentation is incorporated to further enhance the model's robustness.
#### 6.2.1 Overview
Monocular depth estimation is a fundamental task in computer vision and is crucial for scene understanding and other downstream applications. In real practice, there are inevitably corruptions (_e.g._, rain) that hinder safety-critical applications. Many learning-based monocular depth estimation methods [2, 64, 122, 63, 117, 67] train and evaluate on subsets of an individual benchmark. Therefore, they tend to overfit a specific dataset, which leads to poor performance on OoD data. The second track of the RoboDepth Challenge provides the necessary data and toolkit for supervised learning-based models to handle OoD depth estimation. The objective is to accurately estimate the depth information while training only on the clean NYU Depth V2 [94] dataset. Our goal is to improve the model's generalization ability across real-world OoD scenarios.
To address this issue, we propose a Robust Diffusion model for Depth estimation (RDDepth). RDDepth takes VPD [133] as the baseline, which aims to leverage the high-level knowledge learned in the text-to-image diffusion model for visual perception. We believe the knowledge from VPD [133] can also benefit the robustness of depth predictors since the prior of scenes is given. Moreover, the denoising capability of diffusion is naturally suitable for handling OoD situations.
Instead of using the step-by-step diffusion pipeline, we simply employ the autoencoder as a backbone model to directly consume the natural images without noise and perform a single extra denoising step with proper prompts to extract the semantic information. Specifically, RDDepth takes the RGB image as input and extracts features by the pre-trained encoder of VQGAN [22], which projects the image into the latent space. The text input is defined by the template of _"a photo of a [CLS]"_, and then the CLIP [83] text encoder is applied to obtain text features.
To solve the domain gap when transferring the text encoder to depth estimation, we adopt an adapter to refine the text features obtained by the CLIP [83]. The latent feature map and the refined text features are then fed into UNet [91] to obtain hierarchical features, which are used by the depth decoder to generate the final depth map. In addition, we employed the AugMix [33] data augmentation, which does not include any of the \(18\) types of corruption and their atomic operations in the original RoboDepth benchmark. We find that, within a certain range, more complex data augmentation enables the model to learn more robust scene priors, thereby enhancing its generalization when tested on corrupted data.
#### 6.2.2 Technical Approach
In this section, we present RDDepth, a framework that achieves promising robustness in OoD depth estimation scenarios. RDDepth is built upon VPD [133] - a framework that exploits the semantic information of a pre-trained text-to-image diffusion model in visual perception tasks. The key idea of VPD [133] is to investigate how to fully extract the pre-trained high-level knowledge in a pre-trained diffusion model. We find such knowledge also benefits the prediction of corrupted images thanks to their semantic information. The overall framework of our RDDepth is illustrated in Figure 16.
**Framework Overview**. Our RDDepth is based on VPD [133], a model that builds upon the foundation of the popular Stable Diffusion [90] and conducts the denoising process in a learned
latent space with a UNet architecture. Stable Diffusion [90] contains rich high-level knowledge owing to the weak natural-language supervision used during pre-training. We believe that this high-level knowledge can, to some extent, mitigate the influence of corruptions in the feature space, thereby guiding the recovery of more accurate depth maps in the depth prediction head. Therefore, the key idea is to investigate how to effectively leverage the high-level knowledge of the diffusion model to steer subsequent models in monocular depth estimation.
Specifically, RDDepth first uses the encoder \(\epsilon\) of VQGAN [22] to extract image features and obtain the latent-space representation. We then extract the corresponding text features from class names using the simple template _"a photo of a [CLS]"_. Moreover, we align the text features to the image features with an adapter. This design enables us to retain the pre-trained knowledge of the text encoder to the fullest extent while reducing the domain discrepancy between the pre-training task and the depth estimation task. After that, we feed the latent feature map and the conditioning inputs to the pre-trained network (usually implemented as a UNet [91]). We do not use the step-by-step diffusion pipeline, which is common in other works. Instead, we simply treat the network as a backbone. In other words, no noise is added to the latent feature map during the denoising process since we set \(t=0\). We then use a single denoising step of the UNet [91] to obtain the features.
The hierarchical feature \(\mathcal{F}\) can be easily obtained from the last layer of each output block in different resolutions. Typically, the size of the input image is \(512\times 512\); the hierarchical feature maps \(\mathcal{F}\) contain four sets, where the \(i\)-th feature map \(F_{i}\) has the spatial size of \(H_{i}=W_{i}=2^{i+2}\), with \(i=1,2,3,4\). The final depth map is then generated by a depth decoder, which is implemented as a semantic FPN [44].
**Data Augmentation Module**. We exploit data augmentation designs that differ markedly from conventional ones. In general, models can only memorize the specific corruptions seen during training, which results in poor generalization to unseen corruptions. AugMix [33] was proposed to help models withstand unforeseen corruptions. Specifically, AugMix [33] blends the outputs obtained by applying chains, or combinations, of multiple augmentation operations. Inspired by it, we investigate the effect of different data augmentations on indoor scene corruptions in our work. The augmentation operations include rotation, translation, shear, _etc_. We randomly sample three augmentation chains; each chain is constructed by composing one to three randomly selected augmentation operations. This prevents the augmented image from veering too far from the original image.
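A simplified sketch of this AugMix-style augmentation is given below. The Dirichlet/Beta mixing coefficients follow the original AugMix formulation and are assumptions here; `operations` is a placeholder list of the geometric operations mentioned above.

```python
import random
import numpy as np

def augmix(image, operations, width=3, max_depth=3, alpha=1.0):
    """Simplified AugMix-style augmentation: sample `width` chains of one to
    `max_depth` operations, blend the chain outputs with Dirichlet weights,
    then mix the result back with the original image."""
    ws = np.random.dirichlet([alpha] * width)   # weights of the chains
    m = np.random.beta(alpha, alpha)            # original-vs-augmented mix
    mixed = np.zeros_like(image, dtype=np.float32)
    for w in ws:
        aug = image.astype(np.float32)
        for _ in range(random.randint(1, max_depth)):
            aug = random.choice(operations)(aug)
        mixed += w * aug
    return ((1.0 - m) * image + m * mixed).astype(image.dtype)
```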
**Loss Function**. We adopt the Scale-Invariant Logarithmic (SILog) loss introduced in [21] and denote it as \(L\). We first calculate the logarithm difference between the predicted depth map and the ground-truth depth as follows:
\[\Delta d_{i}=\log d^{\prime}_{i}-\log d^{*}_{i}\, \tag{31}\]
Figure 16: The architecture of RDDepth. The proposed RDDepth firstly uses a pre-trained image encoder to project RGB image into the latent space, meanwhile extracting the corresponding text feature. The text adapter is used to tackle the domain gap between the text and depth estimation tasks. UNet is considered a backbone to provide hierarchical features in our framework.
where \(d_{i}^{\prime}\) and \(d_{i}^{*}\) are the predicted depth and ground-truth depth, respectively, at pixel \(i\). The SILog loss is computed as:
\[L=\sqrt{\frac{1}{K}\sum_{i}\Delta d_{i}^{2}-\frac{\lambda}{K^{2}}\Big(\sum_{i}\Delta d_{i}\Big)^{2}}\, \tag{32}\]
where \(K\) is the number of pixels with valid depth and \(\lambda\) is a variance-minimizing factor. Following previous works [2, 64], we set \(\lambda=0.5\) in our experiments.
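A compact PyTorch sketch of Eqs. (31)-(32) with \(\lambda=0.5\) might look as follows; the small \(\epsilon\) added before the logarithm is a numerical-stability assumption.

```python
import torch

def silog_loss(pred, target, valid_mask, lam=0.5, eps=1e-6):
    """Scale-Invariant Logarithmic (SILog) loss over the K valid pixels."""
    d = torch.log(pred[valid_mask] + eps) - torch.log(target[valid_mask] + eps)
    # sqrt( (1/K) * sum(d^2) - lambda * ((1/K) * sum(d))^2 )
    return torch.sqrt((d ** 2).mean() - lam * d.mean() ** 2)
```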
#### 6.2.3 Experimental Analysis
**Implementation Details**. We provide the common configurations of our baseline. We fix the VQGAN [22] encoder \(\epsilon\) and the CLIP [83] text encoder during training. To fully preserve the pre-trained knowledge, we always set the learning rate of \(\epsilon_{\theta}\) to 1/10 of the base learning rate.
Our work is implemented using PyTorch on eight NVIDIA RTX 3090 GPUs. The network is optimized end-to-end with the Adam optimizer (\(\beta_{1}=0.9\), \(\beta_{2}=0.999\)). We set the learning rate to \(5\)e-\(4\) and train the model for \(30\) epochs with a batch size of \(24\). During training, we randomly crop the images to \(480\times 480\) and then use AugMix [33] to obtain the augmented image. We freeze the CLIP [83] text encoder and VQGAN [22] encoder. The Stable Diffusion version is v1-5 by default. The decoder head and other experimental settings are the same as [112]. We use flipping and sliding windows during testing.
**Main Results**. The results of RDDepth are presented in Table 17. Our RDDepth framework ranks second in terms of the \(\delta_{1}\) metric and outperforms many other participants in other metrics, such as Sq Rel, RMSE, \(\delta_{2}\), and \(\delta_{3}\). The results demonstrate the superior robustness of our approach.
Additionally, we show some visual examples under different corruption scenarios in Figure 17. It can be seen that our proposed RDDepth can predict reasonable and accurate depth maps under various corruptions, such as color quantization, low light, motion blur, zoom blur, _etc_.
#### 6.2.4 Solution Summary
In this work, we proposed RDDepth, a Robust Diffusion model for Depth estimation. With VPD serving as the baseline, we explored how to leverage the diffusion model's high-level scene knowledge to guide the depth estimation head in counteracting the effects of corruptions, thus generalizing well to unseen OoD data. Furthermore, we found that proper data augmentations can benefit the model's generalization ability. RDDepth achieved state-of-the-art performance on the RoboDepth benchmark and ranked second in the second track of the RoboDepth Challenge.
### The 3rd Place Solution: GANCV
**Authors:** Jiamian Huang and Baojun Li.
**Summary** - To better handle depth estimation under real-world corruptions, the GANCV team proposes a joint depth estimation solution that combines AiT with masked image modeling depth estimation (MIM-Depth). New techniques related to data augmentation and model ensemble are incorporated to further improve the depth estimation robustness. By combining the advantages of AiT and MIM-Depth, this solution achieves promising OoD depth prediction results and ranks third in the second track of the RoboDepth Challenge.
\begin{table}
\begin{tabular}{l|c c c c|c c c} \hline \hline
**Method** & **Abs Rel \(\downarrow\)** & **Sq Rel \(\downarrow\)** & **RMSE \(\downarrow\)** & **log RMSE \(\downarrow\)** & \(\delta<1.25\uparrow\) & \(\delta<1.25^{2}\uparrow\) & \(\delta<1.25^{3}\uparrow\) \\ \hline \hline YYQ & \(0.125\) & \(0.085\) & \(0.470\) & \(0.159\) & \(0.851\) & \(0.970\) & \(0.989\) \\ AIIA-RDepth & \(0.123\) & \(0.080\) & \(0.450\) & \(0.153\) & \(0.861\) & \(0.975\) & \(0.993\) \\ GANCV & \(0.104\) & \(0.060\) & \(0.391\) & \(0.131\) & \(0.898\) & \(0.982\) & \(0.995\) \\ USTCxNetEaseFuxi & **0.088** & \(0.046\) & \(0.347\) & \(\mathbf{0.115}\) & \(\mathbf{0.940}\) & \(0.985\) & \(0.996\) \\ \hline
**OpenSpaceAI (Ours)** & \(0.095\) & \(\mathbf{0.045}\) & \(\mathbf{0.341}\) & \(0.117\) & \(0.928\) & \(\mathbf{0.990}\) & \(\mathbf{0.998}\) \\ \hline \hline \end{tabular}
\end{table}
Table 17: Quantitative results on the RoboDepth competition leaderboard (Track # 2). The **best** and second best scores of each metric are highlighted in **bold** and underline, respectively.
#### 6.3.1 Overview
Depth estimation is a vital component of visual systems that capture 3D scene structure. Depth estimation models have been widely deployed in practical applications, such as the 3D reconstruction of e-commerce products, mobile robotics, and autonomous driving [27; 94; 17; 54]. Compared to expensive and power-hungry LiDAR sensors that provide high-precision but sparse depth information, the unique advantages of low-cost and low-power cameras have made monocular depth estimation techniques a relatively popular choice.
Although promising depth estimation results have been achieved, the current learning-based models are trained and tested on datasets within the same distribution. These approaches often ignore the more commonly occurring OoD situations in the real world. The RoboDepth Challenge was recently established to raise attention among the community for robust depth estimation. To investigate the latest advancements in monocular depth estimation, we propose a solution that combines the AiT [77] and masked image modeling (MIM) depth estimation [112].
AiT [77] consists of three components: the tokenizer, detokenizer, and task solver, as shown in Figure 18. The tokenizer and detokenizer form a VQ-VAE [78], which is primarily used for the automatic encoding and decoding of tokens. The task solver is implemented as an auto-regressive encoder-decoder network, where both the encoder and decoder components combine Transformer blocks to generate soft tokens. In summary, the task solver model takes images as inputs, predicts token sequences through autoregressive decoding, and employs VQ-VAE's decoder to transform the predicted tokens into the desired output results.
MIM is a sub-task of masked signal prediction, where a portion of the input image is masked and deep networks are employed to predict the masked signals conditioned on the visible ones. In this work, we utilize the SimMIM [113] model for depth estimation training. SimMIM [113] consists of four major components with simple designs: 1) random masking with a large masked patch size; 2) the masked tokens and image tokens are fed to the encoder together; 3) the prediction head is as light as a linear layer; 4) raw RGB pixel values are predicted as the target with a direct-regression L1 loss. With these simple designs, SimMIM [113] achieves state-of-the-art performance on different downstream tasks.
In addition to the network architecture, we also explore model ensemble - a commonly used technique in competitions, aiming to combine the strengths and compensate for the weaknesses of multiple
Figure 17: Qualitative results of RDDepth in the second track of the RoboDepth Challenge.
models by integrating their results. For depth estimation, utilizing a model ensemble can effectively balance the estimated results, especially when there are significant differences in the estimated depth values. It can help mitigate the disparities and harmonize the variations among them.
Lastly, we investigate the choice of different backbones in the depth estimation model. The backbone refers to the selection of the underlying architecture or network as the foundational framework for monocular depth estimation. Currently, in the field of computer vision, the most commonly used backbones are Vision Transformers (ViT) [18] and Swin Transformers [71]. The choice between ViT [18] and Swin Transformers [71] depends on various factors. We use Swin Transformers [71] as the backbone of our framework due to its general-purpose nature.
#### 6.3.2 Technical Approach
We propose a multi-model fusion approach with a primary focus on integrating the results of two depth estimation models: AiT [77] and MIM-Depth [112]. Since the overall framework involves combining the results of two models, the training process is conducted in multiple stages. Therefore, we will first discuss the training of the AiT [77] algorithm, followed by the training of MIM-Depth [112]. Finally, we will explain how to integrate the results of these two models together. When presenting the training stages of both models, we will also provide details related to training tricks and the relevant training parameters employed.
**AiT Training**. As per the competition organizers' requirements, we train AiT [77] on the official training split of NYU Depth V2 [94]. Moreover, we have strictly limited the utilization of various data augmentation techniques as specified. We initially employed only two data augmentation techniques: random horizontal flipping and random cropping. These methods were used to augment the training data while ensuring adherence to the specified guidelines. In addition to the aforementioned data augmentation techniques, we also employed two additional data augmentation methods: random brightness and random gamma. These augmentation techniques were sourced from the Monocular-Depth-Estimation-Toolbox [61] and were implemented in compliance with the competition guidelines. We set the probability of all random augmentations to \(0.5\).
As can be observed from Figure 18, the training process of AiT [77] can be divided into two stages. In the first stage, we focus on training the VQ-VAE [78] network, which is a token encoder-decoder model. To enhance the robustness of the VQ-VAE [78], we apply random masks to the original ground truth depth maps as inputs. The mask is performed with a masking ratio of \(0.5\) and a patch size of \(16\), allowing the model to better estimate monocular depth information in real-world scenarios. For monocular depth estimation, the input image size adopted is \(480\times 480\), with a batch size of \(512\) in the first training stage. The AdamW [72] optimizer is used with the base learning rate of 3e-4. During the pre-training process, we utilized a server equipped with eight A100 GPUs and trained the VQ-VAE model [78] for a total of \(100\) epochs. In the fine-tuning process, we continued training VQ-VAE [78] for at least \(50\) additional epochs with a learning rate of \(1\)e-\(5\). This fine-tuning process aims to further optimize the model's performance on the validation set of the NYU Depth V2 dataset [94]. Ultimately, the validation loss of the VQ-VAE [78] model reduced to around \(0.021\). For the data augmentation in this stage, we just use random horizontal flipping and random cropping.
Figure 18: Overview of the architecture proposed in AiT [77]. This network structure includes a VQ-VAE [78] tokenizer and a task solver.
In the second stage, we use the Swin Transformer V2 Large [70] as the backbone, which is pre-trained with SimMIM [113]. In the training process, we use the AdamW [72] optimizer with a base learning rate of 2e-\(4\); the weight decay is set to \(0.075\). Furthermore, we set the layer decay value of the learning rate to \(0.9\) in order to prevent the model from overfitting. This value helps to control the learning rate decay rate for different layers of the depth estimation model, ensuring a balanced optimization process during training. We also set the drop path rate to \(0.1\). The total training steps are \(15150\) with a batch size of \(80\). The step learning rate schedule is used and the learning rate dropped to \(2\)e-\(5\) and \(2\)e-\(6\) at the \(7575\)-th step and the \(12120\)-th step, respectively. Regarding data augmentation, in addition to the conventional ones used in VQ-VAE [78], we also append random brightness with a limit value from \(0.75\) to \(1.25\) and a random gamma.
**MIM-Depth-Estimation Training**. In addition to training AiT [77], we also explored the application of MIM-Depth [112]. Unlike the training strategy used for AiT [77], the training of MIM-Depth [112] is performed in an end-to-end manner, which makes the training process relatively simpler. The network architecture of MIM-Depth [112] is depicted in Figure 19. As mentioned earlier, we select SimMIM [113] as the backbone architecture.
During the training of MIM-Depth [112], we apply five data augmentation techniques to enhance the model performance. These methods include random masking, random horizontal flipping, random cropping, random brightness adjustment, and random gamma adjustment. For all the random augmentation, a probability of \(0.5\) is employed.
**Integrating Both Models**. After completing the training of both models, we experiment with various ensemble strategies to integrate the results of AiT [77] and MIM-Depth [112], aiming to achieve better performance than each individual model. Two ensemble strategies we used include plain averaging and weighted averaging. After conducting comparative experiments, we decide to opt for weighted averaging of the depth estimation results, with weights assigned to the AiT model's result and the MIM-Depth-Estimation model's result as \(0.6\) and \(0.4\), respectively.
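As a simple illustration, the final fusion reduces to a pixel-wise weighted average with the weights reported above:

```python
def weighted_ensemble(depth_ait, depth_mim, w_ait=0.6, w_mim=0.4):
    """Pixel-wise weighted average of the two depth predictions.

    The 0.6 / 0.4 weights are the values selected in the comparative
    experiments; the inputs are depth maps of identical shape."""
    return w_ait * depth_ait + w_mim * depth_mim
```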
#### 6.3.3 Experimental Analysis
**Implementation Details**. For the training of MIM-Depth [112], we select Swin Transformer V2 Large [70] as the backbone architecture. Additionally, we apply a trained weight of Swin Transformer V2 Large [70] pre-trained on the ImageNet classification dataset as the pre-trained model for MIM-Depth [112]. For monocular depth estimation training, we maintain the same input image size as AiT [77], which consists of \(480\times 480\) pixels. This consistency in input image size ensures compatibility and facilitates the comparison and integration of results between AiT [77] and MIM-Depth [112]. We also apply layer decay during the training, but unlike AiT [77], we set the value to \(0.85\). For the drop path rate setting, we apply it with a value of \(0.5\). Regarding the data augmentation for masking, we select the mask patch size of \(32\) and the mask ratio of \(0.1\). In terms of the optimizer, we use AdamW [72] with a learning rate of \(5\)e-\(4\). We use the linear learning rate schedule and set a minimum learning rate to prevent the learning rate from decreasing too quickly. We train the entire model for approximately \(25\) epochs on an \(8\) V100 GPUs server. The batch size we set during training is \(24\).
Figure 19: Illustration of MIM-Depth [112]. The overall structure is from SimMIM [113].
We conduct several comparative experiments focusing on the selection of data augmentation methods, masking strategies, and ensemble strategies. All experimental results are obtained using the test set of the second track of the RoboDepth competition.
**Data Augmentations**. Regarding the use of data augmentations, we compare multiple combinations and present the results in Table 18. We first establish a data augmentation combination that includes random horizontal flipping and random cropping, both with a probability of \(0.5\), dubbed Aug1. We apply this combination to preprocess the training data for both MIM-Depth [112] and AiT [77]. We also form another data augmentation combination by adding random brightness variation and random gamma adjustment to the previous combination; we denote this strategy as Aug2. Both of these augmentations are applied with a probability value of \(0.5\).
**Masking Strategy**. We conduct experiments with two masking strategies on MIM-Depth [112]. In the first set, we set the patch size to \(32\), while in the second set, the patch size is \(16\). Both sets have a mask ratio of \(0.1\). As shown in Table 19, we observe that the two different mask patch sizes have a minimal impact on the final depth estimation results. Additionally, we compare against the scenario where no masking is applied (baseline). It can be seen that masking-based modeling has a significant impact on the model's robustness.
**Ensemble Strategy**. In terms of ensemble strategies, we compare the plain averaging and weighted averaging methods. As shown in Table 20, we can see that the weighted averaging method outperforms the simple averaging method. We also observe that the optimal weights for the ensemble are \(0.6\) for AiT [77] and \(0.4\) for MIM-Depth [112]. Furthermore, the ensemble approach achieves better performance compared to individual models.
#### 6.3.4 Solution Summary
In this work, we presented a collaborative model ensemble solution for robust monocular depth estimation and conducted various experimental analyses to demonstrate its effectiveness. The proposed solution primarily consists of combining AiT and MIM-Depth, with an in-depth discussion on the use of data augmentation and model ensemble techniques. This solution is employed in the second track of the RoboDepth Challenge. Ultimately, we achieved a third-place ranking in the second track of this competition.
### The Innovation Prize Solution: AIIA-RDepth
**Authors:** Sun Ao, Gang Wu, Zhenyu Li, Xianming Liu, and Junjun Jiang.
\begin{table}
\begin{tabular}{l|c c c c|c c c} \hline \hline
**Method** & **Abs Rel \(\downarrow\)** & **Sq Rel \(\downarrow\)** & **RMSE \(\downarrow\)** & **log RMSE \(\downarrow\)** & \(\delta<1.25\uparrow\) & \(\delta<1.25^{2}\uparrow\) & \(\delta<1.25^{3}\uparrow\) \\ \hline \hline MIM-Depth [112] w/ Aug1 & \(0.132\) & \(0.091\) & \(0.458\) & \(0.157\) & \(0.849\) & \(0.967\) & \(0.990\) \\ MIM-Depth [112] w/ Aug2 & \(0.115\) & \(0.070\) & \(0.414\) & \(0.141\) & \(0.883\) & \(0.976\) & \(0.994\) \\ \hline AiT [77] w/ Aug1 & \(0.115\) & \(0.076\) & \(0.435\) & \(0.146\) & \(0.871\) & \(0.973\) & \(0.990\) \\ AiT [77] w/ Aug2 & \(0.104\) & \(0.062\) & \(0.405\) & \(0.134\) & \(0.891\) & \(0.981\) & \(0.994\) \\ \hline \hline \end{tabular}
\end{table}
Table 18: Quantitative results of the candidate models [77, 112] with different data augmentation strategies on the RoboDepth competition leaderboard (Track # 2). Aug1 indicates that only random horizontal flipping and random cropping are used during the training. Aug2 refers to the addition of random brightness and random gamma as data augmentations on top of Aug1. The **best** and second best scores of each metric are highlighted in **bold** and underline, respectively.
\begin{table}
\begin{tabular}{l|c c c c|c c c} \hline \hline
**Method** & **Abs Rel \(\downarrow\)** & **Sq Rel \(\downarrow\)** & **RMSE \(\downarrow\)** & **log RMSE \(\downarrow\)** & \(\delta<1.25\uparrow\) & \(\delta<1.25^{2}\uparrow\) & \(\delta<1.25^{3}\uparrow\) \\ \hline \hline MIM-Depth w/ p16-r0.1 & \(0.115\) & \(0.070\) & \(0.418\) & \(0.141\) & \(0.881\) & \(0.971\) & \(0.990\) \\ MIM-Depth w/ p32-r0.0 & \(0.169\) & \(0.157\) & \(0.535\) & \(0.186\) & \(0.794\) & \(0.940\) & \(0.978\) \\ MIM-Depth w/ p32-r0.1 & \(0.115\) & \(0.070\) & \(0.414\) & \(0.141\) & \(0.883\) & \(0.976\) & \(0.994\) \\ \hline \hline \end{tabular}
\end{table}
Table 19: Quantitative results of MIM-Depth [112] with different masking ratios and patch sizes on the RoboDepth competition leaderboard (Track # 2). p indicates the patch size and r denotes the masking ratio. The **best** and **second best** scores of each metric are highlighted in **bold** and underline, respectively.
**Summary** - To enhance the resilience of deep depth estimation models, the AIIA-RDepth team introduces a multi-stage methodology that incorporates both spatial and frequency domain operations. Initially, several masks are employed to selectively occlude regions in the input image, followed by spatial domain enhancement techniques. Subsequently, robust attacks are applied to the high-frequency information of the image in the frequency domain. Finally, these two approaches are amalgamated into a unified framework called MRSF: Masking and Recombination in the Spatial and Frequency domains.
#### 6.4.1 Overview
Monocular depth estimation is a vital research area in the field of computer vision and finds wide-ranging applications in industries such as robotics [17], autonomous driving [27, 100], virtual reality, and 3D reconstruction [94]. Recently, deep learning has witnessed significant advancements and has gradually become the mainstream approach for addressing monocular depth estimation problems.
Existing models for supervised monocular depth estimation often train and test on datasets within the same distribution, yielding satisfactory performance on the corresponding testing sets. However, when there is a distribution mismatch or corruption between the training and testing data, such as variations in weather and lighting conditions, sensor failures and movements, and data processing issues, the performance of these deep learning models tends to degrade significantly. To address these challenges, the RoboDepth competition introduced novel datasets that include \(18\) types of corruptions, aiming to probe the robustness of models against these corruptions. In light of these OoD issues, we propose a robust data augmentation method that enhances images in both spatial and frequency domains.
Among approaches for supervised monocular depth estimation, DepthFormer from Li _et al._[63] stands out as a significant contribution. This model proposed to leverage the Transformer architecture to effectively capture the global context by integrating an attention mechanism. Furthermore, it employs an additional convolutional branch to preserve local information.
As for data augmentation, the CutOut technique from DeVries and Taylor [16] is widely acknowledged, where square regions of the input are randomly masked out during training. This approach has been proven effective in improving the robustness and overall performance of convolutional neural networks. Additionally, in frequency domain enhancement, Amplitude Phase Reconstruction (APR) from Chen _et al._[7] is an important method. It directs the attention of CNN models toward the phase spectrum, enhancing their ability to extract meaningful information from the frequency domain.
#### 6.4.2 Technical Approach
In this section, we will elucidate the motivation behind and introduce the two components of our Masking and Recombination in the Spatial and Frequency domains (MRSF) approach: 1) masking image regions in the spatial domain and 2) reconstructing the image in the frequency domain. Figure 20 provides an overview of MRSF.
**Motivation**. While considerable progress has been made in learning-based depth estimation models, their training and testing processes often rely on clean datasets, disregarding the OoD situations. In
\begin{table}
\begin{tabular}{l|c c c c|c c c} \hline \hline
**Method** & **Abs Rel \(\downarrow\)** & **Sq Rel \(\downarrow\)** & **RMSE \(\downarrow\)** & **log RMSE \(\downarrow\)** & \(\delta<1.25\) \(\uparrow\) & \(\delta<1.25^{2}\) \(\uparrow\) & \(\delta<1.25^{3}\) \(\uparrow\) \\ \hline \hline USTCxNetEaseFuxi & **0.088** & 0.046 & 0.347 & **0.115** & **0.940** & 0.985 & 0.996 \\ OpenSpaceAI & \(0.095\) & **0.045** & **0.341** & \(0.117\) & \(0.928\) & **0.990** & **0.998** \\ AIIA-RDepth & \(0.123\) & \(0.088\) & \(0.480\) & \(0.162\) & \(0.861\) & \(0.975\) & \(0.993\) \\ \hline
**MIM-Depth (Ours)** & \(0.115\) & \(0.070\) & \(0.414\) & \(0.141\) & \(0.883\) & \(0.976\) & \(0.994\) \\
**AiT (Ours)** & \(0.104\) & \(0.062\) & \(0.405\) & \(0.134\) & \(0.891\) & \(0.981\) & \(0.994\) \\
**Ensemble (Ours)** & \(0.104\) & \(0.060\) & \(0.391\) & \(0.131\) & \(0.898\) & \(0.982\) & \(0.995\) \\ \hline \hline \end{tabular}
\end{table}
Table 20: Quantitative results of MIM-Depth [112], AiT [77], our ensemble model, and other participants on the RoboDepth competition leaderboard (Track # 2). The **best** and **second best** scores of each metric are highlighted in **bold** and underline, respectively.
practical scenarios, common corruptions are more likely to occur, which can have safety-critical implications for applications such as autonomous driving and robot navigation.
Upon thorough examination of various corrupted images in the RoboDepth Challenge, we observe that the corruptions introduced in the competition tend to contain high-frequency interference, which substantially alters local information.
To rectify the perturbations to local information and address the issue of high-frequency interference, we employ a robust data augmentation technique, MRSF, that encompasses both spatial and frequency domain operations. This approach enables us to capture the global information of the image while simultaneously addressing the high-frequency disturbances introduced by the corruptions.
**SDA: Spatial Domain Augmentation**. To help the model better capture global image information and improve its robustness, we employ a masking method to augment the images in the spatial domain. Initially, we randomly select \(N\) points within the \(640\times 480\) images of NYU Depth V2 [94]. These points serve as the top-left corners for generating patch masks, each of a fixed size (specifically, we use a square of size \(a\times a\) in the practical implementation, where \(a\) is the mask length). If a mask extends beyond the boundaries of the original image, the exceeding portion is discarded so that only parts within the image are retained.
The overall pipeline of SDA is shown in Figure 21. Through this process, we generate \(N\) mask images based on the \(N\) points. Subsequently, we perform a logical OR operation on these \(N\) mask images, merging them into a final image mask. By applying this mask to the original image, we obtain the augmented image in the spatial domain. The SDA method relies on two critical hyperparameters, namely \(N\) and \(a\), which have a substantial impact on its performance. Figure 22 provides examples of setting \(N\) to different values. To determine their optimal values, we conducted an extensive series of experiments. Through careful analysis, we discover that setting \(N\) to \(12\) and \(a\) to \(120\) resulted in the model achieving its peak performance. This finding highlights the importance of precisely tuning these hyperparameters in order to maximize the effectiveness of the SDA method.
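The following is a minimal NumPy sketch of the SDA operation described above; the assumption that masked pixels are simply zeroed out, as well as the function and parameter names, are ours and only illustrative.

```python
import numpy as np

def sda(image: np.ndarray, n_masks: int = 12, mask_len: int = 120,
        rng: np.random.Generator = np.random.default_rng()) -> np.ndarray:
    """SDA sketch: OR together N square masks and zero out the covered pixels."""
    h, w = image.shape[:2]
    mask = np.zeros((h, w), dtype=bool)
    for _ in range(n_masks):
        y, x = rng.integers(0, h), rng.integers(0, w)                 # random top-left corner
        mask[y:min(y + mask_len, h), x:min(x + mask_len, w)] = True   # clipped at the borders
    out = image.copy()
    out[mask] = 0                                                     # assumed fill value
    return out

# Example with the reported optimum N = 12 and a = 120 on an NYU-sized RGB image.
img = np.random.rand(480, 640, 3).astype(np.float32)
aug = sda(img, n_masks=12, mask_len=120)
```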
**FDA: Frequency Domain Augmentation**. We apply a rotation of angle \(\theta\) to the original input image, resulting in a new image. Subsequently, we perform a two-dimensional Fourier transform to convert both images into the frequency domain, yielding two frequency domain representations. While preserving the phase spectrum of the frequency domain images, we extract the magnitude values from each frequency domain representation.
The overall pipeline of FDA is shown in Figure 23. For the first image, only the low-frequency components are retained, where the low-frequency signal is defined as the area of size \(S\) around the center of the frequency domain image and the remaining portion represents the high-frequency signal. For the second image, only the high-frequency components are preserved. We then reconstruct
Figure 20: The overall pipeline of the Masking and Recombination in the Spatial and Frequency domains (MRSF) approach for robust monocular depth estimation.
the low-frequency and high-frequency components of the two frequency domain representations, resulting in a single reconstructed frequency domain image. Subsequently, we apply a mask to the high-frequency portion of this frequency domain image to enhance the model's robustness to high-frequency information. We obtain the final image by performing an inverse Fourier transform.
In this method, two crucial hyperparameters, _i.e._\(\theta\) and \(S\), play significant roles in the success of the FDA. Figure 24 provides examples of setting \(\theta\) to different values. We conducted extensive experiments on these two parameters and found that the model tends to achieve the optimal performance when \(\theta\) is set to \(24\) degrees and \(S\) is set to \(50\times 50\).
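One plausible reading of the FDA pipeline is sketched below for a single-channel image, assuming that the recombined spectrum takes its central \(S\times S\) block from the original image and the remaining (high-frequency) part from the rotated copy, which is then attenuated before the inverse transform; the attenuation factor and the use of `scipy.ndimage.rotate` are assumptions, not details taken from the original implementation.

```python
import numpy as np
from scipy.ndimage import rotate

def fda(image: np.ndarray, theta: float = 24.0, s: int = 50,
        high_freq_scale: float = 0.5) -> np.ndarray:
    """FDA sketch for one channel: low frequencies from the original image,
    high frequencies from the rotated copy, then attenuate the high-frequency part."""
    rotated = rotate(image, angle=theta, reshape=False, mode="reflect")
    f_orig = np.fft.fftshift(np.fft.fft2(image))
    f_rot = np.fft.fftshift(np.fft.fft2(rotated))

    h, w = image.shape
    cy, cx = h // 2, w // 2
    low = np.zeros((h, w), dtype=bool)
    low[cy - s // 2:cy + s // 2, cx - s // 2:cx + s // 2] = True   # S x S low-frequency region

    f_new = np.where(low, f_orig, f_rot * high_freq_scale)         # recombine and mask high freq.
    return np.real(np.fft.ifft2(np.fft.ifftshift(f_new)))

# Example on a dummy grayscale image (apply per channel for RGB inputs).
img = np.random.rand(480, 640).astype(np.float32)
aug = fda(img, theta=24.0, s=50)
```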
**MRSF: Masking & Recombination in Spatial & Frequency Domains**. After incorporating the SDA and FDA methods, we observe significant performance improvements in the OoD testing set.
Figure 21: The overall pipeline of the Spatial Domain Augmentation (SDA) operation.
Figure 22: Illustrative example of images after applying the SDA method. From left to right: the \(N\) values in SDA are set to \(0\) (original image), \(3\), \(6\), and \(12\), respectively, while \(a\) remains fixed at \(120\).
Therefore, combining these two methods is a natural idea. Our approach concatenates the two methods and assigns each a certain usage probability, denoted as \(\rho_{1}\) and \(\rho_{2}\), respectively, during a joint data augmentation. Regarding the order in which the methods are applied, we conducted experiments and found that when SDA is applied first, the masks have already disrupted the frequency-domain properties of the image. Consequently, perturbing the frequency domain after the SDA stage results in images with significant discrepancies in both the frequency and spatial domains compared to the original image, rendering the data augmentation ineffective. To address this, we adopted the strategy of performing FDA first, followed by the spatial domain enhancement, to achieve our combined data augmentation.
In our MRSF approach, the two mentioned hyperparameters, \(\rho_{1}\) and \(\rho_{2}\), play a significant role in balancing the augmentation effects brought by SDA and FDA. We conducted extensive experiments on these two parameters and determined that the optimal performance tends to be achieved when \(\rho_{1}\) and \(\rho_{2}\) are both set to \(0.5\).
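A sketch of the combined MRSF augmentation, reusing the `sda` and `fda` helpers sketched above, could then look as follows; the mapping of \(\rho_{1}\) and \(\rho_{2}\) to SDA and FDA follows the order in which the two methods are introduced and is an assumption.

```python
import numpy as np

def mrsf(image: np.ndarray, rho_sda: float = 0.5, rho_fda: float = 0.5,
         rng: np.random.Generator = np.random.default_rng()) -> np.ndarray:
    """MRSF sketch for an H x W x C image: FDA first (on the still-clean spectrum),
    then SDA, each applied with its own probability."""
    out = image
    if rng.random() < rho_fda:
        out = np.stack([fda(out[..., c]) for c in range(out.shape[-1])], axis=-1)
    if rng.random() < rho_sda:
        out = sda(out)
    return out
```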
#### 6.4.3 Experimental Analysis
**Implementation Details**. The MRSF framework is built upon DepthFormer [63] - a state-of-the-art model in the field of monocular depth estimation which incorporates a Swin Transformer backbone
Figure 23: The overall pipeline of the Frequency Domain Augmentation (FDA) operation.
Figure 24: Illustrative examples of images after applying the FDA method. From left to right: the \(\theta\) values in FDA are set to \(0\) (original image), \(3\), \(6\), and \(12\), respectively.
to capture the global context of the input image through an effective attention mechanism. It also utilizes a CNN to extract local information and employs a hierarchical aggregation and heterogeneous interaction module to fuse the features obtained from both components. Figure 25 provides an overview of DepthFormer [63]. We resort to the Monocular-Depth-Estimation-Toolbox [61] for the implementation of our baseline. The model is trained on the official training split of the NYU Depth V2 dataset [94], which contains \(24000\) RGB-depth pairs with a spatial resolution of \(640\times 480\).
**Comparative Study**. We summarize the competition results in Table 21. Our approach achieved the fourth position in the second track of the RoboDepth competition and was honored with the innovative prize. Subsequently, we proceed to study the effects brought by SDA, FDA, and MRSF. The results are shown in Table 22. Specifically, for MRSF, we employed a stochastic approach where we randomly applied the FDA method to attack the model in the frequency domain, particularly targeting the high-frequency components, with a certain probability. Within the already perturbed images, we further applied the SDA model in the spatial domain using a random mask, again with a certain probability. Through this fusion approach, our method exhibited performance improvements beyond those achieved by either individual method alone.
**Ablation Study**. After completing the baseline testing, we proceed to ablate the effects brought by SDA and FDA. When applying masks in the spatial domain, the number of masks (\(N\)) and the size of individual masks (\(a\)) are two critical hyperparameters to determine. We conducted numerous experiments regarding these two parameters and show the results in Table 23.
Initially, we made a conjecture that the model's performance is correlated with the total area of the masks; when the total area remains constant, the impact of \(N\) and \(a\) on the model's performance would be limited. This conjecture was validated in the first three experimental groups. Subsequently, while keeping the size of individual masks (\(a\)) fixed, we varied the number of masks (\(N\)) and found that the model achieved optimal performance when \(N\) was set to \(12\) and \(a\) was set to \(120\). When the mask size is too large or too small, the model's performance does not reach its optimal level. Our experiments have demonstrated that the model tends to achieve the best possible performance when the total area of the masks is approximately \(75\%\) of the original input resolution.
Furthermore, we conduct extensive experiments in the frequency domain augmentation. Our method primarily focused on testing various rotation angles, as depicted in Table 24. Ultimately, we found that the optimal value for \(\theta\) is \(24\) degrees. This is because excessively small \(\theta\) values would result in minimal changes to the image, while excessively large values may lead to the loss of crucial information. Additionally, the partitioning of high-frequency and low-frequency information is an important parameter that we explored through experiments. Eventually, we discovered that the model would perform well when low-frequency information was preserved within a rectangular region of size \(50\times 50\) at the center of the frequency domain image.
\begin{table}
\begin{tabular}{l|c c c c|c c c} \hline \hline
**Method** & **Abs Rel \(\downarrow\)** & **Sq Rel \(\downarrow\)** & **RMSE \(\downarrow\)** & **log RMSE \(\downarrow\)** & \(\delta<1.25\uparrow\) & \(\delta<1.25^{2}\uparrow\) & \(\delta<1.25^{3}\uparrow\) \\ \hline \hline USTCxNetEaseFuxi & \(\mathbf{0.088}\) & \(0.046\) & \(0.347\) & \(\mathbf{0.115}\) & \(\mathbf{0.940}\) & \(0.985\) & \(0.996\) \\ OpenSpaceAI & \(\mathbf{0.095}\) & \(\mathbf{0.045}\) & \(\mathbf{0.341}\) & \(0.117\) & \(0.928\) & \(\mathbf{0.990}\) & \(\mathbf{0.998}\) \\ GANCV & \(0.104\) & \(0.060\) & \(0.391\) & \(0.131\) & \(0.898\) & \(0.982\) & \(0.995\) \\ \hline
**AIIA-RDepth (Ours)** & \(0.123\) & \(0.088\) & \(0.480\) & \(0.162\) & \(0.861\) & \(0.975\) & \(0.993\) \\ \hline \hline \end{tabular}
\end{table}
Table 21: Quantitative results on the RoboDepth competition leaderboard (Track # 2). The **best** and second best scores of each metric are highlighted in **bold** and underline, respectively.
Figure 25: The framework overview of DepthFormer [63].
#### 6.4.4 Solution Summary
In this work, to address the challenge of robust monocular depth estimation, we employed a multi-stage approach. Firstly, we utilized a masking technique to selectively occlude regions in the input image to enhance the spatial domain representation learning. Subsequently, we performed frequency domain operations to separate the high-frequency and low-frequency information of the image, followed by a series of robustness-enhancing techniques applied specifically to the high-frequency information. By employing these two augmentations, we significantly improved the model's robustness on the OoD testing dataset. We evaluated our proposed approach in the second track of the RoboDepth Challenge and achieved the innovative prize.
## 7 Conclusion & Future Direction
This paper has presented the results of the RoboDepth Challenge. In this challenge, we were dedicated to encouraging out-of-distribution depth estimation to meet safety-critical requirements. Our top-performing participants presented diverse designs and techniques that are proven useful in improving the robustness of depth estimation models under common corruptions. We compared and summarized robustness enhancement strategies from different aspects and drew insightful observations behind them. It is worth mentioning again that promising results have been achieved on the challenging testing sets across two tracks of this challenge. We believe that the RoboDepth Challenge has shed light on the development of depth estimation systems that are reliable, scalable, and generalizable.
Future editions of the RoboDepth Challenge seek further improvements from the following aspects:
* Extension of the scale and diversity of robustness evaluation sets. The current RoboDepth benchmarks only considered two distinct data sources and five discrete severity levels. Simulating continuous severity changes on more depth estimation datasets is desirable.
* Integration of more depth estimation tasks. While this challenge mainly focused on monocular depth estimation, it is important to study also the robustness of other related tasks, such as stereo, panorama, and surrounding-view depth estimation.
* Exploration of other techniques that could improve the OoD robustness. The recent vision foundation models have opened up new possibilities for unified and generalizable visual perception. It would be interesting to combine these models for robust depth estimation.
* Pursuit of both robustness and efficiency. Since the depth estimation system might require in-vehicle deployment, the use of certain techniques, such as model ensemble and TTA, would become unreasonable. It is thus crucial to design suitable latency constraints.
\begin{table}
\begin{tabular}{l|c c c c|c c c} \hline \hline
**Method** & **Abs Rel \(\downarrow\)** & **Sq Rel \(\downarrow\)** & **RMSE \(\downarrow\)** & **log RMSE \(\downarrow\)** & \(\delta<1.25\uparrow\) & \(\delta<1.25^{2}\uparrow\) & \(\delta<1.25^{3}\uparrow\) \\ \hline \hline DepthFormer [63] & \(0.131\) & \(0.095\) & \(0.507\) & \(0.170\) & \(0.827\) & \(0.963\) & \(0.987\) \\ \hline + SDA (\(N=12,a=60\)) & \(0.128\) & \(0.091\) & \(0.491\) & \(0.166\) & \(0.839\) & \(0.964\) & \(0.987\) \\ + SDA (\(N=48,a=30\)) & \(\mathbf{0.127}\) & \(0.091\) & \(0.491\) & \(0.165\) & \(0.840\) & \(0.965\) & \(0.987\) \\ + SDA (\(N=3,a=120\)) & \(\mathbf{0.127}\) & \(0.089\) & \(0.489\) & \(0.165\) & \(0.843\) & \(0.965\) & \(0.987\) \\ + SDA (\(N=6,a=120\)) & \(\mathbf{0.127}\) & \(0.089\) & \(0.484\) & \(0.163\) & \(0.846\) & \(0.965\) & \(0.988\) \\ + SDA (\(N=12,a=120\)) & \(\mathbf{0.127}\) & \(\mathbf{0.088}\) & \(\mathbf{0.480}\) & \(\mathbf{0.162}\) & \(\mathbf{0.850}\) & \(\mathbf{0.967}\) & \(\mathbf{0.988}\) \\ \hline \hline \end{tabular}
\end{table}
Table 23: Ablation results of Spatial Domain Augmentation (SDA) with different hyperparameters on the testing set of the second track of the RoboDepth competition. The **best** and second best scores of each metric are highlighted in **bold** and underline, respectively.
\begin{table}
\begin{tabular}{l|c c c c|c c c} \hline \hline
**Method** & **Abs Rel \(\downarrow\)** & **Sq Rel \(\downarrow\)** & **RMSE \(\downarrow\)** & **log RMSE \(\downarrow\)** & \(\delta<1.25\uparrow\) & \(\delta<1.25^{2}\uparrow\) & \(\delta<1.25^{3}\uparrow\) \\ \hline \hline DepthFormer [63] & \(0.131\) & \(0.095\) & \(0.507\) & \(0.170\) & \(0.827\) & \(0.963\) & \(0.987\) \\ \hline + SDA & \(0.127\) & \(\mathbf{0.088}\) & \(\mathbf{0.480}\) & \(\mathbf{0.162}\) & \(0.850\) & \(0.967\) & \(0.988\) \\ + FDA & \(0.128\) & \(0.087\) & \(0.462\) & \(0.160\) & \(0.850\) & \(0.969\) & \(0.989\) \\ \hline \hline
+ **MRSF** & \(\mathbf{0.123}\) & \(\mathbf{0.088}\) & \(\mathbf{0.480}\) & \(\mathbf{0.162}\) & \(\mathbf{0.861}\) & \(\mathbf{0.975}\) & \(\mathbf{0.993}\) \\ \hline \hline \end{tabular}
\end{table}
Table 22: Quantitative results of different components in the proposed MRSF framework. The **best** and second best scores of each metric are highlighted in **bold** and underline, respectively.
## 8 Acknowledgements
This competition is sponsored by Baidu Research, USA ([http://research.baidu.com](http://research.baidu.com)).
This research is part of the programme DesCartes and is supported by the National Research Foundation, Prime Minister's Office, Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) programme. This work is affiliated with the WP4 of the DesCartes programme, with an identity number: A-8000237-00-00.
We sincerely thank the support from the ICRA 2023 organizing committee.
## 9 Appendix
In this appendix, we supplement the following materials to support the findings and conclusions drawn in the main body of this paper:
* Section 9.1 attaches the certificates that are awarded to our participants.
* Section 9.2 acknowledges the public resources used during the course of this work.
### Competition Certificates
In this section, we attach the certificates that are awarded to the top-performing participants in the RoboDepth Challenge. Specifically, the certificates awarded to winners from the first track are shown in Figure 26, Figure 27, Figure 28, Figure 29, and Figure 30. The certificates awarded to winners from the second track are shown in Figure 31, Figure 32, Figure 33, and Figure 34.
Figure 26: The certificate awarded to the OpenSpaceAI team in the first track of the RoboDepth Challenge.
Figure 27: The certificate awarded to the USTC-IAT-United team in the first track of the RoboDepth Challenge.
Figure 28: The certificate awarded to the YYQ team in the first track of the RoboDepth Challenge.
Figure 29: The certificate awarded to the Ensemble team in the first track of the RoboDepth Challenge.
Figure 30: The certificate awarded to the Scent-Depth team in the first track of the RoboDepth Challenge.
Figure 31: The certificate awarded to the USTCxNetEaseFuxi team in the second track of the RoboDepth Challenge.
Figure 32: The certificate awarded to the OpenSpaceAI team in the second track of the RoboDepth Challenge.
Figure 33: The certificate awarded to the GANCV team in the second track of the RoboDepth Challenge.
Figure 34: The certificate awarded to the AIIA-RDepth team in the second track of the RoboDepth Challenge.
### Public Resources Used

In this section, we acknowledge the use of public resources during the course of this work:

* KITTI Vision Benchmark ([https://www.cvlibs.net/datasets/kitti](https://www.cvlibs.net/datasets/kitti)). CC BY-NC-SA 3.0
* NYU Depth Dataset V2 ([https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html](https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html)). Unknown
* MonoDepth2 ([https://github.com/niantimelabs/monodepth2](https://github.com/niantimelabs/monodepth2)). Custom MonoDepth2 License
* MonoViT ([https://github.com/zxcq1f/MonoViT](https://github.com/zxcq1f/MonoViT)). MIT License
* Lite-Mono ([https://github.com/noahz/Lite-Mono](https://github.com/noahz/Lite-Mono)). Unknown
* Monocular-Depth-Estimation-Toolbox ([https://github.com/zhyever/Monocular-Depth-Estimation-Toolbox](https://github.com/zhyever/Monocular-Depth-Estimation-Toolbox)). Apache License 2.0
* DepthFormer ([https://github.com/zhyever/Monocular-Depth-Estimation-Toolbox/tree/main/configs/depthformer](https://github.com/zhyever/Monocular-Depth-Estimation-Toolbox/tree/main/configs/depthformer)). Apache License 2.0
* ImageCorruptions ([https://github.com/bethgelab/imagecorruptions](https://github.com/bethgelab/imagecorruptions)). Apache License 2.0
* 3DCC ([https://github.com/EPFL-VILAB/3DCommonCorruptions](https://github.com/EPFL-VILAB/3DCommonCorruptions)). Attribution-NC 4.0 International
* ImageNet-C ([https://github.com/hendrycks/robustness](https://github.com/hendrycks/robustness)). Apache License 2.0
2303.07192 | PN-OWL: A Two Stage Algorithm to Learn Fuzzy Concept Inclusions from OWL
Ontologies | OWL ontologies are a quite popular way to describe structured knowledge in
terms of classes, relations among classes and class instances. In this paper,
given a target class T of an OWL ontology, with a focus on ontologies with
real- and boolean-valued data properties, we address the problem of learning
graded fuzzy concept inclusion axioms with the aim of describing enough
conditions for being an individual classified as instance of the class T. To do
so, we present PN-OWL that is a two-stage learning algorithm made of a P-stage
and an N-stage. Roughly, in the P-stage the algorithm tries to cover as many
positive examples as possible (increase recall), without compromising too much
precision, while in the N-stage, the algorithm tries to rule out as many false
positives, covered by the P-stage, as possible. PN-OWL then aggregates the
fuzzy inclusion axioms learnt at the P-stage and the N-stage by combining them
via aggregation functions to allow for a final decision whether an individual
is instance of T or not. We also illustrate its effectiveness by means of an
experimentation. An interesting feature is that fuzzy datatypes are built
automatically, the learnt fuzzy concept inclusions can be represented directly
into Fuzzy OWL 2 and, thus, any Fuzzy OWL 2 reasoner can then be used to
automatically determine/classify (and to which degree) whether an individual
belongs to the target class T or not. | Franco Alberto Cardillo, Franca Debole, Umberto Straccia | 2023-03-01T09:08:55Z | http://arxiv.org/abs/2303.07192v1 | # PN-OWL: A Two Stage Algorithm to Learn Fuzzy Concept Inclusions from OWL Ontologies
###### Abstract
OWL ontologies are a quite popular way to describe structured knowledge in terms of classes, relations among classes and class instances.
In this paper, given a target class \(T\) of an OWL ontology, with a focus on ontologies with real- and boolean-valued data properties, we address the problem of learning graded fuzzy concept inclusion axioms with the aim of describing enough conditions for being an individual classified as instance of the class \(T\).
To do so, we present PN-OWL that is a two-stage learning algorithm made of a P-stage and an N-stage. Roughly, in the P-stage the algorithm tries to cover as many positive examples as possible (increase _recall_), without compromising too much _precision_, while in the N-stage, the algorithm tries to rule out as many false positives, covered by the P-stage, as possible. PN-OWL then aggregates the fuzzy inclusion axioms learnt at the P-stage and the N-stage by combining them via aggregation functions to allow for a final decision whether an individual is instance of \(T\) or not.
We also illustrate its effectiveness by means of an experimentation. An interesting feature is that fuzzy datatypes are built automatically, the learnt fuzzy concept inclusions can be represented directly into Fuzzy OWL 2 and, thus, any Fuzzy OWL 2 reasoner can then be used to automatically determine/classify (and to which degree) whether an individual belongs to the target class \(T\) or not.
## 1 Introduction
OWL 2 ontologies [67] are a popular means to represent _structured_ knowledge and its formal semantics is based on _Description Logics_ (DLs) [6]. The basic ingredients of DLs are concept descriptions, inheritance relationships among them and instances of them.
In this work, we focus on the problem of automatically learning fuzzy \(\mathcal{EL}(\mathbf{D})\) concept inclusion axioms from OWL 2 ontologies based on the terminology and instances within them. Although a considerable amount of work has been carried out on DLs, the application of machine learning techniques to OWL 2 ontologies is relatively less addressed compared to the _Inductive Logic Programming_ (ILP) setting (see _e.g._[69, 70] for more insights on ILP). We refer the reader to [54, 71] for an overview and to Section 5.
In this paper, the problem we address is the following: given a target class \(T\) of an OWL ontology, learn rule-like graded fuzzy \(\mathcal{EL}(\mathbf{D})\)[13, 16, 86] concept inclusion axioms with the aim of describing sufficient conditions for being an individual classified as instance of the class \(T\).
**Example 1.1**: _Consider an ontology [18, 20] that describes the meaningful entities of mammography image analysis. An excerpt of this ontology is given in Fig. 1. Now, suppose we have a set of patients that exhibit a cancer (positive examples) and another set which does not (negative examples). Now, one may ask about what characterises the patients with cancer (our target class \(T\)). Then one may learn from the ontology the following fuzzy \(\mathcal{EL}(\mathbf{D})\) concept inclusion (expressed in the so-called Fuzzy OWL 2 syntax [13])2_
Footnote 1: See also _e.g._[20, 51, 53, 87] for an analogous example.
Footnote 2: [http://www.umbertostraccia.it/cs/software/fuzzyDL/fuzzyDL.html](http://www.umbertostraccia.it/cs/software/fuzzyDL/fuzzyDL.html)
* **(implies (and (some hasDensity fat-containing) (some hasMargin spiculated) (some hasShape irregular) (some hasAge hasAge_high))** Cancer 0.86) _,_
_where the fuzzy set hasAge_high is defined as_
* **(define-fuzzy-concept hasAge_high right- (0,150,60,80))**
_In words,_
* _"if the density is fat-containing, the margin is spiculated, the shape is irregular and the age is high then it is cancer, with confidence_ 0.86_"._
In this work, the objective is the same as in _e.g._[20, 53, 87] except that now we propose to rely on an adaptation of the PN-rule [2, 3, 40, 41] algorithm to the (fuzzy) OWL case. Further, like in [51, 87], we continue to support so-called _fuzzy concept descriptions_ and _fuzzy concrete domains_[59, 85, 86], such as the expression (**some** hasAge hasAge_high) (_viz._ an aged person)
Figure 1: Excerpt of the mammographic ontology.
in Example 1.1 above, which is a fuzzy concept, _i.e._ a concept for which the belonging of an individual to the class is not necessarily a binary yes/no question, but rather a matter of (truth) degree in \([0,1]\).
For instance, in our example, the degree depends on the person's age: the higher the age the older is the person, _e.g._ modelled via a so-called _right shoulder function_ (see Figure 2(d)). Here, the range of the 'attribute' hasAge becomes a so-called fuzzy concrete domain [85, 86].
Let us recap that the basic principle of PN-rule consists of a _P-stage_, in which _positive_ rules (called _P-rules_) are learnt to cover as many instances of a target class as possible while keeping the non-positive rate at a reasonable level, and an _N-stage_, in which _negative_ rules (called _N-rules_) are learnt to remove most of the non-positive examples covered by the P-stage. The two rule sets are then used to build up a decision method to classify an object as being an instance of the target class or not [2, 3, 40, 41]. It is worth noting that what differentiates this method from all others is its second stage: it learns N-rules that essentially remove the non-positive examples (so-called false positives) collectively covered by the union of all the P-rules.
The following are the main features of our two stage algorithm, called PN-OWL:
* at the P-stage, it generates a set of fuzzy \(\mathcal{EL}(\mathbf{D})\) inclusion axioms, the P-rules, that cover as many instances of a target class \(T\) as possible without compromising too much on the amount of covered non-positives (_i.e._, it tries to increase the so-called _recall_);
* at the N-stage, it generates a set of fuzzy \(\mathcal{EL}(\mathbf{D})\) inclusion axioms, the N-rules, that cover as many of the non-positive instances of class \(T\) as possible (_i.e._, it then tries to increase the so-called _precision_);
* the fuzzy inclusion axioms are then combined (aggregated) into a new fuzzy inclusion axiom describing sufficient conditions for being an individual classified as an instance of the target class \(T\) (_i.e._ the combination aims at increasing the overall effectiveness, _e.g._ the so-called F1-measure);
* all fuzzy inclusion axioms may possibly include fuzzy concepts and fuzzy concrete domains, where each axiom has a leveraging weight (specifically, called _confidence_ or _precision_);
* all generated fuzzy concept inclusion axioms can be directly encoded as _Fuzzy OWL 2_ axioms [12, 13]. Therefore, a Fuzzy OWL 2 reasoner, such as _fuzzyDL_[11, 15], can then be used to automatically determine (and to which degree) whether an individual belongs to the target class \(T\).
We will illustrate the effectiveness of PN-OWL by means of an experimentation that shows that the effectiveness of the combined approach increases w.r.t. a baseline based on the P-stage only.
In the following, we proceed as follows: in Section 2, for the sake of completeness, we recap the salient notions we will rely on this paper. Then, in Section 3 we will present our algorithm PN-OWL, which is evaluated in Section 4. In Section 5 we compare our work with closely related work appeared so far. Section 6 concludes and points to some topics of further research.
## 2 Background
We introduce the main notions related to _(Mathematical) Fuzzy Logics_ and _Fuzzy Description Logics_ we will use in this work (see also [16, 86]).
Mathematical Fuzzy Logic._Fuzzy Logic_ is the logic of _fuzzy sets_[93]. A _fuzzy set_\(A\) over a countable crisp set \(X\) is a function \(A\colon X\to[0,1]\), called _fuzzy membership_ function of \(A\). A _crisp set_\(A\) is characterised by a membership function \(A\colon X\to\{0,1\}\) instead. Often, fuzzy set operations conform to \((A\cap B)(x)=\min(A(x),B(x))\), \((A\cup B)(x)=\max(A(x),B(x))\) and \(\bar{A}(x)=1-A(x)\) (\(\bar{A}\) is the set complement of \(A\)), the _cardinality_ of a fuzzy set is defined as \(|A|=\sum_{x\in X}A(x)\), while the _inclusion degree_ between \(A\) and \(B\) is defined as \(deg(A,B)=\frac{|A\cap B|}{|A|}\).
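For illustration, the fuzzy set operations and the inclusion degree recalled above can be computed as follows for fuzzy sets over a finite domain, represented as membership vectors (a minimal NumPy sketch with toy values).

```python
import numpy as np

# Fuzzy sets over a finite domain X, represented as membership vectors in [0, 1].
A = np.array([1.0, 0.7, 0.3, 0.0])
B = np.array([0.9, 0.5, 0.6, 0.2])

intersection = np.minimum(A, B)              # (A ∩ B)(x) = min(A(x), B(x))
union        = np.maximum(A, B)              # (A ∪ B)(x) = max(A(x), B(x))
complement_A = 1.0 - A                       # complement of A
card_A       = A.sum()                       # |A| = sum_x A(x)
incl_degree  = intersection.sum() / card_A   # deg(A, B) = |A ∩ B| / |A|
```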
The trapezoidal, the triangular, the left-shoulder function, and the right-shoulder function are frequently used to specify membership functions of fuzzy sets (see Figure 2).
One easy and typically satisfactory method to define the membership functions is to uniformly partition the range of, _e.g._ person's age values (bounded by a minimum and maximum value), into 3, 5 or 7 fuzzy sets using triangular (or trapezoidal) functions (see Figure 3). Another popular approach may consist in using the so-called _c-means_ fuzzy clustering algorithm (see, _e.g._[8]) with 3,5 or 7 clusters, where the fuzzy membership functions are triangular functions built around the centroids of the clusters (see also [38]).
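The four membership function shapes of Figure 2 can be sketched as follows with the usual piecewise-linear definitions (plain Python); the reading of the hasAge_high declaration of Example 1.1, where the first two arguments bound the value range and the last two give the shoulder, is an assumption.

```python
def ls(a, b):   # left-shoulder: 1 up to a, then linearly down to 0 at b
    return lambda x: 1.0 if x <= a else (0.0 if x >= b else (b - x) / (b - a))

def rs(a, b):   # right-shoulder: 0 up to a, then linearly up to 1 at b
    return lambda x: 0.0 if x <= a else (1.0 if x >= b else (x - a) / (b - a))

def tri(a, b, c):   # triangular with peak at b
    return lambda x: max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

def trz(a, b, c, d):   # trapezoidal with plateau on [b, c]
    return lambda x: max(0.0, min((x - a) / (b - a), 1.0, (d - x) / (d - c)))

# hasAge_high of Example 1.1, read as a right-shoulder rising from 60 to 80
# over the value range [0, 150] (this reading is an assumption).
has_age_high = rs(60, 80)
print(has_age_high(50), has_age_high(70), has_age_high(90))   # 0.0 0.5 1.0
```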
In _Mathematical Fuzzy Logic_[37], the convention prescribing that a formula \(\phi\) is either true or false (w.r.t. an interpretation \(\mathcal{I}\)) is changed and is a matter of degree measured on an ordered scale that is no longer \(\{0,1\}\), but typically \([0,1]\). This degree is called _degree of truth_ of the formula \(\phi\) in the interpretation \(\mathcal{I}\). A _fuzzy formula_ has the form \(\langle\phi,\alpha\rangle\), where \(\alpha\!\in\!(0,1]\) and \(\phi\) is a First-Order Logic (FOL) formula, encoding that the degree of truth of \(\phi\) is _greater than or equal to \(\alpha\)_. From a semantics point of view, a _fuzzy interpretation_\(\mathcal{I}\) maps each atomic formula
Figure 3: Uniform fuzzy sets over salaries: trapezoidal (left) or triangular (right).
Figure 2: (a) Trapezoidal function \(\mathit{trz}(a,b,c,d)\), (b) triangular function \(\mathit{tri}(a,b,c)\), (c) left shoulder function \(\mathit{ls}(a,b)\), and (d) right shoulder function \(\mathit{rs}(a,b)\).
into \([0,1]\) and is then extended inductively to all FOL formulae as follows:
\[\mathcal{I}(\phi\wedge\psi) = \mathcal{I}(\phi)\otimes\mathcal{I}(\psi)\ \,\ \ \mathcal{I}(\phi\vee\psi)\ =\ \mathcal{I}(\phi)\oplus\mathcal{I}(\psi)\] \[\mathcal{I}(\phi\rightarrow\psi) = \mathcal{I}(\phi)\Rightarrow\mathcal{I}(\psi)\ \,\ \ \mathcal{I}(\neg\phi)\ =\ \ominus\mathcal{I}(\phi)\] \[\mathcal{I}(\exists x.\phi(x)) = \sup_{y\in\Delta^{\mathcal{I}}}\mathcal{I}(\phi(y))\ \,\ \ \mathcal{I}(\forall x.\phi(x))\ =\ \inf_{y\in\Delta^{\mathcal{I}}}\mathcal{I}(\phi(y))\,\]
where \(\Delta^{\mathcal{I}}\) is the (non-empty) domain of \(\mathcal{I}\), and \(\otimes\), \(\oplus\), \(\Rightarrow\), and \(\ominus\) are so-called _t-norms_, _t-conorms_, _implication functions_, and _negation functions_, respectively, which extend the Boolean conjunction, disjunction, implication, and negation, respectively, to the fuzzy case.
One usually distinguishes three different logics, namely Lukasiewicz, Godel, and Product logics [37],3 whose truth combination functions are reported in Table 1.
Footnote 3: Notably, a theorem states that any other continuous t-norm can be obtained as a combination of them.
An _r-implication_ is an implication function obtained as the _residuum_ of a continuous t-norm \(\otimes\), _i.e._\(\alpha_{1}\Rightarrow\alpha_{2}=\sup\{\alpha_{3}\mid\alpha_{1}\otimes\alpha_{3} \leq\alpha_{2}\}\). Note also, that given an r-implication \(\Rightarrow_{r}\), we may also define its related negation \(\ominus_{r}\alpha\) by means of \(\alpha\Rightarrow_{r}0\) for every \(\alpha\in[0,1]\).
The notions of satisfiability and logical consequence are defined in the standard way, where a fuzzy interpretation \(\mathcal{I}\)_satisfies_ a fuzzy formula \(\langle\phi,\alpha\rangle\), or \(\mathcal{I}\) is a _model_ of \(\langle\phi,\alpha\rangle\), denoted as \(\mathcal{I}\!\models\!\langle\phi,\alpha\rangle\), iff \(\mathcal{I}(\phi)\geq\alpha\). Notably, from \(\langle\phi,\alpha_{1}\rangle\) and \(\langle\phi\!\rightarrow\!\psi,\alpha_{2}\rangle\) one may conclude (if \(\rightarrow\) is interpreted as an r-implication) \(\langle\psi,\alpha_{1}\otimes\alpha_{2}\rangle\) (this inference is called _fuzzy modus ponens_).
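For illustration, the truth combination functions of Table 1 and the fuzzy modus ponens can be sketched as follows (plain Python, with illustrative names).

```python
# Truth combination functions of Table 1.
luk = dict(tnorm=lambda a, b: max(a + b - 1.0, 0.0),
           tconorm=lambda a, b: min(a + b, 1.0),
           implies=lambda a, b: min(1.0 - a + b, 1.0),
           neg=lambda a: 1.0 - a)

godel = dict(tnorm=min, tconorm=max,
             implies=lambda a, b: 1.0 if a <= b else b,
             neg=lambda a: 1.0 if a == 0 else 0.0)

product = dict(tnorm=lambda a, b: a * b,
               tconorm=lambda a, b: a + b - a * b,
               implies=lambda a, b: 1.0 if a <= b else b / a,
               neg=lambda a: 1.0 if a == 0 else 0.0)

# Fuzzy modus ponens: from <phi, a1> and <phi -> psi, a2> conclude <psi, a1 (x) a2>.
a1, a2 = 0.8, 0.9
for name, logic in [("Lukasiewicz", luk), ("Goedel", godel), ("Product", product)]:
    print(name, logic["tnorm"](a1, a2))
```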
Fuzzy Description Logics basics.We recap here the fuzzy DL \(\mathcal{ALC}_{@}(\mathbf{D})\), which extends the well-known fuzzy DL \(\mathcal{ALC}(\mathbf{D})\)[85] with the _aggregated concept_ construct [14] (indicated with the symbol @). \(\mathcal{ALC}_{@}(\mathbf{D})\) is expressive enough to capture the main ingredients of fuzzy DLs we are going to consider here.
We start with the notion of _fuzzy concrete domain_, that is a tuple \(\mathbf{D}\!=\!\langle\Delta^{\mathbf{D}},\ ^{\mathbf{.D}}\rangle\) with datatype domain \(\Delta^{\mathbf{D}}\) and a mapping \(\ {}^{\mathbf{.D}}\) that assigns to each data value an element of \(\Delta^{\mathbf{D}}\), and to every 1-ary datatype predicate \(\mathbf{d}\) a 1-ary fuzzy relation over \(\Delta^{\mathbf{D}}\). Therefore, \(\ {}^{\mathbf{.D}}\) maps indeed each datatype predicate into a function from \(\Delta^{\mathbf{D}}\) to \([0,1]\). In the domain of numbers, typical datatypes predicates \(\mathbf{d}\) are characterized by the well known membership functions (see also Fig. 2)
\[\mathbf{d} \rightarrow ls(a,b)\ |\ rs(a,b)\ |\ tri(a,b,c)\ |\ trz(a,b,c,d)\] \[|\ \ \geq_{v}\ |\ \leq_{v}\ |\ =_{v}\,\]
\begin{table}
\begin{tabular}{|c||c c c|} \hline & Lukasiewicz & Gödel & Product \\ \hline \(\alpha_{1}\otimes\alpha_{2}\) & \(\max(\alpha_{1}+\alpha_{2}-1,0)\) & \(\min(\alpha_{1},\alpha_{2})\) & \(\alpha_{1}\cdot\alpha_{2}\) \\ \(\alpha_{1}\oplus\alpha_{2}\) & \(\min(\alpha_{1}+\alpha_{2},1)\) & \(\max(\alpha_{1},\alpha_{2})\) & \(\alpha_{1}+\alpha_{2}-\alpha_{1}\cdot\alpha_{2}\) \\ \(\alpha_{1}\Rightarrow\alpha_{2}\) & \(\min(1-\alpha_{1}+\alpha_{2},1)\) & \(\begin{cases}1\ \ \ \text{if}\ \alpha_{1}\leq\alpha_{2}\\ \alpha_{2}\ \ \text{otherwise}\end{cases}\) & \(\begin{cases}1\ \ \ \ \text{if}\ \alpha_{1}\leq\alpha_{2}\\ \alpha_{2}/\alpha_{1}\ \ \text{otherwise}\end{cases}\) \\ \(\ominus\alpha\) & \(1-\alpha\) & \(\begin{cases}1\ \ \ \text{if}\ \alpha=0\\ 0\ \ \text{otherwise}\end{cases}\) & \(\begin{cases}1\ \ \ \text{if}\ \alpha=0\\ 0\ \ \text{otherwise}\end{cases}\) \\ \hline \end{tabular}
\end{table}
Table 1: Truth combination functions for fuzzy logics.
where additionally \(\geq_{v}\) (resp. \(\leq_{v}\) and \(=_{v}\)) corresponds to the crisp set of data values that are no less than (resp. no greater than and equal to) the value \(v\).
_Aggregation Operators_ (AOs) are mathematical functions that are used to combine different pieces of information. There exist large number of different AOs that differ on the assumptions on the data (data types) and about the type of information that we can incorporate in the model [90]. There is no standard definition of AO. Usually, given a domain \(\mathbb{D}\) (such as the reals), an AO of dimension \(n\) is a mapping \(@:\mathbb{D}^{n}\rightarrow\mathbb{D}\). For us, \(\mathbb{D}=[0,1]\). Thus, an AO aggregates \(n\) values of \(n\) different criteria. In our scenario, such criteria will be represented by using fuzzy concepts from a fuzzy ontology and we assume to have a finite family \(@_{1},\ldots,@_{l}\) of AOs within our language.
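A few typical aggregation operators (minimum, maximum, and a weighted mean) can be sketched as follows; which AOs are actually included in the family \(@_{1},\ldots,@_{l}\) is left open in the text, so these are merely illustrative examples.

```python
import numpy as np

# A few common aggregation operators @ : [0,1]^n -> [0,1].
agg_min = lambda degrees: float(np.min(degrees))
agg_max = lambda degrees: float(np.max(degrees))

def weighted_mean(weights):
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return lambda degrees: float(np.dot(w, np.asarray(degrees, dtype=float)))

# Aggregating the degrees of three criteria, e.g. for an aggregated concept @(C1, C2, C3).
degrees = [0.9, 0.4, 0.7]
print(agg_max(degrees), weighted_mean([2, 1, 1])(degrees))   # 0.9 0.725
```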
Now, consider pairwise disjoint alphabets \(\mathbf{I},\mathbf{A}\) and \(\mathbf{R}\), where \(\mathbf{I}\) is the set of _individuals_, \(\mathbf{A}\) is the set of _concept names_ (also called _atomic concepts_ or _class names_) and \(\mathbf{R}\) is the set of _role names_. Each role is either an _object property_ or a _datatype property_. The set of _concepts_ are built from concept names \(A\) using connectives and quantification constructs over object properties \(R\) and datatype properties \(S\), as described by the following syntactic rule (\(n_{i}\geq 1\)):
\[\begin{array}{lll}C&\rightarrow&\top\ \mid\ \bot\ \mid A\ \mid C_{1}\sqcap C_{2}\ \mid C_{1}\sqcup C_{2}\ \mid\neg C\ \mid C_{1}\to C_{2}\ \mid\\ &&\exists R.C\ \mid\forall R.C\ \mid\exists S.\mathbf{d}\ \mid\forall S. \mathbf{d}\ \mid\\ &&@_{i}(C_{1},\ldots,C_{n_{i}})\.\end{array}\]
An _ABox_\(\mathcal{A}\) consists of a finite set of assertion axioms. An _assertion_ axiom is an expression of the form \(\langle a{:}C,\alpha\rangle\) (called _concept assertion_, \(a\) is an instance of concept \(C\) to degree greater than or equal to \(\alpha\)) or of the form \(\langle(a_{1},a_{2}){:}R,\alpha\rangle\) (called _role assertion_, \((a_{1},a_{2})\) is an instance of object property \(R\) to degree greater than or equal to \(\alpha\)), where \(a,a_{1},a_{2}\) are individual names, \(C\) is a concept, \(R\) is an object property and \(\alpha\in(0,1]\) is a truth value. A _Terminological Box_ or _TBox_\(\mathcal{T}\) is a finite set of _General Concept Inclusion_ (GCI) axioms, where a fuzzy GCI is of the form \(\langle C_{1}\sqsubseteq C_{2},\alpha\rangle\) (\(C_{1}\) is a sub-concept of \(C_{2}\) to degree greater than or equal to \(\alpha\)), where \(C_{i}\) is a concept and \(\alpha\in(0,1]\). We may omit the truth degree \(\alpha\) of an axiom; in this case \(\alpha=1\) is assumed and we call the axiom _crisp_. We also write \(C_{1}=C_{2}\) as a macro for the two GCIs \(C_{1}\sqsubseteq C_{2}\) and \(C_{2}\sqsubseteq C_{1}\). We may also call a fuzzy GCI of the form \(\langle C\sqsubseteq A,\alpha\rangle\), where \(A\) is a concept name, a _rule_ and \(C\) its _body_. A _Knowledge Base_ (KB) is a pair \(\mathcal{K}=\langle\mathcal{T},\mathcal{A}\rangle\), where \(\mathcal{T}\) is a TBox and \(\mathcal{A}\) is an ABox. With \(\mathsf{I}_{\mathcal{K}}\) we denote the set of individuals occurring in \(\mathcal{K}\).
Concerning the semantics, let us fix a fuzzy logic, a fuzzy concrete domain \(\mathbf{D}\!=\!\langle\Delta^{\mathbf{D}},\,.\,\mathbf{{}^{D}}\rangle\) and aggregation operators \(@_{i}:[0,1]^{n_{i}}\rightarrow[0,1]\). Now, unlike classical DLs in which an interpretation \(\mathcal{I}\) maps _e.g._ a concept \(C\) into a set of individuals \(C^{\mathcal{I}}\subseteq\Delta^{\mathcal{I}}\), _i.e._\(\mathcal{I}\) maps \(C\) into a function \(C^{\mathcal{I}}:\Delta^{\mathcal{I}}\rightarrow\{0,1\}\) (either an individual belongs to the extension of \(C\) or does not belong to it), in fuzzy DLs, \(\mathcal{I}\) maps \(C\) into a function \(C^{\mathcal{I}}:\Delta^{\mathcal{I}}\rightarrow[0,1]\) and, thus, an individual belongs to the extension of \(C\) to some degree in \([0,1]\), _i.e._\(C^{\mathcal{I}}\) is a fuzzy set. Specifically, a _fuzzy interpretation_ is a pair \(\mathcal{I}=(\Delta^{\mathcal{I}},{}^{\mathcal{I}})\) consisting of a nonempty (crisp) set \(\Delta^{\mathcal{I}}\) (the _domain_) and of a _fuzzy interpretation function_\({}^{\mathcal{I}}\) that assigns: _(i)_ to each atomic concept \(A\) a function \(A^{\mathcal{I}}\colon\Delta^{\mathcal{I}}\rightarrow[0,1]\); _(ii)_ to each object property \(R\) a function \(R^{\mathcal{I}}\colon\Delta^{\mathcal{I}}\times\Delta^{\mathcal{I}}\rightarrow[0,1]\); _(iii)_ to each datatype property \(S\) a function \(S^{\mathcal{I}}\colon\Delta^{\mathcal{I}}\times\Delta^{\mathbf{D}}\rightarrow[0,1]\); _(iv)_ to each individual \(a\) an element \(a^{\mathcal{I}}\in\Delta^{\mathcal{I}}\) such that \(a^{\mathcal{I}}\neq b^{\mathcal{I}}\) if \(a\neq b\) (the so-called _Unique Name Assumption_); and _(v)_ to each data value \(v\) an element \(v^{\mathcal{I}}\in\Delta^{\mathbf{D}}\). Now, a fuzzy interpretation function is extended to concepts as specified
below (where \(x\in\Delta^{\mathcal{I}}\)):
\[\top^{\mathcal{I}}(x)=1\,\ \bot^{\mathcal{I}}(x)\ =\ 0\,\ (C \sqcap D)^{\mathcal{I}}(x)=C^{\mathcal{I}}(x)\otimes D^{\mathcal{I}}(x)\] \[(C\sqcup D)^{\mathcal{I}}(x)=C^{\mathcal{I}}(x)\oplus D^{ \mathcal{I}}(x)\,\ (\neg C)^{\mathcal{I}}(x)=\ominus C^{\mathcal{I}}(x)\] \[(C\to D)^{\mathcal{I}}(x)=C^{\mathcal{I}}(x)\Rightarrow D^{ \mathcal{I}}(x)\,\ (\forall R.C)^{\mathcal{I}}(x)=\inf_{y\in\Delta^{\mathcal{I}}}\{R^{ \mathcal{I}}(x,y)\Rightarrow C^{\mathcal{I}}(y)\}\] \[(\exists R.C)^{\mathcal{I}}(x)=\sup_{y\in\Delta^{\mathcal{I}}}\{R ^{\mathcal{I}}(x,y)\otimes C^{\mathcal{I}}(y)\}\,\ (\forall S.\mathbf{d})^{\mathcal{I}}(x)=\inf_{y\in\Delta^{ \mathbf{D}}}\{S^{\mathcal{I}}(x,y)\Rightarrow\mathbf{d^{D}}(y)\}\] \[(\exists S.\mathbf{d})^{\mathcal{I}}(x)=\sup_{y\in\Delta^{ \mathbf{D}}}\{S^{\mathcal{I}}(x,y)\otimes\mathbf{d^{D}}(y)\}\,\] \[(\mathbb{O}_{i}(C_{1},\ldots,C_{n_{i}}))^{\mathcal{I}}(x)= \mathbb{O}_{i}(C_{1}^{\ \mathcal{I}}(x),\ldots,C_{n_{i}}^{\ \ \ \ \ \ \mathcal{I}}(x))\.\]
The _satisfiability of axioms_ is then defined by the following conditions: _(i)_\(\mathcal{I}\) satisfies an axiom \(\langle a{:}C,\alpha\rangle\) if \(C^{\mathcal{I}}(a^{\mathcal{I}})\geq\alpha\); _(ii)_\(\mathcal{I}\) satisfies an axiom \(\langle(a,b){:}R,\alpha\rangle\) if \(R^{\mathcal{I}}(a^{\mathcal{I}},b^{\mathcal{I}})\geq\alpha\); _(iii)_\(\mathcal{I}\) satisfies an axiom \(\langle C\sqsubseteq D,\alpha\rangle\) if \((C\sqsubseteq D)^{\mathcal{I}}\geq\alpha\) with4\((C\sqsubseteq D)^{\mathcal{I}}=\inf_{x\in\Delta^{\mathcal{I}}}\{C^{\mathcal{I}}(x) \Rightarrow D^{\mathcal{I}}(x)\}\). \(\mathcal{I}\) is a _model_ of \(\mathcal{K}=\langle\mathcal{A},\mathcal{T}\rangle\) iff \(\mathcal{I}\) satisfies each axiom in \(\mathcal{K}\). If \(\mathcal{K}\) has a model we say that \(\mathcal{K}\) is _satisfiable_ (or _consistent_). We say that \(\mathcal{K}\)_entails_ axiom \(\tau\), denoted \(\mathcal{K}\models\tau\), if any model of \(\mathcal{K}\) satisfies \(\tau\). The _best entailment degree_ of \(\tau\) of the form \(C\sqsubseteq D\), \(a{:}C\) or \((a,b){:}R\), denoted \(bed(\mathcal{K},\tau)\), is defined as
Footnote 4: However, note that under standard logic \(\sqsubseteq\) is interpreted as \(\Rightarrow_{z}\) and not as \(\Rightarrow_{kd}\).
\[bed(\mathcal{K},\tau)=\sup\{\alpha\mid\mathcal{K}\models\langle\tau,\alpha \rangle\}\.\]
**Remark 1**: _Please note that \(bed(\mathcal{K},a{:}C)=1\) (i.e. \(\mathcal{K}\models a{:}C\)) implies that \(bed(\mathcal{K},a{:}\neg C)=0\) holds, and similarly, \(bed(\mathcal{K},a{:}\neg C)=1\) (i.e. \(\mathcal{K}\models a{:}\neg C\)) implies that \(bed(\mathcal{K},a{:}C)=0\) holds. However, in both cases the other way around does not hold. Furthermore, we may well have that both \(bed(\mathcal{K},a{:}C)=\alpha_{1}>0\) and \(bed(\mathcal{K},a{:}\neg C)=\alpha_{2}>0\) hold._
Now, consider concept \(C\), a rule \(C\sqsubseteq A\), a KB \(\mathcal{K}\) and a set of individuals l. Then the _cardinality_ of \(C\) w.r.t. \(\mathcal{K}\) and l, denoted \(|C|^{\mathsf{l}}_{\mathcal{K}}\), is defined as
\[|C|^{\mathsf{l}}_{\mathcal{K}}=\sum_{a\in\mathsf{l}}bed(\mathcal{K},a{:}C). \tag{1}\]
The _crisp cardinality_ (denoted \(\lceil C\rceil^{\mathsf{l}}_{\mathcal{K}}\)) is defined similarly by replacing in Eq. 1 the term \(bed(\mathcal{K},a{:}C)\) with \(\lceil bed(\mathcal{K},a{:}C)\rceil\).
Eventually, we say that the _application_ of rule \(C\sqsubseteq A\) to individual \(a\) w.r.t. \(\mathcal{K}\) is \(bed(\mathcal{K},a{:}C)\) and that rule \(C\sqsubseteq A\) _applies_ to individual \(a\) w.r.t. \(\mathcal{K}\) if \(bed(\mathcal{K},a{:}C)>0\).
## 3 Pn-Owl
At first, we introduce our learning problem.
### The Learning Problem
In general terms, the learning problem we are going to address is stated as follows. Consider
1. a satisfiable crisp KB \(\mathcal{K}\) and its individuals \(\mathsf{l}_{\mathcal{K}}\);
2. a _target concept name_\(T\);
3. an associated classification function \(f_{T}\colon\mathfrak{l}_{\mathcal{K}}\to\{-1,0,1\}\), where for each \(a\in\mathfrak{l}_{\mathcal{K}}\), the values (_labels_) correspond to \[f_{T}(a)=\begin{cases}1&\text{$a$ is a {\it positive} example w.r.t. $T$}\\ -1&\text{$a$ is a {\it negative} example w.r.t. $T$}\\ 0&\text{$a$ is an {\it unlabelled} example w.r.t. $T$}\end{cases}\]
4. the partitioning of the examples into \[\mathcal{E}^{+} =\{(a,1)\mid a\in\mathfrak{l}_{\mathcal{K}},f_{T}(a)=1\}\quad\triangleright\ \text{the positive examples}\] \[\mathcal{E}^{-} =\{(a,-1)\mid a\in\mathfrak{l}_{\mathcal{K}},f_{T}(a)=-1\}\quad\triangleright\ \text{the negative examples}\] \[\mathcal{E}^{u} =\{(a,0)\mid a\in\mathfrak{l}_{\mathcal{K}},f_{T}(a)=0\}\quad\triangleright\ \text{the unlabelled examples}\] where \(\mathcal{E}^{+}\neq\emptyset\) is assumed. We define \(\mathcal{E}=\mathcal{E}^{+}\cup\mathcal{E}^{-}\cup\mathcal{E}^{u}\) as the set of all examples, and with \(\overline{\mathcal{E}^{+}}=\mathcal{E}\setminus\mathcal{E}^{+}\) we denote the set of _non-positive_ examples.
5. the set of individuals \(\mathfrak{l}_{\mathcal{S}}=\{a\mid(a,l)\in\mathcal{S}\}\), where \(\mathcal{S}\subseteq\mathcal{E}\) is a set of examples. Moreover, we define \[\mathfrak{l}_{\mathcal{E}^{+}} =\{a\mid(a,1)\in\mathcal{E}^{+}\}\quad\triangleright\ \text{the positive individuals}\] \[\mathfrak{l}_{\mathcal{E}^{-}} =\{a\mid(a,-1)\in\mathcal{E}^{-}\}\quad\triangleright\ \text{the negative individuals}\] \[\mathfrak{l}_{\mathcal{E}^{u}} =\{a\mid(a,0)\in\mathcal{E}^{u}\}\quad\triangleright\ \text{the unlabelled individuals}\] \[\mathfrak{l}_{\overline{\mathcal{E}^{+}}} =\mathfrak{l}_{\mathcal{K}}\setminus\mathfrak{l}_{\mathcal{E}^{+}}\quad\triangleright\ \text{the non-positive individuals}\]
6. a _hypothesis space_ of classifiers \(\mathcal{H}=\{h\colon\mathfrak{l}_{\mathcal{K}}\to[0,1]\}\);
7. a _training set_\(\mathcal{E}_{train}\subset\mathcal{E}\) of individual-label pairs, with \(\mathcal{E}_{train}\cap\mathcal{E}^{+}\neq\emptyset\);
8. a _test set_\(\mathcal{E}_{test}=\mathcal{E}\setminus\mathcal{E}_{train}\).
We assume that the only axioms involving \(T\) in \(\mathcal{K}\) are either of the form \(a\colon T\) or \(a\colon\neg T\). We write \(\mathcal{E}(a)=1\) if \(a\) is a positive example (_i.e._\(a\in\mathfrak{l}_{\mathcal{E}^{+}}\)), \(\mathcal{E}(a)=-1\) if \(a\) is a negative example (_i.e._\(a\in\mathfrak{l}_{\mathcal{E}^{-}}\)) and \(\mathcal{E}(a)=0\) otherwise.
The general goal is to learn a classifier function \(\bar{h}\in\mathcal{H}\) that is the result of _Empirical Risk Minimisation_ (ERM) on \(\mathcal{E}_{train}\), _i.e._
\[\bar{h} = \arg\min_{h\in\mathcal{H}}R(h,\mathcal{E}_{train})\ =\ \frac{1}{|\mathcal{E}_{train}|}\sum_{a\in\mathfrak{l}_{train}}L(h(a),\mathcal{E} _{train}(a))\,\]
where \(L\) is a _loss function_ such that \(L(\hat{l},l)\) measures how different the prediction \(\hat{l}\) of a hypothesis is from the true label \(l\) and \(R(h,\mathcal{E}_{train})\) is the _risk_ associated with hypothesis \(h\) over \(\mathcal{E}_{train}\), defined as the expectation of the loss function over the training set \(\mathcal{E}_{train}\).
The effectiveness of the learnt classifier \(\bar{h}\) is then assessed by determining \(R(\bar{h},\mathcal{E}_{test})\) on the test set \(\mathcal{E}_{test}\).
In our learning setting, a hypothesis \(h\in{\cal H}\) is a set of GCIs of the form
\[\langle C_{1}\sqsubseteq P_{1},\alpha_{1}\rangle\,\ldots\,\langle C_{h}\sqsubseteq P_{h},\alpha_{h}\rangle \tag{2}\] \[@^{+}(P_{1},\ldots,P_{h})\sqsubseteq P \tag{3}\]
\[\langle D_{1}\sqsubseteq N_{1},\beta_{1}\rangle\,\ldots\,\langle D_{k}\sqsubseteq N_{k},\beta_{k}\rangle \tag{4}\] \[@^{-}(N_{1},\ldots,N_{k})\sqsubseteq N \tag{5}\]
\[@(P,N)\sqsubseteq T \tag{6}\]
where each \(P_{i},P,N_{j},N\) are new atomic concept names not occurring in \({\cal K}\), and \(\alpha_{i},\beta_{j}\) are the confidence degree of the relative GCIs, \(\mbox{\raisebox{-1.29pt}{@}}^{+},\mbox{\raisebox{-1.29pt}{@}}^{-},\mbox{ \raisebox{-1.29pt}{@}}\) are aggregation operators, and each \(C_{i},D_{j}\) is a fuzzy \({\cal EL}({\bf D})\) concept expression defined as (\(v\) is a boolean value)
\[\begin{array}{rcl}C&\longrightarrow&\top\ |\ A\ |\ \exists r.C\ |\ \exists s.{\bf d}\ |\ C_{1}\sqcap C_{2}\\ {\bf d}&\rightarrow&ls(a,b)\ |\ rs(a,b)\ |\ tri(a,b,c)\ |\ trz(a,b,c,d)\ |\ =_{v}\ \.\end{array}\]
Informally, _(i)_ each \(P_{i}\) 'rule' will tell us why an individual should be positive; _(ii)_ then we aggregate the various degrees of positiveness via the aggregation operator \(@^{+}\); _(iii)_ on the other hand, each \(N_{i}\) 'rule' will tell us why an individual should be _not_ positive; _(iv)_ then we aggregate the various degrees of non-positiveness via the aggregation operator \(@^{-}\). Typically, both \(@^{+}\) and \(@^{-}\) are the max operator; finally, _(v)_ we use the last 'rule' to establish whether an individual is an instance of \(T\) or not (_viz._ is positive or not positive) by combining the degrees of being positive or not via the \(@\) operator. A simple choice for \(@\) is the following and will be the one we will adopt:
(\(\star\)) if the degree \(p\) of being positive is greater than the degree of being non-positive
\(n\) then \(p\), else \(0\).
Now, for \(a\in{\sf l}_{\cal K}\), the _classification prediction value_\(h(a)\) of \(a\), \(T\) and \({\cal K}\) is defined as
\[h(a)=bed({\cal K}\cup h,a{:}T). \tag{7}\]
**Remark 2**: _Note that, as stated above, essentially a hypothesis is a sufficient condition for being an individual instance of a target concept to some degree. If \(h(a)=0\) then we say that \(a\) is not a positive instance of \(T\), while if \(h(a)>0\) then \(a\) is a positive instance of \(T\) to degree \(h(a)\). As a consequence, we will distinguish between positive and non-positive examples of \(T\) only. That is, negative examples and unlabelled examples are indistinguishable._
Let us note that even if \({\cal K}\) is a crisp KB, the possible occurrence of fuzzy concrete domains in expressions of the form \(\exists S.{\bf d}\) in a hypothesis may imply that not necessarily \(h(a)\in\{0,1\}\). A similar effect may also be induced by the aggregation operators.
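For illustration, the following sketch shows how the classification prediction value of Eq. (7) could be computed for a single individual, assuming that each rule contributes the degree of its body combined with its confidence via a t-norm (as by fuzzy modus ponens), that \(@^{+}\) and \(@^{-}\) are the max operator, and that the final aggregator follows \((\star)\); all names are illustrative.

```python
def evaluate_hypothesis(p_rules, n_rules, bed, tnorm=min):
    """Sketch of the classification prediction value h(a) of Eq. (7) for one individual:
    each rule degree is the degree of its body combined with the rule confidence via a
    t-norm, @+ and @- are taken to be max, and the final aggregator follows (*)."""
    p = max((tnorm(bed(body), alpha) for body, alpha in p_rules), default=0.0)
    n = max((tnorm(bed(body), beta) for body, beta in n_rules), default=0.0)
    return p if p > n else 0.0

# Toy usage: `bed` maps a rule body to the degree the individual satisfies it.
bed = {"C1": 0.8, "C2": 0.0, "D1": 0.3}.get
print(evaluate_hypothesis([("C1", 0.9), ("C2", 0.7)], [("D1", 0.6)], bed))   # 0.8
```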
**Remark 3**: _Clearly, the set of hypotheses by this syntax is potentially infinite due, e.g. to conjunction and the nesting of existential restrictions in the concept expressions. This set is made finite by imposing further restrictions on the generation process such as the maximal number of conjuncts and the maximal depth of existential nestings allowed._
We conclude by saying that a hypothesis \(h\)_covers_ (resp. \(\theta\)-covers, for \(\theta\in(0,1]\)) an individual \(a\in{\sf l}_{\cal K}\) iff \(h(a)>0\) (resp. \(h(a)\geq\theta\)), and indicate with \(Cov(h)\) (resp. \(Cov_{\theta}(h)\)) the set of covered
(resp. \(\theta\)-covered) individuals. Moreover, for a GCI \(C\sqsubseteq T\), the _confidence degree_ (also called _inclusion degree_) of \(C\sqsubseteq T\) w.r.t. \(\mathcal{K}\) and a set of positive individuals \(P\), denoted \(cf(C\sqsubseteq T,\mathcal{K},P)\), is defined as
\[cf(C\sqsubseteq T,\mathcal{K},P)=\frac{|C|_{\mathcal{K}}^{P}}{|C|_{\mathcal{K}}^{\mathsf{l}_{\mathcal{K}}}}\, \tag{8}\]
which is the proportion of positive individuals covered by \(C\) w.r.t. the individuals covered by \(C\). Clearly, \(cf(C\sqsubseteq T,\mathcal{K},P)\in[0,1]\) and the closer the confidence is to \(1\) the'more precise' is \(C\sqsubseteq T\), in the sense the less it covers non-positive individuals. In addition, the _support_ of \(C\sqsubseteq T\) w.r.t. \(\mathcal{K}\) and a set of individuals \(\mathsf{l}\), denoted \(supp(C\sqsubseteq T,\mathcal{K},\mathsf{l})\), is defined as
\[supp(C\sqsubseteq T,\mathcal{K},\mathsf{l})=\frac{|C|_{\mathcal{K}}^{\mathsf{ l}}}{|\mathsf{l}|} \tag{9}\]
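For illustration, the cardinality of Eq. (1) and the confidence and support of Eqs. (8)-(9) can be computed as follows, given (toy) best entailment degrees \(bed(\mathcal{K},a{:}C)\) for the individuals of \(\mathcal{K}\).

```python
def cardinality(bed, individuals):
    """|C|_K^I = sum of the best entailment degrees bed(K, a:C) over a in I (Eq. 1)."""
    return sum(bed[a] for a in individuals)

# Toy best entailment degrees bed(K, a:C) for the individuals of K.
bed = {"a1": 1.0, "a2": 0.6, "a3": 0.0, "a4": 0.8}
all_individuals = set(bed)
positives = {"a1", "a2"}

confidence = cardinality(bed, positives) / cardinality(bed, all_individuals)   # Eq. (8)
support = cardinality(bed, all_individuals) / len(all_individuals)             # Eq. (9), with l = I_K
print(round(confidence, 3), support)   # 0.667 0.6
```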
### Conceptual Illustration of the Learning Method.
Before presenting our learning algorithm, we will first conceptually illustrate its principle by relying on Figure 4.
At the beginning, let us consider the sets of all individuals, the positive, the negative and the unlabelled individuals, respectively the sets \(\mathsf{l}_{\mathcal{K}},\mathsf{l}_{\mathcal{E}^{+}},\mathsf{l}_{\mathcal{ E}^{-}}\) and \(\mathsf{l}_{\mathcal{E}^{u}}\), as depicted in Figure 4 (a).
At the first stage, the P-stage, we consider the entire training set \(\mathcal{E}\) and try to maximise the covering of positive individuals, while minimising the covering of negative individuals. Specifically, let us assume that we have learnt a hypothesis \(h_{P}\) (a set of rules) with a covering \(Cov_{\theta_{P}}(h_{P})\), as depicted in Figure 4 (b). Here, the value \(\theta_{P}\) acts as a confidence threshold for the learnt rules in hypothesis \(h_{P}\). Note that \(Cov_{\theta_{P}}(h_{P})\) has to contain positive individuals, _i.e._\(Cov_{\theta_{P}}(h_{P})\cap\mathsf{l}_{\mathcal{E}^{+}}\neq\emptyset\), but may also contain negative and unlabelled individuals. We call the individuals in \(TP=Cov_{\theta_{P}}(h_{P})\cap\mathsf{l}_{\mathcal{E}^{+}}\)_true positives_, while call those in \(FP=Cov_{\theta_{P}}(h_{P})\setminus\mathsf{l}_{\mathcal{E}^{+}}\)_false positives_, _i.e._ a false positive is an individual that is erroneously classified by \(h_{P}\) as an instance of the target class \(T\), while in fact it is not (it might be an unlabelled or a negative example). This phase ends with a set of rules of the form \((2)-(3)\).
Now, in the next stage, the N-stage, with the aim to increase the effectiveness of the classifiers, we would like to remove as many as possible false positives in \(FP\), while avoiding removing, if possible, any of the true positives in \(TP\). To do so, we set-up a new learning problem in which the new target class is \(FP\), where the negatives individuals are those in \(TP\) and the positives are those in \(FP\). Of course, the N-stage applies only if \(FP\neq\emptyset\). The setup of the N-stage is depicted in Figure 4 (c). Specifically, let us assume that we have learnt now a hypothesis \(h_{N}\) with a covering \(Cov_{\theta_{N}}(h_{N})\), as depicted in Figure 4 (d). Note that we may have another parameter \(\theta_{N}\) acting as a confidence threshold for the learnt rules in hypothesis \(h_{N}\). This phase ends with a set of rules of the form \((4)-(5)\).
So, in general, at the end of the two stages, the situation may be as depicted in Figure 4 (e). However, in practice one may want likely to impose that none of the initial positive individuals are covered by \(h_{N}\) and, thus, none of the true positives in \(TP\) will be removed by \(h_{N}\).
Eventually, we aggregate the P-rules and N-rules via \((\star)\). This latter step ends with the rule of the form \((6)\). At the end of this two-stage process, we aim to have captured most of the positive individuals of the target class, while covering only few of the negative and unlabelled individuals (false positives).
### The Learning Algorithm PN-OWL
We now present our two-stage learning algorithm, called PN-OWL, which we have conceptually illustrated in the previous section. Essentially, at the P-stage (resp. N-stage) our algorithm invokes
a learner, called _stage learner_, that generates a set \(h_{P}\) (resp. \(h_{N}\)) of fuzzy \(\mathcal{EL}(\mathbf{D})\) candidate GCIs that has, respectively, the form
\[h_{P} = \{\langle C_{1}\sqsubseteq T,\alpha_{1}\rangle,\ldots,\langle C_{h} \sqsubseteq T,\alpha_{h}\rangle\} \tag{10}\] \[h_{N} = \{\langle D_{1}\sqsubseteq FP,\beta_{1}\rangle,\ldots,\langle D_ {k}\sqsubseteq FP,\beta_{k}\rangle\} \tag{11}\]
called _stage hypothesis_. In the following, we indicate with \(p_{i}\) the fuzzy GCI \(\langle C_{i}\sqsubseteq T,\alpha_{i}\rangle\), while we denote with \(n_{j}\) the fuzzy GCI \(\langle D_{j}\sqsubseteq FP,\beta_{j}\rangle\). The rules in \(h_{P}\) (resp. \(h_{N}\)) will then be aggregated using the max aggregation operator.
The stage hypotheses are then combined into a final hypothesis for the target class \(T\) using the aggregation operator (\(\star\)).
As stage learner we will use a modified version of the fuzzy Foil-\(\mathcal{DL}\)[50, 51, 53] learner that will be described in Section 3.4.
Then, the PN-OWL algorithm is shown in Algorithm 1. Note that the P-stage comprises steps
Figure 4: How PN-OWL works. (a) Original training set; (b) Coverage \(Cov_{\theta_{P}}(h_{P})\) w.r.t. learnt hypothesis \(h_{P}\) after the P-stage; (c) Starting dataset for the N-stage: the new target class is the set of false positives \(FP\) of the P-stage, while the negative individuals are the initial positives; (d) Coverage \(Cov_{\theta_{N}}(h_{N})\) w.r.t. learnt hypothesis \(h_{N}\) after the N-stage; (e) Final scenario.
1-5, while the N-stage comprises steps 15-19, in which at step 19 we invoke the stage learner trying to cover as many false positives as possible. The remaining steps deal with the construction of the final classifier ensemble as per Eqs. (2)-(6).
Eventually, for an individual \(a\in\mathsf{l}_{\mathcal{K}}\), the _classification prediction value of_ PN-OWL for individual \(a\) is \(h(a)\), where \(h\) is the returned hypothesis of PN-OWL. Moreover, we say that PN-OWL _classifies_\(a\) as instance of target class \(T\) if \(h(a)>0\).
```
0: KB \(\mathcal{K}\), training set \(\mathcal{E}\), target concept name \(T\), confidence thresholds \(\theta_{P},\theta_{N}\in[0,1]\), non-positive coverage percentages \(\eta_{P},\eta_{N}\in[0,1]\)
0: Hypothesis \(h\) as by Eqs. (2)-(6).
1: // P-stage
2: \(Pos\leftarrow\mathsf{l}_{\mathcal{E}^{+}}\);
3: \(Neg\leftarrow\mathsf{l}_{\mathcal{E}^{-}}\);
4: \(U\leftarrow\mathsf{l}_{\mathcal{K}}\setminus(Pos\cup Neg)\);
5: \(h_{P}\leftarrow\) FuzzyStageLearner(\(\mathcal{K}\), \(T\), \(Pos\), \(Neg\), \(U\), \(\theta_{P}\), \(\eta_{P}\)); \(\triangleright\) P-Stage hypothesis \(h_{P}\), i.e. set of axioms \(\langle C_{i}\sqsubseteq T,\alpha_{i}\rangle\)
6:if\(h_{P}=\emptyset\)thenreturn\(\emptyset\); \(\triangleright\) Nothing learnt, exit
7: \(Cov\gets Cov_{\theta_{P}}(h_{P})\); \(\triangleright\) P-stage Coverage
8: \(TP\gets Cov_{\theta_{P}}(h_{P})\cap\mathsf{l}_{\mathcal{E}^{+}}\); \(\triangleright\) True positives
9: \(FP\gets Cov_{\theta_{P}}(h_{P})\setminus\mathsf{l}_{\mathcal{E}^{+}}\); \(\triangleright\) False positives
10: // Start building classifier \(h\)
11: \(h\leftarrow\{\langle C_{i}\sqsubseteq P_{i},\alpha_{i}\rangle,|\ \langle C_{i}\sqsubseteq T,\alpha_{i}\rangle\in h_{P},P_{i}\text{ new }\}\); \(\triangleright\) As per Eq. 2
12:if\(FP=\emptyset\)then\(\triangleright\) No N-stage, exit with aggregated \(h_{P}\)
13: \(h\gets h\cup\{@^{+}(P_{1},\ldots,P_{h})\sqsubseteq T\}\); \(\triangleright\) No need of new \(P\) in Eq. 3
14:return\(h\);
15: // N-stage
16: \(Pos\gets FP\);
17: \(Neg\leftarrow\mathsf{l}_{\mathcal{E}^{+}}\);
18: \(U\leftarrow\mathsf{l}_{\mathcal{K}}\setminus(Pos\cup Neg)\);
19: \(h_{N}\leftarrow\) FuzzyStageLearner(\(\mathcal{K}\), \(FP\), \(Pos\), \(Neg\), \(U\), \(\theta_{N}\), \(\eta_{N}\)); \(\triangleright\) N-Stage hypothesis \(h_{N}\), i.e. set of axioms \(\langle D_{j}\sqsubseteq FP,\beta_{j}\rangle\)
20: // Build final classifier ensemble \(h\)
21:if\(h_{N}=\emptyset\)then\(\triangleright\) No learning in N-stage, return aggregated \(h_{P}\)
22: \(h\gets h\cup\{@^{+}(P_{1},\ldots,P_{h})\sqsubseteq T\}\); \(\triangleright\) No need of new \(P\) in Eq. 3
23:return\(h\);
24: \(h\gets h\cup\{@^{+}(P_{1},\ldots,P_{h})\sqsubseteq P\mid P\text{ new }\}\); \(\triangleright\) As per Eq. 3
25: \(h\gets h\cup\{\langle D_{j}\sqsubseteq N_{j},\beta_{j}\rangle,|\ \langle D_{j}\sqsubseteq FP,\beta_{j}\rangle\in h_{N},N_{j}\text{ new }\}\); \(\triangleright\) As per Eq. 4
26: \(h\gets h\cup\{@^{-}(N_{1},\ldots,N_{k})\sqsubseteq N\mid N\text{ new }\}\); \(\triangleright\) As per Eq. 5
27: \(h\gets h\cup\{@(P,N)\sqsubseteq T\}\); \(\triangleright\) As per Eq. 6
28:return\(h\);
```
**Algorithm 1** PN-OWL
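For readers who prefer plain code over pseudocode, the following is a hypothetical Python rendering of the control flow of Algorithm 1; `stage_learner` and `coverage` stand for the stage learner and the coverage function described above, and the default thresholds mirror the typical setup reported later in the evaluation section.

```
def pn_owl(individuals, positives, negatives, target, stage_learner, coverage,
           theta_p=0.1, eta_p=1.0, theta_n=0.3, eta_n=0.2):
    """Sketch of the two-stage PN-OWL control flow (cf. Algorithm 1)."""
    unlabelled = individuals - positives - negatives

    # P-stage: cover as many positive individuals as possible
    h_p = stage_learner(target, positives, negatives, unlabelled, theta_p, eta_p)
    if not h_p:
        return None                                    # nothing learnt, exit
    covered = coverage(h_p, theta_p)
    false_pos = covered - positives                    # FP of the P-stage
    if not false_pos:
        return {"P-rules": h_p, "N-rules": []}         # no N-stage needed

    # N-stage: learn rules that cover the false positives of the P-stage
    unlabelled_n = individuals - false_pos - positives
    h_n = stage_learner("FP", false_pos, positives, unlabelled_n, theta_n, eta_n)
    return {"P-rules": h_p, "N-rules": h_n or []}
```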
### The Stage Learner pnFoil-\(\mathcal{D}\mathcal{L}\)
As stage learner we will use fuzzy Foil-\(\mathcal{D}\mathcal{L}\)[20, 50, 51, 53], which, however, will be modified to adapt to our specific setting (see Algorithm 2); we call the resulting learner pnFoil-\(\mathcal{D}\mathcal{L}\). That is, the procedure invocations FuzzyStageLearner in lines 5 and 19 of the PN-OWL algorithm are indeed calls to pnFoil-\(\mathcal{D}\mathcal{L}\).
Essentially, pnFoil-\(\mathcal{DL}\) keeps inducing GCIs until as many positive examples as possible are covered or nothing new can be learnt. When an axiom is induced (see step 4 in Algorithm 2), the positive examples still to be covered are updated (steps 10 and 11).
In order to induce an axiom (step 4), Learn-One-Axiom is invoked (see Algorithm 3), which in general terms operates as follows:
1. start from concept \(\top\);
2. apply a refinement operator to find more specific fuzzy \(\mathcal{EL}(\mathbf{D})\) concept description candidates;
3. exploit a scoring function to choose the best candidate;
4. re-apply the refinement operator until a good candidate is found;
5. iterate the whole procedure until a satisfactory coverage of the positive examples is achieved.
```
0: KB \(\mathcal{K}\), target concept name \(T\), a set \(P\) (resp. \(N\) and \(U\)) of positive (resp. negative and unlabelled) examples, confidence threshold \(\theta\in[0,1]\), non-positive coverage percentage \(\eta\in[0,1]\)
0: A hypothesis, i.e. a set \(h=\{\langle C_{i}\sqsubseteq T,\delta_{i}\rangle|1\leq i\leq k\}\) of fuzzy \(\mathcal{EL}(\mathbf{D})\) GCIs
1:\(h\leftarrow\emptyset,Pos\gets P,\phi\leftarrow\top\sqsubseteq T\);
2://Loop until no improvement
3:while (\(Pos\neq\emptyset\)) and (\(\phi\neq\mathbf{null}\)) do
4:\(\phi\leftarrow\textsc{Learn-One-Axiom}(\mathcal{K},T,Pos,P,N,U,\theta,\eta)\); \(\triangleright\) Learn one fuzzy \(\mathcal{EL}(\mathbf{D})\) GCI of the form \(C\sqsubseteq T\)
5:if\(\phi\in h\)then\(\triangleright\) axiom already learnt
6:\(\phi\leftarrow\mathbf{null}\);
7:if\(\phi\neq\mathbf{null}\)then
8:\(\delta\gets cf(\phi,\mathcal{K},P)\);\(\triangleright\) Compute confidence of \(\phi\)
9:\(h\gets h\cup\{\langle\phi,\delta\rangle\}\);\(\triangleright\) Update hypothesis
10:\(Pos_{\phi}\gets Pos\cap Cov(\langle\phi,\delta\rangle)\);\(\triangleright\) Positives covered by \(\langle\phi,\delta\rangle)\)
11:\(Pos\gets Pos\setminus Pos_{\phi}\);\(\triangleright\) Update positives still to be covered
12:return\(h\);
```
**Algorithm 2** pnFoil-\(\mathcal{DL}\)
We now detail the steps of Learn-One-Axiom (Algorithm 3).
**Computing fuzzy datatypes.** For a numerical datatype \(s\), we consider _equal width triangular partitions_ of the values \(V_{s}=\{v\mid\mathcal{K}\models a{:}\exists s.=_{v}\}\) into a finite number of fuzzy sets (\(3,5\) or \(7\) sets), which is identical to [50, 53, 87] (see, _e.g._ Fig. 3). We additionally consider the use of the fuzzy c-means clustering algorithm over \(V_{s}\), where the fuzzy membership function is a triangular function built around the centroid of a cluster [20, 50, 53, 87].
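As an illustration, here is a small NumPy sketch of the equal-width triangular partition (the c-means variant only changes how the peaks are chosen, and in practice the outermost sets are typically turned into left/right shoulders, as in the examples later on); the function names and the toy values are our own.

```
import numpy as np

def triangular(x, a, b, c):
    """Membership of x in a triangular fuzzy set with feet a, c and peak b."""
    x = np.asarray(x, dtype=float)
    left = np.clip((x - a) / (b - a), 0.0, 1.0)
    right = np.clip((c - x) / (c - b), 0.0, 1.0)
    return np.minimum(left, right)

def equal_width_partition(values, n_sets=5):
    """(a, b, c) parameters of n_sets equal-width triangular fuzzy sets over values."""
    lo, hi = float(np.min(values)), float(np.max(values))
    peaks = np.linspace(lo, hi, n_sets)
    width = (hi - lo) / (n_sets - 1)
    return [(b - width, b, b + width) for b in peaks]

ages = np.array([23, 31, 40, 52, 67, 71, 80])
for a, b, c in equal_width_partition(ages, n_sets=3):
    print((round(a, 1), round(b, 1), round(c, 1)), float(triangular(55.0, a, b, c)))
```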
**The refinement operator.** The refinement operator we employ is essentially the same as in [20, 50, 51, 57, 87]. Specifically, it takes as input a concept \(C\) and generates new, more specific concept description candidates \(D\) (_i.e._, \(\mathcal{K}\models D\sqsubseteq C\)). For the sake of completeness, we recap the refinement operator here. Let \(\mathcal{K}\) be a knowledge base, \(\mathbf{A}_{\mathcal{K}}\) be the set of all atomic concepts in \(\mathcal{K}\), \(\mathbf{R}_{\mathcal{K}}\) the set of all object properties in \(\mathcal{K}\), \(\mathbf{S}_{\mathcal{K}}\) the set of all numeric datatype properties in \(\mathcal{K}\), \(\mathbf{B}_{\mathcal{K}}\) the set of all boolean datatype properties in \(\mathcal{K}\) and \(\mathcal{D}\) a set of (fuzzy) datatypes. The refinement operator \(\rho\) is shown in Table 2.
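To make the operator of Table 2 more tangible, here is a hypothetical Python encoding of a few representative cases (atomic concepts, existential restrictions over object properties, and conjunction with a refinement of \(\top\)); the concept encoding as nested tuples and the omission of the datatype and n-ary conjunction cases are our own simplifications.

```
# Concepts as tuples: ("top",), ("atom", A), ("exists", r, C), ("and", C1, C2)

def refine(concept, atoms, roles, subclass_of):
    """A few representative cases of the downward refinement operator rho."""
    kind = concept[0]
    if kind == "top":
        return ([("atom", a) for a in atoms] +
                [("exists", r, ("top",)) for r in roles])
    if kind == "atom":
        a = concept[1]
        return ([("atom", b) for b in subclass_of.get(a, [])] +
                [("and", concept, d) for d in refine(("top",), atoms, roles, subclass_of)])
    if kind == "exists":
        r, filler = concept[1], concept[2]
        return ([("exists", r, d) for d in refine(filler, atoms, roles, subclass_of)] +
                [("and", concept, d) for d in refine(("top",), atoms, roles, subclass_of)])
    return []

atoms, roles = ["Person", "Woman"], ["hasChild"]
print(refine(("atom", "Person"), atoms, roles, {"Person": ["Woman"]})[:3])
```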
**The scoring function.** The scoring function we use to assign a score to each candidate hypothesis is essentially a _gain_ function, similar to the one employed in [20, 50, 51, 57, 87], and it implements an information-theoretic criterion for selecting the best candidate at each refinement step. Specifically, given a fuzzy \(\mathcal{EL}(\mathbf{D})\) GCI \(\phi\) of the form \(C\sqsubseteq T\) chosen at the previous step, a KB \(\mathcal{K}\), a set of positive examples \(Pos\) still to be covered and a candidate fuzzy \(\mathcal{EL}(\mathbf{D})\) GCI \(\phi^{\prime}\) of the form \(C^{\prime}\sqsubseteq T\), then
\[gain(\phi^{\prime},\phi,\mathcal{K},Pos)=p\cdot(\log_{2}(cf(\phi^{\prime},\mathcal{K},Pos))-\log_{2}(cf(\phi,\mathcal{K},Pos)))\, \tag{12}\]
where \(p=|C^{\prime}\sqcap C|_{\mathcal{K}}^{Pos}\) is the fuzzy cardinality of positive examples in \(Pos\) covered by \(\phi\) that are still covered by \(\phi^{\prime}\).
Please note that in Eq. 12, the confidence degrees are calculated w.r.t. the positive examples still to be covered (\(Pos\)). In this way, Learn-One-Axiom is somewhat guided towards positives not yet covered so far by pnFoil-\(\mathcal{DL}\). Note also that the gain is positive if the confidence degree increases.
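Eq. 12 transcribes directly into a few lines of Python; the confidence degrees are passed in as numbers, and the handling of a zero confidence is our own assumption.

```
import math

def gain(cf_candidate, cf_current, p):
    """Information gain of a refinement phi' over phi (Eq. 12).

    cf_candidate / cf_current: confidences of phi' and phi w.r.t. Pos
    p: fuzzy cardinality of positives covered by both phi and phi'
    """
    if cf_candidate <= 0.0 or cf_current <= 0.0:
        return float("-inf")          # log undefined; treat as worst candidate
    return p * (math.log2(cf_candidate) - math.log2(cf_current))

print(gain(0.8, 0.5, p=12.0))         # > 0, since the confidence increased
```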
**Stop criterion.** Learn-One-Axiom stops when the confidence degree is above a given threshold \(\theta\in[0,1]\) and the non-positive coverage percentage is below \(\eta\in[0,1]\), or no GCI can be learnt anymore.
**The Learn-One-Axiom algorithm.** The Learn-One-Axiom algorithm is defined in Algorithm 3: steps 1-3 are simple initialisation steps. Please note here that \(NP\) are the non-positives in accordance with Remark 2, which states that we distinguish among positives and non-positives only (_cf._ also step 18, where the non-positive coverage percentage is used). Steps 5-21 form the main loop, from which we may exit in case the stopping criterion is satisfied; in step 8 we determine all new refinements, which are then scored in steps 10-15 in order to determine the one with the best gain. At the end of the algorithm, once we exit the main loop, the best GCI found is returned (step 22).
**Remark 4**: _pnFoil-\(\mathcal{DL}\) also allows the use of a backtracking mechanism (step 19), which, for ease of presentation, we omit. The mechanism is the same as for the pFoil-\(\mathcal{DL}\)-learnOneAxiom described in [87, Algorithm 3]. Essentially, a stack of the top-\(k\) refinements is maintained, ranked in decreasing order of the confidence degree, from which we pop the next best refinement (if the stack is not empty) in case no improvement has occurred. \(C_{best}\) becomes the popped-up refinement._
\begin{table}
\begin{tabular}{l l l} \(\mathbf{A}_{\mathcal{K}}\cup\{\exists r.\top\mid r\in\mathbf{R}_{\mathcal{K}} \}\cup\{\exists s.d\mid s\in\mathbf{S}_{\mathcal{K}},d\in\mathcal{D}\}\cup\) & \\ \(\{\exists s.=_{b},\mid s\in\mathbf{B}_{\mathcal{K}},b\in\{\mathbf{true},\mathbf{ false}\}\}\) & if & \(C=\top\) \\ \(\{A^{\prime}\mid A^{\prime}\in\mathbf{A}_{\mathcal{K}},\mathcal{K}\models A^{ \prime}\sqsubseteq A\}\cup\) & & \\ \(\{A\sqcap A^{\prime\prime}\mid A^{\prime\prime}\in\rho(\top)\}\) & if & \(C=A\) \\ \(\{\exists r.D^{\prime}\mid D^{\prime}\in\rho(D)\}\cup\{(\exists r.D)\sqcap D^{ \prime\prime}\mid D^{\prime\prime}\in\rho(\top)\}\) & if & \(C=\exists r.D,r\in\mathbf{R}_{\mathcal{K}}\) \\ \(\{(\exists s.d)\sqcap D\mid D\in\rho(\top)\}\) & if & \(C=\exists s.d,s\in\mathbf{S}_{\mathcal{K}},d\in\mathcal{D}\) \\ \(\{(\exists s.=_{b})\sqcap D\mid D\in\rho(\top)\}\) & if & \(C=\exists s.=_{b},s\in\mathbf{B}_{\mathcal{K}}\), \\ & & & \(b\in\{\mathbf{true},\mathbf{false}\}\) \\ \(\{C_{1}\sqcap...\sqcap C_{i}^{\prime}\sqcap...\sqcap C_{n}\mid i=1,...,n,C_{i}^ {\prime}\in\rho(C_{i})\}\) & if & \(C=C_{1}\sqcap...\sqcap C_{n}\) \\ \end{tabular}
\end{table}
Table 2: Downward Refinement Operator.
## 4 Evaluation
We have implemented the algorithm within the _FuzzyDL-Learner_5 system and have evaluated it over a set of (crisp) OWL ontologies.
Footnote 5: Data and implementation [http://www.umbertostraccia.it/cs/software/FuzzyDL-Learner/](http://www.umbertostraccia.it/cs/software/FuzzyDL-Learner/).
**Datasets.** Several OWL ontologies from different domains have been selected as illustrated in Table 3. In it, we report the DL the ontology refers to, the number of concept/class names, object properties, datatype properties and individuals in the ontology. For each ontology \(\mathcal{K}\) we indicate also the number \(|\mathcal{E}^{+}|\) of positive examples. All others are non-positive and we set \(\mathcal{E}^{-}=\overline{\mathcal{E}^{+}}=\mathfrak{l}_{\mathcal{K}}\backslash \mathfrak{l}_{\mathcal{E}^{+}}\). The ontologies Iris, Wine, Wine Quality and Yeast are built from the well-known _UC Irvine Machine Learning Repository_ (UCIMLR) [27] and have been transformed from the CSV format, provided by that repository, into OWL ontologies according to the procedure described in [20]. In the Wine Quality ontology, the quality attribute has been removed as the positive examples (the GoodRedWines) are those having "quality" greater than or equal to 7.
All other ontologies, except malware, belong to the well-known SML-Bench dataset [91].6 The malware ontology has been described in [88, 89].
Footnote 6: See also, [https://github.com/SmartDataAnalytics/SML-Bench](https://github.com/SmartDataAnalytics/SML-Bench)
For completeness, in Appendix A, a succinct description of what the ontologies are about is provided.
**Remark 5**: _While it is untypical to evaluate ontology-based learning algorithms on datasets with numerical datatype properties,7 we believe it is interesting to do so, as an important ingredient of our algorithm is the use of fuzzy concrete datatype properties to improve the human understandability of the classification decision process._
Footnote 7: To the best of our knowledge, we are unaware of any evaluation of ontology-based methods on those data sets.
**Remark 6**: _We leave it for future work to look at \(\mathrm{e.g.}\) methods to learn from the training data a threshold \(0\leq\tau_{p}\leq 1\) such that \(h\) predicts individual \(a\) to be a positive example if \(h(a)>\tau_{p}\). However, in this paper, we will always have \(\tau_{p}=0\)._
_More generally, unlike what we do now, if we would like to distinguish the negative examples from the unlabelled ones, we may well learn a classifier \(h^{-}\) for negative examples and then define a decision method that predicts an individual \(a\) to be a positive (resp. negative) example based on the prediction value \(h(a)\) (resp. \(h^{-}(a)\)) of \(a\) being a positive (resp. negative) example. That is, depending on the pair \(\langle h(a),h^{-}(a)\rangle\), one may then define a decision criterion for whether \(a\) is a positive or negative example, or just leave the prediction as unknown if there is not enough evidence of being one of the two._
**Measures.** We considered the following effectiveness measures (see also [87, 20]), which, for the sake of completeness, we recap here. Specifically, consider a learnt classifier \(h\) and let us assume to have added it to the KB \(\mathcal{K}\). In our setting, we always have the condition that if the classifier prediction value \(h(a)\) of an individual \(a\) is non-zero then the learner classifies \(a\) as an instance of \(T\), _i.e._\(h\) predicts \(a\) to be a positive example iff \(h(a)>0\).
In line with what we have said above, as all individuals are either positive or non-positive, we will consider the following measures, all of which are based on crisp cardinality (see also Eq. 1).
**True Positives:** denoted \(TP\), is defined as the number of instances of \(T\) that are positive
\[TP=\lceil T\rceil_{\mathcal{K}}^{\mathsf{I}_{\mathcal{E}^{+}}} \tag{13}\]
**False Positives:** denoted \(FP\), is defined as the number of instances of \(T\) that are not positive
\[FP=\lceil T\rceil_{\mathcal{K}}^{\mathsf{I}_{\overline{\mathcal{E}^{+}}}} \tag{14}\]
**Precision/Confidence:** denoted \(P\), is defined as the fraction of true positives w.r.t. the covered examples of \(h\)
\[P=\frac{TP}{\lceil T\rceil_{\mathcal{K}}^{\mathsf{I}_{\mathcal{K}}}} \tag{15}\]
\begin{table}
\begin{tabular}{l||c c c c c c} \hline \hline
**ontology** & **DL** & **class.** & **obj. prop.** & **data. prop.** & **ind.** & **target**\(T\) & **pos** \\ \hline \hline
**NTN** & \(\mathcal{SHCIN}(\mathcal{D})\) & 51 & 29 & 9 & 723 & \(\mathtt{ToLearn}\),\(\mathtt{Woman}\) & 46 \\ \hline
**Lymphography** & \(\mathcal{ALC}\) & 50 & 0 & 0 & 148 & \(\mathtt{ToLearn}\) & 81 \\ \hline
**Mammographic** & \(\mathcal{ALC}(\mathcal{D})\) & 20 & 3 & 2 & 975 & \(\mathtt{ToLearn}\) & 445 \\ \hline
**Malware** & \(\mathcal{ALH}(\mathcal{D})\) & 192 & 6 & 10 & 5669 & \(\mathtt{malware}\) & 500 \\ \hline
**Iris** & \(\mathcal{ALEHF}(\mathcal{D})\) & 4 & 0 & 5 & 150 & \(\mathtt{Iris-versicolor}\) & 50 \\ & & & & & & Iris-virginica & 50 \\ \hline & & & & & & 1 & 59 \\ Wine & \(\mathcal{ALEHF}(\mathcal{D})\) & 3 & 0 & 13 & 178 & 2 & 71 \\ & & & & & & 3 & 48 \\ \hline
**Wine Quality** & \(\mathcal{ALEHF}(\mathcal{D})\) & 7 & 0 & 11 & 6497 & \(\mathtt{GoodRedWine}\) & 217 \\ \hline
**Yeast** & \(\mathcal{ALEHF}(\mathcal{D})\) & 11 & 0 & 8 & 1462 & \(\mathtt{CFT}\) & 444 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Facts about the ontologies of the evaluation.
**Recall:** denoted \(R\), is defined as the fraction of true positives w.r.t. all positives
\[R=\frac{TP}{|\mathfrak{l}_{\mathcal{E}^{+}}|}\, \tag{16}\]
\(F1\)**-score:** denoted \(F1\), is defined as
\[F1=2\cdot\frac{P\cdot R}{P+R}. \tag{17}\]
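The four measures reduce to a few lines of Python once the crisp counts are available; the toy numbers below are our own.

```
def precision_recall_f1(tp, fp, n_positives):
    """Precision, recall and F1 from crisp counts (Eqs. 13-17)."""
    covered = tp + fp
    precision = tp / covered if covered else 0.0
    recall = tp / n_positives if n_positives else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

print(precision_recall_f1(tp=40, fp=10, n_positives=46))  # (0.8, 0.87, 0.83)
```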
For each parameter configuration, a stratified \(k\)-fold cross validation design8 was adopted (specifically, \(k=5\)) to determine the macro average of the above described performance measures. In all tests, we have that \(\mathfrak{l}_{\mathcal{E}}=\mathfrak{l}_{\mathcal{K}}\) and that, of course, there is at least one positive example in each fold. For each fold, during the training phase, we remove all assertions involving test examples from the ontology, and, thus, restrict the training phase to training examples only.
Footnote 8: Stratification means here that each fold contains roughly the same proportions of positive and non-positive instances of the target class.
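The stratified 5-fold protocol can be reproduced, for instance, with scikit-learn; the paper does not state which implementation was used, so the snippet below is purely illustrative and the individuals and labels are made up.

```
from sklearn.model_selection import StratifiedKFold

ids = [f"a{i}" for i in range(100)]                      # hypothetical individuals
labels = [1 if i < 20 else 0 for i in range(100)]        # 20 positives, 80 non-positives

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(skf.split(ids, labels)):
    n_pos = sum(labels[i] for i in test_idx)
    print(f"fold {fold}: {len(test_idx)} test individuals, {n_pos} positives")
```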
All configuration parameters for the best runs are available from the downloadable data, which we do not report here. Some of the salient parameters, used within our algorithm, are reported in Table 4.
A typical parameter setup is as follows, but may vary depending on the ontology and may be subject of a search for the optimal setting.
**P-stage.**: \(c_{P}=5,d_{P}=1,\theta_{P}=0.1,\eta_{P}=1.0\)
**N-stage.**: \(c_{N}=10,d_{N}=1,\theta_{N}=0.3,\eta_{N}=0.2\)
Let us briefly comment on them. During the P-stage, we would like to increase recall, that is, the percentage of covered positives w.r.t. all positives. To this end, we choose a low positive rule confidence threshold \(\theta_{P}\) and a high non-positive coverage percentage threshold \(\eta_{P}\). In the N-stage, however, we want to be more precise in removing the false positives in order to avoid removing true positives of the P-stage. Therefore, we increase the confidence threshold \(\theta_{N}\), lower the non-positive coverage percentage threshold \(\eta_{N}\) and increase the maximal number of conjuncts \(c_{N}\). The maximal role depth is determined manually a priori by inspecting the ontology.
For \(@^{+},@^{-}\) (resp. \(@\)) we used max (resp. \((\star)\)), and for concept conjunction \(\sqcap\) (resp. GCI operator \(\sqsubseteq\)) we used the t-norm min (resp. the Lukasiewicz implication). These could well be another set of parameters to be optimised. However, the parameter space is already quite large,
\begin{table}
\begin{tabular}{|c|c|} \hline \hline \(\theta_{P}\) & confidence threshold for positive rules of P-stage \\ \hline \(\theta_{N}\) & confidence threshold for negative rules of N-stage \\ \hline \(\eta_{P}\) & non-positive coverage percentage threshold for positive rules of P-stage \\ \hline \(\eta_{N}\) & non-positive coverage percentage threshold for negative rules of N-stage \\ \hline \(c_{P}\) & maximal number of conjuncts for positive rules of P-stage \\ \hline \(c_{N}\) & maximal number of conjuncts for negative rules of N-stage \\ \hline \(d_{P}\) & maximal role depth for positive rules of P-stage \\ \hline \(d_{N}\) & maximal role depth for negative rules of N-stage \\ \hline \hline \end{tabular}
\end{table}
Table 4: Some salient parameters of the PN-OWL algorithm.
so we fixed these logical operators as specified.9 Concerning other parameter settings, we also varied the number of fuzzy sets \((3,5\) or \(7)\). For c-means, we fixed the hyper-parameter to the default \(m=2\), the threshold to \(\epsilon=0.05\) and the number of maximum iterations to \(100\).
Footnote 9: A run with fixed parameters, _e.g._ on the malware ontology, may already take up to \(4\) days of computation time.
As baseline, we consider Fuzzy Foil-\(\mathcal{DL}\)[50, 51, 53, 20], with the best parameter setup as specified in [20]. Essentially, Fuzzy Foil-\(\mathcal{DL}\) is as PN-OWL, except that it stops after the P-stage and, thus, corresponds to PN-OWL in which the set of negative rules \(h_{N}\) is by definition empty (_cf._ lines 21-23 of the PN-OWL algorithm). This allows us to appreciate the added value (if any), in terms of effectiveness, of the N-stage phase.
The results are reported in Table 5. For the UCIMLR datasets, in case of multiple targets, the average of the measures has been considered.
**Example 4.1**: _We provide here examples of learnt rules (in Fuzzy OWL syntax) via PN-OWL applied to the Mammographic ontology. The first one is one of the learnt rules during the P-stage, while the second one is one of the learnt rules during the N-Stage. In the latter case, FALSEP_ToLearn denotes the class of false positives covered by rules learnt during the P-stage. The number associated to a rule is its confidence/precision. We also report the specification of some learnt fuzzy sets via fuzzy c-means._
(implies (and (some hasDensity low) (some hasShape irregular) (some hasAge hasAge_veryHigh) (some hasBiRads hasBiRads_high)) ToLearn 0.965068)
(implies (and (some hasDensity low) (some hasMargin microlobulated) (some hasShape oval) (some hasBiRads hasBiRads_medium)) FALSE_ToLearn 0.75)
(define-fuzzy-concept hasBiRads_medium triangular(1,6,2.780,3.997,5.022))
(define-fuzzy-concept hasBiRads_high right-shoulder(1,6,3.997,5.022))
(define-fuzzy-concept hasAge_veryHigh right-shoulder(1,6,62.793,71.882))
**Discussion.** In Table 5, the last column reports the improvement of PN-OWL relative to the measure \(F1\) (see Eq. 17), over our baseline Fuzzy Foil-\(\mathcal{DL}\). Overall, PN-OWL performs better than Fuzzy Foil-\(\mathcal{DL}\) (with the exception of Lymphography) and in some cases the improvement is particularly high, such as for NTN, Mammographic and Wine Quality.
Essentially, for PN-OWL we were able to find a better compromise between precision and recall than for Foil-\(\mathcal{DL}\). In particular, we were able to increase precision confirming our conjecture that indeed the N-stage is able to remove the false positives.
Concerning Lymphography, we were unable to replicate the results of Fuzzy Foil-\(\mathcal{DL}\) in [20], for which we now get an F1 measure of 0.805 in place of 0.855. The difference lies in a few misclassified examples. We also noted that in this case PN-OWL achieves \(F1=1.0\) during the training phase, which may suggest an over-fitting problem.
Last but not least, let us mention that PN-OWL (as does Fuzzy Foil-\(\mathcal{DL}\)) definitely does not yet behave well on the Wine Quality and Yeast datasets, which will be the subject of further investigation.
The overall lesson learnt with PN-OWL is that the N-stage may indeed provide a non-negligible contribution to improving the effectiveness of the classification process, provided one can find the appropriate balance between precision and recall. Unfortunately, searching the parameter space of PN-OWL for an optimum is quite time consuming and a brute-force approach is likely not feasible (at least not with our computational resources at hand). In fact, we proceeded one run at a time, and by analysing the results tried to figure out whether and how to change some of the parameters in Table 4 to increase recall and/or precision. On the other hand, optimising Foil-\(\mathcal{DL}\) is much easier as it has half the parameters of PN-OWL.
\begin{table}
\begin{tabular}{|c|l|c|c|c|c|} \hline
**Dataset** & **Algorithm** & **Precision** & **Recall** & **F1** & **\% Improvement** \\ \hline \multirow{2}{*}{NTN} & Fuzzy DL-FOIL & 0.661 & 0.513 & 0.548 & \multirow{2}{*}{**80.47\%**} \\ & PN-OWL & **1.000** & **0.980** & **0.989** & \\ \hline \hline \multirow{2}{*}{Lymphography} & Fuzzy DL-FOIL & **0.861** & **0.851** & **0.855** & \\ & PN-OWL & 0.836 & 0.841 & 0.833 & **-2.57\%** \\ \hline \hline \multirow{2}{*}{Mammographic} & Fuzzy DL-FOIL & 0.737 & 0.692 & 0.710 & \multirow{2}{*}{**11.27\%**} \\ & PN-OWL & **0.746** & **0.831** & **0.790** & \\ \hline \hline \multirow{2}{*}{Malware} & Fuzzy DL-FOIL & 0.623 & **0.830** & 0.704 & \multirow{2}{*}{**5.06\%**} \\ & PN-OWL & **0.701** & 0.818 & **0.740** & \\ \hline \hline \multirow{2}{*}{Iris} & Fuzzy DL-FOIL & 0.886 & 0.910 & 0.890 & \multirow{2}{*}{**4.16\%**} \\ & PN-OWL & **0.949** & 0.910 & **0.927** & \\ \hline \hline \multirow{2}{*}{Wine} & Fuzzy DL-FOIL & 0.884 & **0.971** & 0.895 & \multirow{2}{*}{**0.98\%**} \\ & PN-OWL & **0.933** & 0.904 & **0.914** & \\ \hline \hline \multirow{2}{*}{Wine Quality} & Fuzzy DL-FOIL & 0.227 & **0.917** & 0.363 & \multirow{2}{*}{**27.93\%**} \\ & PN-OWL & **0.365** & 0.659 & **0.464** & \\ \hline \hline \multirow{2}{*}{YEAST} & Fuzzy DL-FOIL & 0.427 & 0.746 & 0.540 & \multirow{2}{*}{**4.37\%**} \\ & PN-OWL & **0.432** & **0.815** & **0.564** & \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results table. The measures are the macro average over the 5 folds w.r.t. the test set.
## 5 Related Work
Concept inclusion axiom learning in DLs is essentially inspired by statistical relational learning, where classification rules are (possibly weighted) Horn clause theories (see _e.g._[69, 70]), and various methods have been proposed in the DL context so far (see _e.g._[54, 24, 71]). The general idea consists in the exploration of the search space of potential concept descriptions that cover the available training examples using so-called refinement operators (see, _e.g._[7, 22, 45, 46, 47, 48, 49]). The goal is then to learn a concept description of the underlying DL language covering (possibly) all the provided positive examples and (possibly) not covering any of the provided negative examples. The fuzzy case (see [50, 53, 87, 20]) is a natural extension relying on fuzzy DLs [10, 86] and fuzzy ILP (see _e.g._[82]) instead.
As already mentioned, our two-stage algorithm is conceptually inspired by PN-rule [2, 3, 40, 41], consisting of a P-stage in which positive rules (called P-rules) are learnt to cover as many as possible instances of a target class and an N-stage in which negative rules (called N-rules) are learnt to remove most of the non-positive examples covered by the P-stage. The two rule sets are then used to build up a decision method to classify an object as being an instance of the target class or not [2, 3, 40, 41]. It is worth noting that what differentiates this method from all others is its second stage. The main differences of PN-OWL w.r.t. PN-rule are: _(i)_ PN-rule operates with _tabular data_ only, _i.e._ the data consists of attribute value pairs \((A,v)\), while we are in the context of OWL ontologies;10 _(ii)_ PN-rules are of the form \(cond\to T\), where the condition \(cond\) is of the form \((A\in[l,h])\) or \((A\not\in[l,h])\) for a continuous attribute \(A\),11 while we have conjunctions of conditions in the rule body and each condition may be fuzzy, besides being either a class name or a restriction on attributes (attributes may also be nested); and _(iii)_ PN-rule considers a completely different rule scoring and combination strategy than we use in PN-OWL. The latter can be represented in Fuzzy OWL 2 [12, 13], while for the former we conjecture it cannot: so, we left this option out, as a fuzzy DL reasoner would not be able to reason with those types of rules.
Footnote 10: Tabular data can easily be mapped into OWL ontologies as illustrated in [20].
Footnote 11: If \(A\) is categorical then obviously \(cond\) is either of the form \(A=v\) or \(A\neq v\).
Other closely related works are [30, 28, 36, 35, 50, 53, 87]. In fact, [30, 28, 36, 78] can be seen as an adaptation to the DL case of the well-known Foil algorithm, while [50, 53], which stem essentially from [51, 52, 55, 56, 57, 58], propose _fuzzy_ Foil-like algorithms instead, and are inspired by fuzzy ILP variants such as [26, 82, 84].12 Let us note that [50, 56] consider the weaker hypothesis representation language DL-Lite [5], while here we rely on an aggregation of fuzzy \(\mathcal{EL}(\mathbf{D})\) inclusion axioms. Fuzzy \(\mathcal{EL}(\mathbf{D})\) has also been considered in [87], which however differs from [50, 53] by the fact that a (fuzzy) probabilistic ensemble evaluation of the fuzzy concept description candidates has been considered.13 Let us recap that, in our opinion, fuzzy \(\mathcal{EL}(\mathbf{D})\) concept expressions are appealing as they can straightforwardly be translated into natural language and, thus, contribute to the explainability aspect of the induced classifier.
Footnote 12: See, _e.g._[23], for an overview on fuzzy rule learning methods.
Footnote 13: Also, to the best of our knowledge, concrete datatypes were not addressed in the evaluation.
Discrete boosting has been considered in [35] that also shows how to derive a weak learner (called wDLF) from conventional learners using some sort of random downward refinement operator covering at least a positive example and yielding a minimal score fixed with a threshold. Related to this work is [20] that deals with fuzziness in the hypothesis language and a real-valued variant of AdaBoost and differentiates from the previous one by using a descent-like gradient algorithm to search for the best alternative. Notably, this also deviates from 'fuzzy' rule learning AdaBoost variants, such as [25, 66, 68, 81, 92] in which the weak learner is required to generate the whole rules' search space beforehand the selection of the best current alternative. Such an
approach is essentially unfeasible in the OWL case due to the size of the search space.
In [39] a method is described that can learn fuzzy OWL DL concept equivalence axioms from FuzzyOWL 2 ontologies, by interfacing with the _fuzzyDL_ reasoner [15]. The candidate concept expressions are provided by the underlying DL-Learner[44, 18, 19] system. However, it has been tested only on a toy ontology so far. Moreover, let us mention [42] that is based on an ad-hoc translation of fuzzy Lukasiewicz \(\mathcal{ALC}\) DL constructs into fuzzy _Logic Programming_ (fuzzy LP) and uses a conventional ILP method to learn rules. Unfortunately, the method is not sound as it has been shown that the mapping from fuzzy DLs to LP is incomplete [64] and entailment in Lukasiewicz \(\mathcal{ALC}\) is undecidable [21]. To be more precise, undecidability holds already for \(\mathcal{EL}\) under the infinitely valued Lukasiewicz semantics [17].14
Footnote 14: We recall that \(\mathcal{EL}\) is a strict sub-logic of \(\mathcal{ALC}\).
While it is not our aim to provide an extensive overview of the literature on learning w.r.t. ontologies, we nevertheless recap here that there are also alternative methods to what we present here, which are related only to the extent that they deal with concept description induction in the context of DLs. So, _e.g._, the series of works [32, 33, 75, 74, 76, 72, 80, 77, 79] are inspired by _Decision Trees/Random Forests_, [9, 29, 31, 34] consider _Kernel Methods_ for inducing concept descriptions, while [60, 62, 61, 63, 94] consider essentially a _Naive Bayes_ approach. Last but not least, [43] is inspired by _Genetic Programming_ to induce concept expressions, while [65] is based on the _Reinforcement Learning_ framework. Eventually, [73] proposes to use decision trees to learn so-called _disjointness axioms_, _i.e._ expressions of the form \(C\sqcap D\sqsubseteq\perp\), declaring that classes \(C\) and \(D\) are disjoint.
## 6 Conclusions & Future Work
In this work, we addressed the problem of automatically learning fuzzy concept inclusion axioms from OWL 2 ontologies. That is, given a target class \(T\) of an OWL ontology, we have addressed the problem of inducing fuzzy concept inclusion axioms that describe sufficient conditions for an individual to be classified as an instance of \(T\). Specifically, we have presented a two-stage algorithm, called PN-OWL, that is inspired by PN-rule [2, 3, 40, 41] and adapted to the context of OWL. The main features of our algorithm are essentially the following: _(i)_ at the P-stage, it generates a set of fuzzy inclusion axioms, the P-rules, that cover as many instances of the target class \(T\) as possible without covering too many non-positives; _(ii)_ at the N-stage, it generates a set of fuzzy inclusion axioms, the N-rules, that cover as many as possible of the non-positive instances covered at the P-stage; _(iii)_ the fuzzy inclusion axioms are then combined (aggregated) into a new fuzzy inclusion axiom describing sufficient conditions for an individual to be classified as an instance of the target class \(T\). Additionally, all fuzzy inclusion axioms may include fuzzy concepts and fuzzy concrete domains, each axiom has a leveraging weight (specifically, called confidence or precision), and all generated fuzzy concept inclusion axioms can directly be encoded as _Fuzzy OWL 2_ axioms.
We have also conducted an extensive evaluation, comparing it with fuzzy Foil-\(\mathcal{DL}\). Our evaluation shows that PN-OWL generally performs better than fuzzy Foil-\(\mathcal{DL}\) in terms of effectiveness, though finding an optimal parameter configuration is much more time consuming than for Foil-\(\mathcal{DL}\), as PN-OWL has twice as many parameters as fuzzy Foil-\(\mathcal{DL}\).
Concerning future work, besides investigating about other learning methods, and future work listed here and there in the paper, we envisage various aspects worth to be investigated in more detail: _(i)_ it is still unclear how the construction of fuzzy sets may impact effectiveness. So far, we did not notice a clear winner between the uniform clustering and c-means clustering algo
rithms used to build fuzzy datatypes. This is somewhat surprising. We would like to investigate that in more detail by considering various alternatives as well [1] and/or considering clustering methods based on the aggregation of data properties, _i.e._ multi-dimensional clustering versus uni-dimensional clustering; _(ii)_ moreover, we would like to cover more OWL datatypes than those considered here so far (numerical and boolean) such as strings, dates, etc., possibly in combination with some classical machine learning methods (see, _e.g._[83]); _(iii)_ we would like to investigate the computational aspect: so far, for some ontologies, a learning run may take even a week (w.r.t. our available resources). Here, we would like to investigate both parallelization methods as well as the impact, in terms of effectiveness, of efficient, logically sound, but not necessarily complete, reasoning algorithms; _(iv)_ in principle, our two-stage algorithm PN-OWL is parametric w.r.t. the learner to be used during both the P-stage and the N-stage (_cf._ lines 5 and 19 of the PN-OWL algorithm): here we would like to investigate how to plug in another alternative such as Fuzzy OWL-Boost[20] and to verify its effectiveness; _(v)_ we would also like to assess the impact of other alternative scoring functions to information gain (_cf._ Eq. 12) within our setting, including various alternative choices of t-norms and r-implications; and (_vi_) we are looking into combining our Fuzzy DL-Learning with sub-symbolic learning methods, such as _e.g._ Neural Networks, an activity that is already ongoing.
Moreover, we would really like to consider extending the hypothesis language \(\mathcal{EL}(\mathbf{D})\) with so-called _threshold concepts_[11] of the form \(C[\geq d]\) (resp. \(C[\leq d]\)), where \(d\in[0,1]\) and \(C\) is either a class name or an existential restriction, with the intended meaning "\(C[\geq d]\) (resp. \(C[\leq d]\)) is the fuzzy set of individuals that are instances of \(C\) to a degree greater (resp. smaller) than or equal to \(d\)." This would provide us with a finer-grained hypothesis language in which a threshold may be defined for each conjunct of a rule rather than via a rule confidence threshold as it is now. A Fuzzy OWL 2 example of such a rule may be, referring to the Wine Quality ontology and target wine 1
(implies (and (some alcohol alcohol_VH)[<= 0.786] (some sulphates sulphates_H)[>= 0.289] (some pH pH_L)[<= 0.106]) 1)
with the intended meaning "if, for an individual (wine) \(a\), the alcohol level of being very high is smaller than or equal to 0.786, the sulphates level of being high is greater than or equal to 0.289 and the pH level of being low is smaller than or equal to 0.106, then classify \(a\) to some extent (_e.g._ to the minimum of the degrees of \(a\) being an instance of each conjunct) as an instance of the target class 1".
## Acknowledgment
This research was partially supported by TAILOR, a project funded by EU Horizon 2020 research and innovation programme under (GA No 952215). This work has also been partially supported by the H2020 DeepHealth Project (GA No. 825111). This paper is also supported by the FAIR (Future Artificial Intelligence Research) project funded by the NextGenerationEU program within the PNRR-PE-AI scheme (M4C2, investment 1.3, line on Artificial Intelligence). Eventually, this work has also been partially supported by the H2020 STARWARS Project (GA No. 101086252), type of action HORIZON TMA MSCA Staff Exchanges.
We wish to thank Centro Servizi CNR of the ICT-SAC Department of the National Research Council for the precious computing services and resources they made available. We wish to address a special thanks to Ing. Giorgio Bartoccioni (ICT-SAC) for his technical support. |
2306.15348 | PANet: LiDAR Panoptic Segmentation with Sparse Instance Proposal and
Aggregation | Reliable LiDAR panoptic segmentation (LPS), including both semantic and
instance segmentation, is vital for many robotic applications, such as
autonomous driving. This work proposes a new LPS framework named PANet to
eliminate the dependency on the offset branch and improve the performance on
large objects, which are always over-segmented by clustering algorithms.
Firstly, we propose a non-learning Sparse Instance Proposal (SIP) module with
the ``sampling-shifting-grouping" scheme to directly group thing points into
instances from the raw point cloud efficiently. More specifically, balanced
point sampling is introduced to generate sparse seed points with more uniform
point distribution over the distance range. And a shift module, termed bubble
shifting, is proposed to shrink the seed points to the clustered centers. Then
we utilize the connected component label algorithm to generate instance
proposals. Furthermore, an instance aggregation module is devised to integrate
potentially fragmented instances, improving the performance of the SIP module
on large objects. Extensive experiments show that PANet achieves
state-of-the-art performance among published works on the SemanticKITTI
validation and nuScenes validation for the panoptic segmentation task. | Jianbiao Mei, Yu Yang, Mengmeng Wang, Xiaojun Hou, Laijian Li, Yong Liu | 2023-06-27T10:02:28Z | http://arxiv.org/abs/2306.15348v1 | # PANet: LiDAR Panoptic Segmentation with Sparse Instance Proposal and Aggregation
###### Abstract
Reliable LiDAR panoptic segmentation (LPS), including both semantic and instance segmentation, is vital for many robotic applications, such as autonomous driving. This work proposes a new LPS framework named PANet to eliminate the dependency on the offset branch and improve the performance on large objects, which are always over-segmented by clustering algorithms. Firstly, we propose a non-learning Sparse Instance Proposal (SIP) module with the “sampling-shifting-grouping” scheme to directly group thing points into instances from the raw point cloud efficiently. More specifically, balanced point sampling is introduced to generate sparse seed points with more uniform point distribution over the distance range. And a shift module, termed bubble shifting, is proposed to shrink the seed points to the clustered centers. Then we utilize the connected component label algorithm to generate instance proposals. Furthermore, an instance aggregation module is devised to integrate potentially fragmented instances, improving the performance of the SIP module on large objects. Extensive experiments show that PANet achieves state-of-the-art performance among published works on the SemanticKITTI validation and nuScenes validation for the panoptic segmentation task. Code is available at [https://github.com/Jieqianyu/PANet.git](https://github.com/Jieqianyu/PANet.git).
## I Introduction
LiDAR Panoptic Segmentation (LPS) plays an important role in 3D scene understanding using point clouds, which has been an essential task for many robotic applications such as autonomous driving. LPS combines semantic and instance segmentation in a single framework, providing both semantic labels for points in the scenes and instance IDs for points that belong to instances (things). With the emergence of large-scale point cloud benchmarks, e.g., SemanticKITTI [1] and nuScenes [2], LPS has achieved rapid progress. However, performing reliable panoptic segmentation is still highly challenging due to the sparse, unordered, and non-uniform sampled natures of point clouds.
Existing methods for LPS can be mainly divided into detection-based [3, 4, 5, 6] and clustering-based [7, 8, 9, 10, 11, 12, 13, 14] approaches. The former applied the 3D object detection network to discover instances, which are usually limited by detection accuracy. The latter achieved instance segmentation through the center regression and clustering algorithms. For clustering-based methods, the clustering performance is easily affected by the point distribution of the regressed centers. And we found that most existing methods heavily depend on the learnable offset branch to provide geometric shifts for center regression. However, it is hard to predict ideal geometric shifts due to the sparsity, non-uniform density of LiDAR point cloud, and various shapes/sizes of instances. Recently, DSNet [9] designed a learnable dynamic shifting (DS) module to further shift the regressed centers to the clustering centers iteratively, but the shifted centers may not match the ground-truth instance centers. The possible inconsistency degrades the learning of the DS module.
To address the above problems, we propose a non-learning Sparse Instance Proposal (SIP) module to directly group instances from the raw points of things efficiently. We adopt the "sampling-shifting-grouping" scheme to design our SIP module. Specifically, to avoid the significant computational burden and memory overhead caused by shifting and grouping all the points of things, we introduce a balanced point-sampling (BPS) strategy. The proposed BPS generates sparse seed points with a more uniform distribution over the distance range and implements the point sampling and assignment simultaneously, improving the efficiency. Furthermore, a simple but effective shift module, termed Bubble Shrinking (BS), is devised to efficiently and precisely shift the seed points to the clustered centers iteratively. Finally, we group the shifted points into instances by the Connected Component Labeling (CCL) algorithm, which can be implemented by the efficient depth-first search. Due to the cascade design of BPS, BS, and CCL-based grouping, our non-learning SIP module is effective and efficient and can be easily extended to other backbones and datasets in a plug-and-play manner.
Nevertheless, due to the sparsity of LiDAR point clouds, there are cases where fragmented/over-segmented instances may be generated by clustering algorithms, including our SIP module, especially when grouping large objects such as trucks and buses. To improve the completeness of instance segmentation, we propose an Instance Aggregation (IA) module to further integrate the potentially fragmented instances. More specifically, we apply the KNN-Transformer to enhance the interactions among the instance proposals and merge the instance proposals that belong to the same instance ID by the instance affinities. Our IA module can complement the SIP module, further improving the segmentation performance on large objects.
Extensive experiments on two large-scale datasets SemanticKITTI [1] and nuScenes [2] demonstrate the effectiveness of our method PANet. Our contributions are summarized as follows:
\(\bullet\) We develop a Sparse Instance Proposal (SIP) module without extra learning tasks to directly group instances from the raw points of things, which can be easily extended to
other backbones and datasets in a plug-and-play manner.
\(\bullet\) SIP eliminates the dependency on the offset branch and accelerates the clustering process due to the cascade design of BPS, BS, and CCL-based grouping.
\(\bullet\) We propose an instance aggregation module to integrate the possible fragmented instances and complement the SIP module to improve large objects' segmentation performance.
\(\bullet\) Extensive experiments on both SemanticKITTI [1] and nuScenes [2] datasets show that our model achieves state-of-the-art performance.
## II Related Work
### _LiDAR Semantic Segmentation._
According to the data representations, most semantic segmentation methods on point clouds can be categorized into projection-based, point-based, voxel-based, and multi-view methods. Projection-based works [15, 16, 17, 18, 19] project the raw point clouds onto a certain plane such as range-view or BEV, which benefit from efficient 2D CNN architectures but do not always reflect 3D relationships. Following PointNet/PointNet++ [20, 21], point-based methods [22, 23] directly process the raw point clouds, which, however, requires a time-consuming local neighborhood search. Voxel-based methods [24, 25, 26] voxelize the raw point clouds and usually apply 3D sparse convolutions [27] to extract voxel-wise features, reducing the computational burden and memory overhead. Multi-view methods [13, 28, 29] fuse different representations of point clouds to exploit their individual properties. Similar to [14], we combine a sparse 3D CNN and a tiny 2D U-Net to aggregate multi-scale 3D features and 2D features to improve semantic segmentation, a vital part of panoptic segmentation.
### _LiDAR Panoptic Segmentation_
Most LiDAR panoptic segmentation (LPS) methods usually consist of semantic and instance branches. Regarding the implementation of instance segmentation, these approaches can be classified into detection-based and clustering-based methods.
**Detection-based methods.** These methods [3, 4, 5, 6] integrate 3D object detection [30, 31, 32, 33] to discover instances. They adopt the detector to explicitly provide instances' location and size information for further segmentation. PanopticTrackNet [3] utilizes Mask R-CNN [34] for instance segmentation. EfficientLPS [4] fuses the semantic logits, bounding boxes, and mask logits and generates the panoptic segmentation results in the range view, which are further back-projected to obtain final predictions. AOP-Net [5] and LidarMultiNet [6] design multi-task pipelines to combine 3D object detection and panoptic segmentation. These methods benefit from the prior information provided by the detector, but the segmentation performance largely depends on the detection results.
**Clustering-based methods.** These methods [7, 8, 9, 10, 11, 12, 13, 14, 35] perform clustering algorithms to group thing points into instances. Panoptic-PolarNet [10] predicts the 2D center heatmap and points shifts for clustering. DS-Net [9] designed a learnable dynamic shift module to further shift the regressed centers to the clustering centers. Panoptic-PHNet [14] introduced a pseudo heatmap generated from the shifted thing points and a center grouping module to yield instance centers for efficient clustering. There are also studies [35] over-segmenting the instances and utilizing graph networks to aggregate the fragments. We also devise a clustering scheme. Unlike these methods, the proposed sparse instance proposal module without extra learning tasks eliminates the dependency on the offset head. It allows us to extend it to other backbones and datasets in a plug-and-play manner. Moreover, different from [35], which merges a large amount of artificially over-segmented proposals with GNN, our instance aggregation module focuses on improving the SIP module's performance on large objects and utilizes the KNN-Trasformer to model the relationships among sparse potentially fragmented instances, which is more effective and efficient.
## III Method
### _Backbone Design_
Similar to Panoptic-PHNet [14], we combine a multi-scale sparse 3D CNN and a tiny 2D U-Net to aggregate multi-scale 3D features and 2D features. As Fig.1(a) shows, the input LiDAR point cloud \(P\in\mathbb{R}^{N\times 4}\) (coordinates and intensity) is first fed into a voxelization layer (similar to the voxelization in DRINet [25]) to obtain the voxel-wise features \(F^{0}_{v}\in\mathbb{R}^{N^{\prime}\times 64}\) with a dense spatial resolution of \(L\times H\times W\). And we project the \(F^{0}_{v}\) along the z-axis to generate the BEV feature \(F_{b}\in\mathbb{R}^{64\times H\times W}\). After that, the sparse 3D CNN, which consists of four encoder blocks used in GASN [26], extracts the multi-scale 3D features \((F^{1}_{v},F^{2}_{v},F^{3}_{v},F^{4}_{v})\), while the 2D U-Net, consisting of a stack of 2D convolutions, encodes the BEV features \(F^{\prime}_{b}\) under different receptive fields. We further back-project the encoded 3D features and BEV features to get the point-wise features \((f^{0}_{p},f^{1}_{p},f^{2}_{p},f^{3}_{p},f^{4}_{p},f^{b}_{p})\), which are concatenated along the channel dimension and fed into MLPs to obtain the fused features \(f_{p}\in\mathbb{R}^{N\times 64}\). Finally, the semantic head and instance branch take the fused features \(f_{p}\) as the input and output semantic confidences and instance proposals.
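The sparse 3D CNN and the 2D U-Net themselves are beyond the scope of a short snippet, but the back-projection and MLP fusion of the point-wise features can be sketched as follows (PyTorch, with made-up shapes and index maps; this is an illustrative sketch, not the released implementation):

```
import torch
import torch.nn as nn

def back_project(cell_feat, point_to_cell):
    """Gather per-cell (voxel or BEV pixel) features back to the N points."""
    return cell_feat[point_to_cell]                  # (N, C)

class FeatureFusion(nn.Module):
    """Concatenate multi-level point-wise features and fuse them with MLPs."""
    def __init__(self, in_dims, out_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(sum(in_dims), out_dim), nn.ReLU(),
                                 nn.Linear(out_dim, out_dim))

    def forward(self, point_feats):                  # list of (N, C_i) tensors
        return self.mlp(torch.cat(point_feats, dim=1))

# toy shapes: two voxel levels and one BEV level back-projected to N points
N = 1000
voxel_feats = [torch.randn(500, 64), torch.randn(200, 128)]
bev_feat = torch.randn(300, 64)
point_to_voxel = [torch.randint(0, 500, (N,)), torch.randint(0, 200, (N,))]
point_to_bev = torch.randint(0, 300, (N,))

feats = [back_project(voxel_feats[0], point_to_voxel[0]),
         back_project(voxel_feats[1], point_to_voxel[1]),
         back_project(bev_feat, point_to_bev)]
print(FeatureFusion([64, 128, 64])(feats).shape)     # torch.Size([1000, 64])
```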
**Semantic head.** MLPs are exploited to predict the semantic scores for each point. The cross-entropy and lovasz loss [36] are used to train the semantic head. Moreover, following [26], we use deep sparse supervision to accelerate the network's convergence.
**Instance branch.** As shown in Fig. 1(a), the instance branch consists of two modules, namely Sparse Instance Proposal (SIP) and Instance Aggregation (IA), which are further introduced in the following sections. We do not use the offset branch commonly used in [9, 10, 12, 13, 14] to predict the geometric center shifts for subsequent instance grouping with clustering algorithms. Instead, we utilize the SIP module to directly generate instance proposals from the raw points of things. Moreover, we use the IA module to merge proposals of the same instance ID by the
instance affinities to boost the SIP module's performance on large objects.
### _Sparse Instance Proposal_
The sparse instance proposal module receives the semantic predictions and raw point cloud as the input and generates the instance proposals, as shown in Fig. 1. Since directly grouping raw points of things would bring a large computational burden and memory overhead, we first introduce a Balanced Point-Sampling (BPS) strategy to obtain sparse seed points.
**Balanced point-sampling.** The widely employed farthest point sampling (FPS) is computationally expensive. For example, it takes over 200 seconds to sample 10% of 1 million points. Random sampling, on the other hand, usually suffers from a long-tail distribution problem [37], i.e., the closer the distance to the sensor, the denser the point cloud. Therefore, inspired by PCB-RandNet [37], we devise a Balanced Point-Sampling (BPS) strategy for generating sparse seed points of things. As shown in Fig.1(b), the input point cloud is first divided into voxel blocks, similar to the voxelization operation in Sec. III-A. Then, we scatter the points in the same voxel and calculate the average of these points as the seed point. It means that each non-empty voxel contains one seed point that dominates all points in the same voxel block. Notably, our BPS is affected by the instance occupancy and voxel resolution rather than the point density. The seed point distribution over the distance range can thus be uniform with our BPS method. Moreover, different from FPS and random sampling, which need to exploit K Nearest Neighbor (K-NN) search to further assign corresponding thing points to seed points, our BPS conveniently and efficiently uses the voxel blocks to implement point sampling and assignment simultaneously.
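A minimal NumPy sketch of the voxel-block averaging described above (thing points only; in the actual module the semantic class of each seed would additionally be obtained by majority voting, and the voxel size is a hypothetical value):

```
import numpy as np

def balanced_point_sampling(points, voxel_size=0.5):
    """One seed point (the mean of its voxel) per occupied voxel block.

    Returns the seed coordinates and, for every input point, the index of the
    seed that dominates it.
    """
    coords = np.floor(points[:, :3] / voxel_size).astype(np.int64)
    uniq, inverse = np.unique(coords, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    n_seeds = uniq.shape[0]
    counts = np.bincount(inverse, minlength=n_seeds).astype(float)
    seeds = np.stack([np.bincount(inverse, weights=points[:, d], minlength=n_seeds)
                      for d in range(3)], axis=1) / counts[:, None]
    return seeds, inverse

pts = np.random.rand(10000, 3) * 50.0          # toy "thing" points
seeds, point2seed = balanced_point_sampling(pts, voxel_size=1.0)
print(seeds.shape, point2seed.shape)
```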
**Bubble shrinking.** After the BPS, we obtain the sparse seed points \(X\in\mathbb{R}^{M\times 3}\) of things. The semantics of a seed point is generated by majority voting over all points corresponding to the seed point. Furthermore, the shift module, termed bubble shrinking, is exploited to shift the seed points to the centers of the instances they belong to. We first set the minimum radius \(r_{c}\) of thing category \(c\) empirically. Then we establish a graph with all the sparse seed points as vertices. Two vertices are connected if they belong to the same category \(c\) and their distance is smaller than the minimum radius \(r_{c}\). We assign each seed point a bubble which contains the points that connect to the seed point. Then, similar to other shift modules [9, 38], the bubbles are iteratively shrunk \(L=4\) times according to the points in the bubble, as shown in Fig.1(c). The procedure of bubble shrinking is illustrated in Algorithm 1. Our bubble shrinking is simple
Fig. 1: The overview of the proposed PANet. The backbone consists of a multi-scale sparse 3D CNN and a tiny 2D U-Net to aggregate multi-level 3D and 2D features. The extracted features are fed into the semantic head for semantic predictions. In the instance branch, the Sparse Instance Proposal (SIP) module shown in the bottom half is proposed to efficiently group the raw points of things into instances. Moreover, the Instance Aggregation (IA) module takes point-wise features and integrates the potentially fragmented instances generated by the SIP module. Finally, the semantic predictions and merged instances are combined to obtain the final panoptic segmentation results. “V2B” denotes the projection of voxel features to BEV features.
and efficient. Notably, in our shift module, the connectivity of the graph, i.e., the adjacency matrix \(K\) is determined initially. There is no need to rebuild the connected graph in each iteration, reducing the computation and memory overhead. Moreover, we experimentally find that keeping the same graph connectivity across all iterations is more stable.
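The iterative shrinking can be sketched as follows; the adjacency is built once on the initial seed positions and reused across the \(L\) iterations, as described above (the radii and toy seeds are hypothetical, and the details may differ from the paper's Algorithm 1):

```
import numpy as np

def bubble_shrinking(seeds, labels, radius_by_class, n_iters=4):
    """Move each seed to the mean of its fixed same-class bubble, n_iters times."""
    dist = np.linalg.norm(seeds[:, None, :] - seeds[None, :, :], axis=-1)
    radius = np.array([radius_by_class[c] for c in labels])
    adj = ((labels[:, None] == labels[None, :]) & (dist < radius[:, None])).astype(float)
    x = seeds.copy()
    for _ in range(n_iters):
        x = adj @ x / adj.sum(axis=1, keepdims=True)   # every seed is its own neighbour
    return x

seeds = np.array([[0.0, 0.0, 0.0], [0.4, 0.0, 0.0], [5.0, 0.0, 0.0], [5.3, 0.0, 0.0]])
labels = np.array([0, 0, 0, 0])
print(bubble_shrinking(seeds, labels, radius_by_class={0: 1.0}))
```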
**Point grouping.** For instance proposals, we perform point grouping upon the shifted sparse seed points \(X^{\prime}\). As shown in Fig. 1(d), we use the connected component labeling to group points to instances. Similar to bubble shrinking, the shifted points are viewed as vertices in a graph, and the semantic categories and distances determine the connectivity among vertices. The distance thresholds are empirically set to half of the minimum radius in bubble shrinking. Then we regard the connected component in the graph as an instance. Note that the points dominated by seed points in the same connected component share an instance ID. We define the grouped instance proposals as \(\{I_{i}\}_{i=1}^{O}\). \(O\) is the number of proposals.
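Grouping the shifted seeds then amounts to connected-component labelling on a class- and distance-constrained graph, e.g. via a depth-first search; points dominated by seeds in the same component would inherit that instance ID. The half-radius threshold follows the text, the rest is an illustrative sketch:

```
import numpy as np

def ccl_grouping(shifted, labels, radius_by_class):
    """Connected-component labelling of the shifted seed points."""
    n = shifted.shape[0]
    dist = np.linalg.norm(shifted[:, None, :] - shifted[None, :, :], axis=-1)
    thresh = np.array([0.5 * radius_by_class[c] for c in labels])   # half the BS radius
    adj = (labels[:, None] == labels[None, :]) & (dist < thresh[:, None])

    instance_id = np.full(n, -1)
    current = 0
    for start in range(n):
        if instance_id[start] != -1:
            continue
        stack = [start]                     # depth-first search over the graph
        while stack:
            i = stack.pop()
            if instance_id[i] != -1:
                continue
            instance_id[i] = current
            stack.extend(np.flatnonzero(adj[i] & (instance_id == -1)).tolist())
        current += 1
    return instance_id

shifted = np.array([[0.2, 0.0, 0.0], [0.2, 0.0, 0.0], [5.15, 0.0, 0.0], [5.15, 0.0, 0.0]])
print(ccl_grouping(shifted, np.zeros(4, dtype=int), {0: 1.0}))       # e.g. [0 0 1 1]
```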
### _Instance Aggregation_
Due to the effective shifting and grouping, our SIP can reduce fragmented instances. However, we observe that it still suffers from the over-segmentation problem when grouping big objects such as buses and trucks, due to the sparsity of the LiDAR point cloud, as shown in Fig. 2. To improve the performance on large objects, similar to GP-S3Net [35], we propose to aggregate instances by their instance affinities. Specifically, given the point set \(P_{i}\) and the point-wise features \(F_{i}\) of the \(i\)-th instance proposal, we utilize MLPs to enhance \(F_{i}\) with the positions \(P_{i}\) and use max pooling to aggregate the enhanced features into the global instance feature \(g_{i}\) of shape \([64]\). The procedure is formulated as follows:
\[g_{i}=\text{MaxPool}(\text{MLP}([F_{i},P_{i}])) \tag{1}\]
where \([\cdot]\) denotes concatenation.
We further apply a KNN-Transformer to implement the interactions among the global instance features \(\{g_{i}\}_{i=1}^{O}\). Let \(p_{i}=\text{AvgPool}(P_{i})\) be the center of the \(i\)-th instance. The indices of the \(K\) nearest neighbors of \(p_{i}\) are calculated based on their spatial locations. Then, we index the corresponding \(K\) instance features \(\{g_{j}\}_{j\in N(i)}\) according to the indices. Through linear transformations, \(g_{i}\) is mapped to \(q_{i}\) with shape \([64]\), and \((k_{i},v_{i})\) with shape \([K,64]\) are generated from \(\{g_{j}\}_{j\in N(i)}\). Afterwards, the similarity weights between \(q_{i}\) and \(k_{i}\) are computed and multiplied with \(v_{i}\) to obtain the features \(\hat{g}_{i}\):
\[\hat{g}_{i}=\text{softmax}(\frac{q_{i}\circ k_{i}}{\sqrt{C}})\circ v_{i} \tag{2}\]
where \(\circ\) denotes dot-product and \(C=64\).
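The following PyTorch-style sketch illustrates Eqs. (1)-(2): per-instance features are pooled into 64-d global descriptors and then refined by attending over the K nearest instance centers. The layer sizes beyond the stated 64-d width, the value of K, and the module names are illustrative rather than an exact description of the implementation.

```python
import torch
import torch.nn as nn

class InstanceEncoder(nn.Module):
    """Sketch of Eq. (1): g_i = MaxPool(MLP([F_i, P_i]))."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim + 3, 64), nn.ReLU(), nn.Linear(64, 64))

    def forward(self, point_feats, point_xyz):
        # point_feats: (N_i, feat_dim), point_xyz: (N_i, 3) for one proposal.
        x = self.mlp(torch.cat([point_feats, point_xyz], dim=-1))
        return x.max(dim=0).values            # (64,) global instance feature g_i

class KNNTransformer(nn.Module):
    """Sketch of Eq. (2): refine each g_i by attending over its K nearest neighbors."""
    def __init__(self, dim=64, k=8):
        super().__init__()
        self.q_proj, self.k_proj, self.v_proj = (nn.Linear(dim, dim) for _ in range(3))
        self.dim, self.k = dim, k

    def forward(self, g, centers):
        # g: (O, dim) global instance features, centers: (O, 3) instance centers.
        k = min(self.k, g.shape[0])
        idx = torch.cdist(centers, centers).topk(k, largest=False).indices   # (O, K)
        q = self.q_proj(g)                                                   # (O, dim)
        k_feats, v_feats = self.k_proj(g)[idx], self.v_proj(g)[idx]          # (O, K, dim)
        attn = torch.softmax((q.unsqueeze(1) * k_feats).sum(-1) / self.dim ** 0.5, dim=-1)
        return (attn.unsqueeze(-1) * v_feats).sum(dim=1)                     # (O, dim) refined g_i
```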
The enhanced global instance features \(\{\hat{g}_{i}\}_{i=1}^{O}\) are exploited to calculate the instance affinities. The instance affinity \(s_{i,j}\) of proposals \(i\) and \(j\) is computed according to their global features and the distance of instance centers. The procedure can be formulated as follows:
\[s_{i,j}=\text{sigmoid}[\text{MLP}([\hat{g}_{i},\hat{g}_{j},|p_{i}-p_{j}|])] \tag{3}\]
which is supervised by binary cross-entropy loss:
\[L_{aff}=-\sum_{i,j}[y_{i,j}\log(s_{i,j})+(1-y_{i,j})\log(1-s_{i,j})] \tag{4}\]
where \(y_{i,j}\) is set to 1 if proposals \(i\) and \(j\) share the same instance ID and 0 otherwise. Note that the instance ID for each proposal is determined by majority voting. In the inference stage, we merge two instances if their affinity exceeds a certain threshold. Similar to the point grouping in Sec. III-B, the merging procedure can also be implemented efficiently by the CCL algorithm.
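A sketch of how Eqs. (3)-(4) and the inference-time merge could be wired together is given below; the affinity-head layout and threshold are placeholders, the scalar Euclidean distance is one possible reading of \(|p_i - p_j|\), and connected components again stands in for the CCL step.

```python
import torch
import torch.nn as nn
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

class AffinityHead(nn.Module):
    """Sketch of Eq. (3): pairwise affinities from refined features and center distances."""
    def __init__(self, dim=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim + 1, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, g_hat, centers):
        O = g_hat.shape[0]
        gi = g_hat.unsqueeze(1).expand(O, O, -1)
        gj = g_hat.unsqueeze(0).expand(O, O, -1)
        dist = torch.cdist(centers, centers).unsqueeze(-1)
        return torch.sigmoid(self.mlp(torch.cat([gi, gj, dist], dim=-1))).squeeze(-1)

def affinity_loss(s, y):
    # Eq. (4): binary cross-entropy over all proposal pairs.
    return nn.functional.binary_cross_entropy(s, y.float())

def merge_proposals(s, threshold=0.5):
    # Inference: merge proposals whose affinity exceeds the threshold via connected components.
    adj = (s > threshold).cpu().numpy()
    return connected_components(csr_matrix(adj), directed=False)[1]
```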
## IV Experiments
We conduct extensive experiments on two large-scale datasets SemanticKITTI [1] and nuScenes [2] to evaluate our PANet.
### _Datasets and Metrics_
**SemanticKITTI.** SemanticKITTI [1], a large-scale dataset for autonomous driving, is derived from the KITTI odometry dataset [39], which collects 22 data sequences (10 for training, 11 for testing, and 1 for validation) with a 64-beams LiDAR sensor. It is the first dataset that presents
Fig. 2: Failure cases where the bus is over-segmented by the SIP module. The IA module can merge the fragmented instances effectively.
the challenge of LiDAR-based panoptic segmentation. SemanticKITTI provides annotated point-wise labels in 20 classes for segmentation tasks, among which 8 classes are defined as things classes, and the rest are stuff classes.
**NuScenes.** The other large-scale dataset, nuScenes [2], also releases the challenge of panoptic segmentation. It contains a wide diversity of urban scenes, including 850 scenes for training and validation and 150 for testing. nuScenes uses a 32-beam LiDAR sensor and creates annotations at 2 Hz, with 6 stuff classes and 10 thing classes out of 16 semantic classes.
**Metrics.** As defined in [40], we take Panoptic Quality (PQ), Segmentation Quality (SQ), and Recognition Quality (RQ) as the evaluation metrics. The metrics for things and stuff are calculated separately, i.e., \(\text{PQ}^{\text{Th}}\text{,SQ}^{\text{Th}}\text{,RQ}^{\text{Th}}\) and \(\text{PQ}^{\text{St}}\text{,SQ}^{\text{St}}\text{,RQ}^{\text{St}}\). In addition, we also report \(\text{PQ}^{\dagger}\), defined by swapping the PQ of each stuff class to its IoU and then averaging over all classes. Mean IoU (mIoU) is also adopted to evaluate the quality of semantic segmentation.
### _Implementation Details_
The voxelization space for the backbone is limited to \([[\pm 48],[\pm 48],[-3,1.8]]\) and the voxelization resolution is set to \([0.2,0.2,0.1]\) in meters. During training, similar to [9], we apply data augmentation such as global scaling and random flipping on the input points of both datasets. Our model is trained for 50 epochs following [26] for semantic segmentation and another 10 epochs for the instance aggregation module with a total batch size of 8 on 2 NVIDIA RTX 3090 GPUs. The minimum radius in the bubble-shrinking module is empirically given according to the average size of things in each class. Moreover, we use per-category histogram thresholding to determine the merging threshold in the instance aggregation module.
### _Comparison with the State-of-the-art._
**Quantitative Results.** Table I shows that PANet outperforms all baseline methods on the SemanticKITTI validation set by a large margin. PANet performs slightly lower than SCAN on mIoU while surpassing SCAN by 4.5% in terms of PQ. PANet also achieves competitive performance on the SemanticKITTI test set, as shown in Table II. For example, PANet outperforms the clustering-based methods Panoptic-PolarNet, DS-Net, and EfficientLPS by 6.4%, 4.6% and 6.6% in terms of \(\text{PQ}^{\text{Th}}\). Notably, the SIP module in our PANet requires no extra training. However, PANet still performs best on validation and outperforms most clustering methods on the test split, demonstrating the effectiveness of our methods. We also evaluate PANet on nuScenes validation. The results are presented in Table III. Our approach achieves state-of-the-art performance on all reported metrics, which confirms the advantages of our PANet.
**Qualitative Results.** We visualize the results of our PANet and some LPS methods (DSNet [9], and Panoptic-PolarNet [10]) with released codes on the SemanticKITTI test set to validate our PANet. As shown in Fig. 3, our PANet performs better than DSNet and Panoptic-PolarNet on both crowded scenes (upper part of the figure) and large object segmentation such as trucks and buses (bottom half).
### _Ablation Study_
We conduct ablation studies on network components, clustering algorithms, and sampling algorithms on SemanticKITTI validation. The running time is tested on a single NVIDIA 1080 TI.
**Ablation on network components.** We analyze the effects of the 2D features, Sparse Instance Proposal (SIP) module, and Instance Aggregation (IA) module. Table IV shows the detailed ablation studies. When taking the balanced point-sampling (BPS) and CCL-based point grouping (PG) for the instance proposal (line 1), our model has already achieved good performance, demonstrating the effectiveness of the combination of our BPS and CCL-based point grouping. Moreover, the bubble shrinking (BS) can further boost the performance by 1.5% on \(\text{PQ}^{\text{Th}}\) (line 1 vs. line 2). Besides, the 2D features bring a gain of 0.6% on PQ and 0.4% on mIoU (line 2 vs. line 3). Our IA module also improves the performance by 0.5% on \(\text{PQ}^{\text{Th}}\) (line 3 vs. line 4). As shown in Table V, IA presents a significant quality improvement on large objects such as Truck (+3.1% on PQ), which shows its effectiveness in aggregating the over-segmented large objects.
We also provide another baseline to show the effectiveness of our non-learning SIP module. The baseline denoted as "sem + MeanShift" comes from the combination of semantic predictions and MeanShift [38] clustering on raw points of things, as shown in Table I. Compared to the baseline, our SIP brings a significant improvement (+3.8% on PQ).
**Ablation on clustering algorithms.** We compare our PANet (the scheme of SIP attached with the IA) with three widely used clustering algorithms: MeanShift [38], DBScan
[45], and HDBScan [46]. We also give the results of the Dynamic Shift (DS) module [9]. Following common practices, we add an offset head composed of three linear layers to predict the point-wise offset vector to the instance center. And these clustering algorithms group instances upon the shifted points of things. The results are presented in Table I and show that our methods surpass all listed clustering methods in PQ accuracy. Significantly our PANet outperforms the DS module by 1.6% in terms of PQ. Notably, the performance of MeanShift clustering without the offset branch decreases by 2.1% in terms of PQ, which further clarifies our claim in Sec. I. Furthermore, the model equipped with only our SIP achieves 61.5% in terms of PQ and performs better than those baselines which exploit offset head for the geometric shift. We also provide a detailed comparison between our SIP and DS modules in Table VI, showing that our model performs better in both accuracy and speed (\(\sim 13\times\) faster). Notably, our SIP is non-learning and can be extended to other backbones and datasets in a plug-and-play manner.
**Ablation on sampling algorithms.** We compare our BPS with FPS and random sampling; for the latter two, K-NN is applied to assign the corresponding points to the seed points. For convenience, we test the running time of SIP modules equipped with different sampling algorithms to provide an efficiency comparison. The results are listed in Table VII and show that our BPS achieves accuracy comparable with FPS and has the lowest latency. For example, BPS outperforms random sampling by 0.9% in terms of \(\text{PQ}^{\text{Th}}\) and is 3 times faster than FPS. It is noteworthy that random sampling is slower than our BPS. The reason lies in the K-NN assignment, which brings additional computation overhead.
## V Conclusion
In this paper, we propose a LiDAR panoptic segmentation framework named PANet to avoid the dependency on the offset branch and the over-segmentation problem on large objects. PANet devises a non-learning Sparse Instance Proposal (SIP) module to directly and efficiently group thing points into instances from the raw point cloud. SIP does not require extra training and can be easily extended to other backbones and datasets in a plug-and-play manner. Moreover, the Instance Aggregation (IA) module is proposed to integrate the fragmented instances, which complements the SIP module and improves the completeness of segmentation on large objects. Our method achieves state-of-the-art performance among published works on both the SemanticKITTI and nuScenes benchmarks.
## Acknowledgment
This work was supported by NSFC 62088101 Autonomous Intelligent Unmanned Systems.
|
2305.14506 | Confidence Sets for Causal Orderings | Causal discovery procedures aim to deduce causal relationships among
variables in a multivariate dataset. While various methods have been proposed
for estimating a single causal model or a single equivalence class of models,
less attention has been given to quantifying uncertainty in causal discovery in
terms of confidence statements. A primary challenge in causal discovery of
directed acyclic graphs is determining a causal ordering among the variables,
and our work offers a framework for constructing confidence sets of causal
orderings that the data do not rule out. Our methodology specifically applies
to identifiable structural equation models with additive errors and is based on
a residual bootstrap procedure to test the goodness-of-fit of causal orderings.
We demonstrate the asymptotic validity of the confidence set constructed using
this goodness-of-fit test and explain how the confidence set may be used to
form sub/supersets of ancestral relationships as well as confidence intervals
for causal effects that incorporate model uncertainty. | Y. Samuel Wang, Mladen Kolar, Mathias Drton | 2023-05-23T20:23:42Z | http://arxiv.org/abs/2305.14506v2 | # Confidence sets for causal orderings
###### Abstract.
Causal discovery procedures aim to deduce causal relationships among variables in a multivariate dataset. While various methods have been proposed for estimating a single causal model or a single equivalence class of models, less attention has been given to quantifying uncertainty in causal discovery in terms of confidence statements. The primary challenge in causal discovery is determining a causal ordering among the variables. Our research offers a framework for constructing confidence sets of causal orderings that the data do not rule out. Our methodology applies to structural equation models and is based on a residual bootstrap procedure to test the goodness-of-fit of causal orderings. We demonstrate the asymptotic validity of the confidence set constructed using this goodness-of-fit test and explain how the confidence set may be used to form sub/supersets of ancestral relationships as well as confidence intervals for causal effects that incorporate model uncertainty.
## 1. Introduction
Inferring causal relations as opposed to mere associations is a problem that is not only of intrinsic scientific interest but also helps predict how an observed system might change under intervention (Peters et al., 2017). When randomized controlled trials are infeasible, methods for _causal discovery_--the problem of estimating a causal model from observational data--become valuable tools for hypothesis generation and acceleration of scientific progress. Examples of applications include systems biology (Sachs et al., 2005), neuroscience (Shen et al., 2020), and climate modeling (Nowack et al., 2020).
Causal models may be represented as a _directed acyclic graph_ (DAG), and--leveraging this representation--causal discovery can be cast as recovering the appropriate DAG. The first step in causal discovery is _identification_: determining appropriate assumptions under which the causal model can be recovered from population information; see, e.g., Shimizu et al. (2006); Loh and Buhlmann (2014); Peters et al. (2014). The next step is providing a method to _estimate_ the causal graph from data; see, e.g., Buhlmann et al. (2014); Chen et al. (2019); Wang and Drton (2020b). Once an estimation procedure is established, it is natural to question the estimation _uncertainty_. Uncertainty quantification and the ability to test identifying assumptions are essential for trustworthy estimation of causal graphs and help to determine whether key modeling assumptions are appropriate. Nonetheless, the literature on frequentist causal discovery, with a few exceptions (e.g., Strobl et al., 2019), only outputs a point estimate in the form of a DAG or single equivalence class.
Given a causal ordering of the variables, causal discovery reduces to variable selection in a sequence of regressions. Thus, the key difficulty lies in inferring the causal ordering; this motivates the issue we address in this paper: developing a procedure that provides a confidence set for causal orderings.
### Setup
We represent a causal model for the random vector \(Y=(Y_{1},\ldots,Y_{p})\) with a DAG \(G=(V,E)\), where each node \(v\) in the vertex set \(V=[p]\) indexes a random variable \(Y_{v}\). An edge \(u\to v\in E\) indicates that \(Y_{u}\) has a direct causal effect on \(Y_{v}\), and we say that \(u\) is a _parent_ of its _child_\(v\). If there exists a directed path in \(G\) from \(u\) to \(v\), then \(u\) is an _ancestor_
of its _descendant_\(v\). We denote the sets of parents, children, ancestors, and descendants of node \(v\) by \(\operatorname{pa}(v)\), \(\operatorname{ch}(v)\), \(\operatorname{an}(v)\), and \(\operatorname{de}(v)\), respectively. The models we consider take the form of a recursive structural equation model (SEM) with additive noise:
\[Y_{v}=f_{v}\left((Y_{u})_{u\in\operatorname{pa}(v)}\right)+\varepsilon_{v}, \qquad v\in V, \tag{1.1}\]
where the \(f_{v}\) are unknown and the errors \(\{\varepsilon_{v}\}_{v=1}^{p}\) are mean zero and mutually independent.
In a fully general SEM, the DAG may only be identified from observational data up to a Markov equivalence class--a collection of graphs that imply the same set of conditional independence relations (Spirtes et al., 2000). As the different graphs in the equivalence class may have contradicting causal interpretations, it is also of interest to work with restricted SEMs in which the DAG itself becomes identifiable (Maathuis et al., 2019, Chap. 18.6.3). Specifically, for the model in (1.1), the DAG becomes identifiable when \(f_{v}\) are non-linear or the errors \(\varepsilon_{v}\) non-Gaussian. In contrast, the linear Gaussian case, also allowed under (1.1), features the same Markov equivalence classes as the general nonparametric model.
We focus on a causal ordering for the variables in the model given by DAG \(G\); i.e., a total ordering of \(V\) where variables that appear later have no causal effect on earlier variables. We may identify each possible ordering with a permutation \(\theta:V\to V\), where \(\theta\) yields a causal ordering for \(G\) if and only if \(\theta(u)<\theta(v)\) implies that \(v\not\in\operatorname{an}(u)\). In general, a causal ordering is not unique, and, letting \(\mathcal{S}_{V}\) be the set of all permutations of \(V\), we denote the set of all causal orderings \(\Theta(G)=\left\{\theta\in\mathcal{S}_{V}\,:\,\theta(u)<\theta(v)\text{ only if }v\not\in\operatorname{an}(u)\right\}.\)
### Contribution
Let \(\mathbf{Y}\) be a sample drawn from the SEM in (1.1), and let \(\alpha\in(0,1)\). We propose a procedure that constructs a \(1-\alpha\)_confidence set of causal orderings_, \(\hat{\Theta}(\mathbf{Y},\alpha)\), where \(\hat{\Theta}(\mathbf{Y},\alpha)\subseteq\mathcal{S}_{V}\). Specifically, our procedure inverts a goodness-of-fit test for a causal ordering and returns the set of all orderings that are not rejected by the test. Thus, for any \(\theta\in\Theta(G)\):
\[\lim_{n\to\infty}P\left(\theta\in\hat{\Theta}(\mathbf{Y},\alpha)\right)\geq 1 -\alpha. \tag{1.2}\]
It follows that if \(G\) has a unique causal ordering (i.e., \(|\Theta(G)|=1\)), then \(\hat{\Theta}(\mathbf{Y},\alpha)\) contains that causal ordering with asymptotic probability at least \(1-\alpha\). If \(|\Theta(G)|>1\), then our procedure still infers a valid causal ordering--i.e., \(\Theta(G)\cap\hat{\Theta}(\mathbf{Y},\alpha)\neq\emptyset\)--with asymptotic probability at least \(1-\alpha\).
The confidence set \(\hat{\Theta}(\mathbf{Y},\alpha)\) provides a set of orderings that are not excluded by the data. Different elements of \(\hat{\Theta}(\mathbf{Y},\alpha)\) suggest different causal orderings which may, but do not have to, lead to different causal conclusions; we elaborate on this point in Section 4.3. The set \(\hat{\Theta}(\mathbf{Y},\alpha)\) being large cautions the analyst against overconfidence in a specific estimated ordering. In contrast, if \(\hat{\Theta}(\mathbf{Y},\alpha)\) is small, few causal orderings are compatible with the data under the considered model class. This latter aspect is crucial because \(\hat{\Theta}(\mathbf{Y},\alpha)\) may also be empty, indicating that the model class does not capture the data-generating process.
Furthermore, \(\hat{\Theta}(\mathbf{Y},\alpha)\) can be post-processed to form other useful objects. Most notably, similar to the problem studied by Strieder et al. (2021), we may form confidence intervals for causal effects that also incorporate model uncertainty. In addition, \(\hat{\Theta}(\mathbf{Y},\alpha)\) yields a sub/superset of the true ancestral relationships with some user-defined probability.
Our framework takes a straightforward approach based on goodness-of-fit tests. However, realizing this idea presents significant challenges, and we construct our procedure with careful attention to both statistical and computational aspects. Specifically, our methodology is built using computationally attractive tests for regression models with asymptotic validity under \(p=o(n)\), where \(p\) is the number of variables and \(n\) is the sample size. Computationally,
we devise the statistical decisions so that we can use a branch-and-bound type procedure to handle problems at a moderate but challenging scale. Despite prioritizing computational tractability, the procedure is asymptotically valid when allowing \(p\) to grow with \(n\), and we establish the asymptotic validity of the confidence set when \(p^{4}=o(n)\).
To motivate \(\hat{\Theta}(\mathbf{Y},\alpha)\) as an object of interest, we preview the analysis in Section 6.3 of daily stock returns for \(12\) industry portfolios. DirectLiNGAM (Shimizu et al., 2011) gives a point estimate of the causal ordering where the Utilities industry is first and causally precedes the other \(11\) industries. The set \(\hat{\Theta}(\mathbf{Y},.05)\) contains approximately \(1/15,000\) of the \(12\)! possible total orderings, and indeed Utilities is first in every ordering in the confidence set. Nonetheless, many orderings in \(\hat{\Theta}(\mathbf{Y},\alpha=.05)\)--i.e., those not rejected by the data--have other causal implications which differ from the point estimate. As shown in Section 6.3, the point estimate is quite far from the Frechet mean of \(\hat{\Theta}(\mathbf{Y},\alpha=.05)\). Finally, in the estimated causal ordering, Manufacturing precedes Chemicals, so a naive analysis would conclude that the total effect of Chemicals onto Manufacturing is \(0\). In contrast, when accounting for model uncertainty, we produce a \(90\%\) confidence interval for the total effect of Chemicals onto Manufacturing of \(\{0\}\cup(0.101,0.909)\cup(0.973,1.101)\).
### Related work
Previous work on uncertainty in causal discovery predominantly focuses on specific parameters within a causal model, rather than uncertainty across the entire model selection procedure. In linear SEMs with equal error variances, Jankova and van de Geer (2019) provide confidence intervals for the linear coefficients and Li et al. (2019) test for absence of edges; Shi et al. (2021) consider the same problem for more general additive noise models. However, this work either assumes a causal ordering is known or requires accurate estimation of a causal ordering to properly calibrate the test. Thus, they are poorly suited for our setting of interest: where the "signal strength" is small or modeling assumptions may be violated. In contrast, Strieder et al. (2021) focus on the equal variance case with bivariate data and form confidence intervals for causal effects that account for model uncertainty.
A confidence set of models has previously been proposed in work such as Hansen et al. (2011) and Lei (2020), who consider a set of candidate models and remove all models determined to be "strictly worse" than any other candidate in the set. In contrast, Ferrari and Yang (2015) and Zheng et al. (2019) form confidence sets by including all models that are not rejected when compared to some saturated model.
As an intermediate step, our framework requires a goodness-of-fit test for regression models. This is a classical problem (Breusch and Pagan, 1979; Cook and Weisberg, 1983) that has seen renewed interest in recent work such as Sen and Sen (2014), Shah and Buhlmann (2018), Berrett and Samworth (2019), and Schultheiss et al. (2023). In principle, it is possible to adopt any of these existing procedures into our proposed framework; however, the high computational cost renders them unusable in all but the smallest problems. Thus, we propose a specific new test that possesses both statistical and computational properties that are particularly advantageous for our goal of targeting causal orderings, which requires us to test a very large number of regression models. We provide a detailed comparison of our proposal and existing work in Section 3 after describing our procedure.
There is a large literature on testing model fit for a specific SEM, particularly in the linear case. Testing model fit is then classically done by comparing empirical and model-based covariances (Bollen and Long, 1993). However, in some settings, as discussed in Section 1.1, a unique graph may be identified, but simply comparing covariances will fail to falsify graphs in the same Markov equivalence class. Furthermore, the models we consider do not constrain covariances and thus require alternative approaches.
We note that, by their very nature, Bayesian approaches also quantify uncertainty for causal structures and have seen numerous computational advances, e.g., by focusing on causal orderings (Friedman and Koller, 2003; Niinimaki et al., 2016; Kuipers and Moffa, 2017). This said, nearly all Bayesian causal discovery procedures focus on cases where the graph can only be identified up to an equivalence class. The few exceptions--e.g., Hoyer and Hyttinen (2009); Shimizu and Bollen (2014); Chang et al. (2022)--require specifying a likelihood for the data, rather than adopting the semi-parametric approach that we employ. At a more fundamental level, credible intervals differ conceptually from confidence regions that are our focus; especially, since in a complex model selection problem as we consider, there is no Bernstein-von Mises connection between the two concepts.
### Outline
In Section 2, we give background on causal discovery. In Section 3, we propose a computationally attractive goodness-of-fit test for a single causal regression and show in Section 4 that it can be used to test a causal ordering and form the confidence set \(\hat{\Theta}(\mathbf{Y},\alpha)\). We establish theoretical guarantees in Section 5 and examine empirical performance in Section 6.
## 2. Background on causal discovery
For expository simplicity, we initially focus on the linear SEM in (1.1) where each \(f_{v}\) is linear. Thus, assuming zero means,
\[Y_{v}=\sum_{u\in\mathrm{pa}(v)}\beta_{v,u}Y_{u}+\varepsilon_{v},\qquad v\in V. \tag{2.1}\]
Collecting \(\varepsilon=(\varepsilon_{v}:v\in V)\) and letting \(B\in\mathbb{R}^{p\times p}\) denote the matrix of causal effects where \(B_{v,u}=\beta_{v,u}\) if \(u\in\mathrm{pa}(v)\) and \(B_{v,u}=0\) if \(u\not\in\mathrm{pa}(v)\), we have the multivariate model \(Y=BY+\varepsilon\). We use \(Y_{U}=(Y_{u}:u\in U)\) to denote the sub-vector corresponding to the elements in \(U\). We use bold font to denote the collection of \(i=1,\ldots,n\) observations; i.e., \(Y_{v,i}\) denotes the \(i\)th observation of the \(v\)th variable and \(\mathbf{Y}=(Y_{v,i}:i\in[n],v\in[p])\in\mathbb{R}^{n\times p}\) and \(\mathbf{Y}_{v}=(Y_{v,i}:i\in[n])\in\mathbb{R}^{n}\). When we pass sets of observations to a function, it should be interpreted as the function applied to each observation; i.e., \(h(\mathbf{Y}_{v})=(h(Y_{v,i}):i\in[n])\) and \(g(\mathbf{Y}_{U})=(g(Y_{U,i}):i\in[n])\).
For linear SEMs, Shimizu et al. (2006) show that the exact graph can be identified when the errors, \(\varepsilon_{v}\), are mutually independent and non-Gaussian. The identification result relies on the following key observation. Let \(\eta_{v\setminus U}\) denote the residuals when \(Y_{v}\) is regressed--using population values--onto a set of variables \(Y_{U}\). If \(U\) contains all the parents of \(v\) but no descendants (i.e., all variables that have a direct causal effect on \(v\) and no variables that are caused directly or indirectly by \(v\)), then the residuals resulting from the population regression are independent of the regressors. Thus, the hypothesis in (2.2) implies the hypothesis in (2.3) where \(\mathrm{nd}(v)=V\setminus\{v\cup\mathrm{de}(v)\}\) denotes the non-descendants of \(v\):
\[H_{0}:\mathrm{pa}(v)\subseteq U\subseteq\mathrm{nd}(v), \tag{2.2}\] \[H_{0}:\eta_{v\setminus U}\perp\!\!\!\perp Y_{U}. \tag{2.3}\]
If \(U\) contains a descendant of \(v\), \(\eta_{v\setminus U}\) will still be uncorrelated with \(Y_{U}\). However, if the errors are non-Gaussian then \(\eta_{v\setminus U}\not\perp\!\!\!\perp Y_{U}\), except when \(B\) takes specific pathological values. Thus, testing the independence of residuals and regressors may falsify the hypothesis in (2.3) and subsequently (2.2).
A simple bivariate case is given in Figure 1, where the correct model is \(Y_{1}\to Y_{2}\). When viewing the raw data (left plot), no specific causal relation is immediately apparent. However, in the middle plot, we have identified the correct model \((Y_{1}\to Y_{2})\), and the residuals when
regressing \(Y_{2}\) onto \(Y_{1}\) are independent of the regressor, \(Y_{1}\). On the right-hand side, we have posited the incorrect model \(Y_{2}\to Y_{1}\). When regressing \(Y_{1}\) onto \(Y_{2}\), the residuals remain uncorrelated with \(Y_{2}\), but are no longer independent of \(Y_{2}\).
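This mechanism is easy to reproduce numerically. The small simulation below, with illustrative coefficient and error choices of our own, regresses each variable on the other and evaluates a simple nonlinear moment of regressor and residual; it is a toy check in the spirit of the figure, not the calibrated test developed in Section 3.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
# Linear SEM with non-Gaussian (centered exponential) errors: Y1 -> Y2.
eps1, eps2 = rng.exponential(1.0, n) - 1.0, rng.exponential(1.0, n) - 1.0
y1 = eps1
y2 = 0.8 * y1 + eps2

def residual_moment(x, y):
    """Regress y on x with an intercept and report mean(x**2 * residual),
    a simple nonlinear check that vanishes when the residual is independent of x."""
    X = np.column_stack([np.ones_like(x), x])
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return np.mean(x ** 2 * resid)

print(residual_moment(y1, y2))  # causal direction: approximately 0
print(residual_moment(y2, y1))  # anti-causal direction: noticeably nonzero
```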
Of course, we typically do not have access to population values, and the linear coefficients are nuisance parameters that need to be estimated before conducting an independence test. Wang and Drton (2020) show--in the linear non-Gaussian SEM setting--consistent recovery of the graph is still possible when estimating the nuisance parameters, even in the high-dimensional setting. Thus, to test (2.3) from data, one might naively use least squares regression and directly test whether the residuals, \(\hat{\eta}_{v\setminus U}=Y_{v}-\hat{\beta}Y_{U}\), are independent of \(Y_{U}\). Unfortunately, even when the null hypothesis holds, \(\hat{\eta}_{v\setminus U}=\varepsilon_{v}+(\beta-\hat{\beta})Y_{U}\) so \(\hat{\eta}_{v\setminus U}\not\perp Y_{U}\) and the naive test does not control the Type I error rate. Example A.1 in the appendix provides an illustration. A more careful approach is required for a valid test of (2.3), and this problem has been previously addressed, e.g., by Sen and Sen (2014). In Section 3, we discuss a procedure that is particularly suited for our setting and contrast our approach with existing procedures.
## 3. Goodness-of-fit for regression
We now propose a procedure for testing the null hypothesis in (2.3) as a proxy for (2.2); this test will be used as a building block for testing causal orderings as described in Section 4. We first describe how this can be done in the linear SEM setting and then generalize the procedure when \(f_{v}\) may be non-linear.
### Residual bootstrap test for linear models
For some \(v\in V\) and set \(U\subseteq V\setminus v\), let \(b_{v,U}=\arg\min_{\hat{b}}\mathbb{E}\left([Y_{v}-\hat{b}^{T}Y_{U.1}]^{2}\right)\) where \(Y_{U.1}\) denotes the random vector \(Y_{U}\) augmented by a term for the intercept, and let \(\eta_{v\setminus U}=Y_{v}-b_{v,U}^{T}Y_{U.1}\); i.e., \(b_{v,U}\) is the population regression coefficient and \(\eta_{v\setminus U}\) is the resulting residual. The quantity \(b_{v,U}\) and random variable \(\eta_{v\setminus U}\) are well defined for all \(U\) and \(v\), and when \(\operatorname{pa}(v)\subseteq U\subseteq\operatorname{nd}(v)\) then \(b_{v,U}\) coincides with the causal parameters--i.e., \(b_{v,U}=\beta_{v,U}\)--and \(\eta_{v\setminus U}=\varepsilon_{v}\). Given data, we denote the population residuals as \(\boldsymbol{\eta}_{v\setminus U}=(\eta_{v\setminus U,1},\ldots,\eta_{v\setminus U,n})\). Furthermore, let \(\hat{b}_{v,U}\) be regression coefficients estimated from sample moments, let \(\hat{f}_{v}(\mathbf{Y}_{U})=\mathbf{Y}_{U.1}\hat{b}_{v,U}\), and let \(\boldsymbol{\hat{\eta}}_{v\setminus U}=\mathbf{Y}_{v}-\hat{f}_{v}(\mathbf{Y}_{U})\) denote the residuals calculated using \(\hat{b}_{v,U}\).
Figure 1. Left: Raw data. Middle/Right: Residuals from regressing the posited child onto the posited parent for the correct model (middle) and the incorrect model (right).
Our test will require a set of functions, \(\mathcal{H}=\{h_{j}\}_{j=1}^{J}\), which we refer to as _test functions_; these are selected by the analyst and we give practical guidance below. Let
\[\tau_{j}(\mathbf{Y}_{v},u,U;\mathbf{Y})=\frac{1}{\sqrt{n}}h_{j}(\mathbf{Y}_{u})^ {T}\boldsymbol{\hat{\eta}}_{v\setminus U}\quad\text{ and }\quad\tau(\mathbf{Y}_{v},u,U;\mathbf{Y})=\left| \{\tau_{j}(\mathbf{Y}_{v},u,U;\mathbf{Y})\}_{j\in[J]}\right|_{2}. \tag{3.1}\]
Finally, the test statistic aggregates \(\tau(\mathbf{Y}_{v},u,U;\mathbf{Y})\) across all \(u\in U\). Specifically, let
\[T_{1}(\mathbf{Y}_{v},U;\mathbf{Y})=\frac{1}{|U|}\sum_{u\in U}|\tau(\mathbf{Y}_ {v},u,U;\mathbf{Y})|\quad\text{ and }\quad T_{2}(\mathbf{Y}_{v},U;\mathbf{Y})=\sqrt{\sum_{u\in U} \frac{1}{\sqrt{|U|}}\left(\tau(\mathbf{Y}_{v},u,U;\mathbf{Y})\right)^{2}}. \tag{3.2}\]
When making statements which refer to both \(T_{1}\) and \(T_{2}\) we will sometimes omit the subscript and simply use \(T\).
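A minimal NumPy sketch of the statistics in (3.1)-(3.2) is given below, using the test functions \(\mathcal{H}=\{y^{2},\mathrm{sign}(y)|y|^{2.5},y^{3}\}\) mentioned later in the text; the function names and intercept handling are illustrative.

```python
import numpy as np

TEST_FUNCS = [lambda y: y ** 2,
              lambda y: np.sign(y) * np.abs(y) ** 2.5,
              lambda y: y ** 3]

def regression_test_stats(Yv, YU):
    """Compute tau(Y_v, u, U) for each u in U and the aggregates T_1, T_2 of Eq. (3.2)."""
    n, m = YU.shape
    X = np.column_stack([np.ones(n), YU])                    # regressors with an intercept
    resid = Yv - X @ np.linalg.lstsq(X, Yv, rcond=None)[0]   # hat-eta_{v \ U}
    # tau[u, j] = h_j(Y_u)^T resid / sqrt(n), as in Eq. (3.1)
    tau = np.array([[h(YU[:, u]) @ resid / np.sqrt(n) for h in TEST_FUNCS]
                    for u in range(m)])
    tau_u = np.linalg.norm(tau, axis=1)                      # ||{tau_j}_j||_2 for each u
    T1 = np.mean(tau_u)                                      # tau_u is already nonnegative
    T2 = np.sqrt(np.sum(tau_u ** 2) / np.sqrt(m))
    return T1, T2
```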
By the first order conditions of least squares regression, \(\boldsymbol{\hat{\eta}}_{v\setminus U}\) is always uncorrelated with \(\mathbf{Y}_{U}\). Thus, if \(h_{j}\) is a linear function, \(\tau_{j}(\mathbf{Y}_{v},u,U;\mathbf{Y})=0\). Furthermore, in multivariate Gaussians, uncorrelated is equivalent to independent, so for linear Gaussian SEMs, \(\eta_{v\setminus U}\perp\!\!\!\perp Y_{U}\) and (2.3) always hold regardless of whether (2.2) holds. We note, however, that this inability to falsify (2.2) is not specific to our approach, but intrinsic to the non-identifiability of linear Gaussian SEMs as previously mentioned. In either of these cases, the Type I error rate of our proposed procedure will be preserved, but the test will have trivial power.
When \(h_{j}\) is non-linear and the errors are non-Gaussian, we may use \(\tau_{j}(\mathbf{Y}_{v},u,U;\mathbf{Y})\) (and subsequently \(T\)) to assess whether \(\mathbf{Y}_{u}\) and \(\boldsymbol{\eta}_{v\setminus U}\) are truly independent (not just uncorrelated) and ultimately test the hypothesis in (2.2). Under the null hypothesis, \(\tau_{j}(\mathbf{Y}_{v},u,U;\mathbf{Y})\) has mean zero, because \(\varepsilon_{v}\perp\!\!\!\perp Y_{U}\), and
\[\tau_{j}(\mathbf{Y}_{v},u,U;\mathbf{Y})=\frac{1}{\sqrt{n}}h_{j}(\mathbf{Y}_{u} )^{T}\boldsymbol{\hat{\eta}}_{v\setminus U}=\frac{1}{\sqrt{n}}h_{j}(\mathbf{Y }_{u})^{T}[I-\mathbf{Y}_{U.1}(\mathbf{Y}_{U.1}^{T}\mathbf{Y}_{U.1})^{-1} \mathbf{Y}_{U.1}]\boldsymbol{\varepsilon}_{v}. \tag{3.3}\]
However, when the null hypothesis in (2.2) does not hold, the population regression coefficients are generally not equal to the causal coefficients such that \(b_{v,U}\neq\beta_{v,U}\). Letting \(U^{\prime}=\mathrm{pa}(v)\cup U\), with a slight abuse of notation, we define \(b_{v,U^{\prime}}=(b_{v,U})_{u}\) if \(u\in U\) and \(0\) otherwise, and similarly let \(\beta_{v,U^{\prime}}=(\beta_{v,U})_{u}\) if \(u\in\mathrm{pa}(v)\) and \(0\) otherwise. Then, \(\eta_{v\setminus U}=\varepsilon_{v}+(\beta_{v,U^{\prime}}-b_{v,U^{\prime}})^{T }Y_{U^{\prime}.1}\) and \(\frac{1}{\sqrt{n}}h_{j}(\mathbf{Y}_{u})^{T}\boldsymbol{\hat{\eta}}_{v\setminus U}\) is equal to
\[\frac{1}{\sqrt{n}}\left(h_{j}(\mathbf{Y}_{u})^{T}\boldsymbol{\eta}_{v\setminus U }-\mathbb{E}(h_{j}(Y_{u})^{T}\eta_{v\setminus U})\right)+\frac{1}{\sqrt{n}}h_ {j}(\mathbf{Y}_{u})^{T}\mathbf{Y}_{U^{\prime}.1}[b_{v,U^{\prime}}-\hat{b}_{v, U^{\prime}}]+\sqrt{n}\mathbb{E}(h_{j}(Y_{u})^{T}\eta_{v\setminus U}). \tag{3.4}\]
The first term is mean \(0\), and the second term is asymptotically mean \(0\) when \(p\) is fixed and \(n\) grows. However, for linear SEMs with non-Gaussian errors, we may select a non-linear \(h_{j}\) which renders \(\mathbb{E}(h_{j}(Y_{u})^{T}\eta_{v\setminus U})\neq 0\) so that the third term is \(O(\sqrt{n})\). For example, Wang and Drton (2020a) show that for any integer \(K>1\), \(\mathbb{E}(Y_{u}^{K}\eta_{v\setminus U})\neq 0\) for generic choices of \(B\) and \(K\)th degree moments of the errors. The test will have greater power when \(|\mathbb{E}(h_{j}(Y_{u})^{T}\eta_{v\setminus U})|\) is large relative to the variability of the first two terms in (3.4), but selecting a set of test functions with "optimal" power depends on the unknown distribution of the errors and is difficult in practice. Nonetheless, in the simulations, we set \(\mathcal{H}=\{y^{2},\mathrm{sign}(y)|y|^{2.5},y^{3}\}\) and observe that the test exhibits good empirical power when compared to other state-of-the-art tests. In Section 5, we explicitly analyze the power of our proposed testing procedure.
If we had access to new realizations of \(\varepsilon_{v}\), we could sample directly from the distribution of \(\tau_{j}(Y_{v},u,U;Y)\) and ultimately \(T\) (conditional on \(\boldsymbol{Y}_{U}\)) by replacing \(\boldsymbol{\varepsilon}_{v}\) in (3.3) with new draws. Comparing the observed \(T\) to the distribution of these new realizations would yield an exact finite-sample test. We refer to this distribution as the _oracle distribution_ because, in practice, we cannot resample \(\boldsymbol{\varepsilon}_{v}\) exactly. Alternatively, conditioning on \(\mathbf{Y}_{U}\), the quantity in (3.3) is asymptotically normal under the null hypothesis. Thus, the null distribution could be approximated by samples which replace the \(\boldsymbol{\varepsilon}_{v}\) in (3.3) with draws from a Gaussian.
However, when \(\varepsilon_{v}\) is not close to a Gaussian, we see drastic improvements by using the residual bootstrap procedure proposed below. We illustrate this explicitly with a simulation study in Section B of the appendix.
Instead, we calibrate our test with a residual bootstrap procedure. For each bootstrap draw, we condition on \(\mathbf{Y}_{U}\) and replace \(\mathbf{\varepsilon}_{v}\) in (3.3) with \(\mathbf{\tilde{\eta}}=(\tilde{\eta}_{i}:i\in[n])\) where each \(\tilde{\eta}_{i}\) is drawn i.i.d. from the empirical distribution of \(\mathbf{\hat{\eta}}_{v\setminus U}\). This is equivalent to forming \(\mathbf{\tilde{Y}}_{v}=\hat{f}(\mathbf{Y}_{U})+\tilde{\eta}\), regressing \(\mathbf{\tilde{Y}}_{v}\) onto \(\mathbf{Y}_{U.1}\) to form the residuals \(\mathbf{\hat{\tilde{\eta}}}_{v\setminus U}\) and computing \(\tau_{j}(\mathbf{\tilde{Y}}_{v},u,U;\mathbf{Y})=(1/\sqrt{n})h_{j}(\mathbf{Y}_ {u})^{T}\mathbf{\hat{\tilde{\eta}}}_{v\setminus U}\) as described in Alg. 1. Similar to before, we then compute \(T(\mathbf{\tilde{Y}}_{v},U;\mathbf{Y})\) by letting \(\tau(\mathbf{\tilde{Y}}_{v},u,U;\mathbf{Y})=\left|\{\tau_{j}(\mathbf{\tilde{Y }}_{v},u,U;\mathbf{Y})\}_{j\in[J]}\right|_{2}\). In the simulations, we use the asymptotically equivalent quantity which divides by \(\sqrt{n-|U|}\) instead of \(\sqrt{n}\). In Section 5, we show that this approximation converges to the oracle distribution in Wasserstein distance when \(p=o(n)\).
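Algorithm 1 is not reproduced in this excerpt, but the following sketch captures the residual-bootstrap calibration described above: condition on \(\mathbf{Y}_{U}\), resample residuals with replacement, rebuild a synthetic response, refit, and recompute the statistic. It reuses the hypothetical regression_test_stats helper sketched after Eq. (3.2), with the \(T_{1}\) statistic.

```python
import numpy as np

def bootstrap_pvalue(Yv, YU, n_boot=500, rng=None):
    """Residual-bootstrap p-value for H0: the residuals are independent of Y_U.

    Assumes regression_test_stats from the earlier sketch; uses its T_1 statistic.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = Yv.shape[0]
    X = np.column_stack([np.ones(n), YU])
    fitted = X @ np.linalg.lstsq(X, Yv, rcond=None)[0]
    resid = Yv - fitted
    T_obs = regression_test_stats(Yv, YU)[0]
    T_boot = np.empty(n_boot)
    for b in range(n_boot):
        # Condition on Y_U: resample residuals with replacement, rebuild the response,
        # and recompute the statistic (the helper refits on Y_U internally).
        Yv_tilde = fitted + rng.choice(resid, size=n, replace=True)
        T_boot[b] = regression_test_stats(Yv_tilde, YU)[0]
    # One-sided p-value with the usual +1 correction.
    return (1 + np.sum(T_boot >= T_obs)) / (1 + n_boot)
```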
Various other procedures, discussed below, have also been proposed for testing goodness-of-fit for a linear model via the hypothesis in (2.3). In theory, these procedures could also be used in the framework we subsequently propose for testing a causal ordering. However, practically speaking and as shown in Section 6, the computational cost of these procedures--save perhaps Schultheiss et al. (2023)--renders them infeasible for our goal of computing confidence sets for causal orderings.
Moreover, beyond the drastic computational benefits, the proposed procedure possesses some statistical advantages, which we briefly discuss now and also empirically demonstrate in Section 6. Sen and Sen (2014) use the Hilbert-Schmidt Independence Criterion (HSIC) (Gretton et al., 2007) to measure dependence between regressors and residuals. They only consider the fixed \(p\) setting, and the simulations in Section 6 show that the type I error is inflated when \(p\) is moderately sized compared to \(n\). We conjecture this is partly because they bootstrap both the regressors and residuals to approximate the joint distribution rather than conditioning on the regressors and only approximating the distribution of the errors. In contrast, Shah and Buhlmann (2018) propose Residual Prediction (RP) and Berrett and Samworth (2019) propose MintRegression, which test the goodness-of-fit conditional on the covariates; however, both procedures calibrate their tests using a parametric bootstrap that assumes the errors are Gaussian. In the supplement, Shah and Buhlmann (2018) do consider cases where the errors are non-Gaussian and use a residual bootstrap, which shows good empirical performance although they do not provide any theoretical guarantees. Finally, Schultheiss et al. (2023) also propose a goodness-of-fit test for individual covariates in a linear model using a statistic similar to Wang and Drton (2020). They show that the statistic is asymptotically normal and calibrate the hypothesis test using an estimate of the limiting distribution. A direct comparison of required conditions is not straightforward because they focus on the high-dimensional sparse linear model; whereas we do not assume
sparsity, but require \(p<n\). Nonetheless, for valid testing, they require the number of non-zero coefficients to be \(o(n^{1/2}/\log^{3}(p))\); in contrast, we require \(p=o(n)\).
### Non-linear models via sieves
Using a similar argument with the independence of residuals and regressors, Peters et al. (2014) show that the causal graph may also be identified when the structural equations in (1.1), \(f_{v}\), are non-linear. The procedure proposed above directly generalizes to this setting.
Let \(\Phi^{(v)}=\{\phi_{k}\}_{k\geq 1}\) be a basis of functions (e.g., the b-spline or polynomial basis) that take inputs \(Y_{U}\). We approximate \(f_{v}\) with \(\hat{f}_{v}\) by regressing \(Y_{v}\) onto the first \(K\) elements of \(\Phi^{(v)}\)--which we will denote by \(\Phi^{(v)}_{K}\). As in the linear setting where we included an intercept term, we require the constant function to be in the span of \(\Phi^{(v)}_{K}\). Given the residuals of \(Y_{v}\), the test statistics can then be calculated in the same way as in the linear setting. In this case, to ensure the test statistics are not identically zero, we must select test functions that do not lie in the span of \(\Phi^{(v)}_{K}\). If we do not know a good basis, \(\Phi^{(v)}_{K}\), for \(f_{v}\) a priori, we can use a sieve estimator where \(K\) grows with \(n\). Let \(f_{v,K}\) be the squared error projection of \(f_{v}\) into the span of \(\Phi^{(v)}_{K}\); i.e.,
\[f_{v,K}=\arg\min_{f\in\operatorname{span}(\Phi^{(v)}_{K})}E_{Y_{U}}\left[(f_{v }(Y_{U})-f(Y_{U}))^{2}\right]. \tag{3.5}\]
In Section 5, we show that even if \(f_{v}\not\in\operatorname{span}(\Phi^{(v)}_{K})\) for any finite \(K\), the proposed test is valid as long as \(\mathbb{E}_{Y_{U}}\left[(f_{v,K}(Y_{U})-f_{v}(Y_{U}))^{2})\right]\) decreases appropriately with \(K\).
## 4. Inference for causal orderings
Before discussing details, we first discuss some of the trade-offs involved in our design decisions. Estimating a causal ordering can be seen as a preliminary task when estimating a causal graph and various causal discovery procedures have fruitfully employed a search over causal orderings instead of individual graphs; e.g., Raskutti and Uhler (2018); Solus et al. (2021). Given a correct causal ordering, estimating the graph simplifies greatly to variable selection in a sequence of regressions (Shojaie and Michailidis, 2010). Thus, for computational reasons, we focus on confidence sets in the much smaller space of causal orderings rather than all possible graphs. Nonetheless, let \(\hat{\mathcal{G}}\) be the set of DAGs formed by estimating a graph from each ordering \(\theta\in\hat{\Theta}(\mathbf{Y},\alpha)\). If--given a correct ordering--the true graph can be recovered with overwhelming probability, then (1.2) trivially implies that \(\hat{\mathcal{G}}\) is an asymptotically valid confidence set for \(G\); i.e., \(\lim_{n\to\infty}P(G\in\hat{\mathcal{G}})\geq 1-\alpha\).
We also choose to form the confidence set \(\hat{\Theta}(\mathbf{Y},\alpha)\) by inverting a goodness-of-fit test. If the model assumptions are violated, then all possible orderings may be rejected, resulting in an empty confidence set. Alternatively, one could form a confidence set by considering a neighborhood around \(\hat{\theta}\), a point estimate of a causal ordering. This would produce a non-empty confidence set even when the model is misspecified and might be preferred if \(\hat{\theta}\) represents a useful "projection" into the considered class of models. However, in practice, it is difficult to know if the misspecification is "mild," and we argue that observing an empty confidence set is important because it alerts the scientist to potentially choose a causal discovery procedure that makes less restrictive identifying assumptions.
Finally, we construct a test for each causal ordering by aggregating several regression tests. Although we only require \(p=o(n)\) for asymptotic validity of all individual regression tests, the aggregation requires \(p^{4}=o(n)\). Alternatively, a direct test that does not aggregate individual tests might enjoy better statistical properties. However, we trade statistical efficiency for
computational efficiency and aggregating individual tests allows for a branch-and-bound type procedure, which in practice drastically decreases computation and enables feasible analysis of "medium-sized" problems. In Section 6.1, we show that this procedure can be applied to problems with \(p\approx 20\).
### Testing a given ordering
For an ordering \(\theta\in\mathcal{S}_{V}\), let \(\operatorname{pr}_{\theta}(v)=\{u:\theta(u)<\theta(v)\}\) be the set of nodes that precede \(v\) in \(\theta\). When \(\theta\) is a valid ordering for \(G\), then \(\operatorname{pa}(v)\subseteq\operatorname{pr}_{\theta}(v)\subseteq \operatorname{nd}(v)\) for all \(v\) such that \(\theta(v)>1\). However, when \(\theta\) is not a valid causal ordering, there exists some \(v\) such that \(\operatorname{pr}_{\theta}(v)\not\subseteq\operatorname{nd}(v)\). Thus, testing whether \(\theta\) is a valid causal ordering is equivalent to testing
\[H_{0,\theta}:\operatorname{pa}(v)\subseteq\operatorname{pr}_{\theta}(v) \subseteq\operatorname{nd}(v)\quad\forall v\text{ such that }\theta(v)>1. \tag{4.1}\]
To operationalize a test for \(H_{0,\theta}\), we use the procedure from Section 3.1 to test \(H_{0,\theta,v}:\eta_{v\setminus\operatorname{pr}_{\theta}(v)}\perp\!\!\!\perp Y_{\operatorname{pr}_{\theta}(v)}\) for all \(v\) such that \(\theta(v)>1\). Using the residual bootstrap procedure to test \(H_{0,\theta,v}\) produces a p-value, denoted as \(\hat{\gamma}_{\theta,v}\), which approximates \(\gamma_{\theta,v}\), the p-value that would result from a test calibrated by the oracle distribution. We propose aggregating the \(p-1\) p-values into a single test for (4.1) by taking the minimum p-value. Specifically, let
\[\hat{\gamma}_{\theta}=\min_{v:\theta(v)>1}\hat{\gamma}_{\theta,v}\qquad\text{ and}\qquad\gamma_{\theta}=\min_{v:\theta(v)>1}\gamma_{\theta,v}. \tag{4.2}\]
By Lemma 4, the p-values produced by the oracle procedure, \(\{\gamma_{\theta,v}\}_{v:\theta(v)>1}\), are mutually independent under the null hypothesis, so \(\gamma_{\theta}\) follows a \(\operatorname{Beta}(1,p-1)\) distribution. Of course, we do not have access to \(\gamma_{\theta}\), but under conditions described in Section 5, \(\hat{\gamma}_{\theta}\to_{p}\gamma_{\theta}\). Thus, to compute a final p-value for \(H_{0,\theta}\), denoted as \(\hat{\Gamma}_{\theta}\), we also compare \(\hat{\gamma}_{\theta}\) to a \(\operatorname{Beta}(1,p-1)\) distribution.
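To make the aggregation concrete, a short sketch follows: the per-node p-values along an ordering are combined by their minimum, which is then referred to a \(\operatorname{Beta}(1,p-1)\) distribution. The node-level test is passed in as a callable, e.g. the bootstrap_pvalue sketch from Section 3.

```python
from scipy import stats

def ordering_pvalue(Y, theta, node_test):
    """Aggregate p-value Gamma_hat for H_{0,theta}, following Eqs. (4.1)-(4.2).

    Y: (n, p) data matrix; theta: sequence of column indices in the hypothesized causal order;
    node_test(Yv, YU) -> p-value for H_{0,theta,v} (e.g. the bootstrap sketch from Section 3).
    """
    p = len(theta)
    gammas = [node_test(Y[:, theta[z]], Y[:, list(theta[:z])]) for z in range(1, p)]
    # Under the null, the minimum of p-1 independent uniform p-values is Beta(1, p-1),
    # so the final p-value is the Beta(1, p-1) cdf evaluated at the observed minimum.
    return stats.beta.cdf(min(gammas), 1, p - 1)
```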
### Efficient computation
To satisfy (1.2), we construct \(\hat{\Theta}(\mathbf{Y};\alpha)\) by including any \(\theta\) where \(H_{0,\theta}\) is not rejected by a level \(\alpha\) test; i.e.,
\[\hat{\Theta}(\mathbf{Y};\alpha)=\{\theta:\hat{\Gamma}_{\theta}\geq\alpha\}. \tag{4.3}\]
Of course, enumerating all permutations is computationally prohibitive, so we propose a branch-and-bound style procedure to avoid unnecessary computation. Pseudocode is given in Alg. 2. For any fixed \(\theta\), we sequentially test \(H_{0,\theta,v}\) for \(z=\theta^{-1}(v)=2,\dots,p\), and we update a running record \(\hat{\gamma}_{\theta}^{(z)}=\min_{v:\theta^{-1}(v)\leq z}\hat{\gamma}_{\theta,v}\). Once \(\hat{\gamma}_{\theta}^{(z)}\) is less than the \(\alpha\) quantile of a \(\operatorname{Beta}(1,p-1)\), we can reject \(\theta\) without testing the remainder of the ordering.
Furthermore, we test each ordering in \(\mathcal{S}_{V}\) simultaneously rather than sequentially. We first test all orderings of length \(z=2\). Subsequently, we only consider orderings of length \(z=3\) that are formed by appending a node to a set of length \(2\) that is not already rejected. We repeat this process for increasing \(z\). This approach avoids redundant computation because the test of \(\operatorname{pa}(v)\subseteq\operatorname{pr}_{\theta}(v)\subseteq \operatorname{nd}(v)\) only depends on the combination of elements included in \(\operatorname{pr}_{\theta}(v)\) and not the specific ordering \(\theta\). For example, when \(z=3\), once we have tested \(\operatorname{pa}(6)\subseteq(4,5)\subseteq\operatorname{nd}(6)\) for the incomplete ordering \((4,5,6)\), we do not need to recompute the test for \((5,4,6)\). In the worst case, when the signal is small and no orderings are rejected, the procedure is still an exhaustive search. Nonetheless, in Section 6, we show that under reasonable signal-to-noise regimes, problems with \(p=20\) are feasible.
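A simplified prefix-expansion sketch in the spirit of Alg. 2 (whose pseudocode is not reproduced in this excerpt) is given below: prefixes are grown one node at a time, per-node tests are cached by the pair (node, preceding set), and a prefix is pruned as soon as its running minimum p-value drops below the \(\operatorname{Beta}(1,p-1)\) \(\alpha\)-quantile. As before, the node-level test is an injected callable.

```python
from scipy import stats

def confidence_set_orderings(Y, node_test, alpha=0.05):
    """Prefix-expansion sketch of the search behind Eq. (4.3).

    node_test(Yv, YU) -> p-value for a single regression goodness-of-fit test.
    Returns all orderings whose aggregated p-value Gamma_hat is at least alpha.
    """
    n, p = Y.shape
    cutoff = stats.beta.ppf(alpha, 1, p - 1)   # reject once the running minimum falls below this
    cache = {}                                 # (v, frozenset(prefix)) -> per-node p-value

    def node_pvalue(v, prefix):
        key = (v, frozenset(prefix))
        if key not in cache:                   # the test depends on the set, not on its order
            cache[key] = node_test(Y[:, v], Y[:, list(prefix)])
        return cache[key]

    frontier = [((v,), 1.0) for v in range(p)]  # (prefix, running minimum of per-node p-values)
    for _ in range(p - 1):
        new_frontier = []
        for prefix, gamma_min in frontier:
            for v in set(range(p)) - set(prefix):
                g = min(gamma_min, node_pvalue(v, prefix))
                if g >= cutoff:                 # prune prefixes that are already rejected
                    new_frontier.append((prefix + (v,), g))
        frontier = new_frontier
    return [prefix for prefix, _ in frontier]
```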
### Post-processing the confidence set
We now discuss how \(\hat{\Theta}(\mathbf{Y},\alpha)\) can be post-processed into other useful objects. Specifically, we consider: (1) confidence intervals for causal effects which incorporate model uncertainty, and (2) sub/super-sets of ancestral relations with confidence.
Strieder et al. (2021) consider a linear SEM with equal variances and provide CIs for causal effects which account for the model uncertainty. With a similar goal, we propose a procedure for the setting of linear SEMs with independent errors. We focus on the total effect of \(v\) onto \(u\)--\(\partial\mathbb{E}[Y_{u}\;|\;\mathrm{do}(Y_{v}=y)]/\partial y\) using the do-operator (Pearl, 2009); a procedure for the direct effect of \(v\) onto \(u\)--i.e., \(\beta_{u,v}\)--is analogous and discussed in the Section D of the appendix. When \(v\not\in\mathrm{an}(u)\), the total effect of \(v\) on \(u\) is \(0\), and when \(v\in\mathrm{an}(u)\), the total effect may be recovered by a regression of \(Y_{u}\) onto \(Y_{v}\) and a set of additional covariates--often called the _adjustment set_. In particular, letting \(\mathrm{an}(v)\) be the adjustment set yields an unbiased estimate. While adjustment sets which recover the total effect may not be unique, an incorrect adjustment set may bias the estimate; e.g., incorrectly including a descendant of \(Y_{u}\) or excluding a parent of \(Y_{v}\) from the adjustment set may induce bias. Thus, naively selecting a single adjustment set and calculating a confidence interval for the parameter of interest will not provide nominal coverage when there is considerable uncertainty in a "correct" adjustment set. Robust quantification of uncertainty must also account for uncertainty in the selected adjustment set.
Alg. 3 describes a procedure to calculate \(1-\alpha\) CIs for the total effect of \(v\) onto \(u\) which account for model uncertainty. Specifically, we consider the adjustment set \(\mathrm{pr}_{\theta}(v)\) for each ordering \(\theta\in\hat{\Theta}(\mathbf{Y},\alpha/2)\). We then calculate the \(1-\alpha/2\) CI for the regression parameter of interest, conditional on that adjustment set. The final CI is given by the union of all conditional CIs. In practice, if \(v\) and \(u\) are fixed in advance, this can be calculated simultaneously with \(\hat{\Theta}(\mathbf{Y},\alpha/2)\) to avoid redundant regressions. This is similar in flavor to the IDA procedure of Maathuis et al. (2009), but we additionally account for uncertainty due to estimating the graph rather than just population level non-identifiability within a Markov equivalence class.
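The following sketch summarizes this construction: for each ordering retained at level \(\alpha/2\), the nodes preceding \(v\) serve as the adjustment set, a standard \(1-\alpha/2\) regression confidence interval is computed for the coefficient of \(Y_{v}\), and the final interval is the union. The normal-approximation OLS interval and the degenerate \(\{0\}\) interval for orderings in which \(u\) precedes \(v\) are our reading of the text, not necessarily the exact choices in Alg. 3.

```python
import numpy as np
from scipy import stats

def total_effect_cis(Y, u, v, orderings, alpha=0.10):
    """Per-ordering CIs for the total effect of v on u; their union is the final interval.

    orderings: the set Theta_hat(Y, alpha/2); each entry is a tuple of column indices.
    """
    n = Y.shape[0]
    intervals = []
    for theta in orderings:
        pos = {node: i for i, node in enumerate(theta)}
        if pos[u] < pos[v]:
            intervals.append((0.0, 0.0))          # u precedes v: total effect is 0 under theta
            continue
        adjust = [w for w in theta if pos[w] < pos[v]]  # adjustment set pr_theta(v)
        cols = [np.ones(n), Y[:, v]] + ([Y[:, adjust]] if adjust else [])
        X = np.column_stack(cols)
        coef = np.linalg.lstsq(X, Y[:, u], rcond=None)[0]
        resid = Y[:, u] - X @ coef
        sigma2 = resid @ resid / (n - X.shape[1])
        se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
        z = stats.norm.ppf(1 - alpha / 4)          # two-sided 1 - alpha/2 interval
        intervals.append((coef[1] - z * se, coef[1] + z * se))
    return intervals
```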
**Lemma 1**.: _Let \(\pi_{u,v}\) denote the total causal effect of \(v\) onto \(u\). Suppose \(\hat{\Theta}(\mathbf{Y},\alpha/2)\) satisfies (1.2), and \(C(S)\) is an asymptotically valid \(1-\alpha/2\) confidence interval for the parameter of interest, conditional on \(S\) being a valid adjustment set. Then, for the confidence interval produced by Alg. 3, \(\lim_{n\to\infty}P(\pi_{u,v}\in\hat{C}_{\alpha})\geq 1-\alpha.\)_
Furthermore, \(\hat{\Theta}(\mathbf{Y},\alpha)\) may be used to compute a sub/super-set of ancestral relations. Let \(\mathcal{A}(G)=\{(u,v)\::\:u\in\mathrm{an}(v)\}\) denote the set of true ancestral relationships in \(G\), \(\hat{\mathcal{A}}_{\cap}=\{(u,v)\::\:\theta(u)<\theta(v)\:\forall\theta\in\hat{ \Theta}(\mathbf{Y},\alpha)\}\) denote the set of ancestral relations that hold for all \(\theta\in\hat{\Theta}(\mathbf{Y},\alpha)\)
and \(\hat{\mathcal{A}}_{\cup}=\{(u,v)\,:\,\exists\theta\in\hat{\Theta}(\mathbf{Y},\alpha)\) s.t. \(\theta(u)<\theta(v)\}\) denote the set of ancestral relations which are implied by at least one \(\theta\in\hat{\Theta}(\mathbf{Y},\alpha)\). Lemma 2 shows \(\hat{\mathcal{A}}_{\cap}\subseteq\mathcal{A}\subseteq\hat{\mathcal{A}}_{\cup}\) with probability at least \(1-2\alpha\). The set \(\hat{\mathcal{A}}_{\cap}\) is similar to the conservative set of causal predictors given in Peters et al. (2016).
**Lemma 2**.: _Suppose \(\hat{\Theta}(\mathbf{Y},\alpha)\) satisfies (1.2). Then, \(\lim_{n\to\infty}P(\hat{\mathcal{A}}_{\cap}\subseteq\mathcal{A}\subseteq\hat{ \mathcal{A}}_{\cup})\geq 1-2\alpha\)._
## 5. Theoretical guarantees
We now show conditions under which the residual bootstrap procedure is asymptotically valid. We also show that aggregating multiple tests results in a valid test for an entire ordering. Furthermore, we analyze the power of the regression goodness-of-fit test.
### Residual bootstrap
Suppose \(\theta\) is a causal ordering of the DAG \(G\) and consider a fixed \(v\in V\). Let \(F^{(v)}\) denote the distribution of \(\varepsilon_{v}\), \(F_{n}^{(v)}\) denote the empirical distribution of \(\boldsymbol{\varepsilon}_{v}\), and \(\hat{F}_{n}^{(v)}\) denote the empirical distribution of the residuals \(\boldsymbol{\hat{\eta}}_{v\setminus\mathrm{pr}_{\theta}(v)}\). We allow for nonlinear \(f_{v}\), and recall that \(\Phi_{K}^{(v)}\) is the basis used to model \(f_{v}\) which includes an intercept term. Specializing to linear SEMs, \(K=|\mathrm{pr}_{\theta}(v)|+1\) and \(\Phi_{K}^{(v)}\) would simply be linear functions of \(Y_{u}\) for \(u\in\mathrm{pr}_{\theta}(v)\) and an intercept. Recall that \(f_{v,K}\) denotes the optimal approximation of \(f_{v}\) within \(\Phi_{K}^{(v)}\); i.e.,
\[f_{v,K}=\arg\min_{f\in\mathrm{span}(\Phi_{K}^{(v)})}\mathbb{E}_{Y_{\mathrm{pr}_{\theta}(v),i}}\left[\left(f_{v}(Y_{\mathrm{pr}_{\theta}(v),i})-f(Y_{\mathrm{pr}_{\theta}(v),i})\right)^{2}\right].\]
Let \(\mathbf{d}_{v}=(d_{v,i}:i\in[n])\) denote the approximation error such that \(d_{v,i}=f_{v}(Y_{\mathrm{pr}_{\theta}(v),i})-f_{v,K}(Y_{\mathrm{pr}_{\theta}( v),i})\). If \(f_{v}\in\mathrm{span}(\Phi)\), then \(\mathbf{d}_{v}=0\).
We examine the distribution of the test statistic under two settings: an oracle distribution of \(T(\mathbf{Y}_{v},\mathrm{pr}_{\theta}(v);\mathbf{Y})\)--where we condition on \(\boldsymbol{Y}_{\mathrm{pr}_{\theta}(v)}\) and resample \(\boldsymbol{\varepsilon}_{v}\) in (3.3) exactly from \(F^{(v)}\)--and the bootstrap distribution of \(T(\mathbf{\tilde{Y}}_{v},\mathrm{pr}_{\theta}(v);\mathbf{Y})\)--where we replace \(\boldsymbol{\varepsilon}_{v}\) in (3.3) with \(\boldsymbol{\tilde{\eta}}\) where \(\tilde{\eta}_{v,i}\stackrel{{ i.i.d}}{{\sim}}\hat{F}_{n}^{(v)}\). Let \(d_{2}(F_{1},F_{2})\) denote the Wasserstein-2 distance between the distributions \(F_{1}\) and \(F_{2}\). In a slight abuse of notation, we will also use \(d_{2}(X_{1},X_{2})\) to denote \(d_{2}(F_{1},F_{2})\) when \(X_{1}\sim F_{1}\) and \(X_{2}\sim F_{2}\).
Theorem 1 shows that the oracle distribution and the bootstrapped distribution converge in Wasserstein-2 distance under weak conditions. We consider a sequence of models indexed by \(p\), \(\mathcal{M}_{p}=\big{\{}G_{p}=\{V,E\},\{f_{v}\}_{v\in V},\{F^{(v)}\}_{\{v\in V \}}\big{\}}\), such that \(G\) is a graph with \(p\) vertices and let \(n_{p}\) denote the sample size drawn from \(\mathcal{M}_{p}\). For simplicity, we will suppress the notational dependence on \(p\). We will require tail conditions on the errors and test functions which we impose through upper bounds on Orlicz-\(a\) norms; for \(a>0\), the Orlicz-\(a\) norm of
\(X\) is \(\|X\|_{\Psi_{a}}=\inf\{t>0:\mathbb{E}\left(|X|^{a}/t^{a}\right)\leq 2\}\). When \(\|X\|_{\Psi_{1}}<\infty\), \(X\) has sub-exponential tails, and when \(\|X\|_{\Psi_{a}}<\infty\) for \(0<a<1\), \(X\) has been referred to as sub-Weibull (see, e.g., Vladimirova et al. (2020); Gotze et al. (2021)). When \(Y_{v}\) is sub-exponential, the test functions we use in the simulations, \(\mathcal{H}=\{Y^{2},\mathrm{sign}(Y)|Y|^{2.5},Y^{3}\}\), satisfy \(\|h_{j}(Y_{v})\|_{\Psi_{(1/J_{1})}}<\infty\) for \(J_{1}=3\).
**Theorem 1**.: _Suppose that \(p\leq K<n\) with \(K=o(n)\). Furthermore, there exist constants \(0<M_{1},M_{2},M_{3}<\infty\) such that for all \(v\in V\) and \(j\in[J]\), \(\varepsilon_{v}\sim F^{(v)}\) with \(\|\varepsilon_{v}\|_{\Psi_{1}}\leq M_{1}\), \(\mathrm{var}(\varepsilon_{v})<M_{2}\), \(\mathbb{E}(h_{j}(Y_{v})^{2})\leq M_{2}\) and \(\|h_{j}(Y_{v})\|_{\Psi_{(1/J_{1})}}\leq M_{3}\) for some \(1/2\leq J_{1}<\infty\). Let \(d^{\star}=\max_{v}\overline{d_{v}^{2}}\). Then, for any causal ordering \(\theta\) consistent with DAG \(G\), with probability \(1-o(1)\), \(\max_{v}d_{2}\left[T_{s}(\mathbf{Y}_{v},\mathrm{pr}_{\theta}(v);\mathbf{Y}),T (\mathbf{\tilde{Y}}_{v},\mathrm{pr}_{\theta}(v);\mathbf{Y})\right]^{2}\) is less than_
\[\begin{split} 27J(M_{2}+\log(p)/\sqrt{n})(KM_{2}/n+2\log(n)/ \sqrt{n}+2d^{\star})+3Jd^{\star}(M_{2}+\log^{3J_{1}}(pJn)).\end{split} \tag{5.1}\]
Corollary 1 considers the setting where each \(f_{v}\) belongs to a known finite-dimensional basis \(\Phi_{K}^{(v)}\). In this case \(\mathbf{d}_{v}=0\). Corollary 2 considers the setting where a finite basis for \(f_{v}\) is not known a priori, and a sieve estimator for \(f_{v}\) is used so that \(K\) grows with \(n\).
**Corollary 1** (Known finite basis).: _Suppose that conditions of Theorem 1 are satisfied. If each \(f_{v}\) belongs to a known finite-dimensional basis, then with probability \(1-o(1)\)_
\[\max_{v}d_{2}\left[T_{s}(\mathbf{Y}_{v},\mathrm{pr}_{\theta}(v);\mathbf{Y}),T _{s}(\mathbf{\tilde{Y}}_{v},\mathrm{pr}_{\theta}(v);\mathbf{Y})\right]^{2}\leq 27 J\left(M_{2}+\frac{\log(p)}{\sqrt{n}}\right)\left(KM_{2}/n+\frac{2\log(n)}{ \sqrt{n}}\right).\]
**Corollary 2** (Sieve basis).: _Suppose that conditions of Theorem 1 are satisfied. Furthermore, suppose that a finite-dimensional basis for each \(f_{v}\) is not known a priori, but there exists \(K^{\star}\) such that for all \(K>K^{\star}\), \(\|f_{v}(Y_{\mathrm{pr}_{\theta}(v)})-f_{v,K}(Y_{\mathrm{pr}_{\theta}(v)})\|_{ \Psi_{1}}\leq M_{1}\), and \(\max_{v}\mathbb{E}(d_{v}^{2})<K^{-r}\) for some \(r>0\). Letting \(K=n^{1/(1+r)}\), we have with probability \(1-o(1)\) for some constant \(c\) which depends on \(M_{1},M_{2}\),_
\[\max_{v}d_{2}\left[T_{s}(\mathbf{Y}_{v},\mathrm{pr}_{\theta}(v);\mathbf{Y}),T _{s}(\mathbf{\tilde{Y}}_{v},\mathrm{pr}_{\theta}(v);\mathbf{Y})\right]^{2}\leq cJ \left(\log(pnJ)^{3J_{1}}n^{\frac{-1}{1+r}}\right)+\frac{cJ\log(pnJ)^{3J_{1}+1} }{\sqrt{n}} \tag{5.2}\]
Corollary 2 requires that the approximation bias--as measured by \(\mathbb{E}\left((f_{v}-f_{v}^{(K_{v})})^{2}\right)\)--is bounded above by \(K^{-r}\) for some \(r>0\). This is satisfied under various more general conditions (Barron and Sheu, 1991). For example, suppose that \(\int(g^{(r)})^{2}<\infty\) where \(g\) is a univariate function and \(g^{(r)}\) is its \(r\)th derivative (i.e., bounded Sobolev norm). Then, letting \(g_{K}\) denote the best polynomial approximation up to degree \(K\), we have \(\mathbb{E}\left((g-g_{K})^{2}\right)=O(K^{-r})\). Thus, in the additive model where \(f_{v}=\sum_{u\in\mathrm{pa}(v)}f_{v,u}(X_{u})\), it follows that \(\mathbb{E}\left((f_{v}-f_{v,K})^{2}\right)=O(pK^{-r})\). Similar statements hold when \(g\) has finite support and is approximated by splines or the Fourier basis.
Regardless, whether a finite basis is known or not, Theorem 1 and its corollaries show that for a sequence of models \(\mathcal{M}_{p}\), when fixing \(M_{1},M_{2},M_{3},J_{1},J\) and letting \(p\), \(K\), and \(n\) grow such that \(K/n\to 0\), we have \(\max_{v}d_{2}\left[T_{s}(\mathbf{Y}_{v},\mathrm{pr}_{\theta}(v);\mathbf{Y}),T _{s}(\mathbf{\tilde{Y}}_{v},\mathrm{pr}_{\theta}(v);\mathbf{Y})\right]^{2}=o_{p}(1)\). The convergence in Wasserstein distance, however, does not immediately imply that the p-values from the oracle procedure and the bootstrap procedure converge. Roughly speaking, Lemma 3 shows that when the test statistic does not converge to a point mass, the quantile functions converge uniformly.
**Lemma 3**.: _Let \(\gamma(t)\) denote the p-value calculated for the test statistic \(t\) using the CDF \(F\). Let \(\hat{\gamma}(t)\) denote the p-value calculated using the CDF \(\hat{F}\). Suppose \(\phi(\xi)=\sup_{z}F(z+\xi)-F(z)\). Then,_
\[|\hat{\gamma}(t)-\gamma(t)|<d_{2}(\hat{F},F)^{2/3}+\phi(d_{2}(\hat{F},F)^{2/3}). \tag{5.3}\]
Lemma 3 implies that the p-values from the idealized and obtainable procedures converge when \(\lim_{\xi\to 0}\phi(\xi)=0\). This is satisfied when the distribution of \(T\) is continuous with bounded density. Finally, Lemma 4 shows that, under the null hypothesis in (4.1), the p-values generated by the oracle procedure are mutually independent.
**Lemma 4**.: _Let \(\theta\) be a causal ordering for \(G\) and suppose \(\boldsymbol{\gamma}=(\gamma_{\theta,v}:v=2,\ldots p)\) are p-values calculated using the oracle procedure. Then the elements of \(\boldsymbol{\gamma}\) are mutually independent._
To test the ordering, \(\theta\), we aggregate the individual p-values into the test statistic \(\gamma_{\theta}=\min_{v}\gamma_{\theta,v}\) and \(\hat{\gamma}_{\theta}=\min_{v}\hat{\gamma}_{\theta,v}\) for the oracle and bootstrap procedures, respectively. We then compare \(\gamma_{\theta}\) and \(\hat{\gamma}_{\theta}\) to a Beta\((1,p-1)\) to get \(\Gamma_{\theta}\) and \(\hat{\Gamma}_{\theta}\), the final p-value for the entire ordering. The following corollary implies that for the final step, \(|\hat{\Gamma}_{\theta}-\Gamma_{\theta}|\to 0\) if \(p^{4}/n\to 0\).
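As a small illustration of this aggregation step (a minimal sketch with hypothetical per-node p-values), under the null the minimum of the \(p-1\) independent uniform p-values follows a Beta\((1,p-1)\) law, so the ordering-level p-value is that CDF evaluated at the minimum.

```python
from scipy import stats

def ordering_pvalue(node_pvalues):
    """Aggregate per-node p-values gamma_{theta,v}, v = 2,...,p, into a single
    p-value for the ordering theta; the minimum of p-1 independent Uniform(0,1)
    variables is Beta(1, p-1) distributed under the null."""
    m = len(node_pvalues)                       # m = p - 1 regressions tested
    gamma_theta = min(node_pvalues)
    return stats.beta.cdf(gamma_theta, 1, m)    # equals 1 - (1 - gamma_theta)**m

# Hypothetical p-values for an ordering of p = 5 variables (4 regressions).
print(ordering_pvalue([0.40, 0.08, 0.63, 0.22]))
```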
**Corollary 3**.: _For a fixed \(\theta\in\Theta(G)\), suppose that the conditions of Theorem 1 hold and \(p^{4}/n\to 0\). Furthermore, assume that the test statistic \(T(\mathbf{Y}_{v},\mathrm{pa}(v);\mathbf{Y})\) has a continuous density bounded above by \(L<\infty\) for all \(v\in V\). Then_
\[\lim_{n\to\infty}P(\theta\in\hat{\Theta}(\mathbf{Y},\alpha))\geq 1-\alpha. \tag{5.4}\]
### Power analysis
Recall that \(\eta_{v\backslash\mathrm{pr}_{\theta}(v)}\) denotes the population residuals; i.e., in the linear case \(Y_{v}-b_{v,U}^{T}Y_{U.1}\) where \(Y_{U.1}\) denotes \(Y_{U}\) with an added intercept term and \(b_{v,U}=\arg\min_{b}\mathbb{E}\big{(}Y_{v}-b^{T}Y_{U.1}\big{)}^{2}\). Let \(\tau^{\star}_{v,u,j,\mathrm{pr}_{\theta}(v)}=\mathbb{E}\left(h_{j}(Y_{u})\eta_{v\backslash\mathrm{pr}_{\theta}(v)}\right)\), \(\tau^{\star}_{v,u,\mathrm{pr}_{\theta}(v)}=\sqrt{\sum_{j}(\tau^{\star}_{v,u,j,\mathrm{pr}_{\theta}(v)})^{2}}\), and
\[\tau^{\star}_{v,\mathrm{pr}_{\theta}(v)}=\|\{\tau^{\star}_{v,u,\mathrm{pr}_{ \theta}(v)}\}_{u\in\mathrm{pr}_{\theta}(v)}\|_{2}^{2}. \tag{5.5}\]
Under \(H_{0,\theta,v}\), \(Y_{u}\perp\!\!\!\perp\eta_{v\backslash\mathrm{pr}_{\theta}(v)}\) for every \(u\in\mathrm{pr}_{\theta}(v)\) so that \(\tau^{\star}_{v,u,j,\mathrm{pr}_{\theta}(v)}=0\) and \(\tau^{\star}_{v,\mathrm{pr}_{\theta}(v)}=0\). Under the alternative, however, when the data is generated by a linear non-Gaussian SEM or non-linear SEM (i.e., when the causal ordering may be identified from data), \(Y_{u}\not\perp\!\!\!\perp\eta_{v\backslash\mathrm{pr}_{\theta}(v)}\) and for appropriately chosen \(h_{j}\), the quantity \(\mathbb{E}\left(h_{j}(Y_{u})\eta_{v\backslash\mathrm{pr}_{\theta}(v)}\right)\) and subsequently \(\tau^{\star}_{v,\mathrm{pr}_{\theta}(v)}\) are non-zero. Theorem 2 shows that the test for a single regression may have power going to \(1\) even as \(p\) grows if \(p/n\to 0\) and \((p/n)=o(\tau^{\star}_{v,\mathrm{pr}_{\theta}(v)})\).
**Theorem 2**.: _Suppose \(K=O(p)\) and consider a sequence of models where \(p,n\) grow, but \(\lambda_{\min},\lambda_{\max},M_{1}\), and \(J\) are fixed. Suppose \(\mathrm{pa}(v)\subseteq\mathrm{pr}_{\theta}(v)\subseteq\mathrm{nd}(v)\) does not hold, then for any fixed level \(\alpha\), the hypothesis \(H_{0,\theta,v}\) will be rejected with probability going to \(1\) if_
1. \(W_{i}\) _is jointly sub-Gaussian with Orlicz-2 norm_ \(M_{1}\)_,_ \(\max_{v,j}\mathbb{E}(g_{j}(Y_{v})^{2})\leq M_{1}\)_,_ \(\max_{v,j}\mathbb{E}(Y_{v}^{2})\leq M_{1}\)_, and_ \(p/n=o(\tau^{\star}_{v,\mathrm{pr}_{\theta}(v)})\) _and_ \(p/n\to 0\)_; or_
2. \(W_{i}\) _is jointly log-concave with Orlicz-1 norm_ \(M_{1}\) _and_ \(p\log(n)^{3}/n=o(\tau^{\star}_{v,\mathrm{pr}_{\theta}(v)})\) _and_ \(p\log(n)^{3}/n\to 0\)_,_
_where \(W_{i}=\begin{bmatrix}Y_{v,i},\;\left(\phi_{k}(Y_{\mathrm{pr}_{\theta}(v)}):k \in[K]\right),\;\left(h_{j}(Y_{u,i}):j\in[J],u\in\mathrm{pr}_{\theta}(v)\right) \end{bmatrix}\in\mathbb{R}^{K+|\mathrm{pr}_{\theta}(v)|J+1}\)._
When aggregating individual regression tests to test an entire ordering at level \(\alpha\), rejecting the null requires that the minimum p-value across the individual tests be less than \(1-(1-\alpha)^{1/p}\). Using a crude Bonferroni upper bound, in order to reject the entire ordering with probability going to \(1\), we would require \(\max_{v}\tau_{v,\mathrm{pr}_{\theta}(v)}\) to be larger by a factor of \(p\); i.e., in the sub-Gaussian case, we require \(p^{2}/n=o(\max_{v}\tau_{v,\mathrm{pr}_{\theta}(v)})\).
## 6. Numerical experiments
In Table 1 we compare the proposed goodness-of-fit test for a single linear regression to the procedures of Sen and Sen (2014) (denoted in the table as "S"), RP Test Shah and Buhlmann (2018) ("RO" for OLS version and "RL" for Lasso version), MINT Berrett and Samworth (2019) ("M"), and higher-order least squares Schultheiss et al. (2023) ("H"). For our procedure, we use \(\mathcal{H}=\{y^{2},y^{3},\mathrm{sign(y)}\times|y|^{2.5}\}\), where each \(h_{j}(\mathbf{Y}_{v})\) is standardized. We display results for the \(T_{2}\) statistic, and results for the \(T_{1}\) statistic are similar.
For each replication, we construct a graph by starting with edges \(v\to v+1\) for all \(v<p\); for any \(u<v-1\), \(u\to v\) is added with probability \(1/2\). For each edge, we sample a linear coefficient uniformly from \(\pm(.1,.95)\). We consider settings where all error terms are either uniform, lognormal, gamma, Weibull, or Laplace random variables and a setting--called mixed--where the distribution of each variable in the SEM is randomly selected. We let \(n\approx p^{5/4}\) or \(p^{2}\). The data is standardized before applying the goodness-of-fit tests. For each setting of \(p,n\), and error distribution, we complete \(500\) replications.
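The sketch below generates one replication of this design (a reading of the description above; the error-distribution parameters and the scaling of the errors are assumptions not stated in the text).

```python
import numpy as np

def simulate_sem(p, n, error="lognormal", rng=None):
    """Chain edges v -> v+1 plus extra edges u -> v (u < v-1) with probability 1/2;
    linear coefficients drawn uniformly from +/-(0.1, 0.95)."""
    rng = np.random.default_rng() if rng is None else rng
    B = np.zeros((p, p))                       # B[u, v]: coefficient of Y_u in the equation for Y_v
    for v in range(1, p):
        parents = [v - 1] + [u for u in range(v - 1) if rng.random() < 0.5]
        for u in parents:
            B[u, v] = rng.choice([-1.0, 1.0]) * rng.uniform(0.1, 0.95)
    draw = {"uniform": lambda: rng.uniform(-1, 1, n),
            "lognormal": lambda: rng.lognormal(size=n),
            "gamma": lambda: rng.standard_gamma(1.0, size=n)}[error]
    Y = np.zeros((n, p))
    for v in range(p):                         # generate variables in the true causal order
        eps = draw()
        Y[:, v] = Y @ B[:, v] + (eps - eps.mean()) / eps.std()
    return B, (Y - Y.mean(0)) / Y.std(0)       # data standardized before testing

B, Y = simulate_sem(p=10, n=100, error="lognormal", rng=np.random.default_rng(1))   # n = p^2
```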
Table 1(a) shows the computation time for each procedure; the average time is similar across error distributions so we aggregate the results. We display \(p=10,20,45\); similar results for \(p=15,30\) are in the appendix. RP Test is typically the slowest, followed by Sen and Sen (2014) and MINT. All of these procedures would be prohibitively slow for computing confidence sets of causal orderings. HOLS is the fastest of the existing procedures and is actually faster than our procedure when \(p=45\) and \(n=p^{2}\). However, across other settings, our proposed procedure is often an order of magnitude faster than HOLS and up to 100-10,000x faster than the other procedures.
In addition to the stark computational benefits, the proposed test also performs well statistically. In Table 1(b), we compare the empirical size and power for tests with nominal level \(\alpha=.1\). To measure size we test the (true) \(H_{0}:\mathrm{pa}(p)\subseteq\{1,\ldots,p-1\}\subseteq\mathrm{nd}(p)\); to measure power we test the (false) \(H_{0}:\mathrm{pa}(1)\subseteq\{2,3,\ldots,p\}\subseteq\mathrm{nd}(1)\). In each setting, if a procedure does not exhibit empirical size within \(2\) standard deviations of \(\alpha=.1\), then we do not display the empirical power.
Our proposed procedure controls size within the nominal rate in every setting. It also has the highest (or comparable) power in many settings where the errors are skewed, but tends to do less well when the errors are symmetric. Sen and Sen (2014) tends to exceed the nominal size for skewed distributions and performs worse when \(n=p^{2}\) as opposed to \(n\approx p^{5/4}\). The OLS variant of Shah and Buhlmann (2018) performs well across a variety of settings but does not control size when the errors are uniform and \(n=p^{2}\); the Lasso variant generally fails to control the type I error when \(p=45\) as the linear model is not sparse. MINT exhibits inflated size when the errors are heavy tailed, but generally has good power in settings where the size is controlled. Finally, HOLS controls empirical size across a wide variety of settings--except the lognormal errors. When \(n=p^{2}\), HOLS tends to have good power when the errors are symmetric, but suffers when the errors are skewed.
### Confidence sets
Fig. 2 shows results for \(500\) replicates when constructing \(90\%\) confidence sets using Alg. 2. We fix \(p=10\) and let \(n=500,1000,2500,5000\). We generate random graphs and data as before with two changes: \(u\to v\) for all \(u<v-1\) is included with probability \(1/3\) and each linear coefficient is drawn from \(\beta_{u,v}=z_{u,v}\times g_{u,v}\) where \(z_{u,v}\) is a Rademacher random variable and \(g_{u,v}\sim\mathrm{Gamma}(n^{-1/10},1)\).
The upper left panel shows the proportion of times that the true causal ordering is recovered by DirectLiNGAM (Shimizu et al., 2011b). By construction, the average edge weight
\begin{table}
\end{table}
Table 1. Comparison of goodness-of-fit tests “W” is proposed procedure, “S” Sen and Sen (2014), “R” and “RL” are the OLS and lasso variants of Shah and Buhlmann (2018), “M” Berrett and Samworth (2019), and “H” Schultheiss et al. (2023).
decreases with \(n\) so the causal ordering is not consistently estimated as \(n\) increases. Nonetheless, the bottom left panel shows that the empirical coverage of the confidence sets are all very close to the nominal rate of \(.9\). In addition, the upper middle panel shows that the confidence sets are still increasingly informative in that the proportion of all \(10!\) possible orderings which are included in \(\hat{\Theta}(\mathbf{Y},.1)\) decreases. The proposed procedure is more powerful (i.e., returns a smaller confidence set) for the skewed distributions; however, even with symmetric errors, the confidence sets still contain less than \(2\%\) of all orderings when \(n=5000\). The bottom middle panel shows the proportion of all pairwise ancestral relationships which are certified into \(\hat{\mathcal{A}}_{\cap}\). Again, despite inconsistent estimation of the causal ordering, the proportion of ancestral relations which are recovered increases with \(n\).
The top right panel shows the time required to calculate the confidence set when \(p=10\). The computation time is not monotonic with \(n\) because a larger sample size requires more computation for each considered ordering; however, this may be offset by increased power to reject incorrect orderings so the branch and bound procedure considers fewer orderings. Finally, in the bottom right panel, we show computational feasibility for larger \(p\) by displaying the median computation time (sec\(\times 1000\)) for \(10\) replicates with \(n=10000\). We consider gamma errors for \(p=10,\ldots,20\). We draw random graphs and data as before, except the linear coefficients are selected uniformly from \((-1,1)\). Although the computational time increases rapidly, the procedure is still feasible for \(p=20\). We note, however, that \(4/10\) of the replicates for \(p=20\) failed to finish in the allotted \(48\) hours.
### Confidence intervals with model selection uncertainty
We now consider CIs for causal effects which account for model uncertainty. We draw random graphs and data under the same setup in Section 6.1 with \(p=10\), except we draw the magnitude of the linear coefficients from \(\operatorname{Gamma}(1/2,1)\), so the "signal strength" is fixed instead of decreasing with
Figure 2. _Top Left_: % of times point estimate of ordering is correct. _Top Middle_: Avg proportion of all possible orderings included in \(\hat{\Theta}(\mathbf{Y},\alpha=.1)\). _Top Right_: Avg time (sec\(\times 100\)) for each confidence set. _Bottom Left_: Coverage for \(\hat{\Theta}(\mathbf{Y},\alpha=.1)\). _Bottom Middle_: Avg proportion of all ancestral relations which are included in \(\hat{\mathcal{A}}_{\cap}\). _Bottom Right_: Median time (sec\(\times 1000\)) for each confidence set with \(n=10,000\) and \(p\) varying.
\(n\) as before. We compute 80% CIs for the total effect of \(Y_{4}\) onto \(Y_{7}\) using Alg. 3. We contrast this with a naive procedure that computes CIs using only the adjustment set implied by \(\hat{\theta}\), the causal ordering estimated by DirectLiNGAM. Specifically, if \(\hat{\theta}(4)<\hat{\theta}(7)\), then the naive procedure uses an adjustment set of \(\mathrm{pr}_{\hat{\theta}}(4)\) and returns the typical 80% CI for the regression coefficient of \(Y_{4}\). When \(\hat{\theta}(4)>\hat{\theta}(7)\), 7 is estimated to be a non-descendant of 4, so the returned CI is \(\{0\}\).
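For illustration, a minimal sketch of the naive interval is given below (it assumes an estimated causal ordering, here called theta_hat and obtained for instance from DirectLiNGAM, is already available, and it uses a plain OLS fit; it is not the implementation used in the experiments).

```python
import numpy as np
import statsmodels.api as sm

def naive_ci(Y, theta_hat, cause, effect, alpha=0.2):
    """80% CI for the total effect of Y_cause on Y_effect using only the adjustment
    set implied by the estimated ordering theta_hat (nodes are 0-indexed)."""
    pos = {v: i for i, v in enumerate(theta_hat)}
    if pos[cause] > pos[effect]:
        return 0.0, 0.0                        # effect estimated as a non-descendant of cause
    predecessors = list(theta_hat[: pos[cause]])
    X = sm.add_constant(Y[:, [cause] + predecessors])
    fit = sm.OLS(Y[:, effect], X).fit()
    lo, hi = fit.conf_int(alpha=alpha)[1]      # row 1: coefficient of Y_cause
    return lo, hi

# e.g. the total effect of Y_4 onto Y_7 (0-indexed: nodes 3 and 6) for some data Y:
# lo, hi = naive_ci(Y, theta_hat=list(range(10)), cause=3, effect=6)
```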
Table 2 compares the empirical coverage and lengths of the proposed CIs and the naive CIs. The proposed CIs have empirical coverage above the nominal rate. The "Adj" column shows the proportion of times the point estimate yields a valid adjustment set for the total effect. Given that these values are much smaller than 1, it is unsurprising that the naive CIs cover well below the nominal rate. Under gamma errors--when the model uncertainty is typically smaller--the proposed CIs have median length that is only 2-3 times larger than the naive procedure. However, in the Laplace setting where model uncertainty is larger, the median length is up to 5 times longer.
### Data example
We now analyze data consisting of the daily value-weighted average stock returns for 12 different industry portfolios from 2019 to 2022 (\(n=1008\))1. All stocks from the NYSE, AMEX, and NASDAQ are placed into one of 12 different industries. Using DirectLiNGAM, the estimated causal ordering is: Utilities, Energy, Manufacturing, Business Equipment, Finance, Other, Telecomm, Consumer Non-Durable, Wholesale, Consumer Durables, Chemicals, Healthcare. The 95% confidence set of causal orderings returned for the data contains approximately \(1/15,290\) of the 12! total orderings. The right panel in Fig. 3 summarizes the pairwise ancestral relationships for all non-rejected orderings, where darker shades of the \((u,v)\) element indicates that \(v\) precedes \(u\) in a larger proportion of non-rejected orderings.
Footnote 1: Data available at: [https://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html](https://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html). Accessed: Jan 2023
Notably, Utilities is first in every non-rejected ordering so \(\hat{\mathcal{A}}_{\cap}=\{(8,v)\,:\,v\neq 8\}\), which agrees with the point estimate. At first glance, this may seem odd; however, Utilities are often viewed as a proxy for bonds and directly capture the effect of changing interest rates and market uncertainty. From 2020 to 2022, the performance of American stock markets was largely driven by uncertainty around COVID-19 and federal monetary interventions. Thus, it makes sense that "Utilities" are estimated to be an ancestor of all other industries.
Nonetheless, the other orderings in \(\hat{\Theta}(\mathbf{Y},\alpha=.05)\) have causal implications which differ from the point estimate. We compute the Frechet mean of \(\hat{\Theta}(\mathbf{Y},\alpha=.05)\) using a distance between two orderings which counts the number of implied ancestral relations present in one ordering but not the other; i.e., \(d(\theta,\theta^{\prime})=\left|\{(u,v)\,:\,u\in\mathrm{pr}_{\theta}(v)\text{ and }u\not\in\mathrm{pr}_{\theta^{\prime}}(v)\}\right|.\) The mean
\begin{table}
\begin{tabular}{|c|c c|c c|c c|c|c c|c c|c c|c|} \hline
 & \multicolumn{7}{c|}{Gamma} & \multicolumn{7}{c|}{Laplace} \\ \cline{2-15}
\(n\) & \multicolumn{2}{c|}{Coverage} & \multicolumn{2}{c|}{Avg Len} & \multicolumn{2}{c|}{Med Len} & Adj & \multicolumn{2}{c|}{Coverage} & \multicolumn{2}{c|}{Avg Len} & \multicolumn{2}{c|}{Med Len} & Adj \\
 & MU & NV & MU & NV & MU & NV & & MU & NV & MU & NV & MU & NV & \\ \hline
250 & .97 & .67 & .81 & .19 & .55 & .18 & .37 & .99 & .59 & 1.1 & .19 & .74 & .17 & .25 \\
500 & .96 & .63 & .60 & .14 & .38 & .13 & .42 & .99 & .61 & .86 & .13 & .60 & .12 & .36 \\
1000 & .97 & .71 & .37 & .10 & .24 & .09 & .51 & .99 & .64 & .68 & .10 & .45 & .09 & .44 \\
2000 & .94 & .71 & .22 & .07 & .14 & .06 & .61 & .97 & .65 & .46 & .07 & .30 & .06 & .54 \\ \hline
\end{tabular}
\end{table}
Table 2. Comparison of 80% CIs for the total effect of \(Y_{4}\) onto \(Y_{7}\). “MU” denotes CIs which account for model uncertainty; “NV” denotes naive CIs.
ordering is: Utilities, Wholesale, Consumer Non-durables, Healthcare, Chemicals, Finance, Energy, Business Equipment, Telecomm, Consumer Durables, Manufacturing, Other, and the left panel of Fig. 3 shows the distance from the Frechet mean for all orderings in \(\hat{\Theta}(\mathbf{Y},\alpha)\). The distance of the point estimate is indicated by the vertical line, and we observe that it is actually further from the mean than most other orderings in \(\hat{\Theta}(\mathbf{Y},\alpha)\).
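The ordering distance is simple to compute directly; a small sketch with toy orderings follows (whether the Fréchet mean is restricted to the candidate set or taken over all orderings is an assumption here).

```python
def preceding_pairs(theta):
    """All pairs (u, v) such that u is a predecessor of v under the ordering theta."""
    return {(u, v) for i, u in enumerate(theta) for v in theta[i + 1:]}

def ordering_distance(theta, theta_prime):
    """d(theta, theta') = number of ancestral relations implied by theta but not theta'."""
    return len(preceding_pairs(theta) - preceding_pairs(theta_prime))

def frechet_mean(orderings):
    """Ordering in the candidate set minimizing the summed distance to all others."""
    return min(orderings, key=lambda t: sum(ordering_distance(t, s) for s in orderings))

candidates = [(0, 1, 2, 3), (1, 0, 2, 3), (0, 2, 1, 3)]   # toy set of non-rejected orderings
print(ordering_distance(candidates[0], candidates[1]), frechet_mean(candidates))
```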
Finally, the naive 90% CI for the total effect of Energy onto Manufacturing is \((0.306,0.353)\). Furthermore, since Manufacturing precedes Chemicals in the estimated ordering, we would naively conclude that the total effect of Chemicals onto Manufacturing is 0. In contrast, when accounting for model uncertainty, we produce a 90% CI for the total effect of Energy onto Manufacturing of \(\{0\}\cup(0.040,0.128)\cup(0.171,0.291)\cup(0.301,0.358)\) and a CI for the effect of Chemicals onto Energy of \(\{0\}\cup(0.101,0.909)\cup(0.973,1.101)\).
## 7. Discussion
We have proposed a procedure for quantifying uncertainty when estimating the causal structure. Our goodness-of-fit testing framework returns a confidence set that may be informative about the validity of the posited identification assumptions, as well as which causal orderings the identifying assumptions cannot rule out. The confidence set can also be used to compute various other objects of interest. Notably, this includes confidence intervals for causal effects that also account for model uncertainty and a sub/superset of ancestral relations. Our specific goodness-of-fit test is designed for models in which residuals are independent of regressors under the null hypothesis. Future work could extend this procedure to settings where causal sufficiency may not hold (Wang and Drton, 2020).
While we believe the proposed approach has many desirable characteristics, there, of course, also are a few previously mentioned disadvantages which could be addressed in future work. Most notably, the primary disadvantage of our approach is the computational expense required to test the set of all possible orderings. While the proposed branch and bound procedure can handle medium-sized problems and could scale even larger with a careful parallel implementation, the approach is unlikely to scale too far beyond \(p=20\). Nonetheless, we believe that
Figure 3. Left: distance from Fréchet mean to all other orderings in \(\hat{\Theta}(\mathbf{Y},.05)\). The distance for the point estimate is indicated by the vertical line. Right: A darker shade of the \((u,v)\) cell indicates that \(v\) precedes \(u\) in a larger proportion of orderings in \(\hat{\Theta}(\mathbf{Y},.05)\).
this approach is a useful first step. We believe that this initial method is valuable for practitioners and hope it also spurs further research in statistical methodology for quantifying model uncertainty in estimating causal structures. |
2307.01014 | Microwave Gaussian quantum sensing with a CNOT gate receiver | In quantum illumination (QI) the non-classical correlations between
continuous variable (CV) entangled modes of radiation are exploited to detect
the presence of a target embedded in thermal noise. The extreme environment
where QI outperforms its optimal classical counterpart suggests that
applications in the microwave domain would benefit the most from this new
sensing paradigm. However all the proposed QI receivers rely on ideal photon
counters or detectors, which are not currently feasible in the microwave
domain. Here we propose a new QI receiver that utilizes a CV controlled not
gate (CNOT) in order to perform a joint measurement on a target return and its
retained twin. Unlike other QI receivers, the entire detection process is
carried out by homodyne measurements and square-law detectors. The receiver
exploits two squeezed ancillary modes as a part of the gate's operation. These
extra resources are prepared offline and their overall gain is controlled
passively by a single beamsplitter parameter. We compare our model to other QI
receivers and demonstrate its operation regime where it outperforms others and
achieves optimal performance. Although the main focus of this study is
microwave quantum sensing applications, our proposed device can be built as
well in the optical domain, thus rendering it as a new addition to the quantum
sensing toolbox in a wider sense. | Hany Khalifa, Kirill Petrovnin, Riku Jäntti, Gheorghe Sorin Paraoanu | 2023-07-03T13:45:06Z | http://arxiv.org/abs/2307.01014v2 | # Microwave Gaussian quantum sensing with a CNOT gate receiver
###### Abstract
In _quantum illumination_ (QI) the non-classical correlations between _continuous variable_ (CV) entangled modes of radiation are exploited to detect the presence of a target embedded in thermal noise. The extreme environment where QI outperforms its optimal classical counterpart suggests that applications in the microwave domain would benefit the most from this new sensing paradigm. However all the proposed QI receivers rely on ideal photon counters or detectors, which are not currently feasible in the microwave domain. Here we propose a new QI receiver that utilises a CV _controlled not gate_ (CNOT) in order to perform a joint measurement on a target return and its retained twin. Unlike other QI receivers, the entire detection process is carried out by homodyne measurements and square-law detectors. The receiver exploits two squeezed ancillary modes as a part of the gate's operation. These extra resources are prepared offline and their overall gain is controlled passively by a single beamsplitter parameter. We compare our model to other QI receivers and demonstrate its operation regime where it outperforms others and achieves optimal performance. Although the main focus of this study is microwave quantum sensing applications, our proposed device can be built as well in the optical domain, thus rendering it as a new addition to the quantum sensing toolbox in a wider sense.
## I Introduction
Quantum sensing is a new paradigm for information detection that utilises non-classical features of the electromagnetic (EM) radiation to push detection sensitivities beyond the classical limits. Entanglement, squeezing and superposition of quantum states are the main resources upon which many of the quantum sensing architectures are built [1, 2]. Besides quantum sensing, quantum entanglement is an essential feature of many other quantum-technology applications, as it allows us to establish remote correlations between two jointly prepared EM radiation modes. Quantum cryptography [3], quantum computing [4], quantum communication [5] and quantum-enhanced metrology [6] have all exploited quantum entanglement to outperform their classical counterparts. Nonetheless, quantum entanglement is a fragile phenomenon, susceptible to environment-induced decoherence in the form of excess noise photons.
Quantum illumination is a quantum sensing protocol that can retrieve information sent over a noisy, entanglement-breaking channel [7]. The protocol utilizes entangled _two mode squeezed vacuum_ (TMSV) states to detect the presence or absence of a target embedded in a thermal environment [8]. A probe (denoted as _signal_) is sent to illuminate a target, while its twin (denoted as _idler_) is retained in order to perform a correlation measurement on the target's return. The operational domain of QI is the low _signal-to-noise ratio_ (SNR) limit, where the optimum QI receiver enjoys a 6dB advantage in error exponent over the optimum classical one [8]. This suggests that the microwave domain is probably the most natural setting for QI experiments. Unfortunately, up to
this moment there is no known physical realization of the optimum QI receiver. Currently, up to 3 dB error exponent enhancement can be attained theoretically with the available hardware. The _optical parametric amplifier_ (OPA) and _phase conjugate_ (PC) receivers [9] are the most remarkable receiver architectures that have been demonstrated experimentally to reap this sub-optimal advantage. The full 6 dB advantage can only be hypothetically attained with the complicated _sum frequency generation_ (SFG) receiver and its extremely intricate upgrade, _feed-forward_ SFG (FF-SFG) [10]. Further, when non-ideal storage of the idler mode is considered, the performance of all the mentioned receivers is greatly affected [11]: 6 dB of idler loss is enough to rule out any quantum advantage. However, all of the aforementioned designs relied on ideal photon counters operating in the low SNR regime in order to acknowledge a successful detection event. For quantum optical experiments, despite the insignificance of thermal background noise, efficient photon counting with low dark counts requires the use of superconductors and therefore operation at low temperatures. For microwave-frequency experiments, due to the extremely small powers at the single quantum level, microwave photon counters are only at the proof of concept stage.
In this article we propose a new microwave QI receiver that operates without the need for ideal single photon counters. Our proposed model is based on the CV CNOT operation [12, 13, 14, 15]. Under this unitary gate the signal and idler quadratures transform into a superposition that is directly related to their cross-correlations features [16, 17]. Operationally speaking, a CV CNOT gate utilises two quadrature-squeezed ancillary modes, such that one is position-squeezed, while the other is momentum-squeezed. In order to avoid the cumbersome process of nonlinear coupling upon a receiving event, it has been demonstrated in [13] that an offline preparation of the squeezed resources is both equivalent and more efficient than an online nonlinear coupling of a mode pair. The overall interaction gain can be controlled by a single beamsplitter parameter. This controlled operation gives us the ability to smoothly choose the operational domain where our device can outperform other QI receivers.
The basic idea behind the operation of the proposed receiver is simple: In the event of receiving a small fraction of the signal-idler initial correlations, the CNOT receiver strengthens them by a scalar value equal to the receiver's controllable interaction gain. This is made possible due to the entangling properties of the universal CNOT gate. On the other hand, when these correlations are lost, the receiver outputs uncorrelated noise beams. Then the signal levels of both possible cases are determined by homodyning the receiver's output field quadratures [18, 19]. However, since the average homodyne currents of the quadratures of a TMSV necessarily vanish, we propose feeding the output homodyne current to a square law detector, a spectrum analyzer (SA) for instance, in order to overcome this problem. This has been the standard method in the optical domain [12, 20, 21, 22] and can be straightforwardly replicated in microwave quantum optics. Further, our device considers the non-ideal storage of the idler mode (Fig. 1), modelled by a beamsplitter with transmissivity \(T\), where the beamsplitter's unused port injects vacuum noise. Finally, it is worth mentioning that recently in the domain of microwave circuit quantum electrodynamics (CQED), there have been other successful implementations of the universal CNOT gate [23, 24, 25]. We have opted for this specific implementation since its performance can be tracked analytically in a straightforward manner and can serve as a good model to calculate the receiver's internal noise. Thus, as long as a CNOT gate platform is capable of performing the gain-controlled, generalized CNOT interaction, one should be able to replicate the results of this study in both the microwave and optical domains.
This article is organized as follows: in Section II we briefly describe the QI enhanced sensing protocol, and in Section III we describe the theory of operation of our CNOT receiver, focusing on the ability of our receiver design to extract the signal-idler cross-correlations. Section IV is devoted to the performance analysis of our device: a comparison between our model and the OPA, PC and SFG receivers is carried out in detail, with the main objective of demonstrating the operational regime where our device can outperform the others. Finally, Section V concludes the article.
## II QI protocol
We consider applications where a transmitter is sending its information over a noisy and lossy channel. The receiver, potentially co-located with the transmitter, stores a mode that shares quantum correlations with the transmitted signal. Quantum target detection [26], quantum radars [27] and quantum backscatter communications [28] are perfect examples of these applications.
At the transmitter a pump field excites a non-linear element to generate \(K\) independent signal-idler mode pairs via _spontaneous parametric down conversion_ (SPDC), \(\{a_{S}^{j},a_{I}^{j}\},\ 1\leq j\leq K\)[29, 30, 31]. The total number of probe signals is equal to \(K=\tau W\), where \(\tau\) is the duration of a transmission event and \(W\) is the phase-matching bandwidth of the nonlinear element. For our purposes the archetypal nonlinear element in the microwave domain is the _Josephson parametric amplifier_ (JPA).
Figure 1: The quantum illumination protocol. At the transmitter correlated signal-idler pairs are generated, where the signal is sent towards a suspected region, while the idler is retained in a lossy memory element for a correlation measurement on the target’s return. When there is no object, that is \(\gamma=0\), the target return is a noisy environment mode \(a_{B}\).
Depending on the design, the operating frequency of a JPA is in the range of \(4-8\)GHz [31]. Another celebrated device that can generate microwave signal-idler entangled pairs is the _Josephson ring modulator_ (JRM) [32, 33]. In the case of a JPA source, the bandwidth of the generated twin pairs is typically \(1\)MHz, hence for a total number of probe pairs \(K=10^{6}\), the protocol duration would be \(\tau\approx 1s\). However, the bandwidth can be substantially increased up to \(100\)MHz when a _travelling wave parametric amplifier_ (TWPA) [30] source is utilized. This would result in a dramatically faster QI protocol.
Each signal-idler pair is in a TMSV that admits a number state representation,
\[|\Psi\rangle_{SI}=\sum_{n=0}^{\infty}\sqrt{\frac{N_{S}^{n}}{(N_{S}+1)^{n+1}}}|n\rangle_{S}|n\rangle_{I}, \tag{1}\]
where \(N_{S}\) is the mean photon number in each of the signal and idler modes, i.e., \(\langle a_{S}^{\dagger}a_{S}\rangle=\langle a_{I}^{\dagger}a_{I}\rangle=N_{S}\).
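As a quick numerical check of Eq. (1) (an illustrative sketch with an arbitrary value of \(N_{S}\)), the photon-number distribution of either reduced mode is thermal and its mean recovers \(N_{S}=\sinh^{2}r\).

```python
import numpy as np

N_S = 0.01                                   # low-brightness signal, N_S << 1
r = np.arcsinh(np.sqrt(N_S))                 # squeezing magnitude, since N_S = sinh^2(r)

n = np.arange(200)
p_n = N_S**n / (N_S + 1.0) ** (n + 1)        # |<n, n | Psi>|^2 from Eq. (1)

print(p_n.sum())                             # ~1: the state is normalized
print((n * p_n).sum(), np.sinh(r) ** 2)      # both ~N_S: mean photon number per mode
```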
It is also useful to express the above TMSV as a squeezing operation applied to a vacuum state,
\[|\Psi\rangle_{SI}=S(\gamma)|0,0\rangle_{SI},\ \ \ \ \ S(\gamma)=e^{(\gamma a_{S}^{\dagger}a_{I}^{\dagger}-\gamma^{*}a_{S}a_{I})}, \tag{2}\]
where the complex squeezing parameter is defined as \(\gamma=re^{i\varphi}\), with \(r\) the squeezing magnitude and \(\varphi\) the angle of the squeezing axis. Relating the two expressions to each other, the number of photons in either the signal or idler mode can be expressed in terms of the squeezing magnitude as \(\langle a_{S}^{\dagger}a_{S}\rangle=\langle a_{I}^{\dagger}a_{I}\rangle=\sinh^{2}r\).
The entanglement between each pair is quantified by their \(4\times 4\) covariance matrix.
\[C(S,I)=\begin{pmatrix}A&B\\ B^{\top}&D\end{pmatrix}, \tag{3}\]
where the matrices \(A,B,B^{\top},D\) are defined as follows: \(A_{kl}=(1/2)\langle q_{S}^{k}q_{S}^{l}+q_{S}^{l}q_{S}^{k}\rangle-\langle q_{S}^{k}\rangle\langle q_{S}^{l}\rangle=\text{diag}(N_{S}+1/2,N_{S}+1/2)\), \(B_{kl}=(1/2)\langle q_{S}^{k}q_{I}^{l}+q_{I}^{l}q_{S}^{k}\rangle-\langle q_{S}^{k}\rangle\langle q_{I}^{l}\rangle=\text{diag}([N_{S}(N_{S}+1)]^{1/2},[N_{S}(N_{S}+1)]^{1/2})\), \(B_{kl}^{\top}=(1/2)\langle q_{I}^{k}q_{S}^{l}+q_{S}^{l}q_{I}^{k}\rangle-\langle q_{I}^{k}\rangle\langle q_{S}^{l}\rangle=\text{diag}([N_{S}(N_{S}+1)]^{1/2},[N_{S}(N_{S}+1)]^{1/2})\), \(D_{kl}=(1/2)\langle q_{I}^{k}q_{I}^{l}+q_{I}^{l}q_{I}^{k}\rangle-\langle q_{I}^{k}\rangle\langle q_{I}^{l}\rangle=\text{diag}(N_{S}+1/2,N_{S}+1/2)\), where \(k,l=1,2\), \(q^{1}=X=(a+a^{\dagger})/\sqrt{2}\), \(q^{2}=Y=-i(a-a^{\dagger})/\sqrt{2}\) are the mode quadratures, \(B^{\top}\) is the transpose of \(B\) and diag is a \(2\times 2\) diagonal matrix.
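A small numerical sketch of Eq. (3), using the quadrature ordering \((X_{S},Y_{S},X_{I},Y_{I})\) and the block entries exactly as quoted above (illustration only):

```python
import numpy as np

def tmsv_covariance(N_S):
    """4x4 covariance C(S, I) of Eq. (3) in the ordering (X_S, Y_S, X_I, Y_I)."""
    A = (N_S + 0.5) * np.eye(2)                    # single-mode blocks A and D
    B = np.sqrt(N_S * (N_S + 1.0)) * np.eye(2)     # signal-idler cross block
    return np.block([[A, B], [B.T, A]])

C = tmsv_covariance(N_S=0.01)
print(C)
print(np.all(np.linalg.eigvalsh(C) > 0))           # True: a valid, positive-definite covariance
```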
As for channel considerations, we will assume a lossy transmission medium overwhelmed by noise photons. Hypothetically, two transmission scenarios might arise under this channel model.
_Hypothesis 1_ (alternative hypothesis)(\(H=1\)): The transmitted signal reaches the receiver with a very small probability. This can be modelled by a low transmissivity beamsplitter that mixes the signal with a bath mode
\[a_{R}=\sqrt{\eta}a_{S}+\sqrt{1-\eta}a_{B}, \tag{4}\]
where \(a_{R}\) is the received mode, \(\eta\) is the beamsplitter's transmissivity, and \(a_{B}\) is a zero mean Langevin bath mode, \(\langle a_{B}\rangle=\langle a_{B}^{\dagger}\rangle=0\), with mean photon number \(\langle a_{B}^{\dagger}a_{B}\rangle=N_{B}\), and zero cross correlations \(\langle a_{B_{k}}a_{B_{l}}\rangle=0,\ \forall k\neq l\), since a thermal state is diagonal in the number basis.
_Hypothesis 0_ (null hypothesis) \((H=0)\): The transmitted signal is completely lost and replaced by a bath mode, \(a_{B}\), i.e., \(a_{R}=a_{B}\).
Simultaneously, we consider storing the idler mode in a leaky memory element, which can be represented by a pure loss channel with transmissivity \(T\)
\[a_{\mathrm{M}}=\sqrt{T}a_{I}+\sqrt{1-T}a_{V} \tag{5}\]
where \(a_{V}\) is an environment vacuum mode. It is worth noting that recently there has been some notable progress regarding microwave quantum memories, with efficiency as high as \(80\%\)[34, 35].
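Both Eq. (4) and Eq. (5) are beamsplitter maps, so their joint effect on the signal-idler covariance can be sketched directly; in the illustration below the environment is thermal for the return channel and vacuum for the memory, and the channel parameters are arbitrary.

```python
import numpy as np

def apply_loss(C, mode, tau, N_env):
    """Mix mode 0 (signal/return) or mode 1 (idler/memory) of a two-mode covariance C
    with an uncorrelated environment of mean photon number N_env on a beamsplitter of
    transmissivity tau (vacuum variance 1/2 convention)."""
    C = C.copy()
    own = slice(2 * mode, 2 * mode + 2)
    other = slice(2 * (1 - mode), 2 * (1 - mode) + 2)
    C[own, own] = tau * C[own, own] + (1 - tau) * (N_env + 0.5) * np.eye(2)
    C[own, other] *= np.sqrt(tau)                  # cross-correlations shrink by sqrt(tau)
    C[other, own] *= np.sqrt(tau)
    return C

N_S, N_B, eta, T = 0.01, 20.0, 0.01, 0.8           # illustrative values only
A = (N_S + 0.5) * np.eye(2)
B = np.sqrt(N_S * (N_S + 1.0)) * np.eye(2)
C = np.block([[A, B], [B, A]])                     # TMSV covariance of Eq. (3)
C_H1 = apply_loss(apply_loss(C, 0, eta, N_B), 1, T, 0.0)   # Eq. (4) then Eq. (5)
print(C_H1)
```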
## III Theory of operation
### The CNOT receiver
The preferred operating regime where QI's advantage is manifest is the low SNR, such that \(N_{S}\ll 1\ll N_{B}\). In this domain the signal-idler cross correlations of a TMSV, \(\langle a_{S}a_{I}\rangle+\langle a_{S}^{\dagger}a_{I}^{\dagger}\rangle\), where \(\langle a_{S}a_{I}\rangle=(1/2)\langle(X_{S}+iY_{S})(X_{I}+iY_{I})\rangle=(1/2)\big{(}\langle X_{S}X_{I}\rangle-\langle Y_{S}Y_{I}\rangle\big{)}=\sqrt{N_{S}(1+N_{S})}\), \(\langle a_{S}^{\dagger}a_{I}^{\dagger}\rangle=(1/2)\langle(X_{S}-iY_{S})(X_{I}-iY_{I})\rangle=(1/2)\big{(}\langle X_{S}X_{I}\rangle-\langle Y_{S}Y_{I}\rangle\big{)}=\sqrt{N_{S}(1+N_{S})}\), \(\langle a_{S}a_{I}\rangle+\langle a_{S}^{\dagger}a_{I}^{\dagger}\rangle=2\sqrt{N_{S}(1+N_{S})}\), exceed the maximum that can be attained classically with equal strength, uncorrelated coherent pairs, with average photon number \(N_{S}\) each, \(\langle\alpha|X_{\mathrm{m}}|\alpha\rangle=\sqrt{2}\,\text{Re}(\alpha)\), \(\langle\alpha|Y_{\mathrm{m}}|\alpha\rangle=\sqrt{2}\,\text{Im}(\alpha)\), \(\langle\alpha,\alpha|X_{S}X_{I}|\alpha,\alpha\rangle=2\,[\text{Re}(\alpha)]^{2}\), \(\langle\alpha,\alpha|Y_{S}Y_{I}|\alpha,\alpha\rangle=2\,[\text{Im}(\alpha)]^{2}\), \(\langle\alpha,\alpha|X_{S}X_{I}+Y_{S}Y_{I}|\alpha,\alpha\rangle=2\,|\alpha|^{2}=2N_{S}\), where \(\mathrm{m}=S,I\), \(\alpha\) is the coherent field complex amplitude, and \([X_{\mathrm{m}},Y_{\mathrm{n}}]=i\delta_{\mathrm{mn}}\) is the field quadratures commutation relation, where the reduced Planck's constant is set equal to one, \(\hbar=1\).
A receiver's main task in QI is to extract the aforementioned signal-idler cross correlations from a target return and its retained idler twin, \(\langle a_{R}a_{\mathrm{M}}\rangle\). In terms of the quadrature operators, this quantity can be expanded into four terms \((1/2)\big{[}X_{R}X_{\mathrm{M}}+iX_{R}Y_{\mathrm{M}}+iY_{R}X_{\mathrm{M}}-Y_ {R}Y_{\mathrm{M}}\big{]}\). Attempting to perform a _heterodyne_ measurement on each mode separately is probably the most economic way to gain full access to the field's quadratures [36]. However, the splitting of each mode first on a balanced beamsplitter, would add an additional 3dB loss to the overall output SNR [37, 18]. Along with detectors inefficiencies, this would rule out any quantum advantage over the optimal classical illumination (CI) receiver. Further, the Gaussian Wigner statistics of a directly homodyned squeezed state is non-negative and a nonlinear detection scheme, such as photon counting, is needed to reveal the non classical signal-idler signature [38, 39, 40]. In this regard, the previous installments of the QI protocol opted for single photon counters as detectors for their receiver designs.
In order to avoid the extra losses of double homodyning, and the status-quo technological infeasibility of microwave single photon counters, our proposed CV CNOT receiver mediates a controllable interaction between a target's return and
a stored idler that can access their non-classical correlations by creating an observable quantity corresponding to their relative momentum and total position quadratures. Further, as mentioned in the introduction, the controllable gain of the receiver can in fact strengthen these correlations rendering them more visible for successful detection. Finally, in order to satisfy the required detector's non linearity described previously, our receiver utilises a square law detection chain [41], composed of a balanced double port homodyne detector and a spectrum analyzer, where the corresponding measurement outcome is the quadratures variances or powers [22]. This has been the standard method of detection in quantum optical experiments [12, 18, 20, 21]. For completeness we expose the details of this method in appendix A. Besides making the above arguments more rigorous, we now focus on the mathematical representation of our proposed device. We first show how it extracts the signal-idler cross correlation signature, then we see how the device's controllable gain can enhance it.
The CNOT receiver transforms a returned mode and its stored idler twin as follows,
\[X_{R}^{\rm(out)} = e^{iGY_{\rm M}X_{R}}\ X_{R}\ e^{-iGY_{\rm M}X_{R}}\] \[= X_{R}\] \[Y_{R}^{\rm(out)} = e^{iGY_{\rm M}X_{R}}\ Y_{R}\ e^{-iGY_{\rm M}X_{R}}\] \[= Y_{R}+[iGX_{R}Y_{\rm M},Y_{R}]+0,\] \[= Y_{R}-GY_{\rm M}\] \[X_{\rm M}^{\rm(out)} = e^{iGY_{\rm M}X_{R}}\ X_{\rm M}\ e^{-iGY_{\rm M}X_{R}}\] \[= X_{\rm M}+[iGX_{R}Y_{\rm M},X_{\rm M}]+0,\] \[= X_{\rm M}+GX_{R},\] \[Y_{\rm M}^{\rm(out)} = e^{iGY_{\rm M}X_{R}}\ Y_{\rm M}\ e^{-iGY_{\rm M}X_{R}} \tag{6}\] \[= Y_{\rm M}\]
where we have utilised the operator expansion formula for any two non commuting operators \([A,B]\neq 0\), \(e^{\lambda A}Be^{-\lambda A}=B+\lambda[A,B]+\big{(}\lambda^{2}/2!\big{)}[A,[A,B] ]+....\), the commutation relation between the field's quadratures \([X,Y]=i\), such that \(\hbar=1\), and \(G\) is the interaction gain [14]. Note that \([X_{R}^{\rm(out)},Y_{R}^{\rm(out)}]=[X_{M}^{\rm(out)},Y_{M}^{\rm(out)}]=i\) and the rest of the commutators are zero, as expected for a unitary transformation.
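Since Eq. (6) is linear in the quadratures, the gate can be written as a symplectic matrix acting on \((X_{R},Y_{R},X_{\rm M},Y_{\rm M})\); the short sketch below spells this out and checks numerically that the commutation relations are preserved, as noted above.

```python
import numpy as np

def cnot_symplectic(G):
    """Quadrature map of Eq. (6) on (X_R, Y_R, X_M, Y_M):
    X_R -> X_R, Y_R -> Y_R - G*Y_M, X_M -> X_M + G*X_R, Y_M -> Y_M."""
    S = np.eye(4)
    S[1, 3] = -G
    S[2, 0] = G
    return S

S = cnot_symplectic(G=2.0)
omega_1 = np.array([[0.0, 1.0], [-1.0, 0.0]])          # [X, Y] = i for each mode
Omega = np.block([[omega_1, np.zeros((2, 2))], [np.zeros((2, 2)), omega_1]])
print(np.allclose(S @ Omega @ S.T, Omega))             # True: the gate is a symplectic (unitary Gaussian) map
```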
In Fig. (2) we depict the CNOT receiver as a unitary gate. As can be seen, the receiver has two different quadrature input tuples, of two elements each, and their corresponding two observable output tuples. The first element of the first output tuple, \(X_{R}^{\rm(out)}\), is the unaffected mode, whereas the second, \(Y_{R}^{\rm(out)}\), carries information on both the returned and stored momentum quadratures. On the other hand, the first element of the second output tuple, \(X_{\rm M}^{\rm out}\), carries the position information of both the return and stored modes, whereas the second, \(Y_{\rm M}^{\rm(out)}\), is the unaffected mode. Ideally this sort of interaction is probed in order to perform a _non demolition_ measurement on the unaffected quadratures by only measuring the translated ones [19, 42]. After that each output is mixed on a balanced beamsplitter with a vacuum mode, defined by its corresponding conjugate quadratures tuple, \((X_{r},Y_{r})\). Finally, the powers of the four outputs are measured by a spectrum analyzer module (SA) comprising a double port homodyne detector (see details in appendix A). In order to verify a successful implementation of the receiver's operation, and hence a successful capturing of sought cross correlations, the four conjugate quadratures corresponding to the output return and memory modes have to be measured simultaneously. As can be deduced from Eq. (6), the power (second moment) of the unaffected quadrature is added to the translated one respectively for each output. In the event of a failed operation, that is, \(G=0\), the powers are equal respectively. We now proceed with calculating the mode variances, i.e., (signal powers), of the involved quadratures as being measured practically, and demonstrate the previous ideas mathematically.
### Extracting the signal-idler cross correlation
The receiver's outputs as demonstrated in Fig. (2) are mode quadratures. The information contained in the signal-idler cross correlations can be accessed by measuring their respective variances, which in the present case coincide with the signal powers (second moments), since we are dealing with zero mean fields. As pointed out earlier, in order to measure the quadrature variances simultaneously from the signal return and stored idler, \(a_{R}^{\rm(out)}\), \(a_{\rm M}^{\rm(out)}\), the modes are first split individually on a balanced beamsplitter, where a vacuum mode enters from the unused port, and are then detected by a square law detector. This results in a 3dB loss of the measured quadrature. For illustration let's consider the first receiver's output: after a balanced beamsplitter the first output is homodyned for the position quadrature, thus \(\bar{a}_{R}^{\rm(out)}=(1/\sqrt{2})(a_{R}^{\rm(out)}+a_{r})\), \(\bar{X}_{R}^{\rm(out)}=(1/2)(a_{R}^{\rm(out)}+a_{r}+a_{R}^{\rm(out)\dagger}+a_{r}^{\dagger})\), \(\langle[\bar{X}_{R}^{\rm(out)}]^{2}\rangle=(1/2)\big{\langle}(X_{R}^{\rm(out)}+X_{r})(X_{R}^{\rm(out)}+X_{r})\big{\rangle}=(1/2)\big{(}\langle[X_{R}^{\rm(out)}]^{2}\rangle+\langle X_{r}^{2}\rangle\big{)}\). Similarly the second beamsplitter output can be homodyned for the momentum quadrature \(Y_{R}^{\rm(out)}\). As can be seen, the 3dB noise penalty is now visible in the signal's power as the intensity of the original field is halved. The rest of the quadratures corresponding to the receiver's second output are treated in a similar manner. Thus, to keep the notation simple, we include the noise penalty directly in the following calculations,
Figure 2: A unitary gate representation of the CNOT receiver. Each spectrum analyzer (SA) module comprises a double port homodyne detector as described in appendix A. The tuple (\(X_{r},Y_{r}\)) represents a vacuum mode.
whereas the overall vacuum noise is added at the end of this derivation.
Suppose that the alternative hypothesis is true, i.e, \(H=1\), then
\[\langle[X_{\rm M}^{\rm(out)}]^{2}\rangle=\langle[X_{\rm M}^{2}+2GX_{\rm M}X_{R}+G ^{2}X_{R}^{2}]\rangle, \tag{7}\]
where,
\[\langle X_{\rm M}^{2}\rangle =\frac{1}{4}\langle(\sqrt{T}a_{I}+\sqrt{1-T}a_{V}+\sqrt{T}a_{I}^ {\dagger}+\sqrt{1-T}a_{V}^{\dagger})\rangle\] \[(\sqrt{T}a_{I}+\sqrt{1-T}a_{V}+\sqrt{T}a_{I}^{\dagger}+\sqrt{1-T} a_{V}^{\dagger})\rangle\] \[=\frac{(2TN_{S}+1)}{4}\] \[\langle X_{\rm M}X_{R}\rangle =\frac{1}{4}\langle(\sqrt{T}a_{I}+\sqrt{1-T}a_{V}+\sqrt{T}a_{I}^ {\dagger}+\sqrt{1-T}a_{V}^{\dagger})\rangle\] \[(\sqrt{\eta}a_{S}+\sqrt{1-\eta}a_{B}+\sqrt{\eta}a_{S}^{\dagger}+ \sqrt{1-\eta}a_{B}^{\dagger})\rangle\] \[=\frac{\sqrt{\eta TN_{S}(1+N_{S})}}{2}\] \[\langle X_{R}^{2}\rangle =\frac{1}{4}\langle(\sqrt{\eta}a_{S}+\sqrt{1-\eta}a_{B}+\sqrt{ \eta}a_{S}^{\dagger}+\sqrt{1-\eta}a_{B}^{\dagger})\rangle\] \[(\sqrt{\eta}a_{S}+\sqrt{1-\eta}a_{B}+\sqrt{\eta}a_{S}^{\dagger}+ \sqrt{1-\eta}a_{B}^{\dagger})\rangle\] \[=\frac{1}{4}\big{(}\eta[1+2\langle a_{S}^{\dagger}a_{S}\rangle]+( 1-\eta)[1+2\langle a_{B}^{\dagger}a_{B}\rangle]\big{)}\] \[=\frac{\big{(}\eta(1+2N_{S})+(1-\eta)(1+2N_{B})\big{)}}{4} \tag{8}\]
where in Eq. (8) we have used the following: \(\langle a_{S}a_{S}\rangle=\langle a_{I}a_{I}\rangle=\langle a_{B}a_{B}\rangle=\langle a_{S}^{\dagger}a_{S}^{\dagger}\rangle=\langle a_{I}^{\dagger}a_{I}^{\dagger}\rangle=\langle a_{B}^{\dagger}a_{B}^{\dagger}\rangle=\langle a_{S}^{\dagger}a_{I}\rangle=\langle a_{S}^{\dagger}a_{B}\rangle=\langle a_{I}^{\dagger}a_{B}\rangle=0\), \(\langle a_{S}^{\dagger}a_{V}\rangle=\langle a_{I}^{\dagger}a_{V}\rangle=\langle a_{B}^{\dagger}a_{V}\rangle=\langle a_{S}^{\dagger}a_{V}^{\dagger}\rangle=\langle a_{I}^{\dagger}a_{V}^{\dagger}\rangle=0\), \(\langle a_{S}^{\dagger}a_{S}\rangle=\langle a_{I}^{\dagger}a_{I}\rangle=N_{S}\), \([a_{S},a_{S}^{\dagger}]=[a_{I},a_{I}^{\dagger}]=[a_{V},a_{V}^{\dagger}]=1\), \(\langle a_{S}a_{I}\rangle=\langle 0,0|(a_{S}\cosh r+e^{i\varphi}\sinh r\,a_{I}^{\dagger})(a_{I}\cosh r+e^{i\varphi}\sinh r\,a_{S}^{\dagger})|0,0\rangle=\sinh(r)\cosh(r)=\sqrt{N_{S}(1+N_{S})}\), \(\langle a_{S}^{\dagger}a_{I}^{\dagger}\rangle=\langle 0,0|(a_{S}^{\dagger}\cosh r+e^{-i\varphi}\sinh r\,a_{I})(a_{I}^{\dagger}\cosh r+e^{-i\varphi}\sinh r\,a_{S})|0,0\rangle=\sinh(r)\cosh(r)=\sqrt{N_{S}(1+N_{S})}\), \(\sinh^{2}(r)=N_{S}\), and we have set \(\varphi=0\).
Then a similar calculation of the momentum translated output yields
\[\langle[Y_{\rm M}^{\rm(out)}]^{2}\rangle=\langle[Y_{R}^{2}-2GY_{R}Y_{\rm M}+G ^{2}Y_{\rm M}^{2}]\rangle, \tag{9}\]
where,
\[\langle Y_{R}^{2}\rangle =\frac{-1}{4}\big{\langle}(\sqrt{\eta}a_{S}+\sqrt{1-\eta}a_{B}- \sqrt{\eta}a_{S}^{\dagger}-\sqrt{1-\eta}a_{B}^{\dagger})\] \[\quad(\sqrt{\eta}a_{S}+\sqrt{1-\eta}a_{B}-\sqrt{\eta}a_{S}^{ \dagger}-\sqrt{1-\eta}a_{B}^{\dagger})\big{\rangle}\] \[=\frac{1}{4}\big{(}\eta[1+2\langle a_{S}^{\dagger}a_{S}\rangle]+( 1-\eta)[1+2\langle a_{B}^{\dagger}a_{B}\rangle]\big{)}\] \[=\frac{\big{(}\eta(1+2N_{S})+(1-\eta)(1+2N_{B})\big{)}}{4},\] \[\langle Y_{R}Y_{\rm M}\rangle =\] \[\frac{-1}{4}\big{\langle}(\sqrt{T}a_{I}+\sqrt{1-T}a_{V}-\sqrt{T}a _{I}^{\dagger}-\sqrt{1-T}a_{V}^{\dagger})\] \[\quad(\sqrt{\eta}a_{S}+\sqrt{1-\eta}a_{B}-\sqrt{\eta}a_{S}^{ \dagger}-\sqrt{1-\eta}a_{B}^{\dagger})\big{\rangle}\] \[=\frac{-\sqrt{\eta TN_{S}(1+N_{S})}}{2} \tag{10}\]
\[\langle Y_{\rm M}^{2}\rangle =\frac{-1}{4}\big{\langle}(\sqrt{T}a_{I}+\sqrt{1-T}a_{V}-\sqrt{T}a_{I}^{\dagger}-\sqrt{1-T}a_{V}^{\dagger})\] \[\quad(\sqrt{T}a_{I}+\sqrt{1-T}a_{V}-\sqrt{T}a_{I}^{\dagger}-\sqrt{1-T}a_{V}^{\dagger})\big{\rangle}\] \[=\frac{(2TN_{S}+1)}{4} \tag{11}\]
As for the unaffected modes, they are equal to \(\langle[X_{R}^{\rm(out)}]^{2}\rangle=\langle X_{R}^{2}\rangle\), \(\langle[Y_{\rm M}^{\rm(out)}]^{2}\rangle=\langle Y_{\rm M}^{2}\rangle\). It is clear now from Eqs. (8) and (10) that the receiver's translated modes \(\langle[X_{\rm M}^{\rm(out)}]^{2}\rangle\), \(\langle[Y_{R}^{\rm(out)}]^{2}\rangle\) indeed carry the total signal-idler cross correlation signature \(\langle X_{\rm M}X_{R}\rangle\), \(\langle Y_{R}Y_{\rm M}\rangle\), nonetheless accompanied by unwanted noise. The receiver's output when \(H=1\) is the sum of the signal powers of all the receiver's output quadratures
\[\mathcal{I}_{1} =\langle X_{\rm M}^{2}\rangle+\langle Y_{\rm M}^{2}\rangle+2G[ \langle X_{\rm M}X_{R}\rangle-\langle Y_{R}Y_{\rm M}\rangle]\] \[+G^{2}[\langle X_{R}^{2}\rangle+\langle Y_{\rm M}^{2}\rangle]+ \langle X_{R}^{2}\rangle+\langle Y_{R}^{2}\rangle+\langle X_{V}^{2}\rangle+ \langle Y_{V}^{2}\rangle \tag{12}\]
where the vacuum contribution stems from the noise penalty on all measurements.
When the null hypothesis is true, \(H=0\), the target return is replaced with a bath mode and the four receiver's outputs become
\[X_{R}^{\rm(out)} =X_{B}\] \[Y_{R}^{\rm(out)} =Y_{B}-GY_{\rm M}\] \[X_{\rm M}^{\rm(out)} =X_{\rm M}+GX_{B}\] \[Y_{\rm M}^{\rm(out)} =Y_{\rm M} \tag{13}\]
Then,
\[\langle[X_{\rm M}^{\rm(out)}]^{2}\rangle =\langle X_{\rm M}^{2}\rangle+2G\langle X_{\rm M}X_{B}\rangle+G^{2} \langle X_{B}^{2}\rangle,\] \[\langle[X_{R}^{\rm(out)}]^{2}\rangle =\langle X_{B}^{2}\rangle=\frac{1}{2}\langle(a_{B}+a_{B}^{\dagger} )(a_{B}+a_{B}^{\dagger})\rangle=\frac{(1+2N_{B})}{4},\] \[\langle X_{\rm M}^{2}\rangle =\frac{(2TN_{S}+1)}{4},\] \[\langle[Y_{R}^{\rm(out)}]^{2}\rangle =\langle Y_{B}^{2}\rangle-2G\langle Y_{B}Y_{\rm M}\rangle+G^{2} \langle Y_{\rm M}^{2}\rangle,\] \[\langle Y_{B}^{2}\rangle =\frac{-1}{4}\langle(a_{B}-a_{B}^{\dagger})(a_{B}-a_{B}^{\dagger})\rangle\] \[=\frac{(1+2N_{B})}{4},\] \[\langle[Y_{\rm M}^{\rm(out)}]^{2}\rangle =\langle Y_{\rm M}^{2}\rangle=\frac{(2TN_{S}+1)}{4} \tag{14}\]
where \(\langle X_{\rm M}X_{B}\rangle=\langle Y_{B}Y_{\rm M}\rangle=0\), since the bath mode is not correlated with the stored idler.
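For a quick numerical comparison of the two hypotheses, the sketch below assembles the output powers from Eqs. (8)-(14) (an illustration with arbitrary parameter values; the vacuum penalty is evaluated in the same convention) and shows that the \(2G\) cross-correlation term is present only under \(H=1\).

```python
import numpy as np

def output_powers(N_S, N_B, eta, T, G):
    """Sum of the four output quadrature powers under H = 1 and H = 0,
    assembled from Eqs. (7)-(14); vac is the <X_V^2> + <Y_V^2> penalty of Eq. (12)."""
    xM2 = yM2 = (2 * T * N_S + 1) / 4
    xR2 = yR2 = (eta * (1 + 2 * N_S) + (1 - eta) * (1 + 2 * N_B)) / 4
    xB2 = yB2 = (1 + 2 * N_B) / 4
    cross = np.sqrt(eta * T * N_S * (1 + N_S)) / 2        # <X_M X_R> = -<Y_R Y_M>
    vac = 2 * (1 / 4)
    I1 = xM2 + yM2 + 2 * G * (2 * cross) + G**2 * (xR2 + yM2) + xR2 + yR2 + vac
    I0 = xM2 + yM2 + G**2 * (xB2 + yM2) + xB2 + yB2 + vac
    return I1, I0

I1, I0 = output_powers(N_S=0.01, N_B=20.0, eta=0.01, T=0.8, G=2.0)
print(I1, I0)   # only I1 contains the 2G(<X_M X_R> - <Y_R Y_M>) contribution
```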
Correspondingly, it can be shown that the receiver's output when \(H=0\), \(\mathcal{I}_{0}\), is the sum of the signal powers of the output quadratures in Eq. (14); since \(\langle X_{\rm M}X_{B}\rangle=\langle Y_{B}Y_{\rm M}\rangle=0\), no \(G\)-dependent cross correlation term survives under the null hypothesis.
In summary, we have demonstrated the details of the process of extracting the signal-idler cross correlation in Eq. (16). This will be the relevant quantity when we start discussing the receiver's error exponent. We have further shown that the receiver's output signal power is enhanced by the receiver's gain. Thus indeed the CNOT receiver can in principle offer better performance than the other QI protocols. In order to quantify practically the amount of gain that can be controlled to enhance the detection process, we have to consider the effect of background noise on the device operation. This will be the task of the next section.
### Background noise of CNOT receiver
As shown in appendix A, a double port homodyne measurement is capable of extracting the input field power, which is displayed on a spectrum analyzer's screen. In order to calculate the error exponent of our receiver, we need to calculate its noise power. This corresponds to calculating the PSD of a bath mode. Consider the case where the null hypothesis is true, that is, a returned mode is replaced with a bath one. Under the AWGN channel model, the bath mode enters the receiver as a white Gaussian random process, whereas the stored idler is in a thermal state with average photon number \(N_{S}\) after tracing out its signal twin. In this case the receiver's output is given by Eq. (13). In the low brightness regime we can approximately neglect the power of the memory mode, and that corresponding to the vacuum noise penalty, thus we calculate the bath quadrature noise power according to Eqs. (30-31) as
\[\left\langle X_{B}^{2}\right\rangle \approx\frac{\langle(a_{B}+a_{B}^{\dagger})(a_{B}+a_{B}^{\dagger })\rangle}{4}\] \[\approx\frac{1+2N_{B}}{4}\approx\frac{N_{B}}{2} \tag{17}\]
Similarly, the noise power of the momentum quadrature is calculated as
\[\left\langle Y_{B}^{2}\right\rangle \approx\frac{-\langle(a_{B}-a_{B}^{\dagger})(a_{B}-a_{B}^{ \dagger})\rangle}{4}\] \[\approx\frac{1+2N_{B}}{4}\approx\frac{N_{B}}{2} \tag{18}\]
where in the above equations we have used the bath properties \(\langle a_{B}\rangle=\langle a_{B}^{\dagger}\rangle=0\), and we assumed that powers are measured in a narrow bandwidth.
Thus the overall noise power becomes
\[\mathrm{P_{N}}(\mathcal{I}_{0}) =\langle[X_{R}^{\mathrm{(out)}}]^{2}\rangle+\langle[Y_{ \mathrm{R}}^{\mathrm{(out)}}]^{2}\rangle+\langle[X_{\mathrm{M}}^{\mathrm{(out )}}]^{2}\rangle\] \[+\langle[Y_{\mathrm{M}}^{\mathrm{(out)}}]^{2}\rangle\] \[=G^{2}\langle X_{B}^{2}\rangle+\langle X_{\mathrm{M}}^{2} \rangle+\langle X_{B}^{2}\rangle+\langle Y_{\mathrm{M}}^{2}\rangle\] \[+G^{2}\langle Y_{\mathrm{M}}^{2}\rangle+\langle Y_{B}^{2}\rangle \tag{19}\]
Thus,
\[\mathrm{P_{N}}(\mathcal{I}_{0})\approx N_{B}+\frac{G^{2}N_{B}}{2}=N_{B}(1+ \frac{G^{2}}{2}) \tag{20}\]
where we recall that \(\langle X_{B}\rangle=\langle Y_{B}\rangle=\langle X_{\mathrm{M}}\rangle= \langle Y_{\mathrm{M}}\rangle=0\).
Since the background noise is identical for both transmission hypotheses, we will assume equal noise power under both hypotheses, \(\mathrm{P_{N}}(\mathcal{I}_{0})=\mathrm{P_{N}}(\mathcal{I}_{1})=\mathrm{P_{N}}\). This is a reasonable approximation in communication systems when a thermal bath is the dominant noise source [43]. Hence the device background noise power is
\[\mathrm{P_{N}}\approx N_{B}(1+\frac{G^{2}}{2}) \tag{21}\]
The previous expression is used to calculate the receiver's error exponent as shown in the next section.
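As a quick numerical sanity check of this approximation, the short Python snippet below (an illustrative sketch added here, not part of the original derivation) evaluates the exact sum of the four output second moments in Eq. (19) using the moments of Eq. (14), and compares it with the low-brightness approximation of Eq. (21); the parameter values are those quoted later in Sec. IV.

```python
# Illustrative check of Eq. (21); parameter values are those quoted in Sec. IV.
N_B, N_S, T = 20.0, 0.01, 0.7   # bath photons, source photons, idler-storage transmissivity

def noise_power_exact(G):
    xb2 = yb2 = (1 + 2 * N_B) / 4          # bath quadrature second moments, Eq. (14)
    xm2 = ym2 = (2 * T * N_S + 1) / 4      # stored-idler quadrature second moments, Eq. (14)
    # Sum of the four output second moments under H = 0, Eq. (19)
    return xb2 + yb2 + xm2 + ym2 + G**2 * (xb2 + ym2)

def noise_power_approx(G):
    return N_B * (1 + G**2 / 2)            # low-brightness approximation, Eq. (21)

for G in (1.0, 3.0, 6.0):
    print(f"G={G}: exact={noise_power_exact(G):.2f}, approx={noise_power_approx(G):.2f}")
```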
## IV Performance analysis
The objective of this section is to demonstrate the operational regime where our device can outperform other QI receivers. Theoretically, we have shown in the last section that our receiver's gain can indeed strengthen the signal-idler cross correlations, which in principle should translate into a better device SNR. However, in practice the device's internal interactions add extra noise to that in the channel. Thus it is imperative to study the effect of the overall noise in the system on the performance of our device. We will use the practical setup described in appendix B as our model of device noise; this will help us understand exactly how the receiver's gain can be manipulated to achieve the desired enhancement. This section is organized as follows: we begin with a simplified background on the basics of error probability in transmission problems tailored to the QI scenario. Then we derive the error probability formula of our device. Finally, we plot our device's error bounds in different device settings and compare them to the other QI protocols in order to highlight our areas of improvement.
A good performance metric of a QI receiver is its ability to circumvent high error rates when discriminating between the two possible transmission hypotheses. This binary decision situation is identical to on-off communications systems [44], such that when \(H=1\) is true, the 1-bit signal (\(j_{1}\)) is sent, whereas when \(H=0\) is true the 0-bit signal (\(j_{0}\)) is sent. Further, we suppose that the receiver's environment, that is, the channel's noise, is _additive white Gaussian noise_ (AWGN). Empirically this is a reasonable assumption in most practical cases, since the random motion of electrons inside the receiver's front end conductors is modelled as a _stationary Gaussian random process_. The receiver's total _bit error rate_ (BER) [45] is then defined as \(P_{e}=p(1)p(0|1)+p(0)p(1|0)\), where \(p(1)\) (prior) is the probability that the target is there, \(p(0|1)\) (conditional) is the probability of a _miss_, i.e., deciding that the target is not there while in reality it is there, \(p(0)\) is the probability that the target is not there, and \(p(1|0)\) is the probability of a _false alarm_, i.e., deciding that the target is there while in reality it is not there. The conditionals are calculated with respect to a decision threshold \(j_{d}\) as \(p(0|1)=(1/\sqrt{2\pi\sigma_{1}^{2}})\int\limits_{-\infty}^{j_{d}}\exp\big{(}-(j-j_{1})^{2}/2\sigma_{1}^{2}\big{)}dj=(1/2)\mathrm{erfc}\big{(}(j_{1}-j_{d})/\sqrt{2}\sigma_{1}\big{)}\),
\[p(1|0) = (1/\sqrt{2\pi\sigma_{0}^{2}})\int\limits_{j_{d}}^{\infty}\exp\big{(} -(j-j_{0})^{2}/2\sigma_{0}^{2}\big{)}dj\quad=(1/2)\text{erfc}\big{(}(j_{d}-j_{0})/ \sqrt{2}\sigma_{0}\big{)},\]
where the means of the \(0\&1\) bit signals are \(j_{0}\), \(j_{1}\) respectively, whereas \(\sigma_{0}^{2},\sigma_{1}^{2}\) are the _filtered_ power spectral density (PSD) of the zero mean white Gaussian noise process comprising the receiver's environment when \(H=0\), \(H=1\) respectively, and \(\text{erfc}(x)=(2/\sqrt{\pi})\int\limits_{x}^{\infty}e^{-y^{2}}dy\) is the complementary error function. Assuming equal priors, \(p(0)=p(1)=1/2\), the BER reads \(P_{e}=1/4\big{[}\text{erfc}\big{(}(j_{1}-j_{d})/\sqrt{2}\sigma_{1}\big{)}+ \text{erfc}\big{(}(j_{d}-j_{0})/\sqrt{2}\sigma_{0}\big{)}\big{]}\). We note that a filtered zero mean white Gaussian random process has a variance equal to its PSD, which complies with empirical observations. The BER minimum occurs when \(j_{d}\) is chosen such that \((j_{d}-j_{0})^{2}/2\sigma_{0}^{2}=(j_{1}-j_{d})^{2}/2\sigma_{1}^{2}+\ln\! \big{(}\sigma_{1}/\sigma_{0}\big{)}\). Under the assumption that the noise PSD is equal for both hypotheses, \(\sigma_{0}=\sigma_{1}=\sigma\), we arrive at an expression for the decision threshold as \((j_{d}-j_{0})/\sigma=(j_{1}-j_{d})/\sigma=R_{Q}\), \(j_{d}=(j_{0}+j_{1})/2\), where \(R_{Q}\) denotes the error exponent. An expression for the error exponent can be written as \(R_{Q}=(j_{1}-j_{0})/2\sigma\). A more general form of the error exponent is adopted when the noise PSD is different for the two transmission hypotheses, \(R_{Q}=(j_{1}-j_{0})/(\sigma_{0}+\sigma_{1})\). Moreover, the minimum BER as a function of the error exponent can be written as \(P_{\text{e, min}}=(1/2)\text{erfc}\big{(}\frac{R_{Q}}{\sqrt{2}}\big{)}\). By exploiting a series expansion of the error function, the minimum BER can be written as \(P_{\text{e, min}}=\frac{1}{R_{Q}\sqrt{2\pi}}\exp\big{(}\frac{-R_{Q}^{2}}{2} \big{)}+\frac{R_{Q}}{2\sqrt{2\pi}}\big{[}\frac{\sqrt{2\pi}}{R_{Q}}-2-\sum \limits_{n=1}^{\infty}(\frac{-R_{Q}^{2}}{2})^{n}\frac{1}{(n+1)!(2n+1)!}\big{]}\), such that it can be approximated as \(P_{\text{e, min}}\approx\frac{1}{R_{Q}\sqrt{2\pi}}\exp\big{(}\frac{-R_{Q}^{2}}{2} \big{)}\). A practical upper bound on the minimum error probability neglects the denominator of this expression, as we shall see shortly. After this brief motivation, the error exponent of the CNOT receiver can be defined as
\[R_{Q\text{CNOT}}=\frac{R_{Q}^{2}}{2}=\frac{1}{2}\bigg{[}\frac{\mathcal{I}_{1}- \mathcal{I}_{0}}{\sqrt{\text{P}_{\text{N}}(\mathcal{I}_{1})}+\sqrt{\text{P}_ {\text{N}}(\mathcal{I}_{0})}}\bigg{]}^{2}, \tag{22}\]
where \(\mathcal{I}_{1}\) is the average receiver's output when \(H=1\), given by the expression in Eq. (12), whereas \(\mathcal{I}_{0}\), given by Eq. (15), is the receiver's output when the null hypothesis is true. Their associated noise powers are defined as \(\text{P}_{\text{N}}(\mathcal{I}_{1})\), \(\text{P}_{\text{N}}(\mathcal{I}_{0})\) respectively. Similarly, for equal noise powers we define the SNR\({}_{\text{CNOT}}\) as \(4R_{Q\text{CNOT}}\).
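As a concrete illustration of the relation between the error exponent and the minimum BER quoted above, the following short Python sketch (ours, purely illustrative) compares the exact expression \(P_{\text{e, min}}=(1/2)\mathrm{erfc}(R_{Q}/\sqrt{2})\) with its exponential approximation.

```python
import math

def ber_exact(R_Q):
    # Minimum bit error rate for equal priors and equal noise PSDs
    return 0.5 * math.erfc(R_Q / math.sqrt(2))

def ber_approx(R_Q):
    # Leading term of the series expansion quoted in the text
    return math.exp(-R_Q**2 / 2) / (R_Q * math.sqrt(2 * math.pi))

for R_Q in (0.5, 1.0, 2.0, 3.0):
    print(f"R_Q={R_Q}: exact={ber_exact(R_Q):.3e}, approx={ber_approx(R_Q):.3e}")
```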
Further, it has been shown that for \(K\) probe signals, the minimum bit error probability is upper bounded by the classical Bhattacharyya bound [9]
\[P_{e,\text{min}}^{K}\leq\frac{1}{2}\exp{[-KR_{Q}]}, \tag{23}\]
where \(K\) is the total number of signal-idler pairs generated at the transmitter.
We are now ready to compare the error probability upper bounds of the different QI receivers. Following [9] and [40], the error exponents of the OPA/PC and SFG receivers are

\[R_{Q\text{OPA/PC}} = \eta TN_{S}/2N_{B} \tag{24}\] \[R_{Q\text{SFG}} = \eta TN_{S}/N_{B} \tag{25}\]
respectively.
For the CNOT receiver, we assume that both hypotheses have equal noise power based on the analysis presented in appendix A.
Figure 3: A plot of the minimum bit error probability of all QI receivers against the total number of probe pairs \(K\). We have also included the optimal CI receiver in order to demarcate the domain of the quantum advantage. The subscript “LL” denotes the ideal lossless operation, whereas “L” means that the transmissivity of the lossy idler storage is non ideal. In this plot it was chosen to be \(T=0.7\), since according to [34], a realistic microwave quantum memory has an efficiency of about 60%. In Fig. (a), the CNOT is operating in the unity gain regime with a beamsplitter parameter \(g\approx 0.38\). As can be seen, the SFG receiver has the best performance among all receivers in both the lossless and lossy operations. The unity gain CNOT only outperforms the optimal CI receiver, while being outperformed by all other receivers in both the LL and L operations. In order to achieve an enhanced performance over OPA and PC in both the LL and L operations, we showed that a gain of \(G=1.5\) is enough for the task. In Fig. (b) the CNOT receiver operates with a larger gain value, specifically, \(G=3\). The corresponding beamsplitter parameter in this case is \(g\approx 0.09\). In this regime of operation, the L CNOT receiver (dotted blue) outperforms the L OPA and L PC receivers (solid red). On the other hand, it can be seen that the LL CNOT slightly outperforms the L SFG, while being outperformed by the LL SFG. Thus, SFG still has the best performance as before. We also note that the optimum CI has the least performance both in the LL operation (solid green) and L operation (dotted green). In Fig. (c), the CNOT receiver operates with a gain equal to \(G=6\), and a corresponding beamsplitter parameter of \(g\approx 0.03\). It is clear that the CNOT performance is now comparable to SFG in both the LL and L cases; while closing the performance gap, it is still outperformed by SFG. Nonetheless, the L CNOT managed to outperform both the LL OPA and LL PC. For completeness, we observe that, similarly as before, the optimum CI has the least performance in both operations LL and L.
Thus its error exponent expression according to Eq. (22) becomes
\[R_{Q\text{CNOT}}=\eta G^{2}\text{TN}_{S}/2\text{P}_{\text{N}} \tag{26}\]
where \(\text{P}_{\text{N}}(\mathcal{I}_{0})=\text{P}_{\text{N}}(\mathcal{I}_{1})=\text {P}_{\text{N}}(\mathcal{I})\) is defined by Eq. (21). We further assumed for all receivers that \(\eta=0.01\), \(T=0.7\), \(N_{S}=0.01\), and \(N_{B}=20\).
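To make the comparison of the following paragraphs concrete, a minimal Python sketch is given below; it evaluates the Bhattacharyya-type bound of Eq. (23) with the error exponents of Eqs. (24)-(26) and the parameter values listed above. The snippet is an illustration added here, not code from the original work.

```python
import math

eta, T, N_S, N_B = 0.01, 0.7, 0.01, 20.0   # values quoted in Sec. IV

def R_opa_pc():            # Eq. (24), as quoted in the text
    return eta * T * N_S / (2 * N_B)

def R_sfg():               # Eq. (25)
    return eta * T * N_S / N_B

def R_cnot(G):             # Eqs. (21) and (26)
    P_N = N_B * (1 + G**2 / 2)
    return eta * G**2 * T * N_S / (2 * P_N)

def error_bound(R, K):     # Bhattacharyya-type bound of Eq. (23), K signal-idler pairs
    return 0.5 * math.exp(-K * R)

K = 1e6
for name, R in [("OPA/PC", R_opa_pc()), ("SFG", R_sfg()),
                ("CNOT G=1", R_cnot(1.0)), ("CNOT G=3", R_cnot(3.0)),
                ("CNOT G=6", R_cnot(6.0))]:
    print(f"{name:9s}  R_Q={R:.3e}  bound(K=1e6)={error_bound(R, K):.3e}")
```

Increasing \(G\) in this sketch shows the CNOT exponent approaching the SFG value \(\eta TN_{S}/N_{B}\), consistent with the large-gain limit discussed below.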
In Fig. (3) we have plotted the minimum bit error probability against the total number of probe pairs. We have also included the optimum CI receiver to demonstrate the quantum advantage in the low SNR setting. Let us consider first the trivial case of zero gain operation. In the event of zero interaction gain, \(G=0\), the CNOT receiver homodynes both the return and the stored idler individually. Consequently, the \(3\)dB noise penalty due to the simultaneous measurement of the two non commuting quadratures eradicates any quantum advantage. We now focus on non zero gain operation of our receiver. We considered three different gain values for our CNOT receiver, namely, unity gain, \(G=3\), and \(G=6\). By substituting \(G=1\) in Eq. (21), it can be seen that the total number of added noise photons in this case is \(64\). As can be seen from Fig. (3a), the CNOT receiver was only able to outperform the optimum CI in both the LL operation, that is, \(T=1\), and the L operation, that is, \(T=0.7\). Nonetheless, it was outperformed by all other QI receivers in both cases. We note that the SFG receiver has the best performance among all receivers in both operations for this case. Further, the unity gain case is interesting in itself, since it represents the domain of operation where the device operates typically as a qubit CNOT gate. Thus we conclude that any other realization of the CNOT operation based on a different platform would replicate the same performance.
In Fig. (3b) the CNOT receiver operates with a gain above unity, i.e., \(G=3\). The total number of added noise photons is \(\approx 224\). In this domain of operation it can be seen that the CNOT outperforms both the OPA and PC in the LL and L cases respectively. However, it is still being outperformed by the SFG receiver.
In Fig. (3c) the CNOT receiver operates with a gain equal to \(G=6\). The total number of added noise photons in this case is \(\approx 764\). It can be seen from the plot that in this case the CNOT receiver is comparable only to the SFG, although it is still outperformed by it in both the LL and L cases. We further observe that the CNOT outperformed both the LL OPA and LL PC even when operating in its lossy operation. Thus, we can conclude from the plots in Fig. (3) that by increasing the CNOT gain its performance approaches that of the SFG. Further, by analyzing Eq. (21) in the limit of large gain, \(G\gg 1\), and assuming negligible internal noise, that is, strong squeezing and negligible homodyne detection inefficiencies, the variance of the receiver's output becomes \(\text{Var}(\mathcal{I})\approx N_{B}G^{2}/2\), and consequently the error exponent would be \(R_{Q\text{CNOT}}=\eta\text{TN}_{S}/N_{B}\), therefore coinciding with that of SFG.
## V Conclusion
In this paper we have considered a new QI receiver design for microwave applications. Due to the technological difficulty of realizing single photon counters, the proposed device relies completely on homodyne measurements and square law detectors. The receiver is built upon an offline controlled gain CV CNOT gate in order to extract the signal-idler cross correlations.
We have investigated different gain operational values of our CNOT receiver. In the unity gain scenario, we have shown that the CNOT offers no performance advantage over any of the QI receivers, while only managing to edge past the optimum CI receiver. We expect similar performance from any other realization of a unity gain CNOT gate. On the other hand, when operating with above unity gain, we showed that our device gradually approaches the best QI receiver as the gain increases. Ideally, when squeezing and vacuum noises are suppressed, a high gain operating point matches the SFG receiver. We further noticed that, with squeezing noise, an above unity gain CNOT can still offer a decent performance, comparable to SFG, especially in the radar domain, where the maximum number of utilised probe pairs is \(\approx 10^{5}-10^{6}\) [27]. This is visible in the error probability curves in Figs. (3b) and (3c).
We close with two final remarks on the engineering challenges of implementing the protocol in the microwave domain. Tailoring a desired high gain operational point requires a small and controllable beamsplitter coefficient \(g\) by virtue of the relation \(G=(1-g)/\sqrt{g}\). Recently, significant progress has been made towards engineering devices capable of achieving this level of controlled transformations [46, 47]. Further, we have also observed that a high gain operating point is usually accompanied by excess noise photons; this may result in an elongated dead time of our receiver. However, recent techniques in cQED can mitigate excess noise by utilising circuit refrigeration procedures [48]. These are all clear signs that the proposed model can be practically implemented with existing quantum microwave technologies.
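As a small numerical aid to the first remark, the sketch below (illustrative only) inverts the relation \(G=(1-g)/\sqrt{g}\) for the beamsplitter coefficient; the resulting values agree with those quoted in the caption of Fig. (3).

```python
import math

def g_from_G(G):
    # Solve (1 - g)/sqrt(g) = G for 0 < g < 1 (quadratic in sqrt(g))
    u = (-G + math.sqrt(G**2 + 4)) / 2
    return u**2

for G in (1.0, 1.5, 3.0, 6.0):
    print(f"G={G}: g={g_from_G(G):.3f}")
```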
## Appendix A Analysis of CNOT receiver's homodyne measurement
The CNOT receiver as described in the main text operates on the fields' non-commuting quadrature operators. The noise penalty of measuring two non-commuting observables is 3dB. This can be seen when we consider splitting each of the returned mode and the stored idler, \(a_{R},a_{\text{M}}\), individually on a balanced beamsplitter, where a vacuum mode \(a_{v}\) enters from its unused port, \(\bar{a}_{R}=(1/\sqrt{2})(a_{R}-ia_{v})\), \(\bar{a}_{v}=(1/\sqrt{2})(a_{v}+ia_{R})\), and the quadratures \(\bar{X}_{R}=(1/2)(\bar{a}_{R}+\bar{a}_{R}^{\dagger})\) and \(\bar{Y}_{v}=(1/2i)(\bar{a}_{v}-\bar{a}_{v}^{\dagger})\) are measured at the two output ports. A direct calculation using \([a_{R},a_{R}^{\dagger}]=[a_{v},a_{v}^{\dagger}]=1\) gives \([\bar{X}_{R},\bar{Y}_{v}]=0\), and hence these two observables can be measured simultaneously [18]. However, as can be seen, the noise penalty due to splitting the mode first on a balanced beamsplitter is present as an attenuation factor of \(1/2\), where
only half of the original intensity is contained in \(\bar{X}_{R}\), \(\bar{Y}_{v}\). One might wonder whether the cross correlation output of our CNOT receiver would suffer a similar fate. This would have been the case if our receiver measured the return and the stored modes individually without mixing them first. However, the interaction between the two modes as described by the gate transformations in Eq. (6), and the analysis that followed, showed that the cross correlation signature is preserved.
We now focus on outlining the details of our receiver's homodyne chain. Following [37, 22], Fig. (4) presents a schematic of the detection circuit used by our receiver to output the measured values of the observable quantities in Eqs. (7) and (9) (see also Fig. (2)). As can be seen, the input field is mixed on a balanced beamsplitter with a local oscillator field, followed by two detectors, \(D_{1},D_{2}\), a subtraction circuit that calculates the difference between the generated photocurrents, and a spectrum analyzer display that shows the measured field's variance (power). Without loss of generality, let's consider an arbitrary returned mode \(a_{R}\), not necessarily in a TMSV with an idler. We further assume a noiseless transmission of this return. On the other hand, at the receiver, our local oscillator is tuned to extract the unaffected quadrature \(Y_{R}\). The output of the detection chain is defined as follows,
\[a_{R}^{\text{(out)}} =\frac{1}{\sqrt{2}}(a_{R}^{\text{(in)}}+id^{\text{(in)}})\] \[d^{\text{(out)}} =\frac{1}{\sqrt{2}}(d^{\text{(in)}}-ia_{R}^{\text{(in)}}) \tag{27}\]
where \(d^{\text{(in)}}\) is the local oscillator mode.
Then, the output of the subtraction circuit is,
\[\text{I} =a_{R}^{\text{(out)}\dagger}a_{R}^{\text{(out)}}-d^{\text{(out)}\dagger} d^{\text{(out)}},\] \[N_{R}^{\text{(out)}} =\frac{1}{2}(N_{R}^{\text{(in)}}+N_{d}^{\text{(in)}}+id^{\text{( in)}\dagger}a_{R}^{\text{(in)}}-id^{\text{(in)}}a_{R}^{\text{(in)}\dagger}),\] \[N_{d}^{\text{(out)}} =\frac{1}{2}(N_{R}^{\text{(in)}}+N_{d}^{\text{(in)}}-id^{\text{( in)}\dagger}a_{R}^{\text{(in)}}+id^{\text{(in)}}a_{R}^{\text{(in)}\dagger})\] \[\text{I} =i{(d^{\text{(in)}\dagger}a_{R}^{\text{(in)}}-d^{\text{(in)}}a_{R }^{\text{(in)}\dagger})} \tag{28}\]
where \(N_{R}^{\text{(in)}}=a_{R}^{\text{(in)}\dagger}a_{R}^{\text{(in)}}\), and \(N_{d}^{\text{(in)}}={d^{\text{(in)}\dagger}d^{\text{(in)}}}\).
By assuming that the local oscillator mode is a complex number, \(d^{\text{(in)}}\rightarrow\tilde{D}=|\alpha_{L}|e^{i\phi_{L}}\), we can extract the field's \(Y\) quadrature by setting the LO phase to \(\pi\) and normalizing the output current,
\[Y_{R}=\frac{\text{I}}{|\alpha_{L}|\sqrt{2}}=\frac{-i(a_{R}-a_{R}^{\dagger})} {\sqrt{2}} \tag{29}\]
where \(|\alpha_{L}|\) is the LO field strength, and \(\phi_{L}\) is its phase.
One of the powerful features of double port homodyning is that the subtraction circuit eliminates the noise associated with the LO field. This results in the homodyned output noise power being only dependent on the input's variance, as we shall see now. In order to estimate the overall noise accompanying the process of double port homodyning [37], we split the returned mode into a signal carrying part plus fluctuations, \(a_{R}=\langle a_{R}\rangle+\Delta a_{R}\), such that \(\langle a_{R}\rangle=A_{R},\langle\Delta a_{R}\rangle=0\), where \(A_{R}=A_{R}^{X}+iA_{R}^{Y}\), \(\Delta a_{R}=\Delta a_{R}^{X}+i\Delta a_{R}^{Y}\), \(A_{R}^{X},A_{R}^{Y}\) are the \(X\) and \(Y\) quadrature amplitude values respectively, and \(\Delta a_{R}^{X},\Delta a_{R}^{Y}\) are their associated fluctuations. Thus
\[\begin{split}\langle\text{I}\rangle&=i|\alpha_{L}| (\langle a_{R}\rangle^{*}+\langle\Delta a_{R}^{\dagger}\rangle-\langle a_{R} \rangle-\langle\Delta a_{R}\rangle)\\ &=i|\alpha_{L}|(A_{R}^{*}-A_{R})=2|\alpha_{L}|\text{ Im}[A_{R}]\\ &=2|\alpha_{L}|A_{R}^{Y}\end{split} \tag{30}\]
\[\begin{split}\langle\Delta\text{I}^{2}\rangle&= \langle\text{I}^{2}\rangle-\langle\text{I}\rangle^{2},\\ \langle\text{I}^{2}\rangle&=\big{\langle}\big{[}i| \alpha_{L}|(A_{R}^{*}+\Delta a_{R}^{\dagger}-A_{R}-\Delta a_{R})\big{]}^{2} \big{\rangle}\\ &=\big{\langle}\big{[}2|\alpha_{L}|(\text{Im}[A_{R}]+\text{ Im}[\Delta a_{R}])\big{]}^{2}\big{\rangle}\\ &=4|\alpha_{L}|^{2}\big{\langle}(A_{R}^{Y}+\Delta a_{R}^{Y })^{2}\big{\rangle}\\ &\approx 4|\alpha_{L}|^{2}{A_{R}^{Y}}^{2}+4|\alpha_{L}|^{2}\langle \Delta a_{R}^{Y2}\rangle,\\ \langle\Delta\text{I}^{2}\rangle&\approx 4|\alpha_{L}|^{2}\langle\Delta a_{R}^{Y 2}\rangle\\ \langle\Delta\text{Y}_{R}^{2}\rangle&\approx \frac{\langle\Delta\text{I}^{2}\rangle}{4|\alpha_{L}|^{2}}=\langle\Delta a_{R} ^{Y^{2}}\rangle\end{split} \tag{31}\]
The above expressions show that balanced double port homodyning can extract both the mean and second moment (power) of a returned mode. Consider now the double port homodyning of a target return that is a part of a TMSV generated at the transmitter. Since our protocol operates in the microwave domain, the detectors that produce \(N_{R}^{\text{(out)}}\), and \(N_{d}^{\text{(out)}}\) respectively are square law detectors [18], such as _bolometers_[49, 50, 51], for instance. Unlike single photon counters, the detector's medium in the case of a square law detector responds to the incident signal power. On the other
Figure 4: **A schematic of the homodyne detection chain used by our CNOT receiver. The setup is composed of a balanced beamsplitter and two detectors \(D_{1},D_{2}\). The desired signal is injected from the beamsplitter’s first port, while a strong local oscillator (LO) field enters from the second. The detectors’ outputs are directed towards a subtraction circuit in order to display the final result. The double port homodyne circuit can be thought of as being embedded inside a spectrum analyzer device, such that its display shows the power of the measured input.**
hand, in single photon counters it responds to the incident photon intensity or flux. Thus the former is a scalar quantity, while the latter is a vector one.
As pointed out in the main text, the expected value of the quadratures of a squeezed vacuum field vanishes, that is to say, the average of the current generated after the subtraction circuit is zero, \(\langle\mathrm{I}\rangle=0\). However, the variance of a zero mean squeezed vacuum field is non-zero. Thus we seek a device that can display these variances. This can be achieved by a spectrum analyzer, since in the case of a TMSV, the field variance of the input coincides with the field's second moment, i.e., its power, \(\langle\Delta\mathrm{I}^{2}\rangle=\langle\mathrm{I}^{2}\rangle\) as shown in Eq. (31). Thus the spectral output of the spectrum analyzer is proportional to the input field power. In summary, the homodyne measurement chain deployed by our CNOT receiver is composed of two steps; first the balanced double port homodyning captures the variance of the input signal, while suppressing the LO noise. Hence the detection noise is forced to be shot limited. Then the spectrum analyzer displays the measured power. It is worth mentioning that modern spectrum analyzer devices have a built in double port homodyne circuit and display the input power at the end of the measurement.
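A minimal Monte Carlo sketch of the statistics in Eqs. (30)-(31) is given below; it treats the quadrature fluctuations as Gaussian c-numbers (an assumption made purely for illustration) and checks that the subtraction-circuit current has the stated mean and variance. The LO amplitude and quadrature moments are arbitrary example values.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha_L = 100.0                    # LO amplitude (assumed, strong LO)
A_Y = 0.25                         # Y-quadrature amplitude of the return (assumed)
var_dY = 0.25                      # variance of the Y-quadrature fluctuations (assumed)

# Subtraction-circuit current I = 2|alpha_L| (A^Y + delta a^Y), per Eq. (30)
dY = rng.normal(0.0, np.sqrt(var_dY), size=1_000_000)
I = 2 * alpha_L * (A_Y + dY)

print("<I>             :", I.mean(), " expected:", 2 * alpha_L * A_Y)
print("Var(I)/(4|a|^2) :", I.var() / (4 * alpha_L**2), " expected:", var_dY)
```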
We consider a similar process to extract the rest of the gate's outputs. For the sake of completeness we show this for the other unaffected quadrature, that is, the memory mode position quadrature. The rest of the outputs are just linear superpositions of the return and memory modes and can be deduced similarly in a straightforward manner. Consider now performing a double port homodyne measurement on the memory mode to extract its position quadrature. Similarly as before, the mode transforms at the detection chain as
\[a_{\mathrm{M}}^{\mathrm{(out)}} =\frac{1}{\sqrt{2}}(a_{\mathrm{M}}^{\mathrm{(in)}}+id^{\mathrm{( in)}})\] \[d^{\mathrm{(out)}} =\frac{1}{\sqrt{2}}(d^{\mathrm{(in)}}-ia_{\mathrm{M}}^{\mathrm{( in)}}) \tag{32}\]
Then, the output of the subtraction circuit is,
\[\mathrm{I} =a_{\mathrm{M}}^{\mathrm{(out)}}{}^{\dagger}a_{\mathrm{M}}^{ \mathrm{(out)}}-d^{\mathrm{(out)}}{}^{\dagger}d^{\mathrm{(out)}},\] \[N_{\mathrm{M}}^{\mathrm{(out)}} =\frac{1}{2}(N_{\mathrm{M}}^{\mathrm{(in)}}+N_{d}^{\mathrm{(in)}} +id^{\mathrm{(in)}}{}^{\dagger}a_{\mathrm{M}}^{\mathrm{(in)}}-id^{\mathrm{( in)}}{}a_{\mathrm{M}}^{\mathrm{(in)}}{}^{\dagger}),\] \[N_{d}^{\mathrm{(out)}} =\frac{1}{2}(N_{\mathrm{M}}^{\mathrm{(in)}}+N_{d}^{\mathrm{(in)}} -id^{\mathrm{(in)}}{}^{\dagger}a_{\mathrm{M}}^{\mathrm{(in)}}+id^{\mathrm{( in)}}{}a_{\mathrm{M}}^{\mathrm{(in)}}{}^{\dagger})\] \[\mathrm{I} =i(d^{\mathrm{(in)}}{}^{\dagger}a_{\mathrm{M}}^{\mathrm{(in)}}-d^{ \mathrm{(in)}}a_{\mathrm{M}}^{\mathrm{(in)}}{}^{\dagger}) \tag{33}\]
Thus we can extract the field's \(X\) quadrature by setting the LO phase to \(\pi/2\) and normalizing the output current,
\[X_{\mathrm{M}}=\frac{\mathrm{I}}{|\alpha_{L}|\sqrt{2}}=\frac{(a_{\mathrm{M}}+ a_{\mathrm{M}}^{\dagger})}{\sqrt{2}} \tag{34}\]
The field's power can be extracted as described before.
## Appendix B Practical model of the CNOT receiver
In this section we present an implementation of the CNOT gate receiver in the main text. This model will also serve as a practical representation of the receiver's internal noise, which will eventually play a role when calculating the receiver's overall noise variance. Following the experimental implementation presented in [14], and the theoretical study in [13], the first beamsplitter (BS1) in Fig. (5) is described by
\[\begin{pmatrix}\sqrt{\frac{g}{1+g}}&\sqrt{\frac{1}{1+g}}\\ -\sqrt{\frac{1}{1+g}}&\sqrt{\frac{g}{1+g}}\end{pmatrix} \tag{35}\]
The signal-idler quadratures transform as
\[X_{\mathrm{M}}^{1} =\sqrt{\frac{g}{1+g}}X_{\mathrm{M}}+\sqrt{\frac{1}{1+g}}X_{R}\] \[X_{R}^{1} =\sqrt{\frac{g}{1+g}}X_{R}-\sqrt{\frac{1}{1+g}}X_{\mathrm{M}}\] \[Y_{\mathrm{M}}^{1} =\sqrt{\frac{g}{1+g}}Y_{\mathrm{M}}+\sqrt{\frac{1}{1+g}}Y_{R}\] \[Y_{R}^{1} =\sqrt{\frac{g}{1+g}}Y_{R}-\sqrt{\frac{1}{1+g}}Y_{\mathrm{M}}\]
Then each output of the first beamsplitter is mixed on another beamsplitter of transmissivity \(1-g\) with the outputs of two single mode squeezers, such that squeezer A is momentum squeezed, i.e., \(X_{A}^{\mathrm{(HD)}}e^{r_{A}}\), \(Y_{A}^{\mathrm{(HD)}}e^{-r_{A}}\), while squeezer B is position squeezed, i.e., \(X_{B}^{\mathrm{(HD)}}e^{-r_{B}}\), \(Y_{B}^{\mathrm{(HD)}}e^{r_{B}}\). The beamsplitters are denoted by BS4, BS3 respectively. For ease of readability, we omit the exponential factors from the
Figure 5: Schematic of the CNOT receiver based on [14]. In both squeezer circuits, the NL module stands for a nonlinear parametric element that can be an optical parametric oscillator in the optical domain, or a Josephson parametric amplifier in the microwave domain. As described in appendix A, both local oscillator (LO) fields are mixed with the modes to be homodyned on additional balanced beamsplitters. Squeezer A is momentum squeezed, whereas squeezer B is position squeezed.
squeezed modes in the upcoming derivation and add them back in the last step.
\[\begin{pmatrix}\sqrt{1-g}&\sqrt{g}\\ \sqrt{g}&-\sqrt{1-g}\end{pmatrix}. \tag{37}\]
Thus the modes transform as,
\[X_{M}^{(2)} =\frac{g}{\sqrt{1+g}}X_{M}+\sqrt{\frac{g}{1+g}}X_{R}+\sqrt{1-g}X_{ A}^{\text{(HD)}}\] \[\tilde{X}_{A}^{\text{(HD)}} =\sqrt{g}X_{A}^{\text{(HD)}}-\sqrt{\frac{g(1-g)}{1+g}}X_{\text{M} }-\sqrt{\frac{1-g}{1+g}}X_{R}\] \[X_{R}^{(2)} =\frac{g}{\sqrt{1+g}}X_{R}-\sqrt{\frac{g}{1+g}}X_{\text{M}}+\sqrt {1-g}X_{B}^{\text{(HD)}}\] \[\tilde{X}_{B}^{\text{(HD)}} =\sqrt{g}X_{B}^{\text{(HD)}}-\sqrt{\frac{(1-g)g}{1+g}}X_{R}+\sqrt {\frac{1-g}{1+g}}X_{\text{M}} \tag{38}\]
Similarly,
\[Y_{M}^{(2)} =\frac{g}{\sqrt{1+g}}Y_{M}+\sqrt{\frac{g}{1+g}}Y_{R}+\sqrt{1-g}Y_ {A}^{\text{(HD)}}\] \[\tilde{Y}_{A}^{\text{(HD)}} =\sqrt{g}Y_{A}^{\text{(HD)}}-\sqrt{\frac{g(1-g)}{1+g}}Y_{\text{M} }-\sqrt{\frac{1-g}{1+g}}Y_{R}\] \[Y_{R}^{(2)} =\frac{g}{\sqrt{1+g}}Y_{R}-\sqrt{\frac{g}{1+g}}Y_{\text{M}}+\sqrt {1-g}Y_{B}^{\text{(HD)}}\] \[\tilde{Y}_{B}^{\text{(HD)}} =\sqrt{g}Y_{B}^{\text{(HD)}}-\sqrt{\frac{(1-g)g}{1+g}}Y_{R}+\sqrt {\frac{1-g}{1+g}}Y_{\text{M}} \tag{39}\]
Finally the modes labeled by the superscript '(HD)' are homodyned with a local oscillator field (LO), whereas the other modes are directed towards a final beamsplitter (BS2) of transmissivity \(\frac{1}{1+g}\)
\[\begin{pmatrix}\sqrt{\frac{1}{1+g}}&\sqrt{\frac{g}{1+g}}\\ -\sqrt{\frac{g}{1+g}}&\sqrt{\frac{1}{1+g}}\end{pmatrix} \tag{40}\]
Let us consider first the position quadratures and see how they evolve,
\[X_{R}^{\text{(out)}} =\sqrt{\frac{1}{1+g}}X_{R}^{(2)}+\sqrt{\frac{g}{1+g}}X_{\text{M}} ^{(2)}\] \[=\Big{(}\frac{2g}{1+g}\Big{)}X_{R}-\Big{(}\frac{\sqrt{g}(1-g)}{1+ g}\Big{)}X_{\text{M}}\] \[+\sqrt{\frac{1-g}{1+g}}X_{B}^{\text{(HD)}}+\sqrt{\frac{g(1-g)}{1+ g}}X_{A}^{\text{(HD)}}\] \[X_{\text{M}}^{\text{(out)}} =\sqrt{\frac{1}{1+g}}X_{\text{M}}^{(2)}-\sqrt{\frac{g}{1+g}}X_{R} ^{(2)}\] \[=\Big{(}\frac{2g}{1+g}\Big{)}X_{\text{M}}+\Big{(}\frac{\sqrt{g}( 1-g)}{1+g}\Big{)}X_{R}\] \[+\sqrt{\frac{1-g}{1+g}}X_{A}^{\text{(HD)}}-\sqrt{\frac{g(1-g)}{1 +g}}X_{B}^{\text{(HD)}}, \tag{41}\]
Suppose now that the mode \(\tilde{X}_{A}^{\text{(HD)}}\) in Eq. (38) is homodyned with efficiency \(\gamma\), that is, \(\sqrt{\gamma}\tilde{X}_{A}^{\text{(HD)}}-\sqrt{1-\gamma}X_{V}\), where \(X_{V}\) is a vacuum position quadrature; after being re-scaled appropriately, it is utilised to perform the following post-correction operation in order to eliminate the anti-squeezed position quadrature \(X_{A}^{\text{(HD)}}\),
\[X_{R}^{\text{(out)}} \to X_{R}^{\text{(out)}}-\sqrt{\frac{1-g}{\gamma(1+g)}}\tilde{X}_{A}^ {\text{(HD)}}\] \[\to X_{R}^{\text{(out)}}-\sqrt{\frac{g(1-g)}{1+g}}X_{A}^{\text{( HD)}}+\frac{1-g}{1+g}X_{R}\] \[+\frac{\sqrt{g}(1-g)}{(1+g)}X_{\text{M}}+\sqrt{\frac{(1-\gamma)( 1-g)}{\gamma g(1+g)}}X_{V},\] \[X_{R}^{\text{(out)}} =X_{R}+\sqrt{\frac{1-g}{1+g}}X_{B}^{\text{(HD)}}+\sqrt{\frac{(1- \gamma)(1-g)}{\gamma g(1+g)}}X_{V}, \tag{42}\]
We follow a similar approach to derive the expression of \(X_{\text{M}}^{\text{(out)}}\), where a different appropriate scaling of \(\tilde{X}_{A}^{\text{(HD)}}\) is assumed as follows;
\[X_{\text{M}}^{\text{(out)}} \to X_{\text{M}}^{\text{(out)}}-\sqrt{\frac{(1-g)}{\gamma g(1+g)}} \tilde{X}_{A}^{\text{(HD)}}\] \[\to X_{\text{M}}^{\text{(out)}}-\sqrt{\frac{(1-g)}{(1+g)}}X_{A}^ {\text{(HD)}}+\frac{1-g}{1+g}X_{\text{M}}\] \[+\frac{(1-g)}{\sqrt{g}(1+g)}X_{R}+\sqrt{\frac{(1-\gamma)(1-g)}{ \gamma(1+g)}}X_{V}\] \[=X_{\text{M}}+\Big{(}\frac{1-g}{1+g}(\sqrt{g}+\frac{1}{\sqrt{g}} \Big{)}\Big{)}X_{R}-\sqrt{\frac{g(1-g)}{1+g}}X_{B}^{\text{(HD)}}\] \[+\sqrt{\frac{(1-\gamma)(1-g)}{\gamma g(1+g)}}X_{V}\] \[=X_{\text{M}}+\Big{(}\frac{1-g}{\sqrt{g}}\Big{)}X_{R}-\sqrt{\frac {g(1-g)}{1+g}}X_{B}^{\text{(HD)}}\] \[+\sqrt{\frac{(1-\gamma)(1-g)}{\gamma g(1+g)}}X_{V} \tag{43}\]
where \(G=\frac{1-g}{\sqrt{g}}\).
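The algebra above can be checked numerically. The following Python sketch (ours, with arbitrary test values, squeezing factors omitted as in the derivation) propagates the position quadratures through BS1, BS3/BS4 and BS2 and applies the ideal (\(\gamma=1\)) feed-forward of Eqs. (42)-(43); it confirms the cancellation of \(X_{A}^{\text{(HD)}}\) and the appearance of the gain \(G=(1-g)/\sqrt{g}\).

```python
import numpy as np

def cnot_x_quadratures(g, X_M, X_R, X_A, X_B):
    """Propagate position quadratures through BS1, BS3/BS4, BS2 and the
    ideal (gamma = 1) feed-forward of Appendix B; squeezing factors omitted."""
    X_M1 = np.sqrt(g/(1+g))*X_M + np.sqrt(1/(1+g))*X_R
    X_R1 = np.sqrt(g/(1+g))*X_R - np.sqrt(1/(1+g))*X_M
    X_M2 = np.sqrt(g)*X_M1 + np.sqrt(1-g)*X_A
    Xt_A = np.sqrt(g)*X_A - np.sqrt(1-g)*X_M1          # ancilla sent to the homodyne detector
    X_R2 = np.sqrt(g)*X_R1 + np.sqrt(1-g)*X_B
    X_Rout = np.sqrt(1/(1+g))*X_R2 + np.sqrt(g/(1+g))*X_M2
    X_Mout = np.sqrt(1/(1+g))*X_M2 - np.sqrt(g/(1+g))*X_R2
    # Feed-forward corrections of Eqs. (42)-(43) with gamma = 1
    X_Rout -= np.sqrt((1-g)/(1+g))*Xt_A
    X_Mout -= np.sqrt((1-g)/(g*(1+g)))*Xt_A
    return X_Rout, X_Mout

g = 0.09                                    # beamsplitter parameter, so G = (1-g)/sqrt(g) ~ 3
G = (1-g)/np.sqrt(g)
X_M, X_R, X_A, X_B = 0.3, -0.8, 0.1, 0.2    # arbitrary test quadrature values
X_Rout, X_Mout = cnot_x_quadratures(g, X_M, X_R, X_A, X_B)
print(np.isclose(X_Rout, X_R + np.sqrt((1-g)/(1+g))*X_B))             # True
print(np.isclose(X_Mout, X_M + G*X_R - np.sqrt(g*(1-g)/(1+g))*X_B))   # True
```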
Focusing now on the momentum quadratures we follow a similar derivation to that in Eqs. (41-43),
\[Y_{R}^{\text{(out)}} =\sqrt{\frac{1}{1+g}}Y_{R}^{(2)}+\sqrt{\frac{g}{1+g}}Y_{\text{M }}^{(2)}\] \[=\Big{(}\frac{2g}{1+g}\Big{)}Y_{R}-\Big{(}\frac{\sqrt{g}(1-g)}{1+ g}\Big{)}Y_{\text{M}}+\sqrt{\frac{1-g}{1+g}}Y_{B}^{\text{(HD)}}\] \[+\sqrt{\frac{g(1-g)}{1+g}}Y_{A}^{\text{(HD)}}\]
\[Y_{\rm M}^{\rm(out)} =\sqrt{\frac{1}{1+g}}Y_{\rm M}^{(2)}-\sqrt{\frac{g}{1+g}}Y_{R}^{(2)}\] \[=\Big{(}\frac{2g}{1+g}\Big{)}Y_{\rm M}+\Big{(}\frac{\sqrt{g}(1-g)} {1+g}\Big{)}Y_{R}+\sqrt{\frac{1-g}{1+g}}Y_{A}^{\rm(HD)}\] \[-\sqrt{\frac{g(1-g)}{1+g}}Y_{B}^{\rm(HD)}, \tag{44}\]
Then similarly, we assume that \(\tilde{Y}_{B}^{\rm(HD)}\) in Eq. (39) was homodyned with efficiency \(\gamma\), and used to perform the following post-correction operation after proper re-scaling in order to eliminate the anti-squeezed momentum quadrature \(Y_{B}^{\rm(HD)}\)
\[Y_{R}^{\rm(out)} \to Y_{R}^{\rm(out)}-\sqrt{\frac{1-g}{\gamma g(1+g)}}\tilde{Y}_{B}^{ \rm(HD)}\] \[\to Y_{R}^{\rm(out)}-\sqrt{\frac{1-g}{1+g}}Y_{B}^{\rm(HD)}+\frac{1-g}{1 +g}Y_{R}\] \[-\frac{(1-g)}{\sqrt{g}(1+g)}Y_{\rm M}+\sqrt{\frac{(1-\gamma)(1-g) }{\gamma g(1+g)}}Y_{V}\] \[=Y_{R}-\Big{(}\frac{1-g}{1+g}(\sqrt{g}+\frac{1}{\sqrt{g}})\Big{)} Y_{\rm M}+\sqrt{\frac{g(1-g)}{1+g}}Y_{A}^{\rm(HD)}\] \[+\sqrt{\frac{(1-\gamma)(1-g)}{\gamma g(1+g)}}Y_{V}\] \[=Y_{R}-GY_{\rm M}+\sqrt{\frac{g(1-g)}{1+g}}Y_{A}^{\rm(HD)}\] \[+\sqrt{\frac{(1-\gamma)(1-g)}{\gamma g(1+g)}}Y_{V} \tag{45}\]
Similarly,
\[Y_{\rm M}^{\rm(out)} \to Y_{\rm M}^{\rm(out)}+\sqrt{\frac{(1-g)}{\gamma(1+g)}}\tilde{Y}_{B}^ {\rm(HD)}\] \[\to Y_{\rm M}^{\rm(out)}+\frac{1-g}{1+g}Y_{\rm M}-\frac{\sqrt{g}(1-g)} {(1+g)}Y_{R}\] \[+\sqrt{\frac{(1-\gamma)(1-g)}{\gamma(1+g)}}Y_{V},\] \[Y_{\rm M}^{\rm(out)} =Y_{\rm M}-\sqrt{\frac{g(1-g)}{1+g}}Y_{A}^{\rm(HD)}+\sqrt{\frac{( 1-\gamma)(1-g)}{\gamma g(1+g)}}Y_{V} \tag{46}\]
Therefore the four receiver's output can be written as
\[X_{R}^{\rm(out)} =X_{R}+\sqrt{\frac{1-g}{1+g}}X_{B}^{\rm(HD)}e^{-r_{B}}+\sqrt{ \frac{(1-\gamma)(1-g)}{\gamma g(1+g)}}X_{V},\] \[X_{\rm M}^{\rm(out)} =X_{\rm M}+GX_{R}-\sqrt{\frac{g(1-g)}{1+g}}X_{B}^{\rm(HD)}e^{- r_{B}}\] \[+\sqrt{\frac{(1-\gamma)(1-g)}{\gamma g(1+g)}}X_{V}\] \[Y_{R}^{\rm(out)} =Y_{R}-GY_{\rm M}+\sqrt{\frac{g(1-g)}{1+g}}Y_{A}^{\rm(HD)}e^{-r_{A}}\] \[+\sqrt{\frac{(1-\gamma)(1-g)}{\gamma g(1+g)}}Y_{V}\] \[Y_{\rm M}^{\rm(out)} =Y_{\rm M}-\sqrt{\frac{g(1-g)}{1+g}}Y_{A}^{\rm(HD)}e^{-r_{A}}+ \sqrt{\frac{(1-\gamma)(1-g)}{\gamma g(1+g)}}Y_{V} \tag{47}\]
It can be seen from the above equation that the ideal transformation in Eq.(6) can be retrieved in the limit of large squeezing parameters \(r_{A}\), \(r_{B}\) and unity homodyne detection efficiency \(\gamma\).
We now consider the effect of finite squeezing and inefficient homodyne detection on the overall number of added noise photons. From the previous equation the total noise power can be calculated as
\[\big{\langle}[X_{R}^{\rm(out)}]^{2}\big{\rangle} =\frac{\langle X_{R}^{2}\rangle}{2}+\frac{1-g}{2(1+g)}\big{\langle} [X_{B}^{\rm(HD)}]^{2}\big{\rangle}+\frac{\langle X_{V}^{2}\rangle}{2}\] \[+\frac{(1-\gamma)(1-g)}{2\gamma g(1+g)}\langle X_{V}^{2}\rangle\] \[\big{\langle}[X_{\rm M}^{\rm(out)}]^{2}\big{\rangle} =\frac{\langle X_{\rm M}^{2}\rangle}{2}+\frac{g(1-g)}{2(1+g)} \big{\langle}[X_{B}^{\rm(HD)}]^{2}\big{\rangle}+\frac{G^{2}\langle X_{B}^{2}\rangle }{2}\] \[+\frac{(1-\gamma)(1-g)}{2\gamma g(1+g)}\langle Y_{V}^{2}\rangle+ \frac{\langle Y_{V}^{2}\rangle}{2}\] \[\big{\langle}[Y_{R}^{\rm(out)}]^{2}\big{\rangle} =\frac{\langle Y_{B}^{2}\rangle}{2}+\frac{g(1-g)}{2(1+g)}\big{\langle} [Y_{A}^{\rm(HD)}]^{2}\big{\rangle}+\frac{\langle X_{V}^{2}\rangle}{2}\] \[+\frac{(1-\gamma)(1-g)}{2\gamma g(1+g)}\langle X_{V}^{2}\rangle+ \frac{G^{2}\langle Y_{\rm M}^{2}\rangle}{2}\] \[\big{\langle}[Y_{\rm M}^{\rm(out)}]^{2}\big{\rangle} =\frac{\langle Y_{M}^{2}\rangle}{2}+\frac{\langle Y_{V}^{2}\rangle }{2}+\frac{g(1-g)}{2(1+g)}\big{\langle}[Y_{A}^{\rm(HD)}]^{2}\big{\rangle}\] \[+\frac{(1-\gamma)(1-g)}{2\gamma g(1+g)}\langle Y_{V}^{2}\rangle \tag{48}\]
The homodyne inefficiency and the finite squeezing of the utilized squeezer circuits enter the picture as extra added noise, and we have added the 3dB loss penalty due to measuring non commuting quadratures. By recalling that the value of the beamsplitter parameter is \(0<g<1\), and that we operate in the low brightness regime, it can be seen that the bath noise power dominates and the overall noise power is the expression derived earlier in Eq. (21).
For practical considerations, the following experimental parameters can be assumed for a physical implementation of the CNOT receiver. A realistic squeezing level that can be achieved in a laboratory is approximately \(-3\)dB, that is, \(e^{-2r}\approx 0.5\), such that \(r=\ln 2/2\) [29, 52]. It is also possible to achieve up to \(-6\)dB experimentally [53]. As for practical gain values when a JPA is utilised as a squeezing resource, the optimal gain values are approximately \(15\pm 3\)dB. In this regime the JPA remains quantum limited, i.e., it only adds half a quantum of noise. Finally, in the optical domain the homodyne detector's efficiency is approximately \(\gamma\approx 0.97\) [12]. Recently, graphene-based microwave bolometers [49, 51] have enjoyed similar successes, and thus in either case it is reasonable to assume near-ideal operation. Thus, by adding the squeezing and vacuum noise contributions in Eq. (48), we estimate that the device internal noise adds approximately \(2\) noise photons.
## Acknowledgment
The authors are grateful to Maximilian Reichert, Roberto Di Candia, Robert Jonsson, and Stefano Pirandola for discussions at various stages of this work.
|
2308.11482 | On Polymer Statistical Mechanics: From Gaussian Distribution to
Maxwell-Boltzmann Distribution to Fermi-Dirac Distribution | Macroscopic mechanical properties of polymers are determined by their
microscopic molecular chain distribution. Due to randomness of these molecular
chains, probability theory has been used to find their micro-states and energy
distribution. In this paper, aided by central limit theorem and mixed Bayes
rule, we showed that entropy elasticity based on Gaussian distribution is
questionable. By releasing freely jointed chain assumption, we found that there
is energy redistribution when each bond of a molecular chain changes its
length. Therefore, we have to change Gaussian distribution used in polymer
elasticity to Maxwell-Boltzmann distribution. Since Maxwell-Boltzmann
distribution is only a good energy description for gas molecules, we found a
mathematical path to change Maxwell-Boltzmann distribution to Fermi-Dirac
distribution based on molecular chain structures. Because a molecular chain can
be viewed as many monomers glued by covalent electrons, Fermi-Dirac
distribution describes the probability of covalent electron occupancy in
micro-states for solids such as polymers. Mathematical form of Fermi-Dirac
distribution is logistic function. Mathematical simplicity and beauty of
Fermi-Dirac distribution make many hard mechanics problems easy to understand.
Generalized logistic function or Fermi-Dirac distribution function was able to
understand many polymer mechanics problems such as viscoelasticity [1],
viscoplasticity [2], shear band and necking [3], and ultrasonic bonding [4]. | Lixiang Yang | 2023-08-22T14:54:57Z | http://arxiv.org/abs/2308.11482v1 | On Polymer Statistical Mechanics: From Gaussian Distribution to Maxwell-Boltzmann Distribution to Fermi-Dirac Distribution
###### Abstract
Macroscopic mechanical properties of polymers are determined by their microscopic molecular chain distribution. Due to randomness of these molecular chains, probability theory has been used to find their micro-states and energy distribution. In this paper, aided by central limit theorem and mixed Bayes rule, we showed that entropy elasticity based on Gaussian distribution is questionable. By releasing freely jointed chain assumption, we found that there is energy redistribution when each bond of a molecular chain changes its length. Therefore, we have to change Gaussian distribution used in polymer elasticity to Maxwell-Boltzmann distribution. Since Maxwell-Boltzmann distribution is only a good energy description for gas molecules, we found a mathematical path to change Maxwell-Boltzmann distribution to Fermi-Dirac distribution based on molecular chain structures. Because a molecular chain can be viewed as many monomers glued by covalent electrons, Fermi-Dirac distribution describes the probability of covalent electron occupancy in micro-states for solids such as polymers. Mathematical form of Fermi-Dirac distribution is logistic function. Mathematical simplicity of Fermi-Dirac distribution makes many hard mechanics problems easy to understand. Generalized logistic function or Fermi-Dirac distribution function was able to understand many polymer mechanics problems such as viscoelasticity [1], viscoplasticity [2], shear band and necking [3], and ultrasonic bonding [4].
**Keywords: Gaussian Statistical Mechanics; Polymer Physics; Mixed Bayes Rule; Conditional Probability Density Functions; Maxwell-Boltzmann Distribution; Fermi-Dirac Distribution**
## 1 Introduction to Entropy Elasticity
Mechanical behaviors of rubbers are largely determined by their micro-structures [5; 6; 7; 8]. Since rubbers are microscopically made of randomly distributed molecular chains, probability theory is introduced to understand their stress-strain relationship [9]. When rubbers are stretched under external forces, molecular chains will get stretched and change their configurations. After a molecular
chain changes from one configuration to another one, its entropy will change too.
In entropy rubber elasticity, entropy, instead of internal energy, is considered as the driving force of the mechanical response [10]. Hence, the force-displacement or stress-strain relationship is derived from the relationship between the probability distribution of molecular chains and the entropy of the system [11].
For a single chain, with one end fixed, the probability to find the other end of the chain in a small box obeys a Gaussian, or normal, distribution. Flory [12] derived the Gaussian distribution for a random freely jointed chain in one dimension. The distance of a molecular chain from one end to the other end is projected onto the x-axis and can be treated as a random variable \(S_{n}\). The probability distribution of \(S_{n}\) can be considered as the corresponding excess of heads in a series of coin tosses. In other words, the random variable \(S_{n}\) is a binomial random variable with probability of each trial 0.5. In chapter X appendix A of [12], Flory also showed that the random variable \(S_{n}\) becomes normally distributed if the number of bond vectors of a molecular chain goes to infinity. The one dimensional Gaussian distribution density function of the end to end distance of a molecular chain in the \(x\) direction was given as
\[G(x,N)=(\,\frac{3}{2\pi Nb^{2}})\,^{1/2}{\bf e}^{-3x^{2}/2Nb^{2}}, \tag{1.1}\]
where \(N\) is the number of Kuhn segments of a molecular chain and \(b\) is given as Kuhn length. By assuming Gaussian distributions in \(x\), \(y\) and \(z\) directions are independent of each other, one dimensional Gaussian distribution can be extended to become three dimensional Gaussian distribution [13]
\[G({\bf r},N)=(\,\frac{3}{2\pi Nb^{2}})\,^{3/2}{\bf e}^{-3r^{2}/2Nb^{2}}, \tag{1.2}\]
where \(r\) is the magnitude of the chain displacement vector \({\bf r}\), i.e.
\[r^{2}=x^{2}+y^{2}+z^{2}.\]
In fact, what Flory derived can be proved mathematically by using the central limit theorem. Based on the central limit theorem, the number of segments of each molecular chain does not need to go to infinity for the distribution to become Gaussian. In many cases, the Gaussian distribution is approached even with only a few segments, provided the probability distribution function of each segment is symmetric.
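As an illustration of this point, the following Python sketch (added here for illustration; the bond-level distribution and parameter values are our own assumptions) samples the x-projections of a freely jointed chain with only 20 bonds and compares the end-to-end statistics with the Gaussian predicted by the central limit theorem.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bonds, n_chains, b = 20, 100_000, 1.0   # assumed example values

# x-projection of each bond: uniform on [-b, b] (a symmetric, non-Gaussian choice)
x_steps = rng.uniform(-b, b, size=(n_chains, n_bonds))
S_n = x_steps.sum(axis=1)                 # end-to-end x-distance of each chain

# CLT prediction: mean 0, variance n * Var(X_i) = n * b**2 / 3
sigma = np.sqrt(n_bonds * b**2 / 3)
print("sample mean:", S_n.mean(), " sample std:", S_n.std(), " CLT std:", sigma)

# Compare the histogram with the Gaussian density
hist, edges = np.histogram(S_n, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
gauss = np.exp(-centers**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
print("max abs deviation from Gaussian density:", np.abs(hist - gauss).max())
```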
Once probability of end to end distance of a molecular chain is obtained, entropy of this single chain is related to its micro-states by Boltzmann's entropy equation. If all molecular chains are taken to deform affinely with the macroscopic deformation, probability distribution of a single molecular chain can be extended to an entire polymer system. For an entire polymer system, Boltzmann's entropy equation can be written as
\[S\;=\;k_{B}ln\Omega, \tag{1.3}\]
where \(S\) is the entropy of the entire polymer system and \(k_{B}=1.3807\times 10^{-23}JK^{-1}\) is Boltzmann's constant. \(\Omega\) is the number of micro-states that this system can have. Originally developed for the ideal gas, Boltzmann's entropy equation was first introduced by Werner Kuhn in the 1930s to model rubber elasticity. This method was later adopted by Paul Flory and many other researchers [14, 15, 16, 17]. This entropy concept still dominates rubber elasticity today. Boltzmann's entropy equation can be derived by using a statistical representation of temperature and the first law of thermodynamics. If a system goes to equilibrium, it will select the macroscopic configuration that maximizes the number of micro-states. If we assume that there is only one single micro-state for each value of energy, or no degeneracy in the system, the number of micro-states \(\Omega\) is proportional to the probability distribution of energy in the system. Therefore, the number of micro-states \(\Omega\) will depend on the system energy, which comes from each micro-state of the system.
After entropy of the entire system is obtained, Helmholtz free-energy, \(F\), and entropy, \(S\), are related by definition
\[F=U-TS, \tag{1.4}\]
where \(U\) is internal energy and \(T\) is temperature. Entropy elasticity assumes that the change of Helmholtz free energy is mostly due to the change of entropy. Therefore, internal energy can be approximately viewed as a constant. Its contribution to force and stretch is nearly zero. By ignoring the effect of \(U\), Eq.(1.4) is changed to
\[F=-TS. \tag{1.5}\]
Combining Eq.(1.3) and Eq.(1.5), we obtain
\[F=-k_{B}Tln\Omega. \tag{1.6}\]
Helmholtz free energy is defined as the free energy at constant temperature and fixed volume. For most solids, these constraints are approximately true [18]. So the strain energy density in solids, defined as \(W\), is considered to be the same as the Helmholtz free energy. With the strain energy density given as the Helmholtz free energy, stress can be obtained by taking the derivative of the strain energy density with respect to strain or stretch. If the strain energy density is based on the Gaussian distribution of molecular chains, the Neo-Hookean hyperelastic model is obtained. If the strain energy density is based on Langevin chains, the 3 chain model, 4 chain model, and 8 chain model can be obtained. The Gent model can be shown to have a close relationship with the 8 chain model [19]. The 8 chain model or Arruda-Boyce model [20] is widely used as a nonlinear spring in many large deformation viscoelastic models [7, 21, 22]. Recently, many researchers have worked on rubber elasticity with other non-Gaussian chain models [23, 24]. Their mathematical structures are very complicated, but their ideas are still based on the entropy elasticity framework. To fulfill particular applications or needs, many other phenomenological hyperelastic models such as Ogden's hyperelastic model [25] or many \(I_{1}\) based hyperelastic models [26] were also built by assuming some form of strain energy density function to fit experimental data. No statistical probability distributions of microscopic molecular chains are considered in these models.
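As a concrete instance of the procedure described above (differentiating the strain energy density with respect to the stretch), the short Python sketch below evaluates the Neo-Hookean stress for incompressible uniaxial extension. The textbook form \(W=(\mu/2)(I_{1}-3)\) and the shear modulus value are standard illustrative choices made here, not results taken from the cited references.

```python
mu = 0.5e6   # shear modulus in Pa (assumed example value)

def neo_hookean_uniaxial_stress(lam, mu=mu):
    # Incompressible uniaxial tension with I1 = lam**2 + 2/lam:
    # Cauchy stress sigma = mu*(lam**2 - 1/lam), nominal stress P = mu*(lam - 1/lam**2)
    cauchy = mu * (lam**2 - 1.0 / lam)
    nominal = mu * (lam - 1.0 / lam**2)
    return cauchy, nominal

for lam in (1.0, 1.5, 2.0, 3.0):
    cauchy, nominal = neo_hookean_uniaxial_stress(lam)
    print(f"stretch {lam}: Cauchy {cauchy:.3e} Pa, nominal {nominal:.3e} Pa")
```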
## 2 Deep Into Entropy Elasticity
Although entropy elasticity has achieved considerable success in explaining the mechanical behavior of rubbers, many concepts used in entropy elasticity need a second thought. In entropy elasticity, Helmholtz free energy is related to the number of micro-states by Eq.(1.6). As we know, the number of micro-states is very hard to calculate. For example, consider a molecular chain which has 100 links, where each link can elongate or contract and is therefore assumed to have 2 micro-states. Hence, there are \(2^{100}=1.267\times 10^{30}\) micro-states. But this molecular chain has only 101 macro-states. One macro-state can be elongation of all 100 links. Another macro-state could be contraction of 1 link and elongation of the remaining 99 links. Each macro-state is related to a different number of micro-states. It is almost impossible to find all micro-states. But it is possible to find the probability of each macro-state. If we define the ratio of the number of micro-states of a macro-state \(i\) to the total number of micro-states as the probability \(P_{i}\), the entropy of the system can be given by the Gibbs entropy equation
\[S=-k_{B}\sum_{i}P_{i}lnP_{i}. \tag{2.1}\]
By using Gibbs entropy equation, Helmholtz free energy is given as
\[F=-k_{B}TlnZ, \tag{2.2}\]
where \(Z\) is the partition function given as \(Z=\sum_{i}{\bf e}^{-E_{i}/k_{B}T}\) and \(E_{i}\) is the energy of each macro-state. So we have two Helmholtz free energy equations, i.e., Eq. (1.6) and Eq. (2.2). One is based on the total number of micro-states and the other is based on the partition function. In entropy elasticity, the Gaussian probability function of the end to end length vector of a molecular chain is found. Should it be viewed as the probability of each micro-state relative to the total number of micro-states? Or should it be viewed as the probability of the number of micro-states of a macro-state \(i\) relative to the total number of micro-states, i.e., \(P_{i}\)? This is not clear. Different considerations will lead to different ways of calculating the Helmholtz free energy. Since the Helmholtz free energy is used as the strain energy density for hyperelastic materials, different stress-strain relationships will be obtained. When we solve conservation laws with different constitutive models, this will lead to different results [27; 28; 29].
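Returning to the 100-link example above, the following sketch (illustrative Python, not from the original text) enumerates the 101 macro-states, computes each \(P_{i}\) as the fraction of the \(2^{100}\) micro-states it contains, and evaluates the Gibbs entropy of Eq. (2.1); for comparison it also prints the Boltzmann entropy obtained by counting all micro-states as in Eq. (1.3).

```python
import math

k_B = 1.3807e-23      # Boltzmann's constant in J/K
n_links = 100

# P_i = C(n, i) / 2**n : fraction of micro-states in which i links are elongated
P = [math.comb(n_links, i) / 2**n_links for i in range(n_links + 1)]
print("number of macro-states:", len(P), " sum of P_i:", sum(P))

# Gibbs entropy over macro-states, Eq. (2.1)
S_gibbs = -k_B * sum(p * math.log(p) for p in P if p > 0)
# Boltzmann entropy if every micro-state is counted, Eq. (1.3)
S_boltzmann = k_B * n_links * math.log(2)
print("Gibbs entropy over macro-states    :", S_gibbs)
print("Boltzmann entropy over micro-states:", S_boltzmann)
```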
The second concern about entropy elasticity is that it first deals with only one molecular chain via a probability distribution. Then an affine or non-affine [30] or phantom assumption [8] is made to extend the result to an entire molecular chain system. Entropy elasticity assumes that a single molecular chain can have more configurations in a relaxed state than after being stretched. Since it assumes that these configurations are proportional to micro-states, the micro-states
and entropy will be reduced after stretching. If a single molecular chain is fully stretched to its contour length, it will have only one configuration. Therefore, its entropy is the smallest. This thought is mostly based on the freely jointed chain assumption. In freely jointed chains, each link can rotate freely and is rigid. No internal energy change happens when a freely jointed chain changes its configurations.
However, in real polymer materials, polymer chains are not freely jointed. Rotation of each bond is associated with energy redistribution [31]. Bonds between monomers are not rigid. Bond length is not constant. It can elongate or contract under an external force. Bond length variation will be related to energy change of the entire molecular chain.
Because the bond number of a molecular chain is the same before and after stretching, the micro-states of a molecular chain should remain the same even if the molecular chain has more configurations in a relaxed state. Since these micro-states can be empty or occupied by electrons, the difference between a molecular chain before and after stretching is the redistribution of the probability of electron occupancy in these micro-states. If more electrons occupy higher energy micro-states, the entire energy of the system will be higher. That is, after being stretched by an external force, more high energy micro-states are occupied by electrons in each bond of the molecular chains. Therefore, more configurations of a molecular chain do not necessarily mean more micro-states. Entropy does not necessarily decrease due to stretched chains.
In entropy elasticity, since entropy is related to micro-states by Boltzmann's entropy equation, internal energy is completely separated from entropy and ignored. This is probably true for a freely jointed rigid chain. In fact, for a real chain,
Figure 1: Schematic illustration of a molecular chain (orange color) in a relax state (a) and in a stretched state (b).
those electron filled micro-states will have internal energy. Those empty micro-states will have no internal energy. Because entropy is related to micro-states by Boltzmann entropy equation, entropy should be a function of internal energy [32].
In entropy elasticity, the stiffening effect in the stress-strain curve is attributed to a molecular chain reaching its contour length, or full length. However, even under a large tensile force, a molecular chain can hardly reach its contour length. This is because there are many randomly distributed chains, instead of one molecular chain, inside polymers. The stiffening effect in the stress-strain curve of rubber materials can be viewed as more entanglements formed after being stretched, see Fig. (1). Before being stretched, each molecular chain can hardly feel the existence of other molecular chains. After putting a tensile force on it, molecular chains will move like snakes, or reptations [2]. They will start to have interactions with other molecular chains. Many entangled forces will be built up after stretching. These gradually built-up entanglement forces are the cause of the stiffening effect which is shown in the rubber force-stretch curve. It can be physically considered as more defects along a molecular chain, or as some parts of a molecular chain getting out of its reptation tube if a size of the reptation tube is assumed [33]. If the force continues to increase, eventually the molecular chain will break. Fracture will happen [34].
Another piece of evidence that makes people believe that entropy plays a role in rubber elasticity is that rubbers become stiffer while metals become softer after heating them up [35]. Flory called this force the force of retraction. When we observe an abnormal phenomenon, there can be many explanations. Entropy can be one of them. But we think it is due to fluctuation of molecular chains instead of entropy change. Unlike crystalline metals, molecular chains are very long with random distribution. With temperature rising, each molecular chain can easily feel the other molecular chains. More temporary bonds, or dynamical bonds in transient networks [24; 36], can be formed due to large amplitude vibrations of molecular chains at high temperature. Therefore, bond density increases with temperature. This will increase the Young's modulus of rubbers. But this change is small and temporary. The overall change of Young's modulus from the glassy zone to the rubbery zone is decreasing [1].
Entropy elasticity was originally built by considering static loading with constant temperature and fixed pressure. In fact, the constraints of the Helmholtz free energy are constant temperature and fixed volume. Hence, it is hard for the existing entropy elasticity to extend to the temperature and hydrostatic pressure domain [37]. It is also not easy to extend to the time domain. This is because entropy elasticity is built on equilibrium thermodynamics [38]. The first law of thermodynamics tells us that energy is conserved, but it does not tell us how fast or how slowly the energy flows. To extend statistical elasticity to the time, temperature and hydrostatic pressure dependent regime, we believe that it will rely on how to describe the bond deformations of molecular chains [39]. To consider this, a statistical description of covalent electrons should be used, since each bond is formed by covalent electrons. Borrowed from the statistical description of gas molecules, the current entropy elasticity theory is not appropriate for understanding these bond movements. The big difference between gas molecules and covalent electrons in
solids is that the distance between two gas molecules is very large compared with the distance between two electrons. Physically, the distance between two gas molecules is much larger than the "thermal" de Broglie wavelength. The moving path of one gas molecule can be distinguished from that of another gas molecule. They are treated as distinguishable particles. The distance between two covalent electrons is comparable to or smaller than the "thermal" de Broglie wavelength. They are fermions and obey the Pauli exclusion principle. They are treated as indistinguishable particles.
For distinguishable particles such as gas molecules, classical statistical methods such as Boltzmann's entropy equation and the Maxwell-Boltzmann distribution are good candidates. For solids like rubbers, where molecular chains are glued by covalent electrons, which are indistinguishable particles, quantum statistical methods such as Fermi-Dirac statistics need to be used.
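As a simple numerical contrast between the two kinds of statistics, the sketch below (ours, illustrative; the temperature and reference energy are arbitrary) evaluates the Maxwell-Boltzmann factor and the Fermi-Dirac occupation probability for the same energies. The Fermi-Dirac form is the logistic function referred to in the abstract.

```python
import numpy as np

k_B = 1.3807e-23          # Boltzmann's constant in J/K
T = 300.0                 # temperature in K (assumed example value)
mu = 0.0                  # chemical potential / reference level (assumed)

E = np.linspace(-5, 5, 5) * k_B * T   # energies around the reference level

maxwell_boltzmann = np.exp(-(E - mu) / (k_B * T))           # classical occupancy factor
fermi_dirac = 1.0 / (np.exp((E - mu) / (k_B * T)) + 1.0)    # fermion occupancy (logistic form)

for e, mb, fd in zip(E / (k_B * T), maxwell_boltzmann, fermi_dirac):
    print(f"(E-mu)/k_BT = {e:+.1f}:  MB = {mb:8.3f},  FD = {fd:.3f}")
```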
In the next section, we will show how we can go from Maxwell-Boltzmann distribution to Fermi-Dirac distribution based on a molecular chain structure.
## 3 From Gaussian distribution to Maxwell-Boltzmann distribution to Fermi-Dirac distribution
First, let us examine Eq.(1.1) in detail. The one dimensional Gaussian distribution for the end to end distance of a molecular chain, e.g., Eq.(1.1), can be written as
\[G(x,N)=\frac{{\bf e}^{-3x^{2}/2Nb^{2}}}{\left(\frac{2\pi Nb^{2}}{3}\right){}^ {1/2}}. \tag{3.1}\]
or
\[G(x,N)=\frac{{\bf e}^{-kx^{2}/2}}{L} \tag{3.2}\]
if we let \(k=3/Nb^{2}\) and \(L=\left(\begin{array}{c}2\pi Nb^{2}\\ 3\end{array}\right){}^{1/2}\). Eq.(3.2) can be further written as
\[G(x,N)=P(\mbox{microstate in x direction})=\frac{{\bf e}^{-E_{x}}}{\sum_{i}{ \bf e}^{-E_{i}}} \tag{3.3}\]
if we let the energy be \(E_{x}=kx^{2}/2\) and \(L=\sum_{i}{\bf e}^{-E_{i}}\).
However, from classical thermodynamics, we know that the probability of micro-state \(r\) can be written as
\[P(\mbox{microstate r})=\frac{{\bf e}^{-E_{r}/k_{B}T}}{\sum_{i}{\bf e}^{-E_{i}/ k_{B}T}} \tag{3.4}\]
where \(\sum_{i}{\bf e}^{-E_{i}/k_{B}T}\) is the partition function.
Comparing Eq.(3.3) with Eq.(3.4), we see that \(k_{B}T\) is missing from the one-dimensional Gaussian distribution for the end-to-end distance of a molecular chain, Eq.(1.1). Should we add \(k_{B}T\) to the Gaussian distribution, Eq.(1.1)? The answer is yes. In the following, we illustrate why \(k_{B}T\) needs to be added.
If each bond segment length vector of a molecular chain is projected onto the x-axis, the segment length in the x-direction, \(X_{i}\), can be considered a random variable, and the total chain length in the \(x\) direction is the sum \(S_{n}=X_{1}+...+X_{n}\). Each random variable \(X_{i}\) measures the distance between two monomers in the x-direction, see Fig.(2). Let us assume that all the \(X_{i}\) are independent and have the same mean \(\nu\) and the same standard deviation \(\sigma\). When the temperature increases, the monomer vibration amplitude increases. Assuming that all monomers vibrate about their equilibrium positions, the average distance between two monomers remains the same, but the variation of the distance between two monomers increases. In other words, the mean \(\nu\) of the \(X_{i}\) remains the same while their standard deviation \(\sigma\) increases with temperature. Even though we do not know the exact probability distribution of the \(X_{i}\), the central limit theorem (A.2) tells us that the total chain length becomes Gaussian distributed, with standardized variable
\[Z_{n}=\frac{S_{n}-n\nu}{\sqrt{n}\sigma}. \tag{3.5}\]
So the random variable \(S_{n}\), which measures the end-to-end length of a molecular chain in the x-direction, is normally distributed with mean \(n\nu\) and standard deviation \(\sqrt{n}\sigma\). Since \(\sigma\) depends on the temperature \(T\), we can take \(\sigma^{2}\varpropto k_{B}T\).
In real polymer materials, different molecular chains can have different bond numbers. As long as the unit number \(n\) of a molecular chain is larger than a critical number, all primary molecular chains converge to a normal distribution. This critical number is usually very small if the probability distribution function of each bond length is symmetric. If primary molecular chains are cross-linked, the central limit theorem can also be used: as long as the unit number of a cross-linked molecular chain is large enough, the sum of the random variables along each path of the cross-linked chain can be viewed as normally distributed.
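This convergence can be checked with a minimal numerical sketch (in Python; the bond-projection distribution and the values of \(n\), \(\nu\) and \(\sigma\) below are illustrative assumptions, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

n_bonds = 200        # segments per chain (illustrative)
n_chains = 50_000    # number of simulated chains
nu, sigma = 0.0, 1.0 # mean and standard deviation of each bond projection X_i

# Any symmetric bond-projection distribution works; a uniform one is used here.
half_width = np.sqrt(3.0) * sigma
X = rng.uniform(nu - half_width, nu + half_width, size=(n_chains, n_bonds))
S_n = X.sum(axis=1)  # end-to-end projection of each chain

# Central limit theorem: S_n is approximately Normal(n*nu, sqrt(n)*sigma)
print(S_n.mean(), n_bonds * nu)               # both ~ 0
print(S_n.std(), np.sqrt(n_bonds) * sigma)    # both ~ 14.1
```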
Let \(\bar{\nu}\) be the average of the means of all molecular chains, and let \(\bar{\sigma}\) be the average of the standard deviations of all molecular chains, defined through \(\bar{\sigma}^{2}=k_{B}T/k_{0}\).
Figure 2: Schematic illustration of vibration of monomers due to temperature change. Vibration amplitude \(A\) is proportional to \(k_{B}T\).
Here, \(k_{0}\) is a proportionality constant which is not necessarily equal to \(k=3/Nb^{2}\). The Gaussian distribution function of \(S_{n}\) can then be written as
\[G(x,N)=\frac{\sqrt{k_{0}}}{\sqrt{2\pi k_{B}T}}{\bf e}^{-\frac{k_{0}}{2k_{B}T}(x- \bar{\nu})^{2}}. \tag{3.6}\]
By using the equipartition theorem and letting \(\bar{\nu}\to 0\), Eq.(3.6) becomes the Maxwell-Boltzmann distribution in a Cartesian coordinate system,
\[G(x,N)=\sqrt{\frac{m}{2\pi k_{B}T}}{\bf e}^{-\frac{mv_{x}^{2}}{2k_{B}T}}, \tag{3.7}\]
where \(m\) is the mass of a gas molecule and \(v_{x}\) is its velocity in the x-direction. The speed distribution of gas molecules can be obtained from the Maxwell-Boltzmann distribution in a polar coordinate system. The Maxwell-Boltzmann distribution can thus be viewed as a temperature-dependent Gaussian distribution. Here, we use the Maxwell-Boltzmann distribution to describe the probability distribution of the end-to-end length vector of a molecular chain under an external force. When an external force is applied to the molecular chain, electrons move from low-energy states to high-energy states. The probability distribution of the end-to-end length vector of the molecular chain can still be described by the Maxwell-Boltzmann distribution, but the energy stored inside the deformed molecular chain is related to the electron occupancy of each quantum micro-state. The probability of finding an electron occupying a given quantum micro-state needs to be described by the Fermi-Dirac function. Once we know the end-to-end length vector of a molecular chain, the energy stored through electron occupancy of each quantum micro-state can be found. In other words, we can derive the Fermi-Dirac function from the Maxwell-Boltzmann distribution, Eq.(3.6).
When an external force is applied to a polymer material, this force does work on the material. Some of that work becomes strain energy absorbed by the molecular chains, and the configuration of the molecular chains changes. Because of the randomness of the molecular chains, we do not know how each bond changes its rotation angle and bond length; we only know that the probability distribution of the end-to-end length vector of each polymer chain obeys the Maxwell-Boltzmann distribution.
Let us first consider a signal transmission problem. If an unknown signal (with possible inputs assumed to be 1/2 and -1/2) is sent into a polymer material, this signal passes through the molecular chains. The measured output signal is contaminated by the Maxwell-Boltzmann distributed molecular chains, which can be viewed as random noise. The output signal is thus a combination of random noise and the input signal and becomes a continuous random signal. If we can measure this continuous random output signal, the probability of which input signal was sent, given the measured output, is given by the mixed Bayes rule.
In analogy with this signal problem, consider the length change of a molecular chain under an input force. Let us assume that an unknown force is applied at
the starting point of the chain, see Fig.(3). The displacement at the starting point due to this unknown force is defined as a discrete random variable \(K\). Without loss of generality, let the probability be \(0.5\) that this end moves to the left (\(k=-0.5\)) and \(0.5\) that it moves to the right (\(k=0.5\)). That is, the discrete random variable \(K\) is defined as
\[P_{K}(k)=\begin{cases}0.5,\text{if }k=0.5\\ 0.5,\text{if }k=-0.5\end{cases} \tag{3.8}\]
Recall that \(S_{n}\) is the continuous random variable for the end-to-end length vector of a molecular chain under an external force, see Fig.(3). After imposing a force on the starting point of a molecular chain, we want to locate its ending point. Let us define a continuous random variable \(Y\) that measures the total length from the ending point of the deformed molecular chain to the starting point of the initially undeformed chain, see Fig.(3). Mathematically, \(Y=K+S_{n}\); that is, the total length is the combination of the input displacement and the end-to-end length of the molecular chain.
First, let us consider two extreme cases in which the molecular chains are extremely stiff or extremely soft. If a molecular chain is very stiff, the variation of each bond length is close to \(0\), and the temperature-dependent Gaussian distribution (Maxwell-Boltzmann distribution) of the chain length \(S_{n}\) collapses to a delta function with constant mean and nearly zero variance. For example, if the input displacement \(K\) is \(1/2\) and the length of the molecular chain \(S_{n}\) is \(10\), the output length \(Y\) will be \(10.5\). In other words, the probability that the input displacement is \(1/2\) given a measured output length of \(10.5\) is \(1\). Equivalently, the probability that the available micro-states are occupied by electrons in this very stiff molecular chain is \(1\).
If a molecular chain is very soft, the variance of the temperature-dependent Gaussian distribution approaches infinity, and the probability distribution of the end-to-end distance vector of the chain can be viewed as a uniform distribution from negative infinity to positive infinity. Hence, it is hard to determine the mean value of \(S_{n}\). Assuming the input displacement is \(1/2\) and the length of a molecular
Fig. 3: Representation of a Gaussian molecular chain in two dimensions under a random external force. \(K\), \(S_{n}\) and \(Y\) are considered as random variables.
chain is 10, the output length cannot be determined at all because the variance of this very soft chain is infinite. In other words, the probability that the input displacement is 1/2, given any measured output length, is 0. This suggests that the available micro-states are all empty in this very soft molecular chain, i.e., the probability that the available micro-states are occupied by electrons is 0.
Between these two extreme cases, when a molecular chain obeys the temperature-dependent Gaussian distribution, the probability that the available micro-states are occupied by electrons lies between 0 and 1.
Therefore, the probability of finding the input displacement \(K\) imposed on a molecular chain, given the measured total length \(Y\) of the deformed chain, is the probability that the available micro-states are occupied by electrons.
Even though we consider only one molecular chain configuration here, Eq.(3.6) applies to all molecular chains since the average mean and average variance are used.
Using the mixed Bayes rule, the probability of finding the input displacement \(k\) given a measured output length \(y\) is the posterior probability distribution \(P_{K|Y}(k|y)\); it also tells us the probability that the available micro-states are occupied by electrons. To obtain the posterior probability distribution \(P_{K|Y}(k|y)\), the conditional probability distribution \(f_{Y|K}(y|k)\) and the marginal distribution \(f_{Y}(y)\) need to be calculated first. Since the measured output length is \(Y=K+S_{n}\), \(f_{Y|K}(y|k)\) is a conditional Maxwell-Boltzmann distribution. That is, if we know the exact displacement at the starting point of a molecular chain, the probability of finding the other end of the chain is given by the conditional Maxwell-Boltzmann distribution \(f_{Y|K}(y|k)\). \(f_{Y}(y)\) is the probability of finding the other end of the molecular chain regardless of the input displacement.
A brief review of the mixed Bayes rule is given in the appendix. Bayesian inference has also been used to evaluate parameters of constitutive models and polymer properties [40, 41].
To proceed, the conditional Maxwell-Boltzmann distribution \(f_{Y|K}(y|k)\) can be written as
\[f_{Y|K}(y|k)\:=\:\sqrt{\frac{k_{0}}{2\pi k_{B}T}}e^{-\frac{k_{0}}{2k_{B}T}(y- \bar{\nu}-k)^{2}}. \tag{3.9}\]
In order to use the mixed Bayes rule, \(f_{Y}(y)\) is calculated as
\[f_{Y}(y)\:=\:\frac{1}{2}\sqrt{\frac{k_{0}}{2\pi k_{B}T}}e^{-\frac{k_{0}}{2k_{B}T}(y-\bar{\nu}-\frac{1}{2})^{2}}+\frac{1}{2}\sqrt{\frac{k_{0}}{2\pi k_{B}T}}e^{-\frac{k_{0}}{2k_{B}T}(y-\bar{\nu}+\frac{1}{2})^{2}}. \tag{3.10}\]
Substituting Eq.(3.8), Eq.(3.9) and Eq.(3.10) into Eq.(A.7), we obtain the probability that the input displacement is \(-1/2\) given a measured total length \(y\):
\[P_{K|Y}(-\frac{1}{2}|y)\:=\:\frac{1}{e^{\frac{k_{0}(y-\bar{\nu})}{k_{B}T}}+1}. \tag{3.11}\]
By letting \(\varepsilon=k_{0}y\) and \(\mu=k_{0}\bar{\nu}\), Eq.(3.11) is recast as
\[P_{K|Y}(-\frac{1}{2}|y)\;=\;\frac{1}{e^{\frac{\varepsilon-\mu}{k_{B}T}}+1}, \tag{3.12}\]
which is the Fermi-Dirac distribution function. The Fermi-Dirac distribution is the probability that an available micro-state at energy level \(\varepsilon\) is occupied by an electron; \(\mu\) is called the chemical potential or Fermi energy. The Fermi-Dirac distribution can be used to calculate the energy stored in each bond of a molecular chain if we know the density of micro-states. Similarly, the probability that the input displacement is \(1/2\) given a measured total length \(y\) is
\[P_{K|Y}(\frac{1}{2}|y)\,=\,\frac{1}{e^{-\frac{\varepsilon-\mu}{k_{B}T}}+1}, \tag{3.13}\]
which is the probability of finding a hole in an available energy micro-state. The probability of finding an electron in an available micro-state is one minus the probability of finding a hole in that micro-state, i.e.,
\[\frac{1}{e^{\frac{\varepsilon-\mu}{k_{B}T}}+1}\,=\,1-\frac{1}{e^{-\frac{ \varepsilon-\mu}{k_{B}T}}+1}. \tag{3.14}\]
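The equivalence between the mixed-Bayes posterior and the logistic form of Eq.(3.11)/(3.12) can also be verified numerically; the short Python sketch below uses illustrative values of \(k_{0}\), \(k_{B}T\) and \(\bar{\nu}\) (not values from the text):

```python
import numpy as np

k0, kBT, nu_bar = 2.0, 1.5, 0.3        # illustrative parameter values
y = np.linspace(-5.0, 5.0, 201)        # measured total lengths

def f_y_given_k(y, k):
    # conditional Maxwell-Boltzmann distribution f_{Y|K}(y|k), Eq.(3.9)
    return np.sqrt(k0 / (2 * np.pi * kBT)) * np.exp(-k0 * (y - nu_bar - k) ** 2 / (2 * kBT))

f_minus = f_y_given_k(y, -0.5)
f_plus = f_y_given_k(y, +0.5)
f_y = 0.5 * f_minus + 0.5 * f_plus     # marginal distribution, Eq.(3.10)

posterior = 0.5 * f_minus / f_y        # mixed Bayes rule: P_{K|Y}(-1/2 | y)
fermi_dirac = 1.0 / (np.exp(k0 * (y - nu_bar) / kBT) + 1.0)   # Eq.(3.11)

print(np.max(np.abs(posterior - fermi_dirac)))   # ~ 0: the two forms agree
```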
In polymer materials, cross-links and entanglements coexist. As discussed, cross-links are bonds formed by covalent electrons, which can be modeled by the Fermi-Dirac distribution. Entanglements need to be considered separately: they involve weak forces compared to those of covalent bonds and can be viewed as a transient network whose density and velocity are easily changed by an external force. A possible treatment of entanglements is to consider them as defects along molecular chains [2]; in other words, the defect density and defect velocity change under an external force. The idea behind this treatment is consistent with tube and reptation models [33]: any part of a molecular chain that gets out of the assumed tube under an external force is treated as a defect. The mathematical structure of the Fermi-Dirac distribution is also much simpler and cleaner than that of the slip-link model introduced by Edwards and Vilgis [42, 43].
Mathematically, the Fermi-Dirac distribution function is a logistic function. The logistic function can be generalized to include time, temperature, and hydrostatic pressure effects. By inserting the generalized Fermi-Dirac distribution into our stress-strain equation, many mechanical behavior problems of polymers can be easily understood. For example, shear band and necking formation is localized plastic deformation occurring when the defect effect is larger than the bonding effect [3]. Ultrasonic vibration is considered an inverse process of shear band generation [4]. Knee injury can be prevented by erasing physical aging, which can be understood with a time-dependent version of the generalized Fermi-Dirac distribution [44]. The generalized logistic function has been used to model linear viscoelasticity [1]. The time-, temperature-, and hydrostatic-pressure-dependent viscosity of polymers can be understood with the generalized Fermi-Dirac distribution. Aided by defect concepts
in molecular chains, modeling the nonlinear viscoplasticity of polymers is straightforward [2]. The model predictions are validated by many experiments shown in our previous papers [1; 2].
## 4 Conclusion
In this article, we showed how to go from the temperature-independent Gaussian distribution to the temperature-dependent Gaussian distribution, or Maxwell-Boltzmann distribution. The Fermi-Dirac distribution was then derived from the Maxwell-Boltzmann distribution based on a molecular chain deformed by an external force. Since the distance between gas molecules is much larger than the "thermal" de Broglie wavelength, the Maxwell-Boltzmann distribution should be used to describe the energy distribution of gas molecules. On the other hand, the Fermi-Dirac distribution is needed to capture the energy distribution of covalent bonds in polymer molecular chains. Coupled with defects or entanglements along molecular chains, the generalized Fermi-Dirac distribution can be used to explain most if not all polymer mechanical problems. Since human tissue is made of molecular chains of different bond stiffness, this method will be extended to explain intracranial brain tissue dynamics in a forthcoming paper.
## Data Availability
Data sharing not applicable - no new data generated.
|
2308.10944 | Towards an astronomical foundation model for stars with a
Transformer-based model | Rapid strides are currently being made in the field of artificial
intelligence using Transformer-based models like Large Language Models (LLMs).
The potential of these methods for creating a single, large, versatile model in
astronomy has not yet been explored. In this work, we propose a framework for
data-driven astronomy that uses the same core techniques and architecture as
used by LLMs. Using a variety of observations and labels of stars as an
example, we build a Transformer-based model and train it in a self-supervised
manner with cross-survey data sets to perform a variety of inference tasks. In
particular, we demonstrate that a $\textit{single}$ model can perform both
discriminative and generative tasks even if the model was not trained or
fine-tuned to do any specific task. For example, on the discriminative task of
deriving stellar parameters from Gaia XP spectra, we achieve an accuracy of 47
K in $T_\mathrm{eff}$, 0.11 dex in $\log{g}$, and 0.07 dex in $[\mathrm{M/H}]$,
outperforming an expert $\texttt{XGBoost}$ model in the same setting. But the
same model can also generate XP spectra from stellar parameters, inpaint
unobserved spectral regions, extract empirical stellar loci, and even determine
the interstellar extinction curve. Our framework demonstrates that building and
training a $\textit{single}$ foundation model without fine-tuning using data
and parameters from multiple surveys to predict unmeasured observations and
parameters is well within reach. Such "Large Astronomy Models" trained on large
quantities of observational data will play a large role in the analysis of
current and future large surveys. | Henry W. Leung, Jo Bovy | 2023-08-21T18:00:05Z | http://arxiv.org/abs/2308.10944v3 | # Towards an astronomical foundation model for stars with a Transformer-based model
###### Abstract
Rapid strides are currently being made in the field of artificial intelligence using Transformer-based models like Large Language Models (LLMs). The potential of these methods for creating a single, large, versatile model in astronomy has not yet been explored. In this work, we propose a framework for data-driven astronomy that uses the same core techniques and architecture as used by LLMs. Using a variety of observations and labels of stars as an example, we build a Transformer-based model and train it in a self-supervised manner with cross-survey data sets to perform a variety of inference tasks. In particular, we demonstrate that a _single_ model can perform both discriminative and generative tasks even if the model was not trained or fine-tuned to do any specific task. For example, on the discriminative task of deriving stellar parameters from _Gaia_ XP spectra, we achieve an accuracy of 47 K in \(T_{\rm eff}\), 0.11 dex in \(\log g\), and 0.07 dex in \(\rm[M/H]\), outperforming an expert XGBoost model in the same setting. But the same model can also generate XP spectra from stellar parameters, inpaint unobserved spectral regions, extract empirical stellar loci, and even determine the interstellar extinction curve. Our framework demonstrates that building and training a _single_ foundation model without fine-tuning using data and parameters from multiple surveys to predict unmeasured observations and parameters is well within reach. Such "Large Astronomy Models" trained on large quantities of observational data will play a large role in the analysis of current and future large surveys.
keywords: methods: data analysis - stars: fundamental parameters
## 1 Introduction
Astronomy is on a path of ever-expanding data sets collected by large surveys like _Gaia_(Gaia Collaboration et al., 2016) and Sloan Digital Sky Survey (SDSS; Blanton et al., 2017; Kollmeier et al., 2017) now, and Rubin Observatory Legacy Survey of Space and Time (LSST; Ivezic et al., 2019) and Euclid (Laureijs et al., 2011) in the future, across multiple areas such as spectroscopy, photometry, and time-domain observations. Due to the rapidly expanding size of the data sets, data-driven analysis has become increasingly popular among astronomers. But so far, bespoke data-driven models are created for every separate task and data-driven models that focus on cross-survey and/or cross-domain analyses (like the work of Leung et al., 2023) are only trained on the intersection of the relevant data instead of their union, due to the lack of flexibility in model inputs and outputs. Data-driven science using deep neural networks in particular requires big data sets for training and it would be ideal if we can train such models on most of the available data to truly create a synergistic understanding of multiple surveys.
A Transformer is a neural network architecture that relies on the attention mechanism (Vaswani et al., 2017). Currently, there is ongoing rapid development of Transformer-based models in the guise of Large Language Models (LLMs). LLMs like OpenAI's GPT series of models (OpenAI, 2023) and Google's BERT (Devlin et al., 2018) employ instruction fine-tuning and they have been shown to be able to do some tasks previously thought to only be possible with a general intelligence model (Bubeck et al., 2023). Science communities have been critical of these big natural language models due to the problem of hallucinations (i.e., the model returns false information that sounds convincing) and because they can easily fail at simple math and logic problems. These issues mean that naively applying existing natural-language focused models to science is difficult. Moreover, they focus on natural language applications such as chat-bots, which involve completely different kinds of data from the floating point astronomical data. Nevertheless, besides some uses of the attention mechanism (e.g., Zou et al., 2020; Pimentel et al., 2023; Rozanski et al., 2023; Moreno-Cartagena et al., 2023; Dagli, 2023), LLMs have already been used in astronomy using their natural language focus to build, e.g., assistants for literature review (Ciuci & Ting, 2023) and for creating human-readable summaries of data in a data science platform for time-domain astronomy (Coughlin et al., 2023). The big commercial state-of-the-art models are trained and controlled by big companies, where data quality and access are not guaranteed.
Nevertheless, Transformer-based models have great potential for building a model that learns general knowledge about scientific areas such as observations of stars, the example we focus on in this paper. Such a model would obviously be useful in many scientific applications. We propose here that such models can be built by adopting the
same core technologies and ideas of LLMs using Transformers, but applying them to tasks that are not focused on natural language.
Instead of giving a model sentences like in an LLM, we give the model floating-point data directly along with the type of the data, and the model then vectorizes this input into floating-point vectors (the model's embedding). In our star-model application, the input can be a list of known parameters of an astronomical object, such as the flux over a certain wavelength range, to build a context of that particular object. Because the model sees a large amount of data during training and learns how different parameters describing stars are related to each other, we can then ask the model about other unmeasured parameters. Hence, in an implicit way, the model acquires embodied knowledge of stars.
There are a few key advantages specifically to adopting and adapting the technology behind Transformer-based models and the ideology behind LLMs and using this to train science-specific models rather than simply fine-tuning existing models:
**Expert knowledge:** Big, commercially-trained models are powerful but usually focus on natural languages or commercial domain data which are inherently different from astronomical data. Fine-tuning existing state-of-the-art models does not fundamentally solve the issue that existing models contain little astronomy knowledge that cannot be augmented without significant re-training with an astronomy focus.
**Big Data across surveys:** The flexibility of Transformer-based models like LLMs in the input and output nodes allows us to train on a significant portion of astronomical data, because we can train on all data even when the data set is highly incomplete due to, e.g., different survey footprints or photometric bands. Unlike the usual data-driven methods, we can use the union set of all surveys instead of their intersection. Transformer-based models provide ways to handle variable length input data by masking empty spaces with special masking tokens.
**Interpretability:** Because our Transformer-based model that we have implemented (see below) has generative capabilities (as shown in Figure 1 and will be demonstrated in Figure 7 and Figure 6), the model is as interpretable as traditional generative models used in astronomy (indeed, it is even _more_ interpretable, because it can generate both data and parameters). In the future, these kinds of Transformer-based models should carry similar capabilities as LLMs, where you can ask for reasoning even if you have no way to know exactly how a deep neural network comes up with the answer in terms of neural pathways. In the end, an expert (such as an astronomer) who is using the model should be the one who decides whether to trust the result from data-driven models trained on a large amount of data. However, we emphasize that this is no different in current data-driven models.
**Versatility:** Transformer-based models have demonstrated a versatile range of solutions to perform downstream tasks like few-shot learning, rapid fine-tuning (e.g., Hu et al., 2021), and agents (e.g., LangChain Chase, 2022). With a Transformer-based foundation model for astronomy, astronomers can fine-tune it for specialized tasks or new data sets (we give an example of this in Appendix C).
**Portability:** The deep learning community is currently putting a large amount of research effort into Transformer-based models such as LLMs and their role in artificial intelligence. There are new developments every day on LLMs and their applications. If we can rephrase our machine learning problem into a setting where Transformer-based models can excel, current and future developments like successors to Transformers using the attention algorithm can potentially be easily ported to our foundation model for astronomy.
In this work, we present a novel perspective on the use of Transformer-based models in astronomy by constructing a model that utilizes the core ideas and technologies of LLMs without involving natural language. We train a proto-foundation model that is not trained on specific input/output pairs for specific supervised and unsupervised tasks, but rather is trained in a self-supervised manner with a big data set to contain general knowledge of the data set, in our case observations and properties of stars in the Milky Way. From this context of a star, one can later request information from the model about other properties or observations of the same star with predictive uncertainty from the model. As a proof-of-concept of this new research area, we specifically build a Transformer-based model for stars using cross-survey, cross-domain data from APOGEE, _Gaia_, _2MASS_ and dust maps. Our approach allows us to think about building a big foundation model for astronomy, its potential role in artificial intelligence, and its application in astronomy (Smith and Geach, 2023).
This paper is organized as follows. We provide the motivation behind using a Transformer-based model in Section 2 and give an overview of the relevant deep-learning methodology and terminology. Section 3 describes details of our model implementation. Section 4 describes the datasets that we use to train our model. Section 5 describes the training process of our model. Section 6 describes the results of our trained model on various tasks. Section 7 discuss our models dependencies on various combination of stellar parameters and observations as well as the embedding learned by our model, the reliability and expectation of our model, and the model's role as a foundation model. Finally, Section 8 gives a conclusion to this paper.
## 2 Transformer-based Neural Network
Transformer-based neural networks (hereafter Transformers; Vaswani et al., 2017) refer to a kind of neural network architecture that utilizes an algorithm called the attention mechanism (Bahdanau et al., 2014), which allows the model to attend to different parts of an input sequence and transform them into a context vector for downstream layers. The attention mechanism is especially important in natural language tasks like neural machine translation, as words have different meanings in different contexts; the attention mechanism provides neural pathways to learn how to get the context.
In this section, we discuss the motivation and overview of our framework in Section 2.1, then briefly introduce the basic concept of tokens and embedding in Section 2.2, and the attention mechanism that is the core component of Transformer-based neural networks in Section 2.3. The detailed implementation of our model is given in Section 3. For in-depth introductions of this terminology and methodology, we refer interested readers to an online open-source interactive book _Dive into Deep Learning1_(Zhang et al., 2021).
Footnote 1: [https://d2l.ai/](https://d2l.ai/)
### Dense NN to Transformer-based Encoder-Decoder
The main motivation for having a foundation model for stars like the one we implement in this paper (see Figure 1) is to give the model the ability to perform multiple tasks directly without fine-tuning. To achieve this, we need to build the model using various components like an embedding and the attention mechanism. It is critical to understand the motivation behind and function of each of them. We introduce these components here by imagining that we want to "evolve" a simple dense neural network (that simply maps _Gaia_ XP coefficients to stellar parameters) to the Transformer-based
encoder-decoder model that we develop in this work. Dense neural networks are commonly used in astronomy for data-driven machine learning problems or as emulators of complex functions.
Simple dense neural networks fail if one shuffles the order of the input XP coefficients, even though the input then has essentially the same information content. The dense neural network fails, because while the information content is the same after shuffling, there is no way for the model to know which input value corresponds to which coefficient. Unlike in natural or programming languages, the actual ordering of the coefficients does not intrinsically matter, because the
Figure 1: High-level overview of our foundation model for stars. The top part of the figure displays the overall goal of training a big single neural network for astronomy that can turn a combination of data into useful information in the left part, while in the right part the high-level architecture of the Transformer-based model for stars in this paper is shown (for architectural details see Figure 2). The bottom part of the figure gives an overview of the versatility of our model on multiple use cases, showing how a _single_ neural network without any fine-tuning can perform many useful tasks, such as “generative” and “discriminative” tasks in the traditional machine learning framework, with all weights in the model participating in those tasks. For each of these example use case, we will discuss the performance later in Section 6.
information is the same regardless of the order. This is why we need to introduce an embedding, specifically an embedding that includes both the type of observation and its corresponding value. Unlike in natural language processing (NLP), we do not require a positional encoding to be part of the embedding although this may be useful in some contexts (e.g., Allam and McEwen, 2021; Donoso-Oliva et al., 2023; Rozanski et al., 2023; Moreno-Cartagena et al., 2023). With such an embedding, a simple dense neural network would be robust to random shuffling of the same inputs (i.e., always providing 110 XP coefficients, but in random order), because the model can learn to approximate pooling operations such as global averaging.
The addition of an embedding layer is not enough when the input content and length are not always the same, for example, when different subsets of inputs are used in random order. This is because it is difficult for neural networks to get the context if the inputs are not always the same, unlike the scenario we discussed in the previous paragraph where the 110 coefficients always exist in the inputs, just in random order. The encoder block of a Transformer using the attention mechanism--specifically self-attention on the input sequence--is needed to train such model. The attention mechanism provides a way for the model to get the "context" of the input sequence while handling variable length inputs by masking non-existing inputs using padding tokens.
If we further desire flexibility in what the output node returns rather than having a model that can only predict a fixed set of stellar parameters, we can add the decoder block of a Transformer to perform cross-attention between the context vector obtained from the input sequence and the output request vector. This allows the model to make predictions based on the context of the input sequence for a particular output request. This provides tremendous flexibility to the model and training sequence, because this allows the input node and the output node to be anything in the data set.
### Tokens and Embedding
Tokens and embedding are easiest to understand in the natural language context. A neural network can only accept floating-point numbers as inputs and a process to turn words into floating-point vectors (i.e., a word embedding) is needed. Let's say the goal is to have \(N\) unique words turned into vectors \(\mathbf{e}_{i}\) with an embedding dimension \(d_{i}\); all the words/vectors then make up the embedding database \(\mathbb{E}=\{\mathbf{e}_{1},\mathbf{e}_{2}...\mathbf{e}_{N}\}\) (e.g., word2vec for natural language; Mikolov et al., 2013). Each word can then be turned into ("tokenized" as) the integer that corresponds to its column index in \(\mathbb{E}\), which is used to retrieve the embedding corresponding to that word. In order to build up such a database \(\mathbb{E}\), each vector \(\mathbf{e}_{i}\) is part of the neural network optimization problem such that the model can learn the weights that make up a good embedding database (e.g., Bengio et al., 2003). Moreover, there are tokens that carry out special functions, like the padding token which is ignored by the attention mechanism and which is usually used to pad variable-length sentences to all have the same length.
The input astronomical data in our model pass through an embedding process similar to that used in NLP. Our tokenization process is fairly simple: we simply look up the exact match of the observation type in an indexed database of possible observation types, so we currently require no heuristics in this process of the kind used in NLP (which needs to deal with, e.g., typos and contractions). This also means that 64 tokens always mean 64 observations with their values (as opposed to NLP, where the number of tokens is generally bigger than the number of individual words).
We have implemented a custom embedding process for our data that we refer to as a "non-linear embedding". This works by embedding data of type \(x\) using
\[y_{x}=f(w_{x}\cdot M_{x})+w_{b,x} \tag{1}\]
where \(y_{x}\) is the final vectorized data for a particular type of data \(x\) with value \(M_{x}\) that will be fed into the model. The function \(f\) is a typical activation function used in neural networks, \(w_{x}\) are the trainable embedding weights, \(M_{x}\) is the value of the data, and \(w_{b,x}\) is a trainable bias weight; all of these are particular to the kind of data \(x\). Bias weights are a necessary part of the embedding, because without them, the neural network would receive a vector of zeros for all data with a value of zero and would have no way of knowing which observation has a value of zero. However, a data value of zero has different meanings for different kinds of data.
It is critical to embed the data value information in a non-linear way in the embedding instead of providing it as a scalar value, as we will discuss further in Section 2.3 and Section 7.1, because the attention on the data is expected to depend on both the type of observation and its value.
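As an illustration of Equation 1, a minimal PyTorch sketch of such a non-linear embedding could look as follows (the module, the names, and the dimensions are illustrative assumptions, not the exact implementation used here):

```python
import torch
import torch.nn as nn

class NonLinearEmbedding(nn.Module):
    """Embed (data type, floating-point value) pairs following Equation 1."""

    def __init__(self, n_types: int, embed_dim: int):
        super().__init__()
        # one "unit vector" w_x and one bias vector w_{b,x} per type of data x
        self.w = nn.Parameter(0.02 * torch.randn(n_types, embed_dim))
        self.w_b = nn.Parameter(torch.zeros(n_types, embed_dim))
        self.f = nn.GELU()

    def forward(self, token_ids: torch.Tensor, values: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq) integer data-type indices; values: (batch, seq) floats
        w_x = self.w[token_ids]            # (batch, seq, embed_dim)
        w_bx = self.w_b[token_ids]         # (batch, seq, embed_dim)
        return self.f(w_x * values.unsqueeze(-1)) + w_bx   # y_x = f(w_x * M_x) + w_{b,x}

embedding = NonLinearEmbedding(n_types=20, embed_dim=128)
ids = torch.tensor([[0, 5, 7]])            # three observation types of one star
vals = torch.tensor([[0.12, -1.3, 0.48]])  # their (standardized) values
print(embedding(ids, vals).shape)          # torch.Size([1, 3, 128])
```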
### Attention Mechanism
The attention mechanism is the core algorithm underlying Transformers. Here we discuss a specific implementation of attention called scaled dot-product attention that we use in our model. Unlike recurrent neural networks (RNNs), the attention mechanism can process a whole input sequence all at once and disregard the ordering of input sequences. The attention mechanism was mainly developed to solve the issue of distance-independent dependencies, that is, how elements relate to each other regardless of how far apart they are in an input sequence. Long-distance dependence in the input sequence is especially important in NLP, because words can have multiple meanings and context is needed to allow readers to accurately interpret the meaning of a sentence. That nearby words are more likely to be important for interpreting a given word is incorporated in Transformers using positional embeddings.
The core algorithm of the scaled dot-product attention, which has no trainable weights, requires three inputs--Queries \(Q\), Keys \(K\), and Values \(V\)--that are used to calculate a context vector \(C\). First, the output alignment scores \(S\) are calculated as the matrix multiplication between \(Q\) and transpose of \(K\)
\[S=QK^{T}\,, \tag{2}\]
which essentially gets the similarity between the query vectors \(Q\) and key vectors \(K\). The alignment scores \(S\) are then converted to attention weights \(A\) by
\[A=\mathrm{Softmax}\left(\frac{S}{\sqrt{d_{K}}}\right)\,, \tag{3}\]
where \(d_{K}\) is an integer constant equal to the number of dimensions in \(K\), which is usually the embedding dimension; the division by \(\sqrt{d_{K}}\) ensures approximately unit variance of \(S\), which improves the numerical stability of the gradients used during training when \(d_{K}\) is large. The Softmax function is applied to ensure that the attention weights for each input sequence sum up to one. Finally, the context vector \(C\) is computed by the matrix multiplication between \(A\) and \(V\)
\[C=\mathrm{Attention}(Q,K,V)=A\,V=\mathrm{Softmax}\left(\frac{QK^{T}}{\sqrt{d_{ K}}}\right)V\,, \tag{4}\]
where \(V\) is the input sequence Values vector.
These equations allow the attention mechanism to effectively act as a database software, where one can execute a query against a database of key-value pairs. In a simple regression setting, the attention weight
\(A\) effectively allows the model to determine the regions of the input sequence where the regression should be carried out for a given query \(Q\), where the keys \(K\) are the kind of data and the values \(V\) are the input regression floating-point values corresponding to each input key \(K\). This is similar to algorithms such as non-parametric kernel regression. Although in theory floating-point values can be passed via \(V\), in practice and in this work we encode the value information in the key vector \(K\) as well and pass it as both \(K\) and \(V\). We discuss this further in Section 7.1.
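Equations 2 to 4 amount to only a few lines of code; a small NumPy sketch (with illustrative shapes, not part of our model code) is:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Eqs. (2)-(4); Q, K and V have shape (n_tokens, d_K)."""
    d_K = K.shape[-1]
    S = Q @ K.T                                    # alignment scores, Eq. (2)
    A = np.exp(S / np.sqrt(d_K))                   # Eq. (3): row-wise softmax
    A = A / A.sum(axis=-1, keepdims=True)
    return A @ V                                   # context vector C, Eq. (4)

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))                        # 5 tokens, embedding dimension 8
C = scaled_dot_product_attention(x, x, x)          # self-attention: Q = K = V
print(C.shape)                                     # (5, 8)
```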
When the Queries \(Q\) and the Keys \(K\) are the same sequence, the
Figure 2: Neural network architecture of the model used in this work (right) compared to that of the standard Transformer-based encoder-decoder model (left; Vaswani et al., 2017). The encoder in our model has a very similar architecture to the encoder block in the standard Transformer, except that we do not use positional encoding, but rather perform a non-linear embedding of the type of observations and their corresponding values. The decoder blocks in our model have the same architecture as the encoder blocks in the encoder, because we do not need positional encoding for the decoder either in our application, which is different from the standard Transformer-based encoder-decoder where the decoder blocks are different from the encoder blocks, because they need to consider the current output sequence in addition to the context and request.
algorithm above is known as _self-attention_, which means attention of one sequence applied to itself. When the Queries \(Q\) and the Keys \(K\) are different sequences, the algorithm is known as _cross-attention_, because it calculates the attention of one sequence to a different sequence.
In practice, there are trainable weights and multiple attention mechanisms are applied in parallel to the same input sequence (referred to as "multi-head attention"), which allows the model to attend to different subspaces of the embedding. As part of a neural network, this mechanism constitutes a Multi-Head Attention layer (MHA, called MultiheadAttention in PyTorch(Paszke et al., 2019)). Within the MHA layer, there are \(h\) heads where \(h\) is a factor of \(d_{K}\) such that each attention head is responsible for a subspace with dimension \(d_{K}/h\) of the embedding space. Within each attention head \(i\) in MHA, \(Q\), \(K\) and \(V\) all have their own trainable weights \(W_{i}^{Q}\), \(W_{i}^{K}\) and \(W_{i}^{V}\) to linearly transform \(Q\), \(K\) and \(V\) by matrix multiplication before the attention mechanism is applied within each head \(i\). The results from all heads are then concatenated and are finally linearly transformed through matrix multiplication with another weights matrix \(W^{O}\) to obtain the final output. That is, we do
\[C=\text{Concat}\big{(}\text{Attention}\big{(}QW_{i}^{Q},KW_{i}^{K},VW_{i}^{V}\big{)}\big{)}\,W^{O}\,. \tag{5}\]
The attention mechanism also provides a way to handle variable-length inputs, that is, one can have missing data in the input sequences \(Q\), \(K\), and \(V\). Any missing parts of the sequence compared to a model's context length can be padded with an arbitrary constant and ignored in the attention mechanism by setting the attention to those parts to zero.
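As an illustration of Equation 5 and of this padding behaviour, the sketch below exercises PyTorch's built-in MultiheadAttention layer with a key-padding mask (the batch size and sequence lengths are illustrative assumptions):

```python
import torch
import torch.nn as nn

embed_dim, n_heads, context_len = 128, 16, 64
mha = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)

x = torch.randn(2, context_len, embed_dim)        # a batch of two input sequences
# pretend the second star only has 40 observations: mask its padded tail
key_padding_mask = torch.zeros(2, context_len, dtype=torch.bool)
key_padding_mask[1, 40:] = True                   # True = ignore this token

# self-attention: Q, K and V are all the same sequence
context, attn_weights = mha(x, x, x, key_padding_mask=key_padding_mask)
print(context.shape)                              # torch.Size([2, 64, 128])
print(attn_weights[1, :, 40:].abs().max().item()) # ~ 0: no attention on padded tokens
```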
## 3 Model Implementation
We implement our model using PyTorch (Paszke et al., 2019) with \(\approx 8.8\) million trainable parameters, including those from the embedding space. We have deliberately over-parameterized the model to achieve the best performance; a scaled-down model with \(\approx 1\) million trainable parameters should achieve similar results. A high-level overview of the model is given in Figure 1 and the detailed architecture is presented in Figure 2; our model mimics a typical Transformer-based encoder-decoder model (as opposed to other commonly-used architectures like the Vision Transformer (Dosovitskiy et al., 2020), which is a Transformer-based encoder model).
Our Transformer-based model shares a common limitation of Transformers, which is that there is a limit to how many tokens we can process at once; this limit is known as the "context window". Our model has a context window set to 64, that is, our model can only handle up to 64 tokens at once and we did not employ any methodology or mechanism to handle longer contexts. This is different from more traditional neural networks, where there are dedicated input neurons for each data type. We do not have that "luxury" here. But this limitation is actually important to demonstrate the model's performance and generality, because once we train on a large variety of data, it is infeasible to have dedicated neurons for every possible type of data, so the context window will eventually be much smaller than the possible types of data. For the embedding space, we use a dimension of 128. The computational cost of each scaled dot-product attention mechanism is \(O(n^{2}d_{K})\) where \(n\) is the context length and \(d_{K}\) is the embedding dimension (note that the shape of the \(Q\), \(K\), and \(V\) matrices is \(n\times d_{K}\); the \(O(n^{2}d_{K})\) scaling comes from the scaling of the involved matrix multiplications). Thus, the attention mechanism in each layer scales quadratically with the length of the context window in terms of required computational power. This is why it is expensive to increase the size of the context window.
Our encoder has two Transformer encoder blocks doing self-attention on the input sequence with intermediate dense layers of 1024 and 512 neurons, respectively, for the first and second block. Our decoder has three Transformer decoder blocks doing cross-attention between the final output from the encoder, which represents the context of a star, and the information request vector. These blocks have 3096, 1548, and 774 neurons in their intermediate dense layers, respectively. All of the Transformer blocks in the encoder and decoder have 16 attention heads, that is, each attention head is responsible for 8 embedding dimensions, and we then concatenate the results from all heads. To map from the output of the final decoder block to the final output, we use a three-layer dense neural network with 3096, 1548 and 774 neurons respectively. For the activation function of these dense layers (including those in the Transformer encoder/decoder blocks), we use Gaussian Error Linear Units (GeLU; Hendrycks and Gimpel, 2016) globally when appropriate. A dropout (Srivastava et al., 2014) rate of 0.1 is applied globally for the dense layers, except in the embedding space, during training, but is disabled during testing. There are Add & Norm operations in all Transformer blocks. This operation is composed of two operations, Add and Norm. The Add operation simply element-wise adds two inputs together, which usually are the input and the output of the previous layer, to allow residual connections (He et al., 2015). The Norm operation (which is called LayerNorm in PyTorch) performs normalization of the layer (Lei Ba et al., 2016), by transforming the input \(x\) to \(y\) given by
\[y=\frac{x-\text{Mean}[x]}{\sqrt{\text{Var}[x]+\epsilon}}\times\gamma+\beta\,, \tag{6}\]
where the gain \(\gamma\) and the bias \(\beta\) are trainable parameters initialized as ones and zeros, respectively, at the beginning of training, thus simply standardizing the inputs at the start of training. The parameter \(\epsilon\) is a small constant to ensure numerical stability.
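Equation 6 can be checked directly against PyTorch's LayerNorm at initialization (when \(\gamma\) is all ones and \(\beta\) all zeros); a short sketch:

```python
import torch
import torch.nn as nn

x = torch.randn(4, 128)
layer_norm = nn.LayerNorm(128)   # gamma initialized to ones, beta to zeros

# Equation 6 written out by hand (biased variance, gamma = 1, beta = 0)
y_manual = (x - x.mean(dim=-1, keepdim=True)) / torch.sqrt(
    x.var(dim=-1, unbiased=False, keepdim=True) + layer_norm.eps)

print(torch.allclose(layer_norm(x), y_manual, atol=1e-6))   # True
```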
The "unit vector" \(w_{x}\) in Equation 1 for the non-linear embedding is also used as the request "token" given to the decoder (shown in Figure 1 and Figure 2). We train the decoder such that its output is a scalar value answer corresponding to the requested information vector. This is similar to what happens in LLMs where the next words are determined based on the previous generated words (analog to the information request here) as well as the original user inputs (analog to the data input by users here). As long as we keep the request "token" consistent with the input tokens by reusing the "unit vector" \(w_{x}\), reuse the knowledge on the embedding learned by the encoder (see further discussion in Section 7.2 and Figure 14).
## 4 Datasets
To train our model, we construct a small, relatively low-dimensional data set that is perfect for fast proof-of-concept prototyping. The dataset we use is composed of data from APOGEE, _Gaia_, and dust maps. In theory, our model is flexible enough to train on a highly heterogeneous dataset, that is, a dataset with lots of missing data. In practice, training on the whole _Gaia_ data set, for example, would take a large amount of computational power. That is why we choose to only train on stars observed in APOGEE, where we can still construct a heterogeneous dataset to demonstrate the flexibility of our model, because some stars in APOGEE do not have good observations from APOGEE while having good _Gaia_ observations, while some other stars with good APOGEE observations are too dim for _Gaia_, and yet other stars are so dim that they do not have good _Gaia_ or APOGEE observations. The number of stars in our training set satisfying a number of criteria is given in Table 1. We discuss the APOGEE data in Section 4.1, _Gaia_ data in Section 4.2, and the data we use on interstellar extinction in Section 4.3. The training set has a total of 397,718 stars while the test set contains \(44,080\) stars, with stars ranging from dwarfs to bright giants in both data sets.
### APOGEE DR17
We use precise stellar parameter labels from the Apache Point Observatory Galactic Evolution Experiment data release 17 (APOGEE DR17; Blanton et al., 2017; Abdurro'uf et al., 2022). In particular, we use surface temperature \(T_{\rm eff}\), surface gravity \(\log g\), and the overall metallicity [M/H]. APOGEE is a high-resolution (\(R\sim 22,000\)), high signal-to-noise (\(>100\) per pixel typically) panoptic spectroscopic survey in the near infrared H-band wavelength region of \(1.5-1.7\mu m\)(Majewski et al., 2017; Wilson et al., 2019). The stellar parameter labels are derived from APOGEE spectra using the APOGEE Stellar Parameter and Chemical Abundances Pipeline (ASPCAP; Garcia Perez et al., 2016). We do not use the APOGEE spectra directly here for the training of our model.
In order for a star in our data set to have usable stellar labels, we require that the star does not have any bit set in STARFLAG and does not have bit 19 (M_H_BAD) or bit 23 (STAR_BAD) set in ASPCAPFLAG, to avoid the issue of obvious binary stars and of problematic stellar parameters derived from ASPCAP. Stars with bad APOGEE stellar labels are not excluded as long as they survive the cuts made to the other surveys discussed below; we simply set bad APOGEE labels to NaN.
In addition to using data from APOGEE itself, we also adopt _2MASS_ (Skrutskie et al., 2006) photometry in the \(J\), \(H\), and \(K_{s}\) bands and we calculate the _2MASS_ colors \(J-H\) and \(J-K\) and their uncertainties.
Figure 3: Properties of the training set. The top left panel displays a Kiel diagram of APOGEE \(T_{\rm eff}\) and \(\log g\) colored by [M/H] while the top right panel shows the on-sky distribution of the sample colored by the reddening \(E(B-V)\) from the Combined19 extinction map. The bottom left panel shows the fraction of stars for which their \(n^{\rm th}\) BP and RP XP coefficients are relevant, the bottom middle panel displays the distribution of _Gaia_ parallaxes (with some negative parallaxes), and the bottom right panel shows distribution of _Gaia_ apparent \(G\)-band magnitude, which together with the parallax is used to calculate the stellar luminosity. The bottom-left relevancy fraction starts at \(\approx 90\%\), because \(\approx 10\%\) of stars in the training set do not have XP spectra.
The _2MASS_ data are taken directly from the APOGEE data file rather than performing our own cross match.
### _Gaia_ DR3
The DR3 release from _Gaia_(Gaia Collaboration et al., 2016, 2023) provides an unprecedented number of low-resolution spectra along with precise astrometry. The _Gaia_ XP spectra (Carrasco et al., 2021; De Angeli et al., 2023; Montegriffo et al., 2023) are low-resolution (\(R\sim 30\) to 100), optical to near-infrared (330 to 1050 nm) spectra obtained from Blue Photometer (BP) and Red Photometer (RP) aboard the _Gaia_ spacecraft. Broad blue and red photometry \(G_{\rm BP}\) and \(G_{\rm RP}\) is also obtained from these spectra (Riello et al., 2021). Due to the large amount of data in various formats, we have developed a Python package called MyGaiaDB3 to manage _Gaia_ and ancillary photometric and spectroscopic data and to be able to query these using the SQL language on local machines.
Footnote 3: [https://github.com/henrysky/MyGaiaDB](https://github.com/henrysky/MyGaiaDB)
For a star to be considered to have a valid parallax that we use, the star must have Renormalised Unit Weight Error \(\rm{ruwe}<1.4\) and parallax uncertainty \(\sigma_{\varpi}<0.1\) mas to ensure a good astrometric solution. We adopt the _Gaia_ parallax zero-point correction \(\varpi_{\rm offset}\) from Lindegren et al. (2021) as implemented in the gaiadr3-zeropoint 4 Python package and apply the additional correction from Leung et al. (2023). The formula for the additional correction is as follows:
Footnote 4: [https://gitlab.com/icc-ub/public/gaiadr3_zeropoint](https://gitlab.com/icc-ub/public/gaiadr3_zeropoint)
\[\varpi_{\rm offset}=\begin{cases}Z_{\rm gd3}&\text{for $G\leq 13$}\\ Z_{\rm gd3}+5\;\mu\text{as}\;(G-13)&\text{for $13<G\leq 17$}\\ Z_{\rm gd3}+20\;\mu\text{as}&\text{for $G>17$}\end{cases}\, \tag{7}\]
where \(Z_{\rm gd3}(p_{\rm gd3},G,\,v_{\rm eff},\phi_{\rm eff},\beta)\) is the official _Gaia_ zero-point correction function (Lindegren et al., 2021) that depends on \(p_{\rm gd3}\), the number of astrometric parameters solved for (astrometric_params_solved), \(G\), the apparent _Gaia_ \(G\)-band magnitude phot_g_mean_mag, \(v_{\rm eff}\), the effective wavenumber nu_eff_used_in_astrometry, \(\phi_{\rm eff}\), the astrometrically estimated effective wavenumber pseudocolour, and \(\beta\), the ecliptic latitude ecl_lat. The final corrected parallax is then \(\varpi=\varpi_{\rm gd3}-\varpi_{\rm offset}\).
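A short sketch of applying Equation 7 (assuming the official zero point \(Z_{\rm gd3}\) has already been evaluated, e.g., with the gaiadr3-zeropoint package; the helper name and example values below are ours):

```python
import numpy as np

def parallax_offset(Z_gd3, G):
    """Equation 7: total parallax offset in mas, given the official zero point
    Z_gd3 (in mas) and the apparent G-band magnitude."""
    Z_gd3, G = np.asarray(Z_gd3, dtype=float), np.asarray(G, dtype=float)
    extra_mas = np.where(G <= 13.0, 0.0,
                         np.where(G <= 17.0, 5e-3 * (G - 13.0), 20e-3))
    return Z_gd3 + extra_mas

# corrected parallax: varpi = varpi_gd3 - varpi_offset (all in mas)
varpi_gd3 = np.array([1.20, 0.35])
varpi = varpi_gd3 - parallax_offset(Z_gd3=[-0.030, -0.020], G=[12.5, 16.0])
print(varpi)   # [1.23  0.355]
```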
Unlike usual stellar spectra, the _Gaia_ XP spectra were released as a set of 110 coefficients of an orthogonal basis function expansion of the spectrum, where lower-order coefficients explain large-scale features of the spectra (thus, information like \(T_{\rm eff}\)) and higher-order coefficients explain small-scale features including the noise. The data from each photometer has its own expansion, so there are 55 BP coefficients and 55 RP coefficients. The spectra as flux vs. wavelength are only directly available for a small subset of stars. Therefore, we use the coefficients directly and we simply treat each coefficient as a type of data. We normalize the XP coefficients by dividing each coefficient by the _Gaia_\(G\)-band apparent flux \(F_{G}\), given by
\[2.5\log_{10}F_{G}=G_{0,\rm{vega}}-G\, \tag{8}\]
where the _Gaia_ DR3 \(G\)-band photometric zero-point \(G_{0,\rm{vega}}=25.6873668671\) mag (Riello et al., 2021).
_Gaia_ also provides a relevancy parameter for the coefficients of each photometer for a given star, such that the relevancies \(r_{\rm BP}\) for BP and \(r_{\rm RP}\) for RP mean that only the BP coefficients up to the \(r_{\rm BP}\)th and the RP coefficients up to the \(r_{\rm RP}\)th are relevant to describe the XP spectrum of the star. In other words, only the first \(r_{\rm BP}\) BP coefficients and only the first \(r_{\rm RP}\) RP coefficients are significant compared to thresholds adopted in the _Gaia_ data pipeline. During the training process, we set coefficients deemed irrelevant to NaN, except for the last two BP and RP coefficients: we copy the relevancy from the third-to-last BP and RP coefficients to the last two BP and RP coefficients, because _Gaia_ never indicates that any of the last two BP and RP coefficients are relevant. Therefore, as long as _Gaia_ says that the third-to-last BP or RP coefficient is relevant, we assign the same relevancy to the last two BP and RP coefficients, respectively.
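A compact sketch of the preprocessing described by Equation 8 together with the relevancy masking (array names and shapes are our own assumptions, and the special treatment of the last two coefficients is omitted):

```python
import numpy as np

G0_VEGA = 25.6873668671   # Gaia DR3 G-band photometric zero point (mag)

def prepare_xp(bp_coeffs, rp_coeffs, G, r_bp, r_rp):
    """Normalize the 55+55 XP coefficients by the G-band flux (Equation 8)
    and set coefficients beyond the relevancy indices to NaN."""
    flux_G = 10.0 ** ((G0_VEGA - G) / 2.5)
    bp = np.asarray(bp_coeffs, dtype=float) / flux_G
    rp = np.asarray(rp_coeffs, dtype=float) / flux_G
    idx = np.arange(55)
    bp[idx >= r_bp] = np.nan   # irrelevant BP coefficients
    rp[idx >= r_rp] = np.nan   # irrelevant RP coefficients
    return bp, rp

bp, rp = prepare_xp(np.ones(55), np.ones(55), G=12.3, r_bp=30, r_rp=25)
print(np.sum(~np.isnan(bp)), np.sum(~np.isnan(rp)))   # 30 25
```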
Footnote 5: [https://github.com/jobovy/mwdust](https://github.com/jobovy/mwdust)
For a star to be considered to have a valid XP spectrum (a star can still be selected without an XP spectrum, but with good observations from other surveys), the star needs to have \(6.0<G<17.5\) and \(0<G_{\rm BP}-G_{\rm RP}<4\), as well as a BP and RP flux excess factor of less than 2.0 to avoid problematic stars with erroneous flux estimates.
To determine a stellar luminosity proxy, we adopt the luminosity-parallax relation from Anderson et al. (2018)
\[L_{\rm{pseudo}}=\varpi\,10^{\frac{1}{5}m_{\rm apparent}}=10^{\frac{1}{5}M_{\rm absolute}+2}\,, \tag{9}\]
where \(L_{\rm{pseudo}}\) is an alternative scaling of luminosity, a pseudo-luminosity, \(\varpi\) is the parallax in mas, and \(m_{\rm apparent}\) is the apparent magnitude. Previously, we have adopted a similar scaling to obtain machine-learned spectro-photometric distances (Leung and Bovy, 2019) in order to preserve the Gaussianity of the parallax error distribution in the pseudo-luminosity. Unlike the usual luminosity (or pseudo-luminosity) that has been used in the past, which is the true luminosity calculated from the extinction-corrected apparent magnitude and the distance, we simply calculate the pseudo-luminosity using the measured apparent magnitude without applying an extinction correction. Thus, the pseudo-luminosity \(L_{\rm{pseudo}}\) is contaminated by extinction, but the advantage of this approach is that any pseudo-luminosity we infer from the trained model can be directly converted to a distance using the _Gaia_-measured \(G\)-band apparent magnitude, without having to explicitly consider the \(A_{G}\) extinction.
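A brief sketch of Equation 9 and of converting a model-predicted pseudo-luminosity back into a distance (the helper names are ours; the parallax is in mas):

```python
import numpy as np

def pseudo_luminosity(parallax_mas, m_apparent):
    """Equation 9: L_pseudo = parallax * 10^(m_apparent / 5), parallax in mas."""
    return parallax_mas * 10.0 ** (m_apparent / 5.0)

def distance_pc(L_pseudo, m_apparent):
    """Invert Equation 9: implied parallax in mas, then distance in pc."""
    parallax_mas = L_pseudo / 10.0 ** (m_apparent / 5.0)
    return 1000.0 / parallax_mas

L = pseudo_luminosity(parallax_mas=2.0, m_apparent=11.0)
print(L, distance_pc(L, m_apparent=11.0))   # ~317, 500.0 pc
```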
### Dust Map
We obtain information on the interstellar extinction to stars in our sample from the Combined19 (hereafter C19) three-dimensional extinction map in the mwdust5 Python package (Bovy et al., 2016). This map combines data from the extinction maps of Drimmel03 (Drimmel et al., 2003), Marshall06 (Marshall et al., 2006), and
\begin{table}
\begin{tabular}{l r}
\hline Criteria & Number of Stars \\
\hline Stars in the training set & 396,718 \\
\hline Stars with _Gaia_ XP spectra & 363,100 \\
Stars with APOGEE \(T_{\rm eff}\) & 263,809 \\
Stars with APOGEE \(T_{\rm eff}\), \(\log g\) and \([\rm{M/H}]\) & 261,182 \\
Stars with APOGEE parameters without _Gaia_ XP spectra & 33,570 \\
Stars without _Gaia_ \(G\)-band luminosity & 12,065 \\
Stars without Combined19 \(E\,(B-V)_{\rm C19}\) & 11,947 \\
Stars without _2MASS_ colors & 2,031 \\
\hline
\end{tabular}
\end{table}
Table 1: The number of stars in the training set that satisfy certain criteria. A star does not need to have both XP spectra and stellar parameters to be included; it can be selected with only XP spectra or with only stellar parameters.
Green19 (Green et al., 2019). Green19 is a three-dimensional extinction map based on _Gaia_, _2MASS_ and Pan-STARRS 1 data that covers the sky with declination above \(-30^{\circ}\) and out to a distance of a few kpc. The Marshall06 map is a three-dimensional extinction map based on _2MASS_ that covers the sky at \(-100^{\circ}\leq l\leq 100^{\circ}\) and \(-10^{\circ}\leq b\leq 10^{\circ}\), mainly using giants; this map performs better than Green19 where available (Bovy et al., 2016) and we therefore use it wherever it is available. The Drimmel03 map is a three-dimensional extinction map based on COBE/DIRBE (Hauser et al., 1998) and we use it to fill in the areas not covered by Green19 or Marshall06, which is mainly the part of the sky surrounding the south celestial pole.
The reddening \(E\left(B-V\right)_{\rm C19}\) obtained from C19, which we use to train the model, is on the scale of the \(E\left(B-V\right)_{\rm SFD}\) of the SFD extinction map (Schlegel et al., 1998). This scale can be converted to true reddening \(E\left(B-V\right)_{\rm true}\) using
\[E\left(B-V\right)_{\rm true}=0.884\times E\left(B-V\right)_{\rm SFD}=0.884 \times E\left(B-V\right)_{\rm C19}, \tag{10}\]
where the factor of 0.884 comes from the measurement of Schlafly & Finkbeiner (2011). Because we train on \(E\left(B-V\right)_{\rm C19}\), any \(E\left(B-V\right)\) obtained from our trained model is on the SFD scale as well and needs to be converted to the true \(E\left(B-V\right)\) using the equation above. We assume that the uncertainty in \(E\left(B-V\right)_{\rm C19}\) comes solely from the uncertainty in the parallax used to obtain the distance at which the extinction map is evaluated. We set \(E\left(B-V\right)\) to NaN for (a) negative values arising from numerical instability in the extinction-map evaluation, (b) values over 10 mag, and (c) stars with unavailable distances. For stars with negative parallaxes but otherwise good _Gaia_ astrometry satisfying \(\mathrm{RUWE}<1.4\) and \(\sigma_{\varpi}<0.1\) mas, we set the distance to 99 kpc for the purpose of obtaining the extinction, such that we effectively obtain the extinction at infinity for these distant stars. For each star, we then draw 1,000 samples from the zero-point-corrected parallax uncertainty distribution and obtain the resulting distribution of \(\ln E\left(B-V\right)\). \(E\left(B-V\right)\) values that are exactly zero are set to a small number \(e^{-7}\) before taking the logarithm, both to prevent numerical issues and to prevent the model from predicting negative \(E\left(B-V\right)\). We then take the median and robust standard deviation of this distribution as the final logarithmic \(E\left(B-V\right)\) and its uncertainty that go into our model.
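A simplified sketch of this Monte-Carlo procedure for a single star is given below. It assumes the standard mwdust calling convention (a map instance evaluated at Galactic coordinates and a distance in kpc) and applies the 99-kpc fallback per draw rather than per star, so it is illustrative rather than an exact reproduction of our pipeline.

```python
import numpy as np
import mwdust

combined19 = mwdust.Combined19()   # returns E(B-V) on the SFD/C19 scale

def ln_ebv_and_uncertainty(l_deg, b_deg, parallax_mas, parallax_err_mas,
                           n_samples=1000, seed=0):
    """Median and robust scatter of ln E(B-V)_C19 driven by the parallax error."""
    rng = np.random.default_rng(seed)
    varpi = rng.normal(parallax_mas, parallax_err_mas, n_samples)
    dist_kpc = np.full(n_samples, 99.0)            # "infinity" for non-positive draws
    dist_kpc[varpi > 0.0] = 1.0 / varpi[varpi > 0.0]
    ebv = combined19(l_deg, b_deg, dist_kpc)
    ebv = np.where(ebv <= 0.0, np.exp(-7.0), ebv)  # zeros -> e^-7 before the log
    ln_ebv = np.log(ebv)
    med = np.median(ln_ebv)
    return med, 1.4826 * np.median(np.abs(ln_ebv - med))
```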
## 5 Training
We train our model with 118 unique kinds of tokens (that is, 118 types of data: 110 _Gaia_ XP coefficients, \(G_{\rm BP}-G_{\rm RP}\), \(J-H\), \(J-K\), \(T_{\rm eff}\), \(\log g\), \(\rm{[M/H]}\), logarithmic \(E\left(B-V\right)_{\rm C19}\), and the _Gaia_ \(G\)-band pseudo-luminosity described in Section 4). The entire training run involves \(\approx 16.4\) million non-NaN tokens from the \(\approx 397\)k stars in the training set. Data that are NaN are converted to a padding token (the same idea used to mask empty spaces in sentences in NLP), which is masked in the attention layers by always having no attention paid to those padding tokens. In order to train the model to obtain general knowledge of stars and of the data set, we train without a fixed learning objective in terms of input and output combinations: for each star we pick a random set of data as the input and a random data point as the output, which may or may not already be included in the input, with the selection made independently even for stars in the same batch during training. This training procedure necessitates being able to interact with the decoder, because we do not have a fixed output target during training (or testing). This way of training the model is similar to LLM pre-training, where the goal is to predict the next words given the starting words of sentences for OpenAI's GPT models, or to predict masked words in sentences for Google's BERT. But here we do not care about the relative ordering of the input data; we simply aim to learn the general relationships among data types.
Because we employ a context window length of \(n=64\) as discussed in Section 3, we randomly choose 5 to 64 elements from all 118 unique tokens for each star as inputs. For stars with fewer than 64 available non-NaN elements (or, for a handful of stars, fewer than 5), NaN elements are selected, but we always prioritize non-NaN elements for selection. Even if there are many stars with enough data to fill the context window, it is important to sometimes select only a few data elements in order to break co-adaptation between data points. For example, suppose C is predictable from A and B, but A has more predictive power for C than B does; if A and B always co-exist in the input sequence, the model might learn that A is related to C but never learn the relation between B and C. For the output node at each training epoch, we always choose one data type from among all non-NaN elements, excluding \(G_{\rm BP}-G_{\rm RP}\) and \(J-K\), because we use these to test the model's generalization capabilities (see Section 7.2).
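The per-star sampling of inputs and the output could look roughly like the following sketch; it is purely illustrative (names such as `held_out` are ours) and the released code may implement this differently.

```python
import numpy as np

def sample_inputs_and_output(values, token_names, rng,
                             context_len=64, held_out=("gbp_rp", "j_k")):
    """Pick a random input subset and a single output token for one star.

    values      : (118,) array of this star's data, NaN where unavailable
    token_names : list of the 118 token names, aligned with `values`
    """
    finite = np.isfinite(values)
    n_inputs = int(rng.integers(5, context_len + 1))      # 5 to 64 inputs
    good = rng.permutation(np.flatnonzero(finite))         # prefer real data
    bad = rng.permutation(np.flatnonzero(~finite))         # pad with NaN tokens if needed
    input_idx = np.concatenate([good, bad])[:n_inputs]
    # output: one available token, excluding the colors held out for testing
    candidates = [i for i in np.flatnonzero(finite) if token_names[i] not in held_out]
    output_idx = int(rng.choice(candidates))
    return input_idx, output_idx
```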
To get a sense of how many data type combinations the model sees during training, we note that 64 unique samples (ignoring the ordering) from among 118 possible tokens comprise \(\approx 1.6\times 10^{34}\) combinations, which is 25 orders of magnitude larger than the \(\approx 1.6\times 10^{9}\) used during training. It is therefore very unlikely that the model ever sees the same combination twice during training or that it sees the exact combinations on which we test the model below.
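The combinatorial count quoted above can be checked directly:

```python
from math import comb

# number of ways to choose 64 of the 118 token types, ignoring ordering
print(f"{float(comb(118, 64)):.2e}")
```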
As the learning-rate scheduler for the AdamW (Kingma & Ba, 2014; Loshchilov & Hutter, 2017) optimizer, we adopt Cosine Annealing with Warm Restarts (Loshchilov & Hutter, 2016), which is implemented as CosineAnnealingWarmRestarts in PyTorch. Cosine Annealing is a type of cyclical learning-rate schedule whose cycles follow a cosine-shaped learning rate. We set the initial learning rate of each cycle to \(1\times 10^{-4}\) and the final learning rate to \(1\times 10^{-10}\), with no early stopping applied. The training process uses eight of these cycles, where each cycle spans 512 epochs (thus, 4096 epochs in total). The adopted batch size is 1024, which gives the best balance between computational performance (a higher batch size requires more video-card memory) and model performance on the validation set (which is independent from the test set) during training.
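In PyTorch, this schedule can be set up roughly as follows; the model here is a stand-in and only the hyper-parameters are taken from the text, so this is a sketch rather than our exact training loop.

```python
import torch.nn as nn
from torch.optim import AdamW
from torch.optim.lr_scheduler import CosineAnnealingWarmRestarts

model = nn.Linear(8, 1)                         # stand-in for the Transformer
optimizer = AdamW(model.parameters(), lr=1e-4)   # initial learning rate of each cycle
scheduler = CosineAnnealingWarmRestarts(optimizer, T_0=512, eta_min=1e-10)

n_cycles, epochs_per_cycle = 8, 512
for epoch in range(n_cycles * epochs_per_cycle):
    # ... one pass over the training set with batch size 1024 goes here ...
    scheduler.step()                             # cosine decay, restarting every 512 epochs
```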
As the objective function \(J\), we use a robust version of the mean-squared loss that takes uncertainty in the training data into account, that is,
\[J(y,\tilde{y})=\frac{(\tilde{y}-y)^{2}}{2\,e^{s}}+\frac{s}{2}\,, \tag{11}\]
where \(y\) and \(\tilde{y}\) represent the ground truth and the prediction, respectively, and \(s=\ln\left(\sigma_{\rm known}^{2}+\sigma_{\rm predictive}^{2}\right)\), with \(\sigma_{\rm known}^{2}\) the uncertainty of the ground-truth data and \(\sigma_{\rm predictive}^{2}\) the model's predictive uncertainty. This is the same loss function as we have used in many of our previous works (e.g., Leung & Bovy, 2019). Further technical details of the training process are given in Appendix A.
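A direct PyTorch transcription of Equation (11) might look like the following; how the model parameterizes its predictive variance (here as a log-variance output) is our assumption.

```python
import torch

def robust_mse_loss(y_true, y_pred, log_var_pred, var_known):
    """Eq. (11): (y_pred - y_true)^2 / (2 e^s) + s / 2, averaged over the batch,
    with s = ln(sigma_known^2 + sigma_predictive^2)."""
    s = torch.log(var_known + torch.exp(log_var_pred))
    return torch.mean((y_pred - y_true) ** 2 / (2.0 * torch.exp(s)) + 0.5 * s)
```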
## 6 Results
To illustrate the general capabilities of our trained model, we apply our model to multiple tasks that it was not specifically trained or even fine-tuned to do. During the inference process all weights of the model are used, so users of the model do not need to specifically program a routine for each task. For example, models like conditional autoencoders can do discriminative and generative tasks, but only the encoder participates in the discriminative task and only
the decoder in the generative task. To the best of our knowledge, this is the first model in astronomy that has such multi-tasking capabilities with all weights participating. In the following subsections, we will show the result of applying our model to map stellar spectra to stellar parameters in Section 6.1, to determine stellar spectra for given stellar parameters in Section 6.2, to infer a portion of a stellar spectrum based on another portion in Section 6.3, to find relations between stellar parameters in Section 6.4, and to recover the interstellar extinction curve in Section 6.5.
Unless otherwise specified, a scatter \(\sigma\) refers to a robust measurement of the scatter that makes use of the median absolute deviation (MAD): \(\sigma=1.4826\) MAD. The factor of 1.4826 is applied such that, for a Gaussian distribution, \(\sigma\) equals the Gaussian standard deviation. All results, unless specified otherwise, are obtained either by applying the model to stars in the test set, which is independent from and identically distributed to the training set, or by using the model on data not corresponding to real stars.
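For reference, this robust scatter is simply:

```python
import numpy as np

def robust_sigma(x):
    """1.4826 * MAD; equals the standard deviation for a Gaussian distribution."""
    return 1.4826 * np.median(np.abs(x - np.median(x)))
```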
Figure 4: Results for predicting \(T_{\rm eff}\) from different combinations of XP spectra and photometry. The top-left panel shows an expertly trained XGBoost model as a baseline. The top-middle panel shows the performance of our Transformer-based model given the first 30 BP and RP coefficients that describe the XP spectra as well as the \(J-H\), \(J-K\) and \(G_{\rm BP}-G_{\rm RP}\) colors. These are the same inputs as for the XGBoost model, showing that our general model, which is not specifically tuned to these inputs, slightly outperforms a dedicated, expertly-trained XGBoost model. The top-right panel is similar, but uses only the first 5 BP and RP coefficients instead of the first 30. The bottom-left panel displays a task where the inputs are 64 values randomly selected among the 110 BP and RP coefficients and the colors for each star, given to the model in random order; the number 64 is chosen so as to fill the whole context window of the model. In the bottom-middle panel, we provide the model with the 30 least informative BP and RP coefficients and do not give any color, to see if the model correctly returns large uncertainties in this case where the predictions are poor. The bottom-right panel is similar, except that we now mix in \(T_{\rm eff}\), \(\log g\) and \(\mathrm{[M/H]}\) in random order for each star. In this case, the model successfully answers the information request to very high precision. Similar plots for \(\log g\) and \(\mathrm{[M/H]}\) are available in Figure 12 in Appendix B.
### Spectra to Stellar Parameters
Deep learning provides a way to quickly infer stellar parameters from stellar spectra directly. Here, as the first task we test our model on, we will check if the model can infer stellar parameters and their uncertainties reasonably for different sets of inputs. We again emphasize that unlike in traditional machine-learning methods for inferring stellar parameters from observed spectra, we are using a _single_ model where all weights participate in the inference and our model has never been specifically trained or even fine-tuned to predict the requested labels from the specific set of inputs that we test on.
Results for predicting \(T_{\rm eff}\) are shown in Figure 4. An extended version of this figure that also includes results for \(\log g\) and \(\left[{\rm M}/{\rm H}\right]\) is given in Figure 12 in Appendix B. To put our results in this section into context, we compare to a baseline model using traditional machine-learning methods. Specifically, we train dedicated XGBoost models for each specific set of inputs and outputs shown in Figure 4 to give an idea of how our _single_ NN model does on a variety of input combinations compared to more traditional methods. For the case where we input the first 30 BP and 30 RP coefficients as well as the colors, our model performs slightly better than the XGBoost model. The results we obtain using our model are also similar to those of works such as Rix et al. (2022), who specifically train an XGBoost model on all XP coefficients and on _2MASS_ and WISE photometry. As shown in the top-right panel of Figure 4, our model is robust even when only a few XP coefficients are provided as inputs. The bottom-left panel of Figure 4 shows that our model performs well even for a random subset of XP coefficients and colors. The model's predictive uncertainty also makes sense, in that predictions that are less accurate compared to the ground truth are assigned a higher uncertainty by the model.
Two particularly stringent tests of our model are given in the bottom-middle and bottom-right panels of Figure 4. In the middle panel, we give the model the 30 least informative BP and RP coefficients and in this case we find correctly that the predictions are inaccurate compared to the ground truth but also that the model knows that it is very uncertain about those predictions. In the right panel, we randomly mix in \(T_{\rm eff}\), \(\log g\) and \(\left[{\rm M}/{\rm H}\right]\) among these least informative XP coefficients on a star by star basis. In this case, the model returns almost perfect predictions with very low uncertainty, because the requested information is already present in the inputs. But we have to emphasize that the model has not been explicitly instructed to learn the identity mapping if the information request already exists in the input sequence. This behavior therefore only occurs within the boundaries of the training set's stellar parameter space. For example, if we give \(T_{\rm eff}=10,000\) K and request \(T_{\rm eff}\), the model returns a \(T_{\rm eff}\) that is far from the input \(T_{\rm eff}\) with large uncertainty.
Overall, our model performs well and about as well as can be expected given the data (as shown by comparing to the expertly-trained XGBoost models). Figure 5 shows additional performance tests of our model, where the inputs are the first 30 BP and RP coefficients and the \(J-H\), \(J-K\), and \(G_{\rm BP}-G_{\rm RP}\) colors along with one padding token, which we believe to be the best combination to fill up the 64 token context window for this task. The left panel displays the Kiel diagram of the test set without any cuts on the model uncertainty. The distribution is reasonable with expected trends and features such as the red clump. The middle panel shows that the model predicts \(E\left(B-V\right)\) to an accuracy of 0.04 mag on the scale of \(E\left(B-V\right)_{\rm C19}\), while the right panel demonstrates that we obtain spectro-photometric distances to an accuracy of 13.4%. Because our model can use any combination of inputs given to it, we can determine these parameters based on, e.g., XP coefficients and _2MASS_ colors or just XP coefficients when _2MASS_ colors are unavailable, without having to train separate models for these different input options.
Figure 5: Additional performance of our model for the same set of inputs as in the top-middle panel of Figure 4, but requesting additional information. The left panel in this figure shows our model’s \(T_{\rm eff}-\log g\) distribution color-coded by \(\left[{\rm M}/{\rm H}\right]\) without cuts on the model uncertainty (the performance of predicting \(\log g\) and \(\left[{\rm M}/{\rm H}\right]\) can be seen in Figure 12 in Appendix B), where one can clearly see features such as the metallicity–color gradient for giants and red-clump stars. The middle panel displays the model’s performance on \(E\left(B-V\right)\) color-coded by the model prediction uncertainty on \(E\left(B-V\right)\). The right panel shows the recovery of the spectro-photometric distances obtained using our model color-coded by the model prediction uncertainty.
### Stellar Parameters to Spectra
Given the flexibility in the inputs and outputs of our model, we can invert the machine-learning problem from Section 6.1 without needing to change the model. That is, we can simply provide different sets of inputs and output requests without needing to consider which parts of the model need to be involved, because we again use all the same weights inside the model as before. Here we explore the performance of our model when predicting optical stellar spectra for given sets of stellar parameters. Because we are working with XP coefficients, we use the GaiaXPy\({}^{\rm 6}\) Python package to convert XP coefficients to physical stellar spectra of flux density versus wavelength to visualize the results.
In some of our results, we need to adopt approximate \(T_{\rm eff}-\log g\) relations for dwarf-type and giant-type stars used in Zhang et al. (2023). These are
\[\log g_{\rm dwarf}=\begin{cases}4.6&\text{for }T_{\rm eff}<5000\text{ K}\\ 4.6-0.0005(T_{\rm eff}-5000)&\text{for }5000\text{ K}\leq T_{\rm eff}<6300\text{ K}\\ 3.95&\text{for }T_{\rm eff}\geq 6300\text{ K}\end{cases} \tag{12}\]
for solar-metallicity dwarfs and
\[T_{\rm eff,~{}RGB}=\begin{cases}5200-441.86(3.65-\log g)&\text{for $\log g<3.65$ dex}\\ 5900-1400(4.15-\log g)&\text{for $\log g\geq 3.65$ dex}\end{cases} \tag{13}\]
for solar-metallicity giants.
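These piecewise relations translate directly into code; the function names in this small sketch are our own.

```python
import numpy as np

def logg_dwarf(teff):
    """Eq. (12): approximate log g for solar-metallicity dwarfs."""
    return np.where(teff < 5000.0, 4.6,
                    np.where(teff < 6300.0, 4.6 - 0.0005 * (teff - 5000.0), 3.95))

def teff_rgb(logg):
    """Eq. (13): approximate Teff for solar-metallicity giants."""
    return np.where(logg < 3.65,
                    5200.0 - 441.86 * (3.65 - logg),
                    5900.0 - 1400.0 * (4.15 - logg))
```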
Simulated spectra with a range of stellar parameters are shown in Figures 6 and 7. Figure 6 displays spectral reconstructions of the flux density at a distance of 10 pc. The left panel has spectra for dwarf-type stars along the sequence given by Equation (12) and shows that the higher the \(T_{\rm eff}\), the higher the luminosity, the clearer the Balmer lines, and the weaker most metal lines. The right panel is for giant-type stars along the sequence given by Equation (13) and shows that giants with higher \(T_{\rm eff}\) and, thus, higher \(\log g\) have lower luminosity, while a similar \(T_{\rm eff}\) attenuation effect on the metal lines can be observed as for dwarfs.
Figure 7 shows predicted spectra of simulated stars with different metallicity \(\rm[M/H]\), with the flux density now normalized by the \(G\)-band apparent flux in order to focus on the effect of changing \(\rm[M/H]\) on the shape of the spectrum and on the appearance of absorption lines. The left panel shows dwarf-type stars, for which we can clearly see the metal lines deepening as the metallicity increases. The right panel shows giant-type stars, but rather than specifying \(T_{\rm eff}\), we let the model internally assume \(T_{\rm eff}\). That is, we do not use any empirical \(T_{\rm eff}-\log g-\rm[M/H]\) relation to calculate what the \(T_{\rm eff}\) should be, but instead just do not provide any \(T_{\rm eff}\) to the model (see the discussion in Section 6.4 below). As can be seen in the top-left panel of Figure 3, for giants at a fixed \(\log g\), increasing \(\rm[M/H]\) corresponds to decreasing \(T_{\rm eff}\). This effect can be seen in the simulated spectra, where the spectra have vastly different shapes due to the changing internally-assumed \(T_{\rm eff}\). Metal lines are clearly deeper as \(\rm[M/H]\) increases.
Figure 8 shows a few comparisons of predictions from our model
Figure 6: Predicting stellar spectra based on stellar parameters using our model. The left panel shows our model's prediction of _Gaia_ XP spectra for dwarf-type stars with solar metallicity and zero extinction for \(T_{\rm eff}\) sampled between 4,000 K and 7,000 K in steps of 1,000 K, with \(\log g\) given by the empirical \(T_{\rm eff}-\log g\) relation of Equation (12). The right panel gives our model's prediction of _Gaia_ XP spectra for giant-type stars of solar metallicity and zero extinction with \(\log g\) sampled between 1.5 and 3.5 in steps of 0.5 dex and \(T_{\rm eff}\) obtained from the empirical \(T_{\rm eff}-\log g\) relation of Equation (13). Both panels display the locations of a few metal Fraunhofer lines, of the Balmer series, and of the Calcium Triplet. The spectra are the predicted flux densities at 10 pc, which are directly predicted by our model as it can return the absolute magnitude, so both the shape and the flux-density level are predicted by the model. The model correctly predicts that hot main-sequence stars are brighter than cool main-sequence stars, while low-\(\log g\) giants are predicted to be brighter than higher-\(\log g\) giants.
to real-world stellar data, to check whether the simulated _Gaia_ XP spectra predicted by our model from precise stellar parameters derived from high-resolution APOGEE spectra compare well to the actually observed _Gaia_ XP spectra for a few example stars. As in Figure 6, we normalize all of the real observed spectra to give the flux density if the star were at 10 pc, using the _Gaia_ \(G\)-band apparent magnitude as well as the observed parallax. In Figure 8, the top-left panel has a giant-type star with near-solar metallicity, while the top-right panel has a star with similar parameters except that its metallicity is lower. In both cases, the reconstructed spectra match the observed spectra well overall, and all large- and small-scale features match those seen in the observed spectra. The main difference is a small error in the intrinsic luminosity, which causes the overall flux-density level to be slightly off. The distance inferred by our model as well as the geometric distance obtained from the _Gaia_ parallax are also given in the plots. We see that the inferred distances are actually highly accurate, even if the flux level visually looks to be quite off. The bottom-left panel shows a bright giant with some extinction and the bottom-right panel has a dwarf-type star with no extinction. In both cases, the reconstructions are quite good and the model predicts the luminosity quite well, leading to a highly accurate predicted distance and a good match between the flux-density amplitude of the predicted and observed spectra. In all panels, we show best-fit spectra from Zhang et al. (2023) for the same stars as a reference comparison. One thing to note is that Zhang et al. (2023) find the stellar parameters that best fit the observed spectrum, while our reconstruction comes purely from ground-truth stellar parameters. This causes the Zhang et al. (2023) reconstructions to in general be much closer to _Gaia_, except in the bottom-right panel, where the Zhang et al. (2023) spectrum is off due to the difference in zero-point correction used in this work.
Overall, our model is able to predict realistic XP spectra for stars within the wide range of stellar parameters covered by the training set.
### Spectra to Spectra
In addition to inputting or outputting stellar parameters, our model also works without involving any stellar parameters and can go directly, e.g., from one portion of a stellar spectrum to another portion, because the input portion of the spectrum provides enough context. In other words, the model has a good internal representation of a star given just a portion of the spectrum without explicitly having to involve the stellar parameters.
Figure 9 displays \(G\)-band normalized XP spectra for the same four stars as in Figure 8. In the test in Figure 9, we provide the RP portion of the spectrum to the model and then request both the RP and BP portions of the same spectrum, because we need both the BP and RP portions to use GaiaXPy to convert back to physical spectra, and we use the RP and BP coefficients returned by the model to create the final predicted XP spectrum. For each star, we repeat this procedure giving the BP portion of the spectrum and requesting the RP portion. The BP and RP portions of a star's spectrum only overlap over a very short wavelength range, so the portion of the spectrum that is given to the model does not contain much information about the portion that we request. In both cases of giving BP or giving RP, the model reconstructs the missing portion
Figure 7: Spectral shape and absorption lines of spectra predicted by our model. The spectra shown in this figure are similar to those in Figure 6, except that we use the \(G\)-band normalized flux density (that is, the flux density divided by the \(G\)-band flux) instead of the flux density at 10 pc, because we want to focus on the shape of the predicted spectra. The left panel shows our model's predicted normalized _Gaia_ XP spectra for main-sequence stars with \(T_{\rm eff}=5,500\) K, \(\log g=4.60\) dex, zero extinction, and \(\rm[M/H]\) sampled between \(-2.0\) dex and \(0.3\) dex. The right panel shows normalized _Gaia_ XP spectra for giant-type stars with \(\log g=2.50\) dex, zero extinction, and \(\rm[M/H]\) sampled between \(-2.0\) dex and \(0.3\) dex. In the right panel, we do not provide \(T_{\rm eff}\) as an input, so the model has to internally assume a \(T_{\rm eff}\) from the \(T_{\rm eff}-\log g-\rm[M/H]\) relation that it learned during training. For giants, this means that the model must know that metal-rich giants are cooler than metal-poor giants at the same \(\log g\), as can be seen in the training data in Figure 3. In both panels, we see that our model correctly produces deeper metal absorption lines, especially at \(516.891\) nm, for high-\(\rm[M/H]\) stars.
of the spectrum well and, in fact, much better than in Figure 8, where the spectra are reconstructed based on the stellar parameters. This is likely because the measurement of the stellar parameters by APOGEE is uncertain, while the reconstruction in Figure 9 does not involve uncertain stellar parameters, only well-measured spectral portions. This is the same reason why many works now prefer to employ unsupervised or self-supervised training of models that does not rely on very precise labels; see, for example, the papers of Sanders & Matsunaga (2023) and Laroche & Speagle (2023) for applications to _Gaia_ XP spectra specifically.
### Stellar Parameters to Stellar Parameters
To assess the quality of the astrophysical knowledge that our model has acquired during training, we can request relations between stellar parameters from our trained model without involving any observational data. This way, we can test if the model has actually learned basic relationships between stellar parameters when it creates an internal "perception" of a star.
Figure 10 shows that our model can reconstruct the distribution of stars observable by APOGEE in the Kiel diagram of \(\log g\) versus \(T_{\rm eff}\) (like the one in Figure 3). Although we cannot currently simply ask questions such as "Please give me your concept of the Kiel diagram", we can still extract this knowledge from the model by, e.g., creating uniformly sampled grids of \(\log g\) and \(\left[{\rm M}/{\rm H}\right]\) as inputs to the model and then asking for \(T_{\rm eff}\) or other stellar parameters. If the model has learned the general properties of stars (especially stars within the training set's boundaries), the model should predict \(T_{\rm eff}\) in a way that recreates the Kiel diagram of the training set. And indeed, we see that the model is clearly able to recreate the Kiel diagram. The top two panels of Figure 10 show the ground-truth \(T_{\rm eff}-\log g\) distribution color-coded by the logarithmic \(G\)-band luminosity from _Gaia_ as well as that from the model. We see that not only can the model reproduce the expected \(T_{\rm eff}\) trends, it can also reproduce the run of stellar luminosity from stellar parameters alone. The bottom-left panel of Figure 10 shows the usual Kiel diagram color-coded by \(\left[{\rm M}/{\rm H}\right]\), and the expected \(\left[{\rm M}/{\rm H}\right]\) trend can be clearly seen for giant-type stars (see Figure 3 for the ground truth as a reference), where metal-rich giants are cooler than metal-poor giants at the same \(\log g\). The bottom-right panel of the same figure shows the uncertainty in the predicted \(T_{\rm eff}\), which is
Figure 8: Model reconstruction of the _Gaia_ XP spectra for stellar parameters sampled from four real-world APOGEE stars. We predict these spectra by giving the model the observed APOGEE \(T_{\rm eff}\), \(\log g\), \(\left[{\rm M}/{\rm H}\right]\), and the reddening \(E\left(B-V\right)_{\rm C19}\) obtained from the extinction map at the distance given by the inverse _Gaia_ parallax. We then request all _Gaia_ XP coefficients as well as the expected stellar luminosity contaminated with extinction from the model, to obtain the predicted flux density at 10 pc. The top-left panel has a giant-type star with near-solar metallicity and almost no extinction, while the top-right panel displays a giant with similar extinction, \(T_{\rm eff}\), and \(\log g\) as the top left star, but with much lower metallicity. The bottom-left panel displays a lower metallicity bright giant with some extinction. Finally, the bottom-right panel has a dwarf-type star with no extinction. In all cases, our model accurately predicts the shape of the spectra and the main noticeable difference is the slight amplitude offset that results from a small error in the predicted distance. Best-fit spectra from Zhang et al. (2023) are also shown as dashed lines.
small for giant-type stars, because there is a simple \(T_{\rm eff}-\log g-\left[{\rm M}/{\rm H}\right]\) relation for giants, thus making \(T_{\rm eff}\) highly predictable in this scenario. For dwarf-type stars, this is not the case (i.e., we do not see a clean \(\left[{\rm M}/{\rm H}\right]\) color gradient for dwarfs in the left panel), so the model's uncertainty on \(T_{\rm eff}\) is larger. Thus, our model internally contains information similar to stellar isochrones and we can extract it using this procedure.
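Concretely, the probing grid described above (using the spacings quoted in the caption of Figure 10) can be built as in this short sketch:

```python
import numpy as np

logg = np.arange(0.0, 5.0 + 1e-9, 0.01)
mh = np.arange(-2.0, 0.5 + 1e-9, 0.02)
logg_grid, mh_grid = np.meshgrid(logg, mh)
# each (log g, [M/H]) pair is passed to the model as the only inputs,
# requesting Teff and the G-band luminosity as the outputs
queries = np.column_stack([logg_grid.ravel(), mh_grid.ravel()])
```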
### Recovery of the Interstellar Extinction Curve
As an example of an empirical law that is not at all included in the model but that we can recover from our model, we predict the ratio of the total-to-selective extinction \(R(\lambda)\) as a function of wavelength \(\lambda\), defined as
\[R(\lambda)=\frac{A(\lambda)}{E(B-V)_{\rm true}}\,, \tag{14}\]
where \(A(\lambda)\) is the extinction at wavelength \(\lambda\) and \(E(B-V)_{\rm true}\) is the reddening on the true scale (as opposed to the SFD or C19 scale). Figure 11 shows the extinction curve that we recover from our model.
Our model does not contain a fixed or pre-defined extinction or reddening law (unlike the work of Zhang et al. 2023); the only information about extinction in our model is the \(E(B-V)_{\rm C19}\) value when it is available. So to determine the extinction curve, we simulate spectra of fixed stellar parameters with different \(E(B-V)\) and we can then derive the model's implied extinction curve by comparing the spectra for different \(E(B-V)\). We first set up a grid of stellar parameters. To simplify the process, we use solar-metallicity giants with \(\log g\) ranging from 1.0 dex to 2.5 dex in steps of 0.1 dex, with \(T_{\rm eff}\) determined using Equation (13), and with \(E(B-V)\) of 0.0, 0.4, 0.8 and 1.2 mag; we then request the _Gaia_ XP spectra and the luminosity, so that we can obtain the absolute magnitude and thus the flux density at 10 pc. The emulated spectra with \(E(B-V)=0\) act as the reference extinction-free spectra \(F_{0}(\lambda)\), while the spectra \(F_{E}(\lambda)\) with \(E(B-V)_{\rm C19}>0\) are extinguished. The extinction curve \(R(\lambda)\) is then calculated as follows:
\[R(\lambda)=\frac{A(\lambda)}{E(B-V)_{\rm true}}=\ln\left(\frac{F_{0}(\lambda) }{F_{E}(\lambda)}\right)\frac{1}{0.884\,E(B-V)_{\rm C19}}\,, \tag{15}\]
and we do this calculation for the entire grid of stellar parameters and extinctions. We then take the median of the derived extinction curves, which is shown as the black dashed line in Figure 11, with the standard deviation among the grid points given as the blue curve. It is clear that the black dashed line mostly agrees with the mean
Figure 9: Reconstructing parts of a spectrum based on other parts. Each of the four panels in this figure shows our model’s reconstruction of a portion of the \(G\)-band normalized _Gaia_ XP spectrum when providing the model with another portion of the _Gaia_ XP spectrum of the same star (for the same APOGEE stars as in Figure 8). The blue shaded area represents the rough wavelength range given to the model while the beige shaded area is the rough wavelength range predicted by the model. The black dashed lines in each panel are reference lines that allow one to compare the top and bottom spectra in each panel. In all cases, the model successfully reconstructs portions of the _Gaia_ XP spectra.
Task: Stellar Parameters to Stellar Parameters
Figure 10: Our model’s perception of the Kiel diagram. To investigate our model’s understanding of the relations among stellar parameters, we sample a grid of evenly-spaced \(\log g\) from 0 dex to 5 dex with a spacing of 0.01 dex and \(\left[\mathrm{M/H}\right]\) from \(-\)2 dex to 0.5 dex with a spacing of 0.02 dex, and we request \(T_{\mathrm{eff}}\) and the stellar luminosity. The spacings are simply chosen to create a dense sampling of the diagram. **Top left** panel: the ground-truth diagram from APOGEE color-coded by the _Gaia_ \(G\)-band logarithmic luminosity in solar units. **Top right** panel: our model’s Kiel diagram similarly color-coded with our model’s stellar luminosity. Neither of the luminosities in the top left and right panels is corrected for extinction. One can clearly see that the model reproduces the patterns seen in the top left panel, reconstructing both the \(T_{\mathrm{eff}}\)–\(\log g\) stellar evolutionary tracks as well as the luminosity along those tracks. **Bottom left** panel: our model’s Kiel diagram color-coded by the input \(\left[\mathrm{M/H}\right]\), which again displays the expected trends for red giants (see Figure 3 for the ground truth in the training set). **Bottom right** panel: our model’s Kiel diagram color-coded by the uncertainty in the predicted \(T_{\mathrm{eff}}\), demonstrating that while the \(T_{\mathrm{eff}}-\log g-\left[\mathrm{M/H}\right]\) trend is clear and tight in red giants, this is not the case for sub-giants and dwarfs. The bottom left panel shows that tracks with different \(\left[\mathrm{M/H}\right]\) overlap for these stellar types and the bottom right panel shows that this leads to the \(T_{\mathrm{eff}}\) prediction based on \(\log g\) and \(\left[\mathrm{M/H}\right]\) being quite uncertain for sub-giants and dwarfs. The red dashed line is a reference line for red-giant branch stars, which is described in Equation (13).
extinction curve7 of Fitzpatrick (1999) with \(R_{V}\approx 3.1\), although at the red end of the spectrum, the model extinction curve prefers a lower \(R_{V}\approx 2.5\).
Footnote 7: Calculated using the extinction Python package (Barbary, 2016).
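A sketch of the per-grid-point calculation in Equation (15), taking the equation as written in the text (the median over the whole grid of stellar parameters and reddenings then gives the curve shown in Figure 11):

```python
import numpy as np

def extinction_curve(flux_zero_ext, flux_reddened, ebv_c19):
    """R(lambda) from a reddened and an extinction-free emulated spectrum (Eq. 15).

    flux_zero_ext : emulated spectrum with E(B-V) = 0
    flux_reddened : emulated spectrum with reddening ebv_c19 (C19/SFD scale)
    """
    return np.log(flux_zero_ext / flux_reddened) / (0.884 * ebv_c19)
```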
## 7 Discussion
In this paper, we have trained and tested a single model to perform a variety of tasks. Ultimately, the goal of this type of modeling is to create a foundation model for stars or for more general parts of astronomy (a review of neural networks and the role of foundation models in astronomy can be found in Smith & Geach, 2023). Our model is only a step towards this, which is why we refer to it as a "proto-foundation model". In this section, we discuss various aspects of our model in the context of foundation models: how our model processes information in Section 7.1, how the embedding that we create is meaningful and generalizable in Section 7.2, the reliability of our model and of future foundation models built using the same technology in Section 7.3, and a comparison of our model to other foundation models in astronomy in Section 7.4.
### Parameter Dependencies
One quick way to check what the model has learned is to investigate the dependencies among parameters inside the model. We provide two examples of such investigations: one looking at the attention scores from the attention layers in the model and one comparing our single model's performance to separately-trained XGBoost models for different combinations of input parameters.
We can learn about how our model processes inputs and output information requests by looking at the attention paid to them by the model. As an example, we consider the cross-attention between input sequences and information requests. Figure 12 shows how the cross-attention changes for two cases, one with no extinction and one with high extinction. The first feature that can be seen in this figure is that, in both cases, requesting \(T_{\rm eff}\) and \([{\rm M/H}]\) solely pays attention to \(T_{\rm eff}\) and \([{\rm M/H}]\), and it is therefore not surprising that the model can predict very accurate parameters when the parameters are given as inputs (see Figure 4). That is, the model learns to return the inputs when they are requested, at least within the boundaries of the training set (see the discussion in Section 6.1). When requesting the first BP and RP coefficients, which both describe the large-scale features of the _Gaia_ XP spectra, the model pays attention to \(T_{\rm eff}\) in the case of no extinction, but to \(E(B-V)\) in the case of high extinction. This is the expected behavior, because in the case of no extinction the spectral shape is largely determined by \(T_{\rm eff}\), while the large-scale shape is determined by the extinction when it is large (see Figure 11). This also demonstrates the importance of having the value included in the embedding in our application, rather than just providing the value as \(V\) in the attention layer, as introduced in Section 2.3. Without embedding the value as well, the alignment scores and attention from Equations (2) and (3) would not be able to depend on the value (of \(E\left(B-V\right)\) in this case).
Another way to check whether our model correctly deals with different combinations of inputs is to compare its performance to separate expertly-trained XGBoost models. Examples of this are shown in Figure 13. The example task here is to predict the first RP coefficient
Figure 11: Our model’s recovery of the interstellar extinction curve. The left panel demonstrates the effect of extinction on stellar spectra as learned by the model, by showing the model’s reconstruction of _Gaia_ XP spectra for a giant with \(T_{\rm eff}=4,700\) K, \(\log g=2.50\) dex, solar metallicity, and different reddening values \(E(B-V)\). By comparing to the zero-extinction spectrum, we can derive the wavelength dependence of the extinction curve, which is shown in the right panel. In this panel, the dotted black line displays the learned extinction curve \(A(\lambda)/E(B-V)\), with the blue shaded region representing the uncertainty on this curve. The colored lines are the extinction curves from Fitzpatrick (1999) for different \(R_{V}\). Our derived extinction curve agrees well with the standard \(R_{V}=3.1\) curve, except at longer wavelengths, where the lower-\(R_{V}\) curves match better.
provided a combination of stellar parameters and colors. We see that for all the example combinations of inputs, our model behaves very similarly to the XGBoost models, even though our model is trained very differently from the XGBoost models, which are trained on each specific task (hence the name of expert model), while our model is trained in a general, self-supervised manner. Moreover, our model also produces reasonable uncertainties in all cases. In the case of only supplying stellar parameters, we cannot predict the first RP coefficient accurately because of the important role of interstellar reddening, and our model correctly indicates that its predictions have large uncertainties. When adding \(J-H\) and \(J-K\) to the inputs, which provide a handle on the reddening, both our model and the XGBoost model provide fairly good predictions and the uncertainty from our model is lower. When we further add the \(G_{\rm BP}-G_{\rm RP}\) color, which is calculated from the XP spectrum, both models are able to give highly accurate predictions.
### The Meaningfulness of the Embedding
To assess the meaningfulness of the embedding learned by our model, we have withheld two labels, \(G_{\rm BP}-G_{\rm RP}\) and \(J-K\), from the training process as discussed in Section 5. This allows us to see if the decoder can answer information requests for outputs it was never trained on. If the embedding is not meaningful--i.e., it contains no knowledge of the fact that there are intrinsic properties like \(T_{\rm eff}\) and properties only seen from Earth like extinction--the encoder will understand the label embedding, but the decoder will not. In other words, a good embedding is one that is meaningful with respect to the properties of stars, not one that is meaningful to the encoder only. We chose to withhold the two labels \(G_{\rm BP}-G_{\rm RP}\) and \(J-K\) because \(G_{\rm BP}-G_{\rm RP}\) can be inferred from _Gaia_ XP spectra directly, while \(J-K\) requires the model to extrapolate. Figure 14 demonstrates that the decoder in our model is indeed able to interpret the information requests for \(G_{\rm BP}-G_{\rm RP}\) and \(J-K\), producing precise values for all but the reddest stars.
In an earlier prototype of the current model with only 55k trainable parameters, we observed similar behavior. This prototype was trained on only \(T_{\rm eff}\), \([{\rm M}/{\rm H}]\), and the XP spectra, and we only requested \(T_{\rm eff}\), \([{\rm M}/{\rm H}]\), and the first two coefficients during training. We found that the model still managed to infer a tight sequence for the next few coefficients, similar to what can be seen in Figure 14. We emphasize that what matters here is the ability to predict a tight sequence rather than an accurate one, because the decoder was never trained on the requested outputs and therefore has little way of knowing the correct scale of the outputs. A small amount of fine-tuning would allow the tight sequence to be corrected into an accurate sequence as well. Thus, because our model learns meaningful embeddings of the properties of stars, it can easily be fine-tuned to predict other types of data or parameters not used during training.
### Reliability of our Model and Reasonable Expectations
Traditional machine-learning methods in astronomy are trained to perform a single task, such as determining stellar parameters from stellar spectra. As such, their reliability in different parts of parameter space can be determined relatively straightforwardly by assessing their performance on a test set. The type of model that we have introduced in this work is far more flexible, both in terms of the large number of trainable parameters and in terms of the input-output combinations that are possible. However, this flexibility means that it is difficult to test the model, because we cannot realistically assess the model's performance and uncertainty estimation on all possible combinations of inputs and outputs. The type of flexible foundation models that we use here will eventually have orders of magnitude more diverse capabilities than the proto-foundation model that we have trained in this work. Thus, while in this work we have still been able to assess the model's performance on many of the plausible tasks that our model could be used for, in the future this will not be possible.
Figure 12: Average cross-attention weights \(A\) (see Equation 3) across all attention heads between an input sequence and a request vector. That is, for a request for information from the model, the cross-attention between the request and the inputs shows which parts of the input the model has learned are relevant. The attention weights are calculated in such a way as to make each column sum up to one. In all three panels, the inputs are \(T_{\rm eff}\), \([{\rm M}/{\rm H}]\), and \(E\left(B-V\right)\) and we request \(T_{\rm eff}\), \([{\rm M}/{\rm H}]\), or the first BP or RP coefficient. The top panel considers a star with \(E\left(B-V\right)=0.0\), while the middle panel uses the same star, but with high \(E\left(B-V\right)\). The bottom panel gives the difference in the average attention score between the two. We see that in both \(E\left(B-V\right)\) cases, the model pays attention to \(T_{\rm eff}\) and \([{\rm M}/{\rm H}]\) when determining \(T_{\rm eff}\) and \([{\rm M}/{\rm H}]\), respectively, but that to determine the BP/RP coefficients, the model uses \(T_{\rm eff}\) in the \(E\left(B-V\right)=0\) case, but relies on \(E\left(B-V\right)\) in the high extinction case. This trend in attention is not possible without encoding the value of the observation in the embedding space.
Natural-language-focused generative models like chat-bots suffer from the problem of "hallucination" (Maynez et al., 2020), in that they can make up non-factual information that sounds convincing, because there is no intrinsic mechanism inside the model that cares about whether the model's outputs are factual or not. Our model is similarly trained to produce plausible outputs given inputs, and this means that the model may make silent assumptions during inference, gleaned from the training data, if it is not given detailed enough input data to make a clear prediction. For example, this happens in our model when the requested information cannot be conclusively inferred from the data provided, in which case the model output will be strongly biased towards the distribution of the training set without using true physics-driven assumptions. An example of this is the following. If you request \(\log g\) by just giving the model \(T_{\text{eff}}=3,700\) K and \(\left[\text{M}/\text{H}\right]=0.0\) dex, the model returns \(\log g=3.8\pm 2.5\) dex. This correctly has a large uncertainty, because there can be giant-type stars with low \(\log g\) as well as dwarf-type stars with high \(\log g\) that share the same \(T_{\text{eff}}\). But if in addition you provide \(E(B-V)=0\) in the input data and request \(\log g\), the model will confidently say that this star has high \(\log g\), because the model essentially assumes it must be a dwarf-type star, as small reddening values are always associated with dwarf-type stars in the training set (because stars generally must be close to us in order to be observed with low reddening).
In the near term, we can scale our model up to train on more stars and on a wider variety of data from different surveys, which might lead to emergent abilities of the model (Wei et al., 2022), that is, the ability to perform tasks that were not possible when training on smaller data sets. Nevertheless, this would still be very far from building intelligent models for astronomy. One of the challenges involved in building such intelligent models might be a version of the alignment problem, which usually refers to the problem of aligning artificial-intelligence systems with human values. In science, we might want intelligent models to have scientific values such as Occam's razor; simple Transformer-based models trained with a self-supervised procedure like the one we use in this work, or physics-inspired networks, do not solve this general, fundamental issue (even if they may be able to partially solve the problem). The solutions to some interesting problems will require future models to learn and align with basic scientific values as they gain the ability to create and manipulate variables (i.e., have sub-goals) to explain the (ideally simple and physics-driven) relationships inherent in the data. For example, in this work, we were able to retrieve knowledge of the extinction curve (even while giving the model extinction information with the wrong scaling relative to the true quantity; see the discussion in Section 7.2 and Figure 14) without giving the model any explicit understanding of what an extinction curve is, but simply by manip
Figure 13: Assessing our model by comparing to expert XGBoost models. The top panels show our _single_ model’s prediction for the first RP coefficient color-coded by the model uncertainty for different combinations of inputs, while the lower panels show the predictions from separately trained XGBoost models for the same combinations of inputs. Both models struggle when only provided with \(T_{\text{eff}}\), \(\log g\), and \(\left[\text{M}/\text{H}\right]\) (left panels), but return good predictions if \(J-H\) and \(J-K\) are provided as well (middle panels), because these contain reddening information. When \(G_{\text{BP}}-G_{\text{RP}}\) is provided as well, both models perform well as expected (right panels). In all cases, our model has similar performance as expertly trained models.
ulating the input stellar parameters and output stellar spectra. Future models, however, might autonomously learn internal representations of the extinction curve that they can more directly use in making predictions. However, the use of such types of intelligent models, even if specifically trained for astronomy, should be treated with caution and they should be thoroughly evaluated when applied to a specific task that they were not specifically trained to do (and they can be fine-tuned if it turns out they perform less well than desired).
### Foundation Models
There has been much recent interest in foundation models in astronomy, for example, the works of Hayat et al. (2021), Stein et al. (2022), Walmsley et al. (2022), Rozanski et al. (2023), and Slijepcevic et al. (2023). In general, a "foundation model" refers to a model trained on a large amount of data that is constructed in such a way that it can be adapted by minimal fine-tuning to a wide range of downstream tasks (hence the model is pre-trained for them from the perspective of the downstream tasks). For example, Hayat et al. (2021), Walmsley et al. (2022) and Slijepcevic et al. (2023) adopt a commonly-used convolutional neural network architecture such as ResNet-N (He et al., 2015), but train on unlabelled images using contrastive objectives to learn useful representation of the data (e.g., He et al., 2019) in a self-supervised, task-independent manner. This makes the model useful for downstream tasks. Pre-training models in this way has been shown to outperform simple supervised learning (e.g., Khosla et al., 2020).
Our model has a blurry boundary between data and labels because of the flexibility in the inputs and outputs, since the outputs can be anything that can also be an input. Our model closely mimics standard LLM architectures, for which much recent work exists on adapting such models to downstream tasks using fine-tuning methods such as Low-Rank Adaptation (LoRA; Hu et al., 2021), or on using the encoder for tasks such as encoding multi-domain data into a common embedding space, as in Radford et al. (2021). Ultimately, depending on the types of astronomical data included, a combination of all of these techniques would be necessary to create a big universal foundation model that works with multi-domain data from low to very high dimension in astronomy (i.e., we might need a pool of experts to deal with the different modalities of the data). As an example, we present an application of using our model as a foundation model for the purpose of searching for stellar spectrum-stellar parameter pairs in a data set using a contrastive objective function in Appendix C.
Much work remains to be done to turn the proto-foundation model introduced in this paper into a proper foundation model with wide applicability to problems in astronomy. One possible improvement to the model might be to have a proper tokenizer. In NLP, tokenization takes care of the fact that there are many uniquely spelled words and it is difficult to create separate embeddings for all of those words (e.g., Salton, 1962). The solution is to have a tokenizer (e.g., Peters et al., 2018), which breaks up a word like "non-existent" into "non" and "existent", which can be tokenized into two tokens. Similarly, uncommon words like "non-star" or "non-galaxy" can be understood as "not star" and "not galaxy". In the context of an astronomical foundation model, we could have a tokenizer that understands, for example, different combinations of colors without having each of the combinations be a unique token, while taking advantage of the observations and the "non-linear" embedding already learned by our model.
## 8 Conclusion
In this paper, we have introduced a novel framework of utilizing a Transformer-based encoder-decoder model towards building and training a foundation model for stars in astronomy. To illustrate this framework, we have trained a proto-foundation model on only a small subset of stars selected from only a few surveys due to the high computational cost involved in training these models8. But using this limited set of data, we believe we have successfully demonstrated that this framework can be trained to build a foundation model for stars. Our pre-trained model can be used for multiple tasks and can, furthermore, be fine-tuned for other downstream tasks. While our model is competitive with traditional machine-learning models run
Figure 14: Model performance for outputs never requested during training. The two panels of this figure show our model’s prediction for the \(G_{\rm BP}-G_{\rm RP}\) and \(J-K\) colors based on input XP spectra only. These two colors were never requested from the decoder during the training although they are used in the encoder to learn the embedding (i.e., vector representation) of these two types of data. We can clearly see that the trained model is still able to predict the two colors when requested. This demonstrates that the unit vector embedding as discussed in Section 2.2 and Equation (1) is a meaningful embedding learned by the encoder that can be understood by the decoder reasonably well.
on the same data and is, therefore, already useful in its own right, the main purpose of this paper is to show that implementing versatile, unified, cross-survey, cross-domain artificial intelligence in astronomy is well within reach, and with this work, we hope to accelerate the development of such models. Our Transformer-based model adopts and adapts ideas and technology used in LLMs and inherits many of their advantages, but also disadvantages (such as possible "hallucinations"). That our model so closely tracks LLMs means that progress in the ideas and technology behind LLMs--an area of high activity currently in academic research and industry--can be easily translated and used by our model.
Our specific application in this paper was to train a Transformer-based encoder-decoder model on data from APOGEE, _Gaia_, _2MASS_ and extinction data in a self-supervised manner where all data can be in the input and output nodes and our model additionally provides a predictive uncertainty that represents the model's confidence in its own prediction. We use a single, non-fine-tuned training process to train the model, yet find that the model can perform multiple tasks of interest to astronomers, including but not limited to (i) mapping from stellar spectra to stellar parameters, (ii) from stellar parameters to stellar spectra, (iii) from one portion of a stellar spectrum to another portion, (iv) from stellar parameters to other stellar parameters. We also demonstrated that we can recover the interstellar extinction curve from the trained model, as the model internally clearly learns about extinction. For the common application of mapping from stellar spectra to stellar parameters, our model's prediction accuracy is comparable (or even slightly better) in the same setting as fine-tuned XGBoost models with an accuracy of 47 K in \(T_{\rm eff}\), 0.11 dex in \(\log g\), and 0.07 dex in [M/H] compared to the APOGEE ground truth. For generative tasks such as predicting stellar spectra from stellar parameters or mapping a portion of the spectrum to another portion, our model is able to predict spectra that are very close to those of real-world stars with the same stellar parameters. And even without explicitly telling the model about the existence of an interstellar extinction curve--we only tell the model about \(E(B-V)\)--, the model still managed to learn the extinction curve on its own.
While we have focused on a relatively small proof-of-concept setting, training more general models on large cross-survey, multi-domain astronomical data sets with a huge amount of missing data due to the different footprints of the surveys, etc., is in principle straightforward, albeit computationally costly. For example, already in our training set, there are stars that only have spectra and stars that only have stellar labels and the model is able to use all of these for training. Of course, the astronomy community's wide past experience with applying machine learning to astronomical data will be crucial in the construction of these larger foundation models.
In this paper, we have focused on implementing the most basic version of a foundation model in astronomy. There are still many important aspects to add to the model and many open questions as to how more general models should be constructed. These include questions such as how to incorporate data from a wide range of spectral resolutions (broadband photometry to high-resolution spectra), how to make use of theoretical models (e.g., stellar isochrones and theoretical photometry and spectra), how to add time-domain observations (e.g., asteroseismology), and whether it is useful and desirable to create even larger foundation models that incorporate not only stars, but also galaxies over a wide range of redshifts, because the observations of stars and galaxies have many commonalities.
## Acknowledgements
We thank the anonymous referee, Joshua S. Speagle, and Alexander Laroche for helpful comments. HL and JB acknowledge financial support from the Natural Sciences and Engineering Research Council of Canada (NSERC; funding reference number RGPIN-2020-04712). Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. The SDSS website is www.sdss.org. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement.
## Data Availability
All the code and resources for this paper are available at [https://github.com/henrrysky/astroIN_stars_foundation](https://github.com/henrrysky/astroIN_stars_foundation), including the code that defines the model, to train the model, and to generate the figures. This also includes the trained model, which can be used to test the model further. The underlying data are all publicly available and all of the software used in this work is open source.
|
2307.09448 | Ultrafast In vivo Transient Absorption Spectroscopy | Transient absorption (TA) spectroscopy has proved fundamental to our
understanding of energy and charge transfer in biological systems, allowing
measurements of photoactive proteins on sub-picosecond timescales. Recently,
ultrafast TA spectroscopy has been applied in vivo, providing sub-picosecond
measurements of photosynthetic light harvesting and electron transfer processes
within living photosynthetic microorganisms. The analysis of the resultant data
is hindered by the number of different photoactive pigments and the associated
complexity of photoactive reaction schemes within living cells. Here we show
how in vivo ultrafast TA spectroscopy can be applied to a diverse array of
organisms from the tree of life, both photosynthetic and non-photosynthetic. We
have developed a series of software tools for performing global, lifetime and
target analysis of in vivo TA datasets. These advances establish in vivo TA
spectroscopy as a versatile technique for studying energy and charge transfer
in living systems. | Tomi K. Baikie, Darius Kosmützky, Joshua M. Lawrence, Victor Gray, Christoph Schnedermann, Robin Horton, Joel D. Collins, Hitesh Medipally, Bartosz Witek, Marc M. Nowaczyk, Jenny Zhang, Laura Wey, Christopher J. Howe, Akshay Rao | 2023-07-18T17:24:04Z | http://arxiv.org/abs/2307.09448v1 | ## Ultrafast _In vivo_ Transient Absorption Spectroscopy
### Abstract (150 words)
Transient absorption (TA) spectroscopy has proved fundamental to our understanding of energy and charge transfer in biological systems, allowing measurements of photoactive proteins on sub-picosecond timescales. Recently, ultrafast TA spectroscopy has been applied _in vivo_, providing sub-picosecond measurements of photosynthetic light harvesting and electron transfer processes within living photosynthetic microorganisms. The analysis of the resultant data is hindered by the number of different photoactive pigments and the associated complexity of photoactive reaction schemes within living cells. Here we show how _in vivo_ ultrafast TA spectroscopy can be applied to a diverse array of organisms from the tree of life, both photosynthetic and non-photosynthetic. We have developed a series of software tools for performing global, lifetime and target analysis of _in vivo_ TA datasets. These advances establish _in vivo_ TA spectroscopy as a versatile technique for studying energy and charge transfer in living systems.
## Introduction
Ultrafast transient absorption (TA) has proved a vital technique in photobiological research due to its impressive sub-picosecond temporal resolution and ability to elucidate the nature of charge transport in photosynthetic systems. TA spectroscopy has revealed the ultrafast dynamics of photosynthetic light harvesting [1, 2, 3], electron transport [4, 5, 6] and photoprotection [7, 8]. This research has largely been performed _in vitro_, identifying energy and charge transfer processes within the various pigment-protein complexes found across diverse phototrophic organisms, including light harvesting complexes, photosystems, rhodopsin and other photoactive biomolecules [9, 10, 11, 12, 13, 14, 15, 16].
_In vitro_ studies offer remarkable systematic clarity and naturally reduce the complexity of the photobiology by isolating specific photoactive biomolecules. However, the isolation of biomolecules from their physiological environments can lead to changes in their behaviour [17, 18] and often involves time consuming extraction and purification procedures. Measuring _in vivo_ ensures that the structure, interaction partners, solvation, and local dielectric properties of biomolecules are properly maintained.
However, the _in vivo_ study of live cell systems with ultrafast TA spectroscopy has proved difficult due to scattering arising from cells and their extracellular polymeric substances [19]. Only recently have experimental parameters been optimised to resolve cells at room temperature, utilising small (\(<\) 5 \(\mu\)m diameter) photosynthetic microorganisms: the cyanobacterium _Synechocystis sp_. PCC 6803 (hereafter _Synechocystis_) [17] and the eukaryotic green alga _Nannochloropsis oceanica_[20, 21]. TA spectroscopy has also been applied to study similarly sized thylakoids, extracted from the chloroplasts of plants [22, 23, 24]. These studies have demonstrated the validity of _in vivo_
ultrafast TA spectroscopy for resolving photosynthetic processes such as photosynthetic electron transport and non-photochemical quenching at appropriate time scales, where comparison of spectra from mutants provided insights into the functions of proteins involved [17, 21].
The time-resolved spectra from _in vivo_ TA spectroscopy studies are complex, owing to the simultaneous measurement of a multitude of photoactive biomolecules, their overlapping spectra, and often similar lifetimes. This makes the analysis of specific processes difficult, and typically spectral features have been identified by _a priori_ comparison to _in vitro_ TA spectroscopy studies, or by the piecewise analysis of knockout mutants lacking specific proteins [17, 24]. In general, a complete description of photoactive pathways _in vivo_ is unlikely due to the sheer number and complexity of photoactive pathways being simultaneously probed. Hence model-independent analysis is useful to identify and elucidate the effects of experimental parameters.
Here, we demonstrate the widespread applicability of _in vivo_ TA spectroscopy for photobiological research by performing measurements on a diverse array of photosynthetic cells spread across the tree of life. For the first time, we have developed both the apparatus (see **Methods 1**) and cell preparation (see **Methods 2-3, 6**) to study photoactive processes in non-phototrophic cells, and have also performed measurements of large cells (up to 20 \(\upmu\)m in diameter). We have also developed a series of software tools for the analysis of these complex systems, from which we may understand the systems in a model-independent manner or with assumed reaction schemes.
2304.08206 | Climatologies of Various OH Lines From About 90,000 X-shooter Spectra | The nocturnal mesopause region of the Earth's atmosphere radiates
chemiluminescent emission from various roto-vibrational bands of hydroxyl (OH),
which is therefore a good tracer of the chemistry and dynamics at the emission
altitudes. Intensity variations can, e.g., be caused by the general
circulation, gravity waves, tides, planetary waves, and the solar activity.
While the basic OH response to the different dynamical influences has been
studied quite frequently, detailed comparisons of the various individual lines
are still rare. Such studies can improve our understanding of the OH-related
variations as each line shows a different emission profile. We have therefore
used about 90,000 spectra of the X-shooter spectrograph of the Very Large
Telescope at Cerro Paranal in Chile in order to study 10 years of variations of
298 OH lines. The analysis focuses on climatologies of intensity, solar cycle
effect, and residual variability (especially with respect to time scales of
hours and about 2 days) for day of year and local time. For a better
understanding of the resulting variability patterns and the line-specific
differences, we applied decomposition techniques, studied the variability
depending on time scale, and calculated correlations. As a result, the mixing
of thermalized and nonthermalized OH level populations clearly influences the
amplitude of the variations. Moreover, the local times of the variability
features shift depending on the effective line emission height, which can
mainly be explained by the propagation of the migrating diurnal tide. This
behavior also contributes to remarkable differences in the effective solar
cycle effect. | Stefan Noll, Carsten Schmidt, Wolfgang Kausch, Michael Bittner, Stefan Kimeswenger | 2023-04-17T12:28:05Z | http://arxiv.org/abs/2304.08206v1 | # Climatologies of Various OH Lines From About 90,000 X-shooter Spectra
###### Abstract
Climatologies of intensity, solar cycle effect, and residual variability of 298 OH lines were derived from 10 years of X-shooter data
The strongest variations are found for intermediate rotational energies where cold and hot OH populations show similar contributions
Tides cause a local time shift of the climatological patterns depending on the effective line emission height
###### Abstract
The nocturnal mesopause region of the Earth's atmosphere radiates chemiluminescent emission from various roto-vibrational bands of hydroxyl (OH), which is therefore a good tracer of the chemistry and dynamics at the emission altitudes. Intensity variations can, e.g., be caused by the general circulation, gravity waves, tides, planetary waves, and the solar activity. While the basic OH response to the different dynamical influences has been studied quite frequently, detailed comparisons of the various individual lines are still rare. Such studies can improve our understanding of the OH-related variations as each line shows a different emission profile. We have therefore used about 90,000 spectra of the X-shooter spectrograph of the Very Large Telescope at Cerro Paranal in Chile in order to study 10 years of variations of 298 OH lines. The analysis focuses on climatologies of intensity, solar cycle effect, and residual variability (especially with respect to time scales of hours and about 2 days) for day of year and local time. For a better understanding of the resulting variability patterns and the line-specific differences, we applied decomposition techniques, studied the variability depending on time scale, and calculated correlations. As a result, the mixing of thermalized and nonthermalized OH level populations clearly influences the amplitude of the variations. Moreover, the local times of the variability features shift depending on the effective line emission height, which can mainly be explained by the propagation of the migrating diurnal tide. This behavior also contributes to remarkable differences in the effective solar cycle effect.
## Plain Language Summary
Emission from various lines of the hydroxyl (OH) molecule is an important contribution to the Earth's nighttime radiation in the near-infrared. The emission mostly originates from altitudes between 80 and 100 km and is therefore a good tracer of the chemistry and dynamics at these altitudes. OH intensity variations can be caused by changes in the atmospheric conditions and passing waves with different time scales. In order to better understand the origin of these variations and their impact on the OH emission, we studied the variability of 298 OH lines measured in 10 years of data from the X-shooter spectrograph at Cerro Paranal in Chile. The analysis focused on average variations with respect to local time and day of year, i.e. climatologies. As the lines show different vertical emission distributions, this study also provides height-dependent information. The climatologies for intensity, the response to the solar activity cycle of 11 years, and the residual variability (dominated by waves with time scales of a few hours and about 2 days) revealed remarkable patterns which depend on the OH excitation level. The features can partly be explained by the impact of solar tides, particularly with a period of 24 h.
## 1 Introduction
Chemiluminescent emission of the hydroxyl (OH) radical dominates the nocturnal radiation of the Earth's atmosphere in the near-infrared wavelength regime. Various roto-vibrational bands of the electronic ground state contribute to the emission spectrum (e.g., Noll et al., 2015; Rousselot et al., 2000). The radiation originates in the mesopause region between 80 and 100 km (e.g., Baker and Stair, 1988; Noll et al., 2022b) and is mostly related to the production of OH with relatively high vibrational excitation (up to a vibrational level \(v=9\)) by the reaction of atomic hydrogen and ozone (Bates and Nicolet, 1950) and the subsequent relaxation processes. Apart from the emission of photons, collisions with different constituents of the atmosphere contribute to this redistribution of the level populations. In the end, the population of each \(v\) can be described by a cold, fully thermalized and a hot, nonthermalized component (Kalogerakis et al., 2018; Noll et al., 2020; Oliva et al., 2015). The latter dominates the populations of levels with high rotational quantum numbers \(N\).
The OH emission layer with a typical full width at half maximum of about 8 km (Baker & Stair, 1988) can be affected by perturbations in pressure, temperature, and the distribution of the atmospheric constituents on different time scales as such changes alter the production and thermalization of OH molecules. In particular, atomic oxygen matters. This radical is required for the production of ozone and plays an important role in the vibrational relaxation and destruction of OH (e.g., Adler-Golden, 1997; Dodd et al., 1994; Noll et al., 2018; von Savigny et al., 2012). It shows a strong response to vertical transport since its concentration steeply declines in the lower part of the OH emission region (e.g., Smith et al., 2010), where the buildup of a reservoir by photolysis of molecular oxygen at daytime is less efficient and the consumption of atomic oxygen at nighttime is fast under undisturbed conditions (Marsh et al., 2006). An important source of perturbations is the globally acting solar tides, especially the westward migrating diurnal and semidiurnal tides (e.g., Smith, 2012), which propagate from the troposphere/stratosphere into the mesopause region (e.g., Hagan et al., 1995). The related changes in the vertical pressure, temperature, and chemical composition profiles can significantly alter the nocturnal trend in OH emission, i.e. emission increases are also possible (e.g., Marsh et al., 2006; Takahashi et al., 1998; Zhang et al., 2001). Intra-annual changes in the tidal amplitudes also contribute to the observed seasonal variability of OH emission with maximum intensities around the equinoxes at low latitudes (e.g., Gao et al., 2010; Shepherd et al., 2006; Takahashi et al., 1995). Another source of perturbations of the mesopause region is gravity waves, which have periods from minutes to hours and act on a regional scale (e.g., Fritts & Alexander, 2003). Individual gravity waves are sporadic but the general activity of such waves with respect to OH emission shows a seasonal pattern (e.g., Hannawald et al., 2019; Kim et al., 2010; Lopez-Gonzalez et al., 2020; Reisin & Scheer, 2004; Sedlak et al., 2020). Increased activity tends to be observed around solstices or in winter depending on the wave period and the latitude. The intra-annual variations are related to the weather conditions in the troposphere (the dominating source region for primary waves) and the wind speeds and directions up to the mesosphere. The latter rule the efficiency of the blocking of the vertical wave propagation and the related possible generation of secondary waves. OH emission is also influenced by the globally acting planetary waves with periods of the order of days (e.g., Lopez-Gonzalez et al., 2009; Pedatella & Forbes, 2012) and the seasonal changes of the residual meridional circulation (e.g., Marsh et al., 2006). Changes in OH nightglow on time scales of the order of years are particularly caused by the solar activity cycle of about 11 years (e.g., Gao et al., 2016; Noll et al., 2017), which leads to a significant variation of hard ultraviolet photons that can, e.g., destroy molecular oxygen (e.g., Marsh et al., 2007).
In conclusion, the sensitivity of OH lines to the various sources of variation makes them valuable for the study of the dynamics in the mesopause region. However, the investigations are often based on a few bright lines or unresolved broad-band data due to instrumental limitations. This constitutes a loss of information. As the radiative lifetimes and rate coefficients for collisions depend on the specific OH energy level, the vertical emission distributions deviate for OH lines with different upper levels (e.g., Dodd et al., 1994; Noll et al., 2018; von Savigny et al., 2012). Consequently, the response of OH emission to perturbations depends on the selected line. The resulting differences can therefore provide additional information on the vertical component of the dynamics as well as the OH-related chemistry. At least for the integrated intensities of the Q branches of OH(3-1) and OH(4-2), this was demonstrated by Schmidt et al. (2018), who estimated vertical wavelengths of gravity waves. Studies of large sets of individual OH lines are rare, particularly with respect to variations. With a few thousand spectra in maximum (Cosby & Slanger, 2007; Hart, 2019; Noll et al., 2015, 2017), only a rough characterization of the dynamics was possible. Some differences in the nocturnal, seasonal, and long-term variations for lines with different upper vibrational levels \(v^{\prime}\) were identified. A fraction of these variations might be explained by differences in the thermalization of the involved level populations as the observed changes in the rotational and vibrational temperatures, i.e. ratios of lines with different upper levels \(N^{\prime}\) and \(v^{\prime}\), indicate. The studies did not involve weak lines with high \(N^{\prime}\), which would be crucial for a better understanding of the variation of the level populations on different time scales, especially with respect to the hot component.
In a previous study (Noll et al., 2022b), we analyzed 298 OH lines with a wide range of \(v^{\prime}\) and \(N^{\prime}\). The intensities were measured in spectra of the X-shooter spectrograph (Vernet et al., 2011) of the Very Large Telescope (VLT) of the European Southern Observatory (ESO) at Cerro Paranal in Chile (\(24.6^{\circ}\,\mathrm{S}\), \(70.4^{\circ}\,\mathrm{W}\)). In a time interval of eight nights in January/February 2017 (and also seven nights in January 2019), the data set allowed us to perform a detailed study of the propagation of a very strong (and a moderate) quasi-2-day wave (Q2DW), which is the most remarkable planetary wave at low southern latitudes (e.g., Ern et al., 2013; Gu et al., 2019; Tunbridge et al., 2011). It is characterized by a lifetime of only a few weeks usually in summer and a westward moving pattern with a zonal wavenumber of 3 in the southern hemisphere. Our fits of the wave properties resulted in a most likely period of \(44\,\mathrm{h}\) for both events, a strong dependence of the wave amplitude on local time in 2017 mainly due to the interaction of the Q2DW with solar tides, and maximum amplitudes for lines with intermediate \(N^{\prime}\) in both years. As the latter show similar contributions of cold and hot populations (Noll et al., 2020), the increased intensity variability can be explained by variations in the ambient temperature that significantly affect the population mixing. In combination with emission profiles of the two OH channels of the Sounding of the Atmosphere using Broad-band Emission Radiometry (SABER) instrument onboard the Thermosphere Ionosphere Mesosphere Energetics Dynamics (TIMED) satellite (Russell et al., 1999), we linked wave phases and emission heights for the wave in 2017 thanks to a nearly linear relation and significant phase differences. With a vertical wavelength of about \(32\,\mathrm{km}\), we finally derived average centroid emission heights between 86 and \(94\,\mathrm{km}\). The emission altitudes increase with \(v^{\prime}\) and \(N^{\prime}\).
These results demonstrate the potential of parallel investigations of the dynamics of various OH lines. In this study, we extend the analysis of the same 298 lines by using the entire X-shooter data set discussed by Noll et al. (2022b) of about 90,000 near-infrared spectra covering a period of 10 years between October 2009 and September 2019. The resulting wealth of data allowed us to study climatologies for local time and day of year for the intensity, solar cycle effect, and residual variability in order to characterize the impact of the OH level on the intensity variation for perturbations with different time scales. The contents of the paper are as follows. First, we will briefly describe the data set (section 2). Then, we will introduce our approach to calculate the climatologies and our decomposition techniques for a detailed analysis (section 3). The results for the different investigated properties will be discussed in section 4. Finally, we will draw our conclusions (section 5).
## 2 Data
We used VLT/X-shooter spectra in the near-infrared between 1.0 and \(2.5\,\mathrm{\mu m}\) (Vernet et al., 2011) taken in a time interval of 10 years. In the following, we give a brief overview of the related data processing and data selection. More details are provided by Noll et al. (2022b).
The raw medium-resolution echelle spectra of astronomical targets originate from the ESO Science Archive Facility and were first processed with version v2.6.8 of the official reduction pipeline (Modigliani et al., 2010) and preprocessed calibration data. The resulting wavelength-calibrated two-dimensional (2D) spectra were then further treated by averaging them along the spatial direction (projected slit length of \(11^{\prime\prime}\)), correcting them for systematic biases in the pipeline-based separation of sky and astronomical target, and performing an absolute flux calibration based on 10 master response curves with valid time intervals between 9 and 15 months that were derived from X-shooter spectra of the spectrophotometric standard stars EG 274 and LTT 3218 (Moehler et al., 2014). For the predominating clear sky conditions, the relative flux uncertainties are expected to be of the order of 2 to 3% for wavelengths up to 2.1 \(\mu\)m. The accuracy of the absolute fluxes is lower due to uncertainties in the reference spectral energy distributions of the standard stars.
In the final spectra, the continuum was determined by means of the lowest quintile of the intensities in pixel ranges that depended on the corresponding line density and slope of the continuum. After the subtraction of the continuum, line intensities were measured in wavelength ranges which depended on the variable width of the entrance slit of the spectrograph and the separation of the \(\Lambda\) doublet components of each OH line as taken from Brooke et al. (2016). The measured line intensities were corrected for their dependence on the zenith angle due to projection effects assuming a reference altitude of 87 km. Moreover, we considered the partial absorption of line emission by molecules especially in the lower atmosphere. The line-specific absorption was modeled for Doppler-broadened OH lines assuming a temperature of 190 K and by means of the Line-By-Line Radiative Transfer Model (LBLRTM, Clough et al., 2005), which involved typical atmospheric profiles for Cerro Paranal (Noll et al., 2012). Apart from the line, the correction depended on the zenith angle and the highly variable amount of water vapor. The latter was estimated based on pairs of OH lines with very different absorption. The relations were calibrated using data of the Low Humidity And Temperature PROfiler (L-HATPRO) microwave radiometer at Cerro Paranal (Kerber et al., 2012). For this study, we used the same 298 OH lines as selected for the analysis of the Q2DW in 2017 by Noll et al. (2022b). In this way, we can directly compare both investigations. Moreover, the results suggest that an additional optimization of the line set for the studied climatological properties would not significantly improve the quality of the analysis. Selection criteria such as high atmospheric transmission, negligible line blending, and smooth underlying continuum work independently of the data sample and the analyzed property.
As the individual spectra show strong variations in the exposure time, spectral resolution, absorption by water vapor, zenith angle, and residual contribution of the astronomical target, the size of the useful data set depends on OH line properties such as intensity and wavelength. Based on a \(\sigma\)-clipping approach for outlier detection with respect to continuum, intensity, and intensity uncertainty, the final line-specific samples comprise between 61,458 and 88,315 data points with a mean of 82,836 for the 298 lines. In order to further improve the quality, we averaged the intensities for consecutive 30 min intervals and only kept those intervals with a minimum summed exposure time of 10 min. This approach significantly reduced the variation in the size of the data set. For the 268 OH lines with wavelengths up to 2.1 \(\mu\)m, the resulting number of bins is between 18,936 and 19,570 (mean of 19,480 and relative standard deviation of 0.65%). The corresponding data coverage with respect to date and local time is illustrated in Figure 1a for an example line. In general, there is a relatively smooth coverage as 63% of the nights are covered with an average number of 8.5 bins. Data gaps longer than a week are rare (maximum of 41 consecutive nights in 2014). As bad weather is a minor issue at Cerro Paranal (Holzlohner et al., 2021; Kerber et al., 2016), technical reasons are more common (especially telescope sharing by different instruments). In the case of the 30 lines at longer wavelengths (mainly belonging to OH(9-7)), the bin numbers range from 16,926 to 17,019 (mean of 17,001 and relative standard deviation of 0.10%). Hence, the only noteworthy differences are related to the wavelength regime. Spectra taken with a so-called \(K\)-blocking filter for straylight reduction (Vernet et al., 2011) cannot be used beyond 2.1 \(\mu\)m. Nevertheless, the decrease of the sample size at long wavelengths is only about 13%. Hence, the resulting climatologies should still be sufficiently consistent. Comparisons for \(v^{\prime}=9\) lines from different bands did not show clear discrepancies.
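To make the binning step concrete, the following minimal sketch illustrates how intensities of individual exposures could be combined into 30 min bins with a minimum summed exposure time of 10 min. The exposure-time weighting, the column names, and the DataFrame layout are assumptions made here for illustration and are not taken from the actual pipeline.

```python
import pandas as pd

def bin_half_hour(obs: pd.DataFrame, min_exposure: float = 10.0) -> pd.DataFrame:
    """Average line intensities into consecutive 30 min bins.

    `obs` is assumed to have the columns 'time' (hours since an arbitrary
    reference), 'exptime' (minutes), and 'intensity'. Bins with less than
    `min_exposure` minutes of summed exposure time are discarded.
    """
    obs = obs.copy()
    obs["bin"] = (obs["time"] // 0.5).astype(int)  # index of the 30 min interval

    def collapse(group: pd.DataFrame) -> pd.Series:
        w = group["exptime"]
        return pd.Series({
            "time": (group["time"] * w).sum() / w.sum(),       # weighted bin time
            "intensity": (group["intensity"] * w).sum() / w.sum(),
            "exptime": w.sum(),
        })

    binned = obs.groupby("bin").apply(collapse)
    return binned[binned["exptime"] >= min_exposure]
```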
Figure 1: Sampling of date in years and local time (LT) in hours for the 19,546 30 min bins used for OH(4-2)P\({}_{1}\)(1) (a). The given times of each LT interval differ due to the different central times and exposure lengths of the contributing individual observations. (a) also provides the mean intensities of the bins in kilorayleigh (kR). For a better visibility of the variations, the upper limit of the color scale was fixed to 3 standard deviations above the mean intensity. This cut only affected 132 bins. (b) shows the sampling of the solar radio flux at 10.7 cm averaged for 27 days, \(S_{274}\), in solar flux units (sfu). The distribution indicates that the most of solar cycle 24 is covered.
## 3 Methods
### Calculation of Climatologies
In this study, we focus on 2D climatologies depending on local time (LT) and day of year (e.g. Figure 2). The local time is defined as mean solar time for the longitude of Cerro Paranal (70.4\({}^{\circ}\) W). Each climatology consists of a grid of 12\(\,\times\)12 data points with centers in the middle of the nighttime hours (from 18:30 to 05:30 LT) and the middle of the months. Each grid point represents the average of a property for a selection of close data bins, which were derived as discussed in section 2 and are representative of the average of the central times of the considered individual exposures weighted by the exposure time (see Figure 1a). The maximum relative distance to a grid point was usually 1.0, which corresponds to time differences of 1 h and 1 month of average length, respectively. Hence, a bin can contribute to several adjacent grid points. The climatologies are smoothed. Smaller selection radii would lead to a better time resolution but worse statistics. We required a minimum number of selected bins of 400. Even if we only consider the 134 grid cells with significant nighttime contribution (at least 24% with respect to solar zenith angles greater than 100\({}^{\circ}\)) that were used for the scientific analysis, this criterion is not always fulfilled. In such cases, we iteratively increased the selection radius in steps of 0.1 until the sample was large enough. For the example line OH(4-2)P\({}_{1}\)(1) (see also Noll et al., 2022b), where the input data set and the resulting intensity climatology are shown in Figures 1 and 2a, the mean radius was 1.10 for the 134 useful grid points. However, it was only 1.02 for the 113 cells with 100% nighttime contribution. The mean sample size was 522 with a maximum of 732. Apart from the decrease of the numbers close to twilight, the sample size shows a remarkable seasonal pattern with maximum numbers around the equinoxes. These variations reflect changes in the structure of the X-shooter observing programs, which depend on the seasonal visibility of the different classes of astronomical objects. As discussed in section 2, the number of data bins is significantly reduced for OH lines with wavelengths longer than 2.1\(\,\mu\)m. Nevertheless, the time resolution and quality of the statistics is only slightly worse. For OH(9-7)P\({}_{1}\)(1), we find a mean selection radius of 1.14 and a mean sample size of 474 for the 134 useful grid points.
Figure 2: Climatologies of intensity relative to mean as a function of local time (with a resolution of 1 h) and day of year (with a resolution of 1 month) for OH(4-2)P\({}_{1}\)(1) (a) and OH(4-2)P\({}_{1}\)(14) (b). The climatologies are representative of a solar radio flux of 100\(\,\)sfu. The colored contours are only provided for dates and times with solar zenith angles larger than 100\({}^{\circ}\). The seasonal variation is partly repeated (marked by lighter colors) for a better representation around the turn of the year.
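A minimal sketch of the grid-point selection described in this subsection is given below. The rectangular distance metric, the handling of the wrap-around at the turn of the year, and all variable names are assumptions made here for clarity; they are not taken from the original analysis code.

```python
import numpy as np

def select_bins(lt, doy, lt0, doy0, r0=1.0, dr=0.1, n_min=400):
    """Select the 30 min bins contributing to one climatological grid point.

    lt, doy : local times (h) and days of year of all bins
    lt0, doy0 : grid-point center (nighttime hour and mid-month day)
    The relative distance is 1.0 for 1 h in local time and one average month
    in day of year; the radius grows by dr until at least n_min bins are found.
    """
    month = 365.25 / 12.0
    d_doy = np.abs(doy - doy0)
    d_doy = np.minimum(d_doy, 365.25 - d_doy) / month  # seasonal wrap-around
    d_lt = np.abs(lt - lt0)
    dist = np.maximum(d_lt, d_doy)                     # assumed selection window
    r = r0
    idx = np.flatnonzero(dist <= r)
    while idx.size < n_min and idx.size < dist.size:
        r += dr
        idx = np.flatnonzero(dist <= r)
    return idx, r
```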
Intensity climatologies relative to the mean as shown in Figure 2 were corrected for time-specific differences in the mean solar radio flux at 10.7 cm (Tapping, 2013). As OH intensities depend on solar activity (e.g., Gao et al., 2016; Noll et al., 2017), variations in the mean solar radio flux by the selection of subsamples can affect intensity climatologies. Using the moving 27-day average centered on the day of observation, \(S_{\rm 27d}\) (see Figure 1b), as preferred by Noll et al. (2017), we found values between 89 and 109 solar flux units (sfu) with a mean of 99 sfu for the nighttime grid points related to the representative example OH(4-2)P\({}_{1}\)(1). The mean \(S_{\rm 27d}\) values are relatively low in July/August and relatively high in November/December. In order to minimize the impact of the solar radio flux on the intensity climatologies, we corrected the intensities of each grid point to be representative of 100 sfu, which is close to the mean value. For this purpose, we performed a linear regression analysis for the relation between intensity and solar radio flux. The resulting slopes for the different subsamples were then used for the correction. The climatologies of these slopes are discussed in section 4.2. As a standard deviation of 5 sfu is relatively small, the correction factors only varied between 0.98 and 1.04 with a standard deviation of 0.01 for OH(4-2)P\({}_{1}\)(1). The lowest and highest factors were found in December and July, respectively. The approach led to a general decrease of the variance in the selected subsamples. Average reductions between 1 and 6% were found for the climatologies of the different lines.
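A minimal sketch of this correction for a single grid-point subsample is given below. It assumes that the correction is applied as a multiplicative factor derived from a straight-line fit of intensity against \(S_{\rm 27d}\); the exact form of the correction in the original analysis may differ, and all names are illustrative.

```python
import numpy as np

def correct_to_reference_flux(intensity, s27d, s_ref=100.0):
    """Rescale the intensities of one grid-point subsample to s_ref (in sfu).

    A line I = a + b * S_27d is fitted to the subsample; each bin is then
    multiplied by the ratio of the fitted intensity at s_ref to the fitted
    intensity at its own S_27d value.
    """
    b, a = np.polyfit(s27d, intensity, 1)       # slope b and intercept a
    factor = (a + b * s_ref) / (a + b * s27d)   # multiplicative correction factors
    return intensity * factor, b
```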
### Decomposition of Climatologies
For a systematic analysis of the climatologies of different properties for the whole set of selected OH lines, we used decomposition techniques. First, we performed the popular principal component analysis (PCA), which is an orthogonal linear transformation in the feature space that results in a new coordinate system with the maximum sample variance along the primary axis, the second largest variance along the orthogonal secondary axis, and so on. Consequently, a few dimensions are sufficient to describe most of the variance, which significantly reduces the complexity of the data set. The most important variability patterns are highlighted, whereas the contribution of noise and outliers is minimized. The transformation can be written as
\[{\bf T}={\bf X}{\bf W}. \tag{1}\]
This matrix equation includes the matrix of the input data \({\bf X}\) with \(n\) rows representing the samples and \(p\) columns representing the features. In our case, \(p\) equals 298, i.e. the number of selected OH lines, and \(n\) is 134, i.e. the number of useful nighttime data points of the climatologies (see section 3.1). Note that the sample mean of each column (the individual climatologies) needs to be shifted to zero before the PCA can be applied. For a complete transformation, the weight matrix \({\bf W}\) and the score matrix \({\bf T}\) would have sizes of \(p\,\times\,p\) and \(n\,\times\,p\), respectively. However, as we aim at reducing the dimensionality, these matrices are truncated with sizes of \(p\,\times\,L\) and \(n\,\times\,L\), where \(L\) is the number of kept dimensions. Our analysis showed that it is sufficient to choose \(L\,=\,2\) as the explained variance is already between 89 and 98% depending on the property (see also section 4). Moreover, the third and higher components indicated that they were strongly affected by variability caused by measurement uncertainties. The climatologies of all 298 lines (set to mean values of zero) can then be described by the linear combination of two basis climatologies provided by the columns of \({\bf T}\) and the corresponding scaling factors for each line given by \({\bf W}\).
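For illustration, the truncated transformation of Equation (1) with \(L=2\) can be computed, e.g., with scikit-learn as in the following sketch; the random matrix merely stands in for the actual \(134\times 298\) array of climatology values.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(134, 298))   # stand-in for the climatology matrix (n x p)
Xc = X - X.mean(axis=0)           # column means must be zero before the PCA

pca = PCA(n_components=2)         # keep L = 2 dimensions
T = pca.fit_transform(Xc)         # score matrix, n x L (basis climatologies)
W = pca.components_.T             # weight matrix, p x L (line coefficients)

# Truncated reconstruction of the mean-subtracted climatologies: Xc ~ T @ W.T
explained = pca.explained_variance_ratio_.sum()
```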
Apart from PCA, we also used nonnegative matrix factorization (NMF; e.g., Lee & Seung, 1999) for the analysis of the different climatologies. NMF is an approximative dimensionality reduction of the form
\[{\bf A}{\bf B}\approx{\bf X}, \tag{2}\]
where all matrices have only nonnegative entries. The input data matrix \(\mathbf{X}\) with \(n\) rows and \(p\) columns is the same as discussed above. The result matrices \(\mathbf{A}\) and \(\mathbf{B}\) have sizes of \(n\times L\) and \(L\times p\), i.e. they provide the basis climatologies and line-dependent scaling coefficients for the approximative reconstruction of the individual climatologies. Consistent with the PCA, we selected \(L~{}=~{}2\) for the simplest description of the variability. Note that the choice of \(L\) affects the patterns of the basis climatologies, whereas the PCA-related components remain fixed independent of \(L\). In general, the NMF-related results were similar to those of the PCA, even in the case of the solar cycle effect, where the possible negative values had to be set to zero for the application of the NMF. The corresponding systematic bias was relatively small as only a small fraction of slightly negative values contributed to the different climatologies. Nevertheless, we preferred the unbiased PCA for this analysis. The PCA-related results dominate the discussion in section 4. The only exception is section 4.1 on intensity climatologies as the NMF-related separation of the two main contributions was significantly better, which motivated us to also include that approach in our analysis. The results of both methods are available via the release of the full data set (Noll et al., 2023).
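Analogously, a rank-2 NMF in the sense of Equation (2) can be obtained as sketched below; again, the input matrix is only a nonnegative stand-in for the real climatology data, and the solver settings are arbitrary choices.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = np.abs(rng.normal(loc=1.0, scale=0.3, size=(134, 298)))  # nonnegative stand-in

nmf = NMF(n_components=2, init="nndsvda", max_iter=1000)
A = nmf.fit_transform(X)     # n x L: the two basis climatologies
B = nmf.components_          # L x p: scaling coefficients for each line

# Approximate reconstruction according to Equation (2): X ~ A @ B
rel_error = np.linalg.norm(X - A @ B) / np.linalg.norm(X)
```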
### Variability as a Function of Time Scale
Our analysis also comprises a study of the variations which are not covered by the 2D climatologies of intensity and solar cycle effect. This residual variability (section 4.3) can be caused by intensity changes on different time scales. For the interpretation, it is therefore important to understand the contributions of the different wave types depending on the climatological grid point. As the X-shooter data set is highly inhomogeneous, this analysis requires a robust approach with respect to data gaps. A promising statistical method is the study of intensity differences between all possible combinations of data pairs as a function of the corresponding time difference. As the time series consist of bins with a width of half an hour (section 2), we can assign each pair of relative intensities \(I_{i}\) and \(I_{j}\) to a time difference \(\Delta t=t_{i}-t_{j}\) which is a whole-number multiple of \(30\,\mathrm{min}\).
Figure 3: Sample variance derived from data pairs as a function of the pair-related time difference \(\Delta t\) for intensity relative to mean of OH(4-2)P\({}_{1}\)(1) (black) and the latter after subtraction of the solar-activity-adapted (section 3.1) climatology in Figure 2a (green/gray). (a) shows the variance for \(\Delta t\) of the order of hours and days with a time resolution of \(30\,\mathrm{min}\) (marked by dots), i.e. the step size of the binned input data. Time differences that are rare due to the restriction to nighttime data are skipped. (b) displays the results for \(\Delta t\) of multiples of full days up to 5 years. For a smoother plot, only the averages of bins with a width of 5 days are shown.
Then, we can calculate the mean sample variance
\[s^{2}(\Delta t)=\frac{1}{N(\Delta t)}\sum_{\Delta t,i>j}s_{ij}^{2}\quad\text{with} \quad s_{ij}=\frac{|I_{i}-I_{j}|}{\sqrt{2}} \tag{3}\]
for the \(N(\Delta t)\) pairs of each \(\Delta t\). Without restrictions in \(\Delta t\), it would equal the normal sample variance based on deviations from the mean value of the full data set. Moreover, the definition has the advantages that pure noise with a Gaussian distribution causes a constant shift of \(\sigma^{2}\) and that a sine wave with the period \(T\) results in an oscillation between 0 (achieved for integer multiples of \(T\)) and the squared amplitude \(a^{2}\) (shifted by half a period).
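A direct, if slow, implementation of Equation (3) for a binned time series could look as follows; the double loop is kept for clarity and would have to be vectorized for the roughly 20,000 bins of the full data set. Variable names are illustrative.

```python
import numpy as np
from collections import defaultdict

def pair_variance(t, intensity, step=0.5):
    """Mean sample variance s^2 as a function of the pair time difference.

    t : bin times in hours (multiples of `step`); intensity : relative
    intensities. For every pair (i, j) with i > j the contribution
    |I_i - I_j|^2 / 2 is accumulated for its time lag (Equation (3)).
    """
    k = np.rint(np.asarray(t) / step).astype(int)   # integer lag units
    I = np.asarray(intensity, dtype=float)
    sums, counts = defaultdict(float), defaultdict(int)
    for i in range(len(k)):
        for j in range(i):
            dk = abs(int(k[i] - k[j]))
            sums[dk] += 0.5 * (I[i] - I[j]) ** 2
            counts[dk] += 1
    lags = np.array(sorted(counts))
    s2 = np.array([sums[d] / counts[d] for d in lags])
    return lags * step, s2          # time differences in hours and s^2(Delta t)
```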
In order to illustrate the approach, Figure 3 shows \(s^{2}(\Delta t)\) for the intensities relative to the mean for our example line OH(4-2)P\({}_{1}\)(1) based on the full number of 19,546 useful bins. The function is given for relatively short \(\Delta t\) up to 8.5 days (a) and longer time scales up to 5 years with a step size of 5 days (b). For the latter case, only \(s^{2}\) for integer multiples of 1 day were averaged. There is a dominating oscillation with a period of 24 h, which reflects the strong nocturnal trend in Figure 2a. Moreover, a clear semi-annual oscillation is visible, which mainly originates from the climatological pattern at the beginning of the night. As we are primarily interested in the residual variability, the 2D intensity climatology for 100 sfu in Figure 2a adapted to the actual \(S_{\text{27d}}\) values (section 3.1) was subtracted from the time series. The results are also shown in Figure 3. As expected, \(s^{2}(\Delta t)\) of the corrected time series indicates distinctly lower variances. Exceptions in Figure 3a are only present for time scales of a few hours and multiples of 24 h. They reveal the importance of short-term variations and day-to-day variations of the nocturnal pattern for the residual variability. For long time scales, the lowest reduction can be seen for multiples of 1 year. Hence, year-to-year variations are probably more crucial for the annual oscillation, which dominates the second half of the night in Figure 2a.
The analysis can also be performed for each grid point of the 2D climatology. In this way, the contributions of the different time scales to the residual variability climatology can be studied in detail. As only a small fraction of the entire data set is relevant for a specific grid point (see section 3.1), there can be a lack of suitable pairs for a certain \(\Delta t\). This issue affects long time scales in general as well as short times scales where the absence of daytime data matters. Good statistics are therefore limited to \(\Delta t\) of a few hours and those close to low multiples of 24 h. These restrictions still allow the detailed study of the impact of short-term variations with time scales shorter than 1 day, which constitute the largest contribution to the residual variability as Figure 3a reveals. For maximum robustness, we measured the short-term variance as the minimum of \(s^{2}\) for \(\Delta t\) of 24 and 48 h. In the case of the whole time series of OH(4-2)P\({}_{1}\)(1), the variance for 48 h is clearly lower. This fact suggests that there is a significant contribution of the Q2DW, although its activity period is usually only a few weeks in summer (e.g., Ern et al., 2013). Consequently, we can also estimate the amplitude of the Q2DW. Assuming a sine wave, we derived \(a^{2}\) by subtracting \(s^{2}\) for 48 h from the mean for 24 and 72 h. In the case of a negative difference, we set \(a=0\). This approach may lose a part of the amplitude as the Q2DW period can deviate from 48 h. Noll et al. (2022b) found 44 h for the covered intervals in 2017 and 2019 at Cerro Paranal. However, the restriction to multiples of 24 h increases the robustness of the statistics and avoids possible biases depending on LT due to different influences of daytime intervals without observations. Concerning the statistics, \(\Delta t=72\) h is the time difference with the lowest number of data pairs. For the nocturnal grid points of the climatology of OH(4-2)P\({}_{1}\)(1), we find numbers between 88 and 300 with a mean of 172, i.e. about one third of the average sample size. In conclusion, our analysis allows us to study 2D climatologies of short-term variations and Q2DW amplitudes. The corresponding results will be discussed in sections 4.3.1 and 4.3.2.
## 4 Results
### Relative Intensity
Figure 2 shows example climatologies of intensity relative to the climatological mean for a solar radio flux of 100 sfu and solar zenith angles greater than 100\({}^{\circ}\). Both example lines belong to the P\({}_{1}\) branch of the strong OH(4-2) band but show the maximum difference in the upper rotational quantum number \(N^{\prime}\) of the line set (1 vs. 14). The reference intensities for the two climatologies are 9.65 and 0.18 kR, respectively. Hence, even the high-\(N^{\prime}\) line is still relatively bright. The mean intensity is comparable to those of the green and red atomic oxygen lines at Cerro Paranal (Noll et al., 2012). The derived intensity climatologies appear to be quite robust since the relative root mean square averaged for the climatologies of the 23 lines with \(N^{\prime}\geq 10\) (mean intensities between 7 and 570 R) is only 2.5%, irrespective of possible true physical differences. The comparison of both climatologies in Figure 2 reveals clear differences at the beginning of the night, where OH(4-2)P\({}_{1}\)(1) shows much stronger emission relative to the mean. The maximum values near both equinoxes (April and October) are hardly visible in the climatology of OH(4-2)P\({}_{1}\)(14). The maximum in the second half of the year even appears to be shifted to November. On the other hand, the patterns after midnight are more similar. In particular, the maximum in May before dawn and the minimum in August/September can be found in both climatologies. A 2D intensity climatology for Cerro Paranal was already shown by Noll et al. (2017) for P-branch lines with low \(N^{\prime}\) in OH(6-2). The summed intensity of these lines indicates a variability pattern that agrees quite well with our results for OH(4-2)P\({}_{1}\)(1). The data of Noll et al. (2017) were taken between April 2000 and March 2015 with another VLT spectrograph. Moreover, similar features as in Figure 2a are also present in the nocturnal trends and monthly variations of the OH(9-4) band (dominated by lines with low \(N^{\prime}\)) measured by Takahashi et al. (1998) at Cachoeira Paulista in Brazil (23\({}^{\circ}\) S, 45\({}^{\circ}\) W) for the period between October 1987 and June 1993. The 2D climatology of the OH(8-3) band at Buckland Park in Australia (35\({}^{\circ}\) S, 139\({}^{\circ}\) E) for the years 1995 to 2010 from Reid et al. (2014) also indicates a rough agreement. Consequently, the shown intensity climatologies appear to be relatively stable with respect to the observing period and moderate changes of the latitude. For larger changes of the latter, there can be significant deviations as a SABER-based study of the global OH peak emission rates by Gao et al. (2010) suggests. The impact of the latitude is also illustrated by the study of Takahashi et al.
(1998), which also contains results for a site close to the equator (4\({}^{\circ}\) S).
Our data set particularly benefits from the parallel coverage of hundreds of OH lines which show differences in their effective emission heights of several kilometers (Noll et al., 2022b). Therefore, more detailed insights into the OH-related dynamics will be possible if the climatologies of all 298 lines are jointly studied in a systematic way. For this purpose, we applied the decomposition techniques that were introduced in section 3.2. The first two components of the PCA (matrix \(\bf T\) in section 3.2) explain 98.4% of the full variance. The first component is similar to the intensity climatology of OH(4-2)P\({}_{1}\)(1), whereas the second one better agrees with the climatology of OH(4-2)P\({}_{1}\)(14). This result confirms that the variabilities of our example lines differ relatively strongly with respect to the full line set. The PCA was not able to properly separate the high intensities at the beginning of the night, which are only visible for one example line, from the other variability features that can be found in the data of both lines. As stated in section 3.2, we therefore preferred the NMF. It works particularly well if a pattern can be reconstructed by summing a few nonnegative components. For our analysis, we calculated the decomposition for the least complex case, i.e. two components.
A comparison of the resulting basis climatologies in the left column of Figure 4 (matrix \(\bf A\) in section 3.2) with the example cases in Figure 2 shows that the NMF clearly separates the strongly line-dependent nocturnal trend of decreasing intensities from the underlying features that are obviously present in all climatologies.
Figure 4: Decomposition of climatologies of intensity relative to mean with nonnegative matrix factorization for two components. The resulting climatologies (see also Figure 2) as given by matrix \(\mathbf{A}\) in section 3.2 are shown in (a) and (c) and the corresponding coefficients for the 298 considered OH lines from matrix \(\mathbf{B}\) are given in (b) and (d). The number symbols in the latter plots indicate the upper vibrational level of the transition \(v^{\prime}\). The abscissa shows the energy of the upper level of the transition minus the lowest energy for the corresponding \(v^{\prime}\) in inverse centimeters. The additional energy is related to rotation with quantum numbers \(N^{\prime}~{}>~{}1\) and/or spin–orbit coupling with quantum number \(F^{\prime}=2\) (see Noll et al., 2020).
The latter are characterized by the first component in Figure 4a, which is very similar to the climatology of OH(4-2)P\({}_{1}\)(14). The correlation coefficient \(r\) for the climatological grid cells with significant nighttime contribution is +0.94. For the interpretation of this remarkable pattern with the maximum in May at 05:30 LT and the minimum in August at 23:30 LT, it is important to know whether it is restricted to OH intensities or whether it can also be seen in other properties of the mesopause region. The kinetic temperature is a popular quantity for the study of mesospheric perturbations. It can be estimated from intensity ratios of OH lines if the involved level populations are in local thermodynamic equilibrium (LTE), which is best fulfilled for the lowest \(v^{\prime}\) and \(N^{\prime}\) (Cosby & Slanger, 2007; Kalogerakis et al., 2018; Noll et al., 2015, 2016, 2020). We therefore analyzed the ratio of the P\({}_{1}\) lines of OH(3-1) with \(N^{\prime}=1\) and 2, which are frequently used for airglow instruments optimized for temperature measurements (e.g., Schmidt et al., 2013). The climatology of the line ratio indicates a good correlation with the first NMF component (\(r\,=\,+0.83\)). There are no increased values in the evening as it is typical of individual lines with low \(v^{\prime}\) and \(N^{\prime}\) like OH(4-2)P\({}_{1}\)(1) (Figure 2a). This result is also confirmed by the 2D climatology of kinetic temperature based on SABER measurements at 89 km in the region of Cerro Paranal from Noll et al. (2019). The same publication shows that similar features are also present for the number density of atomic oxygen, which is crucial for the production of OH. Hence, the first NMF component is an indicator of general perturbations of the mesopause region. As the climatologies amplify variations with fixed time scales of 24, 12, or 8 h, solar tides and the seasonal changes of their amplitudes appear to be the main source of the variability pattern. The significant impact of tides on OH emission was already discussed before (e.g., Marsh et al., 2006; Takahashi et al., 1998; Zhang et al., 2001). It is especially important for low latitudes where the migrating diurnal tide that follows the apparent motion of the Sun is the most prominent mode. The residual meridional circulation that can influence the seasonal variability appears to be a minor effect at low latitudes (Marsh et al., 2006).
The second NMF-related basis climatology is shown in Figure 4c. In each month, the intensity is decreasing in the course of the night with the highest rates in the evening. Moreover, the pattern indicates a semi-annual oscillation with the maximum values near the equinoxes (April and September). The latter could be related to the ozone number density in the OH emission layer, which indicates a similar seasonal variation (Noll et al., 2019). As the nocturnal decrease is not visible in the climatologies of kinetic temperature and atomic oxygen number density shown by Noll et al. (2019), the phenomenon seems to be linked to the OH-related chemistry. As already discussed by Marsh et al. (2006), the drop in intensity is probably caused by the consumption of atomic oxygen, which is mostly produced at daytime due to photolysis of molecular oxygen. In the denser atmosphere at lower altitudes, the losses (also by the production of OH via ozone) are faster. Assuming an exponential decay, Marsh et al. (2006) modeled time constants of 6 h at 84 km and 1 day at 88 km, which indicates a large vertical gradient of this property. We can also fit exponential functions in the second NMF component. For the natural logarithm of the values, only a linear regression analysis needs to be performed. For a better robustness, we only considered local times until the nocturnal minimum and only grid points with relative intensities higher than 0.04. The average of all monthly fits amounts to 3.3 \(\pm\) 0.2 h, which suggests that the crucial altitudes are probably 1 to 3 km below 84 km (if the model of Marsh et al. works for our data). Emission at higher altitudes with long decay times appears to mainly contribute to the first basis climatology in Figure 4a. The second NMF component is therefore strongest for OH lines with the lowest effective emission heights, which are about 86 km on average (Noll et al., 2022b) but could be about 2 km lower in the early evening as SABER data for Cerro Paranal indicate (Noll et al., 2018). Our regression analysis also revealed that there might be seasonal variations of the time constant. The values of 2.6\(\pm\)0.1 h for austral autumn and 4.0\(\pm\)0.1 h for austral winter show the largest discrepancies.
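The regression mentioned above can be written out as a short sketch: for one month of the second NMF component, a straight line is fitted to the natural logarithm of the values as a function of local time, and the (assumed purely exponential) decay time follows from the slope. The selection thresholds mirror those quoted in the text; all names are illustrative.

```python
import numpy as np

def decay_time_constant(lt_hours, comp2, min_value=0.04):
    """Exponential decay time of the second NMF component for one month.

    I(t) = I0 * exp(-t / tau) implies ln I = ln I0 - t / tau, so a linear
    fit of ln I against local time yields tau = -1 / slope. Only local
    times up to the nocturnal minimum and values above min_value are used.
    """
    lt = np.asarray(lt_hours, dtype=float)
    y = np.asarray(comp2, dtype=float)
    i_min = int(np.argmin(y))                 # nocturnal minimum
    keep = (np.arange(len(y)) <= i_min) & (y > min_value)
    slope, _ = np.polyfit(lt[keep], np.log(y[keep]), 1)
    return -1.0 / slope                       # decay time constant in hours
```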
The scaling factors of the two basis climatologies for all 298 lines are provided in the right column of Figure 4 (matrix **B** in section 3.2) as a function of the energy of the upper state of the transition reduced by the lowest energy for the corresponding \(v^{\prime}\), i.e. this energy difference (given in inverse centimeters) mainly depends on \(N^{\prime}\). Except for a few unreliable outliers, the scaling factors indicate a clear transition from climatologies with a slight dominance of the second component (decaying daytime population) in the evening for the lowest \(v^{\prime}\) and \(N^{\prime}\) to a contribution of the first component (tidal features) of almost 100% for the highest \(N^{\prime}\) irrespective of \(v^{\prime}\). For low rotational energies, the different \(v^{\prime}\) are well separated with larger gaps for lower vibrational excitations. In an intermediate zone between 500 and 1300 cm\({}^{-1}\), the factors of lower \(v^{\prime}\) show larger changes, which then leads to the vanishing discrepancies for the highest \(N^{\prime}\). This distribution is very similar to the effective emission heights derived by Noll et al. (2022b) for the same line set as correlation coefficients of \(+0.93\) and \(-0.92\) for the first and second NMF component demonstrate. This excellent agreement supports the assumption that the mixing of the two basis climatologies strongly depends on the height distribution of the emission. Hence, the differences in the coefficients for the studied lines should mostly be caused by the strong height dependence of the time constant for the decay of the daytime population of atomic oxygen.
While only a relatively narrow altitude range seems to significantly contribute to the second NMF component, tidal features should be present at all altitudes. Hence, the strength of these structures might depend on the studied OH line. With the NMF decomposition of the climatologies into two components, this question cannot be answered. However, we can directly measure features in the climatologies for this purpose. As the contribution of the decaying atomic oxygen population can be neglected in the morning, the most suitable feature is the maximum in May (Figure 4a). For an estimate of the strength of this feature, we divided the relative intensity at 04:30 LT in May by the value for the grid point at the same local time in August. This choice is a compromise between high contrast, good nighttime coverage, and late time. The resulting intensity ratios are plotted in Figure 5. The most prominent structure of the distribution of the 298 data points is the maximum in the energy range from 400 to 800 cm\({}^{-1}\). Interestingly, the Q2DWs in 2017 and 2019 investigated by Noll et al. (2022b) showed the largest amplitudes in a similar range.
Figure 5: Ratio of intensities in May and August for 04:30 LT derived from the intensity climatologies of all 298 lines. The upper levels of the transitions are characterized by \(v^{\prime}\) (markers) and the energy additional to the vibrational excitation (abscissa).
As already stated in section 1, this effect can be explained by an increased level of variability due to differences in the mixing of cold and hot populations caused by variations in the ambient temperature. It seems that variations by tides have a similar impact on OH emission. Nevertheless, the correlation coefficient between the ratios in Figure 5 and the amplitudes of the Q2DW event in 2017 for the whole line set is only +0.33. An important discrepancy is the lack of a dependence on \(v^{\prime}\) for low \(N^{\prime}\). Moreover, the ratios for very high and low \(N^{\prime}\) only differ by a few percent. Hence, structures that suggest an impact of the emission height cannot clearly be identified. The tidal modes that cause the measured climatological feature appear to affect the different emission altitudes at a similar time. The effective vertical wavelengths of these perturbations seem to be relatively long. This interpretation is supported by the fact that the location of the feature is relatively stable in the two example climatologies in Figure 2, which correspond to emissions with an effective altitude difference of about 5.5 km (Noll et al., 2022b). If this result were also valid for the other tidal features, the structures in the first NMF component could be similar for all OH lines.
### Solar Cycle Effect
As already described in section 3.1, we analyzed the impact of the solar cycle on OH intensities using a linear regression approach with the moving 27-day average \(S_{\rm 27d}\) of the solar radio flux at 10.7 cm as a proxy. Calculations with an additional linear long-term trend showed that the latter can be neglected. The ratio of the mean solar cycle effect (SCE) without and with this trend for all 30 min bins of all OH lines did not significantly differ from unity (0.977). Moreover, the long-term trend was not significant (\(+1.2\pm 0.9\) % per decade). These results confirm previous findings (Gao et al., 2016; Noll et al., 2017). Hence, the subsequent regression results were obtained with the solar radio flux as the only parameter. Our data set covers almost the entire solar cycle 24, i.e. the time series starts and ends with low solar activity as Figure 1b shows. The minimum \(S_{\rm 27d}\) for the 30 min bins in both halves of the interval are 71 sfu in October 2009 and 67 sfu in March 2018. The maximum of 166 sfu was achieved in February 2014. Hence, the impact of solar cycle 24 can be investigated for a range of about 100 sfu. The mean value also amounts to about 100 sfu. The regression analysis was separately performed for the subsamples of each climatological grid point, which allowed us to study the influence of the solar activity cycle as a function of LT and month.
Figure 6: Climatologies of solar cycle effect relative to the intensity climatologies in Figure 2 (see caption for plot details) for OH(4-2)P\({}_{1}\)(1) (a) and OH(4-2)P\({}_{1}\)(14) (b). The climatologies indicate the response of OH emission to changes of the solar radio flux averaged for 27 days by 100 sfu. Each value of the climatological grid is given relative to the corresponding mean intensity.
The results for our two example lines relative to the corresponding intensity climatologies (Figure 2) and for a change of 100 sfu are shown in Figure 6. The climatologies are remarkable in terms of the large inhomogeneities of the SCE. A particularly strong influence of the solar activity on the OH intensities can be found from midnight to dawn around July. Then, a relative SCE of more than +40% (with uncertainties lower than 10%) per 100 sfu is possible. For the rest of the climatology, the response of the OH emission is usually much weaker. Even negative values cannot be excluded. Only around the turn of the year, the significance of the regression slopes for part of the grid points is sufficient to safely assume a positive effect. The major difference between the patterns for both lines is the location of the maximum with a width of several hours in austral winter. For OH(4-2)P\({}_{1}\)(14), it seems to be shifted by a few hours towards earlier LTs. An exact measurement is difficult as OH(4-2)P\({}_{1}\)(1) shows the highest value just before twilight. The latter line also indicates a weaker decrease of the SCE towards August than in the other case.
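For illustration, the per-grid-point regression described above can be sketched as follows; the function signature and the normalization by the subsample mean are assumptions of this example, while the use of \(S_{\rm 27d}\) as the only regressor and the reporting per 100 sfu follow the text.

```python
import numpy as np

def relative_sce(rel_intensity, s27d):
    """Relative solar cycle effect of one climatological grid point.

    rel_intensity : relative intensities of the 30 min bins in the subsample
    s27d          : 27-day averaged 10.7 cm solar radio flux of the same bins (sfu)
    Returns the intensity change per 100 sfu in percent of the subsample mean.
    """
    x = np.asarray(s27d, dtype=float)
    y = np.asarray(rel_intensity, dtype=float)
    slope, _ = np.polyfit(x, y, 1)   # linear regression with the flux as only parameter
    return 100.0 * (slope * 100.0) / np.mean(y)
```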
Such a strong dependence of the effective SCE on the OH line parameters was not seen before. Pertsev and Perminov (2008) stated that the SCEs for OH bands with \(v^{\prime}\) between 3 and 9 were similar at Zvenigorod. Solar forcing for OH emission at Cerro Paranal was already investigated by Noll et al. (2017) based on optical spectra for the period from April 2000 to March 2015. The results were restricted to summed intensities of lines with low \(N^{\prime}\) of bands with \(v^{\prime}\) between 5 and 9. All bands showed relative SCEs of about 16% per 100 sfu with uncertainties much larger than the discrepancies. A mild trend could be possible for the two OH channels of SABER (Russell et al., 1999). Global data suggest a ratio of about 1.1 for the relative SCEs of the channels centered on 2.1 and 1.6 \(\mu\)m (Gao et al., 2016) with effective \(v^{\prime}\) of 8.3 and 4.6, respectively (Noll et al., 2016). Noll et al. (2017) also presented SABER results for the region around Cerro Paranal. For the years from 2002 to 2015, the corresponding ratio derived from \(14.5\pm 1.3\) and \(12.1\pm 1.5\) % per 100 sfu is 1.2 but with relatively high uncertainties. The obvious main reason for the lack of strong differences is the fact that only lines with low \(N^{\prime}\) significantly contributed and the range of \(v^{\prime}\) was limited. On the other hand, Figure 7a suggests that SCEs that were previously estimated are in good agreement with our results (despite the differences in the samples) if we focus on the relevant lines.
For a better understanding of the line dependence of the SCE, we also derived the maximum amplitude for each line in July, i.e. the month with the strongest positive response. The results (not plotted but available via the data release) are very different from those of the effective SCE. With a range of the reliable values from 37 to 67% per 100 sfu, the maximum-to-minimum ratio is distinctly smaller than for the effective SCE. As the highest values are present in the energy range from 300 to 800 cm\({}^{-1}\), there is a clear similarity to the distribution for the intensity ratio plotted in Figure 5 and the amplitude of the Q2DWs studied by Noll et al. (2022b). Consequently, the maximum SCE also appears to be significantly affected by the mixing of cold and hot populations. Interestingly, the amplitude increases with decreasing \(v^{\prime}\) for low \(N^{\prime}\), which is contrary to the effective SCE in Figure 7a but agrees with the results for the Q2DW in 2017. In the latter case, the corresponding correlation coefficient of \(+0.74\) is therefore relatively high.
The discrepancies between the effective and the maximum SCE are visualized in Figure 7b, which shows the ratio of both quantities for all OH lines. Starting with a minimum of about 0.17 for \(v^{\prime}\,=\,2\) and \(\Delta E^{\prime}\) near 0 cm\({}^{-1}\), the ratio strongly increases for higher \(v^{\prime}\)
Figure 7: Average of solar cycle effect from the corresponding nighttime climatology (examples in Figure 6) for all 298 lines (a). The ratio of this average for the entire nocturnal year and the maximum in July is given in (b) for each line. The plotted line properties are discussed in the caption of Figure 4.
(up to 0.38 at 0 cm\({}^{-1}\)) and higher \(N^{\prime}\) (average of 0.43 for \(\Delta E^{\prime}>1500\) cm\({}^{-1}\)). The bump related to the population mixing has obviously vanished, i.e. the effective and maximum SCE appear to be affected by this feature in a similar way. The remaining probably monotonic increase with a flattening for the highest rotational energies is reminiscent of the distribution of the effective emission height of the investigated lines (Noll et al., 2022b). The correlation coefficient for the comparison of the mean-to-maximum ratio with these heights is +0.80 despite several outliers in Figure 7b. Hence, the ratio is mainly a function of the emission height. At least in part, this might be explained by the location of the maximum SCE with respect to LT as shown in Figure 6. While the range of increased SCE in July is well covered by the nighttime climatology of OH(4-2)P\({}_{1}\)(14), the largest values for OH(4-2)P\({}_{1}\)(1) are achieved before dawn. In the latter case, the favorable conditions for a strong effect may extend to later LTs. Hence, the restriction to nighttime observations seems to cause an increasing loss of times with strong solar forcing for OH lines with lower effective emission heights, which then contributes to the observed wide spread of the effective SCEs.
An alternative approach for the analysis of the climatologies is the use of decomposition techniques as described in Section 3.2. As the SCE can also be negative, we prefer the PCA here. The results for the first two components, which explain 90.3 and 3.6%
Figure 8: Decomposition of climatologies of solar cycle effect (examples in Figure 6) with principal component analysis for two components. The plots in the left and right column correspond to the content of matrices \(\mathbf{T}\) and \(\mathbf{W}\) described in section 3.2. The plot details are similar to those of Figure 4.
of the variance, are shown in Figure 8. The first basis climatology in (a) (first column of matrix \(\mathbf{T}\) in section 3.2) is an intermediate case compared to the patterns for the two example lines displayed in Figure 6. The corresponding scaling factors for the individual lines in (b) (first column of \(\mathbf{W}\)) show a clear bump at intermediate rotational energies. Hence, this is another confirmation of the increased impact of temperature variations for OH rotational levels with similar contributions of cold and hot populations. The entire distribution of the data points is very similar to the one for the Q2DW in 2017 shown by Noll et al. (2022b). Benefiting from the noise-reducing properties of the PCA, the correlation coefficient is +0.96. It is remarkable that two phenomena which are related to time scales that differ by several orders of magnitude can produce such a similar response of the OH emission intensity. The second PCA component in (c) (second column of \(\mathbf{T}\)) tends to show positive values in the middle of the night and exhibits the most negative values close to the morning twilight in July and August. Thus, positive scaling factors lead to a shift of the feature with the strongest SCE towards earlier LTs in the combined climatologies. The coefficients in (d) (second column of \(\mathbf{W}\)) therefore represent a measure of the shift in LT for each OH line. The data distribution supports our conclusion that the high SCEs in austral winter are present at later times for decreasing \(v^{\prime}\) and \(N^{\prime}\). Moreover, this distribution is highly correlated with the effective emission heights (\(r=+0.89\)), which demonstrates that the shift of the SCE features is primarily altitude dependent. As the representative LTs are earlier for higher altitudes, the SCE appears to be influenced by upward-propagating perturbations. As we investigate climatologies, the formation of a robust pattern is only imaginable in the case of tides. Hence, the impact of tides on the mesopause region appears to affect the sensitivity of OH emission to solar forcing. As the shifts could amount to several hours, the vertical wavelengths of the relevant tidal modes need to be relatively short. We will provide a more quantitative discussion of this topic in section 4.3.2.
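As an illustration of this kind of decomposition (not the authors' implementation), the following Python sketch applies a two-component PCA to a matrix that stacks the SCE climatology of each line as one row; the array layout and function name are hypothetical, and only the choice of two components follows the analysis above.

```python
import numpy as np
from sklearn.decomposition import PCA

def decompose_climatologies(X, n_components=2):
    """Decompose stacked climatologies into basis patterns and per-line factors.

    X : array of shape (n_lines, n_grid_points), one flattened climatology per line
    """
    pca = PCA(n_components=n_components)
    W = pca.fit_transform(X)   # per-line scaling factors (matrix W)
    T = pca.components_        # basis climatologies (rows of matrix T)
    return T, W, pca.explained_variance_ratio_
```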
Changes in the solar activity affect the daytime production of atomic oxygen and hydrogen by photolysis as well as the energy input into the atmosphere indicated by the ambient temperature (e.g., Beig et al., 2008; Marsh et al., 2007). Both effects modify the OH intensity. The impact of the solar cycle on the temperature at OH emission altitudes can be tested separately via the ratio of the OH(3-1)P\({}_{1}\)(1) and OH(3-1)P\({}_{1}\)(2) intensities (see section 4.1). The corresponding SCE climatology shows a very similar seasonal pattern as in the case of the intensity of individual lines. Although the lines have low \(v^{\prime}\) and \(N^{\prime}\), the maximum effect occurs shortly after midnight in austral winter, which better matches lines like OH(4-2)P\({}_{1}\)(14) (Figure 6). This result supports our explanation of the shifts of the SCE pattern in LT direction since the effective height for rotational temperature changes tends to be several kilometers higher than the effective height for intensity variations (e.g., Noll et al., 2022b; Swenson and Gardner, 1998). The temperature changes are weighted by the OH intensity profile, whereas the intensity changes usually maximize in the lower part of the layer due to the steepening of the atomic oxygen gradient (e.g., Smith et al., 2010). With respect to the presence of atomic oxygen at relatively low altitudes, the solar activity cycle seems to have a similar impact as waves with much shorter time scales.
### Residual Variability
We also investigated the variability of OH line intensities which cannot be explained by the average climatologies discussed in section 4.1 and those related to the SCE presented in section 4.2. For this purpose, we calculated a line-specific model intensity for each 30 min bin depending on the LT interval, month, and the deviation of the solar radio flux \(S_{\rm 27d}\) from 100 sfu using the corresponding climatologies for relative intensity and SCE. The model values were then subtracted from the measured relative intensities. Climatologies of the residual variability can now be derived by the calculation of the standard deviation of the corrected relative intensities for the subsamples related to each climatological grid point (see section 3.1). For our two example lines, the results are provided in the top row of Figure 9.
Figure 9: Climatologies of residual variability (standard deviation; top row), short-term variability (residual variations for time scales less than a day; middle row), and amplitude of quasi-2-day wave (bottom row) relative to the intensity climatologies in Figure 2 (see caption for plot details) for OH(4-2)P\({}_{1}\)(1) (left column) and OH(4-2)P\({}_{1}\)(14) (right column).
Both climatologies show the highest values in June and July and a second maximum around January. The main discrepancy is the LT range with the highest residual variability relative to the climatological mean. The later maximum for OH(4-2)P\({}_{1}\)(1) qualitatively agrees with the results for the SCE (Figure 6).
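A minimal sketch of the residual computation described above is given below; the multiplicative combination of the climatological intensity and the SCE term is an assumption of this example, as is the function signature.

```python
import numpy as np

def residual_intensities(obs_rel_int, clim_rel_int, clim_sce, s27d):
    """Remove the mean climatology and the solar-cycle response from one subsample.

    obs_rel_int  : measured relative intensities of the 30 min bins
    clim_rel_int : climatological relative intensity of the grid point
    clim_sce     : relative SCE of the grid point (fraction per 100 sfu)
    s27d         : 27-day averaged solar radio flux of the bins (sfu)
    """
    model = clim_rel_int * (1.0 + clim_sce * (np.asarray(s27d, float) - 100.0) / 100.0)
    return np.asarray(obs_rel_int, float) - model

# the residual-variability climatology is then the standard deviation of these
# residuals for each climatological grid point, e.g. np.std(res, ddof=1)
```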
As already discussed in section 3.3, we analyzed the time dependence of the residual variability by means of the calculation of the sample variance for data pairs with the same time difference. With this approach, it was possible to derive climatologies for short-term variability with time scales shorter than a day and the amplitude of the Q2DW for each OH line. As we only needed to use the frequently occurring time differences of 24, 48, and 72 h for the calculation of these properties, the results are relatively robust. We discuss them in the following.
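The pair-based variance analysis can be sketched as follows; the structure-function-like estimator used here (half the mean squared difference of pairs with a given time separation) is one possible implementation and may differ in detail from the estimator of section 3.3.

```python
import numpy as np

def variance_vs_lag(times_h, residuals, lags_h=(24.0, 48.0, 72.0), tol_h=0.25):
    """Residual variance for data pairs separated by the given time differences."""
    t = np.asarray(times_h, dtype=float)
    r = np.asarray(residuals, dtype=float)
    result = {}
    for lag in lags_h:
        diffs = []
        for i in range(t.size):
            j = np.flatnonzero(np.abs(t - t[i] - lag) < tol_h)
            diffs.extend(r[j] - r[i])
        diffs = np.asarray(diffs)
        # for uncorrelated data, half the mean squared pair difference equals the variance
        result[lag] = 0.5 * np.mean(diffs**2) if diffs.size else np.nan
    return result
```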
#### 4.3.1 Short-Term Variability
As demonstrated by Figure 3 for OH(4-2)P\({}_{1}\)(1), the variance in the corrected intensities is dominated by short time scales up to a few hours. Using the minimum of the variance values for 24 and 48 h as a measure, the short-term variability corresponds to 75% of the total residual variance in the case of OH(4-2)P\({}_{1}\)(1) and 70% in the case of OH(4-2)P\({}_{1}\)(14). These percentages are the averages for the nighttime climatological grid. The individual grid points for OH(4-2)P\({}_{1}\)(1) reveal a clear seasonal variation with the highest and lowest fractions in austral winter (84% in August) and summer (61% in January), respectively. The absolute minimum of 47% at around 03:30 LT in January matches the summer maximum of the residual variability. As a consequence, the resulting short-term variability in Figure 9c only indicates one pronounced maximum in the second half of the night in June and July. The secondary maximum in January has mostly vanished. For the weak OH(4-2)P\({}_{1}\)(14) line, the variance fractions are relatively noisy. Nevertheless, relatively low variance ratios in the evening in summer lead to the weakening of the summer peak in Figure 9d. Again, there is only one dominating climatological maximum in winter but earlier in the night compared to OH(4-2)P\({}_{1}\)(1). The exact shape of this structure remains unclear due to the relatively high uncertainties with respect to the variance fraction.
Our results for the seasonal pattern can be compared to a study of the variance of the intensity of the OH(6-2) band measured at El Leoncito in the Argentinian Andes (32\({}^{\circ}\) S, 69\({}^{\circ}\) W) by Reisin and Scheer (2004). After the removal of diurnal and semidiurnal tidal modes by means of a fitting procedure, the authors also obtained a clear primary maximum in June and July, and a weaker secondary one in December and January. This pattern, which was explained by variations in the gravity wave (GW) activity, is also present in the corresponding results for OH(6-2)-based rotational temperatures. This is consistent with our climatology for the ratio of OH(3-1)P\({}_{1}\)(1) and OH(3-1)P\({}_{1}\)(2) (see section 4.1), which indicates a similar LT dependence of the winter maximum as in the case of the short-term intensity variance for the individual lines. In addition, significantly increased GW activity in austral winter was found by Alexander et al. (2015) in SABER temperature fluctuations with vertical wavelengths between 5 and 20 km for the region around the Andes between 29\({}^{\circ}\) and 36\({}^{\circ}\) S in the mesosphere and stratosphere, i.e. at significantly lower altitudes than those related to the OH emission. SABER-based global maps of gravity wave amplitude and momentum flux at 30 km (Ern et al., 2018; Preusse et al., 2009) also reveal a winter maximum at Cerro Paranal. The increased activity seems to be connected to the GW hot spot in the southern Andes, i.e. the winter polar vortex and orographic forcing obviously play a role. According to the maps, the shallow summer maximum is probably related to GWs forced by deep convection, which have a hot spot east of the Andes at low southern latitudes in summer. At least for short-period waves with periods of 5 to 10 min, this interpretation seems to be supported by broad-band OH airglow imaging at Cerro Pachon in Chile (30\({}^{\circ}\) S, 71\({}^{\circ}\) W) studied by Cao and Liu (2022). The observations show preferential propagation directions
towards the south (and also west) in austral summer and towards the north (and also east) in winter. Filtering effects (e.g., Fritts & Alexander, 2003) certainly contributed to this pattern as the wind measured by a meteor radar at the same site tended to indicate opposite directions, i.e. critical similarities in speed and direction were reduced. According to Cao and Liu (2022), the majority of the detected waves was probably not directly propagating from the tropospheric hot spots. Either wave ducting in the mesopause region (e.g., Walterscheid et al., 1999) or secondary wave generation related to wave breaking at lower levels should be crucial. The importance of secondary waves is also discussed by other studies related to the same geographical region (Liu et al., 2019; Vadas et al., 2019; Vargas et al., 2016). As indicated by Figure 3, most of the measured variance is related to perturbations with periods of hours, i.e. significantly longer than for the waves studied by Cao and Liu (2022). Long-period GWs tend to propagate mainly horizontally and can therefore be detected far from the source region for favorable atmospheric conditions (e.g., Fritts & Alexander, 2003). Hence, it is more likely to observe waves at Cerro Paranal that directly originate from the tropospheric source regions than in the case of short-period waves. Note that such discrepancies in the propagation properties can contribute to period-dependent differences in the seasonal wave activity (e.g., Sedlak et al., 2020).
In conclusion, the relatively strong short-term variations in the X-shooter OH data in June and July appear to be mainly produced by primary or secondary GWs related to the winter hot spot south of Cerro Paranal. Nevertheless, the details of the generation, propagation, and filtering of the relevant waves remain relatively uncertain. The OH intensity variance for time scales of hours may be affected by day-to-day changes in the tidal pattern. However, such changes can be forced by varying interactions with GWs (e.g., Fritts & Alexander, 2003; Smith, 2012), i.e. the origin of the observed OH intensity variance would still be gravity waves. The scenario of strong interactions between tides and GWs seems to be supported by the fact that the enhanced climatological short-term variations in austral winter in Figure 9 are obviously embedded in the area of the intensity climatologies where the strongest tidal variations as indicated by Figure 4a are present. Moreover, there is an interesting similarity in the climatologies of the short-term variations and the solar cycle effect, which also shows the largest effects in the second half of the night in winter in the case of OH(4-2)P\({}_{1}\)(1) (Figure 6a). Although the maximum SCE appears to be later in LT (a few hours) and day of year (a few weeks), the correlation coefficient for these well-defined features of similar shape is still 0.53. Hence, it could be that the increased SCE is related to enhanced GW activity. The stronger vertical transport by the wave-induced perturbations might increase the sensitivity of the OH production and emission to the atmospheric changes related to solar activity (such as the increased production of atomic oxygen).
In the following, we discuss the dependence of the short-term variations on the line parameters. Figure 10a shows the effective relative standard deviations for the entire nighttime climatologies. The plot was produced in the same way as for the SCE in Figure 7a. The distribution shows the typical features related to the mixing of cold and hot rotational populations. Hence, there is a good positive correlation of \(r\,=\,+0.77\) with the amplitude of the Q2DW from 2017 (Noll et al., 2022b). On the other hand, \(r\) is only \(+0.28\) in the case of a comparison with the SCE-related Figure 7a. The discrepancies seem to be related to the location of the winter maximum in the climatologies. As discussed in section 4.2, there appears to be a significant loss of high SCE values for OH levels with low \(v^{\prime}\) and \(N^{\prime}\) due to the morning twilight. This issue is less critical for the short-term variations, which show their highest values closer to midnight (e.g. Figures 6a and 9c). In agreement with this assumption, a high \(r\) of \(+0.83\) is found if the short-term variations are only compared with the maximum SCE in July.
In Figure 11, we show the PCA results for the climatologies of the short-term variations. The first and the second component, which explain 83.6 and 5.3% of the variance, show similar structures as those for the SCE.
Figure 11: Decomposition of climatologies of short-term variability (examples in Figure 9) with principal component analysis for two components. The figure is similar to Figure 8.
Figure 10: Average of short-term variability (a) and Q2DW amplitude (b) from the corresponding nighttime climatology (examples in Figure 9) for all 298 lines. The plotted line properties are discussed in the caption of Figure 4.
The scaling factors of both components in the right panels of Figure 11 correlate with the corresponding ones for the SCE in Figure 8 with \(r\) of +0.81 and +0.94, respectively. The first value seems to be lowered by the relatively large spread with respect to the vibrational levels in Figure 11b. This also causes a relatively strong negative correlation with the effective emission heights of \(-\)0.80 (compared to +0.85 for the amplitude of the Q2DW in 2017). However, the second component still shows a distinctly stronger dependence on the emission height (\(r\) = +0.89) in agreement with the results for the SCE. Consequently, the PCA reveals that the line dependence of the short-term variations can also mainly be explained by a combination of an amplitude increase for intermediate \(N^{\prime}\) due to an increased sensitivity to temperature variations (population mixing) and a height-dependent shift of the climatological pattern in LT direction. The latter is related to a basis climatology for the second component (Figure 11c) with an earlier decrease of the values in winter (in the middle of the night) than in the case of the SCE (around 04:00 LT, Figure 8c). If we explain the short-term variations as mainly caused by GWs as discussed before, these results imply that the presence of GW features in OH emission depends on LT and height. As the maximum range of effective emission heights for our line set is about 8 km on average (Noll et al., 2022b) with similar layer widths (e.g., Baker & Stair, 1988; Noll et al., 2016), possible changes in the GW sources and filtering effects below the mesopause region do not appear to be favorable explanations. Variations in the location of critical layers important for wave breaking and wave ducts may play a role. However, as already discussed in section 4.1 with respect to the intensity climatologies, especially the vertical distribution of atomic oxygen can cause strong line-dependent differences. In this context, height-dependent variations of the concentration (accompanied by temperature changes) by independent perturbations seem to be important. Considering the amplification of waves with periods of integer fractions of a day in the climatologies and the shift of the pattern towards earlier LTs (up to several hours) with increasing height, rising solar tides could be crucial again. They appear to be able to alter the OH-related chemistry in a way that significantly affects the sensitivity of OH-based observations of GW activity.
#### 4.3.2 Quasi-2-Day Wave
As described in section 3.3, we can also derive 2D climatologies of the relative amplitude of the Q2DW based on the differences of the residual variances for time differences of 2 and 1 plus 3 days. Hence, we can extend the investigation of Q2DW activity in January/February 2017 (eight nights) and January 2019 (seven nights) (Noll et al., 2022b) by a more statistical approach for the entire data set. For our two example lines, the results are shown in the bottom row of Figure 9. The climatologies display clear maximum values in January at about 03:00 LT for OH(4-2)P\({}_{1}\)(1) and about 23:00 LT for OH(4-2)P\({}_{1}\)(14), i.e. they are separated by about 4 h. In this month, the average Q2DW-related variance for both lines is almost on a similar level as the dominating short-term intensity variations discussed in section 4.3.1 (ratios of 0.89 and 0.49 respectively). Hence, a significant fraction of the residual variability in January is obviously caused by Q2DW activity. The sharp peak at 00:30 LT in July for the relatively faint OH(4-2)P\({}_{1}\)(14) emission in (f) is probably caused by measurement uncertainties since such a strong feature at this position is not visible in the climatologies of other lines with high \(N^{\prime}\). Consequently, the less extended LT range with strong short-term variations in July in (d) compared to the pattern for the entire residual variability in (b) can also be explained by this issue. An outlier can occur more easily in (f) than in (b) of Figure 9 since the residual variances for the required specific time differences are based on smaller samples. In the case of 00:30 LT in July, there was only a particularly small number of only 101 data pairs for the calculation of the variance for \(\Delta t\) = 72 h (cf. section 3.3), whereas the sample of 30 min bins comprised 468 values for this grid point. Moreover, we estimated the uncertainties in the Q2DW amplitude using half the difference in the variance \(s^{2}\) for \(\Delta t\) = 24 and 72 h (ignoring possible true systematic differences) as the error for \(s^{2}\) at 24, 48,
and 72 h, respectively. For those climatological grid points with an amplitude at least half as strong as the maximum in January, we then derived a satisfying mean relative uncertainty of about 18% for both lines in austral summer. On the other hand, the percentage is about 55% for OH(4-2)P\({}_{1}\)(14) in the rest of the year.
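Based on lag-dependent variances of the kind sketched earlier, a rough amplitude estimate can be written as follows; the sinusoidal normalization is an assumption of this example, and the exact estimator of section 3.3 may differ.

```python
import numpy as np

def q2dw_amplitude(var_by_lag):
    """Quasi-2-day-wave amplitude from residual variances at 24, 48, and 72 h lags.

    A pure 2-day wave contributes no variance to pairs 48 h apart (in phase) but a
    maximal contribution to pairs 24 h and 72 h apart (in anti-phase); the excess
    of the 24/72 h variances over the 48 h variance therefore measures the wave.
    """
    excess = 0.5 * (var_by_lag[24.0] + var_by_lag[72.0]) - var_by_lag[48.0]
    return np.sqrt(max(excess, 0.0) / 2.0)   # amplitude of an assumed sinusoid
```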
The location of the maximum values in January agrees well with the expected activity period of a westward-propagating Q2DW with a zonal wavenumber of 3 (e.g., Tunbridge et al., 2011). The later peak for OH(4-2)P\({}_{1}\)(1) is consistent with the studies of specific time series by Noll et al. (2022b). Interestingly, the maximum values appear to be more pronounced and more clearly separated in the climatologies. The maximum relative amplitudes in (e) and (f) are 0.32 and 0.26, respectively. The corresponding values for the strong event in 2017 were 0.74 and 0.46. On the other hand, the moderate event in 2019 with only a weak LT dependence indicated 0.31 and 0.49, respectively. Our Q2DW climatologies should be more representative than the two short time intervals that were studied by Noll et al. (2022b). However, the gaps in the X-shooter data set and the strong year-to-year changes in the Q2DW properties (e.g., Ern et al., 2013; Gu et al., 2019; Tunbridge et al., 2011) imply that these climatologies are only rough approximations of the long-term averages. For an illustration of the uncertainties, we recalculated the climatologies without the well-covered strong Q2DW event in 2017. The results indicate a decrease of the average amplitude in January by 21 and 10% for the two example lines with \(N^{\prime}\,=\,1\) and 14. The main decrease is related to the second half of the night, which showed the highest amplitudes in 2017. As a consequence, the morning peak for OH(4-2)P\({}_{1}\)(1) became more diffuse with two apparent maxima, whereas the summer pattern in the climatology of the high-\(N^{\prime}\) line did not change much. Hence, the eight nights in 2017 had a clear impact on the Q2DW climatologies in austral summer by increased averages and more pronounced peaks.
Similar to the properties that were discussed in the previous sections, we derived effective Q2DW amplitudes from the nighttime climatologies of the whole line set. The results are shown in Figure 10b. As significant activity of the Q2DW is restricted to the austral summer, the typical amplitude relative to the intensity climatology is only of the order of 10%. Nevertheless, the dependence of the values on \(v^{\prime}\) and \(N^{\prime}\) resembles other plots of this kind. For example, the effective short-term variations in Figure 10a show a good correlation with \(r=+0.77\) despite differences in the outlier distribution. Of course, it is also interesting to compare with the corresponding results for the maximum amplitude of the Q2DW in 2017. Considering that there are large differences in the related samples (10 years vs. eight nights), the correlation coefficient of \(r\,=\,+0.81\) indicates a convincing agreement. Moreover, the clear presence of the bump at intermediate \(N^{\prime}\) supports the conclusion that the mixing of cold and hot populations seems to be a general driver for line-specific differences in the strengths of perturbations on various time scales.
In Figure 12, we show the results for the PCA of the climatologies of the Q2DW amplitude of all 298 lines. As in the case of the SCE and the short-term variations, we discuss the first two components, which explain 83.5 and 5.0% of the variance. The primary basis climatology in (a) indicates the expected strong maximum in the second half of the night in austral summer. Based on the full line set, the derived pattern can show possible features more clearly than the individual climatologies. Hence, there might be a weak secondary maximum in the same LT range in winter, which would not contradict other observations (e.g., Ern et al., 2013). The corresponding scaling factors in (b) confirm the typical amplitude distribution related to population mixing. With a correlation coefficient of \(+0.95\), the pattern is very similar to the distribution of the amplitudes for the Q2DW event in 2017. Apart from comparing the same wave type, the noise-reducing property of the PCA certainly contributes to this very high \(r\). In the same way as for the SCE and the short-term variations, the second basis climatology in (c) changes the LTs of the maximum activity. Positive scaling factors in (d) lead to a shift to earlier LTs.
Figure 12: Decomposition of climatologies for Q2DW amplitude (examples in Figure 9) with principal component analysis for two components. The figure is similar to Figure 8.
Again, these changes are strongly correlated with the effective emission height (\(r=+0.94\)). Hence, Q2DWs also appear to strongly interact with upward-propagating tides, which confirms the conclusions of Noll et al. (2022b) based on the LT dependence of the amplitude of the Q2DW from 2017. Interactions with migrating solar tides were already reported in previous studies (e.g., Hecht et al., 2010; Palo et al., 1999).
The well-localized peaks of the Q2DW amplitude in the individual climatologies as shown in the bottom panels in Figure 9 allow us to quantitatively analyze the relation of height and LT in order to learn more about the tidal modes that cause the observed activity shifts. For this purpose, we derived the LT bin in January with the maximum amplitude for the 298 studied OH lines and then averaged the reference emission heights from Noll et al. (2022b) for all lines with the maximum in a given bin. Thereafter, we performed a linear regression analysis of the average heights and the central times of the LT bins with a length of 1 h. The relation between height and LT turned out to be almost perfectly linear with \(r=-0.994\). The slope of the relation could therefore be derived with high accuracy, amounting to \(-1.23\pm 0.07\,\mathrm{km\,h^{-1}}\). This negative phase propagation can be converted into the vertical wavelength of a rising wave if the wave period is set. For a diurnal tide with a period of 24 h, the result is 29.5\(\pm\)1.6 km, which is remarkably close to the expected wavelength of the first symmetric westward propagating mode with a zonal wavenumber of 1. Forbes (1995) provide 27.9 km based on calculations. This migrating solar tide, which is also known as DW1, is the dominating tidal mode at low latitudes (e.g., Smith, 2012). The most important semidiurnal mode, the westward-migrating SW2 tide with a zonal wavenumber of 2 would need a wavelength of about 15 km in order to fit. However, its wavelength is clearly longer than in the case of the DW1 (Smith, 2012). Hence, it is likely that the line-dependent LT shift of the maximum Q2DW activity in January can mostly be explained by the interaction of Q2DW and DW1. The significant contribution of the Q2DW in 2017 to the climatologies seems to be important for this result since the LT difference for the two example lines as reported above would otherwise be more uncertain. The Q2DW in 2017 had a most probable wavelength of about 32 km during the analyzed eight nights (Noll et al., 2022b). As this is comparable to the DW1, strong interactions are likely. This statement is supported by the fact that the Q2DW from 2019, which did not show a strong LT dependence, had a very long vertical wavelength.
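For clarity, the conversion from the fitted phase slope to a vertical wavelength is simply the magnitude of the slope multiplied by the assumed wave period:

\[\lambda_{z}=|c_{z}|\,T=1.23\,\mathrm{km\,h^{-1}}\times 24\,\mathrm{h}\approx 29.5\,\mathrm{km},\]

so an assumed semidiurnal period of 12 h would instead imply only about 15 km, consistent with the argument against SW2 above.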
The interpretation of the line-dependent shifts of the patterns in the climatologies of the SCE and short-term variations is more difficult as the most striking features are broader and are partly cut by the nighttime limits. Nevertheless, there are clear indications that the situation for the Q2DW in austral summer can be generalized. The best examples are probably the climatologies of the residual variability for the two reference lines in the top row of Figure 9. These climatologies include the Q2DW as well as the GW activity. In these cases, the variability in January and July show a similar dependence on LT, at least with respect to the main peaks. The LT difference for the maximum in July is probably about 4 h with an uncertainty of 1 h, i.e. very similar to the result for the Q2DW in January shown in the bottom panels. The extracted short-term variations in panels (c) and (d) of Figure 9 are also consistent with this estimate. In the case of the SCE shown in Figure 6, the interpretation is most difficult but the minimum difference should be 2 to 3 h. Hence, the DW1 is also the most likely tidal mode for the origin of the line-dependent changes in the climatological patterns of the short-term variations (i.e. especially GW activity) and the SCE. Nevertheless, the OH intensity variations are also affected by other tidal modes (with longer vertical wavelengths) as the analysis of the intensity climatologies in section 4.1 indicates. There, the location of the tidal features appears to be relatively robust with respect to the line parameters.
## 5 Conclusions
We studied intensity time series (binned in 30 min steps) of 298 OH lines with respect to climatological variability patterns based on almost 90,000 near-infrared spectra taken at Cerro Paranal in Chile in the time interval from October 2009 to September 2019. Different 2D climatologies were calculated for local time (LT) and day of year with grid steps and (minimum) data selection radii of 1 h and 1 month, respectively.
The climatologies of the OH intensities relative to the mean revealed a strong dependence on line parameters such as the upper vibrational level \(v^{\prime}\) and upper rotational level \(N^{\prime}\). Using nonnegative matrix factorization for the decomposition of the observed patterns, we could clearly separate two major components. First, there is a relatively stable variability structure which can be explained by tidal features, which are strongest in the middle of the year. Similar features can also be observed in the temperature and atomic oxygen concentration at the altitudes with the strongest OH emission. Their amplitude seems to maximize for intermediate rotational energies between 400 and 800 cm\({}^{-1}\). A similar amplitude distribution was previously discovered by Noll et al. (2022b) for a quasi-2-day wave (Q2DW) which was investigated in a time interval of eight nights in 2017 based on the same X-shooter data set. Another Q2DW in 2019 (seven nights) also showed this feature. It can obviously be explained by assuming that the population distribution for each \(v^{\prime}\) can be described by the combination of a cold thermalized population with the effective ambient temperature at the OH emission heights and a hot non-thermalized population with \(v^{\prime}\)-dependent pseudo temperatures (Noll et al., 2020). The maximum tidal variations are then found where both populations show similar contributions on average since the steep decrease of the cold population with increasing \(N^{\prime}\) there causes a particularly high variability of the population mixing if the ambient temperature changes. The second component of the relative intensity climatologies is characterized by a general decrease from the evening to the morning with amplitude maxima near the equinoxes. Assuming an exponential function, the effective time constant of the decrease is 3.3\(\pm\)0.2 h with possible seasonal variations. The contribution to the combined climatologies strongly decreases with increasing \(v^{\prime}\) and \(N^{\prime}\) and vanishes almost completely for the highest rotational levels. This behavior resulted in a strong anticorrelation with the effective line emission heights based on phase measurements in the X-shooter and related height-resolved SABER data for the Q2DW event in 2017 (Noll et al., 2022b). Supported by the results of Marsh et al. (2006), this component can be explained by the particularly strong decay of the nighttime population of atomic oxygen at the lowest OH emission altitudes below 84 km; atomic oxygen is essential for the OH production and is mostly produced by photolysis of molecular oxygen at daytime.
We also calculated climatologies of the solar cycle effect (SCE) relative to the corresponding intensity climatologies for 27-day averages of the solar radio flux. The effective SCE values derived from the entire nighttime climatologies show a large range between about 8 and 23% per 100 solar flux units (sfu), which was not observed before but is consistent with previous results if it is considered which OH lines contributed to the analysis. The lowest SCE values were found for the lines with the lowest \(v^{\prime}\) and \(N^{\prime}\). Between intermediate and high rotational energies, no clear trend was seen. This distribution can be explained by the striking structure of the SCE climatologies with values between slightly negative and more than +50% per 100 sfu and its change depending on the line parameters. A principal component analysis (PCA) revealed that the primary component is characterized by a strongly positive effect in the second part of the night around July and only weak effects otherwise. This component also shows the highest values for intermediate \(N^{\prime}\), i.e. differences in the sensitivity of the OH level populations to changes in the ambient temperature is the main reason for the observed discrepancies. The second component essentially describes a shift of this pattern in LT direction. As indicated by a very high correlation coefficient of +0.96, the LT of the maximum is later for lines with lower effective emission height. This effect obviously contributes to the low
effective nighttime SCEs for lines with low \(v^{\prime}\) and \(N^{\prime}\) as the extension of the maximum feature in LT direction appears to be reduced by the end of the night. The shift of the SCE pattern can be best understood in terms of the impact of upward-propagating perturbations, which seem to change the sensitivity of the OH emission to atmospheric effects of solar activity such as higher atomic oxygen production and higher temperatures. As we investigate climatologies, the perturbations are most likely related to solar tides.
By correcting the OH intensity data for the mean climatologies and the SCE using the corresponding solar radio flux, we could also study climatologies of the residual variability consisting of the standard deviations for the selected subsamples for each climatological grid point. The results show maximum values around the solstices. With the derivation of the mean variance as a function of the time difference of data pairs for each subsample, we were able to distinguish between different variability sources. Time scales up to several hours are most important. Such short-term variations show a clear maximum in June and July, which is probably related to gravity waves (GWs) that tended to be generated in the south towards the Andean winter hot spot and reached Cerro Paranal under favorable propagation conditions either directly, via wave ducts, or as secondary waves. These waves may also play an important role for the strong SCE that is present in a similar region of the climatologies. With respect to the activity maximum in austral summer, short-term variations only explain a part of the residual variability as especially January is characterized by a significant contribution of Q2DWs, which we also measured. The remaining activity on short time scales might mainly be related to GWs that originate from deep convection in the north and east on the other side of the Andes. For the climatologies of the short-term variations and the Q2DW amplitude, we also calculated effective values for each OH line. Compared to the SCE, they are less affected by the nighttime limitations as the LTs with the highest activity were not close to the twilight for all lines. As a consequence, the line-dependent distributions of both properties showed a clearer bump at intermediate \(N^{\prime}\). As the first PCA components also revealed this feature, it seems that the line-dependent mixing of cold and hot populations has a major impact on the amplitude of various kinds of variations with time scales that can differ by several orders of magnitude. Similar to the SCE, the second PCA components indicate clear shifts of the climatological patterns in LT direction that are strongly correlated with the effective emission heights of the lines. Hence, the impact of tides on the sensitivity of OH emission to perturbations can also be generalized. In order to learn more about the relevant tidal modes, we used the well-defined LTs of the maximum Q2DW amplitude in January (with clear impact of the strong event in 2017) of all 298 lines in order to link them to the corresponding effective emission heights. The resulting height-LT relation is almost perfectly linear with a slope of \(-1.23\pm 0.07\,\mathrm{km}\,\mathrm{h}^{-1}\) that can be best explained by a wave period of \(24\,\mathrm{h}\) and a vertical wavelength of about \(30\,\mathrm{km}\). As the climatologies for short-term variations and SCE show similar pattern shifts, it appears that OH-based studies of GWs, Q2DWs, solar activity, and possibly other variability sources at Cerro Paranal and similar locations are significantly affected by the westward-propagating diurnal tide with zonal wavenumber 1, DW1. This tidal mode can act by direct interactions with the other perturbations and/or indirectly via the change of the OH-related chemistry, which particularly depends on the atomic oxygen profile.
## 6 Open Research
The basic X-shooter NIR-arm data for this project originate from the ESO Science Archive Facility at [http://archive.eso.org](http://archive.eso.org) (open access for all data used) and are related to various observing programs that were carried out between October 2009 and September 2019. The raw spectra were processed (using the corresponding calibration data) and then analyzed. Data of the analysis of eight nights in 2017 and seven nights in 2019 with respect to specific Q2DWs were already published (Noll et al., 2022a). We used the resulting line-dependent wave amplitudes and effective emission heights. Concerning this
study, the line-specific time series (binned in 30 min steps) for the calculation of the climatologies and the results of the analysis as partly shown in the figures can be obtained from the public repository Zenodo at [http://zenodo.org/record/7826060](http://zenodo.org/record/7826060) (Noll et al., 2023).
## Acknowledgments
Stefan Noll is financed by the project NO 1328/1-3 of the German Research Foundation (DFG). The authors thank Sabine Mohler from ESO for her support with respect to the X-shooter calibration data. Moreover, the authors are grateful to the two anonymous reviewers for their valuable comments.
|
2310.05865 | A Learning-Based Framework for Safe Human-Robot Collaboration with
Multiple Backup Control Barrier Functions | Ensuring robot safety in complex environments is a difficult task due to
actuation limits, such as torque bounds. This paper presents a safety-critical
control framework that leverages learning-based switching between multiple
backup controllers to formally guarantee safety under bounded control inputs
while satisfying driver intention. By leveraging backup controllers designed to
uphold safety and input constraints, backup control barrier functions (BCBFs)
construct implicitly defined control invariance sets via a feasible quadratic
program (QP). However, BCBF performance largely depends on the design and
conservativeness of the chosen backup controller, especially in our setting of
human-driven vehicles in complex, e.g, off-road, conditions. While
conservativeness can be reduced by using multiple backup controllers,
determining when to switch is an open problem. Consequently, we develop a
broadcast scheme that estimates driver intention and integrates BCBFs with
multiple backup strategies for human-robot interaction. An LSTM classifier uses
data inputs from the robot, human, and safety algorithms to continually choose
a backup controller in real-time. We demonstrate our method's efficacy on a
dual-track robot in obstacle avoidance scenarios. Our framework guarantees
robot safety while adhering to driver intention. | Neil C. Janwani, Ersin Daş, Thomas Touma, Skylar X. Wei, Tamas G. Molnar, Joel W. Burdick | 2023-10-09T17:02:50Z | http://arxiv.org/abs/2310.05865v3 | # A Learning-Based Framework for Safe Human-Robot Collaboration
###### Abstract
Ensuring robot safety in complex environments is a difficult task due to actuation limits, such as torque bounds. This paper presents a safety-critical control framework that leverages learning-based switching between multiple backup controllers to formally guarantee safety under bounded control inputs while satisfying driver intention. By leveraging _backup controllers_ designed to uphold safety and input constraints, _backup control barrier functions_ (BCBFs) construct implicitly defined control invariance sets via a feasible quadratic program (QP). However, BCBF performance largely depends on the design and conservativeness of the chosen backup controller, especially in our setting of human-driven vehicles in complex, e.g, off-road, conditions. While conservativeness can be reduced by using multiple backup controllers, determining when to switch is an open problem. Consequently, we develop a broadcast scheme that estimates driver intention and integrates BCBFs with multiple backup strategies for human-robot interaction. An LSTM classifier uses data inputs from the robot, human, and safety algorithms to continually choose a backup controller in real-time. We demonstrate our method's efficacy on a dual-track robot in obstacle avoidance scenarios. Our framework guarantees robot safety while adhering to driver intention.
## I Introduction
_Safety filters_ are useful control tools that allow a robot to remain safe while under actuation from a potentially unsafe controller, or driver. Safety filters accomplish this by minimally affecting desired control commands, and thus have become a popular add-on to robot control architectures [1, 2, 3] since they address real-world robot dynamics and kinodynamic constraints in a run-time fashion.
_Control barrier functions_ (CBFs) [4] are a popular method for constructing safety filters due to their ability to integrate nonlinear dynamics while providing formal safety guarantees. A CBF-based safety filter requires defining a control-invariant set that ensures safety. However, such sets can be difficult to construct with input constraints in mind [5, 6, 7, 8, 9, 10, 11].
Consequently, the CBF framework has been extended to include actuation capability, such as torque limits, through the use of _backup_ control barrier functions (BCBFs) [12, 13, 14]. BCBFs rely on the formulation of a _backup controller_, which is designed with input limits in mind to guarantee safety. The backup controller typically involves a simple saving maneuver, such as coming to a stop, turning at a maximum rate, or hovering. By calculating the future backup trajectory of the system, one can analyze future safety of the robot and incorporate this information into optimization-based controllers like quadratic programs (QPs). Consequently, when constructing safety filters using the BCBF method, the system's conservativeness is a strong function of the control engineer's choice of backup policy.
To address this limitation, recent work examines the use of multiple backup strategies in the BCBF framework, since multiple strategies can help overcome the conservativeness of a single strategy. In [15], an algorithm is used to evaluate the BCBF method with multiple backup controllers, such that the one with the least control intervention can be chosen. However, with many backup controllers, this method could be computationally infeasible. In [16] and [17], different maneuvers are proposed to increase the reachability of a given backup controller, where a switching algorithm chooses a different maneuver if it is no longer possible to perform the current maneuver. While this method used multiple
Fig. 1: Visualization of the proposed safety-critical control framework with desired robot behavior. The red, green, and blue arrows represent different trajectories that rely on different backup controllers, with corresponding colors, to guarantee safety. Using driver intention tracking, the correct, green, backup controller is chosen among the red and blue controllers in order to guide the driver to their desired location. A supplementary video can be found here: [https://youtu.be/41Jh1GD_90k](https://youtu.be/41Jh1GD_90k)
backup maneuvers with more computationally efficient ways of solving BCBFs, it is better suited to fully autonomous systems. It does not explicitly provide, in a human-robot interaction context, a way to take into account the intentions of a human vehicle driver or human companion. Nor does it account for the fact that the human operator may have better situational awareness than the autonomous controller. Even though autonomous selection may be suboptimal at times, the driver does not have the authority to bypass the chosen backup control strategy. Additionally, it cannot know if the driver has suddenly changed his or her goals. Generally, the types of safe behavior utilized by the BCBF cannot be tuned.
There has been impressive previous work on including human preferences in safety filters. For instance, in [18], preference-based learning is used to adapt tuneable parameters of a CBF to user preferences for autonomous obstacle avoidance on a quadruped. However, input bounds are not formally considered, which could potentially result in infeasibility and consequently unsafe trajectories. While there has also been work on preference-based reinforcement learning (pbRL) [19], these methods must sample trajectories within the expected environment [20]. Since the drivers of teleoperated systems must provide control inputs at each timestep, trajectory sampling becomes difficult due to the human-in-the-loop nature of the problem.
In order to intelligently pick backup controllers, we propose a system that utilizes a long short-term memory (LSTM) and a deep neural network (DNN) component to learn a reward corresponding to each backup controller. LSTMs have been used in real-time driver intention and maneuver tracking, and have been shown to be successful at regression and classification tasks with trajectory input [21, 22, 23]. A simple switching law is derived by using the controller with the largest reward in the BCBF framework. This system learns to estimate driver intention by training on example trajectories, labeled with the correct choice of backup controller, as provided by the driver. In our work, we show that
1. The LSTM-DNN switching architecture effectively learns the correct backup controller.
2. Our approach increases the reachable sets of the driven robot while maintaining the safety guarantees of BCBFs.
Moreover, we present experimental implementations of our algorithms on a tracked robot. These experiments demonstrate that switching between new and previously formulated backup controllers based on estimated driver intention can work on actual hardware systems.
This paper is organized as follows. Section II presents preliminaries on CBFs, BCBFs, DNNs and LSTMs. Section III presents our contributions, including a description of system architecture. We present the implementation of our framework on hardware in Section IV and discuss experimental results in Section V. Section VI concludes the paper.
## II Preliminaries
We consider robots governed by a general nonlinear control affine system:
\[\dot{x}=f(x)+g(x)u, \tag{1}\]
where \(x\!\in\!\mathbb{R}^{n}\) is the state, \(u\!\in\!U\!\subset\!\mathbb{R}^{m}\) is the control input, and functions \(f:\mathbb{R}^{n}\to\mathbb{R}^{n}\), and \(g:\mathbb{R}^{n}\to\mathbb{R}^{n\times m}\) are locally Lipschitz continuous. A locally Lipschitz continuous controller \(\mathbf{k}:\mathbb{R}^{n}\to U\) yields a locally Lipschitz continuous closed-loop control system, \(f_{\mathrm{cl}}:X\to\mathbb{R}^{n}\):
\[\dot{x}=f(x)+g(x)\mathbf{k}(x)\triangleq f_{\mathrm{cl}}(x). \tag{2}\]
Given an initial condition \(x_{0}\!\triangleq\!x(t_{0})\!\in\!\mathbb{R}^{n}\), this system has a unique solution given by the flow map
\[\phi(t,x_{0})\triangleq x(t)=x_{0}+\int_{t_{0}}^{t}f_{\mathrm{cl}}(x(\tau))d \tau,\ t>t_{0}. \tag{3}\]
### _Control Barrier Functions_
To characterize safety, we consider a safe set \(\mathcal{C}\!\subset\!\mathbb{R}^{n}\) defined as the 0-superlevel set of a continuously differentiable function \(h:\mathbb{R}^{n}\to\mathbb{R}\):
\[\mathcal{C} \triangleq\left\{x\in X\subset\mathbb{R}^{n}:h(x)\geq 0\right\}, \tag{4}\] \[\partial\mathcal{C} \triangleq\left\{x\in X\subset\mathbb{R}^{n}:h(x)=0\right\},\]
where \(\partial\mathcal{C}\) is the boundary of set \(\mathcal{C}\). This set is forward invariant if, for every initial condition \(x(0)\in\mathcal{C}\), the solution of (2) satisfies \(x(t)\in\mathcal{C},\ \forall t\geq 0\). The closed-loop system (2) is called safe w.r.t. set \(\mathcal{C}\) if \(\mathcal{C}\) is forward invariant [4].
**Definition 1** (Control barrier function [4]).: Function \(h\) is a CBF for (1) on \(\mathcal{C}\) if \(\frac{\partial h}{\partial x}\neq 0\) for all \(x\in\partial\mathcal{C}\) and there exists an extended class-\(\mathcal{K}_{\infty}\) function*\(\alpha\in\mathcal{K}_{\infty,\epsilon}\) such that \(\forall x\in\mathcal{C}\):
Footnote *: A continuous function \(\alpha:\mathbb{R}\to\mathbb{R}\) belongs to the set of extended class-\(\mathcal{K}\) functions (\(\alpha\in\mathcal{K}_{\infty,\epsilon}\)) if it is strictly monotonically increasing, \(\alpha(0)=0\), \(\alpha(r)\to\infty\) as \(r\to\infty\), \(\alpha(r)\to-\infty\) as \(r\to-\infty\).
\[\sup_{u\in U}\left[\dot{h}(x,u)=\frac{\partial h(x)}{\partial x}f(x)+\frac{\partial h(x)}{\partial x}g(x)u\right]\geq-\alpha(h(x)). \tag{5}\]
**Theorem 1**.: _[_4_]_ _If \(h\) is a CBF for (1) on \(\mathcal{C}\), then any locally Lipschitz continuous controller \(\mathbf{k}:\mathbb{R}^{n}\to U\) satisfying_
\[\dot{h}\left(x,\mathbf{k}(x)\right)\geq-\alpha(h(x)),\ \ \forall x\in \mathcal{C} \tag{6}\]
_renders (2) safe with respect to \(\mathcal{C}\)._
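To make the resulting safety filter concrete, the following minimal sketch (ours, not from the paper) filters a nominal command through the constraint (6) with a quadratic program. The single-integrator dynamics, the disk-shaped unsafe region, the linear choice \(\alpha(r)=r\), and the use of `cvxpy` are all illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

# Illustrative single integrator in the plane: x_dot = u  (f(x) = 0, g(x) = I).
def f(x): return np.zeros(2)
def g(x): return np.eye(2)

# Safe set: stay outside a disk of radius R around the origin, h(x) = ||x|| - R.
R = 1.0
def h(x): return np.linalg.norm(x) - R
def dh_dx(x): return x / np.linalg.norm(x)

def cbf_qp_filter(x, u_des, alpha=1.0, u_max=2.0):
    """Minimally modify u_des so that h_dot(x, u) >= -alpha * h(x) holds."""
    u = cp.Variable(2)
    h_dot = dh_dx(x) @ f(x) + dh_dx(x) @ g(x) @ u
    constraints = [h_dot >= -alpha * h(x),
                   cp.abs(u) <= u_max]              # component-wise input bounds as in (7)
    prob = cp.Problem(cp.Minimize(cp.sum_squares(u - u_des)), constraints)
    prob.solve()
    return u.value

x = np.array([1.5, 0.0])            # current state, outside the unsafe disk
u_des = np.array([-1.0, 0.0])       # nominal command pointing toward the obstacle
print(cbf_qp_filter(x, u_des))      # filtered command keeps h from decreasing too fast
```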
### _Backup Control Barrier Functions_
BCBFs are motivated by the fact that verifying (5), which is required for the feasibility of (6), may be challenging for a particular choice of \(h\), especially with bounded inputs. Consider input bounds with component-wise hard constraints:
\[U\triangleq\left\{u\in\mathbb{R}^{m}:-u_{\max}\leq u\leq u_{\max}\right\}, \tag{7}\]
where \(u_{\max}\!\in\!\mathbb{R}_{>0}^{m}\). The backup set method [12, 13, 14] addresses this feasibility problem by designing implicit control invariant sets and safe controllers through a CBF framework.
We consider a backup set \(\mathcal{C}_{\mathrm{b}}\!\subset\!\mathbb{R}^{n}\) defined as the 0-superlevel set of a continuously differentiable function \(h_{\mathrm{b}}:\mathbb{R}^{n}\!\rightarrow\!\mathbb{R}\):
\[\mathcal{C}_{\mathrm{b}}\triangleq\left\{x\in\mathbb{R}^{n}:h_{\mathrm{b}}(x)\geq 0\right\}, \tag{8}\]
such that it is a subset of \(\mathcal{C}\), i.e., \(\mathcal{C}_{\mathrm{b}}\subseteq\mathcal{C}\), and \(\frac{\partial h_{\mathrm{b}}}{\partial x}\neq 0\) for all \(x\in\partial\mathcal{C}_{\mathrm{b}}\). Practically speaking, the backup set and backup controller are easily characterized safety procedures that can keep the vehicle in a strict subset of the maximally feasible safe set, which may be impossible to characterize in practice.
We assume that there is a locally Lipschitz continuous backup controller \(\mathbf{k_{b}}:\mathbb{R}^{n}\!\rightarrow\!U\), which satisfies (7) for all \(x\in\mathcal{C}\), to render the backup set forward invariant. This results in the locally Lipschitz continuous closed-loop
\[\dot{x}=f(x)+g(x)\mathbf{k_{b}}(x)\triangleq f_{\mathrm{b}}(x), \tag{9}\]
and its solution with the initial state \(x_{0}\in\mathbb{R}^{n}\) is
\[\phi_{\mathrm{b}}(t,x_{0})\triangleq x(t)=x_{0}+\int_{t_{0}}^{t}f_{\mathrm{b} }(x(\tau))d\tau,\quad t>t_{0}. \tag{10}\]
Designing a control invariant backup set \(\mathcal{C}_{\mathrm{b}}\) is generally easier than verifying if \(\mathcal{C}\) is control invariant. However, the methods used to develop \(\mathcal{C}_{\mathrm{b}}\) may result in a conservative set [5]. We can reduce conservatism by expanding the backup set. To achieve this, we use the backup trajectory \(\phi_{\mathrm{b}}(\tau,x)\) over a finite time period \(\tau\in[0,T]\) with some \(T\in\mathbb{R}_{>0}\). Note that \(\phi_{\mathrm{b}}(\tau,x)\) is the flow map of the system under the backup controller with the initial condition \(x\). Then we define a larger control invariant set, called \(\mathcal{C}_{E}\), such that \(\mathcal{C}_{\mathrm{b}}\subseteq\mathcal{C}_{E}\subseteq\mathcal{C}\):
\[\mathcal{C}_{E}\triangleq\Big\{x\in\mathcal{C}\ \Big|\ h(\phi_{\mathrm{b}}(\tau,x))\geq 0\ \ \forall\tau\in[0,T],\ \ h_{\mathrm{b}}(\phi_{\mathrm{b}}(T,x))\geq 0\Big\}. \tag{11}\]
That is, \(\mathcal{C}_{E}\) is the set of initial states from where the system can use a \(T\)-length feasible controlled trajectory (that satisfies the input constraints and respects the system dynamics) to safely reach \(\mathcal{C}_{\mathrm{b}}\). We note that the input limits are satisfied since they are incorporated into the set \(\mathcal{C}_{E}\) via \(\mathbf{k_{b}}\). In (11), the first constraint implies that the flow under the backup controller satisfies the safety constraints, and the second constraint enforces that the backup set is reached in time \(T\). To guarantee safety with respect to \(\mathcal{C}_{E}\), we enforce the following constraint for all \(x\in\mathcal{C}_{E}\):
\[\begin{split}\frac{\partial h(\phi_{\mathrm{b}}(\tau,x))}{\partial\phi_{\mathrm{b}}(\tau,x)}\frac{\partial\phi_{\mathrm{b}}(\tau,x)}{\partial x}\big(f(x)+g(x)u\big)&\geq-\alpha\big(h(\phi_{\mathrm{b}}(\tau,x))\big),\quad\forall\tau\in[0,T],\\ \frac{\partial h_{\mathrm{b}}(\phi_{\mathrm{b}}(T,x))}{\partial\phi_{\mathrm{b}}(T,x)}\frac{\partial\phi_{\mathrm{b}}(T,x)}{\partial x}\big(f(x)+g(x)u\big)&\geq-\alpha\big(h_{\mathrm{b}}(\phi_{\mathrm{b}}(T,x))\big).\end{split} \tag{12}\]
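As a rough numerical illustration of the implicit set (11), the sketch below forward-integrates the closed-loop backup dynamics (9)-(10) with an Euler scheme and checks both conditions of (11) on a time grid. The one-dimensional dynamics, the braking backup controller, and the step size are placeholder assumptions, not the paper's implementation.

```python
import numpy as np

def backup_flow(x0, f_b, T, dt=0.01):
    """Euler approximation of phi_b(tau, x0) for tau in [0, T]."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(int(T / dt)):
        xs.append(xs[-1] + dt * f_b(xs[-1]))
    return np.array(xs)

def in_C_E(x0, f_b, h, h_b, T, dt=0.01):
    """Check x0 in C_E: h >= 0 along the backup flow and h_b >= 0 at time T."""
    traj = backup_flow(x0, f_b, T, dt)
    return all(h(x) >= 0 for x in traj) and h_b(traj[-1]) >= 0

# Placeholder example: scalar state, backup controller brakes toward the origin.
f_b = lambda x: -1.0 * x                    # closed-loop backup dynamics (9)
h   = lambda x: 4.0 - abs(x[0])             # safe set: |x| <= 4
h_b = lambda x: 0.5 - abs(x[0])             # backup set: |x| <= 0.5

print(in_C_E(np.array([3.0]), f_b, h, h_b, T=5.0))   # True: flow stays safe and reaches C_b
print(in_C_E(np.array([3.0]), f_b, h, h_b, T=0.5))   # False: too little time to reach C_b
```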
where \(n\) and \(n-1\) refer to the current and previous time steps; \(\varrho_{\tilde{f}},\varrho_{\tilde{i}},\varrho_{\tilde{o}}\) are the activation functions for the forget, input and output gates, generally chosen as the sigmoid function; \(\varrho_{\tilde{g}}\) is the activation function for the memory cell state, which is \(\tanh\) in general; \(\mathbf{\tilde{x}}_{n}\) is the normalized input; \(\mathbf{\tilde{h}}_{n-1}\) is the hidden state; and \(\mathbf{\tilde{W}}\), \(\mathbf{\tilde{R}}\), \(\mathbf{\tilde{b}}\) are the input weight matrices, the recurrent weight matrices, and the bias vectors. The update equations for the hidden state \(\mathbf{\tilde{h}}_{n}\) and the cell state \(\mathbf{\tilde{c}}_{n}\) are given by
\[\mathbf{\tilde{h}}_{n}=\mathbf{\tilde{o}}_{n}\odot\varrho_{\tilde{g}}( \mathbf{\tilde{c}}_{n});\;\;\;\mathbf{\tilde{c}}_{n}=\mathbf{\tilde{f}}_{n} \odot\mathbf{\tilde{c}}_{n-1}+\mathbf{\tilde{i}}_{n}\odot\mathbf{\tilde{g}}_ {n},\]
where \(\odot\) is the Hadamard (element-wise) product. Then, with a fully connected layer, weight matrix \(\mathbf{\tilde{W}}_{y}\) and bias vector \(\mathbf{\tilde{b}}_{y}\), the output of the LSTM network is given by
\[\mathbf{\tilde{y}}=\mathbf{\tilde{W}}_{y}\mathbf{\tilde{h}}_{n}+\mathbf{ \tilde{b}}_{y}.\]
Notably, \(\mathbf{\tilde{W}}\) and \(\mathbf{\tilde{b}}\) do not vary with \(n\), as the feedback loop nature of LSTMs and RNNs allow the same weights and biases to be used for each timestep in a sequence of data. This allows LSTMs and RNNs to be applied on input sequences of varying lengths without changes in architecture.
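For readers less familiar with the gate equations above, a minimal NumPy sketch of a single LSTM time step is given below. The randomly initialized weights, the 12-feature input, the 100 hidden units and the 15-step sequence are illustrative choices that mirror the numbers reported later in the paper, not the authors' code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_n, h_prev, c_prev, W, R, b):
    """One LSTM update: gates i, f, o and candidate g, then cell/hidden update."""
    i = sigmoid(W["i"] @ x_n + R["i"] @ h_prev + b["i"])   # input gate
    f = sigmoid(W["f"] @ x_n + R["f"] @ h_prev + b["f"])   # forget gate
    o = sigmoid(W["o"] @ x_n + R["o"] @ h_prev + b["o"])   # output gate
    g = np.tanh(W["g"] @ x_n + R["g"] @ h_prev + b["g"])   # memory candidate
    c_n = f * c_prev + i * g                               # cell state update
    h_n = o * np.tanh(c_n)                                 # hidden state update
    return h_n, c_n

rng = np.random.default_rng(0)
n_in, n_hid = 12, 100                                      # 12 features, 100 hidden units
W = {k: 0.1 * rng.normal(size=(n_hid, n_in)) for k in "ifog"}
R = {k: 0.1 * rng.normal(size=(n_hid, n_hid)) for k in "ifog"}
b = {k: np.zeros(n_hid) for k in "ifog"}

h, c = np.zeros(n_hid), np.zeros(n_hid)
for x_n in rng.normal(size=(15, n_in)):                    # 15-step input sequence
    h, c = lstm_step(x_n, h, c, W, R, b)
y = rng.normal(size=(3, n_hid)) @ h                        # fully connected output layer
```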
## III Intention Estimation
To improve the reachability of multiple BCBFs through intention estimation, a deep LSTM-DNN network translates observational data from the driver, safety systems, and robot state into the choice of \(\mathbf{k}_{\mathbf{b}}\).
For organizational purposes, the sections that describe our framework use the following definitions:
* \(\mathcal{K}\) is a set of \(m_{k}\in\mathbb{N}\) backup controllers such that controller \(\mathbf{k}_{\mathbf{bi}}\in\mathcal{K}\) renders a backup set \(\mathcal{C}_{\text{bi}}\subseteq\mathcal{C}\) forward invariant
* \(\gamma_{t}\in\mathbb{R}^{k}\) is a vector of \(k\in\mathbb{N}\) features: robot state, environment, driver input, and safety data. We maintain an \((H+1)\)-length data history \(\vec{\gamma}(t,H)=(\gamma_{t},\gamma_{t-1},\ldots,\gamma_{t-H})\). Additional details on \(\vec{\gamma}(t,H)\) can be found below.
* \(R:\mathbb{R}^{k\times(H+1)}\rightarrow\mathbb{R}^{m_{k}}\) is a reward function that maps input history \((\gamma_{t},\gamma_{t-1},\ldots,\gamma_{t-H})\) to a list of \(m_{k}\) rewards with values ranging from 0 to 1. Each component of this list corresponds to the integer \(i\) of each controller \(\mathbf{k}_{\mathbf{bi}}\in\mathcal{K}\).
We use a unicycle model to capture the tracked robot's motion, given by
\[\underbrace{\begin{bmatrix}\dot{x}_{I}\\ \dot{y}_{I}\\ \dot{\theta}\end{bmatrix}}_{\dot{x}}=\underbrace{\begin{bmatrix}0\\ 0\\ 0\end{bmatrix}}_{f(x)}+\underbrace{\begin{bmatrix}\cos\theta&0\\ \sin\theta&0\\ 0&1\end{bmatrix}}_{g(x)}\underbrace{\begin{bmatrix}v\\ \omega\end{bmatrix}}_{u}, \tag{13}\]
where \(p=\begin{bmatrix}x_{I}&y_{I}\end{bmatrix}^{\top}\) is the vehicle's planar position w.r.t. the inertial frame \(I\), \(\theta\) is vehicle's yaw angle, \(-v_{\max}\leq v\leq v_{\max}\) is its linear velocity and \(-\omega_{\max}\leq\omega\leq\omega_{\max}\) is the angular velocity. While we describe our intention estimator in the context of the unicycle model, we believe that our system can generalize to more complex dynamical models. This is the subject of future experimentation as detailed in section VI.
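A small sketch of the unicycle dynamics (13) is given below; forward-integrating a constant driver command in this way is one simple means of predicting a short-horizon goal pose, as used for the intermediate goal described next. The Euler integration, horizon, and sample command are illustrative assumptions.

```python
import numpy as np

def unicycle_f(x, u):
    """State derivative of (13): x = [x_I, y_I, theta], u = [v, omega]."""
    _, _, theta = x
    v, omega = u
    return np.array([v * np.cos(theta), v * np.sin(theta), omega])

def rollout(x0, u, horizon=1.0, dt=0.05):
    """Forward-integrate a constant command over a short horizon."""
    x = np.asarray(x0, dtype=float)
    for _ in range(int(horizon / dt)):
        x = x + dt * unicycle_f(x, u)
    return x

x0 = np.array([0.0, 0.0, 0.0])
u_driver = np.array([0.8, 0.3])          # illustrative linear and angular velocity command
print(rollout(x0, u_driver))             # predicted pose after 1 s under this command
```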
We employ an intention estimation framework to learn the reward function, \(R\). Learning is useful in this context since constructing a reward function based on multi-modal input in high dimensions may be challenging. The feature vector at a single timestep, \(\gamma_{t}\), consists of the robot state and vertical position \([x^{I},z^{I}]\in\mathbb{R}^{4}\), robot state and vertical velocities \([\dot{x}^{I},\dot{z}^{I}]\in\mathbb{R}^{4}\), driver velocity commands \(u\in U\), and safety information from evaluating \(h:\mathbb{R}^{3}\rightarrow\mathbb{R}\) at \(x\) and at \(x_{g}\in\mathbb{R}^{3}\), which is an intermediate goal determined by forward integrating the driver's desired \(u\) over a short horizon. Note that \(\gamma_{t}\) can contain many more features than detailed here in other implementations, especially if they relate to the safety of the robot. However, the aforementioned features generalize to any robot using our implementation. We wish to learn \(R\) by mapping history \(\vec{\gamma}(t,H)\) to rewards corresponding to each backup controller \(\mathbf{k}_{\mathbf{bi}}\) at time \(t\). This process allows us to derive switching laws for the chosen backup controller as a function of time. For instance, we may choose the backup controller \(\mathbf{k}_{\mathbf{bi}}\) with the largest reward at time \(t\), where \(i=\arg\max(R(\vec{\gamma}(t,H)))\).
We use a deep LSTM network with a DNN decoder to construct \(R\) from history \(\vec{\gamma}(t,H)\). Deep LSTMs can learn complex temporal relationships by using multiple layers and cyclic connections to understand sequential data, and the DNN decoder maps the output of the LSTM to rewards for each backup controller through several hidden layers. Both the LSTM and DNN utilize dropout for regularization, and the DNN uses ReLU activations in the hidden layers. Furthermore, we use the sigmoidal activation function in the DNN final layer. Thus, the outputted rewards for each backup controller, \(\mathbf{k}_{\mathbf{bi}}\) at time \(t\), can be interpreted very loosely as a likelihood that \(\mathbf{k}_{\mathbf{bi}}\) is a correct backup controller choice at time \(t\). For example, if our framework is highly certain that \(\mathbf{k}_{\mathbf{b}\mathbf{0}}\) is the correct choice of backup controller at time \(t\), then the first component of the output of \(R(\vec{\gamma}(t,H))\) is expected to be close to 1.
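A compact PyTorch sketch of such an LSTM-DNN reward network, together with the argmax switching law, is shown below. The layer sizes and dropout values follow the numbers reported later in this section, but the exact architecture is our reading of the description rather than the authors' code, and the three-controller output dimension is an assumption.

```python
import torch
import torch.nn as nn

class RewardNet(nn.Module):
    """Maps a feature history (batch, H+1, k) to one reward per backup controller."""
    def __init__(self, n_features=12, n_hidden=100, n_controllers=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, n_hidden, num_layers=2,
                            batch_first=True, dropout=0.1)
        self.decoder = nn.Sequential(
            nn.Linear(n_hidden, 50), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(50, 25), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(25, n_controllers), nn.Sigmoid(),
        )

    def forward(self, gamma_hist):
        out, _ = self.lstm(gamma_hist)        # (batch, H+1, n_hidden)
        return self.decoder(out[:, -1, :])    # rewards in [0, 1], one per controller

net = RewardNet()
rewards = net(torch.randn(1, 15, 12))         # 15-step history of 12 features
i = int(torch.argmax(rewards, dim=1))         # switching law: pick controller k_bi
```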
Fig. 2: Cross entropy and training loss during a training episode. Accuracy is calculated by comparing the maximum reward output from the LSTM-DNN architecture and determining if it corresponds to the correct choice of backup controller for the respective point in the training set. We also use a validation (test) dataset to observe out-of-sample performance of the network during training. The network achieves 97% accuracy by the 30th epoch.
Our strategy and architecture enables easy reward engineering for the training dataset. We construct a multi-class dataset by gathering data of the robot driving in suitable environments under the correct backup controllers. During data collection the robot should be operating with a BCBF safety filter as we plan to evaluate the network in safety-critical contexts. During training, the switches between backup controllers are labeled in the dataset by the driver using configurable buttons on the robot ground station. Labeling is completed by assigning \(1\) to the driver's choice of backup controller and \(0\) to all others for that instance. Since we utilize the sigmoidal activation function in this multi-class classification learning task, we use the softmax loss function to train the network. Indeed, this training procedure makes an implicit assumption that there exists a single, correct choice of backup controller at any given time. While this may not be true in certain situations, we guarantee safe switching as discussed in Section IV-B. Consequently, any choice of correct backup controller, or switch between them, does not void safety guarantees. We discuss discontinuities between switching controllers in section V, however in short, switching between backup controllers is, in practice, smooth.
To improve training, we utilize the Adam optimizer [25] with a stepped learning rate scheduler that reduces the learning rate every constant step of training epochs. In order to compensate for the time it takes to solve the BCBF-QP, we employ a forecast of roughly double the time it takes our system to solve the BCBF-QP. In other words, before training we shift backup controller labels backwards in time so that the network learns to pre-emptively select the correct backup controller, allowing time for the BCBF-QP to have computed safe control outputs for the new controller when they are needed. We show results from a training episode done on hardware-collected data in Fig. 2, and present our model parameters here. We used a 2 layer LSTM with 100 hidden units with the 12 total input features detailed earlier. Our DNN is composed of two hidden layers of sizes 50 and 25 neurons, respectively. We use a dropout value of 0.2 for the hidden layers of the DNN and a dropout of 0.1 for the LSTM. As indicated in Fig. 2, we achieved \(97\%\) accuracy over a dataset of 19000 datapoints collected on hardware. The sequence length for the training samples of the LSTM-DNN model was chosen to be 15 timesteps, which corresponds to roughly 0.75 seconds of data. We found that this time-range produced the most accurate results in the aforementioned offline training.
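The pre-emptive label shift mentioned above amounts to pairing each input window with the backup-controller label from a fixed number of steps in the future. A toy sketch follows; the shift of two timesteps, meant to represent roughly double an assumed QP solve time, is illustrative.

```python
import numpy as np

def shift_labels(labels, shift):
    """Shift controller labels backwards in time so the network learns to pick
    the next backup controller `shift` steps before it is needed."""
    return np.roll(labels, -shift)[:-shift]    # matching inputs would be truncated the same way

labels = np.array([0, 0, 0, 1, 1, 1, 2, 2])    # driver-labeled controller choices over time
print(shift_labels(labels, shift=2))           # [0 1 1 1 2 2]: each switch appears 2 steps earlier
```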
### _Human Interface_
Since we extract an intention estimation signal from the human driver, providing necessary feedback is essential for improving human-robot collaboration. Past work has suggested that systems which allow robots to estimate human intention, while their collaborating humans estimate robot intention, can improve overall performance [26]. Thus, we develop an interface based on the ROS _rviz_ visualization tool as well as an Xbox controller featuring haptic motors. The _rviz_ system displays a visualization of the robot pose, along with dials that indicate estimated system safety, as quantified by the magnitude of the saturated safety function \(\tanh(h(x))\). Furthermore, vibratory haptic feedback is provided to the user when the robot nears the boundary of the safe set as indicated by \(h\). This feedback warns that the vehicle is in a potentially dangerous condition, so we tune the thresholds and strength of the haptic feedback as preferred by the driver.
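The mapping from the saturated safety value to the haptic cue can be sketched as follows; the warning threshold and scaling are placeholder choices, not the tuned values used on the robot.

```python
import math

def haptic_strength(h_value, warn_threshold=0.3, max_strength=1.0):
    """Map the saturated safety value tanh(h(x)) to a vibration strength in [0, 1]."""
    s = math.tanh(h_value)                  # safety dial value shown to the driver
    if s >= warn_threshold:
        return 0.0                          # comfortably safe: no vibration
    return max_strength * (warn_threshold - s) / (warn_threshold + 1.0)

print(haptic_strength(2.0))    # far from the boundary -> 0.0
print(haptic_strength(0.05))   # near the boundary -> noticeable vibration
```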
## IV Implementation
### _Hardware_
Our algorithms are deployed on a tracked GVR-Bot from US Army DEVCOM Ground Vehicle Systems Center. The GVR-Bot is a modified iRobot PackBot 510, and its rugged design and quick actuation make it ideal for research on safety in the presence of unknown human driver intentions. Our Python and C++ algorithms run on a custom compute payload that is based on an NVIDIA Jetson AGX Orin (2048-core NVIDIA Ampere architecture, 64 Tensor Cores, 275 TOPS, 12 Core Arm Cortex A78AE @ 2.2GHz, 32GB RAM). Vision is provided by three synchronized Intel Realsense D457 Depth Cameras, which are strategically positioned to provide a wide field of view for the control system and operator. They operate at a 30 Hz frame rate. A Vectornav VN-100 provides inertial measurements. Communication between our algorithm, the control computer, and the internal GVR-Bot drive computer (Intel Atom) is facilitated via ROS1. Estimation of the vehicle state is provided by an OpenVINS visual-inertial estimator [27]. Drive commands (linear and angular body velocities) are communicated from the AGX Orin to the Intel Atom, where they are converted into individual track speeds, which are regulated via high-rate controllers on the GVR-Bot; see Fig. 1 and Fig. 3.
Fig. 3: Photo of our test-vehicle (a US Army DEVCOM GVR-Bot robot) operating in the NASA Jet Propulsion Laboratory Mars Yard. (yellow inset) Intel Realsense D457 Depth Cameras are coupled with a VN-100 IMU; (light-blue inset) custom compute payload.
### _Backup Controllers_
For safety, the tracked robot must avoid moving obstacles, considered as cylinders with radius \(R_{o}\in\mathbb{R}_{>0}\), position \(p_{o}\in\mathbb{R}^{2}\) and velocity \(v_{o}\in\mathbb{R}^{2}\). This leads to the safety constraint \(h(x)\geq 0\), with the CBF candidate given by
\[h(x)=\|p-p_{o}\|-R_{o}.\]
We enforce safety in the presence of input bounds by implementing multiple BCBFs.
We leverage three backup controllers that yield qualitatively different behavior. Controller \(\mathbf{k_{b0}}\) turns the robot away from the obstacle and drives away with maximum speed. Controller \(\mathbf{k_{b1}}\) drives straight away as the obstacle is approached, without turning. Controller \(\mathbf{k_{b2}}\) turns towards the obstacle and drives in reverse. It behaves similarly to \(\mathbf{k_{b0}}\), however the robot has turned around. These are expressed by:
\[\begin{split}\mathbf{k_{b0}}(x)&=\begin{bmatrix}v_{\max}\\ \omega_{\max}\tanh(n^{T}r/\varepsilon)\end{bmatrix},\qquad h_{0}(x)=n^{T}(qv_{\max}-v_{o}),\\ \mathbf{k_{b1}}(x)&=\begin{bmatrix}v_{\max}\tanh(n^{T}q/\varepsilon)\\ 0\end{bmatrix},\qquad h_{1}(x)=\|p-p_{o}\|-R_{o},\\ \mathbf{k_{b2}}(x)&=-\mathbf{k_{b0}}(x),\end{split}\]
with the backup set function of \(\mathbf{k_{b2}}\) defined analogously to \(h_{0}(x)\) for the reversed heading.
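A hedged sketch of the three behaviors is given below. The geometric quantities \(n\) (unit vector from the obstacle to the robot) and \(q\) (heading direction), the smoothing constant, and the saturation limits are our assumptions drawn from the qualitative description above, not the authors' exact expressions.

```python
import numpy as np

v_max, w_max, eps = 1.0, 1.5, 0.1

def geometry(x, p_o):
    """n: unit vector from obstacle to robot; q: robot heading direction (assumptions)."""
    p, theta = x[:2], x[2]
    n = (p - p_o) / np.linalg.norm(p - p_o)
    q = np.array([np.cos(theta), np.sin(theta)])
    return n, q

def k_b0(x, p_o):
    """Turn away from the obstacle and drive off at maximum speed."""
    n, q = geometry(x, p_o)
    steer = q[0] * n[1] - q[1] * n[0]          # signed turn direction away from the obstacle
    return np.array([v_max, w_max * np.tanh(steer / eps)])

def k_b1(x, p_o):
    """Drive straight away from the obstacle without turning."""
    n, q = geometry(x, p_o)
    return np.array([v_max * np.tanh(np.dot(n, q) / eps), 0.0])

def k_b2(x, p_o):
    """Mirror of k_b0: turn toward the obstacle and reverse away."""
    return -k_b0(x, p_o)

x = np.array([1.0, 0.5, 0.0])                  # robot pose [x_I, y_I, theta]
print(k_b0(x, p_o=np.zeros(2)), k_b1(x, p_o=np.zeros(2)))
```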
our system maintains the formal safety guarantees from the BCBF method as demonstrated in Fig. 5, where \(h\) was observed to be positive under input limits. To demonstrate the reproducibility of these results, further tests were conducted and documented in Fig. 6 and the accompanying video shared in the caption of Fig 1. Here, we tested our system in many scenarios and compared produced trajectories to trajectories corresponding to a single backup controller. The use of our switching framework outperforms the use of a single backup controller in certain scenarios, and our system never inhibits the driver from achieving their goal, while some backup controllers do. For instance, sub-figure (b) of Fig 6 presents a case where \(\mathbf{k_{b0}}\) inhibited the driver while our framework did not. The learned reward function correctly captures the driver's intent, encompassing risk-neutral preferences such as emergency stopping and overtaking with minimal slowdown, as well as risk-averse behaviors like maintaining a sizeable safety distance.
## VI Conclusions and Future Work
In this work, we implemented backup control barrier functions to ensure safety on a tracked robot platform, and used driver intention estimation to optimally choose between multiple backup policies. From our results, we conclude that this system improves safety and performance by extending the multiple-BCBF framework with a learning method that improves the interaction between the robot and its driver.
Several next steps exist for this preliminary framework, such as developing an algorithm to formally compute safe reachability under multiple backup controllers. Furthermore, while we showed that our framework is useful on a teleoperated ground vehicle, we plan to implement it on other robots, such as quadrupeds, bipeds, and drones. Finally, various techniques can be employed to enhance the resilience of the standard CBF constraint (6) against disturbances or model uncertainty, such as GP-based uncertainty in the CBF constraint [28, 29].
**Acknowledgment.** The authors would like to thank the SFP program at Caltech, SFP sponsors Kiyo and Eiko Tomiyasu, and DARPA for funding this project. Furthermore, the authors would like to thank Ryan K. Cosner and Maegan Tucker for discussions about safety with backup CBFs and learning, and Matthew Anderson for his insights and help in experimental setup and testing.
Fig. 6: We compare our system on multiple environments with the use of a single backup controller, and also show the reproducibility of our test data. In figure (a), we attempt to drive the robot forwards to a point on the far right of the obstacle. In figure (b), we drive the robot and face the obstacle at various points. In figure (c), we attempt to reach the same goal as figure (a), but by driving the robot backwards. Notice our system provides more gentle corrections to driver controls and enables the driver to reach the goal position in every case. |
2307.10119 | Time-Dependent Shipment Options and Shipment Fees for E-Commerce
Fulfillment Centers | We build a parsimonious model of a stochastic e-commerce fulfillment center
offering time-dependent shipment options and corresponding fees to
utility-maximizing customers that arrive according to a Poisson process. For
this time-dependent policy, we present an exact analysis based on the
underlying periodic Markov chain as well as monotonicity results for the
optimal time-dependent shipment fee structure. We then propose a simple
time-dependent shipment policy with an attractive two-level fee structure, and
study it alongside two benchmarks that are prevalent in practice. Both
benchmarks rely on static, time-independent fees for the offered shipment
options, but differ in whether they stop offering same-day shipment after a
certain cutoff time. Our numerical analysis indicates that including such a
cutoff point under static shipment fees increases profits significantly, and
that moving to time-dependent shipment fees increases profits by another
substantial amount. Our time-dependent policy provides e-commerce companies
with an easily implementable mechanism to better manage online demands for
same-day or next-day shipment with the available capacity in the fulfillment
centers responsible for collecting and shipping those orders. | Uta Mohring, Melvin Drent, Ivo Adan, Willem van Jaarsveld | 2023-07-17T07:50:22Z | http://arxiv.org/abs/2307.10119v1 | # Time-Dependent Shipment Options and Shipment Fees for E-Commerce Fulfillment Centers
###### Abstract
Problem Definition: Fulfilling online orders faster becomes more and more important. To that end, e-commerce companies increasingly offer same-day shipment services. However, companies may overpromise their shipment services when the offered shipment options and corresponding fees are not aligned with the ability to fulfill these orders, leading to delayed orders and customer dissatisfaction. Indeed, fulfillment centers responsible for collecting and shipping online orders may not be able to hand over the orders to the parcel delivery company upon the agreed deadlines. Thus, the offered shipment options and corresponding fees should be adapted based on the time remaining for the fulfillment center to collect and ship these orders.
Methodology/Results: We build a parsimonious model of a stochastic e-commerce fulfillment center offering time-dependent shipment options and corresponding fees to utility-maximizing customers that arrive according to a Poisson process. For this time-dependent policy, we present an exact analysis based on the underlying periodic Markov chain as well as monotonicity results for the optimal time-dependent shipment fee structure. We then propose a simple time-dependent shipment policy with an attractive two-level fee structure, and study it alongside two benchmarks that are prevalent in practice. Both benchmarks rely on static, time-independent fees for the offered shipment options, but differ in whether they stop offering same-day shipment after a certain cutoff time. Our numerical analysis indicates that including such a cutoff point under static shipment fees increases profits significantly, and that moving to time-dependent shipment fees increases profits by another substantial amount.
Managerial Implications: E-commerce companies should better manage online demands for same-day or next-day shipment with the available capacity in the fulfillment centers responsible for collecting and shipping those orders. Our time-dependent policy provides these companies with an easily implementable mechanism to do so.
Pricing, shipment options, e-commerce, Markov chains, demand management
## 1 Introduction
Given the fast growth of e-commerce, fulfilling online orders faster and cheaper becomes more relevant than ever. In that respect, e-commerce companies increasingly promise same-day shipment. Same-day shipment is a powerful marketing tool to attract and retain customers. The ability to offer quick and convenient delivery options may differentiate a company from its competitors and increase customer loyalty (Cui et al., 2020). Nowadays, customers place orders in the afternoon
or evening, and expect these products to be shipped immediately so that they arrive the next day (Vanheusden et al. 2022). In dense urban areas, customers even expect those products to be delivered on the same day, especially grocery and retail products (Klapp et al. 2018, Voccia et al. 2019, Ulmer 2020). From an operational perspective, however, fulfilling the promise of same-day shipment is challenging, especially as the end of the day is nearing and the window for same-day shipment becomes tighter. Hence, same-day shipment requires a well-coordinated effort between marketing and operations to ensure a smooth and efficient order fulfillment process.
Companies that sell their products online, both business-to-business and business-to-consumer, typically fulfill these orders from so-called fulfillment centers. In these centers, orders are picked from the shelves and subsequently prepared for shipment. At fixed predefined deadlines, typically at the end of the day, these orders are consolidated into large batches and handed over to the external parcel delivery company responsible for the actual delivery of the products (e.g. FedEx or UPS). Meeting these strict deadlines is crucial for the fulfillment process as missing a delivery vehicle by only a few minutes may result in a delay to the customer of at least one day (Doerr and Gue 2013, Ceven and Gue 2017). In their pursuit of offering increasingly faster service, e-commerce companies might run the risk that they over-promise their shipment services. Indeed, the fulfillment center may not be able to keep up with the inflow of orders that require same-day shipment and need to be ready before the set deadline. This risk is particularly high when companies charge only flat fees for the different shipment options. Under such flat fees, customers cannot be incentivized to choose next-day shipment over same-day shipment when the operations in the fulfillment center would require so.
As a natural mitigation strategy to the risk described above, e-commerce companies may want to introduce a time- and fulfillment center load-based dynamic shipment policy. Under such a dynamic shipment policy, the offered shipment options and the corresponding shipment fees can be used as levers to better balance the inflow of orders that require same-day and next-day shipment with the available capacity and remaining time to do so. While such a dynamic shipment policy has merit from an operational perspective, it may be perceived as unfair by customers (Klein et al. 2019) and is arguably difficult to implement from a marketing point of view. As customers are rather sensitive to small price changes (Acimovic and Graves 2017), static shipment options and corresponding fees that depend only on the remaining time to the set deadline seem to be more appropriate and promising as well (Campbell and Savelsbergh 2006, Agatz et al. 2013). Such well-targeted time-dependent shipment options and fees have the potential to influence customers' shipment preferences in order to balance the overall workload and increase the capacity utilization in fulfillment centers, very much akin to a dynamic shipment policy but without eliciting perceptions of unfairness among customers.
In this paper, we analyze such time-dependent shipment options and shipment fees for e-commerce fulfillment centers. We do so in the context of products for which the shipment options and shipment fees mostly influence the consumer's decision to choose same-day or next-day shipment, and not so much the actual decision to buy the specific product. This makes sense when companies expect little competition or when customers are loyal to the specific products and/or companies. For these settings, we study the potential of time-dependent shipment options and fees in managing the distribution of customers demanding same-day or next-day shipment. When e-commerce companies operate such time-dependent shipment policies, they need to decide for each moment in time which shipment options they offer as well as the fees that correspond with these shipment options. We study precisely these decisions. Because of its practical appeal, we focus on so-called cutoff-based shipment options under which we offer same-day and next-day shipment to all customers placing their orders until a certain cutoff point; customers ordering after the cutoff point will then only be offered next-day shipment (Mohring et al. 2023). Given the cutoff point decision, shipment fees need to be set for each moment in time. Thus, the following interrelated questions arise:
1. Cutoff-based shipment options: Until what moment in time should same-day shipment still be offered to customers?
2. Shipment fees: How much should the shipment fees be?
3. Differentiated fees: When and how should the shipment fees be adapted over time?
We build a parsimonious model that provides answers to the questions posed above. We consider a multi-period model, where each period ends with a deadline upon which orders that are due for shipment in that period need to be handed over to the parcel delivery company. Throughout each such period, online customers arrive according to a Poisson process. Customers that complete their online transaction before the cutoff point are offered the choice between same-day or next-day shipment together with their corresponding fees (or, equivalently, express shipment or regular shipment). We assume that customers make this choice by trading off the utility that they derive from both shipment options. Customers that arrive after the cutoff point cannot choose same-day shipment, and hence their products are handed over to the parcel company upon the next deadline at the end of the following period. The processing capacity of the fulfillment center within each period is randomly distributed. If the available capacity is insufficient to process all orders due for shipment in a given period, then these orders are said to be late and carried over for shipment in the subsequent period. We are interested in finding the optimal time-dependent shipment options and fees such that the long-run average profit consisting of revenue minus costs for late orders is maximized. By developing a discrete-time Markov chain model for the steady-state analysis of stochastic e-fulfillment centers described above, we are able to provide closed-form expressions for
the relevant performance measures as well as structural properties of the optimal time-dependent shipment policy.
The main contribution of this paper is threefold:
1. We are the first to study time-dependent shipment options and fees for stochastic fulfillment centers. We present an exact analysis of the steady-state performance measures for this system based on the underlying discrete-time periodic Markov chain model.
2. We show that the optimal time-dependent shipment policy exhibits intuitive structure: The fee for express shipment increases as the time until the deadline of the current period decreases.
3. Motivated by its practical appeal, we numerically study a simple instance of our time-dependent shipment policy that switches between shipment fees only once. We compare this simple time-dependent approach with two benchmark policies that are prevalent in practice. Both benchmark approaches rely on static, time-independent fees for both shipment options, but differ in whether they include a cutoff point for the same-day shipment option or not and how they set the fees. Our numerical analysis indicates that (i) including such a cutoff point under static shipment fees increases profits substantially, and that additionally (ii) moving to time-dependent shipment fees, i.e. our approach, increases profits by another considerable margin. Thus, e-commerce companies may derive substantial benefits from introducing static time-dependent shipment fees.
The remainder of this paper is structured as follows: Section 2 gives an overview of related literature. Section 3 provides the formal problem description and introduces the Markov chain model. Furthermore, we establish analytical properties and a simple instance of the time-dependent shipment policy. Section 4 presents the numerical study, and concluding remarks are given in Section 5.
## 2 Literature review
Comprehensive literature reviews on e-commerce fulfillment are provided by Agatz et al. (2008) and Boysen et al. (2019). Agatz et al. (2008) address operational issues along the physical distribution processes (purchasing, warehousing, delivery, and sales) in e-commerce fulfillment, and Boysen et al. (2019) study warehousing systems for e-commerce fulfillment centers. Our research is related to two main research streams in e-commerce literature: Demand management in last-mile delivery and shipment strategies in warehousing.
### Demand management in last-mile delivery
Demand management in last-mile delivery can be seen as planning the assortment of delivery service options to maximize overall profit, and its main dimensions are service offering and service pricing. While service offering refers to the question which service options to offer, service pricing investigates how to set the prices of the offered service options to steer customers' preferences and
choice behavior (Wassmuth et al., 2023). Most papers in this research stream provide _dynamic offering_ or _dynamic pricing_ approaches, which determine the offered service options and the associated prices depending on the current system state and expectations about future events whenever a new customer arrives. A recent literature review is provided by Wassmuth et al. (2023).
In contrast, _differentiated offering_ and _differentiated pricing_ approaches, which set service options and associated prices for different customer groups in advance based on forecast data (Agatz et al., 2013) have received much less attention. The few papers that consider differentiated service offering focus on the spatial dimension and study the problem of selecting the set of time slots offered in each delivery zip code subject to a service level constraint (Agatz et al., 2011; Cleophas and Ehmke, 2014; Hernandez et al., 2017). Stroh et al. (2022) are the first to address temporal differentiated service offering. The authors investigate the timing and composition of vehicle dispatches in same-day delivery systems. Based on the introduced model and the derived optimal dispatching policies, they answer various system-design questions, e.g., how large the vehicle fleet and the service area should be. Klein et al. (2019) study the differentiated time slot pricing problem of an e-grocer which offers a predefined set of time slots every day and determines the price for each time slot in each delivery zip code prior to order arrival. The authors present a mixed integer programming formulation of the problem with routing constraints to explicitly incorporate the anticipated delivery costs.
In contrast to this paper, current research focuses on dynamic service offering and pricing. There are only a few papers on temporal differentiated service offering and pricing, which all are limited to last-mile delivery applications. We appear to be the first to study the effects of demand management decisions on order processing in e-commerce fulfillment centers.
### Shipment strategies in warehousing
Literature on shipment strategies in warehousing covers shipment assignment in networks with multiple e-fulfillment centers (Xu et al., 2009; Acimovic and Graves, 2015; Jasin and Sinha, 2015), joint product pricing and shipment assignment (Lei et al., 2018), joint product framing and shipment assignment (Lei et al., 2022), shipment consolidation (Wei et al., 2021), and order dispatching for shipment (Klapp et al., 2018). Most related to our research are the papers on shipment pricing and time-dependent shipment options.
#### 2.2.1 **Shipment pricing**
The variety of shipment pricing policies of e-tailers can be divided into three categories (Becerril-Arreola et al., 2013): (1) unconditional free shipment (the retailer absorbs the shipping costs of all orders), (2) contingent free shipment (the retailer only absorbs the shipping costs of those orders exceeding a predefined threshold), and (3) shipment fee (the customers absorb the shipping cost). Contingent free shipment is found to be the most effective policy in terms of
increasing e-tailers' revenues (Lewis 2006, Lewis et al. 2006). Leng and Becerril-Arreola (2010) and Becerril-Arreola et al. (2013) investigate whether, when and how contingent free shipment boosts e-tailers' revenues. The value of offering a membership-based free shipment policy (customers pay an upfront fee and enjoy free-shipment throughout the subscription period) as a supplement to the aforementioned policies is discussed by Fang et al. (2021) and Sun et al. (2022).
Furthermore, Gumus et al. (2013) study price partitioning decisions of an e-tailer regarding shipment fees by analyzing the following policies: One presents customers with a price that is partitioned into the product price and the shipment fee, the other offers free shipment by a non-partitioned price where the product price already includes the shipping cost. Jiang et al. (2013) derive nonlinear mixed-integer programming models to simultaneously optimize product prices and shipment-fee schedules for single- and multi-product orders.
In contrast to our paper, current research focuses on the effects of shipment pricing policies on customers' purchasing behavior and e-tailers' revenue. There are no policies that offer multiple shipment options in terms of delivery time, such as same-day and next-day shipment, and charge differentiated fees for these shipment options.
#### Time-dependent shipment options
The idea of time-dependent shipment options is to offer differentiated shipment conditions throughout the day, such that the offered shipment option depends on the moment in time when the customer places an order.
Doerr and Gue (2013) consider the order fulfillment process in a distribution center, where orders arrive and are picked and prepared for shipment continuously throughout the day, but shipments leave only at specific times at the end of the day. The authors propose a novel performance metric, called _Next Scheduled Deadline_ (NSD), to better reflect this deadline-orientation of the order fulfillment process in the system performance of the distribution center. NSD measures the fraction of orders targeted for a particular deadline (truck departure), that are ready by that deadline. An order is targeted for a particular deadline if it arrives before the cutoff point associated with this deadline. NSD is a meaningful performance metric for the service quality perceived by customers, implicitly installs time-dependent shipment options offered to customers, and motivates workers to accelerate the operating speed when it matters most, so just before the deadline. Focusing on the latter dimension of NSD, Doerr and Gue (2013) investigate how to steer the cutoff point and the percentage of NSD in order to improve worker motivation.
The dimension of NSD as a performance metric for service quality and the related time-dependent shipment options are discussed by Ceven and Gue (2017), MacCarthy et al. (2019) and Mohring et al. (2023): Ceven and Gue (2017) study wave picking policies in distribution centers operating against daily deadlines and derive the optimal number and timing of wave releases in terms of
NSD. MacCarthy et al. (2019) investigate the fulfillment process of buy-online-pickup-in-store retail services, where online orders are fulfilled in conventional retail stores whilst also serving walk-in customers. The online orders are promised to be ready for pickup in store after a specific time later the same day if they have been placed until a predefined cutoff point. The authors derive best performance frontiers for single-wave and multi-wave order picking and determine the optimal number and timing of picking waves and the minimum picking rate needed to guarantee a target value of NSD. Mohring et al. (2023) investigate order fulfillment in warehouses that operate against predefined deadlines and offer differentiated shipment options to customers depending on whether they arrive until or after a certain cutoff point. The authors show the benefit of offering cutoff-based shipment options and study how to set the cutoff point appropriately to meet customer expectations in terms of order response time and service level while taking different types of service level measurement and the characteristics of the order fulfillment process into account.
Current research is limited to order fulfillment settings where only a single shipment option is available to the customer at a time. There is a lack of research regarding time-dependent shipment menus from which the customer selects his/her preferred option as well as the related question of charging differentiated fees in order to influence customers' shipment preferences towards the shipment option preferred from the operational perspective. Our paper addresses this research gap.
## 3 Model and analysis
In Section 3.1, we introduce our e-commerce fulfillment center. In Section 3.2, this system is formally modeled as a discrete-time Markov chain. We establish analytical properties for the time-dependent shipment policy in Section 3.3 and propose a simple instance of the time-dependent shipment policy in Section 3.4.
### Problem description
We investigate a stylized stochastic e-commerce fulfillment center with time-dependent shipment options and shipment prices. In the following, we provide a formal description of the order fulfillment process, the time-dependent shipment policy, and the customer choice model.
**Order fulfillment** We consider the order fulfillment process of a company operating its own warehouse but outsourcing the deliveries to a dedicated parcel delivery company. Order fulfillment starts with receiving orders from the customers. Customer orders arrive continuously throughout the day and are released for order fulfillment after some preparatory administrative steps. By this, the continuous inflow of customer orders transforms into a discrete-time order income for the actual order fulfillment process. Order fulfillment in warehouses incorporates picking, consolidating, and packing products and preparing parcels for shipment (De Koster et al. 2007). These
steps are organized in small batches, and discrete-time modeling of the order fulfillment process is appropriate.
The order fulfillment process ends with handing over the parcels to the parcel delivery company, which consolidates the parcels to large transportation batches for delivery depending on their destinations. Parcel delivery companies have fixed daily delivery schedules with predefined vehicle departure times. Hence, the order fulfillment process in the warehouse operates against fixed predefined deadlines. Meeting these deadlines is crucial as missing a vehicle departure by only a few minutes may result in a delay to the customer of at least one day (Doerr and Gue 2013, Ceven and Gue 2017). These recurring deadlines subdivide the discrete-time axis \(t\in\mathbb{N}_{0}\) into buckets, e.g. one day, which we call operating cycles. Operating cycles consist of \(T\) time periods each and are indexed by \(k\in\mathbb{N}_{0}\). Operating cycle \(k\) corresponds to the time periods \(\{kT,kT+1,\ldots,kT+T-1\}\). It starts at deadline \(kT\) and ends immediately before reaching the subsequent deadline \((k+1)T\) (cf. Figure 1). For each time \(t\in\mathbb{N}_{0}\), the corresponding operating cycle is
\[k=\left\lfloor\frac{t}{T}\right\rfloor, \tag{1}\]
and the age \(\tau\) within operating period \(k\) is obtained as:
\[\tau=t\bmod T. \tag{2}\]
The random customer demand \(D_{t}\) arriving at time \(t\in\mathbb{N}_{0}\) gives the number of orders that are released for order fulfillment at time \(t\). The random variables \(D_{t}\), \(t\in\mathbb{N}_{0}\) are i.i.d., specified by the Poisson-distributed customer demand per time interval \(D\) with expected customer demand volume \(\lambda\):
\[D_{t}\sim D\sim\text{Poi}(\lambda). \tag{3}\]
The random processing capacity \(B_{t}\) provided at time \(t\in\mathbb{N}_{0}\) determines the system's capability of fulfilling these orders. It gives the number of orders that can be completely processed at time \(t\).
Figure 1: Representation of the time line and corresponding notation for the e-commerce fulfillment center.
The random variables \(B_{t}\), \(t\in\mathbb{N}_{0}\) are i.i.d., specified by the discrete generally-distributed processing capacity per time interval \(B\):
\[B_{t}\sim B. \tag{4}\]
The processing capacity \(B_{t}\) and the customer demand \(D_{t}\) are independent of each other, and the system utilization is below 1; the utilization \(\rho\) is calculated as the ratio of the expected customer demand \(\mathbb{E}[D]\) and the expected processing capacity \(\mathbb{E}[B]\):
\[\rho=\frac{\mathbb{E}[D]}{\mathbb{E}[B]}. \tag{5}\]
For ease of reference, Table 1 lists the notation introduced above, as well as the notation introduced in the remainder of this section.
\begin{table}
\begin{tabular}{l l l} \hline \hline & Name & Remark \\ \hline
**Time axis** & & \\ \(t\) & Time & \(t\in\mathbb{N}_{0}\) \\ \(t_{\mathrm{inc}}\) & Duration of each time interval (discretization interval) & \(t_{\mathrm{inc}}:=1\) \\ \(T\) & Duration of each operating cycle & \(T\in\mathbb{N}\) \\ \(k\) & Operating cycle & \(k=\lfloor t/T\rfloor\in\mathbb{N}_{0}\) \\ \(\tau\) & Age of operating cycle & \(\tau=t\bmod T\) \\ \hline
**Order fulfillment** & & \\ \(D\) & Customer demand per time interval & Generic random variable w. support \(\mathbb{N}_{0}\) \\ \(D_{t}\) & Customer demand at time \(t\) & \(D_{t}\sim D\) \\ \(B\) & Processing capacity per time interval & Generic random variable w. support \(\mathbb{N}_{0}\) \\ \(B_{t}\) & Processing capacity at time \(t\) & \(B_{t}\sim B\) \\ \(\rho\) & Utilization & \(\rho=\mathbb{E}[D]/\mathbb{E}[B]\) \\ \(E_{\tau}\) & Order income for express shipment at age \(\tau\) & Generic random variable w. support \(\mathbb{N}_{0}\) \\ \(E_{t}\) & Order income for express shipment at time \(t\) & \(E_{t}\sim E_{\tau}\) with \(\tau=t\bmod T\) \\ \(R_{\tau}\) & Order income for regular shipment at age \(\tau\) & Generic random variable w. support \(\mathbb{N}_{0}\) \\ \(R_{t}\) & Order income for regular shipment at time \(t\) & \(R_{t}\sim R_{\tau}\) with \(\tau=t\bmod T\) \\ \hline
**Shipment policy** & & \\ \(\overline{p}\) & Regular price & \(\overline{p}\in\mathbb{R}\) \\ \(\tau_{C}\) & Cutoff point & \(\tau_{C}\in\{0,1,\ldots,T-1\}\) \\ \(f_{t}\) & Shipment fee for express shipment at time \(t\) & \(f_{t}=f_{\tau}\) with \(\tau=t\bmod T\) \\ \(f_{\tau}\) & Shipment fee for express shipment at age \(\tau\) & \(\forall\tau\in\{0,1,\ldots,T-1\}:f_{\tau}\in\mathbb{R}\) \\ \hline
**Customer choice** & & \\ \(U\) & Additional utility derived by an arbitrary customer from express shipment instead of regular shipment & Random variable w. support \([U_{\mathrm{min}},U_{\mathrm{max}}]\) \\ \(U_{\mathrm{min}}\) & Minimum utility derived from express shipment & \(U_{\mathrm{min}}\in\mathbb{R}_{\geq 0}\) \\ \(U_{\mathrm{max}}\) & Maximum utility derived from express shipment & \(U_{\mathrm{max}}\in[U_{\mathrm{min}},\infty)\) \\ \(w(p)\) & Fraction of customers choosing express shipment at price \(p\) & \(w:\mathbb{R}\rightarrow[0,1]\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Parameters of the e-commerce fulfillment center.
**Shipment policy** Two shipment options are potentially available to customers: (1) Express shipment and (2) regular shipment. In particular, consider a customer arriving at time \(t\), and let \(k=\lfloor t/T\rfloor\). Then, the two service options correspond to (1) order dispatch at the next deadline after \(t\), i.e., at time \((k+1)T\), and (2) order dispatch at the second deadline after \(t\), i.e., at time \((k+2)T\). So, if the operating cycle equals one day, i.e., \(T=24\) hours, express shipment corresponds to same-day shipment and delivery to the customer the next day, and regular shipment means next-day shipment and delivery to the customer the day after the next. Regular shipment is always available for the customers, whereas express shipment is only available when \(\tau=t\bmod T\leq\tau_{C}\), where \(\tau_{C}\) is the cutoff point for offering the express shipment option.
Irrespective of the selected service option, customers pay the regular price \(\overline{p}\) that would in online settings be listed as the _product price_. Additionally, they pay a shipment fee \(f\) that depends on the selected service option. Regular shipment comes with a zero shipment fee, as is common in online retailing. For express shipment, there is a non-zero shipment fee to reduce the proportion of customers choosing this service option. This is reasonable as customers are often indifferent between faster and slower fulfillment, but sensitive to small price changes (Campbell and Savelsbergh 2006, Acimovic and Graves 2017). Thus, for express shipment, customers pay a higher price consisting of the regular price \(\overline{p}\) and a time-dependent shipment fee \(f_{t}\). The shipment fee depends on the age \(\tau\) of the operating cycle, such that \(f_{t}=f_{\tau}\) with \(\tau=t\bmod T\).
Note that express shipment is only available when \(\tau\leq\tau_{C}\), thus the shipment policy consists of the cutoff point \(\tau_{C}\) and the shipment fees \(f_{\tau}\) for \(\tau\in\{0,1,\ldots,\tau_{C}\}\). Our model in principle allows for each age \(\tau\) to have a unique shipment fee, but in Section 3.4, we propose a policy with a simpler structure that may be easier for customers to understand and accept.
**Customer choice** For modeling customer choice, the main quantity of interest is the utility that an arbitrary customer derives from express shipment compared to regular shipment. This quantity is modeled as a random variable, that will be denoted by \(U\). For ease of exposition, we assume \(U\) to be uniformly distributed on \([U_{\min},U_{\max}]\). Further, to enable us to focus on analyzing the choice between express and regular shipment, we assume that the \(D_{t}\) customers that arrive at time \(t\) want to buy the product. (In particular, customers in \(D_{t}\) derive a utility \(v\in\mathbb{R}\) from owning the product, and \(v\geq\bar{p}\). Thus, the customers' utility derived from service options express and regular shipment is \(v+U\) and \(v\), respectively.)
In each operating cycle, all customers arriving until the cutoff point \(\tau_{C}\) have the choice between express and regular shipment. The customers choose the available service option that maximizes their utility. Recall that the price for regular and express shipment is \(\bar{p}\) and \(p=\bar{p}+f_{\tau}\), respectively.
Consequently, the proportion \(w\) of customers choosing express shipment depends on the price for express shipment \(p\) as follows:
\[w(p)=\begin{cases}\mathbb{P}(U>p-\bar{p})&\text{if }p\in[U_{\min}+\bar{p},U_{ \max}+\bar{p}]\\ 1&\text{if }p\leq U_{\min}+\bar{p}\\ 0&\text{if }p\geq U_{\max}+\bar{p}.\end{cases} \tag{6}\]
Note that \(\mathbb{P}(U>p-\bar{p})=1-\frac{p-\bar{p}-U_{\min}}{U_{\max}-U_{\min}}\) for \(p\in[U_{\min}+\bar{p},U_{\max}+\bar{p}]\). All customers arriving after the cutoff point \(\tau_{C}\) do not have a choice and always receive regular shipment.
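The choice probability in (6) is straightforward to evaluate in this uniform case. The following small Python sketch (not part of the model itself; the function and argument names are our own) illustrates it:

```python
# Choice probability w(p) from Eq. (6) for U uniform on [u_min, u_max].
def express_share(p, p_bar, u_min, u_max):
    """Fraction of customers choosing express shipment at total price p."""
    fee = p - p_bar                               # shipment fee on top of the product price
    if fee <= u_min:
        return 1.0                                # every customer gains utility from express
    if fee >= u_max:
        return 0.0                                # nobody is willing to pay the fee
    return 1.0 - (fee - u_min) / (u_max - u_min)  # P(U > p - p_bar) for uniform U

# Example: product price 4, U uniform on [0, 4], fee 2 -> half the customers choose express.
print(express_share(6.0, 4.0, 0.0, 4.0))  # 0.5
```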
### Model
We introduce a discrete-time Markov chain model for steady-state analysis of e-commerce fulfillment centers with time-dependent shipment options and shipment fees. In the following, we provide the discrete-time Markov chain formulation (cf. Section 3.2.1) and derive formulas for several performance measures (cf. Section 3.2.2).
#### 3.2.1 Discrete-time Markov chain
For \(t\in\mathbb{N}_{0}\), let \(\mathbf{X}_{t}\) be the order backlog of the fulfillment center
\[\mathbf{X}_{t}=\left(X_{t}^{C}\ X_{t}^{\Sigma}\right). \tag{7}\]
Here, \(X_{t}^{C}\) specifies the number of unprocessed orders at time \(t\) that are due by the deadline immediately after the current operating cycle or that are already too late at time \(t\). Recall that the current operating cycle is \(k=\lfloor t/T\rfloor\), and the deadline after the current operating cycle occurs at \((k+1)T=(\lfloor t/T\rfloor+1)T\). Also, \(X_{t}^{\Sigma}\) specifies the _total_ number of unprocessed orders at time \(t\), i.e., the sum of the orders due by \((k+1)T\), the orders due by \((k+2)T\), and the orders that are already too late. The dynamics of the order backlog and the corresponding transition probabilities depend on the age \(\tau\) of the current operating cycle. So, the state of the underlying periodic Markov chain is given by \((\mathbf{X}_{t},\tau)\) with \(\tau=t\bmod T\).
The customer demand \(D_{t}\sim D\) released for order fulfillment at time \(t\) subdivides into the order income for express shipment \(E_{t}\) and the order income for regular shipment \(R_{t}\), which are independent of each other. This choice depends on the shipment fee \(f_{t}\) charged for express shipment at time \(t\): In line with our introduced shipment policy, we adopt a static pricing policy that sets \(f_{t}=f_{\tau}\), where \(\tau=t\bmod T\). That is, the fee depends only on the age \(\tau\) of the operating cycle. We follow the same predictable fee pattern every operating cycle. Then, we have \(E_{t}\sim E_{\tau}\) and \(R_{t}\sim R_{\tau}\), where
\[E_{\tau}\sim \operatorname{Poi}\big{(}\lambda w(\bar{p}+f_{\tau})\big{)} \tag{8}\] \[R_{\tau}\sim \operatorname{Poi}\big{(}\lambda(1-w(\bar{p}+f_{\tau}))\big{)},\]
for \(\tau\leq\tau_{c}\) and \(w(\cdot)\) given by (6). For \(\tau>\tau_{c}\), we have \(E_{\tau}=0\) and \(R_{\tau}=D_{t}\).
The processing capacity \(B_{t}\sim B\) provided for order fulfillment at time \(t\) is initially used to process the orders \(X_{t}^{C}\) and the order income for express shipment \(E_{t}\) released at time \(t\). Any remaining processing capacity is used to process further orders or remains unused in case the system runs idle.
Accordingly, the dynamics for \(X_{t}^{\Sigma}\) are straightforward: We have \(X_{t+1}^{\Sigma}=\left(X_{t}^{\Sigma}+E_{t}+R_{t}-B_{t}\right)^{+}\), where \((x)^{+}\) denotes \(\max(x,0)\). For the dynamics of \(X_{t}^{C}\) consider any time \(t\) and let \(\tau=t\bmod T\). If \(\tau<T-1\), then any orders \(X_{t}^{C}\) and \(E_{t}\) that remain unprocessed in period \(t\) are carried over to the next period, and as a consequence, we find \(X_{t+1}^{C}=\left(X_{t}^{C}+E_{t}-B_{t}\right)^{+}\). If \(\tau=T-1\), then the deadline occurs at the next period. Therefore, any orders \(X_{t}^{C}\), \(E_{t}\), and \(R_{t}\) that remain unprocessed in period \(t\) are due by deadline \((k+2)T\) or already too late, and we find \(X_{t+1}^{C}=X_{t+1}^{\Sigma}=\left(X_{t}^{\Sigma}+E_{t}+R_{t}-B_{t}\right)^{+}\).
Following these considerations, transitions from \(\mathbf{X}_{t}=\left(X_{t}^{C}\ X_{t}^{\Sigma}\right)\) at time \(t\) to \(\mathbf{X}_{t+1}=\left(X_{t+1}^{C}\ X_{t+1}^{\Sigma}\right)\) at time \(t+1\) are given by
\[X_{t+1}^{C} = \begin{cases}\left(X_{t}^{C}+E_{t}-B_{t}\right)^{+}&\text{if $t \bmod T<T-1$}\\ \left(X_{t}^{\Sigma}+E_{t}+R_{t}-B_{t}\right)^{+}&\text{if $t\bmod T=T-1$} \end{cases} \tag{9}\] \[X_{t+1}^{\Sigma} = \left(X_{t}^{\Sigma}+E_{t}+R_{t}-B_{t}\right)^{+}. \tag{10}\]
Note that these transitions depend on time only by \(\tau=t\bmod T\), and hence the quantities \(X_{t}^{C},X_{t}^{\Sigma},\tau\) together represent the state completely. Also, since \(\rho<1\), the (periodic) Markov chain that arises from a given shipment policy \(\{\tau_{C},f_{0},f_{1},\ldots,f_{\tau_{C}}\}\) has a stationary probability distribution. Therefore, defining performance measures in terms of long run averages is appropriate.
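For illustration, the transition (9)-(10) can be written as a small update routine. The sketch below uses our own names and takes the realized order income and capacity of the period as inputs:

```python
# One-step update of the backlog (X^C, X^Sigma) following Eqs. (9)-(10).
# e, r, b are the realized express orders, regular orders and capacity of the period.
def next_state(x_c, x_sigma, tau, T, e, r, b):
    """Return (X^C, X^Sigma) after a period of age tau in {0, ..., T-1}."""
    x_sigma_next = max(x_sigma + e + r - b, 0)   # Eq. (10): total unprocessed orders
    if tau < T - 1:
        x_c_next = max(x_c + e - b, 0)           # Eq. (9): only X^C and express orders are urgent
    else:
        x_c_next = x_sigma_next                  # Eq. (9): at tau = T-1 all remaining orders become urgent
    return x_c_next, x_sigma_next

# Example: age 0, backlog (1, 3), 2 express and 4 regular orders, capacity 5.
print(next_state(1, 3, 0, 8, 2, 4, 5))  # (0, 4)
```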
#### 3.2.2 Performance measures
**Backorders** There are potentially some orders that are due by deadline \((k+1)T\) immediately after operating cycle \(k\) or that are already too late in operating cycle \(k\), but remain unprocessed in the last period of operating cycle \(k\) and become backorders. We denote the random number of backorders occurring at the end of operating cycle \(k\) by \(M_{k}\). Such backorders occur if at time \(t=kT+(T-1)\), the sum of the orders \(X_{t}^{C}\) and the order income for express shipment \(E_{t}\) exceeds the processing capacity \(B_{t}\), i.e., \(M_{k}=\left(X_{kT+(T-1)}^{C}+E_{kT+(T-1)}-B_{kT+(T-1)}\right)^{+}\).
The long-run average number of backorders per operating cycle equals
\[\mathbb{E}[M]=\lim_{K\rightarrow\infty}\frac{1}{K}\sum_{k=0}^{K-1}\mathbb{E}[ M_{k}]. \tag{11}\]
Recall that a customer arriving at time \(t\) (with \(k=\lfloor t/T\rfloor\)) is promised a shipment at \((k+1)T\) in the case of choosing express shipment, and a shipment at \((k+2)T\) in the case of choosing regular shipment. To motivate our later choice to penalize \(\mathbb{E}[M]\), we seek to relate it to a more customer-centric performance measure: The average delay experienced by customers relative to
the shipment time that was initially promised. Recall that each order in \(M_{k}\) was initially promised to be delivered at or before \((k+1)T\), but will be delivered at or after \((k+2)T\). Thus, by Little (1961), the average delay that a customer experiences relative to the promised delivery date equals \(T\mathbb{E}[M]/(T\mathbb{E}[D])=\mathbb{E}[M]/\mathbb{E}[D]\).
**Profit** The profit \(G\) of the e-fulfillment center derives from the revenues received for all customer orders per operating cycle and the penalties paid to customers per operating cycle if their orders are not fulfilled on time by the promised deadline. \(G\) subdivides into a fixed part \(G^{F}\), which is independent of the selected shipment policy, and a variable part \(G^{V}\), depending on the shipment policy. The _fixed_ profit \(G^{F}\) corresponds to the revenue earned from the regular price \(\overline{p}\) per operating cycle; it satisfies \(\mathbb{E}[G^{F}]=T\mathbb{E}[D]\overline{p}\). The _variable_ profit \(G^{V}\) incorporates the revenue earned from the shipment fees for express shipment and the penalties paid for backorders per operating cycle. We model a constant penalty cost of \(c\) per backorder, to be paid once per operating cycle. Thus, the expected variable profit \(\mathbb{E}[G^{V}]\) equals
\[\mathbb{E}[G^{V}]=\sum_{\tau=0}^{T-1}f_{\tau}\mathbb{E}[E_{\tau}]-c\mathbb{E} [M]. \tag{12}\]
That is, we seek to balance the payments received from customers choosing express shipment and the risk of over-promising.
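Both performance measures can also be estimated by straightforward Monte Carlo simulation of the chain, which is a useful sanity check for the exact steady-state computations. The following sketch is ours; the Poisson capacity in the example is an illustrative stand-in for the capacity distribution, and all names are assumptions:

```python
import math
import random

def poisson(rate):
    """Knuth-style Poisson sampler, sufficient for a small illustration."""
    threshold, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def evaluate_policy(fees, T, lam, u_min, u_max, capacity, penalty, cycles=20000):
    """Estimate E[M] (Eq. 11) and E[G^V] (Eq. 12) per cycle for shipment fees fees[0..T-1]."""
    def w(fee):  # uniform-utility choice probability, Eq. (6)
        return min(1.0, max(0.0, 1.0 - (fee - u_min) / (u_max - u_min)))
    x_c = x_sigma = 0
    backorders = revenue = 0.0
    for _ in range(cycles):
        for tau in range(T):
            share = w(fees[tau])
            e = poisson(lam * share)           # express orders, Eq. (8)
            r = poisson(lam * (1.0 - share))   # regular orders, Eq. (8)
            b = capacity()                     # processing capacity B_t
            revenue += fees[tau] * e
            if tau == T - 1:                   # orders missing the deadline, M_k
                backorders += max(x_c + e - b, 0)
            x_sigma = max(x_sigma + e + r - b, 0)                   # Eq. (10)
            x_c = x_sigma if tau == T - 1 else max(x_c + e - b, 0)  # Eq. (9)
    avg_m = backorders / cycles
    return avg_m, revenue / cycles - penalty * avg_m

# Example: T = 8, lambda = 5, flat fee of 2 in every period, capacity roughly matching rho = 0.9.
m, gv = evaluate_policy([2.0] * 8, T=8, lam=5, u_min=0.0, u_max=4.0,
                        capacity=lambda: poisson(5 / 0.9), penalty=12)
print(round(m, 2), round(gv, 2))
```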
### Analytical properties of the time-dependent shipment policy
In this section, we establish analytical properties for the time-dependent shipment policy introduced in the previous sections. All proofs are relegated to Appendix B. We start with the following simple observation that enables us to present our main results more succinctly:
**Lemma 1**: _Consider any shipment policy (as introduced in Section 3.1) with cutoff point \(\tau_{C}\), and fee \(f_{\tau}\) for \(\tau\in\{0,\ldots,\tau_{C}\}\). A mathematically equivalent shipment policy may be obtained by setting \(\tau_{C}^{\prime}=T-1\), \(f_{\tau}^{\prime}=f_{\tau}\) for \(\tau\leq\tau_{C}\) and \(f_{\tau}^{\prime}=U_{\text{max}}\) for \(\tau>\tau_{C}\). This equivalent shipment policy yields the same variable profit \(\mathbb{E}[G^{V}]\)._
This observation follows since \(w(\bar{p}+f_{\tau})=0\) for \(f_{\tau}=U_{\text{max}}\), which in turn is equivalent to not offering express shipment. With this, for the remainder of this section, we shall consider the shipment policy to be denoted by \((f_{0},f_{1},\ldots,f_{T-1})\), under the implicit assumption that \(\tau_{C}=T-1\). The vector \((f_{0},f_{1},\ldots,f_{T-1})\) shall also be referred to as the _fee structure_.
Since the customer utility \(U\) is supported on \([U_{\text{min}},U_{\text{max}}]\), it is sufficient to consider shipment fees that fall between \(U_{\text{min}}\) and \(U_{\text{max}}\):
**Definition 1**: _The set of fee structures \(\mathcal{F}\) is given by_
\[\mathcal{F}=\big{\{}(f_{0},f_{1},\ldots,f_{T-1})\big{|}\forall\tau\leq T-1:f_ {\tau}\in[U_{\text{min}},U_{\text{max}}]\big{\}}. \tag{13}\]
We are interested in studying how the costs depend on the fee structure, to eventually gain insights into fee structures that yield high profit. To denote explicitly the dependence of variables on the fee structure \(f\), we shall use a superscript \(f\), e.g. \(M^{f}\), \(M^{f}_{k}\) and \(G^{f}\).
Next, consider the _expected cumulative demand for express shipment until \(\tau\)_, which shall be denoted by \(\mathbb{E}[E^{f}[0,\tau]]:=\lambda\sum_{\tau^{\prime}=0}^{\tau}w(\bar{p}+f_{\tau^{\prime}})\) for \(\tau\in\{0,1,\ldots,T-1\}\). It is easy to see that each fee structure \(f\) generates a cumulative demand profile \(\vec{\mathcal{C}}^{f}=(\vec{\mathcal{C}}^{f}_{-1},\vec{\mathcal{C}}^{f}_{0},\vec{\mathcal{C}}^{f}_{1},\ldots,\vec{\mathcal{C}}^{f}_{T-1})\), where \(\vec{\mathcal{C}}^{f}_{\tau}\) denotes \(\mathbb{E}[E^{f}[0,\tau]]\). (We understand \(\vec{\mathcal{C}}^{f}_{-1}\) to equal \(0\); this element is included for later convenience.) Note that \(\vec{\mathcal{C}}^{f}_{\tau}-\vec{\mathcal{C}}^{f}_{\tau-1}=\lambda w(\bar{p}+f_{\tau})\). Accordingly, a _feasible_ cumulative demand profile \(\vec{\mathcal{C}}\) satisfies \(\vec{\mathcal{C}}_{\tau}-\vec{\mathcal{C}}_{\tau-1}\in[0,\lambda]\) for \(\tau\in\{0,\ldots,T-1\}\). For any feasible cumulative demand profile \(\vec{\mathcal{C}}\), there exists a unique fee structure \(f\in\mathcal{F}\) that generates \(\vec{\mathcal{C}}\). Demand profiles may help us to gain insights into how fee structures affect backlog:
**Lemma 2**: _Let \(f,f^{\prime}\in\mathcal{F}\) with corresponding cumulative demand profiles \(\vec{\mathcal{C}}^{f}\) and \(\vec{\mathcal{C}}^{f^{\prime}}\), and suppose \(\vec{\mathcal{C}}^{f}\geq\vec{\mathcal{C}}^{f^{\prime}}\) componentwise and \(\vec{\mathcal{C}}^{f}_{T-1}=\vec{\mathcal{C}}^{f^{\prime}}_{T-1}\). Then \(f\) generates at most as many backorders as \(f^{\prime}\), i.e. \(\mathbb{E}[M^{f}]\leqslant\mathbb{E}[M^{f^{\prime}}]\)._
The lemma will serve as a key component in proving the main result of this section, and is also of interest in itself. Let \(\alpha\) denote the expected fraction of total orders that are placed as express shipment. The lemma compares fee structures that result in the same \(\alpha\) over the entire day, i.e. \(f,f^{\prime}\) in the lemma (note \(\vec{\mathcal{C}}^{f}_{T-1}\!=\!\vec{\mathcal{C}}^{f^{\prime}}_{T-1}\)). Given \(\alpha\), it demonstrates the importance of eliciting sufficiently many express shipments early in the day. Indeed, if \(\vec{\mathcal{C}}^{f}\geq\vec{\mathcal{C}}^{f^{\prime}}\), then \(f\) has a higher fraction of express orders in the first half of the day, when compared to \(f^{\prime}\). Having sufficient express shipments early on ensures that on busy days, all precious fulfillment center capacity in the early stages of the operating cycle can be fully utilized for express shipments, and none is _wasted_ on orders for regular shipment, reducing expected backorders.
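The following sketch (names ours) computes cumulative demand profiles from a fee structure under the uniform-utility choice model and checks the componentwise dominance used in Lemma 2:

```python
def cumulative_profile(fees, lam, u_min, u_max):
    """Expected cumulative express demand C^f_0, ..., C^f_{T-1} of a fee structure."""
    def w(fee):  # uniform-utility choice probability, Eq. (6)
        return min(1.0, max(0.0, 1.0 - (fee - u_min) / (u_max - u_min)))
    profile, total = [], 0.0
    for fee in fees:
        total += lam * w(fee)
        profile.append(total)
    return profile

def dominates(fees_a, fees_b, lam, u_min, u_max):
    """True if C^a >= C^b componentwise while both elicit the same total express demand."""
    ca = cumulative_profile(fees_a, lam, u_min, u_max)
    cb = cumulative_profile(fees_b, lam, u_min, u_max)
    return all(x >= y for x, y in zip(ca, cb)) and abs(ca[-1] - cb[-1]) < 1e-9

# An increasing fee structure front-loads express demand compared to a flat one with
# the same total share, which is the situation covered by Lemma 2.
print(dominates([1.5, 1.5, 2.5, 2.5], [2.0, 2.0, 2.0, 2.0], lam=5, u_min=0.0, u_max=4.0))  # True
```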
Our main result of this section implies that the optimal shipment fee for express shipment is increasing in the remaining time until the end of the current operating cycle:
**Theorem 1**: _There exists an optimal fee structure \(f^{*}\!=\!(f_{0},f_{1},\ldots,f_{T-1})\) that is monotone, i.e., that has \(f_{\tau}\!>\!f_{\tau-1}\) for all \(1\!\leq\!\tau\!\leq\!T-1\)._
Since we consider static pricing policies, each fee structure \(f\) is associated with a unique Markov Reward Process, and the Theorem uses coupling arguments (based on the connection to cumulative demand profiles in Lemma 2) to gain analytical insights in this collection of Markov Reward Processes. Though superficially similar, the result is quite unlike structural results for dynamic (pricing) problems, where structure is established within a single Markov Decision Process, typically via establishing properties of the value function by induction.
The result establishes that fees should increase towards the deadline of the current operating cycle. Intuitively, this reduces the probability of a large amount of express shipment orders close to the deadline, while increasing the revenue from customers that really want a last-minute order, a clear actionable insight gained from our study. Our finding also motivates us to study a simple shipment policy with an increasing fee structure in the next section.
### Simple time-dependent shipment policy
Our time-dependent shipment policy in general allows for a unique shipment fee \(f_{\tau}\) at every age \(\tau\) of the operating cycle (cf. Section 3.1). However, frequent adaptations of the shipment fee or even dynamic pricing create perceptions of unfairness among customers (Klein et al., 2019). A simple static time-dependent shipment policy with a limited number of well-targeted shipment fees seems easier for customers to understand and accept and is therefore more appropriate to implement in practice (Agatz et al., 2013). For this purpose, we propose a simple instance of our time-dependent shipment policy, for short _TSP_.
_TSP_ charges stepwise increasing shipment fees \(f_{\tau}\) for express shipment, which is in line with the results in Section 3.3, and introduces a fee switching point \(\tau_{F}\leq\tau_{C}\): Customers arriving until the fee switching point pay the express fee \(f_{E}\) for express shipment, whereas customers arriving thereafter pay the higher last-minute express fee \(f_{LE}\), \(f_{LE}>f_{E}\), if they choose express shipment (cf. Figure 2). Thus, the shipment fees \(f_{\tau}\), \(\tau\in\{0,1,\ldots,\tau_{C}\}\), for express shipment are given as follows:
\[f_{\tau}=\begin{cases}f_{E}&\text{if }\tau\leq\tau_{F}\\ f_{LE}&\text{if }\tau_{F}<\tau\leq\tau_{C}.\end{cases} \tag{14}\]
Applying _TSP_, order income for express shipment \(E_{t}\) and regular shipment \(R_{t}\) at time \(t\), with \(\tau=t\bmod T\), are given as follows:
\[E_{t}\sim E_{\tau}\sim\begin{cases}\text{Poi}\left(w(\overline{p}+f_{E}) \lambda\right)&\text{if }\tau\leq\tau_{F}\\ \text{Poi}\left(w(\overline{p}+f_{LE})\lambda\right)&\text{if }\tau_{F}<\tau\leq\tau_{C}\\ 0&\text{if }\tau_{C}<\tau\leq T-1,\end{cases} \tag{15}\]
Figure 2: Illustration of the simple time-dependent shipment policy.
\[R_{t}\sim R_{\tau}\sim\begin{cases}\text{Poi}\left((1-w(\overline{p}+f_{E})) \lambda\right)&\text{if }\tau\leq\tau_{F}\\ \text{Poi}\left((1-w(\overline{p}+f_{LE}))\lambda\right)&\text{if }\tau_{F}<\tau\leq \tau_{C}\\ \text{Poi}\left(\lambda\right)&\text{if }\tau_{C}<\tau\leq T-1.\end{cases} \tag{16}\]
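A compact way to express _TSP_ is as a fee schedule together with the induced Poisson rates; the sketch below (names ours) mirrors (14)-(16):

```python
def tsp_fee(tau, f_e, f_le, tau_f, tau_c):
    """Express-shipment fee at cycle age tau under TSP, Eq. (14); None if unavailable."""
    if tau <= tau_f:
        return f_e
    if tau <= tau_c:
        return f_le
    return None   # express shipment is no longer offered after the cutoff point

def tsp_rates(tau, f_e, f_le, tau_f, tau_c, lam, u_min, u_max):
    """Poisson rates (express, regular) of the order income at age tau, Eqs. (15)-(16)."""
    def w(fee):  # uniform-utility choice probability, Eq. (6)
        return min(1.0, max(0.0, 1.0 - (fee - u_min) / (u_max - u_min)))
    fee = tsp_fee(tau, f_e, f_le, tau_f, tau_c)
    share = w(fee) if fee is not None else 0.0
    return lam * share, lam * (1.0 - share)

# Example with f_E = 2.4, f_LE = 3.0, tau_F = 6, tau_C = 7 and lambda = 5:
for tau in range(8):
    print(tau, tsp_rates(tau, 2.4, 3.0, 6, 7, lam=5, u_min=0.0, u_max=4.0))
```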
## 4 Numerical analysis
In this section, we conduct a numerical study to analyze the benefit of time-dependent shipment options and shipment fees in order fulfillment based on our simple time-dependent shipment policy _TSP_ (cf. Section 4.2). We furthermore investigate the profit-maximizing parameterization of _TSP_ (cf. Section 4.3). The underlying experimental design and implementation details are given in Section 4.1.
### Experimental design and model implementation
We consider e-commerce fulfillment centers with an operating cycle of length \(T=8\) and utilizations of \(\rho\in\{0.85,0.9,0.95\}\). The customer demand \(D_{t}\), \(t\in\mathbb{N}_{0}\), is Poisson-distributed with \(\lambda=5\). For the processing capacity \(B_{t}\), \(t\in\mathbb{N}_{0}\), we assume a discretized Beta-distribution with support \(\{0,1,\ldots,20\}\), which is specified by its expected value \(\mathbb{E}[B]\) and its squared coefficient of variation \(scv[B]\) (Law et al. 2007, 295-297). \(\mathbb{E}[B]\) derives from the given utilization \(\rho\) as given in (5), and \(scv[B]\) is set to \(0.5\). The regular price is given by \(\overline{p}=4\), and the additional utility derived from express shipment is uniform on its support \([0,\overline{p}]\). We consider various values for the express fee and the last-minute express fee \(f_{E},f_{LE}\in\{0.2,0.4,\ldots,3.6,3.8\}\), \(f_{LE}>f_{E}\), whose value range derives from the given minimum and maximum willingness to pay. The cutoff point \(\tau_{C}\) varies between \(1\) and \((T-1)\), and the fee switching point \(\tau_{F}\) between \(0\) and \((\tau_{C}-1)\). We consider penalty costs of \(c=8\) and \(c=12\), such that the penalty cost per backorder exceeds the maximum price for express shipment.
To implement and compute our model, we derive a finite upper bound for its state space, which is given by \(\mathbb{N}_{0}\times\mathbb{N}_{0}\times\{0,1,\ldots,T-1\}\). It is sufficient to limit the total number of unprocessed orders \(X_{t}^{\Sigma}\) by the finite upper bound \(\overline{X^{\Sigma}}\) since \(X_{t}^{C}\leq X_{t}^{\Sigma}\) at any time period \(t\). Due to this upper bound, some orders are potentially rejected without being fulfilled. An incoming order is rejected in operating cycle \(k\) only if, by accepting this order, the current total number of unprocessed orders \(X_{t}^{\Sigma}\) would exceed the upper bound \(\overline{X^{\Sigma}}\). We determine \(\overline{X^{\Sigma}}\) such that the probability of rejecting an order is negligibly small. In this study, the threshold value is \(0.023\). The formula of the rejection probability and the required adaptions of the state transition are given in Appendix A.
### Insights on the benefit of time-dependent shipment options and shipment fees
To evaluate the benefit of time-dependent shipment options and shipment fees, we consider our simple time-dependent shipment policy _TSP_ and the following benchmark policies that are prevalent in practice:
1. The _constant shipment policy (CSP)_ offers express and regular shipment to all customers independent of the age of the operating cycle at their time of arrival and charges a constant fee \(f\) for express fulfillment, i.e., there is no cutoff point and no fee switching point. The fee is set such that it maximizes the expected variable _revenue_ per operating cycle, which is \(\sum_{\tau=0}^{T-1}f_{\tau}\mathbb{E}[E_{\tau}]=Tf\mathbb{E}[E]=Tf\left(1-\frac{ f}{\overline{p}}\right)\lambda\) when assuming uniform utility with support \([0,\overline{p}]\). Hence, the revenue-maximizing fee is \(f^{RM}=0.5\overline{p}\), which corresponds to \(f^{RM}=2\) in our study.
2. The _time-dependent shipment policy with constant fees (TSP-CF)_ offers express and regular shipment to all customers arriving until the cutoff point \(\tau_{C}\), and regular shipment to all customers arriving thereafter. It charges a constant fee \(f\) for express fulfillment, i.e., there is no fee switching point. Regarding the fee value, we differentiate two instances: The naive instance _TSP-CF_ charges the revenue-maximizing fee \(f=f^{RM}\), similar to _CSP_, whereas _TSP-CF*_ charges the profit-maximizing fee \(f=f^{*}\), which derives from maximizing the expected variable profit \(\mathbb{E}[G^{V}]\) given by (12).
Table 2 gives the expected backorders \(\mathbb{E}[M]\) and the expected variable profit \(\mathbb{E}[G^{V}]\) of _TSP_ and the benchmark policies as well as their benefit in terms of variable profit, compared to each other. Note that it is sufficient to focus on the variable profit \(\mathbb{E}[G^{V}]\) as only the variable part of the profit is affected by the selected shipment policy.
Considering the highly utilized system setting (\(\rho=0.95\)) with high penalty cost (\(c=12\)), _TSP_ leads to an increase in the variable profit of \(5.78\%\) compared to _TSP-CF*_. As expected, _TSP_ has an even higher benefit compared to _TSP-CF_ (\(72.08\%\)) and _CSP_ (\(91.75\%\)). Similar results apply to the other system settings. In general, _TSP_ outperforms the most sophisticated benchmark policy _TSP-CF*_ by \(3.38\%\) (median). Furthermore, it is interesting to note that there is a clear ranking of the benchmark policies in terms of variable profit: _CSP_ has the lowest variable profit, _TSP-CF_ leads to an increase in the variable profit of \(42.56\%\) (compared to _CSP_), and _TSP-CF*_ further improves the variable profit by \(28.85\%\) (compared to _TSP-CF_) (given numbers are median values).
These results indicate that the simple time-dependent shipment policy _TSP_, which consists of two fees \(f_{E}\), \(f_{LE}\) and one fee switching point \(\tau_{F}\), already improves profits significantly compared to benchmark policies with different features of constant shipment options and/or constant shipment fees. Even if it is not possible to implement a time-dependent shipment policy due to practical restrictions, profits can be improved by considerable margins when evolving the shipment policy as follows: (1) offering time-dependent shipment options by introducing a cutoff point instead of static ones (_TSP-CF_ versus _CSP_), (2) charging the profit-maximizing fee instead of the revenue-maximizing one (_TSP-CF*_ versus _TSP-CF_), and (3) switching from a constant fee to a simple time-dependent fee structure by introducing a fee switching point (_TSP_ versus _TSP-CF*_).
### Insights on the optimal parameterization of the simple time-dependent shipment policy
In this section, we analyze the profit-maximizing parameterization of the simple time-dependent shipment policy _TSP_, i.e., determining the profit-maximizing cutoff point \(\tau_{C}^{*}\), fee switching point \(\tau_{F}^{*}\), express fee \(f_{E}^{*}\), and last-minute express fee \(f_{LE}^{*}\).
Table 3 gives the profit-maximizing parameterization of _TSP_ for given cutoff points of \(\tau_{C}=5,6,7\) including the corresponding expected backorders \(\mathbb{E}[M]\) and expected variable profit \(\mathbb{E}[G^{V}]\). Considering the highly utilized system setting (\(\rho=0.95\)) with high penalty cost (\(c=12\)), the variable profit increases by \(24.24\%\) (\(6.27\%\)) when postponing the cutoff point from \(\tau_{C}=5\) (\(\tau_{C}=6\))
\begin{table}
\begin{tabular}{l l l l l l l l l l} \hline \hline & & \multicolumn{3}{c}{Policy parameters1 } & \multicolumn{3}{c}{Performance} & \multicolumn{3}{c}{Benefit2 \% compared to} \\ \cline{3-10} Setting & Policy & \(f_{E}\) & \(f_{LE}\) & \(\tau_{F}\) & \(\tau_{C}\) & \(\mathbb{E}[M]\) & \(\mathbb{E}[G^{V}]\) & _CSP_ & _TSP-CF_ & _TSP-CF*_ \\ \hline \(\rho=0.85\), & _CSP_ & 2.0 & & & 1.29 & 29.66 & & & & \\ \(c=8\) & _TSP-CF_ & 2.0 & & & 6\({}^{*}\) & 0.58 & 30.39 & 2.45 & & & \\ & _TSP-CF*_ & 2.4\({}^{*}\) & & & 7\({}^{*}\) & 0.77 & 32.22 & 8.64 & 6.04 & & \\ & _TSP_ & \(2.4\)\({}^{*}\) & \(3.0\)\({}^{*}\) & \(6\)\({}^{*}\) & 7\({}^{*}\) & 0.56 & 32.86 & 10.80 & 8.14 & 1.98 \\ \hline \(\rho=0.85\), & _CSP_ & 2.0 & & & 1.29 & 24.49 & & & & \\ \(c=12\) & _TSP-CF_ & 2.0 & & & 6\({}^{*}\) & 0.58 & 28.08 & 14.67 & & & \\ & _TSP-CF*_ & 2.4\({}^{*}\) & & & 6\({}^{*}\) & 0.33 & 29.70 & 21.27 & 5.76 & & \\ & _TSP_ & \(2.4\)\({}^{*}\) & \(3.2\)\({}^{*}\) & \(6\)\({}^{*}\) & 7\({}^{*}\) & 0.50 & 30.78 & 25.67 & 9.59 & 3.63 \\ \hline \(\rho=0.9\), & _CSP_ & 2.0 & & & & 2.69 & 18.45 & & & \\ \(c=8\) & _TSP-CF_ & 2.0 & & & 6\({}^{*}\) & 1.76 & 20.95 & 13.56 & & & \\ & _TSP-CF*_ & 2.6\({}^{*}\) & & & 7\({}^{*}\) & 1.42 & 25.08 & 35.89 & 19.67 & & \\ & _TSP_ & \(2.6\)\({}^{*}\) & \(3.2\)\({}^{*}\) & \(6\)\({}^{*}\) & 7\({}^{*}\) & 1.18 & 25.60 & 38.72 & 22.16 & 2.08 \\ \hline \(\rho=0.9\), & _CSP_ & 2.0 & & & & 2.69 & 7.68 & & & \\ \(c=12\) & _TSP-CF_ & 2.0 & & & 5\({}^{*}\) & 1.27 & 14.80 & 92.74 & & & \\ & _TSP-CF*_ & 2.6\({}^{*}\) & & & 6\({}^{*}\) & 0.95 & 20.43 & 166.02 & 38.02 & & \\ & _TSP_ & \(2.8\)\({}^{*}\) & \(3.4\)\({}^{*}\) & \(6\)\({}^{*}\) & 7\({}^{*}\) & 0.91 & 21.07 & 174.35 & 42.34 & 3.13 \\ \hline \(\rho=0.95\), & _CSP_ & 2.0 & & & & 6.36 & -10.91 & & & & \\ \(c=8\) & _TSP-CF_ & 2.0 & & & 2\({}^{*}\) & 2.20 & -2.03 & 81.36 & & & \\ & _TSP-CF*_ & \(3.0\)\({}^{*}\) & & & 7\({}^{*}\) & 2.42 & 6.67 & 161.10 & 427.87 & & \\ & _TSP_ & \(3.0\)\({}^{*}\) & \(3.4\)\({}^{*}\) & \(6\)\({}^{*}\) & 7\({}^{*}\) & 2.27 & 6.91 & 163.34 & 439.89 & 3.67 \\ \hline \(\rho=0.95\), & _CSP_ & 2.0 & & & & 6.36 & -36.37 & & & \\ \(c=12\) & _TSP-CF_ & 2.0 & & & 1\({}^{*}\) & 1.73 & -10.75 & 70.45 & & & \\ & _TSP-CF*_ & \(3.2\)\({}^{*}\) & & & 6\({}^{*}\) & 2.13 & -3.18 & 91.24 & 70.37 & & \\ & _TSP_ & \(3.2\)\({}^{*}\) & \(3.6\)\({}^{*}\) & \(6\)\({}^{*}\) & 7\({}^{*}\) & 2.27 & -3.00 & 91.75 & 72.08 & 5.78 \\ \hline \hline Median & _TSP-CF_ & & & & & & 42.56 & & & \\ & _TSP-CF*_ & & & & & & 63.57 & 28.85 & & \\ & _TSP_ & & & & & & 65.24 & 32.25 & 3.38 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Optimal policy parameterization, performance measures and benefit of TSP and the benchmark policies.
to the last time period before the deadline, i.e., \(\tau_{C}=7\). Similar results apply to the other system settings. So, the profit-maximizing cutoff point of _TSP_ is \(\tau_{C}^{*}=7\).
Figures 3 and 4 illustrate the expected variable profit of _TSP_ depending on the express fee \(f_{E}\) (Figure 3) and on the last-minute express fee \(f_{LE}\) given the setting-specific profit-maximizing express fee \(f_{E}^{*}\) (Figure 4). Both figures consider different fee switching points \(\tau_{F}<\tau_{C}\), given the profit-maximizing cutoff point \(\tau_{C}^{*}=7\). From these figures, we find the profit-maximizing fee values \(f_{E}^{*}\), \(f_{LE}^{*}\) and the profit-maximizing fee switching point \(\tau_{F}^{*}\) in each system setting; e.g., \(f_{E}^{*}=2.4\), \(f_{LE}^{*}=3.0\), \(\tau_{F}^{*}=6\) in the medium utilized system setting (\(\rho=0.85\)) with low penalty cost (\(c=8\)), and \(f_{E}^{*}=3.2\), \(f_{LE}^{*}=3.6\), \(\tau_{F}^{*}=6\) in the highly utilized system setting (\(\rho=0.95\)) with high penalty cost (\(c=12\)). The profit-maximizing fee values \(f_{E}^{*}\), \(f_{LE}^{*}\) exceed the revenue-maximizing fee \(f^{RM}=2\) in each system setting, and they increase as penalty cost and utilization increase, respectively. The profit-maximizing fee switching point corresponds to the last time period before the cutoff point, i.e., \(\tau_{F}^{*}=6\), irrespective of the considered system setting.
Summarizing these results yields the following insights and recommendations in terms of the optimal parameterization of _TSP_:
\begin{table}
\begin{tabular}{l r r r r r r r} \hline \hline
 & \multicolumn{3}{c}{Policy parameters\({}^{1}\)} & \multicolumn{3}{c}{Performance} \\
Setting & \(f_{E}\) & \(f_{LE}\) & \(\tau_{F}\) & \(\tau_{C}\) & \(\mathbb{E}[M]\) & \(\mathbb{E}[G^{V}]\) & Benefit\({}^{2}\) [\%] \\ \hline
\(\rho=0.85\), & \(2.4^{*}\) & \(3.0^{*}\) & \(6^{*}\) & \(7^{*}\) & 0.56 & 32.86 & \\
\(c=8\) & \(2.2^{*}\) & \(2.4^{*}\) & \(4^{*}\) & 6 & 0.39 & 31.24 & 5.21 \\
 & \(2.0^{*}\) & \(2.2^{*}\) & \(0^{*}\) & 5 & 0.26 & 27.66 & 18.82 \\ \hline
\(\rho=0.85\), & \(2.4^{*}\) & \(3.2^{*}\) & \(6^{*}\) & \(7^{*}\) & 0.50 & 30.78 & \\
\(c=12\) & \(2.2^{*}\) & \(2.4^{*}\) & \(2^{*}\) & 6 & 0.36 & 29.73 & 3.53 \\
 & \(2.2^{*}\) & \(2.4^{*}\) & \(4^{*}\) & 5 & 0.24 & 26.68 & 15.35 \\ \hline
\(\rho=0.9\), & \(2.6^{*}\) & \(3.2^{*}\) & \(6^{*}\) & \(7^{*}\) & 1.18 & 25.60 & \\
\(c=8\) & \(2.4^{*}\) & \(2.6^{*}\) & \(3^{*}\) & 6 & 1.07 & 24.31 & 5.30 \\
 & \(2.4^{*}\) & \(2.6^{*}\) & \(4^{*}\) & 5 & 0.86 & 21.68 & 18.08 \\ \hline
\(\rho=0.9\), & \(2.8^{*}\) & \(3.4^{*}\) & \(6^{*}\) & \(7^{*}\) & 0.91 & 21.07 & \\
\(c=12\) & \(2.6^{*}\) & \(2.8^{*}\) & \(4^{*}\) & 6 & 0.89 & 20.52 & 2.67 \\
 & \(2.4^{*}\) & \(2.6^{*}\) & \(0^{*}\) & 5 & 0.76 & 18.46 & 14.17 \\ \hline
\(\rho=0.95\), & \(3.0^{*}\) & \(3.4^{*}\) & \(6^{*}\) & \(7^{*}\) & 2.74 & 6.91 & \\
\(c=8\) & \(2.8^{*}\) & \(3.0^{*}\) & \(0^{*}\) & 6 & 2.56 & 6.20 & 11.41 \\
 & \(2.8^{*}\) & \(3.0^{*}\) & \(4^{*}\) & 5 & 2.49 & 4.83 & 42.96 \\ \hline
\(\rho=0.95\), & \(3.2^{*}\) & \(3.6^{*}\) & \(6^{*}\) & \(7^{*}\) & 2.27 & -3.00 & \\
\(c=12\) & \(3.2^{*}\) & \(3.4^{*}\) & \(5^{*}\) & 6 & 2.08 & -3.20 & 6.27 \\
 & \(3^{*}\) & \(3.2^{*}\) & \(0^{*}\) & 5 & 1.96 & -3.96 & 24.24 \\ \hline \hline
\end{tabular}
\({}^{1}\) Optimized parameters are labelled with (*).
\({}^{2}\) Benefit is calculated as the relative deviation in the expected variable profit of the profit-maximizing configuration of _TSP_ compared to a given policy.
\end{table}
Table 3: **Profit-maximizing parameterization, performance measures and benefit of TSP for selected cutoff points \(\tau_{C}=5,6,7\).**
Figure 3: Variable profit of TSP depending on the express fee \(f_{E}\) for different fee switching points \(\tau_{F}<\tau_{C}\) and given cutoff point \(\tau_{C}^{*}=7\).
Figure 4: **Variable profit of TSP depending on the last-minute express fee \(f_{LE}\) for different fee switching points \(\tau_{F}<\tau_{C}\), given setting-specific express fees \(f_{E}^{*}\) and cutoff point \(\tau_{C}^{*}=7\).**
1. Express shipment should be offered until the last time period before the deadline, i.e., the profit-maximizing cutoff point is \(\tau_{C}=T-1\). Hence, all customers ordering before the deadline can choose between express and regular shipment.
2. The fee for express shipment should be increased to the last-minute express fee in the last time period before the cutoff point, i.e., the profit-maximizing fee switching point is \(\tau_{F}=\tau_{C}-1\). So, the vast majority of customers that choose express shipment pay the express fee \(f_{E}\). Only those customers who arrive shortly before the cutoff point and still choose express shipment have to pay the higher last-minute express fee \(f_{LE}\).
3. The shipment fees \(f_{E}\), \(f_{LE}\) should be higher than the revenue-maximizing fee \(f^{RM}\). Furthermore, it is optimal to charge higher fees \(f_{E}\), \(f_{LE}\) when penalty cost and utilization increase as higher fees for express shipment reduce the proportion of customers choosing express shipment and result in fewer backorders and lower penalty costs.
## 5 Concluding remarks
Customers increasingly expect faster shipment services. In their pursuit to meet this need, e-commerce companies might run the risk that they over-promise their shipment services when fulfillment centers fail to process online orders before the agreed deadline. To mitigate this risk, we developed a time-dependent shipment policy that seeks to incentivize customers to choose next-day shipment over same-day shipment as the remaining time to process same-day shipment orders decreases.
We considered a multi-period model, where each period ends with a deadline upon which orders that are due for shipment in that period need to be handed over to an external parcel delivery company. During each such period, online customers arrive according to a Poisson process. Our time-dependent shipment policy prescribes shipment fees for same-day as well as next-day shipment, accompanied by a cutoff point in the period after which same-day shipment is no longer offered. Customers select their preferred shipment option among the available options by trading off their own utility. The processing capacity of the fulfillment center within each period is randomly distributed. If the available capacity is unable to process all orders due for shipment in a given period, then these orders are carried over for shipment in the subsequent period.
We developed a discrete-time Markov chain model for the steady-state analysis of the time-dependent shipment policy. We derived closed-form expressions for the relevant performance measures as well as structural properties of the globally optimal time-dependent shipment policy with one cutoff point and multiple shipment fees that depend on the time remaining until the end of the period. Motivated by its practical appeal, we numerically studied a simple instance of our time-dependent shipment policy that switches between shipment fees only once. We benchmarked this
simple time-dependent approach against two benchmark policies that are often applied in practice. Both benchmark approaches rely on static, time-independent fees for both shipment options, but differ in whether they include a cutoff point for the same-day shipment option or not. Our numerical results indicate that temporal differentiation of shipment fees as well as shipment options increases profits significantly.
# An improved kernelization algorithm for Trivially Perfect Editing

Maël Dumas, Anthony Perez (arXiv:2306.16899, June 2023, http://arxiv.org/abs/2306.16899v2)
###### Abstract
In the Trivially Perfect Editing problem one is given an undirected graph \(G=(V,E)\) and an integer \(k\) and seeks to add or delete at most \(k\) edges in \(G\) to obtain a trivially perfect graph. In a recent work, Dumas _et al._[16] proved that this problem admits a kernel with \(O(k^{3})\) vertices. This result heavily relies on the fact that the size of trivially perfect modules can be bounded by \(O(k^{2})\) as shown by Drange and Pilipczuk [14]. To obtain their cubic vertex-kernel, Dumas _et al._[16] then showed that a more intricate structure, so-called _comb_, can be reduced to \(O(k^{2})\) vertices. In this work we show that the bound can be improved to \(O(k)\) for both aforementioned structures and thus obtain a kernel with \(O(k^{2})\) vertices. Our approach relies on the straightforward yet powerful observation that any large enough structure contains unaffected vertices whose neighborhood remains unchanged by an editing of size \(k\), implying strong structural properties.
## 1 Introduction
In the Trivially Perfect Editing problem one is given an undirected graph \(G=(V,E)\) and an integer \(k\) and seeks to _edit_ (add or delete) at most \(k\) edges in \(G\) so that the resulting graph is trivially perfect (_i.e._ does not contain any cycle on four vertices nor path on four vertices as an induced subgraph). More formally we consider the following problem:
Trivially Perfect Editing --
**Input**: A graph \(G=(V,E)\), a _parameter_\(k\in\mathbb{N}\)
**Question**: Does there exist a set \(F\subseteq(V\times V)\) of size at most \(k\) such that the graph \(H=(V,E\triangle F)\) is trivially perfect?
Here \(E\triangle F=(E\cup F)\setminus(E\cap F)\) denotes the symmetric difference between sets \(E\) and \(F\). We define similarly the deletion (resp. completion) variant of the problem by only allowing to delete (resp. add) edges. Graph modification covers a broad range of well-studied problems that find applications in various areas. For instance, Trivially Perfect Editing has been used to define the community structure of complex networks by Nastos and Gao [30] and is closely related to the well-studied graph parameter tree-depth [20, 32]. Theoretically, some of the earliest NP-Complete problems are graph modification problems [19, 25]. Regarding edge (graph) modification problems, one of the most notable one is the Minimum Fill-in problem which aims at adding edges to a given graph to obtain a chordal graph (_i.e._ a graph that does
not contain any induced cycle of length at least \(4\)). In a seminal result, Kaplan _et al._[24] proved that Minimum Fill-in admits a parameterized algorithm as well as a kernel containing \(O(k^{3})\) vertices. This result was later improved to \(O(k^{2})\) vertices by Natanzon _et al._[31]. Parameterized complexity and kernelization algorithms provide a powerful theoretical framework to cope with decision problems.
**Parameterized complexity** A parameterized problem \(\Pi\) is a language of \(\Sigma^{*}\times\mathbb{N}\), where \(\Sigma\) is a finite alphabet. An instance of a parameterized problem is a pair \((I,k)\) with \(I\subseteq\Sigma^{*}\) and \(k\in\mathbb{N}\), called the _parameter_. A parameterized problem is said to be _fixed-parameter tractable_ if it can be decided in time \(f(k)\cdot|I|^{O(1)}\). An equivalent definition of fixed-parameter tractability is the notion of _kernelization_. Given an instance \((I,k)\) of a parameterized problem \(\Pi\), a _kernelization algorithm_ for \(\Pi\) (kernel for short) is a polynomial-time algorithm that outputs an equivalent instance \((I^{\prime},k^{\prime})\) of \(\Pi\) such that \(|I^{\prime}|\leqslant h(k)\) for some function \(h\) depending on the parameter only and \(k^{\prime}\leqslant k\). It is well-known that a parameterized problem is fixed-parameter tractable if and only if it admits a kernelization algorithm (see _e.g._[18]). Problem \(\Pi\) is said to admit a _polynomial kernel_ whenever \(h\) is a polynomial.
**Related work** Since the work of Kaplan _et al._[24], many polynomial kernels for edge modification problems have been devised (see _e.g._[1, 2, 3, 11, 12, 14, 15, 23, 26]). There is also evidence that under some reasonable theoretical complexity assumptions, some graph modification problems do not admit polynomial kernels [8, 22, 27, 29]. We refer the reader to a recent comprehensive survey on kernelization for edge modification problems by Crespelle _et al._[9]. The Trivially Perfect Editing problem has been well-studied in the literature [1, 4, 5, 14, 16, 21, 28, 30]. Recall that trivially perfect graphs are a subclass of chordal graphs that additionally do not contain any path on four vertices as an induced subgraph. These graphs are also known as _quasi-threshold_ graphs. We note here that while the NP-Completeness of completion and deletion toward trivially perfect graphs has been known for some time [6, 33], the NP-Completeness of Trivially Perfect Editing remained open until the work of Nastos and Gao [30]. Thanks to a result of Cai [7] stating that graph modification toward _any_ graph class characterized by a finite set of forbidden induced subgraphs is fixed-parameter tractable, Trivially Perfect Editing is fixed-parameter tractable. Regarding kernelization algorithms, Drange and Pilipczuk [14] provided a kernel containing \(O(k^{7})\) vertices, a result that was recently improved to \(O(k^{3})\) vertices by Dumas _et al._[16]. These results also work for the deletion and completion variants. For the latter problem, a recent result by Bathie _et al._[1] improves the bound to \(O(k^{2})\) vertices.
As part of the proof for the size of their cubic vertex-kernel, Dumas _et al._[16] subsequently showed the following result. The structures used in Theorem 1 shall be defined later.
**Theorem 1** ([16]).: _Let \((G,k)\) be an instance\({}^{1}\) of Trivially Perfect Editing such that the sizes of its trivially perfect modules and combs are bounded by \(p(k)\) and \(c(k)\), respectively. If \((G,k)\) is a Yes-instance then \(G\) has \(O(k\cdot(p(k)+c(k)))\) vertices._
Footnote 1: As we shall see Section 3.1 the instance also needs to be further reduced under standard reduction rules.
The cubic vertex-kernel of Dumas _et al._[16] relied on a result of Drange and Pilipczuk [14] that proved that \(p\in O(k^{2})\) and then used new reduction rules implying that \(c\in O(k^{2})\).
**Our contribution** We provide reduction rules and structural properties on trivially perfect graphs that will imply an \(O(k)\) bound for both functions \(p\) and \(c\) of Theorem 1. These new reduction rules allow us to prove the existence of a quadratic vertex-kernel for Trivially Perfect
Editing. To bound the size of trivially perfect modules by \(O(k)\), we first reduce the ones that contain a large matching of non-edges with the use of a simple reduction rule. To bound the ones that do not contain such structures, we will rely on so-called _combs_, introduced by Dumas _et al._[16]. Combs correspond to parts of the graph that induce trivially perfect graphs (but not necessarily modules) with strong properties on their neighborhoods. They are composed of two main parts, called the _shaft_ and the _teeth_, that will be independently reduced to a size linear in \(k\). The reduction rule dealing with shafts will ultimately allow us to bound the size of trivially perfect modules with no large matching of non-edges. Our approach relies on the straightforward yet powerful observation that any large enough structure contains unaffected vertices whose neighborhood remains unchanged by an editing of size \(k\). Finally, we note that our kernel works for both the deletion and completion variants of the problem.
**Outline** Section 2 presents some preliminary notions and structural properties on (trivially perfect) graphs. Section 3 describes known as well as our additional reduction rules to obtain the claimed kernelization algorithm, while Section 4 explains why our kernel is safe for the deletion variant of the problem. We conclude with some perspectives in Section 5.
## 2 Preliminaries
We consider simple, undirected graphs \(G=(V,E)\) where \(V\) denotes the vertex set of \(G\) and \(E\subseteq(V\times V)\) its edge set. We will sometimes use \(V(G)\) and \(E(G)\) to clarify the context. The open (respectively closed) neighborhood of a vertex \(u\in V\) is denoted by \(N_{G}(u)=\{v\in V\mid\{u,v\}\in E\}\) (respectively \(N_{G}[u]=N_{G}(u)\cup\{u\}\)). Given a subset of vertices \(S\subseteq V\) the neighborhood of \(S\) is defined as \(N_{G}(S)=\{v\in V(G)\setminus S\mid\exists u\in S,\ \{u,v\}\in E\}\). Similarly, given a vertex \(u\in V\) and \(S\subseteq V\) we let \(N_{S}(u)=N_{G}(u)\cap S\). In all aforementioned cases we forget the subscript mentioning graph \(G\) whenever the context is clear. Given a subset of vertices \(S\subseteq V\) we denote by \(G[S]\) the subgraph induced by \(S\), that is \(G[S]=(S,E\cap(S\times S))\). In a slight abuse of notation, we use \(G\setminus S\) to denote the induced subgraph \(G[V\setminus S]\). A _connected component_ is a maximal subset of vertices \(S\subseteq V\) such that \(G[S]\) is connected. A _module_ of \(G\) is a set \(M\subseteq V\) such that for all \(u,v\in M\) it holds that \(N(u)\setminus M=N(v)\setminus M\). Two vertices \(u\) and \(v\) are _true twins_ whenever \(N[u]=N[v]\), and a _critical clique_ is a maximal set of true twins. A graph is _trivially perfect_ if and only if it does not contain any \(C_{4}\) (a cycle on \(4\) vertices) nor \(P_{4}\) (a path on \(4\) vertices) as an induced subgraph. In the remainder of this section we describe characterizations and structural properties of trivially perfect graphs. The first one relies on the well-known fact that any connected trivially perfect graph contains a universal vertex (see _e.g._[35]).
**Definition 1** (Universal clique decomposition, [13]).: _A universal clique decomposition (UCD) of a connected graph \(G=(V,E)\) is a pair \(\mathcal{T}=\big{(}T=(V_{T},E_{T}),\mathcal{B}=\{B_{t}\}_{t\in V_{T}}\big{)}\) where \(T\) is a rooted tree and \(\mathcal{B}\) is a partition of the vertex set \(V\) into disjoint nonempty subsets such that:_
* _if_ \(\{v,w\}\in E\) _and_ \(v\in B_{t}\)_,_ \(w\in B_{s}\) _then_ \(s\) _and_ \(t\) _are on a path from a leaf to the root, with possibly_ \(s=t\)_,_
* _for every node_ \(t\in V_{T}\)_, the set_ \(B_{t}\) _of vertices is the universal clique of the induced subgraph_ \(G[\bigcup_{s\in V(T_{t})}B_{s}]\)_, where_ \(T_{t}\) _denotes the subtree of_ \(T\) _rooted at_ \(t\)_._
A simple way of understanding Definition 1 is to observe that such a decomposition can be obtained by removing the set \(U\) of universal vertices of \(G\) and then recursively repeating this process on every trivially perfect connected component of \(G\setminus U\). Drange _et al._[13] showed that
any connected graph admitting an UCD is trivially perfect, thus proving equivalence between both definitions. Using the notion of UCD, Dumas _et al._[16] proved the following characterization for trivially perfect graphs that will be heavily used in our reduction rules. A collection of subsets \(\mathcal{F}\subseteq 2^{U}\) over some universe \(U\) is a _nested family_ if \(A\subseteq B\) or \(B\subseteq A\) holds for any \(A,B\in\mathcal{F}\).
**Lemma 1** ([16]).: _Let \(G=(V,E)\) be a graph, \(S\subseteq V\) a maximal clique of \(G\) and \(\{K_{1},...,K_{r}\}\) the set of connected components of \(G\backslash S\). The graph \(G\) is trivially perfect if and only if the following conditions are verified:_
1. \(G[S\cup K_{i}]\) _is trivially perfect,_ \(1\leqslant i\leqslant r\)__
2. \(\bigcup_{1\leqslant i\leqslant r}\{N_{G}(K_{i})\}\) _is a nested family_
3. \((K_{i}\times N_{G}(K_{i}))\subseteq E\)_,_ \(1\leqslant i\leqslant r\)_. In other words,_ \(K_{i}\) _is a module of_ \(G\)_._
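Definition 1 also suggests a simple recognition procedure: repeatedly remove the universal vertices of every connected component and recurse, as described above. The following Python sketch (graphs as adjacency dictionaries of sets; all names are ours) illustrates this peeling:

```python
def connected_components(adj, vertices):
    """Connected components of the subgraph induced by `vertices` (adjacency sets)."""
    seen, comps = set(), []
    for s in vertices:
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] & (vertices - comp))
        seen |= comp
        comps.append(comp)
    return comps

def is_trivially_perfect(adj, vertices=None):
    """True iff the induced subgraph admits a universal clique decomposition (Definition 1)."""
    if vertices is None:
        vertices = set(adj)
    for comp in connected_components(adj, vertices):
        universal = {v for v in comp if adj[v] >= comp - {v}}
        if not universal:          # a connected trivially perfect graph has a universal vertex
            return False
        rest = comp - universal    # peel off the root bag and recurse
        if rest and not is_trivially_perfect(adj, rest):
            return False
    return True

p4 = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}      # induced P4: not trivially perfect
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}    # a star: trivially perfect
print(is_trivially_perfect(p4), is_trivially_perfect(star))  # False True
```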
In the remaining of this paper, a _\(k\)-editing of \(G\) into a trivially perfect graph_ is a set \(F\subseteq(V\times V)\) such that \(|F|\leqslant k\) and the graph \(H=(V,E\triangle F)\) is trivially perfect. Here \(E\triangle F=(E\cup F)\setminus(E\cap F)\) denotes the symmetric difference between sets \(E\) and \(F\). For the sake of readability, we simply speak of \(k\)-editing of \(G\). We say that \(F\) is a \(k\)-completion (resp. \(k\)-deletion) when \(H=(V,E\cup F)\) (resp. \(H=(V,E\setminus F)\)) is trivially perfect. A vertex is _affected_ by a \(k\)-editing \(F\) if it is contained in some pair of \(F\) and _unaffected_ otherwise.
**Packing, anti-matching and blow-up** We now define some structures and operators that will be useful for our kernelization algorithm. We assume in the remainder of this section that we are given a graph \(G=(V,E)\). The notion of _\(r\)-packing_ will be used in reduction rules to ensure the existence of unaffected vertices in ordered sets of critical cliques or of trivially perfect modules.
**Definition 2** (\(r\)-packing).: _Let \(\mathcal{S}=\{C_{1},\ldots,C_{q}\}\) be an ordered collection of pairwise disjoint subsets of \(V\). We say that \(\mathcal{C}\subseteq\mathcal{S}\) is a \(r\)-packing of \(\mathcal{S}\) if \(\mathcal{C}=\{C_{1},\ldots,C_{p}\}\) for \(1\leqslant p\leqslant q\), \(\sum_{i=1}^{p}|C_{i}|\geqslant r\) and the number of vertices contained in \(\mathcal{C}\) is minimum for this property._
In a slight abuse of notation we use \(\mathcal{C}\) to denote both \(\{C_{1},\ldots,C_{p}\}\) and the set \(\cup_{i=1}^{p}C_{i}\).
**Observation 1**.: _Let \(\mathcal{S}=\{C_{1},\ldots,C_{q}\}\) be an ordered collection of pairwise disjoint subsets of \(V\) such that \(|C_{j}|\leqslant c\), for \(1\leqslant j\leqslant q\) and some integer \(c>0\). Let \(\mathcal{C}=\{C_{1},\ldots,C_{p}\}\) be a \(r\)-packing of \(\mathcal{S}\). Then \(\sum_{i=1}^{p}|C_{i}|\leqslant r+(c-1)\)._
Proof.: Since \(\sum_{i=1}^{p}|C_{i}|\geqslant r\) and the number of vertices in \(\mathcal{C}\) is minimum for this property we have that \(\sum_{i=1}^{p-1}|C_{i}|\leqslant r-1\). The result follows from the fact that \(|C_{p}|\leqslant c\).
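For illustration, an \(r\)-packing can be extracted greedily by scanning the ordered collection; the sketch below (names ours) follows Definition 2 directly:

```python
def r_packing(collection, r):
    """Shortest prefix C_1, ..., C_p of the ordered collection covering at least r vertices."""
    packing, covered = [], 0
    for block in collection:
        packing.append(block)
        covered += len(block)
        if covered >= r:
            return packing
    return None   # the whole collection contains fewer than r vertices

# Blocks of size at most c: by Observation 1 the packing has at most r + c - 1 vertices.
blocks = [{'a', 'b'}, {'c'}, {'d', 'e', 'f'}, {'g'}]
print(r_packing(blocks, 4))   # first three blocks, 6 <= 4 + (3 - 1) vertices
```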
**Definition 3** (Anti-matching).: _An anti-matching of \(G\) is a set of pairwise disjoint pairs \(\{u,v\}\) of vertices of \(G\) such that \(\{u,v\}\not\in E\)._
In a slight abuse of notation we denote by \(V(D)\) the set of vertices contained in pairs of an anti-matching \(D\).
**Observation 2**.: _Let \((G,k)\) be a Yes-instance of Trivially Perfect Editing and \(M\) be a module containing a \((k+1)\)-sized anti-matching. Let \(F\) be a \(k\)-editing of \(G\) and \(H=G\triangle F\). Then \(N_{G}(M)\) is a clique in \(H\)._
Proof.: Let \(D=\{\{u_{i},v_{i}\}\mid 1\leqslant i\leqslant k+1\}\) be a \((k+1)\)-sized anti-matching of \(M\). Assume for a contradiction that \(N_{G}(M)\) is not a clique in \(H\) and let \(\{u,v\}\) be a non-edge of \(H\) with \(u,v\in N_{G}(M)\). By the pigeonhole principle, since \(|F|\leqslant k\) there exists \(1\leqslant j\leqslant k+1\) such that \(\{u_{j},v_{j}\}\notin F\) and for every \(x\in V(G)\setminus M\), \(\{u_{j},x\},\{v_{j},x\}\notin F\). Hence \(\{u_{j},u,v_{j},v\}\) induces a \(C_{4}\) in \(H\), a contradiction.
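An anti-matching witnessing Observation 2 can be found greedily by pairing non-adjacent unused vertices; the greedy pass below (names ours) need not be maximum, but any \(k+1\) pairs it returns suffice:

```python
def greedy_anti_matching(adj, m):
    """Pairwise disjoint non-edges inside m (adj maps each vertex to its neighbour set)."""
    pairs, free = [], list(m)
    while len(free) >= 2:
        u = free.pop()
        partner = next((v for v in free if v not in adj[u]), None)
        if partner is not None:
            free.remove(partner)
            pairs.append((u, partner))
    return pairs

# M induces two disjoint edges ab and cd; any greedy pass finds two disjoint non-edges.
adj = {'a': {'b'}, 'b': {'a'}, 'c': {'d'}, 'd': {'c'}}
print(greedy_anti_matching(adj, ['a', 'b', 'c', 'd']))   # e.g. [('d', 'a'), ('c', 'b')]
```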
We conclude this section by introducing a gluing operation on trivially perfect graphs, namely _blow-up_, that will ease the design of some reduction rules.
**Definition 4** (Blow-up).: _Let \(u\) be a vertex of \(G=(V,E)\) and \(H=(V_{H},E_{H})\) be any graph. The blow-up of \(G\) by \(H\) at \(u\), denoted \(G(u\to H)\) is the graph obtained by replacing \(u\) by \(H\) in \(G\). More formally:_
\[G(u\to H)=\left((V\setminus\{u\})\cup V_{H},\;E(G\setminus\{u\})\cup E_{H}\cup(V_{H}\times N_{G}(u))\right)\]
**Proposition 1**.: _Assume that \(G\) is trivially perfect and let \(u\) be a vertex of \(G\) such that \(N_{G}[u]\) is a clique. For any trivially perfect graph \(H\), the graph \(G(u\to H)\) is trivially perfect._
Proof.: Let \(S\subseteq V\setminus\{u\}\) be any maximal clique of \(G\) containing \(N_{G}(u)\). We apply the forward direction of Lemma 1 on \(S\) to obtain components \(\{K_{1},\ldots,K_{r}\}\) that are modules such that \(G[S\cup K_{i}]\) is trivially perfect for every \(1\leqslant i\leqslant r\) and \(\bigcup_{1\leqslant i\leqslant r}\{N_{G}(K_{i})\}\) is a nested family. Note that by construction and w.l.o.g., we may assume \(K_{1}=\{u\}\). The result then directly follows from the reverse direction of Lemma 1 by replacing \(K_{1}\) by \(H\).
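The blow-up operation itself is easy to carry out on adjacency lists; the following sketch (names ours, vertex sets assumed disjoint) implements Definition 4:

```python
def blow_up(g, u, h):
    """Return G(u -> H); the vertex names of G minus u and of H are assumed disjoint."""
    result = {v: set(nbrs) - {u} for v, nbrs in g.items() if v != u}
    for x, nbrs in h.items():
        result[x] = set(nbrs) | set(g[u])   # edges of H plus the former neighbourhood of u
    for v in g[u]:
        result[v] |= set(h)                 # every vertex of N_G(u) sees all of H
    return result

# Replacing a vertex whose closed neighbourhood is a clique by a trivially perfect H
# keeps the graph trivially perfect (Proposition 1).
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}       # a triangle; N[2] is a clique
h = {'x': {'y'}, 'y': {'x'}, 'z': set()}    # an edge plus an isolated vertex
print(blow_up(g, 2, h))
```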
## 3 Reduction rules
In the remainder of this section we assume that we are given an instance \((G=(V,E),k)\) of Trivially Perfect Editing.
### Standard reduction rules
We first describe some well-known reduction rules [2, 3, 14, 16] that are essential to obtain a vertex-kernel using Theorem 1 [16]. We will assume in the remainder of this work that the instance at hand is reduced under Rules 1 and 2, meaning that none of them applies to the instance.
**Rule 1**.: _Let \(C\subseteq V\) be a connected component of \(G\) such that \(G[C]\) is trivially perfect. Remove \(C\) from \(G\)._
**Rule 2**.: _Let \(K\subseteq V\) be a critical clique of \(G\) such that \(|K|>k+1\). Remove \(|K|-(k+1)\) arbitrary vertices in \(K\) from \(G\)._
**Lemma 2** (Folklore, [2, 14]).: _Rules 1 and 2 are safe and can be applied in polynomial time._
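As an illustration of Rule 2, critical cliques can be computed by grouping vertices with identical closed neighbourhoods and truncating each class; the sketch below uses our own helper names, and Rule 1 would additionally rely on a trivially perfect recognition routine such as the one sketched in Section 2:

```python
def critical_cliques(adj):
    """Group vertices by their closed neighbourhood (classes of true twins)."""
    classes = {}
    for v in adj:
        classes.setdefault(frozenset(adj[v] | {v}), set()).add(v)
    return list(classes.values())

def apply_rule_2(adj, k):
    """Keep at most k + 1 vertices of every critical clique, removing the rest."""
    removed = set()
    for clique in critical_cliques(adj):
        removed.update(sorted(clique)[k + 1:])   # sorted only to make the choice deterministic
    return {v: adj[v] - removed for v in adj if v not in removed}

# A clique on 5 vertices forms a single critical clique; with k = 2 it shrinks to 3 vertices.
clique5 = {i: {j for j in range(5) if j != i} for i in range(5)}
print(sorted(apply_rule_2(clique5, 2)))   # [0, 1, 2]
```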
### An \(O(k)\) bound on the size of trivially perfect modules
Using an additional reduction rule bounding the size of independent sets in any trivially perfect module by \(O(k)\), Drange and Pilipczuk [14] proved that such modules can be reduced to \(O(k^{2})\) vertices. We strengthen this result by proving that trivially perfect modules can further be reduced to \(O(k)\) vertices. We first deal with modules that contain a large anti-matching.
**Rule 3**.: _Let \(M\subseteq V\) be a trivially perfect module of \(G\). If \(G[M]\) contains a \((k+1)\)-sized anti-matching \(D\), then remove the vertices contained in \(M\setminus V(D)\)._
**Lemma 3**.: _Rule 3 is safe._
Proof.: Let \(G^{\prime}=(V^{\prime},E^{\prime})\) be the graph obtained after application of Rule 3. We need to prove that \((G=(V,E),k)\) is a Yes-instance if and only if \((G^{\prime}=(V^{\prime},E^{\prime}),k)\) is a Yes-instance. The forward direction is straightforward since \(G^{\prime}\) is an induced subgraph of \(G\) and trivially perfect graphs are hereditary. We now consider the reverse direction. Let \(M^{\prime}=V(D)\) denote the set of vertices kept by Rule 3. Moreover, let \(F^{\prime}\) be a \(k\)-editing of \(G^{\prime}\) and \(H^{\prime}=G^{\prime}\triangle F^{\prime}\). We will construct a \(k\)-editing \(F^{*}\) of \(G\). Note that since the pairs contained in an anti-matching are disjoint (Definition 3), \(|M^{\prime}|\geqslant 2(k+1)\). Moreover, since \(|F^{\prime}|\leqslant k\) there are at most \(2k\) affected vertices. Hence let \(u\) be an unaffected vertex of \(M^{\prime}\). By Observation 2 and since \(M^{\prime}\) contains a \((k+1)\)-sized anti-matching we have that \(N_{G^{\prime}}(M^{\prime})\) is a clique in \(H^{\prime}\). The graph \(H_{u}=H^{\prime}\setminus(M^{\prime}\setminus\{u\})\) is trivially perfect by heredity and \(N_{H_{u}}(u)=N_{G^{\prime}}(M^{\prime})\). It follows that \(N_{H_{u}}(u)\) is a clique and Proposition 1 implies that the graph \(H=H_{u}(u\to M)\) is trivially perfect. Let \(F^{*}\) be the editing such that \(H=G\triangle F^{*}\). Since \(u\) is unaffected by \(F^{\prime}\) and \(u\in M\) we have \(N_{H_{u}}(u)=N_{G}(M)\). Hence, since \(M\) is a module in \(G\) we have that \(N_{H}(v)=N_{G}(v)\) for every vertex \(v\in M\), implying that \(F^{*}\subseteq F^{\prime}\). This concludes the proof.
In order to bound the size of any trivially perfect module by \(O(k)\), we actually prove a more general reduction rule that will be useful for the rest of our kernelization algorithm. This rule operates on a more intricate structure, so-called _comb_[16], that induces a trivially perfect graph but not necessarily a module.
**Definition 5** (Comb [16]).: _A pair \((C,R)\) of disjoint subsets of \(V\) is a comb of \(G\) if:_
* \(G[C]\) _is a clique that can be partitioned into_ \(l\) _critical cliques_ \(\{C_{1},...,C_{l}\}\)__
* \(R\) _can be partitioned into_ \(l\) _non-empty non-adjacent trivially perfect modules_ \(\{R_{1},...,R_{l}\}\)__
* \(N_{G}(C_{i})\cap R=\bigcup_{j=i}^{l}R_{j}\) _and_ \(N_{G}(R_{i})\cap C=\bigcup_{j=1}^{i}C_{j}\) _for_ \(1\leqslant i\leqslant l\)__
* _there exist two (possibly empty) subsets of vertices_ \(V_{f},V_{p}\subseteq V(G)\backslash\{C\cup R\}\) _such that:_
* \(\forall x\in C,\ N_{G}(x)\backslash(C\cup R)=V_{p}\cup V_{f}\) _and_
* \(\forall y\in R,\ N_{G}(y)\backslash(C\cup R)=V_{p}\)
Figure 1: A comb of a graph \(G=(V,E)\) with shaft \(C\) and teeth \(R\). Each set \(C_{i}\) is a critical clique while each set \(R_{i}\) induces a (possibly disconnected) trivially perfect module, \(1\leqslant i\leqslant l\). Notice that the sets \(V_{p}\) and \(V_{f}\) might be adjacent to some other vertices of the graph.
Given a comb \((C,R)\), \(C\) is called the _shaft_ of the comb and \(R\) the _teeth_ of the comb. See Fig. 1 for an illustration of Definition 5. Recall that we assume that the graph is reduced under Rule 2, which means that \(|C_{i}|\leqslant k+1\) for \(1\leqslant i\leqslant l\). Dumas _et al._[16] showed the following proposition on the structure of combs.
**Proposition 2** ([16]).: _Given a comb \((C,R)\) of \(G\), the subgraph \(G[C\cup R]\) is trivially perfect. Moreover the sets \(V_{p}\) and \(V_{f}\), and the ordered partitions \((C_{1},\ldots,C_{l})\) of \(C\) and \((R_{1},\ldots,R_{l})\) of \(R\) are uniquely determined._
In the following we assume that any comb \((C,R)\) is given with the ordered partitions \((C_{1},\ldots,C_{l})\) of \(C\) and \((R_{1},\ldots,R_{l})\) of \(R\). We note here that Definition 5 slightly differs from the one given in [16] where the set \(V_{f}\) was required to be non-empty for technical reasons. Dropping this constraint will ease the presentation of our reduction rules.
We now give several observations that will help understand Definition 5, in particular its relation to trivially perfect modules. Given a trivially perfect graph \(G=(V,E)\) and its UCD \(\mathcal{T}_{G}=(T,\mathcal{B})\), one can construct a comb \((C,R)\) of \(G\) by simply taking a path \(P\) from a node \(v_{1}\) of \(T\) to one of its descendants \(v_{l}\). The shaft \(C\) consists of the vertices in the bags of this path, and the teeth \(R\) consist of the vertices in the bags of the subtrees rooted at the children (not on \(P\)) of the nodes of \(P\). We can observe that in this case, \(V_{p}\) corresponds to the vertices in the bags on the path from the parent of \(v_{1}\) to the root of \(T\) and that \(V_{f}\) is empty.
In particular, the vertex set of any connected trivially perfect graph can be partitioned into a comb \((C,R)\) by taking a path from the root of its UCD to one of its leaves. This means that when \(V_{p}=V_{f}=\emptyset\), Definition 5 corresponds to a connected trivially perfect graph. Similarly, if only the set \(V_{f}\) is empty then Definition 5 corresponds to a _connected trivially perfect module_ since \(N_{G}(R)\setminus C=N_{G}(C)\setminus R=V_{p}\).
The following comes directly from the definition of a comb and holds whether or not the sets \(V_{p}\) and \(V_{f}\) are empty.
**Observation 3**.: _The set of vertices \(C\) (resp. \(R\)) is a module of \(G\setminus R\) (resp. \(G\setminus C\))._
We will show that combs can be safely reduced to \(O(k)\) vertices. We first focus on combs having a large shaft, which will allow us to reduce trivially perfect modules with small anti-matching to \(O(k)\) vertices (Lemma 6). Then we turn our attention to combs with many vertices in the teeth to bound the size of every comb to \(O(k)\) vertices (Lemma 8).
### Combs with large shafts

Dumas _et al._[16] showed that the length of a comb (_i.e._ the number \(l\) of different critical cliques in the shaft) can be reduced linearly in \(k\). However, as critical cliques contain \(O(k)\) vertices by Rule 2, this only allowed the authors to bound the number of vertices in shafts of combs by \(O(k^{2})\). Rule 4 presented in this section keeps two sets \(\mathcal{C}_{a}\) and \(\mathcal{C}_{b}\) containing a linear number (in \(k\)) of vertices at the beginning and at the end of the shaft, allowing us to bound its size linearly in \(k\). The two sets \(\mathcal{C}_{a}\) and \(\mathcal{C}_{b}\) will be large enough to ensure the existence of two vertices that are unaffected by a given \(k\)-editing of the graph. We will use such vertices to prove that there exists a \(k\)-editing of the graph that does not affect any vertex in the shaft lying between \(\mathcal{C}_{a}\) and \(\mathcal{C}_{b}\), implying the safeness of the rule.
**Rule 4**.: _Let \((C,R)\) be a comb of \(G\) such that there exist disjoint \((2k+1)\)-packings \(\mathcal{C}_{a}\) of \(\{C_{1},\ldots,C_{l}\}\) and \(\mathcal{C}_{b}\) of \(\{C_{l},C_{l-1},\ldots,C_{1}\}\). Remove \(C^{\prime}=C\setminus(\mathcal{C}_{a}\cup\mathcal{C}_{b})\) from \(G\)._
**Lemma 4**.: _Rule 4 is safe._
Proof.: Let \(G^{\prime}=G\setminus C^{\prime}\) be the graph obtained after application of Rule 4. Since \(G^{\prime}\) is an induced subgraph of \(G\) and since trivially perfect graphs are hereditary, any \(k\)-editing of \(G\) is a \(k\)-editing of \(G^{\prime}\).
For the reverse direction, let \(F^{\prime}\) be a \(k\)-editing of \(G^{\prime}\) and \(H^{\prime}=G^{\prime}\triangle F^{\prime}\). We will construct a \(k\)-editing \(F^{*}\) of \(G\). Let \(c_{a}\) and \(c_{b}\) be unaffected vertices in \(\mathcal{C}_{a}\) and \(\mathcal{C}_{b}\), respectively. Note that both sets contain at least \(2k+1\) vertices and that \(F^{\prime}\) affects at most \(2k\) vertices, hence \(c_{a}\) and \(c_{b}\) are well-defined. Let \(C_{a}\) and \(C_{b}\) be the critical cliques of \(C\) containing \(c_{a}\) and \(c_{b}\), \(1\leqslant a<b\leqslant l\). Moreover, let \(C_{\circ}=C_{a+1}\cup\ldots\cup C_{b-1}\) and \(R_{\circ}=R_{a}\cup\ldots\cup R_{b-1}\). Similarly, let \(C_{<}=C_{1}\cup\ldots\cup C_{a}\), \(C_{>}=C_{b}\cup\ldots\cup C_{l}\) and \(R_{>}=R_{b}\cup\ldots\cup R_{l}\). These sets are depicted in Figure 2. Finally, let \(G_{\circ}=G\setminus C_{\circ}\) and \(H_{\circ}=H^{\prime}\setminus C_{\circ}\). Notice in particular that \(H_{\circ}\) is trivially perfect and that \(C^{\prime}\subseteq C_{\circ}\).
Let \(F_{\circ}\subseteq F^{\prime}\) be the \(k\)-editing such that \(H_{\circ}=G_{\circ}\triangle F_{\circ}\) and \(S_{\circ}\) be a maximal clique of \(H_{\circ}\) containing \(\{c_{a},c_{b}\}\). Notice that since \(c_{a}\) and \(c_{b}\) are unaffected, \(S_{\circ}\) is included in \(N_{G_{\circ}}(\{c_{a},c_{b}\})=C\cup V_{p}\cup V_{f}\cup R_{>}\). We use Lemma 1 on \(S_{\circ}\) to obtain a set of connected components \(\{K_{1},\ldots,K_{r}\}\) of \(H_{\circ}\setminus S_{\circ}\) such that \(\{K_{1},\ldots,K_{r}\}\) are modules in \(H_{\circ}\) whose (possibly empty) neighborhoods in \(S_{\circ}\) form a nested family. We first modify \(F_{\circ}\) to obtain a \(k\)-editing of \(G_{\circ}\) where vertices of \(R_{\circ}\) are affected uniformly.
**Claim 1**.: _There exists a \(k\)-editing \(F^{*}\) of \(G_{\circ}\) such that, in \(H^{*}=G_{\circ}\triangle F^{*}\), \(R_{\circ}\) is a module and \(H^{*}[R_{\circ}]=G[R_{\circ}]\)._
Proof.: We begin with several useful observations. First, \(R_{\circ}\) is a module in \(G_{\circ}\) since \(R\supset R_{\circ}\) is a module in \(G\setminus C\) (Observation 3) and vertices of \(R_{\circ}\) are adjacent to \(C_{<}\) and non-adjacent to \(C_{>}\). Next, since any component \(K_{i}\) is a module in \(H_{\circ}\), \(1\leqslant i\leqslant r\), and since \(c_{a}\) and \(c_{b}\) are unaffected by \(F_{\circ}\), we have \(N_{H_{\circ}}(K_{i})\cap\{c_{a},c_{b}\}=N_{G_{\circ}}(K_{i})\cap\{c_{a},c_{b}\}\). In other words, vertices in the same component \(K_{i}\) must have the same adjacency with \(\{c_{a},c_{b}\}\) in \(G_{\circ}\) and in \(H_{\circ}\). Similarly, no vertex \(v\in R_{\circ}\) belongs to \(S_{\circ}\) since \(N_{G_{\circ}}(v)\cap\{c_{b}\}=\emptyset\). Moreover, the only vertices of \(G_{\circ}\) that are adjacent to \(c_{a}\) but not \(c_{b}\) are exactly those of \(R_{\circ}\). Hence for any vertex \(v_{\circ}\in R_{\circ}\) it holds that \(N_{H_{\circ}}(v_{\circ})\subseteq S_{\circ}\cup R_{\circ}\).
Assume now that \(R_{\circ}\) is not a module in \(H_{\circ}\) and let \(v_{\circ}\in R_{\circ}\) be a vertex contained in the least number of pairs of \(F_{\circ}\) with the other element in \(S_{\circ}\). Consider the graph \(\tilde{H}=H_{\circ}\setminus(R_{\circ}\setminus\{v_{\circ}\})\), which is trivially perfect by heredity. Since \(N_{H_{\circ}}(v_{\circ})\subseteq S_{\circ}\cup R_{\circ}\), it follows that \(N_{\tilde{H}}(v_{\circ})\subseteq S_{\circ}\) is a clique. Hence Proposition 1 implies that the graph \(H^{*}=\tilde{H}(v_{\circ}\to G[R_{\circ}])\) is trivially perfect. Let \(F^{*}\) be the editing such that \(H^{*}=G_{\circ}\triangle F^{*}\). By the choice of \(v_{\circ}\) we have \(|F^{*}|\leqslant|F_{\circ}|\). It follows that \(F^{*}\) is a desired \(k\)-editing, concluding the proof of Claim 1.
Figure 2: Illustration of the comb and the sets used in the proof of Lemma 4. The circles are critical cliques of the shaft and the triangles are teeth. The red vertices correspond to \(c_{a}\) and \(c_{b}\), the light blue rectangles correspond to sets \(\mathcal{C}_{a}\) and \(\mathcal{C}_{b}\) and the light red rectangle corresponds to \(C^{\prime}\), which is removed by Rule 4.
We henceforth consider \(H^{*}=G_{\circ}\triangle F^{*}\) where \(F^{*}\) is the \(k\)-editing from Claim 1. Note that the components around \(S_{\circ}\) may be different in \(H_{\circ}\setminus S_{\circ}\) and \(H^{*}\setminus S_{\circ}\). In a slight abuse of notation, we still define these components by \(\{K_{1},\ldots,K_{r}\}\). Recall that \(\{K_{1},\ldots,K_{r}\}\) are modules in \(H^{*}\) whose (possibly empty) neighborhoods in \(S_{\circ}\) form a nested family.
**Claim 2**.: _The graph \(H=G\triangle F^{*}\) is trivially perfect._
_Proof._ The graph \(H\) corresponds to \(H^{*}\) where vertices of \(C_{\circ}\) have been added with the same neighborhood as in \(G\). Let us first observe that \(S=S_{\circ}\cup C_{\circ}\) is a maximal clique in \(H\). Indeed, \(C_{\circ}\) is a clique by definition and \(S_{\circ}\subseteq\big{(}C\cup V_{p}\cup V_{f}\cup R_{>}\big{)}\subseteq N_{H}(C_{\circ})=N_{G}(C_{\circ})\) (recall that \(C\) is adjacent to \(V_{p}\cup V_{f}\) by Definition 5 and that vertices of \(C_{\circ}\) are adjacent to every vertex of \(R_{>}\)). Hence components \(\{K_{1},\ldots,K_{r}\}\) defined in \(H^{*}\setminus S_{\circ}\) are the same in \(H\setminus S\) and their neighborhoods are nested in \(S_{\circ}\). We split \(\{K_{1},\ldots,K_{r}\}\) into three types of components w.r.t. their adjacencies with \(\{c_{a},c_{b}\}\), namely:
1. \(\alpha\)-components that are non-adjacent to both \(c_{a}\) and \(c_{b}\)
2. \(\beta\)-components that are adjacent to \(c_{a}\) but not \(c_{b}\)
3. \(\delta\)-components that are adjacent to both \(c_{a}\) and \(c_{b}\)
In what follows we let \(K_{\alpha}\), \(K_{\beta}\) and \(K_{\delta}\) denote any \(\alpha\)-, \(\beta\)- and \(\delta\)-component, respectively. Note that \(N_{H^{*}}(K_{\alpha})\subseteq N_{H^{*}}(K_{\beta})\subseteq N_{H^{*}}(K_{\delta})\subseteq S_{\circ}\) holds by construction. Recall that since \(c_{a}\) and \(c_{b}\) are unaffected by \(F^{*}\), \(N_{G}(K_{i})\cap\{c_{a},c_{b}\}=N_{H}(K_{i})\cap\{c_{a},c_{b}\}\) for any \(K_{i}\). We claim that \(\{N_{H}(K_{i})\mid 1\leqslant i\leqslant r\}\) is a nested family. Note that Lemma 1 will imply the result since \(S\) is a maximal clique in \(H\). To sustain this claim, recall that the neighborhoods of vertices of \(C_{\circ}\) are identical in \(G\) and \(H\). Moreover, \(N_{H}[c_{a}]\subseteq N_{H}[C_{\circ}]\subseteq N_{H}[c_{b}]\) holds as these vertices are unaffected by \(F^{*}\). It follows that \(\alpha\)-components (resp. \(\delta\)-components) are non-adjacent (resp. adjacent) to every vertex of \(C_{\circ}\) in \(H\). This means in particular that the neighborhoods of both \(\alpha\)- and \(\delta\)-components are nested in \(S\). Moreover we can observe that vertices of \(\beta\)-components are exactly the ones of \(R_{\circ}\) since they are the only ones that are adjacent to \(c_{a}\) but not \(c_{b}\) in \(G\). Hence, in \(H\), we still have:
\[N_{H}(K_{\alpha})\subseteq N_{H}(K_{\beta})\subseteq N_{H}(K_{\delta})\]
It remains to prove that the neighborhoods of \(\beta\)-components are nested in \(S\). Let w.l.o.g. \(\{K_{1},\ldots,K_{p}\}\), \(1\leqslant p\leqslant r\) be the \(\beta\)-components. By definition of a comb, the \(\beta\)-components (which are also \(R_{\circ}\)) can be ordered w.r.t. the inclusion of their neighborhood in \(G[C_{\circ}]\). We can assume w.l.o.g. that the ordering is \(N_{G[C_{\circ}]}(K_{1})\subseteq\ldots\subseteq N_{G[C_{\circ}]}(K_{p})\). Moreover we can observe that for any \(\beta\)-component \(K_{i}\) we have \(N_{G[C_{\circ}]}(K_{i})=N_{H[C_{\circ}]}(K_{i})\), \(1\leqslant i\leqslant p\). Since \(R_{\circ}\) is a module in \(H^{*}\) by Claim 1 and since vertices of \(\beta\)-components are exactly those of \(R_{\circ}\), it follows that the neighborhoods of \(\beta\)-components are nested. Hence \(\{N_{H}(K_{i})\mid 1\leqslant i\leqslant r\}\) is a nested family and \(H\) is a trivially perfect graph by Lemma 1. \(\diamond\)
By Claim 2 the graph \(H=G\triangle F^{*}\) is trivially perfect and as \(|F^{*}|\leqslant k\), it follows that \(F^{*}\) is a \(k\)-editing of \(G\), concluding the proof of Lemma 4.
**Observation 4**.: _Assume that the instance \((G,k)\) is reduced under Rules 2 and 4. For any comb \((C,R)\) of \(G\) it holds that \(|C|\leqslant 6k+2\)._
_Proof._ Since \(G\) is reduced under Rule 2 every critical clique \(C_{i}\) of the shaft contains at most \(k+1\) vertices, \(1\leqslant i\leqslant l\). By Observation 1, any \((2k+1)\)-packing of \(\{C_{1},\ldots,C_{l}\}\) (resp. \(\{C_{l},\ldots,C_{1}\}\)) contains at most \(3k+1\) vertices. It follows that \(|C|\leqslant 6k+2\) since otherwise one could find two _disjoint_\((2k+1)\)-packings of \(\{C_{1},\ldots,C_{l}\}\) and of \(\{C_{l},\ldots,C_{1}\}\) and Rule 4 would apply.
We are now ready to show how to reduce the size of any trivially perfect module. We need a combinatorial result that will be useful to obtain the claimed bound.
**Lemma 5**.: _Let \(G=(V,E)\) be a connected trivially perfect graph and \(\alpha\) be the size of a maximum anti-matching of \(G\). There exists a comb \((C,R)\) of \(G\) such that \(V=C\cup R\) and \(|R|\leqslant 4\alpha\). Moreover, such a comb can be computed in polynomial time._
Proof.: We provide a constructive proof that will directly imply the last part of the result. Recall that any trivially perfect graph contains a universal vertex and let \(U_{1}\subseteq G\) be the universal clique of \(G\). Let \(R_{1}^{1},\ldots,R_{p_{1}}^{1}\) denote the connected components of \(G\setminus U_{1}\). Since \(G\) does not contain any \((\alpha+1)\)-sized anti-matching, there is at most one set \(R_{i}^{1}\), \(1\leqslant i\leqslant p_{1}\) such that \(|R_{i}^{1}|>\alpha\) (as there is no edge between \(R_{i}^{1}\) and \(R_{j}^{1}\), \(1\leqslant i<j\leqslant p_{1}\)).
Assume without loss of generality that \(|R_{1}^{1}|>\alpha\). We add all vertices of \(\cup_{i=2}^{p_{1}}R_{i}^{1}\) to some set \(R_{<}\) and we will repeat this process on \(G[R_{1}^{1}]\) until every connected component is smaller than \(\alpha\). More formally, at step \(j>1\), for the trivially perfect graph \(G_{j}=G[R_{1}^{j-1}]\), let \(U_{j}\) be its universal clique and \(R_{1}^{j},\ldots,R_{p_{j}}^{j}\) be the connected components of \(G_{j}\setminus U_{j}\). Let \(R_{1}^{j}\) be the one of size greater than \(\alpha\) if it exists, if it does not, stop the process and let \(l\) be the last step. In particular, \(|R_{i}^{l}|\leqslant\alpha\), \(1\leqslant i\leqslant p_{l}\). Let \(R_{<}=\bigcup_{j=1}^{l-1}\bigcup_{i=2}^{p_{j}}R_{i}^{j}\) and \(R_{>}=R_{1}^{l}\cup\cdots\cup R_{p_{l}}^{l}\).
Recall that \(|R_{1}^{l-1}|>\alpha\) by construction. This implies that \(|R_{<}|\leqslant\alpha\) since otherwise \(G[R_{<}\cup R_{1}^{l-1}]\) would contain a \((\alpha+1)\)-sized anti-matching. We claim that \(|R_{>}|\leqslant 3\alpha\). To support this claim, let us consider the \((\alpha+1)\)-packing \(\{R_{1}^{l},\ldots,R_{q}^{l}\}\) of \(\{R_{1}^{l},\ldots,R_{p_{l}}^{l}\}\) and let \(R^{\prime}=\bigcup_{i=1}^{q}R_{i}^{l}\) be its vertices. Let \(R^{\prime\prime}=R_{>}\setminus R^{\prime}\). Recall that \(l\) is the last step of the process and \(|R_{i}^{l}|\leqslant\alpha\) for \(1\leqslant i\leqslant p_{l}\). Hence by Observation 1 it holds that \(|R^{\prime}|\leqslant 2\alpha\). Thus, we have that \(|R^{\prime\prime}|\leqslant\alpha\) since otherwise \(G[R^{\prime}\cup R^{\prime\prime}]\) would contain a \((\alpha+1)\)-sized anti-matching, a contradiction. Hence \(|R_{>}|=|R^{\prime}|+|R^{\prime\prime}|\leqslant 3\alpha\).
To obtain a comb for \(G\) we consider the set \(C=\{U_{1},\ldots,U_{l}\}\) as the shaft (recall that \(U_{1}\) is the universal clique of \(G\) and that \(U_{j}\) denotes the universal clique of \(G[R_{1}^{j-1}]\) at every step \(1<j\leqslant l\)). Moreover, for every \(1\leqslant j<l\), the tooth \(R_{j}\) is defined as \(\{R_{2}^{j},\ldots,R_{p_{j}}^{j}\}\), the last tooth \(R_{l}\) being \(R_{>}\). By construction \((C,R=\bigcup_{j=1}^{l}R_{j})\) is a comb of \(G\) such that \(|R|=|R_{<}|+|R_{>}|\leqslant 4\alpha\). This concludes the proof.
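The construction used in this proof is easy to implement directly. The following Python sketch (our own illustration using networkx, not code from [16] or from this paper) assumes the input graph is connected and trivially perfect and peels off universal cliques as described above; degenerate cases such as empty intermediate teeth are not handled.

```
import networkx as nx

def universal_clique(G):
    # Vertices adjacent to every other vertex of G (nonempty for a connected
    # trivially perfect graph).
    n = G.number_of_nodes()
    return [v for v in G.nodes if G.degree(v) == n - 1]

def comb_by_peeling(G, alpha):
    # Follows the constructive proof of Lemma 5: repeatedly remove the universal
    # clique U_j, keep the (at most one) component larger than alpha, and collect
    # the remaining components into the current tooth.
    shaft, teeth = [], []
    H = G.copy()
    while True:
        U = universal_clique(H)
        shaft.append(set(U))                       # the critical clique C_j
        H.remove_nodes_from(U)
        comps = sorted((set(c) for c in nx.connected_components(H)),
                       key=len, reverse=True)
        if not comps or len(comps[0]) <= alpha:    # last step: tooth R_l = R_>
            teeth.append(set().union(*comps))
            return shaft, teeth
        teeth.append(set().union(*comps[1:]))      # small components form R_j
        H = H.subgraph(comps[0]).copy()
```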
**Lemma 6**.: _Assume that the instance \((G,k)\) is reduced under Rules 1 to 4 and let \(M\) be a trivially perfect module of \(G\). Then \(M\) contains at most \(11k+2\) vertices._
Proof.: Observe that if \(M\) contains an anti-matching of size more than \(k\), then it is reduced under Rule 3 and contains \(2k+2\) vertices. Hence, suppose that \(M\) does not contain a \((k+1)\)-sized anti-matching. Assume first that \(G[M]\) is connected. Let \((C,R)\) be a comb obtained through Lemma 5, such that \(C\cup R=M\) and \(|R|\leqslant 4k\). By Observation 4 we have that \(|C|\leqslant 6k+2\). It follows that \(|M|\leqslant|C|+|R|\leqslant 10k+2\).
To conclude, it remains to deal with the case where \(G[M]\) is disconnected. Let \(\{M_{1},\ldots,M_{p}\}\) denote the connected components of \(G[M]\). As \(M\) does not contain a \((k+1)\)-sized anti-matching, at most one of its connected components has size greater than \(k\); we may assume w.l.o.g. that it is \(M_{1}\), if it exists. Let \(\mathcal{C}\) be the \((k+1)\)-packing of \(\{M_{1},\ldots,M_{p}\}\). As \(|M_{1}|\leqslant 10k+2\) and \(|M_{i}|\leqslant k\) for \(2\leqslant i\leqslant p\), we have that \(|\mathcal{C}|\leqslant 10k+2\). Moreover, since \(M\) does not contain any \((k+1)\)-sized anti-matching, \(|M\setminus\mathcal{C}|\leqslant k\) and thus \(|M|\leqslant 11k+2\). This concludes the proof.
### Combs with large teeth
We now turn our attention to the case where a given comb contains many vertices in its teeth. The arguments are somewhat symmetric to the ones used in the proof of Lemma 4. The main difference lies in the fact that the information provided by unaffected vertices differs when they are contained in the teeth rather than in the shaft.
**Rule 5**.: _Let \((C,R)\) be a comb of \(G\) such that there exist three disjoint sets \(\mathcal{R}_{a},\mathcal{R}_{b}\) and \(\mathcal{R}_{c}\) where:_
* \(\mathcal{R}_{a}\) _is a_ \((2k+1)\)_-packing of_ \(\{R_{1},\ldots,R_{l}\}\)_,_
* \(\mathcal{R}_{c}=\{R_{l},\ldots,R_{q}\}\) _is a_ \((2k+1)\)_-packing of_ \(\{R_{l},\ldots,R_{1}\}\)_,_
* \(\mathcal{R}_{b}\) _is a_ \((2k+1)\)_-packing of_ \(\{R_{q-1},\ldots,R_{1}\}\)_,_
_Remove \(R^{\prime}=R\setminus(\mathcal{R}_{a}\cup\mathcal{R}_{b}\cup\mathcal{R}_{c})\) from \(G\)._
**Lemma 7**.: _Rule 5 is safe._
Proof.: Let \(G^{\prime}=G\setminus R^{\prime}\) be the graph obtained after application of Rule 5. Since \(G^{\prime}\) is an induced subgraph of \(G\) and since trivially perfect graphs are hereditary, any \(k\)-editing of \(G\) is a \(k\)-editing of \(G^{\prime}\).
For the reverse direction, let \(F^{\prime}\) be a \(k\)-editing of \(G^{\prime}\) and \(H^{\prime}=G^{\prime}\triangle F^{\prime}\). We will construct a \(k\)-editing \(F^{*}\) of \(G\). Let \(r_{a}\), \(r_{b}\) and \(r_{c}\) be unaffected vertices in \(\mathcal{R}_{a}\), \(\mathcal{R}_{b}\) and \(\mathcal{R}_{c}\), respectively. Note that these vertices exist as these sets contain at least \(2k+1\) vertices and \(F^{\prime}\) affects at most \(2k\) vertices. Let \(R_{a}\), \(R_{b}\) and \(R_{c}\), \(1\leqslant a<b<c\leqslant l\), be the teeth of \(R\) containing \(r_{a}\), \(r_{b}\) and \(r_{c}\), respectively (these sets are well-defined since the packings \(\mathcal{R}_{a}\), \(\mathcal{R}_{b}\) and \(\mathcal{R}_{c}\) are disjoint). Moreover, since \(r_{a}\), \(r_{b}\) and \(r_{c}\) are unaffected by \(F^{\prime}\) their neighborhoods are equal in \(G^{\prime}\) and \(H^{\prime}\) and hence \(\left(N_{H^{\prime}}(r_{a})\setminus R_{a}\right)\subseteq\left(N_{H^{\prime}}(r_{b})\setminus R_{b}\right)\subseteq\left(N_{H^{\prime}}(r_{c})\setminus R_{c}\right)\).
**Claim 3**.: _The set \(N_{H^{\prime}}(r_{b})\setminus R_{b}\) is a clique in \(H^{\prime}\)._
Proof.: Assume for a contradiction that \(N_{H^{\prime}}(r_{b})\setminus R_{b}\) contains a non-edge \(\{u,v\}\). Recall that there is no edge between \(R_{b}\) and \(R_{c}\). Hence, since \(\left(N_{H^{\prime}}(r_{b})\setminus R_{b}\right)\subseteq\left(N_{H^{\prime }}(r_{c})\setminus R_{c}\right)\) we have that the set \(\{r_{b},u,v,r_{c}\}\) induces a \(C_{4}\) in \(H^{\prime}\), a contradiction. \(\diamond\)
Let \(R_{\circ}=R_{a+1}\cup\ldots\cup R_{b-1}\) and \(C_{\circ}=C_{a+1}\cup\ldots\cup C_{b}\). Similarly, let \(C_{<}=C_{1}\cup\ldots\cup C_{a}\), \(R_{<}=R_{1}\cup\ldots\cup R_{a}\) and \(R_{>}=R_{b}\cup\ldots\cup R_{l}\). Finally, let \(G_{\circ}=G\setminus R_{\circ}\) and \(H_{\circ}=H^{\prime}\setminus R_{\circ}\). These sets are depicted in Figure 3. Notice in particular that \(H_{\circ}\) is trivially perfect and that \(R^{\prime}\subseteq R_{\circ}\). Let \(F_{\circ}\subseteq F^{\prime}\) be the \(k\)-editing such that \(H_{\circ}=G_{\circ}\triangle F_{\circ}\). We first modify \(F_{\circ}\) to obtain a \(k\)-editing of \(G_{\circ}\) where every vertex of \(C_{\circ}\) is affected uniformly.
**Claim 4**.: _There exists a \(k\)-editing \(F^{*}\) of \(G_{\circ}\) such that, in \(H^{*}=G_{\circ}\triangle F^{*}\), \(C_{\circ}\) is a clique module._
Proof.: Note that \(C_{\circ}\) is a critical clique in \(G_{\circ}\) since \(C\supset C_{\circ}\) is a module in \(G\setminus R\) (Observation 3) and vertices of \(C_{\circ}\) are non-adjacent to vertices of \(R_{<}\) and adjacent to vertices of \(R_{>}\). Assume now that \(C_{\circ}\) is not a clique module in \(H_{\circ}\) and let \(v_{\circ}\in C_{\circ}\) be a vertex contained in the least number of pairs of \(F_{\circ}\). Consider the graph \(H^{\prime}_{\circ}=H_{\circ}\setminus(C_{\circ}\setminus\{v_{\circ}\})\), which is trivially perfect by heredity, and let \(H^{*}\) be the graph obtained from \(H^{\prime}_{\circ}\) by adding vertices of \(C_{\circ}\setminus\{v_{\circ}\}\) as true twins of \(v_{\circ}\). Let \(F^{*}\) be the editing such that \(H^{*}=G_{\circ}\triangle F^{*}\). The graph \(H^{*}\) is trivially perfect as the class of trivially perfect graphs is closed under true twin addition. It follows from construction that \(C_{\circ}\) is a clique module in \(H^{*}\) and by the choice of \(v_{\circ}\), \(|F^{*}|\leqslant|F_{\circ}|\). \(\diamond\)
We henceforth consider \(H^{*}=G_{\circ}\triangle F^{*}\) where \(F^{*}\) is the editing from Claim 4. We now show that vertices of \(R_{\circ}\) can be added into \(H^{*}\) while ensuring it remains trivially perfect.
**Claim 5**.: _The graph \(H=G\triangle F^{*}\) is trivially perfect._
Proof.: We start by removing the vertices of \(R_{b}\setminus\{r_{b}\}\) from \(H^{*}\), which will give us more control on the neighborhood of \(r_{b}\) and ease some arguments. Let \(\tilde{H}=H^{*}\setminus(R_{b}\setminus\{r_{b}\})\), this graph is trivially perfect by heredity. Let \(S\) be a maximal clique of \(\tilde{H}\) containing \(r_{b}\). By Claim 3, \(N_{\tilde{H}}(r_{b})\) is a clique and since \(r_{b}\) is unaffected by \(F^{*}\) we have that \(S=N_{\tilde{H}}[r_{b}]=C_{<}\cup C_{\circ}\cup V_{p}\cup\{r_{b}\}\). We use Lemma 1 on \(S\) to obtain a set of connected components \(\{K_{1},\ldots,K_{r}\}\) of \(\tilde{H}\setminus S\) such that \(\{K_{1},\ldots,K_{r}\}\) are modules in \(\tilde{H}\) whose (possibly empty) neighborhoods in \(S\) form a nested family. We further split components \(\{K_{1},\ldots,K_{r}\}\) into two types: \(K_{i}\) is an \(\alpha\)-component if \(N_{\tilde{H}}(K_{i})\subseteq\left(N_{\tilde{H}}(r_{a})\cap S\right)\) and a \(\beta\)-component otherwise, \(1\leqslant i\leqslant r\). Since \(N_{\tilde{H}}(r_{a})\cap S=V_{p}\cup C_{<}\) we have that, for any \(\alpha\)-component \(K_{\alpha}\), \(N_{\tilde{H}}(K_{\alpha})\subseteq V_{p}\cup C_{<}\). Moreover, since \(S=N_{\tilde{H}}[r_{b}]\) and since \(C_{\circ}\) is a clique module in \(\tilde{H}\) by Claim 4, every \(\beta\)-component \(K_{\beta}\) satisfies \(N_{\tilde{H}}(K_{\beta})=V_{p}\cup C_{<}\cup C_{\circ}=S\setminus\{r_{b}\}\).
Observe now that \((V_{p}\cup C_{<})\subseteq N_{G}(R_{\circ})\subseteq S\setminus\{r_{b}\}\). In other words, the neighborhood of any tooth of \(R_{\circ}\) contains the neighborhood of any \(\alpha\)-component and is contained in the neighborhood of any \(\beta\)-component. Moreover the neighborhoods of the teeth of \(R_{\circ}\) are nested in \(G\) by definition of a comb. It follows that the vertices of \(R_{\circ}\) can be safely added to \(\tilde{H}\) with the same neighborhood as they have in \(G\), ensuring that the resulting graph \(H_{b}\) is trivially perfect. It remains to add the vertices of \(R_{b}\) back into the graph. By Claim 3 and Proposition 1, the graph \(H=H_{b}(r_{b}\to G[R_{b}])\) is trivially perfect. \(\diamond\)
By Claim 5 the graph \(H=G\triangle F^{*}\) is trivially perfect and as \(|F^{*}|\leqslant k\), it follows that \(F^{*}\) is a \(k\)-editing of \(G\), concluding the proof of Lemma 7.
**Lemma 8**.: _Assume that the instance \((G,k)\) is reduced under Rules 1 to 5. Let \((C,R)\) be a comb of \(G\). Then \(|C\cup R|=O(k)\)._
Proof.: First, note that \(|C|\leqslant 6k+2\) thanks to Observation 4. We proceed in the same fashion to bound the size of \(R\). As the teeth of a comb are trivially perfect modules, Lemma 6 implies that \(|R_{i}|\leqslant 11k+2\), \(1\leqslant i\leqslant l\). Hence by Observation 1 any \((2k+1)\)-packing of \(\{R_{1},\ldots,R_{l}\}\) requires at most \(13k+2\) vertices. It follows that \(|R|\leqslant 39k+6\) since otherwise one could find three disjoint \((2k+1)\)-packings of \(R\) that meet the requirements of Rule 5. Altogether we obtain that \(|C\cup R|\leqslant 45k+8\) which concludes the proof.
Figure 3: Illustration of the comb and the sets used in the proof of Lemma 7. The circles are critical cliques of the shaft and the triangles are teeth. The red vertices correspond to \(r_{a}\), \(r_{b}\) and \(r_{c}\), the light blue rectangles correspond to sets \(\mathcal{R}_{a}\), \(\mathcal{R}_{b}\) and \(\mathcal{R}_{c}\) and the light red rectangle corresponds to \(R^{\prime}\), which is removed by Rule 5.
### Reducing the graph exhaustively
We conclude this section by showing that the graph can be reduced in polynomial time.
**Lemma 9**.: _There is a polynomial time algorithm that outputs an instance \(G^{\prime}=(V^{\prime},E^{\prime})\) such that none of Rules 1 to 5 applies._
Proof.: First, Rules 1 and 2 can be applied in polynomial time thanks to Lemma 2. We now need to apply the other rules on trivially perfect modules and combs. For the modules, it is sufficient to reduce _strong modules_, which are modules that do not overlap with other modules. We can enumerate strong modules in linear time [34]. For each strong module \(M\) we can check in polynomial time if it is trivially perfect. We can moreover check if \(M\) contains a \((k+1)\)-sized anti-matching by finding a maximum matching in the complement graph \(\overline{G[M]}\), for instance using Edmonds' algorithm [17]. If \(M\) has a large anti-matching, then we can apply Rule 3. Otherwise, if \(|M|\geqslant 11k+2\) then it can be reduced using Rule 4. Indeed, \(G[M]\) contains in this case at most one connected component \(M^{\prime}\) with more than \(k\) vertices, such that \(|M\setminus M^{\prime}|\leqslant k\) (since otherwise \(M\) would contain a \((k+1)\)-sized anti-matching). We compute a comb \((C,R)\) through Lemma 5 in \(G[M^{\prime}]\), with \(|R|\leqslant 4k\). It follows that \(|C|>6k+2\) and Observation 4 implies that Rule 4 applies.
It remains to show that the combs not included in a trivially perfect module can be reduced in polynomial time. In order to do this Dumas _et al._[16] showed that so-called _critical combs_ can be enumerated in polynomial time, a critical comb being an inclusion-wise maximal comb where \(V_{f}\neq\emptyset\) and \(R\cup C\cup V_{f}\) does not induce a trivially perfect module. In particular, critical combs contain every comb not included in a trivially perfect module. Hence it is sufficient to only reduce these combs. Given a critical comb, Rules 4 and 5 can be applied in polynomial time. This concludes the proof.
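For illustration, the anti-matching test used in the proof of Lemma 9 can be sketched in Python with networkx as follows (the function name is ours); the maximum-cardinality matching of the complement graph plays the role of the matching computed by Edmonds' algorithm.

```
import networkx as nx

def has_large_antimatching(G, M, k):
    # A (k+1)-sized anti-matching of G[M] (k+1 disjoint non-adjacent pairs) is
    # exactly a matching of size k+1 in the complement of G[M], so a maximum
    # cardinality matching of the complement decides the test.
    complement = nx.complement(G.subgraph(M))
    matching = nx.max_weight_matching(complement, maxcardinality=True)
    return len(matching) >= k + 1
```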
Combining Theorem 1 and Lemmas 6, 8 and 9 we obtain the main result of this work.
**Theorem 2**.: Trivially Perfect Editing _admits a kernel with \(O(k^{2})\) vertices._
## 4 The deletion variant
As mentioned in the introduction, a quadratic vertex-kernel is known to exist for Trivially Perfect Completion[1]. The results presented in Section 3 can be adapted to prove that Trivially Perfect Deletion also admits a quadratic vertex-kernel by simply replacing any mention of "editing" by "deletion" (this also works with completion).
More precisely, one can see that in order to prove the safeness of Rules 3 to 5, the \(k\)-editing \(F^{*}\) for the original graph that is derived from a \(k\)-editing \(F^{\prime}\) for the reduced instance only uses operations that were done by \(F^{\prime}\). In particular, if \(F^{\prime}\) only contains non-edges then so does \(F^{*}\), meaning that it is a valid solution. Together with the fact that Rules 1 and 2 are safe for the deletion variant, we obtain the following.
**Theorem 3**.: Trivially Perfect Deletion _admits a kernel with \(O(k^{2})\) vertices._
## 5 Conclusion
In this work we improved known kernelization algorithms for the Trivially Perfect Editing and Trivially Perfect Deletion problems, providing a quadratic vertex-kernel for both of them. This matches the best known bound for the completion variant [1]. Improving upon these bounds is an appealing challenge that may require a novel approach. On the other hand, it
would be interesting to develop lower bounds for kernelization on such problems. Finally, even though the use of unaffected vertices in the design of reduction rules is common, its combination with the structural properties of trivially perfect graphs in terms of their maximal cliques allowed us to design stronger reduction rules. We hope that the approach presented in this work may lead to finding or improving kernelization algorithms for some related problems. Let us for instance mention the cubic vertex-kernel for Proper Interval Completion[3] and the quartic one for Ptolemaic Completion[10], which might be appropriate candidates.
|
2304.11757 | Computing Controlled Invariant Sets of Nonlinear Control-Affine Systems | In this paper, we consider the computation of controlled invariant sets (CIS)
of discrete-time nonlinear control affine systems. We propose an iterative
refinement procedure based on polytopic inclusion functions, which is able to
approximate the maximal controlled invariant set to within a guaranteed
precision. In particular, this procedure allows us to guarantee the invariance
of the resulting near-maximal CIS while also computing sets of control inputs
that enforce the invariance. Further, we propose an accelerated version of this
procedure which refines the CIS by computing backward reachable sets of
individual components of set unions, rather than all at once. This reduces the
total number of iterations required for convergence, especially when compared
with existing methods. Finally, we compare our methods to a sampling-based
approach and demonstrate the improved accuracy and faster convergence. | Scott Brown, Mohammad Khajenejad, Sze Zheng Yong, Sonia Martínez | 2023-04-23T22:03:34Z | http://arxiv.org/abs/2304.11757v1 | # Computing Controlled Invariant Sets of Nonlinear Control-Affine Systems
###### Abstract
In this paper, we consider the computation of controlled invariant sets (CIS) of discrete-time nonlinear control-affine systems. We propose an iterative refinement procedure based on polytopic inclusion functions, which is able to approximate the maximal controlled invariant set to within a guaranteed precision. In particular, this procedure allows us to guarantee the invariance of the resulting near-maximal CIS while also computing sets of control inputs which enforce the invariance. Further, we propose an accelerated version of this procedure which refines the CIS by computing backward reachable sets of individual components of set unions, rather than all at once. This reduces the total number of iterations required for convergence, especially when compared with existing methods. Finally, we compare our methods to a sampling based approach and demonstrate the improved accuracy and faster convergence.
## I Introduction
Invariance is an important concept for ensuring robustness and safety of control systems. For a dynamical system, a set is (_forward_) _invariant_ if every trajectory starting in that set remains in the set for all time. For control systems, this notion can be generalized with the determination of a control input which is able to render a set invariant, leading to the notion of a _controlled invariant set_ (CIS). For systems which are subject to uncertainty or noise, the concept of a _robust controlled invariant set_ (RCIS) is critical for safety, as it guarantees invariance in the presence of disturbances.
Controlled invariant sets have been thoroughly studied for linear systems, e.g., [1, 2, 3, 4, 5]. Many of these methods employ iterative procedures based on a one-step backward operator [2, 3] to find backward reachable sets of the system for computing the CIS with high precision. In order to improve the computation time for high-dimensional systems, other non-iterative techniques have also been proposed which rely on lifting to a higher dimensional space to compute the CIS in closed form and projecting the resulting set to the original domain, e.g., [4, 5].
On the other hand, determining controlled invariant sets of nonlinear systems remains a significant challenge. Some works employ convex or zonotopic approximations, e.g., [6, 7], in order to reduce the computational complexity. However, as the maximal controlled invariant sets are nonconvex in general, these methods can be overly conservative.
Another related work pertains to the study of invariant sets of _switched_ systems, e.g., [8, 9, 10], where the input controls switching between a finite number of modes. In that case, the input is easily determined by considering all possible modes and selecting those which lead to invariance [9]. By sampling a continuous set of inputs, these methods can be applied to more general nonlinear systems, but the accuracy and scalability may be limited [10]. These methods result in sets that are guaranteed to be invariant, but the sampling is an additional source of computational complexity as it must be fine enough to properly capture the nonlinear behavior of the system. As a result, these methods are difficult to apply to systems with multiple inputs.
In this paper, we propose two iterative algorithms to compute the near-maximal (non-convex) controlled invariant set of control affine systems up to a guaranteed precision, _without_ sampling the set of allowable inputs. Inspired by [9], our methods use a bisection approach to over- and under-approximate the one-step backward reachable set operator. The main idea of the approach is to compute the _forward_ reachable set of the region of interest, which is described by a union of intervals. The parts of this forward set that are entirely contained in the original set are used as an under-approximation of the backward reachable set. To check the containment, we use polyhedral over-approximations of reachable sets, which allows us to cast the problem of determining the control input as a translation of a polyhedron. This technique allows us to determine a _continuous_, rather than sampled, set of invariance enforcing control inputs. In addition, we leverage the structure of set unions in the accelerated version, which has the potential to significantly reduce the total number of iterations required for convergence, when compared with existing methods.
### _Notation_
Let \(\mathbb{Z}_{\geq 0}\) be the set of nonnegative integers, and \(\mathbb{R}^{n}\) and \(\mathbb{R}^{n\times p}\) be the \(n\)-dimensional Euclidean space, and the set of \(n\times p\) matrices, respectively. By means of \(\mathcal{B}_{r}\), we denote the \(\infty\)-norm hyperball of radius \(r\). We make use of \(\mathbb{IR}^{n}\) to denote the collection of (multidimensional) intervals of \(\mathbb{R}^{n}\), and denote its elements as \([x]\triangleq[\underline{x},\overline{x}]\in\mathbb{IR}^{n}\) with lower and upper bounds \(\underline{x}\) and \(\overline{x}\). The function \(w([x])\) measures interval width (i.e., \(w([x])=\|\overline{x}-\underline{x}\|\) with any vector norm \(\|\cdot\|\)), \(\mathrm{mid}([x])\) selects the midpoint of \([x]\) (i.e., \(\mathrm{mid}([x])=\frac{1}{2}(\overline{x}+\underline{x})\)), and a function in brackets \([f]([x])\) represents an interval inclusion function (cf. Definition 1). The operator \(\oplus\) denotes the Minkowski sum of two sets, and \(\ominus\) denotes the Pontryagin difference. For a function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) and a set \(\mathcal{S}\subset\mathbb{R}^{n}\), \(f_{0}(\mathcal{S})\) denotes the image of \(\mathcal{S}\) under \(f_{0}\). For a bounded polyhedron (polytope) \(\mathcal{P}\), we use \(H_{\mathcal{P}}\) and \(b_{\mathcal{P}}\) to denote the components defining the halfspace representation \(\mathcal{P}=\{x~{}:~{}H_{\mathcal{P}}x\leq b_{\mathcal{P}}\}\). The symbol \(V_{\mathcal{P}}\) denotes the vertices
of \(\mathcal{P}\). Finally, \(\mathrm{conv}\,\mathcal{X}\) denotes the convex hull of the set \(\mathcal{X}\).
## II Preliminaries
This section introduces preliminary notions that will be used throughout the paper. We begin by defining inclusion functions, which are critical for tractably approximating the images of sets under nonlinear functions.
**Definition 1**.: _Given a function \(f:\mathbb{R}^{n}\to\mathbb{R}^{m}\), an inclusion function is an interval function \([f]:\mathbbm{R}^{n}\to\mathbbm{R}^{m}\) that satisfies_
\[[f]([x])\supseteq f([x]),\ \forall[x]\in\mathbbm{R}^{n},\]
_where \(f([x])\) denotes the exact image of \([x]\) under \(f\)._
The reader is referred to [11, Section 2.4] and [12] for a thorough discussion of different types of inclusion functions, such as natural, centered, etc., as well as mixed-monotone decomposition-based inclusion functions. The results of this paper are not specific to any one type of inclusion function. However, different inclusion functions may produce more or less precise approximations depending on the system.
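As a concrete illustration of Definition 1, the sketch below (our own Python example, not code from the paper) implements a minimal interval type and the natural inclusion function of a toy map \(f_{0}(x)=x+x^{2}\), obtained by replacing each real operation with its interval counterpart; the result is guaranteed to contain the true image but is generally not tight.

```
from dataclasses import dataclass
import numpy as np

@dataclass
class Interval:
    lo: np.ndarray
    hi: np.ndarray

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Elementwise interval product: min/max over the four endpoint products.
        c = np.stack([self.lo * other.lo, self.lo * other.hi,
                      self.hi * other.lo, self.hi * other.hi])
        return Interval(c.min(axis=0), c.max(axis=0))

def natural_inclusion(x: Interval) -> Interval:
    # Natural inclusion of the toy map f0(x) = x + x^2 (componentwise).
    return x + x * x

box = Interval(np.array([-0.5, -0.5]), np.array([1.0, 1.0]))
enc = natural_inclusion(box)
# enc encloses the true image [-0.25, 2]^2, but only by the looser box [-1, 2]^2.
```

Tighter enclosures, such as the centered or mixed-monotone forms mentioned above, can be used behind the same interface.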
### _Translating Polytopes_
This section defines two different operations on polytopes: translating one into another, and translating one so it intersects another. These will be used in our algorithm to determine safe control inputs.
**Definition 2**.: _Given two polytopes \(\mathcal{P}\) and \(\mathcal{Q}\), the set of translations of \(\mathcal{P}\) that insert \(\mathcal{P}\) into \(\mathcal{Q}\) is denoted by_
\[\mathcal{I}(\mathcal{P},\mathcal{Q})\triangleq\{r\in\mathbb{R}^{n}:\mathcal{P }\oplus\{r\}\subseteq\mathcal{Q}\}.\]
_Similarly, the set of translations of \(\mathcal{P}\) that overlap \(\mathcal{P}\) with \(\mathcal{Q}\) is defined as_
\[\mathcal{O}(\mathcal{P},\mathcal{Q})\triangleq\{s\in\mathbb{R}^{n}:\mathcal{P }\oplus\{s\}\cap\mathcal{Q}\neq\emptyset\}.\]
**Proposition 1** ([13, Theorem 2.3]).: _For polytopes \(\mathcal{P}\) and \(\mathcal{Q}\),_
\[\mathcal{I}(\mathcal{P},\mathcal{Q})=\{r\in\mathbb{R}^{n}:H_{\mathcal{Q}}r\leq b_{\mathcal{Q}}-\beta\},\]
_where \(\beta_{i}=\max_{v\in V_{\mathcal{P}}}(H_{\mathcal{Q}})_{i}v\). If \(\mathcal{P}\) cannot be embedded into \(\mathcal{Q}\), then this results in \(\mathcal{I}(\mathcal{P},\mathcal{Q})=\emptyset\)._
**Proposition 2**.: _Given two polytopes \(\mathcal{P}\) and \(\mathcal{Q}\),_
\[\mathcal{O}(\mathcal{P},\mathcal{Q})=\left\{s\in\mathbb{R}^{n}:\begin{bmatrix} H_{\mathcal{Q}}\\ -H_{\mathcal{P}}\end{bmatrix}s\leq\begin{bmatrix}b_{\mathcal{Q}}-\alpha\\ b_{\mathcal{P}}-\gamma\end{bmatrix}\right\},\]
_where \(\alpha_{i}=\min_{v\in V_{\mathcal{P}}}(H_{\mathcal{Q}})_{i}v\) and \(\gamma_{i}=\min_{v\in V_{\mathcal{Q}}}(H_{\mathcal{P}})_{i}v\)._
Proof.: See Appendix A.
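For illustration, the halfspace data in Propositions 1 and 2 can be assembled directly with numpy. In the sketch below (our own helper names), `HP, bP` and `HQ, bQ` are the inequality descriptions of \(\mathcal{P}\) and \(\mathcal{Q}\), and `VP`, `VQ` are vertex arrays with one vertex per row.

```
import numpy as np

def insert_translations(HQ, bQ, VP):
    # Prop. 1:  I(P, Q) = {r : HQ r <= bQ - beta},  beta_i = max over V_P of (HQ)_i v.
    beta = (VP @ HQ.T).max(axis=0)
    return HQ, bQ - beta

def overlap_translations(HP, bP, VP, HQ, bQ, VQ):
    # Prop. 2:  O(P, Q) = {s : HQ s <= bQ - alpha, -HP s <= bP - gamma}.
    alpha = (VP @ HQ.T).min(axis=0)
    gamma = (VQ @ HP.T).min(axis=0)
    return np.vstack([HQ, -HP]), np.concatenate([bQ - alpha, bP - gamma])
```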
## III Invariance Control Problem
Here we introduce the class of systems under consideration and define the concept of controlled invariance. Specifically, we consider nonlinear control affine systems of the form
\[x_{k+1}=f(x_{k},u_{k})\triangleq f_{0}(x_{k})+\sum_{i=1}^{m}g_{i}(x_{k})u_{i,k}, \tag{1}\]
where \(x\in\mathbb{R}^{n}\) is the state and \(u\in\mathcal{U}\subset\mathbb{R}^{m}\) is the input. We assume that \(\mathcal{U}\) is a compact interval, and we will restrict our attention to a region of interest, \(\Omega\subset\mathbb{R}^{n}\), which we assume to be given as a finite union of compact intervals.
**Assumption 1**.: _The functions \(f_{0}\), and \(g_{1}\),..., \(g_{m}\) are Lipschitz continuous, i.e.,_
\[\forall i\in\{1,\ldots,m\},\ \exists L_{i}\text{ s.t. }\|g_{i}(x)-g_{i}(y)\| \leq L_{i}\|x-y\|,\] \[\text{ and }L\text{ s.t. }\|f_{0}(x)-f_{0}(y)\| \leq L\|x-y\|,\]
_for every \(x,y\in\Omega\)._
Next we define the precise notions of invariance which we will consider in this work.
**Definition 3**.: _A set \(\mathcal{X}\subset\mathbb{R}^{n}\) is controlled invariant with respect to the dynamics (1) if for every \(x_{0}\in\mathcal{X}\), there exists an input \(u\) such that \(f(x_{0},u)\in\mathcal{X}\)._
There are many computational challenges associated with computing controlled invariant sets, many of which arise due to the infinite precision required to adequately handle regions near the boundary of the set. As such, it is convenient to modify the definition to incorporate a robustness margin \(r\).
**Definition 4**.: _A set \(\mathcal{X}\subset\mathbb{R}^{n}\) is \(r\)-robustly controlled invariant for system (1) if for every \(x_{0}\in\mathcal{X}\), there exists an input \(u\) such that \(f(x_{0},u)\in\mathcal{X}\ominus\mathcal{B}_{r}\)._
Intuitively, in order to be robustly invariant, every point must be mapped into the interior of the set, at some distance \(r\) from the boundary.
We are ready to formally state the problem which we aim to address in this paper.
**Problem 1** (Controlled Invariant Set Computation).: _For a system (1) that satisfies Assumption 1 and a region of interest \(\Omega\subset\mathbb{R}^{n}\), given by a union of compact intervals, compute the maximal controlled invariant set contained in \(\Omega\) up to a guaranteed precision. Additionally, compute the corresponding set of control inputs that enforces this invariance._
**Remark 1**.: For ease of exposition, we do not consider any noise or uncertainty in the system dynamics (1). However, it is straightforward to include robustness to bounded noise in our algorithm by increasing the robustness margin \(r\).
## IV Computation of Controlled Invariant Sets
This section starts by reviewing a well-known iterative procedure for computing maximal controlled invariant sets [14]. After this, we describe our main contribution, which approximates the operator used in each iteration.
**Definition 5**.: _The pre-set, or the one-step backward reachable set of a set \(\Omega\subset\mathbb{R}^{n}\) is defined as_
\[Q(\Omega)\triangleq\left\{x\in\mathbb{R}^{n}:\exists u\in\mathcal{U}\text{ s.t. }f(x,u)\in\Omega\right\}.\]
We further define the operator
\[I(\Omega)\triangleq Q(\Omega)\cap\Omega,\]
which will enable computation of the maximal controlled invariant set. Repeated application is denoted as \(I^{i}(\Omega)=I(I^{i-1}(\Omega))\), \(i\in\mathbb{Z}_{\geq 0}\), with \(I^{0}(\Omega)=\Omega\). We also define \(I_{r}(\Omega)\triangleq Q(\Omega\ominus\mathcal{B}_{r})\cap\Omega\).
**Lemma 1** ([14, Special Case of Proposition 4]).: _If \(\Omega\subset\mathbb{R}^{n}\) is closed, then_
\[I^{\infty}\triangleq\lim_{i\to\infty}I^{i}(\Omega)\]
_is the maximal controlled invariant set contained in \(\Omega\)._
This result is the basis for many iterative algorithms, e.g., [2, 3, 14], but there are several computational challenges when dealing with nonlinear systems. In addition to operations involving _backward_ reachability, our algorithm also utilizes operations related to _forward_ reachability, which we define here.
**Definition 6**.: _The one-step forward reachable set of a set \(\Omega\subset\mathbb{R}^{n}\) is defined as_
\[P(\Omega)\triangleq\{x\in\mathbb{R}^{n}:\exists x_{0}\in\Omega,u\in\mathcal{U }\text{ s.t. }x=f(x_{0},u)\}.\]
The following definition restricts the previous reachable set to that of a particular input, which is used later for determining a suitable controller.
**Definition 7**.: _For a given \(u\in\mathcal{U}\), the fixed-control one-step forward reachable set of a set \(\Omega\subset\mathbb{R}^{n}\) is defined as_
\[P_{u}(\Omega)\triangleq\{x\in\mathbb{R}^{n}:\exists x_{0}\in\Omega\text{ s.t. }x=f(x_{0},u)\}.\]
From the definitions, it is clear that \(\bigcup_{u\in\mathcal{U}}P_{u}(\Omega)=P(\Omega)\).
### _Polyhedral Approximation of Reachable Sets_
Our algorithm will employ polyhedral over-approximations of \(P(\Omega)\) and \(P_{u}(\Omega)\) to determine feasible control inputs that can lead to invariance.
To this end, we use a decomposition of the function \(f\), which we will show to satisfy certain properties. This decomposition will vary depending on the interval \([x]\) under consideration. We first compute \(A\) and \(\phi\) such that
\[f_{0}(x)=Ax+\phi(x),\ \forall x\in[x], \tag{2}\]
decomposing \(f_{0}\) into a linear term plus a remainder. This is always possible (since we can let \(A=0\)) and can be done in multiple different ways. For example, if \(f_{0}\) is differentiable, this can be done via linearization about the midpoint of the interval. Another possibility is that \(f_{0}\) has a bounded Jacobian matrix on \([x]\), in which case we can apply a Jacobian sign-stable decomposition [15, Proposition 2] to compute (2). The method of decomposition will affect the accuracy of the final approximation, as we will discuss at the end of this section.
\[f_{0}([x])\subseteq A[x]\oplus[\Phi]([x]), \tag{3}\]
where \(A[x]\) denotes the exact (polytope) image of \([x]\) under the linear map \(A\).
We also decompose the individual input functions \(g_{i}\). For an inclusion function \([g_{i}]\), let \(s_{i}=\operatorname{mid}([g_{i}]([x]))\) and \([\Psi_{i}]([x])=[g_{i}]([x])\ominus\{s_{i}\}\), so that
\[g_{i}([x])\subseteq\{s_{i}\}\oplus[\Psi_{i}]([x]), \tag{4}\]
and \([\Psi_{i}]([x])\) is centered at the origin. A centered \([\Psi_{i}]([x])\) will result in a better approximation later on. We will use the notation \(S\in\mathbb{R}^{n\times m}\) and \([\Psi]:\mathbbm{IR}^{n}\to\mathbbm{IR}^{n\times m}\) to denote the matrices with columns \(s_{i}\) and \([\Psi_{i}]\), respectively.
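The centering step in (4) is immediate once bounds on \(g_{i}([x])\) are available from any inclusion function; a minimal numpy sketch (with our own function name, representing intervals by their lower and upper bound vectors) is:

```
import numpy as np

def centered_decomposition(g_lo, g_hi):
    # Given [g_i]([x]) = [g_lo, g_hi], return s_i = mid([g_i]([x])) and the
    # origin-centered remainder [Psi_i]([x]) = [g_i]([x]) - {s_i}, as in (4).
    s = 0.5 * (g_lo + g_hi)
    return s, (g_lo - s, g_hi - s)

s1, psi1 = centered_decomposition(np.array([0.8, -0.2]), np.array([1.2, 0.4]))
# s1 = [1.0, 0.1],  psi1 = ([-0.2, -0.3], [0.2, 0.3]), an interval centered at 0.
```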
In order to guarantee the accuracy of the algorithm, we must first compute bounds on the error of these overapproximations. Assumption 1 allows us to upper bound the width of the resulting inclusion functions.
**Proposition 3**.: _Under Assumption 1, inclusion functions \([\Phi]\) and \([\Psi_{i}]\), \(i\in\{1,\ldots,m\}\), and constants \(\tilde{L}_{i}>0,\ i\in\{0,\ldots,m\}\) can be found so that_
\[w([\Phi]([x]))\leq\tilde{L}_{0}w([x])\ \text{ and }\ w([\Psi_{i}]([x]))\leq\tilde{L}_{i}w([x]).\]
_for every interval \([x]\subseteq\Omega\)._
Proof.: We can, for example, use inclusion functions based on mixed-monotone decompositions (cf. [16, Lemma 1]) or the mean-value form (cf. [11] and [9, Eq. (3)]) that can be shown to satisfy these bounds.
With these decompositions in mind, we can define our approximations of the forward reachable sets. Let
\[\overline{P}([x])\triangleq A[x]\oplus[\Phi]([x])\oplus\mathcal{SU}\oplus[ \Psi]([x])\mathcal{U}\]
be the polyhedral overapproximation of the one-step forward reachable set, and let
\[\overline{P}_{u}([x])\triangleq A[x]\oplus[\Phi]([x])\oplus Su\oplus[\Psi]([x] )\mathcal{U}\]
be the polyhedral overapproximation of the fixed-control reachable set. The expression \(S\mathcal{U}\) denotes the polytopic image of \(\mathcal{U}\) under the linear transformation \(S\), and \([\Psi]([x])\mathcal{U}\) denotes the interval-valued product of two interval matrices that can be computed using interval arithmetic.
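Each term in \(\overline{P}_{u}([x])\) is a polytope (the linear image of a box, interval boxes, and the singleton \(\{Su\}\)), so a vertex representation can be assembled by summing vertex sets and pruning with a convex hull. The following numpy/scipy sketch is our own illustration; it assumes the interval product \([\Psi]([x])\mathcal{U}\) has already been bounded by a box and that the resulting point set is full-dimensional.

```
import numpy as np
from itertools import product
from scipy.spatial import ConvexHull

def box_vertices(lo, hi):
    return np.array(list(product(*zip(lo, hi))))

def minkowski_vertices(V1, V2):
    # Vertices of a Minkowski sum lie among the pairwise sums of vertices.
    return (V1[:, None, :] + V2[None, :, :]).reshape(-1, V1.shape[1])

def pbar_u_vertices(A, x_lo, x_hi, phi_lo, phi_hi, Su, psiU_lo, psiU_hi):
    V = box_vertices(x_lo, x_hi) @ A.T                           # A[x]
    V = minkowski_vertices(V, box_vertices(phi_lo, phi_hi))     # + [Phi]([x])
    V = minkowski_vertices(V, box_vertices(psiU_lo, psiU_hi))   # + [Psi]([x])U
    V = V + Su                                                   # translate by Su
    return V[ConvexHull(V).vertices]
```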
**Lemma 2**.: _Let \(\rho\triangleq\tilde{L}_{0}+\max_{1\leq i\leq m}\tilde{L}_{i}w(\mathcal{U}_{i})\). Then for all intervals \([x]\subset\Omega\), the polytopic approximations of the forward reachable sets satisfy_
\[P_{u}([x])\subseteq\overline{P}_{u}([x])\subseteq P_{u}([x])\oplus\mathcal{B} _{\rho w([x])}\]
_and_
\[P([x])\subseteq\overline{P}([x])\subseteq P([x])\oplus\mathcal{B}_{\rho w([x])}.\]
Proof.: The inclusion \(P_{u}([x])\subseteq\overline{P}_{u}([x])\) is guaranteed by construction, since
\[P_{u}([x])=A[x]\oplus\phi([x])\oplus Su\oplus\sum_{i=1}^{m}\hat{g}_{i}([x])u_{i},\]
\(\phi([x])\subseteq[\Phi]([x])\), and \(\hat{g}_{i}([x])\subseteq[\Psi_{i}]([x])\). The inclusion \(\overline{P}_{u}([x])\subseteq P_{u}([x])\oplus\mathcal{B}_{\rho w([x])}\) follows from Proposition 3 and the definition of the Minkowski sum. The second statement is proved in the same way.
### _Interval Approximation of Pre-Sets_
We describe our main contribution next, which is a novel algorithm for approximating the operator \(I\) for systems of the form (1). Given a union of compact intervals \(\Omega\), we propose an iterative refinement procedure that approximates \(I(\Omega)\) by another union of compact intervals. Algorithm 1 summarizes the main steps of this process, described in detail next.
```
1:\(\Omega\), \(\varepsilon\)
2:\(Q\leftarrow\{\Omega\}\), \(N\leftarrow\emptyset\), \(\underline{I}\leftarrow\emptyset\), \(\mathcal{E}\leftarrow\emptyset\), \(\mathcal{U}_{I}\leftarrow\emptyset\)
3: while \(Q\neq\emptyset\) do
4:\([x]\leftarrow\mathrm{pop\_front}(Q)\)
5: Compute \(A\), \(\Phi\), \(s_{i}\), and \(\Psi_{i}\) on \([x]\)
6: if \(\overline{P}([x])\cap\Omega=\emptyset\) then
7:\(N\gets N\cup[x]\)
8: else if \(\exists u\in\mathcal{U}\) s.t. \(\overline{P}_{u}([x])\subseteq\Omega\) then
9:\(\underline{I}\leftarrow\underline{I}\cup[x]\)
10:\(\mathcal{U}_{I}\leftarrow\mathcal{U}_{I}\cup([x],S^{\dagger}(S\mathcal{U}([x] )))\)
11: else if \(w([x])\leq\varepsilon\) then
12:\(\mathcal{E}\leftarrow\mathcal{E}\cup[x]\)
13:else
14:\((l,r)\leftarrow\mathrm{bisect}([x])\)
15:\(\mathrm{push\_front}(Q,l)\)
16:\(\mathrm{push\_front}(Q,r)\)
17:endif
18:endwhile
19: return \(\underline{I}\), \(\mathcal{U}_{I}\)
```
**Algorithm 1**\(\underline{I}(\Omega)\)
The algorithm is a loop that operates on a queue (\(Q\)) of intervals. An element of the queue is retrieved (and removed) from the front of the queue using the \(\mathrm{pop\_front}\) operation, while an element is added to the front of the queue using the \(\mathrm{push\_front}\) operation. The following steps are implemented until the queue is empty. Every interval \([x]\) is checked to see whether it may be part of the pre-set of \(\Omega\). Two tests are performed:
1. Is the forward reachable set from \([x]\) disjoint with \(\Omega\)? If so, \([x]\) is disjoint from \(I(\Omega)\).
2. Can an input \(u\) be found so that the reachable set from \([x]\), restricted to the input \(u\), lies entirely within \(\Omega\)? If so, \([x]\subset I(\Omega)\).
If either condition is satisfied, then the interval \([x]\) is saved in the corresponding list: \(N\) (the collection of intervals disjoint from \(I(\Omega)\)) or \(\underline{I}\) (the collection of intervals contained in \(I(\Omega)\)).
If neither condition is satisfied and the \([x]\) is wider than the specified tolerance, it is bisected along its largest dimension and both resulting intervals are added to the front of the queue using the \(\mathrm{push\_front}\) operation. Otherwise, \([x]\) is added to \(\mathcal{E}\), which is a collection of the so-called "indeterminate" sets, which are neither disjoint from nor subsets of \(I(\Omega)\).
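A compact Python skeleton of this refinement loop is sketched below; it is only an organizational sketch of Algorithm 1 (not the authors' implementation), with the two geometric tests and the bisection passed in as callables, since their concrete realizations are described in the remainder of this section.

```
from collections import deque

def approximate_preset(omega_boxes, eps, forward_disjoint, safe_input_set,
                       width, bisect):
    # omega_boxes: the region Omega given as a list of interval boxes.
    # forward_disjoint(box, omega) decides whether the over-approximated forward
    # set of [x] misses Omega; safe_input_set(box, omega) returns the set of
    # inputs u whose fixed-control forward set stays in Omega, or None.
    queue = deque(omega_boxes)
    inside, outside, indeterminate, controls = [], [], [], []
    while queue:
        box = queue.popleft()                    # pop_front
        if forward_disjoint(box, omega_boxes):
            outside.append(box)
        else:
            u_set = safe_input_set(box, omega_boxes)
            if u_set is not None:
                inside.append(box)
                controls.append((box, u_set))
            elif width(box) <= eps:
                indeterminate.append(box)
            else:
                left, right = bisect(box)        # split along the widest axis
                queue.appendleft(right)          # push_front
                queue.appendleft(left)
    return inside, controls
```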
Line 7 of Algorithm 1 requires checking the condition
\[\exists u\in\mathcal{U}\text{ s.t. }\overline{P}_{u}([x])\subseteq\Omega, \tag{5}\]
which is a nonconvex feasibility problem, due to the nonconvexity of \(\Omega\). Luckily, we can exploit the structure of both \(\overline{P}_{u}([x])\) and \(\Omega\) in order to efficiently and precisely compute the set of \(u\) that satisfy (5).
Notice that, in the definition of \(\overline{P}_{u}\), the term \(A[x]\oplus[\Phi]([x])\oplus[\Psi]([x])\mathcal{U}\) is a convex polyhedron, and the additive term \(Su\) serves only to _translate_ the resulting polyhedron. On the other hand, \(\Omega\) is a union of intervals, and also more generally a union of polyhedra. Therefore, we can reduce the feasibility problem in (5) to a problem of translating a polyhedron into a union of polyhedra. We describe here an equivalence which helps solve this problem, inspired by [17].
**Lemma 3**.: _Let \(\mathcal{P}=\mathrm{conv}(\Omega\cap\overline{P}([x]))\) and \(\mathcal{Q}=\mathcal{P}\setminus(\Omega\cap\overline{P}([x]))\). Then, the following statements are equivalent:_
1. \(\exists u\in\mathcal{U}\text{ s.t. }\overline{P}_{u}([x])\subseteq\Omega\)_;_
2. \(\mathit{S\!U}([x])\triangleq\mathcal{I}(\overline{P}_{0}([x]),\mathcal{P}) \setminus\mathcal{O}(\overline{P}_{0}([x]),\mathcal{Q})\neq\emptyset\)_._
Proof.: By defining \(\overline{P}_{0}([x])\triangleq A[x]\oplus[\Phi]([x])\oplus[\Psi]([x])\mathcal{U}\), we obtain that for any \(u\), \(\overline{P}_{u}([x])=\overline{P}_{0}([x])\oplus\{Su\}\). Furthermore, \(\overline{P}([x])=\overline{P}_{0}([x])\oplus S\mathcal{U}\). This gives rise to the equivalence
\[\exists u\in\mathcal{U}\text{ s.t. }\overline{P}_{u}([x])\subseteq\Omega\] \[\iff\exists r\in\mathbb{R}^{n}\text{ s.t. }\overline{P}_{0}([x])\oplus\{r\}\subseteq\Omega\cap\overline{P}([x]),\]
where it must be true that \(r=Su\). We see that by intersecting with \(\overline{P}([x])\) on the right hand side, we can identify a translation by a vector \(r\), which is automatically restricted to the range of \(S\). To find the \(r\) in the latter expression, we first find the translations into the _convex hull_ of \(\Omega\cap\overline{P}([x])\) (i.e., \(\mathcal{I}(\overline{P}_{0}([x]),\mathcal{P})\)), then remove the translations that cause overlap with parts of the convex hull that are not in the original set (i.e., \(\mathcal{O}(\overline{P}_{0}([x]),\mathcal{Q})\)). This gives the set of translations \(r\) that result in containment in \(\Omega\cap\overline{P}([x])\), therefore yielding the expression for \(\mathit{S\!U}([x])\).
Since \(\mathit{S\!U}([x])\) is the difference between a polytope and a union of polytopes, it can be expressed as the union of a finite number of polytopes. The equivalence in Lemma 3 gives us a tractable method of solving the feasibility problem in (5), using procedures to efficiently compute \(\mathcal{I}\) and \(\mathcal{O}\). Finally, since \(\forall r\in\mathit{S\!U}([x]),\ \exists u\in\mathcal{U}\) such that \(r=Su\), we can recover the set of inputs with the Moore-Penrose pseudoinverse, \(\mathcal{U}([x])=S^{\dagger}(\mathit{S\!U}([x]))\), which is stored in \(\mathcal{U}_{I}\) as pairs \(([x],\mathcal{U}([x]))\).
**Remark 2**.: The set difference between two polyhedra can be represented as a union of polyhedra. This procedure is computationally expensive, especially when considering the difference between a polyhedron and a union of polyhedra, for which the best algorithms [18] have exponential complexity. In fact, the computation of set differences is the largest computational burden of our method. Caching of results from previous iterations may reduce the number of differences that must be computed in the final implementation, although these details are not described here for the sake of brevity.
We finish this section by proving that Algorithm 1 returns a useful approximation of \(I(\Omega)\), which is within some known bound of the maximal CIS.
**Lemma 4**.: _Let \(\Omega\) be a finite union of compact intervals. Then, for any precision \(\varepsilon>0\), Algorithm 1 terminates in a finite number of iterations. Furthermore, letting \(\underline{I}(\Omega)\) denote the output of Algorithm 1, it holds that_
\[I(\Omega\ominus\mathcal{B}_{\rho\varepsilon})\subseteq I_{\rho\varepsilon}( \Omega)\subseteq\underline{I}(\Omega)\subseteq I(\Omega). \tag{6}\]
Proof.: The proof is similar to [9, Lemma 1], relying on the bounds provided by Lemma 2.
### _Near-Maximal Controlled Invariant Sets_
With the ability to compute \(\underline{I}(\Omega)\) using Algorithm 1, all that remains is to repeat this operation until convergence is achieved. Algorithm 2 describes this simple procedure, which is guaranteed to terminate in finite time.
```
0:\(\Omega\), \(\varepsilon\)
1:\(I_{0}\leftarrow\Omega\), \(I_{1}\leftarrow\emptyset\)
2: while \(I_{0}\neq I_{1}\) do
3:\(I_{1}\gets I_{0}\)
4:\(I_{0}\leftarrow\underline{I}(I_{0})\)\(\triangleright\) via Algorithm 1
5:endwhile
6: return \(I_{0}\)
```
**Algorithm 2** Approximation of \(I^{\infty}(\Omega)\)
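A minimal Python rendering of this fixed-point loop (our own sketch) is given below; it assumes that the routine implementing Algorithm 1 returns a canonical representation of the interval union, so that equality can be tested directly.

```
def approximate_I_infinity(omega, refine):
    # refine(.) stands for the under-approximated operator computed by Algorithm 1.
    current = omega
    while True:
        refined = refine(current)
        if refined == current:
            return current
        current = refined
```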
**Theorem 1**.: _For any finite union of compact intervals \(\Omega\) and precision \(\varepsilon>0\), Algorithm 2 terminates in a finite number of iterations. Furthermore, denoting the output of the algorithm as \(\underline{I}^{\infty}(\Omega)\), the following inclusions hold:_
\[I_{r}^{\infty}(\Omega)\subseteq\underline{I}^{\infty}(\Omega)\subseteq I^{ \infty}(\Omega),\]
_where \(r=\rho\varepsilon\). Finally, \(\underline{I}^{\infty}(\Omega)\) is controlled invariant._
Note that it is possible for the algorithm to return an empty set only if the system does not admit an \(r\)-robustly controlled invariant set, meaning \(I_{r}^{\infty}=\emptyset\).
Proof.: Let \(\underline{I}^{k}(\Omega)\) denote the value of \(\underline{I}(\underline{I}^{k-1}(\Omega))\) in the \(k\)-th iteration, with \(\underline{I}^{0}(\Omega)=\Omega\). Since the algorithm terminates if \(\underline{I}^{k-1}(\Omega)=\underline{I}^{k}(\Omega)\), we only need to consider the case when they are not equal. In this case, the structure of Algorithm 1 is such that \(\underline{I}^{k}(\Omega)\subsetneq\underline{I}^{k-1}(\Omega)\). Since \(\Omega\) is compact, \(\underline{I}^{k}(\Omega)\) contains only a finite number of intervals which will be considered using the bisection method for a given \(\varepsilon\). Then \(\exists K\geq 0\) such that \(\underline{I}^{k}(\Omega)=\emptyset,\ \forall k\geq K\), meaning the algorithm terminates in finite time.
To prove the inclusions, note that by Lemma 4, \(I_{r}(\Omega)\subseteq\underline{I}(\Omega)\subseteq I(\Omega)\). Also, for any two sets \(A\subseteq B\subseteq\mathbb{R}^{n}\), we know that \(\underline{I}(A)\subseteq\underline{I}(B)\) and \(I_{r}(A)\subseteq I_{r}(B)\). Therefore by applying Lemma 4 and induction, we can determine that \(I_{r}^{k}(\Omega)\subseteq\underline{I}^{k}(\Omega)\subseteq I^{k}(\Omega)\) for all \(k\geq 0\).
Finally, to prove invariance of \(\underline{I}^{\infty}(\Omega)\), note that by Lemma 4, \(\underline{I}^{\infty}(\Omega)=\underline{I}(\underline{I}^{\infty}(\Omega)) \subseteq I(\underline{I}^{\infty}(\Omega))\).
**Remark 3** (An Outside-In Approach).: The method described in this paper is commonly known as an _outside-in_ method, since we start with a region of interest \(\Omega\) and search for the largest invariant set \(\mathcal{X}=I^{\infty}(\Omega)\) which it contains. It is also possible to use an _inside-out_ approach, which starts with a known invariant set, and iteratively grows that set [10]. Our method can be easily adapted for this case, as well. \(\bullet\)
To conclude this section, we state a result that has the potential to significantly reduce the number of iterations required by Algorithm 2, hence, accelerating convergence. Let \(\{\Omega_{i}\}_{i=1}^{N}\) be any _ordered_ collection of sets such that \(\Omega=\bigcup_{i=1}^{N}\Omega_{i}\). Define the operator
\[I_{i}(\Omega)\triangleq\bigcup_{j\neq i}\Omega_{j}\cup(Q(\Omega)\cap\Omega_{i })\,,\]
and let \(I_{1:N}(\Omega)\triangleq I_{N}(\cdots(I_{1}(\Omega))\cdots)\). We let the \(I_{i}\) operator preserve the order of the constituent subsets by defining
\[(I_{i}(\Omega))_{i}=Q(\Omega)\cap\Omega_{i}\quad\text{and}\quad(I_{i}(\Omega ))_{j}=\Omega_{j},\ j\neq i.\]
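A minimal sketch of the ordered operator \(I_{1:N}\) is given below, using plain Python sets as a stand-in for the interval representation and an abstract one-step operator `Q`; it only illustrates the bookkeeping implied by the definition above.

```python
def I_ordered(subsets, Q):
    """Apply the per-subset operator I_i for i = 1..N in order, keeping the
    ordering of the constituent subsets as defined in the text."""
    subsets = list(subsets)
    for i in range(len(subsets)):
        omega = set().union(*subsets)          # current union of all subsets
        subsets[i] = Q(omega) & subsets[i]     # (I_i(Omega))_i = Q(Omega) ∩ Omega_i
    return subsets

# Toy demo on finite sets, with a stand-in one-step operator that keeps even numbers.
parts = [{1, 2, 3}, {4, 5, 6}]
print(I_ordered(parts, lambda omega: {x for x in omega if x % 2 == 0}))
```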
**Theorem 2**.: _For a closed set \(\Omega\subseteq\mathbb{R}^{n}\), \(I_{1:N}\) satisfies_
\[I_{1:N}(\Omega)\subseteq I(\Omega),\text{ and }\] \[\lim_{k\to\infty}(I_{1:N})^{k}(\Omega)=\lim_{k\to\infty}I^{k}( \Omega)=I^{\infty}(\Omega). \tag{7}\]
Proof.: By definition, for any \(i\neq j\),
\[I_{i}(I_{j}(\Omega))=\bigcup_{\ell\neq i}I_{j}(\Omega)_{\ell}\cup\big(Q(I_{j}(\Omega))\cap I_{j}(\Omega)_{i}\big).\]
Theorem 2 implies that \(\Omega\) may be shrunk as soon as any constituent subset is found to not be controlled invariant, rather than only after a full pass over \(\Omega\), reducing the total number of iterations required. Algorithm 3 describes the modified procedure that takes advantage of this fact. In the algorithm, \(\Omega\) is updated whenever an interval is determined to not be a part of the invariant set. The loop terminates if every interval in the queue has been checked since \(\Omega\) was last changed, meaning every interval is contained in the invariant set. The function \(\mathrm{checked}\) tracks whether a given \([x]\) has been tested against the current \(\Omega\). It defaults to \(0\), and is assigned a value of \(1\) after \([x]\) is processed. Note that \(\mathrm{push\_back}\) places an item at the back of the queue.
```
0:\(\Omega\), \(\varepsilon\)
1:\(Q\leftarrow\{\Omega\}\), \(N\leftarrow\emptyset\), \(\underline{L}\leftarrow\emptyset\), \(\mathcal{E}\leftarrow\emptyset\), \(\mathcal{U}_{I}\leftarrow\emptyset\)
2:while\(\exists[x]\in Q\), \(\mathrm{checked}([x],\Omega)=0\)do
3:\([x]\leftarrow\mathrm{pop\_front}(Q)\)
4: Compute \(A\), \(\Phi\), \(s_{i}\), and \(\Psi_{i}\) on \([x]\)
5:if\(\overline{P}([x])\cap\Omega=\emptyset\)then
6:\(N\gets N\cup[x]\)
7:\(\Omega\leftarrow\Omega\setminus[x]\)
8:elseif\(\exists u\in\mathcal{U}\) s.t. \(\overline{P}_{u}([x])\subseteq\Omega\)then
9:\(\mathrm{checked}([x],\Omega)\gets 1\)
10:\(\mathrm{push\_back}(Q,[x])\)
11:\(\mathcal{U}_{I}\leftarrow\mathcal{U}_{I}\cup([x],S^{\dagger}(S\mathcal{U}([ x])))\)
12:elseif\(w([x])\leq\varepsilon\)then
13:\(\mathcal{E}\leftarrow\mathcal{E}\cup[x]\)
14:\(\Omega\leftarrow\Omega\setminus[x]\)
15:else
16:\((l,r)\leftarrow\mathrm{bisect}([x])\)
17:\(\mathrm{push\_front}(Q,l)\)
18:\(\mathrm{push\_front}(Q,r)\)
19:endif
20:endwhile
21:\(\underline{I}^{\infty}\leftarrow\bigcup_{[x]\in Q}[x]\)
22:return\(\underline{I}^{\infty}\), \(\mathcal{U}_{I}\)
```
**Algorithm 3** Accelerated Approximation of \(I^{\infty}(\Omega)\)
**Lemma 5**.: _For any finite union of compact intervals \(\Omega\) and precision \(\varepsilon>0\), Algorithm 3 returns the same result as Algorithm 2. Furthermore, the number of iterations required by Algorithm 3 is less than or equal to the number required by Algorithm 2._
Proof.: The first statement can be proved by the same reasoning as Theorem 1. The second statement arises from the inclusion \(I_{1:N}(\Omega)\subseteq I(\Omega)\) (cf. Theorem 2) and can also be seen by direct comparison of the algorithms.
## V Examples and Comparisons
In this section, we demonstrate the effectiveness of our approach on a numerical example, an inverted pendulum on a cart. We also compare our approach to the method for switched systems in [9], in which the input space is gridded into samples that are treated as controlled modes.
As in [9], we consider an inverted pendulum on a cart, discretized using forward Euler with a sampling time of \(0.01\)s. The dynamics are
\[\dot{x}_{1} =x_{2},\] \[\dot{x}_{2} =\frac{mgl}{J}\sin(x_{1})-\frac{b}{J}x_{2}+\frac{l}{J}\cos(x_{ 1})u,\]
with parameters \(m=0.2\) kg, \(g=9.8\) m/s\({}^{2}\), \(l=0.3\) m, \(J=0.006\) kg \(\cdot\) m\({}^{2}\), and \(b=0.1\) N/m/s. This system and its discretization are control affine. We consider a region of interest \(\Omega=[-0.05,0.05]\times[-0.01,0.01]\) and an input set \(\mathcal{U}=[-0.1,0.1]\).
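For concreteness, a minimal Python sketch of the forward-Euler discretisation of these dynamics is given below, assuming the restoring term is \(mgl/J\,\sin(x_{1})\) as implied by the listed parameters; it is only meant to make the discretised model explicit and is not the toolchain used for the experiments.

```python
import numpy as np

# Parameters quoted above.
m, g, l, J, b = 0.2, 9.8, 0.3, 0.006, 0.1
dt = 0.01  # sampling time [s]

def pendulum_step(x, u):
    """One forward-Euler step of the inverted pendulum on a cart.
    x = (angle, angular velocity), u = control input."""
    x1, x2 = x
    dx1 = x2
    dx2 = (m * g * l / J) * np.sin(x1) - (b / J) * x2 + (l / J) * np.cos(x1) * u
    return np.array([x1 + dt * dx1, x2 + dt * dx2])

# Example: evolve from a small initial angle inside Omega with zero input.
x = np.array([0.02, 0.0])
for _ in range(100):
    x = pendulum_step(x, u=0.0)
print(x)
```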
Figure 1 shows the identified controlled invariant sets for our approaches, i.e., using Algorithms 2 and 3, and that of [9], with \(n_{u}=10\) and \(n_{u}=1000\) sampled inputs. All methods were run with a precision \(\varepsilon=0.001\). On the other hand, Figure 2 shows the union of all invariance-enforcing control inputs \(\mathcal{U}([x])\) identified by Algorithms 2 and 3. Finally, Table I shows a comparison of computation times with different parameters.
Fig. 1: Controlled invariant sets of the inverted pendulum system.
Fig. 2: Union of invariance-enforcing inputs \(\mathcal{U}([x])\) identified by Lemma 3.
Evidently, our method is able to identify a larger CIS in fewer iterations than the sampling and interval arithmetic based approach in [9], when the number of samples is small. This is presumably due to the higher accuracy of our polytopic approximations, and the fact that we consider the entire continuous range of control inputs. Increasing the number of sampled inputs results in a better approximation of the CIS, at the cost of some additional computation time.
## VI Conclusion
We proposed two methods for approximating controlled invariant sets of nonlinear control-affine systems using an iterative refinement approach. We used techniques from computational geometry involving translations of polyhedra, which allow us to efficiently compute continuous sets of feasible control inputs rather than relying on a sampling approach with switched dynamics. We demonstrated the effectiveness of our method on a numerical example, which showed improved accuracy over existing methods and led to faster convergence in some cases. In the future, we will further explore the extension of our approach to continuous time, as well as the control synthesis problem, including some notions of optimality, while also investigating ways to improve the accuracy and efficiency of our algorithms. We will also test our approaches on a wide variety of nonlinear systems.
### _Proof of Proposition 2_
We begin by stating two intermediate results which will be used to prove the proposition. The first allows us to determine whether two polytopes intersect by examining the hyperplanes defining each polytope.
**Proposition 4**.: _Given two polytopes \(\mathcal{P}\) and \(\mathcal{Q}\), \(\mathcal{P}\cap\mathcal{Q}\neq\emptyset\) if and only if both of the following statements are true_
1. \(\mathcal{P}\) _intersects every halfspace defining_ \(\mathcal{Q}\)_, i.e.,_ \(\forall i\in\{1,\ldots,N_{\mathcal{Q}}\},\ \exists p_{i}\in\mathcal{P}\) _such that_ \((H_{\mathcal{Q}})_{i}p_{i}\leq(b_{\mathcal{Q}})_{i}\)_._
2. \(\mathcal{Q}\) _intersects every halfspace defining_ \(\mathcal{P}\)_, i.e.,_ \(\forall i\in\{1,\ldots,N_{\mathcal{P}}\},\ \exists q_{i}\in\mathcal{Q}\) _such that_ \((H_{\mathcal{P}})_{i}q_{i}\leq(b_{\mathcal{P}})_{i}\)_._
Proof.: Necessity is simple, since if \(\mathcal{P}\cap\mathcal{Q}\neq\emptyset\), every \(v\in\mathcal{P}\cap\mathcal{Q}\) will satisfy the existence conditions in 1) and 2).
To prove sufficiency, note that \(\mathcal{P}\cap\mathcal{Q}=\emptyset\) if and only if there exists a separating hyperplane, defined by some \(h_{*}\in\mathbb{R}^{n}\) and \(b_{*}\in\mathbb{R}\), such that \(\forall v\in\mathcal{P},\ h_{*}^{\top}v\leq b_{*}\) and \(\forall v\in\mathcal{Q},\ h_{*}^{\top}v>b_{*}\). Conditions 1) and 2) preclude the existence of such a separating hyperplane, implying \(\mathcal{P}\cap\mathcal{Q}\neq\emptyset\).
The second intermediate result tells us how to translate a polytope so that it intersects a given halfspace.
**Proposition 5**.: _Given a polytope \(\mathcal{P}\) and a halfspace \(\mathcal{H}=\{x:h^{\top}x\leq b\}\), the set of translations of \(\mathcal{P}\) that intersect \(\mathcal{H}\), i.e., \(\mathcal{O}(\mathcal{P},\mathcal{H})\triangleq\{s\in\mathbb{R}^{n}:(\mathcal{P}\oplus\{s\})\cap\mathcal{H}\neq\emptyset\}\), is given by_
\[\mathcal{O}(\mathcal{P},\mathcal{H})=\{s\in\mathbb{R}^{n}:\ h^{\top}s\leq b- \alpha\},\]
_where \(\alpha=\min_{v\in V_{\mathcal{P}}}h^{\top}v\)._
Proof.: The reasoning is similar to [13, Theorem 2.3], with \(\min\) replacing \(\max\) because intersection, rather than containment, is required.
Proposition 2 follows from the combination of Proposition 4 with repeated application of Proposition 5 to every halfspace defining both \(\mathcal{P}\) and \(\mathcal{Q}\) (with \(-s\) replacing \(s\) in the second part, as the translation is applied to \(\mathcal{P}\), not \(\mathcal{Q}\)).
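To make the two ingredients of the proof concrete, the sketch below (Python/NumPy, with a toy triangle as \(\mathcal{P}\) and a single halfspace as \(\mathcal{H}\)) evaluates the translation set of Proposition 5 and the vertex-based intersection test behind condition 1) of Proposition 4; the shapes and numbers are placeholders.

```python
import numpy as np

# Polytope P given by its vertices; halfspace H = {x : h.x <= b}.
V_P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # a toy triangle
h, b = np.array([1.0, 1.0]), 1.5

# Proposition 5: translations s for which (P + s) meets H satisfy h.s <= b - alpha,
# with alpha the minimum of h.v over the vertices of P.
alpha = float(np.min(V_P @ h))
print("translation set: { s : h.s <=", b - alpha, "}")

# Proposition 4, condition 1): P meets H iff some vertex satisfies h.v <= b,
# since the minimum of a linear function over a polytope is attained at a vertex.
print("P meets H:", bool(np.any(V_P @ h <= b)))
```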
|
2307.08756 | Reviving stochasticity: uncertainty in SMBH binary eccentricity is
unavoidable | We study supermassive black hole (SMBH) binary eccentricity of equal-mass
galaxy mergers in $N$-body simulations with the KETJU code, which combines the
GADGET-4 fast multipole gravity solver with accurate regularised integration
and post-Newtonian corrections around SMBHs. In simulations with realistic,
high eccentricity galactic merger orbits, the hard binary eccentricity is found
to be a non-linear function of the deflection angle in the SMBH orbit during
the final, nearly radial close encounter between the SMBHs before they form a
bound binary. This mapping between the deflection angle and the binary
eccentricity has no apparent resolution dependence in our simulations spanning
the resolution range of $1\times10^5$-$8\times10^6$ particles per galaxy. The
mapping is also captured using a simple model with an analytic potential,
indicating that it is driven by the interplay between a smooth asymmetric
stellar background potential and dynamical friction acting on the SMBHs. Due to
the non-linearity of this mapping, in eccentric major merger configurations
small, parsec-scale variations in the merger orbit can result in binary
eccentricities varying in nearly the full possible range between $e=0$ and
$e=1$. In idealised simulations, such variations are caused by finite
resolution effects, and convergence of the binary eccentricity can be achieved
with increasing resolution. However, in real galaxies, other mechanisms such as
nuclear gas and substructure that perturb the merger orbit are likely to be
significant enough for the binary eccentricity to be effectively random. Our
results indicate that the distribution of these effectively random
eccentricities can be studied using even moderate resolution simulations. | Alexander Rawlings, Matias Mannerkoski, Peter H. Johansson, Thorsten Naab | 2023-07-17T18:01:04Z | http://arxiv.org/abs/2307.08756v3 | # Reviving stochasticity: uncertainty in SMBH binary eccentricity is unavoidable
###### Abstract
We study supermassive black hole (SMBH) binary eccentricity of equal-mass galaxy mergers in \(N\)-body simulations with the Ketju code, which combines the gadget-4 fast multipole gravity solver with accurate regularised integration and post-Newtonian corrections around SMBHs. In simulations with realistic, high eccentricity galactic merger orbits, the hard binary eccentricity is found to be a non-linear function of the deflection angle in the SMBH orbit during the final, nearly radial close encounter between the SMBHs before they form a bound binary. This mapping between the deflection angle and the binary eccentricity has no apparent resolution dependence in our simulations spanning the resolution range of \(1\times 10^{5}\)-\(8\times 10^{6}\) particles per galaxy. The mapping is also captured using a simple model with an analytic potential, indicating that it is driven by the interplay between a smooth asymmetric stellar background potential and dynamical friction acting on the SMBHs. Due to the non-linearity of this mapping, in eccentric major merger configurations small, parsec-scale variations in the merger orbit can result in binary eccentricities varying in nearly the full possible range between \(e=0\) and \(e=1\). In idealised simulations, such variations are caused by finite resolution effects, and convergence of the binary eccentricity can be achieved with increasing resolution. However, in real galaxies, other mechanisms such as nuclear gas and substructure that perturb the merger orbit are likely to be significant enough for the binary eccentricity to be effectively random. Our results indicate that the distribution of these effectively random eccentricities can be studied using even moderate resolution simulations.
keywords: black hole physics - galaxies: kinematics and dynamics - methods: numerical - software: simulations
## 1 Introduction
Supermassive black holes (SMBHs) are believed to reside at the centres of all massive galaxies (e.g. Kormendy & Ho, 2013). In the \(\Lambda\)CDM model, as galaxies grow through gas accretion and mergers (e.g. Volonteri et al., 2003; Naab & Ostriker, 2017), their SMBHs are also expected to interact in a three-phase merger process (Begelman et al., 1980).
Firstly, dynamical friction (DF, Chandrasekhar, 1943) acts to bring the SMBHs from kiloparsec-scales down to parsec-scale separations, after which the SMBHs form a bound binary with a semimajor axis \(a\) and eccentricity \(e\) (e.g. Milosavljevic & Merritt, 2001; Merritt, 2013). In the second phase, the SMBH binary separation is reduced through sequential slingshot encounters with the surrounding stellar distribution (Hills & Fullerton, 1980; Hills, 1983; Quinlan, 1996; Rantala et al., 2018). Finally, at small subparsec separations, gravitational wave (GW) emission becomes the dominant mechanism by which the SMBH binary can lose its remaining orbital energy and angular momentum, thus driving the SMBHs to coalescence (Peters & Mathews, 1963; Peters, 1964).
The complex nature of SMBH coalescence in a galaxy merger setting necessitates the use of numerical techniques (e.g. Berentzen et al., 2009; Khan et al., 2011; Dosopoulou & Antonini, 2017; Mannerkoski et al., 2019) to provide quantitative predictions for ongoing observational programmes such as ground-based pulsar timing arrays (PTAs, Agazie et al., 2023; Antoniadis et al., 2023; Xu et al., 2023; Zic et al., 2023) and the upcoming Laser Interferometer Space Antenna (LISA, e.g. Amaro-Seoane et al., 2023). In particular, the eccentricity of the binary significantly affects the GW emission, with higher eccentricities resulting both in faster mergers and in changes in the emitted GW spectrum (Enoki & Nagashima, 2007; Huerta et al., 2015; Taylor et al., 2016). Understanding SMBH binary eccentricity and how faithfully it is captured in numerical simulations is thus critical for predicting and interpreting observations made with instruments such as PTAs and LISA.
The SMBH binary merging process and its dependence on eccentricity and resolution has been extensively studied using collisionless merger simulations (e.g. Berentzen et al., 2009; Vasiliev et al., 2015; Bortolas et al., 2016; Gualandris et al., 2017, 2022). Recently, Nasim et al. (2020) have argued that the substantial scatter in SMBH binary eccentricity observed in gas-free merger simulations is an artefact of poor phase space discretization, and that in the real Universe where SMBH masses are far greater than stellar masses (\(M_{\bullet}\gg m_{\star}\)), SMBH binary eccentricity is a reliably predictable quantity.
In this paper, we find that for realistic galaxy merger orbits the scatter in the final SMBH binary eccentricity is due to the physical sensitivity of the final eccentricity to small perturbations of the final nearly radial plunging trajectory of the SMBHs before they become
bound, even in the infinite resolution limit. As a result, we argue that the stochasticity of the binary eccentricity is an unavoidable physical feature of realistic galaxy mergers, at least in the equal-mass case.
This paper is structured as follows. In Section 2 we briefly discuss the main features of the Ketju code and our numerical simulations. In Section 3 we study the eccentricity scatter in our simulations and also present a simple model that captures the observed scatter. We discuss our results and their implications in Section 4 and finally present our conclusions in Section 5.
## 2 Numerical simulations
We construct a number of idealised galaxy merger simulations, which we evolve with our new version of the Ketju code (Mannerkoski et al., 2023; Rantala et al., 2017). The dynamics of SMBHs, and stars in a small region around them, are integrated with an algorithmically regularised integrator (Rantala et al., 2020), whereas the dynamics of the remaining particles is computed with the gadget-4 (Springel et al., 2021) fast multipole method (FMM) with second order multipoles. Together with hierarchical time integration, this allows for symmetric interactions and manifest momentum conservation. Ketju also includes post-Newtonian (PN) correction terms up to order 3.5 between each pair of SMBHs (Blanchet, 2014).
Our galaxy models represent the nuclear bulge of a gas-devoid elliptical galaxy, and are designed to match exactly the models of Nasim et al. (2020). Each galaxy consists of a stellar-only Dehnen (1993) sphere with shape parameter \(\gamma=0.5\) and scale radius \(a=186\,\mathrm{pc}\), where the Dehnen profile is given by:
\[\rho_{\star}(r)=\frac{(3-\gamma)M_{\star}}{4\pi}\frac{a}{r^{\gamma}(r+a)^{(4- \gamma)}}\,. \tag{1}\]
The total stellar mass is \(M_{\star}=10^{10}\,M_{\sun}\), and at the centre of each model galaxy an SMBH of mass \(M_{\bullet}=10^{8}\,M_{\sun}\) is placed1. Our galaxies lie on the observed \(M_{\bullet}\)-\(\sigma\) relation presented in van den Bosch (2016). We test seven different mass resolutions with a varying number of stellar particles: \(N_{\star}=\{1.0\times 10^{5},2.5\times 10^{5},5.0\times 10^{5},1.0\times 10^{6}, 2.0\times 10^{6},4.0\times 10^{6},8.0\times 10^{6}\}\), corresponding to \(M_{\bullet}/m_{\star}=1000\)-\(80000\).
Footnote 1: We set the length and mass scales of the system to match one of those presented in Gualandris et al. (2022). However, as the performed simulations are gravity-only, the system may be transformed to other mass and length scales with the same \(M_{\mathrm{tot}}/a\) ratio without affecting the results.
We then construct isolated merger initial conditions by placing two galaxies on two different elliptical orbits with eccentricities of \(e_{0}=0.90\) and \(e_{0}=0.99\), at a fixed initial separation of \(D=3.72\,\mathrm{kpc}\) and a semimajor axis of \(a_{0}=2.79\,\mathrm{kpc}\). The \(e_{0}=0.90\) merger orbit matches that used by Nasim et al. (2020), with the radial and tangential velocities being consistent with the values reported in Gualandris et al. (2022). For each orbital configuration, we run ten realisations at each mass resolution to account for stochasticity caused by the discretised phase space. Interactions between stellar particles are softened with a softening length of \(\varepsilon=2.5\,\mathrm{pc}\), and the Ketju region radius is set to \(r_{\mathrm{Ketju}}=3\varepsilon=7.5\,\mathrm{pc}\).
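As a quick consistency check of the quoted resolutions, the particle masses and mass-resolution ratios implied by the setup can be worked out directly (a trivial Python sketch, using only values stated above):

```python
# Particle masses and mass-resolution ratios implied by the quoted setup.
M_star_total = 1e10     # total stellar mass per galaxy [Msun]
M_bh = 1e8              # SMBH mass [Msun]
for N_star in (1.0e5, 2.5e5, 5.0e5, 1.0e6, 2.0e6, 4.0e6, 8.0e6):
    m_star = M_star_total / N_star
    print(f"N_star = {N_star:.1e}: m_star = {m_star:.0f} Msun, "
          f"M_bh/m_star = {M_bh / m_star:.0f}")
```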
## 3 Results
### Eccentricity scatter in simulations
The SMBH binary eccentricity for both the \(e_{0}=0.90\) (blue) and \(e_{0}=0.99\) (orange) orbits is shown in Figure 1 as a function of shifted time \(t^{\prime}=t-t_{\mathrm{bound}}\), where \(t_{\mathrm{bound}}\) is the time when the binary orbital energy \(E\) becomes permanently negative. At the time \(t_{\mathrm{bound}}\) the SMBH binary is not yet isolated due to the large quantity of intervening stellar mass, which causes the Keplerian definition of eccentricity to oscillate and not instantaneously match the actual SMBH binary orbit exactly. In Figure 1 we indicate the time of hard binary formation by circle markers; from this point the Keplerian eccentricity appropriately describes the binary orbit. The binary hardening radius \(a_{\rm h}\) is defined as (e.g. Merritt, 2006):
\[a_{\mathrm{h}}=\frac{q}{(1+q)^{2}}\frac{r_{\mathrm{m}}}{4}=\frac{r_{\mathrm{m }}}{16}, \tag{2}\]
where \(r_{\mathrm{m}}=r(m<2M_{\star,1})\) is the influence radius and \(q\) is the mass ratio between the SMBHs (for our simulations \(q=1.0\)). Before a hard binary forms, both the \(e_{0}=0.90\) and \(e_{0}=0.99\) sets initially demonstrate an overall decrease in eccentricity (semitransparent lines in Figure 1) until \(t^{\prime}\sim 1\,\mathrm{Myr}\). It should be noted that the precise value of the eccentricity in this regime is not robust due to the SMBHs orbiting in a potential still largely influenced by the stellar background, however the overall trend is consistent with the binary orbital geometry. After \(t^{\prime}\sim 1\,\mathrm{Myr}\), some of the \(e_{0}=0.99\) realisations continue to have a decreasing eccentricity, and all of the \(e_{0}=0.90\) realisations have an increasing eccentricity. We discuss the mechanism for this in subsection 3.3.
The \(e_{0}=0.99\) runs show a wide variation in eccentricity, where \(e\) spans almost the entire domain range \(e=[0,1]\). As a test we also perform two sets of ten runs using the \(10^{6}\) particle resolution setup, but reduce the SMBH mass to \(5.0\times 10^{7}\,M_{\sun}\) and \(1.0\times 10^{7}\,M_{\sun}\), and observe the same qualitative spread in eccentricity as in the \(M_{\bullet}=10^{8}\,M_{\sun}\) case. Even though the \(e_{0}=0.99\) runs have a higher initial merger eccentricity than the \(e_{0}=0.90\) runs, none of the SMBH binaries in the shown \(e_{0}=0.99\) set obtain high enough eccentricities to undergo GW-induced coalescence during the \(50\,\mathrm{Myr}\) timespan. The \(e_{0}=0.90\) set demonstrates six SMBH-binary mergers within \(50\,\mathrm{Myr}\) of forming a bound binary, seen as a rapid orbit circularisation in Figure 1, which is captured self-consistently using Ketju.
To characterise the scatter in eccentricity, we determine the mean eccentricity \(e_{\mathrm{h}}\) over five orbital periods centred on the orbit within which the SMBH binary has become hard (equation (2)). We characterise the inter-simulation eccentricity scatter in the mean values of \(e_{\mathrm{h}}\) with the standard deviation, denoted as \(\sigma_{e}\).
We show the dependence of \(\sigma_{e}\) on mass resolution for the \(e_{0}=0.90\) orbit in Figure 2 with blue circle markers, and for the \(e_{0}=0.99\) orbit using orange square markers.
For the \(e_{0}=0.90\) orbit, the convergence of \(\sigma_{e}\) scales as \(1/\sqrt{N_{\star,\mathrm{tot}}}\), where \(N_{\star,\mathrm{tot}}\) is the total number of stellar particles in the merger. The values of \(\sigma_{e}\) we report are quantitatively similar to the values found by Nasim et al. (2020) for the same system, as can be seen by the teal triangles in Figure 2. The variation in eccentricity does not show the same scaling for the \(e_{0}=0.99\) orbit as the \(e_{0}=0.90\) orbit. For mass resolutions \(M_{\star}/m_{\star}\leq 10000\), the value of \(\sigma_{e}\) is almost constant, before significantly dropping at higher mass resolutions.
### Binary binding process and the scatter in eccentricity
To understand the observed scatter in eccentricity, we investigate the binary binding process in each simulation. Before the SMBHs are bound, the primary influence on the motion is from the galactic merger potential; two realisations for both merger orbits are shown in Figure 3. The initial stages of the orbit show indiscernible variation between realisations; however, after a particularly strong interaction between the SMBHs during a pericentre passage, the orbits deviate
between realisations (inset panels, Figure 3), and evolve to different binary eccentricities.
To quantify the strength of the hard-scattering process that randomises the SMBH orbit, we measure the effective two-body deflection angle, defined as:
\[\theta_{\rm defl}=2\arctan\left(\frac{GM}{L\sqrt{2E}}\right)=2\arctan\left(\frac {b_{90}}{b}\right) \tag{3}\]
where \(M=M_{\bullet,1}+M_{\bullet,2}\), \(L\) is the magnitude of the SMBH system angular momentum vector, \(E\) the orbital energy at the time of the pericentre passage, \(b\) the effective impact parameter, and \(b_{90}\) the \(90^{\circ}\) deflection radius (Binney & Tremaine, 2008).
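A short numerical sketch of equation (3) is given below; the impact parameters are illustrative values in parsecs, and the two \(b_{90}\) values correspond to the reference deflection radii quoted for the two merger orbits in Section 3.3.

```python
import numpy as np

def deflection_angle(b, b90):
    """Effective two-body deflection angle of eq. (3), in degrees,
    from the impact parameter b and the 90-degree deflection radius b90."""
    return np.degrees(2.0 * np.arctan2(b90, b))

# Illustrative values [pc]: b90 = 5.8 (e0 = 0.90 case) and 3.3 (e0 = 0.99 case).
for b90 in (5.8, 3.3):
    print([round(deflection_angle(b, b90), 1) for b in (1.0, 5.0, 10.0, 20.0)])
```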
We observe clear evidence for a relationship between the deflection angle \(\theta_{\rm defl}\) and the resulting eccentricity at the time the SMBH binary becomes hard, as shown with the data points in Figure 4. We also observe a similar relationship in our two test simulation sets with the reduced SMBH masses.
### Reproducing the eccentricity behaviour with a simple model
The dependency of the binary eccentricity on the deflection angle \(\theta_{\rm defl}\) during the binary formation can be reproduced using a simple analytic model, which includes only the essential components relevant to this process. Taking the orbit to lie in the \(xy\)-plane, the relative motion of the two equal-mass SMBHs in this model follows the equation of motion
\[\ddot{\mathbf{x}}=-\frac{2GM_{\bullet}}{|\mathbf{x}|^{3}}\mathbf{x}+\mathbf{a}_{\rm bg}+\mathbf{a}_{\rm DF}, \tag{4}\]
where \(\mathbf{x}=(x,y)\) is the separation vector of the SMBHs, \(M_{\bullet}=10^{8}\,M_{\sun}\) is the mass of a single SMBH, \(\mathbf{a}_{\rm bg}\) the acceleration due to the asymmetric background potential, and \(\mathbf{a}_{\rm DF}\) is the acceleration due to DF.
The stellar background is modelled as a constant density spheroidal potential (e.g. Binney & Tremaine, 2008):
\[\Phi_{\rm bg}(\mathbf{x})=\pi G\rho(A_{x}x^{2}+A_{y}y^{2}), \tag{5}\]
with the acceleration given by
\[\mathbf{a}_{\rm bg}(\mathbf{x})=-\nabla\Phi_{\rm bg}(\mathbf{x}). \tag{6}\]
The \(A\) coefficients are related to the eccentricity \(e_{\rm s}\) of the spheroid
Figure 1: Binary eccentricity \(e\) (note the non-linear scale) as a function of the shifted time \(t^{\prime}=t-t_{\rm bound}\) for the \(M_{\bullet}/m_{\star}=5000\) resolution simulations from the \(e_{0}=0.90\) set (blue lines) and \(e_{0}=0.99\) set (orange lines). The circle markers indicate the time when a hard binary formed, with the preceding semitransparent lines indicating the transition period when the hard binary is still forming, and the solid lines the hard binary evolution. During the transition period, the Keplerian definition of binary motion is affected by the intervening stellar mass, creating the oscillatory artefacts in the semitransparent lines. In the \(e_{0}=0.90\) simulation set, six simulations have a binary merger within 50 Myr.
Figure 2: Convergence of the eccentricity scatter \(\sigma_{e}\) as a function of mass resolution \(M_{\bullet}/m_{\star}\) and the total number of stellar particles \(N_{\star,\rm tot}\). The results from Nasim et al. (2020) are shown as teal triangles for comparison. The expected \(1/\sqrt{N_{\star}}\) scaling is recovered for the \(e_{0}=0.90\) mergers. For the \(e_{0}=0.99\) mergers, \(\sigma_{e}\) does not decrease until \(M_{\bullet}/m_{\star}\geq 20000\).
Figure 4: Deflection angle \(\theta_{\rm defl}\) and resulting hard binary eccentricity in the \(e_{0}=0.90\) (left) and \(e_{0}=0.99\) (right) simulations, compared to fitted model curves. The solid lines show the results from the analytic model with \(e_{\rm s}=0.91\), while the semitransparent lines show the results for \(e_{\rm s}=0.90\) and \(e_{\rm s}=0.92\). The \(e_{0}=0.90\) model curves have been shifted left by \(14^{\circ}\) to better match the simulation data, with the shift indicated by an arrow. The marginal histograms show kernel density estimates of the simulation data. The inset panel for the \(e_{0}=0.90\) data shows a zoom-in with \(\theta_{\rm defl}=[52^{\circ},73^{\circ}]\) and \(e_{\rm h}=[0.96,1.00]\).
Figure 3: Orbits from \(t=0\,{\rm Myr}\) to a time shortly after \(t_{\rm bound}\) of a single SMBH in two representative realisations of the \(e_{0}=0.90\) mergers (left) and the \(e_{0}=0.99\) mergers (right), which by symmetry of the equal mass system is a reflection of the second BH orbit about \(x=-z\). The line gradient shows the shifted time \(t^{\prime}=t-t_{\rm bound}\), however the colouring for \(|t^{\prime}|>2\,{\rm Myr}\) is constant for visual clarity. In each inset panel, \(\theta_{\rm defl}\) is the deflection angle at the pericentre between the arrows. The circle marker indicates the SMBH position when a bound binary is formed.
as
\[A_{\rm x} =2\left(\frac{1-e_{\rm s}^{2}}{e_{\rm s}^{2}}\right)\left[\frac{1}{2 e_{\rm s}}\ln\left(\frac{1+e_{\rm s}}{1-e_{\rm s}}\right)-1\right] \tag{7}\] \[A_{\rm y} =\frac{1-e_{\rm s}^{2}}{e_{\rm s}^{2}}\left[\frac{1}{1-e_{\rm s}^ {2}}-\frac{1}{2e_{\rm s}}\ln\left(\frac{1+e_{\rm s}}{1-e_{\rm s}}\right)\right], \tag{8}\]
with the long axis aligned along the \(x\)-axis. The stellar density is set to \(\rho=300\,M_{\sun}\,\rm pc^{-3}\), which approximately matches the values seen in the simulations when the SMBH binary is becoming bound.
The DF acting on a single BH is modelled using the Chandrasekhar (1943) formula assuming a Maxwellian distribution with a constant velocity dispersion \(\sigma_{\star}\)(e.g. Binney and Tremaine, 2008):
\[\boldsymbol{a}_{\rm 1,DF}=-\frac{4\pi G^{2}M_{\bullet}\rho\ln\Lambda}{|\boldsymbol{v}_{\rm 1}|^{3}}\left(\operatorname{erf}(X)-\frac{2X}{\sqrt{\pi}}\exp(-X^{2})\right)\boldsymbol{v}_{\rm 1}, \tag{9}\]
where \(\boldsymbol{v}_{\rm 1}=\dot{\boldsymbol{x}}/2\) is the velocity of a single BH, and \(X=|\boldsymbol{v}_{\rm 1}|/(\sqrt{2}\sigma_{\star})\). We set \(\sigma_{\star}=200\,\rm km\,s^{-1}\) based on the value measured from the simulations. The value of the Coulomb logarithm is expected to be in the range \(\ln\Lambda\sim 4-5\) based on the size of the stellar system, and we find that \(\ln\Lambda=4.7\) gives results that agree well with the simulations.
The effect of DF weakens as the SMBH binary orbit shrinks. To account for this, when the binary is bound we multiply the acceleration given by equation (9) with the smooth cut-off function
\[f(a)=\frac{1}{1+\exp[(a_{\rm c}-a)/d_{\rm c}]}. \tag{10}\]
Here \(a\) is the semimajor axis of a bound binary, and the cut-off scales are \(a_{\rm c}=2a_{\rm h}\) and \(d_{\rm c}=0.5a_{\rm h}\), with \(a_{\rm h}\) defined by equation (2). The total DF term is then
\[\boldsymbol{a}_{\rm DF}=2f(a)\,\boldsymbol{a}_{\rm 1,DF}. \tag{11}\]
To mimic the nearly linearly plunging orbits before the scattering event seen in Figure 3, we specify the initial conditions as
\[\boldsymbol{x}_{\rm 0} =(25\,\rm pc,\,\it b) \tag{12}\] \[\dot{\boldsymbol{x}}_{\rm 0} =(-v_{\rm 0},0). \tag{13}\]
In order to match the typical SMBH relative velocity seen at this separation in the simulations we set \(v_{\rm 0}=450\,\rm km\,s^{-1}\) for the \(e_{\rm 0}=0.90\) case, and \(v_{\rm 0}=560\,\rm km\,s^{-1}\) for the \(e_{\rm 0}=0.99\) case.
The spheroid eccentricity parameter \(e_{\rm s}\) is not easily measured from the simulations, due to the presence of stellar components that remain tightly bound to the SMBHs and obscure the background potential relevant for the dynamics, in combination with a constantly evolving potential. However, only sufficiently large values of \(e_{\rm s}\) allow for low binary eccentricities to be produced, as is shown by Figure 5. Performing calculations with different values of \(e_{\rm s}\), we found a good fit to the simulation results for \(e_{\rm s}\approx 0.9\).
We compute the resulting eccentricities for a range of impact parameters up to \(b=20\,\rm pc\) for the two initial velocities, by solving the equation of motion (4) until the binary has become hard using the error controlled 8th order Runge-Kutta method DOP853 included in the SciPy library (Virtanen et al., 2020). For reference, the 90\({}^{\circ}\) deflection radii are \(b_{90}\approx 5.8\,\rm pc\) and \(b_{90}\approx 3.3\,\rm pc\) for the \(e_{\rm 0}=0.90\) and \(e_{\rm 0}=0.99\) cases, respectively.
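A minimal sketch of this calculation in Python with SciPy's DOP853 integrator is shown below. It uses the parameter values quoted in the text, but omits the cut-off function of equation (10) and the hard-binary stopping criterion, replacing them with a simple separation-based termination; the unit-conversion constant and the termination radius are choices made here for illustration, so the sketch reproduces the structure of the model rather than the calibrated curves of Figure 4.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import erf

G = 4.301e-3     # gravitational constant [pc (km/s)^2 / Msun]
KMS = 1.0227     # 1 km/s expressed in pc/Myr
M_bh, rho, sigma, lnL, es = 1e8, 300.0, 200.0, 4.7, 0.91

# Spheroid coefficients of eqs. (7)-(8).
Ax = 2 * (1 - es**2) / es**2 * (np.log((1 + es) / (1 - es)) / (2 * es) - 1)
Ay = (1 - es**2) / es**2 * (1 / (1 - es**2) - np.log((1 + es) / (1 - es)) / (2 * es))

def rhs(t, y):
    """Relative SMBH motion, eq. (4): position in pc, velocity in km/s, time in Myr."""
    x, v = y[:2], y[2:]
    r = np.linalg.norm(x)
    a_kep = -2 * G * M_bh * x / r**3                        # mutual attraction
    a_bg = -2 * np.pi * G * rho * np.array([Ax, Ay]) * x    # eqs. (5)-(6)
    v1 = v / 2.0                                            # single-BH velocity
    X = np.linalg.norm(v1) / (np.sqrt(2) * sigma)
    a_df1 = (-4 * np.pi * G**2 * M_bh * rho * lnL / np.linalg.norm(v1)**3
             * (erf(X) - 2 * X / np.sqrt(np.pi) * np.exp(-X**2)) * v1)  # eq. (9)
    a_tot = a_kep + a_bg + 2 * a_df1                        # eq. (11) without the cut-off
    return np.concatenate([KMS * v, KMS * a_tot])

def close_pair(t, y):        # stop near the hard-binary scale (illustrative 2 pc)
    return np.linalg.norm(y[:2]) - 2.0
close_pair.terminal = True

# One trajectory with the initial conditions of eqs. (12)-(13), e0 = 0.99 case.
b, v0 = 5.0, 560.0           # impact parameter [pc], initial speed [km/s]
sol = solve_ivp(rhs, (0.0, 5.0), [25.0, b, -v0, 0.0], method="DOP853",
                rtol=1e-9, atol=1e-8, events=close_pair)
print(sol.t[-1], sol.y[:2, -1])
```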
The resulting model curves are shown together with the simulation data in Figure 4. To correctly match the data in the \(e_{\rm 0}=0.90\) case, the model curve has been shifted to the left by \(14^{\circ}\). This shift is likely required due to the rotation of the stellar component tilting the background potential relative to the SMBH trajectories in the simulations, with the effect being evident only in the \(e_{\rm 0}=0.90\) case due to the less radial merger orbit. In the \(e_{\rm 0}=0.99\) case we can also see that the simulation data has significant scatter around the model curve in the \(\theta_{\rm defl}\gtrsim 120^{\circ}\) region, although the simple model does appear to capture the mean behaviour relatively well, apart from the very largest values of \(\theta_{\rm defl}\). The behaviour of the model in this region is also sensitive to the exact values of \(e_{\rm s}\) and \(\ln\Lambda\), which might also be related to the large scatter seen in the simulation data.
However, in general the behaviour of the simulation data is captured well, with the model correctly producing the two main eccentricity minima, as well as the \(e\approx 1\) region between them. The predicted minima occur for trajectories where the torque from the background potential together with the DF causes the SMBHs to loop around into a nearly circular orbit, resulting in the low eccentricities seen in some of the \(e_{\rm 0}=0.99\) simulation data. Conversely, highly eccentric binaries are produced when the binary is trapped into nearly radial oscillations along the main axes of the potential. These mechanisms are contrasted in Figure 5.
## 4 Discussion
As shown by both the simulations and the analytic model, the eccentricity of the hard SMBH binary in the studied merger configurations can span nearly the full possible range in the interval \(e_{\rm h}=[0,1]\), depending on comparatively small, parsec-scale differences in the particular realisation of the galactic merger orbit. On the other hand, in minor mergers where the stellar background is less asymmetric when the SMBHs become bound, the scatter in eccentricity is likely to be relatively low even in the case of a radial merger orbit, since the system is less sensitive to the exact value of the deflection angle, as seen in the \(e_{\rm s}=0.2\) case in Figure 5.
Merger orbits with a lower initial eccentricity \(e_{\rm 0}\) than the values in this work can be expected to show less scatter in the binary eccentricity due to the lack of a hard scattering event that is sensitive to slight perturbations in the merger orbit, which was also seen in the study by Gualandris et al. (2022). However, major mergers are expected to occur on highly radial orbits based on cosmological simulations (e.g. Khochfar and Burkert, 2006). In addition, the DF from the dark matter halo would in general drive even lower eccentricity merger orbits to highly radial final plunges when the initial separation of the galaxies is large enough.
The simulation setup used in this work is idealised, and covers only a small part of the parameter space of merger configurations. However, similar behaviour can be expected to occur also in more realistic models, since the relevant dynamics occurs within a few hundred parsecs from the centre of the merged galaxy, where we do not expect any significant effects from dark matter or the outer parts of the galaxy, which are not included in our present simulations. Preliminary results using steeper (\(\gamma\gtrsim 1\)), more observationally-motivated galaxy density profiles indicate that similar behaviour occurs also in more realistic merger systems.
The variation of the deflection angle is caused by the random variation of the impact parameter \(b\) through equation (3). In numerical simulations with a discretised phase space, the primary cause of random variation in \(b\) is the Brownian motion of the SMBHs before they form a bound binary (Merritt, 2001; Bortolas et al., 2016). This effect scales as \(1/\sqrt{N_{\star,\rm tot}}\), which can explain the observed scaling of \(\sigma_{e}\) (Nasim et al., 2020) when the system falls into the region of parameter space where the relation between \(e\) and \(b\) is approximately linear. The deviation from this scaling seen in Figure 2 can then be explained by the fact that the eccentricity is not globally a linear function of the impact parameter. The non-linear mapping also explains why GW-induced mergers were present within a short (\(<50\) Myr) timeframe for the \(e_{0}=0.90\) mergers and not the \(e_{0}=0.99\) mergers in Figure 1, and highlights the difficulty in predicting the binary eccentricity \(e_{\rm h}\) from the initial merger orbit.
While the scatter in the impact parameter between different numerical realisations of the same merger is due to the relatively low number of particles compared to real galaxies, we expect that uncertainty of a similar magnitude is also present in real systems due to various mechanisms, such as perturbations of the SMBH motion from gas and substructure in their stellar environment, or larger-scale perturbations of the galactic merger orbit. In addition, the SMBHs are also not necessarily located exactly at the centre of mass of the galaxy. For example, the work by Batcheldor et al. (2010) has suggested that the SMBH in M87 may be displaced from the galactic photocentre by up to \(\sim 7\) pc due to jet acceleration or gravitational recoil kicks. Thus, it is not possible to give an exact prediction for the SMBH binary eccentricity produced by a given merger, and instead we must focus on predicting the distribution of eccentricities that can be produced by a given merger configuration.
Predicting the eccentricity distribution requires the knowledge of the distribution of impact parameters that can result from essentially identical mergers as well as the mapping between the impact parameter, or deflection angle, and the final eccentricity. Since the relation between the impact parameter and the eccentricity does not appear to significantly depend on the resolution, simulations at moderate mass resolutions of \(M_{\bullet}/m_{\bullet}\sim 10^{3}\) can be used to map out the relation by performing a large number of merger simulations, possibly augmented by more sophisticated versions of the analytic model presented here. The distribution of impact parameters in real systems is however a much more difficult problem to tackle, since in simulations numerical resolution effects are likely dominant.
Even without such extensive studies on the full distribution of SMBH binary eccentricities, the present results suggest that the distribution is likely to be broad and contain a significant number of binaries at high eccentricities. The possibility of a large fraction of high binary eccentricities has implications in particular for studies of the expected gravitational wave background (GWB) signal measured by PTAs, as in particular binaries with \(e_{\rm h}\gtrsim 0.9\) can retain significant eccentricities while their GW emission is within the observable frequency band. This could be observable in the shape of the GWB spectrum (Kelley et al., 2017), and would also affect the likelihood of detecting individual SMBH binaries with PTAs (Kelley et al., 2018). It is interesting to note that Bi et al. (2023) obtained a distribution of eccentricities consistent with the expectation of a significant fraction of SMBH binaries at high eccentricities based on analysis of the recent PTA observations.
Figure 5: Top: Sample orbits computed with the analytic model for different background potential eccentricities \(e_{\rm s}\), with otherwise identical initial conditions. The dashed lines show the isopotential contours of the model. Only the early part of the orbital evolution is shown for clarity. Bottom: Orbital eccentricity \(e\) of the model orbits (note the non-linear scale). The vertical line marks the end of the period shown in the top panels.
In the present work, the influence of rotation on the SMBH orbits is not considered. A counter-rotating stellar background with respect to the SMBH binary angular momentum has a tendency to increase the binary eccentricity, whereas a corotating stellar background tends to circularise the binary (Sesana et al., 2011; Holley-Bockelmann and Khan, 2015). The SMBH angular momentum vector can, however, undergo reversals as a result of gravitational torques following pericentre passages of the binary (Rantala et al., 2019, termed an orbital flip), an example of which is shown in the inset of the left panel of Figure 3. The chaotic reversals of the binary angular momentum act to randomise the relative rotation of the stellar background, leading in general to complex eccentricity evolution (Nasim et al., 2021). If the stellar scattering interactions were strong enough, a sustained co- or counter-rotating stellar background could drive the binary eccentricity, after \(e_{\rm h}\) is set, towards a bimodal distribution for a given merger configuration. Whilst these mechanisms are unlikely to substantially alter our findings concerning the hard binary eccentricity, they may obfuscate predictions for the SMBH binary eccentricity prior to the merger, which is critical for GW detection missions.
In contrast to the gas-free and fairly low density systems studied here, in higher density or gas-rich systems the binary eccentricity can also evolve significantly after the binary has become hard but before it has entered the GW emission dominated regime. In particular, interactions with a circumbinary accretion disc can lead to rapid evolution of the eccentricity if the accretion rate is high. Studies of prograde coplanar circumbinary discs have found that at almost all eccentricities the binaries evolve towards an eccentricity of \(e\approx 0.5\)(Zrake et al., 2021; D'Orazio and Duffell, 2021), whereas retrograde discs lead to the eccentricity always increasing (Tiede and D'Orazio, 2023). However, these studies are generally limited to eccentricities below \(e<0.9\). Our results indicate the need for extending such studies to higher binary eccentricities, as well as to more general configurations such as polar discs which become increasingly likely at high eccentricities (e.g. Martin and Lubow, 2017).
## 5 Conclusions
In conclusion, we find that the variation in the hard binary eccentricity \(e_{\rm h}\) for a given system configuration is tightly correlated with the deflection angle \(\theta_{\rm defl}\), and thus the impact parameter \(b\). By using a simple, resolution-free analytic model of the SMBH scattering process, we have demonstrated that uncertainty in SMBH binary eccentricity is not caused solely by discretisation effects in galaxy merger simulations, but is rather due to the physical sensitivity of the system to small changes in the merger orbit, which can be caused by physical mechanisms in addition to numerical discretisation effects. The results presented here justify extending the investigation to more realistic galaxy merger scenarios, in order to quantify the expected range of hard binary eccentricities, which is critical for predictions for current and future gravitational wave observation missions.
## Acknowledgments
We thank the anonymous referee for their comments which contributed to improving this work. We also thank Shihong Liao, Dimitrios Irodotou, and Francesco Paolo Rizzuto for interesting discussions. A.R. acknowledges the support by the University of Helsinki Research Foundation. A.R., M.M. and P.H.J. acknowledge the support by the European Research Council via ERC Consolidator Grant KETJU (no. 818930) and the support of the Academy of Finland grant 339127. TN acknowledges support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2094 - 390783311 from the DFG Cluster of Excellence "ORIGINS".
The numerical simulations used computational resources provided by the CSC - IT centre for Science, Finland.
## Author contributions
We list here the roles and contributions of the authors according to the Contributor Roles Taxonomy (CRediT). **AR**: Conceptualisation, Investigation, Formal analysis, Data curation, Writing - original draft. **MM**: Conceptualisation, Formal analysis, Writing - original draft. **PHJ**: Supervision, Writing - review & editing. **TN**: Writing - review & editing.
## Software
Ketju(Mannerkoski et al., 2023; Rantala et al., 2017), gadget-4 (Springel et al., 2021), NumPy (Harris et al., 2020), SciPy (Virtanen et al., 2020), Matplotlib (Hunter, 2007), pygad (Rottgers et al., 2020).
## Data availability
The data underlying this article will be shared on reasonable request to the corresponding author.
|
2305.09231 | Non-Hermitian Casimir effect of magnons | There has been a growing interest in non-Hermitian quantum mechanics. The key
concepts of quantum mechanics are quantum fluctuations. Quantum fluctuations of
quantum fields confined in a finite-size system induce the zero-point energy
shift. This quantum phenomenon, the Casimir effect, is one of the most striking
phenomena of quantum mechanics in the sense that there are no classical analogs
and has been attracting much attention beyond the hierarchy of energy scales,
ranging from elementary particle physics to condensed matter physics, together
with photonics. However, the non-Hermitian extension of the Casimir effect and
the application to spintronics have not yet been investigated enough, although
exploring energy sources and developing energy-efficient nanodevices are its
central issues. Here we fill this gap. By developing a magnonic analog of the
Casimir effect into non-Hermitian systems, we show that this non-Hermitian
Casimir effect of magnons is enhanced as the Gilbert damping constant (i.e.,
the energy dissipation rate) increases. When the damping constant exceeds a
critical value, the non-Hermitian Casimir effect of magnons exhibits an
oscillating behavior, including a beating one, as a function of the film
thickness and is characterized by the exceptional point. Our result suggests
that energy dissipation serves as a key ingredient of Casimir engineering. | Kouki Nakata, Kei Suzuki | 2023-05-16T07:21:00Z | http://arxiv.org/abs/2305.09231v2 | # Non-Hermitian Casimir Effect of Magnons
###### Abstract
There has been a growing interest in non-Hermitian quantum mechanics. The key concepts of quantum mechanics are quantum fluctuations. Quantum fluctuations of quantum fields confined in a finite-size system induce the zero-point energy shift. This quantum phenomenon, the Casimir effect, is one of the most striking phenomena of quantum mechanics in the sense that there are no classical analogs and has been attracting much attention beyond the hierarchy of energy scales, ranging from elementary particle physics to condensed matter physics, together with photonics. However, the non-Hermitian extension of the Casimir effect and the application to spintronics have not yet been investigated enough, although exploring energy sources and developing energy-efficient nanodevices are its central issues. Here we fill this gap. By developing a magnonic analog of the Casimir effect into non-Hermitian systems, we show that this non-Hermitian Casimir effect of magnons is enhanced as the Gilbert damping constant (i.e., the energy dissipation rate) increases. When the damping constant exceeds a critical value, the non-Hermitian Casimir effect of magnons exhibits an oscillating behavior, including a beating one, as a function of the film thickness and is characterized by the exceptional point. Our result suggests that energy dissipation serves as a key ingredient of Casimir engineering.
_Introduction._--Recently, non-Hermitian physics has been drawing considerable attention not only for fundamental science but also for applications such as energy-efficient nanodevices [1]. Thanks to the complete absence of conduction electrons, insulating magnets are free from the drawbacks of conventional electronics, e.g., substantial energy loss due to Joule heating. However, the effect of energy dissipation on spins is unavoidable even in insulating magnets. Hence, exploring energy sources and developing the potential for applications is a crucial issue.
We shed light on this problem by using a quantum effect, the Casimir effect [2; 3; 4], which arises from the zero-point energy. Quantum fluctuations of quantum fields realize a zero-point energy shift under spatial boundary conditions. This Casimir effect is one of the most striking phenomena of quantum mechanics in the sense that there are no classical analogs. Although the original platform for the Casimir effect [2; 3; 4] is the photon field [5], the concept can be extended to various fields such as scalar, tensor, and spinor fields. Thanks to this universal property, the Casimir effects have been investigated in various research areas [6] beyond the hierarchy of energy scales [7; 8; 9; 10; 11; 12], ranging from elementary particle physics to condensed matter physics, together with photonics. However, the non-Hermitian extension of the Casimir effect and the application to spintronics remain missing ingredients, although exploring energy sources and developing the potential for energy-efficient nanodevices are the central issues of spintronics [13; 14; 15; 16].
Here we fill this gap. The Casimir effects are characterized by the energy dispersion relation. We therefore incorporate the effect of energy dissipation into the energy dispersion relation of magnons through the Gilbert damping constant [17] and thus develop a magnonic analog of the Casimir effect, called the magnonic Casimir effect [18], into non-Hermitian systems. We then show that this non-Hermitian extension of the magnonic Casimir effect, which we call the magnonic non-Hermitian Casimir effect, is enhanced as the Gilbert damping constant (i.e., the energy dissipation rate) increases. When the damping constant exceeds a critical value, the magnonic non-Hermitian Casimir effect exhibits an oscillating behavior as a function of the film thickness and is characterized by the exceptional point [19] (EP). We refer to this behavior as the magnonic EP-induced Casimir oscillation. We emphasize that this magnonic EP-induced Casimir oscillation is absent in the dissipationless system of magnons. The magnonic EP-induced Casimir oscillation exhibits a beating behavior in the antiferromagnets (AFMs) where the degeneracy between two kinds of magnons is lifted. Our result suggests that energy dissipation serves as a new handle on Casimir engineering [20] to control and manipulate the Casimir effect of magnons. Thus, we pave a way for magnonic Casimir engineering through the utilization of energy dissipation.
_Magnonic non-Hermitian system._--We consider the insulating AFMs of two-sublattice systems in three dimensions described by the Heisenberg model (see Fig. 1),
Figure 1: Schematic of the thin film of the AFMs.
where the AFMs have the Neel magnetic order and there exists the zero-point energy [21; 22]. Throughout this study, we work under the assumption that the Neel phase remains stable in the presence of energy dissipation. Elementary magnetic excitations are two kinds of magnons having the spin angular momentum \(\sigma\hbar\) with the index \(\sigma=\pm\) and the reduced Planck constant \(\hbar\). By incorporating the effect of energy dissipation into the energy dispersion relation of magnons through the two-coupled Landau-Lifshitz-Gilbert equation, where the Gilbert damping constant \(\alpha>0\) takes the same value on each sublattice, we study the low-energy magnon dynamics [23] described by the energy dispersion relation \(\epsilon_{\sigma,\mathbf{k},\alpha}\in\mathbb{C}\) with the wavenumber \(\mathbf{k}=(k_{x},k_{y},k_{z})\in\mathbb{R}^{3}\)[24] in the long wavelength limit as [25]
\[\epsilon_{\sigma,\mathbf{k},\alpha}=\frac{2S}{1+\alpha^{2}}\Big{(}-i\alpha C+ \sqrt{(E_{\sigma,\mathbf{k},\alpha})^{2}}\Big{)} \tag{1}\]
and
\[(E_{\sigma,\mathbf{k},\alpha})^{2}:={A_{\sigma,\alpha}}^{2}(ak)^{2}+{\delta_{ \sigma}}^{2}-{\cal D}_{\sigma}{}^{2}\alpha^{2}, \tag{2}\]
where \(k:=|\mathbf{k}|\), the length of a magnetic unit cell is \(a\), the spin moment in a magnetic unit cell is \(S\), and the others are material-dependent parameters which are independent of the wavenumber, \(0<A_{\sigma,\alpha}\in\mathbb{R}\), \(0<\delta_{\sigma}\in\mathbb{R}\), \(0<{\cal D}_{\sigma}\in\mathbb{R}\), and \(0<C\in\mathbb{R}\): In the presence of spin anisotropies, the parameters are given as [25]
\[A_{\sigma,\alpha}= \sqrt{(1+\alpha^{2})\Big{(}J^{2}+\sigma\frac{K_{\rm h}}{2}J\Big{)}}, \tag{3a}\] \[\delta_{\sigma}= \sqrt{K_{\rm e}(2J+K_{\rm e})+K_{\rm h}(J-\sigma J+K_{\rm e})},\] (3b) \[{\cal D}_{\sigma}= \sqrt{J^{2}+\sigma K_{\rm h}J+\frac{K_{\rm h}^{2}}{4}},\] (3c) \[C= J+K_{\rm e}+\frac{K_{\rm h}}{2}, \tag{3d}\]
where \(J>0\) parametrizes the antiferromagnetic exchange interaction between the nearest-neighbor spins, \(K_{\rm h}>0\) is the hard-axis anisotropy, and \(K_{\rm e}>0\) is the easy-axis anisotropy along the \(y\) direction (see Fig. 1). Generally, \(K_{\rm h}/J\ll 1\) and \(K_{\rm e}/J\ll 1\). In the absence of the hard-axis anisotropy, \(K_{\rm h}=0\), the two kinds of magnons \(\sigma=\pm\) are degenerate, whereas the degeneracy is lifted by \(K_{\rm h}>0\). Note that, in general, the effect of dipolar interactions is negligibly small in AFMs, and we neglect it throughout this study.
The Gilbert damping constant \(\alpha\) is dimensionless, and the energy dissipation rate increases as \(\alpha\) grows. In the dissipationless system [18], the Gilbert damping constant (i.e., the energy dissipation rate) is zero, \(\alpha=0\). The dissipative system of \(\alpha>0\) described by Eq. (1) can be regarded as a non-Hermitian system for magnons in the sense that the energy dispersion takes a complex value. Note that the constant term in Eq. (1), \(-i\alpha C\), is independent of the wavenumber and just shifts the purely imaginary part of the magnon energy dispersion \(\epsilon_{\sigma,\mathbf{k},\alpha}\). For this reason [see also Eq. (9a)], the constant term, \(-i\alpha C\), is not relevant to the magnonic Casimir effect [26]. We then define the magnon energy gap of Eq. (1) as \(\Delta_{\sigma,\alpha}:={\rm Re}(\epsilon_{\sigma,k=0,\alpha})\), i.e.,
\[\Delta_{\sigma,\alpha}=\frac{2S}{1+\alpha^{2}}{\rm Re}\Big{(}\sqrt{(E_{\sigma,k=0,\alpha})^{2}}\Big{)}, \tag{4}\]
and investigate the \(\alpha\)-dependence of the magnon energy dispersion.
_Magnonic EP.--_When the damping constant \(\alpha\) is small and \((E_{\sigma,k=0,\alpha})^{2}>0\), \(E_{\sigma,k=0,\alpha}\) takes a real value and decreases as \(\alpha\) increases. This results in
\[\frac{d\Delta_{\sigma,\alpha}}{d\alpha}<0. \tag{5}\]
Thus, the magnon energy gap decreases as the damping constant increases [27] [compare the solid line with the dashed one in the left panel of Fig. 2 (i)]. When the damping constant is large enough, the magnon energy gap vanishes \(\Delta_{\sigma,\alpha}=0\) at \(\alpha=\alpha_{\sigma}^{\rm cri}\),
\[\alpha_{\sigma}^{\rm cri}:=\frac{\delta_{\sigma}}{{\cal D}_{\sigma}}, \tag{6}\]
where there exists the gapless magnon mode which behaves like a relativistic particle with the linear energy dispersion. From the property of Eq. (5), we call (i) \(\alpha\leq\alpha_{\sigma}^{\rm cri}\) the gap-melting regime.
When the damping constant exceeds the critical value \(\alpha_{\sigma}^{\rm cri}\), i.e., \(\alpha>\alpha_{\sigma}^{\rm cri}\), \(E_{\sigma,k=0,\alpha}\) takes a purely imaginary value as \((E_{\sigma,k=0,\alpha})^{2}<0\). In this regime, the real part of the magnon energy dispersion remains zero \({\rm Re}(\epsilon_{\sigma,\mathbf{k},\alpha})=0\) for the region \(0\leq k\leq k_{\sigma,\alpha}^{\rm cri}\),
\[k_{\sigma,\alpha}^{\rm cri}:=\frac{1}{a}\sqrt{\frac{{\cal D}_{\sigma}{}^{2} \alpha^{2}-\delta_{\sigma}{}^{2}}{A_{\sigma,\alpha}{}^{2}}}, \tag{7}\]
whereas \({\rm Re}(\epsilon_{\sigma,\mathbf{k},\alpha})>0\) for \(k>k_{\sigma,\alpha}^{\rm cri}\) [see the regions highlighted in yellow in the left panels of Figs. 2 (ii) and (iii)]. The critical point \(k_{\sigma,\alpha}^{\rm cri}\) can be regarded as the EP [16; 27] for the wavenumber \(k\), and we refer to it as the magnonic EP. As the damping constant becomes larger, the EP wavenumber increases
\[\frac{dk_{\sigma,\alpha}^{\rm cri}}{d\alpha}>0. \tag{8}\]
At the EP \(k=k_{\sigma,\alpha}^{\rm cri}\), the group velocity \(\mathbf{v}_{\sigma,\mathbf{k},\alpha}:={\rm Re}[\partial\epsilon_{\sigma,\mathbf{k},\alpha}/(\partial\hbar\mathbf{k})]\) becomes discontinuous [see the solid lines in the left panel of Figs. 2 (ii) and (iii)]. We emphasize that this behavior is absent in the dissipationless system \(\alpha=0\)[18]. In the presence of the EP, the group velocity becomes much larger than usual, e.g., than in the gap-melting regime (i) [compare the solid lines in the left panel of Figs. 2 (ii) and (iii) with the one of Fig. 2 (i)].
Assuming \(\alpha_{\sigma=+}^{\rm cri}<\alpha_{\sigma=-}^{\rm cri}\), the non-Hermitian system for magnons described by Eq. (1) with \(\alpha>0\) can be divided into three regimes (i)-(iii) in terms of the magnonic EPs as follows [see the left panel of Figs. 2 (i), (ii), and (iii)]:
1. \(\alpha\leq\alpha_{\sigma=+}^{\rm cri}<\alpha_{\sigma=-}^{\rm cri}\). No magnonic EPs.
2. \(\alpha_{\sigma=+}^{\rm cri}<\alpha<\alpha_{\sigma=-}^{\rm cri}\). One EP, \(k_{\sigma=+,\alpha}^{\rm cri}\).
3. \(\alpha_{\sigma=+}^{\rm cri}<\alpha_{\sigma=-}^{\rm cri}\leq\alpha\). Two EPs, \(k_{\sigma=+,\alpha}^{\rm cri}\) and \(k_{\sigma=-,\alpha}^{\rm cri}\).
_Magnonic Casimir energy._--The magnonic analog of the Casimir energy, called the magnonic Casimir energy [18], is characterized by the energy dispersion relation of magnons (see Ref. [18] for dissipationless systems). Therefore, by incorporating the effect of energy dissipation into the energy dispersion relation of magnons through the Gilbert damping constant [see Eq. (1)], a non-Hermitian extension of the magnonic Casimir effect can be developed. We remark that the Casimir energy induced by quantum fields on the lattice, such as the magnonic Casimir energy [18], can be defined by using the lattice regularization [28, 29, 30, 31, 32, 33, 34]. In this study, we focus on thin films confined in the \(z\) direction (see Fig. 1). In the two-sublattice systems, the wavenumber on the lattice is replaced as \((ak_{j})^{2}\to 2[1-\cos(ak_{j})]\) along the \(j\) axis for \(j=x,y,z\). Here by taking into account the Brillouin zone (BZ), we set the boundary condition for the \(z\) direction in wavenumber space so that it is discretized as \(k_{z}\to\pi n/L_{z}\), i.e., \(ak_{z}\to\pi n/N_{z}\), where \(L_{z}:=aN_{z}\) is the film thickness, \(N_{j}\in\mathbb{N}\) is the number of magnetic unit cells along the \(j\) axis, and \(n=1,2,...,2N_{z}\). Thus, the magnonic Casimir energy \(E_{\rm Cas}\)[18] per the number of magnetic unit cells on the surface for \(N_{z}\) is defined as the difference between the zero-point energy \(E_{0}^{\rm sum}\) for the discrete energy \(\epsilon_{\sigma,{\bf k},\alpha,n}\) due to discrete \(k_{z}\) [see Eq. (9b)] and the one \(E_{0}^{\rm int}\) for the continuous energy \(\epsilon_{\sigma,{\bf k},\alpha}\) [see Eqs. (9c) and (1)] as follows [28, 29, 30, 31, 32, 33, 34]:
\[E_{\rm Cas}(N_{z}):= E_{0}^{\rm sum}(N_{z})-E_{0}^{\rm int}(N_{z}), \tag{9a}\] \[E_{0}^{\rm sum}(N_{z}):= \sum_{\sigma=\pm}\int_{\rm BZ}\frac{d^{2}(ak_{\perp})}{(2\pi)^{2 }}\Bigg{[}\frac{1}{2}\Big{(}\frac{1}{2}\sum_{n=1}^{2N_{z}}\epsilon_{\sigma,{ \bf k},\alpha,n}\Big{)}\Bigg{]},\] (9b) \[E_{0}^{\rm int}(N_{z}):= \sum_{\sigma=\pm}\int_{\rm BZ}\frac{d^{2}(ak_{\perp})}{(2\pi)^{2 }}\Bigg{[}\frac{1}{2}N_{z}\int_{\rm BZ}\frac{d(ak_{z})}{2\pi}\epsilon_{\sigma,{ \bf k},\alpha}\Bigg{]}, \tag{9c}\]
where \(k_{\perp}:=\sqrt{{k_{x}}^{2}+{k_{y}}^{2}}\), \(d^{2}(ak_{\perp})=d(ak_{x})d(ak_{y})\), the integral is over the first BZ, and the factor \(1/2\) in Eqs. (9b) and (9c) arises from the zero-point energy.
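For readers who want to reproduce the qualitative behavior of Eqs. (9a)-(9c), the following Python sketch evaluates the lattice-regularized zero-point sum and the continuum term on a coarse in-plane grid. It is only a schematic illustration under stated assumptions (the NiO parameter values quoted later in the text, a small \(k_{\perp}\) grid, and a simple rectangle rule for the \(k_{z}\) integral); it is not the production calculation behind Fig. 2.
```python
import numpy as np

# NiO parameter values quoted later in the text (meV); treated here as assumptions.
J, Kh, Ke, S = 47.0859, 0.0395212, 0.00171829, 1.20683

def eps(akx, aky, akz, sigma, alpha):
    """Complex magnon dispersion of Eq. (1), with (a k_j)^2 -> 2[1 - cos(a k_j)] on the lattice."""
    A2 = (1.0 + alpha**2) * (J**2 + sigma * Kh * J / 2.0)      # Eq. (3a) squared
    delta2 = Ke * (2*J + Ke) + Kh * (J - sigma*J + Ke)          # Eq. (3b) squared
    D2 = J**2 + sigma * Kh * J + Kh**2 / 4.0                    # Eq. (3c) squared
    C = J + Ke + Kh / 2.0                                       # Eq. (3d)
    ak2 = 2*(1 - np.cos(akx)) + 2*(1 - np.cos(aky)) + 2*(1 - np.cos(akz))
    E2 = (A2 * ak2 + delta2 - D2 * alpha**2).astype(complex)    # Eq. (2)
    return 2*S/(1 + alpha**2) * (-1j*alpha*C + np.sqrt(E2))     # Eq. (1)

def casimir_energy(Nz, alpha, nperp=48, nz=512):
    """E_Cas(Nz) of Eq. (9a): discrete-k_z zero-point sum minus the continuum-k_z term."""
    ak = np.linspace(-np.pi, np.pi, nperp, endpoint=False)
    AKX, AKY = np.meshgrid(ak, ak, indexing="ij")               # coarse in-plane BZ grid
    out = 0.0 + 0.0j
    for sigma in (+1, -1):
        akz_n = np.pi * np.arange(1, 2*Nz + 1) / Nz             # k_z -> pi n / (a Nz)
        e_sum = 0.25 * np.sum([eps(AKX, AKY, z, sigma, alpha).mean() for z in akz_n])   # Eq. (9b)
        akz_c = np.linspace(0.0, 2*np.pi, nz, endpoint=False)
        e_int = 0.5 * Nz * np.mean([eps(AKX, AKY, z, sigma, alpha).mean() for z in akz_c])  # Eq. (9c)
        out += e_sum - e_int
    return out  # per magnetic unit cell on the surface, in meV

if __name__ == "__main__":
    for Nz in (4, 8, 16):
        E = casimir_energy(Nz, alpha=0.001)
        print(f"Nz={Nz:3d}  Re(E_Cas)={E.real:+.3e} meV  Im(E_Cas)={E.imag:+.3e} meV")
```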
We remark that [23] assuming thin films of \(N_{z}\ll N_{x},N_{y}\) (see Fig. 1), the zero-point energy in the thin film of the thickness \(N_{z}\) is \(E_{0}^{\rm sum}(N_{z})N_{x}N_{y}\) and consists of two parts as \(E_{0}^{\rm sum}(N_{z})=E_{\rm Cas}(N_{z})+E_{0}^{\rm int}(N_{z})\) [see Eq. (9a)], where \(E_{0}^{\rm int}(N_{z})\) exhibits the behavior of \(E_{0}^{\rm int}(N_{z})\propto N_{z}\) [see Eq. (9c)]. Then, to see the film thickness dependence of \(E_{\rm Cas}(N_{z})\), we introduce the rescaled Casimir energy \(C_{\rm Cas}^{[b]}\) in terms of \({N_{z}}^{b}\) for \(b\in\mathbb{R}\) as
\[C_{\rm Cas}^{[b]}(N_{z}):=E_{\rm Cas}\times{N_{z}}^{b} \tag{10}\]
and call \(C_{\rm Cas}^{[b]}\) the magnonic Casimir coefficient in the sense that \(E_{\rm Cas}=C_{\rm Cas}^{[b]}{N_{z}}^{-b}\).
Note that the zero-point energy arises from quantum fluctuations and does exist even at zero temperature. The zero-point energy defined at zero temperature does not depend on the Bose-distribution function [see Eqs. (9b) and (9c)]. Throughout this work, we focus on zero temperature [23].
_Magnonic non-Hermitian Casimir effect._--Finally, we investigate the magnonic Casimir effect in the non-Hermitian system \(\alpha>0\), which we call the magnonic non-Hermitian Casimir effect, for each regime (i)-(iii). As an example, we consider NiO, an insulating AFM. From Refs. [25, 36, 35], we roughly estimate the model parameter values for NiO as follows [see Eq. (1)]: \(J=47.0859\) meV, \(K_{\rm h}=0.039\,5212\) meV, \(K_{\rm e}=0.001\,71829\) meV, \(S=1.206\,83\), and \(a=0.417\) nm. NiO is a biaxial AFM of \(K_{\rm h}>0\) and \(K_{\rm e}>0\). Due to the hard-axis anisotropy \(K_{\rm h}>0\), the degeneracy between two kinds of magnons \(\sigma=\pm\) is lifted in NiO. These parameters provide \(\alpha_{\sigma=+}^{\rm cri}\sim 0.008\,5414<\alpha_{\sigma=-}^{\rm cri}\sim 0.041\,8709\). Figure 2 shows the magnon energy dispersion [Eq. (1)] and the magnonic Casimir energy [Eq. (9a)] with its Casimir coefficient [Eq. (10)] for each regime (i)-(iii).
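As a quick numerical check of Eq. (6), the short sketch below reproduces the two critical damping constants from the NiO parameters listed above (this is only a sanity check of the quoted values, not part of the original analysis):
```python
import math

# NiO parameters from the text (meV)
J, Kh, Ke = 47.0859, 0.0395212, 0.00171829

def alpha_cri(sigma):
    """Critical Gilbert damping of Eq. (6), with delta and D from Eqs. (3b)-(3c)."""
    delta = math.sqrt(Ke * (2*J + Ke) + Kh * (J - sigma*J + Ke))
    D = math.sqrt(J**2 + sigma * Kh * J + Kh**2 / 4)
    return delta / D

print(f"alpha_cri(+) = {alpha_cri(+1):.7f}")  # ~0.0085414
print(f"alpha_cri(-) = {alpha_cri(-1):.7f}")  # ~0.0418709
```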
(i) Gap-melting regime \(\alpha\leq\alpha_{\sigma=+}^{\rm cri}<\alpha_{\sigma=-}^{\rm cri}\). The magnonic Casimir energy takes a real value as shown in the middle and right panels of Fig. 2 (i), see also Eq. (10), and there are no magnonic EPs [see the left panel of Fig. 2 (i)]. This property is the same as that of the Casimir effect in the dissipationless system \(\alpha=0\) [18].
When \(\alpha<\alpha_{\sigma=+}^{\rm cri}\), the magnon energy gap for both \(\sigma=\pm\) is nonzero \(\Delta_{\sigma=\pm,\alpha}>0\) and both magnons \(\sigma=\pm\) are gapped modes. For each gapped mode, the absolute value of the magnonic Casimir coefficient \(C_{\rm Cas}^{[3]}\) decreases and asymptotically approaches zero as the film thickness increases. We emphasize that the magnon energy gap decreases as the damping constant \(\alpha\) increases [see Eq. (5)]. Then, the magnitude of the magnonic Casimir energy and its coefficient increase as the damping constant grows toward the critical value \(\alpha\to\alpha_{\sigma=+}^{\rm cri}\) [see the middle panel of Fig. 2 (i)].
When \(\alpha=\alpha_{\sigma=+}^{\rm cri}\), the magnon \(\sigma=-\) remains the gapped mode, whereas the magnon energy gap for \(\sigma=+\) vanishes \(\Delta_{\sigma=+,\alpha}=0\) and the magnon \(\sigma=+\) becomes the gapless mode which behaves like a relativistic particle with the linear energy dispersion. In the gapless mode, the magnonic Casimir coefficient \(C_{\rm Cas}^{[3]}\) asymptotically approaches a nonzero constant as the film thickness increases. This means that although the magnonic Casimir effect is realized on the lattice, the behavior of the gapless magnon mode is analogous to the conventional Casimir effect of a massless scalar field in continuous space [37] except for \(a\)-dependent lattice effects, whereas that of the gapped magnon modes is similar to the Casimir effect known for massive degrees of freedom [37; 38].
(ii) Oscillating regime \(\alpha_{\sigma=+}^{\rm cri}<\alpha<\alpha_{\sigma=-}^{\rm cri}\). The magnonic Casimir energy takes a complex value as shown in the middle and right panels of Fig. 2 (ii), see also Eq. (10). There is one EP, e.g., \(ak_{\sigma=+,\alpha=0.04}^{\rm cri}\sim 0.039\,05437773\) for \(\alpha=0.04\) [see the left panel of Fig. 2 (ii)]. Then, the magnonic non-Hermitian Casimir effect exhibits an oscillating behavior as a function of \(N_{z}\) for the film thickness \(L_{z}:=aN_{z}\).
An intuitive explanation for the oscillation of the magnonic non-Hermitian Casimir effect and its relation to the EP is given as follows: Through the lattice regularization, the magnonic Casimir energy is defined as the difference [see Eq. (9a)] between the zero-point energy with the discrete wavenumber \(k_{z}\) [see Eq. (9b)] and the one with the continuous wavenumber [see Eq. (9c)]. On the lattice, the wavenumber \(k_{z}\) under the boundary condition is discretized in units of \(\pi/aN_{z}\) as \(k_{z}\rightarrow(\pi/aN_{z})n\). As the film thickness \(N_{z}\) increases, the unit becomes smaller, and finally, it matches the EP as \(\pi/aN_{z}=k_{\sigma,\alpha}^{\rm cri}\), i.e., \(N_{z}=\pi/ak_{\sigma,\alpha}^{\rm cri}\), where the magnonic non-Hermitian Casimir effect is enhanced because the group velocity becomes much larger than usual, e.g., than in the gap-melting regime (i), due to the EP [compare the solid lines in the left panel of Figs. 2 (ii) and (iii) with the one of Fig. 2 (i)]. Then, the magnonic non-Hermitian Casimir effect is enhanced periodically whenever the film thickness \(N_{z}\) is a multiple of \(\pi/ak_{\sigma,\alpha}^{\rm cri}\). Thus, the oscillating behavior of the magnonic non-Hermitian Casimir effect stems from the EP, \(k_{\sigma,\alpha}^{\rm cri}\), and the oscillation is characterized in units of \(\pi/ak_{\sigma,\alpha}^{\rm cri}\). We refer to this oscillating behavior as the magnonic EP-induced Casimir oscillation. The period of this Casimir oscillation is
\[\Lambda_{\sigma,\alpha}^{\rm Cas}:=\frac{\pi}{ak_{\sigma,\alpha}^{\rm cri}}. \tag{11}\]
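A corresponding numerical check of Eqs. (7) and (11) for the NiO parameters, anticipating the \(\alpha=0.04\) example discussed next (again a sketch for illustration only):
```python
import math

# NiO parameters from the text (meV); alpha = 0.04 lies in the oscillating regime (ii).
J, Kh, Ke, alpha = 47.0859, 0.0395212, 0.00171829, 0.04
sigma = +1

A2 = (1 + alpha**2) * (J**2 + sigma * Kh * J / 2)   # Eq. (3a) squared
delta2 = Ke * (2*J + Ke) + Kh * (J - sigma*J + Ke)   # Eq. (3b) squared
D2 = J**2 + sigma * Kh * J + Kh**2 / 4               # Eq. (3c) squared

ak_cri = math.sqrt((D2 * alpha**2 - delta2) / A2)    # Eq. (7), in units of 1/a
period = math.pi / ak_cri                            # Eq. (11), in units of unit cells

print(f"a*k_cri = {ak_cri:.6f}")   # ~0.03905
print(f"Lambda  = {period:.3f}")   # ~80.44 unit cells
```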
As an example, the period is \(\Lambda_{\sigma=+,\alpha=0.04}^{\rm Cas}\sim 80.44149815\) for \(\alpha=0.04\). This agrees with the numerical result in the middle and right panels of Fig. 2 (ii), see the regions highlighted in red. We call (ii) \(\alpha_{\sigma=+}^{\rm cri}<\alpha<\alpha_{\sigma=-}^{\rm cri}\) the oscillating regime. The middle and right panels of Fig. 2 (ii) show that the magnonic EP-induced Casimir oscillation is characterized by its Casimir coefficient \(C_{\rm Cas}^{[b]}\) of \(b=1.5\).
Figure 2: Plots of the magnon energy dispersion \(\epsilon_{\sigma,k,\alpha}\), the real part of the magnonic Casimir energy \({\rm Re}(E_{\rm Cas})\), and the imaginary part \({\rm Im}(E_{\rm Cas})\) for NiO in (i) the gap-melting regime, (ii) the oscillating regime, and (iii) the beating regime. Inset: Each magnonic Casimir coefficient \(C_{\rm Cas}^{[b]}\) [see Eq. (10)].
(iii) Beating regime \(\alpha_{\sigma=+}^{\rm cri}<\alpha_{\sigma=-}^{\rm cri}\leq\alpha\). The magnonic Casimir energy takes a complex value as shown in the middle and right panels of Fig. 2 (iii), see also Eq. (10). There are two EPs, \(k_{\sigma=+,\alpha}^{\rm cri}\) and \(k_{\sigma=-,\alpha}^{\rm cri}\), which induce two types of the Casimir oscillations characterized by \(\Lambda_{\sigma=+,\alpha}^{\rm Cas}\) and \(\Lambda_{\sigma=-,\alpha}^{\rm Cas}\), respectively. As an example, \(ak_{\sigma=+,\alpha=0.05}^{\rm cri}\sim 0.04921389535\) and \(ak_{\sigma=-,\alpha=0.05}^{\rm cri}\sim 0.02728830018\) provide \(\Lambda_{\sigma=+,\alpha=0.05}^{\rm Cas}\sim 63.83548044\) and \(\Lambda_{\sigma=-,\alpha=0.05}^{\rm Cas}\sim 115.1259929\), respectively, for \(\alpha=0.05\) [see the left panel of Fig. 2 (iii)]. Due to the interference between the two Casimir oscillations, the magnonic non-Hermitian Casimir effect exhibits a beating behavior as a function of \(N_{z}\) for the film thickness \(L_{z}:=aN_{z}\) with a period of
\[\frac{1}{|1/\Lambda_{\sigma=+,\alpha}^{\rm Cas}-1/\Lambda_{\sigma=-,\alpha}^{ \rm Cas}|}. \tag{12}\]
As an example, the period is \(|1/\Lambda_{\sigma=+,\alpha=0.05}^{\rm Cas}-1/\Lambda_{\sigma=-,\alpha=0.05}^{\rm Cas}|^{-1}\sim 143.2842588\) for \(\alpha=0.05\). This agrees with the numerical result in the middle and right panels of Fig. 2 (iii), see the regions highlighted in blue. We call (iii) \(\alpha_{\sigma=+}^{\rm cri}<\alpha_{\sigma=-}^{\rm cri}\leq\alpha\) the beating regime. The middle and right panels of Fig. 2 (iii) show that the beating behavior of the magnonic EP-induced Casimir oscillation is characterized by its Casimir coefficient \(C_{\rm Cas}^{[b]}\) of \(b=1.5\).
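The same kind of sanity check can be applied to the beating period of Eq. (12); the sketch below reuses the quoted NiO parameters and is only illustrative:
```python
import math

# NiO parameters from the text (meV); alpha = 0.05 lies in the beating regime (iii).
J, Kh, Ke, alpha = 47.0859, 0.0395212, 0.00171829, 0.05

def period(sigma):
    """Casimir-oscillation period of Eq. (11) from the EP wavenumber of Eq. (7)."""
    A2 = (1 + alpha**2) * (J**2 + sigma * Kh * J / 2)
    delta2 = Ke * (2*J + Ke) + Kh * (J - sigma*J + Ke)
    D2 = J**2 + sigma * Kh * J + Kh**2 / 4
    ak_cri = math.sqrt((D2 * alpha**2 - delta2) / A2)
    return math.pi / ak_cri

Lp, Lm = period(+1), period(-1)
beat = 1.0 / abs(1.0/Lp - 1.0/Lm)   # Eq. (12)
print(f"Lambda(+) ~ {Lp:.3f}, Lambda(-) ~ {Lm:.3f}, beat ~ {beat:.3f}")  # ~63.8, ~115.1, ~143.3
```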
We remark that in the absence of the hard-axis anisotropy \(K_{\rm h}=0\), two kinds of magnons \(\sigma=\pm\) are in degenerate states [see Eq. (1)]. This results in \(\alpha_{\sigma=+}^{\rm cri}=\alpha_{\sigma=-}^{\rm cri}\) [see Eq. (6)] and \(\Lambda_{\sigma=+,\alpha}^{\rm Cas}=\Lambda_{\sigma=-,\alpha}^{\rm Cas}\) [see Eq. (11)]. Hence, the Casimir oscillation is of a single type with a period of \(\Lambda_{\sigma,\alpha}^{\rm Cas}\). This means that the beating behavior is absent in the uniaxial AFMs of \(K_{\rm h}=0\) and \(K_{\rm e}>0\) [23].
_Magnonic Casimir engineering._--In each regime (i)-(iii), the Gilbert damping (i.e., energy dissipation) serves as a key ingredient of Casimir engineering [20] to control and manipulate the Casimir effect of magnons. The Gilbert damping can be enhanced and controlled by the established experimental techniques of spintronics such as spin pumping [23]. In addition, microfabrication technology can control the film thickness and manipulate the magnonic non-Hermitian Casimir effect. The Casimir pressure of magnons, which stems from the real part of its Casimir energy, contributes to the internal pressure of thin films. We find from the middle panels of Figs. 2 (ii) and (iii) that, depending on the film thickness, the sign of the real part of the magnonic Casimir coefficient changes. This means that by tuning the film thickness, we can control and manipulate both the direction and the magnitude of the magnonic Casimir pressure, thanks to the EP-induced Casimir oscillation. Thus, our study utilizing energy dissipation, the magnonic non-Hermitian Casimir effect, provides new operating principles for nanoscale devices, such as highly sensitive pressure sensors and magnon transistors [39], and paves the way for magnonic Casimir engineering.
_Conclusion.--_We have shown that as the Gilbert damping constant (i.e., the energy dissipation rate) increases, the non-Hermitian Casimir effect of magnons in antiferromagnets is enhanced and exhibits the oscillating behavior which stems from the exceptional point. This exceptional point-induced Casimir oscillation also exhibits the beating behavior when the degeneracy between two kinds of magnons is lifted. These magnonic Casimir oscillations are absent in the dissipationless system of magnons. Thus, we have shown that energy dissipation serves as a new handle on Casimir engineering.
We would like to thank Ryo Hanai, Hosho Katsura, Norio Kawakami, Se Kwon Kim, Katsumasa Nakayama, Masatoshi Sato, Kenji Shimomura, Ken Shiozaki, Kisuke Totsuka, Shun Uchino, and Hikaru Watanabe for helpful comments and discussions. We acknowledge support by JSPS KAKENHI Grants No. JP20K14420 (K. N.), No. JP22K03519 (K. N.), No. JP17K14277 (K. S.), and No. JP20K14476 (K. S.).
|
2308.01849 | Curricular Transfer Learning for Sentence Encoded Tasks | Fine-tuning language models in a downstream task is the standard approach for
many state-of-the-art methodologies in the field of NLP. However, when the
distribution between the source task and target task drifts, \textit{e.g.},
conversational environments, these gains tend to be diminished. This article
proposes a sequence of pre-training steps (a curriculum) guided by "data
hacking" and grammar analysis that allows further gradual adaptation between
pre-training distributions. In our experiments, we acquire a considerable
improvement from our method compared to other known pre-training approaches for
the MultiWoZ task. | Jader Martins Camboim de Sá, Matheus Ferraroni Sanches, Rafael Roque de Souza, Júlio Cesar dos Reis, Leandro Aparecido Villas | 2023-08-03T16:18:19Z | http://arxiv.org/abs/2308.01849v1 | # Curricular Transfer Learning for Sentence Encoded Tasks
###### Abstract
Fine-tuning language models in a downstream task is the standard approach for many state-of-the-art methodologies in the field of NLP. However, when the distribution between the source task and target task drifts, _e.g._, conversational environments, these gains tend to be diminished. This article proposes a sequence of pre-training steps (a curriculum) guided by "data hacking" and grammar analysis that allows further gradual adaptation between pre-training distributions. In our experiments, we acquire a considerable improvement from our method compared to other known pre-training approaches for the MultiWoZ task.
## 1 Introduction
Traditional Machine Learning (ML) technology is influenced by the assumption that a difference in data distribution between training and real-world data can result in a degradation of the predictive learner Shimodaira (2000). To overcome the effects of data divergence, studies have developed several algorithms that have some regularizing hyper-parameter to mitigate the variance in the hypothesis space and learn computable functions that have great generality to out-of-distribution data Sarker (2021).
These traditional methods have been successfully applied in many practical applications, but they still present limitations for specific real-world scenarios where we have complex data like images or text Bengio et al. (2007). Deep learning overcame a few of these limitations, acquiring near-human performance in various tasks. Whereas traditional ML has an explicit bias to control overfitting, deep models need to acquire its bias from data, which generally could be expensive or even unattainable in some scenarios.
Transfer learning was introduced to achieve high performance in low-resource domains by artificially increasing available data from a different domain to overcome these limitations Ruder (2019). This technique consists in pre-training a model with high variance in domains with several available examples, so those learning models can acquire suitable biases that generalize for other tasks, including tasks where little to no examples are available Howard and Ruder (2018).
In Natural Language Processing (NLP), the standard approach for transfer learning is language modeling, which consists in pre-training a model to denoise a sentence, and then fine-tuning it to the target task where the model is applied Devlin et al. (2018). This self-supervised sequential transfer learning enables the model to learn high-level linguistic features and statistical properties of language that help the model to generalize for many downstream tasks. Exploiting these capabilities, recent studies encode tasks as a pure sequence-to-sequence problem Raffel et al. (2020) to directly solve it with auto-regressive language modeling task Brown et al. (2020).
Task-Oriented Dialog Systems (TODS) are conversational agents designed for solving businesses-related tasks by interacting with customers through a conversation in a natural language interface. Its task is to solve sub-problems related to business attendance -- identify intentions, and entities, decide which action to take, and generate a system response. As a natural language process, TODS could benefit significantly from the general language capabilities of Language Models (LMs) Fellows et al. (2021). The academy has explored encoding TODS as a sequence-to-sequence problem Lin et al. (2020); Hosseini-Asl et al. (2020); Yang et al. (2020) to improve significantly the effectiveness of agents in unrestricted domains. Figure 1 illustrates an example of how this process occurs. The general task is composed of four steps; first, it receives a user utterance (denoted by su tag); then it identifies intents and entities in the utterance; the belief state (denoted by sb tag); then it decides
which action to take (sa tag); finally it generates a response for this action to return to the user.
In Figure 1, the interactions occur in the following manner: a) we first represent every step in textual symbols, and each field is marked by the start and end of tags; (b) we concatenate the current utterance with the fields from previous turns in the dialog; (c) we feed the whole dialog history to the auto-regressive model to generate the following steps (belief state, action, and response). (d) the generated fields are parsed, then used for querying information; later we return the response to the user as shown in (e).
The TODS task encoded as sequence-to-sequence presents an unusual structure that is not present in the text used for pre-training language models. The encoded sentence follows a regular grammar, with the meta-structure encoded by unique tag tokens. Utterances and responses have a conversational distribution, so they do not share the word distribution of a general text domain Zhang et al. (2020). We have a classification task, where the model aims to infer the user intent given the utterance. Then, there is a Named Entity Recognition (NER) and parsing task, where the model has to extract and format slot-values, and finally a policy that the model tries to infer, which shapes the response Yang et al. (2020); Hosseini-Asl et al. (2020).
General language modeling is well suited for many NLP tasks. However, it still requires fine-tuning on large datasets for tasks with complex behavior, like textual distributions that diverge a lot from source training Liu et al. (2021); Ouyang et al. (2022); Nakano et al. (2021), or domain-specific applications Wu et al. (2020); Zhang et al. (2020); Thoppilan et al. (2022). For TODS, given the sequential nature of decisions, a high error is not permissible, as it propagates to the following interactions in a dialog, and many examples in the downstream task are needed for proper behavior of conversational agents Fellows et al. (2021).
Although we could build conversational datasets for our TODS application, this specialized data is expensive Zhang et al. (2020), and small businesses could not afford to build massive datasets. Facing the problem of adaptation with low-resource in the downstream task, we should learn a proper bias to allow the rapid adaptation of our learning model to the drift in downstream data, so the final model should require only a fraction of the usual needs.
Inspired by psycho-pedagogical and optimization literature, a curriculum approach presents examples to a learner that gradually increases in complexity or difficulty Bengio et al. (2009). This means the learner could focus on acquiring the basic skills in the initial steps that help to learn later posed examples. By the same principle, we could guide the learner through this learning process by presenting not only examples, but new tasks with increasing complexity that are somehow related Green (2005).
In this study, we propose to have intermediate tasks in that we can more easily acquire training data, and this data simulates one or more properties present in the final task. By having another pre-training step to acquire intermediate skills, the model would require fewer fine-tuning steps for the final task, being less prone to over-fit, and acquiring a better generalization. In our approach, we present a transfer learning "curriculum", where we gradually adapt our LM by posing it to intermediate pre-training tasks. In initial tasks, we have an infinite number of examples; as the complexity of the task increases, the available data decreases exponentially. However, our initial tasks adjust the bias for the LM such that it can learn more easily those more complex examples.
Figure 1: Operational diagram of TODS based on LMs. The task is encoded as a sequence-to-sequence problem to later be decoded and parsed for system communication.
We use the forum structure to simulate the regular grammar and the word distribution in utterances and responses, and the topic being discussed to simulate the intent classification task. This approach significantly improves BLEU, SUCCESS, and INFORM compared to other pre-training approaches for TODS. In this article, we contribute the following:
* We propose a new curriculum learning method where the complexity of learning varies at a sentence task level instead of an instance level. This approach allows for gradually fine-tuning scaled LMs to small datasets without over-fitting.
* As a medium for intermediate training steps, we propose a method for constructing pseudo-supervised data in the context of conversational agents that encompass the main problems in the target task, forcing the LM to meta-learn behaviors that help to solve the final target task.
The remaining of this article is organized as follows: Section 2 presents background studies. Section 3 formalizes our Curricular Transfer Learning proposal and presents our approach to solving the TODS task. Section 4 presents our conducted experimental evaluation. Section 5 discusses the results and Section 6 draws conclusions from our study.
## 2 Background
Our research work is heavily inspired by two key studies addressing curriculum learning and transfer learning. We review studies applied to TODS, but also in general machine learning literature.
**Curriculum Learning.** Curriculum learning was initially proposed for next word prediction (Bengio et al., 2009), and later extended to many NLP tasks (Soviany et al., 2021). In the majority of these studies, the curriculum is applied at the instance level (Liu et al., 2021; Kim et al., 2021; Dai et al., 2021; Zhu et al., 2021). In another set of studies, the task itself is gradually adjusted, with the complexity increasing based on length (Foglino and Leonetti, 2019) or mode (Liu et al., 2020).
In the context of TODS, some studies apply a curriculum technique in a Reinforcement Learning (RL) context, e.g., sorting dialogue instances based on the number of slots to track (Saito, 2018), using dialog reward information as a measure of complexity (Zhu et al., 2021; Liu et al., 2021), or using dialog metrics to sort instances in the curriculum (Zhao et al., 2022; Dai et al., 2021).
**Transfer Learning.** Literature of end-to-end TODS recently presented studies focusing on rapid adaptation or learning with low resources. Most investigations focus on heuristics for rapid adaptation to data, like variational inference (Zhang et al., 2020), optimizing editing distance (Lin et al., 2020), multi-tasking (Kulhanek et al., 2021; Su et al., 2021), or denoising (Sun et al., 2022).
For the conversational literature, there is a set of studies that explore exclusive "conversation-biased" data for pre-training language models. Some works explore self-supervision on raw text from Reddit (Wu et al., 2020; Zhang et al., 2020), while other present the text as a simple question-answer pair (Adiwardana et al., 2020; Thoppilan et al., 2022).
We summarize our contribution as follows: In the curriculum learning literature for NLP, no study employs a curriculum at a dataset level. For the transfer learning literature, some studies apply pre-training on previous general conversation-oriented data, but do not exploit massive pseudo-supervised data as one of the steps of pre-training.
Our solution aims to reduce the requirements for annotated data in training sequence-to-sequence tasks. We manipulate the existing structure in web-data to create intermediate tasks that simulate the same sequence structure present in the target task, which we call "data hacking". This approach helps the language model to acquire a better bias for the final task. We create data that resembles a simplified version of the target grammar we want our model to acquire, so it can learn basic properties of the grammar and later generalize to more complex instances. In the next section, we present our proposal for complex grammar acquisition by grammar decomposition and instantiate the solution for the problem of TODS.
## 3 Grammar Acquisition with Curricular Transfer Learning
The classical transfer learning framework, using sequential induction, allows us to use out-of-distribution data from a source task to leverage the knowledge in this data for another correlated target task. By learning the proper bias for the problem to be modeled, the learner model can rapidly adapt to the target hypothesis, demanding fewer examples to acquire a good generalization point in the target domain. In the context of NLP, transfer learning is achieved by pre-training an LM, with self-supervision, in texts sampled from indexed
pages on the web (Devlin et al., 2019; Radford et al., 2019; Raffel et al., 2020), generally with diverse contents.
Although evidence shows that language modeling acquires high generalization capabilities for the out-of-distribution data (Hendrycks et al., 2020), some specialized and complex tasks that diverge from the general distribution of random web text are less benefited (Mielke et al., 2019; Cotterell et al., 2018; Zellers et al., 2019; Deletang et al., 2022; Thoppilan et al., 2022). In low-resource contexts, those pre-trained models tend to overfit or underfit the target task, failing to arrive at a desirable optimum and presenting poor generalization.
In the case of TODS, the sequence we want to model has a conversation-biased distribution of words, many unique marking tokens for the classification, named entity recognition, and parsing. This drift between pre-training and fine-tuning makes it difficult for our agent to learn and acquire optimal effectiveness (Zhang et al., 2020). Exploring other pre-training data that resemble the same properties in the TODS task could significantly improve the learner.
We observe the problem of training a model from two perspectives: 1) the learning perspective, where we aim to obtain the model to acquire the pattern recognition skills; 2) the optimization aspect, where we expect that the model minimizes some loss function. As posed by (Bengio et al., 2009), a curriculum approach has a beautiful interpretation and practical application for both perspectives. In our approach, we teach a model in the same way humans learn by gradually increasing the task's difficulty, so it first acquires the basic skills to thrive in more complex scenarios. Also, relying on this curriculum approach, we might minimize a less noisy version of the original problem to arrive at the global optima, a continuation method.
For human language acquisition, the generative grammar theory assumes that a learner has an innate universal grammar that restricts what kind of grammar a learner could acquire (White, 2003). The process starts by recognizing simple structures in a grammar; instead of memorizing, the learner identifies syntactic structures they encounter and evaluates the feedback in an environment to determine precisely the grammar being used (Guasti, 2004).
Whereas most of a child's primary language is obtained culturally, without directed control, for second language acquisition the process is almost entirely performed in a controlled environment. In the initial stages, the learner recognizes simple grammatical structures; which is crucial for complete grammar acquisition, as many possible candidate grammars are filtered in this search step (Komarova et al., 2001). By exploiting this universal grammar, the learner could quickly acquire the grammar of the language it is inserted.
Several approaches in Artificial Intelligence (AI) explore the advantages of posing easier instances of the problem or addressing sequentially surrogate objectives to achieve more complex goals. In RL, robotic hand manipulation is an example. Instead of directly training the robot to put a red box over a blue box, it is easier first to teach the arm to recognize which color to pick, then how to hold the box correctly, and later how to place it over the blue box (Manela and Biess, 2022). This decomposition of complexity allows agents to learn the final task faster than directly attempting to perform the final task.
In the context of sequence-to-sequence modeling, we can view the problem as a grammar acquisition process, starting from basic construction to then acquiring more complex ones. In our developed solution, we first present the learner with basic grammar with a more simple composition, to later extend for more complex elements (Davies, 1980; Guasti, 2004).
We present a formal definition for the problem of grammar acquisition. We extend the model of (Pan and Yang, 2009) for the multi-sequential case which is an instance of the curriculum learning model.
First, given a domain \(\mathcal{D}=\{\mathcal{X},P(X)\}\) consisting in a feature space \(\mathcal{X}\) and a probability distribution \(P(X)\), where \(X=\{x_{1},x_{2},...,x_{n}\}\in\mathcal{X}\). Consider a task \(\mathcal{T}=\{\mathcal{Y},f(\cdot)\}\), where \(\mathcal{Y}\) is the label space, and \(f(\cdot)\) is the objective predictive function, which is not observed but could be learned from the data, where \(f:X\rightarrow\mathcal{Y}\).
We define transfer learning as, given a source domain \(\mathcal{D}_{S}\), a source task \(\mathcal{T}_{S}\), and target domain \(\mathcal{D}_{T}\) with a target task \(\mathcal{T}_{T}\). Transfer learning aims to help improve the learning of the target predictive function \(f(\cdot)_{T}\) in \(\mathcal{D}_{T}\) using the knowledge in \(\mathcal{D}_{S}\) and \(\mathcal{T}_{S}\), where \(\mathcal{D}_{S}\neq\mathcal{D}_{T}\) or \(\mathcal{T}_{S}\neq\mathcal{T}_{T}\).
Although most studies in NLP literature apply a single step of sequential transfer learning, we propose many steps of transference in the Curricular
Transfer Learning (CTL). It is a curriculum of transference, _e.g._, \(\mathbf{\mathcal{T}}=\{\mathcal{T}_{S_{0}},\mathcal{T}_{S_{1}},...,\mathcal{T}_{S_{n} },\mathcal{T}_{T}\}\), where we have \(n\) pre-training source tasks \(\mathcal{T}_{S}\). By this approach, the \(\mathcal{T}_{S_{k}}\) source task helps optimizing for \(\mathcal{T}_{S_{k+1}}\) task, acquiring a better generalization for the following task and, consequently, the general curriculum as an accumulation of improvements.
Similar to a curriculum learning approach, consider the instances from \(\mathcal{T}_{S_{k}}\) as smoothed objectives of \(\mathcal{T}_{T}\). The cost function \(C_{S_{0}}\) is easier to optimize as it does not compass the entire TODS objective, and we have more available instances in this set. After optimizing for this objective, we gradually increase the task complexity by passing tasks with more linguistic features to extract from \(C_{S_{0}}\) to \(C_{S_{n}}\); and finally \(C_{T}\), fewer instances are presented for more complex sets.
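To make the multi-step transference concrete, a minimal sketch of such a curriculum with the HuggingFace `transformers` Trainer could look as follows; the stage names, hyper-parameters, and dataset objects are placeholders, not the exact settings of our experiments.
```python
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

def run_curriculum(stages, base_model="gpt2"):
    """Sequentially fine-tune one causal LM over an ordered list of
    (stage_name, tokenized_dataset) pairs: T_S0 -> T_S1 -> ... -> T_T."""
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(base_model)
    collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
    for name, dataset in stages:                       # easiest / cheapest task first
        args = TrainingArguments(output_dir=f"ckpt-{name}",
                                 num_train_epochs=1,   # placeholder hyper-parameters
                                 per_device_train_batch_size=8,
                                 save_strategy="no")
        Trainer(model=model, args=args, train_dataset=dataset,
                data_collator=collator).train()        # next stage starts from these weights
    return model

# Usage (datasets are assumed to be pre-tokenized HuggingFace Datasets):
# model = run_curriculum([("tripadvisor", trip_ds), ("multiwoz", woz_ds)])
```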
In the language modeling problem, the cost function we aim to minimize is the cross-entropy for the next token. Following the continuation-method view, we consider a family \(C_{\lambda}(\theta)\) of cost functions in which \(C_{0}\) is a smoothed surrogate that resembles the target cost \(C_{1}\); the \(\lambda\) factor varies between data sets, which correspond to languages with increasingly complex grammar.
Now, suppose the model has already learned to decode a specific serialized pattern, for instance the regular grammar of start and end field tokens. Adapting to a new grammar that shares the same pattern will then be more straightforward: the probability of the token that follows <eos_u> has already been learned in a more manageable task, so the optimization in this phase can focus on learning the classification task.
Consider that our final task is described by a formal grammar. Suppose another data source can simulate some derivation nodes of the target grammar without loss of the global structure, e.g., the first production rule. In that case, we pose this simpler version of the grammar as a step in the pre-training curriculum and, recursively, add further simplified grammars that simulate the previous ones.
We formally call it curricular transfer learning if, given an ordered set of sequence-to-sequence tasks \(\mathbf{\mathcal{T}}=\{\mathcal{T}_{S},\mathcal{T}_{I_{1}},...,\mathcal{T}_{I_{n}},\mathcal{T}_{T}\}\), some complexity ordering \(<_{\mathcal{C}}\), and some grammatical similarity \(\sim_{G}\), every task in \(\mathbf{\mathcal{T}}\) respects the order
\[\mathcal{T}_{a}<_{\mathcal{C}}\mathcal{T}_{b}\hskip 28.452756pt\forall a,b:a<b \tag{1}\]
where \(a\) and \(b\) are indexes for tasks in \(\mathbf{\mathcal{T}}\) and
\[\mathcal{T}_{a}\sim_{G}\mathcal{T}_{b}\hskip 28.452756pt\forall a,b. \tag{2}\]
The main appeal of this method is that the more accessible instances of the problem are cheap to obtain, whereas final instances have a considerable cost. The artificially created intermediate tasks should help the LM to meta-learn the final task, such as learning the unique tokens used to classify, perform named entity recognition, parse, or generate a response. We describe the method for complex grammar acquisition in the following manner:
1. Describe the desired grammar formally to be learned, e.g. the generation rule for the tokens, listing the expected types of text distribution and regular structure.
2. Find text and structures that can be morphed to simulate this behavior. For the same kind of word distribution in the environment (formal, conversational, etc.), some NLP tasks are present in the problem, like classification or NER, etc.
3. Recursively proposes (1) that address the subsequent grammar as another pre-training step.
## 4 Experimental Evaluation
We present how we developed our proposed curricular transfer learning steps for the case of _MultiWoZ_. A general intuition for creating intermediate tasks is to find textual corpora or data sets that we transform into a form that resembles properties from final grammar. We demonstrate how we applied our proposal to improve the optimization and overall effectiveness in MultiWoZ, a widely adopted framework for study and evaluation of TODS. We present a curriculum decomposition with a meta-structure and a single additional task and we compare it with previous pre-training approaches.
### Protocol
For the conversation-oriented curriculum, we consider a diverse corpus in the first stage to meta-learn general language skills; a corpus that simulates the conversational patterns and distributions of words; and finally, the data set for the target task. We use the following text data sets as pre-training steps:
1. **Common Crawl (General Language Modeling)** This dataset was proposed as the unsupervised auto-regressive pre-train for the proposed architecture (Radford et al., 2019). Although we do not train the GPT-2 from scratch in this dataset, we describe it here to illustrate the CTL approach.
2. **Tripadvisor (Dialog Modeling)** We built this data set by crawling forum threads about major cities discussed on Tripadvisor (Paris, Rome, Istanbul, Barcelona, Madrid, Amsterdam, Lisbon, and London); for each thread, we encoded the thread creator's message with user utterance tokens, the city as the belief tokens, and thread replies as system responses. We replicate each pseudo-user utterance for each pseudo-system response. The crawler is available on Github1. Footnote 1: [https://github.com/jadermcs/tripadvisor-dialogues](https://github.com/jadermcs/tripadvisor-dialogues)
3. **Multi-Domain Wizard-of-Oz (Task-Oriented Modeling)** MultiWoZ is a general-domain task-oriented dialogue data set collected in a Wizard-of-Oz setting. It comprises 10000 dialogues in domains like 'restaurant,' 'train,' and 'hotel booking.' We selected the latest official data set for our evaluation, version 2.2.
For our LM to learn how to solve this problem, we need to encode it as a sequence-to-sequence prediction problem. To this end, we use the special tokens <sos_X> and <eos_X>, where \(X\in\{u,b,a,r\}\), to mark the start and end of the utterance, belief, action, and response strings. The example below represents a dialog from _MultiWoZ_ encoded as a sequence-to-sequence problem:
<sos_u> I am looking for [...] <eos_u> <sos_b> [restaurant] pricerange cheap area centre <eos_b> <sos_a> [inform] choice [request] food <eos_a> <sos_r> There are [value_count] restaurants [...] <eos_r> <sos_u>...
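A minimal serializer for this encoding might look like the sketch below; the dictionary keys and the example values are illustrative assumptions rather than the exact schema of the MultiWoZ files.
```python
def encode_turn(turn):
    """Serialize one dialog turn into the <sos_*> / <eos_*> tagged sequence."""
    return ("<sos_u> {u} <eos_u> "
            "<sos_b> {b} <eos_b> "
            "<sos_a> {a} <eos_a> "
            "<sos_r> {r} <eos_r>").format(u=turn["user"], b=turn["belief"],
                                          a=turn["action"], r=turn["response"])

def encode_session(turns):
    """A whole session is simply the concatenation of its encoded turns."""
    return " ".join(encode_turn(t) for t in turns)

example = [{"user": "I am looking for a cheap restaurant in the centre.",
            "belief": "[restaurant] pricerange cheap area centre",
            "action": "[inform] choice [request] food",
            "response": "There are [value_count] restaurants, what food do you prefer?"}]
print(encode_session(example))
```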
We first formalize the target grammar to enlist the known properties of this sequence. We notice that we have a regular grammar demarcating the field tokens; in long sequences, if our model is unable to predict this simple pattern, we might fail to parse the decoded sequence.
As the training example is the concatenation of the entire dialog session, the language of this grammar should be any form in \(\mathcal{L}=\{\epsilon,UBAR,UBARUBAR,...\}\), cycling infinitely in the regular pattern.
For each field in our language \(\mathcal{L}\), we have a grounded theory for the linguistic patterns that are present. Fields \(u\) and \(r\) follow a dialog act distribution (Traum and Hinkelman, 1992; Bunt, 2006). A string \(w\) in \(U\) is sampled from an utterance distribution \(u(w)\), and a string \(w\) in \(R\) is sampled from a response distribution of words \(r(w)\). The <s*> and <e*> tags are delimiter tokens for each segment.
In the \(b\) field, we have a classification task, and a NER with parsing as a sequence problem so \(k\subset K_{b};i\subset I_{b}\). In \(a\), our model aims to learn a policy optimization (Williams et al., 2016), \(k\subset K_{a};i\subset I_{a}\). Table 1 illustrates the grammar our model aims to acquire.
If we simulate nodes \(U\), \(B\), and \(R\) we obtain a candidate grammar for an intermediate task. To obtain this data, we crawled conversations from online forums where a user has a centered topic discussion and starts a thread by posing a question. Later, users can join the discussion by replying to the thread creator or other messages.
We manipulate these data to resemble our grammar in the following manner: we set the message that creates the thread as an utterance and the topic of this conversation as the classification problem. We put an empty string for action, and for the response, we pick an answer in the thread. We replicate this pattern for every message replying to the original topic creator. Finally, we append the unique tokens to encode it. This process is fully automated by a script using the HTML annotation from scraped data. We present the template below:
<sos_u> i'll be in amsterdam for a week [...] <eos_u> <sos_b> amsterdam <eos_b> <sos_a> <eos_a> <sos_r> take your pick [...] <eos_r>
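The transformation behind this template can be sketched as follows; the `thread` dictionary is a hypothetical intermediate representation, while the actual crawler and HTML parsing live in the repository linked above.
```python
def thread_to_pseudo_dialogs(thread):
    """Map one forum thread to pseudo TODS examples:
    opening post -> utterance, city/topic -> belief, empty action, each reply -> response."""
    examples = []
    for reply in thread["replies"]:
        examples.append("<sos_u> {q} <eos_u> <sos_b> {topic} <eos_b> "
                        "<sos_a> <eos_a> <sos_r> {r} <eos_r>".format(
                            q=thread["opening_post"], topic=thread["city"], r=reply))
    return examples

thread = {"city": "amsterdam",
          "opening_post": "i'll be in amsterdam for a week, what should i not miss?",
          "replies": ["take your pick of the museums near the museumplein.",
                      "a canal cruise at sunset is hard to beat."]}
for ex in thread_to_pseudo_dialogs(thread):
    print(ex)
```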
Table 2 illustrates the formal version of our fabricated grammar that simulates an intermediate transfer step.
In gray, we show the removed elements, and in bold the pseudo-elements.
\begin{table}
\begin{tabular}{c c} \(S\) & \(\to UBARS\,|\,\epsilon\) \\ \(U\) & \(\to\) \(<\)su\(>\) \(w_{u}\sim u(w_{u})\) \(<\)eu\(>\) \\ \(B\) & \(\to\) \(<\)sb\(>\) \(k\subset K_{b};i\subset I_{b}\) \(<\)eb\(>\) \\ \(A\) & \(\to\) \(<\)sa\(>\) \(k\subset K_{a};i\subset I_{a}\) \(<\)ea\(>\) \\ \(R\) & \(\to\) \(<\)sr\(>\) \(w_{r}\sim r(w_{r})\) \(<\)er\(>\) \\ \end{tabular}
\end{table}
Table 1: Regular grammar for TODS encoding in a language modeling problem.
The final step is to randomly concatenate the above template with random messages from different discussion topics, so the model can generalize the regular pattern.
To better evaluate our transference curriculum, we proposed three different training settings. The first is the one used in Yang et al. (2020). The second approach is to pre-train on a conversational distribution without any special encoding. Finally, the last curriculum encodes all the grammar in Table 2. We specify the curriculum details in the following:
1. The first curriculum named "gpt-*/multiwoz," uses pre-trained weights from a GPT-2 from _HuggingFace_.
2. The second curriculum named "gpt-*/noencode/multiwoz," starts with GPT-2 weights, then we train it on TripAdvisor data, with the random ordering of the texts and no special encoding.
3. The third curriculum named "gpt-*/encode/multiwoz," is the fully encoded task modeling utterance, response, and classification task for the pre-training.
For all curricula that include _TripAdvisor_, we pre-process the text using the same script as for _MultiWoZ_. We train this architecture with the same strategy as Yang et al. (2020), where the whole dialog session is presented to the model as a single sequence. Each training sequence contains at most 256 tokens; sessions exceeding this maximum length are split into a following sequence. We train the models with early stopping with a plateau of 5 steps.
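The 256-token splitting can be sketched as below; this is a simplified, greedy version of the preprocessing, and the choice of the GPT-2 tokenizer is an assumption consistent with the checkpoints used.
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

def split_session(encoded_turns, max_len=256):
    """Greedily pack encoded turns into training sequences of at most max_len tokens;
    a turn that would overflow the current sequence starts the next one."""
    sequences, current, current_len = [], [], 0
    for turn in encoded_turns:
        n = len(tokenizer.encode(turn))
        if current and current_len + n > max_len:
            sequences.append(" ".join(current))
            current, current_len = [], 0
        current.append(turn)
        current_len += n
    if current:
        sequences.append(" ".join(current))
    return sequences
```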
For the inference stage, we decode tokens with a sampling strategy at temperature \(\tau=0.7\) until we reach an end-of-response token. We provide the belief state and action from the oracle and only decode the response with the slot-value tokens.
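The decoding stage can be sketched with the `transformers` generation API as follows; the checkpoint path is hypothetical, and we assume `<eos_r>` was registered as an additional special token during fine-tuning.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical path to a checkpoint fine-tuned with the curriculum described above.
tokenizer = AutoTokenizer.from_pretrained("ckpt-multiwoz")
model = AutoModelForCausalLM.from_pretrained("ckpt-multiwoz")

def generate_response(context):
    """Sample the continuation with temperature 0.7 until an end-of-response token."""
    inputs = tokenizer(context, return_tensors="pt")
    eos_r = tokenizer.convert_tokens_to_ids("<eos_r>")
    with torch.no_grad():
        out = model.generate(**inputs, do_sample=True, temperature=0.7,
                             max_new_tokens=128, eos_token_id=eos_r,
                             pad_token_id=eos_r)
    return tokenizer.decode(out[0, inputs["input_ids"].shape[1]:])
```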
We evaluate the proposed curriculum under two perspectives: (i) the running loss for each curriculum variation; and (ii) the standard metrics for the MultiWoZ data sets with oracle values for database and action, computed by Nekvinda and Dusek (2021) library.
In the first analysis, we investigate how the initialization proposed by each curriculum helps the optimization process in minimizing the empirical loss for the target grammar. The second analysis measures, for each curriculum, how this improved minimization translates into better agent performance on the TODS task. In this step, we compute BLEU, INFORM, and SUCCESS.
### Results
Our evaluation consists of two experiments, curriculum convergence, and optimality. For convergence, we vary the model size and evaluate the scalability and time required to stop training in the curriculum.
Figure 2 presents the loss for the validation split over time. We observe that the curriculum with the pseudo-tods task has a significantly better starting point for the MultiWoZ task and converges to a lower loss than the other curriculums. For the small architecture, it converges faster.
Table 3 presents the TODS metrics computed for the test split. We observe that the pseudo-tods curriculum performs better than other curricula. The COMBINED score is given by \(\texttt{(INFORM+SUCCESS)}*0.5+\texttt{BLEU}\).
We found that the faster adaptation and overall loss observed during training also translated to the high-level metrics; and the larger the model, the more consistent the gain.
\begin{table}
\begin{tabular}{l l} \(S\) & \(\to UBARS\,|\,\epsilon\) \\ \(U\) & \(\to\) \(<\)su\(>\) \(w_{u}\sim u(w_{u})\) \(<\)eu\(>\) \\ \(B\) & \(\to\) \(<\)sb\(>\) \(\boldsymbol{k\subset K_{c}};i\subset I_{b}\) \(<\)eb\(>\) \\ \(A\) & \(\to\) \(<\)sa\(>\) \(k\subset K_{a};i\subset I_{a}\) \(<\)ea\(>\) \\ \(R\) & \(\to\) \(<\)sr\(>\) \(w_{r}\sim r(w_{r})\) \(<\)er\(>\) \\ \end{tabular}
\end{table}
Table 2: Regular grammar for simulating the distribution of the target grammar.
Figure 2: Loss during training on validation split of MultiWoZ. Each line represents one curriculum configuration. The bottom figure is a zoom in the final steps of early stopping.
## 5 Discussion
Differing from traditional language modeling, where the task is to predict sequences of words given a context, a task-oriented dialog presents a structured sequence prediction problem with three NLP sub-tasks: (i) a classification task to recognize intents; (ii) a NER to recognize entities; and (iii) a natural language generation task to predict the adequate response for a given utterance and system actions. LMs are general multi-task learners for NLP, which overfit on specific contexts with a nonstandard distribution of tokens.
In this research, we argued that having a general language modeling task as a starting point is not ideal for the problem of TODS. As our problem has a strong grammar dependency (that is, the generated text should be parseable, and the model should keep predicting the same structural tokens even if the dialog session has a hundred interactions), we need to construct gradual datasets that better initialize the LM and guarantee it does not degenerate in predicting the grammar for long sequences.
Our proposed solution, CTL, a sequence of transfer learning steps, helps to balance the trade-off between obtaining the right bias from the data and annotation costs. Our proposal shows that this approach is not only multi-step sequential transfer learning; it can also be viewed as a form of curriculum learning when considering the overall ordering across all tasks. We optimize this ordering through the same data-generating process as in the original curriculum learning approach. Our CTL approach allows us to use out-of-distribution data, which is central to learning with scalability and generality for modern NLP systems.
In our experiments, we verified that our proposed pre-training approach significantly improves the initialization for training on MultiWoZ data, resulting in an overall improvement in standard TODS metrics when compared to previous approaches.
Our approach explored the CTL on a narrow case study, given that sequence-to-sequence task encoding is a relatively new proposal Hosseini-Asl et al. (2020). However, prompt-based methods are a recent trending topic in NLP. These methods allow a pre-trained LM to perform out-of-the-box several tasks they were not explicitly trained for (also called zero-shot learning). Although those models perform pretty well in classic NLP tasks, those with a complex morphological structure still need additional learning. We could extend those models with CTL to other complex applications by creating pseudo-labeled datasets in a super-scale.
## 6 Conclusion
In this research, we developed a solution to train LMs for complex grammar acquisition. Our proposal was accomplished by training the model on intermediate datasets following the grammar simplification method. This allows external data sources to be encoded as intermediate sequence tasks that improve the final optimum. Unlike earlier research attempts that depend on additional annotated data, which is expensive and not feasible for many business models, our approach used unsupervised data to structure pseudo-supervised data and model sharing to significantly reduce the costs of maintaining this kind of system.
|
2305.19022 | PDOZ: innovative personal electronic dosimeter for electron and gamma
H*(d) dosimetry | The personal (or active) electronic dosimeters (PEDs) are devices used to
determine the individual exposure to ionizing radiations and they are employed
in hospitals, research laboratories and nuclear power plants. The PDOZ project
is a personal electronic dosimeter able to detect, discriminate and measure the
delivered dose by beta particles and gamma rays. In this paper, several Monte
Carlo simulations are described. The first one is regarding the ICRU sphere
implemented to evaluate the ambient dose equivalent, H*(10), and the
fluence-to-dose equivalent conversion coefficients for gamma rays and beta
particles. The second simulation is carried out to study the prototype
dosimeter response to gamma rays and beta particles and, also thanks to
previous one, to obtain the conversion curve necessary to calculate the ambient
dose equivalent from the silicon photomultipliers counts. In the last one,
instead, the performance of a prototype dosimeter, composed by a small plastic
scintillator coupled to two SiPMs, is evaluated and a simulation with different
radioactive sources is made whose results are compared with the experimental
measurements. All simulations are carried out by Geant4 including the optical
photon transport. | Lucia Salvi, Giulia Rossi, Giovanni Bartolini, Ali Behcet Alpat, Arca Bozkurt, Mustafa Dogukan Cegil, Ahmet Talha Guleryuz | 2023-05-30T13:22:50Z | http://arxiv.org/abs/2305.19022v4 | # PDOZ: innovative personal electronic dosimeter for electron and gamma \(H^{*}(d)\) dosimetry
###### Abstract
The personal (or active) electronic dosimeters (PEDs) are devices used to determine the individual exposure to ionizing radiation, and they are employed in hospitals, research laboratories and nuclear power plants. The PDOZ project is a personal electronic dosimeter able to detect, discriminate and measure the dose delivered by beta particles and gamma rays. In this paper, several Monte Carlo simulations are described. The first one concerns the ICRU sphere [1], [2], implemented to evaluate the ambient dose equivalent, \(H^{*}(10)\), and the fluence-to-dose equivalent conversion coefficients for gamma rays and beta particles. The second simulation is carried out to study the prototype dosimeter response to gamma rays and beta particles and, thanks also to the previous one, to obtain the conversion curve necessary to calculate the ambient dose equivalent from the silicon photomultiplier counts. In the last one, instead, the performance of a prototype dosimeter, composed of a small plastic scintillator coupled to two SiPMs, is evaluated, and a simulation with different radioactive sources is made, whose results are compared with the experimental measurements. All simulations are carried out with Geant4, including the optical photon transport.
_Keywords_: dose, dosimeter, fluence, Geant4, PDOZ, scintillator, SiPM, ambient dose equivalent, ICRU, fluence-to-dose equivalent conversion coefficients, \(H^{*}(10)\)
## 1 Introduction
In recent years, dosimetric measurements in the various fields of application of ionizing radiation have become increasingly necessary. Nuclear applications, in fact, are not limited to electrical power generation or to medical applications concerning diagnostic imaging and radiotherapy or hadrontherapy treatments. Ionizing radiation is also used for the sterilization of medical and pharmaceutical products and for the irradiation of food to improve its preservation over long periods of time. Simultaneously with these applications, dosimetric systems have been developed to control radiation processes and determine the amount of energy released from radiation into matter [3]. Furthermore, the radiation received by exposed people during and after catastrophic events like Fukushima and Chernobyl, or in a radiological accident, must be accurately measured. The precision and reliability of the measurement, even in such harsh radiation environments, are of paramount significance for decision making and protective action after such an unfavorable event [4]. These events have shown that most state-of-the-art commercial PEDs show limitations, in particular in the linearity of their response to wide-energy mixed radiation fields (i.e. electrons, photons and neutrons). We design PDOZ, which will use both organic and inorganic scintillators, to be a reliable PED with a linear response in mixed radiation fields over wide energy ranges. A non-exhaustive list of PDOZ applications includes medical imaging, well logging, homeland security, marine and space exploration, and high energy physics (HEP).
Nowadays, there are many different dosimetric methods, each of which provides information on the energy and dose absorbed by the medium on which the ionizing radiation impacted.
Personal dosimeters currently on the market can detect gamma rays and neutrons; gamma rays and thermal and fast neutrons; gamma rays and beta particles; or gamma rays, X-rays and neutrons. For example, the EPD-N2 dosimeter, developed by Thermo Fisher Scientific [5], can detect gamma radiation (and also X-rays) and thermal neutrons in the energy ranges 20 keV - 10 MeV and 0.025 eV - 15 MeV respectively, with an energy response ranging from \(\pm 20\%\) to \(\pm 50\%\) for gammas (depending on the energy) and \(\pm 30\%\) for neutrons. Another example of personal electronic dosimeters with multi-detector capabilities is the NRF series, namely the NRF30, NRF31 and NRF34, developed by FujiElectric [6]. The NRF30 can detect gamma and X-rays in the energy range from 30 keV to 6 MeV. The NRF31 can instead detect both gamma (X)-rays and neutrons in the energy ranges 30 keV - 6 MeV and 0.025 eV - 15 MeV respectively. Finally, the NRF34 can detect gamma (X)-rays and beta particles in the energy ranges 30 keV - 6 MeV and 0.2 MeV - 0.8 MeV (mean energy). For all of them the energy response is \(\leq\pm 20\%\) for gamma (X)-rays, \(\leq\pm 30\%\) for beta particles and \(\leq\pm 50\%\) for neutrons. None of these personal electronic dosimeters can detect gamma rays, beta particles and neutrons at the same time.
The PDOZ project consists in the development of a dosimeter that, in its final version, will be able to detect and discriminate three different kinds of particles: beta, gamma and neutrons. For this purpose three scintillators, a plastic one and two distinct crystals, will be employed, and two silicon photomultipliers, SiPMs, will be placed under the bottom surface of each of them. Each scintillator is specific to one type of particle to be detected: the plastic one is employed to measure the dose delivered by beta particles, whereas the two distinct crystals measure the dose delivered by gamma particles and by neutrons. The silicon photomultipliers are located on the bottom side of the scintillators in order to detect the light produced when a particle enters them and releases some energy. This energy excites the scintillator atoms to luminescence and, during their return to the ground state, they emit light photons, in our case with wavelengths in the blue region, which are then collected by the silicon photomultipliers. Scintillators play a crucial role as radiation detection material. They detect radiation indirectly and are usually coupled with a photo-sensor that reads out the light photons into which the energy deposited in the scintillator is converted. Scintillators are classified into organic and inorganic, and the scintillator type used in a radiation detector is determined by the type of radiation particle to be measured as well as the purpose of radiation detection. Plastic scintillators are organic scintillators, while inorganic scintillators are primarily ionic solids composed of high-density crystals. Inorganic scintillators can then be classified into two categories, single crystals and polycrystalline ceramics, with the former typically exhibiting better optical properties at the expense of fabrication costs [7], [8].
The electronic circuit is implemented to consider the light as a signal when both SiPMs have a pulse over threshold in coincidence. When a coincidence occurs, it is interpreted as a count in the device. Compared to traditional PhotoMultiplier Tube (PMT) sensors, the small-sized SiPM sensors widely used in personal dosimeters have lower operating voltages and consume less power. PDOZ will have several dosimetry applications in which the user can observe the real-time dose and the dose deposited by each type of particle. It can be used in various places, for instance hospitals, research centers, nuclear power plants, laboratories and any environment where radiation is present.
In this work, the study of the fluence to ambient dose equivalent curve, the dosimeter response to gamma and beta particles and the conversion curves are described in the ICRU simulation and single dosimeter response simulation. In the end, the performance study of a single dosimeter with different radioactive sources is carried out, and the results are compared and validated with experimental data.
## 2 Materials
The PDOZ dosimeter will be able to measure the absorbed dose released from different kinds of sources. For this reason, the final version includes three separate scintillators:
* plastic scintillator, the BC408 [9], for beta particles;
* crystal, CsI(Tl) [10], for gamma rays;
* another \({}^{10}\)B or \({}^{6}\)LiF coated crystal, (to be decided), for neutrons.
The three scintillators are placed at the same height inside the PED's external box, together with their electronic detection circuit. Coupled to the bottom of each scintillator there are two silicon photomultipliers, SiPMs (ASD-NUV1C-P [11]), to detect the light produced by entering particles. Figure 1 shows the device with the three scintillators.
To study the performance of a single scintillator dosimeter (using BC408) three calibrated laboratory radioactive sources have been used to collect data:
* \({}^{90}\)Sr, with half-life \(\tau_{1/2}=28.98\) years;
* \({}^{60}\)Co, with half-life \(\tau_{1/2}=5.27\) years;
* \({}^{137}\)Cs, with half-life \(\tau_{1/2}=30.08\) years.
The simulations of the ICRU sphere, and those carried out to study the response of a single scintillator to gamma and beta particles and to the radioactive sources, are implemented with the Geant4 [12] toolkit, which is used to simulate the passage of particles and radiation through matter and through any kind of setup or detector.
## 3 Conversion curves calculation
### Definitions of ICRU sphere and H*(10)
The definitions of ICRU sphere and ambient dose equivalent, \(H^{*}(10)\), employed are the ones provided by the International Commission on Radiological Protection. The ICRU sphere is a sphere with unity density, 1 g/cm\({}^{3}\), a diameter of 30 cm and made of tissue-equivalent material, so 76.2% oxygen, 11.1% carbon, 10.1% hydrogen, 2.6% nitrogen [1],
Figure 1: _Geometry of the PDOZ. The scintillators are in purple, blue and green, whereas the SiPMs, two for each scintillator, are in red._
[2] and serves to validate Monte Carlo simulations used to trace the conversion curve from count/time reading to ambient dose equivalent. Figure 2 illustrates the ICRU sphere simulation.
The \(H^{*}(10)\), instead, is: "the ambient dose equivalent at a point in a radiation field is the dose equivalent that would be produced by the corresponding expanding and aligned field in the ICRU sphere at a depth of 10 mm on the radius vector opposing the direction of the aligned field [1], [2]".
### ICRU Simulations
To calculate the fluence-to-dose equivalent conversion coefficients both for gamma and beta particles, Monte Carlo simulations are implemented with Geant4. An ICRU sphere is simulated together with a sensitive volume composed of a tissue-equivalent cube having dimensions \(1\times 1\times 1\) mm\({}^{3}\) and placed at a depth of 10 mm on the radius vector opposing the direction of the aligned field of particles. The source distribution is peaked on the axis where the cube is placed, in order to reduce the simulation time without decreasing the statistics. The events are generated according to Equation 1, [13],
\[r=R\cdot\xi^{\frac{1}{1-\alpha}} \tag{1}\]
in which \(r\) is the radial coordinate, \(R=15\) cm represents the beam radius, \(\xi\in(0,1)\) stands for a random number and \(\alpha\) indicates a constant parameter set at 0.5 as reported in the article [13]. The scored quantities must be weighted with the statistical weight \(w\) defined in Equation 2.
\[w=\frac{2}{1-\alpha}\cdot\frac{r^{1+\alpha}}{R^{1+\alpha}} \tag{2}\]
Several beams of \(10^{7}\) monoenergetic gamma rays are generated at different energy points inside the range of 0 keV to 2000 keV.
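For concreteness, the following stand-alone Python sketch (not part of the Geant4 application used in this work) illustrates how Equation 1 biases the sampled radii toward the axis and how the weight of Equation 2 restores an unbiased fluence; the beam radius and \(\alpha\) follow the values quoted above, while the number of sampled events is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

R = 15.0      # beam radius in cm for the gamma-ray case (see text)
alpha = 0.5   # biasing parameter, as in Ref. [13]

def sample_radius(n):
    # Eq. (1): r = R * xi**(1/(1 - alpha)); events pile up near the axis where the cube sits
    xi = rng.random(n)
    return R * xi ** (1.0 / (1.0 - alpha))

def weight(r):
    # Eq. (2): w = 2/(1 - alpha) * (r/R)**(1 + alpha); restores an unbiased (uniform-disc) fluence
    return 2.0 / (1.0 - alpha) * (r / R) ** (1.0 + alpha)

r = sample_radius(100_000)
w = weight(r)
# Sanity check: the weighted mean radius approaches the uniform-disc value 2R/3 = 10 cm.
print(r.mean(), np.average(r, weights=w))
```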
Figure 2: _Monte Carlo simulation of a parallel gamma rays beam at \(100\) keV, impinging on a ICRU sphere where different interactions take place._
The same procedure is then applied to beta particles. Since electrons are low penetrating particles, the ambient dose equivalent, \(H^{*}(d)\), is calculated at a depth of \(d=0.07\) mm as it is recommended by the ICRP standard for low penetrating radiations [14]. For this reason, the geometry is changed and a box of \(0.01\times 0.01\times 0.01\) mm\({}^{3}\) is placed at a depth of 0.075 mm. In this case \(10^{7}\) monoenergetic beta particles are generated at different energy points inside the range of 100 keV to 5000 keV.
In every simulation, for each energy, the absorbed dose in the ICRU box is recorded. Figure 3 shows the dose in the ICRU box volume.
### Fluence to ambient dose equivalent
To calculate the fluence-to-dose equivalent conversion coefficients, the fluence is determined with Equation 3
\[F=\frac{N}{A} \tag{3}\]
in which \(F\) is the fluence measured in cm\({}^{-2}\), \(N\) stands for the number of generated events and \(A\) represents the source area, which is a circle of radius 2 cm. The source radius is now set to 2 cm because the dosimeter dimensions are also smaller than those of the ICRU sphere. The fluence-to-dose equivalent conversion coefficients are found with Equation 4
\[X=\frac{D}{F} \tag{4}\]
where \(X\) is the fluence-to-dose equivalent conversion coefficient measured in pSv\(\cdot\)cm\({}^{2}\) for gamma rays and in nSv\(\cdot\)cm\({}^{2}\) for beta particles, \(D\) represents the dose and \(F\) indicates
Figure 3: _The dose deposited in ICRU box for gamma particles in red and for beta particles in blue._
the fluence. Figure 4 and Figure 5 report the results.
### Plastic scintillator dosimeter simulation
In order to study the performance of a single dosimeter, the energetic and geometrical efficiencies of the device when subjected to different types of radiation are simulated. The incoming particle hits the scintillator, and the energy deposited in it excites the scintillator molecules, which then release part of this energy in the form of optical photons. The wavelength of the optical photons released by the BC408 in use is peaked around 420 nm, where the quantum efficiency of the coupled SiPM is at its maximum. To evaluate the dose, the conversion factors from counts per second to Sv/h are obtained by studying the dosimeter response to different kinds of ionising radiation. The simulation geometry has to match that of the experimental prototype, so it involves a \(10\times 5\times 15\) mm\({}^{3}\) plastic scintillator, the BC408, two holes for the silicon photomultipliers and their cables, and a 100 \(\mu\)m foil of teflon wrapping to prevent optical photons from escaping the scintillator. In order to reproduce the behaviour as realistically as possible, the photon detection efficiency of the silicon photomultipliers is taken into account. Figure 6 shows the single dosimeter geometry simulation and the experimental device.
Figure 5: _In the first picture, the comparison between the conversion coefficients from our simulation and the ICRP74 values for beta particles in logarithmic scale. In the second one, the ratio between the conversion coefficients from the simulation and the ICRP74 values._
The silicon photomultiplier, or SiPM, is a solid-state photodetector consisting of an array of several (hundreds or thousands) integrated single-photon avalanche diodes, SPADs, which are called microcells or pixels. When a photon is detected, the SPAD produces a large electric output signal because of the internal avalanche multiplication. In a SiPM every hit SPAD can be counted independently, so it is possible to detect and count photons with high resolution and single-photon sensitivity.
The SiPMs are used to detect the photons produced by a scintillator: radiation is absorbed by the scintillator, which produces photons in the near-UV or visible range. These photons are then transferred to the exit surface of the scintillator and detected by a photodetector (SiPM) in order to be converted into an electric signal. Since SiPMs have a high photon detection efficiency (PDE), they are very suitable for this purpose [15].
The entire process, from the production of scintillation photons after the radiation hits the scintillator up to the detection of these photons by the SiPM, is simulated. The photon detection efficiency is also simulated in order to make the detector simulation as faithful as possible.
Three different cuts are implemented in the simulation in order to reproduce three different acquisition system settings; in all of them, one count takes place when, within an event, the signals of both SiPMs are in coincidence (a sketch of this counting logic in code follows the list):
* _p.e.0_, all the coincidences (any number of photons hitting two SiPMs in coincidence) are considered, so it is required to have at least one optical photon for each SiPM;
* _p.e.1_, it is required to have at least two optical photons for each SiPM, which means that the first photoelectron peak (corresponding to only one optical photon detected by both SiPMs) is eliminated from the coincidence counting;
* _p.e.2_, at least three optical photons for each SiPM are required, so both the first and second photoelectron peaks are removed from the coincidence counting.
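The sketch below illustrates, in plain Python, the coincidence counting logic of the three settings; the per-event photon counts are toy placeholders standing in for the SiPM photon counts produced by the simulation, not detector data.

```python
import numpy as np

def count_coincidences(n_phot_a, n_phot_b, min_photons):
    # p.e.0 -> min_photons = 1, p.e.1 -> 2, p.e.2 -> 3:
    # a count requires both SiPMs to reach the photon threshold in the same event
    both_over_threshold = (n_phot_a >= min_photons) & (n_phot_b >= min_photons)
    return int(both_over_threshold.sum())

rng = np.random.default_rng(1)
# Toy per-event photon counts for SiPM A and SiPM B (placeholders only)
n_a = rng.poisson(2.0, size=100_000)
n_b = rng.poisson(2.0, size=100_000)
for label, threshold in [("p.e.0", 1), ("p.e.1", 2), ("p.e.2", 3)]:
    print(label, count_coincidences(n_a, n_b, threshold))
```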
Figure 6: _In the left picture the Geant4 dosimeter simulation: in grey the two holes below for SiPMs and their cables, in blue the teflon foil. In the right image the experimental device with plastic scintillator._
#### 3.4.1 Gamma particles simulation
A single dosimeter with a BC408 scintillator is simulated with a circle-shaped source having a radius \(r\) of 1 cm, and \(10^{5}\) events are generated with energies ranging from 100 keV to 2500 keV. Equation 5 gives the fluence, in cm\({}^{-2}\), used to normalize the quantities:
\[F=\frac{n}{\pi r^{2}}=\frac{10^{5}}{\pi}\qquad[cm^{-2}] \tag{5}\]
where \(F\) represents the fluence, \(n\) stands for the number of events and \(r\) is the radius. The dosimeter response, necessary to find the ambient dose equivalent from the SiPMs counts, is given by Equation 6
\[R\left(E\right)=\frac{n_{c}}{F} \tag{6}\]
in which \(R\left(E\right)\) is the response, \(n_{c}\) indicates the number of events with two SiPMs in coincidence and \(F\) stands for the fluence. Figure 7 shows the dosimeter response in the three different settings.
The conversion curve, in \(\mu\)Sv/CPS\(\cdot\)h, is estimated with Equation 7
\[CF=\frac{X}{R} \tag{7}\]
where \(CF\) is the conversion factor, \(X\) stands for the fluence-to-dose equivalent conversion coefficient obtained from the ICRU simulation and converted in \(\mu\)Sv\(\cdot\)cm\({}^{2}\) and
Figure 7: _Number of two SiPMs signal coincidences normalized to the fluence for the p.e.0, p.e.1 and p.e.2 threshold settings in red, blue and green, respectively._
\(R\) represents the response of the dosimeter. The conversion curve for the ambient dose equivalent can be estimated with Equation 8, [16],
\[f\left(E\right)=\sum_{k=1}^{K_{max}}A\left(k\right)\left(\log E\right)^{k-1} \tag{8}\]
where \(A\left(k\right)\) represents a parameter, \(K_{max}\) is the total number of \(A\left(k\right)\) terms and \(E\) stands for the energy.
Figure 8 illustrates the conversion curves for gamma particles fitted with Equation 8 in the three settings.
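The chain from coincidence counts to conversion curve (Equations 5-8) can be summarized in a few lines of Python; the energy, coefficient and count arrays below are illustrative placeholders, not the simulated values of this work, and the number of \(A(k)\) terms is chosen arbitrarily.

```python
import numpy as np

# Placeholder inputs standing in for the simulated values (illustrative only)
energies = np.array([100.0, 300.0, 600.0, 1000.0, 1500.0, 2500.0])   # keV
X = np.array([0.6, 1.4, 2.7, 4.0, 5.5, 8.0])                         # fluence-to-dose coefficients
n_coinc = np.array([1.2e3, 4.0e3, 7.5e3, 1.1e4, 1.4e4, 1.8e4])       # two-SiPM coincidences

F = 1e5 / np.pi            # Eq. (5): fluence for 10^5 events over a 1 cm radius disc
R_E = n_coinc / F          # Eq. (6): dosimeter response
CF = X / R_E               # Eq. (7): conversion factor

K_max = 4                  # number of A(k) terms in Eq. (8), chosen arbitrarily here
A = np.polynomial.polynomial.polyfit(np.log(energies), CF, deg=K_max - 1)
CF_fit = np.polynomial.polynomial.polyval(np.log(energies), A)
print(np.round(CF_fit, 3))
```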
#### 3.4.2 Beta particles simulation
The same procedure described before is applied with a beta particle source, for which \(10^{5}\) events are generated with energies ranging from 100 keV to 2500 keV. Figure 9 illustrates the dosimeter response in the three settings, whereas Figure 10 shows the conversion curve for beta particles fitted with Equation 8.
Figure 8: _Gamma event conversion curves for the p.e.0, p.e.1 and p.e.2 threshold settings, in red, blue and green, respectively._
Figure 10: _Beta event conversion curves for the p.e.0, p.e.1 and p.e.2 threshold settings, in red, blue and green, respectively._
Figure 9: _Number of coincidences normalized to the fluence for the p.e.0, p.e.1 and p.e.2 threshold settings in red, blue and green, respectively._
## 4 Results and Discussion
### Source simulations and data analysis
In order to study the performance of one dosimeter with a single scintillator, several simulations are carried out with distinct sources whose activities are corrected taking into account the elapsed time, and, subsequently, the results are compared with the experimental data. The experimental set up involves different settings in which the distance between the dosimeter and the source is varied. The \({}^{90}\)Sr, \({}^{60}\)Co and \({}^{137}\)Cs sources are tested at distances between \(0.5-5\) cm. The simulation geometry consists of a single plastic scintillator, the BC408 with dimensions \(10\times 5\times 15\) mm\({}^{3}\), on which a foil of teflon, 100 \(\mu\)m thick, is placed. The source geometry is also implemented in order to allow comparisons with the experimental data. The radioactive source is placed inside a disk of plexiglass and epoxy. The SiPMs are coupled to the scintillator with EPO-TEK EJ2189 electrically conductive two-component epoxy glue [17]. Figure 11 shows the experimental set up simulation.
The sources employed are \({}^{90}\)Sr, \({}^{60}\)Co and \({}^{137}\)Cs [18] that have the following decays
\({}^{90}\)Sr \(\rightarrow\)\({}^{90}\)Y + e\({}^{-}\) + \(\bar{\nu}_{\rm e}\)
\({}^{60}\)Co \(\rightarrow\)\({}^{60}\)Ni + e\({}^{-}\) + \(\bar{\nu}_{\rm e}\) + \(\gamma_{1}\) + \(\gamma_{2}\)
with \(\gamma_{1}=1.173\) MeV and \(\gamma_{2}=1.332\) MeV,
\({}^{137}\)Cs \(\rightarrow\)\({}^{137}\)Ba + e\({}^{-}\) + \(\bar{\nu}_{\rm e}\) + \(\gamma\)
Figure 11: _Geometry of the plastic scintillator with two holes of depth \(5\) mm each in which two SiPMs are inserted and geometry of the source. The red disk is made of plexiglass whereas the blue one of epoxy._
in which \(\gamma=661\) keV. The sources are used to study the dosimeter detection of gamma rays and beta particles and the source spectra are those defined in Geant4.
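As an illustration of the activity correction mentioned above, the following snippet applies the standard exponential decay law using the half-lives quoted in Section 2; the initial activity and the elapsed time in the example are hypothetical.

```python
HALF_LIFE_YEARS = {"Sr90": 28.98, "Co60": 5.27, "Cs137": 30.08}   # values quoted in Section 2

def corrected_activity(a0_bq, elapsed_years, isotope):
    # A(t) = A0 * 2**(-t / t_half): source activity at the time of the measurement
    return a0_bq * 2.0 ** (-elapsed_years / HALF_LIFE_YEARS[isotope])

# Hypothetical example: a 37 kBq source calibrated 10 years before the measurement
print(corrected_activity(37_000.0, 10.0, "Co60"))
```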
To simulate the device behaviour, the Photon Detection Efficiencies of the SiPMs are also considered. One count is given when there is a coincidence between the signals of both SiPMs. The settings used to compare the simulated and experimental data are the same as those described in section 3.4.
Figure 12 shows the comparison between the simulated and the experimental data for the three different sources and settings.
Figure 12: _In the first picture the comparison between simulated, in magenta, red and dark cyan, and experimental data, in dark green, blue and lilac, respectively for the p.e.0, p.e.1 and p.e.2 settings for the \({}^{90}\)Sr source. In the second one the ratio between the simulated and the experimental data is reported for the \({}^{90}\)Sr source. In the other pictures the same results are shown for the \({}^{60}\)Co and the \({}^{137}\)Cs sources._
## 5 Conclusions
In this work, the ICRU procedure suggested in [19] has been followed to obtain the ambient dose equivalent and the fluence-to-dose equivalent conversion coefficients for gamma-ray and beta-particle sources. The same simulation is then applied to the detector in order to study the performance of a single scintillator read out by two SiPMs when it is exposed to gamma-ray or beta-particle sources. The main goal of this study is to validate our Geant4 simulation of a single-scintillator personal dosimeter (PED) following the procedure suggested by the ICRU. Our PED provides coincident event counts for various configurations. The simulation reproduces the exact configuration of the real experimental setup and the physical processes, including the optical photon transport and the coincidence conditions. The agreement between data and simulation (see the counts per second versus distance curves for three different radiation sources in Figure 12) validates our Monte Carlo approach and the physical parameters used. From these studies the conversion curves are obtained, by which the dose can be evaluated. Finally, a simulation is implemented to study the response of the device when exposed to sources of \({}^{90}\)Sr, \({}^{60}\)Co and \({}^{137}\)Cs. In this last study, experimental data are also taken for comparison with the simulated ones. The results confirm the agreement between simulated and experimental data within the error bars for both gamma rays and beta particles. Through this work, it will be possible to evaluate the dose from the counts per second using the conversion curves found.
Finally, in the new version of the PDOZ the response to neutrons must also be studied. Therefore, in the final prototype, whose design includes particle discrimination, it will be possible to detect, discriminate and evaluate the dose released by beta particles, gamma rays and neutrons.
|
2303.05785 | Scaling Up 3D Kernels with Bayesian Frequency Re-parameterization for
Medical Image Segmentation | With the inspiration of vision transformers, the concept of depth-wise
convolution revisits to provide a large Effective Receptive Field (ERF) using
Large Kernel (LK) sizes for medical image segmentation. However, the
segmentation performance might be saturated and even degraded as the kernel
sizes scaled up (e.g., $21\times 21\times 21$) in a Convolutional Neural
Network (CNN). We hypothesize that convolution with LK sizes is limited to
maintain an optimal convergence for locality learning. While Structural
Re-parameterization (SR) enhances the local convergence with small kernels in
parallel, optimal small kernel branches may hinder the computational efficiency
for training. In this work, we propose RepUX-Net, a pure CNN architecture with
a simple large kernel block design, which competes favorably with current
network state-of-the-art (SOTA) (e.g., 3D UX-Net, SwinUNETR) using 6
challenging public datasets. We derive an equivalency between kernel
re-parameterization and the branch-wise variation in kernel convergence.
Inspired by the spatial frequency in the human visual system, we extend to vary
the kernel convergence into element-wise setting and model the spatial
frequency as a Bayesian prior to re-parameterize convolutional weights during
training. Specifically, a reciprocal function is leveraged to estimate a
frequency-weighted value, which rescales the corresponding kernel element for
stochastic gradient descent. From the experimental results, RepUX-Net
consistently outperforms 3D SOTA benchmarks with internal validation (FLARE:
0.929 to 0.944), external validation (MSD: 0.901 to 0.932, KiTS: 0.815 to
0.847, LiTS: 0.933 to 0.949, TCIA: 0.736 to 0.779) and transfer learning (AMOS:
0.880 to 0.911) scenarios in Dice Score. | Ho Hin Lee, Quan Liu, Shunxing Bao, Qi Yang, Xin Yu, Leon Y. Cai, Thomas Li, Yuankai Huo, Xenofon Koutsoukos, Bennett A. Landman | 2023-03-10T08:38:34Z | http://arxiv.org/abs/2303.05785v2 | # Scaling Up 3D Kernels with Bayesian Frequency Re-parameterization for Medical Image Segmentation
###### Abstract
With the inspiration of vision transformers, the concept of depth-wise convolution revisits to provide a large Effective Receptive Field (ERF) using Large Kernel (LK) sizes for medical image segmentation. However, the segmentation performance might be saturated and even degraded as the kernel sizes scaled up (e.g., \(21\times 21\times 21\)) in a Convolutional Neural Network (CNN). We hypothesize that convolution with LK sizes is limited to maintain an optimal convergence for locality learning. While Structural Re-parameterization (SR) enhances the local convergence with small kernels in parallel, optimal small kernel branches may hinder the computational efficiency for training. In this work, we propose RepUX-Net, a pure CNN architecture with a simple large kernel block design, which competes favorably with current network state-of-the-art (SOTA) (e.g., 3D UX-Net, SwinUNETR) using 6 challenging public datasets. We derive an equivalency between kernel re-parameterization and the branch-wise variation in kernel convergence. Inspired by the spatial frequency in the human visual system, we extend to vary the kernel convergence into element-wise setting and model the spatial frequency as a Bayesian prior to re-parameterize convolutional weights during training. Specifically, a reciprocal function is leveraged to estimate a frequency-weighted value, which rescales the corresponding kernel element for stochastic gradient descent. From the experimental results, RepUX-Net consistently outperforms 3D SOTA benchmarks with internal validation (FLARE: 0.929 to 0.944), external validation (MSD: 0.901 to 0.932, KiTS: 0.815 to 0.847, LiTS: 0.933 to 0.949, TCIA: 0.736 to 0.779) and transfer learning (AMOS: 0.880 to 0.911) scenarios in Dice Score. Both codes and pretrained models are available at: [https://github.com/MASILab/RepUX-Net](https://github.com/MASILab/RepUX-Net)
Keywords:Bayesian Frequency Re-parameterization, Large Kernel Convolution, Medical Image Segmentation.
## 1 Introduction
With the introduction of Vision Transformers (ViTs), CNNs have been greatly challenged as seen with the leading performance in multiple volumetric data benchmarks, especially for medical image segmentation [7, 8, 21, 23]. The key contribution of ViTs is largely credited to the large Effective Receptive Field (ERF) with a multi-head self-attention mechanism [6]. Note the attention mechanism is computationally unscalable with respect to the input resolutions [17, 18]. Therefore, the concept of depth-wise convolution is revisited to provide a scalable and efficient feature computation with large ERF using large kernel sizes (e.g., \(7\times 7\times 7\)) [14, 18]. However, either from prior works or our experiments, the model performance becomes saturated or even degraded when the kernel size is scaled up in encoder blocks [4, 16]. We hypothesize that scaling up the kernel size in convolution may limit the optimal learning convergences across local to global scales. Recently, the feasibility of leveraging large kernel convolutions (e.g., \(31\times 31\)[4], \(51\times 51\)[16]) has been shown with natural image domain with Structural Re-parameterization (SR), which adapts Constant-Scale Linear Addition (CSLA) block (Fig. 2b) and re-parameterizes the large kernel weights during inference [4]. As convolutions with small kernel sizes converge more easily, the convergence of small kernel regions enhances in the re-parameterized weight, as shown in Fig. 1a. With such observation, we further ask: **Can we adapt variable convergence across elements of the convolution kernel during training, instead of regional locality only?**
In this work, we first derive and extend the theoretical equivalency of the weight optimization in the CSLA block. We observe that the kernel weight of each branch can be optimized with variable convergence using branch-specific learning rates. Furthermore, the ERF with SR is visualized to be more widely distributed from the center element to the global surroundings [4], demonstrating a similar behavior to the spatial frequency in the human visual system [13]. Inspired by the reciprocal characteristics of spatial frequency, we model the spatial frequency as a Bayesian prior to adapt variable convergence of each kernel element with stochastic gradient descent (Fig. 1b). Specifically, we compute a
Figure 1: With the fast convergence in small kernels, SR merges the branches weights and enhances the locality convergence with respect to the kernel size (deep blue region), while the global convergence is yet to be optimal (light blue region). By adapting BFR, the learning convergence can rescale in an element-wise setting and distribute the learning importance from local to global.
scaling factor with respect to the distance from the kernel center and multiply the corresponding element for re-parameterization during training. Furthermore, we simplify the encoder block design into a plain convolution block only to minimize the computation burden in training and achieve State-Of-The-Art (SOTA) performance. We propose RepUX-Net, a pure 3D CNN with the large kernel size (e.g., \(21\times 21\times 21\)) in encoder blocks, to compete favorably with current SOTA segmentation networks. We evaluate RepUX-Net on supervised multi-organ segmentation with 6 different public volumetric datasets. RepUX-Net demonstrates significant improvement consistently across all datasets compared to all SOTA networks. We summarize our contributions as below:
* We propose RepUX-Net with better adaptation in large kernel convolution than 3D UX-Net, achieving SOTA performance in 3D segmentation. To our best knowledge, this is the first network that effectively leverages large kernel convolution with plain design in the encoder for 3D segmentation.
* We propose a novel theory-inspired re-parameterization strategy to scale the element-wise learning convergence in large kernels with Bayesian prior knowledge. To our best knowledge, this is the first re-parameterization strategy to adapt 3D large kernels in the medical domain.
* We leverage six challenging public datasets to evaluate RepUX-Net in 1) direct training and 2) transfer learning scenarios with 3D multi-organ segmentation. RepUX-Net achieves significant improvement consistently in both scenarios across all SOTA networks.
Figure 2: Overview of RepUX-Net. Unlike performing SR to merge branches weight or performing GR within optimizers, we propose to multiply a Bayesian function \(\delta\) and scale the element-wise learning importance in each large kernel. We then put the scaled weights back into the convolution layer for training.
## 2 Related Works
**Weights Re-parameterization:** SR is a methodology for equivalently converting model structures by transforming the parameters in kernel weights. For example, RepVGG constructs one extra ResNet-style shortcut as a \(1\times 1\) convolution, parallel to a \(3\times 3\) convolution, during training [5]. Such a parallel branch design is claimed to enhance the learning efficiency during training, and the \(1\times 1\) branch is then merged into the parallel \(3\times 3\) kernel via a series of linear transformations at the inference stage. OREPA further adds more parallel branches with linear scaling modules to enhance training efficiency [10]. Inspired by the parallel branch design, RepLKNet is proposed to scale up the 2D kernel size (e.g., \(31\times 31\)) with a \(3\times 3\) convolution as the parallel branch [4]. SLaK further extends the kernel size to \(51\times 51\) by decomposing the large kernel into two rectangular parallel kernels with sparse groups and training the model with dynamic sparsity [16]. However, the FLOPs of the proposed models remain at a high level with the parallel branch design, and a trade-off between model performance and training efficiency is demonstrated. To tackle this trade-off, RepOptimizer provides an alternative that re-parameterizes the back-propagated gradient, instead of the structural parameters of the kernel weights, to enhance training efficiency with a plain convolution block design [3]. Significant effort has been devoted to enlarging 2D kernel sizes in the natural image domain, while limited studies have been proposed for 3D kernels in the medical domain. As 3D kernels have a larger number of parameters than 2D ones, it is challenging to directly leverage the parallel branch design and maintain an optimal convergence when learning large kernel convolutions without significantly trading off computational efficiency.
## 3 Methods
Instead of changing the gradient dynamics during training [3], we introduce RepUX-Net, a pure 3D CNN architecture that performs element-wise scaling of large kernel weights to enhance the learning convergence and effectively adapt a large receptive field for volumetric segmentation. To design such behavior, we adopt a two-step pipeline: 1) we define the theoretical equivalency of variable learning convergence in convolution branches; 2) we simulate the behavior of spatial frequency to re-weight the learning importance of each element in kernels for stochastic gradient descent. Note that the theoretical derivation assumes optimization with a first-order gradient-driven optimizer (e.g., SGD, AdamW) [3].
### Variable Learning Convergence in Multi-Branch Design
From Figure 2, the learning convergence of the large kernel convolution can be improved by either adding up the encoded outputs of parallel branches weighted by diverse scales with SR (RepLKNet [4]) or performing Gradient Re-parameterization (GR) by multiplying with constant values (RepOptimizer [3]) in a Single Operator (SO). Inspired by the concepts of SR and GR, we extend
the equivalency proof in RepOptimizer to adapt variable learning convergence across branches. Here, we only showcase the conclusion with two convolutions and two constant scalars as the scaling factors for simplicity. The complete proof of equivalency is given in Supplementary 1.1. Let \(\{\alpha_{L},\alpha_{S}\}\) and \(\{W_{L},W_{S}\}\) be the two constant scalars and the two convolution kernels (Large & Small), respectively. Letting \(X\) and \(Y\) be the input and output features, the CSLA block is formulated as \(Y_{CSLA}=\alpha_{L}(X\star W_{L})+\alpha_{S}(X\star W_{S})\), where \(\star\) denotes convolution. For SO blocks, we train the plain structure parameterized by \(W^{\prime}\) with \(Y_{SO}=X\star W^{\prime}\). Letting \(i\) denote the training iteration, we ensure that \({Y^{(i)}}_{CSLA}={Y^{(i)}}_{SO},\forall i\geq 0\) and derive the stochastic gradient descent of the parallel branches as follows:
\[\alpha_{L}W_{L(i+1)}+\alpha_{S}W_{S(i+1)}=\alpha_{L}W_{L(i)}-\lambda_{L}\alpha _{L}\frac{\partial\mathcal{L}}{\partial W_{L_{i}}}+\alpha_{S}W_{S(i)}-\lambda _{S}\alpha_{S}\frac{\partial\mathcal{L}}{\partial W_{S_{i}}}, \tag{1}\]
where \(\mathcal{L}\) is the objective function and \(\lambda_{L}\) and \(\lambda_{S}\) are the Learning Rates (LR) of each branch, respectively. We observe that the optimization of each branch can differ by adjusting the branch-specific LR, and the locality convergence in large kernels is enhanced by the quick convergence of small kernels. Additionally, in our experiments a significant improvement is demonstrated with different branch-wise LRs using SGD (Table 2). With this observation, we further hypothesize that **the convergence of each large kernel element can be optimized differently by linear scaling with prior knowledge**.
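A toy numerical check of the CSLA/SO equivalence can be written as a short PyTorch sketch (illustration only, not the training code of this work): the scaled small kernel is embedded at the centre of the scaled large kernel, so a single convolution reproduces the sum of the two branches. Channel counts, input size and scalars are arbitrary.

```python
import torch
import torch.nn.functional as F

alpha_L, alpha_S = 1.0, 0.5
C, kL, kS = 2, 21, 3                       # illustrative channel count and kernel sizes
W_L = torch.randn(C, 1, kL, kL, kL) * 1e-2
W_S = torch.randn(C, 1, kS, kS, kS) * 1e-2
x = torch.randn(1, C, 16, 16, 16)

# Two-branch CSLA output: scaled large + scaled small depth-wise convolutions
y_csla = alpha_L * F.conv3d(x, W_L, padding=kL // 2, groups=C) \
       + alpha_S * F.conv3d(x, W_S, padding=kS // 2, groups=C)

# Equivalent single operator W': embed the scaled small kernel at the centre of the large one
W_prime = alpha_L * W_L.clone()
c = (kL - kS) // 2
W_prime[:, :, c:c + kS, c:c + kS, c:c + kS] += alpha_S * W_S
y_so = F.conv3d(x, W_prime, padding=kL // 2, groups=C)
print(torch.allclose(y_csla, y_so, atol=1e-4))   # True up to floating-point error
```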
### Bayesian Frequency Re-parameterization (BFR)
With the visualization of the ERF in RepLKNet [4], the diffused distribution (from local to global) in the ERF demonstrates behavior similar to the spatial frequency in the human visual system [13]. High spatial frequency (small ERF) allows details to be refined and sharpened with high acuity, while global details are captured with low spatial frequency. Inspired by the reciprocal characteristics of spatial frequency, we first generate a Bayesian prior distribution to model the spatial frequency by computing a reciprocal distance function between each element and the central point of the kernel weight as follows:
\[\begin{split} d(x,y,z,c)&=\sqrt{\left(x-c\right)^{ 2}+\left(y-c\right)^{2}+\left(z-c\right)^{2}}\\ \delta(x_{k},y_{k},z_{k},c,\alpha)&=\frac{\alpha}{d( x_{k},y_{k},z_{k},c)+\alpha}\end{split} \tag{2}\]
where \(k\) and \(c\) are the element and central index of the kernel weight, \(\alpha\) is the hyperparameter to control the shape of the generated frequency distribution. Instead of adjusting the LR in parallel branches, we propose to re-parameterize the convolution weights by multiplying the scaling factor \(\delta\) to each kernel element and apply a static LR \(\lambda\) for stochastic gradient descent in single operator setting as follows:
\[W^{{}^{\prime}}_{i+1}=\delta W^{{}^{\prime}}_{i}-\lambda\frac{\partial L}{ \partial\delta W^{{}^{\prime}}_{i}} \tag{3}\]
With the multiplication by \(\delta\), each element in the kernel weight is rescaled with respect to its frequency level and allowed to converge differently with a static LR in stochastic gradient descent. In theory, such a design distributes the weighted convergence from local to global, thus tackling the limitation of enhancing only the local convergence in the branch-wise setting.
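A minimal sketch of the scaling factor of Eq. (2), and of how it would multiply the kernel weights as in Eq. (3), is given below; the kernel size and \(\alpha\) follow the paper, while the usage lines at the end are only schematic comments, not the actual optimizer loop.

```python
import torch

def bfr_scaling(kernel_size: int, alpha: float = 1.0) -> torch.Tensor:
    # Eq. (2): delta = alpha / (d + alpha), with d the distance of each element
    # from the central index of the k x k x k kernel
    c = (kernel_size - 1) / 2.0
    idx = torch.arange(kernel_size, dtype=torch.float32)
    x, y, z = torch.meshgrid(idx, idx, idx, indexing="ij")
    d = torch.sqrt((x - c) ** 2 + (y - c) ** 2 + (z - c) ** 2)
    return alpha / (d + alpha)

delta = bfr_scaling(21, alpha=1.0)
print(delta[10, 10, 10], delta[0, 0, 0])   # 1.0 at the centre, smallest at a corner

# Schematic use during training (Eq. (3)), shown as comments only:
#   scaled_weight = conv.weight * delta    # element-wise rescaling before the forward pass
#   loss.backward(); optimizer.step()      # gradients then flow through the scaled weights
```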
### Model Architecture
The backbone of RepUX-Net is based on 3D UX-Net [14], which comprises multiple volumetric convolution blocks that directly utilize 3D patches and leverage skip connections to transfer hierarchical multi-resolution features for end-to-end optimization. Inspired by [15], we choose a kernel size of \(21\times 21\times 21\) for Depth-Wise Convolution (DWC-21) as the optimal choice without significant trade-off between model performance and computational efficiency in 3D. We further simplify the block design as a plain convolution block design to minimize the computational burden from additional modules. The encoder blocks in layers \(l\) and \(l+1\) are defined as follows:
\[\hat{z}^{l}=\text{GeLU}(\text{DWC-21}(\text{BN}(z^{l-1}))),\ \hat{z}^{l+1}= \text{GeLU}(\text{DWC-21}(\text{BN}(z^{l}))) \tag{4}\]
where \(\hat{z}_{l}\) and \(\hat{z}_{l+1}\) are the outputs from the DWC layer at each depth level; BN denotes the batch normalization layer.
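A hypothetical PyTorch rendering of the plain encoder block of Eq. (4) is shown below; the channel count, batch size and volume size are illustrative and do not reflect the actual RepUX-Net configuration.

```python
import torch
import torch.nn as nn

class PlainLKBlock(nn.Module):
    """BN -> 21x21x21 depth-wise convolution (DWC-21) -> GELU, as in Eq. (4)."""

    def __init__(self, channels: int, kernel_size: int = 21):
        super().__init__()
        self.bn = nn.BatchNorm3d(channels)
        self.dwc = nn.Conv3d(channels, channels, kernel_size,
                             padding=kernel_size // 2, groups=channels)

    def forward(self, z):
        return nn.functional.gelu(self.dwc(self.bn(z)))

block = PlainLKBlock(channels=4)
out = block(torch.randn(2, 4, 16, 16, 16))   # illustrative batch and volume size
print(out.shape)                             # torch.Size([2, 4, 16, 16, 16])
```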
## 4 Experimental Setup
**Datasets** We perform experiments on six public datasets for volumetric segmentation: 1) the Medical Segmentation Decathlon (MSD) spleen dataset [1], 2) the MICCAI 2017 LiTS Challenge dataset (LiTS) [2], 3) the MICCAI 2019 KiTS Challenge dataset (KiTS) [9], 4) the NIH TCIA Pancreas-CT dataset (TCIA) [20], 5) the MICCAI 2021 FLARE Challenge dataset (FLARE) [19], and 6) the MICCAI 2022 AMOS challenge dataset (AMOS) [12]. More details of each dataset (including the data split for training and inference) are described in Supplementary Material (SM) Table 1.
**Implementation** We evaluate RepUX-Net in three different scenarios: 1) internal validation with direct supervised learning, 2) external validation on unseen datasets, and 3) transfer learning with pretrained weights. All preprocessing and training details, including baselines, follow [14] for benchmarking. For the external validations, we leverage the AMOS-pretrained weights to evaluate 4 unseen datasets. In summary, we evaluate the segmentation performance of RepUX-Net by comparing against current SOTA networks in a fully-supervised setting. Furthermore, we perform ablation studies to investigate the effect of the Bayesian frequency distribution with different scales generated by \(\alpha\), and the variability of branch-wise learning rates with first-order gradient optimizers (e.g., SGD, AdamW), for volumetric segmentation. The Dice similarity coefficient is leveraged as the evaluation metric to measure the overlap between the model predictions and the manual ground-truth labels.
## 5 Results
**Evaluations in Different Scenarios.** Table 1 compares the results of current SOTA networks on medical image segmentation in a volumetric setting. With our designed convolutional blocks as the encoder backbone, RepUX-Net demonstrates the best performance across all segmentation tasks with significant improvement in Dice score (FLARE: 0.934 to 0.944, AMOS: 0.891 to 0.902). Furthermore, RepUX-Net consistently demonstrates the best generalizability, with a significant boost in performance across 4 different external datasets (MSD: 0.926 to 0.932, KiTS: 0.836 to 0.847, LiTS: 0.939 to 0.949, TCIA: 0.750 to 0.779). From Figure 2A, RepUX-Net also demonstrates the quickest convergence rate when training on the AMOS dataset from scratch. In the transfer learning scenario, RepUX-Net significantly outperforms the current SOTA networks with a mean Dice of 0.911 (1.22% enhancement), as shown in Table 3. RepUX-Net thus demonstrates its capabilities both in generalizability to unseen datasets and in transfer learning. The qualitative representations (in SM Figure 1) provide additional confidence in the quality improvement of segmentation predictions with RepUX-Net.
**Ablation studies with block designs & optimizers.** With the plain convolution design, a mean dice score of 0.906 is demonstrated with AdamW optimizer
\begin{table}
\begin{tabular}{l|c c|c c c c|c c c c} \hline \hline \multirow{2}{*}{Optimizer} & \multicolumn{3}{c|}{Mean Branch Para. Branch BFR} & \multicolumn{3}{c|}{Train Steps Main LR Para. LR} & \multicolumn{3}{c}{Mean Dice} \\ \hline SGD & \(21\times 21\times 21\) & \(\times\) & \(\times\) & 40000 & 0.0003 & \(\times\) & 0.898 \\ AdamW & \(21\times 21\times 21\) & \(\times\) & \(\times\) & 40000 & 0.0001 & \(\times\) & 0.906 \\ SGD & \(21\times 21\times 21\) & \(3\times 3\times 3\) & \(\times\) & 40000 & 0.0003 & 0.0006 & 0.917 \\ AdamW & \(21\times 21\times 21\) & \(3\times 3\times 3\) & \(\times\) & 40000 & 0.0001 & 0.0001 & 0.929 \\ AdamW & \(21\times 21\times 21\) & \(\times\) & \(\checkmark\) & 40000 & 0.0001 & \(\times\) & **0.938** \\ \hline SGD & \(21\times 21\times 21\) & \(3\times 3\times 3\) & \(\times\) & 60000 & 0.0003 & 0.0006 & 0.930 \\ AdamW & \(21\times 21\times 21\) & \(3\times 3\times 3\) & \(\times\) & 60000 & 0.0001 & 0.0001 & 0.938 \\ AdamW & \(21\times 21\times 21\) & \(\times\) & \(\checkmark\) & 60000 & 0.0001 & \(\times\) & **0.944** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Ablation studies with quantitative Comparison on Block Designs with/out frequency modeling using different optimizer
\begin{table}
\begin{tabular}{l|c c|c c c c|c c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{5}{c|}{Internal Testing} & \multicolumn{5}{c}{External Testing} \\ & & \multicolumn{5}{c|}{FLARE} & \multicolumn{2}{c|}{MSD} & \multicolumn{2}{c}{KITS} & \multicolumn{2}{c}{LiTS} & \multicolumn{1}{c}{TCIA} \\ \hline methods & \#Params FLOPs & Spleen Kidney & Liver & Pancreas & Mean & Spleen Kidney & Liver & Pancreas \\ \hline nn-UNet [11] & 31.2M & 743.3G & 0.971 & 0.966 & 0.976 & 0.792 & 0.926 & 0.917 & 0.829 & 0.935 & 0.739 \\ \hline TransBSTS [22] & 31.6M & 110.4G & 0.964 & 0.959 & 0.974 & 0.711 & 0.902 & 0.881 & 0.797 & 0.926 & 0.699 \\ UNETR [8] & 92.8M & 82.6G & 0.927 & 0.947 & 0.960 & 0.710 & 0.886 & 0.857 & 0.801 & 0.920 & 0.679 \\ nFormer [23] & 149.3M & 240.2G & 0.973 & 0.960 & 0.975 & 0.717 & 0.906 & 0.880 & 0.774 & 0.927 & 0.690 \\ SwinUNETR [7] & 62.2M & 328.4G & 0.979 & 0.965 & 0.980 & 0.788 & 0.929 & 0.901 & 0.815 & 0.933 & 0.736 \\
3D UX-Net (k=7) [14] & 53.0M & 639.4G & 0.981 & 0.969 & 0.982 & 0.801 & 0.934 & 0.926 & 0.836 & 0.939 & 0.750 \\
3D UX-Net (k=21) [14] & 65.9M & 757.6G & 0.980 & 0.968 & 0.979 & 0.795 & 0.930 & 0.908 & 0.808 & 0.929 & 0.720 \\ \hline
**RepOptimizer [3]** & 65.8M & 757.4G & 0.981 & 0.969 & 0.981 & 0.822 & 0.937 & 0.913 & 0.833 & 0.934 & 0.746 \\
**3D RepUX-Net (Ours)** & 65.8M & 757.4G & **0.984** & **0.970** & **0.983** & **0.837** & **0.944*** & **0.932*** & **0.847*** & **0.949*** & **0.779*** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of SOTA approaches on the five different testing datasets. (*: \(p<0.01\), with Paired Wilcoxon signed-rank test to all baseline networks)
and performs slightly better than that with SGD. With the additional design of a parallel small kernel branch, the segmentation performance significantly improved (SGD: 0.898 to 0.917, AdamW: 0.906 to 0.929) with the optimized parallel branch LR using SR. The performance is further enhanced (SGD: 0.917 to 0.930, AdamW: 0.929 to 0.937) without becoming saturated as the number of training steps increases. By adapting BFR, the segmentation performance significantly outperforms the parallel branch design, with a Dice score of 0.944.
**Effectiveness on Different Frequency Distribution.** From Figure 2 in SM, RepUX-Net demonstrates the best performance when \(\alpha=1\), while comparable performance is demonstrated in both \(\alpha=0.5\) and \(\alpha=8\). A possible family of Bayesian distributions (different shapes) may need to further optimize the learning convergence of kernels across each channel.
**Limitations.** The shape of the generated Bayesian distribution is fixed across all kernel weights with an unlearnable distance function. Each channel in kernels is expected to extract variable features with different distributions. Exploring different families of distributions to rescale the element-wise convergence in kernels will be our potential future direction.
## 6 Conclusion
We introduce RepUX-Net, the first 3D CNN adapting extreme large kernel convolution in encoder network for medical image segmentation. We propose to model the spatial frequency in the human visual system as a reciprocal function, which generates a Bayesian prior to rescale the learning convergence of each ele
\begin{table}
\begin{tabular}{l|c c c c c c c c c c c c c c} \hline \hline & \multicolumn{10}{c}{Train From Scratch Scenario} \\ \hline Methods & \multicolumn{2}{c}{Spoen R. Kid L. Kid Gall.} & \multicolumn{2}{c}{Eso. Liver} & \multicolumn{2}{c}{Liver Stom. Aorta} & \multicolumn{2}{c}{IVC} & \multicolumn{2}{c}{Panc. RAG} & \multicolumn{2}{c}{LAG} & \multicolumn{2}{c}{Doo. Blad. Pros.} & \multicolumn{2}{c}{Avg} \\ \hline \hline nn-UNet & 0.951 & 0.919 & 0.930 & 0.845 & 0.797 & 0.975 & 0.863 & 0.941 & 0.898 & 0.813 & 0.730 & 0.677 & 0.772 & 0.797 & 0.815 & 0.850 \\ \hline TransBTS & 0.930 & 0.921 & 0.909 & 0.798 & 0.722 & 0.966 & 0.801 & 0.900 & 0.820 & 0.702 & 0.641 & 0.550 & 0.684 & 0.730 & 0.679 & 0.783 \\ UNETR & 0.925 & 0.923 & 0.903 & 0.777 & 0.701 & 0.964 & 0.759 & 0.887 & 0.851 & 0.687 & 0.688 & 0.543 & 0.629 & 0.710 & 0.770 & 0.740 \\ nnFormer & 0.932 & 0.928 & 0.914 & 0.831 & 0.743 & 0.968 & 0.820 & 0.905 & 0.838 & 0.725 & 0.678 & 0.578 & 0.677 & 0.737 & 0.796 & 0.785 \\ SwinUNETR & 0.956 & 0.957 & 0.949 & 0.891 & 0.820 & 0.978 & 0.880 & 0.939 & 0.894 & 0.818 & 0.800 & 0.730 & 0.803 & 0.849 & 0.819 & 0.871 \\
3D X-Net (k=7) & 0.966 & 0.959 & 0.951 & 0.903 & 0.830 & 0.930 & 0.910 & 0.950 & 0.913 & 0.830 & 0.805 & 0.756 & 0.846 & 0.897 & 0.863 & 0.890 \\
3D UX-Net (k=21) & 0.963 & 0.959 & 0.953 & **0.921** & 0.848 & 0.981 & 0.903 & 0.953 & 0.910 & 0.828 & 0.815 & 0.754 & 0.824 & 0.900 & 0.878 & 0.891 \\ RepOptimizer & 0.968 & **0.964** & 0.953 & 0.903 & 0.857 & 0.981 & 0.915 & 0.950 & 0.915 & 0.826 & 0.802 & 0.756 & 0.813 & 0.906 & 0.867 & 0.892 \\ \hline RepUX-Net (Ours) & **0.972** & 0.963 & **0.964** & 0.911 & **0.861** & **0.982** & **0.921** & **0.956** & **0.924** & **0.837** & **0.818** & **0.777** & 0.831 & **0.916** & **0.879** & **0.902* \\ \hline \hline \multicolumn{10}{c}{Transfer Learning Scenario} \\ \hline Methods & \multicolumn{2}{c}{Spoen R. Kid L. Kid Gall.} & \multicolumn{2}{c}{Eso. Liver} & \multicolumn{2}{c}{Stom. Aorta} & \multicolumn{2}{c}{IVC} & \multicolumn{2}{c}{Panc. RAG} & \multicolumn{2}{c}{Loo. Blad. Pros.} & \multicolumn{2}{c}{Avg} \\ \hline \hline nn-UNet & 0.965 & 0.959 & 0.951 & 0.889 & 0.820 & 0.980 & 0.890 & 0.948 & 0.901 & 0.821 & 0.785 & 0.739 & 0.806 & 0.869 & 0.839 & 0.878 \\ \hline TransBTS & 0.885 & 0.931 & 0.916 & 0.817 & 0.744 & 0.969 & 0.837 & 0.914 & 0.855 & 0.724 & 0.630 & 0.566 & 0.704 & 0.741 & 0.650 & 0.792 \\ UNETR & 0.926 & 0.936 & 0.918 & 0.785 & 0.702 & 0.969 & 0.788 & 0.893 & 0.828 & 0.732 & 0.717 & 0.554 & 0.658 & 0.683 & 0.722 & 0.762 \\ nnFormer & 0.935 & 0.904 & 0.887 & 0.836 & 0.712 & 0.964 & 0.789 & 0.901 & 0.821 & 0.743 & 0.655 & 0.870 & 0.641 & 0.744 & 0.714 & 0.790 \\ SwinUNETR & 0.959 & 0.960 & 0.949 & 0.894 & 0.827 & 0.979 & 0.890 & 0.944 & 0.893 & 0.828 & 0.791 & 0.745 & 0.817 & 0.875 & 0.841 & 0.880 \\
3D UX-Net (k=7) & 0.970 & 0.967 & 0.961 & 0.923 & 0.832 & 0.984 & 0.920 & 0.951 & 0.914 & 0.856 & 0.825 & 0.739 & 0.853 & 0.906 & 0.876 & 0.900 \\
3D UX-Net (k=21) & 0.969 & 0.965 & 0.962 & 0.910 & 0.824 & 0.982 & 0.918 & 0.949 & 0.915 & 0.850 & 0.823 & 0.740 & 0.843 & 0.905 & 0.877 & 0.898 \\ RepOptimizer & 0.967 & 0.967 & 0.957 & 0.908 & 0.847 & 0.983 & 0.913 & 0.945 & 0.914 & 0.838 & 0.825 & 0.780 & 0.836 & 0.915 & 0.864 & 0.897 \\ \hline RepUX-Net & **0.973** & **0.968** & **0.965** & **0.933** & **0.865** & **0.985** & **0.930** & **0.960** & **0.923** & **0.859** & **0.829** & **0.793** & **0.869** & **0.918** & **0.891** & **0.911* \\ \hline \hline \end{tabular}
\end{table}
Table 3: Evaluations on the AMOS testing split in different scenarios.(*: \(p<0.01\), with Paired Wilcoxon signed-rank test to all baseline networks)
ment in kernel weights. By introducing the frequency-guided importance during training, RepUX-Net outperforms current SOTA networks on six challenging public datasets via both direct training and transfer learning scenarios.
|
2306.17239 | Can supercooled phase transitions explain the gravitational wave
background observed by pulsar timing arrays? | Several pulsar timing array collaborations recently reported evidence of a
stochastic gravitational wave background (SGWB) at nHz frequencies. Whilst the
SGWB could originate from the merger of supermassive black holes, it could be a
signature of new physics near the 100 MeV scale. Supercooled first-order phase
transitions (FOPTs) that end at the 100 MeV scale are intriguing explanations,
because they could connect the nHz signal to new physics at the electroweak
scale or beyond. Here, however, we provide a clear demonstration that it is not
simple to create a nHz signal from a supercooled phase transition, due to two
crucial issues that could rule out many proposed supercooled explanations and
should be checked. As an example, we use a model based on non-linearly realized
electroweak symmetry that has been cited as evidence for a supercooled
explanation. First, we show that a FOPT cannot complete for the required
transition temperature of around 100 MeV. Such supercooling implies a period of
vacuum domination that hinders bubble percolation and transition completion.
Second, we show that even if completion is not required or if this constraint
is evaded, the Universe typically reheats to the scale of any physics driving
the FOPT. The hierarchy between the transition and reheating temperature makes
it challenging to compute the spectrum of the SGWB. | Peter Athron, Andrew Fowlie, Chih-Ting Lu, Lachlan Morris, Lei Wu, Yongcheng Wu, Zhongxiu Xu | 2023-06-29T18:09:17Z | http://arxiv.org/abs/2306.17239v4 | Can supercooled phase transitions explain the gravitational wave background observed by pulsar timing arrays?
###### Abstract
Several pulsar timing array collaborations recently reported evidence of a stochastic gravitational wave background (SGWB) at nHz frequencies. Whilst the SGWB could originate from the merger of supermassive black holes, it could be a signature of new physics near the 100 MeV scale. Supercooled first-order phase transitions (FOPTs) that end at the 100 MeV scale are intriguing explanations, because they could connect the nHz signal to new physics at the electroweak scale or beyond. Here, however, we provide a clear demonstration that it is not simple to create a nHz signal from a supercooled phase transition, due to two crucial issues that should be checked in any proposed supercooled explanations. As an example, we use a model based on non-linearly realized electroweak symmetry that has been cited as evidence for a supercooled explanation. First, we show that a FOPT cannot complete for the required transition temperature of around 100 MeV. Such supercooling implies a period of vacuum domination that hinders bubble percolation and transition completion. Second, we show that even if completion is not required or if this constraint is evaded, the Universe typically reheats to the scale of any physics driving the FOPT. The hierarchy between the transition and reheating temperature makes it challenging to compute the spectrum of the SGWB.
## I Introduction
The North American Nanohertz Observatory for Gravitational Waves (NANOGrav) recently detected a stochastic gravitational wave background (SGWB) for the first time with a significance of about \(4\sigma\)[1]. This was corroborated by other pulsar timing array (PTA) experiments, including the Chinese Pulsar Timing Array (CPTA; [2]), the European Pulsar Timing Array (EPTA; [3]), and the Parkes Pulsar Timing Array (PPTA; [4]). Although the background could originate from mergers of super-massive black holes (SMBHs; [5; 6]), this explanation might be inconsistent with previous estimates of merger density and remains a topic of debate [7; 8; 9].1 Thus, there is an intriguing possibility that the SGWB detected by NANOGrav could originate from more exotic sources [11]. Indeed, many exotic explanations were proposed for an earlier hint of this signal [12; 13; 14], or immediately after the announcement. These include non-canonical kinetic terms [15], inflation [16; 17; 18; 19; 20], first-order phase transitions (FOPTs; [21; 22; 23; 24; 25]), cosmic strings [26; 27; 28; 29; 30; 31; 32; 33], domain walls [34; 35], primordial black holes [36], primordial magnetic fields [37], axions and ALPs [38; 39; 40; 41; 42; 43; 44], QCD [45; 46], and dark sector models [47; 48; 49; 50; 51; 52; 53; 54; 53; 55].
Footnote 1: The fit to SMBHs may also be improved by considering a spike in dark matter around the SMBHs [10].
The nanohertz (nHz) frequency of the signal indicates that any new physics explanation should naturally lie at around 100 MeV. If there are new particles around the MeV scale there are constraints from cosmology [56; 57; 58; 59] as any new particles must not significantly alter the number of relativistic degrees of freedom [60; 61], must not inject energy e.g. from particle decays that could spoil Big Bang nucleosynthesis (BBN) and distort the cosmic microwave background (CMB), and must not lead to an overabundance of dark matter (DM). In any case, there are constraints from laboratory experiments, such as fixed-target experiments, \(B\)-factories and high-energy particle colliders. These constraints may weaken if the new particles are sequestered in a dark sector, though could be important [62] if one wishes to connect the dark sector to the Standard Model (SM).
It is conceivable, however, that new physics at characteristic scales far beyond the MeV scale could be responsible for a nHz signal. This could happen, for example, if a FOPT (see Refs. [63; 64; 65] for reviews) starts at a temperature above the 100 MeV scale but ends at the MeV scale due to supercooling. That is, the Universe remains in a false vacuum (a local minimum of the scalar potential) until the MeV scale because a transition to the true vacuum (the global minimum) is suppressed. A SGWB would be produced by percolation of bubbles of true vacuum. This was previously considered in the context of the electroweak phase transition [66; 67; 68; 69; 70; 71; 72; 73] and was discussed as a possible new physics explanation by NANOGrav [11; 13]. Supercooling could help new physics explanations evade constraints on MeV scale modifications to the SM and connect a nHz signal to new physics and phenomenology at the electroweak scale or above. For example, a scalar cubic coupling driving a supercooled FOPT [66] could be observable through Higgs boson pair production at the Large Hadron Collider (LHC; [74]).
In this work, however, we raise two difficulties with supercooled FOPTs. We explicitly demonstrate that these difficulties rule out one of the earliest models that was constructed to explain a nHz GW signal through a supercooled
FOPT [66]. This model was prominently cited as a viable example [11; 13]. The difficulties are that, firstly, supercooled FOPTs that reach nucleation may still struggle to complete [75] and requiring completion places stringent limits on the viable parameter space [76]. We show for the model proposed in Ref. [66] that requiring the phase transition completes places a lower bound on the transition temperature, or more precisely the percolation temperature, and that the phase transition does not complete for the low temperatures associated with a nHz signal. This finding is consistent with brief remarks in Ref. [77] and, as mentioned there, similar to the graceful-exit problem in old inflation [78]. Secondly, even if the completion criteria are ignored or can be evaded, the energy released by the phase transition reheats the Universe to about the new physics scale [59]. The resulting hierarchy between the percolation and reheating temperatures makes it challenging to compute the SGWB spectrum.
## II Cubic potential and benchmarks
We consider a modification to the SM Higgs potential to include a cubic term,
\[V_{0}(r)=-\frac{\mu^{2}}{2}r^{2}+\frac{\kappa}{3}r^{3}+\frac{\lambda}{4}r^{4}. \tag{1}\]
Details of the model and the radiative and finite-temperature corrections included are described in Appendix A.1.
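For orientation, the short sketch below evaluates the tree-level potential of eq. (1) and its non-trivial extrema obtained from \(dV_{0}/dr=r(\lambda r^{2}+\kappa r-\mu^{2})=0\); the numerical values of \(\mu\) and \(\lambda\) are illustrative placeholders (chosen only so that the broken minimum lands near the electroweak scale) and are not the fitted values of appendix A.1.

```python
import numpy as np

def V0(r, mu2, kappa, lam):
    # Eq. (1): tree-level potential
    return -0.5 * mu2 * r**2 + (kappa / 3.0) * r**3 + 0.25 * lam * r**4

def nontrivial_extrema(mu2, kappa, lam):
    # Roots of dV0/dr = r * (lam*r^2 + kappa*r - mu^2) = 0 away from r = 0
    disc = np.sqrt(kappa**2 + 4.0 * lam * mu2)
    return (-kappa - disc) / (2.0 * lam), (-kappa + disc) / (2.0 * lam)

# Placeholder parameters in GeV units; kappa matches BP1, mu and lam are illustrative only
mu2, kappa, lam = 52.0**2, -116.81, 0.52
r_low, r_high = nontrivial_extrema(mu2, kappa, lam)
print(r_high, V0(r_high, mu2, kappa, lam) - V0(0.0, mu2, kappa, lam))
```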
We consider two benchmark points to highlight the challenges of fitting a nHz signal with this cubic potential. These benchmarks are selected to probe two criteria: 1) realistic percolation, that is, that the physical volume of the false vacuum is decreasing at the onset of percolation (see eq. (14)); and 2) having a completion temperature (see eq. (19)). These benchmarks are:
\[\text{BP1:}\ \kappa =-116.81\,\text{GeV}, \tag{2}\] \[\text{BP2:}\ \kappa =-117.56\,\text{GeV}. \tag{3}\]
BP1 resulted in the most supercooling for which the transition satisfies both criteria, though fails to supercool to sub-GeV temperatures. For \(\kappa\) just below \(-116.81\,\text{GeV}\) we find that percolation is unrealistic by eq. (14) even though there is a completion temperature by eq. (19). BP2 resulted in stronger supercooling with a percolation temperature of \(100\,\text{MeV}\); however, the transition did not complete. Although BP2 was chosen so that percolation was estimated to begin at \(100\,\text{MeV}\) according to the usual condition, we find that the space between bubbles continues to expand below \(100\,\text{MeV}\). Thus, despite a nominal percolation temperature, both percolation and completion could be unrealistic in BP2. Without significant percolation of bubbles, the phase transition would not generate a SGWB. We discuss the results in the next section, with further details of the analysis left to the appendices.
## III Challenges
### Challenge 1 -- completion
A possible strategy for achieving a peak frequency at the nHz scale is to consider a strongly supercooled phase transition, where bubble percolation is delayed to below the GeV scale and percolation is defined by \(P_{f}(T_{p})=0.71\). However, in many models, a first-order electroweak phase transition has bubbles nucleating at around the electroweak scale. There is then an extended period of bubble growth and expansion of space. If bubbles grow too quickly compared to the expansion rate of the Universe, the bubbles will percolate before sufficient supercooling. Yet if bubbles grow too slowly the transition may never complete due to the space between bubbles inflating [77; 79; 75]; this effect can cause both the realistic percolation condition eq. (14) and the condition for a completion temperature to fail. Thus, while it is possible to tune model parameters to achieve percolation at sub-GeV temperatures, completion of the transition becomes less likely as supercooling is increased. We define the completion temperature \(T_{f}\) as the temperature for which false vacuum fraction \(P_{f}\) is \(1\%\); that is, \(P_{f}(T_{f})=0.01\). See appendix A.2 for details.
We find that completion is impossible for the cubic potential if \(T_{p}\lesssim 1\,\text{GeV}\). The same arguments apply to the models considered in Ref. [75]. In the cubic potential, strong supercooling implies a Gaussian bubble nucleation rate peaking at \(T_{\Gamma}\sim 50\,\text{GeV}\). Percolation at say \(T_{p}\sim 1\,\text{GeV}\) and completion shortly after implies that the false vacuum fraction must drop sharply around \(T_{p}\). Because \(T_{\Gamma}\gg T_{p}\), we must have \(P_{f}\approx 1\) until \(T\sim T_{p}\), otherwise the influx of nucleated bubbles would quickly percolate. However, in the cubic potential, strong supercooling means that the false vacuum fraction decreases slowly over a large range of temperatures. This is because delayed percolation requires bubbles to slowly take over the Universe. Consequently, completion after the onset of percolation is also slow. This is demonstrated in fig. 1.
In Ref. [11], the cubic potential is suggested as a candidate model for a strongly supercooled phase transition that could explain the detected SGWB. The Universe was assumed to be radiation dominated in the original investigation [66] of detecting GWs from the cubic potential with PTAs. However, a more careful treatment of the energy density during strong supercooling shows that the Universe becomes vacuum dominated [77]. This leads to a period of rapid expansion that hinders bubble percolation and completion of the transition. In fact, one must check not only that \(P_{f}<0.01\) eventually, but also that the physical volume of the false vacuum is decreasing at \(T_{p}\)[77; 75; 79] (again see appendix A.2 for details).
We note that many studies still use the nucleation temperature \(T_{n}\) as a reference temperature for GW production. As argued in Ref. [75], the nucleation temperature is not an accurate predictor of GW production; the percolation temperature should be used instead. Figure 1 demonstrates the large difference between \(T_{n}\) and \(T_{p}\) in supercooled phase transitions. In BP1 the difference is \(\mathcal{O}(10\,\text{GeV})\). In BP2 there is no nucleation temperature -- one might be tempted to assume GWs cannot be produced because of this. However, percolation and
completion are possible even without a nucleation temperature [75]. Another large source of error is the use of \(\beta/H\) for estimating the timescale of the transition. The mean bubble separation can be used instead as described in appendix A.3.
### Challenge 2 -- reheating
Even if the completion constraints can be avoided, a second issue was recently observed [59]. Whilst strong supercooling can lower the percolation temperature down to \(T_{p}\approx 100\,\mathrm{MeV}\) as in BP2 or even lower, the energy released from the phase transition reheats the plasma, creating a hierarchy \(T_{\mathrm{reh}}\gg T_{p}\). Indeed, the reheating and percolation temperatures are approximately related by [77]
\[T_{\mathrm{reh}}\simeq(1+\alpha)^{1/4}\,T_{p}, \tag{4}\]
such that the substantial latent heat in a strongly supercooled transition, \(\alpha\gg 1\), implies that \(T_{\mathrm{reh}}\gg T_{p}\). Ref. [59] approximates the latent heat by the free energy difference (\(\Delta V\)) divided by radiation energy density and shows that in the Coleman-Weinberg model the latent heat is so large that the Universe reheats well above the percolation temperature and back to the scale of new physics.
A simple scaling argument suggests that this observation -- that supercooled FOPTs reheat to the scale of new physics, \(M\) -- is generic. The new physics creates a barrier between minima so we expect \(\Delta V\sim M^{4}\) and because the radiation energy density goes like \(T_{p}^{4}\), we expect the latent heat may go like \(\alpha\sim\frac{M^{4}}{T_{p}^{4}}\). This leads to
\[T_{\mathrm{reh}}\sim\left(\frac{M^{4}}{T_{p}^{4}}\right)^{\frac{1}{4}}\,T_{p}=M. \tag{5}\]
It is possible this scaling can be avoided by fine-tuning terms to achieve \(T_{\mathrm{reh}}\ll M\) if \(\Delta V\ll M^{4}\).
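As a rough numerical illustration of this scaling argument (a sketch only; the value of \(M\) and the proportionality \(\alpha\sim M^{4}/T_{p}^{4}\) are assumed here purely for illustration):

```python
# Illustrative check of the scaling T_reh ~ M for a strongly supercooled FOPT.
# Assumes alpha ~ (M / T_p)^4 and the instantaneous-reheating estimate
# T_reh ~ (1 + alpha)^(1/4) * T_p of eq. (4); M = 100 GeV is a placeholder scale.
M = 100.0  # new-physics scale in GeV (assumed for illustration)

for T_p in [10.0, 1.0, 0.1, 0.01]:  # percolation temperatures in GeV
    alpha = (M / T_p) ** 4          # latent heat over radiation energy density
    T_reh = (1.0 + alpha) ** 0.25 * T_p
    print(f"T_p = {T_p:6.2f} GeV  ->  alpha = {alpha:.1e},  T_reh = {T_reh:.1f} GeV")
# The reheating temperature stays pinned near M regardless of how low T_p is.
```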
This argument, however, relies on the simple approximation of the reheating temperature in eq. (4) and crude dimensional analysis. We now confirm that this problem exists and is unavoidable in a careful analysis of the example model we consider. This careful treatment is general and can be used in other models. We assume that the reheating occurs instantaneously around the time of bubble percolation, and use conservation of energy so that the reheating temperature can be obtained from [80; 75]
\[\rho(\phi_{f}(T_{p}),T_{p})=\rho(\phi_{t}(T_{\mathrm{reh}}),T_{\mathrm{reh}}), \tag{6}\]
where \(\phi_{f}\) and \(\phi_{t}\) are the false and true vacua, respectively, and \(\rho\) is the energy density. For BP1, the percolation temperature is \(T_{p}\approx 38.5\,\mathrm{GeV}\), and the transition completes and reheats the Universe to \(T_{\mathrm{reh}}\approx 44.8\,\mathrm{GeV}\). The reheating temperature exceeds the percolation temperature due to the energy released from the supercooled phase transition, though they remain of the same order of magnitude. For BP2, however, the percolation temperature drops to \(T_{p}\approx 100\,\mathrm{MeV}\), whereas the reheating temperature is \(T_{\mathrm{reh}}\approx 36.0\,\mathrm{GeV}\); more than two orders of magnitude larger.2
Footnote 2: The approximation eq. (4) leads to similar reheating temperatures: \(43.9\,\mathrm{GeV}\) and \(33.6\,\mathrm{GeV}\) for BP1 and BP2, respectively.
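The following minimal sketch shows how eq. (6) can be solved numerically for \(T_{\rm reh}\). The energy densities here are approximated by radiation plus a constant vacuum-energy offset \(\Delta V\), a placeholder for the full effective potential of appendix A.1, so the numbers produced are illustrative rather than the BP1/BP2 values quoted above.

```python
import numpy as np
from scipy.optimize import brentq

g_star = 100.0    # effective relativistic degrees of freedom (assumed)
delta_V = 1.0e7   # GeV^4, false-true vacuum energy difference (placeholder value)

def rho_false(T):
    """Energy density in the false vacuum: radiation plus vacuum-energy offset."""
    return np.pi**2 / 30.0 * g_star * T**4 + delta_V

def rho_true(T):
    """Energy density in the true vacuum: radiation only (offset set to zero)."""
    return np.pi**2 / 30.0 * g_star * T**4

def reheating_temperature(T_p):
    """Solve rho_false(T_p) = rho_true(T_reh) for T_reh, cf. eq. (6)."""
    target = rho_false(T_p)
    return brentq(lambda T: rho_true(T) - target, T_p, 1e4)

for T_p in [38.5, 0.1]:  # roughly BP1- and BP2-like percolation temperatures
    print(f"T_p = {T_p:6.2f} GeV  ->  T_reh = {reheating_temperature(T_p):6.2f} GeV")
# With the placeholder delta_V these outputs will not reproduce the quoted
# BP1/BP2 reheating temperatures; only the qualitative behaviour is captured.
```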
We show the behavior of the reheating temperature as a function of percolation temperature in fig. 2. These results come from a scan over \(\kappa\in[-117.558\,\mathrm{GeV},-116.800\,\mathrm{GeV}]\). We clearly see that the reheating temperature tends towards a constant value \(T_{\mathrm{reh}}\approx 36\,\mathrm{GeV}\) for \(T_{p}\to 0\). As we now discuss, the fact that \(T_{\Gamma}\sim T_{\mathrm{reh}}\gg T_{p}\) breaks assumptions typically made when computing the SGWB.
### Gravitational Wave Spectra
Figure 1: The false vacuum fraction as a function of temperature for BP1 (blue, right-most solid curve) and BP2 (orange, left-most solid curve). The nucleation (dotted), percolation (dashed) and completion (dash-dotted) temperatures are shown for both benchmark points. However, BP2 only has a percolation temperature, which is at \(T_{p}\approx 100\,\mathrm{MeV}\).
Figure 2: The reheating temperature \(T_{\mathrm{reh}}\) against percolation temperature \(T_{p}\) as \(\kappa\) is varied. The dashed black line corresponds to \(T_{\mathrm{reh}}=T_{p}\). We see that \(T_{\mathrm{reh}}\gtrsim 36\,\mathrm{GeV}\) even when \(T_{p}\to 0\). See the main text for details.
The frequencies of a SGWB created at a percolation temperature \(T_{p}\) would be redshifted from the reheating temperature \(T_{\rm reh}\) to the current temperature \(T\simeq 2.725\,\)K (for a review, see section 6.2 of Ref. [65]). From the peak frequency equations (e.g. eq. (10)) and the frequency redshift equation assuming radiation domination (see Ref. [65]), the peak frequency of the SGWB today should be
\[f_{p}\approx 10\,{\rm nHz}\,\left(\frac{g_{*}(T_{\rm reh})}{100}\right)^{\frac{1} {3}}\left(\frac{T_{\rm reh}}{100\,{\rm MeV}}\right)\left(\frac{1}{R_{*}H(T_{ \rm reh})}\right), \tag{7}\]
where \(R_{*}\) is the mean bubble separation, \(H\) is the Hubble parameter and \(g_{*}\) is the number of effective degrees of freedom.3 In the absence of supercooling we anticipate that \(T_{\rm reh}\sim T_{p}\), such that \(R_{*}H(T_{\rm reh})\sim R_{*}H(T_{p})\) and since the bubbles would not have a long time to grow \(R_{*}H(T_{p})\lesssim 1\). Thus, in the absence of supercooling, we expect \(T_{p}\sim 100\,\)MeV to lead to a \(\sim 10\,\)nHz signal.
Footnote 3: We apply suppression factors from Ref. [81] to the degrees of freedom of each particle when estimating \(g_{*}\). This incorporates effects of particles decoupling from the thermal bath as the temperature drops below their respective mass.
In this cubic model, however, \(T_{p}\sim 100\,\)MeV requires strong supercooling, so we now consider an analysis more appropriate for this scenario. At the time of the phase transition the peak frequency \(f_{p,*}\) is set by the mean bubble separation \(R_{*}\) via \(f_{p,*}\sim 1/R_{*}\)[63]. Using the frequency redshift equation (14), the peak frequency of the SGWB today scales as
\[f_{p}\sim\frac{1\,{\rm GeV}}{R_{*}(T_{p})s_{t}(T_{\rm reh})^{1/3}}, \tag{8}\]
where \(s_{t}\) is the true vacuum entropy density (see the appendices for further details). Because radiation domination is a valid assumption in the true vacuum, the entropy density scales as \(s_{t}(T)\sim T^{3}\).
One can show that \(R_{*}\sim 1\,{\rm GeV}/(T_{\Gamma}T_{p}N^{1/3})\) if bubbles nucleate simultaneously at \(T_{\Gamma}\), where \(N\) is the total number of bubbles nucleated per Hubble volume throughout the transition.4 Combining this with eq. (8), we obtain
Footnote 4: See section 3.6 of Ref. [75]. Simultaneous nucleation is an extreme case of the Gaussian nucleation that we observe in this model for strong supercooling. We find it to be a good predictor for the peak frequency scaling.
\[f_{p}\sim T_{p}\left(\frac{T_{\Gamma}N^{\frac{1}{3}}}{T_{\rm reh}}\right). \tag{9}\]
Numerically, we find that \(N^{\frac{1}{3}}\), \(T_{\Gamma}\) and \(T_{\rm reh}\) (and thus the right-most factor) depend only weakly on the amount of supercooling (see fig. 2). Thus, for strong supercooling we find the relationship \(f_{p}\sim T_{p}\). This suggests that one can obtain an arbitrarily low peak frequency by fine-tuning the percolation temperature.
In the cubic model, these arguments are surprisingly accurate. Indeed, we find numerically that
\[\frac{1}{R_{*}H(T_{\rm reh})}\simeq 1.1\,\left(\frac{T_{p}}{T_{\rm reh}} \right)\left(\frac{T_{\Gamma}N^{\frac{1}{3}}}{T_{\rm reh}}\right). \tag{10}\]
Assuming radiation domination in the true vacuum for \(H(T_{\rm reh})\) and that \(g_{*}\approx 100\), eqs. (7) and (10) lead to
\[f_{p}\approx 10\,{\rm nHz}\,\left(\frac{T_{p}}{100\,{\rm MeV}}\right)\left( \frac{T_{\Gamma}N^{\frac{1}{3}}}{T_{\rm reh}}\right), \tag{11}\]
in agreement with the scaling anticipated in eq. (9). The right-most factor in eqs. (10) and (11) is \(\mathcal{O}(1)\) and approximately independent of the amount of supercooling. Thus, to achieve a redshifted peak frequency of \(10\,\)nHz, we require \(T_{p}\approx 100\,\)MeV.
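As a quick numerical check of eq. (11), using the \(T_{\Gamma}\) and \(T_{\rm reh}\) values quoted in the text and an assumed order-one value for \(N^{1/3}\):

```python
# Rough evaluation of the peak-frequency scaling in eq. (11).
# T_Gamma ~ 50 GeV and T_reh ~ 36 GeV are taken from the text; N^(1/3) ~ 1
# is an assumed order-one value (N is not quoted explicitly here).
T_p      = 0.1    # GeV, BP2-like percolation temperature
T_gamma  = 50.0   # GeV, temperature at which the nucleation rate peaks
T_reh    = 36.0   # GeV, reheating temperature
N_cubert = 1.0    # N^(1/3), assumed O(1)

f_peak_nHz = 10.0 * (T_p / 0.1) * (T_gamma * N_cubert / T_reh)
print(f"estimated peak frequency ~ {f_peak_nHz:.0f} nHz")  # of order 10 nHz
```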
Comparing eq. (11) with the result in the absence of supercooling eq. (7), supercooling and subsequent substantial reheating redshift the frequency more than usual. However, assuming radiation domination eq. (10) leads to
\[R_{*}H(T_{p})\approx\frac{T_{\Gamma}}{T_{p}}. \tag{12}\]
This increase in bubble radius caused by the delay between nucleation and percolation partially offsets the impact of additional redshifting.
Our findings are contrary to the claim in Refs. [77; 59] that reheating makes it difficult to reach GW frequencies relevant for PTAs. However, we do agree with the finding in Ref. [77] that completion poses an issue for nHz GW signals in this model. As found in section III.1, a percolation temperature of \(T_{p}=100\,\)MeV would not result in a successful transition. Not only would the majority of the Universe remain in the false vacuum even today, the true vacuum bubbles would not actually percolate due to the inflating space between the bubbles.
We now consider the SGWB predictions. We take great care in our predictions. For example, we use the pseudo-trace [82] to avoid assumptions about the speed of sound and the equation of state that can break down in realistic models. We also use the mean bubble separation rather than proxy timescales derived from the bounce action that are invalid for strongly supercooled phase transitions. A full description of our treatment is given in appendices A.3 and A.5.
In this model we find that the bubbles mostly nucleate at temperatures around \(T_{\Gamma}\sim 50\,\)GeV. We thus expect that friction from the plasma is sufficient to prevent runaway bubble walls, despite the large pressure difference. This implies that the SGWB from bubble collisions is negligible and that all the available energy goes into the fluid, resulting in a SGWB from sound waves and turbulence.
In fig. 3 we show the predicted SGWB spectrum for both BP1 (upper panel; the model with maximal supercooling while guaranteeing completion) and BP2 (lower panel; the model with percolation at \(100\,\)MeV but no completion). The peak frequencies are about \(5\times 10^{4}\,\)nHz and \(16\,\)nHz for BP1 and BP2, respectively. BP1 represents the lowest peak frequency that can be obtained for realistic scenarios in this model because for lower percolation temperatures the transition does not complete. Thus this model _cannot_ explain the nHz signal observed by PTA experiments despite various optimistic statements from the literature. For comparison we show the SGWB prediction if one were to assume vacuum
transitions (dotted grey curves). This assumption is not realistic for this model and in any case does not result in agreement with the observed spectrum.
If one ignores the completion requirement, BP2 shows that the peak frequency can be reduced to match the nHz signal observed by PTA experiments, though the amplitude is several orders of magnitude too high. Caution should be taken interpreting the SGWB predictions for such strong supercooling because it is well beyond what has been probed in simulations. Besides, despite a nominal percolation temperature, bubbles are not expected to percolate in BP2 because the false vacuum between them is inflating. Without percolation, GWs would not be generated.
## IV Conclusions
Supercooled FOPTs are an intriguing explanation of the nHz SGWB recently observed by several PTAs, as they could connect a nHz signal to phenomenology and experiments at the electroweak scale. Indeed, they were mentioned as a possibility [1; 11]. However, we demonstrate that there are two major difficulties in supercooled explanations. First, completion of the transition is hindered by vacuum domination. We demonstrate that this rules out the possibility of explaining the PTA signal in the supercooling model mentioned as a prototypical example in Ref. [1; 11].
Second, the Universe typically reheats to the scale of any physics driving the transition. This makes it challenging to compute the SGWB spectrum because factors involving ratios of the reheat, nucleation and percolation temperatures -- often implicitly neglected -- must be carefully included in fit formulae. In any case, the predictions are questionable because the thermal parameters in these scenarios are well beyond those in hydrodynamical simulations on which fit formulae are based. We anticipate that these issues are quite generic and they should be carefully checked in any supercooled explanation.
###### Acknowledgements.
AF was supported by RDF-22-02-079. LM was supported by an Australian Government Research Training Program (RTP) Scholarship and a Monash Graduate Excellence Scholarship (MGES). The work of PA is supported by the National Natural Science Foundation of China (NNSFC) under grant No. 12150610460 and by the supporting fund for foreign experts grant wgzx2022021L.
## Appendix A Predicting gravitational waves
In this appendix, we summarize our calculations for the phase transition and GW spectra. First, we specify the model and the effective potential we used, then we describe the analysis of the phase transition. We then give a detailed description of how we determine the relevant thermal parameters and the redshift factors for the peak amplitude and peak frequency of the gravitational waves. Finally, we provide an outline of the fit models we use for the sound wave, turbulence, and collision contributions to the GW spectra.
### Effective Potential
Following Ref. [66; 83], we construct a simple model that falls under the category of non-linearly realized electroweak symmetry. The SM Higgs doublet belongs to the coset group \(G_{c}=SU(2)_{L}\times U(1)_{Y}/U(1)_{\rm EM}\) and can be expressed as
\[H(x)=\frac{r(x)}{\sqrt{2}}e^{i\theta^{i}(x)T^{i}}\begin{pmatrix}0\\ 1\end{pmatrix}, \tag{1}\]
where \(i=1-3\). The Higgs boson is a singlet field in the SM, denoted as \(r(x)\sim(1,1)_{0}\), and the fields \(\theta^{i}(x)\) correspond to three would-be Goldstone bosons. The physical Higgs boson, \(h\), is a fluctuation in \(r\) around the vacuum expectation value of electroweak symmetry breaking, that is, \(r=\langle r\rangle+h\), where \(\langle r\rangle=v=246\,\mathrm{GeV}\).
Figure 3: The SGWB from BP1 (top panel; strongest supercooling for which the FOPT completed) and BP2 (bottom panel; strongest supercooling for which the percolation temperature could be computed) fail to match the observations by PTA experiments (violins). Our BPs predict a non-vacuum transition in which all available energy goes into the fluid, such that total SGWB (solid red) is the sum of contributions from sound waves (dashed blue) and turbulence (dashed green). For comparison, we show a vacuum transition where bubble collisions are the sole source of GWs (dotted grey).
The general tree-level Higgs potential for the SM singlet field, \(r\), can be written as
\[V_{0}(r)=-\frac{\mu^{2}}{2}r^{2}+\frac{\kappa}{3}r^{3}+\frac{\lambda}{4}r^{4}. \tag{10}\]
We add zero- and finite-temperature one-loop Coleman-Weinberg corrections to the tree-level potential,
\[V(r,T)=V_{0}(r)+\left[V_{\mathrm{CW}}(r)+V_{T}(r,T)\right]_{m_{i}^{2}\to m_{i}^{2}+\Delta_{T}}, \tag{11}\]
and we replace all scalar and longitudinal gauge boson masses \(m_{i}^{2}\) with the thermal masses \(m_{i}^{2}+\Delta_{T}\) (evaluated at lowest order), such that we are using the Parwani method [84] for Daisy resummation to address infrared divergences.5 The formulas for \(V_{\mathrm{CW}}(r)\) and \(V_{T}(r,T)\) and the thermal masses can be found in the Appendix of Ref. [66]. We apply Boltzmann suppression factors \(e^{-m^{2}/T^{2}}\) to the Debye corrections as discussed in appendix A.3. There are, of course, other possible models with supercooled FOPTs, including the classic Coleman-Weinberg model [86].
Footnote 5: One could resum additional terms by matching to a three-dimensional effective field theory (see e.g. ref. [85]) but here we stick more closely to the procedure used in the original work on this idea.
The model parameters, namely \(\mu\), \(\kappa\), and \(\lambda\), are constrained by the tadpole and on-shell mass conditions,
\[\left.\frac{dV}{dr}\right|_{r=v}=0,\qquad\left.\frac{d^{2}V}{dr^{2}}\right|_{r=v}=m_{h}^{2}, \tag{12}\]
where \(m_{h}\simeq 125\,\mathrm{GeV}\). We use them to eliminate \(\mu^{2}\) and \(\lambda\) at the one-loop level through
\[\mu^{2} =\frac{1}{2}\left(m_{h}^{2}+\kappa v+\frac{3}{v}\left.\frac{ \mathrm{d}V_{\mathrm{CW}}}{\mathrm{d}r}\right|_{v}-\left.\frac{\mathrm{d}^{2} V_{\mathrm{CW}}}{\mathrm{d}r^{2}}\right|_{v}\right), \tag{13a}\] \[\lambda =\frac{1}{2v^{2}}\left(m_{h}^{2}-\kappa v+\frac{1}{v}\left.\frac {\mathrm{d}V_{\mathrm{CW}}}{\mathrm{d}r}\right|_{v}-\left.\frac{\mathrm{d}^{2} V_{\mathrm{CW}}}{\mathrm{d}r^{2}}\right|_{v}\right). \tag{13b}\]
This requires an iterative procedure starting with the tree-level tadpole equations and repeatedly using one-loop extraction until convergence. This leaves the cubic coupling \(\kappa\) as the only free parameter. This cubic term creates a barrier between minima in the potential and can lead to supercooling. The requirement that the potential must be bounded from below ensures that \(\lambda>0\). By convention, we choose \(\kappa<0\) so that \(\langle r\rangle>0\).
In addition to the particles stated in the Appendix of Ref. [66], we add radiative corrections from all remaining quarks and the muon and tau. The omitted states are always so light that we can treat them as radiation. We therefore account for 81 effective degrees of freedom in the one-loop and finite-temperature corrections, leaving 25.75 degrees of freedom from light particles that we treat as pure radiation. Thus we add a final term to the effective potential:
\[V_{\mathrm{rad}}(T)=-\frac{\pi^{2}}{90}g_{*}^{\prime}T^{4}, \tag{14}\]
where \(g_{*}^{\prime}=106.75-81=25.75\).
### Phase transition analysis
We use PhaseTracer [87] to determine the phase structure (see ref. [88] for a discussion of uncertainties) and TransitionSolver [80] to evaluate the phase history and extract the GW signal. The particle physics model considered in this study has at most one first-order phase transition, making phase history evaluation a simple matter of analyzing the single first-order phase transition. We use a modified version of CosmoTransitions [89] to calculate the bounce action during transition analysis.6
Footnote 6: The modifications are described in Appendix F of Ref. [75]. Most important are the fixes for underflow and overflow errors.
TransitionSolver tracks the false vacuum fraction [75]
\[P_{f}(T)=\exp\left[-\frac{4\pi}{3}v_{w}^{3}\int_{T}^{T_{c}}\frac{\Gamma(T^{\prime})\,dT^{\prime}}{T^{\prime 4}H(T^{\prime})}\left(\int_{T}^{T^{\prime}}\frac{dT^{\prime\prime}}{H(T^{\prime\prime})}\right)^{3}\right] \tag{15}\]
as a function of temperature,7 where \(v_{w}\) is the bubble wall velocity, \(\Gamma\) is the bubble nucleation rate, \(H\) is the Hubble parameter, and \(T_{c}\) is the critical temperature at which the two phases have equal free-energy density. This allows us to evaluate the GW power spectrum at the onset of percolation. Percolation occurs when the false vacuum fraction falls to 71% [90; 91; 92]. Thus we define the percolation temperature, \(T_{p}\), through
Footnote 7: See Ref. [75] and section 4 of Ref. [65] for the assumptions implicit in eq. (15).
\[P_{f}(T_{p})=0.71. \tag{16}\]
This temperature will be used as the reference temperature for GW production. Additionally, we define the completion (or final) temperature, \(T_{f}\), through
\[P_{f}(T_{f})=0.01, \tag{17}\]
as an indication of the end of the phase transition.
The quantities in eq. (15) are estimated as follows. The bubble wall velocity \(v_{w}\) is typically ultra-relativistic in the strongly supercooled scenarios we consider here, so we take \(v_{w}\approx 1\). The bubble nucleation rate is estimated as [93]
\[\Gamma(T)=T^{4}\Bigg{(}\frac{S(T)}{2\pi}\Bigg{)}^{\frac{3}{2}}\exp(-S(T)), \tag{18}\]
where \(S(T)\) is the bounce action. The Hubble parameter
\[H(T)=\sqrt{\frac{8\pi G}{3}\rho_{\mathrm{tot}}(T)} \tag{19}\]
depends on the total energy density [75]
\[\rho_{\rm tot}(T)=\rho_{f}(T)-\rho_{\rm gs}, \tag{12}\]
and \(G=6.7088\times 10^{-39}\,\mathrm{GeV}^{-2}\) is Newton's gravitational constant [94]. The energy density of phase \(\mathbf{\phi}_{i}\) is given by [95]
\[\rho_{i}(T)=V(\mathbf{\phi}_{i},T)-T\left.\frac{\partial V}{\partial T}\right|_{ \mathbf{\phi}_{i}(T)}, \tag{13}\]
where \(V(\mathbf{\phi}_{i},T)\) is the effective potential. The subscript \(f\) in eq. (12) denotes the false vacuum and gs denotes the zero-temperature ground state of the potential. Finally, the transition is analysed by evaluating the false vacuum fraction eq. (15) starting near the critical temperature and decreasing the temperature until the transition completes. We define completion to be when \(P_{f}(T_{f})=0.01\), and further check that the physical volume of the false vacuum is decreasing at \(T_{p}\) [75; 79]; that is,
\[3+T_{p}\left.\frac{\mathrm{d}\mathcal{V}_{t}^{\rm ext}}{\mathrm{d}T}\right|_{ T_{p}}<0, \tag{14}\]
where \(-\mathcal{V}_{t}^{\rm ext}\) is the exponent in eq. (15). This condition was empirically determined to be the strongest completion criterion of those considered in Ref. [75], and continues to be so in the models considered in this study.
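For concreteness, the sketch below shows how the false vacuum fraction of eq. (15) and the temperatures defined in eqs. (16) and (17) can be evaluated numerically. The nucleation and Hubble rates used here are simple placeholders (a Gaussian rate and radiation domination), not the bounce-action rate of eq. (18) and the Hubble rate of eq. (19) used in the actual analysis, so the output temperatures are illustrative only.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Minimal sketch of the false-vacuum-fraction bookkeeping in eqs. (15)-(17).
v_w = 1.0        # bubble wall velocity
T_c = 100.0      # critical temperature in GeV (placeholder)
M_PL = 1.22e19   # Planck mass in GeV

def gamma_rate(T, T_gamma=50.0, width=5.0, amplitude=1e-66):
    """Placeholder nucleation rate per unit volume in GeV^4 (Gaussian in T)."""
    return amplitude * T**4 * np.exp(-0.5 * ((T - T_gamma) / width) ** 2)

def hubble(T):
    """Placeholder Hubble rate assuming radiation domination with g_* = 100."""
    return 1.66 * np.sqrt(100.0) * T**2 / M_PL

def false_vacuum_fraction(T):
    """P_f(T) of eq. (15) evaluated with the placeholder rates above."""
    def integrand(T1):
        comoving_radius, _ = quad(lambda T2: 1.0 / hubble(T2), T, T1)
        return gamma_rate(T1) / (T1**4 * hubble(T1)) * comoving_radius**3
    extended_volume, _ = quad(integrand, T, T_c)
    return np.exp(-4.0 * np.pi / 3.0 * v_w**3 * extended_volume)

def temperature_where(target, T_low=1.0, T_high=T_c - 1e-3):
    """Solve P_f(T) = target: 0.71 for T_p (eq. (16)), 0.01 for T_f (eq. (17))."""
    return brentq(lambda T: false_vacuum_fraction(T) - target, T_low, T_high)

T_p = temperature_where(0.71)
T_f = temperature_where(0.01)
print(f"T_p ~ {T_p:.1f} GeV,  T_f ~ {T_f:.1f} GeV  (placeholder inputs only)")
```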
### Thermal parameters
The GW signal depends on several thermal parameters: the kinetic energy fraction \(K\), the characteristic length scale \(L_{*}\), the bubble wall velocity \(v_{w}\), and a reference temperature \(T_{*}\) for GW production. We take the reference temperature to be the percolation temperature \(T_{p}\) because percolation necessitates bubble collisions [75]. As explained above, we take \(v_{w}\approx 1\) due to the strong supercooling. Specifically, we use the Chapman-Jouguet velocity [96]
\[v_{w}=v_{\rm CJ}=\frac{1+\sqrt{3\alpha(1+c_{s,f}^{2}(3\alpha-1))}}{c_{s,f}^{-1 }+3\alpha c_{s,f}}. \tag{15}\]
The Chapman-Jouguet velocity is the lowest velocity for a detonation solution, and we expect more realistically that \(v_{w}>v_{\rm CJ}\)[97; 65]. The choice \(v_{w}=v_{\rm CJ}\) is as arbitrary a choice as any fixed value of \(v_{w}\), but has the benefit of always being a supersonic detonation. We note that a choice of \(v_{w}<1\) is required to estimate the kinetic energy fraction.
The kinetic energy fraction is the kinetic energy available to source GWs, divided by the total energy density \(\rho_{\rm tot}\). We calculate this as [82]
\[K=\frac{\bar{\theta}_{f}(T_{*})-\bar{\theta}_{t}(T_{*})}{\rho_{\rm tot}(T_{*})}\kappa(\alpha,c_{s,f},c_{s,t}), \tag{16}\]
where
\[\alpha=\frac{4(\bar{\theta}_{f}(T_{*})-\bar{\theta}_{t}(T_{*}))}{3w_{f}} \tag{17}\]
is the transition strength parameter, and the pseudotrace \(\bar{\theta}\) is given by [96]
\[\bar{\theta}_{i}(T)=\frac{1}{4}\left(\rho_{i}(T)-\frac{p_{i}(T)}{c_{s,t}^{2}(T)}\right). \tag{18}\]
The pressure is \(p=-V\), the enthalpy is \(w=\rho+p\), and the speed of sound \(c_{s}\) in phase \(\mathbf{\phi}_{i}\) is given by
\[c_{s,i}^{2}(T)=\left.\frac{\partial_{T}V}{T\,\partial_{T}^{2}V}\right|_{\mathbf{\phi}_{i}(T)}. \tag{19}\]
This treatment of the kinetic energy fraction corresponds to model M2 of Refs. [82; 96]. We use the code snippet in the appendix of Ref. [96] to calculate \(\kappa(\alpha,c_{s,f},c_{s,t})\), although \(\kappa\) is independent of \(c_{s,f}\) for a supersonic detonation. We note that \(c_{s,f}\sim 1\) at very low temperature in our model if Boltzmann suppression is not employed. This is because the temperature-dependent contributions to the free energy density are dominated by the Debye corrections at low temperature. Hence, the free energy density in a phase scales roughly as \(V(T)=aT^{2}+b\) at low temperature, where \(a\) and \(b\) are temperature independent. Consequently, the sound speed is roughly the speed of light by eq. (19). However, applying Boltzmann suppression to the Debye corrections (as suggested in Ref. [82]) corrects the sound speed back towards \(c_{s}=1/\sqrt{3}\) at low temperature. Specifically, for BP2 we find \(c_{s,f}^{2}\approx c_{s,t}^{2}\approx 0.40\) at \(T_{p}=0.1\,\mathrm{GeV}\).
For turbulence, we take the efficiency coefficient \(\kappa_{\rm turb}\) to be 5% and show it merely for comparison. Modeling the efficiency of the turbulence source is still an open research problem [65]. While strong phase transitions could lead to significant rotational modes in the plasma [98], the resultant efficiency of GW production from turbulence is not yet clear.
We also consider a case where bubble collisions alone source GWs. In this case we ignore the sound wave and turbulence sources altogether and use \(K=\alpha/(1+\alpha)\) for the collision source. This assumes that the efficiency for generating GWs from the bubble collisions is maximal, which we take as a limiting case. We do not calculate the friction in the cubic potential so a proper estimate of the efficiency coefficient for the collision source is not possible.
We use the mean bubble separation \(R_{*}\) for the characteristic length scale \(L_{*}\). We calculate \(R_{*}\) directly from the bubble number density, \(n_{B}(T)\): [65]
\[R_{*}(T)=(n_{B}(T))^{-\frac{1}{3}}=\left(T^{3}\!\!\int_{T}^{T_{c}}\!\!dT^{ \prime}\frac{\Gamma(T^{\prime})P_{f}(T^{\prime})}{T^{\prime 4}H(T^{\prime})}\right)^{-\frac{1}{3}}. \tag{20}\]
A common approach is to instead calculate
\[\frac{\beta}{H}=T\frac{\mathrm{d}S}{\mathrm{d}T}, \tag{21}\]
which is a characteristic timescale for an exponential nucleation rate. The GW fits then implicitly map \(\beta/H\) onto \(R_{*}\) through
\[R_{*}=(8\pi)^{\frac{1}{3}}\frac{v_{w}}{\beta}. \tag{22}\]
However, an exponential nucleation rate is not appropriate for a strongly supercooled phase transition in the model we investigate. Further, \(\beta/H\) becomes negative below \(T_{\Gamma}\) (i.e. below the minimum of the bounce action). While alternative mappings exist for a Gaussian nucleation rate [99, 77], usually eq. (100) is inverted in GW fits when using \(\beta/H\).
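A toy example of this failure mode is sketched below; the quadratic bounce action is a placeholder, not the action of the cubic potential.

```python
# Toy illustration of why beta/H = T dS/dT (eq. (21)) fails for strong
# supercooling: below the minimum of the bounce action at T_Gamma the
# derivative changes sign and beta/H becomes negative.
T_gamma = 50.0

def bounce_action(T):
    return 140.0 + 0.05 * (T - T_gamma) ** 2     # placeholder S(T), minimum at T_Gamma

def beta_over_H(T, dT=1e-3):
    return T * (bounce_action(T + dT) - bounce_action(T - dT)) / (2 * dT)

for T in [80.0, 60.0, 50.0, 40.0, 10.0]:
    print(f"T = {T:5.1f} GeV   beta/H = {beta_over_H(T):+8.1f}")
# beta/H > 0 above T_Gamma, ~0 at the minimum, and negative below it, so the
# mapping R_* = (8*pi)^(1/3) v_w / beta of eq. (22) cannot be used there.
```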
We also incorporate the suppression factor
\[\Upsilon(\tau_{\rm sw})=1-\frac{1}{\sqrt{1+2H_{*}\tau_{\rm sw}}}, \tag{101}\]
in our GW predictions, which arises from the finite lifetime of the sound wave source [100]. We use the shorthand notation \(H_{*}=H(T_{*})\). The timescale \(\tau_{\rm sw}\) is estimated by the shock formation time \(\tau_{\rm sw}\sim L_{*}/\overline{U}_{f}\), where \(\overline{U}_{f}=\sqrt{K\rho_{f}/\overline{w}}\) and \(\overline{w}\) is the average enthalpy density [65].
### Redshifting
The GW spectrum we see today is redshifted from the time of production. The frequency and amplitude scale differently (see Refs. [101, 65]). The redshift factors are usually obtained using conservation of entropy and the assumption of radiation domination. Here, we avoid the latter assumption, so our redshift factors may look unfamiliar.
Frequencies redshift according to [65]
\[f_{0}=\frac{a_{1}}{a_{0}}f_{1}=\mathcal{R}_{f}f_{1}, \tag{102}\]
where \(a\) is the scale factor of the Universe, and we have defined the redshift factor for frequency, \(\mathcal{R}_{f}\). Using conservation of entropy,
\[a_{0}^{3}s_{0}=a_{1}^{3}s_{1}, \tag{103}\]
where \(s\) is the entropy density, \(\mathcal{R}_{f}\) becomes
\[\mathcal{R}_{f}=\left(\frac{s_{0}}{s_{1}}\right)^{\frac{1}{3}}. \tag{104}\]
The number of entropic degrees of freedom at the current temperature \(T_{0}=2.725\,\mathrm{K}=2.348\times 10^{-13}\,\mathrm{GeV}\)[102] is
\[g_{s}(T_{0})=2+\frac{7}{11}N_{\rm eff}, \tag{105}\]
where \(N_{\rm eff}=3.046\) is the effective number of neutrinos. The entropy density today is \(s_{0}=2.237\times 10^{-38}\,\mathrm{GeV}^{3}\) which we computed from the temperature derivative of the effective potential. Because the frequency \(f_{1}\) is typically determined using quantities expressed in GeV, we apply the unit conversion \(1\,\mathrm{GeV}=1.519\times 10^{24}\,\mathrm{Hz}\) to express the dimensionful redshift factor for frequency as
\[\mathcal{R}_{f}=4.280\times 10^{11}\,\frac{\mathrm{Hz}}{\mathrm{GeV}}\, \left(\frac{1\,\mathrm{GeV}^{3}}{s_{1}}\right)^{\frac{1}{3}}. \tag{106}\]
The amplitude redshifts according to [65]
\[\Omega_{0}h^{2}=\left(\frac{a_{1}}{a_{0}}\right)^{4}\left(\frac{H_{1}}{H_{0}} \right)^{2}\Omega_{1}h^{2}=\mathcal{R}_{\Omega}\Omega_{1}, \tag{107}\]
where \(H\) is the Hubble parameter, and we have defined the redshift factor for the amplitude, \(\mathcal{R}_{\Omega}\), which absorbs the factor \(h^{2}\). The Hubble parameter today is \(H_{0}=100h\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}\). Using \(h=0.674\pm 0.005\)[103] and again converting from Hz to GeV, we have \(H_{0}=1.438\times 10^{-38}\,\mathrm{GeV}\). Thus, the dimensionless redshift factor for amplitude is
\[\mathcal{R}_{\Omega}=1.384\times 10^{25}\left(\frac{\mathrm{GeV}^{3}}{s_{1}}\right)^{\frac{4}{3}}\left(\frac{H_{1}}{\mathrm{GeV}}\right)^{2}. \tag{108}\]
If the GWs are produced at temperature \(T_{*}\) and reheating increases the temperature to \(T_{\rm reh}\) in the true vacuum, we take \(s_{1}=s_{t}(T_{\rm reh})\) and \(H_{1}=H(T_{*})\). We assume conservation of entropy in the true vacuum, where \(s_{t}=-\partial V/\partial T|_{\mathbf{\phi}_{t}}\), such that adiabatic cooling occurs for the temperature range \(T_{1}=T_{\rm reh}\) to \(T_{0}\). We use \(T_{*}\) in the Hubble parameter because \(\rho_{f}(T_{*})=\rho_{t}(T_{\rm reh})\) by the definition of \(T_{\rm reh}\) (eq. (6)). We find that the redshift factors \(\mathcal{R}_{f}\) and \(\mathcal{R}_{\Omega}\) are within 1% of the values obtained when assuming radiation domination, at least for BP1 and BP2. This demonstrates that radiation domination is a good assumption in the true vacuum in this model.
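The numerical prefactors quoted above can be reproduced directly from the constants given in this subsection; a short cross-check:

```python
# Cross-check of the numerical prefactors in eqs. (106) and (108), using only
# the constants quoted in this subsection.
s0 = 2.237e-38        # entropy density today, GeV^3
H0 = 1.438e-38        # Hubble rate today, GeV
h = 0.674
GeV_to_Hz = 1.519e24

R_f_prefactor = GeV_to_Hz * s0 ** (1.0 / 3.0)          # multiplies (GeV^3 / s_1)^(1/3)
R_Omega_prefactor = s0 ** (4.0 / 3.0) * h**2 / H0**2   # multiplies (GeV^3 / s_1)^(4/3) (H_1 / GeV)^2

print(f"R_f prefactor     = {R_f_prefactor:.3e} Hz/GeV")  # ~ 4.28e11 Hz/GeV
print(f"R_Omega prefactor = {R_Omega_prefactor:.3e}")     # ~ 1.38e25
```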
### Gravitational waves
We consider three contributions to the GW signal: bubble collisions, sound waves in the plasma, and magnetohydrodynamic turbulence in the plasma. For simplicity, we consider two scenarios: 1) non-runaway bubbles, where GWs are sourced purely by the plasma because the energy stored in bubble walls is dissipated into the plasma; and 2) runaway bubbles, where GWs are sourced purely by the energy stored in the bubble walls. We do not consider the fluid shells in this latter case. In the following, we reverse common mappings such as \(R_{*}=(8\pi)^{\frac{1}{3}}v_{w}/\beta\) and \(K=\kappa\alpha/(1+\alpha)\) to generalise the GW fits beyond assumptions made in the original papers. This generalisation comes at the cost of further extrapolation, beyond what is already inherent in using such fits. We also use our redshift factors derived in appendix A.4 instead of the radiation domination estimates. We refer the reader to the reviews in Ref. [63, 64, 65] for further discussions on the GW fits listed below.
We use the recent GW fit for the collision source from Ref. [104]. The redshifted peak amplitude is
\[\Omega_{\rm coll}(f)=\mathcal{R}_{\Omega}A\left(\frac{H_{*}R_{*}}{(8\pi)^{ \frac{1}{3}}v_{w}}\right)^{\!2}\!\!K^{2}\,S_{\rm coll}(f), \tag{109}\]
and the spectral shape is
\[S_{\rm coll}(f)=\frac{(a+b)^{c}}{\left[b\left(\frac{f}{f_{\rm coll}}\right)^{- \frac{2}{3}}+a\left(\frac{f}{f_{\rm coll}}\right)^{\frac{2}{3}}\right]^{c}}. \tag{110}\]
The redshifted peak frequency is
\[f_{\rm coll}=\mathcal{R}_{f}\left(\frac{0.77(8\pi)^{\frac{1}{3}}v_{w}}{2\pi R_{* }}\right). \tag{111}\]
The fit parameters \(A\), \(a\), \(b\), \(c\), and \(f_{p}\) (the peak frequency before redshifting) can be found in Table 1 in Ref. [104]; specifically the \(T_{rr}\propto R^{-3}\) column for bubbles. We normalised the spectral shape by moving \(A\) into eq. (157) as an explicit factor. We have mapped \(f_{p}/\beta\) onto \(1/R_{*}\) in eq. (160).
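For reference, a minimal sketch of the collision-source expressions exactly as written in eqs. (109)-(111); the fit constants \(A\), \(a\), \(b\), \(c\) must be supplied from Table 1 of Ref. [104], and all thermal inputs are placeholders to be taken from the transition analysis.

```python
import numpy as np

def spectral_shape_coll(f, f_coll, a, b, c):
    """Spectral shape of eq. (110)."""
    x = f / f_coll
    return (a + b) ** c / (b * x ** (-2.0 / 3.0) + a * x ** (2.0 / 3.0)) ** c

def omega_coll(f, *, A, a, b, c, K, R_star, H_star, v_w, R_Omega, R_f):
    """Redshifted collision spectrum, eqs. (109) and (111).

    A, a, b, c come from Table 1 of Ref. [104]; K, R_star, H_star, v_w and the
    redshift factors R_Omega, R_f come from the transition analysis.
    """
    f_coll = R_f * 0.77 * (8.0 * np.pi) ** (1.0 / 3.0) * v_w / (2.0 * np.pi * R_star)
    amplitude = R_Omega * A * (H_star * R_star / ((8.0 * np.pi) ** (1.0 / 3.0) * v_w)) ** 2 * K**2
    return amplitude * spectral_shape_coll(f, f_coll, a, b, c)
```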
For the sound wave source, we use the GW fits in the sound shell model [105] from Refs. [106; 107]. The redshifted peak amplitude is
\[h^{2}\Omega_{\rm sw}(f)=3\mathcal{R}_{\Omega}K^{2}\left(\frac{H_{*}R_{*}}{c_{ s,f}}\right)\frac{M\left(s,r_{b},b\right)}{\mu_{f}(r_{b})}\Upsilon(\tau_{\rm sw })\tilde{\Omega}_{\rm gw}, \tag{161}\]
with spectral shape
\[M\left(s,r_{b},b\right)=s^{9}\left(\frac{1+r_{b}^{4}}{r_{b}^{4}+s^{4}}\right)^ {\frac{a+b}{4}}\left(\frac{b+4}{b+4-m+ms^{2}}\right)^{\frac{b+4}{2}}, \tag{162}\]
where
\[m=\left(9r_{b}^{4}+b\right)/\left(r_{b}^{4}+1\right), \tag{163}\] \[s=f/f_{p}, \tag{164}\] \[r_{b}=f_{b}/f_{p}, \tag{165}\] \[\mu_{f}(r_{b})=4.78-6.27r_{b}+3.34r_{b}^{2}. \tag{166}\]
In eq. (161) we have used \(\tau_{c}\sim R_{*}/c_{s,f}\) for the autocorrelation timescale [64], hence the factor \(1/c_{s,f}\). We take \(\tilde{\Omega}_{\rm gw}=0.01\) in accordance with Table 4 of Ref. [108], and \(b=1\). The breaks in the power laws are governed by the mean bubble separation and the fluid shell thickness, which respectively correspond to the redshifted frequencies [108]
\[f_{b}=1.58\mathcal{R}_{f}\left(\frac{1}{R_{*}}\right)\left(\frac{z_{p}}{10}\right) \tag{167}\]
and
\[f_{p}=1.58\mathcal{R}_{f}\left(\frac{1}{R_{*}\Delta_{w}}\right)\left(\frac{z_ {p}}{10}\right). \tag{168}\]
The length scale for the fluid shell thickness is roughly [106]
\[R_{*}\Delta_{w}\approx R_{*}\big{|}v_{w}-c_{s,f}\big{|}/v_{w}, \tag{169}\]
although see Ref. [65] for further discussion. We take the dimensionless wavenumber at the peak to be \(z_{p}=10\) which is applicable for the supersonic detonations we consider [106; 108]. We note that a more recent analysis -- taking into account the scalar-driven propagation of uncollided sound shells -- reproduces the causal \(f^{3}\) scaling below the peak of the GW signal found in numerical simulations [109]. Additionally, the spectral shape should depend on the thermal parameters [109; 110].
Finally, for the turbulence fit, we use the fit from Ref. [111] based on the analysis in Ref. [112], using \(L_{*}=R_{*}\) rather than \(L_{*}\sim 2v_{w}/\beta\). The redshifted peak amplitude is
\[\Omega_{\rm turb}(f)=9.0\mathcal{R}_{\Omega}\left(H_{*}R_{*}\right)(\kappa_{ \rm turb}K)^{\frac{3}{2}}S_{\rm turb}(f), \tag{170}\]
with unnormalised spectral shape
\[S_{\rm turb}(f)=\frac{(f/f_{\rm turb})^{3}}{\left(1+f/f_{\rm turb}\right)^{\frac{11}{3}}\left(1+8\pi f/(\mathcal{R}_{f}H_{*})\right)}. \tag{171}\]
We take \(\kappa_{\rm turb}=0.05\) as a conservative estimate of the turbulence source in a strongly supercooled transition. The redshifted peak frequency is [112]
\[f_{\rm turb}=\mathcal{R}_{f}\frac{3.5}{R_{*}}. \tag{172}\]
Note that we have not assumed \(f_{\rm turb}\sim\pi\beta/(2v_{w})\) as was done in Ref. [111].
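A corresponding sketch of the turbulence contribution of eqs. (170)-(172), again with all thermal inputs treated as placeholders:

```python
import numpy as np

def omega_turb(f, *, K, R_star, H_star, R_Omega, R_f, kappa_turb=0.05):
    """Redshifted turbulence spectrum following eqs. (170)-(172)."""
    f_turb = R_f * 3.5 / R_star                                   # eq. (172)
    x = f / f_turb
    shape = x**3 / ((1.0 + x) ** (11.0 / 3.0)
                    * (1.0 + 8.0 * np.pi * f / (R_f * H_star)))   # eq. (171)
    return 9.0 * R_Omega * (H_star * R_star) * (kappa_turb * K) ** 1.5 * shape
```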
|
2301.04584 | Continual HyperTransformer: A Meta-Learner for Continual Few-Shot
Learning | We focus on the problem of learning without forgetting from multiple tasks
arriving sequentially, where each task is defined using a few-shot episode of
novel or already seen classes. We approach this problem using the recently
published HyperTransformer (HT), a Transformer-based hypernetwork that
generates specialized task-specific CNN weights directly from the support set.
In order to learn from a continual sequence of tasks, we propose to recursively
re-use the generated weights as input to the HT for the next task. This way,
the generated CNN weights themselves act as a representation of previously
learned tasks, and the HT is trained to update these weights so that the new
task can be learned without forgetting past tasks. This approach is different
from most continual learning algorithms that typically rely on using replay
buffers, weight regularization or task-dependent architectural changes. We
demonstrate that our proposed Continual HyperTransformer method equipped with a
prototypical loss is capable of learning and retaining knowledge about past
tasks for a variety of scenarios, including learning from mini-batches, and
task-incremental and class-incremental learning scenarios. | Max Vladymyrov, Andrey Zhmoginov, Mark Sandler | 2023-01-11T17:27:47Z | http://arxiv.org/abs/2301.04584v3 | # Continual Few-Shot Learning Using HyperTransformers
###### Abstract
We focus on the problem of learning without forgetting from multiple tasks arriving sequentially, where each task is defined using a few-shot episode of novel or already seen classes. We approach this problem using the recently published HyperTransformer (HT), a Transformer-based hypernetwork that generates specialized task-specific CNN weights directly from the support set. In order to learn from a continual sequence of tasks, we propose to recursively re-use the generated weights as input to the HT for the next task. This way, the generated CNN weights themselves act as a representation of previously learned tasks, and the HT is trained to update these weights so that the new task can be learned without forgetting past tasks. This approach is different from most continual learning algorithms that typically rely on using replay buffers, weight regularization or task-dependent architectural changes. We demonstrate that our proposed Continual HyperTransformer method equipped with a prototypical loss is capable of learning and retaining knowledge about past tasks for a variety of scenarios, including learning from mini-batches, and task-incremental and class-incremental learning scenarios.
## 1 Introduction
Continual few-shot learning involves learning from a continuous stream of tasks described by a small number of examples without forgetting previously learned information. This type of learning closely resembles how humans and other biological systems acquire new information, as we can continually learn novel concepts with a small amount of information and retain that knowledge for an extended period of time. Algorithms for few-shot continual learning can be useful in many real-world applications where there is a need to classify a large number of classes in a dynamic environment with limited observations. Some practical applications can include enabling robots to continually adapt to changing environments based on an incoming stream of sparse demonstrations or allowing for privacy-preserving learning, where the model can be trained sequentially on private data sharing only the weights without ever exposing the data.
To tackle this problem, we propose using HyperTransformer (HT; Zhmoginov et al. 2022), a recently published few-shot learning method that utilizes a large hypernetwork (Ha et al., 2016) to meta-learn from episodes sampled from a large set of few-shot learning tasks. The HT is trained to directly generate weights of a much smaller specialized Convolutional Neural Network (CNN) model using only few labeled examples. This works by decoupling the domain knowledge model (represented by a Transformer; Vaswani et al. 2017) from the learner itself (a CNN), generated to solve only a given specific few-shot learning problem.
We present a modification to the HT method, called Continual HyperTransformer (CHT), that is aimed at exploring the capability of the HT to _sequentially update the CNN weights_ with the information from a new task, while retaining the knowledge about the tasks that were already learned. In other words, given the CNN weights \(\theta_{t-1}\) generated after seeing some previous tasks \(0,\dots,t-1\) and a description of the new task \(t\), the CHT generates the weights \(\theta_{t}\) that are suited for all the tasks \(0,\dots,t\).
In order for the CHT to be able to absorb a continual stream of tasks, we modified the loss function from a cross-entropy that was used in the HT to a more flexible prototypical loss (Snell et al., 2017), that uses prototypes as a learned representation of every class from all the tasks. As the tasks come along, we maintain
and update a set of prototypes in the embedding space. The prototypes are then used to predict the class and task attributes for a given input sample.
We evaluate CHT in three realistic scenarios where a continual few-shot learning model like ours might be used: the mini-batch version, where every task consists of the same classes; the lifelong learning version, where classes for all the tasks are drawn from the same overall distribution; and the heterogeneous task semantic version, where every task has its own unique distribution of classes.
We also test CHT in two different continual learning scenarios: task-incremental learning (predicting class attributes using the task information) and class-incremental learning (predicting class attributes without access to task information; also known as lifelong learning). Moreover, we show empirically that a model trained for class-incremental learning can also perform well in task-incremental learning, similar to a model specifically trained for task-incremental learning.
Our approach has several advantages. First, as a hypernetwork, the CHT is able to generate and update the weights of the CNN on the fly with no training required. A trained Transformer holds the domain world-knowledge and can generalize from limited few-shot observations. There is also evidence to suggest that similar functions are performed by the prefrontal cortex in the human brain (Miller et al., 2001), which may imply biological plausibility of our approach.
Second, we demonstrate that models learned with CHT do not suffer from catastrophic forgetting. We even see cases of positive backward transfer for smaller models, where the performance on a given task actually improves for subsequently generated weights.
Third, while the CHT is trained to optimize for \(T\) tasks, the model can be stopped at any point \(t\leq T\) during inference with weights \(\theta_{t}\) that are suited for all the tasks \(0\leq\tau\leq t\). Moreover, the performance of a given weight \(\theta_{t}\) improves when the CHT is trained on more tasks \(T\).
Finally, we designed the CHT model to be independent of the specific task index and to operate as a recurrent system. It can be used to learn a larger number of tasks than it was originally trained for.
## 2 Related work
Figure 1: In few-shot continual learning, the model learns from \(T\) tasks sequentially. For the first task (task 0), the CNN weights \(\theta_{0}\) are generated using only the support set \(S^{(0)}\). For each subsequent task \(t\), the Continual HyperTransformer (CHT) uses the support set \(S^{(t)}\) and the previously generated weights \(\theta_{t-1}\) to generate the weights \(\theta_{t}\). To update the weights \(\psi\) of the CHT, the loss is calculated by summing the individual losses computed for each generated weight \(\theta_{t}\) when evaluated on the query set of all the prior tasks \((Q^{(\tau)})_{\tau=0}^{T}\).
**Few-shot learning.** Many few-shot learning methods can be divided into two categories: metric-based learning and optimization-based learning. First, _metric-based_ methods (Vinyals et al., 2016; Snell et al., 2017; Sung et al., 2018; Oreshkin et al., 2018) train a fixed embedding network that works universally for any task. The prediction is based on the distances between the known embeddings of the support set and the embeddings of the query samples. These methods are not specifically tailored for the continual learning problem, since they treat every task independently and have no memory of the past tasks.
Second, _optimization-based_ methods (Finn et al., 2017; Nichol and Schulman, 2018; Antoniou et al., 2019; Rusu et al., 2019) propose to learn an initial fixed embedding, which is later adapted to a specific task using a few gradient-based steps. However, these methods are not able to learn continually, as simply adapting the embedding for a new task will result in the catastrophic forgetting of previously learned information.
**Continual learning.** Most continual learning methods can be grouped into three categories based on their approach to preventing catastrophic forgetting when learning a new task: rehearsal, regularization and architectural (see Biesialska et al. 2020 for an overview). _Rehearsal_ methods work by injecting some amount of replay data from past tasks while learning the new task (Lopez-Paz and Ranzato, 2017; Riemer et al., 2018; Rolnick et al., 2019; Gupta et al., 2020; Wang et al., 2021) or distilling a part of a network using task-conditioned embeddings (Mandivarapu et al., 2020; Von Oswald et al., 2019). _Regularization_ methods introduce an explicit regularization function when learning new tasks to ensure that old tasks are not forgotten (Kirkpatrick et al., 2017; Zenke et al., 2017). _Architectural_ methods modify the network architecture with additional task-specific modules (Rusu et al., 2016), ensembles (Wen et al., 2020) or adapters (Pfeiffer et al., 2020) that allow for separate routing of different tasks.
We believe that our approach requires the least conceptual overhead to the techniques above, since it does not impose any additional explicit constraints to prevent forgetting. Instead, we reuse the same principle that made HT work in the first place: decoupling the specialized representation model (a CNN) from the domain-aware Transformer model. The Transformer learns how to best adapt the incoming CNN weights in a way that the new task is learned and the old tasks are not forgotten. In this sense, the closest analogy to our approach would be slow and fast weights (Munkhdalai and Yu, 2017), with the Transformer weights being analogous to the slow weights that accumulate the knowledge and generate CNN weights as fast weights.
**Incremental few-shot learning.** A related, but distinct area of research is incremental few-shot learning (Gidaris and Komodakis, 2018; Ren et al., 2019; Perez-Rua et al., 2020; Chen and Lee, 2020; Tao et al., 2020; Wang et al., 2021; Shi et al., 2021; Zhang et al., 2021; Mazumder et al., 2021; Lee et al., 2021; Yin et al., 2022). There, the goal is to adapt a few-shot task to an _existing_ base classifier trained on a _large_ dataset, without forgetting the original data. In contrast, our model learns directly from a series of few-shot tasks presented one after the other, without relying on any prior classifier. All of our tasks are defined using only a small number of samples.
Perhaps the closest to our setting is the paper by Antoniou et al. (2020) which focuses on the general problem definition of the continual few-shot learning, but falls short of providing a novel method to solve it.
## 3 Continual few-shot learning
We consider the problem of continual few-shot learning, where we are given a series of \(T\) tasks, and each task \(t:=\{S^{(t)},Q^{(t)}\}\) is specified via a \(K\)-way \(N\)-shot support set \(S^{(t)}:=(x_{i}^{(t)},y_{i}^{(t)})_{i=0}^{NK}\) and a query set \(Q^{(t)}:=(\hat{x}_{i}^{(t)},\hat{y}_{i}^{(t)})_{i=0}^{\hat{N}K}\), where \(K\) is the number of classes in each task, \(N\) is the number of labeled examples for each class, and \(\hat{N}\) (typically \(\hat{N}\gg N\)) is the number of query examples to be classified.
We assume that the classes composing each individual task are drawn from the same distribution uniformly at random without replacement. However, we consider different ways in which classes for different tasks are chosen. First, each task may include exactly the same set of classes. This is similar to mini-batch learning with \(T\) iterations, where each batch contains exactly \(N\) examples of each of \(K\) classes1. Second, each task might include a different set of classes, but drawn from the same overall distribution of classes. This corresponds to a lifelong learning scenario, where tasks can be thought of as observations that allow us
to learn more about the world as we encounter new classes during the inference. Finally, each task might have its own unique semantic meaning and the classes for different tasks are drawn from different distributions. We will evaluate all of these scenarios in our experiments.
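A minimal sketch of how episodes for these three scenarios can be generated (class pools, counts and example indices are illustrative placeholders):

```python
import random

def sample_episode(class_pool, K=5, N=1, N_query=15):
    """Sample a K-way N-shot support set and a query set from a class pool."""
    classes = random.sample(class_pool, K)
    support = [(c, i) for c in classes for i in range(N)]         # (class, example id)
    query   = [(c, i) for c in classes for i in range(N_query)]
    return support, query

all_classes = list(range(100))                                    # overall class distribution
domains     = [list(range(d * 20, (d + 1) * 20)) for d in range(5)]  # disjoint class pools
T = 4

fixed_classes   = random.sample(all_classes, 5)
minibatch_tasks = [sample_episode(fixed_classes, K=5) for _ in range(T)]  # same classes every task
lifelong_tasks  = [sample_episode(all_classes, K=5) for _ in range(T)]    # same overall distribution
hetero_tasks    = [sample_episode(domains[t], K=5) for t in range(T)]     # task-specific distributions
```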
Figure 1 illustrates the process of learning a continual few-shot problem. For each of the tasks \(t\in 0,\ldots,T\), a learner \(a_{\psi}\) (parameterized by \(\psi\)) needs to produce CNN weights \(\theta_{t}\) based on the support set \(S^{(t)}\) of task \(t\) and previously generated weights \(\theta_{t-1}\) (except for the first task, where \(\theta_{0}\) is generated only using \(S^{(0)}\)):
\[\theta_{t}:=a_{\psi}\left(S^{(t)},\theta_{t-1}\right), \tag{1}\]
such that \(\theta_{t}\) can predict the classes from all the tasks \(\tau\in 0,\ldots,t\). Notice that when learning from task \(t\), the learner does not have access to the support set of past tasks and must rely solely on the input weights \(\theta_{t-1}\) as a source of information from previous tasks.
After the weights \(\theta_{t}\) are generated, we can use the query set \(Q^{(\tau)}\) of all tasks \(\tau\in 0,\ldots,t\) to evaluate the prediction quality of the \(\theta_{t}\) and calculate the loss \(\mathcal{L}_{\psi}\) with respect to the learner parameters \(\psi\). In this work, we consider two types of predictions given the weights \(\theta_{t}\):
* _Task-incremental learning_, in which the goal is to identify the class attribute given the sample and its task attribute: \(p(\hat{y}=k|\hat{x},\tau)\).
* _Class-incremental learning_, in which the goal is to identify both class and task attributes of the samples: \(p(\hat{y}=k,\tau|\hat{x})\).
Finally, we can test the performance of the trained model \(a_{\psi}\) on episodes sampled from a holdout set of classes \(\mathcal{C}_{\text{test}}\). Notice that, in general, the total number of tasks for the test \(T_{test}\) might be different from \(T\).
## 4 Continual HyperTransformer
Figure 2: The information flow of the HyperTransformer (HT) model (_left_) compared to the proposed Continual HyperTransformer (CHT) model (_right_). In the original HT, the input weight embeddings are initialized with empty placeholders. In contrast, the proposed CHT model incorporates information from past tasks when generating weights for the current task. The weight slice information from previously learned tasks is passed as input to the new iteration of the CHT. The CHT uses the support set for the current task and the input weight information to generate the weights. This allows the CHT to retain knowledge about past tasks and avoid forgetting when learning new tasks.
Notice that for \(T=1\), the continual learning problem above reduces to a standard few-shot learning problem defined by a single few-shot learning task \(t_{0}=\{S^{(0)},Q^{(0)}\}\). One method that has been effective in solving this type of problem is HyperTransformer (HT; Zhmoginov et al., 2022) that uses a self-attention mechanism to generate CNN weights \(\theta\) directly from the support set of the few-shot learning problem (see Figure 2, left). These weights are constructed layer by layer using the embeddings of the support set and the activations of the previous layer. After the weights have been generated, the cross-entropy loss \(\mathcal{L}_{\psi}\left(f_{\theta}(\hat{x}),\hat{y}\right)\) is calculated by running the query set \((\hat{x},\hat{y})\) through the generated CNN.
Our proposed Continual HyperTransformer (CHT) naturally extends HT to handle a continual stream of tasks by using the generated weights from already learned tasks as input weight embeddings into the weight generator for a new task (see Figure 2, right). In this way, the learned weights themselves act as both the input and the output of the CHT, performing a dual function: storing information about the previous tasks _as well as_ serving as the weights for the CNN when evaluating on tasks that have already been seen.
For each task \(t\), the CHT takes as input the support set of that task \(S^{(t)}\) as well as the weights from the previous tasks \(\theta_{t-1}\), and generates the weights using the equation (1) that are suited for all the tasks \(\tau\in 0,\dots,t\). Therefore, for each step \(t\) we want to minimize the loss on the query sets of every task up to \(t\):
\[J_{t}(\psi)=\sum_{\tau=0}^{t}\mathcal{L}_{\psi}\left(f_{\theta_{t}}(\hat{x}^{ (\tau)}),\hat{y}^{(\tau)}\right). \tag{2}\]
The overall loss function is simply the sum of the losses for all tasks:
\[\arg\min_{\psi}\sum_{t=0}^{T}J_{t}(\psi). \tag{3}\]
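To make the accumulation of these objectives concrete, the following is a minimal sketch of how equations (2) and (3) unroll over a stream of tasks; the helper names (`cht` for the weight generator \(a_{\psi}\), `forward_cnn` for \(f_{\theta}\), `loss_fn` for \(\mathcal{L}_{\psi}\)) are illustrative placeholders rather than the authors' implementation.

```python
def continual_loss(cht, forward_cnn, loss_fn, episodes, theta_init):
    """Accumulate the overall objective of eq. (3): the sum over t of J_t(psi) from eq. (2).

    episodes   : list of (support_set, query_set) pairs, one per task 0..T
    theta_init : the empty initial weight embeddings (theta_{-1} in Algorithm 1)
    cht        : callable standing in for the weight generator a_psi
    forward_cnn: callable running the generated CNN f_theta on query inputs
    """
    total = 0.0
    theta = theta_init
    for t, (support_t, _) in enumerate(episodes):
        theta = cht(support_t, theta)             # eq. (1): theta_t from S^(t) and theta_{t-1}
        for tau in range(t + 1):                  # J_t: query sets of all tasks seen so far (eq. 2)
            x_q, y_q = episodes[tau][1]
            total = total + loss_fn(forward_cnn(theta, x_q), y_q)
    return total
```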
The CHT generates a sequence of weights \(\{\theta_{\tau}\}_{\tau=0}^{t}\), such that each weight is suited for all tasks up to the current task: \(\theta_{0}\) performs well only on task 0, \(\theta_{1}\) performs well on tasks 0 and 1, and so on. This allows the model to effectively learn and adapt to a stream of tasks, while also maintaining good performance on previously seen tasks.
This design allows for a "preemptive" approach to continual learning, where the CHT model can be trained on \(T\) tasks, and run for any number of tasks \(\tau<T\), producing well-performing weights \(\theta_{\tau}\) for all the tasks seen up to that point. An alternative approach would be to specify the exact number of tasks in the sequence in advance, and only consider the performance after the final task \(T\). This would correspond to minimizing only the last term \(J_{T}(\psi)\) in the equation (3). However, in our experiments, we did not observe any significant improvement using this approach compared to the one we have described above.
Another desirable property of the proposed CHT architecture is its ability to be recurrent. The parameters of the HT do not depend on task information, and only take the weights \(\theta\) and the support set as input. This means that it is not only possible to preempt CHT at some earlier task, but also to extend the trained model to generate weights for additional tasks beyond the ones it was trained on. We will demonstrate this ability in the experimental section.
### Prototypical loss
The last element of the algorithm that we have left to discuss is the exact form of the loss function \(\mathcal{L}_{\psi}(\cdot)\) in equation (2). The original HT used the cross-entropy loss, which is not well suited for continual learning because the number of classes that it predicts is tied to the number of parameters in the head layer of the weights \(\theta\). This means that as the number of tasks increases, the architecture of the CNN needs to be adjusted, which goes against our design principle of using a recurrent CHT architecture. Another option would be to fix the head layer to the \(K\)-way classification problem across all the tasks and only predict the class information within tasks (a problem known as domain-incremental learning; Hsu et al., 2018). However, this would cause classes with the same label but from different tasks to be mapped to the same location in the embedding space, leading to collisions. Additionally, since class labels are assigned at random for each training episode, the collisions would occur randomly, making it impossible for the CHT to learn the correct class assignment. In Appendix A.1, we show that the accuracy of this approach decreases dramatically as the number of tasks increases and becomes impractical even for just two tasks.
To make the method usable, we need to decouple the class predictions of every task while keeping the overall dimensionality of the embedding space fixed. One solution is to come up with a fixed arrangement
of \(TK\) points, but any kind of such arrangement is suboptimal because it is not possible to place \(TK\) points equidistant from each other in a fixed-dimensional space for large \(T\). A much more elegant solution is to learn the location of these class prototypes from the support set itself, e.g. with a prototypical loss (Snell et al., 2017). The prototypes are computed by averaging the embeddings of support samples from a given class \(k\)_and_ task \(\tau\):
\[c_{\tau k}:=\frac{1}{N}\sum_{(x,y)\in S^{(\tau)}}f_{\theta_{\tau}}(x)\mathbf{1 }_{y=k}. \tag{4}\]
We can use the prototypes in two different continual learning scenarios. First, for the _task-incremental learning_, we are assumed to have access to the task we are solving and need to predict only the class information. The probability of the sample belonging to a class \(k\) given the task \(\tau\) is then equal to the softmax of the \(\ell_{2}\) distance between the sample and the prototype normalized over the distances to the prototypes from all the classes from \(\tau\):
\[p(\hat{y}=k|\hat{x},\tau):=\frac{\exp(-\|f_{\theta_{t}}(\hat{x})-c_{\tau k}\| ^{2})}{\sum_{k^{\prime}}\exp(-\|f_{\theta_{t}}(\hat{x})-c_{\tau k^{\prime}}\| ^{2})}. \tag{5}\]
Second, for more general _class-incremental learning_, we need to predict class attributes across all seen tasks. The probability of a sample belonging to class \(k\) of task \(\tau\) is equal to the softmax of the \(\ell_{2}\) distance between the sample and the prototype, normalized over the distances to the prototypes from all classes for all tasks:
\[p(\hat{y}=k,\tau|\hat{x}):=\frac{\exp(-\|f_{\theta_{t}}(\hat{x})-c_{\tau k}\| ^{2})}{\sum_{\tau^{\prime}k^{\prime}}\exp(-\|f_{\theta_{t}}(\hat{x})-c_{\tau^ {\prime}k^{\prime}}\|^{2})}. \tag{6}\]
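A short PyTorch-style sketch of equations (4)–(6): prototypes are averaged per class within a task, and class-incremental prediction is a softmax over negative squared distances to the prototypes of all tasks seen so far. Tensor shapes and helper names are assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def class_prototypes(support_emb, support_labels, num_classes):
    """Eq. (4): average the support embeddings of each class of one task."""
    return torch.stack([support_emb[support_labels == k].mean(dim=0)
                        for k in range(num_classes)])            # (K, D)

def class_incremental_probs(query_emb, prototypes_per_task):
    """Eq. (6): softmax over negative squared L2 distances to the prototypes of all seen tasks.

    query_emb           : (D,) embedding f_{theta_t}(x_hat)
    prototypes_per_task : list of (K, D) tensors, one per task tau <= t (frozen once computed)
    """
    protos = torch.cat(prototypes_per_task, dim=0)               # (T*K, D)
    sq_dists = ((query_emb.unsqueeze(0) - protos) ** 2).sum(dim=-1)
    return F.softmax(-sq_dists, dim=0)                           # joint over (task, class)
```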
The final loss function is given by minimizing the negative log probability of the chosen softmax over the query set. The pseudo-code for the entire CHT model is described in Algorithm 1.
```
0:\(T\) randomly sampled \(K\)-way \(N\)-shot episodes: \(\{S^{(t)};Q^{(t)}\}_{t=0}^{T}\).
0: The loss value \(J\) for the generated set of tasks.
1:\(J\gets 0\)\(\triangleright\) Initialize the loss.
2:\(\theta_{-1}\gets 0\)\(\triangleright\) Initialize the weights.
3:for\(t\gets 0\) to \(T\)do
4:\(\theta_{t}\gets a_{\psi}(S^{(t)},\theta_{t-1})\)\(\triangleright\) Generate weights for the current task.
5:for\(k\gets 0\) to \(K\)do\(\triangleright\) Compute prototypes for every class of the current task.
6:\(c_{tk}\leftarrow\frac{1}{N}\sum_{(x,y)\in S^{(t)}}f_{\theta_{t}}(x)\mathbf{1 }_{y=k}\)
7:endfor
8:for\(\tau\gets 0\) to \(t\)do\(\triangleright\) Update the loss with every seen query set using the equation (6).
9:for\(k\gets 0\) to \(K\)do
10:\(J\gets J-\sum_{(\hat{x},\hat{y})\in Q^{(\tau)}}\log p(\hat{y}=k,\tau|\hat{ x})\mathbf{1}_{\hat{y}=k}\)
11:endfor
12:endfor
13:endfor
```
**Algorithm 1** Class-incremental learning using HyperTransformer with Prototypical Loss.
Empirically, we noticed that the CHT models trained with the class-incremental learning objective (6) perform equally well in both class-incremental and task-incremental settings, while models trained with the task-incremental objective (5) perform well only in the task-incremental setting and rarely outperform models trained with the equation (6). Therefore, we will focus on CHT models trained with the equation (6) and evaluate them for both task- and class-incremental learning scenarios.
Notice that the prototypes are computed using the current weights \(\theta_{\tau}\) in the equation (4) for task \(\tau\), but they are used later to compare the embeddings produced by subsequent weights \(\theta_{t}\) in equation (6). Ideally, once the new weights \(\theta_{t}\) are generated, the prototypes should be recomputed as well. However, in true continual
learning, we are not supposed to reuse the support samples after the task has been processed. We have found that freezing the prototypes after they are computed provides a viable solution to this problem, and the difference in performance compared to recomputing the prototypes every step is marginal.
Finally, we want to highlight an important use-case where recomputing the prototypes might still be possible or even desirable. The weights \(\theta_{t}\) are not affected by this issue and are computed in a continual learning manner from equation (1) without reusing the support samples of previous tasks. The support set is only needed to update the prototypes through the generated weights, which is a relatively cheap operation. This means that it is possible to envision a privacy-preserving scenario in which the weights are updated and passed from client to client in a continual learning manner, and the prototypes needed to "unlock" those weights belong to the clients that hold the actual data.
## 5 Connection Between Prototypical Loss and MAML
While the core idea behind the prototypical loss is very natural, this approach can also be viewed as a special case of a simple 1-step MAML-like learning algorithm. This can be demonstrated by considering a simple classification model \(\mathbf{q}(x;\phi)=s(\mathbf{W}f_{\theta}(x)+\mathbf{b})\) with \(\phi=(\mathbf{W},\mathbf{b},\theta)\), where \(f_{\theta}(x)\) is the embedding and \(s(\cdot)\) is a softmax function. The MAML algorithm identifies initial weights \(\phi^{0}\) such that, for any task \(\tau\), just a few gradient-descent steps initialized at \(\phi^{0}\) bring the model towards a task-specific local optimum of \(\mathcal{L}_{\tau}\).
Notice that if any label assignment in the training tasks is equally likely, it is natural for \(\mathbf{q}(x;\phi^{0})\) to not prefer any particular label over the others. Guided by this, let us choose \(\mathbf{W}^{0}\) and \(\mathbf{b}^{0}\) that are _label-independent_. Substituting \(\phi=\phi^{0}+\delta\phi\) into \(\mathbf{q}(x;\phi)\), we then obtain
\[q_{\ell}(x;\phi)=q_{\ell}(x;\phi^{0})+s^{\prime}_{\ell}(\cdot) \left(\delta\mathbf{W}_{\ell}f_{\theta^{0}}(x)+\delta b_{\ell}+\mathbf{W}^{0}_{\ell} \frac{\partial f}{\partial\theta}(x;\theta^{0})\delta\theta\right)+O(\delta \phi^{2}),\]
where \(\ell\) is the label index and \(\delta\phi=(\delta\mathbf{W},\delta\mathbf{b},\delta\theta)\). The lowest-order label-dependent correction to \(q_{\ell}(x;\phi^{0})\) is given simply by \(s^{\prime}_{\ell}(\cdot)(\delta\mathbf{W}_{\ell}f_{\theta^{0}}(x)+\delta b_{\ell})\). In other words, in the lowest-order, the model only adjusts the final logits layer to adapt the pretrained embedding \(f_{\theta^{0}}(x)\) to a new task.
For a simple softmax cross-entropy loss (between predictions \(\mathbf{q}(x)\) and the groundtruth labels \(y\)), a single step of the gradient descent results in the following logits weight and bias updates:
\[\delta\mathbf{W}_{k,\cdot}=\frac{\gamma}{n}\sum_{(x,y)\in S}\left(\mathbf{1}_{y=k}-\frac{1}{|C|}\right)f_{\theta^{0}}(x),\qquad\delta b_{k}=\frac{\gamma}{n}\sum_{(x,y)\in S}\left(\mathbf{1}_{y=k}-\frac{1}{|C|}\right), \tag{7}\]
where the \(1/|C|\) term results from normalization in the softmax operation. Here \(\gamma\) is the learning rate, \(n\) is the total number of support-set samples, \(|C|\) is the number of classes and \(S\) is the support set. In other words, we see that the label assignment imposed by \(\delta\mathbf{W}\) and \(\delta\mathbf{b}\) from the equation (7) effectively relies on computing a dot-product of \(f_{\theta^{0}}(x)\) with "prototypes" \(c_{k}:=N^{-1}\sum_{(x,y)\in S}f_{\theta^{0}}(x)\mathbf{1}_{y=k}\).
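The equivalence can be checked numerically: starting from a label-independent (here zero) initialization of \(\mathbf{W}^{0},\mathbf{b}^{0}\), one SGD step on the softmax cross-entropy reproduces the closed form of equation (7). The snippet below is a small sanity check written for this note, not code from the paper.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n, d, C = 20, 8, 5                       # support size, embedding dim, number of classes
feats = torch.randn(n, d)                # stands in for f_{theta^0}(x) on the support set
labels = torch.randint(0, C, (n,))
gamma = 0.1

# Label-independent initialization of the logits layer, as assumed in the text.
W = torch.zeros(C, d, requires_grad=True)
b = torch.zeros(C, requires_grad=True)

loss = F.cross_entropy(feats @ W.t() + b, labels)    # mean over the n support samples
loss.backward()
dW_sgd = -gamma * W.grad                             # one gradient-descent step

# Closed form of eq. (7): (gamma / n) * sum_x (1_{y=k} - 1/|C|) f(x)
one_hot = F.one_hot(labels, C).float()
dW_formula = (gamma / n) * (one_hot - 1.0 / C).t() @ feats

print(torch.allclose(dW_sgd, dW_formula, atol=1e-6))  # True
```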
## 6 Experiments
Most of our experiments were conducted on two standard benchmark problems based on the Omniglot and tieredImageNet datasets. The generated weights for each task \(\theta_{t}\) are composed of four convolutional blocks and a single dense layer. Each of the convolutional blocks consists of a \(3\times 3\) convolutional layer, a batch norm layer, a ReLU activation and a \(2\times 2\) max-pooling layer. For Omniglot we used 8 filters for the convolutional layers and a 20-dim FC layer to demonstrate how the network works on small problems, and for tieredImageNet we used 64 filters for the convolutional layers and a 40-dim FC layer2 to show that the method
works for large problems as well. The models were trained in an episodic fashion, where the examples for each training iteration are sampled from a given distribution of classes. The reported accuracy was calculated from 1024 random episodic evaluations from a separate test distribution, with each episode run 16 times with different combinations of input samples.
For the HT architecture, we tried to replicate the setup used in the original paper as closely as possible. We used a 4-layer convolutional network as a feature extractor and a 2-layer convolutional model for computing activation features. For Omniglot we used a 3-layer, 2-head Transformer and for tieredImageNet, we used a simplified 1-layer Transformer with 8 heads. In all our experiments, we trained the network on a single GPU for \(4M\) steps using SGD with an exponential LR decay over \(100\,000\) steps with a decay rate of \(0.97\). We noticed some stability issues when increasing the number of tasks and had to decrease the learning rate to compensate: for Omniglot experiments, we used a learning rate of \(10^{-4}\) for up to 4 tasks and \(5\times 10^{-5}\) for 5 tasks. For tieredImageNet, we used the same learning rate of \(5\times 10^{-6}\) for training with any number of tasks \(T\). We trained the CHT models with the class-incremental objective (6), but evaluated them for both task-incremental and class-incremental scenarios.
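For reference, a one-line sketch of the exponential decay schedule described above; whether the decay is applied continuously or as a staircase is not stated, so the continuous form below is an assumption.

```python
def learning_rate(step, base_lr=1e-4, decay_rate=0.97, decay_steps=100_000):
    # lr(step) = base_lr * decay_rate ** (step / decay_steps)
    return base_lr * decay_rate ** (step / decay_steps)
```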
### Learning from mini-batches
We first consider a case where every task includes the same set of classes. Specifically, we compared the following three models using a set of four 5-way 1-shot support set batches \(S^{(1)},\dots,S^{(4)}\) that consist of the same set of classes from tieredImageNet:
\[\theta^{(a)} \equiv a_{\psi}(S^{(1)}+S^{(2)}+S^{(3)}+S^{(4)},\theta_{0}),\] \[\theta^{(b)} \equiv a_{\psi}(S^{(3)}+S^{(4)},a_{\psi}(S^{(1)}+S^{(2)},\theta_{ 0})),\] \[\theta^{(c)} \equiv a_{\psi}(S^{(4)},a_{\psi}(S^{(3)},a_{\psi}(S^{(2)},a_{\psi}(S ^{(1)},\theta_{0})))),\]
where the + operation denotes a simple concatenation of different support set batches. For this experiment, we used the cross-entropy loss (since the label set was the same for all \(S^{(i)}\)) and each support set batch \(S^{(i)}\) contained a single example per class. We observed that the test accuracies for \(\theta^{(a)}\), \(\theta^{(b)}\) and \(\theta^{(c)}\) were equal to \(67.9\%\), \(68.0\%\) and \(68.3\%\) respectively, all within the statistical error range (\(\pm 0.4\%\)). At the same time, HT trained with just \(S^{(1)}\) or \(S^{(1)}+S^{(2)}\) (with 1 or 2 samples per class respectively) performed significantly worse, reaching test accuracies of \(56.2\%\) and \(62.9\%\) respectively. This demonstrates that the proposed mechanism of updating generated CNN weights using information from multiple support set batches can achieve performance comparable to processing all samples in a single pass with HT.
### Learning from tasks within a single domain
Next, we consider a scenario where the tasks consist of classes drawn from a single overall distribution. We present the results of two models: one trained on 20-way, 1-shot tasks with classes sampled from Omniglot dataset, and another trained on 5-way, 5-shot tasks with classes sampled from tieredImageNet dataset.
We compare the performance of CHT to two baseline models. The first is a Constant ProtoNet (ConstPN), which represents a vanilla Prototypical Network, as described in Snell et al. (2017). In this approach, a universal fixed CNN network is trained on episodes from \(\mathcal{C}_{\text{train}}\). This constant network can be applied to every task separately by projecting the support set as prototypes for that task and computing the prediction with respect to these prototypes. Strictly speaking, this is not a continual learning method, since it treats every task independently and has no memory of previous tasks. For the best results on this baseline, we had to increase the number of classes by a factor of 5 during training (e.g. for 20-way Omniglot evaluation we have trained it with 100-way problems).
The second baseline we used specifically for class-incremental learning is a Merged HyperTransformer (MergedHT), where we combine all the tasks and train a single original HT instance on them as a single task. This method does not solve a continual learning problem, since it has the information about all the tasks from the beginning, but it produces a solution for every class and task that can still be compared to the weights generated by the CHT.
Each trained model is applied to both task-incremental (Figure 3) and class-incremental (Figure 4) settings. To understand the effect of continual learning with multiple tasks, each column represents a separate run of the CHT trained on \(T=2,3,4\) or \(5\) tasks in total (for training a higher \(T\), see the results in the Appendix). To demonstrate the recurrence of the method, we extended the number of tasks to \(5\) for the evaluation regardless of how many tasks it was trained on. Each plot shows \(5\) curves corresponding to the CHT, split into two groups: bullet marker (\(\bullet\)) for tasks that the model was trained for and diamond marker (\(\diamond\)) for extrapolation to more tasks.
Figure 4: Class-incremental learning on Omniglot and tieredImageNet. Each column represents a different CHT trained with a total of \(T=2\), \(3\), \(4\) or \(5\) tasks. The tasks marked with a bullet symbol (\(\bullet\)) correspond to the terms in the objective function (3) that are being minimized. The lines marked with the diamond symbol (\(\diamond\)) show the extrapolation of the trained CHT to a larger number of tasks.
Figure 3: Task-incremental learning on Omniglot and tieredImageNet. Each column represents a different CHT trained with a total of \(T=2\), \(3\), \(4\) or \(5\) tasks. The tasks marked with a bullet symbol (\(\bullet\)) correspond to the terms in the objective function (3) that are being minimized. The lines marked with the diamond symbol (\(\diamond\)) show the extrapolation of the trained CHT to a larger number of tasks.
Task-incremental learning. We start by analysing the task-incremental learning results. For the Omniglot dataset, we saw no signs of catastrophic forgetting for the CHT. In fact, we observed a positive backward knowledge transfer, where the performance on past tasks _improved_ as more weights were generated. For example, in most cases, the performance of \(\theta_{1}\) (green markers) was higher than \(\theta_{0}\) (orange markers), and \(\theta_{2}\) was higher than both \(\theta_{1}\) and \(\theta_{0}\). Additionally, as the number of tasks increased, the overall performance of the CHT also increased, with the model trained on \(T=5\) tasks performing better than the one trained on \(T=2\) tasks.
For the tieredImageNet dataset, the results were better than the ConstPN baseline, but the positive backward knowledge transfer effect was not as pronounced as it was for the Omniglot dataset. The performance for every training task remained roughly the same for all generated weights, indicating that the model did not suffer from catastrophic forgetting.
Overall, the CHT consistently outperformed the ConstPN baseline, particularly when applied to the same or lower number of tasks it was trained on. Although the accuracy of the CHT did decrease slightly when it was applied to more tasks than it was trained on, this decrease was not significant. In fact, even when CHT was trained on only \(T=3\) tasks, generating weights for one of two additional tasks still resulted in better performance than the ConstPN baseline.
Class-incremental learning. In the class-incremental learning setting, the task name is given by two numbers indicating the range of tasks we used for evaluation (e.g. task name 0-3 corresponds to four tasks from 0 to 3). The black constant dashed line is the baseline performance of the ConstPN, which uses a fixed embedding and does not differentiate between tasks. The starred blue markers represent a separate run of the HT for a particular configuration of merged tasks.
As one can see in the figure, the accuracy of all the models decreased as more tasks were included in the prediction. This was expected because the size of the generated CNN did not change, while the number of classes to be predicted kept increasing. For the Omniglot dataset we again saw positive backward transfer taking place, with CHT models trained on more tasks \(T\) performing better overall. For a given model trained on a fixed \(T\), the performance was comparable. This demonstrates the preemptive property of the CHT, where models trained for a certain number of tasks can still be run for any smaller number of tasks with similar performance.
When comparing the results to the baselines, the CHT had better results than the ConstPN up to the number of tasks \(T\) it was trained for, and the extrapolation results improved as \(T\) increased. Interestingly, for the case of \(T=5\) the CHT was able to outperform even the MergedHT baseline on Omniglot, even though the MergedHT had access to information about all tasks from the beginning. This suggests that having more classes to classify makes the learning problem more difficult for the original HT, which may no longer be able to learn good image embeddings. This is particularly evident in the tieredImageNet dataset, where the performance of the MergedHT is so low that it falls below 60%, even for the 0-1 task.
### Learning from tasks across multiple domains
In the experiments described above, the support and query sets for each task were drawn from the same general distribution, and the image domain remained consistent across all tasks. If the tasks were drawn from different distributions and different image domains, we would expect task-agnostic ConstPN approach to suffer in accuracy because it would need to find a universal representation that works well across all image domains. In contrast, the CHT approach could adapt its sample representations differently for different detected image domains, leading to improved performance.
We verify this by creating a multi-domain episode generator that includes tasks from various image datasets: Omniglot, Caltech101, CaltechBirds2011, Cars196, OxfordFlowers102 and StanfordDogs. We compared the accuracy of the ConstPN and CHT on this generator using episodes containing two tasks with 5-way, 1-shot problems. The generated CNN model had 16 channels with 32 channels for the final layer. Other parameters were the same as those used in the tieredImageNet experiments. The ConstPN achieved an accuracy of 53% for task 0, 52.8% for task 1 and 50.8% for combined tasks. The CHT achieved an accuracy of 56.2% for task 0, 55.2% for task 1 and 53.8% for combined tasks. The accuracy
gap of nearly 3% between these two methods, which is larger than the gap observed in the Omniglot and tieredImageNet experiments, suggests that the CHT is better at adapting to a multi-domain task distribution.
## 7 Conclusions
The proposed Continual HyperTransformer model has several attractive features. As an efficient few-shot learner, it can generate CNN weights on the fly with no training required, using only a small set of labeled examples. As a continual learner, it is able to update the weights with information from new tasks by iteratively passing them through HT. Empirically, we have shown that the learning occurs without catastrophic forgetting and may even result in positive backward transfer. By modifying the loss function from cross-entropy to the prototype loss, we defined a learning procedure that optimizes the location of the prototypes of all the classes of every task. A single trained CHT model can be used in both task-incremental and class-incremental scenarios.
## 8 Acknowledgements
The authors would like to thank Nolan Miller, Gus Kristiansen, Jascha Sohl-Dickstein and Johannes von Oswald for their valuable insights and feedback throughout the project.
|
2306.11890 | Out of Distribution Generalization via Interventional Style Transfer in
Single-Cell Microscopy | Real-world deployment of computer vision systems, including in the discovery
processes of biomedical research, requires causal representations that are
invariant to contextual nuisances and generalize to new data. Leveraging the
internal replicate structure of two novel single-cell fluorescent microscopy
datasets, we propose generally applicable tests to assess the extent to which
models learn causal representations across increasingly challenging levels of
OOD-generalization. We show that despite seemingly strong performance, as
assessed by other established metrics, both naive and contemporary baselines
designed to ward against confounding, collapse on these tests. We introduce a
new method, Interventional Style Transfer (IST), that substantially improves
OOD generalization by generating interventional training distributions in which
spurious correlations between biological causes and nuisances are mitigated. We
publish our code and datasets. | Wolfgang M. Pernice, Michael Doron, Alex Quach, Aditya Pratapa, Sultan Kenjeyev, Nicholas De Veaux, Michio Hirano, Juan C. Caicedo | 2023-06-15T20:08:16Z | http://arxiv.org/abs/2306.11890v1 | # Out of Distribution Generalization via Interventional Style Transfer in Single-Cell Microscopy
###### Abstract
Real-world deployment of computer vision systems, including in the discovery processes of biomedical research, requires causal representations that are invariant to contextual nuisances and generalize to new data. Leveraging the internal replicate structure of two novel single-cell fluorescent microscopy datasets, we propose generally applicable tests to assess the extent to which models learn causal representations across increasingly challenging levels of OOD-generalization. We show that despite seemingly strong performance, as assessed by other established metrics, both naive and contemporary baselines designed to ward against confounding, collapse on these tests. We introduce a new method, Interventional Style Transfer (IST), that substantially improves OOD generalization by generating interventional training distributions in which spurious correlations between biological causes and nuisances are mitigated. We publish our code1 and datasets2.
Footnote 1: [https://github.com/Laboratory-for-Digital-Biology/IST](https://github.com/Laboratory-for-Digital-Biology/IST)
Footnote 2: 10.5281/zenodo.7830240
## 1 Introduction
The ability to learn meaningful visual features from multiplexed microscopy images of cells and tissues promises to unlock cellular morphology as a powerful new single-cell omic with considerable potential to advance biomedical research [32]. In turn, efforts are underway to collect fluorescent microscopy datasets that interrogate single-cell biology across hundreds of millions of cells and thousands of biological perturbations [6, 12]. To enable scientific discovery, computer vision models must learn representations that generalize to observations made in new observational environments (OEs) [39, 40]. Yet, vision systems are prone to learning spurious correlations between concepts of interest (e.g. objects) and contextual nuisances (e.g. background) [28]. This can yield biased representations that, although they may generalize well to hold-out sets that are independent and identically distributed (IID) with respect to the training data, collapse when tested on data that fall outside this distribution. For example, the performance of state-of-the-art (SOTA) vision models trained on the stereotypical views of objects in ImageNet dramatically deteriorates when tested on ObjectNet images [2], which were collected with proactive interventions on several nuisance factors, such as background and object orientation (e.g. fallen-over chairs), that pose little challenge to humans.
The same confounding influence that OEs exhibit in natural image datasets, manifests in biomedical datasets in the form of "batch-effects". Indeed, despite best efforts, technical variation between datasets collected in separate (experimental) batches cannot be perfectly controlled. Given the susceptibility of vision models to spurious correlations in natural image data outlined above, batch-effects present a major threat to meaningful biomedical applications of representation learning in fluorescent microscopy.
In this paper we adopt the terminology of causal inference [33, 39] to study the (batch-)effects of OEs as a confounder \(\mathbf{C}\) to a general causal process that we suggest describes most datasets in biology. Our goal is to learn representations from such data that model causal relationships,
while remaining invariant to OEs. Importantly, we hold that the hypothesis that a given representation is causal (i.e. invariant over OEs and nuisances) cannot be falsified using IID hold-out data. Instead, we propose a rigorous testing regime based on generalization to OEs which are out-of-distribution (OOD) compared to the training data as a necessary characteristic and critical empirical measure of causal learning. To this end, and to foster progress towards causal representation learning in the field, we publicly release two real-world single-cell fluorescent microscopy datasets that exhibit internal replicate structures representative of most high-content imaging protocols (see below). We leverage this substructure to design realistic OOD generalization tasks. Surprisingly, we find that not only naive, but also SOTA post-processing and regularization baselines designed to mitigate batch-effects and improve generalization, fail when evaluated on these OOD tasks, despite in part excellent scores on IID hold-out sets and auxiliary metrics.
Given the ineffectiveness of existing methods on our OOD-task, we next consider intervening on the training distribution itself. Intuitively, if the training set contained observations balanced over all OEs, models should learn invariances to OEs and represent the right causal structure [28, 33]. While collecting such dense datasets is not impossible (see e.g. [38]), many key applications require the assessment of very large numbers of conditions (e.g. over even modestly-sized drug libraries) that, for practical reasons, have to be collected in multiple batches. As such, most high-content imaging datasets are sparse, that is, sets of conditions are only observed in some OEs but not others (see Figs. 1, 3). Inspired by recent results on generative interventions to mitigate biases in natural image data [28], we propose a new, light-weight method for _Interventional Style Transfer_ (IST) that generates effective interventions across an arbitrary number of OEs. To achieve this, we introduce architectural innovations and loss terms that prevent content hallucinations, which we find leads to failure of other style-transfer methods on our benchmark datasets. We then employ IST to yield a training distribution that mitigates OEs as confounders (Fig.1) and show that models trained on it exhibit major improvements in OOD-generalization.
As our main contributions, we (1) publish two new benchmark single-cell datasets with different degrees of sparsity in their replicate structure; we (2) propose a rigorous OOD-generalization test regime that can be adopted across most experimental dataset; and (3) we introduce IST as the first method that achieves substantial improvements across increasingly challenging levels of OOD-generalization, as a starting-point for future work towards causal representation learning in microscopy and beyond.
## 2 Related Work
**Data Augmentation:** Data augmentations, such as blur, contrast, and rotations, are almost universally used in computer vision to yield more robust models [25, 41]. Both style-transfer [14, 42] and adversarial training [44] have been employed in the pursuit of more complex augmentations. Our IST approach can be viewed as learning augmentations that imitate the effect of confounders.
**Generative Models:** Generative models have been successfully employed on fluorescent microscopy and other biomedical data [16, 45]. When OEs are unobserved, [28] show that generative models can be steered to produce noisy image manipulations on complex nuisances such as view point, that, when employed during training, improve OOD generalization. We employ the known replicate structure of our data to steer the generator directly.
Figure 1: (A) In most high-content datasets, not all conditions are observed in all OEs. Training distributions thus entail spurious correlations between OEs, observations and biological causes, yielding models that learn confounded representations according to \(\mathbf{SCM}_{\delta}\). (B) We introduce an IST approach to impute images as if they had been collected in different OEs. By randomly permuting images across OEs, we yield an interventional distribution that removes spurious correlations with OEs allowing models to learn representations that are less biased and better capture the true causal structure according to \(\mathbf{SCM}_{\psi}\).
**Domain Adaptation:** Our approach builds on advances in domain-adaptation and style-transfer, developed to allow for differentially manipulating a style while preserving other content [22, 24, 8, 9]. A major risk in applying style-transfer methods to scientific data is the inadvertent alteration of content (in our case phenotypic information). We hence design our IST approach to emphasize content-preservation by discouraging major changes in pixel space.
**Batch-effect correction:** How to mitigate batch-effects is an active field of study in biomedicine [21]. Our work is closest to [34] who employ style-transfer to disentangle batch effects from biological features. IST features architectural improvements that prevent content alterations without the need for threshold- or segmentation-based regularization terms. IST also does not depend on assumptions about the nature of batch-effects (such as that they primarily manifest in first-order statistics [11]), and achieves strong performance on challenging benchmarks without the need for additional post-hoc methods employed in [34].
**Fairness:** A considerable body of work in visual recognition explores questions of fairness e.g. over demographic factors [46, 15], including by means of style-transfer. Although discussions of causality are absent from these works, questions of fairness relate to our study on batch-effects as we seek to learn causal representations from biased data.
## 3 Causal Analysis
Fig. 1 presents a generalized structural causal model (SCM) [33] that we assume as the basis of our work. We seek to reveal causal relationships between a set of conditions \(\mathbf{Y}\) (e.g. disease categories) that may manifest cellular phenotypes \(\mathbf{Z}\). To characterize \(\mathbf{Z}\), we collect observations \(\mathbf{X}\) using fluorescent microscopy. Observations are made in specific OEs \(\mathbf{C}\) (i.e. batch, constituted by a specific well, plate, aliquot of reagents, etc.) that introduce technical variation to \(\mathbf{X}\), and may further influence the biology of \(\mathbf{Z}\), revealing it as a confounder [39]. Importantly, in most datasets, not all conditions (we say, biological causes) are observed in all OEs. As such, the specific OE \(\mathbf{c}\) also determines (the set of) biological causes \(\mathbf{Y}\): e.g. two plates may contain different sets of conditions. Given training distributions \(P(X,Y,C)\) in which biological causes are sparse over OEs, discriminative models learn spurious correlations between biological causes \(\mathbf{Y}\) and confounding OEs \(\mathbf{C}\) according to \(\mathbf{SCM}_{\delta}\), resulting in biased representations \(\mathbf{\hat{Z}_{SCM}}_{\delta}\) that generalize poorly to new OEs (Fig. 1A).
A method that could produce faithful imputations of source images from one OE as if they had been collected in a different OE could allow us to approximate the interventional distribution \(P(X,Y|do(C))\) that would eliminate the backdoor paths emanating from \(\mathbf{C}\), removing OEs as a confounder [33]. Recent work on generative interventions for causal learning in natural images showed that even under noisy image manipulations, a classifier can learn better features for recognition in OOD data [28]. Inspired by their results, we propose an interventions-based approach that is compatible with experimental datasets. Importantly, in most natural image data-generation processes, observations cause labels (i.e. human experts label images according to what they see, and we train models to recapitulate this ability [28, 35]). Instead, in our datasets, \(\mathbf{Y}\) represents conditions that we _hypothesize_ may cause observable cellular phenotypes. Our goal is then to approximate the conditional distribution \(P(Y|X)\), that is to estimate the cause \(\mathbf{Y}\) given noisy observations \(\mathbf{X}\), whereby we hope to learn (discover) biologically meaningful representations of a priori unknown phenotypes \(\mathbf{Z}\). Second, in contrast to natural images, where OEs are generally unobserved, our experimental data-acquisition protocols inherently document a rich ontology of processing steps that lead to any particular image (Fig. 3A). OEs are thus systematically tracked and feature rich metadata through which \(\mathbf{C}\) is partially observed. In contrast to [28], we can hence explicitly steer the data generation process learned by IST according to the known OE structure of our datasets. By using IST to intervene on \(\mathbf{C}\), we seek to mitigate spurious correlations in the training distribution, yielding \(\mathbf{\hat{Z}_{SCM}}_{\psi}\) and representations that generalize to OOD data (Fig. 1B).
## 4 Method
Advances in neural style transfer make it possible to perform image transformations that preserve spatial content while adjusting other feature statistics as desired [22, 26, 8, 9]. In order to generate effective interventions on \(\mathbf{C}\), a style-transfer model must learn to specifically transfer features related to OEs, while preserving phenotypic content. StarGANv2 [9] introduced a framework to train a single encoder-decoder architecture capable of style-transfer between an arbitrary number of style-domains, such as demographic categories. We instead aim to steer our model to generate images in the "style" of specific OEs. Indeed, we find that StarGANv2, adapted to multichannel microscopy, produces visually compelling images. However, the outputs consistently feature both subtle and more obvious content hallucinations (Fig. 2), suggesting that StarGANv2 fails to
Figure 2: StarGANv2 produces compelling images, but introduces both subtle and more obvious content alterations (white arrows)
adequately preserve phenotypic content.
As a potential alternative to style-transfer in pixel space, [11] propose MixStyle to pursue domain-generalization by mixing style-features of images from different OEs in the feature-maps of hidden-layers during training, with promising results. Remarkably, this avoids the need for a generator altogether, making MixStyle extremely lightweight. However, [11] assume that computing the mean and standard-deviation of the feature-maps is sufficient to adequately capture style. While this may hold to some extent for natural images [19], there are no guarantees that this is true for batch-effects in microscopy data, and indeed, we find MixStyle offers little-to-no benefit on our task (see Sec. 6).
To avoid these failure modes we design IST with several major and minor innovations. Specifically, to enforce content preservation, we introduce skip-connections between bottle-neck layers of our encoder and decoder, and introduce three complementary loss terms that discourage phenotypic alterations. Second, we find that, although first-order feature-map statistics are by themselves insufficient to describe OEs, they suffice as style-codes that, when injected into Adaptive Instance Normalization (AdaIN) layers [19], can be interpreted by our decoder to generate output images across an arbitrary number of OEs. This allows us to avoid all auxiliary networks required by StarGANv2 or comparable methods, rendering IST not only effective, but also computationally efficient, as detailed below.
### Model Components
To train our IST-model, let \(\mathbf{X}\) be the set of images, with associated environments \(\mathbf{C}\) and cause labels \(\mathbf{Y}\), respectively. Given an image \(x\in\mathbf{X}\) observed in environment \(c\in\mathbf{C}\), we seek to train a Generator \(\mathcal{G}\) capable of producing image transformations \(\hat{x}\) as if they come from other environments \(\mathbf{C}\) (style) while preserving the original phenotypic content \(z\in\mathbf{Z}\). To this end, we derive _style codes_\(v\) and train \(\mathcal{G}\) to interpret them. Our framework consists of three main modules (see Fig. 4A):
**Encoder:** Given an image \(x\), the encoder \(\mathcal{E}\) derives the representation \(u_{x}=\mathcal{E}(x)\), composed of multi-layer feature-sets \(u_{x}^{l}\), with \(l\in\{1...,L\}\) and \(L\) the number of layers in the network. We implement \(\mathcal{E}\) as a ResNet18 [18] with instance normalization (IN) layers and a few modifications to facilitate skip connections. We pre-train \(\mathcal{E}\) on an auxiliary multi-task objective of predicting \(C\) and \(Y\) given \(X\).
**Generator:** Our generator predicts output images \(\hat{x}=\mathcal{G}(u,v)\) given a feature set \(u\) and a style code \(v\). To promote the preservation of phenotypic content in the output images, we bias \(\mathcal{G}\) against major changes in pixel space [20], by implementing \(\mathcal{G}\) as a UNet-decoder with skip connections that concatenate feature-sets \(u_{x}^{l}\) from the \(l\)-th corresponding feature-layer of the encoder, with \(l\in\{1...,L\}\). We find that our choice improves both the similarity between pairs \(x\) and \(\hat{x}\) and the realism of our output images.
**Critic:** Similar to [9] we implement a critic \(\mathcal{D}\) as a multi-task discriminator with \(N_{c}\) output heads, where \(N_{c}\) is the number of OEs. Each head \(\mathcal{D}_{c}\) is trained as a binary classifier to distinguish real from fake images of their true or assigned OE \(c\). To facilitate convergence, we initialize \(\mathcal{D}\) with the weights of the pre-trained encoder \(\mathcal{E}\) and fine-tune over the adversarial optimization process.
**Style codes:** To steer our generator, we compute image-specific style-codes. StarGANv2 employs a dedicated style-encoder to derive style-codes from latent distributions or input images [9]. We find that effective style-codes can be computed directly from image features using our pre-trained, frozen encoder \(\mathcal{E}\). Given the features \(u_{i}=\mathcal{E}(x_{i})\) of an image \(x_{i}\), we compute:
\[v_{i}=\left[(\mu_{u_{i}^{l}},\sigma_{u_{i}^{l}}):l\in\{1...L\}\right] \tag{1}\]
where \(\mu_{u_{i}^{l}}\) and \(\sigma_{u_{i}^{l}}\) are the mean and standard deviation across the spatial domain of the feature maps of layer \(l\).
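A minimal PyTorch-style sketch of equation (1) and of how such statistics are typically consumed by AdaIN layers; exact details (epsilon, flattening, which layers are used) are assumptions rather than the reference implementation.

```python
import torch

def style_code(feature_maps):
    """Eq. (1): channel-wise mean and std over the spatial domain of each encoder layer."""
    codes = []
    for u in feature_maps:                            # u: (B, C_l, H_l, W_l)
        mu = u.mean(dim=(2, 3))
        sigma = u.std(dim=(2, 3), unbiased=False)
        codes.append((mu, sigma))
    return codes

def adain(content_feat, mu_style, sigma_style, eps=1e-5):
    """AdaIN: re-normalize content features to the injected style statistics."""
    mu_c = content_feat.mean(dim=(2, 3), keepdim=True)
    sigma_c = content_feat.std(dim=(2, 3), unbiased=False, keepdim=True)
    normalized = (content_feat - mu_c) / (sigma_c + eps)
    return normalized * sigma_style[:, :, None, None] + mu_style[:, :, None, None]
```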
### Training the Generator
We train \(\mathcal{G}\) to transform the appearance of an image from one OE to another. Following pre-training, we freeze the encoder \(\mathcal{E}\) and use its weights to initialize the critic \(\mathcal{D}\). The generator \(\mathcal{G}\) is trained using SGD on pairs of triplets: \((x_{\alpha},y_{\alpha},c_{\alpha})\) for content images and \((x_{\beta},y_{\beta},c_{\beta})\) for style images. During training, we randomly sample content images balanced over \(\mathbf{Y}\), and style images balanced over \(\mathbf{C}\). In that way, content and style images are independently drawn to ensure samples with diverse phenotypic and technical variation respectively. During the forward pass, we first compute their feature sets \(u_{\alpha}=\mathcal{E}(x_{\alpha})\) and \(u_{\beta}=\mathcal{E}(x_{\beta})\) to then derive style codes \(v_{\alpha},v_{\beta}\). To intervene on the OE of the content image, we then inject \(v_{\beta}\) using AdaIN-layers to predict the output \(\hat{x}=\mathcal{G}(\mathcal{E}(x_{\alpha}),v_{\beta})\). We minimize the following training objectives:
**Adversarial Loss:** Given a pair of content and style images \(x_{\alpha},x_{\beta}\), we compute style-codes \(v_{\alpha},v_{\beta}\) as described above. The generator is trained to produce realistic output images \(\hat{x}=\mathcal{G}(\mathcal{E}(x_{\alpha}),v_{\beta})\) with the following adversarial loss:
\[\begin{split}\mathcal{L}_{Adv}=&\mathbb{E}_{x,c}\left[\log(\mathcal{D}_{c}(x))\right]+\\ &\mathbb{E}_{x,\hat{x},\tilde{c}}\left[\log(1-\mathcal{D}_{\tilde{c}}(\mathcal{G}(\mathcal{E}(x),v_{\tilde{c}})))\right],\end{split} \tag{2}\]
where \(\mathcal{D}_{c}(\cdot)\) is the head corresponding to OE \(c\). \(\mathcal{G}\) learns to use the style-codes \(v_{\tilde{c}}\) to generate versions of \(x\) as if observed in another environment \(\tilde{c}\).
**Style Loss:** We further ensure effective intervention by applying a style-loss:
\[\mathcal{L}_{Style}=\mathbb{E}_{\hat{x},c}\left[\frac{1}{L}\sum_{l=1}^{L}\lVert \text{Gram}(u_{x}^{l})-\text{Gram}(u_{\hat{x}}^{l})\rVert_{1}\right] \tag{3}\]
where \(\text{Gram}(\cdot)\) denotes the Gram matrix of features in the \(l\)-th layer of the encoder \(\mathcal{E}\), used in style transfer to match the feature covariance of stylized images [26, 27].
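A sketch of equation (3) with an explicit Gram matrix; the normalization of the Gram matrices and of the L1 term is an assumption made for illustration.

```python
import torch

def gram(feat):
    """Channel-by-channel feature covariance of a (B, C, H, W) feature map."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def style_loss(feats_a, feats_b):
    """Eq. (3): average L1 distance between per-layer Gram matrices of two feature sets."""
    per_layer = [(gram(a) - gram(b)).abs().mean() for a, b in zip(feats_a, feats_b)]
    return sum(per_layer) / len(per_layer)
```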
**Cycle-Consistency Loss:** To promote the preservation of phenotypic content of a source image \(x_{\alpha}\) in the output \(\hat{x}=\mathcal{G}(\mathcal{E}(x_{\alpha}),v_{\beta})\), we apply a cycle-consistency loss [47]:
\[\mathcal{L}_{Cyc}=\mathbb{E}_{x,c}[\|x-\mathcal{G}(\mathcal{E}(\hat{x}),v_{c} )\|_{1}], \tag{4}\]
where \(v_{c}\) is the estimated style code of the original content image, i.e. we reconstruct \(x_{\alpha}\) from \(\hat{x}\).
**Content Loss:** We additionally constrain the absolute changes in pixel space between \(x\) and \(\hat{x}\) to prevent substantial loss or addition of phenotypic content (e.g. the hallucination of new cells or cellular components) by applying a content loss
\[\mathcal{L}_{Cont}=\mathbb{E}_{\hat{x},c}\left[\|(\hat{x}-x)\|_{1}+\frac{1}{L }\sum_{l=1}^{L}\|z_{\hat{x}}^{l}-z_{x}^{l}\|_{1}\right] \tag{5}\]
**Class-matching Loss:** To further enforce that the generator preserves the phenotypic characteristics of input images, we implement a class-matching loss, defined as:
\[\mathcal{L}_{Cmatch}=\mathbb{E}_{\hat{x}}\left[-\sum_{y\in Y}\hat{y}\,\log\left(\mathcal{E}_{cls}(\hat{x})_{y}\right)\right], \tag{6}\]
which is essentially the cross entropy loss of the cause predictions for the synthesized image with respect to the predictions for the real input image, according to the frozen encoder classifier \(\mathcal{E}_{cls}\). Note that instead of using the actual cause label \(y\), we use as target the prediction for the real image \(\hat{y}=\mathcal{E}_{cls}(x_{c})_{y}\).
**Full Objective:** We then optimize a min-max objective that trains the generator and critic in an adversarial fashion:
\[\begin{split}\min_{\mathcal{G}}\max_{\mathcal{D}}=& \mathcal{L}_{Adv}+\lambda_{1}\mathcal{L}_{Style}+\\ &\lambda_{2}\mathcal{L}_{Cyc}+\lambda_{3}\mathcal{L}_{Cont}+ \lambda_{4}\mathcal{L}_{Cmatch},\end{split} \tag{7}\]
where \(\lambda_{i}\in\mathbb{R}\) are hyperparameters of the loss terms.
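Putting the pieces together, below is a condensed, hedged sketch of one generator update implied by equations (2)–(7). It reuses the `style_code` and `style_loss` helpers sketched above; the critic interface (one real/fake logit per OE head), the non-saturating form of the adversarial term, and all other names are assumptions made for illustration, not the released implementation.

```python
import torch
import torch.nn.functional as F

def generator_step(E, G, D, E_cls, x_c, x_s, oe_s, lambdas=(1.0, 1.0, 1.0, 1.0)):
    """x_c: content image batch; x_s: style image batch drawn from OE index oe_s."""
    l_sty, l_cyc, l_cont, l_cm = lambdas

    u_c = E(x_c)                              # multi-layer features of the content image
    u_s = E(x_s)
    x_hat = G(u_c, style_code(u_s))           # content of x_c rendered in the style OE

    # Adversarial term (generator side of eq. 2), non-saturating form, head of the target OE.
    adv = -torch.log(torch.sigmoid(D(x_hat)[:, oe_s]) + 1e-8).mean()

    # Cycle consistency (eq. 4): bring x_hat back to the original OE and compare with x_c.
    x_rec = G(E(x_hat), style_code(u_c))
    cyc = (x_rec - x_c).abs().mean()

    # Content term (eq. 5): penalize changes in pixel space and in encoder feature space.
    u_hat = E(x_hat)
    cont = (x_hat - x_c).abs().mean() + sum(
        (a - b).abs().mean() for a, b in zip(u_hat, u_c)) / len(u_c)

    # Class-matching (eq. 6): keep the frozen classifier's cause prediction unchanged.
    with torch.no_grad():
        target = F.softmax(E_cls(x_c), dim=-1)
    cmatch = -(target * F.log_softmax(E_cls(x_hat), dim=-1)).sum(-1).mean()

    return adv + l_sty * style_loss(u_s, u_hat) + l_cyc * cyc + l_cont * cont + l_cm * cmatch
```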
### IST for causal learning
Once our IST-model is trained, we employ it to generate an interventional training distribution \(P(\hat{X},Y|do(C))\), on which we in turn train a predictor network \(P\) (Fig. 4A). To produce \(\hat{X}\) during predictor training, content \(x_{\alpha}\) and style \(x_{\beta}\) images are sampled from the training distribution by pairing random causes with random OEs (both drawn uniformly) and passed through the (frozen) IST-model. This strategy breaks the spurious correlations between biological causes and OEs present in the original datasets. During testing, we also pass test images through our IST-model by randomly pairing them with a training image. This can be interpreted as bringing unseen images to familiar OEs for analysis, and we observed that IST-trained predictors perform better using this additional test-time correction.
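A hedged sketch of how such an interventional training loop for the predictor \(P\) could look: content images keep their cause labels while their OE is randomized through the frozen IST model (reusing the `style_code` helper sketched above). Data-loading names are placeholders, not the released code.

```python
import torch
import torch.nn.functional as F

def train_predictor_with_ist(predictor, encoder, generator, optimizer,
                             content_loader, style_images_by_oe):
    """Train P on x_hat ~ P(X_hat, Y | do(C)) by pairing each content batch with a style
    image drawn from a uniformly random OE; the IST model (encoder + generator) stays frozen."""
    oes = list(style_images_by_oe.keys())
    predictor.train()
    for x, y in content_loader:
        oe = oes[torch.randint(len(oes), (1,)).item()]            # intervention on C
        pool = style_images_by_oe[oe]
        x_style = pool[torch.randint(len(pool), (1,)).item()].unsqueeze(0)
        with torch.no_grad():
            x_hat = generator(encoder(x), style_code(encoder(x_style)))
        loss = F.cross_entropy(predictor(x_hat), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```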
## 5 Experiments
To evaluate the merits of our IST approach in causal representation learning compared to relevant contemporary baselines, we conduct experiments on two novel single-cell microscopy datasets that exhibit different degrees of sparsity and correlation between biological causes and OEs (Fig. 3B,C). Based on the known OE substructure in these datasets, we propose and empirically assess three increasingly challenging levels of OOD-generalization, by constructing hold-out sets according to a hierarchy of processing
Figure 3: A: diagrammatic illustration of a generalized data-acquisition process for high-content microscopy. Well colors indicate conditions (e.g. cell lines or perturbations) arrayed over multiwell plates with limited capacity. Datasets exhibit a nested replicate structure; a series constitutes a full experimental replicate including fresh cells and reagents, which may contain further replicates by plate, plate-section (acquired separately), and acquisitions, each constituting potentially meaningful OEs. This yields datasets of varying degrees of sparsity. B,C: schematic dataset substructure for GRID and LINCS-SC respectively. We define _levels of generalization_ according to increasingly distant relationships between OEs. The training and OOD-test setup for one fold of level-2 is indicated.
steps that separate them from the training data (Fig. 3).3
Footnote 3: Although we offer some details on biological causes for context, we do not interpret our results with respect to their biological implications; we focus on testing the generality of models across OEs.
### Datasets
**GRID:** We publish a subset of the _Genetics of Rare Inherited Disease_ (GRID) dataset, collected to discover latent disease-associated phenotypes in primary patient cells. The dataset contains 17,030 fluorescent microscopy images that reveal the organelle structure of primary dermal fibroblasts derived from 19 patients with 8 genetically confirmed inherited mitochondrial or neuromuscular diseases, and healthy controls (Fig.3B). Data was acquired in multi-well plates with a hierarchical replicate structure: images were collected within the minimal OE of individual wells that contain cells of a specific cell-line. Images of the same cell-lines were collected in multiple (replicate) wells, organized into plate-_sections_, _plates_, and _series_ (Fig. 3A,B). Replicate wells across _sections_ (level-1) are seeded onto the same plate, during the same tissue culture session and derive from the same source cultures. Plate-level replicates (level-2) are separated by _plate_, but share source cultures. Finally, _series_ (level-3) indicate full experimental replicates, starting with fresh thaws of cells. Critically, while sections contain identical sets of cell-lines, they only partially overlap between plates and series, yielding a sparse matrix of biological causes vs. OEs (Fig. 3B).
**LINCS-SC:** In contrast to GRID, the LINCS Cell-Profiling dataset was collected as a pharmacological perturbation study, including 1,327 clinically relevant compounds [10], using a _single_ A549 lung cancer cell line [43]. Cells were stained according to the Cell-Painting protocol [3] and imaged
Figure 4: A: Diagram of our IST-method. Given images \(x_{\alpha}\) and \(x_{\beta}\), encoder \(\mathcal{E}\) (gray) extracts latent representations \(u_{\alpha}\) and provides them to our generator (green) \(G\). \(\mathcal{E}\) further extracts style-codes from \(x_{\beta}\) and provides them to \(G\) via AdaIN layers. We yield a prediction \(\hat{x}\) that preserves the phenotypic content of \(x_{\alpha}\) but inherits the OE of \(x_{\beta}\). We train predictors \(P\) (blue) on the resulting data distribution \(\hat{X}\). B, C: UMAPs and output images illustrating the capacity of IST to project images into specific batches. When \(x_{\beta}\) is sampled from a specific OE, output images fall onto their expected landmark in the UMAP space computed on the pretrained representations of \(\mathcal{E}\) (see Sec. 4). When sampling \(x_{\beta}\) fairly from all training OEs, the resulting distribution \(\hat{X}\) is randomized over all OEs.
at lower magnification, such that the resulting images contain many cells. The LINCS single-cell (LINCS-SC) dataset contains a subset of 101 compounds with strong morphological effects as judged by prior analyses [31]. Single-cell images were derived by segmenting source images with Cell Profiler [30] for a total of 200,000 images. In contrast to GRID, LINCS plates contain no sections. Moreover, LINCS plate- and series-level replicates are structured according to 25 unique plate-maps that host exclusive perturbations: with the exception of controls, there is no overlap between compounds across plate-maps. Consequently, the data-matrix is almost perfectly sparse between plate-maps (Fig. 3C). Finally, in LINCS, only one series contains all plate-maps (i.e. treatments), but without plate-level replicates, while four additional series contain exclusive subsets of plate-maps, each replicated 5 times (Fig. 3C).
### Baselines
We seek to train predictors \(P\) such that they generalize to unseen OEs. We compare IST to strong domain-specific and more general baselines that collectively represent three major categories: post-hoc correction in feature space, regularization during training, and interventions on the training distribution. For all experiments, we use the same set of pre-processing steps and augmentations. As a naive baseline, we randomly initialize a predictor \(P\) as a ResNet18 network (using IN layers) attached to a linear classification head and train it to predict biological causes \(\mathbf{Y}\) from \(\hat{\mathbf{X}}\). We implement other methods via minimal necessary deviations:
**Symphony:** Symphony (SYM) is a state-of-the-art batch-effect correction method developed for single-cell RNA-sequencing (scRNAseq) datasets [21]. Symphony extends a previous method, Harmony [23], which learns linear corrections over labeled nuisances. In contrast to Harmony, Symphony allows for inference on unseen datasets. We fit Symphony on training-set features \(\hat{Z}\) extracted from our naive baseline. We set the _topn_ hyperparameter equal to our feature-dimension and empirically tune others.
**Domain-Adversarial Regularization:** We also compare to domain-adversarial (DR) training as a regularization technique to learn features that discriminate classes but are invariant to domain-shifts between datasets [13]. We adopt this strategy with slight modification to allow for multiple domains (OEs). Specifically, we modify the architecture of our naive baseline by adding a second classification-head that distinguishes OEs. During backpropagation, we employ a gradient-reversal layer to invert the gradient emanating from the OE-classifier for all layers in the shared ResNet18 stem (a minimal sketch of this mechanism is given after this list of baselines). We tuned DR's gradient-reversal hyperparameter \(\lambda\) by grid-search to optimize validation accuracy on \(Y\), while minimizing performance on \(C\).
**StarGANv2:** We assess StarGANv2 (SG2) [9] as a SOTA style-transfer method in natural images. We train using default parameters over 75k iterations. We sample content and style image pairs as for IST and use OE-labels \(\mathbf{C}\) as domains in SG2's multi-task discriminator. Following training, we use SG2 in the same way as IST, to project input images to random OEs, in the hope of yielding an interventional training distribution free of spurious correlations.
**MixStyle:** Finally, we assess MixStyle (MS) [11] as a second recent style-transfer baseline that was specifically developed for domain generalization. We implement MS in our predictor architecture and successfully recapitulate [11]'s results on PACS using our setup. For fairer comparison to IST, we also test MS in a domain adaptation setup, in which we allow MS to train on styles (but not biological causes \(\mathbf{Y}\)) of images from test OEs.
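For reference, a minimal PyTorch sketch of the gradient-reversal layer used by the DR baseline: the forward pass is the identity, while the backward pass flips (and scales) the gradient flowing from the OE head into the shared stem. The surrounding head names are placeholders for illustration.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lam in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Usage sketch: the shared stem is pushed to predict Y while becoming uninformative about C.
# y_logits = cause_head(features)
# c_logits = oe_head(grad_reverse(features, lam))
```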
### Evaluation metrics
**OOD-generalization:** To test OOD-generalization, we perform section-, plate-, or series-wise cross-validation (_levels of generalization_, see Fig. 3) by testing predictors on OEs that were left out during training.
**UMAPs:** While qualitative, feature-space visualizations are widely used in the biomedical literature. We report Uniform Manifold Approximation and Projection (UMAP) embeddings [29].
**LISI score:** We use a ratio of OE- (bLISI) to cause-wise (cLISI) scores [23], normalized over the cardinality \(|C|\) and \(|Y|\) respectively, to quantify local diversity in feature space. Ideally, bLISI \(=1\), while cLISI \(=1/|Y|\).
**kNN-CV:** As a second feature-space based metric, we simulate our OOD-generalization experiments by evaluating kNN-classifiers that predict cause-labels for validation-set images from OEs whose corresponding training-set images are left out of the kNN reference set.
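A sketch of the kNN-CV computation as described above (scikit-learn; the number of neighbours and the OE-wise averaging are assumptions):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def knn_cv_score(train_feats, train_causes, train_oes,
                 val_feats, val_causes, val_oes, k=5):
    """For each validation OE, fit a kNN on training features from all *other* OEs and
    score cause prediction on that OE's validation images; return the mean accuracy."""
    scores = []
    for oe in np.unique(val_oes):
        keep = train_oes != oe                 # leave the matching OE out of the reference set
        knn = KNeighborsClassifier(n_neighbors=k).fit(train_feats[keep], train_causes[keep])
        mask = val_oes == oe
        scores.append(knn.score(val_feats[mask], val_causes[mask]))
    return float(np.mean(scores))
```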
Figure 5: UMAPs of GRID-data training-set features extracted from the penultimate layer of predictors trained with or without IST. Colors show disease-categories \(Y\) (left) and OEs \(C\) (right)
## 6 Results
We report empirical results for IST and all baselines for GRID and validate our results on LINCS-SC (Table 1). Trained across all OEs, our naive baseline achieves excellent performance on IID hold-out data, suggesting there exist robust phenotypic manifestations of inherent genetic (GRID) and pharmacological (LINCS-SC) causes in the observed single-cell images. However, visual inspection of the resulting feature spaces via UMAP reveals OEs as a prominent superstructure in our models' representations, whereas biological causes form secondary clusters within the local context of their parent OE (Fig. 5). Consistently, LISI scores indicate poor integration over OEs and performance deteriorates in kNN-CV. Critically, when tested on OOD-generalization, our naive baseline shows almost complete collapse across all three levels of generalization on GRID and level-3 for LINCS-SC.
As expected, we find that SYM excels at purging variation over OEs when assessing LISI-scores on the training set. However, for both GRID and LINCS-SC data, we find that this effect does not generalize even to IID validation data, and that SYM performs poorly on all other metrics. We find that DR-models achieve LISI-scores similar to SYM on GRID, while fully generalizing to IID data, and drastically improving kNN-CV scores. On LINCS-SC, however, DR yields only comparatively minor improvements in kNN-CV scores. Remarkably, despite these somewhat promising auxiliary metrics, DR does not significantly improve OOD-generalization across any level in either dataset. Likely because SG2 permutes both style and content (see Fig. 2), predictors trained on the SG2-generated distribution fail even at IID generalization. MixStyle, on the other hand, yields excellent IID performance but, presumably hampered by its assumptions about what constitutes style features, yields equally disappointing results in our OOD tests.
By contrast, we find that IST learns to faithfully impute observations as if they had been made in different OEs (Fig. 4B). Qualitative inspection of output images suggests that IST simultaneously preserves phenotypic content of the source images (Fig.4C). As such, IST is able to randomize over the confounder \(C\) (Fig.4B) to yield a training distribution \(P(y,\hat{x},z|do(c))\) in which the original correlations between OEs and biological causes are diminished. Consistent with this, we observe major performance gains across all levels of OOD-generalization, as well as other metrics, for both GRID and LINCS-SC data, when predictors are trained on IST-generated data-distributions (Table 1). These results suggest that our IST approach generates effective interventions on confounders and thereby promotes the emergence of causal representations of biological phenomena.
## 7 Conclusions
Learning visual features that generalize across environments is a critical prerequisite for real-world applications of machine learning systems in biomedicine, yet the field lacks broadly adopted metrics to assess progress towards this goal. We propose OOD-generalization tests structured according to a hierarchy of technical processing steps that generally characterize the data generation process of most high-content imaging studies. We show that seemingly well-performing baselines, including SOTA-methods for batch-effect correction, as assessed by IID hold-out sets and several auxiliary metrics, almost completely collapse on this benchmark, revealing highly confounded representations. The success of IST instead shows that effective interventions to mitigate confounders can be learned, given they are at least partially observed. We point out that even models trained on billions of diverse natural images have only achieved minor gains on ObjectNet [17], suggesting that scale alone is not efficient at breaking contextual biases. Conversely, we suggest our approach bears semblance to thought experiments, by which humans routinely reevaluate familiar concepts in never-observed contexts, thus filling in a sparse matrix of actual observations. We propose IST as a fruitful direction for efficient causal learning.
### Acknowledgements
This research is based on work partially supported by Muscular Dystrophy Association (MDA) Development Grant 628114, and NIH award K99HG011488 to WMP, and by the Broad Institute Schmidt Fellowship program (JCC) and by NSF award 2134695 to JCC.
\begin{table}
\begin{tabular}{l c c c|c c c|c c|c c c} \hline \hline & \multicolumn{6}{c}{**GRID**} & \multicolumn{5}{c}{**LINCS-SC**} \\ \cline{2-12} & IID & cLISI/bLISI & kNN-CV & Level-1 & Level-2 & Level-3 & IID & kNN-CV & Level-1 & Level-2 & Level-3 \\ \hline Baseline & 0.55 & 0.5417 & 0.4458 & 0.1877 & 0.1381 & 0.1254 & **0.57** & 0.1271 & NA & 0.4194 & 0.0405 \\ Baseline (kNN) & 0.63 & 0.5417 & 0.4458 & 0.1838 & 0.1378 & 0.1317 & 0.55 & 0.1271 & NA & 0.3897 & 0.0452 \\ Symphony (kNN) & 0.50 & 0.7340 & 0.4404 & 0.1797 & 0.1474 & 0.1255 & 0.35 & 0.1098 & NA & 0.2697 & 0.0461 \\ DR \(\alpha=0.0625\) & **0.73** & 1.0811 & **0.6906** & 0.1900 & 0.1379 & 0.1259 & 0.56 & 0.1381 & NA & 0.4256 & 0.0438 \\ MixStyle-DA & 0.60 & 0.6039 & 0.5519 & 0.1084 & nc & nc & 0.41 & nc & NA & 0.4250 & 0.0450 \\ StarGANv2 & 0.20 & 1.099 & 0.1539 & 0.1659 & 0.1284 & 0.0977 & nc & nc & NA & nc & nc \\ IST (ours) & 0.60 & **1.4063** & 0.5815 & **0.5839** & **0.5350** & **0.3673** & 0.53 & **0.3304** & NA & **0.7016** & **0.3138** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Macro-F1 and LISI scores for predictor performance on GRID and LINCS-SC. We report kNN-based classification results for Symphony vs. the baseline. For all metrics, higher is better. Level-1 cannot be evaluated for LINCS-SC (see Fig. 3C). nc: not computed.
## Competing interests
Columbia University has filed a US patent application on Interventional Style Transfer for causal representation learning on behalf of WMP, JCC and MH. The remaining authors declare no competing interests.
|
2305.08891 | Common Diffusion Noise Schedules and Sample Steps are Flawed | We discover that common diffusion noise schedules do not enforce the last
timestep to have zero signal-to-noise ratio (SNR), and some implementations of
diffusion samplers do not start from the last timestep. Such designs are flawed
and do not reflect the fact that the model is given pure Gaussian noise at
inference, creating a discrepancy between training and inference. We show that
the flawed design causes real problems in existing implementations. In Stable
Diffusion, it severely limits the model to only generate images with medium
brightness and prevents it from generating very bright and dark samples. We
propose a few simple fixes: (1) rescale the noise schedule to enforce zero
terminal SNR; (2) train the model with v prediction; (3) change the sampler to
always start from the last timestep; (4) rescale classifier-free guidance to
prevent over-exposure. These simple changes ensure the diffusion process is
congruent between training and inference and allow the model to generate
samples more faithful to the original data distribution. | Shanchuan Lin, Bingchen Liu, Jiashi Li, Xiao Yang | 2023-05-15T12:21:08Z | http://arxiv.org/abs/2305.08891v4 | # Common Diffusion Noise Schedules and Sample Steps are Flawed
###### Abstract
We discover that common diffusion noise schedules do not enforce the last timestep to have zero signal-to-noise ratio (SNR), and some implementations of diffusion samplers do not start from the last timestep. Such designs are flawed and do not reflect the fact that the model is given pure Gaussian noise at inference, creating a discrepancy between training and inference. We show that the flawed design causes real problems in existing implementations. In Stable Diffusion, it severely limits the model to only generate images with medium brightness and prevents it from generating very bright and dark samples. We propose a few simple fixes: (1) rescale the noise schedule to enforce zero terminal SNR; (2) train the model with \(v\) prediction; (3) change the sampler to always start from the last timestep; (4) rescale classifier-free guidance to prevent over-exposure. These simple changes ensure the diffusion process is congruent between training and inference and allow the model to generate samples more faithful to the original data distribution.
## 1 Introduction
Diffusion models [3, 15] are an emerging class of generative models that have recently grown in popularity due to their capability to generate diverse and high-quality samples. Notably, an open-source implementation, Stable Diffusion [10], has been widely adopted and referenced. However, the model, up to version 2.1 at the time of writing, always generates images with medium brightness. The generated images always have mean brightness around 0 (with a brightness scale from -1 to 1) even when paired with prompts that should elicit much brighter or darker results. Moreover, the model fails to generate correct samples when paired with explicit yet simple prompts such as "Solid black color" or "A white background", etc.
We discover that the root cause of the issue resides in the noise schedule and the sampling implementation. Common noise schedules, such as linear [3] and cosine [8] schedules, do not enforce the last timestep to have zero signal-to-noise ratio (SNR). Therefore, at training, the last timestep does not completely erase all the signal information. The leaked signal contains some of the lowest frequency information, such as the mean of each channel, which the model learns to read and respect for predicting the denoised output. However, this is incongruent with the inference behavior. At inference, the model is given pure Gaussian noise at its last timestep, of which the mean is always centered around zero. This falsely restricts the model to generating final images with medium brightness. Furthermore, newer samplers no longer require sampling of all the timesteps. However, implementations such as DDIM [16] and PNDM [6] do not properly start from the last timestep in the sampling process, further exacerbating the issue.
We argue that noise schedules should always ensure zero SNR on the last timestep and samplers should always start from the last timestep to correctly align diffusion training and inference. We propose a simple way to rescale existing schedules to ensure "zero terminal SNR" and propose a new classifier-free guidance [4] rescaling technique to solve the image over-exposure problem encountered as the terminal SNR approaches zero.
Figure 1: Stable Diffusion uses a flawed noise schedule and sample steps which severely limit the generated images to have plain medium brightness. After correcting the flaws, the model is able to generate much darker and more cinematic images for prompt: “Isabella, child of dark, [...] ”. Our fix allows the model to generate the full range of brightness.
We train the model on the proposed schedule and sample it with the corrected implementation. Our experimentation shows that these simple changes completely resolve the issue. These flawed designs are not exclusive to Stable Diffusion but general to all diffusion models. We encourage future designs of diffusion models to take this into account.
## 2 Background
Diffusion models [3, 15] involve a forward and a backward process. The forward process destroys information by gradually adding Gaussian noise to the data, commonly according to a non-learned, manually-defined variance schedule \(\beta_{1},\ldots,\beta_{T}\). Here we consider the discrete and variance-preserving formulation, defined as:
\[q(x_{1:T}|x_{0}):=\prod_{t=1}^{T}q(x_{t}|x_{t-1}) \tag{1}\]
\[q(x_{t}|x_{t-1}):=\mathcal{N}(x_{t};\sqrt{1-\beta_{t}}x_{t-1},\beta_{t}\textbf {I}) \tag{2}\]
The forward process allows sampling \(x_{t}\) at an arbitrary timestep \(t\) in closed form. Let \(\alpha_{t}:=1-\beta_{t}\) and \(\bar{\alpha}_{t}:=\prod_{s=1}^{t}\alpha_{s}\):
\[q(x_{t}|x_{0}):=\mathcal{N}(x_{t};\sqrt{\bar{\alpha}_{t}}x_{0},(1-\bar{\alpha}_{t})\textbf{I}) \tag{3}\]
Equivalently:
\[x_{t}:=\sqrt{\bar{\alpha}_{t}}x_{0}+\sqrt{1-\bar{\alpha}_{t}}\epsilon,\quad\text{where }\epsilon\sim\mathcal{N}(\textbf{0},\textbf{I}) \tag{4}\]
Signal-to-noise ratio (SNR) can be calculated as:
\[\text{SNR}(t):=\frac{\bar{\alpha}_{t}}{1-\bar{\alpha}_{t}} \tag{5}\]
Diffusion models learn the backward process to restore information step by step. When \(\beta_{t}\) is small, the reverse step is also found to be Gaussian:
\[p_{\theta}(x_{0:T}):=p(x_{T})\prod_{t=1}^{T}p_{\theta}(x_{t-1}|x_{t}) \tag{6}\]
\[p_{\theta}(x_{t-1}|x_{t}):=\mathcal{N}(x_{t-1};\tilde{\mu}_{t},\tilde{\beta}_ {t}\textbf{I}) \tag{7}\]
Neural models are used to predict \(\tilde{\mu}_{t}\). Commonly, the models are reparameterized to predict noise \(\epsilon\) instead, since:
\[\tilde{\mu}_{t}:=\frac{1}{\sqrt{\alpha_{t}}}(x_{t}-\frac{\beta_{t}}{\sqrt{1- \bar{\alpha}_{t}}}\epsilon) \tag{8}\]
Variance \(\tilde{\beta}_{t}\) can be calculated from the forward process posteriors:
\[\tilde{\beta}_{t}:=\frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_{t}}\beta_{t} \tag{9}\]
## 3 Methods
### Enforce Zero Terminal SNR
Table 1 shows common schedule definitions and their \(\text{SNR}(T)\) and \(\sqrt{\bar{\alpha}_{T}}\) at the terminal timestep \(T=1000\). None of the schedules have zero terminal SNR. Moreover, the cosine schedule deliberately clips \(\beta_{t}\) to be no greater than 0.999 to prevent terminal SNR from reaching zero.
We notice that the noise schedule used by Stable Diffusion is particularly flawed. The terminal SNR is far from reaching zero. Substituting the value into Equation 4 also reveals that the signal is far from being completely destroyed at the final timestep:
\[x_{T}=0.068265\cdot x_{0}+0.997667\cdot\epsilon \tag{10}\]
This effectively creates a gap between training and inference. When \(t=T\) at training, the input to the model is not completely pure noise. A small amount of signal is still included. The leaked signal contains the lowest frequency information, such as the overall mean of each channel. The model subsequently learns to denoise respecting the mean from the leaked signal. At inference, pure Gaussian noise is given for sampling instead. The Gaussian noise always has a zero mean, so the model continues to generate samples according to the mean given at \(t=T\), resulting in images with medium brightness. In contrast, a noise schedule with zero terminal SNR uses pure noise as input at \(t=T\) during training, thus consistent with the inference behavior.
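The terminal values reported for Stable Diffusion can be verified directly from the schedule definition in Table 1; the short PyTorch check below is a sketch for this purpose, with variable names chosen for illustration.

```
import torch

T = 1000
i = torch.arange(T, dtype=torch.float64) / (T - 1)
# Stable Diffusion schedule from Table 1.
betas = (0.00085 ** 0.5 * (1 - i) + 0.012 ** 0.5 * i) ** 2
alphas_bar = torch.cumprod(1 - betas, dim=0)
print(alphas_bar[-1].sqrt().item())                    # ~0.068, far from zero
print((alphas_bar[-1] / (1 - alphas_bar[-1])).item())  # terminal SNR, ~0.0047
```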
The same problem extrapolates to all diffusion noise schedules in general, although other schedules' terminal SNRs are closer to zero so it is harder to notice in practice. We argue that diffusion noise schedules must enforce zero terminal SNR to completely remove the discrepancy between training and inference. This also means that we must use variance-preserving formulation since variance-exploding formulation [17] cannot truly reach zero terminal SNR.
We propose a simple fix by rescaling existing noise schedules under the variance-preserving formulation to enforce zero terminal SNR. Recall in Equation 4 that \(\sqrt{\bar{\alpha}_{t}}\) controls the amount of signal to be mixed in. The idea is to keep \(\sqrt{\bar{\alpha}_{1}}\) unchanged, change \(\sqrt{\bar{\alpha}_{T}}\) to zero, and linearly rescale \(\sqrt{\bar{\alpha}_{t}}\) for intermediate \(t\in[2,\ldots,T-1]\) respectively. We find scaling the schedule in \(\sqrt{\bar{\alpha}_{t}}\) space can better preserve the curve than scaling in \(\text{SNR}(t)\) space. The PyTorch implementation is given in Algorithm 1.
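Since Algorithm 1 is not reproduced here, the following is a minimal PyTorch sketch of the rescaling just described: keep \(\sqrt{\bar{\alpha}_{1}}\) fixed, shift \(\sqrt{\bar{\alpha}_{T}}\) to zero, and linearly rescale the intermediate values; the function and variable names are illustrative.

```
import torch

def rescale_schedule_zero_terminal_snr(betas):
    # Work in sqrt(alpha_bar) space, the quantity rescaled in this section.
    alphas_bar_sqrt = torch.cumprod(1 - betas, dim=0).sqrt()
    a_1, a_T = alphas_bar_sqrt[0].clone(), alphas_bar_sqrt[-1].clone()
    # Keep sqrt(alpha_bar_1) unchanged, move sqrt(alpha_bar_T) to exactly zero,
    # and linearly rescale the intermediate timesteps.
    alphas_bar_sqrt = (alphas_bar_sqrt - a_T) * a_1 / (a_1 - a_T)
    # Convert back to betas.
    alphas_bar = alphas_bar_sqrt ** 2
    alphas = torch.cat([alphas_bar[:1], alphas_bar[1:] / alphas_bar[:-1]])
    return 1 - alphas
```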
Note that the proposed rescale method is only needed for fixing existing non-cosine schedules. Cosine schedule can simply remove the \(\beta_{t}\) clipping to achieve zero terminal SNR. Schedules designed in the future should ensure \(\beta_{T}=1\) to achieve zero terminal SNR.
### Train with V Prediction and V Loss
When SNR is zero, \(\epsilon\) prediction becomes a trivial task and \(\epsilon\) loss cannot guide the model to learn anything meaningful about the data.
We switch to v prediction and v loss as proposed in [13]:
\[v_{t}=\sqrt{\bar{\alpha}_{t}}\epsilon-\sqrt{1-\bar{\alpha}_{t}}x_{0} \tag{11}\]
\[\mathcal{L}=\lambda_{t}||v_{t}-\tilde{v}_{t}||_{2}^{2} \tag{12}\]
After rescaling the schedule to have zero terminal SNR, at \(t=T\), \(\bar{\alpha}_{T}=0\), so \(v_{T}=x_{0}\). Therefore, the model is given pure noise \(\epsilon\) as input to predict \(x_{0}\) as output. At this particular timestep, the model is not performing the denoising task since the input does not contain any signal. Rather it is repurposed to predict the mean of the data distribution conditioned on the prompt.
We finetune the Stable Diffusion model using the v loss with \(\lambda_{t}=1\) and find the visual quality similar to using the \(\epsilon\) loss. We recommend always using v prediction for the model and adjusting \(\lambda_{t}\) to achieve different loss weighting if desired.
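For concreteness, a minimal sketch of the v-prediction objective of Equations 11 and 12 (with \(\lambda_{t}=1\)) is given below; the model interface and tensor shapes are assumptions made for illustration.

```
import torch
import torch.nn.functional as F

def v_prediction_loss(model, x0, t, alphas_bar, cond=None):
    # Diffuse x0 to x_t (Eq. 4); alphas_bar is indexed by the sampled timestep t.
    a = alphas_bar[t].sqrt().view(-1, 1, 1, 1)
    b = (1 - alphas_bar[t]).sqrt().view(-1, 1, 1, 1)
    eps = torch.randn_like(x0)
    x_t = a * x0 + b * eps
    # v target (Eq. 11) and v loss with lambda_t = 1 (Eq. 12).
    v_target = a * eps - b * x0
    return F.mse_loss(model(x_t, t, cond), v_target)
```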
### Sample from the Last Timestep
Newer samplers are able to sample much fewer steps to generate visually appealing samples. Common practice is to still train the model on a large amount of discretized timesteps, e.g. \(T=1000\), and only perform a few sample steps, e.g. \(S=25\), at inference. This allows the dynamic change of sample steps \(S\) at inference to trade-off between quality and speed.
However, many implementations, including the official DDIM [16] and PNDM [6] implementations, do not properly include the last timestep in the sampling process, as shown in Table 2. This is also incorrect because models operating at \(t<T\) are trained on non-zero SNR inputs thus inconsistent with the inference behavior. For the same reason discussed in section 3.1, this contributes to the brightness problem in Stable Diffusion.
We argue that sampling from the last timestep in conjunction with a noise schedule that enforces zero terminal SNR is crucial. Only this way, when pure Gaussian noise is given to the model at the initial sample step, the model is actually trained to expect such input at inference.
We consider two additional ways to select sample steps in Table 2. Linspace, proposed in iDDPM [8], includes both the first and the last timestep and then evenly selects intermediate timesteps through linear interpolation. Trailing, proposed in DPM [7], only includes the last timestep and then selects intermediate timesteps with an even interval starting from the end. Note that the selection of the sample steps is not bound to any particular sampler and can be easily interchanged.
We find trailing has a more efficient use of the sample
\begin{table}
\begin{tabular}{l|l|r r} \hline \hline \multicolumn{1}{l|}{Schedule} & \multicolumn{1}{l|}{Definition (\(i=\frac{t-1}{T-1}\))} & \multicolumn{1}{l}{SNR(\(T\))} & \multicolumn{1}{l}{\(\sqrt{\bar{\alpha}_{T}}\)} \\ \hline Linear [3] & \(\beta_{t}=0.0001\cdot(1-i)+0.02\cdot i\) & 4.035993e-05 & 0.006353 \\ Cosine [8] & \(\beta_{t}=\min(1-\frac{\bar{\alpha}_{t}}{\bar{\alpha}_{t-1}},0.999),\;\bar{\alpha}_{t}=\frac{f(t)}{f(0)},\;f(t)=\cos(\frac{i+0.008}{1+0.008}\cdot\frac{\pi}{2})^{2}\) & 2.428735e-09 & 4.928220e-05 \\ Stable Diffusion [10] & \(\beta_{t}=(\sqrt{0.00085}\cdot(1-i)+\sqrt{0.012}\cdot i)^{2}\) & 0.004682 & 0.068265 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Common schedule definitions and their SNR and \(\sqrt{\bar{\alpha}}\) on the last timestep. All schedules use total timestep \(T=1000\). None of the schedules has zero SNR on the last timestep \(t=T\), causing inconsistency in train/inference behavior.
Figure 2: Comparison between original Stable Diffusion noise schedule and our rescaled noise schedule. Our rescaled noise schedule ensures zero terminal SNR.
steps, especially when \(S\) is small. This is because, for most schedules, \(x_{1}\) differs from \(x_{0}\) only by a tiny amount of noise controlled by \(\beta_{1}\), so the model does not perform many meaningful changes when sampled at \(t=1\), effectively making the sample step at \(t=1\) useless. We switch to trailing for future experimentation and use DDIM to match the official Stable Diffusion implementation.
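The three selection rules of Table 2 can be written compactly in NumPy, as sketched below; the snippet uses the 1-based timestep range of our notation and should be shifted by one for the 0-based ranges used in most implementations.

```
import numpy as np

T, S = 1000, 10  # training timesteps and inference sample steps
leading  = np.arange(1, T + 1, T // S)                             # [1, 101, ..., 901], misses t = T
linspace = np.round(np.linspace(1, T, S)).astype(int)              # [1, 112, ..., 1000]
trailing = np.round(np.flip(np.arange(T, 0, -T / S))).astype(int)  # [100, 200, ..., 1000]
```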
Note that some sampler implementations may encounter zero division errors. The fix is provided in Appendix A.
### Rescale Classifier-Free Guidance
We find that as the terminal SNR approaches zero, classifier-free guidance [4] becomes very sensitive and can cause images to be overexposed. This problem has been noticed in other works. For example, Imagen [11] uses a cosine schedule, which has near-zero terminal SNR, and proposes dynamic thresholding to solve the over-exposure problem. However, the approach is designed only for image-space models. Inspired by it, we propose a new way to rescale classifier-free guidance that is applicable to both image-space and latent-space models.
\[x_{cfg}=x_{neg}+w(x_{pos}-x_{neg}) \tag{13}\]
Equation 13 shows regular classifier-free guidance, where \(w\) is the guidance weight, \(x_{pos}\) and \(x_{neg}\) are the model outputs using positive and negative prompts respectively. We find that when \(w\) is large, the scale of the resulting \(x_{cfg}\) is very big, causing the image over-exposure problem. To solve it, we propose to rescale after applying classifier-free guidance:
\[\sigma_{pos}=\text{std}(x_{pos}),\quad\sigma_{cfg}=\text{std}(x_{cfg}) \tag{14}\]
\[x_{rescaled}=x_{cfg}\cdot\frac{\sigma_{pos}}{\sigma_{cfg}} \tag{15}\]
\[x_{final}=\phi\cdot x_{rescaled}+(1-\phi)\cdot x_{cfg} \tag{16}\]
In Equation 14, we compute the standard deviation of \(x_{pos},x_{cfg}\) as \(\sigma_{pos},\sigma_{cfg}\in\mathbb{R}\). In Equation 15, we rescale \(x_{cfg}\) back to the original standard deviation before applying classifier-free guidance but discover that the generated images are overly plain. In Equation 16, we introduce a hyperparameter \(\phi\) to control the rescale strength. We empirically find that \(w=7.5,\phi=0.7\) work well. The optimized PyTorch implementation is given in Algorithm 2.
```
def apply_cfg(pos, neg, weight=7.5, rescale=0.7):
    # Apply regular classifier-free guidance (Eq. 13).
    cfg = neg + weight * (pos - neg)
    # Calculate per-sample standard deviations (Eq. 14).
    std_pos = pos.std([1, 2, 3], keepdim=True)
    std_cfg = cfg.std([1, 2, 3], keepdim=True)
    # Apply guidance rescale with fused operations (Eqs. 15-16).
    factor = std_pos / std_cfg
    factor = rescale * factor + (1 - rescale)
    return cfg * factor
```
**Algorithm 2** Classifier-Free Guidance with Rescale
## 4 Evaluation
We finetune the Stable Diffusion 2.1-base model on the LAION dataset [14] using our fixes. Our LAION subset is filtered similarly to the data used by the original Stable Diffusion. We use the same training configurations, i.e. batch size 2048, learning rate 1e-4, ema decay 0.9999, to train our model for 50k iterations. We also train an unchanged reference model on our filtered LAION data for a fair comparison.
### Qualitative
Figure 3 shows our method can generate images with a diverse brightness range. Specifically, the model with flawed designs always generates samples with medium brightness. It is unable to generate correct images when given explicit prompts, such as "white background" and "Solid black background", etc. In contrast, our model is able to generate according to the prompts perfectly.
### Quantitative
We follow the convention to calculate Frechet Inception Distance (FID) [2, 9] and Inception Score (IS) [12]. We randomly select 10k images from COCO 2014 validation dataset [5] and use our models to generate with the corresponding captions. Table 3 shows that our model has im
\begin{table}
\begin{tabular}{l|l|c c c c c c c c c c c} \hline \hline Type & Method & Discretization & Timesteps (e.g. \(T=1000,S=10\)) & & & & & & & & & \\ \hline Leading & DDIM [3], PNDM [6] & \(\text{arange}(1,T+1,\text{floor}(T/S))\) & 1 & 101 & 201 & 301 & 401 & 501 & 601 & 701 & 801 & 901 \\ Linspace & iDDPM [8] & \(\text{round}(\text{linspace}(1,T,S))\) & 1 & 112 & 223 & 334 & 445 & 556 & 667 & 778 & 889 & 1000 \\ Trailing & DPM [7] & \(\text{round}(\text{flip}(\text{arange}(T,0,-T/S)))\) & 100 & 200 & 300 & 400 & 500 & 600 & 700 & 800 & 900 & 1000 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison between sample steps selections. \(T\) is the total discrete timesteps the model is trained on. \(S\) is the number of sample steps used by the sampler. We argue that sample steps should always include the last timestep \(t=T\) in the sampling process. Example here uses \(T=1000,S=10\) only for illustration proposes. Note that the timestep here uses range \([1,\dots,1000]\) to match the math notation used in the paper but in practice most implementations use timestep range \([0,\dots,999]\) so it should be shifted accordingly.
Figure 3: Qualitative comparison. Left is Stable Diffusion reference model. Right is Stable Diffusion model after applying all the proposed fixes. All images are produced using DDIM sampler, \(S=50\) steps, trailing timestep selection, classifier-free guidance weight \(w=7.5\), and rescale factor \(\phi=0.7\). Images within a pair are generated using the same seed. Different negative prompts are used.
proved FID/IS, suggesting our model better fits the image distribution and generates more visually appealing samples.
## 5 Ablation
### Comparison of Sample Steps
Figure 4 compares sampling using leading, linspace, and trailing on our model trained with zero terminal SNR noise schedule. When sample step \(S\) is small, e.g. taking \(S=5\) as an extreme example, Trailing noticeably outperforms linspace. But for common choices such as \(S=25\), the difference between trailing and linspace is not easily noticeable.
### Analyzing Model Behavior with Zero SNR
Let's consider an "ideal" unconditional model that has been trained till perfect convergence with zero terminal SNR. At \(t=T\), the model learns to predict the same exact L2 mean of all the data samples regardless of the noise input. In the text-conditional case, the model will predict the L2 mean conditioned on the prompt but invariant to the noise input.
Therefore, the first sample step at \(t=T\) ideally yields the same exact prediction regardless of noise input. The variation begins on the second sample step. In DDPM [3], different random Gaussian noise is added back to the same predicted \(x_{0}\) from the first step. In DDIM [16], different predicted noise is added back to the same predicted \(x_{0}\) from the first step. The posterior probability for \(x_{0}\) now becomes different and the model starts to generate different results on different noised inputs.
This is congruent with our model behavior. Figure 5 shows that our model predicts almost exact results regardless of the noise input at \(t=T\), and the variation begins from the next sample step.
In other words, at \(t=T\), the noise input is unnecessary, but we keep it for architectural convenience.
### Effect of Classifier-Free Guidance Rescale
Figure 6 compares the results of using different rescale factors \(\phi\). When using regular classifier-free guidance, corresponding to rescale factor \(\phi=0\), the images tend to over-expose. We empirically find that setting \(\phi\) to be within 0.5 and 0.75 produces the most appealing results.
### Comparison to Offset Noise
Offset noise is another technique proposed in [1] to address the brightness problem in Stable Diffusion. Instead of sampling \(\epsilon\sim\mathcal{N}(0,\textbf{I})\), they propose to sample \(\epsilon_{hwc}\sim\mathcal{N}(0.1\delta_{c},\textbf{I})\), where \(\delta_{c}\sim\mathcal{N}(0,\textbf{I})\) and the same \(\delta_{c}\) is used for every pixel \(h,w\) in every channel \(c\).
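In code, offset noise amounts to adding a shared per-channel shift to the standard Gaussian sample, as sketched below with illustrative shapes.

```
import torch

b, c, h, w = 8, 4, 64, 64                    # illustrative latent shape
delta = torch.randn(b, c, 1, 1)              # one offset per channel, shared over all pixels
eps = torch.randn(b, c, h, w) + 0.1 * delta  # offset noise in place of standard Gaussian noise
```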
When using offset noise, the noise at each pixel is no longer i.i.d., since \(\delta_{c}\) shifts the entire channel together. The mean of the noised input is no longer indicative of the mean of the true image. Therefore, the model learns not to respect the mean of its input when predicting the output at all timesteps. So even if pure Gaussian noise is given at \(t=T\)
Figure 4: Comparison of sample steps selections. The prompt is: “A close-up photograph of two men smiling in bright light”. Sampled with DDIM. Same seed. When the sample step is extremely small, e.g. \(S=5\), trailing is noticeably better than linspace. When the sample step is large, e.g. \(S=25\), the difference between trailing and linspace is subtle.
\begin{table}
\begin{tabular}{l c c} \hline \hline Model & FID \(\downarrow\) & IS \(\uparrow\) \\ \hline SD v2.1-base official & 23.76 & 32.84 \\ SD with our data, no fixes & 22.96 & 34.11 \\
**SD with fixes (Ours)** & **21.66** & **36.16** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Quantitative evaluation. All models use DDIM sampler with \(S=50\) steps, guidance weight \(w=7.5\), and no negative prompt. Ours uses zero terminal SNR noise schedule, v prediction, trailing sample steps, and guidance rescale factor \(\phi=0.7\).
Figure 5: Visualization of the sample steps on prompt “An astronaut riding a horse”. Horizontal axis is the timestep \(t\). At \(t=T\), the model generates the mean of the data distribution based on the prompt.
and the signal is leaked by the flawed noise schedule, the model ignores it and is free to alter the output mean at every timestep.
Offset noise does enable Stable Diffusion model to generate very bright and dark samples but it is incongruent with the theory of the diffusion process and may generate samples with brightness that does not fit the true data distribution, i.e. too bright or too dark. It is a trick that does not address the fundamental issue.
## 6 Conclusion
In summary, we have discovered that diffusion models should use noise schedules with zero terminal SNR and should be sampled starting from the last timestep in order to ensure the training behavior is aligned with inference. We have proposed a simple way to rescale existing noise schedules to enforce zero terminal SNR and a classifier-free guidance rescaling technique to counter image over-exposure. We encourage future designs of diffusion models to take this into account.
|
2310.13961 | Ensemble-Instruct: Generating Instruction-Tuning Data with a
Heterogeneous Mixture of LMs | Using in-context learning (ICL) for data generation, techniques such as
Self-Instruct (Wang et al., 2023) or the follow-up Alpaca (Taori et al., 2023)
can train strong conversational agents with only a small amount of human
supervision. One limitation of these approaches is that they resort to very
large language models (around 175B parameters) that are also proprietary and
non-public. Here we explore the application of such techniques to language
models that are much smaller (around 10B--40B parameters) and have permissive
licenses. We find the Self-Instruct approach to be less effective at these
sizes and propose new ICL methods that draw on two main ideas: (a)
Categorization and simplification of the ICL templates to make prompt learning
easier for the LM, and (b) Ensembling over multiple LM outputs to help select
high-quality synthetic examples. Our algorithm leverages the 175 Self-Instruct
seed tasks and employs separate pipelines for instructions that require an
input and instructions that do not. Empirical investigations with different LMs
show that: (1) Our proposed method yields higher-quality instruction tuning
data than Self-Instruct, (2) It improves performances of both vanilla and
instruction-tuned LMs by significant margins, and (3) Smaller instruction-tuned
LMs generate more useful outputs than their larger un-tuned counterparts. Our
codebase is available at https://github.com/IBM/ensemble-instruct. | Young-Suk Lee, Md Arafat Sultan, Yousef El-Kurdi, Tahira Naseem Asim Munawar, Radu Florian, Salim Roukos, Ramón Fernandez Astudillo | 2023-10-21T10:21:17Z | http://arxiv.org/abs/2310.13961v1 | # Ensemble-Instruct: Generating Instruction-Tuning Data with a Heterogeneous Mixture of LMs
###### Abstract
Using in-context learning (ICL) for data generation, techniques such as Self-Instruct Wang et al. (2023) or the follow-up Alpaca Taori et al. (2023) can train strong conversational agents with only a small amount of human supervision. One limitation of these approaches is that they resort to very large language models (around 175B parameters) that are also proprietary and non-public. Here we explore the application of such techniques to language models that are much smaller (around 10B-40B parameters) and have permissive licenses. We find the Self-Instruct approach to be less effective at these sizes and propose new ICL methods that draw on two main ideas: (a) Categorization and simplification of the ICL templates to make prompt learning easier for the LM, and (b) Ensembling over multiple LM outputs to help select high-quality synthetic examples. Our algorithm leverages the 175 Self-Instruct seed tasks and employs separate pipelines for instructions that require an input and instructions that do not. Empirical investigations with different LMs show that: (1) Our proposed method yields higher-quality instruction tuning data than Self-Instruct, (2) It improves performances of both vanilla and instruction-tuned LMs by significant margins, and (3) Smaller instruction-tuned LMs generate more useful outputs than their larger un-tuned counterparts. Our codebase is available at [https://github.com/IBM/ensemble-instruct](https://github.com/IBM/ensemble-instruct).
## 1 Introduction
Instruction-tuned language models have demonstrated strong zero-shot generalization capabilities to new tasks Chung et al. (2022); Wei et al. (2021); Ouyang et al. (2022); Mishra et al. (2022); Wang et al. (2022); Longpre et al. (2023), creating interest in large-scale automatic synthesis of instruction-tuning data Honovich et al. (2022); Wang et al. (2023); Xu et al. (2023); Sun et al. (2023); Xu et al. (2023). In this context, Self-Instruct Wang et al. (2023) showed that a small number of expert-annotated seed examples, coupled with in-context learning (ICL) with a base model, can be used to generate an instruction-tuning dataset to efficiently instruct that same base model. While this method yielded strong results and multiple follow-up works, most techniques resort to very large LMs (around 175B parameters) Wang et al. (2023); Taori et al. (2023), available only through closed-access APIs, or have restricted model access.
In this paper, we present Ensemble-Instruct, a novel algorithm enabling high-quality instruction-tuning data generation with smaller LMs (40B parameters or less), that are also fully accessible and have permissive usage licenses. We show that, when using smaller models as generators, Self-Instruct struggles to produce text of adequate quality, adversely affecting the utility of the generated data and downstream model performance. Staying within the ICL framework and using the Self-Instruct seed tasks, Ensemble-Instruct explores two main ideas to solve this problem: (1) Categorizing and simplifying the ICL prompts to ease the few-shot learning process, and (2) Ensembling over multiple LM outputs to improve both accuracy and diversity of the generated data.
A standard instruction-tuning sample exemplifies a task comprising: (a) an _instruction_ that describes the action to be performed, (b) an optional _input_ on which the action is performed, and (c) the _output_ of the action. Similar to Self-Instruct, we generate samples in two stages: instruction generation and instance generation, where an _instance_ comprises an input (optional) and an output. Unlike Self-Instruct, Ensemble-Instruct seeks to simplify the problem for the generating LM by first categorizing the examples into two types--those with an input and those without--and then employing separate pipelines for the two that leverage their own unique and simplified prompts (SS2.1). Further, it ensembles over the outputs of different LMs in
two complementary ways: (1) including examples generated by a heterogeneous collection of LMs in the final set to increase diversity, and (2) majority voting followed by filtering low-consensus examples to improve accuracy (SS2.4).
To understand the effects of our proposed methods, we run an extensive evaluation of different models for instruction generation. This includes vanilla language models (T5) ul2-20b Tay et al. (2022), Falcon-40b Penedo et al. (2023), the instruction-tuned models flan-t5-11b Chung et al. (2022) and flan-ul2-20b Tay et al. (2022), and the chat-tuned1 version of GPT-NeoX-20B Black et al. (2022). As base models to fine-tune with our generated data, we use the vanilla LM Pythia-1.4B Biderman et al. (2023) for ablation analysis, MPT-7B2, a decoder-only LM similar to LLaMA Touvron et al. (2023), as well as GPT-JT-6B3, an instructed version of GPT-J Wang and Komatsuzaki (2021) trained on Chain of Thought and Natural instruction datasets among others. All chosen models are open-source and have permissive licenses (Apache-2).
Footnote 1: [https://huggingface.co/togethercomputer/GPT-NeoXT-Chat-Base-20B](https://huggingface.co/togethercomputer/GPT-NeoXT-Chat-Base-20B)
Footnote 2: [https://www.mosaicml.com/blog/mpt-7b](https://www.mosaicml.com/blog/mpt-7b)
Footnote 3: [https://huggingface.co/togethercomputer/GPT-JT-6B-v1](https://huggingface.co/togethercomputer/GPT-JT-6B-v1)
We evaluate the models fine-tuned on the data generated by Ensemble-Instruct on the SuperNatural Instructions (SuperNI) test set Wang et al. (2022) and on 252 user-oriented tasks from Wang et al. (2023). Our contributions can be summarized as follows:
* We propose a technique for generating high-quality instruction-tuning data with 40B-parameter or smaller LMs that are openly accessible, with non-restrictive licenses.
* We outperform Self-Instruct training of GPT3 (175B) with a far smaller base model (MPT-7B). The technique also improves the performance of instruction-tuned GPT-JT-6B.
* Ablation studies demonstrate the importance of the individual components of our technique.
* We release the synthetic instruction-tuning dataset of about 45k samples along with our ICL templates and codebase.
## 2 Ensemble-Instruct
A high-level overview of Ensemble-Instruct is given in Figure 1. The algorithm has three main components: (i) Categorization of tasks and their associated prompts, (ii) Generation of instructions followed by instances, where an instance comprises an input (optional) and an output, and (iii) Ensemble of outputs from multiple LMs.
### Categorization of Tasks and Prompts
We divide the tasks, i.e. the instruction-tuning samples, into two categories: those where the instruction needs an input to be meaningful (type A) and those where it does not (type B). Examples of tasks from these two types can be seen in Figures 1 and 2. Among the seed tasks of Wang et al. (2023), 125 belong to type A and 50 to type B. For each category, we employ a dedicated pipeline that (a) uses ICL demonstrations only of that type, and (b) tailors the number of demonstrations to the difficulty of the type, at different stages of generation.
### Instruction Generation
For type A tasks, we use 24 ICL demonstrations during instruction generation. Out of those, 20 are randomly sampled from the 125 seed tasks of the same type, and 4 are sampled from instructions previously generated by the model itself. For type B tasks, we use 10 ICL demonstrations, of which 8 are sampled from the 50 type B seed tasks and 2 from previously generated synthetic instructions. Further, we adopt the approach of Wang et al. (2023) of adding a new instruction to the set only if its Rouge-L Lin (2004) score with every existing instruction is less than 0.7.
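A minimal sketch of this Rouge-L novelty filter is given below; the rouge-score dependency and the function name are illustrative choices rather than part of our released codebase description.

```
from rouge_score import rouge_scorer  # assumed Rouge-L implementation

_scorer = rouge_scorer.RougeScorer(["rougeL"])

def is_novel(candidate, pool, max_rouge=0.7):
    # Keep a generated instruction only if its Rouge-L score with every
    # existing instruction stays below 0.7.
    return all(_scorer.score(inst, candidate)["rougeL"].fmeasure < max_rouge
               for inst in pool)
```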
### Instance Generation
During instance generation, we use 18 ICL demonstrations for type A tasks and 15 for type B tasks, randomly selected from the seed tasks. Figure 2 shows examples of type A and type B tasks, and the prompts used for instance generation.
### Output Ensembling
The instruction and instance generation steps should in principle complete the process of synthesizing an instruction-tuning sample [20]. However, samples generated by small LMs can be inaccurate, which prompts us to design a final step of output ensembling. Instead of simply accepting the already generated example, we use an additional set of LMs to predict new outputs, given either the generated instruction-input pair (type A) or the instruction (type B).
The final output is derived by applying the greedy consensus Algorithm 1 to the outputs generated by the different LMs. The algorithm computes the Rouge-L score between all three pairs of outputs. If the lowest Rouge-L is above a threshold \(t\), it returns the first element of the pair with the highest Rouge-L score. This can be seen as a greedy version of Minimum Bayesian Risk decoding [1] with additional thresholding. The minimum threshold \(t\) is set to \(0.01\) across all tasks. It is important to note that if the above process does not select any of the three outputs, the example is filtered out.
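Since the pseudo-code of Algorithm 1 is not reproduced here, the following Python sketch follows the description above; the rouge-score dependency and the use of None to mark a filtered example are illustrative assumptions.

```
from itertools import combinations
from rouge_score import rouge_scorer  # assumed Rouge-L implementation

_scorer = rouge_scorer.RougeScorer(["rougeL"])

def greedy_consensus(outputs, t=0.01):
    # Rouge-L between every pair of candidate outputs.
    scores = {(i, j): _scorer.score(outputs[i], outputs[j])["rougeL"].fmeasure
              for i, j in combinations(range(len(outputs)), 2)}
    # Reject the sample if even the closest pair falls below the threshold t.
    if min(scores.values()) < t:
        return None  # example is filtered out
    best_i, _ = max(scores, key=scores.get)
    # Return the first element of the most similar pair.
    return outputs[best_i]
```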
## 3 Analysis of Instruction Tuning Dataset
We generate multiple instruction-tuning datasets using a heterogeneous set of LMs. Table 1 shows the labels of our synthetic datasets according to the LMs used in different stages of generation. Table 2 summarizes the set of LMs we use for generation.
Figure 1: High-level overview of Ensemble-Instruct for synthetic instruction data generation. The top part generates data for the tasks comprising instruction, input, and output while the bottom part generates for tasks without inputs. The instruction generation and instance generation steps are done using the same LM with few-shot in-context learning. Additional LMs are used for the additional output generation, for which in-context examples are used only when the LM is not previously instruction tuned. In each box, the bottom gray portion gives an example of what is produced during that step.
### Instance vs. Output Generation
As shown in Table 1, we use a distinct set of LMs for instruction and instance generation on one hand and output generation for ensembling on the other. The motivations are two-fold: (1) We observed that only relatively large decoder only models with 20B parameters or more are capable of generating input-output instances (type A). Therefore, we use decoder only models including falcon, gpt-neoxt-chat for input-output instance generation. (2) Instruction-tuned models are capable of generating high quality zero-shot outputs. Therefore, we use instruction-tuned models including flan-ul2, flan-t5-xxl, gpt-neoxt-chat for additional output generation for ensembling. We found that vanilla LMs ul2, falcon lag behind instruction-tuned models for output generation, as shown in eo-falcon-lm of Table 4.
Table 3 reports the number of valid instance generations, as well as samples accepted by the ensemble Algorithm 1, using flan-ul2 and flan-t5-xxl as additional outputs. We show results for 100 random samples using different models (falcon, flan-ul2, gpt-neoxt-chat) to generate instructions and type A instances using the same prompt and examples 5. Instructed models struggle to generate valid instances and in particular flan-ul2 generates no valid instance for the 100 samples. Although not shown in the table, most LMs are capable of generating type B instructions and instances, indicating that generating instructions and instances that do not require an input is an easier task than generating those that do.
Footnote 5: See [https://github.com/IBM/ensemble-instruct/blob/main/ensemble_instruct/sample_instances.py](https://github.com/IBM/ensemble-instruct/blob/main/ensemble_instruct/sample_instances.py) for instance rejection criteria and scripts/ensemble_instruct.sh for experiment reproduction.
### Small LM Dataset Comparison
We instruction-tune Pythia-1.4B-deduped with different datasets and evaluate them on the 119 tasks of the SuperNI test set. For validation, we use 10,589 samples from 106 SuperNI training tasks. Note that the validation and test sets have zero task overlap. We instruction-tune the model for 5 to 7 epochs and select the checkpoint with the highest validation Rouge-L score for evaluation. Performances of these tuned models on the test set are shown in Table 4, where m-self-inst denotes the algorithm and ICL templates of Wang et al. (2023) applied to {ul2, neox}, and f-self-inst, the algorithm and ICL templates of Wang et al. (2023) applied to falcon. We also show the performance of pythia-1.4B-deduped fine-tuned with two ex
\begin{table}
\begin{tabular}{l|c|c|c} \hline
**Label** & **Instructions** & **Instances** & **Additional Outputs for Ensembling** \\ \hline so-falcon & falcon & falcon & – \\ so-{ul2, neox} & ul2, gpt-neoxt-chat & ul2, gpt-neoxt-chat & – \\ eo-falcon-lm & falcon & falcon & ul2, gpt-neoxt-chat \\ eo-falcon-ilm & falcon & falcon & flan-ul2, gpt-neoxt-chat \\ eo-{ul2, neox}-ilm & ul2, gpt-neoxt-chat & ul2, gpt-neoxt-chat & flan-ul2, gpt-neoxt-chat \\ \hline \end{tabular}
\end{table}
Table 1: Labels of our synthetic tuning datasets according to the LMs used for generating instructions, instances and additional outputs for ensembling. Datasets with outputs from a single LM and an ensemble of LMs are prefixed with so- and eo-, respectively. The rest of each label specifies the models that were used at different stages of the process. If additional outputs were generated using instruction-tuned LMs for ensembling, the dataset is suffixed with -ilm. If vanilla LMs were used for the same purpose, we use the suffix ‘-lm. With instruction-tuned LMs, we generate the output zero-shot; for vanilla LMs, we use few-shot ICL.
\begin{table}
\begin{tabular}{l|c|c|c} \hline
**Model** & **instruction** & **instance** & **ensemble** \\ \hline falcon & 100 & 72 & 49 (68\%) \\ gpt-neoxt-chat & 100 & 40 & 25 (63\%) \\ flan-ul2 & 100 & 0 & 0 (0\%) \\ \hline \end{tabular}
\end{table}
Table 3: Number of valid type A instructions and instances generated by different models for 100 samples, as well as the number (and percentage) of samples accepted by Algorithm 1. All models share the same prompt and examples.
ternal datasets, alpaca6 and self-inst7 for comparisons with much larger training data obtained with the self-instruct algorithm.
Footnote 6: [https://huggingface.co/datasets/yahma/alpaca-clean-clean](https://huggingface.co/datasets/yahma/alpaca-clean-clean)
Footnote 7: [https://github.com/yizhongw/self-instruct/blob/main/data/gpt3_generations/batch_221203/all_instances_82x.json](https://github.com/yizhongw/self-instruct/blob/main/data/gpt3_generations/batch_221203/all_instances_82x.json)
The performance gap between m-self-inst and so-{ul2, neox} shows that our categorization and simplification of ICL prompts for instruction and instance generation already improves performance over Self-Instruct. The same applies to the larger falcon model, with so-falcon outperforming f-self-inst by a large margin. Output ensembling with instruction-tuned LMs further improves performance in both settings. Importantly, we find ensembling with vanilla LMs via ICL less effective than ensembling with instruction-tuned LMs that were applied zero-shot. Finally, we produce data that is more sample-efficient than Self-Instruct: With only about 30k examples, so-falcon yields a Rouge-L score of 34.4, which is equal to what Self-Instruct yields with about 82k examples.
### Qualitative Analysis
We randomly select 140 samples (40 with an input and 100 with no input) from eo-{ul2, neox}-ilm and manually assign one of three categories to each: good, bad and maybe. good indicates that there are no errors in the instruction, input (optional) and output, and the sample as a whole is coherent. maybe indicates that the input and the output do not contain errors, but the quality is questionable, e.g., the output is not complete. bad indicates that the input or the output contains errors and is incoherent with the instruction.
Manual evaluation results, carried out by one of the authors, are shown in Table 5. We find that examples containing only an instruction and an output (type B) are generally of higher quality (77% good) than those also containing an input (type A) (55% good). This difference in quality is reflective of the relative difficulty of generating them by smaller models, i.e. it is easier to generate output-only instances, as suggested in §3.1. Out of the 24,809 m-self-inst examples in Table 4 (after excluding the 175 seed tasks), 20,752 (83.6%) are of type B, further demonstrating that it is easier to generate output-only instances. The Ensemble-Instruct pipeline avoids such unbalanced generation by first categorizing the tasks and then leveraging separate sets of simplified prompts for each. Each of our data sets generated with Ensemble-Instruct is an almost even split between instructions with and without an input.
Figure 3 shows some synthetic examples before and after output ensembling, depicting a few different ways in which ensembling improves the quality of the generated output. Regarding the effect of ensembling, observations show that it is particularly effective in selecting accurate output when it is short, e.g. classification tasks, via exact match. For longer outputs from generation tasks, e.g. summarization, the algorithm often filters out non-sensical outputs with hallucinations.
\begin{table}
\begin{tabular}{l c c} \hline
**Dataset** & **\# samples** & **Rouge-L** \\ \hline zero-shot baseline & 0 & 9.8 \\ \hline alpaca & 51,760 & 33.4 \\ self-inst & 82,612 & 34.4 \\ \hline m-self-inst & 24,984 & 28.5 \\ so-{ul2, neox} & 25,660 & 33.6 \\ eo-{ul2, neox} -ilm & 18,218 & 38.3 \\ \hline f-self-inst & 38,624 & 25.6 \\ so-falcon & 30,537 & 34.4 \\ eo-falcon-lm & 26,503 & 32.9 \\ eo-falcon-ilm & 26,701 & 37.1 \\ \hline \end{tabular}
\end{table}
Table 4: Efficacy of synthetic instruction tuning datasets measured by the performance of pythia-1.4B-deduped tuned models on the SuperNI test set. Dataset labels are described in Table 1. alpaca and self-inst are external synthetic datasets for further comparisons. M-self-inst denotes the algorithm and ICL templates of Wang et al. (2023) applied to {ul2, neox}. F-self-inst denotes the algorithm and ICL templates of Wang et al. (2023) applied to falcon. All training sets include the 175 seed tasks and the learning rate is 1e-5.
\begin{table}
\begin{tabular}{l c c c} \hline & \multicolumn{3}{c}{**Instance Type**} \\
**criteria** & **output** & **input-output** & **total** \\ \hline good & 77 & 22 & 99 (70.7\%) \\ bad & 14 & 15 & 29 (20.7\%) \\ maybe & 9 & 3 & 12 (8.6\%) \\
**total** & 100 & 40 & 140 \\ \hline \end{tabular}
\end{table}
Table 5: Manual evaluation of synthetic instruction tuning data quality on 140 randomly selected samples.
_Instance Generation with Both an Input and an Output:_
Generate examples for the following instructions. The instruction requires input and output instances. And you have to generate both input and output.
instruction: Extract all the country names in the paragraph, list them separated by commas.
input: Dr. No is the sixth novel by the English author Ian Fleming to feature his British Secret Service agent James Bond. Written at Fleming's Goldeneye estate in Jamaica, it was first published in the United Kingdom by Jonathan Cape in 1958. In the novel Bond looks into the disappearance in Jamaica of two fellow MI6 operatives who had been investigating Doctor No. Bond travels to No's Caribbean island and meets Honeychile Rider, who is there to collect shells. They are captured and taken to a luxurious facility carved into a mountain. The character of Doctor No, the son of a German missionary and a Chinese woman, was influenced by Sax Rohmer's Fu Manchu stories. Dr. No was the first of Fleming's novels to face widespread negative reviews in Britain, but it was received more favourably in the United States.
output: English, British, Jamaica, the United Kingdom, German, Chinese, Britain, the United States. |EoS|
instruction: Sort the given input ascendingly.
input: [10, 92, 2, 5, -4, 92, 5, 101]
output: [-4, 2, 5, 5, 10, 92, 92, 101]
|EoS|
instruction: In this task, you need to compare the meaning of the two sentences and tell if they are the same. Output yes or no.
input: Sentence 1: The teacher is speaking to the class. Sentence 2: The teacher is speaking to the students.
output: yes
|EoS|
instruction:
_Instance Generation with Only an Output:_
Generate examples for the instructions. The instruction does not require input and generate the output directly.
instruction: Which exercises are best for reducing belly fat at home?
output: Lying Leg Raises, Leg In And Out, Plank, Side Plank, Sit-ups
|EoS|
instruction: Is working part-time a good idea for teenagers between 15 and 19 years old?
output: There are both pros and cons to having a part-time job as a teenager. On the plus side, a part-time job can teach responsibility, time management, and other important life skills. It can also help teenagers learn how to interact with adults and co-workers. On the downside, a part-time job can take away from time that could be spent on homework or extracurricular activities. It can also be difficult to find a job that works around a school schedule.
|EoS|
instruction:
## 4 Experimental Results
We carry out automatic evaluations using Rouge-L on the evaluation sets in Table 6. Both \(119\) SuperNI tasks and \(252\) user-oriented tasks are identical to those evaluated in Wang et al. (2023).8
Footnote 8: We refer the reader to §4.4 of Wang et al. (2023) on the characteristics of the \(252\) user-oriented test set and §A.1 on
\begin{table}
\begin{tabular}{l r r} \hline
**Evaluation Data** & **\# tasks** & **\# samples** \\ \hline SuperNI & 119 & 11,810 \\ User-Oriented & 252 & 252 \\ \hline \end{tabular}
\end{table}
Table 6: Evaluation datasets for automatic evaluations using Rouge-L. None of the tasks in the evaluation are seen during training.
Figure 2: Example ICL templates for instance generation. The top 3 examples are for instances with both an input and an output; the bottom 3 are for output-only instances. We append the special token |EoS| to the end of each example, clearly demarcating example boundaries, to make sure generation stops after the final output token.
Footnote 9: They first train \(2.62\) billion tokens using the UL2 loss on the Pile, (Gao et al., 2020), followed by \(0.92\) billion tokens with a mixture of 5% of Chain-of-Thought (COT, Longpre et al. (2023)), 20% of Public Pool of Prompts (P3, (Bach et al., 2022)), 20% of SuperNI, and 55% of the Pile.
We set aside \(106\) tasks (\(10,589\) samples) from the SuperNI \(756\) training tasks as the validation data set. For SuperNI instruction tuning, we exclude the validation set from training to simulate evaluation on unseen tasks.
We fine-tune \(2\) base LMs on the instruction tuning data generated by the current technique: (1) a vanilla LM, mpt-7b, and (2) an instruction tuned LM, gpt-jt-6b.10 To fine-tune these models, we adopt QLoRA (Dettmers et al., 2023), which enables us to train both LMs with a single A100 GPU (40GB memory) within 24 hours. We also carried out full fine-tuning of mpt-7b for \(2\) data sets, eo-{ul2,neox}-ilm and SuperNI with \(2\) A100 GPUs (80GB memory). The results are shown in Tables 7 and 8 for the SuperNI test set, and in Table 9 for the 252 user-oriented test set.
Footnote 10: They first train \(2.62\) billion tokens using the UL2 loss on the Pile, (Gao et al., 2020), followed by \(0.92\) billion tokens with a mixture of 5% of Chain-of-Thought (COT, Longpre et al. (2023)), 20% of Public Pool of Prompts (P3, (Bach et al., 2022)), 20% of SuperNI, and 55% of the Pile.
In Table 7, mpt-7b fine-tuned on our synthetic data generated from vanilla LMs (SD I) outperforms both T0 and GPT3\({}_{\text{SELF-INST}}\) despite the fact that the latter are fine-tuned on over 80K samples whereas mpt-7b is fine-tuned only on around 30K samples. mpt-7b fine-tuned on our synthetic data generated from instruction-tuned models (SD II) outperforms its counterpart fine-tuned on the data generated using vanilla LMs (SD I) by up to 3 points. Full fine-tuning outperforms QLoRA fine-tuning by 1.4 on eo-{ul2,neox}-ilm (46.8 vs. 45.4). Full fine-tuning again outperforms QLoRA fine-tuning by 2.2 on SuperNI training (50.4 vs. 48.2). mpt-7b fine-tuned on the combination of the two synthetic data sets, eo-{ul2, neox \(\cup\) falcon}-ilm, and the SuperNI training set improves the Rouge-L score over SuperNI-only training by 2.2 points (from 48.2 to 50.4). We see a similar pattern in Table 8 for the instruction-tuned base LM gpt-jt-6b. The fact that our synthetically generated data significantly improve the performance of the instruction-tuned LM suggests that our technique generates data sufficiently different from the instruction tuning data incorporated into the base LM training.
In Table 9, we note that both base models, mpt-7b and gpt-jt-6b, perform worse on the user-oriented data set than on the SuperNI test set: 10.6 vs. 16.6 with mpt-7b and 6.2 vs. 10.4 with gpt-jt-6b. Fine-tuning these models on about 45K samples of the synthetic data provides a significant boost to the Rouge-L scores, from 10.6 to 22.1 for mpt-7b, and from 6.2 to 21.5 for gpt-jt-6b. This suggests that the synthetic data we generate capture the characteristics of user-oriented instructions to a certain degree. Consistent with the results noted in Table 4 for the SuperNI test set, the data generated by our technique is more effective than the data generated using Self-Instruct (m-self-inst, f-self-inst) on the user-oriented data set as well.
Figure 3: Instruction tuning dataset examples before and after output ensembling. Ensembling generally improves different aspects of output quality, including correctness and adherence to the specifics of the question. We observe a side effect of shorter outputs being preferred over longer ones in generation tasks even if in some cases that makes the output less accurate, as shown in the last example.
In Table 10, we show experimental results with other much larger models to illustrate the scalability of the proposed Ensemble-Instruct to any black-box models. Regardless of the base model sizes, ranging from 6B to 40B, fine-tuning the base model with the synthetic data eo-{ul2, neox \(\cup\) falcon}-ilm improves the Rouge-L score significantly. The fine-tuned model performances seem to correlate well with the base model's parameter sizes, i.e., 43.1 for the smallest gpt-jt-6b, 49.9 for the largest falcon-40b, and all other model sizes and scores in between. In particular, the experimental results on falcon-40b indicate that Ensemble-Instruct is not an instance of model distillation, in the sense that the synthetic data generated from falcon-40b and smaller models
| Models | # Params | Training Set | # Samples | Rouge-L |
| --- | --- | --- | --- | --- |
| **Vanilla Base LMs** | | | | |
| T5-LM, Wang et al. (2023) | 11B | None (zero-shot) | 0 | 25.7 |
| GPT3, Wang et al. (2023) | 175B | None (zero-shot) | 0 | 6.8 |
| MPT | 7B | None (zero-shot) | 0 | 16.6 |
| **Instruction-tuned w/ SD I** | | | | |
| T0, Wang et al. (2023) | 11B | Self-Instruct (GPT3) | 82,612 | 33.1 |
| GPT3\({}_{\text{SELF-INST}}\), Wang et al. (2023) | 175B | Self-Instruct (GPT3) | 82,612 | 39.9 |
| MPT\({}^{\text{qlora}}\), ours | 7B | so-falcon | 30,537 | 43.1 |
| MPT\({}^{\text{qlora}}\), ours | 7B | eo-falcon-lm | 26,503 | 43.2 |
| **Instruction-tuned w/ SD II** | | | | |
| MPT\({}^{\text{qlora}}\), ours | 7B | eo-falcon-ilm | 26,701 | 44.4 |
| MPT\({}^{\text{ft}}\), ours | 7B | eo-{ul2,neox}-ilm | 18,218 | 46.8 |
| MPT\({}^{\text{qlora}}\), ours | 7B | eo-{ul2,neox}-ilm | 18,218 | 45.4 |
| MPT\({}^{\text{qlora}}\), ours | 7B | eo-{ul2, neox \(\cup\) falcon}-ilm | 44,744 | 46.4 |
| **Instruction-tuned w/ SuperNI** | | | | |
| Tk-Instruct, Wang et al. (2023) | 11B | SuperNI | 50,000 | 46.0 |
| GPT3, Wang et al. (2023) | 175B | SuperNI | 50,000 | 49.5 |
| MPT\({}^{\text{ft}}\), ours | 7B | SuperNI | 64,528 | 50.4 |
| MPT\({}^{\text{qlora}}\), ours | 7B | SuperNI | 64,528 | 48.2 |
| **Instruction-tuned with SD II & SuperNI** | | | | |
| GPT3\({}_{\text{SELF-INST}}\), Wang et al. (2023) | 175B | Self-Instruct & SuperNI | 132,612 | 51.6 |
| MPT\({}^{\text{qlora}}\), ours | 7B | eo-combo-ilm & SuperNI | 109,272 | 50.4 |

Table 7: Evaluation results on the SuperNI test set. SD I denotes synthetic data generated from only vanilla LMs, and SD II, synthetic data generated from the combination of vanilla and instruction-tuned LMs. Superscript \({}^{\text{ft}}\) denotes full fine-tuning; superscript \({}^{\text{qlora}}\), QLoRA fine-tuning. Learning rate is set to 1e-6 for full fine-tuning and 5e-5 for QLoRA tuning. eo-combo-ilm denotes eo-{ul2, neox \(\cup\) falcon}-ilm. The combination of the synthetic data eo-combo-ilm and the SuperNI training set improves over the SuperNI training set alone by 2.2 points, from 48.2 to 50.4. Instruction tuning with SD II outperforms instruction tuning with SD I. For instruction tuning with SuperNI, we subsample 100 instances from each of the 650 training tasks.
| Trainset | # Samples | Rouge-L |
| --- | --- | --- |
| zero-shot | 0 | 10.4 |
| falcon | 30,537 | 41.7 |
| eo-falcon-lm | 26,503 | 40.5 |
| eo-falcon-ilm | 26,701 | 41.9 |
| eo-{ul2,neox}-ilm | 18,218 | 42.7 |
| eo-combo-ilm | 44,744 | 43.1 |
| SuperNI | 64,528 | 44.2 |

Table 8: Results of the (instruction-tuned base LM) gpt-jt-6b fine-tuned on synthetic data. eo-combo-ilm denotes eo-{ul2, neox \(\cup\) falcon}-ilm. All models are fine-tuned with QLoRA with learning rate 5e-5.
| Models | Trainset | Rouge-L |
| --- | --- | --- |
| mpt-7b | zero-shot | 10.6 |
| mpt-7b | m-self-inst | 20.6 |
| mpt-7b | f-self-inst | 21.6 |
| mpt-7b | eo-combo-ilm | 22.1 |
| GPT-JT-6b | zero-shot | 6.2 |
| GPT-JT-6b | m-self-inst | 16.5 |
| GPT-JT-6b | f-self-inst | 17.4 |
| GPT-JT-6b | eo-combo-ilm | 21.5 |

Table 9: Results on the 252 user-oriented test set.
significantly improves all models' zero-shot performance, including the largest model falcon-40b.
## 5 Related Work
This work is directly related to Self-Instruct Wang et al. (2023), borrowing from it the initial seed tasks and the idea of using ICL for tuning a base model into an instruction-following model. It could also be seen as related to follow-up works such as: Alpaca Taori et al. (2023), a practical application of Self-Instruct; Evol-Instruct Xu et al. (2023), which iteratively evolves instructions into increasing difficulty levels; and Dromedary Sun et al. (2023), which combines self-instruct with principle-based correction, similar to Constitutional AI Bai et al. (2022). One fundamental limitation of these approaches is that they resort to very large language models (around 175B parameters or 65B parameters at the minimum) that are also proprietary and non-public. Here we explore techniques for generating instruction tuning data using LMs that are much smaller (around 10B-40B parameters) and have permissive licenses. We crucially draw on a heterogeneous mixture of smaller LMs to generate diverse outputs and then ensemble over multiple outputs to select high-quality synthetic examples, while also simplifying the instruction creation process.
The use of a reference metric, such as Rouge-L, to ensemble the outputs of multiple language distributions is a common technique in Minimum Bayesian Risk decoding, with applications to speech-to-text Goel and Byrne (2000), machine translation Kumar and Byrne (2004), language modeling Suzgun et al. (2022) and parsing Lee et al. (2022), among others. Here we use a similar technique in the context of instruction generation. To the best of our knowledge, this is the first application of such an approach to instruction-tuning data generation.
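As a concrete illustration of this Rouge-L-based consensus idea, the sketch below selects, from the outputs proposed by several LMs for the same instance, the candidate that agrees most (by Rouge-L F-measure) with the remaining candidates, discarding the instance when consensus is low. It uses the rouge_score package; the 0.3 consensus threshold and the selection rule itself are illustrative assumptions rather than the exact procedure used in our pipeline.

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def ensemble_select(candidates, min_consensus=0.3):
    """MBR-style consensus: return the candidate with the highest average
    Rouge-L agreement with the others, or None if consensus is too low."""
    best, best_score = None, -1.0
    for i, cand in enumerate(candidates):
        others = [c for j, c in enumerate(candidates) if j != i]
        score = sum(
            scorer.score(o, cand)["rougeL"].fmeasure for o in others
        ) / max(len(others), 1)
        if score > best_score:
            best, best_score = cand, score
    return best if best_score >= min_consensus else None

# Example: outputs from three different LMs for the same instruction
outs = ["Paris is the capital of France.",
        "The capital of France is Paris.",
        "France's capital city is Paris."]
print(ensemble_select(outs))
```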
Jiang et al. (2023) proposes LLM-Blender, an ensembling framework that improves generation quality by leveraging the diverse strengths of multiple language models. While we utilize output ensembling in the context of synthetic data generation with Rouge-L as the reference metric, LLM-Blender focuses on improving model output quality using PairRanker and GenFuser; both approaches capitalize on the efficacy of ensembling as a way of improving output quality.
Also related to this work are approaches directly distilling from ChatGPT or GPT-4 OpenAI (2023) without specific instruction strategies, such as Vicuna,10 which distills ChatGPT; Baize Xu et al. (2023), which distills conversations; and Orca Mukherjee et al. (2023), which uses a large amount of ChatGPT and GPT-4 outputs and combines FLAN tasks, system prompts and machine-generated explanations sampled from these models. The strength of these approaches seems to rely more on the amount and quality of available teacher samples than on the inductive biases of the self-instructing technique, and they still rely on proprietary models with non-permissive licenses.
Footnote 10: [https://lmsys.org/blog/2023-03-30-vicuna/](https://lmsys.org/blog/2023-03-30-vicuna/)
## 6 Conclusion
We present a novel technique to generate instruction-tuning data through ICL, following the recent Self-Instruct work Wang et al. (2023). Unlike Self-Instruct, we propose techniques that explicitly avoid the use of proprietary language models like GPT-3, ChatGPT or GPT-4. We show that when using smaller models, Self-Instruct becomes less performant. To overcome this, we draw on two main ideas: (a) Categorization and simplification of ICL templates to make prompt learning easier, and (b) Ensembling over multiple LM outputs to select high-quality examples. These ideas allow us to outperform training with Self-Instruct while utilizing the same seed tasks. The resulting synthetic data enables base models like MPT-7B to outperform GPT-3, a far larger model with 175B parameters. The results of this work also encourage the departure from closed-access models for advancing instruction generation algorithms.
| Model-ParamSize | zero-shot | fine-tuned |
| --- | --- | --- |
| gpt-jt-6b | 10.4 | 43.1 |
| mpt-7b | 16.6 | 46.4 |
| open-llama-13b | 11.9 | 46.7 |
| mpt-30b | 12.2 | 49.5 |
| falcon-40b | 12.7 | 49.9 |

Table 10: Fine-tuning results on large models demonstrating the scalability of the Ensemble-Instruct technique to any black-box models. Zero-shot and fine-tuned model scores are Rouge-L on the SuperNI test set. The performance improvement of falcon-40b after fine-tuning, compared with its zero-shot performance, indicates that Ensemble-Instruct is not an instance of model distillation. All models are fine-tuned with eo-{ul2, neox \(\cup\) falcon}-ilm in Table 7.
## 7 Limitations
Due to time and resource constraints, some parts of the experimental setup are not ideal. All model outputs were collected from an internal API serving models from HuggingFace11. Due to limitations of this API, different numbers of samples were collected for each model, which may have introduced noise in the performance estimates. We report the exact number of samples used for training along with the results. Note that for cases using ensembling one has to take into account that there is an additional filtering process that removes samples. We provide approximate rates for ensembling filtering in Table 3. For the small user-oriented test set containing 252 tasks, automatic evaluation is arguably not ideal. Proper human evaluation would provide a clearer signal, but it requires a significant investment of time and resources. The method employs a set of various LMs, and therefore the generated synthetic data can be susceptible to the limitations of such LMs, particularly the biases inherent in their training data, which may be harmful and lead to synthetic data containing hate, abuse, and social stereotypes.
Footnote 11: [https://huggingface.co/](https://huggingface.co/)
|
2303.13859 | XGC-VQA: A unified video quality assessment model for User,
Professionally, and Occupationally-Generated Content | With the rapid growth of Internet video data amounts and types, a unified
Video Quality Assessment (VQA) is needed to inspire video communication with
perceptual quality. To meet the real-time and universal requirements in
providing such inspiration, this study proposes a VQA model from a
classification of User Generated Content (UGC), Professionally Generated
Content (PGC), and Occupationally Generated Content (OGC). In the time domain,
this study utilizes non-uniform sampling, as each content type has varying
temporal importance based on its perceptual quality. In the spatial domain,
centralized downsampling is performed before the VQA process by utilizing a
patch splicing/sampling mechanism to lower complexity for real-time assessment.
The experimental results demonstrate that the proposed method achieves a median
correlation of $0.7$ while limiting the computation time below 5s for three
content types, which ensures that the communication experience of UGC, PGC, and
OGC can be optimized altogether. | Xinhui Huang, Chunyi Li, Abdelhak Bentaleb, Roger Zimmermann, Guangtao Zhai | 2023-03-24T08:47:02Z | http://arxiv.org/abs/2303.13859v1 | XGC-VQA: A Unified Video Quality Assessment Model for User, Professionally, and Occupationally-Generated Content
###### Abstract
With the rapid growth of Internet video data amounts and types, a unified Video Quality Assessment (VQA) is needed to inspire video communication with perceptual quality. To meet the real-time and universal requirements in providing such inspiration, this study proposes a VQA model from a classification of User Generated Content (UGC), Professionally Generated Content (PGC), and Occupationally Generated Content (OGC). In the time domain, this study utilizes non-uniform sampling, as each content type has varying temporal importance based on its perceptual quality. In the spatial domain, centralized downsampling is performed before the VQA process by utilizing a patch splicing/sampling mechanism to lower complexity for real-time assessment. The experimental results demonstrate that the proposed method achieves a median correlation of \(0.7\) while limiting the computation time below 5s for three content types, which ensures that the communication experience of UGC, PGC, and OGC can be optimized altogether.
Xinhui Huang\({}^{1}\)*, Chunyi Li\({}^{1}\)*, Abdelhak Bentaleb\({}^{3}\), Roger Zimmermann\({}^{2}\), and Guangtao Zhai\({}^{1}\); Shanghai Jiao Tong University\({}^{1}\), National University of Singapore\({}^{2}\), Concordia University\({}^{3}\)
Footnote †: These authors contributed equally to this work.
Video Quality Assessment, User Generated Content, Professionally Generated Content, Occupationally Generated Content, Perception-inspired Communication
## 1 Introduction
Video has become the dominant data type in today's internet and accounts for 82% of network bandwidth usage [1]. To cope with the massive amount and different types of video data being transmitted, classical Video Quality Assessment (VQA) has been used as an evaluation criterion for video transmission or encoding performance [2]. However, with the rapid development of perception-inspired communication [3], VQA has been increasingly used to inspire video communication beyond just an overall perceptual quality gauge, such as network resource allocation[4], video coding mode selection[5], and real-time bitrate guidance [6]. Due to the real-time requirement of services above, a unified VQA metric for various video contents is needed in a No Reference (NR) scenario with low-complexity.
However, the differences across video content types create a great challenge for designing such a unified model. For today's mainstream video providers, User Generated Content (UGC), Professionally Generated Content (PGC), and Occupationally Generated Content (OGC) are the three major content types. UGC [7] is created by a regular user of social platforms, PGC [8] is quality content created by professional users, and OGC [9] is produced by practitioners.
When offering perception-inspired bitrate guidance in media delivery systems, evaluating the perceptual quality of all video frames would compromise real-time performance, so downsampling is required. All three video types require downsampling in both the spatial and temporal domains. Due to the huge differences in resolution, luminance, and quality between UGC, PGC, and OGC, it is not feasible to use the same VQA method of downsampling for all three. For example, there exist already some fast and remarkable VQA metrics [10, 11], but they do not perform well on UGC. There exist also low-complexity metrics to deal with the challenge of UGC VQA, but their results on OGC are not consistent with the Human Visual System (HVS) due to OGC's higher resolution and dynamic range. Considering the differences between UGC, PGC, and OGC, how to build a unified VQA metric is still an open research question.
## 2 Related Works
In the spatial domain, classic VQA models use a visual saliency map for downsampling. Xu _et al._ proposed learning a video saliency model based on state-of-the-art H.265 codec features [12]. Another approach is to randomly sample video fragments into patches [13]. However, since the HVS has different visual saliency for UGC, PGC, and OGC, their respective saliency maps are also different. Therefore, random sampling has difficulty obtaining a high-saliency region. On the other hand, calculating the saliency map introduces extra time complexity, which contradicts the real-time requirement. Thus, a simple and efficient method for sampling the spatial domain is needed.
In the temporal domain, the traditional approach is to consider several continuous frames [14]. However, this can lose a lot of temporal information. Another method is to subsample every few frames evenly [15], resulting in insufficiency of long-term features [16]. However, for UGC, users tend to switch to the next video after the first few seconds when watching low-quality videos. Conversely, for OGC, where the quality of the video is higher, users are more likely to watch the whole video and remember its later parts better, so
the subsequent frames are more important.
Due to the limited generalization ability of the above models on UGC, PGC, and OGC, some databases [9] and metrics [17] are designed for this unified VQA task. However, with the development of internet video services, the specific content of UGC and OGC has also changed. On one hand, as UGC has become easier to produce [1] in recent years, its overall quality has generally declined; on the other hand, with the application of High Dynamic Range (HDR) [18, 19] and the standardization of display devices [20], the dynamic range of high-quality OGC has increased, and some minority resolutions (e.g. 960*540) no longer appear in OGC. Thus, those unified databases / VQA methods should be adapted to today's video content.
Based on these insights, we design XGC VQA, where X stands for the attribute user, professional or occupational, with contributions in the following three aspects. (\(i\)) **Classification**: We introduce an effective classification model for UGC, PGC, and OGC video through a parameter that is used to define the content producer's professionalism. This allows for different downsampling mechanisms for different content. (\(ii\)) **Spatial domain:** A centralized downsampling before the VQA process is conducted based on the patch splicing/sampling mechanism in FastVQA [14]. The sampling density depends on the above professionalism, thus minimizing the input for each UGC, PGC, and OGC without affecting the performance of the model. (\(iii\)) **Temporal domain:** Non-uniform sampling based on different temporal frame importance in UGC, PGC, and OGC. Sampling according to importance allows further reduction in model complexity without compromising performance.
## 3 Proposed Method
When aiming to apply a real-time unified VQA metric to all UGC, PGC, and OGC videos, we need to classify a video first and adopt different downsampling strategies for their specific content. The confidence parameter \(x\) for XGC is obtained by a linear combination of the video features. In the spatial domain, we choose different attention maps according to the video types; in the temporal domain, we choose the frames to be evaluated according to the confidence parameter. The framework of our model is shown in Fig. 1.
### Classification Modules
Among the nine features of the video according to the previous study [9], the greatest differences among UGC, PGC, and OGC are brightness, resolution, and image quality. In this study, we first separate UGC from PGC, then distinguish between PGC and OGC.
The difference between UGC and PGC is that UGC is recorded by ordinary users and is not as professional as PGC in terms of equipment, and this inferiority is mainly reflected in two hardware-level constraints as follows. (\(i\)) **Brightness:** UGC is often shot with mobile phones, which can result in poorly lit footage due to the limitations of the device's camera and lighting conditions. Additionally, shooting with the front-facing camera can result in uneven brightness due to the camera's placement and the lighting direction [21]. (\(ii\)) **Resolution:** UGC camera clarity is not high, or limited by network bandwidth, storage space, etc. The resolution of UGC is often between 360p-720p, while PGC and OGC can both reach 1080p or even 4K [22, 23, 24].
Fig. 2 shows several selected screenshots from the UGC, PGC, and OGC video categories [22, 23, 24], and we note that the PGC photo has a higher resolution and more uniform luminance distribution, leading to a better experience for the user.
Therefore, when we distinguish between UGC and PGC, we mainly consider these two aspects. Generally speaking, UGC has at least one defect in unevenness or resolution, and when both are good then UGC can evolve into PGC, so we use the worst of the two to characterize how much a UGC tends
Figure 1: The framework of the proposed method.
Figure 2: Example of illustrating the difference UGC, PGC, and OGC: from the first to the third row, UGC, PGC, and OGC are listed in order of increasing resolution from left to right.
to be PGC, the hardware performance \(\lambda\) and the confidence level \(x\) can be expressed as:
\[\lambda=\min(1-\frac{\mathrm{std(img)}}{\mathrm{mean(img)}},\frac{\sqrt{h\cdot w} }{\sqrt{h_{m}\cdot w_{m}}}) \tag{1}\]
\[x=\alpha\lambda\quad\mathrm{if}\ (\lambda\leq 1) \tag{2}\]
where \(\alpha\) is a linearity coefficient. Empirically, we found the best results when \(\alpha\) equals 0.5. \(img\) represents a frame in the video being evaluated, and \((h,w)\) indicate the height and width of the video. \((h_{m},w_{m})\) is an empirically [22] pre-defined UGC resolution bound; videos with resolution below this bound tend to be classified as UGC. When \(\lambda\leq 1\), it means that hardware (uneven illumination or low resolution) is the limiting factor for a good video. Thus we have a valid distinction between UGC and PGC.
When \(\lambda>1\), it means that the capture equipment and network are already good enough in terms of evenness and resolution; at this point, using better lenses or improving resolution may not improve the video quality, because the monitor at the receiving end is not good enough [25], or the Just Noticeable Difference (JND) of the HVS is not triggered [26]. The difference between PGC and OGC videos at this point lies in the quality of the content. Fig. 2 shows the difference between PGC and OGC, and we can see that OGC is more focused on aesthetic quality and gives a better experience.
Assessing such content quality typically relies heavily on deep learning [27]. Since our model must run in real time, the complexity introduced by the multiple pooling convolutions of a neural network is unacceptable. Given that distortion quality is strongly correlated with aesthetic quality [28], we use the commonly used distortion model [29] to characterize quality, and the confidence level \(x\) can be expressed as follows:
\[x=\alpha+(1-\alpha)\cdot\frac{\mathrm{brisque}(img)}{100}\quad\mathrm{if}\ (\lambda>1) \tag{3}\]
where \(\mathrm{brisque}(\cdot)\) represents the most widely used NR quality model [29]. This allows us to make a valid distinction between UGC, PGC, and OGC.
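For illustration, the classification of equations (1)-(3) can be sketched as follows. The resolution bound \((h_{m},w_{m})\), here set to 720p, and the external BRISQUE scorer are assumptions; only \(\alpha=0.5\) comes from the text above.

```python
import numpy as np

def xgc_confidence(frame, brisque_score, h_m=720, w_m=1280, alpha=0.5):
    """Confidence x following eqs. (1)-(3).

    frame: 2-D grayscale array of one video frame.
    brisque_score: score in [0, 100] from an external NR quality model.
    (h_m, w_m): assumed UGC resolution bound (not specified in the text).
    """
    h, w = frame.shape
    evenness = 1.0 - np.std(frame) / np.mean(frame)
    resolution = np.sqrt(h * w) / np.sqrt(h_m * w_m)
    lam = min(evenness, resolution)            # hardware performance, eq. (1)
    if lam <= 1.0:                             # UGC/PGC regime, eq. (2)
        return alpha * lam
    return alpha + (1.0 - alpha) * brisque_score / 100.0   # PGC/OGC, eq. (3)

# Example: a dim, low-resolution frame yields a small x (UGC-like)
frame = np.random.default_rng(0).uniform(20, 60, size=(360, 640))
print(xgc_confidence(frame, brisque_score=55.0))
```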
Fig. 3 shows the luminance unevenness, resolution, and quality distribution in the UGC[22], PGC[23], and OGC[24] database. We see that U/PGC and P/OGC have certain intersections, while U/OGC has almost no intersection, which shows that our classification method is accurate and effective.
### Spatial Domain
For most VQA models, the computational complexity is exponentially correlated [14] with the image size. Thus, spatial downsampling plays a key role in reducing complexity. Meanwhile, a sub-image should follow the visual saliency of the HVS to reach good VQA performance. Fig. 4 shows the saliency maps of UGC, PGC, and OGC obtained with the most widely-used [30] saliency detection method. The result shows that for UGC, the HVS tends to focus on the video's geometric center; for PGC, the size of this center is doubled; for OGC, there is almost no saliency center. Therefore, the more likely a video is to be classified as UGC, the more concentrated its saliency distribution is, and the more it can be downsampled in the spatial domain. In commonly used deep learning models, only 7/8 of the image is input into the network; we take this as the sampling scenario when \(x=0.5\) in the classification model. Therefore, the spatially sub-sampled image \(img_{s}\) can be expressed as:
\[img_{s}=img((\frac{x}{8}h:\frac{8-x}{8}h),(\frac{x}{8}w:\frac{8-x}{8}w)) \tag{4}\]
Thus, according to the characteristics of UGC, PGC, and OGC, we reduce the network input size while retaining the main area of the image.
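A minimal sketch of the centralized crop in equation (4), assuming the frame is stored as a NumPy-style array:

```python
def spatial_crop(frame, x):
    """Centralized spatial downsampling of eq. (4): keep the central region,
    cropping a fraction x/8 of the height and width from each border."""
    h, w = frame.shape[:2]
    r0, r1 = int(x / 8 * h), int((8 - x) / 8 * h)
    c0, c1 = int(x / 8 * w), int((8 - x) / 8 * w)
    return frame[r0:r1, c0:c1]
```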
### Temporal Domain
For Quality of Experience (QoE) models, existing methods either take all video frames as input [31] or intercept some frames [29] from the video; the former may result in long latency, while the latter may suffer from inadequate long-term feature representation [16]. Therefore, to guarantee real-time prediction, we can only analyze a certain number of frames. Traditionally, the QoE model assumes that the video content of different segments contributes equally to the QoE and therefore samples each segment evenly. However, the following characteristics of video result in the end of the
Figure 4: The averaged visual saliency map comparison between U/P/OGC videos (1 frame computed per video). From yellow to blue indicates the visual saliency from high to low.
Figure 3: Feature distribution of UGC, PGC, and OGC.
segment having a greater impact on QoE than at the start: (_i_) **UGC:** It's generally believed that for UGC videos, people will switch to the next video as soon as they are not interested in the one they are currently watching, so the content of the front part of the video is more important. (_ii_) **OGC:** It is of higher quality and may require payment. As a result, users are more likely to watch the videos in their entirety and remember the content towards the end of the video. Therefore, the content of the later part of the video is considered to be more important.
Considering the two video characteristics mentioned above, we apply brisque [29], the most widely used NR-VQA metric, to UGC, PGC, and OGC. The overall quality \(Q\) of a video composed of \(n\) segments is represented as:
\[Q=\sum\limits_{i=1}^{n}{\omega_{i}Q_{i}} \tag{5}\]
where \(\omega_{i}\) is the weight parameter of a segment's quality \(Q_{i}\). Then we use Spearman Rank-order Correlation Coefficient (SRoCC) as the correlation function \(S\) between QoE and the Mean Opinion Score (MOS) \(M\) from the subjective assessment:
\[S(Q,M)=\sum\limits_{i=1}^{n}{S(\omega_{i}Q_{i},M)}=\sum\limits_{i=1}^{n}{ \omega_{i}S(Q_{i},M)} \tag{6}\]
where \(fr_{i}\) is the number of frames selected from the \(i\)th segment. The more frames sampled, the better the QoE model's predictions correlate with the subjective score. As SRoCC is approximately logarithmically [32] related to the sampling rate, for a fixed total number of sampled frames, the specific sampling scheme can be formulated as an optimization problem:
\[\left\{\begin{array}{l}{fr=\sum\limits_{i=1}^{n}{fr_{i}}}\\ {\max(\sum\limits_{i=1}^{n}{\omega_{i}{\rm log}(fr_{i})})}\end{array}\right. \tag{7}\]
where \(fr\) is the total number of frames sampled. From the Lagrange multiplier method, the derivative of the log function gives \(fr_{i}\) as proportional to \(\omega_{i}\). Following (7), the weight \(\omega_{i}\) of each sub-segment can be estimated by fully sampling that sub-segment while discarding the contents of the others, and measuring the SRoCC between the resulting QoE prediction and the subjective score; this implies the best sampling scheme:
\[\frac{{fr_{i}}}{{fr_{i+1}}}=\frac{{\omega_{i}}}{{\omega_{i+1}}}=\frac{{S({\rm brisque }(seg_{i}),M)}}{{S({\rm brisque}(seg_{i+1}),M)}} \tag{8}\]
where \(seg_{i}\) is the \(i\)th sub-segment and QoE is predicted by \(\mathrm{brisque}(\cdot)\), similar to Section 3.1. When \(n\rightarrow+\infty\), the video is divided into countless segments. We computed the ratio \(\frac{\omega_{1}}{\omega_{n}}\) on the UGC [22] and OGC [24] databases to conduct temporal downsampling: (_i_) **UGC:** the ratio of the weight parameters of the first and last frame is \(\frac{\omega_{1}}{\omega_{n}}=\frac{S(\mathrm{brisque}(seg_{1}),M)}{S(\mathrm{brisque}(seg_{n}),M)}=\frac{5}{3}\). (_ii_) **OGC:** the ratio of the weight parameters is \(\frac{\omega_{1}}{\omega_{n}}=\frac{5}{6}\).
Therefore, assuming a linear relationship between \(\omega_{i}\) and \(t_{i}\), and based on the previous confidence parameter \(x\), we have:
\[\frac{{\omega_{1}}}{{\omega_{n}}}=\frac{5}{{3+3x}} \tag{9}\]
The sampling density starts at \(3+3x\) and ends at \(5\). For example, when \(x=0.1\) and 10 frames are sampled from a 150-frame video, the sampled frame indices \(fr_{t}(x)\) obtained with the above method are shown in Fig. 1. Thus, sampling different frames for different videos, from UGC to OGC, is realized.
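The non-uniform sampling can be sketched with an inverse-CDF draw over a linearly varying weight. Following the ratio in equation (9), the first-frame weight is taken proportional to 5 and the last-frame weight to \(3+3x\); this direction of the density is an interpretation and should be checked against Fig. 1.

```python
import numpy as np

def nonuniform_sample(n_total, n_pick, w_start, w_end):
    """Pick n_pick frame indices out of n_total, with sampling density
    proportional to a weight varying linearly from w_start to w_end."""
    w = np.linspace(w_start, w_end, n_total)
    cdf = np.cumsum(w) / np.sum(w)
    targets = (np.arange(n_pick) + 0.5) / n_pick   # evenly spaced quantiles
    return np.searchsorted(cdf, targets)

x = 0.1                                            # UGC-like confidence
print(nonuniform_sample(150, 10, w_start=5.0, w_end=3.0 + 3.0 * x))
```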
### Training
To realize low complexity while extracting the key features from the spatial downsampled sub-graph, we use Fragment Attention Network (FANet) as the backbone and two supplementary modules which have been proven effective in previous work [14], including: (_i_) **Two separate bias tables:** One for intra-patch attention pairs and one for cross-patch attention pairs. The mechanism for the bias tables is the same as T, but they are learned separately and used for the respective attention pairs. (_ii_) **Non-linear regression:** Before performing pooling, regressing the features can avoid confusion between mini-patches with diverse qualities due to discontinuity between them. The output score \(s_{pred}\) can be expressed as:
\[s_{pred}=Pool(R_{NL}(f)) \tag{10}\]
where \(R_{NL}\) is the non-linear regression layer and \(f\) is the extracted feature.
Overall, these modules are added to Swin-T to adapt it to image fragments and improve its performance.
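A minimal PyTorch sketch of the regress-then-pool idea in equation (10); the hidden width and the choice of mean pooling are assumptions, not the exact FANet head.

```python
import torch
import torch.nn as nn

class RegressThenPool(nn.Module):
    """Per-patch non-linear regression followed by pooling (eq. 10), so that
    patches of different quality are not blended in feature space first."""
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.reg = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.GELU(), nn.Linear(hidden, 1)
        )

    def forward(self, f):                      # f: (batch, n_patches, in_dim)
        return self.reg(f).mean(dim=(1, 2))    # pooled quality score per video

scores = RegressThenPool(in_dim=768)(torch.randn(2, 49, 768))
print(scores.shape)                            # torch.Size([2])
```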
## 4 Performance Evaluation
### Experiment Setup
The proposed metric is validated on the Youtube-UGC [22], Live-VQC (PGC) [23], and Live-HDR (OGC) [24] databases. Youtube-UGC is the most widely used NR-VQA database for UGC content. Due to the recent reduction of UGC quality as Section 2 mentioned, we removed some UGC with excessively high resolution; Live-VQC is a large-scale video quality assessment database which is commonly regarded as PGC[9], whose content is better than UGC. Live-HDR is a database for HDR videos, whose processing technology of improving image brightness and contrast is in line with the high resolution and quality of OGC. Therefore Live-HDR can be used as today's OGC database.
The databases are split randomly in an 80/20 ratio into training and testing sets. For the SVR-based and deep learning-based models, the partitioning and assessment are repeated 1,000 and 10 times, respectively, balancing fair comparison against computational cost, and the average result is reported as the final performance. Our metric is compared with 7 widely-used VQA metrics that have shown outstanding performance in previous VQA tasks. Three major VQA types, namely handcrafted [29, 33, 34], Support Vector Regression (SVR)-based [35, 36], and deep learning-based [37, 14] models, are all included.
We use three common correlation functions, namely SRoCC, the Kendall Rank-order Correlation Coefficient (KRoCC), and the Pearson Linear Correlation Coefficient (PLCC), to measure how well our metric correlates with the subjective scores. The computational time is measured in seconds on an NVIDIA RTX A6000 GPU.
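These three correlations are available in scipy.stats; a minimal sketch follows (note that PLCC is often computed after a nonlinear logistic mapping of the predictions, which is omitted here):

```python
import numpy as np
from scipy.stats import spearmanr, kendalltau, pearsonr

def correlation_report(pred, mos):
    """SRoCC, KRoCC, and PLCC between predicted scores and subjective MOS."""
    srocc, _ = spearmanr(pred, mos)
    krocc, _ = kendalltau(pred, mos)
    plcc, _ = pearsonr(pred, mos)
    return srocc, krocc, plcc

pred = np.array([3.1, 4.0, 2.2, 4.8, 3.6])
mos = np.array([3.0, 4.2, 2.5, 4.9, 3.3])
print(correlation_report(pred, mos))
```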
### Experimental Results and Discussion
Table 1 shows the results, from which we draw several useful findings. The SVR / deep learning-based models yield superior outcomes compared to handcrafted models, with a performance enhancement of approximately 60%. However, this advantage is offset by a computational cost that is nearly twice as high. Under this challenge of high complexity, XGC adopts a FANet architecture similar to FastVQA, ensuring its assessment time is less than 5s for each content type. Additionally, certain models exhibit exceptional proficiency on specific types of videos, but a gradual decline in coefficients is observed across different databases. For instance, FastVQA demonstrates a remarkable correlation, exceeding 0.8, when evaluating UGC but produces only average results, with a correlation of 0.5, when processing OGC. Similarly, V-BLIINDS [36] performs well on OGC but gradually declines on UGC. Ultimately, the proposed XGC-VQA model demonstrates exceptional performance on all databases while sustaining a relatively rapid processing time; in particular, for OGC videos with larger resolution, its computation time is far lower than that of SVR-based models.
### Ablation Study
We conduct an ablation experiment to single out the core contributors of XGC-VQA. The results are listed in Table. 2. The results obtained from our experiment reveal that incorporating either the time or space domain does not significantly contribute to an increase in computation time, thereby ensuring that our model operates in real-time. Conversely, omitting either domain leads to a decrease in experimental results.
## 5 Conclusions
Facing the challenge that the traditional VQA model cannot achieve good results on UGC, PGC, and OGC at the same time, we propose a unified VQA model: the video is classified by confidence parameter \(x\) for UGC, PGC, and OGC; spatial and temporal domain sampling is done based on \(x\). In addition to the pervasiveness of video content, our approach also provides real-time bitrate guidance for all types of videos on the internet today, driving the development and evolution of perception-inspired video communication.
|
2308.06840 | Mesh-Free Hydrodynamic Stability | A specialized mesh-free radial basis function-based finite difference
(RBF-FD) discretization is used to solve the large eigenvalue problems arising
in hydrodynamic stability analyses of flows in complex domains. Polyharmonic
spline functions with polynomial augmentation (PHS+poly) are used to construct
the discrete linearized incompressible and compressible Navier-Stokes operators
on scattered nodes. Rigorous global and local eigenvalue stability studies of
these global operators and their constituent RBF stencils provide a set of
parameters that guarantee stability while balancing accuracy and computational
efficiency. Specialized elliptical stencils to compute boundary-normal
derivatives are introduced and the treatment of the pole singularity in
cylindrical coordinates is discussed. The numerical framework is demonstrated
and validated on a number of hydrodynamic stability methods ranging from
classical linear theory of laminar flows to state-of-the-art non-modal
approaches that are applicable to turbulent mean flows. The examples include
linear stability, resolvent, and wavemaker analyses of cylinder flow at
Reynolds numbers ranging from 47 to 180, and resolvent and wavemaker analyses
of the self-similar flat-plate boundary layer at a Reynolds number as well as
the turbulent mean of a high-Reynolds-number transonic jet at Mach number 0.9.
All previously-known results are found in close agreement with the literature.
Finally, the resolvent-based wavemaker analyses of the Blasius boundary layer
and turbulent jet flows offer new physical insight into the modal and non-modal
growth in these flows. | Tianyi Chu, Oliver T. Schmidt | 2023-08-13T19:49:29Z | http://arxiv.org/abs/2308.06840v1 | # Mesh-Free Hydrodynamic Stability
###### Abstract
A specialized mesh-free radial basis function-based finite difference (RBF-FD) discretization is used to solve the large eigenvalue problems arising in hydrodynamic stability analyses of flows in complex domains. Polyharmonic spline functions with polynomial augmentation (PHS+poly) are used to construct the discrete linearized incompressible and compressible Navier-Stokes operators on scattered nodes. Rigorous global and local eigenvalue stability studies of these global operators and their constituent RBF stencils provide a set of parameters that guarantee stability while balancing accuracy and computational efficiency. Specialized elliptical stencils to compute boundary-normal derivatives are introduced and the treatment of the pole singularity in cylindrical coordinates is discussed. The numerical framework is demonstrated and validated on a number of hydrodynamic stability methods ranging from classical linear theory of laminar flows to state-of-the-art non-modal approaches that are applicable to turbulent mean flows. The examples include linear stability, resolvent, and wavemaker analyses of cylinder flow at Reynolds numbers ranging from 47 to 180, and resolvent and wavemaker analyses of the self-similar flat-plate boundary layer at a Reynolds number as well as the turbulent mean of a high-Reynolds-number transonic jet at Mach number 0.9. All previously-known results are found in close agreement with the literature. Finally, the resolvent-based wavemaker analyses of the Blasius boundary layer and turbulent jet flows offer new physical insight into the modal and non-modal growth in these flows.
keywords: RBF-FD, polyharmonic splines, polynomial augmentation, linear stability theory, Navier-Stokes, wavemaker
## 1 Introduction
Flow instabilities and large-scale coherent structures are ubiquitous phenomena in fluid mechanics that have been the focus of extensive research. Linear stability (LST) analysis is specifically designed to investigate the growth of small perturbations exclusively around laminar base flows, which are the steady-state solution to the Navier-Stokes equations. One-dimensional LST analysis was widely used in the past century, e.g., [20; 30; 52; 79]. Eriksson and Rizzi [36] and Tuckerman and Marcus [146] were the pioneers in conducting LST analysis in two-dimensional (2D). Subsequently, Jackson [57] and Zebib [154] examined the 2D nature of vortex shedding in the wakes of bluff bodies. Readers are referred to Huerre and Monkewitz [53] and Theofilis [136] for comprehensive reviews of the concept of 2D LST modes. The implementation of the 2D LST framework has enabled improved identification of flow instability in non-parallel flows, including cylinder wakes [46; 76; 91], aerofoil wakes [34; 151], boundary layers [1; 35], and jets in cross-flow [10; 95; 107]. Studies have demonstrated that an open flow can possess marginal stability despite exhibiting local convective instability. LST analysis around a steady laminar base flow, by its inherent nature, is not applicable to predict finite-amplitude flow instabilities arising from nonlinear interactions. The use of mean flow for LST analysis, despite violating the basic assumption of linear theory, has been used successfully to predict coherent flow features in diverse types of flows, including cylinder wakes [11; 100], open cavity flows [126; 127], mixing layers [45; 87; 149], and turbulent or transitional jets [48; 92; 116]. Theoretical conditions required for the validity of
mean flow stability analysis have been explored by Beneddine _et al._[18]. Although beyond the scope of this work, it is noteworthy that the weakly-nonlinear extension of LST analysis has been successfully employed for studying the dynamics of non-parallel flows [27; 127]. The LST-based semi-empirical \(e^{N}\) method [55; 131] has succeeded in transition prediction for certain flows such as boundary layers. However, the prediction of disturbance behavior in more complex scenarios, such as crossflows or bypass transitions, falls outside the scope of LST theory.
Despite these limitations, resolvent, or input-output analysis, has recently emerged as a linear tool for accurately predicting large-scale coherent structures in fully turbulent flows. Resolvent analysis (RA) originally stems from the studies of transient growth [37; 104; 105; 144] and seeks the optimal pairs of inputs and corresponding outputs through the linearized system that maximizes the energy gain. Within the laminar regime, RA has been utilized to investigate the linear response to external body forces or perturbations for channel flows [64; 84; 111], boundary layers [4; 21; 24; 89; 97; 128; 142], and jets [44; 59; 118]. In contrast to classical LST analysis, the input-output perspective offers a mathematically rigorous framework for studying turbulent mean-flows by identifying the forcing as the Reynolds stresses in the perturbation-interaction terms in the Reynolds-decomposed Navier-Stokes equations [81; 124]. Applications include near-wall flows [54; 1; 124], boundary layers [6; 58; 110], incompressible [44; 69] or compressible jets [59; 117; 143], and airfoil wakes [108; 153]. The validation of fundamental relationships between RA and other modal decomposition techniques was facilitated by Towne _et al._[143], establishing RA as a well-suited tool for turbulence modeling [98; 99]. Both classical LST and RA require, in their most basic form, the construction of large matrices that have to be decomposed into their singular- or eigen-components. The construction and decomposition of these matrices are particularly challenging for flows in complex geometries. Previous studies [33; 109; 125; 153] have leveraged the flexibility of the finite-volume (FV) methods on unstructured meshes. However, a downside of unstructured FV methods is that the accuracy is usually restricted to 2nd-order. Alternatively, the utilization of finite-element (FE) methods for spatial discretization provides high-order accuracy for flow instability analysis [76; 77; 110; 127]. The commonly employed weak formulation of governing equations in FE methods raises concerns regarding stability and convergence. Finite-element (FE) methods offer the same flexibility, and the FreeFEM+ toolbox by Hecht [51] has been employed in a number of recent studies [76; 77; 110; 127]. Matrix-free methods, such as the time-stepper [7; 10; 145] and other related techniques [32; 78; 89; 93], provide an alternative approach where the decomposition of large matrix operations can be completely circumvented. Iterative Krylov subspace methods are commonly employed to obtain a partial eigendecomposition. However, their major limitation lies in their capacity to extract only a limited portion of the spectra at a time. Randomized approaches have also been explored as potential solutions to decrease the computational cost of the singular- or eigendecomposition of large stability matrices [85; 108]. In this study, we demonstrate the capability and accuracy of radial basis functions (RBF) in effectively addressing the aforementioned challenges.
The use of RBF-based methods provides high flexibility in meshing complex geometries, allowing for local grid refinement and local adjustments to the order of accuracy of the discretization. The RBF methodology is rooted in scattered data interpolation, first introduced by Hardy [50], and offers a systematic means of approximating multivariate functions on arbitrarily scattered nodes. By generalizing the classical finite-difference (FD) methods, the so-called RBF-FD methods enable the approximation of spatial derivatives to arbitrary node layouts based on RBFs [139]. In fluid flow problems, Gaussian (GA), multiquadric (MQ), and inverse multiquadric (IMQ) are among the most commonly used radial basis functions (RBFs). Refer to Fornberg and Flyer [42] for a detailed overview. The computation of these RBFs requires specifying a shape parameter, a free parameter that can significantly impact both the numerical stability and accuracy. Recently, a new class of RBF-FD approximations has emerged that utilizes polyharmonic splines (PHS) augmented with polynomials (PHS+poly). This technique was first introduced by Flyer _et al._[39] and has demonstrated great potential in achieving high-order accuracy while avoiding the difficulties associated with the shape parameter selection. Bayona _et al._[15] and Flyer _et al._[40] highlighted the advantages of local PHS+poly-generated RBF-FD stencils in achieving high-order accuracy for derivative approximations and solving elliptic partial differential equations (PDEs), while also eliminating stagnation errors under node refinement. The feasibility of implementing a larger stencil near domain boundaries to avoid the Runge phenomenon was analytically verified by Bayona [13] using RBFs in a closed-form. Additional numerical examples in 2-D and 3-D were presented in Bayona _et al._[16] to demonstrate the effectiveness of the PHS+poly approach. The accuracy and stability of the PHS+poly RBF-FD method depend on the combination of the stencil size, PHS exponent, and the degree of polynomials. Several studies, including Chu and Schmidt [29], Le Borne and Leinen [67], Shankar and Fogelson [123], have investigated and discussed the optimal combinations of these parameters to achieve the'spectral-like'
accuracy of higher-order Pade-type compact FDs [68]. In addition, other approaches have been explored to improve the computational accuracy and efficiency of PHS+poly RBF-FDs, including overlapping stencils [122; 123], the partition of unity (PU) method [83], and oversampling [140]. Comparative analyses have been carried out on various higher-order mesh-free discretizations, consistently demonstrating that PHS+poly-based RBF-FDs outperform other methods in accuracy, robustness, and computational efficiency, e.g., [14; 40; 47; 94; 112]. Recent successful applications of PHS+poly RBF-FDs in flow simulations on scattered nodes [119; 120; 121; 147] highlight the potential of RBF methods for investigating flow instabilities in complex geometries. In our recent work [29], we introduced a staggered-node RBF-FD-based semi-implicit fractional-step solver for the incompressible Navier-Stokes equations, which will be utilized to compute the base state in the present study.
This work introduces a high-order, mesh-free framework for hydrodynamic stability analysis based on the PHS+poly RBF-FD discretizations. The efficient construction of global Jacobians on scattered nodes using RBF-based methods is demonstrated. The trade-off between computational efficiency and accuracy is addressed, and best practices are provided. Furthermore, we discuss the practical treatment of boundary conditions, particularly the boundary-normal derivatives and the pole singularity in cylindrical coordinates. The numerical framework is demonstrated on various hydrodynamic stability theoretical methods and flows. Refer to table 3 for an overview. In particular, LST, RA, and wavemaker (WM) analyses are shown for the canonical cylinder flow at Reynolds numbers ranging from 47 to 180, RA and WM analyses for a laminar zero-pressure-gradient (ZPG) flat-plate Blasius boundary layer at a Reynolds number of \(6\times 10^{5}\), RA and WM analyses for a turbulent transonic jet at Mach number 0.9 and a Reynolds number of approximately \(10^{6}\). The identified flow phenomena include vortex shedding in cylinder wakes [11; 100], the Orr and Tollmien-Schlichting waves in boundary layers [1; 22; 89; 128], and Orr and Kelvin-Helmholtz wavepackets [26; 44; 69; 117; 134] as well as trapped acoustic waves [116; 141] in turbulent jets. The comparisons of these benchmark problems with the literature allow us to highlight the viability, accuracy, and robustness. The study furthermore encapsulates the first application of RA-based WM analysis to the ZPG Blasius boundary layer and turbulent jets, providing a new perspective on modal and non-modal growth in these flows.
This paper is organized as follows: SS2 introduces the PHS+poly RBF-FD method, SS3 presents the mesh-free hydrodynamic stability analysis, and SS4 discusses the numerical implementations. The performance of the mesh-free framework is demonstrated in SS5. Finally, SS6 concludes and summarizes the paper.
## 2 Radial basis function-based finite differences (RBF-FD)
The goal of the RBF-FD method is to compute the discrete representation of any linear differentiation operator \(\mathcal{D}\) at a given location \(x_{0}\) as a linear combination of the function values \(g(\mathbf{x}_{j})\), such that
\[\mathcal{D}g(\mathbf{x}_{0})=\sum_{j=1}^{n}w_{j}g(\mathbf{x}_{j}). \tag{1}\]
To obtain the unknown differentiation weights \(w_{j}\), we use the RBF interpolant,
\[s(\mathbf{x})=\sum_{j=1}^{n}\gamma_{j}\phi(\|\mathbf{x}-\mathbf{x}_{j}\|), \tag{2}\]
to approximate the given function \(g(\mathbf{x})\) by satisfying
\[s(\mathbf{x}_{j})=g(\mathbf{x}_{j}),\qquad j=1,2,\cdots n. \tag{3}\]
Here, \(\phi(r)\) is a smooth radial function, \(\{\mathbf{x}_{j}\}_{j=1}^{n}\) is a set of scattered nodes, and \(\|\cdot\|\) denotes the standard Euclidean norm. Combining equations (1-3) leads to the linear system
\[\underbrace{\begin{bmatrix}\phi(\|\mathbf{x}_{1}-\mathbf{x}_{1}\|)&\phi(\|\mathbf{x}_{1}- \mathbf{x}_{2}\|)&\cdots&\phi(\|\mathbf{x}_{1}-\mathbf{x}_{n}\|)\\ \phi(\|\mathbf{x}_{2}-\mathbf{x}_{1}\|)&\phi(\|\mathbf{x}_{2}-\mathbf{x}_{2}\|)&\cdots&\phi(\| \mathbf{x}_{2}-\mathbf{x}_{n}\|)\\ \vdots&\vdots&&\vdots\\ \phi(\|\mathbf{x}_{n}-\mathbf{x}_{1}\|)&\phi(\|\mathbf{x}_{n}-\mathbf{x}_{2}\|)&\cdots&\phi(\| \mathbf{x}_{n}-\mathbf{x}_{n}\|)\\ \end{bmatrix}}_{A}\begin{bmatrix}w_{1}\\ w_{2}\\ \vdots\\ w_{n}\end{bmatrix}=\begin{bmatrix}\mathcal{D}\phi(\|\mathbf{x}-\mathbf{x}_{1}\|)\\ \mathcal{D}\phi(\|\mathbf{x}-\mathbf{x}_{2}\|)\\ \vdots\\ \mathcal{D}\phi(\|\mathbf{x}-\mathbf{x}_{n}\|)\\ \end{bmatrix}_{\mathbf{x}=x_{0}}, \tag{4}\]
which can be solved directly to obtain the weight vector \(\mathbf{w}=[w_{1},\cdots,w_{n}]^{T}\). An implicit assumption for equation (4) is that the derivative of the basis function, \(\mathcal{D}\phi\), is continuous.
Polynomial augmentation is commonly applied to the RBF-FD method to enforce consistency with Taylor expansion-based FD approximations [39; 42; 43; 66; 152]. The two-dimensional augmented RBF-FD method can be expressed as
\[\mathcal{D}g(\mathbf{x}_{0})=\sum_{j=1}^{n}w_{j}g(\mathbf{x}_{j})+\sum_{i=1}^{(q+1)(q+ 2)/2}c_{i}P_{i}(\mathbf{x}_{0}), \tag{5}\]
where \(P_{i}(\mathbf{x})\) are multivariate polynomials up to degree \(q\). To match the local Taylor series, additional constraints for the differentiation weights,
\[\sum_{j=1}^{n}w_{j}P_{i}(\mathbf{x}_{j})=\mathcal{D}P_{i}(\mathbf{x}_{0}) \qquad\text{for }1\leq i\leq\frac{(q+1)(q+2)}{2}, \tag{6}\]
are included in the computation. These constraints, also referred to as the vanishing momentum conditions [56], ensure that the RBF approximations locally reproduce polynomial behavior up to degree \(q\) [40] and decay in the far-field [41]. A more general and compact representation of equations (5-6) is
\[\underbrace{\left[\begin{array}{cc}\mathbf{A}&\mathbf{P}\\ \mathbf{P}^{T}&\mathbf{0}\end{array}\right]}_{A_{\text{aug}}}\left[\begin{array}{c} \mathbf{w}\\ \mathbf{c}\end{array}\right]=\begin{bmatrix}\mathcal{D}\mathbf{\phi}\\ \mathcal{D}\mathbf{P}\end{bmatrix}, \tag{7}\]
where \(\mathbf{A}\) represents the same interpolation matrix as defined in equation (4). In equation (1), only the weight vector \(\mathbf{w}\) is used to approximate the differentiation operator \(\mathcal{D}\). Motivated by recent studies (see, e.g., [15; 16; 39; 40]), we use polyharmonic splines (PHS),
\[\phi(r)=r^{m}, \tag{8}\]
as the basis functions, where \(m\) is an odd positive integer. The accuracy and stability of this PHS+poly RBF-FD method depend on the combination of the stencil size, PHS exponent, and the degree of polynomials. The selection of these parameters will be discussed later in SS4.1.
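To make the construction concrete, the following NumPy sketch solves the augmented system (7) for the weights of the 2-D Laplacian at a stencil center; the PHS exponent \(m=3\) and polynomial degree \(q=2\) are illustrative choices, and the analytic right-hand sides follow from \(\nabla^{2}r^{m}=m^{2}r^{m-2}\) in two dimensions.

```python
import numpy as np

def rbf_fd_laplacian_weights(x0, X, m=3, q=2):
    """PHS+poly RBF-FD weights for the 2-D Laplacian at x0 (eq. 7).
    X: (n, 2) stencil nodes; m: odd PHS exponent; q: polynomial degree."""
    n = X.shape[0]
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    A = r**m                                          # phi(r) = r^m
    exps = [(i, j) for i in range(q + 1) for j in range(q + 1 - i)]
    P = np.stack([X[:, 0]**i * X[:, 1]**j for i, j in exps], axis=1)
    r0 = np.linalg.norm(X - x0, axis=1)
    rhs_phi = m**2 * r0**(m - 2)                      # Lap(r^m) = m^2 r^(m-2)
    rhs_pol = np.array([i*(i-1)*x0[0]**max(i-2, 0)*x0[1]**j
                        + j*(j-1)*x0[0]**i*x0[1]**max(j-2, 0)
                        for i, j in exps])
    npol = len(exps)
    A_aug = np.block([[A, P], [P.T, np.zeros((npol, npol))]])
    rhs = np.concatenate([rhs_phi, rhs_pol])
    return np.linalg.solve(A_aug, rhs)[:n]            # drop Lagrange multipliers

# The weights reproduce the Laplacian of quadratics exactly: Lap(x^2+y^2) = 4
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(15, 2)); X[0] = 0.0      # include the center node
w = rbf_fd_laplacian_weights(np.zeros(2), X)
print(w @ (X[:, 0]**2 + X[:, 1]**2))                  # ~4.0
```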
RBF-FD methods provide a mesh-free approach for discretizing the domain without the need for a pre-defined mesh or explicit node connectivity. This feature enables the investigation of flow instabilities for complex geometries, thereby offering advantages over traditional discretizations that rely on structured meshes. The subsequent sections of this paper focus on the construction of the discrete linearized Navier-Stokes (LNS) and then the resolvent operators using PHS+poly RBF-FDs.
## 3 Mesh-free hydrodynamic stability analysis
### Governing equations
The motion of a general incompressible Newtonian fluid is governed by the Navier-Stokes equations,
\[\frac{\partial\mathbf{u}}{\partial t}+\left(\mathbf{u}\cdot\nabla\right) \mathbf{u} =-\nabla p+\text{Re}^{-1}\nabla^{2}\mathbf{u}, \tag{9a}\] \[\nabla\cdot\mathbf{u} =0. \tag{9b}\]
Here, all variables are nondimensionalized by the velocity scale \(\mathbf{U}_{\infty}\) and the length scale \(L\), and Re denotes the Reynolds number.
In the case of laminar flows, we can decompose the flow state around the steady-state solution of the Navier-Stokes equations (9) as
\[\mathbf{u}=\mathbf{U}+\mathbf{u}^{\prime},\quad p=P+p^{\prime}, \tag{10}\]
where \((\mathbf{U},\,P)\) represents the base flow that satisfies
\[(\mathbf{U}\cdot\nabla)\,\mathbf{U} =-\nabla P+\text{Re}^{-1}\nabla^{2}\mathbf{U}, \tag{11a}\] \[\nabla\cdot\mathbf{U} =0, \tag{11b}\]
and \((\cdot)^{\prime}\) denotes the small fluctuating components. In turbulent flows, we take the Reynolds decomposition of the flow state into the temporal mean, \(\overline{(\cdot)}\), and fluctuating components, given by
\[\mathbf{u}=\mathbf{\overline{u}}+\mathbf{u}^{\prime},\quad p=\overline{p}+p^{\prime}. \tag{12}\]
By generalizing the notation of the base state as \((\mathbf{u}_{0},\,p_{0})\), the resulting governing equations for the fluctuations take the form of
\[\frac{\partial\mathbf{u}^{\prime}}{\partial t}+(\mathbf{u}_{0}\cdot\nabla )\,\mathbf{u}^{\prime}+(\mathbf{u}^{\prime}\cdot\nabla)\,\mathbf{u}_{0} =-\nabla p^{\prime}+\text{Re}^{-1}\nabla^{2}\mathbf{u}^{\prime}+\mathbf{ f}^{\prime}, \tag{13a}\] \[\nabla\cdot\mathbf{u}^{\prime} =0. \tag{13b}\]
Here, the term \(\mathbf{f}^{\prime}\) represents the remaining nonlinear interactions between the fluctuation components. Table 1 summarizes the two aforementioned decompositions.
Equation (13) can be written compactly in terms of the fluctuating state, \(\mathbf{q}^{\prime}=[u^{\prime},\;v^{\prime},\;p^{\prime}]^{T}\), as
\[\mathcal{P}\mathcal{P}^{T}\left(\frac{\partial}{\partial t}\mathbf{q}^{\prime} \right)=\mathcal{L}\mathbf{q}^{\prime}+\mathcal{P}\mathbf{f}^{\prime}, \tag{14}\]
where \(\mathcal{P}\) is the prolongation operator that extends the velocity vector \([u,\,v]^{T}\) into \([u,\,v,\,0]^{T}\), and its transpose is the restriction operator that extracts the velocity vector from the extended state vector [128]. The incompressible linearized Navier-Stokes (LNS) operator takes the form of
\[\mathcal{L}\equiv\begin{pmatrix}-(\mathbf{u}_{0}\cdot\nabla)\,()-[()\cdot\nabla]\,\mathbf{u}_{0}+\text{Re}^{-1}\nabla^{2}&-\nabla\\ \nabla\cdot()&0\end{pmatrix}. \tag{15}\]
The governing equations for compressible flows will be discussed in SS5.3 and A. Beyond the linear dynamics, the remaining forcing \(\mathbf{f}^{\prime}\) in equation (14) comprises products of fluctuating quantities, as outlined in table 1. These terms will be either neglected or modeled.
Classical (temporal) linear stability (LST) analysis investigates fluctuations with complex frequency \(\lambda=\lambda_{r}+\mathrm{i}\lambda_{i}\), where \(\lambda_{r}\) is the exponential growth rate and \(\lambda_{i}\) the oscillation frequency. The fluctuations are assumed to be infinitesimally small, and the forcing term, \(\mathbf{f}^{\prime}\), is, therefore, negligible at \(O(1)\), see, e.g., Schmid and Henningson [115]. Substituting perturbations of the form \(\mathbf{q}^{\prime}(\mathbf{x},t)=\tilde{\mathbf{q}}(\mathbf{x})\mathrm{e}^{\lambda t}\) into the governing equations (13) yields the LST equation,
\[\lambda\mathcal{P}\mathcal{P}^{T}\tilde{\mathbf{q}}=\mathcal{L}\tilde{\mathbf{q}}. \tag{16}\]
Equation (16) is a generalized eigenvalue problem, and the eigenvector associated with the largest growth rate ought to predict the dominant flow instability mechanism.
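Once the discrete operators of §3.2 are assembled as sparse matrices, the least-stable eigenpairs of (16) can be computed with a shift-and-invert Arnoldi iteration. A minimal SciPy sketch follows, where \(\mathbf{L}\) and the singular mass-like matrix \(\mathbf{M}=\mathbf{P}\mathbf{P}^{T}\) are the assembled global matrices and the shift \(\sigma\) is a user-chosen guess near the expected eigenvalues (an assumption):

```python
from scipy.sparse.linalg import eigs

def leading_lst_modes(L, M, sigma=0.05 + 0.8j, k=6):
    """Solve lambda * M q = L q for the k eigenvalues closest to sigma.
    L, M: assembled sparse global matrices (see Sec. 3.2)."""
    vals, vecs = eigs(L, k=k, M=M, sigma=sigma)
    order = vals.real.argsort()[::-1]   # most unstable (largest growth) first
    return vals[order], vecs[:, order]
```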
The nonlinear interactions in equation (14) are no longer negligible for general cases of finite amplitude fluctuations. Within the resolvent framework of turbulent flows, the nonlinear interactions, along with the background
| Base state | Description | Notation | Obtained from | Remaining forcing \(\mathbf{f}^{\prime}\) |
| --- | --- | --- | --- | --- |
| \((\mathbf{u}_{0},\,p_{0})\) | Base flow | \((\mathbf{U},\,P)\) | equation (11) | \(-(\mathbf{u}^{\prime}\cdot\nabla)\,\mathbf{u}^{\prime}\) |
| \((\mathbf{u}_{0},\,p_{0})\) | Mean flow | \((\mathbf{\overline{u}},\overline{p})\) | long-time average | \(-(\mathbf{u}^{\prime}\cdot\nabla)\,\mathbf{u}^{\prime}+\overline{(\mathbf{u}^{\prime}\cdot\nabla)\,\mathbf{u}^{\prime}}\) |

Table 1: Summary of flow state decompositions.
turbulence, can be interpreted as external forcing, \(\mathbf{f}^{\prime}\), to the otherwise linear dynamics. This interpretation was first proposed by McKeon and Sharma [81]. By assuming a normal mode form for the fluctuating components, \([\mathbf{q}^{\prime},\,\mathbf{f}^{\prime}](\mathbf{x},t)=[\hat{\mathbf{q}},\,\hat{\mathbf{f}}](\mathbf{x })\mathrm{e}^{\mathrm{i}\omega t}+c.c.\), where \(\omega\) is the angular frequency, or equivalently by taking the Fourier transform, we obtain the linear time-invariant (LTI) representation of the governing equation (13) in the frequency domain,
\[\left(\mathrm{i}\omega\mathcal{P}\mathcal{P}^{T}-\mathcal{L}\right) \hat{\mathbf{q}} =\mathcal{P}\left(\mathcal{B}\hat{\mathbf{f}}\right), \tag{17a}\] \[\hat{\mathbf{u}} =\mathcal{P}^{T}\left(\mathcal{C}\hat{\mathbf{q}}\right). \tag{17b}\]
The linear operators \(\mathcal{B}\) and \(\mathcal{C}\) are used to select spatial regions of particular interest. We write equations (17) in a compact form as
\[\hat{\mathbf{u}}=\mathcal{H}(\omega)\hat{\mathbf{f}}, \tag{18}\]
where \(\mathcal{H}(\omega)=\mathcal{P}^{T}\mathcal{C}\left(\mathrm{i}\omega\mathcal{P}\mathcal{P}^{T}-\mathcal{L}\right)^{-1}\mathcal{P}\mathcal{B}\) is known as the resolvent operator. The numerical discretization of the global operator, \(\mathcal{L}\), is at the core of both LST and resolvent analyses. In this work, the PHS+poly RBF-FDs are utilized to construct differentiation matrices and global Jacobians on scattered nodes, taking advantage of their flexibility and accuracy in numerical discretization.
### Global Jacobians and mesh-free stability analysis
For differentiation operators \(\mathcal{D}=\frac{\partial^{\alpha+\beta}}{\partial x_{1}^{\alpha}\partial x_{2}^{\beta}}\) with \(\alpha+\beta\leq 2\), we seek differentiation matrices \(\mathbf{D}_{x_{1}^{\alpha}x_{2}^{\beta}}\) that satisfy
\[\underbrace{\begin{bmatrix}w_{11}&w_{12}&\cdots&w_{1N}\\ w_{21}&w_{22}&\cdots&w_{2N}\\ \vdots&\vdots&&\vdots\\ w_{N1}&w_{N2}&\cdots&w_{NN}\end{bmatrix}}_{\mathbf{D}_{x_{1}^{\alpha}x_{2}^{\beta}}}\begin{bmatrix}g(\mathbf{x}_{1})\\ g(\mathbf{x}_{2})\\ \vdots\\ g(\mathbf{x}_{N})\end{bmatrix}\approx\begin{bmatrix}\frac{\partial^{\alpha+\beta}}{\partial x_{1}^{\alpha}\partial x_{2}^{\beta}}g(\mathbf{x}_{1})\\ \frac{\partial^{\alpha+\beta}}{\partial x_{1}^{\alpha}\partial x_{2}^{\beta}}g(\mathbf{x}_{2})\\ \vdots\\ \frac{\partial^{\alpha+\beta}}{\partial x_{1}^{\alpha}\partial x_{2}^{\beta}}g(\mathbf{x}_{N})\end{bmatrix}. \tag{19}\]
Here, \((x_{1},x_{2})\) represents the coordinate system, and \(\{\mathbf{x}_{i}\}_{i=1}^{N}\) represents the global computational grid. The \(j\)th row of the sparse matrix \(\mathbf{D}\) contains the \(n\ll N\) weights that approximate the derivative at node \(\mathbf{x}_{j}\). The matrix hence has \(N\times n\) nonzero elements. In a slight change of notation, to enhance readability, we now denote by \(\mathbf{u}=\mathbf{u}(\mathbf{x})\) and \(\mathbf{v}=\mathbf{v}(\mathbf{x})\) the global velocity fields, and by \(\mathbf{p}=\mathbf{p}(\mathbf{x})\) the global pressure field in the computational domain \(\Omega\).
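To make the weight computation concrete, the sketch below assembles PHS+poly RBF-FD weights for a first derivative on a two-dimensional stencil by solving the standard augmented (saddle-point) system. It is an illustrative reimplementation of the procedure behind equations (1)-(7) and (19), not the authors' code; the function name and interface are hypothetical.

```python
import numpy as np

def phs_rbf_fd_weights(stencil, x0, m=3, q=2, op="dx"):
    """Sketch: RBF-FD weights at x0 for d/dx (op='dx') or d/dy (op='dy') on a
    2-D stencil, using the PHS kernel phi(r) = r^m augmented with bivariate
    polynomials of total degree <= q. `stencil` is an (n, 2) array of nodes."""
    X = np.asarray(stencil, float) - np.asarray(x0, float)   # shift x0 to the origin
    n = X.shape[0]
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    A = r**m                                                 # PHS kernel matrix
    exps = [(d - k, k) for d in range(q + 1) for k in range(d + 1)]
    P = np.column_stack([X[:, 0]**a * X[:, 1]**b for a, b in exps])
    # right-hand side: the operator applied to phi_j(x) = ||x - x_j||^m and to
    # the monomials x^a y^b, evaluated at the (shifted) centre x0 = 0
    comp = 0 if op == "dx" else 1
    rj = np.linalg.norm(X, axis=1)
    rhs_phi = -m * X[:, comp] * rj**(m - 2)
    target = (1, 0) if op == "dx" else (0, 1)
    rhs_pol = np.array([1.0 if e == target else 0.0 for e in exps])
    # augmented saddle-point system enforcing polynomial reproduction
    npoly = P.shape[1]
    M = np.block([[A, P], [P.T, np.zeros((npoly, npoly))]])
    sol = np.linalg.solve(M, np.concatenate([rhs_phi, rhs_pol]))
    return sol[:n]                                           # weights w_j

# quick consistency check: the weights differentiate a linear field exactly
nodes = np.random.default_rng(0).uniform(-1, 1, (19, 2))
w = phs_rbf_fd_weights(nodes, nodes[0], m=3, q=2, op="dx")
assert abs(w @ (2.0 * nodes[:, 0] + nodes[:, 1]) - 2.0) < 1e-6
```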
Upon the use of these global RBF-FD-based differentiation matrices, we assemble the discrete global LNS operator from equation (15), taking the two-dimensional Cartesian coordinate system as an example, as
\[\mathbf{L}=\begin{bmatrix}\mathbf{S}-\mathrm{diag}\left(\mathbf{D}_{x}\mathbf{u}_{0}\right)+ \mathrm{Re}^{-1}\left(\mathbf{D}_{xx}+\mathbf{D}_{yy}\right)&-\mathrm{diag}\left(\mathbf{D }_{y}\mathbf{u}_{0}\right)&-\mathbf{D}_{x}\\ -\mathrm{diag}\left(\mathbf{D}_{x}\mathbf{v}_{0}\right)&\mathbf{S}-\mathrm{diag}\left(\mathbf{D }_{y}\mathbf{v}_{0}\right)+\mathrm{Re}^{-1}\left(\mathbf{D}_{xx}+\mathbf{D}_{yy}\right)&- \mathbf{D}_{y}\\ \mathbf{D}_{x}&\mathbf{D}_{y}&\mathbf{0}\end{bmatrix}, \tag{20}\]
where \(\mathbf{S}=-\left(\mathbf{u}_{0}\circ\mathbf{D}_{x}+\mathbf{v}_{0}\circ\mathbf{D}_{y}\right)\), and \(\circ\) denotes the Hadamard product.
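As an illustration of how equation (20) translates into a sparse-matrix assembly, the sketch below builds \(\mathbf{L}\) together with the singular mass matrix \(\mathbf{P}\mathbf{P}^{T}\) from precomputed RBF-FD differentiation matrices; the function name is hypothetical, and interpreting the Hadamard products as row-wise scaling by the base-state samples is an assumption.

```python
import scipy.sparse as sp

def assemble_lns(Dx, Dy, Dxx, Dyy, u0, v0, Re):
    """Sketch of the discrete incompressible LNS operator of equation (20) and
    the mass matrix P P^T, from (N, N) sparse differentiation matrices and the
    base-state velocity samples u0, v0 (length N)."""
    N = u0.size
    S = -(sp.diags(u0) @ Dx + sp.diags(v0) @ Dy)     # convection by the base flow
    visc = (Dxx + Dyy) / Re                          # viscous diffusion
    L = sp.bmat([
        [S - sp.diags(Dx @ u0) + visc, -sp.diags(Dy @ u0),           -Dx],
        [-sp.diags(Dx @ v0),           S - sp.diags(Dy @ v0) + visc, -Dy],
        [Dx,                           Dy,                           None],
    ], format="csr")
    I, Z = sp.identity(N), sp.csr_matrix((N, N))
    PPt = sp.block_diag([I, I, Z], format="csr")     # singular: no pressure time derivative
    return L, PPt
```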
For a weighted inner product \(\left(\mathbf{q}_{1},\mathbf{q}_{2}\right)=\mathbf{q}_{2}^{*}\mathbf{W}\mathbf{q}_{1}\) that accounts for the non-uniformly distributed nodes, the discrete LST and its adjoint eigenvalue problems take the form of
\[\lambda\mathbf{P}\mathbf{P}^{T}\tilde{\mathbf{q}} =\mathbf{L}\tilde{\mathbf{q}}, \tag{21a}\] \[\lambda^{+}\mathbf{P}\mathbf{P}^{T}\tilde{\mathbf{q}}^{+} =\mathbf{L}^{+}\tilde{\mathbf{q}}^{+}, \tag{21b}\]
respectively, where
\[\mathbf{L}^{+}=\mathbf{W}^{-1}\mathbf{L}^{H}\mathbf{W} \tag{22}\]
is the discrete adjoint LNS operator. Here, the restriction matrix for two-dimensional problems takes the form of \(\mathbf{P}^{T}=\begin{bmatrix}\mathbf{I}&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\mathbf{I}&\mathbf{0}\end{bmatrix}\), and \((\cdot)^{H}\) denotes the Hermitian transpose. The eigenvectors of these two generalized eigenvalue problems, \(\tilde{\mathbf{q}}=[\tilde{\mathbf{u}},\tilde{\mathbf{v}},\tilde{\mathbf{p}}]^{T}\) and \(\tilde{\mathbf{q}}^{+}=[\tilde{\mathbf{u}}^{+},\tilde{\mathbf{v}}^{+},\tilde{\mathbf{p}}^{+}] ^{T}\), are referred to as the LST and adjoint modes, respectively.
The wavemaker (WM), introduced by Giannetti and Luchini [46], identifies the flow region with the strongest localized feedback, where the dominant instability mechanisms act. Based on the leading LST and adjoint modes, the WM is locally defined as
\[\zeta_{\text{LST}}(x_{1},x_{2})=\frac{\|\mathbf{P}^{T}\tilde{\mathbf{q}}^{+}(x_{1},x_ {2})\|\,\|\mathbf{P}^{T}\tilde{\mathbf{q}}(x_{1},x_{2})\|}{|\,\langle\tilde{\mathbf{q}}^{+ },\tilde{\mathbf{q}}\rangle\,|}, \tag{23}\]
where the norm \(\|\cdot\|\) measures the localized energy. Refer to Luchini and Bottaro [71] for a comprehensive review. In addition to quantifying the receptivity, the wavemaker also indicates the non-normality level of the flow field [27]. Initially introduced for base flows, Meliga _et al._[82] extended the application of the wavemaker as a sensitivity analysis technique for mean flows. We utilize this methodology to gauge the structural sensitivity of the RBF-FD-based global LNS operator, \(\mathbf{L}\).
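For concreteness, a pointwise evaluation of equation (23) from discrete leading direct and adjoint modes might look as follows; the ordering of the state vector as \([u, v, p]\) blocks of length \(N\) and the variable names are assumptions made for this sketch.

```python
import numpy as np

def wavemaker(q_dir, q_adj, W, N):
    """Sketch of the structural-sensitivity (wavemaker) field of equation (23).
    q_dir, q_adj: leading direct/adjoint modes of length 3N ordered [u, v, p];
    W: 3N x 3N weight matrix defining the discrete inner product."""
    vel2_dir = np.abs(q_dir[:N])**2 + np.abs(q_dir[N:2*N])**2   # ||P^T q||^2 per node
    vel2_adj = np.abs(q_adj[:N])**2 + np.abs(q_adj[N:2*N])**2
    denom = abs(np.vdot(q_adj, W @ q_dir))                      # |<q^+, q>|
    return np.sqrt(vel2_dir * vel2_adj) / denom
```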
### Mesh-free resolvent analysis (RA)
The construction of the global Jacobian, \(\mathbf{L}\), leads to the direct discretization of the input-output system in equation (18), yielding
\[\mathbf{\hat{u}}=\mathbf{H}(\omega)\mathbf{\hat{f}}, \tag{24}\]
where
\[\mathbf{H}(\omega)=\mathbf{P}^{T}\mathbf{C}\left(\text{i}\omega\mathbf{P}\mathbf{P}^{T}-\mathbf{L} \right)^{-1}\mathbf{P}\mathbf{B} \tag{25}\]
is referred to as the discrete resolvent operator.
When the nonlinear interactions described in table 1 are not known explicitly, input-output, or resolvent analysis (RA), provides a means to model them as optimal forcing inputs to the linear system in equation (24). The objective of resolvent analysis is to identify pairs of optimal forcings and their corresponding responses that maximize the gain, \(\sigma^{2}\), defined as the ratio of the energy of the response to the energy of the forcing,
\[\sigma^{2}(\mathbf{\hat{f}};\omega)=\frac{\|\mathbf{\hat{u}}\|_{u}^{2}}{\|\mathbf{\hat{f} }\|_{f}^{2}}=\frac{\left\langle\mathbf{H}(\omega)\mathbf{\hat{f}},\mathbf{H}(\omega)\mathbf{ \hat{f}}\right\rangle_{u}}{\left\langle\mathbf{\hat{f}},\mathbf{\hat{f}}\right\rangle_ {f}}. \tag{26}\]
Refer to Schmid and Henningson [115] for a detailed discussion. The energy of the response and the forcing are measured in the norms \(\|\cdot\|_{u}\) and \(\|\cdot\|_{f}\), induced by the inner products
\[\langle\mathbf{\hat{u}}_{1},\mathbf{\hat{u}}_{2}\rangle_{u}=\mathbf{\hat{u}}_{2}^{*}\mathbf{W} _{u}\mathbf{\hat{u}}_{1}\qquad\text{and}\qquad\left\langle\mathbf{\hat{f}}_{1},\mathbf{ \hat{f}}_{2}\right\rangle_{f}=\mathbf{\hat{f}}_{2}^{*}\mathbf{W}_{f}\mathbf{\hat{f}}_{1} \tag{27}\]
on the output and input spaces, respectively. Here, \(\mathbf{W}_{u}\) and \(\mathbf{W}_{f}\) are weight matrices containing both the numerical quadrature weights and weights associated with these inner products.
We further define the modified, or weighted, resolvent operator
\[\mathbf{R}(\omega)\equiv\mathbf{W}_{u}^{\frac{1}{2}}\mathbf{H}(\omega)\mathbf{W}_{f}^{-\frac{ 1}{2}}=\mathbf{\hat{U}}\Sigma\mathbf{\hat{F}}^{*} \tag{28}\]
to account for the energy in the input and output spaces. The optimal responses \(\mathbf{\hat{U}}=\begin{bmatrix}\mathbf{\hat{u}}_{1},\cdots,\mathbf{\hat{u}}_{N}\end{bmatrix}\) and the corresponding forcings \(\mathbf{\hat{F}}=\begin{bmatrix}\mathbf{\hat{f}}_{1},\cdots,\mathbf{\hat{f}}_{N}\end{bmatrix}\) are obtained from the singular value decomposition (SVD) of the modified resolvent operator and ranked by the energy gains \(\Sigma=\begin{bmatrix}\sigma_{1},\cdots\sigma_{N}\end{bmatrix}\). The resulting modes are orthogonal in their respective inner products, that is, \(\left\langle\mathbf{\hat{u}}_{j},\mathbf{\hat{u}}_{k}\right\rangle_{u}=\left\langle\bm {\hat{f}}_{j},\mathbf{\hat{f}}_{k}\right\rangle_{f}=\delta_{jk}\). The optimal input and output modes are related through
\[\mathbf{R}(\omega)\mathbf{\hat{f}}_{j}=\sigma_{j}(\omega)\mathbf{\hat{u}}_{j}, \tag{29}\]
which provides a physical interpretation of the singular values and vectors. In practice, the optimal input forcings, \(\hat{\mathbf{f}}_{j}\), are determined as the solutions of the eigenvalue problem
\[\mathbf{W}_{f}^{-1}\mathbf{H}(\omega)^{*}\mathbf{W}_{u}\mathbf{H}(\omega)\hat{\mathbf{f}}_{j}=\sigma _{j}^{2}\hat{\mathbf{f}}_{j}, \tag{30}\]
where \(\mathbf{W}_{f}^{-1}\mathbf{H}(\omega)^{*}\mathbf{W}_{u}\mathbf{H}(\omega)\) is self-adjoint with respect to the inner product \(\langle\cdot,\cdot\rangle_{f}\). The matrix inversion of \(\left(\mathrm{i}\omega\mathbf{P}\mathbf{P}^{T}-\mathbf{L}\right)\) is solved using LU-factorization [128].
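A possible matrix-free realization of equations (28)-(30) is sketched below: the action of the weighted resolvent and of its adjoint are obtained from a single LU factorization per frequency, and the leading gains follow from a sparse SVD. The interface, the assumption of real diagonal weight matrices, the use of SciPy's SuperLU transpose solves, and setting \(\mathbf{B}=\mathbf{C}=\mathbf{I}\) are illustrative choices, not the authors' implementation.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu, svds, LinearOperator

def resolvent_modes(L, PPt, P, Wu_sqrt, Wf_isqrt, omega, k=3):
    """Sketch: leading singular triplets of the weighted resolvent
    R(w) = Wu^(1/2) P^T (i w P P^T - L)^(-1) P Wf^(-1/2), cf. equations (25), (28).
    P: prolongation matrix (3N x 2N); Wu_sqrt, Wf_isqrt: real diagonal square
    roots of the output/input weight matrices; B = C = I assumed."""
    lu = splu(sp.csc_matrix(1j * omega * PPt - L))      # one factorization per frequency

    def matvec(f):                                      # forcing -> weighted response
        return Wu_sqrt @ (P.T @ lu.solve(P @ (Wf_isqrt @ f)))

    def rmatvec(u):                                     # adjoint action via A^H solves
        return Wf_isqrt @ (P.T @ lu.solve(P @ (Wu_sqrt @ u), trans="H"))

    n_out, n_in = Wu_sqrt.shape[0], Wf_isqrt.shape[1]
    R = LinearOperator((n_out, n_in), matvec=matvec, rmatvec=rmatvec, dtype=complex)
    U, s, Vh = svds(R, k=k)
    order = np.argsort(-s)
    return s[order], U[:, order], Vh[order].conj().T    # gains, responses, forcings
```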
The RA-based wavemaker,
\[\zeta_{\mathrm{RA}}(x_{1},x_{2})=\frac{\|\hat{\mathbf{u}}_{1}(x_{1},x_{2})\|\|\hat {\mathbf{f}}_{1}(x_{1},x_{2})\|}{\|\hat{\mathbf{u}}_{1}^{*}\mathbf{W}_{u}\hat{\mathbf{f}}_{1} \|}, \tag{31}\]
is defined analogously to equation (23). Within the framework of RA, it provides a quantitative measure of the effect of the localized feedback at any given frequency [103; 109; 130; 135].
## 4 Numerical implementations
### Node generation and parameter selection
We employ the unstructured triangular mesh generator DistMesh developed by Persson [96] to efficiently generate scattered nodes with localized refinement in regions of interest. We highlight that the local connection information is not used in the computation. The accuracy and stability of the PHS+poly RBF-FD method are contingent upon three parameters: the stencil size, \(n\), the PHS exponent, \(m\), and the polynomial degree, \(q\). We investigate orders of accuracy ranging from the standard choice of \(q=2\) to the desired high-order accuracy for hydrodynamic stability analysis, here \(q=4\). Increasing the order of accuracy generally leads to a decrease in the size of the linear operator.
The local \(\mathcal{D}\)-Lebesgue function proposed by Shankar and Fogelson [123] provides a measure of the eigenvalue stability of the RBF-FD method. For a given linear operator \(\mathcal{D}\), the local Lebesgue function takes the form of the 1-norm of the RBF-FD weight in equation (1), that is
\[\Lambda_{\mathcal{D}}(\mathbf{x}_{0};\{\mathbf{x}_{j}\}_{j=1}^{n})=\|\mathbf{w}\|_{1}= \sum_{j=1}^{n}|w_{j}|. \tag{32}\]
Figure 1: Maximum local \(\mathcal{D}\)-Lebesgue function for different combinations of PHS exponents, \(m\), and polynomial orders, \(q\), for the first (a-c) and second derivatives (d-f) in the \(x\)- (solid line), \(y\)- (’+’) and \(xy\)-directions (circle).
Larger values of the local Lebesgue function indicate an increased susceptibility to numerical instability in the assembled global differentiation matrix. Figure 1 shows the maximum value of the local Lebesgue function across various parameter combinations based on the test grid shown in figure 4. Refer to the accompanying context of figure 4 for more details of the grid. For all considered stencil sizes, polynomial degrees, and relevant operators, the minimax value is consistently achieved at \(m=3\), suggesting its suitability for hydrodynamic stability analysis.
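The parameter sweep behind figure 1 can be reproduced schematically by evaluating the 1-norm of the local weight vectors over all stencils. The snippet below reuses the `phs_rbf_fd_weights` routine sketched earlier and is illustrative only; the convention that the first stencil entry is the centre node is an assumption.

```python
import numpy as np

def max_lebesgue(nodes, stencils, m, q, op="dx"):
    """Maximum local D-Lebesgue function (equation (32)) over all stencils.
    `stencils` is a list of index arrays whose first entry is the centre node."""
    return max(np.sum(np.abs(phs_rbf_fd_weights(nodes[s], nodes[s[0]], m=m, q=q, op=op)))
               for s in stencils)
```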
For two-dimensional problems, it has been suggested that the stencil size should satisfy \(n\gtrsim(q+1)(q+2)\)[39; 40]. In practice, including a few additional nodes beyond the minimum requirement is often beneficial to improve performance. The specific choice of additional nodes may depend on the type of node sets used. For instance, a recommended formula for Halton nodes is \(n=(q+1)(q+2)+\lfloor\ln\left[(q+1)(q+2)\right]\rfloor\)[67; 123]. We here select the stencil size \(n\) based on the assumption of a perfectly arranged hexagonal node distribution comprising \(q/2+1\) layers, despite the actual distribution of nodes being heterogeneous. For practical examples, refer to figure 3 and the surrounding context. The recommended parameter combinations are summarized in table 2. In particular, we use \((n,m,q)=(37,3,4)\) for interior nodes and \((n,m,q)=(19,3,2)\) for nodes near boundaries in the construction of the discrete LNS operator, \(\mathbf{L}\). The RBF stencil for each node is determined as the nearest \(n\) nodes acquired through a k-nearest neighbor (kNN) search.
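In practice, the stencil selection reduces to a k-nearest-neighbor query on the scattered node set. A minimal sketch using a k-d tree is given below, with the stencil sizes of table 2 as defaults; the boundary mask and function name are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_stencils(nodes, near_boundary, n_interior=37, n_boundary=19):
    """Sketch: nearest-neighbour stencils for every node (cf. table 2).
    `near_boundary` is a boolean mask flagging nodes that use the smaller,
    lower-order stencil."""
    tree = cKDTree(nodes)
    stencils = []
    for i, x in enumerate(nodes):
        n = n_boundary if near_boundary[i] else n_interior
        _, idx = tree.query(x, k=n)        # the n nearest nodes, including node i itself
        stencils.append(np.atleast_1d(idx))
    return stencils
```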
The condition number of the augmented matrix \(\mathbf{A}_{\text{aug}}\), defined in equation (7), affects the numerical stability in a similar way and hence needs to be considered separately. The condition number for various parameter combinations has been investigated and discussed in Le Borne and Leinen [67]. Independent of the parameter selection, Shahane _et al._[119] demonstrated that scaling the local stencil to a unit length scale can improve the condition number. We further reveal the existence of an optimal range for the averaged local grid spacing, wherein the condition number of the augmented matrix \(\mathbf{A}_{\text{aug}}\) reaches its minimum value.
For a given stencil \(\{\mathbf{x}_{j}\}_{j=1}^{n}\), we perform scaling with respect to the location of \(\mathbf{x}_{0}\) with a scaling factor \(\kappa\) such that
\[\{\tilde{\mathbf{x}}_{j}\}_{j=1}^{n}=\{\kappa(\mathbf{x}_{j}-\mathbf{x}_{0})\}_{j=1}^{n}, \tag{33}\]
and the averaged local grid spacing becomes \(\Delta\tilde{r}=\kappa\Delta r\). Figure 2 shows the maximum condition number within the test grid as a function of \(\Delta\tilde{r}\) for the considered parameters. Optimal performance is consistently achieved within the
\begin{table}
\begin{tabular}{|c|c|c|c|} \cline{2-4} \multicolumn{1}{c|}{} & Shankar and Fogelson [123] & Le Borne and Leinen [67] & present \\ \hline \hline \(q=2\) & \((n,m)=(14,3)\) & \((n,m)=(14,5)\) & \((n,m)=(19,3)\) \\ \hline \(q=3\) & \((n,m)=(22,3)\) & \((n,m)=(22,7)\) & \((n,m)=(28,3)\) \\ \hline \(q=4\) & \((n,m)=(33,3)\) & \((n,m)=(33,7)\) & \((n,m)=(37,3)\) \\ \hline \end{tabular}
\end{table}
Table 2: Summary of parameter selections.
range of \(0.15\lesssim\Delta\tilde{r}\lesssim 0.3\) across all examined parameter combinations. Based on this, we perform spatial scaling to ensure an averaged local spacing of \(\Delta\tilde{r}=0.25\) before solving for the RBF-FD weights in equation (1). The chain rule is applied to transform the RBF weights back to the original grid.
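The local rescaling and its inverse via the chain rule can be summarized as follows. The sketch reuses the weight routine from the previous section; defining the averaged local spacing as the mean distance to the stencil centre is an assumption made here for simplicity.

```python
import numpy as np

def scaled_weights(stencil, x0, order, m=3, q=4, op="dx", target_dr=0.25):
    """Sketch: compute RBF-FD weights on a stencil rescaled to an averaged local
    spacing of ~0.25 (cf. figure 2) and map them back with the chain rule.
    `order` = alpha + beta is the total derivative order of the operator."""
    X = np.asarray(stencil, float)
    x0 = np.asarray(x0, float)
    dr = np.mean(np.linalg.norm(X - x0, axis=1))        # averaged local spacing (assumed)
    kappa = target_dr / dr
    Xs = kappa * (X - x0)                               # scaled, centred stencil
    w_scaled = phs_rbf_fd_weights(Xs, np.zeros(2), m=m, q=q, op=op)
    return kappa**order * w_scaled                      # chain rule: d/dx = kappa * d/dx~
```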
### Boundary condition treatments
#### 4.2.1 Homogeneous Dirichlet boundary conditions
Homogeneous Dirichlet boundary conditions, \(\mathbf{q}^{\prime}=0\), are widely employed in hydrodynamic stability analysis based on the assumption that perturbation variables either vanish at the solid wall or in the far field. To approximate spatial derivatives for a node near such a boundary, we follow the Kansa method [65] and compute the RBF-FD weights using the local stencil consisting of both the interior nodes, \(\{\mathbf{x}^{(i)}_{j}\}_{j=1}^{n^{(i)}}\), and the boundary nodes, \(\{\mathbf{x}^{(b)}_{j}\}_{j=1}^{n^{(b)}}\). Letting \(g(\mathbf{x}^{(b)}_{j})=0\), equation (1) becomes
\[\mathcal{D}g(\mathbf{x}_{0})=\sum_{j=1}^{n^{(i)}}w^{(i)}_{j}g(\mathbf{x}^{(i)}_{j})+ \sum_{j=1}^{n^{(b)}}w^{(b)}_{j}g(\mathbf{x}^{(b)}_{j})=\sum_{j=1}^{n^{(i)}}w^{(i)} _{j}g(\mathbf{x}^{(i)}_{j}), \tag{34}\]
and only the weights for the interior nodes will be used in the computation.
#### 4.2.2 Boundary-normal derivatives
Neumann conditions, inflow/outflow conditions, or the enforcement of continuity necessitate the evaluation of boundary-normal derivatives and require special treatment for scattered nodes that are not structured near boundaries. To accurately approximate derivatives in the boundary-normal direction, we propose an elliptical stencil with its major axis perpendicular to the boundary. The elliptical form is achieved by utilizing a weighted Euclidean distance metric
\[\tilde{r}=\sqrt{a^{2}+b^{2}}\sqrt{\frac{\left((\mathbf{x}-\mathbf{x}_{0})\cdot\mathbf{n} \right)^{2}}{a^{2}}+\frac{\left((\mathbf{x}-\mathbf{x}_{0})\cdot\mathbf{t}\right)^{2}}{b^{ 2}}}, \tag{35}\]
where \(\mathbf{n}\) and \(\mathbf{t}\) are the normal and tangential directions, respectively, and \(a\) and \(b\) denote the corresponding scaling factors. An eccentricity of \(\frac{\sqrt{a^{2}-b^{2}}}{a}=\frac{2\sqrt{b}}{5}\) is used in the computation. We use the \(q=2\) polynomial augmentation for the boundary-tangential direction and a higher-order of accuracy of \(q=4\) for the boundary-normal direction. An illustrative example is shown in figure 3(a). This new strategy circumvents the need to increase the local stencil near domain boundaries to prevent the Runge phenomenon, as demonstrated in Bayona _et al._[15], Bayona [13]. Without this special treatment, stable solutions for hydrodynamic stability analysis cannot be attained.
#### 4.2.3 Symmetric and anti-symmetric boundary conditions
To address pole singularities that arise in polar coordinates and symmetric or anti-symmetric boundary conditions, we propose a generalization of the pole treatment method by Mohseni and Colonius [86] for scattered nodes. We introduce a set of ghost nodes, denoted as \(\mathcal{X}^{+}\), which are symmetric to the interior nodes, \(\mathcal{X}^{\star}\), with respect to the centerline. For each interior node, we divide its local stencil into two disjoint sets such that \(\{\mathbf{x}_{j}\}_{j=1}^{n}=\{\mathbf{x}^{\star}_{j}\}_{j=1}^{n^{\star}}\cup\{\mathbf{x}^{+}_{j}\}_{j=1}^{n^{+}}\), where \(\mathbf{x}^{\star}\in\mathcal{X}^{\star}\) and \(\mathbf{x}^{+}\in\mathcal{X}^{+}\), respectively, and \(n=n^{\star}+n^{+}\). Additionally, we define \(\{\mathbf{x}^{\Delta}_{j}\}_{j=1}^{n^{+}}\subset\{\mathbf{x}^{\star}_{j}\}_{j=1}^{n^{\star}}\) as the counterparts of the image nodes, and \(\{\mathbf{x}^{\circ}_{j}\}_{j=1}^{n^{\star}-n^{+}}=\{\mathbf{x}^{\star}_{j}\}_{j=1}^{n^{\star}}\setminus\{\mathbf{x}^{\Delta}_{j}\}_{j=1}^{n^{+}}\) as the remaining interior nodes. Refer to figure 3(b) for detailed symbol explanations. The function values at the image nodes are determined by their corresponding counterparts, given by
\[g(\mathbf{x}^{+}_{j})=\eta g(\mathbf{x}^{\Delta}_{j}), \tag{36}\]
where \(\eta\) depends on the type of boundary conditions being imposed. Specifically, we use \(\eta=1\) for symmetric boundary conditions and \(\eta=-1\) for anti-symmetric boundary conditions. The RBF-FD weights in equation (1) can be written as
\[\mathcal{D}g(\mathbf{x}_{0})=\sum_{j=1}^{n^{\star}}w^{\star}_{j}g(\mathbf{x}^{\star}_{ j})+\sum_{j=1}^{n^{+}}w^{+}_{j}g(\mathbf{x}^{+}_{j})=\sum_{j=1}^{n^{\star}-n^{+}}w^{\circ}_{j}g(\mathbf{x}^{\circ}_{j})+\sum_{j=1}^{n^{+}}\left(\eta w^{+}_{j}+w^{\Delta}_{j}\right)g(\mathbf{x}^{\Delta}_{j}). \tag{37}\]
This treatment allows us to approximate derivatives solely based on the function values at the interior nodes, effectively handling pole singularities and addressing the challenges associated with scattered nodes.
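The weight folding of equation (37) amounts to adding each ghost-node weight, multiplied by \(\eta\), onto its interior counterpart. A minimal sketch with assumed bookkeeping arrays is shown below.

```python
def fold_ghost_weights(w, idx, is_ghost, mirror_of, eta=1.0):
    """Sketch of equation (37): accumulate RBF-FD weights of ghost (image) nodes
    onto their interior counterparts. w: stencil weights; idx: global node indices;
    is_ghost: boolean flags; mirror_of: ghost index -> interior counterpart index;
    eta = +1 for symmetric, -1 for anti-symmetric boundary conditions."""
    folded = {}
    for wj, j, ghost in zip(w, idx, is_ghost):
        target = mirror_of[j] if ghost else j
        folded[target] = folded.get(target, 0.0) + (eta * wj if ghost else wj)
    return folded          # interior node index -> accumulated weight
```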
## 5 Applications
We demonstrate the mesh-free RBF-FD-based hydrodynamic stability framework outlined in SS3 using three representative examples: canonical steady and unsteady cylinder wakes, a self-similar non-parallel steady laminar boundary-layer flow, and the turbulent mean of a transonic jet, as summarized in table 3. These three examples are benchmark problems for open flows and are appropriate for validating mesh-free hydrodynamic stability.
### Cylinder wake
We first consider the incompressible cylinder flow at diameter-based Reynolds numbers, \(\mathrm{Re}=\frac{U_{\infty}D}{\nu}\), ranging from 47 to 180 and investigate the mean-flow stability within the two-dimensional laminar regime. The occurrence of the periodic von Karman vortex shedding in the cylinder wake beyond the critical Reynolds number of \(\mathrm{Re}_{c}\simeq 47\) is a well-known phenomenon, owing to a Hopf bifurcation that results in flow instability, see, e.g., [17; 148]. Beyond the limit of \(\mathrm{Re}\simeq 188\), the cylinder flow becomes three-dimensional [12; 150].
Classical LST analysis of the cylinder base flow accurately predicts the onset of unsteadiness [57; 154] but fails to capture the vortex-shedding frequency beyond \(\mathrm{Re}_{c}\)[11; 127]. Previous studies by Hammond and Redekopp [49] and Pier [100] show that LST analysis around the cylinder mean flow accurately identifies the vortex-shedding frequency compared to experimental measurements. Barkley [11] supported these findings and showed that the cylinder mean flow is marginally stable in the 2D regime. Sipp and Marquet [127] subsequently provided theoretical underpinning by conducting a weakly nonlinear analysis and establishing criteria for utilizing mean flows in LST analysis. The vortex-shedding dynamics beyond the critical Reynolds number were later investigated using a self-consistent model [74; 75] and RA [61; 135] based on the mean flow stability. We here conduct both LST and resolvent analyses of the cylinder wake mean flow.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Flow & \(\mathrm{Re}\) & \(M\) & Flow type & Base state & Analysis & Sec. \\ \hline \hline Cylinder wake [29] & \(47-180\) & - & 2D laminar & \(\overline{u},\overline{v}\) & LST/RA/WM & §5.1 \\ \hline Boundary layer (ZPG) & \(6\times 10^{5}\) & - & 2D laminar & \(u_{\mathrm{ZPG}},\nu_{\mathrm{ZPG}}\) & RA/WM & §5.2 \\ \hline Jet [23] & \(\approx 10^{6}\) & 0.9 & 3D turbulent & \(\overline{\rho},\overline{u}_{x},\overline{u}_{t},\overline{u}_{\theta}, \overline{T}\) & RA/WM & §5.3 \\ \hline \end{tabular}
\end{table}
Table 3: Overview of datasets and analyses. The columns from left to right indicate the flow description, Reynolds number, Mach number, flow type, base state, analysis type, and section number. The zero-pressure-gradient (ZPG) Blasius solution is used for analyzing the boundary layer. Analyses include linear stability (LST) and resolvent analyses (RA), along with wavemaker (WM).
Figure 3: RBF stencils (red-shaded circles) for a given node (blue star): (a) boundary-normal derivatives; (b) interior node near the symmetric centerline (dot-dashed). The latter stencil includes the interior nodes (\(\{\mathbf{x}^{\star}_{j}\}_{j=1}^{n^{\star}}\), dot) and image ghost nodes (\(\{\mathbf{x}^{+}_{j}\}_{j=1}^{n^{+}}\), ’+’). The counterparts of the image nodes, \(\{\mathbf{x}^{\Delta}_{j}\}_{j=1}^{n^{+}}\), and the remaining nodes, \(\{\mathbf{x}^{\circ}_{j}\}_{j=1}^{n^{\star}-n^{+}}\), are shown as triangles and circles, respectively. The area of each red-shaded circle represents the corresponding local radial control volume, \(\mathrm{d}V_{i}\).
We define the computational domain \(\Omega\) as the exterior of the cylinder \(r\geq D/2=0.5\) and within the rectangle \(-15\leq x\leq 30,\;-15\leq y\leq 15\). The computational domain is discretized using \(N=118225\) scattered nodes. Local grid refinement is employed near the cylinder with a characteristic distance of \(\Delta r=0.03\) and around the wake centerline with \(\Delta r=0.04\) to better resolve the flow structures. The unsteady cylinder flow is simulated using the PHS+poly RBF-FD version of the fractional-step, staggered-grid incompressible Navier-Stokes solver by Chu and Schmidt [29]. The mean-flow profiles are obtained as the time average of the flow over 20 vortex-shedding cycles. Figure 4 shows the mean vorticity at \(\mathrm{Re}=100\) and the computational grid. Homogeneous boundary conditions, \(u^{\prime}=v^{\prime}=0\), are prescribed at the inlet and the cylinder surface. Symmetric boundary conditions with \(v^{\prime}=\partial u^{\prime}/\partial y=0\) are applied at the transverse boundaries. A stress-free outflow condition, \(-p^{\prime}\mathbf{n}+\frac{1}{\mathrm{Re}}\nabla\mathbf{u}^{\prime}\cdot\mathbf{n}=0\), where \(\mathbf{n}=[1,0]^{T}\) is the outflow direction, is enforced at the outflow.
The local wavemaker sensitivity in equation (23) and the resolvent gain in equation (26) are both quantified in terms of the perturbed kinetic energy. To this end, we define the integration matrices
\[\mathbf{W}_{u}=\mathbf{W}_{f}\equiv\begin{bmatrix}\mathbf{1}&\mathbf{0}\\ \mathbf{0}&\mathbf{1}\end{bmatrix}\otimes\mathrm{diag}(\mathrm{d}V_{1},\mathrm{d}V_{2 },\cdots,\mathrm{d}V_{N}) \tag{38}\]
in equation (27) to approximate the integral within the computational domain, where \(\otimes\) denotes the Kronecker product. Here, \(\mathrm{d}V_{i}=\xi\pi(\Delta r_{i})^{2}\) is the local radial control volume for each grid. The constant \(\xi\) ensures consistency with the total control volume and is defined by letting \(\sum_{i=1}^{N}\mathrm{d}V_{i}=\xi\pi\sum_{i=1}^{N}(\Delta r_{i})^{2}=\int_{ \Omega}1\mathrm{d}\mathbf{x}\). Refer to figure 3 for practical illustrations.
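The quadrature weights entering equation (38) can be approximated as follows; taking \(\Delta r_{i}\) as the nearest-neighbour distance is an assumption made for this sketch.

```python
import numpy as np
from scipy.spatial import cKDTree

def radial_control_volumes(nodes, domain_area):
    """Sketch of the local radial control volumes dV_i = xi * pi * dr_i^2 used in
    equation (38); dr_i is taken as the nearest-neighbour distance (assumption) and
    xi rescales the weights so that their sum equals the domain area."""
    d, _ = cKDTree(nodes).query(nodes, k=2)      # k=2: each node plus its nearest neighbour
    dV = np.pi * d[:, 1]**2
    return dV * (domain_area / dV.sum())
```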
Figure 5 shows the Strouhal number, \(\mathrm{St}=\lambda_{i}/2\pi\), associated with the leading eigenvalues at varying Reynolds numbers. Starting from the critical Reynolds number of \(\mathrm{Re}_{c}\approx 47\), the frequency-Reynolds number dependence exhibits the typical features of a Hopf bifurcation. Our results are in good agreement with Barkley [11], and similarly, deviate no more than 3% from those reported by Pier [100]. The maximum growth rates are almost identical to zero (see also figure 6 below), confirming that the mean flow is marginally stable.
We next conduct a comparative study of LST and RA at two representative Reynolds numbers, the critical Reynolds number of \(\mathrm{Re}=47\) and \(\mathrm{Re}=100\), as an example of the unsteady regime. Figure 6 shows the resulting resolvent singular value and stability eigenvalue spectra. At \(\mathrm{Re}=\mathrm{Re}_{c}\), both the peak of the resolvent gain and the leading eigenvalue identify the same frequency, \(\mathrm{St}_{c}=0.1199\), as the vortex shedding frequency. This value is in good agreement with the results of the LST analysis around the base flow, see, e.g., Giannetti and Luchini [46] (\(\mathrm{St}_{c}\simeq 0.118\) for \(\mathrm{Re}_{c}\simeq 46.7\)), Marquet _et al._[76] (\(\mathrm{St}_{c}\simeq 0.116\) for \(\mathrm{Re}_{c}\simeq 46.8\)), and Sipp and Marquet [127] (\(\mathrm{St}_{c}\simeq 0.118\) for \(\mathrm{Re}_{c}\simeq 46.6\)). Similarly, the resolvent singular value spectrum at \(\mathrm{Re}=100\) in panel 6(b) displays a clear peak at the vortex shedding frequency, which is now at \(\mathrm{St}=0.1652\), and again coinciding with the least stable LST eigenvalue. This result matches closely with the findings of earlier Strouhal-Reynolds number relationship [38; 60; 74; 101; 150] and RA [61; 135] studies.
Figure 7 shows the leading LST and RA modes for the cylinder mean flow at \(\mathrm{Re}=\mathrm{Re}_{c}\) and \(\mathrm{Re}=100\). The leading LST and optimal response modes, along with their corresponding adjoint and forcing modes, are near-identical at both
Figure 4: Computational grid for cylinder flows with \(N=118225\) nodes, colored using the mean vorticity, \(\overline{\omega}=\mathbf{D}_{x}\overline{\mathbf{v}}-\mathbf{D}_{y}\overline{\mathbf{u}}\), at \(\mathrm{Re}=100\).
Reynolds numbers. The resemblance observed between the LST and resolvent response modes is to be anticipated, as the singular value of the resolvent attains its peak at the vortex-shedding frequency [18; 135], thus exhibiting the characteristic vortex-shedding structure. This implies that the optimal forcing leverages the global instability mode to achieve maximum gain. Differing from the optimal response and LST modes, the optimal forcing and adjoint modes are active far upstream of the cylinder but peak downstream in close vicinity to the cylinder. This hallmark of convective instability was similarly observed in previous works [61; 76; 135].
We finally investigate the wavemaker \(\zeta_{\text{LST}}\), defined in equation (23), in figure 8 to quantify the sensitivity of spatially localized feedback. Wavemakers obtained from LST and RA look similar for both Reynolds numbers and reach their maxima in two symmetrically positioned lobes located across the separation bubble. This result signifies the promising applicability of RA-based wavemakers in accurately identifying the region where the flow instability mechanisms happen. The wavemaker patterns at the critical Reynolds number shown in panels 8 (a,c) compare well to those previously reported by Giannetti and Luchini [46] and Marquet _et al._[76]. The shrinking of the wavemaker region at \(\text{Re}=100\) in panels 8 (b,d) closely matches the findings in Meliga _et al._[82] and Symon _et al._[135].
Figure 5: Vortex shedding frequency predicted by the leading eigenvalue of the mean-flow stability problem as a function of Reynolds number. The frequency is given as the non-dimensional Strouhal number \(\text{St}=\lambda_{i}/2\pi\). Shown for comparison are results from Pier [100] (green square) and Barkley [11] (blue circle). Two representative Reynolds numbers for the following analysis, \(\text{Re}_{c}\approx 47\) and \(\text{Re}=100\), are highlighted as dashed lines.
Figure 6: Resolvent singular values (blue curve) and stability eigenvalue (red circle) spectra for \(\text{Re}=47\) (a) and \(\text{Re}=100\) (b). The 20 eigenvalues closest to \(\lambda_{c}=0+0.753\)i or \(\text{St}_{c}=0.1199\) and \(\lambda=0+1.038\)i or \(\text{St}=0.1652\) (dashed lines) were found within the regions outlined by the black dot-dashed lines using the shift-and-invert Arnoldi algorithm for \(\text{Re}=47\) and \(100\), respectively.
### Blasius boundary layer
We now examine the incompressible two-dimensional non-normal flat-plate boundary layer, a classic example of convectively unstable flows characterized by the amplification of disturbances during downstream advection. The convective instability of boundary-layer flows has been extensively studied through 1D LST analysis over the past century, e.g., [63; 72; 73; 106; 132]. Above the critical displacement-thickness Reynolds number of \(\text{Re}_{\delta,c}\approx 520\), the Tollmien-Schlichting (TS) waves are known to arise as unstable eigenmodes of the Orr-Sommerfeld equation [25]. While these locally unstable waves are damped in 2D LST [1; 2; 7; 35], investigations into the non-normality of the linearized Navier-Stokes equations for open flows have revealed an alternative pathway for disturbance amplification
Figure 8: Wavemakers \(\zeta_{\text{LST}}\) for mean flow at (a) \(\text{Re}=47\) and (c) \(\text{Re}=100\). Respective results for \(\zeta_{\text{RA}}\) are shown in (b) and (d). The blue curve represents the mean flow streamlines.
Figure 7: Leading modes for \(\text{Re}=47\) at \(\text{St}=0.1199\) (a-d) and \(\text{Re}=100\) at \(\text{St}=0.1652\) (e-h): (a,e) LST modes; (b,f) response modes; (c,g) adjoint modes; (d,h) forcing modes.
[27; 77; 129]. To analyze the non-modal behavior of boundary-layer flows, input-output analysis has been conducted to determine the optimal harmonic forcing that results in the largest asymptotic response [3; 8; 9; 115; 128].
Following the setup by Sipp and Marquet [128], we compute the resolvent gain in the restricted domain \(\Omega_{\mathrm{RA}}=[x,\,y/\delta]\in[0.02,\,1]\times[0,\,22.52]\) within the computational domain \(\Omega=[x,\,y/\delta]\in[0.02,\,1.27]\times[0,\,22.52]\) such that the forcing optimizes the ratio between the restricted kinetic energy, \(\iint_{\Omega_{\mathrm{RA}}}\left(|\hat{u}|^{2}+|\hat{v}|^{2}\right)\,\mathrm{ d}x\mathrm{d}y\), and the integral of the forcing, \(\iint_{\Omega}\left(|\hat{f}_{u}|^{2}+|\hat{f}_{v}|^{2}\right)\,\mathrm{d}x \mathrm{d}y\). The zero-pressure-gradient (ZPG) asymptotic Blasius solution, characterized by a local Reynolds number of \(\mathrm{Re}_{x}=\frac{U_{\infty}x}{\nu}=6\times 10^{5}\) or \(\mathrm{Re}_{\delta}=\frac{U_{\infty}\delta(x)}{\nu}=1332\) at the end of the restricted domain \(\Omega_{\mathrm{RA}}\), is used as the base flow to investigate the flow instabilities, where \(\delta(x)=1.72\sqrt{x/\mathrm{Re}_{x}}\) is the displacement thickness. The leading edge (\(x<0.02\)) is removed to avoid the singularity of the self-similar solution. Hereafter, we use the notations \(\mathrm{Re}=\mathrm{Re}_{x=1}\) and \(\delta=\delta(x=1)\) for simplicity.
The computational domain \(\Omega\) is discretized using \(N=726143\) scattered nodes, corresponding to \(2.2\times 10^{6}\) degrees of freedom. In comparison, Sipp and Marquet [128] employ \(13.7\times 10^{6}\) degrees of freedom. The characteristic distances of the grid are \(\Delta r=0.069\delta\) near the flat plate (\(y/\delta<4\)), \(\Delta r=0.077\delta\) near the inlet (\(x<0.025\)), and average at \(0.081\delta\) over the whole domain. At the inlet and the flat plate, we prescribe \(u^{\prime}=v^{\prime}=0\). A symmetric boundary condition with \(v^{\prime}=\partial u^{\prime}/\partial y=0\) is applied at the far-field (\(y/\delta=22.52\)). A stress-free outflow condition is enforced at the outflow. The weight matrix \(\mathbf{W}_{u}\) is defined in equation (38), but with zero weights in the region \(\Omega\setminus\Omega_{\mathrm{RA}}\).
We first examine the leading resolvent singular value as a function of the normalized frequency, \(F=10^{6}\cdot\omega/\mathrm{Re}\), in figure 9. Overall, our results agree well with those reported by Sipp and Marquet [128]. The slight deviation of the peak and the resolvent gains beyond the peak (\(F\gtrsim 88\)) are most likely attributed to differences in the base flows, that is, the ZPG self-similar Blasius solution in the present study and the fully non-parallel numerical data of Sipp and Marquet [128] that includes the leading edge. The discrepancy at higher frequencies is potentially also related to the truncation of the leading edge, where the asymptotic Blasius solution becomes singular.
Figure 10 shows the optimal and suboptimal forcings and corresponding responses at \(F=100\). The optimal response exhibits clear Tollmien-Schlichting (TS) wavepackets in the downstream region, while the upstream tilted structures in the optimal forcing highlight the active role of the Orr mechanism in extracting energy from the mean shear via the Reynolds stress [25]. The clear spatial separation between leading resolvent forcing and response modes indicates the stream-wise non-normality of the system [27; 76; 129]. The suboptimal forcing and response are similar to the leading modes except for two local maxima. This modulation confirms the orthogonality in their respective inner products. Qualitative comparisons with previous studies by Akervik _et al._[1], Monokrousos _et al._[89], Brandt _et al._[22] and Sipp and Marquet [128] verify the capability of the present framework in identifying the convective instability of the boundary-layer flow.
Figure 9: Resolvent gains (solid lines) and peak frequencies (dashed lines) for the flat-plate boundary layer and \(\mathrm{Re}=6\cdot 10^{5}\).
Figure 11: Spatial distributions of energy density for optimal (dashed) and suboptimal (dotted) forcings and corresponding responses (solid and dot-dashed, respectively) at \(F=100\). The results (red) are compared to those reported by Sipp and Marquet [128] (blue). The two vertical solid lines represent the upstream neutral point (branch I) and the downstream neutral point (branch II) from a local stability analysis.
Figure 10: Optimal and suboptimal resolvent forcings (b,d) and corresponding responses (a,c) for the flat-plate boundary layer at \(F=10^{6}\cdot\omega/\mathrm{Re}=100\). The normalized stream-wise velocity components have been interpolated onto a stretched Cartesian mesh for visualization. Panels (e) and (f) show the local regions of the optimal forcing (magenta box) and corresponding response modes (red box) with the largest magnitudes, respectively, on the scattered nodes used for the computation. The Blasius displacement thickness, \(\delta(x)\), is highlighted as the white dashed line.
As a quantitative assessment of the flow structures, we examine the energy density functions,
\[d_{f}(x)=\int_{0}^{y_{\max}}\left(|\hat{f}_{u}|^{2}+|\hat{f}_{v}|^{2} \right)\,\mathrm{d}y,\qquad\mathrm{and}\qquad d_{u}(x)=\int_{0}^{y_{\max }}\left(|\hat{u}|^{2}+|\hat{v}|^{2}\right)\,\mathrm{d}y, \tag{39}\]
in figure 11 for the modes shown in figure 10. Our results are almost identical to those of Sipp and Marquet [128]. The spatial distribution of the optimal forcing unambiguously identifies the location of the upstream neutral point (branch I) from a local stability analysis at \(x=0.3\), and the corresponding response is localized at \(x=0.89\), which is in close proximity to the downstream neutral point (branch II).
Figure 12 shows the energy density distributions of the optimal forcing and response as a function of frequency, with the stream-wise coordinate given in terms of the local Reynolds number, \(\mathrm{Re}_{\delta}\). The optimal forcings agree well with Sipp and Marquet [128], and their maxima effectively delineate the convectively stable/unstable boundary (branch I) obtained using local stability theory. The Orr and TS mechanisms coexist and compete while both contribute to the overall energy gain of RA. As frequency increases, the spatial support of the TS-like optimal responses decreases, suggesting that the TS mechanism is only supported in a limited region of the shear layer at high frequencies. Additionally, the energy density of the forcing shows an increasing trend, indicating that the Orr mechanism becomes dominant at high frequencies. Similar to the deviation in the resolvent gain spectra shown in figure 9, we observe a slight downstream shift in the peak of optimal responses compared to the results reported by Sipp and Marquet [128] for increasing frequency (\(F\gtrsim 130\)).
Finally, we show the RA-based wavemaker, defined in equation (31), in figure 13 to examine the instability mechanisms at different frequencies. The maximum magnitude of \(\zeta_{\rm RA}\) shows an increasing trend with frequency. The large magnitudes of the wavemaker are attributed to the fact that the value of the term \(|\hat{\mathbf{u}}_{1}^{*}\mathbf{W}_{u}\hat{\mathbf{f}}_{1}|^{-1}\) is considerably greater than its value for self-adjoint modes, which equals 1 [77]. This again confirms that the boundary-layer flow exhibits high non-normality. For \(F\lesssim 100\) (panels 13 (a-c)), the RA-based wavemaker exhibits an elongated shape and is artificially restricted within the optimization domain \(\Omega_{\rm RA}\). The wavemaker becomes more concentrated towards the upstream region for higher frequencies (\(F\gtrsim 140\), panels 13 (d-g)). This observation suggests a transition from the TS-dominated to the Orr-dominated mechanism as the frequency increases.
### Turbulent jet
The above two examples demonstrate the capability of the proposed numerical framework for analyzing incompressible flows within the laminar regime. We now focus on the mean flow analysis of a turbulent iso-thermal jet at
Figure 12: Energy density distributions for the optimal forcings (a) and responses (b) as a function of frequency. The locations of maximum energy densities are marked as red circles. The results reported by Sipp and Marquet [128] are shown as blue stars for comparison. The neutral curve obtained from a local LST analysis is shown as the magenta line.
a Mach number, based on the jet velocity and the far-field speed of sound, of \(M=0.9\), and a Reynolds number, based on the nozzle diameter and the jet velocity, of \(\mathrm{Re}\approx 10^{6}\). The large eddy simulation (LES) data, generated using the unstructured flow solver ’Charles’, is used for analysis. Further details about the dataset can be found in Bres _et al._[23]. Previous studies have demonstrated that the transonic turbulent jet under consideration displays a variety of coherent features, including the well-known Kelvin-Helmholtz instabilities of the shear layer [23], downstream non-modal Orr-type waves [99; 117], and trapped acoustic waves in the potential core [116; 141].
The state vector,
\[\mathbf{q}=[\rho,u_{x},u_{r},u_{\theta},T]^{T}, \tag{40}\]
comprises primitive variables: density \(\rho\), temperature \(T\), and cylindrical velocity components \(u_{x}\), \(u_{r}\) and \(u_{\theta}\) in the streamwise, \(x\), radial, \(r\), and circumferential, \(\theta\), directions, respectively. The fluctuating state is defined as \(\mathbf{q^{\prime}}=[\rho^{\prime},u_{x}^{\prime},u_{r}^{\prime},u_{\theta}^{ \prime},T^{\prime}]^{T}\). Owing to the rotational symmetry of the jet, we may decouple the governing equations and construct the compressible linear operator \(\mathcal{L}\) in cylindrical coordinates, without loss of generality, for each azimuthal wavenumber \(m_{\theta}\) independently. For a round jet, the mean azimuthal velocity component is zero. Upon linearization of the compressible Navier-Stokes equations shown in (A.1), we obtain the linearized Navier-Stokes operator around the azimuthally averaged long-time mean of the primitive state, \(\mathbf{\overline{q}}=[\overline{\rho},\overline{u}_{x},\overline{u}_{r},0, \overline{T}]^{T}\), such that
\[\frac{\partial}{\partial t}\mathbf{q^{\prime}}=\mathcal{L}\mathbf{q^{ \prime}}+\mathbf{f}. \tag{41}\]
The general setup, including boundary conditions, sponge regions, and a molecular Reynolds number of \(\mathrm{Re}=3\times 10^{4}\), follows Schmidt _et al._[116] and Schmidt _et al._[117].
The computational domain \(\Omega=[-1,31.01]\times[0,9.65]\) includes the physical domain \(x,r\in[0,30]\times[0,6]\) and the surrounding sponge regions that prevent waves from being reflected. Local refinement is used near the nozzle with \(\Delta r=0.005\) and near the jet centerline with \(\Delta r=0.008\), resulting in \(N=210817\) scattered nodes for the construction of the RBF-FD-based global Jacobian \(\mathbf{L}\), see figure 14. A structured mesh with a comparable number of \(N=185250\) was used in Schmidt _et al._[117].
Figure 13: RA-based wavemaker \(\zeta_{\mathrm{RA}}\) for the flat-plate boundary layer as a function of frequency.
The resolvent gain is quantified in terms of the compressible energy norm [28] through the weight matrices
\[W_{q}=W_{f}\equiv\mathrm{diag}\left(\frac{\overline{T}}{\gamma\overline{\rho}M^{ 2}},\overline{\rho},\overline{\rho},\overline{\rho},\frac{\overline{\rho}}{ \gamma(\gamma-1)\overline{T}M^{2}}\right)\otimes\mathrm{diag}(2\pi r_{1} \mathrm{d}V_{1},2\pi r_{2}\mathrm{d}V_{2},\cdots,2\pi r_{N}\mathrm{d}V_{N}). \tag{42}\]
For compressible flows, the discrete weighted resolvent operator takes the form of
\[\mathbf{R}(\omega)=\mathbf{W}_{q}^{\frac{1}{2}}\mathbf{C}\left(\mathrm{i}\omega\mathbf{I }-\mathbf{L}\right)^{-1}\mathbf{B}\mathbf{W}_{f}^{-\frac{1}{2}}, \tag{43}\]
where the input and output matrices, \(\mathbf{B}\) and \(\mathbf{C}\), are used to focus the analysis exclusively on the physical domain. As a first demonstration, we conduct the mesh-free RA for the symmetric component of the jet with \(m_{\theta}=0\) in the following.
Figure 15 compares the leading resolvent singular value spectra to those reported by Schmidt _et al._[117], obtained using a 4th-order finite difference [80] and a tenth-order filter for discretization. Very good agreement is observed within the frequency range \(0.15\lesssim\mathrm{St}\lesssim 1.2\), with only a minor deviation at the peak value. This specific frequency range has been previously identified as the regime where different physical mechanisms are active in turbulent jets, see, e.g., [116; 117; 134; 138; 141; 98].
To verify that large-scale coherent structures in the turbulent jet are accurately captured, we investigate the leading resolvent modes at three representative frequencies, as shown in figure 16. At \(\mathrm{St}=0.2\), the optimal response exhibits
Figure 14: Computational grid for jet with \(N=210817\approx 1013\times 208\) nodes, colored using the mean streamwise velocity \(\overline{u}_{x}\) at \(\mathrm{Re}\approx 10^{6}\). The potential core (white solid) and the jet width (white dashed) are delineated as isolines corresponding to 99% and 5% of the jet velocity, respectively
Figure 15: Leading resolvent singular value spectra for the transonic jet and \(m_{\theta}=0\). Results reported by Schmidt _et al._[117] are shown as comparisons (blue dashed).
a clear downstream (\(x\gtrsim 10\)) Orr-type wavepacket. The elongated structures tilted against the mean shear in the corresponding forcing are a clear indicator of the Orr mechanism associated with the optimal non-modal spatial growth [117; 137; 138]. Similar flow structures have been reported in Garnaud _et al._[44], Lesshafft _et al._[69] and Pickering _et al._[99]. For higher frequencies (St \(\gtrsim 0.6\)), the optimal response takes the form of compact Kelvin-Helmholtz (KH) wavepackets localized upstream in the initial shear layer region of the jet (\(x\lesssim 10\)), which can be identified as the modal KH-type shear-layer instability of the turbulent mean flow [48; 62; 134]. The corresponding forcing distributions near the lip line (\(r\simeq 0.5\)) remain indicative of the Orr mechanism, but this time in conjunction with the KH instability [44; 59; 103; 118; 138]. Within the potential core, the optimal response exhibits duct modes at St = 0.6 and trapped acoustic waves at St = 1. The presence of the latter is a general feature of resonance mechanisms between propagating waves associated with the isothermal and transonic jet, as previously described in [116; 141]. Notably, comparable patterns can be observed in the corresponding forcings, indicating the nearly self-adjoint nature of the trapped instability mechanism. We confirmed that the modal shapes are almost identical to those obtained using the numerical scheme outlined in Schmidt _et al._[117] (not shown).
Figure 17 shows how the RA-based wavemaker, \(\zeta_{RA}\), unveils different physical mechanisms that are active in the turbulent jet. Most notably, the wavemaker region and its overall peak move upstream as the frequency increases from St = 0.2 to 1. At St = 0.2, the optimal forcing and response modes are spatially separated (see panels 16(a,b)), resulting in a very weak wavemaker signature in the downstream self-similar region. This result is anticipated, as
Figure 16: Streamwise velocity component of the optimal response (a,c,e) and corresponding forcing modes (b,d,f) at three representative frequencies: (a-b) St = 0.2; (c-d) St = 0.6; (e-f) St = 1. The modes are interpolated onto a stretched Cartesian mesh with contours corresponding to \(\pm 0.6\|\cdot\|_{\infty}\) for visualization. The potential core and the jet width, shown in figure 14, are visualized for comparisons.
Figure 17: RA-based wavemaker \(\zeta_{RA}\) for: (a) St = 0.2; (b) St = 0.6; (c) St = 1. The region with the strongest feedback is marked as green ’+’.
the responses at low frequencies are purely triggered by the Orr mechanism without being associated with a single global mode or a local feedback mechanism that leads to the creation of waves in a localized region. This is indicative of a non-modal convective instability. At \(\mathrm{St}=0.6\), the wavemaker peaks at the centerline at \(x\simeq 5\). This is the end of the potential core, where upstream traveling acoustic modes are generated, also known as duct modes, as previously described by Towne _et al._[141]. Parallel-flow models accurately predict the occurrence of duct modes, but the wavemaker potentially reveals the location where duct modes originate, which is not predicted by the theory. The wavemaker at \(\mathrm{St}=1\) peaks within the shear layer, which is associated with the KH instability, but it also exhibits a comparable magnitude at the upstream region of the potential core, which identifies the resonance mechanisms that trigger trapped acoustic modes [116, 141]. That is, the wavemaker analysis confirms the phase-linking between downstream KH waves and upstream traveling waves at \(\mathrm{St}=1\). Note that this is not the case for \(\mathrm{St}=0.6\), where the wavemaker is solely associated with the duct modes.
## 6 Summary and discussion
In this study, a novel higher-order mesh-free framework for hydrodynamic stability analysis is developed. The framework has been demonstrated and validated on three benchmark problems for open flows: a canonical laminar cylinder wake, a non-parallel flat-plate Blasius boundary layer, and a transonic turbulent jet. It also provides new insights into well-known physics. The proposed framework incorporates PHS-type RBFs with polynomial augmentations to discretize large hydrodynamic stability matrix problems on scattered nodes with high accuracy, stability, and computational efficiency. The resulting differentiation matrices are employed to construct the discrete linearized Navier-Stokes operator for conducting LST and RA. We propose a set of parameters to address the trade-off between accuracy and computational efficiency for PHS+Poly RBF-FD discretizations that provide best practices. In the context of scattered nodes, the practical implementations of various boundary conditions arising in hydrodynamic stability analysis are discussed and addressed.
The present mesh-free approach accurately predicts flow instabilities in wakes behind a 2D cylinder, including the vortex-shedding frequency, corresponding coherent structures, and structural sensitivity. The results are in good agreement with previous studies [11, 46, 76, 100]. In the case of the Blasius boundary layer, the present framework accurately identifies the TS wavepackets and yields favorable quantitative and qualitative comparisons with those previously reported in the literature [1, 22, 89, 128]. Furthermore, the RA-based wavemaker confirms the high non-normality of the boundary layer and the transition in dominant mechanisms from TS to Orr as the frequency increases. When applied to the mean flow of a turbulent jet, the mesh-free RA yields results that are almost identical to those reported by Schmidt _et al._[117]. The application of RA-based WM analysis identifies distinct dominant physical mechanisms operating at different frequencies. These mechanisms include the Orr mechanism in the downstream region with a very weak wavemaker signature at \(\mathrm{St}=0.2\), prominent duct-like structures near the end of the potential core at \(\mathrm{St}=0.6\), and the phase coupling between the KH and trapped acoustic mechanisms at \(\mathrm{St}=1\). These results closely align with the findings documented in prior literature, e.g., [62, 69, 116, 117, 141], and provide a new, global perspective on modal and non-modal growth in jets, as well as on the origins of these instabilities.
Building upon the pioneering work of Hardy [50], RBF discretizations have been widely employed in canonical problems, exhibiting a continual evolution through advancements like RBF-FDs [139] and PHS+poly RBFs [39]. More recently, this mesh-free approach has garnered attention in computational fluid dynamics (CFD) for physical exploration or engineering applications, see, e.g., [29, 119, 120, 121, 147]. While RBF-FDs demonstrate the capability to construct sparse differentiation operators, the potential applications in hydrodynamic stability analysis have not been thoroughly explored. The current work is the first demonstration of such applications. Future extensions encompass applications involving 3D and complex geophysical or engineering fluid flows.
**CRediT authorship contribution statement**
**Tianyi Chu**: Formal analysis, Methodology, Software, Validation, Visualization, Writing - original draft. **Oliver T. Schmidt**: Conceptualization, Funding acquisition, Methodology, Project administration, Resources, Supervision, Writing - review & editing.
**Declaration of competing interest**
The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Oliver T. Schmidt reports financial support was provided by National Science Foundation under Grant No. CBET-1953999. Tianyi Chu reports financial support was provided by National Science Foundation under Grant No. CBET-1953999.
**Data availability**
Data will be made available on request.
**Acknowledgments**
We gratefully acknowledge support by the National Science Foundation under Grant No. CBET-1953999 (PM Ron Joslin).
## Appendix A Governing equations for compressible flows
The compressible Navier-Stokes equations govern the motion of a general, compressible Newtonian fluid,
\[\begin{split}\frac{\partial\rho}{\partial t}&=- \nabla\cdot\rho\mathbf{u},\\ \frac{\partial\rho\mathbf{u}}{\partial t}&=-\frac{1 }{2}\nabla\cdot(\mathbf{u}\otimes\rho\mathbf{u}+\rho\mathbf{u}\otimes\mathbf{u})-\nabla p +\frac{1}{\text{Re}}\nabla\cdot\boldsymbol{\tau},\\ \frac{\partial\rho e}{\partial t}&=-\nabla\cdot \rho e\mathbf{u}+\frac{1}{(\gamma-1)\,\text{Re}\,\text{Pr}\,\text{Ma}_{\infty}^{2}}\nabla\cdot k \nabla T-\nabla\cdot p\mathbf{u}+\frac{1}{\text{Re}}\nabla\cdot\boldsymbol{ \tau}\mathbf{u},\end{split} \tag{10}\]
where \(e\) is the total energy. For a Newtonian fluid, the viscous stress tensor is \(\boldsymbol{\tau}=\mu\left(\nabla\mathbf{u}+\nabla\mathbf{u}^{T}\right)-\frac {2}{3}\mu(\nabla\cdot\mathbf{u})\mathbf{I}\). All flow quantities are non-dimensionalized by their dimensional free-stream values, denoted by \((\cdot)_{\infty}^{*}\), and the coordinates by the jet diameter \(D\). The dimensionless Reynolds number \(\text{Re}=\rho_{\infty}^{*}u_{\infty}^{*}D/\mu_{\infty}^{*}\), Prandtl number \(\text{Pr}=c_{\text{p}}^{*}\mu_{\infty}^{*}/k_{\infty}^{*}\), and Mach number \(M=u_{\infty}^{*}/a_{\infty}^{*}\) then fully describe the flow. Here, \(\mu_{\infty}^{*},k_{\infty}^{*},c_{\text{p}}^{*},\gamma,a_{\infty}^ {*}\) are the free-stream values of the dynamic viscosity, heat conductivity, heat capacity at constant pressure, heat capacity ratio, and speed of sound, respectively. Closure of the equations is achieved under the assumption of an ideal gas, and using Sutherland's law to compute the dynamic viscosity from the local temperature.
|
2303.03492 | A Security-aware Network Function Sharing Model for 5G Slicing | Sharing Virtualized Network Functions (VNFs) among different slices in Fifth
Generation (5G) is a potential strategy to simplify the system implementation
and utilize 5G resources efficiently. In this paper, we propose a
security-aware VNF sharing model for 5G networks. The proposed optimization
model satisfies the service requirements of various slices, enhances slice
security by isolating their critical VNFs, and enhances resource utilization of
the underlying physical infrastructure. The model tries to systematically
decide on sharing a particular VNF based on two groups of constraints; the
first group of constraints is common assignment constraints used in the
existing literature. The second group is the novel security constraints that we
propose in this work; the maximum traffic allowed to be processed by the VNF
and the exposure of the VNF to procedures sourced via untrusted users or access
networks. This sharing problem is formalized to allow for procedure-level
modeling that satisfies the requirements of slice requests in 5G systems. The
model is tested using standard VNFs and procedures of the 5G system rather than
generic ones. The numerical results of the model show the benefits and costs of
applying the security constraints along with the network performance in terms
of different metrics. | Mohammed Mahyoub, AbdulAziz AbdulGhaffar, Emmanuel Alalade, Ashraf Matrawy | 2023-03-06T20:47:49Z | http://arxiv.org/abs/2303.03492v1 | # A Security-aware Network Function Sharing Model for 5G Slicing
###### Abstract
Sharing Virtualized Network Functions (VNFs) among different slices in Fifth Generation (5G) is a potential strategy to simplify the system implementation and utilize 5G resources efficiently. In this paper, we propose a security-aware VNF sharing model for 5G networks. The proposed optimization model satisfies the service requirements of various slices, enhances slice security by isolating their critical VNFs, and enhances resource utilization of the underlying physical infrastructure. The model tries to systematically decide on sharing a particular VNF based on two groups of constraints; the first group of constraints is common assignment constraints used in the existing literature. The second group is the novel security constraints that we propose in this work; the maximum traffic allowed to be processed by the VNF and the exposure of the VNF to procedures sourced via untrusted users or access networks. This sharing problem is formalized to allow for procedure-level modeling that satisfies the requirements of slice requests in 5G systems. The model is tested using standard VNFs and procedures of the 5G system rather than generic ones. The numerical results of the model show the benefits and costs of applying the security constraints along with the network performance in terms of different metrics.
5G Security, Network Slicing (NS), VNF Sharing, Optimization
## I Introduction
Networks are envisioned to support various applications and services with diversified requirements [1]. One distinct concept in the 5G architecture is Network Slicing (NS), which was not present in previous generations of cellular networks. NS enables 5G operators to deploy multiple logical networks on shared physical resources to serve traffic segments with different demands [2, 3]. This is achieved using different technologies integrated with the 5G architecture, most notably Network Function Virtualization (NFV). NFV allows the deployment of Virtualized Network Functions (VNFs) in software or a virtualized environment on commodity hardware. Both NS and NFV help 5G operators to reduce the overall Capital Expenditure (CAPEX) and Operational Expenditure (OPEX) by deploying VNFs efficiently and flexibly to optimize the utilization of network resources [4].
In this paper, we propose a security-aware VNF sharing model for 5G networks. The proposed optimization model not only satisfies the service requirements of various slices but also enhances security by isolating their critical VNFs while enhancing resource utilization of the underlying physical infrastructure. This goal is achieved by sharing as many noncritical VNFs as possible to efficiently utilize resources and satisfy the latency limitations of the procedures composing 5G slices. Although some literature studies considered the sharing property of VNFs in the mapping process, they decided on this property subjectively and used it as an input to their model. This work tries to fill this gap by following a systematic way to decide whether a particular VNF is critical and, if so, to avoid sharing it among slices. In the proposed model, two novel security constraints are considered to define the VNF criticality. The first constraint is the maximum traffic that can be processed by a particular VNF. If a VNF has to process large user and control traffic, it could become a bottleneck, which makes it critical and means it should not be shared between slices. The second one is exposure to procedures initiated by untrusted entities (i.e. user devices or networks). If a VNF is exposed to procedures coming from untrusted parties, this VNF should not be shared among slices either. If such an exposed VNF is compromised, all other slices that share it can be impacted. To this end, providing isolation to critical VNFs is crucial in 5G network slicing. In light of the above discussion, the contributions of this paper are four-fold:
* Proposing a multi-objective Mixed-Integer Nonlinear Programming (MINLP) model aiming at minimizing the processing capacity needed and procedures' latency of all requested slices.
* Providing a systematic way to decide on the sharing property of a particular VNF by introducing new security constraints that define the VNF's criticality.
* Considering the procedure level granularity instead of abstracting a slice as a unit. To the best of our knowledge, this is the first work to consider procedure-level details in the optimization model.
* The proposed model is tested using standard procedures and VNFs of the 5G architecture that are described in 3rd Generation Partnership Project (3GPP) standards [5] rather than using generic VNFs or symbolic procedures.
The rest of this paper is structured as follows. Section II discusses the related works and Section III explains the proposed model. The system setup and model parameters are presented in Section IV. The standard 5G procedures implemented to test the model are defined in Section V. The proposed model is evaluated in Section VI and the limitations of this work are provided in Section VII. Finally, Section VIII
concludes this study.
## II Related work
This section reviews the related literature studies that attempted to solve the VNF placement and allocation problem using optimization approaches. Although there are many studies that considered sharing a physical node between multiple VNFs, our work mainly focuses on sharing the VNFs themselves between multiple 5G slices.
Leyva _et al._ in [6] proposed an Integer Linear Programming (ILP)-formulated optimizing model for User Plane Functions (UPFs) chaining and placement in Multi-access Edge Computing (MEC) system of 5G. Their model targeted the provisioning cost and Quality of Service (QoS) optimization. It considered several aspects such as resource capacity, service latency, UPF-specific requirements, and the order of VNFs in the Service Function Chains (SFCs). UPFs placement and routing are modelled as SFC embedding problem in which active Protocol Data Unit (PDU) sessions are modelled as SFC requests. There is no restriction on sharing a particular VNF except its capacity limit. To solve the problem in a polynomial time, a customized heuristic along with simulating annealing algorithm has been proposed in their work. Our work, on the other hand, in addition to the data-plane function (i.e., UPFs), considers control-plane functions as well.
Coelho _et al._ in [7, 8] modeled the provisioning of the NS requests at the service level as an optimization problem. The model considered functional splitting in the radio access domain and also the separation of the control and data-plane functions. The authors assumed that the network slice request might impose constraints on VNFs that can not be shared between NSs due to their criticality or their belonging to different tenants. They tested different sharing policies such as sharing Data Plane Services (DPS) only, Control Plane Services (CPS) only, some of DPS, some CPS, or without sharing constraints. These sharing policies are given to the model as input, however, our model decides whether to share VNF systematically based on different security constraints.
Malandrino _et al._[9] studied reducing the cost of the 5G service deployment through sharing VNFs subject to end-to-end delay requirements. With the assumption that there is no isolation needed for the new service request, VNFs are shared if convenient (i.e. meet the delay requirements). The authors focused more on how to assign priorities for traffic flows that share the same VNF. For this, they randomly assigned flows priority upon entering VNFs. To reduce the time complexity, they proposed FlexShare as an assignment algorithm.
Tang _et al._[10] proposed a dynamic scaling approach for VNF based on traffic analysis and VNF placement. They analyzed the traffic characteristic of operator networks and then proposed an organizational approach for VNF placement in a common data center. Their model aims to achieve high service availability and save computational resources depending on the traffic estimation to scale in/out the VNF instance. The authors considered general VNF and only user traffic. Our work, however, considers the actual VNFs of the 5G core, data plane, and control plane traffic.
Truong-Huu _et al._ in [11] leveraged the VNF's sharing property in their optimization model to minimize the bandwidth and computational resources required to serve slice requests. A VNF is identified as shareable depending on its functionalities so that it can be assigned to serve different slices. The network address translation function is an example of shareable VNF, however, firewall service is non-shareable. In their work, the sharing property of a VNF is set in advance and provided to the model as input. Additionally, their work considered random traffic flows and generic VNFs that serve these traffics.
Another work leveraging the shareable VNFs criteria to enhance resource utilization is presented by Chengli _et al._ in [12]. Their enhancement is evaluated in terms of the slice acceptance ratio. They claimed that some common functions such as mobility management and network address translation functions can be shared across multiple slices. Similar to the work in [11], Chengli _et al._[12] randomly set the sharing property in their experiments and used it as an input for their model.
A queuing-based system model is proposed by Agarwal _et al._[13] for optimizing the VNFs placement and allocation in physical hosts taking into account the VNFs sharing. Authors in [13] utilized the concept of queuing theory and considered random procedures with a sequence of random and generic VNFs. In our paper, we consider 5G VNFs and multiple standard 5G procedures. Other models were presented in [14, 15] to optimize the utilization of the underlying physical infrastructure considering different slicing requirements. However, both of them assumed that VNFs cannot be shared among slice or service requests.
Finally, Sattar _et al._ proposed an optimal slice allocation model in 5G core networks [16] and extended it to propose a security-aware optimization model to protect the 5G core network slices against Distributed DoS (DDoS) attacks in [17]. The model tried to isolate the network slices at the hardware level. The authors considered both inter-slice and intra-slice isolation and evaluated the performance of their proposed solution on a testbed which involved both simulation and experimental parts. Their results confirmed the benefits of utilizing a security-aware network slice optimization model to mitigate the impact of DDoS attacks. Our work focuses on sharing and isolation of 5G VNFs whereas the work presented in [17] only considers sharing of physical resources and not the VNFs. Furthermore, our work considers the standard VNFs of the 5G core along with several procedures used in the 5G network.
To sum up, it can be observed that the capacity limit of the VNFs is the constraint we have in common with most of the literature studied, as it is standard in this area. To the best of our knowledge, this is the first study incorporating security aspects into the optimization model. Beyond that, our work also proposes a systematic way to decide whether or not to share a particular VNF installed
in a specific physical node. Additionally, our work considers procedure-level rather than slice requests or traffic flows, and some standard VNFs and procedures of 5G rather than generic or symbolic ones.
## III System Model and Problem Formulation
In this section, the proposed model is given. The VNF sharing problem considered in this study is formalized and solved as an MINLP. The proposed model aims to optimize computational processing costs and the latency of slices' procedures.
### _Model Description and Notations_
The modeling of the virtual and physical networks is defined in this subsection. In this model, each slice request \(s\in\mathcal{S}\) is composed of a set of procedures, \(\mathcal{P}_{s}\). The virtual network is modeled in this work by a set of directed graphs. Each graph \((\mathcal{V}_{p}^{s},\mathcal{R}_{p}^{s})\) corresponds to a particular procedure \(p\in\mathcal{P}_{s}\) that belongs to a specific slice \(s\in\mathcal{S}\), where \(\mathcal{V}_{p}^{s}\) is the set of VNFs serving the procedure \(p\) and \(\mathcal{R}_{p}^{s}\) denotes the set of the virtual links used by that procedure. Each procedure \(p\in\mathcal{P}_{s}\) requires a specific data rate, \(\lambda_{p}^{s}\), and a maximum tolerated delay, \(\delta_{p}^{s,max}\). Each VNF \(v\in\mathcal{V}\) is represented by a tuple \(\langle v_{i},\ I_{v},\ \delta_{v_{i}}^{n},\ \zeta_{v}^{max},\ \mu_{v},\ \omega_{v}\rangle\) where \(v_{i}\) is the \(i^{th}\) deployed instance of VNF type \(v\), \(I_{v}\) denotes the set of all instances of VNF type \(v\) deployed across all physical nodes, \(\delta_{v_{i}}^{n}\) is the processing delay for the \(i^{th}\) instance of type \(v\) deployed in node \(n\), \(\zeta_{v}^{max}\) is the maximum accepted processing capacity to which an instance of VNF type \(v\) can be extended, \(\mu_{v}\) is the per unit processing capacity required by the VNF of type \(v\), and \(\omega_{v}\) denotes the number of processed data units per unit processing time by the VNF of type \(v\). Finally, the physical infrastructure network is modeled as a directed graph \(\mathcal{G}=(\mathcal{N},\mathcal{L})\), where \(\mathcal{N}\) is the set of physical nodes and \(\mathcal{L}\) denotes the physical links between these nodes. Each physical node \(n\in\mathcal{N}\) has a finite processing capacity, \(C_{n}^{max}\). A physical link \((n,m)\), between node \(n\) and node \(m\), entails a deterministic delay \(d(n,m)\) proportional to its length and also a maximum bandwidth capacity \(b(n,m)\). Table I summarizes the notations and variable definitions used throughout this paper.
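To make the notation above concrete, the following Python sketch shows one possible in-memory representation of the virtual and physical networks; the class and field names are illustrative assumptions, not part of the model itself.

```python
from dataclasses import dataclass, field

@dataclass
class VNFType:
    name: str              # VNF type v (e.g., "AMF", "SMF")
    zeta_max: float        # maximum processing capacity of one instance
    mu: float              # processing capacity needed per traffic unit
    omega: float           # traffic units processed per unit processing time
    instances: list = field(default_factory=list)   # I_v: deployed instances

@dataclass
class Procedure:
    slice_id: str          # slice s the procedure belongs to
    vnf_sequence: list     # ordered VNF types traversed (the graph (V_p^s, R_p^s))
    data_rate: float       # lambda_p^s
    max_delay: float       # delta_p^{s,max}
    external: bool         # initiated by the UE/RAN (untrusted side)?

@dataclass
class PhysicalNode:
    name: str
    capacity_max: float    # C_n^max

# Physical links (n, m) carry a propagation delay d(n, m) and a bandwidth b(n, m).
links = {("n1", "n2"): {"delay_ms": 5.0, "bandwidth": 100.0}}
```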
### _Model Assumptions_
A few assumptions are considered in this work, as outlined in this subsection. The standard 5G VNFs considered in the model, such as the Access and Mobility Management Function (AMF), Session Management Function (SMF), and Network Repository Function (NRF), could be VM-based or container-based VNFs. The number of VNF types per slice does not need to be the same across slices. Additionally, multiple instances of a VNF type can be initiated if required, as assumed in [14]. VNFs are required to dynamically support scale-in and scale-out with minimal impact on the service quality offered [18]. Physical nodes are geographically distributed and each of them can deploy any VNF type. It is assumed that all traffic units need the same computational capability for processing. Although in this work we focus on CPU or computational capacity, storage and memory could be accommodated as well.
In this model, the delay introduced by load balancing in the case of a multi-core VNF is assumed to be negligible. The load balancer is needed when a VNF requires processing capabilities that cannot be fulfilled by a single core. In this case, multiple cores are needed to satisfy the processing requirement of that VNF. The load balancer will be used to balance the traffic between the cores and may lead to some performance penalties. Additionally, the context switching delay caused by sharing the CPU's cores between multiple VNFs is not taken into account. This delay comes in the form of cache sharing and saving/loading the context of different VNFs. It increases linearly with the number of procedures using those VNFs.
### _The Objective Function_
The first part of the objective function of this model is to minimize the total processing capacity needed to serve all slices. This part is satisfied by sharing as many noncritical VNFs as possible while considering the security constraints imposed to mitigate the risks that arise from such sharing. The second part is to minimize the delay of all procedures. These two parts are formulated in Eq.(1). It is worth mentioning here that we use optimization goals that are common in the literature, namely minimizing delay and resource consumption. Although we focus on minimizing the processing capacity and procedure delay, our model can be extended to consider additional key performance indicators seamlessly.
\[\min\ \sum_{v\in\mathcal{V}}\sum_{i\in I_{v}}\sum_{n\in\mathcal{N}}\zeta_{v_{i}}^{n}+\sum_{s\in\mathcal{S}}\sum_{p\in\mathcal{P}_{s}}\delta_{p}^{s} \tag{1}\]
where the first term is the total computational capacity required by all VNF instances and the second term is the total delay of all procedures of all slices.
**Subject to** constraints (6) to (19).
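The paper's implementation is written in JuMP/Julia and solved with SCIP (Section IV); purely as an illustration, the sketch below shows how the decision variables and the objective of Eq. (1) could be declared in Python with Pyomo. The index sets `VNF_INSTANCES`, `NODES`, and `PROCS` are placeholders, and constraints (6)-(19) are omitted.

```python
# Schematic sketch of Eq. (1); this is not the authors' JuMP implementation.
import pyomo.environ as pyo

def build_model(VNF_INSTANCES, NODES, PROCS):
    m = pyo.ConcreteModel()
    # zeta[v_i, n]: processing capacity used by instance v_i on node n
    m.zeta = pyo.Var(VNF_INSTANCES, NODES, domain=pyo.NonNegativeReals)
    # delta[p]: end-to-end delay of procedure p (over all slices)
    m.delta = pyo.Var(PROCS, domain=pyo.NonNegativeReals)
    # Eq. (1): minimize total capacity plus total procedure delay
    m.obj = pyo.Objective(
        expr=sum(m.zeta[v, n] for v in VNF_INSTANCES for n in NODES)
             + sum(m.delta[p] for p in PROCS),
        sense=pyo.minimize)
    # Constraints (6)-(19) would be added here as pyo.Constraint rules, and the
    # resulting MINLP could then be handed to a solver such as SCIP.
    return m
```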
#### Iii-C1 Computational Capacity and Procedure Delay
In the following, we show how the computational capacity needed for a particular VNF and a procedure delay are calculated.
* _VNF computational capacity_: Generally, the more services/procedures a VNF provides/hosts, the more physical resources are required. The processing capacity \(\zeta_{v_{i}}^{n}\) that is needed by a particular VNF comes in two forms; an operational or base processing capacity \(\zeta_{v_{i}}^{n,B}\) and a traffic processing capacity \(\zeta_{v_{i}}^{n,T}\), as shown in Eq. (2). Based on the number of procedures that a particular instance of VNF \(v\) serves, we can calculate its \(\zeta_{v_{i}}^{n,T}\). If the VNF type \(v\) requires \(\mu_{v}\) processing capability to process one unit of traffic, then \(\zeta_{v_{i}}^{n,T}\) is calculated as in Eq. (3). \[\zeta_{v_{i}}^{n}=\zeta_{v_{i}}^{n,B}\cdot\beta_{v_{i}}^{n}+\zeta_{v_{i}}^{n,T}\ \ \ \ \forall v\in\mathcal{V},\ \ i\in I_{v},\ n\in\mathcal{N}\] (2) where \(\zeta_{v_{i}}^{n,B}\) is the base capacity of VNF type \(v\) and \(\beta_{v_{i}}^{n}\) indicates whether instance \(v_{i}\) is deployed in node \(n\). \[\zeta_{v_{i}}^{n,T}=\mu_{v}\sum_{s\in\mathcal{S}}\sum_{p\in\mathcal{P}_{s}}\lambda_{p}^{s}\ \gamma_{v_{i},p}^{n,s}\ \ \ \ \forall v\in\mathcal{V},\ \ i\in I_{v},\ n\in\mathcal{N}\] (3) where \(\gamma_{v_{i},p}^{n,s}\) indicates whether procedure \(p\) of slice \(s\) is served by instance \(v_{i}\) deployed in node \(n\).
* The total capacity of a particular VNF instance, needed to process all procedures mapped to it, cannot exceed the absolute maximum capacity assigned to that VNF. This constraint has been considered in other papers in the literature such as [9] and [6]. \[\zeta_{v_{i}}^{n}\leq\zeta_{v}^{max},\ \ \ \ \ \forall n\in\mathcal{N},\ \ v\in \mathcal{V},\ \ i\in I_{v}\] (11) \[\boxed{\begin{array}{l}\text{The maximum computational capacity}\\ \text{assigned to the VNF $v$}\end{array}}\]
* Constraint (12) ensures that the total capacity used by all VNFs deployed in a physical node \(n\) does not exceed the maximum processing capacity of that node. \[\sum_{v\in\mathcal{V}}\sum_{i\in I_{v}}\zeta_{v_{i}}^{n}\cdot\beta_{v_{i}}^{ n}\ \ \leq C_{n}^{max},\ \ \ \ \ \forall n\in\mathcal{N}\] (12)
* Constraint (13) ensures that a physical link \((n,m)\) is used by a particular procedure, to map virtual link \((v_{i},z_{j})\), iff the two VNFs \(v_{i}\) and \(z_{j}\) are mapped to nodes \(n\) and \(m\), respectively. This constraint is a non-linear constraint. \[\begin{array}{l}\chi_{(v_{i},z_{j}),p}^{(n,m),s}\leq\eta_{v_{i},p}^{n,s} \cdot\ \eta_{z_{j},p}^{m,s}\\ \forall(n,m)\in\mathcal{L},\ (v_{i},z_{j})\in\mathcal{R}_{p}^{s},\ p\in\mathcal{P}_{s},\ s\in\mathcal{S} \end{array}\] (13)
* Constraint (14) ensures that the total bandwidth required by all procedures that move between VNFs through a particular link, \((n,m)\), is limited by the finite bandwidth capacity of that link, \(b(n,m)\) \[\begin{array}{l}\sum_{s\in\mathcal{S}}\sum_{p\in\mathcal{P}_{s}}\sum_{(v_{ i},z_{j})\in\mathcal{R}_{p}^{s}}\lambda_{p}^{s}\ \ \chi_{(v_{i},z_{j}),p}^{(n,m),s}\ \leq\ b(n,m)\\ \forall(n,m)\in\mathcal{L}\end{array}\] (14)
* Constraint (15) certifies that the latency introduced by nodes' processing and network propagation cannot exceed the maximum tolerated latency of a particular procedure. \[\delta_{p}^{s}\leq\delta_{p}^{s,max}\ \ \ \ \ \ \forall s\in\mathcal{S},\ p\in \mathcal{P}_{s}\] (15)
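As a concrete illustration of how Eqs. (2)-(3) and the assignment constraints (11), (12), and (15) interact, the plain-Python sketch below evaluates the capacities implied by a candidate mapping and flags violations. It is only a feasibility check, not the MINLP itself; the data structures and names are illustrative assumptions.

```python
def instance_capacity(base, deployed, mu, traffic_per_proc, served):
    """Eqs. (2)-(3): total capacity of one VNF instance on one node.

    base             : base capacity zeta^{n,B} of the VNF type
    deployed         : beta in {0, 1}, is the instance deployed on this node?
    mu               : capacity needed per traffic unit for this VNF type
    traffic_per_proc : {procedure: data rate lambda_p^s}
    served           : procedures mapped to this instance
    """
    zeta_traffic = mu * sum(traffic_per_proc[p] for p in served)
    return base * deployed + zeta_traffic

def check_assignment(nodes, instances, procedures):
    """Flag violations of constraints (11), (12), and (15) for a candidate mapping."""
    violations = []
    for n, cap_max in nodes.items():
        used = sum(inst["zeta"] for inst in instances if inst["node"] == n)
        if used > cap_max:                       # constraint (12): node capacity
            violations.append(f"node {n} over capacity: {used} > {cap_max}")
    for inst in instances:
        if inst["zeta"] > inst["zeta_max"]:      # constraint (11): per-instance cap
            violations.append(f"instance {inst['name']} exceeds its maximum capacity")
    for p, (delay, delay_max) in procedures.items():
        if delay > delay_max:                    # constraint (15): delay budget
            violations.append(f"procedure {p} violates its delay budget")
    return violations
```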
#### Iii-B3 Security constraints
Two security constraints are formulated in the model; VNF's maximum traffic and VNF exposure constraints. These constraints are explained as follows:
* _The VNF's maximum traffic:_ This constraint ensures that the traffic processing capacity \(\zeta_{v_{i}}^{n,T}\) of a VNF instance \(v_{i}\) should not exceed the predefined maximum traffic processing capacity \(\zeta_{v}^{T,max}\). Using this constraint, the \(\zeta_{v}^{T,max}\) for a critical VNF can be set to a lower value, and hence it will not be shared, which protects the critical VNF. This is represented in constraint (16). \[\zeta_{v_{i}}^{n,T}\ \leq\zeta_{v}^{T,max},\ \ \ \ \ \forall v\in\mathcal{V},\ \ i\in I_{v},\ n\in\mathcal{N}\] (16)
* _The VNF exposure:_ A VNF that is exposed to the outside network cannot be assigned to more than one slice. A VNF is exposed to the outside network if it is the first VNF in the VNF chain serving a procedure that is initiated by the UE or the Radio Access Network (RAN). First, let \(\Omega_{v_{i}}^{n,s}\) denote whether VNF instance \(v_{i}\) deployed in physical node \(n\) is exposed to the outside by slice \(s\). The \(\Omega_{v_{i}}^{n,s}\) is calculated by Equations (17) and (18) \[\begin{array}{l}\sum_{p\in\mathcal{P}_{s}}\eta_{p,v}^{s}\ \psi_{p}^{s}\ \gamma_{v_{i},p}^{n,s}\leq\mathcal{C}\ \Omega_{v_{i}}^{n,s}\\ \forall s\in\mathcal{S},v\in\mathcal{V},\ i\in I_{v},\ n\in\mathcal{N}\end{array}\] (17) where \(\eta_{p,v}^{s}\) indicates whether VNF \(v\) is the first VNF in the VNF sequence of procedure \(p\), \(\psi_{p}^{s}\) indicates whether procedure \(p\) is sourced externally, and \(\mathcal{C}\) is a parameter greater than the maximum number of procedures mapped into \(v_{i}\) and sourced externally. \[\begin{array}{l}\Omega_{v_{i}}^{n,s}-\sum_{p\in\mathcal{P}_{s}}\eta_{p,v}^ {s}\ \psi_{p}^{s}\ \gamma_{v_{i},p}^{n,s}\leq 0\\ \forall s\in\mathcal{S},v\in\mathcal{V},\ i\in I_{v},\ n\in\mathcal{N} \end{array}\] (18) Then constraint (19) ensures that an externally exposed VNF instance \(v_{i}\) must not be assigned to more than one slice. \[\sum_{s\in\mathcal{S}}\Omega_{v_{i}}^{n,s}\leq 1\ \ \ \ \ \forall v\in\mathcal{V},\ i\in I_{v},\ n\in\mathcal{N}\] (19)
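The two security constraints can also be expressed procedurally. The sketch below (plain Python, with hypothetical data structures) marks an instance as exposed when it is the first VNF of an externally sourced procedure mapped to it, then rejects mappings that either exceed the per-instance traffic cap of Eq. (16) or share an exposed instance across slices, Eq. (19).

```python
def violates_security(instances, zeta_T_max):
    """Check the maximum-traffic constraint (16) and the exposure constraints
    (17)-(19) for a candidate mapping.

    instances: list of dicts, one per deployed VNF instance, e.g.
      {"name": "AMF-1", "traffic_capacity": 3.0,
       "procedures": [{"slice": "s1", "external": True, "first_vnf": True}]}
    """
    problems = []
    for inst in instances:
        # Constraint (16): traffic processing capacity must stay below the cap.
        if inst["traffic_capacity"] > zeta_T_max:
            problems.append(f"{inst['name']}: traffic capacity above the maximum")
        # Omega-like flag: exposed if it is the first VNF of an external procedure.
        exposing_slices = {p["slice"] for p in inst["procedures"]
                           if p["external"] and p["first_vnf"]}
        # Constraint (19): an exposed instance must serve a single slice only.
        serving_slices = {p["slice"] for p in inst["procedures"]}
        if exposing_slices and len(serving_slices) > 1:
            problems.append(f"{inst['name']}: exposed VNF shared across slices")
    return problems
```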
## IV System Setup
This section provides details about the system setup and values of the parameter used to test the model.
**The used solver:** The proposed model is implemented using the JuMP modeling language [19] which is embedded in Julia [20]. As our model contains a combination of linear as well as non-linear constraints, the Solving Constraint Integer Programs (SCIP) solver [21, 22] is employed to solve the modeled problem. The SCIP is currently one of the fastest non-commercial solvers available to solve problems of Mixed Integer Programming (MIP) and MINLP classes [21]. The experiments are performed on a Linux machine that has an Intel processor with \(32\) cores and \(32\) GB of RAM.
**The environment set-up:** A total of three simulated physical nodes with a maximum of \(30\) capacity units each are considered in these experiments. Each VNF requires one capacity unit of the physical node to be deployed (or activated) and one more capacity unit to serve one procedure in each traverse. For instance, if a procedure needs to use a VNF more than once, then the VNF will require the same capacity as the number of times the VNF is visited by that procedure. Multiple VNFs can be deployed in a physical node, and the total used computational capacity of all the VNFs deployed in that physical node cannot exceed its maximum capacity units (i.e. \(30\) units). Physical nodes are connected in a mesh topology. The links' propagation delay between physical nodes is set to \(5ms\) for all links. The processing time each VNF takes to process each request of a procedure is randomly assigned between \(0.5ms\) and \(1ms\) based on a uniform distribution. The parameters used in the model along with their corresponding values are summarized in Table II.
**The simulation time limit:** The SCIP solver is used with two parameters, the maximum number of threads used by the solver and the time limit to solve the model. In these experiments, the maximum number of threads is set to \(6\) threads in order to run multiple experiments at the same time. However, we noticed that SCIP only used a single thread at any given time while switching between these threads during the run (i.e. SCIP did not use all \(6\) threads concurrently). The time limit, on the other hand, is set in order to obtain a sub-optimal solution from the model in a timely manner. Limiting the time is also considered by previous studies [7] to avoid the long time that the model could take to solve a problem with a high number of input parameters. Figure 1 compares the objective value obtained as a function of multiple values of the time limit. When the time limit is set to \(30\) minutes, the model provided the highest objective value. As the time limit increases, the objective value starts to saturate. So in light of these results, the time limit is set to \(3\) hours for all subsequent experiments and this same value is already used in the literature [7].
**The implemented scenario:** A simple network scenario is implemented in order to obtain and analyze the results from the proposed optimization model. A total of two slices are considered and each slice consists of three procedures. Slice one requires registration with AMF re-allocation, handover, and authentication procedures. However, slice two requires general registration, handover, and authentication procedures. These procedures are described in the next section. The number of procedures sourced externally and the maximum VNF traffic are varied across the conducted experiments. Table III summarizes the network configuration of the studied scenario.
## V The Implemented 5G Procedures
Although our model can support all existing 5G procedures defined by 3GPP in [23], only four procedures are implemented in this work to test the viability and correctness of the model. These procedures are the general registration procedure, the registration with AMF re-allocation procedure, the handover procedure, and the authentication procedure. In fact, implementing more procedures would increase the time needed to get results out of the model. In this section, those implemented procedures are described briefly and the sequence of their serving VNFs is provided. More details on these procedures can be found in 3GPP technical specification 23.502 [23].
**General Registration Procedure:** This procedure enables the UE to register with the 5G network to receive services. The UE can perform this procedure in different scenarios like the initial registration to join the network, the emergency registration to use the emergency services, etc. The sequence of VNFs used by this procedure is as follows:
\(UE\)\(\rightarrow\)\(RAN\)\(\rightarrow\)\(New\)\(AMF\)\(\rightarrow\)\(Old\)\(AMF\)\(\rightarrow\)\(New\)\(AMF\)\(\rightarrow\)\(\mathit{AUSF}\)\(\rightarrow\)\(\mathit{UDM}\)\(\rightarrow\)\(New\)\(AMF\)\(\rightarrow\)\(\mathit{UDM}\)\(\rightarrow\)\(New\)\(AMF\)\(\rightarrow\)\(\mathit{SDM}\)\(\rightarrow\)\(New\)\(AMF\)\(\rightarrow\)\(\mathit{SDM}\)\(\rightarrow\)\(New\)\(AMF\)\(\rightarrow\)\(\mathit{PCF}\)\(\rightarrow\)\(New\)\(AMF\)\(\rightarrow\)\(\mathit{SMF}\)\(\rightarrow\)\(New\)\(AMF\)\(\rightarrow\)\(\mathit{UE}\)\(\rightarrow\)\(New\)\(AMF\).
Since the UE and the RAN are not actual VNFs but do appear in the sequence, we remove the UE and the RAN from the beginning of the procedures' sequence while implementing the procedures in our model. More details on this limitation are explained in section VII.
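A minimal sketch of how a procedure's VNF chain can be stored and how the UE and RAN are stripped from its head before the procedure is fed to the model (the chain shown here is abbreviated and only illustrative):

```python
# Ordered VNF chain of the general registration procedure (abbreviated);
# the UE and the RAN are removed from the head before the model sees it.
general_registration = ["UE", "RAN", "New AMF", "Old AMF", "New AMF", "AUSF", "UDM"]

def strip_access_side(sequence, access_entities=("UE", "RAN")):
    """Drop leading UE/RAN entries so that a core VNF is the first element."""
    i = 0
    while i < len(sequence) and sequence[i] in access_entities:
        i += 1
    return sequence[i:]

print(strip_access_side(general_registration))  # ['New AMF', 'Old AMF', ...]
```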
**Registration with AMF Re-Allocation Procedure:** In this procedure, the initial AMF redirects the registration-related traffic to the target AMF. For instance, this can happen when the initial AMF cannot serve the UE, so a change in the AMF is required in this case. One important thing to mention here is that multiple types of AMFs can be seen in the sequence of VNFs, for instance, initial AMF, target AMF, etc. In our model, we consider
Fig. 1: Objective value as a function of different time limits
these variants of AMF as different VNFs to ensure that these VNFs are deployed separately from each other. The sequence of VNFs used by this procedure starts at the RAN and traverses the initial AMF, the target AMF, and the other core VNFs involved in registration; the complete message sequence is specified in [23].
attributed to the fact that the model already activated separate VNFs for different procedures, hence, no new VNFs need to be initiated. Another reason could be that the first VNF of the external procedure is not used by another procedure which makes no change to the result.
### _Impact of the Maximum VNF Traffic Constraint_
To evaluate the impact of the maximum VNF traffic constraint, the exposure security constraint is disabled and only one procedure is assumed to be sourced externally. Figure 3(a) shows the benefit of using this constraint. In this set of experiments, the maximum allowed VNF traffic ranges from \(1\) to \(5\) in steps of one. Also, each VNF is set to require one capacity unit to serve one procedure. The maximum VNF traffic simply means the number of procedures that the VNF can serve. Since one procedure is assumed to be external, we consider a procedure exposed to the threat if it shares any VNF with the external procedure. Based on this assumption, without the maximum traffic constraint, the number of procedures exposed to external threats is constant at \(3\) as shown in Fig. 3(a). With the VNF maximum traffic constraint enabled, the number of exposed procedures is zero initially. This is attributed to the fact that each procedure is mapped to its own VNF instances and there is no sharing. However, as the maximum limit of VNF traffic increases, the number of exposed procedures also increases until it becomes similar to the results without the maximum traffic constraint, as shown in Fig. 3(a).
Figure 3(b) shows the cost of implementing this security constraint. The figure shows that when the maximum VNF traffic increases, the number of activated VNF instances stays constant at \(15\). However, with the maximum traffic constraint enabled, the number of initiated VNF instances is \(27\) when the maximum VNF traffic is set to \(1\). This in turn requires more capacity and resources by the network operator. The total activated VNFs is reduced when the maximum VNF traffic limit is increased until it merges with the result of the other scenario (i.e. without the maximum traffic security constraint).
### _Physical Node Capacity_
This subsection shows the amount of physical node capacity required to activate the VNFs that meet the slices requirements. For this set of experiments, the maximum VNF traffic limit is set to \(2\), and only the registration with AMF reallocation procedure is set as an external procedure. Figure 4 shows the proportional computational capacity used for each physical node. In this experiment, the maximum VNF traffic constraint is enabled and the results are obtained with and without the exposure constraint. As shown in the figure, physical nodes \(1\) and \(2\) consume the same amount of capacity either with or without the security constraint. However, the capacity of physical node \(3\) consumed is \(100\%\) with the exposure constraint and about \(97\%\) without the security constraint. The major takeaway from this figure is that the extra overhead of the security constraints is not huge if the network operators select moderate security requirements. To scrutinize this further, the total computational capacity of the physical nodes used by each VNF is reported in Fig. 5. It can be seen from the figure that the top three VNFs that use most of the capacity are the initial AMF, new AMF, and SMF. The initial AMF and new AMF are only deployed in physical nodes \(3\) and \(2\), respectively. The SMF is mainly initialized in physical node \(3\) but another instance of the SMF is also deployed in physical node \(2\). This observation also gives an indication of the VNFs which are mostly used by 5G procedures and hence making them critical to be protected from threats.
Fig. 3: Impact of maximum VNF traffic constraint
Fig. 2: Impact of exposure constraint
### _VNF Instance Capacity_
In this subsection, we show the capacity used by VNF instances and their utilization. Figure 6 shows the proportional computational capacity used (excluding the base capacity) by each VNF instance out of its predefined maximum capacity. Here we only present the results of VNFs with more than one instance activated. Both security constraints are enabled in this experiment. It can be seen from the figure that three instances of the AMF are initiated. The AMF consumes the highest total capacity of the initialized instances among all VNFs. The SMF, UDM, and Authentication Server Function (AUSF) come next with two instances activated for each.
Figure 7 shows the utilization of VNFs with and without the security constraints. The utilization is computed by taking the average of the proportional capacity used across all instances of a particular VNF type. As shown in Fig. 7 the utilization of the new AMF, for example, is at \(100\%\). One important point to mention here is that the utilization of the SMF and initial AMF when the security constraints are enabled is lower than when they are disabled. The reason behind this is that the limit on the maximum VNF traffic and the exposure constraint results in more VNF instances activated, reducing the overall VNF utilization. However, this is a trade-off to make between protecting the network against threats and achieving higher utilization.
### _Procedure Delay_
Lastly, we calculate the time it takes for each procedure to be completed. A delay average is reported if the same procedure is used across slices. The experiments are performed with both security constraints enabled and when they are disabled. As Fig. 8 shows, the delay to complete the authentication and handover procedures is around \(7ms\) and it is almost the same for both scenarios (i.e. with and without the security constraints). The registration with the AMF re-allocation procedure takes \(5ms\) more when the security constraints are enabled. However, the major difference in the delay is in the
Fig. 4: Physical node capacity used
Fig. 5: Physical node capacity used by each VNF type
Fig. 6: Proportional computational capacity used by each VNF instance excluding the base capacity
Fig. 7: Utilization of VNFs
registration procedure. The delay to complete the procedure in the experiment without the security constraints is \(30ms\) more than the delay in the experiment when the security constraints are enabled. This difference in the delay is because, for the results without the security constraints, some VNFs were arbitrarily deployed in different physical nodes resulting in extra propagation delay that contributes to the total delay. Another reason could be the limited run time for the model, which only provides a sub-optimal solution once the time limit is reached. Additionally, there is a trade-off between sharing VNFs and the delay as a lower number of VNFs does not always guarantee less delay due to multiple factors that can influence the delay.
## VII Limitations
This section attempts to identify some limitations of the proposed model in this work and the way that it is planned to deal with them. These limitations are summarized in the following points:
* **UE and RAN assignments:** The first limitation of the current implementation is considering the UE and the RAN as VNFs. Since UE and RAN are also part of 5G procedures explained in section V, the sequence of network entities that are involved during the procedure also includes the UE and the RAN. Currently, we do not distinguish between UE or RAN and other 5G VNFs in the implementation. In order to get around this issue, we remove the UE and the RAN from the beginning of the procedures' sequence of VNFs. This is done to ensure that the exposure constraint does not consider the UE or the RAN as the first VNF of procedures set to be sourced externally. Therefore, removing the RAN or UE from the beginning of the sequence will guarantee that a 5G VNF will be the first VNF of this procedure. Another technique we employed is to assign zero base and processing capacities to the UE and the RAN. As a result, the model will map the UE and RAN to the physical nodes (similar to other VNFs) but their capacities will not be impacting the total capacity of the physical node. This is done to ensure the UE and the RAN contribute to the procedure delay without consuming the computational capacities of physical nodes.
* **Model time limit:** Another limitation of the current model is that we limit the run time of the model to \(3\) hours. This is done to obtain the results from the model in a timely manner. Once the time limit is reached, the model will provide the best solution obtained so far in terms of the objective function. Since there is a limit on the model run time, the results presented in this study might not be optimal.
## VIII Conclusion
In this work, we propose an optimization-based security-aware VNF sharing model for 5G systems. The goal of the proposed model is not only to enable the efficient mapping of the VNFs to maximize their utilization but also to isolate slices by not sharing their critical VNFs to enhance security. For this, we introduce a systematic way to decide whether to share a particular VNF or not. To do so, two security constraints were defined in the proposed model; VNF's maximum traffic and VNF exposure constraints. The overall goal of the objective function is to minimize the computational capacity required and the total procedure delay. The numerical results of the model are obtained using the four standard 5G procedures with actual VNFs. The results show the advantage of using the security constraints in terms of securing the network slices, procedures, and VNFs by limiting the sharing of critical VNFs. The use of security constraints introduces additional costs to the network operators in the form of more capacity used. However, the use of security constraints will ensure the protection of critical network infrastructure from external threats such as, for example, DDoS attacks.
## IX Acknowledgement
This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) and TELUS Communications through the Collaborative Research and Development (CRD).
|
2305.08649 | Implications of Narrow Spectra of Fast Radio Bursts | Fast radio bursts (FRBs) are millisecond-duration radio transients with
extremely high brightness temperatures at cosmological distances, and the
physical origin and the radiation mechanism of FRBs are still unknown. The
observed spectral bandwidth of some FRBs appeared narrow compared with their
peak frequencies, which could be used to constrain the radiation mechanism and
the astrophysical environment of FRBs. In this work, we investigate some
possible physical origins of the narrow spectra from the perspectives of
intrinsic radiation mechanisms, coherent processes, radiative transfers, and
interference processes. We find that: (1) If the observed narrow spectra of
FRBs are attributed to the intrinsic radiation mechanism by a single charged
particle, the particle's deflection angle should be much smaller than the
radiation beaming angle. (2) Coherent process can make cause narrow spectra.
For the bunching mechanism, the narrow spectra might arise from the radiating
bunches with a quasi-periodic distribution. For the maser mechanism, the
negative absorption process can naturally cause a narrow spectrum. (3) Most
absorption and scattering processes do not significantly change the observed
spectra based on the current observation of some FRB repeaters. (4)
Scintillation and plasma lensing in the FRB source environment can modulate the
spectra, leading to narrow spectra and the burst-to-burst variation of spectra.
A planet-like object can generate spectral modulation via gravitational lensing
at the GHz band, but the observed burst-to-burst variation of the spectra does
not support this scenario. | Yuan-Pei Yang | 2023-05-15T13:48:53Z | http://arxiv.org/abs/2305.08649v2 | # Implications of Narrow Spectra of Fast Radio Bursts
###### Abstract
Fast radio bursts (FRBs) are millisecond-duration radio transients with extremely high brightness temperatures at cosmological distances, and the physical origin and the radiation mechanism of FRBs are still unknown. The observed spectral bandwidth of some FRBs appeared narrow compared with their peak frequencies, which could be used to constrain the radiation mechanism and the astrophysical environment of FRBs. In this work, we investigate some possible physical origins of the narrow spectra from the perspectives of intrinsic radiation mechanisms, coherent processes, radiative transfers, and interference processes. We find that: (1) If the observed narrow spectra of FRBs are attributed to the intrinsic radiation mechanism by a single charged particle, the particle's deflection angle should be much smaller than the radiation beaming angle. (2) Coherent process can make cause narrow spectra. For the bunching mechanism, the narrow spectra might arise from the radiating bunches with a quasi-periodic distribution. For the maser mechanism, the negative absorption process can naturally cause a narrow spectrum. (3) Most absorption and scattering processes do not significantly change the observed spectra based on the current observation of some FRB repeaters. (4) Scintillation and plasma lensing in the FRB source environment can modulate the spectra, leading to narrow spectra and the burst-to-burst variation of spectra. A planet-like object can generate spectral modulation via gravitational lensing at the GHz band, but the observed burst-to-burst variation of the spectra does not support this scenario.
Compact radiation sources (289); Radio transient sources (2008); Radio bursts (1339); Radiative processes (2055); Neutron stars (1108) 0000-0002-4818-2888]Yuan-Pei Yang
## 1 Introduction
Fast radio bursts (FRBs) are millisecond-duration radio bursts with extremely high brightness temperatures of \(T_{B}\sim 10^{35}\) K, which suggests that their radiation mechanisms must be coherent. Some coherent emission mechanisms have been invoked to interpret the emissions of FRBs, including coherent radiation by charged bunches (Katz, 2014, 2018; Kumar et al., 2017; Yang and Zhang, 2018, 2023; Lu et al., 2020; Cooper and Wijers, 2021; Wang et al., 2022; Kumar et al., 2022; Qu et al., 2023), maser by hydrodynamic instabilities or kinetic instabilities (Lyubarsky, 2021; Beloborodov, 2017; Waxman, 2017; Metzger et al., 2019), coherent plasma radiation (Yang and Zhang, 2021; Mahlmann et al., 2022), etc. However, there is no smoking gun to identify the radiation mechanism of FRBs so far. In addition to the radiation mechanism, the physical origin of FRBs also remains an unsolved puzzle due to the diversity of FRBs (see the recent review of Zhang, 2022). FRB 200428 was detected to be associated with a Galactic magnetar SGR J1935+2154 (Bochenek et al., 2020; CHIME/FRB Collaboration et al., 2020; Mereghetti et al., 2020; Li et al., 2021; Ridnaia et al., 2021; Tavani et al., 2021), implying that at least some FRBs originate from the magnetars born from the core collapses of massive stars. However, the association between FRB 20200120E and its host globular cluster with an extremely old age challenges the core-collapse magnetar formation (Bhardwaj et al., 2021; Kirsten et al., 2022), which means that it is more likely produced by an old object or a system associated with a compact binary merger (Wang et al., 2016; Zhang, 2020; Kremer et al., 2021; Lu et al., 2022).
Up to the present, hundreds of FRB sources have been detected, and dozens of them show repeating behaviors (e.g., CHIME/FRB Collaboration et al., 2021).
The increasing number of detected FRBs starts to shed light on the diversity among the phenomena, and the properties of the observed spectra provide important information about the radiation mechanism of FRBs. The first CHIME/FRB catalog identified four observed archetypes of burst morphology (Pleunis et al., 2021), including simple broadband, simple narrow band, temporally complex, and downward drifting. Meanwhile, the bursts from FRB repeaters have a larger pulse duration, narrower bandwidth, and lower brightness temperature than those of the one-off FRBs, which might be due to a beaming, propagation effect, or intrinsic populations. Law et al. (2017) made the first simultaneous detection of FRB 20121102A using multiple telescopes and found that its burst spectra could not be well modeled by a power law and more like a Gaussian shape characterized by a \(\sim 500\) MHz envelope. Zhou et al. (2022) recently reported over 600 bursts from the repeating FRB 20201124A during an active episode and found that the sub-bursts of FRB 20201124A show narrow spectra with a center frequency of 1.09 GHz and a characteristic bandwidth of \(\sim 277\)MHz. FRB 20220912A also has many bursts with narrow spectral bandwidth (Zhang et al., 2023). For the bursts with their spectra within the L band of the Five-hundred-meter Aperture Spherical radio Telescope (FAST), the relative spectral bandwidth of the radio bursts was found to be distributed near \(\Delta\nu/\nu_{0}\sim(0.1-0.2)\). Some FRBs show more extremely narrow bandwidth. One burst of FRB 20190711A has a central frequency of 1.4 GHz and a full-width-at-half-maximum (FWHM) bandwidth of just 65 MHz, and no evidence of any emission in the remaining part of the 3.3 GHz band of the Ultra-wideband Low (UWL) receiver system of the Parkes radio telescope (Kumar et al., 2021), which means that the relative spectral bandwidth is only \(\Delta\nu/\nu_{0}\sim 0.05\).
In this work, we will discuss the possible physical origins of the observed narrow spectra of FRBs from the perspectives of intrinsic radiation mechanisms, coherent processes, radiative transfers, and interference processes. The paper is organized as follows. In Section 2, we discuss the spectral bandwidth distribution and the possible physical processes affecting the FRB spectra. In Section 3, we generally analyze the radiation features of the intrinsic radiation mechanisms by a single charged particle, including the radiation mechanisms with the particle's deflection angle larger than the radiation beaming angle in Section 3.1 and the opposite scenario in Section 3.2, and the possible astrophysical scenarios are discussed in the Section 3.3. In Section 4, we discuss how the coherent processes change the radiation spectra, including the bunching mechanism in Section 4.1 and the maser mechanism in Section 4.2. The radiative transfers (including absorption and scattering processes) are discussed in Section 5, and some interference processes (scintillation, gravitational lensing, and plasma lensing) at a large-scale region are discussed in Section 6. The results are summarized and discussed in Section 7. The convention \(Q_{x}=Q/10^{x}\) is adopted in cgs units unless otherwise specified. Some detailed calculations are presented in the Appendices.
## 2 Narrow spectra of FRBs: Observation and Physical Origin
### Spectral bandwidth distribution of FRB repeaters
Some FRBs, e.g., FRB 20190711A, FRB 20201124A, and FRB 20220912A, appear extremely narrow spectra in the bandwidths of telescopes (e.g., Kumar et al., 2021; Zhou et al., 2022; Zhang et al., 2023), implying that the spectra of at least some FRBs are intrinsically narrow. In this section, we will discuss the implication of the observed narrow bandwidths of FRBs, and emphasize that the observed relative spectral bandwidth mainly depends on the intrinsic spectral shape, the bandwidth definition (e.g., full width at half maximum, full width at tenth maximum, etc.), and the telescope's bandwidth. We first consider that the intrinsic spectra of radio bursts from an FRB source have a general form described by a broken power-law distribution,
\[F_{\nu}=F_{\nu,0}\left\{\begin{aligned} &\left(\frac{\nu}{\nu_{0}}\right)^{ \alpha_{l}},&\text{for }\nu\leqslant\nu_{0},\\ &\left(\frac{\nu}{\nu_{0}}\right)^{-\alpha_{h}},& \text{for }\nu>\nu_{0},\end{aligned}\right. \tag{1}\]
where \(F_{\nu,0}\) corresponds to the maximum flux at the peak frequency \(\nu_{0}\). We should notice that \(\alpha_{l}>0\) and \(\alpha_{h}>0\) are assumed here, considering that the flux \(F_{\nu}\) vanishes at \(\nu\to 0\) or \(\infty\). In the literature, one usually defines the spectral bandwidth via the full width at a fraction of maximum (\(F_{\nu,0}/N_{\text{FW}}\)), e.g., the full width at half maximum (FWHM) with \(N_{\text{FW}}=2\) (e.g., Kumar et al., 2021; Zhang et al., 2023) or the full width at tenth maximum (FWTM) with \(N_{\text{FW}}=10\) (e.g., Pleunis et al., 2021). Thus, the relative spectral bandwidth of a radio burst is
\[\frac{\Delta\nu}{\nu_{0}}\simeq N_{\text{FW}}^{\frac{1}{\alpha_{h}}}-N_{ \text{FW}}^{-\frac{1}{\alpha_{l}}}. \tag{2}\]
We can see that the relative spectral bandwidth \(\Delta\nu/\nu_{0}\) depends on the factor of \(N_{\text{FW}}\) and the intrinsic spectral shape that is described by the above two power-law indexes. _Since such a defined (FWHM or FWTM) relative spectral bandwidth only depends on the intrinsic spectral
shape, the \(\Delta\nu/\nu_{0}\) distribution of an FRB repeater should be narrow if the intrinsic spectral shape remains unchanged._
Most radiation mechanisms involved in various astrophysical scenarios have a low-frequency spectral index of \(\alpha_{l}<3\), see Table 1. For example, the synchrotron self-absorption has a low-frequency spectral index of \(\alpha=5/2\)(Rybicki & Lightman, 1986) and the curvature radiation by a bunch-cavity (or electron-positron) pair has a low-frequency spectral index of \(\alpha=8/3\)(Yang et al., 2020; Yang & Zhang, 2023). Thus, for these radiation mechanisms with \(\alpha_{l}<3\), according to Eq.(2), the relative spectral bandwidth for FWHM with \(N_{\rm FW}=2\) should satisfy
\[\frac{\Delta\nu}{\nu_{0}}>0.2. \tag{3}\]
For example, we assume that the intrinsic spectrum is due to the curvature radiation, then one approximately has \(\alpha_{l}\simeq 2/3\) and \(\alpha_{h}\rightarrow\infty\)(Yang & Zhang, 2018, 2023). According to Eq.(2), the relative spectral bandwidth for FWHM is \(\Delta\nu/\nu_{0}\simeq 0.65\). In particular, for FRB 20190711A with an extremely narrow FWHM spectral bandwidth of \(\Delta\nu/\nu_{0}\sim 0.05\)(Kumar et al., 2021), one has \(\min(\alpha_{l},\alpha_{h})\gtrsim 14\), implying an extremely narrow intrinsic spectrum that should involve some special mechanisms (that might be attributed intrinsic radiation mechanisms, coherent processes, radiative transfers, or interference processes), see the following discussions.
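The dependence of the FWHM/FWTM bandwidth on the two spectral indexes in Eq. (2) is easy to evaluate numerically; the short sketch below reproduces the quoted numbers (a relative FWHM bandwidth of about 0.65 for a curvature-radiation-like spectrum, and the need for very steep indexes to reach \(\Delta\nu/\nu_{0}\sim 0.05\)). The index values are illustrative.

```python
import numpy as np

def relative_bandwidth(alpha_l, alpha_h, n_fw=2.0):
    """Eq. (2): full width at 1/n_fw of maximum for the broken power law of Eq. (1)."""
    return n_fw**(1.0 / alpha_h) - n_fw**(-1.0 / alpha_l)

# Curvature-radiation-like spectrum: alpha_l ~ 2/3, sharp high-frequency cutoff.
print(relative_bandwidth(2.0 / 3.0, np.inf))   # ~0.65
# A broken power law with alpha_l = alpha_h = 3 is still broad at FWHM.
print(relative_bandwidth(3.0, 3.0))            # ~0.46
# FRB 20190711A-like narrowness requires both indexes to be much steeper.
print(relative_bandwidth(30.0, 30.0))          # ~0.05
```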
In reality, the bandwidth \(\Delta\nu_{t}\) of a radio telescope is usually narrow compared with the telescope's central frequency \(\nu_{0,t}\), \(\Delta\nu_{t}\ll\nu_{0,t}\). For example, the L band of FAST is from 1 GHz to 1.5 GHz, i.e., \(\nu_{0,t}=1.25\) GHz and \(\Delta\nu_{t}=0.5\) GHz. Due to the limited telescope's bandwidth, many observed spectra of FRBs are often incomplete. If a radio burst is observable for a certain telescope, its emission must be within the telescope's bandwidth, leading to the following conditions:
\[\nu_{0}+\frac{\Delta\nu}{2}>\nu_{0,t}-\frac{\Delta\nu_{t}}{2}\quad\text{and} \quad\nu_{0}-\frac{\Delta\nu}{2}<\nu_{0,t}+\frac{\Delta\nu_{t}}{2}. \tag{4}\]
The observed central frequency \(\nu_{0,\text{obs}}\) (not the intrinsic peak frequency) of an observable FRB is in the telescope's bandwidth,
\[\nu_{0,\text{obs}}\in\left[\nu_{0,t}-\frac{\Delta\nu_{t}}{2},\nu_{0,t}+\frac{ \Delta\nu_{t}}{2}\right], \tag{5}\]
although the intrinsic peak frequency \(\nu_{0}\) might be estimated outside the telescope's bandwidth based on the observed spectral shape. The observed spectral bandwidth of an observable FRB is
\[\Delta\nu_{\text{obs}} =\min\left(\nu_{0}+\frac{\Delta\nu}{2},\nu_{0,t}+\frac{\Delta\nu_ {t}}{2}\right)\] \[-\max\left(\nu_{0}-\frac{\Delta\nu}{2},\nu_{0,t}-\frac{\Delta\nu _{t}}{2}\right). \tag{6}\]
Since the bandwidth of a radio telescope is usually narrow, the distribution of the intrinsic peak frequency \(\nu_{0}\) of an FRB repeater could be approximately assumed to be uniform near the telescope's bandwidth, i.e., the distribution function of \(\nu_{0}\) could be approximately described by
\[f(\nu_{0})\sim\text{const}. \tag{7}\]
We make a Monte Carlo simulation to generate the distribution of the observed spectral bandwidth \(\Delta\nu_{\text{obs}}\) of the radio bursts from an FRB repeater, as shown in Figure 1, and take \(\nu_{0,t}=1.25\) GHz and \(\Delta\nu_{t}=0.5\) GHz, consistent with the L band of FAST. For \(\Delta\nu/\nu_{0}=0.1\), the observed spectral bandwidth \(\Delta\nu_{\text{obs}}\) of most bursts is consistent with the intrinsic one due to \(\Delta\nu<\Delta\nu_{t}\). For \(\Delta\nu/\nu_{0}=1\), since many bursts have \(\Delta\nu>\Delta\nu_{t}\), the observed spectral bandwidth \(\Delta\nu_{\text{obs}}\) of most bursts would be constrained by the bandwidth of the telescope, leading to \(\Delta\nu_{\text{obs}}\sim\Delta\nu_{t}\). Very few bursts
\begin{table}
\begin{tabular}{c c c} \hline \hline Radiation mechanisms & Low-frequency spectral index \(\alpha_{l}\) & References \\ \hline Curvature radiation by a single charged particle & 2/3 & Jackson (1998); Yang \& Zhang (2018a) \\ \hline Curvature radiation by a bunch-cavity (or electron-positron) pair & 8/3 & Yang et al. (2020); Yang \& Zhang (2023) \\ \hline Curvature radiation by fluctuating bunches & 0 & Yang \& Zhang (2023) \\ \hline Synchrotron radiation by particles with a random pitch-angle distribution & 1/3 & Jackson (1968); Rybicki \& Lightman (1986) \\ \hline Synchrotron radiation by particles with a narrow pitch-angle distribution & 2/3 & Yang \& Zhang (2018b) \\ \hline Synchrotron self-absorption & 5/2 & Rybicki \& Lightman (1986) \\ \hline Jitter radiation & 1 & Medweley (2000); Dermer \& Menon (2009) \\ \hline Blackbody radiation & 2 & Rybicki \& Lightman (1986) \\ \hline Bremsstrahlung radiation & 0 & Rybicki \& Lightman (1986) \\ \hline Inverse Compton scattering & Depend on incident photon spectrum & Rybicki \& Lightman (1986); Zhang (2022a) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of the low-frequency spectral index \(\alpha_{l}\) for various radiation mechanisms
have \(\Delta\nu_{\rm obs}<\Delta\nu_{t}\), because only a part of one end (low-frequency end or high-frequency end) of the intrinsic spectral bandwidth \(\Delta\nu\) is within the telescope's bandwidth. Considering that many radio bursts are incomplete in the frequency domain due to the narrow telescope's bandwidth, the observed spectral bandwidth distribution of radio bursts from an FRB repeater could be used to test whether most of the bursts' spectra of an FRB source are intrinsically narrow.
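The selection effect described by Eqs. (4)-(7) can be reproduced with a few lines of code. The sketch below draws intrinsic peak frequencies uniformly around the band, keeps the bursts that satisfy Eq. (4), and applies Eq. (6); the parameter values follow the FAST L-band setup quoted above, while the sampling range for \(\nu_{0}\) is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(42)
nu0_t, dnu_t = 1.25, 0.5          # telescope band centre and width [GHz], FAST L band
lo_t, hi_t = nu0_t - dnu_t / 2, nu0_t + dnu_t / 2

def observed_bandwidths(rel_bw, n_bursts=100000):
    """Monte Carlo of Eqs. (4)-(7): observed bandwidths of detectable bursts."""
    nu0 = rng.uniform(0.5, 2.0, n_bursts)         # f(nu0) ~ const near the band
    dnu = rel_bw * nu0                            # intrinsic bandwidth
    lo_b, hi_b = nu0 - dnu / 2, nu0 + dnu / 2
    observable = (hi_b > lo_t) & (lo_b < hi_t)    # Eq. (4)
    dnu_obs = np.minimum(hi_b, hi_t) - np.maximum(lo_b, lo_t)   # Eq. (6)
    return dnu_obs[observable]

for rel_bw in (0.1, 0.5, 1.0):
    d = observed_bandwidths(rel_bw)
    print(rel_bw, round(float(np.median(d)), 3))  # narrow spectra pass mostly unclipped;
                                                  # wide spectra pile up near dnu_t
```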
### Physical origin of FRB narrow spectra
Next, we generally discuss the possible physical origin of the intrinsic narrow spectra of FRBs. We consider that a finite pulse of electromagnetic wave has the form of \(\vec{E}(t)=\vec{E}_{\parallel}(t)+\vec{E}_{\perp}(t)\), where \(\vec{E}_{\parallel}(t)\) and \(\vec{E}_{\perp}(t)\) are a pair of orthogonal components of \(\vec{E}(t)\). The properties of \(\vec{E}(t)\) vary with time and vanishes sufficiently rapidly for \(t\to\pm\infty\), and the power spectrum of \(\vec{E}(t)\) satisfies \(|E(\omega)|^{2}=|E_{\parallel}(\omega)|^{2}+|E_{\perp}(\omega)|^{2}\). Since the two orthogonal components are independent, let us treat only one of the two components, \(\vec{E}_{k}(t)\) with \(k=\parallel,\perp\). In particular, if an observed spectrum \(|E(\omega)|^{2}\) appears narrow, the main component between \(|E_{\parallel}(\omega)|^{2}\) and \(|E_{\perp}(\omega)|^{2}\) must also be narrow.
Without loss of generality, the spectrum of the main component could be roughly described by a rectangular profile with a central frequency \(\omega_{0,k}\) and a spectral bandwidth \(\Delta\omega_{k}\),
\[|E_{k}(\omega)|^{2}\propto{\rm rect}\left(\frac{\omega-\omega_{0,k}}{\Delta \omega_{k}}\right), \tag{8}\]
where \({\rm rect}(x)\) is the rectangular function that is defined as \({\rm rect}(x)=1\) for \(|x|\leqslant 1/2\) and \({\rm rect}(x)=0\) for \(|x|>1/2\). Thus, \(E_{k}(\omega)\) as the Fourier transformation of \(E_{k}(t)\) is given by
\[E_{k}(\omega)\propto{\rm rect}\left(\frac{\omega-\omega_{0,k}}{\Delta\omega_{ k}}\right)e^{i\phi_{k}}, \tag{9}\]
where \(\phi_{k}\) is a phase argument. Generally, \(\phi_{k}\) cannot be obtained directly from the power spectrum \(|E_{k}(\omega)|^{2}\) alone; however, for an appropriately selected pair of orthogonal components, one of \(E_{\parallel}(\omega)\) and \(E_{\perp}(\omega)\) might be purely real and the other purely imaginary (see the discussions in Section 3.1 and Section 3.2), leading to \(\phi_{k}(\omega)=n\pi/2\) with \(n\in\mathbb{Z}\). In this case, one may take \(\phi_{k}={\rm const}\). According to the properties of the Fourier transform, the corresponding pulse profile is
\[E_{k}(t)\propto\frac{\Delta\omega_{k}}{\sqrt{2\pi}}{\rm sinc}\frac{\Delta \omega_{k}t}{2}e^{-i(\omega_{0,k}t-\phi_{k})}, \tag{10}\]
where \({\rm sinc}(x)\equiv\sin x/x\). In Figure 2, we plot the pulse profile \(E_{k}(t)\) based on Eq.(10). The top panel shows the scenario with a narrow spectrum of \(\Delta\omega_{k}/\omega_{0,k}=0.1\) and the bottom panel shows the scenario with a wide spectrum
Figure 1: The simulated distribution of the observed spectral bandwidth of radio bursts from an FRB repeater. The purple, yellow, and blue bars correspond to the different relative spectral bandwidths with \(\Delta\nu/\nu_{0}=0.1,0.5\) and \(1\), respectively. We take \(\nu_{0,t}=1.25\) GHz and \(\Delta\nu_{t}=0.5\) GHz for the telescope’s parameters. The intrinsic peak frequencies \(\nu_{0}\) of the radio bursts are assumed to be uniformly distributed near the telescope’s bandwidths, i.e., \(f(\nu_{0})={\rm const}\).
Figure 2: The evolution of electric field component of a pulse of electromagnetic wave with a rectangular power spectrum given by Eq.(8). The top and bottom panels correspond to the scenarios with a narrow spectrum of \(\Delta\omega_{k}/\omega_{0,k}=0.1\) and with a wide spectrum of \(\Delta\omega_{k}/\omega_{0,k}=1\), respectively. The phase argument is taken as \(\phi_{k}=0\) here.
of \(\Delta\omega_{k}/\omega_{0,k}=1\). We can see that1 a narrow spectrum with \(\Delta\omega_{k}/\omega_{0,k}\ll 1\) implies that _the electromagnetic signal \(E_{k}(t)\) should be periodic and quasi-sinusoid with an oscillating frequency of \(\omega\sim\omega_{0,k}\) in a short term_ and have a typical pulse duration of \(T\sim 4\pi/\Delta\omega_{k}\) in a long term. This conclusion is natural because a narrow spectrum means that the radiation should be quasi-monochromatic, leading to a quasi-sinusoid periodic waveform of \(E_{k}(t)\). In particular, if the periodic quasi-sinusoid signal \(E_{k}(t)\) is produced by the intrinsic radiation mechanism of a single charged particle, the particle's acceleration is required to be periodic while its radiation beam points toward the observer, see the discussion in Section 3.2 and Appendix A.2.
Footnote 1: Notice that although the spectrum by Eq.(8) is assumed to be a rectangular profile here for analytical analysis, this conclusion is widely applicable to other spectral profiles. For example, if the spectrum is described by a Gaussian profile with a peak frequency \(\omega_{0,k}\) and a scatter of \(\Delta\omega_{k}\), the corresponding electromagnetic signal would also be periodic and quasi-sinusoid with a frequency of \(\omega\sim\omega_{0,k}\) in a short term and have a typical pulse duration of \(T\sim 4\pi/\Delta\omega_{k}\) in a long term, which could be easily tested via the numerical calculation of the corresponding Fourier transform.
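The footnote's suggestion is easy to check numerically. The sketch below (the grid, units and the simple FWHM/phase-slope diagnostics are our own illustrative choices) builds a rectangular spectrum as in Eq.(8) with \(\phi_{k}=0\), inverse Fourier transforms it, and verifies that the waveform oscillates at \(\sim\omega_{0,k}\) with an envelope whose duration scales as \(1/\Delta\omega_{k}\).

```python
# Numerical Fourier-transform test: a narrow spectrum yields a quasi-sinusoidal pulse.
import numpy as np

omega0 = 2 * np.pi               # central angular frequency omega_{0,k} (arbitrary units)
N = 2 ** 16                      # number of frequency samples

for ratio in (0.1, 1.0):         # relative bandwidth d_omega / omega0 (narrow vs. wide)
    d_omega = ratio * omega0
    omega = np.linspace(0.0, 8.0 * omega0, N)                         # frequency grid
    spec = np.where(np.abs(omega - omega0) <= d_omega / 2, 1.0, 0.0)  # Eq. (8), phi_k = 0
    E_t = np.fft.fftshift(np.fft.ifft(spec))                          # time-domain field
    dt = 2 * np.pi / ((omega[1] - omega[0]) * N)                      # implied time step

    env = np.abs(E_t) / np.abs(E_t).max()       # pulse envelope, ~ |sinc(d_omega t / 2)|
    main = np.where(env > 0.5)[0]               # samples inside the envelope's FWHM
    fwhm = (main[-1] - main[0]) * dt
    # instantaneous oscillation frequency from the phase slope inside the main lobe
    carrier = np.median(np.diff(np.unwrap(np.angle(E_t[main[0]:main[-1] + 1]))) / dt)

    print(f"d_omega/omega0 = {ratio}: carrier/omega0 = {carrier / omega0:.3f}, "
          f"envelope FWHM = {fwhm:.1f} (cf. 4*pi/d_omega = {4 * np.pi / d_omega:.1f})")
```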
In general, the observed spectral features of an astrophysical phenomenon mainly depend on the following physical processes from the fundamental level to the highest level, see Figure 3:
(1) The most fundamental level is the intrinsic radiation mechanism by a single charged particle that determines the initial radiation of each particle in the emission region. Since the radiating particles in one emission region usually have different energies, in most astrophysical phenomena, the information on the intrinsic radiation mechanism is directly reflected by the spectral shape at the lowest band that is contributed by the particles with the lowest energies.
(2) The second level is the total emission by all radiating particles in the emission region, which determines the emission coefficient of the emission system. There are two scenarios at this level: incoherent radiation and coherent radiation. For incoherent radiation, the radiation spectral power is just a simple summation of the spectral power of each particle. In this case, due to a wide distribution of the parameters of the multiple particles, the relative spectral bandwidth \(\Delta\nu/\nu_{0}\) should be at least as wide as that of the intrinsic radiation mechanism by a single particle. However, for coherent radiation, because of the superposition of electromagnetic waves at certain frequencies, the total spectral shape depends not only on the intrinsic radiation mechanism by a single particle but also on the specific coherent process in the emission region.
(3) The third level corresponds to the radiative transfer processes, which involve absorption2 and scattering. Due to these radiative transfer processes, the incident emission by the radiating particles would be changed by absorption or scattering in certain regions. In particular, the absorption processes (e.g., free-free absorption, plasma absorption, synchrotron self-absorption, etc.) usually make the spectra harder, leading to a significant cutoff at the low-frequency band. Frequency-independent electron scattering processes suppress the radiation power in all bands simultaneously; thus, the spectral shape remains unchanged.
Footnote 2: If the particles have an inverted population, the absorption coefficient becomes negative, in which case, rather than decreasing along the ray, the intensity actually increases, the so-called “maser”. However, since the maser mechanism is also one of the main coherent radiation mechanisms, we classify it as part of the second level in this paper.
(4) The highest level is the interference processes that change the radiation spectra via the wave interference at a large-scale region, including scintillation, gravitational lensing, plasma lensing, etc. These interference processes might occur in the FRB environments (e.g., circumburst medium, companion wind in a binary system, etc.) or at some regions far away from the FRB sources (e.g., interstellar medium, intergalactic medium, etc.).
In the following sections, we will analyze the above processes in detail and discuss the corresponding implications for the FRB observation.
## 3 Radiation spectra by a single charged particle
We first discuss the spectral features of the intrinsic radiation mechanisms by a single charged particle, as the fundamental level pointed out above. The radiation of a
Figure 3: The schematic pyramid illustrates the physical processes affecting the observed spectral features.
single charged particle with Lorentz factor \(\gamma\) undergoing arbitrary accelerated motion is a coherent superposition of the contributions from the accelerations parallel and perpendicular to the particle's velocity. For comparable parallel and perpendicular forces, the radiation from the parallel component is of order \(1/\gamma^{2}\) compared to that from the perpendicular component. Thus, one usually neglects the parallel acceleration3. The radiation spectrum of the perpendicular component depends on the relation between the particle's deflection angle \(\psi\) and the radiation beaming angle \(\sim 1/\gamma\) (Landau \& Lifshitz, 1975), as shown in Figure 4. The particle's deflection angle \(\psi\) is determined as follows. The particle's momentum is \(p\sim\gamma m_{e}c\), and the change in the perpendicular momentum due to a transverse force \(F_{\perp}\) is \(p_{\perp}\sim F_{\perp}\Delta t_{\rm acc}\) (\(\Delta t_{\rm acc}\) is the time during which the particle acceleration changes significantly). Thus, the particle's deflection angle is
Footnote 3: However, we should notice that the parallel acceleration could be dominant under the scattering process. For example, if the incident electromagnetic wave is linear polarized and weak (the Lorentz force by the magnetic field component is much weaker than the electric field force), the charged particle would be linearly accelerated by the oscillating electric field (the scenario for strong wave could be seen in Yang and Zhang (2020)). Besides, under the magnetosphere of a neutron star, even if the incident wave is circularly polarized or strong, the charged particle can only oscillate along the field lines due to the existence of a strong background magnetic field and emit the scattering wave due to the parallel acceleration (Beloborodov, 2022; Zhang, 2022; Qu and Zhang, 2023).
\[\psi\sim\frac{p_{\perp}}{p}\sim\frac{F_{\perp}\Delta t_{\rm acc}}{\gamma m_{e}c },\quad\mbox{leading to}\quad\gamma\psi\sim\frac{F_{\perp}\Delta t_{\rm acc}}{ m_{e}c}. \tag{11}\]
If \(\gamma\psi\gg 1\), i.e., the particle's deflection angle is much larger than the radiation beaming angle, the observer will see radiation from a short segment of the electron's trajectory that is nearly parallel to the line of sight, as shown in the panel (a) of Figure 4, which corresponds to the scenarios of curvature radiation (e.g., Jackson, 1998; Yang and Zhang, 2018, 2023), traditional (large-pitch-angle) synchrotron radiation (e.g., Ginzburg and Syrovatskii, 1969; Jackson, 1998; Rybicki and Lightman, 1986), etc. If \(\gamma\psi\ll 1\), i.e., the particle's deflection angle is much smaller than the radiation beaming angle, the particle's entire trajectory would be seen by the observer, as shown in the panel (b) of Figure 4, which corresponds to the small-pitch-angle synchrotron radiation (e.g., Epstein, 1973), jitter radiation (e.g., Medvedev, 2000), etc. In the following discussion, we will discuss the case of \(\gamma\psi\gg 1\) in Section 3.1 and the case of \(\gamma\psi\ll 1\) in Section 3.2.
### Deflection angle larger than radiation beaming angle
The radiation for \(\gamma\psi\gg 1\) is equivalent to the radiation by the particle moving instantaneously at constant speed on an appropriate circular path (Jackson, 1998), as shown in the panel (a) of Figure 4. We consider that the acceleration curvature radius is \(\rho\), the angle between the line of sight and the trajectory plane is \(\theta\), and the radiation angular frequency is \(\omega\). The radiation energy per unit frequency interval per unit solid angle is (Jackson, 1998)
\[\mathcal{E}_{\omega}=\frac{3e^{2}}{4\pi^{2}c}\gamma^{2}\hat{\omega}^{2}(1+\gamma^{2}\theta^{2})^{2}\left[K_{2/3}^{2}(\xi)+\frac{\gamma^{2}\theta^{2}}{1+\gamma^{2}\theta^{2}}K_{1/3}^{2}(\xi)\right], \tag{12}\]
where \(\xi=(\hat{\omega}/2)(1+\gamma^{2}\theta^{2})^{3/2}\), \(\hat{\omega}=\omega/\omega_{c}\), and \(\omega_{c}=3\gamma^{3}c/2\rho\) is the typical radiation frequency.
_The radiation spectrum by a single radiating particle with \(\gamma\psi\gg 1\) is intrinsically wide_(Jackson, 1998; Yang and Zhang, 2018), \(\Delta\omega/\omega_{0}\sim 1\), as discussed in detail in Appendix A.1: it follows a power-law distribution with \(\mathcal{E}_{\omega}\propto\hat{\omega}^{2/3}\) at low frequencies and decays exponentially at high frequencies. Therefore, the observed narrow spectra of FRBs cannot be attributed to the intrinsic radiation mechanism by a single charged particle with \(\gamma\psi\gg 1\) (also see Katz (2018) for a detailed discussion).
The polarization properties for the scenario of \(\gamma\psi\gg 1\) are discussed in Appendix A.1. For the radiation by a single charged particle, the intrinsic linear/circular polarization degree mainly depends on the angle between the viewing direction and the trajectory plane. The larger the viewing angle, the lower (higher) the linear (circular) polarization degree. Besides, the higher the observed frequency, the lower (higher) the linear (circular) polarization degree. Therefore, the high circular
Figure 4: Emission from various points along the trajectory of a relativistic particle. Panel (a) \(\gamma\psi\gg 1\): the emission from some parts (bold portions) of the trajectory is observable. Panel (b) \(\gamma\psi\ll 1\): the emission from the entire trajectory is observable.
polarization degree should be attributed to the off-beam observation. If there are multiple radiating particles uniformly distributed in a fan beam, the cumulative distributions of the linear and circular polarization degrees would depend on the telescope's sensitivity, the particles' beaming angle and the observed frequency. Important conclusions for the cumulative polarization distributions include: (1) The higher the telescope's sensitivity, the lower the number fraction between the linearly and circularly polarized bursts. The reason is that bursts at larger viewing angles have higher circular polarization degrees and lower fluxes. (2) The larger the particles' beaming angle, the higher the number fraction between the linearly and circularly polarized bursts. If the viewing angle is much larger than \(1/\gamma\), most bursts would have high linear polarization. (3) The higher the observed frequency, the higher the number fraction between the linearly and circularly polarized bursts. The reason is that the threshold viewing angle is significantly suppressed at high frequencies, leading to a larger relative number of bursts within the particle beaming angle.
### Deflection angle smaller than radiation beaming angle
In the scenario with \(\gamma\psi\ll 1\), the particle with a charge \(q\) moves along the line of sight with an almost constant velocity \(\vec{\beta}\) but with a varying acceleration \(\dot{\vec{\beta}}\) as shown in the panel (b) of Figure 4, which is called a "wiggler" in the laboratory. The radiation energy per unit frequency interval per unit solid angle at the line-of-sight direction \(\vec{n}\) could be written as (Landau & Lifshitz, 1975, also see Appendix A)
\[\mathcal{E}_{\omega}=\frac{q^{2}}{4\pi^{2}c}\left(\frac{\omega}{\tilde{\omega}}\right)^{4}\left|\vec{n}\times\left[(\vec{n}-\vec{\beta})\times\dot{\vec{\beta}}_{\tilde{\omega}}\right]\right|^{2} \tag{13}\]
with
\[\dot{\vec{\beta}}_{\tilde{\omega}}\equiv\int_{-\infty}^{\infty}\dot{\vec{\beta}}e^{i\tilde{\omega}t^{\prime}}dt^{\prime}\quad\text{and}\quad\tilde{\omega}\equiv(1-\vec{n}\cdot\vec{\beta})\omega, \tag{14}\]
where \(t^{\prime}\) is the retarded time. In the ultrarelativistic case, the longitudinal acceleration is smaller than the transverse acceleration, \(\dot{\beta}_{\parallel}/\dot{\beta}_{\perp}\sim 1/\gamma^{2}\ll 1\). Thus, \(\dot{\vec{\beta}}\) and \(\vec{\beta}\) are approximately perpendicular to each other, \(\dot{\vec{\beta}}\perp\vec{\beta}\). Considering that the Fourier component \(\dot{\vec{\beta}}_{\tilde{\omega}}\) is significantly different from zero only if \(1/\tilde{\omega}\) is of the same order as the time \(\Delta t_{\rm acc}\) during which the particle acceleration changes significantly, the typical frequency of the radiation is estimated to be (Landau \& Lifshitz, 1975):
\[\omega\sim(1-\beta)^{-1}\Delta t_{\rm acc}^{-1}\sim 2\gamma^{2}\Delta t_{\rm acc }^{-1}. \tag{15}\]
For example, if the particle's acceleration is due to the Lorentz force by the magnetic field \(B\), one has \(\Delta t_{\rm acc}^{-1}\sim\omega_{B}=eB/\gamma m_{e}c\), where \(\omega_{B}\) is the cyclotron frequency, leading to a typical radiation frequency of \(\omega\sim\gamma eB/m_{e}c\).
In many astrophysical scenarios, the perpendicular acceleration of a charged particle is usually attributed to the Lorentz force by magnetic fields. Such a scenario corresponds to the small-pitch-angle synchrotron radiation, also see Appendix A.2 for a detailed discussion. According to Eq.(A43), the radiation power per unit frequency interval per unit solid angle is
\[\mathcal{P}_{\omega}=\frac{e^{2}\gamma^{5}\psi^{2}\omega_{0}}{4\pi c}\bar{ \omega}^{4}\left(1-\bar{\omega}+\frac{1}{2}\bar{\omega}^{2}\right)\delta \left(\bar{\omega}-\frac{2}{1+\gamma^{2}\theta^{2}}\right), \tag{16}\]
where \(\bar{\omega}\equiv\omega/\gamma\omega_{0}\), and \(\omega_{0}\) is the fundamental cyclotron frequency in the rest frame with velocity \(\beta\cos\psi\). The radiation only occurs at the direction \(\theta\) with
\[\gamma\theta=\left(\frac{2}{\bar{\omega}}-1\right)^{1/2}. \tag{17}\]
Notice that the particle's deflection angle \(\psi\) only affects the normalized radiation power, not the typical frequency or the spectral shape. The typical radiation frequency is \(\bar{\omega}\sim\) a few (corresponding to \(\omega\sim\gamma\omega_{0}\)). According to Eq.(16), for a certain viewing direction \(\gamma\theta\), the emission is only at the frequency \(\bar{\omega}=2/(1+\gamma^{2}\theta^{2})\). Thus, _the radiation spectrum of a single particle should be extremely narrow_. This result is consistent with the conclusion discussed in Section 2.2, that is, a narrow spectrum could be produced if the particle's acceleration is periodic while its radiation beam points toward the observer. If multiple radiating particles are distributed in a three-dimensional beam, the corresponding spectrum would become relatively wider than that of a single charged particle, but it is still narrow, with \(\Delta\nu/\nu\ll 1\), see Appendix A.2 for a detailed calculation. Therefore, if the observed narrow spectrum of FRBs is due to the intrinsic radiation mechanism by a single charged particle, it should be attributed to the scenario of \(\gamma\psi\ll 1\).
The polarization properties for the scenario of \(\gamma\psi\ll 1\) are discussed in Appendix A.2. First, for the general polarization properties of the scenario with \(\gamma\psi\ll 1\), it can be easily proved that: (1) if the acceleration is always on a straight line perpendicular to the particle's velocity, the polarization is fully linear; (2) if the acceleration rotates with a constant angular velocity on the plane perpendicular to the particle's velocity, the polarization is fully circular. In particular, for the small-pitch-angle synchrotron radiation by a single charged particle, high linear polarization (low circular polarization) only occurs
at \(\bar{\omega}\sim 1\) and \(\gamma\theta\sim 1\), otherwise, high circular polarization (low linear polarization) is dominant. For multiple radiating particles uniformly distributed in a three-dimensional beam, the observed linear and circular polarization degrees depend on the gyration directions of the radiating particles and whether the viewing direction is within the beaming angle. If all radiating particles have the same gyration directions, the polarization of this radiation mechanism would be almost 100% circular polarization, otherwise, their polarizations would cancel out.
### Different astrophysical scenarios
The above two general radiation processes with \(\gamma\psi\gg 1\) and \(\gamma\psi\ll 1\) could appear in various astrophysical scenarios, e.g., in the magnetosphere of a neutron star or in a magnetized shocked medium. The radiation processes in the magnetosphere are shown in Figure 5. In the inner region of the magnetosphere, due to the strong magnetic field, the electrons move almost along the curved field lines and produce curvature radiation4. In the picture of curvature radiation, since the deflection angle (i.e., the deflection angle of the field line) is much larger than the radiation beaming angle, the condition of \(\gamma\psi\gg 1\) is satisfied. In the outer region of the magnetosphere with a relatively weak magnetic field, the electrons would move along the field lines with spiral trajectories, and the corresponding radiation mechanism is the small-pitch-angle synchrotron radiation. The deflection angle \(\psi\) in this case corresponds to the pitch angle of the synchrotron radiation, leading to \(\gamma\psi\ll 1\). One should notice the difference in the deflection angles \(\psi\) in the above two scenarios due to the different mechanisms.
Footnote 4: The accelerations of the electrons are essentially provided by the Lorentz forces associated with the drift velocities perpendicular to the field lines.
The critical conditions between the curvature radiation and the small-pitch-angle synchrotron radiation could be obtained as follows: when a charged particle with Lorentz factor \(\gamma\) moves along a trajectory with a curvature radius of \(\rho\), the observer will see the radiation within an emission cone of angular width \(1/\gamma\) around the observer direction, and the typical timescale of the radiating process is \(\tau_{r}=\rho/\gamma c\). The gyration period of an electron in a magnetic field is \(\tau_{B}=2\pi/\omega_{B}=2\pi\gamma m_{e}c/eB\). If the radiation process is dominated by the small-pitch-angle synchrotron, the electron must gyrate many times during the time \(\tau_{r}\), which requires that the gyration period of the electron be much shorter than \(\tau_{r}\), \(\tau_{B}<\tau_{r}\), leading to the first necessary condition:
\[B>B_{\rm cr,1}=\frac{2\pi\gamma^{2}m_{e}c^{2}}{e\rho}\simeq 1.1\ {\rm G}\ \gamma_{2}^{2}\rho_{8}^{-1}. \tag{18}\]
According to the Larmor formula, the radiation power of the small-pitch-angle synchrotron radiation is
\[P=\frac{2}{3}\frac{e^{4}\gamma^{2}B^{2}\beta_{\perp}^{2}}{m_{e}^{2}c^{3}} \simeq\frac{2}{3}\frac{e^{4}\gamma^{2}B^{2}\psi^{2}}{m_{e}^{2}c^{3}}, \tag{19}\]
where \(\beta_{\perp}=\beta\sin\psi\simeq\psi\) for \(\beta\sim 1\) and \(\psi\ll 1\). The cooling timescale is estimated by \(Pt_{\rm cool}\sim\gamma m_{e}c^{2}\), and one obtains
\[t_{\rm cool}\sim\frac{3m_{e}^{3}c^{5}}{2e^{4}\gamma B^{2}\psi^{2}}\simeq 5.2 \times 10^{10}\ {\rm s}\ \gamma_{2}^{-1}B_{1}^{-2}\psi_{-3}^{-2}. \tag{20}\]
If the small-pitch-angle synchrotron radiation is not significantly cooling during the electron moving along the field line, the cooling timescale \(t_{\rm cool}\) must be larger than \(\tau_{r}\), leading to the second necessary condition:
\[B<B_{\rm cr,2}=\left(\frac{3m_{e}^{3}c^{6}}{2e^{4}\rho\psi^{2}}\right)^{1/2} \simeq 3.9\times 10^{8}\ {\rm G}\ \rho_{8}^{-1/2}\psi_{-3}^{-1}. \tag{21}\]
Based on Eq.(18) and Eq.(21), the small-pitch-angle synchrotron radiation finally requires the conditions:
\[1.1\ {\rm G}\ \gamma_{2}^{2}\rho_{8}^{-1}\lesssim B\lesssim 3.9\times 10^{8} \ {\rm G}\ \rho_{8}^{-1/2}\psi_{-3}^{-1}. \tag{22}\]
Accordingly, the condition of the curvature radiation is
\[B\gtrsim 3.9\times 10^{8}\ {\rm G}\ \rho_{8}^{-1/2}\psi_{-3}^{-1}. \tag{23}\]
Figure 5: Two radiation mechanisms with \(\gamma\psi\gg 1\) and \(\gamma\psi\ll 1\) in the magnetosphere of a neutron star. In the inner region with a relatively strong magnetic field, the radiation mechanism is the curvature radiation. In the outer region with a relatively weak magnetic field, the radiation mechanism is the small-pitch-angle synchrotron radiation.
As we expected, the small-pitch-angle synchrotron radiation is preferred to be in the outer region of the magnetosphere, and the curvature radiation is preferred to be in the inner region of the magnetosphere.
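As a quick numerical cross-check of the above critical conditions, the sketch below evaluates Eqs. (18) and (21) for the fiducial values used in the scalings, \(\gamma=10^{2}\), \(\rho=10^{8}\) cm and \(\psi=10^{-3}\) (CGS units throughout).

```python
# Critical fields separating small-pitch-angle synchrotron and curvature radiation.
import numpy as np

e   = 4.803e-10      # electron charge [esu]
m_e = 9.109e-28      # electron mass [g]
c   = 2.998e10       # speed of light [cm/s]

gamma, rho, psi = 1e2, 1e8, 1e-3                                 # fiducial parameters

B_cr1 = 2 * np.pi * gamma**2 * m_e * c**2 / (e * rho)            # Eq. (18)
B_cr2 = np.sqrt(3 * m_e**3 * c**6 / (2 * e**4 * rho * psi**2))   # Eq. (21)

print(f"B_cr,1 ~ {B_cr1:.2g} G   (text: ~1.1 G)")
print(f"B_cr,2 ~ {B_cr2:.2g} G   (text: ~3.9e8 G)")
print(f"small-pitch-angle synchrotron window: {B_cr1:.2g} G < B < {B_cr2:.2g} G")
```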
Next, we make some comments about the observable properties of the small-pitch-angle synchrotron radiation in the magnetosphere. As discussed in Section 3.2, according to Eq.(15) the typical radiation frequency of the small-pitch-angle synchrotron radiation is
\[\nu\sim\gamma^{2}\nu_{B}=\frac{\gamma eB}{2\pi m_{e}c}\simeq 2.8\ \text{GHz} \ \gamma_{2}B_{1}. \tag{24}\]
Thus, the small-pitch-angle synchrotron radiation in the outer magnetosphere could be emitted at the GHz band 5. The polarization properties of the small-pitch-angle synchrotron radiation depend on the gyration motions of the charged particles in the magnetosphere, see the discussion in Appendix A.2. Before the charged particles enter the magnetosphere's outer region, in addition to the parallel velocities along the field lines, the charged particles have a small drift velocity perpendicular to the field lines, providing an additional Lorentz force that makes the particles move along the curved paths. Since the direction of the drift velocity only depends on the curved field line, the charged particles tend to have the same gyration direction when they enter the outer region of the magnetosphere. Thus, the small-pitch-angle radiation is expected to be highly circularly polarized (see Appendix A.2). Besides, since the spectral shape and the typical frequency of the small-pitch-angle radiation are independent of the particle's pitch angle, for multiple particles the spectral bandwidth is mainly determined by the distribution of the particles' Lorentz factors.
Footnote 5: Notice that the typical frequency at the GHz band requires the magnetic field to be \(B\sim 10\) G according to Eq.(24). Such a weak field strength leads to the local magnetic energy density being much smaller than the plasma energy density, \(B^{2}/8\pi\ll\xi^{-1}L_{\text{iso}}/4\pi r^{2}c\), where \(L_{\text{iso}}\) is the isotropic luminosity of an FRB and \(\xi\) is the radiation efficiency. This means that the radiating particles will straighten the field lines (similar to the magnetic field geometry of a stellar wind), leading to a larger curvature radius \(\rho\) than that of the dipole field at the same position. Such a process makes the small-pitch-angle radiation easy to produce because the field lines become almost parallel to the particles' velocities. However, how to generate coherent radiation is an issue for such a scenario, which needs to be analyzed further in detail.
The above two general radiation processes with \(\gamma\psi\gg 1\) and \(\gamma\psi\ll 1\) might also occur in a magnetized shocked medium, corresponding to the traditional synchrotron radiation and the small-pitch-angle synchrotron radiation, as shown in the panel (a) and the panel (b) of Figure 6, respectively. The critical condition between the two scenarios depends on the relative orientations of the magnetic field, the particles' injection direction, and the viewing direction. If the direction of the particles' injection is almost parallel to the field lines, the radiation mechanism would be the small-pitch-angle synchrotron radiation, as shown in the panel (b) of Figure 6. However, due to the shock compression, the magnetic field in the shocked medium usually has a significant component parallel to the shock surface. Thus, the small-pitch-angle injection process seems to require fine-tuning. Besides, in the magnetized shocked medium, since the directions of the particles' gyration motion are random, the small-pitch-angle synchrotron radiation would be significantly depolarized for multiple particles.
## 4 Radiation Spectra by Coherent Processes of Multiple Particles
In this section, we discuss how the coherent processes generate narrow spectra, including the bunching mechanism (Section 4.1) and maser mechanism (Section 4.2).
### Narrow spectra by bunching mechanism
Coherent curvature radiation by charged bunches has been proposed as one of the popular ideas to explain the emission of FRBs (e.g., Katz, 2014, 2018; Yang and Zhang, 2018, 2023; Kumar and Bosnjak, 2020; Lu et al., 2020; Cooper and Wijers, 2021). Compared with the spectrum by a single charged particle with \(\gamma\psi\gg 1\), see Section 3.1 and Figure 10, a relatively narrow spectrum could be generated by the coherent curvature radiation from a structured bunch as proposed by Yang et al. (2020) and
Figure 6: Two radiation mechanisms with \(\gamma\psi\gg 1\) and \(\gamma\psi\ll 1\) in the magnetized shocked medium. Panel (a) corresponds to traditional synchrotron radiation. Panel (b) corresponds to the small-pitch-angle synchrotron radiation, in which scenario, the emission region has a magnetic field almost parallel to the line of sight and the direction of the particles’ injection is almost parallel to the field lines.
Yang & Zhang (2023) because the frequency structure of the burst is the Fourier transform of the spatial structure of the radiating charge density as pointed out by Katz (2018), but it is still hard to explain the observed extremely narrow spectrum of some FRBs, e.g., FRB 20190711 with \(\Delta\nu/\nu\sim 0.05\). In particular, the dynamic fluctuation of the bunches would also make the spectrum show a white noise, especially at the low-frequency band (Yang & Zhang, 2023). Since the narrow spectrum implies that the electromagnetic signal should be quasi-sinusoid in the short term as pointed out in Section 2.2, one possibility to generate a narrow spectrum is that the radiating bunches moving along the same trajectory are quasi-periodically distributed due to some special processes, like quasi-monochromatic Langmuir wave or oscillating pair creation in the charge-starved region. In this case, the radiation could be amplified at some harmonic frequencies, see the following discussion.
We consider that the charged bunch distribution is quasi-periodic in the time domain, and the medium in the emission contains \(N\) radiating bunches. Each radiating bunch emits a pulse of \(E_{0}(t)\) with the same shape but with different arrival times. Thus, the total electric field from the multiple radiating bunches is given by
\[E(t)=\sum_{j}^{N}E_{0}(t-t_{j}). \tag{25}\]
According to the time-shifting property of the Fourier transform, the total power spectrum of multiple radiating bunches is
\[|E(\omega)|^{2}=|E_{0}(\omega)|^{2}\left|\sum_{j}^{N}e^{i\omega t_{j}}\right|^ {2}, \tag{26}\]
where \(|E_{0}(\omega)|^{2}\) corresponds to the power spectrum of the first radiating bunch. The coherence properties of the radiation by the multiple bunches are determined by the factor of \(|\sum_{j}^{N}\exp(i\omega t_{j})|^{2}\). If the multiple bunches are quasi-periodically distributed, one would have
\[i\omega t_{j}=ij\omega/\omega_{m}+i\delta\phi_{j}, \tag{27}\]
where \(1/\omega_{m}=\text{const.}\) corresponds to the period of the bunch distribution, and \(\delta\phi_{j}\) corresponds to the relative random phase related to the phase of \(j\omega/\omega_{m}\).
We first consider the case of \(\delta\phi_{j}=0\), which implies that the bunch distribution is strictly periodic. Defining \(z=\exp(i\omega/\omega_{m})\), the modulus square of the sum of the phase factor in Eq.(26) is calculated by
\[\left|\sum_{j=1}^{N}z^{j}\right|^{2} =\left|z\frac{z^{N}-1}{z-1}\right|^{2}=\frac{2-z^{N}-z^{*N}}{2-z- z^{*}}\] \[=\frac{1-\cos(N\omega/\omega_{m})}{1-\cos(\omega/\omega_{m})}= \frac{\sin^{2}(N\omega/2\omega_{m})}{\sin^{2}(\omega/2\omega_{m})}, \tag{28}\]
where \(z^{*}\) is the conjugation of the complex number \(z\), and the geometric sequence summation is used in the above calculation. Therefore, the power spectrum by the multiple bunches with a periodic distribution is given by
\[|E(\omega)|^{2}=|E_{0}(\omega)|^{2}\sin^{2}\left(\frac{N\omega}{2\omega_{m}} \right)\sin^{-2}\left(\frac{\omega}{2\omega_{m}}\right). \tag{29}\]
The radiation is coherently amplified when \(\sin^{2}(\omega/2\omega_{m})\sim 0\), leading to the coherent peak frequencies at \(\omega/\omega_{m}=2n\pi\) with \(n\in\mathbb{Z}^{+}\). In the top panel of Figure 7, we plot the power spectrum of the multiple radiating bunches with a periodic distribution according
Figure 7: The spectra of multiple radiating bunches with a periodic or quasi-periodic distribution in the time domain. The top panel corresponds to a strictly periodic distribution with \(\delta\phi_{j}=0\). The bottom panel corresponds to a quasi-periodic distribution with \(\delta\phi_{j}\) uniformly distributed in \([-0.5,0.5]\). A curvature-radiation-like spectrum for a single point-source bunch, \(|E_{0}(\omega)|^{2}\propto\omega^{2/3}\exp(-\omega/\omega_{c})\), is taken as an example in this figure. We take \(\omega_{m}=\omega_{c}\) and \(N=10^{3}\). The peak frequency is at \(2n\pi\omega_{m}\) with \(n\in\mathbb{Z}^{+}\).
to Eq.(29). The spectrum of a single bunch is assumed to be \(|E_{0}(\omega)|^{2}\propto\omega^{2/3}\exp(-\omega/\omega_{c})\) (corresponding to the spectrum of the curvature radiation by a single point-source bunch, see Yang and Zhang (2018, 2023)) as an example, and \(N=10^{3}\) and \(\omega_{m}=\omega_{c}\) are taken. We can see that the coherent radiation energy is radiated into multiples of \(2\pi\omega_{m}\) with narrow bandwidths.
Next, we consider that the bunch distribution is quasi-periodical with \(\delta\phi_{j}\neq 0\). We assume that the relative random phases \(\delta\phi_{j}\) are uniformly distributed in a range of \([-\delta\phi_{m},\delta\phi_{m}]\) with \(0<\delta\phi_{m}\leqslant\pi\), and make a Monte Carlo simulation to calculate the total radiation based on Eq.(26) and Eq.(27). In the bottom panel of Figure 7, we plot the power spectrum of the multiple radiating bunches with a quasi-periodic distribution with \(\delta\phi_{m}=0.5\). The other parameters are the same as that in the case of strictly periodic distribution. We can see that the coherent radiation energy is still radiated into multiples of \(2\pi\omega_{m}\) with narrow bandwidths even for \(\delta\phi_{j}\neq 0\). Meanwhile, for a given frequency band, the continuous part of the spectrum becomes harder for a larger value of \(\delta\phi_{m}\). In particular, if \(\delta\phi_{m}=\pi\), the distribution of the multiple bunches would be random and the total power spectrum completely incoherent.
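A minimal sketch of this Monte Carlo is given below. The single-bunch spectrum is the same curvature-like form assumed for Figure 7, while the number of bunches and the frequency grid are our own illustrative choices (a smaller \(N\) than in Figure 7, so that the coherent peaks are resolved on a modest grid).

```python
# Coherence factor of quasi-periodically distributed bunches, Eqs. (26)-(29).
import numpy as np

rng = np.random.default_rng(1)

N       = 100                    # number of bunches (fewer than in Figure 7 so that the
                                 # narrow coherent peaks are resolved on a modest grid)
omega_m = 1.0                    # inverse of the bunch spacing
omega_c = omega_m                # cutoff of the assumed single-bunch spectrum
omega   = np.linspace(0.5, 15.0, 8000) * omega_m
j       = np.arange(1, N + 1)

def total_spectrum(dphi_m):
    """Power spectrum of N bunches with quasi-periodic arrival phases, Eqs. (26)-(27)."""
    dphi = rng.uniform(-dphi_m, dphi_m, N)                   # relative random phases
    phase = np.outer(omega / omega_m, j) + dphi              # omega * t_j for each frequency
    coherence = np.abs(np.exp(1j * phase).sum(axis=1)) ** 2  # |sum_j e^{i omega t_j}|^2
    single = omega ** (2 / 3) * np.exp(-omega / omega_c)     # curvature-like |E0(omega)|^2
    return single * coherence, single

for dphi_m in (0.0, 0.5):
    spec, single = total_spectrum(dphi_m)
    k = int(np.argmax(spec))
    print(f"dphi_m = {dphi_m}: strongest peak at omega/omega_m = {omega[k]:.2f} "
          f"(2*pi = {2*np.pi:.2f}); amplification over incoherent ~ "
          f"{spec[k] / (N * single[k]):.0f} (N = {N} in the strictly periodic limit)")
```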
### Narrow spectra by maser mechanism
In addition to the bunching mechanism, some maser mechanisms have also been proposed as the radiation mechanism of FRBs (e.g., Lyubarsky, 2014; Metzger et al., 2019; Waxman, 2017; Beloborodov, 2020). The maser mechanism corresponds to the negative absorption by radiating particles with inverted populations. A negative optical depth of \(\tau_{\nu}\) at frequency \(\nu\) causes an amplification by a factor of
\[A(\nu)=\exp(|\tau_{\nu}|). \tag{30}\]
Thus, the amplification involved in the maser mechanism could be very large even for an intermediate negative optical depth.
We first consider that the electron distribution (i.e., the electron number density in an interval of \((\gamma,\gamma+d\gamma)\) with \(\gamma\) as the electron Lorentz factor) is
\[n(\gamma)d\gamma\propto\gamma^{p}d\gamma, \tag{31}\]
the radiation power of a single electron is \(P(\nu,\gamma)\), and the scale of the emission region is \(L\). The optical depth \(\tau_{\nu}\) could be generally given by6 (e.g., Rybicki and Lightman, 1986)
Footnote 6: Notice that this equation is very general because it is directly derived from Einstein coefficient. However, we should emphasize that this equation potentially assumes that the electron distribution is isotropic. If the electron distribution is one-dimensional (e.g., relativistic electrons and positrons moving along the magnetic field lines in the magnetosphere of a neutron star), the integrand in Eq.(32) would be proportional to \(dn(\gamma)/d\gamma\)(Melrose, 1978) rather than \(\gamma^{2}\partial[n(\gamma)/\gamma^{2}]/\partial\gamma\), and the condition for the negative absorption would become \(p>0\) rather than \(p>2\).
\[\tau_{\nu}=-\frac{L}{8\pi m_{e}\nu^{2}}\int P(\nu,\gamma)\left[\gamma^{2}\frac {\partial}{\partial\gamma}\left(\frac{n(\gamma)}{\gamma^{2}}\right)\right]d\gamma. \tag{32}\]
Thus, the negative absorption requires that \(\tau_{\nu}<0\), i.e., \(\partial(n(\gamma)/\gamma^{2})/\partial\gamma>0\), leading to the condition of
\[p>2. \tag{33}\]
In general, the electron distribution \(n(\gamma)\) in the emission region is usually wide and appears more complex than the single power-law distribution given by Eq.(31), that is, the power-law index \(p\) could be energy-dependent, \(p(\gamma)\). In this case, the effective electrons contributing to the maser mechanism are only those in the range where \(p(\gamma)>2\). We rewrite Eq.(32) as
\[\tau_{\nu}=\tau_{\nu,+}+\tau_{\nu,-}, \tag{34}\]
where \(\tau_{\nu,+}\) and \(\tau_{\nu,-}\) are the positive and negative optical depths with the integral ranges in Eq.(32) corresponding to \(p(\gamma)<2\) and \(p(\gamma)>2\), respectively. For an effective maser mechanism, one usually has \(|\tau_{\nu,-}|\gg\tau_{\nu,+}\) and \(\tau_{\nu}\simeq\tau_{\nu,-}\), unless the electron distribution is finely tuned, leading to a small net negative absorption. Thus, in the following discussion, we only consider the electrons distributed in
\[\gamma_{\rm min}<\gamma<\gamma_{\rm max}\quad\text{in which range }p(\gamma)\sim p>2, \tag{35}\]
where \(p(\gamma_{\rm min})=p(\gamma_{\rm max})\simeq 2\).
We are mainly interested in the synchrotron maser that has been proposed as one of the popular ideas to explain the coherent emission of FRBs (e.g., Lyubarsky, 2014; Metzger et al., 2019; Waxman, 2017; Beloborodov, 2020). The \(\delta\)-function approximation for the synchrotron emissivity can be written as
\[P(\nu,\gamma) \simeq \frac{4}{3}c\sigma_{\rm T}\frac{B^{2}}{8\pi}\gamma^{2}\delta(\nu -\gamma^{2}\nu_{B}) \tag{36}\] \[= \frac{4\pi}{9}\frac{e^{3}B}{m_{e}c^{2}}\gamma\delta\left(\gamma- \sqrt{\frac{\nu}{\nu_{B}}}\right),\]
where \(\nu_{B}=eB/2\pi m_{e}c\) is the cyclotron frequency. Substituting Eq.(31) and Eq.(36) in Eq.(32) gives
\[\tau_{\nu} \sim\frac{e^{3}BL}{18m_{e}^{2}c^{2}}\frac{n(\gamma\sim\sqrt{\nu/\nu_{B}})}{\nu^{2}}\] \[\propto\nu^{(p-4)/2},\quad\text{for $\gamma_{\min}^{2}\nu_{B}<\nu<\gamma_{\max}^{2}\nu_{B}$}. \tag{37}\]
Based on Eq.(30) and Eq.(37), we take the form of the amplification as \(A(\nu)=\exp[\tau_{\nu,0}(\nu/\nu_{\min})^{(p-4)/2}]\), where \(\tau_{\nu,0}\) corresponds to the optical depth at \(\nu_{\min}=\gamma_{\min}^{2}\nu_{B}\) and \(2<p<4\) is assumed. In Figure 8, we plot the spectra of the synchrotron maser, and an intermediate optical depth of \(\tau_{\nu,0}=10\) is assumed. The black, red, and blue lines correspond to \(p=2.5,3,3.5\), respectively. We can see that for these typical parameters, the amplification varies by several orders of magnitude within a narrow bandwidth. Thus, the spectrum of the synchrotron maser should be extremely narrow.
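The steepness of this amplification is easy to quantify; the sketch below evaluates \(A(\nu)\) for the same assumed \(\tau_{\nu,0}=10\) and reports the contrast across a 10% bandwidth and across one octave.

```python
# Frequency dependence of the synchrotron-maser amplification A(nu), Eqs. (30) and (37).
import numpy as np

tau_0 = 10.0                                   # |tau_nu| at nu_min (assumed, as in Figure 8)

for p in (2.5, 3.0, 3.5):
    lnA = lambda x: tau_0 * x ** ((p - 4) / 2) # ln A at nu = x * nu_min
    contrast_10pct = np.exp(lnA(1.0) - lnA(1.1))   # drop across a 10% bandwidth
    contrast_octave = np.exp(lnA(1.0) - lnA(2.0))  # drop across one octave
    print(f"p = {p}: A(nu_min) ~ {np.exp(lnA(1.0)):.1e}, "
          f"contrast over dnu/nu = 0.1: {contrast_10pct:.1f}, over an octave: {contrast_octave:.1f}")
```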
## 5 Radiation Spectra Corrected by Radiative Transfer Processes
The radiative transfer processes include absorption (Section 5.1) and scattering (Section 5.2). Since both absorption and scattering are significant only in relatively dense environments, they are important only in the circumburst medium or the emission region.
### Absorption
The absorption coefficients \(\alpha_{\nu}\) of most absorption processes (e.g., free-free absorption, synchrotron absorption, plasma absorption, etc.) have a negative correlation with the frequency of the electromagnetic waves, \(\alpha_{\nu}\propto\nu^{-\delta}\) with \(\delta>0\); that is, the lower the frequency, the more significant the absorption effect. We consider that the incident intensity is \(I_{\nu,0}\) and the scale of the absorption region is \(L\); then the outgoing intensity has a spectrum of \(I_{\nu,0}\exp(-\alpha_{\nu}L)\). If the frequency \(\nu\) is much less than the absorption frequency \(\nu_{\text{abs}}\) defined by \(\tau_{\nu}(\nu_{\text{abs}})=\alpha_{\nu}(\nu_{\text{abs}})L=1\), the intensity would be significantly cut off, i.e., \(I_{\nu}(\nu<\nu_{\text{abs}})\ll I_{\nu,0}\). This result suggests that the radio bursts emitted during a limited time \(T\) would have the same low-frequency cutoff at \(\nu_{\text{low}}\sim\nu_{\text{abs}}\) if the spectrum is intrinsically wide and the typical evolution timescale of the absorption parameters is much longer than \(T\). However, such a conclusion is inconsistent with the observations of some FRB repeaters, suggesting that the absorption effects do not significantly change the observed spectra.
For example, FRB 200428 from the Galactic magnetar SGR J1935+2154 consisted of two sub-bursts separated by \(T\sim 29\) ms (CHIME/FRB Collaboration et al., 2020). The first component did not show a significant low-frequency cutoff within the telescope's bandwidth, implying that the low-frequency cutoff satisfies \(\nu_{\text{low}}\ll 400\) MHz, and the second component had a low-frequency cutoff at \(\nu_{\text{low}}\sim(400-500)\) MHz. The relative variation of the low-frequency cutoffs, \(\delta\nu_{\text{low}}/\nu_{\text{low}}\), was several tens of percent during \(T\sim 29\) ms. This observation would require that the relative variations of the environment parameters (e.g., average electron density, absorption region lengthscale, temperature, magnetic field, etc.) in the absorption region also reached several tens of percent during such an extremely short time, which is obviously not realistic. Therefore, the low-frequency cutoffs of the observed spectra of some FRB sources should not be attributed to the absorption processes.
### Electron Scattering
The most important mechanism among various scatterings is electron scattering, including relativistic and nonrelativistic types. Relativistic electron scattering mainly refers to inverse Compton scattering. For a general large-angle scattering process by a relativistic electron with Lorentz factor \(\gamma\), the inverse Compton scattering process converts a photon with frequency \(\nu\) to a high-frequency one with frequency \(\sim\gamma^{2}\nu\). _Since the cross section (i.e., Thomson cross section) is independent of the photon frequency, the scattering process simultaneously suppresses the specific intensity in all bands, leaving the spectral shape of the unscattered radiation unchanged_.
The nonrelativistic electron scattering is called Thomson scattering. As an elastic scattering process, the total amount of radiation emitted per unit frequency is almost equal to the total amount absorbed in that same
Figure 8: The amplification of the synchrotron maser mechanism as a function of frequency. The black, red and blue lines correspond to the electron distribution index \(p=2.5,3,3.5\), respectively. The optical depth at \(\nu_{\min}\) is assumed to be \(\tau_{\nu,0}=10\) here. The frequency is dimensionless with \(\nu_{\min}=\gamma_{\min}^{2}\nu_{B}\). Notice that the amplification becomes zero for \(\nu<\nu_{\min}\) in this figure, see the text for the discussion.
frequency, and the cross section is also independent of frequency. However, repeated scatterings can build up a substantial effect, i.e., induced Compton scattering7. Induced Compton scattering in a relatively dense plasma would unavoidably affect the emergent radiation. We consider that the electron number density is \(n_{e}\), the electron temperature is \(T\), and the photon occupation number (i.e., the average number of photons in a state) is (e.g., Rybicki and Lightman, 1986; Wilson, 1982; Yang and Zhang, 2023)\(n_{\gamma}=kT_{B}/h\nu=(c^{2}/2h\nu^{3})I_{\nu}\), where \(T_{B}\) is the radiation brightness temperature and \(I_{\nu}\) is the radiation intensity. The kinetic equation for the photons interacting with nonrelativistic electrons is (Kompaneets, 1957; Syunyaev, 1971; Rybicki and Lightman, 1986)
Footnote 7: Induced Compton scattering could be significant for relativistic electron scattering when the scattering angle is so small that the scattering frequency is close to the incident one.
\[\frac{\partial n_{\gamma}}{\partial t}=\frac{h\sigma_{\rm T}n_{e}}{m_{e}c} \frac{1}{\nu^{2}}\frac{\partial}{\partial\nu}\nu^{4}\left(n_{\gamma}^{2}+n_{ \gamma}+\frac{kT}{h}\frac{\partial n_{\gamma}}{\partial\nu}\right). \tag{38}\]
This is the so-called Kompaneets equation, which describes the evolution of the photon distribution function due to repeated nonrelativistic electron scattering. For FRBs with extremely high brightness temperatures, we are only interested in \(n_{\gamma}\gg 1\) (i.e., \(kT_{B}\gg h\nu\)) and \(n_{\gamma}\gg kT/h\nu\) (i.e., \(T_{B}\gg T\)), so the second and third terms in the parentheses can be ignored. Thus, the kinetic equation could be written as
\[\frac{\partial n_{\gamma}}{\partial t}=\frac{2h\sigma_{\rm T}n_{e}n_{\gamma}} {m_{e}c}\frac{\partial(\nu^{2}n_{\gamma})}{\partial\nu}. \tag{39}\]
According to the above equation, the radiation with the brightness temperature \(T_{B}\) is unable to escape from a region with an effective optical depth \(\tau_{\rm ind}=(kT_{B}/m_{e}c^{2})\tau_{\rm T}>1\) (Lyubarsky and Ostrovska, 2016), where \(\tau_{\rm T}\) is the optical depth of Thomson scattering. The total number of photons is conserved and the photons are redistributed toward lower frequencies, which leads to a larger brightness temperature \(T_{B}\) at lower frequencies, so that the rate of redistribution increases further. Finally, the photons are eventually absorbed by some absorption processes, e.g., free-free absorption, synchrotron absorption, plasma absorption, etc. In conclusion, the induced Compton scattering makes the emergent spectrum softer than the initial incident spectrum and cuts off the spectrum at a low frequency determined by the absorption processes. Based on the discussion in Section 5.1, since the radio bursts of an FRB repeater did not show the same low-frequency cutoff during a limited time, the effect of induced Compton scattering should also be insignificant.
The above electron scattering mechanisms implicitly assume that the scattering processes are linear, which means that the electron oscillation is nonrelativistic and the interaction between the electromagnetic wave and the electron is dominated by the electric force (i.e., the magnetic part of the Lorentz force can be ignored due to the nonrelativistic motion of the electrons). We consider an electromagnetic wave with an electric field amplitude \(E\) and frequency \(\nu\), and define the strength parameter as
\[a=\frac{v_{\rm os}}{c}=\frac{eE}{2\pi m_{e}c\nu}, \tag{40}\]
where \(v_{\rm os}=eE/2\pi m_{e}\nu\) is the typical oscillation velocity due to the electric force. For \(a\ll 1\), the electron oscillation is nonrelativistic and the interaction is dominated by the electric force, which means that the Thomson theory is valid. For \(a\gg 1\), the electron oscillation becomes relativistic and the interaction between electron and electromagnetic wave enters the nonlinear regime of "strong waves". In particular, the nonlinear effect makes the cross section enhanced by a factor of \(a^{2}\)(e.g., Sarachik and Schappert, 1970; Yang and Zhang, 2020; Beloborodov, 2022; Qu et al., 2022)
\[\sigma\sim a^{2}\sigma_{\rm T}\propto\frac{I_{\nu}}{\nu}, \tag{41}\]
where \(I_{\nu}\) is the incident intensity. Thus, the lower the frequency or the higher the intensity, the more significant the scattering. According to Eq.(41), the radio bursts would have a low-frequency cutoff at \(\nu_{\rm low}\propto I_{\nu}\), which makes it worthwhile to test the nonlinear effect with the observations of FRB repeaters. In addition to the escaping unscattered photons, high-frequency photons would be produced by the relativistically oscillating electrons, but they lie far outside the band of the incident photons, which is beyond the scope of this work.
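As an illustration of when the nonlinear regime may matter near an FRB source, the sketch below evaluates Eq.(40) for assumed fiducial numbers (an isotropic luminosity of \(10^{42}\) erg s\(^{-1}\), a radius of \(10^{13}\) cm and \(\nu=1\) GHz, which are our own illustrative choices, not values from the text), estimating the field amplitude from the time-averaged Poynting flux of a plane wave, \(L_{\rm iso}/4\pi r^{2}=cE^{2}/8\pi\).

```python
# Illustrative estimate of the strength parameter a of Eq. (40). CGS units.
import numpy as np

e, m_e, c = 4.803e-10, 9.109e-28, 2.998e10     # electron charge, mass, speed of light
L_iso, r, nu = 1e42, 1e13, 1e9                 # assumed illustrative values (not from text)

E = np.sqrt(2.0 * L_iso / (c * r ** 2))        # field amplitude from L_iso/(4 pi r^2) = cE^2/(8 pi)
a = e * E / (2 * np.pi * m_e * c * nu)         # strength parameter, Eq. (40)

print(f"E ~ {E:.0f} statvolt/cm, a ~ {a:.1f} "
      f"-> {'nonlinear (a >~ 1)' if a >= 1 else 'linear (a << 1)'} scattering regime")
```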
## 6 Radiation Spectra Corrected by Interference Processes
The interference processes that can change the radiation spectra mainly include scintillation, gravitational lensing, and plasma lensing. Due to the wave interference, the spectra are coherently enhanced at some frequencies but coherently reduced at some other frequencies.
### Scintillation
Generally, "scintillation" refers to the spectral modulations caused by the coherent combination of multiple waves, and "temporal scattering" refers to the temporal broadening of pulses due to the multipath propagation. For a certain plasma screen, the relation between
scintillation and temporal scattering is \(\Delta\nu_{\rm sci}\tau_{\rm s}\sim 1/2\pi\), where \(\Delta\nu_{\rm sci}\) is the scintillation bandwidth and \(\tau_{\rm s}\) is the scattering time. If the observed temporal scattering is \(\tau_{\rm s}\sim 1\) ms, the corresponding scintillation bandwidth should be \(\Delta\nu_{\rm sci}\sim 160\) Hz, which is much smaller than the observed scintillation bandwidth of \(\Delta\nu_{\rm sci}\sim 1\) MHz. Therefore, for extragalactic FRBs, the observed scintillation is proposed to be mainly contributed by the interstellar medium within the Milky Way, and the observed scattering time is more likely contributed by the circumburst medium or the interstellar medium in the FRB host galaxy.
In this section, we consider that the spectral shape with a typical bandwidth \(\gtrsim 100\) MHz might be attributed to another plasma screen that is different from the regions contributing to the observed temporal scattering and the narrow-bandwidth scintillation with \(\Delta\nu_{\rm sci}\sim 1\) MHz. The relatively wider scintillation bandwidth of \(\Delta\nu_{\rm sci}\gtrsim 100\) MHz requires that the screen be close to the FRB source. We consider that the plasma screen has a thickness of \(\Delta R\), and the power spectrum of electron density fluctuations in the plasma screen is a power law with
\[P(k)=C_{N}^{2}k^{-\beta},\quad\text{for }2\pi L^{-1}\lesssim k\lesssim 2\pi l_{0 }^{-1}, \tag{42}\]
where \(\beta\) is the spectral index of the three-dimensional power spectrum (Kolmogorov turbulence has \(\beta=11/3\)), \(k=2\pi/l\) is the spatial wavenumber, \(l_{0}\) and \(L\) are the inner and outer scales of the inertial range of the turbulence, respectively, and the normalization factor \(C_{N}^{2}\) is given by (e.g., Xu and Zhang, 2017; Yang et al., 2022)
\[C_{N}^{2}\simeq\begin{cases}\dfrac{3-\beta}{2(2\pi)^{4-\beta}}l_{0}^{3-\beta} \delta n_{e}^{2},&\text{for }\beta<3,\\ \dfrac{\beta-3}{2(2\pi)^{4-\beta}}L^{3-\beta}\delta n_{e}^{2},&\text{for } \beta>3,\end{cases} \tag{43}\]
where \(\delta n_{e}^{2}\) is the total mean-squared density fluctuation. We define the diffractive lengthscale \(l_{\rm diff}\) that represents the transverse separation for which the root-mean-squared difference of the wave phases is equal to unit rad. For the electromagnetic wave with a wavelength of \(\lambda\), the diffractive lengthscale \(l_{\rm diff}\) is (e.g., Xu and Zhang, 2017; Yang et al., 2022)
\[l_{\rm diff}=\begin{cases}(f_{1,\alpha}\pi^{2}r_{e}^{2}\lambda^{2}l_{0}^{\beta -4}C_{N}^{2}\Delta R)^{-\frac{1}{2}},&\text{for }l_{\rm diff}<l_{0},\\ (f_{2,\alpha}\pi^{2}r_{e}^{2}\lambda^{2}C_{N}^{2}\Delta R)^{\frac{1}{2-\beta} },&\text{for }l_{\rm diff}>l_{0},\end{cases} \tag{44}\]
where \(r_{e}\) is the classical electron radius, \(f_{1,\alpha}=\Gamma(1-\alpha/2)\), \(f_{2,\alpha}=[\Gamma(1-\alpha/2)/\Gamma(1+\alpha/2)]\,[8/(\alpha 2^{\alpha})]\), and \(\alpha=\beta-2\). For the Kolmogorov turbulence with \(\alpha=5/3\) (\(\beta=11/3\)), one has \(f_{1,\alpha}=5.6\) and \(f_{2,\alpha}=8.9\), respectively.
The scattering angle of the electromagnetic waves is \(\theta_{\rm s}\simeq\lambda/2\pi l_{\rm diff}\), and the transverse scale of the visible part of the plasma screen is (e.g., Yang et al., 2022)
\[l_{\rm s}(\lambda)=\theta_{\rm s}R\simeq\dfrac{\lambda R}{2\pi l_{\rm diff}}, \tag{45}\]
where \(R\) is the distance from the plasma screen to the FRB source. The temporal scattering time could be then estimated by
\[\tau_{\rm s}(\lambda)\simeq\dfrac{l_{\rm s}^{2}}{2Rc}=\dfrac{\lambda^{2}R}{8 \pi^{2}cl_{\rm diff}^{2}}. \tag{46}\]
Using Eq.(44) and \(\Delta R\sim R\), the temporal scattering time satisfies
\[\tau_{\rm s}(\lambda)\simeq\begin{cases}\dfrac{3-\beta}{16(2\pi)^{4-\beta}}\dfrac{f_{1,\alpha}r_{e}^{2}}{cl_{0}}\delta n_{e}^{2}R^{2}\lambda^{4},&l_{\rm diff}<l_{0},\\ \dfrac{(3-\beta)^{\frac{2}{\beta-2}}}{2^{\frac{\beta+4}{\beta-2}}}\dfrac{f_{2,\alpha}^{\frac{2}{\beta-2}}r_{e}^{\frac{4}{\beta-2}}}{c\,l_{0}^{\frac{2(\beta-3)}{\beta-2}}}\delta n_{e}^{\frac{4}{\beta-2}}R^{\frac{\beta}{\beta-2}}\lambda^{\frac{2\beta}{\beta-2}},&l_{\rm diff}>l_{0}.\end{cases} \tag{47}\]
for \(\beta<3\) and
\[\tau_{\rm s}(\lambda)\simeq\begin{cases}\dfrac{\beta-3}{16(2\pi)^{4-\beta}}\dfrac{f_{1,\alpha}r_{e}^{2}l_{0}^{\beta-4}}{cL^{\beta-3}}\delta n_{e}^{2}R^{2}\lambda^{4},&l_{\rm diff}<l_{0},\\ \dfrac{(\beta-3)^{\frac{2}{\beta-2}}}{2^{\frac{\beta+4}{\beta-2}}}\dfrac{f_{2,\alpha}^{\frac{2}{\beta-2}}r_{e}^{\frac{4}{\beta-2}}}{c\,L^{\frac{2(\beta-3)}{\beta-2}}}\delta n_{e}^{\frac{4}{\beta-2}}R^{\frac{\beta}{\beta-2}}\lambda^{\frac{2\beta}{\beta-2}},&l_{\rm diff}>l_{0}.\end{cases} \tag{48}\]
for \(\beta>3\). The corresponding scintillation bandwidth is
\[\Delta\nu_{\rm sci}(\nu)\simeq\frac{1}{2\pi\tau_{\rm s}}\simeq\begin{cases}\dfrac{16(2\pi)^{3-\beta}}{3-\beta}\dfrac{l_{0}}{c^{3}f_{1,\alpha}r_{e}^{2}}\delta n_{e}^{-2}R^{-2}\nu^{4},&l_{\rm diff}<l_{0},\\ \dfrac{2^{\frac{\beta+4}{\beta-2}}}{2\pi(3-\beta)^{\frac{2}{\beta-2}}}\dfrac{l_{0}^{\frac{2(\beta-3)}{\beta-2}}}{c^{\frac{\beta+2}{\beta-2}}f_{2,\alpha}^{\frac{2}{\beta-2}}r_{e}^{\frac{4}{\beta-2}}}\delta n_{e}^{-\frac{4}{\beta-2}}R^{-\frac{\beta}{\beta-2}}\nu^{\frac{2\beta}{\beta-2}},&l_{\rm diff}>l_{0}.\end{cases} \tag{49}\]
for \(\beta<3\) and
\[\Delta\nu_{\rm sci}(\nu)\simeq\frac{1}{2\pi\tau_{\rm s}}\simeq\begin{cases}\dfrac{16(2\pi)^{3-\beta}}{\beta-3}\dfrac{L^{\beta-3}}{c^{3}f_{1,\alpha}r_{e}^{2}l_{0}^{\beta-4}}\delta n_{e}^{-2}R^{-2}\nu^{4},&l_{\rm diff}<l_{0},\\ \dfrac{2^{\frac{\beta+4}{\beta-2}}}{2\pi(\beta-3)^{\frac{2}{\beta-2}}}\dfrac{L^{\frac{2(\beta-3)}{\beta-2}}}{c^{\frac{\beta+2}{\beta-2}}f_{2,\alpha}^{\frac{2}{\beta-2}}r_{e}^{\frac{4}{\beta-2}}}\delta n_{e}^{-\frac{4}{\beta-2}}R^{-\frac{\beta}{\beta-2}}\nu^{\frac{2\beta}{\beta-2}},&l_{\rm diff}>l_{0}.\end{cases} \tag{50}\]
for \(\beta>3\). For the Kolmogorov turbulence with \(\beta=11/3\), the typical value of the scintillation bandwidth is given by Eq.(50)
\[\Delta\nu_{\rm sci}\simeq 130\ {\rm MHz}\ L_{15}^{2/3}l_{0,13}^{1/3}\delta n_{e,3}^{- 2}R_{15}^{-2}\nu_{9}^{4}, \tag{51}\]
where \(l_{\rm diff}<l_{0}\) is satisfied for the typical parameters. Therefore, if the spectral shape with a typical bandwidth of \(\Delta\nu_{\rm sci}\sim 100\ {\rm MHz}\) is attributed to the plasma screen, the environment near the FRB source would be required to be moderately dense (e.g., \(n_{e}\sim\delta n_{e}\sim 10^{3}\ {\rm cm^{-3}}\) at \(R\sim 10^{15}\ {\rm cm}\)) and turbulent.
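The estimate of Eq.(51) can be reproduced by chaining Eqs. (43), (44), (46) and (50) numerically; the sketch below does so for the fiducial parameters quoted above and recovers a scintillation bandwidth of order 100 MHz.

```python
# Scintillation bandwidth from a near-source plasma screen (Kolmogorov turbulence). CGS units.
import numpy as np

c   = 2.998e10                 # speed of light [cm/s]
r_e = 2.818e-13                # classical electron radius [cm]

beta, f1 = 11.0 / 3.0, 5.6     # Kolmogorov index and f_{1,alpha} from the text
L, l0    = 1e15, 1e13          # outer and inner turbulence scales [cm]
dn_e     = 1e3                 # rms density fluctuation [cm^-3]
R = dR   = 1e15                # screen distance from the source, thickness ~ R [cm]
nu       = 1e9                 # observing frequency [Hz]
lam      = c / nu

C_N2   = (beta - 3) / (2 * (2 * np.pi) ** (4 - beta)) * L ** (3 - beta) * dn_e ** 2       # Eq. (43)
l_diff = (f1 * np.pi ** 2 * r_e ** 2 * lam ** 2 * l0 ** (beta - 4) * C_N2 * dR) ** -0.5   # Eq. (44)
tau_s  = lam ** 2 * R / (8 * np.pi ** 2 * c * l_diff ** 2)                                # Eq. (46)
dnu_sci = 1 / (2 * np.pi * tau_s)

print(f"l_diff = {l_diff:.2e} cm (< l_0 = {l0:.0e} cm, consistent branch)")
print(f"tau_s ~ {tau_s*1e9:.2f} ns, scintillation bandwidth ~ {dnu_sci/1e6:.0f} MHz "
      f"(text: ~130 MHz)")
```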
### Gravitational lensing
The time-delay effect of gravitational lensing can modulate the spectrum of a transient (Gould, 1992; Barnacka et al., 2012), which mainly depends on the mass and position of the lensing object. We consider that the lensing object has a mass \(M\). The distances from the lens to the FRB source, from the observer to the lens, and from the observer to the FRB source are \(d_{\rm ls}\), \(d_{\rm ol}\) and \(d_{\rm os}\), respectively. The Einstein radius of gravitational lensing is
\[r_{\rm E}=\left(\frac{4GM}{c^{2}}\frac{d_{\rm ol}d_{\rm ls}}{d_{\rm os}} \right)^{1/2}. \tag{52}\]
We define the source position projected onto the lens plane as \(r_{\rm s}\), then the image positions are given by
\[r_{\pm}=\frac{1}{2}\left(r_{\rm s}\pm\sqrt{r_{\rm s}^{2}+4r_{\rm E}^{2}}\right). \tag{53}\]
The time delay between the two images is
\[\delta t=\frac{2GM}{c^{3}}\left[\left(\frac{r_{+}}{r_{\rm E}}\right)^{2}- \left(\frac{r_{-}}{r_{\rm E}}\right)^{2}+2\ln\left|\frac{r_{+}}{r_{-}}\right| \right]. \tag{54}\]
The phase difference between the rays from the two images is
\[\delta\phi=2\pi\nu\delta t. \tag{55}\]
A gravitational lensing event with remarkable spectral modulation requires \(\delta\phi\sim 1\) and \(r_{\rm s}\lesssim r_{\rm E}\), leading to
\[\delta t=\frac{1}{2\pi\nu}\simeq 1.6\times 10^{-10}\ {\rm s}\ \nu_{9}^{-1}, \tag{56}\] \[M\sim\frac{c^{3}\delta t}{G}\simeq 2.0\times 10^{-5}M_{\odot} \delta t_{-10}. \tag{57}\]
The time delay between the two images is much shorter than the durations of FRBs, and the lens is required to be a planet-like object. The amplitude contributed by the image at position \(r_{\pm}\) is \(\mathcal{A}_{\pm}\propto\exp(i\phi_{\pm})/\sqrt{|1-r_{\rm E}^{4}/r_{\pm}^{4}|}\) (e.g., Barnacka et al., 2012). The amplification is obtained by summing the amplitudes of the two images, \(A\equiv|\mathcal{A}|^{2}=|\mathcal{A}_{+}+\mathcal{A}_{-}|^{2}\), which is given by
\[A =\frac{1}{|1-r_{\rm E}^{4}/r_{+}^{4}|}+\frac{1}{|1-r_{\rm E}^{4}/r _{-}^{4}|}\] \[+\frac{2\cos\delta\phi}{\sqrt{|1-r_{\rm E}^{4}/r_{+}^{4}|}\sqrt{| 1-r_{\rm E}^{4}/r_{-}^{4}|}}. \tag{58}\]
In Figure 9, we plot the amplification \(A\) of the gravitational lensing as a function of frequency. We take the distances as \(d_{\rm ls}=1\) kpc, \(d_{\rm ol}=1\) Gpc and \(d_{\rm os}=1\) Gpc, respectively. The top panel takes the lens mass as \(M=10^{-5}M_{\odot}\), and the black, red, and blue lines correspond to \(r_{\rm s}/r_{\rm E}=0.5,1,1.5\), respectively. The bottom panel takes the source projected position as \(r_{\rm s}/r_{\rm E}=1.0\). The black, red, and blue lines correspond to the lens mass of \(M=10^{-6}M_{\odot},10^{-5}M_{\odot},10^{-4}M_{\odot}\), respectively. We can see that the spectral modulation is remarkable
Figure 9: The amplification of the gravitational lensing as a function of frequency. The top panel takes the lens mass as \(M=10^{-5}M_{\odot}\). The black, red, and blue lines correspond to \(r_{\rm s}/r_{\rm E}=0.5,1.0,1.5\), respectively. The bottom panel takes the source projected position as \(r_{\rm s}/r_{\rm E}=1.0\). The black, red, and blue lines correspond to the lens mass of \(M=10^{-6}M_{\odot},10^{-5}M_{\odot},10^{-4}M_{\odot}\), respectively. The distances from the lens to the FRB source, from the observer to the lens, and from the observer to the FRB source are taken as \(d_{\rm ls}=1\) kpc, \(d_{\rm ol}=1\) Gpc and \(d_{\rm os}=1\) Gpc, respectively.
We can see that the spectral modulation appears at the GHz band only when \(M\sim 10^{-5}M_{\odot}\) and \(r_{\rm s}\sim r_{\rm E}\), as pointed out above.
Although a planet-like object can generate the spectral modulation at the GHz band, the observed variation of the spectra of an FRB repeater does not seem to support such a scenario, for the following reasons. For the above distance parameters, the typical lensing timescale is
\[t_{\rm lens}=\frac{r_{\rm E}}{v}\sim 10^{4}~{}{\rm s}~{}M_{\odot,-5}v_{7}^{-1}d_{ \rm os,Gpc}^{-1/2}d_{\rm ol,Gpc}^{1/2}d_{\rm ls,kpc}^{1/2}, \tag{59}\]
where \(v\) is the transverse velocity of the lens. This timescale is much shorter than the several-year active period of an FRB repeater. Observations of FRB repeaters show that burst-to-burst spectral variation is always present during the active periods. If the variation were due to gravitational lensing, one would not see persistent spectral variation over the long term, because a gravitational lensing event is most likely a one-time occurrence. On the other hand, this timescale is much longer than the waiting time of an FRB repeater with significant spectral variation, so lensing cannot explain the remarkable burst-to-burst variation of the spectra on extremely short times, e.g., the spectral variation between the two components of FRB 200428 (CHIME/FRB Collaboration et al., 2020). Therefore, spectral modulation by gravitational lensing can be ruled out.
### Plasma lensing
Plasma lensing plays an important role in the "extreme scattering events" as shown in the light curves of some radio pulsars and active galactic nuclei (Fiedler et al., 1987; Bannister et al., 2016). Similar to gravitational lensing discussed in Section 6.2, plasma lensing can also cause the time-delay effect. However, the time delay in plasma lensing is frequency-dependent due to the plasma dispersion (e.g., Er et al., 2020), which would lead to a different spectral modulation compared to that of gravitational lensing.
Following Cordes et al. (2017), we consider a plasma structure whose dispersion measure satisfies \(\mathrm{DM}(x)=\mathrm{DM}_{l}\exp(-x^{2}/a^{2})\), which yields a phase perturbation \(\phi_{\lambda}=-\lambda r_{\rm e}\mathrm{DM}(x)\), where \(\lambda\) is the wavelength and \(r_{\rm e}\) is the classical electron radius. The distances from the lens to the FRB source, from the observer to the lens, and from the observer to the FRB source are \(d_{\rm ls}\), \(d_{\rm ol}\) and \(d_{\rm os}\), respectively. The transverse coordinates in the source, lens, and observer's planes are \(x_{\rm s}\), \(x\) and \(x_{\rm o}\), and one defines the dimensionless coordinates \(u_{\rm s}=x_{\rm s}/a\), \(u=x/a\) and \(u_{\rm o}=x_{\rm o}/a\), respectively. The effective transverse offset can be expressed as \(u^{\prime}=(d_{\rm ol}/d_{\rm os})u_{\rm s}+(d_{\rm ls}/d_{\rm os})u_{\rm o}\). The lens equation in geometric optics gives (Clegg et al., 1998; Cordes et al., 2017)
\[u[1+\alpha\exp(-u^{2})]=u^{\prime}, \tag{60}\]
where the parameter \(\alpha\) is given by
\[\alpha=\frac{\lambda^{2}r_{\rm e}\mathrm{DM}_{l}}{\pi a^{2}}\left(\frac{d_{ \rm ol}d_{\rm ls}}{d_{\rm os}}\right). \tag{61}\]
There are either one or three solutions for \(u\) for a given offset \(u^{\prime}\) based on Eq.(60) (see the detailed discussion in Cordes et al., 2017). Due to the limited spatial resolution of radio telescopes, the separated images might be extremely difficult to resolve. In the geometrical optics regime, the focusing or defocusing of incident wavefronts yields an amplification (Clegg et al., 1998; Cordes et al., 2017)
\[A=|1+\alpha(1-2u^{2})\exp(-u^{2})|^{-1}. \tag{62}\]
At \(\alpha=\alpha_{\rm min}=e^{3/2}/2\) and \(|u|=\sqrt{3/2}\), the amplification reaches \(A\rightarrow\infty\)(Cordes et al., 2017). The actual physical optics gains should be finite, and the maximum value is
\[A_{\rm max} \sim\frac{a}{r_{\rm F}}=a\sqrt{\frac{2\pi d_{\rm os}}{\lambda d_{ \rm ol}d_{\rm ls}}}\] \[\simeq 3.9~{}\nu_{9}^{1/2}a_{\rm AU,-3}d_{\rm os,Gpc}^{1/2}d_{\rm ol, Gpc}^{-1/2}d_{\rm ls,pc}^{-1/2}, \tag{63}\]
where \(r_{\rm F}=\sqrt{d_{\rm ol}d_{\rm ls}/2\pi d_{\rm os}}\) is the Fresnel scale, \(a_{\rm AU,-3}=a/10^{-3}\)AU, \(d_{\rm os,Gpc}=d_{\rm os}/\)Gpc, \(d_{\rm ol,Gpc}=d_{\rm ol}/\)Gpc, and \(d_{\rm ls,pc}=d_{\rm ls}/\)pc. The focal frequency is defined as
\[\nu_{\rm f} =\nu\left(\frac{\alpha}{\alpha_{\rm min}}\right)^{1/2}=\frac{c}{a }\left(\frac{r_{\rm e}\mathrm{DM}_{l}}{\pi\alpha_{\rm min}}\frac{d_{\rm ol}d_{ \rm ls}}{d_{\rm os}}\right)^{1/2}\] \[\simeq 0.12~{}\mathrm{GHz}~{}\mathrm{DM}_{l,-8}^{1/2}a_{\rm AU,-3}^{- 1/2}d_{\rm os,Gpc}^{d/1/2}d_{\rm ol,Gpc}^{1/2}d_{\rm ls,pc}^{1/2}, \tag{64}\]
where \(\mathrm{DM}_{l,-8}=\mathrm{DM}_{l}/(10^{-8}~{}\mathrm{pc}~{}\mathrm{cm}^{-3})\). The frequency below \(\nu_{\rm f}\) will show ray crossings (Cordes et al., 2017).
According to Eq.(60) and Eq.(62), one can obtain the amplification as a function of the dimensionless frequency \(\nu/\nu_{\rm f}\), as shown in Figure 5 of Cordes et al. (2017). There are two caustic peaks in the amplified spectra with widths of \((1-10)\%\) of the observation frequency \(\nu\), which are contributed by the individual subimages produced by the Gaussian plasma lens. The amplification is suppressed below unity at \(\nu\ll\nu_{\rm f}\) and asymptotes to unity at \(\nu\gg\nu_{\rm f}\), away from the caustics.
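A minimal numerical sketch of Eqs.(60)-(62) is given below (ours): it locates all real images on a grid and sums their geometric-optics gains, using \(\alpha=\alpha_{\rm min}(\nu_{\rm f}/\nu)^{2}\) implied by Eqs.(61) and (64). The offset \(u^{\prime}\) and the frequency grid are arbitrary illustrative choices.

```python
import numpy as np

# Sketch: geometric-optics amplification of a Gaussian plasma lens,
# Eqs.(60)-(62), with alpha = alpha_min * (nu_f / nu)^2 from Eqs.(61) and (64).
alpha_min = np.exp(1.5) / 2.0

def image_positions(u_prime, alpha, u_grid=np.linspace(-10, 10, 200001)):
    """All real roots u of u*(1 + alpha*exp(-u^2)) = u_prime (one or three images)."""
    f = u_grid * (1.0 + alpha * np.exp(-u_grid**2)) - u_prime
    s = np.where(np.sign(f[:-1]) != np.sign(f[1:]))[0]        # brackets containing roots
    return u_grid[s] - f[s] * (u_grid[s + 1] - u_grid[s]) / (f[s + 1] - f[s])

def total_amplification(u_prime, nu_over_nuf):
    alpha = alpha_min / nu_over_nuf**2                         # caustics appear for nu < nu_f
    u = image_positions(u_prime, alpha)
    return np.sum(1.0 / np.abs(1.0 + alpha * (1 - 2 * u**2) * np.exp(-u**2)))  # Eq.(62)

freqs = np.linspace(0.3, 3.0, 300)                             # nu / nu_f
A = np.array([total_amplification(1.2, x) for x in freqs])     # caustic peaks near/below nu_f
```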
Similar to the scenario of gravitational lensing discussed in Section 6.2, if the time delays of the subimaged bursts are smaller than their durations, the subimages would interfere, leading to oscillating structures in the spectra. Due to the plasma dispersion along the line of sight, the arrival times of the subimages are chromatic; here we are only interested in the perturbations from the plasma lens, which add to the delays from other plasma components (e.g., the interstellar medium, the intergalactic medium, etc.). The typical delay timescale of the plasma lensing is about (Cordes et al., 2017)
\[\delta t\sim\frac{cr_{e}\mathrm{DM}_{l}}{2\pi\nu^{2}}\simeq 4.1\times 10^{-1 1}\ \mathrm{s}\ \mathrm{DM}_{l,-8}\nu_{9}^{-2}. \tag{65}\]
The phase difference of the rays from the two images is \(\delta\phi=2\pi\nu\delta t\). A plasma lensing event with remarkable spectral modulation requires \(\delta\phi\sim 1\), \(\nu\sim\nu_{\mathrm{f}}\) and \(A_{\mathrm{max}}>1\), leading to
\[\mathrm{DM}_{l}\sim\frac{\nu}{cr_{e}}\simeq 3.8\times 10^{-8} \ \mathrm{pc}\ \mathrm{cm}^{-3}\nu_{9}, \tag{66}\] \[a_{\mathrm{AU},-3}^{-1}d_{\mathrm{ls},\mathrm{pc}}^{1/2}\sim 4.1. \tag{67}\]
Therefore, a remarkable spectral modulation requires that the lensing object has an average electron number density of \(n_{e}\sim 10\ \mathrm{cm}^{-3}\) and a typical scale of \(a\sim 10^{-3}\ \mathrm{AU}\) at a distance of \(d_{\mathrm{ls}}\sim 1\ \mathrm{pc}\) from the FRB source.
The variation timescale of the spectra depends on the changes in amplification through caustics due to the motions of the source and the observer. The effective transverse velocity, which combines the motions of the source and the observer, is \(v_{\perp}=(d_{\mathrm{ol}}/d_{\mathrm{os}})v_{\mathrm{s},\perp}+(d_{\mathrm{ls}}/d_{\mathrm{os}})v_{\mathrm{o},\perp}\). Using Eq.(60) and Eq.(62) and taking the derivatives, one has \(\delta u^{\prime}\sim(\delta A/A)A^{-2}\) for \(u\sim 1\) when the amplification reaches its maximum value. Using \(\delta u^{\prime}\sim v_{\perp}t_{\mathrm{cau}}/a\), the timescale of a caustic crossing (also the variation timescale of the spectra) is estimated as
\[t_{\mathrm{cau}}\sim\frac{a(\delta A/A)}{v_{\perp}A^{2}}\simeq 15\ \mathrm{s}\ a_{ \mathrm{AU},-3}v_{\perp,7}^{-1}A_{1}^{-2}(\delta A/A). \tag{68}\]
Therefore, plasma lensing can cause burst-to-burst variation of the spectra on a much shorter timescale than gravitational lensing. However, the extremely short spectral variation between the two components of FRB 200428 might require a relatively extreme amplification of \(A\sim 100\).
## 7 Discussions and Conclusions
Some FRBs show significantly narrow intrinsic spectra within the telescope's bandwidth (Pleunis et al., 2021; Kumar et al., 2021; Zhou et al., 2022), which is an important clue for revealing the radiation mechanism and the environment of FRBs. In this work, we investigated the physical origin of the narrow spectra of FRBs from the perspectives of radiation mechanisms, coherent processes, radiative transfer, and interference processes, and draw the following conclusions:
1. Without considering the finite bandwidth of a telescope, the relative spectral bandwidth defined by FWHM or FWTM only depends on the intrinsic spectral shape. Some FRBs (e.g., FRB 20190711A, Kumar et al. (2021)) show that their spectra must be intrinsically narrow, with \(\Delta\nu/\nu_{0}\ll 0.2\) for FWHM. This gives a strong constraint on the radiation mechanisms of FRBs, because most radiation mechanisms with low-frequency spectral index \(\alpha_{l}<3\) lead to wide spectra with \(\Delta\nu/\nu_{0}>0.2\) for FWHM. In reality, the telescope's narrow bandwidth usually makes some observed bursts' spectra incomplete, and the distribution of the observed relative spectral bandwidths would be affected by the limited telescope bandwidth.
2. An intrinsic narrow spectrum with \(\Delta\omega/\omega_{0}\ll 1\) (\(\omega_{0}\) is the peak frequency and \(\Delta\omega\) is the spectral bandwidth) implies that the electromagnetic wave is quasi-sinusoidal with a typical frequency of \(\omega\sim\omega_{0}\) on short timescales and has a typical pulse duration of \(T\sim 4\pi/\Delta\omega\) on long timescales. We generally discuss the spectral shapes and polarization distributions from the perspective of radiation mechanisms. For the radiation mechanisms involving the relativistic particle's perpendicular acceleration, the radiation features (including the spectrum and polarization) depend on the relation between the particle's deflection angle \(\psi\) and the radiation beaming angle \(1/\gamma\). The scenarios with \(\gamma\psi\gg 1\) and \(\gamma\psi\ll 1\) lead to different features of the spectrum and polarization.
3. For the scenario of \(\gamma\psi\gg 1\), the observer would see radiation from short segments of the particle's trajectory that are nearly parallel to the line of sight. Such a scenario is applicable to the curvature radiation and the traditional (large-pitch-angle) synchrotron radiation. The intrinsic spectra of these mechanisms are usually wide, and the intrinsic linear/circular polarization degree mainly depends on the angle between the viewing direction and the trajectory plane. If \(\gamma\psi\ll 1\), the particle's entire trajectory would be seen by the observer over a long term. In particular, for the small-pitch-angle synchrotron radiation by a single charged particle, the radiation is only emitted at a certain frequency within an extremely narrow band for a certain viewing direction, and circular polarization dominates. Furthermore, we discussed some astrophysical scenarios that might involve radiation processes with \(\gamma\psi\gg 1\) and \(\gamma\psi\ll 1\). In the magnetosphere of a neutron star, the radiation process with \(\gamma\psi\gg 1\) occurs in the inner magnetosphere, where the corresponding radiation mechanism is curvature radiation, while the radiation process with \(\gamma\psi\ll 1\) occurs in the outer magnetosphere, where the corresponding mechanism is small-pitch-angle synchrotron radiation. On the other hand,
both scenarios with \(\gamma\psi\gg 1\) and \(\gamma\psi\ll 1\) might occur in the magnetized shocked medium, corresponding to traditional synchrotron radiation and small-pitch-angle radiation, respectively. However, the generation of small-pitch-angle radiation requires that the particles be injected almost parallel to the field lines, which seems fine-tuned.
4. We find that coherent radiation processes can generate narrow spectra. For the bunching mechanisms (e.g., coherent curvature radiation), one possibility to generate a narrow spectrum is that the radiating bunches are quasi-periodically distributed. The quasi-periodic distribution of bunches might be due to quasi-monochromatic Langmuir waves or quasi-periodically oscillating pair creation in the charge-starved region. For a train of bunches separated by a period of \(1/\omega_{m}\), the total radiation will be coherently amplified at the peak frequencies of \(\omega=2n\pi\omega_{m}\) with \(n\in\mathbb{Z}^{+}\), independent of the radiation mechanism of the bunches. In particular, we notice that the bunches are not required to be positioned at exactly the same distance from each other. A small random relative phase of \(\delta\phi_{j}\ll\pi\) can still lead to coherent amplification at \(\omega=2n\pi\omega_{m}\). However, once \(\delta\phi_{j}\sim\pi\), the total radiation would become completely incoherent. On the other hand, the maser mechanism can naturally give rise to a narrow spectrum, because a negative optical depth \(\tau_{\nu}\) causes an amplification by a factor of \(\exp(|\tau_{\nu}|)\). An intermediate negative optical depth naturally makes the spectrum narrow.
5. The radiative transfer processes mainly include absorption and scattering. Based on the current observations, most processes do not seem to significantly change the observed FRB spectra or to directly produce a narrow spectral shape. The absorption processes can cause the radio bursts emitted during a short term to have the same low-frequency cutoff at the absorption frequency, which is inconsistent with the observations of some FRBs, e.g., the burst-to-burst spectral variation of the two components of FRB 200428. Inverse Compton scattering suppresses the specific intensity in all bands but cannot change the spectral shape. Induced Compton scattering redistributes the incident photons toward low frequencies, and the photons are eventually absorbed by the absorption processes, leading to the same issue as the absorption. The nonlinear scattering depends on the radiation intensity and frequency and makes the low-frequency cutoff proportional to the intensity, which makes the nonlinear effect worth testing with observations of FRB repeaters.
6. We discussed some interference processes including scintillation, gravitational lensing, and plasma lensing. If the scintillation significantly changes the observed shape, the scintillation bandwidth is required to be \(\Delta\nu_{\rm sci}\gtrsim 100\) MHz, which is much larger than the observed narrow-bandwidth scintillation with \(\Delta\nu_{\rm sci}\sim 1\) MHz. Thus, involving another plasma screen is necessary to modulate the spectra at a bandwidth of \(\gtrsim 100\) MHz. The scintillation with \(\Delta\nu_{\rm sci}\gtrsim 100\) MHz requires that the corresponding plasma screen be within a distance of \(\sim 10^{15}\) cm from the FRB source and that the plasma medium be intermediately dense and turbulent. Gravitational lensing and plasma lensing can modulate the FRB spectra when the time delay between subimaged bursts is about \(\delta t\sim 10^{-10}\) s for GHz waves. For gravitational lensing, such a condition requires that the lensing object has a mass of \(\sim 10^{-5}M_{\odot}\), i.e., a planet-like object. However, since a gravitational lensing event is most likely one-time but the spectrum variation is always present during the several-year active period of an FRB repeater, such a scenario could be ruled out. Besides, gravitational lensing cannot explain the remarkable burst-to-burst variation of the spectra in a time shorter than the typical lensing timescale. The delay time of \(\delta t\sim 10^{-10}\) s requires that the plasma lens have an average electron number density of \(n_{e}\sim 10\) cm\({}^{-3}\) and a typical scale of \(a\sim 10^{-3}\) AU at a distance of \(d_{\rm ls}\sim 1\) pc from the FRB source. These conditions are moderate to explain the observed FRB spectra. Meanwhile, the typical lensing timescale of tens of seconds can cause the spectrum variation during a short time, although the extremely short spectrum variation as shown in FRB 200428 might be due to a relatively extreme amplification.
We thank the anonymous referee for the helpful comments and suggestions. We also thank Bing Zhang and Yue Wu for reading the initial manuscript and for their helpful comments and acknowledge the discussions with Yi Feng, Jin-Lin Han, Kejia Lee, Ze-Nan Liu, Yuanhong Qu, Wei-Yang Wang, and Yong-Kun Zhang. This work is supported by the National Natural Science Foundation of China grant No.12003028 and the National SKA Program of China (2022SKA0130100).
## Appendix A Radiation by a single charged particle
In this appendix, we generally discuss the features of the spectra and polarizations of the radiation by charged accelerated particles, which are applicable to most radiation mechanisms in various astrophysical scenarios. We consider that a particle with a charge \(q\) moves on a trajectory \(\vec{r}_{0}(t^{\prime})\) with velocity \(\vec{\beta}(t^{\prime})c\) and acceleration \(\dot{\vec{\beta}}(t^{\prime})c\), where \(t^{\prime}\) is the retarded time. The line-of-sight direction is \(\vec{n}\) and the distance between the particle and the observer is \(R\). The radiation field at \((\vec{r},t)\) is given by (e.g., Rybicki & Lightman, 1986)
\[\vec{E}(\vec{r},t) = \frac{q}{c}\left[\frac{\vec{n}}{\kappa^{3}R}\times\{(\vec{n}- \vec{\beta})\times\dot{\vec{\beta}}\}\right]_{\rm rec},\] \[\vec{B}(\vec{r},t) = \left[\vec{n}\times\vec{E}\right]_{\rm rec},\] (A1)
where \(\kappa\equiv 1-\vec{n}\cdot\vec{\beta}\), the quantities in the square bracket, \(\left[...\right]_{\rm rec}\), are evaluated at the retarded time \(t^{\prime}\). The radiation energy per unit frequency interval per unit solid angle is
\[\mathcal{E}_{\omega} \equiv \frac{dW}{d\omega d\Omega}=\frac{q^{2}}{4\pi^{2}c}\left|\int_{- \infty}^{\infty}\left[\kappa^{-3}\vec{n}\times\{(\vec{n}-\vec{\beta})\times \dot{\vec{\beta}}\}\right]_{\rm rec}e^{i\omega t}dt\right|^{2}\] (A2) \[= \frac{q^{2}\omega^{2}}{4\pi^{2}c}\left|\int\vec{n}\times(\vec{n} \times\vec{\beta})\exp\left[i\omega(t^{\prime}-\vec{n}\cdot\vec{r}_{0}(t^{ \prime})/c)\right]dt^{\prime}\right|^{2}\] (A3) \[= \frac{e^{2}\omega^{2}}{4\pi^{2}c}\left|-\vec{\epsilon_{\parallel} }A_{\parallel}(\omega)+\vec{\epsilon_{\perp}}A_{\perp}(\omega)\right|^{2},\] (A4)
where \(A_{\parallel}\) and \(A_{\perp}\) are two orthogonal components perpendicular to the line of sight. The linear polarization degree is
\[\pi_{L}=\left|\frac{\left[(A_{\parallel}A_{\parallel}^{*}-A_{\perp}A_{\perp}^ {*})^{2}+(A_{\parallel}A_{\perp}^{*}+A_{\perp}A_{\parallel}^{*})^{2}\right]^{ 1/2}}{A_{\parallel}A_{\parallel}^{*}+A_{\perp}A_{\perp}^{*}}\right|,\] (A5)
and the circular polarization degree is
\[\pi_{V}=\left|\frac{1}{i}\frac{A_{\parallel}A_{\perp}^{*}-A_{\perp}A_{ \parallel}^{*}}{A_{\parallel}A_{\parallel}^{*}+A_{\perp}A_{\perp}^{*}}\right|.\] (A6)
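Eqs.(A5)-(A6) translate directly into a small numerical helper; the sketch below is our own illustration and simply takes the two complex amplitudes and returns the two polarization degrees.

```python
import numpy as np

# Sketch: linear and circular polarization degrees from the two complex
# amplitudes A_parallel and A_perp, following Eqs.(A5)-(A6).
def polarization_degrees(A_par, A_perp):
    I = (A_par * np.conj(A_par) + A_perp * np.conj(A_perp)).real    # total intensity
    Q = (A_par * np.conj(A_par) - A_perp * np.conj(A_perp)).real
    U = (A_par * np.conj(A_perp) + A_perp * np.conj(A_par)).real
    V = ((A_par * np.conj(A_perp) - A_perp * np.conj(A_par)) / 1j).real
    pi_L = np.sqrt(Q**2 + U**2) / I                                  # Eq.(A5)
    pi_V = np.abs(V) / I                                             # Eq.(A6)
    return pi_L, pi_V

# Example: equal amplitudes with a 90-degree phase offset -> fully circular.
print(polarization_degrees(1.0 + 0j, 1j))   # (0.0, 1.0)
```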
It should be noted that \(\mathcal{E}_{\omega}\) corresponds to the total radiation energy in an entire pulse. If the radiation pulse repeats with an average period \(T\), the above radiation energy can be converted to the radiation power (Rybicki & Lightman, 1986)
\[\mathcal{P}_{\omega}\equiv\frac{1}{T}\frac{dW}{d\omega d\Omega}=\frac{ \mathcal{E}_{\omega}}{T}.\] (A7)
The radiation spectrum by a single charged particle with the perpendicular acceleration depends on the relation between the particle's deflection angle \(\psi\) and the radiation beaming angle \(\sim 1/\gamma\)(Landau & Lifshitz, 1975), as shown in Figure 4, which will be summarized in the following subsections.
### Deflection angle larger than radiation beaming angle
In the scenario with \(\gamma\psi\gg 1\), the radiation is equivalent to the radiation by the particle moving instantaneously at constant speed on an appropriate circular path (Jackson, 1998), as shown in the panel (a) of Figure 4. We consider that the acceleration curvature radius is \(\rho\), the angle between the line of sight and the trajectory plane is \(\theta\), and the radiation angular frequency is \(\omega\). The radiation energy per unit frequency interval per unit solid angle is given by the above equation with (Jackson, 1998)
\[A_{\parallel}(\omega) = \frac{2i}{\sqrt{3}}\frac{\rho}{c}\left(\frac{1}{\gamma^{2}}+\theta ^{2}\right)K_{2/3}(\xi),\] (A8) \[A_{\perp}(\omega) = \frac{2}{\sqrt{3}}\frac{\rho\theta}{c}\left(\frac{1}{\gamma^{2}}+ \theta^{2}\right)^{1/2}K_{1/3}(\xi),\] (A9)
and
\[\xi\equiv\frac{1}{2}\hat{\omega}\left(1+\gamma^{2}\theta^{2}\right)^{3/2}\quad \text{with}\quad\hat{\omega}\equiv\frac{\omega}{\omega_{c}},\] (A10)
where \(\omega_{c}=3\gamma^{3}c/2\rho\) is the typical radiation frequency. The radiation energy per unit frequency interval per unit solid angle is
\[\mathcal{E}_{\omega}=\frac{3e^{2}}{4\pi^{2}c}\gamma^{2}\hat{\omega}^{2}(1+ \gamma^{2}\theta^{2})^{2}\left[K_{2/3}^{2}(\xi)+\frac{1}{1/\gamma^{2}\theta^{2 }+1}K_{1/3}^{2}(\xi)\right].\] (A11)
The spectrum \(\mathcal{E}_{\omega}\) of a single radiating particle follows a power-law distribution with \(\mathcal{E}_{\omega}\propto\hat{\omega}^{2/3}\) at low frequency and decays exponentially at high frequency, so it is intrinsically wide (e.g., Jackson, 1998; Yang & Zhang, 2018), \(\Delta\omega/\omega_{0}\sim 1\) (see Section 2.1), as shown in Figure 10. Meanwhile, the larger the viewing angle \(\gamma\theta\), the lower the cutoff frequency. The spectrum of the radiation by multiple particles is also usually wide, which has been discussed in detail in Yang & Zhang (2018, 2023), and we will not repeat it here. According to Eq.(A5), Eq.(A6), Eq.(A8) and Eq.(A9), the linear polarization degree is
\[\pi_{L}=\left|\frac{K_{2/3}^{2}(\xi)-1/(1/\gamma^{2}\theta^{2}+1)K_{1/3}^{2}( \xi)}{K_{2/3}^{2}(\xi)+1/(1/\gamma^{2}\theta^{2}+1)K_{1/3}^{2}(\xi)}\right|,\] (A12)
and the circular polarization degree is
\[\pi_{V}=\left|\frac{2/(1/\gamma^{2}\theta^{2}+1)^{1/2}K_{2/3}(\xi)K_{1/3}(\xi) }{K_{2/3}^{2}(\xi)+1/(1/\gamma^{2}\theta^{2}+1)K_{1/3}^{2}(\xi)}\right|.\] (A13)
Similar to the spectrum given by \(\mathcal{E}_{\omega}\), both \(\pi_{L}\) and \(\pi_{V}\) are also functions of the variables \((\hat{\omega},\gamma\theta)\). In Figure 11, we plot the linear and circular polarization degrees \(\Pi_{i}\) with \(i=L\) and \(V\) as functions of the dimensionless frequency \(\hat{\omega}\) and the viewing angle \(\gamma\theta\), respectively. For a certain viewing angle \(\gamma\theta\), the higher the frequency \(\hat{\omega}\), the lower (higher) the linear (circular) polarization degree. For a certain frequency \(\hat{\omega}\), the larger the viewing angle \(\gamma\theta\), the lower (higher) the linear (circular) polarization degree. Thus, a high circular polarization degree should be attributed to off-beam observation (Wang et al., 2022; Liu et al., 2023; Qu & Zhang, 2023).
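Since Eqs.(A10)-(A13) involve only modified Bessel functions, the curves of Figures 10 and 11 can be regenerated with a short script such as the sketch below (our own illustration; the overall prefactor of \(\mathcal{E}_{\omega}\) is dropped).

```python
import numpy as np
from scipy.special import kv

# Sketch: spectrum and polarization degrees for gamma*psi >> 1,
# following Eqs.(A10)-(A13); the overall prefactor of E_omega is dropped.
def spectrum_and_polarization(w_hat, gtheta):
    """w_hat = omega/omega_c, gtheta = gamma*theta."""
    xi = 0.5 * w_hat * (1.0 + gtheta**2)**1.5                    # Eq.(A10)
    k23, k13 = kv(2.0 / 3.0, xi), kv(1.0 / 3.0, xi)
    g = gtheta**2 / (1.0 + gtheta**2)                            # = 1/(1/(gamma theta)^2 + 1)
    E = w_hat**2 * (1 + gtheta**2)**2 * (k23**2 + g * k13**2)    # Eq.(A11), arbitrary units
    pi_L = np.abs((k23**2 - g * k13**2) / (k23**2 + g * k13**2))        # Eq.(A12)
    pi_V = np.abs(2 * np.sqrt(g) * k23 * k13 / (k23**2 + g * k13**2))   # Eq.(A13)
    return E, pi_L, pi_V

w_hat = np.logspace(-2, 1, 300)
for gtheta in (0.1, 1.0, 3.0):    # the three viewing angles shown in Figures 10 and 11
    E, pL, pV = spectrum_and_polarization(w_hat, gtheta)
```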
A polarization measurement at least requires a burst flux larger than the telescope's threshold. For the scenario with \(\gamma\psi\gg 1\), according to Eq.(12) and using the asymptotic property of the Bessel function, \(K_{\nu}(x)\sim\sqrt{\pi/2x}\exp(-x)\) for \(x\gg 1\), the radiation energy falls off in angle approximately as
\[\mathcal{E}_{\omega}\sim\mathcal{E}_{\omega,0}\exp\left(-\hat{\omega}\gamma^{ 3}\theta^{3}\right),\] (A14)
Figure 10: The spectrum given by Eq.(12) that is applicable for the scenario with the particle’s deflection angle larger than the radiation beaming angle, \(\gamma\psi\gg 1\). The black, red, and blue lines correspond to \(\gamma\theta=0.1,1\) and \(3\), respectively. The unit of \(\mathcal{E}_{\omega}\) is arbitrary.
where \({\cal E}_{\omega,0}\equiv{\cal E}_{\omega}(\theta=0)\) and \({\cal E}_{\omega}\propto K_{\nu}^{2}(\xi)\) is used. We consider that only the burst with radiation energy
\[{\cal E}_{\omega}>\eta_{c}^{-1}{\cal E}_{\omega,0}\quad\mbox{with }\eta_{c}\geqslant 1\] (A15)
could be observed due to the constraint of a detector's sensitivity, leading to
\[\gamma\theta<\gamma\theta_{\rm th}\equiv\left(\frac{1}{\hat{\omega}}\ln\eta_{c }\right)^{1/3}.\] (A16)
Only the bursts with viewing angle \(\theta<\theta_{\rm th}\) are observable. Notice that the distribution of the intrinsic burst energies is neglected, and here we are mainly interested in the suppression effect by the viewing direction.
Figure 11: The relations between the polarization degree \(\Pi\) and the dimensionless frequency \(\hat{\omega}\) and the viewing angle \(\gamma\theta\) for a single radiating particle with \(\gamma\psi\gg 1\). The blue and red lines correspond to the linear and circular polarization degrees, respectively. The top panel shows the polarization degree \(\Pi\) as the function of the dimensionless frequency \(\hat{\omega}\). The solid, dashed, and dotted lines correspond to \(\gamma\theta=0.1,1\) and \(3\), respectively. The bottom panel shows the polarization degree \(\Pi\) as the function of the viewing angle \(\gamma\theta\). The solid, dashed, and dotted lines correspond to \(\hat{\omega}=0.1,1\) and \(3\), respectively.
Figure 12: The cumulative distribution of the linear (top panels) and circular (bottom panels) polarization degrees for the radiation by multiple particles with \(\gamma\psi\gg 1\). Notice that the cumulative distribution of the circular polarization degrees is \(N(>\Pi_{V}=0)=1\) at \(\Pi_{V}=0\) and decreases significantly once \(\Pi_{V}>0\). The component of \(N(>\Pi_{V}=0)=1\) at \(\Pi_{V}=0\) is not shown in the bottom panels. The distribution function of the viewing direction is assumed to satisfy Eq.(A19). In the left panels: the solid, dashed, and dotted lines correspond to \(\eta_{c}=10,100\) and \(1000\), respectively for \(\hat{\omega}=1\) and \(\gamma\Theta_{j}=10\). In the middle panels: the solid, dashed, and dotted lines correspond to \(\gamma\Theta_{j}=3,10\) and \(100\), respectively for \(\hat{\omega}=1\) and \(\eta_{c}=100\). In the right panels: the solid, dashed, and dotted lines correspond to \(\hat{\omega}=0.1,1\) and \(3\), respectively for \(\gamma\Theta_{j}=10\) and \(\eta_{c}=100\).
We consider that multiple radiating particles are uniformly distributed in a fan beam with an opening angle \(\Theta_{j}\), and the viewing angle is \(\Theta\). According to Eq.(A12) and Eq.(A13), the polarization is 100% linear when the viewing direction is on the trajectory plane, and most radiation energy is emitted near the trajectory plane. The larger the viewing angle, the higher the circular polarization degree. Since the circular polarizations on the two sides of the trajectory plane are opposite, at the center of the particles' beam the coherent sum of the circular polarizations cancels, so that linear polarization dominates. A detailed analysis was also given in Wang et al. (2022) and Liu et al. (2023). Therefore, the linear polarization degree could be approximately given by
\[\Pi_{L}\simeq\begin{cases}1,&\text{for }\Theta\leqslant\Theta_{j},\\ \pi_{L}(\Theta-\Theta_{j}),&\text{for }\Theta>\Theta_{j},\end{cases}\] (A17)
and the circular polarization degree could be approximately given by
\[\Pi_{V}\simeq\begin{cases}0,&\text{for }\Theta\leqslant\Theta_{j},\\ \pi_{V}(\Theta-\Theta_{j}),&\text{for }\Theta>\Theta_{j},\end{cases}\] (A18)
Since the viewing direction relative to the trajectory plane is random, the number of bursts emitted within \((\Theta,\Theta+d\Theta)\) is 8
Footnote 8: Notice that the distribution of the viewing direction should not be \(N(\Theta)d\Theta=\sin\Theta d\Theta\) in this scenario, because the viewing direction is measured relative to the trajectory plane of the accelerated particle; see Figure 14.9 and Section 14 of Jackson (1998) for a detailed discussion.
\[N(\Theta)d\Theta=\frac{1}{(\Theta_{j}+\theta_{\text{th}})}d\Theta,\] (A19)
where \(\theta_{\text{th}}\) is given by Eq.(A16), and \(\Theta_{j}+\theta_{\text{th}}\) corresponds to the threshold angle above which the observed flux would be less than the telescope's flux threshold. Therefore, the cumulative distribution of the linear and circular polarization degrees are
\[N(>\Pi_{L}) =\frac{\gamma\Theta_{j}+(\gamma\theta)(\Pi_{L})}{\gamma\Theta_{j} +\gamma\theta_{\text{th}}},\] (A20) \[N(>\Pi_{V}) =\frac{\gamma\theta_{\text{th}}-(\gamma\theta)(\Pi_{V})}{(\gamma \Theta_{j}+\gamma\theta_{\text{th}})},\] (A21)
for \((\gamma\theta)(\Pi_{i})\leqslant\gamma\theta_{\text{th}}\) with \(i=L,V\), respectively, where \((\gamma\theta)(\Pi_{i})\) is the inverse function of \(\pi_{i}(\gamma\theta)\) given by Eq.(A12) and Eq.(A13).
In Figure 12, we plot the cumulative distributions of the linear and circular polarization degrees according to Eq.(A20) and Eq.(A21). The cumulative distributions of the polarization degrees depend on the telescope's flux threshold \(\eta_{c}\), the particles' beaming angle \(\gamma\Theta_{j}\), and the observed frequency \(\hat{\omega}\). We can see that: (1) The higher the telescope's sensitivity (i.e., the larger the value of \(\eta_{c}\)), the lower the number fraction between the linearly and circularly polarized bursts. The reason is that most highly circularly polarized bursts have relatively low fluxes due to large values of \(\gamma\theta\). (2) The larger the particles' beaming angle, the higher the number fraction between the linearly and circularly polarized bursts. If \(\gamma\Theta_{j}\gg 1\), most bursts would have \(\Pi_{L}\sim 1\) and \(\Pi_{V}\sim 0\). A moderate number fraction between linearly polarized bursts and circularly polarized bursts as shown in FRB 20201124A (Xu et al., 2022; Jiang et al., 2022) requires that \(\gamma\Theta_{j}\sim 1\). (3) The higher the observed frequency, the higher the number fraction between the linearly and circularly polarized bursts. The reason is that the threshold viewing angle \(\gamma\theta_{\text{th}}\) is significantly suppressed at high frequency (see Eq.(A16)), leading to a relatively larger number of bursts from within the particle beaming angle \(\Theta_{j}\).
### Deflection angle smaller than radiation beaming angle
In the scenario with \(\gamma\psi\ll 1\), the particle with a charge \(q\) moves along the line of sight with an almost constant velocity \(\vec{\beta}\) but with a varying acceleration \(\dot{\vec{\beta}}\), as shown in the panel (b) of Figure 4. The radiation energy per unit frequency interval per unit solid angle at the line-of-sight direction \(\vec{n}\) could be written as (Landau and Lifshitz, 1975, also see Appendix A)
\[\mathcal{E}_{\omega}=\frac{q^{2}}{4\pi^{2}c}\left(\frac{\omega}{\hat{\omega} }\right)^{4}\left|\vec{n}\times\left[(\vec{n}-\vec{\beta})\times\dot{\vec{ \beta}}_{\hat{\omega}}\right]\right|^{2}\] (A22)
with
\[\dot{\vec{\beta}}_{\tilde{\omega}}\equiv\int_{-\infty}^{\infty}\dot{\vec{\beta}}e^ {i\tilde{\omega}t^{\prime}}dt^{\prime}\quad\text{and}\quad\tilde{\omega}\equiv(1 -\vec{n}\cdot\vec{\beta})\omega.\] (A23)
where \(t^{\prime}\) is the retarded time. In the ultrarelativistic case, the longitudinal acceleration is smaller than the transverse acceleration, \(\dot{\vec{\beta}}_{\parallel}/\dot{\vec{\beta}}_{\perp}\sim 1/\gamma^{2}\ll 1\). Thus, \(\dot{\vec{\beta}}\) and \(\vec{\beta}\) are approximately perpendicular to each other, \(\dot{\vec{\beta}}\perp\vec{\beta}\). Since both \(\vec{n}\) and \(\vec{\beta}\) are approximately constant in the above equation, the properties of the spectrum and polarization are mainly determined by the acceleration \(\dot{\vec{\beta}}\).
First, we discuss the general properties of the polarization in the scenario with \(\gamma\psi\ll 1\). Since both \(\vec{n}\) and \(\vec{\beta}\) are approximately constant in Eq.(A22), the spectral properties are mainly determined by the acceleration \(\dot{\vec{\beta}}\). Choosing a coordinate system \(S\) with \(z\)-direction pointing toward the observer and with the particle velocity on the \(y-z\) plane, thus \(\vec{n}=(0,0,1)\) and \(\vec{\beta}=(0,\sin\theta,\cos\theta)\). In the coordinate system \(S^{\prime}\) with \(z^{\prime}\)-direction pointing toward the particle velocity \(\vec{\beta}\) and \(x^{\prime}\)-axis parallel with \(x\)-axis, the acceleration could be written as \(\dot{\vec{\beta}}^{\prime}=(b\cos\phi,b\sin\phi,0)\), where \(b=|\dot{\vec{\beta}}^{\prime}|\) and \(\phi\) is the azimuth angle of \(\dot{\vec{\beta}}^{\prime}\) in the \(x^{\prime}-y^{\prime}\) plane that is perpendicular to \(z^{\prime}\)-direction. Thus, the acceleration in the \(S\) coordinate system is \(\dot{\vec{\beta}}=(b\cos\phi,b\sin\phi\cos\theta,-b\sin\phi\sin\theta)\). According to Eq.(A22), the radiation polarization property is determined by
\[\vec{n}\times\left[(\vec{n}-\vec{\beta})\times\dot{\vec{\beta}}\right]=\left[- b\cos\phi(1-\cos\theta),b\sin\phi(1-\cos\theta),0\right],\] (A24)
leading to
\[A_{\parallel}(\tilde{\omega}) \propto-\int b(t)\cos\phi(t)e^{i\tilde{\omega}t}dt,\] (A25) \[A_{\perp}(\tilde{\omega}) \propto\int b(t)\sin\phi(t)e^{i\tilde{\omega}t}dt.\] (A26)
Based on Eq.(A5) and Eq.(A6), we can easily prove that: (1) If the acceleration is always on a straight line perpendicular to \(\vec{\beta}\), i.e., \(\phi=\text{const.}\), the polarization is fully linear with \(\pi_{L}=1\); (2) If the acceleration rotates with a constant angular velocity \(\Omega\) on the plane perpendicular to \(\vec{\beta}\), i.e., \(\phi(t)=\Omega t\) and \(b(t)=\text{const}\), the polarization is fully circular with \(\pi_{V}=1\).
In order to obtain the accurate spectrum and polarization, a simpler and more intuitive method is to calculate the radiation in the particle comoving frame \(K^{\prime}\), which moves with velocity \(\beta_{\parallel}=\beta\cos\psi\) relative to the observer frame \(K\), and then transform the radiation to the \(K\) frame via Lorentz and Doppler transformations. In the \(K^{\prime}\) frame, the particle moves with the velocity
\[\beta^{\prime}\simeq\frac{\beta_{\perp}}{\gamma(1-\beta_{\parallel}^{2})}= \frac{(\gamma^{2}-1)^{1/2}\sin\psi}{\cos^{2}\psi+\gamma^{2}\sin^{2}\psi} \simeq\gamma\psi\ll 1.\] (A27)
Thus, the particle in the \(K^{\prime}\) frame is non-relativistic for \(\gamma\psi\ll 1\). In many astrophysical scenarios, the perpendicular acceleration of a charged particle is usually attributed to the Lorentz force of magnetic fields, while the intrinsic variation timescale of the magnetic field is longer than \(\Delta t_{\text{acc}}\). In this case, the radiation in the \(K^{\prime}\) frame is cyclotron-like. We consider that in the \(K^{\prime}\) frame the acceleration curvature radius is \(\rho^{\prime}\), and the angle between the line of sight and the trajectory plane is \(\theta^{\prime}\). We define
\[\zeta\equiv m\beta^{\prime}\sin\theta^{\prime}\] (A28)
with \(m\) as the harmonic number, then in the \(K^{\prime}\) frame, the radiation power per unit solid angle in the \(m\)-th harmonic is (Landau & Lifshitz, 1975; Jackson, 1998)
\[\frac{dP^{\prime}_{m}}{d\Omega}=\frac{e^{2}\omega_{0}^{4}m^{2}}{8\pi^{3}c} \left|-\vec{\epsilon_{\parallel}}A^{\prime}_{\parallel}(\omega)+\vec{\epsilon_ {\perp}}A^{\prime}_{\perp}(\omega)\right|^{2}\] (A29)
with
\[A^{\prime}_{\parallel}(\omega) = \frac{2\pi i\rho^{\prime}}{c}\frac{dJ_{m}(\zeta)}{d\zeta},\] (A30) \[A^{\prime}_{\perp}(\omega) = \frac{2\pi\rho^{\prime}}{c}\frac{\cot\theta^{\prime}}{\beta^{ \prime}}J_{m}(\zeta),\] (A31)
where the fundamental frequency is
\[\omega_{0}=\frac{\beta^{\prime}c}{\rho^{\prime}}.\] (A32)
In particular, if the gyration motion is caused by the Lorentz force of the magnetic field, one has \(\omega_{0}=\omega^{\prime}_{B}=eB/m_{e}c\), where \(\omega^{\prime}_{B}\) is the cyclotron frequency in the \(K^{\prime}\) frame. According to Eq.(A5), Eq.(A6), Eq.(A30) and Eq.(A31), the linear polarization degree is
\[\pi_{L}=\left|\frac{[dJ_{m}(\zeta)/d\zeta]^{2}-(\cot^{2}\theta^{ \prime}/\beta^{\prime 2})J_{m}^{2}(\zeta)}{[dJ_{m}(\zeta)/d\zeta]^{2}+(\cot^{2} \theta^{\prime}/\beta^{\prime 2})J_{m}^{2}(\zeta)}\right|,\] (A33)
and the circular polarization degree is
\[\pi_{V}=\left|\frac{2[dJ_{m}(\zeta)/d\zeta](\cot\theta^{\prime}/ \beta^{\prime})J_{m}(\zeta)}{[dJ_{m}(\zeta)/d\zeta]^{2}+(\cot^{2}\theta^{ \prime}/\beta^{\prime 2})J_{m}^{2}(\zeta)}\right|.\] (A34)
Due to \(\beta^{\prime}\ll 1\), the emission at all but the fundamental frequency \(m=1\) can be neglected, leading to an extremely narrow spectrum. Using the property of the Bessel function, \(J_{m}(x)\sim[1/\Gamma(m+1)](x/2)^{m}\) for \(0<x\ll(m+1)^{1/2}\), the radiation power reduces to
\[\mathcal{P}^{\prime}_{\omega}\equiv\frac{dP^{\prime}_{m}}{d\omega d\Omega}= \frac{e^{2}\omega_{0}^{2}\beta^{\prime 2}}{8\pi c}(1+\cos^{2}\theta^{ \prime})\delta(\omega^{\prime}-\omega_{0}).\] (A35)
The linear polarization degree is
\[\pi^{\prime}_{L}=\left|\frac{1-\cos^{2}\theta^{\prime}}{1+\cos^{ 2}\theta^{\prime}}\right|,\] (A36)
and the circular polarization degree is
\[\pi^{\prime}_{V}=\left|\frac{2\cos\theta^{\prime}}{1+\cos^{2} \theta^{\prime}}\right|.\] (A37)
Using the following transformations
\[\cos\theta^{\prime}=\frac{\cos\theta-\beta_{\parallel}}{1-\beta_ {\parallel}\cos\theta}\simeq\frac{1-\gamma^{2}\theta^{2}}{1+\gamma^{2}\theta^ {2}},\] (A38) \[\omega^{\prime}=\omega\frac{1-\beta_{\parallel}\cos\theta}{(1- \beta_{\parallel}^{2})^{1/2}}\simeq\frac{\omega}{2\gamma}(1+\gamma^{2}\theta^ {2}),\] (A39) \[\mathcal{P}_{\omega}=\mathcal{P}^{\prime}_{\omega}\frac{(1-\beta _{\parallel}^{2})^{3/2}}{(1-\beta_{\parallel}\cos\theta)^{3}}\simeq\mathcal{P} ^{\prime}_{\omega}\frac{8\gamma^{3}}{(1+\gamma^{2}\theta^{2})^{3}},\] (A40)
the received radiation power is
\[\mathcal{P}_{\omega}=\frac{e^{2}\gamma^{2}\psi^{2}\omega^{4}}{4 \pi c\omega_{0}^{2}}\left(1-\frac{\omega}{\gamma\omega_{0}}+\frac{\omega^{2}} {2\gamma^{2}\omega_{0}^{2}}\right)\delta\left(\omega-\frac{2\gamma\omega_{0}}{ 1+\gamma^{2}\theta^{2}}\right).\] (A41)
Note that the radiation power \(\mathcal{P}_{\omega}\) in Eq.(A40) is the received specific power in the \(K\) frame; a factor of \(\gamma^{3}(1+\beta\cos\theta^{\prime})^{3}\) must be applied to the emitted specific power, together with \(d\omega=\gamma(1+\beta\cos\theta^{\prime})d\omega^{\prime}\), to obtain the received specific radiation power (see Section 4.8 of Rybicki & Lightman 1986). We define
\[\bar{\omega}\equiv\frac{\omega}{\gamma\omega_{0}},\] (A42)
then the received radiation power can be rewritten as
\[\mathcal{P}_{\omega}=\frac{e^{2}\gamma^{5}\psi^{2}\omega_{0}}{4\pi c}\bar{\omega} ^{4}\left(1-\bar{\omega}+\frac{1}{2}\bar{\omega}^{2}\right)\delta\left(\bar{ \omega}-\frac{2}{1+\gamma^{2}\theta^{2}}\right),\] (A43)
and the radiation only occurs at the direction \(\theta\) with
\[\gamma\theta=\left(\frac{2}{\bar{\omega}}-1\right)^{1/2}.\] (A44)
Notice that the particle's deflection angle \(\psi\) only affects the normalization of the radiation power, not the typical frequency or the spectral shape. According to Eq.(A43), for a certain viewing direction \(\gamma\theta\), the emission is only at the frequency \(\bar{\omega}=2/(1+\gamma^{2}\theta^{2})\). Thus, the radiation spectrum of a single particle is extremely narrow. Since most radiation energy is emitted in directions satisfying \(\gamma\theta\lesssim 1\), the typical radiation frequency is \(\bar{\omega}\sim\) a few (corresponding to \(\omega\sim\gamma\omega_{0}\)), which is consistent with the above result estimated by Eq.(15).
Since the polarization degree is Lorentz invariant, according to Eq.(A36), Eq.(A37) and Eq.(A38), the linear and circular polarization degrees are
\[\pi_{L} =\left|\frac{2\gamma^{2}\theta^{2}}{1+\gamma^{4}\theta^{4}}\right| =\left|\frac{2\bar{\omega}-\bar{\omega}^{2}}{\bar{\omega}^{2}-2 \bar{\omega}+2}\right|,\] (A45) \[\pi_{V} =\left|\frac{1-\gamma^{4}\theta^{4}}{1+\gamma^{4}\theta^{4}}\right| =\left|\frac{2\bar{\omega}-2}{\bar{\omega}^{2}-2\bar{\omega}+2} \right|.\] (A46)
In particular, \(\gamma\theta>1\) and \(\gamma\theta<1\) correspond to the opposite (left and right) circular polarizations, respectively. In Figure 13, we plot the linear and circular polarization degrees \(\Pi_{i}\) with \(i=L\) and \(V\) as functions of the dimensionless frequency \(\bar{\omega}\) and viewing angle \(\gamma\theta\), respectively. The blue and red lines correspond to the linear and circular polarization degrees, respectively. The top and bottom panels show the polarization degree \(\Pi\) as functions of the dimensionless frequency \(\bar{\omega}\) and the viewing angle \(\gamma\theta\), respectively. We can see that high linear polarization (low circular polarization) mainly occurs at \(\bar{\omega}\sim 1\) and \(\gamma\theta\sim 1\); otherwise, low linear polarization (high circular polarization) is dominant.
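The closed forms of Eqs.(A44)-(A46) can be evaluated directly, as in the following sketch of ours, which reproduces the behaviour shown in Figure 13.

```python
import numpy as np

# Sketch: closed-form polarization degrees for gamma*psi << 1 as functions of
# w_bar = omega/(gamma*omega_0), Eqs.(A44)-(A46); emission exists only for w_bar <= 2.
w_bar = np.linspace(0.01, 2.0, 500)
den = w_bar**2 - 2 * w_bar + 2
pi_L = np.abs((2 * w_bar - w_bar**2) / den)     # Eq.(A45): reaches 1 at w_bar = 1
pi_V = np.abs((2 * w_bar - 2) / den)            # Eq.(A46): -> 1 away from w_bar = 1
gtheta = np.sqrt(2.0 / w_bar - 1.0)             # Eq.(A44): viewing angle for each w_bar
```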
Furthermore, we consider that multiple radiating particles are uniformly distributed in a three-dimensional beam with an opening angle \(\Theta_{j}\). The number of the charged particles within \((\Theta,\Theta+d\Theta)\) is
\[N_{e}(\Theta)d\Theta=N_{e,0}\sin\Theta d\Theta.\] (A47)
Figure 13: The relations between the polarization degree \(\Pi\) and the dimensionless frequency \(\bar{\omega}\) and the viewing angle \(\gamma\theta\) for a single radiating particle with \(\gamma\psi\ll 1\). The blue and red lines correspond to the linear and circular polarization degrees, respectively. The top panel shows the polarization degree \(\Pi\) as the function of the dimensionless frequency \(\bar{\omega}\). The bottom panel shows the polarization degree \(\Pi\) as the function of the viewing angle \(\gamma\theta\). The dimensionless frequency \(\bar{\omega}\) and the viewing angle \(\gamma\theta\) are related via \(\bar{\omega}=2/(1+\gamma^{2}\theta^{2})\).
When the viewing direction points to the beaming center \(\Theta=0\), the radiation spectrum by multiple particles might be approximately given by
\[\mathcal{P}_{\omega}(\Theta=0)\propto\int_{0}^{\Theta_{j}}\mathcal{P }_{\omega}(\gamma\Theta)\sin\Theta d\Theta\] \[\propto\bar{\omega}^{4}-2\bar{\omega}^{3}+2\bar{\omega}^{2}\quad \text{for}\ \bar{\omega}_{\text{min}}=\frac{2}{1+\gamma^{2}\Theta_{j}^{2}}<\bar{\omega}<2,\] (A48)
where \(\Theta_{j}\ll 1\) is assumed in the above equation. Outside the particles' beam \(\Theta\gg\Theta_{j}\), the radiation spectrum by multiple particles might be approximately given by
\[\mathcal{P}_{\omega}(\Theta)\propto\int_{\Theta-\Theta_{j}}^{ \Theta}\mathcal{P}_{\omega}(\gamma\Theta)d\Theta\propto\frac{\bar{\omega}^{5}-2 \bar{\omega}^{4}+2\bar{\omega}^{3}}{(2\bar{\omega}-\bar{\omega}^{2})^{1/2}}\] \[\text{for}\ \bar{\omega}_{\text{min}}=\frac{2}{1+\gamma^{2}\Theta^{2}} <\bar{\omega}<\bar{\omega}_{\text{max}}=\frac{2}{1+\gamma^{2}(\Theta-\Theta_{ j})^{2}}.\] (A49)
Notice that a factor of \(\sin\Theta\) should be included in the above equation for a viewing direction outside the particles' beam. In Figure 14, we plot the spectra for \(\Theta=0\) and \(\Theta\gg\Theta_{j}\) given by Eq.(A48) and Eq.(A49), respectively. For the case of \(\Theta=0\), the peak frequency is at \(\bar{\omega}\sim 2\), near which the spectrum is narrow due to \(\mathcal{P}_{\omega}\propto\bar{\omega}^{4}\). For the case of \(\Theta\gg\Theta_{j}\), the spectrum is much narrower, i.e., \(\Delta\bar{\omega}/\bar{\omega}\simeq 2\Theta_{j}/\Theta\ll 1\). Meanwhile, the larger the viewing angle \(\Theta\), the narrower the spectrum and the lower the peak radiation power. Finally, we note that the above discussion assumes that all radiating particles have the same Lorentz factor \(\gamma\). A distribution of \(\gamma\) among the particles would make the spectra relatively wider.
The observed linear and circular polarization degrees depend on the gyration directions of the radiating particles and whether the viewing direction is within \(\Theta_{j}\). If all radiating particles have the same gyration directions, clockwise or anticlockwise, the linear and circular polarization degrees of the multiple particles might be written as
\[\Pi_{L}\simeq\begin{cases}0,&\text{for}\ \Theta\leqslant\Theta_{j},\\ \pi_{L}(\Theta-\Theta_{j}),&\text{for}\ \Theta>\Theta_{j},\end{cases}\] (A50)
and
\[\Pi_{V}\simeq\begin{cases}1,&\text{for}\ \Theta\leqslant\Theta_{j},\\ \pi_{V}(\Theta-\Theta_{j}),&\text{for}\ \Theta>\Theta_{j},\end{cases}\] (A51)
Figure 14: The spectra that are applicable for multiple radiating particles with \(\gamma\psi\ll 1\). The blue and red line corresponds to the on-beam case given by Eq.(A48) and the off-beam case given by Eq.(A49), respectively. The red solid, dashed and dotted lines correspond to \(\gamma\Theta=3,5\) and \(8\), respectively. Here \(\gamma\Theta_{j}=1\) is taken. The unit of \(\mathcal{P}_{\omega}\) is arbitrary. For easy comparison with different scenarios, the spectra of the off-beam cases are multiplied by an arbitrary factor in this figure.
respectively, similar to the discussion in Section 3.1. Notice that, unlike the above scenario with \(\gamma\psi\gg 1\), the on-beam radiation for \(\gamma\psi\ll 1\) is dominated by \(\sim 100\%\) circular polarization. On the other hand, if the gyration directions of the particles are random, their polarizations would cancel out. In the following discussion, we are mainly interested in the former scenario.
Since the viewing direction is random within the solid angle, the number of observable bursts within \((\Theta,\Theta+d\Theta)\) is given by
\[N(\Theta)d\Theta=\sin\Theta d\Theta\quad\text{for}\quad 0\leqslant\Theta \leqslant\Theta_{j}+\theta_{\text{th}},\] (A52)
otherwise, \(N(\Theta)d\Theta=0\), where \(\Theta_{j}+\theta_{\text{th}}\) corresponds to the lower limit of the frequency \(\bar{\omega}\) or the lower limit of \(\mathcal{P}_{\omega}\). According to the polarization properties given by Eq.(A45) and Eq.(A46), the polarization is \(\sim 100\%\) circular except near \(\gamma\theta\sim 1\). Therefore, the cumulative distribution of the linear polarization degrees for multiple radiating particles is
\[N(>\Pi_{L})\simeq\frac{\sin\Theta_{j}\Delta\theta_{L}}{1-\cos(\Theta_{j}+ \theta_{\text{th}})},\] (A53)
where \(\Delta\theta_{L}=\theta_{L,2}-\theta_{L,1}\) with \(\theta_{L,i}\) solved from Eq.(A45), and \(\Delta\theta_{L}\ll 1\) is used in the above equation; solving Eq.(A45) for \(\gamma\theta\) gives
\[\Delta\theta_{L}=\frac{1}{\gamma\Pi_{L}^{1/2}}\left[\left(1+\sqrt{1-\Pi_{L}^{ 2}}\right)^{1/2}-\left(1-\sqrt{1-\Pi_{L}^{2}}\right)^{1/2}\right].\] (A54)
Similarly, the cumulative distribution of the circular polarization degree is
\[N(>\Pi_{V})\simeq 1-\frac{\sin\Theta_{j}\Delta\theta_{V}}{1-\cos(\Theta_{j}+ \theta_{\text{th}})}\] (A55)
where \(\Delta\theta_{V}\) is solved by Eq.(A46), leading to
\[\Delta\theta_{V}=\frac{2}{\gamma}\left(\frac{1-\Pi_{V}}{1+\Pi_{V}}\right)^{1/ 4}.\] (A56)
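The cumulative distributions of Eqs.(A53)-(A56) are likewise closed-form; the sketch below (ours) evaluates them for parameter values similar to those of Figure 15.

```python
import numpy as np

# Sketch: cumulative polarization distributions for gamma*psi << 1,
# Eqs.(A53)-(A56), assuming all particles share the same gyration direction.
def cumulative_distributions(Pi, gamma=100.0, gTheta_j=3.0, gtheta_th=100.0):
    Theta_j, theta_th = gTheta_j / gamma, gtheta_th / gamma
    norm = 1.0 - np.cos(Theta_j + theta_th)
    s = np.sqrt(1.0 - Pi**2)
    dthL = (np.sqrt(1 + s) - np.sqrt(1 - s)) / (gamma * np.sqrt(Pi))   # Eq.(A54)
    dthV = (2.0 / gamma) * ((1 - Pi) / (1 + Pi))**0.25                 # Eq.(A56)
    N_gt_PiL = np.sin(Theta_j) * dthL / norm                           # Eq.(A53)
    N_gt_PiV = 1.0 - np.sin(Theta_j) * dthV / norm                     # Eq.(A55)
    return N_gt_PiL, N_gt_PiV

Pi = np.linspace(0.01, 0.99, 99)
NL, NV = cumulative_distributions(Pi)
# N(>Pi_V) stays close to 1: almost every burst is ~100% circularly polarized.
```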
In Figure 15, we plot the cumulative distribution of the linear (top panel) and circular (bottom panel) polarization degrees, respectively. We can see that for multiple radiating particles, the polarization of this radiation mechanism is almost \(100\%\) circular. Thus, if the radiation mechanism of FRBs corresponds to this scenario with \(\gamma\psi\ll 1\), the observed moderate number fraction between linearly polarized bursts and circularly polarized bursts of FRB 20201124A (Jiang et al., 2022) might be due to propagation effects, because the intrinsic polarization of this mechanism is predicted to be \(\sim 100\%\) circularly polarized. Notice that this result is based on the assumption that all radiating particles have the same gyration direction. If the gyration directions of the particles are random, their polarizations would cancel out.
Figure 15: The cumulative distribution of the linear (top panel) and circular (bottom panel) polarization degrees. The distribution function of the viewing direction is assumed to satisfy Eq.(A47). The black solid, dashed and dotted lines correspond to \(\gamma\Theta_{j}=3,10\) and \(30\), respectively for \(\gamma\theta_{\text{th}}=100\). The solid red, blue and black lines correspond to \(\gamma\theta_{\text{th}}=10,30\) and \(100\), respectively for \(\gamma\Theta_{j}=3\). \(\gamma=100\) is taken here. Different from Figure 12, the \(y\)-axis is \(\log[N(>\Pi_{L})]\) here. |
2306.11648 | Harnessing the Power of Adversarial Prompting and Large Language Models
for Robust Hypothesis Generation in Astronomy | This study investigates the application of Large Language Models (LLMs),
specifically GPT-4, within Astronomy. We employ in-context prompting, supplying
the model with up to 1000 papers from the NASA Astrophysics Data System, to
explore the extent to which performance can be improved by immersing the model
in domain-specific literature. Our findings point towards a substantial boost
in hypothesis generation when using in-context prompting, a benefit that is
further accentuated by adversarial prompting. We illustrate how adversarial
prompting empowers GPT-4 to extract essential details from a vast knowledge
base to produce meaningful hypotheses, signaling an innovative step towards
employing LLMs for scientific research in Astronomy. | Ioana Ciucă, Yuan-Sen Ting, Sandor Kruk, Kartheik Iyer | 2023-06-20T16:16:56Z | http://arxiv.org/abs/2306.11648v1 | Harnessing the Power of Adversarial Prompting and Large Language Models for Robust Hypothesis Generation in Astronomy
###### Abstract
This study investigates the application of Large Language Models (LLMs), specifically GPT-4, within Astronomy. We employ in-context prompting, supplying the model with up to 1000 papers from the NASA Astrophysics Data System, to explore the extent to which performance can be improved by immersing the model in domain-specific literature. Our findings point towards a substantial boost in hypothesis generation when using in-context prompting, a benefit that is further accentuated by adversarial prompting. We illustrate how adversarial prompting empowers GPT-4 to extract essential details from a vast knowledge base to produce meaningful hypotheses, signaling an innovative step towards employing LLMs for scientific research in Astronomy.
Machine Learning, ICML, ICML
## 1 Introduction
Significant strides in Natural Language Processing (NLP) have been made possible through attention mechanisms and transformer architecture, leading to the development of Large Language Models (LLMs) such as GPT-4 (Vig, 2019; Brown et al., 2020; Ouyang et al., 2022). These models exhibit extraordinary aptitude in understanding, generating, and interacting with human language. They go beyond deciphering complex linguistic patterns to making non-trivial deductions and forming relationships across diverse contexts (e.g., Devlin et al., 2018; Elkins and Chun, 2020).
Two intriguing facets of these models have stirred excitement for their potential that surpasses their initial intended applications. Firstly, despite LLMs' propensity to sample posterior means of languages--a factor that can occasionally result in non-trivial hallucination problems--improved performance has been witnessed through in-context prompting (Wang et al., 2022; Wei et al., 2022; Zhang et al., 2022). This enhancement enables them to handle complex, domain-specific tasks (e.g., Radford and Narasimhan, 2018; Brown et al., 2020; Lu et al., 2022). Secondly, these models, when combined with revolutionary technologies like Langchain1 to provide extensive context to the LLMs, expand their functionality across a wide range of fields.
Footnote 1: [https://python.langchain.com](https://python.langchain.com)
While methods like the use of adapters (He et al., 2021; Karimi Mahabadi et al., 2021; Hu et al., 2021) can remarkably augment performance for domain-specific tasks through fine-tuning the LLMs, these approaches often prove challenging for institutions without sufficient resources. In this study, we delve into the application of low-cost in-context prompting (Chen et al., 2021; Xie et al., 2021) in the realm of astronomy.
Astronomy offers a compelling case study due to three key reasons. Firstly, although the field is rich in literature, the inclusion of such text in the vast corpus used to train GPT models is probably limited. This lack leads to noticeable hallucination problems when employing naive versions of LLMs (Ciuca et al., 2023). Secondly, unlike domains that focus more on intensive, detailed studies, advancements in astronomy often stem from "connecting the dots" across different subfields due to the universality of underlying physical processes at various scales. This feature fosters the hypothesis that extensive in-context prompting could significantly enhance hypothesis generation if LLMs are initially exposed to a broad range of literature.
Lastly, astronomy's longstanding "open sky" policy makes it an ideal candidate for in-context prompting research. This policy ensures that most data sets are publicly available immediately or after a short proprietary period (Almeida et al., 2023; Fabricius et al., 2021). Further, the field possesses a comprehensive, well-curated literature database. The internet has enabled the archiving of astronomical knowledge,
with NASA's Astrophysics Data System hosting over 15 million resources, effectively covering the entire spectrum of astronomical literature utilized by researchers (Accomazzi et al., 2015; Borganan and Wofford, 2021). This accessibility greatly aids our engagement with the astronomy database.
## 2 Literature retrieval and pre-processing
For this study, we focused our exploration on Galactic Astronomy, utilizing our domain expertise to assess the results. We selected Galactic Astronomy as our focal area due to its integrative nature, fusing knowledge from diverse subfields. The study of galaxy evolution not only incorporates the fundamental understanding of stars and stellar populations (Aouad et al., 2020; Sanchez et al., 2022) but it is also influenced by large-scale cosmological environmental factors (Singh et al., 2020; Whitney et al., 2021). Therefore, studying galaxy evolution provides both exciting challenges and abundant possibilities for harnessing implicit knowledge embedded within the vast network of literature.
Our study includes a selection of 1,000 papers related to Galactic Astronomy from the NASA ADS (Accomazzi et al., 2015) Astronomy collection. Our chosen papers were identified through a database query based on criteria such as 'Gaia' appearing in the abstract, publications from the last ten years (since the Gaia launch date), being refereed journal articles, and the inclusion of relevant keywords such as 'galaxy kinematics and dynamics', 'galaxy structure', 'galaxy disk', 'galaxy halo', 'galaxy abundances', and 'galaxy evolution'. Our initial query yielded more than 1,000 papers, leading us to prioritize the most recent publications. Our curated collection contains details such as the ArxivID, Publication Date, Authors, Title, Abstract, Citation, and Key, providing a comprehensive dataset for our analysis. The full dataset as well as the codebase used in our analysis can be found here for reproducibility2.
Footnote 2: [https://github.com/errai34/IdeaGPT](https://github.com/errai34/IdeaGPT)
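For concreteness, a selection of this kind can be retrieved through the NASA ADS search API. The sketch below is our own illustration: the query string, field list, and token handling are assumptions that mirror the criteria described above, not the exact query used for this dataset.

```python
import requests

# Sketch (ours): query the NASA ADS search API for recent, refereed,
# Gaia-related Galactic-astronomy papers. The query string and field list
# are illustrative assumptions, not the exact query used for this dataset.
ADS_URL = "https://api.adsabs.harvard.edu/v1/search/query"
query = ('abs:"Gaia" AND year:2013-2023 AND property:refereed AND '
         'keyword:("galaxy kinematics and dynamics" OR "galaxy evolution")')
params = {"q": query,
          "fl": "bibcode,title,abstract,author,pubdate,citation_count",
          "rows": 1000,
          "sort": "date desc"}
headers = {"Authorization": "Bearer YOUR_ADS_TOKEN"}   # placeholder API token

docs = requests.get(ADS_URL, params=params, headers=headers).json()["response"]["docs"]
```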
## 3 Astro-GPT Workflow
Our exploration capitalizes on the abilities of OpenAI's GPT-4 model (OpenAI, 2023). The first step in in-context prompting involves pre-processing 1,000 papers from the Galactic Astronomy corpus using the langchain library. Each paper, transformed from PDF to text, is subsequently segmented into 'chunks' of 1,000 tokens each. These segmented units are then embedded using OpenAI's text-ada-002 embedding model.
The retrieval phase begins with converting the chat history and input query into a standalone input, which is then embedded. A similarity search is conducted between the embedded query and the vector database. We then use langchain's contextual compression to filter out irrelevant information from the individual chunks. These final texts, combined with the standalone input, form the foundation upon which a GPT-4 model, having a context window of approximately 8,000 tokens, formulates ideas. To scrutinize the model's prowess, we design an adversarial experiment. This involves a secondary GPT-4 model that critiques the idea, highlighting its frailties and suggesting potential enhancements. This feedback is reformulated within a feedback-question structure by a third GPT-4 instance and returned to the initial model.
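A schematic of this ingestion and retrieval step, written against the langchain and openai Python interfaces available at the time of writing, is sketched below; the module paths, class names, and parameter choices are assumptions that may differ between library versions, and the code is illustrative rather than the exact pipeline used here.

```python
import glob
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chat_models import ChatOpenAI
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor

# Sketch of the ingestion + retrieval pipeline described above (assumed local
# directory of downloaded PDFs; chunking here is by characters, not tokens).
docs = []
for pdf in sorted(glob.glob("papers/*.pdf")):
    docs.extend(PyPDFLoader(pdf).load())

chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)
store = FAISS.from_documents(chunks, OpenAIEmbeddings())   # defaults to text-embedding-ada-002

llm = ChatOpenAI(model_name="gpt-4", temperature=0.7)
retriever = ContextualCompressionRetriever(
    base_compressor=LLMChainExtractor.from_llm(llm),        # drops irrelevant text per chunk
    base_retriever=store.as_retriever(search_kwargs={"k": 8}),
)
context = retriever.get_relevant_documents("standalone question built from the chat history")
```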
Figure 1: This figure illustrates the adversarial in-context prompting workflow using OpenAI’s GPT-4 model. The procedure begins with the pre-processing and embedding of Galactic Astronomy papers. A similarity search is conducted on the embedded query, and relevant document chunks are retrieved. A further contextual compression is performed to remove irrelevant information from the chunks. These compressed texts serve as input to a GPT-4 instance, which generates an idea. This idea is then critiqued by a second GPT-4 model, and the feedback is moderated by a third GPT-4 model.
Implementing our experimental setup, we use \(N_{k}\) papers, where \(k\in\{1,10,100,1000\}\). Each sample undergoes hypothesis generation by the 'Generation GPT-4' instance (our in-context prompted model on \(k\) papers). An adversarial response from 'Adversarial GPT-4' follows, which is reformulated by a moderator GPT-4 instance and fed back to the generator model. This cycle, yielding three hypotheses and two critiques per experiment, is repeated twice for each \(N_{k}\) and replicated five times in total. The same approach is applied to 1,000 papers, without resampling, accumulating a total of 60 hypotheses and 40 critiques.
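A minimal sketch of the generation / critique / moderation cycle described above is given below; `chat(role, message)` is a hypothetical wrapper around three separate GPT-4 chat sessions (generator, adversary, moderator) and is not part of any real API.

```python
def adversarial_rounds(chat, context, query, n_rounds=2):
    # One experiment: an initial hypothesis followed by n_rounds of critique and
    # revision, yielding n_rounds + 1 hypotheses and n_rounds critiques.
    hypotheses, critiques = [], []
    hypothesis = chat("generator", f"Context:\n{context}\n\nPropose a hypothesis about: {query}")
    hypotheses.append(hypothesis)
    for _ in range(n_rounds):
        critique = chat("adversary", f"Critique this hypothesis:\n{hypothesis}")
        critiques.append(critique)
        feedback = chat("moderator", f"Rephrase this critique as a question for the author:\n{critique}")
        hypothesis = chat("generator", f"Revise the hypothesis to address:\n{feedback}")
        hypotheses.append(hypothesis)
    return hypotheses, critiques
```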
## 4 Results
### Human Evaluation
Given the qualitative nature of hypothesis generation, we needed an assessment process that, while inherently subjective, would match the expectations of human experts. For this, we involved two domain experts in the field of Galactic Astronomy to evaluate the quality of the generated hypotheses as a function of the number of papers included within the domain-specific context, and we computed the average score from these dual-human evaluations for each hypothesis. Each hypothesis is graded against a rubric of three categories - scientific accuracy, creativity, and feasibility - and the average score across these three categories is taken as the final score. We also evaluated the critiques provided by the AI judge, which had access to the same contextual information.
As illustrated in the left panel of Fig. 2, adversarial prompting proved to be a critical tool in markedly improving hypothesis generation. The quality of hypothesis generation, without adversarial prompting, showed little dependence on the number of papers, suggesting that in-context prompting alone, while helpful for mitigating hallucination, did not suffice for a comprehensive understanding of the corpus.
The introduction of adversarial prompting considerably altered this outcome. A significant improvement in the quality of hypothesis generation was observed both for the AI generator and the AI judge, even without explicitly aligning the models with human expectations. Notably, adversarial prompting introduced a strong correlation between hypothesis quality and the number of papers reviewed, especially at larger context sizes (\(N=1000\)). It also leads to much greater consistency in the quality of the hypotheses (and the critiques). The average quality score rose significantly from 2.5 (when 10 papers were used as context, where a score of 3/5 corresponds to a typical hypothesis by a competent PhD student) to a near-expert level of 4/5 when 1,000 papers were included, emphasizing the potential of adversarial prompting in enhancing the quality of scientific hypothesis generation. We refer to the Appendix for examples.
### Exploration of Embeddings
To truly understand the power of adversarial prompting, we first passed the abstracts of our set of 1000 astronomy papers through the text-ada-002 embedding model and arranged them into a 2D TSNE projection. This captured the contextual differences and similarities of these 1000 papers.
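A short sketch of this projection step is given below, assuming the abstract embeddings have already been computed and saved; the file name and the scikit-learn settings are illustrative.

```python
import numpy as np
from sklearn.manifold import TSNE

# hypothetical (1000, d) array of abstract embeddings from the text-ada-002 model
embeddings = np.load("abstract_embeddings.npy")
xy = TSNE(n_components=2, random_state=0).fit_transform(embeddings)  # 2D coordinates for plotting
```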
For each hypothesis generated, we determined which papers inspired it by querying the GPT model. In Fig. 3, we visualized this 'knowledge footprint' for each hypothesis as black polygons within a green hull representing all the papers GPT-4 had access to. From Fig. 3's top panels, it's clear that as the number of papers grows, GPT broadens its scope, drawing on diverse topics to build interconnected hypotheses. We note that, even with a small training pool
Figure 2: Adversarial prompting and domain-specific context enrichment significantly enhance hypothesis generation quality. 60 hypotheses and 40 critiques generated by the AI were evaluated by two human experts, with the mean scores reported for individual instances. The iterations of adversarial prompting (\(n_{F}\)) were instrumental in driving substantial enhancements in both the quality and consistency of the AI judge and AI generator outputs, particularly when they were supplied with an extensive context (\(N=1000\) papers). Crucially, in the absence of adversarial prompting (\(n_{F}=0\)), the quality of the hypotheses remained stagnant despite the provision of ample context. This observation underscores the stark contrast and superior effectiveness of adversarial prompting.
of 10 papers, we design our experiment in a way that the corpus still covers a wide range of topics, but GPT-4 lacks the context to connect them in a meaningful way, leading to more general hypotheses (see Appendix).
The bottom panel in Fig. 3, concentrating on the case with 1000 papers, explores how the knowledge footprint evolves with different numbers of adversarial attacks. In the preliminary iteration (bottom left), the judge ingeniously identifies areas of critique based on knowledge overlooked by the original response. This compels the generator to expand its scope further (as shown in the bottom middle and right panels), to appropriately address the criticism. As shown in some of the examples in the Appendix, adversarial prompting allows the GPT-4 model to genuinely benefit from a large number of contextual inputs, guiding the model towards a more coherent understanding of the topic, rather than creating a mere 'fact jumble'. The specific examples of hypotheses and corresponding critiques are shown in the Appendix.
## 5 Conclusion and Future Direction
In this research, we delved into a detailed examination of the GPT-4 model's ability to propose novel hypotheses in the domain of Galactic Astronomy using in-context prompting. Our findings confirm that in-context prompting significantly mitigates hallucination, leading to the generation of meaningful hypotheses that can compete with substantive thesis topics, as evaluated by domain experts. Importantly, we found that the inclusion of adversarial prompts enables the model to progressively enhance its performance based on the number of contextual papers. However, a naive implementation of in-context prompting without adversarial judgment fails to replicate this improvement.
While our study marks the inception of a pioneering exploration of in-context prompting for scientific hypothesis generation, it's clear that this dynamic field is rapidly evolving. Thus, we have identified several crucial areas for enhancement. These areas include (a) an improved and automated evaluation method for hypotheses. We have observed that while the AI judge can assist the AI generator, improvements are primarily in technical detail rather than deep insights. We propose leveraging well-curated question-and-answer pairs (e.g. Dugan et al., 2022) to better align the judge with human expectations. (b) Instead of focusing solely on hypothesis generation, integrating other downstream tasks and fine-tuning with smaller adapter models could potentially improve inference. We have commenced curating metadata from ADS to better design these tasks.
## 6 Broader Impact
In this study, our focus on in-context prompting, rather than the more computationally intensive fine-tuning, is inspired by the aim to democratize the utilization of LLMs for scientific inquiry. Current GPT models, due to their immense parameter sets, often render fine-tuning impractical. In the era of Large Language Models, it is crucial to determine whether all academic institutions, regardless of size or available computational resources, can keep pace with these
Figure 3: Visual representation of each hypothesis's 'knowledge footprint', depicted as black polygons within the TSNE projection of the abstracts of our corpus comprising 1000 papers. As the quantity of papers consumed increases, the model leverages a more diverse array of topics, thus boosting the quality of the hypothesis (as seen in the top panels). The green hull in the top panel shows the overall knowledge base that the model has access to through in-context prompting. The lower panel demonstrates how the 'knowledge footprint' evolves with varying quantities \(n_{F}\) of adversarial attacks in the case of 1000 papers. The black polygon signifies the footprint of the original response, while the lime indicates the information utilized by the AI judge for critique.
rapidly advancing technologies.
This critical inquiry forms the crux of our study, and our findings present a hopeful outlook. Our study indicates that, with the right strategies and approaches with 'humans in the loop' as domain experts, the barrier to harnessing the full power of these advanced LLMs can be lowered. As a result, we envision a future where all institutions, regardless of size or resources, can contribute to and benefit from the swift advancements in AI, enhancing the collective endeavour of scientific discovery. Our journey into this new frontier of Large Language Models is just beginning, and it promises a thrilling ride full of unexpected insights and revolutionary breakthroughs.
|
2305.10006 | EfficientSCI: Densely Connected Network with Space-time Factorization
for Large-scale Video Snapshot Compressive Imaging | Video snapshot compressive imaging (SCI) uses a two-dimensional detector to
capture consecutive video frames during a single exposure time. Following this,
an efficient reconstruction algorithm needs to be designed to reconstruct the
desired video frames. Although recent deep learning-based state-of-the-art
(SOTA) reconstruction algorithms have achieved good results in most tasks, they
still face the following challenges due to excessive model complexity and GPU
memory limitations: 1) these models need high computational cost, and 2) they
are usually unable to reconstruct large-scale video frames at high compression
ratios. To address these issues, we develop an efficient network for video SCI
by using dense connections and space-time factorization mechanism within a
single residual block, dubbed EfficientSCI. The EfficientSCI network can well
establish spatial-temporal correlation by using convolution in the spatial
domain and Transformer in the temporal domain, respectively. We are the first
time to show that an UHD color video with high compression ratio can be
reconstructed from a snapshot 2D measurement using a single end-to-end deep
learning model with PSNR above 32 dB. Extensive results on both simulation and
real data show that our method significantly outperforms all previous SOTA
algorithms with better real-time performance. The code is at
https://github.com/ucaswangls/EfficientSCI.git. | Lishun Wang, Miao Cao, Xin Yuan | 2023-05-17T07:28:46Z | http://arxiv.org/abs/2305.10006v2 | EfficientSCI: Densely Connected Network with Space-time Factorization for Large-scale Video Snapshot Compressive Imaging
###### Abstract
Video snapshot compressive imaging (SCI) uses a two-dimensional detector to capture consecutive video frames during a single exposure time. Following this, an efficient reconstruction algorithm needs to be designed to reconstruct the desired video frames. Although recent deep learning-based state-of-the-art (SOTA) reconstruction algorithms have achieved good results in most tasks, they still face the following challenges due to excessive model complexity and GPU memory limitations: 1) these models incur a high computational cost, and 2) they are usually unable to reconstruct large-scale video frames at high compression ratios. To address these issues, we develop an **efficient network** for video SCI by using **dense connections and space-time factorization mechanism** within a single residual block, dubbed **EfficientSCI**. The EfficientSCI network can well establish spatial-temporal correlation by using **convolution in the spatial domain and Transformer in the temporal domain**, respectively. We are the first to show that a UHD color video with high compression ratio can be reconstructed from a snapshot 2D measurement using a single end-to-end deep learning model with PSNR above 32 dB. Extensive results on both simulation and real data show that our method significantly outperforms all previous SOTA algorithms with better real-time performance. The code is at [https://github.com/ucaswangls/EfficientSCI.git](https://github.com/ucaswangls/EfficientSCI.git).
## 1 Introduction
Traditional high-speed camera imaging methods usually suffer from high hardware and storage transmission cost. Inspired by compressed sensing (CS) [5, 9], video snapshot compressive imaging (SCI) [45] provides an elegant solution. As shown in Fig. 2, video SCI consists of a hardware encoder and a software decoder. In the encoder part, multiple raw video frames are modulated by different masks and then integrated by the camera to get a compressed measurement, giving low-speed cameras the ability to capture high-speed scenes. For the decoding part, the desired high-speed video is retrieved by the reconstruction algorithm using the captured measurement and masks.
So far, many mature SCI imaging systems [14, 24, 31] have been built, but for the decoding part, there are still many challenges. In particular, although the model-based methods [21, 43, 44] have good flexibility and can reconstruct videos with different resolutions and compression rates, they require long reconstruction time and can only achieve poor reconstruction quality. In order to improve the reconstruction quality and running speed, PnP-FFDNet [46] and PnP-FastDVDnet [47] integrate the pre-trained denoising network into an iterative optimization algorithm. However, they still need a long reconstruction time on large-scale datasets, _e.g_., PnP-FastDVDNet takes hours to reconstruct a UHD video from a single measurement.
By contrast, deep learning based methods [28, 30, 35, 40]
Figure 1: Comparison of reconstruction quality (average PSNR in dB on 6 benchmark grayscale datasets) and testing time of several SOTA deep learning based algorithms. Our proposed EfficientSCI achieves higher reconstruction quality with fewer parameters and shorter testing time.
have better real-time performance and higher reconstruction quality. For example, BIRNAT [8] uses a bidirectional recurrent neural network and a generative adversarial training method to surpass the model-based method DeSCI [21] for the first time. MetaSCI [39] explores adapting the model to different masks, which reduces the model training time. DUN-3DUnet [40] and ELP-Unfolding [42] combine iterative optimization ideas with deep learning models to further improve reconstruction quality. However, due to high model complexity and insufficient GPU memory, most existing deep learning algorithms cannot train the models required for reconstructing HD or large-scale videos. RevSCI [7] uses a reversible mechanism [2] to reduce the memory used for model training, and can reconstruct HD video with a compression rate up to 24, but the model training time increases exponentially. In addition, current reconstruction algorithms generally use convolution to establish spatial-temporal correlation. Due to the local connectivity of convolution, long-term dependencies cannot be well established, and such models cannot reconstruct data with high compression rates.
In summary, model-based methods usually require long reconstruction time and can only achieve poor reconstruction quality. Learning-based methods have high model complexity but cannot be well applied to large-scale color video reconstruction. To address these challenges, we develop an **efficient network** for video SCI by using _dense connections and space-time factorization mechanism_. As shown in Fig. 1, our proposed method dramatically outperforms all previous deep learning based reconstruction algorithms in terms of reconstruction quality and running speed with fewer parameters. Our main contributions can be summarized as follows:
* An efficient end-to-end network, dubbed EfficientSCI, is proposed for reconstructing high quality video frames from a snapshot SCI measurement.
* By building hierarchical dense connections within a single residual block, we devise a novel ResDNet block to effectively reduces model computational complexity but enhance the learning ability of the model.
* Based on the _space-time factorization_ mechanism, a Convolution and Transformer hybrid block (CFormer) is built, which can efficiently establish space-time correlation by using convolution in the spatial domain and Transformer in the temporal domain, respectively.
* Experimental results on a large number of simulated and real datasets demonstrate that our proposed method achieves state-of-the-art (SOTA) results and better real-time performance.
## 2 Related Work
**CNN and Variants:** In the past ten years, models with convolutional neural networks (CNN) as the backbone [13, 15, 20] have achieved excellent results on multiple computer vision tasks [20, 25, 32]. Among them, ResNeXt [41] and Res2Net [11] effectively increase model capacity without increasing model complexity by using grouped convolutions inside residual blocks. DenseNet [15] and CSPNet [34] achieve feature reuse by taking all previous feature maps as input. In video-related tasks, 3D convolution can establish good spatial-temporal correlation and has been widely used in action recognition [18], video super-resolution [26], video inpainting [6] and so on. In previous video SCI reconstruction, RevSCI and DUN-3DUnet greatly improve the reconstruction quality of benchmark grayscale datasets by integrating 3D convolution into the network. However, in complex high-speed scenarios (_e.g._, crash), since they cannot effectively establish long-term temporal dependencies, the reconstruction quality is still lower than 30 dB. In addition, the excessive use of 3D convolution increases the number of model parameters, which is not conducive to applications with large-scale and high-compression-ratio data.
**Vision Transformers:** Most recently, Vision Transformer (ViT) [10] and its variants [4, 12, 37, 50] have achieved competitive results in computer vision. However, the high computational complexity limits its application in video-related tasks. TimeSformer [3] performs self-attention calculations in time and space respectively, which reduces model complexity and improves model accuracy, but the computational complexity still increases quadratically with the image size. The Video Swin Transformer [23] limits self-attention calculations to local windows but cannot effectively establish long-term temporal dependencies. In addition, a large number of experimental results show that Transformer has higher memory consumption than CNN, and using Transformer in space is not conducive to large-scale video SCI reconstruction. Therefore, through _space-time factorization mechanism_, using Transformer only in time domain can not only effectively utilize its ability to establish long-term time series dependencies, but also reduce model complexity and memory consumption.
Figure 2: Schematic diagram of grayscale and color video SCI.
## 3 Mathematical Model of Video SCI
Fig. 2 briefly describes the flow chart of video SCI. For grayscale video SCI system, the original \(B\)-frame (grayscale) input video \(\{\mathbf{X}_{m}\}_{m=1}^{B}\in\mathbb{R}^{n_{x}\times n_{y}}\) is modulated by pre-defined masks \(\{\mathbf{M}\}_{m=1}^{B}\in\mathbb{R}^{n_{x}\times n_{y}}\). Then, by compressing across time, the camera sensor captures a compressed measurement \(\mathbf{Y}\in\mathbb{R}^{n_{x}\times n_{y}}\). The whole process can be expressed as:
\[\mathbf{Y}=\sum_{m=1}^{B}\mathbf{X}_{m}\odot\mathbf{M}_{m}+\mathbf{Z}, \tag{1}\]
where \(\odot\) denotes the Hadamard (element-wise) multiplication, and \(\mathbf{Z}\in\mathbb{R}^{n_{x}\times n_{y}}\) denotes the measurement noise. Eq. (1) can also be represented by a vectorized formulation. Firstly, we vectorize \(\mathbf{y}=\mathrm{vec}(\mathbf{Y})\in\mathbb{R}^{n_{x}n_{y}}\), \(\mathbf{z}=\mathrm{vec}(\mathbf{Z})\in\mathbb{R}^{n_{x}n_{y}}\), \(\mathbf{x}=\left[\mathbf{x}_{1}^{\top},\dots,\mathbf{x}_{B}^{\top}\right]^{\top}\in \mathbb{R}^{n_{x}n_{y}B}\), where \(\mathbf{x}_{m}=\mathrm{vec}(\mathbf{X}_{m})\). Then, sensing matrix generated by masks can be defined as:
\[\mathbf{H}=[\mathbf{D}_{1},\dots,\mathbf{D}_{B}]\in\mathbb{R}^{n_{x}n_{y}\times n _{x}n_{y}B}, \tag{2}\]
where \(\mathbf{D}_{m}=\mathrm{Diag}(\mathrm{vec}(\mathbf{M}_{m}))\in\mathbb{R}^{n_{x}n_{y}\times n_{x}n_{y}}\) is a diagonal matrix whose diagonal elements are filled by \(\mathrm{vec}(\mathbf{M}_{m})\). Finally, the vectorized expression of Eq. (1) is
\[\mathbf{y}=\mathbf{H}\mathbf{x}+\mathbf{z}. \tag{3}\]
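Before turning to the color case, a minimal NumPy sketch of the sensing process in Eq. (1) is given below; the frame count, spatial size, binary mask distribution and noise level are illustrative assumptions.

```python
import numpy as np

def sci_forward(X, M, noise_std=0.0):
    # X, M: (B, nx, ny) video frames and modulation masks; returns the single
    # 2D measurement Y = sum_m X_m * M_m + Z of shape (nx, ny).
    Y = np.sum(X * M, axis=0)
    if noise_std > 0:
        Y = Y + np.random.normal(0.0, noise_std, Y.shape)
    return Y

# toy usage: 8 frames of 256x256 video modulated by random binary masks
B, nx, ny = 8, 256, 256
X = np.random.rand(B, nx, ny)
M = (np.random.rand(B, nx, ny) > 0.5).astype(float)
Y = sci_forward(X, M, noise_std=0.01)
```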
For color video SCI system, we use the Bayer pattern filter sensor, where each pixel captures only red (R), blue (B) or green (G) channel of the raw data in a spatial layout such as 'RGGB'. Since adjacent pixels are different color components, we divide the original measurement \(\mathbf{Y}\) into four sub-measurements \(\{\mathbf{Y}^{r},\mathbf{Y}^{g1},\mathbf{Y}^{g2},\mathbf{Y}^{b}\}\in\mathbb{R} ^{\frac{n_{x}}{2}\times\frac{n_{y}}{2}}\) according to the Bayer filter pattern. For color video reconstruction, most of the previous algorithms [46, 48] reconstruct each sub-measurement independently, and then use off-the-shelf demosaic algorithms to get the final RGB color videos. These methods are usually inefficient and have poor reconstruction quality. In this paper, we feed the four sub-measurements simultaneously into the reconstruction network to directly obtain the final desired color video.
## 4 The Proposed Network
As shown in Fig. 3, in the pre-processing stage of EfficientSCI, inspired by [7, 8], we use the estimation module to pre-process measurement (\(\mathbf{Y}\)) and masks (\(\mathbf{M}\)) as follows:
\[\overline{\mathbf{Y}}=\mathbf{Y}\oslash\sum_{m=1}^{B}\mathbf{M}_{m},\ \ \ \mathbf{X}_{e}=\overline{\mathbf{Y}}\odot\mathbf{M}+\overline{\mathbf{Y}}, \tag{4}\]
where \(\oslash\) represents Hadamard (element-wise) division, \(\overline{\mathbf{Y}}\in\mathbb{R}^{n_{x}\times n_{y}}\) is the normalized measurement, which preserves a certain background and motion trajectory information, and \(\mathbf{X}_{e}\in\mathbb{R}^{n_{x}\times n_{y}\times B}\) represents the coarse estimate of the desired video. We then take \(\mathbf{X}_{e}\) as the input of the EfficientSCI network to get the final reconstruction result.
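A PyTorch sketch of the estimation module in Eq. (4) is shown below; the small epsilon guarding against division by zero is an added assumption rather than part of the paper's formulation.

```python
import torch

def estimation_module(Y, M, eps=1e-6):
    # Y: (nx, ny) measurement, M: (B, nx, ny) masks.
    Y_bar = Y / (M.sum(dim=0) + eps)        # normalized measurement (element-wise division)
    X_e = Y_bar.unsqueeze(0) * M + Y_bar    # coarse estimate of the video, shape (B, nx, ny)
    return X_e
```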
The EfficientSCI network is mainly composed of three parts: \(i)\) feature extraction module, \(ii)\) ResDNet module and \(iii)\) video reconstruction module. The feature extraction module is mainly composed of three 3D convolutional layers with kernel sizes of \(3\times 7\times 7,3\times 3\times 3\) and \(3\times 3\times 3\) respectively. Among them, each 3D convolution is followed by a LeakyReLU activation function [27], and the spatial stride step size of the final 3D convolution is 2. The spatial resolution of the final output feature map is reduced to half of the input. The feature extraction module effectively maps the input image space to the high-dimensional feature space. The ResDNet module is composed of \(N\) ResDNet blocks (described in Sec. 4.1), which can efficiently explore spatial-temporal correlation. The video reconstruction module is composed of pixelshuffle [33] (mainly used to restore the spatial resolution to the network input size) and three 3D convolution layers (kernel sizes are \(3\times 3\times 3,1\times 1\times 1\) and \(3\times 3\times 3\) respectively), and conducts video reconstruction on the features output by the ResDNet blocks.
Figure 3: Architecture of the proposed EfficientSCI network and the overall process of color or grayscale video reconstruction. The measurement (\(\mathbf{Y}\)) and masks (\(\mathbf{M}\)) are pre-processed by the estimation module to obtain an estimated \(\mathbf{X}_{e}\), and then feed \(\mathbf{X}_{e}\) into EfficientSCI network to get the desired reconstruction result. (a) ResDNet module with \(N\) residual style units. (b) ResDNet block. Inside a residual block, the input features are divided into \(S\) parts by the channel split. Each part uses CFormer to efficiently extract spatial-temporal correlation, and employs dense connections to further improve model capacity. For convenience, only the case of \(S=4\) is shown here.
### ResDNet Block
Dense connection is an effective way to increase model capacity. Unlike DenseNet, which spans multiple layers, we build a more efficient dense connection within a single residual block. As shown in Fig. 3(b), the input features of the ResDNet block are first divided into \(S\) parts along the feature channel dimension. Then, for each part \(i=1,\cdots,S\), we use CFormer (described in Section 4.2) to efficiently establish the spatial-temporal correlation. Specifically, for the input of the \(i^{th}\) CFormer, we concatenate all the CFormer output features before the \(i^{th}\) part with the input features of the \(i^{th}\) part and then use a \(1\times 1\times 1\) convolution to reduce the dimension of the feature channel, which can further reduce the computational complexity. Next, we concatenate all CFormer output features along the feature channel dimension and use a \(1\times 1\times 1\) convolution to better fuse each part of the information. Given an input \(\mathbf{X}_{r}\in\mathbb{R}^{T\times H\times W\times C}\), ResDNet block can be expressed as:
\[\mathbf{X}_{1},\cdots,\mathbf{X}_{S}=\mathrm{Split}(\mathbf{X}_{r}),\] \[\mathbf{Y}_{1}=\mathrm{CFormer}_{1}(\mathbf{X}_{1}),\] \[\mathbf{Y}_{2}=\mathrm{CFormer}_{2}(\mathrm{Conv}_{1}(\mathrm{ Concat}([\mathbf{Y}_{1},\mathbf{X}_{2}]))),\] \[\qquad\qquad\vdots\] \[\mathbf{Y}_{S}=\mathrm{CFormer}_{S}(\mathrm{Conv}_{1}(\mathrm{ Concat}([\mathbf{Y}_{1},\cdots,\mathbf{Y}_{S-1},\mathbf{X}_{S}]))),\] \[\hat{\mathbf{Y}}_{r}=\mathrm{Concat}([\mathbf{Y}_{1},\cdots, \mathbf{Y}_{S}]),\] \[\hat{\mathbf{X}}_{r}=\mathrm{Conv}_{1}(\hat{\mathbf{Y}}_{r})+ \mathbf{X}_{r}, \tag{5}\]
where 'Split' represents division along the channel, '\(\mathrm{Conv}_{1}\)' represents a \(1\times 1\times 1\) convolution operation, 'Concat' represents concatenate along the channel and \(\hat{\mathbf{X}}_{r}\in\mathbb{R}^{T\times H\times W\times C}\) represents the output of the ResDNet block. This design has two advantages: \(i)\) the features of different levels are aggregated at a more granular level, which improves the representation ability of the model; \(ii)\) the model complexity is reduced (shown in Table 6).
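A PyTorch sketch of Eq. (5) is given below, assuming the channel-first (N, C, T, H, W) layout and treating the CFormer block (described in Sec. 4.2) as an injected factory; details such as bias terms follow library defaults rather than the paper.

```python
import torch
import torch.nn as nn

class ResDNetBlock(nn.Module):
    def __init__(self, channels, splits, cformer_factory):
        super().__init__()
        self.splits = splits
        c = channels // splits
        self.cformers = nn.ModuleList([cformer_factory(c) for _ in range(splits)])
        # 1x1x1 convolutions that squeeze the dense (concatenated) inputs back to c channels
        self.reduce = nn.ModuleList(
            [nn.Conv3d(c * (i + 1), c, kernel_size=1) for i in range(1, splits)])
        self.fuse = nn.Conv3d(channels, channels, kernel_size=1)

    def forward(self, x):                                # x: (N, C, T, H, W)
        parts = torch.chunk(x, self.splits, dim=1)       # channel split into S parts
        outs = [self.cformers[0](parts[0])]
        for i in range(1, self.splits):
            dense_in = torch.cat(outs + [parts[i]], dim=1)
            outs.append(self.cformers[i](self.reduce[i - 1](dense_in)))
        return self.fuse(torch.cat(outs, dim=1)) + x     # fuse all parts and add the residual
```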
### CFormer Block
As shown in Fig. 4, the CFormer block includes three parts: Spatial Convolution Branch (SCB), Temporal Self-Attention Branch (TSAB) and Feed Forward Network (FFN). Based on the space-time factorization mechanism, SCB is used to extract local spatial information, while TSAB is used to calculate temporal attention among the feature points at the same spatial position in each frame. After that, FFN is used to further integrate spatial-temporal information.
It is worth noting that in order to make the model flexible to different compression ratios, we introduce zero padding position encoding [17] into CFormer block, instead of the absolute position encoding [10] or relative position encoding [22]. Specifically, we modified the first linear transformation layer in the traditional FFN to a \(3\times 3\times 3\) convolution with padding size of 1.
**Spatial Convolution Branch:** 2D convolution can effectively exploit local spatial correlation and reconstruct more detailed information, while also enjoying efficient memory consumption and higher operating efficiency, which makes it suitable for large-scale video reconstruction. Therefore, we only use two \(3\times 3\) 2D convolutions to reconstruct local spatial details in SCB, as shown in Fig. 4.
**Temporal Self-attention Branch:** The local receptive field of convolution makes it difficult to establish long-term dependencies. The global perception ability of Transformer can mitigate this issue. However, the time and memory complexity of traditional Transformers increase quadratically with the image size. To alleviate this problem, following [3, 35], we propose TSAB (shown in Fig. 4), which restricts the self-attention computation to the temporal domain, and its complexity only increases linearly with the image/video size.
In particular, we first reshape the input \(\mathbf{X}_{st}\in\mathbb{R}^{T\times H\times W\times\frac{C}{S}}\) to \(\mathbf{X}_{t}\in\mathbb{R}^{HW\times T\times\frac{C}{S}}\), and then obtain \(query\ (\mathbf{Q}\in\mathbb{R}^{HW\times T\times\frac{C}{S}})\), \(key\ (\mathbf{K}\in\mathbb{R}^{HW\times T\times\frac{C}{2S}})\) and \(value\ (\mathbf{V}\in\mathbb{R}^{HW\times T\times\frac{C}{S}})\) by linearly mapping \(\mathbf{X}_{t}\):
\[\mathbf{Q}=\mathbf{X}_{t}\mathbf{W}^{Q},\ \ \mathbf{K}=\mathbf{X}_{t}\mathbf{W}^{K },\ \ \mathbf{V}=\mathbf{X}_{t}\mathbf{W}^{V}, \tag{6}\]
where \(\{\mathbf{W}^{Q},\mathbf{W}^{K},\mathbf{W}^{V}\}\in\mathbb{R}^{\frac{C}{S} \times\frac{C}{2S}}\) are projection matrices.
It is worth noting that the output dimension of the projection matrix is reduced to half of the input dimension, further decreasing the computational complexity of TSAB. Then, we respectively divide \(\mathbf{Q}\), \(\mathbf{K}\), \(\mathbf{V}\) into \(N\) heads along the feature channel: \(\mathbf{Q}=\{\mathbf{Q}_{j}\}_{1}^{N}\), \(\mathbf{K}=\{\mathbf{K}_{j}\}_{1}^{N}\), \(\mathbf{V}=\{\mathbf{V}_{j}\}_{1}^{N}\in\mathbb{R}^{HW\times T\times\frac{C}{ 2S}}\). For each head \(j=1,\cdots,N\), the attention can be calculated as:
\[head_{j}=\mathbf{A}_{j}*\mathbf{V}_{j}, \tag{7}\]
Figure 4: The CFormer block is composed of Spatial Convolution Branch (SCB), Temporal Self-Attention Branch (TSAB) and Feed Forward Network (FFN). For ease of presentation, only the head \(N=1\) scenario is described in the TSAB.
where \(\mathbf{A}_{j}=softmax(\mathbf{Q}_{j}\mathbf{K}_{j}^{T}/\sqrt{d})\in\mathbb{R}^{HW\times T\times T}\) represents an attention map, \(\mathbf{K}_{j}^{T}\) represents the transposed matrix of \(\mathbf{K}_{j}\) and \(d=\frac{C}{2SN}\) is a scaling parameter. Then, we concatenate the outputs of \(N\) heads along the feature channel dimension and perform a linear mapping to obtain the final output \(\hat{\mathbf{X}}_{t}\in\mathbb{R}^{T\times H\times W\times\frac{C}{S}}\) of TSAB:
\[\hat{\mathbf{X}}_{t}=\mathbf{R}(\mathbf{W}^{P}(\mathrm{Concat}[head_{1},\cdots, head_{N}])), \tag{8}\]
where \(\mathbf{W}^{P}\in\mathbb{R}^{\frac{C}{2S}\times\frac{C}{S}}\) represents projection matrices, and \(\mathbf{R}\) is the reshape operator.
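A PyTorch sketch of the temporal self-attention in Eqs. (6)-(8) is shown below; the (N, C, T, H, W) layout, separate linear projections for Q, K and V, and the absence of dropout are implementation assumptions.

```python
import torch
import torch.nn as nn

class TSAB(nn.Module):
    def __init__(self, dim, heads):
        super().__init__()
        inner = dim // 2                                  # Q/K/V are projected to half the input width
        self.heads = heads
        self.scale = (inner // heads) ** -0.5             # 1/sqrt(d) with d = C/(2SN)
        self.to_q = nn.Linear(dim, inner, bias=False)
        self.to_k = nn.Linear(dim, inner, bias=False)
        self.to_v = nn.Linear(dim, inner, bias=False)
        self.proj = nn.Linear(inner, dim)

    def forward(self, x):                                 # x: (N, C, T, H, W)
        n, c, t, h, w = x.shape
        xt = x.permute(0, 3, 4, 2, 1).reshape(n * h * w, t, c)   # attention runs along T only
        q, k, v = self.to_q(xt), self.to_k(xt), self.to_v(xt)
        def split_heads(y):
            return y.reshape(y.shape[0], t, self.heads, -1).transpose(1, 2)
        q, k, v = split_heads(q), split_heads(k), split_heads(v)
        attn = torch.softmax((q @ k.transpose(-2, -1)) * self.scale, dim=-1)  # (., heads, T, T)
        out = (attn @ v).transpose(1, 2).reshape(n * h * w, t, -1)
        out = self.proj(out)
        return out.reshape(n, h, w, t, c).permute(0, 4, 3, 1, 2)              # back to (N, C, T, H, W)
```

Because the attention matrix is only \(T\times T\) per spatial location, memory and compute grow linearly with the number of pixels, in line with the complexity comparison that follows.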
We further analyze the computational complexity of SCB and TSAB, and compare them with 3D convolution and several classic Multi-head Self-Attention (MSA) mechanisms. The results are shown in Table 1, where 'SCB3D' represents the replacement of 2D convolution in SCB with 3D convolution and \(K\) represents the kernel size, 'G-MSA' represents the original global MSA [10], and 'TS-MSA' represents the MSA in TimeSformer [3]. It can be observed that the computational complexity of our proposed SCB and TSAB grows linearly with the spatial size \(HW\); the computational cost is much less than that of 'TS-MSA' and 'G-MSA' (which grow quadratically with \(HW\)). Compared with 3D convolution, since \(T\) is generally smaller than \(C\), 'SCB' and 'TSAB' still require less computation.
**Feed Forward Network:** The feed forward network of a traditional Transformer usually uses two linear layers and a nonlinear activation function to learn more abstract feature representations. However, in the whole FFN, there is no interaction between feature points. In order to better integrate the spatial-temporal information and position coding information, we replace the first linear transformation layer in the traditional FFN with a \(3\times 3\times 3\) convolution.
Given \(\mathbf{X}_{f}\in\mathbb{R}^{T\times H\times W\times\frac{C}{2}}\), FFN can be expressed as:
\[\hat{\mathbf{X}}_{f}=\mathbf{X}_{f}+\mathbf{W}_{1}(\phi(\mathbf{W}_{2}( \mathbf{X}_{f}))), \tag{9}\]
where \(\mathbf{W}_{1},\mathbf{W}_{2}\) represent \(1\times 1\times 1\) convolution and \(3\times 3\times 3\) convolution operations respectively, \(\phi\) denotes the LeakyReLU non-linearity activation, and \(\hat{\mathbf{X}}_{f}\in\mathbb{R}^{T\times H\times W\times\frac{C}{2}}\) is the output of the FFN.
It should be noted that in the whole CFormer block, we do not use any regularization layers, such as Layer Normalization [1] and Batch Normalization [16]. The experimental results show that _removing the regularization layer will not reduce the quality of model reconstruction and can further improve the efficiency of the model._
**Network Variants:** To balance speed and performance of the proposed network, we introduce four different versions of the EfficientSCI network, dubbed EfficientSCI-T, EfficientSCI-S, EfficientSCI-B and EfficientSCI-L, standing for Tiny, Small, Base and Large networks, respectively. The network hyper-parameters are shown in Table 2, in which we mainly changed the number of ResDNet blocks and the number of channels. As shown in Table 3, we also compare model parameters and computational complexity (FLOPs) with several advanced methods. The complexity of our proposed EfficientSCI-T is smaller than that of BIRNAT and RevSCI, and EfficientSCI-L is smaller than that of DUN-3DUnet and ELP-Unfolding.
## 5 Experiment Results
### Datasets
Following BIRNAT [8], we use DAVIS2017 [29] with resolution \(480\times 894\) (480p) as the model training dataset. To verify model performance, we first test the EfficientSCI network on several simulated datasets, including six benchmark grayscale datasets (Kobe, Traffic, Runner, Drop, Crash and Aerial with a size of \(256\times 256\times 8\)), six benchmark mid-scale color datasets (Beauty, Bosphorus, Jockey, Runner, ShakeNDry and Traffic with a size of \(512\times 512\times 3\times 8\)), and four large-scale datasets (Messi, Hummingbird, Swinger and Football with different sizes and compression ratios). Then we test our model on some real data (including Domino, Water Balloon) captured by a real SCI system [30].
### Implementation Details
We use PyTorch framework with 4 NVIDIA RTX 3090 GPUs for training with random cropping, random scaling, and random flipping for data augmentation, and use
\begin{table}
\begin{tabular}{c|c} \hline Method & Computational Complexity \\ \hline SCB3D & \(\frac{1}{2}HWTK^{3}C^{2}\) \\ G-MSA & \(HWTC^{2}+(HWT)^{2}C\) \\ TS-MSA & \(2HWTC^{2}+T(HW)^{2}C+HWT^{2}C\) \\ \hline SCB & \(\frac{1}{2}HWTK^{2}C^{2}\) \\ TSAB & \(\frac{1}{2}HWTC^{2}+\frac{1}{2}HWT^{2}C\) \\ \hline \end{tabular}
\end{table}
Table 1: Computational complexity of several SOTA methods.
\begin{table}
\begin{tabular}{c|c c c c c} \hline \hline Model & Channel & Block & PSNR & SSIM & Test time(s) \\ \hline EfficientSCI-T & 64 & 8 & 34.22 & 0.961 & 0.07 \\ EfficientSCI-S & 128 & 8 & 35.51 & 0.970 & 0.15 \\ EfficientSCI-B & 256 & 8 & 36.48 & 0.975 & 0.31 \\ EfficientSCI-L & 256 & 12 & 36.92 & 0.977 & 0.45 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Reconstruction quality and test time (s) using EfficientSCI with different number of channels and blocks.
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Method & Params (M) & FLOPs (G) & PSNR & SSIM \\ \hline BIRNAT & 4.13 & 390.56 & 33.31 & 0.951 \\ RevSCI & 5.66 & 766.95 & 33.92 & 0.956 \\ DUN-3DUnet & 61.91 & 3975.83 & 35.26 & 0.968 \\ ELP-Unfolding & 565.73 & 4634.94 & 35.41 & 0.969 \\ \hline EfficientSCI-T & 0.95 & 142.18 & 34.22 & 0.961 \\ EfficientSCI-S & 3.78 & 563.87 & 35.51 & 0.970 \\ EfficientSCI-B & 8.82 & 1426.38 & 36.48 & 0.975 \\ EfficientSCI-L & 12.39 & 1893.72 & 36.92 & 0.977 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Computational complexity and reconstruction quality of several SOTA algorithms on 6 grayscale benchmark datasets.
Adam [19] to optimize the model with an initial learning rate of 0.0001. After iterating for 300 epochs, we adjusted the learning rate to 0.00001 and continued to iterate for 40 epochs to get the final model parameters. The peak signal-to-noise ratio (PSNR) and structural similarity index metric (SSIM) [38] are used as the performance indicators of reconstruction quality.
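For reference, per-frame PSNR and SSIM can be averaged as in the sketch below, which assumes reconstructed and ground-truth frames are floating-point arrays scaled to [0, 1]; the paper's exact evaluation code may differ.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_video(recon, truth):
    # recon, truth: (B, H, W) arrays of reconstructed and ground-truth frames.
    psnr = np.mean([peak_signal_noise_ratio(t, r, data_range=1.0)
                    for r, t in zip(recon, truth)])
    ssim = np.mean([structural_similarity(t, r, data_range=1.0)
                    for r, t in zip(recon, truth)])
    return float(psnr), float(ssim)
```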
### Results on Simulation Datasets
#### 5.3.1 Grayscale Simulation Video
We compare our method with SOTA model-based methods (GAP-TV [44], PnP-FFDNet [46], PnP-FastDVDnet [47], DeSCI [21]) and deep learning-based methods (BIRNAT [8], RevSCI [7], GAP-CCoT [36], DUN-3DUnet [40], ELP-Unfolding [42]) on simulated grayscale datasets. Table 4 shows the quantitative comparison results; it can be observed that our proposed EfficientSCI-L can achieve the highest reconstruction quality and has good real-time performance. In particular, the PSNR value of our method surpasses the existing best method ELP-Unfolding by 1.46 dB on average. In addition, our proposed EfficientSCI-T achieves high reconstruction quality while offering the best real-time performance. It is worth noting that, for a fair comparison, we uniformly test the running time of all deep learning based methods on the same NVIDIA RTX 3090 GPU. Fig. 5 shows the visual reconstruction results of some data. By zooming in on some local areas, we can observe that our method can recover sharper edges and more detailed information compared to previous SOTA methods. The mid-scale color results are shown in the SM due to space limitations, and our method outperforms the previous SOTA by 2.02 dB in PSNR on the benchmark dataset [47].
#### 5.3.2 Large-scale Color Simulation Video
Most deep learning based methods, such as BIRNAT [8], DUN-3DUnet [40], cannot be applied to large-scale data reconstruction due to excessive model complexity and GPU memory constraints. RevSCI [7] uses a reversible mechanism and can reconstruct a 24-frame RGB color video with size of \(1080\times 1920\times 3\), but training the model is extremely slow. GAP-CCoT [36] and ELP-Unfolding [42] only use 2D convolution for reconstruction and thus cannot handle color video data well. Therefore, we only compare with several SOTA model-based methods (GAP-TV [44], PnP-FFDNet-color [46], PnP-FastDVDnet-color [47]) on large-scale color data. Table 5 shows the comparisons between our proposed method and several model-based methods on PSNR, SSIM and test time (in minutes). It can be observed that model-based methods either have long reconstruction time (PnP-FFDNet-color, PnP-FastDVDnet-color) or low reconstruction quality (GAP-TV). Our proposed EfficientSCI-S can achieve higher reconstruction quality and running speed. Especially on the UHD color video football (\(1644\times 3840\times 3\times 40\)), the PSNR value of our method is 2.5 dB higher than PnP-FastDVDnet-color, and the reconstruction time is only \(0.5\%\) of that required by PnP-FastDVDnet-color. Fig. 6 shows some visual reconstruction results. By zooming
\begin{table}
\begin{tabular}{c|c c c c c c c c c} \hline \hline Method & Kobe & Traffic & Runner & Drop & Crash & Aerial & Average & Test time(s) \\ \hline GAP-TV & 26.46, 0.885 & 20.89, 0.715 & 28.52, 0.909 & 34.63, 0.970 & 24.82, 0.838 & 25.05, 0.828 & 26.73, 0.858 & 4.2 (CPU) \\ PnP-FFDNet & 30.50, 0.926 & 24.18, 0.828 & 32.15, 0.933 & 40.70, 0.989 & 25.42, 0.849 & 25.27, 0.829 & 29.70, 0.892 & 3.0 (GPU) \\ PnP-FastDVDnet & 32.73, 0.947 & 27.95, 0.932 & 36.29, 0.962 & 41.82, 0.989 & 27.32, 0.925 & 27.98, 0.897 & 32.35, 0.942 & 6.0 (GPU) \\ DeSCI & 33.25, 0.952 & 28.71, 0.925 & 38.48, 0.969 & 43.10, 0.993 & 27.04, 0.909 & 25.33, 0.860 & 32.65, 0.935 & 6180 (CPU) \\ BIRNAT & 32.71, 0.950 & 29.33, 0.942 & 38.70, 0.976 & 42.28, 0.992 & 27.84, 0.927 & 28.99, 0.917 & 33.31, 0.951 & 0.1 (GPU) \\ RevSCI & 33.72, 0.957 & 30.02, 0.949 & 39.40, 0.977 & 42.93, 0.992 & 28.12, 0.937 & 29.35, 0.924 & 33.92, 0.956 & 0.19 (GPU) \\ GAP-CCoT & 32.58, 0.949 & 29.03, 0.938 & 39.12, 0.980 & 42.54, 0.992 & 28.52, 0.941 & 29.40, 0.923 & 33.53, 0.958 & 0.08 (GPU) \\ DUN-3DUnet & 35.00, 0.969 & 31.76, 0.966 & 40.03, 0.980 & 44.96, 0.995 & 29.33, 0.956 & 30.46, 0.943 & 35.26, 0.968 & 0.58 (GPU) \\ ELP-Unfolding & 34.41, 0.966 & 31.58, 0.962 & 41.16, 0.986 & 44.99, 0.995 & 29.65, 0.959 & 30.68, 0.944 & 35.41, 0.969 & 0.34 (GPU) \\ \hline EfficientSCI-T & 33.45, 0.960 & 29.20, 0.942 & 39.51, 0.981 & 43.56, 0.993 & 29.27, 0.954 & 30.32, 0.937 & 34.22, 0.961 & **0.07 (GPU)** \\ EfficientSCI-S & 34.79, 0.968 & 31.21, 0.961 & 41.34, 0.986 & 44.61, 0.994 & 30.34, 0.965 & 30.78, 0.945 & 35.51, 0.970 & 0.15 (GPU) \\ EfficientSCI-B & 35.76, 0.974 & 32.30, 0.968 & 43.05, 0.988 & 45.18, 0.995 & 31.13, 0.971 & 31.50, 0.953 & 36.48, 0.975 & 0.31 (GPU) \\ EfficientSCI-L & **36.27, 0.976** & **32.83**, **0.971** & **43.79**, **0.991** & **45.46**, **0.995** & **31.52**, **0.974** & **31.64**, **0.955** & **36.92**, **0.977** & 0.45 (GPU) \\ \hline \hline \end{tabular}
\end{table}
Table 4: The average PSNR in dB (left entry) and SSIM (right entry) and running time per measurement of different algorithms on 6 benchmark grayscale datasets. The best results are shown in bold and the second-best results are underlined.
Figure 5: Selected reconstruction frames of simulated grayscale data. Zoom in for better view.
in local areas, we can observe that the reconstruction results of our method are closer to the ground truth. In addition, our proposed model enjoys high flexibility for different compression ratios, that is, the model trained on low compression ratio data can be directly used for high compression ratio video reconstruction tasks. To verify this, we test hummingbird data with different compression ratios \(B=8,16,24,32,40,48\), and the reconstruction results are shown in Fig. 7. We can observe that our method can be applied to video data with different compression ratios; even when the compression ratio \(B\) grows to 48, the PSNR value of the EfficientSCI-T model can still reach more than 32 dB. Moreover, our proposed approach surpasses other reconstruction algorithms at all compression ratios.
#### 5.3.3 Ablation Study
To verify the impact of the proposed ResDNet block and CFormer block on the reconstruction quality, we conduct some ablation experiments. The results are shown in Table 6 and Table 7; we not only compare the reconstruction quality of different models, but also analyze the model parameters and FLOPs. All experiments are conducted on the 6 grayscale benchmark datasets.
\begin{table}
\begin{tabular}{c|c c c c c} \hline \hline Dataset & Pixel resolution & GAP-TV & PnP-FFDNet-color & PnP-FastDVDnet-color & EfficientSCI-S \\ \hline Messi & \(1080\times 1920\times 3\times 8\) & 25.20, 0.874, 0.66 & 34.28, 0.968, 14.93 & 34.34, 0.970, 15.94 & **34.41, 0.973, 0.09** \\ Hummingbird & \(1080\times 1920\times 3\times 30\) & 25.10, 0.750, 20.3 & 28.79, 0.665, 61.20 & 31.17, 0.916, 54.00 & **35.56, 0.952, 0.39** \\ Swinger & \(2160\times 3840\times 3\times 15\) & 22.68, 0.769, 39.2 & 29.30, 0.934, 138.8 & 30.57, 0.949, 138.4 & **31.05, 0.951, 0.62** \\ Football & \(1644\times 3840\times 3\times 40\) & 26.19, 0.858, 83.0 & 32.70, 0.951, 308.8 & 32.31, 0.947, 298.1 & **34.81, 0.964, 1.52** \\ \hline \hline \end{tabular}
\end{table}
Table 5: The average PSNR in dB (left entry) and SSIM (middle entry) and test time (minutes) per measurement (right entry) of different algorithms on 4 benchmark large-scale datasets. Best results are in bold.
Figure 6: Comparison of reconstruction results of different algorithms on several benchmark large-scale color video simulation datasets.
Figure 7: Reconstruction quality (PSNR in dB, higher is better) of different reconstruction algorithms, with varying compression rates \(B\) from 8 to 48.
ResDNet Block: We verify the effect of different group numbers (GN) (corresponding to \(S\) in Eq. 5) and dense connections on the reconstruction quality. As shown in Table 6, the model complexity decreases gradually with the increase of GN, but the reconstruction quality greatly decreases when there are no dense connections in the ResDNet block. By introducing dense connections in the ResDNet block, the reconstruction quality of our proposed method is greatly improved, and a gain of 1.46 dB can be obtained when GN is 4.
CFormer Block: In the CFormer block, we first replace SCB with Swin Transformer (Swin) to verify its effectiveness. Then, we replace SCB and TSAB with two stacked 3D convolutions (S2-3D) to verify the effectiveness of TSAB. As shown in Table 7, compared with Swin Transformer, SCB can bring a 0.52 dB gain. Although the number of parameters and FLOPs have increased, the experimental results show that SCB takes up less memory than the Swin Transformer, which is very important for large-scale and high compression ratio data. Please refer to the SM for a more detailed analysis. Compared with SCB and TSAB, S2-3D not only increases model parameters and FLOPs by \(83\%\) and \(77\%\) respectively, but also reduces the PSNR value by 0.58 dB, which verifies the necessity of using space-time factorization and TSAB.
Number of Channels and Blocks: Table 2 shows that the quality of model reconstruction increases with the number of channels and blocks. However, the number of parameters and FLOPs also increases (see Table 3), resulting in a degradation in the real-time performance of the model.
2303.13681 | Mobile MoCap: Retroreflector Localization On-The-Go | Motion capture through tracking retroreflectors obtains highly accurate pose
estimation, which is frequently used in robotics. Unlike commercial motion
capture systems, fiducial marker-based tracking methods, such as AprilTags, can
perform relative localization without requiring a static camera setup. However,
popular pose estimation methods based on fiducial markers have lower
localization accuracy than commercial motion capture systems. We propose Mobile
MoCap, a system that utilizes inexpensive near-infrared cameras for accurate
relative localization even while in motion. We present a retroreflector feature
detector that performs 6-DoF (six degrees-of-freedom) tracking and operates
with minimal camera exposure times to reduce motion blur. To evaluate the
proposed localization technique while in motion, we mount our Mobile MoCap
system, as well as an RGB camera to benchmark against fiducial markers, onto a
precision-controlled linear rail and servo. The fiducial marker approach
employs AprilTags, which are pervasively used for localization in robotics. We
evaluate the two systems at varying distances, marker viewing angles, and
relative velocities. Across all experimental conditions, our stereo-based
Mobile MoCap system obtains higher position and orientation accuracy than the
fiducial approach.
The code for Mobile MoCap is implemented in ROS 2 and made publicly available
at https://github.com/RIVeR-Lab/mobile_mocap. | Gary Lvov, Mark Zolotas, Nathaniel Hanson, Austin Allison, Xavier Hubbard, Michael Carvajal, Taskin Padir | 2023-03-23T21:29:17Z | http://arxiv.org/abs/2303.13681v2 | # Mobile MoCap: Retroreflector Localization On-The-Go
###### Abstract
Motion capture (MoCap) through tracking retroreflectors obtains high precision pose estimation, which is frequently used in robotics. Unlike MoCap, fiducial marker-based tracking methods do not require a static camera setup to perform relative localization. Popular pose-estimating systems based on fiducial markers have lower localization accuracy than MoCap. As a solution, we propose Mobile MoCap, a system that employs inexpensive near-infrared cameras for precise relative localization in dynamic environments. We present a retroreflector feature detector that performs 6-DoF (six degrees-of-freedom) tracking and operates with minimal camera exposure times to reduce motion blur. To evaluate different localization techniques in a mobile robot setup, we mount our Mobile MoCap system, as well as a standard RGB camera, onto a precision-controlled linear rail for the purposes of retroreflective and fiducial marker tracking, respectively. We benchmark the two systems against each other, varying distance, marker viewing angle, and relative velocities. Our stereo-based Mobile MoCap approach obtains higher position and orientation accuracy than the fiducial approach.
The code for Mobile MoCap is implemented in ROS 2 and made publicly available at [https://github.com/RIVER-Lab/mobile_mocap](https://github.com/RIVER-Lab/mobile_mocap).
## I Introduction
Localization of people and robots is a critical need for automated warehouse facilities. Recognizing the 6D pose (position and orientation) of dynamic, moving objects allows robots to not only understand their current location but also to ensure an appropriate factor of safety. In this context, especially where robots and humans occupy a shared space, safety is paramount and the margin for error has to be minimized. Artificial landmarks, such as fiducial markers, provide an easily identifiable reference point for cameras to track objects in 3D space [1, 2, 3]. Recognizable marker geometries are used to localize fiducial markers and to discriminate between objects. This technique has applications in virtual reality [4], medicine [5], and robotics [6, 7].
To address the limitations of current fiducial-based approaches, we present Mobile MoCap, a stereo visual localization approach that uses retroreflectors, as inspired by commercial visual motion capture systems [8]. Fig. 1 demonstrates an overview of our proposed system. Retroreflectors are commercially available as flat, cut-to-size, textured sheets of adhesive-backed paper or as screw-mounted, textured spheres of plastic. Affixing just three of these retroreflectors to the same rigid target body is all that is needed to extract said target's 6D (six degrees of freedom, _i.e._, 6-DoF) pose by triangulating the unique three-dimensional (3D) positions of each of the attached retroreflectors. These retroreflective markers reflect near infrared (NIR) light into an NIR camera, making them easily identifiable under dim lighting conditions, or even within scenes of complete darkness. Furthermore, unlike two-dimensional (2D) fiducial markers, retroreflectors can be arranged in a non-planar layout, making retroreflector-based detection systems more robust because such markers can be detected from a wider range of IR camera viewing angles.
Our approach yields a twofold benefit over fiducial markers. First, by decreasing the exposure time of the cameras, we inherently reduce motion blur while still being able to isolate retroreflectors from the scene's background. Second, by not being constrained to planar mounting surfaces when distributing retroreflectors over the target object's unique geometry, we can achieve better tracking of the target in 3D space.
We designed a robotic experimental testbed to autonomously benchmark our proposed Mobile MoCap system against AprilTags [1, 3], a popular fiducial marker tracking approach. The robotic testbed consists of a camera assembly mounted onto a servo and actuated linear rail, enabling the cameras' viewing angle and distance to be controlled relative to a static target. In our experiments, we consider different
Fig. 1: Mobile MoCap camera assembly with central camera for dedicated visual fiducial tracking. **(a)** Target tracking plane with fiducial marker and retroreflector tags; **(b)** Segmented tags used in tracking objects
viewing angles and distances, as well as relative velocities to a target that contains both retroreflectors and an AprilTag. We compare the two marker localization techniques and determine their relative performance. The overall setup was designed to mimic a scenario in which a mobile robot aims to localize a static object of interest, such as a charging station or worker wearing a reflective safety vest.
The key contributions of this paper include:
* A lightweight inexpensive visual localization stereo system capable of real-time 6-DoF tracking, which succeeds in dynamic conditions that are challenging to contemporary fiducial marker-based approaches, while offering superior precision.
* A robotics testing platform to benchmark our proposed approach against a pervasive fiducial marker baseline, where the cameras are mounted onto an actuated rail and perform relative localization at various distances, viewing angles, and relative velocities.
* An open-source software framework implemented atop the Robot Operating System (ROS) 2 [9] and OpenCV [10] for easy interoperability with robotic systems and inexpensive camera hardware.
## II Related Work
There are numerous approaches to visually tracking objects or getting a bearing on the environment. One prominent approach is to track an object directly, without using markers or any other landmarks. These "markerless" approaches are attractive because they do not depend on extra hardware for tracking [11]. For example, Region-based Convolutional Neural Networks (R-CNN) can perform markerless tracking by combining regional classification with CNNs [12]. The YOLO network architecture [13] can instead predict multiple object classifications and bounding boxes from a single image, in real-time. Nevertheless, learning-based markerless pose estimation requires extensive training data, and cannot demonstrate localization precision comparable to that of marker-based approaches [14].
In contrast, marker-based tracking methods rely on visually distinct landmarks that can be easily identified by an optical camera. Active markers are a common example, which require a power supply and supporting electronics to function, _e.g._ pulsed RGB LEDs during human hand tracking [15]. These markers excel under low-lighting conditions, but the requirement for a power source makes them more resource-intensive than other solutions. Other examples are low-powered retroreflector tags that can be localized at longer distances [16]. These tags are favored in high-speed vehicle applications, but are still inhibited by the need for supporting hardware.
Passive markers are an alternative solution to remedy this issue. Passive markers require no additional electronics or power supply, and thus offer a lightweight cost-effective tracking approach. For instance, these markers are a preferable choice in full-body motion capture as they ensure that the participant is not encumbered by supporting hardware [8]. One drawback of passive markers is that they need direct line of sight with a camera, and occlusions can prevent effective tracking. Motion capture rigs solve this issue by leveraging multiple cameras to continuously track a single marker from all viewpoints. However, MoCap cannot translate well to robotics considering cameras are in fixed positions on the robot body looking outwards, rather than placed around the environment in a motion capture assembly.
Other common solutions used in autonomous vehicle and robotics domains are fiducial markers, which are visually distinct, flat, black and white printed images, _e.g._ the AprilTag [1] or ARTag [2]. These fiducials are easy and quick to localize, even when using low resolution images that are partially occluded or not orthogonal to the camera's line of sight [1, 3]. Fiducial markers have a plethora of applications in robotics, most notably for visual servoing and autonomous vehicles [17].
Passive fiducial tracking methods can be improved by incorporating retroreflective material and multiple cameras into the setup. Monocular setups are generally lower cost and less space obtrusive [18, 19, 20]. However, stereo cameras offer distinct advantages, such as higher precision and more effective acquisition of scene information [21]. In prior work, stereo cameras with a wide field of view and an occlusion detection algorithm were shown to rival commercially available MoCap systems, albeit not in real-time [22]. Trinocular setups with NIR cameras have also demonstrated the capacity to track passive markers at sub-tenth millimeter precision [23]. Nevertheless, a multiple camera system can rapidly become expensive and typically remains stationary.
Considering the difficulties with marker occlusion, few existing marker tracking systems are mobile, while retaining the advantages of motion capture technology. Previous work in augmented reality for surgical applications has utilized the stereo cameras on the Microsoft HoloLens in order to track a surgical tool of known length [24]. The study's results were reported at sub-centimeter precision under this mobile stereo configuration. We expand on this system by generalizing it for widespread use in robotics, where stereo cameras must handle motion at significantly larger speeds and distances than in augmented reality use-cases.
## III Stereo-based Mobile MoCap
Given the prevalence of the ROS 2 [9] middleware, we implemented our entire software and hardware stacks with the goals of being: 1) open-source; 2) cross-platform; and 3) easily reproducible.
### _System Hardware_
A stereo NIR camera system with an integrated light source can be constructed from inexpensive off the shelf components. ArduCam1 offers an NIR USB camera that provides a camera stream running at 30 frames per second (FPS) and 640x480p resolution. When properly focused, the cameras have a field of view (FOV) of 100\({}^{\circ}\), horizontal, and 138\({}^{\circ}\), diagonal. These cameras utilize a silicon-based
photodetector (CMOS) with broadband sensitivity. We permanently disable the IR cut filter by removing the transition photoresistor, allowing IR tracking in well-illuminated indoor environments that would normally cause the camera to autoswitch to the RGB space. IR LEDs adjacent to the camera lens provide illumination at 850 nm, within the rated tolerance of the photodetector, but outside human visual range. Creating a stereo NIR camera from the ArduCams costs less than $100, including $20 in materials to fix the two cameras together. A full parts manifest and assembly instructions are available on the project website.
The monocular and stereo camera subsystems are combined in series on a laser cut acrylic plate to enable benchmarking of AprilTags against the proposed system. Tags are mounted on a continuous rotation servo (Dynamixel AX-12A). Four 19 mm retroreflective marker spheres (Qualisys) were asymmetrically arranged on the opposite side of the acrylic plate, with one marker placed on a 22 mm standoff for non-planar asymmetry.
### _ROS 2 Software Package_
We have developed a ROS 2 package [9] to facilitate 6-DoF tracking (shown in Fig. 2) of rigid retroreflector geometry, as well as 3D translation estimation of individual retroreflectors. ROS 2 ensures easy interoperability with other robotics systems and has a rich set of transform libraries available, suitable for geometric data processing. As detailed in Fig. 2, the core algorithm steps are encapsulated as ROS 2 nodes that occupy independent threads and can be run in parallel. The OpenCV library in C++ is used for the image processing steps to gain performance, with the pose extraction and utility scripts implemented in Python. The code is publicly available online. To the authors' best knowledge, there is no open-source, publicly available stereo retroreflector tracking package for ROS 2.
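To make the node structure concrete, the following is a minimal sketch of how one stage of the pipeline could be wrapped as a ROS 2 node. The node and topic names, the threshold value, and the use of Python (the released package implements the image processing in C++) are illustrative assumptions, not details of the actual package.

```python
# Minimal sketch of one pipeline stage as a ROS 2 node (names are illustrative only).
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
import cv2


class BlobDetectorNode(Node):
    """Subscribes to a raw NIR image topic and republishes a thresholded debug image."""

    def __init__(self):
        super().__init__('blob_detector')
        self.bridge = CvBridge()
        # Hypothetical topic names; the real package may use different ones.
        self.sub = self.create_subscription(Image, 'nir_camera/image_raw',
                                            self.on_image, 10)
        self.pub = self.create_publisher(Image, 'nir_camera/markers_debug', 10)

    def on_image(self, msg: Image):
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='mono8')
        # Global intensity threshold; retroreflectors dominate the low-exposure image.
        _, binary = cv2.threshold(frame, 200, 255, cv2.THRESH_BINARY)
        self.pub.publish(self.bridge.cv2_to_imgmsg(binary, encoding='mono8'))


def main():
    rclpy.init()
    rclpy.spin(BlobDetectorNode())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```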
### _Filtering and Feature Detection_
The two stereo cameras are initialized with their exposure times minimized (500 \(\mu s\)). The resulting images are predominantly occupied by the retroreflectors and small sections of reflected light noise, as visualized in Fig. 2a. When the 750 nm IR cut filter is disengaged, light sources other than the co-located IR LEDs are easily filtered out as they appear green in the RGB colorspace without IR light removal. Finally, a global intensity threshold is applied to the image, which can be tuned for the desired operating conditions.
For each image, the pixel coordinates of each potential marker center are determined after filtering. The feature detector determines the edges in the image using a Canny Edge Detector (Fig. 2b) [25]. Multiple detections are corrected for by infilling the selected contours (Fig. 2c), resulting in the removal of redundant marker locations. Infilling nested contours helps prevent a single marker from being detected as more than one. After the infilling process is completed, an ellipse is fit to each contour. The marker radius is approximated as the average of the major and minor axis of the ellipse, with the center of the ellipse representing the feature location. Fitting an ellipse to the processed contours provides a better estimation of marker centers when the markers are partially occluded than alternative techniques, such as fitting a minimum enclosing circle to the contours or the circle Hough Transform [26]. The final feature detection results are shown in Fig. 2.
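As a rough illustration of these steps, the sketch below chains a global intensity threshold, Canny edge detection, contour infilling, and ellipse fitting with OpenCV in Python. The threshold and Canny parameters are placeholder values, not the ones used in the released package.

```python
# Sketch of the per-image feature detection steps (parameter values are illustrative).
import cv2
import numpy as np


def detect_marker_centers(gray, intensity_thresh=200):
    """Return (cx, cy, radius) for candidate retroreflectors in a low-exposure NIR image."""
    # Global intensity threshold keeps only bright retroreflector blobs.
    _, binary = cv2.threshold(gray, intensity_thresh, 255, cv2.THRESH_BINARY)
    # Canny edges, then contours of the blob boundaries.
    edges = cv2.Canny(binary, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Infill the contours so nested contours collapse into a single filled blob.
    filled = np.zeros_like(binary)
    cv2.drawContours(filled, contours, -1, 255, thickness=cv2.FILLED)
    contours, _ = cv2.findContours(filled, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    features = []
    for c in contours:
        if len(c) < 5:  # cv2.fitEllipse needs at least 5 points
            continue
        (cx, cy), (major, minor), _ = cv2.fitEllipse(c)
        # Radius approximated from the mean of the ellipse semi-axes.
        features.append((cx, cy, 0.25 * (major + minor)))
    return features
```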
### _Correspondence Matching_
Once the marker centers are determined for both frames, the 3D locations of features are triangulated from corresponded feature pairs and the camera intrinsics/extrinsics. A core benefit of stereo-based tracking is that the 3D location is triangulated from a single 2D feature pair, which is not necessarily correlated to other features in the scene. Camera projection matrices are determined by a stereo calibration.
An alignment method is used to match feature correspondences based on relative geometries, as summarized in Algorithm III-D. For each feature in the first frame, the offset to every other feature is calculated and stored as a relative geometry. Then, for each feature in the second frame, the association that minimizes the pixel difference for that feature's relative geometry to the rest of the features is found. Only one correspondence is assumed to be correct, so if there are several correspondences from a feature in the first frame to features in the second frame, only the association with the lowest pixel error is considered. This geometric matching approach is predicated on the assumption that the cameras have almost the same orientation, similar extrinsics, and are not far enough apart to have vastly different views of features. Using RANSAC [27] to estimate the homography between the two images fails to determine correspondences, as keypoint detectors, such as ORB [28] and SURF [29], do not locate enough keypoints to correctly determine correspondence in sparse low exposure time images.
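The following sketch paraphrases the relative-geometry matching idea in Python; the greedy one-to-one selection and the way offsets are ordered for comparison are our own simplifications, not necessarily the exact procedure of Algorithm III-D.

```python
# Sketch of relative-geometry correspondence matching (simplified paraphrase).
import numpy as np


def match_features(left_pts, right_pts):
    """Greedily associate 2D features between the two frames by comparing, for each
    feature, the offsets to all other features in its own frame."""
    left = np.asarray(left_pts, dtype=float)    # (N, 2) pixel coordinates, frame 1
    right = np.asarray(right_pts, dtype=float)  # (M, 2) pixel coordinates, frame 2

    def relative_geometry(pts, i):
        # Offsets from feature i to every other feature, sorted for comparability.
        offsets = np.delete(pts, i, axis=0) - pts[i]
        return offsets[np.lexsort((offsets[:, 1], offsets[:, 0]))]

    candidates = []
    for i in range(len(left)):
        gi = relative_geometry(left, i)
        for j in range(len(right)):
            gj = relative_geometry(right, j)
            if gi.shape != gj.shape:
                continue
            err = np.linalg.norm(gi - gj, axis=1).mean()  # mean pixel difference
            candidates.append((err, i, j))

    # Keep only the lowest-error association per feature, enforcing one-to-one matches.
    matches, used_left, used_right = [], set(), set()
    for err, i, j in sorted(candidates):
        if i not in used_left and j not in used_right:
            matches.append((i, j, err))
            used_left.add(i)
            used_right.add(j)
    return matches
```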
Once the feature correspondence is found, it is used along with the cameras' projection matrices and detected features
Fig. 2: Feature detection pipeline deconstructed as series of ROS 2 nodes. **(a)** A raw camera image of four retroreflectors with exposure time minimized; **(b)** Detection of contrasting edges on the retroreflectors; **(c)** Elimination of nested contours within the detected region; and **(d)** the final detected contours with their centroids.
to iteratively triangulate the 3D locations of the features. This approach is akin to the Iterative-LS method described by [30]. Triangulation allows a marker's 3D translation to be tracked by the cameras individually.
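As a simplified stand-in for that approach, the snippet below performs plain linear triangulation with OpenCV from the stereo projection matrices and the corresponded pixel pairs; the iterative least-squares refinement of [30] is omitted for brevity.

```python
# Minimal linear triangulation from corresponded pixel pairs (no iterative refinement).
import cv2
import numpy as np


def triangulate(P_left, P_right, pts_left, pts_right):
    """P_* are 3x4 projection matrices from stereo calibration;
    pts_* are (N, 2) arrays of corresponded pixel coordinates."""
    pl = np.asarray(pts_left, dtype=float).T   # shape (2, N)
    pr = np.asarray(pts_right, dtype=float).T
    X_h = cv2.triangulatePoints(P_left, P_right, pl, pr)  # homogeneous (4, N)
    return (X_h[:3] / X_h[3]).T                # (N, 3) marker positions
```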
Extracting a desired object's pose for multiple objects from the 3D locations of the markers is left for future work, as our current approach only supports tracking a single rigid geometry comprised of three markers arranged in a scalene triangle. This is due to the fact that when there are multiple rigid bodies, it is non-trivial to determine which 3D points belong to a specific rigid body, especially when the 3D features belonging to rigid bodies are occluded.
The points are corresponded to a known stored geometry through an exhaustive search that correlates the longest edge between points to the longest edge in the stored scalene triangle. Once the point correspondence is determined, the translation and rotation is determined with a least-squares correspondence of two 3D point sets [31].
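A common closed-form solution to this least-squares alignment problem is an SVD-based method in the style of [31]. The sketch below shows that style of solution and assumes the triangulated points have already been reordered to match the stored geometry.

```python
# Closed-form least-squares alignment of two 3D point sets (SVD-based solution).
import numpy as np


def fit_rigid_transform(src, dst):
    """Return rotation R and translation t such that R @ src_i + t ~= dst_i."""
    src = np.asarray(src, dtype=float)  # (N, 3) stored marker geometry
    dst = np.asarray(dst, dtype=float)  # (N, 3) triangulated markers, same order
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```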
## IV Experiments
To evaluate our Mobile MoCap system, we designed a series of experiments using AprilTags with a monocular camera as a benchmark. An overview of the experimental setup is illustrated in Fig. 3.
### _Experimental Setup_
For the automated evaluation of our Mobile MoCap system, we designed a robotic testing platform that consists of a servo-controlled camera assembly that travels along a linear rail (LoPro), as shown in Fig. 3. The linear rail, belt-driven with a high-precision stepper motor, provides a means of reliably moving the Mobile MoCap assembly at a fixed rate with high repeatability. The architecture and system details for the rail are described in [32]. To track the position of the moving Mobile MoCap assembly relative to a static world frame, we used an OptiTrack V120:Trio (Natural Point), which recorded data at 120 Hz during all trials and serves as the ground truth for our experiments. In order to test the robustness of our system, we ran experiments with the Mobile MoCap assembly both static and moving.
Using spherical (3D) retroreflectors would give the Mobile MoCap system a non-negligible advantage over one that relied on flat (2D) fiducial markers in instances when the assembly's onboard cameras point at the markers from non-orthogonal viewing angles. Therefore, we opted to test our system only using the flat, paper-based variant of retroreflectors to allow for a fairer comparison between retroreflective marker and AprilTag technologies. To accomplish this, we first custom-designed and laser-cut a flat backing plate to adhere both types of 2D markers. The backing plate was mounted to the distal end of the linear rail in a vertical orientation perpendicular to the rail's axis of motion. We opted to produce three 30mm-wide, circular markers from a sheet of retroreflective paper (Qualisys) using a die-cutting machine (Cricut) and arranged them asymmetrically on the backing plate (see Fig. 0(a)); together, the trio of discs cover a cumulative surface area of about 2120 mm\({}^{2}\). We also downloaded a set of AprilTags from the publicly-available tag36h11 family, randomly selected 1 square-shaped tag to test with (tag ID#20), proportionally scaled it to be 52mm wide, and printed it using a traditional office printer; in total, the AprilTag covered a surface area of 2704 mm\({}^{2}\). By forcing both styles of flat markers to live side-by-side on the same plane, we ensure comparable metrics between the two; in fact, given that the AprilTag is roughly 25 percent larger in surface area compared to the combined surface areas of the
Fig. 3: Experimental setup for benchmarking the performance of the linear rail system against visual fiducial markers. _Inset_: Relative pose transforms from the motion capture setup.
set of flat retroreflectors, we intentionally gave the AprilTag marker a theoretical advantage in being seen by more of the RGB camera's pixels.
### _Static Trials_
As a base comparison point of estimation noise, we first tested the Mobile MoCap system at various distances and angles from the marker plate. The distances chosen were 0.9, 1.34, 1.78, and 2.23 m, and the angles chosen were 20, 10, and 0 degrees deflection from normal. Each angle was recorded for 2 seconds at 30 FPS on all three cameras. This was repeated 30 times for each angle and distance, for a total of 120 trials.
### _Constant Angular Velocity Trials_
Next, we check the system's robustness against rotational motion blur by rotating the Mobile MoCap assembly through a 40 degree range of motion with varying angular velocities. First, the Mobile MoCap assembly is fixed at a distance of 1 m from the marker plate. Then, the assembly is rotated at constant angular velocities of 0.05, 0.1, 0.2, and 0.4 rad/s. During this, the three cameras on the mobile assembly record at 30 Hz. Each angular velocity value is run 30 times, for a total of 120 trials.
### _Constant Linear Velocity Trials_
Finally, we test the tracking fidelity of our system while in motion. We moved the assembly back and forth along the rail at constant velocities of 10, 20, 25, and 30 \(\frac{cm}{s}\), starting at 0.9 m and ending at 2.2 m; this was repeated five times.
## V Results
To quantify estimation error in object tracking, we consider metrics related to both the position and orientation. Position error is calculated as the root mean squared error (RMSE) between the OptiTrack's ("ground truth") observed 3D coordinate \((x_{1},y_{1},z_{1})\) and its corresponding estimate \((x_{2},y_{2},z_{2})\) at time \(i\). Given that the OptiTrack operates at a significantly faster frequency rate than the AprilTag and Mobile MoCap tracking methods, every sequence of estimates had to be synchronized to a common rate. This yields \(n\) total samples and a RMSE metric on 3D position expressed as:
\[p_{\text{rmse}}=\sqrt{\sum_{i=1}^{n}\frac{(x_{1,i}-x_{2,i})^{2}+(y_{1,i}-y_{2, i})^{2}+(z_{1,i}-z_{2,i})^{2}}{n}}.\]
To evaluate 3D rotation error, we consider the inner product of the ground truth and estimated unit quaternions. This error function can be viewed as a distance between orientations [33]. Let OptiTrack's ground truth orientation at sample \(i\) be \(\mathbf{q}_{1,i}\) and the estimated quaternion be \(\mathbf{q}_{2,i}\); then the mean error in rotation over the \(n\) samples is:
\[q_{\text{err}}=\frac{1}{n}\sum_{i=1}^{n}\arccos{(|\mathbf{q}_{1,i}\cdot\mathbf{q}_{2,i}|)},\]
where \(q_{\text{err}}\in[0,\pi]\).
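Assuming the ground-truth and estimated streams have already been resampled to a common rate, the two metrics could be computed along the following lines (array layouts and names are illustrative):

```python
# Sketch of the two error metrics over n synchronized samples.
import numpy as np


def position_rmse(gt_xyz, est_xyz):
    """Root mean squared 3D position error; inputs are (n, 3) arrays."""
    diff = np.asarray(gt_xyz, dtype=float) - np.asarray(est_xyz, dtype=float)
    return np.sqrt(np.mean(np.sum(diff ** 2, axis=1)))


def mean_quaternion_error(gt_q, est_q):
    """Mean angular distance (radians) between unit quaternions, in [0, pi]."""
    q1 = np.asarray(gt_q, dtype=float)   # (n, 4) unit quaternions
    q2 = np.asarray(est_q, dtype=float)
    dots = np.clip(np.abs(np.sum(q1 * q2, axis=1)), 0.0, 1.0)
    return np.mean(np.arccos(dots))
```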
For each row of measurements reported in Tables I-IV, the above metrics are averaged over the multiple trials conducted for the same parameter set. The values for \(x\), \(y\), and \(z\) are the mean positional error in each of the three axes, plus or minus the standard deviation of that axis's error.
### _Static_
The results in Tables I and II are quite promising in favor of our system. Every \(p_{\text{rmse}}\) value for the Mobile MoCap system is lower than its corresponding value for AprilTags, with sub-two-centimeter error at 0.9 m from the marker plate at zero degrees. One potential outlier is the \(p_{\text{rmse}}\) value of 0.6 cm at 2.2 m and 0.17 radians in Table II, which is lower than the preceding value, whereas for every other set of three values at each distance, the \(p_{\text{rmse}}\) value increases with angular deflection.
### _Dynamic Angular Velocity_
The results in Table III show a significant trend. In all cases of angular rotation, the Mobile MoCap system offers a smaller positional error than AprilTags. Although at times the improvement is marginal, the results were collected at 1 m distance, where the visual recognition of the AprilTag is still reliable. At further distances, up to the maximum range of the system, the Mobile MoCap performance is likely superior. The orientation error also trends in favor of the Mobile MoCap system, although the performance increase at this distance is still within a standard deviation of the AprilTag. For all systems, increasing \(a_{vel}\) is directly correlated with an increase in pose error.
### _Dynamic Linear Velocity_
The values in Table IV show some distinct trends. As the constant velocity values increase, the \(p_{\textit{rmse}}\) values for AprilTag show an increasing trend, while the \(p_{\textit{rmse}}\) values for Mobile MoCap stay roughly constant. This result suggests that Mobile MoCap is highly robust to motion blur. Additionally, the normalized \(q_{\textit{err}}\) values indicating orientation error are on average larger for AprilTag than they are for Mobile MoCap. These results demonstrate the full potential of the Mobile MoCap system under exposure to significant motion blur. Finally, note that the standard deviations in position and orientation errors are significantly smaller than those of the AprilTag method.
## VI Discussion
Fiducial markers, such as AprilTags [1, 3], are limited by the minimum resolvable distance between points in an image. As AprilTags are based on a lexicographic coding system, the detection pipeline aims to detect line segments and quads within a target area. At long distances, especially when the camera has low resolution or poor focus, the number of pixels occupied by a quad is small, thus decreasing the likelihood of detection. The Mobile MoCap system overcomes this challenge by relying on simpler geometric features that are recognized individually, rather than in the context of a larger grid encoding.
The Mobile MoCap system is not without limitations. Unlike AprilTags, there is no inherent association between marker configuration and object identifier (ID). It is possible to associate a marker geometry to a specific ID, but this is not yet a built-in feature, unlike many fiducial marker libraries. Similar to other more expensive systems, our system is also subject to noise from highly reflective surfaces, such as metallic screw heads or reflective trim on articles of clothing. These items can be falsely detected as retroreflectors at close distances. Most indoor lighting is provided by LED or fluorescent lights spanning the visible light spectrum, resulting in low noise from indoor lights. However, the sun provides a broader spectrum, and incidental sunlight can be falsely detected as a marker. The current iteration of this system is best suited for indoor use, or outdoor use during the night. Nevertheless, our results demonstrate that our system is functional and accurate in dynamic environments where more expensive motion tracking systems optimally operate.
## VII Conclusions
In this work we presented a novel system architecture for an inexpensive, ultra-portable motion capture system. We demonstrated its superior performance to fiducial markers in position and orientation accuracy. In future iterations of this work, we plan to exchange the camera system for one operating in the short-wave IR range, allowing for robust performance both indoors and outdoors. Additionally, this system will be mounted to a variety of mobile robots to explore multi-vehicle tracking. Our system and methods empirically demonstrate that motion capture systems can be made in cost-effective formats while still providing excellent localization accuracy in robot-centric contexts.
## Acknowledgment
The authors would like to thank Alina Spectorov for aid in debugging the Mobile MoCap software. This research has been supported in part by Defense Advanced Research Projects Agency (DARPA) under HR0011-22-2-0004. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. Approved for Public Release, Distribution Unlimited.
|
2305.17769 | Investigating HMIs to Foster Communications between Conventional
Vehicles and Autonomous Vehicles in Intersections | In mixed traffic environments that involve conventional vehicles (CVs) and
autonomous vehicles (AVs), it is crucial for CV drivers to maintain an
appropriate level of situation awareness to ensure safe and efficient
interactions with AVs. This study investigates how AV communication through
human-machine interfaces (HMIs) affects CV drivers' situation awareness (SA) in
mixed traffic environments, especially at intersections. Initially, we designed
eight HMI concepts through a human-centered design process. The two
highest-rated concepts were selected for implementation as external and
internal HMIs (eHMIs and iHMIs). Subsequently, we designed a within-subjects
experiment with three conditions, a control condition without any communication
HMI, and two treatment conditions utilizing eHMIs and iHMIs as communication
means. We investigated the effects of these conditions on 50 participants
acting as CV drivers in a virtual environment (VR) driving simulator.
Self-reported assessments and eye-tracking measures were employed to evaluate
participants' situation awareness, trust, acceptance, and mental workload.
Results indicated that the iHMI condition resulted in superior SA among
participants and improved trust in AV compared to the control and eHMI
conditions. Additionally, iHMI led to a comparatively lower increase in mental
workload compared to the other two conditions. Our study contributes to the
development of effective AV-CV communications and has the potential to inform
the design of future AV systems. | Lilit Avetisyan, Aditya Deshmukh, X. Jessie Yang, Feng Zhou | 2023-05-28T16:32:30Z | http://arxiv.org/abs/2305.17769v1 | Investigating HMIs to Foster Communications between Conventional Vehicles and Autonomous Vehicles in Intersections
###### Abstract
In mixed traffic environments that involve conventional vehicles (CVs) and autonomous vehicles (AVs), it is crucial for CV drivers to maintain an appropriate level of situation awareness to ensure safe and efficient interactions with AVs. This study investigates how AV communication through human-machine interfaces (HMIs) affects CV drivers' situation awareness (SA) in mixed traffic environments, especially at intersections. Initially, we designed eight HMI concepts through a human-centered design process. The two highest-rated concepts were selected for implementation as external and internal HMIs (eHMIs and iHMIs). Subsequently, we designed a within-subjects experiment with three conditions, a control condition without any communication HMI, and two treatment conditions utilizing eHMIs and iHMIs as communication means. We investigated the effects of these conditions on 50 participants acting as CV drivers in a virtual environment (VR) driving simulator. Self-reported assessments and eye-tracking measures were employed to evaluate participants' situation awareness, trust, acceptance, and mental workload. Results indicated that the iHMI condition resulted in superior SA among participants and improved trust in AV compared to the control and eHMI conditions. Additionally, iHMI led to a comparatively lower increase in mental workload compared to the other two conditions. Our study contributes to the development of effective AV-CV communications and has the potential to inform the design of future AV systems.
Mixed traffic, situation awareness, human-machine interface, AV-to-CV communication.
## I Introduction
In recent years, the potential of autonomous vehicles (AVs) has been extensively discussed, as they hold the promise of revolutionizing transportation by reducing accidents caused by human error, enhancing transportation efficiency, and increasing mobility for individuals who are unable to drive [1, 2]. However, a significant challenge in fully realizing the benefits of AVs lies in establishing effective communication between AVs and other road users [3, 4, 5].
In response to the challenge, many studies have focused on the interactions between AVs and vulnerable road users, including pedestrians, bicyclists, and motorcyclists, with an emphasis on designing external human-machine interfaces (eHMIs) through the use of both visual (e.g., lights and laser projections) and auditory cues (e.g., low frequency hum, chime, beep) [6, 7, 8]. The eHMIs could help the AV to convey its intentions to vulnerable road users in complex situations to reduce misunderstandings and improve traffic safety. For example, Bai et al. [6] explored different external interaction modalities, including visual, auditory, and visual+auditory to promote effective communications between AVs and vulnerable road users. Unlike implicit signals through the trajectory and speed of AVs [9], eHMIs adopt a communication strategy to share its intention explicitly, which improves the effectiveness of pedestrians' crossing decisions and cuts down on the amount of time it takes to cross the street [10]. Hensch et al. [11] investigated the perceived safety of eHMIs and pointed out that it increased the perceived trustworthiness and intuitiveness of AVs at the same time.
In addition to investigating the interactions between AVs and vulnerable road users, it is also crucial to examine the interactions between AVs and CVs. As AVs will coexist with CVs on the roads for the foreseeable future, effective communication between these vehicle types is essential for ensuring smooth and safe traffic flow. For instance, when an AV approaches a four-way intersection, it must communicate with other vehicles to ensure their awareness of its presence and facilitate appropriate speed and position adjustments [12].
However, in contrast to the extensive research examining interactions between AVs and vulnerable road users, there is little research examining communication between AVs and CVs, with only a few exceptions. This is concerning, as traditional means of communication between human drivers, such as waving or gestures, will very likely become ineffective in mixed traffic environments [13], since the passenger in an AV may not be available to engage in such communication.
In this study, we aimed to fill the research gap and investigate whether human drivers in CVs could benefit from AVs equipped with appropriately designed HMIs in mixed traffic, especially in challenging traffic situations. We designed a within-subjects experiment in VR where the treatment was the HMI with three different designs: a control condition without any HMI, an HMI located on the front of the AV, denoted as eHMI, and an HMI located inside the CV, denoted as iHMI. We designed two scenarios, in which the AV would either yield or insist on its right of way, to examine how these three design variations impacted the participants' situation awareness (SA) of road conditions, their trust in AVs, their acceptance of such designs, and the mental workload involved in the negotiation process, using both self-reported measures and eye tracking data.
## II Related Work
### _Communications between AVs and vulnerable road users_
Due to the increase in AVs on the road, it is necessary to develop a communication system between AVs and other road users, especially vulnerable road users, such as pedestrians and bicyclists, to increase road traffic efficiency and safety. There are certain situations, such as four-way intersections [12] and pedestrian crosswalks [9], in which the need for communication between AVs and other road users increases. There are several different ways that AVs could communicate with vulnerable road users, including implicit and explicit communications. Implicit communications make use of the AV's ability to detect and respond to the presence of other road users without direct communication, but rather using existing implicit cues, such as vehicle trajectory and speed changes [14]. For example, Dietrich et al. [15] examined the effects of pitch and deceleration of the AV on pedestrian crossing behavior in urban scenarios and showed that changes in pitch and deceleration of the AV had a significant impact on pedestrian crossing behavior, with higher pitch and deceleration leading to more cautious crossing behavior by pedestrians. While such implicit communications seem dominant in modern driving, they might not be adequate in ambiguous situations between AVs and vulnerable road users [16].
Another important communication strategy is to explicitly display the intention on an eHMI so that other road users can understand AV's intention easily. These eHMIs come with different modalities, including visual cues, such as LED strips and screens, laser projections on the road, and auditory signals, such as beeps and chimes. For example, Bai et al. [6] examined six different visual cues and five different auditory cues and found that participants preferred pedestrian silhouette on the front of the AV and chime to other cues and a combination of both visual and auditory cues were preferred the most.
While numerous studies demonstrate that eHMIs improve the interaction between pedestrians and vehicles, it is still unclear exactly what information an eHMI should show. Merate et al. [17] investigated the information needs of vulnerable road users and found that messages about vehicle action, e.g., turning, starting, stopping, were more important than vehicle dynamics, such as speed. Additionally, they pointed out that auditory- and light-based communication methods were preferable to text-based messages. Schieben et al. [18] examined the design of the interaction of AVs with other traffic participants, focusing on human needs and expectations. The study emphasized the importance of effective communication and understanding between AVs and other road users, and the need for AVs to adapt to the behavior and expectations of vulnerable road users. Faas et al. [19] investigated the information that should be displayed on the eHMI of AVs and showed that participants believed that information on the vehicle's speed and direction, as well as information on the vehicle's state, such as whether it was in autonomous mode or not, should be prominently displayed.
The design of eHMI should also take into account factors such as trust, acceptance, and user experience to improve the overall interaction between AVs and other road users by considering human needs and expectations in the design of AVs and their interactions with other road users, especially the vulnerable ones. Eisma et al. [20] conducted an experiment about different message perspectives (i.e., egocentric and allocentric) displayed on eHMIs and examined their effects on participants' understanding. They found that pedestrian-centric perspective was more effective in communicating the vehicle's intent and was better understood by the participants. Lee et al. [21] found that certain eHMI designs could lead to confusion and mistrust among pedestrians, resulting in them taking longer to cross the street and making more mistakes due to the fact that these designs confused the participants. Therefore, it is important that design of eHMIs for AVs should be carefully considered to ensure that they effectively communicate the vehicle's intent and do not negatively impact other road users' behavior.
### _Communications between AVs and CVs_
As AVs become more prevalent, they will inevitably share the road with manually driven vehicles or CVs. This mixed traffic environment presents a key challenge of communication between these two types of vehicles that must be addressed to ensure the safety and efficiency of the transportation system. However, there is a lack of research investigating the communication between AVs and CVs, especially in complex and ambiguous situations, such as intersections without traffic lights and bottleneck scenarios, where traditional vehicle communications often fail. Aoki and Rajkumar [22] investigated dynamic intersection scenarios with possible collision risks and showed that, using V2V communications and sensor systems, AVs effectively communicated with other road users and positively impacted traffic flow. Rettenmaier et al. [23] studied different strategies of communication between AVs and CVs in a bottleneck scenario where two cars came facing each other while only one was able to pass. The results showed that participants found the display more visible than the projection; clearly visible displays were preferable to laser projections, which were not easily legible at greater distances [20]. In another study, Rettenmaier et al. [24] found that a well-designed eHMI using two arrows to indicate the direction had a positive impact on comprehensibility, transferability, and simplicity, and that participants had significantly shorter passing times and fewer crashes compared to other eHMI designs. Papakostopoulos et al. [12] investigated the effect of eHMIs on drivers' ability to infer the motion intention of AVs. They found that the presence of eHMIs on the AV accelerated CV drivers' decision-making, improving participants' driving behavior and reducing the overall crossing time. Therefore, eHMIs can have a positive impact on other vehicles' behavior; their design should be carefully considered to ensure that they effectively communicate the AV's intention and do not negatively impact the safety of other road users.
## III Method
### _Designing HMIs with a human-centered approach_
In the first phase of this study, we employed a human-centered design process [25] to develop interfaces to facilitate
communication between CVs and AVs for a specific scenario in intersections. To empathize with CV drivers and understand their needs, we reviewed existing literature and had a class discussion (with more than 30 master students in the human-centered design program in University of Michigan-Dearborn) about the challenges faced by communications between CVs and AVs. Based on our analysis, we identified three main challenges, including: 1) conspicuousness - easy to see visually, 2) comprehensibility - easily understandable with minimal cognitive effort, and 3) identifiability - easy to recognize to whom it was addressed.
Then, we had a brainstorming session to generate a wide range of possible ideas for the "left turn" scenario in an intersection and followed the design principles outlined in Rettenmaier et al.'s study [23] and Avetisyan et al.'s SA framework [3] for informational needs. Ultimately, we narrowed down to eight design concepts (see Fig. 1) that included two versions for the AV to 1) yield and 2) insist on the right of way, and were presented using different visual formats, i.e., signs, texts, or a combination of both. Designs 1 to 4 included SA level 1 (i.e., perception of the items in the environment) and level 2 (i.e., comprehension of the current situation) information, while designs 5 to 8 included additional SA level 3 information (i.e., projection of future status of the environment). To facilitate communication between CVs and AVs in the selected scenario, we broke down the HMI message into three parts, which described the traffic situation, suggested a course of action, and aimed to improve trust. Firstly, we added a sign or text that described the current issue, namely that the traffic lights were malfunctioning. Secondly, we included right-of-way information that informed CV drivers how to proceed, incorporating well-known traffic signs and colors to enhance comprehension. Thirdly, we attempted to provide extra information to increase the confidence of CV drivers.
To evaluate the interface designs, we conducted an online survey study with 32 participants who were introduced to the "left turn" scenario and asked to show their level of agreement for the prototypes of the design concepts using a 7-point Likert scale based on four statements: 1) The message is easy to understand, 2) The message contains relevant details, 3) The message helps to respond quickly, and 4) The message is preferred with text only. The final score was calculated as the average of these four statements. Among the eight designs, the second interface design (see Fig. 1) received the highest rating of 5.28 and was chosen to be tested in the second phase of the study using a driving simulator in a VR environment.
### _Participants_
In the design phase, a total of 32 students participated and evaluated the eight concepts. In this phase, participants were compensated with 1% course credit.
In the experiment phase, a total number of 50 participants took part in the experiment. Due to the severe motion sickness and eye calibration failures, six participants did not finish the experiment. Therefore, the data analysis was conducted based on the remaining 44 participants (11 females and 33 males; Age = 24.4 \(\pm\) 4.19 years old) who were university students located in the United States and possessed a valid U.S. driver's license. On average, participants had \(5\pm 4.52\) years of driving experience and drove approximately \(5\pm 1\) days per week. This study was conducted in accordance with the ethical requirements of the Institutional Review Board at the University of Michigan (application number HUM00219554). Participants received a compensation of $20 in cash upon completion of the study, and the average completion time was 35 minutes for male and 41 minutes for female participants. Participants experienced low levels of motion sickness in the experiment (i.e., mean ratings were 1.66 and 1.70 for male and female participants on 7-point Likert scales).
### _Apparatus_
The experiment was conducted in a virtual reality driving simulator at the University of Michigan-Dearborn using a desktop computer with an Intel Xeon(R) W-2104 CPU processor running at 3.20GHz, 64.0 GB of RAM, and an NVIDIA GeForce RTX 3060 graphics card with 12 GB of memory. The operating system used was Windows 10. The experiment employed an HTC Vive Pro Eye headset (Taipei, Taiwan) in combination with a Logitech G29 Driving Force (Lausanne, Switzerland) steering wheel and floor pedal. Four different drives (one for training and three for experimentation) were created using the Unity game engine (San Francisco, CA). All self-reported data was collected using a survey developed and administered through the Qualtrics (Seattle, WA) platform.
### _Experiment Design_
**Independent variables.** In this study, a within-subjects experiment was conducted, in which the independent variable was the communication interface with three conditions: control, eHMI, and iHMI. The control condition did not have any explicit communication between the AV and the CV. In the eHMI condition, the AV communicated with the CV through an external display attached to the front of the AV, which shared the current traffic situation and right-of-way information from the egocentric perspective (the CV driver's perspective). In the iHMI condition, the AV communicated with the CV through an internal display that appeared on the CV's windshield and shared the same information as described for eHMI.
**Dependent variables.** The study collected both self-reported and eye tracking data. Self-reported measures were used to assess SA, trust in AV, interface acceptance. _Situation awareness_ was assessed using the Situation Awareness Global Assessment Technique (SAGAT) technique [26], where the SA questions were designed following the method used in previous studies [3, 27]. After each trial, the participants were asked to answer SA questions (see Table I) for two interactions separately, where they chose all the applicable answers out of the 5 possible options. The final SA score was measured based on the number of correct answers, with a score range of 0 to 3. To prevent negative impacts on participants' engagement in VR and to avoid causing additional motion sickness, the SA was evaluated in the post-trial phase. _Trust_ in AV was measured with Jayaraman et al.'s [28] trust
scale with 7-point Likert scales. At the end of each session, participants evaluated their trust in five dimensions: competence, predictability, dependability, responsibility, reliability over time and faith, by answering question presented in Table II. To understand the acceptance of the proposed concepts as communication interfaces, Van der Laan et al.'s [29] nine-item acceptance measure was applied, where participants rated them in two dimensions: 1) perceived usefulness (i.e., useful, good, effective, assisting, and raising alertness), which focused on the functional aspects of the concepts and how well it could assist the participant in current situations, and 2) satisfaction (i.e., pleasant, nice, likable, and desirable) which focused on the overall emotional responses and fulfillment of expectations after experiencing the concepts. The measures were assessed by 7-point Likert scales (see Fig. 2) after each trial was completed. Furthermore, the participants' simulation sickness was evaluated using the Simulator Sickness Questionnaire (SSQ) [30] on a 7-point Likert scale.
To collect eye-tracking data from participants, the integrated eye tracker in the HTC VIVE Pro Eye headset was used with a resolution of 1440 x 1600 pixels per eye, and a 110-degree horizontal field of view. The eye tracker had a 120 Hz frequency of gaze data output with an accuracy of 0.5-1.1 degrees. In total, 7 measures were collected and used to analyze the pupil diameter and eye openness (see Table III). Previous studies showed that the mean change in pupil diameter is a reliable measure since it can eliminate confounding factors, e.g., environmental illumination, that could potentially influence pupil diameter [31].
Therefore, to investigate how participants' mental workload and vigilance [32, 33] were influenced by the interface conditions, mean pupil diameter and eye openness changes were examined. Prior to the analysis, the raw data went through four steps of data processing. First, the invalid eye data rows and outliers were removed. Next, the records of the left and right eyes were combined by taking their mean values. Third, the interaction moments were identified using vehicle positions in the VR, and the data was segmented into the ten seconds before and after the interaction with the AV. Finally, the mean pupil diameter change and eye openness change were calculated per interaction, and the average of the two interactions was used for further analysis.
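A rough pandas sketch of this four-step preprocessing is given below; the column names and validity convention are assumptions, as the released data format is not specified here.

```python
# Sketch of the four-step pupil-diameter preprocessing (column names are assumed).
import pandas as pd


def mean_pupil_change(df, interaction_time, window_s=10.0):
    """df columns: timestamp, left_valid, right_valid, left_pupil, right_pupil."""
    # 1) drop invalid samples and non-positive (outlier) pupil readings
    df = df[(df.left_valid == 1) & (df.right_valid == 1)].copy()
    df = df[df[['left_pupil', 'right_pupil']].gt(0).all(axis=1)]
    # 2) combine left and right eyes by their mean
    df['pupil'] = df[['left_pupil', 'right_pupil']].mean(axis=1)
    # 3) segment ten seconds before and after the interaction with the AV
    before = df[(df.timestamp >= interaction_time - window_s) & (df.timestamp < interaction_time)]
    after = df[(df.timestamp >= interaction_time) & (df.timestamp < interaction_time + window_s)]
    # 4) mean pupil diameter change for this interaction
    return after['pupil'].mean() - before['pupil'].mean()
```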
the eye tracker, and the participants had a training session where they had an opportunity to familiarize themselves with the VR environment and driving equipment through a test drive (see Fig. 3). During this training session, participants experienced interactions with AVs similar to the experimental scenarios and were introduced to the two interface concepts. Upon completion of the training session, participants were given the choice to either continue or stop their participation in the experiment. Since a within-subjects experimental design was employed, after the training sessions, participants experienced each of the three experiment sessions corresponding to the three conditions, i.e., eHMI, iHMI, and control conditions, in a randomized order. At the beginning of each session, the participants received a brief introduction to the current interface. During the drive, the eye tracking outputs and vehicle trajectory were recorded. At the end of each session, the simulation was paused and the participants filled in the survey section required to measure the dependent variables (i.e., SA, trust, and acceptance). At the end of the last session, participants were asked to evaluate the 16 symptoms of simulation sickness listed on the SSQ. Overall, the experiment took approximately 38 minutes to complete, with 15 minutes allotted for demographic information, instruction, and training, and approximately 5 minutes for each experimental session. The participants also had the option to take a 5-minute break between each experiment condition.
### _Scenario Design_
To develop an interaction scenario, we investigated high-risk traffic situations that result in communication issues or even crashes. A typical scenario was selected from Najm et al.'s crash report [35], which analyzed police-reported crashes and categorized them based on five metrics: frequency of occurrence, number of people involved, injury severity, economic cost, and functional years lost. The scenario was adapted for our experiment to focus on the communication between AVs and CVs. In each scenario, there were two variations with an AV at two different intersections where the CV driver was instructed to turn left by the integrated navigation system's voice (as shown in Fig. 4). In one interaction, the AV yielded its right of way, while in the other interaction, it insisted on the right of way. When the distance between the AV and the CV reached a predefined value, the AV displayed a message informing that the traffic lights were not functioning, and the CV either had the right of way or had to yield to the AV.
In the eHMI condition, the messages were displayed on the front of the AVs, as shown in Fig. 5c and 5d, and disappeared as the vehicles passed each other. In the iHMI condition, the message was displayed on the CV's windshield as shown in Fig. 5a and 5b. In the control scenario, the AV did not have a communication interface, and the CV driver had to rely on their own understanding to navigate.
## IV Results
As the experiment was conducted using a within-subjects experimental design, a repeated measures one-way analysis of variance (ANOVA) was conducted to examine how the interface conditions affected the dependent variables. For the
\begin{table}
\begin{tabular}{p{42.7pt} p{341.4pt}} \hline \hline Dimension & Question \\ \hline Competence & To what extent did the autonomous cars perform their function properly i.e., recognizing you and reacting for you? \\ Predictability & To what extent were you able to predict the behavior of the autonomous cars from moment to moment? \\ Dependability & To what extent can you count on the autonomous cars to do its job? \\ Responsibility & To what extent the autonomous cars seemed to be wary of their surroundings? \\ Reliability over time & To what extent do you think the autonomous car’s actions were consistent through out the interaction? \\ Faith & What degree of faith do you have that the autonomous cars will be able to cope with all uncertain ties in the future? \\ \hline \hline \end{tabular}
\end{table} TABLE II: The trust questionnaire [28].
Fig. 3: Experiment setup with VR driving simulator. The participant was wearing HTC Vive headset and driving a CV via Logitech G29 steering wheel and pedals. The screen showed the participant’s view in the VR environment.
\begin{table}
\begin{tabular}{p{42.7pt} p{341.4pt}} \hline \hline Measures & Description \\ \hline Timestamp & The current time of data recording \\ Eye validity & The variable that explains validity of eye data \\ Eye openness & The level of eye openness \\ Pupil diameter & The diameter of pupil \\ \hline \hline \end{tabular}
\end{table} TABLE III: Eye tracking measures. Each measure was collected for the left and right eye separately.
Fig. 2: The acceptance scale [29]. The usefulness measure was the average of useful, good, effective, assisting, and raising alertness items. The satisfaction measure was the average of pleasant, nice, likable, and desirable items.
post-hoc analysis, a Bonferroni correction method was applied. The statistical analyses were carried out using the MATLAB and R programming languages, with an alpha level of .05 set for all the tests. The effects of the interface conditions on the dependent variables were assessed using \(\eta^{2}\), i.e., the proportion of variance in the dependent variables explained by the conditions.
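The analyses were run in MATLAB and R; purely as an illustration, an equivalent repeated measures ANOVA with Bonferroni-corrected paired post-hoc tests could be set up in Python as follows (long-format data with hypothetical column names):

```python
# Illustrative Python analogue of the analysis (the study itself used MATLAB and R).
# Long-format df has one row per subject and condition, with columns: subject, condition, score.
from itertools import combinations

import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM


def rm_anova_with_bonferroni(df: pd.DataFrame, dv: str = 'score') -> None:
    # One-way repeated measures ANOVA across the three interface conditions.
    aov = AnovaRM(df, depvar=dv, subject='subject', within=['condition']).fit()
    print(aov)

    # Bonferroni-corrected pairwise paired t-tests as post-hoc comparisons.
    conditions = list(df['condition'].unique())
    n_comparisons = len(list(combinations(conditions, 2)))
    for a, b in combinations(conditions, 2):
        xa = df[df.condition == a].sort_values('subject')[dv].to_numpy()
        xb = df[df.condition == b].sort_values('subject')[dv].to_numpy()
        t, p = stats.ttest_rel(xa, xb)
        print(f'{a} vs {b}: t = {t:.2f}, p_bonf = {min(p * n_comparisons, 1.0):.3f}')
```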
### _Situation awareness_
SA was measured at two interactions for each interface condition and the mean SA score was used in the analysis. The results of one-way repeated measures ANOVA showed that there was a significant difference among three conditions (\(F(2,131)=13.64,p=.000,\eta^{2}=.132\)). As illustrated in Fig. 6, the post-hoc analysis indicated that sharing the information through eHMI (\(p=.002\)) and iHMI (\(p=.000\)) significantly increased the SA level of the driver in the CV compared to the control condition. The difference between iHMI and eHMI was not significant.
### _Trust_
In order to understand the effects of three interface conditions on trust, the mean of the five trust dimensions was analyzed. The results indicated that trust was significantly different among the tested conditions (\(F(2,131)=25.20,p=.000,\eta^{2}=.233\)). Specifically, the trust level was significantly lower in the control condition than that in the eHMI and iHMI conditions (\(p=.000\)) as shown in Fig. 7. Also, in the eHMI condition the reported trust level was lower than iHMI, but the difference was not statistically significant (\(p=.648\)).
### _Acceptance_
Regarding the acceptance of the designed interface concepts, the results showed that participants' ratings were significantly different across the tested conditions with regard to usefulness (\(F(2,131)=80.84,p=.000,\eta^{2}=.535\)) and satisfaction (\(F(2,131)=45.80,p=.000,\eta^{2}=.360\)) of the concepts. Pairwise comparisons indicated that participants rated the interfaces as significantly more useful than AVs without an explicit communication interface. Additionally, the individual item comparisons in the usefulness dimension showed iHMI was significantly more effective compared to eHMI (\(p=.015\)). As for satisfaction items, no significant difference was found between the eHMI and iHMI conditions (\(p=1.000\)).
Fig. 4: The CV is turning left at an intersection with malfunctioning traffic lights and has to turn left, crossing the path of the incoming AV. The scenario was tested in two different AV behaviors. In one scenario, the AV yielded to the CV, while, in the other, the AV insisted on the right of way.
Fig. 5: Experimental scenarios with (a) go and (b) yield messages in iHMI condition, (c) and (d) in eHMI condition.
Fig. 6: The effect of interface conditions on the SA level of the driver in the CV, where ‘**’ indicates \(p<.01\) and ‘***’ indicates \(p<.001\). Note that the error bar showed the standard error.
Fig. 7: The effect of interface conditions on trust in AVs, where ‘***’ indicates \(p<.001\). Note that the error bar showed the standard error.
### _Pupil diameter and eye openness_
To understand how participants' mental workload and vigilance [32, 33] were influenced by the interface condition, mean pupil diameter and eye openness changes were examined. Due to the eye tracker's technical issues, 6 participants' data were partially missing and were excluded from eye-tracking results. Therefore, eye-tracking measures were analyzed based on data from 38 participants.
The results of one-way ANOVA performed on mean pupil diameter change showed that there was a significant difference among the three conditions (\(F(2,113)=6.69,p=.002,\eta^{2}=.185\)). The post-hoc test showed that pupil diameter change in iHMI condition was significantly lower than that in the control (\(p=.001\)) and eHMI (\(p=.042\)) conditions (see Fig. 9). Regarding the mean eye openness change, the patterns showed that in the control condition, participants tended to frequently squeeze their eyes during the interaction compared to the eHMI and iHMI conditions (see Fig. 10). However, this difference was not statistically significant (\(F(2,113)=1.40,p=.253,\eta^{2}=.057\)).
### _CV speed_
The results of a one-way ANOVA conducted on the mean speed change revealed a significant difference among the three conditions (\(F(2,119)=5.59,p=.006\)) when the AV insisted on its right of way. According to the post-hoc test, the iHMI condition showed a significant drop in the CV speed after displaying the "Yield" message compared to the eHMI condition (\(p=.005\)) and a marginal drop compared to the control condition (\(p=.096\)) (see Fig. 11). However, at the intersection where the AV yielded the right of way, the speed change was not statistically significant (\(F(2,119)=1.32,p=.548\)). Comparing the average CV speed, the results showed that in the control condition participants' speed was significantly lower than in the other conditions (\(F(2,119)=10.98,p=.000\)). Specifically, at the intersection where the CV had the right of way to turn, the participants approached the intersection with significantly lower speed than that in the iHMI condition (\(p=.005\)). Similarly, at the "Yield" intersection, their speed was significantly lower than that in the iHMI (\(p=.027\)) and eHMI (\(p=.000\)) conditions (see Fig. 12).
## V Discussions
### _Situation awareness_
In this study, the participants' SA level was measured in three interface conditions and the results indicated that the SA level was increased by either eHMI or iHMI (see Fig. 6). Compared to the control condition, in the eHMI and iHMI conditions, the participants were more conscious of the traffic situation. In particular, based on the shared information, they were able to identify what the traffic issue was at the moment and who had the right of way to continue driving, and they gave quicker responses with respect to vehicle control. In the treatment conditions, immediately after receiving the visual message, the participants showed reactive behavior by slowing the car down and proceeded shortly after comprehending the message. In the control condition, drivers tended to keep a safe distance until they could understand the AV's intention
Fig. 8: Perceived usefulness and satisfaction of HMIs, where ‘***’ indicates \(p<.001\). Note that the error bar showed the standard error.
Fig. 10: Mean eye openness change and standard error across different interface conditions. Note that a positive change in eye openness indicated wider eyes following the interaction between CV drivers and AV, while a negative change indicated narrower eyes.
Fig. 9: Mean pupil diameter change and standard error across different interface conditions, where ‘*’ indicates \(p<.05\), ‘***’ indicates \(p<.01\). Note that a positive change indicated larger pupil diameter following the interaction between CVs and AVs, while a negative change indicated smaller pupil diameter.
based on the implicit cues of the AV (i.e., speed decrease or continuous driving). This finding was consistent with the research expectations and previous studies that HMIs filled the communication gap between AVs and CVs and reduced the ambiguity level in uncertain situations.
### _Trust_
The results regarding trust indicate that the designed HMIs had a significant positive impact on the participants' trust level (see Fig. 7). Irrespective of the interface conditions, participants exhibited higher trust in AVs compared to the control condition. Additionally, a notable behavioral difference was observed among the conditions. In the control condition, where the participants were informed that the AVs would not communicate with them, they tended to maintain a lower speed (see Fig. 12) compared to the other conditions. However, their reaction to AVs was still delayed, as they took more time to decelerate than the AV required to cross the intersection. As for the eHMI and iHMI conditions, the participants were more inclined to confidently rely on the HMI instructions in the iHMI condition than in the eHMI condition, indicating a greater level of trust in the AV and a quicker reaction.
### _Acceptance_
With respect to acceptance, the results showed that AVs were more acceptable with the designed HMIs than in the control condition. Overall, both types of HMIs were highly acceptable compared to the control condition. However, the iHMI condition was more preferable, which might be due to the perceived ease of understanding the messages. First, the participants indicated that the iHMI grabbed their attention immediately and was easy to see, unlike the eHMI, for which they needed to be attentive to notice that the AV was trying to communicate and had to put extra effort into reading the message at a distance. Additionally, the participants' responses seemed to be faster in the iHMI condition, which was reflected in the changes in vehicle speed when they received the message to yield to the AV (see Fig. 11). Second, in the iHMI condition it was clear to them that the message was addressed to them, meaning that the actions (i.e., yield or go) were displayed from the CV driver's point of view and at a shorter distance, in contrast to the eHMI condition where the receiver could be other road users. This result was consistent with participants' ratings and with previous studies [20] regarding the effectiveness of interfaces, showing that the iHMI condition was more effective compared to the eHMI or control condition and resulted in faster responses.
### _Eye tracking measures_
In this study, we utilized the pupil diameter and eye openness as metrics to evaluate participants' mental workload and vigilance. A significant difference was observed in the mean change of pupil diameter across the tested conditions, with the iHMI condition showing a significantly lower change than the control and eHMI conditions. The change in pupil diameter was established in previous studies as an indicator of changes in participants' vigilance and mental workload [32, 33]. Specifically, the pupil diameter tended to increase with an increase in mental effort and attention, which was often influenced by the level of uncertainty and perceived risk at the moment. As demonstrated in Fig. 9, activation of the iHMI condition led to a noticeable decrease in pupil diameters, showing that the iHMI condition required less processing effort than the other tested conditions.
As for the eye openness, we could not identify significant differences between the three conditions. One possible reason could be that eye openness is less sensitive as a measure for differentiating cognitive workload and vigilance compared to pupil diameter, though it is a good indicator of driver fatigue or drowsiness [36, 37]. Since each trial was conducted in a short period (i.e., 5 minutes), it might not have been sufficient to impact participants' vigilance. A longer experiment duration could potentially provide more reliable results by giving participants more time to adapt to the experimental conditions.
Fig. 11: Mean speed change and standard error across different interface conditions. The change was measured by calculating the average speed three seconds before and after receiving the message. Note that a positive change in speed indicated the CV had positive acceleration after onset of the “Yield” message, while a negative change indicated braking after the message.
Fig. 12: Mean speed and standard error across different interface conditions measured in three second time period before and after onset of “Yield” message.
### _Implications_
Effective communication between vehicles is a vital aspect of developing collaborative driving in a mixed traffic environment. However, due to the complexity of understanding the capabilities and intentions of AVs, it has become more challenging for drivers of CVs to maintain the necessary level of SA, which is critical to ensure transportation safety in mixed traffic. This study showed that SA was improved during the AV-CV interaction using external or internal HMIs at an intersection. Results indicated that sharing the traffic situation and the AV's intention via appropriately designed HMIs boosted human drivers' SA. However, there are no standardized communication protocols so far, which might lead to confusion and misinterpretation of signals. Our study provided an example of designing HMIs with a human-centered process and showed that iHMIs might be more advantageous than eHMIs. More research should be conducted to understand the potential of iHMIs and to establish standard communication protocols, covering design (e.g., visual cues, auditory signals), the location of HMIs, and other elements of the communication protocols.
Moreover, as self-driving technology evolves, establishing public acceptance and trust becomes crucial. Ambiguity in communication between CVs and AVs can raise concerns among road users about the reliability and safety of autonomous systems. Our findings showed that the proposed HMI designs increased drivers' trust in AVs compared to the control condition. We should also point out that extra information in ambiguous scenarios could potentially add workload to the driving task. Therefore, it is important to take this into account when designing HMIs for vehicle communication systems that support real-time decision making. Overall, our findings offer valuable insights for future investigations and implementations of AV-CV communication in mixed traffic.
### _Limitations_
This research also has limitations that can be addressed in future studies. First, more objective measures (e.g., eye fixations on areas of interest) should be collected to better understand the attention requirements of each design. Due to the limitations of the VR headset, we were not able to collect such measures. Second, only one particular scenario was investigated to understand the effects of the proposed HMIs. In future studies, more scenarios should be included to generalize the results to various ambiguous traffic situations, and more road users (e.g., pedestrians and other vehicles) should be involved to bring the scenarios closer to real traffic situations. Third, the study population mainly included university students and was not gender balanced. Future studies should include a more diverse sample to better understand the effects of HMIs on AV-CV communication in mixed traffic.
## VI Conclusions
In this research, we aimed to develop HMIs that facilitate communication between AV and CV drivers and to investigate how such interfaces would influence CV drivers' SA, trust, acceptance, cognitive workload, and vigilance when navigating mixed traffic environments, particularly at intersections. We designed eight different interface concepts and subsequently tested the highest-rated concepts: internal and external HMIs. We also included a control condition for comparison. The effectiveness of the HMIs was evaluated in terms of SA, trust, acceptance, cognitive workload, and vigilance, using participants' self-reported and eye-tracking measures in ambiguous situations where the CV needed to make a left turn at an intersection with malfunctioning traffic lights. We found that the HMIs assisted CV drivers in uncertain situations and resulted in increased SA, trust, and acceptance. The iHMI was considered the most effective method of communication with the AV and resulted in the lowest change in drivers' mental workload.
|
2306.16127 | MLSMM: Machine Learning Security Maturity Model | Assessing the maturity of security practices during the development of
Machine Learning (ML) based software components has not gotten as much
attention as traditional software development. In this Blue Sky idea paper, we
propose an initial Machine Learning Security Maturity Model (MLSMM) which
organizes security practices along the ML-development lifecycle and, for each,
establishes three levels of maturity. We envision MLSMM as a step towards
closer collaboration between industry and academia. | Felix Jedrzejewski, Davide Fucci, Oleksandr Adamov | 2023-06-28T11:56:42Z | http://arxiv.org/abs/2306.16127v1 | # MLSMM: Machine Learning Security Maturity Model
###### Abstract
Assessing the maturity of security practices during the development of Machine Learning (ML) based software components has not gotten as much attention as traditional software development. In this Blue Sky idea paper, we propose an initial Machine Learning Security Maturity Model (MLSMM) which organizes security practices along the ML-development lifecycle and, for each, establishes three levels of maturity. We envision MLSMM as a step towards closer collaboration between industry and academia.
[...] associating activities to mitigate attacks to real-world adversary tactics (Zhang et al., 2023).
## 2 MLSMM Prototype
MLSMM combines state-of-practice maturity evaluation techniques for software product security with state-of-the-art mitigation techniques assigned to particular stages within the ML development process. Following OWASP SAMM, our proposed MLSMM is prescriptive in nature--i.e., it provides high-level guidance and advice to an organization--rather than descriptive--i.e., providing a summary of what other organizations do.4 Table 1 presents an excerpt from the _Model Training_ phase of ML-components development. The model is hierarchical; it starts with the nine phases of ML development (Amershi et al., 2019), each with a variable number of security practices from MITRE ATLAS associated with them. Each security practice has three possible maturity levels, where the activities on a lower level are typically easier to execute and require less formalization than the ones on a higher level. At this initial stage, MLSMM consists of 19 practices. A complete draft is available on the project website.5 Similarly to SAMM, we propose a simple questionnaire measuring the maturity levels. We use ordinal-value answers to assess how well an organization fulfills the activities associated with a level. Based on the example in Table 1, an organization reaches _Maturity Level 1_ in _Model Hardening_ once it performs activities such as adversarial training and network distillation. However, these activities are performed ad-hoc and in an unstructured fashion.6 The organization reaches the next level once the answers to the questionnaire show evidence that hardening is a standardized practice for _every_ model. The final level implies that model hardening is part of the model training process _by design_ rather than done in reaction to specific events. In Table 1, the next security practice assessed for _Model Training_ is _Use Ensemble Methods_. The lowest maturity level indicates the presence of simple ensemble methods introduced during training without providing any security context. Level 2 is reached once the use of ensemble methods is grounded in security activities identified before model development, such as threat modeling. At Maturity Level 3, the organization continuously applies ensemble method shuffling to avoid information leakage. MLSMM does not insist that an organization achieves the maximum maturity in every category, as each organization should determine the target level, for each _Security Practice_, that best fits its needs.
Footnote 6: A maturity level of zero indicates the complete lack of such activities
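To make the hierarchical structure (phase, practice, level) and the questionnaire-based assessment concrete, a minimal scoring sketch is given below. All names, the encoding of the Table 1 excerpt, and the scoring rule (a level counts only if all lower levels are fulfilled) are illustrative assumptions, not part of the MLSMM definition.

```python
# Hypothetical encoding of the Table 1 excerpt: phase -> practice -> level descriptions.
MLSMM_EXCERPT = {
    "Model Training": {
        "Model Hardening": [
            "Model hardening is performed",
            "Model hardening is standardized within the organization",
            "Models are proactively hardened within the organization",
        ],
        "Use Ensemble Methods": [
            "Simple ensemble models",
            "Ensembles are introduced or removed based on identified threats",
            "Ensembles are continuously shuffled to avoid leaking information",
        ],
    },
}


def attained_level(scores):
    """scores: ordinal answers (0 = not at all, 1 = partially, 2 = fully) for levels 1..3.
    The attained maturity is the highest level such that it and all lower levels are
    fully satisfied (an assumed rule, for illustration only)."""
    level = 0
    for score in scores:
        if score == 2:
            level += 1
        else:
            break
    return level


def assess(questionnaire):
    """questionnaire: {phase: {practice: [score_level1, score_level2, score_level3]}}."""
    return {
        phase: {practice: attained_level(scores) for practice, scores in practices.items()}
        for phase, practices in questionnaire.items()
    }


if __name__ == "__main__":
    answers = {
        "Model Training": {
            "Model Hardening": [2, 2, 0],       # levels 1 and 2 fulfilled
            "Use Ensemble Methods": [2, 0, 0],  # only level 1 fulfilled
        }
    }
    for phase, practices in assess(answers).items():
        for practice, level in practices.items():
            desc = MLSMM_EXCERPT[phase][practice][level - 1] if level else "no maturity"
            print(f"{phase} / {practice}: level {level} ({desc})")
```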
## 3 Conclusion and Future Work
We presented our idea for MLSMM --an actionable, domain- and model-agnostic security maturity model to assess the development of ML components based on existing industrial practices and procedures. Our next steps are to i) expand the model to cover additional ML security practices not included within MITRE ATLAS, ii) create a questionnaire to gather evidence to instantiate the model in practice, and iii) validate the model and questionnaire with our industry partners regarding their usefulness and usability.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**ML Phase** & **Security Practice** & **Level 1** & **Level 2** & **Level 3** \\ \hline
\multirow{2}{*}{Model Training} & Model Hardening & Model hardening is performed & Model hardening is standardized within the organization & Models are proactively hardened within the organization \\
 & Use Ensemble Methods & Simple ensemble models & Ensemble approaches are introduced or removed based on specific threats against the model & Ensembles are continuously shuffled to avoid leaking information to attackers \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Excerpt of the proposed Machine-Learning Security Maturity Model. |
2304.14149 | Random sequential adsorption of aligned regular polygons and rounded
squares: Transition in the kinetics of packing growth | We study two-dimensional random sequential adsorption (RSA) of flat polygons
and rounded squares aligned in parallel to find a transition in the asymptotic
behavior of the kinetics of packing growth. Differences in the kinetics for RSA
of disks and parallel squares were confirmed in previous analytical and
numerical reports. Here, by analyzing the two classes of shapes in question we
can precisely control the shape of packed figures and thus localize the
transition. Additionally, we study how the asymptotic properties of the
kinetics depend on the packing size. We also provide accurate estimations of
saturated packing fractions. The microstructural properties of generated
packings are analyzed in terms of the density autocorrelation function. | Michał Cieśla, Piotr Kubala, Aref Abbasi Moud | 2023-04-27T12:43:20Z | http://arxiv.org/abs/2304.14149v1 | Random sequential adsorption of aligned regular polygons and rounded squares: Transition in the kinetics of packing growth
###### Abstract
We study two-dimensional random sequential adsorption (RSA) of flat polygons and rounded squares aligned in parallel to find a transition in the asymptotic behavior of the kinetics of packing growth. Differences in the kinetics for RSA of disks and parallel squares were confirmed in previous analytical and numerical reports. Here, by analyzing the two classes of shapes in question we can precisely control the shape of packed figures and thus localize the transition. Additionally, we study how the asymptotic properties of the kinetics depend on the packing size. We also provide accurate estimations of saturated packing fractions. The microstructural properties of generated packings are analyzed in terms of the density autocorrelation function.
## I Introduction
Random sequential adsorption (RSA) is a numerical protocol used for generating random packings [1; 2; 3]. According to it, the shapes are placed randomly one after another, however, the placing occurs only if the next shape does not overlap any of the previously added shapes. After placing, the position and orientation of each figure remain unchanged. The procedure continues until the packing is saturated - there is no place for any other shape. In contrast to the so-called random close packings (RCP) where the neighboring particles typically are in contact [4], here the packing is rather loose and the mean packing fraction is significantly smaller.
Although the history of RSA begins in 1939, when Flory used the random process described above to study the structure of a linear polymer to which some groups of molecules can be attached at random places [5], the real interest in RSA began in 1980, when Feder noticed that the structure of such two-dimensional random packings resembles monolayers produced in irreversible adsorption experiments [6]. The similarities were so substantial that saturated packing fractions of disks on a flat surface were determined using adsorption experiments [7]. On the other hand, the numerical generation of large, strictly saturated packings was inefficient because, when the packing is almost saturated, the probability that a randomly placed and oriented object will not intersect with any previously added one is tiny. Thus, the number of such attempts has to be very large to place the next figure. Although for some specific shapes there exist methods overcoming this problem, e.g. [8; 9; 10], the properties of the saturated state are still often estimated using the kinetics of packing growth computed for almost saturated packings. For a majority of shapes, the asymptotic kinetics is given by Feder's law
\[\theta-\theta(t)\sim t^{-\frac{1}{d}}, \tag{1}\]
when the packing is close to a saturated state. Here, \(\theta\) is the saturated packing fraction, and \(\theta(t)\) is the packing fraction after \(t\) tries of adding a shape to the packing. Parameter \(d\) depends on shape and packing dimensionality. For example, for \(k\)-dimensional (hyper)spheres packed in the (hyper)space of the same dimensionality, \(d=k\), while for anisotropic, randomly oriented two-dimensional shapes placed on the two-dimensional flat surface \(d=3\)[9; 11; 12]. On the other hand, for parallel squares or rectangles
\[\theta-\theta(t)\sim\frac{\log t}{t}. \tag{2}\]
Both these relations were confirmed analytically [13; 14] and numerically e.g. [9; 11; 12; 15; 16]
Here, we want to study the transition between these two regimes. We tried to achieve this in two ways. The first one is to generate two-dimensional random packings composed of flat regular polygons aligned in parallel. For the RSA of squares, as noted above, the kinetics of packing growth is governed by (2). When the number of regular polygon sides grows, its shape approaches the disk for which kinetics is given by (1) with parameter \(d=2\). A similar study was recently presented in [17], but the author focused on saturated packing fractions while the presented results on the RSA kinetics of squares did not agree with the analytical law (2). The second way is to generate packings built of aligned squares with rounded corners. By increasing the radius of this rounding the shape approaches the disk, thus the transition in the packing's growth kinetics should be visible. The second method seems to be superior to the first one because the radius can be changed continuously, while the number of
regular polygon's sides is a discrete value and the disk is approached only in the limit of an infinite number of sides.
## II Numerical details
The RSA protocol consists of iterations of the following steps (a minimal code sketch is given after the list):
* select the position of a virtual polygon randomly with the probability uniformly distributed over the packing;
* check if the virtual particle does not overlap with any polygon inside the packing;
* if it does not, add it to the packing, otherwise, remove and abandon it.
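A minimal sketch of this acceptance loop for unit squares aligned in parallel, with periodic boundary conditions, is given below. It is the naive version of the protocol (a fixed number of attempts rather than a saturation criterion, and an illustrative system size), so it does not produce saturated packings; it only makes the overlap test and the acceptance rule explicit.

```python
import random


def min_image(d, box):
    """Minimal-image separation along one axis for periodic boundary conditions."""
    return d - box * round(d / box)


def overlap(c1, c2, side, box):
    """Two axis-aligned squares of side `side` overlap iff both center
    separations (with periodic images) are smaller than `side`."""
    return (abs(min_image(c1[0] - c2[0], box)) < side
            and abs(min_image(c1[1] - c2[1], box)) < side)


def rsa_aligned_squares(box=15.0, side=1.0, attempts=50_000, seed=0):
    """Naive RSA: draw random centers uniformly and keep only non-overlapping ones."""
    rng = random.Random(seed)
    placed = []
    for _ in range(attempts):
        trial = (rng.uniform(0.0, box), rng.uniform(0.0, box))
        if all(not overlap(trial, c, side, box) for c in placed):
            placed.append(trial)
    return placed


if __name__ == "__main__":
    squares = rsa_aligned_squares()
    print(f"{len(squares)} squares placed; packing fraction "
          f"{len(squares) * 1.0 / 15.0**2:.3f}")
```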
To generate strictly saturated packings according to the RSA protocol we traced the regions where subsequent particles can be added. This idea was used for the first time by Akeda et al. in the case of packings built of parallel squares [18] and by Wang for disks [8]. The method is based on the division of the packing into small regions called voxels, and each voxel is tested to check whether the center of the next shape can be placed there without overlapping existing polygons. If not, such a voxel is removed from the list of existing voxels. Thus, the random sampling of the position of the virtual shape is limited only to the voxels that are on the list, which speeds up the packing generation. The voxels can be divided into smaller ones to better estimate the region where placing is possible. The simulation ends when there are no voxels left; thus, the packing is saturated. A variant of this method for polygons was invented by Zhang [19] and improved further in [20], where the details of the voxel removal criterion can be found. Although in its original version this method was designed for the generation of saturated packings built of arbitrarily oriented polygons, its restriction to a single orientation is straightforward.
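For parallel squares the voxel test is particularly simple: the set of forbidden center positions around a placed square of side \(a\) is itself an axis-aligned square of side \(2a\), so a voxel can be discarded once it lies entirely inside such an exclusion zone of any placed square. The sketch below illustrates only this removal test, in its simplest form (no voxel subdivision and no periodic images); the names are illustrative, and the production algorithm of Refs. [19; 20] is considerably more involved.

```python
def exclusion_zone(center, side):
    """Forbidden region for the center of a new aligned square of side `side`:
    an axis-aligned square of side 2*side centred on the placed square."""
    x, y = center
    return (x - side, x + side, y - side, y + side)


def voxel_covered(voxel, zone):
    """voxel = (x0, x1, y0, y1); True if the voxel lies entirely inside `zone`."""
    vx0, vx1, vy0, vy1 = voxel
    zx0, zx1, zy0, zy1 = zone
    return zx0 <= vx0 and vx1 <= zx1 and zy0 <= vy0 and vy1 <= zy1


def prune_voxels(voxels, centers, side):
    """Keep only voxels not fully covered by the exclusion zone of some placed square."""
    zones = [exclusion_zone(c, side) for c in centers]
    return [v for v in voxels if not any(voxel_covered(v, z) for z in zones)]


if __name__ == "__main__":
    # A 6x6 grid of voxels of size 0.5 on [0,3]x[0,3] and one unit square at (1.5, 1.5).
    s = 0.5
    voxels = [(i * s, (i + 1) * s, j * s, (j + 1) * s)
              for i in range(6) for j in range(6)]
    remaining = prune_voxels(voxels, [(1.5, 1.5)], side=1.0)
    print(f"{len(remaining)} of {len(voxels)} voxels remain")  # 20 of 36
```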
It should be mentioned that when the sampling of the virtual shape position covers only existing voxels occupying a fraction of the whole packing surface area \(S\), one iteration corresponds to \(S/S_{v}\) iterations in the original RSA protocol, where \(S_{v}\) is the total surface area of these voxels. Additionally, to compare the results obtained for different sizes of packings, the number of iterations is expressed in the so-called dimensionless time units where one unit contains \(S/S_{p}\) iterations. Here, \(S_{p}\) is the surface area of a single polygon. Throughout this study, the number of iterations shall be expressed in these units and denoted as \(t\).
We studied saturated RSA packings built of regular polygons with the number of sides ranging from 3 to 1000. To estimate the kinetics and other properties, we generated 100 independent random packings for each type of polygon. The figures were placed on a square of surface area \(S=10^{6}\), while the surface area of a single polygon was normalized to \(S_{p}=1\). To minimize finite-size effects, periodic boundary conditions were used [10]. The number of iterations needed to form a saturated packing differs significantly between independent packings, as it is, in general, distributed according to a heavy-tailed probability distribution [21]. Therefore, to estimate the asymptotic value of the parameter \(d\) in the power law (1) we restricted the fit to data from the range \([t_{\min}/100,t_{\min}]\), where \(t_{\min}\) is the dimensionless time when the first packing becomes saturated. Such an approach guarantees sufficient statistics and also probes the kinetics close to saturation. It is worth noting that \(\log[\theta(2t)-\theta(t)]\) exhibits the same asymptotic scaling as (1) when plotted against \(\log(t)\)[22; 23], which gives another way to determine the exponent \(d\).
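A minimal sketch of this fitting step, assuming the saturated packing fraction \(\theta\) and sampled pairs \((t,\theta(t))\) are available: the exponent \(1/d\) is obtained as the least-squares slope of \(\log[\theta-\theta(t)]\) versus \(\log t\) restricted to \([t_{\min}/100,t_{\min}]\). The snippet uses numpy, and the synthetic data only illustrate the procedure.

```python
import numpy as np


def fit_exponent_d(t, theta_t, theta_sat, t_min):
    """Least-squares slope of log(theta_sat - theta(t)) vs log(t) on [t_min/100, t_min];
    returns d such that theta_sat - theta(t) ~ t^(-1/d)."""
    t = np.asarray(t, dtype=float)
    theta_t = np.asarray(theta_t, dtype=float)
    mask = (t >= t_min / 100.0) & (t <= t_min)
    slope, _ = np.polyfit(np.log(t[mask]), np.log(theta_sat - theta_t[mask]), 1)
    return -1.0 / slope


if __name__ == "__main__":
    # Synthetic data obeying Feder's law with d = 2 (for illustration only).
    t = np.logspace(1, 6, 200)
    theta_sat = 0.547
    theta_t = theta_sat - 0.3 * t ** (-0.5)
    print(f"fitted d = {fit_exponent_d(t, theta_t, theta_sat, t_min=1e6):.3f}")  # ~2.000
```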
Similarly we studied RSA packings of rounded squares. The shape is parameterized by one additional parameter \(r\) which corresponds to the circle radius at each corner of the square - see Fig. 1. Parameter \(r\) can vary from 0 (square) to infinity (disk), but here it was restricted to \(r\in[0,1]\). The surface area of the shape in Fig. 1 is \(S_{r}=2+4r\sqrt{2}+\pi r^{2}\), and the linear size of the rounded square was always rescaled to obtain \(S_{r}=1\). This parameterization was used in Ref. [24], where one can also find a detailed description of the method for generating saturated RSA packing built of rounded polygons.
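As a concrete check of the rescaling used here, the snippet below evaluates \(S_{r}=2+4r\sqrt{2}+\pi r^{2}\) and the linear scale factor \(1/\sqrt{S_{r}}\) that brings the rounded square of Fig. 1 to unit area; the function names are illustrative.

```python
import math


def rounded_square_area(r):
    """Area of the rounded square of Fig. 1 (square of unit circumradius whose
    corners are rounded with radius r): S_r = 2 + 4*sqrt(2)*r + pi*r^2."""
    return 2.0 + 4.0 * math.sqrt(2.0) * r + math.pi * r * r


def unit_area_scale(r):
    """Linear scale factor that rescales the shape to unit surface area."""
    return 1.0 / math.sqrt(rounded_square_area(r))


if __name__ == "__main__":
    for r in (0.0, 0.1, 0.2, 1.0):
        print(f"r = {r:4.2f}: S_r = {rounded_square_area(r):7.4f}, "
              f"scale factor = {unit_area_scale(r):.4f}")
```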
## III Results
Example saturated packings built of polygons aligned in parallel are shown in Fig. 2. Note that the square with rounded corners characterized by \(r=0.2\) is visually indistinguishable from a normal square.
Figure 1: Parametrization of a rounded square. The circumscribed circle of the square has a unit radius and circumscribed circle of the rounded square has a radius of \(1+r\).
### Kinetics of packing growth
The kinetics is presented in Figs. 3 and 4.
Fig. 3 presents the data according to (2). The only case where a straight line is observed corresponds to packings built of squares (\(n=4\)). In analogy, we can analyze the data according to (1). The dependence of the fitted value of the parameter \(d\) from (1) on the number of regular polygon sides \(n\) is presented in Fig. 4.
We observe that only in the case of packings built of squares the parameter \(d\) describing the kinetics of packing growth significantly differs from 2. For all other regular polygons the kinetics seems to be governed by the power law with \(d=2\) - the same as for spheres, which is consistent with the argument that \(d\) corresponds to the number of particles' degrees of freedom [25; 26]. Here, for all shapes, even for squares, there are only two degrees of freedom corresponding to the position of the center of a two-dimensional shape. For squares, as derived by Swendsen, the kinetics does not follow the power-law (1) but the one described by (2) [14]. However, for time scales \(t<t_{\rm min}\) appearing in our study, it is hard to distinguish between \(\log t/t\) and \(t^{-\alpha}\) with \(\alpha\) slightly smaller than 1 (\(d\) slightly larger than 1). Thus, although the RSA kinetics for parallel squares is governed by (2) - see Fig. 3, we can successfully fit the power law to it - see Fig. 4, with the fitted value of parameter \(d\) slightly larger than 1.
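This point can be illustrated directly: fitting a power law to data that exactly follow \(\log t/t\) over windows comparable to those accessible in simulations returns an effective exponent \(d\) only slightly above 1, and the bias shrinks slowly as the window moves to later times. The short numpy sketch below (with illustrative time windows) demonstrates this.

```python
import numpy as np


def effective_d(t_lo, t_hi, n=200):
    """Fit t^(-1/d) to data that exactly follow log(t)/t on [t_lo, t_hi];
    returns the effective exponent d extracted from the log-log slope."""
    t = np.logspace(np.log10(t_lo), np.log10(t_hi), n)
    residual = np.log(t) / t              # the asymptotics of Eq. (2)
    slope, _ = np.polyfit(np.log(t), np.log(residual), 1)
    return -1.0 / slope


if __name__ == "__main__":
    for window in [(1e2, 1e4), (1e4, 1e6), (1e6, 1e8)]:
        print(f"window {window}: effective d = {effective_d(*window):.2f}")
    # The effective d stays slightly above 1 and approaches 1 only logarithmically.
```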
The above results show the uniqueness of the square shape. This is the only regular polygon that leads to dissimilar kinetics of RSA packing growth. As noted before, to study this phenomenon carefully we also analyzed packings built of squares with rounded corners. The results are presented in Fig. 5. Here, we observe a transition of \(d\) from \(d=1.2\) to \(d=2.0\) for \(r\) between \(r=0.02\) and \(r=0.12\), respectively. Note that it is very hard to visually distinguish a square from a rounded square, even with a non-negligible rounding of \(r=0.2\) - see Fig. 2. It implies that even tiny changes in shape can significantly influence the RSA kinetics. Similar effects were studied
Figure 4: The dependence of the fitted value of the parameter \(d\) from (1) on the number of sides \(n\) of the polygon. The dashed horizontal line corresponds to \(d=2\) and the dashed vertical line denotes \(n=4\). Inset shows the kinetics of packing growth for several different regular polygons.
Figure 5: The dependence of the fitted value of the parameter \(d\) from (1) on the parameter \(r\) describing rounded squares. The dashed line corresponds to \(d=2\).
Figure 3: The dependence of the packing fraction near saturation on \(\ln(t)/t\) for packing built of oriented regular polygons of a different number of sides. Straight lines correspond to the kinetics governed by (2).
Figure 2: Example saturated packings built of equilateral triangles, squares, pentagons, and rounded squares for \(r=0.2\) aligned in parallel. The packing size is \(S=400\) and the periodic boundary conditions are used.
analytically by Baule for two-dimensional shapes placed on a one-dimensional line [22] and then supported by numerical simulations [27]. There, the kinetics depended on the analytical nature of the contact function, which is defined as the separation distance at which two particles are in contact. Interestingly, fine details of the contact function are revealed in numerical simulations only for some percentage of packings, and the value of this percentage depends on the packing size [27]. Therefore, the sharpness and place of the observed transition depend on the packing size. Regardless, it does not explain the uniqueness of the square shape in comparison with other regular polygons.
The last thing related to the kinetics of packing growth that we want to explain is the recent result obtained in [17], where the kinetics of RSA of squares is described by the power law (1) with \(d\approx 2\). The packing sizes considered in that study are significantly smaller than the ones we used. Additionally, the author worked with non-saturated configurations. As those two differences may be the source of the discrepancy, we analyzed how the exponent in a power-law fit depends on the packing size and on the dimensionless time at which we calculate it. The results are presented in Fig. 6. The plots clearly show that the fitted value of the parameter \(d\) is larger for both small and non-saturated packings, which explains the results of [17]. Interestingly, although the saturated packing fractions can be determined quite accurately using relatively small packings [10], the study of the kinetics of packing growth requires packing sizes a few orders of magnitude larger. This observation agrees with the results for the kinetics of packing growth for several different figures placed on a one-dimensional line [27]. It is, however, important to recall that the value of \(d\) for squares will always depend on \(t\), regardless of how large it is, because the true asymptotic behavior is not described by the power law (1), but by (2).
Having obtained saturated packing of squares of different sizes we are able to use another way to determine the kinetics of packing growth. It was analytically shown that for disks the median of dimensionless time at which the last shape is added to the packing \(M[t_{\rm sat}]\) scales with a packing size \(S\) as
\[M[t_{\rm sat}]\sim S^{d}, \tag{3}\]
where \(d\) is the same parameter as in (1) [21]. It seems that a similar relation is also valid for packings built of oriented squares - see Fig. 7. Moreover, the fitted value of the exponent, \(1.147\pm 0.016\), is close to the parameter \(d\) determined from (1) for \(S=10^{7}\), but here the value is size independent.
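A minimal sketch of this alternative estimate, assuming the median saturation times for several packing sizes have already been collected: the exponent is the log-log slope of \(M[t_{\rm sat}]\) versus \(S\). The medians below are synthetic values generated from the fit quoted in Fig. 7 and serve only to illustrate the fitting step.

```python
import numpy as np


def exponent_from_medians(sizes, median_t_sat):
    """Log-log slope of the scaling M[t_sat] ~ S^d across packing sizes."""
    slope, _ = np.polyfit(np.log(sizes), np.log(median_t_sat), 1)
    return slope


if __name__ == "__main__":
    sizes = np.array([1e3, 1e4, 1e5, 1e6, 1e7])
    medians = 1.4351 * sizes ** 1.147   # synthetic medians following the fit of Fig. 7
    print(f"fitted exponent d = {exponent_from_medians(sizes, medians):.3f}")  # ~1.147
```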
In the next sections, we study other basic characteristics of random packings to see if they also reflect the variability of the kinetics of packing growth.
### Mean saturated packing fraction
The mean density of a saturated packing is a basic property of interest. Here, because we generated strictly saturated configurations, we only need to average the obtained densities without any extrapolation. Because the surface area of a single figure is always normalized to 1, the density equals the number of deposited shapes divided by the packing area \(S=10^{6}\). The results obtained are shown in Fig. 8. For regular polygons, we see oscillations of packing fractions between even and odd numbers of polygon sides. This effect was already observed in previous studies [17; 19]. The values presented here are, in general, in agreement with these results - note that in Ref. [19] RSA of unoriented polygons was studied. The packing fraction for rounded squares shows the transition between two limits - the upper one for aligned squares and the lower one for disks. However, this transition occurs for larger \(r\) - it starts at \(r=0.1\) and approaches the packing fraction of disks near \(r=1\), while the kinetics for rounded squares is indistinguishable from the one for disks already at \(r\approx 0.1\). It shows that the behavior of packing fractions is only weakly correlated with the kinetics of
Figure 6: The dependence of the parameter \(d\) from (1) on the packing size \(S\). Inset shows the dependence of the parameter \(d\) on the dimensionless time \(t\) at which the packing generation was stopped for packings of size \(S=10^{7}\).
Figure 7: The dependence of the median of the saturation time \(t_{\rm sat}\) on the packing size \(S\). Dots are the data determined numerically using 100 independently generated packings and the solid line is the fit \(M[t_{\rm sat}]=1.4351\cdot S^{1.147}\).
packing growth for the systems in question. For convenience, the presented data have been collected in Tab. 1.
### Density autocorrelation function
While the packing fraction describes the global structure of a set of shapes, the local statistics of their positions can be better understood by probing the density autocorrelation function which can be defined as follows:
\[g(R)=\lim_{\mathrm{d}R\to 0}\frac{\langle N(R,R+\mathrm{d}R)\rangle}{\theta\,2\pi R \,\mathrm{d}R} \tag{4}\]
where \(\langle N(R,R+\mathrm{d}R)\rangle\) is the mean number of shapes whose centers are placed at a distance between \(R\) and \(R+\mathrm{d}R\) from the center of a given figure. The presence of \(\theta\) in the denominator ensures the normalization \(g(R\to\infty)=1\). The density autocorrelation functions for several packings are shown in Fig. 9. The correlation functions have the typical features of those observed for RSA packings or equilibrium liquids. It was shown that for one-dimensional packings \(g(R)\) vanishes superexponentially [28], which is also observed here. The plots partially explain the behavior of the mean saturated packing fraction. It is the highest for squares, for which, at the same time, we observe the shortest distance between neighboring shapes. On the other hand, the plot farthest to the right corresponds to triangles, which form looser configurations. The density autocorrelation for packings built of rounded squares is practically the same as for squares if the rounding is small (\(r<0.1\)). For larger \(r\) we observe a second maximum, which grows as the shape approaches the disk. This maximum first appears at \(R=\sqrt{2}\), which corresponds to the slight cusp in \(g(R)\) for squares and originates from its rapid decay when squares are not in contact. However, when square corners are rounded, this distance decreases, and the cusp transforms into a peak, which moves left with an increasing radius of rounding \(r\) and diverges in the limit of touching disks [13; 14]. The effect of rounding the squares on the position of the density autocorrelation peak could be interesting to study because it would reveal additional factors that could be used to customize the growth kinetics, saturation, and tightness of packing. It is intriguing that, while both techniques of going from a square to a disk eventually produce similar values of \(d\) and saturation densities, their behavior in terms of the density autocorrelation function appears to be different.
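A minimal numerical estimator of (4) for a finite packing with periodic boundary conditions: histogram all pairwise minimal-image distances and normalize each bin by the pair count expected for an ideal (uncorrelated) system of the same density. The snippet uses numpy; the bin width, box size, and uncorrelated test points are illustrative.

```python
import numpy as np


def pair_correlation(centers, box, r_max, dr):
    """Radial density autocorrelation g(R) for points in a periodic square box."""
    centers = np.asarray(centers, dtype=float)
    n = len(centers)
    diff = centers[:, None, :] - centers[None, :, :]
    diff -= box * np.round(diff / box)                 # minimal-image convention
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    dist = dist[np.triu_indices(n, k=1)]               # each pair counted once
    edges = np.arange(0.0, r_max + dr, dr)
    counts, _ = np.histogram(dist, bins=edges)
    shell_area = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)
    density = n / box ** 2
    ideal_pairs = 0.5 * n * density * shell_area       # expected pairs per bin, ideal gas
    radii = 0.5 * (edges[1:] + edges[:-1])
    return radii, counts / ideal_pairs


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    points = rng.uniform(0.0, 30.0, size=(1000, 2))    # uncorrelated points: g(R) ~ 1
    R, g = pair_correlation(points, box=30.0, r_max=5.0, dr=0.25)
    print(np.round(g, 2))
```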
\begin{table}
\begin{tabular}{c c c} \hline \(n\) & \(\theta\) & \(d\) \\ \hline
[MISSING_PAGE_POST]
\hline \end{tabular}
\end{table}
Table 1: Mean saturated packing fractions obtained from computer simulations. The error of packing fraction \(\theta\) is the standard deviation of the mean value. The error of the parameter \(d\) was calculated using the exact differential method applied to the result of the least square fitting of numerical data to relation (1).
Figure 8: The dependence of the packing fraction on the number of regular polygon sides. The dashed line corresponds to \(\theta=0.547\) approximating the RSA packing fraction of disks [18; 6; 6]. The inset shows the dependence of the packing fraction of the rounded square on parameter \(r\). The dashed lines highlight two limits: \(\theta=0.547\) for the RSA of disks and \(\theta=0.562\) for the RSA of squares. In both plots, dots are the values determined from generated packings. The error bars are smaller than the dot size and thus they are omitted. The thin solid lines connecting dots are to guide the eye.
The results are expected because, if these polygons were stacked in a lattice pattern, one would expect squares to be the most densely packed (since such a packing is least likely to leave corners of the available space open). Moreover, particles with odd numbers of sides will undoubtedly leave more unoccupied space, as their nearest neighbors cannot occupy the space near them without overlap. When the number of sides increases, the saturation packing fraction of disks should be approached. A regular polygon's shape may also affect the volume that it excludes. Because it is more rounded and has a larger internal volume with respect to its circumference, a hexagon has a lower excluded volume than a regular pentagon of the same size.
## IV Conclusions
The square appears to be a unique shape in terms of random sequential adsorption, as two-dimensional oriented packings built of particles of this shape are characterized by significantly different kinetics, given by relation (2), while packings built of all other regular polygons, as well as the majority of other shapes, obey the power law (1). By studying the kinetics of packings built of rounded squares we show that even a quite small rounding, which, in practice, is not noticeable visually, changes the kinetics to the one typically observed in similar settings, namely \(d=2\) for shapes with two degrees of freedom. This transition is not observed in other characteristics. It is important to add that, to study the asymptotic properties of the kinetics of packing growth, relatively large packings have to be generated, preferably as close as possible to their saturation points, contrary to the packing fraction, which can be quite precisely estimated using relatively small packings, as long as periodic boundary conditions are used [10].
For rounded squares, we also observe the transition between the packing fractions of configurations formed by squares and disks. However, the transition occurs for significantly larger values of parameter \(r\) responsible for the amount of rounding than in the case of packing growth kinetics. The study of density autocorrelation functions seems to give additional details regarding packing densities with, however, no further insight into the asymptotic behavior.
To handle processing in real applications, such as "Pickering emulsions" and adsorption in catalysts, dedicated "particle engineering schemes" are becoming necessary. For instance, we have shown here that, for adsorption purposes, molecules or particles of square shape have distinct growth kinetics and saturation, allowing for customizable levels of adsorbability. Moreover, the rounding procedure, and further modifications of the same methodology, may help address the dearth of theoretical models for two-dimensional adsorption described by RSA.
###### Acknowledgements.
Numerical simulations were carried out with the support of the Interdisciplinary Center for Mathematical and Computational Modeling (ICM) at the University of Warsaw under grant no. GB76-1.
|
2307.15707 | Density-polarity coupling in confined active polar films: asters,
spirals, and biphasic orientational phases | Topological defects in active polar fluids can organise spontaneous flows and
influence macroscopic density patterns. Both of them play, for example, an
important role during animal development. Yet the influence of density on
active flows is poorly understood. Motivated by experiments on cell monolayers
confined to discs, we study the coupling between density and polar order for a
compressible active polar fluid in presence of a +1 topological defect. As in
the experiments, we find a density-controlled spiral-to-aster transition. In
addition, biphasic orientational phases emerge as a generic outcome of such
coupling. Our results highlight the importance of density gradients as a
potential mechanism for controlling flow and orientational patterns in
biological systems. | Mathieu Dedenon, Claire A. Dessalles, Pau Guillamat, Aurélien Roux, Karsten Kruse, Carles Blanch-Mercader | 2023-07-28T17:57:02Z | http://arxiv.org/abs/2307.15707v1 | # Density-polarity coupling in confined active polar films:
###### Abstract
Topological defects in active polar fluids can organise spontaneous flows and influence macroscopic density patterns. Both of them play, for example, an important role during animal development. Yet the influence of density on active flows is poorly understood. Motivated by experiments on cell monolayers confined to discs, we study the coupling between density and polar order for a compressible active polar fluid in presence of a +1 topological defect. As in the experiments, we find a density-controlled spiral-to-aster transition. In addition, biphasic orientational phases emerge as a generic outcome of such coupling. Our results highlight the importance of density gradients as a potential mechanism for controlling flow and orientational patterns in biological systems.
Active matter is composed of individual constituents able to extract energy from their local environment to produce mechanical work [1; 2]. This feature gives rise to collective phenomena that play an important role in many biological systems, such as the emergence of polar flocking, motility-induced phase separation or spontaneous flows [1; 2]. For instance, spontaneous flows generated by gradients of active stress have been observed in various systems, including cytoskeleton assays [3; 4; 5], or multicellular ensembles [6; 7; 8; 9; 10]. All these systems can organize into out-of-equilibrium phases with domains featuring orientational order. This order can locally be disrupted by disclinations, often called topological defects, which are associated with rotational flow patterns [2; 11; 12].
Both, theoretical and experimental studies have demonstrated that the interplay between topological defects and active processes concentrates mechanical stress, leading to the formation of density gradients [13; 14; 15; 16; 17; 18]. Reciprocally, cell density variations influence orientational order [19; 20]. Given the growing recognition of topological defects as organizing centers during morphogenesis [7; 8; 17; 21], understanding how density gradients and orientational order interact is essential.
A density-controlled transition between different +1 topological defects was observed in monolayers of polarized cells confined to a disc [17]. At low cell density, spontaneous rotational flows emerged in a spiral multicellular arrangement, whereas for increasing cell density a transition occurred to an aster arrangement without rotational flows, Fig. 1a. Steeper cell density gradients were found for asters compared to spirals, Fig. 1b. In the hydrodynamic description of an incompressible active polar fluid, an aster-to-spiral transition arises from the competition between the active stress and orientational elasticity [22]. The transition corresponds to a spontaneous flow instability [23; 24; 25; 26], where density does not appear explicitly as a control parameter.
In this Letter, we study a coupling between density gradients and orientational order, in the case of +1 topological defects in confined active polar fluids, Fig. 1d. In spreading cell monolayers, this Density-Polarity Coupling (DPC) expresses a tendency of cells to polarize away from high density regions [27; 28; 29]. First, we identify conditions for a density-controlled spiral-to-aster transition. Second, we show that biphasic orientational phases are a generic feature of compressible active polar fluids with such a coupling.
Figure 1: _Density-driven transition of a confined polar tissue._**(a)** Phase-contrast image of a confined monolayer of C2C12 myoblasts, showing a spiral (left) or an aster (right) polar state. Scale bar is 50 \(\mu\)m. Modified from [20]. **(b)** Radial cell density profile for spirals and asters. Data extracted from [17]. **(c)** Schematic representing a polar tissue confined to a disc of radius \(R\), described as a 2D compressible polar fluid with velocity \(\mathbf{v}\), polarity \(\mathbf{p}\) with radial angle \(\psi\), and density \(n\). **(d)** Schematic representing the effect of the Density-Polarity Coupling (DPC), see Eq. (2).
Finally, we discuss the relevance of DPC for monolayers of polarized cells.
To describe a two-dimensional compressible active polar fluid, we use active gel theory [22; 30]. The system is characterised by velocity \(\mathbf{v}(\mathbf{r},t)\), polarity \(\mathbf{p}(\mathbf{r},t)\) and particle density \(n=n_{0}\bar{n}(\mathbf{r},t)\) fields, where \(n_{0}\) is the preferred particle density, Fig. 1c.
The equilibrium physics is captured by an effective free energy \(\mathcal{F}=\int_{A}\mathrm{d}A\,f\) with free-energy density
\[f=\frac{B}{2}\left(1-\bar{n}\right)^{2}+\frac{G}{2}|\mathbf{\nabla}\bar{n}|^{2}+ \frac{K}{2}|\mathbf{\nabla}\mathbf{p}|^{2}+\frac{\chi}{2}\mathbf{p}^{2}+f_{\mathrm{DPC}}. \tag{1}\]
The first two terms penalize density variations with elastic coefficients \(B,G>0\). The second two terms tend to suppress polarity variations with elastic coefficients \(K,\chi>0\). Thus, we favour a disordered phase in the bulk. We use the one-constant approximation [31] for simplicity and leave the general case for future studies.
The last term, \(f_{\mathrm{DPC}}\), accounts for the coupling between density and polarity. The lowest order term in powers of \((\mathbf{p},\mathbf{\nabla}\bar{n})\) with polar symmetry reads
\[f_{\mathrm{DPC}}=J_{\mathrm{p}}n_{0}(\mathbf{p}\cdot\mathbf{\nabla})\bar{n}, \tag{2}\]
which is related to a density-dependent spontaneous splay term of the Frank free energy [31; 32; 29]. Previous works identified a linear instability of an ordered state associated with this coupling [33; 34; 35; 36; 37]. Negative (positive) values of the coupling coefficient \(J_{\mathrm{p}}\) favor (anti-)alignment of polarity to density gradients, Fig. 1d. From now on, we use \(J_{\mathrm{p}}n_{0}\equiv j_{\mathrm{p}}\) as control parameter.
The evolution of the fields \(\bar{n}\), \(\mathbf{v}\) and \(\mathbf{p}\) is determined by the continuity equation, the polarity dynamics and the local force balance:
\[\partial_{t}\bar{n} = -\partial_{\beta}(\bar{n}v_{\beta}), \tag{3a}\] \[D_{t}p_{\alpha} = \frac{h_{\alpha}}{\gamma}-\nu\left(v_{\alpha\beta}-\frac{1}{2}v_{\gamma\gamma}\delta_{\alpha\beta}\right)p_{\beta}, \tag{3b}\] \[0 = \partial_{\beta}(\sigma^{\mathrm{e}}_{\alpha\beta}+\sigma^{\mathrm{d}}_{\alpha\beta}), \tag{3c}\]
where \(\mathbf{h}=-\delta\mathcal{F}/\delta\mathbf{p}\) is the molecular field, \(v_{\alpha\beta}=(\partial_{\alpha}v_{\beta}+\partial_{\beta}v_{\alpha})/2\), and \(\omega_{\alpha\beta}=(\partial_{\alpha}v_{\beta}-\partial_{\beta}v_{\alpha})/2\) are the symmetric and anti-symmetric parts of the velocity gradient tensor, and \(D_{t}p_{\alpha}=\partial_{t}p_{\alpha}+v_{\beta}\partial_{\beta}p_{\alpha}+ \omega_{\alpha\beta}p_{\beta}\) is the co-rotational derivative. The stress is decomposed into the Ericksen and the deviatoric components that read
\[\sigma^{\mathrm{e}}_{\alpha\beta} = -P\delta_{\alpha\beta}-(G\partial_{\beta}\bar{n}+j_{\mathrm{p}}p_{\beta})\partial_{\alpha}\bar{n}-K\partial_{\alpha}p_{\gamma}\partial_{\beta}p_{\gamma} \tag{4a}\] \[\sigma^{\mathrm{d}}_{\alpha\beta} = 2\eta\left(v_{\alpha\beta}-\frac{1}{2}v_{\gamma\gamma}\delta_{\alpha\beta}\right)+\frac{\nu}{2}(p_{\alpha}h_{\beta}+p_{\beta}h_{\alpha}-p_{\gamma}h_{\gamma}\delta_{\alpha\beta})+\frac{1}{2}(p_{\alpha}h_{\beta}-p_{\beta}h_{\alpha})-\frac{1}{2}\zeta_{0}\Delta\mu\,p_{\gamma}p_{\gamma}\delta_{\alpha\beta}-\zeta\Delta\mu\left(p_{\alpha}p_{\beta}-\frac{1}{2}p_{\gamma}p_{\gamma}\delta_{\alpha\beta}\right) \tag{4b}\]
with the pressure \(P=\mu\bar{n}-f\), the chemical potential \(\mu=\delta\mathcal{F}/\delta\bar{n}\), and the chemical potential difference \(\Delta\mu\) extracted from fuel consumption. The phenomenological parameters are the rotational viscosity \(\gamma\), the flow alignment coefficient \(\nu\), the shear viscosity \(\eta\), and the active isotropic (anisotropic) coefficient \(\zeta_{0}\) (\(\zeta\)).
As in the experimental system of Ref. [17], we consider an active fluid confined to a disc of radius \(R\), Fig. 1c. Using polar coordinates \((r,\theta)\), the polarity field is decomposed into the polar order \(S\) and the tilt angle \(\psi\) with respect to the radial direction, so that \(\mathbf{p}=S\cos\psi\,\mathbf{e}_{r}+S\sin\psi\,\mathbf{e}_{\theta}\), where \(\mathbf{e}_{r}\) and \(\mathbf{e}_{\theta}\) are the unit polar vectors. In addition we assume rotational invariance, \(\partial_{\theta}=0\). Because our theoretical description is achiral, without loss of generality, we restrict the range of angles to \(\psi=[0,\pi]\). Using the convention that outward polarity corresponds to \(\psi<\pi/2\), one can classify the different \(+1\) topological defects into out-aster \(\psi=0\), out-spiral \(0<\psi<\pi/2\), vortex \(\psi=\pi/2\), in-spiral \(\pi/2<\psi<\pi\) and in-aster \(\psi=\pi\).
The evolution equations for the fields \(\bar{n}\), \(S\), \(\psi\), \(v_{r}\) and \(v_{\theta}\) are detailed in Supplementary Material (SM) [38]. Motivated by the experiments in [17], spatial boundary conditions at \(r=R\) are set to \(S=1\) (boundary-induced order), \(\partial_{r}\psi=0\) (free orientation), \(v_{r}=0\) (absence of particle flux), and \(\sigma_{\theta r}=0\) (absence of shear stress). At equilibrium, the last boundary condition at \(r=R\) is obtained from the minimization of the free energy (1), which yields \(\partial_{r}\bar{n}=-j_{\mathrm{p}}\cos\psi/G\). We assume that this condition also holds out-of-equilibrium. At \(r=0\), regularity of the solution imposes that \(S=\partial_{r}\psi=\partial_{r}\bar{n}=v_{r}=v_{\theta}=0\).
Parameters are non-dimensionalized by using disc radius \(R\) as length scale, Frank constant \(K\) as energy scale and rotational viscosity \(\gamma\) to obtain a time scale \(\gamma R^{2}/K\). In the following, \(B=12\), \(G=2\), \(\eta=2\), \(\nu=-1.5\) are fixed, and \(\chi\), \(j_{\mathrm{p}}\), \(\zeta\Delta\mu\), \(\zeta_{0}\Delta\mu\) are varied. In numerics, the initial polarity is oriented outwards (i.e. \(\psi(r,t=0)<\pi/2\)), and the total particle density \(\int_{A}\mathrm{d}A\,n/A\) is set to \(n_{0}\) to avoid any pre-stress in the uniform configuration. For more details on the numerical scheme and initial conditions, see SM [38].
First, we consider the case of vanishing activity \(\zeta\Delta\mu=\zeta_{0}\Delta\mu=0\). In this case, the equilibrium states are in- and out- asters, Fig. 2a, which have the same total energy and are selected through spontaneous symmetry breaking. The corresponding density gradients have opposite signs, see SM [38].
Next, in the case of vanishing DPC \(j_{\mathrm{p}}=0\), spontaneous flows occur when \(\zeta\Delta\mu>0\), Fig. 2. Specifically, in- and out-asters transition to rotating spirals when anisotropic activity switches from contractile \(\zeta\Delta\mu<0\) to extensile \(\zeta\Delta\mu>0\), Fig. 2a-c. Unlike in past works [22; 23], here the instability threshold vanishes because of the absence of boundary anchoring. Spirals feature counter-rotating flows with a vanishing net torque because forces are internal, see Fig. 2b. Their steady-state orientation angle \(\psi(r)=\psi_{L}\) satisfies the relation \(\nu\cos(2\psi_{L})=1\)[39], where \(\psi_{L}\) is the Leslie angle, see Fig. 2c. Gradients of density are sustained by active processes in both spirals and asters, with their direction
set by \(\bar{n}^{\prime}\sim-(\zeta\cos(2\psi)+\zeta_{0})\Delta\mu\) for uniform \(\psi\)[40], see Fig. 2d for an extensile spiral.
Based on the above results, when \(j_{\mathrm{p}}\neq 0\) and \(\zeta\Delta\mu>0\), we expect competition between DPC, promoting radial configurations, and the active anisotropic stress driving the polarity towards the Leslie angle. Solving numerically our hydrodynamic equations (3), a spiral-to-aster transition is found at a threshold value of \(j_{\mathrm{p}}\), Fig. 2a and c. As \(|j_{\mathrm{p}}|\) increases near the threshold value, density gradients become steeper and the angle \(\psi\) approaches zero as for the out-aster state thanks to DPC, Fig. 2c,d. In contrast, the polar order parameter remains approximately independent of \(j_{\mathrm{p}}\), Fig. 2e. Importantly, this transition now occurs at a finite threshold of activity, Fig. 2a.
To further understand this competition, we analysed the linear stability of an out-aster to perturbations in the angle \(\psi\), see SM [38]. Neglecting gradients of orientation Fig. 2c, the linear dynamics for the angle perturbation \(\delta\psi\) reduces to
\[\partial_{t}\delta\psi\propto\left\{j_{\mathrm{p}}\bar{n}_{\mathrm{a}}^{ \prime}+\frac{2\zeta\Delta\mu\gamma(1-\nu)S_{\mathrm{a}}^{3}}{4\eta+\gamma S_{ \mathrm{a}}^{2}(\nu-1)^{2}}\right\}\delta\psi \tag{5}\]
where \(S_{\mathrm{a}}(r)\) and \(\bar{n}_{\mathrm{a}}(r)\) are, respectively, the steady-state polar order and reduced density for an out-aster. Assuming that the instability originates from the boundary, we replace these profiles by their boundary values \(S_{\mathrm{a}}=1\) and \(\bar{n}_{\mathrm{a}}^{\prime}=-j_{\mathrm{p}}/G\) in Eq. (5) and obtain the analytical threshold
\[|j_{\mathrm{p}}^{*}|=\sqrt{\frac{2\zeta\Delta\mu G\gamma(1-\nu)}{4\eta+\gamma (1-\nu)^{2}}}. \tag{6}\]
This threshold suggests that an out-aster is linearly unstable for \(\zeta\Delta\mu(1-\nu)>0\) and an intermediate range of the DPC coefficient \(|j_{\mathrm{p}}|<|j_{\mathrm{p}}^{*}|\). Expression (6) is in qualitative agreement with numerics, Fig. 2a. In conclusion, DPC can suppress the spontaneous flow transition and stabilise asters in active polar fluids.
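For a quick numerical illustration (not part of the original analysis), the following sketch evaluates the threshold of Eq. (6) together with the Leslie angle from \(\nu\cos(2\psi_{L})=1\), using the non-dimensional parameter values quoted above (\(\gamma=1\) after rescaling, \(\eta=2\), \(\nu=-1.5\), \(G=2\)) and the anisotropic activity \(\zeta\Delta\mu=2\) of Fig. 2; the variable names are illustrative.

```python
import math

# Non-dimensional parameters quoted in the text (gamma = 1 after rescaling time by gamma*R^2/K).
gamma, eta, nu, G = 1.0, 2.0, -1.5, 2.0
zeta_dmu = 2.0  # anisotropic activity, as in Fig. 2

# Leslie angle from nu*cos(2*psi_L) = 1 (defined only when |1/nu| <= 1, i.e. |nu| >= 1).
psi_L = 0.5 * math.acos(1.0 / nu)

# DPC threshold of Eq. (6): |j_p*| = sqrt(2*zeta_dmu*G*gamma*(1-nu) / (4*eta + gamma*(1-nu)^2)).
j_p_star = math.sqrt(2.0 * zeta_dmu * G * gamma * (1.0 - nu) / (4.0 * eta + gamma * (1.0 - nu) ** 2))

print(f"Leslie angle psi_L = {psi_L:.3f} rad")
print(f"Aster instability threshold |j_p*| = {j_p_star:.3f}")
```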
Let us reconsider the equilibrium case. There, linear stability analysis shows that DPC alone can destabilize a uniform ordered state [33; 34; 35; 36]. Indeed equilibration of density fluctuations leads to an effective Frank free-energy with a renormalized splay constant \(K_{\mathrm{s}}=K-j_{\mathrm{p}}^{2}/B\), whereas the bend constant remains unchanged \(K_{\mathrm{b}}=K\), see SM [38]. For \(K_{\mathrm{s}}<0\), that is above the threshold value \(|j_{\mathrm{p}}^{\dagger}|=\sqrt{KB}\), splay distortions are favoured. In our system, the threshold for this instability \(j_{\mathrm{p}}^{\dagger}\) is modified by activity and boundary conditions. The instability is associated with a finite wavelength, which can generate biphasic orientational phases in the context of \(+1\) topological defects that we analyze in the following.
Beyond the spontaneous splay instability, biphasic asters emerge where in- and out-aster states coexist, Fig. 3a. This state is characterised by a non-monotonic density profile, favouring non-uniform orientations due to DPC, Fig. 3a, and a sharp interface with strong orientation gradients (\(R|\mathbf{\nabla}\psi|\gg 1\)). Because the positive bend constant \(K_{\mathrm{b}}=K\) prevents large gradients of \(\psi\), the polar order \(S\) needs to be sufficiently small to stabilize the interface, Fig. 3a, inset. This can be achieved in the disordered limit \(\sqrt{K/\chi}\ll R\), such that polar order is localized at the disc periphery, see Fig. 3a.
Below the spontaneous splay instability, double spirals can be found. They are characterised by a gradual gradient of orientation (\(R|\mathbf{\nabla}\psi|\sim 1\)), Fig. 3b. This gradient results from a competition between active alignment and DPC, modulated by the local amplitude of polar order \(S\). Indeed, if anisotropic activity dominates over DPC at the periphery (\(S\sim 1\)), spirals are stabilised for \(\zeta\Delta\mu>0\), Fig. 3b.
Figure 2: _Spiral-to-aster transition induced by DPC_. **(a)** Density plot of the peripheral angle \(\psi_{R}=\psi(R)\) at steady-state, as a function of anisotropic activity (\(\zeta\Delta\mu\)) and DPC (\(j_{\mathrm{p}}\)) coefficients. Blue curve: threshold \(|j_{\mathrm{p}}|=|j_{\mathrm{p}}^{*}|\) from Eq. (6). **(b-e)** Radial profiles of azimuthal velocity \(v_{\theta}(r)\) (b), angle \(\psi(r)\) (c), density variation \(\delta\bar{n}=\bar{n}(r)-1\) (d) and polar order \(S(r)\) (e), for \(\zeta\Delta\mu=2\) and \(j_{\mathrm{p}}\) varies as indicated in legend (e) and black arrow (a,c). Gray line in (c): Leslie angle \(\psi_{L}\). Parameters are \(\chi=4\) and \(\zeta_{0}\Delta\mu=0\).
Away from the periphery, where order is weak (\(S\ll 1\)), DPC always dominates, favouring out-asters for inward density gradients, Fig. 3b. Contrary to Fig. 2c,e, where polar order remains large near the center, locally attenuating the competition between active alignment and DPC, here the disordered limit \(\sqrt{K/\chi}\ll R\) results in larger orientational gradients.
These states can be characterized by the peripheral angle \(\psi_{R}=\psi(R)\) and the angle difference between the periphery and the center \(\Delta\psi=\psi(R)-\psi(0)\). Whereas Fig. 3d is apparently similar to Fig. 2a, the state diagram for the angle difference in Fig. 3e reveals biphasic asters and double spirals with \(\Delta\psi\neq 0\), SM [38]. The dependence on activity of the spontaneous splay threshold can be understood from the non-monotonicity of density profiles, as in Fig. 3a. Whereas density gradients \(\bar{n}^{\prime}(R)\) are set by DPC at the periphery, in the bulk, they scale as \(\bar{n}^{\prime}\sim-(\zeta+\zeta_{0})\Delta\mu\) when activity dominates. Therefore, biphasic asters are favoured when \(j_{\mathrm{p}}>0\) and \((\zeta+\zeta_{0})\Delta\mu<0\) or vice-versa, in agreement with state 2 in Fig. 3e when \(\zeta_{0}\Delta\mu=0\), or in Fig. 3g when \(\zeta_{0}\Delta\mu\neq 0\).
At low values of \(|j_{\mathrm{p}}|\), double spirals can emerge, states 4, 6 and 8 in Fig. 3e,g. Whereas peripheral orientation remains outward when \(\zeta_{0}\Delta\mu=0\), Fig. 3e, large isotropic activity can induce inward oriented states 7 and 8 in Fig. 3f,g. These states can no longer be understood from peripheral angle dynamics alone. They appear when anisotropic active stresses overcome DPC at the periphery, in combination with outward (inward) bulk density gradients to promote inward orientation for \(j_{\mathrm{p}}>0\) (\(j_{\mathrm{p}}<0\)). Increasing \(\zeta_{0}\Delta\mu\) to positive values changes the direction of bulk density gradients, and reverses the central angle from inwards to outwards through the sequence of states \(8\to 6\to 4\) for \(j_{\mathrm{p}}>0\), see Fig. 3c,g.
In summary, a local coupling between polarity and density gradients can account for the observed transition between rotating spirals and non-flowing asters as cell density increases, Fig. 1. In addition, these results provide an alternative interpretation of this transition, in terms of a transition from a double spiral to an aster, black arrow in Fig. 3g. In this case, for low densities, a double spiral with aster-like orientation \(\psi\simeq 0\) in the center is found, Fig. 3b,g. With increasing density, this inner phase expands until it fills the entire disc and the angle becomes \(\psi=0\). This double-spiral state delays the relaxation of the peripheral angle towards zero, which is consistent with experiments, see _appendix_.
The spiral-to-aster transition discussed above crucially relies on the choice of the free energy term \(f_{\mathrm{DPC}}\) (Eq. 2). Alternatively, it can be written as \(f_{\mathrm{DPC}}=-j_{\mathrm{p}}\bar{n}\mathbf{\nabla}\cdot\mathbf{p}\), which leads to the equilibrium BC \(\partial_{r}\bar{n}(R)=0\). In SM [38], we show that the main results remain unchanged for different parameter values. One could also consider a free energy of the form \(\tilde{f}_{\mathrm{DPC}}=J_{\mathrm{p}}(\mathbf{p}\cdot\mathbf{\nabla})n/n_{0}\)[24; 33; 41]. In this case, a density-controlled spiral-to-aster transition occurs if \(\zeta\Delta\mu\) decreases with density. Then, other parameters like isotropic active stress also need to depend on density to match the observed density profiles in Fig. 1b. Therefore, Eq. 2 corresponds to a minimal extension of Ref. [20].
DPC not only provides an explanation for the dynamics of polar tissues on discs, Fig. 1, but also proposes a mechanism for collective states found in giant epithelial cell monolayers [28]. There, a radially spreading tissue develops azimuthal flows in the central region, and density gradients become non-monotonic. In our framework, this state resembles biphasic asters except for an outward spiral orientation near the center in Ref. [28]. We expect this difference to originate from a global polar order, which is able to sustain bulk active stresses contrary to our disordered system. Validation of these hypotheses requires a precise measurement of the cell polarity field and complementary theoretical analysis.
Figure 3: _Orientational patterns induced by DPC._ **(a-b)** Radial profiles of angle (left), density variation (middle) and polar order (right), for biphasic asters (a) and double spirals (b). Inset: polar order near disc center. **(c)** Schematics of orientation states. **(d-g)** Steady-state density plots: peripheral angle \(\psi_{R}=\psi(R)\) (d,f) and angle difference between periphery and center \(\Delta\psi=\psi(R)-\psi(0)\) (e,g). Dashed lines in (d-g): \(j_{\mathrm{p}}=\pm\sqrt{KB}\). Black arrow in (g): double spiral to aster transition. Parameters \(\chi=81\), \(\zeta_{0}\Delta\mu=0\) for (d,e), and \(\zeta\Delta\mu=20\) for (f,g).
To our knowledge, the above experimental works represent the first evidences of DPC in cellular systems. To further investigate this coupling experimentally, one could control density gradients using optogenetic tools [42] and generate specific flow or polarity patterns. Although we have focused on systems with polar symmetry, it is also interesting to consider couplings between density gradients and other types of orientational order, like nematic systems [43].
###### Acknowledgements.
We are grateful to Jean-Francois Joanny and Ram Adar for insightful discussions, Ricard Alert for pointing out reference [28], and Ludovic Dumoulin for help on numerical methods. The computations were performed at University of Geneva on Baobab HPC cluster. C.A.D. acknowledges funding from the EMBO fellowship ALTF 886-2022, and P.G. acknowledges support from the Human Frontiers of Science Program (grant number LT-000793/2018-C).
## Appendix
Here we compare the evolution of the peripheral angle \(\psi_{R}\) between experiment and theory. Experiments in Ref. [17] first show a spiral maintained over one day, followed by a rapid transition to an aster, see Fig. 11b. In theory, assuming a uniform angle \(\psi=\psi_{R}\) and a transition controlled by boundary effects, we obtain an expression for the angle
\[\cos(2\psi_{R})=\frac{1}{\nu}\frac{1}{1-j^{2}}\left[1-\frac{j^{2}}{2}\left( \frac{4\eta}{\gamma}+\nu^{2}+1\right)\right], \tag{10}\]
see SM [38]. The critical value \(j_{\mathrm{p}}^{*}\) at which the spiral-to-aster transition occurs is given by Eq. (6). Comparison between experiments and Eq. (10) shows agreement for \(\eta/\gamma\ll 1\), see Fig. 11a. Previous quantitative analysis [40; 20] suggests that \(\eta/\gamma\sim 1\).
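For illustration only, the sketch below evaluates Eq. (10) over a range of the rescaled DPC coefficient \(j\) (values of \(\cos(2\psi_{R})\) outside \([-1,1]\) signal that no uniform spiral solution exists, consistent with the aster branch); the parameter values repeat the non-dimensional choices of the main text and are an assumption here.

```python
import math

gamma, eta, nu = 1.0, 2.0, -1.5  # non-dimensional values from the main text (assumed here)

def cos_2psi_R(j):
    """Right-hand side of Eq. (10) for a rescaled DPC coefficient j (j != 1)."""
    return (1.0 / nu) / (1.0 - j**2) * (1.0 - 0.5 * j**2 * (4.0 * eta / gamma + nu**2 + 1.0))

for j in [0.0, 0.2, 0.4, 0.6, 0.8]:
    c = cos_2psi_R(j)
    if -1.0 <= c <= 1.0:
        # Take the branch psi_R in [0, pi/2] for simplicity.
        print(f"j = {j:.1f}: psi_R = {0.5 * math.acos(c):.3f} rad (spiral)")
    else:
        print(f"j = {j:.1f}: cos(2*psi_R) = {c:.2f} outside [-1, 1] -> no spiral solution (aster branch)")
```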
In the main text, we showed the existence of double spirals, Fig. 3b. For high values of the activity coefficients \(\zeta\Delta\mu\), \(\zeta_{0}\Delta\mu\) and increasing \(j_{\mathrm{p}}>0\) (black arrow in Fig. 3g), spiral orientation is maintained at the periphery while aster-like orientation develops in the center. Compared to uniform-angle states, this delays the spiral-to-aster transition time, see Fig. 11b. Thus, the double-spiral state can also quantitatively reproduce the experimental data.
|
2303.10388 | ggpicrust2: an R package for PICRUSt2 predicted functional profile
analysis and visualization | Microbiome research is now moving beyond the compositional analysis of
microbial taxa in a sample. Increasing evidence from large human microbiome
studies suggests that functional consequences of changes in the intestinal
microbiome may provide more power for studying their impact on inflammation and
immune responses. Although 16S rRNA analysis is one of the most popular and a
cost-effective method to profile the microbial compositions, marker-gene
sequencing cannot provide direct information about the functional genes that
are present in the genomes of community members. Bioinformatic tools have been
developed to predict microbiome function with 16S rRNA gene data. Among them,
PICRUSt2 has become one of the most popular functional profile prediction
tools, which generates community-wide pathway abundances. However, no
state-of-art inference tools are available to test the differences in pathway
abundances between comparison groups. We have developed ggpicrust2, an R
package, to do extensive differential abundance(DA) analyses and provide
publishable visualization to highlight the signals. | Chen Yang, Jiahao Mai, Xuan Cao, Aaron Burberry, Fabio Cominelli, Liangliang Zhang | 2023-03-18T10:50:09Z | http://arxiv.org/abs/2303.10388v3 |
###### Abstract
**Summary**: Microbiome research is now moving beyond the compositional analysis of microbial taxa in a sample. Increasing evidence from large human microbiome studies suggests that functional consequences of changes in the intestinal microbiome may provide more power for studying their impact on inflammation and immune responses. Although 16S rRNA analysis is one of the most popular and cost-effective methods to profile microbial compositions, marker-gene sequencing cannot provide direct information about the functional genes that are present in the genomes of community members. Bioinformatic tools have been developed to predict microbiome function with 16S rRNA gene data. Among them, PICRUSt2 has become one of the most popular functional profile prediction tools, which generates community-wide pathway abundances. However, no state-of-the-art inference tools are available to test the differences in pathway abundances between comparison groups. We have developed ggpicrust2, an R package, to do extensive differential abundance (DA) analyses and provide publishable visualization to highlight the signals.
**Availability and implementation**: The package is open-source under the MIT license (MIT + file LICENSE) and is available at CRAN and [https://github.com/cafferychen777/ggpicrust2](https://github.com/cafferychen777/ggpicrust2). Its Shiny web application is available at [https://urlzs.com/EvDW8](https://urlzs.com/EvDW8).
**Contact:** [email protected]
**Supplementary information:** Supplementary data are available at _Bioinformatics_ online.
## 1 Introduction
One limitation of microbial community marker-gene sequencing is that it does not provide information about the functional composition of sample communities (Douglas _et al._, 2020). As of 2022, several methods are available for predicting functions from the 16S rRNA sequence based on different approaches, such as PICRUSt2 (Douglas _et al._, 2020), Tax4Fun2 (Wemheuer _et al._, 2020), MicFunPred (Mongad _et al._, 2021) and PICRUSt (Langille _et al._, 2013). However, the accuracy and applicability of these methods depend on the specific research question and the characteristics of the microbial community being studied. Overall, these methods have greatly enhanced our ability to understand
the functional roles of microbial communities in various environments, from the human gut to soil and water ecosystems. Among the various tools available, PICRUSt2 (Phylogenetic Investigation of Communities by Reconstruction of Unobserved States) has emerged as a highly favored instrument for predicting functional profiles, as it facilitates the generation of comprehensive pathway abundances within microbial communities. By doing so, PICRUSt2 provides researchers with valuable insights into the functional roles of microbial communities.
Nonetheless, a consensus regarding the optimal methodology for inferring and visualizing the functional abundance output generated by PICRUSt2 remains to be established within the academic community. As determining the statistically significant differences in functions and pathways between groups using Differential Abundance (DA) methods constitutes a critical step in the analysis, selecting an appropriate DA approach is indeed a topic of considerable importance within the scholarly discourse. The official wiki of PICRUSt2 initially recommended STAMP (Parks _et al._, 2014) as the preferred software for analysis and visualization. However, STAMP has not been updated since 2015, indicating that it is unable to integrate the most recent advances in differential abundance (DA) analysis, which are crucial for systematically making statistical inferences from PICRUSt2 output data. Furthermore, STAMP presents installation challenges on macOS platforms, making it less user-friendly and potentially hindering its adoption by researchers in the field. The performance of five DA methods supported by STAMP, including ANOVA, Kruskal-Wallis H-test, t-test (equal variance), Welch's t-test, and White's non-parametric test, has been shown to be relatively inferior in a recent comparison of 20 DA methods across 38 datasets (Nearing _et al._, 2022). That comparison concluded that ALDEx2 and ANCOM-II produce the most consistent results across studies and agree best with the intersection of results from different approaches, but it still recommends that researchers use a consensus approach based on multiple differential abundance methods to help ensure robust biological interpretations (Nearing _et al._, 2022). Although several platforms or packages support multiple advanced DA methods, such as MicrobiomeAnalyst (Chong _et al._, 2020), MicrobiomeExplorer (Reeder _et al._, 2021), and microbiomeMarker (Cao _et al._, 2022), they are not specifically designed for PICRUSt2 functional output data. Due to the discrepancies in format and characteristics between PICRUSt2 output data and 16S rRNA gene data, the above platforms or software intended for the analysis of 16S rRNA gene data often encounter difficulties when importing PICRUSt2 data. Although almost all DA methods can be used in R, each method creates its own burden for data import and parameter configuration, which increases both effort and time cost and diminishes efficiency. Additionally, these R packages often lack the ability to visualize DA results and generate publication-quality figures. Thus, a user-friendly R package for analyzing PICRUSt2 functional output data with various DA methods and visualizations is urgently needed to fill these gaps.
## 2 _ggpicrust2_ R package
The general workflow of the package is shown in Figure 1. _ggpicrust2_ not only allows for recently developed advanced DA methods and visualization of results but also can convert PICRUSt2 output KO abundance tables into KEGG pathway abundance tables, which cannot be performed using PICRUSt2 alone. It also provides annotation of KO, EC, MetaCyc pathway, and KEGG pathway and enables classification of KEGG pathways. In the future, ggpicrust2 plans to incorporate a broader array of functional prediction tools, including but not limited to Tax4Fun2, in order to expand its capabilities and utility. Additionally, the package will integrate other methods that have demonstrated strong performance in simulation comparisons, ensuring continuous improvement and alignment with the latest advancements in the field.
### _Data input_
_ggpicrust2_ recommends adopting the data format of the original PICRUSt2 output pred_metagenome_unstrat.tsv without reformatting, but xlsx and txt files are also acceptable. Furthermore, it is capable of supporting files that have been converted to mimic the PICRUSt2 output format, ensuring compatibility and flexibility for various data sources.
### _Conversion to KEGG pathway abundance_
KEGG Orthology (KO) is a classification system developed by the Kyoto Encyclopedia of Genes and Genomes (KEGG) database (Kanehisa _et al._, 2022). It uses a hierarchical structure to classify enzymes based on the reactions they catalyze. To better understand the pathways' roles in different groups and to classify the pathways, the KO abundance table can be converted to KEGG pathway abundance. However, PICRUSt2 removed this functionality that was available in PICRUSt; _ko2kegg_abundance()_ can perform this conversion.
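To make the conversion concrete, a minimal sketch of the underlying aggregation (in Python/pandas rather than the package's R implementation) is given below: KO abundances are summed within each KEGG pathway according to a KO-to-pathway mapping. The KO identifiers, abundances and the mapping are placeholders, not real annotation data.

```python
import pandas as pd

# Toy KO abundance table: rows are KO identifiers, columns are samples (placeholder values).
ko_abundance = pd.DataFrame(
    {"sample1": [12, 3, 7], "sample2": [5, 9, 0]},
    index=["K00001", "K00002", "K00003"],
)

# Toy KO -> KEGG pathway mapping; one KO may belong to several pathways (hypothetical assignments).
ko_to_pathway = {
    "K00001": ["ko00010"],
    "K00002": ["ko00010", "ko00020"],
    "K00003": ["ko00020"],
}

# Sum KO abundances within each pathway, per sample.
rows = []
for ko, pathways in ko_to_pathway.items():
    for pw in pathways:
        rows.append(ko_abundance.loc[ko].rename(pw))
pathway_abundance = pd.concat(rows, axis=1).T.groupby(level=0).sum()

print(pathway_abundance)
```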
### _Advanced DA methods_
Differential abundance (DA) analysis plays a major role in PICRUSt2 downstream analysis. _pathway_daa()_ integrates almost all DA methods applicable to the predicted functional profiles, excluding ANCOM and ANCOM-BC. It includes ALDEx2 (Fernandes _et al._, 2013), DESeq2 (Love _et al._, 2014), Maaslin2 (Mallick _et al._, 2021), LinDA (Zhou _et al._, 2022), edgeR (Robinson _et al._, 2010), limma voom (Ritchie _et al._, 2015), metagenomeSeq (Paulson _et al._, 2013) and lefser (Segata _et al._, 2011), which have demonstrated varying degrees of success in distinct benchmarking assessments (Yang and Chen, 2022; Calgaro _et al._, 2020; Nearing _et al._, 2022).
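Because no single DA method is uniformly best, a common recommendation (echoed by Nearing _et al._, 2022) is to compare calls across several methods. The sketch below, written in Python and independent of the package's actual API, simply tallies how many methods flag each pathway as significant from hypothetical per-method result tables; all names and values are illustrative.

```python
import pandas as pd

# Hypothetical per-method results: one row per pathway with an adjusted p-value.
results = {
    "ALDEx2": pd.DataFrame({"pathway": ["ko00010", "ko00020"], "p_adjust": [0.01, 0.20]}),
    "DESeq2": pd.DataFrame({"pathway": ["ko00010", "ko00020"], "p_adjust": [0.03, 0.04]}),
    "LinDA":  pd.DataFrame({"pathway": ["ko00010", "ko00020"], "p_adjust": [0.02, 0.30]}),
}

alpha = 0.05
# Boolean significance call per method, aligned on the pathway index.
calls = pd.DataFrame({
    method: df.set_index("pathway")["p_adjust"] < alpha
    for method, df in results.items()
})
calls["n_methods_significant"] = calls.sum(axis=1)
print(calls.sort_values("n_methods_significant", ascending=False))
```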
### _Annotation of KO, EC, and pathway_
_pathway_annotation()_ can annotate the descriptions of KO, EC, and MetaCyc pathways from an annotation table. It can also query the online KEGG database to annotate KEGG pathways' pathway_name, pathway_description, pathway_class and pathway_map. The function can be used to annotate the output file of PICRUSt2 or the output table of pathway_daa().
### _Visualization_
The mainstream visualizations for PICRUSt2 output are bar plots, error-bar plots, PCA plots and heatmaps. _pathway_errorbar()_ can show the relative abundance differences between groups together with the log2 fold changes and p-values derived from DA results. _pathway_pca()_ can show the differences after dimensionality reduction via Principal Component Analysis (PCA). _pathway_heatmap()_ can visualize patterns in PICRUSt2 output data, which can be useful for identifying trends or highlighting areas of interest.
### _Integration_
_ggpicrust2()_ is the integration function of _pathway_daa()_, _pathway_annotation()_, _pathway_errorbar()_ and _ko2kegg_abundance()_. This tool
is designed to facilitate the entire data analysis process for those who are new to the field. However, it is also capable of being used by professional analysts in a modular fashion, allowing for increased customization and control. To further support users and promote the understanding of the package's capabilities, we have developed a detailed user manual, which is provided as Supplementary Materials. This document includes step-by-step installation instructions, explanations of the main features, and guidance on how to effectively leverage the _ggpicrust2_ package. As a case study, we applied the package's implementation of LinDA, which led to the identification of KEGG pathways that demonstrated statistically significant differences between the pro-survival and pro-inflammatory environments across both groups of mice. Of particular interest were the pathways ko05016, which is primarily involved in the pathogenesis of Huntington's disease, and ko05012, known for its association with Parkinson's disease. Both pathways are linked to human diseases and neurodegenerative disorders. The DA results were meticulously annotated, and the output was visualized for subsequent analysis. The visual representation of the results, which provides insights into the involvement of these pathways in the studied conditions, is depicted in Figure 1.
## 3 Conclusion
_ggpicrust2_, available at CRAN and [https://github.com/cafferychen777/ggpicrust2](https://github.com/cafferychen777/ggpicrust2), is an R package developed explicitly for PICRUSt2-predicted functional profiles to perform advanced differential abundance (DA) analysis and visualization of the DA results. This package effectively addresses the limitations of existing tools in terms of methods and visualization, and its integrated and distributed design caters to both professionals and beginners by meeting the needs of both groups. By providing a seamless experience for analyzing and visualizing DA results, _ggpicrust2_ has the potential to significantly enhance the quality and efficiency of research involving functional profile predictions. _ggpicrust2_ has already been incorporated into the PICRUSt2 wiki documentation, reflecting its growing recognition and adoption within the research community.
## Acknowledgments
We want to acknowledge Sonja Schaufelberger at the University of Gothenburg for her feedback and suggestions regarding the _ggpicrust2_ package. Her insights have significantly contributed to the improvement and development of our tool, ensuring that it remains both versatile and useful for researchers in the scientific community.
## Funding
This work has been supported by NIH grants SP30AG072959-02 and 3R01DK042191-3051.
_Conflict of Interest:_ none declared.
|
2305.14260 | R2H: Building Multimodal Navigation Helpers that Respond to Help
Requests | Intelligent navigation-helper agents are critical as they can navigate users
in unknown areas through environmental awareness and conversational ability,
serving as potential accessibility tools for individuals with disabilities. In
this work, we first introduce a novel benchmark, Respond to Help Requests
(R2H), to promote the development of multi-modal navigation helpers capable of
responding to requests for help, utilizing existing dialog-based embodied
datasets. R2H mainly includes two tasks: (1) Respond to Dialog History (RDH),
which assesses the helper agent's ability to generate informative responses
based on a given dialog history, and (2) Respond during Interaction (RdI),
which evaluates the effectiveness and efficiency of the response during
consistent cooperation with a task performer. Furthermore, we explore two
approaches to construct the navigation-helper agent, including fine-tuning a
novel task-oriented multi-modal response generation model that can see and
respond, named SeeRee, and employing a multi-modal large language model in a
zero-shot manner. Analysis of the task and method was conducted based on both
automatic benchmarking and human evaluations. Project website:
https://sites.google.com/view/response2helprequests/home. | Yue Fan, Jing Gu, Kaizhi Zheng, Xin Eric Wang | 2023-05-23T17:12:09Z | http://arxiv.org/abs/2305.14260v2 | # _R2h_: Building Multimodal Navigation Helpers that _Respond to Help_
###### Abstract
The ability to assist humans during a navigation task in a supportive role is crucial for intelligent agents. Such agents, equipped with environment knowledge and conversational abilities, can guide individuals through unfamiliar terrains by generating natural language responses to their inquiries, grounded in the visual information of their surroundings. However, these multimodal conversational navigation helpers are still underdeveloped. This paper proposes a new benchmark, _Respond to Help (R2H)_, to build multimodal navigation helpers that can respond to help, based on existing dialog-based embodied datasets. R2H mainly includes two tasks: (1) Respond to Dialog History (RDH), which assesses the helper agent's ability to generate informative responses based on a given dialog history, and (2) Respond during Interaction (RdI), which evaluates the helper agent's ability to maintain effective and consistent cooperation with a task performer agent during navigation in real-time. Furthermore, we propose a novel task-oriented multimodal response generation model that can see and respond, named _SeeRee_, as the navigation helper to guide the task performer in embodied tasks. Through both automatic and human evaluations, we show that SeeRee produces more effective and informative responses than baseline methods in assisting the task performer with different navigation tasks. Project website: [https://sites.google.com/view/respond2help/home](https://sites.google.com/view/respond2help/home).
## 1 Introduction
Assisting humans in real-world scenarios is a fundamental capability that AI agents should possess. A multimodal conversational helper agent, acting in a supportive role that provides real-time navigation assistance in the environment, can greatly improve the work efficiency and success rate of humans in scenarios such as emergency rescue in complex buildings. Figure 1 illustrates an example of a delivery person seeking assistance in navigating through an unfamiliar building. With the help from a navigation helper agent, the delivery person can ask questions about directions and receive responses that are tailored to visual information about the current surroundings.
The most critical aspect for a conversational helper agent's success is its ability to effectively guide task performers in a cooperative manner. To achieve this, the helper agent needs to provide real-time assistance, ensuring the successful completion of the tasks. However, creating such an agent poses a significant challenge, because the evaluation of the helper agent's performance is not solely dependent on the agent itself, but also requires the presence of a task performer.
To fully consider the cooperative dynamic between the helper agent and a task performer, involving a human in the evaluation as a performer would be the most direct approach, though it is often impractical due to the high cost and low efficiency.
Figure 1: R2H demo. A task-oriented helper provides responses to help a task performer who is delivering a package. The helper has access to task and environment information that is not available to the task performer, for example, the location of the destination and map of the environment.
An alternative solution is to involve another agent as the task performer. In this work, we introduce the Respond to Help (R2H) benchmark, designed to automatically evaluate conversational multimodal navigation helpers in this cooperative dynamic. The R2H benchmark incorporates pre-trained and fixed task performer agents into the evaluation tasks. These performer agents follow the responses from the helper agent, providing a practical and efficient way to evaluate the helper agent's performance.
Leveraging two existing vision-and-dialog navigation datasets, CVDN (Thomason et al., 2020) and ALFRED (Shridhar et al., 2020), our R2H benchmark introduces two novel tasks: the Respond to Dialog History task and the Respond during Interaction task. In the Respond to Dialog History (RDH) task, the helper agent generates a response based on the dialog history, with the aim of guiding the task performer agent closer to the target. In the Respond during Interaction (RdI) task, the helper agent is required to generate multiple responses based on the demands of the task performer, guiding the task performer agent to navigate to the destination. As a result, the R2H benchmark provides a practical evaluation of helper-performer cooperation in both the short and the long term. Unlike other language generation tasks, which emphasize the similarity of the generated language to the ground truth, the tasks in the R2H benchmark focus on how much the helper's responses improve the task performer's success rate.
We also present a multimodal helper agent, SeeRee, to respond to inquiries from the task performer. Our helper agent leverages oracle knowledge about the task and environment that is unavailable to the task performer, such as the destination location in a navigation task, to generate responses to inquiries from the task performer. SeeRee employs pre-trained vision and language models, specifically the Video Swin Transformer (Liu et al., 2022) and BERT (Devlin et al., 2019), to handle multimodal inputs. To manage long input sequences, SeeRee leverages a novel Conditional Optimized Sparse (COS) attention mask. Moreover, we introduce a Parse by Step method, which leverages GPT-3 (Brown et al., 2020) to transform ground-truth human responses into structured, noise-free step-by-step instructions. Using those parsed instructions (instead of the raw human responses) as the training supervision yields a better helper agent with improved performance in helping the task performer.
In our experiments, we benchmark SeeRee and a baseline model using the R2H benchmark. The results and ablation studies demonstrate that SeeRee surpasses the baseline in generating effective responses and validate the COS attention mask and the Parse by Step method. We also conduct extensive human evaluations to assess how effectively SeeRee's responses assist humans in completing embodied tasks, which show that SeeRee delivers highly accurate responses that significantly enhance the task success rate compared to the baseline.
Additionally, we ask human testers to rate the faithfulness and naturalness of the responses, and we evaluate the responses from helper agents with automatic scores. As a result, our experiments indicate that a higher language similarity to human helpers does not necessarily lead to a more successful conversational helper agent.
The main contributions are concluded as follows:
* We present the Respond to Help (R2H) benchmark as a testbed for automatically evaluating the capabilities of multimodal conversational navigation helper agent, that helps task performers complete tasks by providing natural language responses to inquiries based on environment information.
* We propose a novel task-oriented multimodal helper agent, SeeRee, utilizing the Conditional Optimized Sparse (COS) attention mask and noise-free step-by-step instructions (Parse by Step).
* Our experiments on the R2H task and human evaluations show that SeeRee outperforms other baseline helper agents and even approaches human-level performance in some aspects. The experiments also indicate that a closer linguistic resemblance to human helpers does not automatically translate into a more effective conversational helper agent.
## 2 The R2H Benchmark
Significant differences exist between building a helper agent and developing task performer agents for dialog-based multimodal navigation tasks. Task performers developed in the pioneer works (Thomason et al., 2020; Gao et al., 2022; Fan et al., 2022) are challenged as command followers, evaluated
based on their performance in following dialog histories from human annotations, as shown in Figure 2(i). However, building the helper agent requires evaluations in a collaborative setting with task performers, because being able to provide responses that help the task performer is the most important aspect of the helper agent.
Involving human task performers to evaluate the helper agent is ideal but expensive. Alternatively, inspired by Padmakumar et al. (2022), Nguyen et al. (2019) and Roman et al. (2020), which study and build specific helper and performer agents that collaborate (shown in Figure 2(ii)), we introduce the Respond to Help (R2H) benchmark (shown in Figure 2(iii)), involving a task performer in the evaluation process. This approach allows for a more comprehensive and realistic assessment of the helper agent's capabilities, as it tests the agent's ability to respond effectively in a wide range of scenarios. The benchmark includes two novel tasks, the Respond to Dialog History (RDH) task and the Respond during Interaction (RdI) task, building upon two existing vision-and-dialog navigation datasets. The RDH task evaluates helper agents in a situation where partial human dialog history is provided, and the RdI task challenges the helper agents with real collaborative scenarios.
### Respond to Dialog History Task
The Respond to Dialog History (RDH) task focuses on evaluating the accuracy and completeness of the responses from helper agents. The helper agent is challenged to understand the dialog history and respond to help the task performer based on information about the task and environment in the form of image sequences. After the responses \(\hat{r}_{i}\) are generated, they are concatenated with all available human dialog history \(h_{i-1}=\{q_{0},r_{0},\ldots,q_{i-1},r_{i-1}\}\) in the corresponding trajectory before the inquiries \(q_{i}\) from human task performers. As a result, the generated response from the helper agent forms a new dialog history \(\hat{h}=\{h_{i-1},q_{i},\hat{r}_{i}\}\), which becomes the input to the task performer.
The R2H benchmark adopts state-of-the-art open-sourced performer agents for each vision-and-language navigation dataset. The task performer agent is pre-trained on the original training set with human dialogs \(h_{i}\), including the responses from the human helper \(r_{i}\), and predicts actions based on \(\hat{h}\) for completing the task. Therefore, the progress and success made by the task performer can be seen as a test of the accuracy and completeness of the responses generated by the helper agent.
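For concreteness, the RDH evaluation flow can be sketched as follows (illustrative Python, not the released benchmark code; the helper and performer are stand-in callables):

```python
from typing import Callable, List, Tuple

Dialog = List[Tuple[str, str]]  # list of (inquiry, response) pairs

def rdh_episode(
    history: Dialog,                       # human dialog history h_{i-1}
    inquiry: str,                          # human inquiry q_i
    helper: Callable[[Dialog, str], str],  # generates r_hat_i from history and inquiry (plus env info internally)
    performer: Callable[[Dialog], float],  # navigates given a dialog, returns goal progress
) -> float:
    """Build h_hat = {h_{i-1}, q_i, r_hat_i} and measure the performer's progress with it."""
    response = helper(history, inquiry)
    new_dialog: Dialog = history + [(inquiry, response)]
    return performer(new_dialog)

# Toy stand-ins, for illustration only.
dummy_helper = lambda hist, q: "Go straight and turn left at the stairs."
dummy_performer = lambda dialog: float(len(dialog))  # pretend progress grows with dialog length
print(rdh_episode([("Where to?", "Head to the hallway.")], "Which door?", dummy_helper, dummy_performer))
```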
### Respond during Interaction Task
The Respond during Interaction (RdI) task challenges the ability of helper agents to cooperate consistently with the task performer agent.
Figure 2: Comparison between different dialog-based embodied benchmark types. The benchmark either (i) evaluates task performers, where performers follow instructions in human annotations, (ii) evaluates helper- performer pairs, where the helper and performer agents need to jointly learn and be evaluated together, or (iii) evaluates helper agents only (our R2H benchmark), where helpers need to provide appropriate instructions according to performer and environment information.
Similar to the RDH task, the RdI task involves a pre-trained task performer agent predicting navigation actions based on dialog histories. However, unlike the RDH task, no dialog history is initially provided in the RdI task. The task performer agent initiates inquiries \(\hat{q}_{i}\) for help when needed and navigates based on the ongoing dialog between itself and the helper agent \(\hat{h}_{i}=\{\hat{q}_{0},\hat{r}_{0},\dots,\hat{q}_{i},\hat{r}_{i}\}\), where \(\hat{r}_{i}\) are the real-time responses from the helper agent to \(\hat{q}_{i}\). As a result, the dialog \(\hat{h}_{i}\) resulting from the interaction serves as the primary source of guidance for the task performer agent, making the consistent quality of the helper agent's responses crucial for successful navigation.
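The interactive protocol can likewise be sketched as a simple loop (again illustrative Python with stand-in callables, not the released evaluation code):

```python
from typing import Callable, List, Tuple

Dialog = List[Tuple[str, str]]  # (inquiry, response) pairs accumulated during the episode

def rdi_episode(
    performer_ask: Callable[[Dialog], str],        # task performer raises an inquiry q_hat_i
    helper_respond: Callable[[Dialog, str], str],  # helper produces r_hat_i
    performer_step: Callable[[Dialog], bool],      # performer navigates; returns True when it stops
    max_rounds: int = 10,
) -> Dialog:
    """Run helper-performer interaction until the performer stops or the round budget is spent."""
    dialog: Dialog = []
    for _ in range(max_rounds):
        inquiry = performer_ask(dialog)
        response = helper_respond(dialog, inquiry)
        dialog.append((inquiry, response))
        if performer_step(dialog):
            break
    return dialog
```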
### Adapting Existing Datasets
The R2H benchmark establishes the same tasks across different datasets.
**Datasets.** The R2H benchmark is built upon existing vision-and-dialog navigation datasets with dialogs between task performers and helpers.
* **CVDN**(Thomason et al., 2020) is situated in the Matterport3D simulator (Chang et al., 2017) with photo-realistic scenes. The dataset records the collaboration between the human task performer and the human helper to complete navigation tasks of finding target objects in different indoor environments. The helper uses expert knowledge to guide the task performer with natural language when questions are asked by the task performer.
* **DialFRED** (Gao et al., 2022) is built on the AI2-THOR (Kolve et al., 2017) simulator with synthetic views. Similar to CVDN, the helper and task performer collaborate to navigate to targets; however, the DialFRED dataset involves a significant amount of template-based dialogs and is collected in a way that each trajectory only corresponds to one pair of inquiry and response, without any dialog history. Therefore, it is not suitable for the RdI task.
**Adaptation Across Datasets.** Given the variability of available information across different datasets and environments, the R2H benchmark is designed with a harmonizing approach, converting the environment-specific information into image sequences. A script-based sampler is designed to generate the image sequences by leveraging expert information in each dataset. The sampler outputs image sequences showing the task performer's views on the shortest path to the destination in both the CVDN and DialFRED datasets. For the CVDN dataset in particular, following the data collection process of the dataset, a connectivity graph of viewpoints is used to generate the shortest path, and therefore the image sequence length is variable but limited to views within 5 viewpoints of the current position. As a result, helper agents can be evaluated on different datasets with the same task inputs, which enhances the benchmark's versatility to further adapt to new datasets.
### Metrics
Since we aim to evaluate how capable the response generated by helper agents is in helping the task performer, we adopt the metrics for task completion: Goal Progress (GP), evaluating the distance of the progress made towards the destination, where it is computed as the trajectory length, deducted by the remaining trajectory from the current location to the destination viewpoint; Success Rate (SR), showing the ratio of tasks being completed successfully; Success weighted by inverse Path Length (SPL) (Anderson et al., 2018) or Path Weighted Success Rate (PWSR) (Gao et al., 2022): measuring the Success Rate weighted by the total length of the navigation trajectory.
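For reference, these metrics are typically computed per set of episodes as follows (SPL follows Anderson et al. (2018); the GP formula mirrors the verbal definition above; variable names are illustrative):

```python
from typing import List

def goal_progress(total_traj_len: float, remaining_dist: float) -> float:
    """GP: length of the trajectory minus the remaining distance from the stopping point to the goal."""
    return total_traj_len - remaining_dist

def success_rate(successes: List[bool]) -> float:
    """SR: fraction of episodes completed successfully."""
    return sum(successes) / len(successes)

def spl(successes: List[bool], shortest_lens: List[float], path_lens: List[float]) -> float:
    """SPL: success weighted by shortest-path length over max(agent path length, shortest-path length)."""
    terms = [s * (l / max(p, l)) for s, l, p in zip(successes, shortest_lens, path_lens)]
    return sum(terms) / len(terms)

print(spl([True, False], [10.0, 8.0], [12.5, 20.0]))  # (1 * 10/12.5 + 0) / 2 = 0.4
```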
## 3 The SeeRee Agent
In this section, we introduce the helper agent that can _see and respond_, named SeeRee. SeeRee generates responses to task-oriented navigation inquiries from the task performer. As shown in Figure 3, our helper agent SeeRee generates natural language responses to the task performer's inquiries based on the task and environment information that is not available to the task performer. Since the available information can vary among different tasks and environments, SeeRee is designed to adapt to different scenarios by converting the information into image sequences using task- and environment-specific samplers. The sampler we adopted for the experiments is detailed in Section 4.1. Then, the image sequences are padded to a fixed length (Lin et al., 2022) and encoded by the Video Swin Transformer (VidSwin) (Liu et al., 2022). The image sequence embedding is concatenated with the BERT text embedding (Devlin et al., 2019) and fed into a multimodal transformer. Finally, the responses are generated from the multimodal transformer in an
auto-regressive way.
### Multimodal Transformer
Following prior multimodal language generation studies (Hu et al., 2020; Lin et al., 2022), our multimodal transformer takes as input the embedding containing both text and image information and generates natural language responses in a unidirectional sequence-to-sequence generation process. We treat the input inquiries as prompts for generating the response, and a special token [CLS] is added to the end of the inquiry. During training, we apply Mask Language Modeling (MLM) (Devlin et al., 2019) to the response where \(80\%\) tokens are masked with [MSK] tokens, and \(10\%\) tokens are changed randomly. Cross entropy loss is applied to predictions for the masked tokens:
\[L_{MLM}=\Sigma_{i}L_{CrossEntropy}(y_{i},\hat{y}_{i}), \tag{1}\]
where \(y_{i}\) is the masked token at position \(i\) and \(\hat{y}_{i}\) is the prediction.
At inference time, the text is generated in an auto-regressive manner, where we insert multiple [MSK] tokens after the [CLS] token and predict tokens to replace the [MSK] tokens one by one unidirectionally until the prediction is [EOS] or all the [MSK] tokens are predicted.
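That decoding loop can be sketched schematically as below (an illustrative simplification; `predict_next` stands in for a forward pass of the multimodal transformer that fills the left-most [MSK] position):

```python
from typing import Callable, List

def generate(prompt_tokens: List[str],
             predict_next: Callable[[List[str]], str],
             max_len: int = 20) -> List[str]:
    """Greedy left-to-right infilling of [MSK] slots appended after [CLS], stopping at [EOS]."""
    tokens = prompt_tokens + ["[CLS]"] + ["[MSK]"] * max_len
    for i in range(len(prompt_tokens) + 1, len(tokens)):
        token = predict_next(tokens)  # model sees the current sequence, fills the left-most [MSK]
        tokens[i] = token
        if token == "[EOS]":
            break
    return tokens[len(prompt_tokens) + 1 : i + 1]

# Toy stand-in model that always answers with a fixed short response.
canned = iter(["turn", "left", "[EOS]"])
print(generate(["where", "should", "I", "go", "?"], lambda toks: next(canned)))
```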
### Conditional Optimized Sparse (COS) Attention Mask
One challenge in generating responses for dialog-based embodied tasks is effectively modeling the long input image sequence, reducing the redundancy in the repetitive images but keeping the critical details. To this end, we introduce a Conditional Optimized Sparse (COS) attention mask for the multimodal transformer, as shown in Figure 3. The mask can be divided into three row sections, corresponding to what the embeddings of the input inquiry, the generated response, and the input image sequence can attend to, respectively. The first row section shows that the language embedding (LE) of the input inquiry can attend to itself and the input visual embedding (VE). The second row section means the LE of the generated response can attend to the VE and all the previous LE, allowing unidirectional text generation (Devlin et al., 2019). The third row section indicates that the VE can attend to part of itself and the LE of the input inquiry. In particular, instead of being fully binary and pre-defined, a learnable conditional mask \(C\) that is non-binary and sparse is adopted in the third row section of the COS attention mask, controlling the self-attention of the VE. \(C\) is conditioned on VE, and the mapping is modeled by:
\[C=\sigma(f(\text{VE})), \tag{2}\]
where \(\sigma(x)=\frac{1}{1+e^{-x}}\) and \(f\) is a multi-layer perceptron. In this way, our COS attention mask uses a conditional optimizing strategy that optimizes the attention mask based on the image sequence. Additionally, in order to let the transformer attend to the specific details of the VE that are most relevant to the task, we enforce \(C\) to be sparse, letting the VE to sparsely attend to itself using a sparsity loss (Lin et al., 2022):
\[L_{SPARSE}=\lambda\times\sum_{i=1}^{M}\sum_{j=1}^{M}\left|C_{i,j}\right|, \tag{3}\]
Figure 3: Overview of SeeRee. The visual and text inputs are encoded and fed into the multimodal transformer, where a Conditional Optimized Sparse (COS) attention mask is applied. The COS attention mask has fixed binary values except for the learnable conditional mask \(C\) for visual embedding (VE) that is conditioned on VE itself. Yellow, blue, and green colored rows correspond to the attention masks for LE of input inquiry and generated response and VE, respectively.
where \(\lambda\) is a regularization hyperparameter and \(C_{i,j}\) is the value of the learnable conditional mask \(C\). As a result, COS attention mask enables better encoding and understanding of long visual input and improves the response generation result.
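One plausible PyTorch realization of Eqs. (2)-(3) is sketched below. The text does not fully specify how \(f\) maps the visual embedding to an \(M\times M\) mask, so mapping each visual token embedding to one row of the mask is an assumption made here, as is the value of \(\lambda\).

```python
import torch
import torch.nn as nn

class ConditionalSparseMask(nn.Module):
    """Computes C = sigmoid(f(VE)) and the sparsity penalty lambda * sum |C_ij| (Eqs. 2-3)."""

    def __init__(self, dim: int, num_visual_tokens: int, sparsity_weight: float = 1e-4):
        super().__init__()
        # f: maps each visual token embedding to one row of the M x M mask (our assumption).
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, num_visual_tokens))
        self.sparsity_weight = sparsity_weight

    def forward(self, visual_embedding: torch.Tensor):
        # visual_embedding: (batch, M, dim) -> mask C: (batch, M, M), values in (0, 1)
        mask = torch.sigmoid(self.mlp(visual_embedding))
        sparsity_loss = self.sparsity_weight * mask.abs().sum(dim=(-2, -1)).mean()
        return mask, sparsity_loss

ve = torch.randn(2, 8, 32)  # batch of 2, M = 8 visual tokens, dim = 32 (toy sizes)
mask, l_sparse = ConditionalSparseMask(dim=32, num_visual_tokens=8)(ve)
print(mask.shape, float(l_sparse))
```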
### Response Preprocessing with Parse by Step
Training a helper agent to directly imitate the responses in the human dialog may not be optimal as they may be unorganized and include utterances irrelevant to the task. Structured step-by-step instructions are easier for the helper agent to learn. Therefore, we propose a Parse by Step method for SeeRee, utilizing GPT-3 Brown et al. (2020), a large language model that has been trained on a vast amount of natural language texts, to preprocess the ground-truth responses in the training data. In line with the idea of in-context learning Brown et al. (2020); Kojima et al. (2022), we manually create the prompt for inputs to GPT-3. Table 1 illustrates the prompt and sampled preprocessed results. By filling each response in the training set in the blank of the prompt and letting GPT-3 complete the sentence, we get the preprocessed training data from the output, which is parsed in a step-by-step manner with a streamlined language pattern. As a result, the token \(y_{i}\) in the objective function (1) is sampled from a tokenized preprocessed human response \(Y\):
\[y_{i}\in Tok(Y),\quad Y=P(R), \tag{4}\]
where \(Tok\) is the tokenization process, \(P\) is the Parse by Step method, and \(R\) is the original human response in the dialog of the training set. In our experiments, with the preprocessed training data as supervision for training, SeeRee is able to generate more informative responses.
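A rough sketch of this preprocessing step is given below; the prompt template mirrors the one shown in Table 1, and `complete` stands in for a call to GPT-3 (or any other completion model), since the exact API call is not pinned down here.

```python
from typing import Callable

# Prompt template following the in-context example shown in Table 1.
PROMPT_TEMPLATE = (
    'Commander says: "Yes to the kitchen. Go to the left of the fireplace and then all the way up the stairs". '
    "Step by step: 1. Yes. 2. go to the kitchen 3. go to the left of the fireplace, 4. go upstairs. "
    'Commander says: "{response}", step by step: 1.'
)

def parse_by_step(response: str, complete: Callable[[str], str]) -> str:
    """Turn a free-form human response into numbered step-by-step instructions via an LLM completion."""
    completion = complete(PROMPT_TEMPLATE.format(response=response))
    return "1." + completion  # the prompt already ends with "1.", so prepend it to the continuation

# Toy stand-in completion for illustration; in the paper, GPT-3 produces the continuation.
fake_llm = lambda prompt: " Go into the bedroom. 2. Walk through it."
print(parse_by_step("Go into the bedroom and walk through it.", fake_llm))
```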
## 4 Experiments
### Implementation Details
We train SeeRee using AdamW optimizer Loshchilov and Hutter (2018) for 10k iterations with a batch size of 6 and learning rate of \(1e^{-4}\). We select the results based on the best GP in the unseen validation set. We initialize the encoders and multimodal transformers in SeeRee with weights from SwinBert Lin et al. (2022) to benefit from its image sequence understanding ability. Training takes about 12 hours on one NVIDIA A6000 GPU.
**Baseline Model.** We adopt the RMM Guide model (\(\mathcal{G}\)) (Roman et al., 2020) as the baseline model in our experiments. RMM \(\mathcal{G}\) is a sequence-to-sequence LSTM-based model that generates natural language help based on the input image sequence. During the experiment, we apply the same sampler for both SeeRee and the baseline model.
### RDH Task
**Task Performer.** For RDH on the CVDN dataset, we use HAMT1 (Chen et al., 2021), pre-trained on the original CVDN Navigation from Dialog History (NDH) task, as the task performer. For RDH on DialFRED, we leverage the task performer2 in the original work (Gao et al., 2022) that takes a single round of input to complete the task.
Footnote 1: [https://github.com/cshizhe/VLN-HAMT](https://github.com/cshizhe/VLN-HAMT).
Footnote 2: Due to unavailable model weights, we reproduced the model with comparable results as in the original paper.
**Results.** As shown in Table 2, on either dataset, the task performer agent achieves the best performance using the responses generated by SeeRee, outperforming the results using responses from the baseline helper agent, the RMM Guide agent. We also show the task performer's performance on the original validation set, where the responses are provided by human annotators. By comparing the results, we find that SeeRee's responses on CVDN are highly comparable to those of human helpers in terms of effectiveness, with a difference of only about \(10\%\). Additionally, on DialFRED, SeeRee even reaches the same performance as human helpers on the seen validation set for generating effective responses that guide the task performer agent to success. We believe this is because the human utterances in DialFRED are mostly template-based, which eases the challenge in multimodal generation. We believe that once SeeRee is well-trained, it could outperform human helpers by making fewer mistakes.
### RdI Task
**Task Performer.** We utilize the task performer model from the pioneering work RMM (Roman et al., 2020) that combines two components: the RMM Navigator model (\(\mathcal{N}\)), responsible for predicting navigation actions, and the RMM Questioner model (\(\mathcal{Q}\)), which generates inquiries to the helper agent. As a result, the combination allows the task performer to collaborate with helper agents to navigate towards the destination.
**Results.** As shown in Table 3, we train and evaluate the same task performer agent \(\mathcal{N}+\mathcal{Q}\) with different response generation models in the RdI task on CVDN (Thomason et al., 2020): (1) the helper agent RMM \(\mathcal{G}\) is trained and updated jointly with the task performer agent; (2) the helper agent RMM \(\mathcal{G}\) is fixed during training; (3) the helper agent is changed to SeeRee and kept fixed during training. The results demonstrate that SeeRee, when serving as the helper agent, performs the best in collaborating with the task performer in the RdI task. Interestingly, we find that only updating the task performer agent during training leads to better results. One possible explanation for this is that joint training may create potential instability, making it challenging for both models to reach the optimum simultaneously. The helper agent could face difficulty in converging due to the changing model of the task performer agent and vice versa.
### Ablation Study
To thoroughly analyze the effect of different components in our model, we conduct ablation studies on the CVDN dataset based on the RDH task and language generation metrics, BLEU-2 (Papineni et al., 2002) and ROUGE-L (Lin and Och, 2004). The result is shown in Table 4.
**Conditional Optimized Sparse (COS) Attention Mask** is shown to benefit the response generation in the R2H task through ablation studies. We repeated the same training of the SeeRee model, but replaced the conditional mask \(C\) with a fixed attention mask of all ones. Results show that when using the COS attention mask, the task performer
\begin{table}
\begin{tabular}{l l} \hline \hline \multicolumn{1}{c}{**Prompt:** Commander says: “Yes to the kitchen. Go to the left of the fireplace and then all the way up the stairs’. \\ & Step by step: 1. Yes. 2. go to the kitchen 3. go the left of the fireplace, 4. go upstairs. \\ & Commander says:, \(\dots\) step by step: 1. \\ \hline
**Index** & \multicolumn{2}{c}{**Original response**} \\ \hline
1 & Go into the bedroom and walk through it and exit it by using a door on the left. \\ & & 1. Enter the bedroom. \\
1 & Go into the bedroom and walk through it and exit it by using a door on the left. \\ & & 2. Walk through it. \\
2 & Yeah keep going around the outside till you get to the end. And sorry about the mixup at first. \\
2 & Yeah. \\ & & 2. Keep going around the outside. \\ & & 3. Get to the end. \\
3 & Go straight a little, then the right and go downstairs. \\ & & 1. Go back. \\
4 & I would go back. \\ \hline \hline \end{tabular}
\end{table}
Table 1: Examples of our Parse by Step method on the CVDN dataset (Thomason et al., 2020). We fill the original response to the blank of the prompt and input to GPT-3 for sentence completion. The output from the GPT-3 becomes the processed response with mainly task-related instructions kept and organized in steps. Through Parse by Step, we preprocess the response in the training set.
\begin{table}
\begin{tabular}{l|c c|c c c c c c} \hline \hline \multicolumn{1}{c}{} & \multicolumn{1}{c|}{**Seen Validation**} & \multicolumn{2}{c}{**Unseen Validation**} \\ \cline{3-8} \multicolumn{1}{c}{**Helper**} & \multicolumn{1}{c|}{**Task Performance**} & \multicolumn{1}{c|}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \hline Human Annotator & \multirow{2}{*}{
\begin{tabular}{c} DiaIFRED \\ SRMM \(\mathcal{G}\) \\ SeeRee (ours) \\ \end{tabular} } & 49.05 & 39.11 & 33.38 & 14.56 \\ \cline{2-8} \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{}{} & \multicolumn{1}{c}{} \\ \hline \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \cline{2-8} \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \hline \multicolumn{1}{
receives significant improvements. This illustrates that the COS attention mask helps to enrich the response with more valuable information from the visual environment, enabling better performance of the task performer agent in approaching the destination.
**Parse by Step** is first evaluated on the original validation set as a sanity check, where the task performer infers on the human dialogs from the original dataset, but the responses within are processed by our Parse by Step method. The results show that the human response processed with Parse by Step is overall as capable as the original response, with a \(5\%\) drop in SPL and SR on the seen validation set and a \(5\%\) increase in GP on the seen validation set and SR on the unseen validation set. This indicates that the information contained in the original response is kept. We then ablate the method on SeeRee by comparing SeeRee models trained on training data either preprocessed or not with our Parse by Step method. The evaluation result shows that although the BLEU-2 and ROUGE-L scores drop, GP in the RDH task receives a major increase. This proves the effectiveness of our Parse by Step method in facilitating the training of the helper agent and indicates that a high language similarity to human responses does not necessarily equate to a more effective conversational helper agent.
### Human Evaluation
To further evaluate the performance of our helper agent, SeeRee, we conduct human evaluations to assess the quality of the responses generated. Human participants are paid with $17/h and they act as task performers navigating in the Matterport3D simulator Chang et al. (2017). We randomly sample 60 trajectories in unseen validation sets of CVDN Thomason et al. (2020) to provide the target object and starting position to the participants. During the simulation, participants can control their movement in the environment using their keyboards and ask questions to the helper agent whenever needed. Upon completion of the task, we assess task success and goal progress in the same way as in Thomason et al. (2020). Additionally, we ask participants to provide their subjective assessment of the naturalness and faithfulness of the responses. We evaluate SeeRee, a baseline helper agent, and the situation with no helper agent. The results show that SeeRee provides the best results in terms of task completion. Through subjective evaluation, we find that SeeRee achieves significantly higher scores in terms of response faithfulness. Despite being trained with data preprocessed by Parse by Step, where the training supervision is no longer the original human utterances, it still achieves almost equal score of naturalness compared to the baseline model trained on original human responses. Through these evaluations, we show the ability of SeeRee to help human users complete embodied tasks.
## 5 Related Work
**Dialog-based Multimodal Embodied Benchmarks** Previous dialog-based multimodal embodied benchmarks usually focus on evaluating either task performers Thomason et al. (2020); Gao et al. (2022); Shi et al. (2022) or the corresponding pairs of task performer and helper Roman et al. (2020); Hahn et al. (2020); Padmakumar et al. (2022). For instance, CVDN Thomason et al. (2020) evaluates a task performer navigating to a desired room given dialog histories; Gao et al. (2022)
\begin{table}
\begin{tabular}{l|c c|c c c|c c c} \hline \hline \multirow{3}{*}{**Helper**} & **COS** & \multirow{3}{*}{**Parse by Step**} & \multicolumn{3}{c|}{**Seen**} & \multicolumn{3}{c}{**Unseen**} \\ & **attention mask** & & \multicolumn{3}{c|}{**Validation**} & \multicolumn{3}{c}{**Validation**} \\ \cline{4-9} & & & GP & B2 & R & GP & B2 & R \\ \hline Human & - & ✗ & 6.91 & - & - & 5.13 & - & - \\ Annotator & - & ✓ & 7.26 & - & - & 5.10 & - & - \\ \hline SeeRee & ✗ & ✗ & 5.43 & 9.99 & 2.97 & 4.65 & 100.0 & 221.3 \\ (ours) & ✓ & ✗ & 5.36 & 144.2 & **5.49** & 4.72 & **13.94** & **25.48** \\ & ✓ & ✓ & **6.50** & 13.78 & 24.2 & **4.87** & 13.16 & 24.14 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation study of the SeeRee agent on CVDN dataset Thomason et al. (2020) based on GP in RDH task and language generation metrics, BLEU2 (B2) and ROUGE-L (R). We directly apply our Parse by Step method to the original validation data created by humans, showing that Parse by Step maintains essential task-related information. The results show the effectiveness of COS attention mask and Parse by Step in task-oriented response generation.
\begin{table}
\begin{tabular}{l|c c c|c c} \hline \hline & \multicolumn{3}{c|}{**Task**} & \multicolumn{3}{c}{**Subjective**} \\ & \multicolumn{3}{c|}{**Completion**} & \multicolumn{3}{c}{**Response Evaluation**} \\ \cline{2-5}
**Helper** & GP \(\uparrow\) & SPL \(\uparrow\) & SR \(\uparrow\) & Naturalness \(\uparrow\) & faithfulness \(\uparrow\) \\ \hline No helper & 3.54 & 2.23 & 4.50 & - & - \\ RMM \(\mathcal{G}\) & 6.32 & 5.89 & 10.57 & **0.73/1** & 0.59/1 \\ SeeRee (ours) & **9.89** & **13.82** & **18.18** & 0.72/1 & **0.75/1** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results of human evaluation on RdI task in Matterport3D simulator Chang et al. (2017). The human testers start from initialized positions and are provided with target objects based on unseen validation data from CVDN Thomason et al. (2020). Different helper agents are tested on the same set of data, and we measure the task completion of the human testers as well as their subjective evaluation of the responses.
developed a dialogue-enabled embodied instruction following benchmark, DialFRED, based on the ALFRED benchmark (Shridhar et al., 2020) and presented a task performer framework. Further, to study the collaboration between the performer and helper, these tasks involve a wide range of activities, such as navigating to a specific location (Roman et al., 2020), locating positions (Hahn et al., 2020), and interacting with objects (Padmakumar et al., 2022). Compared to these benchmarks, our R2H benchmark aims for better helper agents and is the only benchmark for sole helper evaluation.
**Multimodal-based Language Generation** SeeRee is in line with a growing collection of methods applied to the visual question-answering task, where multiple inputs are encoded and then decoded into text. Cho et al. (2021) introduced the VL-T5 model, a unified framework that learns different tasks in a single architecture with the same language modeling objective; Wang et al. (2022) proposed the Generative Image-to-text Transformer to unify vision-language tasks such as image/video captioning and question answering; Fu et al. (2021) tackled the video question answering problem with the VIOLET model, which adopts a video transformer to model the temporal dynamics of video inputs explicitly. Although showing great successes, one problem shared by these works is that the input frames to the model are limited in quantity, only around half a dozen. Since the task of modeling the helper generating responses requires more detailed information from the visual input, which may only flash once and will be easily missed by sparse sampling, the input has to include a longer image sequence. Lin et al. (2022) developed SwinBERT, a video captioning model with a learnable sparse attention mask that reduces the computation cost and enables long frame sequences as input. However, the learned sparse attention mask is fixed after training, which limits its generalization ability. In this work, we apply the COS attention mask for SeeRee, improving the model's ability to understand and uncover information from long sequence inputs.
## 6 Conclusion
In this study, we introduce the Respond to Help (R2H) benchmark with two tasks, Respond to Dialog History and Respond during Interaction, assessing a helper agent's guidance capabilities. With the R2H benchmark, we build and evaluate a navigation helper agent, SeeRee. The results show that SeeRee outperformed the baseline model. Through further ablation study and human evaluation, we prove the effectiveness of SeeRee in assisting humans with tasks and also argue that the effectiveness of responses cannot be determined by their linguistic resemblance to human dialogue alone.
## Ethics Statement
The human evaluation part of this project is classified as exempt by the Human Subject Committee via IRB protocols. Additionally, we recognize the potential ethical issues related to training language generation models using human dialog data, such as the possibility of the model learning harmful language without understanding its context. To mitigate this concern, our proposed data preprocessing method, Parse by Step, converts the responses in the training data into more structured and task-specific instructions, effectively reducing the presence of noise and profanity in the training data. As a result, the likelihood of our model generating inappropriate responses is greatly minimized.
|
2307.05514 | Impact of Non-Thermal Particle Acceleration on Radiative Signatures of
AGN Jet-Cloud Interactions | This study investigates the complex dynamics of AGN (Active Galactic Nucleus)
jet-cloud interactions, particularly focusing on the impact of non-thermal
particle acceleration on the resulting radiative signatures. We utilize
advanced computational simulations, tracking changes in jet properties and
emissions over a span of 0.2 Myr (millions of years). The research design
incorporates the modeling of jet-cloud interactions, with a key focus on
variations in the jet's density, velocity, and magnetic field. Findings reveal
a two-fold increase in the magnetic field strength up to ~5 {\mu}G due to cloud
incorporation, which, coupled with an elevated non-thermal particle population,
enhances synchrotron emissions, shifting the spectral index from 2.2 to 2.4.
Inverse Compton scattering saw a 30% increase within the first 0.125 Myr,
reflecting in an abrupt X-ray and gamma-ray emissions spike. Furthermore, the
jet's light curve flux variability in the X-ray band showcased an initial peak
increase of about 28% by 0.175 Myr, settling to a 20% increase by 0.2 Myr,
attributable to cloud disruption and absorption. Conclusions drawn from these
findings confirm our hypothesis that non-thermal particle acceleration
dramatically influences the radiative signatures of AGN jet-cloud interactions
.It underscores the necessity of considering such acceleration processes in
modeling AGN jet-cloud interactions and posits that these changes could be
instrumental as observational indicators, thereby contributing to more accurate
interpretations of AGN activity and evolution. | Krish Jhurani | 2023-07-05T21:05:07Z | http://arxiv.org/abs/2307.05514v1 | ## Impact of Non-Thermal Particle Acceleration on Radiative Signatures of AGN Jet-Cloud Interactions
## Abstract
This study investigates the complex dynamics of AGN (Active Galactic Nucleus) jet-cloud interactions, particularly focusing on the impact of non-thermal particle acceleration on the resulting radiative signatures. We utilize advanced computational simulations, tracking changes in jet properties and emissions over a span of 0.2 Myr (millions of years). The research design incorporates the modeling of jet-cloud interactions, with a key focus on variations in the jet's density, velocity, and magnetic field. Findings reveal a two-fold increase in the magnetic field strength up to \(\sim\)5 \(\mu\)G due to cloud incorporation, which, coupled with an elevated non-thermal particle population, enhances synchrotron emissions, shifting the spectral index from 2.2 to 2.4. Inverse Compton scattering saw a 30% increase within the first 0.125 Myr, reflecting in an abrupt X-ray and gamma-ray emissions spike. Furthermore, the jet's light curve flux variability in the X-ray band showcased an initial peak increase of about 28% by 0.175 Myr, settling to a 20% increase by 0.2 Myr, attributable to cloud disruption and absorption. Conclusions drawn from these findings confirm our hypothesis that non-thermal particle acceleration dramatically influences the radiative signatures of AGN jet-cloud interactions. It underscores the necessity of considering such acceleration processes in modeling AGN jet-cloud interactions and posits that these changes could be instrumental as observational indicators, thereby contributing to more accurate interpretations of AGN activity and evolution.
## Introduction
Relativistic jets launched from Active Galactic Nuclei (AGN) epitomize one of the most intense phenomena in the cosmos, transforming the galactic landscape and influencing large-scale structure formation (Blandford & Rees, 1974; McNamara & Nulsen, 2007). These colossal outflows of plasma, propelled at near-light speeds, propagate beyond their host galaxies, colliding with the interstellar medium (ISM), the contents of which significantly influence their dynamics and subsequent radiation (Scheuer, 1974; De Young, 1993).
Interactions of AGN jets with the constituents of the ISM, such as interstellar clouds, introduce a complex interplay of processes that modify the jet structure, momentum, and energy distribution (Begelman & Cioffi, 1989; Wagner & Bicknell, 2011). However, an intricate element of this narrative that has largely been overlooked in extant literature is non-thermal particle acceleration (Bell, 1978; Blandford & Ostriker, 1978). Non-thermal particle acceleration, a cornerstone of high-energy astrophysics, is at the heart of various cosmic phenomena including, but not limited to, pulsar wind nebulae, supernova remnants, and indeed AGN jets (Achterberg et al., 2001; Kardashev, 2010). The acceleration of particles to relativistic velocities through mechanisms such as Fermi acceleration and magnetic reconnection significantly shapes the radiation spectrum, with its influence extending to high-energy bands including X-ray and gamma-ray emissions (Drury, 1983; de Gouveia Dal Pino & Lazarian, 2005). Yet, its role in modifying the course and consequences of AGN jet-cloud interactions remains largely uncharted territory.
The existing body of work offers isolated glimpses into the individual aspects of AGN jet-cloud interactions, non-thermal particle acceleration, and high-energy emission processes. For example, the influence of jet-cloud interactions on the morphology of radio galaxies was investigated by van Breugel et al. (1985), and a detailed analysis of non-thermal particle acceleration in AGN jets was conducted by Ostrowski (1998). However, a combined and integrated exploration that marries these three crucial facets into a coherent theoretical framework is
conspicuously absent. Our study addresses this void by developing a multidimensional computational model that unifies the magnetohydrodynamics of AGN jet-cloud interactions and non-thermal particle acceleration. We probe the influence of these combined processes on the high-energy radiative signatures emerging from AGN jets. Our findings aim to deepen our understanding of the rich physics within AGN jets, providing a fresh lens through which we can interpret multi-wavelength observational data from state-of-the-art astrophysical observatories.
## Theoretical Background
The phenomena at play during AGN jet-cloud interactions, driven by non-thermal particle acceleration, straddle a range of intricate physical theories.
Accretion disc models, illuminated by seminal research (Shakura & Sunyaev, 1973; Blandford & Znajek, 1977), serve as a theoretical launchpad for this study. The dynamics of these systems, fueled by supermassive black holes, are regulated by gravitational, magnetic, and radiation pressures. A delicate interplay of these forces governs the accretion of matter and the consequential extraction of angular momentum, thus launching the jets. The details of these processes are still subjects of ongoing research, with competing models proposing different mechanisms of energy and momentum transfer.
To examine the interactions of AGN jets with interstellar clouds, we must turn to the framework of magnetohydrodynamics (MHD). Here, the collective behaviour of charged particles, interplaying with magnetic and electric fields, is depicted through a fluid-like representation. The dynamics of such systems encapsulate shock waves, instabilities, and magnetic reconnection, giving us critical insight into the turbulent jet-cloud interface (De Young, 1993). The characterization of the interstellar cloud - its density, magnetic field strength and orientation, and the ionization state - can significantly alter the results of these MHD interactions. Particle acceleration mechanisms are cornerstones in this theoretical edifice. Diffusive shock acceleration, or Fermi acceleration, postulates that charged particles acquire energy during their deflections by magnetic irregularities at shock fronts, thereby leading to a power-law energy distribution (Bell, 1978; Blandford & Ostriker, 1978). Conversely, magnetic reconnection entails a rapid topological reconfiguration of magnetic field lines leading to swift energy release and particle acceleration (de Gouveia Dal Pino & Lazarian, 2005). This mechanism is especially relevant in turbulent magnetic environments such as the AGN jet-cloud interface.
The propagation of non-thermal particles in the AGN jet and cloud environment can be statistically described using the Fokker-Planck equation. This framework encapsulates acceleration, radiative cooling, and spatial diffusion processes, thereby facilitating an understanding of the energy distribution and temporal evolution of these particles. To correlate our theoretical constructs with observable properties, we must employ radiative transfer principles. The spectrum of the AGN jet-cloud system primarily features non-thermal radiation extending from radio to gamma-ray frequencies. Synchrotron emission, resulting from relativistic electrons gyrating in magnetic fields, and inverse Compton scattering, wherein low-energy photons are upscattered to higher energies on colliding with high-energy electrons, are pivotal in this context (Rybicki & Lightman, 1979). Ultimately, the goal is to harmonize these intricate theoretical threads into a consistent, multidisciplinary framework. This integrative approach allows us to delve deeper into the physics of AGN jet-cloud interactions and the pivotal role of non-thermal particle acceleration, thereby unravelling new insights into these cosmic powerhouses.
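For reference, a commonly quoted schematic form of this momentum-space Fokker-Planck transport equation is
\[
\frac{\partial f(p,t)}{\partial t}
=\frac{1}{p^{2}}\frac{\partial}{\partial p}\!\left[p^{2}D_{pp}\,\frac{\partial f}{\partial p}\right]
-\frac{1}{p^{2}}\frac{\partial}{\partial p}\!\left[p^{2}\,\dot{p}\,f\right]
+Q(p,t)-\frac{f}{\tau_{\rm esc}},
\]
where \(f(p,t)\) is the particle distribution function, \(D_{pp}\) the momentum-diffusion coefficient, \(\dot{p}\) the systematic gain/loss rate (shock compression, reconnection electric fields, radiative and adiabatic cooling), \(Q\) an injection term and \(\tau_{\rm esc}\) an escape timescale. These symbols are generic placeholders from the standard treatment rather than the exact coefficients adopted in the simulations described below.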
## Methodology
We constructed a detailed three-dimensional computational model to simulate the AGN jet-cloud interaction and the associated non-thermal particle acceleration. This model offers a unified framework, encompassing multiple scales and physical processes for a comprehensive analysis.
### Relativistic MHD Module
The central component of our computational model is the Relativistic Magnetohydrodynamics (RMHD) module, designed to solve the governing equations of plasma in the presence of a magnetic field under relativistic conditions. The dynamics of the AGN jet and the interstellar cloud are controlled by these equations, which synthesize the concepts of special relativity, fluid dynamics, and electromagnetism. Our computational model is grounded on the conservative form of the RMHD equations, composed of the conservation laws of mass, momentum, energy, and magnetic flux, which are coupled with the equation of state for a perfect gas. The magnetic field, \(\mathbf{B}\), and the fluid four-velocity, \(u\), constitute the primary variables of the system, alongside the rest mass density, \(\rho\), and the specific enthalpy, \(h\). The Lorentz factor, \(\Gamma\), encapsulating the relativistic effects, emerges naturally from these variables. For a precise, stable, and conservative numerical scheme, we adopted the Godunov method, which can accurately capture shock waves and other discontinuities inherent in the jet-cloud interaction dynamics. The second-order version of this method was implemented, offering a good balance between accuracy and computational cost. An exact Riemann solver was employed, capable of treating strong shocks and rarefaction waves that may emerge in the interaction process. To ensure the \(\nabla\cdot\mathbf{B}=0\) constraint, which arises from the non-existence of magnetic monopoles, we incorporated a constrained transport scheme into the Godunov method. This staggered mesh algorithm carefully updates the magnetic field at each time step, preserving the divergence-free condition of the magnetic field to machine precision. This is of critical importance to prevent numerical instabilities and secure physically accurate results. The equations were discretized on a uniform three-dimensional Cartesian grid with 512 x 512 x 512 computational cells. The computational domain spans a cubical region of 100 parsecs on each side, providing a sufficient scale to encapsulate the jet-cloud interaction process, and high enough resolution to resolve smaller-scale phenomena such as shock fronts and fluid instabilities.
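Schematically, the system evolved by this module can be written in the standard conservation-law form
\[
\frac{\partial\mathbf{U}}{\partial t}+\nabla\cdot\mathbf{F}(\mathbf{U})=0,
\qquad
\mathbf{U}=\big(D,\ \mathbf{S},\ \tau,\ \mathbf{B}\big),\qquad D=\rho\Gamma,
\qquad
\nabla\cdot\mathbf{B}=0,
\]
supplemented by a perfect-gas equation of state \(p=(\hat{\gamma}-1)\rho\epsilon\), with \(D\) the lab-frame rest-mass density, \(\mathbf{S}\) the momentum density, \(\tau\) the energy variable and \(\mathbf{B}\) the magnetic field (\(\hat{\gamma}\) and \(\epsilon\) denote the adiabatic index and specific internal energy, introduced here for notation). Only the generic structure is quoted; the explicit flux tensor follows the usual ideal special-relativistic MHD closure, and the Godunov update advances \(\mathbf{U}\) cell by cell while the constrained-transport step preserves \(\nabla\cdot\mathbf{B}=0\).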
### Cloud Modeling
The interstellar cloud is a key feature in our computational setup, representing a stand-in for any dense gas concentration that the AGN jet might interact with. Its parameters are based on characteristics observed in galaxies hosting AGNs, such as high-density regions near the galactic nucleus or even galactic clouds in the jet path in the case of radio galaxies. We model the cloud as a homogeneous sphere of gas at an initial stage, abiding by the simplifying assumption of spherical symmetry to reduce the complexity of initial conditions. This simplification is justified as our primary focus is on the dynamics of the jet-cloud interaction and subsequent particle acceleration processes, not the intricate cloud morphology. The cloud is assigned a radius of 10 parsecs, consistent with observed scales of dense interstellar clouds. The initial density is set to 100 atoms/cm\({}^{3}\), typical of warm interstellar medium values. Such high densities relative to the surrounding medium allow for a pronounced interaction with the jet, ensuring the generation of substantial shock waves. An isothermal equation of state is assumed with a temperature of \(10^{4}\) K, maintaining the cloud in a thermally stable phase and preventing cloud collapse or expansion over the simulation time. In addition, the cloud is threaded by a uniform magnetic field of 5 micro-Gauss, oriented parallel to the jet axis. This setup provides a foundation for the development of magnetic instabilities during the interaction process and contributes to the magnetic energy available for particle acceleration. The position of the cloud within the computational domain is also a crucial aspect. It is set at a distance of 20 parsecs from the jet launching point, allowing the jet to develop a steady-state structure before it encounters the cloud.
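A minimal sketch of how such an initial condition can be laid down on a Cartesian grid is given below. It is purely illustrative: the coarse grid size, the assumed ambient density of 1 atom/cm\(^{3}\), and the placement of the jet launch point at the domain edge are our own placeholders, not the Fortran production setup; only the parameters quoted above (10 pc radius, 100 atoms/cm\(^{3}\), \(10^{4}\) K, a 5 \(\mu\)G field along the jet axis, and a cloud centre 20 pc downstream of the launch point) are taken from the text.

```python
import numpy as np

# Illustrative grid: 128^3 cells spanning a 100 pc cube (the production run uses 512^3).
N, L_pc = 128, 100.0
x = np.linspace(0.0, L_pc, N)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

# Cloud parameters from the text; the ambient density is an assumed placeholder.
n_ambient = 1.0                                        # atoms/cm^3 (assumption)
n_cloud = 100.0                                        # atoms/cm^3
T_iso = 1.0e4                                          # K, isothermal temperature
B0 = 5.0e-6                                            # Gauss, uniform field along the jet axis
r_cloud = 10.0                                         # pc
cloud_centre = np.array([20.0, L_pc / 2, L_pc / 2])    # 20 pc from the assumed launch point at x = 0

# Scalar and vector fields on the grid.
density = np.full((N, N, N), n_ambient)
inside = ((X - cloud_centre[0])**2 + (Y - cloud_centre[1])**2
          + (Z - cloud_centre[2])**2) <= r_cloud**2
density[inside] = n_cloud                              # homogeneous spherical cloud

temperature = np.full((N, N, N), T_iso)                # isothermal cloud and ambient medium
Bx = np.full((N, N, N), B0)                            # field threading the cloud, parallel to the jet (x) axis
By = np.zeros((N, N, N))
Bz = np.zeros((N, N, N))
```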
### Particle Acceleration Module
The particle acceleration module's role is to provide a detailed depiction of the processes that lead to the acceleration of particles to non-thermal energies during the interaction of the AGN jet with the interstellar cloud. This module operates by solving a one-dimensional, time-dependent Fokker-Planck equation in momentum space. This equation describes the evolution of the distribution function of particles, encompassing the balance between injection, acceleration, and loss processes. It is equipped with two critical terms: diffusion and advection. The diffusion term represents the randomization of particle momentum due to pitch-angle scattering and momentum diffusion, while the advection term signifies systematic energy gain, for instance, due to shock compression or reconnection electric fields. We consider the two dominant non-thermal particle acceleration mechanisms, diffusive shock acceleration (DSA) and magnetic reconnection. The DSA process is responsible for the generation of a power-law spectrum of particles at shock fronts. Magnetic reconnection, on the other hand, may occur in the current sheets generated during the jet-cloud interaction, efficiently accelerating particles and potentially producing a flatter power-law distribution. The initial condition for the distribution function is a power-law at low energies, representing the injection of particles into the acceleration process. The spectral index of the power-law, 2.2, is typical for DSA at strong shocks. The high-energy cutoff and temporal evolution of the distribution are computed self-consistently, taking into account the balance between acceleration and radiative and adiabatic loss processes. A constant fraction of the energy density in the magnetic field is assumed to be available for acceleration at each computational cell, the exact value of which (the acceleration efficiency) is one of the key parameters of the model. By integrating the distribution function over momentum, this module provides the energy distribution of accelerated particles. These results feed into the radiative transfer module, giving rise to non-thermal emission signatures that can be compared with multi-wavelength observations.
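The normalization of the injected non-thermal population can be illustrated with the short sketch below: a power law in Lorentz factor with index 2.2 and an exponential high-energy cutoff is scaled so that a chosen fraction of the local magnetic energy density is carried by the particles, mirroring the constant-efficiency assumption above. The cutoff Lorentz factor, the Lorentz-factor range and the 10% efficiency used here are illustrative placeholders rather than values quoted in the text.

```python
import numpy as np

M_E = 9.109e-28     # electron mass, g
C = 2.998e10        # speed of light, cm/s

def injected_spectrum(B_gauss, eps_acc=0.1, s=2.2, gamma_min=10.0,
                      gamma_cut=1.0e6, n_points=400):
    """Return (gamma, n(gamma)) with the particle energy density set to
    eps_acc * B^2 / (8 pi) in the given cell."""
    gamma = np.logspace(np.log10(gamma_min), np.log10(10.0 * gamma_cut), n_points)
    shape = gamma**(-s) * np.exp(-gamma / gamma_cut)        # un-normalized power law with cutoff

    u_shape = np.trapz(shape * gamma * M_E * C**2, gamma)   # energy density of the un-normalized spectrum
    u_target = eps_acc * B_gauss**2 / (8.0 * np.pi)         # fraction of the magnetic energy density

    return gamma, (u_target / u_shape) * shape

# Example: a cell with the ~5 microgauss field reached after cloud incorporation.
gamma, n_gamma = injected_spectrum(B_gauss=5.0e-6)
```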
### Radiative Transfer Module
The radiative transfer module is a crucial component of our computational model, responsible for transforming the properties of the accelerated particle population and the magnetic field configuration from the RMHD and particle acceleration modules into observable radiative signatures. The principal radiation mechanisms accounted for are synchrotron emission and inverse Compton scattering. Both processes are prominent in environments with highly relativistic particles and significant magnetic fields, such as AGN jets. The synchrotron process involves the emission of radiation by relativistic particles spiraling in magnetic fields. This mechanism is intrinsically tied to the strength and configuration of the magnetic field, as well as the energy distribution of the non-thermal particles, both of which are provided by the other modules in our simulation. By calculating the local synchrotron emissivity in each computational cell, integrating over all cells, and considering the effects of relativistic beaming, we can obtain the total synchrotron flux in a wide frequency range, extending from radio to X-rays. The inverse Compton process involves the upscattering of low-energy photons to higher energies by relativistic particles. In the case of AGN jets, these low-energy photons can originate from various sources, including the Cosmic Microwave Background (CMB), the accretion disk, and the jet itself. As in the case of synchrotron emission, the flux of inverse Compton radiation can be computed by integrating the local emissivity over the computational volume, taking into account the energy distribution of the seed photons and the scattering particles, as well as the effects of relativistic beaming. The resulting emission is expected to span a broad frequency range, from X-rays to gamma-rays. Finally, the orientation of the observer relative to the jet axis can significantly affect the observed radiative signatures, due to relativistic beaming. Therefore, the radiative transfer module calculates the resulting emission for an observer at an inclination angle of 30 degrees to the jet axis.
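Two textbook relations underpin the emissivities computed in this module and are quoted here for reference (standard results, e.g. Rybicki & Lightman, 1979); the 30\(^{\circ}\) viewing angle enters only through the Doppler factor:
\[
\nu_{\rm syn}\simeq\frac{3}{4\pi}\,\gamma^{2}\,\frac{eB}{m_{e}c}\,\sin\alpha,
\qquad
\nu_{\rm IC}\simeq\frac{4}{3}\,\gamma^{2}\,\nu_{\rm seed}\ \ \text{(Thomson regime)},
\qquad
\delta=\frac{1}{\Gamma\,(1-\beta\cos\theta_{\rm obs})},\ \ \theta_{\rm obs}=30^{\circ},
\]
where \(\gamma\) is the electron Lorentz factor, \(\alpha\) the pitch angle, \(\nu_{\rm seed}\) the seed-photon frequency, and \(\delta\) the Doppler factor by which the locally computed emissivities are boosted before being summed over the computational volume.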
### Computational Implementation
Our model was built upon an existing RMHD codebase, leveraging its robustness and speed. The base code is written in Fortran 2008 for its computational efficiency and is parallelized using the Message Passing Interface
(MPI) for distributed memory systems. It exploits the power of modern supercomputers, allowing the model to handle the high computational demands of our simulation. The incorporation of additional modules necessitated substantial code development. The particle acceleration and radiative transfer modules were integrated as separate routines that interface with the RMHD module. These routines, written in Fortran 2008, fetch the local plasma properties and magnetic field configuration at each computational cell from the RMHD module, perform the microphysical calculations, and then return the local radiative properties and the properties of the non-thermal particle population to the main code. A two-way coupling between the RMHD and the particle acceleration module was established to account for the back-reaction of accelerated particles on the fluid dynamics.
## Results
### Dynamics of Jet Cloud Interaction
Figure 1: Velocity Map of AGN Jet-Cloud Interaction
Figure 3: Reduction in Jet’s Average Speed Over Time
Graph illustrating the reduction in the average speed of the jet due to the additional inertia exerted by the absorbed cloud material
Figure 2: Change in Jet’s Average Density Over Time
The turbulent evolution of an AGN jet-cloud interaction, encapsulated within our simulations, paints a compelling narrative of cosmic disruption and transformation. Figures 1 to 4 demarcate significant stages of this interaction and offer insights into the changes in the jet's density, velocity, and magnetic field. Figure 1 details the initial phases of the interaction, starting from the projection of the jet towards a stationary cloud (Begelman, 1998). Traveling at almost the speed of light, the jet swiftly generates a forward shock within itself (Komissarov, 1999). This shock can be discerned in the velocity map of Figure 1 at around 0.01 Myr. By 0.05 Myr, the shock interacts with the cloud, resulting in a turbulent environment, disintegrating the cloud's structure, and slowly integrating it into the jet. By 0.2 Myr, the jet has wholly incorporated the cloud, leading to a significant increase in the jet's density. Figure 2 plots the changes in the jet's average density against time. The density shows a substantial growth from its initial value of \(10^{-26}\) g/cm\(^{3}\), rising by 2.5 times to almost \(2.5\times10^{-26}\) g/cm\(^{3}\) by 0.2 Myr. This increase underlines the significant morphological change the jet experiences upon cloud incorporation, echoing the violent, transformative nature of these cosmic encounters. The jet's velocity and magnetic field properties, crucial elements in the jet's behavior, also experience substantial modifications during this interaction period. Figure 3 illustrates the reduction in the jet's average speed to 0.95c from an initial near-light speed by 0.2 Myr. This 5% decrease reflects the additional inertia exerted by the absorbed cloud material, slowing down the hitherto unimpeded jet. Simultaneously, the cloud interaction introduces new magnetic reconnection sites within the jet. In Figure 4, these sites are marked as local maxima in the magnetic field strength graph, with strengths escalating to \(\sim\)5 \(\mu\)G. This is a two-fold increase from the initial jet magnetic field strength, signaling the intensification of magnetic dynamics within the jet-cloud system. Collectively, these figures illustrate a dramatic metamorphosis in the jet's characteristics--its density, velocity, and magnetic field--consequent to its interaction with an interstellar cloud. The considerable changes in these properties offer a crucial contextual backdrop for understanding the ensuing particle acceleration processes and resultant radiative signatures (discussed in later sections).
Figure 4: Magnetic Field Strength Across Jet
Figure 5: Velocity Vectors at Shock Fronts
Mapping of velocity vectors at the forward and reverse shocks, highlighting the zones of diffusive shock acceleration
Figure 6: Evolution of the Power-law Spectral Index
Graph showing the evolution of the power-law spectral index, highlighting the surge in the acceleration of high-energy particles due to the jet-cloud interaction
Figure 8: Power-Law Index at Reconnection Sites
Graph indicating the average power-law index at magnetic reconnection sites, emphasizing the distinct contribution of Fermi acceleration to non-thermal particle population
Figure 7: Magnetic Field Intensity Map Indicating Zones of Magnetic Reconnection
Spatial map illustrating the zones of magnetic reconnection within the jet, identified as local peaks in magnetic field strength
The complexities of AGN jet-cloud interaction, featuring shock waves and magnetic reconnection events, pave the way for intricate non-thermal particle acceleration dynamics. Our computational model enabled us to map this energetic particle behavior and derive important insights, supported by various figures. In Figure 5, the forward and reverse shocks in the jet are visibly characterized by an abrupt change in velocity vectors at 30 and 70 parsecs. These shock fronts are breeding grounds for diffusive shock acceleration (Bell, 1978). At these sites, the cosmic particles continually cross the shock front, gaining energy at each crossing and consequently resulting in a power-law distribution in particle energies. This process is illustrated in Figure 6, where we observe the evolution of the power-law spectral index. The initial index, based on a quiescent jet, is set at 2.2. However, as the jet-cloud interaction commences and intensifies, the spectral index exhibits an upward trend, reaching an average value of 2.4 by 0.2 Myr. This 9% shift, discernible in the steepening gradient of Figure 6, indicates a surge in the acceleration of high-energy particles--a defining characteristic of diffusive shock acceleration (Jokipii, 1987). In parallel, Figure 7 maps magnetic field intensities, identifying the zones of magnetic reconnection. These zones, marked as local peaks in magnetic field strength, correspond to the locations of particle acceleration via Fermi processes. Such acceleration events are known to generate a 'harder' spectrum, reflected in the flatter power-law index in the energy distribution of the particles. As shown in Figure 8, the power-law index in these zones averages at 2.1, notably lower than that at the shock fronts. This deviation of about 14% emphasizes the distinct contribution of Fermi acceleration to our non-thermal particle population and the broader impact on the energy landscape of the jet-cloud system (Drury, 1983). Lastly, Figure 9 portrays the proportion of magnetic field energy absorbed by the non-thermal particles as a function of time. With the jet-cloud interaction amplifying the magnetic field, the energy available for absorption increases. By 1 Myr into the simulation, approximately 15% of the total magnetic energy--up from nearly negligible levels at the onset--has been absorbed by the non-thermal particles. This trajectory underlines the non-linear growth of energy transfer from the magnetic field to the particles, which will significantly influence the system's radiative
Figure 9: Energy Absorbed by Non-Thermal Particles Over Time
signatures. Thus, through a careful dissection of figures 5 to 9, we discern the mechanics and consequences of non-thermal particle acceleration within our modeled AGN jet-cloud interaction.
### Radiative Signatures
Figure 11: Synchrotron Emissions vs. Time
Graph illustrating the time evolution of synchrotron emissions, revealing a significant increase due to enhanced particle acceleration and magnetic field strength
Figure 10: Energy Distribution of Non-Thermal Particles
Figure 12: Inverse Compton Scattering Emissions vs. Time Graph showcasing the increase in inverse Compton scattering emissions due to the interaction between high-energy particles and photons within the jet-cloud system
Figure 13: Spectral Energy Distribution of the Jet Graph displaying the jet’s overall spectral energy distribution, indicating a pronounced high-energy bump in the X-ray and gamma-ray range during the jet-cloud interaction
A comprehensive understanding of the energy interchange within the system, propelled by particle acceleration and magnetic field evolution, culminates in a spectrum of discernible radiative signatures. These manifestations offer tangible pathways to interpreting the invisible mechanisms driving AGN jet-cloud interactions. The radiative signatures are captured and exhibited in Figure 10 through Figure 14, each providing unique insights into the energy dynamics of our system. Our analysis begins with Figure 10, which displays the energy distribution of the accelerated non-thermal particles. It captures a distinctive shift in the spectral index from 2.2 to 2.4 within the first 0.2 Myr (Bell, 1978; Jokipii, 1987). This shift of 0.2 in the spectral index signifies an approximately 9% increase in the rate of high-energy particle production, indicating an enhanced generation of synchrotron emissions. Following this trend, Figure 11 portrays the time evolution of the synchrotron emissions. A closer inspection reveals a steep increase of about 35% within the first 0.15 Myr, followed by a more gradual rise to the 30% mark by 0.2 Myr (Blandford & Eichler, 1987). This non-linear trajectory, coupled with the amplified synchrotron emissions, signifies a direct correlation between the rise in high-energy particle production and the increased radiative output, emphasizing the crucial role particle acceleration plays in modulating radiative signatures. Similar behaviors are observed with inverse Compton scattering emissions. Predominantly resulting from the interaction between high-energy non-thermal particles and photons within the jet-cloud system, Figure 12 showcases a sharp rise of about 30% within the first 0.125 Myr. A steadier climb follows this abrupt increase, plateauing around a 25% increase by 0.15 Myr (Jones & Ellison, 1991). This detailed portrayal underscores the significant role of inverse Compton scattering in the overall radiative emission. The escalating high-energy emissions from both synchrotron radiation and inverse Compton scattering culminate in a profound impact on the jet's overall spectral energy distribution (SED), displayed in Figure 13. Notably, a high-energy bump in the X-ray and gamma-ray range becomes more pronounced during the jet-cloud interaction, peaking around 0.2 Myr. Finally, the broader observational implications of these radiative processes are considered through synthetic light curves for the jet
Figure 14: Synthetic Light Curve in the X-ray Band
across various wavebands. Figure 14 displays the light curve in the X-ray band, revealing a peak flux increase of about 28% by 0.175 Myr, which eventually settles to a 20% increase by 0.2 Myr. This dynamic evolution underscores the dramatic variability in the jet's radiative output owing to its interaction with the cloud. This in-depth dissection of the radiative signatures deriving from the AGN jet-cloud interaction delivers a holistic understanding of the various energy processes at play. Each radiative output interconnects with and influences the others, creating a complex, multilayered picture of the energetic processes within the system and their broad observational consequences.
## Discussion
### Enhanced Radio Emissions due to Increased Synchrotron Radiation
Synchrotron radiation serves as a fundamental emission mechanism in many high-energy astrophysical systems, such as AGN jets. The mechanism arises from the acceleration of relativistic charged particles, particularly electrons, spiraling in a magnetic field. As these electrons deviate from a straight trajectory, they emit radiation perpendicular to their acceleration, leading to the well-known broadband, polarized synchrotron emission (Rybicki & Lightman, 1979). Our simulations reveal that the jet-cloud interaction dramatically enhances synchrotron radiation through a two-pronged approach: the boost in the non-thermal particle population and the augmentation of the magnetic field strength. In Figure 10, the spectral index shifts from 2.2 to 2.4, signifying an increase in the production of high-energy particles that, in the presence of a magnetic field, contribute directly to synchrotron emissions (Bell, 1978; Jokipii, 1987). This rise is also reflected in Figure 11, which showcases an increased output of synchrotron emissions (Blandford & Eichler, 1987). Simultaneously, the introduction of cloud material into the jet fabric enhances the jet's magnetic field strength, providing further fuel for the synchrotron process. As shown in Figure 4, the jet-cloud interaction triggers magnetic reconnection sites that escalate magnetic field strength up to \(\sim\)5 \(\mu\)G--a two-fold increase from the initial jet magnetic field strength. This strengthened magnetic field, when combined with the enlarged non-thermal particle population, amplifies the synchrotron radiation, thereby leading to enhanced radio emissions (Blandford & Eichler, 1987; Meier, Koide & Uchida, 2001). These amplified radio emissions serve as a tangible observational signature of jet-cloud interactions. Observations of radio-loud AGNs using radio telescopes like the Very Large Array (VLA) and the upcoming Square Kilometer Array (SKA) could potentially detect these heightened emissions. Specifically, enhancements at low frequencies, where synchrotron radiation predominantly manifests, might signal ongoing jet-cloud interactions in AGNs (Padovani, 2017; Falcke & Biermann, 1995). Furthermore, such interactions could be associated with the emergence of complex radio structures, such as hotspots or lobes, due to the dissipation of kinetic energy into synchrotron radiation (Kaiser, Schoenmakers & Rottgering, 2000).
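The scaling invoked here follows from the standard single-particle synchrotron power,
\[
P_{\rm syn}=\tfrac{4}{3}\,\sigma_{T}\,c\,\beta^{2}\gamma^{2}\,U_{B},\qquad U_{B}=\frac{B^{2}}{8\pi},
\]
with \(\sigma_{T}\) the Thomson cross-section: the emitted power grows with both the particle energy and the magnetic energy density, which is why a harder particle spectrum combined with a doubled field strength amplifies the radio output.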
### Increase in X-ray and Gamma-Ray Emissions due to Inverse Compton Scattering
Inverse Compton scattering (ICS) is a crucial radiation mechanism in high-energy astrophysical environments. In these scenarios, low-energy photons acquire significantly more energy when they interact with relativistic electrons, shifting from lower energy bands such as the radio or infrared into higher energy domains, such as X-ray and gamma-ray spectra (Blumenthal & Gould, 1970). In the context of our AGN jet-cloud interaction model, the ICS mechanism is fueled by the increased population of high-energy, non-thermal particles and the amplification of magnetic fields due to the interaction. From Figure 10, we observe a significant boost in the non-thermal particle population due to diffusive shock acceleration and Fermi processes (Bell, 1978; Drury, 1983). This growth intensifies with the increasing magnetic field strength, as seen in Figure 4, that allows more interactions between these high-energy particles and lower-energy photons in the AGN jet-cloud system. These interactions lead to a
sharp rise in ICS emissions in the X-ray and gamma-ray regions, as demonstrated in Figure 12. The abrupt increase of approximately 30% within the first 0.125 Myr showcases the rapid ramp-up of ICS activity as a result of the jet-cloud interaction (Jones & Ellison, 1991). Furthermore, the significant role of ICS in shaping the jet's overall spectral energy distribution (SED) is evident from Figure 13. The pronounced high-energy bump in the X-ray and gamma-ray range during the interaction, peaking around 0.2 Myr, is a direct consequence of the surge in ICS emissions. This enhancement of X-ray and gamma-ray emissions could provide an observational indicator of AGN jet-cloud interactions, detectable by high-energy observatories such as the Chandra X-ray Observatory or the Fermi Gamma-ray Space Telescope (Nandra et al., 2013; Atwood et al., 2009). These enhanced emissions might also help in the identification of previously unknown AGN activities by their distinctive high-energy signatures and contribute to our broader understanding of AGN energetics and evolution.
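For reference, the corresponding single-particle inverse-Compton power (Thomson regime) has the same form with the seed-photon energy density in place of the magnetic one, so the relative importance of the two channels is set by the ratio of those energy densities:
\[
P_{\rm IC}=\tfrac{4}{3}\,\sigma_{T}\,c\,\beta^{2}\gamma^{2}\,U_{\rm ph},
\qquad
\frac{P_{\rm IC}}{P_{\rm syn}}=\frac{U_{\rm ph}}{U_{B}},
\]
where \(U_{\rm ph}\) is the energy density of the seed-photon field (CMB, accretion-disc and jet photons in the present setup).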
### Variability in Light Curve Flux Due to Cloud Disruption and Absorption
The interaction of an AGN jet with an interstellar cloud inevitably creates a disruption in the jet's otherwise stable flow, leading to a significant shift in its observational characteristics. One such profound change is observed in the flux variability of the jet's light curve. Our computational simulation showcases this variability through the dramatic change in the X-ray band light curve, as depicted in Figure 14. The flux shows an initial peak increase of about 28% by 0.175 Myr, which later settles to a 20% increase by 0.2 Myr. This observation of flux variability can be attributed to the absorption and subsequent acceleration of the cloud material by the jet (Blundell & Bowler, 2004). The cloud's disruption and absorption lead to a pronounced increase in the jet's density, as seen in Figure 2. This change corresponds to a higher population of particles within the jet, thereby providing more material to accelerate and, consequently, generate radiation. The process of shock acceleration and magnetic reconnection, intensified due to cloud incorporation (Bell, 1978; Drury, 1983), further contributes to the heightened production of high-energy particles. The increased number of these particles enhances the emission of synchrotron and inverse Compton scattering radiation, which directly impacts the flux observed in various light curves, particularly in the X-ray band (Blumenthal & Gould, 1970; Jones & Ellison, 1991). Furthermore, the increased density and magnetic field lead to a higher opacity within the jet (Kylafis, 1983). The cloud absorption results in more photon-matter interactions, which can cause an initial burst of radiation as observed in our model. Over time, as the disrupted cloud material becomes integrated and the jet returns to a more stable state, the radiation flux is observed to reduce but still maintains a higher value than pre-interaction levels. These findings have significant implications for our understanding of AGN variability, particularly in their light curves. High variability in the light curve flux, especially in the X-ray band, could be indicative of recent or ongoing jet-cloud interactions. This would provide observational astronomers with a tool to identify and study these dynamic events and their role in AGN evolution and activity.
## Conclusion
Our extensive computational simulations of AGN jet-cloud interactions have shed light on the profound transformations that such interactions induce in the jet's properties and their subsequent impacts on non-thermal particle acceleration and radiative signatures. These transformations fundamentally change the jet's density, velocity, and magnetic field, significantly influence the acceleration mechanisms of non-thermal particles, and, consequently, shape the observable radiative signatures in the form of enhanced synchrotron emissions, inverse Compton scattering, and variabilities in the light curve.
These findings reinforce our initial thesis that non-thermal particle acceleration plays a crucial role in the radiative signatures of AGN jet-cloud interactions. The observed increase in synchrotron and inverse Compton emissions, along with the variability in light curve flux, align with the theoretical expectations of the impacts of non-thermal particle acceleration on radiative signatures. Therefore, these results underscore the need to account for such
acceleration processes in models of AGN jet-cloud interactions for a more accurate interpretation of observed AGN phenomena.
Future works should seek to investigate the role of different cloud parameters on the jet-cloud interactions. This includes exploring variations in cloud densities, chemical compositions, and spatial sizes. Understanding how these variables influence the morphological changes in the jet, acceleration of non-thermal particles, and the resultant radiative signatures could provide valuable context-specific insights. Additionally, future works should explore broader feedback processes in AGN. The dynamics of jet-cloud interactions undoubtedly influence the wider environments around AGN, potentially driving galactic outflows or triggering star formation; coupling our jet-cloud interaction simulations with larger-scale galaxy evolution models could therefore create a more unified picture of AGN impacts on galactic scales. Finally, future research should look into specific mechanisms of particle acceleration within AGN jets. For instance, exploring how the spectral indices vary under different magnetic field conditions or shock front configurations could enhance our understanding of these high-energy phenomena.
|
2305.04073 | Explaining RL Decisions with Trajectories | Explanation is a key component for the adoption of reinforcement learning
(RL) in many real-world decision-making problems. In the literature, the
explanation is often provided by saliency attribution to the features of the RL
agent's state. In this work, we propose a complementary approach to these
explanations, particularly for offline RL, where we attribute the policy
decisions of a trained RL agent to the trajectories encountered by it during
training. To do so, we encode trajectories in offline training data
individually as well as collectively (encoding a set of trajectories). We then
attribute policy decisions to a set of trajectories in this encoded space by
estimating the sensitivity of the decision with respect to that set. Further,
we demonstrate the effectiveness of the proposed approach in terms of quality
of attributions as well as practical scalability in diverse environments that
involve both discrete and continuous state and action spaces such as
grid-worlds, video games (Atari) and continuous control (MuJoCo). We also
conduct a human study on a simple navigation task to observe how their
understanding of the task compares with data attributed for a trained RL
policy. Keywords -- Explainable AI, Verifiability of AI Decisions, Explainable
RL. | Shripad Vilasrao Deshmukh, Arpan Dasgupta, Balaji Krishnamurthy, Nan Jiang, Chirag Agarwal, Georgios Theocharous, Jayakumar Subramanian | 2023-05-06T15:26:22Z | http://arxiv.org/abs/2305.04073v2 | # Explaining RL Decisions with Trajectories
###### Abstract
Explanation is a key component for the adoption of reinforcement learning (RL) in many real-world decision-making problems. In the literature, the explanation is often provided by saliency attribution to the features of the RL agent's state. In this work, we propose a complementary approach to these explanations, particularly for offline RL, where we attribute the policy decisions of a trained RL agent to the trajectories encountered by it during training. To do so, we encode trajectories in offline training data individually as well as collectively (encoding a set of trajectories). We then attribute policy decisions to a set of trajectories in this encoded space by estimating the sensitivity of the decision with respect to that set. Further, we demonstrate the effectiveness of the proposed approach in terms of quality of attributions as well as practical scalability in diverse environments that involve both discrete and continuous state and action spaces such as grid-worlds, video games (Atari) and continuous control (MuJoCo). We also conduct a human study on a simple navigation task to observe how their understanding of the task compares with data attributed for a trained RL policy.
**Keywords:** Explainable AI, Verifiability of AI Decisions, Explainable RL.
## 1 Introduction
Reinforcement learning (Sutton & Barto, 2018) has enjoyed great popularity and has achieved huge success, especially in online settings, following the advent of deep reinforcement learning (Mnih et al., 2013; Schulman et al., 2017; Silver et al., 2017; Haarnoja et al., 2018). Deep RL algorithms are now able to handle high-dimensional observations such as visual inputs with ease. However, using these algorithms in the real world requires: i) efficient learning from minimal exploration to avoid catastrophic decisions due to insufficient knowledge of the environment, and ii) being explainable. The first aspect is being studied under offline RL, where the agent is trained on collected experience rather than exploring directly in the environment. There is a huge body of work on offline RL (Levine et al., 2020; Kumar et al., 2020; Yu et al., 2020; Kostrikov et al., 2021). However, more work is needed to address the explainability aspect of RL decision-making.
Previously, researchers have attempted explaining decisions of RL agents by highlighting important features of the agent's state (input observation) (Puri et al., 2019; Iyer et al., 2018; Greydanus et al., 2018). While these approaches are useful, we take a complementary route. Instead of identifying salient state-features, we wish to identify the past experiences (trajectories) that led the RL agent to learn certain behaviours. We call this approach trajectory-aware RL explainability. Such explainability confers faith in the decisions suggested by the RL agent in critical scenarios (surgical (Loftus et al., 2020), nuclear (Boehnlein et al., 2021), etc.) by looking at the trajectories responsible for the decision. While this sort of training data attribution has been shown to be highly effective in supervised learning (Nguyen et al., 2021), to the best of our knowledge, this is the first work to study data attribution-based explainability in RL. In the present work, we restrict ourselves to offline RL
setting where the agent is trained completely offline, i.e., without interacting with the environment and later deployed in the environment.
Contributions of this work are enumerated below:
1. A novel explainability framework for reinforcement learning that aims to find experiences (trajectories) that lead an RL agent to learn certain behaviours.
2. A solution for trajectory attribution in offline RL setting based on state-of-the-art sequence modeling techniques. In our solution, we present a methodology that generates a single embedding for a trajectory of states, actions, and rewards, inspired by approaches in Natural Language Processing (NLP). We also extend this method to generate a single encoding of data containing a set of trajectories.
3. Analysis of trajectory explanations produced by our technique along with analysis of the trajectory embeddings generated, where we demonstrate how different embedding clusters represent different semantically meaningful behaviours. Additionally, we also conduct a study to compare human understanding of RL tasks with trajectories attributed.
This paper is organized as follows. In Sec. 2 we cover the works related to explainability in RL and the recent developments in offline RL. We then present our trajectory attribution algorithm in Sec. 3. The experiments and results are presented in Sec. 4. We discuss the implications of our work and its potential extensions in the concluding Sec. 5.
## 2 Background and Related Work
**Explainability in RL.** Explainable AI (XAI) refers to the field of machine learning (ML) that focuses on developing tools for explaining the decisions of ML models. Explainable RL (XRL) (Puiutta and Veith, 2020; Korkmaz, 2021) is a sub-field of XAI that specializes in interpreting behaviours of RL agents. Prior works include approaches that distill the RL policy into simpler models such as a decision tree (Coppens et al., 2019) or into human-understandable high-level decision language (Verma et al., 2018). However, such policy simplification fails to approximate the behavior of complex RL models. In addition, causality-based approaches (Pawlowski et al., 2020; Madumal et al., 2020) aim to explain an agent's action by identifying the cause behind it using counterfactual samples. Further, saliency-based methods using input feature gradients (Iyer et al., 2018) and perturbations (Puri et al., 2019; Greydanus et al., 2018) provide state-based explanations that aid humans in understanding the agent's actions. To the best of our knowledge, for the first time, we explore the direction of explaining an agent's behaviour by attributing its actions to past encountered trajectories rather than highlighting state features. Also, memory understanding (Koul et al., 2018; Danesh et al., 2021) is a relevant direction, where finite state representations of recurrent policy networks are analysed for interpretability. However, unlike these works, we focus on sequence embedding generation and avoid using policy networks for actual return optimization.
**Offline RL.** Offline RL (Levine et al., 2020) refers to the RL setting where an agent learns from collected experiences and does not have direct access to the environment during training. There are several specialized algorithms proposed for offline RL including model-free ones (Kumar et al., 2020; Kumar et al., 2019) and model-based ones (Kidambi et al., 2020; Yu et al., 2020). In this work, we use algorithms from both these classes to train offline RL agents. In addition, recently, the RL problem of maximizing long-term return has been cast as taking the best possible action given the sequence of past interactions in terms of states, actions, rewards (Chen et al., 2021; Janner et al., 2021; Reed et al., 2022; Park et al., 2018). Such sequence modelling approaches to RL, especially the ones based on transformer architecture (Vaswani et al., 2017), have produced state-of-the-art results in various offline RL benchmarks, and offer rich latent representations to work with. However, little to no work has been done in the direction of understanding these sequence representations and their applications. In this work, we base our solution on transformer-based sequence modelling approaches to leverage their high efficiency in capturing the policy and environment dynamics of the offline RL systems. Previously, researchers in group-driven RL (Zhu et al., 2018) have employed raw state-reward vectors as trajectory representations. We believe transformer-based embeddings, given their proven capabilities, would serve as better representations than state-reward vectors.
## 3 Trajectory Attribution
**Preliminaries.** We denote the offline RL dataset using \(\mathcal{D}\) that comprises a set of \(n_{\tau}\) trajectories. Each trajectory, denoted by \(\tau_{j}\), comprises a sequence of observation (\(o_{k}\)), action (\(a_{k}\)) and per-step reward (\(r_{k}\)) tuples, with \(k\) ranging from 1 to the length of the trajectory \(\tau_{j}\). We begin by training an offline RL agent on this data using any standard offline RL algorithm from the literature.
**Algorithm.** Having obtained the learned policy using an offline RL algorithm, our objective now is to attribute this policy, i.e., the action chosen by this policy, at a given state to a set of trajectories. We intend to achieve this in the following way. We want to find the smallest set of trajectories, the absence of which from the training data leads to different behavior at the state under consideration. That is, we posit that this set of trajectories contains specific behaviors and respective feedback from the environment that trains the RL agent to make decisions in a certain manner. This identified set of trajectories would then be provided as attribution for the original decision.
While this basic idea is straightforward and intuitive, it is not computationally feasible beyond a few small RL problems with discrete state and action spaces. The key requirement to scale this approach to large, continuous state and action space problems, is to group the trajectories into clusters which can then be used to analyze their role in the decision-making of the RL agent. In this work, we propose to cluster the trajectories using trajectory embeddings produced with the help of state-of-the-art sequence modeling approaches.
Figure 1 gives an overview of our proposed approach involving five steps: (i) Trajectory encoding, (ii) Trajectory clustering, (iii) Data embedding, (iv) Training explanation policies, and (v) Cluster attribution, each of which is explained in the sequel.
**(i) Trajectory Encoding.** First, we tokenize the trajectories in the offline data according to the specifications of the sequence encoder used (e.g. decision transformer/trajectory transformer). The observation, action and reward tokens of a trajectory are then fed to the sequence encoder to produce corresponding latent representations, which we refer to as output tokens. We define the _trajectory embedding_ as an average of these output tokens. This technique is inspired by average-pooling
Figure 1: **Trajectory Attribution in Offline RL.** First, we encode trajectories in offline data using sequence encoders and then cluster the trajectories using these encodings. Also, we generate a single embedding for the data. Next, we train explanation policies on variants of the original dataset and compute corresponding data embeddings. Finally, we attribute decisions of RL agents trained on entire data to trajectory clusters using action and data embedding distances.
techniques (Choi et al., 2021; Briggs, 2021) used in NLP to create a sentence embedding from the embeddings of the words present in it. (Refer to Algorithm 1.)
```
/* Encoding a given set of trajectories individually */
Input      : Offline data {τ_i}, sequence encoder E
Initialize : Array T to collect the trajectory embeddings
for τ_j in {τ_i} do
    /* Using E, get output tokens for all the o, a & r in τ_j (3T input tokens in total) */
    (e_{o_1,j}, e_{a_1,j}, e_{r_1,j}, ..., e_{o_T,j}, e_{a_T,j}, e_{r_T,j}) ← E(o_{1,j}, a_{1,j}, r_{1,j}, ..., o_{T,j}, a_{T,j}, r_{T,j})
    /* Take the mean of the output tokens to generate τ_j's embedding t_j */
    t_j ← (e_{o_1,j} + e_{a_1,j} + e_{r_1,j} + ... + e_{o_T,j} + e_{a_T,j} + e_{r_T,j}) / (3T)
    Append t_j to T
Output     : Trajectory embeddings T
```
**Algorithm 1** encodeTrajectories
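A minimal Python sketch of this encoding step is given below; the `encoder` object and its `output_tokens` method are placeholders for whichever sequence model (decision transformer, trajectory transformer, or LSTM) is used, and are not part of the original implementation.

```python
import numpy as np

def encode_trajectories(trajectories, encoder):
    """Embed each trajectory as the mean of its encoder output tokens.

    `trajectories` holds (observations, actions, rewards) tuples and `encoder`
    is any sequence model exposing an `output_tokens` method that returns one
    latent vector per input token; both are illustrative assumptions.
    """
    embeddings = []
    for obs, acts, rews in trajectories:
        tokens = np.asarray(encoder.output_tokens(obs, acts, rews))  # shape: (3T, dim)
        embeddings.append(tokens.mean(axis=0))                       # average-pool over tokens
    return np.stack(embeddings)                                      # shape: (n_traj, dim)
```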
**(ii) Trajectory Clustering.** Having obtained trajectory embeddings, we cluster them using X-Means clustering algorithm (Pelleg et al., 2000) with implementation provided by Novikov (2019). While in principle, any suitable clustering algorithm can be used here, we chose X-Means as it is a simple extension to the K-means clustering algorithm (Lloyd, 1982); it determines the number of clusters \(n_{c}\) automatically. This enables us to identify all possible patterns in the trajectories without forcing \(n_{c}\) as a hyperparameter (Refer to Algorithm 2).
```
/* Clustering the trajectories using their embeddings */
Input  : Trajectory embeddings T = {t_i}, clusteringAlgo
C ← clusteringAlgo(T)    // Cluster using the provided clustering algorithm
Output : Trajectory clusters C = {c_i}, i = 1, ..., n_c
```
**Algorithm 2** clusterTrajectories
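For illustration, the clustering step can be sketched with scikit-learn; note that the paper uses X-Means (pyclustering), which chooses \(n_{c}\) automatically, whereas this stand-in uses plain K-means with \(n_{c}\) fixed by hand.

```python
from sklearn.cluster import KMeans

def cluster_trajectories(trajectory_embeddings, n_clusters):
    """Group trajectory embeddings into behaviour clusters (K-means stand-in for X-Means)."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(trajectory_embeddings)
    # cluster id -> indices of trajectories assigned to it
    return {c: [i for i, l in enumerate(labels) if l == c] for c in range(n_clusters)}
```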
**(iii) Data Embedding.** We need a way to identify the smallest change in the original data that leads to a change in the behavior of the RL agent. To achieve this, we propose a representation for the data comprising the collection of trajectories. The representation has to be agnostic to the order in which trajectories are present in the collection. We therefore follow the set-encoding procedure prescribed in Zaheer et al. (2017): we first sum the embeddings of the trajectories in the collection, normalize this sum by dividing by a constant, and then apply a non-linearity, in our case simply a softmax over the feature dimension, to generate a single _data embedding_ (refer to Algorithm 3).
We use this technique to generate data embeddings for \(n_{c}+1\) sets of trajectories. The first set represents the entire training data whose embedding is denoted by \(\bar{d}_{\text{orig}}\). The remaining \(n_{c}\) sets are constructed as follows. For each trajectory cluster \(c_{j}\), we construct a set with the entire training data but the trajectories contained in \(c_{j}\). We call this set the complementary data set corresponding to cluster \(c_{j}\) and the corresponding data embedding as the complementary data embedding \(\bar{d}_{j}\).
```
/* Generating the data embedding for a given set of trajectories */
Input  : Trajectory embeddings T = {t_i}, normalizing factor M, softmax temperature T_soft
s ← (Σ_i t_i) / M                                            // Sum the trajectory embeddings and normalize
d ← {d_j | d_j = exp(s_j / T_soft) / Σ_k exp(s_k / T_soft)}  // Softmax along the feature dimension
Output : Data embedding d
```
**Algorithm 3** generateDataEmbedding
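A direct NumPy transcription of Algorithm 3 is shown below; the normalizing factor \(M\) and the softmax temperature are hyperparameters, and the default values here are arbitrary illustrations rather than the values used in the experiments.

```python
import numpy as np

def generate_data_embedding(trajectory_embeddings, M=1000.0, T_soft=1.0):
    """Permutation-invariant embedding of a set of trajectories (Algorithm 3)."""
    s = np.sum(trajectory_embeddings, axis=0) / M     # sum and normalize
    z = np.exp((s - s.max()) / T_soft)                # max-shifted for numerical stability
    return z / z.sum()                                # softmax over the feature dimension
```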
**(iv) Training Explanation Policies.** In this step, for each cluster \(c_{j}\), using its complementary data set, we train an offline RL agent. We ensure that all the training conditions (algorithm, weight initialization, optimizers, hyperparameters, etc.) are identical to the training of the original RL policy, except for the modification in the training data. We call this newly learned policy the explanation policy corresponding to cluster \(c_{j}\). We thus get \(n_{c}\) explanation policies at the end of this step. In addition, we compute data embeddings for the complementary data sets (refer to Algorithm 4).
```
/* Train explanation policies & compute related data embeddings */
Input  : Offline data {τ_i}, trajectory embeddings T, trajectory clusters C, offlineRLAlgo
for c_j in C do
    {τ_i}_j ← {τ_i} − c_j                              // Complementary data set corresponding to c_j
    T_j ← gatherTrajectoryEmbeddings(T, {τ_i}_j)       // Gather the corresponding trajectory embeddings
    Explanation policy π_j ← offlineRLAlgo({τ_i}_j)
    Complementary data embedding d_j ← generateDataEmbedding(T_j, M, T_soft)
Output : Explanation policies {π_j}, complementary data embeddings {d_j}
```
**Algorithm 4** trainExpPolicies
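This step is a loop over complementary data sets; in the sketch below, `train_offline_rl` is a hypothetical wrapper around whichever offline RL trainer is used (e.g. SAC or DiscreteSAC through d3rlpy), called with the same configuration as the original policy.

```python
def train_explanation_policies(trajectories, clusters, train_offline_rl,
                               encode_fn, embed_fn):
    """Train one explanation policy per cluster on its complementary data set.

    `train_offline_rl`, `encode_fn` and `embed_fn` are placeholders for the
    offline RL trainer, the trajectory encoder and the data-embedding routine.
    """
    policies, comp_embeddings = {}, {}
    for cid, member_ids in clusters.items():
        members = set(member_ids)
        complement = [t for i, t in enumerate(trajectories) if i not in members]
        policies[cid] = train_offline_rl(complement)            # identical config to the original run
        comp_embeddings[cid] = embed_fn(encode_fn(complement))  # complementary data embedding
    return policies, comp_embeddings
```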
**(v) Cluster Attribution.** In this final step, given a state, we note the actions suggested by all the explanation policies at this state. We then compute the distances of these actions (where we assume a metric over the action space) from the action suggested by the original RL agent at the state. The explanation policies corresponding to the maximum of these distances form the candidate attribution set. For each policy in this candidate attribution set, we compute the distance between its respective complementary data embedding and the data embedding of the entire training data using the Wasserstein metric for capturing distances between softmax simplices (Vallender, 1974). We then select the policy that has the smallest data distance and attribute the decision of the RL agent to the cluster corresponding to this policy (refer to Algorithm 5). Our approach, comprising all five steps, is summarized in Algorithm 6.
```
/* Generating cluster attributions for a_orig = π_orig(s) */
Input  : State s, original policy π_orig, explanation policies {π_j},
         original data embedding d_orig, complementary data embeddings {d_j}
Original action a_orig ← π_orig(s)
Actions suggested by explanation policies a_j ← π_j(s)
d_{a_orig, a_j} ← calcActionDistance(a_orig, a_j)   // Compute action distances
K ← argmax(d_{a_orig, a_j})                         // Get candidate clusters using argmax
for k in K: w_k ← W_dist(d_orig, d_k)               // Wasserstein distance between the complementary data
                                                    // embeddings of candidate clusters and the original data embedding
c_final ← argmin(w_k)                               // Choose the cluster with the minimum data embedding distance
Output : c_final
```
**Algorithm 5** generateClusterAttribution
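The sketch below mirrors Algorithm 5 in Python; the action-distance function is environment-specific (0/1 disagreement for discrete actions, squared error for continuous ones) and must be supplied by the caller, and the 1-D Wasserstein distance between the softmax data embeddings is computed with SciPy.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def attribute_decision(state, pi_orig, expl_policies, d_orig, comp_embeddings,
                       action_distance):
    """Attribute pi_orig's action at `state` to one trajectory cluster (Algorithm 5)."""
    a_orig = pi_orig(state)
    # distance of each explanation policy's suggested action from the original action
    dists = {cid: action_distance(a_orig, pi(state)) for cid, pi in expl_policies.items()}
    max_d = max(dists.values())
    candidates = [cid for cid, d in dists.items() if np.isclose(d, max_d)]
    # among candidates, pick the cluster whose complementary data embedding is
    # closest (1-D Wasserstein over the feature simplex) to the full-data embedding
    support = np.arange(len(d_orig))
    w = {cid: wasserstein_distance(support, support,
                                   u_weights=d_orig, v_weights=comp_embeddings[cid])
         for cid in candidates}
    return min(w, key=w.get)
```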
## 4 Experiments and Results
Next, we present experimental results to show the effectiveness of our approach in generating trajectory explanations. We address the following key questions: Q1) Do we generate reliable trajectory explanations? (Sec. 4.2) Q2) How does a human understanding of an environment align with trajectories attributed by our algorithm and what is the scope of data attribution techniques? (Sec. 4.3)
### Experimental Setup
We first describe the environments, models, and metrics designed to study the reliability of our trajectory explanations.
**RL Environments.** We perform experiments on three environments: i) _Grid-world_ (Figure 5) which has discrete state and action spaces, ii) _Seaquest_ from Atari suite which has environments with continuous visual observations and discrete action spaces (Bellemare et al., 2013), and iii) _HalfCheetah_ from MuJoCo environments which are control environments with continuous state and action spaces (Todorov et al., 2012).
**Offline Data and Sequence Encoders.** For grid-world, we collect offline data of 60 trajectories from policy rollouts of other RL agents and train an LSTM-based trajectory encoder following the procedure described in trajectory transformer, replacing the transformer with an LSTM. For Seaquest,
we collect offline data of 717 trajectories from the D4RL-Atari repository and use a pre-trained decision transformer as trajectory encoder. Similarly, for HalfCheetah, we collect offline data of 1000 trajectories from the D4RL repository (Fu et al., 2020) and use a pre-trained trajectory transformer as a trajectory encoder. To cluster high-level skills in long trajectory sequences, we divide the Seaquest trajectories into 30-length sub-trajectories and the HalfCheetah trajectories into 25-length sub-trajectories. These choices were made based on the transformers' input block sizes and the quality of clustering.
**Offline RL Training and Data Embedding.** We train the offline RL agents for each environment using the collected data as follows: for grid-world, we use model-based offline RL, and for Seaquest and HalfCheetah, we employ DiscreteSAC (Christodoulou, 2019) and SAC (Haarnoja et al., 2018), respectively, using d3rlpy implementations (Takuma Seno, 2021). We compute the data embedding of the entire training data for each of the environments. See Appendix A.3 for additional training details.
**Encoding of Trajectories and Clustering.** We encode the trajectory data using sequence encoders and cluster the output trajectory embeddings using the X-means algorithm. More specifically, we obtain 10 trajectory clusters for grid-world, 8 for Seaquest, and 10 for HalfCheetah. These clusters represent meaningful high-level behaviors such as _'falling into the lava'_, _'filling in oxygen'_, _'taking long forward strides'_, etc. This is discussed in greater detail in Section A.4.
**Complementary Data Sets.** We obtain complementary data sets using the aforementioned cluster information and provide 10 complementary data sets for grid-world, 8 for Seaquest, and 10 for HalfCheetah. Next, we compute data embeddings corresponding to these newly formed data sets.
**Explanation Policies.** Subsequently, we train explanation policies on the complementary data sets for each environment. The training produces 10 additional policies for grid-world, 8 policies for Seaquest, and 10 policies for HalfCheetah. In summary, we train the original policy on the entire data, obtain data embedding for the entire data, cluster the trajectories and obtain their explanation policies and complementary data embeddings.
**Trajectory Attribution.** Finally, we attribute a decision made by the original policy for a given state to a trajectory cluster. In our experiments, we choose the top-3 trajectories from the attributed cluster by matching the context of the state-action under consideration with the trajectories in the cluster.
**Evaluation Metrics.** We compare policies trained on different data using three metrics (the deterministic nature of policies is assumed throughout the discussion): _1) Initial State Value Estimate_, denoted by \(\mathbb{E}(V(s_{0}))\), a measure of expected long-term returns used to evaluate offline RL training as described in Paine et al. (2020); _2) Local Mean Absolute Action-Value Difference_, defined as \(\mathbb{E}(|\Delta Q_{\pi_{\text{orig}}}|)=\mathbb{E}(|Q_{\pi_{\text{orig}}}(s,\pi_{\text{orig}}(s))-Q_{\pi_{\text{orig}}}(s,\pi_{j}(s))|)\), which measures how the original policy perceives the suggestions given by the explanation policies; and _3) Action Contrast Measure_, a measure of the difference between the actions suggested by explanation policies and the original action. Here, we use \(\mathbb{E}(\mathbbm{1}(\pi_{\text{orig}}(s)\neq\pi_{j}(s)))\) for discrete action spaces and \(\mathbb{E}((\pi_{\text{orig}}(s)-\pi_{j}(s))^{2})\) for continuous action spaces. Further, we compute distances between embeddings of the original and complementary data sets using the Wasserstein metric, \(W_{\text{dist}}(\bar{d}_{\text{orig}},\bar{d}_{j})\), later normalized to [0, 1]. Finally, the cluster attribution frequency is measured using \(\mathbb{P}(c_{\text{final}}=c_{j})\).
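The second and third metrics can be estimated empirically as sketched below; `q_orig` (a critic for the original agent) and the set of evaluation states are assumptions about how the expectations are approximated, not details taken from the paper.

```python
import numpy as np

def policy_comparison_metrics(states, pi_orig, pi_j, q_orig, discrete_actions=True):
    """Estimate the local mean absolute action-value difference and the action contrast measure."""
    dq, contrast = [], []
    for s in states:
        a0, aj = pi_orig(s), pi_j(s)
        dq.append(abs(q_orig(s, a0) - q_orig(s, aj)))
        if discrete_actions:
            contrast.append(float(a0 != aj))
        else:
            contrast.append(float(np.sum((np.asarray(a0) - np.asarray(aj)) ** 2)))
    return float(np.mean(dq)), float(np.mean(contrast))
```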
### Trajectory Attribution Results
**Qualitative Results.** Figure 2 depicts a grid-world state, (1, 1), the corresponding decision by the trained offline RL agent, _'right'_, and attribution trajectories explaining the decision. As we can observe, the decision is influenced not only by the trajectory (traj.-i) that goes through (1, 1) but also by other distant trajectories (trajs.-ii, iii). These examples demonstrate that distant experiences (e.g. traj.-iii) can significantly influence the RL agent's decisions, deeming trajectory attribution an essential component of future XRL techniques. Further, Figure 3 shows the Seaquest agent (submarine) suggesting action _'left'_ for the given observation in the context of the past few frames. The corresponding attributed trajectories provide insights into how the submarine aligns itself to target enemies coming from the left. Figure 10 shows a HalfCheetah observation, the agent-suggested action in terms of hinge torques, and the corresponding attributed trajectories showing runs that influence the suggested set of torques. This is an interesting use-case of trajectory attribution, as it explains complicated torques, understood mainly by domain experts, in terms of the simple semantic intent of _'getting up from the floor'_.
**Quantitative Analysis.** Tables 1, 2 and 3 present a quantitative analysis of the proposed trajectory attribution. The initial state value estimate for the original policy \(\pi_{\text{orig}}\) matches or exceeds the estimates for explanation policies trained on different complementary data sets in all three environment settings. This indicates that the original policy, having access to all behaviours, is able to outperform other policies that are trained on data lacking information about important behaviours (e.g. grid-world: reaching a distant goal, Seaquest: fighting in the top-right corner, HalfCheetah: stabilizing the frame while taking strides). Furthermore, the local mean absolute action-value difference and the action differences turn out to be highly correlated (Tab. 2 and 3), i.e., the explanation policies that suggest the most contrasting actions are usually perceived by the original policy as suggesting low-return actions. This evidence supports the proposed trajectory attribution algorithm, as we want to identify the behaviours which, when removed, make the agent choose actions that are not considered suitable originally. In addition, we provide the distances between the data embeddings in the penultimate column. The cluster attribution distribution is represented in the last column, which depicts how RL decisions depend on various behaviour clusters. Interestingly, in the case of grid-world, we found that only a few clusters, containing information about reaching goals and avoiding lava, had the most significant effect on the original RL policy.
Figure 3: **Seaquest Trajectory Attribution. The agent (submarine) decides to take ‘left’ for the given observation under the provided context. Top-3 attributed trajectories are shown on the right (for each training data traj., we show 6 sampled observations and the corresponding actions). As depicted in the attributed trajectories, the action ‘left’ is explained in terms of the agent aligning itself to face the enemies coming from the left end of the frame.**
Figure 2: **Grid-world Trajectory Attribution. RL agent suggests taking action ‘right’ in grid cell (1,1). This action is attributed to trajectories (i), (ii) and (iii) (We denote gridworld trajectory by annotated \(\wedge\),\(\lor\),\(>\),\(<\) arrows for ‘up’, ‘down’, ‘right’, ‘left’’ actions respectively, along with the time-step associated with the actions (0-indexed)). We can observe that the RL decisions could be influenced by trajectories distant from the state under consideration, and therefore attributing decisions to trajectories becomes important to understand the decision better.**
We conduct two additional analyses: 1) trajectory attribution on Seaquest trained using Discrete BCQ (Sec. A.7), and 2) Breakout trajectory attribution (Sec. A.8). In the first, we find a similar influence of clusters across the decision-making of policies trained under different algorithms. In the second, the Breakout results show that clusters with the high-level meanings of _'depletion of a life'_ and _'right corner shot'_ strongly influence decision-making. This is insightful, as the two behaviors are highly critical when taking an action: one avoids the premature end of the game, and the other is part of the tunneling strategy previously found in Greydanus et al. (2018).
### Quantifying utility of the Trajectory Attribution: A Human Study
One of the key desiderata of explanations is to provide useful, relevant information about the behaviour of complex AI models. To this end, prior works in other domains like vision (Goyal et al., 2019) and language (Liu et al., 2019) have conducted human studies to quantify the usefulness of output explanations. Similarly, having established a straightforward attribution technique, we wish to analyze the utility of the generated attributions and their scope in the real world through a human study. Interestingly, humans possess an explicit understanding of RL gaming environments and can reason about actions at a given state to a satisfactory extent. Leveraging this, we pilot a human study with 10 participants who had a complete understanding of the grid-world navigation environment, to quantify the alignment between human knowledge of the environment dynamics and the actual factors influencing RL decision-making.
**Study setup.** For the study, we design two tasks: i) participants need to choose a trajectory that they think _best_ explains the action suggested in a grid cell, and ii) participants need to identify _all_
| \(\pi\) | \(\mathbb{E}(V(s_{0}))\) | \(\mathbb{E}(\lvert\Delta Q_{\pi_{\text{orig}}}\rvert)\) | \(\mathbb{E}(\mathbbm{1}(\pi_{\text{orig}}(s)\neq\pi_{j}(s)))\) | \(W_{\text{dist}}(\bar{d},\bar{d}_{j})\) | \(\mathbb{P}(c_{\text{final}}=c_{j})\) |
|---|---|---|---|---|---|
| orig | **0.3061** | – | – | – | – |
| 0 | 0.3055 | 0.0012 | 0.0409 | **1.0000** | 0.0000 |
| 1 | 0.3053 | 0.0016 | 0.0409 | 0.0163 | 0.0000 |
| 2 | 0.3049 | 0.0289 | **0.1429** | 0.0034 | 0.0000 |
| 3 | 0.2857 | **0.0710** | 0.1021 | 0.0111 | 0.3750 |
| 4 | 0.2987 | 0.0322 | **0.1429** | 0.0042 | 0.1250 |
| 5 | 0.3057 | 0.0393 | 0.0409 | 0.0058 | 0.0000 |
| 6 | 0.3046 | 0.0203 | 0.1225 | 0.0005 | **0.5000** |
| 7 | 0.3055 | 0.0120 | 0.0205 | 0.0006 | 0.0000 |
| 8 | 0.3057 | 0.0008 | 0.0205 | 0.0026 | 0.0000 |
| 9 | 0.3046 | 0.0234 | **0.1429** | 0.1745 | 0.0000 |

Table 1: **Quantitative Analysis of Grid-world Trajectory Attribution.** The analysis is provided using 5 metrics. The higher \(\mathbb{E}(V(s_{0}))\), the better the trained policy. High \(\mathbb{E}(\lvert\Delta Q_{\pi_{\text{orig}}}\rvert)\) together with high \(\mathbb{E}(\mathbbm{1}(\pi_{\text{orig}}(s)\neq\pi_{j}(s)))\) is desirable. Policies with lower \(W_{\text{dist}}(\bar{d},\bar{d}_{j})\) and high action contrast are given priority during attribution. The cluster attribution distribution is given in the final column.
| \(\pi\) | \(\mathbb{E}(V(s_{0}))\) | \(\mathbb{E}(\lvert\Delta Q_{\pi_{\text{orig}}}\rvert)\) | \(\mathbb{E}(\mathbbm{1}(\pi_{\text{orig}}(s)\neq\pi_{j}(s)))\) | \(W_{\text{dist}}(\bar{d},\bar{d}_{j})\) | \(\mathbb{P}(c_{\text{final}}=c_{j})\) |
|---|---|---|---|---|---|
| orig | 85.9977 | – | – | – | – |
| 0 | 50.9399 | 1.5839 | 0.9249 | 0.4765 | 0.1129 |
| 1 | 57.5608 | 1.6352 | 0.8976 | 0.9513 | 0.0484 |
| 2 | 66.7369 | 1.5786 | 0.9233 | **1.0000** | 0.0403 |
| 3 | 3.0056 | **1.9439** | **0.9395** | 0.8999 | 0.0323 |
| 4 | 58.1854 | 1.5813 | 0.8992 | 0.5532 | 0.0968 |
| 5 | 87.3034 | 1.6026 | 0.9254 | 0.2011 | **0.3145** |
| 6 | 70.8994 | 1.5501 | 0.9238 | 0.6952 | 0.0968 |
| 7 | **89.1832** | 1.5628 | 0.9249 | 0.3090 | 0.2581 |

Table 2: **Quantitative Analysis of Seaquest Trajectory Attribution.** The analysis is provided using 5 metrics. The higher \(\mathbb{E}(V(s_{0}))\), the better the trained policy. High \(\mathbb{E}(\lvert\Delta Q_{\pi_{\text{orig}}}\rvert)\) together with high \(\mathbb{E}(\mathbbm{1}(\pi_{\text{orig}}(s)\neq\pi_{j}(s)))\) is desirable. Policies with lower \(W_{\text{dist}}(\bar{d},\bar{d}_{j})\) and high action contrast are given priority during attribution. The cluster attribution distribution is given in the final column.
_relevant_ trajectories that explain the action suggested by the RL agent. For instance, in Fig. 4a, we show one instance of the task where the agent is located at (1, 1) and takes _'right'_, along with a subset of attributed trajectories for this agent action. In both tasks, in addition to the attributions proposed by our technique, we add i) a randomly selected trajectory as an explanation and ii) a trajectory selected from a cluster different from the one attributed by our approach. These additional trajectories aid in identifying human bias toward certain trajectories while understanding the agent's behavior.
**Results.** On average, across three studies in Task 1, we found that 70% of the time human participants chose trajectories attributed by our proposed method as the best explanation for the agent's action. On average, across three studies in Task 2, nine participants chose the trajectories generated by our algorithm (Attr Traj 2). In Fig. 4b, we observe good alignment between humans' understanding of the trajectories influencing decision-making in grid navigation. Interestingly, the results also demonstrate that not all trajectories generated by our algorithm are considered relevant by humans, and often they are considered as good as a random trajectory (Fig. 4b; Attr Traj 1).
In all, as per the Task 1 results, on average 30% of the time humans fail to correctly identify the factors influencing an RL decision. Additionally, as per the Task 2 results, actual factors driving actions can be neglected by humans when trying to understand a decision. These findings highlight the necessity of data attribution-based explainability tools to build trust among human stakeholders before handing over decision-making to RL agents in the near future.
## 5 Discussion
In this work, we proposed a novel explanation technique that attributes decisions suggested by an RL agent to trajectories encountered by the agent in the past. We provided an algorithm that enables us to perform trajectory attribution in offline RL. The key idea behind our approach was to encode trajectories using sequence modelling techniques, cluster the trajectories using these embeddings and then study the sensitivity of the original RL agent's policy to the trajectories present in each of these clusters. We demonstrated the utility of our method using experiments in grid-world, Seaquest and HalfCheetah environments. In the end, we also presented a human evaluation of the results of our method to underline the necessity of trajectory attribution.
The ideas presented in this paper, such as generating trajectory embedding using sequence encoders, and creating an encoding of the set of trajectories, can be extended to other domains of RL. For instance, the trajectory embeddings could be beneficial in recognizing hierarchical behaviour patterns as studied under options theory (Sutton et al., 1999). Likewise, the data encoding could prove helpful in studying transfer learning (Zhu et al., 2020) in RL by utilizing the data embeddings from both the source and target decision-making tasks. From the XRL point of view, we wish to extend our work to online RL settings where the agent constantly collects experiences from the environment.
While our results are highly encouraging, one of the limitations of this work is that there are no established evaluation benchmarks and no other works to compare against. We believe that more extensive human studies will help address this.
Figure 4: **Column (a): An example of the human study experiment where users are required to identify the attributed trajectories that best explain the state-action behavior of the agent. Column (b): Results from our human study experiments show a decent alignment of human knowledge of navigation task with actual factors influencing RL decision-making. This underlines the utility as well as the scope of the proposed trajectory attribution explanation method.**
## Acknowledgements
We thank anonymous reviewers for their helpful feedback to make this work better. Moreover, NJ acknowledges funding support from NSF IIS-2112471 and NSF CAREER IIS-2141781.
Finally, we wish to dedicate this work to the memory of our dear colleague Georgios Theocharous who is not with us anymore. While his premature demise has left an unfillable void, his work has made an indelible mark in the domain of reinforcement learning and in the lives of many researchers. He will forever remain in our memories.
|
2302.08836 | Casimir effect in a Lorentz-violating tensor extension of a scalar field
theory | This paper investigates the Casimir Energy modifications due to the
Lorentz-violating CPT-even contribution in an extension of the scalar QED. We
have considered the complex scalar field satisfying Dirichlet boundary
conditions between two parallel plates separated by a small distance. An
appropriate tensor parametrization allowed us to study the Casimir effect in
three setups: isotropic, anisotropic parity-odd, and anisotropic parity-even.
We have shown that the Lorentz-violating contributions promote increased
Casimir energy for both the isotropic and anisotropic parity-odd
configurations. However, in the parity-even case, the Lorentz-violating terms
can promote either an increase or a decrease in the Casimir energy. We have
shown that both the increased and decreased amounts in the Casimir energy
depend on the momentum projection over the Lorentz-violating vectors. | M. C. Araújo, J. Furtado, R. V. Maluf | 2023-02-17T12:18:38Z | http://arxiv.org/abs/2302.08836v1 | # Casimir effect in a Lorentz-violating tensor extension of a scalar field theory
###### Abstract
This paper investigates the Casimir Energy modifications due to the Lorentz-violating CPT-even contribution in an extension of the scalar QED. We have considered the complex scalar field satisfying Dirichlet boundary conditions between two parallel plates separated by a small distance. An appropriate tensor parametrization allowed us to study the Casimir effect in three setups: isotropic, anisotropic parity-odd, and anisotropic parity-even. We have shown that the Lorentz-violating contributions promote increased Casimir energy for both the isotropic and anisotropic parity-odd configurations. However, in the parity-even case, the Lorentz-violating terms can promote either an increase or a decrease in the Casimir energy. We have shown that both the increased and decreased amounts in the Casimir energy depend on the momentum projection over the Lorentz-violating vectors.
Casimir effect, Lorentz symmetry breaking, CPT violation, Lorentz-violating scalar QED.
## I Introduction
The Casimir effect is a purely quantum phenomenon and is one of the most direct manifestations of the existence of vacuum quantum fluctuations [1; 2]. Predicted by H. Casimir in 1948 [3], the Casimir effect is characterized by the force of attraction between two parallel, electrically neutral conducting plates separated by a short distance in the quantum vacuum, and it emerges as a consequence of the boundary conditions imposed on the electromagnetic field by the plates. These boundaries force the frequencies of the field into a discrete spectrum where only specific, well-defined values are allowed. Consequently, the zero-point energy (vacuum energy) also changes. After some regularization procedure, we obtain a finite quantity that can be interpreted as the energy needed to keep the plates in the desired configuration. Experimentally, M. J. Sparnaay confirmed the Casimir effect at the micrometer scale using Al plates in 1958 [4]. About thirty-nine years later, the experiment was carried out again with higher accuracy using layers of Cu and Au by Lamoreaux [5] and for a metallic sphere and a flat plate by Mohideen and Roy [6]. A modern review of the experimental methods and realistic measurements can be found in references [6; 7; 8].
Although the Casimir effect was initially studied for the electromagnetic field, it is known to occur for any quantum field under certain boundary conditions. These boundaries can be material media, interfaces between two phases of the vacuum, or even space-time topologies [1; 9]. In fact, the Casimir force can depend on many parameters, such as the geometry of the confining materials, the type of boundary conditions (Dirichlet, Neumann, or mixed) [10; 11], or even the presence of extra dimensions [12]. In this sense, the Casimir effect has been widely evaluated in a number of different scenarios, such as black holes [13; 14; 15; 16; 11], modified gravity [17; 18], Horava-Lifshitz-like field theory [19; 20; 21; 22; 23], and Lorentz-violating contexts [24; 25; 26; 10].
As is known, Lorentz covariance and CPT symmetry are preserved by the Standard Model (SM) of particle physics. However, although experimentally successful, the SM still leaves some open questions, such as neutrino oscillation [27] or the lack of a quantum description of gravity, making the search for physics beyond the SM quite natural. In these new scenarios, Lorentz and CPT violation could arise from an underlying theory combining gravity with quantum mechanics, such as string theory [28; 29], Horava-Lifshitz
gravity [30; 31], loop quantum gravity [32; 33], among others. Alternatively, in order to investigate signs of such violations at low energies, Colladay and Kostelecky proposed the Standard Model Extension (SME) [34; 35; 36; 37]. As a comprehensive framework for treating CPT violation at the level of effective field theory, the SME has led to several experimental investigations [38; 39; 40; 41; 42; 43]. Moreover, it has been extensively studied in numerous contexts such as neutrino oscillation [44; 45; 46; 47], radiative corrections [48; 49; 50; 51], supersymmetric models [52; 53; 54; 55], and noncommutative quantum field theories [56; 57].
More recently, Edwards and Kostelecky proposed an extension of the SM scalar sector where small CPT deviations in neutral-meson oscillations could be evaluated [58]. Structurally, the extension consists of a general effective scalar field theory in any spacetime dimension that contains explicit, perturbative, spin-independent Lorentz-violating operators of arbitrary mass dimension [59]. The significance of spin independence lies in the fact that even for a particle with nonzero spin, Lorentz-violating effects can be handled as though the particle had zero spin. This is extremely interesting since the Higgs boson is the only fundamental spinless particle in the SM.
In this paper, the four-dimensional tensor sector of the Lorentz violating extension of the scalar electrodynamics proposed in Ref. [59] will be considered in the calculation of the Casimir energy for a massive complex scalar field satisfying Dirichlet boundary conditions between two large parallel plates separated by a small distance. As will be shown, by an appropriate tensor parametrization in terms of vectors comprising the Lorentz-violating coefficients, we will be able to analyze the Casimir effect in three different scenarios: isotropic, anisotropic parity-odd, and anisotropic parity-even scenarios. Consequently, effects of Lorentz violation on the Casimir energy may be direction-dependent. In all the cases, we apply the dimensional regularization process to regularize the Casimir energy.
This paper is organized as follows. In Sec. II, the model is presented and the tensor parameterization is discussed. In Sec. III, the isotropic case is considered. Sections IV and V are respectively dedicated to the anisotropic parity-even and anisotropic parity-odd sectors. In Sec. VI, the main results of the paper are summarized.
## II The model
In this section, we present some features concerning the model we are going to investigate the Casimir effect. As proposed by Edwards and Kostelecky in reference [59], the Lagrange density describing the propagation of a complex scalar field \(\phi\) of mass \(m\) in the presence of arbitrary Lorentz-violating effects can be written in the form
\[\mathcal{L}=\partial_{\mu}\phi^{*}\partial^{\mu}\phi+\hat{K}_{c}^{\mu\nu} \partial_{\mu}\phi^{*}\partial_{\nu}\phi-\frac{i}{2}\left[\hat{K}_{a}^{\mu}( \phi^{*}\partial_{\mu}\phi-\partial_{\mu}\phi^{*}\phi)\right]-m^{2}\phi^{*}\phi, \tag{1}\]
where \(\hat{K}_{c}^{\mu\nu}\) and \(\hat{K}_{a}^{\mu}\) are, respectively, a traceless constant tensor and a constant vector assumed to introduce only small Lorentz violations to conventional physics. In this work, our focus will be on the tensor sector of the model (1). General vector contributions are left as a perspective for future work. Moreover, Lorentz-violation effects on the Casimir energy due to similar vector terms in a massive real scalar field model were investigated in Ref. [10].
The corresponding modified Klein-Gordon equation is
\[(\Box+m^{2}+\hat{K}_{c}^{\mu\nu}\partial_{\mu}\partial_{\nu})\phi=0, \tag{2}\]
with a similar equation holding for \(\phi^{*}\).
Now, since \(\hat{K}_{c}^{\mu\nu}\) is a symmetric traceless tensor, we can use the general parametrization
\[\hat{K}_{c}^{\mu\nu}=\frac{1}{2}(U^{\mu}V^{\nu}+U^{\nu}V^{\mu})-\frac{1}{4} \eta^{\mu\nu}(U\cdot V), \tag{3}\]
where \(U\) and \(V\) are two arbitrary 4-vectors which comprise the Lorentz-violating coefficients [60]. This parametrization is useful to investigate the different configurations of this tensor individually [61]: the anisotropic parity-even sector, for example, is parameterized by two pure spacelike 4-vector \(U=(0,\mathbf{u})\) and \(V=(0,\mathbf{v})\), with \(\mathbf{u}\cdot\mathbf{v}=0\), whereas the anisotropic parity-odd sector is represented by \(U=(0,\mathbf{u})\) and \(V=(v_{0},\mathbf{0})\). The isotropic sector is in turn recovered by two timelike 4-vector \(U=(u_{0},\mathbf{0})\) and \(V=(v_{0},\mathbf{0})\). In the following sections, we will calculate the vacuum energy for each sector above considering the complex scalar field satisfying Dirichlet boundary conditions,
\[\phi(x,y,0,t)=\phi(x,y,a,t)=0, \tag{4}\]
between two large parallel plates separated by a small distance \(a\), see Fig. (1).
## III The isotropic sector
As discussed before, the tensor \(\hat{K}^{\mu\nu}_{c}\) in this case must be parameterized by two timelike 4-vectors \(U=(u_{0},{\bf 0})\) and \(V=(v_{0},{\bf 0})\) so that Eq. (2) can be rearranged as
\[(\Box+m^{2}+\frac{3}{4}\,u_{0}\,v_{0}\,\partial_{0}^{2}+\frac{1}{4}\,u_{0}\,v_ {0}\,\nabla^{2})\phi=0. \tag{5}\]
Adopting the standard procedure [62], the quantum field can be expanded as
\[\phi(x)=\sqrt{\frac{2}{a}}\sum_{n=1}^{\infty}\int\frac{d^{2}p}{(2\pi)^{2}} \frac{\sin{(n\pi z/a)}}{2\omega_{n}({\bf p})}\left[a_{n}({\bf p})e^{-ip\cdot x }+b_{n}^{\dagger}({\bf p})e^{ip\cdot x}\right], \tag{6}\]
where \(n\) is an integer and the angular frequency written as
\[\omega_{n}({\bf p})=\frac{1}{\sqrt{1+\frac{3}{4}u_{0}v_{0}}}\sqrt{\left(1- \frac{1}{4}u_{0}v_{0}\right)\left[{\bf p}^{2}+\left(\frac{n\pi}{a}\right)^{2} \right]+m^{2}}, \tag{7}\]
with \(p\cdot x\equiv\omega_{n}({\bf p})t-{\bf p}\cdot{\bf x}\), and \({\bf p}=(p_{x},p_{y})\). Note that the conditions (4) are directly verified.
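As a quick consistency check (this intermediate step is not spelled out above), inserting a single mode of the expansion (6), \(\phi_{n,\mathbf{p}}\propto\sin(n\pi z/a)\,e^{-i(\omega_{n}t-\mathbf{p}\cdot\mathbf{x})}\), into Eq. (5) with \(\Box=\partial_{0}^{2}-\nabla^{2}\) gives

\[
-\left(1+\frac{3}{4}u_{0}v_{0}\right)\omega_{n}^{2}+\left(1-\frac{1}{4}u_{0}v_{0}\right)\left[\mathbf{p}^{2}+\left(\frac{n\pi}{a}\right)^{2}\right]+m^{2}=0,
\]

which, solved for \(\omega_{n}\), reproduces Eq. (7).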
Now, as a result of canonical quantization, the Hamiltonian can be written as
\[H=\sum_{n=1}^{\infty}\int\frac{d^{2}p}{(2\pi)^{2}}\frac{1}{2}\left(1+\frac{3}{ 4}u_{0}v_{0}\right)\left[a_{n}^{\dagger}({\bf p})a_{n}({\bf p})+b_{n}^{\dagger }({\bf p})b_{n}({\bf p})+\frac{L^{2}}{1+\frac{3}{4}u_{0}v_{0}}2\omega_{n}({\bf p })\right], \tag{8}\]
where \(L^{2}\) is the area of each plate and the annihilation and creation operators \(a_{n}({\bf p})\) and \(a_{n}^{\dagger}({\bf p})\), respectively, satisfy the algebra
\[[a_{n}({\bf p}),a_{m}^{\dagger}({\bf q})]=\frac{1}{1+\frac{3}{4}u_{0}v_{0}}\, (2\pi)^{2}\,2\omega_{n}({\bf p})\,\delta^{2}({\bf p}-{\bf q})\,\delta_{nm}, \tag{9}\]
Figure 1: Two parallel plates with area \(L^{2}\) separated by a small distance \(a\). We mean here by “small” that \(a\ll L\).
with all the other commutation relations involving them being zero. Operators \(b_{n}(\mathbf{p})\) and \(b_{n}^{\dagger}(\mathbf{p})\) obey a similar algebra. Therefore, taking the vacuum expectation value of the Hamiltonian (8), we find
\[E_{0\,(iso)}=\frac{L^{2}}{(2\pi)^{2}}\sum_{n=1}^{\infty}\int d^{2}p\,\frac{1}{ \sqrt{1+\frac{3}{4}u_{0}v_{0}}}\sqrt{\left(1-\frac{1}{4}u_{0}v_{0}\right)\left[ \mathbf{p}^{2}+\left(\frac{n\pi}{a}\right)^{2}\right]+m^{2}}, \tag{10}\]
as the vacuum energy for the massive complex scalar field in the isotropic sector.
In order to extract a finite quantity from Eq. (10), we invoke the dimensional regularization procedure [63] and write
\[E_{0\,(iso)}^{Reg}=\frac{L^{d}}{(2\pi)^{d}}\sum_{n=1}^{\infty}\int d^{d}p\, \frac{1}{\sqrt{1+\frac{3}{4}u_{0}v_{0}}}\sqrt{\left(1-\frac{1}{4}u_{0}v_{0} \right)\left[\mathbf{p}^{2}+\left(\frac{n\pi}{a}\right)^{2}\right]+m^{2}}, \tag{11}\]
where \(d\) is the transverse dimension assumed as a continuous, complex variable. In this way we can perform the momentum integral by using the Schwinger proper time representation:
\[\frac{1}{a^{z}}=\frac{1}{\Gamma(z)}\int_{0}^{\infty}dt\,t^{z-1}e^{-at}. \tag{12}\]
After performing the momentum integration, the energy density becomes
\[E_{0\,(iso)}^{Reg}=\frac{L^{d}}{(2\pi)^{d}\,\sqrt{1+\frac{3}{4}u _{0}v_{0}}}\frac{\pi^{d/2}\,\Gamma\left(-\frac{d+1}{2}\right)}{\Gamma\left(- \frac{1}{2}\right)\left(1-\frac{1}{4}u_{0}v_{0}\right)^{d/2}}\sum_{n=1}^{ \infty}\left[m^{2}+\left(1-\frac{1}{4}u_{0}v_{0}\right)\left(\frac{n\pi}{a} \right)^{2}\right]^{\frac{d+1}{2}}. \tag{13}\]
One put the sum over \(n\) into a more helpful form using an Epstein-Hurwitz Zeta function type [64]
\[\sum_{n=-\infty}^{+\infty}(bn^{2}+\mu^{2})^{-s}=\sqrt{\frac{\pi}{b }}\,\frac{\Gamma\left(s-\frac{1}{2}\right)}{\Gamma(s)}\mu^{1-2s}+\frac{2\pi^{ s}}{\sqrt{b}\Gamma(s)}\sum_{n=-\infty}^{+\infty\,\prime}\mu^{\frac{1}{2}-s} \left(\frac{n}{\sqrt{b}}\right)^{s-\frac{1}{2}}K_{\frac{1}{2}-s}\left(\frac{2 \pi\mu n}{\sqrt{b}}\right), \tag{14}\]
where \(K_{\nu}(z)\) is the modified Bessel function, and the prime in the sum means that the term \(n=0\) has to be excluded. After some algebra, we can write the dimensionally regularized energy in Eq. (13) as
\[E_{0\,(iso)}^{Reg} = -\frac{L^{d}}{(4\pi)^{\frac{d+2}{2}}\left(1-\frac{1}{4}u_{0}v_{0} \right)^{\frac{d+1}{2}}\sqrt{1+\frac{3}{4}u_{0}v_{0}}}\left\{-m^{d+1}\Gamma \left(-\frac{d+1}{2}\right)\sqrt{\left(1-\frac{1}{4}u_{0}v_{0}\right)}\,\pi\right. \tag{15}\] \[+ \left.a\,m^{d+2}\Gamma\left(-\frac{d+2}{2}\right)+\frac{4m^{\frac {d+2}{2}}\left(1-\frac{1}{4}u_{0}v_{0}\right)^{\frac{d+2}{4}}}{a^{d/2}}\sum_ {n=1}^{\infty}\frac{K_{\frac{d+2}{2}}\left(\frac{2amn}{\sqrt{1-\frac{1}{4}u_{ 0}v_{0}}}\right)}{n^{(d+2)/2}}\right\}.\]
Note that only the third term in brackets is relevant for the Casimir energy. The other two terms do not contribute because they do not depend on the distance \(a\) or because they are linearly related to it. Hence, the Casimir energy for an arbitrary \(d\) can be expressed as
\[E^{Cas}_{(iso)}(d)=-\frac{4\,L^{d}\,a^{-d/2}}{\left(1-\frac{1}{4}u_{0}v_{0} \right)^{-\frac{d}{4}}\sqrt{1+\frac{3}{4}u_{0}v_{0}}}\left(\frac{m}{4\pi} \right)^{\frac{d+2}{2}}\sum_{n=1}^{\infty}\frac{K_{\frac{d+2}{2}}\left(\frac{2 amn}{\sqrt{1-\frac{1}{4}u_{0}v_{0}}}\right)}{n^{(d+2)/2}}. \tag{16}\]
Finally, taking \(d=2\) in the expression above, some interesting limits can be analyzed. For example, in the case where \(ma\ll\left(1-\frac{1}{4}u_{0}v_{0}\right)^{1/2}\) we get
\[E^{Cas}_{(iso)}=-\sqrt{\frac{4-u_{0}v_{0}}{4+3u_{0}v_{0}}}\,\frac{L^{2}\pi^{2}}{720\,a^{3}}+\frac{4}{\sqrt{16-8u_{0}v_{0}-3u_{0}^{2}v_{0}^{2}}}\,\frac{L^{2}m^{2}}{48\,a}+\cdots, \tag{17}\]
where the first term can be recognized as the usual Casimir energy for the massless complex scalar field and the second one as the usual mass term, both fixed by multiplicative Lorentz-violating factors. A quick inspection of the result (17) allows us to conclude that the Lorentz-violating coefficient decreases the magnitude of the Casimir energy in the first term. In contrast, the opposite occurs in the second term, where the Casimir energy increases in magnitude. In addition, since the product \(u_{0}v_{0}\) is supposed to be very small, Eq. (17) can be expanded, up to first order, as
\[E^{Cas}_{(iso)}=-\frac{\pi^{2}L^{2}}{720a^{3}}+\frac{L^{2}m^{2}}{48a}+\frac{ \pi^{2}L^{2}u_{0}v_{0}}{1440a^{3}}+\frac{L^{2}m^{2}u_{0}v_{0}}{192a}. \tag{18}\]
The above expression allows us to see that the net effect of the isotropic Lorentz-violating contribution, expressed by the product \(u_{0}v_{0}\), is to increase the Casimir energy. The behaviour of the Casimir energy for this case is graphically depicted in Fig. (2). As we can see, when the value of the product \(u_{0}v_{0}\) increases, the Casimir energy increases as well. Therefore, for very small values of \(u_{0}v_{0}\), such as the current bounds on it (\(u_{0}v_{0}\approx 10^{-14}GeV\)), the curves practically overlap.
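To get a feeling for the size of the correction, the short script below evaluates the first-order expression (18) in natural units; the parameter values (and the treatment of \(u_{0}v_{0}\) as a small dimensionless number) are chosen purely for illustration.

```python
import numpy as np

def casimir_iso_first_order(L, a, m, u0v0):
    """Isotropic-sector Casimir energy to first order in u0*v0, Eq. (18), natural units."""
    usual = -np.pi**2 * L**2 / (720 * a**3) + L**2 * m**2 / (48 * a)
    lv = (np.pi**2 * L**2 / (1440 * a**3) + L**2 * m**2 / (192 * a)) * u0v0
    return usual + lv

# relative size of the Lorentz-violating shift for a tiny u0*v0 (here 1e-14)
E0 = casimir_iso_first_order(L=1.0, a=1e-6, m=1.0, u0v0=0.0)
E1 = casimir_iso_first_order(L=1.0, a=1e-6, m=1.0, u0v0=1e-14)
print((E1 - E0) / abs(E0))  # ~5e-15, i.e. of order u0*v0/2
```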
On the other extreme, i.e. \(ma\gg\left(1-\frac{1}{4}u_{0}v_{0}\right)^{1/2}\), we have
\[E^{Cas}_{(iso)}=-\frac{1}{\sqrt{\left(1+\frac{3}{4}u_{0}v_{0}\right)\sqrt{1- \frac{1}{4}u_{0}v_{0}}}}\frac{L^{2}\,m^{2}}{8\,\pi^{2}\,a}\left(\frac{\pi}{m \,a}\right)^{1/2}\exp{\left(-\frac{2\,m\,a}{\sqrt{1-\frac{1}{4}u_{0}v_{0}}} \right)}, \tag{19}\]
so that the expected exponential decay with respect to the mass is observed, although shaped by Lorentz-violating factors as well. Note that the Lorentz-violating coefficient in the exponential factor causes the magnitude of the Casimir energy to decrease faster when compared to the usual case. Accordingly, the Casimir energy must again lead to a small force in the non-relativistic limit. Note that both results (17) and (19) carry an extra factor of 2 that takes into account the degrees of freedom of the complex scalar field. Also, the standard results are recovered when Lorentz violation is "turned off". Accordingly, up to first order in \(u_{0}v_{0}\), Eq. (19) reads
\[E_{(iso)}^{Cas}=-\frac{L^{2}m^{2}e^{-2am}\sqrt{\frac{1}{am}}}{8\pi^{3/2}a}+ \frac{L^{2}m^{2}e^{-2am}\sqrt{\frac{1}{am}}(4am+5)}{128\pi^{3/2}a}u_{0}v_{0}, \tag{20}\]
and also here, the Lorentz-violating contribution is strictly positive, thus promoting an increase in the Casimir energy. As we can see from Fig. (3), the behaviour of the Casimir energy in this regime is very similar to that presented in Fig. (2).
## IV The anisotropic parity-even sector
The anisotropic parity-even sector is in turn parameterized by two pure spacelike 4-vector \(U=(0,\mathbf{u})\) and \(V=(0,\mathbf{v})\), with \(\mathbf{u}\cdot\mathbf{v}=0\). Thus, the modified Klein-Gordon equation takes the form
\[\left[\Box+m^{2}+\left(\mathbf{u}\cdot\nabla\right)\left(\mathbf{v}\cdot \nabla\right)\right]\phi=0. \tag{21}\]
In order to apply the method of separation of variables for solving Eq. (21), two types of solutions will be considered here. The first one will correspond to the case where there is
Figure 2: Casimir energy for some values of the product \(u_{0}v_{0}\). For this plot we have considered \(L=1\), \(m=1\) and \(a=(10^{-7},10^{-6})\).
no Lorentz violation in the z-direction, and the second one to the case where only Lorentz violation in the z-direction is taken into account. Let us treat each case separately.
### No Lorentz violation in the z-direction
The general situation for this case is reached when \(\mathbf{u}=(u_{x},u_{y},0)\) and \(\mathbf{v}=(v_{x},v_{y},0)\), so that the solution to Eq. (21) can again be written like Eq. (6), but now with a new dispersion relation:
\[\omega_{n}(\mathbf{p})=\sqrt{\mathbf{p}^{2}+m^{2}-\left(\mathbf{u}\cdot \mathbf{p}\right)\left(\mathbf{v}\cdot\mathbf{p}\right)+\left(\frac{n\pi}{a} \right)^{2}}, \tag{22}\]
where \(\mathbf{p}=(p_{x},p_{y})\) as well.
This leads to a new Hamiltonian, that now reads
\[H=\frac{1}{2}\sum_{n=1}^{\infty}\int\frac{d^{2}p}{(2\pi)^{2}}\left[a_{n}^{ \dagger}(\mathbf{p})a_{n}(\mathbf{p})+b_{n}^{\dagger}(\mathbf{p})b_{n}( \mathbf{p})+L^{2}2\omega_{n}(\mathbf{p})\right], \tag{23}\]
where the annihilation and creation operators \(a_{n}(\mathbf{p})\) and \(a_{n}^{\dagger}(\mathbf{p})\), respectively, satisfy
\[\left[a_{n}(\mathbf{p}),a_{m}^{\dagger}(\mathbf{q})\right]=\left(2\pi\right)^{2}2\omega_{n}(\mathbf{p})\,\delta^{2}(\mathbf{p}-\mathbf{q})\,\delta_{nm} \tag{24}\]
with all the other commutation relations involving them being zero and with \(b_{n}(\mathbf{p})\) and \(b_{n}^{\dagger}(\mathbf{p})\) obeying a similar algebra.
Figure 3: Casimir energy for some values of the product \(u_{0}v_{0}\). For this plot we have considered \(L=1\), \(m=10^{7}\) and \(a=(10^{-7},10^{-6})\).
Consequently, the vacuum energy for the massive complex scalar field in the anisotropic parity-even sector with no Lorentz violation in the z-direction is
\[E_{(APE)}=\frac{L^{2}}{(2\pi)^{2}}\sum_{n=1}^{\infty}\int d^{2}p\,\sqrt{\mathbf{p} ^{2}+m^{2}-\left(\mathbf{u}\cdot\mathbf{p}\right)\left(\mathbf{v}\cdot\mathbf{p }\right)+\left(\frac{n\pi}{a}\right)^{2}}. \tag{25}\]
Proceeding as before, we can then write the corresponding Casimir energy for an arbitrary dimension \(d\):
\[E_{(APE)}^{Cas}(d)=-\frac{4\,L^{d}\,a^{-d/2}}{\left(1-|\mathbf{u}||\mathbf{v}| \cos\theta_{u}\,\cos\theta_{v}\right)^{d/2}}\left(\frac{m}{4\pi}\right)^{\frac {d+2}{2}}\sum_{n=1}^{\infty}\frac{K_{\frac{d+2}{2}}\left(2amn\right)}{n^{(d+2 )/2}}. \tag{26}\]
Here \(\theta_{u}\) and \(\theta_{v}\) are respectively the angles that the vectors \(\mathbf{u}\) and \(\mathbf{v}\) have with respect to the momentum \(\mathbf{p}\).
For \(d=2\), the asymptotic behavior \(ma\ll 1\) reads
\[E_{(APE)}^{Cas}=\frac{1}{1-|\mathbf{u}||\mathbf{v}|\cos\theta_{u}\,\cos \theta_{v}}\left[-\frac{L^{2}\pi^{2}}{720\,a^{3}}+\frac{L^{2}m^{2}}{48\,a}+ \cdots\right], \tag{27}\]
where the terms in brackets can be recognized as the usual Casimir energy and its correction due to the mass. Unlike the isotropic case, where the usual and mass terms are each corrected by different Lorentz-violating factors, see Eq. (17), we see in Eq. (27) that a single factor breaking the Lorentz symmetry fixes both terms in the anisotropic parity-even sector. Note that no Lorentz-violation effect will be observed if one of the vectors \(\mathbf{u}\) and \(\mathbf{v}\), or both, is orthogonal to the momentum \(\mathbf{p}\). In Fig. (4), we depict the behaviour of the Casimir energy in terms of the angles between the momentum and the vectors \(\mathbf{u}\) and \(\mathbf{v}\). As can be noted, the effect of the Lorentz symmetry violation can be either to increase or to decrease the Casimir energy. For instance, when \(\theta_{u}=\theta_{v}=0\) we have a non-vanishing Lorentz-violating contribution and the Casimir energy reaches a minimum value. On the other hand, when \(\theta_{u}=0\) and \(\theta_{v}=\pi\) we also have a non-vanishing Lorentz-violating contribution, but the Casimir energy reaches a maximum value instead.
Now, in the limit \(ma\gg 1\), although the Eq. (27) has to be replaced by
\[E_{(APE)}^{Cas}=-\frac{1}{1-|\mathbf{u}||\mathbf{v}|\cos\theta_{u}\,\cos \theta_{v}}\left[\frac{L^{2}\,m^{2}}{8\pi^{2}a}\left(\frac{\pi}{ma}\right)^{1/ 2}e^{-2ma}\right], \tag{28}\]
we note that some features are preserved. For example, the usual exponential mass decay is fixed by a Lorentz-violating factor that disappears when \(\mathbf{u}\) and \(\mathbf{v}\) are orthogonal to the momentum \(\mathbf{p}\). The general behaviour of the Casimir energy in this particular regime is similar to the one presented in Fig. (4), with the energy scale being the only modification.
### Lorentz violation only in the z-direction
In this case, let us assume \(\mathbf{u}=u_{z}\,\mathbf{\hat{z}}\) and \(\mathbf{v}=v_{z}\,\mathbf{\hat{z}}\). As before, the solution to Eq. (21) is given by Eq. (6) with a new dispersion relation, namely
\[\omega_{n}(\mathbf{p})=\sqrt{\mathbf{p}^{2}+m^{2}+\left(1-u_{z}v_{z}\right) \left(\frac{n\pi}{a}\right)^{2}}. \tag{29}\]
Accordingly, the Hamiltonian is once again given by Eq. (23) with \(\omega_{n}(\mathbf{p})\) given by Eq. (29). Therefore, the vacuum energy for the massive complex scalar field in the anisotropic parity-even sector with Lorentz violation only occurring in the z-direction is
\[E_{(APE)}=\frac{L^{2}}{(2\pi)^{2}}\sum_{n=1}^{\infty}\int d^{2}p\,\sqrt{ \mathbf{p}^{2}+m^{2}+\left(1-u_{z}v_{z}\right)\left(\frac{n\pi}{a}\right)^{2}}. \tag{30}\]
By using dimensional regularization, we can then write
\[E_{(APE)}^{Cas}(d)=-(1-u_{z}v_{z})^{d/4}\,4L^{d}\,a^{-d/2}\left(\frac{m}{4\pi} \right)^{\frac{d+2}{2}}\sum_{n=1}^{\infty}\frac{K_{\frac{d+2}{2}}\left(\frac{ 2amn}{\sqrt{1-u_{z}v_{z}}}\right)}{n^{\left(d+2\right)/2}} \tag{31}\]
and by taking \(d=2\) we can analyze the limit \(ma\ll\left(1-u_{z}v_{z}\right)^{1/2}\). The result is
\[E_{(APE)}^{Cas}=\left(1-u_{z}v_{z}\right)^{3/2}\left(-\frac{L^{2}\pi^{2}}{720a ^{3}}\right)+\left(1-u_{z}v_{z}\right)^{1/2}\left(\frac{L^{2}m^{2}}{48a} \right)+\cdots. \tag{32}\]
Similar to the isotropic case, the usual Casimir energy and its mass correction are both fixed by different factors violating the Lorentz symmetry. Also here we can expand the previous
expression in terms of \(u_{z}v_{z}\) up to first order as a way of better understanding the Lorentz violating contribution. Such expansion yields
\[E^{Cas}_{(APE)}=-\frac{\pi^{2}L^{2}}{720a^{3}}+\frac{L^{2}m^{2}}{48a}+\frac{\pi^{ 2}L^{2}}{480a^{3}}u_{z}v_{z}-\frac{L^{2}m^{2}}{96a}u_{z}v_{z}. \tag{33}\]
From the above expression we can clearly see that the Lorentz-violating contribution does not necessarily increase or decrease the Casimir energy; the outcome depends on the parameters of the system. For \(u_{z}v_{z}>0\), if \(m^{2}<\pi^{2}/(5a^{2})\) (\(m^{2}>\pi^{2}/(5a^{2})\)) the Lorentz-violating contribution will increase (decrease) the Casimir energy. In addition, for a given range of parameters, the increase or decrease in the Casimir energy depends on the sign of the product \(u_{z}v_{z}\). If the vectors \(\mathbf{u}\) and \(\mathbf{v}\) are parallel, then \(u_{z}v_{z}>0\). Otherwise, if the vectors \(\mathbf{u}\) and \(\mathbf{v}\) are anti-parallel, then \(u_{z}v_{z}<0\). The effects of parallel and anti-parallel \(\mathbf{u}\) and \(\mathbf{v}\) on the Casimir energy are depicted in Fig. (5).
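For reference, the crossover quoted above follows directly from Eq. (33): the first-order Lorentz-violating shift is

\[
\Delta E^{Cas}_{(APE)}=u_{z}v_{z}\,L^{2}\left(\frac{\pi^{2}}{480\,a^{3}}-\frac{m^{2}}{96\,a}\right),
\]

which vanishes at \(m^{2}=\pi^{2}/(5a^{2})\) and changes sign there; the overall sign of the shift is then set jointly by this bracket and by the sign of \(u_{z}v_{z}\).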
Accordingly, in the limit \(ma\gg(1-u_{z}v_{z})^{1/2}\), we have
\[E^{Cas}_{(APE)}=-\left(1-u_{z}v_{z}\right)^{3/4}\frac{L^{2}m^{2}}{8\pi^{2}a} \left(\frac{\pi}{ma}\right)^{1/2}\exp\left(-\frac{2ma}{\sqrt{1-u_{z}v_{z}}} \right), \tag{34}\]
or
\[E^{Cas}_{(APE)}=-\frac{L^{2}m^{2}e^{-2am}\sqrt{\frac{1}{am}}}{8\pi^{3/2}a}+ \frac{L^{2}m^{3}e^{-2am}(4am+3)}{32\pi^{3/2}(am)^{3/2}}u_{z}v_{z}, \tag{35}\]
up to first order in \(u_{z}v_{z}\). In this regime, the Casimir energy behaves similarly to that depicted in Fig. (5), but with a different energy scale.
Figure 5: Casimir energy for parallel and anti-parallel \(\mathbf{u}\) and \(\mathbf{v}\) vectors. For this plot we have considered \(L=1\) and \(m=1\).
## V The anisotropic parity-odd sector
In the anisotropic parity-odd sector, the tensor in Eq. (3) is parameterized by a spacelike and a timelike 4-vector so that the modified Klein-Gordon equation takes the form
\[[\Box+m^{2}+v_{0}(\mathbf{u}\cdot\nabla)\partial_{0}]\phi=0. \tag{36}\]
Similarly to the previous section, in order to apply the method of separation of variables for solving Eq. (36), we will be interested in analyzing the case where there is no Lorentz violation in the z-direction, that is \(\mathbf{u}=(u_{x},u_{y},0)\). The field solution for this situation is again given by Eq. (6) with
\[\omega_{n}(\mathbf{p})=\sqrt{\frac{v_{0}^{2}}{4}(\mathbf{u}\cdot\mathbf{p})^{2 }+\mathbf{p}^{2}+m^{2}+\left(\frac{n\pi}{a}\right)^{2}}. \tag{37}\]
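For clarity (this intermediate step is implicit above), inserting a single mode of the expansion (6) into Eq. (36) yields a quadratic equation for the frequency,

\[
\omega^{2}-v_{0}(\mathbf{u}\cdot\mathbf{p})\,\omega-\left[\mathbf{p}^{2}+m^{2}+\left(\frac{n\pi}{a}\right)^{2}\right]=0
\quad\Longrightarrow\quad
\omega=\frac{v_{0}}{2}(\mathbf{u}\cdot\mathbf{p})\pm\omega_{n}(\mathbf{p}),
\]

with \(\omega_{n}(\mathbf{p})\) as in Eq. (37); the linear shift \(\frac{v_{0}}{2}(\mathbf{u}\cdot\mathbf{p})\) is the origin of the first term in the integrand of Eq. (39) below.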
The Hamiltonian can then be read as
\[H=\sum_{n=1}^{\infty}\int\frac{d^{2}p}{(2\pi)^{2}}\frac{1}{4\omega_{n}( \mathbf{p})^{2}}\left[2\omega_{n}(\mathbf{p})^{2}+v_{0}\,\omega_{n}(\mathbf{p })\left(\mathbf{u}\cdot\mathbf{p}\right)\right]\left[a_{n}^{\dagger}(\mathbf{ p})a_{n}(\mathbf{p})+b_{n}^{\dagger}(\mathbf{p})b_{n}(\mathbf{p})+L^{2}2 \omega_{n}(\mathbf{p})\right], \tag{38}\]
where all the annihilation and creation operators satisfy the same algebra as in Eq. (24). Therefore, the vacuum energy for the massive complex scalar field in the anisotropic parity-odd sector with no Lorentz violation effects occurring in the z direction is
\[E_{(APO)}=\frac{L^{2}}{(2\pi)^{2}}\sum_{n=1}^{\infty}\int d^{2}p\left[\frac{v_{ 0}}{2}(\mathbf{u}\cdot\mathbf{p})+\sqrt{\frac{v_{0}^{2}}{4}(\mathbf{u}\cdot \mathbf{p})^{2}+\mathbf{p}^{2}+m^{2}+\left(\frac{n\pi}{a}\right)^{2}}\,\right] \tag{39}\]
By using dimensional regularization and proceeding as before, we can then write the corresponding Casimir energy for an arbitrary dimension d:
\[E_{(APO)}^{Cas}(d)=-4\,L^{d}\,a^{-d/2}\left[\frac{1}{1+\left(\frac{v_{0}| \mathbf{u}|\cos\theta_{u}}{2}\right)^{2}}\right]^{\frac{d}{2}}\left(\frac{m}{ 4\pi}\right)^{\frac{d+2}{2}}\sum_{n=1}^{\infty}\frac{K_{\frac{d+2}{2}}\left(2 amn\right)}{n^{(d+2)/2}}. \tag{40}\]
where \(\theta_{u}\) is again the angle that the vector \(\mathbf{u}\) has with the momentum \(\mathbf{p}\). For \(d=2\), this expression in the limit \(ma\ll 1\) takes the form
\[E_{(APO)}^{Cas}=\frac{1}{1+\left(\frac{v_{0}|\mathbf{u}|\cos\theta_{u}}{2} \right)^{2}}\left[-\frac{L^{2}\pi^{2}}{720\,a^{3}}+\frac{L^{2}m^{2}}{48\,a}+ \cdots\right]. \tag{41}\]
On the other hand, when \(ma\gg 1\), we get
\[E_{(APO)}^{Cas}=-\frac{L^{2}m^{2}}{8\pi^{2}a}\left(\frac{\pi}{ma}\right)^{1/2 }\frac{e^{-2ma}}{1+\left(\frac{v_{0}|\mathbf{u}|\cos\theta_{u}}{2}\right)^{2}} \tag{42}\]
As we can see, similarly to the anisotropic parity-even case with no Lorentz violation in the z-direction, the anisotropic parity-odd Casimir energy is just the usual result for the massive complex scalar field fixed by a Lorentz-violating factor that depends on the angle between the vector \(\mathbf{u}\) and the momentum \(\mathbf{p}\). Furthermore, no Lorentz-violation effect will be observed if those vectors are orthogonal to each other. The behaviour of the Casimir energy for the regime \(ma\ll 1\) is depicted in Fig. (6), and an entirely similar behaviour is observed when \(ma\gg 1\), with the energy scale being the only difference.
## VI Conclusion
In this paper we have investigated the influence of the Lorentz-violating CPT-even contributions of the scalar electrodynamics proposed by Kostelecky and Edwards on the Casimir energy. We have considered the complex scalar field satisfying Dirichlet boundary conditions between two parallel plates separated by a small distance. An appropriate tensor parametrization allowed us to study the Casimir effect in three configurations: isotropic, anisotropic parity-odd, and anisotropic parity-even. In each case we have employed dimensional regularization to regularize the Casimir energy.
For all three configurations we have shown that the Lorentz-violating contributions affect the Casimir energy by promoting an increase or decrease in it, depending on the configuration of the system itself. Such an increase or decrease is indeed very small, since the current bounds on the Lorentz-violating tensor \(\hat{K}_{c}^{\mu\nu}\) are no greater than \(10^{-14}GeV\). The
Figure 6: Casimir energy when \(ma\ll 1\). For this plot we have considered \(L=1\), \(m=1\) and \(a=10^{-7}\).
increase (decrease) in the Casimir energy consequently leads to a decrease (increase) in the Casimir force. The Casimir force coming from the Lorentz-violating contributions must be at least fourteen orders of magnitude smaller than the usual Casimir force.
For the isotropic configuration we have found a general solution which was studied in two regimes, namely \(ma\ll\left(1-\frac{1}{4}u_{0}v_{0}\right)^{1/2}\) and \(ma\gg\left(1-\frac{1}{4}u_{0}v_{0}\right)^{1/2}\). Both regimes exhibit a similar behaviour, i.e., the Lorentz symmetry violation acts to increase the Casimir energy, with the energy scale being the only difference between the two regimes. Analogously, for the anisotropic parity-odd configuration we have a similar behaviour of the Casimir energy in the regimes where \(ma\ll 1\) and \(ma\gg 1\). In both cases the Lorentz-violating terms contribute by increasing the Casimir energy, but the amount by which the Casimir energy is increased depends on the projection of the momentum onto the Lorentz-violating vectors.
Finally, in the parity-even case, the Lorentz symmetry violation can either increase or decrease the Casimir energy. In this case as well, the amount by which the Casimir energy is increased or decreased depends on the projection of the momentum on the Lorentz-violating vectors. Such a direction dependence of the Casimir energy can be used as a signature to detect small violations of Lorentz symmetry. It is important to highlight that our results are a direct and important generalization of previous results found in the literature regarding the influence of Lorentz symmetry violation on the Casimir effect.
###### Acknowledgements.
The authors thank the Fundacao Cearense de Apoio ao Desenvolvimento Cientifico e Tecnologico (FUNCAP), Grant no. PNE0112-00085.01.00/16 (JF), and the Conselho Nacional de Desenvolvimento Cientifico e Tecnologico (CNPq), Grant no. 200879/2022-7 (RVM) for financial support. R. V. Maluf acknowledges the Departament de Fisica Teorica de la Universitat de Valencia for the kind hospitality.
|
2306.04523 | Can current NLI systems handle German word order? Investigating language
model performance on a new German challenge set of minimal pairs | Compared to English, German word order is freer and therefore poses
additional challenges for natural language inference (NLI). We create WOGLI
(Word Order in German Language Inference), the first adversarial NLI dataset
for German word order that has the following properties: (i) each premise has
an entailed and a non-entailed hypothesis; (ii) premise and hypotheses differ
only in word order and necessary morphological changes to mark case and number.
In particular, each premise and its two hypotheses contain exactly the same
lemmata. Our adversarial examples require the model to use morphological
markers in order to recognise or reject entailment. We show that current German
autoencoding models fine-tuned on translated NLI data can struggle on this
challenge set, reflecting the fact that translated NLI datasets will not mirror
all necessary language phenomena in the target language. We also examine
performance after data augmentation as well as on related word order phenomena
derived from WOGLI. Our datasets are publicly available at
https://github.com/ireinig/wogli. | Ines Reinig, Katja Markert | 2023-06-07T15:33:07Z | http://arxiv.org/abs/2306.04523v1 | Can current NLI systems handle German word order? Investigating language model performance on a new German challenge set of minimal pairs
###### Abstract
Compared to English, German word order is freer and therefore poses additional challenges for natural language inference (NLI). We create WOGLI (Word Order in German Language Inference), the first adversarial NLI dataset for German word order that has the following properties: (i) each premise has an entailed and a non-entailed hypothesis; (ii) premise and hypotheses differ only in word order and necessary morphological changes to mark case and number. In particular, each premise and its two hypotheses contain exactly the same lemmata. Our adversarial examples require the model to use morphological markers in order to recognise or reject entailment. We show that current German autoencoding models fine-tuned on translated NLI data can struggle on this challenge set, reflecting the fact that translated NLI datasets will not mirror all necessary language phenomena in the target language. We also examine performance after data augmentation as well as on related word order phenomena derived from WOGLI. Our datasets are publically available at [https://github.com/ireinig/wogli](https://github.com/ireinig/wogli).
## 1 Introduction
German is endowed with a rather free word order [1], especially when it comes to ordering nominal arguments in a sentence. Currently, large German NLI datasets are only available as translations from other languages. For example, the training portion (392k pairs) of the German XNLI dataset [1] is a machine translation of the English MultiNLI training set [20]. The testing portion of German XNLI is a manual translation of 5k English premise-hypothesis pairs that were newly created by the authors of XNLI. Such translated sets do not necessarily mirror all German-specific linguistic phenomena, such as the freer German word order.
We construct a new German challenge set named WOGLI (Word Order in German Language Inference). This dataset is handcrafted and does not stem from translation. It contains 16k premises where each premise is accompanied by one entailed (E) and one non-entailed (NE) hypothesis that both contain the same lemmata as the premise but change argument order. Morphological markers are indicative of subject and (direct) object, thus informing about the hypothesis' entailment relationship to the premise. In other words, WOGLI serves as a test bed for current language models' capabilities to distinguish subject from object in the context of German word order.
Our contributions are as follows:
1. We propose the first NLI dataset that specifically targets German word order phenomena.
2. We show that current German autoencoding models fine-tuned on the translated XNLI dataset can struggle on our proposed challenge set (Sections 4 and 5), tending to always predict entailment for both hypotheses.
3. We show that data augmentation can help performance on WOGLI but needs a considerable number of examples to work (Section 6).
4. We derive generalization sets including similar word order phenomena to WOGLI to investigate how the augmented models transfer to these datasets and show that German word order remains challenging in NLI (Section 7).
All our datasets are publicly available1.
Footnote 1: [https://github.com/ireinig/wogli](https://github.com/ireinig/wogli)
## 2 German Word Order
The topological model.The topological model [1] describes regularities in German
word order, dependent on the concepts of _prefield_ and _middlefield_ for constituent positioning. In this model, so-called _left and right brackets_ form "[t]he skeleton of the sentence" (Bader and Haussler, 2010, p. 719), while other fields are defined according to the position of the verb (Durscheid, 2012).
Declarative main clauses, such as _Peter sieht den Mann_ at the top of Table 1, have a verb-second order. The left bracket contains the finite verb and the prefield is filled with one constituent (Bader and Haussler, 2010; Durscheid, 2012). In contrast, embedded clauses, such as _dass Peter den Mann sieht_ in the bottom half of Table 1, have a verb-last order. In verb-last clauses, the left bracket is occupied by a subjunction, the right bracket by a finite verb or a verb complex, and other constituents are placed in the middlefield (Durscheid, 2012).
While subject followed by object (SO) is viewed as the canonical word order, it is possible to place the object before the subject (OS) in both embedded and main clauses (Table 1). In the main clause either the subject or object is placed in the prefield, in embedded clauses both are placed in the middlefield but in varying order.
OS acceptability and minimal pairs.The marked OS order is more frequent in main clauses involving the prefield (Bader and Haussler, 2010) (around 14% of main clauses with accusative direct object) and in the active voice (Bader et al., 2017) (see data and examples in Table 1). Therefore, we construct our challenge set using only such clauses to raise acceptability of the marked OS word order examples. Even in the prefield, OS order can vary in acceptability dependent on relative constituent weight (Siewierska, 1993) (shorter before longer), discourse properties such as givenness (Bader and Portele, 2019) (given before new) and semantic properties such as agency (Siewierska, 1993; Bader and Haussler, 2010) (animate before inanimate). As we focus on simple grammatical examples without further interference, however, all our constituents are short and all premises and hypotheses are single sentences. To ensure that entailed and non-entailed sentences are semantically plausible, all our constituents refer to persons.
German word order in XNLI.We extract hypotheses in the training portion of the translated German XNLI (henceforth, GXNLI-train) that are declarative main clauses with a length between 4 and 9 tokens. The 38,090 extracted clauses are in active voice and contain one subject NP and one direct object NP in accusative case. We exclude clauses that start with prepositions or adverbs to limit ourselves to prefield cases. Only 1.8% (698 clauses) of the extracted clauses are in OS order, compared to the 14% to be expected in a German corpus according to Bader and Haussler (2010). Additionally, a vast majority of the 698 OS clauses start with the same demonstrative pronoun object _das/this_, e.g. _Das werde ich tun/This I will do_, thus offering little variety. The extreme prevalence of the SO order in GXNLI-train hypotheses may be
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline Clause & Order & Prefield & L brack. & Middlefield & R brack. & Count (\% of accus.) \\ \hline Main & SO & **Peter** & sieht & den Mann & & 231 (86\%) \\ & & **Peter** & sees & the man\({}_{ACC}\) & & \\ & & _Peter_ & _sees_ & _the man_ & & \\ \cline{2-6} & OS & Den Mann & sieht & **Peter** & & 38 (14\%) \\ & & The man\({}_{ACC}\) & sees & **Peter** & & \\ & & _Peter_ & _sees_ & _the man_ & & \\ \hline Emb. & SO & & & & **Peter** den Mann & sieht & 546 (99\%) \\ & & & & that & **Peter** the man\({}_{ACC}\) & sees & \\ & & & _that_ & _Peter sees the man_ & & \\ \cline{2-6} & OS & & & & **dass** den Mann **Peter** & sieht & 6 (1\%) \\ & & & & that & the man\({}_{ACC}\) **Peter** & sees & \\ & & & & _that_ & _Peter sees the man_ & & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Examples for word order in declarative, active German main and embedded clauses with subject and (accusative) direct object arguments, with corpus statistics from Bader and Haussler (2010). As in the remainder of this paper, the subject is always bold. Transliterations and translations (in italics) are provided below each example.
due to its translated nature.
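A rough way to reproduce this corpus analysis is to script the extraction with spaCy's German pipeline. The sketch below is not the authors' code: it uses the publicly available de_core_news_sm model and its morphological Case annotations as an approximation of the filtering criteria described above (clause length, no clause-initial preposition or adverb, exactly one nominative and one accusative nominal argument):

```python
import spacy

# Assumes the small German pipeline is installed:
#   python -m spacy download de_core_news_sm
nlp = spacy.load("de_core_news_sm")

def word_order(hypothesis: str):
    """Return 'SO', 'OS' or None for a short declarative German clause,
    approximating the filtering described above (4-9 tokens, no clause-initial
    preposition/adverb, one nominative and one accusative noun phrase)."""
    doc = nlp(hypothesis)
    tokens = [t for t in doc if not t.is_punct]
    if not 4 <= len(tokens) <= 9 or tokens[0].pos_ in {"ADP", "ADV"}:
        return None
    nom = [t.i for t in doc if t.pos_ in {"NOUN", "PROPN"} and "Nom" in t.morph.get("Case")]
    acc = [t.i for t in doc if t.pos_ in {"NOUN", "PROPN"} and "Acc" in t.morph.get("Case")]
    if len(nom) != 1 or len(acc) != 1:
        return None
    return "SO" if nom[0] < acc[0] else "OS"

print(word_order("Den Mann sieht Peter"))  # expected: OS (if the tagger assigns case correctly)
```

Counting the SO/OS labels over all extracted hypotheses then gives the 98.2%/1.8% split reported above, up to tagging errors of the statistical model.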
## 3 WOGLI construction
Verb Collection.We collected 50 frequent German transitive verb types including agentive (such as _warnen/warn_), object-experiencer (such as _erschrecken/startle_) and subject-experiencer (such as _lieben/love_) verbs. All verbs can take animate (human) subjects as well as animate (human) direct objects, and all objects take the accusative case. None of the verbs is symmetric, meaning that they do not lead to bidirectional entailments.2 In addition, none of the verbs has a separable prefix that would split off in main clauses, so that the resulting premises have a very simple SVO structure. All verbs occur at least 70 times in GXNLI-train. Consequently, any difficulties that a language model will experience are unlikely to be due to verb rarity.
Footnote 2: For example, for the symmetric verb _heiraten/marry_, X marries Y would entail Y marries X, which would not allow us to automatically derive non-entailed hypotheses.
Noun Collection.We collected 144 noun types describing humans that function as direct object or subject in our premises/hypotheses. These include 38 masculine common nouns such as _Gast/guest_, each of which was seen at least 10 times in GXNLI-train and 24 feminine common nouns such as _Lehrerin/(female) teacher_. We collected feminine common nouns by searching for the suffix _in_ in GXNLI-train, which often indicates female persons in German. The unbalanced masculine-feminine split is due to the automatic translation of GXNLI-train as gender-neutral English job descriptions, for example _doctor_, are most frequently translated via the German male form, e.g. _Arzt_ instead of the female form _Arztin3_. We also collected 41 female and 41 male first names that occur at least 10 times in GXNLI-train. The 144 noun types yield 181 different noun surface forms (nominative/accusative, plural/singular).
Footnote 3: We could have made up the shortfall by including more feminine forms, even if they do not occur in GXNLI-train, but we consider it more important for this study to keep lexical differences to the fine-tuning set minimal.
Premise and Hypothesis Generation.We automatically generated German premises as declarative, present tense, main clauses in the active voice with SVO structure (see lines 1 and 5) in Table 2). Each SVO premise is accompanied by two hypotheses. H1-SO (NE) exchanges object and subject including changing S/O case markers and potentially verb number markers. Therefore, similarly to English, this change leads to non-entailment, as the premise _The doctor warns the client_ and the corresponding H1 _The client warns the doctor_ illustrate. We call this subset WOGLI-SO, as the new subject precedes the object. H2-OS (E) simply swaps argument order but keeps case and number markers intact, leading to a sentence synonymous to the premise but with marked OS word order. The resulting set of entailed hypotheses is called WOGLI-OS. Table 2 shows two full examples with case and number marking.
We have 17 patterns due to combinations of different argument NPs, including masculine and feminine proper names and common nouns as well as singular and plural arguments. Subjects/objects are either a simple proper name (such as _Maria_) or consist of an article4 and a common noun, e.g. _der Arzt/the doctor_. Consequently, each sentence always has a length of four or five words. A list of all 17 patterns is provided in Table 6 in the Appendix; we exclude the patterns in Table 7 in the Appendix as they generate ambiguous hypotheses, due to the absence of disambiguating morphological markers. The 17 patterns in WOGLI can be divided into two groups: 5 **all-singular** patterns that combine two singular nominal arguments (see first example in Table 2) and 12 **singular-plural** patterns in which one argument is singular and the other one is plural (see second example in Table 2). In all 9 patterns involving a masculine singular NP, (i) masculine determiners and (ii) masculine common nouns belonging to the weak declension type5 carry morphological markers of case. Proper nouns never change surface forms. Additionally, in all **singular-plural** patterns, verb number agreement with the subject always leads to a change in the verb's surface form between E and NE hypotheses.
Footnote 4: We used the articles _ein_ (indef.), _der_ (def.) and _dieser_ (demonstrative), as well as their feminine and plural forms.
Footnote 5: Six masculine common nouns in WOGLI belong to the weak declension type and therefore change their surface form in the accusative, e.g. _Kunde/Kunden/client_ and _Student/Studenten/student_.
Randomly combining the 17 patterns with our noun and verb lists, we generate 17,000 possible premises. Since some premises are generated more than once in this random generation, we deduplicate and are left with 16,971 premises. H1-SO (NE) and H2-OS (E) are then deterministically generated from the premises, leading to 33,942 sentence pairs.
All word lists with GXNLI-train frequencies and translations can be found in our Github repository. Each of the 50 verb types appears between 308 and 383 times (mean: 339.4 times) in the 16,971 premises. They also appear 20 times on average per pattern in the premises. Table 8 in the Appendix gives noun statistics for WOGLI.
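The deterministic derivation of H1-SO (NE) and H2-OS (E) from an SVO premise can be illustrated with a miniature template. The word lists and the single all-singular pattern below are hypothetical stand-ins for the released WOGLI resources; the point is only to mirror how swapping the roles changes the case markers (H1) while swapping only the linear order keeps them (H2):

```python
import itertools
import random

# Hypothetical miniature lexicon; the full WOGLI word lists are in the paper's repository.
NOM_TO_ACC = {"Arzt": "Arzt", "Kunde": "Kunden"}   # weak declension changes the surface form
DET = {"nom": "der", "acc": "den"}
VERBS = ["warnt", "liebt"]                          # 3rd person singular present forms

def cap(sentence: str) -> str:
    return sentence[0].upper() + sentence[1:]

def make_pair(subj, obj, verb):
    premise  = cap(f"{DET['nom']} {subj} {verb} {DET['acc']} {NOM_TO_ACC[obj]}")
    # H1-SO (NE): swap the thematic roles but keep SO order -> case markers must change.
    h1_so_ne = cap(f"{DET['nom']} {obj} {verb} {DET['acc']} {NOM_TO_ACC[subj]}")
    # H2-OS (E): swap only the linear order and keep the original case marking.
    h2_os_e  = cap(f"{DET['acc']} {NOM_TO_ACC[obj]} {verb} {DET['nom']} {subj}")
    return premise, h1_so_ne, h2_os_e

for subj, obj in itertools.permutations(NOM_TO_ACC, 2):
    print(make_pair(subj, obj, random.choice(VERBS)))
# e.g. ('Der Arzt warnt den Kunden', 'Der Kunde warnt den Arzt', 'Den Kunden warnt der Arzt')
```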
## 4 Experiments on WOGLI
Models.We use two German models and one multilingual BERT model:
* BERT-base6 is a cased base BERT model pre-trained by the MDZ Digital Library team on 16GB of German-language text. Footnote 6: [https://huggingface.co/dbmdz/bert-base-german-cased](https://huggingface.co/dbmdz/bert-base-german-cased)
* GBERT-large7 is a BERT model pre-trained on 163.4GB of data (Chan et al., 2020), using the same cased vocabulary as BERT-base.8 Footnote 7: [https://huggingface.co/deepset/gbert-large](https://huggingface.co/deepset/gbert-large)
* mBERT-base9 is a cased BERT model pre-trained on 104 languages (Devlin et al., 2019).
Since models were fine-tuned on GXNLI-train in a three-class setting, we merge contradiction and neutral into non-entailed predictions for evaluations on WOGLI. Fine-tuning details are provided in Section C of the Appendix.
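In practice, this merge can be applied directly to the three-way predictions of the fine-tuned classifier. The sketch below uses the Hugging Face transformers API; the checkpoint name is a placeholder, and the index of the entailment label should be read from the checkpoint's id2label mapping rather than assumed:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical checkpoint: a GBERT model fine-tuned on GXNLI-train with 3 labels.
CKPT = "my-gbert-large-gxnli"   # placeholder, not a real model id
tok = AutoTokenizer.from_pretrained(CKPT)
model = AutoModelForSequenceClassification.from_pretrained(CKPT).eval()

def predict_wogli(premise: str, hypothesis: str, entail_idx: int = 0) -> str:
    """Collapse the 3-way NLI decision into entailed / non-entailed."""
    enc = tok(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**enc).logits
    pred = logits.argmax(dim=-1).item()
    # neutral and contradiction are both mapped to the non-entailed class
    return "entailed" if pred == entail_idx else "non-entailed"

print(predict_wogli("Der Arzt warnt den Kunden", "Der Kunde warnt den Arzt"))
```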
Results (see Table 3).As a sanity check, we first test our models on GXNLI-test. Our models' performances on GXNLI-test are broadly in line with published work. Conneau et al. (2020)
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Premise & **Der/Dieser/Ein\({}_{NOM-SG-M}\)** & **Arzt** & warnt\({}_{SG}\) & den/diesen/einen\({}_{ACC-SG-M}\) & Kunden\({}^{\dagger}\) \\ & **The/This/A\({}_{NOM-SG-M}\)** & **doctor** & warns\({}_{SG}\) & the/this/a\({}_{ACC-SG-M}\) & client \\ & _The/This/A_ & _doctor_ & _warns_ & _the/this/a_ & _client_ \\ \hline H1-SO (NE) & **Der/Dieser/Ein\({}_{NOM-SG-M}\)** & **Kunde\({}^{\dagger}\)** & warnt\({}_{SG}\) & den/diesen/einen\({}_{ACC-SG-M}\) & Arzt \\ & **The/This/A\({}_{NOM-SG-M}\)** & **client** & warns\({}_{SG}\) & the/this/a\({}_{ACC-SG-M}\) & doctor \\ & _The/This/A_ & _client_ & _warns_ & _the/this/a_ & _doctor_ \\ \hline H2-OS (E)\({}^{*}\) & Den/Diesen/Einen\({}_{ACC-SG-M}\) & Kunden\({}^{\dagger}\) & warnt\({}_{SG}\) & **der/dieser/ein\({}_{NOM-SG-M}\)** & **Arzt** \\ & The/This/A\({}_{ACC-SG-M}\) & client & warns\({}_{SG}\) & **the/this/a\({}_{NOM-SG-M}\)** & **doctor** \\ & _The/This/A_ & _doctor_ & _warns_ & _the/this/a_ & _client_ \\ \hline H3-OS (NE)\({}^{*}\) & Den/Diesen/Einen\({}_{ACC-SG-M}\) & Arzt & warnt\({}_{SG}\) & **der/dieser/ein\({}_{NOM-SG-M}\)** & **Kunde\({}^{\dagger}\)** \\ & The/This/A\({}_{ACC-SG-M}\) & doctor & warns\({}_{SG}\) & **the/this/a\({}_{NOM-SG-M}\)** & **client** \\ & _The/This/A_ & _client_ & _warns_ & _the/this/a_ & _doctor_ \\ \hline \hline Premise & **Der/Dieser/Ein\({}_{NOM-SG-M}\)** & **Minister** & empfiehlt\({}_{SG}\) & die/diese\({}_{ACC-PL-F}\) & Autorinnen \\ & **The/This/A\({}_{NOM-SG-M}\)** & **minister** & recommends\({}_{SG}\) & the/these\({}_{ACC-PL-F}\) & authors \\ & _The/This/A_ & _minister_ & _recommends_ & _the/these_ & _authors_ \\ \hline H1-SO (NE) & **Die/Diese\({}_{NOM-PL-F}\)** & **Autorinnen** & empfehlen\({}_{PL}\) & den/diesen/einen\({}_{ACC-SG-M}\) & Minister \\ & **The/These\({}_{NOM-PL-F}\)** & **authors** & recommend\({}_{PL}\) & the/this/a\({}_{ACC-SG-M}\) & minister \\ & _The/These_ & _authors_ & _recommend_ & _the/this/a_ & _minister_ \\ \hline H2-OS (E)\({}^{*}\) & Die/Diese\({}_{ACC-PL-F}\) & Autorinnen & empfiehlt\({}_{SG}\) & **der/dieser/ein\({}_{NOM-SG-M}\)** & **Minister** \\ & The/These\({}_{ACC-PL-F}\) & authors & recommends\({}_{SG}\) & **the/this/a\({}_{NOM-SG-M}\)** & **minister** \\ & _The/This/A_ & _minister_ & _recommends_ & _the/these_ & _authors_ \\ \hline H3-OS (NE)\({}^{*}\) & Den/Diesen/Einen\({}_{ACC-SG-M}\) & Minister & empfehlen\({}_{PL}\) & **die/diese\({}_{NOM-PL-F}\)** & **Autorinnen** \\ & The/This/A\({}_{ACC-SG-M}\) & minister & recommend\({}_{PL}\) & **the/these\({}_{NOM-PL-F}\)** & **authors** \\ & _The/These_ & _authors_ & _recommend_ & _the/this/a_ & _minister_ \\ \hline \hline \end{tabular}
\end{table}
Table 2: Two examples of WOGLI premise-hypothesis pairs, one for the pattern sing_masc_v_sing_masc and one for the pattern sing_masc_v_pl_fem. Underlined words have different surface forms in NE and E hypotheses and carry distinguishing morphological markers of case and/or number. Nouns belonging to the weak declension type are identified by \({}^{\dagger}\). Hypotheses H3 are not part of WOGLI proper but will be used in a generalization set called WOGLI-OS-hard as they demand to both process marked OS word order as well as recognising non-entailment in the face of high word overlap. As in the remainder of this paper, hypotheses with a marked word order are identified by an asterisk.
achieve an accuracy of 81.2% on GXNLI-test with a monolingual BERT-base model, higher than our 76.67%. However, their model uses a larger vocabulary (40k, ours: 31k) and was pre-trained on a larger corpus (up to 60GB, ours: 16GB). This particular model is unfortunately not available. Other prior work concentrates on multilingual models: GBERT-large's size is smaller than mT5-base (580m parameters) by Xue et al. (2021), but its performance of 84.65% on GXNLI-test exceeds the one reported for mT5-base (81.6%). Devlin et al. (2019) achieve an accuracy of 75.9% with mBERT-base (_Translate Train Cased_)10, in line with ours.
Footnote 10: Results on XNLI are provided in the corresponding GitHub repository: [https://github.com/google-research/bert/blob/master/multilingual.md#results](https://github.com/google-research/bert/blob/master/multilingual.md#results).
On WOGLI, both base models completely fail, labeling almost all instances as entailments. GBERT-large performs a bit better, suggesting that the language model's scale plays a role in its ability on WOGLI. However, it still shows a strong tendency for the entailment class and the results are not robust across runs. Our vocabulary is frequent and present in GXNLI-train and our sentences have a very simple grammar. Therefore, the models' poor performances on WOGLI suggest that not all German-specific linguistic phenomena are represented in the translated GXNLI-train, similar to our GXNLI word order analysis in Section 2.11
Footnote 11: One question that arises is whether even larger models or models pretrained on substantially more data will solve the problem. Other monolingual models for German are sparse. We therefore ran two large, publically available, multilingual model checkpoints fine-tuned on XNLI on WOGLI. They also do not perform well (see Section D in the Appendix).
## 5 Error analysis
All analyses in this section are carried out on ensemble predictions (majority vote of the 5 runs) of the strongest model in Table 3, GBERT-large. The ensemble model reaches an accuracy of 57.82% on WOGLI and 27.41% on WOGLI-SO.
### Fluency
We measure the correlation of model performance and linguistic acceptability, approximating the latter via pseudo-loglikelihood (Salazar et al., 2020). WOGLI premises have an average PLL of \(-30.54\) (SD: \(8.318\)). H1-SO (NE) hypotheses have an average PLL of \(-30.56\) (SD: \(8.287\)), while H2-OS (E) hypotheses are less fluent due to marked word order, with an average PLL of \(-36.53\) (SD: \(8.535\)). GBERT-large performs worse on SO (NE) pairs than on the less fluent OS (E) pairs; fluency thus does not play an important role in the model's performance on WOGLI. Instead, the lexical overlap heuristic (Naik et al., 2018; McCoy et al., 2019; Gururangan et al., 2018) is a possible reason for the degradation on non-entailed pairs.
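Pseudo-log-likelihood in the sense of Salazar et al. (2020) masks one position at a time and sums the log-probability assigned to the original token. A minimal sketch (the German checkpoint is used only as an example of a masked language model) is:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MLM = "dbmdz/bert-base-german-cased"   # example checkpoint; any German MLM works here
tok = AutoTokenizer.from_pretrained(MLM)
model = AutoModelForMaskedLM.from_pretrained(MLM).eval()

def pseudo_loglikelihood(sentence: str) -> float:
    ids = tok(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    # mask each real token (skip [CLS]/[SEP]) and score the original token at that position
    for pos in range(1, len(ids) - 1):
        masked = ids.clone()
        masked[pos] = tok.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, pos]
        total += torch.log_softmax(logits, dim=-1)[ids[pos]].item()
    return total

print(pseudo_loglikelihood("Den Kunden warnt der Arzt"))   # marked OS order -> lower PLL
```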
### Performance by subject and object properties
We now focus on WOGLI-SO (NE) only as this is the part of the dataset where the models fail.
Gender.Regarding the gender of arguments in WOGLI, we formulate the following hypothesis:
* **A1**: SO hypotheses with masculine subjects (objects) are easier to classify than the ones with feminine subjects (objects).
**A1** can be explained by _(a)_ the presence of gender bias due to translation in GXNLI-train (see Section 3) _or (b)_ morphological differences between masculine and feminine NPs.
Performance on instances in WOGLI-SO (NE) with masculine common noun subjects is indeed significantly higher than for feminine common noun subjects. The same holds for common noun objects (see also Table 10 in the Appendix). However, this does not transfer to proper names. Gender bias in GXNLI-train _(a)_ as an explanation for **A1** is therefore unlikely.
Morphological differences between feminine and masculine NPs _(b)_, however, are a possible explanation for **A1**. Feminine articles and common nouns have the same surface forms in accusative/nominative. Masculine articles and common nouns, however, can bear morphological case markers. The masculine singular articles _der_, _ein_ and _dieser_ are the only articles in WOGLI to change surface forms in the accusative to _den_, _einen_ and _diesen_. Additionally, singular masculine common nouns belonging to the weak declension type also carry case markers. Morphological markers in some masculine NPs could thus be helpful for the model to distinguish subject from object.
Referential properties of subjects/objects.In prefield SO sentences, definite NPs tend to precede indefinite NPs (Weber and Muller, 2004), probably because indefinite constituents are often new and definite constituents are often given
(Chafe, 1976). Although XNLI and WOGLI do not contain discourse context, preference for SO sentences with definite before indefinite NPs might be encapsulated in pretraining data. We thus hypothesize that:
* **A2**: SO hypotheses in which a definite NP precedes an indefinite NP are easier to classify.
We separate WOGLI constituents into definite and indefinite following Prince (1992): definite and demonstrative articles, as well as proper names are markers of definiteness, while indefinite articles point to indefiniteness. We then separate WOGLI-SO (16,971 pairs) into two groups: **preferred** (14,671 pairs) and **dispreferred** (2,300 pairs). Pairs in the **dispreferred** group are opposed to the aforementioned discourse hierarchy in that indefinite constituents precede definite constituents in the SO hypothesis. Pairs in the **preferred** group form the three other possible cases: definite precedes indefinite, definite precedes definite and indefinite precedes indefinite; these cases are not in opposition to the hierarchy.
GBERT-large achieves an accuracy of 29.85% on the SO pairs in the **preferred** group but only 11.78% on the **dispreferred** group (difference significant at 1% significance level, z-test for proportions). Therefore we can confirm **A2**.
**Number.** Lastly, we analyse the role of verb number agreement in classifying WOGLI-SO (NE) instances. As explained in Section 3, WOGLI patterns either combine only singular arguments (**all-singular**) or a singular and a plural argument (**singular-plural**). Only in the latter group of patterns, subject-verb agreement leads to a change in the verb's surface form from the premise to the H1-SO (NE) hypothesis (see _empflehlen/recommend\({}_{PL}\)_ vs. _empflehl/recommends\({}_{SG}\)_ in the second example in Table 2). We investigate the importance of verb number agreement for classifier performance by separating WOGLI-SO (16,971 pairs) into two groups, **all-singular** (4,997 pairs) and **singular-plural** (11,974 pairs).
GBERT-large achieves an accuracy of 36.66% on the SO pairs in the **all-singular** group and 23.54% on the **singular-plural** group (difference significant at 1% significance level, z-test for proportions). Thus the number switch in the verb occurring in **singular-plural** SO hypotheses is not a particularly helpful cue for the classifier.
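The significance tests reported in this section are two-sample z-tests for proportions; with statsmodels they can be reproduced from the published group sizes and accuracies (the counts are rounded here, so the exact statistic may differ slightly from the one underlying the paper):

```python
from statsmodels.stats.proportion import proportions_ztest

# all-singular: 4,997 pairs at ~36.66% accuracy; singular-plural: 11,974 pairs at ~23.54%
correct = [round(0.3666 * 4997), round(0.2354 * 11974)]
totals = [4997, 11974]
stat, pval = proportions_ztest(correct, totals)
print(f"z = {stat:.2f}, p = {pval:.2e}")   # well below the 1% significance level
```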
## 6 Data augmentation
Following McCoy et al. (2019) and Min et al. (2020) on data augmentation with challenge sets, we hypothesize that augmenting GXNLI-train with a WOGLI subset can be helpful.
We sample 1,037 premises and their corresponding E/NE hypotheses from WOGLI, resulting in 2,074 training instances. Each of the 17 patterns occurs 61 times. All 50 verb lemmas are represented, each appearing between 18 and 25 times. All 181 noun forms appear at least once.12
Footnote 12: The nouns occur in varying frequencies due to the small size of the augmentation set.
We concatenate these WOGLI instances with GXNLI-train, name the resulting augmented training set GXNLI+1037 and shuffle it before fine-tuning GBERT-large 10 times on this augmented training set. We evaluate on the remaining 31,868 WOGLI instances, named WOGLI-test-1037. This augmented training set allows GBERT-large to classify WOGLI almost perfectly, while maintaining its performance on GXNLI-test (Table 4).
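Assuming the WOGLI pairs are stored with a column identifying their pattern, the stratified sampling and the concatenation with GXNLI-train take only a few lines of pandas; all file and column names below are placeholders rather than the released data layout:

```python
import pandas as pd

# Placeholder files/columns: premise, hypothesis, label, pattern (17 distinct values).
wogli = pd.read_csv("wogli_pairs.csv")
gxnli = pd.read_csv("gxnli_train.csv")   # assumed to share the premise/hypothesis/label columns

# 61 premises per pattern -> 1,037 premises -> 2,074 augmentation pairs (one E, one NE each).
sampled = (wogli.drop_duplicates("premise")
                .groupby("pattern", group_keys=False)
                .sample(n=61, random_state=0))
augment = wogli[wogli["premise"].isin(sampled["premise"])]

train = pd.concat([gxnli, augment[["premise", "hypothesis", "label"]]], ignore_index=True)
train = train.sample(frac=1.0, random_state=0)                       # shuffled GXNLI+1037
wogli_test_1037 = wogli[~wogli["premise"].isin(sampled["premise"])]  # held-out 31,868 pairs
```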
Smaller augmentation size.We fine-tune GBERT-large on a shuffled concatenation of GXNLI-train and only 102 WOGLI premises sampled in a stratified manner from the aforementioned 1,037 premises along with both their corresponding NE and E hypotheses. Each one of the 17 patterns appears 6 times and each one
\begin{table}
\begin{tabular}{l l l l} \hline Evaluation set & BERT-base (110m) & GBERT-large (335m) & mBERT-base (172m) \\ \hline GXNLI-test & 76.65 (0.41) & 84.65 (0.163) & 75.16 (0.552) \\ \hline WOGLI & 50.16 (0.133) & 57.68 (1.86) & 50.01 (0.015) \\ WOGLI-SO (NE) & 0.33 (0.269) & 27.42 (7.828) & 0.02 (0.029) \\ WOGLI-OS (E)* & 100 (0.005) & 87.94 (4.171) & 100 (0.0) \\ \hline \end{tabular}
\end{table}
Table 3: Accuracies for two German and one multilingual model on GXNLI-test and WOGLI, averaged over 5 runs. All are trained on GXNLI-train. Accuracies are computed for 3 classes in GXNLI-test and 2 classes in WOGLI.
of the 50 verb lemmas appears at least once and at most 4 times. Due to the small augmentation size, it is not possible to ensure representation of all 181 nouns, with 73 not appearing. We evaluate on the remaining 33,738 WOGLI pairs, named WOGLI-test-102. The smaller augmentation size yields a model that performs worse and less robustly on WOGLI test instances (Table 4).
## 7 Generalization experiments
McCoy et al. (2019) investigate whether augmented models improved by simply memorizing the seen templates. To do so, they evaluate them on pairs from unseen patterns. Inspired by this setup, we study the models' generalization capabilities by evaluating them on four new evaluation sets that share structural and lexical similarities with the WOGLI pairs that were seen during fine-tuning.
### Construction of generalization sets
Pronoun subjects: WOGLI-p-subject.We replace the premise subject in WOGLI by a personal pronoun (_He warns the client_). Correspondingly, the H1-SO (NE) hypothesis then has the pronoun as the object (_The client warns him_) whereas the entailed H2-OS (E) hypothesis just swaps word order with regards to the premise (_The client warns he_) (see also Table 11 in the Appendix). To focus on the pronominalization change, the same 17 patterns, verb lemmas, proper nouns and common nouns are also used in WOGLI-p-subject. In addition to the previously mentioned morphological markers of case and/or verb number occurring in WOGLI sentences (Section 3), the masculine singular pronoun _er/he_ (nominative) in WOGLI-p-subject changes surface form in the accusative case (_ihn/him_). Feminine and plural pronouns (_sie/she/her/they/them_) in WOGLI-p-subject, however, do not change surface form. Some WOGLI premises can become duplicates after replacing the subject by a personal pronoun. Consider the two premises _Die Arzte warnen den Kunden/The doctors\({}_{masc}\) warn the client_ and _Die Arztinen warnen den Kunden/The doctors\({}_{fem}\) warn the client_. After replacing the subject, both premises lead to the new premise _Sie warnen den Gast/They warn the guest_, since plural masculine and plural feminine nominative personal pronouns have the same surface form in German. We keep only one version for such duplicates. The new generalization set contains 13,802 unique premises, or a total of 27,604 pairs.
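Because German plural masculine and plural feminine nominative pronouns share the surface form _sie_, pronominalizing two different premises can yield the same string; the toy sketch below (with an illustrative pronoun table, not the paper's code) shows how such duplicates collapse during deduplication:

```python
# Nominative pronoun that replaces the premise subject, keyed by (gender, number);
# this mapping is illustrative and covers only the argument types used in WOGLI.
NOM_PRONOUN = {("masc", "sg"): "er", ("fem", "sg"): "sie",
               ("masc", "pl"): "sie", ("fem", "pl"): "sie"}

def pronominalized_premise(verb_and_object: str, gender: str, number: str) -> str:
    """verb_and_object is the 'verb + object NP' part of a WOGLI premise."""
    pronoun = NOM_PRONOUN[(gender, number)]
    return f"{pronoun.capitalize()} {verb_and_object}"

# Two distinct WOGLI premises (masc. vs. fem. plural subject) collapse after pronominalization.
premises = [("warnen den Gast", "masc", "pl"), ("warnen den Gast", "fem", "pl")]
unique = {pronominalized_premise(*p) for p in premises}
print(unique)   # {'Sie warnen den Gast'} -> only one copy is kept
```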
Dative: WOGLI-dative.We collect a new list of 22 transitive verbs that require dative instead of accusative objects. All verbs are not symmetric, which ensures that NE hypotheses always have the correct gold label. Each verb lemma appears at least 17 times in GXNLI-train. We use the same 144 noun types as in WOGLI to generate new instances. The premises again have SVO structure, and H1 has SO (NE) and H2 has OS (E) structure. Therefore the instances are completely parallel to WOGLI apart from the case of the object.
In these dative constructions, 24 patterns are possible (Table 6 in the Appendix). Each pattern appears 150 times in WOGLI-dative and each verb lemma appears between 132 and 182 times in the premises. All possible noun surface forms appear between 6 and 81 times in the premises. We generate 3,600 premises, or 7,200 pairs in total. Table 12 in the Appendix shows an example. In WOGLI-dative, all determiners (singular and plural, feminine and masculine) change surface forms by case. Additionally, plural masculine common nouns change surface forms in the dative if they do not end with -n in the nominative13. As in WOGLI, singular masculine nouns of the weak declension type and verbs in singular-plural patterns also change surface forms.
Footnote 13: Masculine nouns ending with -n in plural nominative are: _Kunden/clients_, _Professoren/professors_, _Studenten/students_, _Mentoren/mentors_, _Patienten/patients_, _Soldaten/soldiers_, _Journalisten/journalists_, _Zeugen/witnesses_. These nouns maintain the same surface forms in plural dative.
always preceded by a definite article.
are structurally similar to WOGLI and (ii) models exposed to WOGLI do not necessarily generalize well to some related datasets at all. As German word order is quite intricate and will have additional variations for embedded or non-declarative clauses this means training datasets need to be very large and varied to learn German word order.
## 8 Related work
Many English adversarial NLI datasets have been proposed. Some of these (Dasgupta et al., 2018; Kim et al., 2018; Nie et al., 2019; McCoy et al., 2019), like us, include minimal pairs with a high word overlap between premise and hypotheses. Kim et al. (2018), for example, change argument order to generate non-entailments so that "understanding" word order is necessary to solve these. However, in WOGLI, changes in argument order generate entailed _and_ non-entailed hypotheses, depending on keeping or changing corresponding morphology. The more fixed English word order does not allow for flexibility to that degree.
Regarding adversarial NLI datasets for German, Hartmann et al. (2021) investigate negation but do not work on word order. Tikhonova et al. (2022) propose NLI diagnostic datasets for French, German and Swedish. Sentence pairs are manually translated from the Russian TERRa dataset (Shavrina et al., 2020) as well as from the diagnostic dataset of GLUE (Wang et al., 2018). We inspected a random 100 hypotheses of the German TERRa dataset, none of which were in marked word order. The translated GLUE benchmark is annotated with linguistic features relevant for entailment such as lexical semantics, logic, and predicate-argument structure. Only the predicate-argument structure examples include a handful where word order of arguments has been inverted between premise and hypothesis. However, resulting hypotheses were often ambiguous and -- in our opinion -- wrongly annotated as not-entailed. Consider the premise _John zerbrach das FensterJohn broke the window_ and the hypothesis _Das Fenster hat John eingeschlagen_, which is ambiguous between _The window\({}_{NOM}\) broke John\({}_{ACC}\)_ (SO order, NE) OR _The window\({}_{ACC}\) broke John\({}_{NOM}\)_ (OS order, E). This is annotated as non-entailment in the dataset, assuming SO order with an implausible semantic reading, whereas the marked word order with a plausible semantic reading leads to entailment.
Unlike us, both datasets do not emphasise word order. They are also based on translations and therefore rarely contain OS hypotheses.
## 9 Conclusion
We created WOGLI, a new NLI challenge set, in order to examine the challenges brought by the freer German word order. Premises, entailed and not-entailed hypotheses contain exactly the same lemmata; the two hypotheses differ only in word order and morphological changes but change label. Three current BERT-based models fine-tuned on GXNLI-train struggle on WOGLI pairs. This poor performance mirrors the fact that translated NLI training sets such as GXNLI do not incorporate all required linguistic phenomena that are specific to the target language, German. We find that the number of WOGLI pairs for augmentation during fine-tuning must be sufficiently high in order to (i) learn WOGLI and (ii) generalize to other WOGLI-like pairs. Even with a larger augmentation set and a large pretrained model, a generalization set that differs more from WOGLI, such as WOGLI-OS-hard (NE), remains difficult.
In future experiments, we will expand WOGLI datasets to contain additional variation, such as tense variation, more complex sentence structure (additional arguments and adjuncts, active/passive), more complex constituent structure and other sentence types (non-declarative, embedded). This will also allow us to conduct more fine-grained error analyses regarding the hierarchies that influence the linearization of arguments and thus word order.
## Acknowledgments
Ines Reinig is funded by the German Research Foundation (DFG) under the UNCOVER project (PO 1900/7-1 and RE 3536/3-1). |
2303.01285 | Uses and Gratifications of Alternative Social Media: Why do people use
Mastodon? | The primary purpose of this investigation is to answer the research
questions; 1) What are users' motivations for joining Mastodon?; 2) What are
users' gratifications for using Mastodon?; and 3) What are the primary reasons
that the users continue to use Mastodon? We analyzed the collected data from
the perspective of the Uses and Gratifications Theory. A questionnaire was
designed to measure the opinions of Mastodon users from 15 different Mastodon
instances. We examined 47 items through exploratory factor analysis using
principal components extraction with Varimax with Kaiser Normalization. The
results extracted 7 factors of gratification sought (expectation) and 7 factors
of gratification obtained. We discovered that the primary reason that the users
join and use Mastodon is the ease of controlling and sheltering users'
information from data mining. The findings of the gratification sought
structure are similar to findings of the gratification obtained structure, and
the comparison between the two groups of data suggests that users are satisfied
with the ongoing use of Mastodon. | Kijung Lee, Mian Wang | 2023-03-02T14:09:12Z | http://arxiv.org/abs/2303.01285v1 | # Uses and Gratifications of Alternative Social Media: Why do people use Mastodon?
###### Abstract
The primary purpose of this investigation is to answer the research questions; 1) What are users' motivations for joining Mastodon?; 2) What are users' gratifications for using Mastodon?; and 3) What are the primary reasons that the users continue to use Mastodon? We analyzed the collected data from the perspective of the Uses and Gratifications Theory. A questionnaire was designed to measure the opinions of Mastodon users from 15 different Mastodon instances. We examined 47 items through exploratory factor analysis using principal components extraction with Varimax with Kaiser Normalization. The results extracted 7 factors of gratification sought (expectation) and 7 factors of gratification obtained. We discovered that the primary reason that the users join and use Mastodon is the ease of controlling and sheltering users' information from data mining. The findings of the gratification sought structure are similar to findings of the gratification obtained structure, and the comparison between the two groups of data suggests that users are satisfied with the ongoing use of Mastodon.
**Keywords:** Alternative Social Media, Mastodon, Uses and Gratifications.
## 1 Introduction
The emergence of social media radically changed how people communicate and interact with others. Using a collection of platforms, e.g., Facebook, Twitter, and Youtube, to name a few, social media users share ideas and collaborate with the community of others. However, the often-mentioned pure function of social media is sometimes overshadowed by dysfunctional issues, essentially summarized as information control. The first and foremost issue of information control is privacy. Most social media users are concerned that the information they share on social media platforms is not protected for privacy, especially how their private information is collected and used for targeted advertising [1, 11, 12, 13]. On the other hand, the failure of the service providers to moderate appropriate content across social platforms contributed to the spread of misinformation. While content moderation may be accepted to filter out offensive messages, it is unpleasant for some people due to its potential bias and lack of efficacy [10, 11]. In addition to privacy and misinformation issues, mainstream social media have been criticized for
censorship, political neutrality, user control, and malicious activities (Casilli & Tubaro, 2012).
Many of the mentioned issues of mainstream social media contributed to users' transition to alternative social media (ASM henceforth). ASM emerged in the 2010s, led by a small number of startups, including Diaspora. One of the primary differences between mainstream social media services (e.g., Facebook and Twitter) and ASM is the user's control over their data. In other words, the decentralized nature of ASM enables users to manage their user-created content thanks to the lack of a single service provider. An example of users' transition from mainstream social media to ASM is the recent cyber-migration from Twitter to Mastodon in October 2019. The Twitter account of Sanjay Hegde, a Supreme Court lawyer, was suspended because of the content he posted, which triggered this movement. Twitter users were unhappy with the censorship policies and content verification (Ranipeta, 2019). Mastodon garnered over 1.1 million users during that time (Parker, 2012).
As the largest and most popular ASM, Mastodon is one representative and notable example. Drawing on data from the Federation hub (Parker, 2012), Mastodon is the largest and most popular platform. It was created in 2016 by a German programmer, Eugen Rochko, to provide users with their "own space on the Internet." As a microblogging service, Mastodon is a decentralized web platform (Raman, Joglekar, Cristofaro, Sastry, & Tyson, 2019), installed on GNU Social (Kenlon, 2017), to host their servers and create their private networks. In line with Mastodon's slogan, "Giving social networking back to you," Mastodon decentralized its network using many independent local servers. More specifically, Mastodon lets users set up their local server (an instance) with complete administration and allows users to register and interact with each other. The primary benefit is security and authentication compared to on-premises and cloud applications (Parker, 2012). Many people see and use Mastodon as an alternative to Twitter because they both have similar user interfaces and built-in mechanisms.
With less restrictive content moderation by the service providers, the ASM may potentially provide a forum with "free" speech and improved data privacy. Freedom of expression is not a simple idea to implement as a policy of service because public speaking, in the most part, is governed by the law rather than the company policy or the lack of it. However, as the ASM are not managed by a single entity, they have their version of "content moderation," although it is more like a formal agreement among the members of a particular group. In practice, there are ways that users can post content without censorship. This is one of the major reasons that users prefer ASM over traditional social media. However, the unfamiliarity with the decentralized structure of how ASM work, in addition to the less appealing user interface, contributes to the low number of active users.
In what follows, we examine ASM theoretically through the lens of uses and gratifications.
The primary purpose of this investigation is to answer the research questions; _1) What are users' motivations for joining Mastodon?; 2) What are users' gratifications for using Mastodon?; and 3) What are the primary reasons that the users continue to use Mastodon?_ We analyzed the collected data from the perspective of the Uses and Gratifications (U&G henceforth) Theory. The rest of the paper is organized into the
following sections: the Literature Review section describes the Uses and Gratifications Theory and how the theory can be applied in the context of the analyses of Mastodon users; in the Methods section, we describe procedures and participants, instrument design, and the statistical approach used to answer the research questions; in the Results and Analysis section, we provide the results of the analyses and our interpretations of how the results answer the research questions; then we discuss the implications, contributions, and limitations and conclude the paper in the Discussion and Conclusion section.
## 2 Literature Review
Perspectives to explicate the transition from one form of media to a different form of media are often described in terms of how one offers the means for communication to marginalized people (Atton, 2008). In other words, alternative media are not only defined by the content that they deliver but also by how people can participate in the production of messages. In the traditional media, the distinction seems clear from newspaper to radio, from radio to television, and from television to web 2.0 media, that people's choice of one media over another provided them with more power and opportunity to take part in the message production. Uses and Gratifications Theory is a theoretical framework to show why and how people select and use specific media to satisfy their needs.
### Uses and Gratifications Theory
In the early 1960s, the U&G Theory was used to measure the short-term effects of mass media campaigns on people (Blumler, 1979). Early theorists have applied the U&G perspective in the context of various mass media such as television and electronic bulletins (Eighmey & McCord, 1998). For example, Schramm's (Schramm, 1965) research about children seeking out television and collecting data on how TV affects them is a classical case. However, the work of Katz and Blumler (Katz & Blumler, 1974) changed researchers' perspectives to focus on what people do instead of the media's influence or impact on the individual. As a result, audiences came to be seen as active agents who select media to satisfy their needs rather than as passive recipients of media effects. This basic assumption also helps people understand the reasons for selecting different types of media. U&G Theory successfully distinguished itself from early mass communication theories and became a new form of the functionalist perspective on mass media communication (Luo, 2002). Katz and Blumler (Katz & Blumler, 1974) identified thirty-five social and psychological needs and divided them into five basic categories: "Cognitive", "Affective", "Integrative", "Social Integrative", and "Escape". Furthermore, more and more gratifications have been discovered and used in studies, such as: "Information Seeking", "Pass Time", "Entertainment", "Relaxation", "Communicatory Utility", "Convenience Utility", "Expression of Opinion", "Information Sharing", and "Surveillance/Knowledge About Others" (Whiting & Williams, 2013).
With the advancement of media technology, researchers have also paid attention to Web applications (e.g., SNSs). The interactive nature of this new type of media requires active user participation. Many researchers (LaRose, Mastro, & Eastin, 2001; Lee, 2004; Papacharissi & Rubin, 2000) have put the effort in the field to examine the expectations and gratifications that individuals seek and obtain. Understanding what motivates people to select social media has become the priority mission of U&G Theory. The results indicated that the users tend to use SNS to gain multiple gratifications. Users tend to use different platforms for receiving different gratifications. Quan-Haase et al. (Quan-Haase, Wellman, Witte, & Hampton, 2002) have addressed: They do not apply one single form of social media but a range of communication tools. All social media can be adapted to be a part of a user's communication repertoire. For example, people tend to use Facebook to communicate and socialize for real-life communication networks (Young, K., 2011), but people who use Twitter aim to ask questions, get support, or share advice (Grosseck & Holotescu, 2008). People are using different media types for different purposes and desires, and the U&G Theory can help researchers discover users' motivations and associated behaviors.
As the most successful theoretical framework in this area, U&G theory examines why individuals use specific media types to seek and obtain gratifications. In the review of U&G literature, many studies (Karimi, Khodabandelou, Ehsani, & Ahmad, 2014; Quan-Haase & Young, 2010; Quinn, 2016) only focus on mainstream social media, but few studies examined ASM from a U&G perspective. To address this deficiency in the literature, the following research questions will be addressed from two aspects: expectations for joining and gratifications for using Mastodon.
### Motivations for using Mastodon
To date, Mastodon has 2.9 million users and 2,720 nodes (from the-federation.info), and this number is still going up during the COVID-19 pandemic. It has become the largest and the most popular ASM on the Internet and has also raised users' expectations of joining accordingly. Quan-Haase et al. (Quan-Haase & Young, 2010) have addressed that gratifications sought (also often referred to as "needs" or "motives") refer to those gratifications that audience members expect to obtain from a medium before they have come into contact with it. Users want to join Mastodon because they seek specific needs and use Mastodon to fulfill their expectations. This helps us find what factors influence Mastodon adoption and why most users choose to join. No studies have systematically investigated gratifications sought (expectations) from Mastodon. Understanding the gap between these two types of gratifications is vital for analyzing how different audience members use various kinds of media (Palmgreen & Rayburn, 1979). Hence, to explore the expectation for joining Mastodon, the first research question is the following:
Research Question 1: What are users' motivations for joining Mastodon?
### Gratifications of using Mastodon
As many researchers (LaRose et al., 2001; Levy & Windahl, 1984; Palmgreen, Wenner, & Rayburn, 1980) have noted, gratifications obtained provide insight into what motivates the continued use of a medium. Ongoing use of one type of media implies that gratifications sought are reinforced by gratifications obtained. Although many studies have examined gratifications obtained for mainstream media, e.g., (DiMicco et al., 2008; Joinson, 2008; Quan-Haase & Young, 2010), no studies have systematically examined gratifications obtained from Mastodon. The gratifications obtained from using Mastodon reveal the reasons that support users' ongoing use of the platform. To better understand user satisfaction, comparisons between gratifications sought and gratifications obtained are required. Hence, the following research questions are formed:
Research Question 2: What are users' gratifications for using Mastodon?
Research Question 3: What are the primary factors that lead users to continue using Mastodon?
## 3 Methods
This paper presents a small-scale exploratory investigation of Mastodon users' motivations for joining and using Mastodon. A questionnaire survey method was used: the study consists of an online questionnaire with nineteen questions, designed from the U&G perspective to fulfill the purpose of the investigation.
### Procedure and Participants
All samples were obtained under research protocols reviewed and approved by the University of Cincinnati Institutional Review Board (IRB) committee. Participation was voluntary, and participants for the survey were recruited from fifteen different Mastodon instances. The study employed SurveyMonkey to administer the questionnaire to Mastodon users; its flexibility of question types has been noted as useful in prior survey research with schools and school social workers (Massat, McKay, & Moses, 2009). Respondents accessed the questionnaire through a SurveyMonkey link and were first asked to read an information sheet that informed them about the investigation. They then answered all subsequent questions based on their usage of Mastodon. Data collection took place between February 2021 and March 2021.
Two hundred fifty-four participants were initially recruited from fifteen different Mastodon instances with the highest active-user ratios (the highest ratio is 87%, and the lowest ratio is 9.2%). Due to missing data and incomplete surveys, the final sample size for answering the research questions was one hundred fifty. Of this sample, 78% of the respondents were female, 17% were male, and 5% preferred not to tell. The age range of 18-24 had the largest number of respondents (62.4%), followed by the age ranges of 25-34 (32.4%) and 35-44 (4.5%). The majority of participants had completed a bachelor's (43.5%) or a master's degree (27.8%). Most of the participants lived in China (61.33%) and the U.S. (12.44%).
### Measurements
The online survey included three sections with nineteen questions: demographic information, gratifications sought, and gratifications obtained. In the demographic section (with four questions), respondents were asked to indicate their age, gender, education level, and location. The demographic information provides an overview of respondents' backgrounds. The gratifications sought section (47 items) and the gratifications obtained section (the same 47 items) were the two measures of Mastodon use. Participants rated each item on a 5-point Likert scale, ranging from "STRONGLY DISAGREE" to "STRONGLY AGREE." Thirty-two items were created from reasons people use mainstream social media (e.g., Twitter); the other fifteen items were extracted from online conversations with Mastodon users, especially about privacy and free speech issues. The questionnaire is divided into three sections: "Basic Information", "Why do you join Mastodon?" and "Why do you use Mastodon?" Participants were asked to complete phrases such as "I joined the Mastodon because I expected......" and "I use the Mastodon because......".
Table 1 below summarizes the commonly used phrases following "I joined the Mastodon because I expected......" and "I use the Mastodon because......" to measure the gratifications sought and gratifications obtained.
Table 1 Common phrases used to measure gratification sought and gratification obtained
\begin{tabular}{l l l} \hline \hline Factor & ID & Phrase \\ \hline
Social Escapism and Support & SE1 & Get away from what I am doing \\
 & SE2 & Express negative emotions \\
 & SE3 & Stirs me up \\
 & SE4 & Make me feel less lonely \\
 & SE5 & Escape from reality \\
 & SE6 & Help me unwind \\
 & SE7 & Express my feelings without causing unnecessary social attention \\
 & SE8 & Show me how to get along with others \\
 & SE9 & Avoid someone online \\
 & SE10 & Let out my emotions easily to others who will sympathize \\
 & SE11 & Not to be alone \\
 & SE12 & Let others know I care about their feelings \\
 & SE13 & I want to give myself enough entertainment \\
\hline
Adult Content Sharing & AS1 & Safe to share adult content \\
 & AS2 & Share adult content photos \\
 & AS3 & Share adult webpage links \\
 & AS4 & Share adult content videos \\
 & AS5 & Share my own adult contents \\
 & AS6 & Share adult content audio clips \\
 & AS7 & Find dates \\
\hline
Adult Content Consuming & AC1 & Only for adult content \\
 & AC2 & Adult content that can only be posted on Mastodon for entertainment \\
 & AC3 & Find adult-only instances \\
 & AC4 & Find links that contain adult content \\
 & AC5 & Find adult videos \\
 & AC6 & Find adult photos \\
 & AC7 & Find adult audio clips \\
\hline
Information Seeking & IS1 & I want to find someone similar to me \\
 & IS2 & I want to meet people with the same interests \\
 & IS3 & I believe it allows me to meet new people \\
 & IS4 & Learn about unknown things \\
 & IS5 & Do research \\
 & IS6 & Get new ideas \\
 & IS7 & Learn about useful things \\
\hline
Socialization & S1 & My friend suggested it to me \\
 & S2 & Keep in touch with someone \\
 & S3 & I am looking for someone I know \\
 & S4 & All my friends are using Mastodon \\
 & S5 & Find a job \\
 & S6 & Find a date \\
\hline
Privacy & P1 & Decentralized platform cannot collect my personal information \\
 & P2 & Not be found by people I do not know \\
 & P3 & Decentralized platform will prevent people who want to stalk \\
 & P4 & Not be exposed on SNS \\
 & P5 & None-decentralized SNSs cannot provide sufficient privacy settings \\
\hline
Convenience & CO1 & Use it at anytime and anywhere \\
 & CO2 & Use it easily \\
 & CO3 & Use it conveniently \\
\hline \hline \end{tabular}
## 4 Results and Analysis
### Research Question 1: Motivations for using Mastodon
When examining what motivates Mastodon users to join Mastodon, 12 factors were identified based on factor analysis with eigenvalues greater than 1.0. However, after running a parallel analysis (factors analyzed = 47; n = 150; calculated eigenvalue (1.548) \(<\) mean random-data eigenvalue (1.638954)), five factors were removed and seven factors were retained (see Table 2).
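The factor-retention step described above can be reproduced with a standard implementation of Horn's parallel analysis. The following is a minimal sketch, not the authors' actual code: it assumes the 150 x 47 item-response matrix is available as a NumPy array and compares observed eigenvalues of the item correlation matrix against the mean eigenvalues of random data of the same shape; the retained number of factors can then be passed to a factor-analysis routine (for instance, the `factor_analyzer` package) with varimax rotation.

```python
import numpy as np

def parallel_analysis(data, n_iter=1000, seed=0):
    """Horn's parallel analysis: retain factors whose observed eigenvalue
    exceeds the mean eigenvalue obtained from random data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape  # e.g., n = 150 respondents, p = 47 items
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand = np.zeros((n_iter, p))
    for i in range(n_iter):
        sample = rng.standard_normal((n, p))
        rand[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(sample, rowvar=False)))[::-1]
    mean_rand = rand.mean(axis=0)
    n_retain = int(np.sum(obs > mean_rand))  # number of factors to keep
    return n_retain, obs, mean_rand
```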
Factor 1, "Social Escapism and Support", is comprised of thirteen items measuring how users expect Mastodon can provide a place to help them avoid real-world problems or get away from responsibilities (e.g., depression and difficulties). The initial eigenvalue is 4.7519, and this factor explains 22.448% of the total variance. The mean of these items was very high, suggesting that the items represented vital expectations. Two key expectations were " Express my feelings without causing unnecessary social attention" (\(\mathrm{Mean}=4.18\)) and "Let out my emotions easily to others who will sympathize" (\(\mathrm{Mean}=3.83\)). Factor 2 contains "Adult Content Sharing" items. It was the second-highest factor (eigenvalue 4.182, variance explained 9.595%) comprising of seven items measuring Mastodon Users' expectation to microblog services as an adult content sharing community. Two key expectations were "Safe to share adult content" (\(\mathrm{Mean}=3.1133\)) and "Want to share adult webpage links" (\(\mathrm{Mean}=2.3467\)). Factor 3, "Adult Content Consuming" (eigenvalue 4.0771, variance explained 6.776%), consists of seven items measuring the participants' expectation for entertainment on Mastodon. Two key expectations were "adult content that can only be posted on Mastodon for entertainment" (\(\mathrm{Mean}=2.28\)) and "Want to find links that contain adult content" (\(\mathrm{Mean}=2.14\)). "Information Seeking" was the fourth factor identified (eigenvalue 2.46026, variance explained 5.416%) and consists of six items measuring the participants' expectation for willingness to join a new platform and search for information of people and events. Factor 5, "Socialization" (eigenvalue 2.28028, variance explained 4.727%), includes four items measuring the participants' expectation for feeling involved with society and what is going on with others and friendship toward others. Factor 6, "Privacy" (eigenvalue 2.01714 variance explained 4.322%), comprises five items measuring the expectation of privacy protection online on Mastodon. Two key expectations were "None-decentralized SNSs cannot provide sufficient privacy settings" (\(\mathrm{Mean}=3.96\)) and "decentralized platform cannot collect my personal information" (\(\mathrm{Mean}=3.82\)). Factor 7, "Convenience" (eigenvalue 1.66315, variance explained 4.098%), consists of 3 items: "Use it conveniently"(\(\mathrm{Mean}=3.7733\)), "Use it easily"(\(\mathrm{Mean}=3.8667\)), and "Use it at anytime and anywhere"(\(\mathrm{Mean}=3.7533\)).
Table 2 Factor Loadings (Principal Components, Varimax with Kaiser Normalization, Suppress Small Coefficients: 0.3) of 47 Gratifications Sought (N = 150)
\begin{tabular}{l l c c c} \hline \hline Factor & Item & Mean & SD & Loading \\ \hline
Social Escapism and Support & & 3.36 & & \\
 & SE1 & 3.24 & 1.16 & 0.76 \\
 & SE2 & 3.71 & 1.14 & 0.64 \\
 & SE3 & 3.06 & 1.08 & 0.74 \\
 & SE4 & 3.63 & 1.08 & 0.70 \\
 & SE5 & 3.21 & 1.14 & 0.67 \\
 & SE6 & 3.10 & 0.95 & 0.66 \\
 & SE7 & 4.18 & 0.93 & 0.64 \\
 & SE8 & 2.63 & 1.02 & 0.62 \\
 & SE9 & 3.49 & 1.33 & 0.58 \\
 & SE10 & 3.83 & 0.99 & 0.48 \\
 & SE11 & 3.06 & 1.06 & 0.44 \\
 & SE12 & 3.18 & 1.03 & 0.42 \\
 & SE13 & 3.53 & 0.85 & 0.33 \\
\hline
Adult Content Sharing & & 2.32 & & \\
 & AS1 & 3.11 & 1.13 & 0.52 \\
 & AS2 & 2.14 & 0.99 & 0.89 \\
 & AS3 & 2.35 & 1.07 & 0.77 \\
 & AS4 & 2.13 & 0.99 & 0.88 \\
 & AS5 & 1.89 & 1.05 & 0.87 \\
 & AS6 & 2.09 & 0.98 & 0.88 \\
 & AS7 & 1.61 & 0.79 & 0.48 \\
\hline
Adult Content Consuming & & 1.86 & & \\
 & AC1 & 1.53 & 0.75 & 0.54 \\
 & AC2 & 2.28 & 1.12 & 0.73 \\
 & AC3 & 1.53 & 0.75 & 0.60 \\
 & AC4 & 2.14 & 1.12 & 0.74 \\
 & AC5 & 1.78 & 0.85 & 0.88 \\
 & AC6 & 2.01 & 1.03 & 0.84 \\
 & AC7 & 1.78 & 0.80 & 0.87 \\
\hline
Information Seeking & & 3.43 & & \\
 & IS1 & 3.42 & 0.97 & 0.71 \\
 & IS2 & 3.53 & 1.02 & 0.70 \\
 & IS3 & 3.62 & 0.95 & 0.63 \\
 & IS4 & 3.78 & 0.88 & 0.56 \\
\hline \end{tabular}
### Research Question 2: Gratifications of using Mastodon
When examining what gratifications drove the use of Mastodon, 20 factors were identified based on the factor analysis with eigenvalues greater than 1.0. However, after running a parallel analysis (factors analyzed = 47; n = 150; calculated eigenvalue (1.548) \(<\) mean random-data eigenvalue (1.638954)), 13 factors were removed, and seven factors were retained, explaining 57.382% of the variance (see Table 3).
Factor 1, "Social Escapism and Support", comprises eleven Social escapism and support items. The factor measures how users obtained the gratifications from using Mastodon to avoid real-world problems or to get away from responsibilities. The initial eigenvalue is 4.8092, and the factor explains 27.258% of the total variance. Two key gratifications were "Escape from reality" (Mean = 3.3977) and " Express my feelings without causing unnecessary social attention" (Mean = 3.8295), which shows how users on Mastodon use the platform as a beneficial association. Factor 2, "Adult Content Consuming" (eigenvalue 4.2925, variance explained 10.463%), consists of
seven items measuring user satisfaction with the particular entertainment on Mastodon. Factor 3, "Adult Content sharing" (eigenvalue 3.9416, variance explained 7.611%), consists of six items measuring participants' satisfaction with sharing adult content on Mastodon. Two key gratifications were "Able to share adult video and not be discovered" (Mean = 3) and "Safe to share adult content" (Mean = 3). "Socialization" was the fourth factor identified (eigenvalue 2.9030, variance explained 5.834%) and consists of five items measuring the extent to which participants felt involved with society and what is going on with others and friendship toward others. The critical gratification was "Able to keep in touch with someone" (Mean = 3.0114). Factor 5, "Information Seeking" (eigenvalue 2.3946, variance explained 4.851%) and consists of nine items measuring the participants' gratification for willingness to use a new platform, meet new people, and search for information. The Mean for these items was very high. Two key gratifications were "Learned about unknown things" (Mean = 3.8750) and "Let out my emotions easily to others who will sympathize" (Mean = 3.7159). Factor 6, "Privacy" (eigenvalue 2.0730, variance explained 4.435%) and consists five items measuring the gratifications of privacy online on Mastodon. The Mean for these items was also very high. The critical gratification was "None-decentralized SNSs cannot provide sufficient privacy settings" (Mean = 3.9886). As the last gratification factor, "Convenience" (eigenvalue 1.1624, variance explained 3.528%), consists of three items, with the item "Use it conveniently" (Mean = 4.0455), "Use it easily" (Mean = 3.8977), and "Use it at anytime and anywhere" (Mean = 3.7045) measuring the extent to which participants willing to join a new platform.
\begin{table}
\begin{tabular}{l l c c c} \hline \hline Factor & Item & Mean & SD & Loading \\ \hline
Social Escapism and Support & & 3.48 & & \\
 & SE5 & 3.40 & 1.09 & 0.80 \\
 & SE1 & 3.51 & 1.05 & 0.79 \\
 & SE4 & 3.52 & 0.97 & 0.78 \\
 & SE3 & 3.43 & 1.06 & 0.74 \\
 & SE11 & 3.45 & 0.97 & 0.69 \\
 & SE6 & 3.39 & 0.96 & 0.68 \\
 & SE2 & 3.83 & 1.06 & 0.66 \\
 & SE8 & 3.06 & 1.04 & 0.62 \\
 & SE7 & 3.83 & 1.06 & 0.54 \\
 & SE9 & 3.48 & 1.13 & 0.47 \\
 & SE13 & 3.40 & 1.02 & 0.38 \\
\hline
Adult Content Consuming & & 2.10 & & \\
 & AC6 & 2.34 & 1.18 & 0.87 \\
 & AC4 & 2.20 & 1.07 & 0.85 \\
 & AC1 & 2.47 & 1.18 & 0.84 \\
 & AC5 & 2.05 & 1.02 & 0.82 \\
 & AC7 & 2.09 & 1.01 & 0.82 \\
 & AC2 & 1.98 & 1.06 & 0.74 \\
 & AC1 & 1.55 & 0.76 & 0.47 \\
\hline
Adult Content Sharing & & 2.60 & & \\
 & AS4 & 3.00 & 1.14 & 0.90 \\
 & AS6 & 2.33 & 1.00 & 0.87 \\
 & AS2 & 2.41 & 1.07 & 0.87 \\
 & AS1 & 2.38 & 1.06 & 0.86 \\
 & AS3 & 2.49 & 1.05 & 0.83 \\
 & AS5 & 3.00 & 1.14 & 0.45 \\
\hline
Socialization & & 2.19 & & \\
 & S2 & 3.01 & 1.31 & 0.80 \\
 & S1 & 2.48 & 1.36 & 0.79 \\
 & S3 & 2.81 & 1.34 & 0.75 \\
 & S4 & 1.76 & 0.98 & 0.75 \\
 & S5 & 1.57 & 0.74 & 0.53 \\
 & S6 & 1.53 & 0.71 & 0.48 \\
\hline
Information Seeking & & & & \\
 & IS3 & 3.70 & 0.86 & 0.68 \\
 & IS6 & 3.64 & 0.90 & 0.55 \\
 & IS4 & 3.88 & 0.77 & 0.47 \\
 & IS2 & 3.69 & 0.94 & 0.46 \\
 & IS5 & 2.03 & 0.99 & 0.45 \\
 & IS2 & 3.34 & 1.00 & 0.43 \\
 & IS7 & 3.80 & 0.79 & 0.40 \\
\hline
Privacy & & 3.79 & & \\
 & P1 & 3.67 & 0.87 & 0.71 \\
 & P5 & 3.99 & 0.73 & 0.65 \\
 & P3 & 3.76 & 0.83 & 0.65 \\
 & P2 & 3.74 & 0.94 & 0.65 \\
\hline \hline \end{tabular}
\end{table}
Table 3: Factor Loadings (Principal Components, Varimax Rotation) of 47 Gratifications Obtained (N = 150)
### Research Question 3: Gratifications Sought and Gratifications Obtained
In Tables 2 and 3, the significant factors related to gratifications sought (G.S.) and gratifications obtained (G.O.) have been summarized individually. To better understand the differences between G.S. and G.O., all the factors have been sorted by mean (see Table 4), and a series of paired t-tests (see Table 5) has been applied, comparing the mean differences between G.O. and G.S. for each factor individually. Of the seven pairs measured, three pairs (Adult Content Sharing; Adult Content Consuming; Social Escapism and Support) had statistically significant mean differences between G.O. and G.S., and four pairs (Socialization; Privacy; Convenience; Information Seeking) had no statistically significant mean differences. Among all the pairs, six pairs (e.g., Adult Content Sharing; Adult Content Consuming; Social Escapism and Support; Socialization; Privacy; Convenience; Information Seeking) have a positive variance (expectations exceeded).
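The paired comparisons reported in Table 5 follow the usual paired t-test logic. The sketch below is illustrative rather than the authors' code: it assumes per-respondent factor scores are computed as the mean of the items belonging to a factor in the G.S. and G.O. sections, and a positive mean difference (G.O. minus G.S.) corresponds to expectations being exceeded.

```python
import numpy as np
from scipy import stats

def compare_factor(gs_items, go_items):
    """gs_items, go_items: arrays of shape (n_respondents, n_items_in_factor)."""
    gs = np.asarray(gs_items, dtype=float).mean(axis=1)  # gratifications sought
    go = np.asarray(go_items, dtype=float).mean(axis=1)  # gratifications obtained
    t_stat, p_value = stats.ttest_rel(go, gs)            # paired t-test on the same respondents
    return go.mean() - gs.mean(), t_stat, p_value
```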
## 5 Discussion and Conclusion
This paper presented a small-scale exploratory investigation of the Mastodon platform. Despite the increased research on SNSs with U&G Theory, most studies have focused on mainstream media. This study's main contribution to social science research is to discover the reasons behind the rise of ASMs such as Mastodon and to provide a new way of thinking about this type of SNS. The present study fills the gap in ASM research by examining the G.S. and G.O. from the U&G perspective. Moreover, to better understand the differences between G.S. and G.O., a comparison approach was applied. This approach was based on several satisfaction studies, e.g., (Gibbs, O'Reilly, & Brunette, 2014; Johnson & Yang, 2009).
The survey results (Table 4) show that the means for 4 G.S. factors (Convenience; Privacy; Information Seeking; Social Escapism and Support) were over 3.00 on a scale of 5.00, suggesting that users' expectations of those factors for joining Mastodon were higher than "Neutral" but lower than "Agree". The means for 3 G.S. factors (Socialization; Adult Content Sharing; Adult Content Consuming) were less than 2.50 on a scale of 5.00, suggesting that users' expectations of those factors for joining Mastodon were lower than "Neutral" but higher than "Disagree". The means for 4 G.O. factors (Convenience; Privacy; Social Escapism and Support; Information Seeking) were over 3.00 on a scale of 5.00, suggesting that the gratifications users obtained on those factors from using Mastodon were higher than "Neutral" but lower than "Agree". The means for 3 G.O. factors (Adult Content Sharing; Socialization; Adult Content Consuming) were less than 2.50 on a scale of 5.00, suggesting that the gratifications obtained on those factors were lower than "Neutral" but higher than "Disagree".
Of the 7 G.S. and G.O. pairs tested (Table 5), "Adult Content Sharing" and "Adult Content Consuming" presented statistically significant mean differences, suggesting that users gained more gratification from using Mastodon than they expected. "Social Escapism and Support" also presented significant mean differences, suggesting users' expectations were met and surpassed. The other 4 pairs (Socialization; Privacy; Convenience; Information Seeking) were not statistically significant due to similar G.S. and G.O. factor scores, suggesting that users expected to gain those gratifications from using Mastodon from the beginning and that those expectations were met. However, note that both factor scores for "Socialization" were around 2.50 (G.S. = 2.44835; G.O. = 2.51423), which means users' satisfaction was lower than "Neutral" but higher than "Disagree". Therefore, the most satisfying reasons for users joining and using Mastodon were "Information Seeking", "Convenience" and "Privacy". This suggests that the main expectations for joining Mastodon are that the platform is informative and easy to use, and that its privacy settings will shelter users' information from data mining; similar gratifications are received from using the platform, and users are satisfied with their ongoing use of Mastodon.
"Adult Content Sharing" and "Adult Content Consuming" were surprisingly lower than 2.50. During the online conversation with a Mastodon user, the two factors and related items were always mentioned by participants. As one of the components of social media, adult content (e.g., pornography) has been recognized as a visual stimulus (Luscombe, 2016), and the combination of computer access, sexual pleasure, and the brain's mechanisms for learning could make online pornography acutely habitforming. Zignani et al. (Zignani, Quadri, Galdeman, Gaito, & Rossi, 2019) have addressed this in the research: Since each instance defines its policy and its members' code of conduct, it is worth noting that pornography or content for an adult audience may not be prohibited in some instances. 8.9% of posts are labeled as "sensitive". It is such a great percentage for such a small-scale ASM. Muntinga et al. (Muntinga, Moorman, & Smit, 2011) have also addressed: entertainment motivation covers several media gratifications related to escaping or being diverted from problems or routine; emotional release or relief; relaxation; cultural or aesthetic enjoyment; passing the time; and Adult Content Consuming. Hence, in this research, "Adult Content Sharing" and "Adult Content Consuming" would be listed as gratification factors and gained high scores to some extent. However, the opposite results were obtained. The plausible explanation is that respondents would lie about the answers because lying would look more attractive or for privacy or protection concerns (Drouin, Miller, Wehle, & Hernandez, 2016), even if it is an anonymous online survey.
The most impressive finding from the analysis is that the users are seeking "Convenience" as the main form of gratification more than "Privacy" (Table 4), even though privacy is the unique function of the decentralized web (D.W.) structure. The gratification of "Convenience" is assessed using three items modified from Bae (Bae, 2018). This suggests that users value the features of "simple, handy, and easy to control" more than functional capability. Another plausible explanation is that users value the advertisement-free platform that Mastodon provides. All the instances have a clean and ad-free timeline and present a spotless, simple, and friendly interface, giving users more convenience. This explains why users are satisfied with that factor (Table 5).
Additionally, during the investigation, the researcher found that the number of active Mastodon users was considerably lower than on mainstream media (e.g., Twitter), which is reasonable because being ad-free means no marketers will support the platform. The technical logic behind advertising is based on collecting user data, which ASM refuses. As Gehl (Gehl, 2015) mentioned, the refusal of advertising has consequences: the ASM does not give in to the technical, infrastructural, or organizational demands that marketers would make upon them.
"Privacy" is the second satisfying reason for people joining and using Mastodon. It is understandable because SNS Users always demonstrate strong privacy concerns online (Buchanan, Paine, Joinson, & Reips, 2007; Young, A. L. & Quan-Haase, 2009). As a result, the Mastodon returned to individual users or groups of users not only their data but also the control over their contents (Zignani et al., 2019). However, many researchers, e.g., (Debatin, Lovejoy, Horn, & Hughes, 2009; Tufekci, 2008), found little to no relationship between online privacy concerns and information disclosure on online social network sites. It is true because people who have privacy concerns would prefer not to use any SNS in their life. Tufekci (Tufekci, 2008) has conducted: "We tested to see whether those expressing higher degrees of privacy concerns were less likely to use social network sites. Indeed, nonusers of social network sites had higher levels of privacy concerns." As a factor for measuring gratifications, "Privacy" is assessed using five items modified from Smith et al. (Smith, Milberg, & Burke, 1996), Buchanan et al. (Buchanan et al., 2007), Heravi et al. (Heravi, Mubarak, & Choo, 2016), and Dinev et al. (Dinev & Hart, 2004). Two items summarized from the Mastodon user's online conversion: "None-decentralized SNSs cannot provide sufficient privacy settings" and "Avoid being judged by sharing adult content".
"Social Escapism and Social Support" were two separate factors modified from Bae (Bae, 2018). These items were combined during factor analysis. On the one hand, Korgaonkar et al. (Korgaonkar & Wolin, 1999) have defined social escapism as a "pleasurable, fun and enjoyable factor". Hastall (Hastall, 2017) found escapism "is just one of several ways in which individuals deal with challenging life situations". On the other hand, the SNS users are seeking emotional support at the same time. Online support provides a convenient connection with others in similar circumstances as Hwang et al. (Hwang et al., 2010) have conducted: the ability to communicate anonymously, reciprocity of social support, and a judgment-free space for people to share information about their health status. Hence, it is reasonable to mix the two concepts.
The present research has limitations:
1. The number of participants was small: 254 responses were collected, but only 150 were valid in this case. According to Punch and Oancea (Punch & Oancea, 2014), the average number of participants for an exploratory study should be 200.
2. Both G.S. and G.O. measurements could be modified. If we can use more appropriate items via an official interview with Mastodon's users, the results would be more accurate.
3. The measurement could be expanded by adding demographic information because we were surprised 78% of participants were female. Research about gender and social media by Wang et al. (Wang, Fink, & Cai, 2008) found that females prefer to use SNS to satisfy their lack of family relationships while males join SNS for removing their feeling of loneliness. Other researchers (Cho et al., 2003; Choi, 2000; Karimi et al., 2014) applied demographic factors (e.g., time consumption on SNS, education level, age, gender) to investigate the daily usage of SNS and use this information to predict motivations for media usage.
4. Lastly, the research depends on the survey instrument's effectiveness and the subjects' ability to answer the questions accurately. This limitation is shared with most surveys and is an inherent limitation of U&G research.
Despite the increased research on SNSs applying the U&G theory, most studies have focused on mainstream social media. The present investigation was the first to examine ASM from the U&G theory perspective by exploring gratifications sought and gratifications obtained. The present study fills the gap between ASM and U&G theory. We discover why Mastodon arose and provide a new measurement, privacy, which captures users' privacy concerns toward centralized social media. The findings of this research indicate that users will join and use a social media application if it is a privacy-guaranteed platform. A better understanding of Alternative Social Media will make mainstream social media rethink their current design decisions.
|
2307.14419 | Optimization of Image Acquisition for Earth Observation Satellites via
Quantum Computing | Satellite image acquisition scheduling is a problem that is omnipresent in
the earth observation field; its goal is to find the optimal subset of images
to be taken during a given orbit pass under a set of constraints. This problem,
which can be modeled via combinatorial optimization, has been dealt with many
times by the artificial intelligence and operations research communities.
However, despite its inherent interest, it has been scarcely studied through
the quantum computing paradigm. Taking this situation as motivation, we present
in this paper two QUBO formulations for the problem, using different approaches
to handle the non-trivial constraints. We compare the formulations
experimentally over 20 problem instances using three quantum annealers
currently available from D-Wave, as well as one of its hybrid solvers. Fourteen
of the tested instances have been obtained from the well-known SPOT5 benchmark,
while the remaining six have been generated ad-hoc for this study. Our results
show that the formulation and the ancilla handling technique is crucial to
solve the problem successfully. Finally, we also provide practical guidelines
on the size limits of problem instances that can be realistically solved on
current quantum computers. | Antón Makarov, Márcio M. Taddei, Eneko Osaba, Giacomo Franceschetto, Esther Villar-Rodriguez, Izaskun Oregi | 2023-07-26T18:00:02Z | http://arxiv.org/abs/2307.14419v1 | # Optimization of Image Acquisition for Earth Observation Satellites via Quantum Computing
###### Abstract
Satellite image acquisition scheduling is a problem that is omnipresent in the earth observation field; its goal is to find the optimal subset of images to be taken during a given orbit pass under a set of constraints. This problem, which can be modeled via combinatorial optimization, has been dealt with many times by the artificial intelligence and operations research communities. However, despite its inherent interest, it has been scarcely studied through the quantum computing paradigm. Taking this situation as motivation, we present in this paper two QUBO formulations for the problem, using different approaches to handle the non-trivial constraints. We compare the formulations experimentally over 20 problem instances using three quantum annealers currently available from D-Wave, as well as one of its hybrid solvers. Fourteen of the tested instances have been obtained from the well-known SPOT5 benchmark, while the remaining six have been generated ad-hoc for this study. Our results show that the formulation and the ancilla handling technique is crucial to solve the problem successfully. Finally, we also provide practical guidelines on the size limits of problem instances that can be realistically solved on current quantum computers.
Keywords:Quantum Computing Satellite Image Acquisition Earth Observation Quantum Annealer D-Wave
## 1 Introduction
The Satellite Image Acquisition Scheduling Problem (SIASP) [2, 22] is of great importance in the space industry; every day, satellite operators must face the challenge of selecting the optimal subset of images to be taken during the next orbit pass of a satellite with respect to some notion of value from a set of requests made by their clients. For any given set of requests, there are constraints such as geographical proximity incompatibilities, on-board disk space availability or special image configuration requirements, which make it impossible to collect all
the images. The problem becomes even harder to solve when we consider that, since requests arrive continuously over time, the planning must be updated several times a day, which requires the planning algorithm to be re-run many times over short periods of time. This fact makes execution speed crucial. Moreover, the industry is currently facing a myriad of challenges such as extending the problem to multiple satellites [3, 12] or including climatic restrictions [21], making it even more computationally expensive.
The SIASP and its extensions are extremely combinatorial in nature, which make them very challenging to solve with classical computing methods even for moderately sized instances. Traditionally, the problem has been tackled by exact integer linear programming algorithms based on branch-and-bound methods [2, 16, 20] or (meta-)heuristic and hybrid algorithms, which are usually preferred [1, 10] since they are the only ones able to handle large scale problems and comply with execution-time limitations.
On another note, Quantum Computing (QC) is emerging as a very promising alternative for dealing with optimization problems, where Quantum Annealing based computers such as the ones developed by D-Wave are especially suitable and might offer significant speedups in the future. The SIASP has been barely studied from the QC perspective, which makes this problem a great candidate to explore the possible near- to mid-term benefits that QC can bring to the table in the space industry.
Taking as motivation the scarcity of works conducted in this regard, our objective in this paper is to study SIASP from the QC perspective. To this end, we propose two distinct formulations to encode the variables of the problem.
Overall, 20 different problem instances have been used to test the adequacy of each formulation. On one hand, 14 of them have been directly taken from the well-known SPOT5 dataset [2], while the other six have been generated based on SPOT5 using an instance reductor implemented ad-hoc for this study. As solving alternatives, we assess the performance of four solvers available to the public from D-Wave, three purely quantum-annealing based and one hybrid. This allows us to assess the quality of our formulations as well as to test the limits of today's available hardware.
The rest of the paper is structured as follows. The following section offers some background on the problem dealt with in this paper. Next, Section 3 introduces the mathematical formulations employed for dealing with the problem by the quantum annealers considered. The description of the experimental setup is given in Section 4, along with the tests conducted and the discussion. Finally, Section 5 ends the paper with several conclusions and future research following up from the reported findings.
## 2 Background
QC is gaining significant notoriety in the scientific community, mainly due to its promising computational characteristics. It introduces new mechanisms, based on quantum mechanics, for solving a wide variety of problems more efficiently. This
paradigm is expected to achieve a speed and/or precision advantage in modelling systems and in solving complex optimization problems, besides potential energetic benefits. More specifically, quantum computers have been employed in recent times for facing problems coming from diverse fields such as logistics [15], economics [13] or industry [11].
Although the problem dealt with in this research is of great industrial interest, one of the main obstacles faced by any researcher when trying to tackle it through the quantum perspective is the shortage of previous work focused on this task. This fact implies a challenge to be faced and a research opportunity, which has motivated the realization of the present paper.
There is nevertheless a specific work, published in 2020 by Stollenwerk et al. [17, 18], which deals for the first time with the SIASP using the quantum paradigm. They investigate the potential and maturity of then-current quantum computers to solve real-world problems by carrying out an experimental study on a reduced number of small instances of the SIASP. To this end, they introduce a Quadratic Unconstrained Binary Optimization (QUBO) formulation, which served as inspiration for our research. In any case, since that formulation does not completely cover our needs, we have extended it in order to contemplate ternary constraints and multi-camera satellites.
The main contributions that our research provides with respect to that pioneering work can be summarized as follows:
* The dataset employed in our research has been taken from realistic simulations of a satellite mission.
* The satellite considered has three cameras (instead of the single camera used in [17]), two of which can be used in conjunction to take stereo images. This issue greatly increases the complexity of the problem, and gives rise to several possible formulations, of which two distinct ones are studied.
* In our paper, we consider ternary constraints, which require ancillary qubits to model them. We study two ways to encode those constraints and show the compressing power of one with respect to the other.
* We perform an experimental study on the performance of several quantum annealing based solvers.
Finally, it is interesting to mention the research recently published in [23] and [24]. These papers are focused on solving the SIASP using a quantum genetic algorithm. Despite the scientific contribution behind these investigations, they are outside the scope of the present paper since the methods developed fall within the area known as _quantum-inspired evolutionary computation_. Thus, these techniques are really classical algorithms with the particularity of using concepts from quantum physics for their design.
## 3 Mathematical Formulations of the Problem
In this section, we focus on the mathematical formulation of the SIASP treated in this paper. First of all, we go in detail on the classical formulation of the problem.
After that, the section gravitates around the quantum-oriented formulations chosen for the experimentation.
### Classical Formulation of the SIASP
Our classical formulation for the SPOT5 instances (we refer the reader to [2] for a complete description of the structure of the instances) is largely inspired by [19], and it can be stated in the language of mathematical programming as follows. Let \(x_{i,j}\) be the binary decision variable, defined as:
\[x_{i,j}=\begin{cases}1&\text{if image $i$ is taken with camera $j$,}\\ 0&\text{otherwise,}\end{cases}\]
where \(i\in\{1,2,\ldots,N\}\) is the index representing the image requests, \(N\) being the total amount of requests and \(j\in\{1,2,3,4\}\) the identifier of the camera. There are three physical cameras which can take mono images and we define camera 4 to represent the combined use of cameras 1 and 3 to take stereo images. Thus, the objective function to be optimized is:
\[\min\quad-\sum_{i}\sum_{j}w_{i}x_{i,j},\]
where \(w_{i}\) is the weight or value of taking image \(i\). Note that although our task is to maximize the value, we can express it as the minimization of the negative of the objective. This optimization is subject to the following constraints:
\[\sum_{j}x_{i,j} \leq 1 \forall i \tag{1a}\] \[x_{p,j_{p}}+x_{q,j_{q}} \leq 1 \forall\left((p,j_{p}),(q,j_{q})\right)\in C_{2}\] (1b) \[x_{p,j_{p}}+x_{q,j_{q}}+x_{r,j_{r}} \leq 2 \forall\left((p,j_{p}),(q,j_{q}),(r,j_{r})\right)\in C_{3}\] (1c) \[x_{i,4} =0 \forall i\in M\] (1d) \[x_{i,j} =0 \forall i\in S,\forall j\in\{1,2,3\}\] (1e) \[x_{i,j} \in\{0,1\} \tag{1f}\]
where Constraint (1a) forces the images to be taken only once. Constraint (1b) represents the incompatibilities of taking certain pairs of images (set \(C_{2}\)) which arise from two images being too close geographically to each other to take both. Constraint (1c) represents the ternary constraints (set \(C_{3}\)) related to the data flow restrictions of the satellite that do not allow to take more than two images at once. Constraints (1d) and (1e) forbid mono images (set \(M\)) to be taken as stereo and stereo (set \(S\)) images to be taken as mono, respectively.
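To make the formulation above concrete, the sketch below shows how constraints (1a)-(1e) and the objective could be encoded with Google OR-Tools' CP-SAT solver, which is the classical reference solver used later in Section 4. It is an illustrative sketch, not the code used in the paper; the data structures (`requests`, `w`, `C2`, `C3`, `mono`, `stereo`) are assumed to have been parsed from a SPOT5 instance beforehand.

```python
from ortools.sat.python import cp_model

def solve_siasp(requests, w, C2, C3, mono, stereo):
    """requests: iterable of image ids; w: dict id -> weight;
    C2 / C3: lists of incompatible pairs / triples of (image, camera) tuples;
    mono / stereo: sets of mono-only and stereo-only image ids."""
    cams = [1, 2, 3, 4]  # camera 4 = cameras 1 and 3 used jointly (stereo)
    model = cp_model.CpModel()
    x = {(i, j): model.NewBoolVar(f"x_{i}_{j}") for i in requests for j in cams}

    for i in requests:                           # (1a) each image taken at most once
        model.Add(sum(x[i, j] for j in cams) <= 1)
    for (p, jp), (q, jq) in C2:                  # (1b) pairwise incompatibilities
        model.Add(x[p, jp] + x[q, jq] <= 1)
    for (p, jp), (q, jq), (r, jr) in C3:         # (1c) ternary (data-flow) constraints
        model.Add(x[p, jp] + x[q, jq] + x[r, jr] <= 2)
    for i in mono:                               # (1d) mono images cannot be taken as stereo
        model.Add(x[i, 4] == 0)
    for i in stereo:                             # (1e) stereo images only via camera 4
        for j in (1, 2, 3):
            model.Add(x[i, j] == 0)

    model.Maximize(sum(w[i] * x[i, j] for i in requests for j in cams))
    solver = cp_model.CpSolver()
    status = solver.Solve(model)
    return status, solver.ObjectiveValue()
```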
### Formulations of the SIASP for its Quantum Solving
In order to treat the problem with existing quantum-annealing hardware, we need a different formulation. Quantum annealers' QPUs are able to produce a
final Hamiltonian of the form:
\[H_{F}=\sum_{i}h_{i}\ Z_{i}+\sum_{i\neq j}J_{i,j}\ Z_{i}Z_{j}\, \tag{2}\]
where \(Z_{i}\) is the \(z\) Pauli operator of the \(i\)-th qubit, \(h_{i}\) is its on-site energy and \(J_{i,j}\) the coupling between qubits \(i,j\). This allows us to minimize the corresponding function on classical binary variables \(x_{i}\) (obtained transforming \(Z_{i}\to 1-2x_{i}\)). This is by analogy called _energy_, and can be written as:
\[E(\mathbf{x})=\mathbf{x}^{T}Q\ \mathbf{x}\, \tag{3}\]
where \(Q\) is a matrix that can be assumed symmetric without loss of generality. A QUBO consists in the minimization of such functions, and currently available quantum annealers such as D-Wave target this kind of problems specifically. The limitation to polynomials of order two comes from the fact that the Hamiltonian of Eq. (2) only couples qubits two by two, and is intrinsic to the hardware.
Additionally, the hardware is not able to couple every pair of qubits, hence an important feature of the QPU is its topology, i.e. the configuration of which qubits are in fact coupled, hence which off-diagonal terms of \(Q\) are nonzero and can be tuned. For this study, three different QPUs have been used, namely DW_2000Q_6 (2000Q), Advantage_system6.1 (Advantage) and Advantage2_prototype1.1 (Advantage2), whose topologies, called Chimera, Pegasus and Zephyr respectively, are increasingly interconnected [6, 7, 8]. If the topology does not present all couplings needed for the problem, a typical solution is to embed a given qubit in more than one physical qubit. This is a source of overhead in the amount of qubits needed for a given instance. We make use of D-Wave's native algorithm based on minor embedding [5].
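As an illustration of the qubit overhead caused by minor embedding, the snippet below builds the interaction graph of a toy QUBO and asks `minorminer` for embeddings into the three topologies; the graph-generator calls and sizes are assumptions based on the `dwave_networkx` package and do not exactly reproduce the commercial QPUs.

```python
import networkx as nx
import dwave_networkx as dnx
import minorminer

# Toy QUBO: one node per variable, one edge per nonzero off-diagonal coefficient.
Q = {(0, 1): 2.0, (0, 2): 2.0, (1, 3): 2.0, (2, 3): 2.0, (0, 3): 2.0}
source = nx.Graph(list(Q.keys()))

targets = {
    "Chimera (2000Q)": dnx.chimera_graph(16),     # sizes here are only illustrative
    "Pegasus (Advantage)": dnx.pegasus_graph(16),
    "Zephyr (Advantage2)": dnx.zephyr_graph(6),
}
for name, graph in targets.items():
    emb = minorminer.find_embedding(source.edges, graph.edges)
    # An empty result means no embedding was found; longer chains mean more qubit overhead.
    used = sum(len(chain) for chain in emb.values()) if emb else None
    print(name, "physical qubits used:", used)
```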
The transformation of our original problem into a QUBO formulation presents several challenges. Firstly, QUBO is unconstrained, i.e., we cannot explicitly impose Constraints (1). Secondly, currently available quantum hardware possesses few qubits, so reducing the number of variables is of utmost importance. Thirdly, it can only tackle problems written in quadratic form.
The fact that QUBO is unconstrained is circumvented by the addition of penalty terms [9, 17], which are extra terms that raise the value of \(E(\mathbf{x})\) whenever \(\mathbf{x}\) is not a feasible solution -- i.e. when \(\mathbf{x}\) does not obey all constraints. Importantly, imperfections in the penalization and minimization may lead to infeasible solutions.
Because of the reduced number of available qubits, we have devised a denser encoding of our problem into binary variables, mainly related to the representation of mono and stereo photos. In Section 3.1 we have presented the classical formulation using our first encoding, which we refer to as 4cam. It is a straightforward and fixed-length representation of the binary variables, whose allocation depends only on the total number of photos requested. In the denser encoding, which we call 3cam, if requested photo \(i\) is mono, there are three variables \(x_{i,1}\), \(x_{i,2}\), \(x_{i,3}\), whereas if it is stereo there exists a single variable \(x_{i}\). The advantages of the 3cam formulation are the reduction in the number of qubits necessary
and also in the number of constraints, since Eqs. (1d, 1e) no longer need to be imposed.
Finally, the polynomial to be minimized must be of quadratic form, a limitation particularly relevant for the penalty terms relative to ternary constraints (1c). These require the introduction of additional binary variables ("slack" variables), which is another source of overhead in the amount of qubits needed. Let us momentarily simplify the notation of inequality (1c) to \(x_{p}+x_{q}+x_{r}\leq 2\) for cleanness. For 4cam we write the corresponding penalty term directly in quadratic form, introducing two slack variables \(s_{1},s_{2}\) for each ternary constraint [9, Sec 5]:
\[P(x_{p}+x_{q}+x_{r}-2+s_{1}+2s_{2})^{2}\, \tag{4}\]
where \(P>0\) is a parameter. For 3cam, in the spirit of optimizing this formulation as much as possible, we take a more involved approach: initially we write a cubic penalty term \(P\ x_{p}x_{q}x_{r}\) and then reduce it to quadratic following Boros et al. [4, Sec 4.4]: a single slack variable \(s_{1}\) replaces the pair \(x_{q}x_{r}\), and an extra term is added (parentheses below),
\[P\ x_{p}s_{1}+P(x_{q}x_{r}-2x_{q}s_{1}-2x_{r}s_{1}+3s_{1}). \tag{5}\]
Advantages of the latter method are fewer terms in Eq. (5) than in Eq. (4) after expansion, and avoiding the introduction of a second slack variable per constraint. Additionally, if the same pair \(x_{q}x_{r}\) appears in many constraints, the same slack variable replaces it in all terms, with the same extra term added.
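The two penalty encodings can be written as simple updates of a QUBO dictionary. The sketch below is our own illustration of Eqs. (4) and (5) rather than the paper's implementation; the variable labels and the convention that `Q` maps index pairs to coefficients are assumptions, and the constant term of the squared penalty is dropped since it does not change the minimizer.

```python
from collections import defaultdict

def add_ternary_penalty_4cam(Q, p, q, r, s1, s2, P):
    """Eq. (4): P * (x_p + x_q + x_r - 2 + s1 + 2*s2)^2, expanded to quadratic form."""
    terms = [(p, 1), (q, 1), (r, 1), (s1, 1), (s2, 2)]
    c = -2
    for v, a in terms:
        Q[(v, v)] += P * (a * a + 2 * a * c)   # x^2 = x for binary variables
    for i in range(len(terms)):
        for j in range(i + 1, len(terms)):
            (u, a), (v, b) = terms[i], terms[j]
            Q[(u, v)] += 2 * P * a * b         # constant P*c^2 is omitted

def add_ternary_penalty_3cam(Q, p, q, r, s1, P):
    """Eq. (5): P*x_p*s1 + P*(x_q*x_r - 2*x_q*s1 - 2*x_r*s1 + 3*s1),
    where the slack s1 stands for the product x_q*x_r (Boros-style reduction)."""
    Q[(p, s1)] += P
    Q[(q, r)] += P
    Q[(q, s1)] += -2 * P
    Q[(r, s1)] += -2 * P
    Q[(s1, s1)] += 3 * P

# Usage: the same slack can be reused whenever the pair (x_q, x_r) reappears.
Q = defaultdict(float)
add_ternary_penalty_3cam(Q, "x_p", "x_q", "x_r", "s_qr", P=10.0)
```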
Figure 1 graphically shows the compression achieved with the formulation and encoding for instance 15. Notice that the representations include all variables, original and slack, and all necessary penalty terms. See Table 1 for a full breakdown of all instances.
Figure 1: Graph representation of problem instance 15 (see the details in Table 1) obtained from using the QUBO \(Q\) matrices as adjacency matrices. The one on the left corresponds to the 4cam formulation with binary expansion encoding (Eq.4) and the one on the right to the 3cam formulation with Boros encoding (Eq.5). For clarity, the self-loops (one for each node, corresponding to the diagonal of the matrix) have been omitted.
## 4 Experimentation
This section is devoted to describe the experimentation carried out. First, we detail the benchmark of instances employed for testing the formulations detailed in the previous section. After that, we delve on the experimental setup and the analysis of the results obtained.
### Benchmark
To conduct our study, the well-known SPOT5 benchmark dataset introduced in [2] have been used, which is composed of 21 simulated instances of image-acquisition planning problems from the SPOT5 earth observation satellite of the French space agency. Among the 21 instances, 8 were discarded as they have capacity constraints, which consideration is out of the scope of this research. This dataset has certain characteristics that make it suitable for consideration in our study: it is open, heterogeneous, real-world oriented and widely used by the scientific community.
However, the large size of many of the instances is a limiting factor to asses the performance of the QPUs. To mitigate this, we have implemented a Python script for the automatic generation of reduced instances. This script, coined InstanceReductor, takes as input an existing instance of the SPOT5 dataset and the desired size for the newly generated instance. Then, the InstanceReductor generates a new file by randomly reducing the data in order to contemplate the number of requests introduced as input.
Overall, 20 different instances have been employed in the experimentation designed in this study. 14 of them are part of SPOT5, and their sizes range from 8 to 348 requests. The 6 remaining cases have been generated by InstanceReductor, and they consider a number of requests between 15 and 40. We summarize in Table 1 the characteristics of each instance considered in this paper. Lastly, and aiming to enhance the reproducibility of this study, both generated cases and InstanceReductor are openly available in [14].
### Setup and Results
To conduct the experiments on quantum hardware, we have used our two QUBO formulations (4cam and 3cam) on the 20 instances detailed in Table 1. Additionally, for each instance and encoding, we tested four different D-Wave solvers (2000Q, Advantage, Advantage2 and HybridBQM, where the latter stands for hybrid_binary_quadratic_model_version2). To account for the probabilistic nature of the solvers, we run each combination 5 times. For the three completely quantum solvers we have left all parameters as default except the number of runs, which we have set at 2000 following the advice of D-Wave. For the hybrid solver, we adopt all the default parameters. Lastly, the value of all penalty coefficients \(P\) of each instance has been set to one plus the maximum possible value that the objective function can reach. In this regard, refining the choice of \(P\) could
be further investigated as it can severely affect the feasibility of the solutions obtained.
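For reference, submitting a QUBO to the solvers used here follows the pattern below. This is a generic usage sketch based on the `dwave-system` package, not the exact experiment script; the solver name string, the toy QUBO and the availability of a D-Wave Leap account and API token are assumptions.

```python
from dwave.system import DWaveSampler, EmbeddingComposite, LeapHybridSampler

Q = {("x1", "x1"): -3.0, ("x2", "x2"): -2.0, ("x1", "x2"): 5.0}  # toy QUBO

# Pure quantum annealer: minor embedding is handled by the composite.
qpu = EmbeddingComposite(DWaveSampler(solver="Advantage_system6.1"))
result = qpu.sample_qubo(Q, num_reads=2000)        # 2000 reads, as in the experiments
print(result.first.sample, result.first.energy)

# Hybrid solver with default parameters.
hybrid = LeapHybridSampler()
print(hybrid.sample_qubo(Q).first.energy)
```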
Additionally, in order to use it as a reference for the quantum experiments, we have first implemented the classical 4cam formulation described in Section 3.1 and solved all the considered instances with Google OR-Tools, checking that our results coincide with the known optima.
Table 2 depicts the average results and standard deviations reached by 2000Q, Advantage, Advantage2 and HybridBQM, as well as by the classical solver. In Fig. 2 we show the detailed results of our experiments split by instance and formulation. Together with Table 2, we can see that the better-performing formulation across almost all instances and solvers is 3cam. Also, with 3cam we obtain solutions for certain combinations of instance and solver that are untreatable with 4cam. This is so because it is much more efficient in terms of variables and connections to be used for modelling the problem. Additionally, the best-performing solver is the HybridBQM.
If we turn our attention to the purely quantum solvers, it is interesting that Advantage2, although being still a prototype at the time of writing this article, has the edge for some of the smaller instances where an embedding can be found. For larger instances, the Advantage QPU is the most reliable solver, which manages to obtain results (albeit not necessarily outstanding ones) up until instance 412, with 300 requests.
\begin{table}
\begin{tabular}{l c c c c|c c|c c} \hline \hline ID & Requests & Stereo & Constraints & Ternary & L4cam & Q4cam & L3cam & Q3cam \\ \hline \hline \end{tabular}
\end{table}
Table 1: Main characteristics of the 20 used instances, ordered by increasing number of requests. For each instance we depict the number of total and stereo requests, the amount of total and ternary constraints as well as the number of linear (L4cam, L3cam) and quadratic (Q4cam, Q3cam) terms for the two QUBO formulations. Shaded instances are generated by InstanceReductor.
Furthermore, 2000Q cannot handle instances larger than 503 (143 requests), where the limit of Advantage2 is at instance 404 (100 requests). The evolution of the solvers seems to be clear, and we can expect that when the final version of the Advantage2 QPU is launched, we will be able to solve even larger problems with greater precision. Finally, an important note is that in instances 404, 42 and 408 there are some solutions above the optimum value, which means that some constraints were broken in the solving process. This was likely due to insufficiently large penalty values, which highlights the importance of choosing them correctly.
## 5 Conclusions and Further Work
In this paper we have experimentally assessed the performance of D-Wave's three pure quantum annealers and one hybrid solver for the SIASP using two distinct formulation approaches. We have resorted to a realistic benchmark dataset and established how the quality of the solutions degrades with problem size, imposing practical limits on instances that can currently be solved effectively.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{4}{c}{4cam} & \multicolumn{4}{c}{3cam} \\ \cline{2-9} ID & 2000Q & Advantage & Advantage2 & HybridBQM & 2000Q & Advantage & Advantage2 & HybridBQM \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results for the considered instances by encoding and solver. Each instance was run 5 times and the values reported are the mean \(\pm\) standard deviation of the objective function value. Marked in bold are the best-performing results for each problem instance ignoring the hybrid solver. Results marked with * are those for which an embedding was not found in at least one of the runs, while the ones with no numerical values are the ones for which no embedding was found in any of the 5 attempts. Instances 404, 42 and 408 with the 4cam formulation and Advantage QPU had some unfeasible solutions, which were removed when computing the results shown in this table.
Figure 2: Box and jitter plots of the experimental results. Each subplot shows the execution data for one instance normalized to its optimum value. The dashed green line represents the optimal solution and the colors encode the solvers. The \(x\) axis is split between the two formulations, 4cam on the left and 3cam on the right. The first three rows of the plot share the same \(y\) axis, while the last row does not due to the broken constraints in instances 404, 42 and 408. Instances 8, 20, 25 and 30, with perfect performance up to 2 decimals in all cases, have been omitted. Elsewhere, when no data is shown for a given solver and instance, no embedding was found for it in any of the 5 runs.
Our results show that an efficient formulation allows solving larger problem instances with better accuracy. This fact is key for success, which makes it a promising avenue for further research. The parameterization also influences the quality of the solutions, leading us to believe that a more exhaustive tuning of the problem penalty values as well as the solver parameters such as number of reads, chain strength, annealing time, etc. could bring better performance overall.
On another note, future research could be focused on extending the problem to consider capacity constraints or multiple satellites, which would make it more appealing for industrial applications. Finally a study of the problem from the perspective of gate-based quantum computers, for example by means of variational quantum algorithms such as the quantum approximate optimization algorithm, would also be of significant interest.
## Acknowledgments
This work was supported by the Government of Spain (Misiones CUCO Grant MIG-20211005, FIS2020-TRANQI and Severo Ochoa CEX2019-000910-S), Fundacio Cellex, Fundacio Mir-Puig, Generalitat de Catalunya (CERCA program), and by the European Research Council ERC AdG CERQUTE.
|
2303.11096 | Deep-Learning Aided Channel Training and Precoding in FDD Massive MIMO
with Channel Statistics Knowledge | We propose a method for channel training and precoding in FDD massive MIMO
based on deep neural networks (DNNs), exploiting Downlink (DL) channel
covariance knowledge. The DNN is optimized to maximize the DL multi-user
sum-rate, by producing a pre-beamforming matrix based on user channel
covariances that maps the original channel vectors to effective channels.
Measurements of these effective channels are received at the users via common
pilot transmission and sent back to the base station (BS) through analog
feedback without further processing. The BS estimates the effective channels
from received feedback and constructs a linear precoder by concatenating the
optimized pre-beamforming matrix with a zero-forcing precoder over the
effective channels. We show that the proposed method yields significantly
higher sum-rates than the state-of-the-art DNN-based channel training and
precoding scheme, especially in scenarios with small pilot and feedback size
relative to the channel coherence block length. Unlike many works in the
literature, our proposition does not involve deployment of a DNN at the user
side, which typically comes at a high computational cost and
parameter-transmission overhead on the system, and is therefore considerably
more practical. | Yi Song, Tianyu Yang, Mahdi Barzegar Khalilsarai, Giuseppe Caire | 2023-03-20T13:32:37Z | http://arxiv.org/abs/2303.11096v1 | Deep-Learning Aided Channel Training and Precoding in FDD Massive MIMO with Channel Statistics Knowledge
###### Abstract
We propose a method for channel training and precoding in FDD massive MIMO based on deep neural networks (DNNs), exploiting Downlink (DL) channel covariance knowledge. The DNN is optimized to maximize the DL multi-user sum-rate, by producing a pre-beamforming matrix based on user channel covariances that maps the original channel vectors to "effective channels". Measurements of these effective channels are received at the users via common pilot transmission and sent back to the base station (BS) through analog feedback without further processing. The BS estimates the effective channels from received feedback and constructs a linear precoder by concatenating the optimized pre-beamforming matrix with a zero-forcing precoder over the effective channels. We show that the proposed method yields significantly higher sum-rates than the state-of-the-art DNN-based channel training and precoding scheme, especially in scenarios with small pilot and feedback size relative to the channel coherence block length. Unlike many works in the literature, our proposition does not involve deployment of a DNN at the user side, which typically comes at a high computational cost and parameter-transmission overhead on the system, and is therefore considerably more practical.
FDD massive MIMO, channel statistics knowledge, analog feedback, DNN-based training and precoding.
## I Introduction
Deep Neural Networks (DNNs) have been recently successfully applied in various areas of wireless communications such as resource allocation and scheduling [1, 2], channel estimation [3, 4], beamforming [5], transceiver design [6], etc. Given sufficient data, a DNN is trained in a (semi-)supervised or unsupervised fashion to learn mappings from an input space to some desired output that optimizes a suitable utility metric that is otherwise very hard to optimize with conventional tools.
In this paper, we propose a DNN-based solution to the problem of channel training and multi-user precoding in a frequency division duplex (FDD) massive MIMO system with channel statistics knowledge at the Base Station (BS). It is well-known that, to achieve the benefits of massive MIMO, the transmitter needs to obtain fresh Downlink (DL) channel state information (CSI). Unlike time division duplexing (TDD) systems, where relying on channel reciprocity, DL channels are directly estimated from Uplink (UL) pilots, in FDD the BS must train the CSI by broadcasting pilot sequences in DL and receiving user feedback. This process requires careful design of DL pilots, user feedback messages, and the precoder based on feedback. In particular, a small pilot length (in DL) and feedback size (in UL) relative to the channel dimension, results in poor DL spectral efficiency. This is caused by the large channel estimation error and the resulting interference due to precoding over erroneous channels. Incorporating knowledge of channel statistics at the transmitter in designing the pilots and the precoder can significantly mitigate this effect. We propose a scheme in which a DNN is trained with the given constraints on pilot and feedback size (fixed by the standard) to produce a pre-beamforming matrix as a function of user channel statistics. Both the pilot vectors (a set of \(T_{\rm dil}\) row-vectors in \(\mathbb{C}^{M}\)) and precoding vectors (a set of \(K\) row-vectors in \(\mathbb{C}^{M}\)) are chosen from the row-space of this pre-beamforming matrix. As will be apparent by the signal model in the next sections, the pre-beamforming matrix maps original channels to "effective channels", which will be estimated (through DL training and UL feedback) and over which zero-forcing (ZF) will be performed. Intuitively, since an "accurate" estimation of the original channels with limited pilot and feedback resources is infeasible, the DNN-based transform is employed to manage interference by precoding over certain effective channels with possibly smaller number of known coefficients and therefore can be trained with the given pilot/feedback budget. Other elements of our proposed network include the following. Upon receiving pilots, the users send them back to the BS after a power normalization via analog feedback [7], i.e., feeding back complex-valued measurements by modulating them as quadrature and in-phase components of the baseband signal. The BS then computes a minimum mean-squared error (MMSE) estimate of the effective channels. The precoder is then generated as a product of a ZF precoder on the effective channels and the pre-beamforming matrix and is used to send data to users in DL. The DNN is optimized end-to-end to yield a pre-beamforming matrix based on input channel statistics, that maximizes the multi-user sum-rate.
Recently many works have utilized DNNs for channel training and precoding in massive MIMO. Some have proposed extrapolation of DL channels from UL channels using DNNs [8, 9, 10]. Albeit highly successful under certain scenarios, these methods would fail when the channel coherence bandwidth is small relative to the separation between UL and DL carrier frequencies and explicit DL training and feedback is necessary.
In other works, pilot design and channel estimation with DNNs is considered [4, 11]. The objective in these works is to minimize the channel estimation MSE, which is different from maximizing the multi-user sum-rate as considered in our work. Another category of works focuses on compression of feedback, where perfect channel state knowledge at the users is assumed [12, 13, 14]. This assumption is hard to achieve in massive MIMO, since the channel dimension is typically larger than the pilot length and channel estimation is carried out via a compressed sensing scheme which not only may fail depending on the channel sparsity order, but is also computationally costly and difficult to implement in real time in the user devices. Finally, [15] proposed a highly successful DNN-based scheme for pilot sequence design, feedback quantization and DL precoding. This scheme however involves deployment of the feedback computation layers at the user side, which requires transfer of a large (in the order of a million) number of parameters to the users, incurring a huge overhead in DL.
Our proposed method offers the following advantages with respect to the existing works in the literature.
1. **Exploiting Channel Statistics:** Our proposed DNN utilizes channel statistics to design DL pilots and the precoder. This results in significantly higher DL sum-rate, particularly in scenarios where channel training is difficult due to small pilot and feedback dimensions. Note that, although availability of channel statistics knowledge at the BS is not always granted, it is justified by the fact that in FDD systems, DL channel covariances can be estimated from UL pilots based on what is known as "angle reciprocity" [16, 17, 18]. Therefore, it is reasonable to devise DNN-based solutions that exploit channel covariance knowledge.
2. **No DNN at the User Side:** Unlike many works in the literature [12, 13, 14, 15], our proposed method does not involve training a DNN at the user side or transmission of optimized DNN parameters to it. Users simply send back pilot measurements to the BS with a power normalization, from which the BS estimates effective channels.
3. **Direct Sum-Rate Maximization:** The idea of training and precoding in FDD massive MIMO by preconditioning channels with a transform based on statistics was proposed by some of the authors of the present work in [19]. There, the transform was optimized to maximize the spatial multiplexing gain, which is equivalent to the rate pre-log factor in high SNR. In contrast, in the present work we optimize the transform to directly maximize ergodic sum-rate in a data-driven fashion and through the DNN. Our simulation results will show that this approach significantly improves upon [19].
We will show via simulations that our method achieves better performance in terms of DL sum-rate compared to the state-of-the-art result in [15] as well as [19], especially when the pilot and feedback dimensions are small compared to the channel dimension, proving the applicability of this scheme in FDD massive MIMO systems.
## II System model
### _Common Training_
We consider a massive MIMO system in FDD mode, where a BS equipped with a uniform linear array (ULA) of \(M\) antennas serves \(K\) single-antenna users in a cell. Because channel reciprocity does not hold in FDD, the BS has to train DL channels by broadcasting pilot sequences of length \(\beta\) from each of its \(M\) antenna ports. We denote these pilot sequences as rows of a pilot matrix \(\mathbf{X}^{\mathrm{p}}\in\mathbb{C}^{\beta\times M}\) (the superscript "\(\mathrm{p}\)" stands for "pilot"). The pilot signal received at user \(k\) is expressed as
\[\mathbf{y}_{k}^{\mathrm{p}}=\mathbf{X}^{\mathrm{p}}\mathbf{h}_{k}+\mathbf{z}_ {k}^{\mathrm{p}},\;k\in[K], \tag{1}\]
where \(\mathbf{h}_{k}\sim\mathcal{CN}(\mathbf{0},\mathbf{C}_{k})\) is the Rayleigh fading channel vector of user \(k\) with covariance \(\mathbf{C}_{k}=\mathbb{E}[\mathbf{h}_{k}\mathbf{h}_{k}^{\mathrm{H}}]\), \(\mathbf{z}_{k}^{\mathrm{p}}\sim\mathcal{CN}(\mathbf{0},\mathbf{I}_{M})\) is additive white Gaussian noise (AWGN) with unit variance per element, and for an integer \(a\) we define \([a]\triangleq\{1,2,\ldots,a\}\). Assuming the BS has a total transmission power of \(P_{\mathrm{dl}}\), the pilot matrix should satisfy the power constraint
\[\|[\mathbf{X}^{\mathrm{p}}]_{i,}\|^{2}\leqslant P_{\mathrm{dl}},\;\forall i \in[\beta]. \tag{2}\]
where \([\mathbf{X}^{\mathrm{p}}]_{i,}\) denotes the \(i\)-th row of \(\mathbf{X}^{\mathrm{p}}\). Also, since the noise variance is normalized to one, we define the SNR in DL as \(\text{SNR}_{\mathrm{dl}}=P_{\mathrm{dl}}\). For future reference, we define the effective channel of user \(k\) by
\[\mathbf{g}_{k}\triangleq\mathbf{B}\mathbf{h}_{k},\;\forall k\in[K], \tag{3}\]
where \(\mathbf{B}\in\mathbb{C}^{M\times M}\) is the pre-beamforming matrix, mapping the original channel to the effective channel.
We propose to design \(\mathbf{X}^{\mathrm{p}}\) as the product
\[\mathbf{X}^{\mathrm{p}}=\mathbf{W}\mathbf{B}, \tag{4}\]
where \(\mathbf{W}\in\mathbb{C}^{\beta\times M}\) is an arbitrary full-rank matrix. While we do not impose any constraints on \(\mathbf{W}\) other than being full-rank, \(\mathbf{B}\) will be produced by a trained DNN with user channel covariances as input. This will be explained in Section III. With this construction, the DL pilot signal in (1) can be equivalently written as \(\mathbf{y}_{k}^{\mathrm{p}}=\mathbf{W}\mathbf{g}_{k}+\mathbf{z}_{k}^{\mathrm{p}}\), so that the received pilot symbols can be seen as noisy linear measurements of the effective channel.
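For concreteness, the common-training model (1)-(4) can be simulated in a few lines of NumPy. In the sketch below, the dimensions, the random full-rank \(\mathbf{W}\), and the identity placeholder for \(\mathbf{B}\) (which in the proposed scheme is produced by the DNN of Section III) are illustrative assumptions rather than the configuration used in the experiments.

```python
import numpy as np

# Illustrative sketch of Eqs. (1)-(4); sizes and matrices are placeholders.
M, K, beta, P_dl = 64, 6, 8, 20.0
rng = np.random.default_rng(0)

W = (rng.standard_normal((beta, M)) + 1j * rng.standard_normal((beta, M))) / np.sqrt(2)
B = np.eye(M, dtype=complex)        # stand-in for the DNN-produced pre-beamformer

X_p = W @ B                                                          # pilot matrix, Eq. (4)
X_p *= np.sqrt(P_dl) / np.linalg.norm(X_p, axis=1, keepdims=True)    # per-row power, Eq. (2)

h_k = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)       # CN(0, I) channel
z_p = (rng.standard_normal(beta) + 1j * rng.standard_normal(beta)) / np.sqrt(2)  # unit-variance AWGN
y_p = X_p @ h_k + z_p                                                # received pilot signal, Eq. (1)
g_k = B @ h_k                                                        # effective channel, Eq. (3)
# Note that y_p is also W @ g_k + z_p: noisy linear measurements of the effective channel.
```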
### _Analog Feedback_
After receiving pilot signals, each user sends a feedback "message" to the BS using the UL channel. A common approach known as digital feedback consists of estimating the channel at the receiver from the pilots, quantizing it and sending the quantization index to the BS [20]. Alternatively, users can encode the pilot signal into quantization codewords without explicit channel estimation. A different approach, known as analog feedback consists of sending complex-valued feedback symbols to the BS by modulating the quadrature and in-phase components of the carrier by real and imaginary parts of the feedback symbol [7]. Analog feedback is simpler and imposes less feedback delay than digital feedback which requires quantization and channel coding. The feedback symbols can be estimates of the channel or the received pilot signal itself. In our proposition, the user sends the power-normalized pilot symbols directly and without channel estimation to the
BS via analog feedback. The feedback message of user \(k\) in this case is given by
\[\mathbf{x}_{k}^{\mathrm{fb}}=\sqrt{\rho_{k}}\mathbf{y}_{k}^{\mathrm{p}},\;\;\text {with}\;\rho_{k}=\beta P_{\mathrm{ul}}/\|\mathbf{y}_{k}^{\mathrm{p}}\|^{2}. \tag{5}\]
which satisfies the average power constraint
\[\|\mathbf{x}_{k}^{\mathrm{fb}}\|^{2}\leq\beta P_{\mathrm{ul}},\;\forall k\in[K], \tag{6}\]
where \(P_{\mathrm{ul}}\) is the user average transmit power per symbol in UL, assumed equal among all users. The elements of \(\mathbf{x}_{k}^{\mathrm{fb}}\) are sent back to the BS via analog feedback. This means that, just like the analog QAM modulation, real and imaginary parts of each complex-valued symbol in the feedback message modulate carriers that have a 90 degrees phase difference. These carriers are then combined and the resulting signal is sent to the BS. To avoid any confusion, we emphasize that the feedback symbols are not quantized and the user does _not_ use a digital QAM modulation here. Also, note that there is no factual problem with transmitting unquantized feedback symbols: even in the prevalent OFDM signaling with digital QAM, the continuous time-domain I and Q signal after the IFFT is transmitted effectively unquantized (quantized with 10-12 bits per sample). Besides, we model the UL channel as an AWGN channel which is orthogonally accessed by the users. Then, the BS receives the noisy feedback signal as
\[\mathbf{y}_{k}^{\mathrm{fb}}=\mathbf{x}_{k}^{\mathrm{fb}}+\mathbf{z}_{k}^{ \mathrm{fb}}, \tag{7}\]
where \(\mathbf{z}_{k}^{\mathrm{fb}}\sim\mathcal{CN}(\mathbf{0},\mathbf{I}_{\beta})\) is the noise vector.
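The feedback operations (5)-(7) reduce to a power normalization at the user and an AWGN observation at the BS. A minimal sketch is given below; the received pilot vector and \(P_{\mathrm{ul}}\) are passed in as arguments.

```python
import numpy as np

def analog_feedback(y_p, beta, P_ul, rng):
    """Power-normalized analog feedback, Eqs. (5)-(7): the user rescales its
    received pilot vector and the BS observes it through unit-variance AWGN."""
    rho = beta * P_ul / np.linalg.norm(y_p) ** 2                    # Eq. (5)
    x_fb = np.sqrt(rho) * y_p
    z_fb = (rng.standard_normal(beta) + 1j * rng.standard_normal(beta)) / np.sqrt(2)
    return x_fb + z_fb, rho                                         # Eq. (7)
```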
**Remark 1**: _The UL channel can be generally modeled as a multiple-access channel (MAC), but we consider the special case of the AWGN channel with orthogonal access for simplicity and defer the treatment of more general models to a future work. Note that most previous works do not discuss the feedback channel model at all and assume availability of perfect, error-free feedback at the BS [13, 15, 21]. \(\Diamond\)_
**Remark 2**: _By adopting the proposed analog feedback strategy, there is no need for complex processing at the user side. This is in contrast to schemes that involve deploying a DNN at the user side (see e.g., [4, 14, 15]) that have two disadvantages: First, the forward pass of a DNN involves consecutive matrix multiplications and applying activation functions which consume time. Second, and more importantly, if the DNN is trained at the BS side, its optimized parameters should be transferred to the user as soon as it enters the cell. Given that the number of parameters in a DNN can be in the order of millions, this imposes a large overhead on DL resources. Our scheme avoids both of these disadvantages by using analog feedback. \(\Diamond\)_
### _Effective Channel Estimation and Precoding_
Given the feedback signal, the BS computes an MMSE estimate of effective channels as
\[\mathbf{\hat{g}}_{k}=\mathbb{E}[\mathbf{g}_{k}|\mathbf{y}_{k}^{\mathrm{fb}}]= \mathbf{C}_{gy,k}\mathbf{C}_{yy,k}^{-1}\mathbf{y}_{k}^{\mathrm{fb}},\;k\in[K], \tag{8}\]
where
\[\mathbf{C}_{gy,k}=\mathbb{E}[\mathbf{g}_{k}(\mathbf{y}_{k}^{\mathrm{fb}})^{ \mathrm{H}}]=\sqrt{\rho_{k}}\mathbf{B}\mathbf{C}_{k}(\mathbf{X}^{\mathrm{p}})^ {\mathrm{H}}, \tag{9}\]
\[\mathbf{C}_{yy,k}=\mathbb{E}[\mathbf{y}_{k}^{\mathrm{fb}}(\mathbf{y}_{k}^{\mathrm{fb}})^{\mathrm{H}}]=\rho_{k}\mathbf{X}^{\mathrm{p}}\mathbf{C}_{k}(\mathbf{X}^{\mathrm{p}})^{\mathrm{H}}+(1+\rho_{k})\mathbf{I}_{\beta}. \tag{10}\]
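A direct implementation of the estimator (8)-(10) is sketched below; the covariance \(\mathbf{C}_{k}\) and the remaining quantities are assumed to be available from the training and feedback steps above.

```python
import numpy as np

def mmse_effective_channel(y_fb, rho, X_p, B, C_k):
    """MMSE estimate of the effective channel g_k = B h_k from the noisy
    feedback y_fb = sqrt(rho)*(X_p h_k + z_p) + z_fb, Eqs. (8)-(10)."""
    beta = X_p.shape[0]
    C_gy = np.sqrt(rho) * B @ C_k @ X_p.conj().T                        # Eq. (9)
    C_yy = rho * X_p @ C_k @ X_p.conj().T + (1 + rho) * np.eye(beta)    # Eq. (10)
    return C_gy @ np.linalg.solve(C_yy, y_fb)                           # Eq. (8)
```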
Next, the BS transmits data in DL using a linear precoder as follows. Let \(\mathbf{s}=[s_{1},\ldots,s_{K}]\) denote a row vector consisting of the user data symbols, each satisfying \(\mathbb{E}[|s_{k}|^{2}]=1\). The precoded data vector is then given by \(\mathbf{x}^{\mathrm{d}}=\mathbf{s}\mathbf{V}\in\mathbb{C}^{1\times M}\), where \(\mathbf{V}\) is a linear precoding matrix and the superscript "d" stands for "data". Similar to the design of the pilot matrix in (4), we propose a construction of the precoder as the product
\[\mathbf{V}=\widetilde{\mathbf{V}}\mathbf{B}, \tag{11}\]
where \(\mathbf{B}\) is the pre-beamforming matrix to be designed and \(\widetilde{\mathbf{V}}\) is a zero-forcing precoder on the estimated effective channels. Denoting the estimated effective channels by \(\mathbf{\hat{G}}=[\mathbf{\hat{g}}_{1},\ldots,\mathbf{\hat{g}}_{K}]\), this precoder is given by
\[\widetilde{\mathbf{V}}=\sqrt{\alpha}\left(\mathbf{\hat{G}}^{\mathrm{H}} \mathbf{\hat{G}}\right)^{-1}\mathbf{\hat{G}}^{\mathrm{H}}\in\mathbb{C}^{K \times M}, \tag{12}\]
where each row represents the precoding vector of a user and \(\alpha>0\) is a scalar that forces the precoder to satisfy
\[\mathrm{Tr}(\mathbf{V}\mathbf{V}^{\mathrm{H}})\leq P_{\mathrm{dl}}. \tag{13}\]
The point of decomposing the precoder in (11) is for it to first map the original channel to the effective channel through \(\mathbf{B}\) and then apply zero-forcing on the effective channels. The received data symbol at user \(k\) is given as
\[y_{k}^{\mathrm{d}} =\mathbf{x}^{\mathrm{d}}\mathbf{h}_{k}+z_{k}^{\mathrm{d}} \tag{14}\] \[=\mathbf{v}_{k}\mathbf{h}_{k}s_{k}+\sum_{k^{\prime}\neq k}^{K} \mathbf{v}_{k^{\prime}}\mathbf{h}_{k}s_{k^{\prime}}+z_{k}^{\mathrm{d}}, \tag{15}\]
where \(\mathbf{v}_{k}\) is the \(k\)-th row of \(\mathbf{V}\) and \(z_{k}^{\mathrm{d}}\sim\mathcal{CN}(0,1)\) is the AWGN. Treating interference as noise and assuming knowledge of the signal and interference coefficients at the receiver, the achievable ergodic sum-rate in DL is given by [22]
\[R_{\mathrm{sum}}=\sum_{k=1}^{K}\mathbb{E}\left[\log_{2}\left(1+\frac{|\mathbf{ v}_{k}\mathbf{h}_{k}|^{2}}{1+\sum_{k^{\prime}\neq k}|\mathbf{v}_{k^{\prime}} \mathbf{h}_{k}|^{2}}\right)\right], \tag{16}\]
where the expectation is taken over channel and noise distributions. Note that the terms \(\{|\mathbf{v}_{k^{\prime}}\mathbf{h}_{k}|^{2}\ :\ k,k^{\prime}\in[K],\ k\neq k^{\prime}\}\) are interference coefficients between channels of users \(k\) and \(k^{\prime}\). The precoder \(\mathbf{V}\) is a function of the pre-beamforming matrix \(\mathbf{B}\) through the pilot matrix \(\mathbf{X}^{\mathrm{p}}\) in (4), the channel estimates in (8) and the resulting precoder in (11). Our goal is to design \(\mathbf{B}\) based on available DL channel covariances at the BS, such that the ergodic sum-rate is maximized, i.e., we want to find a mapping \(\mathcal{P}_{\mathbf{B}}(\cdot)\) from the set of \(K\) user channel covariances to the pre-beamforming matrix \(\mathbf{B}\) that maximizes ergodic sum-rate. We further denote the mapping described by Eqs. (8)-(13) from user feedback signals denoted by \(\mathbf{Y}^{\mathrm{fb}}=\left[\mathbf{y}_{1}^{\mathrm{fb}},\ldots,\mathbf{y}_ {K}^{\mathrm{fb}}\right]\) to the precoder by \(f_{\mathrm{pc}}\left(\ \cdot\ ;\mathbf{B}\right)\), so that \(\mathbf{V}=f_{\mathrm{pc}}\left(\mathbf{Y}^{\mathrm{fb}};\mathbf{B}\right)\). Now, the sum-rate maximization problem can be posed as:
\[\underset{\mathcal{P}_{\mathbf{B}}(\cdot)}{\text{maximize}}\quad R_{\mathrm{sum}}\] (17a) subject to \[\mathbf{B}=\mathcal{P}_{\mathbf{B}}\left(\{\mathbf{C}_{k}\}_{k=1}^{K}\right),\] (17b) \[\mathbf{V}=f_{\mathrm{pc}}(\mathbf{Y}^{\mathrm{fb}};\mathbf{B}),\] (17c) \[\|[\mathbf{W}\mathbf{B}]_{i,}\|^{2}\leq P_{\mathrm{dl}},\;\forall i\in[\beta],\] (17d) \[\text{the feedback power constraint (6)}.\] (17e)
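Before describing how \(\mathbf{B}\) is produced, we note that, for a fixed \(\mathbf{B}\) and a given set of channel estimates, the precoder construction (11)-(13) and the rate expression (16) are straightforward to evaluate numerically. The sketch below computes the instantaneous (per-realization) sum-rate; averaging over many channel and noise realizations yields the empirical estimate of \(R_{\mathrm{sum}}\). Input shapes and names are illustrative assumptions.

```python
import numpy as np

def zf_precoder(G_hat, B, P_dl):
    """Precoder of Eqs. (11)-(13). G_hat is M x K (columns are estimated
    effective channels); the power scaling is applied after multiplying by B,
    which is equivalent to choosing alpha in Eq. (12)."""
    V_tilde = np.linalg.pinv(G_hat)                  # (G^H G)^{-1} G^H, of size K x M
    V = V_tilde @ B                                  # Eq. (11)
    alpha = P_dl / np.trace(V @ V.conj().T).real
    return np.sqrt(alpha) * V                        # satisfies Eq. (13) with equality

def instantaneous_sum_rate(V, H):
    """Sum-rate of Eq. (16) for one channel realization. V is K x M (row k
    serves user k); H is M x K (true channels as columns)."""
    S = np.abs(V @ H) ** 2                           # S[k', k] = |v_k' h_k|^2
    K = H.shape[1]
    rates = [np.log2(1 + S[k, k] / (1 + S[:, k].sum() - S[k, k])) for k in range(K)]
    return float(sum(rates))
```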
## III Pre-Beamforming Based on Channel Statistics
In massive MIMO, it is typical to have a pilot length that is small relative to the channel dimension (\(\beta<M\)). This results, from (1), in an underdetermined system of noisy linear
equations from which the effective channel \(\mathbf{g}_{k}=\mathbf{B}\mathbf{h}_{k}\) must be estimated. Because the system is underdetermined, the channel estimation error can be high even with the MMSE estimator. Given the effective channel covariance \(\mathbb{E}[\mathbf{g}_{k}\mathbf{g}_{k}^{\mathsf{H}}]=\mathbf{B}\mathbf{C}_{k} \mathbf{B}^{\mathsf{H}}\), it is shown in [19] that if \(\beta<\text{rank}\left(\mathbf{B}\mathbf{C}_{k}\mathbf{B}^{\mathsf{H}}\right)\), then the effective channel estimation MSE scales as \(O(1)\) when \(\text{SNR}_{\text{dil}}\rightarrow\infty\). This means that a small pilot dimension leads to a constant channel estimation error which is independent of SNR. In addition, when the channel estimation error is large, naive zero-forcing results in large interference coefficients between the users in the denominator of (16) and reduces the ergodic sum-rate. Thus, the pre-beamforming matrix should be designed such that the rank of the effective channel covariance \(\mathbf{B}\mathbf{C}_{k}\mathbf{B}^{\mathsf{H}}\) becomes smaller to reduce the estimation error with a given pilot dimension. On the other hand, the effective rank should not reduce too much because then the signal coefficient \(|\mathbf{v}_{k}\mathbf{h}_{k}|^{2}\) in the numerator of (16) reduce, resulting in smaller ergodic sum-rate. In the extreme case, if \(\mathbf{B}=\mathbf{0}\) then the sum-rate will be zero. These two effects imply that the pre-beamforming matrix \(\mathbf{B}\) should be a transformation that reduces the inherent dimension (i.e., channel covariance rank) of effective channels down to a certain value to achieve a favourable trade-off in minimizing interference and maximizing signal coefficients.
We simplify the design problem by exploiting properties of the channel covariance. It is known that the covariance of a ULA channel with large \(M\) is (approximately) diagonalized by the DFT matrix, thanks to the similarity of large Toeplitz matrices to their Circulant equivalents and the famous Szego's theorem [23, 24]. In other words, the channel covariance can be approximately decomposed as
\[\mathbf{C}_{k}\approx\mathbf{F}\text{diag}(\boldsymbol{\gamma}_{k})\mathbf{F} ^{\mathsf{H}}, \tag{18}\]
where \(\boldsymbol{\gamma}_{k}\in\mathbb{R}_{+}^{M}\) is the vector of channel covariance eigenvalues of user \(k\) and \(\mathbf{F}\in\mathbb{C}^{M\times M}\) is the DFT matrix whose \((m,n)\)-th entry is given by \([\mathbf{F}]_{m,n}=\frac{1}{\sqrt{M}}e^{-j2\pi\frac{mn}{M}},\;m,n\in[M]\). We simplify the design of \(\mathbf{B}\) by restricting it to belong to the set
\[\mathcal{B}\triangleq\{\mathrm{diag}(\boldsymbol{\lambda})\mathbf{F}^{ \mathsf{H}}\ :\ \boldsymbol{\lambda}\in[0,1]^{M}\}. \tag{19}\]
Then the covariance of the effective channel is given by
\[\mathbf{B}\mathbf{C}_{k}\mathbf{B}^{\mathsf{H}}\approx\mathrm{diag}\left( \boldsymbol{\lambda}^{2}\odot\boldsymbol{\gamma}_{k}\right) \tag{20}\]
where \(\boldsymbol{\lambda}^{2}\) denotes element-wise square of \(\boldsymbol{\lambda}\) and \(\odot\) denotes element-wise product. Essentially with this design choice, the "effective rank" of the covariance is equivalent to the number of large coefficients in \(\boldsymbol{\lambda}\) and therefore this vector controls the inherent dimension of effective channels. From a different perspective, \(\boldsymbol{\lambda}\) can be seen as "beam-selection" vector, since the DFT columns are equivalent to the array steering vectors of a ULA evaluated on a grid of angle-of-departures (AoDs). If the \(m\)-th coordinate of \(\boldsymbol{\lambda}\) (\(\lambda_{m}\in[0,1]\)) is small (\(\lambda_{m}\to 0\)), then the contribution of the \(m\)-th beam in the effective channel of all users will be eliminated. In this sense, the present work is similar to the _active channel sparsification_ (ACS) method in [19] which proposed beam-selection with the objective of maximizing the multiplexing gain. However, the DNN-based method proposed here aims to directly maximize the sum-rate and in this sense extends the idea presented in [19].
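The effect of the restriction (19) is easy to verify numerically: under the covariance model (18), the effective covariance is exactly diagonal with entries \(\lambda_{m}^{2}\gamma_{k,m}\). The short check below uses an arbitrary \(\boldsymbol{\lambda}\) and eigenvalue vector, both illustrative.

```python
import numpy as np

M = 64
F = np.fft.fft(np.eye(M)) / np.sqrt(M)             # unitary DFT matrix
rng = np.random.default_rng(0)

gamma_k = rng.random(M)                            # covariance eigenvalues of user k
C_k = F @ np.diag(gamma_k) @ F.conj().T            # Eq. (18), exact by construction here
lam = rng.random(M)                                # beam-selection vector in [0,1]^M
B = np.diag(lam) @ F.conj().T                      # Eq. (19)

eff_cov = B @ C_k @ B.conj().T
assert np.allclose(eff_cov, np.diag(lam**2 * gamma_k))   # Eq. (20)
```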
### _DNN-Based Optimization_
We employ a DNN to produce the \(\boldsymbol{\lambda}\) vector based on the input channel covariances \(\{\mathbf{C}_{k}\}_{k=1}^{K}\). Based on the \(K\) covariances, the given pilot dimension and SNR, the network is trained to output a \(\boldsymbol{\lambda}\) that maximizes the sum-rate. Note that the channel covariance of a ULA is a Toeplitz Hermitian matrix that is fully determined by its first column. Denoting the first columns of the \(K\) covariances by \(\mathbf{c}_{1},\ldots,\mathbf{c}_{K}\), we define the matrix \(\boldsymbol{\Sigma}=\left[\mathbf{c}_{1},\ldots,\mathbf{c}_{K}\right]\in\mathbb{C}^{M\times K}\). Then the optimization problem (17) can be reformulated as
\[\underset{\boldsymbol{\Theta}}{\text{maximize}}\quad R_{\text{sum}} \tag{21a}\] subject to \[\boldsymbol{\lambda}=\mathcal{P}_{\boldsymbol{\lambda}}\left(\boldsymbol{\Sigma};\boldsymbol{\Theta}\right),\] (21b) \[\mathbf{B}=\mathrm{diag}(\boldsymbol{\lambda})\mathbf{F}^{\mathsf{H}},\] (21c) \[\text{the remaining constraints (17c)-(17e)}, \tag{21d}\]
where \(\mathcal{P}_{\boldsymbol{\lambda}}(\cdot;\boldsymbol{\Theta}):\mathbb{C}^{M \times K}\rightarrow\mathbb{C}^{M}\) is the mapping from the covariance first columns to the beam-selection vector associated with the DNN with parameters \(\boldsymbol{\Theta}\). The proposed architecture is illustrated in Fig. 1.
We solve (21) in a data-driven fashion to optimize network parameters by generating random realizations of \(\boldsymbol{\Sigma}\) according to a distribution \(\mathcal{D}\). This distribution is typically based on geometric properties of the scattering environment, such as the number of paths, the distribution of AoDs and their associated powers. In practice, random realizations of this distribution can be collected at the BS at different times for \(K\) randomly located users in the cell. In our simulation results, we consider random samples of \(\mathcal{D}\) to be generated from a multipath scattering model that is independent across users and is parametrized by the number of paths, uniformly distributed AoDs and powers. Then, each random sample of \(\mathcal{D}\) is given as input to the DNN. The expected value in the objective function (21a) is replaced by an empirical mean obtained by generating many independent samples of the DL channel for each user.
**Remark 3**: _The main difference between our design and the DNN-based scheme in [15] is that we learn a mapping between \(\boldsymbol{\Sigma}\) and \(\mathbf{B}\), exploiting the channel second order statistics for different users. In contrast, [15] proposes to learn a pilot matrix that should "fit" all the user channel statistics from a large ensemble, and not the specific statistics of the \(K\) users that are scheduled to be served in a single frame. \(\lozenge\)_
### _DNN Implementation Details_
Our DNN consists of three fully-connected layers, where the numbers of hidden neurons per layer are \([\ell_{1},\ell_{2},\ell_{3}]=[1024,512,M]\). We use ReLU activation functions in all hidden layers. In order to produce \(\boldsymbol{\lambda}\) in \([0,1]^{M}\), we use a \(\tanh\) activation in the output layer and scale its output to \([0,1]\) as \(0.5(\tanh(\cdot)+1)\). We implement the network in PyTorch [25] and train it with the Adam optimizer [26], a batch size of \(1024\), and an initial learning rate of \(10^{-4}\). For fast convergence, a batch normalization layer is added before each linear layer [27].
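A minimal PyTorch sketch consistent with the above description is given below. How the complex-valued input \(\boldsymbol{\Sigma}\) is fed to the network (here, real and imaginary parts flattened into one vector) is an illustrative assumption, and the exact layer ordering of the original implementation may differ.

```python
import torch
import torch.nn as nn

class BeamSelectionNet(nn.Module):
    """Maps the stacked covariance first columns to lambda in [0,1]^M."""
    def __init__(self, M=64, K=6):
        super().__init__()
        in_dim = 2 * M * K                     # real and imaginary parts of Sigma, flattened
        self.net = nn.Sequential(
            nn.BatchNorm1d(in_dim), nn.Linear(in_dim, 1024), nn.ReLU(),
            nn.BatchNorm1d(1024),   nn.Linear(1024, 512),    nn.ReLU(),
            nn.BatchNorm1d(512),    nn.Linear(512, M),
        )

    def forward(self, sigma_flat):             # sigma_flat: (batch, 2*M*K)
        return 0.5 * (torch.tanh(self.net(sigma_flat)) + 1.0)   # scale tanh output to [0,1]

model = BeamSelectionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# Training loop (not shown): sample Sigma and channels, build B = diag(lambda) F^H,
# simulate training/feedback/precoding, and maximize the empirical sum-rate (21a).
```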
## IV Numerical Results
For the simulations, we consider \(M=64\) antennas, \(K=6\) users, \(\beta=8\) pilots. We stress the point that in general,
estimating a set of six \(64\)-dimensional channels from a DL pilot of length \(8\) is extremely difficult, and successful performance in such a setup is noteworthy. We set the DL SNR to \(P_{\rm dl}=20\) and consider a scattering channel model with \(L\) paths. The DL channel covariance of user \(k\) is given by
\[{\bf C}_{k}=\sum_{\ell=1}^{L}\eta_{k,\ell}{\bf a}(\theta_{k,\ell}){\bf a}^{\sf H }(\theta_{k,\ell}),\;\forall k\in[K], \tag{22}\]
where \(\eta_{k,\ell}\) and \(\theta_{k,\ell}\) are the power and the AoD of the \(\ell\)-th channel path of user \(k\), and where \({\bf a}(\theta)\in\mathbb{C}^{M}\) is the steering vector of a ULA, whose \(m\)-th entry is given by \([{\bf a}(\theta)]_{m}=e^{j\frac{2\pi d}{\lambda^{\prime}}(m-1)\sin(\theta)}\), \(m\in[M]\), where \(d\) is the antenna spacing and \(\lambda^{\prime}\) is the carrier wavelength. We assume the maximum array angular aperture to be given by \(\theta_{\max}=60^{\circ}\) and assume that the antenna spacing is set to \(d=\frac{\lambda^{\prime}}{2\sin\theta_{\max}}\). The user AoDs are generated independently from a uniform distribution, i.e., \(\theta_{k,\ell}\sim\mathcal{U}(-\theta_{\max},\theta_{\max})\). The path powers are randomly and uniformly generated in the real interval \([0.4,0.8]\) and then scaled to sum to one, i.e., \(\sum_{\ell=1}^{L}\eta_{k,\ell}=1,\forall k\in[K]\). Choosing powers as such is not necessary, and is simply to avoid path powers close to zero. We recall that the input of the DNN is the matrix \(\mathbf{\Sigma}\) containing the first covariance columns of all users as its columns. When generated according to the distribution of AoDs and powers as above, \(\mathbf{\Sigma}\) follows a distribution \(\mathcal{D}(L)\), parameterized by the number of paths \(L\). This specific characterization is just used here to perform the simulations. In general, one can choose any family of distributions to generate \(\mathbf{\Sigma}\) and train the DNN accordingly. In the upcoming simulations, we provide results for two important scenarios: (a) sparse scattering with \(L=2\) paths per user, and (b) rich scattering with \(L=20\) paths. Note that the number of paths is equivalent to the channel covariance rank. Given that the pilot length is \(\beta=8\), training channels with a large \(L\) is more difficult than training those with a small \(L\). For each case, we generate the training and testing data with randomly generated samples of \(\mathbf{\Sigma}\) according to \(\mathcal{D}(L)\). The training data is randomly generated in each epoch with a fixed series of random seeds. The testing data contains \(1000\) randomly generated samples of \(\mathbf{\Sigma}\), and for each random covariance sample, we generate \(10\) random instantaneous channel samples as well as DL and UL additive noise vectors. The same testing data is used to produce results for all the baseline methods.
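For reproducibility, the generation of one random network input \(\boldsymbol{\Sigma}\) under this scattering model can be sketched as follows; the random seed and the helper structure are arbitrary choices.

```python
import numpy as np

M, K, L = 64, 6, 2
theta_max = np.deg2rad(60.0)
d_over_lam = 1.0 / (2.0 * np.sin(theta_max))            # d / lambda'

def steering(theta):
    m = np.arange(M)                                     # corresponds to (m-1), m in [M]
    return np.exp(1j * 2 * np.pi * d_over_lam * m * np.sin(theta))

def user_covariance(rng):
    thetas = rng.uniform(-theta_max, theta_max, L)       # AoDs, uniform in (-theta_max, theta_max)
    etas = rng.uniform(0.4, 0.8, L)
    etas /= etas.sum()                                   # path powers scaled to sum to one
    C = np.zeros((M, M), dtype=complex)
    for eta, th in zip(etas, thetas):
        a = steering(th)
        C += eta * np.outer(a, a.conj())                 # Eq. (22)
    return C

rng = np.random.default_rng(0)
Sigma = np.stack([user_covariance(rng)[:, 0] for _ in range(K)], axis=1)   # M x K DNN input
```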
### _Comparison Baselines_
We compare our proposed scheme with the state-of-the-art DNN-based design in [15] that is under digital feedback and without channel statistic knowledge.1 The number of feedback symbols in analog feedback is \(\beta\). Considering a UL channel capacity of \(C_{\rm ul}=\log_{2}(1+P_{\rm ul})\) bits per channel use, this translates to \(B=\beta C_{\rm ul}\) feedback bits. In order to make a fair comparison between analog and digital feedback, we set the UL transmit power to \(P_{\rm ul}=2^{B/\beta}-1\) so that both strategies feed back the same amount of data. Additionally, we provide results of maximum ratio transmission (MRT) precoding and ZF precoding under perfect DL CSI. The precoder of MRT and ZF are respectively obtained by \({\bf V}_{\rm MRT}=J_{\rm MRT}{\bf H}^{\sf H}\) and \({\bf V}_{\rm ZF}=J_{\rm ZF}({\bf H}^{\sf H}{\bf H})^{-1}{\bf H}^{\sf H}\), where \({\bf H}=\left[{\bf h}_{1},\ldots,{\bf h}_{K}\right]\) and \(J_{\rm MRT}\) and \(J_{\rm ZF}\) are power normalization scalars to satisfy the power constraint (13). Furthermore, we also provide results for the case of training and precoding without the pre-beamforming matrix. This is equivalent to setting \({\bf B}={\bf F}^{\sf H}\) which performs only a rotation on the channel and is the same as setting \(\mathbf{\lambda}={\bf 1}\) (a vector of all ones). Comparing to this case, the performance improvement by optimizing \(\mathbf{\lambda}\) will become clear. Finally, both with and without pre-beamforming, the matrix constituent \({\bf W}\) of the pilot matrix in (4) is generated randomly with standard normal elements and will be fixed in training. We noted earlier that the choice of this matrix is arbitrary as long as it is full-rank (which is the case, with probability 1, when each element generated as a standard normal random variable). We have tried optimizing this matrix, jointly with the rest of the network, but this did not result in noticeable gain in performance and therefore was ignored.
Footnote 1: We have trained the DNN proposed in [15] according to their public code. Our code can be found in [https://github.com/YiSongTUBerlin/DL-Aided-Channel-Training-and-Precoding-in-FDD-Massive-MIMO-with-Channel-Statistics-Knowledge.git](https://github.com/YiSongTUBerlin/DL-Aided-Channel-Training-and-Precoding-in-FDD-Massive-MIMO-with-Channel-Statistics-Knowledge.git)
### _Performance Comparison_
The sum-rate performance vs feedback capacity (in bits) for sparse scattering with \(L=2\) is illustrated in Fig. 2. We observe that our proposed DNN-based technique outperforms all rival methods (except ZF with perfect CSI). In particular, we see a significant performance advantage in comparison to the DNN-based method in [15], especially for small feedback sizes. This should be mainly attributed to the fact that our
Fig. 1: System schematic for DNN-aided FDD multi-user training and precoding with channel statistics knowledge. The proposed system takes DL covariance matrices \(\mathbf{\Sigma}\) as input and is trained with DL channels \(\{{\bf h}_{k}\}\) to output the precoder \({\bf V}\).
proposed scheme exploits channel statistics knowledge at the BS. Even when there is practically no feedback (\(B\to 0\)), our scheme is able to achieve a relatively large sum-rate because one component of the designed precoder in (11), namely the pre-beamforming matrix \(\mathbf{B}\) depends only on channel statistics and not the feedback. In this case, our DNN is essentially performing a kind of statistical beamforming with (almost) no CSI. Interestingly, statistical beamforming is shown to be very effective in the case of sparse channels [28], which supports the observed behavior here. The proposed method also outperforms MRT, which is due to the use of ZF precoding in our architecture. The advantage with respect to the case with no pre-beamforming matrix (the red curve) is rather small. This is due to the fact that the channels are sparse and \(L<\beta\), and therefore with sufficient feedback no beam-selection is necessary. Here, the optimized beam-selection vector is \(\boldsymbol{\lambda}\approx\mathbf{1}\), which is very close to no pre-beamforming with \(\mathbf{B}=\mathbf{F}^{\text{H}}\). Finally, we see a similar advantage in comparison to the ACS method in [19], resulting from the direct maximization of sum-rate rather than multiplexing gain.
The methods are compared in the rich scattering scenario with \(L=20\) in Fig. 3. Here, since \(\beta<L\), we expect that using an optimized pre-beamforming matrix yields a large performance gain. This is confirmed by the results of Fig. 3, where we see that the proposed DNN-based method clearly outperforms the competitor methods. The performance advantage with respect to the case with no pre-beamforming is clearly observed and shows that, even with channel statistics knowledge, the system under-performs because of the interference caused by MMSE channel estimation with \(\beta<L\). The proposed method also performs better than the ACS method in [19], since it directly maximizes sum-rate instead of multiplexing gain. Finally, the combination of channel statistics knowledge and optimized pre-beamforming results in the much higher sum-rate values achieved by our method with respect to the DNN-based method proposed in [15] for all feedback sizes. We point out that ZF precoding with perfect CSI yields a sum-rate of \(\approx 60\) bits/s/Hz, which is much larger than the rest and is omitted from Fig. 3 for a better representation of the results.
In Fig. 4, we present a heat map of the optimized \(\boldsymbol{\lambda}\) of the proposed scheme for \(50\) random realizations of \(\mathcal{D}(L)\) (stacked as rows), for three different combinations of parameters, namely \((L=2,B=1)\) (sparse scattering, small feedback) in Fig. 4a, \((L=20,B=40)\) (rich scattering, large feedback) in Fig. 4b, and \((L=20,B=1)\) (rich scattering, small feedback) in Fig. 4c. First, we observe in Fig. 4a that under \(L=2\) the learned \(\boldsymbol{\lambda}\) are almost all ones because the channels are sparse enough that no pre-beamforming is needed and agrees with the sum-rate performance presented in Fig. 2. In Fig. 4b, since \(\beta<L\), the DNN produces beam-selection vectors that contain many zeros, meaning that many beams are not selected in the pre-beamformer. If we decrease the feedback size from \(B=40\) to \(B=1\) bits, we have the results of Fig. 4c where even less beams are selected (more elements in \(\boldsymbol{\lambda}\) turn out to be zero) because feedback size is extremely small and the DNN chooses accordingly to train effective channels with fewer coefficients.
## V Conclusion
We proposed a DNN-based channel training and precoding scheme with channel statistics knowledge at the BS for FDD massive MIMO systems. The DNN is trained over an ensemble of channel statistics (provided by the cell geometric environment) and generates, for any given input of user channel statistics, a pre-beamforming matrix that maps original channels to effective channels such that the DL sum-rate is maximized. The proposed system works with analog feedback and requires no DNN implemented at the user side, which makes it far more practical than most DNN-based approaches in the literature. Our numerical results showed the significant advantage offered by this architecture for both sparse and rich scattering scenarios and various feedback sizes.
|
2310.18834 | Automating the Correctness Assessment of AI-generated Code for Security
Contexts | Evaluating the correctness of code generated by AI is a challenging open
problem. In this paper, we propose a fully automated method, named ACCA, to
evaluate the correctness of AI-generated code for security purposes. The method
uses symbolic execution to assess whether the AI-generated code behaves as a
reference implementation. We use ACCA to assess four state-of-the-art models
trained to generate security-oriented assembly code and compare the results of
the evaluation with different baseline solutions, including output similarity
metrics, widely used in the field, and the well-known ChatGPT, the AI-powered
language model developed by OpenAI. Our experiments show that our method
outperforms the baseline solutions and assesses the correctness of the
AI-generated code similar to the human-based evaluation, which is considered
the ground truth for the assessment in the field. Moreover, ACCA has a very
strong correlation with the human evaluation (Pearson's correlation coefficient
r=0.84 on average). Finally, since it is a fully automated solution that does
not require any human intervention, the proposed method performs the assessment
of every code snippet in ~0.17s on average, which is definitely lower than the
average time required by human analysts to manually inspect the code, based on
our experience. | Domenico Cotroneo, Alessio Foggia, Cristina Improta, Pietro Liguori, Roberto Natella | 2023-10-28T22:28:32Z | http://arxiv.org/abs/2310.18834v2 | # Automating the Correctness Assessment of AI-generated Code for Security Contexts
###### Abstract
Evaluating the correctness of code generated by AI is a challenging open problem. In this paper, we propose a fully automated method, named _ACCA_, to evaluate the correctness of AI-generated code for security purposes. The method uses symbolic execution to assess whether the AI-generated code behaves as a reference implementation. We use _ACCA_ to assess four state-of-the-art models trained to generate security-oriented assembly code and compare the results of the evaluation with different baseline solutions, including output similarity metrics, widely used in the field, and the well-known ChatGPT, the AI-powered language model developed by OpenAI.
Our experiments show that our method outperforms the baseline solutions and assesses the correctness of the AI-generated code similar to the human-based evaluation, which is considered the ground truth for the assessment in the field. Moreover, _ACCA_ has a very strong correlation with the human evaluation (Pearson's correlation coefficient \(r=0.84\) on average). Finally, since it is a fully automated solution that does not require any human intervention, the proposed method performs the assessment of every code snippet in \(\sim 0.17\)s on average, which is definitely lower than the average time required by human analysts to manually inspect the code, based on our experience.
keywords: Code correctness, AI code generators, Assembly, Offensive Security, Symbolic Execution
## 1 Introduction
Artificial Intelligence (AI) code generators use Neural Machine Translation (NMT) models to turn natural language (NL) descriptions into programming code. They represent a powerful asset in the arsenal of cybersecurity professionals and malicious programmers. Indeed, AI (_offensive_) code generators are becoming an attractive solution for creating _proof-of-concept_ exploits in order to assess the exploitability and severity of software vulnerabilities [1; 2; 3], letting the AI help developers generate low-level (i.e., assembly) and complex code with reduced effort and improved effectiveness.
Despite the dramatic increase in the adoption of AI code generators, they still have limitations and potential drawbacks. For example, they may not always generate code that is _correct_, i.e., code that performs what is required from the NL description, as they may struggle with more complex programming tasks that require human creativity and problem-solving skills, or may incorrectly interpret developers' descriptions. Furthermore, AI code generators can introduce security vulnerabilities if not properly tested and validated [4; 5; 6]. For these reasons, assessing the correctness of AI-generated code becomes a crucial challenge.
From the existing literature, it comes out that one of the most effective ways to assess the correctness of generated code is to perform a manual code review (i.e., _human evaluation_) [7; 8]. This involves having a human expert review the code and identify any errors or inconsistencies with the NL description. However, human evaluation has several limitations. First, manual analysis can be a time-consuming process. Indeed, reviewers must carefully examine each line of code and thoroughly test the software to ensure that it meets the intended requirements and NL specifications. This process also requires reviewers to be highly knowledgeable about the programming language, development environment, and intended functionality of the code to provide accurate assessments. Moreover, the analysis can be subjective, as different reviewers may have different interpretations of the code and its intended functionality, depending on the expertise and experience of the reviewer. This can lead to inconsistent assessments of code correctness. Last but not least, manual analysis is prone to human error, as reviewers may
miss subtle errors or inconsistencies in the code, or may introduce errors and biases into their assessments due to factors such as fatigue, distractions, or subjective opinions. From the above considerations, it is clear that what we gained from the help of AI, we lost due to the manual review.
Unfortunately, there is currently no fully automated solution that can perform the human evaluation of AI-generated code. In fact, although existing automated testing and code analysis tools can effectively identify errors or inconsistencies in code, they do not provide any insights into whether the code is what is actually required by developers [9; 10; 11; 12]. Moreover, these solutions often require as input entire, compilable programs (e.g., entire functions) rather than single code snippets, which is instead often the case with AI-generated code.
Besides the automated solution issue, there is a more important one, i.e., how to evaluate the correctness of AI-generated code. Indeed, previous studies proposed a large number of _output similarity_ metrics, i.e., metrics computed by comparing the textual similarity of generated code with a ground-truth reference implementation [13; 14; 15]. The major advantage of the proposed metrics is that they are reproducible, easily tuned, and time-saving. However, in the context of programming code generation, existing metrics are not able to fully reflect the correctness of the code.
As illustrated in the next section, generated code can be different from the reference but still be correct (e.g., the assembly conditional jumps jz and je are different instructions that can be used to perform the same operation); or, there can be subtle differences between the generated and the reference code, which can be similar yet produce different outputs (e.g., the assembly conditional jumps je and jne are syntactically similar instructions, but they perform the opposite operation). Hence, it is crucial to develop novel, more accurate methods for automatically evaluating the correctness of AI-generated code.
This paper proposes a method, named _ACCA_ (_Assembly Code Correctness Assessment_), to automatically assess the correctness of assembly AI-generated code without any human effort. More precisely, our solution leverages symbolic execution to assess whether the generated code behaves as a reference implementation, despite syntactic differences between the reference and the generated code.
We apply _ACCA_ to assess four state-of-the-art NMT models in the generation of security-oriented code in assembly language starting from NL descriptions in the English language, and compare the results of _ACCA_ with
the human evaluation and several baseline assessment solutions, including a wide range of output similarity metrics and the well-known ChatGPT by OpenAI. We show that the proposed method provides an almost perfect assessment of the code correctness and has a very strong correlation with the human evaluation, outperforming all the baseline assessment solutions.
In the following, Section 2 introduces a motivating example; Section 3 describes _ACCA_; Section 4 presents the experimental setup; Section 5 shows the experimental results; Section 6 presents the related work; Section 7 concludes the paper.
## 2 Motivating Example
To generate (offensive) code from natural language (NL), AI code generators are fed with corpora containing pairs of NL intents (inputs) and code snippets (output). These corpora are commonly split into _training data_, i.e., the data used to feed the model, _validation data_, i.e., the data used to tune the model's parameters, and _test data_, i.e., the data used to evaluate the model in the generation of the code starting from new NL descriptions (i.e., the NL intents in the test data are never seen by the model in the train and validation data).
The most practical solution to assess the performance of the NMT models in the code generation is to compare, for every NL description of the test data (i.e., the input), the model's prediction with the code snippet (i.e., the output) in the test set, which is considered the _ground-truth_ for the evaluation. To this aim, state-of-the-art provides a set of metrics that estimate the similarity between the code generated by NMT models and the code snippets in the test set. However, output similarity metrics cannot properly assess whether two pieces of code are different but semantically equivalent, i.e., they provide the same output and/or effects although they use different operations (e.g., jz label and je label are different assembly instructions performing the same conditional jump).
For this reason, _human evaluation_ is considered the golden standard for assessing the correctness of the code generated by the models [16]. Through manual inspection of every model's predictions, human evaluators assess if the code generated by the models is _semantically correct_, i.e., if the output is the exact translation of the NL intent into the target programming language. Semantic correctness implies _syntax correctness_, i.e., a code prediction that performs what is described in the NL intent must also adhere to the syntax
rules of the target programming languages. Human evaluation classifies the code as correct or incorrect by assigning a value equal to 1 or 0, respectively.
As a simple example, consider the intent "_transfer EAX contents into EDX register_", which translates, on the 32-bit version of the x86 instruction set architecture (IA-32), to the assembly snippet:
mov EDX, EAX
An alternative method to copy the contents of a register into another is by pushing and popping its value onto the stack. Therefore, a semantically equivalent implementation of this copy is the code:
push EAX
pop EDX
Despite the model's prediction being both syntactically and semantically correct, output similarity metrics are not able to grasp the equivalence between the two snippets since they base their calculation on character and/or token similarity. Therefore, this translation results in low scores1 for several output similarity metrics widely used in the field (see SS 4.3), such as _BLEU-4_ (0.11) and _Edit Distance_ (0.31).
Footnote 1: Scores of output similarity metrics range between 0 and 1.
The opposite occurs with the intent "_clear the EDX register and move 5 in the lowest byte of the register_", which translates to the assembly snippet:
xor EDX, EDX
mov DL, 5
If the model generates the snippet:
xor EDX, EDX
mov BL, 5
then prediction and reference differ by a single character, yet the code does not accomplish the same task. Indeed, the lowest byte of EDX is stored in the DL register, while BL contains the lowest byte of EBX. Automatic metrics fail to account for situations like this. For instance, the _Edit Distance_ between these two pieces of code is 0.96, while the _BLEU-4_ is 0.65, which are considered
high values. Differently, a human evaluator would appropriately classify this snippet as semantically incorrect, since it does not perform the intended operation, although it properly respects the syntax of the assembly language.
However, since the human analyst needs to check the syntax and the semantics of every output generated by the models, human evaluation is often unfeasible. Indeed, the huge amount of data to scrutinize makes the analysis time-consuming and prone to errors.
## 3 Proposed Method
To overcome the limitations in the assessment of AI-generated assembly code described in the previous section, we propose _ACCA_, a method to automatically assemble and symbolically execute both the reference (i.e., the ground truth) and predicted snippets (i.e., the code generated by the models). Through symbolic execution, the method simulates the execution of both programs and determines whether, starting from the same state, they terminate producing equivalent results and effects.
First, _ACCA_ compares, at the string level, the prediction with the reference to assess its correctness, because, if the prediction is equal to the reference, then we assume the code prediction is correct. If this is not the case, the method assesses whether the code is syntactically correct. Indeed, if a code prediction is not structured according to the rules of the target programming languages, it is classified as incorrect.
If this is not the case, i.e., if the prediction differs from the ground truth reference and the prediction is syntactically correct, _ACCA_ generates two source code files, one for the reference and one for the predicted snippet. It then assembles and links them to produce the executable files needed for the symbolic execution. At this point, the method symbolically executes both files resulting from the assembling process to assess whether they are equivalent.
Finally, _ACCA_ returns the pair of syntactic correctness value (SYN) and the semantic correctness value (SEM) of the code predicted by the model, which is equal to 1 when the correctness is verified, 0 otherwise. Figure 1 shows a detailed flowchart of the syntactic and semantics correctness evaluation process.
In the rest of this section, we detail the phases of the workflow. For the sake of simplicity, we showcase examples of assembly code for the Linux OS running on IA-32 architecture to describe the evaluation process, although
the proposed method is not restricted to this specific context, and it can be applied to different instruction set architectures and operating systems.
### String Comparison
_ACCA_ first checks whether the predicted code snippet perfectly matches the ground truth by performing a string comparison. If they match (i.e., the code generated by the model is equal to ground truth), the prediction is considered both syntactically and semantically correct (i.e., SYN=1, SEM=1) and the evaluation ends. Otherwise, i.e., if they differ, the method proceeds with the evaluation of the syntactic and semantic correctness. The preliminary string comparison is done to speed up the evaluation process by skipping the symbolic execution process when not needed, i.e., when the prediction perfectly matches the reference snippet and is, therefore, correct.
### Assembling Process
The purpose of the assembling process is to assess whether each code snippet generated by the model adheres to the syntax rules of the programming language it is written in, i.e., to check whether it is compilable. Since NMT is still far from producing entire complex programs, the output predicted by the models is a portion of an entire program (i.e., a single-line or multi-line statement). Thus, _ACCA_ constructs a complete program by adding the necessary entry point and exit point code. For instance, consider the following code snippets for IA-32:
cmp EAX, EBX
je loop
pop ECX
Figure 1: Detailed flowchart of _ACCA_.
This code compares the contents of two registers and, based on the result, either performs a jump operation or reads from the stack and saves the value into another register. The snippet is syntactically correct according to the assembly language, yet it is incomplete for the execution since code snippets need to contain a minimal set of information to be properly executed.
In the case of Linux OS, these instructions are kept in the text section, which must begin with the declaration global _start to define the initial program entry point. Moreover, to make sure the program terminates correctly, the proposed method also includes a fictitious label that represents the exit address the code jumps to at the end of its execution. This label is declared in the data section, which contains initialized read-write data.
Therefore, to have a complete program, the code is modified as follows:
section .data
exit_addr db 0x56
section .text
global _start
_start:
cmp EAX, EBX
je loop
pop ECX
jmp exit_addr
Once the whole program is created, _ACCA_ generates a source file and leverages an assembler to assess its syntactic correctness. Indeed, if the program compiles, then all of its instructions are syntactically correct, and therefore the code generated by the model also respects the structure of the assembly programming language. There are three possible output scenarios for the compilation:
* _No errors_, in which the assembling process is completely correct;
* _Warnings_, in which the assembler reports some type of warning (e.g., word data exceeds bounds, invalid register size specification ignored, etc.), but the compilation still terminates without errors;
* _Errors_, in which the assembling process results in an error that prevents the code from being assembled.
In the first two cases, the output produced by the model is considered syntactically correct (i.e., SYN=1). Warnings are considered acceptable since they indicate issues involving bad practice, but are not severe enough to prevent the code compilation. When the compilation produces an error (i.e., the third case), we investigate the nature of the error. More precisely, we focus on a specific category of the error raised by the compiler, the _Symbol-Not-Defined (SDN) errors_, which occur when the code contains a symbol (e.g., a label or variable) that has not been previously defined or initialized. We handle this category of errors appropriately. Indeed, since the predicted snippets contain only one or a few instructions, they might reference a label or variable defined in a different portion of the program, which leads to an assembling error even when the program is syntactically correct.
Consider, again, the previous code snippet: the first instruction compares the contents of the EAX and EBX registers and, if they are equal, the execution jumps to the loop label. However, this symbol has not been defined yet. To handle these cases, we analyze the assembler output to determine the missing symbol's name and include it in the source code file as a fictitious label. This label simply points to a jump operation to the previously defined exit address (i.e., myExitAddr). Indeed, the destination of the jump is not significant for the evaluation since we are only interested in checking the correctness of the instructions generated by the model. Therefore, after a SDN error, _ACCA_ further modifies the program as follows:
section .data
myExitAddr db 0x56
section .text
global _start
_start:
cmp EAX, EBX
je loop
pop ECX
jmp myExitAddr
loop:
jmp myExitAddr
Once the source code file is modified accordingly, _ACCA_ repeats the assembling process as before to check if the compilation ends with no errors or warnings. In this case, _ACCA_ assigns the SYN score equal to 1 and continues the evaluation to check the semantic correctness.
When the compilation ends with errors different from SDN errors, such as an invalid combination of opcode and operands, an expression syntax error, etc., _ACCA_ labels the model's prediction as syntactically incorrect (SYN=0). Since a syntactically incorrect snippet is also semantically incorrect, the evaluation process terminates, also assigning a SEM score equal to zero. A source code file is generated and assembled for both the ground truth and the predicted code snippet.
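A minimal sketch of how this assembling step could be automated is shown below. It assumes NASM as the assembler for IA-32 Linux and a simple pattern match on the assembler output to detect SDN errors; the boilerplate template, the error-message pattern, and the helper names are illustrative assumptions rather than ACCA's actual implementation.

```python
import os
import re
import subprocess
import tempfile

# Wrapper that completes a bare snippet into an assemblable program
# (cf. the listings above); the label names are illustrative.
TEMPLATE = """section .data
myExitAddr db 0x56
section .text
global _start
_start:
{snippet}
jmp myExitAddr
{extra_labels}
"""

def try_assemble(snippet, extra_labels=""):
    src = TEMPLATE.format(snippet=snippet, extra_labels=extra_labels)
    with tempfile.TemporaryDirectory() as tmp:
        asm, obj = os.path.join(tmp, "prog.asm"), os.path.join(tmp, "prog.o")
        with open(asm, "w") as f:
            f.write(src)
        proc = subprocess.run(["nasm", "-f", "elf32", asm, "-o", obj],
                              capture_output=True, text=True)
        return proc.returncode == 0, proc.stderr

def syntactic_score(snippet):
    ok, err = try_assemble(snippet)
    if not ok:
        # Assumed pattern for symbol-not-defined messages: add fictitious
        # labels that jump to the exit address and retry once.
        missing = set(re.findall(r"symbol `(\w+)'", err))
        if missing:
            labels = "\n".join(f"{name}:\njmp myExitAddr" for name in missing)
            ok, _ = try_assemble(snippet, extra_labels=labels)
    return 1 if ok else 0        # SYN score
```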
### Symbolic Execution
To evaluate the semantic correctness, _ACCA_ leverages symbolic execution. To this aim, the method needs the program executable. If the assembling process ends correctly, the assembler outputs an object file, which is then fed to the linker to complete the process.
Since the same operation may be correctly implemented in different ways, a simple textual comparison with the reference is not enough to assess the semantic correctness of a program. We still consider the ground truth as the reference for the correct implementation of the intended code. However, we do not limit the comparison to a textual similarity, but we examine the _system state_ at the end of the execution of both the reference and the generated code. Indeed, two programs that implement the same functionality using different operations can be considered semantically equivalent if they result in the same final system state. Since the final execution state depends on the inputs and initial state of the program, we need to compare the state produced by both programs for every possible combination of inputs and initial state.
Symbolic execution is a state-of-the-art solution for program analysis based on abstract execution. It consists in simulating the execution of a program providing symbolic values as its input instead of concrete ones. The result of this execution is a set of output values expressed as functions of the input symbols. _ACCA_ uses symbolic execution to determine all the existing execution paths and all possible corresponding output system states. It then compares the set of output system states of the generated program with the set of output system states of the ground truth program: if they match, then the programs are semantically equivalent (SEM=1), otherwise, the method classifies the model's prediction and the ground-truth as not semantically equivalent (SEM=0).
To symbolically execute the programs, we use a _binary analysis platform_ (BAP) that loads each executable and provides a complete abstract representation of the target architecture, CPU registers, memory address space, and stack. The program is conceived as a sequence of _basic blocks_, i.e., a straight-line code sequence with no branches, and the interconnections between the blocks represent the jump operations. An example is shown in Fig. 2: the program compares the contents of two registers and, if they are equal, the execution jumps to a specific address, otherwise, it jumps to the next instruction. Each possible branch is the entry point of a different basic block, which contains a sequence of operations that can be executed in order and one last instruction that causes the execution to move to another basic block.
_ACCA_ begins the execution by initializing the abstract registers and flags to the same symbolic values; the method also sets the value of the stack pointer and initializes the memory to a blank state. Then, it executes the program by simulating each instruction step by step and keeping track of its state at each given step. During the execution of the programs, operations that involve a variable, such as arithmetic operations, comparisons, assignments, etc. (e.g., if X > 10), yield an _execution tree_, i.e., a tree of operations formed by all the possible paths of the program, which encodes all branch decisions taken up to a specific point. Execution trees are then translated into constraints, i.e., formulas that express a set of assumptions on the symbolic outputs of the execution. These constraints are finally solved by a _satisfiability modulo theories_ (SMT) solver, which is typically employed in program verification to verify whether constraints can be satisfied by some assignment of concrete values to the program's symbolic arguments [17]. As an example, consider the following simple constraints:
```
x > y
y > 2
10 > x
```
Figure 2: Example of a code snippet represented as a sequence of basic blocks.
The SMT solver treats them as assertions that must be satisfied by valid values of the symbolic variables. It therefore outputs an assignment that is consistent with all the constraints (e.g., x=4, y=3).
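As an illustration, the three constraints above can be handed directly to the z3 SMT solver (the backend mentioned in the implementation details below); the following minimal sketch is purely illustrative and not part of _ACCA_'s code:

```python
# Minimal sketch: solving the example constraints with the z3 SMT solver.
from z3 import Ints, Solver, sat

x, y = Ints("x y")
solver = Solver()
solver.add(x > y, y > 2, 10 > x)   # the three assertions above

if solver.check() == sat:
    model = solver.model()
    # One consistent assignment, e.g. x=4, y=3 (the exact values may differ).
    print(model[x], model[y])
```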
_ACCA_ symbolically executes both the reference code and the predicted snippet: assuming that the program state at the beginning of the execution is identical, if the programs are semantically equivalent, then their state is also identical at the end of the execution. Therefore, to assess the semantic correctness of the generated code compared to the ground truth, the proposed method checks whether the states of the architecture are equal at the end of both executions. The program state includes:
* _state of the registers_, i.e., the contents of the abstract CPU registers;
* _state of the flags_, i.e., the abstract status register that keeps track of the CPU state, including the result of arithmetic operations (e.g., carry flag, overflow flag, etc.) and interruptions;
* _values on stack_, i.e., the contents of the memory area used to store temporary values;
* _path constraints_, i.e., the condition of input values, defined over the previous items, that leads to the corresponding final state.
To this end, _ACCA_ compares the state of each _leaf_ node, i.e., the final _states_ at the end of each path in the execution tree representing the program, of both executables. To compare the leaf nodes, the method constructs a set of lists for every final basic block of the two programs. Each set contains a list of register values, flag values, boolean constraints, and stack values. For example, a reference program whose execution tree ends with two basic blocks (leaf nodes) has two sets of lists. Each set contains all the values that represent the system state for that particular execution path. If the execution tree of the generated program has the same number of leaf nodes, then each list of the two sets is compared with each list of two sets of the reference program. If there is a correspondence between each leaf of the first program and each leaf of the second program, then they are semantically equivalent (SEM=1) and the evaluation process ends. Contrarily, if the leaf nodes of the two program execution trees do not match, then we conclude that the predicted code is not semantically equivalent to the reference snippet (SEM=0) and the process ends.
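The leaf-by-leaf matching can be rendered schematically as follows. In this sketch each leaf is reduced to a frozen set of strings gathering register values, flag values, stack values and path constraints; this data structure is an illustrative simplification of the lists actually built by _ACCA_:

```python
from typing import FrozenSet, List

# Hypothetical representation: one frozenset of strings per leaf node,
# gathering register values, flag values, stack values and path constraints.
Leaf = FrozenSet[str]

def semantically_equivalent(ref_leaves: List[Leaf], pred_leaves: List[Leaf]) -> int:
    """Return SEM=1 if the leaves of the two execution trees match one-to-one."""
    if len(ref_leaves) != len(pred_leaves):
        return 0
    remaining = list(pred_leaves)
    for leaf in ref_leaves:
        if leaf in remaining:
            remaining.remove(leaf)   # enforce a one-to-one correspondence
        else:
            return 0
    return 1
```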
Since the total number of states can grow exponentially in the number of branches, one of the main challenges of symbolic execution is the path explosion problem [17]. Indeed, keeping track of a large number of pending branches to be explored impacts both the running time and the space requirements of symbolic execution. The primary causes of path explosion are loops and function calls. One common method to deal with this problem is to bind the path exploration to a limited number of iterations. To handle programs whose symbolic execution does not terminate, we set a maximum number of execution steps. Since AI-generated code is typically concise and consists of a few assembly instructions, a correct program concludes its execution in a few execution steps. If it runs for more than max_steps, then the symbolic execution stops, and the generated program is classified as incorrect (SEM=0).
### Implementation Details
To assemble the programs and generate the executable files, we rely on the wide set of available open-source software. For the previous examples (on the IA-32), we used the Netwide Assembler (NASM) [18], an 80x86 and x86-64 assembler that supports a wide range of executable formats, including Linux, BSD, and Windows operating system formats. As a binary analysis platform, we use ANGR [19]. ANGR provides support for a variety of CPU architectures, including ARM, MIPS, PPC, and x86 processors. It comprises a series of sub-components that implement the different steps necessary for the symbolic execution: to disassemble the executables and lift the binary code to an intermediate representation; to simulate the program state and execution, including registers and memory; and to solve the generated constraints, using the z3 [20] SMT solver as a backend. We set the maximum number of execution steps max_steps to 100 to avoid infinite loops. Our implementation runs on both Linux and Windows OS. We will publicly share the implementation of _ACCA_ on GitHub.
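For reference, a minimal ANGR-based sketch of the symbolic execution step is shown below. It loads an executable, advances the simulation manager up to a bounded number of steps, and collects the final (deadended) states; the register initialization and the per-leaf state extraction performed by _ACCA_ are more elaborate, so the function and variable names here are illustrative only.

```python
import angr

MAX_STEPS = 100  # bound on execution steps to avoid path explosion / infinite loops

def final_states(binary_path: str):
    """Symbolically execute a binary and return its deadended (final) states,
    together with a flag telling whether the exploration was truncated."""
    project = angr.Project(binary_path, auto_load_libs=False)
    state = project.factory.entry_state()
    simgr = project.factory.simulation_manager(state)

    steps = 0
    while simgr.active and steps < MAX_STEPS:
        simgr.step()          # advance every active path by one basic block
        steps += 1

    truncated = bool(simgr.active)   # paths still running after MAX_STEPS
    return simgr.deadended, truncated
```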
## 4 Experimental Setup
### AI-code Generation
To perform code generation and assess the tool on the AI-generated code, we adopted four state-of-the-art NMT models.
**Seq2Seq** is a model that maps an input sequence to an output sequence. Similar to the encoder-decoder architecture with attention mechanism [21], we use a bi-directional LSTM as the encoder to transform an embedded intent sequence into a vector of hidden states with equal length. We implement the Seq2Seq model using _xnmt_[22]. We use an Adam optimizer [23] with \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\), while the learning rate \(\alpha\) is set to 0.001. We set all the remaining hyper-parameters in a basic configuration: layer dimension = 512, layers = 1, epochs = 200, beam size = 5.
**CodeBERT**[24] is a large multi-layer bidirectional Transformer architecture [25] pre-trained on millions of lines of code across six different programming languages. Our implementation uses an encoder-decoder framework where the encoder is initialized to the pre-trained CodeBERT weights, and the decoder is a transformer decoder, composed of 6 stacked layers. The encoder follows the RoBERTa architecture [26], with 12 attention heads, hidden layer dimension of 768, 12 encoder layers, and 514 for the size of position embeddings. We set the learning rate \(\alpha=0.00005\), batch size = 32, and beam size = 10.
**CodeT5+**[27] is a new family of Transformer models pre-trained with a diverse set of pretraining tasks including causal language modeling, contrastive learning, and text-code matching to learn rich representations from both unimodal code data and bimodal code-text data. We utilize the variant with model size \(220M\), which is trained from scratch following T5's architecture [28]. It has an encoder-decoder architecture with 12 decoder layers, each with 12 attention heads and hidden layer dimension of 768, and 512 for the size of position embeddings. We set the learning rate \(\alpha=0.00005\), batch size = 16, and beam size = 10.
**PLBart**[29] is a multilingual encoder-decoder (sequence-to-sequence) model primarily intended for code-to-text, text-to-code, code-to-code tasks. The model is pre-trained on a large collection of Java and Python functions and natural language descriptions collected from GitHub and StackOverflow. We use the PLBart-large architecture with 12 encoder layers and 12 decoder layers, each with 16 attention heads. We set the learning rate \(\alpha=0.00005\), batch size = 16, and beam size = 10.
We followed the best practices in the field of code generation by supporting NMT models with data processing operations. The data processing steps are usually performed both before translation (_pre-processing_), to train the NMT model and prepare the input data, and after translation (_post-processing_), to improve the quality and the readability of the code in output.
Our pre-processing operations start with the _stopwords filtering_, i.e., we remove a set of custom-compiled words (e.g., _the_, _each_, _onto_) from the intents to include only relevant data for machine translation. Next, we use a _tokenizer_ to break the intents into chunks of text containing space-separated words (i.e., the _tokens_). To improve the performance of the machine translation [30; 31; 2], we _standardize_ the intents (i.e., we reduce the randomness of the NL descriptions) by using a _named entity tagger_, which returns a dictionary of _standardizable_ tokens, such as specific values, label names, and parameters, extracted through regular expressions. We replace the selected tokens in every intent with "_var#_", where # denotes a number from 0 to \(|l|\), and \(|l|\) is the number of tokens to standardize. Finally, the tokens are represented as real-valued vectors using _word embeddings_.
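The standardization step can be illustrated with the following sketch. The stop-word excerpt, the regular-expression patterns (hexadecimal and decimal constants) and the function name are illustrative assumptions; the actual named entity tagger relies on a larger, custom set of expressions.

```python
import re

STOPWORDS = {"the", "each", "onto"}            # excerpt of the custom stop-word list
PATTERNS = [r"0x[0-9a-fA-F]+", r"\b\d+\b"]     # assumed standardizable tokens

def preprocess(intent: str):
    """Filter stop-words, tokenize and standardize an NL intent."""
    tokens = [t for t in intent.lower().split() if t not in STOPWORDS]
    text = " ".join(tokens)
    mapping = {}
    for pattern in PATTERNS:
        for match in re.findall(pattern, text):
            if match not in mapping:
                mapping[match] = f"var{len(mapping)}"
            text = text.replace(match, mapping[match])
    return text, mapping   # the mapping is reused later for de-standardization

# Example: preprocess("move 0xBB into the eax register")
# returns ("move var0 into eax register", {"0xbb": "var0"})
```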
The pre-processed data is used to feed the NMT model. Once the model is trained, we perform the code generation from the NL intents. Therefore, when the model takes as inputs new intents, it generates the related code snippets based on its knowledge (_model's prediction_). As for the intents, also the code snippets predicted by the models are processed (_post-processing_) to improve the quality and readability of the code. Finally, the dictionary of standardizable tokens is used in the _de-standardization_ process to replace all the "_var#_" with the corresponding values, names, and parameters.
### Dataset
To feed the models for the generation of security-oriented code, we extended the publicly available _Shellcode_IA32_ dataset [32; 33] for automatically generating _shellcodes_ from NL descriptions. A shellcode is a list of machine code instructions to be loaded in a vulnerable application at runtime. The traditional way to develop shellcodes is to write them using the assembly language, and by using an assembler to turn them into _opcodes_ (operation codes, i.e., a machine language instruction in binary format, to be decoded and executed by the CPU) [34; 35]. Common objectives of shellcodes include spawning a system shell, killing or restarting other processes, causing a denial-of-service (e.g., a fork bomb), leaking secret data, etc.
The dataset consists of instructions in assembly language for _IA-32_ collected from publicly available security exploits [36; 37], manually annotated with detailed English descriptions. In total, it contains \(3,200\) unique pairs of assembly code snippets/English intents. We further enriched the dataset with additional samples of shellcodes collected from publicly available security exploits, reaching \(5,900\) unique pairs of assembly code snippets/English intents. To the best of our knowledge, the resulting dataset is the largest collection of shellcodes in assembly available to date.
Our dataset also includes \(1,374\) intents (\(\sim\)23% of the dataset) that generate multiple lines of assembly code, separated by the newline character \(\backslash\)_n_. These multi-line snippets contain many different assembly instructions (e.g., whole functions). For example, the copy of the ASCII string _"/bin//sh"_ into a register is a typical operation to spawn a shell, which requires three distinct assembly instructions: push the hexadecimal values of the words _"/bin"_ and _"//sh"_ onto the stack register before moving the contents of the stack register into the destination register. Further examples of multi-line snippets include conditional jumps, tricks to zero-out the registers without generating null bytes, etc. Table 1 shows two further examples of multi-line snippets with their natural language intents.
Table 2 summarizes the statistics of the dataset used in this work, including the unique examples of NL intents and assembly code snippets, the unique number of tokens, and the average number of tokens per snippet and intent. The dataset is publicly available on GitHub2.
Footnote 2: [https://github.com/desertlab/Shellcode_IA32](https://github.com/desertlab/Shellcode_IA32)
| Code Snippet | English Intent |
|---|---|
| xor bl, 0xBB \n jz formatting \n mov cl, byte [esi] | Perform the xor between BL register and 0xBB and jump to the label formatting if the result is zero else move the current byte of the shellcode in the CL register. |
| xor ecx, ecx \n mul ecx | Zero out the EAX and ECX registers. |

Table 1: Examples of assembly code with NL descriptions from our dataset.

| Metric | NL Intents | Assembly Code Snippets |
|---|---|---|
| Unique lines | 5,740 | 3,316 |
| Unique tokens | 2,855 | 1,770 |
| Avg. tokens per line | 9.18 | 5.83 |

Table 2: Dataset statistics.
### Baseline Assessment Solutions
As a baseline for the evaluation, we used the following output similarity metrics, which are widely used to assess the performance of AI generators in many code generation tasks [15], including the generation of assembly code for security contexts [38; 1; 3; 2; 33]:
* **Compilation Accuracy (CA)**. It indicates whether each code snippet produced by the model is compilable according to the syntax rules of the target language. CA value is either 1, when the snippet's syntax is correct, or 0 otherwise. To compute the _compilation accuracy_, we used the _Netwide Assembler_ (NASM) [18].
* **Bilingual Evaluation Understudy (BLEU) score**[39]. It measures the degree of n-gram overlapping between the string of each code snippet produced by the model and the reference. This metric also takes into account a _brevity penalty_ to penalize predictions shorter than the references. BLEU value ranges between 0 and 1, with higher scores corresponding to a better quality of the prediction. Similar to previous studies, we use the BLEU-4 score (i.e., we set \(n=4\)). We implemented BLEU score computation employing the bleu_score module contained in the open-source Python suite Natural Language Toolkit (NLTK) [40].
* **SacreBLEU** [41]. This is a different implementation of the BLEU score, which differs from the traditional one because it uses different tokenization techniques. We used the implementation available on Hugging Face [42].
* **Exact Match accuracy (EM)**. It indicates whether each code snippet produced by the model perfectly matches the reference. EM value is 1 when there is an exact match, 0 otherwise. To compute the exact match, we used a simple Python string comparison.
* **Edit Distance (ED)**. It measures the _edit distance_ between two strings, i.e., the minimum number of operations on single characters required to make each code snippet produced by the model equal to the reference. ED value ranges between 0 and 1, with higher scores corresponding to smaller distances. For the edit distance, we adopted the Python library pylcs [43]. A minimal example of how these similarity scores can be computed is sketched after this list.
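As an indication of how such scores can be obtained, the sketch below evaluates BLEU-4, a normalized edit distance, and the exact match for a single prediction/reference pair. It is a simplified stand-in for the actual evaluation scripts: BLEU is computed with NLTK's bleu_score module as in our setup, whereas, for self-containment, the edit distance uses NLTK's edit_distance instead of pylcs, and the normalization shown is only one possible choice.

```python
from nltk.metrics import edit_distance
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "xor ecx, ecx \\n mul ecx"
prediction = "xor ecx, ecx \\n xor eax, eax"

# BLEU-4 on whitespace tokens; a smoothing function avoids zero scores
# when some higher-order n-gram does not overlap.
bleu4 = sentence_bleu([reference.split()], prediction.split(),
                      weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=SmoothingFunction().method1)

# Edit distance normalized to [0, 1], higher meaning closer to the reference
# (one possible normalization).
ed = 1 - edit_distance(reference, prediction) / max(len(reference), len(prediction))

em = int(prediction == reference)   # exact match
print(bleu4, ed, em)
```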
As a further baseline for the comparison, we adopted the well-known **ChatGPT** [44], the AI-powered language model developed by OpenAI. For every code snippet generated by the models, we asked ChatGPT to assess its correctness by assigning the value 1 when the code is correct, 0 otherwise. We performed two different evaluations:
* **ChatGPT-NL**: ChatGPT evaluates if the code generated by the models is the translation in assembly code of what is required in the natural language intents, similar to what a human evaluator does during the manual code review;
* **ChatGPT-GT**: ChatGPT evaluates if the code generated by the models is semantically equivalent to the ground truth used as a reference for the evaluation, similar to the assessment performed by output similarity metrics.
### Human Evaluation
To ensure a robust and thorough assessment of both _ACCA_ and baseline approaches in evaluating AI-generated code, we conducted a meticulous comparison against human evaluation, which serves as ground truth for our analysis.
In the human evaluation, a code prediction was deemed successful if it accurately translated the NL description into the assembly language and adhered to the established assembly programming language rules, warranting a score of 1. Any deviation resulted in a score of 0.
To strengthen our evaluation and minimize the potential for human error, 3 authors, well-versed in assembly for the IA-32 architecture and in shellcode development, independently scrutinized each code snippet generated by the models. Any discrepancies that arose were attributed to human oversight and promptly rectified, leading to unanimous consensus (100% agreement) across all cases in the human evaluation.
## 5 Experimental Results
To perform the experiments, we split the dataset into training, validation, and test sets using a common 80%/10%/10% ratio [45; 46]. For our experiments, we used a machine with a Debian-based distribution, with 8 vCPU, 16 GB RAM, and one Nvidia T4 GPU.
### Quantitative Analysis
First, for all the models, we compared the average code correctness values computed by _ACCA_ over all the examples of the test set with respect to the average semantic correctness assessed with the human evaluation. Table 3 shows the results.
The table highlights that the results provided by _ACCA_ are very close to the human evaluation. Indeed, the method classifies as correct, on average, 64% of the code snippets generated by all the models. On the other hand, according to our manual code review, we found that, on average, 71% of the generated code snippets are semantically correct. Hence, although the two values are very close, _ACCA_ underestimates the code correctness when compared to human evaluation.
The quantitative analysis is, however, of limited use if we do not consider the percentage of code snippets that are classified in the same way by both human evaluation and the proposed method. For instance, both _ACCA_ and the human evaluation could assess 50% of the code snippets as correct and yet disagree on every single prediction (i.e., the sets of code snippets considered correct by _ACCA_ and by the human evaluation could be disjoint). Therefore, the table also shows a matching rate, which expresses the percentage of code
| Evaluation | Seq2Seq | CodeBERT | CodeT5+ | PLBart | Average |
|---|---|---|---|---|---|
| Compilat. Acc. | 0.28 | 0.15 | 0.14 | 0.21 | 0.19 |
| BLEU-4 | 0.22 | 0.20 | 0.29 | 0.24 | 0.24 |
| SacreBLEU | 0.15 | 0.05 | 0.07 | 0.09 | 0.09 |
| Edit Distance | 0.15 | 0.07 | 0.06 | 0.09 | 0.09 |
| Exact Match | 0.25 | 0.17 | 0.27 | 0.24 | 0.23 |
| ChatGPT-NL | 0.08 | 0.02 | 0.07 | 0.17 | 0.09 |
| ChatGPT-GT | 0.06 | 0.13 | 0.10 | 0.10 | 0.10 |
| ACCA | 0.06 | 0.10 | 0.07 | 0.07 | 0.07 |

Table 4: Offset with respect to the human evaluation. The best performance (lower offset) is blue, while the worst performance (higher offset) is red.

|  | Seq2Seq | CodeBERT | CodeT5+ | PLBart | Average |
|---|---|---|---|---|---|
| Human Eval. | 0.66 | 0.78 | 0.76 | 0.64 | 0.71 |
| ACCA | 0.60 | 0.68 | 0.69 | 0.57 | 0.64 |
| Matching Rate | 0.94 | 0.90 | 0.93 | 0.92 | 0.92 |

Table 3: Code correctness assessment of _ACCA_ with respect to the human evaluation.
snippets that are considered correct or incorrect by both human evaluation and _ACCA_. We found that our method and human evaluation provide the same classification for \(\sim 92\%\) of the predictions (min 90%, max 94%). These results suggest that the proposed approach aligns well with human evaluations.
To better appreciate the evaluation provided by _ACCA_, we compared the results of the human evaluation with the results provided by the baseline solutions (described in § 4.3). To this aim, we computed an _offset value_, i.e., the difference between the optimal value represented by the human evaluation and the value provided by the different assessment solutions. The lower the offset, the closer the result is to the human evaluation. Table 4 shows the results.
The average offset of the output similarity metrics ranges between a minimum (best) value of 0.09 (for SacreBLEU and edit distance) and a maximum (worst) value of 0.24 (for BLEU-4). ChatGPT provided results similar to the best-performing output similarity metrics, with an average offset equal to 0.09 over all the models when the correctness of the models' predictions is computed with respect to the NL intent (ChatGPT-NL), and equal to 0.10 when the predictions are compared to the ground truth (ChatGPT-GT). _ACCA_ provided the lowest offset for 3 out of 4 models and an average offset equal to 0.07, which is the lowest value overall, i.e., the code correctness computed by the proposed approach is, on average, the closest to the human evaluation.
### Qualitative Analysis
We performed a manual inspection of the cases of discrepancy to examine when the method provides different results from the human evaluation. We have a discrepancy case when the method assesses the code as correct but the human evaluation does not, or when the method assesses the code as incorrect although it is semantically correct according to the human evaluation.
As shown in Table 3, the method underestimates the performance of the models. In fact, an in-depth inspection of the results revealed that \(\sim 99\%\) of the discrepancy cases were due to examples classified as correct by human evaluation (value 1) but incorrect by _ACCA_ (value 0). To better discuss these discrepancy cases, Table 5 illustrates four representative examples of mismatch between _ACCA_ and the human evaluation.
The first two rows of the table showcase two model predictions that are correctly labeled by human evaluation, but considered incorrect by our method. These misclassifications were due to the ambiguity of the code snippets, since the same NL description can be expressed by semantically different code snippets. For instance, to zero out the stack (row # 1), a programmer can reset any register and then push the contents of the register (i.e., 0) into the stack register. Also, to move the contents of a register into a different one (row # 2), a programmer can use the mov instruction to transfer a value from a source to a destination, or, equivalently, the xchg instruction, to swap the contents of the registers. Both the code snippets generated by the model accomplish what is required in the NL intent, but at the end of the symbolic execution, the state of the registers is different from the one obtained with the code in the ground truth (EAX is reset instead of EDX in row # 1, while, in row # 2, EAX contains the value of ESI instead of its original value). Therefore, _ACCA_ assigns a SEM score equal to zero, even if the snippets are semantically correct.
The last two rows of the table show examples of incorrect predictions that are wrongly classified as correct by the tool. As already remarked, these cases are very limited in numbers and represent situations in which, although the symbolic execution of predictions and ground-truth reference lead to the same state of the registers at the end of the execution, the model's prediction is not what is required by the NL description. For instance, in row # 3, the prediction contains what is described in the NL intents except for the label L1. The label does not affect the state of the registers during the execution
| NL Intent | Ground Truth | Model's Prediction | Human Eval. | ACCA |
|---|---|---|---|---|
| Push zero into the stack | xor edx, edx | xor eax, eax | 1 | 0 |
| Save eax contents in | mov esi, eax | xchg esi, eax | 1 | 0 |
| In L1 jump short to esp | L1: jmp | jmp short esp | 0 | 1 |
| Restore the top of the stack into the ecx register then decrement the ecx register and jump to the l1 label if the contents of the ecx register is not zero | pop ecx \n | pop ecx \n dec | 0 | 1 |

Table 5: Examples of mismatches between _ACCA_ and the human evaluation.
of the code, but it will impact the behavior of the whole program (unless the label is never used by other instructions). Row # 4, instead, showcases a more complex example in which the correct instruction loop, which decrements the value of the counter register ECX and jumps to the args (i.e., the l1 label) if the counter is not zero, is replaced, in the model's prediction, by a decrement of the counter (dec ecx) and an unconditional jump (jmp). In this case, although the instructions led to the same state of the registers because the counter was not zero after the decrement, the prediction is incorrect since the unconditional jump does not take into account the condition on the ecx register specified in the NL intent.
### Correlation Analysis
Additionally, we performed a statistical analysis by computing the correlation of _ACCA_ with the human evaluation overall the code snippets of the test set (i.e., we considered the values on the single predictions).
To this aim, we computed the _Pearson_ correlation coefficient \(r\), which measures the strength of association (i.e., the linear relationship) between two variables in a correlation analysis and is defined as the covariance of the two variables divided by the product of their respective standard deviations [47]. The correlation coefficient is a unit-free value between \(-1\) and \(1\), which represent perfect negative and perfect positive correlation, respectively. Positive values indicate a positive correlation, i.e., the values of both variables tend to increase together, while negative values indicate a negative correlation, i.e., the values of one variable tend to increase when the values of the other variable decrease. A high value of the coefficient indicates a strong correlation with the human evaluation, whereas a small value indicates a weak correlation. To provide context for the evaluation, we also computed the correlation coefficients between the baseline solutions and the human evaluation. Table 6 shows the results.
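For per-snippet binary scores, the coefficient can be computed directly, e.g., with SciPy; the variable names and values below are purely illustrative:

```python
from scipy.stats import pearsonr

# 0/1 correctness labels per generated snippet (illustrative values only).
human_scores = [1, 0, 1, 1, 0, 1]
acca_scores = [1, 0, 1, 0, 0, 1]

r, p_value = pearsonr(acca_scores, human_scores)
print(f"Pearson r = {r:.2f}")
```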
Confirming previous work [15], the analysis shows that Edit Distance and Exact Match are the output similarity metrics most correlated to the semantic correctness for security-oriented code, with both Pearson's \(r\) coefficients equal to \(0.70\) and \(0.61\), respectively. The output similarity metric that is less correlated to human evaluation is the compilation accuracy (\(r=0.48\)), showing that the syntactic correctness of the code is not highly correlated to its semantic correctness.
An important takeaway from our experiments is that, despite ChatGPT-based assessments providing results close to the human evaluation in the quantitative analysis (see Table 4), they have a correlation coefficient lower than the best-performing output similarity metric. Indeed, ChatGPT-GT has a correlation coefficient equal to 0.67, while ChatGPT-NL has a very poor correlation with human evaluation, resulting in the lowest value among all the baseline solutions (\(r=0.41\)). This is a consequence of the high number of discrepancy cases between these solutions and the human evaluation, which is even more exacerbated for the ChatGPT-NL solution.
Finally, _ACCA_ provides the highest correlation coefficient over all the four models, with an average value \(r=0.84\), hence being the only one to have a _very strong_ correlation with human evaluation [48].
### Computational Cost
We assessed the computational cost of _ACCA_ in assessing the code correctness. Since the method skips the symbolic execution process for all generated snippets that are identical to the ground truth or that are not syntactically correct, we performed a thorough analysis considering three different cases: the evaluation of all the predictions in the test set (i.e., 590 code snippets), the evaluation of the subset of generated snippets that do not match the ground truth (i.e., "PR \(\neq\) GT"), and evaluation of the subset of generated snippets that do not match the ground truth and are also syntactically correct (i.e., "PR \(\neq\) GT \(\&\) SYN=1").
Table 7 presents a detailed analysis of the computational cost of _ACCA_. The table shows the average and standard deviation of our method's cost, in terms of time (seconds) to assess a single code snippet, for each model.
| Evaluation | Seq2Seq | CodeBERT | CodeT5+ | PLBart | Average |
|---|---|---|---|---|---|
| Compilat. Acc. | 0.36 | 0.44 | 0.59 | 0.52 | 0.48 |
| BLEU-4 | 0.56 | 0.55 | 0.56 | 0.64 | 0.58 |
| SacreBLEU | 0.53 | 0.52 | 0.60 | 0.66 | 0.58 |
| Edit Distance | 0.67 | 0.61 | 0.75 | 0.76 | 0.70 |
| Exact Match | 0.60 | 0.67 | 0.55 | 0.61 | 0.61 |
| ChatGPT-NL | 0.42 | 0.42 | 0.37 | 0.44 | 0.41 |
| ChatGPT-GT | 0.67 | 0.61 | 0.70 | 0.68 | 0.67 |
| ACCA | 0.87 | 0.78 | 0.85 | 0.85 | 0.84 |

Table 6: Pearson correlation coefficient \(r\) with human evaluation. For every model, the best values are blue, while the worst are red.
Regarding the evaluation of the entire test set, the average time required to assess a snippet is, as expected, the lowest of the three cases, equal to \(\sim 0.17\)s. This is because predictions matching the ground truth are included in the analysis: in these cases, both the syntactic assessment and the symbolic execution for the semantic assessment are skipped.
When we consider the subset of samples in which PR \(\neq\) GT, i.e. when we exclude from our analysis the predictions that perfectly match the ground truth, the mean time per snippet increases on all four models. Indeed, in this case, _ACCA_ requires on average \(\sim 0.28\)s to assess the code correctness.
In the last scenario, PR \(\neq\) GT & SYN=1, the value increases again because snippets that were labeled as syntactically incorrect during the assembling process (see § 3.2) are excluded from the analysis. Therefore, this evaluation concerns only the predictions that went through the symbolic execution, i.e., all the evaluation steps of the proposed method. Overall, regardless of the model, _ACCA_ needs on average 0.34s to symbolically execute the code and perform the evaluation.
Another aspect that influences the total computational cost required for the analysis is the type of operation performed by the code snippet. For instance, while logical operations (e.g., and, xor, not) and instructions that handle register contents (e.g., inc, dec, mov) are computed quickly (i.e., \(\sim 0.25\)s), instructions used to iterate over a variable, to compare two registers, or to perform conditional jumps (e.g., cmp, loop, jns) are less time-efficient. This is because arithmetical and logical operations are often simpler to handle, as they involve basic bit-level manipulation. Contrarily, comparisons usually involve comparing values from different registers or memory locations, and conditional jumps depend on the result of these comparisons. This complexity can lead to longer execution times compared to simple logical operations.
|  | Seq2Seq | CodeBERT | CodeT5+ | PLBart | All Models |
|---|---|---|---|---|---|
| All predictions | 0.20\(\pm\)1.22 | 0.14\(\pm\)1.12 | 0.11\(\pm\)0.97 | 0.21\(\pm\)1.56 | 0.17\(\pm\)1.24 |
| PR \(\neq\) GT | 0.33\(\pm\)1.58 | 0.29\(\pm\)1.57 | 0.22\(\pm\)1.36 | 0.35\(\pm\)1.99 | 0.28\(\pm\)1.66 |
| PR \(\neq\) GT & SYN=1 | 0.37\(\pm\)1.67 | 0.33\(\pm\)1.68 | 0.27\(\pm\)1.51 | 0.46\(\pm\)2.30 | 0.34\(\pm\)1.82 |

Table 7: Avg. and std. dev. of the computational time (seconds) of _ACCA_ per snippet on the whole data, the subset of predictions not matching the reference, and the subset that is also syntactically correct.
Table 8 presents two examples of outliers in the computational cost analysis, _ACCA_'s result, and the time needed for their evaluation.
Both ground truth snippets perform similar operations: a comparison between two registers or a numerical value, a conditional jump based on the previous result, and an unconditional jump to a specific label. While in row # 1 the code generated by the model has the same code complexity, in the second one the prediction exhibits lower complexity since the last jump is missing (i.e., jmp while). Both predictions are classified as incorrect by _ACCA_ and take \(\sim 24\)s and \(\sim 12\)s, respectively.
Finally, to provide a context for the evaluation, we compared the computational cost of _ACCA_ with those of the output similarity metrics, which are automatic and time-saving solutions, and of the ChatGPT-based assessment solutions. Unsurprisingly, we found that the output similarity metrics provide an average estimate of similarity in a very limited amount of time (\(\sim 0.004\) seconds on average per snippet), ranging from \(0.001\) seconds for the exact match to \(\sim 0.01\) seconds for the SacreBLEU metric. ChatGPT is also time-efficient, needing only \(\sim 0.003\) seconds to evaluate the correctness of the generated code with respect to the code description (ChatGPT-NL) and \(\sim 0.001\) for the comparison between predicted and ground truth snippets (ChatGPT-GT). As a result, the computational costs of _ACCA_ are higher than those of the baseline solutions, since they depend on the non-negligible time needed to symbolically execute the binaries.
However, it is important to stress again that the output similarity metrics provide only an estimate of the code similarity rather than evaluating the code's correctness. Moreover, although ChatGPT provides limited computational time for the assessment, it is not an automated solution as it requires a non-trivial manual effort, including detailed instructions and several iterations with the human operator. On the contrary, our method is fully
| Ground Truth | Model's Prediction | ACCA | Time (s) |
|---|---|---|---|
| cmp BYTE al, 2 \n je do_inject \n jmp while | cmp al, 2 \n je while \n jmp do_inject | 0 | 23.92 |
| cmp ax, bx \n je l3 \n jmp while | cmp ax, bx \n je while | 0 | 11.98 |

Table 8: Examples of code snippets with a high computational cost.
automated as it does not require any human intervention for the assessment.
Finally, it is worth noticing that the computational times of _ACCA_ are definitely lower than the average time required by human analysts to manually inspect the code, based on our experience. Indeed, since the human analyst needs to check both the NL description and the code snippet predicted by the models, in our experiments, the assessment of the semantic correctness required \(\sim 20\) seconds on average per code snippet.
## 6 Related Work
**Automatic Program Evaluation.** Traditionally, the problem of automatic program assessment has been largely addressed for educational purposes, aiming to assist educators in the student work evaluation process. Insa and Silva [49; 50] presented a tool to assess Java programs by automatically validating different properties, such as the use of interfaces and class hierarchy. Romli _et al._[51] developed _FaSt-Gen_, a framework of test data generation to cover both the functional and structural testing of programs for automatic assessment. Li _et al._[52] leveraged random testing and dynamic symbolic execution (DSE), i.e., a software testing technique that simulates the execution of a program by providing symbolic inputs instead of concrete values. They generated test inputs and ran programs on these test inputs to compute values of behavioral similarity. Arifi _et al._[53] proposed a method to grade C programs in an educational context automatically. They measured the similarity between programs by comparing the outputs of their symbolic execution. _CASM-VERIFY_[54] is a tool to automatically check the equivalence of optimized assembly implementations of cryptographic algorithms. The tool decomposes the equivalence checking problem into several small sub-problems using a combination of concrete and symbolic evaluation. The use of symbolic execution to evaluate code similarity has been explored also for security applications. Luo _et al._[55] introduced a binary code similarity comparison method for code theft detection. Gao _et al._[56] presented _BinHunt_, a method to identify the semantic differences between an executable and its patched version, revealing the vulnerability that the patch eliminates. Scalabrino _et al._ focused on automatically assessing the understandability of code snippets by combining 121 existing and new metrics, including code-related, documentation-related, and developer-related metrics. They concluded, however, that these metrics are not suited to capture the complexity of code in practical applications. Ullah and Oh [57] proposed
a neural network-based solution to perform _binary diffing_ on x86 architecture binaries, i.e., the process of discovering the differences and similarities in functionality between two binary programs. Leveraging symbolic execution to check semantic equivalence has been proposed and used in the area of compiler validation since compilers should preserve semantics. For example, Bera _et al._[58] applied symbolic execution on the bytecode produced by the compilation with optimizations and that produced by the compilation without optimizations, in order to detect compiler bugs. Hawblitzel _et al._[59] detected compiler bugs by comparing assembly language outputs through symbolic execution. These solutions, however, require entire programs as input and do not work on portions of code (i.e., code snippets), which is often the case with AI-generated code since NMT is still far from generating entire complex functions, particularly in the context of offensive security.
**Programming Language Code-oriented Metrics.** In addition to state-of-the-art textual similarity metrics used as a baseline for the evaluation (see SS 4.3), recent work proposed a set of novel code-oriented metrics, i.e., metrics created ad-hoc for specific programming languages, to automatically assess the correctness of the generated code. Examples of code-oriented metrics are CodeBLEU [60] and RUBY [61], which were introduced to evaluate programs written in Java and C#. However, these solutions rely on deeper program analysis, including syntax and dataflow match, and require compilable code to function, which prevents them from being language-agnostic. Indeed, none of the available code-oriented metrics is designed for low-level programming languages such as assembly. Previous work on code generation also resorted to _functional correctness_ to evaluate the quality of the generated programs, where a code sample is considered correct if it passes a set of unit tests. Kulal _et al._[62] used an evaluation metric based on functional correctness to address the problem of producing correct code starting from pseudocode. They generated \(k\) code samples per problem and assessed the ratio of problems in which any of the \(k\) samples passed the set of unit tests. Chen _et al_[63] proposed pass@k, an unbiased and numerically stable implementation of this metric. They generated \(n\geq k\) samples per task (\(n=200\) and \(k\leq 100\)), counted the number of correct samples \(c\leq n\) that pass unit tests, and calculated an unbiased estimator to benchmark their models in the generation of Python programs from docstrings. To estimate the functional correctness of a program, however, a set of unit tests needs to be manually constructed. This requires a significant effort that is often unfeasible for large amounts of
generated code.
**AI Generative for Security.** Automatic exploit generation (AEG) research challenge consists of automatically generating working exploits [64]. This task requires technical skills and expertise in low-level languages to gain full control of the memory layout and CPU registers and attack low-level mechanisms (e.g., heap metadata and stack return addresses). Given their recent advances, AI-code generators have become a new and attractive solution to help developers and security testers in this challenging task. Liguori _et al._[33] released a dataset containing NL descriptions and assembly code extracted from software exploits. The authors performed an empirical analysis showing that NMT models can correctly generate assembly code snippets from NL and that in many cases can generate entire exploits with no errors. The authors extended the analysis to the generation of Python security-oriented code used to obfuscate software exploits from systems' protection mechanisms [2]. Yang _et al._[38] proposed a data-driven approach to software exploit generation and summarization as a dual learning problem. The approach exploits the symmetric structure between the two tasks via dual learning and uses a shallow Transformer model to learn them simultaneously. Yang _et al._[1] proposed a novel template-augmented exploit code generation approach. The approach uses a rule-based template parser to generate augmented NL descriptions and uses a semantic attention layer to extract and calculate each layer's representational information. The authors show that the proposed approach outperforms the state-of-the-art baselines from the previous studies of automatic code generation. Ruan _et al._[3] designed an approach for software exploit generation based on prompt tuning. The solution aids the generation process by inserting trainable prompt tokens into the original input to simulate the pre-training stage of the model to take advantage of its prior knowledge distribution. Xu _et al._[65] introduced an artifact-assisted AEG solution that automatically summarizes the exploit patterns from artifacts of known exploits and uses them to guide the generation of new exploits. The authors implemented AutoPwn, an AEG system that automates the generation of heap exploits for Capture-The-Flag _pwn_ competitions. Recent work also explored the role of GPT-based models, including ChatGPT and Auto-GPT, in the offensive security domain. Botacin [66] found that, by using these models, attackers can both create and deobfuscate malware by splitting the implementation of malicious behaviors into smaller building blocks. Pa _et al._[67] and [68] proved the feasibility of
generating malware and attack tools through the use of reverse psychology and _jailbreak prompts_, i.e., maliciously crafted prompts able to bypass the ethical and privacy safeguards for abuse prevention of AI code generators like ChatGPT. Gupta _et al._[68] also examined the use of AI code generators to improve security measures, including cyber defense automation, reporting, threat intelligence, secure code generation and detection, attack identification, and malware detection. All previous work uses state-of-the-art output similarity metrics or performs manual analysis to assess the correctness of AI-generated code/programs.
Our work is complementary to previous ones. Indeed, this work proposes a method that leverages symbolic execution to automatically assess the correctness of low-level code snippets used in security contexts. Since the method does not necessarily require full programs in inputs, it is suitable for assessing AI-generated code because they are often incomplete or non-compilable programs. Moreover, the proposed method does not require any human intervention, yet, differently from traditional text similarity metrics, which are commonly used to assess the performance of AI-generated code, its accuracy is comparable to human evaluation.
## 7 Conclusion
In this paper, we addressed the automatic assessment of the correctness of the code generated by AI code generators. We proposed a fully automated method, named _ACCA_, that uses symbolic execution to assess the correctness of security-oriented code without any human effort.
We used our method to evaluate the performance of four state-of-the-art code generators in the generation of offensive assembly from NL descriptions and compared the results with the human evaluation and different baseline solutions, including state-of-the-art output similarity metrics and the well-known ChatGPT.
Our experiments showed that _ACCA_ provides results that are very close to the human evaluation, which is considered the gold standard in the field, and that it is the assessment solution most correlated with it. Moreover, the analysis of the computational cost revealed that the time to perform the assessment of every code snippet is \(\sim 0.17\)s on average, which is lower than the average time required by human analysts to manually inspect the code, based on our experience. |
2305.03708 | On the analysis of Rayleigh-Bénard convection using Latent Dirichlet
Allocation | We apply a probabilistic clustering method, Latent Dirichlet Allocation
(LDA), to characterize the large-scale dynamics of Rayleigh-Bénard convection.
The method, introduced in Frihat et al. 2021, is applied to a collection of
snapshots in the vertical mid-planes of a cubic cell for Rayleigh numbers in
the range [10^6, 10^8]. For the convective heat flux, temperature and kinetic
energy, the decomposition identifies latent factors, called motifs, which
consist of connex regions of fluid. Each snapshot is modelled with a sparse
combination of motifs, the coefficients of which are called the weights. The
spatial extent of the motifs varies across the cell and with the Rayleigh
number. We show that the method is able to provide a compact representation of
the heat flux and displays good generative properties. At all Rayleigh numbers
the dominant heat flux motifs consist of elongated structures located mostly
within the vertical boundary layer, at a quarter of the cavity height. Their
weights depend on the orientation of the large-scale circulation (LSC). A
simple model relating the conditionally averaged weight of the motifs to the
relative strength of the corner rolls and of the large-scale circulation, is
found to predict well the average LSC reorientation rate. Application of LDA to
the temperature fluctuations shows that temperature motifs are well correlated
with heat flux motifs in space as well as in time, and to some lesser extent
with kinetic energy motifs. The abrupt decrease of the reorientation rate
observed at 10^8 is associated with a strong concentration of plumes impinging
layers onto the corners of the cell, which decrease the temperature difference
within the corner structures. It is also associated with a reinforcement of the
longitudinal wind through formation and entrainment of new plumes. | Bérengère Podvin, Laurent Soucasse, François Yvon | 2023-05-05T17:41:54Z | http://arxiv.org/abs/2305.03708v4 | # On the characterization of the convective heat flux in turbulent Rayleigh-Benard convection
###### Abstract
We propose to use a clustering method, Latent Dirichlet Allocation or LDA (Frihat et al. JFM 2021), to characterize the convective heat flux in turbulent Rayleigh-Benard convection. LDA provides a probabilistic decomposition of a collection of fields into latent factors that can be used as a generative model. The technique is applied to vertical mid-sections of a cubic cell for Rayleigh numbers in the range \([10^{6},10^{8}]\). We show that LDA provides a relevant representation of the convective heat flux and displays good generative properties. The latent factors identified by the decomposition, called motifs, consist of localized regions of fluid. The average size of the motifs decreases with the Rayleigh number and their spatio-temporal characteristics vary within the cell: frequent, elongated structures are found within the vertical boundary layers, while rounder, larger structures generally aligned with the velocity field are observed in the entrainment zone. At the lower Rayleigh numbers, large, infrequent structures are also present in the bulk, but spatial coherence is significantly reduced at the highest Rayleigh number \(Ra=10^{8}\). The most often detected motifs are centered at a distance of \(0.25H\) from the horizontal plate and consist of elongated structures within the vertical boundary layer. The prevalence of these motifs depends on the large-scale organization of the flow. Using a simple physical argument, we propose a model based on the characteristics of these motifs to predict the average reorientation rate of the large-scale circulation.
## 1 Introduction
Rayleigh-Benard convection, in which a fluid is heated from below and cooled from above, represents an idealized configuration to study thermal convection phenomena. These characterize a variety of applications ranging from industrial processes such as heat exchangers to geophysical flows in the atmosphere or the ocean. A central question is to
determine how the heat transfer depends on nondimensional parameters such as the Prandtl number \(Pr=\nu/\kappa\) where \(\nu\) is the kinematic viscosity and \(\kappa\) the thermal diffusivity, and the Rayleigh number
\[Ra=\frac{g\beta\Delta TH^{3}}{\nu a}, \tag{1}\]
where \(g\) is the gravity, \(\beta\) is the thermal expansion coefficient, \(\Delta T\) the temperature difference and \(H\) the cell dimension. The Grossmann & Lohse (2000)'s theory constitutes a unified approach to address this question. It is based on separating the contributions from the bulk averaged thermal and kinetic dissipation rate into two subsets, one corresponding to the boundary layers, and one corresponding to the bulk. This theory was further refined in Grossmann & Lohse (2004), where the thermal dissipation rate was split into a contribution from the plumes and a contribution from the turbulent background. Through the action of buoyancy, the thermal boundary layers generate plumes which create a large-scale circulation, as evidenced by Xi _et al._ (2004), also called "wind" (Castaing _et al._, 1989). The distribution of temperature fluctuations depends on plume clustering effects (Wang _et al._, 2022), but it is also affected by interaction with turbulent fluctuations in the bulk, resulting in fragmentation (Bosbach _et al._, 2012).
Shang _et al._ (2003) showed that thermal plumes carry most of the convective heat flux and that the plume-dominated regions were located near the sidewalls and the conducting surfaces. The morphology of plumes and its effect on the heat transfer have been given careful attention. The plumes have a sheet-like structure near the boundary layer and progressively become mushroom-like as they move into the bulk region (Zhou _et al._, 2007). Shishkina & Wagner (2008) found that very high values of the local heat flux were observed in regions where the sheet-like plumes merged, constituting "stems" for the mushroom-like plumes developing in the bulk. The relative contributions of the plumes and turbulent background vary with the Rayleigh number: Emran & Schumacher (2012) have shown that the fraction of plume-dominated regions decreases with the Rayleigh number, while that of background-dominated regions increases.
The extraction of plumes from the temperature fluctuations is therefore an essential step for the understanding of thermal convection flows. Several definitions have been used: some of the first criteria were based on the skewness of the temperature derivative (Belmonte & Libchaber, 1996) or the temperature difference (Zhou & Xia, 2010). Ching _et al._ (2004) have proposed to use simultaneous measurements of the temperature and the velocity to define the velocity of the plumes using conditional averaging. Following Huang _et al._ (2013), van der Poel _et al._ (2015) identified plumes from both a temperature anomaly and an excess of convective heat flux. Zhou _et al._ (2016) relied on cliff-ramp-like structures in the temperature signals to determine the spatial characteristics of plumes. Emran & Schumacher (2012) and Vishnu _et al._ (2022) separated the plume from the background regions based on a threshold on the convective heat flux. Shevkar _et al._ (2022) have recently proposed a dynamic criterion based on the 2-D velocity divergence to separate plumes from boundary layers.
As pointed out by Chilla & Schumacher (2012), this multiplicity of criteria illustrates the difficulty of identifying coherent structures in a consistent and objective manner, which is a long-running question in turbulent flows. To this end, Proper Orthogonal Decomposition (POD) (Lumley, 1967) has proven a useful tool to analyze large-scale fluctuations in Rayleigh-Benard convection. It has been used in particular to study reorientations of the large-scale circulation (Bailon-Cuba _et al._, 2010; Foroozani _et al._, 2017; Podvin & Sergent, 2015, 2017; Soucasse _et al._, 2019). Through spectral decomposition of the autocorrelation tensor, POD provides a basis of spatial modes, also called empirical modes, since they originate from the data. The modes are energetically optimal to reconstruct the fluctuations. The POD modes
typically have a global support, which is well suited to capture the large-scale organization of the flow. However, this can make physical interpretation difficult, as there is no straightforward connection between a mode and a local coherent structure: a local structure is represented by a superposition of many POD modes, a situation also observed in Fourier analysis.
As an alternative, Frihat _et al._ (2021) have recently adapted a probabilistic method that can extract localized latent factors in turbulent flow measurements. This method, Latent Dirichlet Allocation or LDA (Griffiths & Steyvers, 2002; Blei _et al._, 2003), was originally developed in the context of natural language processing, where it aims to extract topics from a collection of documents. In this framework, documents are represented by a non-ordered set of words taken from a fixed vocabulary. A word count matrix can be built for the collection, where each column corresponds to a document, each line corresponds to a vocabulary word and the matrix entry represents the number of times the word appears in the document. LDA provides a probabilistic decomposition of the word count matrix, based on latent factors called _topics_. Topics are defined by two distributions: the distribution of topics within each document (each document is associated with a mixture of topics, the coefficients of the mixture sum up to one) and the distribution of vocabulary words with each topic (each topic is represented by a combination of words, the coefficients of which also sum up to one).
The method has been adapted for turbulent flows as follows: we consider a collection of snapshots of a scalar field taken over a 2D domain discretized into cells. The equivalent of a document is therefore a snapshot, and the cells (or snapshot pixels) constitute the vocabulary. The digitized values of the scalar field over the cells in a snapshot are gathered into a vector which is formally analogous to a column of the word count matrix. The "topics" produced by the decomposition, called _motifs_, correspond to fixed (in the Eulerian sense), spatially coherent regions of the flow. The method was applied to the analysis of the turbulent Reynolds stress in wall turbulence (Frihat _et al._, 2021) and to pressure anomalies (Fery _et al._, 2022), where it was found to be well suited for the representation of sparse (intermittent) data.
In this paper we examine how LDA can help characterize the spatial organization of the convective heat flux and its dependence on the Rayleigh number. To this end, the technique is applied to 2D sections of a cubic Rayleigh-Benard cell in the range of Rayleigh number \([10^{6},10^{8}]\). The numerical configuration and the data set are described in Section 2. Proper Orthogonal Decomposition (POD) and Latent Dirichlet Allocation (LDA) are respectively presented in Section 3 and 4. We examine in Section 5 how LDA compares with POD and the extent to which it is able to capture the general features of the heat flux. The spatial characteristics of the motifs are detailed in Section 6. We then show in Section 7 how the signature of the large-scale organization of the flow can be tracked in the dominant motifs. Our results are summarized in Section 8.
## 2 Numerical setting
### Set-up
The numerical setup and associated datasets are the same as those used in Soucasse _et al._ (2019, 2021). The configuration studied is a cubic Rayleigh-Benard cell filled with air, with isothermal horizontal walls and adiabatic side walls. The air is assumed to be transparent and thermal radiation effects are disregarded. Direct numerical simulations have been performed at various values of the Rayleigh number. The Prandtl number is set to 0.707. All physical quantities are made dimensionless using the cell size \(H\), the reference time \(H^{2}/(a\sqrt{Ra})\) and the reduced temperature \(\theta=(T-T_{0})/\Delta T\), \(T_{0}\) being the mean temperature between hot and cold walls. Spatial coordinates are denoted \(x\), \(y\), \(z\) (\(z\) being the vertical direction) and the origin is placed at a bottom corner of the cube.
The Navier-Stokes equations under the Boussinesq approximation are solved using a Chebyshev collocation method (Xin & Le Quere, 2002; Xin _et al._, 2008). Computations are parallelized using domain decomposition along the vertical direction. Time integration is performed through a second-order semi-implicit scheme. The velocity divergence-free condition is enforced using a projection method. Numerical parameters are given in Table 1 for the four considered Rayleigh numbers \(Ra=\{10^{6};3\ 10^{6};10^{7};10^{8}\}\). We have checked that the number of collocation points is sufficient to accurately discretize the boundary layers according to the criterion proposed by Shishkina _et al._ (2010). A total of 1000 snapshots has been extracted from the simulations for each Rayleigh number at a sampling period of 10 (at \(Ra=\{10^{6};3\ 10^{6};10^{7}\}\)) or 5 (at \(Ra=10^{8}\)), in dimensionless time units. It is worth noting that the time separation between the snapshots is sufficient to describe the evolution of the large-scale circulation but is not suited for a fine description of the plume emission or of the reorientation process. For each Rayleigh number, a dataset satisfying the statistical symmetries of the flow was then constructed from these 1000 snapshots, as will be described in the next section.
### Construction of the data set
At each Rayleigh number, the data set consisted of a collection of \(N_{S}=1000\) heat flux snapshots \(w(\underline{x},t_{k})\theta(\underline{x},t_{k})\), \(k=1,\ldots,N_{S}\). We note that due to the velocity reference scale, the non-dimensional heat flux varies like \(NuRa^{-1/2}\). As in Soucasse _et al._ (2019), the data set was first enriched by making use of the statistical symmetries of the flow (Puigjaner _et al._, 2008). In the cubic Rayleigh-Benard cell, four quasi-stable states are available for the flow for this Rayleigh number range: the large-scale circulation settles in one of the two diagonal planes of the cube with clockwise or counterclockwise motion. The evolution of the large-scale circulation can be tracked through that of the \(x\) and \(y\) components of the angular momentum of the cell \(\underline{L}=\int(\underline{x}-\underline{x_{0}})\times\underline{u}\,d\underline{x}\) with respect to the cell center \(\underline{x_{0}}\). As Figure 1 shows at \(Ra=10^{7}\), the angular momentum along each horizontal direction oscillates near a quasi-steady position for long periods of time - several hundreds of convective time scales, before experiencing a rapid switch (\(\mathcal{O}(10)\) convective time scales) to the opposite value, which corresponds to a reorientation. Reorientations from one state to another occur during the time sequence but the states are not necessarily visited equally often. In order to counteract this bias, we have built enlarged snapshot sets, obtained by the action of the symmetry group of the problem on the original snapshot sets. The symmetries are based on four independent symmetries \(S_{x}\), \(S_{y}\), \(S_{z}\) and \(S_{d}\) with respect to the planes \(x=0.5H\), \(y=0.5H\), \(z=0.5H\) and \(x=y\). This generates a group of 16 symmetries for the cube, which should lead to a 16-fold increase in the number of snapshots. However, since we will exclusively consider the vertical mid-planes \(x=0.5H\) and \(y=0.5H\), which are invariant planes for respectively \(S_{x}\) and
\begin{table}
\begin{tabular}{l l l l l} \(Ra\) & \((N_{x},N_{y},N_{z})\) & \(N_{S}\) & \(\Delta t\) & \(\delta_{BL}\) \\ \hline \(10^{6}\) & (81,81,81) & 1000 & 10 & 0.056 \\ \(3\ 10^{6}\) & (81,81,81) & 1000 & 10 & 0.042 \\ \(10^{7}\) & (81,81,81) & 1000 & 10 & 0.0297 \\ \(10^{8}\) & (161,161,161) & 1000 & 5 & 0.0167 \\ \end{tabular}
\end{table}
Table 1: Characteristics of the datasets at various Rayleigh numbers: spatial resolution \(N_{x}\), \(N_{y}\), \(N_{z}\) in each direction of space, number of snapshot \(N_{S}\), snapshot sampling period \(\Delta t\) and boundary layer thickness \(\delta_{BL}\).
\(S_{y}\), the increase is reduced. The data set aggregates 1000 snapshots on each of the planes \(x=0.5H\) and \(y=0.5H\), each of which undergoes a vertical flip, a horizontal flip and a combination of the two, yielding a total of \(N_{s}=8000\) snapshots.
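As an illustration, this augmentation step reduces to simple array flips; the sketch below assumes the flux snapshots \(w\theta\) in one mid-plane are stored as a NumPy array of shape \((N_{S},N_{y},N_{z})\). Array and function names are ours, not the authors'.

```python
import numpy as np

def augment_by_flips(snapshots):
    """Apply the identity, a vertical flip, a horizontal flip and their combination
    to each (N_y, N_z) snapshot, yielding a 4-fold enlarged set (8-fold with both mid-planes)."""
    out = []
    for s in snapshots:                                  # snapshots: shape (N_S, N_y, N_z)
        out.extend([s, s[::-1, :], s[:, ::-1], s[::-1, ::-1]])
    return np.array(out)
```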
The LDA technique requires transforming the data into a non-negative, integer field. The signal defined on a grid of \(\tilde{N}_{C}\) cells was digitized using a rescaling factor \(s\) and positive and negative values were split onto two distinct grids, leading to a field defined on \(N_{C}=2\tilde{N}_{C}\) cells. We thus define
\[\Phi_{k}(\underline{x}_{j})=\Phi(\underline{x}_{j},t_{k})=\text{Max}\left[\text{Int}\left[s\,w(\underline{x}_{j},t_{k})\theta(\underline{x}_{j},t_{k})\right],0\right] \tag{1}\]
\[\Phi_{k}(\underline{x}_{j+\tilde{N}_{C}})=\Phi(\underline{x}_{j+\tilde{N}_{C}},t_{k})=\text{Max}\left[-\text{Int}\left[s\,w(\underline{x}_{j},t_{k})\theta(\underline{x}_{j},t_{k})\right],0\right] \tag{2}\]
where \(s>0\), \(k\in[1,N_{S}]\), \(j\in[1,\tilde{N}_{C}]\), and \(\underline{x}_{j}\) represents the \(j^{\text{th}}\) cell location on the plane \(x=0.5H\) or \(y=0.5H\). We note that throughout the paper, the total field will directly be represented on the physical grid of size \(\tilde{N}_{C}\) from the renormalized difference \([\Phi(\underline{x}_{j},t_{k})-\Phi(\underline{x}_{j+\tilde{N}_{C}},t_{k})]/s\).
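The digitization of equations (1)-(2) and the inverse mapping back to the physical grid can be sketched in a few lines; we assume the flux snapshots are flattened into an array of shape \((N_{S},\tilde{N}_{C})\), and we take Int[\(\cdot\)] as truncation towards zero (the rounding convention is our assumption).

```python
import numpy as np

def digitize_snapshots(flux, s=600):
    """Eqs. (1)-(2): split the signed flux w*theta into two non-negative integer half-grids."""
    q = np.trunc(s * flux).astype(int)                        # Int[s w theta], truncation assumed
    return np.hstack([np.maximum(q, 0), np.maximum(-q, 0)])   # shape (N_S, 2*N_C_tilde)

def to_physical_grid(phi, s=600):
    """Renormalized difference [Phi(x_j) - Phi(x_{j+N_C_tilde})]/s on the physical grid."""
    n = phi.shape[1] // 2
    return (phi[:, :n] - phi[:, n:]) / s
```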
## 3 POD analysis
### Method
Proper Orthogonal Decomposition (Holmes _et al._, 2002) makes it possible to write a collection of \(N_{S}\) spatial fields \(\Phi(\underline{x}_{j},t_{k})\) defined on \(N_{C}\) grid points, as a superposition of spatial modes \(\phi_{n}(\underline{x})\), the amplitude of which varies in time:
\[\Phi(\underline{x}_{j},t_{k})=\sum_{n=1}^{N_{S}}a_{n}(t_{k})\phi_{n}( \underline{x}_{j}), \tag{3}\]
with \(k\in[1,N_{S}]\) and \(j\in[1,N_{C}]\). The amplitudes \(a_{n}(t_{k})\) are solutions of the eigenvalue problem
\[\sum_{p=1}^{N_{S}}C_{kp}a_{n}(t_{p})=\lambda_{n}a_{n}(t_{k}), \tag{4}\]
where \(C\) is the temporal autocorrelation matrix
\[C_{kp}=\frac{1}{N_{S}}\sum_{j=1}^{N_{C}}\Phi(\underline{x}_{j},t_{k})\Phi( \underline{x}_{j},t_{p}) \tag{5}\]
The eigenvalues \(\lambda_{n}\), such that \(\lambda_{1}>\lambda_{2}>\lambda_{3}>\ldots\), represent the respective contribution of the modes to the total variance. For any \(p\), the reconstruction based on the \(p\) most energetic modes minimizes the \(L_{2}\)-norm error between the set of snapshots and its projection onto a basis of size \(p\).
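A compact implementation of the snapshot POD of equations (3)-(5) is sketched below; with this convention the temporal amplitudes are orthonormal and the energy is carried by the spatial modes, which may differ from the normalization adopted in the paper. Variable names are ours.

```python
import numpy as np

def pod_snapshot_method(phi):
    """Snapshot POD: phi has shape (N_S, N_C). Returns eigenvalues (decreasing), temporal
    amplitudes a_n(t_k) (columns of a) and spatial modes phi_n(x_j) (rows of modes)."""
    N_S = phi.shape[0]
    C = (phi @ phi.T) / N_S                 # temporal autocorrelation matrix, eq. (5)
    lam, a = np.linalg.eigh(C)              # eigenvalue problem of eq. (4)
    order = np.argsort(lam)[::-1]
    lam, a = lam[order], a[:, order]
    modes = a.T @ phi                       # spatial modes; with orthonormal a, phi = a @ modes
    return lam, a, modes

def pod_reconstruct(a, modes, p):
    """Rank-p reconstruction keeping the p most energetic modes."""
    return a[:, :p] @ modes[:p]
```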
### Application to the convective heat flux
POD is applied to the digitized signal \(\Phi\) defined in equations (1) and (2). The first three POD modes and POD coefficients are shown in Figure 2 for \(Ra=10^{7}\), where black vertical and horizontal lines indicate the thickness of the boundary layers. We checked that the first mode corresponds to the mean flow. The mode is most important in a region close to the wall, with a maximum within the vertical boundary layer at a height of about \(z\sim 0.1H\). The second mode corresponds to a dissymmetry between the vertical sides and is most important at mid-height in the region outside the boundary layers. The third mode is antisymmetric in both the vertical and the horizontal directions. It is maximum at the edge of the vertical
boundary layers, at a vertical distance of about \(0.25H\) from the horizontal surfaces. The pattern it is associated with corresponds to a more intense flux along a diagonal (bottom of one side and top of the opposite side) and a less intense flux along the opposite diagonal. As evidenced by application of a moving average performed over 200 convective time units (about 4 recirculation times \(T_{c}\), as was determined in Soucasse _et al._ (2019)), the evolution of the amplitude at large time scales matches that of the horizontal angular momentum components \(L_{x}\) and \(L_{y}\) (compare with Figure 1), unlike the two dominant modes. This mode therefore appears to be the signature of the large-scale circulation, where the flux is more intense in the lower corner of the cell as hot plumes rise on one side and in the upper corner of the opposite side of the cell as cold plumes go down.
## 4 Latent Dirichlet Allocation
### Principles
We briefly review the principles of Latent Dirichlet Allocation and refer the reader to Frihat _et al._ (2021) for more details. LDA is an inference approach to identify latent factors in a collection of observed data, which relies on Dirichlet distributions as priors. We first recall the definition of a Dirichlet distribution, which is a multivariate probability distribution over the space of multinomial distributions. It is parameterized by a vector of positive-valued parameters \(\underline{\alpha}=(\alpha_{1},\alpha_{2},\ldots,\alpha_{N})\) as follows
\[p(\theta_{1},...,\theta_{N};\alpha_{1},...,\alpha_{N})=\frac{1}{B(\underline {\alpha})}\prod_{n=1}^{N}\theta_{n}^{\alpha_{n}-1}, \tag{1}\]
where \(B\) is a normalizing factor, which can be expressed in terms of the Gamma function \(\Gamma\):
\[B(\underline{\alpha})=\frac{\prod_{n=1}^{N}\Gamma(\alpha_{n})}{\Gamma(\sum_{ n=1}^{N}\alpha_{n})}. \tag{2}\]
The components \(\{\alpha_{n},n=1\ldots N\}\) of \(\underline{\alpha}\) control the sparsity of the distribution: values of \(\alpha_{n}\) larger than unity correspond to evenly dense distributions, while values lower than unity correspond to sparse distributions. Here \(\theta\) will represent either the motif-cell distribution or the snapshot-motif distribution.
As mentioned above, the data to which LDA is applied consists of a collection of non-negative, integer fields that are defined in equations (1) and (2). For each snapshot \(k\), the integer value \(\Phi_{k}(\underline{x}_{j})\) measured at cell \(j\) is interpreted as an integer count of the cell \(j\). The key is to interpret this integer count as the number of times cell \(j\) appears in the description of snapshot \(k\). A snapshot \(k\) is therefore defined as a list of tuples of the form \((\underline{x}_{j},\Phi_{k}(\underline{x}_{j}))\).
The main assumptions of LDA are the following:
1. Each snapshot consists of a mixture of \(N_{T}\) latent factors called motifs. \(N_{T}\) is a user-defined parameter (analogous to a number of clusters).
2. Each motif \(n\) is associated with a multinomial distribution over the grid cells \(\psi_{n}\) so that the probability to observe the \(j^{\text{th}}\) grid cell located at \(\underline{x}_{j}\) given the motif \(n\) is \(\psi_{n}(x_{j})\). The distribution \(\psi_{n}\) is modelled with a Dirichlet prior parameterized with an \(N_{C}\)-dimensional vector \(\underline{\eta}\). Low values of \(\eta_{l}\) mean that the motif is distributed over a small number of cells.
3. Each snapshot \(k\) is associated with a distribution \(b^{k}\) over motifs such that the probability that motif \(n\) is present in snapshot \(k\) is \(b_{n}^{k}\), which we will rewrite as \(b_{n}^{k}=b_{n}(t_{k})\). This distribution is modelled with a \(N_{T}\)-dimensional Dirichlet distribution of parameter \(\underline{\alpha}\). The magnitude of \(\alpha\) characterizes the sparsity of the distribution. Low values of \(\alpha_{l}\) mean that relatively few motifs are observed in the snapshot.
### Implementation
The snapshot-motif distribution \(b^{k}\) and the motif-cell distribution \(\psi_{n}\) are determined from the observed snapshots \(\Phi_{k}(\underline{x})\) and constitute \(N_{T}\)- and \(N_{C}\)-dimensional categorical distributions. Finding the distributions \(b^{k}\) and \(\psi_{n}\) that are most compatible with the observations constitutes an inference problem. The problem can be solved either with a Markov chain Monte-Carlo (known as MCMC) algorithm such as Gibbs sampling (Griffiths & Steyvers, 2002), or by a variational approach (Blei _et al._, 2003), which aims to minimize the Kullback-Leibler divergence between the true posterior and its variational approximation. In both cases, the computational complexity of the problem is of the order of \(\mathcal{O}(N_{C}N_{S}N_{T})\).
The solution _a priori_ depends on the number of motifs \(N_{T}\) as well as on the values of the Dirichlet parameters \(\underline{\alpha}\) and \(\underline{\eta}\). Special attention was therefore given to establish the robustness of the results reported here. Non-informative default values were used for the Dirichlet parameters i.e. the prior distributions were taken with symmetric parameters equal to \(\forall n,\alpha_{n}=1/N_{T}\) and \(\forall l,\eta_{l}=1/N_{C}\). Practical implementation was performed in Python using gensim (Rehurek & Sojka, 2011). No significant change was observed in the results when the value of the quantization \(s\) was high enough (however it had to be kept reasonably low in order to limit the computational time). Although multiple tests were carried out for varying values of \(s\in[40,600]\), all results reported in this paper were obtained with \(s=600\). Analyses were also performed for varying numbers of motifs \(N_{T}\), ranging from 50 to 400.
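A minimal sketch of the inference step with gensim is given below. The corpus is built from the digitized field `phi` (an assumed array of shape \((N_{S},N_{C})\) with non-negative integer entries); apart from the symmetric priors quoted above, the hyperparameter values (number of passes, random seed) are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from gensim.models import LdaModel

N_T = 100
N_C = phi.shape[1]                                             # phi: digitized snapshots, (N_S, N_C)
corpus = [[(j, int(c)) for j, c in enumerate(row) if c > 0] for row in phi]
id2word = {j: str(j) for j in range(N_C)}                      # cell indices play the role of words
lda = LdaModel(corpus, id2word=id2word, num_topics=N_T,
               alpha=[1.0 / N_T] * N_T, eta=1.0 / N_C,         # non-informative symmetric priors
               passes=10, random_state=0)
psi = lda.get_topics()                                         # motif-cell distributions, (N_T, N_C)
b = np.zeros((len(corpus), N_T))                               # snapshot-motif distributions (prevalences)
for k, doc in enumerate(corpus):
    for n, p in lda.get_document_topics(doc, minimum_probability=0.0):
        b[k, n] = p
```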
### LDA as a generative process
The standard generative process performed by LDA with \(N_{T}\) motifs is the following.
1. For each motif \(n\), a \(N_{C}\)-dimensional cell-motif distribution \(\psi_{n}\) is drawn from the Dirichlet distribution of parameter \(\underline{\eta}\).
2. To generate snapshot \(k\):
 (a) an \(N_{T}\)-dimensional snapshot-motif distribution \(b^{k}\) is drawn according to a Dirichlet distribution parameterized by \(\underline{\alpha}\);
 (b) a total integer count \(\Phi_{T}(t_{k})=\sum_{j}\Phi(\underline{x}_{j},t_{k})\) is drawn. This number corresponds to the total number of cell integer counts associated with snapshot \(k\). \(\Phi_{T}(t_{k})\) is typically sampled from a Poisson distribution that matches the statistics of the original database;
 (c) for each \(i=1,\ldots,\Phi_{T}(t_{k})\): -- a motif \(n\) is selected from \(b(t_{k})\) (since \(b_{n}(t_{k})\) represents the probability that motif \(n\) is present in the \(k^{\text{th}}\) snapshot) -- once this motif \(n\) is chosen, a cell \(j\) is selected from \(\psi_{n}\) (since \(\psi_{n}(\underline{x}_{j})\) represents the probability that cell \(j\) is present in motif \(n\)).
The snapshot \(k\) then represents the set of \(\Phi_{T}\) cells \(j\) that have been drawn and can be reorganized as a list of \(N_{C}\) cells with integer counts \(\Phi(\underline{x}_{j},t_{k})\).
In fluid mechanics applications (Frihat _et al._, 2021; Fery _et al._, 2022), sampling from the motif-cell distribution (step (c)) can be replaced with a faster step, where the contribution of each motif \(n\) to snapshot \(k\) is directly obtained from the motif-cell distribution \(\psi_{n}\) and the distribution \(b(t_{k})\) as \(\Phi_{T}(t_{k})b_{n}(t_{k})\psi_{n}(\underline{x}_{j})\). Figure 3 illustrates the LDA generative process on a \(4\times 3\) grid for three topics.
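The generative process can be written explicitly as follows; `psi` is the \((N_{T},N_{C})\) array of motif-cell distributions, `alpha` the Dirichlet parameter and `phi_T_mean` the mean total count, all assumed given. The second function is the faster, non-sampled variant used in the fluid-mechanics applications mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_snapshot(psi, alpha, phi_T_mean):
    """Steps (a)-(c): draw a prevalence vector, a total count, then motif and cell labels."""
    N_T, N_C = psi.shape
    b = rng.dirichlet(alpha)                       # (a) snapshot-motif distribution
    phi_T = rng.poisson(phi_T_mean)                # (b) total integer count
    counts = np.zeros(N_C, dtype=int)
    for n in rng.choice(N_T, size=phi_T, p=b):     # (c) motif label for each draw
        counts[rng.choice(N_C, p=psi[n])] += 1     #     cell label conditioned on the motif
    return counts, b

def generate_snapshot_fast(psi, b, phi_T):
    """Expected counts Phi_T * sum_n b_n psi_n(x_j) instead of sampling each draw."""
    return phi_T * (b @ psi)
```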
### Interpretation and evaluation criteria
By construction, the decomposition identifies _fixed_ regions of space over which the intensity of the scalar field is important at the same time. There is therefore no direct connection between the motifs and plumes, which are Lagrangian structures travelling and possibly changing in shape and orientation through the cell. LDA motifs only aim to detect the Eulerian signature of these structures.
Each motif \(n\) can be characterized in space through the motif-cell distribution \(\psi_{n}\) (which integrates to 1 over the cells), which will sometimes be referred to as the motif in the absence of ambiguity. Each distribution has a maximum value \(\psi_{n}^{max}\) reached at a location \(\underline{x}_{n}^{max}\), such that \(\psi_{n}(\underline{x}_{n}^{max})=\psi_{n}^{max}\). One can also define a characteristic area \(\Sigma_{n}\) using
\[\Sigma_{n}=\int_{\Omega}\mathbb{1}_{\{\psi_{n}\geqslant\psi_{n}^{max}/\sqrt{e}\}}d\Omega \tag{10}\]
where \(\Omega\) represents the plane of analysis and the factor \(1/\sqrt{e}\simeq 0.606\) is an arbitrary threshold chosen by analogy with a Gaussian distribution (it is the relative value reached one standard deviation away from the maximum). Each motif can also be characterized in time through the snapshot-motif distribution \(b_{n}^{k}\), which we will also call _prevalence_ throughout the paper. The motifs can be ordered by their time-averaged prevalence \(\langle b_{n}\rangle=\frac{1}{N_{S}}\sum_{k=1}^{N_{S}}b_{n}(t_{k})\), where \(\langle\cdot\rangle\) represents a time average.
A reconstruction of the field can be obtained by using the inferred motif-cell distribution and snapshot-motif distribution to provide what we will call the LDA-Reconstructed field, defined as
\[\Phi_{R}(\underline{x}_{j},t_{k})=\Phi_{T}\sum_{n=1}^{N_{T}}b_{n}(t_{k})\psi_ {n}(\underline{x}_{j}) \tag{11}\]
where \(\Phi_{T}\) represents the sum of the field values over the cells. To evaluate the relevance of the decomposition, one can compute for each snapshot \(k\) the instantaneous correlation coefficient \(C_{k}\) between a given field \(\Phi\) and its reconstruction \(\Phi_{R}\) defined as
\[C_{k}(\Phi,\Phi_{R})=\frac{\int\Phi(\underline{x},t_{k})\Phi_{R}(\underline{x },t_{k})\underline{dx}}{\left(\int\Phi^{2}(\underline{x},t_{k})\underline{dx }\int\Phi_{R}^{2}(\underline{x},t_{k})\underline{dx}\right)^{1/2}} \tag{12}\]
A global measure of the reconstruction is then given by \(\langle C\rangle=\frac{1}{N_{S}}\sum_{k=1}^{N_{S}}C_{k}\), the average value of \(C\) over all snapshots.
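These two evaluation quantities translate directly into array operations; the sketch below assumes `phi` and `b` are the \((N_{S},N_{C})\) digitized field and the \((N_{S},N_{T})\) prevalences, with `psi` the \((N_{T},N_{C})\) motif-cell distributions, and takes \(\Phi_{T}\) as the per-snapshot total count.

```python
import numpy as np

def lda_reconstruct(b, psi, phi):
    """LDA-Reconstructed field: Phi_R[k, j] = Phi_T(t_k) * sum_n b[k, n] * psi[n, j]."""
    phi_T = phi.sum(axis=1, keepdims=True)            # total count of each snapshot
    return phi_T * (b @ psi)

def correlation_coefficient(phi, phi_R):
    """Instantaneous correlation C_k and its average <C> over the snapshots."""
    num = (phi * phi_R).sum(axis=1)
    den = np.sqrt((phi ** 2).sum(axis=1) * (phi_R ** 2).sum(axis=1))
    C_k = num / den
    return C_k, C_k.mean()
```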
## 5 Evaluation of LDA for reconstruction and generation
### Reconstruction
We first evaluate to what extent the LDA decomposition provides an adequate reconstruction of the heat flux. Figure 4 (left) shows how the instantaneous value of the correlation coefficient \(C_{k}\left(\Phi,\Phi_{R}\right)\) depends on the integral of the field \(I_{k}=\int\Phi(\underline{x},t_{k})d\underline{x}\). The Rayleigh number considered is \(Ra=10^{7}\) and the number of topics is \(N_{T}=100\), but the same trend was observed for all other Rayleigh numbers as well as all other values of \(N_{T}\). Lower values of the correlation were associated with lower values of the total integrated heat flux, which illustrates that the LDA representation is suited to capture extreme events.
Figure 4 (right) presents the mean correlation coefficient \(\langle C\rangle\) for different numbers of motifs and different Rayleigh numbers on the vertical planes. The minimum value, obtained for the lowest number of topics and the highest Rayleigh number, was 0.8, which shows the relevance of the decomposition. Unsurprisingly, the correlation increases with the number of topics. It also decreases with the Rayleigh number, which is consistent with an increase in the complexity of the flow.
Figure 5 compares an original snapshot at \(Ra=10^{7}\) (based on the digitized signal) with different reconstructions: i) the LDA-reconstruction based on \(N_{T}=100\) motifs, ii) the reconstruction limited to the 20 most prevalent topics (for this particular snapshot), iii) the POD-based reconstruction based on the first 20 modes. By construction, POD provides
the best approximation of the field for a given number of modes. Since the distribution of the heat flux is intermittent in space and time, only a limited number of motifs is necessary to reconstruct the flow. We note that little difference was observed between the full LDA reconstruction and the reconstruction limited to the 20 most prevalent motifs, which highlights the intermittent nature of the field. The relative error between the original and the reconstructed field is 29% for the full LDA reconstruction, and 34% when only the 20 most prevalent motifs are retained in the reconstruction. In contrast, limiting the POD to 20 global modes slightly lowers the quality of the reconstruction, with a global error of 38%. It should be noted that the 20 dominant POD modes correspond to an average over all snapshots, while the 20 most prevalent LDA motifs are selected for that specific snapshot. On average, the reconstructed field based on keeping the 20 most prevalent motifs differed by less than 10% from the full 100-motif reconstruction and the average correlation coefficient \(C\) decreased from 0.89 to 0.83. This shows that LDA can provide a compact representation of the local heat flux.
### Generation
The ability to generate statistically relevant synthetic fields is of interest for a number of applications, such as accelerating computations or developing multi-physics models. As a generative model, LDA makes it possible to produce such a set of fields, the statistics of which can be compared with those of the original fields used to extract the motifs, as well as with those of the corresponding LDA-Reconstructed fields. It would also be useful to compare the generated LDA data set with one generated using POD. To this end, we generated two sets of 4000 new fields using both LDA and POD. The same number \(N_{T}=100\) of POD modes and LDA motifs was used to generate the datasets. The plane in which the data is generated is assumed to be the \((y,z)\) plane. The different fields to be compared are therefore the following:
1. the original (digitized) field \(\Phi\) defined in section 2.2 with equations (1) and (2)
2. the LDA-Reconstructed field (LDA-R) as defined in equation (4)
3. the LDA-Generated field (LDA-G): the field is constructed by sampling prevalences \(\tilde{b}_{n}(t_{k})\) from snapshot-motif distributions and then reconstructing \[\Phi^{LDA-G}(\underline{x}_{j},t_{k})=\Phi_{T}\sum_{n=1}^{N_{T}}\tilde{b}_{n} (t_{k})\psi_{n}(\underline{x}_{j}),\] (5)
where \(\Phi_{T}\) is a random variable obtained by sampling a Poisson distribution with the same statistics as the original database.
4. the POD-Generated field (POD-G): the field is constructed by independently sampling \(N_{T}\) POD mode amplitudes \(\tilde{a}_{n}\) from the POD amplitudes of the original database \[\Phi^{POD-G}(\underline{x}_{j},t_{k})=\sum_{n=1}^{N_{T}}\tilde{a}_{n}(t_{k}) \phi_{n}(\underline{x}_{j})\] (6)
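A possible reading of the two generators is sketched below: LDA-G resamples prevalence vectors from the inferred set (drawing them from the fitted Dirichlet prior would be an equally plausible choice) together with a Poisson total count, while POD-G samples each mode amplitude independently from its empirical distribution. The inputs (`psi`, `b`, `phi_T`, `a`, `pod_modes`) are the quantities introduced in the previous sections; the resampling details are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def generate_lda(psi, b, phi_T, n_new):
    """LDA-G: resampled prevalences and Poisson total counts, then eq. (5)."""
    b_new = b[rng.integers(0, len(b), size=n_new)]
    phi_T_new = rng.poisson(phi_T.mean(), size=n_new)
    return phi_T_new[:, None] * (b_new @ psi)

def generate_pod(pod_modes, a, n_new):
    """POD-G: each amplitude sampled independently from its empirical distribution, then eq. (6)."""
    a_new = np.stack([rng.choice(a[:, n], size=n_new) for n in range(a.shape[1])], axis=1)
    return a_new @ pod_modes
```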
The time-averaged fields corresponding to the different databases are compared in Figure 6. A good agreement is observed for all datasets, with global errors of 4%, 8% and 3% for respectively the LDA-reconstructed, the LDA-generated and the POD-generated datasets. Although it provides the lowest error (as could be expected), the POD-generated data set overestimates negative values in the core of the cell.
For a given location \((y_{0},z_{0})\), we defined spatial autocorrelation functions in the horizontal and vertical directions as:
\[R_{y}(y,y_{0},z_{0})=\frac{\langle\Phi(y,z_{0},t)\Phi(y_{0},z_{0},t)\rangle}{ \langle\Phi(y_{0},z_{0},t)^{2}\rangle} \tag{7}\]
\[R_{z}(z,y_{0},z_{0})=\frac{\langle\Phi(y_{0},z,t)\Phi(y_{0},z_{0},t)\rangle}{ \langle\Phi(y_{0},z_{0},t)^{2}\rangle} \tag{10}\]
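The two autocorrelation functions can be evaluated directly from the snapshots; the sketch below assumes the field is stored as an array of shape \((N_{S},N_{y},N_{z})\) and that \((j_{0},k_{0})\) are the grid indices of the reference point \((y_{0},z_{0})\).

```python
import numpy as np

def spatial_autocorrelations(field, j0, k0):
    """R_y(y; y0, z0) and R_z(z; y0, z0) from snapshots of shape (N_S, N_y, N_z)."""
    ref = field[:, j0, k0]                              # time series at the reference point
    norm = np.mean(ref ** 2)
    R_y = np.mean(field[:, :, k0] * ref[:, None], axis=0) / norm
    R_z = np.mean(field[:, j0, :] * ref[:, None], axis=0) / norm
    return R_y, R_z
```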
The autocorrelation functions are displayed in Figure 7 for the selected locations indicated in Figure 6, which correspond to regions of high heat flux. We can see that in all cases, the flux remains correlated over much longer vertical extents than in the horizontal direction. Both the LDA-reconstructed and the POD-generated autocorrelations approximate the original data well - again, by construction, POD-based fields are optimal to reconstruct second-order statistics. The LDA-generated autocorrelation is not as close to the original one, but still manages to capture the characteristic spatial scale over which the fields are correlated.
One-point pdfs of the convective flux are represented in Figure 8 for the same selected locations (again, indicated in Figure 6). POD-generated fields tend to overpredict lower values of the convective flux while underpredicting higher values and therefore do not capture well the intermittent features of the heat flux. The LDA-generated fields display a better agreement with the original fields and are in particular able to capture the exponential tails of the distributions.
## 6 Spatial organization of the motifs
We now describe the spatial organization of the motifs through the motif-cell distribution \(\psi_{n}\). The general trends reported below held for all values of \(N_{T}\) considered, which ranged from 50 to 400. For all Rayleigh numbers, most LDA motifs were found to be associated with a positive flux (i.e. were associated with the first \(\tilde{N}_{C}\) cells in the decomposition). Negative (counter-gradient) motifs were also identified, but their average prevalence was generally very small (at most 10% of that of the dominant motif). We therefore chose to focus only on the motifs with a positive contribution to the heat flux. Figure 9 (left) displays these motifs for three different Rayleigh numbers for \(N_{T}=100\). The motif-cell distribution is materialized by a black line corresponding to the iso-probability contour of \(0.606\psi_{n}^{max}\), which can be compared with the average value of the heat flux at this location. For all Rayleigh numbers, the motifs are clustered in the regions of high heat flux, close to the vertical walls. Within the vertical boundary layers, motifs are elongated in shape. Outside the vertical boundary layers, the motifs are more isotropic and tend to increase in size as one moves away from the walls. Outside the horizontal boundary layers, the motif-cell distributions are elongated in the direction of the wind, with a horizontal orientation in the center of the cell, and a gradual vertical shift closer to the walls. Large motifs are found in the bulk at \(Ra=10^{6}\) and \(Ra=10^{7}\) (it was also the case at \(Ra=3\ 10^{6}\)). In contrast, fewer, smaller motifs are found in the bulk at \(Ra=10^{8}\) in the central region \(y\in[0.2,0.8]H\), signalling a loss of spatial coherence in the bulk at this Rayleigh number.
In general, the motif size seems to decrease with the Rayleigh number. This is confirmed by Figure 10, which represents the average motif area as a function of their distance from the vertical walls. In order to avoid the influence of the horizontal plates, we only considered the motifs located at a vertical distance larger than \(0.07H\) from the horizontal walls (i.e. outside the horizontal entrainment region). The size of the symbols shown in the picture is proportional to the fraction of motifs over which the average was performed. Results were relatively robust with respect to the number of topics \(N_{T}\), although some dependence on \(N_{T}\) is observed in the center of the cell. Within the boundary layer, the motif area grows quadratically, which means that the characteristic size of the motif is proportional to the wall distance. We note that a similar scaling was found for turbulent eddies in pressure-gradient driven turbulence such as channel flow (Frihat _et al._, 2021). Further away from the vertical wall, after a short plateau at the edge of the boundary layer, a slower increase in the motif
size was observed with a rate that increased with the Rayleigh number, so that the motif area was about the same (on the order of \(0.02H^{2}\)) for all Rayleigh numbers in the center of the cell. This suggests the presence of a double scaling for the motifs: one based on the boundary layer thickness, and one based on the cell size. The decrease in size with the Rayleigh number appears consistent with a dependence on the boundary layer thickness but also with an increase of the fragmentation by the bulk turbulent fluctuations, in agreement with the literature (Bosbach _et al._ 2012; van der Poel _et al._ 2015). The difference observed at the highest Rayleigh number also signals that the flow is still evolving and has not reached an asymptotic state.
## 7 Dominant motifs
### Description
Owing to the symmetry of the database, the motifs in the vertical plane \((x,z)\) (resp. \((y,z)\)) should approximate the symmetry \(x\to 1-x\) (resp. \(y\to 1-y\)), and \(z\to 1-z\) (complete symmetry cannot be expected owing to the stochastic nature of the decomposition). The four dominant motifs at \(Ra=10^{7}\) are represented in Figure 11 (left) for \(N_{T}=100\). They consist of elongated structures lying mostly in the boundary layer, and located at a vertical distance of about 0.25 H from the horizontal walls. They can be interpreted as the wall imprint of hot plumes rising in the boundary layer (resp. cold plumes descending in the boundary layer). Although the positions and sizes of the four motifs may slightly vary from one to the other, their features are generally similar and a characteristic motif can be obtained from taking the average over all four motifs. Figure 11 (right) represents this characteristic motif for the various Rayleigh numbers. We can see that the dominant motifs are always located mostly within the boundary layer, with a maximum at a height of about 0.25 \(H\). Their characteristic width \(l_{y}\) was found to decrease as \(Ra^{-0.23\pm 0.04}\), which matches the scaling of the boundary layer thickness.
The evolution of the snapshot-motif distribution, or motif prevalence, is represented in Figure 12 for \(Ra=10^{7}\). We can see that the behavior of the motif prevalence depends on the sign of the global angular momentum represented in Figure 1. When a moving average of \(T_{m}=150\) time units, corresponding to 3 recirculation times, was applied, two quasi-stationary states could be identified in each plane (and are materialized by the dashed horizontal black lines indicated in Figure 12). The two states appear to correspond to the two possible directions of the angular momentum component. Streamlines of the flow conditionally averaged on the higher prevalence value of \(b_{1}\) are represented in Figure 13 (left). They indicate that for the higher characteristic value of the prevalence, which we will denote \(b_{+}\), the motif is associated with the large-scale circulation while it is associated with the corner vortex on the opposite side for the lower prevalence value, denoted as \(b_{-}\), as summarized in Figure 13 (right).
This indicates that information about the large-scale reorientation can be extracted from local measurements. Two states, \(I_{+}\) and \(I_{-}\), respectively corresponding to the large-scale circulation and corner vortex can be defined from the prevalence of the dominant motif \(b_{1}\) using
\[I_{+}=\{k|\langle b_{1}(t_{k})\rangle_{T_{m}}>\langle b_{1}\rangle\}\text{ and }I_{-}=\{k|\langle b_{1}(t_{k})\rangle_{T_{m}}<\langle b_{1}\rangle\} \tag{1}\]
where \(\langle\cdot\rangle_{T_{m}}\) represents the moving average over \(T_{m}\). The average prevalences conditioned on \(I_{+}\) and \(I_{-}\) are respectively \(b_{+}\) and \(b_{-}\).
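The splitting into \(I_{+}\) and \(I_{-}\) amounts to thresholding a moving average of the dominant-motif prevalence; a sketch is given below, where `b1` is the (assumed) prevalence time series, `dt` the snapshot sampling period and `T_m` the averaging window.

```python
import numpy as np

def conditional_states(b1, T_m, dt):
    """Boolean masks selecting the I_+ and I_- snapshots (moving average above/below <b1>)."""
    win = max(1, int(round(T_m / dt)))
    smoothed = np.convolve(b1, np.ones(win) / win, mode="same")
    plus = smoothed > b1.mean()
    return plus, ~plus

plus, minus = conditional_states(b1, T_m=150.0, dt=10.0)
b_plus, b_minus = b1[plus].mean(), b1[minus].mean()   # conditional prevalences b_+ and b_-
```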
Figure 14 displays the histogram of the prevalence of the dominant motif \(b_{1}\) (motifs 2 to 4 displayed similar features). At all Rayleigh numbers, the total distribution is characterized by two distinct lobes, which correspond to the absence and the presence of the motif in the snapshot. The relative importance of the lobes therefore provides an indirect measure of the
motif intermittency, which can be related to plume emission. The ratio of motif presence to motif absence was about 0.5-0.6 in the range of Rayleigh numbers considered - no significant variation was observed with the Rayleigh number.
However, further insights can be obtained by examining the respective contributions of the \(I_{+}\) and \(I_{-}\) states to the distribution of \(b_{1}\), which are also represented in Figure 14. For all Rayleigh numbers, \(I_{+}\) states contribute more to the higher-value lobe than \(I_{-}\) states, while \(I_{-}\) contributes more to the lower-value lobe. This suggests that plumes are emitted at a lower frequency in the corner rolls than in the large-scale circulation. Moreover, the relative contributions of the \(I_{+}\) and the \(I_{-}\) states vary non-monotonically with the Rayleigh number. In the higher-value lobe, the contribution of \(I_{-}\) relative to \(I_{+}\) is largest at \(Ra=3\;10^{6}\), where \(I_{-}\) accounts for more of the high values, while at \(Ra=10^{8}\) the \(I_{-}\) contributions shift towards lower values. In the lower-value lobe, the contribution of \(I_{+}\) is least at \(Ra=3\;10^{6}\) and largest at \(Ra=10^{8}\). These observations suggest that both the intensity of the large-scale circulation and that of the corner roll appear to change with the Rayleigh number, in agreement with the findings of Vishnu _et al._ (2020).
### A model for the reorientation time scale
A simple model can be made to link these observations with the dynamics of reorientations. We can assume that the conditionally averaged prevalence of the dominant motif in the region close to the wall \(b_{\pm}\) is directly proportional to the emission rate of plumes, which can be modelled as a Poisson point process. This means that the time separating two plume ejections \(T_{\pm}\) follows an exponential distribution with mean \(1/b_{\pm}\), where \(+\) and \(-\) respectively characterize the large-scale circulation (\(I_{+}\)) and the corner vortex (\(I_{-}\)) states. \(b_{\pm}\) therefore represents the parameter of the exponential distribution. A reorientation can be associated with the event where the corner vortex becomes stronger than the large-scale circulation state, i.e. the time separating two emissions in the corner vortex state becomes smaller than that separating two emissions in the large-scale circulation state. This event can occur independently in either one of the two horizontal directions \(x\) or \(y\).
One can show that the probability \(p\) that this event occurs at any given time is given by
\[p=p(T_{-}<T_{+})=\frac{b_{-}}{b_{+}+b_{-}} \tag{7.2}\]
Owing to the memoryless nature of the exponential distribution, this holds for the time separating an arbitrary number of emissions, in particular over a characteristic time \(T_{s}\) sufficiently long to reverse the circulation in that direction. \(T_{s}\) should be on the order of the recirculation time \(T_{c}\) so that we have \(T_{s}=\beta T_{c}\) with \(\beta\lessgtr 1\). If \(f_{c}\) is the recirculation frequency, one would then expect the frequency between reorientations \(f_{r}\) to depend on \(p\) and \(f_{c}\) following
\[f_{r}=2p\beta^{-1}f_{c}, \tag{7.3}\]
where the factor 2 comes from the fact that a reorientation can occur in each direction. Figure 15 (right) compares for different Rayleigh numbers the probability \(p\) with the ratio of the frequency between reorientations and the recirculation frequency estimated in (Soucasse _et al._, 2021). We see that a very good agreement is obtained between the variations of the average reorientation rate and the measure of the relative intensity of the large-scale circulation and corner vortices. We note that the largest discrepancy is observed for the highest Rayleigh number, for which the reorientation rate is very low and therefore cannot be determined with good precision from the DNS. The value of \(\beta\) used in the figure was determined empirically and was found to be \(1/5.6\sim 0.18\). \(T_{s}=\beta T_{c}\) is therefore close to the characteristic time it takes for a fluid particle to travel up the vertical side of the cell, which
can be estimated as \(T_{c}/(2(1+\sqrt{2}))\sim 0.21T_{c}\) (see Soucasse _et al._ (2019)). This suggests that an estimate for the reorientation rate can be obtained by comparing directly the average prevalence of the motif associated with the large-scale circulation with that of its counterpart in the corner structure. This could be of particular interest in cases where the observation time is smaller than the expected reorientation time, a situation that is often encountered in - but not limited to - numerical simulations.
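Numerically, the model reduces to two lines; given the conditional prevalences \(b_{+}\) and \(b_{-}\), the recirculation frequency \(f_{c}\) and the empirical factor \(\beta\), it returns the reorientation frequency of equation (7.3).

```python
def reorientation_frequency(b_plus, b_minus, f_c, beta=1 / 5.6):
    """Eqs. (7.2)-(7.3): probability that the corner-vortex interval is shorter, times 2*f_c/beta."""
    p = b_minus / (b_plus + b_minus)
    return 2.0 * p * f_c / beta
```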
## 8 Conclusion
We have applied a new analysis technique, Latent Dirichlet Allocation, to characterize the spatio-temporal organization of the local heat flux. As a soft clustering method, LDA extracts probabilistic latent factors in the form of spatially localized motifs, from a collection of instantaneous fields. The technique was applied to the vertical mid-plane of a Rayleigh-Benard cubic cell in a range of Rayleigh numbers in \([10^{6},10^{8}]\). The method was found to be relatively robust with respect to the user-defined parameters. It was able to provide good reconstructions of the original database and to generate new data sets that captured key statistics of the snapshots.
The motif size and prevalence were found to vary across the cell. Different regions could be identified: the vertical boundary layers with frequent, elongated motifs, an entrainment region, where motifs had a rounder shape, but were generally oriented in the direction of the large-scale circulation, and the bulk region with infrequent motifs. A lack of spatial coherence was observed in the bulk at the highest Rayleigh number. The size of the motifs generally decreased with the Rayleigh number.
For all Rayleigh numbers, the dominant motifs consisted of elongated vertical structures located mostly within the vertical boundary layer, at a height of \(0.25H\). The width of these motifs scaled with the boundary layer thickness. The prevalence of motifs was found to depend on the large-scale organization of the flow: two states could be identified - one corresponding to the large-scale circulation and one to a corner roll structure. The two states were characterized by different average prevalences which varied non-monotonically with the Rayleigh number. A simple model was able to relate the prevalences of the dominant motif associated with the two states with the average reorientation rate of the large-scale circulation in the cell. This suggests that the model could be used as a predictor of this rate in cases where few or even no reorientations are observed.
|
2306.02146 | Finite-frequency noise, Fano factor, $ΔT$-noise and
cross-correlations in double quantum dots | A theoretical study on electrical current fluctuations in a double quantum
dot connected to electronic reservoirs is presented, with the aim of deriving
the finite-frequency noise, the Fano factor and the $\Delta T$-noise. We
establish a general expression for the noise in terms of Green functions in the
double quantum dot and self-energies in the reservoirs. This result is then
applied to model double quantum dots in various situations. For a
non-interacting double quantum dot, we have highlighted several interesting
features in the physical properties of this system. In particular, we have
demonstrated the possibility of obtaining a significant reduction in
zero-frequency noise and Fano factor either when the system is placed in a
given operating regime, or when a temperature gradient is applied between the
two reservoirs, resulting in a negative $\Delta T$-noise being generated. In
addition, in the vicinity of honeycomb vertices, a sign change is observed in
the finite-frequency cross-correlator between the two reservoirs, in contrast
to what is obtained for the zero-frequency cross-correlator, which remains
negative throughout the $(\varepsilon_1,\varepsilon_2)$-plane,
$\varepsilon_{1,2}$ being the level energies in each of the two dots. By using
an approximate first-level numerical approach, we finally study how the
finite-frequency noise in a double quantum dot evolves under the influence of
Coulomb interactions. | A. Crépieux, T. Q. Duong, M. Lavagna | 2023-06-03T16:25:12Z | http://arxiv.org/abs/2306.02146v2 | # Fano factor, \(\Delta T\)-noise and cross-correlations in double quantum dots
###### Abstract
We present a theoretical study of electrical current fluctuations and finite-frequency noise in a double quantum dot connected to two electron reservoirs with the aim of deriving the Fano factor, the \(\Delta T\)-noise and the cross-correlations. This allows one to highlight several interesting features. Firstly the possibility of getting a significant reduction of current noise and Fano factor either when the system is placed in a given operating regime, or when a temperature gradient is applied between the two reservoirs, resulting from the fact that a negative \(\Delta T\)-noise is generated. The second feature is the sign change found in the cross-correlator between the two reservoirs with increasing frequencies. This study clarifies the understanding of the results obtained experimentally in such systems.
_Introduction_ - The reduction of electrical noise in double quantum dots is a major issue if one wishes to finely control the electric charge transfer and improve the performance and the quality factor, in spin-qubits in particular[1; 2; 3]. There are some theoretical studies devoted to the characterization of current fluctuations and electrical noise in double quantum dots[4], but they are most often limited to the calculation of noise at zero frequency[5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18] or use perturbative approaches[19; 20; 21] based mainly on master equation technique. However we now have experimental works studying fluctuations in double quantum dots, built from GaAs/AlGaAs or Si/SiGe heterostructures, that are expected to undergo a considerable development in the coming years. Part of these works are looking for the optimal conditions to suppress the sensitivity of the device to electrical noise, in order to obtain long-lived and high-fidelity spin qubits[1; 2; 3], while others are devoted to the measurement of electrical current cross-correlations[22] or to the measurement of entropy fluctuations[23]. All these experimental works provide the motivation for developing further theoretical studies of electrical noise and current fluctuations in double quantum dots.
In this letter, we present a non-perturbative approach to determine the finite-frequency noise in a double quantum dot system using the non-equilibrium Green function technique. After outlining the formalism used to model the double quantum dot connected to left and right electron reservoirs, we give the results for the general expression of the noise. We numerically calculate the noise and the Fano factor in various geometries: double quantum dot connected either in series or in parallel, and discuss the results. We next focus our interest on the \(\Delta T\)-noise produced when the two reservoirs are raised to different temperatures[24; 25; 26; 27; 28; 29]. We show that the latter quantity exhibits specific characteristics in its evolution[30] that have never been demonstrated before in double quantum dots. Finally, we determine the cross-correlator between the left and right reservoirs and compare the obtained results with the experimental ones [22].
_Model and results_ - The Hamiltonian of two coupled quantum dots connected to left (L) and right (R) reservoirs is given by
\[\widehat{\mathcal{H}}=\sum_{\begin{subarray}{c}\alpha=L,R\\ k\in\alpha\end{subarray}}\varepsilon_{\alpha k}\widehat{c}_{\alpha k}^{\ \dagger}\widehat{c}_{\alpha k}+\sum_{\begin{subarray}{c}i=1,2\\ n\in i\end{subarray}}\varepsilon_{in}\widehat{d}_{in}^{\dagger}\widehat{d}_{in}\] \[+\sum_{\begin{subarray}{c}n\in 1\\ m\in 2\end{subarray}}\mathcal{V}_{12}\widehat{d}_{2m}^{\dagger}\widehat{d}_{1n}+ \sum_{\begin{subarray}{c}\alpha=L,R\\ k\in\alpha\end{subarray}}\sum_{\begin{subarray}{c}i=1,2\\ n\in i\end{subarray}}V_{i\alpha}\widehat{c}_{\alpha k}^{\ \dagger}\widehat{d}_{in}+h.c.\, \tag{1}\]
where \(\widehat{c}_{\alpha k}^{\ \dagger}\) (\(\widehat{c}_{\alpha k}\)) is the creation (annihilation) operator related to the reservoir \(\alpha\) with momentum \(k\) and energy \(\varepsilon_{\alpha k}\), and \(\widehat{d}_{in}^{\ \dagger}\) (\(\widehat{d}_{in}\)) is the creation (annihilation) operator related to the dot \(i\), with \(i=1,2\). Each dot \(i\) contains \(N\) discrete energy levels denoted \(\varepsilon_{in}\), with \(n\in[1,N]\). The notation \(h.c.\) corresponds to the hermitian conjugate terms associated with the third and fourth contributions in Eq. (1). One assumes that the inter-dot coupling \(\mathcal{V}_{12}\) between the states \(|1n\rangle\) and \(|2m\rangle\) in the dots, and the hopping integral \(V_{i\alpha}\) between the states \(|in\rangle\) in the dot \(i\) and \(|\alpha k\rangle\) in the reservoir \(\alpha\) do not depend on the indices \(n\) and \(m\), nor on the momentum \(k\).
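Since the Hamiltonian (1) is quadratic in the fermionic operators, it is fully specified by its one-body matrix. The sketch below builds that matrix for finite reservoirs; the basis ordering and array names are our choices, not the authors'.

```python
import numpy as np

def dqd_one_body_matrix(eps_L, eps_R, eps_1, eps_2, V12, V_1L, V_1R, V_2L, V_2R):
    """One-body matrix of Eq. (1), basis ordered as [L modes | R modes | dot-1 levels | dot-2 levels].
    eps_* are arrays of level energies; the couplings are scalars (k- and n-independent)."""
    nL, nR, n1, n2 = len(eps_L), len(eps_R), len(eps_1), len(eps_2)
    dim = nL + nR + n1 + n2
    H = np.zeros((dim, dim), dtype=complex)
    np.fill_diagonal(H, np.concatenate([eps_L, eps_R, eps_1, eps_2]))
    sL, sR, s1, s2 = 0, nL, nL + nR, nL + nR + n1
    H[sL:sL + nL, s1:s1 + n1] = V_1L        # V_{1L} c^dag_{Lk} d_{1n}
    H[sL:sL + nL, s2:s2 + n2] = V_2L        # V_{2L} c^dag_{Lk} d_{2n}
    H[sR:sR + nR, s1:s1 + n1] = V_1R        # V_{1R} c^dag_{Rk} d_{1n}
    H[sR:sR + nR, s2:s2 + n2] = V_2R        # V_{2R} c^dag_{Rk} d_{2n}
    H[s2:s2 + n2, s1:s1 + n1] = V12         # inter-dot coupling V_12 d^dag_{2m} d_{1n}
    return H + H.conj().T - np.diag(H.diagonal())   # add the hermitian-conjugate terms
```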
The finite-frequency non-symmetrized noise in the DQD is defined as the Fourier transform of the current fluctuations, \(\mathcal{S}_{\alpha\beta}(\omega)=\int_{-\infty}^{\infty}\langle\Delta\hat{I}_{\alpha}(t)\Delta\hat{I}_{\beta}(0)\rangle e^{-i\omega t}dt\), where \(\Delta\hat{I}_{\alpha}(t)=\hat{I}_{\alpha}(t)-\langle\hat{I}_{\alpha}\rangle\) is the deviation of the current \(\hat{I}_{\alpha}(t)\) from its average value \(\langle\hat{I}_{\alpha}\rangle\). The calculations lead to an expression of the form \(\mathcal{S}_{\alpha\beta}(\omega)=\frac{e^{2}}{\hbar}\sum_{i=1}^{5}\int_{-\infty}^{\infty}\mathrm{Tr}\big\{\cdots\big\}\), a sum of five contributions written in terms of the Green functions in the double quantum dot and the self-energies in the reservoirs; Eqs. (9) and (10) denote the resulting expressions for the auto-correlator \(\mathcal{S}_{LL}(\omega)\) and the cross-correlator \(\mathcal{S}_{LR}(\omega)\).
The auto-correlator \(\mathcal{S}_{RR}(\omega)\) and cross-correlator \(\mathcal{S}_{RL}(\omega)\) are obtained by interchanging the indices \(L\) and \(R\) in Eq. (9) and Eq. (10), respectively. These results are a generalization to double quantum dots of the results obtained for a single dot[32; 33]. The main differences are the expressions themselves of the Green functions and of the transmission amplitudes and coefficients, and the presence of matrix products rather than scalar products. Eqs. (9) and (10) apply for both serial and parallel double quantum dots since one can freely play with the values of the dot-reservoir couplings \(\Gamma_{\alpha,ij}\). In the following we consider only symmetrical dot-reservoir couplings, meaning that \(\Gamma\equiv\Gamma_{L,11}=\Gamma_{R,22}\) and \(\Gamma_{L,22}=\Gamma_{R,11}=0\) for a serial double dot, and \(\Gamma\equiv\Gamma_{\alpha,ij}\), \(\forall\{\alpha,i,j\}\), for a parallel double dot.
_Discussion -_ We first study the Fano factor, defined as \(\mathcal{F}_{\alpha}=\mathcal{S}_{\alpha\alpha}(0)/(eI_{\alpha})\), in order to identify the regions where the zero-frequency noise is reduced compared to the current, i.e. such that one has \(\mathcal{F}_{\alpha}\ll 1\). Figure 1 displays the color-scale plot of the Fano factor associated with the L-reservoir, \(\mathcal{F}_{L}\), as a function of bias voltage \(V=\mu_{L}-\mu_{R}\) and detuning energy \(\varepsilon_{d}=\varepsilon_{2}-\varepsilon_{1}\) for a double quantum dot in series with \(N=3\) energy levels in each dot (\(n=1,2,3\)). It shows that when the energy \(\varepsilon_{1}\) of dot 1 is aligned with \(\mu_{L}+\mu_{R}\), here equal to zero since we set \(\mu_{L}=V/2\) and \(\mu_{R}=-V/2\), then the value of \(\mathcal{F}_{L}\) at low voltage is most of the time about 0.5 or larger, except in some narrow purple stripes along the lines \(\varepsilon_{d}=n\varepsilon_{0}\) when \(V>0.5\) (see left panel of Fig. 1). On the contrary, when the energy \(\varepsilon_{1}\) of dot 1 is not aligned with \(\mu_{L}+\mu_{R}\) (the most probable situation), one observes that the value of \(\mathcal{F}_{L}\) is strongly reduced at low voltage for some values of the detuning energy (see the purple color regions in the right panel of Fig. 1), leading to the possibility of reducing the noise, relative to the current, even at low voltage. In the case of a double quantum dot arranged in parallel, the situation is opposite. Indeed, it is when the detuning energy is aligned with \(\mu_{L}+\mu_{R}\) that one observes a strong reduction of noise at low voltage (see the purple color regions in the left panel of Fig. 2), whereas for \(\varepsilon_{d}\neq\mu_{L}+\mu_{R}\), one has \(\mathcal{F}_{L}\approx 0.5\) at low voltage (see the right panel of Fig. 2), meaning that the noise is high compared to the current. These results help to identify the regions of the \((V,\varepsilon_{d})\)-plane where one has a noise reduction.
We now turn our interest to the \(\Delta T\)-noise defined as \(\Delta\mathcal{S}_{\alpha\alpha}=\mathcal{S}_{\alpha\alpha}^{\delta T}(0)-\mathcal{S}_{\alpha\alpha}^{0}(0)\), where \(\mathcal{S}_{\alpha\alpha}^{\delta T}(0)\) is the zero-frequency auto-correlator at zero voltage when the reservoirs are raised to distinct temperatures, i.e., \(T_{L}=T+\delta T/2\) and \(T_{R}=T-\delta T/2\), and \(\mathcal{S}_{\alpha\alpha}^{0}(0)\) is the zero-frequency auto-correlator calculated for identical reservoir temperatures, i.e., \(T_{L,R}=T\). Figure 3 displays the color-scale plots of the \(\Delta T\)-noise in a double quantum dot in series for two different sets of parameters. It shows that in the regime where \(\Gamma\gtrsim T\), the \(\Delta T\)-noise remains positive (see left panel of Fig. 3), whereas in the regime where \(\Gamma\lesssim T\), the sign of the \(\Delta T\)-noise changes (see right panel of Fig. 3), meaning that the noise is reduced in some regions of the \((\varepsilon_{1},\varepsilon_{2})\)-plane when a temperature gradient between the two reservoirs is applied. The fact that one has a change of behavior in the \(\Delta T\)-noise when reducing \(\Gamma\) is in perfect agreement with what has been obtained and generically explained in Ref. [30], with a change of behavior at \(\Gamma/T\approx 2.6\), principally due to the fact that the energy dependencies of the transmission amplitude \(\underline{t_{\alpha\alpha}}(\varepsilon)\) and transmission coefficients \(\underline{\mathcal{T}_{\alpha\alpha}}(\varepsilon)\) become relevant when \(\Gamma\lesssim T\). In the case of a double quantum dot in parallel, such a change of behavior in the \(\Delta T\)-noise, between the regime where \(\Gamma\lesssim T\) and the regime where \(\Gamma\gtrsim T\), is also observed (see Fig. 4). In order to further understand this phenomenon, we have reported in Fig. 5 the evolution of the \(\Delta T\)-noise minimum as a function of the \(\Gamma/T\) ratio for three different quantum dot geometries: single dot, double dot in series, and double dot in parallel. We see that the value of this minimum is negative at low \(\Gamma/T\) ratio and converges to zero at \(\Gamma/T\approx 3\) whatever the geometry is. Again, this variation is in agreement with previous results on \(\Delta T\)-noise obtained for quantum systems characterized by energy-dependent transmission[30]. This is an important result because it means that there is an operating regime in which the noise can be reduced by applying a temperature gradient between the right and left reservoirs, in both single and double quantum dot systems.
Figure 2: Same as in Fig. 1 for a double quantum dot in parallel.
Finally, we study the finite-frequency cross-correlator for a double quantum dot in series in order to make a comparison with the experimental observations presented in Ref. [22]. Figure 6 shows the cross-correlator \(\mathcal{S}_{LR}(\omega)\) near the honeycomb vertex located in the central region of the \((\varepsilon_{1},\varepsilon_{2})\)-plane, at both zero-frequency (\(\omega=0\)) and finite-frequency (\(\omega=0.1\)).
At zero-frequency, the cross-correlator is negative in sign (see left panel of Fig. 6) as expected since one has \(\mathcal{S}_{LR}(0)=-\mathcal{S}_{LL}(0)\) with strictly positive auto-correlator, whereas at finite-frequency the sign of the cross-correlator becomes positive in some regions of the \((\varepsilon_{1},\varepsilon_{2})\)-plane (see right panel of Fig. 6). Remarkably, and this deserves to be emphasized, the entire evolution of the cross-correlator that we obtained at finite-frequency is in perfect agreement with the experimental results presented in Ref. [22] which show a sign change. And identically to what is experimentally observed, one obtains a vanishing finite-frequency cross-correlator at zero voltage due to the fact that at low temperature the system can not emit noise at frequency larger than the voltage[34; 35], so that for \(\hbar\omega>eV\), one has \(\mathcal{S}_{LR}(\omega)=\mathcal{S}_{LL}(\omega)=0\).
_Summary_ - We have derived expressions for the auto-correlators and cross-correlators of the current fluctuations in a double quantum dot which apply to any frequency, voltage and temperature values, whatever the values of inter-dot and dot-reservoirs couplings are. They allow to highlight specific features such as the reduction of the noise and of the Fano factor, as well as the possibility of having a negative \(\Delta T\)-noise, meaning that the noise can be even more reduced by applying a temperature gradient between the two reservoirs. Moreover, it leads to behavior for the cross-correlator which is in perfect agreement with the experimental measurements[22] with a change of sign near a honeycomb vertex. The approach presented in this Letter can be extended to take electron-electron and electron-photon interactions into account, which are all the more important to be able to describe realistic situations of double quantum dot systems, as experimentally studied. Indeed, in the limit where two-electron processes are negligible compared to single-electron ones, one can insert the Green functions of the double quantum dot, calculated in the presence of interactions, into the formula we have obtained for the finite-frequency noise in order to study their effects. This has been done successfully in the case of a single quantum dot in the Kondo regime[33] and has allowed to explain
Figure 4: Same as in Fig. 3 for a double quantum dot in parallel.
the main features of the experimental curves[36], so it would be worthwhile to do it for a double quantum dot as well.
|
2305.14274 | Error Basis and Quantum Channel | The Weyl operators give a convenient basis of $M_n(\mathbb{C})$ which is also
orthonormal with respect to the Hilbert-Schmidt inner product. The properties
of such a basis can be generalised to the notion of a nice error basis(NEB), as
introduced by E. Knill. We can use an NEB of $M_n(\mathbb{C})$ to construct an
NEB for $Lin(M_n(\mathbb{C}))$, the space of linear maps on $M_n(\mathbb{C})$.
Any linear map on $M_n(\mathbb{C})$ will then correspond to a $n^2\times n^2$
coefficient matrix in the basis decomposition with respect to such an NEB of
$Lin(M_n(\mathbb{C}))$. Positivity, complete (co)positivity or other properties
of a linear map can be characterised in terms of such a coefficient matrix. | B. V. Rajarama Bhat, Purbayan Chakraborty, Uwe Franz | 2023-05-23T17:23:56Z | http://arxiv.org/abs/2305.14274v1 | # Error basis and quantum channel
###### Abstract.
The Weyl operators give a convenient basis of \(M_{n}(\mathbb{C})\) which is also orthonormal with respect to the Hilbert-Schmidt inner product. The properties of such a basis can be generalised to the notion of a nice error basis (NEB), as introduced by E. Knill [3]. We can use an NEB of \(M_{n}(\mathbb{C})\) to construct an NEB for \(\operatorname{Lin}(M_{n}(\mathbb{C}))\), the space of linear maps on \(M_{n}(\mathbb{C})\). Any linear map will then correspond to an \(n^{2}\times n^{2}\) coefficient matrix in the basis decomposition with respect to such an NEB of \(\operatorname{Lin}(M_{n}(\mathbb{C}))\). Positivity, complete (co)positivity or other properties of a linear map can be characterised in terms of such a coefficient matrix.
## 1. Introduction
The Pauli matrices played a key role in modelling quantum error correction for one-qubit systems. E. Knill generalised the properties of Pauli matrices to higher dimensions to introduce the notion of nice error basis in 1996 [3]. Since then it has played a fundamentally important role in quantum error correction. A nice error basis (NEB) gives a convenient orthonormal basis of \(M_{n}(\mathbb{C})\) constructed from a projective representation of a group known as an index group. Knill in his paper showed that the existence of such a basis of \(M_{n}(\mathbb{C})\) is equivalent to the existence of an irreducible character of a suitable group which vanishes outside its center. The discrete Weyl relation [5, Equation (46)] \(UV=qVU\), where \(q\) is an \(n\)-th root of unity, and its \(n\)-dimensional realisation
\[Ue_{k}=e_{k+1\operatorname{mod}n},\quad Ve_{k}=q^{k}e_{k},\quad k=0,1,\ldots,n-1,\]
with \(\{e_{0},\ldots,e_{n-1}\}\) an orthonormal basis of \(\mathbb{C}^{n}\), were discussed by Hermann Weyl when he studied the equivalence between Heisenberg's matrix mechanics and Schrodinger's wave mechanics in 1927. Knill used these discrete Weyl type operators to construct an NEB from the group \(\mathbb{Z}_{p}^{2k}\) where \(p\) is prime. Later Klappenecker and Rotteler [1] and then Parthasarathy [9] constructed a similar NEB from the index group \(\mathbb{Z}_{n}\times\mathbb{Z}_{n}\) and Parthasarathy used them for quantum error correcting codes.
Recently, X. Huang, T. Zhang et al. [12] have used the Weyl operators as a basis of \(M_{n}(\mathbb{C})\) to describe a quantum state and give a necessary condition for separability of a bipartite state. In this paper we take advantage of an NEB of \(M_{n}(\mathbb{C})\), in particular the Weyl operators, to construct a convenient basis of \(\operatorname{Lin}(M_{n}(\mathbb{C}))\), the set of all linear maps on \(M_{n}(\mathbb{C})\). Such a basis of \(\operatorname{Lin}(M_{n}(\mathbb{C}))\) will allow us to characterise positivity, complete (co)positivity and other properties of a linear map in terms of its coefficient matrix.
## 2. Nice Error Basis
**Definition 2.1**.: Let \(G\) be a group of order \(n^{2}\). A set \(\{\rho_{g};g\in G\}\) of unitary matrices in \(M_{n}(\mathbb{C})\) is called a _nice error basis_ (NEB) with index group \(G\) if
1. \(\rho_{1}\) is the identity matrix, where \(1\) denotes the identity element of \(G\),
2. \(\operatorname{Tr}(\rho_{g})=n\,\delta_{1,g}\) for all \(g\in G\),
3. \(\rho_{g}\rho_{h}=\omega(g,h)\rho_{gh}\) for all \(g,h\in G\), for some function \(\omega:G\times G\to\mathbb{C}\) with \(|\omega(g,h)|=1\).
If we set \(E=\{\frac{1}{\sqrt{n}}\rho_{g};g\in G\}\),
then conditions (1)-(3) ensure that \(E\) is an orthonormal set since
\[\langle\rho_{g},\rho_{h}\rangle=\mathrm{Tr}(\rho_{g}^{*}\rho_{h})= \omega(g^{-1},g)^{-1}\mathrm{Tr}(\rho_{g^{-1}}\rho_{h}) =\omega(g^{-1},g)^{-1}\omega(g^{-1},h)\mathrm{Tr}(\rho_{g^{-1}h})\] \[=\omega(g^{-1},g)^{-1}\omega(g^{-1},h)n\delta_{g^{-1}h}.\]
Comparing dimensions, it follows that \(E\) is an orthogonal basis of \(M_{n}(\mathbb{C})\). It is also easy to see that the associativity of the group \(G\) implies that the function \(\omega:G\times G\to\mathbb{C}\) is a \(2\)-cocycle. If we normalise each \(\rho_{g}\) so that \(\det(\rho_{g})=1\) then \(\omega(g,h)\) becomes an \(n\)-th root of unity for any \(g,h\in G\).
**Theorem 2.2**.: _Let \(\mathcal{E}=\{\pi(g);g\in G\}\) be a set of unitary matrices indexed by a finite group \(G\). Then \(\mathcal{E}\) is a NEB iff \(\pi\) is a unitary faithful irreducible projective representation of order \(|G|^{1/2}\)._
Proof.: See theorem 1, [1].
### NEB from Groups of Central Type:
**Definition 2.3**.: A group \(\mathrm{H}\) is called of central type if there exists an irreducible character \(\chi\) such that \(\chi(1)^{2}=|H:Z(H)|\), where \(Z(H)\) is the center of the group.
It is also equivalent to the existence of an irreducible character which vanishes outside \(Z(H)\)(cf. p. 28, [6]). Let \(H\) be a group of central type with a unitary irreducible character \(\chi\) verifying the above condition. Suppose, \(\mathfrak{X}_{\chi}\) is the unitary irreducible representation corresponding to the character \(\chi\). Recall that a representation \(\mathfrak{X}\) is called faithful iff \(\mathrm{Ker}(\mathfrak{X})=\{Id\}\). If we define the kernel of a character \(\chi\) as \(\mathrm{Ker}\chi:=\{\mathrm{h}\in\mathrm{H};\chi(\mathrm{h})=\chi(1)\}\) then it is well known that \(\mathrm{Ker}(\mathfrak{X}_{\chi})=\mathrm{Ker}(\chi)\). We take the quotient group \(H/\mathrm{Ker}\;\chi\) so that the natural representation \(\widetilde{\mathfrak{X}}_{\chi}(\bar{h}):=\mathfrak{X}_{\chi}(h)\) of \(H/\mathrm{Ker}\chi\) becomes faithful.
**Lemma 2.4**.: _Let \(H\) be a group and \(\chi\) is an irreducible character of \(G\). Then \(Z(H/\mathrm{Ker}\chi)=\mathrm{Z}(\chi)/\mathrm{Ker}\chi\), where \(Z(\chi):=\{h\in H;|\chi(h)|=\chi(1)\}\) is called the center of the representation. Moreover, \(Z(\chi)/\mathrm{Ker}\chi\) is a cyclic group._
Proof.: Lemma 2.27, [6]
**Lemma 2.5**.: _If \(\chi\) is an irreducible character of the group \(H\) then \(\chi(1)^{2}\leq|H:Z(\chi)|\). Equality occurs if and only if \(\chi\) vanishes on \(G-Z(\chi)\)._
Proof.: Corollary 2.30, [6].
**Theorem 2.6**.: _Let \(H\) be a group of central type. Then the group \((H/\mathrm{Ker}\chi)/(\mathrm{Z}(\mathrm{H})/\mathrm{Ker}\chi)\cong\mathrm{H}/ \mathrm{Z}(\mathrm{H})\) is an index group._
Proof.: Let \(H\) be a group of central type with an irreducible character \(\chi\) which satisfies \(\chi(1)^{2}=|H:Z(H)|\) then \(Z(\chi)=Z(H)\) [cf. p. 28, [6]]. Therefore we have that \(Z(H/\mathrm{Ker}\chi)=Z(H)/\mathrm{Ker}\chi\) by Lemma 2.4. For each \(h\in(H/\mathrm{Ker}\chi)/(Z(H)/\mathrm{Ker}\chi)\) we choose a coset representative \(\phi(h)\) in \(H/\mathrm{Ker}\chi\). Let us denote \(\pi_{h}=\mathfrak{\tilde{X}}_{\chi}(\phi(h))\). Therefore we have
\[\pi_{h}\pi_{k}=\mathfrak{\tilde{X}}(\phi(h))\mathfrak{\tilde{X}}(\phi(k))= \mathfrak{\tilde{X}}(\phi(h)\phi(k))=\mathfrak{\tilde{X}}(\phi(hk)z_{h,k}),\]
where \(z_{h,k}\in Z(H)/{\rm Ker}\chi\). \(\tilde{\mathfrak{X}}_{\chi}(Z(H)/{\rm Ker}\chi)\) consists of scalar multiples of identity only. So we obtain \(\pi_{h}.\pi_{k}=\tilde{\mathfrak{X}}(\phi(hk))\tilde{\mathfrak{X}}(z_{h,k})= \omega(h,k)\pi_{hk}\), where \(\omega(h,k)\in\mathbb{C}\). Since the representation is irreducible, all the \(\pi_{h}\) spans \(M_{n}(\mathbb{C})\) for some \(n\in\mathbb{N}\). Using isomorphism theorem we get \((H/{\rm Ker}\;\chi)/(Z(H)/{\rm Ker}\;\chi)\cong H/Z(H)\). As we know the character \(\chi\) vanishes outside \(Z(H)\), it follows that \({\rm Tr}(\pi_{h})=0\) except at the identity. So we find that \((H/{\rm Ker}\chi)/({\rm Z(H)/Ker}\chi)\cong{\rm H/Z(H)}\) is an index group if H is a group of central type.
**Example:** We present here two examples to briefly clarify the previous result: one with an Abelian index group and another with a non-Abelian index group.
1. The group of unit quaternions \(Q=\{\pm 1,\pm i,\pm j,\pm k\}\) (with multiplication as the group operation) has eight elements and five irreducible representations (up to equivalence), which we can choose as \[\varepsilon : \varepsilon(i)=\varepsilon(j)=1,\] \[\sigma_{i} : \sigma_{i}(i)=1,\quad\sigma_{i}(j)=-1,\] \[\sigma_{j} : \sigma_{j}(i)=-1,\quad\sigma_{j}(j)=1,\] \[\sigma_{k} : \sigma_{k}(i)=-1=\sigma_{k}(j),\] \[\pi : \pi(i)=\left(\begin{array}{cc}0&i\\ i&0\end{array}\right),\quad\pi(j)=\left(\begin{array}{cc}0&-1\\ 1&0\end{array}\right).\] For the character table we get \[\begin{array}{c||c|c|c|c|c|c|c|c||c}&1&-1&i&-i&j&-j&k&-k&\dim\\ \hline\chi_{\varepsilon}=\varepsilon&1&1&1&1&1&1&1&1&1\\ \chi_{\sigma_{i}}=\sigma_{i}&1&1&1&1&-1&-1&-1&-1&1\\ \chi_{\sigma_{j}}=\sigma_{j}&1&1&-1&-1&1&1&-1&-1&1\\ \chi_{\sigma_{k}}=\sigma_{k}&1&1&-1&-1&-1&-1&1&1&1\\ \chi_{\pi}&2&-2&0&0&0&0&0&0&2\end{array}\] We see that \[{\rm ker}(\chi_{\pi}) = \{1\},\] \[Z(\chi_{\pi}) = \{1,-1\}=Z(Q).\] \(Q\) is a group of central type: its center is \(Z(Q)=\{-1,1\}\) and it has a \(|Q/Z(Q)|^{1/2}\)-dimensional irreducible representation. We have \(Q/Z(Q)\cong\mathbb{Z}_{2}\times\mathbb{Z}_{2}\). The 2-cocycle \(\omega:\mathbb{Z}_{2}\times\mathbb{Z}_{2}\rightarrow\mathbb{T}\) is given by the relations \[\pi(g_{1})\pi(g_{2})=\omega(g_{1},g_{2})\pi(g_{1}g_{2}),\] for \(g_{1},g_{2}\in\mathbb{Z}_{2}\). If we write \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}=\{(\pm 1,\pm 1)\}\) multiplicatively and choose \[\pi(+1,+1) = \pi(1)=I_{2},\]
\[\pi(+1,-1) = \pi(i)=\left(\begin{array}{cc}0&i\\ i&0\end{array}\right),\] \[\pi(-1,+1) = \pi(j)=\left(\begin{array}{cc}0&-1\\ 1&0\end{array}\right)\] \[\pi(-1,-1) = \pi(k)=\left(\begin{array}{cc}i&0\\ 0&-i\end{array}\right).\]
We get
\[\begin{array}{c|cccc}\omega&(+1,+1)&(+1,-1)&(-1,+1)&(-1,-1)\\ \hline(+1,+1)&1&1&1&1\\ (+1,-1)&1&-1&1&-1\\ (-1,+1)&1&-1&-1&1\\ (-1,-1)&1&1&-1&-1\end{array}\]
* Klappenecker and Rotteler constructed [1] an example of NEB corresponding to a non-commutative index group which we briefly mention here. Consider the group \(H_{n}\) for some \(n\in\mathbb{N}\), generated by the composition of the maps \[\tau:x\mapsto x+1(\mathrm{mod}\quad 2^{n})\quad\mbox{and}\quad\alpha:x\mapsto 5x (\mathrm{mod}\quad 2^{n}).\] If \(A:=\langle\tau\rangle\) and \(B:=\langle\alpha\rangle\) then \(H_{n}=A\rtimes B\). **Theorem 2.7**.: _The group \(H_{n}\) is a group of central type of order \(2^{2n-2}\) with cyclic center \(Z(H_{n})=\langle\tau^{2^{n-2}}\rangle\). The index group \(H_{n}/Z(H_{n})\) is non-Abelian for \(n\geq 5\). Proof._ See theorem 5, [1]. \(\square\)
Let \(\phi:\mathbb{Z}/2^{n}\mathbb{Z}\to\mathbb{C}\) be a map defined by
\[\phi(x)=\exp{\left(\frac{2\pi i5^{x}}{2^{n}}\right)}.\]
Then the diagonal matrix
\[\pi(\tau)=\mathrm{diag}(\phi(0),\phi(1),\ldots,\phi(2^{n-1}-1))\]
and the shift
\[\pi(\alpha)=\begin{bmatrix}0&1&0&\ldots&0\\ 0&0&1&\ldots&0\\ \vdots&&\ddots&\vdots\\ 0&0&0&\ldots&1\\ 1&0&0&\ldots&0\end{bmatrix}\]
define an ordinary faithful irreducible representation of \(H_{n}\). The NEB corresponding to the index group \(H_{n}/Z(H_{n})\) is given by
\[\mathcal{E}=\{\pi(\tau)^{k}\pi(\alpha)^{l}:0\leq k,l<2^{n-2}-1\}.\]
For more examples of NEB and index groups, see the website [https://people.engr.tamu.edu/andreas](https://people.engr.tamu.edu/andreas)
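As a quick numerical sanity check of the quaternion example above, the following sketch (numpy assumed; the variable names are ours) verifies that the four matrices satisfy the projective-representation relation with the cocycle table given there, together with the trace condition of an NEB.

```python
import itertools
import numpy as np

# the representing matrices pi(1), pi(i), pi(j), pi(k) from the example,
# indexed by the index group Z2 x Z2 written multiplicatively
pi = {
    (+1, +1): np.eye(2, dtype=complex),                    # pi(1)
    (+1, -1): np.array([[0, 1j], [1j, 0]]),                # pi(i)
    (-1, +1): np.array([[0, -1], [1, 0]], dtype=complex),  # pi(j)
    (-1, -1): np.array([[1j, 0], [0, -1j]]),               # pi(k)
}

# the 2-cocycle table from the example, rows and columns ordered as below
order = [(+1, +1), (+1, -1), (-1, +1), (-1, -1)]
table = [[1, 1, 1, 1],
         [1, -1, 1, -1],
         [1, -1, -1, 1],
         [1, 1, -1, -1]]
omega = {(g, h): table[order.index(g)][order.index(h)]
         for g, h in itertools.product(order, repeat=2)}

group_mult = lambda g, h: (g[0] * h[0], g[1] * h[1])  # group law of Z2 x Z2

for g, h in itertools.product(order, repeat=2):
    # projective representation: pi(g) pi(h) = omega(g, h) pi(gh)
    assert np.allclose(pi[g] @ pi[h], omega[(g, h)] * pi[group_mult(g, h)])
for g in order:
    # trace condition of a nice error basis
    assert np.isclose(np.trace(pi[g]), 2 if g == (+1, +1) else 0)
print("quaternion NEB relations verified")
```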
### NEB from Discrete Weyl Relation
We define a bicharacter \(\varkappa:\mathbb{Z}_{n}\times\mathbb{Z}_{n}\longrightarrow\mathbb{C}\) such that
1. \(|\varkappa(x,y)|=1\), for all \(x,y\in G\),
2. Symmetry: \(\varkappa(x,y)=\varkappa(y,x)\), for all \(x,y\in G\),
3. Non-degeneracy: \(\varkappa(x,y)=1\) for all \(y\in G\) iff \(x=0\),
4. Character: \(\varkappa(x,y+z)=\varkappa(x,y)\cdot\varkappa(x,z)\).
For example we can take
\[\varkappa(k,\ell)=\exp\left(\frac{2\pi ik\ell}{n}\right),\qquad k,\ell\in \mathbb{Z}_{n}.\]
We fix an orthonormal basis, \(\{|x\rangle;x\in\mathbb{Z}_{n}\}\) of \(\mathbb{C}^{n}\). We define two unitary representation \(U,V\) of \(\mathbb{Z}_{n}\) on \(\mathbb{C}^{n}\), by the relation
\[U_{a}|x\rangle = |x+a\rangle,\] \[V_{a}|x\rangle = \varkappa(a,x)|x\rangle,\]
The operators \(U_{a},V_{b}\), \(a,b\in Z_{n}\), satisfy the _Weyl commutation relations_
\[U_{a}U_{b}=U_{a+b},\qquad V_{a}V_{b}=V_{a+b},\qquad V_{b}U_{a}=\varkappa(a,b)U _{a}V_{b}\]
for \(a,b\in\mathbb{Z}_{n}\).
We define the _Weyl operators_
\[W_{a,b}=U_{a}V_{b}\]
for \(a,b\in\mathbb{Z}_{n}\) (see p. 212, [8]). The matrix coefficients of a Weyl operator \(W_{a,b}\) with respect to the basis \(\{|x\rangle;x\in\mathbb{Z}_{n}\}\) are given by
\[\langle y|W_{a,b}|x\rangle=\varkappa(b,x)\delta_{y,x+a},\qquad x,y\in\mathbb{ Z}_{n},\]
or, equivalently,
\[W_{a,b}=\sum_{x\in G}\varkappa(b,x)\,|x+a\rangle\langle x|. \tag{2.1}\]
These Weyl operators \(\{W_{a,b};(a,b)\in\mathbb{Z}_{n}\times\mathbb{Z}_{n}\}\) form a nice error basis and \(\mathbb{Z}_{n}\times\mathbb{Z}_{n}\) becomes its index group. To check this, we verify some straightforward relations:
1. \((\text{Unitary})W_{a,b}^{*}=W_{a,b}^{-1}=\varkappa(a,b)W_{-a,-b}\).
2. \((\text{Projective representation})\) \(W_{a,b}W_{x,y}=\varkappa(b,x)W_{a+x,b+y}\).
If we compute the trace of the Weyl operators, we obtain the trace condition of an NEB:
\[\text{Tr}(W_{a,b})=\sum_{x\in G}\langle x|\underbrace{U_{a}V_{b}|x\rangle}_{=\varkappa(b,x)|x+a\rangle}=\left\{\begin{array}{ll}n&\text{ if }(a,b)=(0,0),\\ 0&\text{ else.}\end{array}\right.\]
Here we used the orthogonality of the characters,
\[\sum_{x\in G}\underbrace{\overline{\varkappa(a,x)}\varkappa(b,x)}_{=\varkappa(b- a,x)}=\#G\,\delta_{a,b}.\]
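These relations are easy to confirm numerically. Below is a minimal sketch (numpy assumed; the helper names are ours) that builds the Weyl operators for a small \(n\) and checks the trace condition, the projective-representation relation, and the Hilbert-Schmidt orthogonality.

```python
import itertools
import numpy as np

n = 5
kappa = lambda k, l: np.exp(2j * np.pi * k * l / n)   # standard bicharacter on Z_n

# U_a: shift |x> -> |x+a>;  V_b: phase |x> -> kappa(b, x)|x>
U = lambda a: np.roll(np.eye(n, dtype=complex), a, axis=0)
V = lambda b: np.diag([kappa(b, x) for x in range(n)])
W = {(a, b): U(a) @ V(b) for a, b in itertools.product(range(n), repeat=2)}

# trace condition: Tr W_{a,b} = n delta_{(a,b),(0,0)}
for (a, b), M in W.items():
    assert np.isclose(np.trace(M), n if (a, b) == (0, 0) else 0)

# projective representation: W_{a,b} W_{x,y} = kappa(b, x) W_{a+x, b+y}
for (a, b), (x, y) in itertools.product(W, repeat=2):
    lhs = W[(a, b)] @ W[(x, y)]
    rhs = kappa(b, x) * W[((a + x) % n, (b + y) % n)]
    assert np.allclose(lhs, rhs)

# Hilbert-Schmidt orthogonality: <W_g, W_h> = n delta_{g, h}
for g, h in itertools.product(W, repeat=2):
    ip = np.trace(W[g].conj().T @ W[h])
    assert np.isclose(ip, n if g == h else 0)
print("Weyl NEB verified for n =", n)
```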
## 3. Convenient Basis of \(\operatorname{Lin}(M_{n}(\mathbb{C}))\)
For a pair of matrices \(A,B\in M_{n}\) we define a linear map \(T_{A,B}:M_{n}\to M_{n}\) by
\[T_{A,B}(X)=AXB^{*},\qquad\text{ for }X\in M_{n}.\]
**Proposition 3.1**.: _The map \(M_{n}(\mathbb{C})\times M_{n}(\mathbb{C})\ni(A,B)\mapsto T_{A,B}\in \operatorname{Lin}(M_{n}(\mathbb{C}))\) extends to a unique isomorphism of *-algebras \(T:M_{n}\otimes M_{n}^{*}\ni A\otimes B^{*}\to T_{A,B}\in\operatorname{Lin}(M_{n},M_{n})\), which is also an isomorphism of Hilbert spaces._
The above proposition shows that if \(\{B_{i};1\leq i\leq n^{2}\}\) is a basis of \(M_{n}\) and we define \(T_{ij}(X)=B_{i}XB_{j}^{*}\) for any \(X\in M_{n}\), then \(\{T_{ij}\in\operatorname{Lin}(M_{n});1\leq i,j\leq n^{2}\}\) is a basis of \(\operatorname{Lin}(M_{n})\). Taking an NEB as a basis of \(M_{n}\) has the added advantage that in that case the \(T_{ij}\) also form an NEB of \(\operatorname{Lin}(M_{n})\).
**Lemma 3.2**.: _Let \(\{\frac{1}{\sqrt{n}}\pi_{g};g\in G\}\) be a NEB of \(M_{n}\) corresponding to an index group \(G\). Define the linear map \(T_{x,y}:M_{n}(\mathbb{C})\to M_{n}(\mathbb{C})\)_
\[T_{x,y}(X):=\pi_{x}X\pi_{y}^{*}\]
_Then the \(\{\frac{1}{n}T_{x,y};x,y\in G\}\) is a NEB of \(\operatorname{Lin}(M_{n})\) with index group \(G\times G\) and 2-cocycle \(\omega^{L}:(G\times G)\times(G\times G)\to\mathbb{T}\) given by_
\[\omega^{L}((x^{\prime},y^{\prime}),(x,y))=\frac{\omega(x^{\prime},x)}{\omega( y^{\prime},y)}.\]
Proof.: It is trivial to check that \(T_{1,1}=\operatorname{Id}\) where \(1\) is the identity element of \(G\). To check the trace condition we compute
\[\operatorname{Tr}_{\operatorname{Lin}(M_{n})} (T_{x,y})=\sum\left\langle|i\rangle\langle j|,T_{xy}|i\rangle \langle j|\right\rangle=\sum\operatorname{Tr}_{M_{n}}\Bigl{(}|j\rangle \langle i|\pi_{x}|i\rangle\langle j|\pi_{y}^{*}\Bigr{)}\] \[=\sum\langle i|\pi_{x}|i\rangle\langle j|\pi_{y}^{*}|j\rangle= \operatorname{Tr}(\pi_{x})\operatorname{Tr}(\pi_{y}^{*})=n^{2}\delta_{1,x} \delta_{1,y},\]
which shows that trace of each operator \(T_{x,y}\) is zero except for the identity. For any \(X\in M_{n}\) and \(x,y,x^{\prime},y^{\prime}\in G\) we see that
\[T_{x^{\prime},y^{\prime}}\circ T_{x,y}(X)=\pi_{x^{\prime}}\pi_{x}X\pi_{y}^{*} \pi_{y^{\prime}}^{*}=\frac{\omega(x^{\prime},x)}{\omega(y^{\prime},y)}\pi_{x^{ \prime}x}X\pi_{y^{\prime}y}^{*}=\frac{\omega(x^{\prime},x)}{\omega(y^{\prime},y )}T_{x^{\prime}x,y^{\prime}y}(X),\]
which proves the claim about the two cocycle \(\omega^{L}\).
The next proposition follows immediately since any NEB forms an ONB of the associated space of linear maps.
**Proposition 3.3**.: _The set \(\{\frac{1}{n}T_{x,y};x,y\in G\}\) forms an orthonormal basis of \(\operatorname{Lin}(M_{n}(\mathbb{C}))\) with respect to Hilbert-Schmidt inner product._
Let \(\alpha\in\mathrm{Lin}(M_{n}(\mathbb{C}))\) and \(\{B_{i};1\leq i\leq n^{2}\}\) be a basis of \(M_{n}(\mathbb{C})\). Since the \(T_{ij}\) (defined above) form a basis of \(\mathrm{Lin}(M_{n}(\mathbb{C}))\), we can decompose \(\alpha\) as
\[\alpha(X)=\sum_{1\leq i,j\leq n^{2}}D_{\alpha}(i,j)B_{i}XB_{j}^{*}. \tag{3.1}\]
In particular if we take a NEB \(\{\frac{1}{\sqrt{n}}\pi_{x};x\in G\}\) as basis of \(M_{n}(\mathbb{C})\) we can explicitly compute the coefficient \(D_{\alpha}\).
\[\alpha(X)=\sum_{x,y}D_{\alpha}(x,y)T_{x,y}(X)=\sum_{x,y}D_{\alpha}(x,y)\pi_{x} X\pi_{y}^{*}. \tag{3.2}\]
for all \(X\in M_{n}(\mathbb{C})\). Using the orthonormality of the basis \(T_{x,y}\) and NEB \(\{\frac{1}{\sqrt{n}}\pi_{g};g\in G\}\) of \(M_{n}(\mathbb{C})\), we have \(D_{\alpha}(x,y)=\frac{1}{n}\langle T_{x,y},\alpha\rangle_{\mathrm{Lin}(M_{n})}\) i.e.
\[D_{\alpha}(x,y) = \frac{1}{n^{2}}\mathrm{Tr}(T_{x,y}^{\dagger}\alpha)\] \[= \frac{1}{n}\sum_{g\in G}\langle T_{x,y}(\pi_{g}),\alpha(\pi_{g})\rangle\] \[= \frac{1}{n^{2}}\sum_{g\in G}\mathrm{Tr}(\pi_{y}\pi_{g}^{*}\pi_{x} ^{*}\alpha(\pi_{g})). \tag{3.3}\]
Here \(T^{\dagger}\) denotes the involution applied on \(T\) w.r.t the Hilbert-Schmidt inner product on \(\mathrm{Lin}(M_{n}(\mathbb{C}))\). If we use the Weyl operators as the nice error basis for defining \(T_{x,y}\) i.e. \(T_{x,y}(X)=W_{x}XW_{y}^{*}\) for \(x,y\in\mathbb{Z}_{n}\times\mathbb{Z}_{n}\) then we can compute the coefficient \(D_{\alpha}\) in (3.3), using \(\{|a\rangle\langle b|;a,b\in\mathbb{Z}_{n}\}\) as a o.n.b of \(M_{n}(\mathbb{C})\)
\[D_{\alpha}(x,y) = \frac{1}{n}\sum_{a,b\in G}\mathrm{Tr}\Big{(}W_{y}|b\rangle \langle a|W_{x}^{*}\alpha\big{(}|a\rangle\langle b|\big{)}\Big{)}\] \[= \frac{1}{n}\sum_{a,b\in G}\frac{\varkappa(y_{2},b)}{\varkappa(x_{ 2},a)}\Big{\langle}a+x_{1}\Big{|}\alpha\big{(}|a\rangle\langle b|\big{)} \Big{|}b+y_{1}\Big{\rangle}, \tag{3.4}\]
for \(x=(x_{1},x_{2}),y=(y_{1},y_{2})\).
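To make the construction concrete, here is a small numpy sketch (our own helper functions, not from any package) that represents each \(T_{x,y}\) as a superoperator matrix, recovers the coefficients of a map by orthogonal projection with respect to the Hilbert-Schmidt inner product, and checks the decomposition (3.2); for the identity map it reproduces \(D_{\mathrm{Id}}(x,y)=\delta_{1,x}\delta_{1,y}\), as in the examples below.

```python
import itertools
import numpy as np

n = 3
kappa = lambda k, l: np.exp(2j * np.pi * k * l / n)
U = lambda a: np.roll(np.eye(n, dtype=complex), a, axis=0)
V = lambda b: np.diag([kappa(b, x) for x in range(n)])
W = {(a, b): U(a) @ V(b) for a, b in itertools.product(range(n), repeat=2)}

def superop(phi):
    """Matrix of a linear map phi on M_n acting on (row-major) vectorized matrices."""
    cols = []
    for i, j in itertools.product(range(n), repeat=2):
        E = np.zeros((n, n), dtype=complex)
        E[i, j] = 1
        cols.append(phi(E).reshape(-1))
    return np.array(cols).T

# superoperator of each basis map T_{x,y}(X) = W_x X W_y^*
T = {(x, y): superop(lambda X, Wx=W[x], Wy=W[y]: Wx @ X @ Wy.conj().T)
     for x, y in itertools.product(W, repeat=2)}

def coefficients(phi):
    """D_phi(x, y) via Hilbert-Schmidt projection onto the orthogonal basis {T_{x,y}}."""
    S = superop(phi)
    return {key: np.trace(M.conj().T @ S) / np.trace(M.conj().T @ M)
            for key, M in T.items()}

# identity map: D(x, y) = delta_{1,x} delta_{1,y}
D_id = coefficients(lambda X: X)
assert np.isclose(D_id[((0, 0), (0, 0))], 1)
assert all(np.isclose(v, 0) for k, v in D_id.items() if k != ((0, 0), (0, 0)))

# the decomposition alpha = sum_{x,y} D(x,y) T_{x,y} holds for any map, e.g. transposition
D_t = coefficients(lambda X: X.T)
assert np.allclose(sum(D_t[key] * T[key] for key in T), superop(lambda X: X.T))
```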
**Lemma 3.4**.: _Let \(\alpha,\beta\in\mathrm{Lin}(M_{n}(\mathbb{C}))\) with coefficients \((D_{\alpha}(x,y))_{x,y\in G}\), as defined in the equation (3.2). Then the coefficient of their composition \(\alpha\circ\beta\) are given by_
\[D_{\alpha\circ\beta}(x,y)=\sum_{p,q\in G}\omega(p,p^{-1}x)\overline{\omega(q,q^{-1}y)}D_{\alpha}(p,q)D_{\beta}(p^{-1}x,q^{-1}y).\]
Proof.: We have
\[\alpha\circ\beta(X) = \sum_{p,q\in G}D_{\alpha}(p,q)\pi_{p}\left(\sum_{p^{\prime},q^{ \prime}\in G}D_{\beta}(p^{\prime},q^{\prime})\pi_{p^{\prime}}X\pi_{q^{\prime} }^{*}\right)\pi_{q}^{*}\] \[= \sum_{p,p^{\prime},q,q^{\prime}\in G}\omega(p,p^{\prime})\overline {\omega(q,q^{\prime})}D_{\alpha}(p,q)D_{\beta}(p^{\prime},q^{\prime})\pi_{pp^{ \prime}}X\pi_{qq^{\prime}}^{*}\]
\[= \sum_{x,y\in G}\underbrace{\left(\sum_{p,q\in G}\omega(p,p^{-1}x) \overline{\omega(q,q^{-1}y)}D_{\alpha}(p,q)D_{\beta}(p^{-1}x,q^{-1}y)\right)}_{D_ {\alpha\circ\beta}}\pi_{x}X\pi_{y}^{*},\]
which completes the proof.
We can define two different involutions on \(\operatorname{Lin}(M_{n}(\mathbb{C}))\). The first comes from the Hilbert-Schmidt inner product and is characterised by the condition
\[\langle X,\alpha(Y)\rangle=\langle\alpha^{\dagger}(X),Y\rangle\]
for all \(X,Y\in M_{n}(\mathbb{C})\).
The second is inherited from the involution in \(M_{n}(\mathbb{C})\) and defined by \(\alpha^{\#}(X)=\alpha(X^{*})^{*}\).
Both involutions are conjugate linear, but only the first is anti-multiplicative, whereas the second is multiplicative, i.e., we have
\[(\alpha\circ\beta)^{\dagger}=\beta^{\dagger}\circ\alpha^{\dagger},\qquad( \alpha\circ\beta)^{\#}=\alpha^{\#}\circ\beta^{\#}\]
for \(\alpha,\beta\in\operatorname{Lin}(\operatorname{M_{n}}(\mathbb{C}))\).
**Proposition 3.5**.: _We have_
\[D_{\alpha^{\dagger}}(x,y)=\frac{\omega(x,x^{-1})}{\omega(y,y^{-1})}\overline{ D_{\alpha}(x^{-1},y^{-1})}\]
_and_
\[D_{\alpha^{\#}}(x,y)=\overline{D_{\alpha}(y,x)}\]
_for \(x,y\in G\)._
Proof.: It is easy to see that for any \(x,y\in G\) we have \(T_{x,y}^{\dagger}=\frac{\omega(x,x^{-1})}{\omega(y,y^{-1})}T_{x^{-1},y^{-1}}\). Applying the involution \(\dagger\) on the decomposition of \(\alpha\) in (3.2)
\[\alpha^{\dagger}=\sum_{x,y\in G}\overline{D_{\alpha}(x,y)}T_{x,y}^{\dagger}= \sum_{x,y\in G}\frac{\omega(x,x^{-1})}{\omega(y,y^{-1})}\overline{D_{\alpha}( x,y)}T_{x^{-1},y^{-1}}\]
the first claim follows. Similarly, we can trivially check that \(T_{x,y}^{\#}=T_{y,x}\) for any \(x,y\in G\). Then the second claim follows by applying \(\#\) on the decomposition (3.2)
\[\alpha^{\#}=\sum_{x,y\in G}\overline{D_{\alpha}(x,y)}T_{x,y}^{\#}=\sum_{x,y \in G}\overline{D_{\alpha}(x,y)}T_{y,x}\]
### Examples
Here we compute the kernel or the \(n^{2}\times n^{2}\) matrix \(D_{\alpha}\) corresponding to different positive maps \(\alpha\in\mathrm{Lin}(M_{n}(\mathbb{C}))\) which are important in quantum information.
**Identity map:** The identity map corresponds to the kernel \(D_{\mathrm{Id}}(x,y)=\delta_{1,x}\delta_{1,y}\) for \(x,y\in G\) as we can write
\[X=D_{1,1}\pi_{1}X\pi_{1}^{*}=\sum_{x,y\in G}\delta_{1,x}\delta_{1,y}\pi_{x}X\pi_ {y}^{*}.\]
**Depolarising Channel:** Let \(P\in\mathrm{Lin}(M_{n}(\mathbb{C}))\) be the diagonal sum \(P=\sum_{g\in G}T_{g,g}\). For any \(h\in G\) and \(X\in M_{n}(\mathbb{C})\) we have
\[\pi_{h}P(X)=\sum_{g}\omega(h,g)\pi_{hg}X\pi_{g}^{*}\quad\text{and}\quad P(X)\pi _{h}=\sum_{g}\frac{\omega(h,h^{-1})}{\omega(h^{-1},g)}\pi_{g}X\pi_{h^{-1}g}^{*}.\]
After a change of variable we can write \(P(X)\pi_{h}=\sum_{g}\frac{\omega(h,h^{-1})}{\omega(h^{-1},hg)}\pi_{hg}X\pi_{g} ^{*}\). Using the defintion of 2-cocycle
\[\frac{\omega(h,h^{-1})}{\omega(h^{-1},hg)}=\frac{\omega(h,h^{-1})}{\omega(h,h ^{-1})\omega(1,g)\overline{\omega(h,g)}}=\omega(h,g).\]
Thus we see that \(P(X)\) commutes with every basis element \(\pi_{h}\) on \(M_{n}\). So we conclude that \(P(X)=cI_{n}\) for some \(c\in\mathbb{C}\). Computing the trace of both sides (\(\mathrm{Tr}\,P(X)=n^{2}\,\mathrm{Tr}(X)=cn\)) we find that \(c=n\,\mathrm{Tr}(X)\). Therefore we see that the map \(P\), defined as the diagonal sum of the operators \(T_{g,g}\), is, up to normalisation, the depolarising channel, and it corresponds to the identity matrix \(D_{P}(x,y)=\delta_{x,y}\) where \(x,y\in G\).
**Transposition:** Let \(T\in\mathrm{Lin}(M_{n}(\mathbb{C}))\) be the transposition map given by \(X\mapsto X^{t}\). We compute the \(n^{2}\times n^{2}\) matrix \(D_{T}\) corresponding to the transposition map. For any\(x,y\in G\) we have
\[D_{T}(x,y)=\langle T_{x,y}|T\rangle = \sum_{1\leq i,j\leq n}\mathrm{Tr}\Big{(}(T_{x,y}|i\rangle\langle j |)^{*}T|i\rangle\langle j|\Big{)}\] \[= \sum_{1\leq i,j\leq n}\mathrm{Tr}\Big{(}\pi_{y}|j\rangle\langle i |\pi_{x}^{*}|j\rangle\langle i|\Big{)}\] \[= \sum_{1\leq i,j\leq n}\langle i|\pi_{x}^{*}|j\rangle\mathrm{Tr} \Big{(}\pi_{y}|j\rangle\langle i|\Big{)}\] \[= \sum_{1\leq i,j\leq n}\langle i|\pi_{x}^{*}|j\rangle\langle i|\pi _{y}|j\rangle=\mathrm{Tr}(\overline{\pi}_{x}\pi_{y})\]
In particular, if we take the Weyl operators \(\{W_{a,b};a,b\in\mathbb{Z}_{n}\}\) as the chosen NEB and \(\{|i\rangle;i\in\mathbb{Z}_{n}\}\) as the standard basis of \(\mathbb{C}^{n}\) then
\[D_{T}((a,b),(c,d))=\sum_{i,j\in\mathbb{Z}_{n}}\langle i|W_{a,b}^{*}|j\rangle \langle i|W_{c,d}|j\rangle=\sum_{i,j\in\mathbb{Z}_{n}}\chi(a,b)\chi(-b,j)\chi( d,j)\delta_{i,j-a}\delta_{i,c+j}.\]
So we have
\[D_{T}((a,b),(c,d))=\left\{\begin{array}{cl}\sum_{i\in\mathbb{Z}_{n}}\chi(a,b) \chi(d-b,i+a)&\mbox{ if }c=-a,\\ 0&\mbox{ otherwise.}\end{array}\right.\]
**Conditional Expectation onto Diagonal:** Consider the linear map \(C:M_{n}(\mathbb{C})\to M_{n}(\mathbb{C})\), \(C(X)=(\delta_{ij}x_{ij})_{1\leq i,j\leq n}\) for \(X=(x_{ij})_{1\leq i,j\leq n}\in M_{n}(\mathbb{C})\). This map is a conditional expectation onto the *-subalgebra of diagonal matrices with respect to the standard basis. We compute the coefficient matrix \(D_{C}\) of the map \(C\). For any \(x,y\in G\)
\[D_{C}(x,y)=\langle T_{x,y}|C\rangle = \sum_{1\leq i,j\leq n}\mbox{Tr}\Big{(}(T_{x,y}|i\rangle\langle j |)^{*}C|i\rangle\langle j|\Big{)}\] \[= \sum_{1\leq i,j\leq n}\delta_{ij}\mbox{Tr}(\pi_{y}|j\rangle \langle i|\pi_{x}^{*}|i\rangle\langle j|)\] \[= \sum_{1\leq j\leq n}\delta_{ij}\langle i|\pi_{x}^{*}|i\rangle \langle j|\pi_{y}|j\rangle=\mbox{Tr}(C(\pi_{x}^{*})C(\pi_{y})).\]
In particular, taking the Weyl operators as NEB gives
\[D_{C}((a,b),(c,d))=\left\{\begin{array}{cl}\sum_{j\in\mathbb{Z}_{n}}\chi(d-b,j)&\mbox{ if }c=a=0,\\ 0&\mbox{ otherwise.}\end{array}\right.\]
## 4. Correspondence between Choi matrix \(C_{\alpha}\) and \(D_{\alpha}\)
Recall that the Choi-Jamiolkowski matrix of a map \(\alpha\in\mbox{Lin}(M_{n}(\mathbb{C}))\) is the \(n^{2}\times n^{2}\)-matrix defined by
\[C_{\alpha}=\sum_{j,k=1}^{n}E_{jk}\otimes\alpha(E_{jk})\in M_{n}(\mathbb{C}) \otimes M_{n}(\mathbb{C})\cong M_{n^{2}}(\mathbb{C}).\]
It is known that \(\alpha\in\mbox{Lin}(M_{n}(C))\) is completely positive (CP) iff \(C_{\alpha}\) is positive. Furthermore, \(\alpha\in\mbox{Lin}(M_{n}(\mathbb{C}))\) is \(k\)-positive if and only if
\[\langle v,C_{\alpha}v\rangle\geq 0\]
for all \(v\in\mathbb{C}^{n}\otimes\mathbb{C}^{n}\) with Schmidt rank not more than \(k\). For any completely positive map \(\alpha\) we have the Kraus decomposition
\[\alpha=\sum_{j=1}^{r}\mbox{Ad}_{L_{j}}\]
for some matrices \(L_{j}\in M_{n}(\mathbb{C})\), where for any \(X\in M_{n}(\mathbb{C})\) the conjugate map \(\mbox{Ad}_{L_{j}}\) is given by \(\mbox{Ad}_{L_{j}}(X)=L_{j}XL_{j}^{*}\). We call \(\alpha\) 1-super positive or entanglement breaking iff \(\mbox{rank}(L_{j})=1\) for any \(j\). It is called completely co-positive iff \(T\circ\alpha\) is completely positive, where \(T\) is the transposition map, see [10, 11].
We can switch from \(C_{\alpha}\) to \(D_{\alpha}\) by a change of basis.
**Proposition 4.1**.: _If \(T_{x,y}\) is defined with respect to the Weyl operators then the Choi-Jamiolkowski matrix of \(T_{x,y}\) is given by_
\[C_{T_{x,y}}(v,w)=\frac{\varkappa(x_{2},v_{1})}{\varkappa(y_{2},w_{1})}\delta_{v _{1}+x_{1},v_{2}}\delta_{w_{1}+y_{1},w_{2}},\]
_for \(x=(x_{1},x_{2}),y=(y_{1},y_{2}),v=(v_{1},v_{2}),w=(w_{1},w_{2})\in\mathbb{Z}_{ n}\times\mathbb{Z}_{n}\). More generally, if \(\alpha\) is of the form \(\alpha=\sum_{x,y\in\mathbb{Z}_{n}\times Z_{n}}D_{\alpha}(x,y)T_{x,y}\), then its Choi-Jamiolkowski matrix is given by_
\[C_{\alpha}(v,w)=\sum_{x_{2},y_{2}\in G}\frac{\varkappa(x_{2},v_{1})}{\varkappa (y_{2},w_{1})}D_{\alpha}\big{(}(v_{2}-v_{1},x_{2}),(w_{2}-w_{1},y_{2})\big{)},\]
_for \(v,w\in\mathbb{Z}_{n}\times\mathbb{Z}_{n}\). Conversely, the coefficients from the equation (3.4) can be computed from the Choi-Jamiolkowski matrix via_
\[D_{\alpha}(x,y)=\frac{1}{n}\sum_{a,b\in G}\frac{\varkappa(y_{2},b)}{\varkappa (x_{2},a)}C_{\alpha}\big{(}(a,a+x_{1}),(b,b+y_{1})\big{)}\]
_for \(x,y\in\mathbb{Z}_{n}\times\mathbb{Z}_{n}\)._
Proof.: Using Formula (2.1), we get
\[W_{y}^{*}=\sum_{d\in\mathbb{Z}_{n}}\frac{1}{\varkappa(y_{2},d)}|d\rangle \langle d+y_{1}|\]
and
\[T_{x,y}\big{(}|a\rangle\langle b|\big{)}=\frac{\varkappa(x_{2},a)}{\varkappa (y_{2},b)}|a+x_{1}\rangle\langle b+y_{1}|,\]
for \(a,b\in\mathbb{Z}_{n}\), \(x,y\in\mathbb{Z}_{n}\times\mathbb{Z}_{n}\). So, if we choose \(\{|a\rangle;a\in\mathbb{Z}_{n}\}\) as a basis of \(\mathbb{C}^{n}\), we can write the corresponding matrix units as \(|a\rangle\langle b|\), \(a,b\in\mathbb{Z}_{n}\), and we get
\[C_{T_{x,y}} = \sum_{a,b\in\mathbb{Z}_{n}}|a\rangle\langle b|\otimes T_{x,y} \big{(}|a\rangle\langle b|\big{)}\] \[= \sum_{a,b\in\mathbb{Z}_{n}}\frac{\varkappa(x_{2},a)}{\varkappa(y _{2},b)}|(a,a+x_{1})\rangle\langle(b,b+y_{1})|,\]
which proves the first claim of the proposition.
For \(\alpha=\sum_{x,y\in\mathbb{Z}_{n}\times\mathbb{Z}_{n}}D_{\alpha}(x,y)T_{x,y}\), this yields
\[C_{\alpha}=\sum_{x,y\in\mathbb{Z}_{n}\times\mathbb{Z}_{n}}\sum_{a,b\in \mathbb{Z}_{n}}D_{\alpha}(x,y)\frac{\varkappa(x_{2},a)}{\varkappa(y_{2},b)}| (a,a+x_{1})\rangle\langle(b,b+y_{1})|\]
or
\[C_{\alpha}(v,w)=\sum_{x_{2},y_{2}\in\mathbb{Z}_{n}}\frac{\varkappa(x_{2},v_{1} )}{\varkappa(y_{2},w_{1})}D_{\alpha}\big{(}(v_{2}-v_{1},x_{2}),(w_{2}-w_{1},y_ {2})\big{)}.\]
For the converse we use the equation (3.4),
\[D_{\alpha}(x,y) = \frac{1}{n}\sum_{a,b\in\mathbb{Z}_{n}}\frac{\varkappa(y_{2},b)}{\varkappa(x_{2},a)}\Big{\langle}a+x_{1}\Big{|}\alpha\big{(}|a\rangle\langle b|\big{)}\Big{|}b+y_{1}\Big{\rangle}\] \[= \frac{1}{n}\sum_{a,b\in\mathbb{Z}_{n}}\frac{\varkappa(y_{2},b)}{\varkappa(x_{2},a)}\Big{\langle}a+x_{1}\Big{|}\left(\sum_{g,h\in\mathbb{Z}_{n}}C_{\alpha}\big{(}(a,g),(b,h)\big{)}|g\rangle\langle h|\right)\Big{|}b+y_{1}\Big{\rangle}\] \[= \frac{1}{n}\sum_{a,b\in\mathbb{Z}_{n}}\frac{\varkappa(y_{2},b)}{\varkappa(x_{2},a)}C_{\alpha}\big{(}(a,a+x_{1}),(b,b+y_{1})\big{)},\]
where we used the identity
\[\alpha\big{(}|a\rangle\langle b|\big{)}=\sum_{g,h\in G}C_{\alpha}\big{(}(a,g),(b,h)\big{)}|g\rangle\langle h|.\]
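The first formula of Proposition 4.1 is straightforward to check numerically. The sketch below (numpy, with \(n=3\), the standard bicharacter, and an arbitrarily chosen pair \(x,y\)) builds the Choi-Jamiolkowski matrix of \(T_{x,y}\) and compares it entry by entry with the closed form.

```python
import itertools
import numpy as np

n = 3
kappa = lambda k, l: np.exp(2j * np.pi * k * l / n)
U = lambda a: np.roll(np.eye(n, dtype=complex), a, axis=0)
V = lambda b: np.diag([kappa(b, x) for x in range(n)])
W = lambda a, b: U(a) @ V(b)

def choi(phi):
    """Choi-Jamiolkowski matrix sum_{j,k} E_jk (x) phi(E_jk)."""
    C = np.zeros((n * n, n * n), dtype=complex)
    for j, k in itertools.product(range(n), repeat=2):
        E = np.zeros((n, n), dtype=complex)
        E[j, k] = 1
        C += np.kron(E, phi(E))
    return C

x1, x2, y1, y2 = 1, 2, 2, 1          # an arbitrary pair x = (x1, x2), y = (y1, y2)
Wx, Wy = W(x1, x2), W(y1, y2)
C = choi(lambda X: Wx @ X @ Wy.conj().T)

# compare with the closed form C_{T_{x,y}}(v, w) from Proposition 4.1
for v1, v2, w1, w2 in itertools.product(range(n), repeat=4):
    expected = (kappa(x2, v1) / kappa(y2, w1)
                if (v2 - v1) % n == x1 and (w2 - w1) % n == y1 else 0)
    assert np.isclose(C[v1 * n + v2, w1 * n + w2], expected)
print("Choi matrix of T_{x,y} matches the closed form")
```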
## 5. Characterisation of positive and completely positive maps
**Theorem 5.1**.: _Let \(\{B_{x}\}_{x=1,2\ldots n^{2}}\) be a basis of \(M_{n}(\mathbb{C})\). Consider a linear map \(\alpha\in L(M_{n}(\mathbb{C}))\) of form \(\alpha(X)=\sum_{x,y=1}^{n^{2}}D_{\alpha}(x,y)B_{x}XB_{y}^{*}\). Then \(\alpha\) is_
1. _Hermiticity preserving if and only if_ \(D_{\alpha}\) _is Hermitian._
2. _positive if and only if for any_ \(v,w\in\mathbb{C}^{n}\)_,_ \[\langle v\otimes w,\tilde{\alpha}(v\otimes w)\rangle\geq 0\]
_where \(\tilde{\alpha}=\tau\circ\sum_{x,y=1}^{n^{2}}D_{\alpha}(x,y)(B_{x}\otimes B_{y }^{*})\) and \(\tau(u\otimes v)=v\otimes u\) is the flip operator._
Proof.: \(\alpha\in\mathrm{Lin}(M_{n}(\mathbb{C}))\) is Hermiticity preserving iff \(\alpha(X^{*})^{*}=\alpha(X)\), i.e., \(\alpha^{\#}=\alpha\). Comparing the coefficient matrices of both sides, the first claim follows directly from Proposition 3.5.
On the other hand, \(\alpha\) is positive if and only if it maps rank-one projections to positive operators, i.e., for all \(u,v\in\mathbb{C}^{n}\)
\[0\leq\langle v,\alpha(|u\rangle\langle u|)v\rangle =\Big{\langle}v,\sum D_{\alpha}(x,y)B_{x}|u\rangle\langle u|B_{y} ^{*}v\Big{\rangle}\] \[=\sum D_{\alpha}(x,y)\langle v,B_{x}u\rangle\langle u,B_{y}^{*}v\rangle\] \[=\Big{\langle}u\otimes v,\tau\circ\sum D_{\alpha}(x,y)B_{x} \otimes B_{y}^{*}(u\otimes v)\Big{\rangle}\] \[=\langle u\otimes v,\tilde{\alpha}(u\otimes v)\rangle.\]
**Theorem 5.2**.: _A linear map \(\alpha\in\mathrm{Lin}(M_{n})\) is a completely positive map with Kraus rank \(r\) if and only if the corresponding coefficient matrix \(D_{\alpha}\in M_{n^{2}}\), as defined in (3.1), is positive semi-definite of rank \(r\)._
Proof.: If \(\alpha\in L(M_{n})\) is CP with Kraus rank \(r\) then there exist \(\{L_{j}\in M_{n};1\leq j\leq r\}\) such that \(\alpha\) can be written as Kraus decomposition
\[\alpha=\sum_{j=1}^{r}Ad_{L_{j}},\]
where \(Ad_{L_{j}}\) is the conjugate map given by \(Ad_{L_{j}}(X)=L_{j}XL_{j}^{*}\) for any matrix \(X\in M_{n}\). Since the map \(L(M_{n})\ni\alpha\mapsto D_{\alpha}\in M_{n^{2}}\) is a linear isomorphism, we have \(D_{\alpha}=\sum_{j=1}^{r}D_{Ad_{L_{j}}}\). We can expand each \(L_{j}\) with respect to the NEB \(\{B_{x};x\in G\}\) and write \(L_{j}=\sum_{z}l_{j}(z)B_{z}\). Therefore we get
\[Ad_{L_{j}}(X)=L_{j}XL_{j}^{*}=\left(\sum_{z\in G}l_{j}(z)B_{z} \right)X\left(\sum_{z^{\prime}\in G}l_{j}(z^{\prime})B_{z^{\prime}}\right)^{*}\] \[=\sum_{z,z^{\prime}\in G}l_{j}(z)\overline{l_{j}(z^{\prime})}B_{ z}XB_{z^{\prime}}^{*}\]
We find that \(D_{Ad_{L_{j}}}\) is a rank one operator given by
\[D_{Ad_{L_{j}}}=|l_{j}\rangle\langle l_{j}|\]
where \(l_{j}=(l_{j}(1),l_{j}(2),\ldots,l_{j}(n^{2}))^{t}\) is a vector in \(\mathbb{C}^{n^{2}}\). Thus \(D_{\alpha}=\sum_{j=1}^{r}|l_{j}\rangle\langle l_{j}|\) is a positive semi-definite operator of rank \(r\).
Conversely, assume that \(D_{\alpha}\) is positive semi-definite with rank \(r\). So there exist vectors \(v_{1},v_{2},\ldots,v_{r}\in\mathbb{C}^{n^{2}}\) such that \(D_{\alpha}=\sum_{j=1}^{r}|v_{j}\rangle\langle v_{j}|\). If we denote by \(\{|x\rangle;x\in G\}\) the standard basis of \(\mathbb{C}^{n^{2}}\), then
\[D_{\alpha}(x,y)=\sum_{j=1}^{r}\langle x|v_{j}\rangle\langle v_{j}|y\rangle= \sum_{j=1}^{r}\langle x|v_{j}\rangle\overline{\langle y|v_{j}\rangle}.\]
Therefore we can write the equation (3.1) as
\[\alpha(X) = \sum_{x,y\in G}\sum_{j=1}^{r}\langle x|v_{j}\rangle\overline{ \langle y|v_{j}\rangle}B_{x}XB_{y}^{*}\] \[= \sum_{x,y\in G}\sum_{j=1}^{r}\big{(}\langle x|v_{j}\rangle B_{x} \big{)}X\big{(}\langle y|v_{j}\rangle B_{y}\big{)}^{*}\] \[= \sum_{j=1}^{r}\left(\sum_{x}\langle x|v_{j}\rangle B_{x}\right)X \left(\sum_{y}\langle y|v_{j}\rangle B_{y}\right)^{*}.\]
If we denote \(L_{j}=\sum_{x\in G}\langle x|v_{j}\rangle B_{x}\) then we get \(\alpha(X)=\sum_{j=1}^{r}L_{j}XL_{j}^{*}\), which shows that \(\alpha\) is completely positive with Kraus rank \(r\).
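Theorem 5.2 is also easy to exercise numerically. The following self-contained sketch (numpy, our own helper names, an arbitrary random seed, repeating the small helpers from the earlier sketch) recovers the coefficient matrix for a two-Kraus-operator CP map and for the transposition map and inspects the eigenvalues.

```python
import itertools
import numpy as np

n = 3
kappa = lambda k, l: np.exp(2j * np.pi * k * l / n)
U = lambda a: np.roll(np.eye(n, dtype=complex), a, axis=0)
V = lambda b: np.diag([kappa(b, x) for x in range(n)])
keys = list(itertools.product(range(n), repeat=2))
W = {k: U(k[0]) @ V(k[1]) for k in keys}      # Weyl NEB as the basis {B_x}

def superop(phi):
    cols = []
    for i, j in itertools.product(range(n), repeat=2):
        E = np.zeros((n, n), dtype=complex)
        E[i, j] = 1
        cols.append(phi(E).reshape(-1))
    return np.array(cols).T

T = {(x, y): superop(lambda X, Wx=W[x], Wy=W[y]: Wx @ X @ Wy.conj().T)
     for x, y in itertools.product(keys, repeat=2)}

def coeff_matrix(phi):
    """D_phi as an n^2 x n^2 matrix, rows and columns ordered like `keys`."""
    S, D = superop(phi), np.zeros((n * n, n * n), dtype=complex)
    for p, x in enumerate(keys):
        for q, y in enumerate(keys):
            M = T[(x, y)]
            D[p, q] = np.trace(M.conj().T @ S) / np.trace(M.conj().T @ M)
    return D

rng = np.random.default_rng(0)
L1 = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
L2 = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# a CP map with two Kraus operators: D is positive semi-definite of rank 2
D_cp = coeff_matrix(lambda X: L1 @ X @ L1.conj().T + L2 @ X @ L2.conj().T)
eig = np.linalg.eigvalsh((D_cp + D_cp.conj().T) / 2)
assert eig.min() > -1e-9 and np.sum(eig > 1e-9) == 2

# transposition is positive but not CP: its D has a negative eigenvalue
D_t = coeff_matrix(lambda X: X.T)
assert np.linalg.eigvalsh((D_t + D_t.conj().T) / 2).min() < -1e-9
print("Theorem 5.2 checks out numerically")
```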
We remark here that a similar result was obtained by Poluikis and Hill by a different approach [7]. But our approach has the advantage that the coefficient matrix corresponding to the composition of two linear maps becomes the convolution-type product of their individual coefficient matrices. This composition law can be used to characterise the completely co-positive maps too.
**Corollary 5.3**.: _A linear map \(\alpha\in\operatorname{Lin}(M_{n}(\mathbb{C}))\) is completely co-positive iff the convolution product_
\[\sum_{p,q\in\mathbb{Z}_{n}\times\mathbb{Z}_{n}}\frac{\chi(p,x-p)}{\chi(q,y-q)} \operatorname{Tr}(\overline{W}_{p}W_{q})D_{\alpha}(x-p,y-q)\]
_is positive semidefinite._
Proof.: Using Theorem 5.2, a linear map \(\alpha\in\operatorname{Lin}(M_{n}(\mathbb{C}))\) is co-CP iff, in the decomposition of \(T\circ\alpha\) with respect to the Weyl operators
\[T\circ\alpha(X)=\sum_{x,y\in\mathbb{Z}_{n}\times Z_{n}}D_{T\circ\alpha}W_{x}XW _{y}^{*},\]
the coefficient matrix \(D_{T\circ\alpha}\) is positive semi-definite. We use Lemma 3.4 and the coefficients we have found for transposition in example 3.1 to compute the coefficient matrix \(D_{T\circ\alpha}\), which completes the claim.
**Corollary 5.4**.: _A linear map \(\alpha\in\operatorname{Lin}(M_{2})\) is \(1\)-super positive iff \(D_{\alpha}=\sum_{j=1}^{r}|l_{j}\rangle\langle l_{j}|\) where \(l_{j}=(l_{j}(1),\ldots,l_{j}(4))^{t}\) is a vector in \(\mathbb{C}^{4}\) satisfying \(l_{j}(1)^{2}=\sum_{k=2}^{4}l_{j}(k)^{2}\)._
Proof.: Since \(\alpha\) is \(1\)-super positive in \(M_{2}\) there exists matrices \(L_{1},L_{2},\ldots,L_{r}\in M_{2}\) of rank \(1\) such that \(\alpha=\sum_{j}^{r}Ad_{L_{j}}\). We can decompose each \(L_{j}\) w.r.t the Pauli basis
\[\sigma_{1}=\tfrac{1}{2}\begin{bmatrix}1&0\\ 0&1\end{bmatrix},\ \ \sigma_{2}=\tfrac{1}{2}\begin{bmatrix}0&1\\ 1&0\end{bmatrix},\ \ \sigma_{3}=\tfrac{1}{2}\begin{bmatrix}0&-i\\ i&0\end{bmatrix},\ \ \sigma_{4}=\tfrac{1}{2}\begin{bmatrix}1&0\\ 0&-1\end{bmatrix}.\]
to obtain \(L_{j}=\sum_{i=1}^{4}l_{j}(k)\sigma_{k}\). After this decomposition \(L_{j}\) has the form
\[\frac{1}{2}\begin{bmatrix}l_{j}(1)+l_{j}(4)&l_{j}(2)-il_{j}(3)\\ l_{j}(2)+il_{j}(3)&l_{j}(1)-l_{j}(4)\end{bmatrix}.\]
\(L_{j}\) has rank \(1\) iff \(\det(L_{j})=0\), i.e., \(l_{j}(1)^{2}=l_{j}(2)^{2}+l_{j}(3)^{2}+l_{j}(4)^{2}\). We have already seen in the proof of Theorem 5.2 that \(D_{Ad_{L_{j}}}=|l_{j}\rangle\langle l_{j}|\), which completes the claim.
**Proposition 5.5**.: _A linear map \(\alpha:M_{n}(\mathbb{C})\longrightarrow M_{n}(\mathbb{C})\)_
1. _is trace preserving if and only if_ \[\sum_{x\in G}\omega(x,g)D_{\alpha}(x,xg)=\delta_{1,g}\] _for all_ \(g\in G\)
2. _is unit preserving if and only if_ \[\sum_{x\in G}\frac{\omega(x,x^{-1}z)}{\omega(z^{-1}x,x^{-1}z)}D_{\alpha}(x,z^{-1} x)=\delta_{1,z}\] _for all_ \(z\in G\)_._
Proof.:
(a) Since \(\{\pi_{g};g\in G\}\) forms a basis of \(M_{n}\) the map \(\alpha\) is trace preserving iff \(\operatorname{Tr}(\alpha(\pi_{g}))=\operatorname{Tr}(\pi_{g})\) for all \(g\in G\). Now
\[\operatorname{Tr}\alpha(\pi_{g})=\sum_{x,y}D_{\alpha}(x,y)\frac{\omega(x,g) \omega(xg,y^{-1})}{\omega(y^{-1},y)}\operatorname{Tr}(\pi_{xgy^{-1}})\]
Substituting \(xg=y\) we get
\[\sum_{x\in G}\omega(x,g)D_{\alpha}(x,xg)=\frac{1}{n}\operatorname{Tr}(\pi_{g})=\delta_{1,g}\]
for all \(g\in G\).
(b) By definition \(\alpha\) is unit preserving iff \(\alpha(I_{n})=I_{n}\).
\[\alpha(I_{n})=\sum_{x,y\in G}D_{\alpha}(x,y)\pi_{x}\pi_{y}^{*} =\sum_{x,y\in G}\frac{\omega(x,y^{-1})}{\omega(y,y^{-1})}D_{\alpha }(x,y)\pi_{xy^{-1}}\] \[=\sum_{x,z\in G}\frac{\omega(x,x^{-1}z)}{\omega(z^{-1}x,x^{-1}z) }D_{\alpha}(x,z^{-1}x)\pi_{z}.\]
Comparing the coefficients we get
\[\sum_{x\in G}\frac{\omega(x,x^{-1}z)}{\omega(z^{-1}x,x^{-1}z)}D_{\alpha}(x,z^{ -1}x)=\delta_{1,z}\]
for all \(z\in G\).
## Acknowledgements
P.C. and U.F. were supported by the ANR Project No. ANR-19-CE40-0002. B.V.R. Bhat is supported by the J C Bose Fellowship JBR/2021/000024 of SERB (India).
|
2304.03421 | mmodel: A workflow framework to accelerate the development of
experimental simulations | Simulation has become an essential component of designing and developing
scientific experiments. The conventional procedural approach to coding
simulations of complex experiments is often error-prone, hard to interpret, and
inflexible, making it hard to incorporate changes such as algorithm updates,
experimental protocol modifications, and looping over experimental parameters.
We present mmodel, a framework designed to accelerate the writing of
experimental simulation packages. mmodel uses a graph-theory approach to
represent the experiment steps and can rewrite its own code to implement
modifications, such as adding a loop to vary simulation parameters
systematically. The framework aims to avoid duplication of effort, increase
code readability and testability, and decrease development time. | Peter Sun, John A. Marohn | 2023-04-07T00:07:20Z | http://arxiv.org/abs/2304.03421v2 | # mmodel: A workflow framework to accelerate the development of experimental simulations
###### Abstract
Simulation has become an essential component of designing and developing scientific experiments. The conventional procedural approach to coding simulations of complex experiments is often error-prone, hard to interpret, and inflexible, making it hard to incorporate changes such as algorithm updates, experimental protocol modifications, and looping over experimental parameters. We present _mmodel_, a framework designed to accelerate the writing of experimental simulation packages. _mmodel_ uses a graph-theory approach to represent the experiment steps and can rewrite its own code to implement modifications, such as adding a loop to vary simulation parameters systematically. The framework aims to avoid duplication of effort, increase code readability and testability, and decrease development time.
## I Introduction
Scientific experiments are becoming ever more complex, with multiple interconnected components. Experiments thus often require simulation code to explore the feasibility, optimize experimental parameters, and extract information from experimental results. In fields such as mass spectrometry,[1; 2], X-ray spectroscopy,[3; 4; 5] and magnetic resonance force microscopy (MRFM),[6; 7; 8; 9; 10] experiments are expensive and time-consuming. In these and other fields, simulation has become crucial to experimental design and hardware development. For example, research groups in the MRFM field employ a wide variety of experimental setups, materials, and spin-detection protocols. In order to design experiments and validate results across laboratories, we need simulation codes that cover all possible setups, materials, and protocols. The code should be fast, readable, reliable, modular, and expandable.
The typical approach to experimental simulation is to create one-off simulations for each experiment of interest.[11] Unfortunately, the code needs to be rewritten when we modify the experiment or implement a loop to vary parameters efficiently. These practices result in simulations that contain large amounts of duplicated code and are often unreadable. Moreover, the one-off nature of the code impedes reproducibility and leads to errors due to a lack of testing and annotation;[11; 12] coding errors lead to incorrect publication results in many scientific fields.[13; 14; 15] A new approach is needed to develop modular, expandable, testable, and readable simulation code.
This paper introduces the _mmodel_ package that provides a framework for coding experimental simulations. _mmodel_ employs a graph-based representation of the experimental components, a method popular in scientific workflows design.[16; 17] The framework prioritizes the prototyping phase of the development by allowing modification to the simulation model post-definition, which most workflow systems cannot achieve. In addition, we show that existing projects can easily migrate to the _mmodel_ framework. Section II introduces a simple representative physics experiment and shows how _mmodel_'s graph-theory-based simulation approach facilitates experimental parameter exploration, experimental result validation, and the debugging process. Section III compares _mmodel_ to other popular scientific workflows and discusses the advantages of _mmodel_ in the prototyping phase of development. Appendix A summarizes the _mmodel_ API and presents example scripts to define, modify and execute the experiment discussed in Section II.
## II Scientific experiment prototype
Consider the following contrived experiment investigating the force exerted on a moving magnet by a charged sample. The magnet is spherical and is attached to a micro-cantilever that oscillates in the \(x\) direction, shown in Fig. 1. The force on a moving charge \(q\) by an electric field \(\mathbf{E}\) and a magnetic field \(\mathbf{B}\) can be calculated using the Lorentz force law[18]
\[\mathbf{F}=q(\mathbf{E}+\mathbf{v}\times\mathbf{B}). \tag{1}\]
Using the magnet as the frame of reference, the problem is equivalent to calculating the sum of the force exerted on a fixed magnet by moving charges traveling through the magnet's magnetic field.[19]
Figure 1: Example setup. (a) The isometric view. A magnet is attached to a micro-cantilever. A charged sample is located below the magnet. (b) The side view. The cantilever oscillates in the \(x\) direction, and the magnet is \(x_{m}\) away from the equilibrium position. The \(i^{\text{th}}\) sample charge \(Q_{i}\) is located at \(\mathbf{s}_{i}=(x_{i},y_{i},z_{i})\).
mmodel
In our contrived experiment, the velocity has only an \(x\) component, \(v_{x}\). We can approximate the cantilever oscillation as simple harmonic motion in the \(x\) direction. Given the cantilever frequency \(f\) and amplitude \(A\), the magnet displacement \(x_{\text{m}}\) (measured relative to the equilibrium position) and the velocity \(v_{x}\) are
\[x_{\text{m}}(t)=A\cos(\omega t) \tag{2}\]
\[v_{x}(t)=-\omega A\sin(\omega t) \tag{3}\]
where \(\omega=2\pi f\) and \(x_{\text{m}}\in(-A,A)\). We assume the magnet's origin at the equilibrium position is \((0,0,0)\). We want to calculate the force on the magnet when the displacement is \(x_{\text{m}}\). The instantaneous magnet velocity \(v_{x}\) at the position of motion (\(x_{\text{m}}\), \(0\), \(0\)) can be calculated as follows:
\[v_{x}(x_{\text{m}})=\pm 2\pi f\sqrt{A^{2}-x_{\text{m}}^{2}}. \tag{4}\]
The sphere's magnetic field in the \(y\) and \(z\) directions at the \(i^{\text{th}}\) charge's position \(\mathbf{s_{i}}=(x_{i},y_{i},z_{i})\), \(B_{y}(\mathbf{s_{i}})\) and \(B_{z}(\mathbf{s_{i}})\), can be calculated using
\[B_{z}(\mathbf{s_{i}})=\frac{\mu_{0}M_{\text{s}}}{3}\left(3\frac{Z_{i}^{2}}{R_{i}^{ 5}}-\frac{1}{R_{i}^{3}}\right) \tag{5}\]
\[B_{y}(\mathbf{s_{i}})=\mu_{0}M_{\text{s}}\left(\frac{Y_{i}Z_{i}}{R_{i}^{5}}\right), \tag{6}\]
with \(\mu_{0}\) the vacuum permeability; \(M_{\text{s}}\) the magnet saturation magnetization; \(r\) the magnet radius; \(X_{i}=(x_{i}-x_{\text{m}})/r\), \(Y_{i}=y_{i}/r\), and \(Z_{i}=z_{i}/r\) the reduced charge coordinates; and \(R_{i}=\sqrt{X_{i}^{2}+Y_{i}^{2}+Z_{i}^{2}}\) a reduced distance. It is important to note that each point charge can induce an image charge in the magnet, resulting in an additional electric and magnetic field.[18] We ignore the fields generated based on this effect for simplicity.
Finally, without the external field \(\mathbf{E}\), the forces exerted on the magnet in the \(y\) and \(z\) direction, \(F_{y}\) and \(F_{z}\), by a set of \(N\) discrete charges \(Q_{i}\) can be calculated by summing the forces exerted by the individual charges:
\[F_{y}(x_{\text{m}})=v_{x}(x_{\text{m}})\sum_{i=1}^{N}Q_{i}B_{z}(\mathbf{s_{i}},x_{ \text{m}}) \tag{7}\]
\[F_{z}(x_{\text{m}})=-v_{x}(x_{\text{m}})\sum_{i=1}^{N}Q_{i}B_{y}(\mathbf{s_{i}},x_ {\text{m}}), \tag{8}\]
where \(Q_{i}=Q(\mathbf{s_{i}})\) is the charge of the \(i^{\text{th}}\) sample charge located at \(\mathbf{s_{i}}\). In the pseudocode, we use _grid_ to represent the collection of charge positions. The variables are shown in Table 1.
### Procedural Approach
Conventionally, simulations use a procedural approach that executes the experiment in steps. One procedural approach to calculating the force on the point charge is to group the calculation by component. We use Bzfunc, Byfunc, Vxfunc, Fyfunc, and Fzfunc to denote the respective calculations. Here we first calculate the components \(v_{x}\), \(B_{x}\), and \(B_{y}\). The second step is to calculate \(F_{y}\) and \(F_{z}\) based on the values of \(Q\), \(v_{x}\), \(B_{z}\), \(B_{y}\).
In the calculation, we step through the grid points to calculate the effective force at each point of the charge distribution, and we sum all the force values to obtain the total force acting on the cantilever. The algorithm is shown in Alg. 1. In the prototyping process, the above calculations can be packaged into a function or a script, with the inputs of the magnet, cantilever, and grid as input parameters and the \(y\)- and \(z\)-direction forces as outputs.
\begin{table}
\begin{tabular}{l l} \hline \hline variable & description \\ \hline \(f\) & cantilever frequency \\ \(A\) & cantilever amplitude \\ \(v_{x}\) & instantaneous magnet motion velocity \\ \(x_{\text{m}}\) & magnet position in the \(x\) direction \\ \(r\) & magnet radius \\ \(\mu_{0}\) & vacuum permeability \\ \(M_{\text{s}}\) & magnet saturation magnetization \\ \(B_{y}\) & magnet magnetic field in the \(y\) direction \\ \(B_{z}\) & magnet magnetic field in the \(z\) direction \\ \(q\) & point charge \\ \(Q_{i}\) & charge of the \(i^{\text{th}}\) sample charge \\ \(\mathbf{s_{i}}\) & location of the \(i^{\text{th}}\) sample charge, \((x_{i},y_{i},z_{i})\) \\ _grid_ & collection of sample charge positions \\ \(F_{y}\) & force on the magnet in the \(y\) direction \\ \(F_{z}\) & force on the magnet in the \(z\) direction \\ \hline \hline \end{tabular}
\end{table}
Table 1: List of the Lorentz force law model variables for calculating the forces on a moving magnet from a charge distribution.
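Alg. 1 organizes this calculation by component. A minimal numpy sketch of such a procedural implementation (following the function names used in the text; _grid_ is assumed to be an \(N\times 3\) array of charge positions and \(Q\) the matching array of charges) might read:

```python
import numpy as np

mu0 = 4e-7 * np.pi  # vacuum permeability, SI; illustrative value

def Vxfunc(f, A, x_m):
    """Instantaneous magnet velocity at displacement x_m, Eq. (4) (positive root)."""
    return 2 * np.pi * f * np.sqrt(A**2 - x_m**2)

def Bzfunc(Ms, r, x_m, grid):
    """z-component of the sphere field at each charge position, Eq. (5)."""
    X, Y, Z = (grid[:, 0] - x_m) / r, grid[:, 1] / r, grid[:, 2] / r
    R = np.sqrt(X**2 + Y**2 + Z**2)
    return mu0 * Ms / 3 * (3 * Z**2 / R**5 - 1 / R**3)

def Byfunc(Ms, r, x_m, grid):
    """y-component of the sphere field at each charge position, Eq. (6)."""
    X, Y, Z = (grid[:, 0] - x_m) / r, grid[:, 1] / r, grid[:, 2] / r
    R = np.sqrt(X**2 + Y**2 + Z**2)
    return mu0 * Ms * Y * Z / R**5

def Fyfunc(Q, vx, Bz):
    """Eq. (7): F_y = v_x * sum_i Q_i Bz_i."""
    return vx * np.sum(Q * Bz)

def Fzfunc(Q, vx, By):
    """Eq. (8): F_z = -v_x * sum_i Q_i By_i."""
    return -vx * np.sum(Q * By)

def experiment(f, A, x_m, Ms, r, grid, Q):
    """One-off procedural driver: compute the components, then the two forces."""
    vx = Vxfunc(f, A, x_m)
    Bz = Bzfunc(Ms, r, x_m, grid)
    By = Byfunc(Ms, r, x_m, grid)
    return Fyfunc(Q, vx, Bz), Fzfunc(Q, vx, By)
```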
### _mmodel_ graph-theory approach
Alternatively, we can represent the process flow in a directed acyclic graph (DAG) \(G=(V,E)\), with nodes \(V\) as the experimental steps and edges \(E\) as the data flow. A DAG is a directed graph with no directed cycles. A DAG has at least one topological ordering, where for all the edges \((u,v)\), the parent node \(u\) is always positioned before the child node \(v\). With this approach, we create a graph representing the experiment and simulation process and execute the nodes in topological order while managing the intermediate value flows. In _mmodel_, each process is a Python callable.
Here we divide the experiment into five nodes, named by their output: \(\mathrm{Vx}\), \(\mathrm{Bz}\), \(\mathrm{By}\), \(\mathrm{Fy}\), and \(\mathrm{Fz}\). Each node corresponds to its respective function, resulting in a graph \(G=\{V,E\}\), where \(V=\{\mathrm{Vx},\mathrm{Bz},\mathrm{By},\mathrm{Fy},\mathrm{Fz}\}\) and \(E=\{(\mathrm{Bz},\mathrm{Fy}),(\mathrm{By},\mathrm{Fz}),(\mathrm{Vx},\mathrm{Fy}),(\mathrm{Vx},\mathrm{Fz})\}\), as shown in Fig. 2. Each node takes input parameters externally or from its parent node. The graph is then converted into a function by defining the method of execution of the nodes in the topological ordering. The ordering ensures that the parent node of an edge is always executed first.
One topological ordering for graph G is [Bz, By, \(\mathrm{Vx}\), \(\mathrm{Fy}\), \(\mathrm{Fz}\)]. To direct the data flow, we can store the intermediate value of \(v_{x}\), \(B_{z}\), and \(B_{y}\) calculated by the corresponding nodes. These intermediate values can be stored internally in memory or externally on disk. The function returns the output of the two terminal nodes, \(\mathrm{Fy}\) and \(\mathrm{Fz}\).
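To make the idea concrete, here is an illustrative hand-rolled sketch using networkx (this is not the mmodel API; it reuses the component functions from the procedural sketch above, and the node attribute names are our own convention):

```python
import networkx as nx

G = nx.DiGraph()
G.add_node("Vx", func=Vxfunc, inputs=["f", "A", "x_m"])
G.add_node("Bz", func=Bzfunc, inputs=["Ms", "r", "x_m", "grid"])
G.add_node("By", func=Byfunc, inputs=["Ms", "r", "x_m", "grid"])
G.add_node("Fy", func=Fyfunc, inputs=["Q", "Vx", "Bz"])
G.add_node("Fz", func=Fzfunc, inputs=["Q", "Vx", "By"])
G.add_edges_from([("Bz", "Fy"), ("By", "Fz"), ("Vx", "Fy"), ("Vx", "Fz")])

def run(graph, returns, **parameters):
    """Execute the nodes in topological order, storing intermediate values by node name."""
    values = dict(parameters)
    for node in nx.topological_sort(graph):
        attrs = graph.nodes[node]
        values[node] = attrs["func"](*(values[k] for k in attrs["inputs"]))
    return tuple(values[name] for name in returns)

# example call with placeholder values (grid and Q defined elsewhere):
# Fy, Fz = run(G, ["Fy", "Fz"], f=3e3, A=1e-6, x_m=2e-7,
#              Ms=1.4e6, r=5e-8, grid=grid, Q=Q)
```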
### Experimental exploration: Investigate the effect of different cantilever frequencies
To investigate the effect of cantilever frequency \(f\) over the forces on the charge, we want to loop through \(f\) values and examine the resulting changes in the force acting on the cantilever. Since calculating \(B_{y}\) and \(B_{z}\) does not depend on the \(f\) parameter, the most efficient simulation is to loop only the \(\mathrm{Vxfunc}\), \(\mathrm{Fyfunc}\), and \(\mathrm{Fzfunc}\) calculations. For the procedural approach, a new function must be written by modifying the original experiment source code to add loops to the \(\mathrm{Vxfunc}\) subsection of the code.
However, the graph-theory approach can achieve this looping result without modifying the original code. Adding the loop to the workflow can be achieved in four steps. First, we create a subgraph, \(H=\{V=\{\mathrm{Vx},\mathrm{Fy},\mathrm{Fz}\},E=\{(\mathrm{Vx},\mathrm{Fy}),( \mathrm{Vx},\mathrm{Fz})\}\}\), that excludes the \(\mathrm{By}\) and \(\mathrm{Bz}\) nodes from the original graph. Second, we convert the subgraph into a function with the additional loop step. Third, we replace the subgraph with a single node in the original graph and map it to the subgraph function. The new graph is shown in Fig. 3, where \(\mathrm{floopfunc}\) is the function that loops through the \(f\) values. Last, the newly obtained graph is converted into a new experimental function. The process can be combined into a single function, shown in Alg. 2, and applies to all graphs defined using _mmodel_'s graph definition. Importantly, these steps can be automated, requiring only a one-line code modification to implement parameter looping (see A.8).
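Continuing the illustrative networkx sketch above (again, not the mmodel API), the subgraph replacement behind the frequency loop might look like the following, where the outputs of the untouched \(\mathrm{Bz}\) and \(\mathrm{By}\) nodes are passed into the loop node:

```python
import networkx as nx

# subgraph that depends on the cantilever frequency f
H = G.subgraph(["Vx", "Fy", "Fz"]).copy()

def floopfunc(f_values, A, x_m, Q, Bz, By):
    """Run the f-dependent subgraph for each frequency, reusing precomputed Bz and By."""
    return [run(H, ["Fy", "Fz"], f=f, A=A, x_m=x_m, Q=Q, Bz=Bz, By=By)
            for f in f_values]

# replace the subgraph with a single loop node in a copy of the original graph
G_loop = G.copy()
G_loop.remove_nodes_from(["Vx", "Fy", "Fz"])
G_loop.add_node("floop", func=floopfunc,
                inputs=["f_values", "A", "x_m", "Q", "Bz", "By"])
G_loop.add_edges_from([("Bz", "floop"), ("By", "floop")])

# example call with placeholder values (grid and Q defined elsewhere):
# forces, = run(G_loop, ["floop"], f_values=[2e3, 3e3, 4e3], A=1e-6, x_m=2e-7,
#               Ms=1.4e6, r=5e-8, grid=grid, Q=Q)
```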
Figure 3: _mmodel_ graph-theory representation of the Lorentz force law model with the frequency \(f\) loop node. (a) The new mmodel graph with subgraph substituted with a loop node, (b) \(f\) loop node model graph.
Figure 2: _mmodel_ graph-theory representation of the Lorentz force law model calculating the forces on a moving magnet from a charge distribution.
### Experimental validation: Determine the cantilever amplitude
In an experiment, the cantilever is driven at its resonance frequency, and the cantilever amplitude \(A\) can be measured by the position detector. Suppose we have measured \(F_{y}\) and \(F_{z}\) and want to use these values to validate the amplitude value. In a procedural approach, a function is written to iterate through samplings (i.e., Monte Carlo samplings) of \(A\) values until the simulated forces agree with the experiment (within a specified error). Similar to Section II.3, only the Vxfunc, Fyfunc, and Fzfunc depend on \(A\).
With the graph-theory approach, we can create a function that iterates through given parameter values until the termination condition is met. The function SampleMod that creates a new graph with the Monte Carlo fitting process is shown in Alg. 3. For the example scenario, the subgraph containing the Vx, Fy, and Fz nodes is converted into a function. The MonteCarlo function takes the subgraph function as an input and outputs the optimal offset value.
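A bare-bones version of such a fitting wrapper, an illustrative sketch of the idea behind Alg. 3 rather than the SampleMod implementation (the sampling range and tolerance below are placeholders), could look like:

```python
import numpy as np

def monte_carlo_amplitude(subgraph_func, F_measured, tolerance, rng=None, max_iter=10000):
    """Sample amplitude values until the simulated forces agree with the measurement."""
    rng = rng or np.random.default_rng()
    for _ in range(max_iter):
        A_trial = rng.uniform(0.0, 2e-6)           # assumed sampling range
        F_trial = np.asarray(subgraph_func(A_trial))
        if np.linalg.norm(F_trial - F_measured) < tolerance:
            return A_trial
    raise RuntimeError("no amplitude found within tolerance")

# subgraph_func closes over the f-dependent subgraph defined earlier, e.g.
# subgraph_func = lambda A: run(H, ["Fy", "Fz"], f=3e3, A=A, x_m=2e-7, Q=Q, Bz=Bz, By=By)
```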
### Debugging: Investigate intermediate values
In a prototyping process, debugging is crucial; common issues include data value or format errors. Sometimes we want to inspect the intermediate results, such as the \(v_{x}\) or \(F_{y}\) values. Conventionally, this inspection is done by adding print or logging statements -- a significant effort.
Debugging becomes more straightforward with graphs: we can obtain the subgraph that outputs the desired intermediate results. The subgraph excludes unnecessary calculations, which makes it very efficient computationally. For example, we can obtain the subgraph \(H=\{V=\{\mathrm{Bz},\mathrm{Vx},\mathrm{Fy}\},E=\{(\mathrm{Bz},\mathrm{Fy}),( \mathrm{Vx},\mathrm{Fy})\}\}\) that only includes the Fy node and its parents, shown in Fig. 4. Additionally, since the process controls the output of the nodes, we can output the intermediate value alongside the original output. Alternatively, we can modify and add nodes to the graph to achieve additional functionality, such as plotting, shown in Fig. 5.
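With the illustrative networkx representation used above (not the mmodel API), pulling out the subgraph that computes only \(F_{y}\) is a one-liner:

```python
import networkx as nx

# ancestors of Fy plus Fy itself give the smallest graph that can produce F_y
debug_nodes = nx.ancestors(G, "Fy") | {"Fy"}
H_debug = G.subgraph(debug_nodes).copy()
# Fy, = run(H_debug, ["Fy"], f=3e3, A=1e-6, x_m=2e-7, Ms=1.4e6, r=5e-8, grid=grid, Q=Q)
```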
## III Discussion
The last two decades have seen an explosion of data in science and the data-driven science that comes with it. Scientific workflows have become useful tools for managing and analyzing data, freeing domain experts from much tedious programming.[16; 17] There are various popular packages and software that facilitate the simulation of scientific experiments, such as _Pegasus_,[20] _Kepler_,[21; 22] _AiiDA_,[23] _Dask_,[24] and _Pydra_.[25] These packages operate at different levels of complexity. For example, _Pegasus_, _Kepler_, and _AiiDA_ focus on complete data pipeline management, while _Dask_ and _Pydra_ focus on data analysis. However, these packages are most suited and optimized for upscaling scientific analysis with well-defined models. In our MRFM experiments, we constantly update the signal calculation algorithm and continuously loop over parameters and protocols to decide among experimental protocols and determine the optimal experimental
Figure 4: Subgraph that only outputs \(F_{y}\) value based on the Lorentz force law model graph.
Figure 5: Graphs with added plotting functionality based on the Lorentz force law model. (a) Add a node that plots By value to the graph, (b) modify the By node to add plotting functionality with calculation.
conditions. With the growing number of changes to the simulation code, neither the procedural approach nor the existing scientific workflow package suits the development process. The difficulty in maintaining the simulation package inspired the creation of _mmodel_, which aims to reduce code duplication and provide automated solutions to simulation updates.
As highlighted in Section II, _mmodel_ solves four issues when coding experimental simulations. First, the framework reduces code duplication and makes it easy to update subsections of the simulations. _mmodel_ separates the code definition into three parts: function, graph, and model, shown in Fig. 6. The functions describe the theoretical calculation of the experimental steps. The graphs describe the steps of the experiments, and the model defines the execution process. This separation makes it easy to only change or update one part of the process without modifying the other two. For example, in Section II, if we wanted to update the calculation of \(v_{x}\), only the Vxfunc function would need to be updated. In the prototyping phases, these changes often happen due to theory and algorithm updates and bug fixes. The code reduction resulting from using _mmodel_ requires fewer tests, which speeds up the development process. Second, _mmodel_ simplifies experiment parameters testing. The graph in _mmodel_ is interactive, and all node objects are Python functions. There is no additional decorator required during the function definition step. This design allows defining graphs with existing functions, such as Python built-in or _Numpy_-package functions. In addition, using functions as node objects extends the capability of the execution process. For example, a special function can be defined to identify subgraphs and create loops for a specific parameter. Our function node implementation extends the traditional capability of the DAG, which cannot contain cycles. We can write special functions to modify a process or a group of processes, such as the LoopMod function in Alg. 2, which can loop a given parameter for all the graphs defined using _mmodel_. Once these special functions are written, users can apply them directly to their graphs. Third, _mmodel_ optimizes runtime performance during the prototyping process. For example, _mmodel_ provides an execution method that optimizes memory--mmodel.MemHandler--by keeping a reference count of the times intermediate parameters are needed by the child nodes. Once all child nodes have accessed the value, the value is deleted. Another built-in execution method--mmodel.H5Handler--optimizes runtime memory by storing intermediate values in an HDF5 file. Additionally, debugging using the partial graphs or outputs of the intermediate value speeds up the prototyping cycle. Fourth, _mmodel_ improves the readability of the code by providing a visual graph representation of the experiment with rich metadata.
_mmodel_ is advantageous over the other workflow systems in the prototyping phase. The existing scientific workflow packages are designed for well-defined models, making the computing step definitions well-structured. As a result, the API is restrictive for modifying existing workflows and defining processes outside the package's scope. Prototyping with these packages creates unnecessarily complicated code and imposes a high learning curve. For example, _Pegasus_ and _Kepler_ require developers to define the functions and graphs using a GUI with a domain-specific language. _AiiDA_, _Dask_, and _Pydra_ require function definition with package-specific Python decorators and special node classes. The function and graph definition cannot be easily transferred to another package due to their domain-specific approach.[17] These packages provide a limited ability to modify the graph after definition. For example, to add a loop modification, it must either be written into the graph using a special node that directs the parameter inputs (_Pegasus_, _Pydra_) or built into the model (_Kepler_, _AiiDA_, _Dask_). It cannot be added to an already-defined graph. This implementation prevents post-definition interaction at the graph or model level. Any modification to the simulation process requires a completely new workflow. In contrast, _mmodel_ is designed to be lightweight and flexible. Each part of the graph and model definition process allows for customization. For example, a new execution process can be written with the mmodel.handler API. The implementation also allows existing packages to use the framework easily with minimal adjustments to existing functions. The drawback of _mmodel_'s function-first approach is that sub-operation modification creates nested graphs and operations, making it difficult to upscale if many modification steps are required. However, we believe the resulting modest increase in computational complexity is a reasonable compromise at the prototyping stage.
We believe _mmodel_ fills the void of a fast, reproducible, and general-purpose scientific workflow framework for simulating complex experiments. _mmodel_ is written in Python with minimal API restrictions, making the framework excellent for building on or migrating existing Python code. The graph representation of the experiment and the associated rich metadata help with communication among collaborators. The flexibility and extensibility allow code optimization and easy unit testing, reducing errors and accelerating code development to simulate complicated experiments.
## IV Future directions
In future versions, we would like to provide more graph execution methods that optimize performance and profilers that provide execution analysis. Additionally, we would like to add adaptors that can convert _mmodel_ graphs to other popular workflow frameworks. We encourage scientists and developers to contribute to the project. The _mmodel_ version 0.6 is available at the repository: [https://github.com/marohn-group/mmodel](https://github.com/marohn-group/mmodel) and the documentation is available at [https://marohn-group.github.io/mmodel-docs/](https://marohn-group.github.io/mmodel-docs/). The project has 100% unit test coverage and is open-source under the 3-Clause BSD License.
Figure 6: _mmodel_ workflow definition steps. Modifiers can be applied to nodes and models. The graph defines the experimental steps, and the model defines the execution method.
## Acknowledgements
Research reported in this publication was supported by Cornell University, the Army Research Office under Award Number W911NF-17-1-0247, and the National Institute Of General Medical Sciences of the National Institutes of Health under Award Number R01GM143556. The content of this manuscript is solely the responsibility of the authors and does not necessarily represent the official views of Cornell University, the U.S. Army Research Office, or the National Institutes of Health.
## Appendix A MModel API
We show how to build and interact with the model using the example in Section II.
### Define Functions
The first step is to define the five functions of the calculation process, shown in Lst. 1. Note that for the \(F_{y}\) and \(F_{z}\) calculations, instead of looping over all grid positions, we calculate the forces at all grid points at once using element-wise multiplication of the \(Q\) and field matrices and sum the values over all grid points. The approach is more efficient with libraries optimized for array programming, such as _Numpy_.[26] Here we only calculate the force on the magnet when it moves in the positive \(x\) direction. For the numerical simulation, we approximate the charge distribution \(Q\) on a rectangular prism grid defined by the parameter \(grid\).
```
import math
import numpy as np


def Bzfunc(mu0, Ms, r, grid, xm):
    """Magnetic field in the z direction.

    :param float mu0: vacuum permeability
    :param float Ms: magnet saturation magnetization
    :param float r: magnet radius
    :param np.ndarray grid: generated by np.ogrid, np.mgrid,
        or np.meshgrid with the shape of (3, x, y, z)
    :param float xm: instantaneous magnet position in the x direction
    :return: Bz
    """
    X = (grid[0] - xm) / r
    Y = grid[1] / r
    Z = grid[2] / r
    R = np.sqrt(X**2 + Y**2 + Z**2)
    Bz = mu0 * Ms / 3 * (3 * Z**2 / R**5 - 1 / R**3)
    return Bz


def Byfunc(mu0, Ms, r, grid, xm):
    """Magnetic field in the y direction.

    :param float mu0: vacuum permeability
    :param float Ms: magnet saturation magnetization
    :param float r: magnet radius
    :param np.ndarray grid: generated by np.ogrid, np.mgrid,
        or np.meshgrid with the shape of (3, x, y, z)
    :param float xm: instantaneous magnet position in the x direction
    :return: By
    """
    X = (grid[0] - xm) / r
    Y = grid[1] / r
    Z = grid[2] / r
    R = np.sqrt(X**2 + Y**2 + Z**2)
    # y component of the sphere's dipole field, the counterpart of Bzfunc
    By = mu0 * Ms / 3 * (3 * Y * Z / R**5)
    return By


def Vxfunc(f, A, xm):
    """Magnet velocity at position xm.

    :param float f: cantilever frequency
    :param float A: cantilever amplitude
    :param float xm: instantaneous magnet position in the x direction
    :return: instantaneous magnet velocity at position xm
    :rtype: float
    """
    omega = 2 * math.pi * f
    vx = omega * math.sqrt(A**2 - xm**2)
    return vx


def Fyfunc(Q, vx, Bz):
    """Force on the magnet in the y direction.

    :param np.ndarray Q: charge distribution
    :param float vx: instantaneous magnet velocity in the x direction
    :param np.ndarray Bz: magnetic field in the z direction
    :rtype: float
    """
    Fy = np.sum(Q * vx * Bz)
    return Fy


def Fzfunc(Q, vx, By):
    """Force on the magnet in the z direction.

    :param np.ndarray Q: charge distribution
    :param float vx: instantaneous magnet velocity in the x direction
    :param np.ndarray By: magnetic field in the y direction
    :rtype: float
    """
    Fz = np.sum(Q * vx * By)
    return Fz
```
Listing 1: Define the functions needed for the Lorentz force law model, calculating the forces on a moving magnet from a charge distribution.
### Graph and function mapping
The second step is to define the graph that contains all the function nodes and the edge flow in a DAG. The graph in _mmodel_ is defined with the ModelGraph class. The class inherits from networkx.DiGraph of the popular graph library _NetworkX_.[27] All _NetworkX_ operations can be used on _mmodel_ graphs. The nodes and edges can be defined with all available _NetworkX_ or _mmodel_ syntax for adding nodes and edges. The edges represent the data flow, and the nodes represent the functions. For the graph, the node names and edges are first defined using add_grouped_edge, then we map the functions to the nodes using set_node_object. Listing 2 defines the graph G with grouped edges. The functions and output variable names are mapped to the nodes in (node_name, function, output) tuples.
```
from mmodel import ModelGraph

G = ModelGraph(name="example_graph")

grouped_edges = [
    (["Bz", "Vx"], "Fy"),
    (["By", "Vx"], "Fz"),
]
G.add_grouped_edges_from(grouped_edges)

node_objects = [
    ("Bz", Bzfunc, "Bz"),
    ("By", Byfunc, "By"),
    ("Vx", Vxfunc, "vx"),
    ("Fy", Fyfunc, "Fy"),
    ("Fz", Fzfunc, "Fz"),
]
G.set_node_objects_from(node_objects)
```
Listing 2: Define the graph with grouped edges and map the functions and output names to the nodes.
```
from mmodel import Model, MemHandler

experiment = Model("experiment", G, handler=MemHandler)
```
Listing 3: Convert the _mmodel_ graph to a model by defining the model name and execution method.
### Metadata and graph representation
_mmodel_ provides rich metadata and graph plotting capabilities. The model and graph metadata can be extracted by printing out the string value, as shown in Lst. 4. The graph representation uses the DOT graph language, a standard for representing graphs. With the _Graphviz_ package,[28] users can employ the default graph plotting method in _mmodel_ or provide their own. Both the graph and the model have a draw method to display the metadata and the graph. The output of model.draw() is shown in Fig. 7.
```
>>> print(G)
ModelGraph with 5 nodes and 4 edges

>>> print(experiment)
experiment(A, Ms, Q, f, grid, mu0, r, xm)
returns: (Fy, Fz)
graph: example_graph
handler: MemHandler
```
Listing 4: Print out _mmodel_ graph and model metadata. The graph metadata shows the number of nodes and edges, and the model metadata shows the input, returns, execution method, and modifiers.
### Model execution
Here we define representative experimental parameters and execute the model as a function, shown in Lst. 5. We choose the magnet to be a 250 nm radius cobalt sphere and define a sample grid of dimension \(2000\,\mathrm{nm}\times 2000\,\mathrm{nm}\times 500\,\mathrm{nm}\) with the shape of \((25,25,25)\). The sample is centered at \((0,0,-600)\) nm, which places the sample surface 100 nm away from the magnet surface. The instantaneous magnet position is \((100,0,0)\) nm. The mock sample has a charge distribution of approximately 0.5 electron/nm\({}^{3}\), with values randomly generated. All the defined parameters are shown in Table 2.
```
grid = np.ogrid[-1000:1000:25j, -1000:1000:25j, -350:-850:25j]  # nm
v_voxel = (2000 / 25) * (2000 / 25) * (500 / 25)  # nm^3
np.random.seed(0)
Q = np.random.rand(25, 25, 25) * 1.6e-19 * v_voxel  # A s
xm = 100  # nm
A = 300  # nm
f = 3000  # Hz
mu0 = 1.26e9  # aN/A^2
Ms = 1.45e-3  # A/nm
r = 250  # nm

Fy, Fz = experiment(A, Ms, Q, f, grid, mu0, r, xm)
```
Listing 5: Define representative experimental parameters and execute the model as a function.
### Graph execution returns
The model class allows the user to define the return parameters and the return order. For example, to investigate the intermediate value \(v_{x}\) along with the forces, we simply add \(v_{x}\) to the experiment definition, as shown in Lst. 6.
```
experiment_debug = Model(
    name="experiment_debug",
    graph=G,
    handler=MemHandler,
    returns=["Fy", "Fz", "vx"],
)

>>> print(experiment_debug)
experiment_debug(A, Ms, Q, f, grid, mu0, r, xm)
returns: (Fy, Fz, vx)
graph: example_graph
handler: MemHandler
```
Listing 6: Create a model with customized returns and return order by defining the returns parameter.
### Modifier
The modifier is one of the core features of _mmodel_. The modifiers do not change the original function definition. Instead, they are Python wrappers that modify the functions after their definition.
For example, the modifier that prints out the node value can be defined as in Lst. 7. The pattern argument specifies the format of the output. The modifier adds a printing step after the node calculation.
\begin{table}
\begin{tabular}{|l l l l|} \hline \hline variable & value & unit & description \\ \hline \(A\) & 300 & nm & cantilever amplitude \\ \(f\) & 3000 & Hz & cantilever frequency \\ \(x_{\text{m}}\) & 100 & nm & magnet position \\ \(r\) & 250 & nm & magnet radius \\ \(\mu_{0}\) & \(1.26\times 10^{9}\) & aN/A\({}^{2}\) & vacuum permeability \\ \(M_{\text{s}}\) & \(1.45\times 10^{-3}\) & A/nm & saturation magnetization \\ \(Q\) & \(\sim\) 0.5 electron/nm\({}^{3}\) & A\(\cdot\)s & charge distribution \\ \(grid\) & \(2000\times 2000\times 500\) & nm & grid points \\ voxel & 128000 & nm\({}^{3}\) & grid voxel size \\ \hline \hline \end{tabular}
\end{table}
Table 2: Input parameters for the Lorentz force law model; the units are adjusted to match the calculation.
Figure 7: Experiment graph model generated by _mmodel_.
```
from functools import wraps


def print_result(pattern, end="\n"):
    """Print out function return values.

    :param str pattern: print format
    :param str end: print at the end of the string
    """

    def stdout_modifier(func):
        @wraps(func)
        def wrapped(**kwargs):
            result = func(**kwargs)
            print(pattern.format(result), end=end)
            return result

        return wrapped

    stdout_modifier.metadata = f"print_result({repr(pattern)}, {repr(end)})"
    return stdout_modifier
```
Listing 7: Define the modifier that prints out the formatted return value of the function after execution.
The modifier can decorate any Python function, adding additional steps before or after the execution. The modifiers are passed to the node or model as a list, and multiple modifiers can be applied. Listing 8 adds the modifier to the Vx node and the experiment model, which adds print steps for the formatted \(V_{x}\), \(F_{y}\), and \(F_{z}\) values and their units.
```
# at the node level
G.set_node_object(
    node="Vx",
    func=Vxfunc,
    output="vx",
    modifiers=[print_result("vx: {:.2e} nm/s", " | ")],
)

# at the model level
experiment = Model(
    "experiment",
    graph=G,
    handler=MemHandler,
    modifiers=[print_result("Fy: {0[0]:.2f} aN | Fz: {0[1]:.2f} aN")],
)

>>> Fy, Fz = experiment(A, Ms, Q, f, grid, mu0, r, xm)
vx: 5.33e+06 nm/s | Fy: 6.95 aN | Fz: 0.15 aN
```
Listing 8: Apply the print_result modifier during the function node and model definition.
### Create loops within the model
In this section, we define a function that modifies the graph of a model and creates a loop inside the graph. We can define a shortcut that combines the proper steps of creating the loop discussed in Section II.3. The resulting shortcut function, shown in Lst. 9, allows the model to take a list of values for the target parameter. _mmodel_ does not provide the shortcut function for looping since many loop operations can be package specific.
```
from mmodel.modifier import loop_input


def loop_shortcut(model: Model, loop_param: str, submodel_name: str):
    """Shortcut to create loops of loop_param in the model.

    :param mmodel.Model model: the model to be copied
    :param str loop_param: the loop parameter
    :param str submodel_name: the name of the submodel
    :return: the looped model
    :rtype: mmodel.Model
    """

    G = model.graph
    # determine the subgraph that has the loop parameter as the input
    H = G.subgraph(inputs=[loop_param])
    H.name = f"{submodel_name}_subgraph"

    # create the inner function that substitutes the subgraph
    loop_func = Model(submodel_name, H, model.handler)

    # determine the subgraph returns
    output = "({})".format(", ".join(H.returns))

    # replace the subgraph, and add the loop_input modifier
    loop_graph = G.replace_subgraph(
        H,
        f"{loop_param} loop node",
        loop_func,
        output=output,
        modifiers=[loop_input(loop_param)],
    )
    loop_graph.name = f"{loop_param}_loop_{G.name}"

    model_name = f"{loop_param}_loop_{model.name}"
    return Model(model_name, loop_graph, model.handler)
```
Listing 9: Define the shortcut that loops a given parameter within a model. The shortcut modifies the graph and the subnodes.
The shortcut can be used directly on an existing model, as shown in Lst. 10. The resulting model requires the \(f\) input to be iterable, and the results are the iterated (\(F_{y}\), \(F_{z}\)) pairs. The Fy and Fz nodes are modified to print the forces with the proper units. The resulting model graph and the loop subgraph are shown in Fig. 8.
```
# Modify the Fy and Fz nodes to output the result.
G.modify_node("Fy", modifiers=[print_result("Fy: {:.2f} aN", " | ")], inplace=True)
G.modify_node("Fz", modifiers=[print_result("Fz: {:.2f} aN")], inplace=True)

experiment = Model("experiment", G, MemHandler)

# define the looped experiment
>>> loop_experiment = loop_shortcut(experiment, loop_param="f", submodel_name="f_submodel")
>>> print(loop_experiment)
f_loop_experiment(A, Ms, Q, f, grid, mu0, r, xm)
returns: (Fy, Fz)
graph: f_loop_example_graph
handler: MemHandler

# define frequencies to loop over and execute the experiment
>>> f = np.arange(3000, 8000, 1000)  # Hz
>>> loop_result = loop_experiment(A, Ms, Q, f, grid, mu0, r, xm)
vx: 5.33e+06 nm/s | Fy: 6.95 aN | Fz: 0.15 aN
vx: 7.11e+06 nm/s | Fy: 9.26 aN | Fz: 0.20 aN
vx: 8.89e+06 nm/s | Fy: 11.58 aN | Fz: 0.25 aN
vx: 1.07e+07 nm/s | Fy: 13.89 aN | Fz: 0.30 aN
vx: 1.24e+07 nm/s | Fy: 16.21 aN | Fz: 0.35 aN
```
Listing 10: Create a model that loops the parameter \(f\) based on the Lorentz force model and execute the model with the iterated \(f\) values.
Figure 8: Model graph that loops parameter \(f\), generated using the loop_shortcut. Top: graph of the modified model. Bottom: the graph of the \(\mathtt{f}\) loop node.
2306.12355 | Linear and Non-Linear Barrier Coverage in Deterministic and Uncertain
environment in WSNs: A New Classification | Various studies cited in the literature deal with the classic
problem of obstacle coverage, where the deployment environment, sensor nodes,
and base stations have characteristics that are considered perfect but suffer
from various flaws in the real world. This paper presents other barrier
coverage types ranked in a new classification based on linear and nonlinear
barrier coverage according to deterministic and insecure environments, and
enumerates some of the different current and future challenges of these
coverage types and connectivity in WSNs. | Adda Boualem, Djahida Taibi, Aroua Ammar | 2023-06-21T15:57:19Z | http://arxiv.org/abs/2306.12355v1 | # Linear and Non-Linear Barrier Coverage
###### Abstract
Various studies cited in the literature deal with the classic problem of obstacle coverage, where the deployment environment, sensor nodes, and base stations have characteristics that are considered perfect but suffer from various flaws in the real world. This paper presents other barrier coverage types ranked in a new classification based on linear and nonlinear barrier coverage according to deterministic and insecure environments, and enumerates some of the different current and future challenges of these coverage types and connectivity in WSNs.
WSN, Barrier Coverage, Deterministic and Uncertain Linear Barrier Coverage, Deterministic and Uncertain non-linear Barrier coverage, Connectivity, Current and Future Challenges.
## 1 Introduction
Wireless Sensor Network (WSN) technology was born with the recent advances in wireless communication and electronics that have enabled the development of inexpensive, low-power, and versatile sensors that are small in size and communicate over short distances. The real world is marked by different types of uncertainty, such as imprecision in measurement components, atmospheric phenomena, intrusion, animals, and natural phenomena such as volcanoes, rivers, industrial phenomena, and others [1]. The unreliability of communication radios and reception radios, etc., affects the quality of service and decisions regarding real-world information [2].
This survey addresses the problems of linear-based and non-linear-based barrier coverage models. Its purpose is to expand the wireless sensor network as much as possible to overcome the previous uncertainties and guarantee the quality of service. Thus, the aim is to ensure barrier coverage with a minimal number of connected node subsets and dominant nodes, and at minimal cost, regardless of the type of deployment (random or deterministic).
This paper is structured as follows. Section 2 reviews the problems that influence Wireless Sensor Networks and shows the relationship between these problems. Section 3 summarizes the different types of coverage proposed in the literature. Section 4 cites some related works in barrier coverage. It is subdivided into two main parts. The first part presents some deterministic and uncertainty-based coverage strategies and shows their drawbacks to open up avenues for research. The second part focuses on the new barrier coverage classification (linear and non-linear barrier coverage types in deterministic and uncertain environments in WSNs) and quotes some linear and non-linear-based coverage strategies, showing their drawbacks to open up avenues for research. Section 5 describes the current and future challenges of linear and non-linear coverage. Finally, Section 6 presents conclusions and future work.
## 2 Problems that Impact Wireless Sensor Networks
Implementing sensor networks poses many challenges for researchers to meet stringent constraints imposed by certain properties, including:
* Energy sources are very limited,
* An unattended and harsh deployment environment (the hostility of the environment),
* limited and unsecured radio links (unreliable communication),
* Changing network topologies (changing topologies of deployed sensor networks),
* Uncertain characteristics of either the deployment environment, or the components of the sensor nodes used,
* Heterogeneity of nodes and multi-hop communications, etc.
The main problems that affect WSNs are energy, node cost, limited bandwidth, deployment, security, etc.
Fig. 1 shows the majority of the problems known in the literature.
## 3 Coverage Types
There is a range of types proposed in the literature, as illustrated in Figure. 2. The main types are:
* **Target coverage:** Target Coverage, also known as Point Coverage, is used to monitor specified targets in an AoI.
Fig. 3.(1) illustrates a target coverage scenario where six sensor nodes are deployed to monitor five targets in the Area of Interest (AoI). Target \(\textbf{f}_{5}\) is covered by only one sensor node, while the other targets, \(\textbf{f}_{1}\), \(\textbf{f}_{2}\), \(\textbf{f}_{3}\) and \(\textbf{f}_{4}\), are simultaneously covered by two sensor nodes. Target coverage reduces energy consumption because it monitors only the specified target positions within the AoI.
* **Area Coverage:** Area coverage is to monitor the AoI with a minimal set of sensor nodes deployed in the area of interest (Figure 3.(3)).
* **Barrier Coverage:** Barrier coverage is to deploy a limited number of sensing nodes along the area borders, for monitoring and building a barrier against intrusion (Figure 3.(2)); a standard algorithmic check of this notion is sketched below.
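A common way to make barrier coverage algorithmically concrete, not taken from the surveyed works and assuming a Boolean disk sensing model, is the coverage-graph construction: two virtual vertices stand for the left and right borders of the monitored strip, sensors whose sensing disks overlap are joined by an edge, and a strong barrier exists exactly when the two virtual vertices are connected. A minimal sketch is:

```
# Sketch of a strong-barrier-coverage test over a strip of given width, assuming a
# Boolean disk sensing model with radius rs for every sensor (illustrative only).
import math
import networkx as nx

def has_strong_barrier(sensors, rs, width):
    """sensors: list of (x, y) positions; True if one strong barrier exists."""
    G = nx.Graph()
    G.add_nodes_from(["left", "right"])
    for i, (x, y) in enumerate(sensors):
        if x - rs <= 0:          # sensing disk reaches the left border
            G.add_edge("left", i)
        if x + rs >= width:      # sensing disk reaches the right border
            G.add_edge(i, "right")
    for i, (xi, yi) in enumerate(sensors):
        for j in range(i + 1, len(sensors)):
            xj, yj = sensors[j]
            if math.hypot(xi - xj, yi - yj) <= 2 * rs:
                G.add_edge(i, j)  # overlapping disks form a continuous barrier segment
    return nx.has_path(G, "left", "right")
```

k-barrier coverage can be checked in the same graph by asking for k node-disjoint left-right paths, i.e., a max-flow computation instead of a single path search.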
A comparison between the different strategies used for k-barrier coverage, according to different classifications, is given in Table 1 and Table 2.
The barrier coverage of a geographical area by a network of sensors focuses on the answers to the questions:
* How to cover a geographical area against all intrusion types with a minimal set of sensor nodes?
* How can coverage and connectivity be guaranteed for a maximum period for applicable objectives?
* How can coverage be guaranteed for a maximum period of time in uncertain environment?
* In which deterministic and uncertain positions should sensor nodes be placed or deployed to ensure maximum coverage of the area of interest?
In practice, barrier coverage is usually embodied in the answer to two basic questions [16] :
* How to evaluate the barrier coverage performance rate when sensor nodes are deployed in the form of monitoring barriers?
* How to improve coverage performance when the number of active barriers is minimal?
Many researchers are currently interested in developing solutions that address various needs [17], and have proposed diverse classifications, such as the classification illustrated in Figure 4.
Barrier coverage aims to detect and minimize intrusion through the network (WSN). Boulis [18] pointed out that the monitoring of a WSN is based on the management of three factors: the energy level of each sensor node, the coverage area, and the connection characteristics between nodes [11].
Intrusion detection is one of the most important issues in WSNs; its goal is to detect any intruder trying to enter the network. In fact, many security applications need to detect intruders, such as border protection, critical infrastructure protection, and hazardous substance monitoring. In WSNs, area coverage and barrier coverage [17] are proposed to achieve the objective of intruder detection. In this context, we propose a new classification of barrier coverage types: (a) linear barrier coverage and (b) non-linear barrier coverage. Each type is characterized by a strong or weak barrier, in a static or dynamic topology, depending on the application's need for permanent or periodic coverage.
Figure 3: (1) Target Coverage: a network with 6 sensor nodes and 5 targets, (2) (a) Weak barrier coverage. (b) Strong barrier coverage. (c) Hybrid barrier coverage. (d) Barrier coverage in directional sensor networks, (3) WSN Area Coverage
## 5 Current and Future Challenges of the Barrier Coverage Problem
In many cases, inconsistent detection requirements must be considered when using WSNs, depending on the size or sensitivity of the surveillance area. For example, high detection accuracy is required for sensitive areas, and low detection accuracy is required for smaller areas. Atmospheric events influence and shape the physical environment, the location,
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline
**Ref/No, Pub Year** & **Paper Highlight** & **Deployment Type** & **Sensor type** & **Connectivity Type** & **K-Barrier Type** & **Space** \\ \hline Hong Y, et al [9], 2017 & Monitor intruders in high resolution and maximize network longevity & Deterministic deployment & Cameras & Connected barriers & Strong Barrier & 3D \\ \hline Zhanwei & Minimizes hybrid access point (H-AP) power supply by jointly optimizing power/information transfer timing and H-AP transfer performance while meeting node throughput requirements. & Random deployment & Static & Connected nodes & Weak Barrier & 2D \\ \hline Ernest Bonnah et, al [6], 2022 & Coverage maximization scheme, by using the exposure profile from the sensor node to calculate the minimum exposure path & Random deployment & Static & Connected nodes & Strong Barrier & 2D \\ \hline Ernest B, et al [7], 2017 & Linear Programming (LP) techniques and on-linear optimization problems in WSN & Random deployment & Static & Connected nodes & Non-linear Barriers & 2D \\ \hline Kaiye G, et al [8], 2021 & Minimizing the cost of the entire system while meeting pre-specified requirements on the expected signal coverage and system reliability. & Random deployment & Mobile & Connected nodes & Linear Barrier & 2D \\ \hline Hong Y, et al [4], 2022 & monitoring the intruder with high resolution and maximizing the network lifetime & Deterministic deployment & Cameras & Connected barriers & Strong Barrier & 3D \\ \hline Zhan wei [5],2019 & Minimize hybrid access point (H-AP) energy provision via jointly optimizing the time allocation of energy/ information transmissions and H-AP’s transmission power while satisfying node throughput requirement. & Random deployment & Static & Connected nodes & Weak Barrier & 2D \\ \hline \end{tabular}
\end{table}
Table 1: Comparative study between the different k-Barriers Coverage works according the Connectivity and k-barrier Type.
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Ref/No, Pub Year & Paper Highlight & K-Barrier Type & Strategy/ Objectives \\ \hline A. Boualem, et al [9], 2017 & Probabilistic interference and Truth-Table model for Strong K-Barrier Coverage & Strong Barrier & Sensing model \\ \hline Xing-Gang Fan, et al [10], 2020 & An Efficient Scheme for Mobile Nodes to Join Barriers One by One to Form Strong Barrier Coverage & Strong barrier & Sensor mobility \\ \hline Ernest Bonnah, et al [11], 2020 & The coverage maximization scheme uses the exposure profile of the sensor nodes from the sink node to calculate the minimum exposure path & Strong Barrier & Coverage intensity \\ \hline Xiaoyang L, et al [12], 2018 & Genetic algorithm of route planning of WSN & Weak Barrier & Node localization method \\ \hline A. Boualem, et al [9], 2017 & Probabilistic interference and Truth-Table model for Strong K-Barrier Coverage & Strong Barrier & Sensing model \\ \hline \end{tabular}
\end{table}
Table 2: Comparative study between the different k-Barriers Coverage works according k-barrier Type and strategy objective
communication strength, and monitoring of the sensor nodes in the network. This reality makes it necessary to consider the nature of uncertainty. Many works proposed in the literature consist of introducing fuzziness into the process of scheduling sensor nodes in WSNs for several purposes. The following types of uncertainty are present in WSNs:
* Uncertainty in radio communication links. Communication performance increases with spatial distance. Mobility, power, performance, and connectivity are constraints that prevent network sensor nodes from communicating when deployed in a three-dimensional (3D) environment.
* Uncertainty in the testing process. Environmental disturbances, angles, nonlinear distances, noise, sensor types, and other factors will bring uncertainty to the detection process of sensor networks.
* Detection uncertainty in data collection. When sensors are deployed in harsh environments, various factors can affect the quality of the collected or captured data, such as degradation caused by wind, terrain, animals, etc.
## 6 Conclusion
Several classifications focus on proposed solutions to ensure 1-barrier and/or k-barrier coverage in WSNs. In this paper, we divide barrier coverage into two axes: (a) deterministic-based barrier coverage models and (b) uncertainty-based barrier coverage models. In each model, we classify barrier coverage according to deterministic and uncertain 2D and 3D environments. Depending on the application's need for continuous or intermittent coverage, we classify (1) linear and (2) nonlinear barrier coverage into strong and weak barriers in static or dynamic topologies. The gap between existing work and practical applications needs to be considered in future research to address current and future needs, such as Boolean barrier coverage, probabilistic barrier coverage, and fuzzy barrier coverage, based on the requirements for monitoring the occurrence, importance, and hazards of phenomena. In this paper, we mainly answer the following questions: what new types of barrier coverage have emerged, how to deal with the problem of covering the boundaries and borders of hazardous areas, what solutions have been proposed, what the current and future challenges of this task are, and what deployment topologies have been used.
## Acknowledgements
The authors would like to thank everyone, just everyone!
|
2304.13632 | Automatic and Flexible Transmission of Semantic Map Images using Polar
Codes for End-to-End Semantic-based Communication Systems | Semantic communication represents a promising roadmap toward achieving
end-to-end communication with reduced communication overhead and an enhanced
user experience. The integration of semantic concepts with wireless
communications presents novel challenges. This paper proposes a flexible
simulation software that automatically transmits semantic segmentation map
images over a communication channel. An additive white Gaussian noise (AWGN)
channel using binary phase-shift keying (BPSK) modulation is considered as the
channel setup. The well-known polar codes are chosen as the channel coding
scheme. The popular COCO-Stuff dataset is used as an example to generate
semantic map images corresponding to different signal-to-noise ratios (SNRs).
To evaluate the proposed software, we have generated four small datasets, each
containing a thousand semantic map samples, accompanied by comprehensive
information corresponding to each image, including the polar code
specifications, detailed image attributes, bit error rate (BER), and frame
error rate (FER). The capacity to generate an unlimited number of semantic maps
utilizing desired channel coding parameters and preferred SNR, in conjunction
with the flexibility of using alternative datasets, renders our simulation
software highly adaptable and transferable to a broad range of use cases. | Hossein Rezaei, Thushan Sivalingam, Nandana Rajatheva | 2023-04-26T15:41:57Z | http://arxiv.org/abs/2304.13632v5 | Automatic and Flexible Transmission of Semantic Map Images using Polar Codes for End-to-End Semantic-based Communication Systems
###### Abstract
Semantic communication represents a promising roadmap toward achieving end-to-end communication with reduced communication overhead and an enhanced user experience. The integration of semantic concepts with wireless communications presents novel challenges. This paper proposes a flexible simulation software that automatically transmits semantic segmentation map images over a communication channel. An additive white Gaussian noise (AWGN) channel using binary phase-shift keying (BPSK) modulation is considered as the channel setup. The well-known polar codes are chosen as the channel coding scheme. The popular COCO-Stuff dataset is used as an example to generate semantic map images corresponding to different signal-to-noise ratios (SNRs). To evaluate the proposed software, we have generated four small datasets, each containing a thousand semantic map samples, accompanied by comprehensive information corresponding to each image, including the polar code specifications, detailed image attributes, bit error rate (BER), and frame error rate (FER). The capacity to generate an unlimited number of semantic maps utilizing desired channel coding parameters and preferred SNR, in conjunction with the flexibility of using alternative datasets, renders our simulation software highly adaptable and transferable to a broad range of use cases.
end-to-end communication, error-correcting codes, polar code, semantic communication, simulation software, successive-cancellation decoder.
## I Introduction
Over the course of several decades, wireless communication has undergone a continuous evolution, driven by advancements in mathematical breakthroughs and novel innovations aimed at fulfilling the needs of human beings [1]. Consequently, the forthcoming wireless communication is expected to provide a highly sophisticated wireless experience, encompassing a vast range of implementations in various domains, such as extended reality (XR), self-sufficient robots, holoportation, and numerous other cutting-edge technologies [2]. Therefore, there is a necessity for a more sophisticated mode of communication.
The current communications paradigm has centered around transmitting bits while minimizing the occurrence of errors. This approach originated from Shannon's seminal 1948 paper [3], which laid out the concept of "channel capacity". In addition, it demonstrated that the rates below the channel capacity could be achieved without significantly increasing errors at the receiver's end. Researchers have pursued this topic for over five decades, eventually discovering capacity-achieving codes that work effectively over long block lengths.
The current receivers do not explicitly leverage the available source information at the transmitter side. Further, joint source-channel coding (JSCC) [4] and unequal error protection (UEP) [5] have been extensively researched, but these were primarily focused on the transmitter side. However, the advent of modern image and video coding techniques has spurred a rising attraction to utilize artificial intelligence (AI) and machine learning (ML) [6] for the efficient encoding of source information. This is further facilitated by the availability of image databases, which can be used to obtain style images for various scenarios. As a result, object classification is enhanced, leading to improved segmented images referred to as semantically coded images.
The idea behind semantic communication is to explore the knowledge base information at the receiver end to reduce communication overhead and enhance user experience. This concept is particularly significant in 6G and beyond [1], given the substantial role played by Internet of Things (IoT) applications. In this context, the number of transmission bits and their abstract meaning holds paramount importance. However, achieving deep trustworthiness tailored to specific applications is more crucial than shallow precision at the bit level. Integrating semantic concepts with wireless communications presents several novel challenges [7].
Polar codes [8, 9, 10, 11, 12, 13, 14] are the first capacity-achieving error-correcting codes over binary-input discrete memoryless channels (B-DMC). They are constructed recursively using the polarization phenomenon [8], a feature that enables them to correct errors and optimize communication channels. As such, polar codes are a potential solution in the field of coding theory. The low-complexity encoding and decoding algorithms of polar codes have led to their selection as the coding scheme for the control channel of enhanced mobile broadband (eMBB) in the fifth generation of new radio (5G-NR) wireless communication standards.
In this paper, we propose a flexible software [15] that automatically transmits the semantic segmentation map images over an additive white Gaussian noise (AWGN) channel using binary phase-shift keying (BPSK). This software is beneficial in investigating the effect of channel noise on end-to-end image communication systems utilizing semantic concepts. The well-known polar codes are chosen as the channel coding scheme, with the flexibility of selecting any code rate and code
length. The dataset selected for the task is the popular common objects in context (COCO)-Stuff dataset [16], which is an augmented version of the COCO [17] dataset and contains \(91\) different stuff classes. The output semantic map images corresponding to four different signal-to-noise ratios (SNRs) are generated to obtain four small datasets, each containing a thousand images from the COCO-Stuff dataset [15]. While this paper utilizes the COCO-Stuff dataset as an example, it should be noted that the software is not confined to this particular dataset and can be leveraged for transmitting images from any other dataset as well.
The remainder of the paper is organized as follows. In Section II, a background on end-to-end semantic communications, COCO-Stuff dataset, and polar codes will be provided. Section III presents the post-channel semantic map image generator software. The simulation results are summarized in Section IV, and finally, Section V concludes this work.
## II Background
### _End-to-End Semantic Communications_
Currently, there is no comprehensive system model that integrates semantic and current communication systems for effective semantic communication. However, a preliminary system model for semantic communication has been proposed in [18]. According to this model, the source information is initially encoded with semantic coding schemes and then further encoded using current channel coding approaches before being transmitted via communication channels. The received bits are decoded using existing channel decoders at the receiver's end, and the semantic decoder produces the output. The success of this strategy relies heavily on the accuracy of the feature extraction process and its ability to meet the receiver's requirements. As such, there is a need for thorough research into feature extraction and optimization to ensure the effectiveness of semantic coding.
For semantic communication, the first essential step is semantic coding. This involves extracting and capturing the meaning or semantic features of the source and ensuring they align with the sufficient conditions of the receiver. For example, image/video applications segment images based on templates and use image databases such as the COCO dataset [https://cocodataset.org](https://cocodataset.org). However, defining a general framework for semantic coding is challenging since receiver requirements vary depending on the application. Task-oriented semantic extraction [19] and coding can improve data rates significantly. Since there is not a single transmission system for semantic communication, designing a semantic communication system that aligns with the current communication framework is imperative.
The channel decoder initially decodes the semantic information from the received signal at the receiver end. The primary difficulty is guaranteeing that the transmitter's original semantic details are maintained during the communication. Subsequently, the disordered semantic information is fed as input to the semantic decoder, which then generates an output utilizing the existing knowledge base. For example, the authors in [18] propose a generative adversarial network (GAN) based semantic encoder, which produces the actual output image using the existing style image (knowledge base) and the received segmented map.
### _COCO-Stuff Dataset_
COCO [20] is a popularly-used dataset in computer vision that serves as a benchmark for various image-based tasks. It is a comprehensive dataset that includes features such as object detection, segmentation, and captioning. This dataset has become a standard knowledge base for semantic communication-based image transmission systems due to its large-scale and diverse collection of images. COCO comprises over \(118\)K training images and \(5\)K validation images containing various everyday objects captured in familiar settings. Additionally, the dataset features \(1.5\) million object instances, \(80\) object classes, and \(91\) stuff classes, making it an extensive and varied dataset. Another unique feature of COCO is its inclusion of five captions per image and \(250\)K individuals with key points, making it a valuable resource for research in computer vision and artificial intelligence. Therefore, we base our analysis on the COCO dataset.
Numerous studies have utilized the COCO dataset to investigate various applications in image processing. One such study, named as CGBNet [21], uses context encoding and multi-path decoding to create a semantic segmentation based on the COCO dataset. Another study employed a GAN-based image segmentation technique [22] which addresses the distribution similarity problem in image segmentation from natural language referring expressions. Additionally, researchers in [23] utilized the COCO stuff to investigate image coding strategies and develop a semantically structured bitstream to reduce complexity.
### _Polar Codes_
Polar codes, invented by Arikan in \(2009\)[8], represent a distinctive class of Shannon's capacity-achieving error-correcting codes. Let us denote by \(\mathcal{P}(N,K)\) a polar code of length \(N\)=\(2^{n}\), which contains \(K\) information bits. The code rate then can simply be computed as \(\mathcal{R}\)=\(K/N\). As the code length approaches infinity (\(N\rightarrow\infty\)), the polarization phenomenon allows for the physical channel to be divided into extremely reliable and unreliable virtual channels. The \(K\) most reliable bit positions are included in the information set \(\mathcal{I}\), while the remaining \(N\)-\(K\) less reliable bit positions are included in the frozen set \(\mathcal{F}\).
Mathematically speaking, binary polar codes, also known as Arikan's codes, are built from a two-bit-to-two-bit transformation using a basic \(2\times 2\) polarization matrix known as the binary kernel. The binary kernel is denoted by \(G_{2}\) and defined as
\[G_{2}=\left[\begin{array}{cc}1&1\\ 1&0\end{array}\right]. \tag{1}\]
By employing a linear transformation as \(x=u\cdot G\), larger polar codes can be constructed in a recursive manner. Here \(x\) denotes the encoded stream, \(u\) represents an \(N\)-bit input vector
and \(G\) is the generator matrix created by the \(n\)-th Kronecker product matrix, i.e.
\[G\triangleq T_{n_{0}}\otimes T_{n_{1}}\otimes...\otimes T_{n_{s}}, \tag{2}\]
where \(T_{n_{i}}\)s are square kernel matrices. The input message is then integrated into the reliable bit positions of \(u\), and the remaining bits of \(u\) are set to zero.
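As an illustration of (1) and (2), a minimal non-systematic encoder for Arikan's kernel can be written in a few lines. The information set below is assumed to be precomputed (for example, from Bhattacharyya parameters or a stored reliability sequence), and bit-ordering conventions may differ between implementations:

```
# Minimal sketch of polar encoding x = u . G over GF(2) with G built from Arikan's
# kernel; the information set (the K most reliable positions) is assumed to be given.
import numpy as np

def polar_encode(msg, N, info_set):
    n = int(np.log2(N))
    G2 = np.array([[1, 1], [1, 0]])
    G = G2
    for _ in range(n - 1):
        G = np.kron(G, G2)              # n-fold Kronecker product of the kernel
    u = np.zeros(N, dtype=int)          # frozen positions stay at zero
    u[np.sort(np.array(info_set))] = msg
    return (u @ G) % 2                  # codeword of length N

# Example: P(8, 4) with the information set {3, 5, 6, 7}
x = polar_encode(np.array([1, 0, 1, 1]), 8, [3, 5, 6, 7])
```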
## III Post-Channel Semantic Map Image Generator
In this section, we will expound on the proposed post-channel semantic map image generator software. Fig. 1 illustrates a high-level architecture of an end-to-end semantic-based image transmission system. A desired framework can be used to extract the semantic map images on the transmitter side. In this study, we utilize the COCO-Stuff dataset as it is readily available. The extracted semantic maps are subsequently subjected to encoding by polar codes, which have been employed as the channel (de)coder. The users have the flexibility to select the desired code length and code rate to achieve their objective. The encoded data is then transmitted through an AWGN channel using BPSK modulation, and the impact of channel noise on the image data is determined by the selected SNR. On the receiver side, a polar decoder is employed to decode the image data, and the resulting data is utilized to reconstruct the semantic map image. As expected, the quality of the regenerated image will be influenced by the channel noise. The proposed software is responsible for executing all the tasks delineated within the red dashed box. Finally, log-likelihood ratios (LLRs) are used as the demapping method. The proposed software is scripted in Python, and the specification of the channel is summarized in Table I.
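For reference, the channel chain of Table I (BPSK over AWGN followed by LLR demapping) reduces to a few operations per codeword. The sketch below assumes unit-energy BPSK and an Eb/N0-style SNR definition, which may differ from the exact convention used in the software:

```
# Sketch of the BPSK/AWGN channel with LLR demapping placed between the polar
# encoder and decoder; rate is the code rate R = K/N.
import numpy as np

def awgn_bpsk_llr(codeword, snr_db, rate):
    ebn0 = 10 ** (snr_db / 10)
    sigma = np.sqrt(1 / (2 * rate * ebn0))       # noise standard deviation
    tx = 1 - 2 * codeword.astype(float)          # bit 0 -> +1, bit 1 -> -1
    rx = tx + sigma * np.random.randn(len(tx))   # additive white Gaussian noise
    return 2 * rx / sigma**2                     # channel LLRs for the decoder
```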
A comprehensive, step-by-step depiction of the software's execution sequence is outlined in Algorithm 1, which provides a detailed flow for understanding the execution. Moreover, Algorithm 2 describes the channel decoder's function. In addition to the notation stated above, \(D\) is the size of the dataset, \(T_{c}\) is the CPU's elapsed time, \(\mathbf{h_{r}}\) indicates the reliable channels, and \(\mathcal{CD}(.)\) denotes the channel decoder. Also, \(\mathbf{I_{dd}}\) and \(\mathbf{I_{ds}}\) represent the accumulated decoded data (all the decoded image data) and the data of the decoded stream (only the latest decoded packet), respectively.
```
Inputs: \(N\), \(K\), \(D\), \(min\_SNR\), \(max\_SNR\), \(SNR\_step\)
Outputs: Post-channel images, performance data, \(T_{c}\)
Compute \(\mathbf{h_{r}}\)
for \(SNR\leftarrow min\_SNR\) to \(max\_SNR\) by \(SNR\_step\) do
    for \(j\gets 1\) to \(D\) do
        read \(j\)th image
        reshape image data and calculate \(pixel\_count\)
        while \(pixel\_count>0\) do
            msg \(\leftarrow\) next \(K\) bits
            \(\mathbf{I_{ds}}\), \(BER\), \(FER\leftarrow\mathcal{CD}\left(N,K,\mathbf{msg},\mathbf{h_{r}},SNR\right)\)
            \(\mathbf{I_{dd}}\leftarrow\mathbf{I_{dd}}+\mathbf{I_{ds}}\)
            write image characteristics, \(N\), \(K\), \(FER\), \(BER\), \(T_{c}\)
        end while
        reshape \(\mathbf{I_{dd}}\) and construct an image
        write constructed image to the output folder
    end for
end for
```
**Algorithm 1** Execution flow of the proposed software
The simulation software has the capability to transmit one or multiple images in one run. It generates the post-channel images along with a comprehensive text file that summarizes the image parameters, error rates, and channel specifications corresponding to all processed images. The proposed software's sample text output corresponding to a sample image is illustrated in Fig. 2. The report provides an overview of several key factors that impact image quality and processing efficiency. Specifically, the image resolution, number of pixels, SNR, polar code specifications, error rates (frame-error rate (FER) and bit-error rate (BER)), and CPU's elapsed time are all highlighted. Additionally, the study employs an AMD Ryzen 7 PRO \(5850\)U x64 CPU operating at a frequency of \(1.90\) GHz to execute the software. Notably, the CPU's elapsed time is significantly influenced by the resolution of the image. As a general rule, higher image resolutions result in longer elapsed times. The average time required to transfer an image from the COCO-Stuff dataset through the channel is roughly three minutes. This is substantiated by the fact that the sample image of Fig. 2 necessitates the transmission of \(300\)K bits of data, which can be transferred through the use of \(1200\) packets.
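The BER and FER values written to the report follow the standard definitions, namely the fraction of erroneous bits and the fraction of packets containing at least one erroneous bit. A sketch of the per-packet bookkeeping, using hypothetical variable names rather than the software's internal ones, is:

```
# Per-packet BER/FER accumulation over one image transmission (illustrative only).
import numpy as np

def update_error_stats(tx_bits, rx_bits, stats):
    """stats holds running totals: bit_errors, bits, frame_errors, frames."""
    errors = int(np.sum(tx_bits != rx_bits))
    stats["bit_errors"] += errors
    stats["bits"] += len(tx_bits)
    stats["frame_errors"] += int(errors > 0)
    stats["frames"] += 1
    stats["BER"] = stats["bit_errors"] / stats["bits"]
    stats["FER"] = stats["frame_errors"] / stats["frames"]
    return stats
```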
## IV Simulation Results
In this section, we will examine the effect of code length and code rate of polar codes on the error-correction performance. Fig. 3 illustrates the impact of altering the code length on both
FER and BER. All codes have a fixed rate of \(\mathcal{R}=1/2\). Fig. 3 demonstrates that increasing the code length results in superior FER and BER performance. This is due to the stronger polarization of longer polar codes: as the code length grows, the reliability of some virtual channels increases while that of others decreases. As a result, we can choose channels with higher reliabilities while maintaining the same code rate. Fig. 4 depicts a sample image transmitted through a communication channel utilizing polar codes with varying lengths and a constant code rate of \(\mathcal{R}=1/2\). The figure illustrates that shorter codes perform better than longer codes when the SNR is 1 dB. However, as the SNR increases, longer codes become more effective, which aligns with the findings presented in Fig. 3.
The impact of altering the code rate is depicted in Fig. 5. It is evident that as the code rate increases, the FER and BER tend to increase. This is because less reliable virtual channels are used to transmit data as the code rate increases. Fig. 6 displays a sample image transmitted through the channel utilizing a polar code of size \(N=512\) and various code rates. It is apparent that images transmitted with a high code rate and low SNR exhibit the highest amount of noise.
Fig. 1: High-level architecture of an end-to-end image transmission system using semantic communications.
Fig. 3: Effect of the block length of polar codes on the error-correction performance of a sample image over an AWGN channel.
Fig. 2: A text output generated by the software after transmitting a sample image.
## V Conclusion
Flexible simulation software that automatically transmits semantic segmentation map images using polar codes is presented in this paper. The proposed software allows for a comprehensive analysis of the impact of channel noise on semantic map images within end-to-end image transmission systems. While the COCO-Stuff dataset is selected in this paper, it is essential to note that the software can transmit images from any preferred dataset. Moreover, the user is also empowered to choose the desired coding parameters and signal-to-noise ratio, enhancing the software's flexibility and usability. With its advanced features and adaptability, this simulation software represents a significant step forward in the field of semantic image transmission with wireless communication.
## Acknowledgment
This research has been supported by the Academy of Finland, 6G Flagship program under Grant 346208.
|
2306.05629 | R-PMAC: A Robust Preamble Based MAC Mechanism Applied in Industrial
Internet of Things | This paper proposes a novel media access control (MAC) mechanism, called the
robust preamble-based MAC mechanism (R-PMAC), which can be applied to power
line communication (PLC) networks in the context of the Industrial Internet of
Things (IIoT). Compared with other MAC mechanisms such as P-MAC and the MAC
layer of IEEE1901.1, R-PMAC has higher networking speed. Besides, it supports
whitelist authentication and functions properly in the presence of data frame
loss. Firstly, we outline three basic mechanisms of R-PMAC, containing precise
time difference calculation, preambles generation and short ID allocation.
Secondly, we elaborate its networking process of single layer and multiple
layers. Thirdly, we illustrate its robust mechanisms, including collision
handling and data retransmission. Moreover, a low-cost hardware platform is
established to measure the time of connecting hundreds of PLC nodes for the
R-PMAC, P-MAC, and IEEE1901.1 mechanisms in a real power line environment. The
experiment results show that R-PMAC outperforms the other mechanisms by
achieving a 50% reduction in networking time. These findings indicate that the
R-PMAC mechanism holds great potential for quickly and effectively building a
PLC network in actual industrial scenarios. | Kai Song, Biqian Feng, Yongpeng Wu, Zhen Gao, Wenjun Zhang | 2023-06-09T02:28:55Z | http://arxiv.org/abs/2306.05629v1 | # R-PMAC: A Robust Preamble Based MAC Mechanism Applied in Industrial Internet of Things
###### Abstract
This paper proposes a novel media access control (MAC) mechanism, called the robust preamble-based MAC mechanism (R-PMAC), which can be applied to power line communication (PLC) networks in the context of the Industrial Internet of Things (IIoT). Compared with other MAC mechanisms such as P-MAC and the MAC layer of IEEE1901.1, R-PMAC has higher networking speed. Besides, it supports whitelist authentication and functions properly in the presence of data frame loss. Firstly, we outline three basic mechanisms of R-PMAC, containing precise time difference calculation, preambles generation and short ID allocation. Secondly, we elaborate its networking process of single layer and multiple layers. Thirdly, we illustrate its robust mechanisms, including collision handling and data retransmission. Moreover, a low-cost hardware platform is established to measure the time of connecting hundreds of PLC nodes for the R-PMAC, P-MAC, and IEEE1901.1 mechanisms in a real power line environment. The experiment results show that R-PMAC outperforms the other mechanisms by achieving a 50% reduction in networking time. These findings indicate that the R-PMAC mechanism holds great potential for quickly and effectively building a PLC network in actual industrial scenarios.
Industrial internet of things, power line communication, P-MAC, R-PMAC, IEEE1901.1.
## I Introduction
The exponentially growing amount of data produced in manufacturing processes and daily life has significantly contributed to human progress. The Internet of Things (IoT), an emerging communication paradigm, offers a promising solution for collecting and managing vast amounts of data, enabling timely detection of changes and informed decision-making. In the IoT, the objects of everyday life are equipped with micro-controllers and transceivers for digital communication. With the assistance of a suitable protocol stack, they can communicate with other devices or cloud servers [1]. As a predominant branch of the IoT, the Industrial Internet of Things (IIoT) focuses on industrial data such as temperature, humidity, facility status, and electrical power consumption. Nowadays, the IIoT serves as the foundation of Industry 4.0 and Intelligent Manufacturing [2], as it updates industrial devices with information technology (IT) systems and eliminates information silos [3, 4, 5].
In recent years, extensive research works have concentrated on the increasingly important communication technologies for the IIoT, including wireless communication technologies such as narrow-band Internet of Things (NB-IoT) and LoRa, as well as wired communication technologies like optical fiber, RS-485, and power line communication (PLC). However, there are many limitations in realizing the IIoT, such as the potential blind spots of wireless communication systems and the high cost of wired communication systems. In contrast, by utilizing ubiquitous power lines as the transmission medium, PLC offers lower deployment costs and stronger signal penetration through walls. Thus, PLC is a promising technology to realize the IIoT. In essence, the IIoT is composed of a large number of communication nodes which regularly upload industrial data via low-cost hardware. Hence, in order to implement a PLC-based IIoT, we need to design an appropriate protocol stack to connect these nodes with the software applications located on the upper layers of the IIoT.
Designing protocols for the physical (PHY) and media access control (MAC) layers in PLC-based IIoT networks poses several challenges, including the requirement of robustness to varying channel quality, the usage of low-cost hardware devices, and minimizing networking time in large-scale networks. To date, however, existing PHY and MAC layer protocols have not been fully optimized for the specific requirements of PLC-based IIoT. For instance, home-oriented PLC protocols usually experience performance degradation in industrial scenarios due to significantly reduced available bandwidth. Besides, the existing MAC layer protocols require a significant amount of time to manage the networking and data uploading processes for hundreds of nodes. To address these issues, we propose a new MAC layer mechanism called robust preamble-based MAC (R-PMAC) and validate its performance on a practical hardware platform. The main contributions of this paper are summarized as follows:
* We propose a novel preamble-based MAC layer protocol called R-PMAC that offers improved robustness and higher networking speeds for PLC-based IIoT. We elaborate on every step of the networking process, in single-layer and multi-layer scenarios, and show that it outperforms both P-MAC and the MAC layer of IEEE1901.1.
* To deal with the possible data loss caused by data frame collision and variable channel quality, we develop a
collision handling mechanism and a retransmission mechanism. By utilizing these two mechanisms, R-PMAC is capable of completing networking in the presence of data frame loss.
* We specify the data frame length, which can simplify the realization of P-MAC. Besides, we meticulously design the structures of various data frames used in R-PMAC to take full advantage of the space of data frame.
* We design a practical programmable hardware platform with low-cost chips to compare the proposed R-PMAC with the MAC layer of IEEE1901.1 and P-MAC. The experimental results demonstrate that R-PMAC performs better than the other two mechanisms.
The rest of this paper is organized as follows. We review the related works in Section II. Section III introduces some basic mechanisms that are necessary for the realization of R-PMAC, while Section IV describes the networking process. In Section V, we propose the mechanisms that can increase the robustness of R-PMAC, followed by the design of the structure of different data frames of R-PMAC in Section VI. We then describe the programmable hardware platform in Section VII, where we implement R-PMAC, P-MAC, and the MAC layer of IEEE1901.1. In Section VIII, we demonstrate the experimental results validating the performance of R-PMAC, and in Section IX, we conclude the paper, highlighting the future development of PLC technology applied in IIoT.
## II Related work
The IIoT can be described with a hierarchical model consisting of multiple abstract layers. At the upper layers, software applications and interfaces facilitate data exchange, which is similar to traditional computer networks. Common protocols such as HTTP and TCP-IP can be readily adapted to these layers. Researchers have also developed various IIoT software platforms that run on servers and focus on analyzing industrial data [6]. The lower layers of the IIoT, including the physical (PHY) layer and the media access control (MAC) layer, are responsible for establishing connections between IIoT nodes over unreliable channels. They usually need customized designs because their performance determines the data collection rate and the efficiency of network organization and maintenance.
### _The PHY layer of PLC network_
The PHY layer of PLC performs modulation, demodulation, and complicated data processing to transmit data through the PLC channel. In addition to traditional modem techniques like continuous phase frequency shift keying (FSK) [7], researchers attempt to equip the PHY layer of PLC with more advanced modulation schemes to further overcome the fading, noise, and multipath propagation present in power lines [8]. Orthogonal frequency division multiplexing (OFDM) is one such modulation scheme that can meet this requirement. OFDM can modulate different carriers individually with a specific multi-bit modulation scheme and allocate signal power using the water-filling principle [9]. In OFDM, the result of the Inverse Fast Fourier Transform (IFFT) of an information vector of length N is transmitted; applying the FFT at the receiver recovers the information vector [9]. OFDM has been pervasively used in both broad-band and narrow-band PLC. For instance, IEEE1901.1 is a classical PLC protocol applied in smart homes. Baseband data is generated by turbo encoding, scrambling, bit interleaving, and robust OFDM (ROBO) interleaving in the PHY layer of IEEE1901.1, and then modulated via OFDM [10]. It has been observed that OFDM is robust to non-AWGN as well as distance- and frequency-dependent noise [11], which indicates that OFDM is suitable for the PHY layer of PLC.
In addition, researchers explore the transplantation of wireless communication techniques to the PHY layer of PLC. For example, multiple lines in a three-phase wire are used to transmit signals, which is similar to the multiple-input-multiple-output (MIMO) structure in wireless communication [12]. Besides, wireless and PLC interfaces are integrated in network stations whose control nodes evaluate the channel quality of both wireless and PLC to select the channel with better quality for exchanging data [13].
### _The MAC layer of PLC network_
The MAC layer governs the topology of the PLC network and the way nodes join the network. Existing protocols like IEEE1901.1 and HomePlug tend to use a tree or mesh topology depending on the detailed scenario [10, 14]. In terms of node types, most protocols use the same classification, which divides nodes into three categories: Central Coordinators (CCOs), Proxy Coordinators (PCOs), and Stations (STAs). The CCO is the unique network gateway, while a STA can serve as a PCO on demand when it forwards data between the CCO and other STAs.
Two primary objectives of optimizing the MAC layer in IIoT are to speed up the networking process and to handle data frame collisions caused by the large number of nodes. MAC protocols can be divided into two main groups: fixed access and dynamic access [15]. Time Division Multiple Access (TDMA) is a representative fixed access protocol, while dynamic access protocols can be further classified into arbitration protocols (e.g., token-passing and polling protocols) and contention protocols like Carrier Sense Multiple Access (CSMA). In networks based on token-passing and polling protocols, stations exchange token messages, and the station that possesses the token gets the medium [9]. Besides, the active polling scheme, evolved from the polling protocol, can retain active network stations and temporarily exclude other stations from the polling cycle, which alleviates the disadvantage of long round-trip times [16]. Among various contention protocols, CSMA is widely applied in the MAC layer to decrease collisions. Recent research on CSMA includes modifications of traditional CSMA mechanisms [17], modelling of network topology [18], and compressing the time of channel contention with advanced channel sensing technology [19].
As an alternative to CSMA, the novel preamble-based MAC (P-MAC) mechanism was proposed to divide the networking process into three stages: Preamble Time Exchange (PTE), Time Query (T-Query), and Network Configuration (Net-Config) [20]. PTE involves a downlink preamble broadcast by the CCO and uplink preambles sent by the STAs in response to the CCO. Both the CCO and the STAs can capture the timing differences between the downlink and uplink preambles. In T-Query and Net-Config, the CCO distinguishes STAs by the time differences recorded in PTE and controls the STAs to join the network. In order to save time spent on channel contention, P-MAC limits channel contention to the PTE stage and avoids time-consuming collision detection and handling. Since the preambles contain no data, the time slots for preambles in PTE are substantially shorter than those for data frames, which means that the duration of PTE can be significantly reduced.
### _Motivation_
Although the PHY and MAC layers of the PLC network have been studied and applied in various contexts, there is still room for improvement when it comes to the design of a more efficient and robust network for IIoT. The designated bandwidth of IEEE1901.1 is greater than the available bandwidth of actual power line channels in industrial settings, and it uses long data frames, which makes the network operate slowly. In practical situations, it takes more than ten minutes to connect over 100 nodes. As for the P-MAC mechanism, it offers reduced networking time, but further research is needed to ensure its robustness and scalability in larger networks. Hence, beyond existing protocols such as IEEE1901.1 and P-MAC, improved MAC protocols should be designed to tackle the narrow bandwidth and large number of nodes in industrial settings. Therefore, in this paper, we design a new robust preamble-based MAC mechanism for PLC-based IIoT, as detailed in the following text.
## III Basic mechanism of R-PMAC
In this section, we introduce the three basic techniques in R-PMAC: precise time difference calculation, preamble generation, and SID allocation. These techniques are the foundation of the R-PMAC mechanism.
### _Precise Time Difference Calculation_
In the PTE stage of R-PMAC, the CCO transmits a downlink preamble to the STAs, and the STAs then respond to the CCO with uplink preambles. The time difference between the CCO's downlink preamble and an STA's uplink preamble can help to distinguish different STAs. However, due to the limited processing capabilities of the CCO and STAs, they are unable to handle signals in real time, leading to discrepancies between the time differences measured by the CCO and by the STAs.
There are three main types of delays that contribute to the measurement errors: sending delay, propagation delay, and receiving delay. The receiving delay can be further divided into the time spent on signal processing by the PHY hardware and the data transportation between the PHY hardware and the MAC hardware. These delays are listed in TABLE I, and Fig. 1 illustrates how they cause the time differences measured by the STA and the CCO to differ. In IIoT, the distance between nodes is usually small, so the influence of the propagation delay is much smaller than that of the other delays. Therefore, \(T_{C}\) can be ignored, while the other delays in TABLE I should be taken into account. Moreover, implementing P-MAC requires sufficient processing speed to satisfy the premise \(T_{P}=R_{P}+R_{M}\), which can be challenging for low-cost hardware.
Therefore, we introduce a precise time difference calculation mechanism to R-PMAC, allowing the CCO to obtain correct time differences even with a non-real-time receiver, making R-PMAC more robust than P-MAC. In OFDM-based systems, preambles and data frames have fixed lengths, so \(T_{P}\), \(R_{P}\), and \(R_{M}\) in TABLE I are constant. Based on this feature, we design the delay calibration mechanism shown in Fig. 2. The random backoff time of the STA can be set to zero. The CCO then measures the time differences \(\tau_{CCO1}\) and \(\tau_{CCO2}\), while the STA records \(\tau_{STA}\) and conveys it to the CCO through a data frame. By introducing a correction factor \(\tau\), the linear equations in (1) relating \(T_{P}\), \(R_{P}\), \(R_{M}\), and \(\tau\) are obtained. In practice, \(\tau_{CCO1}\), \(\tau_{CCO2}\), and \(\tau_{STA}\) can be easily measured, and the equations in (1) can be used to determine the values of the delays and the correction factor. For a specific hardware platform, the delays can be regarded as invariant, so the correction factor \(\tau\) can be calculated once and stored in the hardware as a constant. Then, in the PTE stage, the CCO can use (2) to determine \(\Delta\hat{T}_{STA}\), which is equal to \(\Delta T_{STA}\).
\[\begin{cases}R_{P}+R_{M}-T_{P}-\tau=0\\ R_{P}+R_{M}+\tau=\tau_{CCO2}\\ R_{P}+R_{M}+T_{P}+\tau=\tau_{CCO1}\\ R_{M}+T_{P}=\tau_{STA}\end{cases} \tag{1}\]
\[\Delta\hat{T}_{STA}=\Delta T_{CCO}-2\tau \tag{2}\]
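The system in (1) can be solved in closed form. The sketch below (variable and function names are ours, used only for illustration) computes the delays and the correction factor from the three measured quantities, and then applies the correction of (2):

```python
def calibrate(tau_cco1, tau_cco2, tau_sta):
    """Solve the linear system in Eq. (1) for the delays and the
    correction factor tau. All quantities are in microseconds."""
    # Subtracting the second equation from the third gives the sending delay.
    t_p = tau_cco1 - tau_cco2
    # The first equation gives R_P + R_M = T_P + tau; substituting into
    # the second equation yields tau = (tau_cco2 - T_P) / 2.
    tau = (tau_cco2 - t_p) / 2.0
    # The fourth equation gives the MAC-side receiving delay.
    r_m = tau_sta - t_p
    # The remaining PHY-side receiving delay.
    r_p = (t_p + tau) - r_m
    return t_p, r_p, r_m, tau

def corrected_time_difference(delta_t_cco, tau):
    """Eq. (2): the CCO recovers the STA-side time difference."""
    return delta_t_cco - 2 * tau
```

Since the delays are treated as hardware constants, `calibrate` only needs to run once per platform; afterwards the CCO applies `corrected_time_difference` to every measurement taken in PTE.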
Fig. 1: Delay in PTE process. The CCO or STA starts timing after the received signal is sent to MAC layer, and ends timing after the signal to transmit is sent to PHY layer. The delays lengthen the time difference of CCO.
### _Preambles in R-PMAC_
In R-PMAC, preambles are used not only for frame synchronization but also for time difference calculation. It is necessary to distinguish three roles: the head of a data frame, the preamble transmitted by the CCO, and the preamble transmitted by the STA. Therefore, R-PMAC requires at least three different types of preambles, which we refer to as DAT (data frame preamble), NET (network preamble), and REQ (request preamble), respectively. The method for generating these preambles has been discussed in [20]. Firstly, a short OFDM symbol is generated following the rule of least peak-to-average power ratio. Then, the preamble is formed by combining all-zeros sequences with the generated short OFDM symbol. The detailed structure of the preambles is displayed in Fig. 3, where the head and tail of the preamble are both marked with an OFDM symbol. The middle part of the preamble is formed by different combinations of all-zeros sequences and OFDM symbols to represent the type of preamble.
### _SID allocation_
In order to properly manage unused SIDs and allot them to new STAs joining the network, CCO needs a suitable data structure to maintain them. There are several issues that need to be addressed when constructing the SID allocation mechanism.
* Some STAs might fail to join the network or might exit the network after a while. As a result, the set of used SIDs develops gaps, and the freed SIDs must be re-assigned to different STAs. Therefore, the CCO cannot simply allocate SIDs incrementally.
* Storing the numerous SIDs in an improper data structure can easily exhaust the CCO's memory. Thus, it is essential that the data structure be efficient in terms of storage.
* Given the large number of STAs, it is imperative that the CCO swiftly queries and updates the data structure for any unused SIDs.
Considering the second issue, a bitmap-style data structure is inappropriate. The third issue indicates that simple data structures like arrays and linked lists are inadequate for searching and updating SIDs because of their high time complexity. Here, we model our SID allocation mechanism after the operating system's memory allocation algorithm and implement it using a modified linked list.
In the modified linked list, each node stores a segment of contiguous SIDs represented by a minimum and a maximum SID. Fig. 4 shows an example of storing some discontinuous SIDs in such a linked list.
The modified linked list used for SID allocation in R-PMAC has a complexity of \(O(1)\) for getting an idle SID or updating the data structure. The complexity of adding one SID is \(O(M)\), where \(M\) is the number of elements in the linked list. If all STAs successfully join the network, the idle SIDs are continuous and can be represented by a single node, saving space. In the worst case, every two idle SIDs are separated by one used SID, resulting in the largest possible size of the linked list, which is \(O(N)\). Overall, this data structure has better performance and is more suitable for this scenario than bitmap, ordinary arrays or linked lists.
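The following is a minimal sketch of such a range-based free list (class and method names are ours, not from the paper): each node holds a contiguous [lo, hi] range of idle SIDs, allocation takes from the head, and returning a SID walks the list to merge it back into a range. A Python list stands in for the linked list purely for brevity.

```python
class SidPool:
    """Free list of idle SIDs kept as sorted, disjoint [lo, hi] ranges."""

    def __init__(self, lo, hi):
        self.ranges = [[lo, hi]]          # initially one contiguous block

    def allocate(self):
        """Hand out the smallest idle SID (O(1) with a true linked list)."""
        if not self.ranges:
            raise RuntimeError("no idle SID")
        lo, hi = self.ranges[0]
        if lo == hi:
            self.ranges.pop(0)
        else:
            self.ranges[0][0] = lo + 1
        return lo

    def release(self, sid):
        """Return a SID to the pool, merging adjacent ranges; O(M)."""
        for i, (lo, hi) in enumerate(self.ranges):
            if sid == hi + 1:
                self.ranges[i][1] = sid
                # merge with the following range if they now touch
                if i + 1 < len(self.ranges) and self.ranges[i + 1][0] == sid + 1:
                    self.ranges[i][1] = self.ranges[i + 1][1]
                    self.ranges.pop(i + 1)
                return
            if sid == lo - 1:
                self.ranges[i][0] = sid
                return
            if sid < lo:
                self.ranges.insert(i, [sid, sid])
                return
        self.ranges.append([sid, sid])
```

When all STAs join successfully, the pool collapses back to a single range, matching the best-case storage described above.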
## IV Networking Process of R-PMAC
PTE, T-Query, and Net-Config are the fundamental phases of a preamble-based MAC system [20]. In R-PMAC, these phases are organized into a networking cycle (NC) that occurs periodically. The networking process of a NC, e.g. the detailed
Fig. 4: An example of linked list generation. The contiguous SIDs from 0x01 to 0x0A are represented by a node while 0x0E, the discrete SID is stored in another node.
Fig. 3: The detailed structure of preambles. Every preamble contains five symbols of the same length. The OFDM symbols have a fixed waveform, while the all-zero sequences are blank intervals.
Fig. 2: Delay calibration mechanism of R-PMAC. In the process of delay calibration, the STA can be preset to respond to CCO without backoff, then the CCO can get the correction factor \(\tau\) based on measured time difference.
scheduling of data frame transmission, needs to be carefully designed to address the following two key issues:
* In an IIoT system with a large number of STAs, the networking process may take substantial time because of inefficient scheduling of data frame transmission. This can slow down networking and make the PLC network sluggish. Therefore, the MAC layer protocol should establish connections between the CCO and STAs using as few data frames as possible.
* A security-conscious networking procedure is necessary in some scenarios to ascertain that only STAs with legal MAC addresses are allowed access. Accordingly, R-PMAC needs an authentication procedure, which means that all MAC addresses need to be forwarded to the CCO through multi-hop links.
The networking procedure of the tree-like PLC network can be subdivided into single-layer networking and multi-layer networking. Specifically, single-layer networking describes the process of STAs connecting with the CCO or a PCO over one hop, while multi-layer networking defines how the CCO communicates with PCOs through multiple hops to expand the network. To resolve the aforementioned issues, we develop the single-layer and multi-layer networking procedures for R-PMAC.
### _Single-layer Networking of R-PMAC_
#### Iv-A1 PTE stage
The implementation of the PTE stage is relatively concise due to the nature of preamble detection. Since a preamble carries no data and has limited duration, the CCO and STAs cannot detect it until its transmission is finished. Therefore, complex carrier sensing is unnecessary in PTE. Once the CCO broadcasts the NET preamble, STAs wait for a random number of time slots and send REQ preambles. The random number ranges from 1 to a preset maximum number of time slots, denoted \(N_{max}\). The duration of time slots in PTE is very short (approximately 500\(\mu\)s), so \(N_{max}\) has little impact on the networking time. A maximum of 256 slots has been considered sufficient in most situations.
#### Iv-A2 T-Query stage
The data frames in T-Query are used to transmit time differences and MAC addresses. At the beginning of T-Query, the CCO broadcasts a data frame called the Time Difference Frame (TDF), which contains the time differences recorded in PTE. Each STA then replies to the CCO with a data frame called the MAC Address Frame (MAF), which contains its MAC address.
Using a TDMA approach to arrange MAF transmission is a viable option since it is simple to implement and can significantly reduce transmission time. The STAs iterate through the time differences in the TDF. As soon as a STA detects a time difference in the TDF consistent with the one it has recorded, it obtains the index of that time difference, which we label as \(i\). The CCO begins waiting for the MAFs from the STAs after sending the TDF, and each STA transmits its MAF in the \(i\)-th time slot. Data frames occupy lengthy time windows, so idle time windows may significantly elongate the networking process; with the scheduling described above, there are no idle slots in the schedule.
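A sketch of this slot selection on the STA side (function names and the exact-match comparison are illustrative; whether the TDF carries raw time differences or PTE slot indexes, as in Section VI, the lookup is the same):

```python
def maf_slot(tdf_time_diffs, my_time_diff):
    """Return the TDMA slot index in which this STA should send its MAF.

    tdf_time_diffs: list of time differences broadcast by the CCO in the TDF.
    my_time_diff:   the time difference this STA recorded during PTE.
    Returns None if the STA's value is absent (it gives up for this NC).
    """
    for i, diff in enumerate(tdf_time_diffs):
        if diff == my_time_diff:
            return i
    return None

# Example: the STA whose value sits at index 1 transmits in the second slot.
# maf_slot([1200, 850, 430], 850) -> 1
```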
#### Iv-A3 Net-Config stage
The Net-Config stage uses data frames to transmit MAC addresses, SIDs, and acknowledgement signals indicating that a STA has successfully joined the network. Net-Config works similarly to T-Query. At the start of Net-Config, the CCO broadcasts a data frame called the Short ID Frame (SDF), containing the MAC addresses of the STAs and the SIDs assigned to them. The STAs decode the SDF, search for their SIDs, and update their network status. Finally, the STAs that successfully join the network send Acknowledge Frames (ACKs) to the CCO to inform it that they are available.
Using TDMA to manage the transmission of ACKs is feasible as well. The STA may look up its MAC address's index in the SDF and then transmit its ACK in the corresponding time slot.
To demonstrate the R-PMAC single-layer networking method, we consider an example where a CCO connects with four STAs in two NCs. The process is depicted in Fig. 5 and consists of three steps. First, the CCO in NC1 sends a NET preamble to signal the start of networking. The STAs then transmit REQs in random slots. STA1 and STA2 transmit REQs in different slots and are successfully heard by the CCO. The time difference can be represented by the number of time slots between NET and REQ. STA3 and STA4, however, send REQs in the same slot, which causes a collision. Consequently, the CCO does not detect a legal preamble and disregards the preambles from STA3 and STA4. Second, in T-Query, the CCO transmits a TDF containing the time differences of STA1 and STA2 (\(\Delta T_{1}\) and \(\Delta T_{2}\)). As the time difference of STA1 in the TDF is greater than that of STA2, STA1 transmits its MAF in the first time slot while STA2 transmits in the second. STA3 and STA4 give up joining the network in this NC because they do not find their time differences in the TDF. Third, in Net-Config, STA1 and STA2 receive their SIDs from the SDF, update their online status, and send ACKs to the CCO. STA1 and STA2 have therefore successfully connected to the CCO in NC1. In NC2, the CCO sends another NET preamble in PTE while STA3 and STA4 send their REQs in different slots. Finally, STA3 and STA4 join the network in the same way as in NC1.
### _Multi-layer Networking of R-PMAC_
Due to the limited communication distance of PLC, the CCO needs some STAs to serve as PCOs and establish multi-hop links with farther STAs. Meanwhile, because of the compulsory whitelist authorization, only the CCO holds a whitelist, so PCOs cannot directly allow STAs to join the network. Instead, they need to promptly send all STA information to the CCO. Hence, the interaction between the CCO and PCOs in multi-layer networking should be appropriately designed.
#### Iv-B1 Behavior of PCOs
We further divide PCOs into two types, the Direct PCO (D-PCO) and the Indirect PCO (I-PCO). A D-PCO communicates with STAs directly, while I-PCOs serve as intermediate nodes between the CCO and the D-PCO. Specifically, the D-PCO runs PTE, T-Query, and Net-Config
to connect with STAs. I-PCOs simply form the route between the CCO and the D-PCO and forward data frames. During multi-layer networking, the detailed route is determined and loaded into the data frames by the CCO. The I-PCOs can then forward the data frames according to the route they carry.
#### Iii-B2 Interaction between CCO and D-PCO
The relationship between CCO and D-PCO is that CCO controls D-PCO to execute single-layer networking and find more STAs, while D-PCO feeds back these STAs' information to CCO. The data frames used by CCO and D-PCO are listed in TABLE II.
#### Iii-B3 Detailed process of multi-layer networking
The detailed process of multi-layer networking can be divided into four steps. Firstly, the D-PCO receives the PTE-S frame from the CCO and starts the PTE stage to record some time differences. The D-PCO then fills the PTE-F frame with the number of time differences and sends it to the CCO. Secondly, the CCO transmits a TQuery-S to the D-PCO. After getting the TQuery-S, the D-PCO starts the T-Query stage and obtains the MAC addresses of the STAs that responded to the D-PCO in the PTE stage. Thirdly, the D-PCO conveys the STAs' MAC addresses to the CCO through TQuery-F frames. The CCO checks its whitelist, allocates SIDs to the legal MAC addresses, and delivers the SIDs to the D-PCO through NetConfig-S frames. Since TQuery-F and NetConfig-S have limited length and cannot accommodate all the MAC addresses, this third step may be repeated several times until all the MAC addresses have been checked by the CCO. Fourthly, the D-PCO obtains all the SIDs from the CCO, so it starts the Net-Config stage and collects ACKs from the STAs that successfully join the network. The D-PCO then transmits the SIDs of the newly joined STAs back to the CCO through a Netconfig-F.
Let us consider an example scenario in which the CCO needs to connect with six STAs across two layers, as shown in Fig. 6. In this scenario, the CCO can only directly communicate with STA1 and STA2. STA1 can connect with STA3 and STA4, while STA4 can connect with STA5 and STA6. In NC1, STA1 and STA2 connect to the CCO. In NC2, the CCO sends a PTE-S to STA1 to instruct it to start the PTE stage. STA1 records the time differences of STA3 and STA4 and sends a PTE-F to the CCO with the number of time differences (N1=2). The CCO then sends a TQuery-S to STA1, instructing STA1 to initiate a T-Query stage and obtain the MAC addresses of STA3 and STA4. STA1 then forwards the MAC addresses to the CCO through a TQuery-F. The CCO checks that these MAC addresses (MAC1 and MAC2) are in the whitelist and assigns SIDs to them. It then sends a Netconfig-S to STA1. With the assigned SIDs, STA1 starts Net-Config and distributes the SIDs to STA3 and STA4. Finally, STA1 obtains ACKs from STA3 and STA4 and sends a Netconfig-F to the CCO, which updates the states of STA3 and STA4 to online. In NC3, STA1 serves as an I-PCO to connect with STA4, which serves as a D-PCO to connect with STA5 and STA6. Following the same procedure as above, the CCO is able to admit STA5 and STA6 to the network via STA4.
### _Mathematical analysis of different MAC mechanisms_
As the number of nodes in IIoT grows, it is imperative to implement a MAC mechanism that spends as little time as possible on networking. In this section, we analyze the performance of R-PMAC by comparing it with existing MAC mechanisms, including P-MAC and the MAC layer of IEEE 1901.1.
#### Iii-C1 Basic idea of analysis
We analyze the performance of these MAC mechanisms by considering two networking processes: random access and communication. The number of nodes, denoted by \(N\), and the number of slots, denoted by \(M\), are used to calculate the mathematical expectation of the number of slots required for successful random access, represented by the random variable \(X\). This calculation allows us to estimate the networking time expectation for each MAC mechanism.
Fig. 5: Single-layer networking of R-PMAC
#### Iv-C2 Analysis of PTE stage
The PTE stage is the random access stage of P-MAC and R-PMAC, where each node chooses one slot to send a preamble. Collisions can occur if multiple nodes choose the same slot. The successful transmission occurs only if the slot chosen by a node is not used by others.
To estimate the expected value of the random variable \(X_{PTE}\) for PTE, we simulated the system with \(N=20,100,200\) and \(M/N\) ranging from 0.8 to 4. The simulation results in Fig. 7 show that approximately four PTE slots per node are required to access the CCO. Therefore, the expectation of \(X\) for PTE can be approximated by Eq. (3).
\[E[X_{PTE}]=4N \tag{3}\]
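A sketch of this kind of simulation is given below under one plausible retry model (our assumption; the paper does not spell out the retry policy): in each cycle the remaining STAs each pick one of \(M\) PTE slots uniformly at random, STAs that chose a unique slot are heard and drop out, and the total number of PTE slots consumed is accumulated until all \(N\) STAs have been heard.

```python
import random
from collections import Counter

def simulate_pte(n_stas, m_slots, trials=200, seed=0):
    """Monte Carlo estimate of E[X_PTE] / N under the retry model above."""
    rng = random.Random(seed)
    total_slots = 0
    for _ in range(trials):
        remaining = n_stas
        slots_used = 0
        while remaining > 0:
            slots_used += m_slots
            picks = Counter(rng.randrange(m_slots) for _ in range(remaining))
            heard = sum(1 for c in picks.values() if c == 1)
            remaining -= heard
        total_slots += slots_used
    return total_slots / trials / n_stas

# e.g. simulate_pte(100, 200) estimates the average number of PTE slots per STA.
```

Sweeping `m_slots / n_stas` over the same 0.8-4 range produces curves comparable in spirit to Fig. 7; the exact values depend on the assumed retry model.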
#### Iv-C3 Analysis of CSMA stage
The CSMA is the random access stage of the MAC layer of IEEE1901.1. To simplify the analysis, we assume that only one STA will be successfully heard by the CCO if multiple STAs send data frames in one CSMA slot. The CSMA stage can then be modelled as a Poisson process. We set the probability of one STA sending a data frame in one CSMA slot to be \(p\). Thus, the expectation of \(X_{CSMA}\) is given by (4).
\[E[X_{CSMA}]=\frac{1}{p}N \tag{4}\]
#### Iv-C4 Comparison of networking time
We set the time lengths of the PTE slot and the CSMA slot to be \(T_{PTE}\) and \(T_{CSMA}\) respectively, the time length of other data frames to be \(T_{data}\), and \(D(\cdot)\) to be the duration of the networking process. The expected networking time with the three mechanisms is then given by (5). The coefficient \(1/p\) is close to 4. \(T_{PTE}\) is just the length of a preamble, while \(T_{CSMA}\) is usually dozens of times longer than \(T_{PTE}\). We can therefore conclude that a preamble-based MAC mechanism is faster than the CSMA mechanism used by IEEE1901.1. Compared with P-MAC, R-PMAC replaces polling with TDMA, which decreases the number of data frames sent by the CCO and saves 50 percent of the networking time. In addition, R-PMAC decreases the number of data frames exchanged between the CCO and D-PCO, which also saves time in multi-layer networking. The advantage of R-PMAC over P-MAC is therefore more significant in multi-layer networking.
Fig. 6: Multi-layer networking of R-PMAC
\[E[D(\text{IEEE1901.1})] =\frac{1}{p}NT_{CSMA}+4NT_{data} \tag{5}\] \[E[D(\text{P-MAC})] =4NT_{PTE}+4NT_{data}\] \[E[D(\text{R-PMAC})] =4NT_{PTE}+2NT_{data}\]
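Plugging nominal slot lengths into (5) gives a rough sense of the gap. The values below are our illustrative choices, loosely based on Section VIII: a PTE slot of roughly 0.5 ms, data-frame slots of 20 ms, a CSMA slot on the order of a data-frame slot, and \(1/p \approx 4\).

```python
def expected_networking_time(n, t_pte=0.5, t_csma=20.0, t_data=20.0, inv_p=4.0):
    """Expected single-layer networking time (ms) per Eq. (5)."""
    return {
        "IEEE1901.1": inv_p * n * t_csma + 4 * n * t_data,
        "P-MAC":      4 * n * t_pte + 4 * n * t_data,
        "R-PMAC":     4 * n * t_pte + 2 * n * t_data,
    }

# For n = 100 STAs this yields roughly 16 s, 8.2 s, and 4.2 s respectively,
# i.e. R-PMAC halves the dominant data-frame term relative to P-MAC.
```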
## V Robustness mechanisms in R-PMAC
Unstable communication links and data frame collisions can occur during the networking process of a PLC network. The consequent loss of critical data frames will interrupt the normal networking process. Ignoring these exceptions results in a lack of robustness and causes malfunctions in networking. Therefore, ensuring the robustness of the networking system is critical for our design. In this section, we examine the causes of data frame loss and propose solutions to address them.
### _The consequences of data frame loss in PLC network_
There are two primary causes of data frame loss in PLC networks: hardware clock instability and time-varying channel quality.
#### V-A1 Unstable hardware clock
As previously mentioned, the TDMA method replaces the polling method to reduce data frame transmission time and save networking resources. In TDMA, time slots are allocated to the STAs, which begin timing at the same time and send data frames at their assigned time slots. The TDMA method requires synchronized clocks for the STAs; otherwise, data frames from different STAs may overlap, resulting in data frame loss. Nevertheless, nodes in the PLC network typically have low-cost hardware with unstable crystal oscillators that fluctuate with time-varying temperature, preventing complete synchronization of the node clock. Due to this hardware flaw, the collision of data frames delivered by multiple STAs cannot be entirely eliminated, leading to data frame loss.
#### V-A2 Time-varying channel quality
In addition to PLC nodes, a variety of appliances are also connected to the power line and generate severe current noise, which worsens the channel quality. In this situation, a route that has been established may become disconnected after a while. Data frames will be lost if they are transmitted through the disconnected route.
### _The impact of data frame loss_
Data frame loss can have a negative impact on the preamble-based networking process. Loss of critical data frames can result in CCOs and STAs receiving incorrect information and taking incorrect actions, resulting in abnormal behavior. We will compare the consequences of data frame loss in the T-Query stage, Net-Config stage, and multi-layer networking.
#### V-B1 T-Query stage
In the T-Query stage, STAs send MAFs to the CCO using TDMA, so collisions and loss of MAFs due to unstable clocks may occur. The TDF may also be lost due to poor channel quality. Loss of MAFs and TDFs does not convey wrong information, but it reduces the number of MAFs received by the CCO, so the CCO has to start more NCs to connect all the STAs.
#### V-B2 Net-Config stage
Similar to the T-Query stage, STAs send ACKs to the CCO via TDMA, which can result in lost ACKs. Inadequate channel quality can also lead to SDF loss. Loss of ACKs and SDFs can result in an exceptional situation where the STAs and CCO inconsistently update the STAs' states. After receiving the SDF from the CCO, a STA will change its status to online, record the SID, and send an ACK to the CCO. If the CCO does not receive the STA's ACK, it will assume the STA is inaccessible, abandon attempts to establish a connection with the STA so that the SID can be recycled. In subsequent NCs, the same SID will be assigned to different STAs, resulting in two STAs with identical SIDs coexisting. They will simultaneously respond to the CCO, leading to a collision. Therefore, it is crucial to prevent Net-Config data frame loss.
#### V-B3 Multi-layer networking
In multi-layer networking, data frame loss in T-Query and Net-Config still happens when the D-PCO communicates with STAs. Besides, the CCO interacts with the D-PCO through the data frames listed in TABLE II, which contain important information such as the MAC addresses of STAs and the SIDs of online STAs. Once one of these data frames is lost, the CCO has to terminate the NC, and all the STAs fail to join the network in this NC. The CCO then has to start more NCs to connect all the STAs, which wastes a lot of time.
Fig. 7: Ratio of \(X_{PTE}\) and \(N\) versus ratio of \(M\) and \(N\). The simulation results show that 4 is the integer closest to the values of \(X_{PTE}/N\) in common cases.
### _Mechanisms increasing robustness_
The issues of collision and poor channel quality need to be addressed in order to increase the robustness of PLC networks. While the probability of collision can be reduced by increasing the time slot length, excessively long time slots may slow down networking. Similarly, increasing the transmission power and improving the signal-to-noise ratio can help reduce data frame loss, but this approach is not energy-efficient, which is contrary to the intention of IIoT. Hence, PLC networks require more sophisticated mechanisms to handle data frame loss and increase network robustness. In this paper, we propose a collision handling mechanism and a retransmission mechanism, which enable the network to function normally in the event of packet loss by adding a small number of extra data frames or attaching a small amount of additional information to existing data frames.
### _Collision handling mechanism_
Our analysis has shown that only collisions in Net-Config lead to logic errors, while collisions in T-Query merely decrease the number of online STAs in one NC. Hence, we only deal with collisions in Net-Config.
#### V-D1 Basic idea of collision handling mechanism
The basic idea of our collision handling mechanism is to combine the TDMA method with polling so that data frame collisions are avoided while the high time utilization of TDMA is retained. Specifically, STAs first reply to the CCO using TDMA, and the CCO can detect data frame collisions. For STAs that encounter a collision, the CCO retries communication with them by polling.
#### V-D2 Detection of collision
In an ideal situation, the number of MAFs the CCO receives in T-Query and the number of ACKs it receives in Net-Config should be equal. In practice, however, the number of STAs heard in Net-Config may be reduced due to the loss of ACKs. Therefore, the decrease in the number of STAs heard in Net-Config compared to T-Query can be used as an indicator of collision occurrence.
#### V-D3 Process of collision handling
The Net-Config stage is followed by the Polling stage. The CCO records the MAC addresses of the STAs that were lost in Net-Config and subsequently polls these STAs to establish communication. During the Polling stage, the CCO may get ACKs from the recorded STAs, indicating that these STAs have obtained their SIDs and are now online. Their MAC addresses will then be added to the CCO's list of online STAs. As for STAs that do not respond to the CCO during the Polling stage, the CCO regards the links to them as too unstable to exchange data. After the Polling stage, the states of the STAs recorded by the CCO match those recorded by the STAs.
We provide an example of the collision handling mechanism in Fig. 8. The Polling stage follows Net-Config, in which the ACKs of STA2 and STA3 collide and STA4 does not receive the SDF. The CCO detects a collision when it only gets one ACK, from STA1, and then starts the Polling stage, in which the CCO sends an SDF to STA2, STA3, and STA4 one by one and waits for their responses. Since STA2 and STA3 have obtained their SIDs in Net-Config, they retransmit ACKs in the Polling stage and are heard by the CCO. As for STA4, it failed to receive its SID due to the lost SDF, so it does not reply to the CCO. The CCO concludes that STA4 cannot come online in this NC.
### _Retransmission mechanism_
#### V-E1 Basic idea of retransmission mechanism
In multi-layer networking, the probability of data frame loss increases with the number of hops between the CCO and the D-PCO. Without a handling mechanism, these data frame losses may interrupt the NC. To prevent this, we add extra information to the data frames exchanged between the CCO and the D-PCO, allowing them to backtrack to their previous state and recreate the lost data frames.
#### V-E2 Detection of data frame loss
As CCO knows the route from itself to D-PCO and the duration of a data frame, it can use the hop number to calculate the maximum waiting time. After transmitting a data frame to D-PCO, CCO starts a timer and awaits response. If the timer exceeds the maximum waiting time, the data frame is considered lost and needs to be retransmitted by D-PCO.
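A minimal sketch of this timeout (the exact formula is not given in the paper; the round-trip factor and processing margin below are our assumptions):

```python
def max_waiting_time(hops, frame_duration_us, margin_us=5_000):
    """Upper bound on the time the CCO waits for a D-PCO's reply.

    Assumes the request and the reply each traverse `hops` forwarding steps,
    each step costing one data-frame duration, plus a fixed processing margin.
    """
    return 2 * hops * frame_duration_us + margin_us

# e.g. a 3-hop route with 10,488 us frames: max_waiting_time(3, 10_488) ~= 68 ms
```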
#### V-E3 Process of retransmission
When D-PCO detects a retransmitted data frame from CCO, it uses the saved information to go back to the previous transmission state and reproduce the data frame sent last time. Moreover, if the timeout happens several times, the connection between the CCO and the D-PCO will be deemed unreliable, and the CCO will cease communicating with this D-PCO and control another one for subsequent networking.
An example is provided to illustrate the process of retransmission mechanism in Fig. 9. As the number of STAs is large, D-PCO has to send MAC addresses in two TQuery-F. After receiving the first TQuery-F, CCO saves \(N_{1}\), the number of STAs in the first TQuery-F. CCO then requests the second part of MAC addresses from D-PCO, but the second TQuery-F is lost during forwarding. CCO detects the data frame loss through its timeout timer and loads \(N_{1}\) into a new Netconfig-S, which is sent to D-PCO. D-PCO checks the Netconfig-S and finds that \(N_{1}\) is less than \(N\), the number of STAs it sent to CCO, indicating that the last TQuery-F was not received by CCO. Hence, D-PCO loads the MAC addresses of the STAs ranging from \(N_{1}\) to \(N\) into a new TQuery-F and sends it to CCO. CCO receives the new TQuery-F successfully and gets the second part of MAC addresses. By this means, the multi-layer networking continues despite data frame loss.
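The D-PCO side of this backtracking can be sketched as follows (a simplified illustration; names and the batch size are ours): the D-PCO keeps the full list of collected MAC addresses and its own counter \(N\), and whenever a Netconfig-S arrives carrying a CCO counter \(N_{1}<N\), it rebuilds a TQuery-F holding the addresses from \(N_{1}\) onward.

```python
def handle_netconfig_s(cco_counter, collected_macs, batch_size=8):
    """D-PCO reaction to a Netconfig-S carrying the CCO's counter.

    collected_macs: all STA MAC addresses gathered by the D-PCO (length N).
    Returns the MAC addresses to resend in a new TQuery-F, or an empty list
    if the CCO has already received everything.
    """
    n = len(collected_macs)
    if cco_counter < n:
        # The last TQuery-F was lost: resend from where the CCO stopped.
        return collected_macs[cco_counter:cco_counter + batch_size]
    return []  # nothing missing; proceed with Net-Config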
## VI The format of data frames
In addition to the robust networking process of R-PMAC, it is important to define the detailed format of the data frames. In this section, we design the formats of the various types of data frames, including their length and content.
### _The length of data frames_
Each OFDM symbol contains the same number of bits, and a fixed-length data frame can simplify hardware design. Therefore, we can set the length of all data frames to a reasonable
fixed value. In the networking process, the transferred data types mainly include time difference, MAC address, SID, and other brief data. Additionally, after the network is created, the majority of sent data will usually consist of short-length sensor data and control signals. Thus, we fix the length of the data frame to 100 bytes, which can contain dozens of various types of short data and satisfy the needs of both the networking and operational phases of the PLC network.
### _Basic framework of data frames_
We clarify the structure of the data frame in Fig. 10, which includes the head, valid data length, instruction type, route, payload, and tail. The head symbolizes the start of the data frame, and the valid data length indicates the length of the effective data in the frame. The instruction type is used to distinguish functions of data frames, and the route indicates the transmission path of the data frame. The payload is the actual data being transmitted, and the tail can be used for error parity to detect any errors in the data frame.
We use a one-byte value, 0x68, to represent the head. The detailed structure of the routing information is shown in Fig. 11, in which the first byte represents the route length and the following bytes represent the SIDs of the hops. We set the maximum number of nodes in the network to 255, so the short addresses take up 1 byte. In addition, since the maximum effective data length of a data frame is 100 bytes, the effective data length can be expressed with 1 byte.
One byte can be used to represent the instruction types,
Fig. 11: The structure of route in data frame
Fig. 8: An example of collision handling mechanism
Fig. 10: Basic framework of data frames
Fig. 9: An example of retransmission mechanism
all of which are listed in TABLE III. Besides, this length is sufficient for adding more instruction types corresponding to other network functions such as data collection, channel quality detection, and node reboot.
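A sketch of how such a frame could be assembled (field sizes follow Figs. 10-11; the single parity byte for the tail, the zero padding, and the example instruction code are our simplifying assumptions):

```python
def build_frame(instruction, route_sids, payload, frame_len=100):
    """Pack a fixed-length R-PMAC-style data frame as bytes."""
    head = bytes([0x68])
    route = bytes([len(route_sids)]) + bytes(route_sids)   # length + hop SIDs
    body = bytes([instruction]) + route + bytes(payload)
    valid_len = bytes([len(body)])
    tail = bytes([sum(body) & 0xFF])                       # simple parity byte
    frame = head + valid_len + body + tail
    if len(frame) > frame_len:
        raise ValueError("payload too large for a 100-byte frame")
    return frame + bytes(frame_len - len(frame))           # pad to fixed length

# Example (the instruction code 0x01 is hypothetical): a frame routed through
# hops 0x01 -> 0x05 carrying a three-byte payload.
# build_frame(instruction=0x01, route_sids=[0x01, 0x05], payload=[2, 0x0A, 0x21])
```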
### _Payload formats of data frames_
Since the basic framework of the data frame has been determined, the payload format of data frames and the specific information can be clarified.
#### Vi-C1 Payload of TDF
The payload of the TDF is shown in Fig. 12. It contains the number of time differences and the PTE slot indexes, each of which takes up 1 byte. In practice, a time difference is usually tens of thousands of microseconds and would need several bytes to store, which wastes payload space. Thus, we replace the time difference with the slot index, which is derived simply by dividing the time difference by the length of a PTE time slot. Since the maximum number of slots in PTE has been set to 256, the index can be stored in only 1 byte.
#### Vi-C2 Payload of MAF
The payload of MAF is shown in Fig.13. It contains the MAC address and old SID(OSID) of the specific STA.
#### Vi-C3 Payload of SDF
The payload of SDF is shown in Fig.14. It contains the number of MAC addresses and SIDs, and pairs of MAC addresses and SID. Besides, due to the limited capacity of one SDF, multiple SDFs may be sent successively and the End flag helps STAs to determine whether this SDF is the last one.
#### Vi-C4 Payload of ACK
The payload of ACK is shown in Fig.15. It just contains its MAC address and updated SID.
#### Vi-C5 Payload of PTE-S, TQuery-S and PTE-F
The payload of PTE-S, TQuery-S and PTE-F is simple. PTE-S and TQuery-S do not have payload because they just instruct the STA to start PTE and T-Query. The payload of PTE-F is just the number of STAs heard by D-PCO.
#### Vi-C6 Payload of TQuery-F
The payload of TQuery-F is shown in Fig.16, which is similar to SDF. The distinction is that TQuery-F contains the OSIDs while SDF carries updated SID newly allocated by CCO. When the D-PCO has sent all the MAC addresses of STAs, it sets the End flag to 1 to inform CCO.
#### Vi-C7 Payload of Netconfig-S
The payload of Netconfig-S is shown in Fig. 17. Its structure is similar to that of the SDF. In particular, the CCO counter records the number of STAs whose MAC addresses have been received by the CCO. The CCO counter is used in the retransmission mechanism described in Section V-E: if it is less than the counter of the D-PCO, the D-PCO knows that the last TQuery-F was lost.
#### Vi-C8 Payload of Netconfig-F
The payload of Netconfig-F is shown in Fig.18. It contains the SIDs of STAs joining in the network successfully.
## VII Design of Hardware Platform
To validate the research presented in the previous chapters, we require a hardware platform that can run the networking
Fig. 16: The payload of TQuery-F
Fig. 14: The payload of SDF
Fig. 12: The payload of TDF
Fig. 13: The payload of MAF
Fig. 15: The payload of ACK
process of R-PMAC, P-MAC, and IEEE1901.1 to compare their performance. In this chapter, we illustrate our design of the programmable hardware platform.
### _Architecture of hardware platform_
When PLC networks work, the transmitting unit of hardware generates baseband signal based on the contents of the data packet and modulates it onto the power line. The receiving unit demodulates the signal from the power line and then does digital signal processing to retrieve the data frame. In addition, the networking operation must be controlled by a logical unit.
Analog front ends (AFE) are necessary to modulate and demodulate PLC signals and an embedded processor is needed to control the networking procedure and interact with the user. Coding and decoding baseband signals involve complicated mathematical calculations. Although the embedded processor may also operate as a calculating unit, simultaneously performing signal processing and networking process control will place a significant workload on it. As a result, we include FPGA, which can be configured as a digital front-end (DFE) for fast digital signal processing, serving as an interface between AFE and embedded processor. As for the analog signal, it is transferred through the power amplifier (PA) and coupler to the power line.
The FPGA and embedded processor are programmable, so we can implement various mechanisms on the same hardware platform. In addition, the circuit implemented in the FPGA can later be hardened into an application-specific integrated circuit (ASIC) during manufacturing to save costs.
### _Selection of hardware device_
To keep the cost low, we choose hardware devices that satisfy the basic requirement. We choose AD9865 [21] as the AFE. For embedded processor, we choose STM32F103RET6 [22], which is based on Cortex-M architecture and widely used by industry. For FPGA, we use Altera's Cyclone-IV chip [23].
### _Realization of hardware platform_
#### Vii-C1 The structure of hardware platform
The detailed structure of our hardware platform and its application is shown in Fig. 19. Dozens or hundreds of nodes may exist on the same power line. Within a node, the embedded processor controls the FPGA and exchanges data with it. RS-485 can be used to connect a PC to the hardware platform.
#### Vii-C2 The digital front end in FPGA
The DFE in the FPGA is situated at the PHY layer. The digital signal processing performed by the front end includes convolutional coding, Reed-Solomon (RS) coding, channel interleaving, and ROBO interleaving.
Finally, the real hardware platform designed by us is shown in Fig.20. We partitioned the hardware platform into three boards, deploying AFE and FPGA, embedded processor, and power amplifier on each of them, respectively.
## VIII Experiment
In this chapter, we present our experiment and its results. We set the experiment parameters to match the physical environment and conduct measurements in a practical power line system to evaluate the performance of R-PMAC.
### _Parameters of PHY Layer_
We configure the parameters of the hardware platform according to those of IEEE1901.1. In this way, the results of the hardware experiment are closer to real situations and validate the superiority of R-PMAC over the existing P-MAC and IEEE1901.1.
#### Vi-A1 Parameters of IEEE1901.1
For the networking process of IEEE1901.1, data frames including central beacon, proxy beacon, association request message (MMeAssocReq) and association indication message (MMeAssocInd) are mainly involved [10]. Central beacon and proxy beacon are 127 bytes
Fig. 19: The structure of hardware platform. The platform can be programmed with RS485 bus. The AFE does A/D and D/A converting. The data frames flow between the FPGA and the embedded processor, and the state of FPGA is configured by the embedded processor through SPI bus.
Fig. 20: The real hardware platform.
Fig. 18: The payload of Netconfig-F
while MMeAssocReq and MMeAssocInd are 142 bytes. The beacons and messages are padded to 136 bytes and 272 bytes respectively and sent to the PHY layer. Turbo coding, scrambling, channel interleaving, and ROBO (Robust OFDM) interleaving expand the bit lengths of the beacons and messages to 21,760 and 43,520. The time length \(T\) can be calculated by (6), in which \(N_{s}\) is the number of OFDM symbols and \(N_{c}\) is the number of feasible subcarriers. In actual industrial conditions, the available bandwidth is about 3 MHz and \(N_{c}\) is 94. Finally, the time lengths of the beacons and messages are 9,102\(\mu s\) and 17,488\(\mu s\), respectively.
\[T=40.96\times(13+N_{s})+18.32\times 2+(N_{s}-2)\times 10.8\ (\mu s),\quad N_{s}=\lceil N_{b}/N_{c}\rceil \tag{6}\]
#### V-A2 Parameters of hardware platform
Regarding the hardware platform parameters, we have previously established in Sections VI and VII that the length of the data frame is 100 bytes, and that the FPGA performs convolutional coding, RS coding, channel interleaving, and ROBO interleaving. The convolutional and RS coding rates are set to 1/2 and 239/255, respectively, which increases the data frame to 8,768 bits. We use 1024-point OFDM symbols for data frames and 128-point OFDM symbols for the preamble. The bandwidth and sampling frequency are set to 1.25MHz and 40MHz, respectively, resulting in the preamble and data frame time lengths becoming \(51.2\mu s\) and \(819.2\mu s\), respectively. For both P-MAC and R-PMAC, the preamble consists of 5 OFDM symbols, which takes \(512\mu s\). The number of available subcarriers is 720, so a data frame of 8,768 bits takes \(512+8,768/720\times 819.2=10,488\mu s\). We can see that the time length and data length of the data frames used on the hardware platform are close to those in IEEE1901.1.
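The data-frame airtime quoted above can be reproduced directly from these figures (a simple arithmetic check, not part of the protocol):

```python
def data_frame_airtime_us(coded_bits=8_768, subcarriers=720,
                          symbol_us=819.2, preamble_us=512.0):
    """Airtime of one R-PMAC data frame with the Section VIII parameters."""
    return preamble_us + coded_bits / subcarriers * symbol_us

# data_frame_airtime_us() -> ~10,488 us, matching the value used in Table IV.
```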
#### V-A3 Configuration of time slot length
We set the time slot lengths of the different data frames according to their time lengths. The detailed time lengths and time slot lengths of data frames and preambles are listed in TABLE IV. Although the data frames in R-PMAC have a shorter time length (10,488\(\mu s\)) than those in IEEE1901.1 (17,488\(\mu s\)), we set the time slot length in P-MAC and R-PMAC to be the same as that in IEEE1901.1 (20,000\(\mu s\)). Achieving a shorter networking time with identical slot lengths makes the superiority of R-PMAC more convincing.
### _Experimental results_
We measured the networking time of a power line communication (PLC) network in a building with multiple residents and various appliances in use. The network consisted of CCO and several STAs connected to the building's power grid. The CCO was connected to a PC via a USB port for real-time data collection, as shown in Fig.21.
We set the number of STAs to range from 90 to 240 and they can constitute a multi-layer network. Since the channel quality changes over time, the final network topology and the number of layers can vary randomly. In this situation, we recorded the average networking time for different numbers of layers.
The experiment result is shown in Fig.22. Based on the result, we find that:
* The networking time with the three MAC mechanisms increases with the number of STAs and the number of layers.
* The number of layers can have a great influence on the networking time. The growth rate of the networking time when using R-PMAC is significantly smaller than when using P-MAC or IEEE1901.1.
We can conclude that R-PMAC helps the PLC nodes organize a network at a higher speed. Besides, when using R-PMAC, the networking time for a given number of STAs is not greatly affected by the maximum depth of the network.
In real-world scenarios, there always exist some STAs omitted by the CCO due to data frame loss. Therefore, we introduce the robustness mechanisms mentioned previously to minimize the number of STAs that fail to connect. To validate the functionality of the robustness mechanisms, we conducted repeated experiments with and without them while constructing a 6-layer network, and recorded the ratios of lost STAs in the histograms shown in Fig. 23. The results demonstrate that the ratios of lost STAs were closer to zero with the robustness mechanisms, while approximately 10% of STAs were lost without them, highlighting the mechanisms' importance in overcoming data frame loss and connecting more STAs.
## IX Conclusion
In this paper, we concentrate on designing R-PMAC, a preamble-based MAC mechanism that can accelerate the networking of PLC networks in IIoT and is robust to collisions and data frame loss. In detail, R-PMAC replaces polling with TDMA to save half of the time in T-Query and Net-Config, and decreases the number of data frames between the CCO and PCOs to achieve higher time efficiency in multi-layer networking. We propose a
Fig. 21: The CCO connected with PC
collision handling mechanism and a retransmission mechanism to eliminate the influence of data frame loss and collisions. Our mathematical analysis shows that the preamble-based MAC mechanism is faster than the CSMA used by IEEE1901.1, and that R-PMAC is approximately 50% faster than the P-MAC in [20]. Furthermore, we build a programmable hardware platform and implement R-PMAC on it. By running the networking processes of IEEE1901.1, P-MAC, and R-PMAC on the hardware platform, we further validate that R-PMAC has better performance than its counterparts. Based on our research, R-PMAC is a promising candidate for IIoT.
There are still many issues to be studied in the PLC
Fig. 23: The ratio of STAs failing to join in the network
Fig. 22: Average networking time versus number of STAs for a given number of layers. Given the number of STAs, the CCO will eventually construct a network with a certain number of layers, by which we classify the data and plot the average networking time under different numbers of STAs.
based IIoT. In [24], the idea of using multiple frequency bands was proposed to address the issue of frequent variations in the passband frequency of power lines. Following this, the PHY layer and MAC layer of PLC can in the future be designed to support multiple central frequencies, making large-scale networks more flexible. In addition, some intermediate layers can be added to the protocol stack of IIoT to facilitate smoother data exchange between the lower layers and the upper layers. For instance, IPv6 Low Power Wireless Personal Area Network (6LowPAN) [25] is used in wireless IoT to bridge the network layer and the link layer. It is expected that abstraction-layer techniques like 6LowPAN will play an essential role in the future PLC-based IIoT.
|
2303.07838 | Online to Offline Crossover of White Supremacist Propaganda | White supremacist extremist groups are a significant domestic terror threat
in many Western nations. These groups harness the Internet to spread their
ideology via online platforms: blogs, chat rooms, forums, and social media,
which can inspire violence offline. In this work, we study the persistence and
reach of white supremacist propaganda in both online and offline environments.
We also study patterns in narratives that crossover from online to offline
environments, or vice versa. From a geospatial analysis, we find that offline
propaganda is geographically widespread in the United States, with a slight
tendency toward Northeastern states. Propaganda that spreads the farthest and
lasts the longest has a patriotic framing and is short, memorable, and
repeatable. Through text comparison methods, we illustrate that online
propaganda typically leads the appearance of the same propaganda in offline
flyers, banners, and graffiti. We hope that this study sheds light on the
characteristics of persistent white supremacist narratives both online and
offline. | Ahmad Diab, Bolor-Erdene Jagdagdorj, Lynnette Hui Xian Ng, Yu-Ru Lin, Michael Miller Yoder | 2023-03-14T12:24:15Z | http://arxiv.org/abs/2303.07838v2 | # Online to Offline Crossover of White Supremacist Propaganda narratives
###### Abstract.
White supremacist extremist groups are a significant domestic terror threat in many Western nations. These groups harness the Internet to spread their ideology via online platforms: blogs, chat rooms, forums, and social media, which can inspire violence offline. In this work, we study the persistence and reach of white supremacist propaganda in both online and offline environments. We also study patterns in narratives that crossover from online to offline environments, or vice versa. From a geospatial analysis, we find that offline propaganda is geographically widespread in the United States, with a slight tendency toward Northeastern states. Propaganda that spreads the farthest and lasts the longest has a patriotic framing and is short, memorable, and repeatable. Through text comparison methods, we illustrate that online propaganda typically leads the appearance of the same propaganda in offline flyers, banners, and graffiti. We hope that this study sheds light on the characteristics of persistent white supremacist narratives both online and offline.
white supremacist, hate speech, propaganda, geospatial, temporal, narratives
## 1. Introduction
In 2021, top officials from the US Department of Justice and Department of Homeland Security named white supremacist extremism as the most significant domestic terror threat (Sutton, 2016). This ideology has inspired extremists to commit mass murders in Pittsburgh (Sutton, 2016), El Paso (2016), and Buffalo (2016) in recent years. Far-right groups with white supremacist and fascist elements have recently led or planned violent attacks against governments in the United States, Brazil, and Germany. To spread their ideology and build communities of hate, white supremacist groups disseminate propaganda using memorable slogans like "end all immigration" in both online and offline spaces (Sutton, 2016). With this backdrop, it is increasingly important to study the spread of hate by white supremacist groups in the online medium, and how it relates to offline propaganda.
Online hate speech is a frequent topic of attention in the cyber social threat space (Bradshaw, 2016; D'Alessandro, 2016; D'Alessandro, 2016). Some quantitative and computational work has focused specifically on white supremacist ideology. Alatawi et al. (2016) develop methods for detecting white supremacist hate speech, while Eddington (2016) draws network connections between white supremacist hate groups within the United States and United Kingdom in 2016. However, fewer studies examine the connections between offline events and online hate. Lupu et al. (2016) observed that offline trigger events, such as protests, often lead to increases in online hate speech; Hirvonen (2016) studied the affinity between online white supremacist hate speech and offline hostility; and Keum et al. (2016) identified that the constant dissemination of ideology by white supremacist groups online can impact the mental health of the surrounding community. Given these potential dangers, it is important to characterize and study the patterns of messaging that draw people into white supremacism and investigate how this messaging connects to offline violence (D'Alessandro, 2016).
This paper aims to improve our understanding of the correlation between online and offline white supremacist messaging. Using an overlay of offline and online sources, we identify matching white supremacist propaganda and investigate the following research questions:
**RQ1:** What spatial and temporal patterns characterize the spread of offline white supremacist propaganda? Which types of propaganda spread the farthest and last the longest?
Through a spatial and geographic analysis of events reported as containing offline white supremacist propaganda (flyers, graffiti,
banners, etc), we identify when the same propaganda is used across events and analyze the reach of each propaganda phrase.
**RQ2**: What temporal patterns characterize the online-to-offline crossover of white supremacist propaganda?
We directly correlate online and offline white supremacist "quotes", which are phrases representing ideologies and propaganda promoted by individuals and extracted from online posts and offline event descriptions. We do so via text similarity matches, which serve as indicators of the persistence of the quotes in both spaces. Similar methods for identifying the persistence of text across multiple platforms were used in studies of the spread of news (Sundhi et al., 2018) and disinformation (Sundhi et al., 2019). Using information on quote persistence, we characterize the geographical, time-series and narrative distributions of quotes that persist in both online and offline environments.
### Contributions
To our knowledge, this study is the first of its kind to combine the analysis of white supremacist propaganda across both offline and online environments, studying a US-wide reach of white supremacist ideology. Using a combination of geographical, temporal and text analysis, we make the following contributions:
1. We measure the persistence of offline white supremacist propaganda messages through text analysis techniques and quantitatively estimate the spatiotemporal distribution of particular white supremacy slogans.
2. We analyze the prevailing common narrative themes between online and offline white supremacist propaganda.
3. We find preliminary evidence within 2 years of overlapping data that white supremacist propaganda quotes begin online before appearing in offline events, supporting the hypothesis that online activity can translate into offline activity.
## 2. Related Work
### White Supremacist Extremism
Merriam-Webster defines white supremacy as "the belief that the white race is inherently superior to other races and that white people should have control over people of other races"1. Social movements organized under this principle are recognized as white supremacist extremism. Key beliefs of this ideology include essentialist, "natural" race and gender hierarchies with white men at the top (Krishnam et al., 2017). White supremacists believe that white male power is threatened in today's world and that violent action is necessary to protect the white race (Bradley et al., 2017; Krizhevsky et al., 2017). White supremacist extremism has been demonstrated in premeditated violent attacks against people of other races, which provide white supremacists with a shared sense of accomplishment, solidarity and power (Sundhi et al., 2018). Larger scale incidents include the Tree of Life synagogue attack in Pittsburgh, Pennsylvania, where the shooter believed Western Civilization was facing "extinction" and that Jewish refugees were "invaders"; the 2017 "Unite the Right" rally in Charlottesville, Virginia, where an anti-racist protester was killed with a car (Krishnam et al., 2017); anti-Muslim shootings in Christchurch, New Zealand; and the anti-Latinx shooting in El Paso, Texas inspired by the Christchurch event.
Footnote 1: [https://www.merriam-webster.com/dictionary/white%20supremacy](https://www.merriam-webster.com/dictionary/white%20supremacy), accessed 24 January 2023
Though these movements are regarded as fringe, scholars such as sociologist Jessie Daniels (Daniels, 2017) do not view them as isolated from structural white supremacy, a system in which white people control the material resources and major institutions of societies (Bradley et al., 2017; Krizhevsky et al., 2017). These social movements exploit bigotries widely held in societies with structural white supremacy (Krishnam et al., 2017; Sundhi et al., 2019; Sundhi et al., 2019). Ideologies that are implicit in structural white supremacy are explicitly stated in white supremacist social movements. Studying white supremacist communication is thus important not just to understand a fringe social movement, but also as a window into broader societies organized under white supremacy.
Offline white supremacist propaganda is a small piece of the larger communication infrastructure of white supremacist extremism. To better understand white supremacist extremism, we must also look online. White supremacist groups have recognized the potential of the Internet in spreading their message. Don Black, the founder of the white supremacist forum Stormfront, encouraged his followers to use social networking sites to "reach new people and bring them here" (Bradley et al., 2017). Similarly, David Duke, former Ku Klux Klan leader, mentioned in 2007 that the Internet could give "millions access to the truth [...] of racial enlightenment" (Daniels, 2017).
White supremacist groups have made extensive use of social media and online forums for information provision, seeking donations, recruitment and networking (Sundhi et al., 2018). They have also united around political campaigns, discursively creating and connecting their organizations and lingo to the campaigns (Daniels, 2018), and made use of online chat channels to distribute and mobilize extremist movements (Daniels, 2018).
Therefore, it is important to trace the presence of the same white supremacist propaganda in both online and offline environments.
### The Intersection between Online and Offline Speech
The spread of online propaganda and offline activism has been observed to be positively related. Online platforms facilitate the spread of white supremacist ideology through community building and the development of shared realities and emotions. Empirical histories suggest that the sowing of discord among individuals and online communities can create an impetus for violent offline expressions, such as protests and shootings (Sundhi et al., 2019).
The formation of online communities is rarely isolated: online communities signal social connection in the offline world. The literature provides a mixed view of the connection between the online and offline ecologies: some work finds a supporting relationship and some a negative one. The spread of ideologies online can be dismissed as unproductive and can inhibit offline activism (Sundhi et al., 2019). For example, activists online have been observed to distance themselves from offline riots, drawing a contrast between the online and offline domains (Bradley et al., 2017). However, online efforts such as the production of videoclips can cross over to offline action, where these videoclips inspire extremism or teach the ways of extremism (e.g., how to fire a gun) (Daniels, 2017). Others find that both online speech and offline events can reinforce each other in a cycle of radicalization (Bradley et al., 2017; Krizhevsky et al., 2017).
Given the close relationship and the dynamic interplay between the online and offline domains, it is clear that one domain influences the other, and vice versa. As such, it is important to understand the intersection between online and offline domains.
### Computational Propaganda
Computational propaganda is the "use of algorithms, automation and human curation to purposefully distribute misleading information" (Sutton et al., 2017). The use of computational methods increases the ease and rate of disseminating propaganda information, and provides flexibility for both broad efforts and targeted attempts at distributing propaganda materials. The presence of computational propaganda has been observed most prominently in the political sphere, where automated agents have attempted to influence the UK-EU Brexit referendum (Sutton et al., 2017), the US elections (Sutton et al., 2017) and the Brazil elections (Bradley et al., 2017), to name a few instances.
In recent years, detection of computational propaganda has made significant progress. Several shared tasks involving detection and classification of annotated propaganda datasets have been constructed for the development of detection and analysis techniques within the community (Sutton et al., 2017; Sutton et al., 2017). Detection methods involve constructing machine learning models for identification of the use of propaganda techniques (Sutton et al., 2017) and the classification of the types of techniques employed (Bradley et al., 2017) within texts, and the use of network analysis approaches to identify propaganda through the presence of coordinated inauthentic action (Sutton et al., 2017).
Our work focuses on how online white supremacist propaganda of all types also appears in offline propaganda.
## 3. Datasets
In this study, we use two datasets that contain white supremacist propaganda: one offline dataset and one online dataset. We elaborate on the details of these datasets in the following section.
### Offline Dataset
We gathered 43,154 events involving white supremacist propaganda from the ADL H.E.A.T. Map2. Collected by the Anti-Defamation League, an anti-hate non-profit organization, this map aggregates and plots incidents of hate, extremism, antisemitism and terrorism from a variety of sources, including news reports, government documents and victim reports. Commonly, events include propaganda such as flyers and banners, with a description of the incident and metadata including location, date, group, and ideology of the group. The data fields that are extracted from the events are reflected in Table 1. The data considered in this study spans from January 2008, until July 2022.
Footnote 2: [https://www.adl.org/resources/tools-to-track-hate/heat-map](https://www.adl.org/resources/tools-to-track-hate/heat-map)
_Location Annotation._ We annotated each offline white supremacist propaganda phrase, or "quote", with latitude/longitude (lat/lon) coordinates by parsing the provided location information (city, state) through the Nominatim API3. The API takes in a (city, state) pair and returns the (lat/lon) coordinates at which the pair is geographically located. Errors in the output due to misspelled cities (e.g. Mattson, IL instead of Matteson, IL) and ambiguous locations (e.g. national park addresses) were manually corrected.
Footnote 3: [https://nominatim.org/](https://nominatim.org/)
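A minimal sketch of this annotation step, assuming the geopy client for the Nominatim API (the user-agent string, helper name, and manual-fix table below are illustrative, not from the original pipeline):

```python
from geopy.geocoders import Nominatim

# Illustrative manual corrections for misspelled locations (the paper fixed these by hand).
MANUAL_FIXES = {("Mattson", "IL"): ("Matteson", "IL")}

geolocator = Nominatim(user_agent="propaganda-geocoding-example")  # hypothetical user agent

def annotate_lat_lon(city, state):
    """Return (lat, lon) for a (city, state) pair, applying manual fixes first."""
    city, state = MANUAL_FIXES.get((city, state), (city, state))
    result = geolocator.geocode(f"{city}, {state}, USA")
    if result is None:
        return None  # left for manual inspection
    return (result.latitude, result.longitude)

print(annotate_lat_lon("Mattson", "IL"))  # geocodes the corrected "Matteson, IL"
```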
### Online Dataset
The online dataset contains 4,371,453 posts after removing duplicates; it spans October 2001 through November 2019. The data was collected from explicitly white supremacist domains and organizations, assembled from a variety of datasets and data dumps by (Sutton et al., 2019). Table 2 lists the data fields extracted per post from the online dataset. Further details about the online dataset are reflected in Table 3.
This dataset includes forums and websites dedicated to white supremacism, including Stormfront, Iron March, and the Daily Stormer. Tweets from organizations that the Southern Poverty Law Center labels as white supremacist hate groups (Sutton et al., 2017; Sutton et al., 2017) are also included. This dataset filters 4chan /pol/ data, an imageboard known for white supremacy, to posts from users choosing Nazi, Confederate, Fascist, and White Supremacist flags (in the dataset from Papasavva et al. (Papasavva et al., 2019)), as well as posts in "general" threads with fascist and white supremacist topics (in the dataset from Jokubauskaite and Peeters (Jokubauskaite and Peeters, 2019)). We also include smaller datasets manually annotated for white supremacist ideology from Rieger et al. (Rieger et al., 2019), Siegel et al. (Siegel et al., 2019), and Alatawi et al. (Alatawi et al., 2019).
## 4. Methods
### Matching Propaganda Quotes
For this work, we refer to individual propaganda slogans or phrases used by individuals or organizations as "quotes". Throughout this paper, we report these quotes within double quotation marks (" "). For example, an incident where Patriot Front, a white supremacist group, distributed propaganda with the slogan "America is not for sale" and an incident where an unknown person drew the slogan "America is not for sale" in graffiti are incidents that share the same propaganda, though they differ in nature, location and people involved. Mapping out such connections can aid in identifying the spatial and temporal spread of similar white supremacist propaganda. Identifying quotes offline and locating them online helps establish connections between the same propaganda ideology spreading in both domains. We detail our process for matching quotes in this section.
Table 1. Data fields extracted per event for the offline dataset

| Data Field | Description |
| --- | --- |
| quote | white supremacist propaganda text |
| location | city and state of event |
| event | event details (i.e. people/organizations involved) |
| timestamp | time/date of event occurrence |

Table 2. Data fields extracted per post in the online dataset

| Data Field | Description |
| --- | --- |
| quote | white supremacist propaganda text |
| dataset | data source of posts (cf. Table 3) |
| platform | online platform source (cf. Table 3) |
| timestamp | time/date of event occurrence |
We start with the set \(Q\) of all offline quote occurrences, extracted from events in the offline dataset. For each pair of quote occurrences \(q_{1},q_{2}\in Q\), we perform two steps of matching comparison to determine whether they are similar. The first step simply compares if the texts of the two quotes (lowercased and tokenized) are exactly the same. Should the first step return negative, the quotes are compared in a second step. The second step further pre-processes both quotes by removing punctuation and English stopwords. The pre-processed quotes are then compared with each other again; if they match exactly, they are considered as matches.
This matching flow was performed in three iterations: (1) offline-offline quotes, to identify matching quote clusters in the offline dataset; (2) offline-online, to identify quotes that had crossed over and appeared in both medium; and (3) online-online quotes, to identify matching quote clusters in the online dataset. In each iteration, we performed an all-pairs matching flow, meaning every quote was compared to all other quotes in the dataset. After each matching iteration, the generated unique set of quotes were manually inspected and combined if necessary.
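A minimal sketch of the two-step comparison described above (the tokenizer and the small stopword list here are illustrative assumptions; the paper does not specify its exact stopword list):

```python
import string

# Illustrative English stopword list; the paper's exact list is not specified.
STOPWORDS = {"the", "a", "an", "is", "are", "of", "for", "and", "to", "our"}

def tokenize(quote):
    """Step 1 normalization: lowercase and split on whitespace."""
    return quote.lower().split()

def strip_punct_stopwords(tokens):
    """Step 2 normalization: drop punctuation and stopwords."""
    cleaned = [t.strip(string.punctuation) for t in tokens]
    return [t for t in cleaned if t and t not in STOPWORDS]

def quotes_match(q1, q2):
    """Two-step comparison: exact token match, then match after extra pre-processing."""
    t1, t2 = tokenize(q1), tokenize(q2)
    if t1 == t2:  # step 1: exact match of lowercased, tokenized quotes
        return True
    return strip_punct_stopwords(t1) == strip_punct_stopwords(t2)  # step 2

print(quotes_match("America is not for sale!", "america is not for sale"))  # True
```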
In the offline dataset, we identified 1798 unique white supremacist propaganda quotes. Not all reported quotes are what would generally be regarded as propaganda, however. Some quotes are very short (e.g. "aryan") or non-ideological (e.g. "send a message"). In order to filter these out, two of the authors annotated quotes for being ideological, not just descriptive. Specifically, annotators marked whether each phrase portrays a white supremacist ideological position or evaluation (such as "race traitors") instead of being descriptive and likely also used by those without a white supremacist worldview (such as the phrase "white nationalists"). Annotators discussed difficult cases and came to a consensus. 1655 out of 1798 unique quotes (92.0%) were annotated as propaganda.
For the online dataset, we found 188,861 online posts containing exact matches of offline quotes. Searching for near matches (removing punctuation and stopwords) provided an additional 46,701 posts, yielding a total of 235,562 online posts that mention either the exact or near matches of propaganda quotes in offline events.
### Spatial Coverage of Offline Propaganda Quotes
To calculate the spatial coverage of the offline quotes, we borrow the concept of a "radius of gyration" from physical science. The radius of gyration, \(R_{g}\), of a propaganda quote represents the typical distance travelled by the quote, weighted by number of occurrences in different locations. In our calculation, we weight each quote instance equally (the "mass" of each quote is 1 in the original physical science terms). We start with each unique quote cluster \(C\in Q\), the result of our matching process described in 4.1. For each set of \(N\) matching propaganda quotes within a quote cluster \(C=\{q_{1},q_{2}...\}\), we calculate the location centroid \(q_{c}\) of \(C\) as the average latitude/longitude of the quotes in \(C\). \(R_{g}\) is the root mean square Euclidean distance of \(q_{i}\) to the centroid \(q_{c}\). Formally, \(R_{g}\) is represented as Equation 1 and illustrated in Figure 1.
\[R_{g}=\sqrt{\frac{\sum_{i=1}^{N}\left(q_{i}-q_{c}\right)^{2}}{N}}, \tag{1}\]

where \((q_{i}-q_{c})\) denotes the Euclidean distance of \(q_{i}\) to the centroid \(q_{c}\) of the quotes in \(C\), and \(N\) is the number of matching quotes.
**Equation 1: Calculation of Radius of Gyration**
Figure 1. Illustration of parameters used in calculating the radius of gyration for \(N=3\) matching quotes \(q_{1}\), \(q_{2}\), and \(q_{3}\). The blue circle is the geographical cover (gyration) of the quote.
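A minimal sketch of this computation for one quote cluster, treating latitude/longitude pairs as planar coordinates as in Equation 1 (the coordinates below are hypothetical):

```python
import math

def radius_of_gyration(coords):
    """Root mean square distance of a quote cluster's locations to their centroid (Equation 1)."""
    n = len(coords)
    lat_c = sum(lat for lat, _ in coords) / n
    lon_c = sum(lon for _, lon in coords) / n
    sq_dists = [(lat - lat_c) ** 2 + (lon - lon_c) ** 2 for lat, lon in coords]
    return math.sqrt(sum(sq_dists) / n)

# Three hypothetical occurrences (lat, lon) of the same quote, each weighted equally.
cluster = [(39.95, -75.16), (40.44, -79.99), (40.71, -74.01)]
print(radius_of_gyration(cluster))
```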
Table 3. Details of online dataset.

| Platform | Data source | # Posts | # matched offline quotes |
| --- | --- | --- | --- |
| 4chan | Papasavva et al. | 2,686,267 | 137,744 |
| 4chan | Jokubauskaite and Peeters | 578,650 | 20,546 |
| 4chan, 8chan, Reddit | Rieger et al. | 361 | 65 |
| Stormfront | Stormfront dump | 751,980 | 57,761 |
| Iron March | Iron March dump | 179,468 | 10,559 |
| Twitter | Qian et al. | 84,695 | 2,409 |
| Twitter | ElSherief et al. | 3,480 | 261 |
| Twitter | Alatawi et al. | 1,098 | 46 |
| Twitter | Siegel et al. | 170 | 37 |
| Discord | Patriot Front dump | 39,577 | 675 |
| Daily Stormer | Calderón et al. | 26,099 | 5,058 |
### Temporal Coverage
We analyze at what points in time propaganda quotes occur in offline events and when the same quotes occur in online posts. In this manner, we can observe the temporal trends per medium and compare the temporal trends across media.
We first calculate how long the quotes appear in reported offline events across the US, simply the number of days from the first occurrence to the final occurrence.
To get an idea of how long white supremacist propaganda quotes circulate in their first environment before crossing over to the second environment, for each quote cluster \(C\) of matching quotes, we examine the temporal relationship between first appearances online and offline. We calculate the average time difference in terms of days between its first appearance in the online dataset and its first appearance in the offline dataset, or vice versa.
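A minimal sketch of these two computations for a single quote cluster (the occurrence records and field layout are illustrative assumptions, not the paper's actual data format):

```python
from datetime import date

# Hypothetical occurrences of one matched quote: (date, medium).
occurrences = [
    (date(2018, 3, 1), "online"),
    (date(2018, 9, 20), "offline"),
    (date(2019, 1, 5), "offline"),
]

def offline_lifespan(occurrences):
    """Days from a quote's first to last reported offline occurrence."""
    offline = [d for d, medium in occurrences if medium == "offline"]
    return (max(offline) - min(offline)).days if offline else 0

def online_offline_lag(occurrences):
    """Days between first online and first offline appearance.
    Positive values mean the quote appeared online first."""
    online = [d for d, medium in occurrences if medium == "online"]
    offline = [d for d, medium in occurrences if medium == "offline"]
    if not online or not offline:
        return None  # quote never crossed over
    return (min(offline) - min(online)).days

print(offline_lifespan(occurrences))    # 107 days offline
print(online_offline_lag(occurrences))  # 203 days: online leads offline
```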
## 5. Results
### Geographic Distribution of Offline Propaganda Quotes
Figure 2 depicts the geographical distribution of white supremacist propaganda quotes as extracted from the offline dataset. The blue circles are the centers of gyration for each quote cluster, sized proportionally by the number of occurrences of the quote. This propaganda is spread widely across the United States. To test how the distribution of propaganda compares with the overall population distribution of the United States, we first calculate the overall center of propaganda as the average latitude and longitude across all centers of quotes, weighted by quote frequency. If propaganda is evenly distributed across the US population, this center would match the mean center of population, the mean latitude and longitude of all population in the United States. We find that the center of propaganda is further to the northeast than this mean center of population, which suggests that propaganda has a slight tendency toward the northeast section of the US.
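A minimal sketch of the weighted-center comparison described above (the coordinates below are hypothetical, and the mean center of population is only an approximate, illustrative value):

```python
def weighted_center(centers):
    """Frequency-weighted mean latitude/longitude over quote-cluster centers."""
    total = sum(freq for _, _, freq in centers)
    mean_lat = sum(lat * freq for lat, _, freq in centers) / total
    mean_lon = sum(lon * freq for _, lon, freq in centers) / total
    return mean_lat, mean_lon

# Hypothetical (lat, lon, frequency) triples for three quote clusters.
quote_centers = [(40.7, -74.0, 942), (33.7, -84.4, 120), (41.9, -87.6, 300)]
center_of_propaganda = weighted_center(quote_centers)

# Approximate 2020 mean center of the US population, for illustration only.
mean_center_of_population = (37.4, -92.3)
print(center_of_propaganda, mean_center_of_population)
```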
### Spatiotemporal Distribution of Quotes
The spatiotemporal distribution of the offline propaganda quotes is visualized in Figure 3. In Region A, quotes have both high spatial and temporal coverage. These quotes, such as "America First" and "America is not for sale", are repeated extremely often and over a wide area. Region B contains quotes that cover a wide geographic spread but are not long-lasting (e.g., "March against sharia"). Region C contains quotes that are long-lasting but do not cover a large area (e.g., "Defending our heritage").
We observe that propaganda messages with high spatiotemporal coverage (i.e., Region A at the top right corner of the graph) are quotable, memorable, and applicable in many contexts. These properties of successful propaganda are in line with findings about the discourse of propaganda in the Persian Gulf War (Wolf, 2018). We find that the most commonly repeated propaganda messages are also appeals to patriotism. An example of such a message is:
**Message:** "One nation, against invasion"
**Event description:** Patriot Front, a white supremacist group, distributed propaganda flyers at Northwest Arkansas Community College that read "America is not for sale", "Reclaim America", and "One nation, against invasion"
**Frequency of quote:** 942
**Life span:** 820 days
Looking at patterns in other regions of the spatiotemporal distribution, we observe much propaganda is time and location specific.
Some propaganda quotes are tied to the locations in which they appear. Some quotes have a small radius, possibly because the narrative theme within the message is targeted at a particular geographical community. For example, "which way western man" has the smallest radius of 2.6, centered in Philadelphia, Pennsylvania. This is a catchphrase misquoted from the 1978 book by William Simon and is mostly used by the European Heritage Association. William E. Simon, a businessman and philanthropist and the Secretary of the Treasury in 1974, was born in the city of Philadelphia. The use of the catchphrase around his birthplace could be a deliberate attempt to appeal to a local audience. The quote appeared in 47 online posts and 26 offline events.
Another such message is "Justice for Cannon Hinnant", which has a small radius of gyration covering North Carolina to Virginia.
Figure 3. Spatiotemporal distribution of quotes, with the most popular quotes of each region listed to the right.
Figure 2. Geographic distribution of propaganda quotes from the offline dataset. Each red dot is an incident. Each blue circle is centered around the average geographical locations of a quote, its size is proportional to the quote’s frequency.
The shooting of five-year-old Cannon Hinnant took place in August 2020 in Wilson, North Carolina (Wilson, 2020). Hinnant was a white American who was shot at point-blank range by his Black neighbor, Darius Sessoms. The limited radius of the propaganda quote could reflect an attempt to arouse white supremacist feelings around the area of the incident by appealing to geographic commonality. This specific quote appeared offline 40 times and was observed in 9 online posts.
Some narratives appear at a specific time, in tandem with world events. The narrative "stop coronavirus, deport illegal aliens, close borders, stop immigration" was recorded from 21 March 2020 until 17 May 2020, coinciding with the beginning of the worldwide coronavirus pandemic. Another narrative rooted in conspiracy theories is "every single aspect of the covid agenda is jewish". This narrative circulated from 18 September 2021 until 31 March 2022 as part of an effort to link anti-government sentiment ("the COVID agenda") with antisemitic extremist views (Krishnan, 2020). This national and local targeting warrants further investigation into white supremacist communication tactics.
Figure 4 illustrates the temporal distribution of white supremacist quotes online and offline. This covers the 2-year period from 2018 through 2019, chosen for the presence of significant activity in both online and offline datasets. The online dataset does not contain data past 2019, and there are relatively few reports in the offline dataset before 2018. Overall, we find that white supremacist propaganda online leads appearances offline. 87% of white supremacist quotes first appear online before appearing offline. The average time difference between the appearance of quotes across both media is 242 \(\pm\) 212 days. That is, quotes circulate online for an average of 7 months before making it to the offline space. Examples of such quotes are: "white power world wide", "troops to the border", "build the wall deport them all", "our race is our nation", "only two genders".
13% of the quotes appear offline first before making their appearance online. Examples of such quotes are: "protect your heritage", "hate speech is free speech", "wake up white America", "Reclaim America", "the holocaust was a good thing". Figure 5 plots the first appearances of each quote in offline and online spaces. These quotes appear online within less than two months (49 \(\pm\) 158 days) of their first offline occurrence, a much shorter crossover period than for those that appear online first.
### Comparison of Online and Offline Propaganda Use
We perform a comparison of propaganda use in online and offline platforms through identifying popular quotes in both environments. Figure 6 compares the most popular quotes in both online and offline platforms. We observe that the most popular quotes, measured by the frequency of quote occurrences, differ between online and offline environments. In fact, propaganda that is popular online is not popular offline, and vice versa.
Propaganda popular in offline spaces is more vague and applicable in many contexts. It is more likely to give plausible deniability of racism, and some messages seem to act as dogwhistles (Krishnan, 2020). Propaganda popular online is more directly ideological, naming minoritized groups such as Jews and specifying that they are for a "white America" and "white children".
## 6. Discussion
In this study, we examined the spatiotemporal prevalence of white supremacist propaganda in both online and offline spaces.
We find that short, decontextualized, and easily repeatable phrases like "America First" have the largest spatial and temporal coverage offline. They often appeal to patriotism and are more coded than
Figure 4. Frequencies of propaganda quote appearances offline and online.
Figure 5. Visualization of the first appearances of white supremacist propaganda quotes: online or offline. The size of each bubble indicates the number of unique quotes appearing on the same day. It can be observed that the vast majority of quotes first appear online.
Figure 6. Frequency of most popular quotes in offline and online domains.
direct. Though white supremacist propaganda is prevalent throughout the US, we identify a trend toward the northeast area. There could be many reasons for this. The area has a historical reputation for liberalism, so radical right-wing hate groups may view it as a "battleground" where propaganda is needed in a way that it is not in more conservative areas. One of the most prominent groups behind events in our dataset, contributing 84% of the offline propaganda quotes, is Patriot Front, a hate group spread widely across the United States that is known to be active in eastern cities such as Boston and Philadelphia (2019). Some of their slogans call for the formation of a white ethno-state and for restoring America to the past, which may explain a focus on this region of the country.
In the online space, we examined posts from a wide variety of data sources, ranging from online chat logs to forums to blogs to social media platforms. This broadens our analysis beyond the idiosyncrasies of any particular platform. The presence of white supremacist propaganda across all these different genres of online platforms highlights the severity and spread of white supremacist ideology.
We observe white supremacist propaganda crossing from online environments to offline events. This is in line with other online-to-offline studies, where online activism facilitates offline protests (Krishnan et al., 2019). This points to the importance of monitoring narratives online from hate groups.
A large majority (87%) of the quotes appear online first before appearing offline, during which they circulate for an average of seven months. This is a fairly long period of time, during which narratives can coalesce around certain messages before they are publicly distributed as offline propaganda. Quotes that appear offline before we observe them in our online dataset cross over much faster, less than two months on average. This could be due to the ease of quickly picking up and discussing narratives in offline propaganda in online spaces (e.g. blogs, social media).
Our observations further show that propaganda that is popular in the online space does not necessarily become popular in the offline space. This indicates that the two environments have separate information consumption spheres, with different communicative strategies from groups with the same ideological objectives. Such a case was observed in the US Capitol riot, an event with significant white supremacist propaganda, where different disinformation narratives propagated across YouTube and website videos (Santantos et al., 2020).
### Limitations and Future Work
This study of online-to-offline crossover is not without limitations. Though the ADL draws from a team of expert annotators to curate its offline hate group event dataset, from which we extract our offline propaganda dataset, the annotation methodology is largely unknown. The coding scheme and the inter-annotator agreement could affect the categorization of events. Our online dataset does not contain posts past 2019, which limits our ability to compare with offline events. Collecting newer online white supremacist posts (perhaps from sources such as Telegram) would allow a more recent comparison. In both the temporal and spatial analyses, more stable results could be found by better handling outliers in the dataset.
We match propaganda quotes with literal exact matches and matches without punctuation and stopwords. These methods could be expanded to include other forms of near matches, such as topic modeling, segmentation (Santos et al., 2020), \(n\)-gram overlap (Santos et al., 2020), fuzzy search (Boston and Roth, 2019), paraphrasing (Santos et al., 2020), or rhetorical function matching (Santos et al., 2020). These more sophisticated matching approaches may yield quotes that are similar in semantics or rhetoric, but do not have exact word overlap.
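As one illustration of the near-match extensions mentioned above, a minimal fuzzy-matching sketch using the standard library's difflib (the 0.9 threshold is an arbitrary illustrative choice, not a value from the paper):

```python
from difflib import SequenceMatcher

def fuzzy_match(q1, q2, threshold=0.9):
    """Treat two quotes as near matches when their character-level
    similarity ratio exceeds the threshold."""
    ratio = SequenceMatcher(None, q1.lower(), q2.lower()).ratio()
    return ratio >= threshold

print(fuzzy_match("one nation against invasion", "one nation, against invasion"))  # True
```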
In addition, the current analysis is limited to the United States and to English propaganda. White supremacist ideology circulates worldwide, and future work should expand this analysis to other countries and contexts where messaging may vary.
An analysis of the network structure of those engaging with white supremacist propaganda offline and online may give insight to how this content is spread, and through what groups narratives cross from online to offline spaces.
## 7. Conclusion
In this work, we conduct analysis of the relationship between white supremacist propaganda in online and offline spaces. Understanding when and where white supremacist hate messages spread is critical for an effective and knowledgeable response by authorities, organizations, companies and academics.
We combine data from a hand-annotated dataset that records incidents of offline white supremacist propaganda with a curated dataset of white supremacist quotes from online blogs, forums and social media. We begin by analyzing the geographic spread of offline propaganda, identifying that incidents of white supremacist activity are generally concentrated around regions of historical significance in the United States. Through a temporal comparison of quotes from both online and offline environments, we find preliminary evidence that white supremacist propaganda generally appears online before being distributed offline. Through text comparison methods, we observe that the most popular propaganda in online and offline environments differ, but narratives tend to be time- and location-specific. We hope that this study illuminates patterns in strategic communication used by white supremacist hate groups.
###### Acknowledgements.
The authors would like to acknowledge the support from the AFOSR awards and the Collaboratory Against Hate Research and Action Center (CAH). We thank the ADL Center on Extremism for curating and providing the offline dataset and for inspiration for this project. Any opinions, findings, and conclusions or recommendations expressed in this material do not necessarily reflect the views of the funding sources.
|
2310.07900 | Asymptotics for power posterior mean estimation | Power posteriors "robustify" standard Bayesian inference by raising the
likelihood to a constant fractional power, effectively downweighting its
influence in the calculation of the posterior. Power posteriors have been shown
to be more robust to model misspecification than standard posteriors in many
settings. Previous work has shown that power posteriors derived from
low-dimensional, parametric locally asymptotically normal models are
asymptotically normal (Bernstein-von Mises) even under model misspecification.
We extend these results to show that the power posterior moments converge to
those of the limiting normal distribution suggested by the Bernstein-von Mises
theorem. We then use this result to show that the mean of the power posterior,
a point estimator, is asymptotically equivalent to the maximum likelihood
estimator. | Ruchira Ray, Marco Avella Medina, Cynthia Rush | 2023-10-11T21:11:16Z | http://arxiv.org/abs/2310.07900v2 | # Asymptotics for power posterior mean estimation
###### Abstract
Power posteriors "robustify" standard Bayesian inference by raising the likelihood to a constant fractional power, effectively downweighting its influence in the calculation of the posterior. Power posteriors have been shown to be more robust to model misspecification than standard posteriors in many settings. Previous work has shown that power posteriors derived from low-dimensional, parametric locally asymptotically normal models are asymptotically normal (Bernstein-von Mises) even under model misspecification. We extend these results to show that the power posterior moments converge to those of the limiting normal distribution suggested by the Bernstein-von Mises theorem. We then use this result to show that the mean of the power posterior, a point estimator, is asymptotically equivalent to the maximum likelihood estimator.
power posterior, \(\alpha\)-posterior, Bernstein-von Mises, posterior mean convergence
## I Introduction
In this work, we study \(\alpha\)-posteriors (also known as power posteriors), which are proportional to the product of the prior and the likelihood raised to a constant, fractional power [4, Chapter 8.6]. This procedure (also known as posterior tempering) decreases the influence of the likelihood in the calculation of the posterior. Decreasing the influence of the likelihood has been explored for several problems as a way to improve Bayesian procedures [9, 12, 13]. More recent literature about this area - under the name \(\alpha\)-posteriors - has focused on the robustness to model misspecification and data corruption of the \(\alpha\)-posterior compared to that of the ordinary posterior both empirically [5, 6, 11] and theoretically [11, 2]. To this end, both the theoretical and empirical properties of this procedure have been investigated in recent years [5, 11, 3, 1, 2]. In particular, [2] established that \(\alpha\)-posteriors derived from low-dimensional, parametric models are asymptotically normal. In other words, they obey a Bernstein-von Mises theorem. For standard posteriors, the Bernstein-von Mises theorem says that the large sample distribution of the posterior is close to a normal distribution centered around the maximum likelihood estimator (MLE) with posterior variance depending on the "curvature" of the likelihood function [8, 7, 10]. The work in [2] shows that raising the likelihood to the power \(\alpha\) while calculating the posterior distribution leaves the mean of the limiting Gaussian unchanged but divides the variance of the limiting Gaussian by \(\alpha\).
We extend the results of [2] by showing that the \(k^{th}\) moment of the \(\alpha\)-posterior converges in probability to the \(k^{th}\) moment of the limiting normal distribution suggested by the Bernstein-von Mises theorem in [2]. This result (Theorem 1) holds under similar conditions to those of [2], but we additionally assume the finiteness of the \(k^{th}\)\(\alpha\)-posterior moment to prove \(k^{th}\) moment convergence. We use Theorem 1 to establish the \(\sqrt{n}\)-asymptotic normality of the \(\alpha\)-posterior mean (Theorem 2). In particular, we show that the \(\alpha\)-posterior mean has the same asymptotic distribution as that of the MLE. This result shows that, asymptotically, tempering does not affect point estimation from tempered posteriors.
The proof of our first result is adapted from the Bernstein-von Mises proofs of [7] and [2] in order to show convergence of the moments of the \(\alpha\)-posterior rather than convergence to a limiting normal distribution in total variation. The second result follows almost immediately from our first result. The main assumptions required in our proofs are slightly stronger than those from [2] and [7], as discussed in (II-B). Like [2] and [7], our results make no assumptions of model correctness or independent and identically distributed (i.i.d.) data.
## II Main Results
### _Notation and preliminaries_
Let \(\phi(\cdot|\mu,\Sigma)\) denote the multivariate normal density with mean \(\mu\) and covariance matrix \(\Sigma\). The indicator function of an event \(A\) is denoted \(1\{A\}\). For a sequence of distributions \(P_{n}\) on random variables (or matrices) \(X_{n}\), if \(\lim_{n}P_{n}(||X_{n}||_{2}>\epsilon)=0\) for every \(\epsilon>0\), we say \(X_{n}=o_{P_{n}}(1)\). We say \(X_{n}\) converges in \(L_{1}\) to a random variable \(X\) if \(\lim_{n\to\infty}\mathbb{E}[|X_{n}-X|]=0\) and
\(\mathbb{E}[|X_{n}|]\) and \(\mathbb{E}[|X|]\) exist. We say \(X_{n}\) is 'bounded in \(P_{n}\)-probability', or \(O_{P_{n}}(1)\), if for every \(\epsilon>0\) there exists \(M_{\epsilon}>0\) such that \(P_{n}(||X_{n}||_{2}<M_{\epsilon})\geq 1-\epsilon\). We use \(||X||_{p}=\left(\sum_{i}|X_{i}|^{p}\right)^{1/p}\) to denote the \(p\)-norm. Let \(B_{v}(\delta)\) denote a closed ball of radius \(\delta\) around vector \(v\in\mathbb{R}^{p}\).
**Statistical Model:** Let \(\mathcal{F}_{n}=\{f_{n}(\cdot|\theta):\theta\in\Theta\subseteq\mathbb{R}^{p}\}\) be a parametric family of densities used as a statistical model for the random sample \(X^{n}=(X_{1},\ldots,X_{n})\). This model is allowed to be misspecified in the sense that \(f_{0,n}\), the true density of the random sample \(X^{n}\), may not belong to \(\mathcal{F}_{n}\). We will use the notion of the pseudo-true parameter \(\theta^{*}\), which is the true data-generating \(\theta\) if the model is well-specified, meaning \(f_{0,n}(X^{n}|\theta)\in\mathcal{F}_{n}\). If the model is misspecified, \(\theta^{*}\) is the value of \(\theta\in\Theta\) such that the model \(f_{n}(X^{n}|\theta^{*})\) is closest in KL-divergence to the true model. Denote the (pseudo) MLE by \(\hat{\theta}^{ML}=\arg\max_{\theta}f_{n}(X^{n}|\theta)\), where the asymptotic behavior of the MLE is determined with respect to the model \(f_{0,n}\). Let \(\mathbb{E}_{f_{0,n}}\) and \(\mathbb{P}_{f_{0,n}}\) denote the expectation and probability taken with respect to \(f_{0,n}\).
**Power posterior:** Given the statistical model \(\mathcal{F}_{n}\), a prior density \(\pi(\theta)\) for \(\theta\), and a scalar \(\alpha>0\), the power posterior, or \(\alpha\)-posterior, has the density:
\[\pi_{n,\alpha}(\theta|X^{n})=\frac{f_{n}(X^{n}|\theta)^{\alpha}\pi(\theta)}{ \int_{\Theta}f_{n}(X^{n}|\theta)^{\alpha}\pi(\theta)d\theta}. \tag{1}\]
### _Assumptions_
Our main results hold under the following assumptions:
* **(A0)** The MLE, \(\hat{\theta}^{ML}\), is unique and there exists a \(\theta^{*}\) in the interior of \(\Theta\) such that \(\hat{\theta}^{ML}\stackrel{{ p}}{{\rightarrow}}\theta^{*}\) and \(\sqrt{n}(\hat{\theta}^{ML}-\theta^{*})\) is asymptotically normal.
* **(A1)** The prior density \(\pi(\theta)\) is continuous and positive in a neighborhood of \(\theta^{*}\).
* **(A2)** Given \(\Delta_{n,\theta^{*}}=\sqrt{n}(\hat{\theta}^{ML}-\theta^{*})\), there exists a positive definite matrix \(V_{\theta^{*}}\) such that \[R_{n}(h)\equiv\log\left(\frac{f_{n}(X^{n}|\theta^{*}+h/\sqrt{n})}{f_{n}(X^{n}|\theta^{*})}\right)-h^{T}V_{\theta^{*}}\Delta_{n,\theta^{*}}+\frac{1}{2}h^{T}V_{\theta^{*}}h, \tag{2}\] satisfies \(\sup_{h\in K}|R_{n}(h)|\to 0\) in \(f_{0,n}\)-probability for all compact sets \(K\subseteq\mathbb{R}^{p}\).
* **(A3)** There exists a \(\gamma>0\) such that \(\mathbb{E}_{f_{0,n}}\left[\int_{\mathbb{R}^{p}}||h||_{2}^{k_{0}(1+\gamma)}n^{-\frac{p}{2}}\pi_{n,\alpha}\left(\theta^{*}+\frac{h}{\sqrt{n}}\right)dh\right]\) is finite for some \(k_{0}\in\mathbb{N}\).
We adapt the proof of [2, Theorem 1] - which in turn adapts the arguments of [7, Theorem 2.1] - to prove Theorem 1. We retain assumptions **(A0)** - **(A2)** from [7] and [2], but make a slightly stronger posterior concentration assumption in **(A3)**. Indeed, our assumption implies the posterior concentration assumption in [7] and [2], which states that \(\pi_{n,\alpha_{n}}(\cdot|X^{n})\) concentrates at rate \(\sqrt{n}\) around \(\theta^{*}\) if for every sequence \(r_{n}\rightarrow\infty\),
\[\mathbb{E}_{f_{0,n}}\left[\mathbb{P}_{\pi_{n,\alpha}(\theta|X^{n})}\left(|| \sqrt{n}(\theta-\theta^{*})||_{2}>r_{n}\right)\right]\to 0. \tag{3}\]
Applying Markov's inequality to the left side of (3) and using the change of variable \(h=\sqrt{n}\left(\theta-\theta^{*}\right)\) gives us the upper bound
\[r_{n}^{-k_{0}}\mathbb{E}_{f_{0,n}}\left[\int_{\mathbb{R}^{p}}||h||_{2}^{k_{0}} n^{-\frac{p}{2}}\pi_{n,\alpha}\left(\theta^{*}+\frac{h}{\sqrt{n}}\right)dh \right]. \tag{4}\]
Because \(r_{n}\rightarrow\infty\), the posterior concentration assumption is satisfied if
\[\mathbb{E}_{f_{0,n}}\left[\int_{\mathbb{R}^{p}}||h||_{2}^{k_{0}}n^{-\frac{p}{2} }\pi_{n,\alpha}\left(\theta^{*}+\frac{h}{\sqrt{n}}\right)dh\right]<\infty,\]
which is implied by **(A3)**.
### _Convergence of power posterior moments_
We denote by \(\phi(\cdot|\mu,\Sigma)\) the density of a normal random vector with mean \(\mu\) and covariance matrix \(\Sigma\). Recall that [2] showed the following result:
\[d_{TV}\left(\pi_{n,\alpha}\left(\theta|X^{n}\right),\phi\left(\theta\Big{|} \hat{\theta}^{ML},\frac{1}{\alpha n}V_{\theta^{*}}^{-1}\right)\right) \to 0,\]
in \(f_{0,n}\)-probability, where \(V_{\theta^{*}}\) is the matrix from assumption **(A2)** and it can be thought of as measuring the curvature of the likelihood. Theorem 1 essentially shows that the \(k^{th}\) moment of the \(\alpha\)-posterior approaches the \(k^{th}\) moment of the limiting Gaussian from [2] with growing sample size.
**Theorem 1**.: _Assume **(A0)**-**(A3)** hold. Then, for any integer \(k\in[1,k_{0}]\),_
\[\int_{\mathbb{R}^{p}}\left|\left|h^{\otimes k}\right|\right|_{1}\left|\pi_{n, \alpha}^{LAN}(h)-\phi_{n}(h)\right|dh\to 0,\]
_in \(f_{0,n}\)-probability, where_
\[\left|\left|h^{\otimes k}\right|\right|_{1}=\sum_{i_{1}=1}^{p}\cdots\sum_{i_{k} =1}^{p}\left|h_{i_{1}}\times\cdots\times h_{i_{k}}\right|, \tag{5}\]
_is the \(1\)-norm of a \(k^{th}\) order tensor and_
\[\pi_{n,\alpha}^{LAN}(h)\equiv n^{-p/2}\pi_{n,\alpha}(\theta^{*}+h/ \sqrt{n}|X^{n}), \tag{6}\] \[\phi_{n}(h)\equiv n^{-p/2}\phi\left(h\big{|}\sqrt{n}(\hat{\theta}^{ ML}-\theta^{*}),V_{\theta^{*}}^{-1}/\alpha\right),\]
_are the centered and scaled versions of the \(\alpha\)-posterior and its limiting normal._
**Remark 1**.: _We mention that the objects in (6) are proper densities. Indeed, using a change of variable \(h=\sqrt{n}(\theta-\theta^{*})\), we see_
\[\int\pi_{n,\alpha}^{LAN}(h)dh =\int n^{-p/2}\pi_{n,\alpha}(\theta^{*}+h/\sqrt{n}|X^{n})dh\] \[=\int\pi_{n,\alpha}(\theta|X^{n})d\theta=1.\]
_Similarly,_
\[\int\phi_{n}(h)dh =\int n^{-\frac{p}{2}}\phi\left(h\big{|}\sqrt{n}(\hat{\theta}^{ML}- \theta^{*}),\frac{V_{\theta^{*}}^{-1}}{\alpha}\right)dh\] \[=\int\phi\left(\theta\big{|}\hat{\theta}^{ML},\frac{V_{\theta^{*}}^ {-1}}{n\alpha}\right)d\theta=1.\]
**Remark 2**.: _With a change of variable \(h=\sqrt{n}(\theta-\theta^{*})\), we see that as long as \(k_{0}\geq 2\), Theorem 1 shows that both of the following converge to zero in \(f_{0,n}\)-probability:_
\[\int_{\mathbb{R}^{p}}\!\!\sqrt{n}\|\theta-\theta^{*}\|_{1}\left| \pi_{n,\alpha}(\theta|X^{n})-\phi\!\left(\theta\bigg{|}\hat{\theta}^{ML}, \frac{V_{\theta^{*}}^{-1}}{n\alpha}\right)\right|d\theta,\] \[\int_{\mathbb{R}^{p}}\!\!\|[\theta\!-\!\theta^{*}][\theta\!-\! \theta^{*}]^{T}\|_{1}\bigg{|}\pi_{n,\alpha}(\theta|X^{n})\!-\!\phi\!\left( \theta\bigg{|}\hat{\theta}^{ML},\frac{V_{\theta^{*}}^{-1}}{n\alpha}\right) \!\bigg{|}d\theta.\]
The proof of [2] and [7] relates "closeness" of the likelihood ratios of two densities to closeness in terms of total variation distance. Since the results in [2] and [7] are Bernstein-von Mises results, we modify their proof to relate "closeness" of likelihood ratios to "closeness" of moments. To accomplish this, we modify Lemma 4 from [2] (Lemma 1 in this work) to express the moments of the \(\alpha\)-posterior in terms of the moments and likelihood ratios of these two densities. Lemma 1 reveals that the moments of the \(\sqrt{n}\)-scaled \(\alpha\)-posterior must be **(I)** finite on a growing neighborhood surrounding the pseudo-true parameter and Lemma 2 shows that the moments are **(II)** vanishing on the complement of this neighborhood. Lemmas 1 and 2 require **(A3)** instead of the weaker posterior concentration assumption from [2].
### _Asymptotic normality of the Bayes estimator_
The result in Section (II-C) is useful for determining the large sample behavior of point estimators derived from the \(\alpha\)-posterior. In particular, because \(\alpha\) does not affect the centering of the \(\alpha\)-posterior, as the the mean of the posterior approaches \(\hat{\theta}^{ML}\), it stands to reason that the limiting distribution of the mean of the \(\alpha\)-posterior is unaffected by \(\alpha\). Our second result verifies this.
**Theorem 2**.: _Assume **(A0)**-**(A3)** hold. Then the \(\alpha\)-posterior mean estimator \(\hat{\theta}^{B}=\int_{\mathbb{R}^{p}}\theta\pi_{n,\alpha}(\theta|X^{n})d\theta\) has the following limiting distribution \(\sqrt{n}(\hat{\theta}^{B}-\theta^{*})\overset{d}{\rightarrow}N(0,V_{\theta^{* }}^{-1})\)._
The high-level argument in our proof of Theorem 2 involves showing that the Bayes estimator is asymptotically equivalent to the (pseudo) MLE, which implies that they have the same limiting distribution. In other words, we need to show that \(\sqrt{n}(\hat{\theta}^{B}-\hat{\theta}^{ML})=o_{p}(1)\). Since \(\hat{\theta}^{B}\) is defined as the first moment of the \(\alpha\)-posterior, we can obtain the desired result by examining the moments of the \(\alpha\)-posterior centered around the pseudo-true parameter, which we do in Theorem 1.
## III Discussion
In this work, we have studied the moments of \(\alpha\)-posteriors in parametric, low-dimensional models to better understand point estimators derived from posterior tempering. We found that because posterior tempering does not affect the "first-order" properties of the posterior, the limiting distribution of the posterior mean is also unaffected by tempering. This result demonstrates that while tempering may induce robustness in the posterior distribution, it does not do so in the mean of the tempered posterior. We believe this phenomenon should hold for other point estimators derived from posteriors (e.g., the posterior median and mode), which is left for future work.
We also mention some other directions for future work. First, as noted in Section II-B, our **(A3)** is stronger than the posterior concentration assumption used to prove the Bernstein-von Mises results in [2, 7]. However, we believe that posterior concentration is a sufficient condition for our Theorem 1 to hold for at least \(k=1\) and \(k=2\). Future work could involve modifying our proof to reflect this. Second, while we have advanced our understanding of \(\alpha\)-posteriors for fixed, constant, \(\alpha\), we would like to establish similar guarantees to \(\alpha\)-posteriors where \(\alpha\) is selected in a data-driven way (i.e., in settings where \(\alpha\) might change with the sample size, \(n\)).
## IV Proofs of Main Results
### _Proof of Theorem 1_
The proof is adapted from the proofs of [2, Theorem 1] and [7, Theorem 2.1] to show convergence of the posterior moments rather than the total variation distance.
Let \(Z_{0}\) denote the integral we are trying to control, namely
\[Z_{0}\coloneqq\int_{\mathbb{R}^{p}}\big{|}\big{|}h^{\otimes k}\big{|}\big{|}_{ 1}\left|\pi_{n,\alpha}^{LAN}(h)-\phi_{n}(h)\right|dh. \tag{7}\]
Recalling the definition of \(\big{|}\big{|}h^{\otimes k}\big{|}\big{|}_{1}\) in (5) and using the fact that \(||h||_{1}\leq\sqrt{p}||h||_{2}\), we note that
\[\big{|}\big{|}h^{\otimes k}\big{|}\big{|}_{1} =\sum_{i_{1}=1}^{p}\cdots\sum_{i_{k}=1}^{p}\left|h_{i_{1}}\times \cdots\times h_{i_{k}}\right|\] \[=\sum_{i_{1}=1}^{p}\left|h_{i_{1}}\right|\cdots\sum_{i_{k}=1}^{p} \left|h_{i_{k}}\right|=||h||_{1}^{k}\leq p^{\frac{k}{2}}\left|\left|h\right| \right|_{2}^{k}, \tag{8}\]
with \(p\) fixed. By the bound in 8 and monotonicity of the integral, we have that
\[Z_{0}\leq p^{\frac{k}{2}}\int_{\mathbb{R}^{p}}\left|\left|h\right|\right|_{2}^{k}\left|\pi_{n,\alpha}^{LAN}(h)-\phi_{n}(h)\right|dh\coloneqq Z. \tag{9}\]
Thus, it suffices to show that \(\mathbb{E}_{f_{0,n}}[Z]\to 0\), which implies \(\mathbb{E}_{f_{0,n}}[Z_{0}]\to 0\). This, in turn, implies that \(Z_{0}\overset{p}{\rightarrow}0\) by Markov's inequality. We show convergence in \(L_{1}\) by **(I)** showing convergence in \(L_{1}\) conditioned on an
intersection of events and **(II)** showing the complement of these events occur with vanishing probability.
We now show convergence in \(L_{1}\). We will use the following strategy to do this. For vectors \(g,h\in K_{0}\), define the random variables
\[f_{n}^{+}(g,h) =\left\{1-\frac{\phi_{n}(h)\pi_{n,\alpha}^{LAN}(g|X^{n})}{\pi_{n, \alpha}^{LAN}(h|X^{n})\phi_{n}(g)}\right\}^{+}, \tag{10}\] \[f_{n}^{-}(g,h) =\left\{\frac{\pi_{n,\alpha}^{LAN}(h|X^{n})\phi_{n}(g)}{\phi_{n}( h)\pi_{n,\alpha}^{LAN}(g|X^{n})}-1\right\}^{-}, \tag{11}\]
where \(\{x\}^{+}=\max\{0,x\}\) and \(\{x\}^{-}=\max\{0,-x\}\). We appeal to [2, Lemma 5] to show that \(f_{n}^{+}(g,h)\) and \(f_{n}^{-}(g,h)\) are well-defined with high-probability. Consider \(\eta>0\) and define the following events,
\[\mathcal{A} =\left\{\sup_{g,h\in B_{0}(r_{n})}f_{n}^{+}(g,h)\leq\eta\right\}, \tag{12}\] \[\mathcal{B} =\left\{\sup_{g,h\in B_{0}(r_{n})}f_{n}^{-}(g,h)\leq\eta\right\}.\]
The events \(\mathcal{A}\) and \(\mathcal{B}\) hold with high probability for large enough \(n\) by [2, Lemma 5]. To apply [2, Lemma 5], we mention that **(I)** because \(\theta^{*}\) is in the interior of \(\Theta\subseteq\mathbb{R}^{p}\) by **(A1)**, there exists a \(\delta>0\) such that ball \(B_{\theta^{*}}(\delta)\subset\Theta\) and \(\pi\) is continuous and positive on the ball and **(II)**\(\sqrt{n}\)-stochastic local asymptotic normality holds by **(A2)**.
Next, we upper bound \(\mathbb{E}_{f_{0,n}}[Z]\) as follows using Hölder's inequality for some \(\gamma>0\):

\[\begin{split}\mathbb{E}_{f_{0,n}}[Z]&=\mathbb{E}_{f_{0,n}}[Z\mathbbm{1}\left\{\mathcal{A}\cap\mathcal{B}\right\}]+\mathbb{E}_{f_{0,n}}[Z\mathbbm{1}\left\{\left(\mathcal{A}\cap\mathcal{B}\right)^{c}\right\}]\\ &\leq\mathbb{E}_{f_{0,n}}[Z\mathbbm{1}\left\{\mathcal{A}\cap\mathcal{B}\right\}]+\mathbb{E}_{f_{0,n}}[Z^{1+\gamma}]^{\frac{1}{1+\gamma}}\left(\mathbb{P}_{f_{0,n}}(\mathcal{A}^{c})+\mathbb{P}_{f_{0,n}}(\mathcal{B}^{c})\right)^{\frac{\gamma}{1+\gamma}}.\end{split}\tag{13}\]
We aim to show that each term in (13) converges to zero.
**First term in (13).** We upper bound the first term of (13), which is \(\mathbb{E}_{f_{0,n}}[Z\mathbbm{1}\left\{\mathcal{A}\cap\mathcal{B}\right\}]\).
Using (9) and appealing to Lemma 1 with \(s(h)=p^{\frac{k}{2}}||h||_{2}^{k}\), we have that
\[Z=p^{\frac{k}{2}}\int_{\mathbb{R}^{p}}||h||_{2}^{k}\left|\pi_{n,\alpha}^{LAN}(h|X^{n})-\phi_{n}(h)\right|dh \tag{14}\] \[\leq p^{\frac{k}{2}}\Bigg{[}\sup_{g,h\in B_{0}(r_{n})}f_{n}^{+}(g,h)\Bigg{]}\int_{\mathbb{R}^{p}}||h||_{2}^{k}\pi_{n,\alpha}^{LAN}(h|X^{n})dh\] \[\quad+p^{\frac{k}{2}}\left[\sup_{g,h\in B_{0}(r_{n})}f_{n}^{-}(g,h)\right]\int_{\mathbb{R}^{p}}||h||_{2}^{k}\phi_{n}(h)dh\] \[\quad+p^{\frac{k}{2}}\int_{||h||_{2}>r_{n}}||h||_{2}^{k}\left(\pi_{n,\alpha}^{LAN}(h|X^{n})+\phi_{n}(h)\right)dh.\]
Therefore, to study \(\mathbb{E}_{f_{0,n}}[Z\mathbbm{1}\{\mathcal{A}\cap\mathcal{B}\}]\), we notice that the suprema in (14) are bounded. This gives the upper bound
\[\mathbb{E}_{f_{0,n}}[Z\mathbbm{1}\left\{\mathcal{A}\cap\mathcal{B}\right\}] \tag{15}\] \[\leq p^{\frac{k}{2}}\eta\mathbb{E}_{f_{0,n}}\left[\int_{\mathbb{R} ^{p}}||h||_{2}^{k}\pi_{n,\alpha}^{LAN}(h|X^{n})dh\right]\] \[\quad+p^{\frac{k}{2}}\eta\mathbb{E}_{f_{0,n}}\left[\int_{\mathbb{ R}^{p}}||h||_{2}^{k}\phi_{n}(h)dh\right]\] \[\quad+p^{\frac{k}{2}}\mathbb{E}_{f_{0,n}}\left[\int_{||h||_{2}>r_ {n}}||h||_{2}^{k}\phi_{n}(h)dh\right]\] \[\quad+p^{\frac{k}{2}}\mathbb{E}_{f_{0,n}}\left[\int_{||h||_{2}>r_ {n}}||h||_{2}^{k}\pi_{n,\alpha}^{LAN}(h|X^{n})dh\right].\]
Label the four terms in (15) as \(T_{1}\), \(T_{2}\), \(T_{3}\), and \(T_{4}\). We argue that, for arbitrary \(\epsilon>0\), each can be upper bounded by \(\epsilon/4\) (possibly for large enough \(n\)).
Consider term \(T_{1}\) of (15). By **(A3)**, there exists \(M_{1}<\infty\) such that
\[\mathbb{E}_{f_{0,n}}\left[\int_{\mathbb{R}^{p}}||h||_{2}^{k}\pi_{n,\alpha}^{LAN }(h|X^{n})dh\right]<M_{1}.\]
Thus, we can pick \(\eta<\frac{\epsilon}{4M_{1}p^{k/2}}\) such that
\[T_{1}=p^{k/2}\eta\mathbb{E}_{f_{0,n}}\left[\int_{\mathbb{R}^{p}}||h||_{2}^{k} \pi_{n,\alpha}^{LAN}(h|X^{n})dh\right]<\frac{\epsilon}{4}.\]
For term \(T_{2}\), we notice that by a change of variables,
\[\int_{\mathbb{R}^{p}}\|h\|_{2}^{k}\phi_{n}(h)dh \tag{16}\] \[=\int_{\mathbb{R}^{p}}n^{-\frac{p}{2}}\|h\|_{2}^{k}\phi\left(h \bigg{|}\sqrt{n}(\hat{\theta}^{ML}-\theta^{*}),\frac{V_{\theta^{*}}^{-1}}{ \alpha}\right)dh\] \[=\int_{\mathbb{R}^{p}}n^{\frac{k}{2}}\|\theta-\hat{\theta}^{ML}+ \theta^{*}\|_{2}^{k}\phi\left(\theta\bigg{|}0,\frac{V_{\theta^{*}}^{-1}}{n \alpha}\right)d\theta\] \[=n^{\frac{k}{2}}\mathbb{E}_{\mathcal{N}(0,V_{\theta^{*}}^{-1}/(n \alpha))}\left[\|\theta-\hat{\theta}^{ML}+\theta^{*}\|_{2}^{k}\right].\]
Next we upper bound the above using Hölder's inequality1 and centered absolute Gaussian moments:
Footnote 1: Note that for \(m>0\) and \(v=(v_{1},\ldots,v_{p})\in\mathbb{R}^{p}\), Hölder’s inequality implies \((\sum_{j=1}^{p}|v_{j}|)^{m}\leq p^{m-1}\sum_{j=1}^{p}|v_{j}|^{m}\)
\[n^{\frac{k}{2}}\mathbb{E}\Big{[}\|\theta-\hat{\theta}^{ML}+\theta^{* }\|_{2}^{k}\Big{]} \tag{17}\] \[=n^{\frac{k}{2}}\mathbb{E}\Bigg{[}\Bigg{(}\sum_{i=1}^{p}(\theta_{i }-\hat{\theta}_{i}^{ML}+\theta_{i}^{*})^{2}\Bigg{)}^{\frac{k}{2}}\Bigg{]}\] \[\leq n^{\frac{k}{2}}p^{\frac{k}{2}-1}\sum_{i=1}^{p}\mathbb{E}\left[( \theta_{i}-\hat{\theta}_{i}^{ML}+\theta_{i}^{*})^{k}\right]\] \[\leq n^{\frac{k}{2}}p^{\frac{k}{2}-1}\sum_{i=1}^{p}\mathbb{E}\left[ \left|\theta_{i}-\hat{\theta}_{i}^{ML}+\theta_{i}^{*}\right|^{k}\right]\] \[\leq n^{\frac{k}{2}}2^{k-1}p^{\frac{k}{2}-1}\sum_{i=1}^{p}\left( \mathbb{E}\left[|\theta_{i}|^{k}\right]+|\hat{\theta}_{i}^{ML}-\theta_{i}^{*}|^{k} \right).\]
The first term on the right side of (17) equals
\[\begin{split} n^{\frac{k}{2}}& 2^{k-1}p^{\frac{k}{2}-1} \sum_{i=1}^{p}\mathbb{E}[|\theta_{i}|^{k}]\\ &=\frac{2^{\frac{3k}{2}-1}p^{\frac{k}{2}-1}}{\sqrt{\pi}}\Gamma \left(\frac{k+1}{2}\right)\sum_{i=1}^{p}\left[\frac{[V_{\theta^{*}}^{-1}]_{ii} }{\alpha}\right]^{\frac{k}{2}},\end{split} \tag{18}\]
and the second term on the right side of (17) equals \(2^{k-1}p^{\frac{k}{2}-1}\sum_{i=1}^{p}(\sqrt{n}|\hat{\theta}_{i}^{ML}-\theta_{i }^{*}|)^{k}\). Because \(\sqrt{n}(\hat{\theta}_{i}^{ML}-\theta_{i}^{*})\) is asymptotically normal by Assumption **(A0)**, there exists an \(N_{1}\coloneqq N_{1}(p,k,M)\) such that for all \(n>N_{1}\), we have that the second term on the right side of (17) is bounded by some \(M<\infty\) with high probability. From (16) - (18) we have shown for all \(n>N_{1}\),
\[\mathbb{E}_{f_{0,n}}\left[\int_{\mathbb{R}^{p}}\|h\|_{2}^{k}\phi_{n}(h)dh \right]\leq M_{2}, \tag{19}\]
where
\[M_{2}\coloneqq\frac{2^{\frac{3k}{2}-1}p^{\frac{k}{2}-1}}{\sqrt{\pi}}\Gamma \left(\frac{k+1}{2}\right)\sum_{i=1}^{p}\left[\frac{[V_{\theta^{*}}^{-1}]_{ii} }{\alpha}\right]^{\frac{k}{2}}+M.\]
Putting this together, we have
\[T_{2}=p^{\frac{k}{2}}\eta\mathbb{E}_{f_{0,n}}\left[\int_{\mathbb{R}^{p}}\|h\|_ {2}^{k}\phi_{n}(h)dh\right]\leq p^{\frac{k}{2}}\eta M_{2}.\]
Hence, by choosing \(\eta<\epsilon/(4M_{2}p^{\frac{k}{2}})\) we have \(T_{2}\leq\epsilon/4\).
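The only non-algebraic ingredient in the constant \(M_{2}\) above is the centered absolute Gaussian moment \(\mathbb{E}|X|^{k}=\sigma^{k}2^{k/2}\Gamma\big{(}\frac{k+1}{2}\big{)}/\sqrt{\pi}\) used in (18). The following minimal sketch (ours, written for illustration; plain NumPy) checks that identity numerically.

```
import numpy as np
from math import gamma, sqrt, pi

def abs_gaussian_moment(sigma, k):
    # E|X|^k for X ~ N(0, sigma^2): sigma^k * 2^{k/2} * Gamma((k+1)/2) / sqrt(pi)
    return sigma**k * 2**(k / 2) * gamma((k + 1) / 2) / sqrt(pi)

rng = np.random.default_rng(0)
sigma, k = 1.7, 3
samples = rng.normal(0.0, sigma, size=2_000_000)
mc_estimate = np.mean(np.abs(samples) ** k)
print(abs_gaussian_moment(sigma, k), mc_estimate)  # the two values should agree closely
```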
We control \(T_{3}\) by leveraging (19). Indeed, by Cauchy-Schwarz,
\[\begin{split}& T_{3}=p^{\frac{k}{2}}\mathbb{E}_{f_{0,n}}\left[\int_{\mathbb{R}^{p}}\|h\|_{2}^{k}(1-\mathbb{1}\left\{||h||_{2}\leq r_{n}\right\})\phi_{n}(h)dh\right]\\ &\leq p^{\frac{k}{2}}\mathbb{E}_{f_{0,n}}\Bigg{[}\sqrt{\int_{\mathbb{R}^{p}}\|h\|_{2}^{2k}\phi_{n}(h)dh}\\ &\qquad\qquad\times\sqrt{\int_{\mathbb{R}^{p}}(1-\mathbb{1}\left\{||h||_{2}\leq r_{n}\right\})\phi_{n}(h)dh}\,\Bigg{]}\\ &\leq p^{\frac{k}{2}}\sqrt{\mathbb{E}_{f_{0,n}}\Bigg{[}\int_{\mathbb{R}^{p}}\|h\|_{2}^{2k}\phi_{n}(h)dh\Bigg{]}}\\ &\qquad\qquad\times\sqrt{\mathbb{E}_{f_{0,n}}\Bigg{[}\int_{\mathbb{R}^{p}}(1-\mathbb{1}\left\{||h||_{2}\leq r_{n}\right\})\phi_{n}(h)dh\Bigg{]}}.\end{split} \tag{20}\]
By appealing to (19) with the power \(2k\) instead of \(k\), we know that \(\mathbb{E}_{f_{0,n}}[\int_{\mathbb{R}^{p}}\|h\|_{2}^{2k}\phi_{n}(h)dh]\leq\tilde {M}_{2}\) for \(n>N_{1}\).
Because \(r_{n}\to\infty\) and \(\phi_{n}(h)\leq(2\pi)^{-p/2}|V_{\theta^{*}}^{-1}/\alpha|^{-1/2}\) for all \(h\in\mathbb{R}^{p}\), we see that \((1-\mathbb{1}\{||h||_{2}\leq r_{n}\})\phi_{n}(h)\to 0\). Thus, by the bounded convergence theorem, there exists \(N_{2}\coloneqq N_{2}(\epsilon,\tilde{M}_{2},p,k)\) such that for all \(n>N_{2}\),
\[\mathbb{E}_{f_{0,n}}\Bigg{[}\int_{\mathbb{R}^{p}}(1-\mathbb{1}\left\{||h||_{2 }\leq r_{n}\})\phi_{n}(h)dh\Bigg{]}<\frac{\epsilon^{2}}{16\tilde{M}_{2}p^{k}}.\]
Thus, for \(n>\max(N_{1},N_{2})\), we have that \(T_{3}<\frac{\epsilon}{4}\).
By appealing to Lemma 2 (whose assumptions are met by **(A3)**) with \(f_{Z}(z)\) being \(\pi_{n,\alpha}^{LAN}(z|X^{n})\), we pick \(N_{3}\coloneqq N_{3}(p,k,\epsilon,\gamma)\) such that for all \(n>N_{3}\),
\[\mathbb{E}_{f_{0,n}}\Bigg{[}\int_{||h||_{2}>r_{n}}\hskip-14.226378pt||h||_{2}^{ k}\pi_{n,\alpha}^{LAN}(h|X^{n})dh\Bigg{]}<\frac{\epsilon}{4p^{k/2}}.\]
Thus,
\[T_{4}=p^{\frac{k}{2}}\mathbb{E}_{f_{0,n}}\Bigg{[}\int_{||h||_{2}>r_{n}}\hskip-14.2 26378pt||h||_{2}^{k}\pi_{n,\alpha}^{LAN}(h|X^{n})dh\Bigg{]}<\frac{ \epsilon}{4}\]
for \(n>N_{3}\). Therefore, we have shown that the first term of (13) is upper bounded by \(\epsilon\) whenever \(\eta<\frac{\epsilon}{4\max\{M_{1},M_{2}\}p^{\frac{k}{2}}}\) and \(n\geq\max\{N_{1},N_{2},N_{3}\}\).
**Second term in (13).** First, we consider upper bounding the term \(\mathbb{E}_{f_{0,n}}[Z^{1+\gamma}]\). Notice that by the fact that \(|\pi_{n,\alpha}^{LAN}(h|X^{n})-\phi_{n}(h)|\leq\pi_{n,\alpha}^{LAN}(h|X^{n})+ \phi_{n}(h)\),
\[\begin{split}& Z^{1+\gamma}=\left[p^{\frac{k}{2}}\int_{\mathbb{R}^{p}}||h ||_{2}^{k}\left|\pi_{n,\alpha}^{LAN}(h|X^{n})-\phi_{n}(h)\right|dh\right]^{1+ \gamma}\\ &\leq\left[2p^{\frac{k}{2}}\max\bigg{\{}\int_{\mathbb{R}^{p}}||h ||_{2}^{k}\pi_{n,\alpha}^{LAN}(h|X^{n})dh,\right.\\ &\qquad\qquad\qquad\qquad\qquad\qquad\left.\int_{\mathbb{R}^{p}}||h ||_{2}^{k}\phi_{n}(h)dh\right\}\right]^{1+\gamma},\end{split}\]
Next, applying Jensen's Inequality, we have that
\[\begin{split}&\Bigg{[}\max\biggl{\{}\int_{\mathbb{R}^{p}}||h||_{2}^{k}\pi_{n,\alpha}^{LAN}(h|X^{n})dh,\int_{\mathbb{R}^{p}}||h||_{2}^{k}\phi_{n}(h)dh\biggr{\}}\Bigg{]}^{1+\gamma}\\ &=\max\Bigg{\{}\bigg{(}\int_{\mathbb{R}^{p}}||h||_{2}^{k}\pi_{n,\alpha}^{LAN}(h|X^{n})dh\bigg{)}^{1+\gamma},\\ &\qquad\qquad\qquad\bigg{(}\int_{\mathbb{R}^{p}}||h||_{2}^{k}\phi_{n}(h)dh\bigg{)}^{1+\gamma}\Bigg{\}}\\ &\leq\max\Bigg{\{}\int_{\mathbb{R}^{p}}||h||_{2}^{k(1+\gamma)}\pi_{n,\alpha}^{LAN}(h|X^{n})dh,\\ &\qquad\qquad\qquad\int_{\mathbb{R}^{p}}||h||_{2}^{k(1+\gamma)}\phi_{n}(h)dh\Bigg{\}}\\ &\leq\int_{\mathbb{R}^{p}}||h||_{2}^{k(1+\gamma)}(\pi_{n,\alpha}^{LAN}(h|X^{n})+\phi_{n}(h))dh.\end{split}\]
Therefore, we have the bound
\[\mathbb{E}_{f_{0,n}}[Z^{1+\gamma}]\leq(2p^{\frac{k}{2}})^{1+\gamma}\mathbb{E}_{f_{0,n }}\bigg{[}\int_{\mathbb{R}^{p}}||h||_{2}^{k(1+\gamma)}\phi_{n}(h)dh\bigg{]}\]
\[+(2p^{\frac{k}{2}})^{1+\gamma}\mathbb{E}_{f_{0,n}}\bigg{[}\int_{\mathbb{R}^{p}}||h ||_{2}^{k(1+\gamma)}\pi_{n,\alpha}^{LAN}(h|X^{n})dh\bigg{]}.\]
We argue that the two terms on the right side of the above are upper bounded by constants. By Assumption **(A3)**, there exists an \(M_{3}<\infty\) such that \(\mathbb{E}_{f_{0,n}}[\int_{\mathbb{R}^{p}}||h||_{2}^{k(1+\gamma)}\pi_{n,\alpha}^{LAN}(h|X^{n})dh]\leq M_{3}\). Arguing as in (19), with \(k(1+\gamma)\) in place of \(k\), there exists an \(M_{4}<\infty\) such that \(\mathbb{E}_{f_{0,n}}[\int_{\mathbb{R}^{p}}||h||_{2}^{k(1+\gamma)}\phi_{n}(h)dh]\leq M_{4}\).
Thus, we have shown that
\[\mathbb{E}_{f_{0,n}}[Z^{1+\gamma}] \leq(2p^{k/2})^{1+\gamma}(M_{3}+M_{4})\] \[\leq 2(2p^{k/2})^{1+\gamma}\max\{M_{3},M_{4}\}.\]
We finally upper bound \(\mathbb{P}(\mathcal{A}^{c})\) and \(\mathbb{P}(\mathcal{B}^{c})\). First, note that \(f_{n}^{+}(g,h)\) is the same as \(f_{n}(g,h)\) from [2]. Next, we use the fact that \(\{1-x\}^{+}=\{x-1\}^{-}\) to see
\[f_{n}^{-}(g,h) =\left\{\frac{\pi_{n,\alpha}^{LAN}(h|X^{n})\phi_{n}(g)}{\phi_{n}( h)\pi_{n,\alpha}^{LAN}(g|X^{n})}-1\right\}^{-} \tag{21}\] \[=\left\{1-\frac{\pi_{n,\alpha}^{LAN}(h|X^{n})\phi_{n}(g)}{\phi_{n }(h)\pi_{n,\alpha}^{LAN}(g|X^{n})}\right\}^{+}=f_{n}^{+}(h,g).\]
Taking the supremum in \(g,h\) over \(B_{0}(r_{n})\) of both sides of (21) gives us that
\[\sup_{g,h\in B_{0}(r_{n})}f_{n}^{-}(g,h)=\sup_{g,h\in B_{0}(r_{n})}f_{n}^{+}(h, g). \tag{22}\]
Thus, there exists \(N_{4}\coloneqq N_{4}(\eta,\epsilon,M_{3},M_{4},\gamma,p,k)\) such that \(\mathbb{P}(\mathcal{A}^{c})=\mathbb{P}(\mathcal{B}^{c})\) is sufficiently small for all \(n>N_{4}\) by [2, Lemma 5]:
\[\mathbb{P}(\mathcal{A}^{c})=\mathbb{P}(\mathcal{B}^{c})<\frac{\epsilon^{\frac {1+\gamma}{\gamma}}}{2^{1+\frac{2+\gamma}{\gamma}}p^{\left(\frac{k}{2}\right) \left(\frac{1+\gamma}{\gamma}\right)}\max\{M_{3},M_{4}\}^{\frac{1}{\gamma}}}.\]
Putting this together, we find that for \(n>N_{4}\),
\[\mathbb{E}_{f_{0,n}}[Z^{1+\gamma}]^{\frac{1}{1+\gamma}}\left( \mathbb{P}_{f_{0,n}}(\mathcal{A}^{c})+\mathbb{P}_{f_{0,n}}(\mathcal{B}^{c}) \right)^{\frac{\gamma}{1+\gamma}}\] \[\leq\left[2(2p^{k/2})^{1+\gamma}\max\{M_{3},M_{4}\}\right]^{ \frac{1}{1+\gamma}}\] \[\quad\times\left[\frac{\epsilon^{\frac{1+\gamma}{\gamma}}}{2^{ \frac{2+\gamma}{\gamma}}p^{\left(\frac{k}{2}\right)\left(\frac{1+\gamma}{ \gamma}\right)}\max\{M_{3},M_{4}\}^{\frac{1}{\gamma}}}\right]^{\frac{\gamma}{1 +\gamma}}=\epsilon.\]
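The exponent bookkeeping in the last display is easy to get wrong, so the following sketch (ours, for illustration only) plugs the bound on \(\mathbb{E}_{f_{0,n}}[Z^{1+\gamma}]\) and the chosen probability bound into the Hölder term from (13) and confirms numerically that the product collapses to \(\epsilon\); the particular values of \(\gamma,\epsilon,p,k\) and \(\max\{M_{3},M_{4}\}\) are arbitrary.

```
# Numerical sanity check of the final equality (arbitrary positive constants).
gamma_, eps, p, k, M = 0.7, 0.3, 4, 2, 5.0        # M plays the role of max{M3, M4}

Z_bound = 2 * (2 * p**(k / 2))**(1 + gamma_) * M   # bound on E[Z^{1+gamma}]
prob_bound = eps**((1 + gamma_) / gamma_) / (
    2**(1 + (2 + gamma_) / gamma_)
    * p**((k / 2) * (1 + gamma_) / gamma_)
    * M**(1 / gamma_)
)                                                  # bound on P(A^c) = P(B^c)

holder_term = Z_bound**(1 / (1 + gamma_)) * (2 * prob_bound)**(gamma_ / (1 + gamma_))
print(holder_term, eps)                            # both print 0.3 (up to float error)
assert abs(holder_term - eps) < 1e-9
```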
**Final bound.** We have shown that whenever \(\eta\) is chosen small enough, for all \(n>\max(N_{1},N_{2},N_{3},N_{4})\), for arbitrary \(\epsilon>0\),
\[\mathbb{E}[Z] =p^{k/2}\mathbb{E}\Bigg{[}\int_{\mathbb{R}^{p}}||h||_{2}^{k}\left| \pi_{n,\alpha}^{LAN}(h|X^{n})-\phi_{n}(h)\right|dh\Bigg{]}\] \[<2\epsilon.\]
This gives us the desired result.
### _Proof of Theorem 2_
Recall, \(\hat{\theta}^{B}=\int_{\mathbb{R}^{p}}\theta\pi_{n,\alpha}(\theta|X^{n})d\theta\). Using the change of variable \(h=\sqrt{n}(\theta-\theta^{*})\),
\[\hat{\theta}^{B}=\int\theta\pi_{n,\alpha}(\theta|X^{n})d\theta=\frac{1}{\sqrt {n}}\int h\pi_{n,\alpha}^{LAN}(h)dh+\theta^{*}.\]
In the above, we used the definition \(\pi_{n,\alpha}^{LAN}(h)=n^{-\frac{p}{2}}\pi_{n,\alpha}(\theta^{*}+\frac{h}{ \sqrt{n}}|X^{n})\) in (6). Recalling the definition of \(\phi_{n}(h)\) in (6), we can write,
\[\sqrt{n}(\hat{\theta}^{B}-\theta^{*})=\int h\pi_{n,\alpha}^{LAN}(h )dh \tag{23}\] \[=\int h(\pi_{n,\alpha}^{LAN}(h)-\phi_{n}(h))dh+\int h\phi_{n}(h)dh\] \[=\int h(\pi_{n,\alpha}^{LAN}(h)-\phi_{n}(h))dh+\sqrt{n}(\hat{ \theta}^{ML}-\theta^{*}).\]
The second term of (23) converges weakly to \(\mathcal{N}(0,V_{\theta^{*}}^{-1})\) by Assumption **(A0)**. It remains to show that the first term of the last line of (23) is \(o_{p}(1)\). This follows from
\[\left|\left|\int h(\pi_{n,\alpha}^{LAN}(h)-\phi_{n}(h))dh\right| \right|_{1}\] \[\leq\int\left|\left|h\right|\right|_{1}\left|\pi_{n,\alpha}^{LAN}( h)-\phi_{n}(h)\right|dh=o_{p}(1)\]
where the equality follows from Theorem 1 for \(k=1\).
## V Technical Lemmas
**Lemma 1**.: _Consider sequences of densities \(\varphi_{n}\) and \(\psi_{n}\). For a given compact set \(K\subset\mathbb{R}^{p}\), suppose that the densities \(\varphi_{n}\) and \(\psi_{n}\) are positive on \(K\). Then for any function, \(s(h)\), that is nonnegative on all of \(\mathbb{R}^{p}\),_
\[\int_{\mathbb{R}^{p}}s(h)\left|\varphi_{n}(h)-\psi_{n}(h)\right|dh\] \[\leq\left[\sup_{g,h\in K}\tilde{f}_{n}^{+}(g,h)\right]\int_{ \mathbb{R}^{p}}s(h)\psi_{n}(h)dh\] \[\quad+\left[\sup_{g,h\in K}\tilde{f}_{n}^{-}(g,h)\right]\int_{ \mathbb{R}^{p}}s(h)\varphi_{n}(h)dh\] \[\quad+\int_{\mathbb{R}^{p}\setminus K}s(h)\left(\psi_{n}(h)+ \varphi_{n}(h)\right)dh,\]
_where_
\[\tilde{f}_{n}^{+}(g,h) \coloneqq\left\{1-\frac{\varphi_{n}(h)\psi_{n}(g)}{\psi_{n}(h) \varphi_{n}(g)}\right\}^{+},\] \[\tilde{f}_{n}^{-}(g,h) \coloneqq\left\{\frac{\psi_{n}(h)\varphi_{n}(g)}{\varphi_{n}(h) \psi_{n}(g)}-1\right\}^{-}.\]
**Proof.**_Since \(s(h)\), \(\psi_{n}(h)\), and \(\varphi_{n}(h)\) are nonnegative on all of \(\mathbb{R}^{p}\), we have that \(s(h)|\psi_{n}(h)-\varphi_{n}(h)|\leq s(h)(\psi_{n}(h)+\varphi_{n}(h))\). Using this fact and that \(|\psi_{n}(h)-\varphi_{n}(h)|=\{\psi_{n}(h)-\varphi_{n}(h)\}^{-}+\{\psi_{n}(h)- \varphi_{n}(h)\}^{+}\),_
\[\int_{\mathbb{R}^{p}}s(h)|\psi_{n}(h)-\varphi_{n}(h)|dh \tag{24}\] \[=\int_{K}s(h)|\psi_{n}(h)-\varphi_{n}(h)|dh\] \[\quad+\int_{\mathbb{R}^{p}\setminus K}s(h)|\psi_{n}(h)-\varphi_{n}(h )|dh\] \[\leq\int_{K}s(h)\{\psi_{n}(h)-\varphi_{n}(h)\}^{+}dh\] \[\quad+\int_{K}s(h)\{\psi_{n}(h)-\varphi_{n}(h)\}^{-}dh\] \[\quad+\int_{\mathbb{R}^{p}\setminus K}s(h)(\psi_{n}(h)+\varphi_{n}(h ))dh.\]
_We bound the first two terms on the right side of (24) separately. Define \(a_{n}=(\int_{K}\psi_{n}(g)dg)^{-1}\) and \(b_{n}=(\int_{K}\varphi_{n}(g)dg)^{-1}\) and assume \(a_{n}\geq b_{n}\) without loss of generality. For the first term in (24), for all \(h\in K\),_
\[\frac{\varphi_{n}(h)}{\psi_{n}(h)}=\frac{a_{n}}{b_{n}}\int_{K}\frac{\varphi_{n}( h)}{\psi_{n}(h)}\frac{\psi_{n}(g)}{\varphi_{n}(g)}b_{n}\varphi_{n}(g)dg.\]
_Thus,_
\[\{\psi_{n}(h)-\varphi_{n}(h)\}^{+}=\left\{1-\frac{\varphi_{n}(h)}{\psi_ {n}(h)}\right\}^{+}\psi_{n}(h)\] \[=\left\{1-\frac{a_{n}}{b_{n}}\int_{K}\frac{\varphi_{n}(h)}{\psi_{n} (h)}\frac{\psi_{n}(g)}{\varphi_{n}(g)}b_{n}\varphi_{n}(g)dg\right\}^{+}\psi_{n }(h). \tag{25}\]
_Applying Jensen's inequality on the convex function \(f(x)=\{1-x\}^{+}\), we have \(\{1-\mathbb{E}[X]\}^{+}\leq\mathbb{E}[\{1-X\}^{+}]\). Applying this to (25),_
\[\left\{1-\frac{a_{n}}{b_{n}}\int_{K}\frac{\varphi_{n}(h)}{\psi_ {n}(h)}\frac{\psi_{n}(g)}{\varphi_{n}(g)}b_{n}\varphi_{n}(g)dg\right\}^{+}\] \[\leq\int_{K}\left\{1-\frac{a_{n}}{b_{n}}\frac{\varphi_{n}(h)}{ \psi_{n}(h)}\frac{\psi_{n}(g)}{\varphi_{n}(g)}\right\}^{+}b_{n}\varphi_{n}(g)dg \tag{26}\] \[=\int_{K}\tilde{f}_{n}^{+}(g,h)b_{n}\varphi_{n}(g)dg.\]
_The final step above uses that when \(a_{n}/b_{n}\geq 1\), we have \(\{1-(a_{n}/b_{n})x\}^{+}\leq\{1-x\}^{+}\) for \(x\geq 0\). Using (25)-(26) we can rewrite the first term of (24) as_
\[\int_{K}s(h)\{\psi_{n}(h)-\varphi_{n}(h)\}^{+}dh\] \[\leq\int_{K}\int_{K}s(h)\tilde{f}_{n}^{+}(g,h)b_{n}\varphi_{n}(g) \psi_{n}(h)dgdh\] \[\leq\Biggl{[}\sup_{g,h\in K}\tilde{f}_{n}^{+}(g,h)\Biggr{]}\int_{ K}s(h)\psi_{n}(h)\left[\int_{K}b_{n}\varphi_{n}(g)dg\right]dh\] \[\leq\Biggl{[}\sup_{g,h\in K}\tilde{f}_{n}^{+}(g,h)\Biggr{]}\int_{ K}s(h)\psi_{n}(h)dh,\] \[\leq\Biggl{[}\sup_{g,h\in K}\tilde{f}_{n}^{+}(g,h)\Biggr{]}\int_{ \mathbb{R}^{p}}s(h)\psi_{n}(h)dh.\]
_We turn to the second term of (24) and write_
\[\left\{\psi_{n}(h)-\varphi_{n}(h)\right\}^{-}=\left\{\frac{\psi_ {n}(h)}{\varphi_{n}(h)}-1\right\}^{-}\varphi_{n}(h)\] \[=\left\{\frac{a_{n}}{b_{n}}\int_{K}\frac{\psi_{n}(h)\varphi_{n}(g )}{\varphi_{n}(h)\psi_{n}(g)}b_{n}\psi_{n}(g)dg-1\right\}^{-}\varphi_{n}(h).\]
_Applying Jensen's inequality to the convex function \(f(x)=\{x-1\}^{-}\), we have that \(\{\mathbb{E}[X]-1\}^{-}\leq\mathbb{E}[\{X-1\}^{-}]\) and_
\[\left\{\frac{a_{n}}{b_{n}}\int_{K}\frac{\psi_{n}(h)\varphi_{n}(g) }{\varphi_{n}(h)\psi_{n}(g)}b_{n}\psi_{n}(g)dg-1\right\}^{-}\] \[\leq\int_{K}\biggl{\{}\frac{a_{n}\psi_{n}(h)\varphi_{n}(g)}{b_{n} \varphi_{n}(h)\psi_{n}(g)}-1\biggr{\}}^{-}b_{n}\psi_{n}(g)dg\] \[\stackrel{{(a)}}{{\leq}}\int_{K}\left\{\frac{\psi_{n }(h)\varphi_{n}(g)}{\varphi_{n}(h)\psi_{n}(g)}-1\right\}^{-}b_{n}\psi_{n}(g)dg\] \[=\int_{K}\tilde{f}_{n}^{-}(g,h)b_{n}\psi_{n}(g)dg\stackrel{{ (b)}}{{\leq}}\int_{K}\tilde{f}_{n}^{-}(g,h)a_{n}\psi_{n}(g)dg.\]
_Step \((a)\) uses that \(\{(a_{n}/b_{n})x-1\}^{-}\leq\{x-1\}^{-}\) for \(x\geq 0\) when \(a_{n}/b_{n}\geq 1\) and step \((b)\) that \(b_{n}\leq a_{n}\). Hence, we can rewrite the second term of (24) as_
\[\int_{K}s(h)\{\psi_{n}(h)-\varphi_{n}(h)\}^{-}dh\] \[\leq\int_{K}\int_{K}s(h)\tilde{f}_{n}^{-}(g,h)a_{n}\psi_{n}(g) \varphi_{n}(h)dgdh\] \[\leq\Biggl{[}\sup_{g,h\in K}\tilde{f}_{n}^{-}(g,h)\Biggr{]}\int_{ K}s(h)\varphi_{n}(h)\biggl{[}\int_{K}a_{n}\psi_{n}(g)dg\biggr{]}dh.\]
_This establishes the final bound since_
\[\int_{K}s(h)\varphi_{n}(h)\left[\int_{K}a_{n}\psi_{n}(g)dg\right]dh\] \[=\int_{K}s(h)\varphi_{n}(h)dh\leq\int_{\mathbb{R}^{p}}s(h)\varphi _{n}(h)dh.\]
**Lemma 2**.: _Suppose a random variable \(Z\stackrel{{ d}}{{\coloneqq}}Y|X^{n}\) has a density \(f_{Z}(\cdot)\) on \(\mathbb{R}^{p}\). Assume there exists a \(\gamma>0\) such that \(\mathbb{E}_{f_{0,n}}[\mathbb{E}[||Z||_{2}^{k(1+\gamma)}]]<\infty\). Then, for any \(\epsilon>0\) and \(r_{n}\rightarrow\infty\), there exists an integer \(N(\epsilon,\gamma,k)>0\) such that for all \(n>N(\epsilon,\gamma,k)\),_
\[\mathbb{E}_{f_{0,n}}\left[\int_{||z||_{2}>r_{n}}||z||_{2}^{k}\,f_{Z}(z)dz \right]<\epsilon, \tag{27}\]
**Proof**.: _For an arbitrary sequence \(r_{n}\rightarrow\infty\), we choose \(N_{0}\) large enough that \(r_{n}>0\) for all \(n>N_{0}\). Let \(z=(z_{1},\ldots,z_{p})\in\mathbb{R}^{p}\) and let \(f_{||z||_{2}^{k}}(\cdot)\) denote the density function of \(||z||_{2}^{k}\). Define the random variable_
\[V\coloneqq||Z||_{2}^{k}\,\mathbb{1}\left\{||Z||_{2}>r_{n}\right\}=\begin{cases}0&\text{if }||Z||_{2}\leq r_{n},\\ ||Z||_{2}^{k}&\text{if }||Z||_{2}>r_{n}.\end{cases}\]
_Thus, the density of \(V\) is given by_
\[f_{V}(v)=\begin{cases}\mathbb{P}(||Z||_{2}\leq r_{n})&\text{if }v=0,\\ f_{||z||_{2}^{k}}(v)&\text{if }v>r_{n}.\end{cases}\]
_Next, notice that_
\[\mathbb{E}[V]=\int_{0}^{r_{n}}\mathbb{P}(V>t)dt+\int_{r_{n}}^{\infty}\mathbb{P}(V> t)dt. \tag{28}\]
_Using the definition of \(V\) and its density function, we study (28). Consider the first term on the right side,_
\[\int_{0}^{r_{n}}\mathbb{P}(V>t)dt =\int_{0}^{r_{n}}\int_{t}^{\infty}f_{V}(v)dvdt\] \[\stackrel{{(a)}}{{=}}\int_{0}^{r_{n}}\mathbb{P}(||Z ||_{2}^{k}>r_{n})dt\] \[=r_{n}\mathbb{P}(||Z||_{2}^{k}>r_{n})\stackrel{{(b)}}{{ \leq}}\frac{\mathbb{E}[||Z||_{2}^{k(1+\gamma)}]}{r_{n}^{\gamma}}. \tag{29}\]
_Step \((a)\) in (29) follows because, for \(0<t<r_{n}\),_
\[\int_{t}^{\infty}f_{V}(v)dv=\int_{r_{n}}^{\infty}f_{||z||_{2}^{k}}(v)dv=\mathbb{P} (||Z||_{2}^{k}>r_{n}),\]
_and step \((b)\) by Markov's inequality._
_For the second term on the right side of (28), we again
use Markov's Inequality as_
\[\begin{split}\int_{r_{n}}^{\infty}\mathbb{P}(V>t)dt& \leq\int_{r_{n}}^{\infty}\frac{\mathbb{E}[V^{1+\gamma}]}{t^{1+\gamma}}dt\\ &\stackrel{{(c)}}{{\leq}}\int_{r_{n}}^{\infty}\frac{ \mathbb{E}[||Z||_{2}^{k(1+\gamma)}]}{t^{1+\gamma}}dt\\ &=\frac{\mathbb{E}[||Z||_{2}^{k(1+\gamma)}]}{\gamma r_{n}^{ \gamma}}.\end{split} \tag{30}\]
_Step \((c)\) in (30) follows because_
\[V^{1+\gamma}=\left(||Z||_{2}^{k}\mathbb{1}\left\{||Z||_{2}>r_{n}\right\} \right)^{1+\gamma}\leq||Z||_{2}^{k(1+\gamma)}.\]
_Plugging (29) and (30) into (28), we find_
\[\mathbb{E}[V]\leq\frac{\gamma+1}{\gamma r_{n}^{\gamma}}\mathbb{E}\left[||Z||_ {2}^{k(1+\gamma)}\right]. \tag{31}\]
_Taking the expectation with respect to \(X^{n}\) in (31) gives,_
\[\begin{split}\mathbb{E}_{f_{0,n}}\left[\mathbb{E}[V]\right]& =\mathbb{E}_{f_{0,n}}\left[\mathbb{E}[||Z||_{2}^{k}\mathbb{1}\left\{||Z||_{2 }>r_{n}\right\}]\right]\\ &\leq\frac{\gamma+1}{\gamma r_{n}^{\gamma}}\mathbb{E}_{f_{0,n}} \left[\mathbb{E}[||Z||_{2}^{k(1+\gamma)}]\right]<\epsilon,\end{split} \tag{32}\]
_where the final bound in the equations above follows since \(\mathbb{E}_{f_{0,n}}\left[\mathbb{E}[||Z||_{2}^{k(1+\gamma)}]\right]<\infty\) by assumption and \(r_{n}\rightarrow\infty\). Thus, there exists \(N(\epsilon,\gamma,k)\geq N_{0}\) such that the bound is true for all \(n>N(\epsilon,\gamma,k)\), which proves (27)._
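The inequality behind (31)-(32) is a plain moment/tail trade-off: for \(r_{n}\geq 1\) and \(k\geq 1\), the truncated \(k\)-th moment of \(||Z||_{2}\) is controlled by its \(k(1+\gamma)\)-th moment divided by \(r_{n}^{\gamma}\). The Monte Carlo sketch below (ours, with a standard Gaussian standing in for a generic draw of \(Z\)) illustrates the bound.

```
import numpy as np

rng = np.random.default_rng(1)
p, k, gamma_, r = 3, 2, 0.5, 4.0          # r plays the role of r_n (>= 1)

Z = rng.standard_normal((1_000_000, p))   # illustrative stand-in for draws of Z | X^n
norms = np.linalg.norm(Z, axis=1)

lhs = np.mean(norms**k * (norms > r))                       # E[||Z||^k 1{||Z|| > r}]
rhs = (gamma_ + 1) / (gamma_ * r**gamma_) * np.mean(norms**(k * (1 + gamma_)))
print(f"truncated moment {lhs:.4e} <= bound {rhs:.4e}: {lhs <= rhs}")
```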
## Acknowledgment
This research was partially supported by the NSF grant DMS-231097 (M. Avella Medina).
|
2303.01291 | Robust, High-Precision GNSS Carrier-Phase Positioning with
Visual-Inertial Fusion | Robust, high-precision global localization is fundamental to a wide range of
outdoor robotics applications. Conventional fusion methods use low-accuracy
pseudorange based GNSS measurements ($>>5m$ errors) and can only yield a coarse
registration to the global earth-centered-earth-fixed (ECEF) frame. In this
paper, we leverage high-precision GNSS carrier-phase positioning and aid it
with local visual-inertial odometry (VIO) tracking using an extended Kalman
filter (EKF) framework that better resolves the integer ambiguity concerned
with GNSS carrier-phase. %to achieve centimeter-level accuracy in the ECEF
frame. We also propose an algorithm for accurate GNSS-antenna-to-IMU extrinsics
calibration to accurately align VIO to the ECEF frame. Together, our system
achieves robust global positioning demonstrated by real-world hardware
experiments in severely occluded urban canyons, and outperforms the
state-of-the-art RTKLIB by a significant margin in terms of integer ambiguity
solution fix rate and positioning RMSE accuracy. | Erqun Dong, Sheroze Sheriffdeen, Shichao Yang, Jing Dong, Renzo De Nardi, Carl Ren, Xiao-Wen Chang, Xue Liu, Zijian Wang | 2023-03-02T14:17:05Z | http://arxiv.org/abs/2303.01291v1 | # Robust, High-Precision GNSS Carrier-Phase Positioning
###### Abstract
Robust, high-precision global localization is fundamental to a wide range of outdoor robotics applications. Conventional fusion methods use low-accuracy pseudorange based GNSS measurements (\(>5m\) errors) and can only yield a coarse registration to the global earth-centered-earth-fixed (ECEF) frame. In this paper, we leverage high-precision GNSS carrier-phase positioning and aid it with local visual-inertial odometry (VIO) tracking using an extended Kalman filter (EKF) framework that better resolves the integer ambiguity concerned with GNSS carrier-phase. We also propose an algorithm for accurate GNSS-antenna-to-IMU extrinsics calibration to accurately align VIO to the ECEF frame. Together, our system achieves robust global positioning demonstrated by real-world hardware experiments in severely occluded urban canyons, and outperforms the state-of-the-art RTKLIB by a significant margin in terms of integer ambiguity solution fix rate and positioning RMSE accuracy.
## I Introduction
In this paper, we tackle the robustness issue of GNSS carrier-phase-based global positioning in challenging urban canyons with the aid of VIO.
Robust, centimeter-level global localization is a cornerstone in many outdoor applications such as mobile robots, self-driving cars, and augmented reality. Currently, a large family of localization solutions rely on visual-inertial odometry (VIO) [1, 2, 3] because of the accurate short-distance tracking. Without pre-built maps that are costly to build and maintain, VIO has limited usage in large-scale applications that require accurate global positioning. Inspired by some recent works that fuse GNSS pseudorange positioning (GNSS-PR) into VIO to mitigate VIO's drift [4, 5, 6, 7, 8], we fuse VIO into GNSS carrier-phase positioning (GNSS-CP) towards a low-cost but high-precision global positioning for outdoor applications.
GNSS-CP has demonstrated the ability to provide drift-free centimeter-level localization, with reported 2-5cm accuracy under open sky. In contrast, GNSS-PR with \(>\)5m accuracy has limited potential for centimeter-level localization. See Fig. 1 for a demonstration. However, GNSS-CP is more susceptible to line-of-sight blockage and multi-path disturbance in dense urban areas. Section V-D shows that the accuracy of GNSS-CP in urban canyons can degrade from centimeter-level to tens of meters. So, towards robust GNSS carrier-phase-based global positioning in challenging urban canyons, we address these points in this paper: 1) leveraging the local tracking and uncertainty estimation from VIO to better condition and constrain the search space of the integer least squares, and 2) from an engineering perspective, rejecting measurements from outlier satellites in urban environments so that the GNSS carrier-phase resolution is less prone to noisy multi-path signals.
To achieve this, we carefully register the VIO trajectories to the ECEF frame with a proposed calibration algorithm to solve the translation offset between the IMU and the center of the GNSS antenna. We loosely couple VIO and GNSS-CP in an EKF framework, where VIO serves as the motion model that propagates prior state estimates and covariances, and GNSS-CP serves as the measurement update to obtain a posterior float solution. We design strict outlier rejection to tackle signal blockage and multi-path reflection in challenging environments such as urban canyons. Altogether, the integer ambiguity resolution can be seeded with good float solutions (where the integer ambiguities are treated as real numbers) to solve for fixed solutions (where the integer ambiguities are fixed as integers) with a higher success rate.
We conduct hardware experiments in urban canyons using the Aria glasses [9] with a rigidly attached, low-cost, multi-band GNSS antenna. Results show that our calibration renders centimeter-level alignment error, and the overall positioning and integer ambiguity fixing rate of our method outperform the state-of-the-art GNSS-CP named RTKLIB [10].
Fig. 1: Demonstration of GNSS pseudorange trajectory (red), GNSS carrier-phase trajectory (green) and VIO trajectory aligned with GNSS carrier-phase in the first \(50\,\mathrm{m}\) (blue). The light-green numbers and dashed lines indicate the direction of four major parts of the trajectory. We can observe that the pseudorange trajectory is much noisier than the other two with a positioning error of up to \(10\,\mathrm{m}\). This indicates the potential of GNSS carrier-phase for more precise global positioning than GNSS pseudorange.
It is worth mentioning that the urban canyons in our experiments (Fig. 4, 5) are more severe than those in the experiments of recent papers [11, 12, 13, 7] in terms of the heights and density of the buildings.
## II Related Work
The fusion of visual-inertial information with GNSS has been explored since the success of visual and visual-inertial localization and mapping [1, 14, 2]. Some works [15, 6, 4, 5, 12, 8] localize in the local ENU coordinate system by a fusion of visual-inertial with GNSS pseudorange positioning. Others can align the local odometry coordinate frame onto the global ECEF coordinate frame with GNSS pseudorange measurements [16, 7]. They model either pseudorange positioning results or pseudorange measurements as Gaussian noises. However, due to the noisiness of GNSS pseudorange measurements/positioning, the alignment of VIO and GNSS is subject to large errors in the range of several meters even under the open sky (refer to Table II in [7]). Importantly, most works that fuse pseudorange GNSS measurements and visual-inertial sensing (such as [5, 6, 8]) report results against vanilla RTK or RTKLIB as groundtruth, whereas in this paper we achieve better performance than the RTK groundtruth as used by others.
There were prior works that attempted global positioning with GNSS carrier-phase, camera, and IMU. Li et al. [11] and Yoder et al. [13] fused visual-inertial information as a prior estimate to aid the integer ambiguity resolution. Shepard et al. [17] used a bundle adjustment method for estimation. These papers assumed that the IMU-Antenna extrinsics calibration parameters are known constants beforehand without characterizing their values. In contrast, we propose a lightweight optimization method to estimate it and experimentally validate it. Moreover, all three works lack real-world experiments in severely challenging urban canyons like ours. Moreover, [13] has the limitation of requiring two GNSS antennas installed on the device.
Research efforts have been made to assist carrier-phase positioning in unfavorable sky visibility conditions. Takasu et al. [18] use dead-reckoning by IMU to take over the navigation. Building heights constructed by Lidar [19] are used to reduce satellite signal outliers. Thresholding-based outlier detection methods are also used to mitigate challenging urban canyon effects [11, 12, 13, 7], but the real-world environments in their experiments are not as challenging as those in ours. In our paper, we consider VIO's incremental motion as a strong prior and reject outliers in both the measurement update and the integer ambiguity resolution.
## III Problem Formulation
Suppose there are \(g\) groups of signals received by our GNSS module and the reference station. The signals in each group are from the same constellation and share the same frequency; suppose group \(j\) has \(m_{j}\) satellites. Then the GNSS measurement at a certain epoch is \(\boldsymbol{y}=[\boldsymbol{\phi}_{1}^{\top},\ldots,\boldsymbol{\phi}_{g}^{ \top},\boldsymbol{\rho}_{1}^{\top},\ldots,\boldsymbol{\rho}_{g}^{\top}]^{\top}\), where \(\boldsymbol{\phi}_{j}\) is the vector of double-differenced carrier-phase measurements [20] in the \(j\)-th group, and \(\boldsymbol{\rho}_{j}\) is the vector of double-differenced pseudorange measurements in the \(j\)-th group. With one satellite selected as the reference satellite for double-differencing, \(\boldsymbol{\phi}_{j}\) and \(\boldsymbol{\rho}_{j}\) both have \((m_{j}-1)\) dimensions.
The visual-inertial inputs are processed by VIO providing IMU-centric device poses {I} in the local odometry frame {O}, denoted as \(\boldsymbol{T}_{\textsc{O}\textsc{I}}\in\mathrm{SE}(3)\) where \(\mathrm{SE}(3)\) is the three-dimensional Special Euclidean Group. The VIO also outputs associated uncertainty [1]. Then we interpolate the VIO trajectory at the timestamps of GNSS carrier-phase measurements for time-interpolated VIO poses. We take a loosely-coupled fusion approach by feeding the output of VIO unilaterally to our GNSS-CP algorithm. We assume the intrinsics and extrinsics of the cameras and IMUs are calibrated, and the extrinsics between the GNSS antenna and the IMU, denoted \(\boldsymbol{t}_{\textsc{I}\textsc{A}}\in\mathbb{R}^{3}\), need to be estimated.
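As a concrete illustration of the time interpolation of VIO poses at GNSS epochs, the sketch below (ours; the variable names and the use of SciPy's Slerp are our own choices, not the paper's implementation) linearly interpolates the translation and spherically interpolates the rotation of \(\boldsymbol{T}_{\textsc{O}\textsc{I}}\) at the carrier-phase timestamps.

```
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_vio(t_vio, positions, rotations, t_gnss):
    """Interpolate VIO poses (positions Nx3, rotations as a scipy Rotation object)
    at GNSS carrier-phase timestamps t_gnss (must lie inside [t_vio[0], t_vio[-1]])."""
    p_interp = np.stack([np.interp(t_gnss, t_vio, positions[:, i]) for i in range(3)], axis=1)
    r_interp = Slerp(t_vio, rotations)(t_gnss)   # spherical interpolation of orientation
    return p_interp, r_interp

# toy example: 10 Hz VIO poses queried at 5 Hz GNSS epochs
t_vio = np.arange(0.0, 2.01, 0.1)
positions = np.column_stack([t_vio, np.sin(t_vio), np.zeros_like(t_vio)])
rotations = Rotation.from_euler("z", 10.0 * t_vio, degrees=True)
t_gnss = np.arange(0.05, 1.96, 0.2)
p_q, r_q = interpolate_vio(t_vio, positions, rotations, t_gnss)
print(p_q.shape, r_q.as_quat().shape)
```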
We assume that an initial convergence of carrier-phase positioning (not necessarily fixed solution) is available before our fusion algorithm starts, which allows us to initially align the VIO local odometry frame to the ECEF frame (shown in Fig. (b)b and Fig. (b)b) to warm start our algorithm.
The state to estimate is denoted \(\boldsymbol{x}=[\prescript{\text{E}}{\boldsymbol{p}}_{\textsc{u}}^{\top}, \boldsymbol{d}_{1}^{\top},\ldots,\boldsymbol{d}_{g}^{\top}]^{\top}\), where \(\prescript{\text{E}}{\boldsymbol{p}}_{\textsc{u}}\) means the current position of the user in the ECEF frame {E}, and \(\boldsymbol{d}_{j}=[d_{j}^{(1,2)},d_{j}^{(1,3)},\ldots,d_{j}^{(1,m_{j})}]^{\top}\) is the double-differenced integer ambiguities for satellites in group \(j\). We build our fusion system upon a baseline EKF named RTKLIB [10]. The prediction step of the baseline EKF uses a constant velocity model; whereas we use VIO for better kinematic modeling, introduced in Section IV-B. For the measurement update step, the measurement equations are double-differenced carrier-phase and pseudorange measurements. Specifically, for each group \(j\),
\[\boldsymbol{\phi}_{j}=\boldsymbol{h}_{\boldsymbol{\phi},j}(\boldsymbol{x})+ \boldsymbol{\epsilon}_{\phi,j},\ \ \ \ \ \boldsymbol{\rho}_{j}=\boldsymbol{h}_{\boldsymbol{\rho},j}(\boldsymbol{x})+ \boldsymbol{\epsilon}_{\rho,j},\]
where \(\boldsymbol{\epsilon}_{\phi,j}\) is the Gaussian noise of the double-differenced carrier-phase measurements and \(\boldsymbol{\epsilon}_{\rho,j}\) is the Gaussian noise of the double-differenced pseudorange measurements (For the formulation of \(\boldsymbol{\epsilon}_{\phi,j}\) and \(\boldsymbol{\epsilon}_{\rho,j}\), readers are referred to [20], Section 7.5). The individual measurement functions in \(\boldsymbol{h}_{\phi,j}(\boldsymbol{x})\) and \(\boldsymbol{h}_{\boldsymbol{\rho},j}(\boldsymbol{x})\) are
\[h_{\phi,j}^{(1,s)}(\boldsymbol{x})=(||{}^{\mathrm{E}}\boldsymbol{t}_{\mathrm{u}}^{(1)}||_{2}-||{}^{\mathrm{E}}\boldsymbol{t}_{\mathrm{r}}^{(1)}||_{2})-(||{}^{\mathrm{E}}\boldsymbol{t}_{\mathrm{u}}^{(s)}||_{2}-||{}^{\mathrm{E}}\boldsymbol{t}_{\mathrm{r}}^{(s)}||_{2})+\lambda_{j}d_{j}^{(1,s)},\] \[h_{\rho,j}^{(1,s)}(\boldsymbol{x})=(||{}^{\mathrm{E}}\boldsymbol{t}_{\mathrm{u}}^{(1)}||_{2}-||{}^{\mathrm{E}}\boldsymbol{t}_{\mathrm{r}}^{(1)}||_{2})-(||{}^{\mathrm{E}}\boldsymbol{t}_{\mathrm{u}}^{(s)}||_{2}-||{}^{\mathrm{E}}\boldsymbol{t}_{\mathrm{r}}^{(s)}||_{2}),\]
where subscripts \(\mathrm{u}\) and \(\mathrm{r}\) mean the user and the reference station, superscripts \(1\) and \(s\) are satellite ID's where without loss of generality we assume the reference satellite's ID is \(1\), \(\lambda_{j}\) means the wavelength of group \(j\)'s signal, \(||{}^{\mathrm{E}}\boldsymbol{t}_{\mathrm{u}}^{(1)}||_{2}\) means the distance between the user \(\mathrm{u}\) and satellite \(1\) satisfying \(||{}^{\mathrm{E}}\boldsymbol{t}_{\mathrm{u}}^{(1)}||_{2}=||{}^{\mathrm{E}}\boldsymbol{p}_{\mathrm{u}}-{}^{\mathrm{E}}\boldsymbol{p}^{(1)}||_{2}\), and similar meanings apply to \(||{}^{\mathrm{E}}\boldsymbol{t}_{\mathrm{r}}^{(1)}||_{2}\), \(||{}^{\mathrm{E}}\boldsymbol{t}_{\mathrm{u}}^{(s)}||_{2}\), and \(||{}^{\mathrm{E}}\boldsymbol{t}_{\mathrm{r}}^{(s)}||_{2}\).
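As an illustrative sketch (ours, not the paper's implementation), the following evaluates the double-differenced measurement functions above for a toy geometry; the positions, wavelength and ambiguity value are made-up inputs.

```
import numpy as np

def dd_range(p_user, p_ref, p_sat1, p_sats):
    """Double-differenced geometric range between user/reference and satellites 1 and s."""
    single_diff_1 = np.linalg.norm(p_user - p_sat1) - np.linalg.norm(p_ref - p_sat1)
    single_diff_s = np.linalg.norm(p_user - p_sats) - np.linalg.norm(p_ref - p_sats)
    return single_diff_1 - single_diff_s

def h_pseudorange(p_user, p_ref, p_sat1, p_sats):
    return dd_range(p_user, p_ref, p_sat1, p_sats)

def h_carrier_phase(p_user, p_ref, p_sat1, p_sats, wavelength, dd_ambiguity):
    return dd_range(p_user, p_ref, p_sat1, p_sats) + wavelength * dd_ambiguity

# toy geometry (meters, ECEF-like scale)
p_user = np.array([-2.30e6, -3.80e6, 4.60e6])
p_ref  = p_user + np.array([15e3, -8e3, 2e3])          # ~20 km baseline
p_sat1 = np.array([-1.0e7, -1.9e7, 1.4e7])
p_sats = np.array([ 0.6e7, -2.3e7, 1.1e7])
print(h_pseudorange(p_user, p_ref, p_sat1, p_sats))
print(h_carrier_phase(p_user, p_ref, p_sat1, p_sats, wavelength=0.19, dd_ambiguity=12))
```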
The EKF measurement update treats all components of \(\boldsymbol{x}\), including both position variables and integer ambiguities, as real numbers to solve a float solution [20]. Based on the float solution, we then use the LAMBDA algorithm [21, 22]
to resolve the fixed solution, where a core step is solving (searching) an integer least squares problem
\[\mathbf{D}=\operatorname*{argmin}_{\mathbf{D}\in\mathbb{Z}^{N}}~{}(\mathbf{D}-\hat{\mathbf{D}})^{ \top}\mathbf{W}_{\hat{\mathbf{D}}}^{-1}(\mathbf{D}-\hat{\mathbf{D}}), \tag{1}\]
where \(\mathbf{D}=[\mathbf{d}_{1}^{\top},\dots,\mathbf{d}_{g}^{\top}]^{\top}\) is all the double-differenced integer ambiguities in all groups, \(\hat{\mathbf{D}}\) is its float solution, \(N\) is its dimension, and \(\mathbf{W}_{\hat{D}}^{-1}\) is the inverse of the covariance of \(\hat{\mathbf{D}}\). After integer ambiguity is searched and validated with the ratio test [23], we can get a fixed solution of \({}^{\mathrm{E}}\mathbf{p}_{u}\) that is more accurate than the float solution [21]. In this integer least-squares problem, the float solution and its covariance form an ellipsoidal search space for the correct integer, so their quality affects the success rate of the integer ambiguity search, and equivalently affects the fixed solution rate.
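The paper solves (1) with the LAMBDA/MLAMBDA machinery; as a toy stand-in (ours), the sketch below brute-forces the integer least squares over a small box around the rounded float solution and applies the ratio test, which is enough to illustrate how the float solution and its covariance define the search and the validation.

```
import itertools
import numpy as np

def brute_force_ils(d_float, W, radius=2, ratio_threshold=3.0):
    """Search integers within `radius` of round(d_float), minimizing the quadratic form
    (D - d_float)^T W^{-1} (D - d_float); validate with the ratio test."""
    W_inv = np.linalg.inv(W)
    base = np.round(d_float).astype(int)
    costs = []
    for offset in itertools.product(range(-radius, radius + 1), repeat=len(d_float)):
        D = base + np.array(offset)
        r = D - d_float
        costs.append((float(r @ W_inv @ r), tuple(D)))
    costs.sort()
    best, second = costs[0], costs[1]
    ratio = second[0] / max(best[0], 1e-12)
    return np.array(best[1]), ratio, ratio >= ratio_threshold

d_float = np.array([3.1, -2.2, 7.05])                  # float ambiguities (illustrative)
W = np.diag([0.04, 0.09, 0.01])                        # their covariance (illustrative)
D_fix, ratio, accepted = brute_force_ils(d_float, W)
print(D_fix, round(ratio, 2), accepted)
```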
In a nutshell, this paper calculates the state \(\mathbf{x}\) at every epoch with the GNSS measurements \(\mathbf{y}\) at each epoch and the VIO poses \(\mathbf{T}_{OI}\).
## IV Method
We achieve the robust positioning by improving the quality of the float solution and covariance in Eq. (1) for better integer ambiguity resolution with a fusion of VIO into GNSS carrier-phase. Well-aligned VIO can be much more accurate than RTKLIB's constant velocity model in terms of both prior prediction and covariance propagation. In addition, the accuracy of VIO tracking also enables effective rejection of outlier satellites so that only line-of-sight GNSS signals are used for positioning. We introduce these methods in this section, and demonstrate the system performance in the experiments of Sections V-C and V-D.
### _Alignment of VIO and GNSS with Extrinsics Calibration_
We obtain the transformation from the odometry frame \(\{\mathrm{O}\}\) to the ECEF frame \(\{\mathrm{E}\}\), \(\mathbf{T}_{\mathrm{EO}}\), by aligning the VIO trajectory with the GNSS trajectory with a least squares optimization. For precise alignment, the extrinsic parameter \(\mathbf{t}_{\mathrm{IA}}\), which represents the translation between the center of the IMU and the center of the GNSS antenna, should also be considered. Given a GNSS trajectory consisting of a sequence of the user's 3D positions \({}^{\mathrm{E}}\mathbf{p}_{u_{i}},~{}i=1,\dots,k\), and the corresponding time-interpolated VIO trajectory \(\mathbf{T}_{\mathrm{OI}_{i}},~{}i=1,\dots,k\), we optimize the transformation \(\mathbf{T}_{\mathrm{EO}}\) and the extrinsic parameter \(\mathbf{t}_{\mathrm{IA}}\) with
\[\min_{\mathbf{T}_{\mathrm{EO}}\in\mathrm{SE}(3),\atop\mathbf{t}_{\mathrm{IA}}\in \mathbb{R}^{3}}~{}\sum_{i=1}^{k}\left\|\mathbf{T}_{\mathrm{EO}}\cdot\mathbf{T}_{ \mathrm{OI}_{i}}\cdot\begin{bmatrix}\mathbf{t}_{\mathrm{IA}}\\ 1\end{bmatrix}-{}^{\mathrm{E}}\mathbf{p}_{u_{i}}\right\|^{2}. \tag{2}\]
For the alignment \(\mathbf{T}_{\mathrm{EO}}\) to be well constrained on all degrees of freedom, the trajectory needs persistence of excitation [24] along the translation and rotation components. To this end, we use principal component analysis [25] to decide whether the ratio of the first eigenvalue to the sum of all eigenvalues (first eigenvalue ratio, FER) is smaller than a threshold.
We solve the optimization Eq. (2) using a two-pass process. First, we initialize \(\mathbf{T}_{\mathrm{EO}}\) to identity, fix \(\mathbf{t}_{\mathrm{IA}}\) to 0, and optimize Eq. (2) for a coarse estimate of \(\mathbf{T}_{\mathrm{EO}}\). Second, we initialize \(\mathbf{T}_{\mathrm{EO}}\) to the coarse estimate from the first pass, and optimize Eq. (2) again for \(\mathbf{T}_{\mathrm{EO}}\) and \(\mathbf{t}_{\mathrm{IA}}\) jointly. The two-pass process is necessary for ensuring that the nonlinear optimization is seeded with a good initialization to avoid bad local minimum. Section V-B demonstrates the improvement of alignment accuracy with our proposed calibration method.
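The following is a minimal sketch (ours) of the two-pass optimization of Eq. (2) using SciPy; the parametrization of \(\mathbf{T}_{\mathrm{EO}}\) by a rotation vector and the choice of solver are our assumptions, not the paper's code. Pass one fixes \(\mathbf{t}_{\mathrm{IA}}=0\) and estimates a coarse \(\mathbf{T}_{\mathrm{EO}}\); pass two refines \(\mathbf{T}_{\mathrm{EO}}\) and \(\mathbf{t}_{\mathrm{IA}}\) jointly.

```
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, R_oi, t_oi, p_ecef, t_ia=None):
    """Residuals of Eq. (2): T_EO * T_OI_i * [t_IA; 1] - p_i, stacked over all epochs."""
    R_eo = Rotation.from_rotvec(params[:3]).as_matrix()
    t_eo = params[3:6]
    t_ia = params[6:9] if t_ia is None else t_ia
    pred = (R_eo @ (R_oi @ t_ia + t_oi).T).T + t_eo
    return (pred - p_ecef).ravel()

def align_two_pass(R_oi, t_oi, p_ecef):
    # pass 1: coarse T_EO with the lever arm t_IA fixed to zero
    pass1 = least_squares(residuals, np.zeros(6), args=(R_oi, t_oi, p_ecef, np.zeros(3)))
    # pass 2: joint refinement of T_EO and t_IA, seeded by the coarse estimate
    pass2 = least_squares(residuals, np.concatenate([pass1.x, np.zeros(3)]),
                          args=(R_oi, t_oi, p_ecef))
    return pass2.x[:6], pass2.x[6:9]

# synthetic, noise-free check that a known lever arm is recovered
rng = np.random.default_rng(2)
R_oi = Rotation.random(40, random_state=2).as_matrix()
t_oi = rng.uniform(-20, 20, (40, 3))
R_true = Rotation.from_euler("zyx", [30, 5, -4], degrees=True).as_matrix()
t_true, t_ia_true = np.array([3.0, -1.0, 2.0]), np.array([0.0, 0.05, 0.12])
p_ecef = (R_true @ (R_oi @ t_ia_true + t_oi).T).T + t_true
_, t_ia_hat = align_two_pass(R_oi, t_oi, p_ecef)
print(np.round(t_ia_hat, 3))  # close to [0, 0.05, 0.12] on this noise-free example
```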
Considering VIO's drifting nature, we need to frequently re-align VIO with the ECEF frame. All the later alignments optimize \(\mathbf{T}_{\mathrm{EO}}\) in Eq. 2 with \(\mathbf{t}_{\mathrm{IA}}\) fixed. In real-world urban canyon environments, GNSS positioning can degrade severely and impede accurate trajectory alignment. Therefore, we use a root mean square error (RMSE) threshold of 0.1m and an FER threshold of 97% to decide whether to perform a new trajectory alignment at a new epoch. The frequency of re-alignment is annotated in Fig. 3(b) and Fig. 4(b).
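A small sketch (ours) of this re-alignment gate: the FER is computed from a PCA of the recent GNSS positions, and a new alignment is accepted only when the trajectory is sufficiently excited (FER below 97%) and the alignment residual RMSE is below 0.1 m. The toy segments below are made up for the demonstration.

```
import numpy as np

def first_eigenvalue_ratio(positions):
    """FER = largest eigenvalue of the position covariance / sum of eigenvalues."""
    eigvals = np.linalg.eigvalsh(np.cov(positions.T))
    return eigvals.max() / eigvals.sum()

def should_realign(positions, alignment_rmse, fer_threshold=0.97, rmse_threshold=0.1):
    return alignment_rmse < rmse_threshold and first_eigenvalue_ratio(positions) < fer_threshold

# a nearly straight segment is rejected, an L-shaped one is accepted
straight = np.column_stack([np.linspace(0, 30, 60),
                            0.02 * np.sin(np.linspace(0, 6, 60)),
                            np.zeros(60)])
l_shape = np.vstack([straight,
                     straight[-1] + np.column_stack([np.zeros(60),
                                                     np.linspace(0, 30, 60),
                                                     np.zeros(60)])])
print(should_realign(straight, 0.05), should_realign(l_shape, 0.05))  # False True
```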
### _Fusion Algorithm_
With the VIO registered to the ECEF frame, we then fuse VIO with GNSS in a loosely-coupled fashion. We use VIO to track the incremental motion in ECEF as the EKF's prior prediction and propagates covariance. Then we follow the routine in RTKLIB [10] to resolve integer ambiguity with the MLAMBDA algorithm [22]. Major designs of our system are introduced in this section. The pseudocode for the overall fusion routine is summarized in Algorithm 1.
```
1:NewEpochData in epoch \(i+1\) captured
2:\(\mathbf{T}_{\mathrm{EO}}\) = AlignNewSegmentationOrReuseOldAlignment()
```
#### Iv-A2 Covariance Propagation
Denote the prior covariance of \({}^{\mathrm{E}}\mathbf{p}_{u_{i+1}}\) as \(\mathbf{P}_{i+1|i}\), and the posterior covariance of \({}^{\mathrm{E}}\mathbf{p}_{u_{i}}\) as \(\mathbf{P}_{i|i}\). We propagate \(\mathbf{P}_{i+1|i}\) from \(\mathbf{P}_{i|i}\) by combining VIO's covariance. We obtain the \(12\times 12\) covariance matrix \(\mathrm{cov}([\mathbf{\xi}_{\mathrm{OI}_{i}}^{\top},\mathbf{\xi}_{\mathrm{OI}_{i+1}}^{\top}]^{\top})\) from the covariance of the joint Gaussian distribution maintained by VIO, where \(\mathbf{\xi}_{\mathrm{OI}_{i}}\) and \(\mathbf{\xi}_{\mathrm{OI}_{i+1}}\) are the corresponding Lie algebra elements [26] for \(\mathbf{T}_{\mathrm{OI}_{i}}\) and \(\mathbf{T}_{\mathrm{OI}_{i+1}}\). Then the covariance of the incremental translation in the odometry frame is
\[\mathrm{cov}({}^{\mathrm{O}}\mathbf{t}_{\mathrm{I}_{i}i_{+1}})=[\mathbf{J}_{\mathrm{ I}_{i}}\mathbf{J}_{\mathrm{I}_{i+1}}]\,\mathrm{cov}(\begin{bmatrix}\mathbf{\xi}_{ \mathrm{O}_{i}}\\ \mathbf{\xi}_{\mathrm{O}_{i+1}}\end{bmatrix})\begin{bmatrix}\mathbf{J}_{\mathrm{I}_{i }}^{\top}\\ \mathbf{J}_{\mathrm{I}_{i+1}}^{\top}\end{bmatrix}, \tag{4}\]
where \(\mathbf{J}_{\mathrm{I}_{i}}\) and \(\mathbf{J}_{\mathrm{I}_{i+1}}\) are the partial derivatives of \({}^{\mathrm{O}}\mathbf{t}_{\mathrm{I}_{i}i_{+1}}\) w.r.t. the Lie algebra element of \(\mathbf{T}_{\mathrm{O}_{i}}\) and \(\mathbf{T}_{\mathrm{O}_{i+1}}\), respectively. In Eq. (4), \(\mathrm{cov}({}^{\mathrm{O}}\mathbf{t}_{\mathrm{I}_{i}i_{+1}})\) is \(3\times 3\), and \(\mathbf{J}_{\mathrm{I}_{i}}\) and \(\mathbf{J}_{\mathrm{I}_{i+1}}\) are both \(3\times 6\). Then in the ECEF frame,
\[\mathrm{cov}({}^{\mathrm{E}}\mathbf{t}_{\mathrm{I}_{i}i_{+1}})=\mathbf{R}_{\mathrm{EO }}\cdot\mathrm{cov}({}^{\mathrm{O}}\mathbf{t}_{\mathrm{I}_{i}i_{+1}})\cdot\mathbf{R}_{ \mathrm{EO}}^{\top}. \tag{5}\]
The final form of the proposed covariance propagation of the GNSS prior position in EKF prediction step is
\[\mathbf{P}_{i+1|i}=\mathbf{P}_{i|i}+\mathrm{cov}({}^{\mathrm{E}}\mathbf{t}_{\mathrm{I}_{ i}i_{+1}}). \tag{6}\]
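A NumPy sketch (ours) of the propagation in (4)-(6); the \(12\times 12\) VIO covariance, the two \(3\times 6\) Jacobians and \(\mathbf{R}_{\mathrm{EO}}\) are taken as given inputs here, since their computation depends on the Lie-group conventions of the particular VIO system.

```
import numpy as np

def propagate_prior_covariance(P_post, Sigma_xi, J_i, J_ip1, R_eo):
    """EKF prediction covariance: P_{i+1|i} = P_{i|i} + R_EO cov(t^O) R_EO^T,
    with cov(t^O) = [J_i J_{i+1}] Sigma_xi [J_i J_{i+1}]^T  (Eqs. (4)-(6))."""
    J = np.hstack([J_i, J_ip1])                 # 3 x 12
    cov_t_odom = J @ Sigma_xi @ J.T             # 3 x 3, incremental translation in {O}
    cov_t_ecef = R_eo @ cov_t_odom @ R_eo.T     # rotate into the ECEF frame
    return P_post + cov_t_ecef

# toy inputs with the right shapes
rng = np.random.default_rng(3)
A = rng.standard_normal((12, 12))
Sigma_xi = A @ A.T * 1e-6                       # SPD 12x12 stand-in for VIO covariance
J_i, J_ip1 = rng.standard_normal((3, 6)), rng.standard_normal((3, 6))
R_eo = np.eye(3)
P_post = np.eye(3) * 1e-4
print(propagate_prior_covariance(P_post, Sigma_xi, J_i, J_ip1, R_eo))
```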
#### Iv-A3 Outlier Satellite Pruning
In practice, as GNSS signals undergo severe blockage and reflection in urban canyons, signals traveling along non-line-of-sight (NLoS) paths should not participate in the positioning. Therefore, we need to detect and rule out such outlier satellite measurements. We do outlier rejection in both the measurement update step and the integer ambiguity resolution step.
In the measurement update step, since VIO provides an accurate EKF prediction, we use 5 times VIO's incremental motion as a threshold to filter the innovations of the satellite measurements. The remaining measurements \(\bar{\mathbf{y}}\) are a subset of \(\mathbf{y}\). After this, some elements of \(\bar{\mathbf{y}}\) can still be unreliable, with some NLoS signals passing the threshold. Then in the integer ambiguity resolution step, if the integer ambiguities solved by the MLAMBDA [22] algorithm cannot pass the validation by the ratio test [23], the system further prunes the satellites that produce the largest residuals in the integer least squares problem, and then solves it again. This way, some more spurious outliers can be ruled out.
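A schematic sketch (ours) of the two rejection stages: innovation gating against 5x the VIO incremental motion, followed by iterative pruning of the satellites with the largest integer-least-squares residuals until the ratio test passes. The solver is passed in as a callable and the demo solver is fabricated, so the snippet only illustrates the control flow, not RTKLIB's internals.

```
import numpy as np

def gate_innovations(innovations, vio_step_norm, factor=5.0):
    """Keep only measurements whose innovation is consistent with the VIO motion prior."""
    return np.abs(innovations) <= factor * vio_step_norm

def resolve_with_pruning(sat_ids, solve_ils, ratio_threshold=3.0, max_drops=3):
    """solve_ils(sat_ids) -> (fixed_solution, ratio, residuals); drop worst satellite on failure."""
    sat_ids = list(sat_ids)
    for _ in range(max_drops + 1):
        fixed, ratio, residuals = solve_ils(sat_ids)
        if ratio >= ratio_threshold:
            return fixed, sat_ids            # validated fixed solution
        if len(sat_ids) <= 4:                # keep a minimal geometry
            break
        worst = int(np.argmax(np.abs(residuals)))
        sat_ids.pop(worst)                   # prune the most suspicious satellite
    return None, sat_ids                     # fall back to the float solution

# toy demo: the solver "fails" until satellite 7 (a simulated NLoS signal) is removed
def fake_solver(ids):
    residuals = np.array([10.0 if s == 7 else 0.1 for s in ids])
    ratio = 1.5 if 7 in ids else 4.2
    return np.zeros(3), ratio, residuals

print(gate_innovations(np.array([0.2, 3.0, 0.4]), vio_step_norm=0.5))
print(resolve_with_pruning([3, 7, 11, 19, 24], fake_solver))
```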
## V Experiments
### _Hardware Implementation_
We perform hardware experiments in real-world challenging urban canyons (Bellevue, Washington, USA) using the Aria glasses [9], which has a suite of machine perception sensors including two global-shutter monochrome SLAM cameras, two 6-DOF IMUs, and a built-in GPS pseudorange module. We rigidly attach an external multi-band GNSS carrier-phase antenna to the Aria glasses with a metal ground plate, as shown in Fig. 2(a). The GNSS carrier-phase measurements are received by a u-blox ZED-F9P GNSS receiver, and the raw data is further logged by an Arduino into an SD-card for offline processing1. We use three constellations in our experiments: GPS, GLONASS, and Galileo. We set up our own reference station using the same antenna and receiver, about \(20\,\mathrm{km}\) from the user's receiver. This baseline length is common in many cities in North America between a public base station and its nearby urban environment. The two SLAM cameras operate at \(10\,\mathrm{Hz}\) at \(640\times 480\) resolution, and the two IMUs operate at \(1000\,\mathrm{Hz}\) and \(800\,\mathrm{Hz}\), respectively. The internal GPS pseudorange module runs at \(1\,\mathrm{Hz}\) while the external GNSS carrier-phase module at \(5\,\mathrm{Hz}\).
Footnote 1: Our algorithm is causal, which is the same as the EKF.
For synchronization between GNSS carrier-phase and VIO, since the built-in GPS of the Aria glasses and the external GNSS carrier-phase antenna both record GPS Time, we can simply use the built-in GPS of Aria glasses as a bridge to synchronize the VIO (system time on Aria glasses) and the external GNSS carrier-phase signals.
### _Analysis of Calibration for Alignment Accuracy_
Table I shows alignment experiments with and without the proposed calibration for three trajectories, with lengths \(100\,\mathrm{m}\), \(50\,\mathrm{m}\) and \(30\,\mathrm{m}\) respectively, under unobstructed sky visibility. We can observe that the calibration significantly reduces the alignment error from decimeter-level to centimeter-level, by up to \(12\,\mathrm{cm}\) in trajectory 3, and thus largely boosts the alignment accuracy. Another implication is that the alignment with the calibration has the same magnitude of error as carrier-phase positioning itself; the accuracies of VIO and carrier-phase positioning are thus commensurate, which makes them suitable for complementary sensor fusion.
Moreover, we show with a field experiment under open sky that GNSS pseudorange measurement is too noisy to provide VIO with a reliable centimeter-level ECEF-referenced registration. In Fig. 1, the trajectory under good sky visibility is perceived by VIO, GNSS carrier-phase, and GNSS pseudorange. We can observe that GNSS-PR is much noisier than the other two modalities, with a positioning error w.r.t. the other two modalities of up to \(10\,\mathrm{m}\).
\begin{table}
\begin{tabular}{c|c|c|c} & Traj 1 & Traj 2 & Traj 3 \\ \hline Aligned Length (m) & 100 & 50 & 30 \\ \hline RMSE without Calibration (m) & 0.11 & 0.11 & 0.16 \\ \hline RMSE with Calibration (m) & **0.071** & **0.028** & **0.037** \\ \end{tabular}
\end{table} TABLE I: Alignment errors with and without calibration
Fig. 2: Device used in the hardware experiments.
This suggests that it is not suitable to fuse visual-inertial information with GNSS pseudorange, because VIO cannot achieve centimeter-level registration to the ECEF frame by aligning to GNSS pseudorange positioning. This conclusion is also supported by the ECEF registration error reported in Table II of GVINS [7], where meter-level registration is observed even in an open-sky sports field.
### _Groundtruth Comparison Experiment_
To characterize the performance of our method under severe sky blockage in urban scenarios, we need to be creative about how to get the groundtruth. It is also inappropriate to compare with methods that fuse low-accuracy pseudorange GNSS measurements into VIO [5, 6, 7, 8], which lack the ability to fuse carrier-phase GNSS measurements and instead directly use carrier-phase positioning as the groundtruth. To this end, we collected two datasets, one under open sky and one with the sky lightly blocked by trees, manually exclude satellites to simulate sky blockage, and then use RTKLIB to process the full-constellation data as the groundtruth.
We run our method and RTKLIB to process a subset of satellite measurements after manual satellite removal (10 satellites excluded for open-sky dataset and 5 excluded for lightly-blocked dataset). The removed satellites contain the most frequently observed ones and moderately frequent ones during the observation period. The absolute position error (APE) over time is shown in Fig. 3 plotted by the evo toolkit [27]. The RMSE, mean APE, median APE, standard deviation of APE, max APE, and fixed solution rate (FSR, higher is better) are listed in Table II. For both RTKLIB and our method, we use the same standard for the integer solution validation, which is the popular ratio test [23], the ratio between the second best and the best residuals being 3. We can observe that our method achieves lower position errors while having much higher fixed solution rate (FSR) than RTKLIB compared to the groundtruth.
### _Real-World Urban Canyon Experiments_
We conduct real-world experiments in two challenging urban canyon environments, "UC1" and "UC2" in Fig. 4 and Fig. 5. 2 A 3D street view of the urban canyon in our dataset is shown in Fig. 6 to illustrate how challenging our problem is. As is analyzed in Section V-B, works that fuse GNSS pseudorange and VIO [5, 6, 7, 8] are not suitable for comparison in our experiments, since their registration to ECEF has meter-level offsets due to the noisiness of pseudorange positioning. Plus, they do not tackle the carrier-phase positioning problem as we do.
Footnote 2: Readers are also referred to [https://youtu.be/PCDymbSTndQ](https://youtu.be/PCDymbSTndQ) for a video demonstration.
In the trajectories, green means fixed solution, yellow means float solution, and white means "fill-in" positions from aligned VIO when the validations in Alg. 1 fail. Visualizations of the results from RTKLIB and our model are shown in Fig. 4 and Fig. 5. We can observe that RTKLIB suffers from the urban canyon effects both horizontally (Fig. 3(a), 4(a)) and vertically (Fig. 3(c), 4(c)). All the zig-zags and large jumps in RTKLIB's EKF trajectory are eliminated by our model (Fig. 3(b), 4(b)).
\begin{table}
\begin{tabular}{l|c c} & Open Sky, 10 sats ex & Lightly-blocked, 5 sats ex \\ & RTKLIB : Ours & RTKLIB : Ours \\ \hline RMSE (m) \(\downarrow\) & 0.68 : **0.38** & 0.78 : **0.033** \\ Mean (m) \(\downarrow\) & 0.55 : **0.34** & 0.45 : **0.025** \\ Median (m) \(\downarrow\) & 0.40 : **0.32** & 0.26 : **0.019** \\ Std (m) \(\downarrow\) & 0.41 : **0.17** & 0.63 : **0.023** \\ Max (m) \(\downarrow\) & 2.97 : **0.88** & 4.90 : **0.43** \\ FSR (\%) \(\uparrow\) & 10.8 : **20.0** & 48.6 : **92.9** \\ \end{tabular}
\end{table} TABLE II: Statistics of groundtruth comparison experiments. “10 sats ex” means 10 satellites are manually excluded. Same applies to “5 sates ex”.
Fig. 3: Performance comparison with ground truth for simulating signal blockage in urban canyons. Best viewed zoom in.
Fig. 6: The urban canyons of the trajectory of Fig. 4(b) visualized in Google Earth. The average building height in this canyon is 55m and the tallest building is 81m.
Moreover, the segments formed by white spots connect smoothly with the GNSS positioning segments formed by green and yellow, which is a qualitative indicator that our fusion method is coherent.
For quantitative evaluation, we pick three segments respectively from the UC1 and UC2 trajectories, where the urban canyon effects are severe, moderate, and mild, as marked in Fig. 3(a) and Fig. 4(a). In the selected segments, due to lack of ground truth, we use our method as a reference and evaluate RTKLIB's and the VIO's deviations from our method.3 The VIO trajectory is aligned with the selected segments by neighboring segments. We show that although VIO has a drift rate of about \(0.5\%\), it is much closer to our trajectory than RTKLIB's EKF, indicating that our system is much more robust than the original RTKLIB. The RMSE, mean APE, median APE, and max APE are listed in the following two tables. For simplicity, RTKLIB is denoted "RTK" in the tables, and VIO denoted "VIO".
Footnote 3: RTKLIB is commonly used as ground truth in SLAM research. In this sense, we are improving the ground truth used by many researchers.
## VI Conclusion
This paper tackles the robustness issue of GNSS carrier-phase positioning in urban canyons with a loosely-coupled EKF to fuse GNSS carrier-phase and VIO. We propose an optimization-based method to calibrate the extrinsics between IMU and GNSS antenna to accurately align VIO to the ECEF frame. We use VIO as the prediction step for the EKF and incorporate its covariance estimate into the covariance propagation of EKF. The performance of our method is quantitatively validated with simulation experiments and real-world experiments in challenging urban canyons. Its robustness, positioning accuracy and fixed solution rate outperform the state-of-the-art GNSS carrier-phase positioning RTKLIB [10] by a large margin in challenging urban canyons, filling the gap towards universally global positioning.
Fig. 4: UC1 environment. Green: fixed solution. Yellow: float solution. White: aligned VIO when there is no valid float solution. Best viewed in color and zoom in.
Fig. 5: UC2 environment. Green: fixed solution. Yellow: float solution. White: aligned VIO when there is no valid float solution. Best viewed in color and zoom in. |
2305.05379 | TASTY: A Transformer based Approach to Space and Time complexity | Code based Language Models (LMs) have shown very promising results in the
field of software engineering with applications such as code refinement, code
completion and generation. However, the task of time and space complexity
classification from code has not been extensively explored due to a lack of
datasets, with prior endeavors being limited to Java. In this project, we aim
to address these gaps by creating a labelled dataset of code snippets spanning
multiple languages (Python and C++ datasets currently, with C, C#, and
JavaScript datasets being released shortly). We find that existing time
complexity calculation libraries and tools only apply to a limited number of
use-cases. The lack of a well-defined rule based system motivates the
application of several recently proposed code-based LMs. We demonstrate the
effectiveness of dead code elimination and increasing the maximum sequence
length of LMs. In addition to time complexity, we propose to use LMs to find
space complexities from code, and to the best of our knowledge, this is the
first attempt to do so. Furthermore, we introduce a novel code comprehension
task, called cross-language transfer, where we fine-tune the LM on one language
and run inference on another. Finally, we visualize the activation of the
attention fed classification head of our LMs using Non-negative Matrix
Factorization (NMF) to interpret our results. | Kaushik Moudgalya, Ankit Ramakrishnan, Vamsikrishna Chemudupati, Xing Han Lu | 2023-05-06T03:37:44Z | http://arxiv.org/abs/2305.05379v3 | # TASTy: A Transformer based Approach to Space and Time complexity
###### Abstract
Code based Language Models (LMs) have shown very promising results in the field of software engineering with applications such as code refinement, code completion and generation. However, the task of time and space complexity classification from code has not been extensively explored due to a lack of datasets, with prior endeavors being limited to Java. In this project, we aim to address these gaps by creating a labelled dataset of code snippets spanning multiple languages (Python and C++ datasets currently, with C, C#, and JavaScript datasets being released shortly). We find that existing time complexity calculation libraries and tools only apply to a limited number of use-cases. The lack of a well-defined rule based system motivates the application of several recently proposed code-based LMs. We demonstrate the effectiveness of dead code elimination and increasing the maximum sequence length of LMs. In addition to time complexity, we propose to use LMs to find space complexities from code, and to the best of our knowledge, this is the first attempt to do so. Furthermore, we introduce a novel code comprehension task, called cross-language transfer, where we fine-tune the LM on one language and run inference on another. Finally, we visualize the activation of the attention fed classification head of our LMs using Non-negative Matrix Factorization (NMF) to interpret our results.
## 1 Introduction
Language Models such as BERT, GPT-3, and T5 (Devlin et al., 2018; Brown et al., 2020; Raffel et al., 2020) have propagated the idea of pre-training and finetuning due to their success on a variety of downstream tasks. This has led to a plethora of pre-trained models for programming languages (PL). These are used on code related downstream tasks such as code search (Neelakantan et al., 2022), code translation (Wang et al., 2021), code summarization (Parvez et al., 2021) etc. We propose to work on one such downstream task: predicting time and space complexities based on code.
Empirical methods of predicting time complexity involve running the same piece of code over differently sized inputs and measuring the time taken for execution, which enables us to approximate a function. Our attempt at classification will help reduce the time and effort required in this process. Additionally, we hope our attempt serves as a stepping stone for future efforts that will remedy this issue once and for all. Finally, we wish to put code-based LMs' code comprehension to the test by evaluating their cross-language transfer capabilities, where we train our model to predict time complexities on one language and evaluate on another. For even an average human programmer, finding the time complexity for an arbitrary piece of code can be a cumbersome task. Our long-term goals are two-fold: 1. Utilization of LM based complexity predictors on competitive coding platforms, reducing the number of code runs over differently sized inputs. 2. Integration of methods like ours with AI programming assistants, which will make it easier for programmers to optimize their code and ease the code review process.
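To make the empirical procedure concrete, here is a small sketch (ours, illustrative only) that times a function at several input sizes and estimates the polynomial degree as the slope of log(time) versus log(n); the target function and sizes are made up.

```
import time
import numpy as np

def estimate_polynomial_degree(fn, sizes, repeats=3):
    """Fit log(runtime) ~ degree * log(n); the slope approximates the polynomial order."""
    times = []
    for n in sizes:
        best = min(_time_once(fn, n) for _ in range(repeats))
        times.append(best)
    degree, _ = np.polyfit(np.log(sizes), np.log(times), 1)
    return degree

def _time_once(fn, n):
    start = time.perf_counter()
    fn(n)
    return time.perf_counter() - start

def quadratic_work(n):           # O(n^2) toy workload
    total = 0
    for i in range(n):
        for j in range(n):
            total += i ^ j
    return total

print(estimate_polynomial_degree(quadratic_work, [200, 400, 800, 1600]))  # ~2.0
```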
Previous datasets with code and its time complexities, such as CoRCoD (Sikka et al., 2020) and CodeComplex (Jeon et al., 2022) focus solely on Java. In this paper, we introduce a C++ and Python dataset of code with both their corresponding time and space complexities. This is the first dataset of its kind for these languages with time complexities and as far as we know, the first dataset ever to contain space complexities. A sample input and output of our model are shown in Figure 1.1
Footnote 1: We plan to release similar datasets for C, C# and JavaScript shortly.
Most prior attempts to find time complexities are based on traditional machine learning methods such as Random Forests, Support Vector Machines (Sikka et al., 2020) and Light Gradient Boosting Machines (CODAT, 2021), bolstered by manually constructed features. We believe learned attention weights might fare better, since the LMs can benefit from being exposed to different code-based contexts, which is infeasible with constructed features. Hence we evaluate several code-based LMs on our dataset as well as the CodeComplex dataset.
The sole other attempt using LMs for time complexity prediction- Jeon et al. (2022) uses techniques such as hierarchical attention and dead code elimination. Often, the length of the code exceeds the maximum sequence length of a transformer, which means that it cannot attend to all the tokens in the code. Dead code elimination attempts to solve this problem by removing unused methods and variables, which reduces the length of the code enabling transformers to attend to more tokens. While verifying the efficacy of such elimination, we wish to determine whether increasing sequence lengths (Beltagy et al., 2020) increases the accuracy of our models.
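For illustration, a minimal sketch of this kind of elimination for Python sources (a simplified single pass that only drops top-level functions that are never referenced; the transformation used by Jeon et al. (2022) for Java is more involved):

```python
import ast


def drop_unused_functions(source: str) -> str:
    """Remove top-level function definitions that are never referenced."""
    tree = ast.parse(source)
    defined = {n.name for n in tree.body if isinstance(n, ast.FunctionDef)}
    used = {n.id for n in ast.walk(tree) if isinstance(n, ast.Name)}
    tree.body = [
        n for n in tree.body
        if not (isinstance(n, ast.FunctionDef) and n.name in defined - used)
    ]
    return ast.unparse(tree)  # shorter code -> more tokens fit in the window


code = """
def helper():          # never called, so it is removed
    return 42

def solve(xs):
    return sorted(xs)

print(solve([3, 1, 2]))
"""
print(drop_unused_functions(code))
```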
If LMs can truly understand code at a conceptual level, models pre-trained on multiple languages should be able to predict time complexity when we fine-tune them on say, our Python based dataset and evaluate them on our C++ based dataset. We believe this is reasonable since a competent human programmer who has experience in multiple programming languages, would be able to calculate time complexities for programs in all those languages.
Finally, we use Non-negative Matrix Factorization (NMF) to decompose the activations of the Feed Forward Neural Network (FFNN) to extract interpretable components as described in Alammar (2021). NMF affords researchers to probe parts-to-whole relationships as described in Lee and Seung (1999). We use this quality of NMF to qualitatively analyse the trained models.
We also present a survey of existing tools that predict time and space complexity and detail our findings in Appendix A.1. Unfortunately, none of the tools we found provide a comprehensive solution to the problem of identifying algorithm complexities.
## 2 Related Work
**CoRCoD:**Sikka et al. (2020) created a dataset of 933 Java codes and their complexities. We extend their work by making a dataset of C++ and Python programs. Like our dataset, their code also seems to stem from GeeksforGeeks based on the comments in the code. They classify programs into 5 classes: \(O(1)\), \(O(n)\), \(O(\log n)\), \(O(n\log n)\), \(O(n^{2})\). Their methods include engineered features such as number of loops, number of if statements, maximum depth of nested loops etc. extracted from the Abstract Syntax Tree. Additionally, they also use graph2vec which uses a skipgram (Mikolov et al., 2013) model to compute graph embeddings corresponding to code embeddings, which are then classified using traditional ML methods such as Random Forests, Support Vector Machines etc.
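To make such engineered features concrete, here is a small sketch of extracting a few of them from a Python AST (for illustration only; CoRCoD extracts the analogous features from Java ASTs):

```python
import ast


def handcrafted_features(source: str) -> dict:
    tree = ast.parse(source)
    nodes = list(ast.walk(tree))
    loops = sum(isinstance(n, (ast.For, ast.While)) for n in nodes)
    ifs = sum(isinstance(n, ast.If) for n in nodes)

    def max_loop_depth(node, depth=0):
        here = depth + isinstance(node, (ast.For, ast.While))
        return max([here] + [max_loop_depth(c, here) for c in ast.iter_child_nodes(node)])

    return {"num_loops": loops, "num_ifs": ifs, "max_loop_depth": max_loop_depth(tree)}


snippet = "for i in range(n):\n    for j in range(i):\n        if i % 2 == 0:\n            total += 1\n"
print(handcrafted_features(snippet))  # {'num_loops': 2, 'num_ifs': 1, 'max_loop_depth': 2}
```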
**CODAT (2021)**: Uses IBM's CodeNet dataset to predict time complexities. They derive the labels by defining programs as sub-polynomial, polynomial or above-polynomial (the dataset is not publicly available). Like CoRCoD, they engineer features such as the number of loops, breaks and if statements, along with graph-based representations derived from an Abstract Syntax Tree. Their best performing approach uses a 15-layer residual network which works on manually constructed features concatenated with graph representations.
Figure 1: Sample input program for time and space complexity calculation. The expected time and space complexity model outputs would be **Quadratic** and **Constant** respectively.
**CodeComplex**: (Jeon et al., 2022) construct a human annotated Java dataset (CodeComplex) of 3803 Java codes and their time complexities (our dataset is comprised of C++ and Python). We also note that this dataset has some overlap with CoRCoD. Their approach combines dead-code elimination, multi-level pre-training objectives such as loop depth prediction, number of parameters prediction along with hierarchical attention which generates method level and class level embeddings which are combined to create a final embedding for prediction.
## 3 Our Dataset
We call our dataset the GeeksforGeeks (GFG)2 dataset, named after the site where the code was scraped. Our C++ and Python datasets contain 1410 and 1373 codes, respectively, with their corresponding time and space complexities. See Table 1 for the class distribution. We remark that our dataset is imbalanced: the linear class alone constitutes 54.6% and 55.8% of our Python and C++ time complexity datasets, while the linear and quadratic classes together constitute 82.3% of both datasets. There is a tendency on coding interview preparation platforms such as GFG to curate more solutions around these complexities in order to be more instructive. Since we do not exhaustively scrape the GFG website, we could potentially fix the high class imbalance with selective scraping. Other statistics about the dataset, such as construction details, the space complexity class breakdown, and the code length distribution, are given in Appendix A.2 and Appendix A.3 respectively.
Footnote 2: [https://www.geeksforgeeks.org/](https://www.geeksforgeeks.org/)
## 4 Experiments
We consider BERT, CodeBERT, GraphCodeBERT, CodeT5 (Devlin et al., 2018; Feng et al., 2020; Guo et al., 2020; Wang et al., 2021) and Longformer (Beltagy et al., 2020) as our candidate models. CodeBERT, GraphCodeBERT and CodeT5 have been pre-trained on CodeSearchNet (Husain et al., 2019), a dataset comprising 6 languages: Python, Java, JavaScript, PHP, Ruby and Go. CodeT5 has additionally been trained on C and C# data sourced from BigQuery3. All the models are fine-tuned by adding a linear layer (a classification head) at the end to obtain the desired output. In addition to fine-tuning these models on our GFG Python and GFG C++ datasets, we also fine-tune them for time complexity classification on the CodeComplex dataset.
Footnote 3: [https://console.cloud.google.com/marketplace/details/github/github-repos](https://console.cloud.google.com/marketplace/details/github/github-repos)
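As a concrete illustration of attaching such a classification head, here is a minimal sketch rather than our exact training script (the GraphCodeBERT checkpoint name follows the Hugging Face hub, and `solution.py` is a placeholder input file):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

CLASSES = ["O(1)", "O(n)", "O(n^2)", "O(n^3)", "O(log n)", "O(n log n)", "NP-hard"]

tokenizer = AutoTokenizer.from_pretrained("microsoft/graphcodebert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/graphcodebert-base",
    num_labels=len(CLASSES),  # adds a linear classification head on top of the LM
)

code = open("solution.py").read()
inputs = tokenizer(code, truncation=True, max_length=512, return_tensors="pt")
logits = model(**inputs).logits
print("predicted class:", CLASSES[logits.argmax(-1).item()])
```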
For fine-tuning, we utilize the Adam (Kingma & Ba, 2014) optimizer with an effective batch size of 32, i.e., a per-device batch size of 8 with 4 steps of gradient accumulation (Andersson et al., 2022). Across all runs, we fine-tune for 15 epochs using a constant 1e-5 learning rate. We also employ mixed-precision training (Micikevicius et al., 2017) for all models, and gradient checkpointing (Chen et al., 2016) for all models except CodeBERT. We use the base version of all the models. All of our results are averaged across 5 runs.
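A sketch of this configuration using the Hugging Face `Trainer` API (for illustration; our actual scripts may differ in minor details, `model` can be the classifier loaded above, and `train_ds`/`test_ds` are placeholders for our tokenized splits):

```python
from transformers import Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="out",
    num_train_epochs=15,
    learning_rate=1e-5,
    lr_scheduler_type="constant",
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,  # effective batch size of 32
    fp16=True,                      # mixed-precision training
    gradient_checkpointing=True,    # disabled for CodeBERT in our runs
)

trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=test_ds)
trainer.train()
```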
| Time Complexity | CoRCoD (Java) | CodeComplex (Java) | GFG (C++) | GFG (Python) |
| --- | --- | --- | --- | --- |
| \(O(1)\) | 143 | 533 | 33 | 35 |
| \(O(n)\) | 385 | 472 | 787 | 750 |
| \(O(n^{2})\) | 200 | 553 | 374 | 381 |
| \(O(n^{3})\) | - | 579 | 35 | 36 |
| \(O(\log n)\) | 55 | 576 | 26 | 25 |
| \(O(n\log n)\) | 150 | 518 | 127 | 121 |
| NP-hard | - | 572 | 28 | 25 |
| **Total** | 933 | 3803 | 1410 | 1373 |

Table 1: Comparison of the per-class occurrences of time complexities across different datasets. CodeComplex and CoRCoD numbers are borrowed from the respective papers.
For the CodeComplex dataset, we use the training and testing splits provided by Jeon et al. (2022), with dead code elimination. For our GFG dataset, we use an 80-20 label-stratified train-test split for both time and space.
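The split can be reproduced with scikit-learn (a sketch; `codes` and `labels` below are placeholders for the scraped snippets and their complexity classes):

```python
from sklearn.model_selection import train_test_split

codes = ["def f(n): ...", "for i in range(n): ..."] * 50  # placeholder snippets
labels = ["O(1)", "O(n)"] * 50                            # placeholder classes

train_codes, test_codes, train_y, test_y = train_test_split(
    codes, labels, test_size=0.2, stratify=labels, random_state=42
)
print(len(train_codes), len(test_codes))  # 80-20 split, stratified by class
```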
We consider the BERT results to be our baseline, since it is trained only on Natural Language while the other models mentioned are trained either on both Natural and Programming Languages or only Programming Languages. While the Longformer too has only been trained on Natural Language, it has an inherent advantage over BERT since it can attend to longer sequence lengths.
## 5 Results
### Time & Space complexity results with constant maximum sequence length
We fine-tune the models mentioned in Table 2 except the Longformer with a constant maximum sequence length of 512 tokens. We expected CodeT5 to achieve the best accuracy since it has a higher number of learnable parameters, but GraphCodeBERT achieves the best accuracy across all 3 datasets, on both time and space. While the disparity in accuracies between BERT and the programming language based models is quite significant on the CodeComplex dataset, the disparity on the GFG C++ and GFG Python datasets is not so drastic. In fact, CodeT5 does even worse than BERT on the GFG Python dataset, on both time and space. We also report the average per-class accuracies across 5 runs in Appendix A.4.
### Time complexity results with varying maximum sequence length
We fine-tune the Longformer model across a range of maximum sequence lengths, as shown in the latter part of Table 2 and in Figure 2. On both the GFG C++ and the Java CodeComplex datasets, complexity prediction accuracy increases with increasing sequence length.
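The sweep itself only changes the tokenizer truncation length (a sketch; the checkpoint name stands in for the Longformer variant we use, and `code` is a placeholder snippet):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")
code = "int main() { return 0; }"  # placeholder snippet

for max_len in (256, 512, 1024, 2048):
    enc = tokenizer(code, truncation=True, padding="max_length",
                    max_length=max_len, return_tensors="pt")
    print(max_len, enc["input_ids"].shape)  # number of tokens the model can attend to
```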
| Language (Dataset) | Model | Max. Seq. Length | Parameters | Time Acc. (%) | Space Acc. (%) |
| --- | --- | --- | --- | --- | --- |
| Java (CodeComplex) | BERT | 512 | 110M | 84.68 | - |
| Java (CodeComplex) | CodeBERT | 512 | 125M | 90.54 | - |
| Java (CodeComplex) | GraphCodeBERT | 512 | 125M | **92.08** | - |
| Java (CodeComplex) | CodeT5 | 512 | 220M | 88.59 | - |
| C++ (GFG) | BERT | 512 | 110M | 75.60 | 74.75 |
| C++ (GFG) | CodeBERT | 512 | 125M | 74.46 | 75.88 |
| C++ (GFG) | GraphCodeBERT | 512 | 125M | **78.29** | **79.71** |
| C++ (GFG) | CodeT5 | 512 | 220M | 71.56 | 70.28 |
| Python (GFG) | BERT | 512 | 110M | 75.92 | 72.58 |
| Python (GFG) | CodeBERT | 512 | 125M | 74.10 | 73.23 |
| Python (GFG) | GraphCodeBERT | 512 | 125M | **77.89** | **73.96** |
| Python (GFG) | CodeT5 | 512 | 220M | 68.43 | 65.16 |
| Java (CodeComplex) | Longformer | 256 | 41M | 80.33 | - |
| Java (CodeComplex) | Longformer | 512 | 41M | 85.62 | - |
| Java (CodeComplex) | Longformer | 1024 | 41M | 85.75 | - |
| Java (CodeComplex) | Longformer | 2048 | 41M | **86.66** | - |
| C++ (GFG) | Longformer | 256 | 41M | 71.77 | - |
| C++ (GFG) | Longformer | 512 | 41M | **72.55** | - |

Table 2: Time and space complexity prediction results. We average test accuracies across 5 runs of all LMs. For Longformers, we vary the maximum sequence length.
### Cross language transfer (CLT)
The results for CLT are given in Table 3. BERT has the best accuracies for cross-language transfer on both the target languages, followed by GraphCodeBERT on Python and Longformer (512) on C++.
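A sketch of the evaluation loop we assume for CLT (the checkpoint path and the `python_test_set` iterable are placeholders; the point is that the Java-fine-tuned model is used unchanged on the target language):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

ckpt = "checkpoints/graphcodebert-codecomplex-java"  # placeholder: fine-tuned on Java
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt).eval()

python_test_set = [("print(sorted(xs))", 1)]  # placeholder (code, label-index) pairs

correct = 0
for code, label in python_test_set:
    enc = tokenizer(code, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        pred = model(**enc).logits.argmax(-1).item()
    correct += int(pred == label)
print("CLT accuracy:", correct / len(python_test_set))
```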
### Dead code elimination
Introduced by Jeon et al. (2022), this technique removes variables, functions etc. that are defined but not used, since scraped code need not be optimal code. We observe a clear increase in the accuracy of LMs with dead code elimination on the CodeComplex dataset (Table 4), thus verifying the results presented by Jeon et al. (2022). They report that the CodeBERT model's accuracy jumps from 86.0% to 96.0% with pre-training, dead code elimination and additional pre-training objectives. We ablate for the efficacy of dead code elimination in our work.
| Model | Accuracy w/o DC (%) | Accuracy w/ DC (%) |
| --- | --- | --- |
| CodeBERT | 86.0 | 90.54 |
| GraphCodeBERT | 80.3 | 92.08 |
| CodeT5 | 85.9 | 88.59 |

Table 4: Ablating for the efficacy of Dead Code (DC) elimination on the CodeComplex dataset. Values without DC are borrowed from Jeon et al. (2022); values with DC are from Table 2.
| Model | Parameters | Max. Seq. Length | Accuracy py / cpp (%) |
| --- | --- | --- | --- |
| BERT | 110M | 512 | **29.13 / 23.54** |
| CodeBERT | 125M | 512 | 19.37 / 16.80 |
| GraphCodeBERT | 125M | 512 | **23.96** / 18.22 |
| CodeT5 | 220M | 512 | 18.06 / 12.05 |
| Longformer | 41M | 256 | 20.83 / 18.22 |
| Longformer | 41M | 512 | 15.73 / **20.92** |
| Longformer | 41M | 1024 | 13.83 / 13.19 |
| Longformer | 41M | 2048 | 13.98 / 16.87 |

Table 3: Cross-language transfer results: inference of models trained on CodeComplex (all sources are Java) on the target GFG Python (py) / C++ (cpp) datasets. No explicit fine-tuning was performed on the target language.
Figure 2: Increasing accuracy with increasing maximum sequence length using Longformers on the Java CodeComplex dataset.
### Qualitative analysis
We perform qualitative analyses based on the activations of two models, CodeBERT and GraphCodeBERT. NMF is particularly suited to analysing text qualitatively since it provides interpretable components. NMF has found success in medical research (Hamamoto et al., 2022) and information retrieval (Gonzalez et al., 2010). In our case, we apply NMF to the activations of the neurons across the FFNN layers, i.e., the classification head on top of the LM.
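Concretely, the decomposition step can be sketched as follows (we assume `activations` is a non-negative tokens-by-neurons matrix collected from the classification head with forward hooks; the random matrix below is only a stand-in):

```python
import numpy as np
from sklearn.decomposition import NMF

activations = np.random.rand(40, 768)  # placeholder: tokens x neurons, non-negative

nmf = NMF(n_components=8, init="nndsvd", max_iter=500)
token_factors = nmf.fit_transform(activations)  # tokens x components
neuron_factors = nmf.components_                # components x neurons

# Each row of token_factors says how strongly a token activates a component;
# overlaying these values on the source tokens reveals what each component reacts to.
for c in range(token_factors.shape[1]):
    top = np.argsort(token_factors[:, c])[::-1][:5]
    print(f"component {c}: most activated token positions {top.tolist()}")
```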
The visualization in Figure 3 is an example of using this method (Alammar, 2021) on activations produced by CodeBERT after passing a "hello world" Java program through it. Looking at the different components, it is clear that each component is activated by different items of syntax. For example, component 1 in the figure is closely associated with the start token <s> and the end token </s>. Component 7 is closely linked with access specifiers in the code (public class and public static).
Identifying the number of components one should use for analysis depends on the total length and complexity of the input. NMF solutions are non-unique and we can get dissimilar results across different iterations. Depending on the complexity of the input activations, we vary the number of components to reveal fine-grained or global features of the input text. We compare models across the three dimensions that follow:
**Number of Components** (Figure 3): Comparing the visualizations produced by a 3-component NMF and an 8-component NMF reveals that different components encode different kinds of syntax tokens. In the 3-component decomposition, we observe three distinct sections that activate the neurons: the start and end tokens; whitespace tokens such as newlines and tabs; and the main body of the functions. With 8 components, we observe an increase in the number of distinct semantic units that different components react to. We now see the emergence of components that react to access modifiers, identifier names, parentheses and individual functions.
**Layers** (Figure 4): Activations across the earlier layers result in components with more variation and higher weights on average. Later layers are more smooth and have lower weights. However, the semantic information conveyed by the components appears to be identical.
**Pre-Training Language** (Figure 5) : We observe visualizations of layer activations for GraphCodeBERT trained on Python. To generate these activations we use a C++ sample and a Python sample. The components for both pairs of languages appear to encode for similar things. For example, component 1 encodes the start and end tokens, components 5 in C++ and 2 in Python both correspond to for-loop structures.
Figure 4: Comparing visualizations of activations across different layers
Figure 3: Comparing visualizations with different number of components with CodeBERT activations across all layers
## 6 Discussion
**Comparison to traditional ML methods.** In contrast to traditional ML methods such as LGBM and Random Forests, we comprehensively explore LMs for our complexity prediction task. We note that traditional ML approaches are not easily parallelizable and cannot be trained on hundreds of thousands of data points concurrently, while such an impediment does not exist for LMs (Vaswani et al., 2017). We also note that ML based approaches require extensive feature engineering, such as counting the numbers of ifs, loops and breaks, or finding nested loop depths (Sikka et al., 2020). LMs have an inherent scalability benefit along with the benefit of context, without the need to create manual features.
**Natural Language vs. Programming Language models.** While there is a trend of increasing accuracy with increasing sequence length for Longformers on the CodeComplex dataset (Table 2, Figure 2), we note that even Longformer (2048 sequence length) cannot surpass the accuracy of GraphCodeBERT (512 sequence length). Observe that all the programming language (PL) based models (CodeBERT, GraphCodeBERT and CodeT5) beat the Natural Language (NL) based models (BERT and Longformer) on the CodeComplex dataset (Table 2). PL models also beat NL models on the GFG C++ and Python datasets on both time and space accuracies (Table 2), with CodeT5 being the sole exception. We also note the efficacy of dead code elimination, which leads to at least a 2.69% increase in accuracy across all LMs (Table 4).
**CodeT5.** Note that CodeT5 is even outperformed by BERT on both the GFG C++ and Python datasets (Table 2). Furthermore, despite having a higher number of parameters than all the other models used here, it does worse than even the Longformer, which has 5x fewer parameters. On an unbalanced dataset, even majority-class predictors achieve high accuracies, hence we compare CodeT5 on the balanced CodeComplex dataset. We would expect CodeT5 to surpass GraphCodeBERT's performance on CodeComplex since it has additional pre-training objectives, a higher number of parameters and extra training data (it is the only model actually trained on C and C#, which are similar to C++). Instead, we believe that this provides further impetus to the arguments presented by Wang et al. (2021), who propose that the Bimodal Dual Generation pre-training objective, while being great for PL-NL tasks such as code summarization and code generation, might bias the model towards such tasks. At its core, time and space complexity prediction is closer to a PL-PL task than a PL-NL task. Hence, it might be interesting to see whether CodeT5 pre-trained without the Bimodal Dual Generation objective continues to do worse than BERT on our dataset.
**Python vs. C++ Cross Language Transfer.** We also note that Java to Python CLT accuracies are better than Java to C++ CLT accuracies, with two exceptions (Table 3): the Longformer (512) and Longformer (2048) results. This is explained by the fact that the pre-training of all the code based LMs used included Python data but not C++ data (CodeT5 additionally uses C and C# data). This also explains the outliers, since they stem from an NL based LM (Longformer) rather than a PL based LM, a pattern that is further supported by the BERT results in Table 3, where BERT does better than CodeBERT, GraphCodeBERT and CodeT5. We expected Java to C++ CLT to be easier than Java to Python CLT, since syntactically Java is closer to C++ than it is to Python.
Figure 5: Comparing visualizations of activations where GraphCodeBERT was trained on Python. Here we test the model on C++ and Python code.
**NL vs. PL models on Cross Language Transfer.** We conduct a completely novel kind of code-understanding experiment, the CLT experiments (Table 3). We expected our NL based models to give almost random accuracies on this task, which would amount to 14.28% (1 in 7); i.e., PL based LMs would do better than NL based LMs. However, BERT was the best model on this task across both target languages, despite not having been pre-trained on programming languages. Furthermore, the next best performer on Java to C++ CLT was the Longformer (sequence length 512) model, and we note that even the Longformer (sequence length 256) model does better than most other PL based models. Interestingly, for the Longformer models there was a trend of decreasing CLT performance with increasing sequence length, until it bottomed out at the random prediction level. We expected our PL based models to do much better than they did, since programming languages share structures such as classes, methods, loops and conditionals. Our results lead us to think that existing models struggle to generalize what they learn on one language to another. We believe that better predictions on this task might tell us more about the representations learned by LMs.
## 7 Conclusion
In our work we address the lack of algorithmic complexity datasets by publishing our GFG datasets for C++ and Python. To the best of our knowledge, this is the first work to include space complexity data. We highlight the importance of dead code elimination and of increasing the sequence length to address the issue of code whose tokens exceed the maximum sequence length of LMs.
We also introduce a novel CLT task that can be used as an additional benchmark for LMs' language comprehension capabilities. Finally, using NMF we demonstrate that LMs can identify several constructs in the code, such as for-loop structures, access modifiers and individual functions.
While our results indicate that PL based models fare better on the complexity prediction task, NL based models do better on CLT. However, the difference in the performance of NL and PL models is within 5-6% in both of these cases, so there is not a clear winner. We would like to try larger NL and PL models, as well as PL models with a higher maximum sequence length.
Based on these results, code complexity prediction using a transformer based approach seems promising. In future work, we plan to extend the GFG dataset by adding more samples for each language and focusing on having a certain minimum number of samples per class. This extension will enable us to perform the fine-tuning and CLT experiments on a larger scale. We would also like to extend the task to be solvable by generative models.
## 8 Ethical Considerations
All the code on GFG is under a CC BY license, which allows adaptation and remixing as long as we attribute the original authors, while also allowing commercial usage. The code in our dataset contains a comment line identifying the original contributor wherever available.
|
2308.12821 | Weight decompositions on algebraic models for mapping spaces and
homotopy automorphisms | We obtain restrictions on the rational homotopy types of mapping spaces and
of classifying spaces of homotopy automorphisms by means of the theory of
positive weight decompositions. The theory applies, in particular, to connected
components of holomorphic maps between compact K\"ahler manifolds as well as
homotopy automorphisms of K\"ahler manifolds. | Joana Cirici, Bashar Saleh | 2023-08-24T14:24:07Z | http://arxiv.org/abs/2308.12821v1 | # Weight decompositions on algebraic models
###### Abstract.
We obtain restrictions on the rational homotopy types of mapping spaces and of classifying spaces of homotopy automorphisms by means of the theory of positive weight decompositions. The theory applies, in particular, to connected components of holomorphic maps between compact Kahler manifolds as well as homotopy automorphisms of Kahler manifolds.
J. Cirici acknowledges the Serra Hunter Program the AEI (CEX2020-001084-M and PID2020-117971GB-C22). This work was also partially supported by the Departament de Recerca i Universitats de la Generalitat de Catalunya (2021 SGR 00697) and the ANR-20-CE40-0016 HighAGT. B. Saleh acknowledges the support by the Knut and Alice Wallenberg Foundation through grant no. 2019.0521.
**Theorem 1.1**.: _Let \(X\) be of the homotopy type of a finite CW-complex and let \(Y\) be a connected nilpotent space of finite \(\mathbb{Q}\)-type having positive weights. Let \(\Psi\colon X\to Y\) denote a constant map. Then the connected component of \(\operatorname{map}(X,Y)\) which contains \(\Psi\), has positive weights._
We actually prove a more general result for not necessarily constant maps satisfying a weight-preserving property (see Proposition 4.11) which in the context of complex geometry leads to the following result:
**Theorem 1.2**.: _Let \(f:X\to Y\) be a holomorphic map between compact Kahler manifolds, where \(Y\) is nilpotent. Then the connected component of the mapping space \(\operatorname{map}(X,Y)\) which contains \(f\), has positive weights._
The above result is actually true for any formal map between formal topological spaces. For an arbitrary map of complex algebraic varieties we also obtain restricted weights, but these are not necessarily positive. Such weights may be interpreted as part of a mixed Hodge structure on the models for mapping spaces of complex algebraic varieties. Note that the mapping spaces we are considering are purely topological (no algebraic condition except for the fact that we pick a connected component of an algebraic map) and so have a priori no reason to carry mixed Hodge structures. Therefore our results exhibit in some sense the motivic nature of mapping spaces.
In [10], Lupton and Smith show that for a formal space, the universal cover of the classifying space of the space of its homotopy automorphisms has positive weights. We extend this result to the coformal and pointed settings:
**Theorem 1.3**.: _Let \(X\) be a simply connected space of the homotopy type of a finite CW-complex. If \(X\) is formal or coformal, then the universal covers of \(B\operatorname{aut}(X)\) and \(B\operatorname{aut}_{*}(X)\) have positive weights._
Again, in the case when the initial space is a complex algebraic variety, the above result may be understood as a manifestation of a mixed Hodge theory inherited by homotopy automorphisms.
We also endow algebraic models for algebraic constructions related to the free and based loop spaces with weights inherited from weight decompositions on the original space.
We briefly explain the structure of this paper. The algebraic models of mapping spaces turn out to be more accessible through Quillen's approach to rational homotopy, via differential graded Lie or \(L_{\infty}\)-algebras. With this in mind, we first review the theory of weights in this setting and show how weights are naturally transferred between the Sullivan and Quillen approaches to rational homotopy theory. We do this in Section 2, where we also show that the main constructions in rational homotopy (such as minimal models and homotopy transfer theory) generalize to the weight-graded setting, complementing work of Douglas [12]. Section 3 is devoted to the study of particular properties of weights. We review the notions of purity and of having positive weights. We also consider an intermediate situation where weights are restricted to live in a certain segment. All of these restrictions lead to non-trivial homotopical consequences. We also review two main sources of weights: the weight decompositions induced from formality and coformality respectively. In Section 4 we prove the main theorems of this paper on mapping spaces and classifying spaces of homotopy automorphisms.
### Acknowledgments
We would like to thank Geoffroy Horel and Alexander Berglund for useful discussions and suggestions. Also, thanks to Jose Moreno, Aniceto Murillo and Daniel Tanre for answering our questions on completed Lie algebras.
### Conventions
We will be using a cohomological convention even for the dgla's, and we reserve the lower index for weights. In particular, Lie models, homology, and homotopy groups of spaces will be concentrated in negative cohomological degrees.
## 2. Weights in rational homotopy
In this section we review the notion of weight decomposition on a differential graded algebra and promote basic homotopical constructions to the weight-graded context. For practical purposes, we focus on the commutative and Lie settings, although a general operadic approach is also possible.
### Weight-graded algebras
All algebras in this section will be over a field \(\Bbbk\) of characteristic zero. Recall the following definition from [10], [1].
**Definition 2.1**.: Let \((A,d,\cdot)\) be a non-negatively graded dg-algebra over \(\Bbbk\). A _weight decomposition_ of \(A\) is a direct sum decomposition
\[A^{n}=\bigoplus_{p\in\mathbb{Z}}A_{p}^{n}\]
of each vector space \(A^{n}\), such that:
1. \(dA_{p}^{n}\subseteq A_{p}^{n+1}\) for all \(n\geq 0\) and all \(p\in\mathbb{Z}\).
2. \(A_{p}^{n}\cdot A_{q}^{m}\subseteq A_{p+q}^{n+m}\) for all \(n,m\geq 0\) and all \(p,q\in\mathbb{Z}\).
Given \(x\in A_{p}^{n}\) we will denote by \(|x|=n\) its _cohomological degree_ and by \(w(x)=p\) its _weight_.
**Definition 2.2**.: A weight decomposition on a cdga \(A\) is said to be _positive (negative)_ if \(A^{0}\) has weight \(0\) and \(A^{i}\) is concentrated in positive (negative) weights for all \(i\neq 0\).
It follows from the definition that the cohomology of \(A\) has an induced weight decomposition by letting \(H^{n}(A)_{p}:=H^{n}(A_{p})\). Also, if \(A\) is unital weight-graded, then the unit \(1\in A\) is of weight-zero.
**Remark 2.3**.: The above definition has its obvious adaptation to other operadic algebras. In the case of a dg Lie algebra (dgla), to have a weight decomposition one just asks that both the differential and the Lie bracket preserve weights, while for dg-coalgebras the differential and coproduct should preserve the weights. In the case of \(A_{\infty}\), \(C_{\infty}\) or \(L_{\infty}\) algebras, weight decompositions should be preserved by all structure maps \(\mu_{n}\colon A^{\otimes n}\to A\).
**Definition 2.4**.: The dual of a finite type commutative dg-coalgebra (cdgc) \(C\) with a weight decomposition gives a commutative dg-algebra (cdga) \(C^{\vee}\) with a weight decomposition, by letting \((C^{\vee})_{p}:=(C_{-p})^{\vee}\). The same is true for the passage from finite type cdga's to cdgc's.
### Chevalley-Eilenberg and Quillen constructions
Under some connectivity hypotheses, there is an adjunction between cdgc's and dgla's, given by the Chevalley-Eilenberg and Quillen constructions (see for instance [12, SS 22]). We promote this adjunction to the weight-graded setting.
First, the _Chevalley-Eilenberg coalgebra construction_ associates, to any dgla \(L\), a cdgc \(\mathcal{C}^{*}_{CE}(L)\) whose underlying complex is the Chevalley-Eilenberg complex of \(L\). As a graded coalgebra we have that
\[\mathcal{C}^{*}_{CE}(L):=\Lambda^{c}(s^{-1}L),\]
where \(\Lambda^{c}(V)\) denotes the free graded cocommutative coalgebra generated by \(V\), and \(s^{-1}L\) denotes the desuspension of \(L\), which in each degree is given by \((s^{-1}L)^{n}:=L^{n+1}.\) The differential on \(\mathcal{C}^{*}_{CE}(L)\) is given by \(d=d_{\alpha}+d_{\beta}\), where
\[d_{\alpha}(s^{-1}x_{1}\wedge\cdots\wedge s^{-1}x_{n}):=\sum_{i}\pm s^{-1}x_{1 }\wedge\cdots\wedge s^{-1}d_{L}x_{i}\wedge\cdots\wedge s^{-1}x_{n}\]
and
\[d_{\beta}(s^{-1}x_{1}\wedge\cdots\wedge s^{-1}x_{n}):=\sum_{i,j}\pm s^{-1}[x_ {i},x_{j}]\wedge s^{-1}x_{1}\wedge\cdots\wedge\widehat{s^{-1}x_{i}}\wedge \cdots\wedge\widehat{s^{-1}x_{j}}\wedge\cdots\wedge s^{-1}x_{n}.\]
If the dgla \(L\) carries a weight decomposition, we obtain a weight decomposition on the cdgc \(\mathcal{C}^{*}_{CE}(L)\) by setting
\[w(s^{-1}x_{1}\wedge\cdots\wedge s^{-1}x_{n}):=w(x_{1})+\cdots+w(x_{n}).\]
The Chevalley-Eilenberg coalgebra construction is also defined for an \(L_{\infty}\)-algebra \((L,\ell_{i})\), with the differential on \(\mathcal{C}^{*}_{CE}(L)\) given by
\[d(s^{-1}x_{1}\wedge\cdots\wedge s^{-1}x_{n})=\sum_{m\leq n}\sum_{i_{1},\ldots,i_{m}}\pm s^{-1}\ell_{m}(x_{i_{1}},\ldots,x_{i_{m}})\wedge s^{-1}x_{1}\wedge\cdots\wedge\widehat{s^{-1}x_{i_{1}}}\wedge\cdots\wedge\widehat{s^{-1}x_{i_{m}}}\wedge\cdots\wedge s^{-1}x_{n}.\]
This differential is weight preserving provided that the \(L_{\infty}\)-algebra carries a weight decomposition.
By Definition 2.4, dualizing this construction, the _Chevalley-Eilenberg cdga_
\[\mathcal{A}^{*}_{CE}(L):=\mathcal{C}^{*}_{CE}(L)^{\vee}\]
of a dgla of \(L_{\infty}\)-algebra with a weight decomposition, inherits a weight decomposition. Note that this is only defined for dgla's and \(L_{\infty}\)-algebras of finite type.
Given a counital cdgc \(C\) we obtain a non-counital cdgc \(\overline{C}:=C/\mathds{k}\mathbf{1}\) on which the reduced coproduct \(\bar{\Delta}\colon\overline{C}\to\overline{C}\otimes\overline{C}\) is given by \(\bar{\Delta}(c)=\Delta(c)-c\otimes\mathbf{1}-\mathbf{1}\otimes c\). The Quillen construction associates to \(C\) the dgla \(\mathscr{L}^{*}(C)\), given as a graded object by
\[\mathscr{L}^{*}(C):=\mathbb{L}(s\overline{C}),\]
where \(\mathbb{L}V\) denotes the free graded Lie algebra generated by \(V\). Its differential is the unique derivation on \(\mathscr{L}^{*}(C)\) that satisfies
\[d(sc)=-sd_{C}(c)+[-,-]\circ(s\otimes s)\circ\bar{\Delta}(c)\]
for every \(c\in\overline{C}\). If \(C\) has a weight decomposition, we obtain a weight decomposition on \(\mathscr{L}^{*}(C)\) by letting
\[w([sc_{1},[\cdots,[sc_{n-1},sc_{n}]\cdots]]):=w(c_{1})+\cdots+w(c_{n}).\]
By Definition 2.4, if \(A\) is a graded unital cdga of finite type with a weight decomposition then its _Quillen dgla construction_
\[\mathscr{L}(A):=\mathscr{L}^{*}(A^{\vee})\]
has a weight decomposition.
The following is a promotion to the weight-graded setting of the well-known adjunction between the Chevalley-Eilenberg and Quillen constructions. The classical proof (see for instance [12]) adapts mutatis mutandis to algebras with weights after considering weight-preserving twisting morphisms:
**Proposition 2.5**.: _Let \(A\) be a cdga of finite type and \(L\) an \(L_{\infty}\)-algebra, both with weight decompositions. There are weight preserving quasi-isomorphisms_
\[q_{a}\colon\mathcal{A}_{CE}^{*}(\mathscr{L}(A))\to A\text{ and }q_{\ell}\colon \mathscr{L}^{*}(\mathcal{C}_{CE}^{*}(L))\to L.\]
### Weights on algebraic models
Recall that a cdga \(A\) is said to be _connected_ if it is unital and \(\bar{A}=A/\mathds{k}\mathbf{1}\) is concentrated in positive cohomological degrees. The homotopy theory of cdga's has its obvious adaptation to the weight-graded setting. We review some main results on minimal models for cdga's and dgla's respectively.
**Definition 2.6**.: A _weight-graded cofibrant cdga_ is the colimit of free weight-graded extensions \(A_{i}=A_{i-1}\otimes_{d}\Lambda V_{i}\), starting from \(A_{0}=\mathds{k}\), where each \(V_{i}\) is a bigraded vector space where the first grading is concentrated in degree \(i\). It is said to be _minimal_ if such extensions are ordered by non-decreasing positive cohomological degrees. If \(A\) is a cdga with a weight decomposition, then a _weight-graded cofibrant (resp. minimal) model_ of \(A\) is given by a weight preserving quasi-isomorphism \(M\to A\) where \(M\) is a weight-graded cofibrant (resp. minimal) cdga.
The original construction of minimal models for connected cdga's extends to the weight-graded setting without surprises. In Lemma 3.2 in [1] it is shown that positive weights are preserved under the minimal model construction. In fact, the same proof gives the following:
**Lemma 2.7**.: _Let \(A\) be a connected cdga with a weight decomposition whose induced decomposition in cohomology is positive. Then there is a weight-graded minimal model of \(A\) with a positive weight decomposition. This is unique up to weight-graded isomorphism._
A dgla \(L\) is said to be _connected_ if it is concentrated in non-positive cohomological degrees. Minimal dgla models are defined and constructed analogously: in the simply connected case (dgla's concentrated in strictly negative cohomological degrees) minimal dgla models exist (see e.g. [11, SS 24]). Neisendorfer [18] conjectured that connected dgla's with nilpotent cohomology also admit minimal models. To the authors' knowledge this has not been proved, but a completed version of the statement holds (see e.g. [1, Proposition 3.16]). However, quasi-free (not necessarily minimal) dgla models for such dgla's always exist and are given by \(\mathscr{L}^{*}(\mathcal{C}_{CE}^{*}(L))\). If \(\mathcal{A}_{CE}^{*}(L)\) is formal, then a minimal dgla model for \(L\) is given by \(\mathscr{L}(H^{*}(\mathcal{A}_{CE}^{*}(L)))\). By Proposition 2.5, these models promote to the weight-graded setting. Moreover, we have:
**Proposition 2.8**.: _Let \(\Lambda V\) be a connected weight-graded cofibrant cdga and \(\mathbb{L}W\) be a connected weight-graded cofibrant dgla._
1. _There is an isomorphism of bigraded vector spaces_ \[H^{*}(V)^{\vee}\cong s^{-1}H^{*}(\mathscr{L}(\Lambda V)).\]
2. _There is an isomorphism of bigraded vector spaces_ \[H^{*}(W)\cong sH^{*}(\overline{\mathcal{C}}^{*}_{CE}(\mathbb{L}W)).\]
Proof.: By Proposition 2.5 the quasi-isomorphism \(q_{a}\colon\mathcal{A}^{*}_{CE}(\mathscr{L}(\Lambda V))\to\Lambda V\) preserves weights. The induced map on the indecomposables \(s\mathscr{L}(\Lambda V)^{\vee}\to V\) is a quasi-isomorphism (see [12, SS 12.1.3]) which is also weight preserving. Dualizing this quasi-isomorphism we obtain the first statement. The second statement is proved similarly.
### Weight-graded homotopy transfer
Let \(\mathcal{O}\) be one of the operads \(\mathcal{A}ss\), \(\mathcal{C}om\) or \(\mathcal{L}ie\). Recall that for these specific operads, an \(\mathcal{O}_{\infty}\)-algebra structure on a differential graded vector space \(A\) is completely determined by a collection of maps \(\mu_{i}\colon A^{\otimes i}\to A\), with \(i\geq 1\), where \(\mu_{1}\) is the differential of \(A\) and the collection \(\{\mu_{i}\}\) satisfies certain compatibility conditions depending on the operad \(\mathcal{O}\). When considering weight-graded \(\mathcal{O}_{\infty}\)-algebras such operations are assumed to preserve weights. There are obvious notions of weight-graded \(\infty\)-\(\mathcal{O}_{\infty}\)-morphism and weight-graded minimal \(\mathcal{O}_{\infty}\)-models.
The following result appears in [1] in the Lie setting. The proof is an adaptation to the weight-graded setting of the classical proof and works more generally for other operadic algebras.
**Proposition 2.9**.: _Let \(\mathcal{O}\) be one of the operads \(\mathcal{A}ss\), \(\mathcal{C}om\) or \(\mathcal{L}ie\)._
1. _Weight-graded minimal_ \(\mathcal{O}_{\infty}\)_-models exist and are unique up to weight preserving_ \(\infty\)_-_\(\mathcal{O}_{\infty}\)_-isomorphisms._
2. _The inverse of a weight preserving_ \(\infty\)_-_\(\mathcal{O}_{\infty}\)_-isomorphism is also weight preserving._
3. _A weight preserving_ \(\infty\)_-_\(\mathcal{O}_{\infty}\)_-quasi-isomorphism has a weight preserving homotopy inverse._
Proof.: We sketch the idea of the proof for completeness. Let \((A,\{\mu_{k}\})\) be a weight-graded \(\mathcal{O}_{\infty}\)-algebra and \(B\) a weight-graded dg vector space. Assume there is a homotopy retract
\[i\colon B\to A,\qquad p\colon A\to B,\qquad h\colon A\to A,\qquad\operatorname{id}_{A}-ip=d_{A}h+hd_{A},\]
where \(p\), \(i\) and \(h\) are weight preserving and \(p\) and \(i\) are cochain maps. Then there is an explicit formula for a transferred \(\mathcal{O}_{\infty}\)-algebra structure \((B,\{\eta_{k}\})\), where each \(\eta_{k}\) is given as a linear combination of compositions of \(p\), \(i\), \(h\) and \(\mu_{\ell}\) with \(\ell\leq k\) (we refer the reader familiar with the theory of operads to [12, SS 10.3]). Since all of these maps are weight preserving, it follows that \(\eta_{k}\) is also weight preserving.
One also shows that \(i\) extends to an \(\infty\)-\(\mathcal{O}_{\infty}\)-quasi-isomorphism
\[i^{\infty}\colon(B,\{\eta_{i}\})\rightsquigarrow(A,\{\mu_{i}\}),\]
whose explicit formula is again given by linear combinations of maps given as compositions of the maps mentioned earlier (see [12, SS 10.3.5]). Since there exists a weight preserving homotopy retract in which \(B=H^{*}(A)\) the existence part of statement _(a)_ follows.
For the uniqueness part, we need a bit more: if \(B=H^{*}(A)\), then one can show that \(p\) also extends to an \(\infty\)-\(\mathcal{O}_{\infty}\)-quasi-isomorphism \(p^{\infty}\), which by similar arguments as above is weight preserving.
Statement _(b)_ is proved again by inspecting the explicit formula for the inverse of a weight preserving \(\infty\)-isomorphism \(\{f_{k}\}\colon A\rightsquigarrow B\) (which again is a linear combination of compositions of maps that are weight preserving). Given all this, the proof of [12, Theorem 10.4.4] holds word for word in this setting as well, yielding the proof of _(c)_.
The uniqueness part of _(a)_ follows, since any two minimal weight-graded models will have a weight-graded \(\infty\)-quasi-isomorphism between them, which is a weight-graded \(\infty\)-isomorphism by minimality.
### Algebraic models of spaces
The rational homotopy type of any nilpotent space \(X\) of finite type is algebraically modelled by either cdga's (as in Sullivan's approach) or by dgla's (as in Quillen's approach). The Sullivan minimal model of a connected topological space \(X\), defined uniquely up to isomorphism, is given by a minimal cdga cofibrant model of the algebra of piece-wise linear forms \(\mathcal{A}_{pl}(X)\) on \(X\). On the other hand, if \(X\) is simply connected, then the Quillen minimal
model of \(X\) is given by a minimal dgla model of the Lie algebra \(\lambda(X)\), where \(\lambda\) denotes Quillen's construction in [10]. Note that \(\lambda\) is only defined for simply connected spaces, but the theory of dgla models can be extended to nilpotent spaces of finite type by taking first a Sullivan minimal model and then applying Quillen's functor \(\mathscr{L}\) from finite type cdga's to dgla's (see [14]). A further extension of Quillen models to general topological spaces (not necessarily connected or nilpotent) is treated in [11]. Recall that a topological space is said to be _formal_ if it has a formal Sullivan model. It is called _coformal_ if it has a formal Quillen model.
By Proposition 2.5 and Proposition 2.9, a weight graded algebraic model for \(X\) (cdga, \(C_{\infty}\)-algebra, dgla, \(L_{\infty}\)-algebra) is uniquely represented, up to weight-graded isomorphism in the corresponding category, by either a weight-graded cdga minimal model, a weight graded minimal \(C_{\infty}\)-algebra model or a weight graded minimal \(L_{\infty}\)-algebra model.
In the simply connected case, a weight graded algebraic model is also represented by a unique weight graded minimal dgla model. In the non-simply connected case, it is not known whether minimal dgla models always exist (Neisendorfer conjectured their existence, and a proof of a completed version of the Neisendorfer conjecture can be found in [11, Proposition 3.16]). In Section 4.4 we discuss properties that guarantee the existence of such models.
**Remark 2.10**.: A non-trivial weight decomposition on a Sullivan minimal model \(\Lambda V\) for a nilpotent space \(X\) of finite type is a source of weight decompositions on the rational homotopy groups in two different ways. First, there is an induced weight decomposition on \(H^{*}(V)\cong\pi_{*}^{\mathbb{Q}}(X)^{\vee}\). Second, there is an induced weight decomposition on
\[H^{*}(\mathscr{L}(\Lambda V))\cong\pi_{-*}^{\mathbb{Q}}(\Omega X)=\pi_{-*-1}^{ \mathbb{Q}}(X).\]
By Proposition 2.8 _(a)_ these two weight decompositions coincide.
Similarly, a non-trivial weight decomposition on a dgla model for \(X\) is a source of two weight decompositions on the rational (co)homology of \(X\), which coincide by Proposition 2.8 _(b)_.
## 3. Restrictions on weights
In this section we consider various properties that restrict the range of weights in our algebras. We first introduce the formality and coformality induced weights. Then review the theory of positive and pure weights and introduce an intermediate property which gives vanishing of higher operations in cohomology.
### Formality and coformality induced weights
If a nilpotent space \(X\) of finite type is formal, then of course it has a weight-graded model, namely its cohomology \(H^{*}(X)\) with weights given by \(w(x)=|x|\). The theory of weight-graded minimal models gives a unique up to isomorphism weight-graded minimal cdga model \(\Lambda V\to H^{*}(X)\) and a unique weight-graded dgla model \(\mathscr{L}(H^{*}(X))\). We call these the _formality induced weight decompositions_ on the algebraic models for the space \(X\).
**Example 3.1**.: Since \(\mathbb{C}P^{k}\) is formal, its cohomology is its own cdga model: the algebra \(\mathbb{Q}[u]/(u^{k+1})\), \(|u|=2\), \(w(u)=2\) with the trivial differential is a weight-graded cdga model for \(\mathbb{C}P^{k}\). The formality induced weight decomposition on the dgla model of \(\mathbb{C}P^{k}\) is explicitly given by
\[\mathbb{L}(v_{1},\ldots,v_{k}),\,|v_{i}|=1-2i,\,w(v_{i})=-w(u^{i})=-2i,\,d(v_ {i})=\frac{1}{2}\sum_{a+b=i}[v_{a},v_{b}].\]
If a nilpotent space \(X\) of finite type is coformal, then of course it has a weight-graded Lie model, namely the rational homotopy groups \(\pi_{*}(\Omega X)\otimes\mathbb{Q}\) on the based loop space of \(X\) with weights given by \(w(x)=|x|\). If \(X\) is simply connected, there is a unique up to isomorphism weight-graded minimal dgla model \(\mathbb{L}W\to\pi_{*}(\Omega X)\otimes\mathbb{Q}\) and a unique weight-graded Sullivan minimal model \(\mathcal{A}_{CE}^{*}(\pi_{*}(\Omega X)\otimes\mathbb{Q})\). We call these the _coformality induced weight decompositions_ on the algebraic models for \(X\). If \(X\) is not simply connected it is still possible to deduce the existence of minimal dgla models, for instance when \(X\) is formal (other cases discussed in Section 4.4).
We state two results about formality and coformality induced weight decompositions that we will use later on.
**Proposition 3.2**.: _Let \(X\) be a nilpotent space of finite type.
1. _If_ \(X\) _is formal, then the formality induced weight decomposition on its Quillen minimal model_ \(\mathbb{L}W\) _satisfies_ \(w(x)<|x|\) _for every non-trivial_ \(x\in\mathbb{L}W\) _of pure weight_ \(w(x)\) _and cohomological degree_ \(|x|\)_._
2. _If_ \(X\) _is coformal, then the coformality induced weight decomposition on its Sullivan minimal model_ \(\Lambda V\) _satisfies_ \(w(x)<|x|\) _for every non-trivial_ \(x\in\Lambda V\) _of pure weight_ \(w(x)\) _and cohomological degree_ \(|x|\)_._
Proof.: Given the formality induced weight decomposition on \(\mathbb{L}W\), there is an isomorphism of weight-graded vector spaces \(W\cong s(H^{*}(X;\mathbb{Q}))^{\vee}\) (see Remark 2.10). In particular, for every \(x\in W\) we have that \(w(x)=|x|-1<|x|\). The proof of the second statement follows similarly.
**Remark 3.3**.: The above inequalities hold for the cohomologies \(H^{*}(\mathbb{L}W)=\pi_{*}(\Omega X)\otimes\mathbb{Q}\) and \(H^{*}(\Lambda V)=H^{*}(X;\mathbb{Q})\) and thus for the minimal \(L_{\infty}\)- and \(C_{\infty}\)-models in respective case.
### Weights on a tilted segments and purity
We now consider weights restricted to a certain segment, generalizing the purity context. We prove that segmented weights give homotopical restrictions.
We will again assume that \(\mathcal{O}\) is one of the operads \(\mathcal{A}ss\), \(\mathcal{C}om\) or \(\mathcal{L}ie\).
**Definition 3.4**.: Let \(\alpha>0\) be a rational number and \(k\geq 0\) an integer. We say that a weight-graded dg \(\mathcal{O}\)-algebra \(A\) is _\((\alpha,k)\)-segmented_ if \(H^{n}(A)_{p}\) is non-trivial only when
\[\alpha n\leq p\leq\alpha(n+k).\]
**Example 3.5**.: If \(A\) is \((1,2)\)-segmented then the weights in the cohomology \(H^{*}(A)\) are concentrated in the segment \(n\leq p\leq n+2\), where \(n\) denotes the cohomological degree and \(p\) the weight.
**Remark 3.6**.: Being \((\alpha,0)\)-segmented is equivalent to being \(\alpha\)-pure in the sense of [1]. If \(\alpha=a/b\) with \(a\) and \(b\) coprime, this implies that \(H^{*}(A)\) is concentrated in degrees that are divisible by \(b\), and \(H^{kb}(A)\) is pure of weight \(ka\). In particular, the case of \((1,0)\)-segmented is the classical purity one encounters for compact Kahler manifolds. Note that \((\alpha,k)\)-segmented implies \((\alpha,k^{\prime})\)-segmented for every \(k^{\prime}\geq k\).
We will say that an \(\mathcal{O}_{\infty}\)-algebra \((A,\mu_{1},\mu_{2},\dots)\) has a minimal \(\mathcal{O}_{\infty}^{\leq k}\)-model, if it has a minimal \(\mathcal{O}_{\infty}\)-model with vanishing operations \(\mu_{m}=0\) for \(m>k\). Note that having a minimal \(\mathcal{O}_{\infty}^{\leq 2}\)-model is equivalent to formality of the \(\mathcal{O}_{\infty}\)-algebra. We have:
**Proposition 3.7**.: _Every weight-graded \((\alpha,k)\)-segmented dg \(\mathcal{O}\)-algebra has a minimal \(\mathcal{O}_{\infty}^{\leq k+2}\)-model._
Proof.: Let \((H,0,\mu_{2},\mu_{3},\dots)\) be a weight-graded minimal \(\mathcal{O}_{\infty}\)-model for \(A\). Let \(m>k+2\) and assume to get a contradiction that there are elements \(x_{1},\dots,x_{m}\in H\) of homogeneous weights and cohomological degrees such that \(\mu_{m}(x_{1}\otimes\dots\otimes x_{m})\neq 0\). By taking into account that
\[|\mu_{m}(x_{1}\otimes\dots\otimes x_{m})|=2-m+\sum_{i=1}^{m}|x_{i}|,\]
it follows that the weight of \(\mu_{m}(x_{1}\otimes\ldots\otimes x_{m})\) satisfies
\[w(\mu_{m}(x_{1},\ldots,x_{m}))\leq\alpha(k+2-m+\sum|x_{i}|)<\alpha\sum|x_{i}|.\]
Since \(\mu_{m}\) is weight preserving, it follows that
\[w(\mu_{m}(x_{1}\otimes\ldots\otimes x_{m}))\geq\alpha\sum|x_{i}|.\]
The two inequalities above contradict each other, yielding that \(\mu_{m}=0\).
**Remark 3.8**.: A similar argument as in the proof of Proposition 3.7 gives vanishing of Massey products of length \(>k+2\) for any dga with an \((\alpha,k)\)-segmented weight decomposition.
**Example 3.9**.: On the cohomology of a \(d\)-dimensional smooth complex algebraic variety, weights arising from mixed Hodge theory are concentrated in a triangle: if \(p\) denotes the weight and \(n\) the cohomological degree, then the range of weights is given by the inequalities \(p\leq d\) and \(n\leq p\leq 2n\).
We see that any \(d\)-dimensional smooth complex algebraic variety is \((1,d/2)\)-segmented, yielding that the ordinary \(n\)-Massey products vanish for \(n>d/2+2\).
**Example 3.10**.: In some situations, there are weight decompositions up to a certain degree, which are segmented up to this degree. Then the vanishing of higher operations is still true in the corresponding range of degrees. This is the case, for instance, of any compact complex manifold admitting a transverse Kahler structure on a fundamental central foliation: by [10] such a manifold \(M\) admits a model which, in degrees \(\leq 2\) has a weight decomposition where \(H^{1}(M)\) is concentrated in weights \(1\) and \(2\), and \(H^{2}(M)\) has weights \(2\), \(3\) and \(4\). As a consequence, one infers that there are no \(k\)-tuple Massey products of elements in \(H^{1}(M)\) for \(k\geq 5\).
### Positive weights
Positive weights, together with finite-dimensional cohomology, lead to segmented weights and hence give vanishing results for certain higher operations. First, note that there are various equivalent characterizations of a space having positive weights:
**Proposition 3.11**.: _Given a nilpotent space \(X\) of finite type the following are equivalent:_
1. \(X\) _has a cdga model with a positive weight decomposition._
2. \(X\) _has a cdga model with a weight decomposition such that the weights are positive on cohomology._
3. \(X\) _has a dgla model with a negative weight decomposition._
4. \(X\) _has a dgla model with a weight decomposition such that the weights are negative on cohomology._
Proof.: That _(a)_ and _(b)_ are equivalent follows from Lemma 2.7. The implication _(a)_\(\Rightarrow\)_(c)_ follows by applying \(\mathscr{L}\) to \(A\) if \(A\) is of finite type. If \(A\) is not of finite type, we might apply \(\mathscr{L}\) to a weight graded minimal Sullivan model of it, again using Lemma 2.7.
The implication _(c)_\(\Rightarrow\)_(a)_ follows by applying \(\mathcal{A}^{*}_{CE}\) to \(L\) if it is of finite type. If \(L\) is not of finite type, we may apply \(\mathcal{A}^{*}_{CE}\) to a minimal \(L_{\infty}\)-algebra model of it; such weight-graded minimal models exist by Proposition 2.9. Hence _(a)_\(\Leftrightarrow\)_(c)_.
The implication _(c) \(\Rightarrow\) (d)_ is clear. For _(d) \(\Rightarrow\) (c)_, let \(L\) be a dgla model for \(X\) where \(H^{*}(L)\) has negative weight decomposition. By Proposition 2.9 it follows that \(L\) has a negatively graded minimal \(L_{\infty}\)-algebra model \((H^{*}(L),\{\ell_{i}\})\). Now the dgla \(\mathscr{L}(\mathcal{A}_{CE}^{*}(H^{*}(L),\{\ell_{i}\}))\) is a dgla model for \(X\) which is negatively graded.
In view of the above result, we define:
**Definition 3.12**.: A topological space is said to have _positive weights_ if it admits a Sullivan model with a positive weight decomposition or, equivalently, a Quillen model with a negative weight decomposition.
The following is straightforward:
**Proposition 3.13**.: _Let \(A\) be a dga with a positive weight decomposition. If \(H^{*}(A)\) is finite dimensional, then there is some \(\alpha>0\) and \(k\geq 0\), so that \(A\) is \((\alpha,k)\)-segmented._
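Indeed, if \(H^{n}(A)\) is non-zero only for \(n\leq N\) (with \(N\geq 1\)) and the positive weights occurring in \(H^{*}(A)\) are bounded above by \(P\) (both finite since \(H^{*}(A)\) is finite dimensional), one possible choice is \(\alpha=1/N\) and \(k=NP\): for \(n\geq 1\) we then have
\[\alpha n\leq 1\leq p\qquad\text{and}\qquad p\leq P\leq\alpha(1+k)\leq\alpha(n+k),\]
while \(H^{0}(A)\) is concentrated in weight zero.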
As a consequence, \(k\)-tuple Massey products on the cohomology of topological spaces with positive weights vanish for sufficiently large \(k\).
## 4. Applications
In this last section, we prove our main results on mapping spaces, classifying spaces of homotopy automorphisms and free and based loop spaces. We also show that nilpotent spaces with positive weights always admit minimal dgla models.
### Mapping spaces
Given a connected space \(X\), and a nilpotent space \(Y\) of finite \(\mathbb{Q}\)-type, Berglund constructs in [1] explicit \(L_{\infty}\)-algebra models for the different connected components of \(\operatorname{map}(X,Y)\). We show that when \(X\) and \(Y\) carry weight decompositions, it is possible to equip the Berglund models with a weight decomposition, under certain conditions. This applies, for instance, to components of algebraic maps between complex algebraic varieties or holomorphic maps between compact Kahler manifolds.
We first recall the main constructions treated in [1].
Given an \(L_{\infty}\)-algebra \((L,\{\ell_{i}\})\) let \(\Gamma^{k}L\) be the span of all possible elements of \(L\) one can form using at least \(k\) elements from \(L\) and the maps \(\ell_{1},\ell_{2},\dots\) (for instance, \(\ell_{3}(x,\ell_{2}(y,z),\ell_{1}(t))\) belongs to \(\Gamma^{4}L\)). This gives a filtration
\[L=\Gamma^{0}L\supseteq\Gamma^{1}L\supseteq\cdots.\]
The \(L_{\infty}\)-algebra is called _degree-wise nilpotent_ if for every cohomological degree \(n\) there is some \(k\) such that \((\Gamma^{k}L)^{n}=0\). Minimal \(L_{\infty}\)-models for nilpotent spaces of finite \(\mathbb{Q}\)-type turn out to be degree-wise nilpotent ([1, Theorem 2.3]).
The Maurer-Cartan elements of a degree-wise nilpotent \(L_{\infty}\)-algebra \((L,\{\ell_{i}\})\) are given by
\[\operatorname{MC}(L):=\{\tau\in L^{1}\ |\ \sum\ell_{i}(\tau^{\otimes i})/i!=0\}.\]
Note that the sum \(\sum\ell_{i}(\tau^{\otimes i})/i!\) converges due to degree-wise nilpotency.
Given a cdga \(A\) and an \(L_{\infty}\)-algebra \((L,\{\ell_{i}\})\), the tensor product \(A\otimes L\) is an \(L_{\infty}\)-algebra \((A\otimes L,\{\ell_{i}^{A}\})\) where
\[\ell_{i}^{A}((a_{1}\otimes x_{1})\otimes\cdots\otimes(a_{i}\otimes x_{i}))= \pm a_{1}\cdots a_{i}\otimes\ell_{i}(x_{1}\otimes\cdots\otimes x_{i}).\]
If \(A\) is connected and bounded and \((L,\{\ell_{i}\})\) is degree-wise nilpotent, then \((A\otimes L,\{\ell_{i}^{A}\})\) is again a degree-wise nilpotent \(L_{\infty}\)-algebra and it makes sense to consider the set of Maurer-Cartan elements \(\operatorname{MC}(A\otimes L)\).
Assume now that \(L\) is of finite type. Given an element \(\tau\in\operatorname{MC}(A\otimes L)\), there is an associated morphism
\[\varphi_{\tau}\in\operatorname{Hom}_{\operatorname{cdga}}(\mathcal{A}_{CE}^{* }(L),A)\]
defined as follows: Since \(L\) is of finite type, we have that the underlying graded algebra structure of \(\mathcal{A}_{CE}^{*}(L)\) is given by the free graded commutative algebra \(\Lambda(s^{-1}L)^{\vee}\). Then for \(\xi\in(s^{-1}L)^{\vee}\) we set
\[\varphi_{\tau}(\xi):=(1\otimes\xi)(\tau).\]
If either \(A\) or \(L\) is finite dimensional, Berglund [1, Proposition 6.1] shows that there is an isomorphism of sets
\[\operatorname{MC}(A\otimes L)\xrightarrow{\ \cong\ }\operatorname{Hom}_{\operatorname {cdga}}(\mathcal{A}_{CE}^{*}(L),A);\quad\tau\mapsto\varphi_{\tau}.\]
Moreover, the equivalence respects homotopy: two elements in \(\operatorname{MC}(A\otimes L)\) are gauge equivalent if and only if their corresponding maps in \(\operatorname{Hom}_{\operatorname{cdga}}(\mathcal{A}_{CE}^{*}(L),A)\) are homotopic (in the sense of [1, SS 12 (b)]).
**Definition 4.1**.: Given a weight-graded dgla \(L\), let \(W_{0}\operatorname{MC}(L)\) denote the set of weight-zero Maurer-Cartan elements:
\[W_{0}\operatorname{MC}(L):=\{\tau\in\operatorname{MC}(L)\ |\ w(\tau)=0\}= \operatorname{MC}(L_{0}).\]
**Lemma 4.2**.: _Let \(A\) be a weight-graded connected cdga and let \(L\) be a weight-graded connected dgla of finite type. Let \(\tau\in\operatorname{MC}(A\otimes L)\). Then \(\tau\in W_{0}\operatorname{MC}(A\otimes L)\) if and only if \(\varphi_{\tau}\) is weight preserving._
Proof.: It is enough to prove that \(\varphi_{\tau}\) preserves the weight of an element of the indecomposables \(\xi\in(s^{-1}L)^{\vee}\). Assume \(\xi:s^{-1}L\to\mathbb{Q}\) is of weight \(n\), that is, it raises the weight by \(n\); equivalently, it vanishes on elements of weight \(w\neq-n\). For a weight-zero Maurer-Cartan element
\[\tau=\sum a_{i}\otimes x_{i}\in W_{0}\operatorname{MC}(A\otimes L)\]
we must have that \(w(a_{i})+w(x_{i})=0\) and hence \(w(a_{i}\otimes\xi(s^{-1}x_{i}))=w(\xi)\). In particular
\[w(\varphi_{\tau}(\xi))=w\big((1\otimes\xi)(\tau)\big)=w\left(\sum a_{i}\otimes \xi(s^{-1}x_{i})\right)=w(\xi).\]
Hence \(\varphi_{\tau}\) is weight preserving.
For the following result, we will require two standard constructions on \(L_{\infty}\)-algebras:
**Definition 4.3**.: Given a degree-wise nilpotent \(L_{\infty}\)-algebra \((L,\{\ell_{i}\})\) and a Maurer-Cartan element \(\tau\in\operatorname{MC}(L)\), let \((L^{\tau},\ell_{i}^{\tau})\) denote the \(\tau\)-twisted \(L_{\infty}\)-algebra where
\[\ell_{i}^{\tau}(x_{1}\otimes\cdots\otimes x_{i})=\sum_{k\geq 0}\frac{1}{k!} \ell_{k+i}(\tau^{\otimes k}\otimes x_{1}\otimes\cdots\otimes x_{i}).\]
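For orientation, in the dgla case this recovers the usual twisting: if only \(\ell_{1}=d\) and \(\ell_{2}=[-,-]\) are non-zero, then
\[\ell_{1}^{\tau}(x)=dx+[\tau,x],\qquad\ell_{2}^{\tau}(x,y)=[x,y],\]
and all higher operations vanish.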
**Definition 4.4**.: The _\(n\)-connected cover_ of an \(L_{\infty}\)-algebra \((L,\{\ell_{i}\})\), denoted by \((L\langle n\rangle,\{\ell_{i}\})\), has underlying graded vector space given by
\[L\langle n\rangle^{i}=\left\{\begin{array}{ll}L^{i}&\text{ if }i<n,\\ \ker(L^{n}\xrightarrow{d}L^{n+1})&\text{ if }i=n,\\ 0&\text{ if }i>n.\end{array}\right.\]
The structure maps are given by restrictions.
We have all the preliminaries to state a weight-graded version of [1, Theorem 1.5].
**Theorem 4.5**.: _Let \(X\) be a connected space with a weight-graded cdga model \(A\) and let \(Y\) be a connected nilpotent space of finite \(\mathbb{Q}\)-type with a weight-graded degree-wise nilpotent \(L_{\infty}\)-algebra model \((L,\{\ell_{i}\})\), where either \(A\) or \(L\) is finite dimensional. Given a map \(f\colon X\to Y\) whose rational homotopy class corresponds to some \(\tau\in W_{0}\operatorname{MC}(A\otimes L)\), the \(L_{\infty}\)-algebra \((A\otimes L)^{\tau}\langle 0\rangle\) is a weight-graded \(L_{\infty}\)-algebra model for \(\operatorname{map}^{f}(X,Y)\)._
Proof.: If \(\mathfrak{g}\) is a weight-graded \(L_{\infty}\)-algebra and \(\tau\in W_{0}\operatorname{MC}(\mathfrak{g})\), then \(\mathfrak{g}^{\tau}\) is also a weight-graded \(L_{\infty}\)-algebra. The rest follows from [1, SS 6].
**Remark 4.6**.: As shown in Example 4.8 below, algebraic varieties always have finite-dimensional models and so the above theorem applies. An alternative description for algebraic models of mapping spaces, where finite-dimensionality of the initial algebraic models is not necessary, is given in Proposition 1.9 of [10]. This description might be convenient in order to study the whole mixed Hodge structure on mapping spaces of algebraic varieties.
We review two main examples where the above theorem applies:
**Example 4.7**.: Let \(f:X\to Y\) be a formal map between topological spaces of finite type. For instance, this situation applies to any holomorphic map between compact Kahler manifolds. The map
\[f^{*}:A(Y):=H^{*}(Y)\to A(X):=H^{*}(X)\]
is a finite-dimensional model of \(f\) which is weight-graded by letting
\[A^{n}_{n}(X)=H^{n}(X)\text{ and }A^{n}_{p}=0\text{ otherwise.}\]
Let \(L_{Y}\) denote the minimal \(L_{\infty}\)-algebra model with the formality induced weight decomposition. Since \(L_{Y}\) is degree-wise nilpotent, it follows that \(\mathcal{A}^{*}_{CE}(L_{Y})\) is a minimal model of \(A(Y)\) as weight graded algebras. We obtain a weight preserving morphism
\[\varphi:\mathcal{A}^{*}_{CE}(L_{Y})\to A(X).\]
The Maurer-Cartan element \(\tau_{\varphi}\in\operatorname{MC}(H^{*}(X)\otimes L_{Y})\) corresponding to \(\varphi\) will then have weight zero.
**Example 4.8**.: Let \(f:X\to Y\) be a morphism of connected complex algebraic varieties. Such a map is modelled by the morphism of bigraded cdga's
\[f^{*}:(E_{1}(Y),d_{1})\longrightarrow(E_{1}(X),d_{1})\]
given by the multiplicative weight spectral sequence (see [10]). It is a finite-dimensional model and moreover admits a weight decomposition by rearranging the bidegrees: let
\[A^{n}_{p}(X):=E_{1}^{n-p,p}(X).\]
Then the map \(f^{*}:A(Y)\to A(X)\) is a finite-dimensional weight-graded cdga model of \(f\). Furthermore, in the case of smooth algebraic varieties (not necessarily projective) the weight decompositions are positive. Arguing as in the previous example, we obtain a weight preserving morphism
\[\varphi:\mathcal{A}^{*}_{CE}(L_{Y})\to A(X).\]
The Maurer-Cartan element corresponding to \(\varphi\) will then have weight zero. Note that in the case of a smooth projective variety \(X\) we recover the weight decomposition of the previous example.
**Example 4.9**.: Let \(e\colon\mathbb{C}P^{k}\hookrightarrow\mathbb{C}P^{k+n}\), \(n\geq 1\), be the standard embedding which is modelled by the projection
\[\mathbb{Q}[u]/u^{k+n+1}\to\mathbb{Q}[u]/u^{k+1},\]
which is a weight preserving morphism. Hence \(\operatorname{map}^{e}(\mathbb{C}P^{k},\mathbb{C}P^{k+n})\) has weight-graded algebraic models by Theorem 4.5, where the weights are inherited from the formality induced weights on the algebraic models for \(\mathbb{C}P^{k}\) and \(\mathbb{C}P^{k+n}\). We will show that \(\operatorname{map}^{e}(\mathbb{C}P^{k},\mathbb{C}P^{k+n})\) has a minimal \(\mathcal{C}m_{\infty}^{\leq k+2}\)-model by endowing a Sullivan minimal model, computed in [1, SS 7], with weights. A minimal model for \(\operatorname{map}^{e}(\mathbb{C}P^{k},\mathbb{C}P^{k+n})\) is given by
\[A=\Lambda(z,y_{n},y_{n+1},\dots,y_{n+k}),\quad|z|=2,\ |y_{r}|=2r+1,\quad dz=0,\ dy_{r}=z^{r+1}\]
(see [1] for further details). By explicit computations using the formality induced weights one gets that
\[w(z)=2,\ w(y_{r})=2r+2.\]
We will prove that \(A\) is \((1,k)\)-segmented, and therefore has a minimal \(\mathcal{C}m_{\infty}^{\leq k+2}\)-model by Proposition 3.7. First we observe that \(w(z)=|z|\) and that \(w(y_{j})=|y_{j}|+1\). Hence
\[w(z^{a}y_{i_{1}}\cdots y_{i_{p}})=|z^{a}y_{i_{1}}\cdots y_{i_{p}}|+p.\]
Since \(y_{j}^{2}=0\), it follows that if \(p>k+1\) then \(z^{a}y_{i_{1}}\cdots y_{i_{p}}=0\). From this we may deduce \((1,k+1)\)-segmentation. However, we observe that any element of cohomological degree \(\ell\) and of weight \(\ell+k+1\) is a multiple of \(z^{a}y_{n}\cdots y_{n+k}\) (where \(2a=\ell-\sum_{i=n}^{n+k}(2i+1)\)), and those elements are non-cycles. Hence there is no cohomology class of cohomological degree \(\ell\) and of weight \(\ell+k+1\). From this we conclude the \((1,k)\)-segmentation.
**Remark 4.10**.: Note that the deduction above is not sharp: weights imply that there is a \(\mathcal{C}m_{\infty}^{\leq k+2}\)-model for \(\operatorname{map}^{e}(\mathbb{C}P^{k},\mathbb{C}P^{k+n})\) but, in fact, \(\operatorname{map}^{e}(\mathbb{C}P^{k},\mathbb{C}P^{k+n})\) is formal. Indeed, consider the cdga \(B=\Lambda(z,y_{n},w_{n+1},\dots,w_{n+k})\) where \(z\) and \(y_{n}\) is as before but with \(|w_{r}|=2r+1\) and \(dw_{r}=0\). Define a morphism \(\varphi\colon A\to B\) where
\[\varphi(z)=z,\,\varphi(y_{n})=y_{n}\text{ and }\varphi(y_{n+i})=w_{n+i}+z^{i}y_{ n}.\]
An inverse \(\psi\colon B\to A\) is given by
\[\psi(z)=z,\,\psi(y_{n})=y_{n}\text{ and }\psi(w_{n+i})=y_{n+i}-z^{i}y_{n}.\]
In particular \(B\) is a cdga model for \(\operatorname{map}^{e}(\mathbb{C}P^{k},\mathbb{C}P^{k+n})\). Moreover, we can decompose \(B\) as the tensor product \(\Lambda(z,y_{n})\otimes\Lambda(w_{n+1})\otimes\cdots\otimes\Lambda(w_{n+k})\), which is a model for the product
\[\mathbb{C}P^{n}\times S^{2n+3}\times S^{2n+5}\times\cdots\times S^{2(k+n)+1}.\]
In the next two results we provide conditions that guarantee positive weights for certain components of mapping spaces.
**Proposition 4.11**.: _Let \(X\) be a connected space and \(Y\) be a nilpotent space of finite \(\mathbb{Q}\)-type. Let \(\Lambda V\) be a minimal model for \(X\) and let \(L_{Y}\) be a minimal model \(L_{\infty}\)-algebra model for \(Y.\)_
1. _If_ \(X\) _and_ \(Y\) _are formal and_ \(\dim(H^{*}(X;\mathbb{Q}))<\infty\) _and_ \(f\colon X\to Y\) _corresponds to some weight-zero Maurer-Cartan element_ \(\tau\in W_{0}\operatorname{MC}(H^{*}(X;\mathbb{Q})\otimes L_{Y})\) _with respect to the formality induced weights, then_ \(\operatorname{map}^{f}(X,Y)\) _has positive weights._
2. _If_ \(X\) _and_ \(Y\) _are coformal and_ \(\dim(\pi^{\mathbb{Q}}_{*}(Y))<\infty\) _and_ \(f\colon X\to Y\) _corresponds to some weight-zero Maurer-Cartan element_ \(\tau\in W_{0}\operatorname{MC}(\Lambda V\otimes\pi^{\mathbb{Q}}_{*}(Y))\) _with respect to the coformality induced weights, then_ \(\operatorname{map}^{f}(X,Y)\) _has positive weights._
Proof.: _(a)_ Let \(a\otimes b\in H^{*}(X)\otimes L_{Y}\) be an element of non-positive cohomological degree, so that \(|a|+|b|\leq 0\). Since \(X\) and \(Y\) are formal we have \(w(a)=|a|\) and \(w(b)<|b|\) by Proposition 3.2_(a)_. In particular \(w(a\otimes b)=w(a)+w(b)<0\). From this we conclude that \((H^{*}(X;\mathbb{Q})\otimes L_{Y})^{\tau}\langle 0\rangle\) is a dgla model for \(\operatorname{map}^{f}(X,Y)\) with negative weights, and thus \(\operatorname{map}^{f}(X,Y)\) admits a cdga model with positive weights by Proposition 3.11. Assertion _(b)_ is proved similarly, using Proposition 3.2_(b)_ instead.
Theorem 1.2 in the introduction for holomorphic maps between Kahler manifolds now follows from Proposition 4.11 together with Example 4.7.
We may now prove Theorem 1.1 in the introduction. Note that in order to get a non-trivial weight decomposition on the tensor product \(A\otimes L\), it is enough that one of the algebras has a non-trivial weight decomposition. Assuming that \(L\) has a negative weight decomposition and \(A\) has a trivial weight decomposition, we get that \((A\otimes L)\langle 0\rangle\) has a negative weight decomposition. The only weight-zero Maurer-Cartan element is thus the trivial one, which corresponds to the constant map.
**Theorem 4.12**.: _Let \(X\) be of the homotopy type of a finite CW-complex and let \(Y\) be a connected nilpotent space of finite \(\mathbb{Q}\)-type having positive weights. Let \(\Psi\colon X\to Y\) denote a constant map. Then \(\operatorname{map}^{\Psi}(X,Y)\) has positive weights._
Proof.: Since \(X\) is of the homotopy type of a finite CW-complex, it has a finite dimensional connected cdga model \(A\), see for instance [10, Example 6, p. 146]. Endow \(A\) with the trivial weight decomposition. Since the Sullivan minimal model for \(Y\) has a positive weight decomposition, it follows from Proposition 3.11 that \(Y\) has an \(L_{\infty}\)-algebra model \(L\) with a negative weight decomposition. Therefore, by Proposition 4.11, \((A\otimes L)^{\operatorname{triv}}\langle 0\rangle\), where \(\operatorname{triv}\) is the trivial Maurer-Cartan element, is an \(L_{\infty}\)-algebra model for \(\operatorname{map}^{\Psi}(X,Y)\) which admits a negative weight decomposition.
### Classifying spaces of homotopy automorphisms
For a topological space \(X\), let \(\operatorname{aut}(X)\) (\(\operatorname{aut}_{*}(X)\)) denote the topological monoid of (pointed) homotopy automorphisms, i.e. the space of (pointed) endomorphisms \(\varphi\colon X\to X\) that are homotopy equivalences. If \(X\) is of the homotopy type of a finite CW-complex, then there is a universal \(X\)-fibration
\[X\to B\operatorname{aut}_{*}(X)\to B\operatorname{aut}(X).\]
Given a map \(f\colon B\to B\operatorname{aut}(X)\), we may apply the based loop space functor which up to homotopy yields a map \(\Omega f\colon\Omega B\to\operatorname{aut}(X)\). If \(B\) is simply connected, then \(\Omega B\) is connected, and hence \(\Omega f\) factors through the connected component of \(\operatorname{aut}(X)\) that contains the identity. We denote this connected component by \(\operatorname{aut}_{\circ}(X)\), which again is a topological monoid. Delooping this factorization gives that \(f\) factors through \(B\operatorname{aut}_{\circ}(X)\) if \(B\) is simply connected. Hence \(B\operatorname{aut}_{\circ}(X)\) classifies \(X\)-fibrations over simply connected spaces, via the \(X\)-fibration
\[X\to B\operatorname{aut}_{*,\circ}(X)\to B\operatorname{aut}_{\circ}(X),\]
where \(\operatorname{aut}_{*,\circ}(X)\) denotes the connected component of the identity in \(\operatorname{aut}_{*}(X)\).
Since \(B\operatorname{aut}_{\circ}(X)\) and \(B\operatorname{aut}_{*,\circ}(X)\) are simply connected and of finite type, they have rational cdga and dgla models. The dgla models are expressed in terms of derivations of certain algebras.
**Definition 4.13**.: Let \(A\) be a cdga, then \(\operatorname{Der}(A)\) is the _dgla of graded derivations_,
\[\operatorname{Der}(A)=\{\theta\colon A\to A\ |\ \theta(a\cdot b)=\theta(a) \cdot b+(-1)^{|\theta||a|}a\cdot\theta(b)\},\]
where the Lie bracket is given by
\[[\theta,\eta]=\theta\circ\eta-(-1)^{|\theta||\eta|}\eta\circ\theta\]
and the differential \(\partial\colon\operatorname{Der}(A)\to\operatorname{Der}(A)\) is given by \([d,-]\) where \(d\) is the differential of \(A\).
Similarly, one may define the _dgla of derivations on a dgla_ \(L\). A derivation of a dgla \(L\) is a linear map \(\eta\colon L\to L\) such that
\[\eta[a,b]=[\eta(a),b]+(-1)^{|\eta||a|}[a,\eta(b)].\]
The dgla of derivations on a dgla \(L\) is denoted by \(\operatorname{Der}(L)\).
**Lemma 4.14**.: _Let \(\Lambda V\) (resp. \(\mathbb{L}W\)) be a weight-graded cofibrant minimal cdga (resp. dgla). Then there is an induced weight grading on the dgla \(\operatorname{Der}(\Lambda V)\) (resp. \(\operatorname{Der}(\mathbb{L}W)\)) so that_
\[\operatorname{Der}(\Lambda V)\cong\operatorname{Hom}(V,\Lambda V)\quad\text{( resp. }\operatorname{Der}(\mathbb{L}W)\cong\operatorname{Hom}(W,\mathbb{L}W)\ )\]
_is an isomorphism of bigraded vector spaces._
Proof.: We prove the statement for the derivations on the cofibrant minimal cdga. The statement regarding the derivations on the minimal dgla is proved similarly.
By definition, if \(\Lambda V\) is a weight-graded minimal cdga, then \(V\) is weight-graded. Hence the mapping space \(\operatorname{Hom}(V,\Lambda V)\) has an induced weight grading; a linear map is of weight \(w\) if it raises the weight by \(w\). There is an isomorphism of vector spaces \(\operatorname{Hom}(V,\Lambda V)\cong\operatorname{Der}(\Lambda V)\), which induces a weight grading on \(\operatorname{Der}(\Lambda V)\). A derivation is of weight \(n\) if it increases the weight by \(n\).
The Lie bracket on \(\operatorname{Der}(\Lambda V)\) preserves the weight. Since the differential on \(\Lambda V\) preserves weights, the differential \(\partial=[d,-]\) on \(\operatorname{Der}(\Lambda V)\) is also weight preserving.
**Theorem 4.15**.: _Let \(X\) be a simply connected space of the homotopy type of a finite CW-complex. If \(X\) is formal or coformal, then \(B\operatorname{aut}_{\circ}(X)\) and \(B\operatorname{aut}_{*,\circ}(X)\) have positive weights._
Proof.: Let \(\Lambda V\) and \(\mathbb{L}W\) be Sullivan minimal and dgla models for \(X\), respectively, both endowed with the (co)formality induced weight decomposition. It follows by the theory of Sullivan [10], Schlessinger and Stasheff [11] and Tanre [15], that dgla models for \(B\operatorname{aut}_{\circ}(X)\) and \(B\operatorname{aut}_{*,\circ}(X)\) are given by \(\operatorname{Der}(\Lambda V)\langle-1\rangle\) and \(\operatorname{Der}(\mathbb{L}W)\langle-1\rangle\), respectively, which now have induced weight decompositions (Lemma 4.14). By Proposition 3.11, it is enough to prove that these Lie models have cohomology concentrated in negative weights in order to complete the proof.
Consider the filtration \(0=F_{0}\subseteq F_{1}\subseteq\cdots\), where \(F_{i}\) is the space of derivations that vanish on generators of cohomological degree \(>i\). In particular,
\[F_{i}\cong\operatorname{Hom}(V^{\leq i},\Lambda V)\]
is a weight-graded subcomplex of \(\operatorname{Der}(\Lambda V)\). This gives rise to a weight-graded spectral sequence \(E_{*}^{*,*}\), that is a spectral sequence where \(E_{r}^{t,s}\) is weight-graded and the differential preserves the weight. We have that
\[E_{1}^{t,s}=\operatorname{Hom}(V^{-t},H^{s}(\Lambda V))\Rightarrow H^{t+s}( \operatorname{Der}(\Lambda V))\]
(c.f. the proof of [1, Lemma 3.5]).
If \(X\) is formal then \(V^{-t}=\pi_{-t-1}^{\mathbb{Q}}(\Omega X)^{\vee}\) is concentrated in weights \(\geq-t\) by Proposition 3.2_(a)_ (and by taking into account the dualization), and \(H^{s}(\Lambda V)\) is concentrated in weight \(s\). Hence \(\operatorname{Hom}(V^{-t},H^{s}(\Lambda V))\) is concentrated in weights \(<s+t\). In particular \(\operatorname{Hom}(V^{-t},H^{s}(\Lambda V))\) has negative weights if \(s+t<0\), and consequently \(H^{*}(\operatorname{Der}(\Lambda V)\langle-1\rangle)\) has negative weights.
If \(X\) is coformal, then \(V^{-t}\) is concentrated in weight \(-t-1\) and \(H^{s}(\Lambda V)\) is concentrated in weights \(<s\) by Proposition 3.2 (b), and thus \(\operatorname{Hom}(V^{-t},H^{s}(\Lambda V))\) is concentrated in weights \(\leq s+t\). As before, we conclude that \(H^{*}(\operatorname{Der}(\Lambda V)\langle-1\rangle)\) has negative weights.
For the pointed version, we instead consider the spectral sequence
\[E_{1}^{t,s}=\operatorname{Hom}(W^{-t},H^{s}(\mathbb{L}W))\Rightarrow H^{t+s}( \operatorname{Der}(\mathbb{L}W)),\]
and use the same type of arguments.
**Example 4.16**.: It is possible to deduce explicit bounds on the weights of the dgla models for \(B\operatorname{aut_{\circ}}(X)\) and \(B\operatorname{aut_{*,\circ}}(X)\) when \(X\) is finite dimensional, simply connected and formal or coformal. We will write out these (not necessarily sharp) bounds for the dgla model for \(B\operatorname{aut_{\circ}}(X)\) when \(X\) is formal.
We start by observing that if \(X\) is formal and simply connected then a minimal dgla model for \(X\) is of the form \(\mathbb{L}(s\bar{H}^{*}(X;\mathbb{Q})^{\vee})\). In cohomological degree \(n<0\) we have that
\[\mathbb{L}(s\bar{H}^{*}(X)^{\vee})^{n}=\sum_{\sum_{j=1}^{k}i_{j}=-n+k}S_{i_{1},\dots,i_{k}}^{n}\]
where
\[S_{i_{1},\dots,i_{k}}^{n}=[s\bar{H}^{i_{1}}(X;\mathbb{Q})^{\vee},[s\bar{H}^{i _{2}}(X;\mathbb{Q})^{\vee},\cdots,[s\bar{H}^{i_{k-1}}(X;\mathbb{Q})^{\vee},s \bar{H}^{i_{k}}(X;\mathbb{Q})^{\vee}]\cdots].\]
Since \(X\) is formal, \(\bar{H}^{i}(X;\mathbb{Q})\) is concentrated in weight \(i\), and thus an element of \(S_{i_{1},\dots,i_{k}}\) is concentrated in weight \(\sum_{j=1}^{k}(-i_{j})=n-k\). Since \(X\) is simply connected, the maximal value of \(k\) for which \(S_{i_{1},\dots,i_{k}}^{n}\) is non-trivial is \(k=-n\) (where all \(i_{j}=2\)), which gives the minimal possible weight \(w=2n\). The minimal value of \(k\) is \(1\), and in that case we have that
\[S_{-n+1}^{n}=s\bar{H}^{-n+1}(X;\mathbb{Q})^{\vee}\]
which is concentrated in weight \((n-1)\). This is the maximal possible weight. From this we conclude that the weights in cohomological degree \(n\) of the minimal dgla model for \(X\) and its \(n\)'th cohomology \(\pi_{-n}^{\mathbb{Q}}(\Omega X)\) are concentrated in the interval \([2n,n-1]\).
Hence, we have that
\[E_{1}^{t,s}=\operatorname{Hom}(V^{-t},\bar{H}^{s}(\Lambda V))=\operatorname{ Hom}(\pi_{-t-1}^{\mathbb{Q}}(\Omega X)^{\vee},H^{s}(X;\mathbb{Q}))\]
has weights concentrated in the interval \([s+2t+2,s+t]\).
If \(X\) is \(d\)-dimensional, then \(H^{*}(X)\) is concentrated in cohomological degrees \(\leq d\), and thus the spectral sequence
\[E_{1}^{t,s}=\operatorname{Hom}(V^{-t},H^{s}(\Lambda V))\]
associated to \(\operatorname{Der}(\Lambda V)\) is concentrated in \(s\)-values satisfying \(0\leq s\leq d\). In particular, for every \(n\leq 0\), we have that
\[\bigoplus_{t+s=n}E_{1}^{t,s}=E_{1}^{n,0}\oplus E_{1}^{n-1,1}\oplus\cdots\oplus E_{1}^{n-d,d}\]
which has weights concentrated in the interval \([2n-d+2,n]\).
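As a concrete (hypothetical) instance of these bounds, for a formal simply connected closed \(4\)-manifold, so \(d=4\), the part of the \(E_{1}\)-term contributing to cohomological degree \(n=-2\) has weights concentrated in
\[[2n-d+2,\,n]=[2\cdot(-2)-4+2,\,-2]=[-6,-2],\]
which is indeed negative, in accordance with Theorem 4.15.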
Theorem 4.15 and the above example apply, for instance, to the theory of homotopy automorphisms of compact Kahler manifolds. In the case of complex algebraic varieties (possibly singular and/or non-projective), one may endow the dgla models for \(B\operatorname{aut_{\circ}}(X)\) and \(B\operatorname{aut_{*,\circ}}(X)\) with weight decompositions or, more generally, with mixed Hodge structures, using the mixed Hodge cdga models for algebraic varieties of [10]. While the weights on the dgla models for homotopy automorphisms will not be negative in general, the presence of a weight decomposition might still lead to homotopical restrictions in many situations.
### Free and based loop spaces
In this last section we endow algebraic models for algebraic constructions related to the free and based loop spaces with weights inherited from weight decompositions on the original space.
The complex of singular chains on the based loop space of a topological space \(X\), denoted by \(C_{*}(\Omega X)\), defines a dg Hopf algebra. With rational coefficients, this algebra is quasi-isomorphic to the universal enveloping algebra \(UL\) of a dgla model \(L\) for \(X\). The universal enveloping algebra functor preserves weights, and so this gives a weight-graded model for \(C_{*}(\Omega X;\mathbb{Q})\) whenever \(X\) has weights. Also, we have:
**Proposition 4.17**.: _If \(X\) has positive weights then \(C_{*}(\Omega X;\mathbb{Q})\) has a dg associative algebra model with negative weights._
The above applies, for instance, to any smooth algebraic variety.
The free loop space \(LX=\operatorname{map}(S^{1},X)\) on \(X\), endowed with the compact-open topology, has an obvious \(S^{1}\)-action
\[(g.\varphi)(x)=\varphi(g.x)\text{ for every }g\in S^{1}\text{ and }\varphi\in LX.\]
We denote the space of \(S^{1}\)-homotopy orbits by \(LX//S^{1}\). Both \(LX\) and \(LX//S^{1}\) are spaces of great importance in geometry for several different reasons. For instance, if the Betti numbers of the free loop space \(LM\) on a manifold \(M\) are unbounded, then \(M\) admits infinitely many geometrically distinct closed geodesics [10]. The homology of \(LX\) and \(LX//S^{1}\) is also related to the theory of Hochschild and cyclic (co)homology, since the (co)homology of \(LX\) and \(LX//S^{1}\) can be computed as a certain Hochschild and cyclic (co)homology. The theory of free loop spaces is also connected to homological conformal field theories.
**Proposition 4.18**.: _Let \(X\) be a simply connected space of finite \(\mathbb{Q}\)-type. A weight decomposition on an algebraic model for \(X\) induces weight decompositions on the Sullivan minimal models for \(LX\) and \(LX//S^{1}\)._
Proof.: We will use the models for \(LX\) and \(LX//S^{1}\) constructed in [11]. Let \((\Lambda V,d)\) be a Sullivan minimal model for \(X\). In order to describe these models we need to set some notation. Let \(\beta\in\operatorname{Der}(\Lambda(V\oplus s^{-1}V))\) be the unique derivation that satisfies \(\beta(v)=s^{-1}v\) and \(\beta(s^{-1}v)=0\). A Sullivan model for \(LX\) is given by
\[(\Lambda(V\oplus s^{-1}V),\delta)\]
where \(\delta\) is the unique derivation that satisfies \(\delta(v)=dv\), \(\delta(s^{-1}v)=-\beta dv\). Likewise, a model for \(LX//S^{1}\) is given by the algebra
\[(\Lambda(V\oplus s^{-1}V\oplus\mathds{k}\alpha),\mathscr{D})\]
where \(|\alpha|=2\) and where \(\mathscr{D}\) is the unique derivation satisfying
\[\mathscr{D}(\alpha)=0\text{, }\mathscr{D}(v)=dv+\alpha\cdot s^{-1}v\text{, } \mathscr{D}(s^{-1}v)=-\beta dv\text{ for every }v\in V.\]
Given a weight decomposition for \(X\), it admits a weight-graded minimal model, and hence the above models for \(LX\) and \(LX//S^{1}\) inherit weight decompositions, with \(w(s^{-1}v)=w(v)\) and \(w(\alpha)=0\).
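As a small illustration (not spelled out in the text), take \(X=S^{2}\) with Sullivan minimal model \(\Lambda(x,y)\), \(|x|=2\), \(|y|=3\), \(dx=0\), \(dy=x^{2}\). The model above for \(LS^{2}\) becomes
\[(\Lambda(x,y,s^{-1}x,s^{-1}y),\delta),\qquad\delta x=0,\ \delta y=x^{2},\ \delta(s^{-1}x)=0,\ \delta(s^{-1}y)=-\beta(x^{2})=-2x\cdot s^{-1}x,\]
and with the formality induced weights \(w(x)=2\), \(w(y)=4\), hence \(w(s^{-1}x)=2\) and \(w(s^{-1}y)=4\), the differential \(\delta\) is weight preserving, as claimed.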
### Minimal Lie models for nilpotent spaces with positive weights
We end this section by proving that nilpotent spaces with positive cdga models admit minimal dgla models. In [13], Neisendorfer proves that the rational homotopy type of a finite type nilpotent space \(X\) can be modelled by a dgla \(\mathscr{L}(\Lambda V)\), which extends the dgla models of Quillen [10] for simply connected spaces. The Neisendorfer model \(\mathscr{L}(\Lambda V)\) is however not minimal, and Neisendorfer conjectured that minimal models for nilpotent spaces exist.
A proof of the completed version of the Neisendorfer conjecture can be found in [1, Proposition 3.16]. Moreover, one can also deduce the following:
**Lemma 4.19**.: _Given a dgla \((\widehat{\mathbb{L}}W,d)\) with degree-wise nilpotent homology such that \(d(W)\subset\mathbb{L}W\subseteq\widehat{\mathbb{L}}W\), the completion map \((\mathbb{L}W,d)\to(\widehat{\mathbb{L}}W,d)\) is a quasi-isomorphism._
Proof.: It is proved in [1, Proposition 3.30] that if \((\mathbb{L}W,d)\) has degree-wise nilpotent homology, then the completion map \((\mathbb{L}W,d)\to(\widehat{\mathbb{L}}W,d)\) is a quasi-isomorphism. If \(H^{*}(\widehat{\mathbb{L}}W,d)\) is degree-wise nilpotent, then \(H^{*}(\mathbb{L}W,d)\) is also degree-wise nilpotent since
\[H^{*}(\mathbb{L}W,d)\subseteq H^{*}(\widehat{\mathbb{L}}W,d).\]
Hence the completion map is a quasi-isomorphism.
**Theorem 4.20**.: _Let \(X\) be a nilpotent space of finite type that has a cdga model with a positive weight decomposition. Then \(X\) has a minimal dgla model. Moreover, the minimal model inherits a weight decomposition from the cdga model._
Proof.: Following [1, SS 2] there is a functor \(\mathscr{E}\) from the category of \(C_{\infty}\)-algebras with \(\infty\)-\(C_{\infty}\)-morphisms to the category of quasi-free dg Lie coalgebras and dg Lie coalgebra morphisms, that defines an equivalence of categories and that preserves quasi-isomorphisms. We briefly explain the functor \(\mathscr{E}\): Let \(\mathbb{L}^{c}(V)\) be a free graded Lie coalgebra cogenerated by a finite type graded vector space \(V\). The dual \((\mathbb{L}^{c}(V))^{\vee}\) is given by the completed graded Lie algebra \(\widehat{\mathbb{L}}(V^{\vee})\).
Given a unital \(C_{\infty}\)-algebra \((A,\{\mu_{i}\})\) we have that
\[\mathscr{E}(A,\{\mu_{i}\})=(\mathbb{L}^{c}(s^{-1}\overline{A}),\delta)\]
where the differential decomposes as \(\delta=\delta_{0}+\delta_{1}+\dots\) where \(\delta_{i}\) is the part that decreases the word length by \(i\) and where \(\delta_{i}\) is completely encoded by \(\mu_{i+1}\).
It is straightforward to show that if \((A,\{\mu_{i}\})\) is a weight-graded \(C_{\infty}\)-algebra, then \(\mathscr{E}(A,\{\mu_{i}\})\) is a weight-graded dg Lie coalgebra (there is a natural weight grading on \(\mathbb{L}^{c}(s^{-1}\overline{A})\), and we will have that \(\delta_{i}\) preserves the weights whenever \(\mu_{i+1}\) preserves the weights). Similarly one shows for any weight preserving \(\infty\)-\(C_{\infty}\)-morphism \(g\colon A\rightsquigarrow B\), the coalgebra morphism \(\mathscr{E}(g)\) is weight preserving.
If \(A\) is a finite type cdga, then the dual \(\mathscr{E}(A)^{\vee}\) has the structure of a dgla whose underlying graded Lie algebra structure \(\widehat{\mathbb{L}}(s\overline{A}^{\vee})\) is given by the completion of the free graded Lie algebra \(\mathbb{L}(s\overline{A}^{\vee})\) with respect to the bracket length. If \(A\) is a finite type \(C_{\infty}\)-algebra model for \(X\), then it follows by the theory established in [1] that \(\mathscr{E}(A)^{\vee}\) is a completed dgla model for \(X\).
Now, let \((H,\{\mu_{i}\})\) be a weight graded minimal \(C_{\infty}\)-algebra model for the positively graded cdga model for \(X\). We have
\[\mathscr{E}(H,\{\mu_{i}\})=(\mathbb{L}^{c}(s^{-1}\overline{H}),\delta_{1}+ \delta_{2}+\dots)\]
Since \(H\) is positively graded, it follows that elements of word-length \(n\) in \(\mathbb{L}^{c}(s^{-1}\overline{H})\), i.e. elements of \(\mathscr{L}ie(n)^{\vee}\otimes(s^{-1}\overline{H})^{\otimes n}\), are of weight at least \(n\). Since \(\delta_{n}\) vanishes on elements of word-length \(<n+1\), we conclude that every non-trivial element in the image of \(\delta_{n}\) is of weight at least \(n+1\). Hence if \(y\in\mathbb{L}^{c}(s^{-1}\overline{H})\) and \(w(y)=m\), then \(y\not\in\operatorname{im}(\delta_{i})\) for \(i\geq m\).
Dualizing the Lie coalgebra \(\mathscr{E}(H,\{\mu_{i}\})\) gives a complete dgla
\[(\widehat{\mathbb{L}}(s\overline{H}^{\vee}),d_{1}+d_{2}+\dots)\]
where \(d_{i}\) is the part of the differential that increases the bracket length by \(i\).
For an element \(y\in s\overline{H}^{\vee}\) of weight \(w(y)=-m\), we have that \(d_{i}(y)=0\) for every \(i\geq m\).
In particular, we have that
\[d(y)=d_{1}(y)+\dots+d_{m-1}(y)\in\mathbb{L}(s\overline{H}^{\vee})\subset\widehat{\mathbb{L}}(s\overline{H}^{\vee}),\]
and thus \(d(s\overline{H}^{\vee})\subset\mathbb{L}(s\overline{H}^{\vee})\subset \widehat{\mathbb{L}}(s\overline{H}^{\vee})\). Hence \((\mathbb{L}(s\overline{H}^{\vee}),d)\) defines a minimal dgla model for \(X\).
|
2306.03258 | LipVoicer: Generating Speech from Silent Videos Guided by Lip Reading | Lip-to-speech involves generating a natural-sounding speech synchronized with
a soundless video of a person talking. Despite recent advances, current methods
still cannot produce high-quality speech with high levels of intelligibility
for challenging and realistic datasets such as LRS3. In this work, we present
LipVoicer, a novel method that generates high-quality speech, even for
in-the-wild and rich datasets, by incorporating the text modality. Given a
silent video, we first predict the spoken text using a pre-trained lip-reading
network. We then condition a diffusion model on the video and use the extracted
text through a classifier-guidance mechanism where a pre-trained ASR serves as
the classifier. LipVoicer outperforms multiple lip-to-speech baselines on LRS2
and LRS3, which are in-the-wild datasets with hundreds of unique speakers in
their test set and an unrestricted vocabulary. Moreover, our experiments show
that the inclusion of the text modality plays a major role in the
intelligibility of the produced speech, readily perceptible while listening,
and is empirically reflected in the substantial reduction of the WER metric. We
demonstrate the effectiveness of LipVoicer through human evaluation, which
shows that it produces more natural and synchronized speech signals compared to
competing methods. Finally, we created a demo showcasing LipVoicer's
superiority in producing natural, synchronized, and intelligible speech,
providing additional evidence of its effectiveness. Project page and code:
https://github.com/yochaiye/LipVoicer | Yochai Yemini, Aviv Shamsian, Lior Bracha, Sharon Gannot, Ethan Fetaya | 2023-06-05T21:20:33Z | http://arxiv.org/abs/2306.03258v2 | # LipVoicer: Generating Speech from Silent Videos Guided by Lip Reading
###### Abstract
Lip-to-speech involves generating a natural-sounding speech synchronized with a soundless video of a person talking. Despite recent advances, current methods still cannot produce high-quality speech with high levels of intelligibility for challenging and realistic datasets such as LRS3. In this work, we present _LipVoicer_, a novel method that generates high-quality speech, even for in-the-wild and rich datasets, by incorporating the text modality. Given a silent video, we first predict the spoken text using a pre-trained lip-reading network. We then condition a diffusion model on the video and use the extracted text through a classifier-guidance mechanism where a pre-trained automatic speech recognition (ASR) serves as the classifier. LipVoicer outperforms multiple lip-to-speech baselines on LRS2 and LRS3, which are in-the-wild datasets with hundreds of unique speakers in their test set and an unrestricted vocabulary. Moreover, our experiments show that the inclusion of the text modality plays a major role in the intelligibility of the produced speech, readily perceptible while listening, and is empirically reflected in the substantial reduction of the word error rate (WER) metric. We demonstrate the effectiveness of LipVoicer through human evaluation, which shows that it produces more natural and synchronized speech signals compared to competing methods. Finally, we created a demo showcasing LipVoicer's superiority in producing natural, synchronized, and intelligible speech, providing additional evidence of its effectiveness.
Project page: [https://lipvoicer.github.io](https://lipvoicer.github.io)
## 1 Introduction
In the lip-to-speech task, we are given a soundless video of a person talking and are required to accurately and precisely generate the missing speech. Such a task may occur, e.g., when the speech signal is completely obfuscated by background noise. This task poses a significant challenge, as it requires the generated speech to satisfy multiple criteria: intelligibility, synchronization with lip motion, naturalness, and alignment with the speaker's characteristics such as age, gender, and accent. Another major hurdle for lip-to-speech techniques is the ambiguity inherent in lip motion, as several phonemes can be attributed to the same lip movement sequence. Resolving these ambiguities requires analyzing the lip motion in a broader context within the video.
Generating speech from a silent video has seen significant progress in recent years, partly due to advancements in deep generative models, specifically in applications such as text-to-speech and mel-spectrogram-to-audio (neural vocoding) [1; 2]. Despite these advancements, many lip-to-speech methods produce satisfying results only when applied to datasets with a limited number of speakers and constrained vocabularies, like GRID [3] and TCD-TIMIT [4]. Therefore, speech generation for silent videos in-the-wild still lags behind. We found that these methods struggle to reliably generate
natural speech with a high degree of intelligibility on more challenging datasets like LRS2 [5] and LRS3 [6].
In this paper, we propose _LipVoicer_, a novel approach for producing high-quality speech for silent videos. The first and crucial part of LipVoicer is leveraging a lip-reading model at inference time to extract the transcription of the speech we wish to generate. Next, we train a diffusion model, conditioned on the video, to generate mel-spectrograms. This generation process is guided by both the video and the predicted transcription, as illustrated in Fig. 1a. Consequently, our model successfully intertwines the information conveyed by the textual modality with the dynamics and characteristics of the speaker, captured by the diffusion model. Incorporating the inferred text has an additional benefit, as it allows LipVoicer to alleviate the lip motion ambiguity to a great extent. Finally, we use the DiffWave [1] neural vocoder to generate the raw audio. A diagram of our approach is depicted in Fig. 1.
Previous methods often use text to guide the generation process at train time. We, however, utilize it at inference time. The text, transcribed using a lip-reader, allows us to utilize guidance [7, 8] which ensures that the text of the generated audio corresponds to the target text. Guidance, with or without a classifier, is an important part of diffusion models and a key feature in recent advancements in text-to-image [9, 10] and text-to-speech [2, 11].
We evaluate our LipVoicer model on the challenging LRS2 and LRS3 datasets. These datasets are "in-the-wild" videos, with hundreds of unique speakers and with an open vocabulary. We show that our proposed design leads to the best results on these datasets, both in human evaluations and in the WER measured by an ASR system.
Figure 1: An illustration of LipVoicer, a dual-stage framework for lip-to-speech. (a) To generate the speech from a given silent video, a pre-trained lip-reader provides additional guidance by predicting the text from the video. An ASR steers MelGen, which generates the mel-spectrogram, in the direction of the extracted text using classifier guidance, such that the generated mel-spectrogram reflects the spoken text. (b) MelGen, our diffusion denoising model that generates mel-spectrograms conditioned on a face image and a mouth region video extracted from the full-face video using classifier-free guidance.
To the best of our knowledge, LipVoicer is the first method to use text inferred by lip-reading to enhance lip-to-speech synthesis. The inclusion of the text modality in inference removes the uncertainty of deciphering which of the possible candidate phonemes correspond to the lip motion. Additionally, it helps the diffusion model to focus on creating naturally synced speech. The speech generated by LipVoicer is intelligible, well synchronized to the video, and sounds natural. Finally, LipVoicer achieves state-of-the-art results for highly challenging in-the-wild datasets.
## 2 Background
### Denoising Diffusion Probabilistic Models (DDPM)
Denoising Diffusion Probabilistic Models (DDPM) define a forward process that gradually turns the input into Gaussian noise, then learn the reverse process that tries to recover the input. Specifically, assume a training data point is sampled from the data distribution we wish to model, \(\mathbf{x}_{0}\sim p_{\text{data}}(\mathbf{x})\). The forward process is defined as \(q(\mathbf{x}_{t}|\mathbf{x}_{t-1})=\mathcal{N}(\mathbf{x}_{t}|\sqrt{1-\beta_{t}}\mathbf{x}_{t-1},\beta_{t}\mathbf{I})\) where \(\{\beta_{1},...,\beta_{t},...,\beta_{T}\}\) is a pre-defined noise schedule. We can deduce from Gaussian properties that \(q(\mathbf{x}_{t}|\mathbf{x}_{0})=\mathcal{N}(\mathbf{x}_{t}|\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0},(1-\bar{\alpha}_{t})\mathbf{I})\) with \(\bar{\alpha}_{t}=\Pi_{s=1}^{t}\alpha_{s}\) and \(\alpha_{t}=1-\beta_{t}\). A sample of \(\mathbf{x}_{t}\) can be obtained by sampling \(\epsilon\sim\mathcal{N}(0,\mathbf{I})\) and using the reparameterization trick, \(\mathbf{x}_{t}=\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t}}\epsilon\). Under mild conditions, the distribution at the final step \(q(\mathbf{x}_{T})\) is approximately given by a standard Gaussian distribution.
In the reverse process proposed in [12], \(p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) is then learned by a neural network that tries to approximate \(q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0})\). In [12] it was also shown that in order to learn \(p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) it is enough if our model's output \(\epsilon_{\theta}(\mathbf{x}_{t},t)\) is trained to recover the added noise \(\epsilon\) used to generate \(\mathbf{x}_{t}\) from \(\mathbf{x}_{0}\). The loss function used to train the diffusion model is \(\mathbb{E}_{t,\mathbf{x}_{0},\epsilon}\left[\|\epsilon-\epsilon_{\theta}(\mathbf{x}_{t},t)\|^{2}\right]\). At inference time, given \(\mathbf{x}_{t}\) and the inferred noise, we can sample from \(p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) by taking \(\mathbf{x}_{t-1}=\frac{1}{\sqrt{\alpha_{t}}}\left(\mathbf{x}_{t}-\frac{\beta_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\epsilon_{\theta}(\mathbf{x}_{t},t)\right)+\sqrt{\beta_{t}}\mathbf{z}\) where \(\mathbf{z}\sim\mathcal{N}(0,\mathbf{I})\).
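For concreteness, the training objective and the ancestral sampling step above can be sketched as follows (a generic illustration, not the implementation used in this work; the `model` interface and the 1-D schedule tensors `alpha`, `alpha_bar`, `beta` are assumptions):

```python
import torch

def ddpm_loss(model, x0, alpha_bar):
    """Noise-prediction loss: sample t, noise x0 through the forward process, regress epsilon."""
    b = x0.shape[0]
    t = torch.randint(0, len(alpha_bar), (b,), device=x0.device)
    a_bar = alpha_bar[t].view(b, *([1] * (x0.dim() - 1)))   # broadcast to x0's shape
    eps = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps    # sample from q(x_t | x_0)
    return ((eps - model(x_t, t)) ** 2).mean()

@torch.no_grad()
def ddpm_sample_step(model, x_t, t, alpha, alpha_bar, beta):
    """One ancestral sampling step x_t -> x_{t-1} using the predicted noise."""
    t_batch = torch.full((x_t.shape[0],), t, device=x_t.device, dtype=torch.long)
    eps_hat = model(x_t, t_batch)
    mean = (x_t - beta[t] / (1.0 - alpha_bar[t]).sqrt() * eps_hat) / alpha[t].sqrt()
    z = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return mean + beta[t].sqrt() * z
```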
### Guidance
One key feature in many diffusion models is the use of guidance for conditional generation. Guidance, both with or without a classifier, enables us to "guide" our iterative inference process to generate outputs that are more faithful to our conditioning information, e.g., in text-to-image, it helps enforce that the generated images match the prompt text.
Assume we wish to sample from \(q(\mathbf{x}_{t}|\mathbf{c})\), where \(\mathbf{x}_{t}\) is our sample at the current iteration, \(\mathbf{c}\) is some context, and \(p(\mathbf{c}|\mathbf{x}_{t})\) is a pre-trained classifier. Our goal is to generate \(\mathbf{x}_{t-1}\). The idea of classifier guidance [7] is to use the classifier to guide the diffusion process to generate outputs that have the right context \(\mathbf{c}\). Specifically, if the diffusion model returns \(\epsilon_{\theta}(\mathbf{x}_{t},t)\), classifier guidance alters the noise term used for the update to \(\hat{\epsilon}=\epsilon_{\theta}(\mathbf{x}_{t},t)-\omega_{1}\sqrt{1-\bar{\alpha}_{t}}\nabla_{\mathbf{x}_{t}}\log p(\mathbf{c}|\mathbf{x}_{t})\), where \(\omega_{1}\) is a hyperparameter that controls the level of guidance.
In a later work [8], a classifier-free guidance that removes the dependence on an existing classifier is proposed. In classifier-free guidance, we make two noise predictions, one with the conditioning context information, \(\epsilon_{\theta}(\mathbf{x}_{t},\mathbf{c},t)\), and one without it - \(\epsilon_{\theta}(\mathbf{x}_{t},t)\). We then use \(\hat{\epsilon}=\epsilon_{\theta}(\mathbf{x}_{t},\mathbf{c},t)+\omega_{2}( \epsilon_{\theta}(\mathbf{x}_{t},\mathbf{c},t)-\epsilon_{\theta}(\mathbf{x}_ {t},t))\) where the hyperparameter \(\omega_{2}\) controls the guidance strength. This allows us to enhance the update directions that correspond to the context \(\mathbf{c}\).
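Schematically, the two guidance mechanisms modify the predicted noise as follows (an illustrative sketch on PyTorch-style tensors; the function names and argument interfaces are assumptions):

```python
def classifier_guided_eps(eps, grad_log_p_c, alpha_bar_t, w1):
    """Classifier guidance: steer the noise estimate along the gradient of log p(c | x_t)."""
    return eps - w1 * (1.0 - alpha_bar_t) ** 0.5 * grad_log_p_c

def classifier_free_eps(eps_cond, eps_uncond, w2):
    """Classifier-free guidance: amplify the direction towards the conditional prediction."""
    return eps_cond + w2 * (eps_cond - eps_uncond)
```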
## 3 Related Work
### Audio Generation
Following their success in image generation, diffusion models took a leading role as a neural vocoder and in text-to-speech. Examples include: WaveGrad [13], Diff-TTS [11], DiffWave [1], Grad-TTS [14], PriorGrad [15], SaShiMi [16], Tango [17], and Diffsound [18].
Other important audio generation models that do not rely on diffusion models include: WaveNet [19], HiFi-GAN [20], Tacotron [21], VoiceLoop [22], WaveRNN [23], ClariNet [24], GAN-TTS [25], Flowtron [26], and AudioLM [27].
We note that despite the recent progress, unconditional speech generation quality is still unsatisfactory. Thus, some conditioning, e.g., text or mel-spectrogram, is required for high-quality speech generation. We also note that many audio generation models adopt a sequential approach, namely first generating the mel-spectrogram and then using it to generate the raw audio using a pre-trained vocoder.
### Visual speech recognition
The field of lip-to-text, also referred to as lip-reading, has seen significant progress in recent years, and multiple techniques are now available [28; 29; 30]. Impressively, they can cope with adverse visual conditions, such as videos where the mouth of the speaker is only partly frontal, challenging lighting conditions and various languages. Use cases of visual speech recognition include resolving multi-talker simultaneous speech and transcribing archival silent films.
### Lip-to-speech synthesis
The task in this research area is to reconstruct the underlying speech signal from a silent talking-face video. A common approach to lip-to-speech consists of a video encoder followed by a speech decoder that generates a mel-spectrogram, which is then fed to a neural vocoder to produce the final time-domain audio. Recent examples include Lip2Wave [31], SVTS [32], End-to-end GAN [33], VCA-GAN [34] and Lip2Speech [35]. In ReVISE [36], the authors use AV-HuBERT [29] to represent speech by a set of discrete tokens, which are predicted from the video, and generate the audio from the tokens using HiFi-GAN [20]. These techniques have garnered significant attention due to their potential applications in cases where the speech is missing or corrupted. This may occur, for example, due to strong background noise, or when the speech of a person located in the background of the video recording needs to be attended to.
In Lip2Speech [35], the authors use the ground truth text and a pre-trained ASR as an additional loss during training to try to enforce that the generated speech will have the correct text. We, however, use predicted text at inference time.
## 4 LipVoicer
This section details our proposed LipVoicer scheme for lip-to-speech generation. Given a silent talking-face video \(\mathcal{V}\), LipVoicer generates a mel-spectrogram that corresponds to a high likelihood underlying speech signal. The proposed method comprises three main components:
1. A mel-spectrogram generator (MelGen) that learns to create a mel-spectrogram image from \(\mathcal{V}\).
2. A pre-trained lip-reading network that infers the most likely text from the silent video.
3. An ASR system that anchors the mel-spectrogram recovered by MelGen to the text predicted by the lip-reader.
At first, we train MelGen, a conditional DDPM trained to generate a mel-spectrogram \(\mathbf{x}\) conditioned on the video \(\mathcal{V}\) (see Sec. 2.1). Similar to diffusion-based frameworks in text-to-speech, e.g. [11], we use a DiffWave [1] residual backbone for MelGen. When considering the representation for \(\mathcal{V}\), we wish for it to encapsulate all the information needed to generate the mel-spectrogram, i.e. the content (spoken words) and dynamics (accent, intonation) of the underlying speech, the timing of each part of speech, as well as the identity of the speaker, e.g. gender, age, etc. However, we also wish to discard all irrelevant information, to ease training and avoid unnecessary computational costs. To this end, \(\mathcal{V}\) is preprocessed by creating a cropped mouth-region video \(\mathcal{V}_{L}\) and randomly choosing a single full-face image \(\mathcal{I}_{F}\), corresponding to the content and dynamics and to the voice characteristics, respectively. The mouth cropping is implemented according to the procedure in [28].
To extract features from \(\mathcal{V}_{L}\) and \(\mathcal{I}_{F}\), we use an architecture similar to the one described in VisualVoice [37] (which is an audio-visual speech separation model). For \(\mathcal{I}_{F}\), the face embedding \(\mathbf{f}\in\mathbb{R}^{D_{f}}\) is computed using ResNet-18 [38] with the last two layers discarded. The lip video \(\mathcal{V}_{L}\) is encoded using a lip-reading architecture [39]. It is composed of a 3D convolutional layer followed by ShuffleNet v2 [40] and then a temporal convolutional network (TCN), resulting in the lip video embedding
\(\mathbf{m}\in\mathbb{R}^{N\times D_{m}}\), where \(N\) and \(D_{m}\) signify the number of frames and channels, respectively. In order to merge the face and lip video embeddings, \(\mathbf{f}\) is replicated \(N\) times and concatenated to \(\mathbf{m}\), yielding the video embedding \(\mathbf{v}\in\mathbb{R}^{N\times D}\), where \(D=D_{f}+D_{m}\). Next, a DDPM is trained to generate the mel-spectrogram with and without the conditioning on the video embedding \(\mathbf{v}\) following the classifier-free mechanism [8].
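The fusion of the two embeddings can be sketched as follows (an illustrative sketch, not the released code; the encoder interfaces and tensor shapes are assumptions):

```python
import torch

def build_video_embedding(face_encoder, lip_encoder, face_image, lip_video):
    """Per-frame conditioning v: lip-motion features concatenated with a replicated face embedding."""
    f = face_encoder(face_image)                       # (B, D_f): speaker characteristics
    m = lip_encoder(lip_video)                         # (B, N, D_m): content and dynamics
    f_rep = f.unsqueeze(1).expand(-1, m.shape[1], -1)  # replicate the face embedding N times
    return torch.cat([m, f_rep], dim=-1)               # (B, N, D_m + D_f)
```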
In order to make MelGen robust to scenarios characterized by an unconstrained vocabulary, we use the text modality as an additional source of guidance. In general, syllables uttered in a silent talking-face video can be ambiguous, and may consequently lead to an incoherent reconstructed speech. It can therefore be beneficial to harness recent advances in lip-reading and ground the generated mel-spectrogram to the text predicted by a pretrained lip-reading network. The question is how to best include this textual information in our framework. One could simply add it as a global conditioning, similar to \(\mathcal{I}_{F}\), however, this ignores the temporal information in the text. One could also try to align the text and the video, which is a complicated process that would introduce additional errors.
To circumvent the challenge of aligning text with video content, we employ text guidance by harnessing the classifier guidance approach [7], similarly to [2]. Using a powerful ASR model, we can compute \(\nabla_{\mathbf{x}}\log p(t_{LR}|\mathbf{x})\) needed for guidance, where \(t_{LR}\) is the text predicted by a lip-reader. The inferred noise \(\hat{\epsilon}\) used in the inference update step of the diffusion model is thus modified by both classifier guidance and classifier-free guidance:
\[\hat{\epsilon}=\epsilon_{mg}(\mathbf{x}_{t},\mathcal{V}_{L},\mathcal{I}, \omega_{1})-\omega_{2}\sqrt{1-\bar{\alpha}_{t}}\nabla_{\mathbf{x}_{t}}\log p (t_{LR}|\mathbf{x}_{t})\,, \tag{1}\]
where \(\mathbf{x}_{t}\) is the mel-spectrogram at time step \(t\) of the diffusion inference process, and
\[\epsilon_{mg}(\mathbf{x}_{t},\mathcal{V}_{L},\mathcal{I},\omega_{1})=(1+ \omega_{1})\epsilon_{\theta}(\mathbf{x}_{t},\mathcal{V}_{L},\mathcal{I})- \omega_{1}\epsilon_{\theta}(\mathbf{x}_{t}) \tag{2}\]
is the estimated diffusion noise at the output of MelGen, and \(\omega_{1},\omega_{2}\) are hyperparameters. Note that we use an ASR rather than an audio-visual ASR, namely only \(\mathbf{x}_{t}\) is used as input to the speech recognizer while the video signal is discarded, in order to encourage the model to focus on audio generation.
In our experiments, we noticed that during the generation process, \(\epsilon_{mg}\) was much larger in magnitude than \(\nabla_{\mathbf{x}_{t}}\log p(t_{LR}|\mathbf{x}_{t})\), which led to difficulties in correctly setting \(\omega_{2}\) and to unsatisfactory mel-spectrogram estimates. To remedy this, we followed [2] and introduced a gradient normalization factor, i.e. Eq. 1 becomes
\[\hat{\epsilon}=\epsilon_{mg}-\omega_{2}\gamma_{t}\sqrt{1-\bar{\alpha}_{t}} \nabla_{\mathbf{x}_{t}}\log p(t_{LR}|\mathbf{x}_{t}) \tag{3}\]
where
\[\gamma_{t}=\frac{||\epsilon_{mg}||}{\sqrt{1-\bar{\alpha}_{t}}\cdot||\nabla_{ \mathbf{x}_{t}}\log p(t_{LR}|\mathbf{x}_{t})||}. \tag{4}\]
and \(||\cdot||\) is the Frobenius norm.
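Putting Eqs. (1)-(4) together, the guided noise estimate used at each reverse step can be sketched as follows (an illustrative sketch on PyTorch-style tensors; `asr_grad` stands for \(\nabla_{\mathbf{x}_{t}}\log p(t_{LR}|\mathbf{x}_{t})\), obtained by back-propagating through the ASR, and all names are assumptions):

```python
def lipvoicer_guided_eps(eps_cond, eps_uncond, asr_grad, alpha_bar_t, w1, w2, tiny=1e-12):
    """Classifier-free guidance on the video (Eq. 2) plus gradient-normalized ASR guidance (Eqs. 3-4)."""
    eps_mg = (1.0 + w1) * eps_cond - w1 * eps_uncond             # Eq. (2)
    scale = (1.0 - alpha_bar_t) ** 0.5
    gamma_t = eps_mg.norm() / (scale * asr_grad.norm() + tiny)   # Eq. (4), Frobenius norms
    return eps_mg - w2 * gamma_t * scale * asr_grad              # Eq. (3)
```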
Classifier guidance allows us to train MelGen that is solely conditioned on \(\mathcal{V}\), and use a pre-trained ASR to guide that the generated speech matches \(t_{LR}\). As a result, the ASR is responsible for the precise words in the estimated speech, and MelGen provides the voice characteristics, synchronization between \(\mathcal{V}\) and \(\mathbf{x}\), and the continuity of the speech, see Fig. 1 for an illustration of our system. One additional advantage of this approach is the modularity and ease of substituting both the lip-to-text and the ASR modules. If one wishes to substitute these models with improved versions in the future, the process can be accomplished effortlessly without requiring any re-training. Finally, DiffWave [1] is used as the vocoder that transforms the reconstructed mel-spectrogram to a time-domain speech signal.
## 5 Experiments
In this section, we compare LipVoicer to various lip-to-speech approaches. We use multiple datasets and learning setups to evaluate LipVoicer. To encourage future research and reproducibility, our source code will be made publicly available. Additional experimental results on GRID [3], ablation studies, and complete details are provided in the Appendix. We also created the following anonymized website [https://lipvoicer.github.io](https://lipvoicer.github.io) containing _randomly_ selected videos generated by all compared approaches. We believe this is the best way to fully understand the performance gain of LipVoicer, and highly encourage the readers to visit.
**Implementation Details.** For predicting the text from the silent video at inference time, we use [28] and [30] as our lip-readers for LRS2 and LRS3, respectively. The former achieves a WER of 26.1% and the latter of 19.1% with respect to the ground truth text. The ASR deployed for applying classifier guidance was [41]. We modified its architecture by adding the diffusion time step embedding to the conformer blocks and then fine-tuned it on LRS2 and LRS3. Finally, for the sake of fairness, we employ a different ASR [42] to evaluate the WER of the speech generated by our method and the baselines. The rest of the implementation details needed to reproduce our experiments can be found in the Appendix.
**Baselines.** We compare LipVoicer with recent lip-to-speech synthesis baselines. The baseline methods include (1) SVTS [32], a transformer-based video-to-mel-spectrogram generator with a pre-trained neural vocoder; (2) VCA-GAN [34], a GAN model with a visual context attention module that encodes global representations from local visual features; and (3) Lip2Speech [35], a multi-task learning framework that incorporates the ground truth text in an additional training loss enforcing correspondence between the text predicted from the generated speech and the target text.
**Datasets.** LipVoicer is compared against the baselines on the highly challenging datasets LRS2 [43] and LRS3 [44]. LRS2 contains roughly 142,000 videos of British English in its train and pre-train splits, which amounts to 220 hours of speech by various speakers. Its test set contains 1,243 videos. The train and pre-train sets of LRS3 comprise 9,000 different speakers, contributing 151,000 videos that amount to 430 hours of speech. There are 1,452 videos in the test split. The language spoken in LRS3 videos is English, but with different accents, including non-native ones. We specifically select these in-the-wild datasets for their diverse range of real-world scenarios, with variations in lighting conditions, speaker characteristics, speaking styles, and speaker-camera alignment. We train LipVoicer on the pretrain+train splits of LRS2 and LRS3, on each dataset separately, and evaluation is carried out on the full unseen test splits. Note that both datasets also include transcriptions, but we do not use them for either training or inference.
**Metrics.** Several metrics are used to evaluate the quality and intelligibility of the generated speech and to compare it to the baseline methods. As the main goal of speech generation is to create a speech signal that sounds natural and intelligible to human listeners, our focus is on human evaluation measured by the mean opinion score (MOS). We also compare the WER using a speech recognition model; for a fair comparison, we use an ASR [42] that is different from the one used for guidance. Following [36, 45], we measure the synchronization between the generated speech and the matching video with SyncNet [46] scores. Specifically, we report the temporal distance between audio and video (LSE-D) and SyncNet's confidence scores (LSE-C).
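As an illustration of the WER computation (using the common `jiwer` package as one possible choice; the transcripts below are placeholders):

```python
import jiwer  # a widely used package for computing word error rate

reference  = "transcript of the ground truth utterance"       # placeholder
hypothesis = "asr transcript of the generated speech signal"  # placeholder
print(f"WER: {jiwer.wer(reference, hypothesis):.3f}")
```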
It is worth mentioning that previous studies on lip-to-speech synthesis have presented metrics that we believe are unsuitable for this task, and as a result, we have chosen not to include them in our report. Specifically, they use short-time objective intelligibility (STOI) and perceptual evaluation of speech quality (PESQ). Both metrics are intrusive, namely, are based on a comparison with the clean raw audio signal. While they are valuable for speech enhancement and speaker separation, in speech generation it is not expected, even for a perfect model, to recreate the original audio, for example, due to a variation of the pitch between the original speech and the reconstructed one. Our goal is to generate a speech signal that matches the _video_, and not _the original speech_.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{4}{c}{LRS2} \\ \cline{2-5} & Intelligibility & Naturalness & Quality & Synchronization \\ \hline GT & \(3.35\pm 0.03\) & \(3.474\pm 0.03\) & \(3.481\pm 0.03\) & \(3.534\pm 0.03\) \\ \hline Lip2Speech[35] & \(2.794\pm 0.04\) & \(2.839\pm 0.04\) & \(2.86\pm 0.04\) & \(3.08\pm 0.04\) \\ VCA-GAN [34] & \(2.719\pm 0.04\) & \(2.763\pm 0.04\) & \(2.829\pm 0.04\) & \(2.999\pm 0.04\) \\ \hline LipVoicer (Ours) & \(\mathbf{3.265\pm 0.03}\) & \(\mathbf{3.353\pm 0.03}\) & \(\mathbf{3.453\pm 0.03}\) & \(\mathbf{3.479\pm 0.03}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: LRS2 Human evaluation (MOS).
### Human Evaluation Results
Given 50 random samples from the test splits of LRS2 and LRS3 datasets (each), we used Amazon Mechanical Turk to rate the different approaches and the ground truth. The listeners were asked to rate the videos, on a scale of 1-5, for Intelligibility, Naturalness, Quality, and Synchronization. We note that we observed a non-negligible amount of noise in the scores of the human evaluators. We ameliorated this effect by having a large number (16) of distinct annotators per sample, and by having each annotator score all methods on a specific sample. This ensures that all methods compared are annotated by the exact same set of annotators. We also note that for the SVTS [32] baseline comparison we used generated videos that were sent to us by the authors. As only LRS3 videos were available, we evaluate SVTS only on LRS3.
Tables 1 and 2 depict the MOS results on the LRS2 and LRS3 datasets. LipVoicer outperforms all three baseline methods Lip2Speech, SVTS, and VCA-GAN. Not only that, it is also remarkably close to the ground truth scores. The questionnaire, screenshots of the task, and other implementation details are given in the Appendix.
### Objective Evaluation Results
We further evaluated our method with the word error rate (WER) and synchronization metrics. For a fair comparison, we evaluate the WER using the ASR model from [42], which is distinct from the one we use for guidance. For synchronization, we use the pre-trained SyncNet [46] model to evaluate the LSE-C and LSE-D metrics. As a result of the disparity between the image shapes in LRS2 and the input shape SyncNet was trained on, we refrain from providing synchronization metrics for LRS2. Despite our efforts to mitigate this issue by padding the images to match SyncNet's expected input shape, this approach resulted in significant artifacts that adversely impacted SyncNet's performance across all methods. For SVTS, we report WER and synchronization metrics only for LRS3, since the authors did not open-source their code and only released the generated test files for LRS3. From the WER scores, it is clear that our method significantly improves over competing baselines. It is also clear that this improvement is solely due to the ASR guidance, as without it the WER rises from \(24.1\%\) to \(84.9\%\) on LRS3. In addition to generating high-quality content,
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{4}{c}{LRS3} \\ \cline{2-5} & Intelligibility & Naturalness & Quality & Synchronization \\ \hline GT & \(3.481\pm 0.04\) & \(3.624\pm 0.03\) & \(3.428\pm 0.04\) & \(3.47\pm 0.04\) \\ \hline Lip2Speech[35] & \(2.791\pm 0.04\) & \(2.946\pm 0.04\) & \(2.711\pm 0.04\) & \(2.973\pm 0.04\) \\ SVTS [32] & \(2.989\pm 0.03\) & \(3.187\pm 0.03\) & \(2.929\pm 0.04\) & \(3.045\pm 0.03\) \\ VCA-GAN [34] & \(2.819\pm 0.04\) & \(2.951\pm 0.04\) & \(2.654\pm 0.04\) & \(2.959\pm 0.04\) \\ \hline LipVoicer (Ours) & **3.300 \(\pm\) 0.033** & **3.511 \(\pm\) 0.033** & **3.209 \(\pm\) 0.036** & **3.276 \(\pm\) 0.034** \\ \hline \hline \end{tabular}
\end{table}
Table 2: LRS3 Human evaluation (MOS).
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{1}{c}{LRS2} & \multicolumn{3}{c}{LRS3} \\ \cline{2-5} & WER \(\downarrow\) & WER \(\downarrow\) & LSE-C \(\uparrow\) & LSE-D \(\downarrow\) \\ \hline GT & \(6.1\%\) & \(2.5\%\) & \(6.880\) & \(7.638\) \\ \hline Lip2Speech & \(58.2\%\) & \(61.7\%\) & \(5.231\) & \(8.832\) \\ SVTS & - & \(75.6\%\) & \(6.018\) & \(8.290\) \\ VCA-GAN & \(95.1\%\) & \(87.5\%\) & \(5.255\) & \(8.913\) \\ \hline LipVoicer w/o ASR (Ours) & \(81.2\%\) & \(84.9\%\) & **6.318** & \(8.310\) \\ LipVoicer (Ours) & **33.9\%** & **24.1\%** & \(6.239\) & **8.266** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of LRS2 & LRS3 word error rate (WER) and Lip-Speech Synchronization.
LipVoicer demonstrates commendable synchronization scores, ensuring that the generated speech aligns seamlessly with the accompanying video. Interestingly, the incorporation of text classifier guidance enhances the intelligibility performance while leading to a slight degradation in the LSE-C synchronization metric. We postulate that this observation may occur in cases where the predicted text is significantly different from the ground-truth text.
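For readers less familiar with the metric, the word error rate in Table 3 counts the minimum number of word substitutions, deletions, and insertions needed to turn the hypothesis into the reference transcript, normalised by the number of reference words. The sketch below assumes the third-party `jiwer` package and uses toy transcripts; it does not reproduce the exact text normalisation used for the table.

```python
# A minimal word-error-rate sketch: WER = (substitutions + deletions + insertions) / reference words.
import jiwer

reference = "place the blue bag on the table now"
hypothesis = "place the blue bag on a table"   # one substitution ("the" -> "a"), one deletion ("now")

print(f"WER = {jiwer.wer(reference, hypothesis):.3f}")  # 2 errors / 8 reference words = 0.250
```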
### Qualitative Results
We present a qualitative comparison between the mel-spectrograms generated by LipVoicer, the baselines, and the ground truth. The results, shown in Fig. 2, demonstrate that even for a relatively long utterance, the mel-spectrogram generated by LipVoicer visibly resembles that of the original video. While all approaches manage to successfully detect the beginning and the end of speech segments, LipVoicer is the only method that generates spectral content that is precisely aligned with the ground truth mel-spectrogram. The baselines, on the other hand, struggle to achieve this level of fidelity. In particular, they fail to generate the pitch information of the speech, which results in an unnatural voice and impairs the naturalness of the speech signal. This comparison provides valuable insight into the capability of LipVoicer to generate natural-looking spectrograms that are synchronized with the original video's mel-spectrograms.
### Naturalness-Intelligibility Trade-Off
Another aspect of LipVoicer that should be taken into consideration is the influence of the ASR classifier on the quality of lip-to-speech performance. Occasionally, the speech signal recovered by MelGen, namely without the classifier guidance provided by the ASR, sounds more natural than the one generated by the full scheme, i.e. LipVoicer. In addition, the ASR guidance may lead to synchronization lapses when the predicted text does not significantly overlap with the ground truth text. However, as emerges from the results in Table 3, the inclusion of the ASR guidance has a crucial
Figure 2: The ground truth mel-spectrogram, and its reconstruction by LipVoicer and the baselines.
impact on the intelligibility of the speech. In particular, it results in an invaluable gain on the WER metric. This trade-off is also depicted in Fig. 3. The mel-spectrogram produced without applying classifier guidance presents smooth transitions, and in this respect incorporating the ASR in the inference process degrades the mel-spectrogram, as can be seen by comparing Fig. 3(b) and Fig. 3(c). However, the ASR guidance reconstructs the spectral content (right-hand side of the mel-spectrograms) more faithfully than the setup in which the ASR guidance is omitted. Consequently, while it may sometimes reduce the speech naturalness, in most cases the generated speech at the output of our method is of high quality, sounds natural, and is intelligible.
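To make the knob behind this trade-off concrete, the sketch below shows a generic classifier-guidance step for a diffusion sampler, in which a weight \(w\) scales the gradient of a text-likelihood term added to the denoiser output; pushing \(w\) up favours intelligibility at the possible expense of naturalness. The toy modules, tensor shapes, and the scaling by \(\sigma_{t}\) are illustrative assumptions and do not reproduce LipVoicer's exact formulation.

```python
# A generic classifier-guidance sketch with toy stand-ins (not LipVoicer's networks).
import torch

class ToyDenoiser(torch.nn.Module):
    """Stand-in for a video-conditioned mel-spectrogram denoiser."""
    def __init__(self, dim: int = 80):
        super().__init__()
        self.net = torch.nn.Linear(dim, dim)

    def forward(self, x, t):
        return self.net(x)

def toy_asr_log_prob(x, target):
    """Stand-in for a differentiable log p(text | noisy mel) from an ASR."""
    return -((x - target) ** 2).sum()

def guided_eps(denoiser, x_t, t, target, w, sigma_t):
    eps = denoiser(x_t, t)                                   # unguided noise prediction
    x = x_t.detach().requires_grad_(True)
    grad = torch.autograd.grad(toy_asr_log_prob(x, target), x)[0]
    return eps - w * sigma_t * grad                          # larger w pulls harder toward the text

x_t = torch.randn(1, 80)
target = torch.zeros(1, 80)
print(guided_eps(ToyDenoiser(), x_t, t=10, target=target, w=2.0, sigma_t=0.5).shape)
```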
## 6 Limitations and social impacts
LipVoicer is a powerful lip-to-speech method that has the potential to bring about some social impacts. On the positive side, it can help restore missing or corrupt speech in important videos. However, there are also potential drawbacks to consider. The generated speech may introduce the risk of misrepresentation or manipulation. While we use a faithful lip-reading module, "bad-faith actors" can try to inject misleading text. Mitigating this potential risk is an important challenge but beyond the scope of this work.
## 7 Conclusion
In this paper, we present LipVoicer, a novel method that shows promising results in generating high-quality speech from silent videos. LipVoicer achieves this by utilizing text inferred from a lip-reading model to guide the generation of natural audio. We train and test LipVoicer on multiple challenging datasets comprised of in-the-wild videos. We empirically show that text guidance is crucial for creating intelligible speech, as measured by the word error rate. Furthermore, we show through human evaluation that LipVoicer faithfully recovers the ground truth speech and surpasses recent baselines in intelligibility, naturalness, quality, and synchronization. The impressive achievements of LipVoicer in lip-to-speech synthesis not only advance the current state-of-the-art but also pave the way for intriguing future research directions in this domain.
|
2306.14724 | The crystal and magnetic structure of cesium superoxide | CsO2 is a member of the family of alkali superoxides (formula AO2 with A= Na,
K, Rb and Cs) that exhibit magnetic behavior arising from open $p$-shell
electrons residing on O2- molecules. We use neutron diffraction to solve the
crystal and magnetic structures of CsO2, and observe a complex series of
structures on cooling from room temperature to 1.6 K. These include an
incommensurate modulation along the a-axis of the structure at intermediate
temperatures, which then locks into a commensurate modulation that doubles the
unit cell compared to the previously supposed orthorhombic unit cell. In both
incommensurate and commensurate phases our structural solution involves a
staggering of the cesium ion positions along the b-axis, in contrast to studies
of other alkali superoxides in which staggered tilts of the O2- dimers relative
to the c-axis are seen. Below T ~ 10 K we observe magnetic Bragg reflections
arising from an antiferromagnetically ordered structure with a wavevector of k
= (0,0,0) (relative to the doubled crystallographic unit cell), with moments
that point predominantly along the b-axis with a small component along the
a-axis that hints at possible anisotropic exchange coupling (consistent with
the crystal structure). Measurements of the magnetic Bragg reflections in an
applied magnetic field suggest a spin-flop transition takes place between 2 T
and 4 T in which moments likely flop to point along the crystallographic
a-axis. Our measurements indicate that CsO2 is an interesting example of
magnetic properties being inherently linked to the crystal structure, in that
the staggered displacement of the cesium ions activates antisymmetric exchange
which then permits the observed spin canting. | R. A. Ewings, M. Reehuis, F. Orlandi, P. Manuel, D. D. Khalyavin, A. S. Gibbs, A. D. Fortes, A. Hoser, A. J. Princep, M. Jansen | 2023-06-26T14:20:22Z | http://arxiv.org/abs/2306.14724v2 | # The crystal and magnetic structure of cesium superoxide
###### Abstract
\(\mathrm{CsO}_{2}\) is a member of the family of alkali superoxides (formula \(A\mathrm{O}_{2}\) with \(A=\mathrm{Na}\), K, Rb and Cs) that exhibit magnetic behavior arising from open \(p\)-shell electrons residing on \(\mathrm{O}_{2}^{-}\) molecules. We use neutron diffraction to solve the crystal and magnetic structures of \(\mathrm{CsO}_{2}\), and observe a complex series of structures on cooling from room temperature to \(1.6\,\mathrm{K}\). These include an incommensurate modulation along the \(a\)-axis of the structure at intermediate temperatures, which then locks into a commensurate modulation that doubles the unit cell compared to the previously supposed orthorhombic unit cell. In both incommensurate and commensurate phases our structural solution involves a staggering of the cesium ion positions along the \(b\)-axis, in contrast to studies of other alkali superoxides in which staggered tilts of the \(\mathrm{O}_{2}^{-}\) dimers relative to the \(c\)-axis are seen. Below \(T\simeq 10\,\mathrm{K}\) we observe magnetic Bragg reflections arising from an antiferromagnetically ordered structure with a wavevector of \(\mathbf{k}=(0,0,0)\) (relative to the doubled crystallographic unit cell), with moments that point predominantly along the \(b\)-axis with a small component along the \(a\)-axis that hints at possible anisotropic exchange coupling (consistent with the crystal structure). Measurements of the magnetic Bragg reflections in an applied magnetic field suggest a spin-flop transition takes place between \(2\,\mathrm{T}\) and \(4\,\mathrm{T}\) in which moments likely flop to point along the crystallographic \(a\)-axis. Our measurements indicate that \(\mathrm{CsO}_{2}\) is an interesting example of magnetic properties being inherently linked to the crystal structure, in that the staggered displacement of the cesium ions activates antisymmetric exchange which then permits the observed spin canting.
## I Introduction
The complex interplay between spin, orbital and lattice degrees of freedom plays a varied role across many material families, resulting in a plethora of emergent properties. Well known examples include the Jahn-Teller interaction in transition metal oxides [1], magnetoelastic coupling [2], and quantum spin liquid states in strongly spin-orbit coupled materials [3]. This latter example is particularly topical, with the search ongoing for a material showing the bond-directional Ising-like anisotropy necessary for a realisation of the Kitaev model, for instance.
Anionogenic magnetic materials are compounds on which the spin degree of freedom is associated with the partially occupied \(p\)-orbitals of an anionic molecule. Although theoretical predictions exist for other simple anions [4], the most well-known experimental examples are for the cases of oxygen anions. Some such materials known to be magnetic include solid oxygen [5; 6; 7], alkali sesquioxides, and alkali superoxides [8]. In the latter, the magnetic moment arises due to a partially (half) occupied \(\pi^{\star}\) orbital on the \(\mathrm{O}_{2}^{-}\) molecular units. Because the magnetic object is spatially extended, on so-called \(\mathrm{O}_{2}^{-}\) dumbbells, rather than arising from unpaired electrons on a single ion, a natural coupling arises between the magnetism and the crystal structure. Of particular interest is the interplay of magnetism with the orientation of the dumbbells, a possibility which was noted explicitly in the 1980s and referred to as magnetogyration [9]. The alkali sesquioxides have attracted much recent attention because they undergo charge ordering which then leads to the formation of magnetic pyrochlore or dimer lattices [10; 11; 12; 13; 14].
The alkali superoxides \(A\mathrm{O}_{2}\) (\(A=\mathrm{Na}\), K, Rb, Cs) have been known for a relatively long time, with studies in the 1970s and 1980s elucidating many of the key physical properties. In particular, the fact that all of them are magnetic and undergo a cascade of structural phase transitions on cooling was recognized at this time [8]. The materials in the family have a common high temperature structure of the NaCl type [15] in which the \(\mathrm{O}_{2}^{-}\) dumbbells are able to rotate freely. As they are cooled the rotation of the dumbbells progressively becomes hindered, so they precess about the \(c\)-axis of a tetragonal structure, and then eventually freeze. The details of this freezing are fairly well characterized for \(\mathrm{KO}_{2}\), which forms a monoclinic structure at low temperature in which the dumbbells are frozen at an angle from the higher temperature tetragonal \(c\)-axis [16; 17]. Neutron diffraction measurements from the 1960s confirm that \(\mathrm{KO}_{2}\) orders antiferromagnetically [17], however in the data collected at that time only two magnetic Bragg peaks could be discerned above background, making a magnetic structure
refinement impossible. On the other hand, the precise details of the structures of RbO\({}_{2}\) and CsO\({}_{2}\) were much less well characterized at that time [15; 18]. What was known is that, like KO\({}_{2}\), both materials take on a tetragonal structure at room temperature, with transitions at \(\sim 150\) K into possible orthorhombic structures. Around the same temperature an incommensurate structure was observed in both materials, with a period around three lattice units along the \(a\)-axis. Neither the orthorhombic structure nor the incommensurate structure were solved in detail in either case, however.
More recent experimental studies of CsO\({}_{2}\), RbO\({}_{2}\) and NaO\({}_{2}\) have revisited their bulk magnetic properties. In particular, CsO\({}_{2}\) has been posited to be a pseudo-1-dimensional magnet, with evidence including a broad hump in the magnetic susceptibility reminiscent of what is predicted by a modified Bonner-Fisher model [19]. Magnetic order occurs at \(\sim 10\) K, and DFT calculations of superexchange pathways [20], and NMR [21] and EPR [22] data are consistent with the formation of a Tomonaga-Luttinger liquid phase.
These studies have also revisited the crystal structure, noting the presence of a transition from a tetragonal to an orthorhombic structure below \(\sim 70\) K (a substantially lower temperature than was seen for a similar transition in earlier studies). Notwithstanding, the crystallographic details of the orthorhombic structure were not solved, and nor were the details of the magnetic structure. High field magnetization studies [23] are consistent with CsO\({}_{2}\) exhibiting reduced dimensionality, however these data are not fully consistent with the models proposed in the other studies, and the authors note that in the absence of detailed knowledge of the magnetic structure all of the data are hard to interpret. Recent studies combining \(\mu\)SR, Raman spectroscopy, and x-ray diffraction on RbO\({}_{2}\)[24] and NaO\({}_{2}\)[25] have been interpreted similarly, with progressively lowering symmetry on cooling, including the observation of incommensurate structural Bragg peaks in RbO\({}_{2}\). The main difference between the two compounds is that the data on RbO\({}_{2}\) are more consistent with a 3-dimensional Heisenberg model rather than 1-dimensional magnetism, whereas the data on NaO\({}_{2}\) suggest that there is low dimensionality and no long range magnetic order.
Application of a relatively modest magnetic field, of \(\sim 2.5\) T at 2 K, results in a significant change in the magnetic susceptibility of CsO\({}_{2}\)[23]. This is interpreted as being a spin flop transition from a low field antiferromagnetic state with easy-axis anisotropy. It is notable that the majority of the NMR measurements interpreted as evidence for the formation of a Tomonaga Luttinger liquid were taken at fields either close to or above the spin flop transition field.
As already discussed, the crystal and magnetic structure of KO\({}_{2}\) is better known, and perhaps as a consequence of this there has been more theoretical interest in this member of the superoxide family. Studies have focused on the coupling between spin, orbital and lattice physics. Several authors have proposed that KO\({}_{2}\) undergoes orbital ordering [26; 27; 28], which then has consequences for the nature of superexchange between O\({}_{2}^{-}\) molecules via the K ion and hence affects the magnetic order that develops. In particular, anisotropic exchange has been discussed [29] which as well as affecting the propensity to magnetic order may also give rise to unusual magnetic excitations. The calculations rely on a good knowledge of the crystal structure to have predictive power, underscoring the importance of accurate measurements.
Given the ongoing ambiguity concerning the low temperature crystal structure of CsO\({}_{2}\), and the ordered magnetic structure, we were motivated to revisit this problem using modern neutron scattering instrumentation and analysis methods. As well as providing a complete picture, such measurements are invaluable for the interpretation of data from other experimental probes as well as informing future theoretical predictions. We found that at low temperature the crystallographic unit cell is doubled along the \(a\)-axis compared to the room temperature tetragonal cell, with a staggered displacement along the \(b\)-axis of the Cs ions but essentially no tilts of the O\({}_{2}^{-}\) dumbbells. This phase sets in below 80 K. We also observed the incommensurate crystal structure, akin to one seen in early studies, and found that like the low temperature phase it is characterized by an \(a\)-axis modulation of the positions of the Cs ions along the \(b\)-axis. It would seem that this incommensurate structure eventually locks into the aforementioned commensurate one on cooling. The incommensurate phase is visible below 190 K, below which temperature there is a distinct transition from the room temperature tetragonal phase (space group \(I4/mmm\)) to another that, aside from the incommensurate reflections, can be indexed using \(Immm\). In the magnetically ordered phase below 10 K we found that the moments align antiferromagnetically, with the largest component along the \(b\)-axis but a small component along the \(a\)-axis also, indicating possible exchange anisotropy.
## II Methods
The CsO\({}_{2}\) powder samples used for this study were prepared using the well-established method of oxidation of distilled Cs metal with dried molecular oxygen [30].
Neutron diffraction experiments were conducted using the E2 [31], E6 and E9 [32] diffractometers at the BER-II reactor at the Helmholtz Zentrum Berlin, and on the WISH [33] and HRPD [34] diffractometers at the ISIS spallation neutron source. Experiments on E2 were performed using a fixed incident neutron wavelength of \(\lambda=2.38\) A, selected by a PG monochromator, between temperatures of 1.7 and 100 K in applied magnetic fields between 0 and 6.5 T.
The sample was pressed into cylindrical pellets of diameter 6 mm and height 3 mm, ten of which were placed into a sealed quartz ampoule in which the sample was well
fixed in order to avoid movement of the powder grains in an applied magnetic field. This ampoule was placed into a cylindrical vanadium container of diameter 8 mm and height 60 mm. Experiments on E6 were performed with a fixed incident neutron wavelength \(\lambda=2.41\) A, also selected by a PG monochromator, for temperatures between 1.6 and 248 K in zero applied magnetic field, using the same sample containment as for E2. One dataset was collected using E9, with a fixed incident neutron wavelength of \(\lambda=1.7985\) A, selected by a Ge monochromator, at a temperature of 4 K, again with the same sample containment. Neutron powder patterns were collected between the diffraction angles 7.8\({}^{\circ}\) to 83.4\({}^{\circ}\) (E2), 5.5\({}^{\circ}\) to 136.4\({}^{\circ}\) (E6), and 7.5\({}^{\circ}\) to 141.7\({}^{\circ}\) (E9), respectively.
HRPD and WISH are both time-of-flight (ToF) instruments, with a white beam of neutrons incident on the sample and analysis of the arrival time at the detectors (ToF) used to determine wavelength and hence Bragg reflection d-spacing. Experiments on HRPD were performed at temperatures of 20 K and 80 K, with further measurements taken over fixed time intervals (6 minutes) as the sample was cooled from 290 K. The sample was loaded into an 8 mm diameter vanadium can of height 40 mm for these measurements. All of the data used for subsequent analysis were collected in the highest angle detector bank, with data time-focused to a scattering angle of \(2\theta=168.33^{\circ}\). Experiments on WISH were performed at temperatures in the range 1.6 to 300 K. The sample was loaded into an aluminum can with an annular geometry, with height 40 mm, outer diameter 20 mm and sample thickness 2 mm. The sample was first cooled to 1.6 K, then a measurement of 40 minutes duration was performed at 1.6 K, followed by another of the same duration at 25 K. The sample was then re-cooled to 1.6 K and further measurements were made at fixed temperatures for 5 minutes per temperature, warming back up to 300 K. A sample temperature fluctuation of \(\leq 2\%\) was observed for each of the measurements. For crystal structure analysis, data time-focused onto the higher-angle detector banks at \(2\theta=152.9^{\circ}\) and \(2\theta=121.7^{\circ}\) were used. For the magnetic refinements, the lower-angle detector bank data at \(2\theta=58.3^{\circ}\) were used.
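Since the refinements below are quoted against d-spacing, we recall how a ToF instrument arrives there: the neutron wavelength follows from the de Broglie relation \(\lambda=ht/(m_{n}L)\) for a flight time \(t\) over path length \(L\), and Bragg's law then gives \(d=\lambda/(2\sin\theta)\). The short sketch below implements this conversion; the flight path used is an illustrative number, not the calibrated constant of either instrument.

```python
# A minimal time-of-flight to d-spacing conversion sketch (illustrative flight path, not a
# calibrated instrument constant).
import numpy as np

H = 6.62607015e-34        # Planck constant (J s)
M_N = 1.67492749804e-27   # neutron mass (kg)

def tof_to_d(t_us, flight_path_m, two_theta_deg):
    """Convert a neutron time of flight (microseconds) to a Bragg d-spacing (Angstrom)."""
    lam = H * (t_us * 1e-6) / (M_N * flight_path_m)                  # de Broglie wavelength (m)
    return lam / (2.0 * np.sin(np.radians(two_theta_deg) / 2.0)) * 1e10

print(tof_to_d(t_us=20000.0, flight_path_m=40.0, two_theta_deg=152.9))  # d in Angstrom
```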
Structural refinements were performed using the FullProf software suite [35; 36] for the E2, E6 and E9 data, and using Jana2006 [37] for the HRPD and WISH structural data. Subsequent magnetic refinements were performed using FullProf for the WISH data. Magnetic symmetry analysis for the magnetic structure refinement was done using the BasIreps tool implemented in FullProf. The incommensurate structure analysis was done using Jana2006 together with the ISODISTORT software [38; 39].
## III Results
### Crystal structure
Figure 1 shows data collected at 290 K (room temperature) on the HRPD instrument, together with a refinement with the \(I4/mmm\) space group and lattice parameters widely reported in the literature. A satisfactory refinement can be achieved for these data collected for a relatively short duration (6 minutes) with correspondingly non-optimized signal to noise.
The details of the crystal structure determined at 290 K are given in table 1. Refinements of the data from WISH at the same temperature are consistent with these parameters.
On cooling a phase transition is observed at \(T_{\mathrm{S1}}=192(2)\) K, that appears to be tetragonal to orthorhombic as illustrated in fig. 2 which shows the lattice param
Figure 1: Data collected at 290 K on HRPD as a function of d-spacing, refined using the \(I4/mmm\) space group (see text). Measured data are indicated by black circles, the fit by the red line, and the difference curve (offset by \(-1200\) units) is shown as a blue curve. Vertical ticks indicate the positions of Bragg reflections. The residuals for these fits were \(R_{p}=2.18\) and \(R_{wp}=2.88\).
\begin{table}
\begin{tabular}{l l l l l l} Atom & Site & \(x\) & \(y\) & \(z\) & \(U_{\mathrm{iso}}(\AA^{2})\) \\ \hline Cs & \(2a\) & 0.00 & 0.00 & 0.00 & 0.0323(6) \\ O & \(4e\) & 0.00 & 0.00 & 0.4138(1) & 0.0404(5) \\ \hline \end{tabular}
\end{table}
Table 1: Crystal structure parameters for the \(I4/mmm\) space group solution, at 290 K measured on the HRPD instrument. The determined lattice parameters were \(a=4.4589(2)\AA\) and \(c=7.3320(3)\AA\). The unit cell volume was \(V=145.796(1)\AA^{3}\). The residuals for the fit were \(R_{p}=2.18\%\) and \(R_{wp}=2.88\%\).
eter as a function of temperature determined from Rietveld refinements using the tetragonal \(I4/mmm\) space group above the transition and the orthorhombic \(Immm\) space group below it, from data collected on E6. The latter space group represents the symmetry of the lattice if only macroscopic strain is taken into account [38; 39]. The strain reflects the change of the unit cell metric below the transition and therefore this space group is sufficient to evaluate the thermal evolution of the unit cell parameters. The transition is at a temperature consistent with early structural studies of CsO\({}_{2}\)[15], but significantly higher than observed in more recent x-ray diffraction experiments [20]. In order to get good fits we found that anisotropic displacement parameters (ADPs) were required. The best fits were obtained with considerable elongation of the ADP ellipsoids of the cesium ions along the \(b\)-axis compared to the other principal axes. Furthermore, the ADP ellipsoids of the oxygen ions were elongated in the \(ab\)-plane, albeit with a smaller difference between \(a\) and \(b\) than for the cesium ions, and compressed significantly along the \(c\)-axis. These findings suggest the presence of atomic displacements not accounted for in the \(Immm\) space group.
Close examination of the data collected on WISH shows that at temperatures above \(\sim 72\,\)K, and at least up to \(180\,\)K, weak incommensurate superlattice reflections are visible above background that can be indexed using a propagation vector of \(\mathbf{k}_{\mathrm{IC}}=(0.561(2),0,0)\) referred to the tetragonal unit cell, i.e. a period of 1.78 lattice units. The appearance of the incommensurate superlattice seems to be concomitant with the putative transition from the tetragonal to orthorhombic structure at \(T_{\mathrm{S1}}\). To probe this further we show in fig. 3 the temperature dependence of the intensity and position of the \((1+k_{x},2,1)\) peak (the fact that superlattice peaks such as this one appear at short \(d\)-spacing, and hence large \(|\mathbf{Q}|\), indicates that they are structural rather than magnetic in origin).
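As a quick cross-check of this indexing, the orthorhombic relation \(1/d^{2}=(h/a)^{2}+(k/b)^{2}+(l/c)^{2}\) reproduces the satellite position tracked in fig. 3. The sketch below uses rounded intermediate-temperature lattice parameters (long axis taken as \(c\)) purely as an illustration; the numbers are not the refined values.

```python
# A minimal d-spacing cross-check for the incommensurate satellite, using rounded lattice
# parameters (a, b, c in Angstrom); values are illustrative, not the refined ones.
import numpy as np

def d_spacing(hkl, abc):
    """Orthorhombic d-spacing: 1/d^2 = (h/a)^2 + (k/b)^2 + (l/c)^2."""
    h, k, l = hkl
    a, b, c = abc
    return 1.0 / np.sqrt((h / a) ** 2 + (k / b) ** 2 + (l / c) ** 2)

abc = (4.41, 4.38, 7.34)
print(d_spacing((1.561, 2, 1), abc))   # ~1.69 Angstrom: the (1+kx, 2, 1) satellite
print(d_spacing((1.5, 2, 1), abc))     # ~1.71 Angstrom: the commensurate (3/2, 2, 1) position
```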
Given that the incommensurate reflections seem to appear at the same temperature as the putative tetragonal to orthorhombic transition, to solve the structural modulation we assumed that the corresponding order parameter is transformed by a single irreducible representation (irrep) of the high temperature tetragonal space group \(I4/mmm\). The irreducible nature of the order parameter was concluded based on the continuous character of the transition evidenced by the temperature dependence of the unit cell parameters (fig 2). There are four irreps, \(\Sigma_{i}(i=1-4)\) associated with the \((g,0,0)\) line of symmetry. The relevant superspace groups were generated by ISODISTORT [38; 39] and compared with the available diffraction data. Some of them could be ruled out due to extinction conditions that predict no intensity where a finite signal was observed (e.g. at \(d=4.05\AA\)). The remainder were tested using the Jana2006 software. The best agreement (with \(R_{wp}=8.83\%\)) was found for the model with \(Immm(0,0,g)s00\) symmetry yielded by the \(\Sigma_{4}\) irrep and illustrated in fig 4. The largest displacements are of the cesium ions, which are staggered along the \(b\)-axis following a sinusoidal modulation with an amplitude of \(0.0564(9)\) lattice units. The oxygen molecules also exhibit a sinusoidal displacement along \(b\), albeit with a smaller amplitude of \(0.0090(7)\) lattice units. Interestingly, this symmetry does not permit the rotation of oxygen molecules widely predicted for CsO\({}_{2}\), and seen in other alkali superoxides. Rather, the entire molecules' centres of mass are displaced along \(b\), with the molecules remaining oriented parallel to the c-axis. The complete structural solution for the incommensurate phase is given in table 2.
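A small numerical rendering of this modulation is sketched below; it uses the refined first-harmonic amplitudes and propagation vector quoted above but assigns a common phase to the Cs and \(\mathrm{O}_{2}^{-}\) displacements, which is a simplification of the full superspace description in table 2.

```python
# A minimal sketch of the sinusoidal b-axis displacements in the incommensurate phase:
# u_y(n) = A * sin(2*pi*k*n) for cell index n along a (first harmonic only, common phase).
import numpy as np

k_ic = 0.561      # propagation vector component along a (tetragonal setting)
A_cs = 0.0564     # Cs displacement amplitude along b (lattice units)
A_o2 = 0.0090     # O2- centre-of-mass displacement amplitude along b (lattice units)

for n in range(8):
    phase = 2 * np.pi * k_ic * n
    print(f"cell {n}: Cs dy = {A_cs * np.sin(phase):+.4f}, O2 dy = {A_o2 * np.sin(phase):+.4f}")
```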
Figure 2: Temperature dependence of lattice parameters refined from E6 data, assuming \(I4/mmm\) and \(Immm\) space groups at high and low temperature respectively.
Figure 3: Data from WISH showing the disappearance on warming up to \(200\,\)K of the incommensurate Bragg reflection at \(\mathbf{Q}_{\mathrm{IC}}=(1.56,2,1)\) at \(d=1.69\,\)Å. The upper panel shows the intensity of the peak, determined by integrating a fitted Gaussian, with black circles. The lower panel shows the fitted position of the peak \(k_{x}\) with respect to \((1+k_{x},2,1)\) with red squares. In both cases the errorbars are smaller than the point size.
On cooling further a significant anomaly in the \(a\) lattice parameter (in the \(Immm\) setting), and a smaller anomaly in the \(c\) parameter, occurs at \(T_{\rm S2}=71.5(9)\,\)K. At the same time the propagation vector of the incommensurate structural modulation changes and locks in to a commensurate value of \({\bf k}_{\rm C}=(\frac{1}{2},0,0)\) (again referred to the \(Immm\) cell), which survives down to the lowest temperatures measured. Data showing the transition locking into \({\bf Q}_{\rm C}=(\frac{3}{2},2,1)=(1,2,1)+{\bf k}_{\rm C}\) are shown in fig. 5.
In order to understand what is happening crystallographically in this phase we performed careful Rietveld refinements of all of the data collected on the different instruments used. In previous work it was suggested that the orthorhombic space group \(Immm\) provides an appropriate description of the crystal structure. However, that would be inconsistent with the doubled unit cell seen below \(T_{\rm S2}\).
Assuming the most likely scenario, that the low temperature structural transition is of a lock-in type, one can deduce the possible symmetries of the crystal structure below \(T_{\rm S2}\). This scenario sets symmetry constraints on the transformational properties of the commensurate order parameter, implying the same active irrep for both incommensurate and commensurate phases. As mentioned above, the modulated orthorhombic structure is associated with \(\Sigma_{4}\). This irrep is four-dimensional with the components of the complex order parameter \((\eta_{1},\eta_{1}^{*},\eta_{2}^{*},\eta_{2})\). Using the matrix operators summarised in table 3, one can verify the existence of a lock-in free-energy invariant, \(\eta_{1}^{4}+\eta_{1}^{*4}+\eta_{2}^{*4}+\eta_{2}^{4}\), for the commensur
\begin{table}
\begin{tabular}{l l l l l} Atom & \(x\) & \(y\) & \(z\) & \(U_{\rm iso}(\AA^{2})\) \\ \(A_{i}^{1}\) & \(A_{x}^{1}\) & \(A_{y}^{1}\) & \(A_{z}^{1}\) & \\ \(B_{i}^{1}\) & \(B_{x}^{1}\) & \(B_{y}^{1}\) & \(B_{z}^{1}\) & \\ \hline Cs & 0.0 & 0.0 & 0.0 & 0.0029(5) \\ & -0.0564(9) & 0.0 & 0.0 & \\ & 0.0 & 0.0 & 0.0 & \\ O & 0.0 & 0.40964(12) & 0.0 & 0.0110(3) \\ & -0.0090(7) & 0.0 & 0.0 & \\ & 0.0 & 0.0 & 0.0 & \\ \hline \end{tabular}
\end{table}
Table 2: Structural parameters of CsO\({}_{2}\) obtained from refinement of the neutron diffraction data (\(T=100\,\)K) using the \(Immm(0,0,g)s00\) superspace group with the basis vectors related to the parent tetragonal \(I4/mmm\) structure as \((0,-1,0,0),(0,0,1,0),(-1,0,0,0),(0,0,0,1)\) with the origin at \((0,0,0,\frac{3}{2})\). Here, \(A_{i}^{1}\) and \(B_{i}^{1}\) with \(i=(x,y,z)\) are the Fourier coefficients of the first harmonic (\(n=1\)) of the displacive modulation function: \(u_{i,j,l}({\bf r}_{j,l},{\bf k}_{\rm IC})=\sum_{n=0}^{\infty}A_{i,j}^{n}\sin(2n\pi[{\bf r}_{j,l}\cdot{\bf k}_{\rm IC}])+B_{i,j}^{n}\cos(2n\pi[{\bf r}_{j,l}\cdot{\bf k}_{\rm IC}])\), where \({\bf r}_{j,l}\) indicates the position of the \(j^{\rm th}\) atom of the average structure in the \(l^{\rm th}\) unit cell. The unit cell parameters were \(a=4.4117(1)\AA\), \(b=7.3438(2)\AA\), \(c=4.3762(1)\AA\) and \({\bf k}_{\rm IC}=(0,0,0.561(2))\), the latter adopted for the setting of the superspace group specified above. The unit cell volume was \(V=141.782(6)\AA^{3}\). The residuals for the fit were \(R_{p}=6.86\%\) and \(R_{wp}=8.83\%\).
Figure 4: Representation of the incommensurate crystal structure determined using the refined WISH data at \(100\,\)K. The Cs ions (green spheres) and oxygen dumbbells (red spheres connected by a red cylinder to indicate the molecular bond) modulate sinusoidally along the \(a\)-axis, with displacements along the \(b\)-axis (in tetragonal notation, which is cyclically permuted with respect to the notation in table 2). Five unit cells along \(a\) are shown in order to illustrate the form of the displacements. This image, and others in the rest of the manuscript illustrating crystal and magnetic structures, were rendered using the VESTA software [40].
Figure 5: Data from WISH showing the transition from incommensurate to commensurate crystal structure. Below \(\sim 80\,\)K a peak corresponding to \({\bf Q}_{\rm C}=(\frac{3}{2},2,1)\) appears at \(d=1.71\,\)Å. At higher temperatures a peak corresponding to \({\bf Q}_{\rm IC}=(1.57,2,1)\) at \(d=1.69\,\)Å is visible. Commensurate and incommensurate peak positions are indicated by a black square and a red circle respectively.
surate value of the propagation vector \(\mathbf{k}_{\mathrm{C}}=(1/2,0,0)\). This energy term exists only for the single value of the propagation vector and therefore its activation favours the commensurate phase. This symmetry argument further supports the lock-in mechanism for the transition at \(T_{\mathrm{S2}}\).
In the commensurate limit of \(\mathbf{k}_{\mathrm{C}}=(1/2,0,0)\) there are three isotropy subgroups associated with \(\Sigma_{4}\). Depending on the choice of the global phase of the modulation, one can obtain the \(Pnma\), \(Pnma\) and \(Pmc2_{1}\) subgroups. They were the primary candidates in the refinement of the low-temperature diffraction data. Independent analysis based on testing of all possible isotropy subgroups of \(I4/mmm\) consistent with the doubled unit cell also resulted in only two possibilities, \(Pnma\) and \(Pmc2_{1}\).
We performed refinements of all data collected on HRPD (since it had the highest resolution) at \(20\,\mathrm{K}\) using \(Pnam\) and found generally satisfactory fits, with \(R_{p}=2.51\%\) and \(R_{wp}=3.33\%\). We also investigated refining the data with the \(Pna2_{1}\) space group (No. 33), which is a sub-group of \(Pnam\), and found slightly improved residuals of \(R_{p}=2.47\%\) and \(R_{wp}=3.29\%\). How
\begin{table}
\begin{tabular}{c c c} Symm. & \(mM_{5}^{+}(\delta_{1},\delta_{2})\) & \(m\Sigma_{3}(\xi_{1},\xi_{1}^{*},\xi_{2}^{*},\xi_{2})\) & \(\Sigma_{4}(\eta_{1},\eta_{1}^{*},\eta_{2}^{*},\eta_{2})\) \\ \hline \(\{1|0,0,0\}\) & \(\begin{pmatrix}1&0\\ 0&1\end{pmatrix}\) & \(\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\end{pmatrix}\) & \(\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&0&1\end{pmatrix}\) \\ \(\{2_{z}|0,0,0\}\) & \(\begin{pmatrix}-1&0\\ 0&-1\end{pmatrix}\) & \(\begin{pmatrix}0&-1&0&0\\ -1&0&0&0\\ 0&0&0&-1\\ 0&0&0&-1&0\end{pmatrix}\) & \(\begin{pmatrix}0&1&0&0\\ 1&0&0&0\\ 0&0&0&1&0\end{pmatrix}\) \\ \(\{2_{y}|0,0,0\}\) & \(\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\) & \(\begin{pmatrix}0&1&0&0\\ 1&0&0&0\\ 0&0&-1&0\\ 0&0&0&-1\end{pmatrix}\) & \(\begin{pmatrix}0&-1&0&0\\ -1&0&0&0\\ 0&0&0&-1\end{pmatrix}\) \\ \(\{4_{z}^{+}|0,0,0\}\) & \(\begin{pmatrix}0&-1\\ 1&0\end{pmatrix}\) & \(\begin{pmatrix}0&0&1&0\\ 0&0&0&1\\ 0&-1&0&0\\ -1&0&0&0\end{pmatrix}\) & \(\begin{pmatrix}0&0&1&0\\ 0&0&0&1\\ 0&1&0&0\\ 1&0&0&0\end{pmatrix}\) \\ \(\{-1|0,0,0\}\) & \(\begin{pmatrix}1&0\\ 0&1\end{pmatrix}\) & \(\begin{pmatrix}0&1&0&0\\ 1&0&0&0\\ 0&0&0&1\\ 0&0&1&0\end{pmatrix}\) & \(\begin{pmatrix}0&1&0&0\\ 1&0&0&0\\ 0&0&0&1\\ 0&0&1&0\end{pmatrix}\) \\ \(\{1|1,0,0\}\) & \(\begin{pmatrix}1&0\\ 0&1\end{pmatrix}\) & \(\begin{pmatrix}e^{\pi i}&0&0&0\\ 0&e^{-\pi i}&0&0\\ 0&0&e^{-\pi i}&0\\ 0&0&0&e^{\pi i}\end{pmatrix}\) & \(\begin{pmatrix}e^{\pi i}&0&0&0\\ 0&e^{-\pi i}&0&0\\ 0&0&0&e^{\pi i}\end{pmatrix}\) \\ \(\{1|0,1,0\}\) & \(\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&1&0&0\\ 0&0&0&1\end{pmatrix}\) & \(\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&0&1&0\\ 0&0&0&1\end{pmatrix}\) \\ \(\{1|1/2,1/2,1/2\}\) & \(\begin{pmatrix}-1&0\\ 0&-1\end{pmatrix}\) & \(\begin{pmatrix}e^{\frac{1}{2}\pi i}&0&0&0\\ 0&e^{-\frac{1}{2}\pi i}&0&0\\ 0&0&e^{-\frac{1}{2}\pi i}&0\\ 0&0&0&e^{\frac{1}{2}\pi i}\end{pmatrix}\) & \(\begin{pmatrix}e^{\frac{1}{2}\pi i}&0&0&0\\ 0&e^{-\frac{1}{2}\pi i}&0&0\\ 0&0&e^{-\frac{1}{2}\pi i}&0\\ 0&0&0&e^{\frac{1}{2}\pi i}\end{pmatrix}\) \\ \(T\) & \(\begin{pmatrix}-1&0\\ 0&-1\end{pmatrix}\) & \(\begin{pmatrix}-1&0&0&0\\ 0&-1&0&0\\ 0&0&-1&0\\ 0&0&0&-1\end{pmatrix}\) & \(\begin{pmatrix}e^{\frac{1}{2}\pi i}&0&0&0\\ 0&e^{-\frac{1}{2}\pi i}&0&0\\ 0&0&e^{-\frac{1}{2}\pi i}&0\\ 0&0&0&e^{\frac{1}{2}\pi i}\end{pmatrix}\) \\ \(T\) & \(\begin{pmatrix}-1&0\\ 0&-1\end{pmatrix}\) & \(\begin{pmatrix}-1&0&0&0\\ 0&-1&0&0\\ 0&0&-1&0\\ 0&0&-1&0\\ 0&0&0&1\end{pmatrix}\) & \(\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\end{pmatrix}\) \\ \end{tabular}
\end{table}
Table 3: Matrices of irreducible representations for generators of \(I4/mmm\) space group, associated with the \(\mathbf{k}=(1,1,1)\) (\(M\)-point) and the \(\mathbf{k}=(1/2,0,0),(-1/2,0,0),(0,-1/2,0),(0,1/2,0)\) wavevector star (\(\Sigma\)-line of symmetry) [41]. Note that here \(T\) is the time reversal operator.
ever, given that the improvement in goodness-of-fit was very small and likely arises due to fewer symmetry constraints on the structure, we take the higher symmetry structure to be the solution. The final refinements using the \(Pnam\) space group are shown in fig. 6.
The details of the refinements using the \(Pnam\) space group are given in table 4 and shown graphically in fig. 7. The most notable feature is that the Cs ions are shifted considerably from their ideal position of \(\frac{3}{4}\) on the \(b\)-axis, and form a zig-zag pattern in the doubled unit cell along \(a\). In the \(Pnam\) structure the oxygen dumbbells are not allowed by symmetry to tilt, whereas they are in principle free to do this in the \(Pna2_{1}\) structure that was investigated. In the former structural solution we found that there was a small staggered shift of the centre of mass of the oxygen dumbbells along the \(b\)-axis, albeit of much smaller magnitude than the shift of the Cs ions. For the \(Pna2_{1}\) structure (see table 5) this shift of the oxygen dumbbells was found to be of the same nature, i.e. there was essentially no tilt of the oxygen dumbbells even when this would be allowed by symmetry.
\begin{table}
\begin{tabular}{l l l l l l} Atom & Site & \(x\) & \(y\) & \(z\) & \(U_{\rm iso}(\AA^{2})\) \\ \hline Cs & \(4a\) & 0.1203(3) & 0.7816(9) & -0.0003(4) & 0.0043(4) \\ O1 & \(4a\) & 0.1277(4) & 0.7502(12) & 0.4091(14) & 0.0066(3) \\ O2 & \(4a\) & 0.6293(4) & 0.7575(8) & -0.4091(12) & 0.0091(5) \\ \hline \end{tabular}
\end{table}
Table 5: Crystal structure parameters for the \(Pna2_{1}\) space group solution, at 20 K measured on the HRPD instrument. The determined lattice parameters were \(a=8.727(3)\AA\), \(b=4.3968(2)\AA\), \(c=7.3380(2)\AA\). The unit cell volume was \(V=281.553(3)\AA^{3}\). The residuals for the fit were \(R_{p}=2.47\%\) and \(R_{wp}=3.30\%\), which represent a very small improvement compared to the refinement shown in table 4 for the \(Pnam\) crystal structure.
Figure 6: Data and refinement using \(Pnam\) for data collected on HRPD at 20 K as a function of d-spacing. Black circles indicate the measured data, the red line the refinement, and the blue line indicates the difference between the two (offset by -2000 units). Vertical ticks indicate the positions of Bragg reflections.
\begin{table}
\begin{tabular}{l l l l l l} Atom & Site & \(x\) & \(y\) & \(z\) & \(U_{\rm iso}(\AA^{2})\) \\ \hline Cs & \(4c\) & 0.1207(4) & 0.7142(4) & 0.25 & 0.0022(4) \\ O & \(8d\) & 0.8747(4) & 0.2472(4) & 0.15944(8) & 0.0082(3) \\ \end{tabular}
\end{table}
Table 4: Crystal structure parameters for the \(Pnam\) space group solution, at 20 K measured on the HRPD instrument. The determined lattice parameters were \(a=8.727(1)\AA\), \(b=4.39758(6)\AA\), and \(c=7.33860(8)\AA\). The unit cell volume was \(V=281.640(6)\AA^{3}\). The residuals for the fit were \(R_{p}=2.51\%\) and \(R_{wp}=3.33\%\).
Figure 7: The refined crystal structure with the \(Pnam\) space group, with oxygen molecules shown in red (spheres connected by a cylinder to indicate the molecular bond) and cesium ions shown in green (spheres only). The unit cell is indicated by the gray box. The view shown is slightly tilted from being parallel to the \(a\)-axis. The periodic displacements of the Cs ions are clear, with a smaller but still visible periodic displacement of the O\({}_{2}^{-}\) molecules also. The figure shows two unit cells along the \(a\)-axis.
### Magnetic structure
On cooling below \(T\approx 10\) K additional low \(Q\) / long \(d\)-spacing peaks appear that are consistent with the previously supposed appearance of antiferromagnetic order. Such peaks were consistently visible in the data collected on WISH, E2, E6 and E9. As mentioned in sec. II, only the WISH data were used for refining the magnetic structure. Of all the datasets, those from this instrument had the best signal to noise ratio and hence the most magnetic Bragg peaks were discernable, giving the greatest chance of a reliable refinement of the magnetic structure. Subsequent checks of the refinement using the E2, E6 and E9 instruments yielded good fits. An overview of the data collected as a function of temperature below 11 K is shown in fig. 8.
The appearance of strong peaks at \((1,0,0)\) and \((1,0,1)\) indicates the onset of antiferromagnetic order at \(10<T_{\rm N}<11\) K. This compares with the value of \(T_{\rm N}=9.6\) K obtained from the peaks in the inverse susceptibility and specific heat [42; 43; 44]. The small difference may be due to differences in thermometry or a slight lag in the true sample temperature, since the measurements were collected on warming.
All possible irreducible representations of a magnetic structure with \({\bf k}=(0,0,0)\) in the \(Pnam\) space group are given in table 6. The general expressions of the Fourier coefficients \({\bf S_{k}}(j)\) were obtained from the basis functions calculated from the different representations of the O\({}_{2}^{-}\) units at the Wyckoff position \(4c\) of the space group \(Pnam\): O1 at \((x,y,\frac{1}{4})\), O2 at \((-x,-y,\frac{3}{4})\), O3 at \((\frac{1}{2}+x,\frac{1}{2}-y,\frac{1}{4})\), and O4 at \((\frac{1}{2}-x,\frac{1}{2}+y,\frac{3}{4})\). Instead of the Wyckoff position \(8d\), where the individual oxygen atoms are located, we used \(4c\) which defines the center of gravity of the O\({}_{2}^{-}\) unit.
We also considered a solution to the magnetic structure using the lower symmetry space group \(Pna2_{1}\). The difference between the magnetic structures allowed in the two space groups is that \(Pnam\) does not permit magnetic order with both a general component of the moment in the \(ab\)-plane and a component along the \(c\)-axis, but only one or the other. On the other hand, \(Pna2_{1}\) does in principle allow magnetic moments to point along a general direction. However, even when refining the \(c\)-axis component of the magnetic moment, permitted in \(Pna2_{1}\), we found that the best fit yielded this component as zero within error bar. For the following we therefore restrict our discussion just to \(Pnam\).
Note that the magnetic form factor for the \(s=1/2\) O\({}_{2}^{-}\) ions was included using the tabulated values obtained from measurements of solid oxygen [45]. Although the size of the oxygen molecules is different (1.29 A for solid oxygen vs 1.33 A for CsO\({}_{2}\)) the difference is small enough that we have reasonable confidence that the derived form factor is applicable here.
Before fitting the data we note that the strong intensity at (1,0,0) and (1,0,1) implies that the largest component of the magnetic moment will be along the \(b\)-axis, since the magnetic neutron diffraction cross-section is proportional to the component of the magnetic moment perpendicular to the wavevector. This means that we can immediately rule out the \(\Gamma_{1}\), \(\Gamma_{3}\), \(\Gamma_{6}\) and \(\Gamma_{8}\) irreps, since they contain moments parallel to the \(c\)-axis only.
We also note that for the \(\Gamma_{7}\) irrep the \(y\)-components of the moments are coupled ferromagnetically. Since the bulk susceptibility follows a purely antiferromagnetic trend below \(T_{\rm N}\), for the refinements this component must be fixed to zero, which immediately allows us to rule it out. This then leaves the following possible irreps: \(\Gamma_{2}\), \(\Gamma_{4}\), and \(\Gamma_{5}\), which all have AFM arrangements of spins for the major \(y\)-component of the moment.
The presence of a very weak peak corresponding to \((0,0,1)\), that appears below \(T_{\rm N}\) and hence must be magnetic, also allows us to narrow down our choice of possible irreps. This observation allows us to rule out \(\Gamma_{5}\), for which this peak has zero intensity. Comparing \(\Gamma_{2}\) and \(\Gamma_{4}\), we found substantially better fits (\(R_{wp}=6.36\%\)) for
\begin{table}
\begin{tabular}{l c c c c} \hline Irrep & \({\bf S_{k}}(1)\) & \({\bf S_{k}}(2)\) & \({\bf S_{k}}(3)\) & \({\bf S_{k}}(4)\) \\ \hline \(\Gamma_{1}(4c)\) & \((0,0,u)\) & \((0,0,u)\) & \((0,0,-u)\) & \((0,0,-u)\) \\ \(\Gamma_{2}(4c)\) & \((u,v,0)\) & \((-u,-v,0)\) & \((u,-v,0)\) & \((-u,v,0)\) \\ \(\Gamma_{3}(4c)\) & \((0,0,u)\) & \((0,0,u)\) & \((0,0,u)\) & \((0,0,u)\) \\ \(\Gamma_{4}(4c)\) & \((u,v,0)\) & \((-u,-v,0)\) & \((-u,v,0)\) & \((u,-v,0)\) \\ \(\Gamma_{5}(4c)\) & \((u,v,0)\) & \((u,v,0)\) & \((u,-v,0)\) & \((u,-v,0)\) \\ \(\Gamma_{6}(4c)\) & \((0,0,u)\) & \((0,0,-u)\) & \((0,0,-u)\) & \((0,0,u)\) \\ \(\Gamma_{7}(4c)\) & \((u,v,0)\) & \((u,v,0)\) & \((-u,v,0)\) & \((-u,v,0)\) \\ \(\Gamma_{8}(4c)\) & \((0,0,u)\) & \((0,0,-u)\) & \((0,0,u)\) & \((0,0,-u)\) \\ \hline \end{tabular}
\end{table}
Table 6: Irreducible representations of a \({\bf k}=(0,0,0)\) magnetic structure in the \(Pnam\) space group.
Figure 8: Overview of the data collected in the low angle bank on WISH as a function of temperature between 1.5 K and 11 K. Data have been converted from time-of-flight to d-spacing. The strong peaks at \(\sim 8.8\) Å and \(\sim 5.7\) Å correspond to the (1,0,0) and (1,0,1) Bragg reflections respectively.
the case of the \(\Gamma_{2}\) irrep, compared to the \(\Gamma_{4}\) irrep (\(R_{wp}=7.75\%\)). The best fit was found with \(\mu_{y}=0.512(16)\mu_{\rm B}\) and \(\mu_{x}=0.05(2)\mu_{\rm B}\), giving a total moment of \(0.514\mu_{\rm B}\). As illustrated in the inset of fig. 9, the presence of the \((0,0,1)\) peak is crucial for determining the \(x\)-component of the magnetic moment, as the intensity is rather sensitive to the value of \(\mu_{x}\). The final refinement is shown in fig. 9, together with an inset showing the sensitivity of the \((0,0,1)\) peak's intensity to \(\mu_{x}\). The final refined magnetic structure is shown in fig. 10. Our result implies the magnetic structure has \(Pn^{\prime}a^{\prime}m^{\prime}\) symmetry, keeping the setting of the paramagnetic \(Pnam\) space group.
### Spin flop transition
Data taken on the E2 diffractometer probed the effect of an applied magnetic field on the two strongest magnetic Bragg peaks, \((1,0,0)\) and \((1,0,1)\), and are shown in fig. 11. Both peaks showed marked decreases in intensity between \(2\,\)T and \(4\,\)T. The strong \((1,0,0)\) peak intensity reduces in magnitude above background by approximately one third, with a somewhat smaller reduction of around \(15\%\) seen for the \((1,0,1)\) peak. In the zero field phase we have established that the moments lie predominantly along the \(b\)-axis, and in a spin flop transition we would expect them to switch to being parallel to the \(a\) or \(c\) directions. Crucially, this would apply only to those (randomly oriented) crystallites in the powder sample whose \(b\) axes lie mostly parallel to the applied magnetic field. Above the critical field we would therefore expect approximately one third of the crystallites to undergo a spin flop transition. The maximum change in magnetic Bragg peak intensity would therefore be \(\frac{1}{3}\), corresponding to the signal from the one third of crystallites involved in the spin flop going to zero. Because the \((1,0,0)\) peak reduces by approximately this amount we can surmise that the spin flop involves spins reorienting from the \(b\)-axis to the \(a\)-axis. If the spins were to flop towards \(c\), and assuming no change in the size of the ordered moment, then the component of magnetization perpendicular to \(\mathbf{Q}\) would be unchanged, and hence no change in the intensity of the \((1,0,0)\) Bragg peak would be seen, at variance with our observations. On the other hand, a spin flop towards \(a\) would result in the component of magnetization perpendicular to \(\mathbf{Q}\) going to zero. We can check this result by looking at the intensity of the \((1,0,1)\) magnetic Bragg peak as well. Spins reorienting from \(b\) to \(a\) would result in a reduction in Bragg peak intensity of \(\sim 40\%\) for the one third of crystallites involved, giving rise to an overall reduction in intensity of \(\sim 13\%\), which is broadly consistent with what we observe.
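The geometric bookkeeping in this argument can be checked in a few lines: the sketch below evaluates the fraction of a unit moment perpendicular to \(\mathbf{Q}\) for moments along \(b\) (zero field) and along \(a\) (flopped), and attributes the flop to one third of the crystallites. The Cartesian construction assumes the orthorhombic cell parameters of table 4, and the one-third fraction is the same rough estimate used in the text.

```python
# A minimal sketch of the expected powder-averaged intensity reductions at the spin flop,
# assuming moments rotate from b to a in roughly one third of the crystallites.
import numpy as np

abc = np.array([8.727, 4.398, 7.339])             # a, b, c of the Pnam cell at 20 K (Angstrom)

def perp_fraction(hkl, axis):
    """Fraction |m_perp|^2 / |m|^2 for reflection hkl and a unit moment along axis 'a', 'b' or 'c'."""
    q = 2 * np.pi * np.asarray(hkl, float) / abc  # Cartesian Q for an orthorhombic lattice
    q_hat = q / np.linalg.norm(q)
    m = np.zeros(3)
    m["abc".index(axis)] = 1.0
    return 1.0 - np.dot(m, q_hat) ** 2

for hkl in [(1, 0, 0), (1, 0, 1)]:
    before = perp_fraction(hkl, "b")              # zero field: moments along b
    after = perp_fraction(hkl, "a")               # flopped crystallites: moments along a
    drop = (before - after) / (3.0 * before)      # only ~1/3 of crystallites participate
    print(f"{hkl}: expected intensity reduction ~ {100 * drop:.0f}%")
```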
Figure 10: The refined magnetic structure of CsO\({}_{2}\), with black arrows indicating the moment direction on the O\({}_{2}^{-}\) dumbbells, the oxygen ions shown as red spheres, connected by a red cylinder to indicate the molecular bond, and the Cs ions are shown as green spheres.
Figure 9: Data collected in the low angle bank on WISH at \(1.5\,\)K (black points), together with the final refinement (solid red line), difference curve (solid blue line). The lower ticks (blue) are for the antiferromagnetic structure and the upper ticks (green) are for the nuclear structure. The inset shows data focused on the region where the \((0,0,1)\) peak is to be found. The lines correspond to simulations of the scattering with different values of \(\mu_{x}\), with all other parameters from the refinement fixed. This shows that the intensity of the \((0,0,1)\) peak is rather sensitive to the value of \(\mu_{x}\).
## IV Discussion and conclusion
We have determined the low temperature crystal structure of CsO\({}_{2}\), finding it to be described by the orthorhombic space group \(Pnam\) (or equally well by the lower symmetry orthorhombic space group \(Pna2_{1}\)) in which the \(a\) axis is approximately double the length of the \(b\) axis. Earlier work had indicated a lowering of symmetry on cooling from room temperature, for example via the appearance of extra peaks in the Raman spectrum that could not be accounted for in a high symmetry setting [20] or due to the observation of the weak \((\frac{1}{2},0,0)\) peaks (indexed in the high temperature tetragonal unit cell) at low temperature in x-ray diffraction measurements [15]. Furthermore, DFT calculations have predicted an enlarged (doubled) unit cell compared to the high temperature tetragonal phase [20]. However, we note that neither the Raman nor the x-ray data were used to determine the detailed crystal structure as we have done here. In addition, the DFT calculations were correct in predicting a doubled unit cell, but suggested that the doubling occurs along more than one crystallographic axis, which we did not observe.
Concerning the details of the low temperature crystal structure we determined, we see that the doubling of the unit cell along \(a\) is predominantly due to a zig-zag displacement ('puckering') of the Cs ions, with a smaller zig-zag displacement of the centers of mass of the O\({}_{2}^{-}\) dumbbells. The staggered displacement of Cs ions was foreseen by DFT [20]. On the other hand, the same DFT also predicted a tilting relative to the \(c\)-axis of the O\({}_{2}^{-}\) dumbbells, a scenario with which our data are inconsistent.
Looking at the refined low temperature commensurate structure we can compare it to the incommensurate structure visible for \(80\lesssim T\lesssim 200\) K via the amplitudes of the displacements of the Cs ions and O\({}_{2}^{-}\) dumbbells respectively. The amplitude of the sinusoidal modulation of Cs ion positions in the incommensurate phase was 0.0564(9) lattice units, whereas the amplitude of the modulation, given by the deviation of the Cs ion position along the \(b\)-axis relative to the ideal coordinate of \(y=0.75\) in the commensurate phase, is 0.0358(1) lattice units. These values are rather comparable. The amplitude of the O\({}_{2}^{-}\) dumbbell displacement was 0.0090(7) lattice units in the incommensurate phase, whereas in the commensurate phase it is 0.0028(4) lattice units, which is somewhat different.
We see in fig. 2 that on cooling below the transition from a tetragonal phase at \(T_{\rm S1}=192\) K the \(a\) lattice parameter of the average structure shrinks significantly more rapidly down to \(T_{\rm S2}=71.5\) K, reducing by \(\sim 1.2\%\) compared to the \(b\) lattice parameter which reduces by just 0.34% and the \(c\) lattice parameter which barely changes (\(<0.1\%\) change). At \(T_{\rm S2}\) the \(a\) lattice parameter (when indexed using the \(Immm\) space group) suddenly increases. A possible explanation for this is that the puckering of the Cs ions in the incommensurate structure at comparatively high temperature, when the oxygen molecules' orientation is fluctuating significantly, results in strain on the lattice due to under-bonding. As the fluctuations decrease on cooling eventually the Cs ions and oxygen molecules can bond, leading to a lock-in to the commensurate structure, a decrease in lattice strain and a corresponding slight increase in \(a\).
It was already noted that a staggered structure would likely have an impact on the strength of the superexchange interaction between the \(s=1/2\) units, due to differing orbital overlap between the oxygens and the cesium ions [21]. NMR data modelled under the assumption of staggered tilts of the O\({}_{2}^{-}\) dumbbells indicated the formation of antiferromagnetic 1d spin chains in CsO\({}_{2}\). The temperature dependence of the magnetic exchange parameters determined from the NMR was understood in relation to the assumed staggered dumbbell tilts together with a model based on changing orbital overlap as a result of librations of the dumbbells. Our findings show that in fact the dumbbells are hardly tilted at all; rather, the cesium ion positions are staggered. One might anticipate that the overall effect could be similar, at least as far as superexchange is concerned. It would be interesting to re-model the NMR data based on the crystal structure we have determined.
We note that there is some discussion in the literature of orbital ordering in superoxides [21; 22; 26; 27; 28], which has an impact on the aforementioned spin chain formation. Orbital order is frequently accompanied by structural distortion, as in the cooperative Jahn-Teller effect [46]. In the tetragonal \(I4/mmm\) structure of CsO\({}_{2}\), one can take the \(2b(0,0,1/2)\) Wyckoff position with the \(4/mmm\) site symmetry as the place where the
Figure 11: Data collected on E2 at \(T=1.7\) K in applied magnetic fields between 0 T and 6.5 T. A sharp drop in peak intensity of the (1,0,0) and (1,0,1) magnetic Bragg reflections is observed for \(2<B<4\) T, most likely arising from a spin flop transition.
\(\mathrm{O}_{2}^{-}\) molecules reside. The degenerate \(\pi^{*}\) molecular orbitals of \(\mathrm{O}_{2}^{-}\) form basis of two-dimensional \(E_{u}\) site symmetry irrep, which has non-zero subduction frequency in the \(\Sigma_{4}\) space group irrep. This implies that local distortions that belong to the site symmetry irrep \(E_{u}\) can induce global distortions that belong to the space group irrep \(\Sigma_{4}\). In other words, the structural distortions obtained in the present work are compatible with the orbital ordering scenario. The ground state orthorhombic structure is consistent with the orbital pattern proposed by Riyadi et al. [20] although no additional doubling along the \(b-\) and \(c-\)axes suggested by the authors, due to tilting of the oxygen dimers, was detected in our diffraction data. The incommensurate phase implies partial occupancies of the \(\pi_{x}^{*}\) and \(\pi_{y}^{*}\) orbitals, modulated upon propagation through the crystal (i.e. an orbital density wave). A similar phenomenon takes place in some perovskite manganites containing octahedrally coordinated \(\mathrm{Mn}^{3+}\) cations with electronic degeneracy. The modulated orbital states in these systems vary from achiral density waves [47] to chiral orbital helices [48].
Considering now the magnetic structure, we have confirmed that, as long anticipated, \(\mathrm{CsO}_{2}\) is an antiferromagnet. We note that until now the details of the magnetic structure had remained unknown, with the only proposal being that of ferromagnetism within the \(ab\)-plane with an antiferromagnetic stacking between planes [43], which was formed on the basis of analogy with \(\mathrm{KO}_{2}\)[17] rather than a direct measurement. So, our data clarify this point at last, revealing ferromagnetism in the \(bc\)-plane and antiferromagnetism along \(a\). The total magnetic moment determined, \(0.514\mu_{\mathrm{B}}\), is reduced compared to the ideal spin only value, possibly due to fluctuations and/or reduced dimensionality [21].
The presence of a small component of the magnetic moment along the \(a\) axis, as well as the larger component along \(b\), is significant. These orthogonal spin components are transformed by the same irrep of the paramagnetic \(Pnma\) space group (table 6), implying bi-linear coupling between them. This is a typical case of antisymmetric Dzyaloshinskii-Moriya exchange underpinning the coupling at the microscopic level [49]. Let us denote the corresponding magnetic order parameters as \(\delta\) and \(\xi\), respectively. The bi-linear invariant, \(\delta\xi\), in principle also involves single-ion terms; however, these are not relevant here due to the \(S=1/2\) nature of the interacting spins. The \(\delta\) and \(\xi\) order parameters are transformed by distinct irreps (\(mM_{5}^{+}\) and \(m\Sigma_{3}\)) of the tetragonal \(I4/mmm\) space group (table 3), indicating that they would be decoupled if no structural distortion were present. Decomposition of the experimentally determined ground-state crystal structure of \(\mathrm{CsO}_{2}\) with respect to symmetrised displacive modes of the tetragonal \(I4/mmm\) structure reveals the presence of \(\Gamma_{1}^{+}(k=0),\Gamma_{2}^{+}(k=0),M_{5}^{-}(k=1,1,1)\) and \(\Sigma_{4}(k=1/2,0,0)\) modes. The latter has the largest amplitude (of \(0.25\AA\)), as expected for the primary order parameter. Further analysis of the allowed free-energy terms reveals that the \(\Sigma_{4}(\eta_{1},\eta_{1}^{*},\eta_{2}^{*},\eta_{2})\) order parameter forms a trilinear invariant with \(mM_{5}^{+}(\delta_{1},\delta_{2})\) and \(m\Sigma_{3}(\xi_{1},\xi_{1}^{*},\xi_{2}^{*},\xi_{2})\):
\[\delta_{1}\eta_{1}\xi_{1}^{*}+\delta_{1}\eta_{1}^{*}\xi_{1}-\delta_{1}\eta_{2}\xi_{2}^{*}-\delta_{1}\eta_{2}^{*}\xi_{2}-\delta_{2}\eta_{1}\xi_{1}^{*}-\delta_{2}\eta_{1}^{*}\xi_{1}-\delta_{2}\eta_{2}\xi_{2}^{*}-\delta_{2}\eta_{2}^{*}\xi_{2}\]
indicating that this structural distortion is responsible for the coupling of the orthogonal magnetic modes in the ground state orthorhombic structure. The term is reduced down to \(\delta_{1}\eta_{1}\xi_{1}^{*}+\delta_{1}\eta_{1}^{*}\xi_{1}-\delta_{2}\eta_{1} \xi_{1}^{*}-\delta_{2}\eta_{1}^{*}\xi_{1}\) for the case of single arm of the \(\Sigma\)-star, as observed experimentally. The coupling is fully optimised when the \(mM_{5}^{+}(\delta_{1},\delta_{2})\) order parameter takes the \(\delta_{1}=-\delta_{2}\) direction (\(\delta_{1},-\delta_{2}\)). This direction represents the symmetry of the experimentally observed secondary antiferromagnetic mode with the components of the spins along the orthorhombic \(a\)-axis.
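The reduction of this trilinear term to a single arm of the \(\Sigma\)-star, and the optimal \(\delta_{1}=-\delta_{2}\) direction, can also be checked symbolically. The short sketch below is our own illustrative check rather than part of the original analysis; the starred (complex-conjugate) components are treated as independent symbols with a trailing `c`.

```python
# Symbolic check: restrict the trilinear invariant to a single Sigma arm
# (eta2 = eta2* = 0), then take the delta1 = -delta2 direction of mM5+.
import sympy as sp

d1, d2, e1, e1c, e2, e2c, x1, x1c, x2, x2c = sp.symbols(
    "delta1 delta2 eta1 eta1c eta2 eta2c xi1 xi1c xi2 xi2c")

# Full trilinear invariant as written above.
F = (d1*e1*x1c + d1*e1c*x1 - d1*e2*x2c - d1*e2c*x2
     - d2*e1*x1c - d2*e1c*x1 - d2*e2*x2c - d2*e2c*x2)

# Single arm of the Sigma star: the eta2 components vanish.
F_single_arm = sp.expand(F.subs({e2: 0, e2c: 0}))
print(F_single_arm)
# delta1*eta1*xi1c + delta1*eta1c*xi1 - delta2*eta1*xi1c - delta2*eta1c*xi1

# Coupling is fully optimised along the (delta, -delta) direction of mM5+.
d = sp.symbols("delta")
print(sp.factor(F_single_arm.subs({d1: d, d2: -d})))
# 2*delta*(eta1*xi1c + eta1c*xi1)
```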
In conclusion, we have used neutron diffraction to perform a comprehensive study of the crystal and magnetic structure of \(\mathrm{CsO}_{2}\). We find that an incommensurate crystal structure appears seemingly simultaneously with a transition from a tetragonal to an orthorhombic structure at \(192\,\mathrm{K}\). This incommensurate structure, which is modulated along \(a\), is composed of displacements of the cesium ions along the \(b\)-axis and of smaller displacements of the \(\mathrm{O}_{2}^{-}\) dimers' entire centre of mass. On cooling further below \(72\,\mathrm{K}\) the modulated structure locks into a commensurate wavevector of \((\frac{1}{2},0,0)\), doubling the unit cell compared to the previously supposed orthorhombic crystal structure. Hints of a structure bearing some similarity to this had previously been suggested from DFT calculations. However, in both calculations and in other superoxides it was found that the \(\mathrm{O}_{2}^{-}\) dimers tilt with respect to the \(c\)-axis, which is different to what is seen here. Magnetic order sets in below \(\sim 10\,\mathrm{K}\) and as previously supposed this order is antiferromagnetic. The spins of the \(\mathrm{O}_{2}^{-}\) dimers modulate along the \(a\)-axis and point predominantly along \(b\), albeit with a small component along \(a\). Thus, \(\mathrm{CsO}_{2}\) is an interesting example of a structure-properties relationship. The large structural distortion, associated with the staggered cesium and oxygen displacements, activates antisymmetric exchange and couples the orthogonal magnetic modes directly observed in the neutron diffraction experiment. For applied magnetic field \(2<B<4\,\mathrm{T}\) we see changes in magnetic Bragg peak intensity that are consistent with a spin flop transition in which the magnetic moments reorient to point along the \(a\)-axis.
Note: just before the completion of this manuscript we became aware of another work, ref. [50], in which neutron powder diffraction was used to examine the magnetic structure of \(\mathrm{CsO}_{2}\). The authors also discuss the structural aspects of the order. The main difference between the present work and ref. [50] is in how the structural distortions are combined with the magnetic order. Taking the tetragonal \(I4/mmm\) space group as the common parent symmetry, in ref. [50] the suggested structural distortions and the magnetic order double the \(a\)- and \(b\)-axes, respectively. In the ground state found in our work, both
the primary structural distortions and the spin ordering double the same tetragonal axis (the \(a\)-axis in our setting). In addition, the measurements presented in ref. [50] were insensitive to the small \(a\)-axis component of the magnetic moment that we found.
###### Acknowledgements.
We are grateful to Bella Lake and Ryota Ono for insightful discussions. Experiments at the ISIS Neutron and Muon Source were supported by an Xpress beamtime allocation RB1990392 from the Science and Technology Facilities Council. Raw data are available online [51].
|
2303.11082 | Evaluating Language Models for Knowledge Base Completion | Structured knowledge bases (KBs) are a foundation of many intelligent
applications, yet are notoriously incomplete. Language models (LMs) have
recently been proposed for unsupervised knowledge base completion (KBC), yet,
despite encouraging initial results, questions regarding their suitability
remain open. Existing evaluations often fall short because they only evaluate
on popular subjects, or sample already existing facts from KBs. In this work,
we introduce a novel, more challenging benchmark dataset, and a methodology
tailored for a realistic assessment of the KBC potential of LMs. For automated
assessment, we curate a dataset called WD-KNOWN, which provides an unbiased
random sample of Wikidata, containing over 3.9 million facts. In a second step,
we perform a human evaluation on predictions that are not yet in the KB, as
only this provides real insights into the added value over existing KBs. Our
key finding is that biases in dataset conception of previous benchmarks lead to
a systematic overestimate of LM performance for KBC. However, our results also
reveal strong areas of LMs. We could, for example, perform a significant
completion of Wikidata on the relations nativeLanguage, by a factor of ~21
(from 260k to 5.8M) at 82% precision, usedLanguage, by a factor of ~2.1 (from
2.1M to 6.6M) at 82% precision, and citizenOf by a factor of ~0.3 (from 4.2M to
5.3M) at 90% precision. Moreover, we find that LMs possess surprisingly strong
generalization capabilities: even on relations where most facts were not
directly observed in LM training, prediction quality can be high. | Blerta Veseli, Sneha Singhania, Simon Razniewski, Gerhard Weikum | 2023-03-20T13:14:59Z | http://arxiv.org/abs/2303.11082v1 | # Evaluating Language Models
###### Abstract
Structured knowledge bases (KBs) are a foundation of many intelligent applications, yet are notoriously incomplete. Language models (LMs) have recently been proposed for unsupervised knowledge base completion (KBC), yet, despite encouraging initial results, questions regarding their suitability remain open. Existing evaluations often fall short because they only evaluate on popular subjects, or sample already existing facts from KBs. In this work, we introduce a novel, more challenging benchmark dataset, and a methodology tailored for a realistic assessment of the KBC potential of LMs. For automated assessment, we curate a dataset called WD-Known, which provides an unbiased random sample of Wikidata, containing over 3.9 million facts. In a second step, we perform a human evaluation on predictions that are not yet in the KB, as only this provides real insights into the added value over existing KBs. Our key finding is that biases in dataset conception of previous benchmarks lead to a systematic overestimate of LM performance for KBC. However, our results also reveal strong areas of LMs. We could, for example, perform a significant completion of Wikidata on the relations nativeLanguage, by a factor of \(\sim 21\) (from 260k to 5.8M) at 82% precision, usedLanguage, by a factor of \(\sim 2.1\) (from 2.1M to 6.6M) at 82% precision, and citizenOf by a factor of \(\sim 0.3\) (from 4.2M to 5.3M) at 90% precision. Moreover, we find that LMs possess surprisingly strong generalization capabilities: even on relations where most facts were not directly observed in LM training, prediction quality can be high. We open-source the benchmark dataset and code.1
Footnote 1: [https://github.com/bveseli/LMsForKBC](https://github.com/bveseli/LMsForKBC)
## 1 Introduction
Structured knowledge bases (KBs) like Wikidata [26], DBpedia [1], and Yago [25] are backbones of the semantic web and are employed in many knowledge-centric applications like search, question answering and dialogue. Constructing these KBs at high quality and scale is a long-standing research challenge and multiple knowledge base construction benchmarks exist, e.g., FB15k [2], CoDEx [21], and LM-KBC22 [24]. Text-extraction, knowledge graph embeddings, and LM-based knowledge extraction have continuously moved scores upwards on these tasks, and leaderboard portals like Paperswithcode2 provide evidence to that.
Recently, pre-trained language models have been proposed as a promising source for structured knowledge. Starting from the seminal LAMA paper [17], a number of works have explored how to better probe, train, or fine-tune these LMs [12]. Nonetheless, we observe a certain divide between these late-breaking investigations and practical knowledge base completion (KBC). While the recent LM-based approaches often focus on attractive and clean methodologies that produce fast results, practical KBC is a highly precision-oriented, extremely laborious process, involving a very high degree of manual labor, either for manually creating statements [26], or for building comprehensive scraping, cleaning, validation, and normalization pipelines [1, 25]. We believe previous works fall short in three aspects:
1. _Focus on high precision:_ On the KB side, part of Yago's success stems from its validated \(>\)95% accuracy, and the Google Knowledge Vault was not deployed into production, partly because it did not achieve 99% accuracy [27]. Yet, previous LM analyses balance precision and recall or report precision/hits@k values, implicitly tuning systems towards balanced recall scores resulting in impractical precision.
2. _Evaluation of completion potential:_ Existing benchmarks often sample from popular subjects. This is useful for system comparison, but not for KBC. E.g., predicting capitals of countries with 99% accuracy does not imply practical value: they are already captured in established KBs like Wikidata.
3. _Prediction of missing facts:_ Existing works test LMs on facts already stored in KBs, which does not provide us with a realistic assessment for completion. For KBC we need to predict objects for subject-relation pairs, previously not known to the KB.
It is also important to keep in mind the scale of KBs: Wikidata, for instance, currently contains around 100 Million entities, and 1.2B statements. The cost of producing such KBs is massive, one estimate from 2018 sets the cost per triple at 2 USD for manually curated triples[15], and 1 ct for automatically extracted ones.5 Thus, even small additions in relative terms, might correspond to massive gains in absolute numbers. For example, even by the lower estimate of 1 ct/triple, adding one triple to just 1% of Wikidata humans, would come at a cost of 100k USD.
Footnote 5: Wikidata might broadly fall in between, as its aim is human-curated quality, but major portions are imported semi-automatically from other sources.
In this paper, _we conduct a systematic analysis of LMs for KBC_, where we focus on _high precision ranges_ (90%). We evaluate, first, by using a new benchmark dataset, WD-Known, for which we randomly sample facts from Wikidata (including many without any structured information, Wikipedia information, or English labels), and second, by a manual evaluation of predictions for subject-relation pairs that do not yet have object values.
Technically, we focus on the BERT language model [7], and the Wikidata knowledge base. Although BERT has been superseded by newer LMs, its popularity is still matched only by the closed source GPT-3 and chatGPT models.
Wikidata is by far the most prominent and comprehensive public KB, so evaluations against it provide the strongest yardstick.
Our main results are as follows:
1. In actual KBC, LMs perform considerably worse than benchmarks like LAMA indicated, but still achieve strong results for language-related, socio-demographic relations (e.g., _nativeLanguage_).
2. Simple changes on out-of-the-box LMs, in particular, vocabulary expansion and prompt formulation, can significantly improve their ability to output high-precision knowledge.
3. Using LMs, Wikidata could be significantly expanded for three relations, nativeLanguage by a factor of \(\sim 21\) (from 260k to 5.8M) at precision 82%, citizenOf by a factor of \(\sim 0.3\) (from 4.2M to 5.3M) at 90% precision and usedLanguage by a factor of \(\sim 2.08\) (from 2.1M to 6.6M) at 82% precision.
## 2 Background and Related Work
KB construction and completionKnowledge base construction is a field with considerable history. One prominent approach is by human curation, as done e.g., in the seminal CYC project [11], and this is also the backbone of today's most prominent public KB, Wikidata [26]. Another popular paradigm is the extraction from semi-structured resources, as pursued in Yago and DBpedia [25, 1]. Extraction from free text has also been explored (e.g., NELL [4]). A popular paradigm has been embedding-based link prediction, e.g., via tensor factorization like Rescal [14], and KG embeddings like TransE [2].
An inherent design decision in KBC is the P/R trade-off - academic projects are often willing to trade the two freely (e.g., via F-1 scores), while production environments are often critically concerned with precision, e.g., Wikidata generally discouraging statistical inference6, and industrial players likely relying to a considerable degree on human editing and verification [27]. Although the P/R trade-off is in principle tunable via thresholds, the high-precision range is hardly investigated. For example, in all of Rescal, TransE, and LAMA, the main results focus on metrics like hits@k, MRR, or AUC, which provide no bounds on precision.
Footnote 6: There is often a terminological confusion here: Automated editing is omnipresent on Wikidata, but the bots performing them typically execute meticulously pre-defined edit and insertion tasks (e.g., based on other structured sources), not based on statistical inference.
LMs for KB constructionKnowledge extraction from language models provides fresh hope for the synergy of automated approaches and high-precision curated KBs. Knowledge-extraction from LMs provides remarkably straightforward access to very large text corpora: The basic idea by [17] is to just define one template per relation, then query the LM with subject-instantiated versions, and retain its top prediction(s). A range of follow-up works appeared, focusing, e.g.,
on investigating entities, improving updates, exploring storage limits, incorporating unique entity identifiers, and others [23, 18, 3, 20, 10, 16, 8, 19, 5]. Nonetheless, we observe the same gaps as above: The high-precision area, and real KBC, are underexplored.
Benchmarking KB completionKB completion (sometimes also referred to as link prediction) has a set of quasi-standard benchmarks. Here we review the important ones and outline why they do not help for the focus of our investigation.
Two of the most popular benchmarks, both introduced in [2], are **FB15k** and **WN18**. The former contains statements for 15k extremely popular entities from Freebase, entities that already in 2013, when KBs were small, had at least 100 statements. The latter contains 41k entities from WordNet, yet these are linguistic concepts where WordNet is already the authoritative reference, and the potential for completion is small. **DBP-5L** is a popular multilingual dataset of 14k entities, yet it is collected by iterative graph neighbourhood expansion, so by design is biased towards popular (well-connected) entities. **YAGO3-10** is another benchmark, that contains YAGO3 entities with at least 10 statements [6]. The recent **CoDEx** benchmark provides a much larger subset of Wikidata triples but again focuses on the more popular subjects, as even its hardest variant considers only entities with at least 5 statements [21]. The **LAMA** benchmark [16] is based on the T-REx dataset, which in turn restricts the scope of subjects to those having a Wikipedia page. **LAMA-UHN**[18] removes some surface correlations but does not remove the restriction to Wikipedia-known subjects. **LM-KBC22**[24] provides a curated mix of popular and long-tail subjects, but not a random sample, and only a small set of 12 relations. In summary, all these benchmarks provide non-random subject sets, and by taking ground truth from existing KBs, necessarily can not evaluate a method's real KB completion potential. The **PKGC** work [13] uses human evaluation to account for KB incompleteness, but also uses subjects from previous benchmarks focused on popular entities.
## 3 Method
### Knowledge Base Completion Tasks
Established KBs like Wikidata store a large number of facts of the form _(subject, relation, object)_. However, such KBs still suffer from incompleteness, which limits their applicability. KBC tries to counteract this incompleteness and describes the task of predicting missing facts for a KB. KBC is often split into subtasks, such as predicting the relation, subject, or object slots in triples. In the following, we focus on the most prominent object slot filling task, i.e. predicting an object for a subject-relation pair without an object so far. Identifying plausible subject-relation pairs is another important task, as not every combination qualifies, e.g., (Albert Einstein, hasCapital,- ) has no object.
We will refer to facts that are present in a KB as _existing facts_ and _existing-fact prediction_ describes predicting the object for a subject-relation pair for
which the object value already exists in the KB. Similarly, we refer to facts that are missing in a KB as _missing facts_ and _missing-fact prediction_ describes predicting the object for a subject-relation pair with no object value yet.
### Fact Prediction using Pre-trained Language Models
The slot filling ability of an LM, i.e. predicting "Paris" for a pair (France, hasCapital), is essential for KBC. This is done by querying an LM using cloze-style statements, a.k.a. prompts, like "The capital of France is [MASK]."[17]. The LM's goal is to predict the token at the masked position, i.e. the object.
We probe the masked language model BERT [7] and query for facts of different relations by using relation-specific predefined prompts. The prompts contain placeholders for subject and object, so that the input for the LM can be automatically created by replacing the subject placeholder with the actual subject and the object placeholder with [MASK]. For each cloze-style query like "The capital of France is [MASK].", BERT returns a probability vector over its vocabulary \([t_{1},..,t_{n}]\) (\(\sim\)29K tokens). From this vector, we select _top k_ predictions \([r_{1},..,r_{k}]\) with the highest probability as predictions for the object. We set \(k=10\).
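As an illustration of this probing step (a minimal sketch rather than the exact implementation; the model checkpoint and prompt are plausible placeholders consistent with the description above), the top-\(k\) predictions can be obtained with the HuggingFace `transformers` fill-mask pipeline:

```python
# Cloze-style probing of a masked LM: one prompt per fact, [MASK] at the
# object slot, keep the k = 10 most probable vocabulary tokens.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-large-cased")

prompt = "The capital of France is [MASK]."
predictions = fill_mask(prompt, top_k=10)

for p in predictions:
    # each entry carries the predicted token and its probability
    print(f"{p['token_str']:>12s}  {p['score']:.4f}")
```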
### Systematic Analysis Procedure
The goal of our analysis is to realistically assess the abilities of an LM for KBC. Therefore we perform a two-fold analysis by investigating 1) the existing-fact prediction ability of BERT in an automated evaluation and 2) the potential for KBC using LMs by predicting missing facts and evaluating the LM's predictions using human annotations.
The first analysis part includes the automated evaluation of existing facts in WD-Known compared to the LAMA-T-REx benchmark in order to get a realistic estimate of an LM's prediction ability. Based on this evaluation we will extract relation-specific prediction thresholds considering precision and recall trade-offs to enable KBC on high precision and reasonable recall.
Figure 1: To query the LM for an object slot, we convert triples into relation-specific prompts by masking the object, following [17]. The output is a probability vector over the model’s vocabulary. We retain the top \(k=10\) predictions.
In the second analysis part, we produce high accuracy predictions for missing facts in Wikidata given the previously extracted threshold and evaluate the model's predictions using human annotations. Given the human evaluation, we will provide estimations about the amount of addable new facts to Wikidata. In Figure 2 we show an overview of our systematic analysis.
## 4 Datasets
### LAMA-T-REx
The LAMA-T-REx dataset [17] is derived from T-REx dataset [9], which is a subset of Wikidata and provides alignment between Wikidata (WD) triples and Wikipedia texts. LAMA-T-REx consists of 41 relations with \(\sim\) 1000 facts per relation sampled from T-REx. Included with the dataset are relation-specific, manually defined prompts, which [17] also refers to as templates. We use these 41 relations and their corresponding templates throughout our work. The scope of subjects in LAMA-T-REx is restricted to having a Wikipedia page and contains little data as shown in Table 1. This makes it difficult to realistically assess LMs for KBs.
### WD-Known
To realistically assess the usability of LMs for KBC, we created a large-scale dataset with random facts from Wikidata, without subject restrictions to pages such as Wikipedia. One must observe that, while WD-Known is an unbiased sample from Wikidata's subject set per class, it is still biased towards reality, like Wikidata itself [22].
**Creation.** We extract facts from Wikidata for the same 41 relations as in LAMA-T-REx. We extract subject-relation pairs per relation and, because these pairs may have multiple valid objects (e.g. in N:M relations), we extract all associated valid objects along with the pairs. Otherwise, we would risk incomplete
Figure 2: Our systematic analysis is divided into two parts. First, we analyse existing-fact prediction by an automated evaluation computing the recall achieved at 90% precision (\(R@P90\)). The prediction at \(R@P90\) is used as a threshold in the second part of the analysis. We analyse the potential of KBC for missing facts via a manual evaluation.
ground truth data. This would falsify our evaluation, as an LM's prediction could not be recognized as correct when it predicts a valid object other than the extracted one. We sampled a maximum of \(100,000\) subject-relation pairs per relation along with all valid objects. If a relation contains fewer than \(100,000\) pairs, we extract all of them. The extracted facts consist of Wikidata-specific entity IDs, which we converted into natural language labels to allow probing an LM for facts. In contrast to [17], we do not remove multi-token objects such as "Ball Aerospace & Technologies": the inability to predict such objects is a genuine limitation of (some) LMs, and at the time KBC is performed it is unknown which objects are multi-token, since KBC includes predicting missing facts, i.e. facts without a ground truth object yet. This dataset feature enables testing LMs with multi-token prediction capability against uni-token prediction for KBC. Additionally, a large dataset like WD-Known enables fine-tuning for fact prediction. In Table 1 we report dataset statistics in comparison to LAMA-T-REx, showing the larger size of WD-Known. The dataset is available at [https://github.com/bveseli/LMsForKBC](https://github.com/bveseli/LMsForKBC).
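As a rough illustration of this extraction step (a simplified stand-in for the actual pipeline; the relation, the small LIMIT, and the user-agent string are our own choices), facts for a single relation can be sampled together with English labels from the public Wikidata SPARQL endpoint. Here we use Wikidata property P103 (native language) as the example relation:

```python
# Sample (subject, relation, object) facts with English labels for one
# relation from the Wikidata SPARQL endpoint.
from SPARQLWrapper import SPARQLWrapper, JSON

query = """
SELECT ?s ?sLabel ?o ?oLabel WHERE {
  ?s wdt:P103 ?o .                                    # P103 = native language
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 1000
"""

endpoint = SPARQLWrapper("https://query.wikidata.org/sparql",
                         agent="wd-known-example/0.1")
endpoint.setQuery(query)
endpoint.setReturnFormat(JSON)
rows = endpoint.query().convert()["results"]["bindings"]

triples = [(r["sLabel"]["value"], "nativeLanguage", r["oLabel"]["value"])
           for r in rows]
print(triples[:5])
```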
## 5 Potential for Existing-Fact Prediction
Existing-fact prediction describes the prediction of an object for a subject-relation pair for which the ground truth object already exists in a KB. We will analyse the prediction ability of BERT given existing facts from Wikidata in an automated evaluation, focusing on the recall achieved at 90% precision (\(R@P90\)).
### Metric
We use a rank-based metric and calculate the recall that our method achieves at 90% precision (\(R@P90\)). To compute \(R@P90\), we sort the predictions in descending order of their prediction probability, and threshold predictions when average precision drops below 90%. When determining which prediction is true or false, we have to consider that a subject-relation pair can be associated with multiple valid objects. Therefore, a prediction is true, when it is among the valid objects and false otherwise.
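One way to implement this metric (a minimal sketch reflecting the description above; the handling of multiple valid objects is folded into a per-prediction correctness flag) is a threshold sweep over the probability-sorted predictions:

```python
# R@P90: keep extending the prefix of probability-sorted predictions while
# its precision stays at or above 90%; report the recall of that prefix.
def recall_at_p90(predictions, num_gold, min_precision=0.9):
    """predictions: list of (probability, is_correct) pairs for one relation;
    num_gold: number of gold subject-relation pairs of that relation."""
    preds = sorted(predictions, key=lambda x: x[0], reverse=True)
    correct, recall = 0, 0.0
    for i, (_, is_correct) in enumerate(preds, start=1):
        correct += int(is_correct)
        if correct / i < min_precision:
            break                      # average precision dropped below 90%
        recall = correct / num_gold
    return recall

# toy example: only the two most confident predictions survive the cutoff
preds = [(0.99, True), (0.95, True), (0.80, False), (0.60, True)]
print(recall_at_p90(preds, num_gold=4))   # 0.5
```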
\begin{table}
\begin{tabular}{l l r r r r r} \hline \hline Dataset & & \#unique subjects & \#unique objects & \#triples & \#multi-token objects & object dist. entropy \\ \hline \multirow{2}{*}{LAMA-T-REx} & total & 31,479 & 61,85 & 34,039 & 0 & - \\ & average & 767 & 150 & 830 & 0 & 0.76 \\ \multirow{2}{*}{WD-Known} & total & **2,956,666** & **709,327** & **3,992,386** & **1,892,495** & - \\ & average & **72,113** & **17,300** & **97,375** & **46,158** & 0.67 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Our WD-Known dataset in comparison to LAMA-T-REx. We report the total number of distinct objects (#unique objects), distinct subjects (#unique subjects), and the number of triples (#triples) as well as the total number of objects consisting of more than one token (#multi-token objects), and the average object entropy.
### Baselines
We want to check if BERT's fact prediction ability goes at least beyond just predicting statistically common objects, e.g. English for spokenLanguage. Therefore, we try two distribution-based baselines: random and majority vote. We compute relation-specific object distributions based on a relation's ground truth data. In the case of majority vote, we assign the most probable object to each fact, and in the case of random, we assign a random object drawn from the distribution. Additionally, we compare BERT to a relation extraction (RE) baseline and use the _Rosette Text Analytics_7 relation extractor. Given a text snippet, the relation extractor can extract up to 17 relations. For the intersection of these 17 and our 41 relations, we subsample 200 facts per relation from WD-Known and align each fact with text. For text alignments, we consider two different source types, web texts and Wikipedia pages, to cover facts with Wikipedia-unknown and Wikipedia-known subjects. Per relation, we align 100 facts with web texts (the top 10 Google search results when searching for the subject of a fact) and 100 facts with Wikipedia pages (the summary of the subject's Wikipedia page). This yields 200 text-aligned facts for each of the relations headquarteredIn and citizenOf, 400 in total.
Footnote 7: [https://www.rosette.com/capability/relationship-extractor/#tech-specs](https://www.rosette.com/capability/relationship-extractor/#tech-specs)
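A minimal sketch of the two distribution-based baselines (illustrative only; the real evaluation uses the relation-specific ground truth distributions of WD-Known):

```python
# Majority vote: always predict the relation's most frequent object.
# Random: draw an object from the relation's empirical object distribution.
import random
from collections import Counter

def majority_baseline(train_objects, test_pairs):
    most_common = Counter(train_objects).most_common(1)[0][0]
    return {pair: most_common for pair in test_pairs}

def random_baseline(train_objects, test_pairs, seed=0):
    rng = random.Random(seed)
    return {pair: rng.choice(train_objects) for pair in test_pairs}

objs = ["English"] * 6 + ["French"] * 3 + ["German"]      # toy distribution
pairs = [("Marcus Adams", "spokenLanguage"), ("Ada Lovelace", "spokenLanguage")]
print(majority_baseline(objs, pairs))
print(random_baseline(objs, pairs))
```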
### Evaluation and Results
#### 5.3.1 Quantitative Analysis.
In Table 2 we report \(R@P90\) values achieved by BERT's predictions on our dataset WD-Known in comparison to _LAMA-T-REx_. On WD-Known the LM achieves significantly lower values (marked in bold), suggesting that WD-Known provides a more realistic assessment of BERT's fact prediction ability. Only the relations nativeLanguage, spokenLanguage, and officialLanguage show a smaller decrease and therefore stable results.
To investigate why BERT achieves low results, we perform a correlation analysis computing the Pearson correlation coefficient over \(R@P90\) in Figure 3. We notice a negative correlation between \(R@P90\) and the object distribution entropy, meaning that a more uniform distribution makes the predictions harder. Furthermore, the number of unique objects is also negatively correlated with BERT's performance, i.e. fewer possible objects benefit the performance. The
Figure 3: Pearson Correlation Analysis
single valuedness, i.e. the relation-wise proportion of subject-relation pairs with only one object, shows a low but positive correlation. This indicates the performance is better on N:1 or 1:1 relations confirming the observation in [17]. The performance is also positively correlated with the number of objects in BERT's vocabulary, i.e. the more objects of a relation are covered by the LM's vocabulary, the better the performance. The vocabulary acts as a bottleneck, preventing the model from predicting facts.
Looking at the baselines in Table 3, we see that the majority baseline is quite solid with an average precision of 0.18. Access to the underlying distribution of relation-specific ground truth data has a noticeable impact on assigning objects to a subject-relation pair. Still, BERT achieving an average precision in a higher range (\(>75\%\)) shows that the prediction ability is based on more than predicting statistically common objects. The RE baseline is outperformed by BERT for two tested relations, while BERT and RE show lower results on facts aligned with webtexts than on facts aligned with Wikipedia pages.
#### 4.2.3 Qualitative Analysis.
Since we are interested in high precision for KB completion and quantitative analysis showed low R@P90 values for most of the relations, we need to increase BERT's performance. We first perform a qualitative analysis of issues. Since analysing all 41 relations qualitatively is not feasible, we select a representative subset of relations. The subset is diverse regarding, e.g. semantics (e.g. language, demographic, goods), entity type (human, company, places), performance (lower vs. higher scores), and possible objects (all
\begin{table}
\begin{tabular}{l|l|l||l|l|l} \hline \hline & WD-Kwowns & LAMA-T-REx & WD-Kwowns & LAMA-T-REx \\ \cline{2-5} Relation & \(R@P90\) & \(R@P90\) & Relation & \(R@P90\) & \(R@P90\) \\ \hline nativeLanguage & **0.61** & 0.68 & foundedIn & 0.00009 & 0.001 \\ spokenLanguage & **0.26** & 0.33 & deathPlace & 0.00009 & 0 \\ officialLanguage & **0.25** & 0.37 & namedAfter & 0.00008 & 0 \\ headquarteredIn & **0.04** & 0.52 & partOf & 0.00006 & 0 \\ developedBy & **0.04** & 0.64 & twinTown & 0.00003 & 0.001 \\ producedBy & **0.03** & 0.86 & sharesBorders & 0.00001 & 0.001 \\ countryOfJurisdiction & **0.02** & 0.66 & fieldOfWork & 0 & 0 \\ hasCapital & **0.01** & 0.60 & employedBy & 0 & 0.002 \\ locatedIn & **0.008** & 0.20 & hasReligion & 0 & 0 \\ bornIn & 0.006 & 0.009 & playerPosition & 0 & 0 \\ isCapital & **0.006** & 0.81 & subclassOf & 0 & 0.01 \\ CountryOfOrigin & 0.005 & 0.08 & holdsPosition & 0 & 0 \\ isA & 0.004 & 0.06 & diplomaticRelation & 0 & 0 \\ LanguageOfFilm & 0.003 & 0 & citizenOf & 0 & 0 \\ ownedBy & **0.0008** & 0.16 & consistsOf & 0 & 0 \\ hostCountry & 0.0006 & 0.002 & musicGenre & 0 & 0 \\ originalBroadcaster & 0.004 & 0.02 & musicLabel & 0 & 0.02 \\ inTerrioryOf & 0.0002 & 0.01 & playsInstrument & 0 & 0.003 \\ writtenIn & **0.0001** & 0.15 & hasProfession & 0 & 0 \\ locationOfWork & 0.0001 & 0 & inContinent & 0 & 0.004 \\ memberOf & **0.0001** & 0.52 & & & \\ \hline \hline \end{tabular}
\end{table}
Table 2: BERT’s performance on the datasets WD-Known and LAMA-T-REx for the same 41 relations. Boldface marks significantly lower values, indicating an overestimation of BERT’s fact prediction ability on LAMA.
languages vs. cities worldwide). The chosen relations are shown in Table 4. We aim to identify and eliminate or at least reduce systematic prediction errors for those relations. Common error categories are: 1) predicting the wrong hierarchy, i.e. country instead of a city; 2) predicting the wrong category, i.e. country instead of nationality and 3) ambiguous prompts, i.e. predicting "local" for "Albert Einstein is a [MASK] citizen". Such cases falsify the evaluation, since a distinction between actually true predictions ("German" or "Germany") and actually false predictions ("local" or "German") is not made. These errors are rooted in the conceptual discrepancy between LMs and KBs. While querying a KB is clearly defined and leads to unambiguous answers, an LM is queried by natural language prompts, which are as ambiguous as natural language itself. To use LMs for KBs we need to translate between them. Therefore we focus on three main model parts: input, model, and output: 1) input optimization by adjusting prompts; 2) model optimization by fine-tuning, and 3) output adjustment by converting ground truth and prediction into the same category. Input and model optimizations are done on relation-wise data splits for training and test \((80-20)\), where we split based on subject-relation pairs avoiding having the same subject in training and test.
_Input Optimization. AutoPrompt_[23] introduced an automatic way of prompt generation given a specific task, such as fact retrieval. We generate our prompts as suggested by [23] on relation-specific training splits of WD-Known. In Table 4 the input optimization with AutoPrompts achieves notable improvements for \(R@P90\), e.g. hasReligion increased from 0 to 0.21, citizenOf from 0 to 0.15. There is no deterioration in any of the relations.
_Model Optimization._ Fine-tuning not only allows us to optimize an LM on the searched output but also enables adding new words to the model's vocabulary as this has been shown to be a bottleneck for fact prediction. We fine-tune relation-specific LMs and for vocabulary extension, we add a \(max.\) of 2000 objects per relation. For fine-tuning the masked language model objective [7] is used as well as our training splits. We show results on two setups: 1) fine-tuning \(BERT_{large}\), denoted \(FT\), and 2) fine-tuning \(BERT_{large}\) and with extended vocabulary, de
\begin{table}
\begin{tabular}{l c c c||l c c c c c} \hline \hline \multicolumn{3}{c}{Distribution vs. BERT} & \multicolumn{6}{c}{RE vs. BERT} \\ \hline & & & & & \multicolumn{4}{c}{Wikipedia} & \multicolumn{3}{c}{Google search} \\ \cline{5-10} & \(\overline{P}\) & \(\overline{R}\) & \(F1\) & & \(\overline{P}\) & \(\overline{R}\) & \(F1\) & \(\overline{P}\) & \(\overline{R}\) & \(F1\) \\ \hline Random & 0.09 & 0.05 & 0.06 & Rosette & 0.32 & 0.05 & 0.08 & 0.21 & 0.01 & 0.02 \\ Majority & 0.18 & 0.09 & 0.03 & BERT @P75 & 0.87 & 0.31 & 0.45 & 0.37 & 0.28 & 0.31 \\ BERT @P75 & 0.75 & 0.43 & 0.54 & BERT @P90 & 0.95 & 0.22 & 0.17 & 0.45 & 0.23 & 0.30 \\ BERT @P90 & 0.90 & 0.35 & 0.48 & & & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 3: Random and Majority Baselines (left) and Relation Extraction (RE) Baseline for citizenOf and headquarteredIn (right). The RE Baseline is done on two datasets: 1) a (Wikipedia) dataset, where the triples are aligned with the Wikipedia summaries extracted from the subject’s Wikipedia pages, and 2) a (Google search) dataset, where the triples are aligned with texts from the top 10 Google search results after searching for the subject of the respective triple. Scores were computed on a subset of WD-Known with text alignments as described in 5.2.
noted \(FT_{vocab}\). In Table 4 the model optimization shows the biggest improvements. Only developedBy, producedBy, headquarteredIn deteriorate at first in the \(FT\) setup, but producedBy and developedBy improve significantly in the \(FT_{vocab}\) setup. We found that after fine-tuning the model predicts substrings that match the ground truth object, e.g. in relation producedBy "Len" is predicted if the ground truth object is "Lenovo" or "Xiao" for "Xiaomi". The same happens in relation developedBy, e.g. "Core" for "Corel", and in relation headquarteredIn e.g. "Wind" for the city "Windhoek". After vocabulary expansion previously missing tokens like "Xiaomi" can now be fully predicted, so that \(R@P90\) can increase in the \(FT_{vocab}\) setup. It is worth noting that producedBy and developedBy could only be improved significantly by expanding the vocabulary during fine-tuning. Only for headquarteredIn the precision does not improve. While headquarteredIn does degrade in precision, fine-tuning with an extended vocabulary increases the overall quality significantly (see Table 4, AVG \(R@P90\)).
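A sketch of the \(FT_{vocab}\) setup is given below. It is illustrative rather than the exact training code (the added object strings, the filled-in prompts, and all hyperparameters are placeholders), but the two key steps, extending the tokenizer vocabulary with relation-specific objects and resizing the embedding matrix before masked-LM fine-tuning, are the ones described above.

```python
# FT_vocab sketch: add relation-specific object tokens, resize embeddings,
# and fine-tune with the masked-LM objective on filled-in prompts.
from transformers import (BertForMaskedLM, BertTokenizerFast,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = BertTokenizerFast.from_pretrained("bert-large-cased")
model = BertForMaskedLM.from_pretrained("bert-large-cased")

new_objects = ["Xiaomi", "Lenovo", "Windhoek"]     # objects missing from the vocab
tokenizer.add_tokens(new_objects)
model.resize_token_embeddings(len(tokenizer))      # make them predictable

prompts = ["Mi 11 is produced by Xiaomi.",         # filled-in templates
           "ThinkPad X1 is produced by Lenovo."]
encodings = tokenizer(prompts, truncation=True, padding=True)

class PromptDataset:                               # minimal torch-style dataset
    def __init__(self, enc): self.enc = enc
    def __len__(self): return len(self.enc["input_ids"])
    def __getitem__(self, i): return {k: v[i] for k, v in self.enc.items()}

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True,
                                            mlm_probability=0.15)
trainer = Trainer(model=model,
                  args=TrainingArguments(output_dir="ft_vocab",
                                         num_train_epochs=1,
                                         per_device_train_batch_size=8),
                  data_collator=collator,
                  train_dataset=PromptDataset(encodings))
trainer.train()
```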
_Output Adjustment._ To map predictions onto the ground truth, so that a prediction "French" is correctly recognized for the object "France", we use manually crafted dictionaries. Methods like string matching can lead to incorrect mappings and sometimes do not work at all for examples like the one shown. Therefore, we use two dictionaries, one mapping nationalities to countries and one mapping religions to religious affiliations. A prediction is counted as true if it belongs to the same entry in the relation-specific dictionary as the ground truth object and false otherwise. The creation of such mapping dictionaries involves tremendous manual labor, contradicting our search for automated KBC. Therefore, we evaluate only two relations, hasReligion and citizenOf, as these were also the most affected by the second error category mentioned above, so that the output adjustment might have the greatest effect here. We find that automated fine-tuning significantly outperforms this approach.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & & \multicolumn{2}{c}{Input} & \multicolumn{2}{c}{Output} \\ & Base & Opt. & \multicolumn{2}{c}{Model Opt.} & Adjustment \\ \cline{2-5} & & & & & Manual \\ Relation & Pre-Trained & AutoPrompt & \(FT\) & \(FT_{vocab}\) & Mapping \\ \hline nativeLanguage & 0.62 & 0.66 & 0.79 & **0.79** & - \\ hasReligion & 0 & 0.21 & 0.13 & **0.27** & 0.02 \\ citizenOf & 0 & 0.15 & 0.23 & **0.23** & 0.01 \\ producedBy & 0.03 & 0.03 & 0 & **0.15** & - \\ developedBy & 0.04 & 0.04 & 0.0004 & **0.11** & - \\ headquarteredIn & **0.04** & 0.04 & 0 & 0 & - \\ spokenLanguage & 0.26 & 0.42 & **0.51** & 0.5 & - \\ LanguageOfFilm & 0.003 & 0.04 & **0.29** & 0.27 & - \\ \hline AVG \(R@P90\) & 0.12 & 0.20 & 0.24 & **0.29** & 0.015 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Table shows \(R@P90\) values. Improvement approaches to maximize BERT’s performance on specific relations in comparison to pre-trained BERT-large (case-sensitive). Improvements were tested at three key points in the LM: _input, model, output_. Scores were computed on the test split of WD-Known.
**Summary.** We found that using biased datasets leads to an overestimation of an LM's performance for fact prediction. To neither over- nor underestimate BERT's performance, we tested it on the large dataset WD-Known and implemented improvements to increase BERT's performance, so that KBC can later be performed with high precision. We have seen that the model's vocabulary is a bottleneck for its performance. When fine-tuning with vocabulary extension, the model's fact prediction performance can be improved significantly.
## 6 Potential for Knowledge Base Completion
In the following, we will obtain plausible missing facts from Wikidata, produce high-accuracy predictions with respect to \(R@P90\), and let annotators from Amazon Mechanical Turk (mturk) manually evaluate the predictions on a five-value scale: _true, plausible, unknown, implausible, false_. The relations in focus are the same as in Table 4, except for hasReligion, for which information on the web proved too sparse.
### Human Evaluation
**Obtaining missing facts.** The key to KBC with LMs is the ability to predict _missing facts_. Directly extracting empty subject-relation pairs from Wikidata is not possible, since Wikidata consists of existing facts \((s,r,o)\). Also, randomly sampling an arbitrary subject to combine it with any relation would run the risk of violating our condition of plausible missing facts, where an object for a subject-relation pair exists (e.g. an implausible pair like (Albert Einstein, hasCapital,\(\cdot\)) has no object). Therefore, we will only sample subject-relation pairs, where the subject has a relation-compatible entity type. Thus, we compute relation-wise the most frequent subject entity type within a relation. When sampling subject-relation pairs with missing objects, the subject is conditioned on having the most frequent entity type. This ensures extracting plausible missing facts. We randomly sample \(10,000\) missing facts per relation, \(70,000\) missing facts in total.
#### 6.1.1 High Accuracy Predictions.
To provide human annotators with reasonable predictions, we set a relation-specific prediction threshold to ensure the prediction quality and use the best possible model for predictions based on results in Table 4. Given these best models, the threshold is the prediction probability at \(R@P90\) of each relation to respect the relation-specific precision and recall trade-offs. We keep only those missing facts, i.e. subject-relation pairs, whose predictions have a probability over the threshold. These are our _high accuracy predictions_.
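A minimal sketch of this filtering step (the threshold values shown are illustrative, not the ones used in the experiments):

```python
# Keep a missing-fact prediction only if its probability clears the
# relation-specific threshold found at the R@P90 operating point.
def filter_high_accuracy(predictions, thresholds):
    """predictions: dict (subject, relation) -> (predicted object, probability);
    thresholds: dict relation -> probability at R@P90."""
    return {pair: (obj, p) for pair, (obj, p) in predictions.items()
            if p >= thresholds[pair[1]]}

preds = {("Marcus Adams", "nativeLanguage"): ("English", 0.93),
         ("Pro Cycling Manager 2015", "developedBy"): ("Cyanide", 0.41)}
thresholds = {"nativeLanguage": 0.62, "developedBy": 0.88}
print(filter_high_accuracy(preds, thresholds))   # keeps only the nativeLanguage fact
```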
#### 6.1.2 Annotations.
We filter the \(70,000\) missing facts relation-wise by keeping only the facts with high accuracy predictions and then sample \(50\) missing facts with high accuracy predictions. This results in one prediction per subject-relation
pair, i.e. 350 predicted missing facts in total. For these 350 missing facts, we use Amazon Mechanical Turk (mturk) for human annotations. An annotator is asked to evaluate a predicted missing fact. For readability reasons, each fact is formulated into relation-specific statements such as "The native language of Marcus Adams is English", where "English" is the prediction and nativeLanguage the relation. The annotators are given a five-value scale: true, plausible, unknown, implausible, and false. Before voting, we ask them to look up the fact on the web. They are required to give an evidence link and copy a text snippet supporting their voting. In case they vote unknown, they have to explain their voting. This way we ensure that annotators leave reasonable votes. We can say that all annotators left understandable insights regarding their voting.
We also self-annotated the 350 missing facts with ground truth and evidence links. Along with ground truth annotations, we annotated if the subject was known to English (en) Wikipedia; if the ground truth consists of more than one word; if the ground truth is in BERT's vocabulary, the position of the found evidence in Google search results and the search link used to find the ground truth. Furthermore, we rated each prediction using the same five-value scale as mturk annotators. We and mturk annotators reach an agreement of 69% given the five-value scale and an agreement of 94% given the upper categories true (true, plausible) and false (unknown, implausible, false).
### Results
**Quantitative results.** In Table 5, we see that the human annotators rate the model's predictions as highly correct. Based on these values we have determined the potential for KBC in Table 6. Given the number of missing facts and the proportion of high accuracy predictions, we can estimate the amount of addable facts at a relations-specific accuracy. This accuracy was achieved through human evaluation as shown in Table 5. Given the relation nativeLanguage we could add \(5,550,689\) new facts in a human-in-a-loop procedure or \(7,871,085\cdot 0.86=\)
\begin{table}
\begin{tabular}{l|c|c||c|c|c|c|c|c} \hline \hline & & & true with & & & false with & in Wikipedia \\ Relation & true & false & evidence & plausible & unknown & implausible & evidence & (en) \\ \hline nativeLanguage & 82\% & 18\% & 48\% & 34\% & 6\% & 8\% & 4\% & 16\% \\ spokenLanguage & 82\% & 18\% & 46\% & 36\% & 6\% & 6\% & 6\% & 28\% \\ headquarteredIn & 82\% & 18\% & 34\% & 48\% & 6\% & 8\% & 4\% & 26\% \\ developedBy & 62\% & 38\% & 50\% & 12\% & 0\% & 0\% & 38\% & 74\% \\ producedBy & 22\% & 78\% & 20\% & 2\% & 6\% & 0\% & 72\% & 10\% \\ LanguageOfFilm & 76\% & 24\% & 56\% & 20\% & 0\% & 2\% & 22\% & 32\% \\ citizenOf & 90\% & 10\% & 52\% & 38\% & 4\% & 0\% & 6\% & 8\% \\ \hline \hline \end{tabular}
\end{table}
Table 5: Overview of the results from the human evaluation. “True“ denotes the summed up human ratings for “true with evidence“ and “plausible“ per relation. Similarly, “false“ denotes the combined human ratings for “unknown“, “implausible“, and “false with evidence“ per relation. The column “in Wikipedia (en)“ describes the ratio of subjects with English Wikipedia articles to subjects without English Wikipedia articles per relation.
\(6,769,133\) with an accuracy of \(82\%\) automatically. In a human-in-a-loop procedure, the estimated cost of \(2\) USD (section 1) for a manually curated fact could drop to \(40\) cents, at approximately \(2\) minutes per annotation, as we experienced in our mturk evaluation. Given our results in Table 6, we can perform significant KBC for the relations nativeLanguage and spokenLanguage at precision \(82\%\) and citizenOf at precision \(90\%\).
**Qualitative results.** Looking into the annotations and predictions, we see that statements such as "Final Fantasy VII is developed by Square" are almost literally included in corresponding evidence links such as Wikipedia pages8. In contrast, relations like nativeLanguage include statements only implicitly, e.g. the native language of "Marcus Adams" is never mentioned explicitly in the corresponding Wikipedia page9. Yet, the LM achieves comparable results on most relations despite their more implicit or explicit factual basis.
Footnote 8: [https://en.Wikipedia.org/wiki/Final_Fantasy_VII](https://en.Wikipedia.org/wiki/Final_Fantasy_VII)
Footnote 9: [https://en.Wikipedia.org/wiki/Marcus_Adams_](https://en.Wikipedia.org/wiki/Marcus_Adams_)(Canadian_football)
**Generalization or Retrieval?** To investigate this further, we computed the proportion of subjects with English Wikipedia articles per relation. This enables us to estimate whether facts were mentioned in corresponding Wikipedia articles and thus, were present in BERT's training set. It can be shown that the language-related and socio-demographic relations achieve high results despite their lower occurrence in Wikipedia. This means that BERT predicts never seen facts with a high accuracy for these relations, showing a high generalization capability. When given facts such as the headquarters of "Universal de Television Peru", the model correctly predicts "Lima". Or given a fact such as the original language of the movie "II mio paeese", the model predicts "Italian" correctly. Socio-demographic relations such as citizenOf, headquarteredIn or language-related relations like nativeLanguage, LanguageOfFilm exhibit stronger correlations (e.g. person name and spoken language/origin country) than other relations (e.g. video game and developer, goods/products and manufacturer). These correlations are learned by the LM from the vast amounts of training data and used for the prediction. We recognize here that such learned correlations can be quite useful for fact prediction. Regarding the non-language or non-socio-demographic relations, we see that producedBy has the least Wikipedia-known subjects of \(10\%\) and shows also the lowest accuracy of \(22\%\). In contrast, developedBy has the most Wikipedia-known subjects and still a high accuracy of \(62\%\) despite being a non-language or non-socio-demographic relation. In these relations, the model shows less generalization capability and is more in need of actual retrieval. As an example: the developer of "Pro Cycling Manager 2015" must be explicitly mentioned during training to know it, yet the model correctly predicts "Cyanide".
**Conclusion.** Given these qualitative examples and the quantified numbers, the model is capable of both generalization and retrieval. However, it is still unclear in what mixture and to what extent, for example, fact retrieval is possible. For KBC both are beneficial. In the case of precise retrieval, facts can be added automatically to an existing KB. In the case of generalization, which still achieves a high
accuracy, human-in-a-loop procedures allow adding manually curated facts in a faster and cheaper way.
## 7 Discussion and Conclusion
In this paper, we investigated the potential of automated KB completion using LMs. We introduced a challenging benchmark dataset, WD-Known, an unbiased random sample of Wikidata containing 3.9M existing facts. Using this dataset enabled a more realistic assessment of KB completion using LMs. This revealed that previous benchmarks lead to an overestimate of LM-based KB completion performance.
Our analysis showed that LMs are not able to obtain results at a high precision (\(\sim 90\%\)) for all relations equally, but LM-based knowledge covers language-related and socio-demographic relations particularly well. Furthermore, we discovered that an LM's vocabulary can limit the capability of fact prediction and we achieved significant improvements with fine-tuning and vocabulary expansion.
Since the prediction of facts that do not yet exist in the KB is crucial for KB completion, we extracted plausible subject-relation pairs whose objects are missing from the KB. By probing the LM for these facts, we obtained novel facts previously unknown to the KB. Since the ground truth for these facts is missing, we performed a human evaluation. Annotators rated the LM's suggestions as highly correct. This showed a high potential for KB completion, either completely automated at a precision of up to 90% or as a strong recommender system for human-in-a-loop procedures. We demonstrated that in a human-in-a-loop procedure, LMs might reduce the cost of manually curated facts significantly, from approximately $2 to $0.4 per fact.
Moreover, we showed that LMs build surprisingly strong generalization capabilities for specific socio-demographic relations.
A promising direction for future work could be the construction of LMs specifically for KBs, which goes beyond fine-tuning. This could include defining specific vocabularies that are optimized for fact prediction.
\begin{table}
\begin{tabular}{l|r|r|r|r|r|r} \hline \hline & & \multicolumn{2}{c|}{\#missing} & high accuracy & \multicolumn{2}{c|}{\#addable} & growth \\ Relation & \(cardinality^{WD}\) & facts & predictions(\%) & accuracy & facts & factor \\ \hline nativeLanguage & 264,778 & 7,871,085 & 86\% & 82\% & 5,550,689 & 20.96 \\ spokenLanguage & 2,148,775 & 7,090,119 & 77\% & 82\% & 4,476,701 & 2.08 \\ headquarteredIn & 409,309 & 55,186 & 8\% & 82\% & 3,443 & 0.008 \\ developedBy & 42,379 & 29,349 & 2\% & 62\% & 363 & 0.01 \\ producedBy & 123,036 & 31,239 & 0.8\% & 22\% & 55 & 0.0004 \\ LanguageOfFilm & 337,682 & 70,669 & 37\% & 76\% & 19,872 & 0.06 \\ citizenOf & 4,206,684 & 4,616,601 & 28\% & 90\% & 1,163,383 & 0.27 \\ \hline \hline \end{tabular}
\end{table}
Table 6: The amount of missing facts and the percentage of high accuracy predictions denotes the number of new facts we could add at a relation-specific precision. The amount of addable facts indicates the number of potential new facts that could be added without error, e.g. in a human-in-a-loop procedure. The growth factor describes the potential growth of Wikidata given the current \(cardinality^{WD}\) in Wikidata and the amount of addable facts. |
2307.03171 | LEO: Learning Efficient Orderings for Multiobjective Binary Decision
Diagrams | Approaches based on Binary decision diagrams (BDDs) have recently achieved
state-of-the-art results for multiobjective integer programming problems. The
variable ordering used in constructing BDDs can have a significant impact on
their size and on the quality of bounds derived from relaxed or restricted BDDs
for single-objective optimization problems. We first showcase a similar impact
of variable ordering on the Pareto frontier (PF) enumeration time for the
multiobjective knapsack problem, suggesting the need for deriving variable
ordering methods that improve the scalability of the multiobjective BDD
approach. To that end, we derive a novel parameter configuration space based on
variable scoring functions which are linear in a small set of interpretable and
easy-to-compute variable features. We show how the configuration space can be
efficiently explored using black-box optimization, circumventing the curse of
dimensionality (in the number of variables and objectives), and finding good
orderings that reduce the PF enumeration time. However, black-box optimization
approaches incur a computational overhead that outweighs the reduction in time
due to good variable ordering. To alleviate this issue, we propose LEO, a
supervised learning approach for finding efficient variable orderings that
reduce the enumeration time. Experiments on benchmark sets from the knapsack
problem with 3-7 objectives and up to 80 variables show that LEO is ~30-300%
and ~10-200% faster at PF enumeration than common ordering strategies and
algorithm configuration. Our code and instances are available at
https://github.com/khalil-research/leo. | Rahul Patel, Elias B. Khalil | 2023-07-06T17:52:29Z | http://arxiv.org/abs/2307.03171v1 | # LEO: Learning Efficient Orderings for Multiobjective Binary Decision Diagrams
###### Abstract
Approaches based on Binary decision diagrams (BDDs) have recently achieved state-of-the-art results for multiobjective integer programming problems. The variable ordering used in constructing BDDs can have a significant impact on their size and on the quality of bounds derived from relaxed or restricted BDDs for single-objective optimization problems. We first showcase a similar impact of variable ordering on the Pareto frontier (PF) enumeration time for the multiobjective knapsack problem, suggesting the need for deriving variable ordering methods that improve the scalability of the multiobjective BDD approach. To that end, we derive a novel parameter configuration space based on variable scoring functions which are linear in a small set of interpretable and easy-to-compute variable features. We show how the configuration space can be efficiently explored using black-box optimization, circumventing the curse of dimensionality (in the number of variables and objectives), and finding good orderings that reduce the PF enumeration time. However, black-box optimization approaches incur a computational overhead that outweighs the reduction in time due to good variable ordering. To alleviate this issue, we propose LEO, a supervised learning approach for finding efficient variable orderings that reduce the enumeration time. Experiments on benchmark sets from the knapsack problem with 3-7 objectives and up to 80 variables show that LEO is \(\sim\)30-300% and \(\sim\)10-200% faster at PF enumeration than common ordering strategies and algorithm configuration. Our code and instances are available at [https://github.com/khalil-research/leo](https://github.com/khalil-research/leo).
## Introduction
In many real-world scenarios, one must jointly optimize over a set of conflicting objectives. For instance, solving a portfolio optimization problem in finance requires simultaneously minimizing risk and maximizing return. The field of multi-objective optimization deals with solving such problems. It has been successfully applied in novel drug design (Lambrinidis and Tsantili-Kakoulidou, 2021), space exploration (Song et al., 2018; Tangputantakul et al., 2020), administering radiotherapy (Yu et al., 2000), and supply chain network design (Altiparnak et al., 2006), among others. In this paper, we specifically focus on multiobjective integer programming (MOIP), which deals with solving multiobjective problems with integer variables and linear constraints.
The goal of solving a multiobjective problem is to find the Pareto frontier (PF): the set of feasible solutions that are not dominated by any other solution, i.e., ones for which improving the value of any objective deteriorates at least one other objective. The PF solutions provide the decision-maker with a set of trade-offs between the conflicting objectives. Objective-space search methods iteratively solve multiple related single-objective problems to enumerate the PF but suffer from redundant computations in which previously found solutions are encountered again or a single-objective problem turns out infeasible. On the other hand, decision-space search methods leverage branch-and-bound. Unlike the single-objective case where one compares a single scalar bound (e.g., in mixed-integer linear programming (MIP)), one needs to compare bound _sets_ to decide if a node can be pruned; this in itself is quite challenging. Additionally, other crucial components of branch-and-bound such as branching variable selection and presolve are still underdeveloped, limiting the usability of this framework.
Binary decision diagrams (BDDs) have been a central tool in program verification and analysis (Bryant, 1992; Wegener, 2000). More recently, however, they have been used to solve discrete optimization problems (Bergman et al., 2016, 2016) that admit a recursive formulation akin to that of dynamic programming. BDDs leverage this structure to get an edge over MIP by efficiently encoding the feasible set into a network model which enables fast optimization. To the best of our knowledge, Bergman and Cire (2016) were the first to use BDDs to solve multiobjective problems, achieving state-of-the-art results for a number of problems. The variable ordering (VO) used to construct a BDD has a significant impact on its size and, consequently, on any optimization over the diagram. However, the VO problem within BDD-based MOIP has not been addressed in the literature. We address this gap by designing a novel learning-based BDD VO technique for faster enumeration of the PF.
We begin with the following hypothesis: VO has an impact on the PF enumeration time and an "efficient" VO can reduce it significantly. Following an empirical validation of this hypothesis, we show that such orderings can be found using black-box optimization, not directly in the (exponentially large) space of variable orderings, but rather indirectly in the space of constant-size variable scoring functions. The scoring function is a weighted linear combination of a fixed
set of variable properties (or attributes), and the indirect search is in the space of possible weight combinations. Figure 1 illustrates how a search in the property-weight space can alleviate the curse of dimensionality to make VO search scalable to problems with many variables. However, solving the black-box optimization problem may be prohibitively time-consuming for any one instance. For variable ordering to be useful in practice, the time required to produce a good VO should be negligible relative to the actual PF enumeration time. To alleviate this issue, we train a supervised machine learning (ML) model on the orderings collected using black-box optimization. A trained model can then be used on unseen (test) instances to predict variable orderings. Should such a model generalize well, it would lead to reduced PF enumeration times. We refer to our approach as LEO (Learning Efficient Orderings). Our key contributions can be summarized as follows:
1. We show that variable ordering can have a dramatic impact on solving times through a case study of the multi-objective knapsack, a canonical combinatorial problem.
2. We show how black-box optimization can be leveraged to find efficient variable orderings at scale.
3. We design a supervised learning framework for predicting variable orderings which are obtained with black-box optimization on a set of training instances. Our ML models are invariant to permutations of variables and independent of the number of variables, enabling fast training and the use of one ML model across instances of different sizes.
4. We perform an extensive set of experiments on the knapsack problem and show that LEO is \(\sim\)30-300% and \(\sim\)10-200% faster than the best non-ML ordering strategy and the SMAC algorithm configuration tool, respectively.
5. We perform a feature importance analysis of the best class of ML models we have found, extreme gradient boosted ranking trees. The analysis reveals that: (a) a single ML model can be trained across instances with varying numbers of objectives and variables; and (b) a single knapsack-specific feature that we had not initially contemplated performs reasonably well on its own, though far worse than our ML models.
## Preliminaries
### Multiobjective Optimization
An MOIP problem takes the form \(\mathcal{M}:=\min_{x}\left\{\bar{z}(x):x\in\mathcal{X},\mathcal{X}\subset \mathbb{Z}_{+}^{n}\right\}\), where \(x\) is the decision vector, \(\mathcal{X}\) is a polyhedral feasible set, and \(\bar{z}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{p}\) a vector-valued objective function representing the \(p\) objectives. In this work, we focus on the knapsack problem with binary decision variables, hence \(\mathcal{X}\subset\{0,1\}^{n}\).
**Definition: Pareto dominance.** Let \(x^{1},x^{2}\in\mathcal{X},\bar{y}^{1}=\bar{z}(x^{1}),\bar{y}^{2}=\bar{z}(x^{2})\), then \(\bar{y}^{1}\)_dominates_\(\bar{y}^{2}\) if \(\bar{y}^{1}_{j}\leq\bar{y}^{2}_{j},\forall j\in[p]\) and \(\exists j\in[p]:\bar{y}^{1}_{j}<\bar{y}^{2}_{j}\).
**Definition: Efficient set.** A solution \(x^{1}\in\mathcal{X}\) is called an efficient solution if \(\nexists x^{2}\in\mathcal{X}\) such that \(x^{2}\) dominates \(x^{1}\). The set of all efficient solutions to a multiobjective problem is called an efficient set \(\mathcal{X}_{E}\).
**Definition: Pareto frontier (PF).** The set of images of the efficient solutions in the objective space, i.e., \(\mathcal{Z}_{N}=\{\bar{z}(x):x\in\mathcal{X}_{E}\}\), is called the PF.
Exact solution approaches to multiobjective problems focus on efficiently enumerating the PF.
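These definitions translate directly into a dominance check and a non-dominated filter; the latter is essentially the ND operator applied to BDD path images in the next subsection. Below is a minimal Python sketch for the minimization convention used above (the function names and toy points are ours, for illustration only).

```python
from typing import List, Sequence

def dominates(y1: Sequence[float], y2: Sequence[float]) -> bool:
    """y1 dominates y2 (minimization): no worse in every objective, strictly better in one."""
    return all(a <= b for a, b in zip(y1, y2)) and any(a < b for a, b in zip(y1, y2))

def nd_filter(points: List[Sequence[float]]) -> List[Sequence[float]]:
    """Keep only non-dominated objective vectors (the ND operator), in O(m^2) for m points."""
    return [p for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]

# (1, 3) and (2, 2) are mutually non-dominated; (2, 4) is dominated by both.
print(nd_filter([(1, 3), (2, 2), (2, 4)]))  # -> [(1, 3), (2, 2)]
```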
### BDDs for Multiobjective Optimization
A BDD is a compact encoding of the feasible set of a combinatorial optimization problem that exploits the recursive
Figure 1: Searching for variable orderings. The robot represents any automated search procedure that tries to find a good variable ordering (Approach #1) or property weights (Approach #2) by iteratively proposing a variable ordering, measuring its run time on one or more instances, and repeating until a satisfactory ordering has been found. Approach #1 depicts a naive approach that searches for an efficient variable ordering directly in the variable ordering space. The proposed Approach #2 conducts an indirect search by controlling a small set of variable property weights (PWs).
formulation of the problem. Formally, a BDD is a layered acyclic graph \(G=(n,\mathcal{N},\mathcal{A},\ell,d)\), composed of nodes in \(\mathcal{N}\), arcs in \(\mathcal{A}\), a node-level mapping \(\ell:\mathcal{N}\rightarrow[n+1]\) that maps each node to a decision variable, and an arc-label mapping \(d:\mathcal{A}\rightarrow\{0,1\}\) that assigns a value from the variable domain of an arc's starting node to the arc. Here, \(n\) is the number of variables of the multiobjective problem \(\mathcal{M}\).
The nodes are partitioned into \(n+1\) layers \(L_{1},\ldots,L_{n+1}\), where \(L_{l}=\{u:\ell(u)=l,u\in\mathcal{N}\}\). The first and last layers have one node each, the root \(\mathbf{r}\) and terminal nodes \(\mathbf{t}\), respectively. The width of a layer \(L_{l}\) is equal to the number of nodes in that layer, \(|L_{l}|\). The width of a BDD \(G\) is equal to the maximum layer width, \(\max_{l\in[n+1]}|L_{l}|\). An arc \(a:=(r(a),t(a))\in\mathcal{A}\) starts from the node \(r(a)\in L_{l}\) and ends in node \(t(a)\in L_{l+1}\) for some \(l\in[n]\). It has an associated label \(d(a)\in\{0,1\}\) and a vector of values \(\bar{v}(a)\in\mathbb{R}_{+}^{p}\) that represents the contribution of that arc to the \(p\) objective values.
Let \(\mathcal{P}\) represent all the paths from the root to the terminal node. A path \(e(a_{1},a_{2},\ldots,a_{n})\in\mathcal{P}\) is equal to the solution \(x=(d(a_{1}),d(a_{2}),\ldots,d(a_{n}))\) and the corresponding image of this solution in the objective space is given by \(\bar{v}(e)=\sum_{i=1}^{n}\bar{v}(a_{i})\), where the sum is taken elementwise over the elements of each vector \(\bar{v}(a_{i})\). The BDD representation of \(\mathcal{M}\) is valid if \(\mathcal{Z}_{N}=\textsc{ND}\left(\bigcup_{e\in\mathcal{P}}\bar{v}(e)\right)\), where \(\mathcal{Z}_{N}\) is the PF of \(\mathcal{M}\) and ND an operator to filter out dominated objective vectors. We refer the readers to Bergman et al. (2021) and Bergman and Cire (2016) for the detailed description of the multiobjective knapsack BDD construction. In what follows, we assume access to a BDD construction and PF enumeration procedure and will focus our attention on the variable ordering aspect.
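To illustrate the path-to-solution correspondence concretely, the sketch below enumerates the root-to-terminal paths of a tiny hand-built diagram and sums the per-arc value vectors to obtain each path's objective image; the ND operator would then discard any dominated images. The diagram, arc values, and names are invented for illustration and are not the authors' compilation procedure.

```python
from typing import Dict, List, Tuple

Arc = Tuple[int, Tuple[int, int], str]  # (label d(a), value vector v(a), child node)

# A hand-built 2-variable, 2-objective "BDD"; "r" is the root and "t" the terminal.
arcs: Dict[str, List[Arc]] = {
    "r":  [(0, (0, 0), "u1"), (1, (3, 1), "u2")],  # layer 1: decide x1
    "u1": [(0, (0, 0), "t"), (1, (2, 2), "t")],    # layer 2: decide x2
    "u2": [(0, (0, 0), "t")],                      # only x2 = 0 is feasible here
}

def enumerate_paths(node: str, x=(), v=(0, 0)):
    """Yield (solution, objective image) for every root-to-terminal path."""
    if node == "t":
        yield x, v
        return
    for label, val, child in arcs[node]:
        yield from enumerate_paths(child, x + (label,),
                                   tuple(a + b for a, b in zip(v, val)))

for solution, image in enumerate_paths("r"):
    print(solution, image)  # (0, 0) (0, 0); (0, 1) (2, 2); (1, 0) (3, 1)
```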
## Related Work
### Exact Approaches for Solving MOIP Problems
Traditional approaches to exactly solve MOIP can be divided into objective-space search and decision-space search methods. The objective-space search techniques Kirlik and Sayn (2014); Boland, Charkhgard, and Savelsbergh (2014); Ozlen, Burton, and MacRae (2014) enumerate the PF by searching in the space of objective function values. They transform a multiobjective problem into a single one either by weighted sum aggregation of objectives or transforming all but one objective into constraints. The decision-space search approaches Przybylski and Gandibleux (2017); Sourd and Spanjaard (2008); Parragh and Tricoire (2019); Adelgren and Gupte (2022); Belotti, Soylu, and Wiecek (2013) instead explore the set of feasible decisions. Both these approaches have their own set of challenges as described in the introduction. We point the reader to Ehrgott, Gandibleux, and Przybylski (2016); Ehrgott (2006) for a more detailed background.
Bergman et al. (2021) showcased that BDD-based approaches can leverage problem structure and can be orders of magnitude faster on certain problem classes; the use of valid network pruning operations along with effective network compilation techniques were the prime factors behind their success. However, they do not study the effect of VO on the PF enumeration time.
### Variable Ordering for BDD Construction
Finding a VO that leads to a BDD with a minimal number of nodes is an NP-Complete problem Bollig, Lobbing, and Wegener (1996); Bollig and Wegener (1996). This problem has received significant attention from the verification community as smaller BDDs are more efficient verifiers. Along with the size, VO also affects the objective bounds; specifically, smaller exact BDDs are able to obtain better bounds on the corresponding limited width relaxed/restricted BDDs Bergman et al. (2016).
The VO techniques for BDD construction can be broadly categorized as exact or heuristic. The exact approaches to VO Friedman and Supowit (1987); Ebendt and Drechsler (2007); Ebendt, Gunther, and Drechsler (2005); Bergman et al. (2012), though useful for smaller cases, are not able to scale with problem size. To alleviate the issue of scalability for larger problems, heuristic VO techniques are used; the proposed methodology falls into this category. These heuristics can be general or problem-specific Rice and Kulhari (2008); Bergman et al. (2016); Lu et al. (2000); Aloul, Markov, and Sakallah (2003); Chung, Hajj, and Patel (1993); Butler et al. (1991); Rudell (1993) but the literature has not tackled the multiobjective optimization setting. A VO method can also be classified as either _static_ or _dynamic_. Static orderings are specified in advance of BDD construction whereas dynamic orderings are derived incrementally as the BDD is constructed layer-by-layer. The latter may or may not be applicable depending on the BDD construction algorithm; see Karahalios and van Hoeve (2022) for a discussion of this distinction for Graph Coloring. We focus on static orderings and discuss extensions to the dynamic setting in the Conclusion.
We focus on ML-based heuristics for VO as they can be learned from a relevant dataset instead of handcrafted heuristics developed through a tedious trial-and-error process.
### Machine Learning for Variable Ordering
Grumberg, Livne, and Markovitch (2003) proposed a learning-based algorithm to construct smaller BDDs for model verification. In particular, they create random orders for a given instance in the training set and tag variable pairs based on their impact on the resulting BDD size. Using this data, they learn multiple pair precedence classifiers. For a new instance at test time, they query each trained pair precedence classifier to construct a precedence table. These tables are merged into one to derive the ordering. The success of this method hinges on the selective sampling of informative variable pairs and the ability to generate orders with sufficient variability in BDD size to increase the chance of observing high-quality labels. As they leverage problem-specific heuristics for selective sampling and rely on random sampling for generating labels, this method is not applicable to our setting. Specifically, we do not have a notion of how informative a given variable pair is. Additionally, random orders may not produce high variability in PF enumeration time as evidenced by our experiments.
Carbin (2006) uses active learning to address the VO problem for BDDs used in program analysis with the goal of minimizing the run time, similar to that of LEO. However, certain differences make it less amenable to our setting. Specifically, the technique to generate the candidate VOs in Carbin (2006) is grounded in program analysis and cannot be applied to our problem. Instead, LEO leverages Bayesian optimization through SMAC in conjunction with the novel property-weight search space to generate VO candidates.
Drechsler, Gockel, and Becker (1996) use an evolutionary search approach to learn a single variable ordering heuristic for a set of instances. The learned heuristic is a sequence of BDD operations (e.g., swapping two variables in an ordering) applied to instances from circuit design and verification, where the BDD represents a boolean function that must be verified efficiently. This approach can be seen as one of algorithm configuration, which we will compare to using SMAC and ultimately outperform.
The work of Cappart et al. (2019) is the first learning-based method to address VO problem for BDDs used in solving discrete optimization problems. Specifically, they learn a policy to order variables of relaxed/restricted BDDs to obtain tighter bounds for the underlying optimization problem using reinforcement learning (RL). A key component of training an RL policy is devising the reward that the agent receives after taking an action in a given state and moving to the next state. Note that we care about reducing the time to enumerate the PF, which is only available at the end of a training episode. The absence of intermediate rewards in our setting makes RL inapplicable.
Karahalios and van Hoeve (2022) developed an algorithm portfolio approach to select the best VO strategy from a set of alternatives when constructing relaxed decision diagrams for the (single-objective) graph coloring problem. Fundamental to a portfolio-based approach is the existence of a set of strategies, such that each one of them is better than the others at solving some problem instances. However, such an algorithm portfolio does not exist in our case and part of the challenge is to discover good ordering strategies.
There have been some recent contributions Wu et al. (2022); Liu, Yan, and Jin (2023) relating to solving multiobjective problems using deep learning and graph neural networks. However, these approaches are not exact and thus beyond the scope of this paper. We discuss potential extensions of LEO to the inexact setting in the Conclusion.
## Methodology
The proposed methodology, as depicted in Figure 2, is divided into three phases. We apply our technique to the multi-objective knapsack problem (MKP) which can be described as:
\[\max_{x\in\{0,1\}^{n}}\left\{\left\{\sum_{i\in[n]}a_{i}^{p}x_{i}\right\}_{p=1 }^{P}:\sum_{i\in[n]}w_{i}x_{i}\leq W\right\}.\]
Here, \(n\) is the number of items/variables, \(w_{i}\) and \(a_{i}^{p}\) are the weight and profit corresponding to each item \(i\in[n]\) and objective \(p\in[P]\). Finally, the capacity of the knapsack is \(W\in\mathbb{Z}_{+}\). Next, we define an efficient variable ordering, a concept which will be useful in describing LEO.
Let \(\mathcal{O}\) be the set of all possible variable orderings and \(\Gamma(o),o\in\mathcal{O}\) denote the PF enumeration time over a BDD constructed using order \(o\). Let \(o^{\star}\equiv\arg\min_{o\in\mathcal{O}}\Gamma(o)\) be the optimal VO. Finding \(o^{\star}\) among all \(n!\) possible permutations of \(n\) variables is intractable, so we will aim for an efficient variable ordering (EVO) \(o^{e}\) that is as close as possible to \(o^{\star}\). Note that our approach is heuristic and does not come with approximation guarantees on the enumeration time of \(o^{e}\) relative to \(o^{\star}\).
The objective in the first phase is to find, for each training instance, an EVO that acts as a label for supervising an ML model. In the second phase, each training instance is mapped to a set of features and an ML model is trained. Finally, we perform model selection and use the chosen model to predict EVOs, referred to as \(\hat{o^{e}}\), that are then used to construct a BDD and compute the PF for any test instance.
### Phase 1: Finding an EVO
Since finding an optimal VO that minimizes the PF enumeration time is NP-complete, we devise a heuristic approach for finding an EVO, \(o^{e}\).
To find \(o^{e}\) for a given instance \(\mathcal{I}\), we use black-box optimization as the run time \(\Gamma^{\mathcal{I}}\) cannot be described analytically and optimized over. A naive approach to find \(o^{e}\) would be to search directly in the variable ordering space, as suggested in Approach #1 of Figure 1. While this might work well for tiny problems, it will not scale with the increase in the problem size as there are \(n!\) possible orderings.
To alleviate this issue, we define a surrogate search space that removes the dependence on the problem size. Specifically, we introduce score-based variable ordering, where we order the variables in decreasing order of their total score. For a given problem class, we define a set of properties \(\mathcal{K}\) for its variables that capture some problem structure. For example, in an MKP, the weight of an item can act as a property. Table 1 lists all properties of a variable of an MKP. Let \(g_{ik}\) be the property score of variable \(i\) for some property \(k\in\mathcal{K}\), and let \(\bar{w}=(w_{1},\cdots,w_{|\mathcal{K}|})\) be the property weights in \([-1,1]^{|\mathcal{K}|}\). Then, the score of a variable \(i\) is defined as
\[s_{i}\equiv\sum_{k\in\mathcal{K}}w_{k}\cdot\frac{g_{ik}}{\sum_{i^{\prime}\in[n]}g_{i^{\prime}k}}. \tag{1}\]
| Property | Definition |
| --- | --- |
| weight | \(w_{i}\) |
| avg-value | \(\sum_{p=1}^{P}a_{i}^{p}/P\) |
| max-value | \(\max\{a_{i}^{p}\}_{p=1}^{P}\) |
| min-value | \(\min\{a_{i}^{p}\}_{p=1}^{P}\) |
| avg-value-by-weight | \(\left(\sum_{p=1}^{P}a_{i}^{p}/P\right)/w_{i}\) |
| max-value-by-weight | \(\max\{a_{i}^{p}\}_{p=1}^{P}/w_{i}\) |
| min-value-by-weight | \(\min\{a_{i}^{p}\}_{p=1}^{P}/w_{i}\) |

Table 1: Properties of a variable \(i\) of an MKP.
We recover the variable ordering by sorting the variables in decreasing order of their scores. Thus, as depicted in Approach #2 in Figure 1, the search for \(o^{e}\) is conducted in the surrogate search space \([-1,1]^{|\mathcal{K}|}\), which only depends on \(|\mathcal{K}|\) and not on the number of variables \(n\). Note that defining the search space in this manner gives an additional layer of dynamism in the sense that two instances with the same property weights can have different variable orders.
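The following is a small worked example of Equation (1) and the induced score-based ordering, using two of the properties from Table 1; all numbers, the choice of properties, and the weight vector are illustrative only.

```python
import numpy as np

# Toy MKP data: 4 items, 2 objectives.
weights = np.array([10.0, 40.0, 25.0, 5.0])
values = np.array([[30.0, 10.0, 50.0, 20.0],   # objective 1
                   [20.0, 60.0, 10.0, 40.0]])  # objective 2

# Two of the variable properties from Table 1: weight and min-value-by-weight.
g = np.stack([weights, values.min(axis=0) / weights], axis=1)  # shape (n, |K|)

# A candidate property-weight vector in [-1, 1]^|K|.
w = np.array([-1.0, 0.5])

# Equation (1): column-normalize each property, then take the weighted sum.
scores = (g / g.sum(axis=0)) @ w

# Score-based variable ordering: decreasing total score.
order = np.argsort(-scores)
print(scores.round(3), order)
```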
With a slight abuse of notation, let \(\Gamma(\bar{w})\) represent the time taken to enumerate the PF using a VO obtained from property weights \(\bar{w}\). Given a problem instance, the black-box optimizer controls the property weights and the BDD manager computes the PF enumeration time based on the VO derived from these property weights. The black-box optimizer iteratively tries different property weight configurations, maintaining a list of incumbents, a process that is depicted in Phase 1 of Figure 2. We propose to use the variable ordering obtained from the best incumbent property weights as the label for the learning task.
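Phase 1 can thus be sketched as the loop below. The paper uses SMAC as the black-box optimizer; here a plain random search over \([-1,1]^{|\mathcal{K}|}\) stands in for it, and the call to the BDD manager is replaced by a synthetic timing proxy purely so that the sketch executes end to end.

```python
import numpy as np

rng = np.random.default_rng(0)

def order_from_weights(g: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Equation (1): decreasing weighted, column-normalized property score."""
    return np.argsort(-((g / g.sum(axis=0)) @ w))

def pf_enumeration_time(order: np.ndarray) -> float:
    """Stand-in for the BDD manager: compile a BDD under `order` and time PF enumeration.
    A synthetic proxy is used here only so that the example runs."""
    return float(np.abs(np.diff(order)).sum())

def find_evo(g: np.ndarray, n_trials: int = 200):
    """Random-search stand-in for the black-box optimizer over property weights."""
    best_w, best_t = None, float("inf")
    for _ in range(n_trials):
        w = rng.uniform(-1.0, 1.0, size=g.shape[1])        # property weights in [-1, 1]^|K|
        t = pf_enumeration_time(order_from_weights(g, w))  # Gamma(w)
        if t < best_t:
            best_w, best_t = w, t
    return order_from_weights(g, best_w), best_w, best_t

g = rng.uniform(1, 100, size=(20, 7))  # synthetic property matrix: 20 variables, 7 properties
evo, w_star, t_star = find_evo(g)
```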
### Phase 2: Dataset Generation and Model Training
In this phase, we begin by generating the training dataset. We give special attention to designing our features such that the resulting models are permutation-invariant and independent of the size of the problem. Note that instead of this feature engineering approach, one could use feature learning through graph neural networks or similar deep learning techniques, see (Cappart et al., 2023) for a survey. However, given that our case study is on the knapsack problem, we opt for domain-specific feature engineering that can directly exploit some problem structure and leads to somewhat interpretable ML models.
Suppose we are given \(J\) problem instances, each having \(n_{j}\) variables, \(j\in[J]\). Let \(\alpha_{ij}\) denote the features of variable \(i\in[n_{j}]\) and \(\beta_{j}\) denote the instance-level context features. Using the features and the EVO computed in Phase 1, we construct a dataset \(\mathcal{D}=\{(\alpha_{ij},\beta_{j},r_{i}(o^{e}_{j})):i\in[n_{j}],j\in[J]\}\). Here, \(o^{e}_{j}\) is the EVO of an instance \(j\) and \(r_{i}:\mathbb{Z}^{n_{j}}_{+}\rightarrow\mathbb{Z}_{+}\) is a mapping from the EVO to the rank of a variable \(i\). For example, if \(n_{j}=4\) for some instance \(j\) and \(o^{e}_{j}=(2,1,4,3)\), then \(r_{1}(o^{e}_{j})=3\), \(r_{2}(o^{e}_{j})=4\), \(r_{3}(o^{e}_{j})=1\), and \(r_{4}(o^{e}_{j})=2\). For a complete list of variable and context features, refer to Table 2.
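The rank mapping \(r_{i}\) is straightforward to compute; the sketch below (our own helper) reproduces the worked example from the text.

```python
def ranks_from_order(order):
    """Map a 1-indexed variable ordering (first entry = placed first in the BDD)
    to per-variable ranks, where the first variable in the order gets the highest rank n."""
    n = len(order)
    rank = [0] * n
    for position, var in enumerate(order):  # position 0 is the top layer
        rank[var - 1] = n - position
    return rank

print(ranks_from_order([2, 1, 4, 3]))  # [3, 4, 1, 2], matching r_1=3, r_2=4, r_3=1, r_4=2
```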
**Learning-to-rank** (LTR) is an ML approach specifically designed for ranking tasks, where the goal is to sort a set of items based on their relevance given a query or context. It is commonly used in applications such as information retrieval. We formulate the task of predicting the EVO as an LTR task and use the pointwise and pairwise ranking approaches to solve the problem.
In the pointwise approach, each item (variable) in the training data is treated independently, and the goal is to learn a model that directly predicts the relevance score or label for each variable. This is similar to solving a regression problem with a mean-squared error loss. Specifically, we train a model \(f_{\theta}(\alpha_{ij},\beta_{j})\) to predict \(r_{i}(o^{e}_{j})\). Once the model is trained, it can be used to rank items based on their predicted scores \(\hat{o}^{e}\). However, this approach does not explicitly consider the relationships between variables in its loss function.
The pairwise approach aims to resolve this issue.
Figure 2: Schematic of the proposed method LEO, which comprises of three phases. In Phase 1, we generate property-weight labels by iteratively improving them using the interplay between “Black-box optimizer” and “BDD Manager”. Phase 2 focuses on building datasets of tuples of features extracted by “Featurizer” and best property weights obtained from Phase 1 for training learning-to-rank models. In Phase 3 we do “Model selection” to select the model with best generalization and use it to predict efficient variable order. Finally, we use the predicted variable order to construct the BDD and enumerate the Pareto frontier.
Let
\[\mathcal{T}_{j}=\left\{(i_{1},i_{2}):r_{i_{1}}(o_{j}^{e})>r_{i_{2}}(o_{j}^{e})\right\}\]
be the set of all variable tuples \((i_{1},i_{2})\) such that \(i_{1}\) is ranked higher than \(i_{2}\) for instance \(j\). Then, the goal is to learn a model that maximizes the number of respected pairwise-ordering constraints. Specifically, we train a model \(g_{\phi}(\cdot)\) such that the number of pairs \((i_{1},i_{2})\in\mathcal{T}_{j}\) for which \(g_{\phi}(\alpha_{i_{1}j},\beta_{j})>g_{\phi}(\alpha_{i_{2}j},\beta_{j})\) is maximized. This approach is better equipped to solve the ranking problem as the structured loss takes into account the pairwise relationships.
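Such pairwise ranking models can be fit with off-the-shelf gradient-boosted tree libraries. A minimal sketch using XGBoost's ranking interface follows; the feature matrix, labels, group sizes, and hyperparameter values are synthetic stand-ins rather than the configuration used in the paper.

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)

# Synthetic stand-in data: 2 instances ("groups") of 40 variables each, 18 features,
# and per-variable rank labels (higher label = should be ordered earlier).
X = rng.normal(size=(80, 18))
y = np.concatenate([rng.permutation(40), rng.permutation(40)])
group = [40, 40]  # all variables of one instance form one ranking group

model = xgb.XGBRanker(objective="rank:pairwise", n_estimators=100, max_depth=6)
model.fit(X, y, group=group)

# At test time: predict scores for a new instance and sort variables by decreasing score.
scores = model.predict(rng.normal(size=(40, 18)))
predicted_order = np.argsort(-scores)
```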
### Phase 3: Model Selection and Testing
We follow a two-step approach to perform model selection as the ranking task is a proxy to the downstream task of efficiently solving a multiobjective problem. Firstly, for each model class (e.g., decision trees, linear models, etc.), we select the best model based on Kendall's Tau [13], a ranking performance metric that measures the fraction of violated pairwise-ordering constraints, on instances from a validation set different from the training set. Subsequently, we pit the best models from each type against one another and select the winner based on the minimum average PF enumeration time on the validation set. Henceforth, for previously unseen instances from the test set, we will use the model selected in Phase 3 to predict the EVO and then compute the PF.
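The ranking-quality step of model selection uses Kendall's Tau, which is readily available in SciPy; a minimal sketch with illustrative rank vectors:

```python
from scipy.stats import kendalltau

true_ranks = [3, 4, 1, 2]  # ranks derived from the EVO label
pred_ranks = [4, 3, 1, 2]  # ranks implied by a model's predicted scores

tau, _ = kendalltau(true_ranks, pred_ranks)
print(tau)  # 1.0 means no violated pairwise-ordering constraints, -1.0 means all violated
```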
## Computational Setup
Our code and instances are available at [https://github.com/khalil-research/leo](https://github.com/khalil-research/leo). All the experiments reported in this manuscript are conducted on a computing cluster with Intel Xeon E5-2683 CPUs. We use SMAC [12] - a black-box optimization and algorithm configuration library - for generating the labels. The ML models are built using Python 3.8, Scikit-learn [2], XGBoost [1], and SVM-Rank [1]. The "BDD Manager" is based on the implementation of Bergman and Cire (2016), which is available at [https://www.andrew.cmu.edu/user/vanhoeve/mdd/](https://www.andrew.cmu.edu/user/vanhoeve/mdd/).
### Instance Generation
We use a dataset of randomly generated MKP instances as described in [1]. The values \(w_{i}\) and \(a_{i}^{p}\) are sampled randomly from a discrete uniform distribution ranging from 1 to 100. The capacity \(W\) is set to \(\left\lceil 0.5\sum_{i\in[n]}w_{i}\right\rceil\). We generate instances with sizes
\[\mathcal{S}=\{(3,60),(3,70),(3,80),(4,50),(5,40),(6,40),(7,40)\},\]
where the first and second component of the tuple specify the number of objectives and variables, respectively. For each size, we generate 1000 training instances, 100 validation instances, and 100 test instances.
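A sketch of the instance generator implied by this description (our own helper, not the authors' script):

```python
from math import ceil
import numpy as np

def generate_mkp(n_vars: int, n_objs: int, seed: int = 0):
    """Random MKP instance: weights and profits drawn from U{1, ..., 100},
    capacity W = ceil(0.5 * sum of weights)."""
    rng = np.random.default_rng(seed)
    weights = rng.integers(1, 101, size=n_vars)
    values = rng.integers(1, 101, size=(n_objs, n_vars))
    capacity = ceil(0.5 * weights.sum())
    return weights, values, capacity

weights, values, capacity = generate_mkp(n_vars=60, n_objs=3, seed=42)
print(weights[:5], values.shape, capacity)
```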
### Instrumenting SMAC
**As a labeling tool:** To generate EVOs for the learning-based models, we use SMAC [12]. In what follows, SmacI refers to the use of SMAC as a black-box optimizer that finds an EVO for a given training instance. Specifically, SmacI solves \(\tilde{w}_{j}^{e}=\arg\min_{\tilde{w}}\Gamma_{j}(\tilde{w})\) for each instance \(j\) in the training set; we obtain \(o_{j}^{e}\) by calculating variable scores - a dot product between \(\tilde{w}_{j}^{e}\) and the corresponding property values - and sorting variables in decreasing order of their score.
**As a baseline:** The other, more standard use of SMAC, which we refer to as SmacD, is as an algorithm configuration tool. To obtain a SmacD ordering, we use SMAC to solve \(\tilde{w}_{D}^{e}=\arg\min_{\tilde{w}}\mathbb{E}_{j\thicksim[J]}\left[\Gamma_{j}(\tilde{w})\right]\) and obtain an order \(o_{D_{j}}^{e}\) for instance \(j\) using the single property weight vector \(\tilde{w}_{D}^{e}\). The expectation of the run time here simply represents its average over the \(J\) training instances. Note that we get only one property weight vector for the entire dataset in the SmacD case rather than one per instance as in the SmacI case. However, we obtain an instance-specific VO when using the SmacD configuration as the underlying property values change across instances.
**Initialization:** For both uses of SMAC, we use the MinWt ordering as a warm-start by assigning all the property weights to zero except the weight property, which is set to -1. This reduces the need for extensive random exploration
| Type | Feature | Description | Count |
| --- | --- | --- | --- |
| Variable | Properties | The variable properties described in Table 1 | 7 |
| Variable | Rank | The ranks corresponding to the heuristic orderings described in Table 3 | 10 |
| Variable | Value SD | The standard deviation of the values | 1 |
| Context | # objectives | The number of objectives in the MKP problem | 1 |
| Context | # items | The number of items in the MKP problem | 1 |
| Context | Capacity | The capacity of the MKP | 1 |
| Context | Weight stats. | The mean, min., max. and std. of the MKP weights | 4 |
| Context | Aggregate value stats. | The mean, min., max. and std. of the values of each objective | 12 |
| Total count | | | 37 |

Table 2: Features for an MKP.
of the configuration space by providing a reasonably good ordering heuristic.
**Random seeds:** As SMAC is a randomized optimization algorithm, running it with multiple random seeds increases the odds of finding a good solution. We leverage this idea for SmacI as its outputs will be used as labels for supervised learning and are thus expected to be of very high quality, i.e., we seek instance-specific parameter configurations that yield variable orderings with minimal solution times. We run SmacD with a single seed and use its average run time on the training set as a target to beat for SmacI. Since the latter optimizes at the instance level, one would hope it can do better than the distribution-level configuration of SmacD.
As such, for SmacI, we run SMAC on each instance with 5 seeds for all sizes except (5, 40) and (6, 40), for which we used one seed. We start with one seed, compare the average run time of the best-performing seed per instance against the average enumeration time of SmacD on the training set, and relaunch SMAC with a new seed until a desired performance gap between SmacI and SmacD is achieved.
**Computational budget:** In the SmacD setting, we run SMAC with a 12-hour time limit, whereas in the SmacI case the time limit is set to 20 minutes per instance except for sizes (3, 60), (4, 50), (5, 40), for which it is set to 5 minutes per instance. In both settings, SMAC runs on a 4-core machine with a 20GB memory limit. Generating the labels can thus be computationally expensive. This dictates the choice of sizes in the set of instance sizes, \(\mathcal{S}\). Specifically, we select instance sets with an average running time of at most 100 seconds (not too hard) and at least 5 seconds (not too easy) using the top-down compilation method described in Bergman et al. (2021).
### Learning Models
We use linear regression, ridge regression, lasso regression, decision trees, and gradient-boosted trees (GBT) with mean-squared error loss to build size-specific pointwise ranking models. Similarly, we train support vector machines and GBT with pairwise-ranking loss to obtain pairwise ranking models for each size. In the experimental results that follow, the best learning-based method will be referred to as ML. This turns out to be GBT trained with pairwise-ranking loss. The GBT models that were selected achieved a Kendall's Tau ranging between 0.67 and 0.81 across all problem sizes on the validation set. The model selection follows the procedure mentioned in Phase 3 of the Methodology section.
In terms of features, we omit the context features in Table 2 when training linear size-specific models, as these features take on the same value for all variables of the same instance and thus do not contribute to the prediction of the rank of a variable. The context features are used with non-linear models such as decision trees and GBT.
We also train two additional GBT models - ML+A and ML+AC - with pairwise-ranking loss, on the _union of the datasets of all sizes_. In particular, ML+A is trained with only variable features, similar to ML, whereas ML+AC adds the instance context features to the variable features.
### Baselines
To evaluate the performance of learning-based orderings, we compare to four baselines (a short code sketch of the first three follows this list):
* **Lex** uses the (arbitrary) default variable ordering in which the instance was generated.
* **MinWt** orders the variables in increasing order of their weight values, \(w_{i}\). This is a commonly used heuristic for solving the single-objective knapsack problem.
* **MaxRatio** orders the variables in decreasing order of the property min-value-by-weight detailed in Table 1, which is defined as \(\min\{a_{i}^{p}\}_{p=1}^{P}/w_{i}\). This rule has an intuitive interpretation: it prefers variables with larger _worst-case_ (the minimum in the numerator) value-to-weight ratio. It is not surprising that this heuristic might perform well given that it is part of a \(\frac{1}{2}\)-approximation algorithm for the single-objective knapsack problem (Chekuri and Fox, 2009).
* **SmacD**, as described in an earlier paragraph. It produces one weight setting for the property scores per instance set. This baseline can be seen as a representative for the algorithm configuration paradigm recently surveyed by Schede et al. (2022).
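As referenced above, the three fixed baseline orderings can be written in a few lines (our own helper functions on illustrative data; SmacD requires running SMAC and is not reproduced here).

```python
import numpy as np

def lex_order(n):
    """Lex: the (arbitrary) default order in which the instance was generated."""
    return np.arange(n)

def minwt_order(weights):
    """MinWt: increasing item weight."""
    return np.argsort(weights)

def maxratio_order(weights, values):
    """MaxRatio: decreasing worst-case value-to-weight ratio, min_p a_i^p / w_i."""
    return np.argsort(-(values.min(axis=0) / weights))

weights = np.array([10, 40, 25, 5])
values = np.array([[30, 10, 50, 20], [20, 60, 10, 40]])
print(lex_order(4), minwt_order(weights), maxratio_order(weights, values))
```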
## Experimental Results
We examine our experimental findings through a series of questions that span the impact of VO on PF enumeration time (Q1), the performance of SmacI as a black-box optimizer that provides labels for ML (Q2), the performance of LEO on unseen test instances and comparison to the baselines (Q3, Q4), a feature importance analysis of the best ML models used by LEO (Q5), and an exploration of size-independent, unified ML models (Q6).
**Q1. Does variable ordering impact the Pareto frontier enumeration time?** To test the hypothesis that VO has an impact on the PF enumeration time, we compare the run time of ten heuristic orderings against the expected run time of random orderings. By the run time of an ordering, we mean the time taken to compute the PF on a BDD constructed using that particular ordering.
The heuristic orderings are constructed using the variable properties. For example, the heuristic ordering called max_avg-value-by-weight sorts the variables in descending order of the ratio of a variable's average value to its weight. We estimate the expected run time of a random ordering by sampling 5 variable orderings uniformly at random from all possible \(n!\) orderings and averaging their run time. Table 3 summarizes the results of this experiment. We work with 250 MKP instances of the problem sizes (3, 20), (3, 40), (3, 60), (3, 80), (5, 20), (5, 30), (5, 40), (7, 20), (7, 30), and (7, 40). The values in the table are the ratios of the average run time of a heuristic ordering to that of the 5 random orderings; values smaller than one indicate that a heuristic ordering is faster than a random one, on average. The best heuristic ordering for each size is highlighted.
First, it is clear that some heuristic orderings consistently outperform the random ones across all problem
sizes (min_weight, max_min-value), and by a significant margin. In contrast, some heuristic orderings are consistently worse than random. For example, the heuristic ordering max_weight, min_avg-value, and min_min-value consistently underperform when the number of variables is more than 20.
Second, the choice of MinWt as a baseline was motivated by the results of this experiment as min_weight wins on most sizes as highlighted in Table 3.
Altogether, this experiment validates the hypothesis that VO has an impact on the PF enumeration time and that efficient orderings exist which can significantly reduce this time.
Figure 4: Box plots of time to enumerate the PF for different sizes.
Figure 3: SmacI performance w.r.t. wallclock time across different sizes. For a given size, we first of all find the best seed, i.e., the seed which finds an incumbent property-weight having minimum PF enumeration time, for each instance. Then for a given wallclock time, we calculate the mean and standard error of the PF enumeration time across instances for the incumbent property-weight up to that wallclock time. The mean and standard error are plotted as a blue line and blue error band around it, respectively. The horizontal black line represents the average PF enumeration time of SmacD configuration on the validation set.
Figure 5: Performance profile of different methods in terms of the fraction of instances solved w.r.t. time for various sizes.
Figure 6: Cumulative number of intermediate solutions generated at a particular layer across all sizes.
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline Size & Method & Count & GMean & Min & Max \\ \hline \hline \multirow{4}{*}{(5, 40)} & ML & 100 & 1.43 & 1.26 & 40.91 \\ & ML+AC & 100 & 1.54 & 1.06 & 41.29 \\ & ML+A & 100 & **1.38** & 1.28 & 41.24 \\ & SmacD & 100 & 2.49 & 1.38 & 51.22 \\ & MaxRatio & 100 & 2.48 & 1.44 & 58.87 \\ & MinWt & 100 & 2.42 & 1.44 & 52.61 \\ & Lex & 100 & 6.23 & 1.86 & 109.18 \\ \hline \multirow{4}{*}{(6, 40)} & ML & 100 & 14.45 & 2.57 & 377.89 \\ & **ML+AC** & 100 & **13.62** & 2.01 & 427.80 \\ & ML+A & 100 & 13.67 & 2.29 & 355.64 \\ & SmacD & 100 & 18.62 & 2.95 & 434.34 \\ & MaxRatio & 100 & 19.84 & 3.05 & 664.88 \\ & MinWt & 100 & 18.82 & 2.91 & 427.06 \\ & Lex & 100 & 42.23 & 6.01 & 713.58 \\ \hline \multirow{4}{*}{(7, 40)} & ML & 100 & **41.97** & 2.26 & 1490.58 \\ & ML+AC & 99 & 45.39 & 2.27 & 1153.07 \\ & ML+A & 99 & 44.70 & 2.81 & 1130.82 \\ & SmacD & 99 & 64.84 & 2.95 & 1521.43 \\ & MaxRatio & 98 & 86.49 & 4.14 & 1700.44 \\ & MinWt & 99 & 60.08 & 3.16 & 1561.36 \\ & Lex & 95 & 102.23 & 7.66 & 1604.17 \\ \hline \multirow{4}{*}{(4, 50)} & ML & 100 & **5.75** & 2.54 & 79.42 \\ & ML+AC & 100 & 5.97 & 2.22 & 77.75 \\ & ML+A & 100 & 6.04 & 2.47 & 73.17 \\ & SmacD & 100 & 12.17 & 4.04 & 218.24 \\ & MaxRatio & 100 & 14.23 & 3.24 & 167.95 \\ & MinWt & 100 & 18.37 & 4.67 & 553.15 \\ & Lex & 100 & 32.25 & 6.67 & 395.67 \\ \hline \multirow{4}{*}{(3, 60)} & ML & 100 & 3.15 & 3.88 & 34.13 \\ & **ML+AC** & 100 & **3.07** & 3.44 & 31.99 \\ & ML+A & 100 & 3.23 & 3.56 & 31.14 \\ & SmacD & 100 & 4.07 & 3.53 & 43.58 \\ & MaxRatio & 100 & 9.24 & 5.76 & 64.21 \\ & MinWt & 100 & 11.61 & 5.36 & 75.98 \\ & Lex & 100 & 20.19 & 8.12 & 195.25 \\ \hline \multirow{4}{*}{(3, 70)} & ML & 100 & 15.05 & 7.92 & 64.55 \\ & **ML+AC** & 100 & **14.94** & 7.41 & 65.67 \\ & ML+A & 100 & 15.35 & 7.68 & 80.82 \\ & SmacD & 100 & 16.64 & 7.52 & 96.94 \\ & MaxRatio & 100 & 33.54 & 12.01 & 215.63 \\ & MinWt & 100 & 44.43 & 12.17 & 194.07 \\ & Lex & 100 & 71.55 & 19.87 & 308.21 \\ \hline \multirow{4}{*}{(3, 80)} & ML & 100 & 43.85 & 15.29 & 286.11 \\ & **ML+AC** & 100 & **42.89** & 15.43 & 262.57 \\ & ML+A & 100 & 44.64 & 15.35 & 262.77 \\ & SmacD & 100 & 53.33 & 15.66 & 632.92 \\ & MaxRatio & 100 & 100.31 & 29.64 & 741.82 \\ & MinWt & 100 & 129.40 & 30.35 & 794.10 \\ & Lex & 99 & 196.31 & 56.52 & 1332.19 \\ \hline \end{tabular}
\end{table}
Table 4: PF enumeration time evaluation results on the test set with a time limit of 1800 seconds. Each row represents aggregate statistics for a given instance “Size” and “Method”. “Count” stands for the number of instances on which the algorithm ran successfully without hitting the time or memory limits. “GMean”, “Min” and “Max” denotes the geometric mean, minimum and maximum of the enumeration time computed across “count” many instances. We use a shift of 5 to compute the “GMean”.
\begin{table}
\begin{tabular}{l l c c c} \hline \hline Size & Method & Nodes(\%) & Width(\%) & Checks(\%) & GMean(\%) \\ \hline \hline \multirow{4}{*}{(5, 40)} & ML & 81.46 & 88.13 & **36.28** & **22.91** \\ & SmacD & **69.49** & **79.11** & 56.15 & 39.95 \\ & MaxRatio & 88.80 & 94.10 & 52.49 & 39.75 \\ & MinWt & **69.49** & **79.11** & 56.15 & 38.76 \\ \hline \multirow{4}{*}{(6, 40)} & ML & 78.35 & 85.82 & **39.36** & **34.22** \\ & SmacD & **70.12** & **79.53** & 52.02 & 44.09 \\ & MaxRatio & 90.63 & 95.01 & 53.39 & 46.97 \\ & MinWt & **70.12** & **79.53** & 52.02 & 44.56 \\ \hline \multirow{4}{*}{(7, 40)} & ML & 81.21 & 88.70 & **56.99** & **41.06** \\ & SmacD & 71.18 & 80.18 & 79.12 & 63.43 \\ & MaxRatio & 72.11 & 81.37 & 100.93 & 84.61 \\ & MinWt & **70.11** & **79.52** & 69.13 & 58.77 \\ \hline \multirow{4}{*}{(4, 50)} & ML & 90.60 & 96.68 & **21.63** & **17.84** \\ & SmacD & 76.03 & 87.94 & 42.93 & 37.73 \\ & MaxRatio & 88.33 & 96.27 & 48.97 & 44.13 \\ & MinWt & **70.45** & **84.91** & 68.81 & 56.97 \\ \hline \multirow{4}{*}{(3, 60)} & ML & 93.11 & 98.36 & **20.18** & **15.58** \\ & SmacD & 91.70 & 98.81 & 26.14 & 20.14 \\ & MaxRatio & 86.32 & 96.42 & 48.45 & 45.76 \\ & MinWt & **68.68** & **86.68** & 62.06 & 57.51 \\ \hline \multirow{4}{*}{(3, 70)} & ML & 92.30 & 98.57 & **18.21** & **21.03** \\ & SmacD & 90.81 & 98.71 & 21.52 & 23.26 \\ \cline{1-1} & MaxRatio & 86.20 & 97.16 & 49.04 & 46.88 \\ \cline{1-1} & MinWt & **70.33** & **89.75** & 68.03 & 62.10 \\ \hline \multirow{4}{*}{(3, 80)} & ML & 91.09 & 98.69 & **19.94** & **22.34** \\ & SmacD & 92.34 & 99.27 & 27.77 & 27.17 \\ \cline{1-1} & MaxRatio & 84.45 & 97.34 & 57.17 & 51.10 \\ \cline{1-1} & MinWt & **68.79** & **90.58** & 74.72 & 65.91 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Analyzing the impact of BDD topology on the PF enumeration time on the test set. For a given “Size” and “Method”, the metric value indicates the percentage of mean performance by that “Method” compared to the Lex method. The “Nodes”
Given the fact that Lex performs the worst among all methods and the impact of BDD size on the downstream task, we expect the size of the BDDs generated by Lex to be bigger. This holds true across different sizes and methods, as it can be observed in Table 5 that the Nodes and Width values are below 100%.
Extrapolating, one would expect BDDs generated using ML orderings to be the smallest, as it achieves the best performance in terms of enumeration time. Counter-intuitively, that is not the case: MinWt generates the smallest-sized BDDs on average. For instance, the value of Nodes for size (3, 80) is 68.79% for MinWt, which is lower than 91.09% for ML. However, the reduction in size does not translate to improvements in the running time, as we already know that ML performs best in terms of time.
We can decipher the performance gains of ML by studying the "Checks" metric. This metric can be thought of as a proxy of the work done by the PF enumeration algorithm; indeed, this metric is positively correlated with Time. This phenomenon can be further studied in Figure 6, which shows the mean cumulative solutions generated up to a given layer in the BDD. Clearly, ML has the least number of intermediate solutions generated, which also translates to fewer Pareto-dominance checks and smaller Checks.
To summarize, smaller-sized BDDs can be efficient in enumerating the PF. However, reducing the BDD size beyond a certain threshold would adversely affect its performance as it leads to more intermediate solutions being generated, increasing the number of Pareto-dominance checks. This also validates the need for task-specific methods like LEO that are specifically geared towards reducing the run time rather than optimizing a topological metric of the BDD.
**Q5. How interpretable are the decisions made by LEO?** LEO uses GBT with the pairwise ranking loss for learning to order variables. To obtain feature importance scores, we count the number of times a particular feature is used to split a node in the decision trees. We then normalize these scores by dividing them by the maximum score, resulting in values in \([0,1]\) for each size.
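With XGBoost, these split-count importances can be read off the trained booster directly; a minimal sketch, assuming `model` is a fitted XGBoost estimator such as the ranker sketched earlier:

```python
def normalized_split_importance(model):
    """Per-feature split counts ("weight" importance in XGBoost), scaled to [0, 1]
    by dividing by the maximum count, as done for the feature-importance heatmap."""
    raw = model.get_booster().get_score(importance_type="weight")  # {feature: split count}
    max_count = max(raw.values())
    return {name: count / max_count for name, count in raw.items()}
```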
Figure 7 is a heatmap of feature importance scores for different sizes. We can note that the min-value-by-weight feature is important across all sizes, especially for cases with more than 3 objectives. In fact, the choice of VO heuristic MaxRatio was driven by feature importance scores and is a case in point of how learning-based methods can assist in designing good heuristics for optimization. Furthermore, the real-valued features are more important than the categorical rank features for problems with more than 3 objectives.
For problems with 3 objectives, the std-value feature is crucial. Also, the rank feature rk_max_avg-value-by-weight receives higher importance than some of the real-valued features. It is also interesting to observe that certain rank-based features are consistently ignored across all sizes.
In a nutshell, the heatmap of feature importance scores helps in interpreting the features that govern the decisions made by LEO, which also happens to be in alignment with a widely used heuristic to solve the single-objective knapsack problem.
**Q6. Can we train a single size-independent ML model?** We answer in the affirmative: in fact, the methods ML+A and ML+AC in Table 4 refer to two models trained on the union of all training datasets of different sizes. As described in the section "Learning Models", ML+A uses the same variable features as ML, whereas ML+AC adds the instance context features described in Table 2. These two models perform remarkably well, sometimes even outperforming the size-specific ML models that are being tested in-distribution.
The main takeaway here is that our ML model architecture, which is independent of the number of variables in an instance, enables the training of unified models that are size-independent. In turn, these models perform very well. Combined with the simple features and types of ML models we have used, this finding points to a great potential for the use of ML in multiobjective BDD problems. The appendix includes feature importance plots for both of these models. ML+AC seems to make great use of the context features in its predictions, as expected.
## Conclusion
LEO is the first machine learning framework for accelerating the BDD approach for multiobjective integer linear programming. We contribute a number of techniques and findings that may be of independent interest. For one, we have shown that variable ordering does impact the solution time of this approach. Our labeling method of using a black-box optimizer over a fixed-size parameter configuration space to discover high-quality variable orderings successfully bypasses the curse of dimensionality; the application of this
Figure 7: Heatmap of feature importance scores normalized across different sizes. The features are described in Table 2. The property-based features assume the same names as the properties detailed in Table 1. The rank-based features of a variable have a prefix “rk_” before the heuristic ordering; refer to Table 3 for more details. Finally, the “Value SD” feature from Table 2 is named std-value in this plot.
approach to other similar labeling tasks may be possible, e.g., for backdoor set discovery in MIP Khalil et al. (2022). An additional innovation is using size-independent ML models, i.e., models that do not depend on a fixed number of decision variables. This modeling choice enables the training of unified ML models, which our experiments reveal to perform very well. Through a comprehensive case study of the knapsack problem, we show that LEO can produce variable orderings that significantly reduce PF enumeration time.
There are several exciting directions for future work that can build on LEO:
* The BDD approach to multiobjective integer programming has been applied to a few problems other than knapsack (Bergman et al., 2021). Much of LEO can be directly extended to such other problems, assuming that their BDD construction is significantly influenced by the variable ordering. One barrier to such an extension is the limited availability of open-source BDD construction code. As the use of BDDs in optimization is a rather nascent area of research, it is not uncommon to consider a case study of a single combinatorial problem, as was done for example for Graph Coloring in Karahalios and van Hoeve (2022) and Maximum Independent Set in Bergman et al. (2012).
* Our method produces a _static_ variable ordering upfront of BDD construction. While this was sufficient to improve on non-ML orderings for the knapsack problem, it may be interesting to consider _dynamic_ variable orderings that observe the BDD construction process layer by layer and choose the next variable accordingly, as was done in Cappart et al. (2019).
* We have opted for rather interpretable ML model classes but the exploration of more sophisticated deep learning approaches may enable closing some of the remaining gap in training/validation loss, which may improve downstream solving performance.
* Beyond the exact multiobjective setting, extending LEO to a heuristic that operates on a _restricted_ BDD may provide an approximate PF much faster than full enumeration. We believe this to be an easy extension of our work.
|
2301.04590 | Kinetic equilibrium of two-dimensional force-free current sheets | Force-free current sheets are local plasma structures with field-aligned
electric currents and approximately uniform plasma pressures. Such structures,
widely found throughout the heliosphere, are sites for plasma instabilities and
magnetic reconnection, the growth rate of which is controlled by the
structure's current sheet configuration. Despite the fact that many kinetic
equilibrium models have been developed for one-dimensional (1D) force-free
current sheets, their two-dimensional (2D) counterparts, which have a magnetic
field component normal to the current sheets, have not received sufficient
attention to date. Here, using particle-in-cell simulations, we search for such
2D force-free current sheets through relaxation from an initial,
magnetohydrodynamic equilibrium. Kinetic equilibria are established toward the
end of our simulations, thus demonstrating the existence of kinetic force-free
current sheets. Although the system currents in the late equilibrium state
remain field aligned as in the initial configuration, the velocity distribution
functions of both ions and electrons systematically evolve from their initial
drifting Maxwellians to their final time-stationary Vlasov state. The existence
of 2D force-free current sheets at kinetic equilibrium necessitates future work
in discovering additional integrals of motion of the system, constructing the
kinetic distribution functions, and eventually investigating their stability
properties. | Xin An, Anton Artemyev, Vassilis Angelopoulos, Andrei Runov, Sergey Kamaletdinov | 2023-01-11T17:35:51Z | http://arxiv.org/abs/2301.04590v2 | # Kinetic equilibrium of two-dimensional force-free current sheets
###### Abstract
Force-free current sheets are local plasma structures with field-aligned electric currents and approximately uniform plasma pressures. Such structures, widely found throughout the heliosphere, are sites for plasma instabilities and magnetic reconnection, the growth rate of which is controlled by the structure's current sheet configuration. Despite the fact that many kinetic equilibrium models have been developed for one-dimensional (1D) force-free current sheets, their two-dimensional (2D) counterparts, which have a magnetic field component normal to the current sheets, have not received sufficient attention to date. Here, using particle-in-cell simulations, we search for such 2D force-free current sheets through relaxation from an initial, magnetohydrodynamic equilibrium. Kinetic equilibria are established toward the end of our simulations, thus demonstrating the existence of kinetic force-free current sheets. Although the system currents in the late equilibrium state remain field aligned as in the initial configuration, the velocity distribution functions of both ions and electrons systematically evolve from their initial drifting Maxwellians to their final time-stationary Vlasov state. The existence of 2D force-free current sheets at kinetic equilibrium necessitates future work in discovering additional integrals of motion of the system, constructing the kinetic distribution functions, and eventually investigating their stability properties.
## 1 Introduction
Current sheets are spatially localized plasma structures that play an essential role in various space plasma systems: solar flares (Syrovatskii, 1981; Parker, 1994; Fleishman & Pevtsov, 2018), solar wind turbulence (Servidio et al., 2011; Borovsky, 2010; Vasko et al., 2022), boundaries of planetary magnetospheres - magnetopauses (De Keyser et al., 2005), magnetotails of planets (Jackman et al., 2014; Achilleos, 2018; Lui, 2018) and comets (Cravens & Gombosi, 2004; Volwerk et al., 2018; Volwerk, 2018). Strong currents flowing within current sheets are subject to various instabilities resulting in magnetic field line reconnection, which converts magnetic energy to plasma heating and particle acceleration (e.g., Gonzalez & Parker, 2016; Birn & Priest, 2007).
The properties of magnetic reconnection and the dissipation rate of magnetic energy strongly depend on the current sheet configurations. The simplest magnetic field geometry is the one-dimensional (1D) current sheet, which is either a tangential discontinuity separating two plasmas with different properties or a rotational discontinuity having a finite magnetic field component, \(B_{n}\), normal to the current sheet. The stress balance in 1D tangential discontinuities is established by the gradients of plasma and magnetic field pressures [see Figure 1(a) and Allanson et al. (2015); Neukirch et al. (2020a,b)], whereas the stress balance in 1D rotational discontinuities requires a contribution from the plasma dynamic pressure in order to balance the magnetic field line tension force \(\propto B_{n}\) [see Figure 1(b) and Hudson (1970)].
In two-dimensional (2D) current sheets, \(B_{n}\neq 0\) naturally appears (Schindler, 2006); thus, the stress balance requires 2D plasma pressure gradients [see Figure 1(c) and Yoon and Lui (2005)] or 2D dynamic pressure gradients [see Figure 1(d) and Birn (1992); Nickeler and Wiegelmann (2010); Cicogna and Pegoraro (2015)].
Although there are many types of current sheet configurations, they can all be categorized by two plasma parameters: the Alfven Mach number \(M_{A}\) (i.e., the ratio of plasma flow speed to Alfven speed) and the plasma beta \(\beta\) (i.e., the ratio of plasma thermal pressure to magnetic field pressure). Current sheet configurations from different space environments are located within different domains in this \((\beta,M_{A})\) space. Tangential discontinuities (\(B_{n}=0\)) do not require contributions from the plasma dynamic pressure. These discontinuities exist in systems with either large \(\beta\) (if the plasma pressure balances the magnetic field pressure) or small \(\beta\) [if the current sheet configuration is magnetically force free without cross-field currents, \(\mathbf{J}\times\mathbf{B}=0\); see Figure 1(a)]. Such current sheets are often observed in the solar wind (see discussion in Neugebauer, 2006; Artemyev et al., 2019), where they propagate with plasma flows and have \(M_{A}\approx 0\) in the current sheet reference frame (see examples of kinetic models of such current sheet configurations in de Keyser et al., 1996; Harrison and Neukirch, 2009; Allanson et al., 2016; Neukirch et al., 2020a). Rotational discontinuities are also observed in the solar wind (\(\beta\sim 1\)) and are characterized by plasma flow gradients with \(M_{A}\sim 1\) in the current sheet reference frame [de Keyser et al. (e.g., 1997); Haaland et al. (e.g., 2012); Paschmann et al. (e.g., 2013); Artemyev et al. (e.g., 2019); see Figure 1(b)]. Figure 1(e) shows the parameter regime of such current sheets in \((\beta,M_{A})\) space, whereas Figure 2(a) shows an example of a low-\(\beta\), force-free current sheet in the solar wind.
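For reference, the two classification parameters can be written explicitly in the Gaussian units used in Figure 1, with \(P\) the plasma thermal pressure, \(\rho\) the plasma mass density, \(V\) the bulk flow speed in the current sheet reference frame, and \(B\) the magnetic field magnitude:
\[\beta=\frac{8\pi P}{B^{2}},\qquad M_{A}=\frac{V}{V_{A}},\qquad V_{A}=\frac{B}{\sqrt{4\pi\rho}}.\]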
Current sheets in the shocked (magnetosheath) solar wind are characterized by lower \(M_{A}\) [Figure 1(e)] compared to that of the solar wind. The stress balance in these current sheets is significantly influenced by plasma pressure gradients, i.e., compressional current sheets with both parallel and transverse current density components (e.g., Chaston et al., 2020; Webster et al., 2021). Another \((\beta,M_{A})\) domain, with similar \(M_{A}\) and larger \(\beta\), is found in Earth's distant magnetotail, where strong plasma flows may reach \(M_{A}\geq 1\) and \(\beta\in[10,100]\)(e.g., Vasko et al., 2015; Artemyev et al., 2017). These current sheets are mostly balanced by plasma pressure gradients. However, in the distant magnetotail and for \(\beta\lesssim 10\) there have been previous observations of force-free current sheets with \(\mathbf{J}\times\mathbf{B}=0\)(Xu et al., 2018). Figure 1(e) shows the parameter domain of distant magnetotail current sheets, whereas Figure 2(b) shows an example of an almost force-free current sheet observed in the distant magnetotail. Both the magnetosheath and distant magnetotail current sheet configurations are characterized by a small but finite \(B_{n}\), and thus the magnetic field line tension force \(\propto B_{n}\) may be balanced by weak plasma flows or by plasma anisotropy. There is a series of models on such quasi-1D
current sheets, with \(B_{n}\neq 0\), that describe both large-\(\beta\)(see Burkhart et al., 1992; Sitnov et al., 2000, 2006; Mingalev et al., 2007; Zelenyi et al., 2011) and force-free (Artemyev, 2011; Mingalev et al., 2012; Vasko et al., 2014) current sheets.
The distant magnetotail (\(\beta,M_{A}\)) domain smoothly extends toward lower \(M_{A}\) as the radial distance from Earth decreases. Near-Earth magnetotail current sheets are characterized by
Figure 1: Panels (a)-(d) show magnetic field lines and the main components of stress balance for different current sheet configurations. The coordinate system consists of the \(l\) component along the main magnetic field direction, reversing sign at the current sheet neutral plane (\(B_{l}=0\)), the \(m\) component along the main current density direction corresponding to variations of \(B_{l}\), and the \(n\) (normal) component along the spatial gradient of \(B_{l}\) (i.e., \(4\pi j_{m}/c=\partial B_{l}/\partial r_{n}\)). Panel (e) shows the typical parameter regimes of current sheets observed by THEMIS and ARTEMIS (Angelopoulos, 2008, 2011) in the solar wind (blue), Earth’s magnetosheath (shocked solar wind; black), near-Earth magnetotail (red), and lunar-distance magnetotail (green). The black dashed box defines the parameter regime where 2D force-free current sheets are expected. Our dataset includes \(\sim 300\) solar wind current sheets, \(\sim 100\) magnetosheath current sheets, \(\sim 100\) current sheets in the near-Earth magnetotail, and \(\sim 500\) current sheets in the lunar-distance magnetotail. All current sheets were identified from THEMIS and ARTEMIS fluxgate magnetometer measurements (Auster et al., 2008). The selection procedure and data processing for solar wind current sheets are described in Artemyev et al. (2019): ion temperatures are calculated using the OMNI dataset (King and Papitashvili, 2005; Artemyev et al., 2018), plasma flows in the current sheet reference frame are calculated as the change of the most variable flow component across the current sheet (see details in Artemyev et al., 2019). The same criteria and data processing methods were applied to the magnetosheath (dayside) current sheets, but without using the OMNI data because THEMIS accurately measures magnetosheath ion temperatures in that region (McFadden et al., 2008). The dataset of the near-Earth magnetotail current sheets (radial distances \(\sim 10-30\) Earth radii) is described in Artemyev et al. (2016). The same criteria and data processing methods were applied to the lunar-distance current sheets. Details of the calculation of \(\beta\) and \(M_{A}\) are described in Artemyev et al. (2019, 2017).
slightly higher \(\beta>100\) and are quantified by the plasma pressure contribution to the stress balance (Runov et al., 2006; Artemyev et al., 2011; Petrukovich et al., 2015). The important difference between solar wind/magnetosheath and magnetotail current sheets is their magnetic field line configuration; solar wind current sheets may be considered as 1D discontinuities, whereas the magnetotail current sheets are 2D [Figures 1(a-d)]. Therefore, current sheet models with \(B_{n}\neq 0\) and low \(M_{A}\) should include 2D plasma pressure gradients balancing the magnetic field line tension force (see examples of such models in Schindler & Birn, 2002; Birn et al., 2004; Yoon & Lui, 2005; Sitnov & Merkin, 2016).
In the \((\beta,M_{A})\) space, the large-\(\beta\) domain can be described by kinetic models of current sheets with \(B_{n}=0\) (1D models) and \(B_{n}\neq 0\) (2D models with plasma pressure gradients). Conversely, the low-\(\beta\) domain with dominant field-aligned currents (force-free current sheets) can be described only by 1D kinetic models with \(B_{n}=0\). Large \(M_{A}\), low-\(\beta\) (1D rotational discontinuities with \(B_{n}\neq 0\)) and low \(M_{A}\), low-\(\beta\) (2D force-free current sheet) domains have been analyzed only by fluid models (e.g., Cowley, 1978; Hilmer & Voigt, 1987; Tassi et al., 2008; Lukin et al., 2018). The class of such 2D force-free current sheets is not limited to observations in the solar wind and distant Earth's
Figure 2: Four examples of force-free current sheets in (a) solar wind, (b) lunar-distance magnetotail, (c) Martian magnetotail, and (d) Jovian magnetotail. The top panels show the magnetic field in the local coordinate system (Sonnerup & Cahill, 1968). Grey curves show the magnetic field magnitudes; \(B\approx\) constant indicates force-free current sheets. Middle panels show electron and ion \(\beta\) profiles. Bottom panels show profiles of the field-aligned currents with an indication of the dominant ion species and estimates of the current sheet thickness in units of the ion inertial length. For the solar wind and Earth’s lunar-distance magnetotail, we used ARTEMIS magnetic field (Auster et al., 2008) and plasma (McFadden et al., 2008) measurements (see details of the data processing procedure in Artemyev et al., 2019). For the Martian magnetotail, we used MAVEN magnetic field (Connerney et al., 2015) and plasma (McFadden et al., 2015; Halekas et al., 2015) measurements (see details of the data processing procedure in Artemyev et al., 2017). For the Jovian magnetotail, we used Juno magnetic field (Connerney et al., 2017, 2017) and plasma (McComas et al., 2017; Kim et al., 2020) measurements (see details of the data processing procedure in Artemyev et al., 2023).
magnetotail, but also includes current sheets observed in the cold plasma of the Martian magnetotail (e.g., DiBraccio et al., 2015; Artemyev et al., 2017) and in the low-density Jovian magnetotail (e.g., Behannon et al., 1981; Artemyev et al., 2014). Figures 2(c,d) show examples of force-free current sheets observed in the Martian and Jovian magnetotails. There are not enough known integrals of motion to describe the distribution functions of charged particles in such current sheet configurations (see discussion in Lukin et al., 2022). Consequently, the absence of kinetic models of force-free current sheets with \(B_{n}\neq 0\) significantly hinders the analysis of their stability (the process responsible for magnetic reconnection onset) and of particle acceleration.
In this study, we develop a 2D kinetic force-free current sheet model. In Section 2, we describe a magnetohydrodynamic (MHD) model of 2D force-free current sheets. In Section 3, we initialize self-consistent, particle-in-cell (PIC) simulations using currents and magnetic fields from the MHD equilibrium. We load particle distributions using drifting Maxwellians, which initially do not necessarily satisfy the time-stationary Vlasov equation. In Section 4, we obtain kinetic equilibria by evolving these particle distribution functions using the self-consistent PIC simulations and demonstrate the existence of kinetic equilibria for 2D force-free current sheets. We describe the equilibrium distribution functions and the difference between ion- and electron-dominated 2D current sheets. In Section 5, we summarize the results and discuss their applications.
## 2 MHD Equilibria of Force-Free Current Sheets
We consider the general case of 2D force-free equilibrium in which all quantities vary with coordinates \(x\) and \(z\) only (i.e., \(\partial/\partial y=0\)), and, as is customary in this definition, the pressure gradient force contribution to the force balance is negligible, i.e., \(\nabla P\sim 0\). The divergence-free magnetic field has the form (e.g., Low & Wolfson, 1988)
\[\mathbf{B}=\left(-\frac{\partial A}{\partial z},B_{y},\frac{\partial A}{ \partial x}\right), \tag{1}\]
where \(A\mathbf{e}_{y}\) is the vector potential in the \(y\) direction, and \(B_{y}\) is the magnetic field in the \(y\) direction. The current density is
\[\mathbf{J}=\frac{c}{4\pi}\nabla\times\mathbf{B}=\frac{c}{4\pi}\left(-\frac{ \partial B_{y}}{\partial z},-\nabla^{2}A,\frac{\partial B_{y}}{\partial x} \right), \tag{2}\]
where \(c\) is the speed of light, and \(\nabla^{2}=\partial^{2}/\partial x^{2}+\partial^{2}/\partial z^{2}\) is the Laplacian. The force-free condition, \(\mathbf{J}\times\mathbf{B}=0\), is equivalent to \(\mathbf{J}=\alpha\mathbf{B}\), where \(\alpha=\alpha(x,z)\) is a scalar function. Comparing Equations (1) and (2), the force-free condition requires that \(B_{y}\) is a function of \(A\) only and that \(A\) satisfies
\[\nabla^{2}A+B_{y}(A)\frac{\mathrm{d}B_{y}}{\mathrm{d}A}=0, \tag{3}\]
and the coefficient \(\alpha\) is \((c/4\pi)\mathrm{d}B_{y}/\mathrm{d}A\). Once \(B_{y}(A)\) is given, we can solve Equation (3) for \(A(x,z)\) with appropriate boundary conditions.
As shown in Figure 3, in balancing force-free current sheets with low thermal and dynamic pressures (e.g., low \(\beta\)/low Mach number current sheets in the solar wind, magnetosheath, and lunar-distance magnetotail; see Figure 1), the magnetic pressure \(B_{y}^{2}\) plays a role analogous to the thermal pressure. Motivated by the functional form of pressure \(p\propto\exp\left[2A/(\lambda B_{0})\right]\) describing thermal-pressure balanced current sheets (Lembege & Pellat, 1982), we adopt
\[B_{y}(A)=B_{0}\exp\left(\frac{A}{\lambda B_{0}}\right), \tag{4}\]
for force-free current sheets, so that Equation (3) describing magnetic field lines in the \(x\)-\(z\) plane (i.e., contours of \(A\)) resembles that of the Lembege-Pellat current sheet (e.g., Tassi et al., 2008; Lukin et al., 2018):
\[\nabla^{2}A=-\frac{B_{0}}{\lambda}\exp\left(\frac{2A}{\lambda B_{0}}\right), \tag{5}\]
where \(\lambda\) is the current sheet thickness, and \(B_{0}\) is the magnetic field at large \(|z|\).
Because the scale length along current sheets is much larger than that across current sheets, we assume \(A=A(\varepsilon x,z)\) to be weakly nonuniform in the \(x\) direction (\(\varepsilon\) being a small parameter). Equation (5) can be approximated as
\[\frac{\partial^{2}A}{\partial z^{2}}=-\frac{B_{0}}{\lambda}\exp\left(\frac{2A }{\lambda B_{0}}\right), \tag{6}\]
which is accurate up to order \(\varepsilon\). The boundary condition is
\[\left.\frac{\partial A}{\partial z}\right|_{z=0}=0,\hskip 14.226378ptA\big{|}_{ z=0}=\varepsilon B_{0}x, \tag{7}\]
which is equivalent to \(B_{x}(z=0)=0\) and \(B_{z}(z=0)=\varepsilon B_{0}\). The solution of \(A\) is
\[A=-\lambda B_{0}\ln\left[\cosh\left(\frac{z}{\lambda F(x)}\right)\cdot F(x) \right], \tag{8}\]
where \(F(x)=\exp\left(-\varepsilon x/\lambda\right)\). Thus the three components of the magnetic field are
\[B_{x} =B_{0}\tanh\left(\frac{z}{\lambda F(x)}\right)F^{-1}(x), \tag{9}\] \[B_{y} =B_{0}\left[\cosh\left(\frac{z}{\lambda F(x)}\right)\cdot F(x) \right]^{-1},\] \[B_{z} =\varepsilon B_{0}\left[1-\frac{z}{\lambda F(x)}\tanh\left(\frac{ z}{\lambda F(x)}\right)\right],\]
Figure 3: Force balance of current sheets in different space environments. At one end of the spectrum, representing current sheets in the near-Earth magnetotail, magnetic tension force \(J_{y}B_{z}/c\) and magnetic pressure force \(J_{y}B_{x}/c\approx\partial(B_{x}^{2}/8\pi)/\partial z\) are balanced by thermal pressure gradients \(\partial p/\partial x\) and \(\partial p/\partial z\), respectively. At the other end of the spectrum, representing current sheets in the solar wind and lunar-distance magnetotail, \(J_{y}B_{z}/c\) and \(J_{y}B_{x}/c\) are balanced by shears of \(B_{y}^{2}\), i.e., \(\partial(B_{y}^{2}/8\pi)/\partial x\) and \(\partial(B_{y}^{2}/8\pi)/\partial z\), respectively. There may be a continuous spectrum of current sheets between these two ends (Yoon et al., 2023), in which \(J_{y}B_{z}/c\) and \(J_{y}B_{x}/c\) are balanced by a mixture of thermal pressure gradients and shears of \(B_{y}^{2}\).
where \(B_{z}\neq 0\) describes rotational discontinuities. The current density is
\[\mathbf{J}=\alpha\mathbf{B}=\frac{c\mathbf{B}}{4\pi\lambda}\left[\cosh\left(\frac {z}{\lambda F(x)}\right)\cdot F(x)\right]^{-1}. \tag{10}\]
The direction of the magnetic field rotates from pointing in \(+x\) in the \(z>0\) half-space (\(z/\lambda\gg 1\)) to pointing in \(+y\) around the neutral (equatorial) plane (\(-1\lesssim z/\lambda\lesssim 1\)), and further to pointing in \(-x\) in the \(z<0\) half-space (\(z/\lambda\ll-1\)). The magnetic field magnitude, \((B_{x}^{2}+B_{y}^{2}+B_{z}^{2})^{1/2}=B_{0}[F^{-1}(x)+\mathcal{O}(\varepsilon)]\), depends only weakly on \(x\) and is roughly constant at a given \(x\) location. The current density magnitude, \((J_{x}^{2}+J_{y}^{2}+J_{z}^{2})^{1/2}\propto\cosh^{-1}[z/(\lambda F(x))]\), is concentrated within a layer \(|z|<\lambda F(x)\).
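As a quick sanity check of Equations (9) and (10), the analytic field can be evaluated numerically and the residual Lorentz force compared with \(|\mathbf{J}||\mathbf{B}|\). The short Python sketch below is an illustration only (not the code used for the simulations); the grid and parameter values simply mirror the normalized setup described later, and \(\mathbf{J}\) is estimated by finite differences.

```python
# Illustrative check of Eqs. (9)-(10); not the simulation code.
# B0, lam, eps are the normalized field, thickness, and B_z/B_0 (lam = 2, eps = 0.04 as in Section 3).
import numpy as np

B0, lam, eps = 1.0, 2.0, 0.04

def F(x):
    return np.exp(-eps * x / lam)

def B_field(x, z):
    u = z / (lam * F(x))
    return np.stack([B0 * np.tanh(u) / F(x),               # B_x
                     B0 / (np.cosh(u) * F(x)),              # B_y
                     eps * B0 * (1.0 - u * np.tanh(u))])    # B_z

x = np.linspace(-64.0, 0.0, 513)
z = np.linspace(-8.0, 8.0, 257)
X, Z = np.meshgrid(x, z, indexing="ij")
B = B_field(X, Z)

# J is proportional to curl(B) with d/dy = 0 (Eq. 2); the c/4*pi factor cancels in the ratio below.
dx, dz = x[1] - x[0], z[1] - z[0]
J = np.stack([-np.gradient(B[1], dz, axis=1),
              np.gradient(B[0], dz, axis=1) - np.gradient(B[2], dx, axis=0),
              np.gradient(B[1], dx, axis=0)])

residual = np.linalg.norm(np.cross(J, B, axis=0), axis=0)
scale = np.linalg.norm(J, axis=0) * np.linalg.norm(B, axis=0)
# Should be small (of order eps or below, up to finite-difference error), i.e. nearly force free.
print("max |J x B| / (|J| |B|) =", (residual / (scale + 1e-30)).max())
```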
## 3 Computational Setup
Searching for kinetic equilibria of 2D force-free current sheets, we initialize our simulated current sheets from the results of an MHD equilibrium. Importantly, this may not satisfy the time-stationary Vlasov equation because the initial particle distributions are simply drifting Maxwellians. We simulate their relaxation toward a certain kinetic equilibrium (if any) based on a massively parallel PIC code (Pritchett, 2001, 2005). Our simulations have two dimensions \((x,z)\) in configuration space and three dimensions \((v_{x},v_{y},v_{z})\) in velocity space. The results are presented in normalized units: magnetic fields are normalized to \(B_{0}\), lengths to the ion inertial length \(d_{i}=c/(4\pi n_{0}e^{2}/m_{i})^{1/2}\), time to the reciprocal of the ion gyrofrequency \(\omega_{ci}^{-1}=m_{i}c/(eB_{0})\), velocities to the Alfven velocity \(v_{\rm A}=B_{0}/(4\pi n_{0}m_{i})^{1/2}\), electric fields to \(v_{\rm A}B_{0}/c\), and energies to \(m_{i}v_{\rm A}^{2}\). Here \(n_{0}\) is the plasma density, \(m_{i}\) is the ion mass, and \(e\) is the elementary charge. The computational domain is \([-64\leq x/d_{i}\leq 0]\times[-8\leq z/d_{i}\leq 8]\) with a cell length \(\Delta x=d_{i}/32\). The time step is \(\Delta t=0.001\,\omega_{ci}^{-1}\). The ion-to-electron mass ratio is \(m_{i}/m_{e}=100\). The normalized speed of light is \(c/v_{\rm A}=20\), which gives the ratio of electron plasma frequency to electron gyrofrequency \(\omega_{pe}/\omega_{ce}=2\). The reference density \(n_{0}\) is represented by 1929 particles. The total number of particles is \(1.6\times 10^{9}\) including ions and electrons in each simulation.
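As a small worked check of the normalization (illustrative only), the quoted ratio \(\omega_{pe}/\omega_{ce}=2\) follows directly from the chosen \(c/v_{\rm A}=20\) and \(m_{i}/m_{e}=100\), since \(\omega_{pe}/\omega_{ce}=(c/v_{\rm A})\sqrt{m_{e}/m_{i}}\):

```python
# Consistency check of the normalized parameters:
# omega_pe / omega_ce = (c / v_A) * sqrt(m_e / m_i).
import math

c_over_vA, mi_over_me = 20.0, 100.0
print(c_over_vA * math.sqrt(1.0 / mi_over_me))  # 2.0, matching the value quoted in the text
```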
Figure 4 shows the initial magnetic field configuration and current density, which are determined by Equations (9) and (10) using the input parameters \(\varepsilon=0.04\) and \(\lambda=2d_{i}\). There are a few degrees of freedom in choosing the initial particle distribution functions: (1) While the current density is known a priori, the proportion of ion and electron currents is left unspecified; (2) Electron and ion temperatures and plasma \(\beta\) vary across different space environments (e.g., Figure 2); (3) Specific forms of the velocity distribution functions are left undetermined. To this end, we choose initial electron and ion distribution functions as drifting Maxwellians
\[f_{s}(\mathbf{v},x,z)=\frac{n_{0}}{(2\pi T_{s}/m_{s})^{3/2}}\exp\left[-\frac{ m_{s}\left(\mathbf{v}-\mathbf{v}_{ds}(x,z)\right)^{2}}{2T_{s}}\right], \tag{11}\]
where the subscript \(s=i,e\) represents ions and electrons, \(\mathbf{v}_{ds}(x,z)\) is the position-dependent drift velocity, and \(T_{s}\) is the temperature. Note that \(T_{i}\), \(T_{e}\), and \(n_{0}\) are constants throughout the domain so that the initial plasma pressure \(n_{0}(T_{i}+T_{e})\) is a constant, as required by the force-free condition. The simulations allow either ions or electrons to be the main current carrier, while keeping the relative drift between them commensurate with the current density [Figures 4(c), 4(d), and 4(e)]. Additionally, we vary particle temperatures to search for kinetic equilibrium in different plasma beta regimes, where
the plasma beta is \(\beta=2(T_{i}+T_{e})/m_{i}v_{\rm A}^{2}\). The detailed parameters for the particle distribution functions in the six runs are listed in Table 1. These parameters cover the \(\beta\) and \(T_{i}/T_{e}\) ranges of force-free current sheets observed in the solar wind and planetary magnetotails (see Figure 2).
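For illustration, a minimal loader for the drifting Maxwellians of Equation (11) in normalized units could look as follows. This is a sketch and not the actual PIC initialization: the drift value passed in is a placeholder, whereas in the simulations the drift is position dependent and set by the current density of Equation (10) divided by \(n_{0}e\).

```python
# Sketch of sampling particle velocities from Eq. (11) in normalized units
# (velocities in v_A; beta_s = 2 T_s / (m_i v_A^2), so sqrt(T_s / m_s) = sqrt(0.5 beta_s m_i / m_s) v_A).
import numpy as np

rng = np.random.default_rng(0)

def sample_drifting_maxwellian(n_particles, beta_s, ms_over_mi, v_drift):
    """v_drift: (3,) drift velocity in units of v_A; ms_over_mi = m_s / m_i."""
    v_thermal = np.sqrt(0.5 * beta_s / ms_over_mi)
    return np.asarray(v_drift) + v_thermal * rng.standard_normal((n_particles, 3))

# Run 2A-like parameters: beta_i = 5/6, beta_e = 1/6, m_e/m_i = 1/100.
# The drift [0, 0.1, 0] is a placeholder for the local J / (n_0 e) of Eq. (10).
v_i = sample_drifting_maxwellian(100_000, 5 / 6, 1.0, [0.0, 0.1, 0.0])
v_e = sample_drifting_maxwellian(100_000, 1 / 6, 0.01, [0.0, 0.0, 0.0])
print(v_i.std(axis=0), v_e.std(axis=0))  # thermal spreads ~0.65 v_A (ions) and ~2.9 v_A (electrons)
```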
In our simulations, the electromagnetic fields are advanced in time by integrating Maxwell's equations using a leapfrog scheme. These fields are stored on the Yee grid. The relativistic equations of motion for ions and electrons are integrated in time using a leapfrog scheme with a standard Boris push for velocity update. The conservation of charge is ensured by applying a Poisson correction to the electric fields (Marder 1987; Langdon 1992).
For particles crossing the \(x\) boundaries, we take advantage of the symmetry between \(z>0\) and \(z<0\) [Figure 4], so that particles exiting the system at a location \(z\) with velocity \((v_{x},v_{y},v_{z})\) are reinjected into the system at the conjugate location \(-z\) with velocity \((-v_{x},v_{y},v_{z})\). This is equivalent to an open boundary condition for particles because the injected particle distribution matches that at one cell interior to the boundary at all simulation times. At the \(z\) boundaries, particles striking the boundary are reflected into the system with \(v_{z}=-v_{z}\).
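A simplified sketch of these particle boundary rules (schematic and assumed; the production code performs the same operations on the full particle arrays) is given below.

```python
# Schematic particle boundary handling: symmetric reinjection at the x boundaries
# and reflection at the z boundaries. pos columns are (x, z); vel columns are (vx, vy, vz).
import numpy as np

XMIN, XMAX, ZMAX = -64.0, 0.0, 8.0

def apply_particle_boundaries(pos, vel):
    # x boundaries: a particle leaving at (x, z) with (vx, vy, vz) is reinjected
    # at the conjugate location -z with velocity (-vx, vy, vz).
    out_x = (pos[:, 0] < XMIN) | (pos[:, 0] > XMAX)
    pos[out_x, 0] = np.clip(pos[out_x, 0], XMIN, XMAX)
    pos[out_x, 1] *= -1.0
    vel[out_x, 0] *= -1.0

    # z boundaries: specular reflection with vz -> -vz.
    out_z = np.abs(pos[:, 1]) > ZMAX
    pos[out_z, 1] = np.sign(pos[out_z, 1]) * (2.0 * ZMAX - np.abs(pos[out_z, 1]))
    vel[out_z, 2] *= -1.0
    return pos, vel
```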
Figure 4: MHD equilibrium of a force-free current sheet. (a) Magnetic field in the \(x\) direction \(B_{x}\). (b) Magnetic field in the \(y\) direction \(B_{y}\). (c) Current density in the \(x\) direction \(J_{x}\). (d) Current density in the \(y\) direction \(J_{y}\). (e) Current density in the parallel direction \(J_{\parallel}\).
For fields at the \(x\) boundaries, the guard values of the tangential magnetic fields are determined by
\[\delta B^{n}_{y,g}=\delta B^{n}_{y,i1},\hskip 14.226378pt\delta B^{n}_{z,g}= \delta B^{n-1}_{z,i1}(2-\Delta x/c\Delta t)+\delta B^{n-2}_{z,i2}(\Delta x/c \Delta t-1), \tag{12}\]
where the superscript indicates the time level, and the subscripts \(g,i1,i2\) indicate the guard point, first interior point, and second interior point, respectively. This boundary condition for \(\delta B_{z}\) ensures that the magnetic flux can freely cross the \(x\) boundaries (Pritchett, 2005). The guard values of the normal electric field \(\delta E_{x}\) at the \(x\) boundaries are determined by \(\delta E^{n}_{x,g}=\delta E^{n}_{x,i1}\). Similarly, at the \(z\) boundaries, the two components of the tangential magnetic fields in the guard point are determined by \(\delta B^{n}_{x,g}=\delta B^{n}_{x,i1}\) and \(\delta B^{n}_{y,g}=\delta B^{n}_{y,i1}\), and the normal electric field \(\delta E_{z}\) in the guard point is determined by \(\delta E^{n}_{z,g}=\delta E^{n}_{z,i1}\). Unlike driven simulations that add magnetic flux to the simulation box, no external \(E_{y}\) is applied at the \(z\) boundaries.
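As a concrete (simplified, assumed) reading of Equation (12), the guard-cell update at an \(x\) boundary can be written as:

```python
# Guard-cell update of Eq. (12) for the tangential field perturbations at an x boundary.
# Index 0 is the guard point; indices 1 and 2 are the first and second interior points.
# dBz_nm1 and dBz_nm2 are the stored arrays from time levels n-1 and n-2.
def update_x_guard(dBy_n, dBz_n, dBz_nm1, dBz_nm2, dx, dt, c):
    r = dx / (c * dt)
    dBy_n[0] = dBy_n[1]                                           # dB_y,g^n  = dB_y,i1^n
    dBz_n[0] = dBz_nm1[1] * (2.0 - r) + dBz_nm2[2] * (r - 1.0)    # Eq. (12)
    return dBy_n, dBz_n
```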
## 4 Results
We track the evolution of the initially force-free current sheets in the six runs up to \(t=180\,\omega_{ci}^{-1}\), at which time the macroscopic states (e.g., electric and magnetic fields, currents, pressures) of the system are quasi-stationary. Below we examine if the configurations satisfy \(\mathbf{J}\times\mathbf{B}=0\) at the end of the simulations. We further investigate how the particle velocity distributions deviate from the initial Maxwellians and whether or not the system reaches a kinetic equilibrium.
### Initially ion-dominated current sheets
When ions initially carry all of the (field-aligned) currents (Runs 1A, 2A, and 3A), electrons are accelerated by transient parallel electric fields to form field-aligned currents [e.g., Figure 5(b)], while ions are slightly decelerated by such electric fields [e.g., comparing Figures 5(a) and 4(e)]. These electron currents can be a significant fraction (e.g., \(\gtrsim 1/3\)) of the ion currents in the off-equatorial region (\(1<|z|<5\)). Almost all currents remain field aligned [see \(J_{\perp}\sim 0\) in Figure 5(c)], implying that a force-free configuration may indeed be obtained.
During the relaxation of ion-dominated current sheets, electrostatic fields are generated and remain present in the late equilibrium states. These electrostatic fields are perpendicular to the magnetic field: at \(|z|>0\), they point away from the equatorial plane [Figure 6(a)]; near the equator, they point in the \(-x\) direction (i.e., tailward) [Figure 6(b)]. Here the \(E_{x}\) component is much weaker than the \(E_{z}\) component. The electrostatic fields arise due to the decoupling of the unmagnetized ions
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline Run NO. & \(\mathbf{v}_{di}\) & \(\mathbf{v}_{de}\) & \(\beta_{i}=2T_{i}/m_{i}v_{\rm A}^{2}\) & \(\beta_{e}=2T_{e}/m_{i}v_{\rm A}^{2}\) & \(\beta=\beta_{i}+\beta_{e}\) \\ \hline
1A & \(\mathbf{J}/(n_{0}e)\) & 0 & 0.05 & 0.05 & 0.1 \\
1B & 0 & \(-\mathbf{J}/(n_{0}e)\) & 0.05 & 0.05 & 0.1 \\ \hline
2A & \(\mathbf{J}/(n_{0}e)\) & 0 & 5/6 & 1/6 & 1 \\
2B & 0 & \(-\mathbf{J}/(n_{0}e)\) & 5/6 & 1/6 & 1 \\ \hline
3A & \(\mathbf{J}/(n_{0}e)\) & 0 & 25/3 & 5/3 & 10 \\
3B & 0 & \(-\mathbf{J}/(n_{0}e)\) & 25/3 & 5/3 & 10 \\ \hline \end{tabular}
\end{table}
Table 1: Parameters for particle distribution functions in the six PIC runs. The digit in Run NO., ‘1’, ‘2’, or ‘3’, denotes the plasma beta, \(\beta=0.1,1,10\), respectively. The letter in Run NO., ‘A’ or ‘B’, denotes whether the current carriers are ions or electrons, respectively. The current density \(\mathbf{J}\) comes from Equation (10) and is visualized in Figures 4(c) and 4(d).
and magnetized electrons (Schindler et al., 2012), which can be derived from the ordering among ion thermal gyroradius, current sheet thickness, and electron thermal gyroradius
\[\rho_{i}:\lambda:\rho_{e}=d_{i}\sqrt{\frac{\beta_{i}}{2}}:2d_{i}:d_{i}\sqrt{ \frac{\beta_{e}}{2}\frac{m_{e}}{m_{i}}}=\sqrt{\frac{\beta_{i}}{8}}:1:\sqrt{ \frac{\beta_{e}}{8}\frac{m_{e}}{m_{i}}}. \tag{13}\]
Taking Run 2A as an example, we have \(\rho_{i}:\lambda:\rho_{e}=0.32:1:0.01\). As the plasma beta is lowered, the electrostatic field decreases due to the stronger magnetization of ions, as shown in Figures 7 and 8 below.
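The quoted ordering for Run 2A follows directly from Equation (13); a one-line check:

```python
# Worked example of Eq. (13) for Run 2A: beta_i = 5/6, beta_e = 1/6, m_e/m_i = 1/100, lambda = 2 d_i.
import math

beta_i, beta_e, me_over_mi = 5 / 6, 1 / 6, 1 / 100
rho_i_over_lam = math.sqrt(beta_i / 8)
rho_e_over_lam = math.sqrt(beta_e / 8 * me_over_mi)
print(f"{rho_i_over_lam:.2f} : 1 : {rho_e_over_lam:.2f}")  # 0.32 : 1 : 0.01
```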
To show the detailed force balance of these current sheet configurations, we decompose the forces in parallel and perpendicular directions with respect to local magnetic fields. The force balance in the parallel direction is rapidly established because particles move freely along field lines. The unit vectors in the two perpendicular directions are defined as \(\mathbf{e}_{\perp 1}=\hat{\mathbf{z}}\times\hat{\mathbf{b}}\) and \(\mathbf{e}_{\perp 2}=\hat{\mathbf{b}}\times(\hat{\mathbf{z}}\times\hat{ \mathbf{b}})\), where \(\hat{\mathbf{b}}\) and \(\hat{\mathbf{z}}\) are the unit vectors along the magnetic field and \(z\) direction, respectively. It is worth noting that \(\mathbf{e}_{\perp 1}\) is roughly along the \(\pm y\) direction far above/below the equator, and along the \(-x\) direction at the equator, while \(\mathbf{e}_{\perp 2}\) is roughly along the \(+z\) direction all over the domain.
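For reference, a minimal sketch of this decomposition (not the analysis code itself) constructs the two perpendicular unit vectors from the local magnetic field direction:

```python
# e_perp1 = z_hat x b_hat and e_perp2 = b_hat x (z_hat x b_hat), both normalized.
# Assumes B is not parallel to z_hat (true throughout the current sheet).
import numpy as np

Z_HAT = np.array([0.0, 0.0, 1.0])

def perp_basis(B):
    b = B / np.linalg.norm(B)
    e1 = np.cross(Z_HAT, b)
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(b, e1)
    return e1, e2 / np.linalg.norm(e2)

print(perp_basis(np.array([1.0, 0.0, 0.0])))  # B ~ +x (off-equator): e1 ~ +y, e2 ~ +z
print(perp_basis(np.array([0.0, 1.0, 0.0])))  # B ~ +y (equator):     e1 ~ -x, e2 ~ +z
```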
At the equator, the electrostatic fields, \(\mathbf{E}_{\perp 1}\) (i.e., along the \(-x\) direction), cause a drift \(c\mathbf{E}_{\perp 1}\times\mathbf{b}/|\mathbf{B}|\) of both ions and electrons [Figures 7(b-c), 7(h-i), and 7(n-o)], which does not produce net currents [Figures 7(a), 7(g), and 7(m)]. In addition, the pressure gradient \((\nabla\cdot\mathbf{P})_{\perp 1}\) is about zero in the two cases, with \(\beta=0.1\) [Run 1A; see Figure 7(a)] and \(\beta=1\) [Run 2A; see Figure 7(g)]. Thus, we obtain \((\mathbf{J}\times\mathbf{B})_{\perp 1}/c=(\nabla\cdot\mathbf{P})_{\perp 1}=0\) in the \(\mathbf{e}_{\perp 1}\) direction. In the case with \(\beta=10\), however, the current sheet gets compressed and rarefied periodically in the \(x\) direction. Correspondingly, \((\nabla\cdot\mathbf{P})_{\perp 1}\) oscillates around zero and acts as a restoring force [Run 3A; see Figure 7(m)]. Such oscillations are
Figure 5: Current density for the ion-dominated force-free current sheet with plasma beta \(\beta=1\) in Run 2A. The snapshot is taken at \(t=180\,\omega_{ci}^{-1}\) in the simulation. (a) Ion field-aligned currents. (b) Electron field-aligned currents. (c) Total perpendicular currents.
still localized around the equilibrium \((\mathbf{J}\times\mathbf{B})_{\perp 1}/c=(\nabla\cdot\mathbf{P})_{\perp 1}=0\) when performing a spatial average over \(x\).
Along the \(\mathbf{e}_{\perp 2}\) direction (i.e., roughly the \(z\) direction), both the pressure gradient \((\nabla\cdot\mathbf{P})_{\perp 2}\) and Lorentz force \((\mathbf{J}\times\mathbf{B})_{\perp 2}/c\) vanish [Figures 8(a), 8(g), 8(m)]. The dominant component of the electrostatic fields, \(\mathbf{E}_{\perp 2}\) (along the \(\pm z\) directions), leads to a drift \(c\mathbf{E}_{\perp 2}\times\mathbf{b}/|\mathbf{B}|\) (roughly in the \(+y\) direction) of both ions and electrons [Figures 8(b-c), 8(h-i), and 8(n-o)], which does not give net currents. All aforementioned perpendicular drift velocities are small, when compared to ion and electron parallel flow velocities that carry currents [e.g., Figure 5].
### Initially electron-dominated current sheets
When electrons initially carry all of the (field-aligned) currents (Runs 1B, 2B, and 3B), there is no change in the macroscopic state of the system at the end of the simulations [e.g., Figure 9]: the currents remain field aligned and are carried by electrons only. Compared to the ion-dominated current sheets, the relaxation of electron-dominated current sheets does not alter the macroscopic state: neither the proportion of electron and ion currents changes, nor do electrostatic fields develop. In all three cases with different plasma betas, the system remains in a force-free configuration, with no plasma pressure gradients [Figures 7(d-f), 7(j-l), 7(p-r), 8(d-f), 8(j-l), 8(p-r)]. Although the system is identical to the initial one from the fluid point of view, the underlying velocity distribution functions evolve substantially from the initial Maxwellians to reach the Vlasov equilibrium, as shown below.
### Reaching kinetic equilibrium or not?
Since the force-free equilibria are reestablished toward the end of simulations for both ion- and electron-dominated current sheets from the fluid point of view, a logical next step would be to further examine if such current sheets reach the kinetic equilibrium. Because all the currents are field aligned, we integrate the full distribution function \(f_{s}(x,z,v_{\parallel},v_{\perp},\phi,t)\) of species \(s\) to obtain the
Figure 6: Electrostatic field for the ion-dominated force-free current sheet with plasma beta \(\beta=1\) in Run 2A. The snapshot is taken at \(t=180\,\omega_{ci}^{-1}\) in the simulation. (a) Electric field in the \(z\) direction \(E_{z}\). (b) Electric field in the \(x\) direction \(E_{x}\). The magnitude of \(E_{x}\) is much smaller than that of \(E_{z}\).
reduced distribution \(g_{s}(x,v_{\parallel},t)\) at the equator as a function of \(x\), \(v_{\parallel}\), and \(t\):
\[g_{s}(x,v_{\parallel},t)=\int_{-d_{i}}^{d_{i}}\mathrm{d}z\int_{0}^{\infty} \mathrm{d}v_{\perp}\int_{0}^{2\pi}\mathrm{d}\phi\,f_{s}(x,z,v_{\parallel},v_{ \perp},\phi,t), \tag{14}\]
where \(v_{\parallel}\) and \(v_{\perp}\) are the parallel and perpendicular velocities, respectively, and \(\phi\) is the gyrophase.
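In practice, \(g_{s}\) can be estimated from the macro-particle data by histogramming; the following sketch (the binning choices are assumptions, not the paper's exact diagnostic) restricts particles to \(|z|<d_{i}\) and bins in \(x\) and \(v_{\parallel}\), which amounts to integrating over \(z\), \(v_{\perp}\), and gyrophase:

```python
# Estimate of the reduced distribution g_s(x, v_par) of Eq. (14) from particle data.
import numpy as np

def reduced_distribution(x, z, v_par, weights, x_edges, v_edges, z_half_width=1.0):
    near_equator = np.abs(z) < z_half_width   # |z| < d_i in normalized units
    g, _, _ = np.histogram2d(x[near_equator], v_par[near_equator],
                             bins=[x_edges, v_edges],
                             weights=weights[near_equator])
    # Summing particle weights in each (x, v_par) bin integrates over v_perp and gyrophase.
    return g   # shape (n_x_bins, n_vpar_bins)
```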
Figure 10 shows the difference between the reduced distributions at the end of simulations and the initial Maxwellians, \(\Delta g_{s}=g_{s}(x,v_{\parallel},t=180\,\omega_{ci}^{-1})-g_{s}(x,v_{ \parallel},t=0)\), from Run 2A as an example. The
Figure 7: Force balance in the \(\mathbf{e}_{\perp 1}=\hat{\mathbf{z}}\times\hat{\mathbf{b}}\) direction at the equator for different plasma betas. Note that \(\mathbf{e}_{\perp 1}\) is along the \(-x\) direction at the equator. All the snapshots are taken at \(t=180\,\omega_{ci}^{-1}\) in the simulations. The volume forces are averaged in the range \(-1\leq z/d_{i}\leq 1\). The rows from top to bottom represent Runs 1A, 1B, 2A, 2B, 3A, and 3B, respectively. The columns from left to right represent the force balance from the point of view of the single fluid, the ion fluid, and the electron fluid, respectively.
ion phase space density is transported toward velocities both above and below the mean ion flow velocity [Figure 10(a)]. Consequently, the ion velocity distribution becomes wider and less peaked compared to the initial Maxwellian [Figure 10(b)]. The mean ion flow velocity is slowed more on the earthward side than on the tailward side [see the inset of Figure 10(a)]. Correspondingly, electrons are accelerated on average in the \(v_{\parallel}<0\) direction to develop an electron current that compensates for the reduction of the ion current [see the inset of Figure 10(c)]. Such changes of the mean flow velocities are not apparent in the phase space density plots because the thermal velocities are much larger than the mean flow velocities.
Figure 8: Force balance in the \(\mathbf{e}_{\perp 2}=\mathbf{\hat{b}}\times(\mathbf{\hat{z}}\times\mathbf{\hat{b}})\) direction for different plasma betas. Note \(\mathbf{e}_{\perp 2}\) is approximately along the \(+z\) direction. All the snapshots are taken at \(t=180\,\omega_{ci}^{-1}\) in the simulations. The volume forces are averaged in the range \(-16\leq x/d_{i}\leq-12\). The format is the same as Figure 7.
The peak of the electron phase space density is transported toward larger velocities in both \(v_{\parallel}>0\) and \(v_{\parallel}<0\) [Figures 10(c) and 10(d)].
The macroscopic states of electron-dominated current sheets do not differ from their initial configurations, as seen in Figure 9. The electron reduced distribution function \(g_{e}(x,v_{\parallel})\), however, shows systematic deviations from the initial Maxwellians, as displayed in Figure 11. Such deviations are similar to those in ion-dominated current sheets. The peak of the electron velocity distributions is reduced and redistributed toward larger velocities of \(v_{\parallel}>0\) and \(v_{\parallel}<0\) (but without changes in mean electron flow velocities) [Figures 11(c) and 11(d)]. The deviation of the ion velocity distribution from the initial Maxwellian is small [Figure 11(b)], but is required to reach the Vlasov equilibrium.
To quantify the convergence of the system toward kinetic equilibrium, we define the metric
\[G_{s}(t)=\int_{-\infty}^{\infty}\mathrm{d}v_{\parallel}\left\langle\left(g_{s }(x,v_{\parallel},t)-g_{s}(x,v_{\parallel},0)\right)^{2}\right\rangle_{x}, \tag{15}\]
where \(\langle\cdot\rangle_{x}\) denotes the average over spatial coordinate \(x\). This metric is chosen in such a way that the non-Maxwellian features of \(g_{s}(x,v_{\parallel},t)\) are properly accounted for. During the evolution of the system, it is expected that \(G_{s}(t)\) experiences significant changes at the beginning of the simulation and eventually reaches a steady state, if a kinetic equilibrium is established. Figure 12 shows the absolute values of the time derivative \(|\partial G_{s}/\partial t|\) for both ion- and electron-dominated current sheets with different plasma betas. All runs end up with \(|\partial G_{s}/\partial t|\lesssim 0.01\), except for Run 1A (i.e., the ion-dominated current sheet with \(\beta=0.1\)). For the same plasma beta, electron-dominated current sheets reach the kinetic equilibrium faster than ion-dominated current sheets. In the latter case, the system simply takes more time to redistribute the currents between unmagnetized ions and
Figure 9: Current density for the electron-dominated force-free current sheet with plasma beta \(\beta=1\) in Run 2B. The snapshot is taken at \(t=180\,\omega_{ci}^{-1}\) in the simulation. (a) Ion field-aligned currents. (b) Electron field-aligned currents. (c) Total perpendicular currents.
magnetized electrons. For both types (ion- and electron-dominated) of current sheets, those with higher plasma \(\beta\) evolve toward kinetic equilibrium faster, because the current sheets with higher plasma \(\beta\) can support larger transient electric fields to redistribute particles more rapidly in phase space toward the equilibrium.
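A compact sketch of the convergence diagnostic of Equation (15), with assumed array shapes and a finite-difference estimate of its time derivative, is:

```python
# G_s(t): integral over v_par of the x-averaged squared deviation of g_s from its initial value.
import numpy as np

def G_metric(g_t, g_0, dv_par):
    """g_t, g_0: reduced distributions of shape (n_x, n_vpar); dv_par: parallel-velocity bin width."""
    return np.sum(np.mean((g_t - g_0) ** 2, axis=0)) * dv_par

def dG_dt(G_series, dt_output):
    """|dG_s/dt| estimated by finite differences between successive simulation outputs."""
    return np.abs(np.gradient(np.asarray(G_series), dt_output))
```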
At present, we cannot follow the relaxation of the low-\(\beta\), ion-dominated current sheet in Run 1A further in time toward the final kinetic equilibrium, because the current sheet becomes unstable at a later time (most likely due to the boundary conditions). However, the progression of the relaxation for different beta values and the behavior of Run 1A up to \(t\cdot\omega_{ci}\sim 10\) suggest that it also conforms to the explanatory model presented herein. The distribution functions at that time provide a sufficiently good representation of the Vlasov equilibrium to use in future modeling.
## 5 Conclusion and Discussion
In summary, we demonstrate that kinetic equilibria of 2D force-free current sheets exist for different proportions of ion and electron currents in various plasma betas. When initial currents are carried purely by ions, field-aligned electron currents are developed by transient parallel electric fields, while
Figure 10: Phase space distributions for the ion-dominated force-free current sheet with plasma beta \(\beta=1\) in Run 2A. (a) The difference between the reduced ion distribution and the initial Maxwellian \(\Delta g_{i}\). The inset plot shows the mean ion flow velocity as a function of \(x\) at \(t=0\) (gray line) and \(t=180\,\omega_{ci}^{-1}\) (black line). (b) A cut of the final ion distribution \(g_{i}(x,v_{\parallel},t=180\,\omega_{ci}^{-1})\) (black line), \(g_{i}(x,v_{\parallel},t=0)\) (gray line), and their difference \(\Delta g_{i}\) multiplied by 5 for better visibility (red line) between \(-16\leq x/d_{i}\leq-12\). (c) The difference between the reduced electron distribution and the initial Maxwellian \(\Delta g_{e}\). The inset plot shows the mean electron flow velocity as a function of \(x\) at \(t=0\) (gray line) and \(t=180\,\omega_{ci}^{-1}\) (black line). (d) A cut of the final electron distribution \(g_{e}(x,v_{\parallel},t=180\,\omega_{ci}^{-1})\) (black line), \(g_{e}(x,v_{\parallel},t=0)\) (gray line), and their difference \(\Delta g_{e}\) multiplied by 5 for better visibility (red line) between \(-16\leq x/d_{i}\leq-12\).
perpendicular electrostatic fields are generated due to unmagnetized ions and magnetized electrons. When initial currents are carried exclusively by electrons, the macroscopic state of the system remains unchanged from its initial state, described by the MHD model. In both scenarios, the electron and ion distribution functions at the late equilibrium states show systematic deviations from the initial drifting Maxwellians. These deviations occur in order to satisfy the time-stationary Vlasov equation.
The existence of a kinetic equilibrium for 2D force-free current sheets indicates that there is at least one hidden symmetry in the system, implying the existence of an additional integral of motion (i.e., an additional invariant). Our system has five dimensions, \(N_{D}=5\), in phase space (2D in the coordinate space and 3D in the velocity space). The two known invariants are the total energy \(H\) and the \(y\) component of the canonical momentum \(P_{y}=mv_{y}+eA/c\). The degrees of freedom of the system are \(N_{f}=N_{D}-N_{I}\), where \(N_{I}\) is the number of invariants. The particle phase space densities at equilibrium are constructed as a function of such invariants of motion. To fully describe such a kinetic equilibrium, which we now know exists, we must have \(N_{I}>N_{f}\), or equivalently, \(N_{I}>N_{D}/2\). Thus, \(N_{I}\) is at least 3 (in our case) and there is at least one hidden symmetry. Starting from particle trajectory data in the equilibrium electromagnetic fields, future research on this topic could make use of machine learning models to help find the number of invariants or even discover the analytic formula of these invariants (e.g., Liu & Tegmark, 2021; Liu et al., 2022).
## 6 Acknowledgments
This work was supported by NASA grants 80NSSC20K1788, 80NSSC22K0752, and NAS5-02099. We acknowledge high-performance computing support from Cheyenne (doi:10.5065/D6RX99HX) provided by NCAR's Computational and Information Systems Laboratory, sponsored by the National Science Foundation (Computational and Information Systems Laboratory 2019).
Figure 11: Phase space distributions for the electron-dominated force-free current sheet with plasma beta \(\beta=1\) in Run 2B. The format is the same as Figure 10.
|
2304.01236 | Astronomical image time series classification using CONVolutional
attENTION (ConvEntion) | Aims. The treatment of astronomical image time series has won increasing
attention in recent years. Indeed, numerous surveys following up on transient
objects are in progress or under construction, such as the Vera Rubin
Observatory Legacy Survey for Space and Time (LSST), which is poised to produce
huge amounts of these time series. The associated scientific topics are
extensive, ranging from the study of objects in our galaxy to the observation
of the most distant supernovae for measuring the expansion of the universe.
With such a large amount of data available, the need for robust automatic tools
to detect and classify celestial objects is growing steadily. Methods. This
study is based on the assumption that astronomical images contain more
information than light curves. In this paper, we propose a novel approach based
on deep learning for classifying different types of space objects directly
using images. We named our approach ConvEntion, which stands for CONVolutional
attENTION. It is based on convolutions and transformers, which are new
approaches for the treatment of astronomical image time series. Our solution
integrates spatio-temporal features and can be applied to various types of
image datasets with any number of bands. Results. In this work, we solved
various problems the datasets tend to suffer from and we present new results
for classifications using astronomical image time series with an increase in
accuracy of 13%, compared to state-of-the-art approaches that use image time
series, and a 12% increase, compared to approaches that use light curves. | Anass Bairouk, Marc Chaumont, Dominique Fouchez, Jerome Paquet, Frédéric Comby, Julian Bautista | 2023-04-03T08:48:44Z | http://arxiv.org/abs/2304.01236v1 | # Astronomical image time series classification using CONvolutional attENTION (ConvEnton)
###### Abstract
[MISSING_PAGE_POST]
pensive. Boone (2019) (winner of the photometric classification challenge PLAsTiCC (PLAsTiCC-team et al., 2018; Hlozek et al., 2020)) presented a model based on Gaussian process augmentation of the light curves, which is then used to train a boosted decision tree classifier. Pasquet et al. (2019) created a deep architecture called PELICAN that accepts only light curves and redshifts as input. PELICAN can handle light curves with sparsity and irregular sampling. Others choose to add more preprocessing before training a model. For instance, Qu et al. (2021) proposed a novel approach where they generated a 2D image heatmap from light curves using 2D Gaussian process regression, which they fed to convolutional neural networks to classify different types of supernovae. The approach yields great results on PLAsTiCC data, with an accuracy of 99.73% on the binary classification of SNIa and non-SNIa. The methods that use light curves for classification still have some limitations. In order to generate a light curve, two consecutive images must be correctly aligned, and the quality of one of the two images must be degraded before subtracting them to obtain the flux, which can lead to a loss of information. Some dedicated algorithms, called scene modeling, can mitigate such issues for blended objects but are very demanding in terms of computer resources. Most importantly, the scene information, namely, the background of the transient object, is in general not taken into account in the classification. Several recent works have proposed to eliminate the feature extraction and light curve construction phases and to focus on classifying the objects using only images. Carrasco-Davis et al. (2019) and Gomez et al. (2020) used an RNN to classify the sequences after passing the images through a CNN to extract the spatial features. They forwarded the output to the RNN (GRU/LSTM) to extract the temporal characteristics and classify the object; Gomez et al. (2020) applied their model only to transient objects, whereas Carrasco-Davis et al. (2019) classified variables and transients. These two papers showed promising results for astronomical image time series (AITS). Therefore, we followed the same path to improve the classification and also to solve some challenges posed by AITS that have not been tackled before.
In particular, image time series (ITS) classification has always been one of the challenging areas of deep learning. In addition to the spatial characteristics, the temporal aspects must also be taken into account, which makes traditional feed-forward networks ineffective. Due to the lack of research carried out on ITS in astronomy, we need to import new techniques from other fields of research. Most of the research in ITS classification is done in two major domains: action recognition, where the goal is to classify the type of human action (Shi et al., 2015; Ji et al., 2013), and landscape classification using satellite images (Turkoglu et al., 2021). These two fields have covered many of the essential methods to handle ITS. RNN-based approaches use recurrent neural networks to handle the temporal aspect of the classification. These approaches are split into two main categories. The first one handles the spatial features separately from the temporal features. Carrasco-Davis et al. (2019) and Gomez et al. (2020) used precisely this method: a CNN handles the spatial characteristics and passes them to an RNN, which might be an LSTM (Hochreiter and Schmidhuber, 1997) or a GRU (Cho et al., 2014). The second category embeds convolutions inside the RNN cell, thus maintaining the spatial structure of the input, which leads to extracting spatio-temporal features from the sequence. This method was first introduced by Shi et al. (2015). These authors demonstrated how to create an end-to-end trainable model using the convolutional LSTM (ConvLSTM). Experiments indicate that their ConvLSTM network consistently outperforms the fully connected LSTM (FC-LSTM) in capturing spatio-temporal correlations. Using satellite images, Turkoglu et al. (2021) proposed a new type of RNN called ConvSTAR, which has fewer parameters than the LSTM and GRU. Another way of achieving the classification of ITS is by using convolutional neural networks. Ji et al. (2013) created a new 3D CNN model for action recognition. This model pulls features from the spatial and temporal dimensions, capturing the motion information contained in several consecutive frames. Meanwhile, some of the latest developments have abandoned convolutions and RNNs, replacing them with only transformers. Liu et al. (2022) and Yan
Figure 1: Sample of some objects present in our dataset. Each image in filter g/i corresponds to a different observation with the same filter.
et al. (2022) proposed an improved supervised transformer for image classification. On the other hand, Zhou et al. (2022) and Bao et al. (2022) proposed more complex transformers that are self-supervised.
In this work, we develop a new deep learning transformer-based architecture to classify AITS. Unlike other works that separate spatial and temporal feature extraction, we combine these two steps by performing a spatio-temporal feature extraction in one step. This improves the capacity of the network to recognize the objects. We also propose a solution for the missing observations problem, which leads to a significant improvement in the accuracy of the model. To illustrate the performance of our model, we tested it with actual data from the SDSS survey (Holtzman et al. 2008; Sako et al. 2014; Frieman et al. 2007). In Section 2, we describe the dataset that we used in our work. Section 3 introduces our architecture ConvEntion and describes the role of each component of the model. In Section 4, we present the results of our work with some statistics about the performance and some comparisons with other architectures used for image time series classification. Finally, in Section 5, we present our conclusions and perspectives on this work.
## 2 Dataset
### Database description
The Sloan Digital Sky Survey (SDSS) (Holtzman et al. 2008; Frieman et al. 2007) is a very ambitious and successful large-scale survey program, using a dedicated 2.5-meter telescope at Apache Point Observatory, New Mexico, equipped with photometric and spectroscopic instruments, that has released images, spectra, and catalog information for several hundred million celestial objects. The dataset used in this paper was collected during the SDSS Supernova Survey (Sako et al. 2014), one of three components (along with the Legacy and SEGUE surveys) of SDSS-II, a three-year extension of the original SDSS that operated from July 2005 to July 2008. The Supernova Survey is a time-domain survey, involving repeat imaging of the same region of the sky every other night, weather permitting.
The images are obtained through five wide-band filters (Fukugita et al. 1996) named u', g', r', i', and z', simplified as u, g, r, i, and z in the following, which correspond to effective mid-point wavelengths of u (365nm), g (475nm), r (658nm), i (806nm), and z (900nm). The survey region observed repeatedly over three years is a 2.5-degree-wide stripe centered on the celestial equator in the Southern Galactic Cap that has been imaged numerous times in the last twenty years, allowing for the construction of a large image database for the discovery of new celestial objects. Most of the sources have included galactic variable stars, active galactic nuclei (AGN), supernovae (SNe), and other astronomical transients, all of which have been processed to generate multi-band (ugriz) light curves. The imaging survey is reinforced by an extensive spectroscopic follow-up program that uses spectroscopic diagnostics to identify SNe and measure their redshifts. Light curves were evaluated during the survey to provide an initial photometric type of the SNe, and a selected sample of sources was targeted for spectroscopic observations.
In order to investigate the classification from images rather than light curves, we acquired the images from the public SDSS dataset through their platform. Our dataset contains many types of supernovae (see Table 1 and Sako et al. 2014). The label of "unknown" mainly represents very sparse or poorly measured transient candidates, "variables" have signals spanning over two seasons, and "AGNs" have a spectral signature. The three other classes are supernovae of type Ia, Ib/c, and II. Among supernovae, the typing is performed from spectroscopy or from the light curve using different machine learning techniques (see Sako et al. 2014). We grouped the non-Ia supernovae together because our focus in this study is only on the Ia type, given its interest in cosmology as a standard candle, and also because of the small number of non-Ia with spectral signatures. The very small class of three bright SLSN objects has been added to the non-Ia supernovae. Figure 1 shows an example of an astronomical image time series taken from the SDSS dataset.
### Challenges
Most astronomical datasets suffer from a number of problems that should be dealt with before feeding them to the classification algorithm. Among the difficulties contributing to the challenging nature of AITS, we can mention class imbalance (as shown in Table 1 for our dataset). In particular, we can clearly see that the classes are not balanced: the number of samples for variables is much larger than for SNIa. This imbalance significantly impacts machine learning models because the larger classes have a higher prior probability, which means models tend to overclassify the larger class(es). As a result, instances belonging to the smaller class(es) are more likely to be misclassified than those belonging to the larger class(es). Another problem that impacts the model is missing bands. Indeed, each time an image is acquired in an AITS, it is captured through one filter among a set of up to five or more channels. So, an image of a celestial object can be taken in many channels, but not necessarily at the same time. This results in missing bands for a given time of observation (see Figure 3). It is well known that missing data negatively impact the performance of the model if they are not dealt with. Gill et al. (2007) stated that an increasing percentage of missing training data results in an increased testing error, which calls for a solution to mitigate the impact of missing data.
## 3 Methods
In this section, we propose a neural network based on a combination of convolutions and self-attention. The goal of the model is to handle the challenges that we mentioned previously, such as class imbalance, data sparsity, and missing observations. Figure
\begin{table}
\begin{tabular}{||c c||} \hline Object name & Count \\ \hline \hline AGN & 906 \\ \hline SNIa & 499 \\ \hline SNOther & 89 \\ \hline Unknown & 2009 \\ \hline Variable & 3225 \\ \hline SNOther\_PT & 2041 \\ \hline SNIa\_PT & 1448 \\ \hline \end{tabular}
\end{table}
Table 1: Number of objects per class in the SDSS dataset. PT: photometrically typed, which means that the SNe are not spectroscopically verified.
2 represents the general architecture of the ConvEntion model. The model takes as input a sequence of images that has been rearranged to embed the band information (see Section 3.1 and Figure 4). The sequence first passes through a 3DCNN to reduce its length, which lowers the computational complexity of the model and also captures the local characteristics of the objects. The new sequence constructed by the 3DCNN is fed to a convolutional BERT, which then extracts high-level spatio-temporal features from the input. Finally, we pass the output of the convolutional BERT, which is a projection of our input into a high-level representation subspace, through a 3D max-pooling layer to downsample it, and then on to the final classifier to make the prediction. In the following subsections, we explain each component in depth.
### Data modeling
First, we note that throughout the paper, vectors are given in bold capital letters, sizes in capital letters, and indices in lowercase letters. To start with the **missing data problem**, a network dedicated to image time series is usually fed a sequence of images \(\mathbf{I}\in\mathbb{R}^{H\times W\times 5}\), where \(H\) and \(W\) are, respectively, the height and width of the image and 5 is the number of channels representing the bands (u, g, r, i, z). However, as explained earlier, some bands are missing in the dataset. To fix this issue, instead of giving the model images with empty channels, thus introducing a bias to the network, we decided to separate the channels into individual images (\(\mathbf{X}\in\mathbb{R}^{H\times W}\)) simply by skipping the empty channels. As a consequence, the information about the type of filter, which is crucial for the network to accurately discriminate between objects, is also eliminated. In an image with different channels, the order of the channels usually represents the type of filter (see Figure 3).
In order to preserve this valuable information, we add the band type to the new 2D images \(\mathbf{X}\). Since the filter type is a categorical feature, we need to adapt it to the model's 2D input representation. To do so, we propose using an embedding layer to encode the channel type before passing the input to the model. For each band (u, g, r, i, z), we assign a unique number \(id\in\{1,2,3,4,5\}\). Then, an embedding layer _BandEmbed_ converts the band type \(id\) into a 2D dense representation \(E_{id}\), with \(E_{id}\in\mathbb{R}^{H\times W}\) (see Figure 4):
\[E_{id}=BandEmbed(id). \tag{1}\]
The embedding layer is a fully connected layer that is reshaped to a 2D representation. The weights of _BandEmbed_ are
Figure 3: Each image has five filters (u, g, r, i, z). The black channel represents a missing observation.
Figure 2: General architecture of the ConvEntion network. The image time series is first rearranged to embed the band information. Then each 3DCNN is fed with a sub-sequence of \(K\) inputs of the time series \(J\) (\(\in\mathbb{R}^{M\times H\times W\times 2}\) for \(M\) images of size \(H\times W\)) to create the new downsized sequence \(S\) (\(\in\mathbb{R}^{N\times H\times W\times 2}\)). \(S\) is fed to the positional encoder in order to add the information about the position, which outputs \(F\) (\(\in\mathbb{R}^{N\times H\times W\times 2}\)). Then \(F\) is passed to ConvBERT, which has \(L\) layers. A 3D max-pooling is used to downsize the output of ConvBERT for the classifier.
learnable. After getting the band embedding, we concatenate it with the new image to get our new input \(J\in\mathbb{R}^{M\times H\times W\times 2}\) that contains the band information, where \(M\) is the length of the sequence:
\[J_{m}=Concat(X_{m},E_{id}),\qquad m\in\{1,..,M\}. \tag{2}\]
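For illustration, a minimal PyTorch sketch of this band embedding, consistent with Equations (1) and (2) but not necessarily the exact implementation (the image size and the 0-based band indexing are assumptions), could be:

```python
# BandEmbed: a learnable embedding that maps a band id to an H x W plane, which is then
# concatenated with the single-band image to form a 2-channel input (Eqs. 1-2).
import torch
import torch.nn as nn

class BandEmbed(nn.Module):
    def __init__(self, n_bands=5, height=32, width=32):
        super().__init__()
        self.height, self.width = height, width
        self.embed = nn.Embedding(n_bands, height * width)   # one dense plane per band

    def forward(self, image, band_id):
        # image: (B, H, W); band_id: (B,) integer ids in {0, ..., n_bands - 1}
        plane = self.embed(band_id).view(-1, self.height, self.width)   # E_id
        return torch.stack([image, plane], dim=1)                        # J_m, shape (B, 2, H, W)

x = torch.randn(4, 32, 32)
ids = torch.tensor([0, 2, 3, 4])
print(BandEmbed()(x, ids).shape)   # torch.Size([4, 2, 32, 32])
```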
The problem of **class imbalance** is one of the major challenges for any machine learning project. Some authors have tried to solve this problem by introducing new loss functions that mitigate the impact of the class imbalance. For example, Lin et al. (2017) proposed a loss function called "focal loss", which applies a modulating term to the cross-entropy loss in order to focus the learning on hard misclassified examples. However, this approach tends to produce a vanishing gradient during backpropagation (Hossain et al. 2021). Other solutions propose the use of oversampling, such as SMOTE (Chawla et al. 2002), where new samples of the minority class are synthesized. However, this solution was proposed mainly for tabular data. Since our data are images, which contain a much higher number of features than tabular data, using SMOTE may not be optimal in our case. Dablain et al. (2021) introduced a solution based on SMOTE dedicated to images, called DeepSMOTE, which is aimed at generating new images for the minority class. Once again, this approach is unsuitable in our case, as our dataset is not composed of single images but of sequences of images, and it is too expensive to generate a whole new sequence. So, instead of generating new sequences, we used data augmentation and weighted random sampling (WRS) (Efraimidis 2015) on our database. We oversampled the dataset, which simply means altering it to remove the imbalance by increasing the number of samples from the minority classes and undersampling (decreasing) the majority classes until a balanced dataset is reached. In our case, the WRS was applied at the batch level. We generate balanced batches based on the probability of a sample being selected: we weighted each sample according to the inverse frequency of its label's occurrence and then sampled mini-batches from a multinomial distribution based on these weights. This means that samples with high weights are sampled more often for each mini-batch. The same sample can be reused in other mini-batches of the same epoch to increase the minority class, but with a data augmentation applied to it. Different methods of data augmentation were used: for example, a random drop of some steps from the whole sequence to create a new one, sequence rotation, horizontal and vertical flips, and sequence shifting, where we construct a smaller sequence from an original sequence that is longer than the input length of ConvEntion. In our implementation, the dataset is re-sampled at every epoch and the augmentation transforms are re-executed, so different augmented data are obtained each time. Using this oversampling approach has drastically improved the performance of the model. We used the function \(WeightedRandomSampler\) from PyTorch (Paszke et al. 2019) as an implementation of WRS.
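A short sketch of this batch-level sampling with PyTorch's `WeightedRandomSampler` (the label vector and dataset name below are placeholders) could look as follows; the weights are the inverse frequencies of each sample's label, and `replacement=True` lets minority-class samples reappear within an epoch, each time passing through the random augmentations.

```python
# Inverse-frequency weighted random sampling for balanced mini-batches.
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

labels = torch.tensor([0, 0, 0, 0, 1, 2, 2])           # placeholder label vector
class_counts = torch.bincount(labels).float()
sample_weights = 1.0 / class_counts[labels]             # one weight per sample

sampler = WeightedRandomSampler(weights=sample_weights,
                                num_samples=len(labels),
                                replacement=True)

# loader = DataLoader(train_dataset, batch_size=32, sampler=sampler)  # train_dataset is assumed
```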
### 3D convolution network:
In several deep learning applications, large transformer models have demonstrated great success in obtaining state-of-the-art results. However, because the original transformer's self-attention mechanism consumes \(O(M^{2})\) time and space with respect to the sequence length, \(M\), training the model on long sequences is very expensive; this is the so-called "attention bottleneck" (Wang et al. 2020; Choromanski et al. 2021). The problem is more severe for us because we use convolutions and 3D tensors inside the attention mechanism; for instance, each attention map is of size \(H\times W\), so the complexity of the attention becomes \(O(M^{2}\times H\times W)\). Thus, our model would be prohibitively expensive to train. In the last few years, there have been numerous proposals aimed at solving this issue. Wang
Figure 4: Illustration of the handling of missing information by separating the bands. The empty channels are dropped, then we concatenate each image with a 2D representation of the band used to capture the image. The band embedding contains five band representations. The black channel represents the missing observation
et al. (2020) demonstrated that a low-rank matrix could approximate the self-attention mechanism; they suggested a new self-attention method that reduces the overall complexity of self-attention. Choromanski et al. (2021) presented a novel transformer architecture that uses linear space and time complexity to estimate regular (softmax) full-rank-attention transformers with proven accuracy. However, these propositions are not applicable in our case because we do not use the standard self-attention mechanism, as the convolutions make it an arduous task. The solution we preferred is instead to reduce the length of the sequence before feeding it to the transformer block. Reducing the sequence must be done without losing relevant information. Thus, we propose using a 3D convolutional neural network (3D CNN). A 3D CNN is an extension of the CNN, first proposed by Tran et al. (2014), in which a 3D filter moves in three directions over the data to compute low-level feature representations, so that the output is a 3D volume. We apply the 3DCNN to the input sequence \(\mathbf{J}\) to obtain the reduced sequence \(\mathbf{S}\) following the equation:
\[S_{n}=3\text{DCNN}(J_{(n-1)\times K+1},...,J_{n\times K}),\quad n\in\{1,..,N\}. \tag{3}\]
We let \(M\) be the length of the series \(J\), and we feed \(K\) elements of \(J\) to the 3DCNN to generate one entry of \(S\) for our transformer. In the end, the new sequence is \(S\in\mathbb{R}^{N\times H\times W\times D}\), where \(N=M/K\), \(D\) is the number of channels, and \(H\) and \(W\) are the new height and width. By using the 3DCNN, we reduce the length of the sequence by a factor of \(K\), which also reduces the complexity of the model. The 3DCNN does not just reduce the length of the input sequence; it also captures local spatio-temporal low-level features, owing to its focus on the local characteristics (in space and time) of the sequence, while the transformer focuses on the global characteristics. On the whole, we reduce the computation without losing information that is essential for classification. Table 2 summarizes the architecture used inside the 3DCNN.
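A minimal sketch of this temporal reduction is shown below, assuming single-channel input thumbnails; the kernel sizes follow Table 2 as we read it, while the paddings, the input channel count, and the averaging used to collapse the temporal axis of each group of \(K\) frames are assumptions made for illustration:

```python
import torch
import torch.nn as nn

class TemporalReducer3D(nn.Module):
    """Maps each group of K consecutive images to one feature map, as in eq. (3)."""
    def __init__(self, in_ch=1, k=3):
        super().__init__()
        self.k = k
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 64, (3, 11, 11), padding=(1, 5, 5)), nn.BatchNorm3d(64), nn.ReLU(),
            nn.Conv3d(64, 128, (3, 5, 5), padding=(1, 2, 2)), nn.BatchNorm3d(128), nn.ReLU(),
            nn.Conv3d(128, 64, (3, 3, 3), padding=(1, 1, 1)), nn.BatchNorm3d(64), nn.ReLU(),
            nn.Conv3d(64, 64, (3, 3, 3), padding=(1, 1, 1)), nn.BatchNorm3d(64), nn.ReLU(),
        )

    def forward(self, j):                      # j: (B, M, C, H, W)
        b, m, c, h, w = j.shape
        n = m // self.k
        # split the series into N groups of K frames: (B*N, C, K, H, W)
        groups = j[:, :n * self.k].reshape(b * n, self.k, c, h, w).permute(0, 2, 1, 3, 4)
        feats = self.net(groups)               # (B*N, D, K, H, W)
        feats = feats.mean(dim=2)              # collapse the temporal axis of each group
        d, hp, wp = feats.shape[1:]
        return feats.reshape(b, n, d, hp, wp)  # S: (B, N, D, H, W)

# example with assumed shapes: M = 99 frames of 32x32 single-band images, K = 3 -> N = 33
s = TemporalReducer3D(in_ch=1, k=3)(torch.randn(2, 99, 1, 32, 32))
print(s.shape)  # torch.Size([2, 33, 64, 32, 32])
```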
### Convolutional BERT
After getting the new output \(S\) of the 3DCNN, we feed it to what we call the convolutional BERT, which stands for Convolutional Bidirectional Encoder Representations from Transformers. Transformers and self-attention have become some of the main models revolutionizing deep learning in the last few years, especially in natural language processing (NLP). Self-attention (Bahdanau et al., 2014), also known as intra-attention, is an attention mechanism that connects different positions in a single sequence to compute a representation of the sequence. Here, "attention" refers to the fact that in real life, when viewing a video or listening to a song, we frequently pay more attention to certain details while paying less attention to others, based on the importance of the details. Deep learning uses a similar flow for its attention mechanism, giving particular parts of the data more focus as they are processed. Our intention in using this mechanism is for the model to focus more on the changes happening across the image sequence to better discriminate between astronomical objects. Self-attention layers are the foundation of the transformer block design. Transformers were first introduced by Vaswani et al. (2017), using a model based solely on attention and dispensing with recurrence and convolutions entirely. Their work inspired others who used the concept of transformers to achieve even better results. For example, in BERT (Devlin et al., 2019) the authors used only the encoder block, stacking many of them. Although transformers were at first mostly used in NLP, these blocks have since been implemented in other domains such as image classification. Dosovitskiy et al. (2021) presented a model free from convolutions, using only a transformer to classify images. Garnot et al. (2019) also showed that temporal characteristics can be extracted using a custom neural architecture based on self-attention instead of recurrent networks. Their use was not limited to image classification; action recognition was also investigated, as in Sharir et al. (2021), where the authors used a transformer-based approach inspired by the work of Dosovitskiy et al. (2021). Liu et al. (2021) proposed a new transformer that adds convolutions to the attention mechanism, making it possible to apply convolutions while extracting temporal features.
#### 3.3.1 Positional encoding
Because transformers have no recurrence throughout the thumbnail sequence, some information about each thumbnail's relative or absolute position must be injected into the feature map obtained by the 3DCNN to inform the model about the order in the sequence. Similarly to the original transformer paper (Vaswani et al., 2017), we use positional encoding at each layer in the encoder to achieve this. The only difference is that our positional encoding is a 3D tensor, where \(P\in\mathbb{R}^{N\times H\times W\times D}\). Because the positional encoding and the new feature maps have the same dimension, they can be added together. We use sine and cosine functions to encode the position (Vaswani et al., 2017):
\[P_{(n,2i)}=sin(n/10000^{2i/D}), \tag{4}\]
\[P_{(n,2i+1)}=cos(n/10000^{2i/D}), \tag{5}\]
where \(n\) denotes the position in the sequence of length \(N\), \(i\) is the channel index, and \(D\) represents the total number of channels produced by the 3DCNN. The sinusoidal positional encoding is chosen to make it easy for the model to learn to attend to relative positions. To obtain the new input for the convolutional BERT, we perform an element-wise addition between the positional encoding and the feature maps obtained from the 3DCNN to obtain the new tensor \(F\in\mathbb{R}^{N\times H\times W\times D}\):
\[F_{n}=S_{n}+P_{n},\qquad n\in\{1,..,N\}. \tag{6}\]
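A minimal sketch of this 3D positional encoding (written in channel-first PyTorch layout and assuming an even number of channels \(D\)) could look as follows:

```python
import torch

def positional_encoding_3d(n_steps, d_channels, height, width):
    """Sinusoidal encoding of eqs. (4)-(5), broadcast over the spatial axes so
    that P matches channel-first feature maps of shape (N, D, H, W)."""
    pos = torch.arange(n_steps, dtype=torch.float32).unsqueeze(1)        # (N, 1)
    even_idx = torch.arange(0, d_channels, 2, dtype=torch.float32)       # the 2i indices
    angles = pos / (10000.0 ** (even_idx / d_channels))                  # (N, D/2)
    p = torch.zeros(n_steps, d_channels)
    p[:, 0::2] = torch.sin(angles)
    p[:, 1::2] = torch.cos(angles)
    # repeat the per-step, per-channel code over every pixel
    return p[:, :, None, None].expand(n_steps, d_channels, height, width)

# eq. (6): element-wise addition with the 3DCNN feature maps S of shape (B, N, D, H, W)
s = torch.randn(2, 33, 64, 32, 32)
f = s + positional_encoding_3d(33, 64, 32, 32)
```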
In this study, we only used information about the position of the image in a sequence. While the observation date could be used as an alternative to the position, this would require adjusting the positional encoding function. Our experiments on the
\begin{table}
\begin{tabular}{l l} \hline Layer & Layer Parameters \\ \hline \hline Conv3d + BN3d & \(11\times 11\times 3\times 64,64\) \\ \hline Conv3d + BN3d & \(5\times 5\times 3\times 128,128\) \\ \hline Conv3d + BN3d & \(3\times 3\times 3\times 64,64\) \\ \hline Conv3d + BN3d & \(3\times 3\times 3\times 64,64\) \\ \hline \end{tabular}
\end{table}
Table 2: 3D CNN architecture where Conv3D is a 3D convolutional element and BN3d is a 3D batch normalization element.
SDSS dataset did not reveal any improvement in the model when using the observation date, as opposed to just using the position. This can be understood because we do the training and the test with the same observation sequence and the network can therefore learn this sequence. On the other hand, not incorporating any information regarding the order of the sequence greatly degraded the performance of the model. As a result, we ultimately chose to use only the position in our model (see Section 4.2 for a discussion).
The newly obtained sequence \(F\) is fed to a multi-head convolutional attention, an improved self-attention that incorporates convolutions. The multi-head convolutional attention is followed by the second component, a small feed-forward network (FFN) with convolutions applied to every attention map. Its primary purpose is to transform the attention output into a form acceptable to the next convolutional BERT layer; the FFN consists of two convolutional layers with a ReLU activation in between.
#### 3.3.2 Multi-head convolutional self-attention
For this process, we used the model proposed by Liu et al. (2021), with a few modifications: we replaced the last linear layer with a convolution layer. We believe that convolution in self-attention is better suited than the dot product between the query and the key because the convolution can more accurately compute the similarity, especially when we have 3D feature maps. Convolutional self-attention encodes a query map and a set of key-map/value-map pairs into an output. The query map, key maps, value maps, and output are all 3D tensors. Figure 5 represents the general architecture of the multi-head ConvAttention.
We used a convolution layer to generate the attention model's query, value, and key. The input to the attention model is \(F\in\mathbb{R}^{N\times H\times W\times D}\). We pass each map through a convolution layer to get \(\{Q,K,V\}\in\mathbb{R}^{N\times H\times W\times D^{\prime}}\), where \(D^{\prime}=D/T\) and \(T\) represents the number of attention heads. Then we apply a sub-network, \(M_{\theta}\), to the query and key maps, which consists of an element-wise sum of the query and key maps followed by another convolution layer, to generate the attention map \(H_{(n,m)}\in\mathbb{R}^{H\times W\times 1}\):
\[H_{(n,m)}=M_{\theta}(Q_{n},K_{m}),\qquad n,m\in\{1,...,N\}. \tag{7}\]
After computing all the attention maps, \(H_{n}=\{H_{(n,1)},H_{(n,2)},....,H_{(n,N)}\}\), where \(H_{n}\in\mathbb{R}^{H\times W\times N}\), we apply a softmax operation along the third dimension, of size \(N\). Then we take an element-wise product between the attention map and the value map following the equation:
\[V^{\prime}_{n}=\sum_{m=1}^{N}SoftMax(H_{n})_{(n,m)}V_{m}. \tag{8}\]
We concatenated the new value representation, \(V^{\prime}_{n}\), obtained from the different attention heads. The multi-head attention is used to attend to input from various representation subspaces jointly:
\[MultiHead(Q,K,V)=Concat(V^{\prime}_{n_{1}},....,V^{\prime}_{n_{T}}). \tag{9}\]
Finally, we apply a convolution layer to merge the outputs of the different heads and obtain a high-level representation that combines all the heads. At the end of the network, we pass the encoded sequence to 3D max-pooling and finally to the classifier to make a prediction.
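The following is a minimal sketch of a single convolutional attention head along these lines; the kernel sizes, the use of 2D convolutions applied per time step, and the toy shapes are assumptions made for illustration:

```python
import torch
import torch.nn as nn

class ConvAttentionHead(nn.Module):
    """One head of convolutional self-attention: convolutions replace the dot product."""
    def __init__(self, d, d_head):
        super().__init__()
        self.q = nn.Conv2d(d, d_head, 3, padding=1)
        self.k = nn.Conv2d(d, d_head, 3, padding=1)
        self.v = nn.Conv2d(d, d_head, 3, padding=1)
        # sub-network M_theta: a conv applied to the element-wise sum of query and key maps
        self.m_theta = nn.Conv2d(d_head, 1, 3, padding=1)

    def forward(self, f):                                   # f: (N, D, H, W), one sequence
        n = f.shape[0]
        q, k, v = self.q(f), self.k(f), self.v(f)           # each (N, D', H, W)
        # H_(n,m) = M_theta(Q_n, K_m): pairwise sum of every query with every key
        h = self.m_theta((q[:, None] + k[None, :]).flatten(0, 1))   # (N*N, 1, H, W)
        h = h.view(n, n, 1, f.shape[2], f.shape[3])
        attn = torch.softmax(h, dim=1)                      # softmax over the key axis m
        # V'_n = sum_m attn_(n,m) * V_m (element-wise product, eq. (8))
        return (attn * v[None, :]).sum(dim=1)               # (N, D', H, W)

# usage sketch: the T = 4 heads would run in parallel and their outputs be concatenated
head = ConvAttentionHead(d=64, d_head=16)
print(head(torch.randn(33, 64, 32, 32)).shape)  # torch.Size([33, 16, 32, 32])
```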
### Evaluation metrics
Accuracy is the probability that an object will be correctly classified. It is defined as the number of true positives plus true negatives divided by the total number of objects tested:
\[Accuracy=\frac{TP+TN}{TP+TN+FP+FN}, \tag{10}\]
where TP, TN, FP, and FN are, respectively, the true positive, true negative, false positive, and false negative.
Figure 5: Convolutional attention (left). Multi-head convolutional attention (right). To obtain the query, key, and value maps, we applied a convolution layer on the feature map obtained from 3DCNN.
The F1 score is a classification metric that combines precision and recall. It is a suitable measure for models tested on imbalanced datasets:
\[Precision=\frac{TP}{TP+FP}, \tag{11}\]
\[Recall=\frac{TP}{TP+FN}, \tag{12}\]
\[F1=2\times\frac{Precision\times Recall}{Precision+Recall}. \tag{13}\]
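For completeness, a small sketch of these metrics computed from a multi-class confusion matrix is shown below; macro-averaging the per-class F1 scores is an assumption about how the reported score is aggregated:

```python
import numpy as np

def accuracy_and_macro_f1(conf):
    """Per-class precision/recall/F1 from a confusion matrix (rows = true class,
    columns = predicted class), averaged over classes; eqs. (10)-(13)."""
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = tp.sum() / conf.sum()
    return accuracy, np.nanmean(f1)

conf = np.array([[50, 5, 3, 2], [6, 40, 2, 2], [4, 3, 60, 3], [2, 2, 4, 30]])
print(accuracy_and_macro_f1(conf))
```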
## 4 Experiments
### Implementation details
The supernovae in our data are not all spectroscopically confirmed, which means that the unconfirmed ones might contain some misclassified objects due to errors in the photometric typing. The model may not generalize well due to this data bias. To ensure that our model generalizes on spectroscopically confirmed data, we split the training process into two steps. We divided the data into two datasets: the first contains only the photometrically typed data, and the second contains the spectroscopically confirmed data. We first trained the model with the photometrically typed data, then we used transfer learning to fine-tune the model on only the spectroscopically confirmed data (Table 5 summarizes the partition of the data). The models are trained using cross-validation with five folds and three ensembles in each fold. All the architectures presented in this paper follow this same process and are implemented using PyTorch (Paszke et al., 2019).
We performed an extensive hyperparameter tuning of over 20 models to specify the best hyperparameters for our architecture, which contains 1.3 million parameters. We conducted hyperparameter optimization using only the non-confirmed dataset, with different parameters such as the sequence length, \(M\), learning rate, \(lr\), 3DCNN sub-sequence length, \(K\), classifier layers' size, number of ConvBERT layers, \(L\), number of multi-head ConvAttention heads, \(T\), batch size, and dropout. We used an Adam optimizer (Kingma & Ba, 2017) with a learning rate of \(10^{-3}\), and we trained the model with the cross-entropy loss and a dropout of 0.3. The hyperparameter tuning includes the number of images \(K\) fed to the 3DCNN and the maximum length of the sequence; the best values were \(K=3\) and \(M=99\), which means the number of steps seen by the convolutional BERT is \(N=33\). The batch size was 128 sequences, and we trained for 100 epochs. We chose the number of convolutional BERT layers to be \(L=2\) and the number of attention heads \(T=4\). Also, the images were normalized band-wise, as each band has different characteristics. We used only four classes (AGN, SNIa, Variable, SNOther) to train all the models. The class marked as "unknown" has not been considered in the study: it corresponds to noisy or very sparse data, it can easily be tagged from sparsity or noise in the image metrics, and we do not expect any improvement in the classification if such objects are added to the training. We trained all models on four GeForce RTX 2080 Ti GPUs; each model takes about three hours to complete training. The implementation will be released upon publication on our GitHub page 1.
Footnote 1: [https://github.com/DaBihy/ConvEntotic](https://github.com/DaBihy/ConvEntotic)
### Results
This section provides studies on SDSS comparing the accuracy and F1 score of our proposed solution with other works. Table 3 summarizes the results of **different models from different deep learning areas**, chosen to diversify our benchmark: it contains RNN architectures (SuperNnova, LSTM), CNN-based models such as SCONE, hybrid models that combine CNN and RNN, such as Carrasco-Davis et al. (2019) and Gomez et al. (2020), and, finally, a transformer-based model. Also, we compared the results using two types of datasets: first, the image dataset and, second, the light curves of the same objects; the goal is to highlight the advantage of using images instead of light curves. Moreover, the different works mentioned in Table 3 were initially proposed for different datasets with different classes and training protocols. Hence, the results do not reflect the quality of these works on other datasets. The goal of the comparison is to give visibility into the performance of our model from a deep learning standpoint and into the importance of using image time series from an astronomy perspective.
Overall, our model ConvEntion obtains the highest accuracy of 79.83% and F1 score of 70.62%, 13 points higher in accuracy than the best results on images by (Gomez et al., 2020) and 12 points higher in accuracy than the best model using light curves. This confirms the advantage of using images over light curves. This advantage can be explained by the fact that the image contains more information than a single value of flux in a light curve. Hence, a model can learn robustly with the existence of more high-level feature maps. Also, ConvEntion performed better compared to the other image-based models, such as Carrasco-Davis et al. (2019). Additionally, transformers give a remarkable computational advantage because transformers avoid recursion and allow for parallel computation, thus reducing the training time. Our model took only three hours to train, compared to other image-based models which took five hours of training on our GPUs. Our model achieved better results using fewer parameters, compared to the other models trained on image sequences. The main benefit of using a transformer is that it reduces the drop in performance due to long dependencies. Transformers do not rely on past hidden states to capture dependencies with previous features such as RNNs. They instead process a sequence as a whole. Therefore, there is no risk of losing past information. Also, the integration of a spatio-temporal feature extraction helped in getting a better high-level representation of the sequence, in comparison to separating the spatial features from the temporal ones. The two types of features have correlations that may help the model to better discriminate between objects. We can also highlight the importance of separating the band to mitigate the impact of missing observations. Our model performed well, in comparison to that of Gomez et al. (2020) which uses multiple bands, which shows that separating the bands and adding band embedding works better than feeding the network with empty bands.
In the study of Carrasco-Davis et al. (2019), the authors trained their model on a dataset that only has a "g" band, and they noted that the model could be adapted to classify image sequences combining information from multiple bands. For the sake of comparison, we trained the image models with all the "ugriz" bands at first and then with only the "g" band. Our model achieved an accuracy of 76.89% and an F1 score of 63.20% using only the "g" band, a drop of about 7% compared to using multiple bands. Meanwhile, Carrasco-Davis et al. (2019) achieved 63% in accuracy and 60% in their F1 score. This shows that our model is more efficient when using multiple bands. This
also highlights the role of band separation in mitigating the impact of the missing observations.
Figure 6 illustrates the confusion matrix obtained by ConvEntion and shows that the model classifies the supernovae well. Most of the misclassified SNIa are associated with SNOther and vice versa, which is not a serious error. This behavior is even expected, since all types of supernovae share many similarities that may confuse the model. Additionally, with a small dataset like ours, such behavior is normal because the model does not have enough samples to fully discriminate among objects. Meanwhile, variables were the best-classified class in our dataset, with only slight confusion with the AGN; this misclassification between AGN and variables can be explained by the class imbalance in our dataset, given that the number of variables is higher than that of the other classes.
Table 4 summarizes the results of different models trained on only three classes (AGN, SN, Variable), where the classes SNIa and SNOther are combined into a single class. The goal of this experiment is to see the behavior of our model in discriminating between transient and non-transient objects. We obtained the best results, with an accuracy of 83.90% and an F1 score of 75.77%. The model was able to classify the SN accurately, with a score of 86% (as shown in Figure 7).
The model is able to effectively process a given survey without any loss in performance and without the requirement of
\begin{table}
\end{table}
Table 3: Performance comparison in terms of average F1 score and the average of the accuracy of five folds of cross-validation. This table includes only experiments on a dataset with four classes.

\begin{table}
\end{table}
Table 4: Performance comparison in terms of average F1 score and the average of the accuracy of five folds of cross-validation. This table includes only experiments on a dataset with three classes.
providing it with the time information for each image. However, when there is a covariate shift, or a mismatch, between the training set and the test set (as when using a different dataset with a different observation sequence), incorporating the time information can improve the results. This experimental finding will be further studied and reported in future work using other datasets.
## 5 Conclusion
In this work, we present a method for efficient astronomical image time series classification that is entirely based on the combination of convolutional networks and transformers. Inspired by action recognition and satellite image time series classification, we propose a model, ConvEntion, that utilizes convolutions and transformers jointly to capture complex spatio-temporal dependencies between distinct steps, leading to accurate predictions based on different observations of an object. Our model improves accuracy by a large margin of 13% compared to state-of-the-art methods using image data, and it also outperforms approaches using light curves.
Our model achieves good results on the SDSS dataset, while also being faster thanks to using fewer parameters and parallel computational processes, making it a good candidate for latency-sensitive applications such as the real-time thumbnail classifier of astronomical events. Meanwhile, our benchmark stands as clear evidence of the importance of images in the domain of astronomy. Indeed, the images contain more information than the normal light curves, even if they present more difficulties. In the future, we plan to scale up ConvEntion using self-supervised learning to investigate whether the model can generalize even better. With a large amount of unlabeled data in astronomy, we believe that the next step to advance AITS classification is creating self-supervised models.
###### Acknowledgements.
This work has been carried out thanks to the support of the DEEPIDP ANR project (ANR-19-CE31-0023). This work makes use of Sloan Digital Sky Survey (SDSS) data. Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III website is [http://www.sdss3.org/](http://www.sdss3.org/). SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Institute of Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University.
|
2305.08885 | Smart Home Energy Management: VAE-GAN synthetic dataset generator and
Q-learning | Recent years have noticed an increasing interest among academia and industry
towards analyzing the electrical consumption of residential buildings and
employing smart home energy management systems (HEMS) to reduce household
energy consumption and costs. HEMS has been developed to simulate the
statistical and functional properties of actual smart grids. Access to publicly
available datasets is a major challenge in this type of research. The potential
of artificial HEMS applications will be further enhanced with the development
of time series that represent different operating conditions of the synthetic
systems. In this paper, we propose a novel variational auto-encoder-generative
adversarial network (VAE-GAN) technique for generating time-series data on
energy consumption in smart homes. We also explore how the generative model
performs when combined with a Q-learning-based HEMS. We tested the online
performance of Q-learning-based HEMS with real-world smart home data. To test
the generated dataset, we measure the Kullback-Leibler (KL) divergence, maximum
mean discrepancy (MMD), and the Wasserstein distance between the probability
distributions of the real and synthetic data. Our experiments show that
VAE-GAN-generated synthetic data closely matches the real data distribution.
Finally, we show that the generated data allows for the training of a
higher-performance Q-learning-based HEMS compared to datasets generated with
baseline approaches. | Mina Razghandi, Hao Zhou, Melike Erol-Kantarci, Damla Turgut | 2023-05-14T22:22:16Z | http://arxiv.org/abs/2305.08885v1 | # Smart Home Energy Management:
###### Abstract
Recent years have witnessed an increasing interest among academia and industry in analyzing the electrical consumption of residential buildings and employing smart home energy management systems (HEMS) to reduce household energy consumption and costs. HEMS have been developed to simulate the statistical and functional properties of actual smart grids. Access to publicly available datasets is a major challenge in this type of research. The potential of artificial HEMS applications will be further enhanced with the development of time series that represent different operating conditions of the synthetic systems.
In this paper, we propose a novel variational auto-encoder-generative adversarial network (VAE-GAN) technique for generating time-series data on energy consumption in smart homes. We also explore how the generative model performs when combined with a Q-learning-based HEMS. We tested the online performance of Q-learning-based HEMS with real-world smart home data. To test the generated dataset, we measure the Kullback-Leibler (KL) divergence, maximum mean discrepancy (MMD), and the Wasserstein distance between the probability distributions of the real and synthetic data. Our experiments show that VAE-GAN-generated synthetic data closely matches the real data distribution. Finally, we show that the generated data allows for the training of a higher-performance Q-learning-based HEMS compared to datasets generated with baseline approaches.
synthetic data, load consumption, smart grid, deep learning, generative adversarial network, q-learning
## I Introduction
The design of modern smart homes must take into consideration energy-efficient technologies and renewable energy sources. Recent advances in smart grid research include new technologies and strategies for managing energy generation, storage, supply, and demand [1]. Smart homes, as a critical component of the smart grid, are projected to increase household energy efficiency, save energy costs, and improve user comfort [2]. Utilities and smart home controllers can acquire information from smart meters for use in forecasting consumption and generation, demand-side management, and economic power dispatch [3]. Thus, acquiring fine-grained data about the living environment is becoming an essential precondition for a smart home.
It has become increasingly common to use artificial intelligence and statistical analysis for demand response and energy management tasks requiring accurate short-term (second-to-minute scale) representations of load behavior. These techniques usually require large datasets of representative data for training. However, the collection of such data presents significant security and privacy challenges, with relatively few high-quality publicly available datasets [4].
Therefore, synthetic data generation has become an attractive alternative for training machine learning algorithms that can decide, for example, the optimal time for implementing demand response or charging an EV [5]. A variety of techniques have been used to generate such datasets, including Markov chains [6], statistical models [7], and physical simulator-based methods [8]. A major drawback of these methods is that they target a limited number of problems, are tailored to specific scenarios, and lack scalability. Specifically, these methods require case-by-case analyses for each device, user behavior, and environment, and it is impractical to model all the detailed energy consumption behaviors and state transitions when a large number of user devices are involved. Recent studies, on the other hand, have utilized deep learning-based approaches that can be applied to large-scale datasets and can operate directly on raw data, without the need for further analysis [9].
In our recent paper [10], we generated synthetic electric load profiles and PV generation data for a smart home using variational autoencoder-generative adversarial networks (VAE-GAN), and we compared the distributions of the synthetic data produced by the VAE-GAN model and by a vanilla GAN network to the real data distributions. According to statistical metrics such as the KL divergence, Wasserstein distance, and maximum mean discrepancy, the distribution of data points in the synthetic data was extremely close to that of the real data, for both PV power generation and energy consumption load profiles. Nevertheless, a key defining characteristic of a realistic time series is that its temporal structure is preserved, not only its marginal distribution. For instance, if PV power production shifted earlier or later in the day rather than following sunrise and sunset patterns, the distribution would still be similar to the real data, but the series would not be realistic. The addition of synthetic electric vehicle (EV) charging load data could also provide additional insights and value in understanding smart home electricity consumption patterns. Finally, no experiments were conducted to investigate whether the synthetic dataset is applicable in real-life practice. To address these shortcomings, we propose a revised architecture for the variational autoencoder-generative adversarial network (VAE-GAN) technique for generating time-series data of smart homes from a variety of synthetic data sources and investigate the performance of synthetic data
generative models in the presence of control operations. In contrast to the schemes mentioned above, this strategy enables the learning of various types of data distributions in a smart home, including electric load profiles, PV power generation, and EV charging load consumption, and can subsequently generate plausible samples without any prior analysis before the training phase. The VAE-GAN architecture is favored in this model since it allows us to fine-tune the regulated latent space that influences the generated output. In contrast, in a vanilla GAN the generator maps input noise directly to the generated output, which makes it susceptible to mode collapse, where the generator repeatedly produces the same optimal sequence to mislead the discriminator [11]. Furthermore, we compare one model-based and two data-driven generative models: the Gaussian Mixture Model (GMM), the vanilla GAN, and the VAE-GAN.
Taking advantage of the synthetic data, we propose a Q-learning-based smart home energy management system (HEMS). The generated data is used for the offline training of Q-learning HEMS agents, which aims to maximize long-term management profit. Then, the trained agents are tested online in an environment based on real-world data. Finally, we compare online profits to further investigate how the data generation method will affect the HEMS performance. The idea behind this scheme is that we assume good-quality synthetic data in the offline training can better prepare the agent for real-world operations [12]. Thus, the performance of the Q-learning-based HEMS is used to demonstrate the quality of the synthetic data.
Compared with conventional model-based optimization algorithms, such as convex optimization, the reinforcement learning (RL) based method can avoid the complexity of defining a sophisticated optimization model since the optimization problem can be transformed into a unified Markov decision process (MDP) scheme. On the other hand, most existing HEMS models assume a perfect environment for data collection and algorithm training [13]. By contrast, the proposed Q-learning HEMS uses synthetic data for training, overcoming the data availability bottleneck.
The main contributions of this paper are as follows:
* We propose a VAE-GAN-based scheme to generate high temporal resolution synthetic time-series data for energy consumption in a smart home.
* We compare the performance of the proposed approach with techniques based on Gaussian Mixtures and vanilla GANs.
* We further investigate the quality of the synthetic data by using it to train a Q-learning-based HEMS model. Evaluating the HEMS agent online in environments based on real-world data, we find that the VAE-GAN method allows higher HEMS profit than other baselines.
The rest of this paper is organized as follows. Section II introduces related work. Section III shows the proposed smart home synthetic data generation model and Q-learning-based HEMS model. Section IV presents the simulation settings and results, and Section V concludes the paper.
## II Related Work
The majority of previous work in generating synthetic energy consumption data for smart homes can be classified into model-based or data-driven approaches. Model-based approaches describe the features of household devices with hand-crafted features and mathematical equations. For instance, [14] proposes a bottom-up residential building energy simulation, which simulates occupant behavior patterns with a Markov chain clustering algorithm. [6] combined non-intrusive load decomposition and Markov chain methods for user energy consumption simulation. [8] defines a smart residential load simulator based on MATLAB-Simulink, and it includes dedicated physical models of various household devices. [7] and [15] utilize Gaussian Mixture Modeling to estimate the distribution of the arrival and departure time of electric vehicles (EVs) and to perform temporal modeling of the charging sessions.
On the other hand, recent developments in using GANs have achieved great success in producing synthetic time-series data. [16] coupled non-intrusive load decomposition with a conditional GAN to generate synthetic labeled (e.g., appliances) load for a smart home. For smart grid applications, [17] proposes a GAN-based approach that captures spatial and temporal correlations between renewable energy sources. [18] applies a GAN to the features derived by ARIMA and Fourier transform for generating realistic energy consumption data. [4] uses a GAN to learn the level and pattern features of smart home time-series data, both of which are defined by household consumption and activity, and generate synthesized data with a distribution similar to the real data. [19] clusters daily load profiles with known clustering techniques and uses a GAN to synthesize daily loads for each cluster. [20] proposed a GAN with a Wasserstein loss function to learn temporal and power patterns of EV charging sessions and create a synthetic load dataset.
Reinforcement learning-based energy management models have been extensively investigated in the smart grid. For example, a correlated Q-learning-based approach is proposed in [5] for microgrid energy management, and a multi-agent reinforcement learning method is introduced in [21] to minimize the energy cost of different smart home devices. A self-learning HEMS model is presented in [22], which includes price forecasting, price clustering, and power alert systems. [23] introduced a Bayesian deep reinforcement learning method for microgrid energy management under communication failure.
However, most of the aforementioned works make the strong assumption of a perfect data collection and agent training environment. In a real-world environment, fine-grained and good-quality data may be inaccessible due to privacy concerns or measurement errors. As such, synthetic data-based HEMS is much more realistic, since the agent can use pre-generated, fine-grained data for training.
## III Methodology
The overall architecture of the proposed approach is described in Fig. 1, and the organization of this work is introduced as followings:
* **Synthetic data generation:** Given a real-world smart home dataset, we first apply the VAE-GAN-based method to produce synthetic data, including the smart home load profiles, PV power generation, and EV charging load consumption, which is introduced in the following Section III-A to D.
* **Q-learning based HEMS training:** Next, the generated synthetic data is used for the offline training of Q-learning based HEMS. The intelligent HEMS agent will interact with a synthetic data-based environment and produces HEMS strategies to maximize long-term profit, which is included in Section III-E.
* **Real-world HEMS operation test:** Finally, the trained HEMS agent will be tested in a real-world data-based environment. The test is designed to evaluate the real-world performance of the synthetic data-based HEMS, providing an in-depth evaluation of the synthetic data quality. Specifically, it is expected to demonstrate that our synthetic data can be applied to HEMS training, and the agent can obtain a satisfying real-world performance without touching real-world data.
Among deep learning-based generative models, GANs and variational autoencoders are two of the best known. In the proposed network, the encoder module first encodes the input sequence as a Gaussian distribution over the latent space. Then, the supervisor module trains the encoder module to approximate the next time step. In the generator module, the input sequence is reconstructed from the latent space in an attempt to fool the discriminator. On the other hand, the discriminator module trains the generator to create realistic sequences by identifying fake samples.
In the following, we first introduce the autoencoder (AE) and the variational autoencoder (VAE), then we present the generative adversarial network (GAN) and VAE-GAN generative models. Finally, we describe the Q-learning-based HEMS control.
### _Autoencoder (AE)_
An autoencoder is a neural network architecture that uses an unsupervised learning technique to compress the input dimensions into a compressed knowledge representation, called latent space, and then reverses the compressed knowledge representation into the original input dimensions. This architecture has two essential modules: the encoder and the decoder. The encoder module maps the input sequence (\(x\)) into the meaningful latent space (\(z\)), and based on \(z\) the decoder module outputs a reconstruction of the original input sequence (\(\hat{x}\)). The original input is unlabeled, but the decoder is trained to make reconstructions as close as possible to the original input by minimizing the reconstruction error, \(\mathcal{L}_{reconstr}\), which is the distance between the original input and the subsequent reconstruction.
\[\mathcal{L}_{reconstr}=\|\hat{x}-x\|^{2} \tag{1}\]
where \(x\) and \(\hat{x}\) are the original input and the reconstruction, respectively.
### _Variational Autoencoder (VAE)_
After training the autoencoder model, the decoder module can produce new content given a random latent vector. However, due to the unregulated latent space, which can lead to severe overfitting, the decoder may not be able to interpolate meaningfully beyond the data points included in the input sequence. Latent space regularity depends on the distribution of the input sequence and the dimension of the latent space, hence it is not always guaranteed. As a solution to this limitation, variational autoencoders include a regularization term, such as the Kullback-Leibler (KL) divergence, in the learning process to push the latent space towards a Gaussian distribution and avoid overfitting. In this architecture, the VAE encoder (\(E\)) encodes the input sequence (\(x\)) as a distribution over the latent space, defined by the parameters of a Gaussian distribution: its first moment, the mean, and its second moment, the standard deviation. Ultimately, the encoder is trained to minimize the \(\mathcal{L}_{prior}\) loss and output a latent space with a Gaussian distribution. In the training process, the VAE minimizes a loss function (\(\mathcal{L}_{VAE}\)) that consists of two terms: the reconstruction term and the regularization term, which is the KL divergence between the latent space distribution and the standard Gaussian distribution.
\[\mathcal{L}_{prior}=D_{KL}(E(x)||\mathcal{N}(0,1)) \tag{2}\]
\[\mathcal{L}_{VAE}=\mathcal{L}_{prior}+\mathcal{L}_{reconstr} \tag{3}\]
where \(D_{KL}\) is the KL divergence, \(E\) is the VAE encoder, \(\mathcal{N}(0,1)\) is the standard Gaussian distribution, and \(\mathcal{L}_{prior}\) is the regularization loss.
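A minimal sketch of the reparameterization step and of the closed-form KL term of equation (2), with assumed feature and latent dimensions, is shown below:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAEHead(nn.Module):
    """Maps an encoder feature vector to (mu, log-variance) and samples the
    latent z with the reparameterization trick; sizes are illustrative."""
    def __init__(self, feat_dim=128, latent_dim=32):
        super().__init__()
        self.mu = nn.Linear(feat_dim, latent_dim)
        self.log_var = nn.Linear(feat_dim, latent_dim)

    def forward(self, h):
        mu, log_var = self.mu(h), self.log_var(h)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)   # reparameterization
        # eq. (2): KL divergence between N(mu, sigma) and N(0, 1), in closed form
        l_prior = -0.5 * torch.mean(1 + log_var - mu.pow(2) - log_var.exp())
        return z, l_prior

# eq. (3): total VAE loss = regularization term + reconstruction term (MSE, eq. (1))
z, l_prior = VAEHead()(torch.randn(16, 128))
x, x_hat = torch.randn(16, 96), torch.randn(16, 96)
l_vae = l_prior + F.mse_loss(x_hat, x)
```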
### _Generative Adversarial Network (GAN)_
Generative models, such as GAN, are used to generate new data samples. This architecture consists of two neural networks: one attempts to detect and learn patterns in the input data and produce a new sample, while the other aims to
Fig. 1: Overall architecture of the proposed scheme
distinguish between real input data and the synthesized sample. These two networks play against each other and actively seek equilibrium.
* **Generator (\(G\)):** Takes an input noise \(z\), which usually has a normal distribution, and maps that into data samples \(G(z;\theta_{g})\), with \(\theta_{g}\) as the network parameter.
* **Discriminator (\(D\)):**\(D(x;\theta_{d})\) returns the likelihood of \(x\) being classified as a member of the original data, where 0 represents fake data and 1 represents original data.
\(G\) and \(D\) engage in an adversarial game, equation (4), where \(G\) attempts to fool the \(D\) and maximize the final classification error, while \(D\) is trained to classify fake data more accurately and minimize the error.
\[\begin{split}\underset{G}{min}\underset{D}{max}\mathcal{L}_{GAN}(D,G)&=E_{x}[log(D(x))]\\ &\quad+E_{z}[log(1-D(G(z)))]\end{split} \tag{4}\]
where \(G\) and \(D\) are the generator and discriminator defined above, and \(x\) and \(z\) are the input sequence and the noise, respectively.
### _Data-driven Generative Model_
To generate synthetic smart home data we adopted a VAE-GAN architecture, as presented in Fig. 4, which avoids the shortcomings of the vanilla GAN or VAE. For example, a vanilla GAN is prone to mode collapse, a situation where the generator finds an input sequence that fools the discriminator and produces it over and over again. The VAE-GAN architecture, introduced by Larsen et al. [24], generates new data samples based on a regulated latent space rather than producing new data samples from a noise input. The discriminator then classifies the generated sample. Considering that smart home data is time-series data, it is essential that the encoder not only detects the underlying pattern and distribution in the input data but also retains the realistic order in which events occur. As a result, we took TimeGAN's [25] concept of incorporating a supervisor module into our VAE-GAN architecture. We discuss each module in more detail:
As opposed to mapping the input sequence directly into a meaningful latent space, the **encoder** module (\(E\)) here translates the original input sequence \(x\) into two vectors that represent the mean (\(\mu\)) and variance (\(\sigma\)) of a standard Gaussian distribution. Minimizing \(\mathcal{L}_{prior}\) (equation (2)) forces the encoder to compress the data over a standard Gaussian distribution. \(\mu\) and \(\sigma\) are mapped into the latent space \(z\) using the _reparameterization_ technique. The encoder uses five layers of **Dilated One Dimensional Convolutions** (dilated Conv1D) stacked one on top of the other (see Fig. 4). This architecture promises similar performance but lower complexity and faster convergence compared to the Long Short-Term Memory (LSTM) networks typically used for time-series analysis. The Conv1D layer architecture resembles the WaveNet architecture [26]: each layer has a stride length of one and a kernel size of two, but the dilation rate varies from layer to layer. As the second layer dilates at every second timestep of the input sequence (skipping every other timestep), the third layer at every fourth timestep, and so on, the lower layers focus on short-term dependencies while the higher layers capture long-term dependencies.
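A minimal sketch of such a dilated Conv1D encoder is given below; the channel counts, the latent dimension, and the pooling applied before the \(\mu\)/\(\log\sigma^{2}\) heads are assumptions made for illustration:

```python
import torch
import torch.nn as nn

class DilatedConvEncoder(nn.Module):
    """Five stacked dilated Conv1D layers (kernel size 2, stride 1, dilation
    doubling at every layer) mapping a daily profile to the latent mean and
    log-variance."""
    def __init__(self, in_ch=1, hidden=32, latent_dim=16):
        super().__init__()
        layers, ch = [], in_ch
        for layer_idx in range(5):
            layers += [nn.Conv1d(ch, hidden, kernel_size=2, dilation=2 ** layer_idx),
                       nn.ReLU()]
            ch = hidden
        self.convs = nn.Sequential(*layers)      # receptive field grows to 32 steps
        self.to_mu = nn.Linear(hidden, latent_dim)
        self.to_logvar = nn.Linear(hidden, latent_dim)

    def forward(self, x):                        # x: (B, 1, T), e.g. T = 96 for one day
        h = self.convs(x).mean(dim=-1)           # pool over the remaining time axis
        return self.to_mu(h), self.to_logvar(h)

mu, log_var = DilatedConvEncoder()(torch.randn(8, 1, 96))   # both of shape (8, 16)
```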
The **supervisor** module computes the distance between the latent representation at the current time step (\(z\)) and at the next time step (\(\hat{z}\)), minimizing \(\mathcal{L}_{supervisor}\). Adding this term to the encoder loss function, equation (6), allows the encoder to approximate the next time step in the latent space, retaining the original order of events.
\[\mathcal{L}_{supervisor}=\left\|z-\hat{z}\right\|^{2} \tag{5}\]
\[\mathcal{L}_{E}=\mathcal{L}_{prior}+\mathcal{L}_{supervisor} \tag{6}\]
The **generator** module (\(G\)) is trained to reconstruct the original input sequence, given the latent space \(z\) as input, by minimizing the mean squared error (MSE) between the reconstructed sequence and the original input sequence together with the prior term (\(\mathcal{L}_{reconstr}+\mathcal{L}_{prior}\)). The generator loss also contains the term \(\mathcal{L}_{dG}\), computed as in equation (7), quantifying the likelihood of the discriminator classifying the reconstructed sequence as a fake sequence. The goal is to make the reconstructed sequence so realistic that it can mislead the discriminator.
\[\mathcal{L}_{dG}=E_{z}[log(1-D(G(z)))] \tag{7}\]
Fig. 2: VAE-GAN **encoder** module structure.
Fig. 3: VAE-GAN **encoder**, **supervisor**, and **discriminator** modules structure.
\[\mathcal{L}_{generator}=\mathcal{L}_{prior}+\mathcal{L}_{reconstr}+\mathcal{L}_{ dG}+\mathcal{L}_{supervisor} \tag{8}\]
\[\mathcal{L}_{real}=E_{x}[log(D(x))] \tag{9}\]
\[\mathcal{L}_{fake}=E_{z}[log(1-D(G(z)))] \tag{10}\]
\[\mathcal{L}_{noise}=E_{z}[log(1-D(\mathcal{N}(0,1)))] \tag{11}\]
\[\mathcal{L}_{D}=\mathcal{L}_{real}+\mathcal{L}_{fake}+\mathcal{L}_{noise} \tag{12}\]
The **discriminator** module (\(D\)) is responsible for classifying the original input sequence and fake data samples. Its objective \(\mathcal{L}_{D}\), equation (12), has three terms: \(\mathcal{L}_{real}\), equation (9), the log-likelihood of the original input data being classified as real; \(\mathcal{L}_{fake}\), equation (10), the log-likelihood of the reconstructed data sample being classified as fake; and \(\mathcal{L}_{noise}\), equation (11), the log-likelihood of a random noise input being classified as fake, which is added to improve the convergence of the discriminator. During training, the discriminator seeks to maximize \(\mathcal{L}_{D}\), i.e., gradient descent is performed on its negative.
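A minimal sketch of how the adversarial terms of equations (7) and (9)-(12) can be computed is shown below, assuming the discriminator outputs probabilities in \((0,1)\); the small constant added for numerical stability is an implementation detail, not part of the equations:

```python
import torch

def discriminator_objective(d_real, d_fake, d_noise, eps=1e-8):
    """Eqs. (9)-(12): the discriminator maximizes this sum, so gradient
    descent is run on its negative."""
    l_real = torch.log(d_real + eps).mean()           # eq. (9)
    l_fake = torch.log(1.0 - d_fake + eps).mean()     # eq. (10)
    l_noise = torch.log(1.0 - d_noise + eps).mean()   # eq. (11)
    return l_real + l_fake + l_noise                  # eq. (12)

def generator_adversarial_term(d_fake, eps=1e-8):
    """Eq. (7): the generator minimizes this term (together with the prior,
    reconstruction, and supervisor terms), pushing D(G(z)) towards 1."""
    return torch.log(1.0 - d_fake + eps).mean()

# usage sketch with placeholder discriminator scores in (0, 1)
d_real, d_fake, d_noise = torch.rand(16), torch.rand(16), torch.rand(16)
d_loss = -discriminator_objective(d_real, d_fake, d_noise)   # descend on the negative
g_adv = generator_adversarial_term(d_fake)
```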
### _Q-learning based HEMS control_
In this section, we introduce the Q-learning-based HEMS model. As shown in Fig. 5, the components of the smart home involved in the energy production and consumption include the PV panels, loads, EV, and the ESS:
* **PV**: The PV is considered a pure energy supplier. The PV power will first serve the energy demand of EVs and other smart home devices, then the surplus energy can be used for ESS charging or selling to the energy trading market for profit.
* **EV and other smart home loads**: EV and other smart home loads are pure energy consumers. They will first receive the power supply from PV or ESS for operation or buy electricity from the market if the internal power supply is insufficient.
* **ESS**: Finally, the ESS can be a power supplier when discharging, or a consumer when charging. For charging, it uses the surplus PV power or buys electricity from the market. For discharging, it supplies energy to EVs and smart home devices or sells electricity to the energy trading market for profit.
The HEMS model takes advantage of the flexibility of ESS to minimize the total energy cost and maximize profit. The optimization task of the centralized HEMS model is described as follows:
\[\max\ \sum_{t=1}^{T}P^{total}(\gamma p_{t}^{sell}+(1-\gamma)p_{t}^{buy}) \tag{13}\]
s.t.
\[P^{total}=P_{t}^{ESS}+P_{t}^{PV}-P_{t}^{L}-P_{t}^{EV} \tag{13a}\]
\[\gamma=\mathbb{1}\{P^{total}>0\} \tag{13b}\]
\[P_{t}^{ESS}=P_{ch}q_{t} \tag{13c}\]
\[q_{t}=\left\{\begin{array}{rl}-1,&ESS\ charges\\ 0,&ESS\ unchanged\\ 1,&ESS\ discharges\end{array}\right. \tag{13d}\]
\[Soc_{t+1}=Soc_{t}-\frac{P_{t}^{ESS}}{C^{ESS}} \tag{13e}\]
\[Soc_{min}\leq Soc_{t}\leq Soc_{max} \tag{13f}\]
where \(T\) is the total optimization period, and \(p_{t}^{sell}\) and \(p_{t}^{buy}\) represent the price of selling energy to the energy trading market and of buying energy from the market, respectively. \(P_{t}^{ESS}\), \(P_{t}^{PV}\), \(P_{t}^{L}\), and \(P_{t}^{EV}\) represent the power of the ESS, the PV, the smart home load, and the EV charging load at time slot \(t\), respectively. \(P^{total}\) denotes the net power of the smart home (positive for a surplus, negative for a deficit), and (13a) is the energy balance constraint. \(\mathbb{1}\{P^{total}>0\}=1\) when \(P^{total}>0\), which means the agent sells surplus energy for profit; otherwise, \(\mathbb{1}\{P^{total}>0\}=0\) when \(P^{total}<0\), which means buying energy from the market. \(P_{ch}\) is the fixed ESS charging power, and \(q_{t}=-1,0,1\) when the ESS charges, remains unchanged, and discharges, respectively. \(C^{ESS}\) is the fixed capacity of the ESS, and \(Soc\) is the ESS state of charge (SOC). Equations (13c) to (13e) are the ESS operation constraints, and (13f) is the SOC upper and lower bound constraint.
To transform this optimization task into the context of Q-learning, we define a Markov decision process (MDP) as follows.
* **State:** The agent state is defined as \(s_{t}=\{Soc_{t},P_{t}^{PV},P_{t}^{Load}\}\), where \(P_{t}^{Load}=P_{t}^{L}+P_{t}^{EV}\) represents the total energy consumption of smart home.
* **Action:** Based on the total energy demand and PV power generation, the agent decides on the action \(a_{t}=q_{t}\), which indicates the charging, discharging, or remaining unchanged status of ESS.
* **Reward:** By selecting actions intelligently, the agent intends to maximize the total profit in the optimization period, and the reward function is defined by: \[r_{t}=P^{total}(\gamma p_{t}^{sell}+(1-\gamma)p_{t}^{buy}),\] (14) which is the objective function of our problem formulation equation (13).
In Q-learning, the agent aims to maximize the long-term expected reward
\[V^{\pi}(s)=E_{\pi}[\sum_{n=0}^{\infty}\gamma^{n}r(s_{n},a_{n})|s=s_{0}], \tag{15}\]
where \(s_{0}\) is the initial state, \(n\) is the number of iterations, \(E_{\pi}\) is the expected value under action selection policy \(\pi\), \(r(s_{n},a_{n})\) is the reward of selecting action \(a_{n}\) under state \(s_{n}\), and \(\gamma\)
Fig. 5: The proposed smart home energy management system
is the reward discount factor (with a slight abuse of notation, this \(\gamma\) is distinct from the buy/sell indicator \(\gamma\) in (13)). A lower \(\gamma\) means focusing on immediate rewards, and a higher \(\gamma\) indicates that future rewards are more important.
Then, we define the state-action value
\[\begin{split} Q^{new}(s_{n},a_{n})=Q^{old}(s_{n},a_{n})+\\ \alpha(r_{n}+\gamma\max_{a}Q(s_{n+1},a)-Q^{old}(s_{n},a_{n})), \end{split} \tag{16}\]
where \(Q^{old}(s_{n},a_{n})\) and \(Q^{new}(s_{n},a_{n})\) are old and new Q-values, respectively, indicating the accumulated reward of selecting action \(a_{n}\) under state \(s_{n}\). \(s_{n+1}\) is the state of \(n+1\) iteration, and \(\alpha\) is the learning rate. A high learning rate will lead to a fast learning process, but the results can be unstable; otherwise, a lower learning rate may result in very slow convergence and a long training time.
In addition, we use the \(\epsilon\)-greedy policy for the action selection.
\[\pi(s)=\left\{\begin{array}{l}\arg\ \max_{a}\ Q(s,a),\ \ \ \ rand>\epsilon,\\ \text{random action selection},\ \ \ rand\leq\epsilon.\end{array}\right. \tag{17}\]
where \(rand\) represents a random number \((0\leq rand\leq 1)\), and \(\epsilon<1\). When \(rand>\epsilon\), the agent takes greedy policy to select the action with maximum Q-value; otherwise, the action is selected randomly for exploration. \(\epsilon\)-greedy policy can balance the exploration and exploitation of the Q-learning agent to maximize the long-term expected reward.
Finally, the Q-learning-based HEMS algorithm is summarized in Algorithm 1, which consists of two phases. In the offline training phase, given the generated synthetic data, the intelligent agent is trained to maximize the total profit. Then, in the online test phase, we apply real-world data for online operation, which aims to test the performance of agents that are trained on synthetic data from various datasets.
```
1:Initialize: Q-learning and smart home parameters
2:Phase 1: Offline training using synthetic data:
3:Input: generated synthetic data of smart home load, EV load, and PV power generation.
4:for\(episode=1\) to \(E\)do
5: With probability \(\epsilon\) choose action \(a\) randomly. Otherwise, \(a=arg\ max(Q(s,a))\)
6: Agent calculates reward based on equation (13).
7: Update agent state \(s=\{Soc_{t},P_{t}^{PV},P_{t}^{Load}\}\) and Q-value:
8:\(Q(s,a)=(1-\alpha)Q(s,a)+\alpha(r+\gamma maxQ(s^{\prime},a^{\prime}))\)
9:endfor
10:Output: Trained Q-learning based HEMS strategy.
11:Phase 2: Online test using real-world data:
12:Input: Real-world data.
13: Deploy the pre-trained Q-learning agent for an operation test.
14:Output: HEMS profit under real-world data.
```
**Algorithm 1** Q-learning for HEMS
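A minimal tabular sketch of Algorithm 1 is given below; the state discretization, the SOC bounds, the slot length, and the flat price profiles are placeholder assumptions (the experiments use the price pattern of [5]), and `GAMMA_RL` denotes the discount factor to avoid clashing with the indicator \(\gamma\) of (13):

```python
import numpy as np

ACTIONS = [-1, 0, 1]                    # charge / idle / discharge, eq. (13d)
P_CH, C_ESS, DT = 4.0, 16.0, 0.25       # kW, kWh, hours per 15-minute slot
ALPHA, GAMMA_RL, EPS = 0.8, 0.7, 0.05   # learning rate, discount, exploration
N_SOC, N_PV, N_LOAD = 9, 8, 8
Q = np.zeros((N_SOC, N_PV, N_LOAD, len(ACTIONS)))

def profit(p_total, price_sell, price_buy):
    """Reward of eq. (14): sell any surplus, otherwise buy the deficit."""
    return p_total * (price_sell if p_total > 0 else price_buy)

def discretize(soc, pv, load):
    return (int(soc * (N_SOC - 1)), min(int(pv), N_PV - 1), min(int(load), N_LOAD - 1))

def train_episode(pv, load, price_sell, price_buy, soc=0.5):
    for t in range(len(pv)):
        s = discretize(soc, pv[t], load[t])
        a = np.random.randint(len(ACTIONS)) if np.random.rand() < EPS else int(np.argmax(Q[s]))
        p_ess = P_CH * ACTIONS[a]
        soc = np.clip(soc - p_ess * DT / C_ESS, 0.1, 0.9)   # eqs. (13e)-(13f), assumed bounds
        r = profit(p_ess + pv[t] - load[t], price_sell[t], price_buy[t])
        t2 = min(t + 1, len(pv) - 1)
        s2 = discretize(soc, pv[t2], load[t2])
        Q[s + (a,)] += ALPHA * (r + GAMMA_RL * np.max(Q[s2]) - Q[s + (a,)])   # eq. (16)

# phase 1 of Algorithm 1: offline training on synthetic daily profiles; phase 2
# reuses the frozen Q table on real-world profiles and records the online profit
rng = np.random.default_rng(0)
for _ in range(100):
    train_episode(pv=rng.uniform(0, 6, 96), load=rng.uniform(0, 6, 96),
                  price_sell=np.full(96, 0.05), price_buy=np.full(96, 0.15))
```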
## IV Evaluation Study
### _Dataset_
#### IV-A1 SmartHome Dataset
The iHomeLab RAPT dataset [27] is a real-world dataset of residential power traces. Five households in Switzerland were surveyed for 1.5 to 3.5 years with a sampling frequency of 5 minutes. The dataset contains residential electricity consumption and PV energy production, including appliance-level and aggregated household consumption data. The PV power generation shows recurring patterns with noise originating from rain and clouds. Consumption patterns show weekly trends, but with occasional irregular spikes.
The experiment was conducted on residential house D from the dataset. Data cleansing was done by choosing days with full 24-hour records, resulting in 594 days in total. As part of the training process, electrical load and PV power production data were downsampled to a resolution of 15 minutes. For the train and test datasets, a split ratio of 80:20 was adopted.
#### IV-A2 Residential Electric Vehicle Charging Dataset
Sorensen et al. [28] presented a residential electric vehicle charging dataset recorded from apartment buildings. From December 2018 to January 2020, 97 users in Norway reported real-world EV charging experiences. Each charging session includes plug-in time, plug-out time, and charged energy. This dataset provides a synthetic charging load for level-1 charging and level-2 charging assuming 3.6 kW or 7.2 kW of charging power. In our experiments, we interpolate the data for a 15-minute resolution. We note that a higher time scale resolution such as 5-minute will increase the simulation running time, while a lower resolution such as 20 minutes can not capture the environment dynamics. In addition, the proposed synthetic data generator scheme can adapt to any time resolution without loss of generality.
Our analysis revealed that the user with the largest number of available records has only 62 days of 24-hour data. This data contains only 11 charging sessions, which is insufficient for the training of a deep learning model. To address this issue, we generated a larger synthetic dataset by combining the charging data from several users and assuming that the data belonged to an indoor charging station shared by the building's residents, thus accumulating 263 days of historical data. We thereby obtain significantly more charging events for the model, resulting in a more effective training process.
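A minimal sketch of this pooling and resampling step could look as follows; the file and column names are illustrative, not the dataset's actual schema:

```python
import pandas as pd

# pool the charging load of several users into one virtual shared station and
# bring it to a 15-minute resolution
load = (pd.read_csv("ev_charging_load.csv", parse_dates=["timestamp"])
          .pivot_table(index="timestamp", columns="user", values="load_kw", aggfunc="sum")
          .sum(axis=1))                                    # aggregate all users
load_15min = load.resample("15min").mean().interpolate()  # 15-minute grid, gaps interpolated
```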
### _Baseline Models and Performance Metrics_
In our experiments, we compared the performance of our VAE-GAN-based approach with two other state-of-the-art approaches:
* **Gaussian Mixture Model (GMM)**: is a probabilistic approach that partitions data into groups using soft clustering based on multiple multidimensional Gaussian probability distributions. The mean and variance of the distributions are calculated using Expectation-Maximization. Upon fitting the GMM to some data, a generative probabilistic model can sample synthetic data, following the same distribution.
* **Vanilla GAN**: As described in Section III-C.
* **VAE-GAN**: As described in Section III-D
In the following, we introduce the metrics that we will employ to evaluate the quality of the generated synthetic data.
#### IV-B1 Kullback-Leibler (KL) divergence
The Kullback-Leibler divergence (KLD) is one of the most often used measures for evaluating the similarity of two probability distributions. Equation (18) is the formal definition of the KL divergence, where \(x\) and \(y\) are sampled data points, and \(p(x)\) and \(q(y)\) are their respective probability distributions. The KL divergence ranges from \(0\), when two probability distributions match almost everywhere, to \(\infty\) for completely different distributions.
\[D_{KL}(p\|q)=\sum_{i=1}^{N}p(x_{i})\log\left(\frac{p(x_{i})}{q(y_{i})}\right) \tag{18}\]
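In practice, the two distributions in Eq. (18) must be estimated from samples. The sketch below uses simple histograms for this purpose; the bin count and the smoothing constant are illustrative choices, not values prescribed here.

```python
import numpy as np

def kl_divergence(real, synthetic, bins=50, eps=1e-10):
    """Histogram-based estimate of D_KL(p_real || q_synth), cf. Eq. (18)."""
    lo = min(real.min(), synthetic.min())
    hi = max(real.max(), synthetic.max())
    p, _ = np.histogram(real, bins=bins, range=(lo, hi))
    q, _ = np.histogram(synthetic, bins=bins, range=(lo, hi))
    p = p / p.sum() + eps   # normalize to probabilities, avoid log(0)
    q = q / q.sum() + eps
    return float(np.sum(p * np.log(p / q)))
```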
#### Iv-B2 Maximum Mean Discrepancy (MMD)
The MMD approach represents the distance between distributions by the distance between the mean embeddings of features into a reproducing kernel Hilbert space. Given the distributions \(p(x)\) for samples \(\{x_{i}\}_{i=1}^{N}\) and \(q(y)\) for samples \(\{y_{j}\}_{j=1}^{M}\), the MMD is calculated as follows:
\[MMD(p,q)^{2} =\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}K(x_{i},x_{j})\] \[\quad-\frac{2}{MN}\sum_{i=1}^{N}\sum_{j=1}^{M}K(x_{i},y_{j}) \tag{19}\] \[\quad+\frac{1}{M^{2}}\sum_{i=1}^{M}\sum_{j=1}^{M}K(y_{i},y_{j})\]
\[K(x,y)=\exp\left(\frac{-\|x-y\|^{2}}{2\sigma^{2}}\right) \tag{20}\]
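A direct NumPy implementation of Eqs. (19)-(20) is sketched below; the kernel bandwidth \(\sigma\) and the two-dimensional sample layout (one row per sample) are illustrative assumptions.

```python
import numpy as np

def mmd_rbf(x, y, sigma=1.0):
    """Squared MMD with a Gaussian kernel, cf. Eqs. (19)-(20). x: (N, d), y: (M, d)."""
    def kernel(a, b):
        d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
        return np.exp(-d2 / (2 * sigma**2))
    kxx, kyy, kxy = kernel(x, x), kernel(y, y), kernel(x, y)
    # mean() over the full kernel matrices realizes the 1/N^2, 1/M^2 and 1/(MN) factors
    return kxx.mean() + kyy.mean() - 2 * kxy.mean()
```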
#### Iv-B3 Wasserstein Distance
Intuitively, the Wasserstein distance, also called the earth mover's distance, models a probability distribution as a pile of soil, and computes how much soil needs to be moved to transform one probability distribution to the other.
\[l_{1}(p,q)=\inf_{\pi\in\Gamma(p,q)}\int_{\mathbb{R}\times\mathbb{R}}\lvert x-y\rvert\,d\pi(x,y) \tag{21}\]
Equation (21) represents the formal definition of the Wasserstein distance, where \(p(x)\), \(q(y)\) are probability distributions and \(\Gamma(p,q)\) is the set of probability distributions on \(\mathbb{R}\times\mathbb{R}\) whose marginals are \(p\) and \(q\) on the first and second factors, respectively.
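For one-dimensional samples, Eq. (21) can be evaluated directly with SciPy from the empirical distributions of the two sample sets; the arrays below are placeholders for the real and generated traces.

```python
import numpy as np
from scipy.stats import wasserstein_distance

real = np.random.rand(1000)       # placeholder for real measurements
synthetic = np.random.rand(1000)  # placeholder for generated data
w1 = wasserstein_distance(real, synthetic)
```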
#### Iv-B4 HEMS Models
The previous techniques measured the quality of the generated data based on its distance from the probability distribution of the real data. In this technique, we measure the quality of the synthetic data by its ability to be used in the training of a Q-learning-based smart home energy management system (HEMS). We implemented the Q-learning model in the MATLAB platform. The ESS fixed charging/discharging power is 4 \(kW\), and the ESS capacity is 16 \(kW\cdot h\). The learning rate is 0.8, the discount factor is 0.7, and the initial \(\epsilon\) value for the \(\epsilon\)-greedy exploration algorithm is 0.05. The energy trading price follows a fixed pattern as in [5]. We recorded the average results over 10 randomized runs.
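Although our HEMS is implemented in MATLAB, the update rule can be illustrated with the reported hyperparameters in a short Python sketch. The state discretization, action set, and reward function below are simplifying assumptions made for illustration only; they are not the exact design used in our experiments.

```python
import numpy as np

ALPHA, GAMMA, EPSILON = 0.8, 0.7, 0.05   # hyperparameters reported above
P_ESS, CAP_ESS = 4.0, 16.0               # kW charge/discharge power, kWh capacity
N_SOC, N_T = 9, 96                       # discretized SoC levels and 15-min slots (assumption)
ACTIONS = [-P_ESS, 0.0, P_ESS]           # discharge / idle / charge (assumption)

Q = np.zeros((N_T, N_SOC, len(ACTIONS)))

def step(t, soc, a_idx, net_load_kw, price):
    """Hypothetical environment: reward is the negative cost of energy traded with the grid."""
    power = ACTIONS[a_idx]
    d_soc = power * 0.25 / CAP_ESS * (N_SOC - 1)            # SoC change over 15 minutes
    soc_next = int(round(np.clip(soc + d_soc, 0, N_SOC - 1)))
    reward = -price * (net_load_kw + power) * 0.25
    return soc_next, reward

def q_learning_update(t, soc, net_load_kw, price):
    """One epsilon-greedy Q-learning step; returns the next SoC state."""
    if np.random.rand() < EPSILON:
        a = np.random.randint(len(ACTIONS))
    else:
        a = int(np.argmax(Q[t, soc]))
    soc_next, r = step(t, soc, a, net_load_kw, price)
    t_next = (t + 1) % N_T
    td_target = r + GAMMA * Q[t_next, soc_next].max()
    Q[t, soc, a] += ALPHA * (td_target - Q[t, soc, a])
    return soc_next
```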
### _Distance Metrics Evaluation Results_
We used the KL-divergence, MMD, and Wasserstein distance metrics to measure how close the distribution of the synthetic data is to that of the real data. Table I shows these statistical metrics for smart home electrical load consumption, PV production, and EV charging load consumption synthetic data generated by the GMM, GAN, and VAE-GAN generative models. For all metrics, lower values are better, corresponding to a closer match between the synthetic and real data.
The results allow us to draw several conclusions. First, we find that all three models have relatively close errors for the synthetic EV charging dataset, which can be explained by the comparative regularity of EV charging data, as opposed to PV generation, which is influenced by the weather, and load consumption, which depends on many personal choices. Another possible explanation is that the PV generation and load consumption training data were almost twice the size of the EV charging data, suggesting that rich historical training data can have a positive impact on accuracy. Second, we find that for the KL-divergence and Wasserstein distance, our VAE-GAN-based approach almost always provides the best performance. For the MMD, in contrast, the best performance is provided by the GAN approach. This reversal in the relative order of the generator approaches for MMD is likely a result of the choice of mean embedding features. Understanding the exact mechanism of this difference requires future work.
While single numerical metrics quantifying the distance between the probability density functions are useful, examining the distributions as a whole provides us with a better understanding of the quality of the generated data. Figures 6 to 8 show the Probability Density Functions (PDF) for real and synthetic smart home data.
Fig. 6 shows the PDF of the normalized real and synthetic electrical load consumption for each generative model. While both the GMM and VAE-GAN distributions are relatively close to the real data, the synthetic data generated by VAE-GAN provides the best match with the real data distribution in terms of the standard deviation from the mean value, as supported by the distance metrics. Fig. 8(a) shows the pattern of real and synthetic electrical load consumption over the test period. It is evident from the close similarity of these patterns that the VAE-GAN network is capable of learning the distribution and patterns of smart home aggregated load consumption data and producing samples that reflect the same characteristics.
Fig. 6(c) illustrates that the VAE-GAN model performs better than the two other generative models in generating synthetic PV production data, a conclusion that is supported by Fig. 8(b), where the pattern of synthetic PV production data generated by VAE-GAN is reasonably close to the pattern of real PV production data. We note that although PV production follows sunrise and sunset patterns, it is also highly affected by unpredictable environmental factors; therefore, the real and synthetic traces are not expected to be identical.
Finally, Fig. 8(c) presents the PDFs of real and synthetic EV charging load consumption for the GMM, GAN, and VAE-GAN generative models. Fig. 7(a) shows that the synthetic EV charging load consumption generated by the GMM model covers a larger power range than the real EV charging load consumption. On the other hand, the GAN model in Fig. 7(b) generates data centered around the average. Although the VAE-GAN model, Fig. 7(c), generates slightly smaller power consumption than the actual EV charging data, the distribution of load consumption is comparable to that of the real data. Accordingly, each generative model shows the same characteristics in the sample EV data presented in Fig. 8(c).
### _HEMS Performance Evaluation Results_
In this section, we investigate the performance of the Q-learning-based HEMS. For each type of synthetic data, we trained a corresponding policy, which was then evaluated using real-world data (see Algorithm 1). We compare these policies against an optimal baseline that uses real-world data for both offline training and online operation testing, so that the Q-learning agent can best learn the hidden data patterns and prepare for the test. The optimal baseline is expected to achieve the best overall performance by using the same real-world dataset for both training and testing, although this does not guarantee higher profit on every single day.
Fig. 10 shows the 40 days HEMS profit of different data generation methods. While the profit varies with the environmental conditions each day, as expected, the strategy trained on real-world data achieves the highest profit on most days. However, we found that the HEMS trained on the data generated by the proposed VAE-GAN model achieves a comparable performance. The GAN-based model achieves a somewhat lower performance, while the HEMS trained on the GMM-generated data shows the least profit.
The performance of the HEMS also depends on the capacity of the energy storage system (ESS) of the smart home. The average profit of the differently trained HEMS systems for various ESS capacities is shown in Fig. 0(a). As expected, a larger ESS improves the profit of the HEMS, as the agent has higher flexibility in making decisions. We found that for all ESS values, the VAE-GAN-based HEMS performs best, followed by the GAN and GMM-based methods. A higher ESS capacity also increases the advantage of the VAE-GAN over the other methods.
Finally, for practical applications, it is essential to establish that the learning process is stable. Fig. 0(b) shows the reward of the Q-learning process as a function of the learning iterations. The figure shows that the agent converges stably after the exploration phase.
### _Complexity and Feasibility Analysis_
Based on the simulation results, this section will discuss the complexity and feasibility of the proposed scheme, including a) feasibility and scalability, b) implementation complexity, c) management costs, and d) economic benefits.
* Feasibility and scalability: Compared with conventional model-based methods such as Markov chains and simulators, the applied data-driven method has higher scalability and can be easily generalized to other scenarios. Note that the only requirement for applying the proposed method is the dataset, indicating that it can be easily generalized from smart homes to smart buildings and communities given proper datasets. By contrast, it is impractical to build very large simulators or Markov chains to describe all the detailed behaviors of a large area.
* Implementation complexity: The main complexity of the proposed method lies in the neural network architecture and design, which is a well-known issue for ML-based methods. However, in this work, no dedicated models are designed for any household devices or energy consumption behaviors, significantly reducing the implementation complexity.
* Management costs: As analyzed above, the dataset is the only requirement of the proposed method. When the physical system changes, e.g., when devices are added or removed, our model can be easily updated by feeding new datasets to the neural networks, which means a very low system management cost.
* Economic benefits: The simulation results in Fig. 10 and Fig. 0(a) demonstrate that the proposed method can obtain satisfactory profit for smart home operation. The benefit can be greater when considering larger application scenarios such as smart buildings and communities.
## V Conclusion
In recent years, smart home management has increasingly taken advantage of advanced, data-demanding machine learning approaches, but the limited availability of fine-grained datasets may hinder their application. Therefore, synthetic data generation has become a critical enabler for smart home management. In this paper, we present a variational autoencoder GAN (VAE-GAN) approach for the synthetic data generation of smart home datasets. We have shown that the proposed VAE-GAN method can generate high-quality data that better matches the statistical properties of real-world data compared to benchmarks. In addition, we use the synthetic data to train
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c}{KL divergence} & \multicolumn{3}{c}{Wasserstein distance} & \multicolumn{3}{c}{MMD} \\ \cline{2-9} & Load & PV & EV & Load & PV & EV & Load & PV & EV \\ \hline GMM & 0.277 & 1.067 & 0.127 & 619.3 & 1319.3 & 0.006 & 0.168 & 0.074 & 0.000047 \\ GAN & 0.104 & 0.454 & 0.104 & 437.42 & 1080.5 & **0.002** & **0.010** & **0.049** & **0.000028** \\ VAE-GAN & **0.065** & **0.071** & **0.085** & **295.458** & **648.87** & 0.005 & 0.011 & 0.186 & 0.000071 \\ \hline \hline \end{tabular}
\end{table} TABLE I: Distance between real and synthetic smart grid data distribution using KL-divergence, Wasserstein distance, and MMD. Best results are highlighted in **bold**.
a Q-learning-based smart home energy management system (HEMS), achieving a higher profit than the other data-generation methods. The simulations show that the generated synthetic data can be used for HEMS offline training, and the agent obtains satisfactory real-world online performance without touching the real-world data during training.
## Acknowledgement
Hao Zhou and Melike Erol-Kantarci were supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), Collaborative Research and Training Experience Program (CREATE) under Grant 497981 and Canada Research Chairs Program.
|
2308.07592 | Graph-Segmenter: Graph Transformer with Boundary-aware Attention for
Semantic Segmentation | The transformer-based semantic segmentation approaches, which divide the
image into different regions by sliding windows and model the relation inside
each window, have achieved outstanding success. However, since the relation
modeling between windows was not the primary emphasis of previous work, it was
not fully utilized. To address this issue, we propose a Graph-Segmenter,
including a Graph Transformer and a Boundary-aware Attention module, which is
an effective network for simultaneously modeling the more profound relation
between windows in a global view and various pixels inside each window as a
local one, and for substantial low-cost boundary adjustment. Specifically, we
treat every window and pixel inside the window as nodes to construct graphs for
both views and devise the Graph Transformer. The introduced boundary-aware
attention module optimizes the edge information of the target objects by
modeling the relationship between the pixel on the object's edge. Extensive
experiments on three widely used semantic segmentation datasets (Cityscapes,
ADE-20k and PASCAL Context) demonstrate that our proposed network, a Graph
Transformer with Boundary-aware Attention, can achieve state-of-the-art
segmentation performance. | Zizhang Wu, Yuanzhu Gan, Tianhao Xu, Fan Wang | 2023-08-15T06:30:19Z | http://arxiv.org/abs/2308.07592v1 | # Graph-Segmenter: Graph Transformer with Boundary-aware Attention for Semantic Segmentation
###### Abstract
The transformer-based semantic segmentation approaches, which divide the image into different regions by sliding windows and model the relation inside each window, have achieved outstanding success. However, since the relation modeling between windows was not the primary emphasis of previous work, it was not fully utilized. To address this issue, we propose a Graph-Segmenter, including a Graph Transformer and a Boundary-aware Attention module, which is an effective network for simultaneously modeling the more profound relation between windows in a global view and various pixels inside each window as a local one, and for substantial low-cost boundary adjustment. Specifically, we treat every window and pixel inside the window as nodes to construct graphs for both views and devise the Graph Transformer. The introduced boundary-aware attention module optimizes the edge information of the target objects by modeling the relationship between the pixel on the object's edge. Extensive experiments on three widely used semantic segmentation datasets (Cityscapes, ADE-20k and PASCAL Context) demonstrate that our proposed network, a Graph Transformer with Boundary-aware Attention, can achieve state-of-the-art segmentation performance.
Graph Transformer, Graph Relation Network, Boundary-aware Attention, Semantic Segmentation
## 1 Introduction
Semantic segmentation [1, 2] is a primary task in the field of computer vision, with the goal of labeling each image pixel with the category that corresponds to it. As a result, intense research attention has lately been focused on it, since it has the potential to improve a wide range of downstream applications [3, 4, 5, 6], including geographic information systems, autonomous driving, medical image diagnostics, and robotics. Modern semantic segmentation models almost all follow the same paradigm [7, 8, 9, 10] in the current deep learning era: they consist of a basic backbone for feature extraction and a head for pixel-level classification. Improving the performance of the backbone network and of the head of the segmentation model are two of the most actively studied topics in current semantic segmentation work.
Natural language processing (NLP) has been dominated by transformers for a long time [11, 12], leading to a current spike of interest in exploring the prospect of using transformers in vision tasks, including significant advancements in semantic segmentation. Using the vision transformer [13], images are divided into a number of non-overlapping windows/patches, and some subsequent research investigated methods to enhance connections between windows/patches. It has the benefit of improving the modeling power of the system [14, 15, 16].
Swin [14] employs a shifted window approach; however, the direction in which each window interacts with other windows is fixed and limited. SegFormer [15] adopts an overlapping patch merging approach; however, it focuses primarily on the local continuity between patches rather than the overall continuity between patches. The modeling of long-distance interactions between windows/patches is not investigated in depth by these methods. In addition to work on optimizing the backbone, a number of studies have contributed to the design of the head in order to facilitate further optimization [17, 18, 19]. However, most of them are computationally costly, because boundary prediction-based approaches need extra object boundary labeling for segmentation boundary optimization.
We propose a novel relation modeling method acting on sliding windows, using graph convolutions to establish relationships between windows and between the pixels inside each window, which enhances the backbone and addresses the issues above. In particular, we regard each window, or the pixels inside it, as nodes of a graph network and use the visual similarity between nodes to establish the edges between them. After that, we use the graph network to further update the nodes and edges of the graph, so that different nodes can adaptively establish connections and update information during network propagation, realizing nonlinear relationship modeling between different windows and between the pixels inside each window. In brief, the network's overall feature learning and representation capabilities are further improved by enhancing the long-distance nonlinear modeling capabilities between different windows and between the pixels inside each window, as shown in Figure 1, which leads to an evident rise in performance.
Furthermore, we introduce an efficient boundary-aware attention-enhanced segmentation head that optimizes the boundary of objects in the semantic segmentation task, allowing us to reduce the labeling cost even further while simultaneously improving the semantic segmentation accuracy at the boundary of the objects under consideration. To put it another way, we develop a lightweight local information-aware attention module that allows for improved boundary segmentation. By determining the weights of the pixels around an object's boundary and applying different attention coefficients to distinct pixels via local perception, it is possible to reinforce the important pixels that are critical for classification while weakening the interfering pixels. The attention module used in this study has just a few common CNN layers, which makes it efficient for segmentation boundary adjustment in terms of model size, floating-point operations, and latency.
We investigate the atypical interaction between windows that are arranged hierarchically in a vision transformer. Additionally, we improve the boundary of target instances using an efficient and lightweight boundary optimization approach that does not need any extra annotation information and can adaptively adjust the segmentation boundary for a variety of objects. Extensive experiments on three standard semantic segmentation datasets (Cityscapes [20], ADE-20k [21], and PASCAL Context [22]) demonstrate that our Graph-Segmenter achieves state-of-the-art performance on semantic segmentation tasks.
## 2 Related works
### CNN-based Semantic Segmentation
Convolutional neural networks (CNNs) based methods serve as the standard approaches throughout the semantic segmentation [23, 24, 25, 26, 27, 28] task due to apparent advantages compared with traditional methods. FCN [23] started the era of end-to-end semantic segmentation, introducing dense prediction without any fully connected layer. Subsequently, many FCN-based methods [25, 26, 29] have been proposed to promote image-centric segmentation. DNN-based methods usually
Figure 1: An illustration of the proposed Graph-Segmenter with boundary-aware attention for semantic segmentation. The top shows the segmentation results of the previous transformer-based semantic segmentation methods (e.g., Swin [14]). The bottom shows the actual segmentation results of our proposed Graph-Segmenter, which achieves promising boundary segmentation via the hierarchical level graph reasoning and efficient boundary adjustment requiring no additional annotation.
need to expand the receptive field through the superposition of convolutional layers. Besides that, the receptive field issue was explored by several approaches, such as the Pyramid Pooling Module (PPM) [25] in PSPNet and Atrous Spatial Pyramid Pooling (ASPP) in DeepLabv3 [26], which expand the receptive field and capture multiple-range information to enhance the representation capabilities. Recent CNN-based methods place a premium on effectively aggregating the hierarchical features extracted from a pre-trained backbone encoder using specifically developed modules: DANet [27] applies a different form of the non-local network; SFNet [30] addresses the misalignment problem through semantic flow by using the Flow Alignment Module (FAM); CCNet [31] presents a Criss-Cross network which gathers contextual information adaptively along the criss-cross path; GFFNet [32]; APCNet [33] proposes the Adaptive Pyramid Context Network for semantic segmentation, which creates multi-scale contextual representations adaptively using several well-designed Adaptive Context Modules (ACMs). [28] divides the feature map into different regions to extract regional features separately, and the bidirectional edges of a directed graph are used to represent the affinities between these regions, in order to help model region dependencies and alleviate unrealistic results. [34] learns boundaries as an additional semantic category, so the network can be aware of the layout of the boundaries. It also proposes unidirectional acyclic graphs (UAGs) to simulate the function of undirected cyclic graphs (UCGs) to structure the image. The Boundary-aware feature propagation (BFP) module can acquire and propagate boundary local features of regions to create strong and weak connections between regions in the UAG.
### Transformer and Self-Attention
The attention mechanism for image classification was first seen in [35], and then [36] employed similar attention for translation and alignment simultaneously in the machine translation task, which was the first application of the attention mechanism to the NLP field. Inspired by the dominant performance in the NLP field, the application of self-attention and transformer architectures in computer vision has been widely explored in recent years. Self-attention layers as a replacement for spatial convolutional layers achieved gains in robustness and performance-cost trade-off [37]. Moreover, the Vision Transformer (ViT) [13] showed that a transformer structure with simple non-overlapping patches could surpass state-of-the-art results, and this impressive result led to a trend of vision transformers. The limitations of adopting ViT as a general backbone for dense prediction were further addressed by several approaches: [38] proposed the DEtection TRansformer (DETR) for direct set prediction based on transformers and a bipartite matching loss; based on DETR, [39] suggested Deformable DETR with attention modules that focus only on a limited number of critical sampling locations around a reference; after that, [40] investigated the video instance segmentation (VIS) task using vision transformers, termed VisTR, which views the VIS task as a direct end-to-end parallel sequence decoding/prediction problem. [41] extends the idea of DETR to multi-camera 3D object detection, achieving great progress.
In addition, transformer-based semantic segmentation methods [14, 15, 42, 43] have drawn much attention. [42] relies on a ViT [13] backbone and introduces a mask decoder inspired by DETR [38]. [43] reformulates the image semantic segmentation problem from a transformer-based learning perspective. [14] uses a shifted windowing scheme to restrict the self-attention computation to non-overlapping local windows. [15] proposes a powerful semantic segmentation framework with lightweight multilayer perceptron decoders, adopting the Overlapped Patch Merging module to preserve local continuity. However, these methods do not model the relationship between windows thoroughly, which motivates this study to adopt hierarchical-level graph reasoning and efficient boundary adjustment. Our method focuses more on the graph modeling between windows within the transformer blocks and is, to our knowledge, the first such work for the semantic segmentation task.
### Graph Model
Graph-based convolutional networks (GCNs) [44, 45, 46, 47], initially proposed in [48], can exploit the interactions of non-Euclidean data. As graphs can be irregular, GCNs have attracted increasing interest in recent years, and many excellent works further explored GCNs from various perspectives, such as graph attention networks [49] for guided learning and dynamic graphs [50] to learn features more adaptively. GCNs have also achieved great performance in various computer vision tasks, such as human pose estimation [51], image captioning [52], action recognition [53], and so on. [51] adopts graph atrous convolution and transformer layers to extract multi-scale context and long-range information for human pose estimation. [52] proposes an image captioning model based on a dual-GCN and transformer combination. [54] proposes an adaptive graph convolutional network with attention graph clustering for saliency detection. Moreover, GCNs can propagate information globally and model contextual information efficiently for semantic segmentation [55, 56, 57, 44, 55], point cloud segmentation [58, 59] and instance segmentation [60]. [55] performs graph reasoning directly in the original feature space to model long-range context. [56] introduces the class-wise dynamic graph convolution module to capture long-range contextual information adaptively. [44] captures the global context along the spatial and channel dimensions, respectively, by graph-convolutional modules. [57] combines adjacency graphs and KSC-graphs by affinity nodes of multi-scale superpixels for better segmentation. [58] proposes a hierarchical attentive pooling graph network for point cloud segmentation to enhance local modeling. [61, 62] provide more detailed surveys of GCNs.
### Discussion
Different from Swin [14], first of all, our proposed graph transformer focuses on the nonlinear relationship modeling of sliding windows, including the modeling of high-order (second-order or higher) relationships between sliding windows and among different feature points inside each sliding window. Secondly, our graph transformer module is a lightweight pluggable sliding-window modeling module, which can be embedded into sliding-window-based transformer models such as Swin [14] to improve the feature modeling and expression ability of the original model. Finally, the graph transformer modules plus the boundary-aware attention modules together form our Graph-Segmenter network, which achieves state-of-the-art performance compared to other recent works on the existing segmentation benchmarks.
## 3 Methods
This section will elaborate on the design of our devised Graph-Segmenter, which includes the Graph Transformer module and Boundary-aware attention module.
### The Overview
As shown in Figure 2, our proposed coarse-to-fine-grained boundary modeling network, Graph-Segmenter, includes a coarse-grained graph transformer module in the backbone and a fine-grained boundary-aware attention module in the head of the semantic segmentation framework. Assume that the input image passes through some CNN layers and then yields the positional-embedding-enhanced feature maps \(\mathbf{X}\in\mathbb{R}^{C\times H\times W}\), where \(C\) denotes the number of channels, and \(H\) and \(W\) denote the height and width of the spatial dimensions, respectively. In order to make better use of the Transformer mechanism, inspired by [14, 16], we first divide the feature maps \(\mathbf{X}\) into \(M\times N\) window patches and define each window as \(\mathbf{x}_{m,n}\in\mathbb{R}^{C\times\frac{H}{M}\times\frac{W}{N}}\), with \(m\in\{1,2,...,M\}\) and \(n\in\{1,2,...,N\}\). Then \(\mathbf{X}\) passes through the Graph Transformer (GT) module in the backbone network, generating the relation-enhanced features \(\mathbf{Y}\). After this, we use the boundary-aware attention (BA) module to adjust the segmentation boundary on the feature maps \(\mathbf{Y}\) and obtain the boundary-optimized features \(\mathbf{Z}\). At last, we use these features \(\mathbf{Z}\) to formulate the corresponding segmentation loss function at each pixel. In the following subsections, we elaborate on our proposed window-aware relation transformer network module and boundary-aware attention module. To facilitate the following description, we first introduce the efficient graph relation network, which plays a central role in the graph transformer block and is capable of efficiently and effectively modeling the nonlinear relations of abstract graph nodes.
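The window partitioning described above can be sketched as a pair of reshape/permute operations; the tensor layout (B, C, H, W) and the assumption that \(H\) and \(W\) are divisible by \(M\) and \(N\) are illustrative choices.

```python
import torch

def window_partition(x, M, N):
    """Split feature maps (B, C, H, W) into B*M*N windows of size (C, H//M, W//N)."""
    B, C, H, W = x.shape
    h, w = H // M, W // N                          # window height / width
    x = x.view(B, C, M, h, N, w)
    x = x.permute(0, 2, 4, 1, 3, 5).contiguous()   # (B, M, N, C, h, w)
    return x.view(B * M * N, C, h, w)

def window_reverse(windows, B, C, H, W, M, N):
    """Inverse of window_partition."""
    h, w = H // M, W // N
    x = windows.view(B, M, N, C, h, w).permute(0, 3, 1, 4, 2, 5).contiguous()
    return x.view(B, C, H, W)
```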
### Efficient Graph Relation Network
In this section, we first introduce the preliminary work on graph convolution networks (GCN), which introduces convolutional parameters that can be optimized compared to graph neural networks (GNN). As shown in [49], a GCN can be defined as
\[\mathbf{X}=\mathbf{RXW}, \tag{1}\]
where \(\mathbf{X}\), \(\mathbf{R}\) and \(\mathbf{W}\) are the input node matrix (here we still denote the output of the above equation as \(\mathbf{X}\)), the adjacency matrix reflecting the relationship between nodes and the learnable convolution parameter matrix respectively.
Based on above preliminary work, we introduce an efficient graph relation network, which was inspired by [49, 63], to model the non-linear relation of higher-order among the sliding windows and inside each window. Given the input \(\mathbf{X}\), we define the relation function \(\mathcal{R}\) as follows
\[\mathbf{R}=\mathcal{R}(\mathbf{X}). \tag{2}\]
As shown in Equation (2), to establish the relationship \(\mathcal{R}\) between different input data \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\), where \(\mathbf{x}_{i},\mathbf{x}_{j}\in\mathbf{X}\), \(i,j\in\{1,2,\ldots,K\}\), and \(K\) is the number of nodes in \(\mathbf{X}\), mathematically, we can establish a simple linear relationship using linear functions or a more complex nonlinear relationship using nonlinear functions. In general, linear relations are poor in terms of robustness, so here we design a quadratic multiplication operation that establishes a nonlinear relationship between different input nodes.
\[r_{i,j}=\frac{\mathbf{x}_{i}\cdot\mathbf{x}_{j}}{\|\mathbf{x}_{i}\cdot\mathbf{x}_{ j}\|}, \tag{3}\]
where \(r_{i,j}\in\mathbf{R}\) is the learned relation between \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\). \(\|\cdot\|\) represents a two-norm operation. Because \(\mathbf{x}_{i}\cdot\mathbf{x}_{j}\leq\|\mathbf{x}_{i}\cdot\mathbf{x}_{j}\|\), \(|r_{i,j}|\leq 1\), where \(|\cdot|\) denotes the absolute value operation, we do not need to normalize the \(r_{i,j}\) as in [64].
With the connection relation matrix \(\mathbf{R}\), we can define the node update function \(\mathcal{U}\) of the graph as follows
\[\mathbf{X}^{(l+1)}=\mathcal{U}(\mathbf{X}^{(l)}). \tag{4}\]
Now that we have \(r_{i,j}\) and also to design an efficient graph relational network, we want this graph network node propagation update operation to be a sparse matrix operation. To achieve efficient computation, we use the indicative function and the relational matrix \(\mathbf{R}\). Equation (4) can be rewritten into a form that satisfies the sparse matrix operation as follows:
\[\mathbf{x}_{i}^{(l+1)}=\sum_{\mathbf{x}_{j}^{(l)}\in\mathcal{N}(\mathbf{x}_{i }^{(l)})}\mathbb{I}(r_{i,j}>\theta)\cdot\mathbf{x}_{j}^{(l)},i,j\in\{1,2,\dots,K\}, \tag{5}\]
where \(\mathbb{I}(\cdot)\) is the indicator function, which takes the value \(r_{i,j}\) if the condition holds, and 0 otherwise. \(\mathbf{x}_{j}^{(l)}\) is the \(j\)th graph node in layer \(l\) of the graph, which is generated in the layer-by-layer propagation of the graph convolutional network. \(\mathcal{N}(\mathbf{x}_{i}^{(l)})\) denotes the set of neighboring nodes of node \(\mathbf{x}_{i}^{(l)}\) in layer \(l\). Finally, in order to accomplish the graph convolutional network defined in Equation (4), we use the convolutional function \(\mathcal{C}\) to convolve the node information as follows
\[\mathbf{X}^{(l+1)}=\mathcal{C}(\mathbf{X}^{(l)})=\mathbf{X}^{(l)}\mathbf{W}^{ (l)}, \tag{6}\]
where \(\mathbf{W}^{(l)}\) is the learnable weighting matrix.
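A minimal PyTorch sketch of Equations (3), (5) and (6) is given below. Since Equation (3) normalizes a scalar product, the sketch bounds the relations by the largest magnitude so that \(|r_{i,j}|\leq 1\); this reading, together with the choice of threshold scale, is an assumption for illustration rather than the exact implementation.

```python
import torch
import torch.nn as nn

class GraphRelationLayer(nn.Module):
    """Sketch of Eqs. (3), (5), (6): bounded pairwise relations, thresholded
    aggregation over neighboring nodes, then a learnable weighting matrix."""
    def __init__(self, dim, theta_scale=0.25):
        super().__init__()
        self.W = nn.Linear(dim, dim, bias=False)   # W^(l) in Eq. (6)
        self.theta_scale = theta_scale             # theta = scale * mean(R), an assumption

    def forward(self, x):                  # x: (K, dim), one feature vector per node
        r = x @ x.t()                      # pairwise dot products, Eq. (3) numerator
        r = r / (r.abs().max() + 1e-6)     # bound relations so |r_ij| <= 1 (one reading of Eq. (3))
        theta = self.theta_scale * r.mean()
        r = r * (r > theta).float()        # indicator I(r_ij > theta) from Eq. (5)
        x = r @ x                          # aggregate neighboring nodes
        return self.W(x)                   # learnable convolution, Eq. (6)
```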
### Graph Transformer
Sliding-window-based methods [14, 16] model the local relation inside each window via the transformer [13], but most of them do not take into account the global modeling of nonlinear relations between windows. In order to obtain more robust relation-enhanced features, we propose the Graph Transformer (GT) to model the relationship between different windows and within each window. We use the visual similarity among different windows to learn the relation between windows, while using the visual similarity between different pixels to model the local relation inside each sliding window. Based on the above-devised efficient graph relation network, the detailed design of the global relation modeling module and the local one is elaborated below.
#### 3.3.1 Global Relation Modeling
We propose a Global Relation (GR) Modeling module to model the global relation among different sliding windows and make the pixel-level classification more accurate. Specifically, we consider each window as one node and use the efficient graph relation network to build the relation of the graph nodes. As a result, we can obtain the coarse relation of the objects in each image via the relation modeling of windows.
Figure 2: An overview of the proposed Graph-Segmenter with efficient boundary adjustment for semantic segmentation, which includes Global Relation Modeling, Local Relation Modeling, and Boundary-aware Attention. “GR” denotes the global window-aware relation module and “LR” denotes the local window-aware relation module. “GR” or “LR” consists of \(1\times 1\) convolution, softmax, \(\mathbf{W}\) and \(1\times 1\) convolution. \(\mathbf{W}\) is the learnable weighting matrix, corresponding to \(\mathbf{W}^{(l)}\) in Equation 6, and also equivalent to \(\mathcal{C}^{gr}\) in “GR” or \(\mathcal{C}^{lr}\) in “LR”.
Concretely, given the partitioned \(M\times N\) sliding windows in the feature maps \(\mathbf{X}\in\mathbb{R}^{C\times H\times W}\), we look at each sliding window \(\mathbf{x}_{m,n}\in\mathbb{R}^{C\times\frac{H}{M}\times\frac{W}{N}}\) as a node to establish a graph network. For the sake of narrative convenience, we denote \(\mathbf{x}_{m,n}\) as \(\mathbf{x}_{i}\), where \(i=(m-1)\times N+n\).
We first define the connection relation matrix of nodes \(\mathbf{R}^{gr}=\{r_{i,j}^{gr}\mid i,j\in\{1,2,\cdots,M\times N\}\}\); here we use matrix-multiplication-based visual similarity to model the connection relation between nodes. From Equation (2), the relation matrix is \(\mathbf{R}^{gr}=\mathcal{R}(\mathbf{X})\). Simultaneously, in order to alleviate the complexity of the visual similarity computation, we devise a function \(\mathcal{A}^{gr}\) to reduce the channel dimension
\[\mathbf{\hat{x}}_{i}=\mathcal{A}^{gr}(\mathbf{x}_{i}), \tag{7}\]
where \(\mathbf{\hat{X}}=\{\mathbf{\hat{x}}_{i}\in\mathbb{R}^{\frac{C}{r^{gr}}\times\frac{H}{M}\times\frac{W}{N}}\mid i\in\{1,2,\cdots,M\times N\}\}\) contains the hidden vectors used to compute the connectivity of the graph, mainly to save computation and to enhance the learnability of the module. \(r^{gr}\) is the channel compression ratio. Thus, this allows us to derive a formula for calculating the connection relation that saves significant computational complexity, as shown below
\[\mathbf{R}^{gr}=\mathcal{R}^{gr}(\mathbf{\hat{X}}), \tag{8}\]
where, for ease of understanding, we rewrite the relational modeling function \(\mathcal{R}\) in Equation (2) here as \(\mathcal{R}^{gr}\). With the connection relation matrix \(\mathbf{R}^{gr}\) (we rewrite \(\mathbf{R}\) as \(\mathbf{R}^{gr}\) in layer \(l\) of the graph neural network for simplicity in the following section), based on Equation (4) and Equation (6), we can define the node update function \(\mathcal{U}^{gr}\) and the convolutional function \(\mathcal{C}^{gr}\) for the global window-aware relation network as follows
\[\mathbf{\hat{X}}^{(l+1)}=\mathcal{C}^{gr}\circ\mathcal{U}^{gr}(\mathbf{\hat{ X}}^{(l)}), \tag{9}\]
where \(\circ\) denotes the composition operations on multiple functions, generating a compound function. In order to keep the input and output dimensions unchanged, we define \((\mathcal{A}^{gr})^{-1}\), the inverse transformation of \(\mathcal{A}^{gr}\), to restore the dimension of \(\mathbf{\hat{X}}\) in layer \((l+1)\), as shown below
\[\mathbf{X}^{(l+1)}=(\mathcal{A}^{gr})^{-1}(\mathbf{\hat{X}}^{(l+1)}). \tag{10}\]
#### 3.3.2 Local Relation Modeling
In order to learn the local relation inside each sliding window, we conduct a Local Relation (LR) Modeling module to build relations inside each sliding window. Similar to the global relation modeling module, we consider each pixel as a node and exploit the efficient graph relation network to model the local relation among different pixels within each window.
Specifically, given the sliding window \(\mathbf{x}_{i,j}\in\mathbb{R}^{C\times\frac{H}{M}\times\frac{W}{N}}\), we need to learn the relationships among these feature points inside each window \(\mathbf{x}_{i,j}\). Thus, we can similarly define the entire process as follows
\[\mathbf{x}_{i,j}^{(l+1)}=(\mathcal{A}^{lr})^{-1}\circ\mathcal{C}^{lr}\circ \mathcal{U}^{lr}\circ\mathcal{R}^{lr}\circ\mathcal{A}^{lr}(\mathbf{x}_{i,j}^{ (l)}), \tag{11}\]
where \(\mathbf{x}_{i,j}^{(l)}\) and \(\mathbf{x}_{i,j}^{(l+1)}\) are the feature maps in layer \(l\) and layer \(l+1\) of the local window-aware module, respectively.
#### 3.3.3 Module Structure
As shown in Figure 2, our presented graph transformer can be implemented with two \(1\times 1\) Convs, a normalization function (Softmax) and the other basic elements (Self-Attention and Multilayer Perceptron) of ViT [13]. Specifically, \(\mathcal{A}^{gr}\) and \(\mathcal{A}^{lr}\) can be completed by a \(1\times 1\) Conv, which is used to reduce the computational complexity of the relation modeling process. Similarly, \((\mathcal{A}^{gr})^{-1}\) and \((\mathcal{A}^{lr})^{-1}\) can be realized by another \(1\times 1\) Conv, aiming to restore the channel dimension for module integration in the corresponding relation modeling network. In order to accomplish \(\mathcal{R}^{gr}\) and \(\mathcal{R}^{lr}\), we use a matrix multiplication and a softmax function to normalize the results and obtain the relation matrices \(\mathbf{R}^{gr}\) and \(\mathbf{R}^{lr}\), respectively. The node update functions \(\mathcal{U}^{gr}\) and \(\mathcal{U}^{lr}\) can be accomplished simply by a matrix multiplication.
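Following the module structure just described, a hedged PyTorch sketch of the GR block is shown below; the compression ratio, the residual connection, and the exact placement of the learnable weighting are illustrative assumptions rather than the exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalRelationBlock(nn.Module):
    """Sketch of the GR module in Fig. 2: 1x1 conv squeeze (A^gr), softmax relation
    (R^gr), node update by matrix multiplication (U^gr), learnable weighting (C^gr),
    and a 1x1 conv to restore the channel dimension ((A^gr)^-1)."""
    def __init__(self, dim, r=16):
        super().__init__()
        self.squeeze = nn.Conv2d(dim, dim // r, kernel_size=1)       # A^gr
        self.weight = nn.Conv2d(dim // r, dim // r, kernel_size=1)   # learnable W / C^gr
        self.expand = nn.Conv2d(dim // r, dim, kernel_size=1)        # (A^gr)^-1

    def forward(self, windows):            # windows: (K, C, h, w), one entry per window
        K, C, h, w = windows.shape
        z = self.squeeze(windows)          # (K, C/r, h, w)
        nodes = z.flatten(1)               # one feature vector per window node
        rel = F.softmax(nodes @ nodes.t(), dim=-1)   # relation matrix R^gr
        nodes = rel @ nodes                # node update U^gr
        z = self.weight(nodes.view(K, -1, h, w))
        return self.expand(z) + windows    # residual connection (an assumption)
```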
#### 3.3.4 Fusion
Given a global relation modeling module and a local one, we fuse these two different relation modeling modules to obtain better performance. As shown in Figure 3, there are
Figure 3: Three designs of fusion. ”GR” denotes the global relation modeling module and ”LR” denotes the local relation modeling module. (a) The two relation modeling modules are connected in two different series connections. (b) Two relation modeling modules are connected in parallel.
three fusion types, namely, global then local series and local then global series connection (Figure 3a), parallel connection (Figure 3b). We will discuss in detail the impact of different fusion methods on segmentation performance in the experimental section.
### Boundary-aware Attention
A robust feature extraction backbone network has been constructed, as indicated above. We further devise boundary-aware attention (BA) network to optimize the boundary region of the corresponding object for a better overall semantic segmentation effect.
As we all know, the edge information adjustment of objects is generally local, and is greatly affected by local features. Based on this, we design a local-aware attention module, namely the BA module, which further improves the boundary segmentation accuracy of the model by enhancing and weakening the weight information of some **local feature points** (or pixels, here we do not distinguish between pixels and feature points), some of which are **the boundaries of objects**. Concretely, we perceive the relation between local pixels by the local perception module and perform local relation modeling, based on which we learn to obtain the attention coefficients of each surrounding pixel. Finally, we use the attention coefficients to weigh the pixels, strengthen the key pixels, and weaken the inessential ones to achieve accurate boundary segmentation.
Given the input \(\mathbf{Y}\in\mathbb{R}^{\widetilde{C}\times\widetilde{H}\times\widetilde{W}}\), where \(\widetilde{C}\), \(\widetilde{H}\) and \(\widetilde{W}\) denote the channel, height and width dimensions respectively, we use the channel squeezing function \(\widetilde{\mathcal{A}}\) to reduce the dimension of the input features, and then use the local modeling function \(\mathbf{\mathcal{G}}\) to learn the local relation, through which the pixels around the boundary of the object are adjusted. Besides, we add a non-linear function \(\mathbf{\mathcal{H}}\) inside the boundary-aware module to learn a more robust relation, and we use the inverse function \((\widetilde{\mathcal{A}})^{-1}\) to restore the input channel dimension. At last, we normalize the learned attention coefficients via the normalization function \(\mathbf{\mathcal{N}}\), so the whole formulation can be written as:
\[\mathbf{Z}=\mathbf{\mathcal{N}}\circ(\widetilde{\mathbf{\mathcal{A}}})^{-1}\circ\mathbf{ \mathcal{H}}\circ\mathbf{\mathcal{G}}\circ\widetilde{\mathbf{\mathcal{A}}}(\mathbf{Y}). \tag{12}\]
#### 3.4.1 Module Structure
As shown in Figure 2, the boundary-aware attention module is composed of two layers of \(1\times 1\) Conv, a layer of GELU and a layer of Sigmoid. Similar to the design of the graph transformer module, we use \(1\times 1\) Convs to accomplish the channel squeezing and restoring functions \(\widetilde{\mathcal{A}}\) and \((\widetilde{\mathbf{\mathcal{A}}})^{-1}\). Here we simply use a \(7\times 7\) Conv to implement the local relation modeling function \(\mathbf{\mathcal{G}}\). Finally, the functions \(\mathbf{\mathcal{H}}\) and \(\mathbf{\mathcal{N}}\) can be completed through the common non-linear layer (GELU) and normalization layer (Sigmoid), respectively.
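A minimal PyTorch sketch of the BA module described above is given below; the channel compression ratio and the final re-weighting of the input features by the attention map are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BoundaryAwareAttention(nn.Module):
    """Sketch of the BA module: 1x1 conv squeeze, 7x7 conv local modeling (G),
    GELU (H), 1x1 conv restore, Sigmoid (N); the attention map re-weights Y."""
    def __init__(self, dim, r=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(dim, dim // r, kernel_size=1),                  # channel squeeze
            nn.Conv2d(dim // r, dim // r, kernel_size=7, padding=3),  # local relation G
            nn.GELU(),                                                # non-linearity H
            nn.Conv2d(dim // r, dim, kernel_size=1),                  # restore channels
            nn.Sigmoid(),                                             # normalization N
        )

    def forward(self, y):          # y: (B, C, H, W) backbone features
        return y * self.net(y)     # weight pixels by the learned attention coefficients
```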
### Discussion
We discuss the complexity of the proposed model in this section. The analysis of our designed lightweight pluggable Graph-Segmenter, consisting of a Graph Transformer and
\begin{table}
\begin{tabular}{l c c} \hline \hline Method & \# param. & GMac \\ \hline Swin\({}^{*}\)[14] & 233.66 & 191.45 \\ \hline Graph-Segmenter (Ours) & 283.46 & 195.63 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The complexity comparison of our Graph-Segmenter model and the original Swin [14] model with Swin-L as a backbone in terms of the number of model parameters (#param) and the amount of multiplication computation (GMac). \({}^{*}\) refers to reproduced results by us.
Figure 4: Some examples of the three datasets: Cityscapes (first row), ADE-20k (second row), and PASCAL Context (third row).
Boundary-aware Attention, is shown in Table 1. From the table, we can conclude that the increase of our proposed Graph-Segmenter model over the original Swin [14] model in terms of the number of model parameters (#param) and the multiplication computation is very small. In particular, in terms of multiplication computation, our model increases the computation by only 2.18% compared to the original model.
## 4 Experiments
In this section, we evaluate our proposed Graph-Segmenter on three standard semantic segmentation datasets: Cityscapes [20], ADE-20k [21] and PASCAL Context [22].
### Datasets and metrics
**Cityscapes**[20]. The dataset covers a range of 19 semantic classes from 50 different cities. It has 5,000 images in total, with 2,975 for training, 500 for validation, and another 1,525 for testing.
**ADE-20k**[21]. The dataset annotates 150 categories in challenging scenes. It contains more than 25,000 finely annotated images, split into 20,210, 2,000 and 3,352 for training, validation and testing respectively.
**PASCAL Context**[22]. The dataset is an extension of PASCAL VOC 2010, which includes 4998 and 5105 photos for training and validation, respectively, and gives pixel-by-pixel semantic labels. It contains more than 400 classes from three major categories (stuff, hybrids and objects). However, since most classes are too sparse, we only analyze the most common 60 classes, following previous works [43].
Figure 4 shows some examples of the three datasets.
**Metrics**. We adopt mean Intersection over Union (mIoU) to evaluate semantic segmentation models.
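For reference, mIoU can be computed from a confusion matrix of integer label maps as sketched below; the ignore index of 255 follows the common Cityscapes convention and is an assumption here.

```python
import numpy as np

def mean_iou(pred, gt, num_classes, ignore_index=255):
    """Mean Intersection over Union from integer label maps of the same shape."""
    mask = gt != ignore_index
    pred = pred[mask].astype(np.int64)
    gt = gt[mask].astype(np.int64)
    conf = np.bincount(num_classes * gt + pred,
                       minlength=num_classes ** 2).reshape(num_classes, num_classes)
    inter = np.diag(conf)
    union = conf.sum(0) + conf.sum(1) - inter
    iou = inter / np.maximum(union, 1)
    return float(iou[union > 0].mean())   # average over classes present in the data
```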
### Implementation Details
The entire model is implemented based on the Swin [14] codes that have been made publicly available. Training is separated into two phases: we first establish the backbone network using the model parameters pre-trained on ImageNet, and then we train the backbone network with the Graph-Segmenter that we proposed. The trained network parameters are utilized to initialize and train the network integrated with the segmentation boundary optimization module, which is then used to optimize the segmentation boundary.
### Comparison with State-of-the-Art
We compare our proposed Graph-Segmenter with state-of-the-art models, including CNN-based methods (e.g., with ResNet-101 or ResNeSt-200 backbones) [26, 27, 29, 65] and the latest transformer-based methods [14, 42, 43], on the Cityscapes, ADE-20k and PASCAL Context datasets.
Our Graph-Segmenter achieves improved performance in all categories compared to the Swin model, and even for some more difficult categories (e.g., train, rider, wall) our model still achieves a very large performance improvement. This phenomenon shows that global and local window relation modeling and boundary-aware attention modeling further improve the ability of the network on difficult-to-segment categories.
**ADE-20k.** Table 4 and Table 5 demonstrate the results on ADE-20k dataset. Our Graph-Segmenter achieves 53.9 mIoU and 62.4 test score on the ADE-20k val and test set, surpassing the previous best transformer-based model by +0.3 mIoU (53.6 mIoU by Seg-L-Mask/16 [42]) and +0.7 test score (61.7 by SETR-PUP [43]). Our approach still can perform excellently in the challenging ADE-20k dataset with more categories, which confirms our modules' robustness. A more qualitative comparison is shown in Figure 5.
**PASCAL Context**. To further demonstrate the effectiveness of our proposed Graph-Segmenter module, we report additional experimental results on the PASCAL Context dataset for the UperNet [76] and "UperNet + CAR" [77] models. As shown in Table 6, we can observe that different models equipped with our Graph-Segmenter module gain a consistent performance improvement. All in all, the very recent model "UperNet + CAR" embedding Graph-Segmenter achieves the best mIoU accuracy.
**Qualitative results**. We further visualize and analyze our devised Graph-Segmenter network in Figure 5. As shown there, our Graph-Segmenter obtains better segmentation boundaries than DeepLabV3+ [26] and the Swin Transformer [14].
### Ablation Study
In this section, we deeply investigate the performance of each sub-module of the model and conduct experiments on the Cityscapes dataset.
#### 4.4.1 Sparsity Analysis
We first analyze the effect of different sparsity levels of the relation matrix \(\mathbf{R}\) on the experimental results. In order to eliminate the influence of other factors, we do not add the boundary-aware attention module in these experiments. With the simple setting \(\theta=\frac{1}{K^{2}}\sum_{i}^{K}\sum_{j}^{K}r_{i,j}\), the model with the sparsity constraint achieves a 0.59 mIoU improvement on the Cityscapes dataset over the case without it. It can be concluded that, for a given node, not all surrounding nodes can provide positive feature enhancement for that node. On the contrary, by properly filtering out some nodes, the model can achieve better segmentation performance. To further discuss the effect of the value of
\begin{table}
\begin{tabular}{l c c} \hline \hline Method & Backbone & mIoU \\ \hline FCN [23] & ResNet-101 & 45.74 \\ PSPNet [25] & ResNet-101 & 47.80 \\ DANet [27] & ResNet-101 & 52.60 \\ EMANet [78] & ResNet-101 & 53.10 \\ SVCNet [79] & ResNet-101 & 53.20 \\ BFP [34] & ResNet-101 & 53.6 \\ Strip pooling [70] & ResNet-101 & 54.50 \\ GFFNet [80] & ResNet-101 & 54.20 \\ APCNet [33] & ResNet-101 & 54.70 \\ GRAr [28] & ResNet-101 & 55.70 \\ \hline SETR-PUP [43] & ViT-L & 55.83 \\ \hline UperNet [76] & Swin-L & 57.48 \\ \hline Graph-Segmenter + UperNet (Ours) & Swin-L & **57.80** \\ Graph-Segmenter + UperNet + CAR (Ours) & Swin-L & **59.01** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Results of semantic segmentation on the Pascal Context dataset compared with state-of-the-art methods.
\begin{table}
\begin{tabular}{l c c} \hline \hline Method & Backbone & val mIoU \\ \hline FCN [23] & ResNet-101 & 41.4 \\ UperNet [74] & ResNet-101 & 44.9 \\ DANet [27] & ResNet-101 & 45.3 \\ OCRNet [29] & ResNet-101 & 45.7 \\ ACNet [75] & ResNet-101 & 45.9 \\ DNL [65] & ResNet-101 & 46.0 \\ DLabv3+ [26] & ResNeS-101 & 47.3 \\ DLabv3+ [26] & ResNeS-200 & 48.4 \\ \hline SETR-PUP [43] & ViT-L & 50.3 \\ SegFormer-B5 [15] & SegFormer & 51.8 \\ Seg-L-Mask/16 [42] & ViT-L & 53.6 \\ Swin’ [14] & Swin-L & 53.1 \\ \hline Graph-Segmenter (Ours) & Swin-L & **53.9** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Multi-scale inference results of semantic segmentation on the ADE-20k validation dataset compared with state-of-the-art methods. \({}^{*}\) refers to reproduced results by us.
\begin{table}
\begin{tabular}{l c c} \hline \hline Method & Backbone & test score \\ \hline ACNet [75] & ResNet-101 & 38.5 \\ DLabv3+ [26] & ResNet-101 & 55.1 \\ OCRNet [29] & ResNet-101 & 56.0 \\ DNL [65] & ResNet-101 & 56.2 \\ \hline SETR-PUP [43] & ViT-L & 61.7 \\ Swin’ [14] & Swin-L & 61.5 \\ \hline Graph-Segmenter (Ours) & Swin-L & **62.4** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results of semantic segmentation on the ADE-20k test dataset compared with state-of-the-art methods. \({}^{*}\) refers to reproduced results by us.
\(\theta\) on the experimental results, we set \(\theta\) as different thresholds, respectively, as shown in Table 8. From the table, we can observe that different thresholds still have a certain degree of influence on the final segmentation effect of the model, and it is found that the model can achieve the best results when \(\theta=\frac{1}{4K^{2}}\sum_{i}^{K}\sum_{j}^{K}r_{i,j}\). Besides, the experimental results further prove the effectiveness of the sparse setting, which can not only simplify the calculation, but also improve the segmentation accuracy of the model to a certain extent.
#### 4.4.2 Different Connection Type Analysis
We analyze the influence of different connection types between the global and local relation modeling modules on the experimental results. As Table 7 shows, the serial connections perform better than the parallel one. Meanwhile, the best performance is obtained for serial connections by placing global window-aware attention before local window-aware attention. This phenomenon further indicates that better segmentation results can be achieved by first performing large-scale coarse-grained relation modeling among windows and then fine-grained modeling within each window.
#### 4.4.3 Graph Transformer and Boundary-aware Attention Modules Analysis
In order to analyze the specific role of each component, we compare various sub-modules to analyze the experimental results separately. As Table 9 shows, the model equipped with a graph transformer (Swin-GT) module or equipped with boundary-aware attention (Swin-BA) module can obtain a noticeable improvement compared to the original Swin model. Besides, we can observe that BA can improve a
\begin{table}
\begin{tabular}{l l l} \hline Connection Type & Backbone & val mIoU \\ \hline GR \(\cup\) LR & Swin-T & 76.70 \\ LR followed by GR & Swin-T & 76.44 \\ GR followed by LR & Swin-T & **76.99** \\ \hline \end{tabular}
\end{table}
Table 7: mIoU performance of different connection types on Cityscapes dataset. “\(\cup\)” denotes parallel connection. “-T” denotes Tiny.
\begin{table}
\begin{tabular}{l l l} \hline \hline \(\theta\) & Backbone & val mIoU \\ \hline \(2v\) & Swin-T & 76.44 \\ \(v\) & Swin-T & 76.27 \\ \(\frac{1}{2}v\) & Swin-T & 76.07 \\ \(\frac{1}{4}v\) & Swin-T & 77.64 \\ \(\frac{1}{8}v\) & Swin-T & 77.05 \\ \hline \hline \end{tabular}
\end{table}
Table 8: mIoU performance of different \(\theta\) on Cityscape dataset. We set \(v=\frac{1}{K^{2}}\sum_{i}^{K}\sum_{j}^{K}r_{i,j}\) for the convenience of expression. “-T” denotes Tiny.
Figure 5: Qualitative performance comparison of our proposed Graph-Segmenter with DeepLabV3+ [26] and Swin Tranformer [14] for semantic segmentation. Our Graph-Segmenter can obtain a better segmentation boundary.
\begin{table}
\begin{tabular}{l l l} \hline \hline Method & Backbone & val mIoU \\ \hline Swin [14] & Swin-T & 75.82 \\ Swin-GT & Swin-T & 76.99 \\ Swin-BA & Swin-T & 75.94 \\ Swin-GTBA & Swin-T & **77.32** \\ \hline \hline \end{tabular}
\end{table}
Table 9: mIoU performance of different components on Cityscape dataset. “-T” denotes Tiny.
lot over Swin-GT, although it only gains a marginal improvement over Swin alone. This phenomenon implies that, after the global and local relationships are modeled, the BA module can achieve more accurate classification, since it operates on more robust features than it would without the GT enhancement. At the same time, we can see that Swin embedding all of the proposed modules (GT+BA) obtains the best performance compared to models embedding only some of them.
For analyzing the effect of two components in the graph transformer module, we compare the sub-module results, as shown in Table 10. The model with the Global Relation Modeling module (Swin-GR) or with the Local Relation Modeling module (Swin-LR) can obtain a visible improvement compared to the original model. When combining both kinds of sub-modules (Swin-GT), the model can achieve the best performance.
#### 4.4.4 Different Channel Compression Ratio \(r\) Analysis
We further analyze the performance of different compression ratios in the graph transformer module. As indicated in Table 11, the model obtains the best semantic segmentation accuracy when the compression ratio \(r\) (here we simply set \(r^{gr}\) and \(r^{lr}\) to the same value and denote them as \(r\)) is 16, which indicates that either a too large or a too small compression ratio affects the recognition accuracy of the model; therefore, as shown in the table, we choose a compression ratio of 16 in this paper.
## 5 Conclusion
This paper presents Graph-Segmenter, which enhances the vision transformer with hierarchical-level graph reasoning and efficient boundary adjustment requiring no additional annotation. Graph-Segmenter achieves state-of-the-art performance on Cityscapes, ADE-20k and PASCAL Context semantic segmentation, noticeably surpassing previous results. Extensive experiments demonstrate the effectiveness of multi-level graph modeling, where features with unstructured associations lead to an evident performance rise, and of the Boundary-aware Attention (BA) module, whose minor refinements around boundaries give smoother masks than the previous state of the art.
\begin{table}
\begin{tabular}{l l l} \hline \hline Method & Backbone & val mIoU \\ \hline Swin [14] & Swin-T & 75.82 \\ Swin-GR & Swin-T & 76.24 \\ Swin-LR & Swin-T & 76.28 \\ Swin-GT & Swin-T & **76.99** \\ \hline \hline \end{tabular}
\end{table}
Table 10: mIoU performance of two components in the graph transformer module on Cityscape dataset. “-T” denotes Tiny.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline r & r=2 & r=4 & r=8 & r=16 & r=32 \\ \hline mIoU & 76.65 & 76.15 & 76.29 & 76.99 & 76.35 \\ \hline \hline \end{tabular}
\end{table}
Table 11: mIoU performance of Graph-Segmenter (only with graph transformer module) on Cityscapes dataset for various channel compression ratio \(r\).
Figure 6: Val mIoU comparison with Swin Transformer [14] in each class on the Cityscapes val set. Our Graph-Segmenter generally achieves better performance. “sw” denotes “sidewalk”, “bd” denotes “building”, “tl” denotes “traffic light”, “ts” denotes “traffic sign”, “vg” denotes “vegetation”, “mc” denotes “motorcycle” and “bc” denotes “bicycle”.
2304.11025 | Frames of quasi-geodesics, visibility and geodesic loops | In this paper we give a characterization in terms of ``quasi-geodesics
frames' of visibility and existence of geodesic loops for bounded domains in
$\mathbb C^d$ which are Kobayashi complete hyperbolic and Gromov hyperbolic. | Filippo Bracci | 2023-04-21T15:17:22Z | http://arxiv.org/abs/2304.11025v1 | # Frames of quasi-geodesics, visibility and geodesic loops
###### Abstract.
In this paper we give a characterization in terms of "quasi-geodesics frames' of visibility and existence of geodesic loops for bounded domains in \(\mathbb{C}^{d}\) which are Kobayashi complete hyperbolic and Gromov hyperbolic.
Key words and phrases: Gromov hyperbolic spaces; Kobayashi hyperbolic spaces; visibility; extension of holomorphic maps. 2010 Mathematics Subject Classification: 32F45. Partially supported by PRIN 2017 Real and Complex Manifolds: Topology, Geometry and holomorphic dynamics, Ref: 2017JZ2SW5, by GNSAGA of INdAM and by the MUR Excellence Department Project (code E83C23000330006) awarded to the Department of Mathematics, University of Rome Tor Vergata.
Due to the completeness of \(K_{\Omega}\) and by Arzela-Ascoli's theorem, a non-compactly divergent sequence of geodesic rays or lines admits a subsequence converging uniformly on compacta to a geodesic ray or line.
One of the basic observations of the present paper is that, if \(\Omega\) is visible, then the convergence is actually uniform (not just on compacta)--see Lemma 3.1.
Moreover, visible domains have a natural family of geodesics which exhibit certain peculiarities (they form a kind of "frame"). Motivated by this, we give the following definition:
**Definition 1.1**.: Let \(\Omega\subset\subset\mathbb{C}^{d}\) be a domain such that \((\Omega,K_{\Omega})\) is complete hyperbolic and let \(A\geq 1\), \(B\geq 0\). A family \(\mathcal{F}\) is a frame of \((A,B)\)-quasi-geodesics if
1. for every \(\gamma\in\mathcal{F}\) there exists \(R_{\gamma}\in(0,+\infty]\) such that, if \(R_{\gamma}<+\infty\), \(\gamma\colon[0,R_{\gamma}]\to\Omega\) is a \((A,B)\)-quasi-geodesic segment, while, if \(R_{\gamma}=+\infty\), \(\gamma\colon[0,+\infty)\to\Omega\) is a \((A,B)\)-quasi-geodesic ray,
2. if \(\gamma\in\mathcal{F}\) is a quasi-geodesic ray (i.e., \(R_{\gamma}=+\infty\)), then there exists \(p_{\gamma}\in\partial\Omega\) such that \(\lim_{t\to+\infty}\gamma(t)=p_{\gamma}\),
3. there exists a compact subset \(K\subset\Omega\) such that \(\gamma(0)\in K\) for every \(\gamma\in\mathcal{F}\),
4. for every sequence \(\{\gamma_{k}\}\subset\mathcal{F}\) there exist a subsequence \(\{\gamma_{k_{m}}\}\) and \(\gamma\in\mathcal{F}\) such that \(\{\gamma_{k_{m}}\}\) converges uniformly to \(\gamma\),
5. there exist \(\epsilon>0\) and \(\delta>0\) such that for every \(z\in\Omega\) with \(\operatorname{dist}(z,\partial\Omega)<\epsilon\) there exists \(\gamma\in\mathcal{F}\) such that \(K_{\Omega}(z,\gamma([0,+\infty]))<\delta\).
It turns out that visible domains have a frame of geodesics (see Proposition 3.2). One of the main results of the paper is to prove the converse for Gromov hyperbolic domains:
**Theorem 1.2**.: _Let \(\Omega\subset\subset\mathbb{C}^{d}\) be a domain such that \((\Omega,K_{\Omega})\) is complete hyperbolic and Gromov hyperbolic. Assume that \(\partial\Omega\) does not contain nontrivial analytic discs. Then \(\Omega\) is visible if and only if there exists a frame of \((A,B)\)-quasi-geodesics for some \(A\geq 1,B\geq 0\)._
The hypothesis on the non-existence of nontrivial analytic discs on the boundary is technical. It is presently not known whether a visible complete hyperbolic and Gromov hyperbolic domain might have nontrivial analytic discs on the boundary (in [4, Theorem 1.2] it is shown that for a visible complete hyperbolic and Gromov hyperbolic domain with no geodesic loops--see below for the definition--and with \(C^{0}\)-boundary there cannot be nontrivial analytic discs on the boundary).
The domain \(\Omega\) is a Gromov model domain provided the identity map extends as a homeomorphism from the Gromov compactification of \(\Omega\) to the Euclidean closure \(\overline{\Omega}\) (see [6]).
Due to the equivalence between Gromov's topology and Caratheodory's prime end topology (see, e.g., [5, Ch. 4]) in dimension one, Gromov model domains play essentially the role of simply connected Jordan domains, while visible domains play the role of simply connected domains with locally connected boundary, with respect to the extension to the boundary of biholomorphisms (see [6] for a careful explanation of this fact).
The following are examples of Gromov model domains: \(C^{2}\)-bounded strongly pseudoconvex domains [1], Gromov hyperbolic (bounded or unbounded) convex domain [7, 8, 9], smooth
bounded D'Angelo finite type convex domains [21, 22], smooth bounded D'Angelo finite type pseudoconvex domains in \(\mathbb{C}^{2}\)[16], bounded Gromov hyperbolic Lipschitz \(\mathbb{C}\)-convex domains [12].
It turns out (see [6, 12]) that a bounded Kobayashi complete hyperbolic and Gromov hyperbolic domain \(\Omega\) is a Gromov model if and only if \(\Omega\) is visible and has no geodesic loops. Here, a geodesic loop for \(\overline{\Omega}\) is a geodesic line \(\gamma:(-\infty,+\infty)\to\Omega\) such that the cluster set of \(\gamma\) at \(+\infty\) coincides with the cluster set of \(\gamma\) at \(-\infty\).
In this direction, we first show (see Proposition 4.3) that if \(\Omega\) is visible and there exist \(p\in\partial\Omega\) and \(U\) an open neighborhood of \(p\) such that \(U\cap\Omega\) has at least two connected components having \(p\) on their boundary then there exists a geodesic loop for \(\overline{\Omega}\).
The second main result of the paper is a characterization of (non)existence of geodesic loops in terms of frames of quasi-geodesics.
**Definition 1.3**.: Let \(\Omega\subset\subset\mathbb{C}^{d}\) be a domain such that \((\Omega,K_{\Omega})\) is complete hyperbolic and let \(A\geq 1,\,B\geq 0.\) A frame of \((A,B)\)-quasi-geodesics \(\mathscr{F}\) is looping if there exist two quasi-geodesic rays \(\gamma,\eta\in\mathscr{F}\) such that \(\lim_{t\to+\infty}\gamma(t)=\lim_{t\to+\infty}\eta(t)\) and which stay at infinite distance from each other. We say that the quasi-geodesic frame \(\mathscr{F}\) is non-looping otherwise.
Then we prove:
**Theorem 1.4**.: _Let \(\Omega\subset\subset\mathbb{C}^{d}\) be a domain such that \((\Omega,K_{\Omega})\) is complete hyperbolic and Gromov hyperbolic. Assume \(\partial\Omega\) does not contain nontrivial analytic discs and that \(\Omega\) is visible. Then \(\Omega\) has no geodesic loops if and only if it has a non-looping frame of \((A,B)\)-quasi-geodesics for some \(A\geq 1\) and \(B\geq 0.\)_
As a consequence of Theorem 1.2, Theorem 1.4 and the previous discussion we have:
**Theorem 1.5**.: _Let \(\Omega\subset\subset\mathbb{C}^{d}\) be a domain such that \((\Omega,K_{\Omega})\) is complete hyperbolic and Gromov hyperbolic. Assume \(\partial\Omega\) does not contain nontrivial analytic discs. Then \(\Omega\) is a Gromov model domain if and only if it has a non-looping frame of \((A,B)\)-quasi-geodesics for some \(A\geq 1\) and \(B\geq 0.\)_
The definition of frames of (non-looping) quasi-geodesics might seem at first sight rather technical and useless. However, unravelling the proofs in [7, 8, 9, 10, 21, 22, 23, 16], where several kinds of (Gromov hyperbolic) bounded domains are proved to be Gromov models, one sees that actually the main work was exactly to construct non-looping quasi-geodesic frames in the sense of our definition. As another example, we sketch an argument for constructing a frame of non-looping quasi-geodesics in strongly pseudoconvex domains, taking for granted that a \(C^{2}\) bounded strongly pseudoconvex domain \(\Omega\) is Kobayashi hyperbolic and Gromov hyperbolic. Indeed, one can easily construct a frame of non-looping quasi-geodesics as follows. If \(p\in\partial\Omega\) and \(U_{p}\) is an open neighborhood of \(p\) such that \(\Omega\cap U_{p}\) is biholomorphic to a strongly convex domain \(V_{p}\), then one can consider the family of all real segments in \(V_{p}\) emanating from a fixed point in \(V_{p}\). Arguing as in [7, 8] one can see that this family is a frame of non-looping quasi-geodesics in \(V_{p}\). Hence, its preimage \(\mathscr{F}_{p}\) is a frame of non-looping quasi-geodesics in \(\Omega\cap U_{p}\).
Taking \(\mathcal{F}\) to be the union of all \(\mathcal{F}_{p}\) and using the localization of the Kobayashi distance at the boundary, one can show that \(\mathcal{F}\) is a frame of non-looping quasi-geodesics for \(\Omega\) and hence, by Theorem 1.5, \(\Omega\) is a Gromov model domain (a result well known, as remarked above, after [1]).
The paper is organized as follows. In Section 2 we recall some definitions and state some preliminary results we need in the proofs. In Section 3 we prove Theorem 1.2 and in Section 4 we prove Theorem 1.4.
## 2. Notations and preliminaries
In this section we let \(\Omega\subset\mathbb{C}^{d}\) be a bounded domain. We denote by \(k_{\Omega}\) the infinitesimal Kobayashi pseudometric of \(\Omega\) and by \(K_{\Omega}\) the Kobayashi distance of \(\Omega\). We refer the reader to [17] for definitions and properties.
If \(\Omega\subset\subset\mathbb{C}^{d}\) is complete hyperbolic, that is, \((\Omega,K_{\Omega})\) is a complete metric space, it follows from the Hopf-Rinow theorem that \((\Omega,K_{\Omega})\) is geodesic and thus, every couple of points in \(\Omega\) can be joined by a geodesic (i.e. length minimizing curve) for \(K_{\Omega}\). If \(p,q\in\Omega\), we denote by \([p,q]_{\Omega}\) any geodesic joining \(p\) and \(q\).
A geodesic \(\gamma:[a,b]\to\Omega\), \(-\infty<a<b<+\infty\) is called a geodesic segment. A geodesic \(\gamma:[a,+\infty)\to\Omega\), \(a\in\mathbb{R}\), is a geodesic ray and a geodesic \(\gamma:(-\infty,+\infty)\to\Omega\) is a geodesic line.
A geodesic triangle \(T\) is the union of 3 geodesic segments (called sides) \(T=[x,y]_{\Omega}\cup[y,z]_{\Omega}\cup[z,x]_{\Omega}\) joining 3 points \(x,\ y,z\in X\).
The complete metric space \((\Omega,K_{\Omega})\) is Gromov hyperbolic if there exists \(\delta>0\) (the Gromov constant of \(\Omega\)) such that every geodesic triangle \(T\) is \(\delta\)-thin, that is, every point on a side of \(T\) has distance from the union of the other two sides less than or equal to \(\delta\) (see, e.g., [13, 15] for details and further properties).
For an absolutely continuous curve \(\gamma:[a,b]\to\Omega\), we denote by \(l_{k_{\Omega}}(\gamma;[s,t])\) the length of the curve \(\gamma\) on \([s,t]\), \(a\leq s<t\leq b\), that is
\[l_{k_{\Omega}}(\gamma;[s,t]):=\int_{s}^{t}k_{\Omega}(\gamma(\tau);\gamma^{ \prime}(\tau))d\tau.\]
Let \(A\geq 1\) and \(B\geq 0\). An absolutely continuous curve \(\gamma:[a,b]\to\Omega\) is an \((A,B)\)-quasi-geodesic if for every \(a\leq s<t\leq b\), we have:
\[l_{k_{\Omega}}(\gamma;[s,t])\leq AK_{\Omega}(\gamma(s),\gamma(t))+B.\]
An \((A,B)\)-quasi-geodesic ray (or line) is an absolutely continuous curve whose restriction to any compact interval in the domain of definition is an \((A,B)\)-quasi-geodesic. A quasi-geodesic is just any \((A,B)\)-quasi-geodesic segment/ray/line for some \(A\geq 1\) and \(B\geq 0\).
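A simple illustration of the definition (a standard observation, stated here only to fix ideas): every geodesic is an \((A,B)\)-quasi-geodesic for all \(A\geq 1\), \(B\geq 0\), since \(l_{k_{\Omega}}(\gamma;[s,t])=K_{\Omega}(\gamma(s),\gamma(t))\) for geodesics. Slightly less trivially, if \(\gamma:[0,T]\to\Omega\) is a geodesic and \(\eta\) is an absolutely continuous curve starting at \(\gamma(T)\) with \(l_{k_{\Omega}}(\eta)\leq B/2\), then the concatenation \(\sigma\) of \(\gamma\) followed by \(\eta\) is a \((1,B)\)-quasi-geodesic: when \(s\) lies in the geodesic part and \(t\) in the appended part,

\[l_{k_{\Omega}}(\sigma;[s,t])\leq K_{\Omega}(\gamma(s),\gamma(T))+\frac{B}{2}\leq K_{\Omega}(\sigma(s),\sigma(t))+K_{\Omega}(\sigma(t),\gamma(T))+\frac{B}{2}\leq K_{\Omega}(\sigma(s),\sigma(t))+B,\]

while the remaining cases are immediate.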
One of the main features of Gromov hyperbolic spaces is the so-called Geodesic Stability Theorem, which says that every \((A,B)\)-quasi-geodesic is shadowed by a geodesic at a distance which depends only on \(A,B\) and the Gromov constant of the space. In this paper we do not need it directly, but we will use instead a straightforward consequence for "quasi-geodesic rectangles" (see, e.g., [22, Observation 4.4]):
**Lemma 2.1**.: _Let \(\Omega\subset\subset\mathbb{C}^{d}\) be a Kobayashi complete hyperbolic domain such that \((\Omega,K_{\Omega})\) is Gromov hyperbolic. Let \(a,b,c,d\in\Omega.\) Let \(A\geq 1\) and \(B\geq 0.\) If \(\Gamma_{1}\) is a \((A,B)\)-quasi-geodesic joining \(a\) with \(b\), \(\Gamma_{2}\) is a \((A,B)\)-quasi-geodesic joining \(b\) with \(c\), \(\Gamma_{3}\) is a \((A,B)\)-quasi-geodesic joining \(c\) with \(d\), \(\Gamma_{4}\) is a \((A,B)\)-quasi-geodesic joining \(a\) with \(d\) (that is, \(\Gamma_{1}\cup\Gamma_{2}\cup\Gamma_{3}\cup\Gamma_{4}\) is a \((A,B)\)-quasi-geodesic rectangle with sides \(\Gamma_{1},\Gamma_{2},\Gamma_{3}\) and \(\Gamma_{4}\)) then there exists \(N>0\) (which depends only on \(A,B\) and the Gromov constant of \(\Omega\)) such that every point of one side is contained in the \(N\)-tubular neighborhood (with respect to \(K_{\Omega}\)) of the union of the other three._
We now turn to the precise definition of visibility.
Let \(p,q\in\partial\Omega\), \(p\neq q.\) We say that the couple \((p,q)\) satisfies the visibility condition with respect to \(K_{\Omega}\) if there exist a neighborhood \(V_{p}\) of \(p\) and a neighborhood \(V_{q}\) of \(q\) and a compact subset \(K\) of \(\Omega\) such that \(V_{p}\cap V_{q}=\emptyset\) and \([x,y]_{\Omega}\cap K\neq\emptyset\) for every \(x\in V_{p}\cap\Omega\), \(y\in V_{q}\cap\Omega\).
We say that \(\Omega\) is visible if every couple of points \(p,q\in\partial\Omega\), \(p\neq q\), satisfies the visibility condition with respect to \(K_{\Omega}\).
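To fix ideas, a basic example: the unit disc \(\mathbb{D}\subset\mathbb{C}\) is visible. Indeed, \(K_{\mathbb{D}}\) is the Poincare distance and its geodesics are arcs of circles orthogonal to \(\partial\mathbb{D}\); if \(p,q\in\partial\mathbb{D}\), \(p\neq q\), and \(V_{p},V_{q}\) are sufficiently small disjoint discs centered at \(p\) and \(q\), then every geodesic joining a point of \(V_{p}\cap\mathbb{D}\) to a point of \(V_{q}\cap\mathbb{D}\) is an arc whose end points are close to \(p\) and \(q\), hence it meets a fixed compact neighborhood in \(\mathbb{D}\) of a point of the geodesic line with end points \(p\) and \(q\). The same conclusion holds, far less trivially, for the classes of domains listed in the Introduction as Gromov model domains.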
We say that a geodesic line \(\gamma:(-\infty,+\infty)\to\Omega\) is a geodesic loop in \(\overline{\Omega}\) if \(\gamma\) has the same cluster set \(\Gamma\) in \(\overline{\Omega}\) at \(+\infty\) and \(-\infty\). In such a case we say that \(\Gamma\) is the vertex of the geodesic loop \(\gamma\).
By [11, Lemma 3.1], we have:
**Lemma 2.2**.: _Suppose \(\Omega\subset\mathbb{C}^{d}\) is a bounded complete hyperbolic domain. If \(\Omega\) is visible then every geodesic ray lands, i.e., if \(\gamma:[0,+\infty)\to\Omega\) is a geodesic then there exists \(p\in\partial\Omega\) such that \(\lim_{t\to+\infty}\gamma(t)=p\). In particular, the vertex of every geodesic loop in \(\overline{\Omega}\) is a point in \(\partial\Omega\)._
In the sequel, we will also need the following lemma (see [6, Lemma A.2])
**Lemma 2.3** (D'Addezio).: _Let \(D\subset\mathbb{C}^{d}\) be a complete hyperbolic bounded domain. Assume that \(\partial D\) does not contain non-trivial analytic discs. If \(\{z_{n}\},\{w_{n}\}\subset D\) are two sequences such that \(\lim_{n\to+\infty}z_{n}=p,\lim_{n\to+\infty}w_{n}=q\) with \(p,q\in\partial D\) and_
\[\sup_{n}K_{D}(z_{n},w_{n})<+\infty,\]
_then \(p=q\)._
## 3. Visible domains and frame of quasi-geodesics
As a matter of notation, if \(\{\gamma_{k}\}\) is a sequence of curves in \(\mathbb{C}^{d}\) such that for every \(k\) the curve \(\gamma_{k}\) is defined on some interval \([0,R_{k}]\), with \(R_{k}\in(0,+\infty]\), we say that \(\{\gamma_{k}\}\) converges uniformly to a curve \(\gamma:[0,a]\to\Omega\) provided that \(a=\lim_{k\to+\infty}R_{k}\) and for every \(\theta>0\) there exists \(k_{0}\in\mathbb{N}\) such that for every \(k\geq k_{0}\) and for every \(t\in[0,R_{k}]\) it holds
\[|\gamma_{k}(t)-\gamma(t)|\leq\theta.\]
**Lemma 3.1**.: _Let \(\Omega\subset\subset\mathbb{C}^{d}\) be a domain such that \((\Omega,K_{\Omega})\) is complete hyperbolic. Suppose \(\Omega\) is visible. If \(\{\gamma_{k}\}\) is a sequence of geodesics (segments or rays) in \(\Omega\) which converges uniformly on compacta to a geodesic ray \(\gamma\), then \(\{\gamma_{k}\}\) converges uniformly to \(\gamma\)._

Proof.: By hypothesis, for every \(k\), either \(\gamma_{k}\) is a geodesic segment defined on \([0,R_{k}]\) for some \(R_{k}>0\) or \(\gamma_{k}\) is a geodesic ray defined on \([0,+\infty)\).
In case \(\gamma_{k}\) is a geodesic ray, since \(\Omega\) is visible, by Lemma 2.2 there exists \(\gamma_{k}(+\infty):=\lim_{t\to+\infty}\gamma_{k}\), hence, in this case, in order to unify notations, we can consider \(\gamma_{k}\) to be defined on \([0,+\infty]\) and set \(R_{k}=+\infty\).
Now, assume by contradiction there exists \(\theta>0\) and a sequence \(\{t_{k}\}\) such that \(t_{k}\in[0,R_{k}]\) and
\[|\gamma_{k}(t_{k})-\gamma(t_{k})|\geq\theta,\]
where, as before, if \(t_{k}=+\infty\) we set \(\gamma(+\infty):=\lim_{t\to+\infty}\gamma(t)\).
Since \(\{\gamma_{k}\}\) converges uniformly on compacta to \(\gamma\), it follows that either \(t_{k}=+\infty\) or \(\{t_{k}\}\) converges to \(+\infty\). Hence, up to subsequences, we can assume that \(\{\gamma_{k}(t_{k})\}\) converges to a point \(q\in\partial\Omega\) such that
\[|q-\gamma(+\infty)|\geq\theta.\]
Fix an open neighborhood \(U\) of \(\gamma(+\infty)\) and an open neighborhood \(V\) of \(q\) such that \(\overline{U}\cap\overline{V}=\emptyset\). By visibility hypothesis, there exists a compact subset \(K\subset\Omega\) such that every geodesic in \(\Omega\) joining a point of \(U\cap\Omega\) to a point in \(V\cap\Omega\) has to intersect \(K\).
Since \(\gamma(t)\) tends to \(\gamma(+\infty)\) as \(t\to+\infty\), there exists \(R>0\) such that \(\gamma(r)\in U\cap\Omega\) for all \(r\geq R\). Hence, for every \(r\geq R\), there exists \(k_{r}\in\mathbb{N}\) such that \(\gamma_{k}(r)\in U\cap\Omega\) for \(k\geq k_{r}\). Up to take \(k_{r}\) larger, we can also assume that \(\gamma_{k}(t_{k})\in V\cap\overline{\Omega}\) for \(k\geq k_{r}\).
Thus, \(\gamma_{k}([r,t_{k}))\) is a geodesic from \(U\cap\Omega\) to \(V\cap\Omega\) for \(k\geq k_{r}\). Therefore, for every \(k\geq k_{r}\) there exists \(s_{k}\in(r,t_{k})\) such that \(\gamma_{k}(s_{k})\in K\).
But,
\[+\infty>\max_{\xi\in K}K_{\Omega}(z_{0},\xi)\geq K_{\Omega}(z_{0},\gamma_{k}(s_{k}))=K_{\Omega}(\gamma_{k}(0),\gamma_{k}(s_{k}))=s_{k}>r,\]
which gives a contradiction for \(r\) large enough.
**Proposition 3.2**.: _Let \(\Omega\subset\subset\mathbb{C}^{d}\) be a domain such that \((\Omega,K_{\Omega})\) is complete hyperbolic. If \(\Omega\) is visible then there exists a frame of geodesics._
Proof.: Fix \(z_{0}\in\Omega\) and let \(\mathcal{F}\) be the set of all geodesics emanating from \(z_{0}\), i.e., \(\gamma\in\mathcal{F}\) if either there exists \(R_{\gamma}\in(0,+\infty)\) such that \(\gamma:[0,R_{\gamma}]\to\Omega\) is a geodesic segment or \(\gamma:[0,+\infty)\to\Omega\) is a geodesic ray, and (in both cases) \(\gamma(0)=z_{0}\). By definition, \(\mathcal{F}\) satisfies conditions (1) and (3) of Definition 1.1. Also, since the distance \(K_{\Omega}\) is complete, by Hopf-Rinow's theorem, for every \(w\in\Omega\) there exists a geodesic segment \(\gamma\) joining \(z_{0}\) and \(w\), hence, (5) is trivially satisfied for any \(\varepsilon>0\) and \(\delta>0\).
Moreover, since \(\Omega\) is visible, it follows from Lemma 2.2 that \(\mathcal{F}\) enjoys also (2).
Now, we show that \(\mathcal{F}\) satisfies (4). Assume \(\{\gamma_{k}\}\) is a sequence of geodesics in \(\mathcal{F}\), all emanating from \(z_{0}\).

Note that for every \(k\), either \(\gamma_{k}\) is a geodesic segment defined on \([0,R_{k}]\) for some \(R_{k}>0\) or \(\gamma_{k}\) is a geodesic ray defined on \([0,+\infty)\). However, as remarked above, in the latter case there exists \(\gamma_{k}(+\infty):=\lim_{t\to+\infty}\gamma_{k}(t)\), hence, we can consider \(\gamma_{k}\) to be defined on \([0,+\infty]\) and set \(R_{k}=+\infty\).
By Arzela-Ascoli's theorem, it follows that, up to subsequences, \(\{\gamma_{k}\}\) converges uniformly on compacta to some \(\gamma\in\mathcal{F}\). By Lemma 3.1 the convergence is actually uniform, and we are done.
Now we are in a good shape to prove Theorem 1.2:
Proof of Theorem 1.2.: One direction follows from Proposition 3.2.
Conversely, assume that \(\Omega\) has a frame \(\mathcal{F}\) of \((A,B)\)-quasi-geodesics. Hence, there exists a compact subset \(K\subset\Omega\) such that every quasi-geodesic in \(\mathcal{F}\) starts in \(K\). Let \(\{z_{k}^{j}\}\subset\Omega\) be a sequence converging to \(p_{j}\in\partial\Omega\), \(j=1,2\), with \(p_{1}\neq p_{2}\).

We argue by contradiction and we assume that \(\{[z_{k}^{1},z_{k}^{2}]_{\Omega}\}\) eventually escapes every compact subset of \(\Omega\). By hypothesis, for \(k\) large enough, we can find \(\gamma_{k}^{1},\gamma_{k}^{2}\in\mathcal{F}\) and \(w_{k}^{j}\in\gamma_{k}^{j}\) such that \(K_{\Omega}(z_{k}^{j},w_{k}^{j})<\delta\), \(j=1,2\) (here, with a slight abuse of notation, we denote by \(\gamma_{k}^{j}\) also the image of the quasi-geodesic \(\gamma_{k}^{j}\)).
By Lemma 2.3, since \(\partial\Omega\) does not contain nontrivial analytic discs, it follows that \(\{w_{k}^{j}\}\) converges to \(p_{j}\), \(j=1,2\) for \(k\to+\infty\).
By condition (4) of Definition 1.1, we can assume, up to subsequences, that \(\{\gamma_{k}^{j}\}\) converges uniformly to some \(\gamma^{j}\in\mathcal{F}\), \(j=1,2\). Hence,
\[\lim_{t\to+\infty}\gamma^{j}(t)=p_{j},\quad j=1,2. \tag{3.1}\]
We claim now that \(\{[w_{k}^{1},w_{k}^{2}]_{\Omega}\}\) eventually escapes every compact subset of \(\Omega\), since so does \(\{[z_{k}^{1},z_{k}^{2}]_{\Omega}\}\). Indeed, assume by contradiction that there exists a compact set \(Q\subset\subset\Omega\) and \(c_{k}\in[w_{k}^{1},w_{k}^{2}]_{\Omega}\) such that \(c_{k}\in Q\) for every \(k\). Since
\[[w_{k}^{1},w_{k}^{2}]_{\Omega}\cup[z_{k}^{1},z_{k}^{2}]_{\Omega}\cup[z_{k}^{1 },w_{k}^{1}]_{\Omega}\cup[z_{k}^{2},w_{k}^{2}]_{\Omega}\]
is a geodesic rectangle, by Lemma 2.1, it follows that there exist \(N>0\) and
\[t_{k}\in[z_{k}^{1},z_{k}^{2}]_{\Omega}\cup[z_{k}^{1},w_{k}^{1}]_{\Omega}\cup[ z_{k}^{2},w_{k}^{2}]_{\Omega}\]
such that \(K_{\Omega}(t_{k},c_{k})\leq N\). Since for every \(s\in[z_{k}^{j},w_{k}^{j}]_{\Omega}\) we have that \(K_{\Omega}(s,z_{k}^{j})\leq K_{\Omega}(w_{k}^{j},z_{k}^{j})<\delta\), \(j=1,2\), it follows from the completeness of \(K_{\Omega}\) that \(t_{k}\in[z_{k}^{1},z_{k}^{2}]_{\Omega}\). But then \(t_{k}\) belongs to a \(N\)-tubular neighborhood of \(Q\), which is as well a relatively compact subset of \(\Omega\), and we get a contradiction.
Therefore, \(\{[w_{k}^{1},w_{k}^{2}]_{\Omega}\}\) eventually escapes every compact subset of \(\Omega\).
Let \(a_{k}^{j}>0\) be such that \(\gamma_{k}^{j}(a_{k}^{j})=w_{k}^{j}\), \(j=1,2\), and consider now the quasi-geodesic rectangle

\[[w_{k}^{1},w_{k}^{2}]_{\Omega}\cup\gamma_{k}^{1}([0,a_{k}^{1}])\cup\gamma_{k}^{2}([0,a_{k}^{2}])\cup[\gamma_{k}^{1}(0),\gamma_{k}^{2}(0)]_{\Omega}.\]

Note that \(\{[\gamma_{k}^{1}(0),\gamma_{k}^{2}(0)]_{\Omega}\}\) is relatively compact in \(\Omega\). Hence, by Lemma 2.1, there exists \(N>0\) such that for every point \(s_{k}\in[w_{k}^{1},w_{k}^{2}]_{\Omega}\) there exists \(\zeta(s_{k})\in\gamma_{k}^{1}([0,a_{k}^{1}])\cup\gamma_{k}^{2}([0,a_{k}^{2}])\) such that
\[K_{\Omega}(s_{k},\zeta(s_{k}))\leq N. \tag{3.2}\]
Since \(\{s_{k}\}\) is compactly divergent, it follows by the completeness of \(K_{\Omega}\) that \(\{\zeta(s_{k})\}\) is compactly divergent as well. By (3.1), the cluster set of \(\{\zeta(s_{k})\}\) is contained in \(\{p_{1}\}\cup\{p_{2}\}\) and, by (3.2) and Lemma 2.3, so is the cluster set of \(\{s_{k}\}\). But this is a contradiction because the cluster set of \([w_{k}^{1},w_{k}^{2}]_{\Omega}\) has to be connected and contains both \(p_{1}\) and \(p_{2}\). Hence, \(\Omega\) is visible.
## 4. Geodesic loops in visible domains
**Definition 4.1**.: Let \(\Omega\subset\subset\mathbb{C}^{d}\) be a domain such that \((\Omega,K_{\Omega})\) is complete hyperbolic. We say that two quasi-geodesic rays \(\gamma,\eta\) stay at infinite distance from each other provided that for every \(C>0\) there exists \(t_{C}>0\) such that

\[\min_{t\geq t_{C}}K_{\Omega}\big(\gamma(t),\eta([0,+\infty))\big)>C,\quad\min_{t\geq t_{C}}K_{\Omega}\big(\eta(t),\gamma([0,+\infty))\big)>C.\]
**Lemma 4.2**.: _Let \(\Omega\subset\subset\mathbb{C}^{d}\) be a domain such that \((\Omega,K_{\Omega})\) is complete hyperbolic. Assume \(\gamma:\mathbb{R}\to\Omega\) is a geodesic loop for \(\overline{\Omega}\). Let \(\gamma^{+}:[0,+\infty)\to\Omega\) be the geodesic ray defined by \(\gamma^{+}(t):=\gamma(t)\) and let \(\gamma^{-}:[0,+\infty)\to\Omega\) be the geodesic ray defined by \(\gamma^{-}(t):=\gamma(-t)\). Then \(\gamma^{+}\) and \(\gamma^{-}\) stay at infinite distance from each other._

Proof.: Assuming by contradiction that \(\gamma^{+},\gamma^{-}\) do not stay at infinite distance from each other, we can find \(C>0\) such that for every \(t\in[0,+\infty)\) there exists \(s_{t}\in[0,+\infty)\) such that
\[K_{\Omega}(\gamma^{+}(t),\gamma^{-}(s_{t}))\leq C.\]
But, since \(\gamma\) is a geodesic,
\[t+s_{t}=K_{\Omega}(\gamma(t),\gamma(-s_{t}))=K_{\Omega}(\gamma^{+}(t),\gamma^ {-}(s_{t}))\leq C,\]
and, for \(t\to+\infty\) we have a contradiction.
**Proposition 4.3**.: _Let \(\Omega\subset\subset\mathbb{C}^{d}\) be a domain such that \((\Omega,K_{\Omega})\) is complete hyperbolic. Assume that \(\Omega\) is visible. If there exists \(p\in\partial\Omega\) and an open neighborhood \(U\) of \(p\) such that \(U\cap\Omega\) has at least two connected components such that \(p\) belongs to their closure, then there exists a geodesic loop for \(\overline{\Omega}\) with vertex \(p\)._
Proof.: Let \(p,U\) be as in the statement and let \(V_{1},V_{2}\) be two connected components of \(\Omega\cap U\) such that \(p\in\overline{V_{j}}\), \(j=1,2\). For \(j=1,2\), let \(\mathcal{V}_{j}\subset\subset V_{j}\) be an open set such that \(p\in\overline{\mathcal{V}_{j}}\) and \(p\not\in\partial\mathcal{V}_{j}\). Moreover, for \(j=1,2\), let \(\{z_{n}^{j}\}\subset\mathcal{V}_{j}\) be a sequence converging to \(p\). For every \(n\), let \(\gamma_{n}:[a_{n},b_{n}]\to\Omega\) be a geodesic such that \(\gamma_{n}(a_{n})=z_{n}^{1}\) and \(\gamma_{n}(b_{n})=z_{n}^{2}\), \(a_{n}<b_{n}\).

We claim that there exists a compact subset \(K\subset\Omega\) such that \(\gamma_{n}([a_{n},b_{n}])\cap K\neq\emptyset\) for all \(n\). Indeed, if this were not the case, we can assume that \(\{\gamma_{n}([a_{n},b_{n}])\}\) escapes every compact subset of \(\Omega\) for \(n\) large. Let \(r_{n}\in(a_{n},b_{n})\) be such that \(\gamma_{n}(r_{n})\in\partial\mathcal{V}_{2}\), for all \(n\). Then, up to subsequences, \(\{\gamma_{n}(r_{n})\}\) converges, for \(n\to\infty\), to a point \(q\in\partial\Omega\setminus\{p\}\). Since \(\Omega\) is visible, it follows that there exists a compact subset \(K^{\prime}\subset\Omega\) such that \(\gamma_{n}([r_{n},b_{n}])\cap K^{\prime}\neq\emptyset\), reaching a contradiction.
Up to an affine change of parameterization, we can thus assume that \(\{\gamma_{n}(0)\}\) is relatively compact in \(\Omega\) and \(a_{n}<0<b_{n}\) for all \(n\).
We claim that \(\{\gamma_{n}\}\) (under the assumption that \(\{\gamma_{n}(0)\}\) is relatively compact in \(\Omega\)) converges, up to subsequences, to a geodesic loop in \(\overline{\Omega}\) with vertex \(p\).
Indeed, since \((\Omega,K_{\Omega})\) is complete and \(\gamma_{n}(0)\in K\),
\[\lim_{n\to\infty}|a_{n}|=\lim_{n\to\infty}K_{\Omega}(\gamma_{n}(0),z_{n}^{1})=+\infty,\]
and thus \(a_{n}\to-\infty\) for \(n\to\infty\). Similarly, \(b_{n}\to+\infty\) for \(n\to\infty\). Thus, for every \(S,T\in\mathbb{R}\) there exists \(n_{0}\) such that \(S,T\in(a_{n},b_{n})\) for \(n>n_{0}\). Moreover, setting for \(R>0\),
\[d_{K}(R):=\max\{K_{\Omega}(z,w):K_{\Omega}(z,K)\leq R,K_{\Omega}(w,K)\leq R\}<+\infty,\]
we have, for all \(n>n_{0}\),
\[K_{\Omega}(\gamma_{n}(S),\gamma_{n}(T))=K_{\Omega}(\gamma_{n}(0),\gamma_{n}(S ))+K_{\Omega}(\gamma_{n}(0),\gamma_{n}(T))\leq d_{K}(|S|)+d_{K}(|T|).\]
Hence \(\{\gamma_{n}\}\) is locally equibounded and locally equicontinuous; by Arzela-Ascoli's theorem, and taking into account that \(K_{\Omega}(\gamma_{n}(T),\gamma_{n}(S))=|T-S|\) for all \(n\), we can extract a subsequence converging uniformly on compacta of \(\mathbb{R}\) to a geodesic line \(\gamma\) such that \(\gamma(0)\in K\).
By Lemma 2.2, \(\lim_{t\to\pm\infty}\gamma(t)=p\). Hence, \(\gamma\) is a geodesic loop in \(\overline{\Omega}\) with vertex \(p\), and we are done.
For Gromov hyperbolic domains the existence of a geodesic loop is equivalent to the existence of two geodesic rays landing at the same point which stay at infinite distance each other:
**Proposition 4.4**.: _Let \(\Omega\subset\subset\mathbb{C}^{d}\) be a domain such that \((\Omega,K_{\Omega})\) is complete hyperbolic and Gromov hyperbolic. Assume that \(\Omega\) is visible. Then there exists a geodesic loop for \(\overline{\Omega}\) with vertex \(p\in\partial\Omega\) if and only if there exist two geodesic rays in \(\Omega\) landing at \(p\) which stay at infinite distance from each other._

Proof.: If \(\gamma\) is a geodesic loop for \(\overline{\Omega}\) with vertex \(p\), then \(\gamma^{+}\) and \(\gamma^{-}\) are geodesic rays at infinite distance from each other, landing at \(p\) (see Lemma 4.2).

Conversely, if \(\alpha,\beta:[0,+\infty)\to\Omega\) are two geodesic rays which stay at infinite distance from each other, for each \(T>0\), we consider the geodesic rectangle given by
\[[\alpha(0),\beta(0)]_{\Omega}\cup\alpha([0,T])\cup[\alpha(T),\beta(T)]_{ \Omega}\cup\beta([0,T]).\]
Since \((\Omega,K_{\Omega})\) is Gromov hyperbolic, by Lemma 2.1 there exists \(\delta>0\) such that \([\alpha(T),\beta(T)]_{\Omega}\) is contained in the \(\delta\)-tubular neighborhood (with respect to \(K_{\Omega}\)) of \([\alpha(0),\beta(0)]_{\Omega}\cup\alpha([0,T])\cup\beta([0,T])\).
Since \(\alpha\) and \(\beta\) stay at infinite distance from each other, it follows that there exists \(T_{0}>0\) such that \(K_{\Omega}(\alpha(t),\beta(s))>4\delta\) for all \(t,s\geq T_{0}\).

We claim that for all \(T>T_{0}\) there exists \(z_{T}\in[\alpha(T),\beta(T)]_{\Omega}\) such that \(z_{T}\) belongs to the \(\delta\)-tubular neighborhood \(\mathcal{N}\) of \([\alpha(0),\beta(0)]_{\Omega}\cup\alpha([0,T_{0}])\cup\beta([0,T_{0}])\).
Indeed, if this were not the case, then every \(\xi\in[\alpha(T),\beta(T)]_{\Omega}\) would belong to a \(\delta\)-tubular neighborhood \(U\) of \(\alpha([T_{0},T])\cup\beta([T_{0},T])\). However, since \(K_{\Omega}(\alpha(t),\beta(s))>4\delta\) for all \(t,s\geq T_{0}\), it follows that \(U\) is the disjoint union of two open sets. But then \([\alpha(T),\beta(T)]_{\Omega}\) is the disjoint
union of the two open sets given by \([\alpha(T),\beta(T)]_{\Omega}\cap U\), hence, not connected, a contradiction and the claim follows.
Now, since \(K_{\Omega}\) is a complete distance, \(K:=\overline{\mathcal{N}}\) is a compact subset of \(\Omega\) and
\[[\alpha(T),\beta(T)]_{\Omega}\cap K\neq\emptyset\quad\forall T\geq 0.\]
Choosing a parametrization \(\gamma_{T}\) of \([\alpha(T),\beta(T)]_{\Omega}\) such that \(\gamma_{T}(0)\in K\) and arguing as in the proof of Proposition 4.3, we see that we can extract a sequence \(\{\gamma_{T_{n}}\}\) converging to a geodesic loop for \(\overline{\Omega}\) with vertex \(p\).
Now we are in good shape to prove Theorem 1.4:
Proof of Theorem 1.4.: First, assume that \(\Omega\) has no geodesic loops. Then by Proposition 3.2 (and its proof), the family \(\mathcal{F}\) of all geodesics emanating from a given point \(z_{0}\in\Omega\) is a geodesic frame. Such a geodesic frame is non-looping by Proposition 4.4.
Suppose now that \(\Omega\) has a non-looping frame of quasi-geodesics \(\mathcal{F}\). Assume by contradiction that \(\gamma:\mathbb{R}\to\Omega\) is a geodesic loop. Since \(\Omega\) is visible, it follows that there exists \(p\in\partial\Omega\) such that \(\lim_{t\to\pm\infty}\gamma(t)=p\). Let \(\delta>0,\varepsilon>0\) be given as in (5) of Definition 1.1 and let
\[V:=\{z\in\Omega:\operatorname{dist}(z,\partial\Omega)<\varepsilon\}.\]
Let \(\gamma^{+}(t):=\gamma(t)\) for \(t\geq 0\) and \(\gamma^{-}(t):=\gamma(-t)\) for \(t\geq 0\).
Hence, there exists \(t_{0}>0\) such that \(\gamma^{\pm}(t)\in V\) for every \(t\geq t_{0}\).
By hypothesis, for each \(k\geq t_{0}\), \(k\in\mathbb{N}\), there exist \(\gamma_{k}^{\pm}\in\mathcal{F}\) and some \(s_{k}^{\pm}\) in the domain of definition of \(\gamma_{k}^{\pm}\) such that
\[K_{\Omega}(\gamma^{\pm}(k),\gamma_{k}^{\pm}(s_{k}^{\pm}))\leq\delta. \tag{4.1}\]
As \(k\to+\infty\), since \(K_{\Omega}\) is a complete distance and by Lemma 2.3, it follows that \(s_{k}^{\pm}\to+\infty\) and \(\gamma_{k}^{\pm}(s_{k}^{\pm})\to p\). By definition of quasi-geodesic frame, up to subsequences, \(\{\gamma_{k}^{\pm}\}\) converges uniformly to some quasi-geodesic ray \(\gamma_{\infty}^{\pm}\in\mathcal{F}\) such that
\[\lim_{t\to+\infty}\gamma_{\infty}^{\pm}(t)=p.\]
Since \(\mathcal{F}\) is non-looping, \(\gamma_{\infty}^{+}\) and \(\gamma_{\infty}^{-}\) stay at finite distance from each other.
We claim that this implies that there exists \(C>0\) such that for all \(T\geq 0\),
\[K_{\Omega}(\gamma^{+}(T),\gamma^{-}(T))\leq C, \tag{4.2}\]
reaching a contradiction to Lemma 4.2.
In order to prove (4.2), for every \(k\geq t_{0}\), consider the quasi-geodesic rectangle given by
\[[\gamma^{+}(k),\gamma_{k}^{+}(s_{k}^{+})]_{\Omega}\cup\gamma^{+}([0,k])\cup\gamma_{k}^{+}([0,s_{k}^{+}])\cup[\gamma(0),\gamma_{k}^{+}(0)]_{\Omega}.\]
Fix \(T\geq t_{0}\). Hence, by Lemma 2.1, there exists some \(R>0\) such that for every \(k>T\) there exists \(z_{k}\in[\gamma^{+}(k),\gamma_{k}^{+}(s_{k})]_{\Omega}\cup\gamma_{k}^{+}([0,s _{k}^{+}])\cup[\gamma(0),\gamma_{k}^{+}(0)]_{\Omega}\) such that
\[K_{\Omega}(z_{k},\gamma^{+}(T))\leq R. \tag{4.3}\]
Since by hypothesis \(\{\gamma_{k}^{+}(0)\}\) is relatively compact in \(\Omega\)--and so is \(\{[\gamma(0),\gamma_{k}^{+}(0)]_{\Omega}\}\)--and \(\gamma^{+}(t)\to p\in\partial\Omega\) as \(t\to+\infty\), it follows that, if \(t_{0}\) (and hence \(T\)) is large enough, \(z_{k}\not\in[\gamma(0),\gamma_{k}^{+}(0)]_{\Omega}\).
Moreover, \(\{[\gamma^{+}(k),\gamma_{k}^{+}(s_{k})]_{\Omega}\}\) is compactly divergent. Indeed, if this is not the case, there is \(y_{k}\in[\gamma^{+}(k),\gamma_{k}^{+}(s_{k})]_{\Omega}\) such that \(\{y_{k}\}\) is relatively compact, thus
\[K_{\Omega}(\gamma^{+}(k),\gamma_{k}^{+}(s_{k}))=K_{\Omega}(\gamma^{+}(k),y_{k })+K_{\Omega}(y_{k},\gamma_{k}^{+}(s_{k})),\]
and the latter quantity tends to \(+\infty\) as \(k\to+\infty\) because \(K_{\Omega}\) is a complete distance, against (4.1).
Hence, if \(k\) is sufficiently large, \(z_{k}\in\gamma_{k}^{+}([0,s_{k}^{+}])\). By (4.3), \(\{z_{k}\}\) is relatively compact in \(\Omega\).
Since \(\gamma_{\infty}^{-}\) and \(\gamma_{\infty}^{+}\) stay at finite distance from each other and \(\{\gamma_{k}^{\pm}\}\) converges uniformly to \(\gamma_{\infty}^{\pm}\), it follows that there exist \(D>0\) and \(w_{k}\in\gamma_{k}^{-}\) for \(k\) sufficiently large, so that
\[K_{\Omega}(z_{k},w_{k})\leq D.\]
In particular, this implies that also \(\{w_{k}\}\) is relatively compact in \(\Omega\).
Using again Lemma 2.1 and arguing as before with the quasi-geodesic rectangle
\[[\gamma^{-}(k),\gamma_{k}^{-}(s_{k}^{-})]_{\Omega}\cup\gamma^{-}([0,k])\cup\gamma_{k}^{-}([0,s_{k}^{-}])\cup[\gamma(0),\gamma_{k}^{-}(0)]_{\Omega},\]
we can find \(S\geq 0\) and \(E>0\) such that for all \(k\) sufficiently large
\[K_{\Omega}(\gamma^{-}(S),w_{k})\leq E.\]
Then, by the triangle inequality, \(K_{\Omega}(\gamma^{+}(T),\gamma^{-}(S))\leq R+D+E\). Setting \(C:=2(R+D+E)\), we therefore obtain

\[K_{\Omega}(\gamma^{+}(T),\gamma^{-}(T)) \leq K_{\Omega}(\gamma^{+}(T),\gamma^{-}(S))+K_{\Omega}(\gamma^{-}(S),\gamma^{-}(T))\] \[=K_{\Omega}(\gamma^{+}(T),\gamma^{-}(S))+|T-S|\] \[=K_{\Omega}(\gamma^{+}(T),\gamma^{-}(S))+|K_{\Omega}(\gamma^{+}(T),\gamma(0))-K_{\Omega}(\gamma^{-}(S),\gamma(0))|\] \[\leq 2K_{\Omega}(\gamma^{+}(T),\gamma^{-}(S))\leq C.\]
Thus, (4.2) follows, and we are done.
|
2308.06854 | Characteristic $p$ analogues of the Mumford--Tate and André--Oort
conjectures for products of ordinary GSpin Shimura varieties | Let $p$ be an odd prime. We state characteristic $p$ analogues of the
Mumford--Tate conjecture and the Andr\'e--Oort conjecture for ordinary strata
of mod $p$ Shimura varieties. We prove the conjectures for arbitrary products
of GSpin Shimura varieties (and their subvarieties). Important subvarieties of
GSpin Shimura varieties include modular and Shimura curves, Hilbert modular
surfaces, $\mathrm{U}(1,n)$ unitary Shimura varieties, and moduli spaces of
principally polarized Abelian and K3 surfaces. The two conjectures are both
related to a notion of linearity for mod $p$ Shimura varieties, about which
Chai has formulated the Tate-linear conjecture. Though seemingly different, the
three conjectures are intricately entangled. We will first solve the
Tate-linear conjecture for single GSpin Shimura varieties, above which we build
the proof of the Tate-linear conjecture and the characteristic $p$ analogue of
the Mumford--Tate conjecture for products of GSpin Shimura varieties. We then
use the Tate-linear and the characteristic $p$ analogue of the Mumford--Tate
conjectures to prove the characteristic $p$ analogue of the Andr\'e--Oort
conjecture. Our proof uses Chai's results on monodromy of $p$-divisible groups
and rigidity theorems for formal tori, as well as Crew's parabolicity
conjecture which is recently proven by D'Addezio. | Ruofan Jiang | 2023-08-13T22:18:14Z | http://arxiv.org/abs/2308.06854v3 | Characteristic \(p\) Analogues of the Mumford-Tate and Andre-Oort Conjectures for Ordinary GSpin Shimura Varieties
###### Abstract.
Let \(p\) be an odd prime. We state characteristic \(p\) analogues of the Mumford-Tate conjecture and the Andre-Oort conjecture for ordinary strata of mod \(p\) Shimura varieties. We prove the conjectures in the case of GSpin Shimura varieties and products of modular curves. The two conjectures are both related to a notion of linearity for mod \(p\) Shimura varieties, about which Chai has formulated the Tate-linear conjecture. We will first treat the Tate-linear conjecture, above which we then build the proof of the characteristic \(p\) analogue of the Mumford-Tate conjecture. Finally, we use the Tate-linear conjecture and the characteristic \(p\) analogue of the Mumford-Tate conjecture to prove the characteristic \(p\) analogue of the Andre-Oort conjecture. The proofs of these conjectures use Chai's results on monodromy of \(p\)-divisible groups and rigidity theorems for formal tori, as well as Crew's parabolicity conjecture which is recently proven by D'Addezio.
###### Contents
* 1 Introduction
* 1.1 Main results
* 1.2 Linearity of Shimura varieties and Tate-linear conjecture
* 1.3 Method and strategies
* 1.4 A further conjecture
* 1.5 Organization of the paper
* 1.6 Notations and conventions
* 2 Preliminaries
* 2.1 Definitions and first properties
* 2.2 Ordinary locus of the mod \(p\) fiber
* 2.3 Arithmetic deformation theory at an ordinary point
* 2.4 Canonical coordinates
* 3 Monodromy of local systems
* 3.1 Monodromy of etale lisse sheaves
* 3.2 Monodromy of \(F\)-isocrystals
* 3.3 Monodromy of overconvergent \(F\)-isocrystals
* 3.4 Monodromy of compatible systems
* 4 Constructions and conjectures
* 4.1 Constructions
* 4.2 Conjectures, implications and results
* 5 Lie theory of orthogonal and unitary groups
* 5.1 Setups
* 5.2 Main lemmas
* 6 Conjecture 4.5 and the Tate-linear conjecture
* 6.1 The Tate-linear character and cocharacter lattices
* 6.2 Local and global monodromy of \(\mathbb{L}_{\overline{\mathfrak{l}},p,X_{0}}\)
* 7 Characteristic \(p\) analogue of the Mumford-Tate conjecture
* 8 Characteristic \(p\) analogue of the Andre-Oort conjecture

[MISSING_PAGE_POST]
Now we consider the characteristic \(p\) analogue of the Andre-Oort conjecture. Naively, one can formulate the conjecture as follows: if a subvariety \(X\) of the mod \(p\) reduction of a Shimura variety contains a Zariski dense collection of special subvarieties1, then \(X\) is special. Unfortunately, this is not true: since every \(\mathbb{F}\)-point2 in the mod \(p\) reduction of a Shimura variety is already special, any positive dimensional subvariety contains a Zariski dense set of special points. To make it possibly true, one needs to put extra conditions on the collection of special subvarieties. A natural condition that one can put is that the special subvarieties in the collection are positive dimensional. However, even this is not enough to guarantee that \(X\) is special, see Example 1.5. Nevertheless, one can at least expect that \(X\) is "weakly special":
Footnote 1: Here, a special subvariety is defined as the mod \(p\) reduction of a special subvariety in characteristic \(0\).
**Conjecture 1.2** (Characteristic \(p\) analogue of the Andre-Oort conjecture for ordinary stratum of \(\mathcal{A}_{g,\mathbb{F}_{p}}\)).: _Suppose \(X\subseteq\mathcal{A}_{g,\mathbb{F}}\) is a generically ordinary subvariety which contains a Zariski dense collection of positive dimensional special subvarieties, then \(X\) admits a positive dimensional special factor._
Conjecture 1.2 is not sharp. In some situations one can expect more. For example, one of our main results shows that if \(X\) lies on a single GSpin Shimura variety, then a subvariety containing a Zariski dense collection of positive dimensional special subvarieties is already special. However, if \(X\) lies on a triple product of modular curves, or more generally, a triple product of Shimura varieties, one cannot expect more, see again Example 1.5.
### Main results
In this paper, we formulate and prove the characteristic \(p\) analogues of the Mumford-Tate conjecture and the Andre-Oort conjecture for ordinary strata of GSpin Shimura varieties and products of modular curves. In the following, a (locally closed) irreducible subvariety of \(\mathcal{A}_{g,\mathbb{F}}\) is said to be _special_, if its Zariski closure is an irreducible component of the reduction of the Zariski closure in \(\mathcal{A}_{g}\) of a _special subvariety of Hodge type_ in the sense of [14, §1.1]. Loosely speaking, a subvariety is special if it comes from a Shimura subvariety.
#### 1.1.1. The case of GSpin Shimura varieties
Let \((L,Q)\) be an even quadratic \(\mathbb{Z}\)-lattice which is self-dual at \(p\) and has signature \((2,b)\). Associated to it is a Shimura variety \(\mathcal{S}\) that admits a Hodge embedding into \(A_{g}\). This Shimura variety is called the GSpin Shimura variety. Pulling back the universal Abelian scheme over \(A_{g}\) gives rise to an Abelian scheme over \(\mathcal{S}\), called the Kuga-Satake Abelian scheme. In [15], Kisin proved that after choosing a suitable level structure, there exists a canonical integral model \(\mathscr{S}\) of \(\mathcal{S}\), together with an embedding of smooth integral models \(\mathscr{S}\hookrightarrow\mathcal{A}_{g}\) that extends the Hodge embedding \(\mathcal{S}\hookrightarrow A_{g}\). The Kuga-Satake Abelian scheme over \(\mathcal{S}\) also extends to \(\mathscr{S}\), which we denote as \(\mathscr{A}^{\text{KS}}\). Our main results are
**Theorem 1.3** (Characteristic \(p\) analogue of the Mumford-Tate conjecture for ordinary strata of GSpin Shimura varieties, see also Theorem 4.12).: _Conjecture 1.1 is true if \(f_{0}\) factors through \(\mathscr{S}_{\mathbb{F}_{p}}\)._
**Theorem 1.4** (Characteristic \(p\) analogue of the Andre-Oort conjecture for ordinary strata of GSpin Shimura varieties, see also Theorem 4.13).: _Suppose \(X\) is a generically ordinary closed subvariety of \(\mathscr{S}_{\mathbb{F}}\) that contains a Zariski dense collection of positive dimensional special subvarieties, then \(X\) is special. In particular, Conjecture 1.2 is true._
#### 1.1.2. The case of products of modular curves
Suppose that \(\mathbf{I}\) is a finite index set and for each \(i\in\mathbf{I}\), \(\mathscr{S}_{i}\) is the integral model of a modular curve. We denote by \(\mathscr{S}_{\mathbf{I}}\) the product of the \(\mathscr{S}_{i}\). The characteristic \(p\) analogue of the Mumford-Tate conjecture in this case follows from \(l\)-adic and crystalline isogeny theorems for Abelian varieties over finitely generated fields. However, the characteristic \(p\) analogue of the Andre-Oort conjecture for products of modular curves is much
more subtle. As noted before, a subvariety containing a Zariski dense collection of positive dimensional special subvarieties may fail to be special. The author learned the following example from a conversation with Chai:
_Example 1.5_.: Consider the case where \(\mathbf{I}=\{1,2,3\}\). Let \(C\) be a generically ordinary non-special curve in \(\mathscr{S}_{1,\mathbb{F}}\times\mathscr{S}_{2,\mathbb{F}}\) and let \(X=C\times\mathscr{S}_{3,\mathbb{F}}\). Since every point on \(C\) is special, \(X\) contains a Zariski dense collection of special curves \(\{x\times\mathscr{S}_{3,\mathbb{F}}|x\in C(\mathbb{F})\}\). However, \(X\) is not special. More generally, a subvariety which is the product of a positive dimensional special subvariety with a nonspecial subvariety is nonspecial, while containing a Zariski dense collection of positive dimensional special subvarieties.
However, we will show that Example 1.5 is the only obstruction towards having a special subvariety. More precisely, we show the following
**Theorem 1.6** (Characteristic \(p\) analogue of the Andre-Oort conjecture for ordinary strata of products of modular curves, see also Theorem 4.13).: _Suppose \(X\) is a generically ordinary closed subvariety of \(\mathscr{S}_{\mathbf{I},\mathbb{F}}\) that contains a Zariski dense collection of positive dimensional special subvarieties. Let \(\mathbf{I}_{S}\subseteq\mathbf{I}\) be the set of indices \(i\) such that \(X\) contains a Zariski dense collection of special subvarieties whose projections to \(\mathscr{S}_{i,\mathbb{F}}\) are positive dimensional. Then \(X\) is the product of a special subvariety of \(\mathscr{S}_{\mathbf{I}_{S},\mathbb{F}}\) and a subvariety of \(\mathscr{S}_{\mathbf{I}-\mathbf{I}_{S},\mathbb{F}}\). In particular, Conjecture 1.2 is true._
For a single Shimura variety, group theory guarantees that the phenomena in Example 1.5 don't happen. This is the reason why in Theorem 1.4, the existence of a Zariski dense collection of positive dimensional special subvarieties is enough to guarantee the specialness of \(X\).
### Linearity of Shimura varieties and Tate-linear conjecture
Linearity is a fundamental concept which characterizes special subvarieties of a Shimura variety. It will play a crucial role in our treatment of the conjectures. Several different notions of linearity exist. We will give a brief review of them. For simplicity, in the following we consider linearity for subvarieties of \(A_{g}\). This is already enough for dealing with Shimura subvarieties of Hodge type.
#### 1.2.1. Linearity in char 0
Consider the uniformization map \(\pi:\mathbb{H}_{g}\to A_{g}\) with deck group an arithmetic subgroup of \(\operatorname{GSp}_{2g}(\mathbb{Z})\). One can make sense of algebraic subvarieties of \(\mathbb{H}_{g}\), cf. [15, §3]. A subvariety \(V\subseteq A_{g}\) is called bi-algebraic if \(\pi^{-1}(V)\) is algebraic. Since the morphism \(\pi\) is highly transcendental, bi-algebraic subvarieties are rare, and have very special properties. In fact, being bi-algebraic puts a strong linear condition on \(V\) (or \(\pi^{-1}(V)\)). This linearity can be understood better from more classical settings. Indeed, consider an Abelian variety \(A\) over \(\mathbb{C}\) with a uniformization map \(e:\mathbb{C}^{n}\to A\). It is a classical consequence of the Ax-Schanuel theorem that an irreducible subvariety \(V\subseteq A\) is bi-algebraic if and only if \(V\) is a translation of an Abelian subvariety, and in this case, \(e^{-1}(V)\) is a linear subspace of \(\mathbb{C}^{n}\). In the case of Shimura varieties, similar phenomena happen. As a consequence of [15], a subvariety \(V\subseteq A_{g}\) is bi-algebraic if and only if \(V\) is a weakly special subvariety. In this case, \(\pi^{-1}(V)\) is a "linear subspace" of \(\mathbb{H}_{g}\) in the sense that it is totally geodesic. Indeed, a subvariety of a Euclidean space is linear if and only if it is totally geodesic. Therefore the property of being totally geodesic is a natural generalization of linearity in the classical sense.
The interpretation of linearity in terms of totally geodesic property can be found in much earlier works of Moonen. In [16], Moonen showed that a subvariety of a Shimura variety is weakly special if and only if it is totally geodesic. In this and a subsequent paper [16], Moonen also investigated the notion of "formal linearity". To state it, we recall that Serre-Tate theory implies that the completion of \(\mathcal{A}_{g}\) at an ordinary mod \(p\) point admits the structure of a formal torus, cf. [17]. It is usually called _Serre-Tate formal torus_ or _Serre-Tate formal coordinates_ or _canonical coordinates_. A subvariety of \(A_{g}\) is called _formally linear_ if the completion of its Zariski closure in \(\mathcal{A}_{g}\) at an ordinary mod \(p\) point is a union of torsion translates of subtori of the Serre-Tate formal torus.
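For the reader's convenience we recall the shape of these canonical coordinates (a standard fact of Serre-Tate theory; the display below is only meant as a reminder of the structure). If \(x\) is an ordinary mod \(p\) point of \(\mathcal{A}_{g}\) with fiber the abelian variety \(A_{0}\) over \(\mathbb{F}\), the formal deformation space of \(A_{0}\) carries a canonical isomorphism

\[\operatorname{Def}(A_{0})\simeq\operatorname{Hom}_{\mathbb{Z}_{p}}\big(T_{p}A_{0}(\mathbb{F})\otimes_{\mathbb{Z}_{p}}T_{p}A_{0}^{\vee}(\mathbb{F}),\widehat{\mathbb{G}}_{m}\big)\simeq\widehat{\mathbb{G}}_{m}^{g^{2}},\]

and the principal polarization cuts out the symmetric part, so that the completion of \(\mathcal{A}_{g}\) at \(x\) becomes a formal torus of dimension \(g(g+1)/2\) whose identity element corresponds to the canonical (Serre-Tate) lift of \(A_{0}\).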
By an earlier result of Noot ([11]), special subvarieties of \(A_{g}\) are formally linear. In [11], Moonen showed the converse, i.e., formal linearity characterizes special subvarieties.
#### 1.2.2. Linearity in char \(p\)
Linearity in terms of bi-algebraicity or totally geodesic property doesn't generalize easily to mod \(p\) Shimura varieties. However, formal linearity does generalize directly to the ordinary Newton stratum: a subvariety \(V\subseteq\mathcal{A}_{g,\mathbb{F}}^{\mathrm{ord}}\) is called formally linear, if its completion at an ordinary point is a subtorus of the (mod \(p\)) Serre-Tate formal torus. We will use Chai's terminology, and call it _Tate-linear_ at that point, cf. [1, Definition 5.1]. Tate-linearity appears naturally in Chai's work on the Hecke orbit conjecture, since the Zariski closure of an ordinary Hecke orbit is Tate-linear, see [1]. Note that Noot's result ([11]) implies that an irreducible component of the mod \(p\) reduction of a special subvariety of \(A_{g}\) is Tate-linear at a smooth ordinary point, if the reduction is generically ordinary. It is natural to ask the converse: does Tate-linearity characterize mod \(p\) reductions of special subvarieties? This is very much the content of Chai's Tate-linear conjecture:
**Conjecture 1.7** (Tate-linear conjecture, see [1, Conjecture 7.2, Remark 7.2.1, Proposition 5.3, Remark 5.3.1]).: _If an irreducible subvariety of \(\mathcal{A}_{g,\mathbb{F}}^{\mathrm{ord}}\) is Tate-linear at a point, then it is special._
The conjecture is still open. In §1.2.3, we will discuss our new progress on this conjecture.
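A concrete instance to keep in mind (included only as an illustration): let \(\mathscr{S}\) be the integral model of a modular curve and let \(x\) be an ordinary \(\mathbb{F}\)-point. In Serre-Tate coordinates \((\mathscr{S}_{\mathbb{F}}\times\mathscr{S}_{\mathbb{F}})^{/(x,x)}\simeq\widehat{\mathbb{G}}_{m}\times\widehat{\mathbb{G}}_{m}\), and the reduction of the diagonal, which is a special subvariety of the product, completes at \((x,x)\) to the diagonal formal subtorus \(\{(q,q)\}\), in accordance with Noot's result; the conjecture predicts, conversely, that any irreducible subvariety whose completion at an ordinary point is a formal subtorus is special.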
So far we have only been talking about linearity for the ordinary stratum. Linearity for higher Newton strata is much more subtle. This is because there is no notion of Serre-Tate coordinates for higher Newton strata. Nevertheless, there are generalizations of the Serre-Tate formal torus to higher Newton strata, cf. [11]. We will refer the readers to Chai's more recent paper [1] for linearity in a more general sense. We won't be using linearity for higher Newton strata in this paper.
#### 1.2.3. Tate-linear conjecture
As a main ingredient of the proofs of characteristic \(p\) analogues of the Mumford-Tate conjecture and the Andre-Oort conjecture, we will establish the Tate-linear conjecture for GSpin Shimura varieties and products of modular curves:
**Theorem 1.8** (Tate-linear conjecture for GSpin Shimura varieties).: _Let \(\mathscr{S}\) be as in §1.1.1 and let \(X\subseteq\mathscr{S}_{\mathbb{F}}^{\mathrm{ord}}\) be an irreducible subvariety which is Tate-linear at a point. Then \(X\) is special._
**Theorem 1.9** (Tate-linear conjecture for products of modular curves).: _Let \(\mathscr{S}_{\mathbf{I}}\) be as in §1.1.2 and let \(X\subseteq\mathscr{S}_{\mathbf{I},\mathbb{F}}^{\mathrm{ord}}\) be an irreducible subvariety which is Tate-linear at a point. Then \(X\) is special._
We will show two stronger results and deduce Theorems 1.8 and 1.9 as special cases in which \(f\) is a locally closed immersion and \(\mathscr{T}_{f,x}\) equals \(X^{/x}\):
**Theorem 1.10**.: _Let \(\mathscr{S}\) be as in §1.1.1 and \(X\) be a smooth connected variety over \(\mathbb{F}\) that admits a morphism \(f\) into \(\mathscr{S}_{\mathbb{F}}^{\mathrm{ord}}\). Let \(x\) be an \(\mathbb{F}\)-point of \(X\) and \(\mathscr{T}_{f,x}\) be the smallest formal subtorus of the Serre-Tate torus \(\mathscr{S}_{\mathbb{F}}^{/x}\) through which \(f^{/x}\) factors. Then there is a special subvariety whose formal germ at \(f(x)\) coincides with \(\mathscr{T}_{f,x}\)._
**Theorem 1.11**.: _Let \(\mathscr{S}_{\mathbf{I}}\) be as in §1.1.2 and \(X\) be a smooth connected variety over \(\mathbb{F}\) that admits a morphism \(f\) into \(\mathscr{S}_{\mathbf{I},\mathbb{F}}^{\mathrm{ord}}\). Let \(x\) be an \(\mathbb{F}\)-point of \(X\) and \(\mathscr{T}_{f,x}\) be the smallest formal subtorus of the Serre-Tate torus \(\mathscr{S}_{\mathbf{I},\mathbb{F}}^{/x}\) through which the morphism \(f^{/x}\) factors. Then there is a special subvariety whose formal germ at \(f(x)\) coincides with \(\mathscr{T}_{f,x}\)._
### Method and strategies
Without loss of generality, we will only discuss the proof strategies of the conjectures for GSpin Shimura varieties. Suppose \(X_{0}\) is a smooth geometrically connected variety over a finite field with a morphism \(f_{0}\) into \(\mathscr{S}_{\mathbb{F}_{p}}\), whose image lies in the ordinary locus. Let \(X,f\) be the base change to \(\mathbb{F}\) of \(X_{0},f_{0}\), and let \(\eta\) be the generic point of \(X\). We first construct a reductive group \(\mathrm{MT}(f)\) over \(\mathbb{Q}\) together with a representation \(\rho_{f}:\mathrm{MT}(f)\to\mathrm{GSpin}(L_{\mathbb{Q}})\). This group gives rise
to the correct special subvariety for both Theorem 1.3 and Theorem 1.10, and is also the correct characteristic \(p\) analogue of the Mumford-Tate group. The construction of \((\mathrm{MT}(f),\rho_{f})\) can be summarized as follows:
**Construction 1**.: Pick an \(\mathbb{F}\)-point \(x\) of \(X\) and let \(\tilde{x}\) be its canonical lift. The endomorphism algebra of the Kuga-Satake Abelian variety \(\mathscr{A}_{x}^{\mathrm{KS}}\) is equal to the endomorphism algebra of the Kuga-Satake Abelian scheme \(\mathscr{A}_{\tilde{x}}^{\mathrm{KS}}\). On the other hand, the endomorphism algebra of the Kuga-Satake Abelian variety \(\mathscr{A}_{\overline{\eta}}^{\mathrm{KS}}\) is a subalgebra of the endomorphism algebra of \(\mathscr{A}_{x}^{\mathrm{KS}}\). So \(\mathrm{End}(\mathscr{A}_{\overline{\eta}}^{\mathrm{KS}})\) admits a lift to a subalgebra of \(\mathrm{End}(\mathscr{A}_{\tilde{x}}^{\mathrm{KS}})\). As a result, \(\mathrm{End}(\mathscr{A}_{\overline{\eta}}^{\mathrm{KS}})\) acts faithfully on \(H_{\mathbb{Q}}\), the rational Hodge structure of \(\mathscr{A}_{\mathbb{C}}^{\mathrm{KS}}\). Note that \(\mathrm{GSpin}(L_{\mathbb{Q}})\) also acts on \(H_{\mathbb{Q}}\). We define \(\mathrm{MT}(f)\) as the largest connected subgroup of \(\mathrm{GSpin}(L_{\mathbb{Q}})\) that commutes with \(\mathrm{End}(\mathscr{A}_{\overline{\eta}}^{\mathrm{KS}})\), and let \(\rho_{f}\) be its embedding into \(\mathrm{GSpin}(L_{\mathbb{Q}})\).
The above construction only uses endomorphisms of the Kuga-Satake Abelian variety, but not higher motivic cycles3. However, we still expect \(\mathrm{MT}(f)\) to be the correct analogue of the Mumford-Tate group. The reason behind this is that the classical Mumford-Tate group for a number field valued point of a GSpin Shimura variety is already determined by endomorphisms of the Kuga-Satake Abelian variety (so one doesn't need to consider higher Hodge tensors). This is a consequence of group theory and the classification of the Mumford-Tate group for K3 Hodge structures (cf. [10]), and is a special property of GSpin Shimura varieties.
Footnote 3: There is no theory of CM lift for higher motivic cycles. However, see [11, §2] for some conjectures.
#### 1.3.1. Proof strategies for the Tate-linear conjecture and the characteristic \(p\) analogue of the Mumford-Tate conjecture
If we pick the point \(x\) in Construction 1 to be the point \(x\) in Theorem 1.10, then \(\mathrm{MT}(f)\) gives rise to a special subvariety \(\mathcal{X}_{f}\subseteq\mathcal{S}\), whose mod \(p\) reduction is exactly the special subvariety that we want in Theorem 1.3 and Theorem 1.10. Using deformation theory, one can show that \(f\) factors through the mod \(p\) reduction of \(\mathcal{X}_{f}\).
The essential step is then to show that \(\mathcal{X}_{f}\) is cut out by enough motivic cycles, so that the formal germ of the mod \(p\) reduction of \(\mathcal{X}_{f}\) at \(f(x)\) admits \(\mathscr{T}_{f,x}\) as an irreducible component. This also implies that \(\mathcal{X}_{f}\) is the smallest special subvariety whose mod \(p\) reduction contains the image of \(f\).
To show that \(\mathcal{X}_{f}\) is cut out by enough motivic cycles, we need a good understanding of the global monodromy group of a certain \(p\)-divisible group (namely, the formal Brauer group, see SS2.2) over \(X\). The structure of the local monodromy group of this \(p\)-divisible group is determined by \(\mathscr{T}_{f,x}\). Using Chai's results on local and global monodromy of \(p\)-divisible groups (cf. [1, SS3-4]), the parabolicity conjecture proven by D'Addezio in [15], and explicit group theory, we are able to largely understand the structure of this global monodromy group. We then use this to construct enough endomorphisms of the Kuga-Satake Abelian variety over \(\overline{\eta}\). Using deformation theory, we are able to establish Theorem 1.10.
To show Theorem 1.3, it suffices to show that \(\mathrm{MT}(f)\) is the correct rational model over \(\mathbb{Q}\) of the \(l\)-adic etale and crystalline monodromy groups. The structure of the global monodromy group that we mentioned in the last paragraph is closely related to the structure of \(\mathrm{MT}(f)\). Using the known structure of this global monodromy group, and independence of monodromy groups in a compatible system of coefficient objects (see for example [15, Theorem 1.2.1]), we are able to establish Theorem 1.3.
#### 1.3.2. Proof strategies for the characteristic \(p\) analogue of the Andre-Oort conjecture
The main ingredients of the proof of the characteristic \(p\) analogue of the Andre-Oort conjecture are the Tate-linear conjecture, the characteristic \(p\) analogue of the Mumford-Tate conjecture, Chai's rigidity theorem on formal tori (cf. [10]) and global Serre-Tate coordinates (cf. [10, SS2]). We begin with a baby example4:
Footnote 4: The author learned this from Ananth Shankar and Yunqing Tang
_Example 1.12_.: Let \(\mathscr{S}\) be the Siegel modular variety \(\mathcal{A}_{2}\), which is also a GSpin Shimura variety. This example shows that if a divisor \(X\subseteq\mathscr{S}_{\mathbb{F}}\) contains Zariski dense positive dimensional special subvarieties that pass through the same ordinary \(\mathbb{F}\)-point \(x\) of \(X\), then \(X\) is special. In fact, each special subvariety gives rise to a (union of) formal subtorus of \(\mathscr{S}_{\mathbb{F}}^{/x}\) which is also contained in \(X^{/x}\). One can show that these formal subtori are Zariski dense in \(X^{/x}\). Since each subtorus is invariant under scaling by \(\mathbb{Z}_{p}^{*}\), \(X^{/x}\) is also invariant under scaling by \(\mathbb{Z}_{p}^{*}\). By Chai's rigidity result, \(X^{/x}\) is a formal subtorus of \(\mathscr{S}_{\mathbb{F}}^{/x}\). Theorem 1.8 then implies that \(X\) is special.
In general, the Zariski dense collection of positive dimensional special subvarieties of \(X\) may not pass through the same point. Even if they do, the collection of formal subtori that arises in this way may not be Zariski dense in \(X^{/x}\). Hence the strategy in Example 1.12 does not work in general. Instead, we use the Zariski dense collection of special subvarieties to construct certain arithmetic \(p\)-adic etale lisse sheaves on (an etale open subset of) \(X\). The construction can be summarized as follows:
**Construction 2**.: Consider \((X\times\mathscr{S}_{\mathbb{F}})^{/\Delta}\), where \(\Delta\) is the graph of the immersion \(X\subseteq\mathscr{S}_{\mathbb{F}}\). For each special subvariety \(Z\subseteq X\), we consider \((Z\times Z)^{/\Delta}\), where \(\Delta\) stands for the diagonal. We then take the Zariski closure of the union of all \((Z\times Z)^{/\Delta}\) inside \((X\times\mathscr{S}_{\mathbb{F}})^{/\Delta}\), and call it \(\mathfrak{Z}\). Now Chai's theory of global Serre-Tate coordinates implies that \((X\times\mathscr{S}_{\mathbb{F}})^{/\Delta}\) is a lisse family of formal tori over \(X\), and \((Z\times Z)^{/\Delta}\) is a lisse family of formal tori over \(Z\). The scaling-by-\(\mathbb{Z}_{p}^{*}\) map on \((X\times\mathscr{S}_{\mathbb{F}})^{/\Delta}\) preserves each \((Z\times Z)^{/\Delta}\), hence \(\mathfrak{Z}\). Let \(\eta\) be the generic point of \(X\). We use Chai's rigidity result on formal tori to show that \(\mathfrak{Z}\times_{X}\overline{\eta}\) is a union of formal tori. Each irreducible component then gives rise to a \(p\)-adic lisse sheaf over an etale open subset of \(X\). These are essentially the \(p\)-adic lisse sheaves that we want.
Since \(\mathfrak{Z}\subseteq(X\times X)^{/\Delta}\), a special feature of the \(p\)-adic lisse sheaf \(\mathscr{F}\) constructed as above is that \(\mathscr{F}_{x}\otimes\mathbb{G}_{m}^{\wedge}\subseteq X^{/x}\) for any \(x\in X(\mathbb{F})\). We use Theorem 1.3 and representation theory to show that the above inclusion is an equality. This will imply that \(X^{/x}\) is a formal torus, and Theorem 1.8 will then imply Theorem 1.4.
For products of modular curves, the construction of the \(p\)-adic lisse sheaves is essentially the same as above. However, the representation theory is different, so one cannot deduce that \(\mathscr{F}_{x}\otimes\mathbb{G}_{m}^{\wedge}=X^{/x}\) as in the GSpin case. Of course, this is to be expected, since the existence of a Zariski dense collection of positive dimensional special subvarieties does not guarantee that \(X\) is special, as in Example 1.5. The following is a concrete example of the construction of the \(p\)-adic lisse sheaf for a triple product of modular curves:
_Example 1.13_.: Let \(X\) be as in Example 1.5. We replace \(X\) by its ordinary stratum. Consider the projection \(\pi_{3}:X\to\mathscr{S}_{3,\mathbb{F}}^{\text{ord}}\). Then \(\mathfrak{Z}\) in Construction 2 is nothing other than the pullback via \(\pi_{3}\) of \((\mathscr{S}_{3,\mathbb{F}}^{\text{ord}}\times\mathscr{S}_{3,\mathbb{F}}^{ \text{ord}})^{/\Delta}\). It is a family of rank 1 formal tori over \(X\). The arithmetic \(p\)-adic lisse sheaf \(\mathscr{F}\) that arises in this way is the pullback via \(\pi_{3}\) of the obvious \(p\)-adic lisse sheaf over \(\mathscr{S}_{3,\mathbb{F}}^{\text{ord}}\) arising from the \(p\)-adic etale cohomology of the universal family. Note that for any point \(x\in X(\mathbb{F})\), \(\mathscr{F}_{x}\otimes\mathbb{G}_{m}^{\wedge}\subsetneq X^{/x}\) is a strict inclusion.
### A further conjecture
We are also able to make a further conjecture for products of GSpin Shimura varieties based on the known results. Suppose that \(\mathbf{I}\) is a finite index set and for each \(i\in\mathbf{I}\), \(\mathscr{S}_{i}\) is the canonical integral model of a GSpin Shimura variety as in SS1.1.1.
**Conjecture 1.14** (Characteristic \(p\) analogue of the Andre-Oort conjecture for ordinary strata of products of GSpin Shimura varieties, see also Conjecture 4.8).: _Suppose \(X\) is a generically ordinary closed subvariety of \(\mathscr{S}_{\mathbf{I},\mathbb{F}}\) that contains Zariski dense positive dimensional special subvarieties. Let \(\mathbf{I}_{S}\subseteq\mathbf{I}\) be the set of indices \(i\) such that \(X\) contains a Zariski dense collection of special subvarieties whose projections to \(\mathscr{S}_{i,\mathbb{F}}\) are positive dimensional. For \(i\in\mathbf{I}_{S}\), Theorem 1.4 guarantees that the
projection of \(X\) to \(\mathscr{S}_{i,\mathbb{F}}\) is a special subvariety. Decompose these special subvarieties into simple factors, and write \(\{\mathscr{Y}_{j,\mathbb{F}}\}_{j\in\mathbf{J}}\) for the collection of simple factors. Let \(\mathbf{J}_{S}\subseteq\mathbf{J}\) be the set of indices \(j\) such that \(X\) contains a Zariski dense collection of special subvarieties whose projections to \(\mathscr{Y}_{j,\mathbb{F}}\) are positive dimensional. Then \(X\) is the product of a special subvariety of \(\mathscr{Y}_{\mathbf{J}_{S},\mathbb{F}}\) and a subvariety of \(\mathscr{S}_{\mathbf{I}-\mathbf{I}_{S},\mathbb{F}}\times\mathscr{Y}_{\mathbf{J} -\mathbf{J}_{S},\mathbb{F}}\). In particular, Conjecture 1.2 is true._
Since a modular curve can be regarded as a GSpin Shimura variety with structure group \(\operatorname{GSpin}(2,1)\), the results in SS1.1 establish Conjecture 1.14 in the simplest cases, namely where either \(|\mathbf{I}|=1\) or \(\mathscr{S}_{i}\) has structure group \(\operatorname{GSpin}(2,1)\) for all \(i\). The conjecture can be proved using the methods presented in this paper, albeit with greater group-theoretical complexity; we choose to take this up in subsequent work.
### Organization of the paper
In SS2 we study the arithmetic completion of \(\operatorname{GSpin}\) Shimura varieties. In SS3 we review previous works on the monodromy of etale lisse sheaves and \(F\)-isocrystals. In SS4 we give constructions of certain reductive groups and special subvarieties of products of \(\operatorname{GSpin}\) Shimura varieties, and use them to state precise versions of the conjectures for products of \(\operatorname{GSpin}\) Shimura varieties. In SS5 we establish several group theoretical lemmas, which will be used to study the structure of the monodromy groups of certain local systems in later sections. In SS6 we prove the Tate-linear conjecture for \(\operatorname{GSpin}\) Shimura varieties and products of modular curves. In SS7 we prove the characteristic \(p\) analogue of the Mumford-Tate conjecture for \(\operatorname{GSpin}\) Shimura varieties and products of modular curves. In SS8 we prove the characteristic \(p\) analogue of the Andre-Oort conjecture for \(\operatorname{GSpin}\) Shimura varieties and products of modular curves.
### Notations and conventions
We use \(p\) to denote an odd prime and \(q\) to denote a positive power of \(p\). We fix an algebraic closure of \(\mathbb{F}_{p}\), and denote it by \(\mathbb{F}\). We denote by \(W\) the ring of Witt vectors of \(\mathbb{F}\). We fix an embedding \(W\subseteq\overline{\mathbb{Q}}_{p}\). The boldface letters \(\mathbf{I},\mathbf{J}\) are reserved for denoting finite index sets. We also make the following conventions:
* (Algebraic closures) We fix once and for all an identification of \(\overline{\mathbb{Q}}_{p}\) with \(\mathbb{C}\). As a result, we have fixed an embedding of \(W\) into \(\mathbb{C}\).
* (Tori and formal tori) We use \(\mathbb{G}_{m}\)_resp._\(\mathbb{G}_{m}^{\wedge}\) to denote the rank \(1\) torus _resp._ formal torus over \(W\), \(\mathbb{Z}_{p}\) or \(\mathbb{F}\), depending on the context. Sometimes we will also write \(\mathbb{G}_{m,W}\), \(\mathbb{G}_{m,\mathbb{Z}_{p}}\) and \(\mathbb{G}_{m,\mathbb{F}}\) to emphasize the base scheme.
* (Formal completions) Suppose \(\mathscr{Y}\) is a \(W\)-scheme and \(y\in\mathscr{Y}(\mathbb{F})\). We write \(\mathscr{Y}^{/y}\) for the completion of \(\mathscr{Y}\) along \(y\). If \(Y\) is an \(\mathbb{F}\)-scheme and \(Z\) is a closed subscheme, we write \(Y^{/Z}\) for the completion of \(Y\) along \(Z\). If \(f:X\to Y\) is a morphism of varieties over \(\mathbb{F}\) and \(x\in X(\mathbb{F})\), we write \(f^{/x}:X^{/x}\to Y^{/x}\) for the completion of \(f\) at \(x\). Note that \(Y^{/x}\) actually stands for \(Y^{/f(x)}\).
### Acknowledgements
The author thanks Ananth Shankar for pointing out the possible applications of the Tate-linear conjecture to characteristic \(p\) analogues of the Mumford-Tate and Andre-Oort conjectures, together with all of his help and enlightening conversations while the author was writing up the paper. The author thanks Ching-Li Chai for his previous works and theoretical buildups, without which the results of this paper would not be possible. The author also thanks Dima Arinkin, Ching-Li Chai, Asvin G, Qiao He, Jiaqi Hou, Brian Lawrence, Yu Luo, Keerthi Madapusi Pera, Devesh Maulik, Yunqing Tang, Yifan Wei, Ziquan Yang for valuable discussions. Finally, the author thanks Marco D'Addezio for helpful comments as well as kindly pointing out some mistakes in an earlier draft of the paper. The author is partially supported by the NSF grant DMS-2100436.
## 2. Preliminaries
This section collects background results. In SS2.1 and SS2.2 we review the notions of \(\operatorname{GSpin}\) Shimura varieties and of formal Brauer groups on their special fibers. In SS2.3 and SS2.4, we establish a canonical isomorphism between the arithmetic completion of a \(\operatorname{GSpin}\) Shimura variety at an ordinary point and the arithmetic deformation of the extended formal Brauer group. We show that the isomorphism preserves the formal group structures.
### Definitions and first properties
We review the definitions, notations, and basic properties of \(\operatorname{GSpin}\) Shimura varieties, following [10],[11] and [1]. Let \(p\geq 3\) be a prime. For an integer \(b\geq 1\), let \((L,Q)\) be a quadratic \(\mathbb{Z}\)-lattice of rank \(b+2\) and signature \((2,b)\) with a bilinear form \((,):L\otimes L\to\mathbb{Z}\) such that for \(x\in L\), \(Q(x)=(x,x)/2\in\mathbb{Z}\), and that \((L,Q)\) is self dual at \(p\). Let \(\operatorname{GSpin}(L\otimes\mathbb{Z}_{(p)},Q)\) be the group of spinor similitude of \(L\otimes\mathbb{Z}_{(p)}\), which is a reductive group over \(\mathbb{Z}_{(p)}\), and a subgroup of \(\operatorname{Cl}(L\otimes\mathbb{Z}_{(p)})^{\times}\), where \(\operatorname{Cl}(\cdot)\) is the Clifford algebra. The group \(\operatorname{GSpin}(L_{\mathbb{R}})\) acts on the symmetric space
\[\mathcal{D}_{L}=\{z\in\mathbb{P}(L_{\mathbb{C}})|(z,z)=0,(z,\overline{z})<0\}\]
via \(c:\operatorname{GSpin}(L_{\mathbb{Q}})\to\operatorname{SO}(L_{\mathbb{Q}})\). This gives rise to a Shimura datum \((\operatorname{GSpin}(L_{\mathbb{Q}}),\mathcal{D})\) with reflex field \(\mathbb{Q}\). Consider a hyperspecial level structure \(\mathbb{K}\subseteq\operatorname{GSpin}(L_{\mathbb{A}_{f}})\cap \operatorname{Cl}(L_{\widehat{\mathbb{Z}}})^{\times}\), i.e. a compact open subgroup such that \(\mathbb{K}_{p}=\operatorname{GSpin}(L_{\mathbb{Z}_{p}})\). Then we have a Deligne-Mumford stack \(\mathcal{S}:=\mathcal{S}h(\operatorname{GSpin}(L_{\mathbb{Q}}),\mathcal{D}_{ L})_{\mathbb{K}}\) over \(\mathbb{Q}\), called the \(\operatorname{GSpin}\) Shimura variety, with \(\mathcal{S}(\mathbb{C})=\operatorname{GSpin}(L_{\mathbb{Q}})\backslash \mathcal{D}_{L}\times\operatorname{GSpin}(L_{\mathbb{A}_{f}})/\mathbb{K}\), which admits a canonical smooth integral model \(\mathscr{S}_{\mathbb{K}}\) over \(\mathbb{Z}_{(p)}\) ([16, Theorem 2.3.8]). When the level structure is fixed and clear from the context, we will simply drop the subscript and write the canonical integral model as \(\mathscr{S}\).
Let \(H=\operatorname{Cl}(L)\) with the action of itself on the right. Equip \(\operatorname{Cl}(L)\) with the action of \(\operatorname{GSpin}(L)\) on the left. There exists a choice of symplectic form on \(H\) that gives rise to a map \(\operatorname{GSpin}(L_{\mathbb{Q}})\to\operatorname{GSp}(H_{\mathbb{Q}})\), which induces an embedding of Shimura data, hence an embedding of Shimura varieties and their integral models.
Pulling back the universal Abelian scheme over the Siegel modular variety yields a Kuga-Satake Abelian scheme \(\mathscr{A}^{\operatorname{KS}}\to\mathscr{S}\) with left \(\operatorname{Cl}(L)\)-action, whose first \(\mathbb{Z}\)-coefficient Betti cohomology is the local system induced by \(H\). Let \(\mathbf{H}_{\operatorname{B}},\mathbf{H}_{\operatorname{dR}},\mathbf{H}_{l,\mathrm{\acute{e}t}},\mathbf{H}_{p}\) denote the Betti, de Rham, \(l\)-adic etale and crystalline realizations of the first cohomology of \(\mathscr{A}^{\operatorname{KS}}\), with \(\mathbb{H}_{\bullet}\) the associated local systems, and let \(\mathbb{L}_{\bullet}\) (in particular the crystal \(\mathbf{L}_{p}\)) denote the corresponding realizations of \(L\). For an orthogonal basis \(\{e_{i}\}\) of \(L_{\mathbb{Q}}\), one identifies \(e_{i_{1}}\wedge e_{i_{2}}\wedge\cdots\wedge e_{i_{k}}\in\wedge^{k}L_{\mathbb{Q}}\)
with \(e_{i_{1}}e_{i_{2}}...e_{i_{k}}\) (it is independent of the choice of the orthogonal basis). The natural action of \(\wedge L_{\mathbb{Q}}\) on \(H_{\mathbb{Q}}\) results in a \(\operatorname{GSpin}(L_{\mathbb{Q}})\) invariant embedding \(\wedge L_{\mathbb{Q}}\hookrightarrow\operatorname{End}_{\operatorname{Cl}(L_{ \mathbb{Q}})}(H_{\mathbb{Q}})\). As a consequence, there is a natural embedding of local systems \(\wedge\mathbb{L}_{\bullet}\hookrightarrow\mathcal{E}nd_{\operatorname{Cl}(L)}( \mathbb{H}_{\bullet})\).
### Ordinary locus of the mod \(p\) fiber
Notation being the same as SS2.1. Let \(\mathscr{S}\) be the canonical integral model of a \(\operatorname{GSpin}\) Shimura variety. An \(\mathbb{F}\)-point \(x\) in the special fiber \(\mathscr{S}_{\mathbb{F}_{p}}\) is called _ordinary_, if the Kuga-Satake Abelian variety \(\mathscr{A}_{x}^{\operatorname{KS}}\) is an ordinary Abelian variety. Equivalently, the \(F\)-crystal5 \(\mathbf{L}_{p,x}\) has slopes \(-1\), \(0\), and \(1\). The ordinary locus \(\mathscr{S}_{\mathbb{F}}^{\operatorname{ord}}\) is a Zariski open dense subset of \(\mathscr{S}_{\mathbb{F}}\).
Footnote 5: Strictly speaking, \(\mathbf{L}_{p,x}\) is not an \(F\)-crystal, whereas \(\mathbf{L}_{p,x}(1)\) is. Nevertheless, we will still call \(\mathbf{L}_{p,x}\) an \(F\)-crystal.
#### 2.2.1. Formal Brauer groups
By [10, Theorem 2.4.2], the \(F\)-isocrystal \(\mathbf{L}_{p}^{\operatorname{ord}}:=\mathbf{L}_{p}|_{\mathscr{S}_{\mathbb{F} }^{\operatorname{ord}}}\) admits a slope filtration
\[\operatorname{Fil}_{-1}\mathbf{L}_{p}^{\operatorname{ord}}\subseteq \operatorname{Fil}_{0}\mathbf{L}_{p}^{\operatorname{ord}}\subseteq \operatorname{Fil}_{1}\mathbf{L}_{p}^{\operatorname{ord}}=\mathbf{L}_{p}^{ \operatorname{ord}}.\]
Let \(\mathbf{D}\) be the (contravariant) crystalline Dieudonne functor over \(\mathscr{S}_{\mathbb{F}}^{\operatorname{ord}}\). By [1, Theorem 1], there exists a rank \(b+1\) ordinary \(p\)-divisible group \(\Psi\) on \(\mathscr{S}_{\mathbb{F}}^{\operatorname{ord}}\), with \(\operatorname{Br}:=\Psi^{\operatorname{loc}}\), such that
\[\mathbf{D}(\Psi)=\mathbf{L}_{p}^{\operatorname{ord}}/\operatorname{Fil}_{-1} \mathbf{L}_{p}^{\operatorname{ord}},\ \ \mathbf{D}(\operatorname{Br})=\mathbf{L}_{p}^{\operatorname{ord}}/ \operatorname{Fil}_{0}\mathbf{L}_{p}^{\operatorname{ord}}.\]
We also have \(\mathbf{D}(\operatorname{Br}^{\vee})(-1)=\operatorname{Fil}_{-1}\mathbf{L}_{p} ^{\operatorname{ord}}\) from the existence of the pairing \(\mathbf{Q}\). When \(\mathscr{S}\) is the Shimura variety associated to \(\operatorname{GSpin}(2,19)\) - so that every point \(x\in\mathscr{S}_{\mathbb{F}}^{\operatorname{ord}}(\mathbb{F})\) corresponds to an ordinary K3 surface - the fibers of \(\operatorname{Br}\) and \(\Psi\) at \(x\) are isomorphic to the classical formal Brauer groups and extended formal Brauer groups of K3 surfaces, as per the definition given in [1]\({}^{6}\).
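For orientation, we unwind the ranks appearing above: \(\mathbf{D}(\operatorname{Br})=\mathbf{L}_{p}^{\operatorname{ord}}/\operatorname{Fil}_{0}\mathbf{L}_{p}^{\operatorname{ord}}\) has rank \(1\), so for every \(x\in\mathscr{S}_{\mathbb{F}}^{\operatorname{ord}}(\mathbb{F})\) the connected \(p\)-divisible group \(\operatorname{Br}_{x}\) has rank \(1\) and is of multiplicative type, and \(\Psi_{x}\) sits in an exact sequence
\[0\to\operatorname{Br}_{x}\to\Psi_{x}\to\Psi_{x}^{\mathrm{\acute{e}t}}\to 0,\qquad\operatorname{Br}_{x}\simeq\mu_{p^{\infty}},\]
where \(\Psi_{x}^{\mathrm{\acute{e}t}}:=\Psi_{x}/\operatorname{Br}_{x}\) is etale of rank \(b\). In particular the formal group of \(\operatorname{Br}_{x}\) is \(\mathbb{G}_{m}^{\wedge}\), matching the classical description of the formal Brauer group of an ordinary K3 surface.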
### Arithmetic deformation theory at an ordinary point
Notation being the same as SS2.1. Let \(x\in\mathscr{S}(\mathbb{F})\). We denote by \(\operatorname{Def}(\mathscr{A}_{x}^{\operatorname{KS}}[p^{\infty}]/W)\) the universal deformation space of the \(p\)-divisible group \(\mathscr{A}_{x}^{\operatorname{KS}}[p^{\infty}]\). There is a Hodge cocharacter \(\mu_{\mathbb{F}}:\mathbb{G}_{m,\mathbb{F}}\to\operatorname{GSpin}(L_{\mathbb{F}})\) splitting the Hodge filtration of \(\mathscr{A}_{x}^{\operatorname{KS}}\). Pick an arbitrary lifting of \(\mu_{\mathbb{F}}\) to a cocharacter \(\tilde{\mu}_{W}:\mathbb{G}_{m,W}\to\operatorname{GSpin}(L_{W})\). Let \(U_{\operatorname{GSpin},\tilde{\mu}_{W}^{-1}}\) be the unipotent group corresponding to the inverse cocharacter \(\tilde{\mu}_{W}^{-1}\). It coincides with the opposite unipotent group corresponding to \(\tilde{\mu}_{W}\). From [11, SS4], we have
\[\mathscr{S}_{W}^{/x}=\widehat{U}_{\operatorname{GSpin},\tilde{\mu}_{W}^{-1}} \subseteq\operatorname{Def}(\mathscr{A}_{x}^{\operatorname{KS}}[p^{\infty}]/W), \tag{2.3.1}\]
where \(\widehat{U}_{\operatorname{GSpin},\tilde{\mu}_{W}^{-1}}\) is the completion at the identity section of \(U_{\operatorname{GSpin},\tilde{\mu}_{W}^{-1}}\). If \(S\subseteq\operatorname{End}(\mathscr{A}_{x}^{\operatorname{KS}}[p^{\infty}])\) is a subset, let \(\operatorname{Def}(\mathscr{A}_{x}^{\operatorname{KS}}[p^{\infty}],S/W) \subseteq\operatorname{Def}(\mathscr{A}_{x}^{\operatorname{KS}}[p^{\infty}] /W)\) be the subspace parametrizing deformations of \(\mathscr{A}_{x}^{\operatorname{KS}}[p^{\infty}]\) such that the endomorphisms within \(S\) also deform. Define
\[\operatorname{Def}(S/\mathscr{S}_{W}^{/x}):=\mathscr{S}_{W}^{/x}\cap \operatorname{Def}(\mathscr{A}_{x}^{\operatorname{KS}}[p^{\infty}],S/W). \tag{2.3.2}\]
In this section, we will show that
**Proposition 2.3**.: _If \(x\) is ordinary, then there is a canonical isomorphism \(\pi_{x}:\mathscr{S}_{W}^{/x}\simeq\operatorname{Def}(\Psi_{x}/W)\)._
See SS2.3.3 for the definition of \(\pi_{x}\). Proposition 2.3 can be seen as a generalization of [14, Theorem 1.6].
#### 2.3.1. The canonical Hodge cocharacter
Let \(\mathscr{G}\) be an ordinary \(p\)-divisible group over \(\mathbb{F}\), and let \(\mathscr{G}^{\rm{\acute{e}t}}\), \(\mathscr{G}^{\rm{loc}}\) be its etale and local part. Let \(F\) be the Frobenius on the Dieudonne module \(\mathbf{D}(\mathscr{G})\). Define a \(\mathbb{Z}_{p}\)-module
\[\omega(\mathscr{G})=\{v\in\mathbf{D}(\mathscr{G})|Fv=v\text{ or }pv\}. \tag{2.3.3}\]
There are canonical identifications
\[\omega(\mathscr{G}^{\rm{loc}})=X^{*}(\mathscr{G}^{\rm{loc}}),\ \ \omega(\mathscr{G}^{\rm{\acute{e}t}})=T_{p}(\mathscr{G}^{\rm{\acute{e}t}})^{ \vee}, \tag{2.3.4}\]
where \(X^{*}\) and \(T_{p}\) stand for the \(\mathbb{Z}_{p}\)-lattices of characters and the \(p\)-adic Tate module, respectively. Note that \(\omega(\mathscr{G})=\omega(\mathscr{G}^{\rm{loc}})\oplus\omega(\mathscr{G}^{ \rm{\acute{e}t}})\). The \(\mathbb{Z}_{p}\)-module \(\omega(\mathscr{G})\) is a **canonical**\(\mathbb{Z}_{p}\)-structure of \(\mathbf{D}(\mathscr{G})\), in the sense that \(\mathbf{D}(\mathscr{G})=\omega(\mathscr{G})\otimes W\) and
\[F=\begin{bmatrix}\operatorname{Id}_{\omega(\mathscr{G}^{\rm{\acute{e}t}})}&0 \\ 0&p\cdot\operatorname{Id}_{\omega(\mathscr{G}^{\rm{loc}})}\end{bmatrix}\sigma. \tag{2.3.5}\]
Define the _canonical Hodge cocharacter_ as
\[\mu:\mathbb{G}_{m,\mathbb{Z}_{p}}\to\operatorname{GL}(\omega(\mathscr{G})),\ \ t\to\begin{bmatrix}\operatorname{Id}_{\omega(\mathscr{G}^{\rm{\acute{e}t}})}&0 \\ 0&t\cdot\operatorname{Id}_{\omega(\mathscr{G}^{\rm{loc}})}\end{bmatrix}. \tag{2.3.6}\]
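For instance (a minimal special case, included only for orientation): if \(\mathscr{G}=E[p^{\infty}]\) for an ordinary elliptic curve \(E\) over \(\mathbb{F}\), then both \(\omega(\mathscr{G}^{\mathrm{\acute{e}t}})\) and \(\omega(\mathscr{G}^{\mathrm{loc}})\) have rank \(1\), and with respect to the decomposition \(\omega(\mathscr{G})=\omega(\mathscr{G}^{\mathrm{\acute{e}t}})\oplus\omega(\mathscr{G}^{\mathrm{loc}})\) the formulas (2.3.5) and (2.3.6) read
\[F=\begin{bmatrix}1&0\\ 0&p\end{bmatrix}\sigma,\qquad\mu(t)=\begin{bmatrix}1&0\\ 0&t\end{bmatrix}.\]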
Let \(x\) be an ordinary point of \(\mathscr{S}(\mathbb{F})\) and \(\mathscr{G}=\mathscr{A}_{x}^{\rm{KS}}[p^{\infty}]\). We can identify \(\omega(\mathscr{G})\) with the \(\mathbb{Z}_{p}\)-lattice \(H_{\mathbb{Z}_{p}}\), the corresponding canonical Hodge cocharacter \(\mu:\mathbb{G}_{m,\mathbb{Z}_{p}}\to\operatorname{GL}(H_{\mathbb{Z}_{p}})\) lands in \(\operatorname{GSpin}(L_{\mathbb{Z}_{p}})\), and serves as a canonical lift of the Hodge cocharacter \(\mu_{\mathbb{F}}\) associated to \(x\). The scalar extension \(\mu_{W}\) is also referred to as the canonical Hodge cocharacter. The cocharacters \(\mu\) and \(\mu_{W}\) induce filtrations \(\operatorname{Fil}^{\bullet}H_{\mathbb{Z}_{p}}\) and \(\operatorname{Fil}^{\bullet}\mathbf{H}_{p,x}\), respectively. These will be called the _canonical Hodge filtrations_. Consequently, the Dieudonne module \(\mathbf{H}_{p,x}\) is equipped with a canonical \(\mathbb{Z}_{p}\)-structure:
\[\mathbf{H}_{p,x}=(\mathbf{H}_{p,x}(W),\operatorname{Fil}^{\bullet}\mathbf{H} _{p,x},F_{x})=(H_{\mathbb{Z}_{p}},\operatorname{Fil}^{\bullet}H_{\mathbb{Z}_{p }},\mu(p))\otimes W.\]
The Hodge filtration of the canonical lifting of \(x\) is exactly the canonical Hodge filtration induced by \(\mu_{W}\). More details pertaining to the theory of canonical liftings can be found in [10].
Define \(\mu^{c}\) as the composition of \(\mu\) with the projection \(c:\operatorname{GSpin}(L_{\mathbb{Z}_{p}})\to\operatorname{SO}(L_{\mathbb{Z}_{ p}})\). This subsequently gives rise to a three-step filtration \(\operatorname{Fil}^{\bullet}L_{\mathbb{Z}_{p}}\). The cocharacters \(\mu^{c}\) and \(\mu^{c}_{W}\) are again called the canonical Hodge cocharacters, whereas the induced filtrations \(\operatorname{Fil}^{\bullet}L_{\mathbb{Z}_{p}}\) and \(\operatorname{Fil}^{\bullet}\mathbf{L}_{p,x}\) are again termed as the canonical Hodge filtrations.
On the other hand, the inverse cocharacters \(\mu^{-1}\) and \(\mu^{-1}_{W}\) (_resp._\(\mu^{c,-1}\) and \(\mu^{c,-1}_{W}\)) induce the _slope filtrations_\(\operatorname{Fil}_{\bullet}H_{\mathbb{Z}_{p}}\) and \(\operatorname{Fil}_{\bullet}\mathbf{H}_{p,x}\) (_resp._\(\operatorname{Fil}_{\bullet}L_{\mathbb{Z}_{p}}\) and \(\operatorname{Fil}_{\bullet}\mathbf{L}_{p,x}\)). These are ascending filtrations that should not be confused with the canonical Hodge filtrations.
#### 2.3.2. The \(F\)-crystals \(\widehat{\mathbf{H}}_{p,x}\) and \(\widehat{\mathbf{L}}_{p,x}\)
Let \(x\in\mathscr{S}(\mathbb{F})\) be an ordinary point. We use the theory of explicit deformation of \(p\)-divisible groups - as developed in [11, SS7], [12, SS4] and [13, SS1.4-SS1.5] - to describe several important crystals over \(\mathscr{S}^{/x}_{W}\).
We start by considering \(\widehat{\mathbf{H}}_{p,x}\), the Dieudonne crystal of \(\mathscr{A}^{\rm{KS}}[p^{\infty}]\times_{\mathscr{S}}\mathscr{S}^{/x}_{W}\). Recall that we have a canonical Hodge cocharacter \(\mu:\mathbb{G}_{m,\mathbb{Z}_{p}}\to\operatorname{GSpin}(L_{\mathbb{Z}_{p}})\) associated to \(x\). Let \(U_{\operatorname{GSpin},\mu^{-1}}\) be the opposite unipotent of \(\mu\) in \(\operatorname{GSpin}(L_{\mathbb{Z}_{p}})\), and let \(\widehat{U}_{\operatorname{GSpin},\mu^{-1}}\) be its completion at the identity section. If we let \(\tilde{\mu}_{W}=\mu_{W}\) in (2.3.1), we then obtain
\[\mathscr{S}^{/x}_{W}\simeq\widehat{U}_{\operatorname{GSpin},\mu^{-1},W}. \tag{2.3.7}\]
Denote by \(R=\mathcal{O}(\widehat{U}_{\operatorname{GSpin},\mu^{-1},W})\) the ring of formal functions with a choice of Frobenius \(\varphi\). Let \(\operatorname{Fil}^{\bullet}\mathbf{H}_{p,x}\) be the canonical Hodge filtration associated to \(\mu_{W}\). Consider the module \(\mathbf{H}_{R}:=\mathbf{H}_{p,x}\otimes_{W}R\), which is equipped with a filtration \(\operatorname{Fil}^{\bullet}\mathbf{H}_{R}:=\operatorname{Fil}^{\bullet} \mathbf{H}_{p,x}\otimes_{W}R\) and a Frobenius \(F_{R}:=u\circ(F_{x}\otimes\varphi)\)
where \(F_{x}\) is the Frobenius on \(\mathbf{H}_{p,x}\) and \(u\) is the tautological element of \(\widehat{U}_{\mathrm{GSpin},\mu^{-1}}(R)\). The results from [10, SS4] guarantee the existence of a unique connection \(\nabla_{R}\) over \(\mathbf{H}_{R}\) such that
\[\widehat{\mathbf{H}}_{p,x}\simeq(\mathbf{H}_{R},\mathrm{Fil}^{\bullet}\, \mathbf{H}_{R},\nabla_{R},F_{R}). \tag{2.3.8}\]
We also note that \(\widehat{\mathbf{H}}_{p,x}\) is equipped with a slope filtration \(\mathrm{Fil}_{\bullet}\,\mathbf{H}_{p,x}\otimes_{W}R\).
We then give a construction of \(\widehat{\mathbf{L}}_{p,x}\), the universal K3 crystal over \(\mathscr{S}_{W}^{/x}\). Let \(\boldsymbol{\pi}_{\mathrm{cris},x}\in\mathbf{H}_{p,x}^{\otimes(2,2)}\) be the crystalline tensor as per [11, Proposition 4.7]. The constructions in [10, SS4.8] imply that \(\mathbf{H}_{R}\) is equipped with a constant Hodge tensor \(\boldsymbol{\pi}_{R}=\boldsymbol{\pi}_{\mathrm{cris},x}\otimes 1\in\mathbf{H}_{R}^{ \otimes(2,2)}\) which is horizontal and \(F_{R}\)-invariant, and furthermore lies in \(\mathrm{Fil}^{0}\,\mathbf{H}_{R}^{\otimes(2,2)}\). Viewing \(\boldsymbol{\pi}_{R}\) as an idempotent operator over \(\mathbf{H}_{R}^{\otimes(1,1)}\), we define \(\mathbf{L}_{R}:=\boldsymbol{\pi}_{R}\mathbf{H}_{R}^{\otimes(1,1)}\). Clearly, \(\mathbf{L}_{R}\) is a direct summand of \(\mathbf{H}_{R}^{\otimes(1,1)}\), and coincides with \(\mathbf{L}_{p,x}\otimes_{W}R\). Let \(\mathrm{Fil}^{\bullet}\,\mathbf{L}_{p,x}\) be the canonical Hodge filtration over \(\mathbf{L}_{p,x}\). Define a filtration on \(\mathbf{L}_{R}\) by \(\mathrm{Fil}^{\bullet}\,\mathbf{L}_{R}:=\mathrm{Fil}^{\bullet}\,\mathbf{L}_{p,x}\otimes_{W}R\). Since \(\boldsymbol{\pi}_{R}\) is horizontal, \(\mathbf{L}_{R}\) is stable under the connection \(\nabla_{R}^{c}\) over \(\mathbf{H}_{R}^{\otimes(1,1)}\) induced from \(\nabla_{R}\). Furthermore, we define a Frobenius \(F_{R}^{c}\) on \(\mathbf{L}_{R}\) by \(F_{R}^{c}:=u\circ(F_{x}^{c}\otimes\varphi)\), where \(F_{x}^{c}\) is the Frobenius on \(\mathbf{L}_{p,x}\). Then \(\mathbf{L}_{R}[p^{-1}]\) is invariant under \(F_{R}^{c}\). Putting all of these together, we define
\[\widehat{\mathbf{L}}_{p,x}:=(\mathbf{L}_{R},\mathrm{Fil}^{\bullet}\,\mathbf{L }_{R},\nabla_{R}^{c},F_{R}^{c}). \tag{2.3.9}\]
Again, \(\widehat{\mathbf{L}}_{p,x}\) admits an embedding into \(\mathcal{E}nd_{\mathrm{Cl}(L)}(\widehat{\mathbf{H}}_{p,x})\) and is equipped with a pairing \(\widehat{\mathbf{Q}}_{x}\). We further note that the slope filtration of \(\widehat{\mathbf{L}}_{p,x}\) is \(\mathrm{Fil}_{\bullet}\,\mathbf{L}_{p,x}\otimes_{W}R\).
#### 2.3.3. Definition of \(\pi_{x}\)
The slope -1 submodule \(\mathrm{Fil}_{-1}\,\widehat{\mathbf{L}}_{p,x}\) is preserved under the Frobenius and connection of \(\widehat{\mathbf{L}}_{p,x}\). So the quotient \(\widehat{\widehat{\mathbf{L}}}_{p,x}=\widehat{\mathbf{L}}_{p,x}/\mathrm{Fil}_ {-1}\,\widehat{\mathbf{L}}_{p,x}\) is again a Frobenius module with connection. There is a two-step descending filtration over \(\widehat{\widehat{\mathbf{L}}}_{p,x}\) defined as
\[\widehat{\widehat{\mathbf{L}}}_{p,x}\supseteq\mathrm{Fil}^{1}\,\widehat{\widehat{\mathbf{L}}}_{p,x}:=\left(\mathrm{Fil}^{1}\,\widehat{\mathbf{L}}_{p,x}+\mathrm{Fil}_{-1}\,\widehat{\mathbf{L}}_{p,x}\right)/\mathrm{Fil}_{-1}\,\widehat{\mathbf{L}}_{p,x}.\]
It is easy to check that \(\widehat{\widehat{\mathbf{L}}}_{p,x}\) is a Dieudonne crystal over \(\mathscr{S}_{W}^{/x}\). By [1, Theorem 1]\(,\,\widehat{\widehat{\mathbf{L}}}_{p,x}\) is the Dieudonne crystal of a formal \(p\)-divisible group \(\widehat{\Psi}_{x}\) over \(\mathscr{S}_{W}^{/x}\) which deforms \(\Psi_{x}\). Clearly, \(\widehat{\Psi}_{x}\) induces a morphism of formal schemes
\[\pi_{x}:\mathscr{S}_{W}^{/x}\to\mathrm{Def}(\Psi_{x}/W) \tag{2.3.10}\]
via which \(\widehat{\Psi}_{x}\) is the pullback of the universal \(p\)-divisible group over \(\mathrm{Def}(\Psi_{x}/W)\).
#### 2.3.4. Proof of Proposition 2.3
Recall that there is a slope filtration \(\mathrm{Fil}_{\bullet}\,L_{\mathbb{Z}_{p}}\) induced by \(\mu^{c,-1}\). Let \(\overline{L}_{\mathbb{Z}_{p}}=L_{\mathbb{Z}_{p}}/\,\mathrm{Fil}_{-1}\,L_{ \mathbb{Z}_{p}}\). We denote by \(U_{\mathrm{SO},\mu^{c}}\)_resp._\(U_{\mathrm{SO},\mu^{c,-1}}\) the unipotent _resp._ opposite unipotent of \(\mu^{c}\) in \(\mathrm{SO}(L_{\mathbb{Z}_{p}})\). Similarly, write \(U_{\mathrm{GL},\overline{\mu}}\)_resp._\(U_{\mathrm{GL},\overline{\mu}^{-1}}\) for the unipotent _resp._ opposite unipotent of \(\overline{\mu}\) in \(\mathrm{GL}(\overline{L}_{\mathbb{Z}_{p}})\). We will use \(r\) to denote both of the natural maps \(U_{\mathrm{SO},\mu^{c}}\to U_{\mathrm{GL},\overline{\mu}}\) and \(U_{\mathrm{SO},\mu^{c,-1}}\to U_{\mathrm{GL},\overline{\mu}^{-1}}\).
**Lemma 2.4**.: _The morphisms \(r\) and \(c\) induce the following chains of isomorphisms:_
1. \(U_{\mathrm{GSpin},\mu^{-1}}\xrightarrow{c}U_{\mathrm{SO},\mu^{c,-1}} \xrightarrow{r}U_{\mathrm{GL},\overline{\mu}^{-1}}\)_,_
2. \(U_{\mathrm{GSpin},\mu}\xrightarrow{c}U_{\mathrm{SO},\mu^{c}}\xrightarrow{r}U_{ \mathrm{GL},\overline{\mu}}\)_._
Proof.: It suffices to prove (1). Firstly, \(c:U_{\mathrm{GSpin},\mu^{-1}}\to U_{\mathrm{SO},\mu^{c,-1}}\) is an isomorphism, since \(\mathrm{SO}(L_{\mathbb{Z}_{p}})=\mathrm{GSpin}(L_{\mathbb{Z}_{p}})/\mathbb{G}_{m,\mathbb{Z}_{p}}\). To show that \(r\) is an isomorphism, we arrange the basis of \(L_{\mathbb{Z}_{p}}\) so that
\(\operatorname{Fil}_{-1}L_{\mathbb{Z}_{p}}=\operatorname{Span}_{\mathbb{Z}_{p}}\{e_{ 1}\}\), \(\operatorname{Fil}_{0}L_{\mathbb{Z}_{p}}=\operatorname{Span}_{\mathbb{Z}_{p}}\{e_{ 1},e_{2},...,e_{b+1}\}\), \(L_{\mathbb{Z}_{p}}=\operatorname{Span}_{\mathbb{Z}_{p}}\{e_{1},e_{2},...,e_{b +2}\}\), and
\[Q=\begin{bmatrix}&&1\\ &Q_{0}&\\ 1&&\end{bmatrix}.\]
Let \(R\) be a \(\mathbb{Z}_{p}\)-algebra, then any element \(g^{\prime}\in U_{\operatorname{GL},\overline{\mu}^{-1}}(R)\) can be written as
\[g^{\prime}=\begin{bmatrix}\operatorname{Id}&\mathbf{v}\\ &1\end{bmatrix}. \tag{2.3.11}\]
Since \(U_{\operatorname{SO},\mu^{c,-1}}\) preserves \(Q\), there is a unique element
\[g=\begin{bmatrix}1&-\mathbf{v}^{t}Q_{0}&-\frac{1}{2}\mathbf{v}^{t}Q_{0} \mathbf{v}\\ &\operatorname{Id}&\mathbf{v}\\ &&1\end{bmatrix}\in U_{\operatorname{SO},\mu^{c,-1}}(R) \tag{2.3.12}\]
such that \(r(g)=g^{\prime}\). It follows that \(r\) is an isomorphism.
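For the reader's convenience, here is the direct check that the element \(g\) in (2.3.12) preserves the quadratic form: writing a vector as \((a,w,c)\) with respect to the basis above, so that \((x,x)=2ac+w^{t}Q_{0}w\), one has \(g\cdot(a,w,c)=\big(a-\mathbf{v}^{t}Q_{0}w-\tfrac{1}{2}(\mathbf{v}^{t}Q_{0}\mathbf{v})c,\ w+c\mathbf{v},\ c\big)\) and
\[2c\left(a-\mathbf{v}^{t}Q_{0}w-\tfrac{1}{2}(\mathbf{v}^{t}Q_{0}\mathbf{v})c\right)+(w+c\mathbf{v})^{t}Q_{0}(w+c\mathbf{v})=2ac+w^{t}Q_{0}w.\]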
Proof of Proposition 2.3.: We decompose \(\pi_{x}\) as the composition
\[\pi_{x}:\mathscr{S}_{W}^{/x}\simeq\widehat{U}_{\mathrm{GSpin},\mu^{-1},W}\xrightarrow{\ \widehat{c}\ }\widehat{U}_{\mathrm{SO},\mu^{c,-1},W}\xrightarrow{\ \widehat{r}\ }\widehat{U}_{\mathrm{GL},\overline{\mu}^{-1},W}\simeq\mathrm{Def}(\Psi_{x}/W). \tag{2.3.13}\]
The result follows from Lemma 2.4(1).
### Canonical coordinates
The theory of canonical coordinates implies that the deformation spaces \(\operatorname{Def}(\mathscr{A}_{x}^{\operatorname{KS}}[p^{\infty}]/W)\) and \(\operatorname{Def}(\Psi_{x}/W)\) both admit structures of formal tori. In this section, we will show that \(\mathscr{S}_{W}^{/x}\) is a formal subtorus of \(\operatorname{Def}(\mathscr{A}_{x}^{\operatorname{KS}}[p^{\infty}]/W)\). Furthermore, with this induced subtorus structure on \(\mathscr{S}_{W}^{/x}\), the morphism \(\pi_{x}\) in (2.3.13) is an isomorphism of formal tori.
#### 2.4.1. Canonical coordinates on the deformation spaces of ordinary \(p\)-divisible groups
Let \(\mathscr{G}\) be an ordinary \(p\)-divisible group over \(\mathbb{F}\). We denote by \(\mathscr{G}^{\operatorname{\acute{e}t}}\) and \(\mathscr{G}^{\operatorname{loc}}\) its etale and local part, respectively. Let \(\omega(\mathscr{G})\), \(\omega(\mathscr{G}^{\operatorname{\acute{e}t}})\), \(\omega(\mathscr{G}^{\operatorname{loc}})\) and \(\mu\) be the \(\mathbb{Z}_{p}\)-lattices and the canonical Hodge cocharacter defined in SS2.3.1. Then the formal deformation space \(\operatorname{Def}(\mathscr{G}/W)\) admits the structure of a formal torus:
\[\operatorname{Def}(\mathscr{G}/W)\simeq U_{\operatorname{GL},\mu}(\mathbb{Z}_{p})\otimes_{\mathbb{Z}_{p}}\mathbb{G}_{m,W}^{\wedge} \tag{2.4.1}\] \[\simeq\operatorname{Hom}_{\mathbb{Z}_{p}}\left(U_{\operatorname{GL},\mu^{-1}}(\mathbb{Z}_{p}),\mathbb{G}_{m,W}^{\wedge}\right). \tag{2.4.2}\]
Here the \(\mathbb{Z}_{p}\)-algebraic groups \(U_{\operatorname{GL},\mu},U_{\operatorname{GL},\mu^{-1}}\) are the unipotent and opposite unipotent of \(\mu\) in \(\operatorname{GL}(\omega(\mathscr{G}))\). Note that \(U_{\operatorname{GL},\mu}(\mathbb{Z}_{p})\) and \(U_{\operatorname{GL},\mu^{-1}}(\mathbb{Z}_{p})\) can be canonically identified with the cocharacter and character lattices of \(\operatorname{Def}(\mathscr{G}/W)\). In the same spirit as (2.3.4), we can also canonically identify \(U_{\operatorname{GL},\mu}(\mathbb{Z}_{p})\)_resp_. \(U_{\operatorname{GL},\mu^{-1}}(\mathbb{Z}_{p})\) with the \(\mathbb{Z}_{p}\)-linear space \(X_{*}(\mathscr{G}^{\operatorname{loc}})\otimes_{\mathbb{Z}_{p}}T_{p}(\mathscr{ G}^{\operatorname{\acute{e}t}})^{\vee}\)_resp_. \(X^{*}(\mathscr{G}^{\operatorname{loc}})\otimes_{\mathbb{Z}_{p}}T_{p}(\mathscr{G} ^{\operatorname{\acute{e}t}})\), where \(X_{*}\) stands for the cocharacter lattice.
The identifications (2.4.1) and (2.4.2) will be called the _canonical coordinates_ over \(\operatorname{Def}(\mathscr{G}/W)\). The group law of the formal torus comes from Baer sums of the extensions. The unique element in \(\operatorname{Def}(\mathscr{G}/W)\) corresponding to the identity will be called the _canonical lifting_. For a formal \(W\)-algebra \(R\) and a \(p\)-divisible group \(\widehat{\mathscr{G}}\) over \(\operatorname{Spf}R\) deforming \(\mathscr{G}\), (2.4.2) yields a \(\mathbb{Z}_{p}\)-linear map
\[q_{\widehat{\mathscr{G}}}:U_{\operatorname{GL},\mu^{-1}}(\mathbb{Z}_{p})\simeq X^{*}(\mathscr{G}^{\operatorname{loc}})\otimes_{\mathbb{Z}_{p}}T_{p}(\mathscr{G} ^{\operatorname{\acute{e}t}})\to\mathbb{G}_{m,W}^{\wedge}(R), \tag{2.4.3}\]
which is called the _canonical pairing_. If \(\mathscr{G}\) is equipped with a family of endomorphisms \(S\) such that each \(s\in S\) decomposes as \(s^{\mathrm{loc}}\times s^{\mathrm{\acute{e}t}}\), then \(S\) deforms to a family of endomorphisms of \(\widehat{\mathscr{G}}\) if and only if \(q_{\widehat{\mathscr{G}}}\) fixes every \(s\in S\). More precisely, \(S\) deforms to \(\widehat{\mathscr{G}}\) if and only if
\[q_{\widehat{\mathscr{G}}}(s^{\mathrm{loc}}x\otimes y)=q_{\widehat{\mathscr{G}}}(x\otimes s^{\mathrm{\acute{e}t}}y),\text{ for all }x\in X^{*}(\mathscr{G}^{\mathrm{loc}}),y\in T_{p}(\mathscr{G}^{\mathrm{\acute{e}t}}),s\in S. \tag{2.4.4}\]
Therefore, there is a \(\mathbb{Z}_{p}\)-sublattice \(\Lambda_{S}\subseteq U_{\mathrm{GL},\mu^{-1}}(\mathbb{Z}_{p})\) such that
\[\mathrm{Def}(\mathscr{G},S/W)=\mathrm{Hom}_{\mathbb{Z}_{p}}\left(U_{\mathrm{ GL},\mu^{-1}}(\mathbb{Z}_{p})/\Lambda_{S},\mathbb{G}_{m,W}^{\wedge}\right).\]
Note that \(\mathrm{Def}(\mathscr{G},S/W)\) is a formal subtorus if and only if \(\Lambda_{S}\) is a saturated sublattice.
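As an illustration of (2.4.4) (the classical Serre-Tate example, included only for orientation), take \(\mathscr{G}=E[p^{\infty}]\) for an ordinary elliptic curve \(E\) over \(\mathbb{F}\). Here \(U_{\mathrm{GL},\mu^{-1}}(\mathbb{Z}_{p})\simeq\mathbb{Z}_{p}\) and
\[\mathrm{Def}(\mathscr{G}/W)\simeq\mathrm{Hom}_{\mathbb{Z}_{p}}\left(\mathbb{Z}_{p},\mathbb{G}_{m,W}^{\wedge}\right)\simeq\mathbb{G}_{m,W}^{\wedge}.\]
If moreover \(E\) has CM by the maximal order \(\mathcal{O}_{K}\) of an imaginary quadratic field \(K\) (so that \(p\) splits in \(K\)), then an element \(s\in\mathcal{O}_{K}\) generating \(\mathcal{O}_{K}\) over \(\mathbb{Z}\) acts on \(X^{*}(\mathscr{G}^{\mathrm{loc}})\) and on \(T_{p}(\mathscr{G}^{\mathrm{\acute{e}t}})\) through the two distinct embeddings \(K\hookrightarrow\mathbb{Q}_{p}\), whose values at \(s\) differ by a \(p\)-adic unit. Hence \(\Lambda_{\mathcal{O}_{K}}=U_{\mathrm{GL},\mu^{-1}}(\mathbb{Z}_{p})\), and \(\mathrm{Def}(\mathscr{G},\mathcal{O}_{K}/W)\) is the identity section: the CM action deforms only to the canonical lifting.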
#### 2.4.2. Canonical coordinates on \(\mathscr{S}_{W}^{/x}\)
According to (2.4.2), \(\mathrm{Def}(\mathscr{A}_{x}^{\mathrm{KS}}[p^{\infty}]/W)\)_resp._\(\mathrm{Def}(\Psi_{x}/W)\) admits a structure of a formal torus, with character lattice \(U_{\mathrm{GL},\mu^{-1}}(\mathbb{Z}_{p})\)_resp._\(U_{\mathrm{GL},\overline{\mu}^{-1}}(\mathbb{Z}_{p})\). Our main goal is to show that:
**Proposition 2.5**.: \(\mathscr{S}_{W}^{/x}\subseteq\mathrm{Def}(\mathscr{A}_{x}^{\mathrm{KS}}[p^{ \infty}]/W)\) _is a formal subtorus with cocharacter lattice \(U_{\mathrm{GSpin},\mu}(\mathbb{Z}_{p})\). Furthermore, \(\pi_{x}^{-1}\) is an isomorphism of formal tori, whose induced morphism on the character lattices is exactly the composition_
\[U_{\mathrm{GSpin},\mu^{-1}}(\mathbb{Z}_{p})\xrightarrow{c}U_{\mathrm{SO},\mu^ {c,-1}}(\mathbb{Z}_{p})\xrightarrow{r}U_{\mathrm{GL},\overline{\mu}^{-1}}( \mathbb{Z}_{p}),\]
_where the morphisms \(r,c\) are defined in SS2.3.4._
_Remark 2.6_.: As a consequence of Proposition 2.5, we can write the torus structure on \(\mathscr{S}_{W}^{/x}\) in the following three equivalent forms:
\[\mathscr{S}_{W}^{/x}\simeq\begin{cases}U_{\mathrm{GSpin},\mu}(\mathbb{Z}_{p}) \otimes_{\mathbb{Z}_{p}}\mathbb{G}_{m}^{\wedge},\\ U_{\mathrm{SO},\mu^{c}}(\mathbb{Z}_{p})\otimes_{\mathbb{Z}_{p}}\mathbb{G}_{m}^{ \wedge},\\ U_{\mathrm{GL},\overline{\mu}}(\mathbb{Z}_{p})\otimes_{\mathbb{Z}_{p}}\mathbb{G} _{m}^{\wedge}.\end{cases}\]
These three different forms originate from distinct contexts. As previously noted, the first torus structure arises from the canonical coordinates on \(\mathrm{Def}(\mathscr{A}_{x}^{\mathrm{KS}}[p^{\infty}]/W)\), while the third torus structure comes from the canonical coordinates on \(\mathrm{Def}(\Psi_{x}/W)\). The second torus structure, on the other hand, arises from the canonical coordinates on the deformation space of the K3 \(F\)-crystal \(\mathbf{L}_{p,x}\), see [1, Theorem 2.1.7]. The three tori are canonically identified via the isomorphisms
\[U_{\mathrm{GSpin},\mu}(\mathbb{Z}_{p})\xrightarrow{c}U_{\mathrm{SO},\mu^{c}} (\mathbb{Z}_{p})\xrightarrow{r}U_{\mathrm{GL},\overline{\mu}}(\mathbb{Z}_{p}).\]
Proof of Proposition 2.5.: The first statement is proven in [2, Proposition 2.6]. However, we give a different argument. Define \(\tau_{x}\) as the composition of \(\pi_{x}^{-1}\) with the embedding of \(\mathscr{S}_{W}^{/x}\subseteq\mathrm{Def}(\mathscr{A}_{x}^{\mathrm{KS}}[p^{ \infty}]/W)\). From (2.3.13) we see that \(\tau_{x}\) can be identified as
\[\widehat{U}_{\mathrm{GL},\overline{\mu}^{-1},W}\xrightarrow{\widehat{c}^{-1} \circ\widehat{r}^{-1}}\widehat{U}_{\mathrm{GSpin},\mu^{-1},W}\subseteq \widehat{U}_{\mathrm{GL},\mu^{-1},W}.\]
Fix a \(\mathbb{Z}_{p}\)-basis \(\{u_{i}\}_{i=1}^{N}\) of \(U_{\mathrm{GL},\mu}(\mathbb{Z}_{p})\) such that \(\{u_{i}\}_{i=1}^{b}\) is a basis of \(U_{\mathrm{GSpin},\mu}(\mathbb{Z}_{p})\). For \(1\leq i\leq b\), let \(\overline{u}_{i}=rc(u_{i})\). So \(\{\overline{u}_{i}\}_{i=1}^{b}\) is a basis of \(U_{\mathrm{GL},\overline{\mu}}(\mathbb{Z}_{p})\). We can view \(\{u_{i}\}_{i=1}^{N}\) as linear functions on the variety \(U_{\mathrm{GL},\mu^{-1}}\), hence identifying \(U_{\mathrm{GL},\mu^{-1}}=\mathrm{Spec}\,\mathbb{Z}_{p}[u_{1},...,u_{N}]\), \(\widehat{U}_{\mathrm{GL},\mu^{-1},W}=\mathrm{Spf}\,W[[u_{1},...,u_{N}]]\) and \(\widehat{U}_{\mathrm{GSpin},\mu^{-1},W}=\mathrm{Spf}\,W[[u_{1},...,u_{b}]]\). Similarly, we have \(\widehat{U}_{\mathrm{GL},\overline{\mu}^{-1},W}=\mathrm{Spf}\,W[[\overline{u }_{1},...,\overline{u}_{b}]]\). The morphism \(\tau_{x}\) corresponds to the ring homomorphism
\[\tau_{x}^{*}:W[[u_{i}]]\to W[[\overline{u}_{i}]],\ u_{i}\to\begin{cases} \overline{u}_{i},\ i\leq b,\\ 0,\ i>b.\end{cases}\]
Let \(\mathbb{G}_{m,W}^{\wedge}=\mathrm{Spf}\,W[[q-1]]\) with group structure \(q\to q\otimes q\). By (2.4.1), the dual basis \(\{u_{i}^{\vee}\}_{i=1}^{N}\) gives rise to morphisms \(f_{i}:U_{\mathrm{GL},\mu}(\mathbb{Z}_{p})\otimes_{\mathbb{Z}_{p}}\mathbb{G}_{m,W}^{\wedge}\to\mathbb{G}_{m,W}^{\wedge}\). In other words, it gives rise to morphisms \(f_{i}^{*}:\)
\(W[[q-1]]\to W[[u_{1},...,u_{N}]]\). Let \(q_{i}=f_{i}^{*}(q)\). Then \(\widehat{U}_{\mathrm{GL},\mu^{-1},W}=\mathrm{Spf}\,W[[q_{i}-1]]\) with group structure \(q_{i}\to q_{i}\otimes q_{i}\), i.e., \(\{q_{i}\}_{i=1}^{N}\) is the set of canonical coordinates on \(\mathrm{Def}(\mathscr{A}_{x}^{\mathrm{KS}}[p^{\infty}]/W)\) corresponding to \(\{u_{i}\}_{i=1}^{N}\). Similarly, let \(\{\overline{q}_{i}\}_{i=1}^{b}\) be the canonical coordinates corresponding to \(\{\overline{u}_{i}\}_{i=1}^{b}\), so that \(\widehat{U}_{\mathrm{GL},\overline{\mu}^{-1},W}=\mathrm{Spf}\,W[[\overline{ q}_{i}-1]]\), with group structure \(\overline{q}_{i}\to\overline{q}_{i}\otimes\overline{q}_{i}\).
Pick a Frobenius on \(W[[u_{i}]]\) extending the Frobenius on \(W\) and sending \(u_{i}\) to \(u_{i}^{p}\). Similarly, pick a Frobenius on \(W[[\overline{u}_{i}]]\) extending the Frobenius on \(W\) and sending \(\overline{u}_{i}\) to \(\overline{u}_{i}^{p}\). Then [1, Theorem 1.4.2] implies that there are identifications of the coordinates (see [2] for a detailed computation):
\[q_{i}=E_{p}(u_{i}),\ \overline{q}_{i}=E_{p}(\overline{u}_{i})\]
where \(E_{p}(t)=\exp\left(\sum_{k\geq 0}\frac{t^{p^{k}}}{p^{k}}\right)\in\mathbb{Z}_{p}[[t]]\) is the Artin-Hasse exponential. It then follows that
\[\tau_{x}^{*}(q_{i})=\begin{cases}\overline{q}_{i},\ i\leq b\\ 1,\ i>b\end{cases}.\]
Therefore, \(\tau_{x}\) is a group homomorphism with image \(\mathrm{Spf}\,W[[q_{1},q_{2},...,q_{b}]]\). This is a subtorus of \(\widehat{U}_{\mathrm{GL},\mu^{-1},W}\) with character lattice \(U_{\mathrm{GSpin},\mu^{-1}}(\mathbb{Z}_{p})\). On the other hand, it follows from the definition that the image of \(\tau_{x}\) is \(\mathscr{S}_{W}^{/x}\), so we deduce that \(\mathscr{S}_{W}^{/x}\) is a subtorus of \(\mathrm{Def}(\mathscr{A}_{x}^{\mathrm{KS}}[p^{\infty}]/W)\) with cocharacter lattice \(U_{\mathrm{GSpin},\mu}(\mathbb{Z}_{p})\). The second assertion of the proposition is clear.
## 3. Monodromy of local systems
We review the notion of monodromy for etale lisse sheaves and \(F\)-isocrystals, which will play a fundamental role in our paper. For simplicity, we will mostly stick to local systems with coefficients in \(\mathbb{Q}_{u}\), where \(u\) is a finite place of \(\mathbb{Q}\), but the treatment extends to local systems with coefficients in finite extensions of \(\mathbb{Q}_{u}\). For a more comprehensive treatment of these notions, we refer the reader to [1, SS2,SS3]. We will always assume that \(X_{0}\) is a geometrically connected smooth variety over a finite field \(\mathbb{F}_{q}\) with \(X=(X_{0})_{\mathbb{F}}\), and \(x\) is an \(\mathbb{F}\)-point of \(X_{0}\).
### Monodromy of etale lisse sheaves
Let \(u\) be a finite place of \(\mathbb{Q}\), including \(p\). Consider \(\mathbf{LS}(X_{0},\mathbb{Q}_{u})\), the category of etale lisse sheaves of \(\mathbb{Q}_{u}\)-vector spaces over \(X_{0}\). This category is equivalent to the category of continuous \(\pi_{1}^{\mathrm{\acute{e}t}}(X_{0},x)\)-representations. It is also a neutral Tannakian category with fiber functor
\[\omega_{x}:\mathbf{LS}(X_{0},\mathbb{Q}_{u}) \to\mathrm{Vect}_{\mathbb{Q}_{u}}\] \[\mathcal{E} \to\mathcal{E}_{x}.\]
The _monodromy group_ of an object \(\mathcal{E}\) in \(\mathbf{LS}(X_{0},\mathbb{Q}_{u})\) at \(x\) is the Tannakian fundamental group of the tensor Abelian subcategory \(\langle\mathcal{E}\rangle^{\otimes}\) with fiber functor \(\omega_{x}\), denoted \(G(\mathcal{E},x)\). Since \(\mathbf{LS}(X_{0},\mathbb{Q}_{u})\) is equivalent to the category of continuous \(\pi_{1}^{\mathrm{\acute{e}t}}(X_{0},x)\)-representations, \(G(\mathcal{E},x)\) is nothing other than the Zariski closure of the image of \(\pi_{1}^{\mathrm{\acute{e}t}}(X_{0},x)\) in \(\mathrm{GL}(\mathcal{E}_{x})\).
There is also a notion of Weil lisse sheaves, which is more widely used (cf. [1]). A Weil lisse sheaf over \(X_{0}\) is a (geometric) etale lisse sheaf \(\mathcal{V}\) over \(X\), together with a Frobenius structure \(F^{*}\mathcal{V}\xrightarrow{\sim}\mathcal{V}\), where \(F\) is the geometric Frobenius of \(\mathbb{F}_{q}\) with respect to \(\mathbb{F}\). Let \(W(X_{0},x)\subseteq\pi_{1}^{\mathrm{\acute{e}t}}(X_{0},x)\) be the Weil group of \(X_{0}\). The category of Weil lisse sheaves is equivalent to the category of continuous \(W(X_{0},x)\)-representations, and is a neutral Tannakian category with fiber functor \(\omega_{x}\). The monodromy group at \(x\) of a Weil lisse sheaf \(\mathcal{V}\) is the Tannakian fundamental group of \(\langle\mathcal{V}\rangle^{\otimes}\) with fiber functor \(\omega_{x}\). It is the Zariski closure of \(W(X_{0},x)\) in \(\mathrm{GL}(\mathcal{V}_{x})\).
An etale lisse sheaf \(\mathcal{E}\) in \(\mathbf{LS}(X_{0},\mathbb{Q}_{u})\) is automatically a Weil lisse sheaf via pullback, and its subquotients as Weil lisse sheaves are objects in \(\mathbf{LS}(X_{0},\mathbb{Q}_{u})\). Consequently, the monodromy group of \(\mathcal{E}\) as an etale lisse sheaf over \(X_{0}\) equals the monodromy group of \(\mathcal{E}\) as a Weil lisse sheaf. In this paper, we will almost exclusively work in the category \(\mathbf{LS}(X_{0},\mathbb{Q}_{u})\).
### Monodromy of \(F\)-isocrystals
In this section, we use \(F\) to denote the absolute Frobenius. Let \(\mathbf{F}\)-\(\mathbf{Isoc}(X_{0})\) be the tensor Abelian category of \(F\)-isocrystals over \(X_{0}\). Consider an object \(\mathcal{M}\) in \(\mathbf{F}\)-\(\mathbf{Isoc}(X_{0})\). We denote by \(\langle\mathcal{M}\rangle^{\otimes}\) the tensor Abelian subcategory generated by \(\mathcal{M}\). Let \(e\) be the smallest positive integer such that the slopes of \(\mathcal{M}_{x}\) multiplied by \(e\) lie in \(\mathbb{Z}\). The fibre functor
\[\omega_{x}:\langle\mathcal{M}\rangle_{\mathbb{Q}_{p^{e}}}^{\otimes} \to\operatorname{Vect}_{\mathbb{Q}_{p^{e}}}\] \[(\mathcal{N},F) \to\{v\in\mathcal{N}_{x}|\exists i\in\mathbb{Z},\ (F_{x}^{e}-p^{i})v=0\}.\]
makes \(\langle\mathcal{M}\rangle_{\mathbb{Q}_{p^{e}}}^{\otimes}\) - the scalar extension of \(\langle\mathcal{M}\rangle^{\otimes}\) by \(\mathbb{Q}_{p^{e}}\) (cf.[1, SS1.4.1]) - a neutral Tannakian category. The functor \(\omega_{x}\) is essentially the same as the _Dieudonne-Manin fiber functor_ in [1, Section 3.1.6], and is also a minor generalization of the fiber functor found in [10]. The fundamental group \(\operatorname{Aut}^{\otimes}(\omega_{x})\subseteq\operatorname{GL}(\omega_{x} (\mathcal{M}))\) is called the _(global) monodromy group_ of \(\mathcal{M}\) at \(x\), denoted \(G(\mathcal{M},x)\). Let \(\mathcal{M}^{/x}\in\mathbf{F}\)-\(\mathbf{Isoc}(X^{/x})\) be the base change of \(\mathcal{M}\). The subcategory \(\langle\mathcal{M}^{/x}\rangle_{\mathbb{Q}_{p^{e}}}^{\otimes}\) is again a Tannakian category with the fiber functor \(\omega_{x}\), see [1, SS3.3]. The corresponding monodromy group is called the _local monodromy group_ of \(\mathcal{M}\) at \(x\), denoted \(G(\mathcal{M}^{/x},x)\). We have \(G(\mathcal{M}^{/x},x)\subseteq G(\mathcal{M},x)\subseteq\operatorname{GL}( \omega_{x}(\mathcal{M}))\).
We will mainly be interested in \(F\)-isocrystals with constant slopes. If \(\mathcal{M}\) has constant slopes, then [11, Corollary 2.6.2] and [11, Corollary 4.2] imply that \(\mathcal{M}\) admits the slope filtration
\[0=\mathcal{M}_{0}\subseteq...\subseteq\mathcal{M}_{l}=\mathcal{M},\]
where each graded piece \(\mathcal{M}_{i}/\mathcal{M}_{i-1}\) has pure slope \(s_{i}\in\mathbb{Q}\) and \(s_{1}<...<s_{l}\). We will write \(\operatorname{gr}\mathcal{M}=\bigoplus_{i=1}^{l}\mathcal{M}_{i}/\mathcal{M}_{ i-1}\). Let \(U(\mathcal{M},x)\)_resp._\(U(\mathcal{M}^{/x},x)\) be the kernel of the natural projection \(G(\mathcal{M},x)\to G(\operatorname{gr}\mathcal{M},x)\)_resp._\(G(\mathcal{M}^{/x},x)\to G(\operatorname{gr}\mathcal{M}^{/x},x)\). They are all unipotent, with \(U(\mathcal{M}^{/x},x)\subseteq U(\mathcal{M},x)\). The monodromy groups of an \(F\)-isocrystal with constant slopes are relatively easy to understand:
**Lemma 3.1**.: _Suppose that \(\mathcal{M}\) has constant slopes and let \(\nu\) be the Newton cocharacter of \(\mathcal{M}_{x}\). Identify \(G(\mathcal{M},x),G(\operatorname{gr}\mathcal{M},x),U(\mathcal{M},x)\) and their local counterparts as subgroups of \(\operatorname{GL}(\omega_{x}(\mathcal{M}))\). The following are true:_
1. \(G(\operatorname{gr}\mathcal{M}^{/x},x)=\operatorname{im}\nu\)_,_
2. \(G(\mathcal{M},x)=U(\mathcal{M},x)\rtimes G(\operatorname{gr}\mathcal{M},x)\)_, where_ \(G(\operatorname{gr}\mathcal{M},x)\) _acts on_ \(U(\mathcal{M},x)\) _via conjugation,_
3. \(G(\mathcal{M}^{/x},x)=U(\mathcal{M}^{/x},x)\rtimes G(\operatorname{gr} \mathcal{M}^{/x},x)\)_, where_ \(G(\operatorname{gr}\mathcal{M}^{/x},x)\) _acts on_ \(U(\mathcal{M}^{/x},x)\) _via conjugation._
Proof.:
1. follows since a unit-root \(F\)-isocrystal over \(X^{/x}\) is constant.
2. There is a functor \(\operatorname{gr}:\langle\mathcal{M}\rangle^{\otimes}\to\langle\operatorname{gr}\mathcal{M}\rangle^{\otimes}\) sending an \(F\)-isocrystal to its graded object, inducing a section \(G(\operatorname{gr}\mathcal{M},x)\hookrightarrow G(\mathcal{M},x)\) to the natural map \(G(\mathcal{M},x)\to G(\operatorname{gr}\mathcal{M},x)\), hence we have the semi-direct product. The claim that \(G(\operatorname{gr}\mathcal{M},x)\) acts on \(U(\mathcal{M},x)\) via conjugation is clear from the way that they embed into \(\operatorname{GL}(\omega_{x}(\mathcal{M}))\).
3. is similar to (2).
#### 3.2.1. The case of ordinary \(p\)-divisible groups
We now review Chai's result on local and global monodromy of ordinary \(p\)-divisible groups. Let \(\mathscr{G}\) be an ordinary \(p\)-divisible group over \(X_{0}\), which is an extension of \(\mathscr{G}^{\mathrm{\acute{e}t}}\) by \(\mathscr{G}^{\mathrm{loc}}\). Write \(\mathcal{M}=\mathbb{D}(\mathscr{G})\), \(\mathcal{M}^{0}=\mathbb{D}(\mathscr{G}^{\mathrm{\acute{e}t}})\) and \(\mathcal{M}^{1}=\mathbb{D}(\mathscr{G}^{\mathrm{loc}})\), so \(\mathcal{M}\) admits a slope filtration with \(\operatorname{gr}\mathcal{M}=\mathcal{M}^{0}\oplus\mathcal{M}^{1}\).
As previously noted in SS2.3.1, we use \(\mu:\mathbb{G}_{m}\to\operatorname{GL}(\omega_{x}(\mathcal{M}))\) to denote the Hodge cocharacter of \(\mathcal{M}_{x}\). Since \(\mathscr{G}\) is ordinary, this coincides with the Newton cocharacter. As before, the notations \(U_{\operatorname{GL},\mu}\) and \(U_{\operatorname{GL},\mu^{-1}}\) denote the unipotent and the opposite unipotent of \(\mu\) in \(\operatorname{GL}(\omega_{x}(\mathcal{M}))\). Now,
we reconsider the Serre-Tate pairing (2.4.3):
\[q_{\mathscr{G}}:U_{\mathrm{GL},\mu^{-1}}(\mathbb{Z}_{p})\to\mathbb{G}_{m}^{\wedge }(X^{/x}).\]
Define \(N_{x}(\mathcal{M})=\ker(q_{\mathscr{G}})_{\mathbb{Q}_{p}}^{\perp}\), the subspace of \(U_{\mathrm{GL},\mu}(\mathbb{Q}_{p})\) which pairs to \(0\) with \(\ker(q_{\mathscr{G}})_{\mathbb{Q}_{p}}\). It can also be viewed as a \(\mathbb{Q}_{p}\)-unipotent subgroup of \(U_{\mathrm{GL},\mu}\).
**Theorem 3.2** (Chai).: _Notations as above. Identify \(G(\mathcal{M},x),G(\operatorname{gr}\mathcal{M},x),U(\mathcal{M},x)\) and their local counterparts as subgroups of \(\mathrm{GL}(\omega_{x}(\mathcal{M}))\) and regard \(N_{x}(\mathcal{M})\) as a \(\mathbb{Q}_{p}\)-unipotent subgroup of \(U_{\mathrm{GL},\mu}\). We have_
1. \(U(\mathcal{M}^{/x},x)=N_{x}(\mathcal{M})\) _and_ \(G(\mathcal{M}^{/x},x)=N_{x}(\mathcal{M})\rtimes\operatorname{im}\mu\)_, where_ \(\operatorname{im}\mu\) _acts on_ \(N_{x}(\mathcal{M})\) _via conjugation._
2. \(U(\mathcal{M},x)=N_{x}(\mathcal{M})\) _and_ \(G(\mathcal{M},x)=N_{x}(\mathcal{M})\rtimes G(\operatorname{gr}\mathcal{M},x)\)_, where_ \(G(\operatorname{gr}\mathcal{M},x)\) _acts on_ \(U(\mathcal{M},x)\) _via conjugation._
Proof.: Using Lemma 3.1, we deduce (1) from [10, Theorem 3.3] and (2) from [10, Theorem 4.4] (note that [10, Theorem 4.4] is originally stated for a variety over \(\mathbb{F}\), but the proof works for \(X_{0}\)). See also [11, Theorem 3.4.4] for a generalization of this theorem.
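For orientation, we record the two extreme (hypothetical) cases of Theorem 3.2. If \(\ker(q_{\mathscr{G}})_{\mathbb{Q}_{p}}=0\), then \(N_{x}(\mathcal{M})=U_{\mathrm{GL},\mu}(\mathbb{Q}_{p})\) and the local monodromy group is as large as Theorem 3.2 allows:
\[G(\mathcal{M}^{/x},x)=U_{\mathrm{GL},\mu,\mathbb{Q}_{p}}\rtimes\operatorname{im}\mu.\]
If instead \(q_{\mathscr{G}}\) is trivial, i.e. the restriction of \(\mathscr{G}\) to \(X^{/x}\) is the constant deformation of \(\mathscr{G}_{x}\), then \(N_{x}(\mathcal{M})=0\) and \(G(\mathcal{M}^{/x},x)=\operatorname{im}\mu\).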
### Monodromy of overconvergent \(F\)-isocrystals
We use the same setup and notation as in SS3.2. Denote by \(\mathbf{F}\)-\(\mathbf{Isoc}^{\dagger}(X_{0})\) the tensor Abelian category of overconvergent \(F\)-isocrystals over \(X_{0}\). There is a forgetful functor \(\operatorname{Fgt}:\mathbf{F}\)-\(\mathbf{Isoc}^{\dagger}(X_{0})\to\mathbf{F}\)-\(\mathbf{Isoc}(X_{0})\). Let \(\mathcal{M}^{\dagger}=(\mathcal{M}^{\dagger},F^{\dagger})\in\mathbf{F}\)-\(\mathbf{Isoc}^{\dagger}(X_{0})\) and let \(\mathcal{M}=(\mathcal{M},F)\) be its image in \(\mathbf{F}\)-\(\mathbf{Isoc}(X_{0})\) forgetting the overconvergent structure. Recall that \(e\) is the smallest positive integer such that the slopes of \(\mathcal{M}_{x}\) multiplied by \(e\) lie in \(\mathbb{Z}\). The fiber functor
\[\omega_{x}^{\dagger}=\omega_{x}\circ\operatorname{Fgt}:\langle\mathcal{M}^{ \dagger}\rangle_{\mathbb{Q}_{p^{e}}}^{\otimes}\to\operatorname{Vect}_{\mathbb{ Q}_{p^{e}}}\]
makes \(\langle\mathcal{M}^{\dagger}\rangle_{\mathbb{Q}_{p^{e}}}^{\otimes}\) a neutral Tannakian category. The Tannakian fundamental group arising in this way is called the _overconvergent monodromy group_ of \(\mathcal{M}^{\dagger}\) at \(x\), denoted \(G(\mathcal{M}^{\dagger},x)\). Note that we have \(G(\mathcal{M},x)\subseteq G(\mathcal{M}^{\dagger},x)\subseteq\mathrm{GL}( \omega_{x}(\mathcal{M}))\).
**Theorem 3.3** (D'Addezio).: _The following are true:_
1. _If_ \(\mathcal{M}^{\dagger}\) _admits a slope filtration, then_ \(G(\mathcal{M},x)\subseteq G(\mathcal{M}^{\dagger},x)\) _is the parabolic subgroup fixing the slope filtration of_ \(\mathcal{M}_{x}\)_._
2. _Suppose_ \(g:\mathscr{A}\to X_{0}\) _is an Abelian scheme. Let_ \(D^{\dagger}(\mathscr{A})=R^{1}g_{*,\mathrm{cris}}\mathcal{O}_{\mathscr{A}, \mathrm{cris}}\)_, then the overconvergent monodromy group_ \(G(D^{\dagger}(\mathscr{A}),x)\) _is reductive._
3. _Setup being the same as (2). If_ \(\mathcal{M}^{\dagger}\) _is an overconvergent_ \(F\)_-isocrystal in_ \(\langle D^{\dagger}(\mathscr{A})\rangle^{\otimes}\) _that has constant slopes, then_ \(G(\operatorname{gr}\mathcal{M},x)\) _is reductive._
Proof.:
1. follows from [11, Theorem 5.1.2].
2. is [11, Corollary 3.5.2].
3. Since \(G(D^{\dagger}(\mathscr{A}),x)\) is reductive by (2), the group \(G(\mathcal{M}^{\dagger},x)\) is also reductive. By (1), \(G(\mathcal{M},x)\) is the parabolic subgroup of \(G(\mathcal{M}^{\dagger},x)\) fixing the slope filtration on \(\mathcal{M}_{x}\). Since \(G(\operatorname{gr}\mathcal{M},x)\) is a quotient of \(G(\mathcal{M},x)\) by Lemma 3.1(2), it is also reductive.
### Monodromy of compatible systems
We refer to [11, SS3] and [12, SS2] for the notion of coefficient objects, companions and compatible systems. Roughly speaking, a coefficient object over \(X_{0}\) is an \(l\)-adic lisse sheaf or an overconvergent \(F\)-isocrystal. A coefficient object \(\mathcal{E}_{0}\) over \(X_{0}\) is equipped with a characteristic polynomial \(P(\mathcal{E}_{x_{0}},t)\) for every closed point \(x_{0}\in|X_{0}|\). Two coefficient objects are called companions if their characteristic polynomials at all closed points coincide. For a number field \(E\), together with a set of finite places \(\Sigma\) containing all places that do not divide \(p\), an \(E\)-compatible system \(\{\mathcal{E}_{\lambda,0}\}_{\lambda\in\Sigma}\) of coefficient objects is, roughly speaking, a
collection of coefficient objects \(\mathcal{E}_{\lambda,0}\) over \(E_{\lambda}\) indexed by \(\lambda\in\Sigma\), which are pairwise companions. In [10], D'Addezio proved that the monodromy groups of semi-simple coefficient objects in an \(E\)-compatible system are \(\lambda\)-independent. This means two things: (1) the \(\pi_{0}\) of the monodromy groups are \(\lambda\)-independent in the sense of [10, Theorem 1.2.4], and (2) the neutral components of the monodromy groups are \(\lambda\)-independent in the sense of [10, Theorem 1.2.1].
In this paper, we will mainly consider the coefficient objects that arise from the first \(l\)-adic and the first crystalline cohomology of an Abelian scheme \(\mathscr{A}_{0}\) over \(X_{0}\). We will use \(\operatorname{fpl}(E)\) to denote the set of finite places of \(E\). Let \(\mathcal{E}_{l,0}:=H^{1}_{l,\operatorname{\acute{e}t}}(\mathscr{A}_{0})[ \frac{1}{p}]\) (\(l\neq p\)) and \(\mathcal{E}_{p,0}:=H^{1}_{\operatorname{cris}}(\mathscr{A}_{0})[\frac{1}{p}]\). Then \(\{\mathcal{E}_{l,0}\}_{l\in\operatorname{fpl}(\mathbb{Q})-\{p\}}\) is a \(\mathbb{Q}\)-compatible system. Indeed, for a closed point \(x_{0}\) in \(X_{0}\), the characteristic polynomials \(P(\mathcal{E}_{l,0,x_{0}},t)\) are all equal to the characteristic polynomial \(P(\mathscr{A}_{0,x_{0}},t)\in\mathbb{Q}[t]\) defined by \(P(\mathscr{A}_{0,x_{0}},n)=\deg([n]_{\mathscr{A}_{0,x_{0}}}-F_{\mathscr{A}_{0,x_{0}}})\), where \(F_{\mathscr{A}_{0,x_{0}}}\) is the geometric Frobenius, see [11]. For the place \(p\), it is a bit more subtle. Indeed, the Frobenius that appears in \(H^{1}_{\operatorname{cris}}(\mathscr{A}_{0})[\frac{1}{p}]\) as an \(F\)-isocrystal is the absolute Frobenius. In order to fit \(\mathcal{E}_{p,0}:=H^{1}_{\operatorname{cris}}(\mathscr{A}_{0})[\frac{1}{p}]\) in a compatible system with various \(\mathcal{E}_{l,0}\)'s, we need to consider \(\mathcal{E}^{\prime}_{p,0}\), the \(F^{N}\)-isocrystal associated to \(H^{1}_{\operatorname{cris}}(\mathscr{A}_{0})[\frac{1}{p}]\), where \(N=\log_{p}q\). Then \(\{\mathcal{E}_{l,0}\}_{l\in\operatorname{fpl}(\mathbb{Q})-\{p\}}\cup\{ \mathcal{E}^{\prime}_{p,0}\}\) is a \(\mathbb{Q}\)-compatible system.
Nevertheless, we will still call the collection \(\{\mathcal{E}_{u,0}\}_{u\in\operatorname{fpl}(\mathbb{Q})}\) a _weakly compatible system_. In general, a collection of coefficient objects \(\{\mathcal{E}_{\lambda,0}\}_{\lambda\in\Sigma}\) will be termed a _weakly \(E\)-compatible system_, if for every \(\lambda\mid p\), after replacing \(\mathcal{E}_{\lambda,0}\) by an associated \(\mathcal{E}^{\prime}_{\lambda,0}\) obtained by raising the Frobenius to a power, the new collection \(\{\mathcal{E}_{\lambda,0}\}_{\lambda\in\Sigma,\lambda\nmid p}\cup\{\mathcal{E}^{\prime}_{\lambda,0}\}_{\lambda\in\Sigma,\lambda\mid p}\) is an \(E\)-compatible system.
**Lemma 3.4** (D'Addezio).: _If \(\{\mathcal{E}_{\lambda,0}\}_{\lambda\in\Sigma}\) is a weakly compatible system of semi-simple coefficient objects over \(X_{0}\), then the neutral components of the monodromy groups of \(\mathcal{E}_{\lambda,0}\) are \(\lambda\)-independent in the sense of [10, Theorem 1.2.1]. Furthermore, there is a finite etale cover \(X^{\prime}_{0}/X_{0}\), such that the monodromy groups of the elements in the pullback system \(\{\mathcal{E}_{\lambda,0}|_{X^{\prime}_{0}}\}_{\lambda\in\Sigma}\) are connected._
Proof.: Let \(\{\mathcal{E}_{\lambda,0}\}_{\lambda\in\Sigma,\lambda\nmid p}\cup\{\mathcal{E}^{\prime}_{\lambda,0}\}_{\lambda\in\Sigma,\lambda\mid p}\) be the \(E\)-compatible system obtained from \(\{\mathcal{E}_{\lambda,0}\}_{\lambda\in\Sigma}\) by replacing \(\mathcal{E}_{\lambda,0}\) where \(\lambda\mid p\) with \(\mathcal{E}^{\prime}_{\lambda,0}\), as in the definition. Up to base change, the monodromy groups of \(\mathcal{E}_{\lambda,0}\) and \(\mathcal{E}^{\prime}_{\lambda,0}\) (for \(\lambda\mid p\)) have the same neutral component. So by [10, Theorem 1.2.1], the neutral components of the monodromy groups of \(\mathcal{E}_{\lambda,0}\) are \(\lambda\)-independent. For the second assertion, we know from [10, Theorem 1.2.4] that the \(\pi_{0}\) of the monodromy groups for elements in \(\{\mathcal{E}_{\lambda,0}\}_{\lambda\in\Sigma,\lambda\nmid p}\) are \(\lambda\)-independent. Since there are only finitely many \(\lambda\) dividing \(p\), there is a finite etale cover \(X^{\prime}_{0}/X_{0}\) satisfying the assertion made in the lemma.
_Remark 3.5_.: As a consequence of [10, Theorem 1.2.1], the various \(l\)-adic etale monodromy groups and the crystalline monodromy group of an Abelian scheme over \(X_{0}\) have neutral components that are independent of the chosen place. This is a positive step towards the conjecture stated in the introduction of [11]. However, it is weaker, since [10, Theorem 1.2.1] does not guarantee the existence of a \(\mathbb{Q}\)-model for the various monodromy groups, but only a \(K\)-model for some number field \(K\).
## 4. Constructions and conjectures
In this section we introduce a reductive group \(\operatorname{MT}(f)\) for a product of \(\operatorname{GSpin}\) Shimura varieties, and use it to construct a special subvariety \(\mathcal{X}_{f}\). We also establish several basic properties of \(\operatorname{MT}(f)\) and \(\mathcal{X}_{f}\), which enable us to state the conjectures precisely.
### Constructions
Suppose that \(\mathbf{I}\) is a finite index set. For each \(i\in\mathbf{I}\), let \((L_{i},Q_{i})\) be an even quadratic \(\mathbb{Z}\)-lattice which is self dual at \(p\) and has signature \((2,b_{i})\), and \(H_{i}=\operatorname{Cl}(L_{i})\). For each \(i\), we choose a hyperspecial level structure \(\mathbb{K}_{i}\). As a result, for each \(i\), we obtain a \(\operatorname{GSpin}\) Shimura variety \(\mathcal{S}_{i}\). It has a canonical integral model \(\mathscr{S}_{i}\) over \(\mathbb{Z}_{(p)}\), together with a Kuga-Satake Abelian scheme
\(\mathscr{A}_{i}^{\rm KS}\) over \(\mathscr{S}_{i}\) and local systems \(\mathbf{H}_{i,\bullet},\mathbf{L}_{i,\bullet}\). Let \(R\) be a \(\mathbb{Z}\)-algebra. We will use the following notation conventions:
\[(L_{\mathbf{I}},Q_{\mathbf{I}}):=\bigoplus_{i\in\mathbf{I}}(L_{i},Q_{i}), \ H_{\mathbf{I}}=\bigoplus_{i\in\mathbf{I}}H_{i},\ \mathbf{T}:=Z(\mathrm{GL}(H_{\mathbf{I}}))\simeq\mathbb{G}_{m, \mathbb{Z}},\] \[\mathrm{Spin}^{\prime}(L_{\mathbf{I},R}):=\prod_{i\in\mathbf{I} }\mathrm{Spin}(L_{i,R}),\ \mathrm{SO}^{\prime}(L_{\mathbf{I},R}):=\prod_{i\in\mathbf{I}} \mathrm{SO}(L_{i,R}),\] \[\mathrm{GSpin}^{\prime}(L_{\mathbf{I},R})=\mathbf{T}_{R}\cdot \mathrm{Spin}^{\prime}(L_{\mathbf{I},R})\subseteq\prod_{i\in\mathbf{I}} \mathrm{GSpin}(L_{i,R}). \tag{4.1.1}\]
Let \(\mathscr{S}_{\mathbf{I}}\) and \(\mathcal{S}_{\mathbf{I}}\) be the product \(\prod_{i\in\mathbf{I}}\mathscr{S}_{i}\) and \(\prod_{i\in\mathbf{I}}\mathcal{S}_{i}\), respectively. Furthermore, we also denote by \(\mathscr{A}_{\mathbf{I}}^{\rm KS}\), \(\mathbf{H}_{\mathbf{I},\bullet}\), \(\mathbf{L}_{\mathbf{I},\bullet}\) the products over \(\mathbf{I}\) of the Kuga-Satake Abelian schemes and the corresponding local systems.
#### 4.1.1. The group \(\mathrm{MT}(f)\)
Suppose that \(X\) is a smooth and connected variety over \(\mathbb{F}\) with a morphism \(f\) into \(\mathscr{S}_{\mathbf{I},\mathbb{F}}^{\rm ord}\). Let \(X_{0}/\mathbb{F}_{q}\) be a finite field model of \(X\) such that \(f\) is the base change of a morphism \(f_{0}:X_{0}\to\mathscr{S}_{\mathbf{I},\mathbb{F}_{p}}\). Let \(x\in X(\mathbb{F})\) and \(\tilde{x}\) be the canonical lift of \(x\), which lies in \(\mathscr{S}_{\mathbf{I}}(W)\) (cf. [20, Proposition 2.5]). Consider \(\tilde{x}_{\mathbb{C}}\), the base change of \(\tilde{x}\) to \(\mathbb{C}\) along the embedding \(W\hookrightarrow\mathbb{C}\) (SS1.6). We fix an identification \(\alpha:H_{\mathbf{I}}\simeq\mathbf{H}_{\mathbf{I},\mathbb{B},\tilde{x}_{ \mathbb{C}}}\). It gives rise to a cocharacter \(h_{\tilde{x}_{\mathbb{C}}}:\mathrm{Res}_{\mathbb{C}/\mathbb{R}}\,\mathbb{G}_{ m}\to\mathrm{GSpin}^{\prime}(L_{\mathbf{I},\mathbb{R}})\) corresponding to the Hodge structure of \(\mathbb{H}_{\mathbf{I},\mathbb{B},\tilde{x}_{\mathbb{C}}}\).
Let \(\eta\) be the generic point of \(X\). We fix an algebraic closure \(\overline{\eta}/\eta\). By Lemma 4.2 below, there is a connected etale cover \(X^{\prime}/X\) and a morphism \(\overline{\eta}\to X^{\prime}\), such that \(\mathrm{End}^{0}(\mathscr{A}_{\mathbf{I},X^{\prime}}^{\rm KS})\to\mathrm{End}^ {0}(\mathscr{A}_{\mathbf{I},\overline{\eta}}^{\rm KS})\) is an isomorphism. Let \(x^{\prime}\) be any point over \(x\). We have
\[\begin{split}\mathrm{End}^{0}(\mathscr{A}_{\mathbf{I},\overline{ \eta}}^{\rm KS})\simeq\mathrm{End}^{0}(\mathscr{A}_{\mathbf{I},X^{\prime}}^{ \rm KS})\subseteq\mathrm{End}^{0}(\mathscr{A}_{\mathbf{I},x^{\prime}}^{\rm KS}) =&\mathrm{End}^{0}(\mathscr{A}_{\mathbf{I},x}^{\rm KS}) \\ \subseteq&\mathrm{End}^{0}(\mathscr{A}_{\mathbf{I}, \tilde{x}_{\mathbb{C}}}^{\rm KS})=\mathrm{End}_{h_{\tilde{x}_{\mathbb{C}}}}(H_{ \mathbf{I},\mathbb{Q}})\subseteq\mathrm{End}(H_{\mathbf{I},\mathbb{Q}}).\end{split} \tag{4.1.2}\]
The embedding \(\mathrm{End}^{0}(\mathscr{A}_{\mathbf{I},\overline{\eta}}^{\rm KS})\subseteq \mathrm{End}^{0}(\mathscr{A}_{\mathbf{I},x}^{\rm KS})\) resulting from the first line of (4.1.2) is independent of the choices made. Let \(\mathrm{MT}(f)\) be the connected component of the commutant of \(\mathrm{End}^{0}(\mathscr{A}_{\mathbf{I},\overline{\eta}}^{\rm KS})\) in \(\mathrm{GSpin}^{\prime}(L_{\mathbf{I},\mathbb{Q}})\), and \(\mathrm{Hdg}(f)\) be the image of \(\mathrm{MT}(f)\) in \(\mathrm{SO}^{\prime}(L_{\mathbf{I},\mathbb{Q}})\). These groups are equipped with obvious representations
\[\rho_{f}:\mathrm{MT}(f)\hookrightarrow\mathrm{GL}(H_{\mathbf{I}, \mathbb{Q}}),\] \[\varrho_{f}:\mathrm{Hdg}(f)\hookrightarrow\mathrm{GL}(L_{\mathbf{ I},\mathbb{Q}}).\]
We have chosen the notation \(\mathrm{MT}(f)\) and \(\mathrm{Hdg}(f)\) because \(\mathrm{MT}(f)\) is an analogue of the Mumford-Tate group and \(\mathrm{Hdg}(f)\) is an analogue of the Hodge group.
**Lemma 4.1**.: \(\mathrm{MT}(f)\) _and \(\mathrm{Hdg}(f)\) are reductive groups over \(\mathbb{Q}\)._
Proof.: It suffices to show the assertion for \(\mathrm{MT}(f)\). Let \(e_{1},e_{2},...,e_{n}\in\mathrm{End}^{0}(\mathscr{A}_{\mathbf{I},\overline{\eta}}^{\rm KS})\) be generators of \(\mathrm{End}^{0}(\mathscr{A}_{\mathbf{I},\overline{\eta}}^{\rm KS})\) over \(\mathbb{Q}\). Then \(\mathrm{MT}(f)\) is the neutral component of the \(\mathbb{Q}\)-subgroup of \(\mathrm{GSpin}^{\prime}(L_{\mathbf{I},\mathbb{Q}})\) that commutes with \(e_{1},e_{2},...,e_{n}\). Let \(\theta=\mathrm{ad}\,h_{\tilde{x}_{\mathbb{C}}}(i)\) be the Cartan involution of \(\mathrm{GSpin}^{\prime}(L_{\mathbf{I},\mathbb{C}})\), so that \(\mathrm{GSpin}^{\prime}(L_{\mathbf{I},\mathbb{C}})^{(\theta)}=\{g\in\mathrm{GSpin}^{\prime}(L_{\mathbf{I},\mathbb{C}})|\theta(\overline{g})=g\}\) is a compact real Lie group. Since \(h_{\tilde{x}_{\mathbb{C}}}\) factors through \(\mathrm{MT}(f)\), it makes sense to talk about \(\mathrm{MT}(f)(\mathbb{C})^{(\theta)}\), which is a subgroup of \(\mathrm{GSpin}^{\prime}(L_{\mathbf{I},\mathbb{C}})^{(\theta)}\) consisting of elements that commute with \(e_{1},e_{2},...,e_{n}\). Since commuting with each \(e_{i}\) imposes a closed condition on \(\mathrm{GSpin}^{\prime}(L_{\mathbf{I},\mathbb{C}})^{(\theta)}\), \(\mathrm{MT}(f)(\mathbb{C})^{(\theta)}\) is compact. Therefore \(\mathrm{MT}(f)\) is reductive.
The following is a useful lemma on extending endomorphisms of Abelian schemes. It implies that \(\mathrm{End}^{0}(\mathscr{A}_{\mathbf{I},\overline{\eta}}^{\rm KS})\simeq \mathrm{End}^{0}(\mathscr{A}_{\mathbf{I},X^{\prime}}^{\rm KS})\) for a connected finite etale cover \(X^{\prime}/X\).
**Lemma 4.2**.: _Suppose that \(\mathscr{A}\) is an Abelian scheme over a Noetherian integral scheme \(S\). Let \(F\) be a field extension of \(K(S)\) and \(w\in\operatorname{End}(\mathscr{A}_{F})\). Then \(w\) uniquely extends to a finite integral cover \(S^{\prime}/S\) with \(K(S^{\prime})\subseteq F\). If \(S\) is normal, then one can further require \(S^{\prime}\) to be an etale cover._
Proof.: If \(S\) is a DVR and \(F=K(S)\), this is a special case of [20, Corollaire IX 1.4]. Consider the group homomorphism scheme \(\operatorname{Hom}_{\operatorname{gp}}(\mathscr{A})\), which is locally of finite type over \(S\). Using the valuative criterion and the known case of DVRs, we find that \(\operatorname{Hom}_{\operatorname{gp}}(\mathscr{A})\) is proper over \(S\). Since an endomorphism of an Abelian variety over a field \(k\) admits at most one deformation to any Artin local thickening of \(k\), we see that \(\operatorname{Hom}_{\operatorname{gp}}(\mathscr{A})\) is formally unramified, hence unramified, over \(S\). Therefore \(\operatorname{Hom}_{\operatorname{gp}}(\mathscr{A})\) is finite over \(S\). If \(S\) is normal, then any component of \(\operatorname{Hom}_{\operatorname{gp}}(\mathscr{A})\) that dominates \(S\) is flat (cf. [1, Theorem 18.10.1]), hence etale. Note that \(w\) corresponds to an \(F\)-point of \(\operatorname{Hom}_{\operatorname{gp}}(\mathscr{A})\) dominating \(S\). We then let \(S^{\prime}\) be the irreducible component of \(\operatorname{Hom}_{\operatorname{gp}}(\mathscr{A})\) containing the image of this \(F\)-point.
#### 4.1.2. Relation with monodromy groups
The groups \(\operatorname{MT}(f)\) and \(\operatorname{Hdg}(f)\) give upper bounds for the etale and crystalline monodromy groups. To state it, we make several identifications of the fibers.
Let \(u\) be a finite place of \(\mathbb{Q}\). Recall from SS3 that \(\omega_{x}(\mathbb{H}_{\mathbf{I},u})\) is a \(\mathbb{Q}_{u}\)-space. The etale-Betti and crystalline-de Rham-Betti comparison isomorphisms yield canonical identifications \(\mathbf{H}_{\mathbf{I},\operatorname{B},\tilde{x}_{\mathbb{C}}}\otimes \mathbb{Q}_{u}\simeq\omega_{x}(\mathbb{H}_{\mathbf{I},u})\). Composing them with the base changes to \(\mathbb{Q}_{u}\) of the already fixed identification \(\alpha:H_{\mathbf{I}}\simeq\mathbf{H}_{\mathbf{I},\operatorname{B},\tilde{x}_ {\mathbb{C}}}\), we have identifications \(\alpha_{u}:H_{\mathbf{I},\mathbb{Q}_{u}}\simeq\omega_{x}(\mathbb{H}_{\mathbf{I },u})\). In a similar and compatible manner, we also have identifications \(\alpha_{u}^{\prime}:L_{\mathbf{I},\mathbb{Q}_{u}}\simeq\omega_{x}(\mathbb{L}_{ \mathbf{I},u})\).
These enable us to regard the monodromy group \(G(\mathbb{H}_{\mathbf{I},u,X_{0}},x)\)_resp._\(G(\mathbb{L}_{\mathbf{I},u,X_{0}},x)\) of the arithmetic local system \(\mathbb{H}_{\mathbf{I},u,X_{0}}\)_resp._\(\mathbb{L}_{\mathbf{I},u,X_{0}}\) as a subgroup of \(\operatorname{GL}(H_{\mathbf{I},\mathbb{Q}_{u}})\)_resp._\(\operatorname{GL}(L_{\mathbf{I},\mathbb{Q}_{u}})\), so that the standard representation of the latter group restricts to the monodromy representation of the former group. We also identify \(\operatorname{MT}(f)\)_resp._\(\operatorname{Hdg}(f)\) as a subgroup of \(\operatorname{GL}(H_{\mathbf{I},\mathbb{Q}})\)_resp._\(\operatorname{GL}(L_{\mathbf{I},\mathbb{Q}})\) via \(\rho_{f}\)_resp._\(\varrho_{f}\). Therefore, \(\operatorname{MT}(f)_{\mathbb{Q}_{u}}\) and \(G(\mathbb{H}_{\mathbf{I},u,X_{0}},x)\)_resp._\(\operatorname{Hdg}(f)_{\mathbb{Q}_{u}}\) and \(G(\mathbb{L}_{\mathbf{I},u,X_{0}},x)\) are both subgroups of \(\operatorname{GL}(H_{\mathbf{I},\mathbb{Q}_{u}})\)_resp._\(\operatorname{GL}(L_{\mathbf{I},\mathbb{Q}_{u}})\), so it makes sense to compare them.
**Lemma 4.3**.: _Notation as above, we have \(G(\mathbb{H}_{\mathbf{I},u,X_{0}},x)^{\circ}\subseteq\operatorname{MT}(f)_{ \mathbb{Q}_{u}}\) and \(G(\mathbb{L}_{\mathbf{I},u,X_{0}},x)^{\circ}\subseteq\operatorname{Hdg}(f)_{ \mathbb{Q}_{u}}\)._
Proof.: It suffices to show the first assertion. Possibly passing to a finite etale cover of \(X_{0}\), we can assume that \(\operatorname{End}^{0}(\mathscr{A}_{\mathbf{I},\overline{\eta}}^{\text{KS}})=\operatorname{End}^{0}(\mathscr{A}_{\mathbf{I},X_{0}}^{\text{KS}})\) and \(G(\mathbb{H}_{\mathbf{I},u,X_{0}},x)=G(\mathbb{H}_{\mathbf{I},u,X_{0}},x)^{\circ}\). Recall from (4.1.2) that we identify \(\operatorname{End}^{0}(\mathscr{A}_{\mathbf{I},X_{0}}^{\text{KS}})\) as a subalgebra of \(\operatorname{End}(H_{\mathbf{I},\mathbb{Q}})\), and \(\operatorname{MT}(f)\) is the connected component of the commutant of \(\operatorname{End}^{0}(\mathscr{A}_{\mathbf{I},X_{0}}^{\text{KS}})\) in \(\operatorname{GSpin}^{\prime}(L_{\mathbf{I},\mathbb{Q}})\). Using \(\alpha_{u}\) as discussed above, we can identify \(G(\mathbb{H}_{\mathbf{I},u,X_{0}},x)\) as a subgroup of \(\operatorname{GL}(H_{\mathbf{I},\mathbb{Q}_{u}})\). Note that \(G(\mathbb{H}_{\mathbf{I},u,X_{0}},x)\) furthermore lies in \(\operatorname{GSpin}^{\prime}(L_{\mathbf{I},\mathbb{Q}_{u}})\). As multiplicative subsets of \(\operatorname{End}^{0}(H_{\mathbf{I},\mathbb{Q}_{u}})\), \(G(\mathbb{H}_{\mathbf{I},u,X_{0}},x)\) and \(\operatorname{End}^{0}(\mathbb{H}_{\mathbf{I},u,X_{0}})\) commute, hence \(G(\mathbb{H}_{\mathbf{I},u,X_{0}},x)\) commutes with \(\operatorname{End}^{0}(\mathscr{A}_{\mathbf{I},X_{0}}^{\text{KS}})_{\mathbb{Q}_{u}}\subseteq\operatorname{End}^{0}(\mathbb{H}_{\mathbf{I},u,X_{0}})\). In other words, \(G(\mathbb{H}_{\mathbf{I},u,X_{0}},x)\) is contained in the commutant of \(\operatorname{End}^{0}(\mathscr{A}_{\mathbf{I},X_{0}}^{\text{KS}})_{\mathbb{Q}_{u}}\). This implies that \(G(\mathbb{H}_{\mathbf{I},u,X_{0}},x)^{\circ}\subseteq\operatorname{MT}(f)_{\mathbb{Q}_{u}}\).
#### 4.1.3. The Shimura variety \(\mathcal{X}_{f}\)
As a consequence of Lemma 4.1, \(\operatorname{MT}(f)\) gives rise to a Shimura subvariety \(\mathcal{X}_{f}\) of \(\mathcal{S}_{\mathbf{I}}\) with level structure \(\operatorname{MT}(f)\cap\prod_{i\in\mathbf{I}}\mathbb{K}_{i}\). We denote its reflex field by \(\mathbf{E}\). \(\mathbf{E}\) contains a place \(\mathbf{p}\) determined by the identification \(\mathbb{C}\simeq\overline{\mathbb{Q}}_{p}\) (cf. SS1.6). Let \(\mathscr{X}_{f}\) be the Zariski closure of \(\mathcal{X}_{f}\) in \(\mathscr{S}_{\mathbf{I}}\times_{\mathbb{Z}_{(p)}}O_{\mathbf{E},(\mathbf{p})}\). Since \(h_{\tilde{x}_{\mathbb{C}}}\) factors through \(\operatorname{MT}(f)\), we have \(\tilde{x}\in\mathscr{X}_{f}(W)\) and \(x\in\mathscr{X}_{f}(\mathbb{F})\). We denote by \(\mathcal{X}_{f}^{+}\)_resp._\(\mathscr{X}_{f}^{+}\) the component of \(\mathcal{X}_{f}\)_resp._\(\mathscr{X}_{f}\) that contains \(\tilde{x}_{\mathbb{C}}\)_resp._\(\tilde{x}\). Let \(\mathscr{X}_{f,W}^{/x}\) be the completion of \(\mathscr{X}_{f,W}\) at \(x\). It follows from Noot's result [19, Theorem 3.7] that
1. \(\mathscr{X}_{f,W}^{/x}\) is a finite union of torsion translates of subtori of the Serre-Tate torus \(\mathscr{S}_{\mathbf{I},W}^{/x}\).
2. \(\mathscr{X}_{f,W}^{/x,+}\), the irreducible component of \(\mathscr{X}_{f,W}^{/x}\) that contains \(\tilde{x}\), is a formal subtorus of \(\mathscr{S}_{\mathbf{I},W}^{/x}\).
3. \(\mathscr{X}_{f,W}^{/x}\) is flat over \(W\).
In our case, since \(\mathcal{X}_{f}\) is cut out by certain elements in \(\operatorname{End}^{0}(\mathscr{A}^{\text{KS}}_{\mathbf{I},\tilde{x}_{\mathbb{C}}})\), one can say more:
**Lemma 4.4**.: _The following are true:_
1. _Any irreducible component of_ \(\mathscr{X}^{/x}_{f,W}\) _is a torsion translate of_ \(\mathscr{X}^{/x,+}_{f,W}\)_. In particular,_ \(\mathscr{X}^{/x}_{f,\mathbb{F},\operatorname{red}}\) _equals_ \(\mathscr{X}^{/x,+}_{f,\mathbb{F}}\)_, and is a subtorus of_ \(\mathscr{S}^{/x}_{\mathbf{I},\mathbb{F}}\) _with rank_ \(\dim\mathcal{X}_{f}\)_._
2. \(\operatorname{End}(\mathscr{A}^{\text{KS}}_{\mathbf{I},\overline{\eta}})\) _deforms to_ \(\mathscr{X}^{/x,+}_{f,W}\)_._
3. \(f\) _factors through_ \(\mathscr{X}_{f,\mathbb{F}}\)_._
Proof.: For a lattice \(\Gamma\subseteq\Gamma_{0}=\operatorname{End}(\mathscr{A}^{\text{KS}}_{ \mathbf{I},\overline{\eta}})\) of finite index, let \(\mathscr{D}_{\Gamma}=\operatorname{Def}(\Gamma/\mathscr{S}^{/x}_{\mathbf{I},W}) \subseteq\mathscr{S}^{/x}_{\mathbf{I},W}\). This is the obvious analogue of (2.3.2) for products of Shimura varieties.
We first study the structure of \(\mathscr{D}_{\Gamma}\). Let \(\Lambda=\prod_{i\in\mathbf{I}}U_{\operatorname{GSpin},\mu_{i}^{-1}}(\mathbb{Z}_{p})\). It follows from Proposition 2.5 and Serre-Tate theory that there exists a \(\mathbb{Z}_{p}\)-sublattice \(\Lambda_{f}\subseteq\Lambda\) such that \(\mathscr{D}_{\Gamma}\simeq\operatorname{Hom}_{\mathbb{Z}_{p}}(\Lambda/\Lambda_{f},\mathbb{G}_{m}^{\wedge})\). Let \(\overline{\Lambda}_{f}\) be the saturation of \(\Lambda_{f}\). We see that \(\Lambda/\Lambda_{f}\) decomposes as a direct sum of the free \(\mathbb{Z}_{p}\)-module \(\Lambda/\overline{\Lambda}_{f}\) and the torsion \(\mathbb{Z}_{p}\)-module \(\overline{\Lambda}_{f}/\Lambda_{f}\). It follows that
\[\mathscr{D}_{\Gamma}\simeq\operatorname{Hom}_{\mathbb{Z}_{p}}(\Lambda/ \overline{\Lambda}_{f},\mathbb{G}_{m}^{\wedge})\times\operatorname{Hom}_{ \mathbb{Z}_{p}}(\overline{\Lambda}_{f}/\Lambda_{f},\mathbb{G}_{m}^{\wedge}).\]
Since \(\Gamma\) is of finite index in \(\Gamma_{0}\), \(\mathscr{D}_{\Gamma}^{+}:=\operatorname{Hom}_{\mathbb{Z}_{p}}(\Lambda/\overline{\Lambda}_{f},\mathbb{G}_{m}^{\wedge})\) is a formal torus which is independent of \(\Gamma\). Indeed, we always have \(\mathscr{D}_{\Gamma}^{+}=\mathscr{D}_{\Gamma_{0}}^{+}\). On the other hand, \(\operatorname{Hom}_{\mathbb{Z}_{p}}(\overline{\Lambda}_{f}/\Lambda_{f},\mathbb{G}_{m}^{\wedge})\) is a finite flat group scheme. Therefore every irreducible component of \(\mathscr{D}_{\Gamma}\) is a torsion translate of \(\mathscr{D}_{\Gamma}^{+}\), and \(\mathscr{D}_{\Gamma}\) is flat over \(W\). Furthermore, all irreducible components of \(\mathscr{D}_{\Gamma}\) reduce to the same torus over \(\mathbb{F}\).
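As a toy illustration of this decomposition (with a hypothetical \(\Lambda_{f}\), not one arising from an actual \(\Gamma\)): if \(\Lambda\simeq\mathbb{Z}_{p}^{2}\) and \(\Lambda_{f}=p\mathbb{Z}_{p}\oplus 0\), then \(\overline{\Lambda}_{f}=\mathbb{Z}_{p}\oplus 0\), and \[\operatorname{Hom}_{\mathbb{Z}_{p}}(\Lambda/\Lambda_{f},\mathbb{G}_{m}^{\wedge})\simeq\operatorname{Hom}_{\mathbb{Z}_{p}}(\mathbb{Z}_{p},\mathbb{G}_{m}^{\wedge})\times\operatorname{Hom}_{\mathbb{Z}_{p}}(\mathbb{Z}/p,\mathbb{G}_{m}^{\wedge})\simeq\mathbb{G}_{m}^{\wedge}\times\mu_{p},\] a one-dimensional formal torus times a finite flat group scheme of order \(p\), whose reduced special fiber is the formal torus \(\mathbb{G}_{m,\mathbb{F}}^{\wedge}\).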
Since \(\mathcal{X}_{f}\) is defined by \(\operatorname{MT}(f)\), there is a sufficiently small sublattice \(\Gamma\), which is of finite index in \(\Gamma_{0}\), such that \(\Gamma\) deforms to \(\mathscr{X}^{/x}_{f,W}\). In other words, \(\mathscr{X}^{/x}_{f,W}\subseteq\mathscr{D}_{\Gamma}\), whence \(\mathscr{X}^{/x,+}_{f,W}\subseteq\mathscr{D}_{\Gamma}^{+}=\mathscr{D}_{\Gamma_{0}}^{+}\subseteq\mathscr{D}_{\Gamma_{0}}\). So we have (2). Furthermore, the generic fibers of \(\mathscr{X}^{/x,+}_{f,W}\) and \(\mathscr{D}_{\Gamma}^{+}\) have the same dimension. Since \(\mathscr{X}^{/x,+}_{f,W}\) and \(\mathscr{D}_{\Gamma}^{+}\) are both formal tori, we have \(\mathscr{X}^{/x,+}_{f,W}=\mathscr{D}_{\Gamma}^{+}\). Hence every irreducible component of \(\mathscr{X}^{/x}_{f,W}\) is an irreducible component of \(\mathscr{D}_{\Gamma}\), and is a torsion translate of \(\mathscr{X}^{/x,+}_{f,W}\). Therefore we have (1). Note that \(\operatorname{rk}\mathscr{X}^{/x}_{f,\mathbb{F},\operatorname{red}}=\dim\mathcal{X}_{f}\) follows simply from flatness.
Finally, \(f^{/x}\) factors through \(\mathscr{D}_{\Gamma,\mathbb{F}}\) because \(\Gamma\) deforms to \(X^{/x}\). Since \(X^{/x}\) is smooth, \(f^{/x}\) factors through \(\mathscr{D}_{\Gamma,\mathbb{F},\operatorname{red}}=\mathscr{X}^{/x}_{f,\mathbb{F},\operatorname{red}}\). Therefore \(f\) factors through \(\mathscr{X}_{f,\mathbb{F}}\). This proves (3).
### Conjectures, implications and results
In the following we state the conjectures for products of \(\operatorname{GSpin}\) Shimura varieties. Let \(X,X_{0},f,x\) be the same as in SS4.1.1.
**Conjecture 4.5**.: _Let \(x\in X(\mathbb{F})\), and \(\mathscr{T}_{f,x}\) be the smallest formal subtorus of the Serre-Tate torus \(\mathscr{S}^{/x}_{\mathbf{I},\mathbb{F}}\) through which the morphism \(f^{/x}:X^{/x}\to\mathscr{S}^{/x}_{\mathbf{I}}\) factors. Then \(\mathscr{X}^{/x}_{f,\mathbb{F},\operatorname{red}}=\mathscr{T}_{f,x}\)._
**Conjecture 4.6** (Tate-linear conjecture).: _Suppose \(f\) is a locally closed immersion, i.e., \(X\) is a subvariety of \(\mathscr{S}^{\operatorname{ord}}_{\mathbf{I},\mathbb{F}}\). If \(X\) is Tate-linear at \(x\in X(\mathbb{F})\), then \(X\) is special._
**Conjecture 4.7** (Characteristic \(p\) analogue of the Mumford-Tate conjecture for ordinary strata of products of \(\operatorname{GSpin}\) Shimura varieties).: \(\operatorname{Hdg}(f)\) _coincides with the generic Hodge group \(\operatorname{Hdg}(\mathcal{X}_{f})\) of the local system \(\mathbb{L}_{\mathbf{I},\mathbf{B},\mathcal{X}^{+}_{f}}\). Moreover, for every \(u\in\operatorname{fpl}(\mathbb{Q})\), the inclusion \(G(\mathbb{L}_{\mathbf{I},u,X_{0}},x)^{\circ}\subseteq\operatorname{Hdg}(f)^{\circ} _{\mathbb{Q}_{u}}\) in Lemma 4.3 is an equality._
**Conjecture 4.8** (Characteristic \(p\) analogue of the Andre-Oort conjecture for ordinary strata of products of \(\operatorname{GSpin}\) Shimura varieties).: _Suppose \(f\) is a locally closed immersion, i.e., \(X\) is a subvariety of \(\mathscr{S}^{\operatorname{ord}}_{\mathbf{I},\mathbb{F}}\). Let \(\boldsymbol{A}\) be a collection of special subvarieties on \(X\) and \(\mathbf{I}_{\boldsymbol{A}}\subseteq\mathbf{I}\) be the set of indices \(i\) such that \(\boldsymbol{A}\) contains a Zariski dense collection of special subvarieties whose projections to \(\mathscr{S}_{i,\mathbb{F}}\) are positive dimensional. Then_
1. _For_ \(i\in\mathbf{I}_{\mathbf{A}}\)_, the projection of_ \(X\) _to_ \(\mathscr{S}_{i,\mathbb{F}}\) _is a special subvariety._
2. _Decompose the special subvarieties from (_1_) into simple factors, and write_ \(\{\mathscr{Y}_{j,\mathbb{F}}\}_{j\in\mathbf{J}}\) _for the collection of simple factors. Let_ \(\mathbf{J}_{\mathbf{A}}\subseteq\mathbf{J}\) _be the set of indices_ \(j\) _such that_ \(\mathbf{A}\) _contains a Zariski dense collection of special subvarieties whose projections to_ \(\mathscr{Y}_{j,\mathbb{F}}\) _are positive dimensional. Then_ \(X\) _is the product of a special subvariety of_ \(\mathscr{Y}_{\mathbf{J}_{\mathbf{A}},\mathbb{F}}\) _and a subvariety of_ \(\mathscr{S}_{\mathbf{I}-\mathbf{I}_{\mathbf{A}},\mathbb{F}}\times\mathscr{Y}_{ \mathbf{J}-\mathbf{J}_{\mathbf{A}},\mathbb{F}}\)_._
**Proposition 4.9** (Implications between various conjectures).: _The following are true:_
1. _Conjecture_ 4.5__\(\Rightarrow\) _Conjecture_ 4.6_._
2. _Conjecture_ 4.5__\(+\) _Conjecture_ 4.7__\(\Rightarrow\) _Conjecture_ 1.1 _when the morphism_ \(f:X\to\mathcal{A}_{g,\mathbb{F}}\) _in Conjecture_ 1.1 _factors through_ \(\mathscr{S}_{\mathbf{I},\mathbb{F}}\)_._
3. _Conjecture_ 4.8__\(\Rightarrow\) _Conjecture_ 1.14_. Furthermore, Conjecture_ 4.8_(2) is trivially true when_ \(\#\mathbf{I}=1\) _and Conjecture_ 4.8_(1) is trivially true when each_ \(\mathscr{S}_{i}\) _is a modular curve._
Proof.:
1. Let \(X\) be as in Conjecture 4.6. Conjecture 4.5 implies the existence of a Shimura subvariety \(\mathcal{X}_{f}\) such that \(\mathscr{X}_{f,\mathbb{F},\text{red}}^{/x}=\mathscr{T}_{f,x}\). Therefore \(\overline{X}\) is an irreducible component of \(\mathscr{X}_{f,\mathbb{F}}\), hence special.
2. Suppose Conjecture 4.5 is true. We first show that \(\mathcal{X}_{f}\) is the _smallest_ special subvariety of \(\mathcal{S}_{\mathbf{I}}\) whose mod \(p\) reduction contains the image of \(f\). Here _smallest_ means that if there is another special subvariety \(\mathcal{Y}\) whose mod \(p\) reduction contains the image of \(f\), then up to connected components and etale covers, \(\mathcal{X}_{f}\) is contained in \(\mathcal{Y}\). Let \(\mathcal{Y}\) be a Shimura subvariety of \(\mathcal{S}_{\mathbf{I}}\) such that \(f\) factors through the Zariski closure \(\mathscr{Y}\) of \(\mathcal{Y}\) in \(\mathscr{S}_{\mathbf{I}}\). Then [11, Theorem 3.7] implies that \(\mathscr{Y}_{W}^{/x}\) contains \(\tilde{x}\), and is a union of torsion translates of formal subtori. Let \(\mathscr{Y}_{W}^{/x,+}\) be the irreducible component that contains \(\tilde{x}\). We must have \(\mathscr{T}_{f,x}\subseteq\mathscr{Y}_{\mathbb{F}}^{/x,+}\). By Conjecture 4.5, we have \(\mathscr{X}_{f,\mathbb{F}}^{/x,+}=\mathscr{T}_{f,x}\subseteq\mathscr{Y}_{\mathbb{F}}^{/x,+}\). This implies that \(\mathscr{X}_{f,W}^{/x,+}\subseteq\mathscr{Y}_{W}^{/x,+}\). As a result, \(\mathcal{X}_{f}\) is the smallest Shimura subvariety whose mod \(p\) reduction contains the image of \(f\). Now assume Conjecture 4.7. To establish Conjecture 1.1 in the case of products of \(\operatorname{GSpin}\) Shimura varieties, we need to show that (a) \(\operatorname{MT}(f)\) is the generic Mumford-Tate group \(\operatorname{MT}(\mathcal{X}_{f})\) of the local system \(\mathbb{H}_{\mathbf{I},\mathbf{B},\mathcal{X}_{f}^{+}}\) and (b) the inclusion \(G(\mathbb{H}_{\mathbf{I},u,X_{0}},x)^{\circ}\subseteq\operatorname{MT}(f)_{\mathbb{Q}_{u}}^{\circ}\) is an equality. By the definitions of \(\mathcal{X}_{f}\) and \(\operatorname{MT}(f)\), we have \(\operatorname{MT}(\mathcal{X}_{f})\subseteq\operatorname{MT}(f)\). Let \(K_{0}\)_resp._\(K_{1}\)_resp._\(K_{2}\) be the kernel of the map \(\operatorname{MT}(\mathcal{X}_{f})\to\operatorname{Hdg}(\mathcal{X}_{f})\)_resp._\(\operatorname{MT}(f)\to\operatorname{Hdg}(f)\)_resp._\(\operatorname{GSpin}^{\prime}(L_{\mathbf{I},\mathbb{Q}})\to\operatorname{SO}^{\prime}(L_{\mathbf{I},\mathbb{Q}})\). Recall that \(\mathbf{T}_{\mathbb{Q}}\) is the center of \(\mathrm{GL}(H_{\mathbf{I},\mathbb{Q}})\). The following diagram exhibits the relations between various groups:
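Schematically, and assuming as usual that each Hodge group is the image of the corresponding Mumford-Tate group, these relations amount to the exact rows \[\begin{split}1&\to K_{0}\to\operatorname{MT}(\mathcal{X}_{f})\to\operatorname{Hdg}(\mathcal{X}_{f})\to 1,\\ 1&\to K_{1}\to\operatorname{MT}(f)\to\operatorname{Hdg}(f)\to 1,\\ 1&\to K_{2}\to\operatorname{GSpin}^{\prime}(L_{\mathbf{I},\mathbb{Q}})\to\operatorname{SO}^{\prime}(L_{\mathbf{I},\mathbb{Q}})\to 1,\end{split}\] together with the vertical inclusions \(\operatorname{MT}(\mathcal{X}_{f})\subseteq\operatorname{MT}(f)\subseteq\operatorname{GSpin}^{\prime}(L_{\mathbf{I},\mathbb{Q}})\), \(\operatorname{Hdg}(\mathcal{X}_{f})\subseteq\operatorname{Hdg}(f)\subseteq\operatorname{SO}^{\prime}(L_{\mathbf{I},\mathbb{Q}})\) and \(\mathbf{T}_{\mathbb{Q}}\subseteq K_{0}\subseteq K_{1}\subseteq K_{2}\).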
Note that \(\mathbf{T}_{\mathbb{Q}}\) is of finite index in \(K_{2}\). Therefore \(\mathbf{T}_{\mathbb{Q}}\) is also of finite index in \(K_{0}\) and \(K_{1}\). This implies that \(K_{0}\) has finite index in \(K_{1}\). Since Conjecture 4.7 gives \(\operatorname{Hdg}(\mathcal{X}_{f})=\operatorname{Hdg}(f)\), it follows that \(\operatorname{MT}(\mathcal{X}_{f})\) has finite index in \(\operatorname{MT}(f)\). Since \(\operatorname{MT}(\mathcal{X}_{f})\) and \(\operatorname{MT}(f)\) are both connected reductive groups, we must have \(\operatorname{MT}(\mathcal{X}_{f})=\operatorname{MT}(f)\). This shows (a). Now let \(\mathcal{K}_{u}\) be the kernel of \(G(\mathbb{H}_{\mathbf{I},u,X_{0}},x)^{\circ}\to G(\mathbb{L}_{\mathbf{I},u,X_{0}},x)^{\circ}\). It is a basic fact that \(\mathbf{T}_{\mathbb{Q}_{u}}\subseteq\mathcal{K}_{u}\). Since \(\mathbf{T}_{\mathbb{Q}_{u}}\) has finite index in \(K_{1,\mathbb{Q}_{u}}\), \(\mathcal{K}_{u}\) must also have finite index in \(K_{1,\mathbb{Q}_{u}}\). Combining this with the equality \(G(\mathbb{L}_{\mathbf{I},u,X_{0}},x)^{\circ}=\operatorname{Hdg}(f)^{\circ}_{\mathbb{Q}_{u}}\) from Conjecture 4.7, we see that \(G(\mathbb{H}_{\mathbf{I},u,X_{0}},x)^{\circ}\) has finite index in \(\operatorname{MT}(f)_{\mathbb{Q}_{u}}\). Since \(G(\mathbb{H}_{\mathbf{I},u,X_{0}},x)^{\circ}\) and
\(\mathrm{MT}(f)_{\mathbb{Q}_{u}}\) are both reductive groups, we must have \(G(\mathbb{H}_{\mathbf{I},u,X_{0}},x)^{\circ}=\mathrm{MT}(f)_{\mathbb{Q}_{u}}^{\circ}\). Therefore (b) is also true.
3. is clear.
The following are the main results of our paper. The proofs of these theorems will be given in the later sections.
**Theorem 4.10**.: _Conjecture 4.5 is true when (1) each \(\mathcal{S}_{i}\) is a modular curve, or (2) \(\#\mathbf{I}=1\)._
**Theorem 4.11**.: _Conjecture 4.6 is true when (1) each \(\mathcal{S}_{i}\) is a modular curve, or (2) \(\#\mathbf{I}=1\)._
**Theorem 4.12**.: _Conjecture 4.7 is true when (1) each \(\mathcal{S}_{i}\) is a modular curve, or (2) \(\#\mathbf{I}=1\)._
**Theorem 4.13**.: _Conjecture 4.8 is true when (1) each \(\mathcal{S}_{i}\) is a modular curve, or (2) \(\#\mathbf{I}=1\)._
**Theorem 4.14**.: _Conjecture 1.1 is true when \(f\) factors through \(\mathscr{S}_{\mathbf{I},\mathbb{F}}\) and (1) each \(\mathcal{S}_{i}\) is a modular curve, or (2) \(\#\mathbf{I}=1\)._
## 5. Lie theory of orthogonal and unitary groups
In this section, we establish the Lie-theoretic results needed for the study of monodromy groups of crystalline local systems. These results serve as an essential technical component in the proofs of both the Tate-linear and the Mumford-Tate conjectures for GSpin Shimura varieties. This section is intended to be used solely for reference.
### Setups
Let \((M,Q^{\prime})\) be a vector space over an algebraically closed field \(K\) of characteristic \(0\), equipped with a nondegenerate quadratic pairing \(Q^{\prime}\). In our applications, the space \((M,Q^{\prime})\) will be a quadratic subspace of \((L,Q)\otimes K\), where \((L,Q)\) is the quadratic \(\mathbb{Z}\)-lattice involved in defining a GSpin Shimura datum. In the following we fix:
1. a basis \(\{w,v_{1},...,v_{n},w^{\prime}\}\) of \(M\), such that the pairing \(Q^{\prime}\) has the form \[Q^{\prime}=\begin{bmatrix}&1\\ &Q^{\prime}_{0}\\ 1&\end{bmatrix},\]
2. a cocharacter \(\nu:\mathbb{G}_{m}\to\mathrm{GL}(M),\ t\to\mathrm{diag}(t^{-1},1,1,...,1,t)\),
3. quadratic subspaces \(M_{0}=\mathrm{Span}_{K}\{v_{1},...,v_{n}\}\) and \(B=\mathrm{Span}_{K}\{w,w^{\prime}\}\).
We will use \(G\) to denote a connected reductive subgroup of \(\mathrm{SO}(M)\) containing the image of \(\nu\). Let \(Lv_{G,\nu}\) be the Levi of \(G\) corresponding to \(\nu\). We will canonically identify it as the subgroup of \(G\) that commutes with \(\nu\). For example, \(Lv_{G,\nu}\subseteq Lv_{\mathrm{SO}(M),\nu}=\mathrm{im}\,\nu\times\mathrm{SO} (M_{0})\subseteq\mathrm{SO}(M)\).
Let \(G_{0}\) be the projection of \(Lv_{G,\nu}\) to \(\mathrm{SO}(M_{0})\). For example, \(\mathrm{SO}(M)_{0}=\mathrm{SO}(M_{0})\). The assumption that \(\mathrm{im}\,\nu\subseteq G\) implies that \(Lv_{G,\nu}=G_{0}\times\mathrm{im}\,\nu\). The group \(G_{0}\) can be canonically identified as a subgroup of \(G\), namely, \(G_{0}=G\cap\mathrm{SO}(M_{0})\). Denote by \(U_{G,\nu}\)_resp._\(U_{G,\nu^{-1}}\) the unipotent _resp._ opposite unipotent of \(G\).
In our applications, \(G\) will be taken as the group \(\mathrm{SO}(M)\) itself, or a certain unitary subgroup \(\mathrm{U}(\varphi)\subseteq\mathrm{SO}(M)\). As we will see, these groups arise as the monodromy groups of certain overconvergent sub-\(F\)-isocrystals of the crystalline local system \(\mathbb{L}_{p,X}\) over some geometrically connected smooth base \(X/\mathbb{F}_{q}\).
#### 5.1.1. Unitary subgroups
Consider the diagonal embedding \(K\hookrightarrow K^{\prime}:=K\times K\). We shall regard it as a quadratic extension of \(K\) with an involution \(\iota\in\operatorname{Aut}_{K}(K^{\prime})\) swapping two copies of \(K\). We have a good notion of Hermitian forms over a \(K^{\prime}\)-space.
The quadratic subspace \(B\) can be endowed with an Hermitian form over \(K^{\prime}\). Under the basis \(w\) and \(w^{\prime}\), we let
\[\varphi_{B}=\begin{bmatrix}&(0,1)\\ (1,0)&\end{bmatrix}.\]
Then \(\operatorname{Tr}\varphi_{B}=Q^{\prime}|_{B}\), and \(w,w^{\prime}\) are both totally isotropic. The unitary subgroup \(\operatorname{U}(\varphi_{B})\) is nothing other than \(\operatorname{im}\nu\).
We say that \((M,\varphi)\) is an _Hermitian space over \(K^{\prime}\) respecting_\(Q^{\prime}\), if \(M\) decomposes into a direct sum of Hermitian spaces \((M_{0},\varphi_{0})\oplus(B,\varphi_{B})\) such that \(\operatorname{Tr}\varphi_{0}=Q^{\prime}_{0}\). When this is the case, we automatically have \(\operatorname{Tr}\varphi=Q^{\prime}\).
These give rise to unitary subgroups \(\operatorname{U}(\varphi)\subseteq\operatorname{SO}(M)\) and \(\operatorname{U}(\varphi_{0})\subseteq\operatorname{SO}(M_{0})\), such that \(\operatorname{U}(\varphi)_{0}=\operatorname{U}(\varphi_{0})\). Note also that \(\varphi_{0}\) induces a decomposition \(M_{0}=\mathcal{W}_{0}\oplus\mathcal{W}^{\prime}_{0}\) such that \(\mathcal{W}_{0}\) and \(\mathcal{W}^{\prime}_{0}\) are mutually dual maximal totally isotropic subspaces. In this case, \(\operatorname{U}(\varphi_{0})\simeq\operatorname{GL}(\mathcal{W}_{0})\).
Similarly, \(\varphi\) induces a decomposition of \(M\) into mutually dual maximal totally isotropic subspaces \(\mathcal{W}=\mathcal{W}_{0}\oplus\operatorname{Span}_{K}\{w\}\) and \(\mathcal{W}^{\prime}=\mathcal{W}^{\prime}_{0}\oplus\operatorname{Span}_{K} \{w^{\prime}\}\), and we have \(\operatorname{im}\nu\subseteq\operatorname{U}(\varphi)\simeq\operatorname{ GL}(\mathcal{W})\).
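Concretely, and only as an illustration of the conventions just fixed: \(\nu(t)\) fixes \(\mathcal{W}_{0}\subseteq M_{0}\) pointwise and scales \(w\) by \(t^{-1}\), so under the identification \(\operatorname{U}(\varphi)\simeq\operatorname{GL}(\mathcal{W})\) the image of \(\nu\) is the one-parameter subgroup \[\nu(t)|_{\mathcal{W}}=\mathrm{diag}(1,1,...,1,t^{-1})\] in a basis of \(\mathcal{W}=\mathcal{W}_{0}\oplus\operatorname{Span}_{K}\{w\}\) whose last vector is \(w\).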
#### 5.1.2. Root systems
Suppose \(G\) contains the image of \(\nu\). We fix a maximal torus \(T_{G}\) that is contained in \(Lv_{G,\nu}\). Then the image of \(T_{G}\) in \(G_{0}\) is also a maximal torus, which we denote by \(T_{G_{0}}\). We always have \(T_{G}=T_{G_{0}}\times\operatorname{im}\nu\). In the rest of the paper, we will always choose maximal tori in such a way.
Let \(\Phi_{G}\subseteq X^{*}(T_{G})_{\mathbb{R}}\) and \(\Phi_{G_{0}}\subseteq X^{*}(T_{G_{0}})_{\mathbb{R}}\) be the root systems that arise from the choice of maximal tori. We have a canonical embedding \(X^{*}(T_{G_{0}})_{\mathbb{R}}\subseteq X^{*}(T_{G})_{\mathbb{R}}\), which identifies \(\Phi_{G_{0}}\) as a sub-root system of \(\Phi_{G}\), and \(\Phi_{G_{0}}=\Phi_{G}\cap X^{*}(T_{G_{0}})_{\mathbb{R}}\). Note that we always have a splitting \(X^{*}(T_{G})=X^{*}(T_{G_{0}})\oplus X^{*}(\operatorname{im}\nu)\). Let \(\Phi_{G,\nu}\)_resp._\(\Phi_{G,\nu}^{+}\)_resp._\(\Phi_{G,\nu}^{-}\) be the set of roots _resp._ positive roots (i.e., the roots generating \(U_{G,\nu}\)) _resp._ negative roots (i.e., the roots generating \(U_{G,\nu^{-1}}\)) in \(\Phi_{G}-\Phi_{G_{0}}\).
We will also be using Lie algebras. The symbol \(\mathfrak{so}(M)\)_resp._\(\mathfrak{u}(\varphi)\)_resp._\(\mathfrak{g}\) will be used to denote the Lie algebra of \(\operatorname{SO}(M)\)_resp._\(\operatorname{U}(\varphi)\)_resp._\(G\). Furthermore, we use \(\mathfrak{t}_{\bullet}\) to denote the Lie algebra of a torus \(T_{\bullet}\). For example, \(\mathfrak{t}_{G}\) is the Lie algebra of \(T_{G}\). If \(\alpha\in\Phi_{G}\) is a root, we still use \(\mathfrak{g}_{\alpha}\) to denote the root sub-algebra of \(\mathfrak{g}\) associated to \(\alpha\). Finally, the Lie algebras of \(U_{G,\nu}\)_resp._\(U_{G,\nu^{-1}}\) will be written as \(\mathfrak{U}_{G,\nu}\)_resp._\(\mathfrak{U}_{G,\nu^{-1}}\).
We now summarize some basic facts on root systems of orthogonal and unitary groups, adapted to our settings:
1. (Root system of \(\operatorname{SO}(M)\)) Let \(G=\operatorname{SO}(M)\). It is known that, when \(n\) is even, \(\Phi_{\operatorname{SO}(M)}\) has Dynkin diagram \(D_{\lceil\frac{n+1}{2}\rceil}\), and when \(n\) is odd, \(\Phi_{\operatorname{SO}(M)}\) has Dynkin diagram \(B_{\lceil\frac{n+1}{2}\rceil}\). There is a basis \(\{e_{1},e_{2},...,e_{\lceil\frac{n-1}{2}\rceil}\}\) of \(X^{*}(T_{\operatorname{SO}(M_{0})})\) and \(e_{\lceil\frac{n+1}{2}\rceil}\in X^{*}(\operatorname{im}\nu)\), such that \[\Phi_{\operatorname{SO}(M)}=\{\pm e_{i}\pm e_{j}\}\cup\{\pm e_{k}\},\] \[\Phi_{\operatorname{SO}(M),\nu}^{+}=\{e_{\lceil\frac{n+1}{2}\rceil}\pm e_{i}\}\cup\{e_{\lceil\frac{n+1}{2}\rceil}\}, \tag{5.1.1}\] where the roots \(\pm e_{k}\) and \(e_{\lceil\frac{n+1}{2}\rceil}\) occur only when \(n\) is odd, \(i\neq j\) and \(k\) run over \(\{1,2,...,\lceil\frac{n+1}{2}\rceil\}\) in the expression of \(\Phi_{\operatorname{SO}(M)}\), and \(i\) runs over \(\{1,2,...,\lceil\frac{n-1}{2}\rceil\}\) in the expression of \(\Phi_{\operatorname{SO}(M),\nu}^{+}\).
2. (Root system of \(\operatorname{U}(\varphi)\)) Suppose \((M,\varphi)\) is an Hermitian space over \(K^{\prime}\) respecting \(Q^{\prime}\) in the sense of SS5.1.1, and let \(G=\operatorname{U}(\varphi)\). It is known that \(\Phi_{\operatorname{U}(\varphi)}\) has Dynkin diagram \(A_{\frac{n}{2}}\).
There are linearly independent elements \(\{e^{\prime}_{1},e^{\prime}_{2},...,e^{\prime}_{\frac{n}{2}}\}\subseteq X^{*}(T_{ \mathrm{U}(\varphi_{0})})\) and \(e^{\prime}_{\frac{n}{2}+1}\in X^{*}(\mathrm{im}\,\nu)\), such that
\[\Phi_{\mathrm{U}(\varphi)}=\{\pm(e^{\prime}_{i}-e^{\prime}_{j})\},\] \[\Phi^{+}_{\mathrm{U}(\varphi),\nu}=\{e^{\prime}_{\frac{n}{2}+1}-e ^{\prime}_{i}\}, \tag{5.1.2}\]
where \(i\neq j\) run over \(\{1,2,...,\frac{n}{2}+1\}\) in the expression of \(\Phi_{\mathrm{U}(\varphi)}\) and \(i\) runs through \(\{1,2,...,\frac{n}{2}\}\) in the expression of \(\Phi^{+}_{\mathrm{U}(\varphi),\nu}\).
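For orientation, here is the smallest nontrivial instance of (5.1.2) (an illustration only): take \(n=2\), so that \(\operatorname{U}(\varphi)\simeq\operatorname{GL}(\mathcal{W})\) with \(\dim\mathcal{W}=2\) and Dynkin diagram \(A_{1}\). Then \[\Phi_{\operatorname{U}(\varphi)}=\{\pm(e^{\prime}_{1}-e^{\prime}_{2})\},\qquad\Phi^{+}_{\operatorname{U}(\varphi),\nu}=\{e^{\prime}_{2}-e^{\prime}_{1}\},\] and the single positive root in \(\Phi^{+}_{\operatorname{U}(\varphi),\nu}\) spans the one-dimensional Lie algebra \(\mathfrak{U}_{\operatorname{U}(\varphi),\nu}\).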
### Main lemmas
Our main goal is to prove Lemma 5.3, which will play an essential role in analysing the structure of certain crystalline monodromy groups. To prove it, we need to first make several simple observations.
**Lemma 5.1**.: _Let \(\mathcal{U}_{G,\nu}\) be the subgroup of \(G\) generated by \(U_{G,\nu}\) and \(U_{G,\nu^{-1}}\). If \(G=\mathrm{SO}(M)\), then \(\mathcal{U}_{\mathrm{SO}(M),\nu}=\mathrm{SO}(M)\). If \((M,\varphi)\) is an Hermitian space over \(K^{\prime}\) respecting \(Q^{\prime}\) as per SS5.1.1 and \(G=\mathrm{U}(\varphi)\), then \(\mathcal{U}_{\mathrm{U}(\varphi),\nu}=\mathrm{SU}(\varphi)\)._
Proof.: Let \(G=\mathrm{SO}(M)\). Let \(\mathrm{Lie}\mathcal{U}_{\mathrm{SO}(M),\nu}\) be the Lie algebra of \(\mathcal{U}_{\mathrm{SO}(M),\nu}\). Fix a maximal torus \(T_{\mathrm{SO}(M)}\) as in SS5.1.2. From the explicit description of the root system of \(\mathrm{SO}(M)\) as in (5.1.1), we see that every root in \(\Phi_{\mathrm{SO}(M_{0})}\) is the sum of a root in \(\Phi^{+}_{\mathrm{SO}(M),\nu}\) and a root in \(\Phi^{-}_{\mathrm{SO}(M),\nu}\). Furthermore, one checks that \(\mathfrak{t}_{G}\) is already generated by the sub-algebras \([\mathfrak{so}(M)_{\alpha},\mathfrak{so}(M)_{-\alpha}]\), where \(\alpha\) runs over \(\Phi^{+}_{\mathrm{SO}(M),\nu}\). This shows that \(\mathrm{Lie}\mathcal{U}_{\mathrm{SO}(M),\nu}=\mathfrak{so}(M)\), whence \(\mathrm{SO}(M)=\mathcal{U}_{\mathrm{SO}(M),\nu}\).
The argument for \(G=\mathrm{U}(\varphi)\) is similar, and is left to the readers.
**Lemma 5.2**.: _Fix a maximal torus \(T_{\mathrm{SO}(M)}\) as in SS5.1.2. Let \(\alpha\in\Phi^{+}_{\mathrm{SO}(M),\nu}\). Suppose that \(\mathfrak{h}\) is a one-dimensional sub-algebra of \(\mathfrak{U}_{\mathrm{SO}(M),\nu^{-1}}\) such that \(\mathfrak{t}:=[\mathfrak{h},\mathfrak{so}(M)_{\alpha}]\) is a one-dimensional sub-algebra of \(\mathfrak{t}_{\mathrm{SO}(M)}\). If furthermore \([\mathfrak{t},\mathfrak{h}]\subseteq\mathfrak{h}\), then \(\mathfrak{h}=\mathfrak{so}(M)_{-\alpha}\)._
Proof.: This can be checked using the explicit description of the root system. Use the notation in (5.1.1). Let \(n\) be even. Without loss of generality, we can take \(\alpha=e_{\lceil\frac{n+1}{2}\rceil}+e_{1}\). For each \(\beta\in\Phi_{\mathrm{SO}(M)}\), pick a generator \(E_{\beta}\) for \(\mathfrak{so}(M)_{\beta}\). Then a generator of \(\mathfrak{h}\) can be written as
\[\sum_{\beta\in\Phi^{-}_{\mathrm{SO}(M),\nu}}c_{\beta}E_{\beta},\ c_{\beta}\in K.\]
The condition that \(\mathfrak{t}=[\mathfrak{h},\mathfrak{so}(M)_{\alpha}]\) is a one-dimensional sub-algebra of \(\mathfrak{t}_{\mathrm{SO}(M)}\) implies that \(c_{\beta}=0\) unless \(\beta\) equals \(-\alpha\) or \(-\alpha^{\prime}:=-e_{\lceil\frac{n+1}{2}\rceil}+e_{1}\), and moreover \(c_{-\alpha}\neq 0\). We can assume that \(c_{-\alpha}=1\), and write \(\mathfrak{h}=\mathrm{Span}_{K}\{E_{-\alpha}+cE_{-\alpha^{\prime}}\}\). Since \([E_{-\alpha},E_{\alpha}]\) is a generator of \(\mathfrak{t}\), the condition \([\mathfrak{t},\mathfrak{h}]\subseteq\mathfrak{h}\) implies that
\[[[E_{-\alpha},E_{\alpha}],E_{-\alpha}+cE_{-\alpha^{\prime}}]\in\mathrm{Span}_{K}\{E_{-\alpha}+cE_{-\alpha^{\prime}}\}.\]
However, since \(\alpha^{\prime}\) is orthogonal to \(\alpha\), the term \([[E_{-\alpha},E_{\alpha}],E_{-\alpha^{\prime}}]\) vanishes (see the computation below), so the left hand side equals \([[E_{-\alpha},E_{\alpha}],E_{-\alpha}]=-\alpha([E_{-\alpha},E_{\alpha}])E_{-\alpha}\). It is a nonzero multiple of \(E_{-\alpha}\). As a result \(c=0\). Therefore \(\mathfrak{h}=\mathfrak{so}(M)_{-\alpha}\). The argument for \(n\) odd is similar, and is left to the readers.
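The vanishing invoked above can be checked directly: writing \(H_{\alpha}:=[E_{-\alpha},E_{\alpha}]\in\mathfrak{t}_{\mathrm{SO}(M)}\), which is proportional to the coroot of \(\alpha\), one has \[[H_{\alpha},E_{-\alpha^{\prime}}]=-\alpha^{\prime}(H_{\alpha})E_{-\alpha^{\prime}}=0,\] since \(\alpha=e_{\lceil\frac{n+1}{2}\rceil}+e_{1}\) and \(\alpha^{\prime}=e_{\lceil\frac{n+1}{2}\rceil}-e_{1}\) are orthogonal, so that \(\alpha^{\prime}(H_{\alpha})=0\).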
**Lemma 5.3**.: _Suppose \(M_{0}\) admits an orthogonal decomposition \(M_{0}=M_{a,0}\oplus M_{b,0}\). Let \(M_{a}=M_{a,0}\oplus B\) and \(M_{b}=M_{b,0}\oplus B\). Assume that \((M_{b},\varphi_{b})\) is an Hermitian space over \(K^{\prime}\) respecting \(Q^{\prime}|_{M_{b}}\) in the sense of SS5.1.1. That is, \((M_{b},\varphi_{b})=(M_{b,0},\varphi_{b,0})\oplus(B,\varphi_{B})\) and \(\mathrm{Tr}\,\varphi_{b,0}=Q^{\prime}|_{M_{b,0}}\). Suppose there is a connected reductive group \(G\subseteq\mathrm{SO}(M)\) containing the image of \(\nu\), such that_
1. \(G_{0}\subseteq\mathrm{SO}(M_{a,0})\times\mathrm{U}(\varphi_{b,0})\)_,_
2. \(U_{G,\nu}=U_{\mathrm{SO}(M_{a}),\nu}\times U_{\mathrm{U}(\varphi_{b}),\nu}\)_._
_Then either \(M_{a,0}=0\) and \(G=\mathrm{U}(\varphi_{b})\), or \(M_{b,0}=0\) and \(G=\mathrm{SO}(M)\)._
Proof.: We begin by remarking that \(U_{\mathrm{SO}(M_{a}),\nu}\) and \(U_{\mathrm{U}(\varphi_{b}),\nu}\) commute with each other, and have trivial intersection, so condition (2) makes sense. We also remind the readers that \(\mathrm{SO}(M_{a,0})=\mathrm{SO}(M_{a})_{0}\) and \(\mathrm{U}(\varphi_{b,0})=\mathrm{U}(\varphi_{b})_{0}\). Fix maximal tori \(T_{\mathrm{SO}(M_{0})}\), \(T_{\mathrm{SO}(M_{a,0})}\), \(T_{\mathrm{U}(\varphi_{b,0})}\) and \(T_{G_{0}}\), in the manner that \(T_{G_{0}}\subseteq T_{\mathrm{SO}(M_{a,0})}\times T_{\mathrm{U}(\varphi_{b,0})} \subseteq T_{\mathrm{SO}(M_{0})}\). Let \(T_{\mathrm{SO}(M)}\), \(T_{\mathrm{SO}(M_{a})}\), \(T_{\mathrm{U}(\varphi_{b})}\) and \(T_{G}\) be the products of \(\mathrm{im}\,\nu\) with \(T_{\mathrm{SO}(M_{0})}\), \(T_{\mathrm{SO}(M_{a,0})}\), \(T_{\mathrm{U}(\varphi_{b,0})}\) and \(T_{G_{0}}\), respectively. To ease notation, we also set \(T_{H_{0}}=T_{\mathrm{SO}(M_{a,0})}\times T_{\mathrm{U}(\varphi_{b,0})}\) and \(T_{H}=T_{H_{0}}\times\mathrm{im}\,\nu\).
_Claim_.: \(U_{G,\nu^{-1}}=U_{\mathrm{SO}(M_{a}),\nu^{-1}}\times U_{\mathrm{U}(\varphi_{b }),\nu^{-1}}\).
For dimension reasons, it suffices to show that \(U_{G,\nu^{-1}}\) contains the right hand side. We will achieve this by showing that \(\mathfrak{g}\) contains all root sub-algebras of \(\mathfrak{so}(M_{a})\) and \(\mathfrak{u}(\varphi_{b})\) that are associated to the negative roots. Note that \(T_{\mathrm{SO}(M_{a,0})}\) commutes with \(\mathrm{U}(\varphi_{b})\), whereas \(T_{\mathrm{U}(\varphi_{b,0})}\) commutes with \(\mathrm{SO}(M_{a})\). In addition, \(T_{\mathrm{SO}(M)}\) contains the central torus of \(\mathrm{U}(\varphi_{b})\). By considering the adjoint action of the various maximal tori on the root sub-algebras, we see that
(a) the root sub-algebra of \(\mathfrak{so}(M_{a})\)_resp_. \(\mathfrak{u}(\varphi_{b})\) associated to a positive root is also a root sub-algebra of \(\mathfrak{g}\) associated to a positive root.
(b) the root sub-algebra of \(\mathfrak{so}(M_{a})\)_resp_. \(\mathfrak{u}(\varphi_{b})\) associated to an arbitrary root is a root sub-algebra of \(\mathfrak{so}(M)\).
Let \(\varpi:X^{*}(T_{H})_{\mathbb{R}}\to X^{*}(T_{G})_{\mathbb{R}}\) be the natural map. From (a), we see that \(\varpi\) carries \(\Phi_{\mathrm{SO}(M_{a}),\nu}^{+}\sqcup\Phi_{\mathrm{U}(\varphi_{b}),\nu}^{+}\) bijectively to \(\Phi_{G,\nu}^{+}\), and
\[\mathfrak{g}_{\varpi(\alpha)}=\begin{cases}\mathfrak{so}(M_{a})_{\alpha},& \alpha\in\Phi_{\mathrm{SO}(M_{a}),\nu}^{+},\\ \mathfrak{su}(\varphi_{b})_{\alpha},&\alpha\in\Phi_{\mathrm{U}(\varphi_{b} ),\nu}^{+}.\end{cases}\]
From (b), we see that the \(\mathfrak{g}_{\varpi(\alpha)}\) are root sub-algebras of \(\mathfrak{so}(M)\), i.e., \(\mathfrak{g}_{\varpi(\alpha)}=\mathfrak{so}(M)_{\alpha}\)7. The opposite root sub-algebra \(\mathfrak{g}_{-\varpi(\alpha)}\) of \(\mathfrak{g}_{\varpi(\alpha)}\) then satisfies the hypotheses placed on \(\mathfrak{h}\) in Lemma 5.2, so we must have \(\mathfrak{g}_{-\varpi(\alpha)}=\mathfrak{so}(M)_{-\alpha}\). As a result, \(\mathfrak{g}\) contains all root sub-algebras of \(\mathfrak{so}(M_{a})\) and \(\mathfrak{u}(\varphi_{b})\) that are associated to negative roots. This proves the _Claim_.
Footnote 7: By abuse of notation, we write a root subalgebra of \(\mathfrak{so}(M)\) that arises as in (b) as \(\mathfrak{so}(M)_{\alpha}\).
Let \(\mathcal{U}_{G,\nu}\) be the group as per Lemma 5.1. By _Claim_ and (2), we see that \(\mathcal{U}_{\mathrm{SO}(M_{a}),\nu}\times\mathcal{U}_{\mathrm{U}(\varphi_{b} ),\nu}\subseteq\mathcal{U}_{G,\nu}\). On the other hand, Lemma 5.1 implies that \(\mathcal{U}_{\mathrm{SO}(M_{a}),\nu}=\mathrm{SO}(M_{a})\) and \(\mathcal{U}_{\mathrm{U}(\varphi_{b}),\nu}=\mathrm{SU}(\varphi_{b})\). Since \(G\) contains \(\mathrm{im}\,\nu\), it contains both \(\mathrm{SO}(M_{a})\) and \(\mathrm{U}(\varphi_{b})\). Therefore
\[G_{0}=\mathrm{SO}(M_{a,0})\times\mathrm{U}(\varphi_{b,0}). \tag{5.2.1}\]
Note that \(T_{G_{0}}=T_{H_{0}}\) and \(T_{G}=T_{H}\). We have \(\Phi_{G_{0}}=\Phi_{\mathrm{SO}(M_{a,0})}\sqcup\Phi_{\mathrm{SU}(\varphi_{b,0})}\) and \(\Phi_{G,\nu}^{+}=\Phi_{\mathrm{SO}(M_{a}),\nu}^{+}\sqcup\Phi_{\mathrm{SU}(\varphi_{b}),\nu}^{+}\), where all the roots are considered as vectors in \(X^{*}(T_{H})\). Let \(\dim M_{a,0}=n_{a}\) and \(\dim M_{b,0}=n_{b}\). By the explicit expression of the root systems as per SS5.1.2, we can pick a basis \(\{e_{1},e_{2},...,e_{\lceil\frac{n_{a}-1}{2}\rceil}\}\) of \(X^{*}(T_{\mathrm{SO}(M_{a,0})})\) and \(e_{\lceil\frac{n_{a}+1}{2}\rceil}\in X^{*}(\mathrm{im}\,\nu)\), such that \(\Phi_{\mathrm{SO}(M_{a})}\) and \(\Phi_{\mathrm{SO}(M_{a}),\nu}^{+}\) are given by (5.1.1) (with \(n\) replaced by \(n_{a}\)). Similarly, there is a choice of linearly independent elements \(\{e_{1}^{\prime},e_{2}^{\prime},...,e_{\frac{n_{b}}{2}}^{\prime}\}\) of \(X^{*}(T_{\mathrm{U}(\varphi_{b})})\) and \(e_{\frac{n_{b}}{2}+1}^{\prime}\in X^{*}(\mathrm{im}\,\nu)\), such that \(\Phi_{\mathrm{U}(\varphi_{b})}\) and \(\Phi_{\mathrm{U}(\varphi_{b}),\nu}^{+}\) are given by (5.1.2) (with \(n\) replaced by \(n_{b}\)). We must have \(e_{\lceil\frac{n_{a}+1}{2}\rceil}=e_{\frac{n_{b}}{2}+1}^{\prime}\), so we use \(e^{\prime\prime}\) to denote this element.
Now we show that either \(n_{a}=0\) or \(n_{b}=0\). Suppose \(n_{b}\geq 2\); we must show that \(n_{a}=0\). We will use the fact that \(\Phi_{G}\) is closed under reflection in the hyperplane perpendicular to a root. Note that \(e^{\prime\prime}-e_{1}^{\prime}\in\Phi_{G,\nu}^{+}\). Let \(\Pi\) be the hyperplane in \(X^{*}(T_{H})_{\mathbb{R}}\) perpendicular to \(e^{\prime\prime}-e_{1}^{\prime}\). If \(n_{a}\) is odd, then \(e^{\prime\prime}\in\Phi_{G,\nu}^{+}\). The reflection of \(e^{\prime\prime}\) through \(\Pi\), which is \(e_{1}^{\prime}\), should also lie in \(\Phi_{G}\). In fact, it should lie in \(\Phi_{G}\cap X^{*}(T_{H_{0}})_{\mathbb{R}}=\Phi_{G_{0}}\). Unfortunately, \(\Phi_{G_{0}}\) does not contain \(e_{1}^{\prime}\). This contradiction shows that
\(n_{a}\) must be even. If \(n_{a}\geq 2\), then \(e^{\prime\prime}+e_{1}\in\Phi_{G,\nu}^{+}\). Again, the reflection of \(e^{\prime\prime}+e_{1}\) through \(\Pi\), which is \(e_{1}+e_{1}^{\prime}\), should lie in \(\Phi_{G_{0}}\). But \(\Phi_{G_{0}}\) cannot contain such an element. Therefore we must have \(n_{a}\leq 1\). Combining these, we get \(n_{a}=0\).
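For completeness, here are the two reflection computations used above, assuming (as in the standard conventions for these root systems) that \(e^{\prime\prime}\), \(e_{1}\) and \(e^{\prime}_{1}\) are pairwise orthogonal and of equal length. With \(\beta=e^{\prime\prime}-e^{\prime}_{1}\) and \(s_{\beta}(v)=v-\frac{2(v,\beta)}{(\beta,\beta)}\beta\), \[s_{\beta}(e^{\prime\prime})=e^{\prime\prime}-(e^{\prime\prime}-e^{\prime}_{1})=e^{\prime}_{1},\qquad s_{\beta}(e^{\prime\prime}+e_{1})=(e^{\prime\prime}+e_{1})-(e^{\prime\prime}-e^{\prime}_{1})=e_{1}+e^{\prime}_{1}.\]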
As we noted before, \(G\) contains both \(\operatorname{SO}(M_{a})\) and \(\operatorname{U}(\varphi_{b})\). If \(n_{b}=0\), then \(G=\operatorname{SO}(M)\). If \(n_{a}=0\), then from (5.2.1), together with condition (2) and the _Claim_, we find that \(G=\operatorname{U}(\varphi_{b})\).
## 6. Conjecture 4.5 and the Tate-linear conjecture
In this section we prove Conjecture 4.5 for \(\operatorname{GSpin}\) Shimura varieties and products of modular curves. By Proposition 4.9, this implies the Tate-linear conjecture for these Shimura varieties. In SS6.1 we introduce several important lattices that arise from the formal torus \(\mathscr{T}_{f,x}\). We then relate these lattices to the monodromy of \(F\)-isocrystals in SS6.2. After that, we use the monodromy results and Lie theory lemmas (SS5) to construct a finite subset \(\boldsymbol{\Delta}\subseteq\operatorname{End}(\mathscr{A}^{\operatorname{ KS}}_{\mathbf{I},\overline{\boldsymbol{\eta}}})\otimes\mathbb{Z}_{p}\) with certain special properties (such a subset is called _sufficient_, see Definition 1). The existence of such a subset is sufficient for proving Conjecture 4.5 in the case of products of modular curves (SS6.3) and \(\operatorname{GSpin}\) Shimura varieties (SS6.4).
### The Tate-linear character and cocharacter lattices
We refer the readers to SS4.1.3 and SS4.2 for setups and notations. Let \(\Psi_{\mathbf{I}},\operatorname{Br}_{\mathbf{I}}\) be the products of (extended) formal Brauer groups over indices running through \(\mathbf{I}\). These are \(p\)-divisible groups over \(\mathscr{S}^{\operatorname{ord}}_{\mathbf{I},\mathbb{F}}\). Let \(\mu_{i}:\mathbb{G}_{m}\to\operatorname{GL}(H_{i,\mathbb{Z}_{p}})\) be the canonical Hodge cocharacter of \(\mathscr{A}^{\operatorname{KS}}_{i,x}[p^{\infty}]\) and \(\overline{\mu}_{i}:\mathbb{G}_{m}\to\operatorname{GL}(\overline{L}_{i,\mathbb{Z}_{p}})\) be the induced cocharacter, which is indeed the canonical Hodge cocharacter of \(\Psi_{i,x}\). We make the following convention
\[\begin{split}\operatorname{GL}^{\prime}(H_{\mathbf{I}})& =\prod_{i\in\mathbf{I}}\operatorname{GL}(H_{i}),\\ \operatorname{GL}^{\prime}(\overline{L}_{\mathbf{I},\mathbb{Z}_{ p}})&=\prod_{i\in\mathbf{I}}\operatorname{GL}(\overline{L}_{i, \mathbb{Z}_{p}}).\end{split} \tag{6.1.1}\]
Let \(\mu_{\mathbf{I}}:\mathbb{G}_{m}\to\operatorname{GSpin}^{\prime}(L_{\mathbf{I },\mathbb{Z}_{p}})\)_resp._\(\mu_{\mathbf{I}}^{c}:\mathbb{G}_{m}\to\operatorname{SO}^{\prime}(L_{ \mathbf{I},\mathbb{Z}_{p}})\)_resp._\(\overline{\mu}_{\mathbf{I}}:\mathbb{G}_{m}\to \operatorname{GL}^{\prime}(\overline{L}_{\mathbf{I},\mathbb{Z}_{p}})\) be the product of \(\mu_{i}\)'s _resp._\(\mu_{i}^{c}\)'s _resp._\(\overline{\mu}_{i}\)'s. As usual, let \(U_{\operatorname{GSpin}^{\prime},\mu_{\mathbf{I}}}\)_resp._\(U_{\operatorname{SO}^{\prime},\mu_{\mathbf{I}}^{c}}\)_resp._\(U_{\operatorname{GL}^{\prime},\overline{\mu}_{\mathbf{I}}}\) be the corresponding unipotent and \(U_{\operatorname{GSpin}^{\prime},\mu_{\mathbf{I}}^{-1}}\)_resp._\(U_{\operatorname{SO}^{\prime},\mu_{\mathbf{I}}^{c,-1}}\)_resp._\(U_{\operatorname{GL}^{\prime},\overline{\mu}_{\mathbf{I}}^{-1}}\) be the corresponding opposite unipotent. According to Remark 2.6, the arithmetic deformation space \(\mathscr{S}^{/x}_{\mathbf{I},W}\) admits three equivalent formal torus structures:
\[\mathscr{S}^{/x}_{\mathbf{I},W}\simeq\begin{cases}U_{\operatorname{GSpin}^{ \prime},\mu_{\mathbf{I}}}(\mathbb{Z}_{p})\otimes_{\mathbb{Z}_{p}}\mathbb{G}_{m }^{\wedge},\\ U_{\operatorname{SO}^{\prime},\mu_{\mathbf{I}}^{c}}(\mathbb{Z}_{p})\otimes_{ \mathbb{Z}_{p}}\mathbb{G}_{m}^{\wedge},\\ U_{\operatorname{GL}^{\prime},\overline{\mu}_{\mathbf{I}}}(\mathbb{Z}_{p}) \otimes_{\mathbb{Z}_{p}}\mathbb{G}_{m}^{\wedge}.\end{cases} \tag{6.1.2}\]
**Definition 1**.:
1. _The Tate-linear cocharacter lattice is the saturated sublattice of_ \(X_{*}(\mathscr{S}^{/x}_{\mathbf{I},\mathbb{F}})\) _defined as_ \[\mathcal{T}_{f,x}:=X_{*}(\mathscr{T}_{f,x})\subseteq X_{*}(\mathscr{S}^{/x}_{\mathbf{I},\mathbb{F}}).\] _We also denote its rational structure as_ \(T_{f,x}:=\mathcal{T}_{f,x}\otimes\mathbb{Q}_{p}\subseteq X_{*}(\mathscr{S}^{/x}_{\mathbf{I},\mathbb{F}})\otimes\mathbb{Q}_{p}\)_. Under identifications (_6.1.2_),_ \(T_{f,x}\) _gives rise to unipotent_ \(\mathbb{Q}_{p}\)_-subgroups of_ \(U_{\operatorname{GSpin}^{\prime},\mu_{\mathbf{I}}}\)_,_ \(U_{\operatorname{SO}^{\prime},\mu_{\mathbf{I}}^{c}}\) _and_ \(U_{\operatorname{GL}^{\prime},\overline{\mu}_{\mathbf{I}}}\)_. When the context is clear, these unipotent subgroups (or their_ \(\mathbb{Q}_{p}\)_-points) will again be denoted_ \(T_{f,x}\)_._
2. _The Tate-linear character lattice is the saturated sublattice of_ \(X^{*}(\mathscr{S}^{/x}_{\mathbf{I},\mathbb{F}})\) _defined as_ \[\mathcal{K}_{f,x}:=\ker(X^{*}(\mathscr{S}^{/x}_{\mathbf{I},\mathbb{F}})\to X^{*} (\mathscr{T}_{f,x})).\] _We also denote its rational structure as_ \(K_{f,x}:=\mathcal{K}_{f,x}\otimes\mathbb{Q}_{p}\subseteq X^{*}(\mathscr{S}^{/x }_{\mathbf{I},\mathbb{F}})\otimes\mathbb{Q}_{p}\)_. Again,_ \(K_{f,x}\) _gives rise to unipotent subgroups of_ \(U_{\mathrm{GSpin}^{\prime},\mu_{\mathbf{I}}^{-1}}\) _,_ \(U_{\mathrm{SO}^{\prime},\mu_{\mathbf{I}}^{c,-1}}\) _and_ \(U_{\mathrm{GL}^{\prime},\mu_{\mathbf{I}}^{-1}}\)_. These subgroups (or their_ \(\mathbb{Q}_{p}\)_-points) will again be denoted_ \(K_{f,x}\)_._
3. _A subset_ \(\mathbf{\Delta}\subseteq\mathrm{End}(\mathscr{A}^{\mathrm{KS}}_{\mathbf{I},\overline{\eta}})\otimes\mathbb{Z}_{p}\) _is called sufficient, if the corresponding deformation space_ \[\mathscr{D}_{\mathbf{\Delta}}=\mathrm{Def}(\mathbf{\Delta}/\mathscr{S}^{/x}_{\mathbf{I},W})\subseteq\mathscr{S}^{/x}_{\mathbf{I},W}\] _(which is an obvious generalization of (_2.3.2_) to products of Shimura varieties) satisfies_ \(\ker(X^{*}(\mathscr{S}^{/x}_{\mathbf{I},W})\to X^{*}(\mathscr{D}_{\mathbf{\Delta}}))_{\mathbb{Q}_{p}}=K_{f,x}\)8. Footnote 8: Note that \(X^{*}(\mathscr{S}^{/x}_{\mathbf{I},W})\) can be canonically identified with \(X^{*}(\mathscr{S}^{/x}_{\mathbf{I},\mathbb{F}})\).
**Lemma 6.1**.: _If \(\mathrm{End}(\mathscr{A}^{\mathrm{KS}}_{\mathbf{I},\overline{\eta}})\) contains a sufficient subset, then Conjecture 4.5 holds._
Proof.: Let \(\Lambda=U_{\mathrm{GSpin}^{\prime},\mu_{\mathbf{I}}^{-1}}(\mathbb{Z}_{p})\). By Proposition 2.5, we can identify \(\mathscr{S}^{/x}_{\mathbf{I},W}\) with \(\mathrm{Hom}_{\mathbb{Z}_{p}}(\Lambda,\mathbb{G}^{\wedge}_{m})\). Let \(\mathbf{\Delta}\) be the sufficient subset in question. By the theory of canonical coordinates, there is a sublattice \(\Lambda_{\mathbf{\Delta}}\subseteq\Lambda\) such that \(\mathscr{D}_{\mathbf{\Delta}}=\mathrm{Hom}_{\mathbb{Z}_{p}}\left(\Lambda/\Lambda_{\mathbf{\Delta}},\mathbb{G}^{\wedge}_{m}\right)\). This \(\Lambda_{\mathbf{\Delta}}\) is nothing other than \(\ker(X^{*}(\mathscr{S}^{/x}_{\mathbf{I},W})\to X^{*}(\mathscr{D}_{\mathbf{\Delta}}))\). Since \(\mathbf{\Delta}\) is sufficient, \(\mathcal{K}_{f,x}\) is the saturation of \(\Lambda_{\mathbf{\Delta}}\). Therefore, \(\mathscr{D}_{\mathbf{\Delta},\mathbb{F}}\) admits \(\mathscr{T}_{f,x}\) as its induced reduced structure. On the other hand, \(\mathbf{\Delta}\) deforms to \(\mathscr{X}^{/x,+}_{f,W}\) by Lemma 4.4(2), hence \(\mathscr{X}^{/x}_{f,\mathbb{F},\mathrm{red}}\subseteq\mathscr{D}_{\mathbf{\Delta},\mathbb{F},\mathrm{red}}=\mathscr{T}_{f,x}\). Since \(f\) factors through \(\mathscr{X}_{f,\mathbb{F}}\) by Lemma 4.4(3), we have \(\mathscr{X}^{/x}_{f,\mathbb{F},\mathrm{red}}=\mathscr{T}_{f,x}\).
### Local and global monodromy of \(\mathbb{L}^{-}_{\mathbf{I},p,X_{0}}\)
Let \(X,X_{0},f,x\) be as in SS4.1.1. Let \(\mathbb{L}^{-}_{\mathbf{I},p,X_{0}}\) be the underlying \(F\)-isocrystal of \(\mathbb{L}_{\mathbf{I},p,X_{0}}\) and \(\omega_{x}\) be the fiber functor of the Tannakian category \(\langle\mathbb{L}^{-}_{\mathbf{I},p,X_{0}}\rangle^{\otimes}\subseteq\mathbf{F}\text{-}\mathbf{Isoc}(X_{0})\) defined in SS3.2. We have two exact sequences
\[0 \to\mathbb{D}(\mathrm{Br}_{\mathbf{I},X_{0}})^{\vee}(-1)\to\mathbb{L}_{\mathbf{I},p,X_{0}}\to\mathbb{D}(\Psi_{\mathbf{I},X_{0}})\to 0. \tag{6.2.1}\]
\[0 \to\mathbb{D}(\Psi^{\mathrm{\acute{e}t}}_{\mathbf{I},X_{0}})\to\mathbb{D}(\Psi_{\mathbf{I},X_{0}})\to\mathbb{D}(\mathrm{Br}_{\mathbf{I},X_{0}})\to 0. \tag{6.2.2}\]
The existence of nondegenerate pairings \(\mathbf{Q}_{i}\) over each \(F\)-crystal \(\mathbf{L}_{i,p}\) guarantees that the natural projection
\[G(\mathbb{L}^{-}_{\mathbf{I},p,X_{0}},x)\twoheadrightarrow G(\mathbb{D}(\Psi_{ \mathbf{I},X_{0}}),x) \tag{6.2.3}\]
is an isomorphism. Therefore, to study the monodromy of \(\mathbb{L}^{-}_{\mathbf{I},p,X_{0}}\), it suffices to study the monodromy of \(\mathbb{D}(\Psi_{\mathbf{I},X_{0}})\). Since \(\Psi_{\mathbf{I},X_{0}}\) is ordinary, the techniques from SS3.2.1 apply.
_Remark 6.2_.: In order to carry out concrete computations, it is convenient to have an explicit description of the isomorphism (6.2.3). By projecting to each index, we can assume that \(\#\mathbf{I}=1\). In the following, we drop all the subscript \(\mathbf{I}\) as usual. Arrange the basis \(\{e_{i}\}\) of \(\omega_{x}(\mathbb{L}_{p,X_{0}})\) so that \(\omega_{x}(\mathbb{D}(\mathrm{Br}))=\mathrm{Span}_{\mathbb{Q}_{p}}\{e_{b+2}\}\), \(\omega_{x}(\mathbb{D}(\Psi))=\mathrm{Span}_{\mathbb{Q}_{p}}\{e_{2},...,e_{b+2}\}\) and the quadratic pairing over \(\omega_{x}(\mathbb{L}_{p,x})\) is given by
\[\mathbf{Q}=\begin{bmatrix}&&1\\ &\mathbf{Q}_{0}&\\ 1&&\end{bmatrix}.\]
For a \(\mathbb{Q}_{p}\)-algebra \(R\), an element \(g\in G(\mathbb{D}(\Psi_{X_{0}}),x)(R)\) is of the form
\[g=\begin{bmatrix}\mathbf{B}&\mathbf{v}\\ &\lambda\end{bmatrix},\ \lambda\in\mathbb{G}_{m}(R),\ \mathbf{B}\in\mathrm{SO}(\mathbf{Q}_{0})(R),\ \mathbf{v}\in U_{\mathrm{GL},\overline{\mu}}(\mathbb{Q}_{p})\otimes R. \tag{6.2.4}\]
Then the preimage of \(g\) under (6.2.3) is
\[\begin{bmatrix}\lambda^{-1}&-\lambda^{-1}\mathbf{v}^{t}\mathbf{Q}_{0}\mathbf{B}&- \frac{1}{2}\lambda^{-1}\mathbf{v}^{t}\mathbf{Q}_{0}\mathbf{v}\\ &\mathbf{B}&\mathbf{v}\\ &&\lambda\end{bmatrix}. \tag{6.2.5}\]
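For concreteness, one can check directly that the matrix (6.2.5) preserves the pairing \(\mathbf{Q}\). Write \(\tilde{g}\) for the matrix (6.2.5) and set \(\mathbf{a}:=-\lambda^{-1}\mathbf{v}^{t}\mathbf{Q}_{0}\mathbf{B}\) and \(c:=-\frac{1}{2}\lambda^{-1}\mathbf{v}^{t}\mathbf{Q}_{0}\mathbf{v}\) for its top-row entries (this is merely shorthand notation). A direct block computation gives
\[\tilde{g}^{t}\,\mathbf{Q}\,\tilde{g}=\begin{bmatrix}&&1\\ &\mathbf{B}^{t}\mathbf{Q}_{0}\mathbf{B}&\mathbf{B}^{t}\mathbf{Q}_{0}\mathbf{v}+\lambda\mathbf{a}^{t}\\ 1&\mathbf{v}^{t}\mathbf{Q}_{0}\mathbf{B}+\lambda\mathbf{a}&\mathbf{v}^{t}\mathbf{Q}_{0}\mathbf{v}+2\lambda c\end{bmatrix}=\mathbf{Q},\]
since \(\mathbf{B}^{t}\mathbf{Q}_{0}\mathbf{B}=\mathbf{Q}_{0}\) (as \(\mathbf{B}\in\mathrm{SO}(\mathbf{Q}_{0})(R)\)), \(\lambda\mathbf{a}^{t}=-\mathbf{B}^{t}\mathbf{Q}_{0}\mathbf{v}\) (using the symmetry of \(\mathbf{Q}_{0}\)), and \(2\lambda c=-\mathbf{v}^{t}\mathbf{Q}_{0}\mathbf{v}\).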
#### 6.2.1. Local monodromy
Let \(\Psi_{i}^{/x}\) be the pullback of \(\Psi_{i}\) to \(\mathscr{S}_{i,\mathbb{F}}^{/x}\) and write \(\Psi_{\mathbf{I}}^{/x}=\prod_{i\in\mathbf{I}}\Psi_{i}^{/x}\). We have \(\mathbb{D}(\Psi_{\mathbf{I},X}^{/x})=\mathbb{D}(\Psi_{\mathbf{I},X})^{/x}\). By Lemma 3.1, the local monodromy is
\[G(\mathbb{D}(\Psi_{\mathbf{I},X})^{/x},x)=U(\mathbb{D}(\Psi_{\mathbf{I},X})^{/ x},x)\rtimes\mathrm{im}\,\overline{\mu}_{\mathbf{I}}.\]
**Proposition 6.3**.: _Notation being the same as in SS3.2. We have_
1. _Regarding_ \(T_{f,x}\) _as a subgroup of_ \(U_{\mathrm{GL}^{\prime},\overline{\mu}_{\mathbf{I}}}\)_, we have_ \[U(\mathbb{D}(\Psi_{\mathbf{I},X_{0}}),x)=U(\mathbb{D}(\Psi_{\mathbf{I},X})^{/ x},x)=T_{f,x}.\]
2. _Regarding_ \(T_{f,x}\) _as a subgroup of_ \(U_{\mathrm{SO}^{\prime},\mu_{\mathbf{I}}^{c}}\)_, we have_ \[U(\mathbb{L}_{\mathbf{I},p,X_{0}}^{-},x)=U(\mathbb{L}_{\mathbf{I},p,X}^{-,/x},x)=T_{f,x}.\]
Proof.: In view of the isomorphism (6.2.3), it suffices to prove (1). Let \(\Lambda=U_{\mathrm{GSpin}^{\prime},\mu_{\mathbf{I}}^{-1}}(\mathbb{Z}_{p})\), \(\Lambda^{\vee}=U_{\mathrm{GSpin}^{\prime},\mu_{\mathbf{I}}}(\mathbb{Z}_{p})\) and \(X^{/x}=\operatorname{Spf}R\). Consider the pairing \(q\in\operatorname{Hom}(\Lambda,\mathbb{G}_{m}^{\wedge}(R))\) that arises from \(\Psi_{\mathbf{I},X}^{/x}\). Since \(R\) is reduced, \(\ker(q)\) is saturated in \(\Lambda\). Let \(\ker(q)^{\perp}\) be the sub-lattice of \(\Lambda^{\vee}\) that pairs to \(0\) with \(\ker(q)\). Then \(\ker(q)^{\perp}\otimes\mathbb{G}_{m}^{\wedge}\) is the smallest subtorus of \(\Lambda^{\vee}\otimes\mathbb{G}_{m}^{\wedge}=\mathscr{S}_{\mathbf{I},\mathbb{F}}^{/x}\) through which \(f^{/x}\) factors. This shows that \(\mathcal{T}_{f,x}=\ker(q)^{\perp}\). We are done by Theorem 3.2.
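To illustrate the duality used in this proof, consider the following purely illustrative toy example (not taken from the text): let \(\Lambda=\mathbb{Z}_{p}\lambda_{1}\oplus\mathbb{Z}_{p}\lambda_{2}\), \(R=\mathbb{F}[[t]]\), and \(q\in\operatorname{Hom}_{\mathbb{Z}_{p}}(\Lambda,\mathbb{G}_{m}^{\wedge}(R))\) with \(q(\lambda_{1})=1+t\) and \(q(\lambda_{2})=1\). Then \(\ker(q)=\mathbb{Z}_{p}\lambda_{2}\) is saturated, \(\ker(q)^{\perp}=\mathbb{Z}_{p}\lambda_{1}^{\vee}\subseteq\Lambda^{\vee}\), and \(\ker(q)^{\perp}\otimes\mathbb{G}_{m}^{\wedge}\) is the one-dimensional formal subtorus of \(\Lambda^{\vee}\otimes\mathbb{G}_{m}^{\wedge}\) consisting of the homomorphisms killing \(\lambda_{2}\); it is visibly the smallest subtorus through which \(q\) factors.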
#### 6.2.2. Global monodromy
By Theorem 3.2 and Proposition 6.3, the structure of the global monodromy can be understood as
\[G(\mathbb{D}(\Psi_{\mathbf{I},X_{0}}),x)=T_{f,x}\rtimes G(\operatorname{gr}\mathbb{D}(\Psi_{\mathbf{I},X_{0}}),x), \tag{6.2.6}\]
\[G(\mathbb{L}_{\mathbf{I},p,X_{0}}^{-},x)=T_{f,x}\rtimes G(\operatorname{gr}\mathbb{L}_{\mathbf{I},p,X_{0}}^{-},x). \tag{6.2.7}\]
The following recovers an independence result of Chai ([1]) in the case of products of \(\mathrm{GSpin}\) Shimura varieties:
**Corollary 6.4**.: _The rank of the formal subtorus \(\mathscr{T}_{f,x}\) is independent of the choice of \(x\). In particular, if \(X_{0}\) is Tate-linear at one point, it is Tate-linear at all points._
Proof.: For different \(x\), the groups \(G(\mathbb{D}(\Psi_{\mathbf{I},X_{0}}),x)\) are isomorphic. Similarly, for different \(x\), the groups \(G(\operatorname{gr}\mathbb{D}(\Psi_{\mathbf{I},X_{0}}),x)\) are also isomorphic. Therefore \(\dim U(\mathbb{D}(\Psi_{\mathbf{I},X})^{/x},x)\) is the same for all \(x\). It now follows from Proposition 6.3 that \(\operatorname{rk}\mathscr{T}_{f,x}\) is independent of the choice of \(x\).
### The case of products of modular curves
We prove the Tate-linear conjecture for products of modular curves. Let \(f_{i}\) be the composition of \(f\) with the projection \(\mathscr{S}_{\mathbf{I},\mathbb{F}}\to\mathscr{S}_{i,\mathbb{F}}\). More generally, for an index subset \(\mathbf{J}\subset\mathbf{I}\), let \(f_{\mathbf{J}}\) be the composition of \(f\) with the projection \(\mathscr{S}_{\mathbf{I},\mathbb{F}}\to\mathscr{S}_{\mathbf{J},\mathbb{F}}\). In particular, \(f=f_{\mathbf{I}}\).
Note that \(\mathscr{T}_{f_{i},x}\) is both the smallest formal torus that \(f_{i}^{/x}\) factors through, and the projection of \(\mathscr{T}_{f,x}\) to \(\mathscr{S}_{i,\mathbb{F}}^{/x}\). Without loss of generality, we can assume that the projection of \(\mathscr{T}_{f_{\mathbf{I}},x}\) to each \(\mathscr{S}_{i,\mathbb{F}}^{/x}\) is nontrivial. Since \(\mathscr{S}_{i,\mathbb{F}}^{/x}\) is one dimensional, this forces \(\mathscr{T}_{f_{i},x}=\mathscr{S}_{i,\mathbb{F}}^{/x}\).
#### 6.3.1. The Tate-linear cocharacter \(T_{f,x}\)
To begin with, we show that only very special kinds of subtori of \(\mathscr{S}_{\mathbf{I},\mathbb{F}}^{/x}\) can arise as \(\mathscr{T}_{f,x}\). We first establish a special case of the mod \(p\) Mumford-Tate conjecture, which plays an important role in revealing the structure of \(\mathscr{T}_{f,x}\).
**Lemma 6.5**.: \(G(\mathbb{L}_{i,p,X_{0}},x)^{\circ}=\mathrm{Hdg}(f_{i})_{\mathbb{Q}_{p}}= \mathrm{SO}(2,1)_{\mathbb{Q}_{p}}\)_._
Proof.: We know that \(G(\mathbb{L}_{i,p,X_{0}},x)^{\circ}\subseteq\mathrm{Hdg}(f_{i})_{\mathbb{Q}_{p}}=\mathrm{SO}(2,1)_{\mathbb{Q}_{p}}\) is a reductive subgroup. Since \(\mathrm{rk}\,\mathscr{T}_{f_{i},x}=1\), we see from Proposition 6.3 that \(U(\mathbb{L}_{i,p,X_{0}}^{-},x)\) is one dimensional. Since the only connected reductive subgroups of \(\mathrm{SO}(2,1)_{\mathbb{Q}_{p}}\) are \(1\), \(\mathbb{G}_{m}\) and \(\mathrm{SO}(2,1)_{\mathbb{Q}_{p}}\), we have \(G(\mathbb{L}_{i,p,X_{0}},x)^{\circ}=\mathrm{SO}(2,1)_{\mathbb{Q}_{p}}\).
**Corollary 6.6**.: _There is a partition \(\mathbf{I}=\bigsqcup_{h\in\mathcal{H}}\mathbf{I}_{h}\) such that_
1. \(G(\mathbb{L}_{\mathbf{I},p,X_{0}},x)^{\circ}=\prod_{h\in\mathcal{H}}G(\mathbb{ L}_{\mathbf{I}_{h},p,X_{0}},x)^{\circ}\)_, and for each_ \(h\in\mathcal{H}\) _and_ \(i\in\mathbf{I}_{h}\)_, the natural projection_ \(G(\mathbb{L}_{\mathbf{I}_{h},p,X_{0}},x)^{\circ}\to G(\mathbb{L}_{i,p,X_{0}}, x)^{\circ}\) _is an isomorphism._
2. \(\mathscr{T}_{f,x}=\prod_{h\in\mathcal{H}}\mathscr{T}_{f_{\mathbf{I}_{h},x}}\)_, and for each_ \(h\in\mathcal{H}\) _and_ \(i\in\mathbf{I}_{h}\)_, the projection_ \(\mathscr{T}_{f_{\mathbf{I}_{h},x}}\to\mathscr{T}_{f_{i},x}\) _is an isomorphism._
Proof.: By Tannakian formalism, there are natural projections \(G(\mathbb{L}_{\mathbf{I},p,X_{0}},x)^{\circ}\to G(\mathbb{L}_{i,p,X_{0}},x)^{\circ}\) such that the induced morphism \(G(\mathbb{L}_{\mathbf{I},p,X_{0}},x)^{\circ}\to\prod_{i\in\mathbf{I}}G(\mathbb{ L}_{i,p,X_{0}},x)^{\circ}\) is an embedding. It follows from Lemma 6.5 that each \(G(\mathbb{L}_{i,p,X_{0}},x)^{\circ}\) is isomorphic to \(\mathrm{SO}(2,1)_{\mathbb{Q}_{p}}\). Since \(\mathrm{SO}(2,1)_{\mathbb{Q}_{p}}\) is adjoint and simple, Goursat's lemma implies that there is a partition \(\mathbf{I}=\bigsqcup_{h\in\mathcal{H}}\mathbf{I}_{h}\) such that \(G(\mathbb{L}_{\mathbf{I},p,X_{0}},x)^{\circ}=\prod_{h\in\mathcal{H}}G(\mathbb{ L}_{\mathbf{I}_{h},p,X_{0}},x)^{\circ}\) and, for each \(h\in\mathcal{H}\) and \(i\in\mathbf{I}_{h}\), the projection \(G(\mathbb{L}_{\mathbf{I}_{h},p,X_{0}},x)^{\circ}\to G(\mathbb{L}_{i,p,X_{0}}, x)^{\circ}\) is an isomorphism. This proves (1).
By Theorem 3.3, for each \(h\in\mathcal{H}\) and \(i\in\mathbf{I}_{h}\), the projection \(G(\mathbb{L}_{\mathbf{I}_{h},p,X_{0}}^{-},x)^{\circ}\to G(\mathbb{L}_{i,p,X_{0 }}^{-},x)^{\circ}\) is an isomorphism. Passing to unipotent radical and using Proposition 6.3, we see that \(T_{f,x}=\prod_{h\in\mathcal{H}}T_{f_{\mathbf{I}_{h},x}}\), and for each \(h\in\mathcal{H}\) and \(i\in\mathbf{I}_{h}\), the projection \(T_{f_{\mathbf{I}_{h},x}}\to T_{f_{i},x}\) is an isomorphism. This implies (2).
#### 6.3.2. The crystalline endomorphisms \(\{\boldsymbol{\delta}_{h,x}\}_{h\in\mathcal{H}}\)
Our goal is to construct a family of crystalline endomorphisms \(\{\boldsymbol{\delta}_{h,x}\}_{h\in\mathcal{H}}\subseteq\mathrm{End}(\mathscr{A}_{\mathbf{I},\overline{\eta}}^{\mathrm{KS}})\otimes\mathbb{Z}_{p}\) indexed by the partition \(\mathcal{H}\) as in Corollary 6.6, and show that \(\{\boldsymbol{\delta}_{h,x}\}_{h\in\mathcal{H}}\) meets the conditions of Lemma 6.1.
Possibly replacing \(X_{0}\) by an etale cover, we can assume \(G(\mathbb{L}_{\mathbf{I},p,X_{0}},x)\) is connected. Therefore each \(G(\mathbb{L}_{\mathbf{I}_{h},p,X_{0}},x)\) and \(G(\mathbb{L}_{i,p,X_{0}},x)\) are also connected. Similar to Remark 6.2, for each \(i\in\mathbf{I}\), we arrange the basis \(\{e_{i,1},e_{i,2},e_{i,3}\}\) of \(\omega_{x}(\mathbb{L}_{i,p,X_{0}})\) so that the quadratic pairing is
\[\mathbf{Q}_{i}=\begin{bmatrix}&&1\\ &1&\\ 1&&\end{bmatrix},\]
and the filtration is given by \(\omega_{x}(\mathbb{D}(\mathrm{Br}_{i}))=\mathrm{Span}_{\mathbb{Q}_{p}}\{e_{i,3}\}\) and \(\omega_{x}(\mathbb{D}(\Psi_{i}))=\mathrm{Span}_{\mathbb{Q}_{p}}\{e_{i,2},e_{i,3}\}\). For a \(\mathbb{Q}_{p}\)-algebra \(R\), an element \(g\in G(\mathbb{L}_{i,p,X_{0}}^{-},x)(R)\) is of the form
\[g=\begin{bmatrix}\lambda^{-1}&-\lambda^{-1}\mathbf{v}^{t}&-\frac{1}{2}\lambda^{- 1}\mathbf{v}^{t}\mathbf{v}\\ &1&\mathbf{v}\\ &&\lambda\end{bmatrix},\ \ \lambda\in\mathbb{G}_{m}(R),\mathbf{v}\in\mathrm{Span}_{ \mathbb{Q}_{p}}\{e_{i,3}^{\vee}\otimes e_{i,2}\}\otimes R. \tag{6.3.1}\]
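Note that, granting the identifications above, (6.3.1) is simply the specialization of the general formula (6.2.5) to the case \(b=1\): taking \(\mathbf{Q}_{0}=(1)\) (a \(1\times 1\) matrix) and \(\mathbf{B}=1\) in (6.2.5) recovers the displayed matrix, with \(\mathbf{v}^{t}\mathbf{Q}_{0}\mathbf{B}=\mathbf{v}\) and \(\mathbf{v}^{t}\mathbf{Q}_{0}\mathbf{v}=\mathbf{v}^{2}\).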
Fix an index \(k\in\mathbf{I}_{h}\). Corollary 6.6(2) shows that there are \(a_{j}\in\mathbb{Z}_{p}^{*}\) for \(j\in\mathbf{I}_{h}-\{k\}\), such that \(T_{f_{\mathbf{I}_{h},x}}\) is the graph of a linear morphism
\[T_{f_{k},x}\xrightarrow{(a_{j})}\prod_{j\in\mathbf{I}_{h}-\{k\}}T_{f_{j},x}.\]
We further set \(a_{k}=1\) for convenience. Let \(A_{j}=\begin{bmatrix}a_{j}^{-1}&&\\ &1&\\ &&a_{j}\end{bmatrix}\) for \(j\in\mathbf{I}_{h}\). It follows from Corollary 6.6(1) and the explicit expression (6.3.1) that \(G(\mathbb{L}_{\mathbf{I}_{h},p,X_{0}},x)\) is the graph of the following morphism
\[G(\mathbb{L}_{k,p,X_{0}},x)\xrightarrow{(\operatorname{ad}A_{j})}\prod_{j\in\mathbf{I}_{h}-\{k\}}G(\mathbb{L}_{j,p,X_{0}},x). \tag{6.3.2}\]
One can then construct an endomorphism:
\[d_{h,x}:\omega_{x}(\mathbb{L}_{\mathbf{I},p,X_{0}})\xrightarrow{\operatorname {proj}_{k}}\omega_{x}(\mathbb{L}_{k,p,X_{0}})\xrightarrow{(A_{j})}\bigoplus_{ j\in\mathbf{I}_{h}}\omega_{x}(\mathbb{L}_{j,p,X_{0}})=\omega_{x}(\mathbb{L}_{ \mathbf{I}_{h},p,X_{0}})\hookrightarrow\omega_{x}(\mathbb{L}_{\mathbf{I},p,X_ {0}}). \tag{6.3.3}\]
Using the explicit expression (6.3.2), we deduce that \(d_{h,x}\) is an endomorphism of \(G(\mathbb{L}_{\mathbf{I},p,X_{0}},x)\)-representations. Furthermore, note that each \(A_{j}\) is an isometry, i.e., \(\mathbf{Q}_{j}(A_{j}-,A_{j}-)=\mathbf{Q}_{k}(-,-)\), and that the coefficients of \(A_{j}\) all lie in \(\mathbb{Z}_{p}\). Therefore, by the functoriality of the Kuga-Satake construction, the equivalence between the category of Dieudonne crystals and the category of \(p\)-divisible groups over \(X_{0}\) ([11]), and the crystalline isogeny theorem over finitely generated fields ([11]), \(d_{h,x}\) gives rise to a crystalline endomorphism
\[\boldsymbol{\delta}_{h,x}\in\operatorname{End}(\mathscr{A}_{\mathbf{I},\eta} ^{\operatorname{KS}})\otimes\mathbb{Z}_{p}\subseteq\operatorname{End}( \mathscr{A}_{\mathbf{I},x}^{\operatorname{KS}})\otimes\mathbb{Z}_{p}, \tag{6.3.4}\]
where \(\eta\) is the generic point of \(X\).
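For concreteness, the isometry property \(\mathbf{Q}_{j}(A_{j}-,A_{j}-)=\mathbf{Q}_{k}(-,-)\) noted above amounts to the matrix identity
\[A_{j}^{t}\,\mathbf{Q}_{j}\,A_{j}=\begin{bmatrix}a_{j}^{-1}&&\\ &1&\\ &&a_{j}\end{bmatrix}\begin{bmatrix}&&1\\ &1&\\ 1&&\end{bmatrix}\begin{bmatrix}a_{j}^{-1}&&\\ &1&\\ &&a_{j}\end{bmatrix}=\begin{bmatrix}&&1\\ &1&\\ 1&&\end{bmatrix}=\mathbf{Q}_{k},\]
which holds since \(A_{j}\) is diagonal and \(a_{j}^{-1}\cdot a_{j}=1\).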
Proof of Theorem 4.10 in the case of products of modular curves.: We show that the set \(\boldsymbol{\Delta}=\{\boldsymbol{\delta}_{h,x}\}_{h\in\mathcal{H}}\) is sufficient in the sense of Definition 1(3). Clearly, one can reduce to the case where \(\#\mathcal{H}=1\). In the following we assume \(\#\mathcal{H}=1\) and drop the subscript \(h\). Let \(\Lambda=\operatorname{Span}_{\mathbb{Z}_{p}}\{e_{i,2}^{\vee}\otimes e_{i,3}\}_{i\in\mathbf{I}}\). We can then identify
\[X^{*}(\mathscr{S}_{\mathbf{I},W}^{/x})\simeq U_{\operatorname{GL}^{\prime}, \overline{\mu_{\mathbf{I}}^{-1}}}(\mathbb{Z}_{p})\simeq\Lambda.\]
Serre-Tate theory implies that \(q\in\operatorname{Hom}_{\mathbb{Z}_{p}}\left(\Lambda,\mathbb{G}_{m}^{\wedge}\right)\) lies in \(\mathscr{D}_{\boldsymbol{\Delta}}\) if and only if for every \(i\in\mathbf{I}\), the identity \(q(A_{i}e_{k,3}\otimes e_{i,2}^{\vee})=q(e_{k,3}\otimes A_{i}^{t}e_{i,2}^{ \vee})\) holds. If we write \(\mathscr{D}_{\boldsymbol{\Delta}}=\operatorname{Hom}_{\mathbb{Z}_{p}}\left( \Lambda/\Lambda_{\boldsymbol{\Delta}},\mathbb{G}_{m}^{\wedge}\right)\), then
\[\Lambda_{\boldsymbol{\Delta}}=\operatorname{Span}_{\mathbb{Z}_{p}}\{e_{k,2}^{ \vee}\otimes e_{k,3}-a_{i}e_{i,2}^{\vee}\otimes e_{i,3}\}_{i\in\mathbf{I}}.\]
Clearly, \(\Lambda_{\boldsymbol{\Delta}}=\mathcal{K}_{f,x}\), so \(\boldsymbol{\Delta}\) is sufficient. Lemma 6.1 implies that Conjecture 4.5 holds in the case of products of modular curves.
### The case of GSpin Shimura varieties
In this section we suppose that \(\#\mathbf{I}=1\). We drop the subscript \(\mathbf{I}\) as usual. Let \(b=\dim\mathcal{S}\) and \(d=\operatorname{rk}\mathscr{T}_{f,x}\). Recall that \(\mathbf{Q}\) and \(\mathbf{Q}_{0}\) are the quadratic pairings over \(\omega_{x}(\mathbb{L}_{p})\) and \(\omega_{x}(\mathbb{D}(\Psi^{\mathrm{\acute{e}t}}))\), respectively. The following identifications are all canonical up to scalars
\[U_{\operatorname{GL},\overline{\mu}}(\mathbb{Q}_{p})\simeq\omega_{x}(\mathbb{D}(\operatorname{Br}))^{\vee}\otimes\omega_{x}(\mathbb{D}(\Psi^{\mathrm{\acute{e}t}}))\simeq\omega_{x}(\mathbb{D}(\Psi^{\mathrm{\acute{e}t}})), \tag{6.4.1}\]
\[U_{\operatorname{GL},\overline{\mu}^{-1}}(\mathbb{Q}_{p})\simeq\omega_{x}(\mathbb{D}(\operatorname{Br}))\otimes\omega_{x}(\mathbb{D}(\Psi^{\mathrm{\acute{e}t}}))^{\vee}\simeq\omega_{x}(\mathbb{D}(\Psi^{\mathrm{\acute{e}t}}))^{\vee}. \tag{6.4.2}\]
From (6.4.1), \(T_{f,x}\) can be canonically considered as a quadratic sublattice of \(\omega_{x}(\mathbb{D}(\Psi^{\mathrm{\acute{e}t}}))\).
#### 6.4.1. The Tate-linear cocharacter \(T_{f,x}\)
Similar to the case of products of modular curves, only very special kinds of subtori of \(\mathscr{S}_{\mathbb{F}}^{/x}\) can arise as \(\mathscr{T}_{f,x}\). In fact, we show that, as a quadratic subspace of \(\omega_{x}(\mathbb{D}(\Psi^{\mathrm{\acute{e}t}}))\), \(T_{f,x}\) can only be nondegenerate or totally isotropic (Proposition 6.8).
Let \(T_{f,x}^{\perp}\) be the orthogonal complement of \(T_{f,x}\) in \(\omega_{x}(\mathbb{D}(\Psi^{\text{\'{e}t}}))\) and let \(U_{f,x}:=T_{f,x}\cap T_{f,x}^{\perp}\). Clearly, \(T_{f,x},T_{f,x}^{\perp},U_{f,x}\) are subrepresentations of \(G(\mathbb{D}(\Psi^{\text{\'{e}t}}_{X_{0}}),x)\). Since \(G(\mathbb{D}(\Psi^{\text{\'{e}t}}_{X_{0}}),x)\) is reductive by Theorem 3.3, there exist \(G(\mathbb{D}(\Psi^{\text{\'{e}t}}_{X_{0}}),x)\)-subrepresentations \(V_{f,x}\), \(V^{\prime}_{f,x}\) and \(U^{\prime}_{f,x}\) such that
\[\begin{split}&\omega_{x}(\mathbb{D}(\Psi^{\text{\'{e}t}}))=U_{f,x} \oplus U^{\prime}_{f,x}\oplus V_{f,x}\oplus V^{\prime}_{f,x},\\ & T_{f,x}=V_{f,x}\oplus U_{f,x},\\ & T_{f,x}^{\perp}=U_{f,x}\oplus V^{\prime}_{f,x}.\end{split} \tag{6.4.3}\]
We note that \(U_{f,x}\) is totally isotropic, while \(V_{f,x},V^{\prime}_{f,x}\) and \(U_{f,x}\oplus U^{\prime}_{f,x}\) are nondegenerate. Furthermore, \(\dim U_{f,x}=\dim U^{\prime}_{f,x}\), and \(U_{f,x}\) is a maximal totally isotropic subspace of \(U_{f,x}\oplus U^{\prime}_{f,x}\). Therefore the map \(G(\mathbb{D}(\Psi^{\text{\'{e}t}}_{X_{0}}),x)\to\operatorname{SO}(U_{f,x} \oplus U^{\prime}_{f,x})\) factors through a unitary subgroup associated to an Hermitian form \(\varphi_{f,x,0}\) with \(\operatorname{Tr}\varphi_{f,x,0}=\mathbf{Q}_{0}|_{U_{f,x}\oplus U^{\prime}_{ f,x}}\). Consequently, we might and do assume that \(U^{\prime}_{f,x}\) is totally isotropic and dual to \(U_{f,x}\) with respect to \(\mathbf{Q}_{0}\). In a similar manner, we consider certain distinguished subspaces of \(\omega_{x}(\mathbb{L}_{p})\). Let \(B_{x}=\omega_{x}(\mathbb{D}(\operatorname{Br}))\oplus\omega_{x}(\mathbb{D}( \operatorname{Br})^{\vee}(-1))\). We define
\[\begin{split}& M_{f,x}^{\text{\'{e}t}}:=U_{f,x}\oplus U^{\prime }_{f,x}\oplus V_{f,x},\\ & M_{f,x}:=B_{x}\oplus M_{f,x}^{\text{\'{e}t}},\\ & W_{f,x}:=U_{f,x}\oplus\omega_{x}(\mathbb{D}(\operatorname{Br})),\\ & W^{\prime}_{f,x}:=U^{\prime}_{f,x}\oplus\omega_{x}(\mathbb{D}( \operatorname{Br})^{\vee}(-1)).\end{split} \tag{6.4.4}\]
Note that \(\omega_{x}(\mathbb{D}(\operatorname{Br}))\) and \(\omega_{x}(\mathbb{D}(\operatorname{Br})^{\vee}(-1))\) are one dimensional subspaces of \(\omega_{x}(\mathbb{L}_{p})\), which are totally isotropic and dual to each other. So \(W_{f,x}\) and \(W^{\prime}_{f,x}\) are totally isotropic and dual to each other. Furthermore, \(W_{f,x}\oplus W^{\prime}_{f,x}\) is equipped with an Hermitian form \(\varphi_{f,x}\) such that \(\operatorname{Tr}\varphi_{f,x}=\mathbf{Q}|_{W_{f,x}\oplus W^{\prime}_{f,x}}\) and \(\varphi_{f,x,0}=\varphi_{f,x}|_{U_{f,x}\oplus U^{\prime}_{f,x}}\).
**Lemma 6.7**.: _We have a splitting \(\omega_{x}(\mathbb{L}_{p})=V^{\prime}_{f,x}\oplus M_{f,x}\) as \(G(\mathbb{L}_{p,X_{0}},x)\)-representations. Equivalently, \(\mathbb{L}_{p,X_{0}}=\mathbb{V}^{\prime}_{p,f}\oplus\mathbb{M}_{p,f}\) in **F-Isoc\({}^{\dagger}(X_{0})\)**. If \(V_{f,x}=0\), then we have a further splitting \(M_{f,x}=W_{f,x}\oplus W^{\prime}_{f,x}\) in the category of \(G(\mathbb{L}_{p,X_{0}},x)\)-representations. Equivalently, \(\mathbb{M}_{p,f}=\mathbb{W}_{p,f}\oplus\mathbb{W}^{\prime}_{p,f}\) in **F-Isoc\({}^{\dagger}(X_{0})\)**, and \(\mathbb{W}_{p,f}\) is dual to \(\mathbb{W}^{\prime}_{p,f}\). (Here we denote by \(\mathbb{V}^{\prime}_{p,f}\), \(\mathbb{M}_{p,f}\), \(\mathbb{W}_{p,f}\) and \(\mathbb{W}^{\prime}_{p,f}\) the overconvergent \(F\)-isocrystals arising from \(V^{\prime}_{f,x}\), \(M_{f,x}\), \(W_{f,x}\) and \(W^{\prime}_{f,x}\) via Tannakian formalism. The subscript \(p\), as usual, stands for "cris")._
Proof.: Easy computation involving explicit formulas in Remark 6.2 shows that \(V^{\prime}_{f,x}\) and \(M_{f,x}\) are \(G(\mathbb{L}_{p,X_{0}}^{-},x)\)-subrepresentations of \(\omega_{x}(\mathbb{L}_{p})\). So at least, we have \(\omega_{x}(\mathbb{L}_{p})=V^{\prime}_{f,x}\oplus M_{f,x}\) in the category of \(G(\mathbb{L}_{p,X_{0}}^{-},x)\)-subrepresentations. Let \(\mathbb{V}^{\prime-}_{p,f}\) and \(\mathbb{M}^{-}_{p,f}\) be the \(F\)-isocrystals corresponding to \(V^{\prime}_{f,x}\) and \(M_{f,x}\). We have \(\mathbb{L}_{p,X_{0}}^{-}=\mathbb{V}^{\prime-}_{p,f}\oplus\mathbb{M}^{-}_{p,f}\) in the category of \(F\)-isocrystals. Note that, there is an idempotent \(\alpha\in\operatorname{End}(\mathbb{L}_{p,X_{0}}^{-})\) such that \(\ker\alpha=\mathbb{V}^{\prime-}_{p,f}\) and \(\operatorname{im}\alpha=\mathbb{M}^{-}_{p,f}\). Since \(\mathbb{L}_{p,X_{0}}\) is overconvergent, Kedlaya's result [10] implies that \(\alpha\in\operatorname{End}(\mathbb{L}_{p,X_{0}})\). Therefore \(\ker\alpha\) and \(\operatorname{im}\alpha\) are also overconvergent and \(V^{\prime}_{f,x}\) and \(M_{f,x}\) are \(G(\mathbb{L}_{p,X_{0}},x)\)-subrepresentations of \(\omega_{x}(\mathbb{L}_{p})\).
Now suppose that \(V_{f,x}=0\). An easy computation again shows that \(M_{f,x}=W_{f,x}\oplus W^{\prime}_{f,x}\) in the category of \(G(\mathbb{L}_{p,X_{0}}^{-},x)\)-representations. Let \(\mathbb{W}^{-}_{p,f}\) and \(\mathbb{W}^{\prime-}_{p,f}\) be the \(F\)-isocrystals corresponding to \(W_{f,x}\) and \(W^{\prime}_{f,x}\). So \(\mathbb{M}_{p,f}=\mathbb{W}^{-}_{p,f}\oplus\mathbb{W}^{\prime-}_{p,f}\) at least in the category of \(F\)-isocrystals. Similar to the previous case, [10] again implies that \(\mathbb{W}^{-}_{p,f}\) and \(\mathbb{W}^{\prime-}_{p,f}\) are overconvergent. The assertion that \(\mathbb{W}_{p,f}\) and \(\mathbb{W}^{\prime}_{p,f}\) are mutually dual follows easily.
**Proposition 6.8**.: _Either \(V_{f,x}=0\) or \(U_{f,x}=0\). In other words, \(T_{f,x}\) is either nondegenerate or totally isotropic. Furthermore, we have_
\[G(\mathbb{M}_{p,f},x)^{\circ}=\left\{\begin{aligned} & \operatorname{SO}(M_{f,x}),&& U_{f,x}=0,\\ &\operatorname{U}(\varphi_{f,x}),&& V_{f,x}=0.\end{aligned}\right.\]
Proof.: By Lemma 6.7, there is an overconvergent sub-\(F\)-isocrystal \(\mathbb{M}_{p,f}\subseteq\mathbb{L}_{p,X_{0}}\). Let \(\mathbb{M}_{p,f}^{\text{\'{e}t}}\subseteq\mathbb{D}(\Psi_{X_{0}}^{\text{\'{e} t}})\) be the sub-\(F\)-isocrystal corresponding to \(M_{f,x}^{\text{\'{e}t}}\). Theorem 3.3 implies that \(G(\mathbb{M}_{p,f},x)^{\circ}\) is a reductive subgroup of \(\operatorname{SO}(M_{f,x})\) which admits \(G(\mathbb{M}_{p,f}^{-},x)^{\circ}\) as the parabolic subgroup corresponding to \(\mu^{c}\). The Levi of \(G(\mathbb{M}_{p,f}^{-},x)^{\circ}\) is \(G(\mathbb{M}_{p,f}^{\text{\'{e}t}},x)^{\circ}\times\operatorname{im}\mu^{c}\). The projection of \(G(\mathbb{M}_{p,f}^{\text{\'{e}t}},x)^{\circ}\) to \(\operatorname{GL}(V_{f,x})\)_resp_. \(\operatorname{GL}(U_{f,x}\oplus U_{f,x}^{\prime})\) lies in \(\operatorname{SO}(V_{f,x})\)_resp_. \(\operatorname{U}(\varphi_{f,x,0})\), so we have
\[G(\mathbb{M}_{p,f}^{\text{\'{e}t}},x)^{\circ}\subseteq\operatorname{SO}(V_{f, x})\times\operatorname{U}(\varphi_{f,x,0}). \tag{6.4.5}\]
The unipotent of \(G(\mathbb{M}_{p,f}^{-},x)^{\circ}\) corresponding to \(\mu^{c}\) is \(U(\mathbb{L}_{p,X_{0}}^{-},x)\), which by Proposition 6.3 is \(T_{f,x}\) (considered as a subgroup of \(U_{\operatorname{SO},\mu^{c}}\)). Therefore
\[U(\mathbb{L}_{p,X_{0}}^{-},x)=U_{\operatorname{SO}(V_{f,x}\oplus B_{x}),\mu^{ c}}\times U_{\operatorname{U}(\varphi_{f,x}),\mu^{c}}. \tag{6.4.6}\]
Upon base changing to \(\overline{\mathbb{Q}}_{p}\), we are in the situation of SS5. In fact, let \(K,M,M_{0}\) and \(B\) in SS5 be \(\overline{\mathbb{Q}}_{p},M_{f,x},M_{f,x}^{\text{\'{e}t}}\) and \(B_{x}\), respectively. Let \(Q^{\prime},\nu,G\) and \(G_{0}\) in SS5 be \(\mathbf{Q}|_{M_{f,x}},\mu^{c}|_{\operatorname{SO}(M_{f,x})}\), \(G(\mathbb{M}_{p,f},x)^{\circ}\) and \(G(\mathbb{M}_{p,f}^{\text{\'{e}t}},x)^{\circ}\), respectively.
We wish to apply Lemma 5.3 to \(G=G(\mathbb{M}_{p,f},x)^{\circ}\). In fact, it suffices to take \(M_{a,0}=V_{f,x}\), \(M_{b,0}=U_{f,x}\oplus U_{f,x}^{\prime}\) and \((M_{b},\varphi_{b})=(U_{f,x}\oplus U_{f,x}^{\prime}\oplus B_{x},\varphi_{f,x})\) (which is an Hermitian space respecting \(\mathbf{Q}|_{M_{b}}\) in the sense of SS5.1.1). Now (6.4.5) and (6.4.6) translate to the fact that \(G\) meets conditions (1) and (2) of Lemma 5.3. It follows from the lemma that either \(U_{f,x}=0\) and \(G(\mathbb{M}_{p,f},x)^{\circ}=\operatorname{SO}(M_{f,x})\), or \(V_{f,x}=0\) and \(G(\mathbb{M}_{p,f},x)^{\circ}=\operatorname{U}(\varphi_{f,x})\).
_Remark 6.9_.: The identity (5.2.1) from Lemma 5.3 also translates to the following:
\[G(\mathbb{M}_{p,f}^{\text{\'{e}t}},x)^{\circ}=\left\{\begin{aligned} & \operatorname{SO}(T_{f,x}),&& T_{f,x}\text{ is nondegenerate},\\ &\operatorname{U}(\varphi_{f,x,0}),&& T_{f,x} \text{ is totally isotropic}.\end{aligned}\right.\]
#### 6.4.2. The crystalline endomorphisms \(\boldsymbol{\delta}_{\mathbf{v},x}\) and \(\boldsymbol{\delta}_{\mathbf{w},x}\)
We construct fundamental crystalline endomorphisms \(\boldsymbol{\Delta}\subseteq\operatorname{End}(\mathscr{A}_{\eta}^{ \text{KS}})\otimes\mathbb{Z}_{p}\) that meet the conditions of Lemma 6.1. Possibly replacing \(X_{0}\) by a finite etale cover, we can assume that \(G(\mathbb{L}_{p,X_{0}},x)\) is connected. Let \(\eta\) be the generic point of \(X\). Let \(\dim V_{f,x}^{\prime}=r\). By Lemma 6.7, we have an overconvergent \(F\)-isocrystal \(\mathbb{V}_{p,f}^{\prime}\subseteq\mathbb{L}_{p,X_{0}}\). From Remark 2.2, we know that there is a chain of embeddings
\[\det\mathbb{V}_{p,f}^{\prime}\subseteq\wedge^{r}\mathbb{L}_{p,X_{0}}\subseteq \mathcal{E}nd(\mathbb{H}_{p,X_{0}}). \tag{6.4.7}\]
Since \(G(\mathbb{V}_{p,f}^{\prime},x)\subseteq\operatorname{SO}(V_{f,x}^{\prime})\), the unit-root \(F\)-isocrystal \(\det\mathbb{V}_{p,f}^{\prime}\) is constant. Pick a \(\mathbb{Q}_{p}\)-generator \(d_{\mathbf{v},x}\) of \(\det V_{f,x}^{\prime}\). By the equivalence between the category of Dieudonne crystals and the category of \(p\)-divisible groups over \(X_{0}\) ([11]) and the crystalline isogeny theorem over finitely generated fields ([11]), we have
\[d_{\mathbf{v},x}\in\operatorname{End}(\mathbb{H}_{p,\eta})=\operatorname{End} (\mathscr{A}_{\eta}^{\text{KS}})\otimes\mathbb{Q}_{p}.\]
Let \(\boldsymbol{\delta}_{\mathbf{v},x}\) be a suitable \(p\)-power multiple of \(d_{\mathbf{v},x}\) that lies in \(\operatorname{End}(\mathscr{A}_{\eta}^{\text{KS}})\otimes\mathbb{Z}_{p}\).
Now suppose \(V_{f,x}=0\), i.e., \(T_{f,x}\) is totally isotropic and \(\omega_{x}(\mathbb{L}_{p})=W_{f,x}\oplus W_{f,x}^{\prime}\oplus V_{f,x}^{\prime}\). Let \(1\neq\gamma\in\mathbb{Z}_{p}^{*}\). We define an isometry \(d_{\mathbf{w},x}\in\operatorname{End}(\omega_{x}(\mathbb{L}_{p,X_{0}}))\) by
\[d_{\mathbf{w},x}(v)=\begin{cases}\gamma v,\;\;v\in W_{f,x},\\ \gamma^{-1}v,\;\;v\in W_{f,x}^{\prime},\\ v,\;\;v\in V_{f,x}^{\prime}.\end{cases} \tag{6.4.8}\]
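To see that \(d_{\mathbf{w},x}\) preserves \(\mathbf{Q}\) (an elementary check, using the orthogonality of \(V_{f,x}^{\prime}\) with \(W_{f,x}\oplus W_{f,x}^{\prime}\) implicit in the above splitting): \(\mathbf{Q}\) vanishes on \(W_{f,x}\times W_{f,x}\) and on \(W_{f,x}^{\prime}\times W_{f,x}^{\prime}\) since both subspaces are totally isotropic, while for \(w\in W_{f,x}\) and \(w^{\prime}\in W_{f,x}^{\prime}\) one has
\[\mathbf{Q}(d_{\mathbf{w},x}(w),d_{\mathbf{w},x}(w^{\prime}))=\mathbf{Q}(\gamma w,\gamma^{-1}w^{\prime})=\mathbf{Q}(w,w^{\prime});\]
on \(V_{f,x}^{\prime}\) the map \(d_{\mathbf{w},x}\) is the identity, so \(\mathbf{Q}\) is preserved there as well.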
Since \(G(\mathbb{L}_{p,X_{0}},x)\subseteq\operatorname{U}(W_{f,x}\oplus W_{f,x}^{\prime})\times\operatorname{SO}(V_{f,x}^{\prime})\), \(d_{\mathbf{w},x}\) is also an isometric isomorphism of \(G(\mathbb{L}_{p,X_{0}},x)\)-representations. By functoriality of the Kuga-Satake construction, the equivalence between the category of Dieudonne crystals and the category of \(p\)-divisible groups, and the crystalline isogeny theorem over finitely generated fields, we again have \(d_{\mathbf{w},x}\in\operatorname{End}(\mathscr{A}_{\eta}^{\text{KS}})\otimes\mathbb{Q}_{p}\). Let \(\boldsymbol{\delta}_{\mathbf{w},x}\) be a suitable \(p\)-power multiple of \(d_{\mathbf{w},x}\) that lies in \(\operatorname{End}(\mathscr{A}_{\eta}^{\text{KS}})\otimes\mathbb{Z}_{p}\).
Proof of Theorem 4.10 for GSpin Shimura varieties.: We show that, when \(T_{f,x}\) is nondegenerate _resp_. totally isotropic, then \(\boldsymbol{\Delta}=\{\boldsymbol{\delta}_{\mathbf{v},x}\}\)_resp_. \(\{\boldsymbol{\delta}_{\mathbf{v},x},\boldsymbol{\delta}_{\mathbf{w},x}\}\) is sufficient in the sense of Definition 1(3). The conjecture will then follow from Lemma 6.1. Let \(\mathscr{D}_{\boldsymbol{\Delta},1}\)_resp_. \(\mathscr{D}_{\boldsymbol{\Delta},2}\) be \(\operatorname{Def}(\{\boldsymbol{\delta}_{\mathbf{v},x}\}/\mathscr{S}_{W}^{/x})\)_resp_. \(\operatorname{Def}(\{\boldsymbol{\delta}_{\mathbf{v},x},\boldsymbol{\delta}_{ \mathbf{w},x}\}/\mathscr{S}_{W}^{/x})\).
Let \(\mathscr{V}_{x}^{\prime}\) be an etale \(p\)-divisible subgroup of \(\Psi_{x}\) with \(\mathbb{D}(\mathscr{V}_{x}^{\prime})=V_{f,x}^{\prime}\); then \(\mathscr{V}_{x}^{\prime}\) splits from \(\Psi_{x}\) up to isogeny. Deforming the isogeny class of \(\boldsymbol{\delta}_{\mathbf{v},x}\in\operatorname{End}(\mathscr{A}_{x}^{\text{KS}})\otimes\mathbb{Q}_{p}\) inside \(\mathscr{S}_{W}^{/x}\) is the same as deforming the corresponding global section of \(\wedge^{r}\mathbb{L}_{p,x}\), which is equivalent to deforming the splitting of \(\mathscr{V}_{x}^{\prime}\) from \(\Psi_{x}\) up to isogeny.
In the following we make the identification (6.4.1) and (6.4.2). Recall that the pairing \(\mathbf{Q}_{0}\) induces a canonical splitting \(U_{\operatorname{GL},\overline{\mu}}(\mathbb{Q}_{p})=M_{f,x}^{\text{\'{e}t}} \oplus V_{f,x}^{\prime}\). As a result, \((V_{f,x}^{\prime})^{\vee}\) canonically sits inside \(U_{\operatorname{GL},\overline{\mu}^{-1}}(\mathbb{Q}_{p})\) and is the kernel of the map \(U_{\operatorname{GL},\overline{\mu}^{-1}}(\mathbb{Q}_{p})\to(M_{f,x}^{\text{ \'{e}t}})^{\vee}\). By the theory of canonical coordinates, there is some lattice \(\Lambda_{\boldsymbol{\Delta},1}\) such that \(\mathscr{D}_{\boldsymbol{\Delta},1}=\operatorname{Hom}(U_{\operatorname{GL}, \overline{\mu}^{-1}}(\mathbb{Z}_{p})/\Lambda_{\boldsymbol{\Delta},1}, \mathbb{G}_{m}^{\wedge})\). As noted before, \(\Lambda_{\boldsymbol{\Delta},1,\mathbb{Q}_{p}}\) corresponds to the condition of deforming the splitting of \(\mathscr{V}_{x}^{\prime}\) from \(\Psi_{x}\) up to isogeny. Therefore \(\Lambda_{\boldsymbol{\Delta},1,\mathbb{Q}_{p}}=(V_{f,x}^{\prime})^{\vee}\).
When \(T_{f,x}\) is nondegenerate, we have \(T_{f,x}=M_{f,x}^{\text{\'{e}t}}\), hence \(\Lambda_{\boldsymbol{\Delta},1,\mathbb{Q}_{p}}=K_{f,x}\). When \(T_{f,x}\) is totally isotropic, there is some lattice \(\Lambda_{\boldsymbol{\Delta},2}\) such that \(\mathscr{D}_{\boldsymbol{\Delta},2}=\operatorname{Hom}(U_{\operatorname{GL}, \overline{\mu}^{-1}}(\mathbb{Z}_{p})/\Lambda_{\boldsymbol{\Delta},2},\mathbb{ G}_{m}^{\wedge})\). We must have \(\Lambda_{\boldsymbol{\Delta},2,\mathbb{Q}_{p}}\supseteq\Lambda_{\boldsymbol{ \Delta},1,\mathbb{Q}_{p}}=(V_{f,x}^{\prime})^{\vee}\). Since any deformation should also preserve the class \(\boldsymbol{\delta}_{\mathbf{w},x}\), the theory of canonical coordinates implies that \(\Lambda_{\boldsymbol{\Delta},2,\mathbb{Q}_{p}}=(V_{f,x}^{\prime})^{\vee}\oplus( U_{f,x}^{\prime})^{\vee}\). Since \(T_{f,x}=U_{f,x}\), we have \(\Lambda_{\boldsymbol{\Delta},2,\mathbb{Q}_{p}}=K_{f,x}\).
## 7. Characteristic \(p\) analogue of the Mumford-Tate conjecture
In this section we prove Conjecture 4.7 for GSpin Shimura varieties and products of modular curves. As noted in Proposition 4.9, this proves Conjecture 1.1 for GSpin Shimura varieties and products of modular curves. The main input is Theorem 4.10 established in SS6 and the independence of monodromy groups in a compatible system of coefficient objects ([1, Theorem 1.2.1]). For \(u\in\operatorname{fpl}(\mathbb{Q})\), we will adopt the identification \(\alpha_{u}^{\prime}:L_{\mathbf{I},\mathbb{Q}_{u}}\simeq\omega_{x}(\mathbb{L}_{\mathbf{I},u})\), as explained in SS4.1.2. Therefore \(\operatorname{Hdg}(f)\) is a subgroup of \(\operatorname{GL}(L_{\mathbf{I},\mathbb{Q}})\), and \(G(\mathbb{L}_{\mathbf{I},u,X_{0}},x)\) is a subgroup of \(\operatorname{GL}(L_{\mathbf{I},\mathbb{Q}_{u}})\). We will use the fact that \(G(\mathbb{L}_{\mathbf{I},u,X_{0}},x)^{\circ}\subseteq\operatorname{Hdg}(f)_{\mathbb{Q}_{u}}\) (cf. Lemma 4.3) without further mention.
### The case of products of modular curves
The case of products of modular curves should already be known in the literature. The result can be proved without the Tate-linear conjecture. Nevertheless, to exhibit the close relation between the Tate-linear and the Mumford-Tate conjectures, as well as to pave the way for future generalizations, we present a proof using this somewhat heavier machinery. In the following, we suppose, without loss of generality, that the projection of \(\mathscr{T}_{f,x}\) to each \(\mathscr{S}_{i,\mathbb{F}}^{/x}\) is nontrivial.
#### 7.1.1. The structure of \(\operatorname{Hdg}(f)\)
From Lemma 6.5, we already know that for each \(i\), \(\operatorname{Hdg}(f_{i})\simeq\operatorname{SO}(2,1)\). To understand \(\operatorname{Hdg}(f_{\mathbf{I}})\), let \(\mathbf{I}=\bigsqcup_{h\in\mathcal{H}}\mathbf{I}_{h}\) be the partition of \(\mathbf{I}\) as in Corollary 6.6.
**Proposition 7.1**.: \(\operatorname{Hdg}(f)=\prod_{h\in\mathcal{H}}\operatorname{Hdg}(f_{\mathbf{I}_{ h}})\) _and, for each \(h\in\mathcal{H}\) and \(i\in\mathbf{I}_{h}\), the projection \(\operatorname{Hdg}(f_{\mathbf{I}_{h}})\to\operatorname{Hdg}(f_{i})\) is an isomorphism._
Proof.: By definition, we have \(\operatorname{Hdg}(f)\subseteq\prod_{i\in\mathbf{I}}\operatorname{Hdg}(f_{i})\). Since the projection of \(G(\mathbb{L}_{\mathbf{I},p,X_{0}},x)^{\circ}\) to each \(G(\mathbb{L}_{i,p,X_{0}},x)^{\circ}\) is surjective, Lemma 6.5 and Lemma 4.3 imply that the projection of \(\operatorname{Hdg}(f)\) to each \(\operatorname{Hdg}(f_{i})\) is surjective.
Since \(\operatorname{SO}(2,1)\) is adjoint and simple, Goursat's lemma implies that there is a partition \(\mathbf{I}=\bigsqcup_{h^{\prime}\in\mathcal{H}^{\prime}}\mathbf{I}_{h^{\prime}}\) such that \(\operatorname{Hdg}(f)=\prod_{h^{\prime}\in\mathcal{H}^{\prime}}\operatorname{Hdg}(f_{\mathbf{I}_{h^{\prime}}})\) and, for each \(h^{\prime}\in\mathcal{H}^{\prime}\) and \(i\in\mathbf{I}_{h^{\prime}}\), the projection \(\operatorname{Hdg}(f_{\mathbf{I}_{h^{\prime}}})\to\operatorname{Hdg}(f_{i})\) is an isomorphism. By Corollary 6.6(2), we have \(T_{f,x}=\prod_{h\in\mathcal{H}}T_{f_{\mathbf{I}_{h}},x}\), and for each \(h\in\mathcal{H}\) and \(i\in\mathbf{I}_{h}\), the projection \(T_{f_{\mathbf{I}_{h}},x}\to T_{f_{i},x}\) is an isomorphism. Taking the unipotent of \(\operatorname{Hdg}(f)_{\mathbb{Q}_{p}}\) corresponding to \(\mu^{c}_{\mathbf{I}}\) and using Proposition 6.3, we find that the partition \(\mathbf{I}=\bigsqcup_{h^{\prime}\in\mathcal{H}^{\prime}}\mathbf{I}_{h^{\prime}}\) is identical to the partition \(\mathbf{I}=\bigsqcup_{h\in\mathcal{H}}\mathbf{I}_{h}\). So \(\operatorname{Hdg}(f)\) has the desired structure.
Proof of Theorem 4.12 for products of modular curves.: Lemma 4.3, Proposition 7.1 and Corollary 6.6(1) imply that \(\operatorname{Hdg}(f)_{\mathbb{Q}_{p}}=G(\mathbb{L}_{\mathbf{I},p,X_{0}},x)^{\circ}\). Then the discussion in SS3.4, together with Lemma 3.4, implies that \(\operatorname{Hdg}(f)_{\mathbb{Q}_{u}}=G(\mathbb{L}_{\mathbf{I},u,X_{0}},x)^{\circ}\) for all finite places \(u\). From the explicit descriptions, it is an easy exercise to show that \(\operatorname{Hdg}(f)\) coincides with the generic Hodge group of \(\mathbb{L}_{\mathbf{I},\operatorname{B},\mathcal{X}_{f}^{+}}\). \(\Box\)
### The case of GSpin Shimura varieties
As usual, let \(\operatorname{rk}L=b+2\) and \(d=\operatorname{rk}\mathcal{T}_{f,x}\). Recall from SS6.4 that we have subspaces \(V^{\prime}_{f,x},M_{f,x},W_{f,x},W^{\prime}_{f,x}\) of \(\omega_{x}(\mathbb{L}_{p})\). By Lemma 6.7, these give rise to overconvergent sub-\(F\)-isocrystals \(\mathbb{V}^{\prime}_{p,f}\), \(\mathbb{M}_{p,f}\), \(\mathbb{W}_{p,f}\) and \(\mathbb{W}^{\prime}_{p,f}\) of \(\mathbb{L}_{p,X_{0}}\). We have
\[G(\mathbb{L}_{p,X_{0}},x)^{\circ}\subseteq G(\mathbb{V}^{\prime}_{p,f},x)^{ \circ}\times G(\mathbb{M}_{p,f},x)^{\circ}. \tag{7.2.1}\]
By Proposition 6.8, the subspace \(T_{f,x}\subseteq\omega_{x}(\mathbb{L}_{p,x})\) is either nondegenerate or totally isotropic. In the nondegenerate case, we have \(G(\mathbb{M}_{p,f},x)^{\circ}=\operatorname{SO}(M_{f,x})\), while in the totally isotropic case, we have \(G(\mathbb{M}_{p,f},x)^{\circ}=\operatorname{U}(\varphi_{f,x})\).
**Lemma 7.2**.: _(7.2.1) is an equality._
Proof.: Possibly base changing to an etale cover of \(X_{0}\), we can assume that \(G(\mathbb{L}_{p,X_{0}},x)\) is connected. Since any object in the Tannakian subcategory \(\langle\mathbb{V}^{\prime}_{p,f}\rangle^{\otimes}\) has zero slope, it suffices to show the following:
_Claim._ An object in \(\langle\mathbb{M}_{p,f}\rangle^{\otimes}\) has zero slope only when it is a direct sum of trivial objects.
Indeed, by Goursat's lemma, there is a common quotient \(H\) of \(G(\mathbb{V}^{\prime}_{p,f},x)\) and \(G(\mathbb{M}_{p,f},x)\), such that \(G(\mathbb{L}_{p,X_{0}},x)=G(\mathbb{V}^{\prime}_{p,f},x)\times_{H}G(\mathbb{M} _{p,f},x)\). A faithful representation of \(H\) gives rise to an object that lies in both \(\langle\mathbb{V}^{\prime}_{p,f}\rangle^{\otimes}\) and \(\langle\mathbb{M}_{p,f}\rangle^{\otimes}\). If the _Claim_ is true, then any faithful representation of \(H\) is trivial, so \(H=\{1\}\). Therefore \(G(\mathbb{L}_{p,X_{0}},x)=G(\mathbb{V}^{\prime}_{p,f},x)\times G(\mathbb{M}_{p,f},x)\).
We now prove the _Claim_. Let \(G=G(\mathbb{M}_{p,f},x)\) and \(\nu=\mu^{c}|_{\operatorname{SO}(M_{f,x})}\). An object \(\mathbb{X}\in\langle\mathbb{M}_{p,f}\rangle^{\otimes}\) with zero slope gives rise to a representation \(G\to\operatorname{GL}(\omega_{x}(\mathbb{X}))\). Let \(N\) be the kernel. Then \(N\) is a normal subgroup containing \(\operatorname{im}\nu\) and \(U_{G,\nu}\). It suffices to show that \(N=G\). By Proposition 6.8, \(G\) is either \(\operatorname{SO}(M_{f,x})\) or \(\operatorname{U}(\varphi_{f,x})\), depending on whether \(T_{f,x}\) is nondegenerate or totally isotropic. In the nondegenerate case, \(N=\operatorname{SO}(M_{f,x})\) is immediate. For the totally isotropic case, we note that \(\operatorname{SU}(\varphi_{f,x})\) is simple. So at least \(\operatorname{SU}(\varphi_{f,x})\subseteq N\). Since \(N\) also contains \(\operatorname{im}\nu\), we have \(N=\operatorname{U}(\varphi_{f,x})\). The proof of the _Claim_ is complete.
#### 7.2.1. The structure of \(\operatorname{Hdg}(f)\)
**Proposition 7.3**.: _The possible structures of \(\operatorname{Hdg}(f)\) lie in the following two cases: (\(T_{f,x}\) **is nondegenerate**) There is a quadruple \((F,\tau,\mathbf{M}_{f},Q_{f})\) where \(F\) is a totally real field, \(\tau:F\to\mathbb{C}\) is an embedding, and \((\mathbf{M}_{f},Q_{f})\) is a quadratic space over \(F\), such that_
1. \(\operatorname{Hdg}(f)=\operatorname{Res}_{F/\mathbb{Q}}\operatorname{SO}(Q_{f})\)_._
2. \(Q_{f}\) _has signature_ \((2,d)\) _at place_ \(\tau\) _and is negative definite at all other real places._
3. \(Q=\operatorname{Tr}_{F/\mathbb{Q}}(Q_{f})\oplus Q^{\rho}\)_, where_ \(Q^{\rho}\) _is a negative definite quadratic form._
4. _Let_ \(\mathfrak{p}\) _be the place of_ \(F\) _given by_ \(F\xrightarrow{\tau}\mathbb{C}\simeq\overline{\mathbb{Q}}_{p}\)_. Then_ \(F_{\mathfrak{p}}=\mathbb{Q}_{p}\) _and_ \((\mathbf{M}_{f},Q_{f})_{F_{\mathfrak{p}}}\simeq(M_{f,x},\mathbf{Q}|_{M_{f,x}})\)_._
_(\(T_{f,x}\)**is totally isotropic**) There is a quintuple \((E,F,\tau,\mathbf{M}_{f},\phi_{f})\) where \(F\) is a totally real field, \(\tau:F\to\mathbb{C}\) is an embedding, \(E/F\) is a quadratic imaginary extension, and \((\mathbf{M}_{f},\boldsymbol{\phi}_{f})\) is an Hermitian space over \(E\), such that_
1. \(\operatorname{Hdg}(f)=\operatorname{Res}_{F/\mathbb{Q}}\operatorname{U}(\phi_ {f})\)_._
2. \(\phi_{f}\) _has signature_ \((1,d)\) _at place_ \(\tau\) _and is negative definite at all other real places._
3. \(Q=\operatorname{Tr}_{E/\mathbb{Q}}(\phi_{f})\oplus Q^{\rho}\)_, where_ \(Q^{\rho}\) _is a negative definite quadratic form._
4. _Let_ \(\mathfrak{p}\) _be the place of_ \(F\) _given by_ \(F\xrightarrow{\tau}\mathbb{C}\simeq\overline{\mathbb{Q}}_{p}\)_. Then_ \(F_{\mathfrak{p}}=\mathbb{Q}_{p}\) _and_ \((\mathbf{M}_{f},\phi_{f})_{F_{\mathfrak{p}}}\simeq(M_{f,x},\varphi_{f,x})\)_._
Proof.: The following argument uses multiple ideas from [11]. Consider the standard representation \(\rho:\operatorname{Hdg}(f)\to\operatorname{SO}(L_{\mathbb{Q}})\). Let \(L_{\mathbb{Q}}^{\rho}\subseteq L_{\mathbb{Q}}\) be the subspace fixed by \(\operatorname{Hdg}(f)\) and let \(Q^{\rho}:=Q|_{L_{\mathbb{Q}}^{\rho}}\). The quadratic space \((L_{\mathbb{Q}}^{\rho},Q^{\rho})\) is negative definite, and \(\rho\) factors through \(\operatorname{SO}(L_{\mathbb{Q}}^{\rho,\perp})\subseteq\operatorname{SO}(L_{ \mathbb{Q}})\). Let \(\mathbf{E}\) be the center of the subalgebra of \(\operatorname{End}(L_{\mathbb{Q}})\) generated by \(\rho(\operatorname{Hdg}(f))\), and \(\mathbf{F}\) be the subalgebra fixed by the adjoint involution induced by \(Q\). Then \(\mathbf{F}\) is totally real, and decomposes into a finite product of totally real fields \(\mathbf{F}=\prod F_{\alpha}\). The idempotents of \(\mathbf{F}\) induce splittings \(\mathbf{E}=\prod E_{\alpha}\) and \(L_{\mathbb{Q}}^{\rho,\perp}=\bigoplus_{\alpha}\mathbf{M}_{\alpha}\). Each \(\mathbf{M}_{\alpha}\) has a structure of an \(F_{\alpha}\)-vector space, it is further equipped with an \(F_{\alpha}\)-valued quadratic form \(Q_{\alpha}\) such that \(Q|_{\mathbf{M}_{\alpha}}=\operatorname{Tr}_{F_{\alpha}/\mathbb{Q}}(Q_{\alpha})\). So \(\rho\) factors through \(\prod_{\alpha}\operatorname{Res}_{F_{\alpha}/\mathbb{Q}}\operatorname{SO}(Q_{ \alpha})\). The same argument in SS3 of _loc.cit_ shows that \(\mathbf{F}\) has exactly one factor \(F_{f}\) with the corresponding quadratic space \((\mathbf{M}_{f},Q_{f})\) such that the form \(\operatorname{Tr}_{F_{f}/\mathbb{Q}}(Q_{f})\) is indefinite. Furthermore, there exists exactly one real place \(\tau\) of \(F_{f}\) over which \(Q_{f}\) is indefinite.
_Claim_.: \(\mathbf{F}=F_{f}\) and \(L_{\mathbb{Q}}^{\rho,\perp}=\mathbf{M}_{f}\).
Consider the Shimura subvariety \(\mathcal{H}\) with structure group \(\operatorname{SO}(\mathbf{M}_{f})\), and let \(\mathscr{H}\) be its closure in \(\mathscr{S}\). It is easy to see that the cocharacter \(h_{\tilde{x}_{\mathbb{C}}}\) factors through \(\mathcal{H}\), and the component of \(\mathcal{X}_{f}\) containing \(\tilde{x}_{\mathbb{C}}\) factors through \(\mathcal{H}\). As a result, \(f\) factors through \(\mathscr{H}_{\mathbb{F}}\). Let \(\Sigma\subseteq\operatorname{End}^{0}(\mathscr{A}_{\tilde{x}_{\mathbb{C}}}^{\text{KS}})\) be the set of special endomorphisms that arise from \(\mathbf{M}_{f}^{\perp}\). Then \(\Sigma\) extends to \(\mathscr{A}_{\mathcal{H}}^{\text{KS}}\). Applying Lemma 4.2 to an irreducible component of \(\mathscr{H}_{\mathbb{F}}\) containing the image of \(f\), we see that \(\Sigma\subseteq\operatorname{End}^{0}(\mathscr{A}_{X^{\prime}}^{\text{KS}})\) for some integral finite cover \(X^{\prime}/X\). Therefore, \(\Sigma\subseteq\operatorname{End}^{0}(\mathscr{A}_{\overline{\eta}}^{\text{KS}})\). The group \(\operatorname{Hdg}(f)\), by definition, must also commute with \(\Sigma\). It follows that \(\operatorname{Hdg}(f)\subseteq\operatorname{SO}(\mathbf{M}_{f})\), hence \(\mathbf{F}=F_{f}\) and \(L_{\mathbb{Q}}^{\rho,\perp}=\mathbf{M}_{f}\). The field \(\mathbf{E}\) is either identical to \(\mathbf{F}\) or a nontrivial CM extension of \(\mathbf{F}\). An argument similar to [11, Construction 3.5] puts us in the following two cases:
(_Case 1_). \(\mathbf{E}=\mathbf{F}\). Writing \(F\) for \(\mathbf{F}\), we have \(Q=\operatorname{Tr}_{F/\mathbb{Q}}(Q_{f})\oplus Q^{\rho}\) and \(\operatorname{Hdg}(f)=\operatorname{Res}_{F/\mathbb{Q}}\operatorname{SO}(Q_{f})\). There is some integer \(l\) such that \(\tau Q_{f}\) has signature \((2,l)\). We see that \(\dim\mathcal{X}_{f}=l\). On the other hand, Proposition 4.4(1) and Theorem 4.10 for \(\operatorname{GSpin}\) Shimura varieties imply that \(\dim\mathcal{X}_{f}=\operatorname{rk}\mathscr{T}_{f,x}=d\). It follows that \(l=d\).
In the following we show that this is exactly the case when \(T_{f,x}\) is nondegenerate. Let \(\mathcal{P}\) be a place of \(F\) lying over \(p\) and let \(\mathfrak{p}\) be the place of \(F\) given by \(F\xrightarrow{\tau}\mathbb{C}\simeq\overline{\mathbb{Q}}_{p}\). Let \(\mu^{c}:\mathbb{G}_{m}\to\operatorname{SO}(L_{\mathbb{Q}_{p}})\) be the
canonical Hodge cocharacter at point \(x\). Since \(\operatorname{im}\mu^{c}\subseteq G(\mathbb{L}_{p,X_{0}},x)^{\circ}\subseteq \operatorname{Hdg}(f)_{\mathbb{Q}_{p}}\), we see that \(\mu^{c}\) factors through
\[\operatorname{Hdg}(f)_{\mathbb{Q}_{p}}=\prod_{\mathcal{P}|p}\operatorname{Res}_{ F_{\mathcal{P}}/\mathbb{Q}_{p}}\operatorname{SO}(Q_{f})_{F_{\mathcal{P}}}. \tag{7.2.2}\]
Indeed, it furthermore factors through \(\operatorname{Res}_{F_{\mathfrak{p}}/\mathbb{Q}_{p}}\operatorname{SO}(Q_{f})_{F _{\mathfrak{p}}}\). By base change to \(\overline{\mathbb{Q}}_{p}\), we have
\[\big{(}\operatorname{Res}_{F_{\mathfrak{p}}/\mathbb{Q}_{p}}\operatorname{SO}(Q _{f})_{F_{\mathfrak{p}}}\big{)}\times\overline{\mathbb{Q}}_{p}=\prod_{\sigma: F_{\mathfrak{p}}\hookrightarrow\overline{\mathbb{Q}}_{p}}\operatorname{SO}(Q_{f})_{ \sigma,\overline{\mathbb{Q}}_{p}}. \tag{7.2.3}\]
Then \(\operatorname{SO}(Q_{f})_{\tau,\overline{\mathbb{Q}}_{p}}\) is the unique factor that \(\mu^{c}\) factors through. On the other hand, \(\operatorname{Gal}(F_{\mathfrak{p}}/\mathbb{Q}_{p})\) acts transitively on the factors on the right hand side of (7.2.3), and for \(g\in\operatorname{Gal}(F_{\mathfrak{p}}/\mathbb{Q}_{p})\), \(g\cdot\mu^{c}\) factors through \(\operatorname{SO}(Q_{f})_{g\cdot\tau,\overline{\mathbb{Q}}_{p}}\). Since \(\mu^{c}\) is defined over \(\mathbb{Q}_{p}\), we have \(g\cdot\mu^{c}=\mu^{c}\). Consequently, there is only one embedding of \(F_{\mathfrak{p}}\) into \(\overline{\mathbb{Q}}_{p}\). Hence \(F_{\mathfrak{p}}=\mathbb{Q}_{p}\). It then follows from Theorem 3.3, Theorem 4.10 and (7.2.2) that
\[T_{f,x}=U(\mathbb{L}_{p,X_{0}}^{-},x)\subseteq U_{\operatorname{Hdg}(f)_{ \mathbb{Q}_{p}},\mu^{c}}=U_{\operatorname{SO}(Q_{f})_{F_{\mathfrak{p}}},\mu^{ c}}.\]
Since \(\dim U_{\operatorname{SO}(Q_{f})_{F_{\mathfrak{p}}},\mu^{c}}=d\), we have \(T_{f,x}=U_{\operatorname{SO}(Q_{f})_{F_{\mathfrak{p}}},\mu^{c}}\). Thus \(T_{f,x}\) is nondegenerate, \((\mathbf{M}_{f},Q_{f})_{F_{\mathfrak{p}}}\simeq(M_{f,x},\mathbf{Q}|_{M_{f,x}})\), and \(\operatorname{SO}(Q_{f})_{F_{\mathfrak{p}}}\simeq\operatorname{SO}(M_{f,x})\).
(_Case 2_). \(\mathbf{E}\) is a CM extension of \(\mathbf{F}\). We write \(E,F\) for \(\mathbf{E},\mathbf{F}\). Then \(Q_{f}\) is the trace of an \(E\)-valued Hermitian form \(\phi_{f}\) over \(\mathbf{M}_{f}\). We have \(Q=\operatorname{Tr}_{E/\mathbb{Q}}(\phi_{f})\oplus Q^{\rho}\) and \(\operatorname{Res}_{F/\mathbb{Q}}\operatorname{SU}(\phi_{f})\subseteq \operatorname{Hdg}(f)\subseteq\operatorname{Res}_{F/\mathbb{Q}}\operatorname{U }(\phi_{f})\). By the definitions of \(\mathcal{X}_{f}\) and \(\operatorname{MT}(f)\), we have \(\operatorname{Hdg}(\mathcal{X}_{f})\subseteq\operatorname{Hdg}(f)\). Now Zarhin's result [10] claims that \(\operatorname{Hdg}(\mathcal{X}_{f})\) must be the scalar restriction of a unitary group or an orthogonal group. This forces \(\operatorname{Hdg}(f)=\operatorname{Res}_{F/\mathbb{Q}}\operatorname{U}( \phi_{f})\).
Let \(r\) be the integer such that \(\tau\phi_{f}\) has signature \((1,r)\). We have \(\dim\mathcal{X}_{f}=r\). On the other hand, Proposition 4.4(1) and Theorem 4.10 for \(\operatorname{GSpin}\) Shimura varieties imply that \(\dim\mathcal{X}_{f}=\operatorname{rk}\mathcal{T}_{f,x}=d\). It follows that \(r=d\).
As in Case 1, \(\mu^{c}\) factors through \(\operatorname{Res}_{F_{\mathfrak{p}}/\mathbb{Q}_{p}}\operatorname{U}(\phi_{f})_{F_{\mathfrak{p}}}\). An argument similar to that in Case 1 shows that \(F_{\mathfrak{p}}=\mathbb{Q}_{p}\) and
\[T_{f,x}=U(\mathbb{L}_{p,X_{0}}^{-},x)\subseteq U_{\operatorname{Hdg}(f)_{ \mathbb{Q}_{p}},\mu^{c}}=U_{\operatorname{U}(\phi_{f})_{F_{\mathfrak{p}}},\mu^ {c}}.\]
Since \(\dim U_{\operatorname{U}(\phi_{f})_{F_{\mathfrak{p}}},\mu^{c}}=d\), we have \(T_{f,x}=U_{\operatorname{U}(\phi_{f})_{F_{\mathfrak{p}}},\mu^{c}}\). It then follows that \(T_{f,x}\) is totally isotropic, \((\mathbf{M}_{f},\phi_{f})_{F_{\mathfrak{p}}}\simeq(M_{f,x},\varphi_{f,x})\), and \(\operatorname{U}(\phi_{f})_{F_{\mathfrak{p}}}\simeq\operatorname{U}(\varphi_{f,x})\).
_Remark 7.4_.: In Proposition 7.3, the existence of the place \(\mathfrak{p}\) such that \(F_{\mathfrak{p}}=\mathbb{Q}_{p}\) is related to the fact that the image of \(f\) lies in the ordinary stratum. The reader may compare this with [11, Corollary 1.0.2], where it is claimed that a Shimura subvariety (satisfying certain assumptions) with reflex field \(F\) admits a mod \(\mathfrak{p}\) reduction with nontrivial ordinary locus if and only if \(F_{\mathfrak{p}}=\mathbb{Q}_{p}\).
#### 7.2.2. The proof
Note that \(\{\mathbb{L}_{v,X_{0}}\}_{v\in\operatorname{fpl}(K)}\) is a weakly compatible system as per SS3.4. By the second assertion of Lemma 3.4, one can replace \(X_{0}\) by a finite etale cover, so that \(G(\mathbb{L}_{u,X_{0}},x)\) is connected for all \(u\). We will be assuming this in the rest of this section.
We begin by studying the splitting behavior of \(\operatorname{Hdg}(f)\) and various local systems. Let \(F,\tau\) be the totally real field and its complex embedding from Proposition 7.3. We fix a Galois extension \(K/\mathbb{Q}\) containing \(F\), as well as an embedding \(K\subseteq\mathbb{C}\). Let \(\Sigma_{F}\) be the set of embeddings of \(F\) into \(K\). We will use \(v\) to denote a place of \(K\) over a finite place \(u\) of \(\mathbb{Q}\). The base change local system \(\mathbb{L}_{u,X_{0}}\otimes K_{v}\) will be denoted \(\mathbb{L}_{v,X_{0}}\). From Proposition 7.3, there is an \(F\)-group \(\mathcal{G}\), which is either \(\operatorname{SO}(\mathbf{M}_{f})\) or \(\operatorname{U}(\phi_{f})\), such that \(\operatorname{Hdg}(f)=\operatorname{Res}_{F/\mathbb{Q}}\mathcal{G}\). The group \(\mathcal{G}\) is equipped with a standard representation \(r_{f}:\mathcal{G}\to\operatorname{GL}(\mathbf{M}_{f})\). For a \(\sigma\in\Sigma_{F}\), we let \(\mathcal{G}_{\sigma,K}\), \(\mathbf{M}_{f,\sigma,K}\), \(r_{f,\sigma,K}\) be the base changes of \(\mathcal{G}\), \(\mathbf{M}_{f}\), \(r_{f}\) to
\(K\) via the embedding \(\sigma\). We have the following decompositions
\[\operatorname{Hdg}(f)_{K}\subseteq\prod_{\sigma\in\Sigma_{F}}\mathcal{G}_{\sigma,K}, \tag{7.2.4}\]
\[L_{K}=\bigoplus_{\sigma\in\Sigma_{F}}\mathbf{M}_{\sigma,K}, \tag{7.2.5}\]
\[\varrho_{f,K}=\bigoplus_{\sigma\in\Sigma_{F}}r_{f,\sigma,K}|_{\operatorname{Hdg}(f)_{K}}, \tag{7.2.6}\]
where we recall that \(\varrho_{f}\) is the representation of \(\operatorname{Hdg}(f)\) on \(L_{\mathbb{Q}}\). Note that for each \(v\), the monodromy group \(G(\mathbb{L}_{v,X_{0}},x)\) sits inside \(\operatorname{Hdg}(f)_{K_{v}}\). The representation \(r_{f,\sigma,K_{v}}\) induces a sub-local system \(\mathbb{M}_{\sigma,v,X_{0}}\subseteq\mathbb{L}_{v,X_{0}}\) such that \(\mathbb{L}_{v,X_{0}}\subseteq\bigoplus_{\sigma\in\Sigma_{F}}\mathbb{M}_{ \sigma,v,X_{0}}\). Let \(\mathbb{V}^{\prime}_{\sigma,v,X_{0}}\) be the image of \(\mathbb{L}_{v,X_{0}}\) in \(\bigoplus_{\sigma\neq\sigma^{\prime}\in\Sigma_{F}}\mathbb{M}_{\sigma^{\prime },v,X_{0}}\). It is immediate from the definition that, for every \(\sigma\):
\[G(\mathbb{M}_{\sigma,v,X_{0}},x)\subseteq\mathcal{G}_{\sigma,K_{v}}, \tag{7.2.7}\]
\[G(\mathbb{V}^{\prime}_{\sigma,v,X_{0}},x)\subseteq\prod_{\sigma\neq\sigma^{\prime}\in\Sigma_{F}}G(\mathbb{M}_{\sigma^{\prime},v,X_{0}},x), \tag{7.2.8}\]
\[G(\mathbb{L}_{v,X_{0}},x)\subseteq G(\mathbb{M}_{\sigma,v,X_{0}},x)\times G(\mathbb{V}^{\prime}_{\sigma,v,X_{0}},x). \tag{7.2.9}\]
The following lemma says that the sub-local systems that we have just constructed are also weakly compatible:
**Lemma 7.5**.: _For each \(\sigma\in\Sigma_{F}\), the collections \(\{\mathbb{M}_{\sigma,v,X_{0}}\}_{v\in\operatorname{fpl}(K)}\) and \(\{\mathbb{V}^{\prime}_{\sigma,v,X_{0}}\}_{v\in\operatorname{fpl}(K)}\) are weakly \(K\)-compatible systems of coefficient objects._
Proof.: We only prove the assertion for \(\{\mathbb{M}_{\sigma,v,X_{0}}\}_{v\in\operatorname{fpl}(K)}\); the assertion for \(\{\mathbb{V}^{\prime}_{\sigma,v,X_{0}}\}_{v\in\operatorname{fpl}(K)}\) is similar. It suffices to check that, at each closed point \(x_{0}\in|X_{0}|\), the characteristic polynomial \(P(\mathbb{M}_{\sigma,v,x_{0}},t)\) of the geometric Frobenius on the fiber \(\mathbb{M}_{\sigma,v,x_{0}}\) is independent of \(v\). Since \(x_{0}\) is ordinary, we can canonically lift the geometric Frobenius of \(\mathscr{A}_{x_{0}}^{\operatorname{KS}}\) to characteristic \(0\); this gives rise to a conjugacy class of elements in \(\operatorname{GL}(L_{\mathbb{Q}})\). We denote by \(P(L_{\mathbb{Q},\tilde{x}_{0}},t)\) the corresponding characteristic polynomial. Then the discussion in SS3.4 implies that \(P(L_{\mathbb{Q},\tilde{x}_{0}},t)_{K_{v}}=P(\mathbb{L}_{v,x_{0}},t)\) for all \(v\nmid p\). Furthermore, the equality holds for \(v|p\) if we replace the Frobenius of \(\mathbb{L}_{v,X_{0}}\) by a suitable power. Now for \(v|p\), we can also replace the Frobenius of \(\mathbb{M}_{\sigma,v,X_{0}}\) by a suitable power, so that from (7.2.5) and (7.2.6), we have \(P(L_{\mathbb{Q},\tilde{x}_{0}},t)_{K}=\prod_{\sigma\in\Sigma_{F}}P(\mathbf{M}_{\sigma,K,\tilde{x}_{0}},t)\) and \(P(\mathbb{L}_{v,x_{0}},t)=\prod_{\sigma\in\Sigma_{F}}P(\mathbb{M}_{\sigma,v,x_{0}},t)\) for all \(v\in\operatorname{fpl}(K)\). It then follows from the definition that \(P(\mathbb{M}_{\sigma,v,x_{0}},t)=P(\mathbf{M}_{\sigma,K,\tilde{x}_{0}},t)_{K_{v}}\). Therefore \(P(\mathbb{M}_{\sigma,v,x_{0}},t)\) is independent of \(v\).
The lemma can also be proved without the theory of canonical liftings. Instead, one considers the splitting of the motive \(\boldsymbol{L}_{x_{0}}\) as per [15, SS4] upon base changing to \(K\). The proof is left to the reader.
Proof of Theorem 4.12 for GSpin Shimura varieties.: Notation as above. It follows from Proposition 7.3 and Zarhin's result [11] that \(\operatorname{Hdg}(\mathcal{X}_{f})=\operatorname{Hdg}(f)\).
We now show that \(\operatorname{Hdg}(f)_{\mathbb{Q}_{u}}=G(\mathbb{L}_{u,X_{0}},x)^{\circ}\) for every \(u\in\operatorname{fpl}(\mathbb{Q})\). Recall that we have replaced \(X_{0}\) by a finite etale cover so that \(G(\mathbb{L}_{u,X_{0}},x)\) is connected for all \(u\). In the following we will omit the superscript "\(\circ\)" from all groups. Let \(\mathfrak{p}\) be the place of \(F\) corresponding to the embedding \(\tau\) (cf. Proposition 7.3). Take a place \(\mathfrak{P}|p\) of \(K\) lying over \(\tau\mathfrak{p}\). Then \(\mathbb{M}_{\tau,\mathfrak{P},X_{0}}\) and \(\mathbb{V}^{\prime}_{\tau,\mathfrak{P},X_{0}}\) are nothing other than \(\mathbb{M}_{p,f}\otimes K_{\mathfrak{P}}\) and \(\mathbb{V}^{\prime}_{p,f}\otimes K_{\mathfrak{P}}\). So by Proposition 6.8, Lemma 7.2 and Proposition 7.3, we have
\[G(\mathbb{L}_{\mathfrak{P},X_{0}},x)=G(\mathbb{M}_{\tau,\mathfrak{P},X_{0}},x) \times G(\mathbb{V}^{\prime}_{\tau,\mathfrak{P},X_{0}},x),\ G(\mathbb{M}_{\tau, \mathfrak{P},X_{0}},x)=\mathcal{G}_{\tau,K_{\mathfrak{P}}}.\]
Lemma 7.5 and Lemma 3.4 then imply that there is a finite extension \(\mathfrak{K}/K\) with the property that for each \(v\in\operatorname{fpl}(K)\) and \(\mathfrak{v}\in\operatorname{fpl}(\mathfrak{K})\) with \(\mathfrak{v}|v\), the base changes of (7.2.7) and (7.2.9) to \(\mathfrak{K}_{\mathfrak{v}}\) are
equalities. But this implies that (7.2.7) and (7.2.9) are already equalities over \(K_{v}\), i.e.,
\[G(\mathbb{L}_{v,X_{0}},x)=G(\mathbb{M}_{\tau,v,X_{0}},x)\times G(\mathbb{V}^{ \prime}_{\tau,v,X_{0}},x),\ G(\mathbb{M}_{\tau,v,X_{0}},x)=\mathcal{G}_{\tau,K_{ v}}. \tag{7.2.10}\]
We now show by Galois theory, that (7.2.10) holds if one replaces \(\tau\) by any other \(\sigma\in\Sigma_{F}\). Note that there is a natural bijection \(\Sigma_{F}\simeq\operatorname{Gal}(K/\mathbb{Q})/\operatorname{Gal}(K/\tau F)\). So \(\operatorname{Gal}(K/\mathbb{Q})\) acts on \(\Sigma_{F}\). In particular, the decomposition group of a place \(v\), which is identified with \(\operatorname{Gal}(K_{v}/\mathbb{Q}_{u})\), acts on \(\Sigma_{F}\). For any \(\sigma\in\Sigma_{F}\), Chebotarev's density theorem guarantees the existence of a place \(v\) and an element \(g\) in \(\operatorname{Gal}(K_{v}/\mathbb{Q}_{u})\) such that \(g\tau=\sigma\). Consider the base change (7.2.10)\(\times_{g}K_{v}\). We have
\[\mathcal{G}_{\tau,K_{v}}\times_{g}K_{v}=\mathcal{G}_{\sigma,K_{v}},\] \[G(\mathbb{M}_{\tau,v,X_{0}},x)\times_{g}K_{v}=G(\mathbb{M}_{ \sigma,v,X_{0}},x),\] \[G(\mathbb{V}^{\prime}_{\tau,v,X_{0}},x)\times_{g}K_{v}=G( \mathbb{V}^{\prime}_{\sigma,v,X_{0}},x),\] \[G(\mathbb{L}_{v,X_{0}},x)\times_{g}K_{v}=G(\mathbb{L}_{u,X_{0}},x)_{K_{v}}\times_{g}K_{v}=G(\mathbb{L}_{v,X_{0}},x).\]
As a result, for this particular place \(v\), we have
\[G(\mathbb{L}_{v,X_{0}},x)=G(\mathbb{M}_{\sigma,v,X_{0}},x)\times G(\mathbb{V} ^{\prime}_{\sigma,v,X_{0}},x),\ G(\mathbb{M}_{\sigma,v,X_{0}},x)=\mathcal{G}_{ \sigma,K_{v}}. \tag{7.2.11}\]
Lemma 3.4 then implies that for any finite place \(v\) of \(K\), (7.2.11) still holds. Since \(\sigma\) is arbitrary, we find that, for any \(v\),
\[G(\mathbb{L}_{v,X_{0}},x)=\prod_{\sigma\in\Sigma_{F}}\mathcal{G}_{\sigma,K_{ v}}.\]
Together with (7.2.4), we obtain
\[G(\mathbb{L}_{u,X_{0}},x)=\operatorname{Hdg}(f)_{\mathbb{Q}_{u}}=(\operatorname {Res}_{F/\mathbb{Q}}\mathcal{G})_{\mathbb{Q}_{u}}. \tag{7.2.12}\]
## 8. Characteristic \(p\) analogue of the Andre-Oort conjecture
In this section we prove the characteristic \(p\) analogue of the Andre-Oort conjecture (4.8) for GSpin Shimura varieties and products of modular curves. Suppose \(X\) contains a Zariski dense collection of positive dimensional special subvarieties. In SS8.2 we will construct certain large \(p\)-adic etale lisse sheaves on a finite field model of \(X\) that arise from these special subvarieties. After that, in SS8.3 and SS8.4, we use the established cases of the Tate-linear conjecture and the characteristic \(p\) analogue of the Mumford-Tate conjecture to show that \(X\) is special. Routinely, we use \(X_{0}\) to denote a variety over \(\mathbb{F}_{q}\) whose base change to \(\mathbb{F}\) is \(X\), such that the immersion \(X\subseteq\mathscr{S}_{\mathbf{I},\mathbb{F}}\) is the base change of an immersion \(X_{0}\subseteq\mathscr{S}_{\mathbf{I},\mathbb{F}_{q}}\).
### Setups
Notation being the same as in Conjecture 4.8. Recall that \(\boldsymbol{A}\) is a collection of special subvarieties on \(X\) and \(\mathbf{I}_{\boldsymbol{A}}\subseteq\mathbf{I}\) is the set of indices \(i\) such that \(\boldsymbol{A}\) contains a Zariski dense collection of special subvarieties whose projections to \(\mathscr{S}_{i,\mathbb{F}}\) are positive dimensional. We will always assume that \(\mathbf{I}_{\boldsymbol{A}}\neq\emptyset\). In the following, the letter "\(Z\)" is reserved for denoting special subvarieties. \(\boldsymbol{A}\) is said to be _normalized_, if
1. Each \(Z\in\boldsymbol{A}\) is a positive dimensional connected smooth locally closed subvariety of \(X\),
2. The projection of any \(Z\in\boldsymbol{A}\) to \(\mathscr{S}_{\mathbf{I}-\mathbf{I}_{\boldsymbol{A}},\mathbb{F}}\) is a single point.
We call \(\boldsymbol{A}\) _simple_, if it further satisfies
1. Any special subvariety in \(\boldsymbol{A}\) has positive dimensional projections to \(\mathscr{S}_{i,\mathbb{F}}\) for all \(i\in\mathbf{I}_{\boldsymbol{A}}\).
**Lemma 8.1**.: _Possibly shrinking \(\boldsymbol{A}\) and replacing a special subvariety in \(\boldsymbol{A}\) by an open dense subset, there is a normalized collection \(\boldsymbol{A}^{\prime}\) such that \(\mathbf{I}_{\boldsymbol{A}^{\prime}}=\mathbf{I}_{\boldsymbol{A}}\). Therefore, to prove Conjecture 4.8, we can always assume \(\boldsymbol{A}\) is normalized._
Proof.: First, for any \(Z\in\mathbf{A}\), replace \(Z\) by its irreducible components \(Z_{1},...,Z_{n}\). If \(Z_{i}\) is zero dimensional, we throw it out. Otherwise, replace \(Z_{i}\) by an open dense subset which is smooth. As a result, we get a collection \(\mathbf{A}^{\prime\prime}\) satisfying (1) with \(\mathbf{I}_{\mathbf{A}}=\mathbf{I}_{\mathbf{A}^{\prime\prime}}\). Then, let \(\mathbf{B}\) be the collection of special subvarieties in \(\mathbf{A}^{\prime\prime}\) whose projections to \(\mathscr{S}_{\mathbf{I}-\mathbf{I}_{\mathbf{A}^{\prime\prime}},\mathbb{F}}\) are positive dimensional. \(\mathbf{B}\) can not be a Zariski dense collection. For otherwise there must be an index \(i\in\mathbf{I}-\mathbf{I}_{\mathbf{A}^{\prime\prime}}\) such that \(\mathbf{A}^{\prime\prime}\) contains a Zariski dense collection of special subvarieties whose projections to \(\mathscr{S}_{i,\mathbb{F}}\) are positive dimensional, hence \(i\in\mathbf{I}_{\mathbf{A}^{\prime\prime}}\), a contradiction. Now let \(\mathbf{A}^{\prime}=\mathbf{A}^{\prime\prime}-\mathbf{B}\).
As a result of Lemma 8.1, we will always assume that \(\mathbf{A}\) is normalized. We call a morphism \(X^{\prime}_{0}\to X_{0}\) an _etale open dense subset_, if \(X^{\prime}_{0}\) is geometrically connected and finite etale over an open dense subset of \(X_{0}\). In this case, we denote \(X^{\prime}\to X\) as the base change to \(\mathbb{F}\) of the map \(X^{\prime}_{0}\to X_{0}\). It is again an etale open dense subset. We then denote by \(\mathbf{A}_{X^{\prime}}\) the pullback of \(\mathbf{A}\) to \(X^{\prime}\). The definition of \(\mathbf{I}_{\mathbf{A}}\), and the notion of being normalized and simple, extends to \(\mathbf{A}_{X^{\prime}}\). We have \(\mathbf{I}_{\mathbf{A}}=\mathbf{I}_{\mathbf{A}_{X^{\prime}}}\). Furthermore, \(\mathbf{A}_{X^{\prime}}\) is normalized _resp_. simple, if \(\mathbf{A}\) is normalized _resp_. simple.
### \(p\)-adic lisse sheaves arising from dense collections of special subvarieties
#### 8.2.1. Tautological deformation spaces
For any subvariety of \(\mathscr{S}_{\mathbf{I},\mathbb{F}}\), we use \(\Delta\) to denote its immersion into \(\mathscr{S}_{\mathbf{I},\mathbb{F}}\). There are formal schemes \((X\times X)^{/\Delta}\), \((X\times\mathscr{S}_{\mathbf{I},\mathbb{F}})^{/\Delta}\) over \(X\) and \((Z\times Z)^{/\Delta}\) over \(Z\) sitting inside the following diagram
These formal schemes shall be thought of as variations of deformation spaces over the base scheme. More precisely, over each \(x\in X(\mathbb{F})\), the fiber of \((X\times\mathscr{S}_{\mathbf{I},\mathbb{F}})^{/\Delta}\) at \(x\) is just \(\mathscr{S}_{\mathbf{I},\mathbb{F}}^{/x}\), while the fiber of \((X\times X)^{/\Delta}\) at \(x\) is \(X^{/x}\).
By Chai's theory of global Serre-Tate coordinates ([10, SS2]), there is an etale lisse sheaf of \(\mathbb{Z}_{p}\)-modules over \(X_{0}\), namely, \(\mathscr{E}_{\mathbf{I},X_{0}}:=\bigoplus_{i\in\mathbf{I}}X_{*}(\mathrm{Br}_{i,X_{0}})\otimes_{\mathbb{Z}_{p}}T_{p}(\Psi^{\mathrm{\acute{e}t}}_{i,X_{0}})^{\vee}\), such that
\[(X\times\mathscr{S}_{\mathbf{I},\mathbb{F}})^{/\Delta}=\mathscr{E}_{\mathbf{I},X}\otimes_{\mathbb{Z}_{p}}\mathbb{G}_{m}^{\wedge}. \tag{8.2.1}\]
Here \(\mathbb{G}_{m}^{\wedge}\) stands for the formal torus over \(\mathbb{F}\) and \(\mathscr{E}_{\mathbf{I},X}\) is the pullback of \(\mathscr{E}_{\mathbf{I},X_{0}}\) to \(X\). For later use, we also set \(\mathscr{E}_{\mathbf{J},X_{0}}:=\bigoplus_{j\in\mathbf{J}}X_{*}(\mathrm{Br}_{j,X_{0}})\otimes_{\mathbb{Z}_{p}}T_{p}(\Psi^{\mathrm{\acute{e}t}}_{j,X_{0}})^{\vee}\) for any subset \(\mathbf{J}\subseteq\mathbf{I}\).
Fix a special subvariety \(Z\in\mathbf{A}\). [10, Theorem 3.7] implies that \((Z\times Z)^{/\Delta}\subseteq(Z\times\mathscr{S}_{\mathbf{I},\mathbb{F}})^{/\Delta}\) is a subtorus of the formal torus (8.2.1) over \(Z\). There exists a finite field model \(Z_{0}/\mathbb{F}_{q^{n}}\) of \(Z\), such that (1) \(Z\subseteq X\) is the base change of \(Z_{0}\subseteq X_{0,\mathbb{F}_{q^{n}}}\) and (2) there is a saturated etale lisse subsheaf \(\mathcal{F}_{[Z]}\subseteq\mathscr{E}_{\mathbf{I},Z_{0}}\) over \(Z_{0}\), with
\[(Z\times Z)^{/\Delta}=\mathcal{F}_{[Z],Z}\otimes_{\mathbb{Z}_{p}}\mathbb{G}_{m} ^{\wedge}, \tag{8.2.2}\]
where \(\mathcal{F}_{[Z],Z}\) is the base change of \(\mathcal{F}_{[Z]}\) to \(Z\). Note that when \(Z\) runs over \(\mathbf{A}\), the corresponding \(n\) may not be bounded.
#### 8.2.2. The induced \(p\)-adic lisse sheaves
Let \(\mathfrak{X}\) be a Noetherian formal scheme and \(\mathcal{O}_{\mathfrak{X}}\) be its structure sheaf. An open formal subscheme of \(\mathfrak{X}\) is a pair \((\mathfrak{X}^{\prime},\mathcal{O}_{\mathfrak{X}}|_{\mathfrak{X}^{\prime}})\), where \(\mathfrak{X}^{\prime}\) is an open subset of \(\mathfrak{X}\). On the other hand, according to [11, SS10.14], a coherent ideal \(\mathscr{A}\subseteq\mathcal{O}_{\mathfrak{X}}\) defines a closed formal subscheme \((\mathfrak{Y},(\mathcal{O}_{\mathfrak{X}}/\mathscr{A})|_{\mathfrak{Y}})\), where \(\mathfrak{Y}\) is the support of \(\mathcal{O}_{\mathfrak{X}}/\mathscr{A}\), and every closed formal subscheme arises this way. Note that a closed formal subscheme is again Noetherian, and the intersection of two closed formal subschemes is again a closed formal subscheme. An open formal subscheme of a closed formal subscheme of \(\mathfrak{X}\) is called a locally closed formal subscheme.
Let \(\mathfrak{X}=(X\times\mathscr{S}_{\mathbf{I},\mathbb{F}})^{/\Delta}\). Then for every \(Z\in\mathbf{A}\), \((Z\times Z)^{/\Delta}\) is a locally closed formal subscheme of \(\mathfrak{X}\). Let \(\mathfrak{Z}\) be the smallest closed formal subscheme of \((X\times\mathscr{S}_{\mathbf{I},\mathbb{F}})^{/\Delta}\) containing all \((Z\times Z)^{/\Delta},\ Z\in\mathbf{A}\). From the discussion in the last paragraph, such a \(\mathfrak{Z}\) always exists and is a Noetherian closed formal subscheme of \(\mathfrak{X}\). Since \((X\times X)^{/\Delta}\) is a closed formal subscheme containing all \((Z\times Z)^{/\Delta}\), we have \(\mathfrak{Z}\subseteq(X\times X)^{/\Delta}\).
**Lemma 8.2**.: _There is an etale open dense subset \(X^{\prime}\to X\), together with \(p\)-adic lisse sheaves \(\{\mathcal{H}_{k}\}_{k=1}^{n}\) over \(X^{\prime}\), such that the irreducible components of \(\mathfrak{Z}_{X^{\prime}}:=\mathfrak{Z}\times_{X}X^{\prime}\) are exactly \(\{\mathcal{H}_{k}\otimes\mathbb{G}_{m}^{\wedge}\}_{k=1}^{n}\)._
Proof.: For every \(n\in\mathbb{Z}\), there is a scaling by \(1+p^{n}\) morphism \(\mathfrak{s}_{n}\) over \(\mathscr{E}_{\mathbf{I},X}\otimes_{\mathbb{Z}_{p}}\mathbb{G}_{m}^{\wedge}\). Clearly, every \(\mathfrak{s}_{n}\) is an isomorphism and takes each \((Z\times Z)^{/\Delta}\) to itself. Therefore, \(\mathfrak{s}_{n}\) takes \(\mathfrak{Z}\) to itself. By a rigidity result of Chai ([10]), any irreducible component of \(\mathfrak{Z}_{\overline{\eta}}\) is a formal subtorus of \(\mathscr{E}_{\mathbf{I},\overline{\eta}}\otimes_{\mathbb{Z}_{p}}\mathbb{G}_{m}^ {\wedge}\), where \(\eta\) is the generic point of \(X\). As a result, there is an etale open dense subset \(X^{\prime}\) of \(X\), with generic point \(\eta^{\prime}\), such that the irreducible components of \(\mathfrak{Z}_{X^{\prime}}\) are in bijection with the irreducible components of \(\mathfrak{Z}_{\overline{\eta}}\). Let \(\mathfrak{Y}_{1},...,\mathfrak{Y}_{n}\) be the irreducible components of \(\mathfrak{Z}_{X^{\prime}}\).
It follows that every \(\mathfrak{Y}_{k,\eta^{\prime}}\) is a formal subtorus of \(\mathscr{E}_{\mathbf{I},\eta^{\prime}}\otimes_{\mathbb{Z}_{p}}\mathbb{G}_{m}^ {\wedge}\). Taking cocharacter lattices, we see that \(\mathfrak{Y}_{k,\eta^{\prime}}\) gives rise to a saturated lisse subsheaf \(\mathcal{H}_{k,\eta^{\prime}}\subseteq\mathscr{E}_{\mathbf{I},\eta^{\prime}}\). Since \(X^{\prime}\) is smooth, there is a surjection \(\pi_{1}(\eta^{\prime},\overline{\eta})\to\pi_{1}(X^{\prime},\overline{\eta})\), so we can spread out \(\mathcal{H}_{k,\eta^{\prime}}\) to a saturated lisse subsheaf \(\mathcal{H}_{k}\subseteq\mathscr{E}_{\mathbf{I},X^{\prime}}\). Since \(\mathcal{H}_{k,\eta^{\prime}}\otimes\mathbb{G}_{m}^{\wedge}=\mathfrak{Y}_{k, \eta^{\prime}}\), by further shrinking \(X^{\prime}\), we can assume that \(\mathcal{H}_{k}\otimes\mathbb{G}_{m}^{\wedge}=\mathfrak{Y}_{k}\). This finishes the proof.
**Proposition 8.3**.: _Suppose that \(\mathbf{A}\) is simple and normalized. There is an etale open dense subset \(X^{\prime}_{0}\to X_{0}\), as well as a saturated etale lisse subsheaf \(\mathscr{F}\subseteq\mathscr{E}_{\mathbf{I}_{\mathbf{A}},X^{\prime}_{0}}\), which satisfies the following properties:_
1. _for each_ \(i\in\mathbf{I}_{\mathbf{A}}\)_, the projection of_ \(\mathscr{F}\) _to_ \(\mathscr{E}_{i,X^{\prime}_{0}}\) _has positive rank,_
2. \(\mathscr{F}_{X^{\prime}}\otimes\mathbb{G}_{m}^{\wedge}\subseteq(X^{\prime} \times X)^{/\Delta}\) _(here_ \(\Delta\) _is the graph of_ \(X^{\prime}\to X\)_)._
Proof.: By Lemma 8.2, there is an etale open dense subset \(X^{\prime}\to X\), together with \(p\)-adic lisse sheaves \(\{\mathcal{H}_{k}\}_{k=1}^{n}\) over \(X^{\prime}\), such that the irreducible components of \(\mathfrak{Z}_{X^{\prime}}:=\mathfrak{Z}\times_{X}X^{\prime}\) are exactly \(\{\mathcal{H}_{k}\otimes\mathbb{G}_{m}^{\wedge}\}_{k=1}^{n}\). Since \(\mathfrak{Z}\subseteq(X\times X)^{/\Delta}\), we have
\[\mathcal{H}_{k}\otimes\mathbb{G}_{m}^{\wedge}\subseteq\mathfrak{Z}_{X^{\prime}} \subseteq(X^{\prime}\times X)^{/\Delta},\ 1\leq k\leq n. \tag{8.2.3}\]
Fix a \(Z\in\mathbf{A}_{X^{\prime}}\). There exists a sufficiently large finite field \(\mathbb{F}_{q^{n}}\), together with \(\mathbb{F}_{q^{n}}\)-varieties \(Z_{0}\) and \(X^{\prime}_{0}\), such that (a) there is an immersion \(Z_{0}\subseteq X^{\prime}_{0}\) and an etale open dense set \(X^{\prime}_{0}\to X_{0}\), such that \(Z\subseteq X^{\prime}\)_resp_. \(X^{\prime}\to X\) is the base change of \(Z_{0}\subseteq X^{\prime}_{0}\)_resp_. \(X^{\prime}_{0}\to X_{0}\) to \(\mathbb{F}\), (b) there is an etale lisse sheaf \(\mathcal{F}_{[Z]}\) over \(Z_{0}\) as discussed in SS8.2 and (c) \(Z_{0}(\mathbb{F}_{q^{n}})\neq\emptyset\).
Pick a point \(x_{0}\in Z_{0}(\mathbb{F}_{q^{n}})\) and a point \(x\in Z_{0}(\mathbb{F})\) mapping to \(x_{0}\). Let \(\mathscr{F}_{x}\) be the saturated submodule of \(\mathscr{E}_{\mathbf{I},x}\) generated by \(\{g\cdot\mathscr{F}_{[Z],x}|g\in\pi_{1}(X^{\prime},x)\}\). Since \(\mathcal{F}_{[Z],x}\) is invariant under \(\operatorname{Gal}(x|x_{0})\)-action, we can use the fact that \(\pi_{1}(X^{\prime}_{0},x)=\pi_{1}(X^{\prime},x)\rtimes\operatorname{Gal}(x|x_{0})\) to show that \(\mathscr{F}_{x}\) is invariant under \(\pi_{1}(X^{\prime}_{0},x)\)-action. It is then routine to check that \(\mathscr{F}_{x}\) is a continuous \(\pi_{1}(X^{\prime}_{0},x)\)-representation. As a result, \(\mathscr{F}_{x}\) gives rise to a saturated arithmetic lisse subsheaf \(\mathscr{F}\subseteq\mathscr{E}_{\mathbf{I},X^{\prime}_{0}}\).
Since \(\mathbf{A}\) is normalized, the projection of \(Z\) to \(\mathscr{S}_{\mathbf{I}-\mathbf{I}_{\mathbf{A}},\mathbb{F}}\) is a single point. As a result, for any \(g\in\pi_{1}(X^{\prime},x)\), we have \(g\cdot\mathscr{F}_{[Z],x}\subseteq g\cdot\mathscr{E}_{\mathbf{I}_{\mathbf{A}},x} \subseteq\mathscr{E}_{\mathbf{I}_{\mathbf{A}},x}\). It follows that \(\mathscr{F}\subseteq\mathscr{E}_{\mathbf{I}_{\mathbf{A}},X^{\prime}_{0}}\).
Since \(\mathbf{A}_{X^{\prime}}\) is simple, for any \(i\in\mathbf{I}_{\mathbf{A}}\), the projection of \(Z\) to each \(\mathscr{S}_{i,\mathbb{F}}\) is positive dimensional. As a result, the projection of \(Z^{/x}=\mathcal{F}_{[Z],x}\otimes\mathbb{G}_{m}^{\wedge}\) to each \(\mathscr{S}_{i,\mathbb{F}}^{/x}=\mathscr{E}_{i,x}\otimes\mathbb{G}_{m}^{\wedge}\) has positive rank. Therefore, the projection of \(\mathcal{F}_{[Z],x}\) to each \(\mathscr{E}_{i,x}\) has positive rank. On the other hand, we have \(\mathcal{F}_{[Z],x}\subseteq\mathscr{F}_{x}\) by construction. It follows that the projection of \(\mathscr{F}_{x}\) to each \(\mathscr{E}_{i,x}\) has positive rank, hence (1).
Since the irreducible components of \(\mathfrak{Z}_{X^{\prime}}\) are in bijection with the irreducible components of \(\mathfrak{Z}_{\eta^{\prime}}\), \(\mathcal{F}_{[Z],Z}\) is contained in the restriction to \(Z\) of one of the lisse sheaves \(\mathcal{H}_{1},\mathcal{H}_{2},\ldots,\mathcal{H}_{n}\). Say, it is contained in \(\mathcal{H}_{1,Z}\). It follows from the definition that \(\mathscr{F}_{X^{\prime}}\subseteq\mathcal{H}_{1}\). But then, (8.2.3) implies that \(\mathscr{F}_{X^{\prime}}\otimes\mathbb{G}_{m}^{\wedge}\subseteq(X^{\prime}\times X)^{/\Delta}\). So we have (2).
#### 8.2.3. Monodromy representations
We say a few words about the monodromy of \(\mathscr{E}_{\mathbf{I},X_{0}}\). Consider the \(F\)-isocrystal \(\mathbb{E}_{\mathbf{I},X_{0}}=\bigoplus_{i\in\mathbf{I}}\mathbb{D}(\mathrm{Br}_{i,X_{0}})^{\vee}\otimes\mathbb{D}(\Psi^{\mathrm{\acute{e}t}}_{i,X_{0}})\). The lisse sheaf \(\mathscr{E}_{\mathbf{I},X_{0}}\) is exactly the sheaf of horizontal (or Frobenius invariant) sections of the unit-root \(F\)-isocrystal \(\mathbb{E}_{\mathbf{I},X_{0}}(-1)\). Under the equivalence between the category of unit-root \(F\)-isocrystals and the category of continuous \(\pi_{1}(X_{0},x)\)-representations in \(\mathbb{Q}_{p}\)-vector spaces ([13, Theorem 2.1]), \(\mathbb{E}_{\mathbf{I},X_{0}}(-1)\) corresponds exactly to \(\mathscr{E}_{\mathbf{I},X_{0}}\). Take the fiber functors as in SS3 and identify them via the canonical isomorphisms
\[\omega_{x}(\mathbb{E}_{\mathbf{I},X_{0}})\simeq\bigoplus_{i\in\mathbf{I}} \omega_{x}(\mathbb{D}(\mathrm{Br}_{i}))^{\vee}\otimes\omega_{x}(\mathbb{D}( \Psi^{\mathrm{\acute{e}t}}_{i}))\simeq U_{\mathrm{GL}^{\prime},\overline{ \mu}_{\mathbf{I}}}(\mathbb{Q}_{p})\simeq\mathscr{E}_{\mathbf{I},x,\mathbb{Q}_{ p}}.\]
We see that the monodromy group \(G(\mathbb{E}_{\mathbf{I},X_{0}},x)\subseteq\mathrm{GL}(\omega_{x}(\mathbb{E}_{ \mathbf{I},X_{0}}))\) is generated by \(G(\mathscr{E}_{\mathbf{I},X_{0}},x)\) together with a central torus \(\mathbb{G}_{m}\subseteq\mathrm{GL}(\omega_{x}(\mathbb{E}_{\mathbf{I},X_{0}}))\) generated by the Hodge cocharacter of \(\mathbb{E}_{\mathbf{I},X_{0}}\). On the other hand, there is a conjugation action of \(G(\mathrm{gr}\,\mathbb{D}(\Psi_{\mathbf{I},X_{0}}),x)\) on \(U_{\mathrm{GL}^{\prime},\overline{\mu}_{\mathbf{I}}}(\mathbb{Q}_{p})\), and the image of the representation \(G(\mathrm{gr}\,\mathbb{D}(\Psi_{\mathbf{I},X_{0}}),x)\to\mathrm{GL}(U_{ \mathrm{GL}^{\prime},\overline{\mu}_{\mathbf{I}}}(\mathbb{Q}_{p}))\) is identified with \(G(\mathbb{E}_{\mathbf{I},X_{0}},x)\).
An etale lisse subsheaf \(\mathscr{F}\subseteq\mathscr{E}_{\mathbf{I},X_{0}}\) gives rise to a \(G(\mathscr{E}_{\mathbf{I},X_{0}},x)\)-subrepresentation \(\mathscr{F}_{x,\mathbb{Q}_{p}}\subseteq\mathscr{E}_{\mathbf{I},x,\mathbb{Q}_{ p}}\). By the above discussion, it canonically gives rise to a \(G(\mathrm{gr}\,\mathbb{D}(\Psi_{\mathbf{I},X_{0}}),x)\)-subrepresentation \(\mathscr{F}_{x,\mathbb{Q}_{p}}\subseteq U_{\mathrm{GL}^{\prime},\overline{ \mu}_{\mathbf{I}}}(\mathbb{Q}_{p})\). When \(\#\mathbf{I}=1\), the conjugation action of \(G(\mathbb{D}(\mathrm{Br}_{X_{0}}))\) on \(U_{\mathrm{GL},\overline{\mu}}(\mathbb{Q}_{p})\) is nothing other than scaling. After making the identification (6.4.1), \(\mathscr{F}\) also canonically gives rise to a subrepresentation \(\mathscr{F}_{x,\mathbb{Q}_{p}}\subseteq\omega_{x}(\mathbb{D}(\Psi^{\mathrm{ \acute{e}t}}))\) of the tautological representation of \(G(\mathbb{D}(\Psi^{\mathrm{\acute{e}t}}_{X_{0}}),x)\) on \(\omega_{x}(\mathbb{D}(\Psi^{\mathrm{\acute{e}t}}))\). We will apply these observations to the \(p\)-adic lisse sheaf \(\mathscr{F}\) that arises from Proposition 8.3.
### The case of GSpin Shimura varieties
In this section we prove the mod \(p\) Andre-Oort conjecture for GSpin Shimura varieties; this case is technically easier than that of products of modular curves. Note that we only need to prove that \(X\), as a subvariety containing a Zariski dense collection of special subvarieties, is itself special.
Proof of Theorem 4.13 for GSpin Shimura varieties.: Let \(X^{\prime}_{0}\to X_{0}\) be an etale open dense subset satisfying Proposition 8.3 and let \(\mathscr{F}\) be the lisse subsheaf thus arising. Pick \(x\in X^{\prime}_{0}(\mathbb{F})\) and let \(f\colon X^{\prime}\to\mathscr{S}_{\mathbb{F}}\) be the composition of the etale open subset \(X^{\prime}\to X\) and the immersion \(X\to\mathscr{S}_{\mathbb{F}}\), and let \(\mathscr{T}_{f,x}\) and \(T_{f,x}\) be the objects defined in SS6.1. We have by Proposition 8.3(2),
\[\mathscr{F}_{x}\otimes\mathbb{G}_{m}^{\wedge}\subseteq X^{\prime/x}\subseteq \mathscr{T}_{f,x}. \tag{8.3.1}\]
In particular, \(\mathscr{F}_{x,\mathbb{Q}_{p}}\subseteq T_{f,x}\). Using the observation from SS8.2.3, we find that \(\mathscr{F}\) corresponds to a positive rank \(G(\mathbb{D}(\Psi^{\mathrm{\acute{e}t}}_{X_{0}}),x)\)-subrepresentation of \(\omega_{x}(\mathbb{D}(\Psi^{\mathrm{\acute{e}t}}))\). Since \(\mathscr{F}_{x,\mathbb{Q}_{p}}\subseteq T_{f,x}\), this subrepresentation is furthermore contained in \(T_{f,x}\). By Remark 6.9 (or by Theorem 4.12 and Proposition 7.3), \(G(\mathbb{D}(\Psi^{\mathrm{\acute{e}t}}_{X_{0}}),x)^{\circ}\) is either \(\mathrm{SO}(T_{f,x})\) or \(\mathrm{U}(\varphi_{f,x,0})\), depending on whether \(T_{f,x}\) is nondegenerate or totally isotropic.
We now deduce from these facts that \(\mathscr{T}_{f,x}=X^{\prime/x}\). Note that the case \(\dim X^{\prime}=1\) is trivial. So, in the following we assume \(\dim X^{\prime}\geq 2\). If \(\dim T_{f,x}=2\), then it follows for dimension reasons that \(\mathscr{T}_{f,x}=X^{\prime/x}\). If \(\dim T_{f,x}\geq 3\), then the action of \(\mathrm{SO}(T_{f,x})\) or \(\mathrm{U}(\varphi_{f,x,0})\) on \(T_{f,x}\) is irreducible. This forces \(\mathscr{F}_{x,\mathbb{Q}_{p}}=T_{f,x}\). As a result, (8.3.1) is an equality and \(\mathscr{T}_{f,x}=X^{\prime/x}\). Since \(X^{/x}=X^{\prime/x}\), Theorem 4.10 implies that \(X\) is special.
### The case of products of modular curves
We now treat the mod \(p\) Andre-Oort conjecture for products of modular curves. We begin with a simple case:
**Lemma 8.4**.: _If \(\boldsymbol{A}\) is normalized and simple, then \(\overline{X}\) is the product of a special subvariety in \(\mathscr{S}_{\mathbf{I}_{\boldsymbol{A}},\mathbb{F}}\) with a subvariety in \(\mathscr{S}_{\mathbf{I}-\mathbf{I}_{\boldsymbol{A}},\mathbb{F}}\)._
Proof.: Let \(X^{\prime}_{0}\to X_{0}\) be an etale open dense subset satisfying Proposition 8.3 and let \(\mathscr{F}\) be the lisse subsheaf thus arising. Pick \(x\in X^{\prime}_{0}(\mathbb{F})\) and let \(f\colon X^{\prime}\to\mathscr{S}_{\mathbf{I},\mathbb{F}}\) be the composition of the etale open
subset \(X^{\prime}\to X\) and the immersion \(X\to\mathscr{S}_{\mathbf{I},\mathbb{F}}\), and let \(\mathscr{T}_{f,x}\) and \(T_{f,x}\) be objects defined in SS6.1. We then have
\[\mathscr{F}_{x}\subseteq\mathscr{E}_{\mathbf{I}_{\mathbf{A}},x}. \tag{8.4.1}\]
\[\mathscr{F}_{x}\otimes\mathbb{G}_{m}^{\wedge}\subseteq X^{\prime/x}\subseteq\mathscr{T}_{f,x}. \tag{8.4.2}\]
Recall that for \(\mathbf{J}\subseteq\mathbf{I}\), \(f_{\mathbf{J}}\) is the composition of \(f\) with the projection \(\mathscr{S}_{\mathbf{I},\mathbb{F}}\to\mathscr{S}_{\mathbf{J},\mathbb{F}}\). Theorem 4.10 implies that there is a special subvariety \(\mathcal{X}_{f_{\mathbf{I}_{\mathbf{A}}}}\) of \(\mathcal{S}_{\mathbf{I}_{\mathbf{A}}}\) such that \(\mathscr{X}_{f_{\mathbf{I}_{\mathbf{A}}},\mathbb{F},\mathrm{red}}^{/x}=\mathcal{T}_{f_{\mathbf{I}_{\mathbf{A}}},x}\). Let \(Y\) be the Zariski closure of the projections of the elements in \(\boldsymbol{A}\) to \(\mathscr{S}_{\mathbf{I}-\mathbf{I}_{\mathbf{A}},\mathbb{F}}\), and let \(\mathscr{X}_{f_{\mathbf{I}_{\mathbf{A}}},\mathbb{F},\mathrm{red}}^{+}\) be the unique irreducible component of \(\mathscr{X}_{f_{\mathbf{I}_{\mathbf{A}}},\mathbb{F},\mathrm{red}}\) passing through \(x\). Then
\[\overline{X}\subseteq\mathscr{X}_{f_{\mathbf{I}_{\mathbf{A}}},\mathbb{F}, \mathrm{red}}^{+}\times Y. \tag{8.4.3}\]
By definition, the projection of \(\overline{X}\) to \(Y\) is surjective. It suffices to show that this is actually an equality. Recall the discussion made in SS8.2.3. The group \(G(\mathrm{gr}\,\mathbb{D}(\Psi_{\mathbf{I}_{\mathbf{A}},X_{0}^{\prime}}),x)^{\circ}\) acts on \(U_{\mathrm{GL}^{\prime},\overline{\mu}_{\mathbf{I}_{\mathbf{A}}}}(\mathbb{Q}_{p})\), and the lisse sheaf \(\mathscr{F}\) canonically gives rise to a sub-representation \(\mathscr{F}_{x,\mathbb{Q}_{p}}\subseteq U_{\mathrm{GL}^{\prime},\overline{\mu} _{\mathbf{I}_{\mathbf{A}}}}(\mathbb{Q}_{p})\). Moreover, in our case, \(G(\mathrm{gr}\,\mathbb{D}(\Psi_{\mathbf{I}_{\mathbf{A}},X_{0}^{\prime}}),x)^{ \circ}=G(\mathbb{D}(\mathrm{Br}_{\mathbf{I}_{\mathbf{A}},X_{0}^{\prime}}),x)^{\circ}\) is just a torus. Since \(G(\mathbb{D}(\mathrm{Br}_{\mathbf{I}_{\mathbf{A}},X_{0}^{\prime}}),x)^{\circ}\) fixes the subspace \(\mathscr{F}_{x,\mathbb{Q}_{p}}\), and for each \(i\in\mathbf{I}_{\mathbf{A}}\), the projection of \(\mathscr{F}_{x,\mathbb{Q}_{p}}\) to \(U_{\mathrm{GL},\overline{\mu}_{i}}(\mathbb{Q}_{p})\) is of positive rank (hence surjective), we have \(\mathrm{rk}\,G(\mathbb{D}(\mathrm{Br}_{\mathbf{I}_{\mathbf{A}},X_{0}^{\prime}} ),x)^{\circ}\leq\mathrm{rk}\,\mathscr{F}_{x}\). On the other hand, from Theorem 4.12 and Proposition 7.1, we see that \(\mathrm{rk}\,G(\mathbb{D}(\mathrm{Br}_{\mathbf{I}_{\mathbf{A}},X_{0}^{\prime}} ),x)^{\circ}=\dim T_{f_{\mathbf{I}_{\mathbf{A}}},x}\). Finally, it follows from (8.4.1) and (8.4.2) that \(\mathscr{F}_{x}\subseteq\mathcal{T}_{f_{\mathbf{I}_{\mathbf{A}}},x}\). Combining these, we have
\[\mathcal{T}_{f,x}=\mathscr{F}_{x}\oplus\mathcal{T}_{f_{\mathbf{I}-\mathbf{I}_{\mathbf{A}}},x}, \tag{8.4.4}\]
\[\mathscr{F}_{x}=\mathcal{T}_{f_{\mathbf{I}_{\mathbf{A}}},x}. \tag{8.4.5}\]
Let \(y\) be the projection of \(x\) to \(\mathscr{S}_{\mathbf{I}-\mathbf{I}_{\mathbf{A}},\mathbb{F}}\). The results (8.4.4),(8.4.5) and (8.4.1) show that \(X_{y}^{/x}\), the completion of the fiber \(X_{y}\) at \(x\), is also \(\mathcal{T}_{f_{\mathbf{I}_{\mathbf{A}}},x}\). Therefore
\[\dim\overline{X}_{y}=\mathrm{rk}\,\mathcal{T}_{f_{\mathbf{I}_{\mathbf{A}}},x}= \dim\mathscr{X}_{f_{\mathbf{I}_{\mathbf{A}}},\mathbb{F},\mathrm{red}}^{+}.\]
This implies that
\[\overline{X}_{y}=\mathscr{X}_{f_{\mathbf{I}_{\mathbf{A}}},\mathbb{F},\mathrm{ red}}^{+}\times\{y\}. \tag{8.4.6}\]
Let \(x^{\prime}\) be a point of \(X^{\prime}(\mathbb{F})\). The same argument as above shows that, if we replace \(x\) by \(x^{\prime}\) in (8.4.1), (8.4.2), (8.4.4) and (8.4.5), they remain true. Writing \(y^{\prime}\) for the projection of \(x^{\prime}\) to \(\mathscr{S}_{\mathbf{I}-\mathbf{I}_{\mathbf{A}},\mathbb{F}}\), we see that
\[\dim\overline{X}_{y^{\prime}}=\mathrm{rk}\,\mathcal{T}_{f_{\mathbf{I}_{ \mathbf{A}}},x^{\prime}}=\mathrm{rk}\,\mathcal{T}_{f_{\mathbf{I}_{\mathbf{A}}},x }=\dim\mathscr{X}_{f_{\mathbf{I}_{\mathbf{A}}},\mathbb{F},\mathrm{red}}^{+},\]
where the equality in the middle follows from (8.4.5) or Corollary 6.4. As a result, (8.4.6) holds when we replace \(y\) by \(y^{\prime}\). Letting \(x^{\prime}\) run over \(X^{\prime}(\mathbb{F})\), we see that (8.4.6) holds for a Zariski dense set of \(y^{\prime}\in Y(\mathbb{F})\). As a result, (8.4.3) is an equality.
Proof of Theorem 4.13 for products of modular curves.: We do induction on \(\#\mathbf{I}\). If \(\#\mathbf{I}=1\) there is nothing to prove. Suppose the theorem is true for \(\#\mathbf{I}<n\). Let \(\#\mathbf{I}=n\). There is always a nonempty subset \(\mathbf{J}\subseteq\mathbf{I}_{\boldsymbol{A}}\), and a Zariski dense sub-collection \(\boldsymbol{B}\subseteq\boldsymbol{A}\), such that each special subvariety in \(\boldsymbol{B}\) has positive dimensional projection to \(\mathscr{S}_{i,\mathbb{F}}\) for every \(i\in\mathbf{J}\), while having zero dimensional projection to \(\mathscr{S}_{\mathbf{I}-\mathbf{J},\mathbb{F}}\). In other words, \(\boldsymbol{B}\) is simple and \(\mathbf{I}_{\boldsymbol{B}}=\mathbf{J}\). Lemma 8.4 then implies that \(\overline{X}\) is the product of a special subvariety of \(\mathscr{S}_{\mathbf{I}_{\boldsymbol{B}},\mathbb{F}}\) and a subvariety \(Y\subseteq\mathscr{S}_{\mathbf{I}-\mathbf{I}_{\boldsymbol{B}},\mathbb{F}}\). If \(\mathbf{I}_{\boldsymbol{B}}=\mathbf{I}_{\boldsymbol{A}}\), then we are already done. If \(\mathbf{I}_{\boldsymbol{B}}\subsetneq\mathbf{I}_{\boldsymbol{A}}\), we can project all special subvarieties in \(\boldsymbol{A}\) down to \(\mathscr{S}_{\mathbf{I}-\mathbf{I}_{\boldsymbol{B}},\mathbb{F}}\). The images form a collection of special subvarieties of \(Y\), which we denote by \(\overline{\boldsymbol{A}}\). Then \(\mathbf{I}_{\overline{\boldsymbol{A}}}=\mathbf{I}_{\boldsymbol{A}}-\mathbf{I}_{\boldsymbol{B}}\neq\emptyset\). By the induction hypothesis, \(Y\) is the product of a special subvariety of \(\mathscr{S}_{\mathbf{I}_{\boldsymbol{A}}-\mathbf{I}_{\boldsymbol{B}},\mathbb{F}}\) and a subvariety \(Y^{\prime}\subseteq\mathscr{S}_{\mathbf{I}-\mathbf{I}_{\boldsymbol{A}},\mathbb{F}}\). As
a result \(\overline{X}\) is a product of a special subvariety of \(\mathscr{S}_{\mathbf{I}_{\boldsymbol{A}},\mathbb{F}}\) and a subvariety \(Y^{\prime}\subseteq\mathscr{S}_{\mathbf{I}-\mathbf{I}_{\boldsymbol{A}},\mathbb{F}}\). Therefore the theorem holds for \(\#\mathbf{I}=n\).
|
2308.08879 | The Picard index of a surface with torus action | We consider normal rational projective surfaces with torus action and provide
a formula for their Picard index, that means the index of the Picard group
inside the divisor class group. As an application, we classify the log del
Pezzo surfaces with torus action of Picard number one up to Picard index
10,000. | Justus Springer | 2023-08-17T09:26:06Z | http://arxiv.org/abs/2308.08879v1 | # The Picard index of a surface with torus action
###### Abstract.
We consider normal rational projective surfaces with torus action and provide a formula for their Picard index, that means the index of the Picard group inside the divisor class group. As an application, we classify the log del Pezzo surfaces with torus action of Picard number one up to Picard index \(10\,000\).
## 1. Introduction
Consider a normal variety \(X\) defined over an algebraically closed field \(\mathbb{K}\) of characteristic zero. By normality, the Picard group \(\operatorname{Pic}(X)\) embeds into the divisor class group \(\operatorname{Cl}(X)\) as the subgroup consisting of the Cartier divisor classes, and the _Picard index_\([\operatorname{Cl}(X):\operatorname{Pic}(X)]\) measures the amount of non-invertible reflexive rank one sheaves on \(X\). For rational normal projective surfaces admitting a (non-trivial) action of the multiplicative group \(\mathbb{K}^{*}\), we provide the following formula, involving the torsion part \(\operatorname{Cl}(X)^{\operatorname{tors}}\) and the local class groups \(\operatorname{Cl}(X,x)\), hosting the Weil divisors modulo those being principal near \(x\in X\).
**Theorem 1.1**.: _The Picard index of a normal rational projective \(\mathbb{K}^{*}\)-surface \(X\) is given by_
\[[\operatorname{Cl}(X):\operatorname{Pic}(X)]\ =\ \frac{1}{|\operatorname{Cl}(X)^{ \operatorname{tors}}|}\prod_{x\in X}|\operatorname{Cl}(X,x)|.\]
Note that rationality forces our \(\mathbb{K}^{*}\)-surface \(X\) to be \(\mathbb{Q}\)-factorial and \(\operatorname{Cl}(X)\) to be finitely generated, see for instance [2, Thm. 5.4.1.5]. Moreover, by normality of \(X\), there are only finitely many singular points and these are the only possible contributors of non-trivial local class groups. Thus, all terms in our formula are indeed finite.
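As a quick illustration (a standard example, chosen here purely for illustration), consider the weighted projective plane \(X=\mathbb{P}(1,1,2)\). It is a rational projective \(\mathbb{K}^{*}\)-surface with \(\operatorname{Cl}(X)\cong\mathbb{Z}\), and its only singularity is an \(A_{1}\)-point with local class group of order two, so the right hand side of the formula equals \(2\). This matches the classical fact that \(\operatorname{Pic}(X)\) is generated by \(\mathcal{O}_{X}(2)\) inside \(\operatorname{Cl}(X)=\mathbb{Z}\cdot\mathcal{O}_{X}(1)\).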
Beyond the \(\mathbb{K}^{*}\)-surfaces, the formula trivially holds for all smooth projective surfaces with a finitely generated and torsion free divisor class group. As soon as we allow torsion, the r.h.s. is no longer integral in the smooth case and thus the formula fails. Concrete examples are the Enriques surfaces, having divisor class group \(\mathbb{Z}^{10}\times\mathbb{Z}/2\,\mathbb{Z}\). A singular counterexample without \(\mathbb{K}^{*}\)-action is provided by the \(D_{8}\)-singular log del Pezzo surface of Picard number one: it is \(\mathbb{Q}\)-factorial with divisor class group \(\mathbb{Z}\times\mathbb{Z}/2\,\mathbb{Z}\) and doesn't satisfy the formula, see Example 7.5.
Our motivation to consider the Picard index arises from the study of log del Pezzo surfaces. Recall that these are normal projective surfaces with an ample anticanonical divisor and at most finite quotient singularities. The log del Pezzo surfaces form an infinite class, which can be filtered into finite subclasses by further conditions on the singularities. Common conditions are bounding the Gorenstein index or the log terminality; for the state of the art we refer to [1, 5, 16] in the general case and to [7, 8, 9] in the case of log del Pezzo surfaces with \(\mathbb{K}^{*}\)-action. The
idea of filtering by the Picard index has appeared in [11], where not-necessarily log terminal Fano varieties with divisor class group \(\mathbb{Z}\) and a torus action of complexity one have been considered. Here, we use Theorem 1.1 to derive in Picard number one suitable bounds on toric and non-toric log del Pezzo \(\mathbb{K}^{*}\)-surfaces and then present a classification algorithm. Explicit results are obtained up to Picard index \(1\,000\,000\) in the toric case and up to Picard index \(10\,000\) in the non-toric case.
**Theorem 1.2**.: _There are \(1\,415\,486\) families of log del Pezzo \(\mathbb{K}^{*}\)-surfaces of Picard number one and Picard index at most \(10\,000\). Of those, \(68\,053\) are toric and \(1\,347\,433\) are non-toric. The number of families for given Picard index develops as follows:_
We use the approach to rational \(\mathbb{K}^{*}\)-surfaces developed in [10, 12], see also [2, Sections 3.4, 5.4]. A key feature is that rational projective \(\mathbb{K}^{*}\)-surfaces are realized in a natural way as closed subvarieties of non-complete toric varieties with a very specific defining fan. This reduces the computation of the Picard index of a rational \(\mathbb{K}^{*}\)-surface to computing the Picard index of its non-complete toric ambient variety. Accordingly, we begin in Section 2 with a study of the Picard group in a purely toric setting. In Section 3, we recall the necessary combinatorial theory of \(\mathbb{K}^{*}\)-surfaces in terms of the integral generator matrices of their ambient toric varieties. Sections 4, 5 and 6 are the technical heart of the proof of Theorem 1.1: there we perform a detailed combinatorial study of the maximal minors of the integral matrices in question. Section 7 completes the proof of Theorem 1.1 and presents examples. Finally, in Section 8, we present the classification algorithm for log del Pezzo \(\mathbb{K}^{*}\)-surfaces of Picard number one with given Picard index, proving Theorem 1.2.
The author would like to thank Prof. Jurgen Hausen for his valuable feedback and advice.
###### Contents
* 1 Introduction
* 2 The Picard group of a toric variety
* 3 Background on \(\mathbb{K}^{*}\)-surfaces
* 4 Maximal minors of \(P\)
* 5 The construction of \(\hat{P}\)
* 6 Maximal minors of \(\hat{P}\)
* 7 Proof of Theorem 1.1 and examples
* 8 Log del Pezzo \(\mathbb{K}^{*}\)-surfaces of Picard number one
## 2. The Picard group of a toric variety
In this section, we develop our approach to the Picard group of toric varieties, which yields in Corollary 2.4 a criterion for torsion-freeness and in Proposition 2.7 a first formula involving the Picard index and local class groups. The reader is assumed to be familiar with the basics of toric geometry [4, 6].
**Construction 2.1**.: Let \(Z=Z_{\Sigma}\) be a toric variety coming from a fan \(\Sigma\) in the lattice \(N:=\mathbb{Z}^{r}\). We assume \(\Sigma\) to be non-degenerate, i.e. its primitive ray generators \(v_{1},\dots,v_{n}\) span \(\mathbb{Q}^{r}\) as a convex cone. We allow \(\Sigma\) to be non-complete. With \(F:=\mathbb{Z}^{n}\), we have the generator map
\[P\colon F\to N,\qquad e_{i}\mapsto v_{i}.\]
To any maximal cone \(\sigma=\operatorname{cone}(v_{i_{1}},\dots,v_{i_{n_{\sigma}}})\in\Sigma_{ \max}\), we associate the lattices
\[N_{\sigma}\ :=\ \operatorname{lin}_{\mathbb{Q}}(\sigma)\cap N,\qquad\qquad F _{\sigma}\ :=\ \mathbb{Z}^{n_{\sigma}}\,.\]
We define the _local generator map_ associated to \(\sigma\) by
\[P_{\sigma}\colon F_{\sigma}\to N_{\sigma},\qquad e_{j}\mapsto v_{i_{j}}.\]
With the inclusion \(\alpha_{\sigma}\colon N_{\sigma}\hookrightarrow N\) and the map \(\beta_{\sigma}\colon F_{\sigma}\to F\) sending \(e_{j}\) to \(e_{i_{j}}\), we obtain a commutative diagram
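Spelled out (only the arrangement is chosen here; the objects and maps are exactly the ones just defined), this is the square
\[\begin{array}{ccc}F_{\sigma}&\xrightarrow{\ P_{\sigma}\ }&N_{\sigma}\\ {\scriptstyle\beta_{\sigma}}\big\downarrow&&\big\downarrow{\scriptstyle\alpha_{\sigma}}\\ F&\xrightarrow{\ \ P\ \ }&N\end{array}\]
whose commutativity records that \(P(\beta_{\sigma}(e_{j}))=v_{i_{j}}=\alpha_{\sigma}(P_{\sigma}(e_{j}))\) for \(j=1,\dots,n_{\sigma}\).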
Consider the dual lattices
\[M:=N^{*},\qquad E:=F^{*},\qquad M_{\sigma}:=N^{*}_{\sigma},\qquad E_{\sigma}:= F^{*}_{\sigma}.\]
Setting \(K:=E/\operatorname{im}(P^{*})\) and \(K_{\sigma}:=E_{\sigma}/\operatorname{im}(P^{*}_{\sigma})\), we obtain a map \(\pi_{\sigma}\colon K\to K_{\sigma}\) fitting into the following commutative diagram with exact rows.
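Written out (again, only the layout is chosen here), the rows are the defining presentations of \(K\) and \(K_{\sigma}\), and the vertical maps are \(\alpha_{\sigma}^{*}\), \(\beta_{\sigma}^{*}\) and \(\pi_{\sigma}\):
\[\begin{array}{ccccccc}M&\xrightarrow{\ P^{*}\ }&E&\longrightarrow&K&\longrightarrow&0\\ {\scriptstyle\alpha_{\sigma}^{*}}\big\downarrow&&{\scriptstyle\beta_{\sigma}^{*}}\big\downarrow&&{\scriptstyle\pi_{\sigma}}\big\downarrow&&\\ M_{\sigma}&\xrightarrow{\ P_{\sigma}^{*}\ }&E_{\sigma}&\longrightarrow&K_{\sigma}&\longrightarrow&0\end{array}\]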
By standard toric geometry, we have isomorphisms \(K\cong\operatorname{Cl}(Z)\) and \(K_{\sigma}\cong\operatorname{Cl}(U_{\sigma})\), where \(U_{\sigma}\) is the affine toric chart associated to \(\sigma\). Moreover, the map \(\pi_{\sigma}\) corresponds to the restriction of divisor classes \([D]\mapsto[D|_{U_{\sigma}}]\). In particular, its kernel consists of those divisor classes that are principal on \(U_{\sigma}\).
**Construction 2.2**.: In the setting of Construction 2.1, we define lattices \(\mathbf{N}\) and \(\mathbf{F}\) and a lattice homomorphism \(\mathbf{P}\colon\,\mathbf{F}\to\mathbf{N}\) by
\[\mathbf{N}:=\bigoplus_{\sigma\in\Sigma_{\max}}N_{\sigma},\qquad\mathbf{F}:= \bigoplus_{\sigma\in\Sigma_{\max}}F_{\sigma},\qquad\mathbf{P}:=\bigoplus_{ \sigma\in\Sigma_{\max}}P_{\sigma}.\]
Furthermore, we define lattice homomorphisms
\[\alpha\colon\,\mathbf{N}\to N,\qquad N_{\sigma}\ni v\mapsto\alpha_{ \sigma}(v),\] \[\beta\colon\,\mathbf{F}\to F,\qquad F_{\sigma}\ni w\mapsto\beta_{ \sigma}(w).\]
Let \(\gamma\colon\hat{N}\to\mathbf{N}\) be a kernel of \(\alpha\) and \(\delta\colon\hat{F}\to\mathbf{F}\) be a kernel of \(\beta\). We obtain an induced map \(\hat{P}\colon\hat{F}\to\hat{N}\) making the following diagram commute.
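Concretely (layout chosen here), the diagram consists of the two kernel sequences, connected by \(\hat{P}\), \(\mathbf{P}\) and \(P\):
\[\begin{array}{ccccc}\hat{F}&\xrightarrow{\ \delta\ }&\mathbf{F}&\xrightarrow{\ \beta\ }&F\\ {\scriptstyle\hat{P}}\big\downarrow&&{\scriptstyle\mathbf{P}}\big\downarrow&&\big\downarrow{\scriptstyle P}\\ \hat{N}&\xrightarrow{\ \gamma\ }&\mathbf{N}&\xrightarrow{\ \alpha\ }&N\end{array}\]
Here the right-hand square commutes since \(P\circ\beta\) and \(\alpha\circ\mathbf{P}\) agree on each summand \(F_{\sigma}\), and \(\hat{P}\) is the map induced on the kernels.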
Now consider the dual lattices \(\mathbf{M}:=\mathbf{N}^{*}\) and \(\mathbf{E}:=\mathbf{F}^{*}\) as well as the abelian group \(\mathbf{K}:=\bigoplus_{\sigma\in\Sigma_{\max}}K_{\sigma}\). We define the map
\[\pi\colon K\to\mathbf{K},\quad w\mapsto(\pi_{\sigma}(w))_{\sigma\in\Sigma_{ \max}}.\]
Setting \(\hat{M}:=\mathbf{M}/\operatorname{im}(\alpha^{*})\) and \(\hat{E}:=\mathbf{E}/\operatorname{im}(\beta^{*})\) as well as \(\hat{K}:=\mathbf{K}/\operatorname{im}(\pi)\), we obtain a map \(\hat{P}^{\prime}\colon\hat{M}\to\hat{E}\) fitting into the following commutative diagram with exact rows.
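One way to display it (the arrangement and the unlabeled arrows, which are the canonical projections, are spelled out here) is
\[\begin{array}{ccccccc}M&\xrightarrow{\ P^{*}\ }&E&\longrightarrow&K&\longrightarrow&0\\ {\scriptstyle\alpha^{*}}\big\downarrow&&{\scriptstyle\beta^{*}}\big\downarrow&&{\scriptstyle\pi}\big\downarrow&&\\ \mathbf{M}&\xrightarrow{\ \mathbf{P}^{*}\ }&\mathbf{E}&\longrightarrow&\mathbf{K}&\longrightarrow&0\\ \big\downarrow&&\big\downarrow&&\big\downarrow&&\\ \hat{M}&\xrightarrow{\ \hat{P}^{\prime}\ }&\hat{E}&\longrightarrow&\hat{K}&\longrightarrow&0\end{array}\]
where the first two rows are exact by the definitions of \(K\), \(K_{\sigma}\) and \(\mathbf{K}\).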
**Proposition 2.3**.: _In Construction 2.2, the map \(\beta\) is surjective and there is an exact sequence_
\[0\xrightarrow{}\operatorname{Pic}(Z)\xrightarrow{}\hat{M}\xrightarrow{\hat{ P}^{\prime}}\hat{E}\xrightarrow{}\hat{K}\xrightarrow{}0.\]
_Moreover, if \(\alpha\) is surjective, \(\hat{M}\) is torsion-free and \(\hat{P}^{\prime}=\hat{P}^{*}\)._
Proof.: Every primitive generator of a ray of \(\Sigma\) is a generator of some maximal cone. This implies that \(\beta\) is surjective, hence \(\beta^{*}\) is injective. As a subgroup of \(K\), the Picard group \(\operatorname{Pic}(Z)\) consists of the Cartier divisor classes, i.e. those that are principal on all affine toric charts \(U_{\sigma}\) for \(\sigma\in\Sigma^{\max}\). This means
\[\operatorname{Pic}(Z)=\bigcap_{\sigma\in\Sigma_{\max}}\ker(\pi_{\sigma})= \ker(\pi).\]
Applying the snake lemma to the lower diagram of Construction 2.2 gives the exact sequence of the Proposition. The last statement is clear.
**Corollary 2.4**.: _Assume that in Construction 2.2, the map \(\alpha\) is surjective. Then the Picard group \(\operatorname{Pic}(Z)\) is torsion-free._
**Remark 2.5**.: Corollary 2.4 generalizes the well-known fact that if \(Z\) has a toric fixed point, its Picard group is torsion-free. Indeed, having a toric fixed point means having a cone \(\sigma\in\Sigma\) of maximal dimension. This implies \(N_{\sigma}=N\), hence \(\alpha\) is surjective.
**Definition 2.6**.: The _Picard index_ of a normal \(\mathbb{Q}\)-factorial variety \(X\) is defined as
\[\iota_{\operatorname{Pic}}(X)\ :=\ [\operatorname{Cl}(X):\operatorname{Pic}(X)].\]
**Proposition 2.7**.: _Let \(Z=Z_{\Sigma}\) be a toric variety with a non-degenerate simplicial fan \(\Sigma\). In the notation of Construction 2.2, we have_
\[\iota_{\operatorname{Pic}}(Z)\ =\ \frac{1}{|\hat{K}|}\prod_{\sigma\in\Sigma_{ \max}}|K_{\sigma}|.\]
Proof.: Recall that \(\operatorname{Pic}(Z)=\ker(\pi)\) and \(\hat{K}=\mathbf{K}/\operatorname{im}(\pi)\). Since \(\Sigma\) is simplicial, each \(K_{\sigma}\) is finite, hence so is \(\mathbf{K}\). We obtain
\[\iota_{\operatorname{Pic}}(Z)=[K:\ker(\pi)]=|\operatorname{im}(\pi)|=\frac{|\mathbf{K}|}{|\hat{K}|}=\frac{1}{|\hat{K}|}\prod_{\sigma\in\Sigma_{\max}}|K_{\sigma}|.\]
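To see the formula at work in a familiar case (again a standard example, not one from the article), take \(Z=\mathbb{P}(1,2,3)\) with primitive ray generators \(v_{0}=(-2,-3)\), \(v_{1}=(1,0)\), \(v_{2}=(0,1)\). The three maximal cones have \(|K_{\sigma}|=|\det(v_{1},v_{2})|=1\), \(|\det(v_{0},v_{2})|=2\) and \(|\det(v_{0},v_{1})|=3\). Since \(\mathcal{O}(k)\) is invertible on \(\mathbb{P}(1,2,3)\) precisely for \(6\mid k\), we have \(\iota_{\operatorname{Pic}}(Z)=6=1\cdot 2\cdot 3\), so in the formula above necessarily \(|\hat{K}|=1\).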
## 3. Background on \(\mathbb{K}^{*}\)-surfaces
A _\(\mathbb{K}^{*}\)-surface_ is an irreducible, normal surface \(X\) coming with an effective morphical action \(\mathbb{K}^{*}\times X\to X\). Let us briefly recall the relevant aspects of the geometry of \(\mathbb{K}^{*}\)-surfaces, the major part of which has been developed in [17, 18, 19].
Let \(X\) be a projective \(\mathbb{K}^{*}\)-surface. For each point \(x\in X\), the orbit map \(t\mapsto t\cdot x\) extends to a morphism \(\varphi_{x}\colon\ \mathbb{P}_{1}\to X\). This allows one to define
\[x_{0}\ :=\ \varphi_{x}(0),\qquad\qquad x_{\infty}\ :=\ \varphi_{x}(\infty).\]
The points \(x_{0}\) and \(x_{\infty}\) are fixed points for the \(\mathbb{K}^{*}\)-action and they lie in the closure of the orbit \(\mathbb{K}^{*}\cdot x\). There are three types of fixed points: A fixed point is called _parabolic_ (_hyperbolic_, _elliptic_), if it lies in the closure of precisely one (precisely two, infinitely many) non-trivial \(\mathbb{K}^{*}\)-orbits. Hyperbolic and elliptic fixed points are isolated, hence their number is finite. Parabolic fixed points form a closed smooth curve with at most two connected components. Every projective \(\mathbb{K}^{*}\)-surface has a _source_ and a _sink_, i.e. two irreducible components \(F^{+},F^{-}\subseteq X\) such that there exist non-empty \(\mathbb{K}^{*}\)-invariant open subsets \(U^{+},U^{-}\subseteq X\) with
\[x_{0}\in F^{+}\text{ for all }x\in U^{+},\qquad\qquad x_{\infty}\in F^{-} \text{ for all }x\in U^{-}.\]
The source either consists of a single elliptic fixed point or it is a smooth curve of parabolic fixed points. The same holds for the sink.
We now describe the combinatorial approach to \(\mathbb{K}^{*}\)-surfaces that will be our working environment for the rest of this article. It is the approach to varieties with a torus action of complexity one via their Cox Ring developed in [10, 12, 14]; see also [2, Section 5.4] for the surface case. In a first step, we produce generator matrices of toric varieties that will serve as ambient spaces for our \(\mathbb{K}^{*}\)-surfaces.
**Construction 3.1**.: Fix positive integers \(r,n_{0},\ldots,n_{r}\). We start with integral vectors \(l_{i}=(l_{i1},\ldots,l_{in_{i}})\in\mathbb{Z}_{\geq 1}^{n_{i}}\) and \(d_{i}=(d_{i1},\ldots,d_{in_{i}})\in\mathbb{Z}^{n_{i}}\) such that
\[\gcd(l_{ij},d_{ij})=1,\qquad\qquad\frac{d_{i1}}{l_{i1}}>\cdots>\frac{d_{in_{i}}}{l_{in_{i}}}.\]
The building blocks for our defining matrices are
\[L\ :=\ \begin{bmatrix}-l_{0}&l_{1}&\ldots&0\\ \vdots&&\ddots&\vdots\\ -l_{0}&0&\ldots&l_{r}\end{bmatrix},\qquad d\ :=\ \begin{bmatrix}d_{0}&d_{1}&\ldots&d_{r} \end{bmatrix}.\]
According to the possible constellations of source and sink, we introduce four types of integral matrices:
\[\text{(ee)}\quad P = \begin{bmatrix}L\\ d\end{bmatrix},\qquad\qquad\quad\text{(pe)}\quad P = \begin{bmatrix}L&0\\ d&1\end{bmatrix},\] \[\text{(ep)}\quad P = \begin{bmatrix}L&0\\ d&-1\end{bmatrix},\qquad\quad\text{(pp)}\quad P = \begin{bmatrix}L&0&0\\ d&1&-1\end{bmatrix}.\]
With the canonical basis vectors \(e_{1},\ldots,e_{r+1}\) of \(\mathbb{Z}^{r+1}\) and \(e_{0}:=-(e_{1}+\cdots+e_{r})\), the columns of \(P\) are
\[v_{ij}\ :=\ l_{ij}e_{i}+d_{ij}e_{r+1},\qquad v^{+}\ :=\ e_{r+1},\qquad v^{-}\ := \ -e_{r+1}.\]
We call \(P\) a _defining matrix_, if its columns generate \(\mathbb{Q}^{r+1}\) as a convex cone.
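For experimentation, a defining matrix can be assembled directly from the data \(l_{ij}\), \(d_{ij}\). The following Python sketch does this with sympy; the function name and the example data are ours and purely illustrative, and the conditions \(\gcd(l_{ij},d_{ij})=1\), the strict ordering of the slopes \(d_{ij}/l_{ij}\) and the convex-cone condition are assumed to be checked separately.

```python
from sympy import Matrix

def defining_matrix(l, d, typ="ee"):
    """Assemble the integral matrix P of Construction 3.1 (illustrative sketch).

    l and d are sequences (l_0, ..., l_r) and (d_0, ..., d_r) of integer tuples;
    typ is one of "ee", "pe", "ep", "pp".
    """
    r = len(l) - 1
    cols = []
    for i in range(r + 1):
        for lij, dij in zip(l[i], d[i]):
            # coordinates of l_ij * e_i w.r.t. e_1, ..., e_r, where
            # e_0 = -(e_1 + ... + e_r); the last entry is the coefficient of e_{r+1}
            e_i = [-1] * r if i == 0 else [1 if k == i - 1 else 0 for k in range(r)]
            cols.append([lij * c for c in e_i] + [dij])
    if typ in ("pe", "pp"):
        cols.append([0] * r + [1])     # the column v^+ = e_{r+1}
    if typ in ("ep", "pp"):
        cols.append([0] * r + [-1])    # the column v^- = -e_{r+1}
    return Matrix(cols).T              # the vectors above are the columns of P

# A toy example with r = 2, n_0 = 2, n_1 = n_2 = 1 of type (ee):
P = defining_matrix([(1, 1), (3,), (3,)], [(0, -1), (1,), (1,)], "ee")
# P == Matrix([[-1, -1, 3, 0],
#              [-1, -1, 0, 3],
#              [ 0, -1, 1, 1]])
```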
Next, we will construct the fan for the ambient toric varieties for our \(\mathbb{K}^{*}\)-surfaces.
**Construction 3.2**.: Let \(P\) be a defining matrix. Setting \(v_{i0}:=v^{+}\) and \(v_{in_{i}+1}:=v^{-}\) for all \(i\), we define the cones
\[\sigma^{+}:=\text{cone}(v_{01},\ldots,v_{r1}),\qquad\sigma^{-}:=\text{cone}( v_{0n_{0}},\ldots,v_{rn_{r}}),\]
\[\tau_{ij}:=\text{cone}(v_{ij},v_{ij+1}),\quad\text{for $i=0,\ldots,r$ and $j=0,\ldots,n_{i}$.}\]
According to the type of \(P\), we define \(\Sigma\) to be the fan with the following maximal cones:
\[\text{(ee)}\quad\quad\{\sigma^{+}\} \cup \{\tau_{i1},\ldots,\tau_{in_{i}-1}\ ;\ i=0,\ldots,r\} \cup \{\sigma^{-}\},\] \[\text{(pe)}\quad\{\tau_{00},\ldots,\tau_{r0}\} \cup \{\tau_{i1},\ldots,\tau_{in_{i}-1}\ ;\ i=0,\ldots,r\} \cup \{\sigma^{-}\},\] \[\text{(ep)}\quad\{\sigma^{+}\} \cup \{\tau_{i1},\ldots,\tau_{in_{i}-1}\ ;\ i=0,\ldots,r\} \cup \{\tau_{0n_{0}},\ldots,\tau_{rn_{r}}\},\] \[\text{(pp)}\quad\{\tau_{00},\ldots,\tau_{r0}\} \cup \{\tau_{i1},\ldots,\tau_{in_{i}-1}\ ;\ i=0,\ldots,r\} \cup \{\tau_{0n_{0}},\ldots,\tau_{rn_{r}}\}.\]
Note that \(\Sigma\) is a non-degenerate simplicial lattice fan in \(\mathbb{Z}^{r+1}\). However, it is in general not complete.
**Construction 3.3**.: Let \(P\) be a defining matrix. Consider the toric variety \(Z=Z_{\Sigma}\), where \(\Sigma\) is as in Construction 3.2. Let \(U_{1},\ldots,U_{r+1}\) be the coordinate functions on the acting torus \(\mathbb{T}^{r+1}\) of \(Z\). Fix pairwise different \(1=\lambda_{2},\ldots,\lambda_{r}\in\mathbb{K}^{*}\) and set
\[h_{i}:=\lambda_{i}+U_{1}+U_{i},\qquad\qquad i=2,\ldots,r.\]
Passing to the closure of the common set of zeroes of \(h_{2},\ldots,h_{r}\), we obtain an irreducible rational normal projective surface
\[X(P):=\overline{V(h_{2},\ldots,h_{r})}\subseteq Z.\]
Since the \(h_{i}\) do not depend on the last coordinate \(U_{r+1}\) of \(\mathbb{T}^{r+1}\), we get an effective \(\mathbb{K}^{*}\)-action on \(X(P)\) as a subtorus of \(\mathbb{T}^{r+1}\) by
\[t\cdot x:=(1,\dots,1,t)\cdot x.\]
**Remark 3.4**.: Consider a \(\mathbb{K}^{*}\)-surface \(X=X(P)\). Let \(Z=Z_{\Sigma}\) be the ambient toric variety with acting torus \(T=\mathbb{T}^{r+1}\). The fixed points of \(X\) are given as follows. For every \(\tau_{ij}\in\Sigma_{\max}\), the associated toric orbit \(T\cdot z_{\tau_{ij}}\) intersects \(X\) in a fixed point
\[\{x_{ij}\}\ =\ X\ \cap\ T\cdot z_{\tau_{ij}}.\]
If \(1\leq j\leq n_{i}-1\), the fixed point \(x_{ij}\) is hyperbolic and all hyperbolic fixed points arise this way. For \(j\in\{0,n_{i}\}\), the fixed point \(x_{ij}\) is parabolic. According to the type of \(P\), we have the following.
* There are two elliptic fixed points \(x^{+}=z_{\sigma^{+}}\) and \(x^{-}=z_{\sigma^{-}}\) and no parabolic fixed points.
* There is one elliptic fixed point \(x^{-}=z_{\sigma^{-}}\). There are parabolic fixed points \(x_{i0}\in F^{+}\) and all parabolic fixed points in \(F^{+}\backslash\{x_{00},\dots,x_{r0}\}\) are smooth.
* There is one elliptic fixed point \(x^{+}=z_{\sigma^{+}}\). There are parabolic fixed points \(x_{in_{i}}\in F^{-}\) and all parabolic fixed points in \(F^{-}\backslash\{x_{0n_{0}},\dots,x_{rn_{r}}\}\) are smooth.
* There are no elliptic fixed points. There are parabolic fixed points \(x_{i0}\in F^{+}\) and \(x_{in_{i}}\in F^{-}\) and all parabolic fixed points in \(F^{+}\backslash\{x_{00},\dots,x_{r0}\}\) and \(F^{-}\backslash\{x_{0n_{0}},\dots,x_{rn_{r}}\}\) are smooth.
**Theorem 3.5** (See [2, Thm. 5.4.1.5]).: _Every rational projective \(\mathbb{K}^{*}\)-surface is isomorphic to a \(\mathbb{K}^{*}\)-surface \(X(P)\) arising from Construction 3.3._
**Proposition 3.6**.: _Let \(X=X(P)\subseteq Z\) arise from Construction 3.3. Then we have_
* \(\operatorname{Cl}(X)\cong\operatorname{Cl}(Z)\)_,_
* \(\operatorname{Cl}(X,x)\cong\operatorname{Cl}(Z,x)\) _for all_ \(x\in X\)_,_
* \(\iota_{\operatorname{Pic}}(X)=\iota_{\operatorname{Pic}}(Z)\)_._
Proof.: The embedding \(X\subseteq Z\) is the canonical toric embedding in the sense of [2, Sec. 3.2.5]. Now use [2, 3.2.5.4 and 3.3.1.7].
## 4. Maximal minors of \(P\)
In this section, we consider defining matrices \(P\) as in Construction 3.1. We show that the greatest common divisor of the maximal minors of \(P\) is equal to the greatest common divisor of a certain subset of maximal minors. A maximal minor of a defining matrix is the determinant of a square submatrix of maximal size. First, we introduce a special lattice basis that reflects the block structure of \(P\).
**Construction 4.1**.: Let \(P\) be a defining matrix. Set \(n:=n_{0}+\dots+n_{r}\). Then \(P\) is an integral \((r+1)\times(n+m)\)-matrix, where \(m\in\{0,1,2\}\). Write \(e_{1},\dots,e_{r+1}\) for the canonical basis vectors of \(\mathbb{Z}^{r+1}\) and set \(u:=e_{r+1}\). Then we define
\[N\ :=\ \mathbb{Z}^{r+1}\ =\ \mathbb{Z}\,e_{1}\oplus\dots\oplus\mathbb{Z}\,e_{r} \oplus\mathbb{Z}\,u.\]
For each \(i=0,\ldots,r\), we define
\[\mathcal{F}_{i} := \{f_{i1},\ldots,f_{in_{i}}\},\qquad\qquad\qquad F_{i} := \mathbb{Z}\,f_{i1}\oplus\cdots\oplus\mathbb{Z}\,f_{in_{i}}\, \cong\,\mathbb{Z}^{n_{i}},\] \[\mathcal{F}^{\prime} := \begin{cases}\emptyset,&\text{(ee)}\\ \{f^{+}\},&\text{(pe)}\\ \{f^{-}\},&\text{(ep)}\\ \{f^{+},f^{-}\},&\text{(pp)}\end{cases},\qquad\qquad F^{\prime} := \bigoplus_{f\in\mathcal{F}^{\prime}}\mathbb{Z}\,f\,\cong\, \mathbb{Z}^{m}\,.\]
Setting
\[\mathcal{F}:=\mathcal{F}_{0}\cup\cdots\cup\mathcal{F}_{r}\cup\mathcal{F}^{ \prime},\qquad\qquad F\ :=\ F_{0}\oplus\cdots\oplus F_{r}\oplus F^{\prime},\]
we obtain an isomorphism \(F\!\cong\!\mathbb{Z}^{n+m}\). Set \(e_{0}:=-(e_{1}+\cdots+e_{r})\). Then we can view \(P\) as a lattice map given by
\[P\colon F\to N,\quad\begin{cases}f_{ij}\mapsto v_{ij}:=l_{ij}e_{i}+d_{ij}u& \\ f^{+}\mapsto u,&\text{if }f^{+}\in\mathcal{F}^{\prime}\\ f^{-}\mapsto-u,&\text{if }f^{-}\in\mathcal{F}^{\prime}\end{cases}.\]
**Definition 4.2**.: Let a subset \(A\subseteq\mathcal{F}\) with \(|A|=r+1\) be given. Then we have a sublattice \(F_{A}:=\bigoplus_{f\in A}\mathbb{Z}\cdot f\subseteq F\) and an induced map \(P_{A}\colon F_{A}\to N\) as in the commutative diagram
We call \(|\det(P_{A})|\in\mathbb{Z}_{\geq 0}\) the _maximal minor of \(P\) associated to \(A\)_. The set of maximal minors of \(P\) is
\[M(P)\ :=\ \{|\det(P_{A})|\ ;\ A\subseteq\mathcal{F},\ |A|=r+1\}.\]
The following Lemma gives a vanishing criterion for maximal minors of \(P\). In particular, it will allow us to describe the nonzero maximal minors of \(P\) explicitly.
**Lemma 4.3**.: _Let \(A\subseteq\mathcal{F}\) with \(|A|=r+1\). Assume that there exist \(0\leq i_{0}<i_{1}\leq r\) such that \(A\cap\mathcal{F}_{i_{0}}=A\cap\mathcal{F}_{i_{1}}=\emptyset\). Then \(\det(P_{A})=0\)._
Proof.: Consider the dual bases \(\{f^{*}_{ij}\}\) and \(\{e^{*}_{1},\ldots,e^{*}_{r},u^{*}\}\) of \(F^{*}\) and \(N^{*}\) respectively. Then we have
\[P^{*}(e^{*}_{i})=\left(\sum_{j=1}^{n_{i}}l_{ij}f^{*}_{ij}\right)-\left(\sum_{ j=1}^{n_{0}}l_{0j}f^{*}_{0j}\right).\]
If \(i_{0}=0\), this implies \(P^{*}_{A}(e^{*}_{i_{1}})=0\). If \(i_{0}\) and \(i_{1}\) are both nonzero, we have
\[P^{*}_{A}(e^{*}_{i_{0}})=-\left(\sum_{j=1}^{n_{0}}l_{0j}f^{*}_{0j}\right)=P^{* }_{A}(e^{*}_{i_{1}}).\]
Thus in both cases, \(\det(P_{A})=\det(P^{*}_{A})=0\)
**Definition 4.4**.:
1. Let numbers \(1\leq j_{i}\leq n_{i}\) for all \(i=0,\ldots,r\) be given. We define \[\mu(j_{0},\ldots,j_{r})\ :=\ \sum_{i_{0}=0}^{r}(-1)^{i_{0}}d_{i_{0}j_{i_{0}}}\prod_{i \neq i_{0}}l_{ij_{i}},\qquad\qquad\hat{\mu}\ :=\ \mu(n_{0},\ldots,n_{r}).\]
2. Let \(0\leq i\leq r\) and \(0\leq j,j^{\prime}\leq n_{i}\) be given. We define \[\nu(i,j,j^{\prime})\ :=\ l_{ij}d_{ij^{\prime}}-l_{ij^{\prime}}d_{ij},\qquad \qquad\hat{\nu}(i,j)\ :=\ \nu(i,j,n_{i}).\]
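For a quick numerical illustration with the toy data \(l=((1,1),(3),(3))\), \(d=((0,-1),(1),(1))\) from the sketch after Construction 3.1 (our own example, not taken from the article): \(\hat{\mu}=\mu(2,1,1)=(-1)\cdot 3\cdot 3-1\cdot 1\cdot 3+1\cdot 1\cdot 3=-9\) and \(\hat{\nu}(0,1)=1\cdot(-1)-1\cdot 0=-1\).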
**Lemma 4.5**.: _Let \(A\subseteq\mathcal{F}\) with \(|A|=r+1\) such that \(\det(P_{A})\neq 0\). Assume that \(A\cap\mathcal{F}^{\prime}=\emptyset\) holds. Then either (i) or (ii) must hold._
1. _We have_ \(|A\cap\mathcal{F}_{i}|=1\) _for all_ \(i=0,\ldots,r\) _and_ \(|\det(P_{A})|=|\mu(j_{0},\ldots,j_{r})|\) _for some numbers_ \(1\leq j_{i}\leq n_{i}\)_._
2. _There exist_ \(0\leq i_{0},i_{1}\leq r\) _with_ \[|A\cap\mathcal{F}_{i}|=\begin{cases}2,&i=i_{0}\\ 0,&i=i_{1}\\ 1,&\text{otherwise}\end{cases}\] _and we have_ \(|\det(P_{A})|=|\nu(i_{0},j_{i_{0}},j^{\prime}_{i_{0}})|\prod_{i\neq i_{0},i_ {1}}l_{ij_{i}}\) _for some numbers_ \(1\leq j_{i}\leq n_{i}\) _as well as_ \(1\leq j^{\prime}_{i_{0}}\leq n_{i_{0}}\)_._
Proof.: Lemma 4.3 implies that there is at most one \(i=0,\ldots,r\) with \(A\cap\mathcal{F}_{i}=\emptyset\). Since \(A\cap\mathcal{F}^{\prime}=\emptyset\) and \(|A|=r+1\), the conditions on \(|A\cap\mathcal{F}_{i}|\) follow. The expressions for \(\det(P_{A})\) then come from cofactor expansion.
**Lemma 4.6**.: _Let \(A\subseteq\mathcal{F}\) with \(|A|=r+1\) such that \(\det(P_{A})\neq 0\). Assume that \(A\cap\mathcal{F}^{\prime}\neq\emptyset\) holds. Then there exists an \(i_{1}=0,\ldots,r\) such that_
\[|A\cap\mathcal{F}_{i}|=\begin{cases}0,&i=i_{1}\\ 1,&\text{otherwise}\end{cases}\]
_and we have \(|\det(P_{A})|=\prod_{i\neq i_{1}}l_{ij_{i}}\)._
Proof.: Note that \(|A\cap\mathcal{F}^{\prime}|=2\) cannot hold, since this would imply \(\det(P_{A})=0\). Hence \(|A\cap\mathcal{F}^{\prime}|=1\). Lemma 4.3 forces the condition on \(|A\cap\mathcal{F}_{i}|\). Cofactor expansion gives the expression for \(\det(P_{A})\).
**Lemma 4.7**.:
1. _Let_ \(i=0,\ldots,r\) _and_ \(1\leq j,j^{\prime}\leq n_{i}\)_. There exist integers_ \(\alpha,\beta\in\mathbb{Z}\) _such that_ \[\nu(i,j,j^{\prime})\ =\ \alpha\hat{\nu}(i,j)-\beta\hat{\nu}(i,j^{\prime}).\]
2. _Let_ \(i_{0}=0,\ldots,r\) _and_ \(j_{i}=1,\ldots,n_{i}\) _for all_ \(i\)_. There exist integers_ \(\beta_{i}\) _such that_ \[\mu(j_{0},\ldots,j_{r})\ =\ \beta_{i_{0}}\mu(j_{0},\ldots,j_{i_{0}-1},n_{i_{0}},j_{i_{0}+1},\ldots,j_{r})+\sum_{i_{1}\neq i_{0}}\beta_{i_{1}}\hat{\nu}(i_{0},j_{i_{0}})\prod_{i\notin\{i_{0},i_{1}\}}l_{ij_{i}}.\]
Proof.: We show (i). Since \(\gcd(l_{in_{i}},d_{in_{i}})=1\), we find integers \(x,y\in\mathbb{Z}\) such that \(xl_{in_{i}}+yd_{in_{i}}=1\). Set \(\alpha:=xl_{ij}+yd_{ij}\) and \(\beta:=xl_{ij^{\prime}}+yd_{ij^{\prime}}\). Then the rows of the \(3\times 3\) matrix
\[\begin{bmatrix}l_{ij}&l_{ij^{\prime}}&l_{in_{i}}\\ d_{ij}&d_{ij^{\prime}}&d_{in_{i}}\\ \alpha&\beta&1\end{bmatrix}\]
are linearly dependent, hence its determinant vanishes. Cofactor expansion by the last row gives the desired equality. For (ii), we consider exemplarily the case \(i_{0}=0\). Since \(\gcd(l_{0n_{0}},d_{0n_{0}})=1\), we find \(x,y\in\mathbb{Z}\) such that \(-xl_{0n_{0}}+yd_{0n_{0}}=1\). Now set
\[\beta_{0}:=-xl_{0j_{0}}+yd_{0j_{0}},\qquad\qquad\beta_{1}:=xl_{1j_{1}}+yd_{1j_{ 1}},\]
as well as \(\beta_{i}:=yd_{ij_{i}}\) for \(i=2,\ldots,r\). Then the rows of the \((r+2)\times(r+2)\) matrix
\[\begin{bmatrix}-l_{0j_{0}}&-l_{0n_{0}}&l_{1j_{1}}&0&\ldots&0\\ -l_{0j_{0}}&-l_{0n_{0}}&0&l_{2j_{2}}&\ldots&0\\ \vdots&\vdots&\vdots&&\ddots&\vdots\\ -l_{0j_{0}}&-l_{0n_{0}}&0&\ldots&\ldots&l_{rj_{r}}\\ d_{0j_{0}}&d_{0n_{0}}&d_{1j_{1}}&d_{2j_{2}}&\ldots&d_{rj_{r}}\\ \beta_{0}&1&\beta_{1}&\beta_{2}&\ldots&\beta_{r}\end{bmatrix}\]
are linearly dependent, hence its determinant vanishes. Cofactor expansion by the last row and adjusting the signs of the \(\beta_{i}\) gives the desired equality.
**Definition 4.8**.: According to the type of \(P\), we define the set
\[M^{\prime}(P):=\begin{cases}\{|\hat{\mu}|\}\cup\left\{|\hat{\nu}(i_{0},j_{i_{0 }})|\prod_{i\neq i_{0},i_{1}}l_{ij_{i}}\ ;\begin{array}{l}0\leq i_{0},i_{1}\leq r,\text{ with }i_{0}\neq i_{1},\\ 1\leq j_{i}\leq n_{i}\text{ for all }i\neq i_{1}\end{array}\right\},\quad \text{(ee)}\\ \left\{\prod_{i\neq i_{1}}l_{ij_{i}}\ ;\ 0\leq i_{1}\leq r,\ 1\leq j_{i}\leq n _{i}\text{ for all }i\neq i_{1}\right\},\quad\text{(pe), (ep), (pp).}\end{cases}\]
**Proposition 4.9**.: _We have \(\gcd(M(P))=\gcd(M^{\prime}(P))\)._
Proof.: Consider first the case (ee). Then \(\mathcal{F}^{\prime}=\emptyset\), hence Lemma 4.5 implies \(M^{\prime}(P)\subseteq M(P)\). This shows that \(\gcd(M(P))\) divides \(\gcd(M^{\prime}(P))\). On the other hand, by repeated application of Lemma 4.7, we see that every maximal minor of \(P\) can be written as a \(\mathbb{Z}\)-linear combination of elements of \(M^{\prime}(P)\). This implies that \(\gcd(M^{\prime}(P))\) divides \(\gcd(M(P))\). For cases (pe), (ep) and (pp), Lemma 4.6 implies \(M^{\prime}(P)\subseteq M(P)\), hence \(\gcd(M(P))\) divides \(\gcd(M^{\prime}(P))\). On the other hand, Lemma 4.5 implies that every maximal minor of \(P\) is a \(\mathbb{Z}\)-linear combination of elements of \(M^{\prime}(P)\), hence \(\gcd(M^{\prime}(P))\) divides \(\gcd(M(P))\).
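The quantity \(\gcd(M(P))\) studied in this section is easy to evaluate in concrete examples. The following small SymPy sketch (the helper name `maximal_minor_gcd` is ours) computes the gcd of the maximal minors of an integral matrix; by the argument used later in the proof of Theorem 1.1, this number equals the order of the torsion part of the cokernel of the dual map.

```python
from functools import reduce
from itertools import combinations
from math import gcd
from sympy import Matrix

def maximal_minor_gcd(P):
    """gcd of the maximal minors of an integral matrix P."""
    rows, cols = P.shape
    k = min(rows, cols)
    minors = (abs(P.extract(list(r), list(c)).det())
              for r in combinations(range(rows), k)
              for c in combinations(range(cols), k))
    return reduce(gcd, (int(m) for m in minors))

# Example: for this 2x3 matrix the maximal 2x2 minors are 4, 2 and 4, so the gcd is 2.
print(maximal_minor_gcd(Matrix([[2, 0, 2], [0, 2, 1]])))
```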
## 5. The construction of \(\hat{P}\)
In this section, we apply Construction 2.2 to the ambient toric variety of a rational projective \(\mathbb{K}^{*}\)-surface. In particular, we will give an explicit description of the map \(\hat{P}\).
**Construction 5.1**.: Let \(\Sigma\) be the fan of an ambient toric variety of a \(\mathbb{K}^{*}\)-surface, as defined in Construction 3.2. Consider the lattices \(F_{\sigma}:=\mathbb{Z}^{n_{\sigma}}\) and \(N_{\sigma}:=\operatorname{lin}_{\mathbb{Q}}(\sigma)\cap N\) from Construction 2.1. We have \(n_{\sigma^{+}}=n_{\sigma^{-}}=r+1\) and \(n_{\tau_{ij}}=2\). Moreover, we have \(N_{\sigma^{+}}=N_{\sigma^{-}}=N=\mathbb{Z}^{r+1}\) and \(N_{\tau_{ij}}=\mathbb{Z}\,e_{i}+\mathbb{Z}\,u\subseteq N\). We will work with the identifications
\[F_{\sigma^{+}} \cong \mathbb{Z}\,f_{01}^{+}\oplus\cdots\oplus\mathbb{Z}\,f_{r1}^{+}, N_{\sigma^{+}} \cong \mathbb{Z}\,e_{1}^{+}\oplus\cdots\oplus\mathbb{Z}\,e_{r}^{+} \oplus\mathbb{Z}\,u^{+},\] \[F_{\tau_{ij}} \cong \mathbb{Z}\,f_{ij}^{-}\oplus\mathbb{Z}\,f_{ij+1}^{+}, N_{\tau_{ij}} \cong \mathbb{Z}\,e_{ij}\oplus\mathbb{Z}\,u_{ij},\] \[F_{\sigma^{-}} \cong \mathbb{Z}\,f_{0n_{0}}^{-}\oplus\cdots\oplus\mathbb{Z}\,f_{rn_{r} }^{-}, N_{\sigma^{-}} \cong \mathbb{Z}\,e_{1}^{-}\oplus\cdots\oplus\mathbb{Z}\,e_{r}^{-} \oplus\mathbb{Z}\,u^{-}.\]
Then according to the type of \(P\), a lattice basis of \(\mathbf{N}\) is given by
\[\begin{array}{llllll}\mbox{(ee)}&\{e_{1}^{+},\ldots,e_{r}^{+},u^{+}\}&\cup&S& \cup&\{e_{1}^{-},\ldots,e_{r}^{-},u^{-}\},\\ \mbox{(pe)}&\{e_{i0},u_{i0}\;;\;i=0,\ldots,r\}&\cup&S&\cup&\{e_{1}^{-},\ldots,e _{r}^{-},u^{-}\},\\ \mbox{(ep)}&\{e_{1}^{+},\ldots,e_{r}^{+},u^{+}\}&\cup&S&\cup&\{e_{in_{i}},u_{in _{i}}\;;\;i=0,\ldots,r\},\\ \mbox{(pp)}&\{e_{i0},u_{i0}\;;\;i=0,\ldots,r\}&\cup&S&\cup&\{e_{in_{i}},u_{in _{i}}\;;\;i=0,\ldots,r\},\end{array}\]
where \(S:=\{e_{ij},u_{ij}\;;\;i=0,\ldots,r,\;j=1,\ldots,n_{i}-1\}\). A lattice basis of \(\mathbf{F}\) is given by
\[\begin{array}{llll}\mbox{(ee)}&\{f_{ij}^{-}\;;\;i=0,\ldots,r,\;j=1,\ldots, n_{i}\}&\cup&\{f_{ij}^{+}\;;\;i=0,\ldots,r,\;j=1,\ldots,n_{i}\},\\ \mbox{(pe)}&\{f_{ij}^{-}\;;\;i=0,\ldots,r,\;j=0,\ldots,n_{i}\}&\cup&\{f_{ij}^ {+}\;;\;i=0,\ldots,r,\;j=1,\ldots,n_{i}\},\\ \mbox{(ep)}&\{f_{ij}^{-}\;;\;i=0,\ldots,r,\;j=1,\ldots,n_{i}\}&\cup&\{f_{ij}^ {+}\;;\;i=0,\ldots,r,\;j=1,\ldots,n_{i}+1\},\\ \mbox{(pp)}&\{f_{ij}^{-}\;;\;i=0,\ldots,r,\;j=0,\ldots,n_{i}\}&\cup&\{f_{ij}^ {+}\;;\;i=0,\ldots,r,\;j=1,\ldots,n_{i}+1\}.\end{array}\]
In particular, we have \(\mbox{rank}(\mathbf{F})=\mbox{rank}(\mathbf{N})=2n+m(r+1)\). With respect to these bases, the maps \(\alpha\) and \(\beta\) from Construction 2.2 are
\[\begin{array}{llll}\alpha\colon\mathbf{N}\to N,&e_{ij},e_{i}^{+},e_{i}^{-} \mapsto e_{i},&u_{ij},u^{+},u^{-}\mapsto u,\\ \beta\colon\mathbf{F}\to F,&f_{ij}^{+},f_{ij}^{-}\mapsto f_{ij}.\end{array}\]
Setting \(e_{0}^{+}:=-(e_{1}^{+}+\cdots+e_{r}^{+})\) and \(e_{0}^{-}:=-(e_{1}^{-}+\cdots+e_{r}^{-})\), the maps \(P_{\sigma^{+}},P_{\sigma^{-}}\) and \(P_{\tau_{ij}}\) are then given as
\[P_{\sigma^{+}}\colon F_{\sigma^{+}}\to N_{\sigma^{+}}, f_{i1}^{+}\mapsto l_{i1}e_{i}^{+}+d_{i1}u^{+},\] \[P_{\sigma^{-}}\colon F_{\sigma^{-}}\to N_{\sigma^{-}}, f_{in_{i}}^{-}\mapsto l_{in_{i}}e_{i}^{-}+d_{in_{i}}u^{-},\] \[P_{\tau_{ij}}\colon F_{\tau_{ij}}\to N_{\tau_{ij}}, f_{ij}^{-}\mapsto l_{ij}e_{ij}+d_{ij}u_{ij}\] \[f_{ij+1}^{+}\mapsto l_{ij+1}e_{ij}+d_{ij+1}u_{ij}.\]
**Remark 5.2**.: Clearly, the map \(\alpha\) in Construction 5.1 is surjective in all cases. Hence Corollary 2.4 implies that the Picard group of a projective rational \(\mathbb{K}^{*}\)-surface is torsion-free.
**Construction 5.3**.: We continue in the setting of Construction 5.1. We will give explicit descriptions of the maps \(\gamma,\delta\) and \(\hat{P}\) from Construction 2.2. According to the type of \(P\), let us set for all \(i=0,\ldots,r\)
\[n_{i}^{\prime}:=\begin{cases}n_{i}-1,&\mbox{(ee)},\\ n_{i},&\mbox{(pe), (ep), (pp),}\end{cases}\]
\[\hat{\mathcal{N}}_{i}:=\{\hat{e}_{ij},\hat{u}_{ij}\;;\;j=1,\ldots,n_{i}^{\prime}\},\qquad\qquad\hat{\mathcal{F}}_{i}:=\{\hat{f}_{ij}\;;\;j=1,\ldots,n_{i}\}.\]
Now define the sets
\[\tilde{\mathcal{N}}:=\begin{cases}\{\tilde{e}_{1},\ldots,\tilde{e}_{r},\tilde{u}\},&\mbox{(ee)}\\ \emptyset,&\mbox{(pe)}\\ \emptyset,&\mbox{(ep)}\\ \{\tilde{u}_{1},\ldots,\tilde{u}_{r},\tilde{e}\},&\mbox{(pp)}\end{cases}\qquad\qquad\tilde{\mathcal{F}}:=\begin{cases}\emptyset,&\mbox{(ee)}\\ \{\hat{f}_{i}^{-}\;;\;i=1,\ldots,r\},&\mbox{(pe)}\\ \{\hat{f}_{i}^{+}\;;\;i=1,\ldots,r\},&\mbox{(ep)}\\ \{\hat{f}_{i}^{+},\hat{f}_{i}^{-}\;;\;i=1,\ldots,r\},&\mbox{(pp)}\end{cases}\]
\[\hat{\mathcal{N}}:=\hat{\mathcal{N}}_{0}\;\cup\;\ldots\;\cup\;\hat{\mathcal{N}}_{r}\;\cup\;\tilde{\mathcal{N}},\qquad\qquad\hat{\mathcal{F}}:=\hat{\mathcal{F}}_{0}\;\cup\;\ldots\;\cup\;\hat{\mathcal{F}}_{r}\;\cup\;\tilde{\mathcal{F}}.\]
Let \(\hat{N}\) and \(\hat{F}\) be the free lattices over \(\hat{\mathcal{N}}\) and \(\hat{\mathcal{F}}\) respectively. In particular, we have \(\operatorname{rank}(\hat{N})=2n+(m-1)(r+1)\) and \(\operatorname{rank}(\hat{F})=n+mr\). According to the type of \(P\), we define a map \(\gamma\colon\hat{N}\to\mathbf{N}\) as follows:
\[\text{(ee)}\quad\quad\hat{e}_{ij} \mapsto \begin{cases}e_{i}^{+}-e_{i1},&j=1\\ e_{ij-1}-e_{ij},&2\leq j\leq n_{i}-1\end{cases}\] \[\hat{u}_{ij} \mapsto \begin{cases}u^{+}-u_{i1},&j=1\\ u_{ij-1}-u_{ij},&2\leq j\leq n_{i}-1\end{cases}\] \[\tilde{e}_{i} \mapsto e_{i}^{+}-e_{i}^{-}\] \[\tilde{u} \mapsto u^{+}-u^{-},\] \[\text{(pe)}\quad\quad\hat{e}_{ij} \mapsto \begin{cases}e_{ij-1}-e_{ij},&1\leq j\leq n_{i}-1\\ e_{in_{i}-1}-e_{i}^{-},&j=n_{i}\end{cases}\] \[\hat{u}_{ij} \mapsto \begin{cases}u_{ij-1}-u_{ij},&1\leq j\leq n_{i}-1\\ u_{in_{i}-1}-u^{-},&j=n_{i}\end{cases}\] \[\text{(ep)}\quad\quad\hat{e}_{ij} \mapsto \begin{cases}e_{i}^{+}-e_{i1},&j=1,\\ e_{ij-1}-e_{ij},&2\leq j\leq n_{i}\end{cases}\] \[\hat{u}_{ij} \mapsto \begin{cases}u^{+}-u_{i1},&j=1,\\ u_{ij-1}-u_{ij},&2\leq j\leq n_{i}\end{cases}\] \[\text{(pp)}\quad\quad\hat{e}_{ij} \mapsto e_{ij-1}-e_{ij},\] \[\hat{u}_{ij} \mapsto u_{ij-1}-u_{ij},\] \[\tilde{u}_{i} \mapsto u_{in_{i}}-u_{i-1,n_{i-1}}\] \[\tilde{e} \mapsto (e_{00}+e_{10}+\cdots+e_{r0}).\]
Then \(\gamma\) is a kernel of \(\alpha\colon\mathbf{N}\to N\). Next, we define a map \(\delta\colon\hat{F}\to\mathbf{F}\) as follows:
\[\text{(ee)}\quad\quad\hat{f}_{ij} \mapsto f_{ij}^{+}-f_{ij}^{-}\] \[\text{(pe)}\quad\quad\hat{f}_{ij} \mapsto f_{ij}^{+}-f_{ij}^{-}\] \[\hat{f}_{i}^{-} \mapsto f_{i-1,0}^{-}-f_{i,0}^{-}\] \[\text{(ep)}\quad\quad\hat{f}_{ij} \mapsto f_{ij}^{+}-f_{ij}^{-}\] \[\hat{f}_{i}^{+} \mapsto f_{i-1,n_{i-1}+1}^{+}-f_{i,n_{i}+1}^{+}\] \[\text{(pp)}\quad\quad\hat{f}_{ij} \mapsto f_{ij}^{+}-f_{ij}^{-}\] \[\hat{f}_{i}^{-} \mapsto f_{i-1,0}^{-}-f_{i,0}^{-}\] \[\hat{f}_{i}^{+} \mapsto f_{i-1,n_{i-1}+1}^{+}-f_{i,n_{i}+1}^{+}.\]
Then \(\delta\) is a kernel of \(\beta\colon\mathbf{F}\to F\). If \(P\) is of type (ee), set \(\tilde{e}_{0}:=-(\tilde{e}_{1}+\cdots+\tilde{e}_{r})\). Then the induced map \(\hat{P}\colon\hat{F}\to\hat{N}\) is given as follows.
\[\text{(ee)}\qquad\hat{f}_{ij} \mapsto \begin{cases}l_{ij}\hat{e}_{ij}+d_{ij}\hat{u}_{ij},&1\leq j\leq n_{i}-1\\ l_{in_{i}}\left(\tilde{e}_{i}-\sum_{k=1}^{n_{i}-1}\hat{e}_{ik}\right)+d_{in_{i}}\left(\tilde{u}-\sum_{k=1}^{n_{i}-1}\hat{u}_{ik}\right),&j=n_{i},\end{cases}\] \[\text{(pe)}\qquad\hat{f}_{ij} \mapsto l_{ij}\hat{e}_{ij}+d_{ij}\hat{u}_{ij}\] \[\hat{f}_{i}^{-} \mapsto \left(\sum_{k=1}^{n_{i-1}}\hat{u}_{i-1,k}\right)-\left(\sum_{k=1}^{n_{i}}\hat{u}_{ik}\right)\] \[\text{(ep)}\qquad\hat{f}_{ij} \mapsto l_{ij}\hat{e}_{ij}+d_{ij}\hat{u}_{ij}\] \[\hat{f}_{i}^{+} \mapsto \left(\sum_{k=1}^{n_{i-1}}\hat{u}_{i-1,k}\right)-\left(\sum_{k=1}^{n_{i}}\hat{u}_{ik}\right)\] \[\text{(pp)}\qquad\hat{f}_{ij} \mapsto l_{ij}\hat{e}_{ij}+d_{ij}\hat{u}_{ij}\] \[\hat{f}_{i}^{-} \mapsto \left(\sum_{k=1}^{n_{i-1}}\hat{u}_{i-1,k}\right)-\left(\sum_{k=1}^{n_{i}}\hat{u}_{ik}\right)-\tilde{u}_{i}\] \[\hat{f}_{i}^{+} \mapsto \tilde{u}_{i}.\]
Proof.: In all cases, we can verify that \(\alpha\circ\gamma=0\) and \(\beta\circ\delta=0\). Moreover, \(\gamma\) and \(\delta\) are both injective and we have
\[\text{rank}(\hat{N}) =2n+(m-1)(r+1)=\text{rank}(\mathbf{N})-\text{rank}(N),\] \[\text{rank}(\hat{F}) =n+mr=\text{rank}(\mathbf{F})-\text{rank}(F).\]
This shows that \(\gamma\) and \(\delta\) are kernels of \(\alpha\) and \(\beta\) respectively. Furthermore, direct computations verify that \(\gamma\circ\hat{P}=\mathbf{P}\circ\delta\) holds in all cases. This shows that \(\hat{P}\) is the induced map between the kernels.
**Remark 5.4**.: We discuss the matrix representation of \(\hat{P}\) from Construction 5.3 for the case (ee). Let \(\hat{F}_{i}\) and \(\hat{N}_{i}\) be the free lattices over \(\hat{\mathcal{F}}_{i}\) and \(\hat{\mathcal{N}}_{i}\) respectively. Let \(\tilde{N}\) be the free lattice over \(\tilde{\mathcal{N}}\). We define the lattice maps
\[\hat{P}_{i}\colon\hat{F}_{i}\to\hat{N}_{i},\qquad\hat{f}_{ij} \mapsto \begin{cases}l_{ij}\hat{e}_{ij}+d_{ij}\hat{u}_{ij},&1\leq j\leq n_{i}-1\\ -l_{in_{i}}\left(\sum_{k=1}^{n_{i}-1}\hat{e}_{ik}\right)-d_{in_{i}}\left(\sum_{k=1}^{n_{i}-1}\hat{u}_{ik}\right),&j=n_{i},\end{cases}\] \[\tilde{P}_{i}\colon\hat{F}_{i}\to\tilde{N},\qquad\hat{f}_{ij} \mapsto \begin{cases}0,&1\leq j\leq n_{i}-1\\ l_{in_{i}}\tilde{e}_{i}+d_{in_{i}}\tilde{u},&j=n_{i}.\end{cases}\]
Then we have \(\hat{P}(\hat{f}_{ij})=\hat{P}_{i}(\hat{f}_{ij})+\tilde{P}_{i}(\hat{f}_{ij})\). We obtain the matrix representations:
\[\tilde{P}_{0} = \begin{bmatrix}0&\ldots&0&-l_{0n_{0}}\\ 0&\ldots&0&-l_{0n_{0}}\\ \vdots&&\vdots&\vdots\\ 0&\ldots&0&-l_{0n_{0}}\\ 0&\ldots&0&d_{0n_{0}}\end{bmatrix},\qquad\qquad\qquad\tilde{P}_{i} = \begin{bmatrix}0&\ldots&0&0\\ \vdots&&\vdots&\vdots\\ 0&\ldots&0&l_{in_{i}}\\ \vdots&&\vdots&\vdots\\ 0&\ldots&0&0\\ 0&\ldots&0&d_{in_{i}}\end{bmatrix}\quad(i\geq 1),\] \[\hat{P}_{i} = \begin{bmatrix}l_{i1}&0&\ldots&0&-l_{in_{i}}\\ d_{i1}&0&\ldots&0&-d_{in_{i}}\\ 0&l_{i2}&&0&-l_{in_{i}}\\ 0&d_{i2}&&0&-d_{in_{i}}\\ \vdots&&\ddots&&\vdots\\ 0&0&\ldots&l_{in_{i}-1}&-l_{in_{i}}\\ 0&0&\ldots&d_{in_{i}-1}&-d_{in_{i}}\end{bmatrix},\qquad\hat{P} = \begin{bmatrix}\hat{P}_{0}&0&\ldots&0\\ 0&\hat{P}_{1}&\ldots&0\\ \vdots&&\ddots&\\ 0&0&&\hat{P}_{r}\\ \tilde{P}_{0}&\tilde{P}_{1}&\ldots&\tilde{P}_{r}\end{bmatrix}.\]
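For concrete examples, the matrix of \(\hat{P}\) in the case (ee) can be assembled directly from the data \(l_{ij},d_{ij}\) of \(P\). The following Python sketch does this along the block description above; the function name and the 0-indexed input format are our own choices, and the gcd of the maximal minors of the result can then be compared with \(\gcd(M(P))\), as predicted by Proposition 7.1 below.

```python
from sympy import zeros

def hat_P_ee(l, d):
    """Matrix of hat-P for a defining matrix of type (ee).

    l, d are lists of lists: l[i][j], d[i][j] (0-indexed) encode the columns of P.
    Rows are ordered e_{i1}, u_{i1}, ..., e_{i,n_i-1}, u_{i,n_i-1} (hatted) for i = 0..r,
    followed by the tilde rows e_1, ..., e_r, u; columns are f_{i1}, ..., f_{i,n_i}."""
    r = len(l) - 1
    n = [len(row) for row in l]
    rows_hat = sum(2 * (ni - 1) for ni in n)
    M = zeros(rows_hat + r + 1, sum(n))
    roff = coff = 0
    for i in range(r + 1):
        for j in range(n[i] - 1):                       # diagonal block hat-P_i
            M[roff + 2 * j, coff + j] = l[i][j]
            M[roff + 2 * j + 1, coff + j] = d[i][j]
            M[roff + 2 * j, coff + n[i] - 1] = -l[i][-1]
            M[roff + 2 * j + 1, coff + n[i] - 1] = -d[i][-1]
        if i == 0:                                      # bottom block tilde-P_0
            for k in range(r):
                M[rows_hat + k, coff + n[i] - 1] = -l[0][-1]
        else:                                           # bottom block tilde-P_i, i >= 1
            M[rows_hat + i - 1, coff + n[i] - 1] = l[i][-1]
        M[rows_hat + r, coff + n[i] - 1] = d[i][-1]
        roff += 2 * (n[i] - 1)
        coff += n[i]
    return M
```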
## 6. Maximal minors of \(\hat{P}\)
In this section, we work in the notation of Construction 5.3. We will determine the maximal minors of the lattice map \(\hat{P}\).
**Definition 6.1**.: Let a subset \(A\subseteq\hat{\mathcal{N}}\) with \(|A|=|\hat{\mathcal{F}}|\) be given. Then we have a sublattice \(\hat{N}_{A}:=\bigoplus_{x\in A}\mathbb{Z}\cdot x\subseteq\hat{N}\) and an induced map \(\hat{P}_{A}\colon\hat{F}\to\hat{N}_{A}\), obtained by composing \(\hat{P}\) with the projection \(\hat{N}\to\hat{N}_{A}\).
We call \(|\det(\hat{P}_{A})|\in\mathbb{Z}\) the _maximal minor of \(\hat{P}\) associated to \(A\)_. The set of all maximal minors of \(\hat{P}\) is defined as
\[M(\hat{P}):=\{|\det(\hat{P}_{A})|\ ;\ A\subseteq\hat{\mathcal{N}},\ |A|=|\hat{ \mathcal{F}}|\}.\]
**Construction 6.2**.: Let \(A\subseteq\hat{\mathcal{N}}\) with \(|A|=|\hat{\mathcal{F}}|\). We define
\[\hat{\mathcal{N}}_{A}^{\rm sing} :=\{\hat{e}_{ij}\ ;\ \hat{e}_{ij}\in A\ {\rm and}\ \hat{u}_{ij}\notin A\}\ \cup\ \{\hat{u}_{ij}\ ;\ \hat{e}_{ij}\notin A\ {\rm and}\ \hat{u}_{ij}\in A\}\ \subseteq\ A,\] \[\hat{\mathcal{F}}_{A}^{\rm sing} :=\{\hat{f}_{ij}\ ;\ \hat{e}_{ij}\in A\ {\rm and}\ \hat{u}_{ij}\notin A\}\ \cup\ \{\hat{f}_{ij}\ ;\ \hat{e}_{ij}\notin A\ {\rm and}\ \hat{u}_{ij}\in A\}\ \subseteq\ \hat{\mathcal{F}},\] \[\hat{\mathcal{N}}_{A}^{\rm red} :=\ A\setminus\hat{\mathcal{N}}_{A}^{\rm sing},\qquad\qquad \hat{\mathcal{F}}_{A}^{\rm red}\ :=\ \hat{\mathcal{F}}\setminus\hat{\mathcal{F}}_{A}^{\rm sing}.\]
Note that we have \(|\hat{\mathcal{N}}_{A}^{\rm red}|=|\hat{\mathcal{F}}_{A}^{\rm red}|\). Let \(\hat{N}_{A}^{\rm red}\) and \(\hat{F}_{A}^{\rm red}\) be the free lattices over \(\hat{\mathcal{N}}_{A}^{\rm red}\) and \(\hat{\mathcal{F}}_{A}^{\rm red}\) respectively. We obtain an induced map \(\hat{P}_{A}^{\rm red}\) as in the commutative diagram
\[\begin{CD}\hat{F}@>{\hat{P}_{A}}>{}>\hat{N}_{A}\\ @AAA@VVV\\ \hat{F}_{A}^{\rm red}@>{\hat{P}_{A}^{\rm red}}>{}>\hat{N}_{A}^{\rm red}.\end{CD}\]
We call \(|\det(\hat{P}^{\rm red}_{A})|\) the _reduced minor of \(\hat{P}\) associated to \(A\)_. We define the set of reduced minors of \(\hat{P}\) as
\[M^{\rm red}(\hat{P}):=\{|\det(\hat{P}^{\rm red}_{A})|\ ;\ A\subseteq\hat{\mathcal{N}},\ |A|=|\hat{\mathcal{F}}|\}.\]
**Proposition 6.3**.: _Let \(A\subseteq\hat{\mathcal{N}}\) with \(|A|=|\hat{\mathcal{F}}|\). Then we have_
\[\det(\hat{P}_{A})=\det(\hat{P}^{\rm red}_{A})\prod_{f_{ij}\in\hat{\mathcal{F} }^{\rm sing}_{A}}x_{ij},\quad\text{where}\quad x_{ij}:=\begin{cases}l_{ij},& \hat{e}_{ij}\in A,\\ d_{ij},&\hat{u}_{ij}\in A.\end{cases}\]
_Moreover, \(\gcd(M(\hat{P}))=\gcd(M^{\rm red}(\hat{P}))\) holds._
Proof.: If \(\hat{f}_{ij}\in\hat{\mathcal{F}}^{\rm sing}_{A}\), we have
\[\hat{P}_{A}(\hat{f}_{ij})=\begin{cases}l_{ij}\hat{e}_{ij},&\hat{e}_{ij}\in A, \\ d_{ij}\hat{u}_{ij},&\hat{u}_{ij}\in A.\end{cases}\]
In other words, the matrix representation of \(\hat{P}_{A}\) has a column with the single entry \(x_{ij}\) and zeroes elsewhere. Doing cofactor expansion by all these columns amounts to passing from \(\det(\hat{P}_{A})\) to \(\det(\hat{P}^{\rm red}_{A})\). This shows the first claim. The second one then follows from \(\gcd(l_{ij},d_{ij})=1\).
**Definition 6.4**.: Set \(\mathcal{L}:=\{(i,j)\ ;\ i=0,\ldots,r,\ j=1,\ldots,n^{\prime}_{i}\}\). For \(A\subseteq\hat{\mathcal{N}}\) with \(|A|=|\hat{\mathcal{F}}|\), we define
\[L(A)\ :=\ \{(i,j)\ ;\ \hat{e}_{ij}\in A\ \text{and}\ \hat{u}_{ij}\in A\}\ \subseteq\ \mathcal{L}.\]
**Lemma 6.5**.: _Let \(A\subseteq\hat{\mathcal{N}}\) with \(|A|=|\hat{\mathcal{F}}|\)._
1. _If_ \(\hat{e}_{ij}\notin A\) _and_ \(\hat{u}_{ij}\notin A\) _for some_ \(i=0,\ldots,r\) _and_ \(j=1,\ldots,n^{\prime}_{i}\)_, we have_ \(\det(\hat{P}_{A})=0\)_._
2. _If_ \((i,j_{0}),(i,j_{1})\in L(A)\) _for some_ \(i=0,\ldots,r\) _and_ \(1\leq j_{0}<j_{1}\leq n^{\prime}_{i}\)_, we have_ \(\det(\hat{P}_{A})=0\)_._
Proof.: For (i), we have \(\hat{P}_{A}(\hat{f}_{ij})=0\), hence \(\det(\hat{P}_{A})=0\). We show (ii). Consider first the case (ee). Then we have \(n^{\prime}_{i}=n_{i}-1\). Set \(\hat{\mathcal{N}}_{A,i}:=A\cap\hat{\mathcal{N}}_{i}\) and \(\hat{N}_{A,i}:=\hat{N}_{A}\cap\hat{N}_{i}\). By (i), we may assume that \(\hat{e}_{ij}\in\hat{\mathcal{N}}_{A,i}\) or \(\hat{u}_{ij}\in\hat{\mathcal{N}}_{A,i}\) holds for all \(1\leq j\leq n_{i}-1\). Since \((i,j_{0}),(i,j_{1})\in L(A)\), we thus have \(|\hat{\mathcal{N}}_{A,i}|>n_{i}\). Consider the map \(\hat{P}_{A,i}\colon\hat{F}_{i}\to\hat{N}_{A,i}\). In the matrix representation from Remark 5.4, we have
\[\hat{P}_{A}=\begin{bmatrix}*&0&0\\ *&\framebox{$\hat{P}_{A,i}$ $0$}&0\\ *&*&*\end{bmatrix},\]
where the outlined box is a square \(|\hat{\mathcal{N}}_{A,i}|\times|\hat{\mathcal{N}}_{A,i}|\)-matrix. Since the determinant of the outlined box vanishes, also \(\det(\hat{P}_{A})=0\).
Now let \(P\) be of type (pe), (ep) or (pp). Then \(n^{\prime}_{i}=n_{i}\). We define the set
\[\bar{\mathcal{N}}_{A}\ :=\ \{\hat{u}_{ij}\ ;\ (i,j)\in L(A)\}\ \subseteq\ \hat{\mathcal{N}}_{A}^{\rm red}.\]
Writing \(\bar{N}_{A}\) for the free lattice over \(\bar{\mathcal{N}}_{A}\) and \(\bar{F}\) for the free lattice over \(\bar{\mathcal{F}}\), we obtain an induced map \(\bar{P}_{A}\colon\bar{F}\to\bar{N}_{A}\) as in the commutative diagram
Note that if \((i,j)\in L(A)\), we have \((\hat{P}^{\mathrm{red}}_{A})^{*}(\hat{e}^{*}_{ij})=l_{ij}\hat{f}^{*}_{ij}\). That means, \(\hat{P}^{\mathrm{red}}_{A}\) contains a row with a single entry \(l_{ij}\) and zeroes elsewhere. Doing cofactor expansion, we arrive at
\[\det(\hat{P}^{\mathrm{red}}_{A})=\det(\bar{P}_{A})\prod_{(i,j)\in L(A)}l_{ij}.\]
But since \((i,j_{0}),(i,j_{1})\in L(A)\), we have \(\bar{P}^{*}_{A}(\hat{u}^{*}_{ij_{0}})=\bar{P}^{*}_{A}(\hat{u}^{*}_{ij_{1}})\), i.e. \(\bar{P}_{A}\) contains two equal rows. Hence we have \(\det(\hat{P}_{A})=\det(\hat{P}^{\mathrm{red}}_{A})=\det(\bar{P}_{A})=0\).
**Proposition 6.6**.: _Let \(P\) be of type (pe), (ep) or (pp). Let \(A\subseteq\hat{\mathcal{N}}\) with \(|A|=|\hat{\mathcal{F}}|\) such that \(\det(\hat{P}_{A})\neq 0\). Then we have \(|L(A)|=r\). Furthermore, there exists an \(i_{1}=0,\dots,r\) and \(j_{i}=1,\dots,n_{i}\) for all \(i\neq i_{1}\) such that_
\[|\det(\hat{P}^{\mathrm{red}}_{A})|=\prod_{i\neq i_{1}}l_{ij_{i}}.\]
_In particular, we have \(\gcd(M^{\mathrm{red}}(\hat{P}))=\gcd(M^{\prime}(P))\)._
Proof.: By Lemma 6.5 (i), we have \(\hat{e}_{ij}\in A\) or \(\hat{u}_{ij}\in A\) for all \(i\) and \(j\). By Lemma 6.5 (ii), for each \(i\) there is at most one \(j\) with \(\hat{e}_{ij}\in A\) and \(\hat{u}_{ij}\in A\). Writing \(\pi_{1}\colon\mathbb{Z}\times\mathbb{Z}\to\mathbb{Z}\) for the projection onto the first coordinate, this implies that \(|\pi_{1}(L(A))|=|L(A)|\). Together, we obtain
\[|A\cap\hat{\mathcal{N}}_{i}|=\begin{cases}n_{i},&i\notin\pi_{1}(L(A)),\\ n_{i}+1,&i\in\pi_{1}(L(A)).\end{cases}\]
Thus, we have \(|A|=n+|L(A)|+|A\cap\tilde{\mathcal{N}}|\). Recall that for the cases (pe) and (ep), we have \(\tilde{\mathcal{N}}=\emptyset\) and \(|A|=|\hat{\mathcal{F}}|=n+r\), hence \(|L(A)|=r\). For (pp), we must have \(\tilde{u}_{i}\in A\) for all \(i=1,\dots,r\), since otherwise \(\hat{P}_{A}(\hat{f}^{+}_{i})=0\). Hence \(|A\cap\tilde{\mathcal{N}}|=r\). Since in this case, \(|A|=n+2r\), we also arrive at \(|L(A)|=r\). This implies that we have \(L(A)=\{(i,j_{i})\ ;\ i\neq i_{1}\}\) for some \(i_{1}=0,\dots,r\) and \(j_{i}=1,\dots,n_{i}\). Following the proof of Lemma 6.5 (ii), we proceed to do cofactor expansion and see that \(|\det(\bar{P}_{A})|=1\). This proves the claim. The supplement follows directly from the definition of \(M^{\prime}(P)\).
The preceding Proposition settles the discussion of maximal minors of \(\hat{P}\) for the cases (pe), (ep) and (pp). The remainder of this section is devoted to the case (ee).
**Lemma 6.7**.: _Let \(P\) be of type (ee). Let \(A\subseteq\hat{\mathcal{N}}\) with \(|A|=|\hat{\mathcal{F}}|=n\) and \(\det(\hat{P}_{A})\neq 0\). Then we have_
\[|L(A)|=(r+1)-|A\cap\tilde{\mathcal{N}}|.\]
_In particular, \(0\leq|L(A)|\leq r+1\)._
Proof.: Let \(\pi_{1}\colon\,\mathbb{Z}\times\mathbb{Z}\to\mathbb{Z}\) be the projection onto the first coordinate. Lemma 6.5 (ii) implies \(|\pi_{1}(L(A))|=|L(A)|.\) Furthermore, we have
\[|A\cap\hat{\mathcal{N}}_{i}|=\begin{cases}n_{i}-1,&i\notin\pi_{1}(L(A))\\ n_{i},&i\in\pi_{1}(L(A))\end{cases}.\]
Since \(|A|=n\), this implies the claim.
**Definition 6.8**.: Let \(P\) be of type (ee). For \(k=0,\ldots,r+1\), we define
\[M_{k}^{\mathrm{red}}(\hat{P}):=\{|\det(\hat{P}_{A}^{\mathrm{red}})|\ ;\ |L(A)|=k\}.\]
**Remark 6.9**.: Let \(A\subseteq\hat{\mathcal{N}}\) with \(|A|=n\) and \(L(A)=\emptyset\). Lemma 6.7 implies that \(A\cap\tilde{\mathcal{N}}=\tilde{\mathcal{N}}\). We obtain \(\det(\hat{P}_{A}^{\mathrm{red}})=\hat{\mu}\). Hence \(M_{0}^{\mathrm{red}}(\hat{P})=\{|\hat{\mu}|\}\).
**Proposition 6.10**.: _Let \(P\) be of type (ee). Let \(A\subseteq\hat{\mathcal{N}}\) with \(|A|=n\) such that \(\det(\hat{P}_{A}^{\mathrm{red}})\neq 0\). Write \(\pi_{1}\colon\,\mathbb{Z}\times\mathbb{Z}\to\mathbb{Z}\) for the projection onto the first coordinate._
1. _Assume_ \(\tilde{u}\notin A\)_. Then for all_ \(i=1,\ldots,r\)_, we have_ \(\tilde{e}_{i}\in A\) _or_ \(i\in\pi_{1}(L(A))\)_. In this case,_ \[|\det(\hat{P}_{A}^{\mathrm{red}})|=\left(\prod_{(i,j)\in L(A)}|\hat{\nu}(i,j) |\right)\left(\prod_{\begin{subarray}{c}0\leq i\leq r\\ i\notin\pi_{1}(L(A))\end{subarray}}l_{in_{i}}\right).\]
2. _Assume_ \(\tilde{u}\in A\)_. Then there exists at most one_ \(i_{1}=1,\ldots,r\) _with_ \(\tilde{e}_{i_{1}}\notin A\) _and_ \(i_{1}\notin\pi_{1}(L(A))\)_. If there exists such an_ \(i_{1}\)_, we have_ \[|\det(\hat{P}_{A}^{\mathrm{red}})|=\left|d_{i_{1}n_{i_{1}}}\left(\prod_{(i,j) \in L(A)}\hat{\nu}(i,j)\right)\left(\prod_{\begin{subarray}{c}0\leq i\leq r\\ i\notin\pi_{1}(L(A))\cup\{i_{1}\}\end{subarray}}l_{in_{i}}\right)\right|.\] _If there exists no such_ \(i_{1}\)_, we have_ \[|\det(\hat{P}_{A}^{\mathrm{red}})|=\left|\sum_{\begin{subarray}{c}0\leq i^{ \prime}\leq r\\ i^{\prime}\notin\pi_{1}(L(A))\end{subarray}}\pm d_{i^{\prime}n_{i^{\prime}}} \left(\prod_{(i,j)\in L(A)}\hat{\nu}(i,j)\right)\left(\prod_{ \begin{subarray}{c}0\leq i\leq r\\ i\notin\pi_{1}(L(A))\cup\{i^{\prime}\}\end{subarray}}l_{in_{i}}\right)\right|.\]
Proof.: For part (i), assume that there is some \(i=1,\ldots,r\) such that \(\tilde{e}_{i}\notin A\) and \(i\notin\pi_{1}(L(A))\). This implies \(\hat{e}_{ij}\notin\hat{\mathcal{N}}_{A}^{\mathrm{red}}\) and \(\hat{u}_{ij}\notin\hat{\mathcal{N}}_{A}^{\mathrm{red}}\) for all \(j=1,\ldots,n_{i}-1\). Since also \(\tilde{u}\notin A\), we obtain \(\hat{P}_{A}^{\mathrm{red}}(\hat{f}_{in_{i}})=0\). Hence \(\det(\hat{P}_{A}^{\mathrm{red}})=0\), a contradiction. It follows that if \(i\notin\pi_{1}(L(A))\), either \(i=0\) or \(\tilde{e}_{i}\in A\). The formula for \(\det(\hat{P}_{A}^{\mathrm{red}})\) now follows from cofactor expansion.
For part (ii), assume that there exist \(1\leq i_{0}<i_{1}\leq r\) such that \(\tilde{e}_{i_{0}},\tilde{e}_{i_{1}}\notin A\) and \(i_{0},i_{1}\notin\pi_{1}(L(A))\). We obtain \(\hat{P}_{A}^{\mathrm{red}}(\hat{f}_{i_{0}n_{i_{0}}})=d_{i_{0}n_{i_{0}}}\tilde{u}\) and \(\hat{P}_{A}^{\mathrm{red}}(\hat{f}_{i_{1}n_{i_{1}}})=d_{i_{1}n_{i_{1}}}\tilde{u}\). Hence \(\det(\hat{P}_{A}^{\mathrm{red}})=0\), a contradiction. The formulas for \(\det(\hat{P}_{A}^{\mathrm{red}})\) again follow from cofactor expansion.
**Definition 6.11**.: Let \(P\) be of type (ee). Let \(\pi_{1}\colon\,\mathbb{Z}\times\mathbb{Z}\to\mathbb{Z}\) be the projection onto the first coordinate. We call a subset \(L\subseteq\mathcal{L}\)_valid_, if \(|\pi_{1}(L)|=|L|\). For \(k=1,\ldots,r\)
we define
\[M_{k}^{\prime}(\hat{P}):=\left\{\left(\prod_{(i,j)\in L}|\hat{\nu}(i,j)|\right)\left(\prod_{\begin{subarray}{c}0\leq i\leq r\\ i\notin\pi_{1}(L)\cup\{i_{1}\}\end{subarray}}l_{in_{i}}\right)\ ;\ \begin{array}{l}L\subseteq\mathcal{L}\text{ valid},\ |L|=k,\\ 0\leq i_{1}\leq r,\ i_{1}\notin\pi_{1}(L)\end{array}\right\}.\]
**Proposition 6.12**.: _Let \(P\) be of type (ee). We have_
1. \(\gcd(M_{k}^{\mathrm{red}}(\hat{P}))=\gcd(M_{k}^{\prime}(\hat{P}))\) _for all_ \(k=1,\ldots,r\)_,_
2. \(\gcd(M_{r+1}^{\mathrm{red}}(\hat{P})\cup M_{r}^{\prime}(\hat{P}))=\gcd(M_{r}^ {\prime}(\hat{P}))\)_,_
3. \(\gcd\left(\bigcup_{k=1}^{r+1}M_{k}^{\mathrm{red}}(\hat{P})\right)=\gcd\left( \bigcup_{k=1}^{r}M_{k}^{\prime}(\hat{P})\right)\)_._
Proof.: We show (i). Proposition 6.10 implies that every element of \(M_{k}^{\mathrm{red}}(\hat{P})\) is a \(\mathbb{Z}\)-linear combination of elements of \(M_{k}^{\prime}(\hat{P})\). This shows that \(\gcd(M_{k}^{\prime}(\hat{P}))\) divides \(\gcd(M_{k}^{\mathrm{red}}(\hat{P}))\). For the converse, it suffices to show that \(\gcd(M_{k}^{\mathrm{red}}(\hat{P}))\mid x\) holds for all \(x\in M_{k}^{\prime}(\hat{P})\). So, let
\[x=\left(\prod_{(i,j)\in L}\hat{\nu}(i,j)\right)\left(\prod_{\begin{subarray}{c}0\leq i\leq r\\ i\notin\pi_{1}(L)\cup\{i_{1}\}\end{subarray}}l_{in_{i}}\right)\in M_{k}^{\prime}(\hat{P})\]
be arbitrary, where \(L\subseteq\mathcal{L}\) is a valid subset with \(|L|=k\) and \(0\leq i_{1}\leq r\) with \(i_{1}\notin\pi_{1}(L)\).
_Case 1: \(i_{1}\neq 0\) and \(0\in\pi_{1}(L)\)_. Choose a subset \(A\subseteq\hat{\mathcal{N}}\) with \(|A|=n\) such that \(L(A)=L\) and
\[A\cap\tilde{\mathcal{N}}=\{\tilde{e}_{i}\ ;\ 1\leq i\leq r,\ i\notin\pi_{1}(L)\}.\]
Since \(0\in\pi_{1}(L)\), we have \(|A\cap\tilde{\mathcal{N}}|=r-(|L|-1)=(r+1)-|L|\). This means we can choose \(A\) such that \(\det(\hat{P}_{A}^{\mathrm{red}})\neq 0\). Now set
\[A^{\prime}:=(A\backslash\{\tilde{e}_{i_{1}}\})\cup\{\tilde{u}\}.\]
Proposition 6.10 implies that \(\det(\hat{P}_{A}^{\mathrm{red}})=l_{i_{1}n_{i_{1}}}x\) and \(\det(\hat{P}_{A^{\prime}}^{\mathrm{red}})=d_{i_{1}n_{i_{1}}}x\). Hence we have
\[\gcd(M_{k}^{\mathrm{red}}(\hat{P}))\mid\gcd(l_{i_{1}n_{i_{1}}}x,d_{i_{1}n_{i_{ 1}}}x)=x.\]
_Case 2: \(i_{1}\neq 0\) and \(0\notin\pi_{1}(L)\)_. Since \(k\geq 1\), we find some \(i_{0}\in\pi_{1}(L)\). Now choose \(A\subseteq\hat{\mathcal{N}}\) with \(|A|=n\) such that \(L(A)=L\) and
\[A\cap\tilde{\mathcal{N}}=\{\tilde{e}_{i}\ ;\ 1\leq i\leq r,\ i\notin\pi_{1}(L)\} \cup\{\tilde{e}_{i_{0}}\}.\]
Again, we have \(|A\cap\tilde{\mathcal{N}}|=(r+1)-|L|\), hence we can pick \(A\) such that \(\det(\hat{P}_{A}^{\mathrm{red}})\neq 0\). Proceeding in the same way as in Case 1, we arrive at \(\gcd(M_{k}^{\mathrm{red}}(\hat{P}))\mid x\).
_Case 3: \(i_{1}=0\)_. As in Case 2, we can pick some \(i_{0}\in\pi_{1}(L)\) as well as a subset \(A\subseteq\hat{\mathcal{N}}\) with \(|A|=n\) such that \(L(A)=L\) and
\[A\cap\tilde{\mathcal{N}}=\{\tilde{e}_{i}\ ;\ 1\leq i\leq r,\ i\notin\pi_{1}(L)\} \cup\{\tilde{e}_{i_{0}}\}.\]
For all \(1\leq i\leq r\) with \(i\notin\pi_{1}(L)\), set \(A_{i}:=(A\backslash\{\tilde{e}_{i}\})\cup\{\tilde{u}\}.\) Then Proposition 6.10 implies that \(\det(\hat{P}_{A}^{\mathrm{red}})=l_{0n_{0}}x\) and
\[\det(\hat{P}_{A_{i_{0}}}^{\mathrm{red}})=d_{0n_{0}}x+\sum_{\begin{subarray}{c}1\leq i^{\prime}\leq r\\ i^{\prime}\notin\pi_{1}(L)\end{subarray}}\pm\det(\hat{P}_{A_{i^{\prime}}}^{\mathrm{red}}).\]
This implies \(\gcd(M_{k}^{\mathrm{red}}(\hat{P}))\mid\gcd(l_{0n_{0}}x,d_{0n_{0}}x)=x\).
Part (ii) follows from the fact that every element of \(M_{r+1}^{\mathrm{red}}(\hat{P})\) is an integer multiple of an element of \(M_{r}^{\prime}(\hat{P})\). Part (iii) is a consequence of (i) and (ii).
**Definition 6.13**.: Let \(P\) be of type (ee). For \(k=1,\ldots,r\), we define the set
\[M_{k}^{\prime\prime}(\hat{P}):=\left\{\left(\prod_{(i,j)\in L}|\hat{\nu}(i,j)|\right)\left(\prod_{\begin{subarray}{c}0\leq i\leq r\\ i\notin\pi_{1}(L)\cup\{i_{1}\}\end{subarray}}l_{ij_{i}}\right)\ ;\ \begin{array}{l}L\subseteq\mathcal{L}\text{ valid},\ |L|=k,\ 0\leq i_{1}\leq r,\ i_{1}\notin\pi_{1}(L),\\ 1\leq j_{i}\leq n_{i}\text{ for all }i\notin\pi_{1}(L)\cup\{i_{1}\}\end{array}\right\}.\]
**Lemma 6.14**.: _Let \(i=0,\ldots,r\) and \(j=1,\ldots,n_{i}-1\). Then we have_
\[\gcd(l_{in_{i}},\hat{\nu}(i,j))\mid l_{ij}.\]
Proof.: By definition we have \(\hat{\nu}(i,j)=l_{in_{i}}d_{ij}-l_{ij}d_{in_{i}}\). Since \(l_{in_{i}}\) and \(d_{in_{i}}\) are coprime, we find \(x,y\in\mathbb{Z}\) such that \(xl_{in_{i}}+yd_{in_{i}}=1\). Then we have
\[(xl_{ij}+yd_{ij})l_{in_{i}}-y\hat{\nu}(i,j)=l_{ij}(xl_{in_{i}}+yd_{in_{i}})=l_{ij}.\]
This implies the claim.
**Proposition 6.15**.: _Let \(P\) be of type (ee). We have_
1. \(\gcd(M_{k}^{\prime}(\hat{P})\cup M_{k+1}^{\prime\prime}(\hat{P}))=\gcd(M_{k}^ {\prime\prime}(\hat{P}))\) _for all_ \(k=1,\ldots,r-1\)_,_
2. \(\gcd\left(\bigcup_{k=1}^{r}M_{k}^{\prime}(\hat{P})\right)=\gcd(M_{1}^{\prime \prime}(\hat{P}))\)_._
Proof.: We show (i). Since \(M_{k}^{\prime}(\hat{P})\subseteq M_{k}^{\prime\prime}(\hat{P})\) and elements of \(M_{k+1}^{\prime\prime}(\hat{P})\) are \(\mathbb{Z}\)-linear combinations of elements of \(M_{k}^{\prime\prime}(\hat{P})\), we have \(\gcd(M_{k}^{\prime\prime}(\hat{P}))\mid\gcd(M_{k}^{\prime}(\hat{P})\cup M_{k +1}^{\prime\prime}(\hat{P}))\). For the converse, let
\[x=\left(\prod_{(i,j)\in L}\hat{\nu}(i,j)\right)\left(\prod_{\begin{subarray}{ c}0\leq i\leq r\\ i\notin\pi_{1}(\hat{L})\cup\{i^{\prime}\}\end{subarray}}l_{ij_{i}}\right)\in M _{k}^{\prime\prime}(\hat{P}),\]
where \(L\subseteq\mathcal{L}\) is a valid subset with \(|L|=k\) and \(0\leq i^{\prime}\leq r\) with \(i^{\prime}\notin\pi_{1}(L)\) and \(1\leq j_{i}\leq n_{i}\) for all \(i\notin\pi_{1}(L)\cup\{i^{\prime}\}\). Let us write
\[\{i_{1},\ldots,i_{r-k}\}:=\{0,\ldots,r\}\backslash(\pi_{1}(L)\cup\{i^{\prime} \}).\]
We define numbers
\[\begin{array}{lcl}x_{0}&:=&l_{i_{1}n_{i_{1}}}l_{i_{2}n_{i_{2}}}\ldots l_{i_{ r-k}n_{i_{r-k}}}\\ x_{1}&:=&l_{i_{1}j_{i_{1}}}l_{i_{2}n_{i_{2}}}\ldots l_{i_{r-k}n_{i_{r-k}}}\\ &&\vdots&\\ x_{r-k}&:=&l_{i_{1}j_{i_{1}}}l_{i_{2}j_{i_{2}}}\ldots l_{i_{r-k}j_{i_{r-k}}} \end{array}\]
as well as
\[\begin{array}{lcl}y_{1}&:=&\hat{\nu}(i_{1},j_{i_{1}})l_{i_{2}n_{i_{2}}}l_{i_{3 }n_{i_{3}}}\ldots l_{i_{r-k}n_{i_{r-k}}}\\ y_{2}&:=&l_{i_{1}j_{i_{1}}}\hat{\nu}(i_{2},j_{i_{2}})l_{i_{3}n_{i_{3}}}\ldots l _{i_{r-k}n_{i_{r-k}}}\\ &&\vdots&\\ y_{r-k}&:=&l_{i_{1}j_{i_{1}}}\ldots l_{i_{r-k-1}j_{i_{r-k-1}}}\hat{\nu}(i_{r-k},j_{i_{r-k}}).\end{array}\]
Then Lemma 6.14 implies \(\gcd(x_{m-1},y_{m})\mid x_{m}\) for all \(m=1,\ldots,r-k\). In particular, we obtain \(\gcd(x_{0},y_{1},\ldots,y_{r-k})\mid x_{r-k}\). Now set \(c:=\prod_{(i,j)\in L}\hat{\nu}(i,j)\). Then we have \(cx_{r-k}=x\) as well as \(x_{0}c\in M_{k}^{\prime}(\hat{P})\) and \(y_{m}c\in M_{k+1}^{\prime\prime}(\hat{P})\) for all \(m=1,\ldots r-k\). Together, we have
\[\gcd(M_{k}^{\prime}(\hat{P})\cup M_{k+1}^{\prime\prime}(\hat{P}))\mid\gcd(x_{0 }c,y_{1}c,\ldots,y_{r-k}c)\mid x_{r-k}c=x.\]
Part (ii) follows from repeated application of (i), together with the fact that \(M_{r}^{\prime}(\hat{P})=M_{r}^{\prime\prime}(\hat{P})\).
## 7. Proof of Theorem 1.1 and examples
In this section, we prove the formula for the Picard index of a \(\mathbb{K}^{*}\)-surface given in Theorem 1.1. We then give two examples where the formula fails: The first one is a toric threefold, the second one is the \(D_{8}\)-singular log del Pezzo surface of Picard number one.
**Proposition 7.1**.: _Let \(P\) be a defining matrix and \(\hat{P}\) be as in Construction 5.3. Write \(M(P)\) and \(M(\hat{P})\) for the set of maximal minors of \(P\) and \(\hat{P}\) respectively. Then we have_
\[\gcd(M(\hat{P}))=\gcd(M(P)).\]
Proof.: By Construction 6.2, we have \(\gcd(M(\hat{P}))=\gcd(M^{\mathrm{red}}(\hat{P}))\). On the other hand, Proposition 4.9 says that \(\gcd(M(P))=\gcd(M^{\prime}(P))\). In the cases (pe), (ep) and (pp), Proposition 6.6 gives the result. In the case (ee), combining Remark 6.9, Proposition 6.12 (iii) and Proposition 6.15 (ii), we get
\[\gcd(M^{\mathrm{red}}(\hat{P})) =\gcd\left(\{|\hat{\mu}|\}\cup\bigcup_{k=1}^{r+1}M_{k}^{\mathrm{ red}}(\hat{P})\right)\] \[=\gcd\left(\{|\hat{\mu}|\}\cup\bigcup_{k=1}^{r}M_{k}^{\prime}( \hat{P})\right)\] \[=\gcd\left(\{|\hat{\mu}|\}\cup M_{1}^{\prime\prime}(\hat{P}) \right).\]
By definition, we have \(\{|\hat{\mu}|\}\cup M_{1}^{\prime\prime}(\hat{P})=M^{\prime}(P)\), hence we arrive at the claim.
Proof of Theorem 1.1.: By Theorem 3.5, we can assume \(X=X(P)\subseteq Z\) as in Construction 3.3. Note that \(|\operatorname{Cl}(X,x)|\neq 1\) only holds for fixed points of the \(\mathbb{K}^{*}\)-action, which we denote by \(X^{\mathrm{fix}}\). By Remark 3.4 and Proposition 3.6 (ii), we get
\[\prod_{x\in X}|\operatorname{Cl}(X,x)|=\prod_{x\in X^{\mathrm{fix}}}| \operatorname{Cl}(X,x)|=\prod_{\sigma\in\Sigma^{\mathrm{max}}}|\operatorname{ Cl}(Z,z_{\sigma})|.\]
Combining Proposition 3.6 (iii) and Proposition 2.7, we get
\[\iota_{\mathrm{Pic}}(X)=\iota_{\mathrm{Pic}}(Z)=\frac{1}{|\hat{K}|}\prod_{ \sigma\in\Sigma^{\mathrm{max}}}|\operatorname{Cl}(Z,z_{\sigma})|,\]
By Proposition 3.6 (i), it remains to show that \(|\hat{K}|=|\operatorname{Cl}(Z)^{\mathrm{tors}}|\). Since \(\operatorname{Cl}(Z)\) is isomorphic to the cokernel of \(P^{*}\), we have
\[|\operatorname{Cl}(Z)^{\mathrm{tors}}|=\gcd(M(P^{*}))=\gcd(M(P)),\]
where \(M(A)\) denotes the set of maximal minors of a matrix \(A\). On the other hand, Proposition 2.3 says that \(\hat{K}\) is the cokernel of \(\hat{P}^{*}\). This implies
\[|\hat{K}|=\gcd(M(\hat{P}^{*}))=\gcd(M(\hat{P})).\]
Proposition 7.1 now implies the claim.
**Corollary 7.2**.: _Let \(Z=Z_{\Sigma}\) be a projective toric surface. Then_
\[\iota_{\operatorname{Pic}}(Z)=\frac{1}{|\operatorname{Cl}(Z)^{\operatorname{ tors}}|}\prod_{\sigma\in\Sigma^{\operatorname{max}}}|\operatorname{Cl}(Z,z_{ \sigma})|.\]
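For a concrete complete toric surface, Corollary 7.2 can be evaluated directly from the ray generators: here \(|\operatorname{Cl}(Z,z_{\sigma})|=|\det(v_{i},v_{i+1})|\) for consecutive rays and \(|\operatorname{Cl}(Z)^{\mathrm{tors}}|\) is the gcd of the \(2\times 2\) minors of the generator matrix, as in the proof of Theorem 1.1. A small sketch (names are ours; the rays are assumed to be listed in counterclockwise order, so that the maximal cones are spanned by consecutive pairs):

```python
from functools import reduce
from itertools import combinations
from math import gcd

def toric_surface_picard_index(rays):
    """Picard index of a complete toric surface with the given ordered ray generators."""
    k = len(rays)
    det2 = lambda u, v: u[0] * v[1] - u[1] * v[0]
    local = [abs(det2(rays[i], rays[(i + 1) % k])) for i in range(k)]         # |Cl(Z, z_sigma)|
    tors = reduce(gcd, (abs(det2(u, v)) for u, v in combinations(rays, 2)))   # |Cl(Z)^tors|
    return reduce(lambda a, b: a * b, local, 1) // tors

# Example: the weighted projective plane P(1,1,2) has rays (1,0), (0,1), (-1,-2)
# and Picard index 2.
print(toric_surface_picard_index([(1, 0), (0, 1), (-1, -2)]))
```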
**Remark 7.3**.: Let \(Z=\mathbb{P}(w_{0},w_{1},w_{2})\) be a weighted projective plane. We can assume the weights \(w_{i}\) to be pairwise coprime. The divisor class group of a weighted projective space is torsion-free and the orders of the local class groups at the toric fixed points are equal to the weights \(w_{i}\). By Corollary 7.2, we obtain
\[\iota_{\operatorname{Pic}}(Z)=w_{0}w_{1}w_{2}.\]
In Proposition 8.1, we will generalize this formula to fake weighted projective planes. For weighted projective planes, there is also a direct way to compute the Picard index: the subgroup of divisor classes that are principal on the \(i\)-th standard affine chart \(U_{i}=Z\backslash V(x_{i})\) is generated by \(w_{i}=[V(x_{i})]\in\operatorname{Cl}(Z)\cong\mathbb{Z}\). For the Picard group, this means
\[\operatorname{Pic}(Z)\ =\ \mathbb{Z}\,w_{0}\cap\mathbb{Z}\,w_{1}\cap\mathbb{Z}\,w_ {2}\ =\ \mathbb{Z}\operatorname{lcm}(w_{0},w_{1},w_{2})\ \subseteq\ \mathbb{Z}\,\cong\, \operatorname{Cl}(Z).\]
Since the weights are pairwise coprime, we have \(\operatorname{lcm}(w_{0},w_{1},w_{2})=w_{0}w_{1}w_{2}\).
The following example shows that Corollary 7.2 does not hold for higher dimensional toric varieties.
**Example 7.4**.: Consider the three-dimensional weighted projective space \(Z=\mathbb{P}(2,2,3,5)\). Note that the weights are well-formed, i.e. any three weights have no common factor. The Picard group is given by
\[\operatorname{Pic}(Z)\ =\ 2\,\mathbb{Z}\,\cap\,2\,\mathbb{Z}\,\cap\,3\,\mathbb{Z} \,\cap\,5\,\mathbb{Z}\ =\ 30\,\mathbb{Z}\,\subseteq\ \mathbb{Z}\,\cong\, \operatorname{Cl}(Z).\]
Hence we have \(\iota_{\operatorname{Pic}}(Z)=30\). On the other hand, the product of the orders of the local class groups is \(60\).
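The two numbers in this example are quickly checked by hand or with a throwaway computation; a minimal sketch:

```python
from functools import reduce
from math import gcd

weights = [2, 2, 3, 5]
picard_index = reduce(lambda a, b: a * b // gcd(a, b), weights)   # lcm of the weights: 30
local_product = reduce(lambda a, b: a * b, weights)               # product of local orders: 60
print(picard_index, local_product)
```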
We now consider the \(D_{8}\)-singular log del Pezzo surface of Picard number one, which does not admit a \(\mathbb{K}^{*}\)-action. Using the description of its Cox Ring [13, Theorem 4.1], we construct the surface via its canonical ambient toric variety, see also [2, Sections 3.2 and 3.3].
**Example 7.5**.: Consider the integral matrix
\[P\ :=\ \begin{bmatrix}v_{1}&v_{2}&v_{3}&v_{4}\end{bmatrix}\ :=\ \begin{bmatrix}1&0&1&-3\\ 0&1&1&-2\\ 0&0&2&-2\end{bmatrix}.\]
Let \(Z=Z_{\Sigma}\) be the toric variety whose fan \(\Sigma\) has the following maximal cones:
\[\sigma_{12}\ :=\ \operatorname{cone}(v_{1},v_{2}), \sigma_{23}\ :=\ \operatorname{cone}(v_{2},v_{3}), \sigma_{24}\ :=\ \operatorname{cone}(v_{2},v_{4}),\] \[\sigma_{134}\ :=\ \operatorname{cone}(v_{1},v_{3},v_{4}).\]
Let \(p\colon\hat{Z}\to Z\) be Cox's quotient presentation of \(Z\), where \(\hat{Z}\subseteq\bar{Z}:=\mathbb{K}^{4}\). Consider the polynomial
\[f:=T_{1}^{2}-T_{2}T_{3}T_{4}^{2}+T_{3}^{4}+T_{4}^{4}.\]
We obtain a commutative diagram
\[\begin{array}{ccccccc}V(f)&=:&\bar{X}&\subseteq&\bar{Z}&=&\mathbb{K}^{4}\\ &&\cup&&\cup&&\\ \bar{X}\cap\hat{Z}&=:&\hat{X}&\subseteq&\hat{Z}&&\\ &&\downarrow p&&\downarrow p&&\\ &&X&\subseteq&Z&&\end{array}\]
This shows that the formula from Theorem 1.1 does not hold for \(X\).
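The toric side of this example is easy to recompute: by the proof of Theorem 1.1, \(|\operatorname{Cl}(Z)^{\mathrm{tors}}|\) equals the gcd of the maximal minors of \(P\). A short sketch (the comparison for \(X\) itself requires the local class groups of \(X\), which are not computed in this snippet):

```python
from functools import reduce
from itertools import combinations
from math import gcd
from sympy import Matrix

P = Matrix([[1, 0, 1, -3],
            [0, 1, 1, -2],
            [0, 0, 2, -2]])
# maximal minors of P and their gcd = |Cl(Z)^tors|
minors = [abs(P.extract([0, 1, 2], list(c)).det()) for c in combinations(range(4), 3)]
print(minors, reduce(gcd, map(int, minors)))
```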
## 8. Log del Pezzo \(\mathbb{K}^{*}\)-surfaces of Picard number one
We contribute to the classification of log del Pezzo \(\mathbb{K}^{*}\)-surfaces of Picard number one, which in the toric case are _fake weighted projective planes_. As a special case of fake weighted projective spaces, these have been studied by several authors [8, 15]. Our classification relies on Theorem 1.1 to produce bounds for the number of log del Pezzo \(\mathbb{K}^{*}\)-surfaces with fixed Picard index. For a related classification of fake weighted projective spaces by their Gorenstein index, we refer to [3].
We recall some facts about fake weighted projective planes, see also [8, Section 2]. Consider \(K:=\mathbb{Z}\times\mathbb{Z}\,/n\,\mathbb{Z}\), where \(n\in\mathbb{Z}_{\geq 1}\). Let \(\omega_{0},\omega_{1},\omega_{2}\in K\) be given, where \(\omega_{i}=(w_{i},\eta_{i})\) and the \(w_{i}\) are positive and pairwise coprime. Write \(C(n)\subset\mathbb{K}^{*}\) for the set of \(n\)-th roots of unity. The quasitorus \(H:=\mathbb{K}^{*}\times C(n)\) acts on \(\mathbb{K}^{3}\) by
\[H\times\mathbb{K}^{3}\to\mathbb{K}^{3},\qquad(s,\zeta)\cdot z:=(s^{w_{0}} \zeta^{\eta_{0}}z_{0},s^{w_{1}}\zeta^{\eta_{1}}z_{1},s^{w_{2}}\zeta^{\eta_{2} }z_{2}).\]
The _fake weighted projective plane_ associated to \(\omega_{0},\omega_{1},\omega_{2}\) is the surface
\[\mathbb{P}(\omega_{0},\omega_{1},\omega_{2}):=(\mathbb{K}^{3}\setminus\{0\})/H.\]
Given \(z=(z_{0},z_{1},z_{2})\in\mathbb{K}^{3}\setminus\{0\}\), we call \([z_{0},z_{1},z_{2}]:=H\cdot z\in\mathbb{P}(\omega_{0},\omega_{1},\omega_{2})\) the representation in homogeneous coordinates. Fake weighted projective planes are toric surfaces and their toric fixed points are
\[\mathbf{z}(0):=[1,0,0],\qquad\mathbf{z}(1):=[0,1,0],\qquad\mathbf{z}(2):=[0,0,1].\]
**Proposition 8.1**.: _Let \(Z=\mathbb{P}(\omega_{0},\omega_{1},\omega_{2})\) be a fake weighted projective plane. Let \(P=\begin{bmatrix}v_{0}&v_{1}&v_{2}\end{bmatrix}\) be the generator matrix of the defining fan of \(Z\). Then the orders of the local class groups \(\mu_{i}:=|\operatorname{Cl}(Z,\mathbf{z}(i))|\) are given by_
\[\mu_{0}=nw_{0}=\det(v_{1},v_{2}),\qquad\mu_{1}=nw_{1}=\det(v_{2},v_{0}),\qquad \mu_{2}=nw_{2}=\det(v_{0},v_{1}).\]
_Moreover, the Picard index is_
\[\iota_{\operatorname{Pic}}(Z)=n^{2}w_{0}w_{1}w_{2}.\]
Proof.: We have \(\operatorname{Cl}(Z,\mathbf{z}(i))=K/\,\mathbb{Z}\,\omega_{i}\), hence \(\mu_{i}=nw_{i}\). By [2, Lemma 2.1.4.1], the \(\mu_{i}\) are expressed via minors of the generator matrix. The formula for the Picard index follows from Corollary 7.2.
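Conversely, the data \((n,w_{0},w_{1},w_{2})\) and the Picard index can be read off from a generator matrix: since the \(w_{i}\) are pairwise coprime, the gcd of the three minors \(nw_{i}\) equals \(n\). A small sketch (the helper name is ours):

```python
from math import gcd
from sympy import Matrix

def fwpp_data(P):
    """(n, weights, Picard index) of the fake weighted projective plane with generator matrix P."""
    v = [P[:, i] for i in range(3)]
    mu = [int(abs(Matrix.hstack(v[(i + 1) % 3], v[(i + 2) % 3]).det())) for i in range(3)]
    n = gcd(gcd(mu[0], mu[1]), mu[2])
    w = [m // n for m in mu]
    return n, w, n * n * w[0] * w[1] * w[2]

# Example: the generator matrix of the projective plane gives n = 1, w = [1, 1, 1], index 1.
print(fwpp_data(Matrix([[1, 0, -1], [0, 1, -1]])))
```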
**Construction 8.2**.: Consider an integral \(2\times 3\) matrix
\[P=\begin{bmatrix}v_{0}&v_{1}&v_{2}\end{bmatrix},\]
where the columns are primitive vectors generating \(\mathbb{Q}^{2}\) as a convex cone. Let \(K:=\mathbb{Z}^{3}\,/P^{*}(\mathbb{Z}^{2})\) and consider the map \(Q\colon\,\mathbb{Z}^{3}\to K\). Then we have \(K\cong\mathbb{Z}\times\mathbb{Z}\,/n\,\mathbb{Z}\) for some \(n\in\mathbb{Z}_{\geq 1}\). Setting \(\omega_{i}:=Q(e_{i})\), we obtain a fake weighted projective plane
\[Z(P):=\mathbb{P}(\omega_{0},\omega_{1},\omega_{2}).\]
**Remark 8.3**.: Let \(Z(P)\) be as in Construction 8.2. Then \(P\) is the generator matrix of the defining fan of \(Z(P)\). Moreover, we have \(Z(P)\cong Z(P^{\prime})\) if and only if \(P^{\prime}=A\cdot P\cdot S\) holds with a unimodular matrix \(A\) and a permutation matrix \(S\).
**Proposition 8.4**.: _Let \(Z=\mathbb{P}(\omega_{0},\omega_{1},\omega_{2})\) be a fake weighted projective plane, where \(\omega_{i}\in\mathbb{Z}\times\mathbb{Z}\left/n\mathbb{Z}\right.\). Then there exists an integer \(0\leq x<nw_{2}\) with \(\gcd(x,nw_{2})=1\) such that \(Z\cong Z(P)\), where_
\[P=\begin{bmatrix}1&x&-\frac{w_{0}+xw_{1}}{w_{2}}\\ 0&nw_{2}&-nw_{1}\end{bmatrix}.\]
Proof.: Let \(P=\begin{bmatrix}v_{0}&v_{1}&v_{2}\end{bmatrix}\) be the generator matrix of \(Z\), where \(v_{i}=(x_{i},y_{i})\in\mathbb{Z}^{2}\) are primitive vectors. Applying a unimodular matrix from the left achieves \(v_{0}=(1,0)\) and \(0\leq x_{1}<y_{1}\). By Proposition 8.1, we have
\[nw_{2}=\det(v_{0},v_{1})=y_{1},\qquad\qquad-nw_{1}=\det(v_{0},v_{2})=y_{2}.\]
\[nw_{0}=\det(v_{1},v_{2})=-nw_{1}x_{1}-nw_{2}x_{2}.\]
This shows the claim.
**Algorithm 8.5**.: _Input:_ A positive integer \(\iota\), the prospective Picard Index. _Algorithm:_
* Set \(L:=\emptyset\).
* For each quadruple \((n,w_{0},w_{1},w_{2})\) of positive integers with \(n^{2}w_{0}w_{1}w_{2}=\iota\) such that \((w_{0},w_{1},w_{2})\) are pairwise coprime, do:
* For each \(0\leq x<nw_{2}\) with \(\gcd(x,nw_{2})=1\), do:
* Set \[P:=\begin{bmatrix}1&x&-\frac{w_{0}+xw_{1}}{w_{2}}\\ 0&nw_{2}&-nw_{1}\end{bmatrix}.\]
* If for all permutation matrices \(S\), the Hermite normal form of \(P\cdot S\) differs from the Hermite normal form of all \(P^{\prime}\in L\), add \(P\) to \(L\).
* end do.
* end do.
_Output:_ The set \(L\). Then every fake weighted projective plane \(Z\) with \(\iota_{\mathrm{Pic}}(Z)=\iota\) is isomorphic to precisely one \(Z(P)\) with \(P\in L\).
Proof.: Propositions 8.1 and 8.4 show that the algorithm produces all generator matrices of fake weighted projective planes sharing \(\iota\) as their Picard index. By Remark 8.3, the matrices in \(P\in L\) belong to pairwise non-isomorphic surfaces.
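One possible realization of Algorithm 8.5 in Python is sketched below. All names are ours; instead of comparing Hermite normal forms we use an equivalent normal form under the left \(\mathrm{GL}_{2}(\mathbb{Z})\)-action, computed by hand, and we skip parameters \(x\) for which the prospective third column would not be an integral primitive vector.

```python
from math import gcd, isqrt
from itertools import permutations
from sympy import Matrix, divisors

def egcd(a, b):
    """Extended gcd: returns (g, x, y) with x*a + y*b == g == gcd(a, b) >= 0."""
    old_r, r, old_x, x, old_y, y = a, b, 1, 0, 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return (old_r, old_x, old_y) if old_r >= 0 else (-old_r, -old_x, -old_y)

def left_normal_form(P):
    """Canonical form of an integral 2x3 matrix with primitive columns under P -> A*P, A in GL_2(Z)."""
    a, c = int(P[0, 0]), int(P[1, 0])
    g, x, y = egcd(a, c)
    Q = Matrix([[x, y], [-c // g, a // g]]) * P          # first column becomes (g, 0)
    if Q[1, 1] < 0 or (Q[1, 1] == 0 and Q[1, 2] < 0):
        Q = Matrix([[1, 0], [0, -1]]) * Q                # make the pivot of the second row positive
    jp = 1 if Q[1, 1] != 0 else 2
    b = -(Q[0, jp] // Q[1, jp])                          # reduce the first row modulo the second
    return tuple(Matrix([[1, b], [0, 1]]) * Q)

def fake_wpp_with_picard_index(iota):
    """One generator matrix per isomorphy class of fake weighted projective planes with Picard index iota."""
    found = {}
    for n in range(1, isqrt(iota) + 1):
        if iota % (n * n):
            continue
        for w0 in divisors(iota // (n * n)):
            for w1 in divisors(iota // (n * n * w0)):
                w2 = iota // (n * n * w0 * w1)
                if not (gcd(w0, w1) == gcd(w0, w2) == gcd(w1, w2) == 1):
                    continue
                for x in range(n * w2):
                    if gcd(x, n * w2) != 1 or (w0 + x * w1) % w2:
                        continue                          # third column must be integral
                    t = -(w0 + x * w1) // w2
                    if gcd(abs(t), n * w1) != 1:
                        continue                          # third column must be primitive
                    P = Matrix([[1, x, t], [0, n * w2, -n * w1]])
                    key = min(left_normal_form(Matrix.hstack(*[P[:, j] for j in s]))
                              for s in permutations(range(3)))
                    found.setdefault(key, P)
    return list(found.values())

# e.g. len(fake_wpp_with_picard_index(6)) counts the isomorphy classes with Picard index 6.
```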
Using this Algorithm, we arrive at the following classification.
**Theorem 8.6**.: _There are \(15\,086\,426\) isomorphy classes of fake weighted projective planes with Picard index at most \(1\,000\,000\). Of those, \(68\,053\) have Picard index at most \(10\,000\). The number of isomorphy classes for given Picard index develops as follows:_
We turn to non-toric rational projective \(\mathbb{K}^{*}\)-surfaces of Picard number one. These arise from defining matrices as in Construction 3.3, see also [8, Section 4].
**Construction 8.7**.: Let \(X(P)\) be a rational projective \(\mathbb{K}^{*}\)-surface of Picard number one. We view the defining matrix \(P\) as a lattice map \(P\colon F\to N\) as in Construction 4.1. Consider the map
\[Q\colon F^{*}\to K:=F^{*}/\operatorname{im}(P^{*}).\]
Since \(X(P)\) is of Picard number one, we have a splitting
\[\operatorname{Cl}(X)\,\cong\,K\,\cong\,\mathbb{Z}\times\operatorname{Cl}(X)^ {\operatorname{tors}}.\]
Under this splitting, we can view \(Q\) as the matrix
\[\begin{bmatrix}w_{01}&\dots&w_{0n_{0}}&\dots&\dots&w_{r1}&\dots&w_{rn_{r}}&w^{ +}&w^{-}\\ \eta_{01}&\dots&\eta_{0n_{0}}&\dots&\dots&\eta_{r1}&\dots&\eta_{rn_{r}}&\eta^{ +}&\eta^{-}\end{bmatrix},\]
called the _degree matrix_ of \(P\). Here, we have \(w_{ij},w^{+},w^{-}\in\mathbb{Z}\) and \(\eta_{ij},\eta^{+},\eta^{-}\in\operatorname{Cl}(X)^{\operatorname{tors}}\).
**Proposition 8.8**.: _Let \(P=\begin{bmatrix}v_{01}&v_{02}&v_{1}&\dots&v_{r}\end{bmatrix}\) be a defining matrix of type (ee) with \(n_{0}=2\) and \(n_{1}=\dots=n_{r}=1\). Let_
\[Q_{0}=\begin{bmatrix}w_{01}&w_{02}&w_{1}&\dots&w_{r}\end{bmatrix}\]
_be the free part of the degree matrix of \(P\). Then \(X=X(P)\) is of Picard number one. We have two elliptic fixed points \(x^{+}\) and \(x^{-}\) and one hyperbolic fixed point \(x_{01}\). With \(\lambda:=|\operatorname{Cl}(X)^{\operatorname{tors}}|\), the orders of the local class groups are given by_
\[|\operatorname{Cl}(X,x^{+})| =\lambda w_{02}=\det(v_{01},v_{1},\dots,v_{r}),\] \[|\operatorname{Cl}(X,x^{-})| =\lambda w_{01}=\det(v_{02},v_{1},\dots,v_{r}),\] \[|\operatorname{Cl}(X,x_{01})| =\det\begin{bmatrix}-l_{01}&-l_{02}\\ d_{01}&d_{02}\end{bmatrix}=d_{01}l_{02}-d_{02}l_{01}.\]
_With \(M:=|\operatorname{Cl}(X,x_{01})|\), the Picard index is given by_
\[\iota_{\operatorname{Pic}}(X)=\lambda w_{01}w_{02}M.\]
**Proposition 8.9**.: _Let \(P=\begin{bmatrix}v_{0}&\ldots&v_{r}&v^{-}\end{bmatrix}\) be a defining matrix of type (ep) with \(n_{0}=\cdots=n_{r}=1\). Let_
\[Q_{0}=\begin{bmatrix}w_{0}&\ldots&w_{r}&w^{-}\end{bmatrix}\]
_be the free part of the degree matrix. Then \(X=X(P)\) is of Picard number one. We have one elliptic fixed point \(x^{+}\) and parabolic fixed points \(x_{in_{i}}\) for \(i=0,\ldots,r\). With \(\lambda:=|\operatorname{Cl}(X)^{\operatorname{tors}}|\), the orders of the local class groups are_
\[|\operatorname{Cl}(X,x^{+})| =\lambda w^{-}=\det(v_{0},\ldots,v_{r})\] \[|\operatorname{Cl}(X,x_{in_{i}})| =\det\begin{bmatrix}l_{i}&0\\ d_{i}&1\end{bmatrix}=l_{i}\]
_Moreover, the Picard index is given by_
\[\iota_{\operatorname{Pic}}(X)=w^{-}l_{0}\cdots l_{r}.\]
Proof of Propositions 8.8 and 8.9.: In both cases, we have \(n+m=r+2\), hence \(X(P)\) is of Picard number one. For the descriptions of the fixed points, see Remark 3.4. For the orders of the local class groups, we apply [2, Lemma 2.1.4.1] to the toric ambient variety of \(X\) and use Proposition 3.6 (ii). Applying Theorem 1.1 gives the expressions for the Picard index.
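In code, the formula of Proposition 8.8 only needs determinants of column selections of \(P\) together with \(\lambda=|\operatorname{Cl}(X)^{\mathrm{tors}}|=\gcd(M(P))\); a sketch reusing the helper `maximal_minor_gcd` from the snippet after Proposition 4.9 (all names are ours):

```python
from sympy import Matrix

def picard_index_ee(P, l01, d01, l02, d02):
    """Picard index via Proposition 8.8; P = [v_01 v_02 v_1 ... v_r] with r+1 rows."""
    r = P.shape[0] - 1
    rows, rest = list(range(r + 1)), list(range(2, r + 3))
    lam_w02 = abs(P.extract(rows, [0] + rest).det())     # = lambda * w_02
    lam_w01 = abs(P.extract(rows, [1] + rest).det())     # = lambda * w_01
    M = abs(d01 * l02 - d02 * l01)                       # = |Cl(X, x_01)|
    lam = maximal_minor_gcd(P)                           # = |Cl(X)^tors|
    return int(lam_w01 * lam_w02 * M) // int(lam)        # = lambda * w_01 * w_02 * M
```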
**Proposition 8.10** (See [8, Proposition 5.9]).: _Let \(X\) be a non-toric rational projective \(\mathbb{K}^{*}\)-surface of Picard number one. Assume that \(X\) is log terminal. Then we have \(X\operatorname{\cong}X(P)\) for some defining matrix \(P\) as in one of the Propositions 8.8 and 8.9. The possible tuples \((l_{01},l_{02},l_{1},\ldots,l_{r})\) for type (ee) and \((l_{0},\ldots,l_{r})\) for type (ep) are the following:_
\[\begin{array}{llllll}&(1,1,x_{1},x_{2}),\ (1,y,2,2),\ (1,2,y,2),\ (1,z,3,2),\ (1,3,z,2),\ \ldots\end{array}\]
2307.06009 | A Linear Algebraic Framework for Dynamic Scheduling Over Memory-Equipped
Quantum Networks | Quantum Internetworking is a recent field that promises numerous interesting
applications, many of which require the distribution of entanglement between
arbitrary pairs of users. This work deals with the problem of scheduling in an
arbitrary entanglement swapping quantum network - often called first generation
quantum network - in its general topology, multicommodity, loss-aware
formulation. We introduce a linear algebraic framework that exploits quantum
memory through the creation of intermediate entangled links. The framework is
then employed to apply Lyapunov Drift Minimization (a standard technique in
classical network science) to mathematically derive a natural class of
scheduling policies for quantum networks minimizing the square norm of the user
demand backlog. Moreover, an additional class of Max-Weight inspired policies
is proposed and benchmarked, reducing significantly the computation cost at the
price of a slight performance degradation. The policies are compared in terms
of information availability, localization and overall network performance
through an ad-hoc simulator that admits user-provided network topologies and
scheduling policies in order to showcase the potential application of the
provided tools to quantum network design. | Paolo Fittipaldi, Anastasios Giovanidis, Frédéric Grosshans | 2023-07-12T08:41:17Z | http://arxiv.org/abs/2307.06009v2 | # A Linear Algebraic Framework for Dynamic Scheduling Over Memory-Equipped Quantum Networks
###### Abstract
Quantum Internetworking is a recent field that promises numerous interesting applications, many of which require the distribution of entanglement between arbitrary pairs of users. This work deals with the problem of scheduling in an arbitrary entanglement swapping quantum network -- often called first generation quantum network -- in its general topology, multicommodity, loss-aware formulation. We introduce a linear algebraic framework that exploits quantum memory through the creation of intermediate entangled links. The framework is then employed to mathematically derive a natural class of quadratic scheduling policies for quantum networks by applying Lyapunov Drift Minimization, a standard technique in classical network science. Moreover, an additional class of Max-Weight inspired policies is proposed and benchmarked, reducing significantly the computation cost, at the price of a slight performance degradation. The policies are compared in terms of information availability, localization and overall network performance through an ad-hoc simulator that admits user-provided network topologies and scheduling policies in order to showcase the potential application of the provided tools to quantum network design.
Dynamic Scheduling, Optimal scheduling, Integer programming, Lyapunov methods, Queueing analysis, Quantum Communication, Quantum entanglement, Quantum networks, Scheduling, Scheduling algorithms, Teleportation.
## I Introduction
As experimental demonstrations of quantum repeater links and small-scale quantum networks [2][3][4] start to surface, the vision of a future Quantum Internet moves closer to reality [5][6][7][8].
Although the Quantum Internet is still a long-term goal, the road is partially paved by the development of the classical internet, which identified and solved the problems intrinsic to scaling a network up and operating it in a distributed way. These solutions are generally not directly translatable to quantum networks because quantum hardware is radically different, creating the need for a new branch of network science with its own set of specialized tools. The present work aims to describe a novel framework to formulate and solve the problem of scheduling entanglement swapping operations in quantum networks, and to showcase its potential through some application examples.
In classical networks, communication is achieved by making information packets hop through a series of network nodes until they reach their destination. Whenever several packets from different users need to pass through the same node, the node needs to have a specific discipline that regulates the order in which the packets are relayed. Depending on the application, the network might want to minimize all wait times, prioritize the packets that have certain properties or use more sophisticated specialized algorithms to determine the order of passage. The set of rules that a node applies to solve this problem is called a scheduling policy, and it is an integral part of every well-functioning network architecture [9].
Switching to quantum networks, the concept of a packet traveling from a source to a destination no longer applies. The cornerstone of a large and varied set of communication applications [10][11] in the quantum domain is quantum
entanglement, and the ultimate task of a quantum network system is to distribute entanglement to arbitrary sets of users. Due to the difficulties that come with distributing entanglement over a long link, the task is achieved in practice through entanglement swapping operations at intermediate nodes [12] that may serve several distinguished pairs of end users. The challenge of scheduling in quantum networks revolves therefore around entanglement swapping operations, which must be scheduled by the nodes following what will be addressed in the following as a quantum scheduling policy.
Despite there being several solutions that yield an extensive choice of well-established policies for classical networks, the scheduling problem remains an active challenge for quantum networks: pioneering efforts have been undertaken to solve the scheduling problem in specific quantum networking settings [13][14][15][16][17], but no trivial generalization of the results presented in these works to medium and large scale networks is possible.
In this context, our work aims to provide a general framework that can be employed for designing and benchmarking scheduling policies on general quantum networks. We stress that our findings pertain to arbitrary network topologies with no theoretical limit on scale and enable users to work with multiple commodities requesting streams of entangled pairs. Furthermore, our framework actively exploits quantum memory slots: even when not all elementary links along a given route are ready, the network is still allowed to create intermediate entangled pairs that cover a part of the route exploiting the available links and store them in memory for future use. The idea of intermediate links has already appeared in other works [18][19][20], and we seek to extend it to our general setting as a core mechanism of operation of the network systems we model.
It should be noted that, while some scheduling policies are proposed and analysed in the following, the broader focus of this work is on describing the framework as a practical tool and providing examples of its application to non-trivial scenarios.
Our work is primarily aimed at first generation quantum networks as detailed in [21], but our methods might prove interesting for a future treatment of second and third generation systems as well.
The paper is structured as follows: in sec. II, the relevant scientific literature is reviewed and compared with our contribution. Sec. III provides a detailed description of the system we are modeling and the various components of our algebraic framework. We follow up with sec. IV, where we introduce and analyze an array of scheduling policies through the tools we propose. Sec. V is devoted to presenting numerical results obtained by applying our tools to several network setups.
## II Context and relevance of this work
As a cross-disciplinary topic, quantum networks are interesting to both quantum physicists and classical network scientists. As such, it is common to try and adapt classical networking ideas and know-how to the quantum world. Much like our work, [13] provides a formulation of the scheduling problem on quantum networks, the main difference being that the cited work approaches the problem through architecture design and heuristic scheduling, while our contribution is more geared towards building a general algebraic framework to mathematically derive and compare scheduling policies.
Concerning purely theoretical results, an optimal theoretical bound for entanglement distribution across a line network with a single commodity is derived in [14] and expanded upon in [17].
References [22], [23], and [16] are all examples of stochastic analysis of a single quantum switch to characterize the scheduling policies that stabilize it. The physical model employed in these works is deeper, in that it accounts for purely quantum imperfections that we neglect, but their scope is somewhat narrower than ours because they all consider a single quantum switch that has to serve a set of users in a star-like configuration.
More specifically relevant to our work, [24] and [25] detail the application of Lyapunov stability theory to a quantum switch and the subsequent derivation of a throughput-optimal Max Weight [26] policy, much like it is done for the quadratic policies we propose. The key differences lie in the generality of our work, which applies to arbitrary topologies with multiple commodities, and in the fact that the cited papers model a switch as a single-hop queuing system dealing with entanglement requests, i.e. requests arrive at the switch and are served after waiting in a queue. In our work, we add a complexity layer: together with the single-hop queuing model for the requests, we propose a multi-hop model for entangled pairs in quantum memories, modeling swapping as movement of pairs between queues. This new set of queues acts as a variable resource that the network must regulate according to a suitable scheduling policy.
The usage of memory in our framework is physically similar to the Virtual Quantum Link idea first introduced in [18] and revisited in [19][17][20]: the introduction of memory at the nodes enables them to seek a balance between depleting their supply of entangled pairs for swapping and conserving it for future use or direct user consumption. The deeper implication of this point is that the network is free to create intermediate links and store them: this leads to distributing pairs across a service route in a "growing" fashion, that both increases performance and removes the need for end-to-end link state information, while naturally adapting to a multi-hop queuing scenario.
As a final remark, we stress that due to the abundance of interesting research that has been carried out to perform quantum routing on several network topologies [19][27][28][15] we assume the existence of a set of static pre-computed routes that connect each end-user pair, under the premise that our work should be easily integrable with a more refined routing technique.
To conclude the section, we summarize the key contributions of the present manuscript:
1. We introduce a general framework for scheduling in quantum networks that poses no assumptions on topology, number of commodities or choice of scheduling policy (sec. III);
2. We extend the idea of intermediate virtual link to the general network case (_ibidem_);
3. Through the help of our framework, we derive an optimal quadratic scheduling policy that works over our multi-hop model. We then formulate suboptimal versions of this policy that relax information requirements (sec. IV-C).
4. Finally, we propose a novel, Max-Weight inspired class of scheduling policies that is shown to perform satisfactorily while posing feasible communication constraints on the network (_ibidem_).
## III System Description
In this section, we describe the physical model that we will rely on to develop our framework. Since the framework we provide is composed of two interconnected queuing models, we devote subsections III-A and III-B to describe respectively the details behind ebit queues and demand queues. As a preliminary step, we clarify the notation conventions that are adopted in this work: lower case for scalars (\(x\)), bold lower case for vectors (\(\mathbf{x}\)), bold upper case for matrices (\(\mathbf{X}\)) and calligraphic upper case for sets (\(\mathcal{X}\)). Well-known matrices such as the identity matrix or the null matrix are indicated in bold and subscripted with their dimension, as in \(\mathbf{I}_{n}\) and \(\mathbf{0}_{n\times m}\).
Since the term is ubiquitous in the following, we state the definition of a quantum switch as a device that is equipped with quantum memories to store qubits, a Bell State Measurement (BSM) apparatus to perform entanglement swapping operations, and local quantum processing capabilities. An entanglement swapping operation is assumed to be instantaneous and always successful, and the classical communication overhead that comes with entanglement swapping (such as sharing measurement results) is considered free. We assume our quantum switches to be connected to a classical communication infrastructure to coordinate control operations for protocols and, if the chosen scheduling policy so requires, exchange status information with other nodes and/or a central scheduling controller. Moreover, every node is assumed to possess unlimited memory slots. While this might look like too coarse of an assumption, both the literature [22][29] and some preliminary results we present here suggest that, while indeed being an important modeling point, limiting the memory slots might not be the first network limitation that must be taken into account.
The physical system we consider is a network of quantum switches connected by lossy fiber links. We model it as an arbitrary connected graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where the switches are deployed at the locations specified by the vertices \(\mathcal{V}\) and interconnected by edges \((i,j)\in\mathcal{E}\) that represent a fiber link plus a generic elementary entanglement generation scheme (such as a \(\chi^{(2)}\) crystal, a Bell State Analyzer in the middle [30] or at one of the stations [12, sec. V-C]). Every switch has a number of memory slots, assumed to be infinite in this work, in which qubits may be stored. Pairs of entangled qubits (referred to as an ebit [31] hereafter) are generated by each fiber link with a given constant average rate, which may be heterogeneous across links but is constant in time, and stored inside memories at the end nodes of the respective link. Among the network nodes, there are \(n\) pairs \(\{(Alice_{1},Bob_{1}),\ldots,(Alice_{n},Bob_{n})\}\) that request ebits in a random way to realize a generic application. Each \((Alice_{n},Bob_{n})\) pair is connected by one or more routes that are not necessarily disjoint from the ones connecting other users, and therefore can create congestion that needs to be managed by a scheduling policy. We stress that since we assume unlimited memory we are choosing to focus on the link congestion case: we leave node congestion for future investigation.
Given this starting point, the purpose of a quantum network is to perform entanglement swapping operations in order to distribute ebits to its users in a way that is optimal under a given performance metric. In pursuing this objective, the network must rely on a scheduling policy to minimize congestion by carefully deciding which swaps to perform when, while also being hindered by link-level fiber losses and by quantum memory imperfection causing the loss of stored ebits.
Memory and fiber losses are the only two sources of imperfection that are accounted for in this paper: for simplicity reasons, we neglect sources of state degradation other than losses in this formulation of our algebraic model, since they require a far lower level of abstraction, and lead to more complex multiobjective problems [32, 33]. However, our model could be reinterpreted in the context of more modern error-corrected networks if we state that each link generates entangled pairs with a given logical rate, i.e. the rate of creation of error-corrected ebits.
For practical reasons, we discretize the time axis: since the scheduler is supposed to take decisions at fixed times, it is natural to take a discrete time step \(\Delta t\) as the time unit of interest.
Between two subsequent clock ticks the system is free to evolve stochastically and at the end of each time step a scheduling decision is taken. This places a lower bound on \(\Delta t\): no decision can happen before all information has been successfully communicated to all deciding agents, thus \(\Delta t\) must be at least as large as the classical communication delay introduced by state-related information exchange. We note that, while at the moment our work does not take into account finite communication delays, the design process of a real system would need to consider that a policy that requires more communication, despite being better informed, will suffer from more losses (as they depend on the length of the time step) and be less reactive to instantaneous change.
### _Ebit Queues_
To model ebits stored at memory nodes, the concept of an ebit queue is introduced: each pair of nodes \(e=(i,j)\) inside the extended edge set \(\tilde{\mathcal{E}}=\mathcal{V}\times\mathcal{V}\) is said to possess an ebit queue \(q_{ij}(t)\). Furthermore, among ebit queues, every \(q_{ij}(t)\) associated to an edge \((i,j)\in\mathcal{E}\) corresponds to an elementary entanglement generation link, and is therefore called a physical queue, while all other ebit queues are called virtual queues. Ebit queues are therefore a piece of classical control information introduced to keep track of which nodes share entanglement: \(q_{ij}(t)=n\) means that there are \(n\) qubits at node \(i\) and \(n\) qubits at node \(j\), taking up \(n\) memory slots at the respective nodes and sharing pairwise entanglement. In the following, we describe how all the processes that ebits undergo in our model are translated to queue operations.
#### III-A1 Ebit Generation
At each time step, every fiber link -- and thus every physical queue -- generates a random number of ebits \(a_{ij}(t)\). This term can be seen as an open interface to the specific random process that models ebit generation and it is modeled hereafter as a Poisson process of constant mean value \(\alpha_{ij}\geq 0\) for generality. It should be noted that \(\alpha_{ij}\) is the final generation rate after accounting for link-level imperfections -- finite brightness of the source, propagation losses, finite success probability of pair-generation BSMs, etc. -- modeled as a cascade of Poisson filtration processes. Thus, ebit generation is modeled by a direct enqueueing operation along the relevant queue. It should be noted that, since this operation models entanglement generation at the physical level, it only concerns physical queues. For virtual queues, \(a_{ij}(t)=0\,\forall\,t\).
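As a minimal sketch of this step (not the simulator's code), the arrival terms \(a_{ij}(t)\) of the physical queues can be drawn as independent Poisson samples; the link labels and rates below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical physical links of a small chain network and their mean
# generation rates alpha_ij (ebits per time step); virtual queues such as
# AC, BD or AD never generate and are simply absent from this dictionary.
alpha = {("A", "B"): 1.0, ("B", "C"): 1.0, ("C", "D"): 1.0}

def sample_arrivals(alpha, rng):
    """Draw a_ij(t) ~ Poisson(alpha_ij) for every physical queue."""
    return {edge: int(rng.poisson(rate)) for edge, rate in alpha.items()}

print(sample_arrivals(alpha, rng))  # e.g. {('A', 'B'): 2, ('B', 'C'): 0, ('C', 'D'): 1}
```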
#### III-A2 Ebit Losses
To model (symmetrical) memory loss, we employ a standard quantum memory model and calculate the storage-and-retrieval efficiency of the memories as \(\eta=\exp\left(-\frac{\Delta t}{\tau}\right)\), where \(\tau\) is the expected lifetime of a qubit in the memory and \(\Delta t\) is the duration of a time step. This figure of merit models the probability to correctly retrieve a qubit from a memory after it has been stored in it for one time step. We assume losses to be symmetrical in that whenever one loss event happens, either both ends of the link lose their respective qubit or one end loses it and instantly communicates the loss to the other concerned node. Therefore, one loss event always models the loss of one complete ebit.
At every time step, every queue throws as many biased coins as there are stored ebits and removes as losses all the ones that fail the random check. Losses are therefore modeled by the binomially distributed random variable \(\ell_{ij}(t)\), with as many trials as there are ebits stored in queue \((i,j)\) and probability to lose one pair \(1-\eta\). It should be clear that the number of trials for the binomial distribution is based on \(q_{ij}(t)\), i.e. on the pairs present at the beginning of the time step, meaning that new arrivals are immune to losses for the ongoing time step.
We remark that the statistical distribution of ebit survival times follows the geometric distribution defined by \(\eta\), whose mean value \(\frac{1}{1-\eta}\) tends to the expected \(\frac{\tau}{\Delta t}\) for small \(\frac{\Delta t}{\tau}\), \(\tau\) being the expected lifetime of ebits in the memories. The remaining difference is an effect of the discretization. Finally, we stress that accounting for losses in such a time-dependent way makes the presented framework valid as a tool to determine the optimal frequency at which scheduling decisions should be taken, given the technological parameters.
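The loss model can be sampled along the same lines; in the sketch below the values of \(\Delta t\), \(\tau\) and the queue snapshot are hypothetical and chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def memory_efficiency(dt, tau):
    """Storage-and-retrieval efficiency eta = exp(-dt/tau) over one time step."""
    return np.exp(-dt / tau)

def sample_losses(q, eta, rng):
    """Draw l_ij(t) ~ Binomial(q_ij(t), 1 - eta) for every ebit queue; only
    pairs already stored at the start of the step are at risk of being lost."""
    return {edge: int(rng.binomial(n, 1.0 - eta)) for edge, n in q.items()}

eta = memory_efficiency(dt=1e-3, tau=1e-2)          # hypothetical: dt = 1 ms, tau = 10 ms
q = {("A", "B"): 3, ("B", "C"): 0, ("A", "C"): 1}   # hypothetical queue snapshot
print(round(eta, 3), sample_losses(q, eta, rng))
```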
#### III-A3 Entanglement Swapping
After covering generation and loss, the last mechanism that can modify the amount of ebits in a queue is entanglement swapping. Entanglement swapping always involves consuming two "shorter" pairs to obtain one longer pair, which naturally translates to our queue-based formalism as two removals from the parent queues, and one addition to the child queue. We introduce the following notation: let \(r_{i[j]k}(t)\) indicate the number of swapping operations that happen at a given time step, at node \(j\), from queues \((i,j)\) and \((j,k)\) to queue \((i,k)\): as a notation example, \(r_{A[B]C}(2)=3\) means that the scheduler has ordered three BSMs to be performed at node \(B\) to swap three pairs from queues \(AB\), \(BC\) to \(AC\) at time step 2. There will be as many \(r_{i[j]k}(t)\) terms as there are transitions allowed by the chosen routing: if for instance there are two parallel paths \(ABCD\) and \(AB^{\prime}C^{\prime}D\) across the Alice-Bob pair \(AD\), but only \(ABCD\) is explicitly routed, the system will include terms \(r_{A[B]C}(t)\) and \(r_{A[C]D}(t)\), but not \(r_{A[B^{\prime}]C^{\prime}}(t)\) and \(r_{A[C^{\prime}]D}(t)\), effectively ignoring the second path. This is a limitation that directly arises from assuming that routing is static and known, but is also easily circumvented by adding more paths to the routing, since we place no theoretical limit on the number of routes that can serve a user pair.
To clarify how all the pieces introduced until now fit together, suppose we have the Alice-Bob pair \(AD\) connected by route \(ABCD\), as shown in Fig. 1. Assume the average generation rates to be \(\alpha_{AB}=\alpha_{BC}=\alpha_{CD}=1\) (time steps)\({}^{-1}\). Lastly, assume that all the memories in the system have \(\eta=0.9\) storage-and-retrieval efficiency for the chosen time step duration. Fig. 1 shows how the full system evolves throughout two time steps, while Fig. 2 shows the same test run but focusing on queue \(AB\), to highlight the timing of the various phenomena at play.
* During time step 1: 1. At the beginning of the time step, the queue states are: \(q_{AB}(1)=q_{CD}(1)=1\), \(q_{BC}(1)=0\); 2. At the end of the time step, new ebits have been generated across \(AB\) and \(BC\) (\(a_{AB}(1)=2\), \(a_{BC}(1)=1\)) and one has been lost across \(CD\) (\(\ell_{CD}(1)=1\)). The scheduling decision is taken from this configuration as \(r_{A[B]C}(1)=1\): one swap at node \(B\) from queues \(AB\) and \(BC\) to \(AC\).
* During time step 2:
1. The initial configuration sees two stored pairs in \(AB\) which were not employed in the last time step (\(q_{AB}(2)=2\)) and the freshly swapped one in \(AC\) (\(q_{AC}(2)=1\));
2. Throughout the time step, one pair was lost across \(AB\) (\(\ell_{AB}(2)=1\)) and one generated across \(CD\). The scheduler may now decide \(r_{A[C]D}(2)=1\) to move to \(AD\) or store the pairs for future use.
To categorize transitions in terms of their net effect on queues, we say that a given transition \(i[j]k\) is _incoming_ for queue \((i,k)\), because it adds pairs to it, and _outgoing_ for queues \((i,j)\) and \((j,k)\), because it takes pairs from them. A queue's evolution can therefore be summarized as follows, i.t. and o.t. being shortcuts for incoming and outgoing transitions:
\[q_{ij}(t+1)=q_{ij}(t)+a_{ij}(t)-\ell_{ij}(t)\\ -\sum_{o\in\text{o.t.}}r_{o}(t)+\sum_{k\in\text{i.t.}}r_{k}(t). \tag{1}\]
For clarity, we reiterate that while all terms of (1) are calculated for every queue, \(a_{ij}(t)\) across a virtual queue will always be zero, because virtual queues do not generate ebits. Moreover, it is quite rare for a physical queue to have incoming transitions, but not impossible: it may happen in a peculiar topology such as the \(ABC\) triangle with \(AB\) as an Alice-Bob pair and \(ACB\) as service route. In this edge case, transition \(A[C]B\) is incoming for a physical queue.
Conversely, it should be stressed that the loss term \(\ell_{ij}(t)\) is calculated in the same way for all queues, because ebit storage is always handled by memories at the network nodes. A description of the whole system requires \(|\tilde{\mathcal{E}}|\) equations like (1), ushering a natural transition to a model built with matrices and vectors.
The first vector terms are \(\mathbf{q}(t)\), \(\mathbf{a}(t)\) and \(\boldsymbol{\ell}(t)\), whose \(N_{\text{queues}}\) entries correspond to the individual \(q_{ij}(t)\), \(a_{ij}(t)\) and \(\ell_{ij}(t)\) values (the ordering is irrelevant as long as it is consistent). Moreover, since the effect of swapping on the queues is linear, it is possible to describe it by introducing the vector \(\mathbf{r}(t)\), which has \(N_{\text{transitions}}\) elements -- and a matrix \(\mathbf{M}\) with \(N_{\text{queues}}\) rows and \(N_{\text{transitions}}\) columns to translate the transition rates into their net effect on queues.
The \(\mathbf{r}(t)\) vector embodies the scheduling decision and it is a mere list of all the \(r_{i[j]k}\) terms, while the \(\mathbf{M}\) matrix introduces an efficient encoding of the network topology and routes: For each of its columns, associated to transition \(i[j]k\), the \(\mathbf{M}\) matrix has \(-1\) on the rows associated to queues \((i,j)\) and \((j,k)\), and \(+1\) on the row associated to queue \((i,k)\). All other terms are zero. An example of the \(\mathbf{M}\) matrix is given in table 1 in order to provide the reader with intuition on how it's built. We remark that in all non-trivial examples that are analyzed in this work the \(\mathbf{M}\) matrix is automatically generated by our simulator.
System-wide queue evolution can be restated as the following simple linear equation:
\[\mathbf{q}(t+1)=\mathbf{q}(t)-\boldsymbol{\ell}(t)+\mathbf{a}(t)+\mathbf{M} \mathbf{r}(t). \tag{2}\]
Looking at tab. 1, notice that, as this work only involves bipartite entanglement, all columns of \(\mathbf{M}\) have two \(-1\) entries and one \(+1\). It would be possible to generalize this model to n-party entanglement by introducing multipartite queues and defining transitions that add to them by drawing from three or more bipartite queues to model a protocol similar to the ones shown in [34][35]. For the sake of simplicity and avoiding the severe scaling issues this generalization would create, we focus on bipartite states for now. This entails that every column of \(\mathbf{M}\) sums to \(-1\), i.e. every swap operation has the net effect of removing one pair from the system.
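The construction of \(\mathbf{M}\) reduces to a few lines of code. The sketch below (not taken from the simulator, whose generator is more general) rebuilds the matrix of Table 1 from the queue and transition lists of the \(ABCD\) example; both orderings are arbitrary but must be kept consistent.

```python
import numpy as np

# Queues and routing-aware transitions of the linear ABCD example; both
# orderings are arbitrary but must be kept consistent throughout.
queues = [("A", "B"), ("B", "C"), ("C", "D"), ("A", "C"), ("B", "D"), ("A", "D")]
transitions = [("A", "B", "C"), ("B", "C", "D"), ("A", "B", "D"), ("A", "C", "D")]

def build_M(queues, transitions):
    """Column i[j]k removes one pair from queues (i,j) and (j,k) and adds one to (i,k)."""
    idx = {q: n for n, q in enumerate(queues)}
    M = np.zeros((len(queues), len(transitions)), dtype=int)
    for col, (i, j, k) in enumerate(transitions):
        M[idx[(i, j)], col] -= 1
        M[idx[(j, k)], col] -= 1
        M[idx[(i, k)], col] += 1
    return M

M = build_M(queues, transitions)
print(M)                             # reproduces Table 1, rows/columns ordered as above
assert (M.sum(axis=0) == -1).all()   # every swap removes one pair from the system overall
```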
#### III-A4 Ebit Consumption
Up to now, the scheduler can freely swap pairs in the network but there is no mechanism for users to employ the received pairs. The missing piece of the puzzle for ebit queues is consumption: whenever entangled pairs are available across one of the final \((Alice_{n},Bob_{n})\) pairs, the scheduler must be able to use them to serve requests, i.e. consume the distributed resource.
| | \(A[B]C\) | \(B[C]D\) | \(A[B]D\) | \(A[C]D\) |
| --- | --- | --- | --- | --- |
| \(AB\) | \(-1\) | \(0\) | \(-1\) | \(0\) |
| \(BC\) | \(-1\) | \(-1\) | \(0\) | \(0\) |
| \(CD\) | \(0\) | \(-1\) | \(0\) | \(-1\) |
| \(AC\) | \(+1\) | \(0\) | \(0\) | \(-1\) |
| \(BD\) | \(0\) | \(+1\) | \(-1\) | \(0\) |
| \(AD\) | \(0\) | \(0\) | \(+1\) | \(+1\) |

Table 1: \(\mathbf{M}\) matrix for the linear \(ABCD\) network
Figure 2: Example of two time steps from the point of view of queue \(AB\). Queue snapshots \(q_{ij}(t)\) are taken at the very beginning of a time step, while arrivals and losses happen stochastically but are only assessed at the end of the step, when the scheduling decision is taken. Note that ebits arriving during the current time step are not subject to losses in this model.
Figure 1: Explicit example of two time steps over a simple topology. Continuous lines represent physical queues and dashed lines virtual ones. Grey circles represent ebits that were in the queue at the beginning of a time step, red ones ebits that arrived during that time step. Blue crosses represent loss of an ebit. Upper figures (a) at the beginning of the corresponding time step, lower figures (b) at the end of it.
This is implemented in the model by extending the matrix \(\mathbf{M}\) through concatenation of a negative identity block, obtaining \(\tilde{\mathbf{M}}=\left[\mathbf{M}\mid-\mathbf{I}_{N_{\text{queues}}}\right]\), and by extending the \(\mathbf{r}(t)\) vector to have \(N_{\text{transitions}}+N_{\text{queues}}\) components.
What this extension achieves is to have a set of new transitions that only remove one pair from a given queue, modeling actual consumption of the distributed pair by the users. Extending \(\mathbf{M}\) to \(\mathbf{\tilde{M}}\) empowers the scheduler but also adds a new facet to the decision problem: if a given queue has \(n\) pairs inside, the scheduler not only needs to balance swapping and storage for future use, it might also have to account for direct consumption of some of the available ebits.
Putting all the terms together, the vector of ebit queues evolves as:
\[\mathbf{q}(t+1)=\mathbf{q}(t)-\boldsymbol{\ell}(t)+\mathbf{a}(t)+\mathbf{ \tilde{M}}\mathbf{r}(t). \tag{3}\]
### Demand Queues
The ultimate purpose of a communication network is to serve the requests that users issue. Therefore, we need to include in our discussion a mechanism to keep track of user demand: at any given time, every \((Alice_{n},Bob_{n})\) pair will issue a random number of demands and store them in a backlog called the demand queue. Every time a direct consumption operation is scheduled and a pair is consumed along link \(ij\), a demand is simultaneously removed from the demand queue of link \(ij\). This physically corresponds to the users measuring their qubits and "consuming" one ebit to realize the specific application they are implementing.
Thus, it becomes natural to introduce another set of queues to describe the evolution of demands. Similarly to ebits, demands arriving to the system and being held for future service are modeled through queues: alongside every ebit queue, there exists a demand queue \(d_{ij}(t)\) that keeps track of the number of user-issued requests (as introduced in [23] for a single switch and generalized in this work for an arbitrary topology). At each time step, every demand queue \(d_{ij}(t)\) receives \(b_{ij}(t)\) demands, which for simplicity and generality are again modeled as a Poisson process with constant average value \(\beta_{ij}\) (as in the case of ebit generation, this term may be interpreted as an open interface to more refined traffic patterns). To maintain the model's uniformity, all edges belonging to \(\mathcal{\tilde{E}}\) have a demand queue, but only the ones that are associated to an \((Alice_{n},Bob_{n})\) pair have nonzero arrivals. For all the other links, \(b_{ij}(t)=0\,\forall\,t\).
Demand queues have a simpler evolution than ebit queues, since a demand is only a request for one ebit to be distributed across a given \((Alice,Bob)\) pair: demands enter their queues when they are received and exit when they are served. Demand service can be naturally controlled by the \(ij\) terms of the \(\mathbf{r}(t)\) vector, i.e. the same terms that control ebit consumption. We therefore introduce the matrix \(\mathbf{\tilde{N}}=\left[\mathbf{0}_{N_{\text{queues}}\times N_{\text{transitions}}}\middle|-\mathbf{I}_{N_{\text{queues}}}\right]\) as a means of interfacing with the consumption part of the \(\mathbf{r}(t)\) vector without being affected by the scheduling one, which is irrelevant to demand queues.
Demand evolution may therefore be stated as:
\[\mathbf{d}(t+1)=\mathbf{d}(t)+\mathbf{b}(t)+\mathbf{\tilde{N}}\mathbf{r}(t) \tag{4}\]
By construction, the last \(N_{\text{queues}}\) components of the \(\mathbf{r}(t)\) vector regulate both demand and ebit consumption: one demand always consumes one ebit.
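To make the bookkeeping concrete, the following sketch assembles \(\tilde{\mathbf{M}}\) and \(\tilde{\mathbf{N}}\) for the \(ABCD\) example and advances the ebit and demand vectors by one time step according to eqs. (3) and (4). The function and variable names are illustrative assumptions, not part of the simulator.

```python
import numpy as np

# M matrix of Table 1 (rows: AB, BC, CD, AC, BD, AD; columns: A[B]C, B[C]D, A[B]D, A[C]D).
M = np.array([[-1,  0, -1,  0],
              [-1, -1,  0,  0],
              [ 0, -1,  0, -1],
              [ 1,  0,  0, -1],
              [ 0,  1, -1,  0],
              [ 0,  0,  1,  1]])
n_queues, n_transitions = M.shape

# Extended matrices of eqs. (3) and (4): each consumption column removes one ebit
# from a queue and, through N_tilde, one demand from the matching demand queue.
M_tilde = np.hstack([M, -np.eye(n_queues, dtype=int)])
N_tilde = np.hstack([np.zeros((n_queues, n_transitions), dtype=int),
                     -np.eye(n_queues, dtype=int)])

def step(q, d, a, loss, b, r):
    """Advance ebit and demand queues by one time step for given realizations
    a, loss, b and scheduling decision r (length n_transitions + n_queues)."""
    q_next = q - loss + a + M_tilde @ r
    d_next = d + b + N_tilde @ r
    return q_next, d_next
```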
## IV Scheduling Policies
### General Overview
After introducing all the components of the model, we move to describing scheduling policies and how they can be tested through our tools. We first outline what a scheduling policy is in the context of our work and follow up with subsections dedicated to three categories of scheduling policies: subsection IV-B describes the Greedy scheduler, i.e. the simplest policy we analyze in this work; subsection IV-C features a mathematical derivation of a quadratic family of scheduling policies; subsection IV-D shows how the quadratic schedulers can be modified to obtain a class of policies that perform similarly but require lighter computations. We define a _Scheduling Policy_ as any arbitrary set of rules that at every time step \(t\) takes as its input some degree of information about the network state and returns a scheduling decision \(\mathbf{r}(t)\), i.e. a scheduling vector as defined in the previous section.
We first subdivide policies according to their localization degree: in distributed policies, the nodes themselves determine the operations to perform; in centralized ones, the system features a physical scheduler to which all the nodes communicate their status information and receive orders from. It is moreover possible to categorize policies in terms of information availability: we remark that in all policies that we analyze in the following we work on the assumption that \((\mathbf{q}(t),\mathbf{d}(t))\), i.e. the exact state of the system at the beginning of time step \(t\), is known to all parties. However, since networks are distributed systems, it may happen that some crucial information (such as the realizations of the random processes \(a_{ij}(t)\) and \(\ell_{ij}(t)\) for faraway queues) is not available or outdated when the scheduling decision is taken, introducing the notion of feasibility of a scheduling decision, which is detailed in the following paragraph.
To start, assume a centralized scheduler, with complete access to information. As shown in sec. III, the net effect of a scheduling decision \(\mathbf{r}(t)\) on the ebit and demand queues is given respectively by \(\mathbf{\tilde{M}}\mathbf{r}(t)\) and \(\mathbf{\tilde{N}}\mathbf{r}(t)\). We can set two bounds on the decision:
1. The net number of outgoing ebits from any given queue can never exceed what is physically available: \[-\mathbf{\tilde{M}}\mathbf{r}(t)\leq\mathbf{q}(t)-\boldsymbol{\ell}(t)+ \mathbf{a}(t).\] (5)
2. Along a queue, the number of consumed ebits should never be higher than the demands: \[-\mathbf{\tilde{N}}\mathbf{r}(t)\leq\mathbf{d}(t)+\mathbf{b}(t)\] (6)
We refer to those bounds as the feasibility bounds.
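A candidate decision can be tested directly against bounds (5) and (6); the helper below is a minimal sketch that simply checks the two inequalities element-wise, with all arrays following the conventions introduced above.

```python
import numpy as np

def is_feasible(r, q, loss, a, d, b, M_tilde, N_tilde):
    """Check the feasibility bounds (5) and (6) for a candidate decision r."""
    ebits_ok = np.all(-M_tilde @ r <= q - loss + a)   # no more ebits scheduled out than available
    demands_ok = np.all(-N_tilde @ r <= d + b)        # no more ebits consumed than requested
    return bool(ebits_ok and demands_ok)
```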
If we now suppose (as will be the case for most of the scheduling policies presented hereafter) to have incomplete access to information, one or more of the random processes' realizations become inaccessible, making it impossible to exactly formulate the feasibility bounds. Despite it still being possible to design scheduling policies that perform well while only using educated guesses based on averages, it is not possible to guarantee that their decisions at each time instant will respect (5) and (6).
Infeasibilities in general arise when \(n\) ebits are available in a queue and \(n^{\prime}>n\) are scheduled out of it; they may be caused by a central scheduler relying on outdated information and scheduling more pairs than available, or by race conditions between two node-local schedulers that try to draw from the same queue.
Infeasible decisions themselves do not prevent a network from operating (performing more measurements than there are available ebits simply results in failure of the excess measurements), but infeasibility that is not properly managed may entail a significant degradation of performance. Therefore, a working quantum network stack also needs a specific discipline to manage infeasible orders. In the context of this work, conflicts are managed by assigning a random timeout to all measurement operations, and then executing them with a first-come-first-serve (FCFS) discipline. However, to avoid artificially degrading the performance, we also introduce a ranking system, shown in fig. 3, such that high-rank operations are always executed after low-rank ones. Were such a system not in place, it could happen that the scheduler ordered to feed \(q_{AC}\) through \(r_{A[B]C}=1\), exploit the new \(AC\) pair in \(r_{A[C]D}=1\) and finally serve one request with \(r_{AD}=1\). Each of these operations depends on the one before it, and if the execution order is not respected the system will serve one less \(AD\) request, possibly also wasting the intermediate links in the process. In practice, we hypothesize to have a control layer in charge of sending a control signal to the nodes to apply their scheduling decision at the end of the time step. To ensure proper priority is respected, we subdivide the signal from the control layer into multiple sequential "apply" signals, one per rank (i.e. one per horizontal set of nodes in fig. 3).
In the following sections we propose some examples of scheduling policies and provide detail on their degree of localization and information availability.
### Greedy Scheduler
The Greedy Scheduler is a nontrivial, distributed scheduling policy that works with minimal communication between the nodes. It is a natural and immediate solution to the scheduling problem, and it is commonly found in classical network literature as a test case. Under a greedy scheduling policy, all nodes perform swapping operations as soon as they are available, regardless of user demand. When several competing operations are available, the node selects randomly. It should be noted that, although it disregards user demand, the greedy scheduler we examine is still routing-aware: if the route \(ABCD\) is to be served, the scheduler will never attempt "downward" transitions like \(A\left[D\right]C\).
The greedy scheduler's advantage lies in the fact that it requires no additional communication infrastructure on top of the one already employed by ebit generation and swapping, since the policy works on strictly local information. The downside to such simplicity is found in the low performance of this policy, that is only interesting as a lower bound for other policies to beat in order to justify the additional communication overhead required. Simulation data for the greedy policy, as well as a comparison with more refined schedulers, is provided in sec. V-B.
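For concreteness, one possible rendering of the swap part of the greedy discipline is sketched below; it is only illustrative (the simulator's implementation is structured differently) and operates on the routing-aware transition triples \(i[j]k\), ignoring demand as the greedy policy prescribes.

```python
import numpy as np

def greedy_swaps(q, transitions, rng):
    """Illustrative greedy swap decision: repeatedly pick, at random, a routed
    transition i[j]k whose two parent queues are non-empty and schedule it,
    until no transition remains enabled.  Demand queues are ignored, since the
    greedy discipline swaps as soon as resources are available."""
    q = dict(q)                                   # local working copy
    r = {t: 0 for t in transitions}
    while True:
        enabled = [(i, j, k) for (i, j, k) in transitions
                   if q.get((i, j), 0) > 0 and q.get((j, k), 0) > 0]
        if not enabled:
            return r
        i, j, k = enabled[rng.integers(len(enabled))]
        q[(i, j)] -= 1
        q[(j, k)] -= 1
        q[(i, k)] = q.get((i, k), 0) + 1
        r[(i, j, k)] += 1

rng = np.random.default_rng(seed=1)
transitions = [("A", "B", "C"), ("B", "C", "D"), ("A", "B", "D"), ("A", "C", "D")]
print(greedy_swaps({("A", "B"): 2, ("B", "C"): 1, ("C", "D"): 1}, transitions, rng))
```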
### Quadratic Scheduling
We now turn to mathematically stating and solving the scheduling problem through the lens provided by our framework. Before solving the problem and displaying results, we spend some words to describe our tools.
#### IV-C1 Drift Minimization
Lyapunov Drift Minimization (LDM) is a standard technique that is often used in classical network science to stabilize queuing systems [36, sec. 8.4]. We provide in this section a demonstration of how and why LDM works, and follow up with its application to quantum networks. As a first step, let \(V(\mathbf{q}(t),\mathbf{d}(t))=V(\mathbf{s}(t))\) be an arbitrary, non-negative, convex \(\mathbb{N}^{n}\rightarrow\mathbb{R}\) function of the current state of the system, that we call the Lyapunov function. In short, choosing an arbitrary Lyapunov function and showing it satisfies certain conditions will allow us to infer that the system is stable.
Figure 3: The scheme of our rank system for an \(ABCDE\) chain topology. Every square with only two letters inside (e.g. \(DE\)) represents a link, while three-letter squares (e.g. \(C[D]E\)) represent swapping transitions. A set of squares at the same height are grouped in one rank, starting from zero at the top (direct consumption from physical queues) and increasing going down. Arrows represent the “paths” to follow to obtain one of the final, user-requested pairs. Focusing on the bright red squares in this scheme, which all involve node \(C\) in some way, we can provide an example of how the conflict-management system works. Whenever it needs to apply a scheduling order, node \(C\) will sequentially: 1) Perform transition \(B[C]D\) as many times as requested. 2) Satisfy consumption orders along \(CE\) and \(AC\) with random timeouts and a FCFS discipline; 3) Perform transitions \(B[C]E\) and \(A[C]D\) with random timeouts and a FCFS discipline.
This method entails great simplification of the analysis of highly multivariate systems, because it reduces the problem to a scalar one: when \(V(\mathbf{s}(t))\) is small, all the queues are small, and when it is big, at least one queue is accumulating. A common convention [37] in network science is to use the square norm of the queue backlog vector as \(V(\mathbf{s}(t))\).
After choosing a suitable Lyapunov function, the next step is to define its _drift_\(\Delta V(\mathbf{s}(t))\) as:
\[\Delta V(\mathbf{s}(t))=\operatorname{\mathbb{E}}\left[V(\mathbf{s}(t+1))-V( \mathbf{s}(t))|\mathbf{s}(t)\right]. \tag{7}\]
Some intuition about this formulation can be gained by thinking of the Lyapunov function as a potential, akin to the electrical one in physics: the drift is positive if from \(t\) to \(t+1\) the system evolves into a higher-potential, less stable state, and negative otherwise. It is possible to prove [36, sec. 8.4.2] that if \(\Delta V(\mathbf{s}(t))\) is negative on the entire state space of the system, except possibly for a compact subset of \(\mathbf{s}(t)\) values, then the Markov chain describing the system is positive recurrent, i.e. the network is stable and user requests will not accumulate boundlessly. Such a property is known as the Foster-Lyapunov criterion. Intuitively, the drift being positive only on a compact set means that there is a region of the state space in which the system evolves away from stability: since the drift is negative everywhere outside said region the system is always pushed back inside it, so that the Lyapunov function is never allowed to diverge. To visualize this, one may think of a charged particle in a potential well: even if it manages to exit in some way, it is eventually pushed back by the higher potential region. In its most general form, the Foster-Lyapunov criterion can be phrased as:
\[\Delta V(\mathbf{s}(t))\leq-f(\mathbf{s}(t))+g(\mathbf{s}(t)), \tag{8}\]
where \(f\) and \(g\) are two non-negative functions and the right-hand side is positive on a compact region of the state space of our system. Therefore, the practical goal is to find a bound for the drift and minimize it, in order to satisfy the Foster-Lyapunov criterion:
\[\min_{\mathbf{r}(t)\in\mathcal{R}(t)}\Delta V(\mathbf{s}(t))\leq-f(\mathbf{s}(t))+g(\mathbf{s}(t)) \tag{9}\]
where \(\mathcal{R}(t)\) is the set of all feasible scheduling decisions at time \(t\).
Notice that everything in our equation is defined only in terms of \(t\) and \(t+1\): the optimization must be repeated at every time step because of the \(t\) dependence, and since the system only sees up to \(t+1\) we call this process a _myopic_ optimization. Solving the myopic problem at every time step can be proven [38, appendix] to be a suboptimal solution to the infinite horizon Markov Decision Problem of stabilizing the network at steady state.
#### IV-C2 Application to the Framework
We now move to the application of drift minimization to our quantum problem. We first remark that we only seek to stabilize demand queues, because ebit queues play the role of a resource, and their accumulation is not an indicator of the ability of the network to serve user demand (accumulating ebit queues merely amount to more ebits being available and more freedom to the scheduler, especially under unlimited memory assumptions). Additionally, we remark that experimental quantum networks will have a finite number of quantum memory slots at every node, enforcing a hard upper bound on \(\mathbf{q}(t)\).
To make our analysis apply to any arbitrary scheduling decision in \(\mathbb{N}^{n}\), we refine our definition of \(\mathbf{d}(t)\):
\[\mathbf{d}(t+1)=(\mathbf{d}(t)+\mathbf{b}(t)+\tilde{\mathbf{N}}\mathbf{r}(t)) ^{+}, \tag{10}\]
where \((\cdot)^{+}\) is a shorthand for \(\max\left(\cdot,0\right)\). This is a failsafe measure that prevents the queues in our mathematical model from going negative even if a scheduling policy prescribes more service than there are requests.
To apply drift minimization to our case, the first step is to choose a Lyapunov function that satisfies the requirements detailed above. As is customary in classical networks, we opt for the square norm of the queue backlog:
\[V(t)=\frac{1}{2}\mathbf{d}^{T}(t)\mathbf{d}(t). \tag{11}\]
From there, we obtain the drift:
\[\Delta V=\operatorname{\mathbb{E}}\left[\frac{1}{2}\left[\mathbf{d}^{T}(t+1) \mathbf{d}(t+1)-\mathbf{d}^{T}(t)\mathbf{d}(t)\right]|\mathbf{d}(t)\right] \tag{12}\]
If we let \(\mathbf{d}(t)+\mathbf{b}(t)=\tilde{\mathbf{d}}(t)\) and note that \([\max\left(x,0\right)]^{2}\leq x^{2}\) we can bound the drift as:
\[\begin{split}\operatorname{\mathbb{E}}&\left[\frac{1}{2}\left[\mathbf{d}^{T}(t+1)\mathbf{d}(t+1)-\mathbf{d}^{T}(t)\mathbf{d}(t)\right]\middle|\mathbf{d}(t)\right]\leq\\ &\leq\operatorname{\mathbb{E}}\left[\frac{1}{2}\left[(\tilde{\mathbf{d}}(t)+\tilde{\mathbf{N}}\mathbf{r}(t))^{T}(\tilde{\mathbf{d}}(t)+\tilde{\mathbf{N}}\mathbf{r}(t))-\mathbf{d}^{T}(t)\mathbf{d}(t)\right]\middle|\mathbf{d}(t)\right]=\\ &=\frac{1}{2}\Big[\operatorname{\mathbb{E}}\left[\tilde{\mathbf{d}}^{T}(t)\tilde{\mathbf{d}}(t)\middle|\mathbf{d}(t)\right]-\mathbf{d}^{T}(t)\mathbf{d}(t)+\\ &\quad+\operatorname{\mathbb{E}}\left[\tilde{\mathbf{d}}(t)\middle|\mathbf{d}(t)\right]^{T}\tilde{\mathbf{N}}\mathbf{r}(t)+\mathbf{r}^{T}(t)\tilde{\mathbf{N}}^{T}\tilde{\mathbf{N}}\mathbf{r}(t)\Big]\end{split} \tag{13}\]
Therefore, stabilizing the system amounts to finding the \(\mathbf{r}(t)\) that minimizes the last term, \(U(\mathbf{r}(t))=\operatorname{\mathbb{E}}\left[\tilde{\mathbf{d}}(t)|\mathbf{d}(t)\right]^{T}\tilde{\mathbf{N}}\mathbf{r}(t)+\mathbf{r}^{T}(t)\tilde{\mathbf{N}}^{T}\tilde{\mathbf{N}}\mathbf{r}(t)\). Notice that \(U(\mathbf{0})=0\), which implies that the minimum of \(U(\mathbf{r}(t))\) over the feasible decisions is at most \(0\).
#### IV-C3 Fully Informed Quadratic Scheduler
The derivation presented in the previous section yielded an expression that has a direct effect on stability: the more negative \(U(\mathbf{r}(t))\) is, the stabler the network. In other words, the task of a scheduler in this context is to choose at every time step a decision \(\mathbf{r}(t)\) such that \(U(\mathbf{r}(t))\) is minimized.
The natural tool to solve this problem is optimization. Assuming, as an initial ideal case, that all information about the network state is available (and therefore dropping the expectation from \(U(\mathbf{r}(t))\)), it is possible to formulate a central scheduling policy that at each time step solves the following quadratic integer program:
\[\begin{cases}\min\mathbf{w}(t)\cdot\mathbf{r}(t)+\mathbf{r}(t)^{T}\tilde{\mathbf{N}}^{T}\tilde{\mathbf{N}}\mathbf{r}(t)\\ \text{s.t. }\mathbf{r}(t)\in\mathcal{R}(t)\end{cases} \tag{14}\]
with weights \(\mathbf{w}(t)=(\mathbf{d}(t)+\mathbf{b}(t))^{T}\tilde{\mathbf{N}}\).
Since we assumed complete information availability, we can use as constraints the feasibility conditions mentioned in IV-A (\(d\) being a shorthand for the dimension of \(\mathbf{r}(t)\)):
\[\mathcal{R}(t)=\left\{\mathbf{r}(t)\in\mathbb{N}^{d}|-\tilde{ \mathbf{M}}\mathbf{r}(t)\leq\mathbf{q}(t)-\boldsymbol{\ell}(t)+\mathbf{a}(t), \\ -\tilde{\mathbf{N}}\mathbf{r}(t)\leq\mathbf{d}(t)+\mathbf{b}(t)\right\} \tag{15}\]
This constraint set binds the system so that, along every queue:
* No more outgoing transitions are scheduled than there are stored ebits;
* No more ebits are consumed than there is demand.
Solving this problem at every time step will guarantee the best possible scheduling decision \(\mathbf{r}(t)\) that can be obtained starting from a 2-norm Lyapunov function, even though such a policy carries a crucial flaw that hinders its experimental realizability: since this is a centralized policy, there must be a physical scheduling block that acts as an authority; all the nodes in the network submit local status information and receive a scheduling decision to apply. In the time it takes for the information to reach the scheduling agent and for the decision to be relayed back to the nodes and applied, the physical layer of the network has continued stochastically generating and losing ebits, so that when the decision finally arrives it is based on outdated information. Two possible solutions to this issue are addressed in the following, in the form of two policies that rely on less information being available.
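For very small instances, problem (14)-(15) can even be solved by exhaustive enumeration, which makes the structure of the objective explicit. The sketch below does exactly that and is meant purely as an illustration (the simulator delegates the optimization to an integer-programming solver); here `q_avail` stands for \(\mathbf{q}(t)-\boldsymbol{\ell}(t)+\mathbf{a}(t)\), `d_avail` for \(\mathbf{d}(t)+\mathbf{b}(t)\), and the enumeration cost grows exponentially with the decision dimension.

```python
import itertools
import numpy as np

def quadratic_schedule_bruteforce(q_avail, d_avail, M_tilde, N_tilde, r_max=2):
    """Exhaustive sketch of problem (14)-(15) for a tiny instance: enumerate every
    integer decision with entries in 0..r_max, discard the ones violating the
    feasibility bounds, and return a minimizer of w.r + r^T N~^T N~ r."""
    w = d_avail @ N_tilde                    # weights w(t) = (d(t) + b(t))^T N~
    best_r, best_val = None, np.inf
    dim = M_tilde.shape[1]
    for r in itertools.product(range(r_max + 1), repeat=dim):
        r = np.array(r)
        if np.any(-M_tilde @ r > q_avail) or np.any(-N_tilde @ r > d_avail):
            continue                         # violates (5) or (6)
        val = w @ r + r @ N_tilde.T @ N_tilde @ r
        if val < best_val:
            best_r, best_val = r, val
    return best_r, best_val
```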
#### IV-C4 Partially Informed Quadratic Scheduler
One solution to the stale information problem detailed in the previous section could be to replace all unavailable information with sensible expectation values and thus implement a partially informed quadratic scheduler. We assume that for each queue, the scheduler has access to:
* The average arrival rate \(\alpha\);
* The loss parameter \(\eta\);
* The average demand rate \(\beta\);
* The system state \((\mathbf{q}(t),\mathbf{d}(t))\) at the beginning of each time step.
This information set relaxes the requirements because the network can take a snapshot of its state at the beginning of each time step and exploit the leftover time to communicate it to the scheduler. The scheduler will in turn use average parameters to build an expectation for the system's state at the end of the time step and take its decision based on that. Note that if these requirements are still too tight, it is always possible to formulate a policy that knows the exact state of the system with \(n\) time steps of delay, or even hybrid localized policies where every node knows the state of the surrounding queues with a delay that depends on their physical distance.
Let \(\mathcal{I}=\{\mathbf{q}(t),\mathbf{d}(t),\alpha,\beta,\eta\}\) be the set of available information at time \(t\). To formulate our partially informed policy, we reuse problem (14), but change the constraint set to:
\[\mathcal{R}(t)=\left\{\mathbf{r}(t)\in\mathbb{N}^{d}\ \middle| -\tilde{\mathbf{M}}\mathbf{r}\leq\mathbf{E}\left[\mathbf{q}(t)- \boldsymbol{\ell}(t)+\mathbf{a}(t)|\mathcal{I}(t)\right],\right. \tag{16}\] \[\left.-\tilde{\mathbf{N}}\mathbf{r}(t)\leq\mathbf{E}\left[\mathbf{ d}(t)+\mathbf{b}(t)|\mathcal{I}(t)\right]\right\}.\]
Which in practice reads:
\[\mathcal{R}(t)=\left\{\mathbf{r}(t)\in\mathbb{N}^{d}\ \middle|\ -\tilde{\mathbf{M}}\mathbf{r}\leq\eta\mathbf{q}(t)+\alpha\mathbf{I}_{N_{\text{queues}}\times 1},\ -\tilde{\mathbf{N}}\mathbf{r}(t)\leq\mathbf{d}(t)+\beta\mathbf{I}_{N_{\text{queues}}\times 1}\right\}. \tag{17}\]
This class of partially informed policies still outperforms greedy ones but removes the stale information problem.
It should be stressed that, since this policy relies on a guess made using averages, it is not guaranteed that its decisions will satisfy the feasibility conditions. Moreover, since the policy was formulated by manually modifying the result of LDM, it is by nature a suboptimal policy.
The performance of this policy is reviewed in sec. V-B.
#### IV-C5 Node-Localized Quadratic Scheduler
As mentioned before, information availability is one of the main points to consider when choosing a scheduling policy: a well-designed policy must be able to take sensible decisions while leveraging the available information to the best extent possible.
Following this idea, we propose a distributed, optimization-based original policy and subsequently benchmark it to assess its expected performance.
Since we are describing a distributed policy, we shift our point of view to that of a node in the network: we assume that every node \(i\) in the network has access to all relevant average values, which can be communicated before the network is booted or measured in a rolling average fashion.
Additionally, let node \(i\) have access to the queue state of the full network at the start of each time step \((\mathbf{q}(t),\mathbf{d}(t))\), where the same remarks we gave in the previous section apply.
Finally, due to how entanglement generation and swapping are implemented, node \(i\) should have access to how many qubits are stored in its memory slots and with whom they are entangled, which means that node \(i\) also knows exact arrivals and exact losses for all the queues connected to it, both physical and virtual, and can exploit this additional information when taking a scheduling decision.
To formalize this, let \(\mathcal{C}^{i}\) be the set of queues connected to node \(i\), i.e. the set of edges \(e\) in the extended set \(\tilde{\mathcal{E}}\) such that \(e\) is connected to node \(i\). Using this concept, we can define a node-local version of the information set \(\mathcal{I}^{i}(t)\) which contains the entirety of the information available to node \(i\):
\[\mathcal{I}^{i}(t)=\left\{\mathbf{q}(t),\mathbf{d}(t),\eta,\beta,\alpha,a_{e}(t ),\ell_{e}(t),b_{e}(t),\ \forall e\in\mathcal{C}^{i}\right\},\]
where \(a_{e}(t),\ell_{e}(t)\) and \(b_{e}(t)\) correspond to the additional local exact information that is unique to each node.
Instead of phrasing a global optimization problem, node \(i\) may now formulate an individual problem and solve it to
obtain a strictly local scheduling decision to apply directly, without waiting for a separate central scheduler to send back a decision. To do so, the node builds all the relevant quantities (backlogs, arrivals, losses) with exact information from the queues it is connected to and expectation values from the other queues. The \(i\)-localized quadratic integer program can thus be written as:
\[\begin{cases}\min\mathbf{w}^{i}(t)\cdot\mathbf{r}(t)+\mathbf{r}^{T}(t)\tilde{\mathbf{N}}^{T}\tilde{\mathbf{N}}\mathbf{r}(t)\\ s.t.\ \mathbf{r}(t)\in\mathcal{R}^{i}(t)\end{cases} \tag{18}\]
where the weights are given by
\[\mathbf{w}^{i}(t)=\operatorname{E}\left[\mathbf{d}(t)+\mathbf{b}(t)|\mathcal{ I}^{i}(t)\right]^{T}\tilde{\mathbf{N}}, \tag{19}\]
In accordance with its previous definition, the set \(\mathcal{R}^{i}(t)\) of all possible scheduling decisions \(\mathbf{r}(t)\) at time slot \(t\) localised at node \(i\) is defined as:
\[\mathcal{R}^{i}(t)=\left\{\mathbf{r}(t)\in\mathbb{N}^{d}\ \middle|\ -\tilde{\mathbf{M}}\mathbf{r}(t)\leq\operatorname{E}\left[\mathbf{q}(t)-\boldsymbol{\ell}(t)+\mathbf{a}(t)|\mathcal{I}^{i}(t)\right],\ -\tilde{\mathbf{N}}\mathbf{r}(t)\leq\operatorname{E}\left[\mathbf{d}(t)+\mathbf{b}(t)|\mathcal{I}^{i}(t)\right]\right\}, \tag{20}\]
where each individual expected value will locally resolve to a form similar to eq. 15 (i.e. all exact values) for queues that are connected to node \(i\) and to eq. 17 (i.e. all averages) for queues that are not. As an example, node \(A\) will be able to formulate a problem that includes the constraint \(-\tilde{\mathbf{M}}_{AB}\cdot\mathbf{r}\leq q_{AB}(t)-\ell_{AB}(t)+a_{AB}(t)\) (where \(\tilde{\mathbf{M}}_{AB}\) is row \(AB\) of \(\tilde{\mathbf{M}}\)) because queue \(AB\) is directly connected to it, but will have to resort to \(-\tilde{\mathbf{M}}_{CD}\cdot\mathbf{r}\leq\eta q_{CD}(t)+\alpha\) for queue \(CD\), because it has no up-to-date information about it.
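The mixing of exact and expected information prescribed by (20) can be made concrete with a short sketch that builds the right-hand side of the ebit constraint as seen by a single node; the dictionaries and parameter names are illustrative assumptions rather than simulator code.

```python
import numpy as np

def local_ebit_bound(node, queues, q, a, loss, alpha, eta):
    """Right-hand side of the ebit constraint in (20) as seen by `node`: exact
    arrivals and losses for the queues touching the node, expectations elsewhere.
    q, a, loss and alpha map each queue to a value (a and alpha are zero, or
    absent, for virtual queues); all names are illustrative."""
    rhs = np.empty(len(queues))
    for n, (i, j) in enumerate(queues):
        if node in (i, j):                                # local queue: exact information
            rhs[n] = q[(i, j)] - loss[(i, j)] + a[(i, j)]
        else:                                             # remote queue: expected value
            rhs[n] = eta * q[(i, j)] + alpha.get((i, j), 0.0)
    return rhs
```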
The locally informed quadratic scheduler provides a practically implementable alternative to the globally informed policy while still retaining good enough performance. We remark once again that, while the centralized fully informed method came from exact calculations, this scheduler was modified and is thus partially heuristic. Therefore, while the decisions taken by the fully informed scheduler were optimal, the ones taken by the localized one are not: one of the tasks of performance analysis is to characterize this margin of suboptimality in order to gauge how close a distributed scheduler can get to its centralized, idealistic variant.
### _Max Weight Scheduling_
The quadratic policies that have been detailed in the previous section are valid solutions to the scheduling problem in quantum networks. However, situations might arise in which computational complexity is a stricter constraint than network performance. To accommodate such cases, we present in this section a class of policies that perform almost as well as the quadratic ones, for a fraction of the computational cost.
Looking at the policies presented until now, we notice two interesting points:
* The objective function features a linear term that depends on queue length plus a quadratic penalty that does not;
* The linear terms are reminiscent of the objective function for the Max Weight [26] policy, an extremely well-established result of classical network theory.
It is therefore natural to propose a class of semi-heuristic scheduling policies derived by taking our quadratic objectives and suppressing the quadratic penalty, which does not depend on the queue backlog. For brevity, we explicitly formulate only the fully informed variant of the Max Weight scheduler. The partial and local information quadratic schedulers can be turned to their linear variants following the same steps. The fully informed Max Weight problem is obtained by simply suppressing the quadratic term from (14):
\[\begin{cases}\min\mathbf{w}^{T}(t)\cdot\mathbf{r}(t)\\ s.t.\ \mathbf{r}(t)\in\mathcal{R}(t),\end{cases} \tag{21}\]
and solving it with the same constraints as (15). The partial and local information policies may be constructed in the same way, by suppressing the quadratic term from (14) and using the constraint sets (17) and (20) respectively. The performance analysis for the globally, partially and locally informed linear schedulers is provided in section V-B.
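As an illustration of how lightweight the linear variant is, the fully informed Max Weight problem (21) can be handed to an off-the-shelf mixed-integer solver. The sketch below uses SciPy's `milp` (available from SciPy 1.9 onwards) purely as an example, whereas the simulator relies on gurobi; `q_avail` and `d_avail` have the same meaning as in the previous sketch.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def max_weight_schedule(q_avail, d_avail, M_tilde, N_tilde):
    """Sketch of the fully informed Max Weight problem (21): minimize w^T r over
    non-negative integer decisions subject to the feasibility bounds (15)."""
    w = d_avail @ N_tilde                                     # w(t) = (d(t) + b(t))^T N~
    dim = M_tilde.shape[1]
    constraints = [LinearConstraint(-M_tilde, ub=q_avail),    # eq. (5)
                   LinearConstraint(-N_tilde, ub=d_avail)]    # eq. (6)
    res = milp(c=w, constraints=constraints,
               integrality=np.ones(dim), bounds=Bounds(0, np.inf))
    if not res.success:
        return np.zeros(dim, dtype=int)                       # fall back to the idle decision
    return np.round(res.x).astype(int)
```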
## V Numerical Analysis
In this section, we give an overview of how our simulation tool works and then provide results for the numerical analysis of all the proposed schedulers.
### _Simulator Architecture_
All the results shown in this work were obtained through an ad-hoc simulator implemented in Python, relying on the gurobi [39] solver for the optimization calculations and networkx [40] as a graph backend. In the following, we provide a quick breakdown of how our simulator works, from the point of view of a user that is not necessarily experienced with writing code. Interested readers may find more information on the simulator's GitHub repository [41].
From a black-box perspective, the focus of the code design phase of our work was on an object-oriented model of the network system that is as modular and layered as possible. The motivation driving this approach was that an ideal version of the controlling code should be abstract enough not to be aware of whether it is driving our model, another more refined simulator or even a real network. In the following, we give a brief rundown of the kind of parameters that a user of our framework and simulator may expect to tune.
The simulator's input files are composed of two sets of ingredients for the user to provide: the first set of parameters is devoted to the generation of the network topology, the choice of service pairs and demand rates. Users are free to choose one of the topologies we propose in this work (with tunable parameters) or provide an entirely custom network graph.
After selecting the topology, the user selects the set of scheduling policies that the simulator will analyze. As before, it is possible to select one of the policies we analyzed here
or provide a custom one. The code provides seamless access to all the information we used in our policies through simple specification of an "information availability" parameter.
The second set of input values is related to physics and low-level simulation parameters, enabling fine-tuning of generation rates across physical links and losses at nodes, but also number and duration of the time steps.
A set of parameters related to the optimization of the simulator's performance concludes the user inputs for our code. A discussion of these parameters is out of the scope of this paper as they are only relevant to raw computational performance, but can be found in the full code documentation of the simulator on GitHub [41].
### _Results_
To avoid excessively prolonging this section, we show in fig. 4 that the quadratic schedulers provide a negligible, if any, increase in performance at the cost of a major increase in computational complexity (quadratic optimization calculations are much more taxing than linear ones). They were therefore omitted from the complete discussion of numerical results, that only shows the greedy scheduler and the three linear ones.
The main goal of the following analysis is to showcase how the proposed scheduling policies affect the performance of quantum networks of various topologies, both deterministic and randomly generated. The topologies on which our analysis was run, shown in fig. 5, are a complete 5x5 grid, a 6x6 grid with some randomly removed nodes, and two realizations of the Watts-Strogatz [42] and Erdos-Renyi [43] models of 25 nodes each.
Since our \(\mathbf{M}\) matrix is built from the static routes that connect the service pairs, building a nontrivial example requires more than two routes. To obtain such an example, we increase the number of users we consider: for each topology, we run our simulation with ten user pairs, of which two are manually fixed (red and blue in fig. 5) and eight are randomly selected at the beginning of each simulation run to mimic different traffic configurations (green in fig. 5). Every user pair is connected, when possible, by two semi-distinct routes. Since routing is outside the scope of this work, we simply take the shortest path connecting each user pair, remove the edges that compose it with a given tunable probability, and then take the shortest path in the newly obtained graph as a second route, under the assumption that in a real application scenario users will provide sensibly computed static routes.
We sweep the demand rate of the two manually selected pairs, while fixing the random ones to a constant load value \(L\), and then average together the results of ten runs to remove the bias that one particular parasitic pairs set may entail.
Fig. 6 provides a showcase of all the results that we obtain from our simulation: given the complete 5x5 grid topology shown in fig. 5a and the fully informed Max Weight scheduler, we select the four corners of the grid as the two main user pairs, randomize the parasitic pairs and run the simulation, displaying all outputs.
Since tracing the capacity region of a network requires gauging its stability, we rely on fig. 6 as an aid to clarify our definition of this crucial concept. In the context of dynamical systems, stability may be defined in several different ways, depending on the amount of mathematical rigor required. The established definition states that a system of queues is stable if the time it takes for the cumulative queue length to return to zero is finite on average (i.e. the queues keep returning to zero, regardless of how far they stray). Of course, such a definition loses meaning in a finite-time context, because there is no way to tell whether a system would return to zero if left running for a longer wall time, even though it looks unstable over a finite time window. However, arguments can be made to justify the usage of such a notion in a context such as ours. First of all, it is safe to say that a queue whose length is constantly zero is stable (this is apparent from fig. 6, plot in the \((0,0)\) cell, which depicts the temporal trend of the total demand, with all demand rates set to zero). Secondly, we may state that a queue that has Poissonian arrivals and is never depleted will accumulate in a roughly linear fashion, and it will surely be unstable. Thirdly, we claim that the stability front of a network system is a Pareto boundary: if a given load \(L=(l_{1},l_{2},...l_{i},...,l_{n})\) cannot be served by the network and is therefore outside its stability region, then all higher loads \(L^{\prime}=(l_{1},l_{2},...l_{i}^{\prime},...,l_{n})\) such that \(l_{i}^{\prime}>l_{i}\) are unstable (fig. 6, upper-right cluster of linear plots, depicting total demand in a high-load scenario).
These considerations make a finite-time simulation slightly more insightful: if the queue length returns to zero several times during the simulation window, the system is likely stable. If the system shows a clear linear trend, there is high possibility that it is not. If a cluster of points all show a linear trend, the possibility of instability further increases.
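A minimal sketch of this finite-time heuristic is given below; it is our own illustrative code, not part of the simulator, and the thresholds are arbitrary assumptions. It counts how often the cumulative backlog returns to zero and fits a linear trend to flag likely instability.

```python
import numpy as np

def classify_backlog(backlog, slope_tol=1e-3, min_returns=3):
    """Heuristic finite-time stability check on a cumulative backlog time series."""
    backlog = np.asarray(backlog, dtype=float)
    returns_to_zero = int(np.sum(backlog == 0))

    # Least-squares linear trend of the backlog over time steps.
    t = np.arange(len(backlog))
    slope = np.polyfit(t, backlog, 1)[0]

    if returns_to_zero >= min_returns:
        return "likely stable"       # queue keeps emptying within the window
    if slope > slope_tol:
        return "likely unstable"     # clear linear growth of the backlog
    return "inconclusive"

# Toy example: a linearly growing backlog is flagged as likely unstable.
print(classify_backlog(np.arange(0, 100, 0.5)))
```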
Moreover, to conform with standard practice in the classical network field, we also include as a performance metric the average demand queue length, plotted as a colormap in the background of fig. 6's cells. This is the metric on which we focus for the rest of the analysis, since it yields a more easily legible graph of the stability of a load point and is therefore more suitable for high-resolution plots and/or comparison of a large number of results. Another reason why we choose to employ the average queue length as a color map is that it provides a visual approximation of the capacity region of the network we are considering.
To give a sense of scale, we complement our outputs with the maximum excursion of the cumulative demand backlog, shown in the top-left corner of every cell.
Running our analysis over all topologies and schedulers and displaying the average demand backlog, we obtain four arrays of plots that show the performance of our network as a function of the information granted to the scheduling policy (Greedy to Fully Informed Global) and the load on the parasitic pairs, shown in fig. 7. From these arrays of plots, insight on several levels may be obtained.
Firstly, looking at all the plots for any given topology, we observe that changing the scheduler entails a radical change in the capacity region of a quantum network, providing proof that not only is the scheduling problem an interesting one to formulate in the context of quantum networking, but its solution also brings non-negligible performance margins to the operation of a quantum network. Another piece of information that may be gathered resides in the shape of the stability region: when the dark region is not shaped like a rectangle, it means that the two plotted pairs are in direct competition, as increasing demand along one of the axes reduces the amount of demand that can be served along the other one. To an end user employing our tool for network design, this would mean that the network is bottlenecked by routing, since there is a set of routes across which the scheduler must balance service to two or more competing commodities.
Another point that can be made from these results comes from looking at the difference between the fully informed global scheduler and the local ones: as mentioned before, the fully informed Max Weight scheduler can be interpreted as a performance upper bound for a Max Weight policy. Therefore, when designing an original scheduler, one may gauge its performance by comparing stability regions with the fully informed scheduler. There is a noticeable difference between the fully informed (FI) and locally informed (LI) schedulers, but it may be deemed acceptable because of the information trade-off: the region still has the same shape and, although smaller, is still comparable to the upper bound, meaning that the locally informed policy we are proposing performs very well in this scenario.
Conversely, a rectangular shape (e.g. 7c, Fully Informed scheduler column) is an indicator that the two main pairs we selected are not directly competing over a shared bottleneck. This does not necessarily mean that the network is not congested: traffic from the parasitic pairs is still stressing the network (as demonstrated by the reduction in size of the stability region when going up along the parasitic load axis) and requiring careful scheduling decisions.
## VI Limitations of the framework and future outlook
In this section, we discuss the main limitations and open questions in our model, and propose some seed ideas for future directions. The first limitation to discuss is the modeling of strictly quantum imperfections such as decoherence, which degrade the quality of a quantum state without necessarily meaning the state is lost. While we are well aware of the paramount importance of noise in quantum modeling, the history of the classical Internet shows that a successful large-scale network infrastructure is best thought of in terms of separate functional layers, and a layered architecture has already been proposed for a prospective future quantum internet [44] that effectively separates the Link Layer, where quantum error correction should be implemented, from the Network Layer, which is the scope of our work. While we are aware that in real implementations, especially initial ones, theoretically separate layers leak and blend with each other, the Quantum Internet should eventually converge to a well-defined network stack, making it redundant to treat noise in the same layer as scheduling. Thus, while we remain interested in an expansion of our work that treats quantum imperfections, the lack of explicit state-quality modeling does not make our work irrelevant.
A similar concern could be raised for the memory at the network nodes: despite this being another issue that is very close to hardware, its integration with scheduling policies would seem crucial because it could radically change how a scheduling decision is taken. If a node only has a finite number of memory slots, the scheduler would face the additional constraint of free space (or lack thereof, in some cases having to "waste" ebits in order to free up memory). As a matter of fact, a similar problem has been analysed over a single switch in [22] and [29], showing that an isolated quantum switch needs only a modest amount of memory (on the order of 5 slots) to achieve performance comparable to that of a switch with unlimited memory slots, making the memory problem less concerning. Moreover, [45] formulates the problem of exploiting limited memory slots and develops a Max-Weight memory allocation policy for quantum nodes that could be adapted to our scenario.
Furthermore, it is possible to look at the memory problem from a different direction: while a solution inside our framework could in principle be to add compound constraints to the optimization problems, we stress that results such as fig. 6 (maximal excursion numbers) gauge the accumulation of total demand in a stable network, effectively providing an upper bound for memory requirements in the design of a real quantum network system.
The third limitation of our work is how the framework scales: The fact that the number of queues we need to account for grows quadratically with the number of nodes in the network entails quick growth of the **M** matrix, which makes the integer programs required by several policies presented here increasingly complex. While this is not as much of a problem currently as it was in the past decades, it is still an issue that is worth closely investigating, perhaps to find sub-optimal scheduling strategies that require only a subset of the extended edge set (akin to an overlay network, as demonstrated in [20]). We note here that easing scaling concerns would also enable a future extension of our framework to multipartite entanglement: as mentioned in the beginning, an extension in this direction would require the definition of new multipartite virtual queues, together with ad-hoc transitions that interface them with the bipartite ones, greatly increasing the overall number of queues and therefore the problem's complexity.
Finally, it would be interesting to delve into other physical imperfections, such as finite speed of communication between nodes, which entail a stricter definition of what information is local and accessible to a node at a given time. One interesting implication of such analysis would be the case in which only one of the qubits in an ebit is lost, and what happens if the loss is not communicated before other swapping operations are undertaken, i.e. the error propagates
along the swapping route. All these considerations would require a more refined physical model, which would in turn imply revisions to our mathematical framework, but should not be excessively difficult to include in the numerical part of our discussion: the simulator code was written from the ground up in order to provide a simpler and more agile contribution, but it was designed with particular attention to keeping a layered and modular structure that should be reasonably adaptable to well-established quantum network simulation packages such as NetSquid [46] or QuISP [47].
## VII Conclusions
In this work, we presented a general framework that allows one to formulate and solve the scheduling problem in general, lossy, memory-endowed quantum networks in a dynamical way. We then integrated our framework with Lyapunov Drift Minimization in order to mathematically derive an optimal quadratic scheduling policy for quantum networks and proposed several other suboptimal policies with various advantages. Finally, we showcased how our framework may be exploited by people interested in policy design to benchmark and fine-tune a general quantum network's performance under arbitrary scheduling policies. Despite the sizable amount of work that still needs to be tackled before a collective quantum network science exists, the promising results we presented could eventually become one of many assets in the quest for the Quantum Internet.
|
2307.01766 | The Quantum Advantage in Binary Teams and the Coordination Dilemma: Part
II | In our previous work, we have shown that the use of a quantum architecture in
decentralised control allows access to a larger space of control strategies
beyond what is classically implementable through common randomness, and can
lead to an improvement in the cost -- a phenomenon we called the quantum
advantage. In the previous part of this two part series, we showed, however,
that not all decision problems admit such an advantage. We identified a
decision-theoretic property of the cost called the `coordination dilemma' as a
necessary condition for the quantum advantage to manifest. In this article, we
investigate the impact on the quantum advantage of a scalar parameter that
captures the extent of the coordination dilemma. We show that this parameter
can be bounded within an open interval for the quantum advantage to exist, and
for some classes, we precisely identify this range of values. This range is
found to be determined by the information of the agents. | Shashank A. Deshpande, Ankur A. Kulkarni | 2023-07-04T15:10:04Z | http://arxiv.org/abs/2307.01766v1 | # The Quantum Advantage in Binary Teams and the Coordination Dilemma: Part II
###### Abstract
In our previous work [1], we have shown that the use of a quantum architecture in decentralised control allows access to a larger space of control strategies beyond what is classically implementable through common randomness, and can lead to an improvement in the cost - a phenomenon we called the quantum advantage. In the previous part of this two part series, we showed, however, that not all decision problems admit such an advantage. We identified a decision-theoretic property of the cost called the 'coordination dilemma' as a necessary condition for the quantum advantage to manifest. In this article, we investigate the impact on the quantum advantage of a scalar parameter that captures the extent of the coordination dilemma. We show that this parameter can be bounded within an open interval for the quantum advantage to exist, and for some classes, we precisely identify this range of values. This range is found to be determined by the information of the agents.
## I Introduction
While common randomness is a classically implementable means of generating strategic distributions in decentralised control, it does not provide access to all strategies allowed within the information structure of a problem [2]. We have shown that there exist implementable quantum strategies that leverage quantum entanglement to access some of the gap between what is informationally allowed and what is classically implementable [1]. In particular, we formulated _quantum_ strategies for static teams of decentralised agents, where the agents' actions are conditioned on outcomes of strategically performed measurements on a shared quantum system, in addition to their local information. This allows access to correlations beyond the locally correlated ones and thus offers a strict cost improvement over a classical decision set-up. For problems with static information structure, we then showed that finding the quantum optimum requires an optimisation of a linear objective over an abstractly specified convex, non-compact set of quantum strategies.
Our idea indeed originates from the fact that the phenomenon of entanglement in quantum mechanics allows for non-local correlations in systemic behaviour among spatially decentralised systems. The non-locality of these correlations is beyond what is achievable with classical systems. We refer the reader to [3, 4] and [5] for more about non-local quantum correlations and their applications to information processing.
Entanglement as of today, is a powerful but an expensive technological resource. It is thus important to isolate a class of decision problems that admit the quantum advantage. This motivates our decision-theoretic inquiry of the origins of the quantum advantage in static teams. In the first part [6] of this two part series of articles, we establish a decision-theoretic structure called the _coordination dilemma_ as central to the manifestation of the quantum advantage in a binary team superstructure. Our results thereby specify a sharp decision theoretic boundary, problems outside which do not admit a quantum advantage. In this article, we investigate within this boundary to further shave-off such problems, and thus provide a bounding set containing problems with the quantum advantage within our superstructure.
The extent of the coordination-dilemma in a problem instance is captured by a scalar parameter \(\chi\) in our superstructure. In our first result, we identify a tight interval of values of \(\chi\) corresponding to instances that admit the no-signalling advantage. These intervals consequently contain the entire set of instances that admit the quantum advantage, since all quantum strategies respect the no-signalling constraints.
We also show that for the problems in our superstructure all quantum strategies can be implemented using entangled systems with one qubit per decision maker. This allows us to specify the set of quantum strategies parametrically. We use this parametric set-up to investigate the quantum advantage within a specific class of problems where both the agents record equally informative observations of the natural state. We specify a tight interval of values of \(\chi\) which corresponds to all instances within this class that admit a quantum advantage. We find that this interval is a strict subset of the one that admits the no-signalling advantage, and thus the presence of no-signalling advantage within a problem is an insufficient condition for the quantum advantage to exist.
Our scrutiny of a binary team superstructure in this two part series of articles thus presents some analytical insight into the association of the quantum advantage in a control problem with its decision-theoretic character.
### _Organization_
This article is organised as follows. In Section II, we briefly recall the team superstructure we work with from Part 1, and introduce the questions we investigate here. In Section III, we show the sufficiency of strategies on entangled qubits and develop a parametric characterisation for the reduced strategic space. In Section IV, we isolate a bounded set of problem instances using the no-signalling relaxations. In Section V, we perform an in-depth analysis of problem instances within a problem class and deliver tight bounds on the extent of the necessary _coordination dilemma_ for quantum advantage to appear in this class. |
2310.02878 | A Database of Magnetic and Thermodynamic Properties of Confined And
Eruptive Solar Flares | Solar flares sometimes lead to coronal mass ejections that directly affect
the Earth's environment. However, a large fraction of flares, including on
solar-type stars, are confined flares. What are the differences in physical
properties between confined and eruptive flares? For the first time, we
quantify thermodynamic and magnetic properties of hundreds of confined and
eruptive flares of GOES class C5.0 and above, 480 flares total. We first
analyze large flares of GOES class M1.0 and above observed by the Solar
Dynamics Observatory (SDO): 216 flares total, including 103 eruptive and 113
confined flares, from 2010 until 2016 April, we then look at the entire dataset
above C5.0 of 480 flares. We compare GOES X-ray thermodynamic flare properties,
including peak temperature and emission measure, and active-region and
flare-ribbon magnetic field properties, including reconnected magnetic flux and
peak reconnection rate. We find that for fixed peak X-ray flux, confined and
eruptive flares have similar reconnection fluxes; however, for fixed peak X-ray
flux confined flares have on average larger peak magnetic reconnection rates,
are more compact, and occur in larger active regions than eruptive flares.
These findings suggest that confined flares are caused by reconnection between
more compact, stronger, lower lying magnetic-fields in larger active regions
that reorganizes smaller fraction of these regions' fields. This reconnection
proceeds at faster rates and ends earlier, potentially leading to more
efficient flare particle acceleration in confined flares. | Maria D. Kazachenko | 2023-10-04T15:13:42Z | http://arxiv.org/abs/2310.02878v3 | # A Database of Magnetic and Thermodynamic Properties of Confined And Eruptive Solar Flares
###### Abstract
Solar flares sometimes lead to coronal mass ejections that directly affect the Earth's environment. However, a large fraction of flares, including on solar-type stars, are confined flares. What are the differences in physical properties between confined and eruptive flares? For the first time, we quantify thermodynamic and magnetic properties of hundreds of confined and eruptive flares of GOES class C5.0 and above, 480 flares total. We first analyze large flares of GOES class M1.0 and above observed by the Solar Dynamics Observatory (SDO): 216 flares total, including 103 eruptive and 113 confined flares, from 2010 until 2016 April, we then look at the entire dataset above C5.0 of 480 flares. We compare GOES X-ray thermodynamic flare properties, including peak temperature and emission measure, and active-region and flare-ribbon magnetic field properties, including reconnected magnetic flux and peak reconnection rate. We find that for fixed peak X-ray flux, confined and eruptive flares have similar reconnection fluxes; however, for fixed peak X-ray flux confined flares have on average larger peak magnetic reconnection rates, are more compact, and occur in larger active regions than eruptive flares. These findings suggest that confined flares are caused by reconnection between more compact, stronger, lower lying magnetic-fields in larger active regions that reorganizes smaller fraction of these regions' fields. This reconnection proceeds at faster rates and ends earlier, potentially leading to more efficient flare particle acceleration in confined flares.
Sun: flares - Sun: magnetic fields - Sun: ARs
## 1 Introduction
Flares are classified into eruptive and confined types according to their association with coronal mass ejections (CMEs, Moore et al., 2001; Wang and Zhang, 2007; Thalmann et al., 2015; Temmer, 2021; Afanasyev et al., 2023). In an eruptive flare, plasma is ejected into interplanetary space and is later observed as a CME in white-light coronagraph images, although its visibility might greatly depend on flare intensity (Yashiro et al., 2006). In a non-eruptive (or confined or compact) flare, plasma falls back to the Sun and there is no CME. On the Sun both types of flares are frequent: for example, from 2010 to 2019 only half of large solar flares of GOES class M1.0 and above flares were eruptive and led to CMEs; another half were confined flares (Li et al., 2020). On solar-type cool stars, up to now there have been only 40 CME detections (Moschou et al., 2019; Veronig et al., 2021; Namekata et al., 2022).
While comparison of confined and eruptive flares on solar-like stars is difficult due to lack of spatial resolution, on the Sun we can measure both magnetic and thermodynamics properties of flares in high detail (see Figure 1). With the launch of the Solar Dynamics Observatory in 2010, the number of studies comparing properties of eruptive and confined flares surged. For example, Cheng et al. (2011) analyzed 9 M- and X-class flares all from the same active region (AR) to understand magnetic and thermodynamic properties of 6 confined and 3 eruptive events. Harra et al. (2016) analyzed properties of 9 confined and 33 eruptive flares, 42 X-class flares total. Hinterreiter et al. (2018) and Tschernitz et al. (2018) analyzed 50 flares, 19 eruptive and 31 confined events, C-class and above. Toriumi et al. (2017)
surveyed 51 flares, 32 eruptive and 19 confined events, M5.0-class and above. Finally, Li et al. (2020) analyzed 322 flares, 170 eruptive and 152 confined events, M1.0-class and above, followed by the largest-to-date study, by Li et al. (2021), of 719 flares, 251 eruptive and 468 confined events, C5.0-class and above.
Two physical concepts are typically used to describe whether a flare would be eruptive or confined. The first factor describes the degree of AR non-potentiality, i.e. magnetic free energy, twist, helicity, etc. The second factor describes the constraining effect of the strapping or overlying field: its strength and decay rate with height. The torus instability occurs once the decay index of \(n=-\frac{\partial\ln B_{\rm p}}{\partial\ln z}\) for poloidal field \(B_{\rm p}\) reaches a critical eruptive value of \(n_{\rm cr}\) at a critical height \(h_{\rm cr}\). In this case the poloidal component refers to the component of the strapping field perpendicular to the axis of the toroidal flux rope. For critical values, a range within 0.5 to 2 is typically used, depending on the magnetic configuration (see e.g., Torok and Kliem, 2005; Kliem and Torok, 2006; Wang and Zhang, 2007; Myers et al., 2015; Sun et al., 2015, 2022; Hassanin and Kliem, 2016; Hassanin et al., 2022 and references therein). In particular, a value of \(n_{c}=1.5\) for a thin, axisymmetric torus (Bateman, 1978) is widely used in observational studies. As larger total unsigned AR magnetic flux leads to increased critical decay-index height \(h_{\rm cr}\), ARs with large magnetic flux have confined eruptions due to strong magnetic cages (e.g., Amari et al., 2018; Li et al., 2021).
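As a worked illustration of the torus-instability criterion, the short sketch below evaluates the decay index \(n(h)=-d\ln B_{\rm p}/d\ln h\) (the height coordinate is written as h here) on a height grid and locates the first height where it exceeds \(n_{\rm cr}=1.5\). The power-law strapping-field profile, its amplitude and the parameters h0 and q are made-up numbers for illustration only, not values from any cited study.

```python
import numpy as np

def decay_index(height, b_poloidal):
    """n(h) = -d ln B_p / d ln h, evaluated numerically on a height grid."""
    return -np.gradient(np.log(b_poloidal), np.log(height))

# Assumed strapping-field profile B_p(h) ~ (h + h0)^(-q); purely illustrative.
h = np.linspace(1.0, 100.0, 500)          # height above the photosphere [Mm]
b_p = 300.0 * (h + 20.0) ** -2.0          # poloidal (strapping) field [G]

n = decay_index(h, b_p)
n_cr = 1.5
h_cr = h[np.argmax(n > n_cr)]             # first height where n exceeds n_cr
print(f"critical height h_cr ~ {h_cr:.1f} Mm")
```

For this assumed profile the decay index grows monotonically with height and crosses the critical value near 60 Mm, illustrating how a stronger or more slowly decaying strapping field pushes the critical height upward and favors confinement.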
Below we review some magnetic field properties quantified in confined and eruptive events from observations. For more details on recent statistical studies of flare magnetic properties, we recommend reading our recent review paper (Kazachenko et al., 2022).
The main magnetic-field properties typically used to distinguish confined from eruptive events include total unsigned magnetic fluxes of flare-hosting ARs, \(\Phi_{\rm AR}\)(Li et al., 2021), fractions of the active-region magnetic flux or area swept by flare ribbons, \(\Phi_{\rm rbn}\) and \(S_{\rm rbn}\), relative to the total AR flux or area, \(R_{\Phi}=\Phi_{\rm rbn}/\Phi_{\rm AR}\) and \(R_{S}=S_{\rm rbn}/S_{\rm AR}\), respectively (Thalmann et al., 2015; Masson et al., 2017; Tschernitz et al., 2018; Toriumi et al., 2017; Li et al., 2020; Kazachenko et al., 2022), and the ratio of the twist near the flaring polarity inversion line (PIL) to the AR magnetic flux, \(\alpha_{\rm PIL}/\Phi_{\rm AR}\)(relative non-potentiality, Li et al., 2022). For a given flare class, eruptive flares involve larger fraction of AR magnetic flux and area, tend to occur in smaller ARs and have a larger ratio of the twist near the PIL to the AR flux. Using a sample of 43 eruptive and 63 confined flares, Li et al. (2022) found that the relative non-potentiality parameter of \(\alpha_{\rm PIL}/\Phi_{\rm AR}>2.2\times 10^{-24}Mm^{-1}Mx^{-1}\) distinguishes about 93% of eruptive events. On the other hand, confined flares have smaller fractions of AR magnetic flux and area, tend to occur in larger ARs and have a smaller ratio of the twist near the PIL to the AR flux. Again, in the work by Li et al. (2022) the above relative non-potentiality of \(\alpha_{\rm PIL}/\Phi_{\rm AR}<2.2\times 10^{-24}Mm^{-1}Mx^{-1}\) distinguishes about 83% of confined events. In addition, from analysis of photospheric vector magnetic fields, Liu et al. (2017); Vemareddy (2019); Avallone and Sun (2020), and Kazachenko et al. (2022) found that confined flares tend to occur in ARs that are more current-neutralized, and have a smaller amount of magnetic shear at the PIL. As shear and net current are proportional to each other through Ampere's law, smaller shear and net current in confined flares both support the same physical concept of smaller magnetic-field non-potentiality at the PIL (Kazachenko et al., 2022). To summarize, existing analyses of magnetic field observations fit within the above scenario that smaller non-potentiality and stronger overlying fields in confined flares constrain plasma from escaping. In terms of magnetic topology of pre-flare magnetic field, both confined and eruptive flares
Figure 1: X1.2-class eruptive solar flare observed by the Solar Dynamics Observatory in 1600Å and 304Å wavelengths. Bright ribbons mark the footpoints of reconnected fields lines that provide indirect measurement of the amount of reconnecting magnetic flux in the flare and the reconnection rate.
could originate from a sheared arcade or a flux rope (Masson et al., 2017) with one or several confined flares preceding an eruption (Kliem et al., 2021; Hassanin et al., 2022).
Numerous studies found that proximity of the flaring part of the AR to the AR boundary and the open field affect the flare eruptivity. For example, Tschernitz et al. (2018) found that for a given X-ray flux, confined events tend to have larger mean magnetic-flux density, implying that they tend to occur closer to the center of ARs where fields are stronger. Cheng et al. (2011) came to a similar conclusion, finding that confined flares occur closer to AR centers than eruptive flares, in agreement with Wang and Zhang (2007). Using non-linear force-free field extrapolation, they also found that in the low corona (\(\approx 10\) Mm), eruptive flares have a higher decay index of the transverse magnetic field than confined flares. In addition, the strength of the transverse magnetic field over the eruptive flare sites is weaker than it is over the confined ones in agreement with e.g., Cui et al. (2018); Vasantharaju et al. (2018). Finally, Derosa and Barnes (2018) analyzed the PFSS global magnetic fields. Using a sample of 50 eruptive and 6 confined flares, they found that confined events have slightly less access to open flux than eruptive events. However this difference is not large: the rate at which X-class flares are eruptive is 0.97 (30 out of 31 flares) and 0.8 (20/25) for events with and without access to open flux, respectively. This result agrees with Baumgartner et al. (2018). In addition, an active region could become more prone to host a confined flare when the orientation between its local and overlying magnetic flux systems becomes less antiparallel (Zuccarello et al., 2017).
Among _other properties_ differing in confined and eruptive flares, average ribbon-separation distances and ribbon-peak separation speeds tend to be smaller for confined flares (Kurokawa, 1989; Su et al., 2007; Veronig and Polanec, 2015; Thalmann et al., 2015; Hinterreiter et al., 2018). One explanation for this could be strong overlying fields in confined events which prevent reconnecting current sheet from moving upwards. As ribbon separation speed is a proxy for a localized reconnection rate, results by Hinterreiter et al. (2018); Thalmann et al. (2015) imply that eruptive flares might have higher local reconnection rate, however the authors of these papers do not explicitly state this. Note also, that while Hinterreiter et al. (2018) analyzed a large number of flares of different flare classes (50 flares, 31 confined and 19 eruptive), ranging from B- to X-class, their distributions of GOES peak X-ray fluxes of confined and eruptive flare samples were different, with the confined flare sample being significantly smaller than eruptive sample.
While a large number of statistical studies compared confined and eruptive flares' magnetic properties, few have analyzed thermodynamic properties and energy partition of confined flares and even fewer have compared those with eruptive events. Kahler and Ling (2022) analyzed GOES peak temperatures in hundreds of flares, of flare class M3.0 and above, and found, that for a given peak X-ray flux, confined flares have higher temperatures than eruptive flares, confirming earlier results of Kay et al. (2003) study of 69 flares. Cheng et al. (2011) analyzed 9 M- and X-class flares from the same ARs: 6 confined and 3 eruptive events. They found that confined flares have more impulsive soft X-ray time profiles, while eruptive flare have longer durations and extend over larger areas in EUV 195 A images. Harra et al. (2016) analyzed 42 flares, X-class and above: 9 confined and 33 eruptive flares. For their sample they found no statistical differences in thermal and magnetic properties between confined and eruptive flares. Tan et al. (2021) analyzed radio and X-ray data from GOES, RHESSI, and Fermi/GBM from 10 mostly confined flares in AR 12192. They found that the magnetic field in confined flares tends to be stronger than that in 412 eruptive flares studied by Nita et al. 2004. They also found that confined flares are efficient particle accelerators, however the energies to which electrons are accelerated in confined flares are lower (and do not produce HXR above 300 keV, indicating the lack of efficient acceleration of electrons to high energies), in agreement with Thalmann et al. (2015). Cai et al. (2021) analyzed energy partition in 4 confined flares. They found that the ratio of non-thermal energies to magnetic energies is significantly larger for confined than for eruptive flares, ranging within \(E_{\rm nth}/E_{\rm mag}\approx 0.7-0.76\), in agreement with case study by Thalmann et al. (2015). This implies that larger fraction of free energy is converted into kinetic energy of flare-accelerated electrons in confined flares, while a larger fraction of free energy is converted into other forms of energy, such as kinetic energy of particles, thermal energy, and CME kinetic energy, in eruptive flares (Emslie et al., 2005; Reeves et al., 2010; Emslie et al., 2012; Warmuth and Mann, 2016; Aschwanden et al., 2015; Aschwanden et al., 2017). Given that CME kinetic energy is the largest fraction in the eruptive-flare energy partition, it is not surprising that eruptive flares have smaller energy fractions in accelerated particles, compared with confined flares. Finally, Qiu and Cheng (2022) analyzed three eruptive (M2.0, M6.9 and M1.5) and three confined (C8.2, C8.7 and C9.8) flares. They found that in eruptive flares non-thermal HXR emission lags the EUV emission from flare ribbons, suggesting that eruptive flares have a gradual warm-up phase with lower non-thermal energy release efficiency. They also found that, on average, confined flares exhibit stronger magnetic shear and high-temperature component (up to 25 MK for up to one minute) at the onset, which is not present in eruptive flares (in agreement with case study by Veronig and Polanec, 2015).
2015). Note, however, that eruptive flares in the study by Qiu & Cheng (2022) had larger flare classes than confined events; ideally, flare samples of similar flare classes should be compared. To summarize, confined flares are efficient particle accelerators located lower in the corona in agreement with earlier studies (e.g., Klein et al., 2010), with only a small fraction of particles accelerated to extremely high energies.
While most statistical papers focused on magnetic properties of eruptive/confined flares and few analyzed their thermodynamic properties, in this paper _our goal_ is to fill this gap to create a comprehensive survey of both thermodynamic and magnetic properties of eruptive and confined flares on the Sun for a large, balanced sample. For this, we analyze properties of all flares of GOES class C5.0 and above observed by the SDO from 2010 to 2016 within \(45^{\circ}\) from the central meridian. To minimize errors in the reconnection fluxes we restrict our analysis to events with reconnection flux imbalance between positive and negative polarities \(\Phi_{\rm ribbon,imb}<20\%\). As a result we select 480 events total: 152 eruptive and 328 confined flares of GOES class C5.0 and above. Since the number of confined and eruptive flares in this sample is highly unbalanced, we first analyze a more balanced subsample of large flares of GOES class M1.0 and above, 216 events total: 103 eruptive and 113 confined flares. We then present an analysis for the whole dataset, coming to similar physical conclusions (see Appendix A). We analyze the following flare properties: AR magnetic fluxes, magnetic-reconnection fluxes and their peak rates, flare thermodynamic properties (temperature, emission measure) and flare durations.
This paper is organized as follows. In Section 2, we describe the four datasets that we use to assemble our final dataset, the variables and the procedures to evaluate uncertainties. In Section 3 we discuss our results for 216 flares of GOES class M1.0 and above. For comparison, in Appendix A we present results for a larger sample of 480 flares that includes smaller flares above C5.0. Finally, in Section 4, we summarize our conclusions for both flare samples.
## 2 Data and Methodology
Table 1 shows a list of physical properties and their data source that we use for each flare in our analysis. These properties include flare location, GOES peak X-ray flux, GOES time-integrated flux (fluence), GOES start, peak, end times and durations; GOES peak temperature and emission measure; AR area and unsigned magnetic flux; unsigned reconnection flux, peak reconnection rate, ribbon area and eruptivity (confined or eruptive) of each flare.
For initial flare detections and estimates of flare reconnection flux, ribbon area and their uncertainties we use RibbonDB dataset1(Kazachenko et al., 2017). RibbonDB database contains 3137 solar flare ribbon events corresponding to every flare of GOES class C1.0 and greater within \(45^{\circ}\) of the central meridian, from 2010 April until 2016 April, observed by the Atmospheric Imaging Assembly (AIA, Lemen et al., 2012) on board the SDO in 1600A (see Eq. 7-11 in Kazachenko et al., 2017). To remind the reader, we sum up flux of positive and negative magnetic polarities within flare ribbons to find the unsigned ribbon reconnection flux (or AR unsigned flux):
Footnote 1: [http://solarmuri.ssl.berkeley.edu/](http://solarmuri.ssl.berkeley.edu/)\(\sim\)kazachenko/RibbonDB/
\[\Phi_{\rm ribbon}=\Phi_{\rm ribbon}^{+}+\Phi_{\rm ribbon}^{-}=\frac{\Phi^{+( \rm I_{6})}+\Phi^{+(\rm I_{10})}}{2}+\frac{\Phi^{-(\rm I_{6})}+\Phi^{-(\rm I _{10})}}{2}=\frac{\Phi^{(\rm I_{6})}+\Phi^{(\rm I_{10})}}{2}, \tag{1}\]
where \(|\Phi_{\rm ribbon}^{+}|\) and \(|\Phi_{\rm ribbon}^{-}|\) are signed reconnection fluxes in each polarity. Above, we take an average between ribbon fluxes in areas with \(c=6\) and \(c=10\) times the median background intensity for the minimum ribbon brightness. We do this to account for the uncertainty in ribbon edge identification due to variable choice of background cutoff (see Equations (5) and (6) in Kazachenko et al., 2017):
\[\Phi_{\rm ribbon}^{\pm(I_{c})}=\int_{I_{c},\,(B_{n}\gtrless\pm 100\,{\rm G})}|B_{n}|\,dS. \tag{2}\]
The error in the unsigned reconnection flux (or the ribbon area) is then
\[\Delta\Phi_{\rm ribbon}=\frac{\Phi^{(\rm I_{6})}-\Phi^{(\rm I_{10})}}{2}. \tag{3}\]
Following Kazachenko et al. (2017), we define percentages of the ribbon-to-AR magnetic fluxes and ribbon-to-AR areas, respectively, as
\[R_{\Phi}=\frac{\Phi_{\rm ribbon}}{\Phi_{\rm AR}}\times 100\%,\qquad R_{S}=\frac{S_{\rm ribbon}}{S_{\rm AR}}\times 100\%. \tag{4}\]
We also find mean magnetic flux density within AR and ribbon areas as \(\bar{B}_{\rm AR}=\frac{\Phi_{\rm AR}}{S_{\rm AR}}\) and \(\bar{B}_{\rm ribbon}=\frac{\Phi_{\rm ribbon}}{S_{\rm ribbon}}\), respectively.
We further expand the original RibbonDB dataset to include peak reconnection flux rate \(\dot{\Phi}_{\rm ribbon,peak}\) and imbalances in reconnection flux and the peak reconnection flux rate, \(\Phi_{\rm ribbon,imb}\) and \(\dot{\Phi}_{\rm ribbon,imb}\). Peak reconnection flux rate here describes the global peak rate of magnetic reconnection flux over the whole active region. The imbalance is the difference between positive and negative magnetic polarities \(\Phi_{\rm imb}=\left|\,\left|\Phi_{+}\right|-\left|\Phi_{-}\right|\,\right|\) and should ideally be equal to zero for an isolated well-observed AR or a set of footpoints brightened by the flare ribbons. Therefore we use this quantity as an additional quality-control measure for our magnetic fluxes. Specifically, we define the imbalance in the reconnection flux as
\[\Phi_{\rm ribbon,imb}=\left|\ \left|\Phi_{\rm ribbon}^{+}\right|-\left|\Phi_{ \rm ribbon}^{-}\right|\ \right|. \tag{5}\]
Similarly, we define imbalance in the peak reconnection flux rate as
\[\dot{\Phi}_{\rm ribbon,imb}=\left|\ \left|\dot{\Phi}_{\rm ribbon}^{+}\right|- \left|\dot{\Phi}_{\rm ribbon}^{-}\right|\ \right|, \tag{6}\]
where positive and negative values are signed peak reconnection rates in each polarity. To minimize errors in the reconnection fluxes we restrict our analysis to events with reconnection flux imbalance between positive and negative polarities \(\Phi_{\rm ribbon,imb}<20\%\).
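To make Equations (1)-(6) concrete, the sketch below computes signed and unsigned ribbon fluxes from a ribbon-intensity image and a co-aligned normal-field magnetogram for the two intensity thresholds, together with the uncertainty and the polarity imbalance. This is our own illustrative code, not the RibbonDB pipeline itself: the array names, the use of the full-map median as the background estimate, and the pixel area are assumptions.

```python
import numpy as np

def ribbon_fluxes(intensity, b_n, pixel_area_cm2, c, b_min=100.0):
    """Signed ribbon fluxes Phi^{+-(I_c)} [Mx] for one intensity threshold c (Eq. 2)."""
    ribbon = intensity > c * np.median(intensity)   # pixels brighter than c x median
    pos = ribbon & (b_n > b_min)
    neg = ribbon & (b_n < -b_min)
    phi_pos = np.sum(np.abs(b_n[pos])) * pixel_area_cm2
    phi_neg = np.sum(np.abs(b_n[neg])) * pixel_area_cm2
    return phi_pos, phi_neg

def reconnection_flux(intensity, b_n, pixel_area_cm2):
    """Unsigned reconnection flux, its uncertainty and polarity imbalance (Eqs. 1, 3, 5)."""
    p6, n6 = ribbon_fluxes(intensity, b_n, pixel_area_cm2, c=6)
    p10, n10 = ribbon_fluxes(intensity, b_n, pixel_area_cm2, c=10)
    phi_plus, phi_minus = 0.5 * (p6 + p10), 0.5 * (n6 + n10)
    phi_ribbon = phi_plus + phi_minus                      # Eq. (1)
    d_phi = 0.5 * abs((p6 + n6) - (p10 + n10))             # Eq. (3)
    imbalance = abs(phi_plus - phi_minus)                  # Eq. (5)
    return phi_ribbon, d_phi, imbalance

# Toy example with random arrays; a real run would use AIA 1600 A and HMI B_n maps.
rng = np.random.default_rng(0)
intensity = rng.exponential(1.0, (256, 256))
b_n = rng.normal(0.0, 300.0, (256, 256))                   # Gauss
print(reconnection_flux(intensity, b_n, pixel_area_cm2=1.3e15))
```

The relative imbalance, imbalance divided by phi_ribbon, is what the 20% quality cut described above would be applied to.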
How suitable are ribbon reconnection flux and peak reconnection rate for description of release of magnetic energy in solar flares? As footpoints of reconnected fields, flare ribbons only identify magnetic fields participating in the flare reconnection. However, they do not directly capture dynamics of arcades adjacent or below reconnected fields that
\begin{table}
\begin{tabular}{l l c c} \hline \hline
**Variable** & **Description** & Units & Data Source Reference \\ \hline \(I_{\rm X,peak}\) & Peak 1-8Å X-ray flux, GOES & [W m\({}^{-2}\)] & Plutino et al. (2023) \\ \(t_{\rm start}\) & Flare start time & [UT] & Plutino et al. (2023) \\ \(t_{\rm peak}\) & Flare peak time & [UT] & Plutino et al. (2023) \\ \(t_{\rm end}\) & Flare end time & [UT] & Plutino et al. (2023) \\ \(\tau=t_{\rm end}-t_{\rm start}\) & Flare duration & [min] & Plutino et al. (2023) \\ \(\tau_{\rm rise}=t_{\rm peak}-t_{\rm start}\) & Flare duration (rise) & [min] & Plutino et al. (2023) \\ \(F_{\rm X}\) & X-ray total flux (fluence) & [W m\({}^{-2}s\)] & Plutino et al. (2023) \\ \(T_{\rm GOES}\) & Flare peak temperature, GOES & [MK] & TEBBS \\ \(EM_{\rm GOES}\) & Flare peak EM, GOES & [\(10^{48}cm^{-3}\)] & TEBBS \\ \(lon,lat\) & Flare longitude and latitude & [deg] & RibbonDB \\ \(AR_{\rm number}\) & AR NOAA number & \(-\) & RibbonDB \\ \(\Phi_{\rm AR}\) & Unsigned AR flux & [Mx] & RibbonDB \\ \(\Phi_{\rm ribbon}\) & Unsigned reconnection flux & [Mx] & RibbonDB \\ \(\Delta\Phi_{\rm ribbon}\) & Uncertainty in \(\Phi_{\rm ribbon}\) & [Mx] & RibbonDB \\ \(\Phi_{\rm ribbon,imb}\) & Imbalance in \(\Phi_{\rm ribbon}\) & [Mx] & RibbonDB \\ \(\dot{\Phi}_{\rm ribbon}\) & Peak reconnection flux rate & [Mx/s] & RibbonDB \\ \(\Delta\dot{\Phi}_{\rm ribbon}\) & Uncertainty in \(\dot{\Phi}_{\rm ribbon,peak}\) & [Mx/s] & RibbonDB \\ \(\dot{\Phi}_{\rm ribbon,imb}\) & Imbalance in \(\dot{\Phi}_{\rm ribbon,peak}\) & [Mx/s] & RibbonDB \\ \(S_{\rm AR}\) & AR area & [cm\({}^{2}\)] & RibbonDB \\ \(S_{\rm ribbon}\) & Ribbon area & [cm\({}^{2}\)] & RibbonDB \\ \(\Delta S_{\rm ribbon}\) & Uncertainty in \(S_{\rm ribbon}\) & [cm\({}^{2}\)] & RibbonDB \\ \(R_{\Phi}\) & Reconnection flux fraction & [\%] & RibbonDB \\ \(R_{S}\) & Ribbon area fraction & [\%] & RibbonDB \\ \(\epsilon\) & Eruptivity: 1 - eruptive, 0 - confined & \(-\) & Li et al. (2021) \\ \hline \hline \end{tabular} \({}^{a}\) [http://solarmuri.ssl.berkeley.edu/](http://solarmuri.ssl.berkeley.edu/)\(\sim\)kazachenko/SolarErupDB/
\end{table}
Table 1: List of variables we analyzed for 216 \(>\)M1.0-class flares and 480 \(>\)C5.0-class flares (see Appendix A) and reference to their data source: RibbonDB (Kazachenko et al., 2017), Plutino et al. (2023), TEBBS (Sadykov et al., 2019) and Li et al. (2021). The data is available online.
evolve as part of the overall relaxation of the coronal field (Hudson, 2000; Wang et al., 2018; Yadav and Kazachenko, 2023). What is the energetic significance of changes in fields that do not directly participate in the reconnection process? From comparison of magnetic energy released in the reconnection process and CMEs' energies (summed kinetic and gravitational potential energies), Zhu et al. (2020) found that for large M- and X-class flares reconnection dominates CME acceleration in fast CMEs, in agreement with earlier works by Longcope et al. (2007); Kazachenko et al. (2012). However, for smaller B- and C-class flares, work done by the reconnection electric field in the current sheet itself might not be enough to fuel the eruption. Furthermore, there is a strong correlation between the reconnection flux and the GOES peak X-ray flux (\(r_{s}=[0.6-0.9]\), Sindhuja and Gopalswamy, 2020; Toriumi et al., 2017; Kazachenko et al., 2017; Tschernitz et al., 2018; Kazachenko et al., 2022). Based on these arguments, we assume that it is reasonable to expect that the total amount of energy released in an event _scales with_ magnetic reconnection flux and reconnection rate, recognizing that these observables capture the key aspects of the relaxation process.
To find the flare GOES peak X-ray flux and fluence, and also flare start, peak and end times, \(I_{\rm X,peak},F_{\rm X},t_{\rm start},t_{\rm peak},t_{\rm end}\), we use a database by Plutino et al. (2023)2. In contrast to Heliophysics Event Catalog (HEC) that we used in RibbonDB catalog to identify flare peak X-ray flux, start, peak and end times, Plutino et al. (2023) catalog uses a method by Aschwanden and Freeland (2012) to identify flares in the GOES SXR light curve. As a result, the catalog of Plutino et al. (2023) has slightly different values for peak X-ray fluxes and also flare times, compared to RibbonDB (see Figure 7 in Plutino et al., 2023). For flare total and rise time durations we use
Footnote 2: [https://github.com/nplutino/FlareList](https://github.com/nplutino/FlareList)
\[\tau=t_{\rm end}-t_{\rm start}\qquad{\rm and}\qquad\tau_{\rm rise}=t_{\rm peak}-t_{\rm start}, \tag{7}\]
respectively. In our analysis, we only used RibbonDB flare peak X-ray fluxes for initial selection of events.
To find the GOES peak temperature \(T_{\rm GOES}\) and emission measure \(EM_{\rm GOES}\), we use Temperature and EM-Based Background Subtraction algorithm (TEBBS) dataset3 from Sadykov et al. (2019), deduced from GOES SXR observations. In the TEBBS dataset Sadykov et al. (2019) apply TEBBS algorithm (Bornmann, 1990; Ryan et al., 2012) to estimate \(T_{\rm GOES}\) and \(EM_{\rm GOES}\) for all solar flares from 2002 January until 2017 December. \(T_{\rm GOES}\) describes the maximum temperature of the dense chromospheric plasma after the flare starts. \(EM_{\rm GOES}\) describes the maximum emission measure that the chromospheric plasma reaches as it first evaporates and then condenses back into the chromosphere. Note that in this approach, as the GOES data includes only two SXR energy channels, both the flare temperature and EM are calculated in a single-temperature approximation, which is a simplification given evidence for multi-temperature structure (Warmuth and Mann, 2016).
Footnote 3: [https://github.com/vasadykov/TEBBS](https://github.com/vasadykov/TEBBS)
In addition, we use GOES peak temperature \(T_{\rm GOES}\) and emission measure \(EM_{\rm GOES}\) to calculate flare thermal energy at the time of peak temperature, \(E_{\rm th,GOES}=3nk_{B}T_{\rm GOES}V\). From the emission measure \(EM_{\rm GOES}\approx n^{2}V\), we find density \(n=\sqrt{\frac{EM_{\rm GOES}}{V}}\), where for the flare loop volume \(V\) we use \(V=(\frac{S_{\rm ribbon}}{2})^{3/2}\). Then the expression for thermal energy becomes (Reep and Knizhnik, 2019)
\[E_{\rm th,GOES}=\frac{3}{8^{1/4}}k_{B}(EM_{\rm GOES})^{1/2}S_{\rm ribbon}^{3/4} T_{\rm GOES}. \tag{8}\]
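As a worked example of Equation (8), the following snippet (illustrative only; the Boltzmann constant is hard-coded in cgs units and the function name is ours) evaluates the peak thermal energy for values typical of the confined-flare sample in Table 2.

```python
K_B = 1.380649e-16  # Boltzmann constant [erg/K]

def thermal_energy_goes(t_peak_mk, em_cm3, s_ribbon_cm2):
    """Peak thermal energy [erg] from Eq. (8): E = 3/8^(1/4) k_B EM^(1/2) S^(3/4) T."""
    t_kelvin = t_peak_mk * 1e6
    return 3.0 / 8.0**0.25 * K_B * em_cm3**0.5 * s_ribbon_cm2**0.75 * t_kelvin

# Typical confined-flare values from Table 2: T ~ 17.5 MK, EM ~ 1.4e48 cm^-3,
# S_ribbon ~ 7.6e18 cm^2, giving E_th of a few 10^29 erg.
print(f"{thermal_energy_goes(17.5, 1.4e48, 7.6e18):.2e} erg")
```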
Finally, to define whether each flare is confined or eruptive, we use the database of Li et al. (2021)4. This database includes 719 flares (251 eruptive and 468 confined) of GOES class C5.0 and greater that occurred within \(45^{\circ}\) of the central meridian, from June 2010 until June 2019. To define flare eruptivity, this dataset uses the CDAW CME catalog5(Yashiro et al., 2004) based on the Solar and Heliospheric Observatory/Large Angle and Spectrometric Coronagraph (SOHO/LASCO, Brueckner et al., 1995) observations, data from the twin Solar Terrestrial Relations Observatory (STEREO) and EUV wave detections from the AIA.
Footnote 4: [https://nadc.china-vo.org/res/r101068/](https://nadc.china-vo.org/res/r101068/)
Footnote 5: [https://cdaw.gsfc.nasa.gov/CME_list/](https://cdaw.gsfc.nasa.gov/CME_list/)
Figure 2 shows our final dataset of 480 flares, 152 eruptive and 328 confined, with GOES class C5.0 and above, that we use for analysis of large (\(>M1.0\)) and all (\(>C5.0\)) flares. The top panel shows the total number of confined and eruptive flares as a function of time; the bottom panel shows their location on the solar disk grouped by ARs and colored by the cumulative GOES peak X-ray class. As the fraction of confined flares increases for smaller GOES peak X-ray class, we intentionally limit our first study to flares above M1.0, which includes a roughly equal number of confined and eruptive flares: 103 eruptive and 113 confined flares. This list corresponds to _all flares of GOES class
M1.0 and above_ observed by SDO from 2010 to 2016 April within 45\({}^{\circ}\) of the central meridian, spanning the majority of solar cycle 24. In this final dataset we determine flare GOES X-ray properties using the dataset of Plutino et al. (2023), flare ribbon properties from the RibbonDB dataset (Kazachenko et al., 2017), flare thermodynamic properties using the TEBBS dataset (Sadykov et al., 2019) and flare eruptivity using the Li et al. (2021) list.
We describe the quantitative relationship between flare and AR properties, e.g., \(\mathbb{X}\) and \(\mathbb{Y}\), using the Spearman ranking correlation coefficient, \(r_{s}(\mathbb{X},\mathbb{Y})\). Specifically for qualitative strength of the correlation \(r_{s}\) we use: \(r_{s}\in[0.2,0.39]\) - weak, \(r_{s}\in[0.4,0.59]\) - moderate, \(r_{s}\in[0.6,0.79]\) - strong, and \(r_{s}\in[0.8,1.0]\) - very strong.
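For reference, this correlation analysis can be reproduced with a few lines of Python; the sketch below uses scipy.stats.spearmanr on synthetic toy data, and the qualitative bins simply encode the thresholds just quoted.

```python
import numpy as np
from scipy.stats import spearmanr

def qualitative_strength(r_s):
    """Map |r_s| onto the qualitative bins used in the text."""
    r = abs(r_s)
    if r >= 0.8:
        return "very strong"
    if r >= 0.6:
        return "strong"
    if r >= 0.4:
        return "moderate"
    if r >= 0.2:
        return "weak"
    return "negligible"

# Toy example: correlation between two synthetic flare properties X and Y.
rng = np.random.default_rng(1)
x = rng.lognormal(size=200)
y = x ** 0.8 * rng.lognormal(sigma=0.5, size=200)
r_s, p_value = spearmanr(x, y)
print(r_s, qualitative_strength(r_s))
```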
### Uncertainties
We recognize that the variables we analyze have uncertainties. For example, identification of eruptive/confined flares might be ambiguous due to observational constraints (see e.g., an \(M4.0\) flare on 2011 September 26 in AR 11302 that was identified as eruptive in the catalog of Li et al. (2021), but confined by Gupta et al. (2021)). As another example, reconnection fluxes in the RibbonDB using 1600 A SDO images are slightly different from reconnection fluxes evaluated using Kanzelhohe Solar Observatory (KSO) ribbon images in \(H_{\alpha}\) and \(CaK\) bands (Sindhuja et al., 2019). While we could not re-examine all events, where possible, we evaluated uncertainties without modifying the main quantities. For unsigned reconnection fluxes \(\Phi_{\rm ribbon}\) (Eq. 1), we evaluated errors based on uncertainty in the median background intensity \(\Delta\Phi_{\rm ribbon}\) (Eq. 3). For unsigned peak reconnection fluxes and their rates, we evaluated error proxies based on the imbalance over positive and negative polarities (Eqs. 5 and 6). Furthermore, we restrict our analysis to events with reconnection flux imbalance of less than 20%. Note that we have not assessed the accuracy of TEBBS estimates for peak temperature and emission measure and confinement/eruptivity detections of Li et al. (2021).
Figure 2: Our dataset of 480 flares (152 eruptive and 328 confined flares), including 103 eruptive and 113 confined M- and X-class flares (see §3) and 49 eruptive and 215 confined C-class flares (see Appendix A). Top panel: number of confined and eruptive C-, M- and X-class flares each month analyzed in this paper (left axis) and sunspot number from 2010 April until 2016 April (right axis). These flares correspond to all events above C5.0 from the flare ribbon database RibbonDB. Bottom panel: flare peak X-ray flux and latitude grouped by AR vs. time; circle size and color correspond to the peak X-ray flux summed over each AR number. See §2 for details.
## 3 Results: Flares of Goes Class M1.0 and Larger
In Figures 3-5 we show the main results of this paper: distributions, Spearman correlation matrix and scatter plots of variables listed in Table 1 for flares of GOES class M1.0 and larger. In Table 2 we show typical ranges and mean values for all variables.
### Results: distributions of thermodynamic and magnetic properties in confined and eruptive flares
In Figure 3(A-E) we first analyze the _distributions of thermodynamic properties_: GOES peak X-ray flux, peak temperature, emission measure, thermal energy and flare duration. In Table 2 we compare variables' typical range and mean values within confined and eruptive samples.
From Figure 3A we see that while the total number of confined and eruptive flares above M1.0 is similar (\(n_{\rm erup}=103\) vs. \(n_{\rm conf}=113\)), with increasing peak X-ray flux the ratio of confined to eruptive flares gradually decreases. As a result, in our sample eruptive flares have a higher mean peak X-ray flux than confined flares: M4.6 vs. M2.7, respectively.
\[EM_{\rm GOES,erup}[P_{20},P_{80}]=[0.7,3.2]\times 10^{48}cm^{-3}\quad vs. \quad EM_{\rm GOES,conf}[P_{20},P_{80}]=[0.5,1.5]\times 10^{48}cm^{-3}, \tag{9}\]
and
\[E_{\rm th,GOES,erup}[P_{20},P_{80}]=[3.8,15.3]\times 10^{29}ergs\quad vs. \quad E_{\rm th,GOES,conf}[P_{20},P_{80}]=[2.8,8.4]\times 10^{29}ergs. \tag{10}\]
In contrast, the mean SXR temperature is slightly lower in eruptive than in confined flares (see Figure 3B). Lower peak temperatures and higher emission measures of eruptive flares are consistent with the case studies by Qiu & Cheng (2022)
Figure 3: Results: Distributions of various physical variables for eruptive (red) and confined (blue) events (\(>M1.0\) GOES flare class). Vertical dotted lines and numbers in the top right corner indicate the mean value for each variable within eruptive and confined flare groups. Darker-color histograms near dotted lines show distributions of the mean, if we select 100 random subsamples of the half of the events in each sample, and reflect variability of the mean due to event selection. See §3.1 for details.
of 3 eruptive and 3 confined flares, where they found the high-temperature component at the flare onset in confined events in spite of higher flare classes of eruptive events in their sample. Similarly, Kahler & Ling (2022) analyzed hundreds of flares, finding that for a given X-ray peak flux, confined flares have higher temperatures than eruptive flares confirming the results of Kay et al. (2003).
We find that confined flares have shorter durations than eruptive events, with the mean of \(\tau_{\rm conf}=63\pm 4\) minutes and \(\tau_{\rm erup}=99\pm 7\) minutes, and the typical range, as the \(20^{\rm th}\) to \(80^{\rm th}\) percentile, of [31, 91] minutes and [41, 148] minutes, respectively (see Table 2). This finding is consistent with e.g. Webb & Hundhausen (1987); Toriumi et al. (2017); Hinterreiter et al. (2018), who reported that eruptive flares last longer than confined ones.
In Figure 3(F-O) we analyze the _distributions of magnetic properties_. We find that confined flares have larger AR magnetic fluxes and areas (Figure 3F and Figure 3H). Confined flares have also larger mean magnetic field strengths within ribbon and AR areas (Figure 3L and Figure 3M). The above differences are consistent with magnetic cage interpretation, where ARs with large magnetic flux have confined eruptions. Practically, this means that for _large flares of GOES class M1.0 and above:_ larger ARs, especially those with unsigned magnetic flux of \(10^{23}\) Mx and above, tend to host confined flares; medium ARs with \(3\times 10^{22}Mx<\Phi_{\rm AR}<10^{23}\) Mx host both confined and eruptive flares; smaller ARs with \(\Phi_{\rm AR}<3\times 10^{22}\) Mx tend to host eruptive flares. Note that if we include smaller C-class flares, then we find that smaller active regions host a similar number of confined and eruptive flares (see Figure 6F in Appendix A).
From ribbon analysis, we find that the magnetic reconnection fluxes have similar distributions in both samples (Figure 3G), while the ribbon areas are larger for eruptive flares (Figure 3I). As a result, both reconnection flux and area fractions are larger for eruptive flares. These results are consistent with a scenario in which weaker strapping field makes flares more eruptive (Figure 3J and Figure 3K and also earlier works by e.g., Toriumi et al.2017). Another factor that could explain why larger ARs host more confined flares is the relationship between AR size and access to periphery. Smaller regions have larger AR area fractions near their peripheries, with potentially better access to global field. In contrast, larger ARs have smaller AR area fractions near peripheries and hence tend to be more confined. Furthermore, the concept of closeness to periphery could explain why eruptive flares tend to have weaker magnetic fields, since the magnetic fields on AR peripheries are generally weaker. In contrast, confined flares are further from the periphery, where magnetic field is stronger, therefore mean magnetic field within ribbons for confined flares is stronger.
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline
 & & \multicolumn{2}{c}{**CONFINED FLARES**} & \multicolumn{2}{c}{**ERUPTIVE FLARES**} & \\ \cline{2-6}
Quantity & Units & Typical range & Mean Value & Typical range & Mean Value & Figure \\
\(\mathbb{X}\) & & \(\mathbb{X}_{\rm conf}[P_{20},P_{80}]\) & \(\mathbb{X}_{\rm conf}\) & \(\mathbb{X}_{\rm erup}[P_{20},P_{80}]\) & \(\mathbb{X}_{\rm erup}\) & \\ \hline
Peak X-ray flux, \(I_{\rm X,peak}\) & Flare class & \([M1.4,M4.1]\) & \(M2.7\) & \([M1.8,M8.9]\) & \(M4.6\) & Fig. 3(A) \\
Duration, \(\tau\) & \(min\) & \([31,91]\) & 63 & \([41,148]\) & 99 & Fig. 3(E) \\
Rise duration, \(\tau_{\rm rise}\) & \(min\) & \([8,21]\) & 16 & \([9.1,28.8]\) & 22 & – \\
Peak temperature, \(T_{\rm GOES}\) & \(MK\) & \([15.6,18.6]\) & 17.5 & \([14.2,19.0]\) & 17.1 & Fig. 3(B) \\
Peak EM, \(EM_{\rm GOES}\) & \(10^{48}\)\(cm^{-3}\) & \([0.5,1.5]\) & 1.4 & \([0.7,3.2]\) & 2.3 & Fig. 3(C) \\
Peak thermal energy, \(E_{\rm th,GOES}\) & \(10^{29}\)\(ergs\) & \([2.8,8.4]\) & 8.6 & \([3.8,15.3]\) & 12.7 & Fig. 3(D) \\
AR mag. flux, \(\Phi_{\rm AR}\) & \(10^{21}\)\(Mx\) & \([40,95]\) & 77 & \([30,63]\) & 48 & Fig. 3(F) \\
Reconnection mag. flux, \(\Phi_{\rm ribbon}\) & \(10^{21}\)\(Mx\) & \([2.6,6.4]\) & 5.3 & \([2.8,8.1]\) & 6.1 & Fig. 3(G) \\
AR area, \(S_{\rm AR}\) & \(10^{18}\)\(cm^{2}\) & \([98,243]\) & 180 & \([83,177]\) & 131 & Fig. 3(H) \\
Ribbon area, \(S_{\rm ribbon}\) & \(10^{18}\)\(cm^{2}\) & \([4.2,8.7]\) & 7.6 & \([5.1,13.5]\) & 10.1 & Fig. 3(I) \\
Rec. flux fraction, \(R_{\Phi}\) & \(\%\) & \([4.4,9.2]\) & 7.3 & \([7.2,19.2]\) & 13.2 & Fig. 3(J) \\
Rec. area fraction, \(R_{S}\) & \(\%\) & \([2.6,6.1]\) & 4.6 & \([4.7,12.7]\) & 8.9 & Fig. 3(K) \\
Mean AR mag. field, \(\bar{B}_{\rm AR}\) & \(G\) & \([348,539]\) & 422 & \([329,448]\) & 386 & Fig. 3(L) \\
Mean ribbon mag. field, \(\bar{B}_{\rm ribbon}\) & \(G\) & \([539,821]\) & 685 & \([454,707]\) & 597 & Fig. 3(M) \\
Peak reconnection rate, \(\dot{\Phi}_{\rm ribbon}\) & \(10^{19}\)\(Mx/s\) & \([1.1,2.4]\) & 1.9 & \([0.8,2.2]\) & 1.6 & Fig. 3(N) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Confined vs. Eruptive flares: comparison of typical ranges and mean values of their magnetic and thermodynamic properties for 216 \(>\)M1.0-class flares. The typical range of each quantity is described as the \(20^{th}\) to \(80^{th}\) percentile \(\mathbb{X}[P_{20},P_{80}]\). See §3.1 for details and Fig. 3 for distribution plots of all variables.
The above results of larger magnetic field and smaller ribbon area within confined flares imply that in confined flares reconnection proceeds in more compact current sheets at lower coronal heights with larger field strengths. On the other hand, the decreased magnetic field and increased ribbon area within eruptive flares imply that in eruptive flares reconnection proceeds in more extended current sheets at larger coronal heights with smaller magnetic field strengths.
Finally, we find that peak reconnection rates are slightly higher in confined vs. eruptive flares (Figures 3N and 3O). In other words, the same amount of reconnection flux, which we find to be very similar for eruptive and confined events in Figure 3G, gets reconnected at higher rates in confined events compared to eruptive flares (see Figure 3E). We hypothesize that the fact that confined flares are more compact, and as a result occur lower in the corona and involve stronger magnetic fields, is the main factor that leads to increased peak reconnection rates in confined flares. At first, these results for the reconnection rates might seem different from Hinterreiter et al. (2018) and Tschernitz et al. (2018), who analyzed local peak reconnection flux rates (i.e., electric fields \(E=v_{\rm rbn}B_{\rm n}\)) and global peak reconnection flux rates in 50 flares, finding that both are similar or higher in eruptive flares compared to confined flares. We note, however, that our results are for large flares above M1.0 with similar distributions of flare classes within the confined and eruptive samples. In contrast, the Hinterreiter et al. (2018) and Tschernitz et al. (2018) samples are very different, with flare classes ranging from B to X and the confined sample being much weaker than the eruptive one. Accounting for sample differences, we suggest that the two results are consistent, i.e., a confined flare with similar reconnection and peak X-ray fluxes as an eruptive flare would tend to have a higher peak reconnection rate than the eruptive flare. Figure 3(C) and Figure 7 in Tschernitz et al. (2018) support this expectation, showing that for the same peak X-ray flux confined flares have higher peak reconnection rates.
### Results: relationship between thermodynamic and magnetic properties in confined and eruptive flares
In Figure 4 we show a correlation matrix with Spearman correlation coefficients between all physical variables that we analyze in this paper. We show this matrix for illustrative purposes, to describe the relationships between all variables, especially those that we do not show in the scatter plots in Figure 5.
In Figure 5(A-F) we first analyze how peak X-ray flux scales with other variables and compare how confined and eruptive flare properties differ for a fixed peak X-ray flux.
Figure 4: Results: Correlation matrix showing Spearman correlation coefficient, \(r_{s}\), between different physical variables for all flares with GOES flare class \(>M1.0\). We describe the qualitative strength of the correlation using the following guide: \(r_{s}\in[0.2,0.39]\) – weak, \(r_{s}\in[0.4,0.59]\) – moderate, \(r_{s}\in[0.6,0.79]\) – strong, and \(r_{s}\in[0.8,1.0]\) – very strong. See §3.2 for details.
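For reference, the correlation matrix and the qualitative strength labels above can be computed as sketched below; the DataFrame `df` with one row per flare and one column per variable is an assumed, hypothetical layout rather than the catalog's actual format.

```python
import pandas as pd
from scipy.stats import spearmanr

def strength(rs):
    """Qualitative label used in the text for |r_s|."""
    a = abs(rs)
    if a >= 0.8: return "very strong"
    if a >= 0.6: return "strong"
    if a >= 0.4: return "moderate"
    if a >= 0.2: return "weak"
    return "none"

def spearman_matrix(df):
    """Pairwise Spearman r_s and p-values for all columns of a DataFrame."""
    rs, p = spearmanr(df.values)           # full matrices in one call
    return (pd.DataFrame(rs, index=df.columns, columns=df.columns),
            pd.DataFrame(p, index=df.columns, columns=df.columns))

# df: one row per flare, columns such as peak flux, T_GOES, EM_GOES, Phi_ribbon, ...
# rs_mat, p_mat = spearman_matrix(df)
# significant = p_mat < 0.01   # entries shown in bold correspond to p < 0.01
```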
The GOES peak X-ray flux has a moderate to high correlation with several thermodynamic properties, including peak temperature (\(r_{s}=0.5\), Figure 5E), emission measure (\(r_{s}=0.9\), Figure 5D) and thermal energy (\(r_{s}=0.9\), Figure 5F). These correlations are in agreement with Feldman et al. (1996); Reep & Knizhnik (2019); Sadykov et al. (2019) and the standard chromospheric evaporation model (Hirayama, 1974), where GOES peak X-ray flux depends on the maximum EM.
The correlation between the AR magnetic flux and the peak X-ray flux is weak (\(r_{s}=0.1\), Figure 5A): larger active regions can host both large and small flares. In contrast, the correlation between the peak X-ray flux and the reconnection flux is strong (\(r_{s}=0.6\), rising to 0.7 for flares above M3.0, Figure 5B), in agreement with Kazachenko et al. (2017); Toriumi et al. (2017); Tschernitz et al. (2018), meaning that larger flares have a larger amount of magnetic field participating in the reconnection. For flares above C1.0, Kazachenko et al. (2017) fit this relationship as \(X_{\rm peak}\propto(\Phi_{\rm ribbon})^{1.5}\), while Reep & Knizhnik (2019) find a similar \(X_{\rm peak}\propto(\Phi_{\rm ribbon})^{1.6}\). Here we find \(X_{\rm peak}\propto(\Phi_{\rm ribbon})^{1.5}\) for both confined and eruptive flares, consistent with Kazachenko et al. (2017).
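The quoted power-law index can be estimated with an ordinary least-squares fit in log-log space; a minimal sketch, where `phi_ribbon` and `x_peak` are hypothetical arrays of reconnection fluxes and GOES peak fluxes:

```python
import numpy as np

def powerlaw_exponent(phi_ribbon, x_peak):
    """Slope k of log10(X_peak) vs log10(Phi_ribbon), i.e. X_peak proportional to Phi_ribbon^k."""
    logx = np.log10(np.asarray(phi_ribbon, dtype=float))
    logy = np.log10(np.asarray(x_peak, dtype=float))
    k, b0 = np.polyfit(logx, logy, 1)
    return k, b0

# k, b0 = powerlaw_exponent(phi_ribbon, x_peak)   # the text reports k of about 1.5 for both samples
```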
Since the reconnection flux is calculated cumulatively by adding the newly reconnected flux as the flare progresses, it is physically more appropriate to compare the reconnection flux with a cumulative GOES SXR quantity, such as the SXR fluence. In the past, the SXR fluence has been widely used to compare flare and CME energetics (e.g., Salas-Matamoros & Klein, 2015). From Figure 5M we find that the correlation between the SXR fluence and the reconnection flux is strong, \(r_{s}=0.7\), only slightly above the correlation coefficient between the GOES peak X-ray flux and the reconnection flux (\(r_{s}=0.6\)). This result agrees with the previous study of Sindhuja et al. (2019), who found that the reconnection flux has a stronger correlation with fluence than with the peak X-ray flux: \(r_{s}=0.8\) vs. \(r_{s}=0.6\) in their sample. In addition, we find that the flare
Figure 5: Results: Scatter plots between different physical variables for confined (blue) and eruptive (red) events (\(>M1.0\) GOES flare class). In the corner we show Spearman correlation coefficient \(r_{s}\) between each pair of plotted variables for all flares (black) and eruptive (red) and confined (blue) flares separately, where bold and dim fonts indicate statistical significance of the correlation with p-values below and above \(p=0.01\), respectively. The values in the parentheses show values for flares above M3.0. See §3.2 for details.
peak X-ray flux has moderate correlation with flare area ratio and flux ratio (\(r_{s}=0.6\), Figure 5N), in agreement with Kazachenko et al. 2017.
We find that the correlation between the peak X-ray flux and the peak reconnection flux rate is moderate, \(r_{s}=0.4\), both for the full sample and for larger flares above M3.0 (Figure 5C), **which is significantly weaker than the correlation between the peak X-ray flux and the reconnection flux (\(r_{s}=0.6\))**.
In Figures 5(G-H) we quantify how the emission measure is related to the reconnection flux and its peak rate. As the emission measure is very strongly correlated with the GOES peak X-ray flux (\(r_{s}=0.9\)), it has similar correlation coefficients with the other variables as the GOES peak X-ray flux: strong correlations with reconnection flux (\(r_{s}=0.6\), Figure 5G), ribbon area (\(r_{s}=0.7\)), flux ratio and area ratio (\(r_{s}=0.6\)), and moderate correlations with GOES temperature (\(r_{s}=0.5\)) and peak reconnection rate (\(r_{s}=0.4\), Figure 5H).
For the first time, in Figures 5(I-J) we compare how GOES peak temperature is related to other reconnection properties. We find that the peak temperature has very weak correlation with the reconnection flux (\(r_{s}=0.3\)), which disappears for stronger flares (\(r_{s}=0.1\) for flares above M3.0, Figure 5I). In contrast to reconnection fluxes, the peak temperature is sensitive to the peak reconnection rate with moderate to strong correlation coefficients, \(r_{s}=0.5\) and \(r_{s}=0.6\), for flares above \(M1.0\) and \(M3.0\), respectively (Figure 5J).
How does thermal energy scale with the reconnection flux? We find a strong to very strong correlation between the two (\(r_{s}=0.8\)), indicating that reconnection flux defines the thermal energy output of the flare. We find that the correlation between the thermal energy and the peak reconnection flux rate is weaker (\(r_{s}=0.5\), Figure 5K), implying that the total reconnection flux has a stronger effect on the thermal energy than the peak reconnection rate, similar to the effect on the emission measure.
We find that flare duration has a moderate correlation with the flare ribbon area (\(r_{s}=0.4\), Figure 5L). These results are consistent with the statistical results of Toriumi et al. (2017) and Reep & Knizhnik (2019), who found that in large flares above M5.0 flare durations scale with ribbon separation distances, while in weaker flares the correlation is weak (Reep & Knizhnik, 2019). One possible interpretation is that the flare duration could be set by the Alfven transit time across the flaring area, which is related to the ribbons' size. In addition, the correlation between flare duration and flare ribbon size could be explained with the avalanche flare model (e.g., Lu and Hamilton, 1991; Lu, 1995): if a flare proceeds as a chain reaction of one reconnection event triggering others, then reconnecting a larger flux would take more time.
Finally, in Figure 5O we look at the statistical relationship between the reconnection fluxes and peak reconnection flux rates. From this plot we find that the two variables are moderately correlated (\(r_{s}=0.5\)). We also find that for a fixed reconnection flux, confined flares have higher peak reconnection rates.
We then use scatter plots in Figure 5 to describe how variables within confined and eruptive samples differ for fixed flare class or emission measure. We find that for a fixed peak X-ray flux, confined flares have higher reconnection rates and peak temperatures (see Figures 5C and E). For thermal energies, only large confined flares, of flare class X1.0 and above, have higher thermal energies than eruptive flares (see Figure 5F).
These results are consistent with a qualitative scenario of energy partition between CME kinetic energy and flare thermal energy, where CME removes energy from the source region, leading to less thermal energy available for flares and lower peak flare temperatures in eruptive flares.
To improve our understanding of the underlying physical mechanisms driving the above correlations, we use a framework of _extensive and intensive flare properties_(Tan et al., 2007; Welsch et al., 2009; Bobra and Ilonidis, 2016; Kazachenko et al., 2022).
Extensive properties are those that correlate or scale with flare size, described by peak X-ray flux or X-ray fluence, while intensive properties are those that do not scale with flare size. Above, we find the following extensive flare properties with a strong correlation with flare fluence, \(F_{\rm X}\): peak X-ray flux, emission measure, thermal energy, reconnection flux and ribbon area: \(r_{s}(F_{\rm X};[I_{\rm peak},EM_{\rm GOES},E_{\rm th,GOES},\Phi_{\rm ribbon },S_{\rm ribbon}])=[0.8,0.8,0.7,0.7]\). We also find that many extensive properties have a strong correlation with each other: e.g., \(r_{s}(S_{\rm ribbon},[EM_{\rm GOES},E_{\rm th,GOES}])=[0.7,0.9]\).
Above, we also find some variables that behave more like intensive properties, such as temperature and reconnection flux rate. These properties do not exhibit a correlation with X-ray fluence, \(r_{s}(F_{\rm X};[T_{\rm GOES},\dot{\Phi}_{\rm ribbon}])=[0.2,0.2]\), and only moderate correlations with peak X-ray flux: \(r_{s}(I_{\rm X,peak};[T_{\rm GOES},\dot{\Phi}_{\rm ribbon}])=[0.5,0.4]\). However, temperature and reconnection rate exhibit a moderate correlation with each other: \(r_{s}(T_{\rm GOES},\dot{\Phi}_{\rm ribbon})=0.5\).
What controls the plasma temperature and emission measure in flares? In Longcope et al. (2016, 2018), it is the reconnection rate that affects the maximum values of plasma properties. From our analysis we find that the temperature is more controlled by peak reconnection rate (\(r_{s}=0.5\)), while emission measure and thermal energy are more controlled
by the reconnection flux (\(r_{s}=0.6\) and \(r_{s}=0.8\)). For comparison, the correlation between the reconnection flux and the reconnection flux rate is only moderate (\(r_{s}=0.5\), Figure 5O). These observational inferences are consistent with the framework of correlations between intensive and extensive flare properties above and with case studies of temporally resolved profiles of reconnection rates, temperatures, and emissions (see e.g., Qiu et al., 2010; Qiu and Cheng, 2022). For example, Qiu and Cheng (2022) analyzed 3 eruptive and 3 confined flares, finding that in all flares the temperature rises and peaks early or co-temporally with the rise and peak of the reconnection rate.
In this section we focused on larger flares, of class \(M1.0\), and above. Note that while variable distributions we find within confined and eruptive samples are not drastically different, the differences in the mean values are statistically persistent for different sub-samples (see darker-tone shaded histograms centered at dotted mean values in Figure 3). If we include flares \(C5.0\) and above, as we do in Appendix A, we find similar physical conclusions in terms of flare magnetic and thermodynamic properties, with all the mean quantities being smaller due to smaller flare sizes. The main difference for flares within \(C5.0-M1.0\) class vs. flares \(M1.0\) and above, which we analyzed above, is that the fraction of confined flares drastically increases for flare class below \(M1.0\), with fractions of confined flares of 80% and higher as we go to lower flare class (see Appendix A for analysis including flares \(>C5.0\)). On the other hand, if we restrict our analysis to very large flares above \(M5.0\), we find similar physical conclusions as for smaller flares; the only difference is in the duration distributions: confined flares have similar durations as eruptive flares for GOES classes \(>M5.0\) (cf., for flares \(>M1.0\) confined flares are slightly shorter).
## 4 Conclusions
We have compared the magnetic and thermodynamic properties of 216 confined and eruptive flares _of GOES class M1.0 and above_ (see Figures 3-5). In Appendix A we have also included smaller flares of GOES class C5.0 and above and analyzed the properties of 480 flares.
What are the differences in average _magnetic properties between large confined and eruptive flares_? We find that:
* Eruptive flares have the same amount of reconnected flux as confined flares. However, unlike confined flares, they occur in smaller ARs with weaker magnetic field strengths. Their flare ribbons have weaker mean magnetic field strengths. As a result, they reconnect larger fractions of AR magnetic flux than confined flares.
* Confined flares occur in larger ARs with stronger mean magnetic field strengths compared to eruptive flares. Their ribbons are more compact, have stronger magnetic field strengths and reconnect smaller fractions of AR magnetic flux. Both of these results are consistent with earlier studies.
* For the first time, we find that for the same peak X-ray flux confined flares have higher mean peak reconnection rates than eruptive flares (see Figure 5C). This together with the fact that HXR (or microwave) emissions are temporally correlated with the global (see Qiu and Cheng, 2022 and references therein) and local reconnection rates (Temmer et al., 2007; Naus et al., 2022), implies that large, eruptive flares are less efficient in particle acceleration than confined flares, in agreement with direct particle measurements (see e.g., Thalmann et al., 2015; Cai et al., 2021).
The above results support previously described main factors that define whether a flare would be confined or eruptive: (1) the strength of the overlying field, given by the AR flux, (2) the ratio between the reconnected magnetic flux and the active region flux and (3) the strength of the reconnected field (mean ribbon magnetic field). Here we have not investigated either ribbon field non-potentiality or its proximity to the AR edge, but previous works have shown that these differ in confined and eruptive flares (see summary in SS1).
For flares of M-class and above, what are the differences in _thermodynamic plasma properties_ between confined and eruptive flares? Our conclusions are as follows.
* Large _eruptive_ flares have larger mean peak X-ray fluxes, peak emission measures and thermal energies. They have longer durations, and slightly smaller peak temperatures.
* Large _confined_ flares have smaller mean peak X-ray fluxes, peak emission measures and thermal energies. They have shorter durations, and slightly larger peak temperatures.
* Why, for similar reconnection fluxes, do confined flares have larger temperatures and peak reconnection flux rates? We explain it with lower coronal altitudes and stronger reconnection fields in confined flares (since confined flares are more compact).
* Why, for similar reconnection fluxes, do confined flares tend to have shorter durations than eruptive flares? We explain it with the smaller physical size of confined flares (ribbon areas) and as a consequence shorter Alfven transit time over the flaring area, than in eruptive flares (Reep & Knizhnik, 2019).
* We find that peak X-ray flux and emission measure are more strongly controlled by the reconnection fluxes than peak reconnection rates. On the other hand, peak temperature is more strongly controlled by the peak reconnection rate than the reconnection flux. This is consistent with temporally-resolved flare case studies that show temperature peaking early on together with the reconnection rate peak, while emission measure slowly rises to its maximum values as the cumulative reconnection flux increases (see e.g., Qiu & Cheng, 2022).
To summarize, our findings indicate that, in general, in eruptive flares, reconnection proceeds more slowly in larger current sheets higher in the corona where the coronal magnetic field is weaker. In contrast, in confined flares, reconnection proceeds faster in more compact current sheets lower in the corona where the coronal magnetic field is stronger. As higher reconnection rates lead to more accelerated ions and electrons, large confined flares could be more efficient in producing ionizing electromagnetic radiation than eruptive flares. Given that flare high-energy radiation could affect exoplanet habitability conditions (Yamashiki et al., 2019; Airapetian et al., 2020), we speculate that confined flares could be important in shaping exoplanet conditions. This, together with a lack of CME detections on solar-like stars, would imply potentially a larger role of confined flares in exoplanet habitability.
In this section we focused on large flares of class \(M1.0\) and above to have a similar number of eruptive and confined flares to analyze. Extending our analysis to flares of class \(C5.0\) and above, we end up with a much larger fraction of confined flares, but find similar physical differences between magnetic/thermodynamic properties of confined/eruptive flares, with smaller mean values. For the results of our analysis for smaller flares, C5.0 and above, see Appendix A below.
The catalog is available online in IDL SAV and CSV file formats, and can be used for a wide spectrum of quantitative studies in the future, including case studies of the outliers.
We thank the reviewer for providing comments that have improved the manuscript. We thank the HMI team for providing us with the vector magnetic field SDO/HMI data. We thank the AIA team, in particular Marc DeRosa, for providing us with the UV SDO/AIA data. We thank the Croom Team for stimulating discussions. We thank the US taxpayers for providing the funding that made this research possible. We acknowledge support from NASA LWS NNH17ZDA001N, 80NSSC19K0070, NASA ECIP NNH18ZDA001N and NSF CAREER SPVKK1RC2MZ3.
## Appendix A Expanding analysis from flares above M1.0 to flares above C5.0
In addition to flares above M1.0, as has been analyzed in the main paper's body, we extend our analysis to include smaller events with GOES class C5.0 and above. To minimize errors in the reconnection fluxes we restrict our analysis to events with reconnection flux imbalance between positive and negative polarities \(\Phi_{\rm ribbon,imb}<20\%\). As a result we analyze 480 events, 152 eruptive and 328 confined flares, \(C5.0\)-class and above.
For flares of GOES class below M1.0, the fraction of confined flares significantly increases from 60% for M1.0 flares to 85% for C5.0 flares. As a result, the gap between distributions of GOES peak X-ray fluxes becomes even larger for confined and eruptive flares, affecting distributions of emission measure, thermal energy, etc. Nevertheless, except for mean reconnection fluxes that become smaller for confined than for eruptive flares, we observe the same tendencies between magnetic/thermodynamic properties for confined/eruptive flares as for flares above M1.0. Even the peak reconnection rates are smaller for eruptive than for confined flares, consistent with conclusions for larger flares.
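The event selection used in this appendix (GOES class \(\geq\) C5.0 and ribbon-flux imbalance below 20%) can be sketched as below; the column names `goes_class`, `flux_imbalance`, and `eruptive` are placeholders rather than the catalog's actual fields.

```python
import pandas as pd

def goes_class_to_flux(cls):
    """Convert a GOES class string such as 'M1.0' or 'C5.0' to peak flux in W m^-2."""
    scale = {"A": 1e-8, "B": 1e-7, "C": 1e-6, "M": 1e-5, "X": 1e-4}
    return scale[cls[0].upper()] * float(cls[1:])

def select_events(catalog, min_class="C5.0", max_imbalance=0.20):
    """Keep events above a class threshold with ribbon-flux imbalance below the cut."""
    flux = catalog["goes_class"].map(goes_class_to_flux)
    keep = (flux >= goes_class_to_flux(min_class)) & (catalog["flux_imbalance"] < max_imbalance)
    return catalog[keep]

# sel = select_events(catalog, "C5.0")
# confined_fraction = 1.0 - sel["eruptive"].mean()   # compare with 328 confined out of 480 events
```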
Figure 6: Appendix: Distributions of various physical variables for eruptive (red) and confined (blue) events (\(>C5.0\) GOES flare class). Vertical dotted lines and numbers in the top right corner indicate mean value for each variable within eruptive and confined flare groups. Darker-color histograms show distributions of the mean, if we select 100 random subsamples of the half of the events, and reflect variability of the mean due to event selection bias. See Appendix A for details.
Figure 7: Appendix: Correlation matrix showing Spearman correlation coefficient between different physical variables for all flares with GOES flare class \(>C5.0\). See Appendix A for details.
Figure 8: Appendix: Scatter plots: between different physical variables for confined (blue) and eruptive (red) events (\(>C5.0\), GOES flare class). See Appendix A for details. |
2306.16655 | Generalized Hopf Bifurcation in a Cancer Model with Antigenicity under
Weak and Strong Allee Effects | This article deals with an autonomous differential equation model that
studies the interaction between the immune system and the growth of tumor cells
with strong and weak Allee effects. The Allee effect refers to interspecific
competition, and when the population is small, it can retard population growth.
The work focuses on describing analytically, using a set of parameters, the
conditions in the phases of the immunoediting theory, particularly in the
equilibrium phase, where a latent tumor would exist. Saddle-Node,
Saddle-symmetric, Hopf, generalized Hopf, and Takens-Bogdanov bifurcations get
presented for both Allee effects, and their biological interpretation regarding
cancer dynamics gets discussed. The Hopf and generalized Hopf bifurcation
curves get analyzed through hyper-parameter projections of the model, where it
gets observed that with a strong Allee effect, more tumor control persists as
it has higher antigenicity, in contrast to the weak Allee effect, where lower
antigenicity gets observed. Also, we observe that the equilibrium phase
persists as antigenicity increases with a strong Allee effect. Finally, the
numerical continuation gets performed to replicate the analytical curves'
bifurcations and draw the limit and double limit cycles. | Eymard Hernández-López, Mayra Núñez-López, Napoleón Navarro-Tito | 2023-06-29T03:35:13Z | http://arxiv.org/abs/2306.16655v1 | # Generalized Hopf Bifurcation in a Cancer Model with
###### Abstract
This article deals with an autonomous differential equation model that studies the interaction between the immune system and the growth of tumor cells with strong and weak Allee effects. The Allee effect refers to interspecific competition, and when the population is small, it can retard population growth. The work focuses on describing analytically, using a set of parameters, the conditions in the phases of the immunoediting theory, particularly in the equilibrium phase, where a latent tumor would exist. Saddle-Node, Saddle-symmetric, Hopf, generalized Hopf, and Takens-Bogdanov bifurcations get presented for both Allee effects, and their biological interpretation regarding cancer dynamics gets discussed. The Hopf and generalized Hopf bifurcation curves get analyzed through hyper-parameter projections of the model, where it gets observed that with a strong Allee effect, more tumor control persists as it has higher antigenicity, in contrast to the weak Allee effect, where lower antigenicity gets observed. Also, we observe that the equilibrium phase persists as antigenicity increases with a strong Allee effect. Finally, the numerical continuation gets performed to replicate the analytical curves' bifurcations and draw the limit and double limit cycles.
_Keywords: Generalized Hopf bifurcation, Cancer modeling, Immunoediting, Weak-strong Allee effects._
Introduction
Multiple factors, such as genetic, environmental, viral infections, behavioral, and dietary factors, could cause the appearance of cancer. A latent tumor (equilibrium phase) is one of the crucial stages in cancer dynamics. Hence, the immune system is crucial in controlling and eliminating cancerous tumors. Cancer vulnerabilities become more apparent if thinking about the cells that make up a tumor as an endangered species. According to [1], many tiny tumors possibly get formed but almost always become extinct before they are clinically relevant and even before possible detection. Extinction is a complex phenomenon often driven by the interaction of ecological and evolutionary processes; in this context, the Allee effect implies the existence of a growth threshold that may be explored in therapeutics [2].
On the other hand, in the presence of Allee effects, successful tumors occur following rare large fluctuations in the population size that take the tumors over the Allee threshold. Allee effects get caused by several mechanisms, including cooperative feeding and defense, which can potentially be relevant to cancer. When the number of organisms is small, cooperation is typically inefficient, leading to a growth threshold in the population. Cell cooperation might be required in diseases like cancer to produce a sufficient density for tumor proliferation. Allee effect in cancer is being investigated because various factors may limit growth in tumor cells at low densities, cooperation among cells might be required to produce a sufficient density of diffusible growth factors needed for tumor proliferation [3], [4].
The Allee effect receives attention in several mathematical models, such as delay in differential equations where population density changes, in time series for population processes, and areas of biomathematics, among others [5]. The weak Allee effect is when at low density, the per capita growth rate is lower than at high densities, but the growth rate remains positive even in this case of a small population. In contrast, the strong Allee effect is characterized by the fact that after going from a positive to negative per capita growth rate in populations, through the threshold curve, the per capita rate of population growth drops dramatically, becoming negative at an accelerated rate until the population's extinction.
Simplified models at the cellular level consider tumor and immune cells in the microenvironment, where cytokines regulate their interactions and dormancy patterns [6], [2]. In literature, there are several dynamic models based on the interaction between cancer cells and the immune system, as in [7], [8], [9].
In other works, the interaction between the cells of the immune system and tumor cells has already been studied without the Allee effect, e.g., by [10], [11], [12], [13]. Most of the works mentioned above focus on stability points and bifurcations found numerically, but analytical bifurcations are lacking. In [14], the authors modeled the interaction between tumor cells and the immune system's effector cells; they study the generalized Hopf bifurcation (also known as the Bautin bifurcation), including a weak Allee effect. Finally, in [15], the authors present another way to analyze the Allee effect through bifurcations; they carried out a study of the Allee effect on a generalized logistic map that exhibits rich and complex dynamics.
In this work, we propose a mathematical model to study the bifurcations of the dynamics, considering strong and weak Allee effects as a limiting factor in tumor growth and in its interaction with the immune system.
The paper gets organized as follows: in Section II, we present and describe the model with weak and strong Allee effects; in Section III, we present the critical points. Then, Section IV presents the bifurcations with the Allee effect. Finally, in Section V, we draw some conclusions about this work.
## II Model
The immune system is the first line of defense against external aggressions; when normal cells become cancer cells, the immune system launches a response. In this work we study the dynamics between the cells of the immune system \(E\) and the tumor cells \(T\), without taking into account the action of any treatment on the cancer. We consider weak and strong Allee effects in a single model so as to allow for a comparative study of the impact of these two types of Allee effect; therefore, the system is given by
\[\begin{array}{rcl}\frac{dT}{dt}&=&\frac{\hat{r}T(1-\hat{b}T)(T-\hat{\beta})}{\hat{\alpha}+T}-\frac{\hat{a}TE}{\hat{g}+T}\\ \frac{dE}{dt}&=&\hat{c}T-\hat{\mu}E,\end{array} \tag{1}\]
where \(\hat{c}\), \(\hat{r}\), \(\hat{a}\), \(\hat{g}\), \(\hat{\mu}>0\).
In the model, the Allee threshold and the carrying capacity of the environment are explicitly included as model parameters.
The parameter \(\hat{r}\) is the intrinsic growth rate of \(T\); it can be understood as the proliferation rate of tumor cells, and \(1/\hat{b}\) is the carrying capacity of \(T\). The first term of Eq. (1) represents logistic tumor cell growth with an Allee effect.
The strong Allee threshold is given by \(\hat{\beta}\), and \(\hat{\alpha}\) is a "control" parameter that allows us to transition between the weak and the strong Allee effect. If \(\hat{\beta}=-\hat{\alpha}\), the demographic Allee effect is absent. For our analysis, \(\hat{\beta}>0\) corresponds to a strong Allee effect scenario; on the other hand, if \(\hat{\beta}=0\), we obtain a weak Allee effect scenario with \(\hat{\alpha}>0\), according to [16], [5], [17].
The second term of the first equation of system (1) represents the loss of tumor cells due to their interaction with the effector cells \(E\) at a rate \(\hat{a}\). The parameter \(\hat{a}\) corresponds to the immune system's response capacity to the presence of tumor cells, and \(\hat{g}\) is the half-saturation rate for cancer clearance.
The second equation models the change in the population of the immune system's effector cells with respect to time. The first term represents the recruitment of effector cells in response to tumor antigenicity, where \(\hat{c}\) is the tumor antigenicity (the ability of the tumor to elicit an immune system response). Finally, the second term represents the death or apoptosis of the effector cells with \(\hat{\mu}\) as death rate of immune cells.
To facilitate the analysis, we use the following dimensionless version of system (1) to perform the bifurcation analysis.
\[\begin{array}{rcl}\frac{dx}{dt}&=&\frac{rx(1-bx)(x-\beta)}{\alpha+x}-\frac{ axy}{g+x}\\ \frac{dy}{dt}&=&cx-\mu y.\end{array} \tag{2}\]
where \(x\) denotes the (dimensionless) tumor cell population and \(y\) the immune cell population.
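For numerical experiments with system (2), a minimal integration sketch is given below; the parameter values are those used later for Figure 3, while the initial condition and integration time are arbitrary illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, z, r, b, beta, alpha, a, g, c, mu):
    """Right-hand side of the dimensionless tumor-immune system (2)."""
    x, y = z
    dx = r * x * (1 - b * x) * (x - beta) / (alpha + x) - a * x * y / (g + x)
    dy = c * x - mu * y
    return [dx, dy]

# Parameter set used for Figure 3 (strong Allee effect through alpha and beta).
pars = (16.6667, 1.24997, 1.0e-9, 1.0e-11, 5.55556, 0.0001, 30.4117, 11.95)  # r, b, beta, alpha, a, g, c, mu

sol = solve_ivp(rhs, (0.0, 50.0), [0.4, 0.2], args=pars, rtol=1e-9, atol=1e-12)
print(sol.y[:, -1])   # the trajectory should wind onto the limit cycle reported for these parameters
```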
## III Critical points
The critical points of system (2) with a strong Allee effect are the trivial point \(P_{0}=(0,0)\), two complex critical points \(P_{1},\bar{P_{1}}\), and a real critical point given by
\[P_{2}=\left(\frac{\eta-2ac}{6b\Gamma\mu r},\frac{c(\eta-2ac\Gamma)}{6b\Gamma \mu^{2}r}\right), \tag{3}\]
The equilibrium points of interest are those whose entries are real and nonnegative. That is, the equilibrium point of interest is \(P_{2}\); the parameters \(\Gamma\) and \(\eta\) can be found in Appendix A as (A3) and (A2), respectively.
**Proposition 1**: _All solutions \(P_{2}=(x_{0},y_{0})\in\mathbb{R}_{+}^{2}\) from (3) of system (2) are positive in the following strong Allee case._
\[W_{Strong}\;=\;\left\{W_{s_{1}}<\alpha<W_{s_{2}},0<a<W_{s_{3}},W_{s_{4}}< \Gamma,0<b<W_{s_{5}}\right\}, \tag{4}\]
_where \(0<\Gamma\leq 1\) with \(\eta>2ac\), for \(g=\beta\), and provided that all other parameters are positive, including the parameters \(W_{s_{i}}>0\), for \(i\in\{1,...,5\}\), given in Footnote 1._
Footnote 1: \(W_{s_{1}}=\frac{a^{2}c^{2}-2ac\mu r+b^{2}\beta^{2}\mu^{2}r^{2}+\mu^{2}r^{2}}{abc\mu r},\quad W_{s_{2}}=\frac{\mu r\left(\sqrt[3]{2}\mu r\left(3b^{2}\beta^{2}+1\right)+\Gamma\right)-a\left(2\sqrt[3]{2}c\mu r+c\right)}{3\sqrt[3]{2}abc\mu r},\quad W_{s_{3}}=\frac{3\sqrt[3]{2}abc\mu}{2\sqrt{2}c\mu r+c},\)\(W_{s_{4}}=\frac{1}{3}\sqrt{\frac{7}{3}(2\mu^{2}\mu^{3}r^{3})},\) and \(W_{s_{5}}=\frac{3\sqrt[3]{2}a^{2}c^{2}-4\sqrt[3]{2}ac\mu r+ac+2\sqrt[3]{2}a^{2}r^{2}}{\mu r}.\)
**Proof 1**: _If we consider the expression (A3), when simplifying it is enough to compare_
\[3a^{2}c^{2}\mu r(b(3\alpha+2\beta-2g)+2) > 2a^{3}c^{3}+3ac\mu^{2}r^{2}\left(b\left(3\alpha+b\beta(3\alpha+2 \beta)+2bg^{2}-g(b(3\alpha+\beta)+1)+\beta\right)\right)\] \[+ \mu^{3}r^{3}(b(g-\beta)+2)(b(\beta+2g)+1)(b(2\beta+g)-1),\]
_If we simplify the expression for \(\eta\) in (A2), then we have_
\[\mu r\left(\sqrt[3]{2}\mu r\left(b\left(\beta(b\beta-1)+bg^{2}+b\beta g+g\right) +1\right)+b\Gamma(\beta-g)+\Gamma\right)>ac\left(\sqrt[3]{2}\mu r(3\alpha b+2 b\beta-2bg+2)+1\right).\]
_The expression (3) has positive components if_
\[(0<\Gamma\leq 1\wedge a>0\wedge\eta>2ac),\quad\text{ or }\quad\Gamma>1\wedge a >0\wedge\eta>2ac\Gamma.\]
_Therefore, the parameters \(\Gamma\) and \(\eta\) are positive if the following conditions hold: \(W_{s_{1}}<\alpha<W_{s_{2}},0<a<W_{s_{3}},W_{s_{4}}<\Gamma,0<b<W_{s_{5}}\), for \(\beta=g\), together with the positivity of the parameters of system (2)._
Figure 1 shows the behavior of the critical point \(P_{2}\) given by expression (3) for admissible parameter values. The parameters correspond to different parameterizations of the strong Allee effect, represented by \(\alpha\) and \(\beta\), and we show the critical values of \(x\) and \(y\) for different values of \(\alpha\). As \(\alpha\) increases, the population density at which the per capita growth rate is maximized moves to a higher density, i.e., the growth rate reaches lower maximum values. Also, for the case of Figure 1 (b)-(d), when \(\beta=0\) a weak Allee effect is obtained, corresponding to the maximum point of the curves. Figure 2 shows curves in 3D representing the critical points as \(\alpha\) and \(\beta\) vary. If the value of \(\alpha\) increases, the parameter \(\beta\) should be smaller.
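The dependence of \(P_{2}\) on \(\alpha\) and \(\beta\) shown in Figures 1 and 2 can also be obtained numerically, without the closed-form expression (3), by solving the nullcline balance \(r(1-bx)(x-\beta)/(\alpha+x)=acx/(\mu(g+x))\) for \(x>0\); a sketch, with the parameter values of Figure 2:

```python
import numpy as np
from scipy.optimize import brentq

def positive_equilibrium(a, b, c, g, r, mu, alpha, beta):
    """Nontrivial equilibrium of system (2): y = c*x/mu, with x solving the nullcline balance."""
    f = lambda x: r * (1 - b * x) * (x - beta) / (alpha + x) - a * (c / mu) * x / (g + x)
    xs = np.linspace(1e-6, 0.999 / b, 2000)      # the carrying capacity 1/b bounds the branch
    vals = np.array([f(x) for x in xs])
    for xl, xr, vl, vr in zip(xs[:-1], xs[1:], vals[:-1], vals[1:]):
        if vl * vr < 0:                          # sign change: refine with Brent's method
            x0 = brentq(f, xl, xr)
            return x0, c * x0 / mu
    return None

# Parameter values of Figure 2; scan alpha to see how the equilibrium moves.
# for alpha in (1e-11, 1e-3, 1e-1):
#     print(alpha, positive_equilibrium(a=5.55556, b=1.24997, c=0.252521, g=0.0001,
#                                       r=16.6667, mu=0.166667, alpha=alpha, beta=1e-9))
```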
We analyze the stability of the critical points of biological interest in the following result.
**Proposition 2**: _If the critical points are real, their stability is as follows._
**(a)**: _The critical point_ \(P_{0}\) _is asymptotically stable._
Fig. 1: The effect of parameters \(\alpha\) and \(\beta\) on the values of the critical point (3).
Fig. 2: The effect of parameters \(\alpha\) and \(\beta\) on the values of the critical point (3), for parameter values \(a=5.55556\), \(b=1.24997\), \(c=0.252521\), \(g=0.0001\), \(r=16.6667\), and \(\mu=0.166667\).
**(b)**: _The critical point_ \(P_{2}\) _is an unstable or asymptotically stable node, unstable or asymptotically stable spiral, stable center, or a saddle._
**Proof 2**: _Analyzing the eigenvalues for \(P_{0}\): \(E_{0_{1}}=-\beta gr\) and \(E_{0_{2}}=-\alpha g\mu\), the point \(P_{0}\) is asymptotically stable. We consider the eigenvalues for \(P_{2}\) as follows_
\[E_{2_{1},2_{2}}=\frac{1}{2}\left((\hat{P_{2_{1}}}-\hat{P_{2_{2}}})\pm \sqrt{F(x_{0},y_{0}:a,b,c,g,r,\alpha,\beta)}\right),\]
_Let_
\[\Omega=(x_{0},y_{0}:a,b,c,g,r,\alpha,\beta)\]
_such that \(F(\Omega)=F(x_{0},y_{0}:a,b,c,g,r,\alpha,\beta)\), and \(\hat{P_{2_{1}}}=rx_{0}(b\beta+1)(2g+3x_{0})\), \(\hat{P_{2_{2}}}=ay_{0}(\alpha+2x_{0})+brx_{0}^{2}(3g+4x_{0})+\beta r(g+2x_{0})+\mu(g+x_{0})(\alpha+x_{0})\), then_
* _For_ \(F(\Omega)=0\)_: If_ \(\hat{P_{2_{1}}}>\hat{P_{2_{2}}}\)_,_ \(P_{2}\) _is an unstable node. But if_ \(\hat{P_{2_{1}}}<\hat{P_{2_{2}}},\) _then_ \(P_{2}\) _is asymptotically stable._
* _In case_ \(F(\Omega)<0\)_: if_ \(\hat{P_{2_{1}}}>\hat{P_{2_{2}}}\)_,_ \(P_{2}\) _is spiral unstable. But if_ \(\hat{P_{2_{1}}}<\hat{P_{2_{2}}},\)__\(P_{2}\) _is spiral asymptotically stable, either if_ \(\hat{P_{2_{1}}}=\hat{P_{2_{2}}}\)__\(P_{2}\) _is stable center._
* _For_ \(F(\Omega)>0\)_, \(P_{2}\) is an unstable node if \(\hat{P_{2_{1}}}>\hat{P_{2_{2}}}\) and \((\hat{P_{2_{1}}}-\hat{P_{2_{2}}})\pm\sqrt{F(\Omega)}>0\), or if \(\hat{P_{2_{1}}}<\hat{P_{2_{2}}}\) and \((\hat{P_{2_{1}}}-\hat{P_{2_{2}}})\pm\sqrt{F(\Omega)}>0\). The point \(P_{2}\) is a saddle if \(\hat{P_{2_{1}}}>\hat{P_{2_{2}}}\), \((\hat{P_{2_{1}}}-\hat{P_{2_{2}}})-\sqrt{F(\Omega)}<0\), and \((\hat{P_{2_{1}}}-\hat{P_{2_{2}}})+\sqrt{F(\Omega)}>0\), or if \(\hat{P_{2_{1}}}<\hat{P_{2_{2}}}\), \((\hat{P_{2_{1}}}-\hat{P_{2_{2}}})-\sqrt{F(\Omega)}<0\), and \((\hat{P_{2_{1}}}-\hat{P_{2_{2}}})+\sqrt{F(\Omega)}>0\). Finally, \(P_{2}\) is asymptotically stable if \(\hat{P_{2_{1}}}<\hat{P_{2_{2}}}\), and \((\hat{P_{2_{1}}}-\hat{P_{2_{2}}})\pm\sqrt{F(\Omega)}<0\)._
_The function \(F(\Omega)\) in variables with parameters is in the Appendix A as (A1)._
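The case analysis of Proposition 2 can be checked numerically by evaluating the Jacobian of system (2) at an equilibrium and inspecting its eigenvalues; a sketch, assuming the equilibrium \((x_{0},y_{0})\) has already been computed (for instance with the root-finding sketch above):

```python
import numpy as np

def jacobian(x, y, a, b, c, g, r, mu, alpha, beta, eps=1e-8):
    """Numerical Jacobian of system (2) at (x, y), by central differences."""
    def F(z):
        x_, y_ = z
        return np.array([r * x_ * (1 - b * x_) * (x_ - beta) / (alpha + x_) - a * x_ * y_ / (g + x_),
                         c * x_ - mu * y_])
    z0 = np.array([x, y], dtype=float)
    J = np.zeros((2, 2))
    for j in range(2):
        dz = np.zeros(2)
        dz[j] = eps * max(1.0, abs(z0[j]))
        J[:, j] = (F(z0 + dz) - F(z0 - dz)) / (2 * dz[j])
    return J

def classify(J):
    """Node / spiral / center / saddle classification, mirroring Proposition 2."""
    lam = np.linalg.eigvals(J)
    if np.iscomplexobj(lam) and np.any(np.abs(lam.imag) > 1e-12):
        re = lam.real[0]
        return "center" if abs(re) < 1e-12 else ("stable spiral" if re < 0 else "unstable spiral")
    lam = np.sort(lam.real)
    if lam[0] * lam[1] < 0:
        return "saddle"
    return "stable node" if lam[1] < 0 else "unstable node"

# classify(jacobian(x0, y0, a=..., b=..., c=..., g=..., r=..., mu=..., alpha=..., beta=...))
```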
Figure 3 shows the critical points \(P_{0}\) and \(P_{2}\), with some trajectories in the phase plane. It can be seen that there is a limit cycle for the set of positive parameters \(a=5.55556\), \(b=1.24997\), \(c=30.4117\), \(g=0.0001\), \(r=16.6667\), \(\alpha=1.0\times 10^{-11}\), \(\mu=11.95\), and \(\beta=1.0\times 10^{-9}\), where the strong Allee effect is represented by \(\alpha\) and \(\beta\).
## IV Bifurcations
In this section, we introduce some crucial results on bifurcations present in the system (2). We first present bifurcations with oncological interpretation. Such is the case of the Hopf
bifurcation since it is a helpful tool to understand the occurrence of oscillatory behavior in the system (2) and can help explain how complex patterns of behavior can arise due to changes in the system parameters. The conditions for a generalized Hopf or Bautin bifurcation get then stated; this bifurcation occurs when a stable limit cycle loses its stability as a parameter is varied.
We also mention bifurcations in the system (2) that, despite not having an oncological interpretation because the parameters are not all positive, are vital to the study of the Hopf and Bautin bifurcations. These are the saddle-node and Takens-Bogdanov bifurcations.
**Proposition 3**: _The following set of parameters_
\[\varsigma=\left\{(a,\alpha,b,\beta,c,g,\mu,r)\mid SN\right\}, \tag{5}\]
_contains the saddle-node bifurcations of the system (2), and the expression for \(SN\) is in the Appendix A._
**Proof 3**: _Calculating the common roots \(R_{1}\) of \(\frac{dx}{dt}=f_{1}(x,y)\) and \(\frac{dy}{dt}=f_{2}(x,y)\) with respect to the variable \(y\), and then taking the discriminant of \(R_{1}\) with respect to the variable \(x\), we obtain the expression for \(SN\) in (A5)._
**Theorem 1**: _The following set of parameters_
\[H=\left\{(a,\alpha,b,\beta,c,g,\mu,r)\mid HOPF\right\}, \tag{6}\]
_contains the symmetric-saddle and Hopf bifurcations of the system (2)._
**Proof 4**: _With \(f_{1}(x,y)\) and \(f_{2}(x,y)\) as before, we consider the Jacobian matrix \(A\) of the system (2), with trace \(trA\). Let \(R_{2}\) be the common roots of the trace \(trA\) and \(f_{1}(x,y)\), and \(R_{3}\) the common roots of the trace \(trA\) and \(f_{2}(x,y)\), with respect to the variable \(y\). Finally, the \(HOPF\) expression gets found by calculating the common roots of \(R_{2}\) and \(R_{3}\) with respect to the variable \(x\), so the HOPF expression depends only on the parameters. We obtain the expression \(HOPF\) in (A7)._
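The elimination steps used in Proofs 3 and 4 can be reproduced with a computer algebra system via resultants; the sketch below keeps all parameters symbolic, which makes the final resultant extremely large, so in practice one would substitute numerical values for most parameters first. It only illustrates the procedure and is not the computation used in the paper.

```python
import sympy as sp

x, y = sp.symbols("x y")
a, b, c, g, r, mu, alpha, beta = sp.symbols("a b c g r mu alpha beta", positive=True)

F1 = r * x * (1 - b * x) * (x - beta) / (alpha + x) - a * x * y / (g + x)
F2 = c * x - mu * y

# Numerator of the trace of the Jacobian (the Hopf condition requires trace = 0).
trace_num = sp.numer(sp.together(sp.diff(F1, x) + sp.diff(F2, y)))
f1_num = sp.numer(sp.together(F1))
f2_num = sp.numer(sp.together(F2))

# Eliminate y between the trace and each equation, then eliminate x (as in Proof 4).
R2 = sp.resultant(trace_num, f1_num, y)
R3 = sp.resultant(trace_num, f2_num, y)
# HOPF = sp.factor(sp.resultant(R2, R3, x))   # huge when fully symbolic; substitute numbers first
```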
**Theorem 2**: _The following set of parameters_
\[B=\left\{(a,\alpha,b,\beta,c,g,\mu,r)\mid BT=(BT_{1})(BT_{2})(BTI)\right\}, \tag{7}\]
_contains the Bogdanov-Takens bifurcations of the system (2)._
\begin{table}
\begin{tabular}{l l} \hline \hline
**Description** & **Initial values** \\ \hline
\(a\): cancer clearance term limited & \(5.55556\) \\
\(1/b\): carrying capacity of tumor cells & \(1\times 10^{9}\) \\
\(c\): antigenicity & \(30.4117\) \\
\(\mu\): death rate of immune cells & \(11.95\) \\
\(r\): cancer growth rate & \(16.6667\) \\
\(g\): half-saturation for cancer clearance & \(0.0001\) \\
\(\alpha\): weak Allee effect constant & \(1\times 10^{-11}\) \\
\(\beta\): strong Allee effect constant & \(1\times 10^{-9}\) \\ \hline \hline
\end{table}
Table 1: Parameter dimensional values of the System (1).
**Proof 5**: _Calculating the roots in common between the \(SN\) and \(HOPF\) expressions results in a polynomial \(BT\) with more than thirty-five thousand components dependent on parameters and not variables, where_
\[BT_{1}=\left(a\alpha c-\alpha^{2}b\mu r-\alpha b\beta\mu r-\alpha\mu r-\beta\mu r \right)^{2},\]
_the \(BT_{2}\) expression is in Appendix A as (A6), and the \(BTI\) have many terms._
As mentioned above, the saddle-node and Taken Bogdanov bifurcations only make sense for positive parameters; however, in the \(SN\) and \(BT\) expressions, some parameters must be negative for these bifurcations to exist, then these cases lack oncology interpretations. Furthermore, using symbolic software, we can only handle two principal terms on \(BT\); the intractable term \(BTI\) contains more than eighteen thousand terms. The main result of this work gets stated in the following theorem.
**Theorem 3**: _The set_
\[Bau=\left\{(a,\alpha,b,\beta,c,g,\mu,r)\mid BAUT\right\}, \tag{8}\]
_contains the Generalized Hopf bifurcation points._
**Proof 6**: _Using (8.22) and (8.23) from [8], p. \(310\), we calculate the first and second Lyapunov. To find the expression that determines the generalized Hopf bifurcation, we calculate the roots in common between the first Lyapunov coefficient \(l_{1}(0)\) and the HOPF set, thus obtaining the BAUT expression analytically, where BAUT is a polynomial in parameters with more than one hundred forty-one thousand terms._
\[BAUT=\left(28000a^{14}\alpha^{10}bc^{14}g^{3}r^{3}+\cdot\cdot\cdot+16\alpha^{6 }b^{12}\beta^{10}g^{8}\mu^{19}r^{12}\right).\]
_The second Lyapunov coefficient gets calculated analytically, but due to its size (more than eighty thousand terms, but now containing the variable \(x_{0}\)), obtaining an analytical expression to determine the sign was impossible. In this case, we used numerical values of the parameters and variables in a neighborhood where we observed a generalized Hopf bifurcation in numerical continuation. Then numerically, the second Lyapunov coefficient was positive; even when the variable \(x_{0}\) is zero, the second Lyapunov coefficient has the following positive expression._
\[l_{2}(0)_{x_{0}\sim 0} = a^{2}\alpha^{14}\beta^{3}c^{2}g^{13}r^{3}\left(a\alpha^{2}c+ \beta r^{2}(\alpha+\beta)(\alpha b+1)\right)^{2}\] \[\times \left(10a\alpha^{2}c+3\beta r^{2}(\alpha+\beta)(\alpha b+1)\right)\]
Figure IV presents the local bifurcation diagram around the generalized Hopf point stated in Theorem 3; the diagram is similar to the one studied for the case without Allee effect and with weak Allee effect in [14]. Although the first and second Lyapunov coefficients have extensive expressions, it was possible to study the main features and their sign in the case of the second coefficient. The existence of a generalized Hopf bifurcation can exhibit the persistence of double-limit cycles in a neighborhood. These double cycles characterize the equilibrium phase in cancer (latent tumor).
Figure 5 shows the weak effect Allee scenario, this effect gets recovered when \(\beta=0\) in
Figure 4: Local Generalized Hopf bifurcation diagram of the system (2). The bifurcation separates two branches, \(H_{-}\) and \(H_{+}\), corresponding to the Hopf bifurcation with a negative and positive first Lyapunov coefficient. The parameters \(C_{1}\) and \(C_{2}\) indicate the Bautin bifurcation parameters in the model (2), which are \(c\) and \(\mu\) respectively.
Appendix Eq. (A8). The case when there is no Allee effect, neither strong nor weak, was studied in [14]. Around the base of this graph, it can get seen that there are double limit cycles on values of the parameter \(c\). The outer red cycle is unstable, and the inner blue cycle is stable, as shown by region 3 of the bifurcation diagram in Figure IV. The occurrence of double cycles is presented for values very close to the specific value of \(c=24\), marked by two red curves at the bottom of all the blue cycles marked by two red curves at the bottom of all the blue cycles.
Figure 6 shows a part of the level set corresponding to the saddle-symmetric and Hopf bifurcations analytically, as a function of the parameters \(c\) and \(m\), with a projection on the parameters values \(a=3.59937\), \(b=1\), \(g=0.0001\), \(r=1\), \(\alpha=0\), and \(\beta=0\). On the other hand, Figure 7 is its counterpart in numerical continuation, and this figure corresponds to Figure 3 of [14]. Similar to the observations made previously, Figures 8 and 9 represent the case of the weak Allee effect; the first figure mentioned corresponds to the level set of saddle-symmetric and Hopf bifurcations analytically, while the second represents its numerical continuation, and in the same way, as in the previous case.
plotted by a projection onto the parametric values \(a=5.55556\), \(b=1.24997\), \(g=0.0001\), \(c=1.25\), \(\alpha=0\), and \(\beta=0\). The Hopf curve in blue color and the cycle limit point curve in red color.
Figure 6: Projection analytical of the level set of the symmetric-saddle and Hopf bifurcations (without Allee effect) from Theorem 1 in [14], and here we recover it through our Theorem 1, with parameters \(a=3.59937\), \(b=1\), \(g=0.0001\), \(r=1\), \(\alpha=0\), and \(\beta=0\).
Figure 7: Numerical continuation of symmetric-saddle and Hopf bifurcations (without Allee effect) with parameters \(a=3.59937\), \(b=1\), \(g=0.0001\), \(r=1\), \(\alpha=0\), and \(\beta=0\). The Hopf curve in blue color and the cycle limit point curve in red color.
\(r=16.6667\), \(\alpha=1.0\times 10^{-11}\), and \(\beta=1.0\times 10^{-9}\). The numerical continuation is the reflection of the analytical calculations. It gets shown in Figure 11, where the curve is a function of the parameters \(c\) and \(\mu\), the limit point of the cycles curve (LPC) get also observed, and the upper part of the Bautin bifurcation point get found using Matcont, as well as the LPC curve.
The Bautin bifurcation we found we get characterized by the sign of the second Lyapunov coefficient and is a type of bifurcation where from an unstable limit cycle (Region 2 in Figure IV), a new stable cycle emerges as the bifurcation parameters are perturbed, crossing the vertical (Axis \(C_{2}\) region 3 in Figure IV), which subsequently collapses into the \(LPC\) curve (\(T\) curve in Figure IV), and a type of unstable limit cycle persists.
Figure 12 shows the numerical continuation of double limit cycles with a strong Allee effect; in this case, the occurrence of double cycles is presented for values very close to the specific value of \(c=26.9\). By varying the parameter \(c\), we can observe the presence of limit cycles with different periods with ranges of values for antigenicity in the interval \([26.9,27.7]\) with \(\beta=1.0\times 10^{-7}\).
Figure 8: Projection analytical of the level set of the symmetric-saddle and Hopf bifurcations (weak Allee effect) from Theorem 3 in [14], and here we recover it through our Theorem 1, with parameters \(a=5.55556\), \(b=1.24997\), \(g=0.0001\), \(r=16.6667\), \(\alpha=8.00021\times 10^{-5}\), and \(\beta=0\).
In Figures 5 and 12, limit cycles get shown for different values of the parameter \(c\); it can get seen that in the inner region of the paraboloid-shaped graph, there are limit cycles with a short period, which increases as the \(c\) value decreases until reaching the cycle in red, which indicates a cycle limit point. In addition, the weak and strong Allee effects get observed the difference in the growth intervals of tumor cells and the immune system cells since all the parameters are the same except for the value of \(\beta\).
## V Conclusion
This work analyzes a simple mathematical model to study the interaction between the immune system and tumor cell growth with strong and weak Allee effects. It is known that the Allee effect can represent interspecific competition; when the population is small, this is often reflected in a delay in population growth. An example of this is shown in Figure 1, where the tumor cell population's growth rate decreases as parameters \(\alpha\) and \(\beta\) become more considerable, even becoming extinct. This work focuses on one of the phases of the
Fig. 9: Numerical continuation of symmetric-saddle and Hopf curve bifurcations (weak Allee effect). Hopf curve, LPC curve, and Bautin point. Parameters \(a=5.55556\), \(b=1.24997\), \(g=0.0001\), \(r=16.6667\), \(\alpha=1.0\times 10^{-11}\), and \(\beta=0\).
immunoediting theory; this crucial stage in cancer is the equilibrium phase; in this phase, it gets considered that there is a latent tumor. Second, this work presents the saddle-symmetric and Hopf bifurcations analytically for both Allee effects, strong and weak, that correspond to the numerical continuations where the antigenicity of the tumor plays a key role.
Saddle-node bifurcation and Takens-Bogdanov bifurcation do not have a biological interpretation; however, the saddle-node curve helped calculate the Hopf bifurcation. The Hopf bifurcation indicates the existence of limit cycles. Bautin suggests two limit cycles; consequently, the model presents the equilibrium phase in immunoediting theory. In terms of cancer dynamics, a Bautin bifurcation determines another type of equilibrium, where double limit cycles occur, and therefore, it can also get characterized as the equilibrium phase. In this phase, a tumor keeps dormant, where tumor cells are unstable with mutations and resists the immune system (tumor cells use immune evasive strategies to grow and can be detected clinically, but in some cases, this can correspond to the escape phase). We can see that considering the same values of the system parameters, with a strong Allee effect, there is greater tumor control by having a higher antigenicity, in difference with the weak Allee effect.
Figure 10: Projection analytical of the level set of the symmetric-saddle and Hopf bifurcations (strong Allee effect) from Theorem 1 with parameters \(a=5.55556\), \(b=1.24997\), \(g=0.0001\), \(r=16.6667\), \(\alpha=1.0\times 10^{-11}\), and \(\beta=1.0\times 10^{-9}\)
effect, where we can observe a smaller antigenicity (see Figures 5 and 12).
According to the numerical continuation with a strong Allee effect, the equilibrium phase persists as the \(c\) antigenicity increases (Hopf limit cycles). This is evidenced when the mortality rate \(\mu\) tends to zero. However, compared with the numerical continuation without the Allee effect, the antigenicity no longer increases even though \(\mu\) tends to zero.
###### Acknowledgements.
MN-L acknowledges the financial support from the Asociacion Mexicana de Cultura, A.C. EH-L acknowledges the most valuable support from CONACYT (Consejo Nacional de Ciencia y Tecnologia) for a postdoctoral fellowship.
## References
* (1)
Figure 11: Numerical continuation of symmetric-saddle and Hopf curve bifurcations (strong Allee effect) with parameters \(a=5.55556\), \(b=1.24997\), \(g=0.0001\), \(r=16.6667\), \(\alpha=1.0\times 10^{-11}\), and \(\beta=1.0\times 10^{-9}\).
## Appendix A
In this appendix we present several equations including the result of the calculations in Propositions and Theorems. The following equation get obtained from the demonstration of Proposition 2.
\[F(x_{0},y_{0}:a,b,c,g,r,\mu,\alpha,\beta) = 16b^{2}r^{2}x_{0}^{6}-24br^{2}x_{0}^{5}+24b^{2}gr^{2}x_{0}^{5}-24b^ {2}r^{2}\beta x_{0}^{5}\] \[+ 9r^{2}x_{0}^{4}+9b^{2}r^{2}\beta^{2}x_{0}^{4}+\mu^{2}x_{0}^{4}-1 2acx_{0}^{4}+16abry_{0}x_{0}^{4}+34br^{2}\beta x_{0}^{4}\] \[+ 6r\mu x_{0}^{4}-8br\alpha\mu x_{0}^{4}+6br\beta\mu x_{0}^{4}-12bg^ {2}r^{2}x_{0}^{3}+12gr^{2}x_{0}^{3}-12br^{2}\beta^{2}x_{0}^{3}\] \[+ 12b^{2}gr^{2}\beta^{2}x_{0}^{3}+2g\mu^{2}x_{0}^{3}+2\alpha\mu^{2 }x_{0}^{3}-8acgx_{0}^{3}-12ary_{0}x_{0}^{3}+12abgry_{0}x_{0}^{3}\] \[+ 8abry_{0}\alpha x_{0}^{3}-12b^{2}g^{2}r^{2}\beta x_{0}^{3}+44bgr ^{2}\beta x_{0}^{3}-12r^{2}\beta x_{0}^{3}-12abry_{0}\beta x_{0}^{3}\] \[+ 4ay_{0}\mu x_{0}^{3}-14bgr\alpha\mu x_{0}^{3}+6r\alpha\mu x_{0}^ {3}+10bgr\beta\mu x_{0}^{3}-4r\beta x_{0}^{3}\] \[+ 6br\alpha\beta\mu x_{0}^{3}+4g^{2}r^{2}x_{0}^{2}+4a^{2}y_{0}^{2} x_{0}^{2}-8ac\alpha^{2}x_{0}^{2}+4b^{2}g^{2}r^{2}\beta^{2}x_{0}^{2}-14bgr^{2} \beta^{2}x_{0}^{2}\] \[+ 4r^{2}\beta^{2}x_{0}^{2}+g^{2}\mu^{2}x_{0}^{2}+\alpha^{2}\mu^{2 }x_{0}^{2}+4g\alpha\mu^{2}x_{0}^{2}-8agry_{0}x_{0}^{2}-12acg\alpha x_{0}^{2}\] \[- 6ary_{0}\alpha x_{0}^{2}+6abgry_{0}\alpha x_{0}^{2}+14bg^{2}r^{2 }\beta x_{0}^{2}-14gr^{2}\beta x_{0}^{2}\] \[+ 8ary_{0}\beta x_{0}^{2}-8abgry_{0}\beta x_{0}^{2}-6abry_{0} \alpha\beta x_{0}^{2}+4g^{2}r\mu x_{0}^{2}\] \[- 6bg^{2}r\alpha\mu x_{0}^{2}+10gr\alpha\mu x_{0}^{2}+6ay_{0} \alpha\mu x_{0}^{2}+4bg^{2}r\beta\mu x_{0}^{2}\] \[+ 6gr\beta\mu x_{0}^{2}+10bgr\alpha\beta\mu x_{0}^{2}-4r\alpha \beta\mu x_{0}^{2}-4acg\alpha^{2}x_{0}\] \[- 4bg^{2}r^{2}\beta^{2}x_{0}+4gr^{2}\beta^{2}x_{0}+2g\alpha^{2}\mu ^{2}x_{0}+2g^{2}\alpha\mu^{2}x_{0}\] \[- 4agrbitrary_{0}\alpha x_{0}-4g^{2}r^{2}\beta x_{0}+4agry_{0} \beta x_{0}+4ary_{0}\alpha\beta x_{0}\] \[+ 2ay_{0}\alpha^{2}\mu x_{0}+4g^{2}r\alpha\mu x_{0}-2agy_{0} \alpha\mu x_{0}-2g^{2}r\beta\mu x_{0}\] \[+ 4bg^{2}r\alpha\beta\mu x_{0}-6gr\alpha\beta\mu x_{0}+a^{2}y_{0}^ {2}\alpha^{2}+g^{2}r^{2}\beta^{2}\] \[+ g^{2}\alpha^{2}\mu^{2}+2agry_{0}\alpha\beta-2agy_{0}\alpha^{2} \mu-2g^{2}r\alpha\beta\mu\] \[+ 10gr\mu x_{0}^{3}-34bgr^{2}x_{0}^{4}-14bgr\mu x_{0}^{4}-20ac \alpha x_{0}^{3}-34b^{2}gr^{2}\beta x_{0}^{4}\] \[- 8br\mu x_{0}^{5}+9b^{2}g^{2}r^{2}x_{0}^{4}-6bg^{2}r\mu x_{0}^{3} -4abgry_{0}\alpha\beta x_{0}\]
The following equations are derived from the critical point \(P_{2}\) given by expression (3) and are as follows.
\[\begin{array}{rcl}\eta&=&2\sqrt[3]{2}a^{2}c^{2}+4\sqrt[3]{2}abcg\mu\\ &-&6\sqrt[3]{2}aabc\mu r-4\sqrt[3]{2}ab\beta c\mu r-4\sqrt[3]{2}ac\mu r+2\sqrt[ 3]{2}b^{2}g^{2}\mu^{2}r^{2}\\ &+&2\sqrt[3]{2}b^{2}\beta g\mu^{2}r^{2}+2\sqrt[3]{2}b^{2}\beta^{2}\mu^{2}r^{2} +2\sqrt[3]{2}bg\mu^{2}r^{2}\\ &-&2b\Gamma g\mu r-2\sqrt[3]{2}b\beta\mu^{2}r^{2}+2b\beta\Gamma\mu r+2^{2/3} \Gamma^{2}+2\sqrt[3]{2}\mu^{2}r^{2}+2\Gamma\mu r\end{array}\] (A2)
\[\begin{array}{rcl}\Gamma^{3}&=&-2a^{3}c^{3}-6a^{2}bc^{2}g\mu r+9a^{2}\alpha bc^{2} \mu r+6a^{2}b\beta c^{2}\mu r+6a^{2}c^{2}\mu r\\ &-&6ab^{2}cg^{2}\mu^{2}r^{2}+9a\alpha b^{2}cg\mu^{2}r^{2}+3ab^{2}\beta cg\mu^{2 }r^{2}\\ &-&9a\alpha b^{2}\beta c\mu^{2}r^{2}-6ab^{2}\beta^{2}c\mu^{2}r^{2}+3abcg\mu^{2 }r^{2}-9a\alpha bc\mu^{2}r^{2}\\ &-&3ab\beta c\mu^{2}r^{2}-6ac\mu^{2}r^{2}-2b^{3}g^{3}\mu^{3}r^{3}-3b^{3}\beta g ^{2}\mu^{3}r^{3}\\ &+&3b^{3}\beta^{2}g\mu^{3}r^{3}+2b^{3}\beta^{3}\mu^{3}r^{3}-3b^{2}g^{2}\mu^{3} r^{3}-12b^{2}\beta g\mu^{3}r^{3}\\ &-&3b^{2}\beta^{2}\mu^{3}r^{3}+3bg\mu^{3}r^{3}-3b\beta\mu^{3}r^{3}+2\mu^{3}r^{ 3}+\sqrt{R_{\Gamma}},\end{array}\] (A3)
where
\[\begin{array}{rcl}R_{\Gamma}&=&-27b^{4}g^{4}r^{6}\mu^{6}-54b^{3}g^{3}r^{6}\mu ^{6}-27b^{2}g^{2}r^{6}\mu^{6}\\ &-&27b^{4}r^{6}\beta^{4}\mu^{6}-27b^{6}g^{2}r^{6}\beta^{4}\mu^{6}-54b^{5}gr^{6} \beta^{4}\mu^{6}+54b^{3}r^{6}\beta^{3}\mu^{6}-54b^{6}g^{3}r^{6}\beta^{3}\mu^{6} \\ &-&54b^{5}g^{2}r^{6}\beta^{3}\mu^{6}+54b^{4}gr^{6}\beta^{3}\mu^{6}-27b^{6}g^{4} r^{6}\beta^{2}\mu^{6}\\ &+&54b^{5}g^{3}r^{6}\beta^{2}\mu^{6}-27b^{2}r^{6}\beta^{2}\mu^{6}+162b^{4}g^{2} r^{6}\beta^{2}\mu^{6}+54b^{3}gr^{6}\beta^{2}\mu^{6}\\ &+&54b^{5}g^{4}r^{6}\beta\mu^{6}+54b^{4}g^{3}r^{6}\beta\mu^{6}-54b^{3}g^{2}r^{6 }\beta\mu^{6}-54b^{2}gr^{6}\beta\mu^{6}\\ &-&54ab^{3}cg^{3}r^{5}+54ab^{2}cg^{2}r^{5}\mu^{5}+54ab^{5}cg^{2}r^{5}\beta^{3} \mu^{5}+54ab^{3}cr^{5}\beta^{3}\mu^{5}\\ &+&216ab^{4}cgr^{5}\beta^{3}\mu^{5}-54ab^{4}cr^{5}\alpha\beta^{3}\mu^{5}+54ab^{ 5}cgr^{5}\alpha\beta^{3}\mu^{5}\\ &-&54ab^{5}cg^{3}r^{5}\beta^{2}\mu^{5}+108ab^{4}cg^{2}r^{5}\beta^{2}\mu^{5}+54 ab^{2}cr^{5}\beta^{2}\mu^{5}\\ &-&108ab^{3}cgr^{5}\beta^{2}\mu^{5}+216ab^{5}cg^{2}r^{5}\alpha\beta^{2}\mu^{5} +216ab^{3}cr^{5}\alpha\beta^{2}\mu^{5}\\ &+&108ab^{4}cgr^{5}\alpha\beta^{2}\mu^{5}+54ab^{4}cg^{3}r^{5}\alpha\mu^{5}+216 ab^{3}cg^{2}r^{5}\alpha\mu^{5}\\ &+&54ab^{2}cgr^{5}\alpha\mu^{5}+216ab^{4}cg^{3}r^{5}\beta\mu^{5}+108ab^{3}cg^{2} r^{5}\beta\mu^{5}\\ &+&216ab^{2}cgr^{5}\beta\mu^{5}+54ab^{5}cg^{3}r^{5}\alpha\beta\mu^{5}-108ab^{4} cg^{2}r^{5}\alpha\beta\mu^{5}-54ab^{2}cr^{5}\alpha\beta\mu^{5}\\ &+&108ab^{3}cgr^{5}\alpha\beta\mu^{5}-27a^{2}b^{2}c^{2}g^{2}r^{4}\mu^{4}-27a^{2} b^{2}c^{2}r^{4}\alpha^{2}\mu^{4}\\ &-&27a^{2}b^{4}c^{2}g^{2}r^{4}\alpha^{2}\mu^{4}-270a^{2}b^{3}c^{2}gr^{4}\alpha^{2 }\mu^{4}-27a^{2}b^{2}c^{2}r^{4}\beta^{2}\mu^{4}\\ &-&27a^{2}b^{4}c^{2}g^{2}r^{4}\beta^{2}\mu^{4}-270a^{2}b^{3}c^{2}gr^{4}\beta^{ 2}\mu^{4}-27a^{2}b^{4}c^{2}r^{4}\alpha^{2}\beta^{2}\mu^{4}\\ &+&108a^{2}b^{3}c^{2}r^{4}\alpha\beta^{2}\mu^{4}-108a^{2}b^{4}c^{2}gr^{4}\alpha \beta^{2}\mu^{4}\\ &+&108a^{2}b^{3}c^{2}g^{2}r^{4}\alpha\mu^{4}-108a^{2}b^{2}c^{2}gr^{4}\alpha \mu^{4}+270a^{2}b^{3}c^{2}g^{2}r^{4}\beta\mu^{4}\\ &-&270a^{2}b^{2}c^{2}gr^{4}\beta\mu^{4}+270a^{2}b^{3}c^{2}r^{4}\alpha^{2}\beta \mu^{4}-270a^{2}b^{4}c^{2}gr^{4}\alpha^{2}\beta\mu^{4}\\ &+&108a^{2}b^{2}c^{2}r^{4}\alpha\beta\mu^{4}+108a^{2}b^{4}c^{2}g^{2}r^{4}\alpha \beta\mu^{4}\\ &-&810a^{2}b^{3}c^{2}gr^{4}\alpha\beta\mu^{4}+108a^{3}b^{3}c^{3}r^{3}\alpha^{3} \mu^{3}+54a^{3}b^{2}c^{3}r^{3}\alpha^{2}\mu^{3}\\ &-&54a^{3}b^{3}c^{3}gr^{3}\alpha^{2}\mu^{3}+54a^{3}b^{2}c^{3}gr^{3}\alpha\mu^{3} +108a^{3}b^{2}c^{3}gr^{3}\beta\mu^{3}\\ &+&54a^{3}b^{3}c^{3}r^{3}\alpha^{2}\beta\mu^{3}-54a^{3}b^{2}c^{3}r^{3}\alpha \beta\mu^{3}+54a^{3}b^{3}c^{3}gr^{3}\alpha\beta\mu^{3}-27a^{4}b^{2}c^{4}r^{2} \alpha^{2}\mu^{2}\end{array}\] (A4)
Here is the equation that we obtained in the proof of Proposition 3:
\[\begin{array}{lcl}SN&=&a^{4}\alpha^{2}c^{4}-4a^{3}br\alpha^{3}\mu c^{3}-2a^{3} r\alpha^{2}\mu c^{3}+2a^{3}bgr\alpha^{2}\mu c^{3}\\ &-&2a^{3}gr\alpha\mu c^{3}-2a^{3}br\alpha^{2}\beta\mu c^{3}-4a^{3}gr\beta\mu c^{3 }\\ &+&2a^{3}r\alpha\beta\mu c^{3}-2a^{3}bgr\alpha\beta\mu c^{3}+a^{2}g^{2}r^{2} \mu^{2}c^{2}\\ &+&a^{2}r^{2}\alpha^{2}\mu^{2}c^{2}+a^{2}b^{2}g^{2}r^{2}\alpha^{2}\mu^{2}c^{2}+ 10a^{2}bgr^{2}\alpha^{2}\mu^{2}c^{2}\\ &+&a^{2}r^{2}\beta^{2}\mu^{2}c^{2}+a^{2}b^{2}g^{2}r^{2}\beta^{2}\mu^{2}c^{2}+ 10a^{2}bgr^{2}\beta^{2}\mu^{2}c^{2}\\ &+&a^{2}b^{2}r^{2}\alpha^{2}\beta^{2}\mu^{2}c^{2}-4a^{2}br^{2}\alpha\beta^{2} \mu^{2}c^{2}+4a^{2}b^{2}gr^{2}\alpha\beta^{2}\mu^{2}c^{2}\\ &-&4a^{2}bg^{2}r^{2}\alpha\mu^{2}c^{2}+4a^{2}gr^{2}\alpha\mu^{2}c^{2}-10a^{2}bg ^{2}r^{2}\beta\mu^{2}c^{2}\\ &+&10a^{2}gr^{2}\beta\mu^{2}c^{2}-10a^{2}br^{2}\alpha^{2}\beta\mu^{2}c^{2}+10a^ {2}b^{2}gr^{2}\alpha^{2}\beta\mu^{2}c^{2}-4a^{2}r^{2}\alpha\beta\mu^{2}c^{2}\\ &-&4a^{2}b^{2}g^{2}r^{2}\alpha\beta\mu^{2}c^{2}+30a^{2}bgr^{2}\alpha\beta\mu^{ 2}c^{2}+2abg^{3}r^{3}\mu^{3}c\\ &-&2ag^{2}r^{3}\mu^{3}c-2ab^{3}g^{2}r^{3}\beta^{3}\mu^{3}c-2abr^{3}\beta^{3} \mu^{3}c\\ &-&8ab^{2}gr^{3}\beta^{3}\mu^{3}c+2ab^{2}r^{3}\alpha\beta^{3}\mu^{3}c-2ab^{3} gr^{3}\alpha\beta^{3}\mu^{3}c+2ab^{3}g^{3}r^{3}\beta^{2}\mu^{3}c\\ &-&4ab^{2}g^{2}r^{3}\beta^{2}\mu^{3}c-2ar^{3}\beta^{2}\mu^{3}c+4abgr^{3}\beta^ {2}\mu^{3}c\\ &-&8ab^{3}g^{2}r^{3}\alpha\beta^{2}\mu^{3}c-8abr^{3}\alpha\beta^{2}\mu^{3}c-4 ab^{2}gr^{3}\alpha\beta^{2}\mu^{3}c\\ &-&2ab^{2}g^{3}r^{3}\alpha\mu^{3}c-8abg^{2}r^{3}\alpha\mu^{3}c-2agr^{3}\alpha \mu^{3}c\\ &-&8ab^{2}g^{3}r^{3}\beta\mu^{3}c-4abg^{2}r^{3}\beta^{3}c-8agr^{3}\beta\mu^{3} c\\ &-&2ab^{3}g^{3}r^{3}\alpha\beta\mu^{3}c+4ab^{2}g^{2}r^{3}\alpha\beta\mu^{3}c+2ar^{3}\alpha\beta\mu^{3}c-4abgr^{3} \alpha\beta\mu^{3}c\\ &+&b^{2}g^{4}r^{4}\mu^{4}+2bg^{3}r^{4}\mu^{4}+g^{2}r^{4}\mu^{4}+b^{2}r^{4} \beta^{4}\mu^{4}+b^{4}g^{2}r^{4}\beta^{4}\mu^{4}\\ &+&2b^{3}gr^{4}\beta^{4}\mu^{4}+2b^{4}g^{3}r^{4}\beta^{3}\mu^{4}+2b^{3}g^{2}r^ {4}\beta^{3}\mu^{4}\\ &-&2br^{4}\beta^{3}\mu^{4}-2b^{2}gr^{4}\beta^{3}\mu^{4}+b^{4}g^{4}r^{4}\beta^{ 2}\mu^{4}\\ &-&2b^{3}g^{3}r^{4}\beta^{2}\mu^{4}-6b^{2}g^{2}r^{4}\beta^{2}\mu^{4}-2bgr^{4} \beta^{2}\mu^{4}+r^{4}\beta^{2}\mu^{4}\\ &-&2b^{3}g^{4}r^{4}\beta\mu^{4}-2b^{2}g^{3}r^{4}\beta\mu^{4}+2bg^{2}r^{4} \beta\mu^{4}+2gr^{4}\beta\mu^{4},\end{array}\] (A6)
We derived the following expression in the proof of Theorem 2.
\[\begin{array}{lcl}BT_{2}&=&a^{2}b^{2}\beta^{3}c^{2}r^{3}-2a^{2}b\beta^{2}c^{ 2}r^{3}-4a^{2}\alpha b\beta c^{2}\mu r^{2}-2a^{2}b\beta^{2}c^{2}\mu r^{2}\\ &-&a^{2}\alpha^{2}bc^{2}\mu^{2}r-a^{2}\alpha b\beta c^{2}\mu^{2}r+a^{2}\beta c^{ 2}r^{3}-2a^{2}\beta c^{2}\mu r^{2}\\ &-&a^{2}\alpha c^{2}\mu^{2}r+a\alpha b^{2}\beta^{2}c\mu^{3}r^{2}-2a\alpha b\beta c \mu^{3}r^{2}-2a\alpha^{2}bc\mu^{4}r\\ &+&a\alpha c\mu^{5}+a\alpha c\mu^{3}r^{2}+2a\beta c\mu^{4}r+\alpha^{2}(-b)\mu^ {6}r-\alpha b\beta\mu^{6}r-\alpha\mu^{6}r-\beta\mu^{6}r\end{array}\] (A7)
As with the previous expression, the following equation is obtained in the proof of Theorem 2.
Next we present the complete equation describing the generalized Hopf bifurcation.
(A8)
|
2308.11496 | Ultra diffuse galaxies in the Hydra I cluster from the LEWIS Project:
Phase-Space distribution and globular cluster richness | Although ultra diffuse galaxies (UDGs) are found in large numbers in clusters
of galaxies, the role of the cluster environment in shaping their low surface
brightness and large sizes is still uncertain. Here we examine a sample of UDGs
in the Hydra I cluster (D = 51 Mpc) with new radial velocities obtained as part
of the LEWIS (Looking into the faintest with MUSE) project using VLT/MUSE data.
Using a phase-space, or infall diagnostic, diagram we compare the UDGs to other
known galaxies in the Hydra I cluster and to UDGs in other clusters. The UDGs,
along with the bulk of regular Hydra I galaxies, have low relative velocities
and are located near the cluster core, and thus consistent with very early
infall into the cluster. Combining with literature data, we do not find the
expected trend of GC-rich UDGs associated with earlier infall times. This
result suggests that quenching mechanisms other than cluster infall should be
further considered, e.g. quenching by strong feedback or in cosmic sheets and
filaments. Tidal stripping of GCs in the cluster environment also warrants
further modelling. | Duncan Forbes, Jonah Gannon, Enrichetta Iodice, Michael Hilker, Goran Doll, Chiara Buttitta, Antonio La Marca, Magda Arnaboldi, Michele Cantiello, G. D'Ago, Jesus Falcon Barroso, Laura Greggio, Marco Gullieuszik, Johanna Hartke, Steffen Mieske, Marco Mirabile, Roberto Rampazzo, Marina Rejkuba, Marilena Spavone, Chiara Spiniello, Giulio Capasso | 2023-08-22T15:18:11Z | http://arxiv.org/abs/2308.11496v1 | Ultra diffuse galaxies in the Hydra I cluster from the LEWIS Project: Phase-Space distribution and globular cluster richness
###### Abstract
Although ultra diffuse galaxies (UDGs) are found in large numbers in clusters of galaxies, the role of the cluster environment in shaping their low surface brightness and large sizes is still uncertain. Here we examine a sample of UDGs in the Hydra I cluster (D = 51 Mpc) with new radial velocities obtained as part of the LEWIS (Looking into the faintest with MUSE) project using VLT/MUSE data. Using a phase-space, or infall diagnostic, diagram we compare the UDGs to other known galaxies in the Hydra I cluster and to UDGs in other clusters. The UDGs, along with the bulk of regular Hydra I galaxies, have low relative velocities and are located near the cluster core, and thus consistent with very early infall into the cluster. Combining with literature data, we do not find the expected trend of GC-rich UDGs associated with earlier infall times. This result suggests that quenching mechanisms other than cluster infall should be further considered, e.g. quenching by strong feedback or in cosmic sheets and filaments. Tidal stripping of GCs in the cluster environment also warrants further modelling.
keywords: galaxies: star clusters: general -- galaxies: haloes -- galaxies: structure -- galaxies: photometry
## 1 Introduction
The possible formation pathways of ultra-diffuse galaxies (UDGs) have been a subject of an ongoing vigorous debate since 2015, when a population of these extremely diffuse galaxies was identified in the Coma cluster using the Dragonfly Telephoto Array (van Dokkum et al., 2015). Existing in all environments, they are most common in clusters with several hundred found in the Coma cluster (Yagi et al., 2016; Alabi et al., 2018). This significant contribution to our 'census of galaxies' has prompted numerous simulation studies and accompanying predictions (see Sales et al. (2020) and references therein). These simulations can be broadly placed in two categories; internal processes (e.g. episodic supernova feedback) or external (e.g. tidal effects in a dense environment). Some combination of both processes may be operating along with past galaxy infall (and subsequent quenching) into clusters.
UDGs have low surface brightnesses (they are defined to have central values in the g band of \(\mu>24\) mag. per sq. arcsec) so that spectroscopic studies of them push even 8-10m class telescopes, with efficient low surface brightness instruments, such as KCWI on Keck or MUSE on VLT, to their limits. While strictly speaking dwarf galaxies with M\({}_{*}<10^{9}\) M\({}_{\odot}\), UDGs are unlike classical dwarfs as they have extreme sizes with effective radii R\({}_{e}>1.5\) kpc (i.e. comparable to the disk of the Milky Way with R\({}_{e}\sim\)3.5 kpc). They also reveal
another unexplained feature, with some hosting up to ten times more globular clusters (GCs) than classical dwarf galaxies of the same luminosity (Forbes et al., 2020). Their very existence in clusters and their generally old stellar populations suggests that some may be protected within an overly massive dark matter halo. The latter is supported by the correlation between GC numbers and host galaxy halo mass for normal galaxies, e.g. Burkert & Forbes (2020).
In the standard picture of dwarf galaxy evolution (Mistani et al., 2016), dwarfs that fell into clusters at early times will have experienced intense star formation, prior to, or at the start of, infall (which is also expected to give rise to a high fraction of stars in bound star clusters). This is followed by quenching of any further star formation as the infall proceeds. Both of these effects would lead to a high fraction of GCs relative to their field stars (Mistani et al., 2016; Ramos-Almendares et al., 2020). Indeed trends of GC richness and [\(\alpha\)/Fe] ratios with clustercentric radius provide some observational support for this interpretation (Peng et al., 2008; Liu et al., 2016; Romero-Gomez et al., 2023). This early-infall, or biasing, has been invoked for UDGs by Carleton et al. (2021) who include cluster tidal effects within the IllustrisTNG simulation and simplified GC formation physics. Similar to classical dwarfs, they predict that early-infall UDGs should be rich in GCs. Based on a semi-empirical model, Trujillo-Gomez et al. (2022) also predict that galaxies near the cluster core form core GCs.
Using phase-space, or infall diagnostic, diagrams of the type proposed by Rhee et al. (2017) one can investigate whether GC richness depends on UDG cluster infall time. No trend between GC richness and very early infall times might suggest that GC formation and quenching occurred before cluster infall. While low mass galaxies typically quench at late times, there is a considerable range in quenching times with some low mass galaxies quenching at z \(\sim\) 2 or 10.5 Gyr ago (Moster et al., 2020). Quenching at early times via stellar feedback (Stinson et al., 2013) may be one possibility. This early quenching applied to UDGs has been described by Danieli et al. (2022). Another possibility may be quenching via the interaction with cosmic sheets or filaments (Pasha et al., 2023). A first attempt at this sort of infall analysis applied to GCs was presented in Gannon et al. (2022) for several UDGs in the Coma and Perseus clusters. No clear signal was found but the sample was small with just over a dozen UDGs and with a bias towards GC-rich UDGs.
In this Letter we examine the infall diagnostic diagram for a new sample of UDGs in the Hydra I cluster (A1060; D = 51 \(\pm\) 4 Mpc). The Hydra I cluster appears to be fairly dynamically relaxed (Ventimiglia et al., 2011) but also reveals hints of substructures (Lima-Dias et al., 2021; La Marca et al., 2022), an infalling group of galaxies (Arnaboldi et al., 2012), and evidence for ram pressure stripping (Wang et al., 2021; Iodice et al., 2021). The observed UDGs are located near the cluster core and the northern subgroup, with all lying within 0.3 virial radii (R\({}_{200}\)) of the Hydra I cluster centre. Each was observed using MUSE on the VLT as part of the ongoing LEWIS (Looking into the faintest with MUSE) project. Details of the project, including galaxy radial velocities, positions, GC counts, etc., are given in Paper I by Iodice et al. (2023, in press). The GC counts are based on deep, optical multi-filter imaging with the VST as part of the VEGAS project (Iodice et al., 2020) and will be updated after the full analysis of the MUSE data. Here we explore the distribution of UDGs in phase space and investigate whether they reveal any trend in this space with their GC richness. We also include similar data for UDGs in other nearby clusters. For the Hydra I cluster we adopt the same parameters as used by La Marca et al. (2022), i.e. cz = 3683 \(\pm\) 46 km s\({}^{-1}\), \(\sigma\) = 724 \(\pm\) 31 km s\({}^{-1}\), and virial parameters R\({}_{200}\) = 1.6 Mpc, M\({}_{200}\) = 2 \(\times\) 10\({}^{14}\) h\({}^{-1}\) M\({}_{\odot}\), and take its centre as NGC 3311 (RA = 159.17842, Dec = -27.528339). These values are similar to those found by Lima-Dias et al. (2021) who recently studied Hydra I galaxies out to the virial radius.
## 2 Infall Diagnostic diagram for Hydra I cluster galaxies
Rhee et al. (2017) carried out cosmological simulations of several galaxy clusters and examined the resulting distribution of galaxies in phase-space (i.e. velocity of the galaxy relative to the mean cluster velocity normalised by the cluster velocity dispersion versus the galaxy's clustercentric radius normalised by the cluster virial radius). Based on the infall time of galaxies, they divided this diagram into several infall zones, ranging from those that fell into the cluster at very early times, to those that are yet to fall in. Thus the location of galaxies in this diagram provides an "infall diagnostic" which is statistical in nature; additional scatter is introduced when using 2D projected radii (as is the case for observational data). For example, the 'very early infall' (or ancient infaller) zone in the simulation is occupied by a slight majority (52%) of galaxies that have resided in the cluster for more than 6.45 Gyr. Projection effects mean that the true clustercentric radius for some galaxies is larger in 3D than observed in 2D. For most galaxies this effect should be less than a factor of two relative to the projected radius.
In Fig. 1 we show such an infall diagnostic diagram for all galaxies in the Hydra I cluster out to half the virial radius. This includes giant and dwarf galaxies from the study of Christlein & Zabludoff (2003), plus the addition of UDGs and 3 low surface brightness (LSB) galaxies that have UDG-like sizes but are slightly brighter, from Iodice et al. (2023, in press). We find that the bulk of the non-UDG Hydra I galaxies are located within the 'very early infall' zone. The simulation of Rhee et al. (2017) predicts that just over half of these would have been part of the cluster for at least 6.45 Gyr. There are also galaxies located in later infall zones and three galaxies that may lie outside of the cluster with large relative velocities - these could be backsplash galaxies (having passed through the cluster) or simply galaxies that are yet to fall into the cluster.
If we examine giant and classical dwarf galaxies separately (divided at M\({}_{R}\) = -18 or m\({}_{R}\) = 15.5) there is no clear difference between them in terms of their infall properties. Compared to the UDGs they appear to scatter to higher relative velocities on average. A more quantitative measure of the differences in their infall properties can be obtained from the product of their relative velocity from the cluster mean and their radial position: \(\Delta\)V/\(\sigma\) \(\times\) R/R\({}_{200}\). Restricting to R/R\({}_{200}<0.3\), as probed by the imaging, we find mean values (and error on the mean) of \(\Delta\)V/\(\sigma\) \(\times\) R/R\({}_{200}\) = 0.83 (\(\pm\) 0.07) \(\times\) 0.15 (\(\pm\) 0.01) = 0.12 (\(\pm\) 0.02) for giant galaxies and 0.88 (\(\pm\) 0.06) \(\times\) 0.16 (\(\pm\) 0.01) = 0.14 (\(\pm\) 0.02) for classical dwarfs. For the UDGs the mean value is \(\Delta\)V/\(\sigma\) \(\times\) R/R\({}_{200}\) = 0.80 (\(\pm\) 0.17) \(\times\) 0.16 (\(\pm\) 0.02) = 0.13 (\(\pm\) 0.04). This indicates that UDGs are similarly concentrated in phase-space to the other cluster galaxies. Also, while UDGs have a similar distribution in clustercentric radius, their velocities are closer to the cluster mean than either giants or classical dwarfs. We note that Lima-Dias et al. (2021) also found passive early-type galaxies to be concentrated in the cluster core. The LSB galaxies in Fig. 1 are found in a range of infall zones, from early to late infall.
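As a simple illustration of this concentration statistic, the short Python sketch below recomputes the mean \(\Delta\)V/\(\sigma\), the mean R/R\({}_{200}\) and the product of the two means, using the cluster parameters adopted above; the galaxy velocities and projected radii listed are purely hypothetical stand-ins for the actual catalogues.

```
import numpy as np

# Hydra I parameters adopted in the text: cz and sigma in km/s, R200 in Mpc.
CZ_CLUSTER, SIGMA_CLUSTER, R200 = 3683.0, 724.0, 1.6

def infall_statistic(cz_gal, r_proj_mpc):
    """Mean |Delta V|/sigma, mean R/R200 (each with the error on the mean),
    and the product of the two means used as the concentration statistic."""
    dv = np.abs(np.asarray(cz_gal, dtype=float) - CZ_CLUSTER) / SIGMA_CLUSTER
    rr = np.asarray(r_proj_mpc, dtype=float) / R200
    sem = lambda x: x.std(ddof=1) / np.sqrt(len(x))
    return (dv.mean(), sem(dv)), (rr.mean(), sem(rr)), dv.mean() * rr.mean()

# Purely hypothetical galaxy values, standing in for the actual catalogues:
cz = [3550.0, 3800.0, 3420.0, 3905.0, 3660.0]
r = [0.10, 0.25, 0.31, 0.18, 0.40]
print(infall_statistic(cz, r))
```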
As might be expected from their inner cluster position, our UDGs were among the earliest inhabitants of the cluster, infalling at least 6.45 Gyr ago according to simulations of Rhee et al. (2017). They would be expected to have star formation (SF) histories that indicate early quenching. A preliminary analysis by Iodice et al. (2023,
submitted) for one UDG (UDG11) indicates an old age of \(\sim\)10 Gyr, suggestive of early quenching. Future analysis will also include the [\(\alpha\)/Fe] ratios, which appear to be sensitive indicators of SF histories for low mass galaxies (see Ferre-Mateu et al. 2023, submitted, for results on UDGs in other clusters, and Romero-Gomez et al. (2023) for dwarf galaxies in the Fornax cluster). We note that the study of Lima-Dias et al. (2021) found 88% of Hydra I galaxies (with log M\({}_{*}\)\(>\) 8.5) to be quenched, i.e. no sign of ongoing star formation.
## 3 Infall Diagnostic Diagram for UDGs in Several Clusters
In Fig. 2 we show the UDGs from the Hydra I cluster along with those from the literature and coded by globular cluster (GC) richness. Total GC counts for the Hydra I UDGs are determined in (Iodice et al., 2020; La Marca et al., 2022) and listed again in Paper I (Iodice et al., 2023, submitted). Literature data comes from Gannon et al. (2022) and the recent work of Toloba et al. (2023). The GC counts are almost exclusively based on imaging (i.e. lacking radial velocities) and we follow Gannon et al. (2022) assigning a somewhat arbitrary separation between rich and poor GC systems at 20 GCs. This corresponds to a halo mass of 10\({}^{11}\) M\({}_{\odot}\) using the scaling relation of Burkert & Forbes (2020). Below 20 GCs the scaling relation is less predictive of halo mass due to increased scatter. By this definition, all of the UDGs in the Hydra I cluster are GC-poor (ranging from no GCs for several UDGs to 15 GCs for UDG3) and this is unlikely to change significantly when the full set of MUSE spectroscopic data is available. Given the relatively small stellar mass range of the Hydra I UDGs, a fixed GC number corresponds closely to a GC system total mass per host galaxy stellar mass. If we assume the same average mass for a GC of 10\({}^{5}\) M\({}_{\odot}\), this ratio is \(<\)1.2% for all of the observed Hydra I UDGs. While some Coma cluster UDGs also have a ratio \(<\)1.2% the majority have much higher ratios, with up to \(\sim\)10% of the galaxy stellar mass in their GC system, see figure 4 of Forbes et al. (2020).
Before interpreting Fig. 2 there are various caveats and selection effects that should be borne in mind. Firstly, we note that some of the literature UDGs lack firm GC counts and their rich/poor status is on the basis of a visual estimate only (Gannon et al., 2022). Secondly, the literature sample is subject to sample selection effects. The Coma cluster sample of UDGs comes from studies that have focused on GC-rich galaxies or they have focused on a narrow range in clustercentric radius (i.e. around 0.12 R/R\({}_{200}\) in the Coma cluster). Observations of the Perseus cluster UDGs have so far avoided the cluster inner regions. The Virgo UDG sample is relatively small and mostly GC
Figure 1: Infall diagnostic diagram for non-UDG (giants, dwarfs and LSB galaxies) and UDGs in the Hydra I cluster. The diagram shows the relative line-of-sight velocity of each galaxy normalised by the cluster velocity dispersion against the projected radius normalised by the virial radius. Regions of the diagram are shaded according to their infall times from the cosmological simulations of Rhee et al. (2017) as indicated in the legend. The plot shows that most UDGs and non-UDG galaxies of the Hydra I cluster lie within the very early infall zone – the simulations indicate that around half of the galaxies in this zone were part of the cluster at least 6.45 Gyr ago.
poor. In terms of a selection bias, the Hydra I UDGs are the closest to being a representative sample of UDGs in the cluster, however only the inner 0.3 R/R\({}_{200}\) was imaged in Iodice et al. (2020). Thus, we may be missing the late infalling UDGs. We note that La Marca et al. (2022b) estimated a total UDG population out to the virial radius of \(48\pm 10\) and so many outer region UDGs, which may be late infallers, remain to be studied.
The UDG infall diagram does _not_ clearly show GC-rich UDGs to be located in earlier infall zones as might be expected in the standard picture of dwarf galaxy quenching due to infall which leads to richer GC systems (as described in the Introduction). Indeed, the opposite trend may be present, such that in the very early infall region there are 13 GC-poor UDGs and 5 GC-rich ones, whereas outside of this region (but within 0.5 R/R\({}_{200}\)) there are only 6 GC-poor and 6 GC-rich UDGs. Again, we caution that selection and projection effects make conclusions tentative.
## 4 Discussion
Alabi et al. (2018) used the phase-space diagram to investigate the infall epoch of UDGs, classical dwarfs and other galaxies in the Coma cluster (a massive, dynamically relaxed cluster). Similar to the Hydra I cluster, they saw little difference between classical dwarfs and the giant galaxies. For the UDGs, they identified both early and late infallers. A similar situation might be present for Hydra I UDGs if outer region UDGs were probed. Alabi et al. (2018) did not include GC richness in their study.
Given the lack of a clear signal for 'infall bias' in the GC richness of UDGs, alternatives should be further investigated. As noted in the Introduction, quenching at very early times prior to cluster infall should be considered. For such UDGs, we would expect very old ages, low metallicities (similar to the metal-poor subpopulation of GCs) and high alpha overabundances (indicative of rapid star formation). A high fraction of mass in GCs relative to field stars might also be expected. A UDG in the NGC 5846 group (NGC5846_UDG1), discovered in VEGAS imaging (Forbes et al., 2019), may be an example of such a failed galaxy, having a remarkable 13% of its current stellar mass in the form of GCs (Danieli et al., 2022). As noted above, the observed Hydra I UDGs (from the inner cluster regions) all have less than 1.2% of their stellar mass in GCs.
Another possibility is that the Hydra I UDGs are GC-poor because they have been tidally stripped from their host galaxy. This tidal stripping would have to remove most of the dark matter halo before any GCs, since the dark matter is more radially extended than GC systems. Continued stripping would be expected to remove GCs and
Figure 2: Infall diagnostic diagram for only UDGs in the Hydra I, Coma, Virgo and Perseus clusters. Regions of the diagram are shaded according to their infall times from the cosmological simulations of Rhee et al. (2017). As per the legend, UDGs in different clusters are denoted by different symbols. Symbols are outlined in red (if GC-rich) or blue (if GC-poor), and without an outline if the GC properties are unknown. See main text for discussion of selection effects in the UDG samples. Globular cluster (GC) rich UDGs are _not_ predominately found in the very early infall region, indeed the data suggest that very early infall UDGs tend to be GC-poor.
stars in roughly equal proportions since the radial extent of GC systems for UDGs closely follows that of the galaxy stars. As well as operating in clusters, tidal stripping of UDGs may occur in galaxy groups. We note that UDGs in the field do tend to be GC-poor (Jones et al., 2023) however this is unlikely to be due to tidal effects and rather some internal process.
The Hydra I UDGs are generally well-fit by a single Sersic profile however a few show hints of asymmetries that might point to a tidal interaction (Iodice et al., 2020; La Marca et al., 2022). For the one UDG examined in detail by Iodice et al. (2023, submitted) there is some evidence for an isophotal twist in the MUSE data. This might indicate tidal interaction (or a triaxial potential). Furthermore, a Hydra I UDG first identified by Misgeld et al. (2008) reveals a clear S-shape indicative of ongoing tidal interaction (Koch et al., 2012). In the case of Coma cluster UDGs, Mowla et al. (2017) looked specifically for signs of tidal features via position angle twists in a stacked sample, finding no evidence for such twists.
Sales et al. (2020) have simulated UDGs in clusters of similar mass to Hydra I using Illustris-TNG100. They identify two types of UDGs in clusters, i.e. Tidal-UDGs and Born-UDGs (see also Jiang et al., 2019). The Tidal-UDGs were originally massive galaxies (up to 10\({}^{10}\) M\({}_{\odot}\)) that have been tidally stripped of stars and puffed-up by the cluster. Born-UDGs were formed as UDGs outside of the cluster and more recently entered the cluster. Thus Tidal-UDGs dominate the inner \(\sim\)0.5R/R\({}_{200}\) since they were accreted at early times, while Born-UDGs dominate the outer regions with some only recently falling into the cluster. We remind the reader that we only probe out to 0.3R/R\({}_{200}\) in Hydra I. The Sales et al. (2020) model would also predict, for a given stellar mass, on average higher metallicities, older ages and lower internal velocity dispersions for their Tidal-UDGs compared to the Born-UDGs. These stellar population, kinematic, GC colour and dark matter content predictions for Tidal-UDGs can be tested when the full LEWIS dataset is available.
## 5 Conclusions
As part of the LEWIS project (Iodice et al., 2023, in press) we obtained new VLT/MUSE observations of the radial velocities of UDGs in the Hydra I cluster (at D = 51 Mpc). Here we examine the location of Hydra I UDGs in infall phase-space diagrams based on simulations of cluster galaxies. We find all of the observed UDGs (and 3 low surface brightness galaxies) to be associated with the cluster. From comparison with the Rhee et al. (2017) simulations, we conclude that most giants, classical dwarfs and UDGs fell into the Hydra I cluster long ago, with UDGs being among the earliest infallers. Projection effects in observations and the statistical nature of the infall diagnostic diagram limit our ability to determine the true fraction of ancient infallers. Nevertheless we might expect UDGs in the Hydra I cluster to reveal old stellar populations consistent with early quenching.
We also compare Hydra I UDGs with their counterparts in the Coma, Perseus and Virgo clusters in terms of their GC richness. If very early infall into a cluster is associated with enhanced GC richness (as has been suggested for classical dwarf galaxies) then such a trend is expected. The data from these clusters do _not_ show a clear trend of GC richness with earlier infall times, indeed the data suggest the opposite trend. If verified by larger and more complete samples, then UDGs may be quenched by a different mechanism than that thought to operate on classical dwarf galaxies. As more data for UDGs is acquired, trends, or the lack of, may become more apparent in an infall diagnostic diagram. A future analysis of star formation histories will give an indication of when quenching occurred for the Hydra I UDGs. Once the full dataset of the LEWIS project is available we will be able to test other mechanisms, such as pre-infall quenching and/or tidal stripping, and their possible role in shaping UDGs and their globular cluster systems.
## Acknowledgements
We wish to thank the anonymous referee for their comments. We thank A. Romanowsky, L. Buzzo, L. Haacke and O. Gerhard for useful suggestions. This work is based on visitor mode observations collected at the European Southern Observatory (ESO) La Silla Paranal Observatory and collected at the European Southern Observatory under ESO programmes 099.B-0560(A) and 108.222P. INAF authors acknowledge financial support for the VST project (P.I.: P. Schipani). DAF thanks the ARC for support via DP220101863. Parts of this research were supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. MC acknowledges support from the INAF-EDGE program (PI Leslie K. Hunt). J. F-B acknowledges support through the RAVET project by the grant PID2019-107427GB-C32 from the Spanish Ministry of Science, Innovation and Universities (MCIU), and through the IAC project TRACES which is partially supported through the state budget and the regional budget of the Conceigeria de Economia, Industria, Comerciro y Conocimiento of the Canary Islands Autonomous Community.
## Data Availability
Raw data is available from the ESO archive.
|
2307.01000 | Pareto optimal proxy metrics | North star metrics and online experimentation play a central role in how
technology companies improve their products. In many practical settings,
however, evaluating experiments based on the north star metric directly can be
difficult. The two most significant issues are 1) low sensitivity of the north
star metric and 2) differences between the short-term and long-term impact on
the north star metric. A common solution is to rely on proxy metrics rather
than the north star in experiment evaluation and launch decisions. Existing
literature on proxy metrics concentrates mainly on the estimation of the
long-term impact from short-term experimental data. In this paper, instead, we
focus on the trade-off between the estimation of the long-term impact and the
sensitivity in the short term. In particular, we propose the Pareto optimal
proxy metrics method, which simultaneously optimizes prediction accuracy and
sensitivity. In addition, we give an efficient multi-objective optimization
algorithm that outperforms standard methods. We applied our methodology to
experiments from a large industrial recommendation system, and found proxy
metrics that are eight times more sensitive than the north star and
consistently moved in the same direction, increasing the velocity and the
quality of the decisions to launch new features. | Lee Richardson, Alessandro Zito, Dylan Greaves, Jacopo Soriano | 2023-07-03T13:29:14Z | http://arxiv.org/abs/2307.01000v1 | # Pareto Optimal Proxy Metrics
###### Abstract
North star metrics and online experimentation play a central role in how technology companies improve their products. In many practical settings, however, evaluating experiments based on the north star metric directly can be difficult. The two most significant issues are 1) low sensitivity of the north star metric and 2) differences between the short-term and long-term impact on the north star metric. A common solution is to rely on proxy metrics rather than the north star in experiment evaluation and launch decisions. Existing literature on proxy metrics concentrates mainly on the estimation of the long-term impact from short-term experimental data. In this paper, instead, we focus on the trade-off between the estimation of the long-term impact and the sensitivity in the short term. In particular, we propose the Pareto optimal proxy metrics method, which simultaneously optimizes prediction accuracy and sensitivity. In addition, we give an efficient multi-objective optimization algorithm that outperforms standard methods. We applied our methodology to experiments from a large industrial recommendation system, and found proxy metrics that are eight times more sensitive than the north star and consistently moved in the same direction, increasing the velocity and the quality of the decisions to launch new features.
## 1 Introduction
North star metrics are central to the operations of technology companies like Airbnb, Uber, and Google, amongst many others [4]. Functionally, teams use north star metrics to align priorities, evaluate progress, and determine if features should be launched [18].
Although north star metrics are valuable, there are issues using north star metrics in experimentation. To understand the issues better, it is important to
know how experimentation works at large tech companies. A standard flow is the following: a team of engineers, data scientists and product managers have an idea to improve the product; the idea is implemented, and an experiment on a small amount of traffic is run for 1-2 weeks. If the metrics are promising, the team takes the experiment to a launch review, which determines if the feature will be launched to all users. The timescale of this process is crucial - the faster one can run and evaluate experiments, the more ideas one can evaluate and integrate into the product. Two main issues arise in this context. The first is that the north star metric is often not sufficiently sensitive [6]. This means that the team will have experiment results that do not provide a clear indication of whether the idea is improving the north star metric. The second issue is that the north star metric can be different in the short and long term [14] due to novelty effects, system learning, and user learning, amongst other factors.
A solution to deal with this problem is to use a _proxy metric_, also referred to as a _surrogate metric_, in place of the north star [8]. The ideal proxy metric is short-term sensitive, and an accurate predictor of the long-term impact of the north star metric. Figure 1 visualizes the ideal proxy metric in two scenarios where it helps teams overcome the limitations of the north star metric.
Existing literature on proxy metrics [9, 1] has focused more on predicting the long-term effect, but has not focused on its trade-off with short-term sensitivity. In this paper, we fulfill both goals with a method that optimizes both objectives simultaneously, called _Pareto optimal proxy metrics_. To our knowledge, this is the first method that explicitly optimizes sensitivity.
Figure 1: A simulated example of two cases where a proxy metric is useful. The left figure shows the case where the north star metric is positive, but is too small relative to the noise to measure accurately. The right figure shows the case where the north star metric is significantly different in the short and long term, and the proxy metric reflects the long-term impact early in the experiment.
The paper is divided as follows. Section 2 discusses how to measure the objectives and their empirical trade-off. Section 3 covers our methodology and algorithms. Section 4 discusses our results, and we conclude in Section 5 with some observations on how to use proxy metrics effectively.
## 2 How to measure proxy metric performance
The two key properties for metrics are _metric sensitivity_ and _directionality_[6]. The first refers to the ability of a metric to detect a statistically significant effect, while the second measures the level of agreement between the metric and the long-term effect of the north star. This Section discusses each property individually, and proposes metrics to quantify them. We conclude with our empirical observation regarding the trade-off between sensitivity and directionality, which motivated the methodology in this paper (see Figure 2).
### Metric sensitivity
Metric sensitivity is commonly associated with statistical power. However, it can be expressed as a broader concept [17]. In simple terms, metric sensitivity measures the ability to detect a significant effect for a metric. Following [5], we can write this as
\[P(\text{Reject }H_{0})=\int P(\text{Reject }H_{0}|\delta)\text{d}P(\delta), \tag{1}\]
where \(\delta\) is the true treatment effect, \(P(\text{Reject }H_{0}|\delta)\) is the statistical power, and \(\text{d}P(\delta)\) is the distribution of true treatment effects in a population of related experiments. Sensitivity depends heavily on the type of experiments. This is captured in the \(\text{d}P(\delta)\) term in Equation 1, and is sometimes referred to as the _moveability_ of the metric. For example, metrics related to Search quality will be more sensitive in Search experiments, and less sensitive in experiments from other product areas (notifications, home feed recommendations, etc.). Although each experiment is unique, our analysis groups together experiments with similar treatments, and we assume that the underlying treatment effects are independent and identically distributed draws from a common distribution of treatment effects.
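The sketch below gives a minimal Monte Carlo illustration of equation (1): it averages the two-sided power of a z-test over draws from a treatment-effect distribution \(\text{d}P(\delta)\). The effect distribution and the noise level are assumptions chosen purely for illustration.

```
import numpy as np
from scipy import stats

def expected_sensitivity(delta_draws, se, alpha=0.05):
    """Monte Carlo version of equation (1): average the two-sided power of a
    z-test with standard error `se` over draws from the treatment-effect
    distribution dP(delta)."""
    z = stats.norm.ppf(1 - alpha / 2)
    d = np.asarray(delta_draws) / se
    power = stats.norm.cdf(-z - d) + 1.0 - stats.norm.cdf(z - d)
    return power.mean()

# Hypothetical effect distribution and noise level, for illustration only:
rng = np.random.default_rng(0)
deltas = rng.normal(loc=0.1, scale=0.2, size=10_000)  # draws from dP(delta)
print(expected_sensitivity(deltas, se=0.08))
```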
We need to define quantities that summarize how sensitive a metric is. Our intuition is that we can estimate the probability a metric will detect a statistically significant effect by seeing how often such an effect was statistically significant in historical experiments. Suppose that there are \(J\) experiments whose outcome is recorded by \(M\) metrics. In each experiment, the population is randomly partitioned into \(N\approx 100\) equal groups, and within each group, users are independently assigned to a treatment and a control group. We refer to these groups as _independent hash buckets_[3].
Let \(X_{i,j,m}^{Tr}\) and \(X_{i,j,m}^{Ct}\) with \(m=1,\ldots,M\) and \(j=1,\ldots,J\) denote the short-term recorded values for metric \(m\) in experiment \(j\) in the treatment and in the
control group, respectively, and let \(X_{i,j,m}=100\%\times(X_{i,j,m}^{Tr}-X_{i,j,m}^{Ct})/X_{i,j,m}^{Ct}\) denote their percentage differences, in hash bucket \(i=1,\ldots,N\). We refer to these metrics as _auxiliary metrics_, since their combination will be used to construct a proxy metric in Section 3. The within hash bucket sample sizes are typically large enough that we can use the central limit theorem to assume that \(X_{i,j,m}\stackrel{\mathrm{iid}}{\sim}N(\theta_{j,m},\sigma_{j,m}^{2})\) for \(i=1,\ldots,N\), where \(\theta_{j,m}\) and \(\sigma_{j,m}^{2}\) are unknown mean and variance parameters, and test
\[H_{0,j,m}:\theta_{j,m}=0\quad\text{vs}\quad H_{1,j,m}:\theta_{j,m}\neq 0.\]
Calling \(\bar{X}_{j,m}=N^{-1}\sum_{i=1}^{N}X_{i,j,m}\) the mean percentage difference between the two groups and \(se_{j,m}\) the standard error, calculated at Google via the Jackknife method [3], the null hypothesis \(H_{0,j,m}\) is rejected at the \(\alpha\) level if the test statistic \(t_{j,m}=\bar{X}_{j,m}/se_{j,m}\) is larger than a threshold \(\tau_{\alpha,N-1}\) in absolute value. The common practice is to let \(\alpha=0.05\).
From the above, it naturally follows that metric sensitivity should be directly related to the value of the test statistic \(t_{j,m}\). For instance, we call _binary sensitivity_ for metric \(m\) the quantity
\[\mathtt{BS}(\bar{X}_{\cdot,m})=\frac{1}{J}\sum_{j=1}^{J}\mathds{1}(|t_{j,m}|> \tau_{\alpha,N-1}),\quad(m=1,\ldots,M), \tag{2}\]
where \(\bar{X}_{\cdot,m}=\{\bar{X}_{1,m},\ldots,\bar{X}_{J,m}\}\). Equation (2) measures the proportion of statistically significant experiments in our pool of experiments for every metric \(m\). Another characteristic of equation (2) is that it takes on a discrete set of values. This is an issue when the number of experiments \(J\) is low. In this case, one can resort to smoother versions of binary sensitivity, such as the _average sensitivity_, defined as
\[\mathtt{AS}(\bar{X}_{\cdot,m})=\frac{1}{J}\sum_{j=1}^{J}|t_{j,m}|,\quad(m=1, \ldots,M). \tag{3}\]
The above quantity is the average absolute value of the test statistic across experiments. It has the advantage of being continuous and thus easier to optimize, but it pays a cost in terms of lack of interpretability and is also more susceptible to outliers. In the case of large outliers, one effective strategy is to cap the value of the t-statistic.
Which measure of sensitivity to use depends on the application. When a large pool of experiments is available, we recommend using equation (2) due to its interpretation and intrinsic simplicity. Equation (3) should be invoked when optimizing over a discrete quantity yields unstable results.
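The following sketch computes both measures in Python: per-experiment t-statistics from hash-bucket percentage differences, then the binary and average sensitivity of equations (2) and (3). An ordinary standard error is used in place of the jackknife mentioned above, and the data are synthetic stand-ins for real experiments.

```
import numpy as np
from scipy import stats

def t_statistics(X):
    """X has shape (N, J, M): percentage differences per hash bucket i,
    experiment j and metric m. Returns t_{j,m} = Xbar_{j,m} / se_{j,m},
    with a plain standard error in place of the jackknife."""
    N = X.shape[0]
    xbar = X.mean(axis=0)                            # shape (J, M)
    se = X.std(axis=0, ddof=1) / np.sqrt(N)
    return xbar / se

def binary_sensitivity(t, alpha=0.05, N=100):
    """Equation (2): fraction of experiments with |t| above the alpha threshold."""
    tau = stats.t.ppf(1 - alpha / 2, df=N - 1)
    return (np.abs(t) > tau).mean(axis=0)            # one value per metric

def average_sensitivity(t, cap=None):
    """Equation (3): mean |t| per metric, optionally capped to limit outliers."""
    at = np.abs(t)
    if cap is not None:
        at = np.minimum(at, cap)
    return at.mean(axis=0)

# Synthetic stand-in data: 100 hash buckets, 300 experiments, 5 metrics.
rng = np.random.default_rng(1)
X = rng.normal(loc=0.02, scale=0.5, size=(100, 300, 5))
t = t_statistics(X)
print(binary_sensitivity(t), average_sensitivity(t, cap=10))
```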
### Directionality
The second key metric property we need to quantify is called _directionality_. Through directionality, we want to capture the alignment between the increase (decrease) in the metric and long-term improvement (deterioration) of the user
experience. While this is ideal, getting ground truth data for directionality can be complex. A few existing approaches either involve running degradation experiments or manually labeling experiments, as discussed in [6, 7]. Both approaches are reasonable, but suffer from scalability issues.
Our method measures directionality by comparing the short-term value of a metric against the long-term value of the north star. The advantage of this approach is that we can compute the measure in every experiment. The disadvantage is that the estimate of the treatment effect of the north star metric is noisy, which makes it harder to separate the correlation in noise from the correlation in the treatment effects. This can be handled, however, by measuring correlation across repeated experiments.
There are various ways to quantify the directionality of a metric. In this paper, we consider two measures: the first is the _mean squared error_, while the second is the _empirical correlation_. Following the setting of Section 2.1, let \(Y_{i,j}^{Tr}\) and \(Y_{i,j}^{Ct}\) define the long-term value of the north star in the treatment and in the control group for every cookie bucket \(i\) and experiment \(j\). The resulting recorded percentage difference is \(Y_{i,j}=100\%(Y_{i,j}^{Tr}-Y_{i,j}^{Ct})/Y_{i,j}^{Ct}\). Then we can define the mean squared error as
\[\mathtt{MSE}(\bar{X}_{.,m})=\frac{1}{J}\sum_{j=1}^{J}(\bar{Y}_{j}-\bar{X}_{j, m})^{2},\quad(m=1,\ldots,M), \tag{4}\]
where again \(\bar{Y}_{j}=N^{-1}\sum_{i=1}^{N}Y_{i,j}\) is the long-term mean of the north star in experiment \(j\). Equation (4) measures how well metric \(m\) predicts the long-term north star on average. Such a measure depends on the scale of \(X\) and \(Y\) and may require standardization of the metrics. For a scale-free measure, one instead may adopt correlation, which is defined as follows
\[\mathtt{Cor}(\bar{X}_{\cdot,m})=\frac{\sum_{j=1}^{J}(\bar{Y}_{j}-\bar{Y})(\bar{X}_{j,m}-\bar{X}_{m})}{\sqrt{\sum_{j=1}^{J}(\bar{Y}_{j}-\bar{Y})^{2}\sum_{j=1}^{J}(\bar{X}_{j,m}-\bar{X}_{m})^{2}}},\quad(m=1,\ldots,M), \tag{5}\]
where \(\bar{X}_{m}=J^{-1}\sum_{j=1}^{J}\bar{X}_{j,m}\) and \(\bar{Y}=J^{-1}\sum_{j=1}^{J}\bar{Y}_{j}\) are the grand mean of metric \(m\) and the north star across all experiments.
Equations (4) and (5) quantify the agreeableness between a metric \(m\) and the north star, and their use is entirely dependent on the application. Notice that equation (5) measures the linear relationship, but other measures of correlation may be employed, such as Spearman correlation. It is possible to use different measures of correlation because our methodology is agnostic to specific measures of sensitivity and directionality, as detailed in Section 3.
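A minimal sketch of both directionality measures, equations (4) and (5), applied to hypothetical per-experiment means:

```
import numpy as np

def directionality(xbar, ybar):
    """xbar: (J, M) per-experiment means of the auxiliary metrics;
    ybar: (J,) per-experiment long-term north-star means.
    Returns the MSE of equation (4) and the correlation of equation (5)."""
    ybar = np.asarray(ybar, dtype=float)[:, None]
    xbar = np.asarray(xbar, dtype=float)
    mse = ((ybar - xbar) ** 2).mean(axis=0)
    xc = xbar - xbar.mean(axis=0)
    yc = ybar - ybar.mean()
    cor = (yc * xc).sum(axis=0) / np.sqrt((yc ** 2).sum() * (xc ** 2).sum(axis=0))
    return mse, cor

# Hypothetical example: 300 experiments, 5 metrics.
rng = np.random.default_rng(2)
ybar = rng.normal(0.0, 0.3, size=300)
xbar = ybar[:, None] * rng.uniform(0.2, 1.0, size=5) + rng.normal(0.0, 0.2, (300, 5))
mse, cor = directionality(xbar, ybar)
print(np.round(mse, 3), np.round(cor, 3))
```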
### The trade-off between sensitivity and directionality
So far, we have established two key properties for a metric: sensitivity and directionality. Empirically, we observe an inverse relationship between these two properties. This can be clearly seen from Figure 2, where we plot the value
of the binary sensitivity in equation (2) and the correlation with the north star in equation (5) for over 300 experiments on a large industrial recommendation system.
As such, there is a _trade-off_ between sensitivity and directionality: the more we increase sensitivity, the less likely our metric will be related to the north star. Thus, our methodology aims to combine auxiliary metrics into a single proxy metric to balance such trade-off in an optimal manner.
## 3 Pareto optimal proxy metrics
Our core idea is to use multi-objective optimization to learn the optimal trade-off between sensitivity and directionality. Our algorithm learns a set of proxy metrics with the optimal trade-off, known as the Pareto front. The proxy metrics in the Pareto front are linear combinations of auxiliary metrics. Each proxy in the Pareto front is Pareto optimal, in that we can not increase sensitivity without decreasing correlation, and vice versa.
Figure 2: The relationship between correlation and sensitivity for 70 auxiliary metrics across over 300 experiments. Each metric is either a gray or black dot. We highlight several auxiliary metrics that trade-off between sensitivity and correlation in black. Notably, the short-term value of the north star is in the bottom right, which is the least sensitive metric, but the most correlated with the long-term impact of the north star.
In this section, we first describe the proxy metric problem, and later cast it into the Pareto optimal framework. Then we discuss algorithms to learn the Pareto front and compare their performance.
### The proxy metric problem
We define a proxy metric as a _linear combination_ of the auxiliary metrics \(m=1,\ldots,M\). Let \(\mathbf{\omega}=(\omega_{1},\ldots,\omega_{M})\) be a vector of weights. A proxy metric is obtained as
\[Z_{i,j}(\mathbf{\omega})=\sum_{m=1}^{M}\omega_{m}X_{i,j,m}, \tag{6}\]
for each \(i=1,\ldots,N\) and each experiment \(j=1,\ldots,J\). Here, \(\omega_{m}\) defines the weight that metric \(m\) has on the proxy \(Z_{i,j}\). For interpretability reasons, it is useful to consider a normalized version of the weights, namely imposing that \(\sum_{m=1}^{M}\omega_{m}=1\) with each \(\omega_{m}\geq 0\). In doing so, we require that a positive outcome is associated with an increase in the auxiliary metrics. This means we must swap the sign of metrics whose decrease has a positive impact. These include, for example, metrics that represent bad user experiences, like abandoning the page or refining a query, and which are negatively correlated with the north star metric. Within such a formulation, the proxy metric becomes a weighted average across single metrics where \(\omega_{m}\) measures the importance of metric \(m\). Un-normalized versions of the proxy weights can also be considered, depending on the context and the measures over which the optimization is carried over. In general, the binary sensitivity in equation (2) and the correlation in equation (5) are invariant to the scale of \(\omega_{m}\), which implies that they remain equal irrespective of whether the weights are normalized or not.
Within such a framework, our goal is to find the weights in equation (6). Let \(\bar{Z}_{j}(\mathbf{\omega})=J^{-1}\sum_{i=N}^{J}Z_{i,j}\) be the average values for the proxy metric in experiments \(j=1,\ldots,J\) and \(\bar{Z}_{\cdot}(\mathbf{\omega})=\{\bar{Z}_{1}(\mathbf{\omega}),\ldots,\bar{Z}_{J}( \mathbf{\omega})\}\) their collection. When binary sensitivity and correlation are used as measures for sensitivity and directionality, multi-objective optimization is performed via the following problem
\[\mathbf{\omega}^{*}=\arg\max_{\mathbf{\omega}=(\omega_{1},\ldots,\omega_{m})}\{\text{ BS}(\bar{Z}_{\cdot}(\mathbf{\omega})),\texttt{Cor}(\bar{Z}_{\cdot}(\mathbf{\omega}))\}. \tag{7}\]
The solution to the optimization in equation (7) is not available in an explicit analytical form, which means that we need to resort to multi-objective optimization algorithms to find \(\mathbf{\omega}^{*}\). We discuss these algorithms after first introducing the concept of Pareto optimality.
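A minimal sketch of how a candidate weight vector is turned into a proxy via equation (6) and scored on the two objectives of equation (7); the jackknife standard error is replaced by an ordinary one, the weights are normalized to sum to one as discussed above, and the data are synthetic.

```
import numpy as np
from scipy import stats

def proxy(X, w):
    """Equation (6): X has shape (N, J, M), w is a length-M weight vector
    (normalized here to sum to one). Returns Z with shape (N, J)."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    return np.tensordot(X, w, axes=([2], [0]))

def objectives(X, ybar, w, alpha=0.05):
    """The two terms of equation (7) for a candidate weight vector:
    binary sensitivity of the proxy and its correlation with the
    long-term north star. A plain standard error replaces the jackknife."""
    Z = proxy(X, w)
    N = Z.shape[0]
    zbar = Z.mean(axis=0)
    se = Z.std(axis=0, ddof=1) / np.sqrt(N)
    tau = stats.t.ppf(1 - alpha / 2, df=N - 1)
    sens = (np.abs(zbar / se) > tau).mean()
    cor = np.corrcoef(zbar, ybar)[0, 1]
    return sens, cor

# Synthetic stand-in data: 100 hash buckets, 300 experiments, 3 metrics.
rng = np.random.default_rng(3)
X = rng.normal(0.02, 0.5, size=(100, 300, 3))
ybar = rng.normal(0.0, 0.3, size=300)
print(objectives(X, ybar, w=[0.5, 0.3, 0.2]))
```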
### Pareto optimality for proxy metrics
A Pareto equilibrium is a situation where any action taken by an individual toward optimizing one outcome will automatically lead to a loss in other outcomes. In this situation, there is no way to improve both outcomes simultaneously. If there was, then the current state is said to be _Pareto dominated_. In the context of our application, the natural trade-off between correlation and sensitivity
implies that we cannot unilaterally maximize one dimension without incurring a loss in the other. Thus, our goal is to look for weights that are not dominated in any dimension. With reference to equation (7), we say that the set of weights \(\mathbf{\omega}\) is Pareto dominated if there exists another set of weights \(\mathbf{\omega}^{\prime}\) such that \(\texttt{BS}(\bar{Z}.(\mathbf{\omega}^{\prime}))\geq\texttt{BS}(\bar{Z}.(\mathbf{\omega}))\) and \(\texttt{Cor}(\bar{Z}.(\mathbf{\omega}^{\prime}))\geq\texttt{Cor}(\bar{Z}.(\mathbf{\omega}))\) at the same time. We write \(\mathbf{\omega}\prec\mathbf{\omega}^{\prime}\) to indicate the dominance relationship. Then, the set of non-dominated points is called the _Pareto set_. We indicate it as \(\mathcal{W}=\{\mathbf{\omega}_{1},\ldots,\mathbf{\omega}_{q}\}\), where for all \(\mathbf{\omega},\mathbf{\omega}^{\prime}\in\mathcal{W}\) neither \(\mathbf{\omega}\prec\mathbf{\omega}^{\prime}\) nor \(\mathbf{\omega}^{\prime}\prec\mathbf{\omega}\). The objective values associated with the Pareto set are called the _Pareto front_.
Figure 3 shows an example of what the Pareto front and the Pareto set look like. The grey points represent the value of the objectives for a set of weights generated at random, while the red points are the ones in the Pareto set. The green dot is an example point that is Pareto dominated by the area highlighted in grey. It is easy to see that any point in the grey area is strictly better than the green dot. The purpose of multi-objective optimization is to efficiently identify the Pareto front and the weights in the Pareto set. Algorithms to estimate the Pareto front are reported in the next Section.
Figure 3: An example of the Pareto front in the proxy metric problem. Each gray dot represents evaluations of the objective in a randomized search. The red dots are points on the Pareto front. The green dot is a point that is Pareto dominated, and the gray shaded area shows where the green dot is Pareto dominated.
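A small helper illustrating the dominance rule: given the objective values of a set of candidate weights (the grey dots of Figure 3), it returns the non-dominated subset (the red dots). Both objectives are assumed to be maximized; the input cloud here is random and purely illustrative.

```
import numpy as np

def pareto_mask(points):
    """points: array of shape (K, 2) holding (sensitivity, correlation) for K
    candidates, both to be maximized. Returns a boolean mask of the
    non-dominated candidates."""
    pts = np.asarray(points, dtype=float)
    keep = np.ones(len(pts), dtype=bool)
    for k, p in enumerate(pts):
        others = np.delete(pts, k, axis=0)
        # p is dominated if some other point is >= in both objectives
        # and strictly better in at least one.
        dominated = np.any(np.all(others >= p, axis=1) & np.any(others > p, axis=1))
        keep[k] = not dominated
    return keep

# Hypothetical cloud of objective values, e.g. from a randomized search:
rng = np.random.default_rng(4)
cloud = rng.uniform(0.0, 1.0, size=(200, 2))
front = cloud[pareto_mask(cloud)]
print(front[np.argsort(front[:, 0])])
```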
### Algorithms for Pareto optimal proxies
Multi-objective optimization is a well-studied problem that can be solved via a wealth of efficient algorithms. Common methods to extract the Pareto front combine Kriging techniques with expected improvement minimization [11, 20], or black box methods via transfer learning [19, 13]. These methods are particularly suitable for cases where the objective functions are intrinsically expensive to calculate, and therefore one wishes to limit the number of evaluations required to extract the front. In our case, however, both objective functions can be calculated with minimal computational effort. As such, we propose two algorithms to efficiently extract the front that rely on sampling strategies and nonlinear optimization routines. We then compare our algorithms against a standard Kriging-based implementation.
Our first method to extract the Pareto front involves a simple randomized search, as described in Algorithm 1 below. The mechanism is relatively straightforward: at each step, we propose a candidate weight \(\mathbf{\omega}\) and calculate the associated proxy \(Z_{i,j}\) for every \(i=1,\ldots,N\) and every experiment \(j=1,\ldots,J\). Then, we evaluate the desired objective functions, such as the binary sensitivity and the correlation in equations (2) and (5). These allow us to tell whether \(\mathbf{\omega}\) is dominated. In the second case, we update the Pareto front by removing the Pareto dominated weights and then by including the new one in the Pareto set.
```
1:Input: Number of samples \(T\), single metrics \(X\), north star \(Y\)
2:Output: Pareto set \(\mathcal{W}\)
3: Initialize the Pareto set to be empty: \(\mathcal{W}=\emptyset\).
4: for \(t=1,\ldots,T\) do
5: Sample \(\mathbf{\omega}=\{\omega_{1},\ldots,\omega_{M}\}\) uniformly in \([0,1]\).
6: Calculate \(Z_{i,j}(\mathbf{\omega})\) as in equation (6).
7: Evaluate \(\texttt{BS}(\bar{Z}.(\mathbf{\omega}))\) and \(\texttt{Cor}(\bar{Z}.(\mathbf{\omega}))\) (or any other objective function).
8: If \(\mathbf{\omega}\) is not dominated by any other \(\mathbf{\omega}^{\prime}\in\mathcal{W}\), add \(\mathbf{\omega}\) to \(\mathcal{W}\).
9: Remove the weights in \(\mathcal{W}\) that are dominated.
10: end for
```
**Algorithm 1** Randomized search
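A direct Python sketch of Algorithm 1 follows. The evaluation of the objectives is abstracted behind an `evaluate` callable (for instance, a function returning the binary sensitivity and correlation of the induced proxy); the toy objective at the end is only a placeholder used to exercise the loop.

```
import numpy as np

def randomized_search(evaluate, M, T, seed=0):
    """Algorithm 1: sample T weight vectors uniformly in [0, 1]^M and keep the
    non-dominated ones. `evaluate(w)` must return a tuple of objectives to be
    maximized, e.g. (binary sensitivity, correlation) of the induced proxy."""
    rng = np.random.default_rng(seed)
    pareto_set, pareto_front = [], []
    for _ in range(T):
        w = rng.uniform(0.0, 1.0, size=M)
        obj = np.asarray(evaluate(w), dtype=float)
        dominated = any(np.all(f >= obj) and np.any(f > obj) for f in pareto_front)
        if not dominated:
            # Drop current members that the new point dominates, then add it.
            keep = [i for i, f in enumerate(pareto_front)
                    if not (np.all(obj >= f) and np.any(obj > f))]
            pareto_set = [pareto_set[i] for i in keep] + [w]
            pareto_front = [pareto_front[i] for i in keep] + [obj]
    return pareto_set, pareto_front

# Toy placeholder objective with a built-in trade-off between its two components:
toy = lambda w: (w[0] / (1e-9 + w.sum()), w[1] / (1e-9 + w.sum()))
ws, front = randomized_search(toy, M=3, T=2000)
print(len(front))
```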
The advantage of Algorithm 1 is that it explores the whole space of possible weights and can be performed online with minimum storage requirements. However, such exploration is often inefficient, since the vast majority of sampled weights are not on the Pareto front. Moreover, the method may suffer from a curse of dimensionality: if the total number of auxiliary metrics \(M\) is large, then a massive number of candidate weights is required to explore the hypercube \([0,1]^{M}\) exhaustively. A standard solution to such a problem relies on a more directed exploration of the space of weights via Kriging, where the weight at one iteration is sampled from normal distributions whose mean and variance are obtained by minimizing an in-fill criterion [11]. Refer to [2] for a practical overview. Since evaluating sensitivity and correlation is a relatively simple operation, we propose a more directed algorithm, which we now illustrate.
Consider the bivariate optimization problem in equation (7). If we fix one dimension, say sensitivity, to a certain threshold and later optimize with respect to the other dimension in a _constrained_ manner, then varying the threshold between \(0\) and \(1\) should equivalently extract the front. In practice, this procedure is approximated by _binning_ the sensitivity in disjoint intervals, say \([u_{b},u_{b+1})\) with \(b=1,\ldots,B-1\), with \(u_{1}=0\) and \(u_{B}=1\), and then solving
\[\boldsymbol{\omega}_{b}^{*}=\arg\max_{\boldsymbol{\omega}:\;\texttt{BS}( \bar{Z}.(\boldsymbol{\omega}))\in[u_{b},u_{b+1})}\;\texttt{Cor}(\bar{Z}.( \boldsymbol{\omega})), \tag{8}\]
for each \(b=1,\ldots,B-1\). The resulting Pareto front is composed of a set of \(B-1\) weight vectors, one per bin. We summarize this in Algorithm 2 below.
```
1:Input: Number of samples \(T\), single metrics \(X\), north star \(Y\), sensitivity bins \([u_{b},u_{b+1})\)
2:Output: Pareto set \(\mathcal{W}\) (approximation)
3: Initialize the Pareto set to be empty: \(\mathcal{W}=\emptyset\).
4:for\(b=1\ldots,B-1\)do
5: Solve the constrained optimization in equation (8) via nlopt.
6: Add \(\boldsymbol{\omega}_{b}^{*}\) to \(\mathcal{W}\)
7:endfor
```
**Algorithm 2** Constrained optimization via binning
The constrained optimization problems in equation (8) within Algorithm 2 can be solved via common nonlinear optimization methods such as the ones in the nlopt package. See [15] and references therein.
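For illustration, a sketch of Algorithm 2 using nlopt's locally biased DIRECT solver is shown below. Since DIRECT-L does not handle nonlinear constraints natively, the sensitivity bin constraint is folded into the objective as a penalty; the penalty constant, starting point, and evaluation budget are illustrative choices, and `sensitivity` and `correlation` are assumed to be provided.

```python
import numpy as np
import nlopt

def pareto_by_binning(correlation, sensitivity, M, bin_edges, max_evals=2000):
    """Algorithm 2: for each sensitivity bin [u_b, u_{b+1}), maximize the
    correlation objective, approximating the bin constraint with a penalty."""
    front = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        def penalized(w, grad):  # nlopt objective signature (grad unused, derivative-free)
            s = sensitivity(w)
            penalty = 0.0 if lo <= s < hi else -10.0  # push iterates into the bin
            return correlation(w) + penalty
        opt = nlopt.opt(nlopt.GN_DIRECT_L, M)  # locally biased dividing rectangles
        opt.set_lower_bounds([0.0] * M)
        opt.set_upper_bounds([1.0] * M)
        opt.set_max_objective(penalized)
        opt.set_maxeval(max_evals)
        w_star = opt.optimize(np.full(M, 0.5))
        front.append((w_star, sensitivity(w_star), correlation(w_star)))
    return front
```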
Each algorithm produces a set of Pareto optimal proxy metrics. However, we typically rely on a single proxy metric for experiment evaluation and launch decisions. This means we need to select a proxy from the Pareto front. In practice, we use the Pareto set to reduce the space of candidate proxies, and later choose the final weights based on statistical properties and other product considerations.
### Algorithm performance
This Section evaluates the performance of our proposed algorithms. The task is extracting the Pareto front between binary sensitivity and correlation from a set of over 300 experiments. Details on the data are described in Section 4. We test three different algorithms:
1. **Randomized search** (Algorithm 1). We let the algorithm run for \(M\times 4000\) iterations.
2. **Constrained optimization via binning** (Algorithm 2). We split sensitivity into 14 discrete bins, ranging from 0 to the maximum sensitivity of a single metric in our data set. From the nlopt package, we rely on the locally biased dividing rectangles algorithm [12].
3. **Kriging** with maximization of the expected hypervolume improvement [11], using the R package GPareto [2]. We let the algorithm run for \(M\times 40\) iterations.
We estimate the Pareto front for \(M=5,10,\text{ and }15\) metrics to understand how algorithm performance scales in the number of metrics. Figure 4 compares the Pareto front extracted by each algorithm. Each algorithm yields a similar Pareto front. We notice that constrained optimization detects points in high-sensitivity and high-correlation regions better than the other two methods, especially as the number of metrics increases. However, the middle portions of the extracted curves are very similar.
A more direct comparison is reported in Figure 5. Here, we quantify the extracted Pareto Front using the _Area under the Pareto Front_ metric (larger values are better). We also compare the run-time of each algorithm. The clear takeaway from Figure 5 is that the choice of algorithm does not matter much for a small number of metrics (5). However, constrained optimization offers the best trade-off between accuracy and speed when the number of metrics is large.
## 4 Results
We implemented our methodology on over 300 experiments in a large industrial recommendation system. We then evaluated the performance of the resulting proxy on over 500 related experiments that ran throughout the subsequent six months. Specifically, we compare the proxy with the short-term north star metric, since its precise goal is to improve upon the sensitivity of the short-term north star itself. As success criteria, we use Binary Sensitivity in equation (2) and the _proxy score_, which is a one-number statistic that evaluates proxy quality. See Appendix A for a detailed definition.
Table 1 compares our short-term proxy metric against the short-term north star metric. Our proxy metric was 8.5 times more sensitive. In the cases where the long-term north star metric was statistically significant, the proxy was statistically significant 72% of the time, compared to just 40% of the time for
Figure 4: Pareto front extracted under the three methods.
the short-term north star. In this set of experiments, we did not observe any case where the proxy metric was statistically significant in the opposite direction to the long-term north star metric. We have, however, seen this occur in other analyses, but it is rare, happening in less than 1% of experiments. Finally, our proxy metric has a 50% higher proxy score than the short-term north star. Our key takeaway is that we can find proxy metrics that are dramatically more sensitive while barely sacrificing directionality.
Table 1 only evaluates the relationship between the proxy and north star metric when the north star is statistically significant. These experiments are useful because we have a clear direction from the north star metric. However, it is also important to assess the proxy metric when the long-term north star metric is neutral. For this, we can look at the magnitude of the north star metric when the long-term effect is not statistically significant, split by whether the proxy is negative, neutral, or positive. We display this in Figure 6, which shows
| | short-term north star | proxy |
| --- | --- | --- |
| Proxy Score | 0.41 | 0.72 |
| Binary Sensitivity | X% | 8.5X% |
| Recall | 0.41 | 0.72 |
| Precision | 1.0 | 1.0 |

Table 1: Comparison of using the short-term north star metric and the Pareto optimal proxy metric. This table was constructed on a set of experiments that ran for six months after we implemented our proxy. The sensitivity of our proxy is 8.5X%, compared to just X% for the north star.
Figure 5: The left plot shows the Area under the Pareto curve for each algorithm, the right plot shows how long each algorithm took to extract the Pareto front in Figure 4
that, although we may not get statistically significant results for the north star metric, making decisions based on the proxy will be positive for the north star on average. In practice, we are careful when rolling out these cases, and have tools to catch any launch that does not behave as expected.
Finally, it is instructive to analyze how the weights of the proxy metrics vary as we move along the Pareto front from directionality to sensitivity, as illustrated in the example in Figure 7. As expected, when we select points that emphasize correlation, our proxy metric puts more weight on the short-term north star. But when we choose points that emphasize sensitivity, we put much more weight on sensitive, local metrics.
## 5 Discussion
This paper proposes a new method to find proxy metrics that optimizes the trade-off between sensitivity and directionality. To our knowledge, this is the first approach that explicitly incorporates metric sensitivity into the objective. In our experiments, we found proxy metrics that were 6-10 times more sensitive than the short-term north star metric, with only rare cases where the proxy and the north star moved in opposite directions.
Our experience developing proxy metrics with multiple teams across multiple years has spurred many thoughts on their pros, cons, and things to watch out for. These considerations go beyond the mathematical framework discussed in
Figure 6: The magnitude of the long-term north star treatment effect, when the long-term treatment effect is neutral, depending on if the proxy metric is negative, neutral, or positive.
this paper, and we list them in the next section. We then discuss some other benefits of using proxy metrics. Finally, we'll discuss some limitations in our methodology and future areas of improvement.
### Considerations beyond Pareto optimality
Below are other important considerations we learned from deploying proxy metrics in practice:
* **Make sure you need proxies before developing them**. Proxies should be motivated by an insensitive north star metric, or one that is consistently different between the short and long term. It is important to validate that you have these issues before developing proxies. To assess sensitivity, you can compute the Binary Sensitivity in a set of experiments. To assess short and long-term differences, one possibility is to compare the treatment effects at the beginning and end of your experiments.
* **Try better experiment design before using proxies**. Proxies are one way to increase sensitivity, but they are not the only way. Before you create proxy metrics, you should assess if your sensitivity problems can be solved with a better experiment design. For example, you may be able to run larger experiments, longer experiments, or narrower triggering to only include users that were actually impacted by the treatment. Solving at the design stage is ideal because it allows us to target the north star directly.
Figure 7: Weights of the proxy metrics in the Pareto set as a function of both objectives. We include three metrics, the short-term north star and two metrics that are more sensitive, capturing different elements of the user experience. The optimal weights are highlighted in red for both objectives. In this example, we choose the point that optimized the Area under the Pareto Curve.
* **Choose proxies with common sense**. The best auxiliary metrics in our proxy metric captured intuitive, critical aspects of the specific user journey targeted by that class of experiments. For example, whether a user had a satisfactory watch from the homepage is a good auxiliary metric for experiments changing the recommendations on the home feed. In fact, many of the best auxiliary metrics were already informally used by engineers, suggesting that common sense metrics have superior statistical properties.
* **Validate and monitor your proxies, ideally using holdbacks**. It is important to remember that proxy metrics are not what we want to move. We want to move the north star, and proxies are a means to this end. The best tool we have found for validating proxies is the cumulative long-term holdback, including all launches that were made based on the same proxy metric. It is also helpful to regularly repeat the model fitting process on recent data, and perform out-of-sample testing, to ensure your proxy is still at an optimal point.
### Other benefits of proxy metrics
Developing proxies had many unplanned benefits beyond their strict application as a tool for experiment evaluation. The first major benefit is the sheer educational factor: the data science team and our organizational partners developed a much deeper intuition about our metrics. We learned baseline sensitivities, how the baseline sensitivities vary across different product areas, and the correlations between metrics.
Another unplanned benefit is that the proxy metric development process highlighted several areas to improve the way we run experiments. We started to do better experiment design, and to collect data from experiments more systematically, now that the experiments can also be viewed as training data for proxy metrics.
Finally, the most important benefit is that we uncovered several auxiliary metrics that were correlated with the north star, but not holistic enough to be included in the final proxy. We added these signals directly into our machine-learning systems, which resulted in several launches that directly improved the long-term user experience.
### Discussion, limitations, and future directions
This methodology is an important milestone, but there are still many areas to develop, and our methodology is sure to evolve over time.
The first area to explore is causality. Our approach relies on the assumption that the treatment effects of the experiments are independent draws from a common distribution of treatment effects, and that future experiments come from the same generative process. Literature from clinical trials [16, 10], however, has more formal notions of causality for surrogate metrics, and we plan to
explore this area and see if there's anything we can glean.
Another important improvement would be a more principled approach to select the final proxy metric. Some initial work along these lines revolves around our proxy score (Appendix A) and Area under the Pareto curve (Figure 4). We hope to have a more refined perspective on this topic in the future.
We also did not explore more classic model-building improvements in detail. For example, we do not address non-linearity and feature selection. Non-linearity is particularly important, because it helps in cases where two components of the proxy metric move in opposite directions. For feature selection, we currently hand-pick several auxiliary metrics to include in the proxy metric optimization. However, we should be able to improve upon this by either inducing sparsity when estimating the Pareto front, or adopting a more principled feature selection approach.
To conclude, let's take a step back and consider the practical implications of our results. Essentially, we found that the appropriate local metrics, that are close to the experiment context, are vastly more sensitive than the north star, and rarely move in the opposite direction. The implication is that using the north star as a launch criterion is likely too conservative, and teams can learn more and faster by focusing on the relevant local metrics.
Faster iteration has also opened our eyes to other mechanisms we can use to ensure that our launches are positive for the user experience. We mentioned earlier that launches using proxies should be paired with larger and longer running holdbacks. In fact, through such holdbacks we were able to catch small but slightly negative launches (case 1 in Figure 1, but with the opposite sign), and further refine our understanding of the differences between the short and long term impact on the north star metric (case 2 in Figure 1, but with the opposite sign).
## Appendix A: The proxy score
It is useful to have a single metric that quantifies the performance of a proxy metric. We have relied on a measure called _proxy score_. The proxy score rewards properties of an ideal proxy metric: short-term sensitivity, and moving in the same long-term direction as the north star (Figure 1). The motivation behind our specific definition comes from the contingency table visualized in Figure 8, which is generated from 1000 simulated experiments.
The green cells in Figure 8 represent cases where the proxy is statistically significant in the short-term, the north star is significant in the long-term, and the proxy and north star move in the same direction. These are unambiguously good cases, and we refer to them as _Detections_. The red cells are unambiguously bad cases: both the short-term proxy and north star are statistically significant, but they move in opposite directions. We call these _Mistakes_. Informally, we define the proxy score as
\[\text{Proxy Score}\quad=\quad\frac{\text{Detections}-\text{Mistakes}}{\text{ Number of experiments where the north star is significant}}.\]
The key idea is that the proxy score rewards both sensitivity, and accurate directionality. More sensitive metrics are more likely to be in the first and third rows, where they can accumulate reward. But metrics in the first and third rows can only accumulate reward if they are in the correct direction. Thus, the proxy score rewards both sensitivity and directionality. Microsoft independently developed a similar score, called _Label Agreement_[7].
More formally, and following the notation in Section 2, we can define the proxy score using hypothesis tests for the proxy metric and the north star metric, defined as
Figure 8: There are nine combinations of statistical significance between the proxy metric and the north star metric, based on if the proxy and north star are negative, neutral, or positive.
North Star: \[H^{ns}_{0,j}:\theta^{ns}_{j}=0\quad\mbox{vs}\quad H^{ns}_{1,j}: \theta^{ns}_{j}\neq 0,\] (9) Proxy: \[H^{z}_{0,j}:\theta^{z}_{j}=0\quad\mbox{vs}\quad H^{z}_{1,j}: \theta^{z}_{j}\neq 0.\] (10)
If we let \(D_{j}=\{\theta^{ns}_{j},\sigma^{ns}_{j},\theta^{z}_{j},\sigma^{z}_{j}\}\) be the data required to compute the hypothesis tests, then the proxy score for experiment \(j\) can be written as
\[\mbox{PS}(D_{j}) = \mbox{\bf 1}(H^{z}_{0,j}\mbox{ rejected})\qquad\mbox{(Proxy Significant)} \tag{11}\] \[\times \mbox{\bf 1}(H^{ns}_{0,j}\mbox{ rejected})\qquad\mbox{(North Star Significant)}\] \[\times \Big{[}\mbox{\bf 1}(\theta^{ns}_{j}>0\mbox{ and }\theta^{z}_{j}>0)+\mbox{\bf 1}(\theta^{ns}_{j}<0\mbox{ and }\theta^{z}_{j}<0)\quad\mbox{(Agree)}\] \[\qquad-\mbox{\bf 1}(\theta^{ns}_{j}>0\mbox{ and }\theta^{z}_{j}<0)-\mbox{\bf 1}(\theta^{ns}_{j}<0\mbox{ and }\theta^{z}_{j}>0)\Big{]},\quad\mbox{(Disagree)}\]
where \(\mbox{\bf 1}(\cdot)\) is an indicator equal to one if its argument is true, and zero otherwise.
We can aggregate these values across all experiments in our data, and scale by the number of experiments where the north star is significant, to compute the final proxy score for a set of experiments. The scaling factor ensures that the proxy score is always between -1 and 1.
\[\mbox{PS}(D) = \frac{\sum_{j=1}^{J}\mbox{PS}(D_{j})}{\sum_{j=1}^{J}\mbox{\bf 1}(H^{ns}_{0,j}\mbox{ rejected})}. \tag{12}\]
Similar to Binary Sensitivity, there can be issues with the proxy score when the north star metric is rarely significant. We have explored a few ways to make the score continuous, for example by replacing the indicators with Bayesian posterior probabilities.
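For concreteness, a small sketch of the per-experiment score in equation (11) and the aggregation in equation (12) is shown below, assuming two-sided z-tests on the reported effect estimates and standard errors (the significance level is an illustrative choice).

```python
import numpy as np
from scipy.stats import norm

def significant(theta, sigma, alpha=0.05):
    """Two-sided z-test for H0: theta = 0."""
    p_value = 2 * (1 - norm.cdf(abs(theta / sigma)))
    return p_value < alpha

def proxy_score(theta_ns, sigma_ns, theta_z, sigma_z, alpha=0.05):
    """Equation (12): (detections - mistakes) / #{experiments with significant north star}."""
    total, n_ns_significant = 0.0, 0
    for tn, sn, tz, sz in zip(theta_ns, sigma_ns, theta_z, sigma_z):
        ns_sig = significant(tn, sn, alpha)
        n_ns_significant += int(ns_sig)
        if ns_sig and significant(tz, sz, alpha):  # both significant
            total += 1.0 if np.sign(tn) == np.sign(tz) else -1.0  # agree vs disagree
    return total / max(n_ns_significant, 1)
```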
2305.18328 | Open-Source GEMM Hardware Kernels Generator: Toward Numerically-Tailored
Computations | Many scientific computing problems can be reduced to Matrix-Matrix
Multiplications (MMM), making the General Matrix Multiply (GEMM) kernels in the
Basic Linear Algebra Subroutine (BLAS) of interest to the high-performance
computing community. However, these workloads have a wide range of numerical
requirements. Ill-conditioned linear systems require high-precision arithmetic
to ensure correct and reproducible results. In contrast, emerging workloads
such as deep neural networks, which can have millions up to billions of
parameters, have shown resilience to arithmetic tinkering and precision
lowering. | Louis Ledoux, Marc Casas | 2023-05-23T12:47:19Z | http://arxiv.org/abs/2305.18328v1 | # Open-Source GEMM Hardware Kernels Generator: Toward Numerically-Tailored Computations
###### Abstract
Many scientific computing problems can be reduced to Matrix-Matrix Multiplications (MMM), making the General Matrix Multiply (GEMM) kernels in the Basic Linear Algebra Subroutine (BLAS) of interest to the high-performance computing community. However, these workloads have a wide range of numerical requirements. Ill-conditioned linear systems require high-precision arithmetic to ensure correct and reproducible results [1]. In contrast, emerging workloads such as deep neural networks, which can have millions up to billions of parameters, have shown resilience to arithmetic tinkering [2] and precision lowering [3].
General-purpose arithmetic units and computer formats such as the IEEE754 standard naturally underperform across this wide range of scenarios. We propose the generation of numerically tailored circuits, where the necessary and sufficient internal precision is generated to meet the computation's numerical-quality requirements while improving the energy cost.
GEMM, matrix-matrix-multiply, full stack framework, automated pipeline, flopoco, OpenCAPI, OpenBLAS, High Performance Computing, approximate/trans/extended precision.
## I Extended Abstract
Many scientific computing problems can be reduced to Matrix-Matrix Multiplications (MMM), making the General Matrix Multiply (GEMM) kernels in the Basic Linear Algebra Subroutine (BLAS) of interest to the high-performance computing community. However, these workloads have a wide range of numerical requirements. Ill-conditioned linear systems require high-precision arithmetic to ensure correct and reproducible results [1]. In contrast, emerging workloads such as deep neural networks, which can have millions up to billions of parameters, have shown resilience to arithmetic tinkering [2] and precision lowering [3].
General-purpose arithmetic units and computer formats such as the IEEE754 standard naturally underperform across this wide range of scenarios. We propose the generation of numerically tailored circuits, where the necessary and sufficient internal precision is generated to meet the computation's numerical-quality requirements while improving the energy cost.
### _Open Source SW/HW co-designed framework for numerically tailored MMMs_
As depicted by Fig. 1, our framework [4] is composed of two distinct phases, the _a priori_ Hardware generation flow and the runtime execution flow. Because MMMs are basically made of arbitrarily long dot products, we design a custom Fused Dot Product (FDP) operator that is agnostic to the computer format and supports posit, IEEE754, and bfloat16 variations, while never rounding between two accumulations. The intermediate precision of the fixed-point accumulator used in the dot-product is a key aspect of this work, and is configurable through the length of the scratchpad delimited by the parameters MSB (Most Significant Bit) and LSB (Least Significant Bit). We leverage the automated pipeline feature of _flopoco_[5], which is an effective tool for efficiently exploring the wide range of functional specifications along with performance specifications, to produce MMM kernels with the necessary basic elements (LUTs, FFs, Carry chains, DSPs) for a targeted \((chip,frequency)\) couple (see Fig. 1-2).
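The effect of the scratchpad's LSB on accuracy can be illustrated with a small behavioural model in software: every product is quantized to the fixed-point grid set by the LSB weight and accumulated exactly, with a single rounding back to a floating-point result at the end. This is only a sketch of the accumulation semantics, not the generated hardware, and it assumes the accumulator is wide enough (i.e., MSB and OVF large enough) that no overflow occurs.

```python
import numpy as np

def fdp_dot(a, b, lsb=-30):
    """Behavioural sketch of a fused dot product: products are quantized to a
    fixed-point grid (one ulp = 2**lsb) and accumulated exactly; rounding back
    to floating point happens once at the end, never between accumulations."""
    scale = 2.0 ** (-lsb)
    acc = 0  # exact integer accumulator, assumed wide enough (large MSB/OVF)
    for x, y in zip(a, b):
        acc += int(round(float(x) * float(y) * scale))
    return acc / scale

# Example: compare against a naive float32 accumulation
x = np.random.randn(10_000).astype(np.float32)
y = np.random.randn(10_000).astype(np.float32)
print(fdp_dot(x, y), float(np.dot(x, y)))
```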
The essence of this work is to make the hardware's intermediate-precision tuning accessible to high-level software code as transparently as possible. We achieve that by taking into account that many HPC codes rely on BLAS libraries to perform MMM operations. Such libraries receive the function call to perform a GEMM and dispatch it appropriately to the underlying hardware at their disposal.
### _HPC workloads results_
We experiment with two families of real HPC workloads with contrasting numerical requirements, namely Artificial Intelligence (AI) and Sea Surface Height (SSH), whose respective results can be observed in Fig. 3 and Fig. 2.
Fig. 1: Overview of the 2 phases framework. Left is Runtime execution flow and right is Hardware generation flow.
Fig. 2: Sea Surface Height computation comparing IEEE-754 double-, quad-precision FMAs and a 91-bit FDP wrt numerical quality and power consumption.
For the SSH computation, the results obtained with 64-bit and 128-bit FPUs exhibit decreasing reproducibility as the vector size increases. In contrast, our 91-bit \(\langle ovf:30,msb:30,lsb:30\rangle\) FDP maintains reproducibility for all vector sizes without deviation. Our proposed FDP consistently exhibits 52 correct bits, which is at least 5\(\times\) and 27.7\(\times\) more than quad-precision and double-precision, respectively. Our measurements on a VU3P-2 FPGA at \(200\,MHz\) show that the power consumption of the units is 0.266, 0.549, and 0.491 watts for the double-precision FMA, the quad-precision FMA, and the 91-bit FDP, respectively. For all evaluated sizes, the 91-bit FDP yields at least \(5.6\times\) and \(15.1\times\) more correct bits for the same wattage as quad-precision and double-precision FMAs, respectively.
For AI workloads, we employ PyTorch as a base framework and link it to our modified OpenBLAS. We use popular neural network models such as ResNet18, ResNet34, ResNet50, DenseNet121, DenseNet161, DenseNet169, and VGG11 with batch normalization, and evaluate them on the CIFAR-10 and ImageNet datasets. To measure power consumption and accuracy, we perform our computations with the BrainFloat16 and IEEE-754 32-bit formats and a large variety of accumulators, varying their \(OVF\), \(MSB\), and \(LSB\) parameters. Fig. 3 shows the relationship between power consumption and accuracy for different accumulator and arithmetic combinations. For example, if \(84\%\) Top-1 accuracy is satisfactory for ImageNet with ResNet50, the most suited arithmetic/accumulator combination is IEEE-754 32-bit/\(\langle ovf:9,msb:6,lsb:-20\rangle\), represented by a light purple hexagon, as all other markers lie either to the right of it or below it.
### _Conclusion_
Overall, our work highlights the importance of numerically tailored accumulators for reproducibility in scientific computing applications. Our results provide valuable insights into the trade-offs between power consumption and accuracy, and we believe they have the potential to inform the design of future AI and scientific computing systems. In light of this, we encourage other researchers to explore the possibilities of low-precision accumulators using our open-source framework.
## II Acknowledgment
Marc Casas is supported by Grant RYC-2017-23269 funded by MCIN/AEI/ 10.13039/501100011033 and by "ESF Investing in your future".
|
2310.05871 | Dynamic value alignment through preference aggregation of multiple
objectives | The development of ethical AI systems is currently geared toward setting
objective functions that align with human objectives. However, finding such
functions remains a research challenge, while in RL, setting rewards by hand is
a fairly standard approach. We present a methodology for dynamic value
alignment, where the values that are to be aligned with are dynamically
changing, using a multiple-objective approach. We apply this approach to extend
Deep $Q$-Learning to accommodate multiple objectives and evaluate this method
on a simplified two-leg intersection controlled by a switching agent. Our
approach dynamically accommodates the preferences of drivers on the system and
achieves better overall performance across three metrics (speeds, stops, and
waits) while integrating objectives that have competing or conflicting actions. | Marcin Korecki, Damian Dailisan, Cesare Carissimo | 2023-10-09T17:07:26Z | http://arxiv.org/abs/2310.05871v1 | # Dynamic value alignment through preference aggregation of multiple objectives
###### Abstract
The development of ethical AI systems is currently geared toward setting objective functions that align with human objectives. However, finding such functions remains a research challenge, while in RL, setting rewards by hand is a fairly standard approach. We present a methodology for dynamic value alignment, where the values that are to be aligned with are dynamically changing, using a multiple-objective approach. We apply this approach to extend Deep \(Q\)-Learning to accommodate multiple objectives and evaluate this method on a simplified two-leg intersection controlled by a switching agent. Our approach dynamically accommodates the preferences of drivers on the system and achieves better overall performance across three metrics (speeds, stops, and waits) while integrating objectives that have competing or conflicting actions.
## 1 Introduction
As artificial intelligence (AI) research reaches new peaks, more and more AI systems are being implemented, applied, and deployed worldwide. Further integration of such systems with human societies demands a thorough consideration of their consequences and effects. The inherent property of most, if not all, AI systems is to act with an unprecedented level of autonomy, often in settings where their actions might directly affect human beings. The growing field of Value Alignment (VA) aims to explicitly study the values pursued and exhibited by AI agents and make sure that they correspond to human values.
Motivating examples of VA often consider the long-term and potentially existential threats posed by powerful, super-intelligent AI agents with misaligned values [16]. No less pertinent are the short-term threats of more mundane, highly specialized AI systems, employed in particular in control settings, becoming misaligned. A prominent case where a potential misalignment is particularly dangerous is given by systems where humans voluntarily cede control of a system to algorithms. Examples of such systems abound: self-driving cars, where the driver cedes control of their vehicle [13]; recommender systems and content algorithms [14], where the user cedes some control over their access to information; and traffic control systems, where drivers cede control of traffic flow coordination [15]. In all of these, AI is either the control method of choice or in the process of becoming one1.
Footnote 1: A contrasting example of such a system, which has not yet been “taken over” by AI is the representative government, to which citizens cede control over many areas of their social lives.
A paradoxical question that arises here is _How do humans stay in control of systems which they voluntarily cede control of to AI?_ The paradox can be partially addressed by delimiting the boundaries of what is being controlled and the way in which it is controlled. According to VA, this way should conform to human values, but here we encounter another problem: namely, the challenge of identifying and defining human values. In many cases, it is clear that the values held by humans are varied to the extent of even being contradictory [1]. Moreover, these values can be
situation and time dependent. For instance, road users can value safety, sustainability, travel times, waiting times, and so on, depending on the current state of the traffic and their own conditioning. Thus, AI systems that are truly able to align their values should also be able to do so dynamically, responding to a variety of potentially shifting preference landscapes. This requires a coupling between the AI and the users of the system -- the AI must be made aware of the current values pursued by the users and be able to incorporate them into its decisions. Among areas that have produced methods and results conducive to such a design are social choice theory, which allows one to gauge and aggregate group preferences, and multi-objective optimization, allowing for a simultaneous pursuit of multiple values. Below, we present a motivational example for the direction of our work and explicitly state our goals.
### Motivating Example
Consider an intersection with two intersecting approaches as shown in Figure 1. A simple Reinforcement Learning (RL) agent controls the access to the intersection. The agent can switch between two actions -- giving green to the North-South approach and red to the West-East approach or, conversely, green to the West-East and red to the North-South. Assume now that the system's designers have chosen to make it control traffic in such a way that it becomes as environmentally sustainable as possible. This can be achieved by setting the reward of the agent to e.g. negative of some measure of emissions or a proxy of it, such as the negative of the number of stops (vehicles emit most emissions when accelerating from complete stops (Rakha and Ding, 2003)). The agent is then trained with this reward and reaches a solution. Namely, it only ever chooses one action, always keeping the red light for the North-South approach and green for the West-East. Indeed, the system has successfully found a global optimum -- the emissions, or the proxy of the number of stops, have been minimized. The vehicles on the West-East approach never stop, while the vehicles on the North-South approach stop once, and so the emissions are kept as low as possible (we assume it is impossible to keep all vehicles moving, which is the case if their relative number on each approach is high enough). We are then left with the effects of what could be referred to as "reward hacking" (Skalse et al., 2022) by the RL agent. While the reward selected by the designers has been optimized, what emerges is a probably unwanted and perfectly non-egalitarian solution2. This sort of result has been commonly reported in the context of many social systems, where the maximization of social welfare leads to inequality (e.g. in Roughgarden (2002)).
Footnote 2: The result mentioned here has been replicated in our simulated intersection environment by a Deep Q-Learning RL agent using the negative of the number of stops as its reward.
How would that situation develop if the RL agent could incorporate the preferences of users in the system dynamically, rather than always follow a preset and, in this case, clearly misguided goal? Consider a starting population of drivers, all preferring to optimize for sustainability. The system reaches the same state, but now the drivers on the North-South approach are likely to change their preferences -- they might no longer value sustainability as highly and switch to preferring to minimize their waiting time. If the RL agent is able to incorporate that shift in its decision process, the unfair situation might be escaped or avoided. In this scenario, a certain notion of multi-objectivity can be an asset in avoiding "reward hacking" and the resultant misalignment.
Figure 1: Two-way intersection controlled by an RL agent. The agent controls which direction has a green signal at any given time.
The aim of our work is to investigate a multi-objective and dynamic value alignment in systems where humans cede some control over the system to an AI. We present a simulated intersection environment following the ideas from our motivating example, which can be used to further study the problem. We use a simple traffic environment to test our method and provide a proof-of-concept. A deployment to fully realistic traffic conditions with multiple intersections and complex dynamics is the natural extension of this work. By designing a simplified, but realistic simulation of a real-world system, we hope to provide a more concrete example of the challenges of VA and push this field of study beyond simple, grid-world derived studies, and provide a springboard for extending such systems to more complex problems in the real world. Lastly, we propose a Deep RL agent that can dynamically adapt to different values held by the users of the system under its control. We show how a multi-objective perspective can be leveraged to avoid reward hacking and, at the same time, give more control to the users of the system rather than its designers. We believe that our approach, which explicitly states the social context in which the proposed system exists and takes into account the potential negative consequences of AI systems, is highly relevant with respect to the perceived lack of such considerations in most recent publications (Birhane et al., 2022). Our work builds directly on Multiple-Principal Assistant Games (Fickinger et al., 2020), which consider the theoretical case of an agent acting on behalf of \(N\) humans with differing payoffs. We ground the problem in a particular use case of traffic signal control and extend the methodology to Deep Reinforcement Learning. In terms of arriving at a reward function for the agent to pursue, we build on Critch and Russell (2017) and provide a practical implementation that allows for multiple objectives to be pursued in real time.
## 2 Related Work
**Value Alignment:** The field has become formalized with respect to AI systems in recent years but followed an established line of research into moral, social, and technical consequences of automation and technology (Wiener, 1960). Presently, the field's motivation often focuses on the risks posed by a hypothetical, powerful, misaligned Artificial General Intelligence (AGI) (Russell, 2022, 2). Since there has also been significant criticism of this "singularity" argument (Walsh, 2017) we believe that it is valuable to also consider the problems of VA using more mundane, already existing AI systems (e.g. recommender systems (Stray et al., 2020)). Many researchers support this position, highlighting the lack of substantial work on real-world examples of systems that might suffer from misalignment (Fickinger et al., 2020, Hadfield-Menell et al., 2017).
The study of VA is inherently multidisciplinary, bringing insights from the study of ethics, sociology, economics, and AI (Hadfield-Menell and Hadfield, 2019). The philosophical arguments and discussions are well summarized by Gabriel (2020). The more technical side deals with concrete causes of misalignment, such as reward hacking (Amodei et al., 2016). Useful impossibility and uncertainty theorems have also been worked out based on work done by ethicists (Eckersley, 2019).
**Inverse Reinforcement Learning (IRL):** One of the primary methodologies employed for VA is IRL, where the RL agent learns the reward function directly from demonstrations rather than being programmed with it explicitly (Ng et al., 2000). The motivation behind this methodology is learning in situations where explicitly defining the reward function might be challenging but there might exist experts who are able to demonstrate optimal behavior (Abbeel and Ng, 2004). One particular application case has been learning navigation and driving behavior (Abbeel and Ng, 2004; Ziebart et al., 2008). A variety of approaches to the problem of reward learning have been proposed, such as a Bayesian one, where the prior information and the evidence given by experts' actions are used to derive a distribution over the possible reward functions (Ramachandran and Amir, 2007).
Assistance games have been used as a model for the study of alignment problems in much of the IRL literature (Fickinger et al., 2020, 2016, 2017, 20). The setting often involves a robot learning from a single human demonstrator. Fickinger et al. (2020) extend this setting to multiple demonstrators, and a social choice method is employed to combine the different preferences of the demonstrators. Brown et al. (2021) proposed a method allowing one to verify and quantify the alignment of a given agent. The issue of goal mis-generalization, where the agent retains its capabilities while pursuing the wrong goal (similar to the motivational example of misspecification given in section 1) has been studied by Langosco et al. (2022) in the context of Deep RL.
The main limitation of IRL with respect to the goal of our work is that it presupposes we can demonstrate the right behavior. This is not possible in many cases, where the agent learns behavior that is not the same as the behavior of the users: e.g. traffic light (agent) learning to control the flow of cars. A further complication is that the values that need to be followed by AI systems are temporal in nature, which is a challenge for this particular methodology. Arnold and Kasenberg (2017) give examples of instances where IRL fails to infer the intended normative behavior.
**Preference-Based Reinforcement Learning (PbRL):** Another methodology, that addresses the difficulty of selecting and engineering the reward function for RL, leverages the preferences (between states, actions, or trajectories) of users
or experts (Wirth et al., 2017). What is being learned is a policy consistent with the revealed preferences. PbRL can be deployed in simulator-free (Akrour et al., 2011) as well as model-free settings (Wirth et al., 2016). Some finite time guarantees are possible for tabular settings under the assumption of stochastic preferences (Xu et al., 2020). A set of benchmarks for studying PbRL problems have recently been proposed by Lee et al. (2021). Chen et al. (2022) have been able to extend the PbRL to non-tabular settings for the first time.
While this avenue of research is relevant and informative it is also limited and at the current time infeasible for application to our problem setting. The main issue in PbRL is that the preference landscape is fixed during training (the model learns a given preference landscape), thus limiting the potential for adaptation to novelty in the preference landscapes, which we consider to be a key interest for properly aligned systems.
**Multi-Objective Reinforcement Learning (MORL):** Our work deals with systems whose users might exhibit a variety of potentially contradictory preferences. Thus, a multi-objective perspective is necessary to design an RL agent to work in such an environment.
To study such agents with potentially competitive goals a framework of Markov games has been proposed (Littman, 1994). Algorithms for arriving at a common Pareto optimal policy usually rely on forms of communication between agents (Mariano and Morales, 2000a,b). Some methods aim to learn the entire Pareto front of policies (Van Moffaert and Nowe, 2014). MORL has also been extended to the Deep Learning case (Mossalam et al., 2016).
The main issue of MORL is defining how to combine diverse objectives based on their relative importance. This can be done _a priori_, _a posteriori_ or be learned during training (Hayes et al., 2022). In an _a priori_ case, the users' preferences need to be specified and fixed, which does not work for our problem setting. Similarly, if the preferences are learned, the model might not be able to accommodate a significant shift in the preference landscape. Thus, in our work, we follow the _a posteriori_ approach, where a set of models is trained and the users are able to select between the different models' actions in deployment.
**Social Choice Theory:** The study of aggregating multiple preferences has been consistently identified as a key element relevant to value alignment. The modern history of this field can be traced back to the work of Von Neumann and Morgenstern (1947), who applied game theory and other tools to modeling human decision-making (though the origin of the field goes back to the mathematical theory of elections of Borda (1781) and Condorcet (1785) as well as the utilitarianism of Bentham (1789)). The main theories of the field have been summarized in Fishburn (1970) and key impossibility theorems have been given in Arrow (1950).
Some of the works mentioned in the previous sections have explicitly investigated the VA problem through the lens of social choice impossibility theorems (e.g. Gibbard's theorem) (Fickinger et al., 2020, 2020). These works have employed insights from the field of mechanism design to circumvent or lessen the effects of potential users misrepresenting their preferences. Methods such as approval voting have been identified as norms that should guide group decision-making for AI systems (Prasad, 2018). Some implementations of such AI voting-based systems have been proposed for ethical decision-making (Noothigattu et al., 2018) based on a large data set of preferences in a trolley problem setting (Awad et al., 2018).
Our work follows this direction by employing a voting layer directly in the decision-making process of an RL agent. Based on the insights discussed by Baum (2020), our design includes an up-front, explicit way of identifying and aggregating potentially contrary preferences rather than off-loading that process to the RL agent (to let it figure it out so to speak). In the following section, we give details of our approach and test its efficiency under two voting schemes.
## 3 Methodology
Our proposed approach has three main components: the models of different objectives, a method of vote aggregation, and an integration layer. Here, we describe in detail the employed design principles. We focus on control settings where humans cede some control over the system to the AI agent, which here is assumed to be a Deep RL agent. Finally, we introduce a simulated environment (inspired by our motivational example) in which we test our approach. In the following sections, when we refer to the agent we always mean the controller, in our case the traffic signal controller. We refer to the users of the system as users, drivers, or vehicles. The users are not learning, while the controller takes its decisions based on models learned through \(Q\)-learning.
### Multiple Objectives Models
As we are interested in the issues of dynamic value alignment to multiple values in the context of control, we need a way to model the effects of different actions on the state of the system with respect to different objectives. We propose to train a separate Deep \(Q\)-Network (DQN) for each of our objectives. The DQN returns a \(Q\)-value for a given state-action
pair, which can be interpreted as the measure of the expected future discounted rewards if the given action is taken in the given state.
Consider a DQN for an objective \(A\) with two possible actions. At inference, in some state \(S\) the DQN will output \(Q_{1}^{A}\) and \(Q_{2}^{A}\), denoting the expected reward. Typically, the action corresponding to the larger of these two \(Q\)-values will be preferred and selected. However, since these \(Q\)-values are interpretable, one could also take into account their relative quantities. Namely, if \(Q_{1}^{A}\gg Q_{2}^{A}\) or \(Q_{1}^{A}\ll Q_{2}^{A}\) there is a clear preference for one of the actions. On the other hand, if \(Q_{1}^{A}\approx Q_{2}^{A}\) then it might not make much difference which action is chosen even if one of the \(Q\)-values is technically greater than the other. We will develop this observation further when discussing the integration layer. Moreover, we note that the \(Q\)-values could also be compared between DQNs trained with different objectives if the two DQNs share the same action space. Specifically, \(Q_{1}^{A}\) could be compared with \(Q_{1}^{B}\), where \(B\) represents another objective. This is limited in that if the scale of the objectives is not the same, a fair comparison becomes more difficult. However, since here we will care only for the relative importance of the \(Q\)-values of a given objective (\(Q_{1}^{A}\), \(Q_{2}^{A}\)) we can normalize them (in this work we use a softmax normalizing function). These normalized values we refer to as \(q_{1}^{A}\), \(q_{2}^{A}\) for objective \(A\). Since their scale will be in the range of 0 to 1, we are now able to compare \(q_{1}^{A}\) with \(q_{1}^{B}\).
#### Vote Aggregation
We want to allow the users of our system to be able to reveal their preferences and affect the system according to them. Therefore, we need to specify the type of preferences such that they are meaningful to the system. Here, we set the possible preferences to correspond to the reward \(r\) of the DQNs described in the previous section. Moreover, we say that when a user selects a given reward as the preferred one, they do so under the assumption that this reward will be applied to the entire system and not preferentially to them alone. If the preferred \(r\) corresponds to minimizing the waiting time, the user asks the controller of the system that everybody's waiting time be minimized and not just their own3. Thus, the users remain under the _veil of ignorance_[14] as the exact effects of their chosen objective being pursued by the controller are not known to them. Given this setup, the users of the system are polled at decision time (each time the traffic signal control agent is to take action, in our environment that happens in 5-second intervals) and reveal their preferences. At the decision time, only users that are allowed to vote at the given moment are polled--these are the users on the incoming lanes of the intersection. Thus, the voting population at each step is not equal to the entire population of the system. Even though the preferences of each user are fixed, the relative preferences of the voting population change as the population changes. These preferences need to then be aggregated and weights \(w\) for each of the possible preferences are created. In this work, we consider and compare two simple aggregation methods.
Figure 2: Multi-objective action selection uses multiple Deep \(Q\)-Networks (DQN) each trained purely on a single objective. For a given environment state \(s\), each DQN outputs a vector of \(Q\)-values. A softmax re-scales these values to reflect their relative importance for a given DQN. These are then passed to an integration layer, where the \(q\)-values (normalized \(Q\)-values) corresponding to an action are weighted across the different objectives, using the user votes as weighting coefficients. The resulting new \(q^{\prime}\) values are then used to determine an action, using an \(\mathbf{argmax}\) as in typical DQN methodology.
**Majority voting:** The option with more votes (\(v_{A}\) being the number of votes for preference \(A\)) wins, and only the winning option is considered.
\[(w_{A},w_{B})=\begin{cases}(1,0),&v_{A}>v_{B}\\ (0,1),&v_{A}<v_{B}\end{cases} \tag{1}\]
Thus, if preference A gets more votes, \(w_{A}\) = 1 and \(w_{B}\) = 0, for a case with two possible preferences A and B.
**Proportional voting:** The weights are created directly from the votes according to Equation 2.
\[w_{A}=\frac{v_{A}}{v_{A}+v_{B}} \tag{2}\] \[w_{B}=\frac{v_{B}}{v_{A}+v_{B}}\]
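Both aggregation rules reduce to a few lines. The sketch below covers the two-preference case used here; how ties are broken under majority voting and what happens when no eligible user votes are not specified in the text, so the choices below are assumptions.

```python
def majority_weights(v_a, v_b):
    """Equation (1): winner-takes-all weights (ties broken in favor of A here)."""
    return (1.0, 0.0) if v_a >= v_b else (0.0, 1.0)

def proportional_weights(v_a, v_b):
    """Equation (2): weights proportional to the vote shares."""
    total = v_a + v_b
    if total == 0:  # nobody on the incoming lanes voted; fall back to equal weights
        return (0.5, 0.5)
    return (v_a / total, v_b / total)
```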
#### Integration Layer
The key element of our approach is the integration layer, which combines the DQNs' results with the preferences of the users. The layer performs the integration according to Equation 3.
\[q^{\prime}{}_{1}= w_{A}q_{1}^{A}+w_{B}q_{1}^{B} \tag{3}\] \[q^{\prime}{}_{2}= w_{A}q_{2}^{A}+w_{B}q_{2}^{B}\]
The \(q^{\prime}{}_{1}\) and \(q^{\prime}{}_{2}\) values encapsulate the expected rewards weighted by the weights \(w\) for the two possible actions. In practice, in cases where there is a strong preference (e.g. \(w_{A}\gg w_{B}\)) the action chosen under objective \(A\) (say \(q_{2}^{A}>q_{1}^{A}\)) will be selected unless \(q_{1}^{B}\gg q_{2}^{B}\) and \(q_{1}^{A}\approx q_{2}^{A}\). Thus, the system generally follows the preferences unless the relative difference between the expected rewards within one objective is large and within the other is small. Conversely, if the preference is weak (\(w_{A}\approx w_{B}\)) the system will prioritize the objective that stands to lose more from not being followed. This approach then allows accounting for user preferences, but at the same time tends to avoid taking very bad actions on one of the objectives.
Here we have described a version of our method with two objectives, but our system allows for more objectives to be included. One could imagine new objectives to be continuously added, as long as it is possible to train new DQNs based on different objectives.
The approach we have described throughout this section is summarized symbolically in Figure 2.
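The action-selection step of Figure 2 can be sketched as follows: each objective's Q-vector is softmax-normalized, the normalized values are mixed with the aggregated vote weights as in Equation 3, and the action with the largest mixed value is taken. Here `q_networks` is a list of callables returning the Q-values for a state; the code is a minimal illustration of the mechanism rather than our training pipeline.

```python
import numpy as np

def softmax(x):
    x = np.asarray(x, dtype=float)
    e = np.exp(x - x.max())
    return e / e.sum()

def select_action(state, q_networks, weights):
    """Weight the softmax-normalized Q-values of each objective's DQN by the
    aggregated vote weights (Equation 3) and take the argmax over actions."""
    mixed = None
    for q_net, w in zip(q_networks, weights):
        q_norm = softmax(q_net(state))  # relative preference within one objective
        mixed = w * q_norm if mixed is None else mixed + w * q_norm
    return int(np.argmax(mixed))

# Example with two objectives and the proportional weights from Equation (2):
# action = select_action(s, [stops_dqn, wait_dqn], proportional_weights(v_a, v_b))
```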
### Switching Traffic Lights
We modeled a simplified two-leg intersection that consists of two one-way roads intersecting at one point (see Figure 1). The intersection has traffic signals controlled by a switching agent. The switching agent facilitates the assignment of a green light to one of the roads and a red light to the other. Regularly, at intervals of \(t_{\mathrm{act}}\), the switching agent may choose to switch the active phase on the intersection. When choosing to switch or not, the agent polls the vehicles on the upstream approach (in other words only vehicles that will be directly affected by this action are allowed to vote, the vehicles on the downstream do not vote).
We train two Deep \(Q\)-Learning models on two different reward functions: stops and wait times. The state of the traffic signal control agent (and so the state of the model Deep \(Q\)-Network) is the occupancy of the vehicles (which is a function of the number and size of the vehicles) on the three segments of the incoming approaches (an approach is divided into three segments of equal length).
\[r_{\text{stops}}= -\text{ (number of stops in the past $t_{\mathrm{act}}$ steps)} \tag{4}\] \[r_{\text{wait}}= -\text{ (duration of stops in the past $t_{\mathrm{act}}$ steps)} \tag{5}\]
We also considered the case of using a single reward function to incorporate multiple objectives simultaneously. We train two additional Deep \(Q\)-Learning models on two different reward functions: a linear combination of stops and wait times (Equation 6) and a Cobb-Douglas production function of stops and wait times (Equation 7).
\[r_{\text{linear}}= \alpha\,\mathrm{norm}(r_{\text{stops}})+\beta\,\mathrm{norm}(r_{\text{wait}}) \tag{6}\] \[r_{\text{cobb-doug}}= -\mathrm{norm}(-r_{\text{stops}})^{\alpha}\cdot\mathrm{norm}(-r_{\text{wait}})^{\beta}, \tag{7}\]
where norm is the normalizing factor that scales the given value by its maximum for the stops and by half of the maximum for the wait times, and \(\alpha=\beta=0.5\).
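A sketch of the two combined rewards in Equations (6)-(7) is given below; the normalizing constants `max_stops` and `max_wait` stand in for the maxima described in the text, and how they are obtained is not specified here.

```python
def combined_rewards(n_stops, wait_time, max_stops, max_wait, alpha=0.5, beta=0.5):
    """Equations (6)-(7): single-scalar rewards combining stops and wait times.
    Stops are normalized by their maximum, wait times by half their maximum."""
    norm_stops = -n_stops / max_stops          # norm(r_stops), in [-1, 0]
    norm_wait = -wait_time / (0.5 * max_wait)  # norm(r_wait)
    r_linear = alpha * norm_stops + beta * norm_wait
    r_cobb_douglas = -((-norm_stops) ** alpha) * ((-norm_wait) ** beta)
    return r_linear, r_cobb_douglas
```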
**Experiments:** To test the performance of our approach, we run experiments in the simulated environment described above. We run the system under six different demand distributions. The demands are defined in terms of the number of cars on each of the two approaches (N, W), where N represents the number of cars on the North-South link and W represents the number of cars on the West-East approach. The demand can be: low unbalanced (11, 6); low balanced (11, 11); medium unbalanced (22, 11); medium balanced (22, 22); high unbalanced (32, 16); high balanced (32, 32).
For each demand, we set the preference of the participating vehicles to be half-half -- 50% of them prefer \(r_{\text{stops}}\) and 50% prefer \(r_{\text{wait}}\). Depending on the random initialization and the system dynamics, the proportions of the voting user's preferences change throughout the simulation. We ran our method 100 times for each demand setting with different random initialization and presented averaged results.
We conducted all experiments using the CityFlow simulator (Zhang et al., 2019) and we used periodic boundaries for the two roads. Each scenario is run for 3600 steps, corresponding to an hour of real-time. The code used for all the experiments and results (including all the hyperparameter settings) is available at this repository.
#### Underlying Assumptions
The expected performance of the proposed method rests on some assumptions, which we make explicit and discuss in this section. The first assumption is that of a centralized controller. We have stated in Section 1 that in this work, we are interested in systems where users voluntarily cede control. As such, this control needs to be ceded to some particular decision mechanism, which is assumed to be centralized. This stems from our particular application to traffic signal control. The traffic system users are assumed to follow the common (though perhaps implicit) agreement on traffic laws relying on the existence of traffic lights, which are inherently a form of centralized control. From this assumption, another one follows: namely, that the preference aggregation rule used by our method is socially trusted. This assumption is akin to the current trust (or lack thereof) of road users in traffic signal control algorithms. Another assumption relating to the preference aggregation method is that it is well evaluated from the utilitarian perspective. While our method allows for any aggregation method to be employed, the analysis of the effects and trust in different ones is out of the scope of this work (thus, we rely on the above assumptions). It is worth noting that the practical side of implementing such a system would likely need to include consultation and educational efforts to inform the users about how it works.
A further assumption that we take is that of selfish rationality and truthfulness of traffic users. We assume that the voting users do indeed vote according to their real preferences. A common problem that could affect the system is strategic voting, whereby users misrepresent their preferences to exploit the aggregation method and increase the chances of their objective being chosen (Gibbard, 1973). This problem occurs mainly in schemes such as cumulative voting, where the user expresses their preferences as a ranking. The consequences of the theorem are avoided here, since it only applies to cases with three or more options and we only have two; strategic behavior of users would therefore be equivalent to honest behavior. It is, however, worth noting that if this approach were applied to a situation with three or more objectives, one would have to take into account the problem of users potentially misrepresenting their preferences.
## 4 Results
### Reward functions with single objectives
In Figure 3 we report the performance on three different metrics of the two single-objective DQNs (optimizing for Equations 4 and 5, respectively) and the two multi-objective methods, one using majority voting and the other proportional voting. We observe that the speeds do not vary much between the methods (while they do vary between demands). It is also apparent that the single-objective DQNs suffer a trade-off between favorable stops and wait times. The Stops DQN is able to achieve a very low number of stops (close to 0 for all scenarios) but at the same time, its waiting times are prohibitively large. In fact, the Stops DQN learns to never switch, thus reaching a global minimum of the number of stops in systems where free flow is impossible, but making one lane of traffic wait forever. Conversely, the Wait Times DQN achieves very low wait times (close to 0 for all demands) but a high number of stops (qualitatively this solution corresponds to the agent switching all the time, alternating the green signals between two lanes cyclically). The multi-objective methods, on the other hand, never achieve performance as high as the single model on each objective. However, they are able to achieve good results on each of them -- effectively, they are able to pursue and find a balance between them both. The Proportional method outperforms the Majority method in all demands except Medium
Balanced (where they are the same) and High Balanced (where Proportional achieves fewer stops and higher wait times than Majority).
### Reward functions combining multiple objectives
In Figure 4 we report the performance on three different metrics of the two DQNs that aggregate the two objectives into a single reward (a linear combination and a Cobb–Douglas product) and the proportional voting-based multi-objective method. We note that in the Low conditions our method performs worse than Cobb-Doug, which achieves both better wait times and fewer stops. In the Medium Unbalanced condition, Prop S+W performs on the same level as Cobb-Doug in all metrics. In the Medium Balanced condition, Prop S+W performs in between the two methods in terms of stops and wait times. The High Unbalanced and High Balanced conditions, arguably the most challenging ones, are where Prop S+W shines, achieving performance at the level of Linear comb. and Cobb-Doug in both metrics at the same time, while each of those two methods underperforms on one of the metrics.
### Alignment of the multi-objective method
In Figure 5 we report the correlations between the actions chosen by different DQNs in a given state. We observe that a DQN optimizing for \(r_{\text{stops}}\) has a low correlation with a DQN optimizing for \(r_{\text{wait}}\). This means that these DQNs tend to predict different actions as optimal given the same state. We see however that the multi-objective actions can align with either \(Q^{\text{stop}}\) or \(Q^{\text{wait}}\), for different demand conditions.
## 5 Discussion
Complex systems often have numerous objectives that designers may fail to account for during the design stage. Our proposed method offers modularity in accounting for novel preferences, making it well-suited to accommodate new objectives even after the system is deployed. By training a Deep \(Q\)-Network or any other model that outputs \(Q\)-value estimates, we can optimize the system for new objectives as needed. That is, once the system is deployed, users, designers, or controllers of the system can easily add new objectives once they realize the need for adjustment. Thus, one could imagine a system where users can indicate the objectives/preferences they care about that are currently
Figure 3: Radar plots for the three evaluation metrics (stops (seconds), wait times (seconds), and speeds (meters per second)) for six different demand distributions. Larger triangle areas can be associated with a more desirable performance across all metrics. The scales are different for each radar plot. Prop S+W represents our method using proportional aggregation, Major S+W represents our method using majority aggregation.
unavailable. Models optimizing for these objectives could then be trained in simulation or using historical data and deployed to the system. Users would then be able to vote for these preferences, and they would be taken into consideration proportionally to the vote. The central aspect allowing for this is that the system is able to work with any number of objectives. Nevertheless, we believe a further study (which is beyond the scope of this paper, whose goal is to introduce this kind of voting-induced multi-objective methodology) into the feasibility of including more than two objectives is needed.
In this traffic-inspired simulation environment, one could argue that designers can choose reasonable objectives so as to avoid the suboptimal/unfair policies learned through an inappropriate reward function. However, in a broader sense, there could also be systems with many incommensurate rewards, and setting them _a priori_ can amount to a sort of algorithmic dictatorship. Moreover, in many cases it is impossible currently to account for all the potential ways in which the reward can become hacked or misaligned and so setting fixed objectives carries a significant risk of running into misalignment at some point.
Using multiple single-objective reward functions (our approach) comes with some tradeoffs compared to using a single multi-objective reward function. Our approach requires implementing separate \(Q\)-value estimate models for each objective. While the number of potential objectives varies across different application domains, we believe that in most cases, the number of objectives would not be too large, and implementing a few Deep \(Q\)-Networks (DQNs) should not be a significant limitation. However, for problems where a large number of objectives is desired, scaling issues may arise when using more traditional Pareto-front based methods. Multi-objective optimization may have difficulty scaling to a large number of objectives. Alternatively, attempting to optimize a linear or non-linear combination of multiple objectives using a single model (e.g., a deep neural network) would require specifying parameters such as the weights given to each objective. We note that in our experiments optimizing the linear combination of the two objectives actually leads the system into an undesired optimum with a close-to-zero number of stops but very high wait times. This showcases the potential effects of misspecifying the weights. Moreover, if an additional objective is added to the model, it would require retraining the model from scratch, and once again searching and specifying the weights for the objectives. Our approach avoids these challenges, although it requires training additional single-objective models. Overall, we believe that our approach is a more scalable alternative to the method outlined above.
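To make the contrast concrete, the sketch below shows how per-objective \(Q\)-value estimates could be combined at decision time under the two voting schemes evaluated above. It is a minimal illustration only: the function names, the min-max normalization of the \(Q\)-values, and the tie-breaking are our own assumptions and are not taken from the evaluated implementation.

```python
import numpy as np

def aggregate_action(q_per_objective, vote_shares, scheme="proportional"):
    """Pick one action from per-objective Q-values and the population's votes.

    q_per_objective: dict mapping objective name -> array of Q-values, one per action.
    vote_shares:     dict mapping objective name -> fraction of users voting for it.
    scheme:          "proportional" weights objectives by vote share;
                     "majority" uses only the most-voted objective.
    """
    objectives = list(q_per_objective.keys())
    if scheme == "majority":
        winner = max(objectives, key=lambda o: vote_shares[o])
        return int(np.argmax(q_per_objective[winner]))

    # Proportional: min-max normalize each objective's Q-values so that differently
    # scaled rewards become comparable, then take a vote-weighted sum.
    total = np.zeros_like(next(iter(q_per_objective.values())), dtype=float)
    for o in objectives:
        q = np.asarray(q_per_objective[o], dtype=float)
        span = q.max() - q.min()
        q_norm = (q - q.min()) / span if span > 0 else np.zeros_like(q)
        total += vote_shares[o] * q_norm
    return int(np.argmax(total))

# Example with two objectives and two actions (e.g., keep phase / switch phase).
q = {"stops": np.array([1.0, -0.5]), "wait": np.array([-2.0, 3.0])}
votes = {"stops": 0.4, "wait": 0.6}
print(aggregate_action(q, votes, "proportional"), aggregate_action(q, votes, "majority"))
```

Under this view, adding a new objective amounts to training one more single-objective \(Q\)-model and extending the two dictionaries, which is the modularity argument made above.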
Figure 4: Radar plots for the three evaluation metrics (stops (seconds), wait times (seconds), and speeds (meters per second)) for six different demand distributions. Larger triangle areas can be associated with a more desirable performance across all metrics. Here, we compare the multi-objective approach (Prop S+W) against two single-objective approaches that incorporate two metrics (Linear combination and a Cobb–Douglas product function). The scales are different for each radar plot.
### Limitations
Our approach has a unique requirement for an environment: the environment must have multiple players (with their own logic) that interact with a system that is controlled and optimized by a controller. To our knowledge, additional environments would need to be made from scratch, which is why we have focused on only one environment.
Another limitation of our work is that we focus only on a half-half split between the two possible preferences. While, due to the dynamics of the studied system and random initialization, we still obtain a variety of voting proportions in the population in each action phase, it would be interesting to further study the performance of our approach under different preference splits.
Furthermore, the approach we propose relies on optimization. As such, it might suffer from the limits of optimization, such as the potential incomparability of alternatives (Carissimo and Korecki, 2023). In practice, it would make sense for the objectives that are available to be voted on to themselves be selected in a democratic and participative way. A limitation of our work is that we assume these objectives to be given and at least in some sense commensurable (to the extent that they can be voted on).
Moreover, we test our system with two objectives; a clear extension would be to run experiments with three or more objectives. However, we show that combining even two objectives can have a positive effect and help avoid the negative effects of misspecified rewards. Lastly, a more formal quantification of the alignment of our approach would be valuable.
## 6 Conclusions
Based on our results, we can confirm that our proposed method avoids the "reward hacked" solution exhibited by the Stops DQN. Thus, our method is able to avoid misalignment in our setting, by following a more nuanced, multi-objective perspective. Moreover, our method performs well on both objectives, aligning well with both preferences exhibited by the user population. Furthermore, the social effects of deploying such a participatory system cannot be quantified in a simulation but are likely to be positive, promoting engagement and trust, and giving more control over the system to its users. We also note that the proportional voting system appears to work better in our setting than majority voting. This appears intuitive as the proportional method mitigates issues such as "tyranny of the majority" and "wasted votes",
Figure 5: Correlation of the actions that would have been chosen by the Stops (\(r_{\text{stops}}\)), Wait Times (\(r_{\text{wait}}\)), and Multi-Objective (our method) DQNs in a given state for the different traffic volumes (low, medium, high) and voting schemes (proportional and majority vote). Higher values mean the same action would have been taken by both schemes. |
2304.00896 | A generalized quantum cluster algebra of Kronecker type | The cluster multiplication formulas for a generalized quantum cluster algebra
of Kronecker type are explicitly given. Furthermore, a positive bar-invariant
$\mathbb{Z}[q^{\pm\frac{1}{2}}]$-basis of this algebra is constructed. | Liqian Bai, Xueqing Chen, Ming Ding, Fan Xu | 2023-04-03T11:34:39Z | http://arxiv.org/abs/2304.00896v1 | # A generalized quantum cluster algebra of Kronecker type
###### Abstract.
The cluster multiplication formulas for a generalized quantum cluster algebra of Kronecker type are explicitly given. Furthermore, a positive bar-invariant \(\mathbb{Z}[q^{\pm\frac{1}{2}}]\)-basis of this algebra is constructed.
Key words and phrases:generalized quantum cluster algebra, cluster multiplication formula, positive basis
## 1. Introduction
Cluster algebras were introduced by Fomin and Zelevinsky [13, 14] in order to set up an algebraic framework for studying the total positivity and Lusztig's canonical bases. Quantum cluster algebras, as the quantum deformations of cluster algebras, were later introduced by Berenstein and Zelevinsky [4] for studying the dual canonical bases in coordinate rings and their \(q\)-deformations. An important feature of (quantum) cluster algebras is the so-called Laurent phenomenon which says that all cluster variables belong to an intersection of certain (may be infinitely many) rings of Laurent polynomials.
Generalized cluster algebras were introduced by Chekhov and Shapiro [9] in order to understand the Teichmüller theory of hyperbolic orbifold surfaces. The exchange relations for cluster variables of generalized cluster algebras are polynomial exchange relations while the exchange relations for cluster algebras are binomial relations. Generalized cluster algebras also possess the Laurent phenomenon [9] and are studied by many people in a similar way as cluster algebras (see for example [18, 6, 7, 17, 2]). As a natural generalization of both quantum cluster algebras and generalized cluster algebras, we defined the generalized quantum cluster algebras [1]. Not surprisingly, the Laurent phenomenon also holds in these algebras [3].
One of the most important problems in cluster theory is to construct cluster multiplication formulas. For acyclic cluster algebras, Sherman and Zelevinsky [22] firstly established the cluster multiplication formulas in rank 2 cluster algebras of finite and affine types. Cerulli [8] generalized this result to rank 3 cluster algebra of affine type \(A_{2}^{(1)}\). Caldero and Keller [5] constructed the cluster multiplication formulas between two cluster characters for simply laced Dynkin quivers, which was generalized to affine types by Hubery in [15] and to acyclic types by Xiao and Xu in [23, 24]. In the quantum case, Ding and Xu [11] firstly gave the cluster multiplication formulas of the quantum cluster algebra of Kronecker type. Recently, Chen, Ding and Zhang [10] obtained the cluster multiplication formulas in the acyclic quantum cluster algebras with arbitrary coefficients through some quotients of derived Hall algebras of acyclic valued quivers. One of the most powerful tools in cluster theory is the cluster multiplication formulas which are very useful to construct bases of
these algebras.

## 2. Preliminaries

Throughout this section, let \(\widetilde{B}=(b_{kl})\) be an \(m\times n\) integer matrix, let \(\Lambda\) be a skew-symmetric \(m\times m\) integer matrix, and let \(X(\mathbf{a})\) denote the corresponding basis element of the based quantum torus, for \(\mathbf{a}=(a_{1},a_{2},\ldots,a_{m})^{T}\in\mathbb{Z}^{m}\).
For any \(1\leq i\leq n\), we say that \(\widetilde{B}^{\prime}=(b^{\prime}_{kl})\) is obtained from the matrix \(\widetilde{B}=(b_{kl})\) by the matrix mutation in the direction \(i\) if \(\widetilde{B}^{\prime}:=\mu_{i}\big{(}\widetilde{B}\big{)}\) is given by
\[b^{\prime}_{kl}=\begin{cases}-b_{kl},&\text{if}\quad k=i\quad\text{or}\quad l =i,\\ b_{kl}+\frac{|b_{ki}|b_{il}+b_{ki}|b_{il}|}{2},&\text{otherwise}.\end{cases}\]
Denote the function
\[[x]_{+}=\begin{cases}x,&\text{if}\quad x\geq 0;\\ 0,&\text{if}\quad x\leq 0.\end{cases}\]
For any \(1\leq i\leq n\) and a sign \(\varepsilon\in\{\pm 1\}\), denote by \(E_{\varepsilon}\) the \(m\times m\) matrix associated to the matrix \(\widetilde{B}=(b_{ij})\) with entries as follows
\[(E_{\varepsilon})_{kl}=\begin{cases}\delta_{kl},&\text{if}\ l\neq i;\\ -1,&\text{if}\ k=l=i;\\ [-\varepsilon b_{ki}]_{+},&\text{if}\ k\neq l=i.\end{cases}\]
**Proposition 2.3**.: _[_4_, Proposition 3.4]_ _Let \((\Lambda,\widetilde{B})\) be a compatible pair, then the pair \((\Lambda^{\prime},\widetilde{B}^{\prime})\) is also compatible and independent of the choice of \(\varepsilon\), where \(\Lambda^{\prime}=E_{\varepsilon}^{T}\Lambda E_{\varepsilon}\) and \(\widetilde{B}^{\prime}=\mu_{i}(\widetilde{B})\)._
We say that the compatible pair \((\Lambda^{\prime},\widetilde{B}^{\prime})\) is obtained from the compatible pair \((\Lambda,\widetilde{B})\) by mutation in the direction \(i\) and denote it by \(\mu_{i}(\Lambda,\widetilde{B})\). It is known that \(\mu_{i}\) is an involution [4, Proposition 3.6].
For each \(1\leq i\leq n\), let \(d_{i}\) be a positive integer such that \(\frac{b_{li}}{d_{i}}\) are integers for all \(1\leq l\leq m\) and set \(\beta^{i}=\frac{1}{d_{i}}\mathbf{b}^{i}\), where \(\mathbf{b}^{i}\) is the \(i\)-th column of \(\widetilde{B}\). Denote by
\[\mathbf{h}_{i}=\{h_{i,0}(q^{\frac{1}{2}}),h_{i,1}(q^{\frac{1}{2}}),\ldots,h_{ i,d_{i}}(q^{\frac{1}{2}})\},\ 1\leq i\leq n,\]
where the \(h_{k,l}(q^{\frac{1}{2}})\in\mathbb{Z}[q^{\pm\frac{1}{2}}]\) satisfy \(h_{k,l}(q^{\frac{1}{2}})=h_{k,d_{k}-l}(q^{\frac{1}{2}})\) and \(h_{k,0}(q^{\frac{1}{2}})=h_{k,d_{k}}(q^{\frac{1}{2}})=1\). We set \(\mathbf{h}:=(\mathbf{h}_{1},\mathbf{h}_{2},\ldots,\mathbf{h}_{n})\).
**Definition 2.4**.: _With the above notations, the quadruple \((X,\mathbf{h},\Lambda,\widetilde{B})\) is called a quantum seed if the pair \((\Lambda,\widetilde{B})\) is compatible. For a given quantum seed \((X,\mathbf{h},\Lambda,\widetilde{B})\) and each \(1\leq i\leq n\), the new quadruple_
\[(X^{\prime},\mathbf{h}^{\prime},\Lambda^{\prime},\widetilde{B}^{\prime}):=\mu _{i}(X,\mathbf{h},\Lambda,\widetilde{B})\]
_is defined by_
\[X^{\prime}(e_{k})=\mu_{i}(X(e_{k}))=\begin{cases}X(e_{k}),&\text{if}\ k\neq i; \\ \sum_{r=0}^{d_{i}}h_{i,r}(q^{\frac{1}{2}})X(r[\beta^{i}]_{+}+(d_{i}-r)[- \beta^{i}]_{+}-e_{i}),&\text{if}\ k=i,\end{cases} \tag{2.3}\]
_and_
\[\mathbf{h}^{\prime}=\mu_{i}(\mathbf{h})=\mathbf{h}\ \ \text{and}\ \ (\Lambda^{\prime}, \widetilde{B}^{\prime})=\mu_{i}(\Lambda,\widetilde{B}).\]
_We say that the quadruple \(\mu_{i}(X,\mathbf{h},\Lambda,\widetilde{B})\) is obtained from \((X,\mathbf{h},\Lambda,\widetilde{B})\) by mutation in the direction \(i\)._
**Proposition 2.5**.: _[_1_, Proposition 3.6]_ _Let the quadruple \((X,\mathbf{h},\Lambda,\widetilde{B})\) be a quantum seed, then the quadruple \(\mu_{i}(X,\mathbf{h},\Lambda,\widetilde{B})\) is also a quantum seed._
Note that \(\mu_{i}\) is an involution by [1, Proposition 3.7]. Two quantum seeds are said to be mutation-equivalent if they can be obtained from each other by a sequence of seed mutations. Given the initial quantum seed \((X,\mathbf{h},\Lambda,\widetilde{B})\), let \((X^{\prime},\mathbf{h}^{\prime},\Lambda^{\prime},\widetilde{B}^{\prime})\) be a quantum seed which is mutation-equivalent to \((X,\mathbf{h},\Lambda,\widetilde{B})\). The set \(X^{\prime}=\{X^{\prime}_{1},\ldots,X^{\prime}_{m}\}\) is called the extended cluster and the set \(\{X^{\prime}_{1},\ldots,X^{\prime}_{n}\}\) is called the cluster. The element \(X^{\prime}_{i}\) is called a cluster variable for any \(1\leq i\leq n\) and \(X^{\prime}_{k}\) a frozen variable for any \(n+1\leq k\leq m\). Note that \(X^{\prime}_{k}=X_{k}\;(n+1\leq k\leq m)\). For convenience, let \(\mathbb{P}\) denote the multiplicative group generated by \(X_{n+1},\ldots,X_{m}\) and \(q^{\frac{1}{2}}\), and \(\mathbb{Z}\mathbb{P}\) the ring of the Laurent polynomials in \(X_{n+1},\ldots,X_{m}\) with coefficients in \(\mathbb{Z}[q^{\pm\frac{1}{2}}]\).
**Definition 2.6**.: _Given the initial quantum seed \((X,\boldsymbol{h},\Lambda,\widetilde{B})\), the associated generalized quantum cluster algebra \(\mathcal{A}(X,\boldsymbol{h},\Lambda,\widetilde{B})\) is the \(\mathbb{Z}\mathbb{P}\)-subalgebra of \(\mathcal{F}\) generated by all cluster variables from the quantum seeds which are mutation-equivalent to \((X,\boldsymbol{h},\Lambda,\widetilde{B})\)._
The following Laurent phenomenon is one of the most important results on generalized quantum cluster algebras.
**Theorem 2.7**.: _[_3_, Theorem 3.1]_ _The generalized quantum cluster algebra \(\mathcal{A}(X,\boldsymbol{h},\Lambda,\widetilde{B})\) is a subalgebra of the ring of Laurent polynomials in the cluster variables in any cluster over \(\mathbb{Z}\mathbb{P}\)._
## 3. The cluster multiplication formulas of \(\mathcal{A}_{q}(2,2)\)
In the following, we will consider the generalized quantum cluster algebra associated with the initial seed \((X,\mathbf{h},\Lambda,B)\), where \(\mathbf{d}=(2,2)\), \(\mathbf{h}_{1}=\mathbf{h}_{2}=(1,h,1)\) with \(h\in\mathbb{Z}[q^{\pm\frac{1}{2}}]\) and \(\overline{h}=h\),
\[\Lambda=\left(\begin{array}{cc}0&1\\ -1&0\end{array}\right)\text{ and }B=\left(\begin{array}{cc}0&2\\ -2&0\end{array}\right).\]
Note that \(\Lambda^{T}B=\left(\begin{array}{cc}2&0\\ 0&2\end{array}\right),\) and the based quantum torus is
\[\mathcal{T}=\mathbb{Z}[q^{\pm\frac{1}{2}}][X_{1}^{\pm 1},X_{2}^{\pm 1}|X_{1}X_{2}=qX_{ 2}X_{1}].\]
The quiver associated to the matrix \(B\) is the Kronecker quiver \(Q:\ 1\rightrightarrows 2\).
We call this algebra a generalized quantum cluster algebra of Kronecker type, denoted by \(\mathcal{A}_{q}(2,2)\). By the definition and the Laurent phenomenon, \(\mathcal{A}_{q}(2,2)\) is the \(\mathbb{Z}[q^{\pm\frac{1}{2}}]\)-subalgebra of \(\mathcal{T}\) generated by the cluster variables \(\{X_{k}\;|\;k\in\mathbb{Z}\}\) which are obtained from the following exchange relations:
\[X_{k-1}X_{k+1}=qX_{k}^{2}+q^{\frac{1}{2}}hX_{k}+1.\]
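Concretely, taking \(k=2\) and \(k=1\) in this relation expresses the neighboring cluster variables in terms of the initial cluster \(\{X_{1},X_{2}\}\):

\[X_{3}=X_{1}^{-1}\left(qX_{2}^{2}+q^{\frac{1}{2}}hX_{2}+1\right),\qquad X_{0}=\left(qX_{1}^{2}+q^{\frac{1}{2}}hX_{1}+1\right)X_{2}^{-1},\]

and every other cluster variable \(X_{k}\) is obtained by iterating the same relation in either direction.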
Recall that the \(n\)-th Chebyshev polynomials of the first kind \(F_{n}(x)\) is defined by
\[F_{0}(x)=1,F_{1}(x)=x,F_{2}(x)=x^{2}-2,F_{n+1}(x)=F_{n}(x)x-F_{n-1}(x)\text{ for }n\geq 2,\]
and \(F_{n}(x)=0\) for \(n<0\).
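For instance, iterating the recurrence from the initial data above gives

\[F_{3}(x)=F_{2}(x)x-F_{1}(x)=x^{3}-3x,\qquad F_{4}(x)=F_{3}(x)x-F_{2}(x)=x^{4}-4x^{2}+2,\]

so that, for example, \(F_{1}(x)^{2}=x^{2}=F_{2}(x)+2\) and \(F_{2}(x)^{2}=x^{4}-4x^{2}+4=F_{4}(x)+2\); these are the simplest instances of the multiplication rule (3.1) established below.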
Denote
\[X_{\delta}:=q^{\frac{1}{2}}X_{0}X_{3}-q^{\frac{1}{2}}(q^{\frac{1}{2}}X_{1}+h)(q^{ \frac{1}{2}}X_{2}+h),\]
thus \(X_{\delta}\in\mathcal{A}_{q}(2,2)\).
**Lemma 3.1**.: _For each \(n\in\mathbb{Z}_{>0}\), \(F_{n}(X_{\delta})\) is a bar-invariant element in \(\mathcal{A}_{q}(2,2)\)._
Proof.: A direct computation shows that
\[X_{\delta}=X(-1,-1)+hX(-1,0)+hX(0,-1)+X(-1,1)+X(1,-1),\]
thus \(X_{\delta}\) is a bar-invariant element in \(\mathcal{A}_{q}(2,2)\). Then the proof follows from the definition of the \(n\)-th Chebyshev polynomials \(F_{n}(x)\).
We define an automorphism denoted by \(\sigma\) on the generalized quantum cluster algebra \(\mathcal{A}_{q}(2,2)\) as follows
\[\sigma(X_{k})=X_{k+1}\ \ \text{and}\ \ \sigma(q^{\frac{k}{2}})=q^{\frac{k}{2}},\]
for any \(k\in\mathbb{Z}\). Then we have the following result which will be useful for us to prove the cluster multiplication formulas.
**Lemma 3.2**.: _For each \(n\in\mathbb{Z}_{>0}\), \(\sigma(F_{n}(X_{\delta}))=F_{n}(X_{\delta})\)._
Proof.: Note that
\[\sigma(X_{\delta}) =q^{\frac{1}{2}}X_{1}X_{4}-q^{\frac{1}{2}}(q^{\frac{1}{2}}X_{2}+h )(q^{\frac{1}{2}}X_{3}+h),\] \[X_{3} =X(-1,2)+hX(-1,1)+X(-1,0)\]
and
\[X_{4}= X(-2,3)+(q^{-\frac{1}{2}}+q^{\frac{1}{2}})hX(-2,2)+(q^{-1}+h^{2}+q)X(- 2,1)+(q^{-\frac{1}{2}}+q^{\frac{1}{2}})hX(-2,0)\] \[+X(-2,-1)+hX(-1,1)+h^{2}X(-1,0)+hX(-1,-1)+X(0,-1).\]
Thus
\[q^{\frac{1}{2}}X_{1}X_{4}= q^{2}X(-1,3)+(q+q^{2})hX(-1,2)+(1+qh^{2}+q^{2})X(-1,1)+(1+q)hX(-1,0)\] \[+X(-1,-1)+qhX(0,1)+q^{\frac{1}{2}}h^{2}+hX(0,-1)+X(1,-1)\]
and
\[q^{\frac{1}{2}}(q^{\frac{1}{2}}X_{2}+h)(q^{\frac{1}{2}}X_{3}+h)= q^{2}X(-1,3)+(q+q^{2})hX(-1,2)+qhX(0,1)\] \[+(qh^{2}+q^{2})X(-1,1)+qhX(-1,0)+q^{\frac{1}{2}}h^{2}.\]
We obtain that
\[\sigma(X_{\delta})=X(-1,1)+hX(-1,0)+X(-1,-1)+hX(0,-1)+X(1,-1)=X_{\delta}.\]
Then the proof follows from the induction on \(n\) and the definition of the \(n\)-th Chebyshev polynomials \(F_{n}(x)\).
For a real number \(x\), denote the floor function by \(\lfloor x\rfloor\) and the ceiling function by \(\lceil x\rceil\). The following Theorem 3.3 and Remark 3.4 give the explicit cluster multiplication formulas for \(\mathcal{A}_{q}(2,2)\).
**Theorem 3.3**.: _Let \(m\) and \(n\) be integers._
1. _For any_ \(m>n\geq 1\)_, we have_ \[F_{m}(X_{\delta})F_{n}(X_{\delta})=F_{m+n}(X_{\delta})+F_{m-n}(X_{\delta}),\ F_{n}(X_{\delta})F_{n}(X_{\delta})=F_{2n}(X_{ \delta})+2.\] (3.1)
2. _For any_ \(n\geq 1\)_, we have_ \[X_{m}F_{n}(X_{\delta})=q^{-\frac{n}{2}}X_{m-n}+q^{\frac{n}{2}}X_{m+n}+\sum_{k=1}^ {n}(\sum_{l=1}^{k}q^{-\frac{k+1}{2}+l})hF_{n-k}(X_{\delta}).\] (3.2)
3. _For any_ \(n\geq 2\)_, we have_ \[X_{m}X_{m+n}= q^{\lfloor\frac{n}{2}\rfloor}X_{\lfloor m+\frac{n}{2}\rfloor}X_{ \lceil m+\frac{n}{2}\rceil}+\sum_{k=1}^{n-1}(\sum_{l=1}^{\text{min}(k,n-k)}q^{ -\frac{1}{2}+l})hX_{m+n-k}\] \[+\sum_{l=1}^{n-1}q^{-\frac{n-1-l}{2}}c_{l}F_{n-1-l}(X_{\delta}),\] (3.3) _where_ \(c_{1}=1\)_,_ \(c_{2}=h^{2}\) _and for_ \(k\geq 2\)_,_ \[c_{2k}=[\sum_{i=1}^{k-1}a_{i}(q^{-(k-i)}+q^{k-i})+a_{k}]h^{2}\] _and_ \[c_{2k-1}=2[\sum_{i=1}^{k-1}b_{i}(q^{-(k-i)}+q^{k-i})+b_{k}]h^{2}+\begin{cases} \sum_{i=1}^{\frac{k}{2}}(q^{-(k+1-2i)}+q^{k+1-2i}),&\text{if \ $k$ is even};\\ \sum_{i=1}^{\frac{k-1}{2}}(q^{-(k+1-2i)}+q^{k+1-2i})+1,&\text{if \ $k$ is odd},\end{cases}\] \[\text{with }a_{i}=\frac{i(i+1)}{2}\text{ and }\] \[b_{i}=\begin{cases}\frac{i^{2}-1}{4},&\text{if \ $i$ is odd};\\ \frac{i^{2}}{4},&\text{if \ $i$ is even}.\end{cases}\]
Proof.: (1) The proof follows immediately from the definition of the \(n\)-th Chebyshev polynomials \(F_{n}(x)\).
(2) By using the automorphism \(\sigma\) repeatedly, it suffices to prove the following equation
\[X_{1}F_{n}(X_{\delta})=q^{-\frac{n}{2}}X_{1-n}+q^{\frac{n}{2}}X_{1+n}+\sum_{k= 1}^{n}(\sum_{l=1}^{k}q^{-\frac{k+1}{2}+l})hF_{n-k}(X_{\delta}),\]
for \(n\geq 1\).
When \(n=1\),
\[X_{1}X_{\delta}= X(1,0)(X(-1,-1)+hX(-1,0)+hX(0,-1)+X(-1,1)+X(1,-1))\] \[= q^{-\frac{1}{2}}X(0,-1)+h+q^{-\frac{1}{2}}hX(1,-1)+q^{\frac{1}{ 2}}X(0,1)+q^{-\frac{1}{2}}X(2,-1).\]
Note that \(X_{0}=X(2,-1)+hX(1,-1)+X(0,-1)\). Thus \(X_{1}X_{\delta}=q^{-\frac{1}{2}}X_{0}+q^{\frac{1}{2}}X_{2}+h\). It follows that
\[X_{m}X_{\delta}=q^{-\frac{1}{2}}X_{m-1}+q^{\frac{1}{2}}X_{m+1}+h\]
for all \(m\in\mathbb{Z}\).
When \(n=2\),
\[X_{1}F_{2}(X_{\delta})= X_{1}(X_{\delta}^{2}-2)=q^{-\frac{1}{2}}X_{0}X_{\delta}+q^{ \frac{1}{2}}X_{2}X_{\delta}+hX_{\delta}-2X_{1}\] \[= q^{-1}X_{-1}+qX_{3}+(q^{-\frac{1}{2}}+q^{\frac{1}{2}})h+hX_{\delta}.\]
When \(n\geq 3\), assume that \(X_{1}F_{t}(X_{\delta})=q^{-\frac{t}{2}}X_{1-t}+q^{\frac{t}{2}}X_{1+t}+\sum \limits_{k=1}^{t}(\sum\limits_{l=1}^{k}q^{-\frac{k+1}{2}+l})hF_{t-k}(X_{\delta})\) for \(t\leq n-1\).
If \(t=n\), then
\[X_{1}F_{n}(X_{\delta})=X_{1}(F_{n-1}(X_{\delta})X_{\delta}-F_{n-2}(X_{\delta}) )=X_{1}F_{n-1}(X_{\delta})X_{\delta}-X_{1}F_{n-2}(X_{\delta}),\]
\[X_{1}F_{n-1}(X_{\delta})X_{\delta}\] \[= q^{-\frac{n-1}{2}}X_{2-n}X_{\delta}+q^{\frac{n-1}{2}}X_{n}X_{ \delta}+\sum\limits_{k=1}^{n-1}(\sum\limits_{l=1}^{k}q^{-\frac{k+1}{2}+l})hF_{ n-1-k}(X_{\delta})X_{\delta}\] \[= q^{-\frac{n}{2}}X_{1-n}+q^{1-\frac{n}{2}}X_{3-n}+q^{\frac{n}{2}- 1}X_{n-1}+q^{\frac{n}{2}}X_{n+1}+(q^{-\frac{n-1}{2}}+q^{\frac{n-1}{2}})h\] \[+\sum\limits_{k=1}^{n-1}(\sum\limits_{l=1}^{k}q^{-\frac{k+1}{2}+l })hF_{n-1-k}(X_{\delta})X_{\delta},\]
and \(X_{1}F_{n-2}(X_{\delta})=q^{1-\frac{n}{2}}X_{3-n}+q^{\frac{n}{2}-1}X_{n-1}+ \sum\limits_{k=1}^{n-2}(\sum\limits_{l=1}^{k}q^{-\frac{k+1}{2}+l})hF_{n-2-k}( X_{\delta})\). Note that
\[\sum\limits_{k=1}^{n-1}(\sum\limits_{l=1}^{k}q^{-\frac{k+1}{2}+l })hF_{n-1-k}(X_{\delta})X_{\delta}-\sum\limits_{k=1}^{n-2}(\sum\limits_{l=1}^{ k}q^{-\frac{k+1}{2}+l})hF_{n-2-k}(X_{\delta})\] \[= \sum\limits_{k=1}^{n-3}(\sum\limits_{l=1}^{k}q^{-\frac{k+1}{2}+l })h(F_{n-1-k}(X_{\delta})X_{\delta}-F_{n-2-k}(X_{\delta}))+(\sum\limits_{l=1}^ {n-2}q^{-\frac{n-1}{2}+l})h(X_{\delta}^{2}-2)\] \[+(\sum\limits_{l=1}^{n-2}q^{-\frac{n-1}{2}+l})h+(\sum\limits_{l=1} ^{n-1}q^{-\frac{n}{2}+l})hX_{\delta}\] \[= \sum\limits_{k=1}^{n-1}(\sum\limits_{l=1}^{k}q^{-\frac{k+1}{2}+l })hF_{n-k}(X_{\delta})+(\sum\limits_{l=1}^{n-2}q^{-\frac{n-1}{2}+l})h\]
and \(\sum\limits_{l=1}^{n-2}q^{-\frac{n-1}{2}+l}h+(q^{-\frac{n-1}{2}}+q^{\frac{n-1} {2}})h=\sum\limits_{l=1}^{n}q^{-\frac{n+1}{2}+l}h\).
It follows that \(X_{1}F_{n}(X_{\delta})=q^{-\frac{n}{2}}X_{1-n}+q^{\frac{n}{2}}X_{n+1}+\sum \limits_{k=1}^{n}(\sum\limits_{l=1}^{k}q^{-\frac{k+1}{2}+l})hF_{n-k}(X_{\delta})\).
(3) In order to prove (3.3), it suffices to show that
\[X_{1}X_{1+n}=q^{\lfloor\frac{n}{2}\rfloor}X_{\lfloor 1+\frac{n}{2}\rfloor}X_{ \lceil 1+\frac{n}{2}\rceil}+\sum\limits_{k=1}^{n-1}(\sum\limits_{l=1}^{\min(k,n-k )}q^{-\frac{1}{2}+l})hX_{1+n-k}+\sum\limits_{l=1}^{n-1}q^{-\frac{n-1-l}{2}}c_ {l}F_{n-1-l}(X_{\delta})\]
for \(n\geq 1\).
When \(n=2\), it is the exchange relation. When \(n=3\), by (3.2), we have that
\[X_{1}X_{4}\] \[= X_{1}(q^{-\frac{1}{2}}X_{3}X_{\delta}-q^{-1}X_{2}-q^{-\frac{1}{2}}h)\] \[= q^{-\frac{1}{2}}(qX_{2}^{2}+q^{\frac{1}{2}}hX_{2}+1)X_{\delta}-q ^{-1}X_{1}X_{2}-q^{-\frac{1}{2}}hX_{1}\] \[= q^{\frac{1}{2}}X_{2}(q^{-\frac{1}{2}}X_{1}+q^{\frac{1}{2}}X_{3}+ h)+h(q^{-\frac{1}{2}}X_{1}+q^{\frac{1}{2}}X_{3}+h)+q^{-\frac{1}{2}}X_{\delta}-q ^{-1}X_{1}X_{2}-q^{-\frac{1}{2}}hX_{1}\] \[= qX_{2}X_{3}+q^{\frac{1}{2}}hX_{2}+q^{\frac{1}{2}}hX_{3}+q^{- \frac{1}{2}}X_{\delta}+h^{2}.\]
Assume that
\[X_{1}X_{1+t}=q^{\lfloor\frac{t}{2}\rfloor}X_{\lfloor 1+\frac{t}{2}\rfloor}X_{ \lceil 1+\frac{t}{2}\rceil}+\sum_{k=1}^{t-1}(\sum_{l=1}^{\min(k,t-k)}q^{- \frac{1}{2}+l})hX_{1+t-k}+\sum_{l=1}^{t-1}q^{-\frac{t-1-l}{2}}c_{l}F_{t-1-l}( X_{\delta})\]
for all \(t\leq n-1\).
Note that \(X_{1}X_{n+1}=q^{-\frac{1}{2}}X_{1}X_{n}X_{\delta}-q^{-1}X_{1}X_{n-1}-q^{- \frac{1}{2}}hX_{1}\).
When \(n\) is even and \(n\geq 4\), then
\[X_{1}X_{n}=q^{\frac{n}{2}-1}X_{\frac{n}{2}}X_{\frac{n}{2}+1}+\sum_{k=1}^{n-2} (\sum_{l=1}^{\min(k,n-1-k)}q^{-\frac{1}{2}+l})hX_{n-k}+\sum_{l=1}^{n-2}q^{- \frac{n-2-l}{2}}c_{l}F_{n-2-l}(X_{\delta}),\]
\[q^{-1}X_{1}X_{n-1}=q^{\frac{n}{2}-2}X_{\frac{n}{2}}^{2}+\sum_{k=1}^{n-3}(\sum_ {l=1}^{\min(k,n-2-k)}q^{-\frac{3}{2}+l})hX_{n-1-k}+\sum_{l=1}^{n-3}q^{-\frac{n -1-l}{2}}c_{l}F_{n-3-l}(X_{\delta})\]
and
\[q^{-\frac{1}{2}}X_{1}X_{n}X_{\delta}\] \[= q^{\frac{n-3}{2}}X_{\frac{n}{2}}X_{\frac{n}{2}+1}X_{\delta}+ \sum_{k=1}^{n-2}(\sum_{l=1}^{\min(k,n-1-k)}q^{-1+l})hX_{n-k}X_{\delta}+\sum_{l =1}^{n-2}q^{-\frac{n-1-l}{2}}c_{l}F_{n-2-l}(X_{\delta})X_{\delta}\] \[= q^{\frac{n}{2}-2}X_{\frac{n}{2}}^{2}+q^{\frac{n}{2}}X_{\frac{n} {2}+1}^{2}+q^{\frac{n-1}{2}}hX_{\frac{n}{2}+1}+q^{\frac{n}{2}-1}+q^{\frac{n-3 }{2}}hX_{\frac{n}{2}}+\sum_{k=1}^{n-2}(\sum_{l=1}^{\min(k,n-1-k)}q^{-\frac{3}{ 2}+l})hX_{n-1-k}\] \[+\sum_{k=1}^{n-2}(\sum_{l=1}^{\min(k,n-1-k)}q^{-\frac{1}{2}+l})hX_ {n+1-k}+\sum_{k=1}^{n-2}(\sum_{l=1}^{\min(k,n-1-k)}q^{-1+l})h^{2}\] \[+\sum_{l=1}^{n-2}q^{-\frac{n-1-l}{2}}c_{l}F_{n-2-l}(X_{\delta})X_ {\delta}.\]
Note that
\[\begin{cases}k\leq n-2-k,\text{if }1\leq k\leq\frac{n}{2}-1,\\ k>n-2-k,\text{if }\frac{n}{2}\leq k\leq n-3,\\ k<n-1-k,\text{if }1\leq k\leq\frac{n}{2}-1,\\ k>n-1-k,\text{if }\frac{n}{2}\leq k\leq n-3,\\ k<n-k,\qquad\text{if }1\leq k\leq\frac{n}{2}-1,\\ k\geq n-k,\qquad\text{if }\frac{n}{2}\leq k\leq n-1.\end{cases}\]
It follows that
\[\sum_{k=1}^{n-2}(\sum_{l=1}^{\min(k,n-1-k)}q^{-\frac{3}{2}+l})hX_{ n-1-k}-\sum_{k=1}^{n-3}(\sum_{l=1}^{\min(k,n-2-k)}q^{-\frac{3}{2}+l})hX_{n-1-k}-q^ {-\frac{1}{2}}hX_{1}\] \[= \sum_{k=\frac{n}{2}}^{n-3}q^{-\frac{5}{2}+n-k}hX_{n-1-k}=\sum_{k =\frac{n}{2}+2}^{n-1}q^{-\frac{1}{2}+n-k}hX_{n+1-k}.\]
Hence
\[\sum_{k=1}^{n-1}(\sum_{l=1}^{\min(k,n-k)}q^{-\frac{1}{2}+l})hX_{n +1-k}\] \[= \sum_{k=1}^{n-2}(\sum_{l=1}^{\min(k,n-1-k)}q^{-\frac{3}{2}+l})hX_ {n-1-k}+\sum_{k=1}^{n-2}(\sum_{l=1}^{\min(k,n-1-k)}q^{-\frac{1}{2}+l})hX_{n+1 -k}+q^{\frac{n-1}{2}}hX_{\frac{n}{2}+1}\] \[+q^{\frac{n-3}{2}}hX_{\frac{n}{2}}-\sum_{k=1}^{n-3}(\sum_{l=1}^{ \min(k,n-2-k)}q^{-\frac{3}{2}+l})hX_{n-1-k}-q^{-\frac{1}{2}}hX_{1}.\]
Because
\[q^{\frac{n}{2}-1}+\sum_{k=1}^{n-2}(\sum_{l=1}^{\min(k,n-1-k)}q^{ -1+l})h^{2}+\sum_{l=1}^{n-2}q^{-\frac{n-1-l}{2}}c_{l}F_{n-2-l}(X_{\delta})X_{\delta}\] \[-\sum_{l=1}^{n-3}q^{-\frac{n-1-l}{2}}c_{l}F_{n-3-l}(X_{\delta})\] \[= q^{\frac{n}{2}-1}+\sum_{k=1}^{n-2}(\sum_{l=1}^{\min(k,n-1-k)}q^{ -1+l})h^{2}+q^{-1}c_{n-3}+\sum_{l=1}^{n-2}q^{-\frac{n-1-l}{2}}c_{l}F_{n-1-l}(X _{\delta}),\]
it suffices to prove that \(q^{\frac{n}{2}-1}+\sum\limits_{k=1}^{n-2}(\sum\limits_{l=1}^{\min(k,n-1-k)}q^{-1+l})h^{2}+q^{-1}c_{n-3}=c_{n-1}\).
We have that
\[c_{n-1}=[\sum_{i=1}^{\frac{n}{2}-1}b_{i}(q^{-(\frac{n}{2}-i)}+q^{\frac{n}{2}-i})+ b_{\frac{n}{2}}]2h^{2}+(q^{-(\frac{n}{2}-1)}+q^{-(\frac{n}{2}-3)}+\ldots+q^{\frac{n}{2}-3 }+q^{\frac{n}{2}-1}),\]
\[q^{-1}c_{n-3}= \Big{[}\sum_{i=1}^{\frac{n}{2}-2}b_{i}(q^{-(\frac{n}{2}-i)}+q^{ \frac{n}{2}-2-i})+b_{\frac{n}{2}-1}q^{-1}\Big{]}2h^{2}\] \[+(q^{-(\frac{n}{2}-1)}+q^{-(\frac{n}{2}-3)}+\ldots+q^{\frac{n}{2} -5}+q^{\frac{n}{2}-3})\]
and \(b_{k}-b_{k-2}=k-1\). Thus
\[c_{n-1}-q^{-1}c_{n-3}-q^{\frac{n}{2}-1}=\Big{[}\sum_{k=1}^{\frac{n}{2}}(k-1)q^ {\frac{n}{2}-k}\Big{]}2h^{2}=\sum_{k=1}^{n-2}(\sum_{l=1}^{\min(k,n-1-k)}q^{-1+ l})h^{2}.\]
Therefore
\[q^{\frac{n}{2}-1}+\sum_{k=1}^{n-2}(\sum_{l=1}^{\min(k,n-1-k)}q^{ -1+l})h^{2}+q^{-1}c_{n-3}+\sum_{l=1}^{n-2}q^{-\frac{n-1-l}{2}}c_{l}F_{n-1-l}(X_ {\delta})\] \[= \sum_{l=1}^{n-1}q^{-\frac{n-1-l}{2}}c_{l}F_{n-1-l}(X_{\delta})\]
and \(X_{1}X_{1+n}=q^{\frac{n}{2}}X_{\frac{n}{2}+1}^{2}+\sum_{k=1}^{n-1}(\sum_{l=1} ^{\min(k,n-k)}q^{-\frac{1}{2}+l})hX_{n+1-k}+\sum_{l=1}^{n-1}q^{-\frac{n-1-l}{2 }}c_{l}F_{n-1-l}(X_{\delta}).\)
When \(n\) is odd and \(n\geq 5\), we have
\[X_{1}X_{n}=q^{\frac{n-1}{2}}X_{\frac{n+1}{2}}^{2}+\sum_{k=1}^{n-2}(\sum_{l=1} ^{\min(k,n-1-k)}q^{-\frac{1}{2}+l})hX_{n-k}+\sum_{l=1}^{n-2}q^{-\frac{n-2-l}{2 }}c_{l}F_{n-2-l}(X_{\delta})\]
and
\[q^{-1}X_{1}X_{n-1}\] \[= q^{\frac{n-5}{2}}X_{\frac{n-1}{2}}X_{\frac{n+1}{2}}+\sum_{k=1}^{ n-3}(\sum_{l=1}^{\min(k,n-2-k)}q^{-\frac{3}{2}+l})hX_{n-1-k}+\sum_{l=1}^{n-3}q^{- \frac{n-1-l}{2}}c_{l}F_{n-3-l}(X_{\delta}).\]
Then
\[q^{-\frac{1}{2}}X_{1}X_{n}X_{\delta}\] \[= q^{\frac{n}{2}-1}X_{\frac{n+1}{2}}(q^{-\frac{1}{2}}X_{\frac{n-1 }{2}}+q^{\frac{1}{2}}X_{\frac{n+3}{2}}+h)+\sum_{l=1}^{n-2}q^{-\frac{n-1-l}{2 }}c_{l}F_{n-2-l}(X_{\delta})X_{\delta}\] \[+\sum_{k=1}^{n-2}(\sum_{l=1}^{\min(k,n-1-k)}q^{-1+l})h(q^{-\frac{1 }{2}}X_{n-1-k}+q^{\frac{1}{2}}X_{n+1-k}+h)\]
\[= q^{\frac{n-3}{2}}X_{\frac{n+1}{2}}X_{\frac{n-1}{2}}+q^{\frac{n-1}{2}}X _{\frac{n+1}{2}}X_{\frac{n+3}{2}}+q^{\frac{n}{2}-1}hX_{\frac{n+1}{2}}+\sum_{k=1} ^{n-2}(\sum_{l=1}^{\min(k,n-1-k)}q^{-\frac{3}{2}+l})hX_{n-1-k}\] \[+\sum_{k=1}^{n-2}(\sum_{l=1}^{\min(k,n-1-k)}q^{-\frac{1}{2}+l})hX _{n+1-k}+\sum_{k=1}^{n-2}(\sum_{l=1}^{\min(k,n-1-k)}q^{-1+l})h^{2}\] \[+\sum_{l=1}^{n-2}q^{-\frac{n-1-l}{2}}c_{l}F_{n-2-l}(X_{\delta})X_ {\delta}.\]
Note that
\[\begin{cases}k<n-2-k,\,\text{if}\,\,1\leq k\leq\frac{n-3}{2},\\ k>n-2-k,\,\text{if}\,\,\frac{n-1}{2}\leq k\leq n-3,\\ k\leq n-1-k,\,\text{if}\,\,1\leq k\leq\frac{n-1}{2},\\ k>n-1-k,\,\text{if}\,\,\frac{n+1}{2}\leq k\leq n-3,\\ k<n-k,\qquad\text{if}\,\,1\leq k\leq\frac{n-1}{2},\\ k>n-k,\qquad\text{if}\,\,\frac{n+1}{2}\leq k\leq n.\end{cases}\]
Hence
\[\sum_{k=1}^{n-2}(\sum_{l=1}^{\min(k,n-1-k)}q^{-\frac{3}{2}+l})hX_{ n-1-k}-\sum_{k=1}^{n-3}(\sum_{l=1}^{\min(k,n-2-k)}q^{-\frac{3}{2}+l})hX_{n-1-k}\] \[= q^{-\frac{1}{2}}hX_{1}+\sum_{k=\frac{n+1}{2}}^{n-3}q^{n-\frac{5} {2}-k}hX_{n-1-k}+q^{\frac{n}{2}-2}hX_{\frac{n-1}{2}}\] \[= q^{-\frac{1}{2}}hX_{1}+\sum_{k=\frac{n-1}{2}}^{n-3}q^{n-\frac{5} {2}-k}hX_{n-1-k}=\sum_{k=\frac{n+3}{2}}^{n}q^{n-\frac{1}{2}-k}hX_{n+1-k}.\]
Note that
\[q^{\frac{n}{2}-1}hX_{\frac{n+1}{2}}+\sum_{l=1}^{\frac{n-3}{2}}q^{-\frac{1}{2} +l}hX_{\frac{n+1}{2}}=\sum_{l=1}^{\frac{n-1}{2}}q^{-\frac{1}{2}+l}hX_{\frac{n+ 1}{2}}\]
and
\[\sum_{k=\frac{n-1}{2}}^{n-3}q^{n-\frac{5}{2}-k}hX_{n-1-k}=\sum_{k=\frac{n-3}{2} }^{n-1}q^{n-\frac{1}{2}-k}hX_{n-1-k},\]
then we obtain that
\[q^{\frac{n}{2}-1}hX_{\frac{n+1}{2}}+\sum_{k=1}^{n-2}(\sum_{l=1}^{ \min(k,n-1-k)}q^{-\frac{3}{2}+l})hX_{n-1-k}+\sum_{k=1}^{n-2}(\sum_{l=1}^{\min(k,n -1-k)}q^{-\frac{1}{2}+l})hX_{n+1-k}\] \[-\sum_{k=1}^{n-3}(\sum_{l=1}^{\min(k,n-2-k)}q^{-\frac{3}{2}+l})hX_ {n-1-k}-q^{-\frac{1}{2}}hX_{1}\] \[= \sum_{k=1}^{n-1}(\sum_{l=1}^{\min(k,n-k)}q^{-\frac{1}{2}+l})hX_{n+ 1-k}.\]
Since
\[\sum_{l=1}^{n-2}q^{-\frac{n-1-l}{2}}c_{l}F_{n-2-l}(X_{\delta})X_{ \delta}-\sum_{l=1}^{n-3}q^{-\frac{n-1-l}{2}}c_{l}F_{n-3-l}(X_{\delta})\] \[= \sum_{l=1}^{n-2}q^{-\frac{n-1-l}{2}}c_{l}F_{n-1-l}(X_{\delta})+q^ {-1}c_{n-3},\]
we only need to show that \(q^{-1}c_{n-3}+\sum\limits_{k=1}^{n-2}(\sum\limits_{l=1}^{\min(k,n-1-k)}q^{-1+l })h^{2}=c_{n-1}\).
Note that \(a_{k}-a_{k-2}=2k-1\) for \(k\geq 3\), then
\[c_{n-1}-q^{-1}c_{n-3}\] \[= \Big{[}(n-2)+(n-4)q+(n-6)q^{2}+\ldots+5q^{\frac{n-7}{2}}+3q^{ \frac{n-5}{2}}+q^{\frac{n-3}{2}}\Big{]}h^{2}\] \[= \sum_{k=1}^{n-2}(\sum_{l=1}^{\min(k,n-1-k)}q^{-1+l})h^{2}.\]
Therefore
\[X_{1}X_{1+n}=q^{\frac{n-1}{2}}X_{\frac{n+1}{2}}X_{\frac{n+3}{2}}+\sum_{k=1}^{ n-1}(\sum_{l=1}^{\min(k,n-k)}q^{-\frac{1}{2}+l})hX_{n+1-k}+\sum_{l=1}^{n-1}q^{- \frac{n-1-l}{2}}c_{l}F_{n-1-l}(X_{\delta}).\]
The proof is completed.
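As a quick consistency check of (3.3), specializing to \(n=2\) leaves only the \(k=1\) term in the first sum and the \(l=1\) term in the second sum, so that

\[X_{m}X_{m+2}=qX_{m+1}^{2}+q^{\frac{1}{2}}hX_{m+1}+c_{1}F_{0}(X_{\delta})=qX_{m+1}^{2}+q^{\frac{1}{2}}hX_{m+1}+1,\]

which is exactly the defining exchange relation.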
**Remark 3.4**.: _According to [1, Proposition 4.6] and Lemma 3.1, all cluster variables and \(F_{n}(X_{\delta})\)\((n\in\mathbb{Z}_{>0})\) are bar-invariant. Therefore, the cluster multiplication formulas for \(F_{n}(X_{\delta})F_{m}(X_{\delta})\), \(F_{n}(X_{\delta})X_{m}\) and \(X_{m+n}X_{m}\) can be obtained by applying the bar-involution to all formulas in Theorem 3.3._
## 4. A positive bar-invariant basis of \(\mathcal{A}_{q}(2,2)\)
In this section, we will explicitly construct a positive bar-invariant basis of \(\mathcal{A}_{q}(2,2)\).
**Definition 4.1**.: _A basis of \(\mathcal{A}_{q}(2,2)\) is called a positive \(\mathbb{Z}[q^{\pm\frac{1}{2}},h]\)-basis if its structure constants belong to \(\mathbb{Z}_{\geq 0}[q^{\pm\frac{1}{2}},h]\)._
Denote
\[\mathcal{B}=\{q^{-\frac{a_{1}a_{2}}{2}}X_{m}^{a_{1}}X_{m+1}^{a_{2}}|m\in \mathbb{Z},(a_{1},a_{2})\in\mathbb{Z}_{\geq 0}^{2}\}\sqcup\{F_{n}(X_{\delta})|n \in\mathbb{Z}_{>0}\}.\]
**Lemma 4.2**.: _All elements in \(\mathcal{B}\) are bar-invariant._
Proof.: According to [1, Lemma 4.3, Proposition 4.6], the following equations hold for any \(m\in\mathbb{Z}\):
\[X_{m}X_{m+1}=qX_{m+1}X_{m},\ \overline{X_{m}}=X_{m}.\]
Thus, for any \(m\in\mathbb{Z}\) and \((a_{1},a_{2})\in\mathbb{Z}_{\geq 0}^{2}\), we have
\[\overline{q^{-\frac{a_{1}a_{2}}{2}}X_{m}^{a_{1}}X_{m+1}^{a_{2}}}=q^{\frac{a_{1 }a_{2}}{2}}X_{m+1}^{a_{2}}X_{m}^{a_{1}}=q^{-\frac{a_{1}a_{2}}{2}}X_{m}^{a_{1}}X _{m+1}^{a_{2}}\]
which assert that all elements in the set \(\{q^{-\frac{a_{1}a_{2}}{2}}X_{m}^{a_{1}}X_{m+1}^{a_{2}}|m\in\mathbb{Z},(a_{1}, a_{2})\in\mathbb{Z}_{\geq 0}^{2}\}\) are bar-invariant. Together with Lemma 3.1, we know that any element in \(\mathcal{B}\) is bar-invariant.
In order to prove that the elements in \(\mathcal{B}\) are \(\mathbb{Z}[q^{\pm\frac{1}{2}},h]\)-independent, we need the following definition which gives a partial order \(\leq\) on \(\mathbb{Z}^{2}\).
**Definition 4.3**.: _Let \((r_{1},r_{2})\) and \((s_{1},s_{2})\in\mathbb{Z}^{2}\). If \(r_{i}\leq s_{i}\) for each \(1\leq i\leq 2\), we write \((r_{1},r_{2})\leq(s_{1},s_{2})\). Furthermore, if \(r_{i}<s_{i}\) for some \(i\), we write \((r_{1},r_{2})<(s_{1},s_{2})\)._
**Theorem 4.4**.: _The set \(\mathcal{B}\) is a positive bar-invariant \(\mathbb{Z}[q^{\pm\frac{1}{2}},h]\)-basis of \(\mathcal{A}_{q}(2,2)\)._
Proof.: According to Theorem 3.3 and Remark 3.4, we can deduce that the generalized quantum cluster algebra \(\mathcal{A}_{q}(2,2)\) is \(\mathbb{Z}[q^{\pm\frac{1}{2}},h]\)-spanned by the elements in \(\mathcal{B}\).
Note that \(X_{\delta}\) has the minimal non-zero term \(X^{(-1,-1)}\) associated to the partial order in Definition 4.3, and thus by Theorem 3.3, we deduce that the element \(F_{n}(X_{\delta})\) has the minimal non-zero term \(X^{(-n,-n)}\) for each \(n\in\mathbb{Z}_{>0}\). According to Theorem 3.3, we have \(X_{n}X_{\delta}=q^{\frac{1}{2}}X_{n+1}+q^{-\frac{1}{2}}X_{n-1}+h\). Thus, for each \(n\geq 2\), we obtain that the cluster variable \(X_{n}\) has the minimal non-zero term \(a_{n}X^{-(n-2,n-3)}\) where \(a_{n}\in\mathbb{Z}[q^{\pm\frac{1}{2}}]\), and for each \(n\geq-1\), the cluster variable \(X_{-n}\) has the minimal non-zero term \(b_{n}X^{-(n,n+1)}\) where \(b_{n}\in\mathbb{Z}[q^{\pm\frac{1}{2}}]\). Hence, there exists a bijection between the set of all minimal non-zero terms in cluster variables and almost positive roots associated to the affine Lie algebra \(\hat{\mathfrak{sl}}_{2}\). Using the same discussion as [22, Proposition 3.1], we have that there exists a bijection between the set of all minimal non-zero terms in the elements in \(\mathcal{B}\) and \(\mathbb{Z}^{2}\), which implies that the elements in \(\mathcal{B}\) are \(\mathbb{Z}[q^{\pm\frac{1}{2}}]\)-independent.
By using Theorem 3.3 and Remark 3.4 repeatedly, we can deduce that the structure constants of the basis elements in \(\mathcal{B}\) belong to \(\mathbb{Z}_{\geq 0}[q^{\pm\frac{1}{2}},h]\). Together with Lemma 4.2, the proof is completed.
**Remark 4.5**.: _If we set \(h=0\) and \(q=1\), then the set \(\mathcal{B}\) is exactly the canonical basis of the cluster algebra of Kronecker quiver obtained in [22]._
**Definition 4.6**.: _An element in \(\mathcal{A}_{q}(2,2)\) is called positive if the coefficients of its Laurent expansion associated to any cluster belong to \(\mathbb{Z}_{\geq 0}[q^{\pm\frac{1}{2}},h]\)._
**Remark 4.7**.: _According to Theorem 3.3 and Remark 3.4 or using the same arguments as [19, Corollary 8.3.3], it is not difficult to see that every element in \(\mathcal{B}\) is positive. In particular, we obtain that all cluster variables of \(\mathcal{A}_{q}(2,2)\) are positive, which is a special case in [21]._
## Acknowledgments
Liqian Bai was supported by NSF of China (No. 11801445) and the Natural Science Foundation of Shaanxi Province (No. 2020JQ-116), Ming Ding was supported by NSF of China (No. 11771217) and Guangdong Basic and Applied Basic Research Foundation (2023A1515011739) and Fan Xu was supported by NSF of China (No. 12031007).
|
2306.07758 | Generated Graph Detection | Graph generative models become increasingly effective for data distribution
approximation and data augmentation. While they have aroused public concerns
about their malicious misuses or misinformation broadcasts, just as what
Deepfake visual and auditory media has been delivering to society. Hence it is
essential to regulate the prevalence of generated graphs. To tackle this
problem, we pioneer the formulation of the generated graph detection problem to
distinguish generated graphs from real ones. We propose the first framework to
systematically investigate a set of sophisticated models and their performance
in four classification scenarios. Each scenario switches between seen and
unseen datasets/generators during testing to get closer to real-world settings
and progressively challenge the classifiers. Extensive experiments evidence
that all the models are qualified for generated graph detection, with specific
models having advantages in specific scenarios. Resulting from the validated
generality and oblivion of the classifiers to unseen datasets/generators, we
draw a safe conclusion that our solution can sustain for a decent while to curb
generated graph misuses. | Yihan Ma, Zhikun Zhang, Ning Yu, Xinlei He, Michael Backes, Yun Shen, Yang Zhang | 2023-06-13T13:18:04Z | http://arxiv.org/abs/2306.07758v1 | # Generated Graph Detection
###### Abstract
Graph generative models become increasingly effective for data distribution approximation and data augmentation. While they have aroused public concerns about their malicious misuses or misinformation broadcasts, just as what _Deepfake_ visual and auditory media has been delivering to society. Hence it is essential to regulate the prevalence of generated graphs. To tackle this problem, we pioneer the formulation of the generated graph detection problem to distinguish generated graphs from real ones. We propose the first framework to systematically investigate a set of sophisticated models and their performance in four classification scenarios. Each scenario switches between seen and unseen datasets/generators during testing to get closer to real-world settings and progressively challenge the classifiers. Extensive experiments evidence that all the models are qualified for generated graph detection, with specific models having advantages in specific scenarios. Resulting from the validated generality and oblivion of the classifiers to unseen datasets/generators, we draw a safe conclusion that our solution can sustain for a decent while to curb generated graph misuses.1
Footnote 1: Our code is available at [https://github.com/Yvonnemannan/GGD](https://github.com/Yvonnemannan/GGD)
## 1 Introduction
Graph generative models aim to learn the distributions of real graphs and generate synthetic ones [30, 54, 57]. Generated graphs have found applications in numerous domains, such as social networks [34], e-commerce [27], chemoinformatics [22], etc. In particular, with the development of deep learning, graph generative models have witnessed significant advancement in the past 5 years [25, 28, 44, 62].
However, a coin has two sides. There is a concern that synthetic graphs can be misused. For example, molecular graphs are used to design new drugs [41, 62]. The generated graphs can be misused in this process (e.g., leading to unresolved protein-ligand binding [49, 66, 4]), hence it is essential for the pharmaceutical factory to vet the authenticity of the molecular graphs. Also, synthetic graphs make deep graph learning models more vulnerable to well-designed attacks. Existing graph-level backdoor attacks [56] and membership inference attacks [52] require the attackers to train their local models using the same or similar distribution data as those for the target models. Adversarial graph generation enables attackers to generate graphs that are close to the real graphs. It facilitates the attackers to build better attack models locally hence keeping those attacks more stealthy (since the attackers can minimize the interaction with the target models). This advantage also applies to the latest graph attacks such as the property inference attack [67] and GNN model stealing attack [39].
As a result, it is essential to regulate the prevalence of generated graphs. In this paper, we propose to target the generated graph detection problem, i.e., _to study whether generated graphs can be differentiated from real graphs with machine learning classifiers_.
To detect generated graphs, we train graph neural network (GNN)-based classifiers and show their effectiveness in encoding and classifying graphs [16, 26, 68]. Figure 2 (in Appendix A) illustrates the general pipeline of the generated graph detection. To evaluate their accuracy and generalizability, we test graphs from varying datasets and/or varying generators that are progressively extended toward the unseen during training. The _seen_ setting for a dataset or generator means that the graphs used in the training and testing stages come from the same dataset or are generated by the same generator, respectively; that is, they share the same or a similar distribution. The _unseen_ setting represents the opposite: compared with the training stage, the graphs used in the testing stage may come from different datasets or be generated by different generators.
To sophisticate our solution space, we study three representative classification models. The first model is a direct application of GNN-based end-to-end classifiers [8, 16, 60, 26]. The second model shares the spirit of contrastive learning for images [10, 55, 20] and graphs [17, 64, 69, 70], which, as one of the cutting-edge self-supervised representation learning models, learns similar representations for the same data under different augmentations. The third model is based on deep metric learning [38, 43, 58], which learns close/distant representations for the data from the same/different classes. _Note that in this paper, our goal is not proposing a new algorithm, but solving a novel real-world problem using existing methods to identify practical solutions_. Thus the 3 proposed models are all based on previous work.
We systematically conduct experiments under different settings for all the classification models to demonstrate the effectiveness of our framework. Moreover, we conduct the
dataset-oblivious study which mixes various datasets in order to evaluate the influence along the dataset dimension. The evidenced dataset-oblivious property makes them independent of a specific dataset and practical in real-world situations.
## 2 Preliminaries
**Notations.** We define an undirected and unweighted homogeneous graph as \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathbf{A})\), where \(\mathcal{V}=\{v_{1},v_{2},...,v_{n}\}\) represents the set of nodes, \(\mathcal{E}\subseteq\{(v,u)\mid v,u\in\mathcal{V}\}\) is the set of edges and \(\mathbf{A}\in\{0,1\}^{n\times n}\) denotes \(\mathcal{G}\)'s adjacency matrix. We denote the embedding of a node \(u\in\mathcal{V}\) as \(\mathbf{h}_{u}\) and the embedding of the whole graph \(\mathcal{G}\) as \(\mathbf{h}_{\mathcal{G}}\).
**Graph Neural Networks.** Graph neural networks (GNNs) have shown great effectiveness in fusing information from both the graph topology and node features [16, 26, 68]. In recent years, they become the state-of-the-art technique serving as essential building blocks in graph generators and graph classification algorithms. A GNN normally takes the graph structure as the input for message passing, during which the neighborhood information of each node \(u\) is aggregated to get a more comprehensive representation \(\mathbf{h}_{u}\). The detailed information of GNN is described in Section A.1.
**Graph Generators.** Graph generators aim to produce graph-structured data of observed graphs regardless of the domains, which is fundamental in graph generative models. The study of graph generators dates back at least to the work by Erdős–Rényi [13] in the 1960s. These traditional graph generators focus on various random models [3, 13], which typically use simple stochastic generation methods such as a random or preferential attachment mechanism. However, the traditional models require prior knowledge to obtain/tune the parameters and tie them specifically to certain properties (e.g., probability of connecting to other nodes, etc.), hence their limited capacity for handling the complex dependencies of properties. Recently, graph generators using GNN as the foundation have attracted huge attention [14, 28, 41, 63]. The GNN-based graph generators can be further grouped into two categories: autoencoder-based generators and autoregressive-based generators. An autoencoder-based generator [14, 25, 32, 41] is a type of neural network which is used to learn the representations of unlabeled data and reconstruct the input graphs based on the representations. An autoregressive-based generator [28, 63] uses sophisticated models to capture the properties of observed graphs better. By generating graphs sequentially, these models can leverage the complex dependencies between generated graphs. In this paper, we selectively focus on eight graph generators that span the space of commonly used architectures, including ER [13], BA [3], GRAN [28], VGAE [25], Graphite [14], GraphRNN [63], SBMGNN [32], and GraphVAE [41] (see more detailed information about graph generators in Section A.2).
## 3 Generated Graph Detection
### Problem Statement
The generated graph detection problem studied in this paper can be formulated as follows. Suppose we have a set of _real_ graphs \(\mathbf{RG}=\{\mathbf{rg}_{1},\ldots,\mathbf{rg}_{\ell}\}\), _m seen_ graph generators \(\varphi_{seen}=\{\phi_{1},\ldots,\phi_{m}\}\), _k unseen_ graph generators \(\varphi_{unseen}=\{\phi_{m+1},\ldots,\phi_{m+k}\}\), and a collection of _generated graphs_ by seen and unseen generators \(\mathbf{GG}=\{\mathbf{GG}_{1},\ldots,\mathbf{GG}_{m+k}\}\). Here each \(\mathbf{GG}_{i}\) is a set of graphs generated by a graph generator \(\phi_{i}\). To be specific, let \(D=\{(x_{1},y_{1}),(x_{2},y_{2}),\ldots,(x_{z},y_{z})\}\), where \(x_{i}\in\mathbf{RG}\bigcup\mathbf{GG}\), \(y_{i}\) represents the label of each graph (i.e., real or generated) and \(z=\sum_{i=1}^{m+k}|\mathbf{GG}_{i}|\) is the total number of samples. A generated graph detector \(f(\cdot)\) is later trained on \(D\). Once trained, it classifies each testing graph as real or generated. However, it is normal to see the arrival of graphs from unknown generators that have never been seen in training, and the graphs may not bear similar properties as the training data in the real world. The existing solutions usually leverage model retraining to cope with the problem. Yet, it is impractical to retrain a model from scratch every time a new graph generator is added or unseen data is encountered. Ideally, \(f(\cdot)\) _should be built in a way that it can be generalized to previously unseen data/generator in the real world_.
### A General Framework for Generated Graph Detection and Analysis
In this paper, we propose a general framework to detect generated graphs. Specifically, this framework consists of four scenarios depending on whether the dataset or graph generator has been used to train the model. These scenarios comprehensively cover from the simplest close-world scenario to the most challenging full open-world detection scenario. We discuss how we choose different ML models to implement this framework in Section 3.3.
**Closed World.** In this scenario, the training and testing graphs are sampled from _seen dataset_ and generated by _seen generators_. The goal is to predict whether a graph is real or generated by seen generators. Under this setting, to train the generated graph detector \(f(\cdot)\), we sample real graphs from \(\mathbf{RG}\) as positive samples and sample graphs generated by seen generators \(\varphi_{seen}\) as negative samples. The graphs used to test \(f(\cdot)\) share the same distribution with the training set, i.e., they consist of real graphs sampled from \(\mathbf{RG}\) and generated graphs generated by seen generators \(\varphi_{seen}\).
**Open Generator.** In this scenario, the negative samples of the testing graphs are generated by unseen generators but are in the same or similar distribution of training data (i.e., seen dataset). The training data of \(f(\cdot)\) does not contain any graphs generated by unseen generators \(\varphi_{unseen}\). Since only the graph generators used in the testing dataset are not seen at the time of training, we thus name it the "Open Generator" scenario. Under this setting, the detector \(f(\cdot)\) is trained with the graphs sampled from \(\mathbf{RG}\) (positive samples) and graphs generated by seen generators \(\varphi_{seen}\) (negative samples). The positive samples used to test \(f(\cdot)\) are also from \(\mathbf{RG}\) while the negative samples are generated by unseen generators \(\varphi_{unseen}\). The goal is to predict whether a graph is real or generated by unseen generators.
**Open Set.** In this scenario, the testing graphs are from seen generators that are trained on unseen datasets. Concretely, the graph generators that the system sees in training are what it will see in testing (i.e., seen generators). However, the testing
graphs are of different distributions of training data (i.e., unseen datasets). For instance, \(f(\cdot)\) was trained using both real and generated graphs from chemical graphs, yet, the testing graphs (either real or generated) are social network graphs that are inherently different. As such, we name this scenario the "Open Set" scenario. Similar to the "Open Generator" scenario, the detector \(f(\cdot)\) is trained with the graphs sampled from \(\mathbf{RG}\) (as positive samples) and the graphs generated by seen generators \(\varphi_{seen}\) (as negative samples). Unlike the previous experiments, the graphs used to test \(f(\cdot)\) are from different datasets. The testing graphs of \(f(\cdot)\) consist of real graphs sampled from other datasets and graphs generated by seen generators \(\varphi_{seen}\) based on other datasets. The goal is to predict whether a graph from the unseen dataset is real or generated.
**Open World.** In this scenario, the testing graphs are from unseen generators that are trained on unseen datasets. This setting is the most challenging yet common in the real world. It is normal to see the arrival of graphs from unknown generators that have never been seen at the training time, and the graphs may not bear with similar properties as of the training data. To be specific, certain generators that the classifier sees at the testing time are not included in its training stage (i.e., unseen generators), and the testing graphs are of the different distribution of training data (i.e., unseen datasets). Similar to "Open Generator" and "Open Set" scenarios, the generated graph detector \(f(\cdot)\) is trained with the graphs sampled from \(\mathbf{RG}\) (as positive samples) and the graphs generated by seen generators \(\varphi_{seen}\) (as negative samples). The testing graphs consist of real graphs sampled from other datasets and graphs generated by unseen generators.
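For clarity, the split logic shared by the four scenarios can be summarized by the following sketch. The dictionary layout, the dataset/generator names, and the function itself are placeholders for illustration and are not part of the evaluated implementation; in every scenario the training pool is drawn from the seen datasets and seen generators as described above.

```python
def build_test_pool(scenario, real, generated, seen_sets, seen_gens):
    """Assemble the testing pool for one of the four detection scenarios.

    real[d]             : real graphs from dataset d
    generated[d][g]     : graphs produced by generator g trained on dataset d
    seen_sets/seen_gens : datasets and generators already used to train f(.)
    """
    unseen_sets = [d for d in real if d not in seen_sets]
    unseen_gens = sorted({g for d in generated for g in generated[d]} - set(seen_gens))

    sets, gens = {
        "closed_world":   (seen_sets,   seen_gens),
        "open_generator": (seen_sets,   unseen_gens),
        "open_set":       (unseen_sets, seen_gens),
        "open_world":     (unseen_sets, unseen_gens),
    }[scenario]

    positives = [x for d in sets for x in real[d]]                # real graphs
    negatives = [x for d in sets for g in gens
                 if g in generated[d] for x in generated[d][g]]   # generated graphs
    return positives, negatives
```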
### Detection Methodologies
As discussed in Section 3.2, we need an ML model \(f(\cdot)\) to cope with the four generated graph detection scenarios. In this section, we introduce three ML models - end-to-end classifier [60, 8, 16, 26], contrastive learning-based model [60, 64, 69, 70, 17], and metric learning-based model [43, 38, 58] - to implement the aforementioned detection framework. All the models can work as the \(f(\cdot)\) to do the final detection in all scenarios. For each scenario, \(f(\cdot)\) has the same structure, while trained or tested by different samples. The pros and cons of each model are evaluated and discussed in Section 4.
**End-To-End Classifier.** The most straightforward approach to distinguishing between real and generated graphs is to train a binary classifier in an end-to-end manner. As aforementioned, among all the research, graph classification methods based on graph convolutional networks (GCNs) are commonly recognized as the state-of-the-art technique in deep learning-based graph classification [60, 8, 16, 26]. The results in Section A.5 also show that GCN performs better than other GNN architectures most of the time. Therefore we choose the GCN model [26] as our end-to-end classifier. The end-to-end classifier consists of four GCN layers and a fully connected layer. We use a four-layer GCN network to embed the graph data into a 128-dimensional vector, and use a fully connected layer to compute the final classification result.
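A minimal sketch of such a classifier in PyTorch Geometric is shown below. The four GCN layers and the 128-dimensional graph embedding follow the description above; the ReLU activations, the mean-pooling readout, and the input feature dimension are assumptions made for the sketch rather than the exact configuration used in our experiments.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class EndToEndGCN(torch.nn.Module):
    """Four GCN layers -> 128-dim graph embedding -> fully connected binary head."""

    def __init__(self, in_dim, hidden_dim=128, num_classes=2):
        super().__init__()
        self.convs = torch.nn.ModuleList(
            [GCNConv(in_dim, hidden_dim)] +
            [GCNConv(hidden_dim, hidden_dim) for _ in range(3)]
        )
        self.fc = torch.nn.Linear(hidden_dim, num_classes)

    def forward(self, x, edge_index, batch):
        for conv in self.convs:
            x = F.relu(conv(x, edge_index))
        h_g = global_mean_pool(x, batch)   # 128-dim embedding per graph
        return self.fc(h_g)                # logits: real vs. generated
```

Training then reduces to minimizing a standard cross-entropy loss over mini-batches of real and generated graphs.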
**Contrastive Learning-Based Model.** Previous studies have shown that contrastive learning helps to improve the graph encoding performance [69, 70, 17, 64]. Different from the traditional binary classifier that trains the GNN model in an end-to-end manner, the contrastive learning-based model first learns a powerful graph encoder in a self-supervised manner, then uses the graph encoder to transform the graphs into graph embeddings, and employs a binary classifier to predict the results. Figure 3 (in Section A.3) illustrates the general workflow of our contrastive learning-based model. We use a support vector machine (SVM) as the final classifier, following previous work [64, 45]. The implementation details of the contrastive learning-based model are introduced in Section A.3.
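Once the encoder has been pre-trained (its contrastive pre-training is described in Section A.3 and omitted here), the downstream step reduces to the sketch below, where `encoder` is a placeholder for the frozen, pre-trained graph encoder returning one embedding vector per graph; the RBF kernel is an assumption made for illustration.

```python
import numpy as np
from sklearn.svm import SVC

def fit_contrastive_detector(encoder, train_graphs, train_labels):
    """Embed graphs with the frozen encoder, then train an SVM on the embeddings."""
    z = np.stack([encoder(g) for g in train_graphs])  # one embedding per graph
    clf = SVC(kernel="rbf")                           # kernel choice is an assumption
    clf.fit(z, train_labels)                          # labels: real vs. generated
    return clf

def predict(encoder, clf, graphs):
    return clf.predict(np.stack([encoder(g) for g in graphs]))
```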
**Metric Learning-Based Model.** In the past few years, deep metric learning has consistently achieved state-of-the-art performance [43, 58, 38]. As one of the cutting-edge unsupervised representation learning models, deep metric learning aims to map input data into a metric space, where data from the same class stay close while data from different classes fall apart from each other. However, unlike tasks such as classification or face recognition, in which only one sample is needed to produce an output, metric learning requires at least two samples at a time, since its output is whether the two input samples belong to the same category [15, 18]. Based on this core concept, the siamese network [15, 18] was proposed, which takes paired samples as input and outputs whether they belong to the same category. The implementation details of the metric learning-based model are introduced in Section A.4. Since the metric learning-based model only predicts whether two input samples share a label, to obtain a prediction for each individual graph we still need to infer its exact label by querying the model with pairs consisting of the testing sample to be predicted and a sample with a known label.
In order to get the final classification results, for each testing sample, we randomly select \(N_{k}\) samples of each label from the training set and generate \(N_{k}\ast N_{class}\) paired samples. Here \(N_{class}\) equals 2 (i.e., real and generated). After feeding the paired samples into the siamese network, we will get \(N_{k}\) posteriors for each label. Each posterior represents the probability of the paired samples from the same label. After calculating the mean value of the \(N_{k}\) posteriors for each label, we can find the maximum mean value and take the corresponding label as the predicted result of the testing sample. For example, if the maximum mean value of the posteriors is from label _real_, then we consider using _real_ as the final classification result.
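The label-prediction procedure described above can be sketched as follows; `siamese` is assumed to return the posterior probability that a pair of graphs shares the same label, and the variable names are illustrative.

```python
import numpy as np

def predict_label(siamese, test_graph, reference_sets, n_k=10):
    """Predict real/generated by pairing the test graph with N_k known samples per label.

    reference_sets: dict mapping each label to a list of training graphs with that label.
    """
    mean_posteriors = {}
    for label, graphs in reference_sets.items():
        # Randomly draw N_k reference graphs of this label and form paired inputs.
        idx = np.random.choice(len(graphs), size=n_k, replace=False)
        posteriors = [siamese(test_graph, graphs[i]) for i in idx]
        # The mean posterior estimates the probability that the test graph has this label.
        mean_posteriors[label] = float(np.mean(posteriors))
    # Take the label with the maximum mean posterior as the final prediction.
    return max(mean_posteriors, key=mean_posteriors.get)
```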
## 4 Experiments
We first introduce the datasets and implementation details of our experiments. Then following the application scenarios described in Section 3, the experiments are conducted based on each scenario.
### Experimental Setup
**Datasets.** We use 7 benchmark datasets from TU-Dataset [33] to evaluate the performance, including AIDS [35], Alchemy [7], Deezer ego nets (abbreviated as Deezer) [37], DBLP [1], GitHub StarGazer (abbreviated as GitHub) [37], COLLAB [61] and Twitch ego nets (abbreviated as Twitch) [37]. Among them, Deezer, GitHub, and Twitch are social networks with nodes representing users and edges indicating friendships. DBLP and COLLAB are collaboration networks with nodes representing papers/researchers and edges indicating citations/collaborations. AIDS and Alchemy are molecular graphs with nodes representing atoms of the compound and edges corresponding to chemical bonds. These graphs form our _real_ datasets for the rest of the evaluation. The statistics of all the datasets are summarized in Table 1.
**Sampling High-Quality Generated Graphs.** Although the graph generators are capable of generating graphs whose distribution is similar to that of real graphs, some generated graphs may still contain obvious artifacts. Such artifacts could bias the classification. Thus, we compute the number of nodes, the number of edges, density, diameter, average clustering, and transitivity as the statistical features of each graph and use the Euclidean distance to measure the 1-nearest-neighbor similarity between each generated graph and the real graph set [65]. We select the 20% of generated graphs with the highest similarity for the following experiments.
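A minimal sketch of this filtering step is given below, assuming NetworkX graphs; computing the diameter on the largest connected component is our assumption for handling possibly disconnected generated graphs.

```python
import numpy as np
import networkx as nx

def graph_features(g):
    """Six statistical features: nodes, edges, density, diameter, clustering, transitivity."""
    if nx.is_connected(g):
        g_diam = g
    else:  # diameter is undefined for disconnected graphs
        g_diam = g.subgraph(max(nx.connected_components(g), key=len))
    return np.array([
        g.number_of_nodes(), g.number_of_edges(), nx.density(g),
        nx.diameter(g_diam), nx.average_clustering(g), nx.transitivity(g),
    ])

def select_high_quality(generated_graphs, real_graphs, keep_ratio=0.2):
    """Keep the generated graphs closest (1-nearest-neighbor Euclidean) to the real set."""
    real_feats = np.stack([graph_features(g) for g in real_graphs])
    nn_dists = [np.linalg.norm(real_feats - graph_features(g), axis=1).min()
                for g in generated_graphs]
    keep = np.argsort(nn_dists)[: int(keep_ratio * len(generated_graphs))]
    return [generated_graphs[i] for i in keep]
```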
To evaluate the quality of generated graphs, we use maximum mean discrepancy (MMD) over these graph features to measure the similarity between real graphs and graphs generated by different generators. The MMD results show that the graphs generated by different generators and real graphs are very similar at the statistical level. The MMD results are shown in Table 6 in Section A.6.
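For reference, a sketch of an MMD computation over these statistical features is shown below; the Gaussian kernel and its bandwidth are illustrative assumptions, not necessarily the exact kernel behind Table 6.

```python
import numpy as np

def mmd_rbf(x, y, sigma=1.0):
    """Biased estimate of squared MMD between two feature sets with a Gaussian kernel."""
    def kernel(a, b):
        sq_dists = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-sq_dists / (2.0 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2.0 * kernel(x, y).mean()

# x: statistical features of real graphs, y: features of graphs from one generator.
# A small value indicates that the two sets are statistically similar.
```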
**Implementation Details.** We use the GCN to embed the graphs in the end-to-end classifier and the metric learning-based model. The GCN is implemented in PyTorch [2]. We use the Adam optimizer [23], train each model for 200 epochs, set the learning rate to 0.001, and adopt cross-entropy loss as the loss function. The ratio of the training set to the testing set is 8:2. The contrastive learning-based model is trained following the implementation details in GraphCL [64]. As mentioned in Section 3.3, we generate \(N_{ps}*2\) paired samples to train the siamese network in the metric learning-based model, where \(N_{ps}*2\) is the number of paired samples used for training, and use \(N_{k}\) samples from each label to predict the final results. We conduct experiments to fine-tune the metric learning-based model and find that \(N_{ps}=200,000\) and \(N_{k}=10\) yield the best performance. The corresponding results are displayed in Section A.5 (Figure 5 and Figure 6).
**Baseline.** To better evaluate the performance of our proposed models, we incorporate a model named Feature Classification (FC) as the baseline. The FC model takes the graph's statistical features as input and uses standard machine learning models for the final classification. Here we use a multilayer perceptron (MLP) and XGBoost [9] as the classifiers since they are both widely used and effective models. The statistical features are the number of nodes, the number of edges, density, diameter, average clustering, and transitivity, i.e., the same features we used to sample high-quality graphs.
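A sketch of the FC(XGBoost) baseline is shown below, reusing the `graph_features` helper from the filtering sketch above; `graphs`, `labels`, and the train/test index arrays are placeholders, and the hyperparameters are defaults rather than tuned values.

```python
import numpy as np
from xgboost import XGBClassifier

# X: 6-dimensional statistical feature vectors, y: 1 = real, 0 = generated.
X = np.stack([graph_features(g) for g in graphs])
y = np.array(labels)

fc_xgb = XGBClassifier(n_estimators=200, max_depth=4)
fc_xgb.fit(X[train_idx], y[train_idx])
print("FC(XGBoost) accuracy:", fc_xgb.score(X[test_idx], y[test_idx]))
```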
### Experiments for the "Closed World" Scenario
In this scenario, we want to explore whether real graphs and generated graphs can be distinguished when the distribution of all testing graphs is known. As introduced before, we propose three methods to classify graphs. The evaluation metrics we used in this paper are accuracy and F1-score.
**Overall Results.** The accuracy and F1-score of all the binary classifiers are summarized in Table 2. In general, our proposed models outperform the two FC models on all datasets, demonstrating that the GNN-based models capture the characteristics of graphs better than machine learning models that rely only on statistical features. We also observe that, among the three methods, the metric learning-based model performs the best in most cases, while the contrastive learning-based model performs the worst. Moreover, the results show that, in general, the performance on Deezer, GitHub, and Twitch is better than on the other datasets. Compared to the other datasets, the graphs in Twitch, GitHub, and Deezer are larger, and these three datasets also contain more graphs. This implies that the binary classifiers can distinguish between real and generated graphs with higher accuracy for larger datasets with bigger graphs.
Although the contrastive learning-based model and the metric learning-based model share a similar goal, i.e., training an encoder that maps graphs to a latent space where graphs with the same label stay close while graphs with different labels fall apart from each other, the metric learning-based model performs better in this scenario. We therefore conclude that the embeddings produced by metric learning tend to be more easily separated in the "Closed World" scenario.
**Dataset Oblivious Study.** Besides the evaluation from the perspective of a single dataset, we also conduct the dataset oblivious study. In this experiment, we first randomly sample 1,000 real graphs from each dataset. Then we randomly select 1,000 generated graphs which are evenly generated by all the generators based on each dataset. Finally, we obtain a _mixed_ dataset consisting of 7,000 real graphs and 7,000 generated graphs to train and test the binary classifier. The ratio of the training set and testing set is 8:2.
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline Dataset & \# of graphs & Avg. Nodes & Avg. Edges \\ \hline AIDS & 2,000 & 15.69 & 16.20 \\ Alchemy & 202,579 & 10.10 & 10.44 \\ Deezer & 9,629 & 23.49 & 65.25 \\ DBLP & 19,456 & 10.48 & 19.65 \\ GitHub & 12,725 & 113.79 & 234.64 \\ COLLAB & 5,000 & 74.49 & 2,457.78 \\ Twitch & 127,094 & 29.67 & 86.59 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Dataset statistics.
Surprisingly, convincing performance can still be observed even when the dataset of origin is not taken into consideration. This indicates that the models can distinguish real graphs from generated graphs even when the graphs used to train them do not belong to any specific dataset. This is more meaningful in real-world settings, where we may not know which dataset the graphs come from. On the mixed dataset, the end-to-end classifier performs the best, which means that when the graphs to be classified do not belong to one specific dataset, the end-to-end classifier can better capture the complex dependencies of graphs and detect generated graphs with higher accuracy.
### Experiments for the "Open Generator" Scenario
The experiments above have shown that real graphs and generated graphs can be distinguished in the "Closed World" scenario. To further evaluate whether our models can still detect generated graphs when given unseen generators, we choose three different generators - GraphRNN [63], SBMGNN [32], and GraphVAE [41] - as unseen generators to test the utility of the proposed models. For all datasets, we use real graphs as the positive samples and graphs generated by unseen generators as the negative samples to test the models.
**Overall Results.** The final classification results are shown in Table 2, from which we can see that the results of our three proposed models are above 0.75 on all datasets, indicating that the models still perform relatively well even when the graphs are generated by unseen algorithms. This suggests that all the models can generalize to other graph generators. Also, compared with the "Closed World" scenario, the contrastive learning-based model starts to show some advantages when new generators arrive: its performance is comparable to that of the metric learning-based model, which suggests that the contrastive learning-based model is better suited to the "Open Generator" scenario. Similar to the previous scenario, the accuracies and F1-scores of all the models on Deezer, GitHub, and Twitch are better than on the other datasets, which indicates that the models generalize to unseen generators better for larger datasets with bigger graphs.
**Dataset Oblivious Study.** Moreover, we also conduct experiments on the mixed dataset. The model performance on the mixed dataset is in line with that on the other datasets. The experimental results suggest that our models can still generalize to previously unseen generators even when the dataset of origin is not taken into consideration.
The accuracy of the contrastive learning-based model on the mixed dataset is even higher than on COLLAB and is the best among the three proposed models. This suggests that the contrastive learning-based model can better generalize to other generators on datasets with a wide range of node numbers and graph densities, i.e., the mixed dataset.
**Distinguishing Graphs Generated by Unseen Generators.** Apart from separating real graphs from graphs generated by unseen generators, we also use metric learning to predict whether two graphs produced by unseen generators come from the same generator. To evaluate this task, we randomly generate 50,000 positive graph pairs and 50,000 negative graph pairs and feed the graph pairs into the metric learning-based model (the performance is shown in Table 3). The metric learning-based model can predict, to some extent, whether two graphs generated by unseen generators come from the same generator. Moreover, the performance on Deezer, GitHub, and Twitch is better than on the other datasets, which is consistent with the results of the "Open Generator" scenario.
However, when we use the mixed dataset to train and test the metric learning-based model, the performance is much worse than on the other datasets. This is reasonable since Table 2 shows that the metric learning-based model performs worst on the mixed dataset. The visualization of graphs generated by the unseen generators shown in Figure 7 also supports this result: the embeddings of the mixed dataset cannot be separated as clearly as those of the other datasets.
\begin{table}
\begin{tabular}{l|c c c c c|c c c c c} \hline \hline & \multicolumn{4}{c|}{Closed World} & \multicolumn{4}{c}{Open Generator} \\ \hline Dataset & FC(MLP) & FC(XGBoost) & End-To-End & Contrastive & Metric & FC(MLP) & FC(XGBoost) & End-To-End & Contrastive & Metric \\ \hline AIDS & 0.75/0.73 & 0.74/0.74 & 0.89/0.85 & 0.87/0.84 & **0.91/0.90** & 0.73/0.70 & 0.72/0.71 & 0.82/0.81 & 0.84/0.82 & **0.87/0.84** \\ Alchemy & 0.78/0.78 & 0.79/0.78 & 0.87/0.87 & 0.85/0.80 & **0.90/0.89** & 0.74/0.73 & 0.76/0.76 & 0.80/0.77 & 0.82/0.79 & **0.84/0.82** \\ Deezer & 0.78/0.78 & 0.80/0.80 & 0.97/0.95 & 0.95/0.94 & **0.98/0.97** & 0.74/0.74 & 0.77/0.5 & 0.90/0.88 & **0.92/0.92** & 0.91/0.91 \\ DBLP & 0.70/0.68 & 0.72/0.71 & **0.84/0.83** & 0.82/0.82 & 0.82/0.82 & 0.75/0.74 & 0.75/0.75 & 0.79/0.79 & **0.82/0.82** & 0.80/0.79 \\ Github & 0.81/0.81 & 0.84/0.82 & 0.95/0.94 & 0.92/0.92 & **0.96/0.96** & 0.80/0.82 & 0.81/0.80 & 0.94/0.94 & 0.91/0.91 & **0.96/0.92** \\ COLLAB & 0.56/0.55 & 0.60/0.60 & 0.85/0.84 & 0.84/0.82 & **0.89/0.89** & 0.50/0.49 & 0.51/0.50 & 0.78/0.76 & 0.80/0.79 & **0.84/0.82** \\ Twitch & 0.56/0.55 & 0.61/0.60 & 0.92/0.89 & 0.90/0.88 & **0.95/0.93** & 0.51/0.49 & 0.53/0.51 & 0.85/0.85 & **0.90/0.89** & 0.86/0.86 \\ Mixed & 0.64/0.62 & 0.65/0.64 & **0.84/0.83** & 0.80/0.80 & 0.82/0.81 & 0.60/0.59 & 0.62/0.62 & 0.78/0.76 & **0.82/0.80** & 0.79/0.78 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The accuracy/F1-score of generated graph detection in “Closed World” scenario and “Open Generator” scenario.
\begin{table}
\begin{tabular}{l|c|c} \hline \hline & Accuracy & F1-score \\ \hline AIDS & 0.78 & 0.78 \\ Alchemy & 0.82 & 0.82 \\ Deezer & 0.93 & 0.93 \\ DBLP & 0.75 & 0.74 \\ GitHub & 0.95 & 0.95 \\ COLLAB & 0.83 & 0.83 \\ Twitch & 0.89 & 0.89 \\ MIXED & 0.64 & 0.64 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Distinguishing graphs generated by unseen generators.
### Experiments for the "Open Set" Scenario
Apart from distinguishing between real graphs and graphs generated by unseen algorithms, we also conduct experiments to evaluate whether graphs generated from unseen datasets can still be distinguished from real graphs. In this experiment, we use graphs from AIDS, Alchemy, Deezer, DBLP, and GitHub as the seen datasets to train all the models, and use COLLAB and Twitch as unseen datasets to test the models.
For each seen dataset, we randomly select 1,000 real graphs and 1,000 generated graphs which are evenly generated by the seen generators. In the end, we use the final dataset with 5,000 real graphs and 5,000 generated graphs to train all the models.
In this scenario, we want to evaluate whether the fake graphs generated by seen generators based on unseen datasets can be distinguished from the real graphs. Thus to test the model, for each unseen dataset, we randomly select real graphs and the same amount of generated graphs that are evenly generated by the seen generators. The final testing set contains 2,000 real graphs and 2,000 generated graphs. The performance of all the models is summarized in Table 4.
We can see from the table that our proposed models distinguish real graphs from generated graphs with an accuracy of at least 0.78. This implies that our models have the ability to generalize to unseen datasets. Moreover, the accuracy of the contrastive learning-based model reaches 0.85 and is the best among the three models, which suggests that the contrastive learning-based model generalizes to unseen datasets better.
After comparing the performance with the "Closed World" scenario in Section 4.2, we find that the performance drops. It is reasonable because the graphs used to test the models come from new datasets which are not seen in the training set, which makes the task harder than in the previous experiment.
### Experiments for the "Open World" Scenario
The fourth scenario is to evaluate whether the fake graphs generated by unseen algorithms in unseen datasets can be distinguished from the real graphs. To test the model, for each unseen dataset, we randomly select real graphs and the same amount of generated graphs that are evenly generated by the unseen generators. The final testing set contains 2,000 real graphs and 2,000 generated graphs. The performance of all the models is summarized in Table 4.
This scenario is called the "Open World" scenario, as described before, since both the datasets and the generators are unseen in the training phase. It is the hardest task among the four scenarios. We can see from Table 4 that the performance, as expected, is lower than in Section 4.3 and Section 4.4.
Although the performance is weaker, the accuracies of all three proposed models are still at least 0.74. This suggests that the models can still distinguish, to some extent, real graphs from graphs generated by unseen generators on unseen datasets. In addition, the contrastive learning-based model performs the best among all the models, which is in line with the previous experiments.
Throughout all the experiments, we can draw a conclusion that the metric learning-based model tends to perform better in the "Closed World" scenario while the contrastive learning-based model shows advantages in "Open Generator", "Open Set" and "Open World" scenarios. The results give us an insight that metric learning can learn better representations of graphs with known graph distributions. On the contrary, as a representative self-supervised method, the contrastive learning-based model can learn representations that are more general and can be transferred to different graph distributions.
### Visualization Analysis
Since the previous experiments show that the contrastive learning-based model tends to perform better in the "Open Generator", "Open Set", and "Open World" scenarios and on the mixed dataset, we further explore the reason behind this. To this end, we use t-Distributed Stochastic Neighbor Embedding (t-SNE) [47] to visualize the graphs embedded by the different models. Figure 1 shows the t-SNE results of the testing samples used in the fourth scenario. It can be easily noticed that the embeddings produced by the contrastive encoder are better separated, which may be the major reason why the contrastive learning-based model outperforms the other models in the "Open World" scenario.
## 5 Related Work
We have already covered several highly-related works (e.g., graph generative models and graph neural networks) in Section 2. We discuss additional related work in a broader scope below.
**Generated Data Detection.** Although it remains an unexplored area in generated graph detection, there has been some research about generated image detection in the past few years. Rossler et al. showed that simple classifiers can detect images created by a single category of networks [36]. Wang et al. demonstrated that a simple image classifier trained on one specific CNN generator (ProGAN [21]) is able to generalize well to unseen architectures [50]. Ning et al. learned the GAN fingerprints towards image attribution and showed that even a small difference in GAN training (e.g., the difference in initialization) can leave a distinct fingerprint that can be detected [65]. Most of the previous studies focus on image data; as far as we know, we are the first to investigate the generated graph detection.
**Privacy and Security Issues in GNN.** Rising concerns about the privacy and security of GNNs have led to a surge of re
\begin{table}
\begin{tabular}{l|c|c} \hline \hline & Open Set & Open World \\ \hline FC(MLP) & 0.57/0.54 & 0.64/0.62 \\ FC(XGBoost) & 0.60/0.62 & 0.65/0.64 \\ End-To-End & 0.82/0.82 & 0.76/0.75 \\ Contrastive & **0.85**/0.84 & **0.83**/0.83 \\ Metric & 0.78/0.76 & 0.74/0.74 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Generated graph detection in “Open Set” scenario and “Open World” scenario.
search on graph adversarial attacks. Broadly speaking, they can be grouped into two categories -- causative attacks and exploratory attacks. Causative attacks on GNNs add unnoticeable adversarial perturbations to node features and graph structures to reduce the accuracy of or intentionally change the outcome of node classification [5, 59, 6, 31, 46, 5], link prediction [5, 29], graph classification [11, 56], etc. To conduct causative attacks, attackers must be able to tamper with the training process of GNNs or influence the fine-tuning process of pre-trained GNNs. Exploratory attacks on GNNs send (carefully crafted) query data to the target GNNs and observe their decisions on these input data. Attackers then leverage the responses to build shadow models to achieve different attack goals, such as link re-identification [19], property inference [67], membership inference [51], model stealing [12], etc. To launch exploratory attacks, attackers must be able to interact with the GNNs (e.g., via publicly accessible API) at the runtime.
## 6 Conclusion and Future Work
In this paper, we propose a general framework for generated graph detection. In this framework, we introduce four application scenarios based on different training and testing data and design three kinds of models to conduct experiments in each scenario. The experimental results show that all models can successfully distinguish real graphs from generated graphs in all scenarios, which means that although generative models show great success in many domains, the generated graphs can still be detected by GNN-based models. We also notice that the metric learning-based model tends to perform the best in the "Closed World" scenario, while the contrastive learning-based model shows advantages in the "Open Generator", "Open Set", and "Open World" scenarios, which suggests that the contrastive learning-based model generalizes to new datasets and generators better. Our dataset oblivious study shows that the proposed models still perform convincingly when the mixed dataset is used to train and test them. This is an interesting finding since the graphs in different datasets vary a lot, hence the mixed dataset covers a wide range of node numbers and densities. The results imply that the proposed models can handle datasets with many disparate graphs. This finding also better fits the real-world situation, where the graphs that need to be detected may not come from a specific dataset. Moreover, although we only discuss the detection of generated graphs in this paper, the framework can also be extended to other data modalities, such as images, text, or audio.
Figure 1: The visualization results of different models in “Open World” scenario. |
2307.07471 | Spectral Network Principle for Frequency Synchronization in Repulsive
Laser Networks | Network synchronization of lasers is critical for reaching high-power levels
and for effective optical computing. Yet, the role of network topology for the
frequency synchronization of lasers is not well understood. Here, we report our
significant progress toward solving this critical problem for networks of
heterogeneous laser model oscillators with repulsive coupling. We discover a
general approximate principle for predicting the onset of frequency
synchronization from the spectral knowledge of a complex matrix representing a
combination of the signless Laplacian induced by repulsive coupling and a
matrix associated with intrinsic frequency detuning. We show that the gap
between the two smallest eigenvalues of the complex matrix generally controls
the coupling threshold for frequency synchronization. In stark contrast with
Laplacian networks, we demonstrate that local rings and all-to-all networks
prevent frequency synchronization, whereas full bipartite networks have optimal
synchronization properties. Beyond laser models, we show that, with a few
exceptions, the spectral principle can be applied to repulsive Kuramoto
networks. Our results may provide guidelines for optimal designs of scalable
laser networks capable of achieving reliable synchronization. | Mostafa Honari-Latifpour, Jiajie Ding, Igor Belykh, Mohammad-Ali Miri | 2023-07-14T16:52:24Z | http://arxiv.org/abs/2307.07471v1 | # Spectral Network Principle for Frequency Synchronization in Repulsive Laser Networks
###### Abstract
Network synchronization of lasers is critical for reaching high-power levels and for effective optical computing. Yet, the role of network topology for the frequency synchronization of lasers is not well understood. Here, we report our significant progress toward solving this critical problem for networks of heterogeneous laser model oscillators with repulsive coupling. We discover a general approximate principle for predicting the onset of frequency synchronization from the spectral knowledge of a complex matrix representing a combination of the signless Laplacian induced by repulsive coupling and a matrix associated with intrinsic frequency detuning. We show that the gap between the two smallest eigenvalues of the complex matrix generally controls the coupling threshold for frequency synchronization. In stark contrast with Laplacian networks, we demonstrate that local rings and all-to-all networks prevent frequency synchronization, whereas full bipartite networks have optimal synchronization properties. Beyond laser models, we show that, with a few exceptions, the spectral principle can be applied to repulsive Kuramoto networks. Our results may provide guidelines for optimal designs of scalable laser networks capable of achieving reliable synchronization.
_Introduction._ Frequency synchronization, in which coupled photonic oscillators with different natural frequencies synchronize to a common frequency, is a critical requirement for unconventional computing using lasers [1; 2; 3; 4; 5] or trapped Bose-Einstein condensates [6], as well as for high-power beam combining for communication, sensing, and metrology [7]. Complex laser oscillator networks that expand beyond the conventional lattice geometries based on the evanescent tail coupling of neighboring lasers can be implemented using diffraction engineering [8; 9; 10]. The main types of coupling in laser networks encompass dispersive and dissipative interactions. Dissipative coupling induces the splitting of the resonant frequencies and is generally considered the superior mechanism for promoting network synchronization [11]. However, dissipative coupling can be attractive or repulsive, promoting in-phase and out-of-phase oscillations, respectively. The significance of the repulsive coupling scenario manifests itself in various applications, including the spin models for unconventional computing [1; 12; 13]. In this context, attractive coupling corresponds to the trivial ferromagnetic case, whereas repulsive coupling aligns with anti-ferromagnetism, which can embed hard optimization problems [3; 6; 14] and can represent non-trivial energy-based neural network models [15]. Furthermore, it has been suggested that anti-phase-coupled lasers can have better overall beam combining efficiencies [16].
Extensive research has been devoted to the role of network structure and parameter heterogeneity on the synchronization in oscillator networks with attractive coupling, including laser arrays [17; 18; 19; 20; 21; 22; 23], and more broadly, Laplacian [24; 25; 26; 27; 28; 29], pulse-coupled [30; 31], and Kuramoto-type networks [32; 33; 34; 35; 36; 37; 38; 39; 40; 41]. Yet, a significant knowledge gap remains regarding the interplay of these factors for frequency synchronization in repulsive oscillator networks. Such networks exhibit different forms of frequency synchronization, including splay states [42], clusters [43], and cyclops states [44] whose dependence on the network structure is not well understood and can be counterintuitive. For example, globally coupled repulsive Kuramoto networks fail to reach frequency synchronization whereas it occurs in locally coupled networks [45].
In this Letter, we discover a general principle that pairs frequency synchronization with the network structure and parameter detuning in networks of class-A laser oscillators [46; 47] with repulsive signless Laplacian dissipative coupling [11]. Much in the vein of the master stability function for complete synchronization in Laplacian networks [24; 27], this principle can predict a coupling threshold for frequency synchronization from the spectral knowledge of the complex matrix composed of the connectivity matrix and the matrix representing intrinsic frequency detuning. In contrast to complete synchronization in Laplacian networks, the coupling threshold in such laser networks is generally controlled by the spectral gap between the two smallest (non-zero) eigenvalues of the complex matrix. This principle suggests that full bipartite networks rather than global or local network topologies provide optimal synchronization properties.
_Model formulation._ We consider a network of \(N\) dissipatively coupled lasers described by a minimal dynamical model that involves only the amplitude and phase of the field in each laser cavity [48]. The complex amplitude of the \(n\)th oscillator obeys \(\dot{a}_{n}(t)=(-i\omega_{n}-1+g_{0}(1-|a_{n}|^{2}))a_{n}\), \(n=1,...,N\), where time is normalized to the field decay rate, \(\omega_{n}\) and \(g_{0}\) represent the dimensionless resonant frequency and small signal gain, respectively.
This model is valid when the atomic degrees of freedom are adiabatically eliminated for the so-called class-A lasers [46; 47]. Its individual dynamics is similar to that of the Landau-Stuart oscillator. The dynamical equations governing complex field amplitudes of the network are
\[\dot{\mathbf{a}}(t)=-\mathbf{a}+g_{0}(1-\mathbf{a}^{*}\cdot\mathbf{a})\cdot \mathbf{a}-i\Omega\mathbf{a}-\kappa Q\mathbf{a}, \tag{1}\]
where \(\mathbf{a}=[a_{1},...,a_{N}]^{T}\) is the vector containing the lasers' complex amplitudes, \(\Omega=\text{diag}(\omega_{1},...,\omega_{N})\) is an \(N\times N\) diagonal matrix involving detuned resonant frequencies, \(Q\) is the signless Laplacian connectivity matrix with off-diagonal elements \(q_{mn}=1\) for coupled oscillators and \(q_{mn}=0\) otherwise, and diagonal elements \(q_{mm}=\sum_{n\neq m}q_{mn}\). The negative sign of the coupling term \(-\kappa Q\mathbf{a}\) with coupling coefficient \(\kappa>0\) determines the repulsive nature of the dissipative coupling. Combining the last two terms in (1), we introduce the complex matrix
\[M=i\Omega+\kappa Q, \tag{2}\]
which accounts for the contribution of intrinsic frequency detuning and linear coupling. In an amplitude and phase representation of the complex amplitudes \(a_{n}(t)=A_{n}(t)e^{i\phi_{n}(t)}\), the network (1) can be written in the form
\[\dot{A}_{n}=-A_{n}+g_{0}(1-A_{n}^{2})A_{n}-\kappa\sum\limits_{m=1}^{N}q_{mn}A_{m}\cos(\phi_{m}-\phi_{n}), \tag{3}\] \[\dot{\phi}_{n}=-\omega_{n}-\kappa\sum\limits_{m=1}^{N}q_{mn}\frac{A_{m}}{A_{n}}\sin(\phi_{m}-\phi_{n}),\ n=1,...,N.\]
Under the simplifying assumption that the amplitudes of all laser oscillators settle down to the same value so that \(A_{n}(t)\to 1\), the system (3) can be reduced to the classical repulsive Kuramoto model with an arbitrary adjacency matrix \(C=Q-D\), where \(D=\text{diag}(q_{11},...,q_{NN})\) is the degree matrix of the connection graph. However, the dynamics of the Kuramoto model and the full system can be different.
_The spectral network principle._ Frequency synchronization occurs in the network (3) when \(\langle\dot{\phi}_{1}\rangle=\langle\dot{\phi}_{2}\rangle=...=\langle\dot{ \phi}_{N}\rangle\), where \(\langle...\rangle\) denotes a time average. Hereafter, we will be using an order parameter \(R=\frac{2}{n(n-1)}(\sum_{i<j}^{n}\exp\{-(\dot{\phi}_{i}-\dot{\phi}_{j})^{2}\})\) as a measure for the degree of frequency coherence, with \(R=1\) corresponding to perfect frequency synchronization. Previous studies used energy Lyapunov-type functions to derive conditions on the stability of frequency synchronization in the classical Kuramoto model with global attractive coupling [35] and local repulsive coupling [45]. However, constructing such functions for the amplitude-phase model (1) with arbitrary repulsive coupling is elusive. Here, we use an alternative approach to making sense of the complex matrix \(M\)'s spectral properties as a network synchronizability criterion. We view the onset of frequency synchronization as competition between the network eigenmodes for oscillation. This can be better understood in the case of identical oscillators, i.e., when \(\omega_{1}=\omega_{2}=\cdots=\omega_{N}=\omega_{0}\). In this case, considering the dynamics starting at low field intensities \(|\mathbf{a}|\ll\mathbf{1}\), the evolution can be linearized in the rotating frame of \(\omega_{0}\) as
\[\dot{\mathbf{a}}=(g_{0}-1)\mathbf{a}-\kappa Q\mathbf{a}. \tag{4}\]
The connectivity matrix \(Q\) has \(N\) real eigenvalues \(s_{1}\leq s_{2}\leq...\leq s_{N}.\) Diagonalizing (4) using the eigenmode basis of \(Q\), \(\mathbf{a}(t)=\sum_{m}\alpha_{m}(t)\mathbf{v}_{m}\), where \(\alpha_{m}(t)=\mathbf{v}_{m}^{\dagger}\mathbf{a}(t)\), we obtain the evolution equation for the \(m\)th eigenmode
\[\dot{\alpha}_{m}(t)=(g_{0}-\kappa s_{m}-1)\alpha_{m},\ m=1,...,N. \tag{5}\]
The fundamental mode (\(m=1\)) with the maximum net small-intensity gain, \(g_{0}-\kappa s_{1}-1\), has a higher probability of becoming the lasing mode, thereby inducing frequency synchronization. To do so, it needs to win the lasing competition with its closest competing mode with \(m=2\) and the gain \(g_{0}-\kappa s_{2}-1.\) The outcome of this competition is generally controlled by the gain difference between the two modes, \(\kappa(s_{2}-s_{1})\), that has to exceed an energy threshold. This suggests that the threshold coupling, \(\kappa_{c}\), for the onset of frequency synchronization
Figure 1: (a) The concept of dissipative coupling, where the interaction between two laser cavities is mediated through a dissipative medium which can promote anti-phase synchronization (repulsive coupling). (b) The frequency spectrum of the two detuned laser oscillators from (a) and the transition to the anti-phase frequency synchronization via increasing the coupling \(\kappa\). The insets show the phase portraits \(\Re(a_{1}),\Im(a_{1})\) (red) and \(\Re(a_{2}),\Im(a_{2})\) (blue) before and after the critical phase transition. Parameters are \(\omega_{0}=1\), \(\Delta=0.005\), and \(g_{0}=0.02\). (c) General scheme for creating arbitrary coupling between laser oscillators by diffraction engineering [8]. (d) The equivalent network graph.
can be estimated as
\[\kappa_{c}=\frac{b}{s_{2}-s_{1}}\equiv\frac{b}{\gamma}, \tag{6}\]
where \(s_{1}\) and \(s_{2}\) are the first and second smallest eigenvalues of the signless Laplacian matrix \(Q\), and the parameter \(b\) is determined by the intrinsic properties of the individual laser oscillator and its lasing threshold. Note that the spectral gap \(\gamma=s_{2}-s_{1}\) is zero for the globally coupled network (1), so frequency synchronization cannot be achieved even for large values of \(\kappa.\) This observation agrees with the similar property of the repulsive Kuramoto model of identical oscillators [45]. Similarly, networks with the zero spectral gap, \(\gamma=0,\) are expected to be non-synchronizable. The property that guarantees a non-zero spectral gap is the bipartiteness of the graph associated with the matrix \(Q.\) It has been previously shown in the context of spectral signless Laplacian graph theory [49; 50; 51] that the more edges one needs to remove to make the graph bipartite, the larger the smallest eigenvalue [52] and the smaller the spectral gap are. Therefore, a bipartite graph, which is generally the easiest to synchronize, has its smallest eigenvalue at zero, leading to a larger spectral gap. To support this claim and validate the predictive power of the spectral network principle (6), we numerically studied the scaling of the synchronization threshold \(\kappa_{c}\) as a function of the network size in four common network topologies, ranging from sparse to dense graphs (Fig. 2). All four types of networks discussed here are bipartite graphs and hence synchronizable. For all these networks, the spectral gap \(\gamma\) can be calculated analytically as a function of \(N.\) For the chain graph \(\gamma=s_{2}-s_{1}=2(1-\cos{(\pi/N)}),\) which for large \(N\) can be approximated as \(\pi^{2}/N^{2}\) [53]. For the square lattice graph, the gap is \(\gamma=2(1-\cos{(\pi/\sqrt{N})})\) and for large \(N\) it is approximated by \(\pi^{2}/N\). The star graph's gap is constant and equal to \(1.\) Finally, for the full bipartite graph the gap is \(\gamma=N/2\) for even \(N\) and \(\gamma=(N-1)/2\) for odd \(N\). Figure 2 indicates that the spectral network criterion (6) predicts the scaling of the coupling threshold rather precisely. To further illustrate the critical role of the spectral gap \(\gamma\) in frequency synchronization, we generated ensembles of uniformly connected, Barabasi-Albert scale-free networks [54]. Figure 3 shows that more heterogeneous networks with higher node degree hubs, in general, correspond to a larger spectral gap \(\gamma\), and such networks are easier to synchronize.
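These spectral predictions can be checked numerically from the signless Laplacian alone. The snippet below is a sketch (NumPy and NetworkX assumed) that computes \(\gamma=s_{2}-s_{1}\) for representative topologies and the resulting threshold trend \(\kappa_{c}=b/\gamma\) of Eq. (6), with \(b\) treated as a fitted constant.

```python
import numpy as np
import networkx as nx

def signless_laplacian_gap(g):
    """Spectral gap s_2 - s_1 of the signless Laplacian Q = D + A."""
    a = nx.to_numpy_array(g)
    q = np.diag(a.sum(axis=1)) + a
    s = np.sort(np.linalg.eigvalsh(q))
    return s[1] - s[0]

n, b = 20, 0.02  # b plays the role of the fitted scaling constant in Eq. (6)
topologies = {
    "chain": nx.path_graph(n),
    "star": nx.star_graph(n - 1),                        # n nodes in total
    "full bipartite": nx.complete_bipartite_graph(n // 2, n - n // 2),
    "all-to-all": nx.complete_graph(n),                  # zero gap: not synchronizable
}
for name, g in topologies.items():
    gap = signless_laplacian_gap(g)
    kappa_c = b / gap if gap > 1e-9 else np.inf
    print(f"{name:15s} gamma = {gap:8.4f}   predicted kappa_c = {kappa_c:.4f}")
```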
_Extension to nonidentical laser oscillators._ While the criterion (6) performs remarkably well for a large spectrum of the regular and scale-free networks depicted in Figs. 2-3, it is important to point to the limitations of its predictive power. The energy landscape governing the system of repulsively coupled identical oscillators via a hypothetical Lyapunov function may be a non-convex function. As a result, the fundamental eigenmode might not necessarily be the one to win the lasing competition or have
Figure 2: Actual (blue) and predicted (red) frequency synchronization threshold \(\kappa_{c}\) (with the order parameter \(R>0.99\)) in common bipartite network topologies, ranging from sparse to dense graphs. The predicted thresholds are computed from the spectral principle (6) with the scaling parameter \(b\) chosen to fit the data. Parameters are \(\omega_{0}=1\) and \(g_{0}=0.02.\)
Figure 3: Frequency synchronization of scale-free networks of identical oscillators and its relation to the spectral gap \(\gamma\). The scale-free networks are generated from an initial graph with \(m+10\) nodes via the preferential attachment mechanism [54]. (a). The onset of frequency synchronization via the dependence of the order parameter \(R\) on coupling strength \(\kappa.\) (b). The corresponding average spectral gap, \(\gamma\), for networks with different \(m\). Each curve in (a) and point in (b) correspond to the average of \(100\) randomly generated graphs of size \(N=70\) with \(m=1,...,9\). Other parameters are as in Fig. 2. A larger spectral gap enhances network synchronizability. (c). Sample networks with \(m=1,5,9.\)
the maximal overlap with the lasing mode. Therefore, any \(m\)th eigenmode with the corresponding eigenvalue \(s_{m}\) cannot be completely ruled out as unimportant for the synchronizability of the network. This might become particularly important for the case of non-identical laser oscillators where the signless Laplacian matrix \(Q\) alone is not determining the results. In fact, in the general case, the spectral gap that is predicted to control the frequency synchronization of non-identical laser models can be defined as the separation between the real parts of the two eigenvalues of matrix \(M\) with the smallest real parts, i.e., \(\gamma_{M}=\Re[\lambda_{2}-\lambda_{1}]\). As a result, the spectral criterion (6) can be approximately extended to non-identical oscillators as
\[\kappa_{c}=\frac{\bar{b}}{\gamma_{M}}, \tag{7}\]
where \(\bar{b}\) is a scaling parameter. For the two-oscillator setup of Fig. 1a with frequency detunings \(\omega_{1}=\omega_{0}-\Delta\) and \(\omega_{2}=\omega_{0}+\Delta\), the spectral gap can be calculated analytically as \(\gamma_{M}=2\sqrt{\kappa^{2}-\Delta^{2}}\), yielding the threshold coupling \(\kappa_{c}=\Delta\)[11]. Notably, the synchronization threshold is marked with a phase transition in the eigenvalues of matrix \(M\) that dictates the system's linearized dynamics (see Supplementary Fig. 1 in the Supplemental Material that also demonstrates this phase transition for random frequency detunings). To verify the general approximate criterion (7) for larger networks, we have numerically calculated the coupling threshold \(\kappa_{c}\) for all possible \(21\) network topologies of size \(N=5\) and \(1,000\) combinations of random frequency detunings. Figure 4 shows that the networks \(1,2,\) and \(3\), similarly to their identical oscillator counterparts with the zero spectral gap \(\gamma=0\), cannot support the frequency synchronization for any of the chosen frequency detunings. Remarkably, these networks include a locally coupled ring and an all-to-all network representing two opposite ends of the network topology range and are known to be synchronizable in Laplacian oscillator networks [24]. It is also worth noting a striking effect that adding one link to the local chain of Fig. 3top that completes the loop yields the unsynchronizable ring network \(2\) of Fig. 4. Out of the remaining \(18\) networks with non-zero spectral gap \(\gamma\), and therefore, capable of frequency synchronization according to the spectral criterion, only one, the network \(18\), does not follow the prediction. It remains unsynchronizable for any \(\kappa.\) This is the case where a complex interplay between the network structure and distributions of frequency detuning prevents each \(m\)th eigenmodes with eigenvalue \(\lambda_{m},\)\(m=1,...,N\) from becoming the lasing mode. Nonetheless, as in the identical oscillator case, the spectral criterion singles out the full bipartite network (network \(21\)) as the optimal network topology with the lowest synchronization threshold. To better relate the dependence of the threshold \(\kappa_{c}\) to the identical oscillator criterion (6), we choose the lowest value of \(\kappa_{c}\) for the full bipartite network (the lowest peak of the corresponding violin plot in Fig. 4) to identify the lowest scaling constant \(\bar{b}\) which could correspond to the least heterogeneous oscillators. We then use this scaling factor via (6) for all other networks, see how this trend compares to the actual heterogeneous oscillators (Fig. 4). Notably, with a few exceptions, even the identical oscillator spectral criterion can predict the general dependence on the spectral network gap \(\gamma.\) Obviously, the discrepancy between the predicted trend and the numerical data is due to multiple factors, including non-uniform scaling constants \(\bar{b}\) and spectral gaps \(\gamma_{M}\) for different detuning distributions. It
Figure 4: Synchronization threshold \(\kappa_{c}\) for all \(21\) possible connected networks of five detuned oscillators. The scattered points in each violin plot represent the coupling thresholds for \(1,000\) random frequency distributions with \(\omega_{m}\in\mathcal{U}(-0.5,0.5).\) The corresponding network is shown under each plot. The large red circles indicate an infinite coupling threshold corresponding to non-synchronizable networks. The networks are ordered from \(1\) to \(21\) by the spectral gap \(\gamma.\) The full bipartite network with index \(21\) has the largest \(\gamma.\) As a reference, the blue line shows a predicted trend from the spectral criterion for identical oscillators (6), with the scaling constant \(b\) calculated from the lowest value of \(\kappa_{c}\) for full bipartite network \(21\).
is also noteworthy that the spectral gap criterion successfully identifies the full bipartite network as the optimal network topology for frequency synchronization in the Kuramoto-type model obtained from the phase equation in system (3) by setting \(A_{n}=A_{m}=1\) (see Supplementary Fig. 2 for the similarities and differences between Fig. 4 and its Kuramoto model counterpart).
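The same check extends to detuned oscillators through the complex matrix \(M=i\Omega+\kappa Q\) of Eq. (2). The sketch below (NumPy assumed) computes \(\gamma_{M}=\Re[\lambda_{2}-\lambda_{1}]\) and reproduces the two-oscillator result \(\gamma_{M}=2\sqrt{\kappa^{2}-\Delta^{2}}\) quoted above.

```python
import numpy as np

def gamma_m(q, omega, kappa):
    """Gap Re[lambda_2 - lambda_1] between the two eigenvalues of M = i*Omega + kappa*Q
    with the smallest real parts."""
    m = 1j * np.diag(omega) + kappa * q
    re = np.sort(np.linalg.eigvals(m).real)
    return re[1] - re[0]

# Two-oscillator setup of Fig. 1a: detunings omega_0 -/+ Delta, one repulsive link.
omega0, delta, kappa = 1.0, 0.005, 0.02
q2 = np.array([[1.0, 1.0], [1.0, 1.0]])        # signless Laplacian of a single edge
omega2 = np.array([omega0 - delta, omega0 + delta])
print(gamma_m(q2, omega2, kappa))              # approx 2*sqrt(kappa**2 - delta**2)
```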
_Conclusions._ In this work, we revealed a general approximate principle that relates a critical coupling threshold for frequency synchronization to the spectral gap between the smallest eigenvalues of the matrix combined from the signless Laplacian connectivity and frequency detuning matrices. The discovered principle demonstrates that the spectral gap of the signless Laplacian, rather than mere connectivity, is a powerful indicator of the synchronizability of such repulsive networks. Although different, this predictive principle may be viewed as an analog of the master stability function for complete synchronization in Laplacian dynamical networks [24], as it isolates, in the identical oscillator case, the contribution of the coupling term from the individual oscillator dynamics. Applying the spectral principle, we discovered that in contrast to one's intuition, both local ring and global network structures prevent frequency synchronization, whereas the fully bipartite network has optimal synchronization properties. We also demonstrated that this latter property carries over to the repulsive Kuramoto network. The spectral principle has limitations, as it does not always rule out the synchronizability of a complex network of heterogeneous laser oscillators. However, it identifies topologies that can be easily synchronized and used for scalable designs of large laser arrays. Moreover, a maximal spectral gap of the complex matrix incorporating frequency detunings could be used as a guiding principle for machine learning approaches to designing disordered laser oscillator networks with optimal synchronization properties required for effective optical computing.
_Acknowledgments._ This work was supported by the Air Force Office of Scientific Research (AFOSR) Young Investigator Program (YIP) Award # FA9550-22-1-0189 (to M.-A.M.) and the Office of Naval Research under Grant No. N00014-22-1-2200 (to I. B.)
|
2308.10788 | Effectiveness of Reconfigurable Intelligent Surfaces to Enhance
Connectivity in UAV Networks | Reconfigurable intelligent surfaces (RISs) are expected to make future 6G
networks more connected and resilient against node failures, due to their
ability to introduce controllable phase-shifts onto impinging electromagnetic
waves and impose link redundancy. Meanwhile, unmanned aerial vehicles (UAVs)
are prone to failure due to limited energy, random failures, or targeted
failures, which causes network disintegration that results in information
delivery loss. In this paper, we show that the integration between UAVs and
RISs for improving network connectivity is crucial. We utilize RISs to provide
path diversity and alternative connectivity options for information flow from
user equipments (UEs) to less critical UAVs by adding more links to the
network, thereby making the network more resilient and connected. To that end,
we first define the criticality of UAV nodes, which reflects the importance of
some nodes over other nodes. We then employ the algebraic connectivity metric,
which is adjusted by the reflected links of the RISs and their criticality
weights, to formulate the problem of maximizing the network connectivity. Such
problem is a computationally expensive combinatorial optimization. To tackle
this problem, we propose a relaxation method such that the discrete scheduling
constraint of the problem is relaxed and becomes continuous. Leveraging this,
we propose two efficient solutions, namely semi-definite programming (SDP)
optimization and perturbation heuristic, which both solve the problem in
polynomial time. For the perturbation heuristic, we derive the lower and upper
bounds of the algebraic connectivity obtained by adding new links to the
network. Finally, we corroborate the effectiveness of the proposed solutions
through extensive simulation experiments. | Mohammed S. Al-Abiad, Mohammad Javad-Kalbasi, Shahrokh Valaee | 2023-08-21T15:27:02Z | http://arxiv.org/abs/2308.10788v1 | # Effectiveness of Reconfigurable Intelligent Surfaces to Enhance Connectivity in UAV Networks
###### Abstract
Reconfigurable intelligent surfaces (RISs) are expected to make future 6G networks more connected and resilient against node failures, due to their ability to introduce controllable phase-shifts onto impinging electromagnetic waves and impose link redundancy. Meanwhile, unmanned aerial vehicles (UAVs) are prone to failure due to limited energy, random failures, or targeted failures, which causes network disintegration that results in information delivery loss. In this paper, we show that the integration between UAVs and RISs for improving network connectivity is crucial. We utilize RISs to provide path diversity and alternative connectivity options for information flow from user equipments (UEs) to less critical UAVs by adding more links to the network, thereby making the network more resilient and connected. To that end, we first define the criticality of UAV nodes, which reflects the importance of some nodes over other nodes. We then employ the algebraic connectivity metric, which is adjusted by the reflected links of the RISs and their criticality weights, to formulate the problem of maximizing the network connectivity. Such problem is a computationally expensive combinatorial optimization. To tackle this problem, we propose a relaxation method such that the discrete scheduling constraint of the problem is relaxed and becomes continuous. Leveraging this, we propose two efficient solutions, namely semi-definite programming (SDP) optimization and perturbation heuristic, which both solve the problem in polynomial time. For the perturbation heuristic, we derive the lower and upper bounds of the algebraic connectivity obtained by adding new links to the network. Finally, we corroborate the effectiveness of the proposed solutions through extensive simulation experiments.
Network connectivity, algebraic connectivity, RIS-assisted UAV communications, graph theory, perturbation method.
## I Introduction
### _Motivation_
In future 6G networks, there is a proliferation of connected nodes (i.e., smart devices, sensors, military vehicles) and services (i.e., augmented reality, information flow, data collection) that need to be supported by wireless networks [2, 3, 4]. Consequently, there is a pressing need for more connected wireless networks that are resilient and robust against node and link failures. To address this need, unmanned aerial vehicles (UAVs) communication is shown to be a promising solution since they can be rapidly deployed with adjustable mobility [5, 6]. One distinctive aspect of UAV-assisted communications involves enhancing network connectivity through the establishment of line-of-sight (LoS) connections with user equipment (UE) [7].
The prime concern of UAV communications is that UAVs1 are prone to failure due to several reasons, such as limited energy, hardware failure, or targeted failure in the case of battlefield surveillance systems. Such UAV failures cause network disintegration, and consequently, information flow from UEs to a fusion center through UAVs can be severely impacted. Hence, it is crucial to always keep the network connected, which can be addressed by adding more backhaul links to the network [8]. Network densification, by adding more UAVs or access points (APs), helps to improve network connectivity but that will increase the hardware and energy consumption drastically [9]. In addition, deploying a large number of UAVs or APs can be challenging in densely populated urban areas due to site constraints, limited space, and limited UAV battery. Instead, deploying passive nodes in the network can achieve a more connected system with less cost and lower energy consumption.
Footnote 1: We use the terms nodes and UAVs/UEs interchangeably in this paper. In addition, the terms links and edges are used interchangeably since we usually use a graph to represent a network.
Recently, reconfigurable intelligent surfaces (RISs) have drawn significant attention from both academia and industry, with the features of relatively low-cost and the capability to extend coverage and reduce energy consumption. Besides their other benefits in wireless networks, RISs offer two key advantages for designing connected and resilient wireless networks as follows. First, RISs can improve network connectivity by creating an indirect path between a UE and a UAV when the LoS is not available, or to provide more reliable links between the connected UEs and the UAVs [10, 11]. As a result, path diversity and alternative connectivity options for information flow from UEs to UAVs are provided. Second, RIS provides a high passive beamforming gain via adjusting its reflection coefficients intelligently [12]. By tuning the phase shifts of RIS, we can direct the signals of the UEs from critical UAVs that are most likely to fail to less critical UAVs. Therefore, RIS can reflect the signals of the UEs to the desired UAVs, which is one of the main motivations of this work. As a result, RISs can be utilized to provide for more connected and resilient networks that can effectively operate in the presence of node failures. In this paper, we show that with the integration of UAV communications and RISs, a small number of low-cost RISs can increase the connectivity of the RIS-assisted UAV networks significantly.
Optimized network connectivity plays a crucial role in designing more connected and resilient networks and extending network lifetime, which is defined as the time until the network becomes disconnected [16, 17]. An important metric that
measures how well a graph network is connected is called the algebraic connectivity, also called the Fiedler metric or the second smallest eigenvalue2 of the graph Laplacian matrix [18]. One important property of the algebraic connectivity is that the larger it is, the more connected the graph will be. Such metric has been considered widely in wireless sensor networks. Several studies, as will be detailed below, consider routing solutions with the focus more on extending the battery lifetime of sensor nodes, e.g., [16, 17, 19, 20, 21]. One trivial strategy to optimize network connectivity is to add more links via adding more connected nodes (i.e., APs, relays, or UAVs) [16]. However, despite significant advancements in wireless sensor networks and UAV communications, the limited battery of nodes, the consumed energy, and cost necessitate a simple and efficient network design. Therefore, we introduce a cost-effective expansion of the traditional UAV communications by deploying compact RIS passive nodes that reflect the signals of the UEs to the desired UAVs. As a result, more links are added to the network to optimize its algebraic connectivity significantly.
Footnote 2: We use the terms algebraic connectivity and the Fiedler value interchangeably in this paper since they both measure the network connectivity of the graph Laplacian matrix.
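As a concrete illustration of this metric, the sketch below (NumPy and NetworkX assumed, with an arbitrary toy topology) computes the algebraic connectivity of a graph Laplacian and shows how a single added link, e.g., a reflected UE-RIS-UAV path, increases it.

```python
import numpy as np
import networkx as nx

def algebraic_connectivity(g):
    """Second smallest eigenvalue (Fiedler value) of the graph Laplacian."""
    lap = nx.laplacian_matrix(g).toarray().astype(float)
    return np.sort(np.linalg.eigvalsh(lap))[1]

g = nx.path_graph(6)                  # a sparsely connected toy backhaul topology
print("lambda_2 before:", algebraic_connectivity(g))
g.add_edge(0, 5)                      # one additional (e.g., RIS-reflected) link
print("lambda_2 after: ", algebraic_connectivity(g))
```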
### _Related Work_
In the recent literature, several works have studied the importance of RISs in improving different metrics, such as positioning accuracy [9, 10], extending the coverage of networks [11], boosting the communication capability [12, 13, 14, 15], etc. However, to the best of our knowledge, no existing work has used RISs to maximize network connectivity. Therefore, we briefly review some of the related works that address the network connectivity maximization problem in different wireless sensor networks and UAV communications, e.g., [16, 17, 19, 20, 21, 22, 23].
In spite of recent advances in wireless sensor networks, most of the existing studies consider routing solutions with the focus more on extending the battery lifetime of sensor nodes. These works define network connectivity through the network lifetime, i.e., the time until the first node or all the nodes of a sensor network have failed [16, 19, 21]. In [16], the authors addressed the problem of adding relays to maximize the connectivity of multi-hop wireless networks. In [22], the authors positioned a UAV to maximize the algebraic connectivity of a small-cell backhaul network. The paper [24] proposed three different random relay deployment strategies, namely, the connectivity-oriented, the lifetime-oriented, and the hybrid-oriented strategies. However, there is no explicit optimization problem for maximizing the network lifetime in that work. A mathematical approach to placing a few flying UAVs over a wireless ad-hoc network in order to optimize the network connectivity for better coverage was proposed in [25]. However, none of the aforementioned works has explicitly considered the exploitation of RISs to add more reflected links to improve the network connectivity of UAV-assisted networks. Different from the works [11, 16, 17, 19, 20, 21, 22, 23] that focus on routing solutions, this paper focuses on designing a more connected RIS-assisted UAV network. This network enables information flow from the UEs to the UAVs and is resilient to UAV failure.
The network connectivity maximization problem can be solved either by (i) relaxing the problem's constraints using convex relaxation, and then formulating the problem as a semi-definite programming (SDP) optimization problem to be solved using CVX [1, 16, 22] or (ii) using exhaustive search. However, the exhaustive search is computationally intractable for large network sizes, and the SDP optimization is sub-optimal. Therefore, there is a need for an efficient heuristic algorithm that can find, in polynomial time, a set of suitable reflected links of the RISs that connect the UEs to the UAVs, such that the connectivity of the RIS-assisted UAV network is maximized. As one of the main contributions of this paper, we build on the reference [26] to perturb the eigenvalues of the original Laplacian matrix with a rank-one update matrix and propose a novel perturbation heuristic. This efficient perturbation heuristic is based on calculating the values of the Fiedler vector, which can be conveniently applied to large graphs with low computational complexity.
### _Contribution_
In this paper, we investigate the integration between RISs and UAV-assisted communication systems by studying nodes criticality and the algebraic connectivity through SDP optimization and the Laplacian matrix perturbation, so that we maximize the connectivity of the envisioned RIS-assisted UAV network. The main contributions of this work are:
* **RIS-assisted UAV problem formulation:** First, we propose to define the criticality of UAV nodes, which reflects the importance of some nodes over other nodes towards the network connectivity. By considering the nodes' importance from the graph connectivity perspective, the node with higher importance will be retained in the network, and therefore the connectivity of the remaining network is maintained as long as possible. We then employ the algebraic connectivity metric, which is adjusted by the reflected links of the RISs and their criticality weights, to formulate the problem of maximizing the network connectivity. This problem is shown to be a combinatorial optimization problem. By embedding node criticality in the link selection, we propose two solutions to the proposed problem.
* **Convex relaxation and SDP formulation:** To tackle this problem, we propose a relaxation method such that the objective function of the relaxed problem becomes continuous. Leveraging this, we propose to formulate the problem as a SDP optimization problem that can be solved efficiently in polynomial time using CVX.
* **Algebraic connectivity perturbation:** We propose a low-complexity, yet efficient, perturbation heuristic, which has less complexity compared to the relaxation method. In this heuristic perturbation, one UE-RIS-UAV link is added at a time by calculating only the eigenvector values corresponding to the algebraic connectivity. We also derive the lower and upper bounds of the algebraic
connectivity obtained by adding new links to the network based on this perturbation heuristic.
* **Performance evaluation:** We evaluate the performance of the proposed schemes in terms of network connectivity via extensive simulations. We verify that the proposed schemes result in improved network connectivity as compared to the existing solutions. In particular, the proposed perturbation heuristic has a superior performance that is roughly the same as the optimal solution using exhaustive search and close to the upper bound.
The rest of this paper is organized as follows. In Section II, we describe the system model, network connectivity, and then define nodes criticality and outline some of its important properties. In Section III, the network connectivity maximization problem in a RIS-assisted UAV network is formulated. In Section IV, the proposed solutions are explained, and the upper and lower bounds on the algebraic connectivity of our proposed perturbation scheme are analyzed in Section V. Extensive simulation results are presented in Section VI, and the conclusion is given in Section VII.
## II System Model and Network Connectivity
### _System Model_
Consider the sample RIS-assisted UAV system shown in Fig. 1. The system consists of a set of UAVs, a set of RISs, and multiple UEs that represent ground users, sensors, etc. The sets of UAVs, RISs, and UEs are denoted as \(\mathcal{A}=\{1,2,\ldots,A\}\), \(\mathcal{R}=\{1,2,\ldots,R\}\), and \(\mathcal{U}=\{1,2,\ldots,U\}\), respectively. All UEs and UAVs are equipped with single antennas. The \(A\) UAVs fly and hover over assigned locations at a fixed flying altitude and connect with \(U\) UEs. The locations of the UAVs, UEs, and RISs are assumed to be fixed. The RISs are installed with a certain altitude \(z_{r}\), \(r\in\mathcal{R}\). Let \((x_{r},y_{r},z_{r})\) be the 3D location of the \(r\)-th RIS, \((x_{a},y_{a},z_{a})\) the 3D location of the \(a\)-th UAV, and \((x_{u},y_{u})\) the 2D location of the \(u\)-th UE, respectively. The distances between the \(u\)-th UE and the \(r\)-th RIS and between the \(r\)-th RIS and the \(a\)-th UAV are denoted by \(R_{u,r}^{\text{UR}}\) and \(R_{r,a}^{\text{RA}}\), respectively.
Due to their altitude, UAVs can have good connectivity to UEs. However, UEs may occasionally experience deep fade. To overcome this problem and further improve network connectivity, we propose to utilize a set of RISs to impose link redundancy to the network. As such, the network becomes more resilient against node failures by providing path diversity and alternative connectivity options between the UEs and the UAVs. Each RIS is equipped with a controller and \(M:=M_{b}\times M_{c}\) passive reflecting units (PRUs) to form a uniform passive array (UPA). Each column of the UPA has \(M_{b}\) PRUs with an equal spacing of \(d_{c}\) meters (m) and each row of the UPA consists of \(M_{c}\) PRUs with an equal spacing of \(d_{b}\) m. Through appropriate adjustable phase shifts, these PRUs can add indirect links between the UEs and the UAVs or connect the blocked UEs to the desired UAVs. The phase-shift matrix of the \(r\)-th RIS is modeled as the diagonal matrix \(\mathbf{\Theta}_{r}=diag(e^{j\theta_{1}^{r}},e^{j\theta_{2}^{r}},\ldots,e^{j\theta_{M}^{r}})\), where \(\theta_{m}^{r}\in[0,2\pi)\), for \(r\in\mathcal{R}\), and \(m\in\{1,\ldots,M\}\).
For the \(u\)-th UE, the reachable UAVs are denoted by a set \(\mathcal{A}^{u}\). Assuming that \(|\mathcal{A}^{u}|\geq 1\) in general, some UEs are able to access multiple UAVs simultaneously. Similarly, the reachable RISs for the \(u\)-th UE are denoted by a set \(\mathcal{R}_{u}\). The successful communications between the UEs and RISs are measured using the distance threshold \(D_{o}\), i.e., the \(u\)-th UE can be connected to the \(r\)-th RIS with distance \(d_{u,r}\) if \(d_{u,r}\leq D_{o}\). The communications between the UEs and UAVs/RISs are assumed to occur over different time slots (i.e., time multiplexing access) to avoid interference among the scheduled UEs. Therefore, in each time slot, we assume that distinct UEs \(u\) and \(u^{\prime}\) are allowed to transmit simultaneously if \(\mathcal{R}_{u}\cap\mathcal{R}_{u^{\prime}}=\emptyset\) to reduce interference.
We focus on the network connectivity from data link-layer viewpoint, thus we abstract the physical layer factors and consider a model that relies only on the distance between the nodes. Therefore, similar to [22], we model only the large scale fading and ignore the small scale fading. To quantify the UEs transmission to the UAVs/RISs, we use the signal-to-noise ratio (SNR). For the \(u\)-th UE, SNR is defined as follows [22]
\[\gamma_{u,a}^{(U)}=\frac{d_{u,a}^{-\alpha}p}{N_{0}}, \tag{1}\]
where \(d_{u,a}\) is the distance between the \(u\)-th UE and the \(a\)-th UAV, \(p\) is the transmit power of the \(u\)-th UE, which is maintained fixed for all the UEs, \(N_{0}\) is the additive white Gaussian noise (AWGN) variance, and \(\alpha\) is the path loss exponent that depends on the transmission environment.
UAVs hover at high altitudes, hence we reasonably assume that they maintain LoS channel between each other [7]. The path loss between the \(a\)-th UAV and \(a^{\prime}\)-th UAV can be expressed as
\[\Gamma_{a,a^{\prime}}=20\log\left(\frac{4\pi f_{c}d_{a,a^{\prime}}}{c}\right), \tag{2}\]
where \(d_{a,a^{\prime}}\) is the distance between the \(a\)-th UAV and the \(a^{\prime}\)-th UAV, \(f_{c}\) is the carrier frequency, and \(c\) is the speed of light. Consequently, the SNR between the \(a\)-th and the \(a^{\prime}\)-th UAVs is
\[\gamma_{a,a^{\prime}}^{(A)}=10\log P-\Gamma_{a,a^{\prime}}-10\log N_{0},\ \ \ \ \ (dB) \tag{3}\]
where \(P\) is the transmit power of the \(a\)-th UAV, which is maintained fixed for all the UAVs. Note that the SNR of the \(u\)-th UE determines whether it has a successful connection to the corresponding UAV \(a\). In other words, the \(a\)-th UAV is assumed to be within the transmission range of the \(u\)-th UE if \(\gamma_{u,a}^{(U)}\geq\gamma_{0}^{\text{UE}}\), where \(\gamma_{0}^{\text{UE}}\) is the minimum SNR threshold for the communication links between the UEs and the UAVs. Similarly, we assume that UAV \(a\) and UAV \(a^{\prime}\) have a successful connection provided that \(\gamma_{a,a^{\prime}}^{(A)}\geq\gamma_{0}^{\text{UAV}}\), where \(\gamma_{0}^{\text{UAV}}\) is the minimum SNR threshold for the communication links between the UAVs.

Fig. 1: A typical RIS-assisted UAV network with \(2\) RISs, \(2\) UEs, and \(4\) UAVs.
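For concreteness, the following minimal sketch (illustrative Python, not the authors' code; all numerical values are placeholders) evaluates the SNR expressions in (1)-(3) and applies the corresponding thresholds to decide whether a direct UE-UAV or UAV-UAV edge exists.

```python
# Minimal sketch (placeholder numbers) of the SNR expressions (1)-(3) and the
# threshold tests that decide whether a direct UE-UAV or UAV-UAV edge exists.
import numpy as np

def snr_ue_uav(d_ua, p=1.0, N0=1e-13, alpha=4.0):
    """Linear-scale UE-UAV SNR as in (1): d^{-alpha} * p / N0."""
    return (d_ua ** -alpha) * p / N0

def snr_uav_uav_db(d_aa, P=5.0, N0=1e-13, fc=3e9, c=3e8):
    """UAV-UAV SNR in dB as in (3), with the free-space path loss (2)."""
    Gamma = 20 * np.log10(4 * np.pi * fc * d_aa / c)        # eq. (2), dB
    return 10 * np.log10(P) - Gamma - 10 * np.log10(N0)     # eq. (3), dB

def ue_uav_connected(d_ua, gamma0_ue_db=85.0):
    return 10 * np.log10(snr_ue_uav(d_ua)) >= gamma0_ue_db

def uav_uav_connected(d_aa, gamma0_uav_db=80.0):
    return snr_uav_uav_db(d_aa) >= gamma0_uav_db
```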
Since RISs are deployed in high altitude, the signal propagation of UE-RIS link is adopted to be a simple yet reasonably accurate LoS channel model [12]. The LoS channel vector between the \(u\)-th UE and the \(r\)-th RIS is given by [12]
\[\mathbf{h}_{u,r}^{\text{UR}}=\sqrt{\frac{\beta_{0}}{(d_{u,r}^{\text{UR}})^{2}} }\mathbf{\tilde{h}}_{u,r}^{\text{UR}}, \tag{4}\]
where \(d_{u,r}^{\text{UR}}\) is the distance between the \(u\)-th UE and the \(r\)-th RIS, \(\beta_{0}\) denotes the path loss at the reference distance \(d=1\) m, and \(\mathbf{\tilde{h}}_{u,r}^{\text{UR}}\) represents the array response component, which can be denoted by
\[\mathbf{\tilde{h}}_{u,r}^{\text{UR}} = [1,e^{-j\frac{2\pi d_{b}}{\lambda}\phi_{u,r}^{\text{UR}}\psi_{u,r}^{\text{UR}}},\ldots,e^{-j\frac{2\pi d_{b}}{\lambda}(M_{b}-1)\phi_{u,r}^{\text{UR}}\psi_{u,r}^{\text{UR}}}]^{T}\otimes[1,e^{-j\frac{2\pi d_{c}}{\lambda}\varphi_{u,r}^{\text{UR}}\psi_{u,r}^{\text{UR}}},\ldots,e^{-j\frac{2\pi d_{c}}{\lambda}(M_{c}-1)\varphi_{u,r}^{\text{UR}}\psi_{u,r}^{\text{UR}}}]^{T},\]
where \(\lambda\) is the wavelength, \(\otimes\) is the Kronecker product, and \(\phi_{u,r}^{\text{UR}},\varphi_{u,r}^{\text{UR}}\), and \(\psi_{u,r}^{\text{UR}}\) are related to the sine and cosine terms of the vertical and horizontal angles-of-arrival (AoAs) at the \(r\)-th RIS [12], and respectively given by \(\phi_{u,r}^{\text{UR}}=\frac{y_{u}-y_{r}}{\sqrt{(x_{u}-x_{r})^{2}+(y_{u}-y_{r})^{2}}}\), \(\varphi_{u,r}^{\text{UR}}=\frac{x_{u}-x_{r}}{\sqrt{(x_{u}-x_{r})^{2}+(y_{u}-y_{r})^{2}}}\), and \(\psi_{u,r}^{\text{UR}}=\frac{z_{r}}{d_{u,r}^{\text{UR}}}\). On the other hand, the RISs and UAVs are deployed at high altitudes, thus the reflected signal propagation of the RIS-UAV link typically occurs in clear airspace where the obstruction or reflection effects diminish. The LoS channel vector between the \(r\)-th RIS and the \(a\)-th UAV is given by
\[\mathbf{h}_{r,a}^{RA}=\sqrt{\frac{\beta_{0}}{(d_{r,a}^{RA})^{2}}}\mathbf{ \tilde{h}}_{r,a}^{RA}, \tag{5}\]
where \(d_{r,a}^{RA}\) is the distance between the \(r\)-th RIS and the \(a\)-th UAV, and \(\mathbf{\tilde{h}}_{r,a}^{\text{RA}}\) represents the array response component which can be denoted by
\[\mathbf{\tilde{h}}_{r,a}^{\text{RA}} = [1,e^{-j\frac{2\pi d_{b}}{\lambda}\phi_{r,a}^{\text{RA}}\psi_{r,a}^{\text{RA}}},\ldots,e^{-j\frac{2\pi d_{b}}{\lambda}(M_{b}-1)\phi_{r,a}^{\text{RA}}\psi_{r,a}^{\text{RA}}}]^{T}\otimes[1,e^{-j\frac{2\pi d_{c}}{\lambda}\varphi_{r,a}^{\text{RA}}\psi_{r,a}^{\text{RA}}},\ldots,e^{-j\frac{2\pi d_{c}}{\lambda}(M_{c}-1)\varphi_{r,a}^{\text{RA}}\psi_{r,a}^{\text{RA}}}]^{T},\]
where \(\phi_{r,a}^{\text{RA}},\varphi_{r,a}^{\text{RA}}\), and \(\psi_{r,a}^{\text{RA}}\) are related to the sine and cosine terms of the vertical and horizontal angles-of-departure (AoDs) from the \(r\)-th RIS to the \(a\)-th UAV [12], and respectively given by \(\phi_{r,a}^{\text{RA}}=\frac{y_{r}-y_{a}}{\sqrt{(x_{r}-x_{a})^{2}+(y_{r}-y_{a})^{2}}}\), \(\varphi_{r,a}^{\text{RA}}=\frac{x_{r}-x_{a}}{\sqrt{(x_{r}-x_{a})^{2}+(y_{r}-y_{a})^{2}}}\), and \(\psi_{r,a}^{\text{RA}}=\frac{z_{a}-z_{r}}{d_{r,a}^{\text{RA}}}\).
Based on the channel models described above, the concatenated channel for the UE-RIS-UAV link between the \(u\)-th UE and the \(a\)-th UAV through the \(r\)-th RIS is given by [12]
\[\mathbf{h}_{u,a}^{\text{UR}}=(\mathbf{h}_{r,a}^{\text{RA}})^{H}\mathbf{\Theta }_{r}\mathbf{h}_{u,r}^{\text{UR}}. \tag{6}\]
Accordingly, the SNR of the reflected link between the \(u\)-th UE and the \(a\)-th UAV through the \(r\)-th RIS can be written as [27]
\[\gamma_{u,a}^{(R,r)}=\frac{p\|\mathbf{h}_{u,a}^{\text{UR}}\|^{2}}{N_{0}}. \tag{7}\]
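The cascaded link in (6)-(7) can be illustrated with the short sketch below. It assumes the array responses in (4)-(5) have already been formed (here they are replaced by random unit-modulus placeholders scaled by a small path gain) and shows a simple co-phasing choice of the RIS phases; it is not the authors' implementation.

```python
# Sketch of the cascaded UE-RIS-UAV channel (6) and the reflected-link SNR (7).
import numpy as np

def reflected_snr(h_ur, h_ra, theta, p=1.0, N0=1e-13):
    """SNR of the UE -> RIS -> UAV link for RIS phase shifts theta (radians)."""
    Theta = np.diag(np.exp(1j * theta))           # RIS phase-shift matrix
    h_cascaded = h_ra.conj().T @ Theta @ h_ur     # concatenated channel, eq. (6)
    return p * np.abs(h_cascaded) ** 2 / N0       # eq. (7)

rng = np.random.default_rng(0)
M = 100                                           # number of PRUs (placeholder)
h_ur = 1e-4 * np.exp(1j * rng.uniform(0, 2 * np.pi, M))
h_ra = 1e-4 * np.exp(1j * rng.uniform(0, 2 * np.pi, M))
theta = np.angle(h_ra) - np.angle(h_ur)           # co-phase all M reflections
print(reflected_snr(h_ur, h_ra, theta))
```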
In this paper, we model the considered RIS-assisted UAV network as an undirected graph \(\mathcal{G}(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}=\{v_{1},v_{2},\cdots,v_{V}\}\) is the set of nodes (i.e., UAVs and UEs) in the network, and \(\mathcal{E}=\{e_{1},e_{2},\cdots,e_{E}\}\) is the set of all edges, where \(V\) and \(E\) are the cardinalities of the sets \(\mathcal{V}\) and \(\mathcal{E}\), respectively, i.e., \(V=|\mathcal{U}\cup\mathcal{A}|=|\mathcal{V}|\) and \(E=|\mathcal{E}|\). An edge between two nodes is created based on the corresponding SNR threshold mentioned above. The key notations are summarized in Table I.
### _Network Connectivity_
This subsection briefly discusses the definition of the Laplacian matrix \(\mathbf{L}\) representing a graph \(\mathcal{G}(\mathcal{V},\mathcal{E})\), its second smallest eigenvalue \(\lambda_{2}(\mathbf{L})\), and the relationship between \(\lambda_{2}(\mathbf{L})\) and the connectivity of the associated graph \(\mathcal{G}(\mathcal{V},\mathcal{E})\).
For an edge \(e_{k}\), \(1\leq k\leq E\), that connects two nodes \(\{v_{n},v_{m}\}\in\mathcal{V}\), let \(\mathbf{a}_{k}\) be a vector, where the \(n\)-th and \(m\)-th elements in \(\mathbf{a}_{k}\) are given by \(a_{k,n}=1\) and \(a_{k,m}=-1\), respectively, and zero otherwise. The incidence matrix \(\mathbf{A}\) of a graph \(\mathcal{G}\) is the matrix with the \(k\)-th column given by \(\mathbf{a}_{k}\). Furthermore, the weight of an edge \(e_{k}\) that connects two nodes \(\{v_{n},v_{m}\}\in\mathcal{V}\), denoted by \(w_{n,m}\) or \(w_{k}\), is a function of the criticality of the nodes, as will be discussed in Section II-C. The weight vector \(\mathbf{w}\in\left[\mathbb{R}^{+}\right]^{E}\) is defined as \(\mathbf{w}=\left[w_{1},w_{2},\ldots,w_{E}\right]\). Hence, in an undirected graph \(\mathcal{G}(\mathcal{V},\mathcal{E})\), the Laplacian matrix \(\mathbf{L}\) is a \(V\) by \(V\) matrix, which is defined as follows [16]:
\[\mathbf{L}=\mathbf{A}\ diag(\mathbf{w})\ \mathbf{A}^{T}=\sum_{k=1}^{E}w_{k} \mathbf{a}_{k}\mathbf{a}_{k}^{T}, \tag{8}\]
where the entries of \(\mathbf{L}\) are given element-wise by
\[L(n,m)=\begin{cases}deg(v_{n})&\text{if }v_{n}=v_{m},\\ -w_{n,m}&\text{if }(v_{n},v_{m})\in\mathcal{E},\\ 0&\text{otherwise},\end{cases} \tag{9}\]
where \(n,m\in\{1,2,\ldots,V\}\) are the indices of the nodes, and \(deg(v_{n})\) is the degree of node \(v_{n}\), which represents the number of all of its neighboring nodes.
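The construction in (8) and the resulting algebraic connectivity can be sketched in a few lines of Python (illustrative only; the 4-node path graph is a placeholder, not one of the paper's topologies):

```python
# Sketch of the weighted Laplacian (8) and the algebraic connectivity / Fiedler vector.
import numpy as np

def laplacian(num_nodes, edges, weights):
    """edges: list of (n, m) node pairs; weights: matching list of w_k."""
    A = np.zeros((num_nodes, len(edges)))         # incidence matrix
    for k, (n, m) in enumerate(edges):
        A[n, k], A[m, k] = 1.0, -1.0
    return A @ np.diag(weights) @ A.T             # eq. (8)

def algebraic_connectivity(L):
    """Return lambda_2 and the corresponding (Fiedler) eigenvector."""
    vals, vecs = np.linalg.eigh(L)
    return vals[1], vecs[:, 1]

L = laplacian(4, [(0, 1), (1, 2), (2, 3)], [1.0, 1.0, 1.0])
lam2, fiedler = algebraic_connectivity(L)
print(lam2)    # ~0.586 for the unit-weight path graph on 4 nodes
```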
As mentioned above, in network connectivity, algebraic connectivity measures how well a graph \(\mathcal{G}\) with the associated Laplacian matrix \(\mathbf{L}\) is connected [18]. This metric is usually denoted by \(\lambda_{2}(\mathbf{L})\). The main motivation for using \(\lambda_{2}(\mathbf{L})\) as a network connectivity metric comes from the following two main reasons [18]. First, \(\lambda_{2}(\mathbf{L})>0\) if and only if \(\mathcal{G}\) is connected; second, \(\lambda_{2}(\mathbf{L})\) does not decrease when links are added, and the larger it is, the harder it is to disconnect the graph by removing nodes or links, which reflects network resiliency against node failures. For simplicity, the remainder of the paper uses node \(n\) instead of node \(v_{n}\).
### _Nodes Criticality_
Let \(\mathcal{G}_{-n}\) be the remaining graph after removing UAV node \(n\) and all its adjacent edges to other nodes. Notice that the most critical nodes in \(\mathcal{G}\) are those representing the UAVs, since UAVs have many connections to UEs and other UAVs, i.e., they represent the backhaul core of the network. We propose to quantify the connectivity of the remaining graph by its Fiedler value \(\lambda_{2}(\mathcal{G}_{-n})\), i.e., the algebraic connectivity of the graph resulting from removing a typical node \(n\) and all its connected edges in \(\mathcal{G}\). We define node criticality as follows.
**Definition 1**.: _The criticality of node \(n\), which reflects how severely the network connectivity is affected after removing that node and its edges to other nodes, is defined as_
\[\mathcal{C}_{n}=\frac{1}{\lambda_{2}(\mathcal{G}_{-n})}. \tag{10}\]
**Remark:** Since the Laplacian matrix \(\mathbf{L}\) is positive semi-definite (expressed as \(0\preceq\mathbf{L}\)), we have \(\mathcal{C}_{n}\geq 0\).
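As an illustration of Definition 1, the sketch below (illustrative Python, not the authors' code) computes \(\mathcal{C}_{n}\) by removing node \(n\), rebuilding the Laplacian of the remaining graph, and inverting its second smallest eigenvalue; the small \(\epsilon\) guard for articulation nodes is the one introduced later in Section IV-A, and the complete-graph example matches the tightness discussion in Theorem 1 below.

```python
# Minimal sketch of Definition 1: C_n = 1 / lambda_2 of the graph with node n removed.
import numpy as np

def criticality(L, n, eps=1e-5):
    keep = [i for i in range(L.shape[0]) if i != n]
    Lm = L[np.ix_(keep, keep)].astype(float)
    # Recompute the diagonal so Lm is the Laplacian of the remaining graph.
    np.fill_diagonal(Lm, 0.0)
    np.fill_diagonal(Lm, -Lm.sum(axis=1))
    lam2 = np.sort(np.linalg.eigvalsh(Lm))[1]
    return 1.0 / max(lam2, eps)        # eps guards articulation nodes (Sec. IV-A)

# Complete graph K_5 with unit weights: C_n = 1/(V-1) = 0.25, as in Theorem 1.
V = 5
L = V * np.eye(V) - np.ones((V, V))
print(criticality(L, 0))
```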
**Theorem 1**.: \(\frac{1}{\lambda_{2}(\mathcal{G})-1}\) _is a tight upper bound on \(\mathcal{C}_{n}\)._
Proof.: Consider the graph \(\mathcal{G}(\mathcal{V},\mathcal{E})\) with the set of vertices, \(\mathcal{V}\), and the set of edges, \(\mathcal{E}\). Let us define a new graph \(\mathcal{G}_{com}(\mathcal{V},\mathcal{E}_{com})\) by extending \(\mathcal{G}(\mathcal{V},\mathcal{E})\) with adding all missing edges from node \(n\), such that, \(\mathcal{E}\subseteq\mathcal{E}_{com}\), and node \(n\) is connected to all other nodes in the graph. Then, \(\mathbf{L}(\mathcal{G}_{com})\) can be written as
\[\mathbf{L}(\mathcal{G}_{com})=\begin{pmatrix}\mathbf{L}(\mathcal{G}_{-n})+ \mathbf{I},&-\mathbf{1}\\ -\mathbf{1}^{T},&V-1\end{pmatrix}. \tag{11}\]
Let \(\mathbf{v}\) be an eigenvector of \(\mathbf{L}(\mathcal{G}_{-n})\) that is corresponding to \(\lambda_{2}(\mathbf{L}(\mathcal{G}_{-n}))\). From (11), we show that \(\lambda_{2}(\mathcal{G}_{-n})+1\) is an eigenvalue of \(\mathbf{L}(\mathcal{G}_{com})\) as follows:
\[\mathbf{L}(\mathcal{G}_{com})\begin{pmatrix}\mathbf{v}\\ 0\end{pmatrix} =\begin{pmatrix}\mathbf{L}(\mathcal{G}_{-n})+\mathbf{I},&-\mathbf{1} \\ -\mathbf{1}^{T},&V-1\end{pmatrix}\begin{pmatrix}\mathbf{v}\\ 0\end{pmatrix}\] \[=\begin{pmatrix}\mathbf{L}(\mathcal{G}_{-n})\mathbf{v}+\mathbf{I} \mathbf{v}\\ -\mathbf{1}^{T}\mathbf{v}\end{pmatrix}\overset{(a)}{=}\begin{pmatrix}\lambda_{2 }(\mathcal{G}_{-n})\mathbf{v}+\mathbf{v}\\ 0\end{pmatrix}\] \[=\begin{pmatrix}(\lambda_{2}(\mathcal{G}_{-n})+1)\mathbf{v}\\ 0\end{pmatrix}=[\lambda_{2}(\mathcal{G}_{-n})+1]\begin{pmatrix}\mathbf{v}\\ 0\end{pmatrix},\]
where (a) follows from the fact that \(-\mathbf{1}^{T}\mathbf{v}=0\) for a connected graph, and \(\mathbf{I}\mathbf{v}=\mathbf{v}\). Thus, \(\lambda_{2}(\mathcal{G}_{-n})+1\) is an eigenvalue of \(\mathbf{L}(\mathcal{G}_{com})\) that is different from the zero eigenvalue, i.e., \(\lambda_{2}(\mathcal{G}_{-n})+1\neq\lambda_{1}(\mathcal{G}_{com})\) and \(\lambda_{2}(\mathcal{G}_{com})\leq\lambda_{2}(\mathcal{G}_{-n})+1\). Therefore,
\[\lambda_{2}(\mathcal{G}_{-n})\geq\lambda_{2}(\mathcal{G}_{com})-1. \tag{12}\]
Finally, since \(\mathcal{E}\subseteq\mathcal{E}_{com}\), \(\lambda_{2}(\mathcal{G})\leq\lambda_{2}(\mathcal{G}_{com})\), (12) can be written as
\[\lambda_{2}(\mathcal{G}_{-n})\geq\lambda_{2}(\mathcal{G})-1. \tag{13}\]
This shows that removing node \(n\) from \(\mathcal{G}\) reduces the algebraic connectivity \(\lambda_{2}(\mathcal{G})\) by at most \(1\), irrespective of the number of edges that connect node \(n\) to the remaining nodes in \(\mathcal{G}\). From (13) and (10), we have
\[\mathcal{C}_{n}\leq\frac{1}{\lambda_{2}(\mathcal{G})-1}. \tag{14}\]
Inequality (14) reasonably implies that if \(\mathcal{G}\) is highly connected, removing a node from it would not affect the network connectivity much, since all nodes are well connected and thus have low criticality. Assuming that \(\mathcal{G}\) is a complete graph, \(\lambda_{2}(\mathcal{G})=V\), and accordingly, \(\lambda_{2}(\mathcal{G}_{-n})=\lambda_{2}(\mathcal{G})-1=V-1\). In this case, we have

\[\mathcal{C}_{n}=\frac{1}{\lambda_{2}(\mathcal{G}_{-n})}=\frac{1}{V-1},\]

which shows the tightness of the upper bound. We point out that \(\mathcal{C}_{n}\approx\frac{1}{V}\) for large complete graphs.
Definition 1 implies that the criticality of nodes whose removal causes a severe reduction in the remaining algebraic connectivity should be balanced, so that no single node can severely impact the network resiliency if it accidentally or intentionally fails. Intuitively, to make the network more connected, the RISs should be utilized properly, since they are less prone to failure than the UAVs, whose batteries are limited. In addition, RISs add more links to the network, which maximizes the network connectivity and reduces the criticalities of the nodes. Consequently, the signals of the UEs can be redirected via the RISs to less critical UAV nodes, which results in a more balanced network. One can achieve this balance by assigning weights to the edges connecting UE nodes to UAV nodes, with the selection of the UE-RIS-UAV combinations relying on these weights. Thus, we propose to design the weight of link \(e_{k}\) that connects nodes \(n\) and \(m\) as follows
\[w_{n,m}=\frac{1}{\mathcal{C}_{n}+\mathcal{C}_{m}}, \tag{15}\]
where \(\mathcal{C}_{n}\) and \(\mathcal{C}_{m}\) are the criticalities of nodes \(n\) and \(m\), respectively. The weight \(w_{n,m}\) is high if the criticalities of nodes \(n\) and \(m\) are low, so a link between them via an available RIS is more likely to be created. Highly critical UAV nodes are less likely to receive new links from the UEs via the RISs, since their failure may significantly degrade the network connectivity. Therefore, by carefully adding new links to less critical UAV nodes, one can balance the criticality of all the UAV nodes in the network.
## III Problem Formulation
The problem of maximizing the connectivity of RIS-assisted UAV networks can be stated as follows. Given a RIS-assisted UAV network represented by a graph \(\mathcal{G}(\mathcal{V},\mathcal{E})\), what are the optimal UE-UAV combinations, obtained by optimizing the phase shifts of the RISs, that maximize \(\lambda_{2}(\mathbf{L})\) of the resulting network while balancing the criticality of the UAV nodes? Essentially, deploying the RISs in the network may connect multiple UEs to multiple UAVs that were not connected before. It may also provide new alternative options to the UEs if their scheduled, highly critical UAV nodes have failed. By adjusting their phase shifts, RISs can smartly beamform the signals of the UEs to the desired, less critical UAVs to maximize the network connectivity and make the network more _resilient_ against node failures.
We consider that multiple UEs are allowed to transmit simultaneously if their mutual transmission coverage to the RISs is empty. Therefore, we have the following UE-RIS-UAV association constraints:
* Multiple UEs are allowed to transmit as long as they do not have common RISs in their coverage transmissions.
* Each RIS is connected to only one UE.
* Each RIS reflects the signal of the selected UE to only one UAV.
* Each UAV is connected to one RIS only.
Let \(\mathcal{A}_{0}^{u,r}\) be a set of reachable UAVs that have indirect communication links from the \(u\)-th UE through the \(r\)-th RIS, i.e., \(\mathcal{A}_{0}^{u,r}=\{a\in\mathcal{A}\backslash\mathcal{A}^{u}\mid\gamma_{u,a}^{(R,r)}\geq\gamma_{0}^{\text{RIS}}\}\), where \(\mathcal{A}^{u}\) is defined above as the set of UAVs that have direct links to the \(u\)-th UE. We aim at providing alternative links to connect the UEs to the suitable UAVs in the set \(\mathcal{A}_{0}^{u,r}\), \(\forall u\in\mathcal{U},r\in\mathcal{R}\). As such, the UEs do not miss the communications if their scheduled UAV has failed. Let \(X_{u,r}\) be a binary variable that is equal to \(1\) if the \(u\)-th UE is connected to the \(r\)-th RIS, and zero otherwise. Now, let \(Y_{r,a}^{u}\) be a binary variable that is equal to \(1\) if the \(r\)-th RIS is connected to the \(a\)-th UAV when the \(u\)-th UE is selected to transmit, and zero otherwise. Let \(\mathcal{U}_{t}\) be the set of the possible transmitting UEs, where \(\mathcal{U}_{t}\subset\mathcal{U}\). This set \(\mathcal{U}_{t}\) consists of multiple UEs that have empty mutual transmission coverage to the RISs, which can be defined as \(\mathcal{U}_{t}=\{u\in\mathcal{U}\mid(u,u^{\prime})\in\mathcal{U}^{2},\mathcal{R}_{u}\cap\mathcal{R}_{u^{\prime}}=\emptyset\}\). Therefore, the considered optimization problem of maximizing the network connectivity is formulated as follows:
\[\max_{X_{u,r},Y_{r,a}^{u},\theta_{m}^{r}}\lambda_{2}(\mathbf{L}^ {\prime}) \tag{16a}\] \[\mathrm{subject\ to}\sum_{u\in\mathcal{U}_{t}}X_{u,r}=1, \forall r\in\mathcal{R},\] (16b) \[\sum_{u\in\mathcal{U}_{t}}\sum_{a\in\mathcal{A}_{0}^{u,r}}Y_{r,a }^{u}\leq 1, \forall r\in\mathcal{R},\] (16c) \[\theta_{m}^{r}\in[0,2\pi),\qquad\forall r\in\mathcal{R},m=\{1, \ldots,M\},\] (16d) \[X_{u,r}\in\{0,1\},Y_{r,a}^{u}\in\{0,1\},\quad\forall r\in \mathcal{R},a\in\mathcal{A}. \tag{16e}\]
In (16), constraint (16b) implies that each RIS receives a signal from a single UE in the set \(\mathcal{U}_{t}\). Constraint (16c) assures that each RIS reflects the signal of its associated UE to only one UAV. This also means that at most \(R\) paths are created from the selected UEs to the suitable UAVs through the RISs. Constraint (16d) is for the RIS phase shift optimization.
Since the above optimization problem (16) is NP-hard, we first propose heuristic solutions to find feasible UE-RIS-UAV associations. Afterwards, we optimize the phase shift of the RISs elements to smartly direct the signals of the UEs to the suitable UAV nodes.
## IV Proposed Solutions
In this section, we reformulate the problem (16), and then develop two heuristic solutions to maximize \(\lambda_{2}(\mathbf{L}^{\prime})\). In Section IV-A, we relax the problem in (17) to a convex optimization problem in order to be formulated as an SDP problem. In Section IV-B, we propose a novel perturbation heuristic that selects the maximum possible increase in \(\lambda_{2}(\mathbf{L}^{\prime})\) based on the weighted values of the differences between the values of the Fiedler vector.
We add a link connecting the \(u\)-th UE to the \(a\)-th UAV through the \(r\)-th RIS if both \(X_{u,r}\) and \(Y_{r,a}^{u}\) in (16) are \(1\). Let \(\mathbf{z}\) be a vector representing the UE-RIS-UAV candidate associations, in which case \(X_{u,r}=1\) and \(Y_{r,a}^{u}=1\), \(\forall u\in\mathcal{U},a\in\mathcal{A},r\in\mathcal{R}\). Therefore, the problem in (16) can be seen as having a set of \(|\mathbf{z}|\) UE-RIS-UAV candidate associations, and we want to select the optimum \(R\) UE-RIS-UAV associations
among these \(|\mathbf{z}|\) associations. This optimization problem can be formulated as
\[\max_{\mathbf{z}} \lambda_{2}(\mathbf{L}^{\prime}(\mathbf{z})) \tag{17}\] \[\mathrm{subject\ to} \mathbf{1}^{T}\mathbf{z}=R,\] \[\mathbf{z}\in\{0,1\}^{|\mathbf{z}|},\]
where \(\mathbf{1}\in\mathbf{R}^{|\mathbf{z}|}\) is the all-ones vector and
\[\mathbf{L}^{\prime}(\mathbf{z})=\mathbf{L}+\sum_{l=1}^{|\mathbf{z}|}z_{l}w_{l }\mathbf{a}_{l}\mathbf{a}_{l}^{T}, \tag{18}\]
where \(\mathbf{a}_{l}\) is the incidence vector resulting from adding link \(l\) to the original graph \(\mathcal{G}\), \(\mathbf{1}^{T}\mathbf{z}=R\) indicates that the number of chosen RISs is \(R\), \(\mathbf{L}\) is the Laplacian matrix of the original graph \(\mathcal{G}\), and \(w_{l}\) is the weight of a constructed edge that connects a RIS with a UE and a UAV as given in (15). The \(l\)-th element of \(\mathbf{z}\), denoted by \(z_{l}\), is either \(1\) or \(0\), which corresponds to whether a RIS should be chosen or not, respectively. Clearly, the dimension of \(\mathbf{L}\) and \(\mathbf{L}^{\prime}(\mathbf{z})\) is \(V\times V\). We notice that the effect of adding RISs appears only in the edge set \(\mathcal{E}\), and not in the node set \(\mathcal{V}\). For ease of illustration, (18) can be written as
\[\mathbf{L}^{\prime}=\mathbf{L}+(\mathbf{z}\otimes\mathbf{I}_{|\mathbf{z}|})\Lambda, \tag{19}\]
where \(\Lambda=\left[(\mathbf{A}_{1}diag(\mathbf{w}_{1})\mathbf{A}_{1}^{T})^{T}, \ldots,(\mathbf{A}_{|\mathbf{z}|}diag(\mathbf{w}_{|\mathbf{z}|})\mathbf{A}_{| \mathbf{z}|}^{T})^{T}\right]^{T}\) and the block \((\mathbf{A}_{1}diag(\mathbf{w}_{1})\mathbf{A}_{1}^{T})^{T}\) is a \(V\times V\) matrix.
The problem in (17) is combinatorial, and can be solved exactly by exhaustive search by computing \(\lambda_{2}(\mathbf{L}^{\prime})\) for \(\binom{|\mathbf{z}|}{R}\) Laplacian matrices. However, this is not practical for large graphs that have large \(|\mathbf{z}|\) and \(R\). Instead, we are interested in proposing efficient heuristics for solving the problem in (17).
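For very small instances, the exhaustive search described above can be sketched directly (illustrative Python only; the per-RIS association constraints of (16) are ignored for brevity, and candidate links and weights are assumed given):

```python
# Brute-force baseline for (17): enumerate all ways to pick R of the |z|
# candidate UE-RIS-UAV links and keep the selection with the largest lambda_2.
import itertools
import numpy as np

def lambda2(L):
    return np.sort(np.linalg.eigvalsh(L))[1]

def exhaustive_search(L, cand_edges, cand_weights, R):
    """cand_edges: list of (u, a) node pairs; cand_weights: the matching w_l."""
    V = L.shape[0]
    best_val, best_sel = -np.inf, None
    for sel in itertools.combinations(range(len(cand_edges)), R):
        Lp = np.array(L, dtype=float)
        for l in sel:
            u, a = cand_edges[l]
            a_l = np.zeros(V)
            a_l[u], a_l[a] = 1.0, -1.0
            Lp += cand_weights[l] * np.outer(a_l, a_l)    # eq. (18)
        val = lambda2(Lp)
        if val > best_val:
            best_val, best_sel = val, sel
    return best_val, best_sel
```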
### _Convex Relaxation_
The proposed SDP solution for multiple RISs in this subsection is mainly related to the preliminary work [1], which considered the simple case of a single RIS without utilizing the criticality of the nodes.
The optimization vector in (17) is the vector \(\mathbf{z}\). The \(l\)-th element of \(\mathbf{z}\), denoted by \(z_{l}\), is either \(1\) or \(0\), which corresponds to whether this UE-RIS-UAV association should be chosen or not, respectively. Since (17) is NP-hard problem with high complexity, we relax the constraint on the entries of \(\mathbf{z}\) and allow them to take any value in the interval \([0,1]\). Specifically, we relax the Boolean constraint \(\mathbf{z}\in\{0,1\}^{|\mathbf{z}|}\) to be a linear constraint \(\mathbf{z}\in[0,1]^{|\mathbf{z}|}\), then we can represent the problem (17) as
\[\max_{\mathbf{z}} \lambda_{2}(\mathbf{L}^{\prime}(\mathbf{z})) \tag{20}\] \[\mathrm{subject\ to} \mathbf{1}^{T}\mathbf{z}=R,\] \[0\leq\mathbf{z}\leq 1.\]
In [17], it was shown that \(\lambda_{2}(\mathbf{L}^{\prime}(\mathbf{z}))\) in (20) is the point-wise infimum of a family of linear functions of \(\mathbf{z}\), which is a concave function in \(\mathbf{z}\). In addition, the relaxed constraints are linear in \(\mathbf{z}\). Therefore, the optimization problem in (20) is a convex optimization problem [17], and is equivalent to the following SDP optimization problem [28]
\[\max_{\mathbf{z},q}q \tag{21}\] \[\mathrm{subject\ to\ }q(\mathbf{I}-\frac{1}{V}\mathbf{1}\mathbf{1}^{T})\preceq\mathbf{L}^{\prime}(\mathbf{z}),\mathbf{1}^{T}\mathbf{z}=R,0\leq\mathbf{z}\leq 1,\]
where \(\mathbf{I}\in\mathbf{R}^{V\times V}\) is the identity matrix and \(\mathbf{F}\preceq\mathbf{L}\) denotes that \(\mathbf{L}-\mathbf{F}\) is a positive semi-definite matrix.
The solution to the SDP optimization problem in (21) is explained as follows. First, we calculate the corresponding phase shifts of the RISs from each feasible UE node to each feasible UAV node, such that we generate all the possible schedules \(\mathbf{z}\). In particular, the corresponding phase shift at PRU of the \(r\)-th RIS to reflect the signal of the \(u\)-th UE to the \(a\)-th UAV is calculated as follows [12]
\[\theta_{m}^{r} =\pi\frac{f_{c}}{c}\{d_{b}(m_{b}-1)\psi_{r,a}^{\text{RA}}\phi_{r,a}^{\text{RA}}+d_{c}(m_{c}-1)\psi_{r,a}^{\text{RA}}\varphi_{r,a}^{\text{RA}}\] \[\quad+d_{b}(m_{b}-1)\psi_{u,r}^{\text{UR}}\phi_{u,r}^{\text{UR}}+d_{c}(m_{c}-1)\psi_{u,r}^{\text{UR}}\varphi_{u,r}^{\text{UR}}\}. \tag{22}\]
Second, we use the off-the-shelf CVX software solver [29] to solve the SDP optimization problem in (21) and obtain \(\mathbf{z}\). The entries of the output vector \(\mathbf{z}\) resulting from the CVX solver are continuous values between \(0\) and \(1\); accordingly, we round the largest \(R\) entries to \(1\) and the remaining entries to zero.
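A compact sketch of this procedure is given below using cvxpy as a stand-in for CVX (illustrative only: candidate links and weights are assumed given, the per-RIS association constraints are omitted, and an SDP-capable solver such as SCS is assumed to be installed).

```python
# Sketch of the relaxed SDP (21) followed by the rounding step.
import cvxpy as cp
import numpy as np

def sdp_relaxation(L, cand_edges, cand_weights, R):
    """cand_edges: list of (u, a) node pairs; cand_weights: the matching w_l."""
    V = L.shape[0]
    z = cp.Variable(len(cand_edges))
    q = cp.Variable()
    Lp = cp.Constant(L)
    for l, (u, a) in enumerate(cand_edges):
        a_l = np.zeros(V)
        a_l[u], a_l[a] = 1.0, -1.0
        Lp = Lp + z[l] * cand_weights[l] * np.outer(a_l, a_l)   # eq. (18)
    M = np.eye(V) - np.ones((V, V)) / V        # projector onto 1-perp, as in (21)
    constraints = [Lp - q * M >> 0, cp.sum(z) == R, z >= 0, z <= 1]
    cp.Problem(cp.Maximize(q), constraints).solve()
    z_round = np.zeros(len(cand_edges))
    z_round[np.argsort(z.value)[-R:]] = 1.0    # keep the R largest entries
    return z_round, q.value
```

The rounded vector plays the role of the Boolean selection in (17); in the full scheme, the per-RIS association constraints and the phase shifts in (22) would be applied on top of this selection.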
Note that if \(n\) is an articulation node (i.e., the removal of that node disconnects the network [16]), then \(\lambda_{2}(\mathcal{G}_{-n})\) is theoretically equal to zero. This might cause a numerical problem when we calculate the weights. To avoid this problem, we introduce a small threshold, \(\epsilon\). If \(\lambda_{2}(\mathcal{G}_{-n})\leq\epsilon\), we set \(\lambda_{2}(\mathcal{G}_{-n})=\epsilon\). The steps of calculating the edge weights are summarized in Algorithm 1.
```
1:Input: Construct \(\mathcal{G}(\mathcal{V},\mathcal{E})\) and set \(\epsilon=10^{-5}\)
2:for each edge \(l\) that connects two nodes \(u,a\) do
3: Form the Laplacian matrix \(\mathbf{L}(\mathcal{G}_{-u})\) as (8) and find the corresponding criticality value \(\mathcal{C}_{u}\):
4:if\(\lambda_{2}(\mathcal{G}_{-u})>\epsilon\)then\(\mathcal{C}_{u}=1/\lambda_{2}(\mathcal{G}_{-u})\)
5:else\(\mathcal{C}_{u}=1/\epsilon\)
6:endif
7: Form the Laplacian matrix \(\mathbf{L}(\mathcal{G}_{-a})\) as (8) and find the corresponding criticality value \(\mathcal{C}_{a}\):
8:if\(\lambda_{2}(\mathcal{G}_{-a})>\epsilon\)then\(\mathcal{C}_{a}=1/\lambda_{2}(\mathcal{G}_{-a})\)
9:else\(\mathcal{C}_{a}=1/\epsilon\)
10:endif
11: Calculate the weight of edge \(l\) as (15).
12:endfor
```
**Algorithm 1** Weight Calculation Based On Nodes Criticality
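A Python rendering of Algorithm 1 might look as follows (a sketch under the same \(\epsilon\) guard; the graph is assumed given as a Laplacian together with a list of candidate edges, and the code is illustrative rather than the authors' implementation):

```python
# Sketch of Algorithm 1: edge weights from node criticality with the eps guard.
import numpy as np

def lambda2_without(L, n):
    """lambda_2 of the graph with node n and its incident edges removed."""
    keep = [i for i in range(L.shape[0]) if i != n]
    Lm = L[np.ix_(keep, keep)].astype(float)
    np.fill_diagonal(Lm, 0.0)
    np.fill_diagonal(Lm, -Lm.sum(axis=1))          # restore the Laplacian diagonal
    return np.sort(np.linalg.eigvalsh(Lm))[1]

def edge_weights(L, edges, eps=1e-5):
    crit = {}
    weights = []
    for (u, a) in edges:
        for n in (u, a):
            if n not in crit:                      # lines 3-10 of Algorithm 1
                lam = lambda2_without(L, n)
                crit[n] = 1.0 / lam if lam > eps else 1.0 / eps
        weights.append(1.0 / (crit[u] + crit[a]))  # line 11, i.e., eq. (15)
    return weights
```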
### _A Greedy Perturbation Heuristic_
The SDP optimization has high complexity when \(|\mathbf{z}|\) and \(R\) are large, which is the case for large networks. Instead, we propose an effective greedy heuristic for solving (17) based on the values of the Fiedler vector, denoted by \(\mathbf{v}\). Unlike the exhaustive search, which calculates \(\lambda_{2}(\mathbf{L}^{\prime})\) for each possible UE-RIS-UAV association, the proposed perturbation heuristic adds the \(R\) edges one at a time by calculating only the weighted squared differences between the entries of the Fiedler vector. In the following proposition, we derive an upper bound on \(\lambda_{2}(\mathbf{L}^{\prime})\); the proposed perturbation heuristic is described next.
**Proposition 1**.: \(\lambda_{2}(\mathbf{L}^{\prime})\) _is upper bounded by \(\lambda_{2}(\mathbf{L})+w_{l}(v_{u}-v_{a})^{2}\), where \(v_{u}\) and \(v_{a}\) are the corresponding values of the
\(u\)-th and \(a\)-th indices of the Fiedler vector \(\mathbf{v}\) of \(\lambda_{2}(\mathbf{L})\) and \(w_{l}\) is the weight of edge \(l\) as given in (15)._
Proof.: For simplicity, we use \(V_{ua}=(v_{u}-v_{a})^{2}\). If \(\mathbf{v}\) is an eigenvector with unit norm corresponding to \(\lambda_{2}(\mathbf{L})\), then \(\mathbf{v}\mathbf{v}^{T}\) is a supergradient of \(\lambda_{2}(\mathbf{L})\)[30]. This means for any symmetric matrix \(Y\) with size \(V\times V\), we have
\[\lambda_{2}(\mathbf{L}+Y)\leq\lambda_{2}(\mathbf{L})+\mathbf{Tr}(Y\mathbf{v} \mathbf{v}^{T}). \tag{23}\]
For a connected graph in which \(\lambda_{2}\) is isolated, i.e., \(\lambda_{1}<\lambda_{2}<\lambda_{3}\), \(\lambda_{2}(\mathbf{L}^{\prime})\) is an analytic function of \(\mathbf{L}^{\prime}\), and therefore of \(\mathbf{z}\). In this case the supergradient is the gradient [30], i.e.,
\[\frac{\partial\lambda_{2}(\mathbf{L}^{\prime})}{\partial z_{l}}=\mathbf{v}^{ T}\frac{\partial\mathbf{L}^{\prime}(\mathbf{z})}{\partial z_{l}}\mathbf{v}, \tag{24}\]
where \(\mathbf{v}\) is the unique normalized eigenvector corresponding to \(\lambda_{2}(\mathbf{L}^{\prime})\). By taking the partial derivative of
\[\mathbf{L}^{\prime}(\mathbf{z})=\mathbf{L}+\sum_{l=1}^{|\mathbf{z}|}z_{l}w_{l}\mathbf{a}_{l}\mathbf{a}_{l}^{T}, \tag{25}\]
we have
\[\frac{\partial\mathbf{L}^{\prime}(\mathbf{z})}{\partial z_{l}}=w_{l}\mathbf{a }_{l}\mathbf{a}_{l}^{T}. \tag{26}\]
By substituting (26) in (24), we have
\[\frac{\partial\lambda_{2}(\mathbf{L}^{\prime})}{\partial z_{l}}=w_{l}\mathbf{ v}^{T}\mathbf{a}_{l}\mathbf{a}_{l}^{T}\mathbf{v}=w_{l}V_{ua}. \tag{27}\]
Therefore, the partial derivative of \(\lambda_{2}(\mathbf{L}^{\prime})\) with respect to \(z_{l}\) is \(w_{l}V_{ua}\), where \(l\) is the added edge between UE node \(u\) and UAV node \(a\). When \(\lambda_{2}\) is isolated, \(w_{l}V_{ua}\) gives the first-order approximation of the increase in \(\lambda_{2}(\mathbf{L}^{\prime})\) if edge \(l\) is added to the graph. Therefore, the second step of the greedy heuristic described below corresponds to adding the edge, from among the remaining edge candidates, that gives the largest possible increase in \(\lambda_{2}(\mathbf{L}^{\prime})\) according to a first-order approximation. Moreover, since \(\mathbf{v}\mathbf{v}^{T}\) is a supergradient of \(\lambda_{2}(\mathbf{L})\), based on (23), \(\lambda_{2}(\mathbf{L}^{\prime})\) can be bounded as follows

\[\lambda_{2}(\mathbf{L}^{\prime})=\lambda_{2}(\mathbf{L}+w_{l}\mathbf{a}_{l}\mathbf{a}_{l}^{T})\leq\lambda_{2}(\mathbf{L})+\mathbf{Tr}(w_{l}\mathbf{a}_{l}\mathbf{a}_{l}^{T}\mathbf{v}\mathbf{v}^{T})=\lambda_{2}(\mathbf{L})+w_{l}\mathbf{Tr}(\mathbf{a}_{l}\mathbf{a}_{l}^{T}\mathbf{v}\mathbf{v}^{T})=\lambda_{2}(\mathbf{L})+w_{l}V_{ua}. \tag{28}\]
(28) completes the proof.
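A quick numerical sanity check of the bound in (28) on a random weighted graph is given below (illustrative code; the graph is a placeholder and not part of the paper's evaluation).

```python
# Check that lambda_2(L + w_l a_l a_l^T) <= lambda_2(L) + w_l (v_u - v_a)^2.
import numpy as np

rng = np.random.default_rng(1)
V = 6
W = np.triu(rng.uniform(0.5, 1.5, (V, V)), 1)
W = W + W.T                                        # random symmetric weights
L = np.diag(W.sum(axis=1)) - W                     # weighted Laplacian
lam, vec = np.linalg.eigh(L)
lam2, v = lam[1], vec[:, 1]                        # Fiedler value and vector

u, a, w_l = 0, 3, 0.7                              # candidate edge and weight
a_l = np.zeros(V)
a_l[u], a_l[a] = 1.0, -1.0
lam2_new = np.linalg.eigvalsh(L + w_l * np.outer(a_l, a_l))[1]
print(lam2_new, lam2 + w_l * (v[u] - v[a]) ** 2)   # left <= right, as in (28)
```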
**Greedy Heuristic:** Given Proposition 1, in each step of the proposed heuristic, we choose an edge \(l\) that connects UE \(u\) and UAV \(a\), which has the largest value of \(w_{l}V_{ua}\) that provides the maximum possible increase in \(\lambda_{2}(\mathbf{L}^{\prime})\). Starting from \(\mathcal{G}\) and \(\mathbf{L}\), we add new edges one at a time as follows:
* Calculate \(\mathbf{v}\), a unit eigenvector corresponding to \(\lambda_{2}(\mathbf{L})\), where \(\mathbf{L}\) is the current Laplacian matrix.
* From the remaining candidate edges corresponding to the UE-RIS-UAV schedules, add an edge \(l\) connecting UE \(u\) and UAV \(a\) with the largest \(w_{l}V_{ua}\).
* Remove all the UE-RIS-UAV candidate links of the already selected UE, RIS, UAV.
We stop the greedy heuristic when there is no feasible link to add. Since the number of UEs/UAVs is larger than the number of RISs, this heuristic stops when there are no more available RISs that have not been selected. The steps of the greedy algorithm are given in Algorithm 2.
```
1:Input: UEs, UAVs, RISs, and network topology.
2:Initially set \(\mathbf{L}^{\prime}\leftarrow\mathbf{L}\)
3:for\(i=1,2,\dots,R\)do
4: Calculate \(\mathbf{v}\) of the associated \(\mathbf{L}^{\prime}\).
5: From the remaining candidate edges corresponding to the UE-RIS-UAV schedules, add an edge \(l\) connecting UE \(u\) and UAV \(a\) with largest \(w_{l}V_{ua}\), where \(w_{l}\) is calculated as in Algorithm 1.
6: Calculate the corresponding phase shift at PRU of the selected \(r\)-th RIS that reflects the signal of the \(u\)-th UE to the \(a\)-th UAV as in (22).
7: Based on the selected edge \(l\), update \(\mathbf{L}^{\prime}\).
8: Remove all the UE-RIS-UAV candidate links of the already selected UE, RIS, UAV.
9:endfor
10:Output:\(\lambda_{2}(\mathbf{L}^{\prime})\).
```
**Algorithm 2** The Proposed Greedy Perturbation Heuristic
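The sketch below mirrors Algorithm 2 (illustrative Python; candidate links are given as (UE, RIS, UAV, weight) tuples, and the feasibility bookkeeping of line 8 is reduced to dropping candidates that share the selected UE, RIS, or UAV):

```python
# Sketch of the greedy perturbation heuristic (Algorithm 2).
import numpy as np

def fiedler_vector(L):
    return np.linalg.eigh(L)[1][:, 1]

def greedy_perturbation(L, candidates, R):
    """candidates: list of (u, r, a, w_l) tuples; returns the updated Laplacian."""
    Lp = np.array(L, dtype=float)
    remaining = list(candidates)
    for _ in range(R):
        if not remaining:
            break
        v = fiedler_vector(Lp)
        # Line 5: pick the candidate with the largest first-order gain w_l * V_ua.
        u, r, a, w = max(remaining, key=lambda c: c[3] * (v[c[0]] - v[c[2]]) ** 2)
        a_l = np.zeros(Lp.shape[0])
        a_l[u], a_l[a] = 1.0, -1.0
        Lp += w * np.outer(a_l, a_l)               # line 7: update L'
        # Line 8: remove candidates reusing the selected UE, RIS, or UAV.
        remaining = [c for c in remaining if c[0] != u and c[1] != r and c[2] != a]
    return Lp
```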
## V Perturbation Heuristic Analysis
In this section, we derive the upper and lower bounds of \(\lambda_{2}(\mathbf{L}^{\prime})\) based on the proposed perturbation heuristic solution. Then, the computational complexity of the proposed schemes, as compared to the exhaustive search, is analyzed in Section V-B.
### _Lower and Upper Bounds Analysis_
Given the proposed perturbation heuristic, Proposition 2 derives the lower and upper bounds on the algebraic connectivity of a graph obtained by adding \(R\) edges connecting UE nodes to UAV nodes to a single connected graph.
**Proposition 2**.: _Let \(\mathbf{L}\) be the Laplacian matrix of the original connected graph \(\mathcal{G}\). Suppose we add an edge \(l\) that connects UE node \(u\) to UAV node \(a\) through RIS \(r\) to \(\mathcal{G}\). Then, we have the following lower and upper bounds, respectively, for \(\lambda_{2}(\mathbf{L}+w_{l}\mathbf{a}_{l}\mathbf{a}_{l}^{T})\):_
\[\lambda_{2}(\mathbf{L}+w_{l}\mathbf{a}_{l}\mathbf{a}_{l}^{T})\geq\lambda_{2}+\frac{w_{l}V_{ua}+\delta+2w_{l}-\sqrt{5w_{l}V_{ua}-w_{l}\delta^{2}+4w_{l}^{2}+4w_{l}\delta}}{2}, \tag{29}\]
\[\lambda_{2}(\mathbf{L}+w_{l}\mathbf{a}_{l}\mathbf{a}_{l}^{T})\leq\lambda_{2}+ \frac{w_{l}V_{ua}}{1+w_{l}(2-V_{ua})/(\lambda_{n}-\lambda_{2})}, \tag{30}\]
_where \(\delta=\lambda_{3}-\lambda_{2}\)._
Proof.: Let \(\mathbf{L}=\mathbf{QDQ}^{T}\) be the eigenvalue decomposition of \(\mathbf{L}\), where \(\mathbf{D}\) is the diagonal matrix whose diagonal elements are the corresponding eigenvalues, denoted by \(\lambda_{1},\lambda_{2},\dots,\lambda_{V}\), and \(\mathbf{Q}\) is an orthogonal matrix whose columns are the real, orthonormal eigenvectors of \(\mathbf{L}\). Suppose that all entries in \(\mathbf{D}\) are distinct (same process applies if eigenvalues other than \(\lambda_{2}\) are repeated [30]). Note that \(\mathbf{a}_{l}\mathbf{a}_{l}^{T}\) is a matrix of rank-one and therefore our analysis follows the same steps used in [31] for eigenvalues perturbation of a matrix with rank-one update. In particular, the standard form in [31] is
\[\underbrace{\mathbf{L}^{\prime}}_{\text{updated matrix}}=\mathbf{Q} \underbrace{\mathbf{D}}_{\text{diagonal matrix with $\lambda_{i}$ entries}}\mathbf{Q}^{T}+\rho\underbrace{\mathbf{a}_{l} \mathbf{a}_{l}^{T}}_{\text{rank-one}}, \tag{31}\]
where \(\rho>0\), which will be replaced by \(w_{l}\). Recall that \(w_{l}\) is a non-negative value. Thus, we can write (31) as
\[\mathbf{Q}^{T}\mathbf{L}^{\prime}\mathbf{Q}=\mathbf{D}+w_{l}(\mathbf{Q}^{T}\mathbf{a}_{l})(\mathbf{Q}^{T}\mathbf{a}_{l})^{T}. \tag{32}\]

We denote the eigenvalues of \(\mathbf{L}^{\prime}\) by \(\tilde{\lambda}_{1},\tilde{\lambda}_{2},\ldots,\tilde{\lambda}_{V}\). Since we have one graph component, we assume \(\tilde{\lambda}_{i}\leq\tilde{\lambda}_{i+1}\), and similarly, \(\lambda_{i}\leq\lambda_{i+1}\). Note that the matrices \(\mathbf{L}\) and \(\mathbf{L}^{\prime}\) both have eigenvalue \(0\) with the corresponding eigenvector \(\mathbf{1}\), i.e., \(\lambda_{1}\) and \(\tilde{\lambda}_{1}\) are zero. Thus, we are interested in the remaining \(V-1\) eigenvalues of \(\mathbf{L}^{\prime}\), particularly \(\tilde{\lambda}_{2}\). The eigenvalues of \(\mathbf{L}^{\prime}\) are the same as those of \(\mathbf{D}+w_{l}\mathbf{u}\mathbf{u}^{T}\), where \(\mathbf{u}=\mathbf{Q}^{T}\mathbf{a}_{l}\). To find the eigenvalues of \(\mathbf{L}^{\prime}\), assuming first that \(\mathbf{D}-\tilde{\lambda}\mathbf{I}\) is non-singular, we compute the characteristic polynomial as follows:
\[\det(\mathbf{L}^{\prime}-\tilde{\lambda}\mathbf{I}) =\det(\mathbf{D}+w_{l}\mathbf{u}\mathbf{u}^{T}-\tilde{\lambda} \mathbf{I})\] \[=\det((\mathbf{D}-\tilde{\lambda}\mathbf{I})(\mathbf{I}+w_{l}( \mathbf{D}-\tilde{\lambda}\mathbf{I})^{-1}\mathbf{u}\mathbf{u}^{T}))\] \[=\det(\mathbf{D}-\tilde{\lambda}\mathbf{I})\det(\mathbf{I}+w_{l }(\mathbf{D}-\tilde{\lambda}\mathbf{I})^{-1}\mathbf{u}\mathbf{u}^{T}).\]
Since \(\mathbf{D}-\tilde{\lambda}\mathbf{I}\) is non-singular, \(\det(\mathbf{I}+w_{l}(\mathbf{D}-\tilde{\lambda}\mathbf{I})^{-1}\mathbf{u}\mathbf{u}^{T})=0\) whenever \(\tilde{\lambda}\) is an eigenvalue. Note that \(\mathbf{I}+w_{l}(\mathbf{D}-\tilde{\lambda}\mathbf{I})^{-1}\mathbf{u}\mathbf{u}^{T}\) is the identity plus a rank-one matrix. The determinant of such a matrix is as follows3:
Footnote 3: By definition, if \(\mathbf{x}\) and \(\mathbf{y}\) are vectors, \(\det(\mathbf{I}+\mathbf{x}\mathbf{y}^{T})=1+\mathbf{y}^{T}\mathbf{x}\)[26].
\[\det(\mathbf{I}+w_{l}(\mathbf{D}-\tilde{\lambda}\mathbf{I})^{-1 }\mathbf{u}\mathbf{u}^{T}) =(1+w_{l}\mathbf{u}^{T}(\mathbf{D}-\tilde{\lambda}\mathbf{I})^{- 1}\mathbf{u})\] \[=\underbrace{\left(1+w_{l}\sum_{i=2}^{V}\frac{u_{i}^{2}}{ \lambda_{i}-\tilde{\lambda}}\right)}_{\text{secular equation}}, \tag{33}\]
where \(u_{i}=\mathbf{Q}_{i}^{T}\mathbf{a}_{l}\). Thus, \(u_{1}=\mathbf{Q}_{1}^{T}\mathbf{a}_{l}=0\) since the entries of the eigenvector corresponding to \(\lambda_{1}\) are all equal, and \(u_{2}=\mathbf{Q}_{2}^{T}\mathbf{a}_{l}=v_{u}-v_{a}\). Golub [31] showed that in the above situation the eigenvalues of \(\mathbf{L}^{\prime}\) are the zeros of the secular equation, which is given as follows
\[f(\tilde{\lambda})=1+w_{l}\sum_{i=2}^{V}\frac{u_{i}^{2}}{\lambda_ {i}-\tilde{\lambda}}\] \[1+w_{l}\sum_{i=2}^{V}\frac{u_{i}^{2}}{\lambda_{i}-\tilde{\lambda }}=0\to w_{l}\sum_{i=2}^{V}\frac{u_{i}^{2}}{\tilde{\lambda}-\lambda_{i}}=1. \tag{34}\]
Considering \(u_{i}^{2}=0.5\), \(\lambda_{i}=i\), \(i\in\{2,3,4\}\), \(V=4\), and \(w_{l}=1\), the function \(f(\tilde{\lambda})=1+\sum_{i=2}^{4}\frac{0.5}{\lambda_{i}-\tilde{\lambda}}\) has the graph shown in Fig. 2 for the interval \((\lambda_{2},\lambda_{3})\). Since \(f^{\prime}(\tilde{\lambda})=\sum_{i=2}^{4}\frac{u_{i}^{2}}{(\lambda_{i}-\tilde{\lambda})^{2}}>0\), the function is strictly increasing in the interval \((\lambda_{i},\lambda_{i+1})\). Thus, the roots of \(f(\tilde{\lambda})\) are interlaced by the \(\lambda_{i}\). Since \(\mathbf{L}^{\prime}\) has more edges than \(\mathbf{L}\) and by eigenvalue interlacing, we have [31, 32]
\[\lambda_{i}\leq\tilde{\lambda}_{i}\leq\lambda_{i+1},\hskip 28.452756pt\forall i =2,\ldots,V-1\]
Consider \(\tilde{\lambda}\) in the interval \((\lambda_{2},\lambda_{3})\). From Fig. 2, if \(f(\tilde{\lambda})\leq 0\), then \(\tilde{\lambda}\leq\tilde{\lambda}_{2}\). It is easy to see that \(f(\tilde{\lambda})\leq 0\) is equivalent to
\[w_{l}\frac{u_{2}^{2}}{\tilde{\lambda}-\lambda_{2}}\geq 1+w_{l}\sum_{i=3}^{V}\frac{u_{i}^{2}}{\lambda_{i}-\tilde{\lambda}}. \tag{35}\]
Thus, we conclude that (35) implies \(\tilde{\lambda}\leq\tilde{\lambda}_{2}\).
Recall that \(\|\mathbf{u}\|^{2}=\|\mathbf{Q}^{T}\mathbf{a}_{l}\|^{2}=\|\mathbf{a}_{l}\|^{2}=2\), so \(\sum_{i=3}^{V}u_{i}^{2}\leq 2\) [30]. From the right hand side of (35), we have
\[1+w_{l}\sum_{i=3}^{V}\frac{u_{i}^{2}}{\lambda_{i}-\tilde{\lambda}}\overset{(a)} {\leq}1+\frac{w_{l}}{\lambda_{3}-\tilde{\lambda}}\sum_{i=3}^{V}u_{i}^{2} \overset{(b)}{\leq}1+w_{l}\frac{2}{\lambda_{3}-\tilde{\lambda}}, \tag{36}\]
where \((a)\) and \((b)\) follow from the facts that \(\lambda_{3}\leq\lambda_{4},\ldots,\leq\lambda_{n}\) and \(\sum_{i=3}^{V}u_{i}^{2}\leq 2\), respectively. From (36), if
\[w_{l}\frac{u_{2}^{2}}{\tilde{\lambda}-\lambda_{2}}\geq 1+w_{l}\frac{2}{\lambda_{3}-\tilde{\lambda}}, \tag{37}\]
then \(\tilde{\lambda}\leq\tilde{\lambda}_{2}\). We set \(\tilde{\lambda}-\lambda_{2}=\epsilon\) and \(\lambda_{3}-\lambda_{2}=\delta\) in (37). We aim to find \(\epsilon>0\) (i.e., \(\tilde{\lambda}>\lambda_{2}\)) such that
\[w_{l}\frac{u_{2}^{2}}{\epsilon}\geq 1+w_{l}\frac{2}{\delta-\epsilon}, \tag{38}\]
By solving (38), we can verify that
\[\epsilon=\frac{w_{l}u_{2}^{2}+\delta+2w_{l}-\sqrt{w_{l}(u_{2}^{2}-\delta^{2})+4w _{l}^{2}+4w_{l}(u_{2}^{2}+\delta)}}{2}, \tag{39}\]
satisfies (38). Thus, the lower bound is
\[\tilde{\lambda}_{2}\geq\lambda_{2}+\frac{w_{l}u_{2}^{2}+\delta+2w_{l}-\sqrt{5w_{ l}u_{2}^{2}-w_{l}\delta^{2}+4w_{l}^{2}+4w_{l}\delta}}{2}, \tag{40}\]
where \(u_{2}^{2}=V_{ua}\). This concludes that (40) gives the lower bound in (29).
Now, we derive the upper bound of \(\tilde{\lambda}_{2}\). A sharper upper bound than (28) can be obtained using the secular equation. From (34), the algebraic connectivity of \(\mathbf{L}^{\prime}\) is the number \(\tilde{\lambda}_{2}\in(\lambda_{2},\lambda_{3})\) satisfying
\[w_{l}\frac{u_{2}^{2}}{\tilde{\lambda}-\lambda_{2}}+w_{l}\sum_{i=3}^{V}\frac{u_{i}^{2}}{\tilde{\lambda}-\lambda_{i}}=1\]
\[w_{l}\frac{u_{2}^{2}}{\tilde{\lambda}-\lambda_{2}}=1+w_{l}\sum_{i=3}^{V}\frac{u_{i}^{2}}{\lambda_{i}-\tilde{\lambda}}\]
\[\frac{w_{l}}{\tilde{\lambda}-\lambda_{2}}=\frac{1+w_{l}\sum_{i=3}^{V}\frac{u_{i}^{2}}{\lambda_{i}-\tilde{\lambda}}}{u_{2}^{2}}\]
\[\tilde{\lambda}-\lambda_{2}=\frac{w_{l}u_{2}^{2}}{1+w_{l}\sum_{i=3}^{V}\frac{u_{i}^{2}}{\lambda_{i}-\tilde{\lambda}}}.\]
Thus, the upper bound of \(\tilde{\lambda}_{2}\) is given as follows
\[\begin{split}\tilde{\lambda}_{2}&\stackrel{{ (a)}}{{\leq}}\lambda_{2}+\frac{w_{l}u_{2}^{2}}{1+w_{l}\sum_{i=3}^{V} \frac{u_{i}^{2}}{\lambda_{i}-\lambda_{2}}}\\ \tilde{\lambda}_{2}&\stackrel{{(b)}}{{ \leq}}\lambda_{2}+\frac{w_{l}u_{2}^{2}}{1+w_{l}(\frac{1}{\lambda_{n}-\lambda_ {2}}(\sum_{i=2}^{V}u_{i}^{2}-u_{2}^{2}))}\\ \tilde{\lambda}_{2}&\stackrel{{(c)}}{{ \leq}}\lambda_{2}+\frac{w_{l}V_{ua}}{1+w_{l}(2-V_{ua})/(\lambda_{n}-\lambda_ {2})},\end{split} \tag{41}\]
where \((a)\), \((b)\), and \((c)\) come from \(\lambda_{2}<\tilde{\lambda}_{2}\), \(\lambda_{n}\geq\lambda_{n-1},\ldots,\geq\lambda_{2}\), \(\sum_{i=2}^{V}u_{i}^{2}=2\), and \(u_{2}^{2}=V_{ua}\), respectively. This concludes that (41) gives the upper bound in (30).
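The rank-one analysis above can also be checked numerically: the new algebraic connectivity is the root of the secular equation (34) in \((\lambda_{2},\lambda_{3})\), which the sketch below recovers with a standard bracketing root-finder and compares against a direct eigenvalue computation (illustrative toy graph; scipy is assumed available).

```python
# Numerical illustration of the secular-equation characterization (34).
import numpy as np
from scipy.optimize import brentq

V = 6
rng = np.random.default_rng(2)
W = np.triu(rng.uniform(0.5, 1.5, (V, V)), 1)
W = W + W.T
L = np.diag(W.sum(axis=1)) - W
lam, Q = np.linalg.eigh(L)

u_idx, a_idx, w_l = 1, 4, 0.8                     # candidate edge and weight
a_l = np.zeros(V)
a_l[u_idx], a_l[a_idx] = 1.0, -1.0
u = Q.T @ a_l                                     # u_i = Q_i^T a_l

def secular(x):                                   # f(x) from eq. (34)
    return 1.0 + w_l * np.sum(u[1:] ** 2 / (lam[1:] - x))

# By interlacing, the new lambda_2 lies in (lambda_2, lambda_3); find the root.
lam2_new = brentq(secular, lam[1] + 1e-9, lam[2] - 1e-9)
print(lam2_new, np.linalg.eigvalsh(L + w_l * np.outer(a_l, a_l))[1])
```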
### _Computational Complexity_
The exhaustive search requires a computational complexity of calculating \(\lambda_{2}(\mathbf{L}^{\prime})\) for \(\binom{|\mathbf{z}|}{R}\) Laplacian matrices, in which each Laplacian matrix computation requires \(\mathcal{O}(4E^{\prime}V^{3}/3)\)[33]. Thus, the exhaustive search requires \(\mathcal{O}(\binom{|\mathbf{z}|}{R}4E^{\prime}V^{3}/3)\) operations. On the other hand, the SDP optimization for the convex relaxation runs with high computational complexity for large \(|\mathbf{z}|\). The proposed perturbation heuristic requires only an eigenvector computation, as opposed to the exhaustive search. In particular, for the first added link, it computes the vector \(\mathbf{v}\) of the current \(\mathbf{L}\). Computing all the eigenvectors of a \(V\times V\) dense matrix costs approximately \(\mathcal{O}(4V^{3}/3)\) arithmetic operations [30]. Since we have at most \(R\) possible links to be added, the proposed perturbation solution runs in \(\mathcal{O}(4RV^{3}/3+4RE^{\prime}V^{3}/3)=\mathcal{O}(4RV^{3}/3(1+E^{\prime}))\approx\mathcal{O}(4RV^{3}E^{\prime}/3)\) arithmetic operations.
## VI Numerical Results
We run MATLAB simulations to demonstrate the viability of the proposed schemes and their superiority over the existing solutions. The simulation parameters of the RIS configurations and UAV communications are consistent with those used in [22] and [12], respectively. We consider a RIS-assisted UAV system in an area of \(150\)\(m\times 150\)\(m\), where the RISs have fixed locations and the UEs and the UAVs are distributed randomly. The RISs are located at an altitude of \(20\) m, \(M=100\), \(d_{b}=5\) cm, \(d_{c}=5\) cm, \(\beta_{0}=10^{-6}\), \(N_{0}=-130\) dBm, the altitude of the UAVs is \(50\) m, \(f_{c}=3\times 10^{9}\) Hz, \(c=3\times 10^{8}\) m/s, \(\alpha=4\), \(p=1\) watt, \(P=5\) watt, \(\gamma_{0}^{\text{UE}}=85\) dB, and \(\gamma_{0}^{\text{UAV}}=80\) dB. Unless specified otherwise, \(A=10\), \(U=15\), \(\gamma_{0}^{\text{RIS}}=30\) dB, and \(\epsilon=10^{-5}\).
The optimization problem in (16) is solved using the two proposed heuristics in Section IV-A and Section IV-B, denoted by SDP and Proposed Perturbation, which are inspired by [1, 30], respectively. For the sake of numerical comparison, the problem in (16) is also solved optimally via exhaustive search, which searches over all feasible links between the UEs and the UAVs through the RISs and then selects the maximum \(\lambda_{2}(\mathbf{L}^{\prime})\). In addition, we consider solving (16) using two benchmark schemes: the original network without RIS deployment and random link selection. Finally, we implement the upper and lower bounds, which are computed from (30) and (29), respectively. Our performance measure is the network connectivity, averaged over \(500\) iterations for each chosen number of UEs, UAVs, and RISs and each RIS SNR threshold. In each iteration, we change the locations of the UEs and the UAVs.
In Fig. 3, we plot the average network connectivity of the proposed and benchmark schemes versus the number of UEs \(U\). From Fig. 3, we can see that the proposed SDP and perturbation schemes outperform the original scheme in terms of network connectivity. The results of the perturbation scheme are very close to the actual optimal value obtained using exhaustive search. This is because the perturbation scheme calculates the relative values of the Fiedler vector and selects the largest value that offers the maximum possible increase in the network connectivity, which corresponds to the desired UE-RIS-UAV link. For this reason, the performance of the perturbation heuristic is also close to the upper bound. By controlling the phase shifts of the RISs, the proposed schemes judiciously establish new links that connect the UEs to the desired UAVs such that the UEs do not miss the communications to the network while improving network connectivity. Between these two proposed solutions, the perturbation heuristic significantly outperforms SDP since the latter is sub-optimal. The original scheme, which is without RIS deployment, has poor performance. This interestingly shows that by adding a few low-cost passive nodes, the average network connectivity of RIS-assisted UAV networks is improved significantly. Notably, the values of \(\lambda_{2}(\mathbf{L}^{\prime})\) of all the schemes decrease as the number of UEs increases, since adding more unconnected UEs may result in a sparse graph with low network connectivity.
We observe from Fig. 3 that the performance of the proposed perturbation is very close to that of the optimal scheme. For ease of illustration, we include the optimal scheme in Fig. 3 only and omit it in the remaining figures.
Fig. 3: The average network connectivity \(\lambda_{2}(\mathbf{L}^{\prime})\) versus the number of UEs \(U\) for \(7\) UAVs and \(3\) RISs.

In Fig. 4 and Fig. 5, we show the average network connectivity of the proposed and benchmark schemes versus the number of UAVs \(A\) in different setups, i.e., small and large network sizes. For a small number of UAVs in Fig. 4, the proposed SDP and perturbation schemes offer a slight performance gain in terms of network connectivity compared to the original scheme. This is because our proposed schemes have only a few options of UE-UAV links, as the RISs can direct the signals of the UEs to only a few UAVs. However, when the number of UAVs increases, the proposed schemes smartly select effective UE-RIS-UAV links that significantly maximize the network connectivity. It is noted that \(\lambda_{2}(\mathbf{L}^{\prime})\) of all schemes increases with the number of UAVs, since adding more connected nodes to the network increases the number of new links, which increases the network connectivity. This can also be seen in Fig. 5 in the case of a large number of UAV nodes.
We observe that the values of \(\lambda_{2}(\mathbf{L}^{\prime})\) in Fig. 3 are smaller than the values of \(\lambda_{2}(\mathbf{L}^{\prime})\) in Fig. 4 and Fig. 5 for all the UAV configurations. This is reasonable because adding more connected UAV nodes adds more links to the network and thus improves the network connectivity more than adding unconnected UE nodes, which makes the network less connected (i.e., more UE nodes with no links between them).
To show that adding a few passive nodes is indeed crucial to maximize the network connectivity of UAV networks, Fig. 6 plots the average network connectivity versus the number of RISs \(R\). For plotting this figure, we change the number of RISs and distribute them randomly and consider \(15\) UEs and \(10\) UAVs. For a small number of RISs in Fig. 6, the performance of all schemes increases significantly since there are many possible selections of UEs and UAVs for each RIS, and therefore all the schemes select the best schedule that maximizes the network connectivity. However, when the number of RISs increases (\(R>5\)), there are no more good opportunities of selecting links that connect the remaining UEs to the remaining UAVs. Thus, we notice a slight performance increase in the network connectivity of all the schemes. This also shows that adding the first few links is important to improve the network connectivity of the network. Note that the original scheme works irrespective of the RISs deployment, thus it has fixed performance when we change the number of RISs.
To show the superior performance of the proposed perturbation heuristic compared to random link addition, Fig. 7 studies the average network connectivity versus the number of RISs \(R\) for \(15\) UEs and \(10\) UAVs. For a fair comparison, the random scheme is simulated similarly to the perturbation heuristic except that the selection of a link \(l\) is random, not based on the maximum value of \(w_{l}V_{ua}\). Thus, both schemes add the same number of links to the network. As expected, random addition performs poorly compared to the perturbation heuristic. However, the point to be highlighted here is that a large increase in the average network connectivity can be obtained by adding new edges carefully, as we propose in the perturbation heuristic.
In Fig. 8, we show the impact of the SNR threshold \(\gamma_{0}^{(\text{RIS})}\) on the network connectivity. For a small SNR threshold, all links between the UEs and the UAVs through the RISs can satisfy the threshold, so many alternative links are available to maximize the network connectivity. On the other hand, for a high RIS SNR threshold, only a few UE-RIS-UAV links can satisfy it. Thus, the network connectivity of all schemes degrades and approaches that of the original scheme, which is unaffected by \(\gamma_{0}^{(\text{RIS})}\).
It is worth remarking that while the random scheme adds a random link to the network, the original scheme does not add a link. The proposed solutions balance between the afore
Fig. 4: The average network connectivity \(\lambda_{2}(\mathbf{L}^{\prime})\) versus the number of UAVs \(A\) for \(10\) UEs and \(3\) RISs.
Fig. 5: The average network connectivity \(\lambda_{2}(\mathbf{L}^{\prime})\) versus a large number of UAVs \(A\) for \(22\) UEs and \(3\) RISs.
mentioned aspects by judiciously selecting an effective link between a UE and a UAV that maximizes the network connectivity. This exploits the cooperation between an appropriate scheduling algorithm design and the RIS phase-shift configurations. The proposed perturbation heuristic is near-optimal compared to the optimal scheme, whereas the proposed SDP trades a certain degradation in network connectivity for polynomial computational complexity, in contrast to the high complexity of the optimal scheme.
## VII Conclusion
In this paper, we studied the effectiveness of RISs in UAV communications for maximizing the network connectivity by adjusting the algebraic connectivity through the reflected links of the RISs. We started by defining the node criticality and formulating the proposed network connectivity maximization problem. By embedding the node criticality in the link selection, we proposed two efficient solutions. First, we relaxed the problem to a convex SDP optimization problem that can be solved efficiently in polynomial time using CVX. Then, we proposed a low-complexity, yet efficient, perturbation heuristic that adds one UE-RIS-UAV link at a time by calculating only the weighted difference between the values of the Fiedler eigenvector. We also derived lower and upper bounds on the algebraic connectivity obtained by adding new links to the network based on the perturbation heuristic. Through extensive simulations, we evaluated the performance of the proposed schemes and verified that they improve the network connectivity compared to existing solutions. In particular, the perturbation heuristic is a near-optimal solution whose network connectivity is very close to that of the optimal solution.
|
2302.09282 | Dark Matter and $(g-2)_{e,μ}$ in ISS(2,3) based Gauged
$U(1)_{L_{e}-Lμ}$ Symmetric Model | We proposed a model which can explain the neutrino phenomenology, dark matter
and anomalous magnetic moment $(g-2)$ in a common framework. The inverse
seesaw (ISS)(2,3) mechanism has been incorporated, in which we get an extra
sterile state which acts as a viable dark matter candidate. The right
handed neutrino mass is obtained in TeV scale, which is accessible at LHC. The
anomaly free $U(1)_{L_{e}-L{\mu}}$ gauge symmetry is introduced to explain the
anomalous magnetic moment of electron and muon because it provides a natural
origin of $(g-2)$ in a very minimal setup. The corresponding MeV scale gauge
boson successfully explain the anomalous magnetic moment of electron and
muon$(g-2)_{e,\mu}$, simultaneously. Thus obtained neutrino phenomenology and
relic abundance of dark matter are compatible with experimental results. | Rishu Verma, Ankush, B. C. Chauhan | 2023-02-18T10:36:59Z | http://arxiv.org/abs/2302.09282v1 | # Dark Matter and \((g-2)_{e,\mu}\) in ISS(2,3) based Gauged \(U(1)_{L_{e}-L\mu}\) Symmetric Model
###### Abstract
We propose a model which can explain neutrino phenomenology, dark matter and the anomalous magnetic moment \((g-2)\) in a common framework. The inverse seesaw (ISS)(2,3) mechanism is incorporated, in which an extra sterile state arises and acts as a viable dark matter candidate. The right-handed neutrino mass is obtained at the TeV scale, which is accessible at the LHC. The anomaly-free \(U(1)_{L_{e}-L\mu}\) gauge symmetry is introduced to explain the anomalous magnetic moments of the electron and muon, because it provides a natural origin of \((g-2)\) in a very minimal setup. The corresponding MeV-scale gauge boson successfully explains the anomalous magnetic moments of the electron and muon, \((g-2)_{e,\mu}\), simultaneously. The resulting neutrino phenomenology and relic abundance of dark matter are compatible with experimental results.
## 1 Introduction
The Standard Model (SM) of particle physics is a remarkably successful theory of the interactions and dynamics of fundamental particles. Despite its huge success, it fails to explain neutrino mass, the matter-antimatter asymmetry, dark matter (DM), the anomalous magnetic moments \((g-2)_{e,\mu}\), etc. The experimental discoveries from Super-Kamiokande [1], SNO [2, 3] and KamLAND [4] have confirmed solar and atmospheric neutrino flavour oscillations and the massive nature of neutrinos. There are many other open questions in neutrino physics, such as the absolute mass scale of neutrinos, their mass hierarchy (normal ordering or inverted ordering),
whether they are Majorana or Dirac particles, and the status of CP violation, all of which remain to be answered. Certainly, all of these issues require a framework beyond the SM (BSM).
There are several mechanisms in the literature to generate neutrino masses. The simplest and most popular way is to add right-handed neutrinos (RHNs) to the SM by hand, so that the Higgs can have the required Yukawa coupling with the neutrinos. To this end, several seesaw mechanisms exist, such as Type-I [5, 6], Type-II [7], Type-III [8, 9] and the Inverse Seesaw (ISS) [10, 11, 12].
Another important unsolved problem in cosmology is the nature of dark matter. Various astronomical and cosmological observations suggest that the universe contains non-luminous, non-baryonic matter called dark matter [13, 14]. Several lines of evidence confirm the presence of such unseen structures in the universe, including the cosmic microwave background (CMB), gravitational lensing and galactic rotation curves [15, 16, 17]. Despite such strong evidence, the nature and origin of DM remain open questions. The current abundance of DM according to Planck is reported as [18]
\[\Omega_{DM}h^{2}=0.120\pm 0.001.\]
Any particle proposed as a DM candidate should fulfill the criteria given in [19]. According to these specifications, all existing SM particles are clearly ruled out as DM candidates. Hence, to obtain the correct DM phenomenology, we need physics beyond the SM.
Further, results from LSND [20], MINOS [21], etc., favour the existence of a fourth neutrino state known as the sterile neutrino. The anomalies found in experiments like GALLEX [22] and SAGE [23] strengthen this picture, as they can be explained by invoking a sterile neutrino. The sterile neutrino is a right-handed neutrino that senses only the gravitational interaction. Such neutrinos can be produced through their mixing with the active neutrino sector [24, 25]. The sterile neutrino is also a popular warm dark matter (WDM) candidate, with a very small mixing with the SM neutrinos that leads to its stability [26]. It is a massive particle with a very long lifetime. Sterile neutrinos with masses in the \(keV\) range are particularly important for explaining the cosmological findings [27]. According to cosmological predictions, the DM candidate should lie in the mass range \(0.4keV<<m_{DM}<<50keV\). The lower bound is obtained in [28] and the upper bound is given in [29, 30]. Above a mass of \(50keV\), the sterile neutrino loses its stability.
The magnetic moment of charged leptons plays an important role, as it provides a sensitive test of the SM. The results from the Fermi National Accelerator Laboratory (FNAL) have confirmed that the experimental value of the muon magnetic moment is not compatible with the SM prediction, with a \(4.2\sigma\) discrepancy [31]. Similarly, the experimental value of the electron magnetic moment shows \(1\sigma\) and \(2.4\sigma\) discrepancies with respect to the SM from the Rubidium-atom and Cesium-atom measurements, respectively [32, 33]. These results also open a new window onto BSM physics.
Motivated by the studies discussed above, we explore the possibility of a common framework that can explain the neutrino and DM phenomenology as well as the anomalous magnetic moments of the electron and muon, \((g-2)_{e,\mu}\). We develop a model based on the ISS(2,3) framework to generate non-zero neutrino masses. The motivation for this framework is its minimality: it generates right-handed neutrino masses at the TeV scale, accessible at the LHC, and simultaneously provides a viable DM candidate [34]. We use the anomaly-free \(U(1)_{L_{e}-L\mu}\) gauge symmetry so that the anomalous magnetic moments of the electron and muon can be explained, since it provides a natural origin of \((g-2)\) in a very minimal setup. We also implement a Type-II seesaw so that all of these phenomena can be explained simultaneously. The SM is extended by two RHNs \((N_{1},N_{2})\) and three singlet sterile neutrinos \(S_{i}\) (i=1,2,3), as required in the ISS(2,3) framework. The field content is further extended by three extra fields: \(\phi\) (scalar singlet), \(\eta\) (scalar doublet) and \(\Delta\) (scalar triplet). Only \(\phi\) contributes towards \((g-2)_{e,\mu}\). Finally, the cyclic symmetry \(Z_{3}\) is added to obtain an economical form of the mass matrices.
This paper is organised as follows: In Section 2 we give a detailed explanation of the ISS(2,3) formalism. In Section 3, the model is described and the effective neutrino mass matrix is obtained. Section 4 contains the discussion of DM in the ISS(2,3) mechanism. The anomalous magnetic moments of the muon and electron are discussed in Section 5. Numerical analysis and results are presented in Section 6. Finally, the conclusions are given in Section 7.
## 2 Inverse Seesaw(2,3) Formalism
In order to explain the smallness of neutrino masses, different seesaw mechanisms have been proposed in the literature [5, 7, 8]. In canonical seesaw models, the right-handed neutrino mass comes out at a very high scale, far beyond the reach of colliders. The ISS, by contrast, provides a formalism in which the right-handed neutrino mass can lie at the TeV scale, where it can be probed at the LHC and other future experiments. The ISS(2,3) is the minimal formalism that yields a dark matter (DM) candidate [35, 36] as well as the correct neutrino phenomenology. This is possible because in this scenario there are unequal numbers of right-handed (RH) neutrinos \(N_{j}\) (j=1,2) and singlet fermions \(S_{i}\) (i=1,2,3), which leads to a DM candidate and two pseudo-Dirac pairs [37].
\[L=-\bar{\nu}_{\alpha L}M_{D}N_{j}-\bar{S}_{i}MN_{j}-\frac{1}{2}\bar{S}_{i}\mu S ^{C}_{k}+h.c., \tag{1}\]
where \(M_{D}\), \(M\) and \(\mu\) are complex mass matrices and \(\alpha=(e,\mu,\tau)\), \(k=(1,2,3)\). After spontaneous symmetry breaking, Eq. (1) leads to the following \(9\times 9\) neutrino mass matrix
\[M_{\nu}=\begin{pmatrix}0&M_{D}&0\\ M^{T}_{D}&0&M\\ 0&M^{T}&\mu\end{pmatrix}, \tag{2}\]
where \(M_{D}\) is the Dirac mass matrix, \(M\) represents the interaction of the RH neutrinos with the singlet fermions, and \(\mu\) is the mass matrix of the singlet fermions. The SM neutrino masses are obtained at the sub-eV scale for \(\mu\) at the keV scale and \(M\) at the TeV scale [10, 11, 12]. Considering \(\mu<<M_{D}<<M\), block diagonalization of the matrix in Eq. (2) gives the 3\(\times\)3 effective neutrino mass matrix
\[m_{\nu}\approx M_{D}(M^{T})^{-1}\mu M^{-1}M_{D}^{T}. \tag{3}\]
and the mass matrix for heavy sector can be written as [38]
\[M_{H}=\begin{pmatrix}0&M\\ M^{T}&\mu\end{pmatrix}, \tag{4}\]
where \(M_{H}\) represents the mass matrix of the heavy pseudo-Dirac pairs and extra fermions. Since, in the ISS(2,3) scenario, \(M\) is a 2\(\times\)3 matrix, \(M^{-1}\) does not exist. Consequently, to obtain the effective neutrino mass matrix we use the formalism given in Ref. [39]
\[m_{\nu}\approx M_{D}dM_{D}^{T}, \tag{5}\]
where \(d\) is the \(2\times 2\) matrix defined through
\[M_{H}^{-1}=\begin{pmatrix}d_{2\times 2}&...\\...&...\end{pmatrix}. \tag{6}\]
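For concreteness, the block inversion in Eqs. (5)-(6) can be carried out numerically as sketched below, using ISS(2,3)-like textures (\(M_{D}\) is 3\(\times\)2, \(M\) is 2\(\times\)3, \(\mu\) is 3\(\times\)3). The numerical values are placeholders for illustration only, not fitted model parameters.

```python
import numpy as np

# Illustrative entries (GeV): Dirac-type entries of M_D, TeV-scale entries of M,
# and a keV-scale lepton-number-violating parameter p for mu.
a, b, m = 1.0, 2.0, 0.5
A, B, G = 1.0e3, 2.0e3, 1.5e3
p = 1.0e-6

M_D = np.array([[a, 0.0], [0.0, b], [m, 0.0]])        # 3x2
M   = np.array([[A, B, G], [B, 0.0, 0.0]])             # 2x3
mu  = p * np.eye(3)                                     # 3x3

# Heavy 5x5 block M_H = [[0, M], [M^T, mu]]; d is the upper-left 2x2 block of its inverse
M_H = np.block([[np.zeros((2, 2)), M], [M.T, mu]])
d = np.linalg.inv(M_H)[:2, :2]

m_nu = M_D @ d @ M_D.T                                  # Eq. (5): 3x3 light-neutrino mass matrix
print(np.round(m_nu * 1e9, 6))                          # converted from GeV to eV
```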
## 3 The Model
In the model, we include two right-handed neutrinos \(N_{j}\) (j=1,2) and three singlet fermions \(S_{i}\) (i=1,2,3), which carry charges \((-1,1)\) and \((0,0,0)\) under \(U(1)_{L_{e}-L\mu}\), respectively. Further, we extend the scalar sector with an \(SU(2)_{L}\) singlet scalar field \(\phi\) and a scalar doublet \(\eta\), which carry charges 1 and \(-1\), respectively, under \(U(1)_{L_{e}-L\mu}\).
The scalar field \(\phi\) breaks the \(U(1)_{L_{e}-L\mu}\) symmetry, while the Higgs doublet \(H\) is responsible for electroweak symmetry breaking. After spontaneous symmetry breaking (SSB), the vacuum expectation values (VEVs) acquired by (\(H\), \(\eta\)) and \(\phi\) give \(M_{D}\) and \(M\), respectively, with a minimal number of parameters. In addition, a \(Z_{3}\) symmetry is used to constrain the Yukawa Lagrangian. The fermionic and scalar field content along with the respective charge assignments is shown in Table 1.
The leading Yukawa Lagrangian is
\[\mathcal{L}^{I}= y_{e}\bar{L}_{e}e_{R}H+y_{\mu}\bar{L}_{\mu}\mu_{R}H+y_{\tau}\bar{ L}_{\tau}\tau_{R}H+y_{1}^{\nu}\bar{L}_{e}N_{1}\tilde{H}+y_{2}^{\nu}\bar{L}_{\mu}N_{2 }\tilde{H}+ \tag{7}\] \[y_{3}^{\nu}\bar{L}_{\tau}N_{1}\tilde{\eta}+y_{\phi}^{1}N_{1}S_{ 1}\phi+y_{\phi}^{2}N_{1}S_{2}\phi+y_{\phi}^{3}N_{1}S_{3}\phi+p_{i}S_{i}S_{i}+h. c.,\]
where \(\tilde{H}=i\tau_{3}H\) and \(y_{q}(q=e,\mu,\tau)\), \(y_{i}^{\nu}(i=1,2,3)\), \(y_{\phi}^{j}(j=1,2,3)\) are Yukawa coupling constants. The VEVs
\[\langle H\rangle=v_{H},\,\langle\phi\rangle=v_{\phi}\text{ and }\langle\eta \rangle=v_{\eta}.\]
lead to diagonal charged lepton mass matrix as
\[m_{l}=Diag(y_{e},y_{\mu},y_{\tau})v_{H}. \tag{8}\]
Consequently, we have obtained \(M_{D}\), \(M\) and \(\mu\) as shown below
\[M_{D}=\begin{pmatrix}a&0\\ 0&b\\ m&0\end{pmatrix},M=\begin{pmatrix}A&B&G\\ B&0&0\end{pmatrix},\mu=\begin{pmatrix}p_{1}&0&0\\ 0&p_{2}&0\\ 0&0&p_{3}\end{pmatrix}, \tag{9}\]
where \(a=y_{1}^{\nu}v_{H}\), \(b=y_{2}^{\nu}v_{H}\), \(m=y_{3}^{\nu}v_{\eta}\), \(A=y_{\phi}^{1}v_{\phi}\), \(B=y_{\phi}^{2}v_{\phi}\), \(G=y_{\phi}^{3}v_{\phi}\). For the numerical estimation, we assume degenerate masses for the sterile singlet fermions (\(p_{1}\approx p_{2}\approx p_{3}\approx p\)), which makes the lightest mass eigenstate of the heavy sector lie in the keV range. Within the ISS(2,3) mechanism, the above matrices lead to the light neutrino mass matrix
\[m_{\nu}^{1}=\begin{pmatrix}\frac{-a^{2}p}{(B^{2}+G^{2})}&\frac{aAbp}{(B^{3}+ BG^{2})}&\frac{-amp}{(B^{2}+G^{2})}\\ \frac{aAbp}{(B^{3}+BG^{2})}&\frac{-b^{2}(A^{2}+B^{2}+G^{2})p}{B^{2}(B^{2}+G^{ 2})}&\frac{Abmp}{(B^{3}+BG^{2})}\\ \frac{-amp}{(B^{2}+G^{2})}&\frac{Abmp}{(B^{3}+BG^{2})}&\frac{-m^{2}p}{(B^{2}+ G^{2})}\end{pmatrix}. \tag{10}\]
We implement a type-II seesaw to obtain the correct neutrino phenomenology. To this end, we introduce an \(SU(2)_{L}\) triplet field \(\Delta\) transforming as \(-1\) and \(\omega^{2}\) under \(U(1)_{L_{e}-L\mu}\) and \(Z_{3}\), respectively. The relevant Lagrangian corresponding to the type-II seesaw is given by
\[\mathcal{L}^{\mathcal{II}}= f_{1}(L_{\mu}L_{\tau}+L_{\tau}L_{\mu})\Delta+h.c., \tag{11}\]
where, \(f_{1}\) is coupling constant. The vacuum expectation value \(\langle\Delta\rangle=v_{\Delta}\) gives
\[m_{\nu_{II}}=\begin{pmatrix}0&0&0\\ 0&0&X\\ 0&X&0\end{pmatrix}, \tag{12}\]
where, \(X=f_{1}v_{\Delta}\). The complete Lagrangian for the model is given as
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c} Symmetry & \(\bar{L}_{e}\) & \(\bar{L}_{\mu}\) & \(\bar{L}_{\tau}\) & \(e_{R}\) & \(\mu_{R}\) & \(\tau_{R}\) & \(N_{1}\) & \(N_{2}\) & \(S_{i}\) & \(H\) & \(\phi\) & \(\eta\) & \(\Delta\) \\ \hline \(SU(2)_{L}\) & 2 & 2 & 2 & 1 & 1 & 1 & 1 & 1 & 1 & 2 & 1 & 2 & 3 \\ \hline \(U(1)_{L_{e}-L\mu}\) & 1 & -1 & 0 & -1 & 1 & 0 & -1 & 1 & 0 & 0 & 1 & -1 & -1 \\ \hline \(Z_{3}\) & \(\omega\) & \(\omega\) & \(\omega\) & \(\omega^{2}\) & \(\omega^{2}\) & \(\omega^{2}\) & \(\omega^{2}\) & \(\omega^{2}\) & 1 & 1 & \(\omega\) & 1 & \(\omega^{2}\) \\ \hline \end{tabular}
\end{table}
Table 1: The fermionic and scalar field content with respective charge assignments under \(SU(2)_{L}\), \(U(1)_{L_{e}-L\mu}\) and \(Z_{3}\) symmetries.
\[{\cal L}= y_{e}\bar{L}_{e}e_{R}H+y_{\mu}\bar{L}_{\mu}\mu_{R}H+y_{\tau}\bar{L}_{ \tau}\tau_{R}H+y^{\nu}_{1}\bar{L}_{e}N_{1}\tilde{H}+y^{\nu}_{2}\bar{L}_{\mu}N_{ 2}\tilde{H}+ \tag{13}\] \[y^{1}_{\phi}N_{1}S_{1}\phi+y^{2}_{\phi}N_{1}S_{2}\phi+y^{3}_{ \phi}N_{1}S_{3}\phi+p_{i}S_{i}S_{i}+f_{1}(L_{\mu}L_{\tau}+L_{\tau}L_{\mu}) \Delta+h.c.\]
The effective neutrino mass matrix is given by
\[M_{\nu}=m_{\nu_{I}}+m_{\nu_{II}},\]
which explicitly can be written as
\[M_{\nu}=\left(\begin{array}{ccc}\frac{-a^{2}p}{(B^{2}+G^{2})}&\frac{aAbp}{(B ^{3}+BG^{2})}&\frac{-amp}{(B^{2}+G^{2})}\\ \frac{aAbp}{(B^{3}+BG^{2})}&\frac{-b^{2}(A^{2}+B^{2}+G^{2})p}{B^{2}(B^{2}+G^{2 })}&\frac{Abmp}{(B^{3}+BG^{2})}+X\\ \frac{-amp}{(B^{2}+G^{2})}&\frac{Abmp}{(B^{3}+BG^{2})}+X&\frac{-m^{2}p}{(B^{2 }+G^{2})}\end{array}\right). \tag{14}\]
## 4 Dark Matter in ISS(2,3) mechanism
As stated earlier, the ISS(2,3) is a mechanism that can explain the neutrino phenomenology and DM simultaneously. In the generic ISS realization, one has the following mass spectrum [35]:
1) a scale corresponding to heavy neutrino states of \({\cal O}(M_{D})\) and \({\cal O}(M)\);
2) an \({\cal O}(\mu)\) scale corresponding to \(\#s-\#\nu_{R}\) light sterile states, which exist if \(\#s>\#\nu_{R}\) (\(\#\) denotes the number of neutrinos of each type);
3) three light active neutrino states of mass scale \({\cal O}(\mu)\frac{k}{1+k^{2}}\), with \(k=\frac{{\cal O}(M_{D})}{{\cal O}(M)}\).
Using Eq.(4), the \(M_{H}\) in our model is:
\[M_{H}=\begin{pmatrix}0&0&A&B&G\\ 0&0&B&0&0\\ A&B&p&0&0\\ B&0&0&p&0\\ G&0&0&0&p\end{pmatrix}. \tag{15}\]
The eigenvalues of \(M_{H}\) are obtained as
\[(M_{H})_{1}=\frac{1}{4}(2p-2\sqrt{2A^{2}+4B^{2}+2G^{2}}-2\sqrt{A^{4}+4A^{2}B^ {2}+2A^{2}G^{2}+G^{4}}+p^{2})\]
\[(M_{H})_{2}=\frac{1}{4}(2p+2\sqrt{2A^{2}+4B^{2}+2G^{2}}-2\sqrt{A^{4}+4A^{2}B^ {2}+2A^{2}G^{2}+G^{4}}+p^{2})\]
\[(M_{H})_{3}=\frac{1}{4}(2p-2\sqrt{2A^{2}+4B^{2}+2G^{2}}+2\sqrt{A^{4}+4A^{2}B^ {2}+2A^{2}G^{2}+G^{4}}+p^{2})\]
\[(M_{H})_{4}=\frac{1}{4}(2p+2\sqrt{2A^{2}+4B^{2}+2G^{2}}+2\sqrt{A^{4}+4A^{2}B^ {2}+2A^{2}G^{2}+G^{4}}+p^{2})\]
\[(M_{H})_{5}=p \tag{16}\]
Note that \((M_{H})_{5}\) depends only on the \(\mu\) matrix; hence the corresponding eigenstate is the lightest sterile state and acts as the potential DM candidate, so the mass of the DM particle is set by the parameter \(p\) of the model. In order to study the DM phenomenology, it is important to calculate the active-sterile mixing, which can be obtained from the eigenvectors corresponding to the keV-scale eigenvalue of \(M_{H}\). The relation between the DM mass, the active-sterile mixing and the relic abundance is given as [40]
\[\Omega_{DM}h^{2}=1.1\times 10^{7}\Sigma C_{\alpha}(m_{s})|U_{\alpha s}|^{2} \left(\frac{m_{s}^{2}}{keV}\right),\alpha=e,\mu,\tau \tag{17}\]
The simplified form of the above equation is
\[\Omega_{DM}h^{2}=0.3\left(\frac{sin^{2}\theta}{10^{-10}}\right) \left(\frac{m_{s}}{100keV}\right)^{2}, \tag{18}\]
where \(\sin^{2}2\theta=4\sum_{\alpha=e,\mu,\tau}|U_{\alpha s}|^{2}\), \(U_{\alpha s}\) represents the active-sterile mixing element and \(m_{s}\) is the mass of the lightest sterile fermion. The DM particle should be stable at least on cosmological time scales. The lightest sterile neutrino may decay into an active neutrino and a photon \(\gamma\), leading to a monochromatic X-ray line signal. Its decay rate is negligible on cosmological time scales due to the very small mixing angle and is given as [41]
\[\Gamma=1.38\times 10^{-32}\left(\frac{\sin^{2}2\theta}{10^{-10}}\right) \left(\frac{m_{s}}{keV}\right)^{5}s^{-1}. \tag{19}\]
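As a quick numerical illustration of Eqs. (18) and (19), the sketch below evaluates the relic abundance and radiative decay rate; the mixing and mass chosen here are placeholders rather than the model's best-fit point.

```python
def relic_abundance(sin2_2theta, m_s_keV):
    """Omega_DM h^2 of Eq. (18) for a keV-scale sterile neutrino."""
    return 0.3 * (sin2_2theta / 1e-10) * (m_s_keV / 100.0) ** 2

def decay_rate(sin2_2theta, m_s_keV):
    """Radiative decay rate in s^-1 from Eq. (19)."""
    return 1.38e-32 * (sin2_2theta / 1e-10) * m_s_keV ** 5

sin2_2theta, m_s = 5e-10, 10.0            # example active-sterile mixing and mass (keV)
print(relic_abundance(sin2_2theta, m_s))  # ~0.015, a partial contribution to Omega h^2 = 0.12
print(decay_rate(sin2_2theta, m_s))       # ~6.9e-27 s^-1, negligible on cosmological time scales
```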
The model parameters obtained in the fit are used to compute the relic abundance and the decay rate.
## 5 Anomalous Magnetic Moment of Electron and Muon
### Muon (g-2) Anomaly
The magnetic moment \(\vec{\mu}\) for any elementary particle having charge '\(q\)' and spin \(\vec{S}\) is given as
\[\vec{\mu}=g\frac{q}{2m}\vec{S}, \tag{20}\]
where \(m\) and \(g\) are the mass of the particle and the gyromagnetic ratio, respectively. Using the quantum-mechanical formulation, Dirac predicted the value of \(g\) for a spin-1/2 particle to be exactly 2 [42]. Higher-order radiative corrections shift the value of \(g\) away from 2; the fractional deviation of \(g\) from Dirac's prediction is known as the anomalous magnetic moment (\(a\)). For the muon, the anomalous magnetic moment is defined as \(a_{\mu}=(g-2)/2\). The recent result of the muon (g-2) experiment at the Fermi National Accelerator Laboratory (FNAL) [31] gives the value of \(a_{\mu}\) as
\[a_{\mu}^{FNAL}=116592040(54)\times 10^{-11}. \tag{21}\]
On the other hand, the theoretical prediction of the SM is [43]
\[a_{\mu}^{SM}=116591810(43)\times 10^{-11}. \tag{22}\]
The \(4.2\sigma\) discrepancy \(\Delta a_{\mu}=a_{\mu}^{FNAL}-a_{\mu}^{SM}\) challenges the SM and hints towards new physics beyond it. From Eq. (21) and Eq. (22), we get
\[\Delta a_{\mu}=251(59)\times 10^{-11}. \tag{23}\]
Various models discussing the \((g-2)_{\mu}\) anomaly can be found in the literature [44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54]. Since our model is extended by the \(U(1)_{L_{e}-L\mu}\) symmetry, the \(Z^{\prime}\) boson can contribute to the \((g-2)_{\mu}\) anomaly if its mass is in the range (10-100) MeV; the interaction of the muon with the \(Z^{\prime}\) boson can then provide a substantial correction to \(g-2\). The neutral-current interaction that contributes to \((g-2)_{\mu}\) is given as
\[{\cal L}=-g^{\prime}\bar{\mu}Z^{\prime}\gamma^{\mu}\mu, \tag{24}\]
where \(g^{\prime}\) is the corresponding coupling constant. The analytical one-loop contribution of the \(Z^{\prime}\) can be written as [55, 56]
\[\Delta a_{\mu}^{Z^{\prime}}=\frac{g^{\prime 2}}{8\pi^{2}}\int_{0}^{1}dx\frac{2 m_{\mu}^{2}x^{2}(1-x)}{x^{2}m_{\mu}^{2}+(1-x)M_{Z^{\prime}}^{2}}, \tag{25}\]
where \(m_{\mu}\) is the mass of muon and \(M_{Z^{\prime}}\) is the gauge boson mass.
### Electron (g-2) Anomaly
Unlike the muon case, the recent determinations of the electron anomalous magnetic moment do not agree with each other. According to the Rubidium-atom
Figure 1: Feynman diagram for electron and muon (g-2)(\(l=e,\mu\)) with mediator Z\({}^{\prime}\) gauge boson.
measurement [32]
\[(\Delta a_{e})_{Rb}=48(30)\times 10^{-14}, \tag{26}\]
with \(1\sigma\) discrepancy over SM, whereas, Cesium atom gives
\[(\Delta a_{e})_{Cs}=-87(30)\times 10^{-14}, \tag{27}\]
with \(2.4\sigma\) discrepancy over SM [33]. The neutral current interaction which gives the contribution to the calculation of \((g-2)_{e}\) is given as
\[{\cal L}=-g^{\prime}\bar{e}Z^{\prime}\gamma^{\mu}e. \tag{28}\]
The one loop contribution of \(Z^{\prime}\) to the magnetic moment of electron is given as [57]
\[\Delta a_{e}^{Z^{\prime}}=\frac{g^{\prime 2}}{8\pi^{2}}\int_{0}^{1}dx\frac{2m _{e}^{2}x^{2}(1-x)}{x^{2}m_{e}^{2}+(1-x)M_{Z^{\prime}}^{2}}, \tag{29}\]
where \(m_{e}\) is the mass of the electron. The contributing Feynman diagram is shown in Fig. 1. In order to explain these discrepancies in the electron and muon anomalous magnetic moments, the anomaly-free gauge symmetry \(U(1)_{L_{e}-L\mu}\) is used. The extra gauge boson \(Z^{\prime}\) (in the MeV range) obtained from \(U(1)_{L_{e}-L\mu}\) symmetry breaking contributes effectively to \(\Delta a_{\mu}\) and \(\Delta a_{e}\).
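A minimal numerical evaluation of the one-loop integrals in Eqs. (25) and (29) is sketched below; the coupling and \(Z^{\prime}\) mass are illustrative values only, not the fitted parameters of Section 6.

```python
import numpy as np
from scipy.integrate import quad

def delta_a(g_prime, m_lepton, M_Zprime):
    """One-loop Z' contribution to the lepton anomalous magnetic moment, Eqs. (25)/(29)."""
    integrand = lambda x: 2.0 * m_lepton**2 * x**2 * (1.0 - x) / (
        x**2 * m_lepton**2 + (1.0 - x) * M_Zprime**2)
    integral, _ = quad(integrand, 0.0, 1.0)
    return g_prime**2 / (8.0 * np.pi**2) * integral

m_e, m_mu = 0.000511, 0.10566             # lepton masses in GeV
M_Zp, g_p = 0.020, 5.0e-4                 # a 20 MeV Z' and an illustrative coupling
print(delta_a(g_p, m_mu, M_Zp))           # Delta a_mu, of order 1e-9
print(delta_a(g_p, m_e, M_Zp))            # Delta a_e, of order 1e-12
```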
## 6 Numerical Analysis and Results
In this section we numerically assess the viability of the model against the neutrino oscillation data, the DM relic abundance and the bounds on \(\Delta a_{e,\mu}\). From the light neutrino mass matrix \(M_{\nu}\) obtained from the ISS(2,3) and type-II seesaw mechanisms in Eq. (14), it is clear that there are eight unknown model parameters. For the numerical analysis, these parameters are evaluated using the constraints from the neutrino oscillation data shown in Table 2.
In charged lepton basis, the light neutrino mass matrix can be written as
\[M_{\nu}=UM_{d}U^{T}, \tag{30}\]
where \(M_{d}\) is the diagonal mass matrix containing the neutrino mass eigenvalues \(diag(m_{1},m_{2},m_{3})\), and \(U\) is the Pontecorvo-Maki-Nakagawa-Sakata (\(U_{PMNS}\)) neutrino mixing matrix, defined as \(U_{PMNS}=V.P\), where \(P\) is the diagonal phase matrix \(diag(1,e^{i\alpha},e^{i(\beta+\delta)})\) in which \(\alpha\), \(\beta\) are the Majorana-type \(CP\)-violating phases. In the PDG representation, \(V\) is given by
\[\begin{pmatrix}c_{12}c_{13}&s_{12}c_{13}&s_{13}e^{-i\delta}\\ -s_{12}c_{23}-c_{12}s_{23}s_{13}e^{i\delta}&c_{12}c_{23}-s_{12}s_{23}s_{13}e^{ i\delta}&s_{23}c_{13}\\ s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta}&-c_{12}s_{23}-s_{12}c_{23}s_{13}e^{ i\delta}&c_{23}c_{13}\end{pmatrix}, \tag{31}\]
where \(\delta\) is Dirac \(CP\) violating phase. The oscillation parameters are given as
\[sin^{2}\theta_{13}=|U_{e3}|^{2},\hskip 28.452756ptsin^{2}\theta_{23}=\frac{|U_{\mu 3 }|^{2}}{1-|U_{e3}|^{2}}\hskip 28.452756pt\mbox{and}\hskip 28.452756ptsin^{2}\theta_{12}= \frac{|U_{e2}|^{2}}{1-|U_{e3}|^{2}}\]
where \(U_{\alpha i}\) (\(\alpha=e,\mu,\tau\) and \(i=1,2,3\)) are the elements of the \(U_{PMNS}\) matrix. To check the viability of the model, we use the neutrino oscillation data given in Table 2 and determine the model parameter space that satisfies it. This parameter space is then used to study the active-sterile mixing, the decay rate of the DM candidate and the DM relic abundance.
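For reference, the construction of \(V\) in Eq. (31) and the extraction of the mixing angles used here can be sketched as follows (Majorana phases omitted); the NuFIT normal-ordering best-fit values of Table 2 are used only as a consistency check.

```python
import numpy as np

def pmns_V(th12, th13, th23, delta):
    """PDG parameterization of Eq. (31)."""
    s12, c12 = np.sin(th12), np.cos(th12)
    s13, c13 = np.sin(th13), np.cos(th13)
    s23, c23 = np.sin(th23), np.cos(th23)
    e = np.exp(1j * delta)
    return np.array([
        [c12 * c13, s12 * c13, s13 * np.conj(e)],
        [-s12 * c23 - c12 * s23 * s13 * e, c12 * c23 - s12 * s23 * s13 * e, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * e, -c12 * s23 - s12 * c23 * s13 * e, c23 * c13]])

def mixing_angles(U):
    """Recover sin^2(theta_ij) from a mixing matrix, as defined in the text above."""
    s13sq = abs(U[0, 2]) ** 2
    return {"sin2_12": abs(U[0, 1]) ** 2 / (1.0 - s13sq),
            "sin2_23": abs(U[1, 2]) ** 2 / (1.0 - s13sq),
            "sin2_13": s13sq}

V = pmns_V(np.arcsin(np.sqrt(0.304)), np.arcsin(np.sqrt(0.02221)),
           np.arcsin(np.sqrt(0.570)), 0.0)
print(mixing_angles(V))   # reproduces the input sin^2 values
```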
\begin{table}
\begin{tabular}{c|c|c} \hline \hline Parameter & Best fit \(\pm\) 1\(\sigma\) range & 3\(\sigma\) range \\ \hline \multicolumn{3}{c}{Normal neutrino mass ordering (\(m_{1}<m_{2}<m_{3}\))} \\ \hline \(\sin^{2}\theta_{12}\) & \(0.304^{+0.013}_{-0.012}\) & \(0.269-0.343\) \\ \(\sin^{2}\theta_{13}\) & \(0.02221^{+0.00068}_{-0.00062}\) & \(0.02034-0.02420\) \\ \(\sin^{2}\theta_{23}\) & \(0.570^{+0.018}_{-0.024}\) & \(0.407-0.618\) \\ \(\Delta m^{2}_{21}\left[10^{-5}\mbox{eV}^{2}\right]\) & \(7.42^{+0.21}_{-0.20}\) & \(6.82-8.04\) \\ \(\Delta m^{2}_{31}\left[10^{-3}\mbox{eV}^{2}\right]\) & \(+2.541^{+0.028}_{-0.027}\) & \(+2.431-+2.598\) \\ \hline \multicolumn{3}{c}{Inverted neutrino mass ordering (\(m_{3}<m_{1}<m_{2}\))} \\ \hline \(\sin^{2}\theta_{12}\) & \(0.304^{+0.013}_{-0.012}\) & \(0.269-0.343\) \\ \(\sin^{2}\theta_{13}\) & \(0.02240^{+0.00062}_{-0.00062}\) & \(0.02053-0.02436\) \\ \(\sin^{2}\theta_{23}\) & \(0.575^{+0.017}_{-0.021}\) & \(0.411-0.621\) \\ \(\Delta m^{2}_{21}\left[10^{-5}\mbox{eV}^{2}\right]\) & \(7.42^{+0.21}_{-0.20}\) & \(6.82-8.04\) \\ \(\Delta m^{2}_{32}\left[10^{-3}\mbox{eV}^{2}\right]\) & \(-2.497^{+0.028}_{-0.028}\) & \(-2.583--2.412\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Neutrino oscillations experimental data NuFIT 5.0 used in the numerical analysis [58].
Fig. 2 shows the correlation plots of (a) \(\sin^{2}\theta_{12}\) vs \(\sin^{2}\theta_{13}\), (b) \(\sin^{2}\theta_{23}\) vs \(\sin^{2}\theta_{13}\) and (c) \(\sin^{2}\theta_{23}\) vs \(\sin^{2}\theta_{12}\) for normal ordering. It is evident from Fig. 2 that the model satisfies the neutrino oscillation data on neutrino masses and mixing.
Fig. 3 shows the variation of the relic density of the DM candidate (the lightest sterile neutrino), its decay rate and the active-sterile mixing as functions of the DM candidate mass \(M_{DM}\). Fig. 3(a) shows the relic abundance of the DM candidate as a function of the DM mass. The relic abundance obtained in our model satisfies the experimental range for DM masses in the range (3-35) keV for normal hierarchy. According to the cosmological limits given by Lyman-\(\alpha\) and X-ray measurements, DM masses below 10 keV and above 17 keV are excluded, respectively. If these cosmological limits, shown as vertical lines in Fig. 3, are imposed, the sterile neutrino provides only a partial contribution to the total relic abundance of DM. Fig. 3(b) gives the parameter space of the decay rate of the DM candidate versus the DM mass. It is clear that the decay rate obtained is negligible, lying within the range (\(10^{-27}-10^{-36}\)) s\({}^{-1}\) for the DM mass range (2-35) keV. We also show the active-sterile mixing as a function of the DM mass in Fig. 3(c). In order to be a good DM candidate, the sterile neutrino mass must lie within the range (0.4-50) keV and its mixing with the active neutrinos must be very small, within the range (\(10^{-12}-10^{-8}\)). As can be seen in Fig. 3(c), the mass and the active-sterile mixing obtained in our model lie in these ranges.
In Fig. 4, we show the variation of the anomalous magnetic moments with the gauge coupling \(g^{\prime}\). The left panel gives the variation of the muon anomalous magnetic moment \(\Delta a_{\mu}\) with \(g^{\prime}\), and the right panel gives the variation of the electron anomalous magnetic moment with \(g^{\prime}\). The \(Z^{\prime}\) contribution to the anomalous magnetic moments of the muon and electron resolves the observed discrepancies with respect to the SM.
Figure 3: Plots showing the variation of (a) Relic density of DM with DM mass, (b) Decay rate of DM with DM mass and (c) active-sterile mixing as a function DM mass.
## 7 Conclusions
In this work, we have presented a detailed study of neutrino phenomenology, dark matter, and the electron and muon (g-2) in an extended SM scenario incorporating the ISS(2,3) seesaw mechanism. The ISS(2,3) mechanism is employed because it results in an additional keV-range sterile state which acts as a viable DM candidate, while the RHN masses are obtained at the TeV scale, accessible at the LHC. The model includes two extra RHNs \(N_{i}\) (i=1,2) and three singlet fermions \(S_{i}\) (i=1,2,3), as required in ISS(2,3). The anomaly-free \(U(1)_{L_{e}-L\mu}\) gauge symmetry provides an additional \(Z^{\prime}\) gauge boson in the MeV range, so that the model can explain the anomalous magnetic moments of the electron and muon, \((g-2)_{e,\mu}\), simultaneously. Taking the lightest sterile neutrino as our DM candidate, we successfully obtained the relic abundance of DM; the calculated relic abundance satisfies the experimental range for DM masses of (3-35) keV. Further, we calculated the decay rate of the DM candidate to check its stability and found it to be negligible, so the DM candidate is stable. Also, the active-sterile mixing and the DM mass are compatible with the experimental ranges.
In summary, we developed a model that can explain the correct neutrino phenomenology, the DM problem and the anomalous magnetic moments of the electron and muon, \((g-2)_{e,\mu}\).
**Acknowledgments**
R. Verma acknowledges the financial support provided by the Central University of Himachal Pradesh. B. C. Chauhan is thankful to the Inter University Centre for Astronomy and Astrophysics (IUCAA) for providing necessary facilities during the completion of this work. Ankush acknowledges the financial support provided by the University Grants Commission, Government of India vide registration number 201819-NFO-2018-19-OBCHIM-75542. Special thanks to Monal Kashav for his invaluable input and constant support throughout the research process.
Figure 4: Variation of electron and muon anomalous magnetic moment with gauge coupling constant \(g^{\prime}\). |
2303.06298 | MLP-SRGAN: A Single-Dimension Super Resolution GAN using MLP-Mixer | We propose a novel architecture called MLP-SRGAN, which is a single-dimension
Super Resolution Generative Adversarial Network (SRGAN) that utilizes
Multi-Layer Perceptron Mixers (MLP-Mixers) along with convolutional layers to
upsample in the slice direction. MLP-SRGAN is trained and validated using high
resolution (HR) FLAIR MRI from the MSSEG2 challenge dataset. The method was
applied to three multicentre FLAIR datasets (CAIN, ADNI, CCNA) of images with
low spatial resolution in the slice dimension to examine performance on
held-out (unseen) clinical data. Upsampled results are compared to several
state-of-the-art SR networks. For images with high resolution (HR) ground
truths, peak-signal-to-noise-ratio (PSNR) and structural similarity index
(SSIM) are used to measure upsampling performance. Several new structural,
no-reference image quality metrics were proposed to quantify sharpness (edge
strength), noise (entropy), and blurriness (low frequency information) in the
absence of ground truths. Results show MLP-SRGAN results in sharper edges, less
blurring, preserves more texture and fine-anatomical detail, with fewer
parameters, faster training/evaluation time, and smaller model size than
existing methods. Code for MLP-SRGAN training and inference, data generators,
models and no-reference image quality metrics will be available at
https://github.com/IAMLAB-Ryerson/MLP-SRGAN. | Samir Mitha, Seungho Choe, Pejman Jahbedar Maralani, Alan R. Moody, April Khademi | 2023-03-11T04:05:57Z | http://arxiv.org/abs/2303.06298v1 | # MLP-SRGAN: A Single-Dimension Super Resolution GAN using MLP-Mixer
###### Abstract
We propose a novel architecture called MLP-SRGAN, which is a single-dimension Super Resolution Generative Adversarial Network (SRGAN) that utilizes Multi-Layer Perceptron Mixers (MLP-Mixers) along with convolutional layers to upsample in the slice direction. MLP-SRGAN is trained and validated using high resolution (HR) FLAIR MRI from the MSSEG2 challenge dataset. The method was applied to three multicentre FLAIR datasets (CAIN, ADNI, CCNA) of images with low spatial resolution in the slice dimension to examine performance on held-out (unseen) clinical data. Upsampled results are compared to several state-of-the-art SR networks. For images with high resolution (HR) ground truths, peak-signal-to-noise-ratio (PSNR) and structural similarity index (SSIM) are used to measure upsampling performance. Several new structural, no-reference image quality metrics were proposed to quantify sharpness (edge strength), noise (entropy), and blurriness (low frequency information) in the absence of ground truths. Results show MLP-SRGAN results in sharper edges, less blurring, preserves more texture and fine-anatomical detail, with fewer parameters, faster training/evaluation time, and smaller model size than existing methods. Code for MLP-SRGAN training and inference, data generators, models and no-reference image quality metrics will be available at [https://github.com/IAMLAB-Ryerson/MLP-SRGAN](https://github.com/IAMLAB-Ryerson/MLP-SRGAN).
Super resolution, upsampling, MRI, image quality, SRGAN, MLP-Mixer
## 1 Introduction
Fluid attenuation inversion recovery (FLAIR) MRI is used to diagnose neurological disease including dementia and cerebrovascular disease (CVD). 2D FLAIR acquisitions usually have thick slices, which limits direct comparison with other MRI sequences, or longitudinal scans. Moreover, inputs to deep-learning and registration tools may require specific spatial resolutions. In the past, bilinear and bicubic interpolation methods were used to change the spatial resolution of images. More recently, super resolution (SR) methods have been proposed for natural images that retain photorealism [1].
Convolutional neural networks (CNN) have been used to learn the mapping between the low- and high-resolution images for SR applications [2]. The perceptual output of SRCNN surpasses traditional methods such as bilinear and bicubic interpolation. More recently, generative adversarial networks (GAN) for image super-resolution (SRGAN)
have been gaining traction as they maintain photo-realism in natural images for \(4\times\) upscaling [1]. SRGAN [1] aims to recover finer texture details with a perceptual loss that combines adversarial and content losses. Enhanced SRGAN [3] introduced the Residual-in-Residual Dense Block without batch normalization and a relativistic GAN that lets the discriminator predict relative realness. It produces better visual quality with more realistic and natural textures than SRGAN and is the current state-of-the-art. SR has also been utilized in medical imaging with similar networks and loss functions [3] - [4], which can restore small structures (e.g., the septum pellucidum) [5] and better texture detail [4].
Although these methods show promise, photorealism, resolving fine details and maintaining texture remain problems when upscaling medical images with SR. Existing methods can create anatomical inaccuracies, blurring, smoothing, artefacts, etc. This is especially true for FLAIR MRI, where thick slices cause notable interpolation errors and blurring between object boundaries when upsampled. Existing networks also upscale in two dimensions, but for FLAIR MRI we are concerned with upsampling along the (single) slice dimension. Lastly, methods based strictly on CNNs have shortcomings: CNNs require a lot of data to train and can fail to encode position and orientation information, which may be important for retaining texture or fine details.
To overcome these challenges, we propose a novel architecture called MLP-SRGAN, which is a single-dimension SRGAN that utilizes Multi-Layer Perceptron Mixers (MLP-Mixers) along with convolutional layers to upscale FLAIR MRI volumes. This is the first time an MLP-Mixer is employed in an SR application. Convolution-free networks are gaining attention in computer vision applications to overcome CNN limitations [6][7]. The MLP-Mixer [6] architecture has been purposed as one such solution, and studies show multi-layered perceptrons are enough for visual learning. MLP-Mixer architectures are based entirely on MLPs repeatedly applied across either spatial locations or feature channels. They accept a sequence of linearly projected patches (tokens) that maintains dimensionality. Channel-mixing MLP communicates between channels, and token-mixing MLPs operate on each token independently. CNNs have been shown to have a dependence on spatial location, which can affect the outcome of vision applications [8]. The channel mixing functionality of the MLP-Mixer helps to reduce this spatial dependency introduced from convolutional layers. Combining MLP-Mixers with convolutional layers for vision tasks have been done before [7]. The benefit of using MLP-Mixers in conjunction with convolutional layers is a reduction of parameters and faster compute time, while still achieving similar performance to CNN-based solutions.
For FLAIR MRI we complete upsampling along the slice dimension to interpolate thick slices. We propose a novel block known as the Residual MLP-Mixer in Residual Dense Block, which is used in an SRResNet-like architecture to upscale images by \(4\times\) over a single dimension. We also propose a selective downsampling block, which uses convolutional layers to select relevant pixels from the fully upscaled images and provides intelligent anti-aliasing for a one-dimensional upscale. Results are compared to five popular deep learning-based SR methods and bicubic interpolation. Networks were tested on four multicentre FLAIR MRI datasets and results were compared using peak-signal-to-noise-ratio (PSNR), structural similarity index measure (SSIM), and three novel no-reference image metrics based on sharpness, entropy, and the low-frequency content of the discrete wavelet transform.
## 2 Methods
### Network Architecture
We propose a new architecture called MLP-SRGAN which contains a combination of MLP-Mixer blocks and convolutions in a generator network (Figure 1) and a CNN-based discriminator (Figure 2). The CNN-based discriminator network allows the generator network to converge faster during training and ensures outputs are photorealistic. The network accepts any input image resolution, and can render any output resolution, which permits us to take advantage of upscaling in a single dimension to address thick slices in FLAIR MRI.
#### 2.1.1 Generator
The generator network consists of MLP-Mixer blocks with a reshaping layer, a linear connecting layer, MLP encoder, followed by layer normalization, and finally another reshaping layer. A residual connection is added from the input of the MLP-Mixer block to the output to ensure information from lower layers is maintained which improves super resolution performance [3]. Three residual MLP-Mixer blocks are serially connected with a residual connection from the input of the first block to the output of the last block, to create the residual MLP in residual dense block (RMRDB). A varying number of RMRDB are connected together sequentially to form a SRResNet-like architecture, which is connected to convolution blocks and upsampling layers to create a \(4\times\) upscaled image. The \(4\times\) upscaled image is then input into a new block called the selective downsampling block, which provides for a highly flexible mechanism to control the output resolution.
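A minimal PyTorch sketch of one residual MLP-Mixer block and of the RMRDB is given below. It follows the generic MLP-Mixer layout of [6] (token mixing followed by channel mixing); the exact layer ordering, patch sizes and widths of Figure 1 are not reproduced, and the hidden dimensions are placeholders.

```python
import torch
import torch.nn as nn

class MLPMixerBlock(nn.Module):
    """One residual MLP-Mixer block: token-mixing MLP then channel-mixing MLP."""
    def __init__(self, num_tokens, channels, token_hidden=256, channel_hidden=512):
        super().__init__()
        self.norm1 = nn.LayerNorm(channels)
        self.token_mlp = nn.Sequential(nn.Linear(num_tokens, token_hidden), nn.GELU(),
                                       nn.Linear(token_hidden, num_tokens))
        self.norm2 = nn.LayerNorm(channels)
        self.channel_mlp = nn.Sequential(nn.Linear(channels, channel_hidden), nn.GELU(),
                                         nn.Linear(channel_hidden, channels))

    def forward(self, x):                        # x: (batch, tokens, channels)
        y = self.norm1(x).transpose(1, 2)        # mix information across tokens
        x = x + self.token_mlp(y).transpose(1, 2)
        x = x + self.channel_mlp(self.norm2(x))  # mix information across channels
        return x

class RMRDB(nn.Module):
    """Residual MLP-Mixer in Residual Dense Block: three mixer blocks plus a long skip."""
    def __init__(self, num_tokens, channels):
        super().__init__()
        self.blocks = nn.Sequential(*[MLPMixerBlock(num_tokens, channels) for _ in range(3)])

    def forward(self, x):
        return x + self.blocks(x)
```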
The selective downsampling block consists of a convolutional layer followed by leaky ReLU activation as shown in Figure 1. By strategically selecting kernel size and stride of the convolutional portion of the selective downsampling
block, the output resolution of the network can be intelligently downsampled by an integer value. By changing the kernel size and stride of the downsampling block, and changing the number of upsampling layers, the output resolution of the volume can be scaled to any value. The flexibility of the network allows for upscaling to higher resolutions by increasing the number of RMRDBs, upsampling layers, and selective downsampling layers respectively.
For this experiment, a kernel size of 5\(\times\)5 is chosen to ensure information from the upscaled low-resolution input is contained within each convolution operation. A stride of 2\(\times\)1 ensures the convolution output is scaled down by a factor of 2 in a single dimension for each selective downsampling layer. Using this kernel and stride size allows the network to achieve a 4\(\times\) upscale over the slice direction, thereby focusing the model on the thick slices, while still maintaining the original image quality in the other dimensions.
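A sketch of the selective downsampling block with these settings is given below; the channel count and the assignment of the 2\(\times\)1 stride to the first spatial axis are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class SelectiveDownsample(nn.Module):
    """One selective downsampling layer: a 5x5 convolution with stride 2 along a single
    spatial axis, followed by leaky ReLU, halving only the over-upscaled dimension."""
    def __init__(self, channels=64):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=5, stride=(2, 1), padding=2)
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x):
        return self.act(self.conv(x))

# Two such layers after a full 4x/4x upsampling stage leave a net 4x upscale along one axis only.
x = torch.randn(1, 64, 1024, 256)                 # illustrative fully upscaled feature map
block = nn.Sequential(SelectiveDownsample(), SelectiveDownsample())
print(block(x).shape)                             # torch.Size([1, 64, 256, 256])
```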
#### 2.1.2 Discriminator
The discriminator consists of convolutions, batch normalization, and leaky ReLU activations similar to [1], with the final dense layers and sigmoid activation replaced with a convolution layer to save memory.
### Loss Functions
The loss functions and loss function parameters used are similar to those in [3]. The generator uses perceptual loss, content loss, and adversarial loss
\[L_{gen}=\lambda_{1}L_{percep}+\lambda_{2}L_{content}+\lambda_{3}L_{adv} \tag{1}\]
Figure 1: MLP-SRGAN generator network. \(n\) refers to the number of filters in the layer, while \(s\) refers to stride size in the layer.
Figure 2: MLP-SRGAN discriminator network. \(n\) refers to the number of filters in the layer, while \(s\) refers to stride size in the layer.
where \(\lambda_{1}=1\), \(\lambda_{2}=0.01\), and \(\lambda_{3}=0.005\). The perceptual loss uses features from the last convolution layer of VGG19. Mean absolute error is computed from features of the high resolution image \(I_{HR}\) and generated super resolution image \(I_{SR}\) to create a perceptual loss
\[L_{percep}=\sum_{i=1}^{N}\frac{|vgg19(I_{HR})-vgg19(I_{SR})|}{N} \tag{2}\]
where \(vgg19\) is the VGG19 feature extraction network. The content loss is the mean absolute error between the pixel values of the original high resolution image \(I_{HR}\) and the generated super resolution image \(I_{SR}\). The adversarial loss for the generator is the binary cross entropy of 1 minus the relativistic average discriminator output, while the discriminator loss is the binary cross entropy of the discriminator output:
\[L_{adv}=-\log(1-D(I_{HR},I_{SR}))-\log(D(I_{SR},I_{HR})) \tag{3}\] \[L_{discr}=-\log(D(I_{HR},I_{SR}))-\log(1-D(I_{SR},I_{HR})) \tag{4}\]
where \(D\) is the discriminator network. The generator and discriminator are trained alternately during each iteration of training.
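A hedged sketch of the combined generator objective in Eq. (1), using the standard relativistic-average form for Eq. (3), is shown below; `disc` and `vgg_feat` are assumed callables and are not the exact modules of the released code.

```python
import torch
import torch.nn.functional as F

def generator_loss(disc, vgg_feat, I_sr, I_hr, l1=1.0, l2=0.01, l3=0.005):
    """Eq. (1): weighted sum of perceptual (Eq. 2), content and adversarial (Eq. 3) losses."""
    percep = F.l1_loss(vgg_feat(I_sr), vgg_feat(I_hr))     # MAE between VGG19 features
    content = F.l1_loss(I_sr, I_hr)                        # MAE between pixel intensities
    d_hr, d_sr = disc(I_hr), disc(I_sr)                    # raw discriminator scores
    D_hr_sr = torch.sigmoid(d_hr - d_sr.mean())            # relativistic average scores
    D_sr_hr = torch.sigmoid(d_sr - d_hr.mean())
    eps = 1e-8
    adv = -torch.log(1.0 - D_hr_sr + eps).mean() - torch.log(D_sr_hr + eps).mean()
    return l1 * percep + l2 * content + l3 * adv
```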
### Image Quality Metrics
This section presents the image quality metrics used in this work. Let \(I(x,y)\) represent an image where \((x,y)\in Z^{2}\) are the spatial coordinates and \(L\) is the number of graylevels with \(I\in[0,L-1]\). The human perceptual metrics are the PSNR and SSIM, and the structural image quality metrics are Shannon's Entropy, Sharpness and Wavelet Low (Blurriness). The equations for each metric are summarized below.
\begin{tabular}{|c|c|} \hline Name & Metric \\ \hline Peak Signal-to-Noise-Ratio (PSNR) & \(PSNR=10\cdot\log_{10}\left(\frac{MAX_{I}^{2}}{MSE}\right)\) \\ \hline Structural Similarity Index Measure (SSIM) & \(SSIM=\frac{\left(2\mu_{I_{1}}\mu_{I_{2}}+c_{1}\right)\left(2\sigma_{I_{1}I_{2}}+c_{2}\right)}{\left(\mu_{I_{1}}^{2}+\mu_{I_{2}}^{2}+c_{1}\right)\left(\sigma_{I_{1}}^{2}+\sigma_{I_{2}}^{2}+c_{2}\right)}\) \\ \hline Shannon's Entropy & \(H(I)=-\sum_{i=0}^{L-1}p\left(i\right)\log p\left(i\right)\) \\ \hline Sharpness & \(IS(I)=\frac{1}{N}\sum_{x,y}H_{sobel}(x,y)*I(x,y)\) \\ \hline Blurriness (Wavelet Low) & \(Low(I)=\sum_{s=1}^{5}\sum_{x,y}\frac{A_{s}^{2}(x,y)}{N}\) \\ \hline \end{tabular}
* **PSNR**: \(MAX_{I}\) represents the maximum intensity given the image's bit depth, while \(MSE\) is the mean squared error between the generated and ground truth images.
* **SSIM**: (\(\mu_{I_{1}}\),\(\mu_{I_{2}}\)) and (\(\sigma_{I_{1}}^{2}\), \(\sigma_{I_{2}}^{2}\)) are the mean and variance of the generated and ground truth images. \(\sigma_{I_{1}I_{2}}\) is the covariance between generated and ground truth images. \(c_{1}\) and \(c_{2}\) are constants that ensure the metric does not exceed 1.
* **Shannon's Entropy**: \(p(i)\) is the probability distribution (normalized number of occurrences) of intensity \(i\). Entropy measures the randomness or noise level in an image; a high value indicates more rapid intensity variations.
* **Sharpness**: \(H_{sobel}(x,y)\) is the 2D Sobel filter used for edge detection, \(*\) represents the convolution operation, and \(N\) is the total number of pixels in the image. High quality images have high contrast and sharp edges which, in turn, have large edge magnitudes and an overall high Sharpness metric.
* **Blurriness**: the low frequency band (approximation coefficients) of the DWT are specified by \(A_{s}(x,y)\), where \(s\) is the scale (decomposition level). Five levels (\(s=5\)) were used with a Daubechies wavelet. This metric considers the energy in the low frequency bands. In images with more blurring, there is higher energy in the low frequency approximations, and the Wavelet metric would be high.
## 3 Experiments
### Data
Training data is from the MSSEG2 challenge [9] and consists of 80 high-resolution FLAIR volumes of 256\(\times\)256\(\times\)256, with 0.9766mm\(\times\)0.9766mm\(\times\)0.5-1.0mm resolution from GE, Philips and Siemens scanners. Sixty-four (64) volumes are used for training/validation, and the remaining 16 are used for testing. Volumes are split into individual slices in the sagittal plane with blank slices removed for training, resulting in a total of 10,940 sagittal slices used to train each network. The original high-resolution (HR) slices are used as ground truth, and low-resolution (LR) images are created by downsampling the original images to 256\(\times\)64 with bicubic interpolation. All models undergo five-fold cross-validation so that all 80 volumes are used for testing without data leakage. One hundred (100) volumes randomly sampled from three additional multicentre clinical datasets are used as held-out, unseen datasets: the Canadian Atherosclerosis Imaging Network (CAIN) dataset [10], the Canadian Consortium of Neurodegeneration and Aging (CCNA) dataset [11], and the Alzheimer's Disease Neuroimaging Initiative (ADNI) [12]. Acquisition parameters are shown in Table 1. All volumes are intensity normalized [13].
### Training Details
To fairly compare with other super-resolution experiments, the proposed network performs a 4\(\times\) upscale. A batch size of 8 is used, with low-resolution inputs sized to 256\(\times\)64 and output images sized to 256\(\times\)256. The Adam optimizer with default PyTorch parameters, a learning rate of 2e-4 and a decay beginning at the 100th epoch is used. The network uses a pre-trained VGG19 network for the perceptual loss. Since this network requires a 3-channel input image, the input images are duplicated into 3 channels. The VGG19 pre-trained model requires images to be normalized to a specific intensity range, and the default VGG19 parameters are used: \(\mu_{R}=0.485\), \(\mu_{G}=0.456\), \(\mu_{B}=0.406\), and \(\sigma_{R}=0.229\), \(\sigma_{G}=0.224\), \(\sigma_{B}=0.225\). The network interpolation strategy proposed by [3] is used, where the generator network is first trained alone for 500 iterations using only the content loss. This warm-up strategy improves stability once the perceptual and adversarial losses are introduced with the discriminator, since the generator starts from weights that already maintain image intensity.
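The warm-up and alternating updates described above can be summarized by the schematic loop below; `gen`, `disc`, the optimizers, `loader` and the loss helpers are assumed to be defined elsewhere, and the loop is a sketch rather than the authors' exact training script.

```python
import torch.nn.functional as F

for it, (I_lr, I_hr) in enumerate(loader):
    I_sr = gen(I_lr)
    if it < 500:
        # Warm-up: train the generator alone with the content (L1) loss only.
        g_loss = F.l1_loss(I_sr, I_hr)
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()
        continue
    # Afterwards, alternate generator and discriminator updates every iteration.
    g_loss = generator_loss(disc, vgg_feat, I_sr, I_hr)           # Eq. (1), sketched earlier
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    d_loss = discriminator_loss(disc, gen(I_lr).detach(), I_hr)   # Eq. (4), assumed helper
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
```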
### Image Quality Metrics
Several image quality metrics are used to evaluate upsampling performance. Peak Signal-to-Noise-Ratio (PSNR) and Structural Similarity Index Measure (SSIM) are used as they are the traditional metrics in the literature. We also define three new no-reference metrics to quantify noise/randomness, sharpness, and blurriness (Appendix A: Table 7). The first metric uses Shannon's entropy to measure noise/randomness in the upscaled images - a random image would have higher entropy. The second metric is Sharpness, which measures the average magnitude of the image's edge content from the Sobel gradient - an image with high contrast and sharp edges would have a large Sharpness metric. The third metric measures the energy of the low-frequency bands from the wavelet decomposition to quantify image blurriness - blurry images have high wavelet energy.
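A possible implementation of the three no-reference metrics is sketched below; the histogram binning, the gradient-magnitude form of Sharpness and the Daubechies order are one reading of the definitions above, not the exact released code.

```python
import numpy as np
from scipy import ndimage
import pywt

def entropy(img, bins=256):
    """Shannon entropy of the intensity histogram (noise/randomness)."""
    counts, _ = np.histogram(img, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

def sharpness(img):
    """Mean Sobel gradient magnitude (edge strength)."""
    gx = ndimage.sobel(img.astype(float), axis=0)
    gy = ndimage.sobel(img.astype(float), axis=1)
    return float(np.hypot(gx, gy).mean())

def wavelet_low(img, levels=5, wavelet="db2"):
    """Normalized energy of the approximation (low-frequency) bands over 5 scales (blurriness)."""
    total, x = 0.0, img.astype(float)
    for _ in range(levels):
        x, _details = pywt.dwt2(x, wavelet)      # keep only the approximation coefficients
        total += float((x ** 2).sum()) / img.size
    return total
```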
## 4 Results
Performance of MLP-SRGAN was compared to bicubic interpolation, EDSR [14], WDSR [15], SRGAN [1], ESRGAN [3], and SRCNN [2]. MLP-SRGAN was tested with a varying number of RMRDBs, denoted (D-N), where N is the number of RMRDBs in the network. MLP-SRGAN (D-1) was also tested without a discriminator, using only the generator and excluding the adversarial loss. Training time, evaluation time, model size, and trainable parameters are shown in Table 2. MLP-SRGAN has a lower training time and approximately the same inference time as ESRGAN. Increasing the number of RMRDBs increases the number of parameters and the training/inference time. To determine whether performance differs between two models, paired t-tests are performed on log-transformed metrics.
See Figure 3 for generated images from select models on the MSSEG test data. Generated images for all methods and datasets are shown in Figure 10. Images generated by MLP-SRGAN show noticeably sharper edge content (less blur) and more texture compared to the other methods, including ESRGAN. There are differences in small anatomical structures between ESRGAN and MLP-SRGAN. As shown in the sulci/gyri (top in Figure 3), MLP-SRGAN more accurately represents the anatomical details of the original image and provides high-quality upsampling of fine details. Using MLP-SRGAN, fine details of the cerebellum (bottom) closely follow the original ground truth image, including shape, edges and texture. The ESRGAN output shows that fine details are lost, blur is increased and there is lower shape and texture correspondence with the original. The high-quality generation of fine details, anatomical accuracy, and improved texture indicate the MLP-based model is retaining small structures.
\begin{table}
\begin{tabular}{||c|c|c|c|c|c|c||} \hline & \multicolumn{6}{c||}{**Acquisition Parameters**} \\ \hline
**Database** & **GE/Phil/Siem** & **TR (ms)** & **TE (ms)** & **TI (ms)** & **XY (mm)** & **Slice (mm)** \\ \hline MSSEG2 & N/A & N/A & N/A & N/A & 0.9766 & 0.5-1 \\ \hline CAIN & 27/24/49 & 9000-11000 & 117-150 & 2200-2800 & 0.4285-1 & 3-5 \\ \hline ADNI & 23/15/62 & 9000-11000 & 90-150 & 2250-2500 & 0.8594 & 2-5 \\ \hline CCNA & 10/16/74 & 9000-10000 & 115-145 & 2250-2500 & 0.9375 & 3-6 \\ \hline \end{tabular}
\end{table}
Table 1: Dataset acquisition parameters. All data is 3T.
Performance metrics for the in-distribution MSSEG2 dataset are shown in Table 4 (averaged over five folds). The distributions of all metrics are shown in Figures 7 and 8. The ground truth metrics are computed from the high-resolution ground truth images and serve as gold standards for PSNR, SSIM, edge quality (Sharpness), noisiness (Entropy) and blurriness (Wavelet). For perfect upsampling, the metrics from the generated images would be identical to those from the ground truths. MLP-SRGAN without the discriminator performed the worst (blur (Wavelet) similar to bicubic, highest randomness (Entropy), and Sharpness most dissimilar from the gold standard). As a result, MLP-SRGAN (No Discr) is not considered further. Images generated by MLP-SRGAN have image quality metrics closest to the gold standard, shown in bold in the table (and as the red line in the box plots). This shows the discriminator helps to resolve fine details and improve the realness of the images.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \hline & \multicolumn{2}{c|}{**Train**} & \multicolumn{1}{c|}{**Eval.**} & \multicolumn{1}{c|}{**Trainable**} & \multicolumn{1}{c|}{**GAN**} \\
**Method** & **(hh:mm:ss)** & **Time (s)** & **Size (MB)** & **Params** & **GAN** \\ \hline
**Bicubic** & 0 & 0.002 & 0 & 0 & N \\ \hline
**EDSR** & 20:50:22 & 0.006 & 159.9 & 41,900,545 & N \\ \hline
**WDSR** & 3:25:30 & 0.009 & 2.8 & 700,350 & N \\ \hline
**SRCNN** & 1:12:39 & 0.0028 & 0.226 & 57,281 & N \\ \hline
**SRGAN** & 11:11:25 & 0.0048 & 6.0 & 6,244,183 & Y \\ \hline
**ESRGAN** & 52:18:22 & 0.0317 & 147.3 & 43,242,820 & Y \\ \hline
**MLP-SRGAN (D-1)** & 23:17:52 & 0.0144 & 79.9 & 25,595,460 & Y \\ \hline
**MLP-SRGAN (D-3)** & 32:34:22 & 0.0376 & 253.9 & 66,391,236 & Y \\ \hline
**MLP-SRGAN (D-5)** & 42:15:06 & 0.0605 & 392.0 & 107,187,012 & Y \\ \hline
**MLP-SRGAN (D-1) No D** & 18:12:26 & 0.0151 & 79.9 & 20,901,763 & N \\ \hline \hline \end{tabular}
\end{table}
Table 2: Time and space complexity of networks tested in the five fold cross validation. The MLP-SRGAN has significantly lower training time compared to ESRGAN.
Figure 3: Visual comparison of clinically relevant regions from MSSEG2 FLAIR (holdout).
Top models are MLP-SRGAN (D-1) and MLP-SRGAN (D-3). Since MLP-SRGAN (D-1) is more computationally efficient, we choose to further analyze this model and benchmark it against ESRGAN (the current state-of-the-art SR method). Metrics from Table 4 show images generated from MLP-SRGAN (D-1) are more similar to ground truth (smallest metric difference) than those from ESRGAN. T-tests were used to compare metrics between MLP-SRGAN and ESRGAN, and p-values are shown in Table 3. There were statistically significant differences between MLP-SRGAN and ESRGAN (\(p<0.05\)) for SSIM, PSNR, and Entropy, indicating a significant improvement in image quality as measured by these metrics. These trends are supported by the histograms of the metrics in Figure 5 and Figure 6. We hypothesize that the fine anatomical detail and texture preservation (which we observed visually), combined with sharper edges, less blurriness and more smoothness, contribute to these differences. There were no differences between MLP-SRGAN and ESRGAN with respect to the Sharpness or Wavelet measures (they have overlapping histograms). MLP-SRGAN (D-1) achieves these results with fewer parameters, 2.26\(\times\) faster training time, 2.20\(\times\) faster evaluation time, and 0.54\(\times\) the model size compared to ESRGAN.
The MLP-SRGAN (D-1), ESRGAN and bicubic interpolation methods are evaluated further on held-out (unseen) clinical datasets (CAIN, ADNI, CCNA). The clinical datasets do not have HR ground truths; therefore, we cannot compute PSNR or SSIM. Instead, we use the no-reference metrics Entropy, Sharpness and Wavelet, and the distributions are shown in Figure 4. T-tests comparing MLP-SRGAN to ESRGAN are shown in Table 6 for ADNI, Table 5 for CAIN and Table 7 for CCNA. MLP-SRGAN (D-1) has similar Sharpness to ESRGAN, with lower Entropy and Wavelet metrics. Lower Entropy indicates smoother textures (less noise/randomness) in the images upscaled by MLP-SRGAN (D-1). There are significant differences in Sharpness, Entropy and Wavelet between MLP-SRGAN and ESRGAN for the CAIN and CCNA datasets. In ADNI, some metrics were similar, including Wavelet and Sharpness, but Entropy differed. Entropy was also significantly lower and closer to the ground truth on MSSEG, indicating it may be a useful metric for future studies of texture differences between images. A lower Wavelet feature for MLP-SRGAN (D-1) indicates lower energy in the low-frequency bands (less blurriness), which can be attributed to the intelligent upsampling along the (thick) slice dimension; however, this metric was statistically the same as for ESRGAN, so the amount of blur may be similar. Since these are global metrics, local differences may be hard to quantify. In the future, we will examine metrics that analyze more local features, fine anatomical structures and texture.
Figure 4: Image quality metrics between MLP-SRGAN (D-1) and ESRGAN.
Figure 5: Perceptual image quality metrics PSNR and SSIM on MSSEG data.
performance (\(p<0.05\)) on testing sets, with faster training and evaluation times compared to state-of-the-art methods such as ESRGAN. Visual analysis shows better retention of texture and small anatomical details, with less blurring and noise, and higher-quality edges in images generated from MLP-SRGAN. We hypothesize the MLP-Mixer blocks are able to retain fine features by learning mappings from raw image pixels that preserve spatial dependency (compared to CNNs, which can fail to encode position and orientation information).
## Acknowledgments
We acknowledge the Natural Sciences and Engineering Research Council (NSERC) of Canada (Discovery Grant), Alzheimer's Society Research Program (ASRP) (New Investigator Grant) and the Ontario Government (Early Researcher Award) for funding this research.
Figure 6: Structural image quality metrics Sharpness, Entropy and Wavelet on MSSEG. |